Implement Solutions to Model Explainability and Interpretability with Python
Book, English, 254 pages, format (W × H): 155 mm × 235 mm, weight: 429 g
ISBN: 978-1-4842-9028-6
Publisher: Apress
The book starts with model interpretation for supervised learning linear models, covering feature importance, partial dependence analysis, and influential data point analysis for both classification and regression models. Next, it explains supervised learning with non-linear models and state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time series models is covered using LIME and SHAP, as are natural language processing tasks such as text classification and sentiment analysis with ELI5 and ALIBI. The book concludes with complex classification and regression models such as neural networks and deep learning models, using the Captum framework to show feature attribution, neuron attribution, and activation attribution.
After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.
What You Will Learn
- Create code snippets and explain machine learning models using Python
- Leverage deep learning models using the latest code with agile implementations
- Build, train, and explain neural network models designed to scale
- Understand the different variants of neural network models
AI engineers, data scientists, and software developers interested in XAI
Target Audience
Professional/practitioner
Further Information & Material
Chapter 1: Introduction to Explainability Library Installations
Chapter 2: Linear Supervised Model Explainability
Chapter 3: Non-Linear Supervised Learning Model Explainability
Chapter 4: Ensemble Model for Supervised Learning Explainability
Chapter 5: Explainability for Natural Language Modeling
Chapter 6: Time Series Model Explainability
Chapter 7: Deep Neural Network Model Explainability