Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps
From the Publisher
What are the key takeaways you want readers to get from this book?
In this book, you'll learn about tools and techniques using Python to visualize, explain, and integrate trustworthy AI results to deliver business value, while avoiding common issues with AI bias and ethics.
You'll also work through hands-on machine learning projects in Python and TensorFlow 2.x, and learn how to use WIT, SHAP, and other key explainable AI (XAI) tools, along with those designed by IBM, Google, and other advanced AI research labs.
Two of my favorite concepts that I hope readers will also fall in love with are:
- The fact that XAI can pinpoint the exact feature(s) that led to an output, using methods such as SHAP, LIME, Anchors, CEM, and the other XAI techniques covered in this book
- Ethics - we can finally scientifically pinpoint discrimination and eradicate it!
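To make the first point concrete, here is a minimal, library-free sketch of the Shapley-value idea that underlies SHAP: a feature's attribution is its average marginal contribution across all coalitions of the other features, with absent features set to a baseline. The two-feature model and its weights are made up purely for illustration; they are not from the book.

```python
from itertools import combinations
from math import factorial

# Hypothetical additive model: f(income, age) = 2*income + 3*age
# (weights invented for illustration only).
def model(x):
    return 2.0 * x["income"] + 3.0 * x["age"]

def shapley_values(model, x, baseline):
    """Exact Shapley attribution: average each feature's marginal
    contribution over all coalitions of the remaining features,
    replacing absent features with their baseline values."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in coalition or g == f) else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in coalition else baseline[g]
                             for g in features}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

x = {"income": 5.0, "age": 4.0}        # instance to explain
baseline = {"income": 0.0, "age": 0.0}  # reference point
phi = shapley_values(model, x, baseline)
print(phi)  # {'income': 10.0, 'age': 12.0}
```

Note the key property that makes Shapley values attractive for explanation: the attributions sum exactly to `model(x) - model(baseline)`, so every unit of the prediction is accounted for by some feature.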
Finally, I would want readers to understand that it is an illusion to think that anybody can understand the output of an AI program that contains millions of parameters by just looking at the code and intermediate outputs.
What are the main tools used in the book?
The book shows you how to implement two essential tools to detect problems and bias: Facets and Google's What-If Tool (WIT). With these tools, you'll learn to find, display, and explain bias to the developers and users of an AI project.
In addition to this, you'll use the knowledge and tools you've acquired to build an XAI solution from scratch using Python, TensorFlow, Facets, and WIT.
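The kind of disparity that Facets and WIT surface visually can be sketched numerically as a per-group comparison of model outcomes. The tiny dataset below is hand-made for illustration (it is not from the book): each record pairs a protected-group label with a binary model prediction, and a large gap in positive rates between groups is a signal worth investigating.

```python
# Illustrative hand-made records: (group_label, model_prediction).
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def positive_rate(records, group):
    """Fraction of records in `group` that received a positive prediction."""
    preds = [p for g, p in records if g == group]
    return sum(preds) / len(preds)

rate_a = positive_rate(records, "A")
rate_b = positive_rate(records, "B")
gap = rate_a - rate_b
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, gap: {gap:.2f}")
# group A: 0.75, group B: 0.25, gap: 0.50
```

A gap this large (0.50) would prompt exactly the follow-up the book teaches: inspecting the training data and features in Facets or WIT to determine whether the disparity is justified or a sign of built-in bias.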
We often isolate ourselves from reality when experimenting with machine learning (ML) algorithms. We take the ready-to-use online datasets, use the algorithms suggested by a given cloud AI platform, and display the results as we saw in a tutorial we found on the web.
However, by only focusing on what we think is the technical aspect, we miss a lot of critical moral, ethical, legal, and advanced technical issues. In this book, we will enter the real world of AI with its long list of XAI issues, using Python as the key language to explain concepts.
"Interpretability and explainability are key considerations beyond predictive accuracy for building trust in machine learning systems for high-stakes applications. There are many different ways of explaining that are relevant for different use cases and personas who consume the explanations. Denis Rothman has done a good job in providing step-by-step tutorial examples in Python to provide an entrée into this important topic, focusing on one of the ways of explaining: post hoc local explanations."--
Kush R. Varshney, Distinguished Research Staff Member and Manager, Foundations of Trustworthy AI, IBM Research
"Hands-On Explainable AI (XAI) with Python is a timely book on a complex subject, and it fulfills its promise. The book covers the whole spectrum, i.e., XAI for types of users, XAI for phases of a project, legal issues, data issues, etc. It also covers techniques like LIME, SHAP from Microsoft, and WIT from Google, and also explores implementation scenarios like healthcare, self-driving cars, etc. There is a lot to learn from this book, both in breadth and depth, and it's a recommended read."--
Ajit Jaokar, Principal Data Scientist/AI Designer, Feynlabs.ai, and Director, FutureText
"The timing of Denis Rothman's book Hands-on Explainable AI (XAI) with Python is perfect. Not only does the book provide a solid overview of the XAI concepts and challenges necessitated by XAI, but it is a perfect catalyst for those data scientists who want to get their hands dirty exploring different XAI techniques."--
Bill Schmarzo, Dean of Big Data, Author of The Economics of Data, Analytics, and Digital Transformation
"Hands-on Explainable AI (XAI) with Python covers XAI white box models for the explainability and interpretability of algorithms with transparency for the accuracy of predictable outcomes and results from XAI applications keeping ethics in mind. Denis Rothman shows how to install LIME, SHAP, and WIT tools and the ethical standards to maintain balanced datasets with best practices and principles. The book is a recommended read for data scientists."--
Dr. Ganapathi Pulipaka, Chief Data Scientist, Chief AI HPC Scientist, DeepSingularity
About the Author
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2vector embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots applied as a language teacher for Moët et Chandon and other companies. He has also authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution that is used worldwide. Denis is an expert in explainable AI (XAI), having added interpretable mandatory, acceptance-based explanation data and explanation interfaces to the solutions implemented for major corporate aerospace, apparel, and supply chain projects.
- Publisher : Packt Publishing (July 31, 2020)
- Language : English
- Paperback : 454 pages
- ISBN-10 : 1800208138
- ISBN-13 : 978-1800208131
- Item Weight : 1.71 pounds
- Dimensions : 7.5 x 1.03 x 9.25 inches
Top reviews from the United States
Yet, for such a burgeoning field there is a dearth of resources when it comes to explainable AI. Part of the problem is that Machine Learning tools have become so refined and user-friendly that it is no longer necessary to have a good understanding of the core principles before calling an API and making predictions; just about any non-technical person could be trained to carry out a few easy steps to get low-hanging results. Thus, the demand is pretty high for introductory texts that walk the users through the many techniques for handling data, training models, and presenting predictions.
However, when it comes to unearthing the insights that lead to such predictions, the literature is sadly lacking. It used to be the case that if you needed to train an easily explainable model, you could get away with some sort of regression or decision-tree based approach. But the quantity and complexity of data in recent times have led us into the territory of "black-box models": Neural Networks and Gradient-boosted trees. While these are very powerful with very intricate architecture that can handle everything from images, text, video, sound, to even creative processes, they are not easily explainable.
More and more countries are recognizing the uses and misuses of AI tools, calling for legislation to rein in the scope and manner in which these tools are applied, and rightfully questioning the process by which they were designed and whether their creators have taken full account of the possible consequences. In the USA alone, SR11-7 has been written with an eye to curbing model risk. In the EU, you need look no further than the GDPR ("General Data Protection Regulation"). Among their chief concerns is the issue of built-in bias.
So, the days are coming to an end when you could easily build and deploy models, hiding behind their accuracy as long as they got you the results you wanted.
That's why I think this book is a bit of a gem: it gets the ball rolling in training ML practitioners not only in recognizing the need to explain AI models but, more importantly, in giving them the tools to do so.
I wish I had a book like this two, or even one year ago when I was developing explainers for Anomaly Detection and Neural networks for the financial sector.
It is both highly accessible and authoritative in its survey of methods for extracting root-cause-level intelligence from the predictions effected by the models. It is very current as well: SHAP, LIME, Google's What-If Tool, and more are discussed with several illuminating examples to help the reader grasp the concepts and practice.
Even better, the author does not hide behind the same datasets used over and over again in every ML text, such as MNIST, CIFAR, Boston Housing, Titanic, etc. This alone is so refreshing, and it made the book such a pleasure to read. I wish more authors would follow his lead.
Overall, I wholeheartedly recommend this book to any ML/AI practitioner looking to understand their models and data better and, more importantly, to those looking to future-proof their organization's AI capital.
This is a book I've been waiting for for a few years. Explainable AI (XAI) is the next step in artificial intelligence theory and practice. In this book, Denis Rothman explores the currently available technologies of XAI and discusses the theory behind them as well as the legal hurdles they will help us cross. The technologies range from straight Python to various offerings from Microsoft and Google.
Who is This For?
The preface gives a wide range of potential readers, but I think the author pulls this off. You can easily read the theory without getting bogged down by the code, or you could work through the examples and have a good basic knowledge to apply the theory to your next project.
Why Was This Written?
Explainable AI is still a very new subfield of AI and there are very few texts written about it. Rothman came through at just the right time with this book. AI cannot progress much further if it continues to be a black box.
There is no overall organization to this book, but this is a fairly new field, so that's understandable. There is a nice flow that makes sure that a new topic is introduced cleanly before being used to extend another technique.
The microstructure is well suited for this type of book. Each chapter has a summary, questions, references, and further reading. Given the amount of theory in this book, the questions (largely true or false) are a useful aid in recall. The further reading section is very welcome to extend the reader's knowledge even deeper.
Did This Book Succeed?
I can easily say that, yes, the author reached his stated goals. This is a book that any serious AI theorist or practitioner should have in their library. Any student of AI should read through this book and practice the exercises to be relevant in the field. I hope to add a physical copy of this book to my library in the near future.
Rating and Final Thoughts
This is the book that Denis Rothman needed to write. I was very critical of his last book, but I knew that he had a lot of knowledge and understanding to contribute to the field. I am very pleased to say that this is it. Rothman pushes our understanding of AI forward, in more ways than one.
I am happy to give this book a 5 out of 5 and look forward to Rothman's next book.
The structure of the book is especially useful, covering some mathematical foundations of the problems and solutions as well as the step-by-step implementation of each technique with Python libraries.
Examples and tools are chosen to cover the essential areas where explanations can benefit AI-based systems.
The book starts by introducing XAI as a potential solution in safety-critical applications of AI such as medical diagnosis, self-driving cars, and autopilot systems.
After introducing the Google Facets visualization tool, the author uses it to analyze training data from ethical and legal perspectives.
As for interpretability techniques, the book covers multiple model-agnostic methods, including SHAP, LIME, Anchors, and Google's What-If Tool, which can generate interpretations from any black-box model.
The author provides examples from different data domains like images, text, and tabular data in each chapter.
I strongly recommend this book to those who are eager to learn about the broad spectrum of XAI and how it can be used to build more accountable AI.
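The model-agnostic techniques mentioned in these reviews share a common idea that is easy to sketch without any library. Below is a minimal, library-free illustration of the local-surrogate principle behind LIME (this is not the LIME library's API): perturb an instance, query the black-box model on the perturbations, and fit a distance-weighted linear model whose coefficients act as a local explanation. The black-box function and all parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in "black-box" model; deliberately nonlinear.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                       # instance to explain
X = x0 + rng.normal(scale=0.1, size=(500, 2))   # local perturbations
y = black_box(X)

# Distance-based sample weights: perturbations close to x0 count more.
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.01)

# Weighted least-squares fit of intercept + linear terms around x0.
A = np.column_stack([np.ones(len(X)), X - x0])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
slopes = coef[1:]
print(slopes)  # local slopes, close to [cos(0.5), 2.0]
```

The fitted slopes approximate the model's local sensitivities at `x0` (here, near `cos(0.5) ≈ 0.88` and `2.0`), which is exactly the kind of "which feature mattered, and by how much, for this prediction" answer that tools like LIME and SHAP package up with proper sampling, kernels, and visualization.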