Top positive review
Start here - the best resource on XAI
Reviewed in the United States on August 31, 2020
If you're looking to buy this book, then I don't need to tell you about the explosion in ML and AI-related applications across many industries over the last decade: e-commerce, streaming services, automobiles, finance, imaging, virtual assistants, etc.
Yet, for such a burgeoning field there is a dearth of resources when it comes to explainable AI. Part of the problem is that machine learning tools have become so refined and user-friendly that a solid grasp of the core principles is no longer necessary before calling an API and making predictions; just about any non-technical person could be trained to carry out a few easy steps and get serviceable results. Thus, demand is high for introductory texts that walk users through the many techniques for handling data, training models, and presenting predictions.
However, when it comes to unearthing the insights behind those predictions, the literature is sadly lacking. It used to be that if you needed an easily explainable model, you could get away with some sort of regression or decision-tree approach. But the quantity and complexity of today's data have pushed us into the territory of "black-box" models: neural networks and gradient-boosted trees. While these are very powerful, with intricate architectures that can handle everything from images, text, video, and sound to even creative processes, they are not easily explained.
More and more countries are recognizing the uses and misuses of AI tools and calling for legislation to rein in the scope and manner in which they are applied. Regulators are rightfully questioning how these tools were designed and whether their creators have taken full account of the possible consequences. In the USA, SR 11-7 was written with an eye toward curbing model risk; in the EU, you need look no further than the GDPR (General Data Protection Regulation). Among their chief concerns is the issue of built-in bias.
So the days are coming to an end when you could build and deploy models freely and hide behind their accuracy as long as they got you the results you wanted.
That's why I think this book is a bit of a gem: it gets the ball rolling by training ML practitioners not only to recognize the need to explain AI models but, more importantly, by giving them the tools to do so.
I wish I had had a book like this a year or two ago, when I was developing explainers for anomaly detection and neural networks in the financial sector.
It is both highly accessible and authoritative in its survey of methods for extracting root-cause-level insight into the predictions produced by the models. It is very current as well: SHAP, LIME, Google's What-If Tool, and more are discussed, with several illuminating examples to help the reader grasp the concepts and put them into practice.
Even better, the author does not hide behind the same datasets used over and over in every ML text, such as MNIST, CIFAR, Boston Housing, and Titanic. This alone is so refreshing, and it made the book such a pleasure to read. I wish more authors would follow his lead.
Overall, I wholeheartedly recommend this book to any ML/AI practitioner looking to understand their models and data better and, more importantly, to those looking to future-proof their organization's AI capital.