Top positive review
Best single non-technical book to read about where AI is and where it should be going
September 22, 2019
While there has been significant and important progress in artificial intelligence, there has also been a tidal wave of hype. Pundits hyperventilate about super-intelligent systems, and less discerning (and in a few cases, less scrupulous) researchers overstate progress in ways that result in misleading media stories. Marcus and Davis' new book, Rebooting AI, is a perfect corrective to such hype. They do an excellent job of describing where the field actually is, the strengths and weaknesses of some current technologies, where they think the field is going wrong, and what we should be doing instead. Most importantly, it is written for the general reader, lavishly illustrated with many examples and a good dose of humor where appropriate. If you want one book to read to catch up on where AI is and where it should be going, this is the book for you.
Here is a chapter-by-chapter breakdown. Chapter 1 does an excellent job of laying out the basic argument: today's AI systems are narrow, and only by moving beyond the big-data/statistical-learning focus of much of today's work will we achieve flexible AI systems. The discussion of overattribution, illusory progress, and the robustness gap is especially useful for understanding the difference between what often gets reported and where the state of the art actually is. Demonstrations and laboratory experiments are (hopefully) on the path to robust technologies, but the distance is often not clear to outsiders. Chapter 2 explains why the problems with today's AI technologies matter, focusing mostly on the bias found in machine learning systems.
Chapter 3 dissects deep learning, the revolution in AI that everyone knows about, thanks both to real progress and to media attention. (There are two others, as noted below.) They provide a non-technical overview of neural networks and deep learning, and point out both their strengths and their weaknesses in a balanced way. Many who have only read popular press accounts of deep learning will find the examples and arguments about brittleness surprising, but the phenomena are quite replicable. My only fault with Chapter 3 is that the picture it paints of modern AI is a bit oversimplified, even for this level of discussion. There are two other revolutions in AI. The first is knowledge graphs, where structured, relational representations straight out of the classic AI playbook have been applied to many tasks (mostly via semantic web technologies), and at industrial scale. Google and Microsoft both use billion-fact knowledge graphs in their search engines and other products, for example, and the technology is spreading quickly (even Spotify has its own knowledge graph). The second is high-performance reasoning systems, where satisfiability solvers are part of the constraint solvers used every day by logistics companies and other industrial concerns for planning and scheduling. (Marcus and Davis do bring up one line of this revolution, model checking, on page 187.) I can see why, rhetorically, focusing only on deep learning makes sense for them: it simplifies the main argument considerably. On the other hand, these other two revolutions lend credence to their call for revisiting ideas from classical AI. A common claim by neural network modelers has always been that symbolic representations and reasoning over them cannot scale, but the same rising tide of massive data and computation that lifted deep learning has also lifted work in knowledge representation and reasoning, although it has not received the same attention that deep learning has.
So to my mind, these other revolutions make the approach argued for in Chapter 7 even stronger.
Chapters 4 and 5 dissect the state of the art in machine reading and robotics, two areas where there is an astonishing amount of hype. Their examples do an excellent job of pointing out what can and cannot be done today, and just how far we are from systems that can read as humans do, or operate in the physical world the way we do.
Chapters 6 and 7 chart their alternative course. Chapter 6 provides a capsule summary of the kinds of insights that AI could be taking from other areas of cognitive science. It is a sad comment on the current state of AI education that many of the eleven hard-won insights listed here will be news to many of today's graduate students and even some AI practitioners. Chapter 7 sketches some ideas about common sense. They carefully walk readers through some basic ideas about knowledge representation, to convey both the power and the pitfalls, and argue that time, space, and causality are the three key areas to focus on. As with Chapter 3, much more could be said -- and Davis has written an excellent book about this, albeit for a technical audience -- but the key thing is that you will come out of this chapter with a good sense of the overall approach.
Chapter 8 is about trust and its relationship with good engineering practices. They do a fine job of outlining the basics of software development that are relevant to understanding how people build safe and reliable software. Their handling of ethical questions is very sensible.
To summarize: this is an excellent non-technical book that debunks the hype around AI while pointing out both the real progress made and the daunting open questions that remain on the road to understanding how to build intelligent systems with human-like flexibility and breadth. If you are interested in AI, or its possible impacts, you should read it.