Top positive review
A brilliant guide to the massive AI revolution headed our way
August 29, 2017
The first chapter of Tegmark’s new book is called “Welcome to the most important conversation of our time,” and that’s exactly what this book is. Before diving into the book, a few words about why this conversation is so important, and why Tegmark – a central agent in making it happen – is, through the book, the perfect guide.
Have you noticed how you don’t “solve” CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) anymore? That’s because computers now can. Artificial intelligence, a fairly niche area of mostly academic study a decade ago, has exploded in the last five years. Much more quickly than many anticipated, machine-learning (a subset of AI) systems have defeated the best human Go players, are piloting self-driving cars, usefully if imperfectly translating documents, labeling your photos, understanding your speech, and so on. This has led to huge investment in AI by companies and governments, with every sign that progress will continue. This book is about what happens if and when it does.
But why hear about it from Tegmark, an accomplished MIT physicist and cosmologist, rather than (say) an AI researcher? First, Tegmark has over the past few years *become* an AI researcher, with five technical papers published in the past two years. But he’s also got a lifetime of experience thinking carefully, rigorously, generally (and entertainingly to boot) about the “big picture” of what is possible, and what is not, over long timescales and cosmic distances (see his last book!) – which most AI researchers do not. Finally, he’s played an active and very key role (as you can read about in the book’s epilogue) in actually creating conversation and research about the impacts and safety of AI in the long term. I don’t think anyone is more comprehensively aware of the full spectrum of important aspects of the issue.
So now the book. Chapter 1 lays out why AI is suddenly on everyone’s radar and very likely to be extremely important over the coming decades, situating the present day as a crucial point within the wider sweep of human and evolutionary history on Earth. Chapter 2 takes the question of “what is intelligence?” and abstracts it from its customary human application to “what is intelligence *in general*?” How can we define it in a way that usefully covers both biological and artificial forms, and how do these tie to a basic understanding of the physical world? This lays the groundwork for the question of what happens as artificial intelligences grow ever more powerful. Chapter 3 addresses this question in the near future: what happens as more and more human jobs can be done by AIs? What about AI weapons replacing human-directed ones? How will we cope when more and more decisions are made by AIs that may be flawed or biased? This is about a lot of important changes occurring *right now*, to which society is, for the most part, asleep at the wheel. Chapter 4 gets into what is exciting – and terrifying – about AI: as a designed intelligence, it can in principle *re*design itself to get better and better, potentially on a relatively short timescale. This raises a lot of rich, important, and extremely difficult questions that not many people have thought through carefully (another in-print example is the excellent book by Bostrom). Chapter 5 discusses what happens to humans as a species after an “intelligence explosion” takes place. Here Tegmark is making a call to start thinking about where we want to end up, as we may get somewhere sooner than we think, and some of the possibilities are pretty awful. Chapter 6 exhibits Tegmark’s unique talent for tackling the big questions, looking at the *ultimate* limits and promise of intelligent life in the universe, and how stupefyingly high the stakes might be for getting the next few decades right.
It’s both a sobering and an exhilarating prospect. Chapters 7 and 8 then dig into some of the deep and interesting questions about AI: what does it mean for a machine to have “goals”? What are our goals as individuals and as a society, and how can we best aim toward them in the long term? Can a machine we design have consciousness? What is the long-term future of consciousness? Is there a danger of relapsing into a universe *without* consciousness if we aren’t careful? Finally, an epilogue describes Tegmark’s own experience – which I’ve had the privilege to personally witness – as a key player in an effort to focus thought and effort on AI and its long-term implications, of which writing this book is a part. (And I should also mention the prologue, which gives a fictional but less *science*-fictional depiction of an artificial superintelligence being used by a small group to seize control of human society.)
The book is written in a very lively and engaging style. The explanations are clear, and Tegmark develops a lot of material at a level that is understandable to a general audience, yet rigorous enough to give readers a real understanding of the issues relevant to thinking about the future impact of AI. There are a lot of new ideas in the book, and although the style is sometimes breezy, that belies a lot of careful thinking about the issues.
It’s possible that real artificial general intelligence (AGI) is 100 or more years away, a problem for the next generation, with large but manageable effects of “narrow” AI to deal with over a span of decades. But it’s also quite possible that it’s going to happen 10, 15, 20, or 30 years from now, in which case society is going to have to make a lot of very wise and very important (literally of cosmic import) decisions very quickly. It’s important to start the conversation now, and there’s no better way.