Reviewed in the United States on November 19, 2019
Human Compatible is an extremely important book. It convincingly argues that true success in building AI would be one of the greatest ever changes in the human condition (for better or for worse), yet that the researchers working towards such a success are deeply unprepared for what it would bring.
This goes far beyond the early social impacts of AI we are beginning to see, pointing instead to a future where humans are no longer the most intelligent, or powerful, entities in the world. And thus to a future where humans are at the mercy of their creations. If these intelligent systems can be aligned with human values, they would be able to help protect us and empower us to achieve what we truly desire. But if they are misaligned, their moment of creation might be the moment when all hope for a humane future was lost. Moreover, the science of how to align AI with our values is at a very early stage, with deep challenges yet to be overcome.
Stuart Russell’s warning is all the more compelling because he is one of the most eminent researchers in the field of artificial intelligence. He is not someone with an axe to grind, but someone who clearly feels a deep weight of responsibility as his own field gets closer to achieving its goals. Motivated by the arguments he lays out in this book, he has also become a leader in the emerging field of research on aligning AI systems with human values. Refreshingly, his message is not one of doom and gloom, but of explaining the nature of the dangers and difficulties, before setting out a plan for how we can rise to these challenges. His approach involves redefining the goals of the field of AI to include alignment as an inextricable part, and he points to a promising line of attack on the problem of alignment based on uncertainty about values.
Unusually, the book is aimed both at the thoughtful public and at AI researchers themselves. The public will benefit immensely from his characteristically clear and approachable prose, which cuts to the heart of what AI is and exactly how it could be dangerous. Researchers will benefit from the clearest presentation to date of exactly how things could go wrong, why they need to start worrying about this now, and how important and stimulating the research needed to solve this will be.
This is a remarkable achievement and a must-read for anyone who is interested in this most important technology of our time.