AI systems with human-level or greater intelligence could be a dangerous thing. Imagine a super-smart coffee machine dedicated to making you the perfect cup of coffee. Suppose that one day you don't want the coffee and decide to turn the machine off. The machine, however, has already thought about this: its only goal is to make you coffee, and it can't make coffee when it's turned off, so it stops you turning it off, by whatever means possible! This book explores how such super-smart systems could be kept under control. After discussing the development of AI systems and the risks they pose, it presents a simple 'provably beneficial' system, one that will always defer to human wishes, and then elaborates this into increasingly complex situations. Along the way it takes in ethics and moral philosophy. I found this one of the most interesting parts of the book, particularly the attempts to formulate the problems algebraically. Stuart Russell writes clearly and with plenty of humour. The technical material is well presented and mostly relegated to a series of annexes. Overall, I found this to be an informative and thought-provoking book. Highly recommended to anyone with an interest in technology and AI.
Reviewed in the United States on November 19, 2019
Human Compatible is an extremely important book. It convincingly argues that true success in building AI would be one of the greatest ever changes in the human condition (for better or for worse), yet that the researchers working towards such a success are deeply unprepared for what it would bring.
This goes far beyond the early social impacts of AI we are beginning to see, pointing instead to a future where humans are no longer the most intelligent, or powerful, entities in the world. And thus to a future where humans are at the mercy of their creations. If these intelligent systems can be aligned with human values, they would be able to help protect us and empower us to achieve what we truly desire. But if they are misaligned, their moment of creation might be the moment when all hope for a humane future was lost. Moreover, the science of how to align AI with our values is at a very early stage, with deep challenges yet to be overcome.
Stuart Russell’s warning is all the more compelling because he is one of the most eminent researchers in the field of artificial intelligence. He is not someone with an axe to grind, but someone who clearly feels a deep weight of responsibility as his own field gets closer to achieving its goals. Motivated by the arguments he lays out in this book, he has also become a leader in the emerging field of research on aligning AI systems with human values. Refreshingly, his message is not one of doom and gloom, but of explaining the nature of the dangers and difficulties, before setting out a plan for how we can rise to these challenges. His approach involves redefining the goals of the field of AI to include alignment as an inextricable part, and he points to a promising line of attack on the problem of alignment based on uncertainty about values.
Unusually, the book is aimed both at the thoughtful public and at AI researchers themselves. The public will benefit immensely from his characteristically clear and approachable prose, which cuts to the heart of what AI is and exactly how it could be dangerous. Researchers will benefit from the clearest presentation to date of exactly how things could go wrong, why they need to start worrying about this now, and how important and stimulating the research needed to solve this will be.
This is a remarkable achievement and a must-read for anyone who is interested in this most important technology of our time.
Very insightful read. Grossly oversimplifying it here, but Russell essentially reassures us that we need not worry about becoming the cautionary tale of another Black Mirror episode, so long as we're able to effectively apply control measures to future AI systems 🤖 Whether or not you feel confident about that premise is your decision, but hey ho, I suppose we won't have to wait too long to find out whether we've created our own demise anyway! There's a handy appendix at the back explaining core concepts of contemporary AI systems, which is useful. Brilliant for people familiar with the field; not so sure about the casual reader unless you have a background in mathematics, engineering, etc.
Excellent and thought-provoking book. It states clearly the issues facing humans and our common future vis-à-vis machines that may or may not think. Humorous at times, provocative, and well thought out. It explains key concepts and raises all the essential questions. The difference between human proactive thinking and AI ratiocination and reactive processing is made clear, and woe betide us if we don't realise the difference.
Engaging, illuminating look into the world of AI risk from an AI expert. As a researcher in the field, I found this helpful - but it's also written for non-technical readers. It is for example easier to read than Bostrom's book 'Superintelligence'.
It's hard to recommend this book even to those without any idea of what AI is; you could get a better intro from Wikipedia or simple internet searches. The discussion is too basic and shallow; it's almost like reading news excerpts :( For an author of this caliber, you would expect better (no offense intended).