Life 3.0: Being Human in the Age of Artificial Intelligence Audio CD – January 1, 2035
“All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it.” —Professor Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology
“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” —Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity is Near and How to Create a Mind
“Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down.” —Jaan Tallinn, co-founder of Skype
“This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity.” —Bart Selman, Professor of Computer Science, Cornell University
“The unprecedented power unleashed by artificial intelligence means the next decade could be humanity’s best—or worst. Tegmark has written the most insightful and just plain fun exploration of AI’s implications that I’ve ever read. If you haven’t been exposed to Tegmark’s joyful mind yet, you’re in for a huge treat.” —Professor Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age
“Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions.” —Nick Bostrom, Founder of Oxford’s Future of Humanity Institute, author of Superintelligence
"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Professor Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour
Have you noticed how you don’t “solve” CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) anymore? That’s because computers now can. Artificial intelligence, a fairly niche area of mostly academic study a decade ago, has exploded in the last five years. Much more quickly than many anticipated, machine learning (a subset of AI) systems have defeated the best human Go players, are piloting self-driving cars, usefully if imperfectly translating documents, labeling your photos, understanding your speech, and so on. This has led to huge investment in AI by companies and governments, with every sign that progress will continue. This book is about what happens if and when it does.
But why hear about it from Tegmark, an accomplished MIT physicist and cosmologist, rather than (say) an AI researcher? First, Tegmark has over the past few years *become* an AI researcher, with 5 published technical papers in the past two years. But he also has a lifetime of experience thinking carefully, rigorously, generally (and entertainingly to boot) about the “big picture” of what is possible, and what is not, over long timescales and cosmic distances (see his last book!) – which most AI researchers do not. Finally, he has played an active and key role (as you can read about in the book’s epilogue) in actually creating conversation and research about the long-term impacts and safety of AI. I don’t think anyone is more comprehensively aware of the full spectrum of important aspects of the issue.
So now the book. Chapter 1 lays out why AI is suddenly on everyone’s radar and very likely to be extremely important over the coming decades, situating the present day as a crucial point within the wider sweep of human and evolutionary history on Earth. Chapter 2 takes the question of “what is intelligence?” and abstracts it from its customary human application, to “what is intelligence *in general*?” How can we define it in a useful way that covers both biological and artificial forms, and how do these tie to a basic understanding of the physical world? This lays the groundwork for the question of what happens as artificial intelligences grow ever more powerful. Chapter 3 addresses this question in the near future: what happens as more and more human jobs can be done by AIs? What about AI weapons replacing human-directed ones? How will we cope when more and more decisions are made by AIs that may be flawed or biased? This chapter is about a lot of important changes occurring *right now*, to which society is, for the most part, asleep at the wheel. Chapter 4 gets into what is exciting – and terrifying – about AI: as a designed intelligence, it can in principle *re*design itself to get better and better, potentially on a relatively short timescale. This raises a lot of rich, important, and extremely difficult questions that not many people have thought through carefully (another in-print example is the excellent book by Bostrom). Chapter 5 discusses what happens to humans as a species after an “intelligence explosion” takes place. Here Tegmark is making a call to start thinking about where we want to end up, as we may get somewhere sooner than we think, and some of the possibilities are pretty awful. Chapter 6 exhibits Tegmark’s unique talent for tackling the big questions, looking at the *ultimate* limits and promise of intelligent life in the universe, and how stupefyingly high the stakes might be for getting the next few decades right.
It’s both a sobering and an exhilarating prospect. Chapters 7 and 8 then dig into some of the deep and interesting questions about AI: what does it mean for a machine to have “goals”? What are our goals as individuals and a society, and how can we best aim toward them in the long term? Can a machine we design have consciousness? What is the long-term future of consciousness? Is there a danger of relapsing into a universe *without* consciousness if we aren’t careful? Finally, an epilogue describes Tegmark’s own experience – which I’ve had the privilege to personally witness – as a key player in an effort to focus thought and effort on AI and its long-term implications, of which writing this book is a part. (And I should also mention the prologue, which gives a fictional but less *science*-fictional depiction of an artificial superintelligence being used by a small group to seize control of human society.)
The book is written in a very lively and engaging style. The explanations are clear, and Tegmark develops a lot of material at a level that is understandable to a general audience, yet rigorous enough to give readers a real understanding of the issues relevant to thinking about the future impact of AI. There are a lot of new ideas in the book, and although the writing is sometimes breezy, that belies a lot of careful thinking about the issues.
It’s possible that real, general artificial intelligence (AGI) is 100 or more years away, a problem for the next generation, with large but manageable effects of “narrow” AI to deal with over a span of decades. But it’s also quite possible that it’s going to happen 10, 15, 20, or 30 years from now, in which case society is going to have to make a lot of very wise and very important (literally of cosmic import) decisions very quickly. It’s important to start the conversation now, and there’s no better way.
Top international reviews
However, I really didn’t enjoy it and ended up skimming large chunks of it to the synopsis at each chapter. I bought it having become intrigued by AI after watching the Go documentary on Netflix, and wanted to find out a bit more about the subject. This book doesn’t really do that (apart from the first few chapters) but is more of a societal analysis of the potential dystopian effects of AI, which reads like bad sci-fi and has very little depth.
Good book, but I’ll stick to the Google AI blog.
Tegmark writes on a topic that’s not his area of expertise; he is a physicist and a reductionist. I’m not sure real AI and consciousness shouldn’t instead be viewed like other emergent fields such as biology or, more specifically, neuroscience. The subject doesn’t lend itself to the sparseness of equations (especially when they have not been clearly delineated) – this focus left me a bit disappointed with the book.
When to read:
This is definitely not the first read for anyone interested in the field of AI safety. There is no alternative to Bostrom there. If you are interested in superintelligence in general, you should start with Vernor Vinge’s original paper and then move to Ray Kurzweil. In both of these respects this could be a supplementary read.
There are some sublime pieces of writing where Tegmark’s clear logic and incisive thought made me go wow. For example, when he tries to derive subprinciples and subgoals from any possible ultimate goal (page 264), or in the section where he defines consciousness through our knowledge of decision making (page 312). These were fresh perspectives for me and were my main takeaways from the book.
I felt much more could be said about the neuroscience of our brain (given that he chose to touch on the topic). If the topic of the book was AI safety, more could be said about why each principle was taken up. Possibly there was too much emphasis (without enough mathematics) on going directly from physical substrates (quarks, electrons) to “sentronium” (sentient matter).
The prelude is a masterful imagination of the future and reads like a fast sci-fi piece. Loved it.
The print quality (I got the hard copy) is excellent for the price.
As an AI researcher, my views may be colored by higher expectations.
PS: Irrespective of whether you buy the book, head over to futureoflife.org – it’s a movement started by Tegmark that deserves a read-through.
It encourages you to skip the early chapters if you’re familiar with the basics of AI. The early chapters are excellent for laymen and people who need to get up to speed with the basics of AI.
Very interesting and worthy of a read.
Life 3.0: Being Human in the Age of Artificial Intelligence
TL;DR: Engaging, Futuristic, Concerned, Practical: 5/5
I have read Max Tegmark’s writing on digital physics before, so I was looking forward to his work when I got this book. He has started the Future of Life Institute, dedicated to AI safety research, which according to a Google scientist is one of the key research areas in AI.
The book starts with an interesting sci-fi story of AI development, AI breakout, and various utopian and dystopian scenarios, which was superb.
Life is progressing from biological to cultural to technological. We are capable of editing both the hardware (DNA) and the software (institutions) of life today, which he terms Life 3.0. The classification of AI visionaries into utopians, skeptics, luddites, the beneficial-AI movement, and nobodies is interesting. I would pitch my tent with beneficial AI, I guess.
Further, Max defines terms like intelligence, life, learning, and computation so we can be clear about what exactly he means. I loved Moravec’s landscape of human competence: AI is fast saturating the areas in which humans can claim exclusive competence. I was also piqued by terms he introduced like substrate independence of intelligence, computronium, and even sentronium. He challenges the notion of creativity and intuition as exclusive to humans through an analysis of DeepMind, and I had many an aha moment. The world having averted nuclear disaster on past occasions thanks to a human in the loop makes the case for keeping humans in any AI-controlled weapon system. His career advice for kids in an AI world was novel; I plan to apply it soon. I read somewhere that massage therapist could be an AI-proof profession :D
His AI aftermath scenarios were well thought out, with pros and cons. AIs can become protector gods, zookeepers, benevolent dictators, enslaved gods, and so on. The libertarian utopia appeals to me the most, given my political leanings, unless eliminating suffering is more important than other values like choice.
The analysis of the universe’s available energy for computation was good and expanded my mind. The next chapter, on goals, was the most striking – goals seem to be the most likely reason humans would create AIs in the first place – and I was surprised to learn about natural subgoals like curiosity and self-preservation that would emerge under almost any ultimate goal.
I have read a lot about consciousness from Daniel Dennett, so the last chapter was not very new to me. But I liked the integrated information theory (which has found medical use) and the way it can be used to assess or model consciousness as an emergent physical property.
I wish Max Tegmark and FLI the best of luck, and I’m glad that I can be part of the conversation.
Thank you Max Tegmark! Your book has blown my mind!