Deep Learning (Adaptive Computation and Machine Learning series) Kindle Edition
Ian Goodfellow, Yoshua Bengio, Aaron Courville (Authors)

“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.”
—Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.
The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.
Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
Editorial Reviews
Review
Deep learning has taken the world of technology by storm since the beginning of the decade. There was a need for a textbook for students, practitioners, and instructors that includes basic concepts, practical aspects, and advanced research topics. This is the first comprehensive textbook on the subject, written by some of the most innovative and prolific researchers in the field. This will be a reference for years to come.
―Yann LeCun, Director of AI Research, Facebook; Silver Professor of Computer Science, Data Science, and Neuroscience, New York University
About the Author
Yoshua Bengio is Professor of Computer Science at the Université de Montréal.
Aaron Courville is Assistant Professor of Computer Science at the Université de Montréal.
Product details
- ASIN : B08FH8Y533
- Publisher : The MIT Press (November 10, 2016)
- Publication date : November 10, 2016
- Language : English
- File size : 17743 KB
- Text-to-Speech : Enabled
- Screen Reader : Supported
- Enhanced typesetting : Enabled
- X-Ray : Not Enabled
- Word Wise : Not Enabled
- Print length : 800 pages
- Lending : Not Enabled
- Best Sellers Rank: #56,079 in Kindle Store
- #13 in AI & Semantics
- #56 in Artificial Intelligence & Semantics
About the authors
Ian Goodfellow is a research scientist at OpenAI. He has invented a variety of machine learning algorithms including generative adversarial networks. He has contributed to a variety of open source machine learning software, including TensorFlow and Theano.
Customer reviews
Top reviews from the United States
Reviewed in the United States on September 27, 2017
Bad mistake. Only a few of the reviews clearly state the obvious problems of this book. Oddly enough, those informative reviews tend to attract aggressively negative comments of an almost personal nature. The disconnect between the majority of cloyingly effusive reviews and the reality of how the book is written is quite flabbergasting. I do not wish to speculate on the reason, but it does sometimes occur with a first book in an important area, or when dealing with pioneer authors who have a cult following.
First of all, it is not clear who the audience is. The writing does not provide details at the level one expects from a textbook, nor does it provide a good overview ("big picture" thinking). Advanced readers will not gain much either, because the book is too superficial when it comes to the advanced topics (the final 35% of the book). More than half of it reads like the bibliographic notes section of a book, and the authors seem to have no understanding of the didactic intention of a textbook (beyond a collation, or importance sampling, of various topics). In other words, these portions read like a prose description of a bibliography, with equations thrown in as annotation. In several chapters the level of detail is closer to an expanded ACM Computing Surveys article than to a textbook. At the other extreme of audience expectation, we have a review of linear algebra at the beginning, which is a waste of space that could have been spent on actual explanations in other chapters. If you don't already know linear algebra, you cannot really hope to follow anything (especially given the way the book is written). In any case, the linear algebra introduced in that chapter is too poorly written even to brush up on known material, so who is it for?
As a practical matter, Part I of the book (linear algebra, probability, and so on) is mostly redundant or off-topic for a neural network book, and Part III is written superficially, so only a third of the book is remotely useful. Other than the chapter on optimization algorithms (a good description of algorithms like Adam), I do not see a single chapter that does a half-decent job of presenting algorithms within a proper conceptual framework. The presentation style is unnecessarily terse and dry, stylistically closer to a research paper than to a book. It is understood that any machine learning book will have some mathematical sophistication, but the main problem is a lack of concern on the part of the authors for readability, and an inability to put themselves in the reader's shoes (surprisingly enough, some defensive responses to negative reviews tend to blame math-phobic readers). At the end of the day, it is the authors' responsibility to make notational and organizational choices that maximize understanding. Good mathematicians have excellent manners when choosing notation (you don't use nested subscripts/superscripts/functions if you possess the clarity to do it more simply). And no, math equations are not the same as algorithms; they are only a small part of one. Where is the rest? Where is the algorithm described? Where is the conceptual framework? Where is the intuition? Where is the pseudocode? Where are the illustrations? Where are the examples? No, I am not asking for recipes or Python code, just some decent writing, details, and explanations.
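For contrast, here is the kind of sketch I mean: just the Adam update rule in plain Python (variable names and defaults are my own illustration, not taken from the book).

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Running estimates of the gradient's first and second moments.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Correct the bias from initializing m and v at zero (t is the 1-indexed step).
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Per-parameter step, scaled down where gradients have been large.
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

Ten lines of code plus comments. A textbook could afford this for every major algorithm it covers.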
The sections on applications, LSTMs, and convolutional neural networks are hand-wavy in places and read like "you can do this to achieve that." It is impossible to fully reconstruct the methods from the description provided.
A large part of the book (including restricted Boltzmann machines) is so tightly integrated with probabilistic graphical models (PGMs) that it loses its neural network focus. This portion also falls in the latter, superficially written part of the book, so it implicitly creates another prerequisite: being thoroughly comfortable with PGMs (sort of knowing them would not be enough). Keep in mind that the PGM view of neural networks is not the dominant view today, from either a practitioner's or a researcher's point of view. So why the focus on PGMs, if the authors don't have the space to elaborate? On the one hand, they make a futile attempt at accessibility by covering redundant prerequisites like basic linear algebra and probability. On the other hand, the PGM-heavy approach implicitly raises the prerequisites to include a machine learning topic even more advanced than neural networks (one with a 1200+ page book of its own). What the authors are doing is the equivalent of teaching someone to multiply two numbers as a special case of tensor multiplication. Even RNNs with deterministic hidden states are couched as graphical models. It is useful to connect areas, but mixing them is a bad idea. Look at Hinton's course: it explains the connection between Boltzmann machines and PGMs very nicely, yet one can easily follow RBMs without bearing the constant burden of a PGM-centric view.
One fact that I think played a role in these strategic errors of judgement is that the lead author is a fresh PhD graduate. There is no substitute for experience when it comes to maturity in writing (irrespective of how good a researcher someone is). Mature writers can put themselves in the reader's shoes and have a good sense of what is conceptually important. These authors clearly miss the forest for the trees, with chapter titles like "Confronting the Partition Function." The book is an example of the fact that a first book in an important area, with the name of a pioneer author on it, is not necessarily a good book. I am not hesitant to call it out: the emperor has no clothes.

This book is not going to teach you machine learning. I don't even know why they bothered including the math sections: they just restate definitions of varying relevance, which you may or may not already know, in a confusing way. It isn't going to teach you the math, or even serve as a refresher on the math. At best, if you already know the math, you can decode what they are saying and nod along.
The book feels compressed. They write out overly elaborate mathematical notation, and then you just have to think it through and remember that Andrew Ng video where he actually explained the concept. In short, the math is overly elaborate and doesn't really explain anything. The math review section is worthless: there are no examples or practice problems. They expect you to do all the work, which you should, with another book.
As for the structure, it's an example of how not to structure a book. It puts some linear algebra and probability at the start (not good enough to learn from, confusing, and a waste of paper), goes on to derive algorithms such as PCA (yeah, ok!), and then talks about which architecture works for which kind of problem.
So, yeah, if you really want to try out deep learning, don't buy this book. Set up TensorFlow, PyTorch, or another library, run the tutorials, find an architecture for the problem you are interested in, and start tweaking it. You will have far more fun and will have saved your money.
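To show how low the bar is, here is a minimal PyTorch sketch of that workflow (a toy classifier on random data, entirely my own illustration, not from the book):

    import torch
    import torch.nn as nn

    # Define a model, pick a loss and an optimizer, loop over data.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    X = torch.randn(256, 20)          # stand-in inputs
    y = torch.randint(0, 2, (256,))   # stand-in labels

    for epoch in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)   # forward pass
        loss.backward()               # backpropagation
        optimizer.step()              # parameter update

Swap the random tensors for a real dataset and the Sequential stack for an architecture suited to your problem, and you are already past what this book will hand you.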
The praise this book gets is beyond me. Did Musk even read it? I doubt it.
If it's meant for people who want to get started with deep learning, it's completely off topic: it presents the mathematical nitty-gritty of deep learning algorithms without mentioning any specifics of, for example, how to train a convnet. The information on convolutional networks and LSTMs is thinner than what you'll find on any number of deep learning blogs, or on Wikipedia.
If you're genuinely interested in the math behind deep learning out of curiosity (perhaps you're a mathematician who wants to know what this deep learning thing is all about), then perhaps this is the book for you. Otherwise, do yourself a favor and watch or read Andrej Karpathy's Stanford class.
Top reviews from other countries
Reviewed in the United Kingdom on August 8, 2018
The reason for my poor review is that the production quality of the physical book (binding, paper, etc.) is very poor. I was shocked at the numerous defects in this respect, and I am worried that the book will not last.
