How We Know What Isn't So Reprint Edition, Kindle Edition
- ASIN : B001D1SS2M
- Publisher : Free Press; Reprint edition (June 30, 2008)
- Publication date : June 30, 2008
- Language : English
- File size : 708 KB
- Text-to-Speech : Enabled
- Enhanced typesetting : Enabled
- X-Ray : Not Enabled
- Word Wise : Enabled
- Print length : 228 pages
- Lending : Not Enabled
- Best Sellers Rank: #469,008 in Kindle Store
Top reviews from the United States
Part One of the book looks at why we are susceptible to ideas and conclusions that are not supported by fact. An example from the realm of sports deftly demonstrates how the human brain is programmed to seek out patterns, sometimes even where there are none to be found, and how this tendency can erroneously convince us of false information. The example revolves around basketball players' belief that scoring comes in streaks, and that one can develop a “hot hand”. Research, however, shows that statistically a prior make or miss has no bearing on the success of a future shot attempt. A player who has made one or two shots in a row nevertheless believes he will continue to make them at a greater percentage, which changes the way he plays, such as not passing to open teammates. The idea is further undermined by regression to the mean, which predicts that an unusually good stretch will tend to be followed by more ordinary performance.
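The hot-hand claim can be checked against pure chance with a short simulation (a sketch, not from the book; the 50% make probability and shot count are assumed figures): if shots are independent, the make rate immediately after a made shot is no better than the overall rate.

```python
import random

def conditional_make_rate(n_shots=200_000, p_make=0.5, seed=42):
    """Simulate independent shots and compare the overall make rate
    with the make rate immediately following a made shot."""
    rng = random.Random(seed)
    shots = [rng.random() < p_make for _ in range(n_shots)]
    # Outcomes of shots taken right after a make
    after_make = [cur for prev, cur in zip(shots, shots[1:]) if prev]
    overall = sum(shots) / len(shots)
    after = sum(after_make) / len(after_make)
    return overall, after

overall, after = conditional_make_rate()
print(round(overall, 3), round(after, 3))  # the two rates are nearly identical
```

Even though every shot here is independent by construction, the simulated sequences still contain streaks, which is exactly the pattern observers mistake for a hot hand.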
Part Two of the book delves into the motivations behind our beliefs, such as how social standards, biases, and exaggerated secondhand accounts can convince us of false realities. Again, the author uses several practical examples: one returns to the sports world, describing the biasing effect of referees who unfairly penalize certain jersey colors; another recounts the story of Little Albert, a young boy who was subjected to conditioning experiments involving animals and loud sounds.
The final section takes an unexpected turn and goes after a handful of unconventional beliefs such as alternative medicine and extrasensory perception (ESP). Gilovich reveals himself to be quite the skeptic as he skillfully pokes holes in the non-scientific nature of these activities.
Using his extensive background in social and behavioral psychology, Gilovich has created an insightful book that is essentially a "how-to" guide to avoiding irrational thinking. By giving the reader a set of tools to critically think about data, long-held beliefs, and newer fringe philosophies, Gilovich has empowered his audience to challenge the status quo by analyzing and evaluating the information that goes into making decisions or choosing what to believe as fact. The biggest criticisms of the book are related to how some topics seem to be discussed longer than necessary, and that several of the references are outdated. That being said, for a book that is 25+ years old, the content is written in a way that keeps the reader engaged, and explains the core concepts in a way that the layperson can sufficiently understand.
Is the hidden-data problem just a matter of poorly designed testing of job and college applicants? Is it inevitable that some students who could have done well in college are not admitted, or that job applicants who would have performed well don't get the job? Even if an entrance test had a very high correlation with later performance, the number of rejected candidates who would also have done well might still be large. Would the university scale down the number of students? Would the employer quit the business? Or is an element of uncertainty about a candidate's abilities simply unavoidable? Students may drop out; employees may later quit or get fired. Perhaps this is more a natural process than something perfect testing could eliminate. This raises the question of whether a certain amount of failure in human reasoning is simply necessary. Perfect reasoning applied to practical problems is not possible in many cases, but we should still try, while staying aware of the limits of such tests. We can deal with the results of imperfect testing in other ways, but perfection will never occur: people are unpredictable and change with time.
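The hidden-data problem has a measurable statistical face: once we only observe the candidates we admitted, the apparent correlation between test and performance shrinks (restriction of range). A minimal sketch, with an assumed model in which performance partly tracks the test score:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
# Hypothetical model: performance = 0.6 * test score + independent noise
tests = [rng.gauss(0, 1) for _ in range(20_000)]
perfs = [0.6 * t + 0.8 * rng.gauss(0, 1) for t in tests]

r_all = pearson(tests, perfs)
# But we only ever observe the people we hired: scores above a cutoff
hired = [(t, p) for t, p in zip(tests, perfs) if t > 0.5]
r_hired = pearson([t for t, _ in hired], [p for _, p in hired])
print(round(r_all, 2), round(r_hired, 2))  # correlation shrinks among the hired
```

The test may be genuinely predictive across the whole applicant pool, yet look weak among the people we actually hired, because the rejected candidates' performance stays hidden.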
How can we be reasonable about our reasoning? A recursive conundrum?
Every morning, the first thing my teacher would tell me was, "Minds are like parachutes, and parachutes only work when they are open!"
This has stuck with me forever, and it's something I remind myself of every morning.
This book is great; however, the last chapter on ESP is not that fun, so I skipped it. The rest of the book has some great ideas, and anyone can learn something new from this old-but-gold book.
This book will help you improve your thinking. To me that is very worthwhile. The book reads fairly easily, and I am currently on my third reading while taking detailed notes from which to teach my son.
If you want to better understand how people think and improve your own thinking get this book!
Top reviews from other countries
The brain is hard-wired to detect order in the nature of things. We can learn from experience by accumulated observations and this has obvious survival advantages in evolutionary terms.
But where do things start to go wrong? First of all, we see ordered patterns of outcomes that are in fact the blind product of chance. Chance produces less alternation than our intuition leads us to expect. If we toss a coin 20 times, we're unlikely to see exactly 10 heads and 10 tails. A series of 20 tosses has a 50-50 chance of producing 4 heads in a row. When we see patterns such as the hot streak in basketball, we think we are spotting an order that isn't in fact there.
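Both coin-toss claims can be checked exactly rather than by intuition. A sketch: the run probability is counted via the standard recurrence for sequences that avoid a run of heads, and the exactly-10-heads probability follows from the binomial coefficient.

```python
from math import comb

def p_exactly_k_heads(n, k):
    """Probability of exactly k heads in n fair tosses."""
    return comb(n, k) / 2 ** n

def p_run_of_heads(n, run=4):
    """Probability of at least one run of `run` consecutive heads in
    n fair tosses. f[i] counts length-i sequences with no such run:
    each one starts with T, HT, HHT, ..., up to (run-1) heads then T."""
    f = [2 ** i for i in range(run)]  # short sequences can't contain the run
    for _ in range(run, n + 1):
        f.append(sum(f[-run:]))
    return 1 - f[n] / 2 ** n

print(round(p_exactly_k_heads(20, 10), 3))  # 0.176 - exactly even is unlikely
print(round(p_run_of_heads(20), 3))         # 0.478 - roughly a 50-50 chance
```

So a perfectly fair coin splits exactly 10-10 less than a fifth of the time, while a "streak" of four heads shows up in about half of all 20-toss series, just as the review states.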
The regression effect also fools us into misattributing a cause to an effect. You perform exceptionally badly or exceptionally well on an exam, much worse or better than your average. Your next result is likely to be better or worse as you move back towards your average. That's the regression effect. But we assume that the exceptional and atypical is representative when the regression effect would tell us otherwise: investors may assume that a company's bumper profits in one year will be repeated in future years, when in all likelihood they will fall back towards the norm.
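Regression to the mean falls out of any model where an observed result mixes stable ability with luck. A minimal sketch with assumed, illustrative numbers (ability centred on 70, equal-sized luck component):

```python
import random
import statistics

rng = random.Random(7)
# Hypothetical model: each exam score = stable ability + luck
ability = [rng.gauss(70, 8) for _ in range(50_000)]
exam1 = [a + rng.gauss(0, 8) for a in ability]
exam2 = [a + rng.gauss(0, 8) for a in ability]

# Select the students whose first result was exceptional (top ~10%)
cutoff = sorted(exam1)[-5_000]
top = [(e1, e2) for e1, e2 in zip(exam1, exam2) if e1 >= cutoff]
m1 = statistics.fmean(e1 for e1, _ in top)
m2 = statistics.fmean(e2 for _, e2 in top)
print(round(m1, 1), round(m2, 1))  # the retest mean slides back toward 70
```

Nothing about the students changed between the two sittings; the high first scores were partly luck, and the luck does not repeat. Blaming complacency, pressure, or a jinx for the drop is exactly the misattribution the review describes.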
We also form beliefs from insufficient data, treating weakly tested hypotheses as facts. We look for confirmatory examples while overlooking or discounting facts that contradict a belief. We fail, in other words, to understand the distinction between necessary and sufficient evidence. We seize on isolated, salient pieces of data that prematurely confirm a hypothesis. Take the homoeopathist's claim that a cancer patient was miraculously cured after taking an alternative remedy. The recovery is treated as conclusive evidence of the remedy's efficacy. But such evidence is in itself insufficient to prove anything: isolated facts do not in themselves provide sufficient confirmation. They are too vulnerable to the discovery of counter-examples that contradict the hypothesis.
We leap to such conclusions because when we test a hypothesis, we fail to define what success or failure is. Too often beliefs are formed with vague definitions of what counts as a successful confirmation. Studies of identical twins separated at birth may well track an identity of life outcomes that points strongly to genetic influences. But there are many outcomes or results in any given life. Some of these may overlap and give the impression of congruence. So the twins may both choose the same occupation, and this is indeed a striking identity of outcome, but it is only one such outcome, and others may vary. The danger, once again, is taking an overlap of outcomes as evidence that two sets of data are similar while overlooking the variances. Likewise, many predictions are couched so vaguely as to guarantee against disconfirmation, akin to Woody Allen's spoof Nostradamus character who portentously avers that `two nations will go to war but only one will win'.
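The twins point has a simple arithmetic core: compare enough attributes and at least one coincidental match becomes nearly certain. A sketch with illustrative numbers not taken from the book (20 independent attributes, each matching between two strangers with probability 0.25):

```python
# Probability that at least one of n independent attributes matches,
# when each matches by chance with probability p_match
p_match, n_attrs = 0.25, 20
p_at_least_one = 1 - (1 - p_match) ** n_attrs
print(round(p_at_least_one, 3))  # close to certainty
```

Reporting only the attributes that happened to match, while ignoring how many were compared, is the vague definition of "successful confirmation" at work.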
Does our social nature compensate for this? Not necessarily. We tend to associate with like-minded people and to fight shy of conflict and controversy. So members of presidential advisory groups hold their own counsel, and we keep our mouths shut during a meeting at work; we do not want to be seen to rock the boat. The result is that everyone believes their own views are more broadly shared than they actually are (one reason the bore and the name-dropper carry on with a self-defeating strategy is precisely the reluctance of others to point it out).
Good heavens, having said all this, how on earth can we tell if our beliefs are well founded? There is no easy way out of these cognitive illusions. But it's not all bad. We do have good reasons, for example, to accept the theory of gravity, which has weight (so to speak) and is well attested by centuries of observational and statistical data. So we can rightly disregard claims of levitation on this basis.
We can also tighten up our definitions of what counts as confirmation, as we noted earlier. If we were testing whether a training course that claims to raise sales staff performance really works, we would define successful confirmation as increased sales figures. The scientific process of peer review also helps: we can make sure that a researcher does not know which members of the trial group are receiving the new drug being tested, so that preconceptions of success or failure do not contaminate the observations. We can test whether a claim for an extraordinary effect like Extra-Sensory Perception can be replicated (it can't).
These are palliatives, however. We can only strive, imperfectly, to recognise when our reasoning faculties are leading us up blind alleys. This book will at least help you be a little more vigilant when it comes to forming conclusions about why you think you are right to believe what you do.