- Paperback: 123 pages
- Publisher: Springer; 2011 edition (July 25, 2011)
- Language: English
- ISBN-10: 1441994998
- ISBN-13: 978-1441994998
- Product Dimensions: 6.1 x 0.3 x 9.2 inches
- Shipping Weight: 8.8 ounces
Fisher, Neyman, and the Creation of Classical Statistics (2011 Edition)
From the reviews:
“I enjoyed reading about the Human traits of the founders of modern classical statistics. The author put a lot of work into finding and citing the writings from Fisher and Neyman. … This is a well done book that I recommend reading. I also think that it would make a great graphic novel.” (Cats and Dogs with Data, maryannedata.wordpress.com, August 2013)

“It provides a historical account of the development of classical statistics over a period covering approximately the first half of the twentieth century … . The purported aim of this book is to ‘trace the creation of classical statistics, and to show that it was principally the work of two men, Fisher and Neyman’ … . it has been reasonably successful in achieving this. … I would recommend this book to those who have a serious interest in the history of statistics … .” (Martin Griffiths, The Mathematical Gazette, Vol. 97 (538), March 2013)
From the Back Cover
Classical statistical theory―hypothesis testing, estimation, and the design of experiments and sample surveys―is mainly the creation of two men: Ronald A. Fisher (1890-1962) and Jerzy Neyman (1894-1981). Their contributions sometimes complemented each other, sometimes occurred in parallel, and, particularly at later stages, often were in strong opposition. The two men would not be pleased to see their names linked in this way, since throughout most of their working lives they detested each other. Nevertheless, they worked on the same problems, and through their combined efforts created a new discipline.
This new book by E.L. Lehmann, himself a student of Neyman’s, explores the relationship between Neyman and Fisher, as well as their interactions with other influential statisticians, and the statistical history they helped create together. Lehmann uses direct correspondence and original papers to recreate an historical account of the creation of the Neyman-Pearson Theory as well as Fisher’s dissent, and other important statistical theories.
In this I was a little disappointed, but that may be because I did not carefully review the book’s intent before purchasing. I was hoping to gain a deeper understanding of the views of both Fisher and Neyman on foundational questions. Instead I got a one-hundred-page treatise on the history of statistical concepts.
Even so, you cannot really fault a book for being what the author wanted to write. If you are interested in a history of the fundamentals of twentieth-century statistics, you will find the book well-researched, well-written, and concise. If you are looking to understand these concepts in greater depth, you will need to turn to another source.
To understand what makes this book so special it may help to return to the evening of May 29, 1832. For most of that night the remarkable French mathematician Évariste Galois (October 25, 1811 - May 31, 1832) sat at a table in Paris and wrote a long letter to his friend Auguste Chevalier. The letter was accompanied by three rough manuscripts that together laid out the rudiments of what has come to be called Galois theory. Galois did this because he was scheduled for a duel on May 30th that he feared he would not survive. Unfortunately, Galois was correct. He died on the 31st from the gunshot wound he suffered in the duel.
In his letter Galois told Chevalier to take this work and "Ask Jacobi or Gauss publicly to give their opinion of these theorems, not of their truth (for they are certainly true), but of their importance."
Neither Jacobi nor Gauss was available to go through the mass of material written at the birth of classical statistics, but fortunately Lehmann was. Some statisticians might have been able to read through all of the material that is covered in this book, but Lehmann is able to select what was important. Providing focus and emphasis, and being able to place the connections in sharp relief, requires rare talent and knowledge. I don't know how many others could have done it and I will be forever grateful that Lehmann elected to leave us with this parting gift. I am also grateful to Juliet Popper Shaffer, his wife, who, without fanfare, shepherded the manuscript through the publication gauntlet.
The book provides an up-close look at the evolution of the methodology and philosophy of the three cornerstones of modern statistics: hypothesis testing, estimation, and experimental design. The initial development of each was the work of two men, Ronald Fisher and Jerzy Neyman. In each, Fisher was the leader, driven by his intuition, but running beside him was Neyman, who placed Fisher's ideas on a more rigorous mathematical and logical basis, expanding their reach and occasionally correcting the great man. The path was never smooth, with forays and reversals. Their work was not, in any real sense, collaborative; they merely worked on the same problems at more or less the same time. Progress was hampered because they often used different vocabulary for almost identical concepts: what Fisher called "sensitiveness," Neyman called "power." Moreover, the terms themselves often changed meaning over time, although Fisher would never admit it, insisting that he had been misunderstood.
Although Lehmann was far too polite ever to say so directly, it is clear that progress was not helped by Fisher's acerbic nature. Instead of saying that Fisher was nasty, Lehmann lets the great man speak for himself.
In a 1951 letter to Horace Gray, Fisher writes,
"Neyman is, judging by my own experience, a malicious mischief maker. Probably by now this is sufficiently realized in California."
This was provoked by Neyman's review of Fisher's Selected Papers in the Scientific Monthly (1951):
"Unfortunately, in conceptual mathematical statistics, Fisher was much less successful than in manipulatory, and of the three above concepts only one, that of a sufficient statistic, continues to be of substantial interest. The other two proved to be either futile or self-contradictory, and have been more or less generally abandoned."
The "other two" he referred to were the use of likelihood as a measure of confidence and fiducial inference. No one would now suggest that likelihood has been abandoned, although, in Lehmann's gentle words, "few were able to follow Fisher's fiducial argument."
Were this the only example of Fisher's difficult nature, we might be tempted to forgive him, thinking that he had been unfairly provoked. But there are many more instances with little or no provocation.
The aspect of Fisher's work that generated many such outbursts was his defense of his "fiducial argument." This was Fisher's attempt to accomplish what Bayesian methods can do, but without making assumptions about priors. Despite many attempts to understand it, I have failed, even with Lehmann's patient placement of it in context. And I am not alone. In 1955, Tukey corresponded with Fisher about the fiducial argument. In a letter dated April 22, 1955, Tukey included two examples showing the nonuniqueness of fiducial probability.
Fisher replied on April 27th and said that there was no problem,
"...as you would see if you could ever get your bull-headed mind to stop and think."
But these are merely some tasty sidelights. The main story is one of a monumental contribution. The list of practical ideas that Fisher originated is staggering:
* Hypothesis testing (with a long list of specific procedures, of which analysis of variance and covariance are but two). Prior to Fisher the choices were much more limited and often wrong (e.g., Pearson used the incorrect number of degrees of freedom for the chi-square test). Fisher also derived the t-distribution and introduced the phrase "testing for significance."
* Point estimation (again with many methods, maximum likelihood being the most prominent). Prior to Fisher the options were least squares (with its well-known sensitivity to outliers) and Karl Pearson's method of moments. Neither provides the flexibility and asymptotic efficiency of Fisher's maximum likelihood. One need only compare how rarely the method of moments is used today with the ubiquity of maximum likelihood to get a measure of the improvement.
* Experimental design (which includes factorial designs, blocking, and randomization). Prior to Fisher it was felt that experiments should change just one variable at a time. Fisher pointed out that a factorial design not only massively increases efficiency but also lets us learn about interactions. He also provided other shortcuts, such as Latin and Greco-Latin square designs, along with the computational details for analyzing the data they generate and suggestions for interpreting the results.
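The advantage of maximum likelihood over the method of moments mentioned above can be seen in a small simulation. This is my own illustrative sketch, not an example from the book: for samples from a Uniform(0, θ) distribution, the moment estimator 2·x̄ and the maximum-likelihood estimator max(xᵢ) can be compared by mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 10.0  # true upper bound of Uniform(0, theta)
n = 50        # sample size per replication

# Repeat the experiment many times and record each estimator's squared error.
mom_err, mle_err = [], []
for _ in range(2000):
    x = rng.uniform(0, theta, size=n)
    mom = 2 * x.mean()  # method of moments: solves E[X] = theta / 2
    mle = x.max()       # maximum likelihood: the sample maximum
    mom_err.append((mom - theta) ** 2)
    mle_err.append((mle - theta) ** 2)

# The moment estimator's MSE shrinks like 1/n; the MLE's shrinks like 1/n^2,
# so for moderate n the MLE is far more accurate.
print("MoM MSE:", np.mean(mom_err))
print("MLE MSE:", np.mean(mle_err))
```

The uniform case is an extreme illustration (the MLE here is super-efficient), but it conveys the general point the reviewer makes: matching moments throws away information that the likelihood retains.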
It would be easy for Neyman to be lost in the glare of Fisher's brilliance, so let me mention three of his contributions. The first is the ubiquitous Neyman-Pearson theory, which added optimality to the benefits Fisher listed. The second is Neyman's remarkable 1934 paper that laid the foundation for survey sampling, extending Fisher's design of experiments to the important area of finite populations. The third was his insistence on the importance of randomization in making causal inferences, a contribution that foreshadowed Rubin's model of causal inference but was long invisible because it was published in Polish.
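The optimality the Neyman-Pearson theory added can be made concrete with a toy calculation (again my own sketch, not from the book). For testing H0: μ = 0 against H1: μ = 1 with n observations from N(μ, 1), the Neyman-Pearson lemma says the likelihood ratio is monotone in the sample mean, so rejecting for large x̄ is the most powerful test of its size, and its power is a one-line computation with the standard library:

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal

# H0: mu = 0 vs H1: mu = 1, with X_i ~ N(mu, 1) and x_bar ~ N(mu, 1/n).
# The most powerful level-alpha test rejects when x_bar exceeds a cutoff
# chosen so that the rejection probability under H0 is exactly alpha.
n, alpha = 9, 0.05
crit = nd.inv_cdf(1 - alpha) / math.sqrt(n)      # size-alpha cutoff for x_bar
power = 1 - nd.cdf((crit - 1.0) * math.sqrt(n))  # P(reject | mu = 1)
print(round(crit, 3), round(power, 3))
```

No test based on these nine observations can do better at the same significance level; that guarantee, absent from Fisher's framework, is exactly what Neyman and Pearson contributed.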
Lehmann's book is a scant 93 pages. It is written at a level of technical difficulty on a par with many of the articles published in social science journals, and so there is nothing standing in the way of a delightful and informative stroll through the origins of some of the most important and influential ideas in modern statistics. I can't recommend it highly enough.
I must confess, however, that despite Lehmann's help, Fisher's fiducial argument remains out of my reach. Don Rubin has pointed out that it only seems to make sense if you say it fast.