PEARSON, KARL
(b. London, England, 27 March 1857; d. Coldharbour, Surrey, England, 27 April 1936)
applied mathematics, biometry, statistics.
Pearson, founder of the twentieth-century science of statistics, was the younger son and the second of three children of William Pearson, a barrister of the Inner Temple, and his wife, Fanny Smith. Educated at home until the age of nine, he was sent to University College School, London, for seven years. He withdrew in 1873 for reasons of health and spent the next year with a private tutor. He obtained a scholarship at King’s College, Cambridge, in 1875, placing second on the list. At Cambridge, Pearson studied mathematics under E. J. Routh, G. G. Stokes, J. C. Maxwell, Arthur Cayley, and William Burnside. He received the B.A. with mathematical honors in 1879 and was third wrangler in the mathematical tripos that year.
Pearson went to Germany after receiving his degree. At Heidelberg he studied physics under G. H. Quincke and metaphysics under Kuno Fischer. At Berlin he attended the lectures of Emil du Bois-Reymond on Darwinism. With his father’s profession no doubt in mind, Pearson went up to London, took rooms in the Inner Temple in November 1880, read in Chambers in Lincoln’s Inn, and was called to the bar in 1881. He received an LL.B. from Cambridge University in 1881 and an M.A. in 1882, but he never practiced.
Pearson was appointed Goldsmid professor of applied mathematics and mechanics at University College, London, in 1884 and was lecturer in geometry at Gresham College, London, from 1891 to 1894. In 1911 he relinquished the Goldsmid chair to become the first Galton professor of eugenics, a chair that had been offered first to Pearson in keeping with Galton’s expressed wish. He retired in 1933 but continued to work in a room at University College until a few months before his death.
Elected a fellow of the Royal Society in 1896, Pearson was awarded its Darwin Medal in 1898. He was awarded many honors by British and foreign anthropological and medical organizations, but never joined and was not honored during his lifetime by the Royal Statistical Society.
In 1890 Pearson married Maria Sharpe, who died in 1928. They had one son, Egon, and two daughters, Sigrid and Helga. In 1929 he married a coworker in his department, Margaret Victoria Child.
At Cambridge, Pearson’s coach under the tripos system was Routh, probably the greatest mathematical coach in the history of the university, who aroused in Pearson a special interest in applied mathematics, mechanics, and the theory of elasticity. Pearson took the Smith’s Prize examination, which called for the very best in mathematics. He failed to become a prizeman; but his response to a question set by Isaac Todhunter was found, on Todhunter’s death in 1884, to have been incorporated in the manuscript of his unfinished History of the Theory of Elasticity, with the comment “This proof is better than De St. Venant’s.”^{1} As a result, in the same year Pearson was appointed by the syndics of the Cambridge University Press to finish and edit the work.
Pearson did not confine himself to mathematics at Cambridge. He read Dante, Goethe, and Rousseau in the original, sat among the divinity students listening to the discourse of the university’s regius professor of divinity, and discussed the moral sciences tripos with a fellow student. Before leaving Cambridge he wrote reviews of two books on Spinoza for the Cambridge Review, and a paper on Maimonides and Spinoza for Mind.
Although intensely interested in the basis, doctrine, and history of religion, Pearson rebelled at attending the regular divinity lectures, compulsory since the founding of King’s in 1441, and after a hard fight saw compulsory divinity lectures abolished. He next sought and, with the assistance of his father, obtained release from compulsory attendance at chapel; after which, to the astonishment and pique of the authorities, he continued to attend as the spirit moved him.
Pearson’s life in Germany, as at Cambridge, involved much more than university lectures and related study. He became interested in German folklore, in medieval and renaissance German literature, in the history of the Reformation, and in the development of ideas on the position of women. He also came into contact with the ideas of Karl Marx and Ferdinand Lassalle, the two leaders of German socialism. His writings and lectures on his return to England indicate that he had become both a convinced evolutionist and a fervent socialist, and that he had begun to merge these two doctrines into his own rather special variety of social Darwinism. His given name was originally Carl; at about this time he began spelling it with a “K.” A King’s College fellowship, conferred in 1880 and continued until 1886, gave Pearson financial independence and complete freedom from duties of any sort, and during these years he was frequently in Germany, where he found a quiet spot in the Black Forest to which he often returned.
In 1880 Pearson worked for some weeks in the engineering shops at Cambridge and drew up the schedule in Middle and Ancient High German for the medieval languages tripos. In the same year he published his first book, a literary work entitled The New Werther, “by Loki,” written in the form of letters from a young man wandering in Germany to his fiancée.
During 1880–1881 Pearson found diversion from his legal studies in lecturing on Martin Luther at Hampstead, and on socialism, Marx, and Lassalle at workingmen’s clubs in Soho. In 1882–1884 he gave a number of courses of lectures around London on German social life and thought from the earliest times up to the sixteenth century, and on Luther’s influence on the material and intellectual welfare of Germany. In addition he published in the Academy, Athenaeum, and elsewhere a substantial number of letters, articles, and reviews relating to Luther. Many of these were later republished, together with other lectures delivered between 1885 and 1887, in his The Ethic of Freethought (1888).
During 1880–1884 Pearson’s mathematical talent was not entirely dormant. He gave University of London extension lectures on “Heat” and served as a temporary substitute for absent professors of mathematics at King’s College and University College, London. At the latter Pearson met Alexander B. W. Kennedy, professor of engineering and mechanical technology, who was instrumental in securing Pearson’s appointment to the Goldsmid professorship.
During his first six years in the Goldsmid chair, Pearson demonstrated his great capacity for hard work and extraordinary productivity. His professorial duties included lecturing on statics, dynamics, and mechanics, with demonstrations and proofs based on geometrical and graphical methods, and conducting practical instruction in geometrical drawing and projection. Soon after assuming the professorship, he began preparing for publication the incomplete manuscript of The Common Sense of the Exact Sciences left by his penultimate predecessor, William Kingdon Clifford; and it was issued in 1885. The preface, the entire chapter “Position,” and considerable portions of the chapters “Quantity” and “Motion” were written by Pearson. A far more difficult and laborious task was the completion and editing of Todhunter’s unfinished History of the Theory of Elasticity. He wrote about half the final text of the first volume (1886) and was responsible for almost the whole of the second volume, encompassing several hundred memoirs (1893). His editing of these volumes, along with his own papers on related topics published during the same decade, established Pearson’s reputation as an applied mathematician.
Somehow Pearson also found the time and energy to plan and deliver the later lectures of The Ethic of Freethought series; to complete Die Fronica (1887), a historical study that traced the development of the Veronica legend and the history of the Veronica-portraits of Christ, written in German and dedicated to Henry Bradshaw, the Cambridge University Librarian; and to collect the material on the evolution of western Christianity that later formed much of the substance of The Chances of Death (1897). In these historical studies Pearson was greatly influenced and guided by Bradshaw, from whom he learned the importance of patience and thoroughness in research. In 1885 Pearson became an active founding member of a small club of men and women dedicated to the discussion of the relationship between the sexes. He gave the opening address on “The Woman’s Question,” and addressed a later meeting on “Socialism and Sex.” Among the members of the group was Maria Sharpe, whom he married in 1890.
In the 1890’s the sole duty of the lecturer in geometry at Gresham College seems to have been to give three courses per year of four lectures to an extramural audience on topics of his own choosing. Pearson’s aim in applying for the lectureship was apparently to gain an opportunity to present some of his ideas to a fairly general audience. In his first two courses, delivered in March and April 1891 under the general title “The Scope and Concepts of Modern Science,” he explored the philosophical foundations of science. These lectures, developed and enlarged, became the first edition of The Grammar of Science (1892), a remarkable book that influenced the scientific thought of an entire generation.
Pearson outlined his concept of the nature, scope, function, and method of science in a series of articles in the first chapter of his book. “The material of science,” he said, “is coextensive with the whole physical universe, not only . . . as it now exists, but with its past history and the past history of all life therein,” while “The function of science” is “the classification of facts, the recognition of their sequence and their relative significance,” and “The unity of all science consists alone in its method, not its material . . . It is not the facts themselves which form science, but the method in which they are dealt with.” In a summary of the chapter he wrote that the method of science consists of “(a) careful and accurate classification of facts and observation of their correlation and sequence; (b) the discovery of scientific laws by aid of the creative imagination; (c) self-criticism and the final touchstone of equal validity for all normally constituted minds.” He emphasized repeatedly that science can only describe the “how” of phenomena and can never explain the “why,” and stressed the necessity of eliminating from science all elements over which theology and metaphysics may claim jurisdiction. The Grammar of Science also anticipated in many ways the revolutionary changes in scientific thought brought about by Einstein’s special theory of relativity. Pearson insisted on the relativity of all motion, completely restated the Newtonian laws of motion in keeping with this primary principle, and developed a system of mechanics logically from them. Recognizing mass to be simply the ratio of the number of units in two accelerations as “expressed briefly by the statement that mutual accelerations are inversely as masses” (ch. 8, sec. 9), he ridiculed the current textbook definition of mass as “quantity of matter.” Although recognized as a classic in the philosophy of science, the Grammar of Science is little read today by scientists and students of science, mainly because its literary style has dated it.
Pearson was thus well on the way to a respectable career as a teacher of applied mathematics and philosopher of science when two events occurred that markedly changed the direction of his professional activity and shaped his future career. The first was the publication of Galton’s Natural Inheritance in 1889; the second, the appointment of W. F. R. Weldon to the Jodrell professorship of zoology at University College, London, in 1890.
Natural Inheritance summed up Galton’s work on correlation and regression, concepts and techniques that he had discovered and developed as tools for measuring the influence of heredity;^{2} presented all that he contributed to their theory; and clearly reflected his recognition of their applicability and value in studies of all living forms. In the year of its appearance, Pearson read a paper on Natural Inheritance before the aforementioned small discussion club, stressing the light that it threw on the laws of heredity, rather than the mathematics of correlation and regression. Pearson became quite charmed by the concept and implications of Galton’s “correlation,” which he saw to be a “category broader than causation . . . of which causation was only the limit, and [which] brought psychology, anthropology, medicine and sociology in large parts into the field of mathematical treatment,” which opened up the “possibility . . . of reaching knowledge—as valid as physical knowledge was then thought to be—in the field of living forms and above all in the field of human conduct.”^{3} Almost immediately his life took a new course: he began to lay the foundations of the new science of statistics that he was to develop almost single-handed during the next decade and a half. But it is doubtful whether much of this would have come to pass had it not been for Weldon, who posed the questions that impelled Pearson to make his most significant contributions to statistical theory and methodology.^{4}
Weldon, a Cambridge zoologist, had been deeply impressed by Darwin’s theory of natural selection and in the 1880’s had sought to devise means for deriving concrete support for it from studies of animal and plant populations. Galton’s Natural Inheritance convinced him that the most promising route was through statistical studies of variation and correlation in those populations. Taking up his appointment at University College early in 1891, Weldon began to apply, extend, and improve Galton’s methods of measuring variation and correlation, in pursuit of concrete evidence to support Darwin’s “working hypothesis.” These undertakings soon brought him face to face with problems outside the realm of the classical theory of errors: How describe asymmetrical, double-humped, and other non-Gaussian frequency distributions? How derive “best”—or at least “good”—values for the parameters of such distributions? What are the “probable errors” of such estimates? What is the effect of selection on one or more of a number of correlated variables? Finding the solution of these problems to be beyond his mathematical capacity, Weldon turned to Pearson for help.
Pearson, in turn, seeing an opportunity to contribute, through his special skills, to the improvement of the understanding of life, characteristically directed his attention to this new area with astonishing energy. The sudden change in his view of statistics, and the early stages of his rapid development of a new science of statistics, are evident in the syllabuses of his lectures at Gresham College in 1891–1894 and in G. Udny Yule’s summaries of Pearson’s two lecture courses on the theory of statistics at University College during the sessions of 1894–1895 and 1895–1896,^{5} undoubtedly the first of their kind ever given. Pearson was an enthusiast for graphic presentation; and his Gresham lectures on “Geometry of Statistics” (November 1891–May 1892) were devoted almost entirely to a comprehensive formal treatment of graphical representation of statistical data from the biological, physical, and social sciences, with only brief mention of numerical descriptive statistics. In “Laws of Chance” (November 1892–February 1893) he discussed probability theory and the concept of “correlation,” illustrating both by coin-tossing and card-drawing experiments and by observations of natural phenomena. The term “standard deviation” was introduced in the lecture of 31 January 1893, as a convenient substitute for the cumbersome “root mean square error” and the older expressions “error of mean square” and “mean error”; and in the lecture of 1 February, he discussed whether an observed discrepancy between a theoretical standard deviation and an experimentally determined value for it is “sufficiently great to create suspicion.” In “The Geometry of Chance” (November 1893–May 1894) he devoted a lecture to “Normal Curves,”^{6} one to “Skew Curves,” and one to “Compound Curves.”
In 1892 Pearson lectured on variation, and in 1893 on correlation, to research students at University College, the material being published as the first four of his Philosophical Transactions memoirs on evolution. At this time he worked out his general theory of normal correlation for three, four, and finally n variables. Syllabuses or summaries of these lectures at University College are not available, but much of the substance of the four memoirs is visible in Yule’s summaries. Those of the lectures of November 1895 through March 1896 reveal Pearson’s early groping toward a general theory of skew correlation and nonlinear regression that was not published until 1905. His summary of Pearson’s lecture of 14 May 1896 shows that considerable progress had already been made on both the experimental and theoretical material on errors of judgement, measurement errors, and the variation over time of the “personal equations” of individual observers that constituted Pearson’s 1902 memoir on these matters.
These lectures mark the beginning of a new epoch in statistical theory and practice. Pearson communicated some thirty-five papers on statistical matters to the Royal Society during 1893–1901. By 1906 he had published over seventy additional papers embodying further statistical theory and applications. In retrospect, it is clear that Pearson’s contributions during this period firmly established statistics as a discipline in its own right. Yet, at the time, “the main purpose of all this work” was not development of statistical theory and techniques for their own sake but, rather, “development and application of statistical methods for the study of problems of heredity and evolution.”^{7}
In order to place the whole of Pearson’s work in proper perspective, it will be helpful to examine his contributions to distinct areas of theory and practice. Consider, for example, his “method of moments” and his system of wonderfully diverse frequency curves. Pearson’s aim in developing the method of moments was to provide a general method for determining the values of the parameters of a frequency distribution of some particular form selected to describe a given set of observational or experimental data. This is clear from his basic exposition of the subject in the first (1894) of his series of memoirs entitled “Contributions to the Mathematical Theory of Evolution.”^{8}
The foundations of the system of Pearson curves were laid in the second memoir of this series, “Skew Variation in Homogeneous Material” (1895). Types I–IV were defined and applied in this memoir; Types V and VI, in a “Supplement . . .” (1901); and Types VII–XII in a “Second Supplement . . .” (1916). The system includes symmetrical and asymmetrical curves of both limited and unlimited range (in either or both directions); most are unimodal, but some are U-, J-, or reverse-J-shaped. Pearson’s purpose in developing them was to provide a collection of frequency curves of diverse forms to be fitted to data as “graduation curves, mathematical constructs to describe more or less accurately what we have observed.”^{9} Their use was facilitated by the central role played by the method of moments: (1) the appropriate curve type is determined by the values of two dimensionless ratios of centroidal moments,
defined in the basic memoir (1894); and (2) values of the parameters of the selected type of probability (or frequency) curve are determined by the conditions μ_{0} = 1 (or μ_{0} = N, the total number of observations), μ_{1} = 0, and the observed or otherwise indicated values of μ_{2} (= σ^{2}), β_{1}, and β_{2}. The acceptance and use of curves of Pearson’s system for this purpose may also have been aided by the fact that all were derived from a single differential equation, to which Pearson had been led by considering the slopes of segments of frequency polygons determined by the ordinates of symmetric and asymmetric binomial and hypergeometric probability distributions. That derivation may well have provided some support to Pearson curves as probability or frequency curves, rather than as purely arbitrary graduation curves. Be that as it may, the fitting of Pearson curves to observational data was extensively practiced by biologists and social scientists in the decades that followed. The results did much to dispel the almost religious acceptance of the normal distribution as the mathematical model of variation of biological, physical, and social phenomena.
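In modern terms, the two centroidal moment ratios, β_{1} = μ_{3}^{2}/μ_{2}^{3} and β_{2} = μ_{4}/μ_{2}^{2}, and the criterion κ used to select among the principal curve types can be sketched in a few lines of Python. This is an illustrative sketch, not Pearson’s own notation; the κ formula is the standard one from later expositions of the Pearson system (e.g., Elderton’s), and the function names are mine:

```python
def moment_ratios(xs):
    """Centroidal moments and Pearson's dimensionless ratios:
    beta1 = mu3**2 / mu2**3 (skewness), beta2 = mu4 / mu2**2 (kurtosis)."""
    n = len(xs)
    m = sum(xs) / n
    mu2, mu3, mu4 = (sum((x - m) ** k for x in xs) / n for k in (2, 3, 4))
    return mu3 ** 2 / mu2 ** 3, mu4 / mu2 ** 2

def pearson_criterion(beta1, beta2):
    """The kappa criterion whose value selects the curve type
    (roughly: kappa < 0 -> Type I; 0 < kappa < 1 -> Type IV;
    kappa > 1 -> Type VI; kappa = 0 with beta2 = 3 -> normal)."""
    return (beta1 * (beta2 + 3) ** 2
            / (4 * (4 * beta2 - 3 * beta1) * (2 * beta2 - 3 * beta1 - 6)))
```

For a symmetric sample such as [1, 2, 3, 4, 5], β_{1} (and hence κ) is zero, and the choice reduces to the symmetric members of the system.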
Meanwhile, Pearson’s system of frequency curves acquired a new and unanticipated importance in statistical theory and practice with the discovery that the sampling distributions of many statistical test functions appropriate to analyses of small samples from normal, binomial, and Poisson distributions (such as χ^{2}, S^{2}, t, S_{1}^{2}/S_{2}^{2}, and r when ρ = 0) are represented by particular families of Pearson curves, either directly or through simple transformation. This application of Pearson curves, and their use to approximate percentage points of statistical test functions whose sampling distributions are either untabulated or analytically or numerically intractable, but whose moments are readily evaluated, have now transcended their use as graduation curves; they have also done much to ensure the value of Pearson’s comprehensive system of frequency curves in statistical theory and practice. The use of Pearson curves for either purpose would, however, have been gravely handicapped had not Pearson and his coworkers prepared detailed and extensive tables of their ordinates, integrals, and other characteristics, which were published principally in Biometrika beginning in 1901, and reprinted, with additions, in his Tables for Statisticians and Biometricians (1914; Part II, 1931).
The statistical concepts and techniques of correlation and regression originated with Galton, who devised rudimentary arithmetical and graphical procedures (utilizing certain medians and quartiles of the data in hand) to derive sample values for his “regression” coefficient, or “index of co-relation,” r. Galton was also the first, though he had assistance from J. D. Hamilton Dickson, to express the bivariate normal distribution in the “Galtonian form” of the frequency distribution of two correlated variables.^{10} Weldon and F. Y. Edgeworth devised alternative means of computation, which, however, were somewhat arbitrary and did not fully utilize all the data. It was Pearson who established, by what would now be termed the method of maximum likelihood, that the “best value of the correlation coefficient” (ρ) of a bivariate normal distribution is given by the sample product-moment coefficient of correlation,

r = Σxy / (N s_{x} s_{y}),
where x and y denote the deviations of the measured values of the x and y characteristics of an individual sample object from their respective arithmetic means (m_{x} and m_{y}) in the sample, Σ denotes summation over all N individuals in the sample, and s_{x} and s_{y} are the sample standard deviations of the measured values of x and y, respectively.^{11} The expression “coefficient of correlation” apparently was originated by Edgeworth in 1892,^{12} but the value of r defined by the above equation is quite properly known as “Pearson’s coefficient of correlation.” Its derivation may be found in section 4b. of “Regression, Heredity, and Panmixia” (1896), his first fundamental paper on correlation theory and its application to problems of heredity.
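The defining formula translates directly into code. The following Python sketch (an illustration; the function name is mine) computes r exactly as defined, with N-divisor standard deviations as in Pearson’s memoir:

```python
import math

def pearson_r(xs, ys):
    # r = sum(x * y) / (N * s_x * s_y), where x and y are deviations of the
    # measurements from their arithmetic means, and s_x, s_y are the
    # (N-divisor) sample standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    dx = [x - mx for x in xs]
    dy = [y - my for y in ys]
    sx = math.sqrt(sum(d * d for d in dx) / n)
    sy = math.sqrt(sum(d * d for d in dy) / n)
    return sum(a * b for a, b in zip(dx, dy)) / (n * sx * sy)
```

Perfectly linear data give r = ±1; the sign follows the direction of the association.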
In the same memoir Pearson also showed how the “best value” of r could be evaluated conveniently from the sample standard deviations s_{x}, s_{y} and either s_{x−y} or s_{x+y}, thereby avoiding computation of the sample product moment (Σxy/N); gave a mistaken expression for the standard deviation of the sampling error^{13} of r as a measure of ρ in large samples, which he corrected in “Probable Errors of Frequency Constants . . .” (1898); introduced the term “coefficient of variation” for the ratio of a standard deviation to the corresponding mean expressed as a percentage; expressed explicitly, in his discussion of the trivariate case, what are now called coefficients of “multiple” correlation and “partial” regression in terms of the three “zero-order” coefficients of correlation (r_{12}, r_{13}, r_{23}); gave the partial regression equation for predicting the (population) mean value of trait X_{1}, say, corresponding to given values of traits X_{2} and X_{3}, the coefficients of X_{2} and X_{3} being expressed explicitly in terms of r_{12}, r_{13}, r_{23} and the three sample standard deviations (s_{1}, s_{2}, s_{3}); gave the formula for the large-sample standard error of the value of X_{1} predicted by this equation; restated Edgeworth’s formula (1892) for the trivariate normal distribution in improved determinantal notation; and carried through explicitly the extension to the general case of the p-variate normal correlation surface, expressed in a form that brought the computations within the power of those lacking advanced mathematical training.
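Three of the quantities named above can be sketched in Python in their now-standard forms (an illustration only; these are the expressions as later codified, with function names of my choosing, not Pearson’s 1896 notation):

```python
import math

def coefficient_of_variation(s, mean):
    # Pearson's "coefficient of variation": standard deviation over mean,
    # expressed as a percentage
    return 100.0 * s / mean

def multiple_correlation(r12, r13, r23):
    # R_{1.23}: correlation of X1 with the best linear combination of X2 and
    # X3, expressed through the three zero-order coefficients
    return math.sqrt((r12 ** 2 + r13 ** 2 - 2 * r12 * r13 * r23)
                     / (1 - r23 ** 2))

def partial_regression_coeffs(r12, r13, r23, s1, s2, s3):
    # coefficients of X2 and X3 in the regression equation predicting the
    # mean of X1 for given values of X2 and X3
    b2 = (s1 / s2) * (r12 - r13 * r23) / (1 - r23 ** 2)
    b3 = (s1 / s3) * (r13 - r12 * r23) / (1 - r23 ** 2)
    return b2, b3
```

When r_{23} = 0, the partial regression coefficients reduce to the simple regression coefficients of X_{1} on X_{2} and on X_{3} separately.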
In this first fundamental memoir on correlation, Pearson carried the development of the theory of multivariate normal correlation as a practical tool almost to completion. When the joint distribution of a number of traits X_{1}, X_{2}, . . ., X_{p} (p ≥ 2) over the individuals of a population is multivariate normal, then the population coefficients of correlation, ρ_{ij} (i, j = 1, 2, . . ., p; i ≠ j), completely characterize the degrees of association among these traits in the population (traits X_{i} and X_{j} are independent if and only if ρ_{ij} = 0 and completely interdependent if and only if ρ_{ij} equals ±1), and the regression in the population of each one of the traits on any combination of the others is linear. It is clear from footnotes to section 5 of this memoir that Pearson was fully aware that linearity of regressions and this comprehensive feature of population (product-moment) coefficients of correlation do not carry over to multivariate skew frequency distributions, and he recognized “the need of [a] theory of skew correlation,” which he proposed to treat “in a memoir on skew correlation.”^{14} The promised memoir, On the General Theory of Skew Correlation and Non-Linear Regression, appeared in 1905.
Pearson there dealt with the properties of the correlation ratio, η (= η_{yx}), a sample measure of correlation that he had introduced in a paper of 1903 to replace the sample correlation coefficient, r, when the observed regression curve of y on x (obtained by plotting the means of the y values, ȳ_{xi}, corresponding to the respective x values, x_{1}, x_{2}, . . ., as a function of x) exhibits a curvilinear relationship. He showed that η is the square root of the fraction of the variability of the N y values about their mean, ȳ, that is ascribable to the variability of the y means ȳ_{xi} about ȳ; that 1 − η^{2} is the fraction of the total variability of the y values about their mean ȳ contributed by the variability of the y values within the respective x arrays about their respective mean values, ȳ_{xi}, within these arrays; and that η^{2} − r^{2} is the fraction ascribable to the deviations of the points (ȳ_{xi}, x_{i}) from the straight line of closest fit to these points, indicating the importance of the difference between η and r as an indicator of the departure of regression from linearity.^{15} He also gave an expression for the standard deviation of the sampling error of η in large samples that has subsequently been shown to be somewhat inaccurate; classified the different forms of regression curves and the different patterns of within-array variability that may arise when the joint distribution of two traits cannot be represented by the bivariate normal distribution, terming the system “homoscedastic” or “heteroscedastic” according to whether the within-array variability is or is not the same for all arrays; gave explicit formulas for the coefficients of parabolic, cubic, and quartic regression curves, in terms of η^{2}, r^{2}, and other moments and product moments of the sample values of x and y; and listed the conditions, in terms of η^{2} − r^{2} and the other sample moments and product moments, that must be satisfied for linear, parabolic, cubic, and quartic regression equations to be adequate representations of the observed regression of y on x.
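The variance decomposition that defines the correlation ratio can be sketched directly in Python (an illustration with a function name of my choosing; it assumes x takes a modest number of distinct values, so that each value defines an “array” of y observations):

```python
def correlation_ratio(pairs):
    """eta_{yx} for data given as (x, y) pairs: eta**2 is the fraction of the
    total variability of y about its mean that is ascribable to the
    variability of the array means ybar_xi about the grand mean ybar."""
    ys = [y for _, y in pairs]
    n = len(ys)
    ybar = sum(ys) / n
    arrays = {}                      # group the y values into x arrays
    for x, y in pairs:
        arrays.setdefault(x, []).append(y)
    between = sum(len(a) * (sum(a) / len(a) - ybar) ** 2
                  for a in arrays.values())
    total = sum((y - ybar) ** 2 for y in ys)
    return (between / total) ** 0.5
```

For data lying exactly on the parabola y = x², η = 1 while r = 0, so η² − r² = 1: the association is complete but entirely nonlinear, which is exactly the situation the correlation ratio was introduced to detect.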
In a footnote to the section “Cubical Regression,” Pearson noted that he had pointed out previously^{16} that when a polynomial of any degree p (p ≥ n) is fit to all of n distinct observational points by the method of moments, the curve determined by “the method of moments becomes identical with that of least squares”; but, he continued, “the retention of the method of moments . . . enables us, without abrupt change of method, to introduce the need for η, and to grasp at once the application of the proper Sheppard’s corrections [to the sample moments and product moments of x and y when the measurements of either or both are coarsely grouped].”
Pearson clearly favored his method of moments; but the method of least squares has prevailed. However, use of the method of least squares to fit polynomial regression curves in a bivariate correlation situation involves an extension beyond the original formulation and development of the method of least squares by Legendre, Gauss, Laplace, and their followers in the nineteenth century. In this classical development of the method of least squares, one of the variables, x for example, was a quantity that could be measured with negligible error, and the other, y, a quantity of interest functionally related to x, the observed values of which for particular values of x, Y_{x}, were, however, subject to nonnegligible measurement errors. The problem was to determine “best” values for the parameters of the functional relation between y and x despite the measurement errors in the observed values of Y_{x}. The method of least squares as developed by Gauss gave a demonstrably optimal solution when the functional dependence of y upon x was expressible with negligible error in a form in which the unknown parameters entered linearly—for instance, as a polynomial in x. In the Galton-Pearson correlation situation, in contrast, the traits X and Y may both be measurable with negligible error with respect to any single individual but in some population of individuals have a joint frequency or probability distribution. The regression of y on x is not an expression of a mathematical functional dependence of the trait Y on the trait X but, rather, an expression of the mean of the values of Y corresponding to values of X = x as a function of x—for example, as a polynomial in x. In the classical least-squares situation, the aim was to obtain the best possible approximation to the correct functional relation between the variables despite variations introduced by unwanted errors of measurement.
In the Galton-Pearson correlation situation, on the other hand, the aim of regression analysis is to describe two important characteristics of the joint variation of the traits concerned. Pearson’s development of the theory of skew correlation and nonlinear regression was, therefore, not merely an elaboration on the work of Gauss but a major step in a new direction.
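The contrast is one of interpretation, not arithmetic: the same least-squares computation serves both settings. A minimal sketch for the straight-line case (illustrative; the function name is mine):

```python
def least_squares_line(xs, ys):
    # slope and intercept minimizing the sum of squared residuals; in the
    # classical setting the residuals are measurement errors about a true
    # functional relation, while in the Galton-Pearson setting the fitted
    # line describes the regression of the mean of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx
```

Only the surrounding probability model, and hence what the fitted coefficients mean, differs between the two situations.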
Pearson did not pursue the theory of multiple and partial correlation beyond the point to which he had carried it in his basic memoir on correlation (1896). The general theory of multiple and partial correlation and regression was developed by his mathematical assistant, G. Udny Yule, in two papers published in 1897. Yule was the first to give mathematical expressions for what are now called partial correlation coefficients, which he termed “net correlation coefficients.” What Pearson had called coefficients of double regression, Yule renamed net regressions; they are now called partial regression coefficients. The expressions “multiple correlation” and “partial correlation” stem from the paper written with Alice Lee and read to the Royal Society in June 1897.^{17}
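The now-standard expression for a first-order partial correlation coefficient, of the kind Yule introduced, can be sketched as follows (an illustration; the function name is mine):

```python
import math

def partial_correlation(r12, r13, r23):
    # r_{12.3}: the correlation of X1 and X2 with X3 held constant
    # ("net correlation" in Yule's terminology), expressed through the
    # three zero-order coefficients
    return ((r12 - r13 * r23)
            / math.sqrt((1 - r13 ** 2) * (1 - r23 ** 2)))
```

If the zero-order correlation r_{12} is entirely accounted for by the common dependence of X_{1} and X_{2} on X_{3} (that is, r_{12} = r_{13} r_{23}), the partial correlation vanishes.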
In order to see whether the correlations found in studies of the heredity of continuously varying physical characteristics held also for the less tractable psychological and mental traits, Pearson made a number of efforts to extend correlation methods to bivariate data coarsely classified into two or more ordered categories with respect to each trait. Thus, in “On the Correlation of Characters Not Quantitatively Measurable” (1900), he introduced the “tetrachoric” coefficient of correlation, r_{t}, derived on the supposition that the traits concerned were distributed continuously in accordance with a bivariate normal distribution in the population of individuals sampled, though not measured on continuous scales for the individuals in the sample but merely classified into the cells of a fourfold table in terms of more or less arbitrary but precise dichotomous divisions of the two trait scales. The derived value of r_{t} was the value of the correlation coefficient (ρ) of the bivariate normal distribution with frequencies in four quadrants corresponding to a division of the x, y plane by lines parallel to the coordinate axes that agreed exactly with the four cell frequencies of the fourfold table. Hence the value of r_{t} calculated from the data of a particular fourfold table was considered to be theoretically the best measure of the intensity of the correlation between the traits concerned. Pearson gave a formula for the standard deviation of the sampling error of r_{t} in large samples. He corrected two misprints in this formula and gave a simplified approximate formula in a paper of 1913.^{18}
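In modern terms the defining condition can be sketched numerically (an illustrative reconstruction, not Pearson’s own series expansion in tetrachoric functions): choose the ρ of a standard bivariate normal whose quadrant probability, for cut points fixed by the marginal totals of the fourfold table, reproduces the observed cell proportion. The cell layout, the bisection search, and the Simpson integration here are all conveniences of the sketch.

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):  # inverse CDF by bisection (illustrative, not optimized)
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def upper_quadrant(h, k, rho, steps=400):
    # P(X > h, Y > k) for a standard bivariate normal with correlation rho,
    # by Simpson's rule on the integral of phi(x) * P(Y > k | X = x).
    s = math.sqrt(1.0 - rho * rho)
    step = 10.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = h + i * step
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        cond = 1.0 - Phi((k - rho * x) / s)
        total += w * math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi) * cond
    return total * step / 3.0

def tetrachoric(n00, n01, n10, n11):
    # n00: below both cuts; n11: above both; n01, n10: the off-diagonal cells.
    N = n00 + n01 + n10 + n11
    h = Phi_inv((n00 + n01) / N)   # cut on the x trait, from its margin
    k = Phi_inv((n00 + n10) / N)   # cut on the y trait, from its margin
    target = n11 / N
    lo, hi = -0.999, 0.999
    for _ in range(60):            # quadrant probability increases with rho
        mid = 0.5 * (lo + hi)
        if upper_quadrant(h, k, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a table with equal margins and cell counts 40, 10, 10, 40 this recovers the classical quadrant result ρ = sin(0.3π) ≈ 0.809.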
To cope with the intermediate case, in which one characteristic of the sample individuals is measured on a continuous scale and the other is merely classified dichotomously, Pearson, in a Biometrika paper of 1909, introduced (but did not name) the “biserial” coefficient of correlation, say r_{b}.
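The standard biserial formula, r_{b} = (M_{1} – M_{0})pq/(sy), with y the normal ordinate at the dichotomizing cut, can be sketched as follows (data and names are illustrative; no claim is made that this matches every detail of Pearson’s 1909 derivation):

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):  # inverse CDF by bisection (illustrative)
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def biserial(values, labels):
    # labels: 1 for the "upper" class of the dichotomy, 0 for the lower.
    n = len(values)
    g1 = [v for v, lab in zip(values, labels) if lab == 1]
    g0 = [v for v, lab in zip(values, labels) if lab == 0]
    p = len(g1) / n                 # proportion in the upper class
    q = 1.0 - p
    m1, m0 = sum(g1) / len(g1), sum(g0) / len(g0)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    z = Phi_inv(q)                  # cut point on the assumed latent normal scale
    y = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # ordinate at the cut
    return (m1 - m0) * p * q / (s * y)

r_b = biserial([1, 2, 3, 4, 5, 6], [0, 0, 0, 1, 1, 1])
```

With these artificial, decidedly non-normal scores the coefficient exceeds unity, itself a reminder that r_{b} is meaningful only under the latent-normality assumption.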
The idea involved in the development of the “tetrachoric” correlation coefficient, r_{t}, for data classified in a fourfold table was extended by Pearson in 1910 to cover cases in which “one variable is given by alternative and the other by multiple categories.” The sample measure of correlation introduced but not named in this paper became known as “biserial η” because of its analogy with the biserial correlation coefficient, r_{b}, and the fact that it is defined by a special adaptation of the formula for the correlation ratio, η, based on comparatively nonrestrictive assumptions with respect to the joint distribution of the two traits concerned in the population sampled. The numerical evaluation of “biserial η,” however, involves the further assumption that the joint variation of the traits is bivariate normal in the population; and its value for a particular sample, say r_{η}, is taken to be an estimate of the correlation coefficient, ρ, of the assumed bivariate normal distribution of the traits in the population sampled. The sampling variation of r_{η} as a measure of ρ was unknown until Pearson published an expression for its standard error in large samples from a bivariate normal population in 1917.^{19} It is not known how large the sample size N must be for this asymptotic expression to yield a satisfactory approximation.
Meanwhile, Charles Spearman had introduced (1904) his coefficient of rank-order correlation, say r′, which, although first defined in terms of the rank differences of the individuals in the sample with respect to the two traits concerned, is equivalent to the product-moment correlation coefficient between the paired ranks themselves. Three years later Pearson, in “On Further Methods of Determining Correlation,” gave the now familiar formula, ρ = 2 sin (πr′/6), for obtaining an estimate of the coefficient of correlation (ρ) of a bivariate normal population from an observed value of the coefficient of rank-order correlation (r′) derived from the rankings of the individuals in a sample therefrom with respect to the two traits concerned; he also presented a formula for the standard error of this estimate in large samples.
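Both the rank-order coefficient and Pearson’s conversion formula are easily reproduced (a sketch assuming untied ranks and hypothetical data):

```python
import math

def ranks(xs):
    # ranks 1..n; this sketch assumes no ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman's r' = 1 - 6 * sum(d^2) / (n(n^2 - 1)), d = rank difference
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def rho_from_rank(r_prime):
    # Pearson's 1907 estimate of a bivariate-normal rho from rank-order r'
    return 2.0 * math.sin(math.pi * r_prime / 6.0)
```

Applied to perfectly concordant rankings, r′ = 1 and the conversion returns ρ = 2 sin(π/6) = 1, as it should.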
The “tetrachoric” and “biserial” coefficients of correlation and “biserial η” played important parts in the biometric, eugenic, and medical investigations of Pearson and the biometric school during the first two decades of the twentieth century. Pearson was fully aware of the crucial dependence of their interpretation upon the validity of the assumed bivariate normality and was circumspect in their application; his discussions of numerical results are full of caution. (A sample product-moment coefficient of correlation, r, always provides a usable determination of the product-moment coefficient of correlation, ρ, in the population sampled, bivariate normal or otherwise. On the other hand, when the joint distribution of the two traits concerned is continuous but not bivariate normal in the population sampled, exactly what interpretations are to be accorded to observed values of r_{t}, r_{b}, and r_{η} is not at all clear; and if the assumed continuity with respect to both variables is not valid, their interpretation is even less clear; they may be virtually meaningless.) The crucial dependence of the interpretation of these measures on the uncheckable assumption of bivariate normality of the joint distribution of the traits concerned in the population sampled, together with their uncritical application and incautious interpretation by some scholars, brought severe criticism; and doubt was cast on the meaning and value of “coefficients of correlation” thus obtained. In particular, Pearson and one of his assistants, David Heron, ultimately became embroiled in a long and bitter argument on the matter with Yule, whose paper embodying a theory and a measure of association of attributes free of any assumption of an underlying continuous distribution Pearson had communicated to the Royal Society in 1899.
Despite this skepticism, r_{t}, r_{b}, and r_{η} have survived and are used today as standard statistical tools, mainly by psychologists, in situations where the traits concerned can logically be assumed to have a joint continuous distribution in the population sampled and the at least approximate normality of this distribution is not seriously questioned.
Pearson did not attempt to investigate sampling distributions of r or η in small samples from bivariate normal or other population distributions because he saw no need to do so. He and his coworkers in the 1890’s and early 1900’s saw their mission to be the advancement of knowledge and understanding of “variation, inheritance, and selection in Animals and Plants” through studies “based upon the examination of statistically large numbers of specimens,” and the development of statistical theory, tables of mathematical functions, and graphical methods needed in the pursuit of such studies.^{20} They were not concerned with the analysis of data from small-scale laboratory experiments or with comparisons of yield from small numbers of plots of land in agricultural field trials. It was the need to interpret values of r obtained from small-scale industrial experiments in the brewing industry that led “Student” to show in 1908 that r is symmetrically distributed about 0 in accordance with a Pearson Type II curve in random samples of any size from a bivariate normal distribution when ρ = 0; and that, when ρ ≠ 0, its distribution is skew, with the longer tail toward 0, and cannot be represented by any of Pearson’s curves.^{21}
In another paper published earlier in 1908 (“The Probable Error of a Mean”), “Student” had discovered that the sampling distribution of s^{2} (the square of a sample standard deviation), in random samples from a normal distribution, can be represented by a Pearson Type III curve. Although these discoveries stemmed from knowledge and experience that “Student” had gained at Pearson’s biometric laboratory in London and were published in the journal that Pearson edited, they seem to have awakened no interest in Pearson or his coworkers in developing statistical theory and techniques appropriate to the analysis of results from small-scale experiments. This indifference may have stemmed from preoccupation with other matters, from recognition that establishment of the small trends or differences for which they were looking required large samples, or from a desire “to discourage the biologist or the medical man from believing that he had been supplied with an easy method of drawing conclusions from scanty data.”^{22}
In September 1914 Pearson received the manuscript of the paper in which R. A. Fisher derived the general sampling distribution of r in random samples of any size n ≥ 2 from a bivariate normal population with any degree of correlation, –1 ≤ ρ ≤ +1, and pointed out the extreme skewness of the distribution for large positive or negative values of ρ even for large sample sizes.^{23} Pearson responded with enthusiasm, congratulated Fisher “very heartily on getting out the actual distribution form of r,” and stated that “if the analysis is correct which seems highly probable, [he] should be delighted to publish the paper in Biometrika.”^{24} A week later he wrote to Fisher: “I have now read your paper fully and think it marks a distinct advance. . . . I shall be very glad to publish it. [It] shall appear in the next issue [May 1915]. . . . I wish you had had the leisure to extend the last pages a little. . . . I should like to see some attempt to determine at what value of n and for what values of ρ we may suppose the distribution of r practically normal.”^{25}
In the “last pages” of the paper, Fisher introduced two transformations of r, r/√(1 – r^{2}) and tanh^{–1} r, his aim being to find a function of r whose sampling distribution would have greater stability of form as ρ varied from –1 to +1, would be more nearly symmetric, or would have an approximately constant standard deviation, for all values of ρ. The first of these two transformations he considered in detail. Denoting the transformed variable by t, and the corresponding transformation of ρ by τ, he showed that the mean value of t was proportional to τ, the constant of proportionality increasing toward unity with increasing sample size. He also gave exact formulas for σ^{2}(t), β_{1}(t), β_{2}(t), and tables of their numerical values for selected values of τ^{2} from .01 to 100 (that is, ρ from .0995 to .995) and sample sizes n from 8 to 53. Although the distribution of t was, by design, much less asymmetric and of more stable form than the distribution of r—this became unmistakably clear when the corresponding values of β_{1}(r) and β_{2}(r) became known in the “Cooperative Study” (see below)—the transformation was not an unqualified success: its distribution was not close to normal except in the vicinity of ρ = 0, and σ^{2}(t) was not approximately constant but nearly proportional to 1/(1 – ρ^{2}). In the final paragraph Fisher dismissed the second transformation for the time being with the comment (with respect to the aims mentioned above): “It is not a little attractive, but so far as I have examined it, it does not tend to simplify the analysis. . .” (He later found it very much to his liking.)
Reasoning about a function of sample values, such as r, in terms of a transform of it, instead of in terms of the function itself, seems to have been foreign to Pearson’s way of thinking. He wrote to Fisher:
I have rather difficulties over this r and t business—not that I have anything to say about it from the theoretical standpoint—but there appear to me difficulties from the everyday applications with which we as statisticians are most familiar. Let me indicate what I mean.
A man finds a correlation coefficient r from a small sample n of a population; often the material is urgent and an answer on the significance has to be given at once. What he wants to know, say, is whether the true value of r(ρ) is likely to exceed or fall short of his observed value by, say, .10. It may be for instance the correlation between height of firing a gun and the rate of consumption of a time fuse, or between a particular form of treatment of wound and time of recovery. . . . For example, suppose that ρ = .30, and I want to find what is the chance that in 40 observations the resulting r will lie between .20 and .40. Now what we need practically are the β_{1} and β_{2} for ρ = .30 and n = 40, and if they are not sufficiently Gaussian for us to use the probability integral, we need the frequency curve of r for ρ = .30 and n = 40 to help us out. . . . Had I the graph of t I could deduce the graph of r, and mechanically integrate to determine the answer to my problem, but you have not got the ordinates of the t-curve and the practical problem remains it seems to me unsolved. It still seems to me essential (i) to determine β_{1} and β_{2} accurately for r . . . and (ii) determine a table of frequencies or areas (integral curve) of the r distribution curve for values of ρ and n which do not provide approximately Gaussian results. Of course you may be able to dispose of my practical difficulties, which do not touch your beautiful theory.^{26}
Pearson then proposed a specific program of tabulation of the ordinates of the frequency curves for r for selected values of ρ and n to be executed by his trained calculators “unless you really want to do them yourself.” The letter in which Fisher is said to have “welcomed the suggestion” that the computations of these ordinates be carried out at the Galton laboratory “seems to have been lost through the disturbance of papers during the 1939–45 war.”^{27} On the other hand, Fisher seems to have agreed (in this missing, or some other, letter) to undertake the evaluation of the integral of the distribution of r for a selection of values of ρ and n. In a May 1916 letter to Pearson he comments, “I have been very slow about my paper on the probability integral.”
When not engaged in war work, Pearson and several members of his staff took on the onerous task of developing reliable formulas for the moments of the distribution of r and calculating tables of its ordinates for ρ from 0.0 to 0.9 and selected values of n. In May 1916, Pearson wrote to Fisher: “. . . the whole of the correlation business has come out quite excellently. . . . By [n = ] 25 my curves [curves of the Pearson system] give the frequency very satisfactorily, but even when n = 400, for high values of ρ the normal curve is really not good enough. . . .”^{28} It is quite clear from this correspondence between Pearson and Fisher during 1914–1916 that the relationship was entirely friendly, and the implication in some accounts of Fisher’s life and work^{29} that this venture was carried out without his knowledge is far from correct.
The results of this joint effort of Pearson and his staff were published as “. . . A Cooperative Study” in the May 1917 issue of Biometrika. Included were tables of ordinates of the distribution of r for ρ = 0.0(0.1)0.9 and n = 3(1)25, 50, 100, 400; values of β_{1}(r) and β_{2}(r) for the same ρ when n = 3, 4, 25, 50, 100, 400; and of the normal approximation to the ordinates for n = 100, ρ = 0.9, and n = 400, ρ = 0.7(0.1)0.9. There were also photographs of seven cardboard models showing, for example, the changes in the distribution of r from U-shaped through J-shaped to skew “cocked hat” forms with increasing sample size for n = 2(1)25 for ρ = 0.6, 0.8, and illustrating the rate of deviation from normality and increasing skewness with increase of ρ from 0.0 to 0.9 in samples of 25 and 50. This publication represented a truly monumental undertaking. Unfortunately, it had little long-range impact on practical correlation analysis, and it contained material in the section “On the Determination of the ‘Most Likely’ Value of the Correlation in the Sampled Population” that contributed to the widening of the rift that was beginning to develop between Pearson and Fisher.
In his 1915 paper Fisher derived (pp. 520–521), from his general expression for the sampling distribution of r in samples of size n from a bivariate normal population, a two-term approximation,

ρ = r(1 – (1 – r^{2})/2n),
to the “relation between an observed correlation of the sample and the most probable value of the correlation of the whole population” [emphasis added]. He referred to his 1912 paper “On an Absolute Criterion for Fitting Frequency Curves” for justification of this procedure.^{30} Inasmuch as Pearson had shown in his 1896 memoir that an observed sample from a bivariate normal population is “the most probable” when ρ = r (μ_{x} = m_{x}, σ_{x} = S_{x}, μ_{y} = m_{y}, and σ_{y} = S_{y}), Fisher’s proposed adjustment must have been puzzling to him. The result Fisher obtained is the same as what would be obtained, via the sampling distribution of r, by the method of inverse probability, using Bayes’s theorem and an assumed uniform a priori distribution of ρ from –1 to +1. This, and Fisher’s use of the expression “most probable value,” evidently led Pearson, who presumably drafted the text of the “Cooperative Study,”^{31} to state mistakenly (pp. 352, 353) that Fisher had assumed such a uniform a priori distribution in deriving his result. Pearson may have been misled also by a “draft of a Note”^{32} that he had received from Fisher in mid-1916, commenting on a paper by Kirstine Smith that had appeared in the May 1916 issue of Biometrika, in which Fisher had written: “There is nothing at all ‘arbitrary’ in the use of the method of moments for the normal curve; as I have shown elsewhere it flows directly from the absolute criterion (Σ log f a maximum) derived from the Principle of Inverse Probability.”
Not realizing that Fisher had not only not assumed a uniform a priori distribution of ρ but had also considered his procedure (which he later termed the method of “maximum likelihood”) to be completely distinct from “inverse probability” via Bayes’s theorem with an assumed a priori distribution, Pearson proceeded to devote over a page of the “Study” to pointing out the absurdity of such an “equal distribution of ignorance” assumption when estimating ρ from an observed r. Several additional pages contain a detailed consideration of alternative forms for the a priori distribution of ρ, showing that with large samples the assumed distribution had little effect on the end result but in small samples could dominate the sample evidence, from which he concluded that “in problems like the present indiscriminate use of Bayes’ Theorem is to be deprecated” (p. 359). All of this amounted to flogging a dead horse, so to speak, because Fisher was as fully opposed as Pearson to using Bayes’s theorem in such problems. Unfortunately, Fisher probably was totally unaware of this offending section before proofs became available in 1917. Papers such as the “Study” were not readily typed in those days, so that there would have been only a single manuscript of the text and tables prior to typesetting. Had Fisher, who was then teaching mathematics and physics in English public schools, been in closer touch with Pearson, these misunderstandings might have been resolved before publication of the offending passages.
In August 1920 Fisher sent Pearson a copy of his manuscript “On the ‘Probable Error’ of a Coefficient of Correlation Deduced From a Small Sample,” in which he reexamined in detail the tanh^{–1} r transformation and, denoting the transformed variable by z and the corresponding transformation of ρ by ζ, showed that z can be taken to be approximately normally distributed about a mean of ζ with a standard deviation equal to 1/√(n – 3), the normal approximation being extraordinarily good even in very small samples—of the order of n = 10. This transformation thus made it possible to answer questions of the types that Pearson had raised without recourse to tables of the integral of the distribution of r, and obviated the immediate need for the preparation of such tables. (It was not until 1931 that Pearson suggested to Florence N. David the computation of tables of the integral. Values of the integral obtained by quadrature of the ordinates given in the “Cooperative Study” were completed in 1934. Additional ordinates and values of the integral were calculated to facilitate interpolations. These improved tables, together with four charts for obtaining confidence limits for ρ given r, were published in 1938.^{33})
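The practical gain is easy to exhibit: since z = tanh^{–1} r is nearly normal about ζ = tanh^{–1} ρ with standard deviation 1/√(n – 3), approximate confidence limits for ρ follow in a few lines (a conventional 95 percent interval; the function name and the numbers used are illustrative):

```python
import math

def fisher_z_interval(r, n, z_crit=1.96):
    # Fisher's z = atanh(r) is nearly normal with mean zeta = atanh(rho)
    # and standard deviation 1/sqrt(n - 3); transform back with tanh.
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_z_interval(0.5, 28)   # r = .5 observed in a sample of 28
```

Note how asymmetric the resulting interval for ρ is about r, exactly the skewness that the tabulated ordinates of the r distribution had made so laborious to handle directly.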
In his discussion of applications, Fisher took pains to point out that the formula he had given in his 1915 paper for what he then “termed the ‘most likely value,’ which [he] now, for greater precision, term[ed] the ‘optimum’ value of ρ, for a given observed r” involved in its derivation “no assumption whatsoever as to the probable distribution of ρ,” being merely the value of ρ for which the observed r occurs with greatest frequency. He also noted that one is led to exactly the same expression for the optimum value of ρ in terms of an observed r if one seeks the optimum through the z distribution rather than the r distribution, and he commented that the derivation of this optimum cannot, therefore, be inferred to depend upon an assumed uniform prior distribution of ζ or upon an assumed uniform prior distribution of ρ, since these two assumptions are mutually inconsistent. Then, “though . . . reluctant to criticize the distinguished statisticians who put their names to the Cooperative Study,” Fisher went on to criticize with a tone of ridicule some of the illustrative examples of the application of Bayes’s theorem considered on pp. 357–358 of the “Study,” without noting the authors’ conclusions from these, and other examples considered, that such “use of Bayes’ Theorem is to be deprecated” (p. 359) and when applied to “values observed in a small sample may lead to results very wide from the truth” (p. 360). Fisher concluded his paper with a “Note on the Confusion Between Bayes’ Rule and My Method of the Evaluation of the Optimum.”
Pearson returned the manuscript to Fisher with the following comment:
. . . I fear if I could give full attention to your paper, which I cannot at the present time, I should be unlikely to publish it in its present form, or without a reply to your criticisms which would involve also a criticism of your work of 1912. I would prefer you publish elsewhere. Under present printing and financial conditions, I am regretfully compelled to exclude all that I think erroneous on my own judgment, because I cannot afford controversy.^{34}
Fisher therefore submitted his paper to Metron, a new journal, which published the work in its first volume.^{35}
The cross criticism, at cross purposes, conducted by Pearson and Fisher over the use of Bayes’s theorem in estimating ρ from r was multiply unfortunate: it was unnecessary and ill-timed; it might have been avoided; and it fostered ill will and fueled the innately contentious temperament of both parties at an early stage of their argument over the relative merits of the method of moments and method of maximum likelihood. This argument was started by Fisher’s “Draft of a Note,” which Pearson took to be a criticism not only of the minimum chi-square technique that Kirstine Smith had propounded but also of his method of moments, and refused to publish in both original (1916) and revised (1918) forms on the grounds of its being controversial and liable to provoke a quarrel among contributors.^{36} The argument, which grew into a raging controversy, was fed by later developments on various fronts and continued to the end of Pearson’s life, and beyond.^{37}
In 1922 Fisher found the sampling distribution of η^{2} in random samples of any size from a bivariate normal population in which the correlation is zero (ρ = 0), and later (1928) derived the distribution of η^{2} in samples of any size when the x values are fixed and the y values are normally distributed with a common standard deviation σ about array means μ_{y·x} which may be different for different values of x, thereby giving rise to a nonzero value of the “population” correlation ratio. In particular, it was found that for any value of the population correlation ratio different from zero, the sampling distribution of η tends in sufficiently large samples to be approximately normal about the population value with standard error given by Pearson’s formula; but when the correlation ratio in the population is exactly zero—that is, when sampling from uncorrelated material—the sampling distribution of η does not tend to normality with increasing sample size for any finite number of arrays. This led to the formulation of new procedures, since become standard, for judging the significance of an observed value of η and for using η^{2} – r^{2} as a test for departure from linearity.
In 1926 Pearson showed that the distribution of sample regression coefficients, that is, of the slopes of the sample regressions of y on x and of x on y, respectively, is his Type VII distribution symmetrical about the corresponding population regression coefficient. It tends to normality much more rapidly than the distribution of r with increasing sample size, so that the use of Pearson’s expression for the standard error of regression coefficients is valid for lower values of n than in the case of r. It is, however, not of much use in small samples, since it depends upon the unknown values of the population standard deviations and correlation, σ_{y}, σ_{x}, and ρ_{xy}. Four years earlier, however, in response to repeated queries from “Student” in correspondence, Fisher had succeeded in showing that in random samples of any size from a general bivariate normal population, the sampling distribution of the ratio (b – β)/S_{b–β}, where β is the population regression coefficient corresponding to the sample coefficient b, and S_{b–β} is a particular sample estimate of the standard error of their difference, does not depend upon any of the population parameters other than β and is given by a special form of Pearson’s Type VII curve now known as “Student’s” t-distribution for n – 2 degrees of freedom. Consequently, it is this latter distribution, free of “nuisance parameters,” that is customarily employed today in making inferences about a population regression coefficient from an observed value of the corresponding sample coefficient.
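The latter procedure can be sketched as follows (hypothetical data; the resulting statistic is referred to “Student’s” t-distribution with n – 2 degrees of freedom):

```python
import math

def slope_t(pairs, beta0=0.0):
    # t-statistic, on n - 2 degrees of freedom, for testing that the slope of
    # the regression of y on x equals beta0.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    b = sxy / sxx                      # sample regression coefficient
    a = my - b * mx
    rss = sum((y - a - b * x) ** 2 for x, y in pairs)
    s2 = rss / (n - 2)                 # residual variance estimate
    se_b = math.sqrt(s2 / sxx)         # estimated standard error of b
    return (b - beta0) / se_b

t_stat = slope_t([(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8), (5, 5.1)])
```

The point of the construction is precisely the one noted in the text: the statistic involves no unknown population standard deviations or correlation, only the sample itself and the hypothesized β.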
Although the final steps of correlation and regression analyses today differ from those originally advanced by Pearson and his coworkers, there can be no question that today’s procedures were built upon those earlier ones; and correlation and regression analysis is still very much indebted to those highly original and very difficult steps into the unknown taken by Pearson at the turn of the century.
Derivation of formulas for standard errors in large samples of functions of sample values used to estimate parameters of the population sampled did not, of course, originate with Pearson. It dates from Gauss’s derivation (1816) of the standard errors in large samples of the respective functions of successive sample absolute moments that might be used as estimators of the population standard deviation. Another early contribution was Gauss’s derivation (1823) of a formula comparable with that derived by Pearson in 1903 for the standard error in large samples of the sample standard deviation as estimator of the standard deviation of an arbitrary population having finite centroidal moments of fourth order or higher. Subsequent writers treated these matters somewhat more fully and made a number of minor extensions, but the first general approach to the problem of standard errors and intercorrelations in large samples of sample functions used to estimate values of population parameters is that given in “On the Probable Errors of Frequency Constants. . .,” written by Pearson and his young French mathematical demonstrator, L. N. G. Filon, and read to the Royal Society in November 1897. 
In section II there is the first derivation of the now familiar expressions for the asymptotic variances and covariances of sample estimators of a group of population parameters in terms of mathematical expectations of second derivatives of the logarithm of what is now called the “likelihood function,” but without recognition of their applicability only to maximum likelihood estimators, a limitation first pointed out by Edgeworth (1908).^{38} Today these formulas are usually associated with Fisher’s paper “On the Mathematical Foundations of Theoretical Statistics” (1922)—and perhaps rightly so, because, although the expressions derived by Pearson and Filon, and by Fisher, are of identical mathematical form, what they meant to Pearson and Filon in 1897 and continued to mean to Pearson may have been quite different from what they meant to Fisher.^{39} (This may have been a major obstacle to their conciliation.)
Specific formulas derived by Pearson and Filon include expressions for the standard error of a coefficient of correlation r; the correlation between the sample means m_{x} and m_{y} of two correlated traits; the correlation between the sample standard deviations, S_{x} and S_{y}; the correlation between a sample coefficient of correlation r and a sample standard deviation s_{x} or s_{y}; the standard errors of regression coefficients, and of partial regression coefficients, for the two- and three-variable cases, respectively; and the correlations between pairs of sample correlation coefficients (r_{12}, r_{13}), (r_{12}, r_{34})—all in the case of large samples from a correlated normal distribution. In the process it was noted that in the case of large samples from a correlated normal distribution, the errors of sample means are uncorrelated with the errors of sample standard deviations and sample correlation coefficients; and that through failure to recognize the existence of correlation between the errors of sample standard deviations and a sample correlation coefficient, the formula given previously for the large-sample standard error of the sample correlation coefficient r was in error, because it was appropriate to the case in which the population standard deviations, σ_{x} and σ_{y}, are known exactly. Large-sample formulas were found also for the standard errors and correlations between the errors of sample estimates of the parameters of Pearson Type I, III, and IV distributions, making this the first comprehensive study of such matters in the case of skew distributions.
Pearson returned to this subject in a series of three editorials in Biometrika, “On the Probable Errors of Frequency Constants,” prepared in response to a need expressed by queries from readers. The first (1903) deals with the standard errors of, and correlations between, (i) cell frequencies in a histogram and (ii) sample centroidal moments, in terms of the centroidal moments of a univariate distribution of general form. Some of the results given are exact and some are limiting values for large samples. In some instances a “probable error” ( = 0.6745 × standard error) is given, but the practice is deprecated: “The adoption of the ‘probable error’ . . . as a measure of . . . exactness must not, however, be taken as equivalent to asserting the validity of the normal law of errors or deviations, but merely as a purely conventional reduction of the standard deviation. It would be equally valid provided it were customary to omit this reduction or indeed to multiply the standard deviation by any other conventional factor” (p. 273).
The extension to samples from a general bivariate distribution was made in “Part II” (1913), reproduced from Pearson’s lecture notes. Formulas were given for the correlation of errors in sample means; the correlation of errors in sample standard deviations; the standard error of the correlation coefficient r (in terms of the population coefficient of correlation ρ and the β_{2}s of the two marginal distributions); the correlation between the random sampling deviations of a sample mean and a sample standard deviation for the same variate; the correlation between the random sampling deviations of the sample mean of one variate and the standard deviation of a correlated variate; the correlation between a sample mean and a sample coefficient of correlation; the correlation between the sampling deviations of a sample standard deviation and a sample coefficient of correlation; and the standard errors of the coefficients of linear regression lines and of the means of arrays. In this paper it is also shown that in the case of all symmetric distributions there is no correlation between the sample mean and the sample standard deviation. “Part III” (1920) deals with the standard errors of, and the correlations between, the sampling variations of the sample median, quartiles, deciles, and other quantiles in random samples from a general univariate distribution. The relative efficiency of estimating the standard deviation of a normal population from the difference between two symmetrical quantiles of a large sample therefrom is discussed, and the “optimum” is found to be the difference between the seventh and ninety-third percentiles.
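Pearson’s “optimum” quantile pair is easy to verify numerically (a sketch; the bisection-based inverse normal CDF is an illustrative stand-in for a proper routine): the interval from the 7th to the 93rd percentile of a normal distribution spans 2Φ^{–1}(0.93) ≈ 2.95 standard deviations.

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):  # inverse CDF by bisection (illustrative)
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_from_quantiles(q7, q93):
    # Estimate a normal sigma from the observed 7th and 93rd percentiles:
    # that interval spans 2 * Phi^{-1}(0.93) standard deviations.
    return (q93 - q7) / (2.0 * Phi_inv(0.93))
```

Fed the exact normal percentiles ±1.4758, the estimator returns 1, as it should for a standard normal population.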
The results given in these three editorials are derived by a procedure considerably more elementary than that employed in the Pearson-Filon paper. Some of the results given are exact; others are limiting values for large samples; and many have become more or less standard in statistical circles.
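One of these now-standard results is easy to illustrate: for samples from a bivariate normal distribution, the large-sample standard error of r reduces to (1 – ρ^{2})/√n. The simulation below is a sketch under that assumption; the sample size, ρ, and replication count are arbitrary choices:

```python
# Simulation sketch: for bivariate-normal samples the large-sample standard
# error of the correlation coefficient r is (1 - rho^2)/sqrt(n).
import math
import random

def sample_r(n, rho, rng):
    """Correlation coefficient of one simulated bivariate-normal sample."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0.0, 1.0)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(0)
n, rho, reps = 400, 0.6, 2000
rs = [sample_r(n, rho, rng) for _ in range(reps)]
mean_r = sum(rs) / reps
sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in rs) / reps)
theory = (1 - rho ** 2) / math.sqrt(n)
print(round(sd_r, 3), round(theory, 3))    # the two should agree closely
```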
The July 1900 issue of Philosophical Magazine contained Pearson’s paper in which he introduced the criterion

X^{2} = Σ (f_{i} – F_{i})^{2}/F_{i}
as a measure of the agreement between observation and hypothesis overall, to be used as a basis for determining the probability with which the differences f_{i} – F_{i} (i = 1, 2, . . ., k) collectively might be due solely to the unavoidable fluctuations of random sampling, where f_{i} denotes the observed frequency (the observed number of observations falling) in the ith of k mutually exclusive categories, and F_{i} is the corresponding theoretical frequency (the number expected in the ith category in accordance with some particular true or hypothetical frequency distribution), with Σf_{i} = ΣF_{i} = N, the total number of independent observations involved. To this end he derived the sampling distribution of X^{2} in large samples as a function of k, finding it to be a specialized form of the Pearson Type III distribution now known as the “X^{2} distribution for k – 1 degrees of freedom,” the k – 1 being explained by the remark (in our notation) “only k – 1 of the k errors are variables; the kth is determined when the first k – 1 are known”; he also gave a small table of the integral of the distribution for X^{2} from 1 to 70 and k from 3 to 20. Of Pearson’s many contributions to statistical theory and practice, this X^{2} test for goodness of fit is certainly one of his greatest; and in its original and extended forms it has remained one of the most useful of all statistical tests.
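A minimal sketch of the criterion, with hypothetical die-roll counts and a fully specified (fair-die) hypothesis; the tail probability P is computed from the X^{2} distribution for k – 1 degrees of freedom via the incomplete gamma function:

```python
# Pearson's X^2 goodness-of-fit criterion, stdlib only. The die-roll counts
# are hypothetical; the hypothesis (a fair die) is completely specified a
# priori, so P comes from the X^2 distribution for k - 1 degrees of freedom.
import math

def gammainc_q(a, x, eps=1e-12):
    """Regularized upper incomplete gamma Q(a, x), via the lower series;
    Q(df/2, X2/2) is the tail probability of the X^2 distribution."""
    term = 1.0 / a
    total = term
    n = 0
    while term > eps * total:
        n += 1
        term *= x / (a + n)
        total += term
    lower = total * math.exp(-x + a * math.log(x) - math.lgamma(a))
    return 1.0 - lower

observed = [18, 22, 16, 25, 19, 20]            # f_i, N = 120 (hypothetical)
expected = [sum(observed) / 6] * 6             # F_i under the fair-die hypothesis
x2 = sum((f - F) ** 2 / F for f, F in zip(observed, expected))
p = gammainc_q((len(observed) - 1) / 2, x2 / 2)   # k - 1 = 5 degrees of freedom
print(round(x2, 2), round(p, 3))
```

A large value of X^{2} (small P) signals disagreement between observation and hypothesis; here the counts are close to expectation and P is unremarkable.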
Four years later, in On the Theory of Contingency and Its Relation to Association and Normal Correlation, Pearson extended the application of his X^{2} criterion to the analysis of the cell frequencies in a “contingency table” of r rows and c columns resulting from the partitioning of a sample of N observations into r distinct classes in terms of some particular characteristic, and into c distinct classes with respect to another characteristic; showed how the X^{2} criterion could be used to test the independence of the two classifications; termed φ^{2} = X^{2}/N the “mean square contingency” and

C = √[φ^{2}/(1 + φ^{2})]
the coefficient of mean square contingency; showed that, if a large sample from a bivariate normal distribution with correlation coefficient ρ is partitioned into the cells of a contingency table, then C^{2} will tend to approximate ρ^{2} as the number of categories in the table increases, the correct sign of ρ then being determined from the order of the two classifications and the pattern of the cell frequencies within the r × c table; and that, when r = c = 2, φ^{2} is equal to the square of the product-moment coefficient of correlation computed from the observed frequencies in the fourfold table with purely arbitrary values (for instance, 0, 1) assigned to the two row categories and to the two column categories.
Pearson made much of the fact that the value of X^{2} and of C is unaffected by reordering either or both of the marginal categories, so that X^{2} provides a means of testing the independence of the two characteristics (such as eye color and occupation) in terms of which the marginal classes are defined without, and independently of, any additional assumptions as to the nature of the association, if any. In view of the above-mentioned relation of C to ρ under the indicated circumstances, C would seem to be a generally useful measure of the degree or intensity of the association when a large value of X^{2} leads to rejection of the hypothesis of independence; and Pearson proposed its use for this purpose. It is, however, not a very satisfactory measure of association: for example, the values of C obtained from an r × c classification and an r′ × c′ classification of the same data will usually be different. Also, some fundamental objections have been raised to the use of C, or any other function of X^{2}, as a measure of association. Nonetheless, C played an important role in its day in the analysis of data classified into r × c tables when the categories for both characteristics can be arranged in meaningful orders. If the categories for either characteristic cannot be put into a meaningful order, then there can be no satisfactory measure of the intensity of the association; and a large value of X^{2} may simply be an indication of some fault in the sampling procedure.
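The quantities φ^{2} and C can be illustrated on a small table; the 3 × 3 cell counts below are invented for the example:

```python
# Sketch of phi^2 = X^2/N and C = sqrt(phi^2/(1 + phi^2)) for a contingency
# table; the 3 x 3 cell counts are hypothetical.
import math

table = [[30, 12, 8],
         [14, 25, 11],
         [6, 13, 31]]
N = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

x2 = 0.0
for i, row in enumerate(table):
    for j, f in enumerate(row):
        F = row_tot[i] * col_tot[j] / N    # expected frequency under independence
        x2 += (f - F) ** 2 / F

phi2 = x2 / N
C = math.sqrt(phi2 / (1 + phi2))
# Note: C is unchanged if rows or columns are permuted, since X^2 is.
print(round(C, 3))
```

This also exhibits the shortcoming noted above: C is bounded below 1 and its maximum attainable value depends on r and c, which is one reason it compares poorly across differently sized classifications.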
In a 1911 Biometrika paper, Pearson showed how his X^{2} criterion could be extended to provide a test of the hypothesis that “two independent distributions of frequency [arrayed in a 2 × c table] are really samples from the same population.” The theoretical proportions in the respective cells implied by the presumed common population being unknown, they are estimated from the corresponding proportions of the two samples combined. Illustrative examples show that to find P, the probability of a larger value of X^{2}, the “Tables for Testing Goodness of Fit” are to be entered with n′ = c, signifying that there are c – 1 “independent variables” (“degrees of freedom”) involved, which agrees with present practice. In a Biometrika paper, “On the General Theory of Multiple Contingency . . .” (1916), Pearson gave a new derivation of the X^{2} distribution, as the limiting distribution of the class frequencies of a multinomial distribution as the sample size N → ∞; pointed out (pp. 153–155) that if q linear restraints are imposed on the n′ cell frequencies in addition to the usual Σf_{i} = N, then to find P one must enter the table with n′ – q; and extended the X^{2} technique to testing whether the frequencies arrayed in two (2 × c) contingency tables can be considered random samples from the same bivariate population. In this application of “partial X^{2},” Pearson considers the c column totals of each table to be fixed, thereby imposing 2c linear restraints on the 4c cell frequencies involved. The theoretical proportion p_{1j} in the presumed common population, corresponding to the cell in the top row and jth column of either table, being unknown, it is taken as equal to the corresponding proportion in this cell of the two tables combined (j = 1, 2, . . ., c), thereby imposing c additional linear restraints (p_{2j} is, of course, simply 1 – p_{1j} [j = 1, 2, . . ., c]).
Hence there remain only 4c – 2c – c = c “independent variables”; and Pearson notes that the X^{2} tables are to be entered with n′ = c + 1. These two papers clearly contain the basic elements of a large part of present-day X^{2} technique.
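The 1911 two-sample procedure described above can be sketched with invented counts: expected cell frequencies come from the pooled proportions, and the X^{2} table is entered with n′ = c, that is, c – 1 degrees of freedom:

```python
# Sketch of the 1911 test that two observed frequency distributions (a 2 x c
# table) are samples from one population. The common cell proportions are
# estimated from the pooled samples; the degrees of freedom are c - 1.
# All counts are hypothetical.
sample1 = [22, 35, 28, 15]
sample2 = [30, 28, 25, 17]
n1, n2 = sum(sample1), sum(sample2)
N = n1 + n2

x2 = 0.0
for f1, f2 in zip(sample1, sample2):
    p = (f1 + f2) / N                  # pooled estimate of the common proportion
    for f, n in ((f1, n1), (f2, n2)):
        F = n * p                      # expected frequency in this cell
        x2 += (f - F) ** 2 / F

df = len(sample1) - 1                  # c - 1 "independent variables"
print(df, round(x2, 3))
```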
In section 5 of his 1900 paper on X^{2} Pearson pointed out that one must distinguish between a value of X^{2} calculated from theoretical frequencies F_{i} derived from a theoretical probability distribution completely specified a priori and values of, say, X̃^{2} calculated from theoretical frequencies F̃_{i} derived from a theoretical probability distribution of specified form but with the values of one or more of its parameters left unspecified, so that “best values” for these had to be determined from the data in hand. It was clear that X̃^{2} could never exceed the “true” X^{2}. From a brief, cursory analysis Pearson concluded that the difference X^{2} – X̃^{2} was likely to be negligible. Evidently he did not realize that the difference might depend on the number of constants the values of which were determined from the sample and that, if k constants were fitted, X̃^{2} might be zero.
Ultimately Fisher showed in a series of three papers (1922, 1923, 1924) that when the unknown parameters of the population sampled are efficiently estimated from the data in such a manner as to impose c additional linear restraints on the t cell frequencies, then, when the total number of observations N is large, X̃^{2} will be distributed in accordance with a X^{2} distribution for (t – 1 – c) degrees of freedom. Pearson had recognized this in the cases of the particular problems discussed in his 1911 and 1916 papers considered above; but he never accepted Fisher’s modification of the value of n′ with which the “Tables for Testing Goodness of Fit” were to be entered in the original 1900 problem of testing the agreement of an observed and a theoretical frequency distribution when some parameters of the latter were estimated from the observed data, or in the 1904 problem of testing the independence of the two classifications of an r × c contingency table.
During Pearson’s highly innovative decade and a half, 1891–1906, in addition to laying the foundations of the major contributions to statistical theory and practice reviewed above, he also initiated a number of other topics that later blossomed into important areas of statistics and other disciplines. Brief mention was made above of “On the Mathematical Theory of Errors of Judgment” (1902). This investigation was founded on two series of experiments in which three observers each individually (a) estimated the midpoints of segments of straight lines; and (b) estimated the position on a scale of a bright line moving slowly downward at the moment when a bell sounded. The study revealed that the errors of different observers estimating or measuring the same series of quantities are in general correlated; that the frequency distributions of such errors of estimation or measurement certainly are not always normal; and that the variation over a period of time of the “personal equation” (the pattern of the systematic error or bias of an individual observer) is not explainable solely by the fluctuations of random sampling. The investigation stemmed from Pearson’s observation that when three observers individually estimate or measure a series of physical quantities, the actual magnitudes of which may or may not be known or determinable, then, on the assumption of independence of the judgments of the respective observers, it is possible to determine the standard deviations of the distributions of measurement errors of each of the three observers from the observed standard deviations of the differences between their respective measurements of the same quantities. The investigation reported in this memoir is thus the forerunner of the work carried out by Frank E. Grubbs during the 1940’s on methods for determining the individual precisions of two, three, four, or more measuring instruments in the presence of product variability.
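The three-observer idea can be sketched as follows. The identity used (a Grubbs-type estimator on variances of pairwise differences) and all numerical values are illustrative assumptions, not formulas quoted from Pearson's memoir:

```python
# Sketch of the three-observer idea: if observers A, B, C make independent
# errors, then var(A's error) = [var(A-B) + var(A-C) - var(B-C)] / 2, and
# similarly for B and C, since the unknown true values cancel in each
# difference. Data are simulated with known error SDs for checking.
import math
import random

rng = random.Random(4)
true_values = [rng.uniform(10, 20) for _ in range(5000)]
sd_a, sd_b, sd_c = 0.3, 0.5, 0.8       # assumed observer error SDs
A = [v + rng.gauss(0, sd_a) for v in true_values]
B = [v + rng.gauss(0, sd_b) for v in true_values]
C = [v + rng.gauss(0, sd_c) for v in true_values]

def var_diff(X, Y):
    """Variance of the paired differences X - Y."""
    d = [x - y for x, y in zip(X, Y)]
    m = sum(d) / len(d)
    return sum((e - m) ** 2 for e in d) / len(d)

est_var_a = (var_diff(A, B) + var_diff(A, C) - var_diff(B, C)) / 2
print(round(math.sqrt(est_var_a), 2))  # should be near the assumed 0.3
```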
A second example is provided by Pearson’s “Note on Francis Galton’s Problem” (August 1902), in which he derived the general expression for the mean value of the difference between the rth and the (r + 1)th individuals ranked in order of size in random samples of n from any continuous distribution. This is one of the earliest general results in the sampling theory of order statistics, a very active subfield of statistics since the 1930’s. Pearson later gave general expressions for the variances of, and correlations between, such intervals in random samples from any continuous distribution in a joint paper with his second wife, “On the Mean . . . and Variance of a Ranked Individual, and . . . of the Intervals Between Ranked Individuals, Part I . . .” (1931).
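Pearson's general expression is not reproduced here, but the quantity involved is easy to simulate. For the uniform(0, 1) distribution the mean interval between the rth and (r + 1)th order statistics is exactly 1/(n + 1) for every r, which makes a convenient check (n, r, and the replication count below are arbitrary):

```python
# Simulation sketch of the interval in Galton's problem: the mean difference
# between the rth and (r + 1)th smallest of n observations, here for
# uniform(0, 1) samples, where the exact answer is 1/(n + 1).
import random

rng = random.Random(3)
n, r, reps = 10, 4, 50000
total = 0.0
for _ in range(reps):
    xs = sorted(rng.random() for _ in range(n))
    total += xs[r] - xs[r - 1]         # xs[r - 1] is the rth smallest
mean_interval = total / reps
print(round(mean_interval, 4))         # near 1/11 = 0.0909
```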
A third example is the theory of “random walk,” a term Pearson coined in a brief letter, “The Problem of the Random Walk,” published in the 27 July 1905 issue of Nature, in which he asked for information on the probability distribution of the walker’s distance from the origin after n steps. Lord Rayleigh replied in the issue of 3 August, pointing out that the problem is formally the same as that of “the composition of n isoperiodic vibrations of unit amplitude and of phases distributed at random” (p. 318), which he had considered as early as 1880, and indicated the asymptotic solution as n → ∞. The general solution for finite n was published by J. C. Kluyver in Dutch later the same year and, among other applications, provides the basis for a test of whether a set of orientation or directional data is “random” or tends to exhibit a “preferred direction.” With John Blakeman, Pearson published A Mathematical Theory of Random Migration (1906), in which various theoretical forms of distribution were derived that would result from random migration from a point of origin under certain ideal conditions, and solutions to a number of subsidiary problems were given, results that have found various other applications. Today “random walks” of various kinds, with and without reflecting or absorbing barriers, play important roles not only in the theory of Brownian motion but also in the treatment of random phenomena in astronomy, biology, physics, and communications engineering; in statistics, they are used in the theory of sequential estimation and of sequential tests of statistical hypotheses.
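A simulation sketch of Pearson's question (step count and replication count arbitrary): for n unit steps in uniformly random directions, the squared distance from the origin has mean exactly n, since the cross terms between steps average to zero; this is consistent with Rayleigh's asymptotic solution:

```python
# Simulation of Pearson's random walk: distance R from the origin after n
# unit steps in uniformly random directions; E[R^2] = n exactly.
import math
import random

def walk_distance(n, rng):
    x = y = 0.0
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)   # random step direction
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

rng = random.Random(1)
n, reps = 100, 2000
mean_r2 = sum(walk_distance(n, rng) ** 2 for _ in range(reps)) / reps
print(round(mean_r2 / n, 2))           # should be near 1
```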
Pearson’s involvement in heredity and evolution dates from his first fundamental paper on correlation and regression (1896), in which, to illustrate the value of these new mathematical tools in attacking problems of heredity and evolution, he included evaluations of partial regressions of offspring on each parent for sets of data from Galton’s Record of Family Faculties (London, 1884) and considerably extended Galton’s collateral studies of heredity by considering types of selection, assortative mating, and “panmixia” (suspension of selection and subsequent free interbreeding). Galton’s formulation, in Natural Inheritance (1889), of his law of ancestral heredity was somewhat ambiguous and imprecise because of his failure to take into account the additional mathematical complexity involved in the joint consideration of more than two mutually correlated characteristics. Pearson supposed him to mean (p. 303) that the coefficients of correlation between offspring and parent, grandparent, and great-grandparent, . . . were to be taken as r, r^{2}, r^{3}, . . . . This led him to the paradoxical conclusion that “a knowledge of the ancestry beyond the parents in no way alters our judgment as to the size of organ or degree of characteristic probable in the offspring, nor its variability” (p. 306), a conclusion that he said in a footnote “seems especially noteworthy” inasmuch as it is quite contrary to what “it would seem natural to suppose.”
In “On the Reconstruction of the Stature of Prehistoric Races” (1898), Pearson used multiple regression techniques to predict (“reconstruct”) average measurements of extinct races from the sizes of existing bones and known correlations among bone lengths in an extant race, as a means of testing the accuracy of predictions in evolutionary problems in the light of certain evolutionary theories.
Meanwhile, Galton had formulated (1897) his “law” more precisely. After some correspondence Pearson, in “On the Law of Ancestral Heredity” (1898), subtitled “A New Year’s Greeting to Francis Galton, January 1, 1898,” expressed what he christened “Galton’s Law of Ancestral Heredity” in the form of a multiple regression equation of offspring on midparental ancestry,

x_{0} = (1/2)(σ_{0}/σ_{1})x_{1} + (1/4)(σ_{0}/σ_{2})x_{2} + (1/8)(σ_{0}/σ_{3})x_{3} + . . .,
where x_{0} is the predicted deviation of an individual offspring from the mean of the offspring generation, x_{1} the deviation of the offspring’s “midparent” from the mean of the parental generation, x_{2} the deviation of the offspring’s “midgrandparent” from the mean of the grandparental generation, and so on, and σ_{0}, σ_{1}, . . . are the standard deviations of the distributions of individuals in the respective generations. In order that this formulation of Galton’s law be unambiguous, it was necessary to have a precise definition of “sth midparent.” The definition that Pearson adopted “with reservations” was “[If] a father is a first parent, a grandfather a second parent, a greatgrandfather a third parent, and so on, [then] the mid sth parent or the sth midparent is derived from [is the mean of] all 2^{s} individual sth parents” (footnote, p. 387).
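Under Pearson's regression formulation the prediction is simple arithmetic: Galton's geometric coefficients 1/2, 1/4, 1/8, . . . are applied to the mid-ancestral deviations, each rescaled by the ratio of generation standard deviations. All numerical values below are hypothetical:

```python
# Arithmetic sketch of a prediction under the ancestral-heredity regression:
# x_0 = sum over s of (1/2)^s * (sigma_0 / sigma_s) * x_s. All values are
# invented for illustration.
ancestral_dev = [2.0, 1.1, 0.6]        # x_1, x_2, x_3: midparent, midgrandparent, ...
sigma = [2.5, 2.5, 2.6, 2.7]           # sigma_0 .. sigma_3 for the generations

x0 = sum((0.5 ** (s + 1)) * (sigma[0] / sigma[s + 1]) * ancestral_dev[s]
         for s in range(len(ancestral_dev)))
print(round(x0, 3))
```

Substituting γβ, γβ^{2}, γβ^{3}, . . . for the (1/2)^{s} coefficients gives Pearson's generalized form mentioned below.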
From this formulation Pearson deduced theoretical values for regression and correlation coefficients between various kin, tested Galton’s stature data against these expectations, and suggested generalizing Galton’s law by substituting γβ, γβ^{2}, γβ^{3}, . . . for Galton’s geometric series coefficients 1/2, 1/4, 1/8, . . . to allow “greater scope for variety of inheritance in different species” (p. 403). In the concluding section Pearson claims: “If either [Galton’s Law], or its suggested modification be substantially correct, they embrace the whole theory of heredity. They bring into one simple statement an immense range of facts, thus fulfilling the fundamental purpose of a great law of nature” (p. 411). After noting some difficulties that would have to be met and stating, “We must wait at present for further determinations of hereditary influence, before the actual degree of approximation between law and nature can be appreciated,” he concluded with the sweeping statement: “At present I would merely state my opinion that, with all due reservations it seems to me that . . . it is highly probable that [the law of ancestral heredity] is the simple descriptive statement which brings into a single focus all the complex lines of hereditary influence. If Darwinian evolution be natural selection combined with heredity, then the single statement which embraces the whole field of heredity must prove almost as epoch-making to the biologist as the law of gravitation to the astronomer” (p. 412).
These claims were obviously too sweeping. Neither the less nor the more general form of the law was founded on any clear conception of the mechanism of heredity. Also, most unfortunately, some of the wording employed, for instance, “I shall now proceed to determine . . . the correlation between an individual and any sth parent from a knowledge of the regression between the individual and his mid-sth parent” (p. 391), tended to give the erroneous idea that the law expressed a relation between a particular individual and his sth parents, and thus to mislead biologists of the period, who had not become fully conscious that regression equations merely expressed relationships that held on the average between the generic types of “individuals” involved, and not between particular individuals of those types.
During the summer vacations of 1899 and 1900 Pearson, with the aid of many willing friends and colleagues, collected material to test a novel theory of “homotyposis,” which, if correct, would imply that the correlation between offspring of the same parents should on the average be equal to the correlation between undifferentiated like organs of an individual. The volume of data collected and reduced was far greater than Pearson had previously attempted. The result was a joint memoir by Pearson and several members of his staff, “On the Principle of Homotyposis and Its Relation to Heredity . . . Part I. Homotyposis in the Vegetable Kingdom,” which was “received” by the Royal Society on 6 October 1900. William Bateson, biologist and pioneer in genetics, who had just become a convert to Mendel’s theory, was one of those chosen to referee the memoir, which was “read” (presumably only the five-page abstract,^{40} and certainly in highly abridged form) at the meeting of 15 November 1900. In the discussion that followed the presentation, Bateson sharply criticized the paper, its thesis being, in his view, mistaken; and other fellows present added criticism of both its length and its content.
The next day (16 November 1900) Weldon wrote to Pearson: “The contention ‘that numbers mean nothing and do not exist in Nature’ is a very serious thing, which will have to be fought. Most other people have got beyond it, but most biologists have not. Do you think it would be too hopelessly expensive to start a journal of some kind? . . .”^{41} Pearson was enthusiastically in favor of the idea. On 13 December 1900 he wrote to Galton that Bateson’s adverse criticism “did not apply to this memoir only but to all my work. . . . if the R. S. people send my papers to Bateson, one cannot hope to get them printed. It is a practical notice to quit. This notice applies not only to my work, but to most work on similar statistical lines.”^{42} On 29 November Weldon wrote to him: “Get a better title for this would-be journal than I can think of!”^{43} Pearson replied with the suggestion that “the science in future should be called Biometry and its official organ be Biometrika.”^{44}
A circular was sent out during December 1900 to solicit financial support and resulted in a fund sufficient to support the journal for a number of years. Weldon, Pearson, and C. B. Davenport were to be the editors; and Galton agreed to be “consulting editor.” The first issue appeared in October 1901, and the editorial “The Scope of Biometrika” stated:
Biometrika will include (a) memoirs on variation, inheritance, and selection in Animals and Plants, based upon the examination of statistically large numbers of specimens (this will of course include statistical investigations in anthropometry); (b) those developments of statistical theory which are applicable to biological problems; (c) numerical tables and graphical solutions tending to reduce the labour of statistical arithmetic; (d) abstracts of memoirs, dealing with these subjects, which are published elsewhere; and (e) notes on current biometric work and unsolved problems.
In the years that followed, Biometrika became a major medium for the publication of mathematical tables and other aids to statistical analysis and detailed tables of biological data.
The memoir on homotyposis was not published in the Philosophical Transactions until 12 November 1901, and only after a direct appeal by Pearson to the president of the Royal Society on grounds of general principle rather than individual unfairness. Meanwhile, Bateson had prepared detailed adverse criticisms. Under pressure from Bateson, the secretary of the Royal Society put aside protocol and permitted the printing of Bateson’s comments and their issuance to the fellows at the meeting of 14 February 1901—before the full memoir by Pearson and his colleagues was in their hands, and even before its authors had been notified whether it had been accepted for publication. Then, with the approval of the Zoological Committee, Bateson’s full critique was published in the Proceedings of the Royal Society before the memoir criticized had appeared.^{45} One can thus appreciate the basis for the acerbity of Pearson’s rejoinder, which he chose to publish in Biometrika^{46} because he had been “officially informed that [he had] a right to a rejoinder, but only to such a one as will not confer on [his] opponent a right to a further reply!” (footnote, p. 321).
This fracas over the homotyposis memoir was but one manifestation of the division that had developed in the 1890’s between the biometric “school” of Galton, Weldon, and Pearson and certain biologists—notably Bateson—over the nature of evolution. The biometricians held that evolution of new species was the result of gradual accumulation of the effects of small continuous variations. In 1894 Bateson published a book in which he noted that deviations from normal parental characteristics frequently take the form of discontinuous “jumps” of definite measurable magnitude, and held that discontinuous variation of this kind—evidenced by what we today call sports or mutations—is necessary for the evolution of new species.^{47} He was deeply hurt when Weldon took issue with this thesis in an otherwise very favorable review published in Nature (10 May 1894).
When Gregor Mendel’s long-overlooked paper of 1866 was resurrected in 1900 by three Continental botanists, the particulate nature of Mendel’s theory of “dominance” and “segregation” was clearly in keeping with Bateson’s views; and he became a totally committed Mendelist, taking it upon himself to convert all English biologists into disciples of Mendel. Meanwhile, Weldon and Pearson had become deeply committed adherents of Galton’s law of ancestral heredity, to which Bateson was antipathetic. There followed a heated controversy between the “ancestrians,” led by Pearson and Weldon, and the “Mendelians,” led by Bateson. Pearson and Weldon were not, as some supposed, unreceptive to Mendelian ideas but were concerned with the too ready acceptance of Mendelism as a complete gospel without regard to certain incompatibilities they had found between Mendel’s laws of “dominance” and “segregation” and other work. Weldon, the naturalist, regarded Mendelism as an unimportant but inconvenient exception to the ancestral law. Pearson, the applied mathematician and philosopher of science, saw that Mendelism was not incompatible with the ancestral law but in some circumstances could lead directly to it; and he sought to bring all heredity into a single system embodying both Mendelian and ancestrian principles, with the latter dominant. To Bateson, Mendel’s laws were the truth and all else was heresy. The controversy raged on with much mutual incomprehension, and with great bitterness on both sides, until Weldon’s death in April 1906 removed the most committed ancestrian and Bateson’s main target.^{48} Without the help of Weldon’s biologically trained mind, Pearson had neither the inclination nor the necessary training to keep in close touch with the growing complexity of the Mendelian hypothesis, which was coming to depend increasingly on purely biological discoveries for its development; he therefore turned his attention to unfinished business in other areas and to eugenics.
During the succeeding decades Mendelian theory became firmly established—but only after much testing on diverse material, clarification of ideas, explanation of “exceptions,” and tying in with cytological discoveries. Mendel’s laws have been shown to apply to many kinds of characters in almost all organisms, but this has not entirely eliminated “biometrical” methods. Quite the contrary: multiple regression techniques are still needed to cope with the inheritance of quantitative characters that presumably depend upon so many genes that Mendelian theory cannot be brought to bear in practice. For example, coat color of dairy cows depends upon only a few genes and its Mendelian inheritance is readily verified; but the quantitative trait of milk-production capacity is so complex genetically that multiple regression methods are used to predict the average milk-production character of offspring of particular matings, given the relevant ancestral information.
In fact, geneticists today ascribe the reconciliation of the “ancestral” and “Mendelian” positions, and the definitive synthesis of the two theories, to Fisher’s first genetical paper, “The Correlations to be Expected Between Relatives on the Supposition of Mendelian Inheritance” (1918), in which, in response to new data, he improved upon the kinds of models that Pearson, Weldon, and Yule had been considering 10–20 years before, and showed clearly not only that the correlations observed between human relatives could be interpreted on the supposition of Mendelian inheritance, but also that Mendelian inheritance must lead to precisely the kind of correlations observed.
Weldon’s death was not only a tremendous blow to Pearson but also removed a close colleague of high caliber, without whom it was not possible to continue work in biometry along some of the lines that they had developed during the preceding fifteen years. Yet Pearson’s productivity hardly faltered. During his remaining thirty years his articles, editorials, memoirs, and books on or related to biometry and statistics numbered over 300; he also produced one paper in astronomy, four in mechanics, and about seventy published letters, reviews, and prefatory and other notes in scientific publications, the last of which was a letter (1935) on the aims of the founders of Biometrika and the conditions under which the journal had been published.
Following Weldon’s death, Pearson gave increasing attention to eugenics. In 1904 Galton had provided funds for the establishment of a eugenics record office, to be concerned with collecting data for the scientific study of eugenics. Galton kept the office under his control until late in 1906, when, at the age of eighty-four, he turned it over to Pearson. With a change of name to eugenics laboratory, it became a companion to Pearson’s biometric laboratory. It was transferred in 1907 to University College and with a small staff carried out studies of the relative importance of heredity and environment in alcoholism, tuberculosis, insanity, and infant mortality.^{49} The findings were published as Studies in National Deterioration, nos. 1–11 (1906–1924) and in Eugenics Laboratory Memoirs, nos. 1–29 (1907–1935). Thirteen issues of the latter were devoted to “The Treasury of Human Inheritance” (1909–1933), a vast collection of pedigrees forming the basic material for the discussion of the inheritance of abnormalities, disorders, and other traits.
Pearson’s major effort during the period 1906–1914, however, was devoted to developing a postgraduate center in order “to make statistics a branch of applied mathematics with a technique and nomenclature of its own, to train statisticians as men of science . . . and in general to convert statistics in this country from being the playing field of dilettanti and controversialists into a serious branch of science, which no man could attempt to use effectively without adequate training, any more than he could attempt to use the differential calculus, being ignorant of mathematics.”^{50} At the beginning of this period Pearson was not only head of the department of applied mathematics but also in charge of the drawing office for engineering students, giving evening classes in astronomy, directing the biometric and eugenics laboratories, and editing their various publications and Biometrika, a tremendous task for one man. In the summer of 1911, however, he was able to cut back somewhat on these diverse activities by relinquishing the Goldsmid chair of applied mathematics to become the first Galton professor of eugenics and head of a new department of applied statistics in which were incorporated the biometric and eugenics laboratories. But he also assumed a new task about the same time: soon after Galton’s death in 1911, his relatives had asked Pearson to write his biography. The first volume of The Life, Letters and Labours of Francis Galton was published in 1914, the second volume in 1925, and the third volume (in two parts) in 1930. It is an incomparable source of information on Galton, on Pearson himself, and on the early years of biometry. Although the volume of Pearson’s output of purely statistical work was somewhat reduced during these years by the task of writing this biography, it was still immense by ordinary standards.
Pearson was the principal editor of Biometrika from its founding to his death (vols. 1–28, 1901–1936), and for many years he was the sole editor. Under his guidance it became the world’s leading medium of publication of papers on, and mathematical tables relating to, statistical theory and practice. Soon after World War I, during which Pearson’s group was deeply involved in war work, he initiated the series Tracts for Computers, nos. 1–20 (1919–1935), many of which became indispensable to computers of the period. In 1925 he founded Annals of Eugenics and served as editor of the first five volumes (1925–1933). Some of the tables in Tables for Statisticians and Biometricians (pt. I, 1914; pt. II, 1931) appear to be timeless in value; others are no longer used. The Tables of the Incomplete Beta-Function (1934), a compilation prepared under his direction over a period of several decades, remains a monument to him and his coworkers.
In July 1932 Pearson advised the college and university that he would resign from the Galton professorship the following summer. The college decided to divide the department of applied statistics into two independent units, a department of eugenics with which the Galton professorship would be associated, and a new department of statistics. In October 1933 Pearson was established in a room placed at his disposal by the zoology department; his son, Egon, was head of the new department of statistics; and R. A. Fisher was named the second Galton professor of eugenics. Pearson continued to edit Biometrika and had almost seen the final proofs of the first half of volume 28 through the press when he died on 27 April 1936.
NOTES
1. Quoted by E. S. Pearson in Karl Pearson: An Appreciation . . ., p. 4 (Biometrika, 28, 196).
2. Galton discovered the statistical phenomenon of regression around 1875 in the course of experiments with sweet-pea seeds to determine the law of inheritance of size. Using 100 parental seeds of each of 7 different selected sizes, he constructed a two-way plot of the diameters of parental and offspring seeds from each parental class. Galton then noticed that the median diameters of the offspring seeds for the respective parental classes fell nearly on a straight line. Furthermore, the median diameters of offspring from the larger-size parental classes were less than those of the parents; and for the smaller-size parental classes, they were greater than those of the parents, indicating a tendency of the “mean” offspring size to “revert” toward what might be described as the average ancestral type. Not realizing that this phenomenon is a characteristic of any two-way plot, he first termed it “reversion” and, later, “regression.”
Examining these same data further, Galton noticed that the variation of offspring size within the respective parental arrays (as measured by their respective semi-interquartile ranges) was approximately constant and less than the similarly measured variation of the overall offspring population. From this empirical evidence he then inferred the correct relation, variability of offspring family = √(1 − r²) × variability of overall offspring population, which he announced in symbolic form in an 1877 lecture, calling r the “reversion” coefficient.
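Galton’s relation is easy to verify numerically. The sketch below is a hypothetical illustration (using standard deviations rather than Galton’s semi-interquartile ranges, and an arbitrary correlation of 0.6): it simulates parent–offspring pairs from a bivariate normal distribution and compares the spread of offspring within one narrow parental class to the overall offspring spread, which should stand in the ratio √(1 − r²).

```python
import math
import random
import statistics

random.seed(1)
rho = 0.6          # illustrative "reversion" coefficient (assumed, not Galton's)
n = 100_000

# Simulate (parent, offspring) pairs from a standard bivariate normal
# distribution with correlation rho.
parents, offspring = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    parents.append(z1)
    offspring.append(rho * z1 + math.sqrt(1 - rho ** 2) * z2)

# Spread of all offspring vs. spread within one narrow parental class:
# the relation predicts within/overall = sqrt(1 - rho^2) = 0.8 here.
overall = statistics.stdev(offspring)
within = statistics.stdev(
    y for x, y in zip(parents, offspring) if abs(x - 1.0) < 0.05
)
print(round(within / overall, 2))
```

The within-class spread is also approximately the same for every parental class, echoing Galton’s empirical observation.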
A few years later Galton made a two-way plot of the statures of some human parents of unselected statures and their adult children, noting that the respective marginal distributions were approximately Gaussian or “normal,” as Adolphe Quetelet had noticed earlier, and that the distributions along lines in the plot parallel to either of the variate axes were “apparently” Gaussian distributions of equal variation, which was less than, and in a constant ratio to, that of the corresponding marginal distributions. To obtain a numerical value for r, Galton expressed the deviations of the individual values of both variates from their respective medians in terms of their respective semi-interquartile ranges as a unit, so that r became the slope of his regression line.
In 1888 Galton made one more great and far-reaching discovery. Applying the techniques that he had evolved for the measurement of the influence of heredity to the problem of measuring the degree of association between the sizes of two different organs of the same individual, he reached the conception of an “index of co-relation” as a measure of the degree of relationship between two such characteristics and recognized r, his measure of “reversion” or “regression,” to be such a coefficient of co-relation or correlation, suitable for application to all living forms.
Galton, however, failed to recognize and appreciate the additional mathematical complexity necessarily involved in the joint consideration of more than two mutually correlated characteristics, with the result that his efforts to formulate and implement what became known as his law of ancestral heredity were somewhat confused and imprecise. It remained for Pearson to provide the necessary generalization and precision of formulation in the form of a multiple regression formula.
For fuller details, see Pearson’s “Notes on the History of Correlation” (1920).
3. Speeches . . . at a Dinner . . . in [His] Honour, pp. 22–23; also quoted by E. S. Pearson, op. cit., p. 19 (Biometrika, 28, 211).
4. An examination of Letters From W. S. Gosset to R. A. Fisher 1915–1936, 4 vols. (Dublin, 1962), issued for private circulation only, reveals that Gosset (pen name “Student”) played a similar role with respect to R. A. Fisher. When and how they first came into contact is revealed by the two letters of Sept. 1912 from Gosset to Pearson that are reproduced in E. S. Pearson’s “Some Early Correspondence . . .” (1968).
5. E. S. Pearson, op. cit., apps. II and III.
6. Pearson was not the first to use this terminology: “Galton used it, as did also Lexis, and the writer has not found any reference which seems to be its first use” (Helen M. Walker, Studies. . ., p. 185). But Pearson’s consistent and exclusive use of this term in his epochmaking publications led to its adoption throughout the statistical community.
7. E. S. Pearson, op. cit., p. 26 (Biometrika, 28, 218).
8. The title “Contributions to the Mathematical Theory of Evolution” or “Mathematical Contributions . . .” was used as the general title of 17 memoirs, numbered II through XIX, published in the Philosophical Transactions or as Drapers’ Company Research Memoirs, and of 8 unnumbered papers published in the Proceedings of the Royal Society. “Mathematical” became and remained the initial word from III (1896) on. No. XVII was announced before 1912 as a forthcoming Drapers’ . . . Memoir but has not been published to date.
9. From Pearson, “Statistical Tests,” in Nature, 136 (1935), 296–297, see 296.
10. Pearson, “Notes on the History of Correlation,” p. 37 (Pearson and Kendall, p. 197).
11. Pearson did not use different symbols for population parameters (such as μ, σ, ρ) and sample measures of them (m, s, r), as has been done in this article, following the example set by “Student” in his first paper on small-sample theory, “The Probable Error of a Mean” (1908). Use of identical symbols for population parameters and sample measures of them makes Pearson’s papers, and other papers of this period, difficult to follow and, in some instances, led to error.
12. Pearson, “Notes on the History of Correlation,” p. 42 (Pearson and Kendall, p. 202).
13. In the rest of the article, the term “standard error” will be used instead of “standard deviation of the sampling error.” Pearson consistently gave formulas for, and spoke of, the corresponding “probable error” (or “p.e.”), defined by probable error = 0.674489 . . . × standard error, the numerical factor being the one appropriate to the normal distribution; he reserved the term “standard deviation” (and the symbol σ) for description of the variation of individuals in a population or sample.
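The numerical factor arises because half of a normal population lies within one probable error of the mean, so the probable error is the quartile deviation of the normal distribution. As a quick check (a sketch assuming Python 3.8+ for `statistics.NormalDist`):

```python
from statistics import NormalDist

# Half of a normally distributed population falls within one "probable
# error" of the mean, so the factor relating probable error to standard
# error is the 0.75 quantile of the standard normal distribution.
factor = NormalDist().inv_cdf(0.75)
print(factor)  # approximately 0.674489...
```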
14. Footnote, p. 247 (Early . . . Papers, p. 134).
15. There are always two sample η’s, η_{yx} and η_{xy}, corresponding to the regression of y on x and the regression of x on y, respectively, in the sample. When these regressions are both exactly linear, η_{yx} = η_{xy} = |r|; otherwise η_{yx} and η_{xy} are different.
In this memoir Pearson defines and discusses the correlation ratio, η_{yx}, and its relation to r entirely in terms of a sample of N paired observations, (x_{i}, y_{i}), (i = 1, 2, . . ., N). The implications of various equalities and inequalities between the correlation ratio of a trait X with respect to a trait Y in some general (nonnormal) bivariate population and ρ, the product-moment coefficient of correlation of X and Y in this population, are discussed, for example, in W. H. Kruskal, “Ordinal Measures of Association,” in Journal of the American Statistical Association, 53 (1958), 814–861.
16. In Pearson, “On the Systematic Fitting of Curves to Observations and Measurements,” in Biometrika, 1 , no. 3 (Apr. 1902), 264–303, see p. 271.
17. Pearson and Alice Lee, “On the Distribution of Frequency (Variation and Correlation) of the Barometric Height at Diverse Stations,” in Philosophical Transactions of the Royal Society, 190A (1898), 423–469, see 456 and footnote to 462, respectively.
18. Pearson, “On the Probable Error of a Coefficient of Correlation as Found From a Fourfold Table,” in Biometrika, 9, nos. 1–2 (Mar. 1913), 22–27.
19. Pearson, “On the Probable Error of Biserial η,” ibid., 11, no. 4 (May 1917), 292–302.
20. Ibid., 1, no. 1 (Oct. 1901), 2. Emphasis added.
21. Student, “Probable Error of a Correlation Coefficient,” ibid., 6, nos. 2–3 (Sept. 1908), 302–310. In a 1915 letter to R. A. Fisher (repro. in E. S. Pearson, “Some Early Correspondence . . .,” p. 447, and in Pearson and Kendall, p. 470), Gosset tells “how these things came to be of importance [to him]” and, in particular, says that the work of “the Experimental Brewery which concerns such things as the connection between analysis of malt or hops, and the behaviour of the beer, and which takes a day to each unit of the experiment, thus limiting the numbers, demanded an answer to such questions as ‘If with a small number of cases I get a value r, what is the probability that there is really a positive correlation of greater than (say) .25?’”
22. E. S. Pearson, “Some Reflexions. . .,” pp. 351–352 (Pearson and Kendall, pp. 349–350).
23. R. A. Fisher, “Frequency Distribution of the Values of the Correlation Coefficient in Samples From an Indefinitely Large Population,” in Biometrika, 10, no. 4 (May 1915), 507–521.
24. Letter from Pearson to Fisher dated 26 Sept. 1914, repro. in E. S. Pearson, “Some Early Correspondence . . .,” p. 448 (Pearson and Kendall, p. 408).
25. Letter from Pearson to Fisher dated Oct. 1914, partly repro. ibid., p. 449 (Pearson and Kendall, p. 409).
26. Letter from Pearson to Fisher dated 30 Jan. 1915, partly repro. ibid., pp. 449–450 (Pearson and Kendall, pp. 409–410).
27.Ibid., p. 450 (Pearson and Kendall, p. 410).
28. Letter from Pearson to Fisher dated 13 May 1916, repro. ibid., p. 451 (Pearson and Kendall, p. 411).
29. J. O. Irwin, in Journal of the Royal Statistical Society, 126 , pt. 1 (Mar. 1963), 161; F. Yates and K. Mather, in Biographical Memoirs of Fellows of the Royal Society, 9 (Nov. 1963), 98–99; P. C. Mahalanobis, in Biometrics, 20 , no. 2 (June 1964), 214.
30. R. A. Fisher, “On an Absolute Criterion for Fitting Frequency Curves,” in Messenger of Mathematics, 41 (1912), 155–160.
This paper marks Fisher’s break away from inverse probability reasoning via Bayes’s theorem, but, although evident in retrospect, the “break” was not clear-cut. Not having yet coined the term “likelihood,” he spoke (p. 157) of “the probability of any particular set of κ’s” (that is, of the parameters involved) being “proportional to the chance of a given set of observations occurring,” which appears to be equivalent to the proposition in the theory of inverse probability that, assuming a uniform a priori probability distribution of the parameters, the ratio of the a posteriori probability that θ = θ_{0} + ξ to the a posteriori probability that θ = θ_{0} is equal to the ratio of the probability of the observed set of observations when θ = θ_{0} + ξ to their probability when θ = θ_{0}. He also described (p. 158) a graphical representation of “the inverse probability system.” On the other hand, he did stress (p. 160) that only the relative (not the absolute) values of these “probabilities” were meaningful and that it would be “illegitimate” to integrate them over a region in the parameter space.
Fisher introduced the term “likelihood” in his paper “On the Mathematical Foundations of Theoretical Statistics,” in Philosophical Transactions of the Royal Society, 222A (19 Apr. 1922), 309–368, in which he made clear for the first time the distinction between the mathematical properties of “likelihoods” and “probabilities,” and stated:
I must plead guilty in my original statement of the Method of Maximum Likelihood to having based my argument upon the principle of inverse probability; in the same paper, it is true, I emphasized the fact that such inverse probabilities were relative only. . . . Upon consideration . . . I perceive that the word probability is wrongly used in such a connection: probability is a ratio of frequencies, and about the frequencies of such [parameter] values we can know nothing whatever (p. 326).
31. E. S. Pearson, “Some Early Correspondence . . .,” p. 452 (Pearson and Kendall, p. 412).
32. Repro. ibid., pp. 454–455 (Pearson and Kendall, pp. 414–415).
33. F. N. David, Tables of the Ordinates and Probability Integral of the Distribution of the Correlation Coefficient in Small Samples (London, 1938).
34. Letter from Pearson to Fisher dated 21 Aug. 1920, repro. in E. S. Pearson, “Some Early Correspondence . . .,” p. 453 (Pearson and Kendall, p. 413).
35. R. A. Fisher, “On the ‘Probable Error’ of a Coefficient of Correlation Deduced From a Small Sample,” in Metron, 1, no. 4 (1921), 1–32.
36. Letters from Pearson to Fisher dated 26 June 1916 and 21 Oct. 1918, repro. in E. S. Pearson, “Some Early Correspondence. . .,” pp. 455, 456, respectively (Pearson and Kendall, pp. 415, 416).
37. Pearson, “Method of Moments and Method of Maximum Likelihood,” in Biometrika, 28, nos. 1–2 (June 1936), 34–59; R. A. Fisher, “Professor Karl Pearson and the Method of Moments,” in Annals of Eugenics, 7, pt. 4 (June 1937), 303–318.
38. F. Y. Edgeworth, “On the Probable Error of Frequency Constants,” in Journal of the Royal Statistical Society, 71 (1908), 381–397, 499–512, 652–678.
39. The identical mathematical form of expressions derived by the method of maximum likelihood and by the method of inverse probability, if a uniform prior distribution is adopted, has been a source of continuing confusion. Thus, the “standard errors” given by Gauss in his 1816 paper were undeniably derived via the method of inverse probability and, strictly speaking, are the standard deviations of the a posteriori probability distributions of the parameters concerned, given the observed values of the particular functions of sample values considered. On the other hand, by virtue of the above-mentioned equivalence of form, Gauss’s 1816 formulas can be recognized as giving the “standard errors,” that is, the standard deviations of the sampling distributions, of the functions of sample values involved for fixed values of the corresponding population parameters. Consequently, speaking loosely, one is inclined today to attribute to Gauss the original (“first”) derivation of these “standard error” formulas, even though he may have had (in 1816) no conception of the “sampling distribution,” for fixed values of a population parameter, of a sample function used to estimate the value of this parameter. In contrast, the result given in his 1821 paper almost certainly refers to the sampling distribution of s, and not to the a posteriori distribution of σ.
Edgeworth’s discussion is quite explicitly in terms of inverse probability. The Pearson–Filon asymptotic formulas are derived afresh in this context and are said to be applicable only to “solutions” obtained by “the genuine inverse method,” the “fluctuation of the quaesitum” so determined “being less than that of any other determination” (pp. 506–507).
The correct interpretation of the formulas derived by Pearson and Filon is somewhat obscured by their use of identical symbols for population parameters and the sample functions used to estimate them, and by the fact that their choice of words is such that their various summary statements can be interpreted either way. On the other hand, their derivation starts (p. 231) with consideration of a ratio of probabilities, introduced without explanation but for which the explanation may be the “proposition in the theory of Inverse Probability” mentioned in note 30 above; and Pearson says, in his letter of June 1916 to Fisher (see note 32), “In the first place you have to demonstrate the logic of the Gaussian rule. . . . I frankly confess I approved the Gaussian method in 1897 (see Phil. Trans. Vol. 191, A, p. 232), but I think it logically at fault now.” These facts suggest that Pearson and Filon may have regarded the “probable errors” and “correlations” they derived as describing properties of the joint a posteriori probability distribution of the population parameters, given the observed values of the sample functions used to estimate them.
40. Proceedings of the Royal Society, 68 (1900), 1–5.
41. Quoted by Pearson in his memoir on Weldon, in Biometrika, 5, no. 1 (Oct. 1906), 35 (Pearson and Kendall, p. 302).
42. Letter from Pearson to Galton, quoted in Pearson’s Life. . . of Francis Galton, IIIA, 241.
43. Quoted by Pearson in his memoir on Weldon, in Biometrika, 5, no. 1 (Oct. 1906), 35 (Pearson and Kendall, p. 302).
44. Ibid.
45. W. Bateson, “Heredity, Differentiation, and Other Conceptions of Biology: A Consideration of Professor Karl Pearson’s Paper ‘On the Principle of Homotyposis,’” in Proceedings of the Royal Society, 69 , no. 453, 193–205.
46. Pearson, “On the Fundamental Conceptions of Biology,” in Biometrika, 1, no. 3 (Apr. 1902), 320–344.
47. W. Bateson, Materials for the Study of Variation, Treated With Especial Regard to Discontinuity in the Origin of Species (London, 1894).
48. For fuller details, see either of the articles by P. Froggatt and N. C. Nevin in the bibliography; the first is the more complete.
49. These studies were not without a price for Pearson: he became deeply involved almost at once in a hot controversy over tuberculosis and a fierce dispute on the question of alcoholism. See E. S. Pearson, Karl Pearson . . ., pp. 59–66 (Biometrika, 29, 170–177).
50. From a printed statement entitled History of the Biometric and Galton Laboratories, drawn up by Pearson in 1920; quoted in E. S. Pearson, Karl Pearson . . ., p. 53 (Biometrika, 29, 164).
BIBLIOGRAPHY
I. Original Works. A bibliography of Pearson’s research memoirs and his articles and letters in scientific journals that are on applied mathematics, including astronomy, but not statistics, biometry, anthropology, eugenics, or mathematical tables, follows the obituary by L. N. G. Filon (see below). A bibliography of his major contributions to the latter five areas is at the end of P. C. Mahalanobis, “A Note on the Statistical and Biometric Writings of Karl Pearson” (see below). The individual mathematical tables and collections of such tables to which Pearson made significant contributions in their computation or compilation, or through preparation of explanatory introductory material, are listed and described in Raymond Clare Archibald, Mathematical Table Makers (New York, 1948), 65–67.
Preparation of a complete bibliography of Pearson’s publications was begun, with his assistance, three years before his death. The aim was to include all of the publications on which his name appeared as sole or part author and all of his publications that were issued anonymously. The result, A Bibliography of the Statistical and Other Writings of Karl Pearson (Cambridge, 1939), compiled by G. M. Morant with the assistance of B. L. Welch, lists 648 numbered entries arranged chronologically under five principal headings, with short summaries of the contents of the more important, followed by a sixth section in which a chronological list, “probably incomplete,” is given of the syllabuses of courses of lectures and single lectures delivered by Pearson that were printed contemporaneously as brochures or single sheets. The five major categories and the number of entries in each are the following:
I. Theory of statistics and its application to biological, social, and other problems (406);
II. Pure and applied mathematics and physical science (37);
III. Literary and historical (67);
IV. University matters (27);
V. Letters, reviews, prefatory and other notes in scientific publications (111).
Three omissions have been detected: “The Flying to Pieces of a Whirling Ring,” in Nature, 43, no. 1117 (26 Mar. 1891), 488; “Note on Professor J. Arthur Harris’ Papers on the Limitation in the Applicability of the Contingency Coefficient,” in Journal of the American Statistical Association, 25, no. 171 (Sept. 1930), 320–323; and “Postscript,” ibid., 327.
The following annotated list of Pearson’s most important publications will suffice to reveal the great diversity of his contributions and their impact on the biological, physical, and social sciences. The papers marked with a single asterisk (*) have been repr. in Karl Pearson’s Early Statistical Papers (Cambridge, 1948) and those with a double asterisk (**), in E. S. Pearson and M. G. Kendall, eds., Studies in the History of Probability and Statistics (London–Darien, Conn., 1970), referred to as Pearson and Kendall.
“On the Motion of Spherical and Ellipsoidal Bodies in Fluid Media” (2 pts.), in Quarterly Journal of Pure and Applied Mathematics, 20 (1883), 60–80, 184–211; and “On a Certain Atomic Hypothesis” (2 pts), in Transactions of the Cambridge Philosophical Society, 14 , pt. 2 (1887), 71–120, and Proceedings of the London Mathematical Society, 20 (1888), 38–63, respectively. These early papers on the motions of a rigid or pulsating atom in an infinite incompressible fluid did much to increase Pearson’s stature in applied mathematics at the time.
William Kingdon Clifford, The Common Sense of the Exact Sciences (London, 1885; reiss. 1888), which Pearson edited and completed.
Isaac Todhunter, A History of the Theory of Elasticity and of the Strength of Materials From Galilei to the Present Time, 2 vols. (Cambridge, 1886–1893; reiss. New York, 1960), edited and completed by Pearson.
The Ethic of Freethought (London, 1888; 2nd ed., 1901), a collection of essays, lectures, and public addresses on free thought, historical research, and socialism.
“On the Flexure of Heavy Beams Subjected to a Continuous Load. Part I,” in Quarterly Journal of Pure and Applied Mathematics, 24 (1889), 63–110, in which for the first time a now much-cited exact solution was given for the bending of a beam of circular cross section under its own weight, and extended to elliptic cross sections in “. . . Part II,” ibid., 31 (1899), 66–109, written with L. N. G. Filon.
The Grammar of Science (London, 1892; 3rd ed., 1911; reiss. Gloucester, Mass., 1969; 4th ed., E. S. Pearson, ed., London, 1937), a critical survey of the concepts of modern science and his most influential book.
*“Contributions to the Mathematical Theory of Evolution,” in Philosophical Transactions of the Royal Society, 185A (1894), 71–110, deals with the dissection of symmetrical and asymmetrical frequency curves into normal (Gaussian) components and marks Pearson’s introduction of the method of moments as a means of fitting a theoretical curve to experimental data and of the term “standard deviation” and σ as the symbol for it.
*“Contributions to the Mathematical Theory of Evolution. II. Skew Variation in Homogeneous Material,” ibid., 186A (1895), 343–414, in which the term “mode” is introduced, the foundations of the Pearson system of frequency curves are laid, and Types I–IV are defined and their application exemplified.
*“Mathematical Contributions to the Theory of Evolution. III. Regression, Heredity, and Panmixia,” ibid., 187A (1896), 253–318, Pearson’s first fundamental paper on correlation, with special reference to problems of heredity, in which correlation and regression are defined in far greater generality than previously and the theory of multivariate normal correlation is developed as a practical tool to a stage that left little to be added.
The Chances of Death and Other Studies in Evolution, 2 vols. (London, 1897), essays on social and statistical topics, including the earliest adequate study (“Variation in Man and Woman”) of anthropological “populations” using scientific measures of variability.
*“Mathematical . . . IV. On the Probable Errors of Frequency Constants and on the Influence of Random Selection on Variation and Correlation,” in Philosophical Transactions of the Royal Society, 191A (1898), 229–311, written with L. N. G. Filon, in which were derived the nowfamiliar expressions for the asymptotic variances and covariances of sample estimators of a group of population parameters in terms of derivatives of the likelihood function (without recognition of their applicability only to maximum likelihood estimators), and a number of particular results deduced therefrom.
*“Mathematical . . . V. On the Reconstruction of the Stature of Prehistoric Races,” ibid., 192A (1898), 169–244, in which multiple regression techniques were used to reconstruct predicted average measurements of extinct races from the sizes of existing bones, given the correlations among bone lengths in an extant race, not merely as a technical exercise but as a means of testing the accuracy of predictions in evolutionary problems in the light of certain evolutionary theories.
“Mathematical . . . On the Law of Ancestral Heredity,” in Proceedings of the Royal Society, 62 (1898), 386–412, a statistical formulation of Galton’s law in the form of a multiple regression of offspring on “midparental” ancestry, with deductions therefrom of theoretical values for various regression and correlation coefficients between kin, and comparisons of such theoretical values with values derived from observational material.
“Mathematical . . . VII. On the Correlation of Characters not Quantitatively Measurable,” in Philosophical Transactions of the Royal Society, 195A (1901), 1–47, in which the “tetrachoric” coefficient of correlation r_{t} was introduced for estimating the coefficient of correlation, ρ, of a bivariate normal distribution from a sample scored dichotomously in both variables.
*“On the Criterion That a Given System of Deviations From the Probable in the Case of a Correlated System of Variables Is Such That It Can Be Reasonably Supposed to Have Arisen From Random Sampling,” in London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 5th ser., 50 (1900), 157–175, in which the “X^{2} test of goodness of fit” was introduced, one of Pearson’s greatest single contributions to statistical methodology.
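The criterion itself is simple to state: with observed cell frequencies O_i and theoretically expected frequencies E_i, X² = Σ (O_i − E_i)²/E_i. A minimal sketch with hypothetical counts (a die rolled 120 times, not data from Pearson's paper):

```python
# Hypothetical data: a die rolled 120 times, 20 throws expected per face
# under the hypothesis of a fair die.
observed = [18, 23, 16, 21, 25, 17]
expected = [20] * 6

# Pearson's goodness-of-fit criterion: sum of (observed - expected)^2 / expected.
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(x2)  # 3.2
```

A value of X² this small, judged against the chi-square distribution with 5 degrees of freedom (six cells, fully specified hypothesis), indicates a good fit; the degrees-of-freedom correction for fitted parameters was a later refinement due to Fisher.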
“Mathematical . . . IX. On the Principle of Homotyposis and Its Relation to Heredity, to the Variability of the Individual, and to That of Race. Part I. Homotyposis in the Vegetable Kingdom,” in Philosophical Transactions of the Royal Society, 191A (1901), 285–379, written with Alice Lee et al., a theoretical discussion of the relation of fraternal correlation to the correlation of “undifferentiated like organs of the individual” (called “homotyposis”), followed by numerous applications; the paper led to a complete schism between the biometric and Mendelian schools and the founding of Biometrika.
*“Mathematical . . . X. Supplement to a Memoir on Skew Variation,” ibid., 443–459; Pearson curves Types V and VI are developed and their application exemplified.
*“On the Mathematical Theory of Errors of Judgement With Special Reference to the Personal Equation,” ibid., 198A (1902), 235–299, a memoir still of great interest and importance founded on two series of experiments, each with three observers, from which it was learned, among other things, that the “personal equation” (bias pattern of an individual observer) is subject to fluctuations far exceeding random sampling and that the errors of different observers looking at the same phenomena are in general correlated.
“Note on Francis Galton’s Problem,” in Biometrika, 1, no. 4 (Aug. 1902), 390–399, in which Pearson found the general expression for the mean value of the difference between the rth and the (r + 1)th ranked individuals in random samples from a continuous distribution, one of the earliest results in the sampling theory of order statistics; similar general expressions for the variances of and correlations between such intervals are given in his joint paper of 1931.
“On the Probable Errors of Frequency Constants,” in Biometrika, 2, no. 3 (June 1903), 273–281, an editorial that deals with standard errors of, and correlations between, cell frequencies and sample centroidal moments, in terms of the centroidal moments of a univariate distribution of general form. The extension to samples from a general bivariate distribution was made in pt. II, in Biometrika, 9, nos. 1–2 (Mar. 1913), 1–19; and to functions of sample quantiles in pt. III, ibid., 13, no. 1 (Oct. 1920), 113–132.
*Mathematical . . . XIII. On the Theory of Contingency and Its Relation to Association and Normal Correlation, Drapers’ Company Research Memoirs, Biometric Series, no. 1 (London, 1904), directed toward measuring the association of two variables when the observational data take the form of frequencies in the cells of an r × c “contingency table” of qualitative categories not necessarily meaningfully orderable, an adaptation of his X^{2} goodness-of-fit criterion, termed “square contingency,” being introduced to provide a test of overall departure from the hypothesis of independence and the basis of a measure of association, the “coefficient of contingency,” which was shown to tend under certain special conditions to the coefficient of correlation of an underlying bivariate normal distribution.
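A small hypothetical example of the two quantities this memoir introduced: the square contingency X² of a table measured against the hypothesis of independence, and the coefficient of contingency C = √(X²/(N + X²)). The table below is invented for illustration.

```python
# Hypothetical 2 x 2 table of cell frequencies.
table = [[30, 10], [20, 40]]
N = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

# Square contingency: X^2 computed against expected frequencies under
# independence, expected[i][j] = row_tot[i] * col_tot[j] / N.
x2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / N) ** 2 / (row_tot[i] * col_tot[j] / N)
    for i in range(len(table))
    for j in range(len(table[0]))
)

# Pearson's coefficient of contingency.
C = (x2 / (N + x2)) ** 0.5
print(round(x2, 3), round(C, 3))
```

For large tables built over an underlying bivariate normal distribution, C approaches the correlation coefficient of that distribution, which is the tendency the memoir established.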
On Some Disregarded Points in the Stability of Masonry Dams, Drapers’ Company Research Memoirs, Technical Series, no. 1 (London, 1904), written with L. W. Atcherley, in which it was shown that the assumptions underlying a widely accepted procedure for calculating the stresses in masonry dams are not satisfied at the bottom of the dam, the stresses there being in excess of those so calculated, with consequent risk of rupture near the base. Still cited today, this paper and its companion Experimental Study . . . (1907) caused great concern at the time, for instance, with reference to the British-built Aswan Dam.
*Mathematical . . . XIV. On the General Theory of Skew Correlation and Non-Linear Regression, Drapers’ Company Research Memoirs, Biometric Series, no. 2 (London, 1905), dealt with the general conception of skew variation and correlation and the properties of the “correlation ratio” η (introduced in 1903) and showed for the first time the fundamental importance of the expressions η² and η² − r², and of the difference between η and r, as measures of departure from linearity, as well as those conditions that must be satisfied for linear, parabolic, cubic, and other regression equations to be adequate.
“The Problem of the Random Walk,” in Nature, 72 (17 July 1905), 294, a brief letter containing the first explicit formulation of a “random walk,” a term Pearson coined, and asking for information on the probability distribution of the walker’s distance from the origin after n steps. Lord Rayleigh indicated the asymptotic solution as n → ∞ in the issue of 3 Aug., p. 318; and the general solution for finite n was published by J. C. Kluyver in Dutch later the same year.
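Pearson’s walk is easy to simulate. The sketch below (with arbitrary step and trial counts) performs a Monte Carlo check of the elementary result that the mean squared distance from the origin after n unit steps in uniformly random directions equals n, consistent with the asymptotic (Rayleigh) distribution of the distance.

```python
import math
import random

random.seed(2)

def walk_distance(n):
    """Distance from the origin after n unit steps in random directions."""
    x = y = 0.0
    for _ in range(n):
        a = random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(a)
        y += math.sin(a)
    return math.hypot(x, y)

# Monte Carlo check of E[R^2] = n (step and trial counts are arbitrary).
n, trials = 100, 2000
mean_sq = sum(walk_distance(n) ** 2 for _ in range(trials)) / trials
print(mean_sq / n)  # close to 1
```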
Mathematical . . . XV. A Mathematical Theory of Random Migration, Drapers’ Company Research Memoirs, Biometric Series, no. 3 (London, 1906), written with John Blakeman. Various theoretical forms of distribution were derived that would result from random migration from an origin under certain ideal conditions, and solutions to a number of subsidiary problems were given—results that, while not outstandingly successful in studies of migration, have found various other applications.
**“Walter Frank Raphael Weldon, 1860–1906,” in Biometrika, 5, nos. 1–2 (Oct. 1906), 1–52 (repr. as paper no. 21 in Pearson and Kendall), a tribute to the man who posed the questions that impelled Pearson to some of his most important contributions, with additional details on the early years (1890–1905) of the biometric school and the founding of Biometrika.
Mathematical . . . XVI. On Further Methods of Determining Correlation, Drapers’ Company Research Memoirs, Biometric Series, no. 4 (London, 1907), dealt with calculation of the coefficient of correlation, r, from the individual differences (x − y) in a sample and with estimation of the coefficient of correlation, ρ, of a bivariate normal population from the ranks of the individuals in a sample of that population with respect to each of the two traits concerned.
An Experimental Study of the Stresses in Masonry Dams, Drapers’ Company Research Memoirs, Technical Series, no. 5 (London, 1907), written with A. F. C. Pollard, C. W. Wheen, and L. F. Richardson, which lent experimental support to the 1904 theoretical findings.
A First Study of the Statistics of Pulmonary Tuberculosis, Drapers’ Company Research Memoirs, Studies in National Deterioration, no. 2 (London, 1907), and A Second Study . . . Marital Infection . . ., Technical Series, no. 3 (London, 1908), written with E. G. Pope, the first two of seven publications by Pearson and his coworkers during 1907–1913 on the then-important and controversial subjects of the inheritance and transmission of pulmonary tuberculosis.
“On a New Method of Determining Correlation Between a Measured Character A, and a Character B, of which Only the Percentage of Cases Wherein B Exceeds (or Falls Short of) a Given Intensity Is Recorded for Each Grade of A,” in Biometrika, 6, nos. 1–2 (July–Oct. 1909), 96–105, in which the formula for the biserial coefficient of correlation, “biserial r,” is derived but not named, and its application exemplified.
“On a New Method of Determining Correlation When One Variable Is Given by Alternative and the Other by Multiple Categories,” ibid., 7, no. 3 (Apr. 1910), 248–257, in which the formula for “biserial η” is derived but not named, and its application exemplified.
A First Study of the Influence of Parental Alcoholism on the Physique and Ability of the Offspring, Eugenics Laboratory Memoirs, no. 10 (London, 1910), written with Ethel M. Elderton, gave correlations between drinking habits of the parents and the intelligence and various physical characteristics of the offspring, and examined the effect of parental alcoholism on the infant death rate.
A Second Study. . . Being a Reply to Certain Medical Critics of the First Memoir and an Examination of the Rebutting Evidence Cited by Them, Eugenics Laboratory Memoirs, no. 13 (London, 1910), written with E. M. Elderton.
A Preliminary Study of Extreme Alcoholism in Adults, Eugenics Laboratory Memoirs, no. 14 (London, 1910), written with Amy Barrington and David Heron. The relations of alcoholism to number of convictions, education, religion, prostitution, mental and physical conditions, and death rates were examined, with comparisons between the extreme alcoholic and the general population.
“On the Probability That Two Independent Distributions of Frequency Are Really Samples From the Same Population,” in Biometrika, 8, nos. 1–2 (July 1911), 250–254, in which his χ² goodness-of-fit criterion is extended to provide a test of the hypothesis that two independent samples arrayed in a 2 × c table are random samples from the same population.
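The 2 × c comparison described in this memoir can be illustrated with a short modern sketch (the function name and the sample counts below are ours, purely for illustration, not Pearson’s notation): the χ² statistic sums (observed − expected)²/expected over all cells, with expected counts computed as if both samples came from a single population.

```python
def chi_square_2xc(row1, row2):
    """Pearson's chi-squared statistic for a 2 x c contingency table:
    compares observed counts with the counts expected if the two
    independent samples were drawn from the same population."""
    col_totals = [a + b for a, b in zip(row1, row2)]
    n1, n2 = sum(row1), sum(row2)
    n = n1 + n2
    chi2 = 0.0
    for j, col in enumerate(col_totals):
        for observed, row_total in ((row1[j], n1), (row2[j], n2)):
            expected = row_total * col / n
            chi2 += (observed - expected) ** 2 / expected
    # In modern terms the statistic is referred to c - 1 degrees of freedom.
    return chi2, len(col_totals) - 1

# Two samples of 60 observations each, sorted into three categories
stat, df = chi_square_2xc([30, 20, 10], [20, 20, 20])  # stat ≈ 5.33, df = 2
```

A large value of the statistic, relative to its χ² distribution on c − 1 degrees of freedom, is evidence against the hypothesis that the two samples come from the same population.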
Social Problems: Their Treatment, Past, Present and Future. . ., Questions of the Day and of the Fray, no. 5 (London, 1912), contains a perceptive, eloquent plea for replacement of literary exposition and folklore by measurement, and presents some results of statistical analyses that illustrate the complexity of social problems.
The Life, Letters and Labours of Francis Galton, 3 vols. in 4 pts. (Cambridge, 1914–1930).
Tables for Statisticians and Biometricians (London, 1914; 2nd ed., issued as “Part I,” 1924; 3rd ed., 1930), consists of 55 tables, some new, the majority repr. from Biometrika, a few from elsewhere, to which Pearson as editor contributed an intro. on their use.
“On the General Theory of Multiple Contingency With Special Reference to Partial Contingency,” in Biometrika, 11, no. 3 (May 1916), 145–158, extends the χ² method to the comparison of two (r × 2) tables and contains the basic elements of a large part of present-day χ² technique.
“Mathematical Contributions. . .XIX. Second Supplement to a Memoir on Skew Variation,” in Philosophical Transactions of the Royal Society, 216A (1916), 429–457, in which Pearson curves Types VII–XI are defined and their applications illustrated.
“On the Distribution of the Correlation Coefficient in Small Samples. Appendix II to the Papers of ‘Student’ and R. A. Fisher. A Cooperative Study,” in Biometrika, 11, no. 4 (May 1917), 328–413, written with H. E. Soper, A. W. Young, B. M. Cave, and A. Lee, an exhaustive study of the moments and shape of the distribution of r in samples of size n from a normal population with correlation coefficient ρ, as a function of n and ρ, and of its approach to normality as n → ∞, with special attention to determination, via inverse probability, of the “most likely value” of ρ from an observed value of r—the paper that initiated the rift between Pearson and Fisher.
“De Saint-Venant Solution for the Flexure of Cantilevers of Cross-Sections in the Form of Complete and Curtate Circular Sectors, and the Influence of the Manner of Fixing the Built-in End of the Cantilever on Its Deflection,” in Proceedings of the Royal Society, 96A (1919), 211–232, written with Mary Seegar, a basic paper giving the solution regularly cited for cantilevers of such cross sections—Pearson’s last paper in mechanics.
“Notes on the History of Correlation. Being a Paper Read to the Society of Biometricians and Mathematical Statisticians, June 14, 1920,” in Biometrika, 13, no. 1 (Oct. 1920), 25–45 (paper no. 14 in Pearson and Kendall), deals with Gauss’s and Bravais’s treatment of the bivariate normal distribution, Galton’s discovery of correlation and regression, and Pearson’s involvement in the matter.
Tables of the Incomplete Γ-Function Computed by the Staff of the Department of Applied Statistics, University of London, University College (London, 1922; reiss. 1934), tables prepared under the direction of Pearson, who, as editor, contributed an intro. on their use.
Francis Galton, 1822–1922. A Centenary Appreciation, Questions of the Day and of the Fray, no. 11 (London, 1922).
Charles Darwin, 1809–1882. An Appreciation. . ., Questions of the Day and of the Fray, no. 12 (London, 1923).
“Historical Note on the Origin of the Normal Curve of Errors,” in Biometrika, 16, no. 3 (Dec. 1924), 402–404, announces the discovery of two copies of a long-overlooked pamphlet of De Moivre (1733) which gives to De Moivre priority in utilizing the integral of essentially the normal curve to approximate sums of successive terms of a binomial series, in formulating and using the theorem known as “Stirling’s formula,” and in enunciating “Bernoulli’s theorem” that imprecision of a sample fraction as an estimate of the corresponding population proportion depends on the inverse square root of sample size.
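De Moivre’s result—that the integral of the normal curve approximates sums of successive binomial terms—is easy to demonstrate numerically. The sketch below (function names ours, illustrative only) compares an exact binomial tail sum with its normal approximation, using the modern continuity correction of one-half:

```python
import math

def binom_prob(n, p, lo, hi):
    """Exact P(lo <= X <= hi) for X ~ Binomial(n, p), by direct summation."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(lo, hi + 1))

def normal_approx(n, p, lo, hi):
    """De Moivre-style normal approximation to the same sum of binomial
    terms, evaluated via the normal c.d.f. with a continuity correction."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal c.d.f.
    return phi((hi + 0.5 - mu) / sigma) - phi((lo - 0.5 - mu) / sigma)

# 100 tosses of a fair coin: probability of between 45 and 55 heads
exact = binom_prob(100, 0.5, 45, 55)       # ≈ 0.7287
approx = normal_approx(100, 0.5, 45, 55)   # ≈ 0.7287
```

The two values agree to about three decimal places, which is the practical content of De Moivre’s discovery; the inverse-square-root dependence on sample size enters through the σ = √(np(1 − p)) in the denominator.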
“On the Skull and Portraits of George Buchanan,” ibid., 18, nos. 3–4 (Nov. 1926), 233–256, in which it is shown that the portraits fall into two groups corresponding to distinctly different types of face, and only the type exemplified by the portraits in the possession of the Royal Society conforms to the skull.
“On the Skull and Portraits of Henry Stewart, Lord Darnley, and Their Bearing on the Tragedy of Mary, Queen of Scots,” ibid., 20B, no. 1 (July 1928), 1–104, in which the circumstances of Lord Darnley’s death and the history of his remains are discussed, anthropometric characteristics of his skull and femur are described and shown to compare reasonably well with the portraits, and the pitting of the skull is inferred to be of syphilitic origin.
“Laplace, Being Extracts From Lectures Delivered by Karl Pearson,” ibid., 21, nos. 1–4 (Dec. 1929), 202–216, an account of Laplace’s ancestry, education, and later life that affords necessary corrections to a number of earlier biographies.
Tables for Statisticians and Biometricians, Part II (London, 1931), tables nearly all repr. from Biometrika, with pref. and intro. on use of the tables by Pearson, as editor.
“On the Mean Character and Variance of a Ranked Individual, and on the Mean and Variance of the Intervals Between Ranked Individuals. Part I. Symmetrical Distributions (Normal and Rectangular),” in Biometrika, 23, nos. 3–4 (Dec. 1931), 364–397, and “. . .Part II. Case of Certain Skew Curves,” ibid., 24, nos. 1–2 (May 1932), 203–279, both written with Margaret V. Pearson, in which certain general formulas relating to means, standard deviations, and correlations of ranked individuals in samples of size n from a continuous distribution are developed and applied (in pt. I) to samples from the rectangular and normal distributions, and (in pt. II) to special skew curves (Pearson Types VIII, IX, X, and XI) that admit exact solutions.
Tables of the Incomplete Beta-Function (London, 1934), tables prepared under the direction of and edited by Pearson, with an intro. by Pearson on the methods of computation employed and on the uses of the tables.
“The Wilkinson Head of Oliver Cromwell and Its Relationship to Busts, Masks, and Painted Portraits,” in Biometrika, 26, nos. 3–4 (Dec. 1934), 269–378, written with G. M. Morant, an extensive analysis involving 107 plates from which it is concluded “that it is a ‘moral certainty’ drawn from circumstantial evidence that the Wilkinson Head is the genuine head of Oliver Cromwell.”
“Old Tripos Days at Cambridge, as Seen From Another Viewpoint,” in Mathematical Gazette, 20 (1936), 27–36.
Pearson edited two scientific journals, to which he also contributed substantially: Biometrika, of which he was one of the three founders, always the principal editor (vols. 1–28, 1901–1936), and for many years the sole editor; and Annals of Eugenics, of which he was the founder and the editor of the first 5 vols. (1925–1933). He also edited three series of Drapers’ Company Research Memoirs: Biometric Series, nos. 1–4, 6–12 (London, 1904–1922) (no. 5 was never issued), of which he was sole author of 4 and senior author of the remainder; Studies in National Deterioration, nos. 1–11 (London, 1906–1924), 2 by Pearson alone and 3 more with coauthors; and Technical Series, nos. 1–7 (London, 1904–1918), 1 by Pearson alone, the others with coauthors. To these must be added the Eugenics Laboratory Memoirs, nos. 1–29 (London, 1907–1935), of which Pearson was a coauthor of 4 and to many others of which, including the 13 issues (1909–1933) comprising “The Treasury of Human Inheritance,” vols. I and II, he contributed prefatory material; the Eugenics Laboratory Lecture Series, nos. 1–14 (London, 1909–1914), 12 by Pearson alone and 1 a joint contribution; Questions of the Day and of the Fray, nos. 1–12 (London, 1910–1923), 9 by Pearson alone and 1 a joint contribution; and Tracts for Computers, nos. 1–20 (London, 1919–1935), 2 by Pearson himself, plus a foreword, intro., or prefatory note to 5 others.
Pearson has given a brief account of the persons and early experiences that most strongly influenced his development as a scholar and scientist in his contribution to the volume of Speeches… (1934) cited below; fuller accounts of his Cambridge undergraduate days, his teachers, his reading, and his departures from the norm of a budding mathematician are in “Old Tripos Days” above. His “Notes on the History of Correlation” (1920) contains a brief account of how he became involved in the development of correlation theory; and he gives many details on the great formative period (1890–1906) in the development of biometry and statistics in his memoir on Weldon (1906) and in vol. IIIA of his Life. . . of Francis Galton.
A very large number of letters from all stages of Pearson’s life, beginning with his childhood, and many of his MSS, lectures, lecture notes and syllabuses, notebooks, biometric specimens, and data collections have been preserved. A large part of his scientific library was merged, after his death, with the joint library of the departments of eugenics and statistics at University College, London; a smaller portion, with the library of the department of applied mathematics.
Some of Pearson’s letters to Galton were published by Pearson, with Galton’s replies, in vol. III of his Life. . . of Francis Galton. A few letters of special interest from and to Pearson were published, in whole or in part, by his son, E. S. Pearson, in his “Some Incidents in the Early History of Biometry and Statistics” and in “Some Early Correspondence Between W. S. Gosset, R. A. Fisher, and Karl Pearson,” cited below; and a selection of others, from and to Pearson, together with syllabuses of some of Pearson’s lectures and lecture courses, are in E. S. Pearson, Karl Pearson: An Appreciation. . ., cited below.
For the most part Pearson’s archival materials are not yet generally available for study or examination. Work in progress for many years on sorting, arranging, annotating, cross-referencing, and indexing these materials, and on typing many of his handwritten items, is nearing completion, however. A first typed copy of the handwritten texts of Pearson’s lectures on the history of statistics was completed in 1972; many dates, quotations, and references have to be checked and some ambiguities resolved before the whole is ready for public view. Hence we may expect the great majority to be available to qualified scholars before very long in the Karl Pearson Archives at University College, London.
II. Secondary Literature. The best biography of Pearson is still Karl Pearson: An Appreciation of Some Aspects of His Life and Work (Cambridge, 1938), by his son, Egon Sharpe Pearson, who stresses in his preface that “this book is in no sense a Life of Karl Pearson.” It is a reissue in book form of two articles, bearing the same title, published in Biometrika, 28 (1936), 193–257, and 29 (1937), 161–248, with two additional apps. (II and III in the book), making six in all. Included in the text are numerous instructive excerpts from Pearson’s publications, helpful selections from his correspondence, and an outline of his lectures on the history of statistics in the seventeenth and eighteenth centuries. App. I gives the syllabuses of the 7 public lectures Pearson gave at Gresham College, London, in 1891, “The Scope and Concepts of Modern Science,” from which The Grammar of Science (1892) developed; app. II, the syllabuses of 30 lectures on “The Geometry of Statistics,” “The Laws of Chance,” and “The Geometry of Chance” that Pearson delivered to general audiences at Gresham College, 1891–1894; app. III, by G. Udny Yule, repr. from Biometrika, 30 (1938), 198–203, summarizes the subjects dealt with by Pearson in his lecture courses on “The Theory of Statistics” at University College, London, during the 1894–1895 and 1895–1896 sessions; app. VI provides analogous summaries of his 2 lecture courses on “The Theory of Statistics” for first- and second-year students of statistics at University College during the 1921–1922 session, derived from E. S. Pearson’s lecture notes; and apps. IV and V give, respectively, the text of Pearson’s report of Nov. 1904 to the Worshipful Company of Drapers on “the great value that the Drapers’ Grant [had] been to [his] Department” and an extract from his report to them of Feb. 1918, “War Work of the Biometric Laboratory.”
The following publications by E. S. Pearson are useful supps. to this work: “Some Incidents in the Early History of Biometry and Statistics, 1890–94,” in Biometrika, 52, pts. 1–2 (June 1965), 3–18 (paper 22 in Pearson and Kendall); “Some Reflexions on Continuity in the Development of Mathematical Statistics, 1885–1920,” ibid., 54, pts. 3–4 (Dec. 1967), 341–355 (paper 23 in Pearson and Kendall); “Some Early Correspondence Between W. S. Gosset, R. A. Fisher, and Karl Pearson, With Notes and Comments,” ibid., 55, no. 3 (Nov. 1968), 445–457 (paper 25 in Pearson and Kendall); Some Historical Reflections Traced Through the Development of the Use of Frequency Curves, Southern Methodist University Dept. of Statistics THEMIS Contract Technical Report no. 38 (Dallas, 1969); and “The Department of Statistics, 1971. A Year of Anniversaries. . .” (mimeo., University College, London, 1972).
Of the biographies of Karl Pearson in standard reference works, the most instructive are those by M. Greenwood, in the Dictionary of National Biography, 1931–1940 (London, 1949), 681–684; and Helen M. Walker, in International Encyclopedia of the Social Sciences, XI (New York, 1968), 496–503.
Apart from the above writings of E. S. Pearson, the most complete coverage of Karl Pearson’s career from the viewpoint of his contributions to statistics and biometry is provided by the obituaries by G. Udny Yule, in Obituary Notices of Fellows of the Royal Society of London, 2, no. 5 (Dec. 1936), 73–104; and P. C. Mahalanobis, in Sankhyā, 2, pt. 4 (1936), 363–378, and its sequel, “A Note on the Statistical and Biometric Writings of Karl Pearson,” ibid., 411–422.
Additional perspective on Pearson’s contributions to biometry and statistics, together with personal recollections of Pearson as a man, scientist, teacher, and friend, and other revealing information are in Burton H. Camp, “Karl Pearson and Mathematical Statistics,” in Journal of the American Statistical Association, 28, no. 184 (Dec. 1933), 395–401; in the obituaries by Raymond Pearl, ibid., 31, no. 196 (Dec. 1936), 653–664, and G. M. Morant, in Man, 36, no. 118 (June 1936), 89–92; and in Samuel A. Stouffer, “Karl Pearson—An Appreciation on the 100th Anniversary of His Birth,” in Journal of the American Statistical Association, 53, no. 281 (Mar. 1958), 23–27. S. S. Wilks, “Karl Pearson: Founder of the Science of Statistics,” in Scientific Monthly, 53, no. 2 (Sept. 1941), 249–253; and Helen M. Walker, “The Contributions of Karl Pearson,” in Journal of the American Statistical Association, 53, no. 281 (Mar. 1958), 11–22, are also informative and useful as somewhat more distant appraisals. L. N. G. Filon, “Karl Pearson as an Applied Mathematician,” in Obituary Notices of Fellows of the Royal Society of London, 2, no. 5 (Dec. 1936), 104–110, seems to provide the only review and estimate of Pearson’s contributions to applied mathematics and astronomy. Pearson’s impact on sociology is discussed by S. A. Stouffer in his centenary “Appreciation” cited above; and Pearson’s “rather special variety of Social-Darwinism” is treated in some detail by Bernard Semmel in “Karl Pearson: Socialist and Darwinist,” in British Journal of Sociology, 9, no. 2 (June 1958), 111–125. M. F. Ashley Montagu, in “Karl Pearson and the Historical Method in Ethnology,” in Isis, 34, pt. 3 (Winter 1943), 211–214, suggests that the development of ethnology might have taken a different course had Pearson’s suggestions been put into practice.
The great clash at the turn of the century between the “Mendelians,” led by Bateson, and the “ancestrians,” led by Pearson and Weldon, is described with commendable detachment, and its aftereffects assessed, by P. Froggatt and N. C. Nevin in “The ‘Law of Ancestral Heredity’ and the Mendelian-Ancestrian Controversy in England, 1889–1906,” in Journal of Medical Genetics, 8, no. 1 (Mar. 1971), 1–36; and “Galton’s ‘Law of Ancestral Heredity’: Its Influence on the Early Development of Human Genetics,” in History of Science, 10 (1971), 1–27.
Notable personal tributes to Pearson as a teacher, author, and friend, by three of his most distinguished pupils, L. N. G. Filon, M. Greenwood, and G. Udny Yule, and a noted historian of statistics, Harald Westergaard, have been preserved in Speeches Delivered at a Dinner Held in University College, London, in Honour of Professor Karl Pearson, 23 April 1934 (London, 1934), together with Pearson’s reply in the form of a five-page autobiographical sketch. The centenary lecture by J. B. S. Haldane, “Karl Pearson, 1857–1957,” published initially in Biometrika, 44, pts. 3–4 (Dec. 1957), 303–313, is also in Karl Pearson, 1857–1957. The Centenary Celebration at University College, London, 13 May 1957 (London, 1958), along with the introductory remarks of David Heron, Bradford Hill’s toast, and E. S. Pearson’s reply.
Other publications cited in the text are Allan Ferguson, “Trends in Modern Physics,” in British Association for the Advancement of Science, Report of the Annual Meeting, 1936, 27–42; Francis Galton, Natural Inheritance (London and New York, 1889; reissued, New York, 1972); R. A. Fisher, “The Correlation Between Relatives on the Supposition of Mendelian Inheritance,” in Transactions of the Royal Society of Edinburgh, 52 (1918), 399–433; H. L. Seal, “The Historical Development of the Gauss Linear Model,” in Biometrika, 54, pts. 1–2 (June 1967), 1–24 (paper no. 15 in Pearson and Kendall); and Helen M. Walker, Studies in the History of Statistical Method (Baltimore, 1931).
Churchill Eisenhart
[Contribution of the National Bureau of Standards, not subject to copyright.]
“Pearson, Karl.” Complete Dictionary of Scientific Biography. Encyclopedia.com.
Pearson, Karl
Karl Pearson, “founder of the science of statistics,” was born in London in 1857 and died at Coldharbour in Surrey, England, in 1936.
Pearson’s father, William Pearson, was a barrister, Queen’s Counsel, and a leader in the chancery courts. He was a man of great ability, with exceptional mental and physical energy and a keen interest in historical research, traits which his son also exhibited.
An incident from Pearson’s infancy, which Julia Bell, his collaborator, once related, contains in miniature many of the characteristics which marked his later life. She had asked him what was the first thing he could remember. He recalled that it was sitting in a high chair and sucking his thumb. Someone told him to stop sucking it and added that unless he did so, the thumb would wither away. He put his two thumbs together and looked at them a long time. “They look alike to me,” he said to himself. “I can’t see that the thumb I suck is any smaller than the other. I wonder if she could be lying to me.” Here in this simple anecdote we have rejection of constituted authority, appeal to empirical evidence, faith in his own interpretation of the meaning of observed data, and, finally, imputation of moral obliquity to a person whose judgment differed from his own. These characteristics were prominent throughout his entire career. (The chief source of information about Pearson’s early life is a 170-page memoir written immediately after his death by his son, Egon S. Pearson [1938], also a distinguished statistician.)
In Pearson’s early educational history there are indications of a phenomenal range of interests, unusual intellectual vigor, delight in controversy, the determination to resist anything which he considered misdirected authority, an appreciation of scholarship, and the urge to selfexpression, but there is almost no suggestion of any special leaning toward those studies for which he is now chiefly remembered.
In 1866 he was sent to University College School, London, but after a few years was withdrawn for reasons of health. At the age of 18 he obtained a scholarship at King’s College, Cambridge, being placed second on the list.
In an autobiographical note entitled “Old Tripos Days at Cambridge” (1936), published shortly before his death, he wrote of those undergraduate days as some of the happiest of his life. “There was pleasure in the friendships, there was pleasure in the fights, there was pleasure in the coaches’ teaching, there was pleasure in searching for new lights as well in mathematics as in philosophy and religion.” His tutor was Edward J. Routh, considered by some the most successful tutor in the history of Cambridge, a man for whom he developed a real affection. Pearson used to speak of the stimulation received from his mathematics teachers, Routh, Burnside, and Frost, and described contacts with other distinguished persons. He gave an amusing account of an examination held on four days in the homes of the four examiners: George Gabriel Stokes, whom he venerated as the greatest mathematical physicist in England and one of the two best lecturers he had ever known; James Clerk Maxwell, another great physicist but a poor lecturer; Arthur Cayley, lawyer and mathematician, inventor of the theory of matrices and the geometry of n-dimensional space; and Isaac Todhunter, who had by that time published his History of the Mathematical Theory of Probability. The examination paper set by Todhunter provided a turning point in Pearson’s career. A demonstration which he submitted in this examination was attached, with an approving comment, by Todhunter to the unfinished manuscript of his History of the Theory of Elasticity (1886–1893). After Todhunter’s death Pearson was invited to finish and edit this History. This task was the beginning of his vital association with the Cambridge University Press, whose proofs were, for the next half century, rarely absent from his writing table.
Besides mathematics and the theory of elasticity, his interests during his Cambridge years included philosophy, especially that of Spinoza; the works of Goethe, Dante, and Rousseau, which he read in the original; the history of religious thought; and a search for a concept of the Deity that would be consistent with what he knew of science. Deeply concerned with religion but resenting coercion, he challenged the university authorities, first by his refusal to continue attendance at compulsory divinity lectures and then by his objection to compulsory chapel. He won both fights, and the university regulations were altered, but he continued to attend chapel on a voluntary basis.
After taking his degree with mathematical honors at Cambridge in 1879, he read law at Lincoln’s Inn and was called to the bar in 1881. There followed travel in Germany and a period of study at the universities of Heidelberg and Berlin, where he balanced the study of physics with that of metaphysics; of Roman law with the history of the Reformation; of German folklore with socialism and Darwinism. After returning to England he was soon lecturing and writing on German social life and thought, on Martin Luther, Karl Marx, Maimonides, and Spinoza, contributing hymns to the Socialist Song Book, writing papers in the field of elasticity, teaching mathematics in King’s College, London, and engaging in literary duels with Matthew Arnold and the librarians of the British Museum.
One of the friendships which had a deep influence on his life was with Henry Bradshaw, librarian of Cambridge University, to whom Pearson referred in “Old Tripos Days” as “the man who most influenced our generation.” In a speech, he described Bradshaw as “the ideal librarian, but something greater—the guide of the young and foolish” and added that the librarian showed him what the essentials of true workmanship must be. So deep was their friendship that Bradshaw could reprove the younger man for excessive ardor and lack of wisdom in intellectual controversy.
His first publication, at 23 years of age, was a little book, which must have been largely autobiographical, entitled The New Werther (1880), written in the form of letters from a young man named “Arthur” to his fiancée. It foreshadows The Ethic of Freethought (1888) and The Grammar of Science (1892). Arthur writes:
I rush from science to philosophy, and from philosophy to our old friends the poets; and then, overwearied by too much idealism, I fancy I become practical in returning to science. Have you ever attempted to conceive all there is in the world worth knowing—that not one subject in the universe is unworthy of study? The giants of literature, the mysteries of many-dimensional space, the attempts of Boltzmann and Crookes to penetrate Nature’s very laboratory, the Kantian theory of the universe, and the latest discoveries in embryology, with their wonderful tales of the development of life—what an immensity beyond our grasp! . . . Mankind seems on the verge of a new and glorious discovery. What Newton did to simplify the various planetary motions must now be done to unite in one whole the various isolated theories of mathematical physics. (Quoted in Egon S. Pearson, 1938, p. 8)
All that the young writer wanted was a complete understanding of the universe. Thirty years later, the first issue of Biometrika carried as its frontispiece a picture of a statue of Charles Darwin with the words: Ignoramus, in hoc signo laboremus. Those five words ring out like the basic theme of Pearson’s life: “We are ignorant; so let us work.”
In 1884 Pearson became professor of applied mathematics and mechanics at University College, teaching mathematics to engineering students as well as courses on geometry. He occupied himself for the next few years with writing papers on elasticity, completing Todhunter’s History of the Theory of Elasticity, lecturing on socialism and free thought, publishing The Ethic of Freethought, writing in German (Die Fronica, 1887) a historical study of the Veronica legends concerning pictures of Christ, collecting material on the German passion play which later formed the substance of The Chances of Death (1897), completing a book called The Common Sense of the Exact Sciences (see 1885), which had been begun by W. K. Clifford, and taking an active part in a small club whose avowed purpose was to break down the conventional barriers which prevented free discussion of the relations between men and women.
In 1890 Pearson was invited to lecture on geometry at Gresham College, with freedom to choose the subject matter on which he would lecture. In March 1891 he delivered his first course of four lectures on “The Scope and Concepts of Modern Science.” In 1892 he published the first edition of The Grammar of Science, and in 1893 he wrote an article on asymmetrical frequency curves (1894). It is apparent that a very important change had taken place in his concepts of scientific method, that he had reached a new conviction about the statistical aspects of the foundations of knowledge, that problems of heredity and evolution had acquired a new urgency, and in short, that a dramatic change had taken place in his professional life.
Influences on Pearson’s thinking . In 1890 W. F. R. Weldon was appointed to the chair of biology at University College. Weldon was already acquainted with Francis Galton and engaged in statistical research. In 1890 Weldon published a paper on variations in shrimp, in 1892 one on correlated variations, and in 1893 a third paper which contains the sentence “It cannot be too strongly urged that the problem of animal evolution is essentially a statistical problem.” In 1893 that point of view was heresy. The importance for science of the intense personal friendship which soon sprang up between Pearson and Weldon, then both in their early thirties, can scarcely be exaggerated. Weldon asked the questions that drove Pearson to some of his most significant contributions. Weldon’s sudden death from pneumonia at the age of 46 was a heavy blow to science and a great personal tragedy to Pearson.
In 1889 Galton, then 67 years old, published Natural Inheritance, summarizing his researches between 1877 and 1885 on the subject of regression. This work moved Weldon to undertake his studies of regression in biological populations and moved Pearson to arithmetical researches that culminated in 1897 with the famous product-moment correlation coefficient r. The elaboration of correlations led, in other hands than Pearson’s, to such diverse statistical inventions as factor analysis and the analysis of variance. The stimulation Pearson received from Galton and the devotion he felt toward the older man show on every page of the four volumes of The Life, Letters and Labours of Francis Galton (1914–1930), one of the world’s great biographies. On this work of over 1,300 quarto pages and about 170 full-page plates, Pearson lavished some twenty years of work and much of his personal fortune.
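The product-moment coefficient mentioned here is simple to state in modern notation: r = S_xy / √(S_xx · S_yy), the sum of products of deviations from the means divided by the geometric mean of the two sums of squares. A minimal sketch (the function name is ours, not Pearson’s notation):

```python
import math

def pearson_r(x, y):
    """Product-moment correlation coefficient r = S_xy / sqrt(S_xx * S_yy)
    for paired observations x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-products
    sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares of x
    syy = sum((b - my) ** 2 for b in y)                   # sum of squares of y
    return sxy / math.sqrt(sxx * syy)

# A perfectly linear relationship gives r = 1.0
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

The coefficient ranges from −1 (perfect inverse linear relationship) through 0 (no linear relationship) to +1, which is what made it so serviceable a summary of the bivariate data Galton and Weldon were collecting.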
Development of a science of statistics . The year 1890 represented not only a turning point in Pearson’s career; it marked the beginning of the science of statistics. Antedating this development and preparing the way for it had been a long period of slowly increasing interest in the statistical way of thinking. In 1890 this interest was still sporadic, restricted in scope, and shared by very few people. It exhibited itself primarily in the collection of such public statistics as population data, vital statistics, and economic data. It was also evident in actuarial work and in the adjustment of observations in astronomy and meteorology, particularly least squares adjustment. Outside these areas, this development was hampered not only by lack of interest but also by paucity of data and the absence of adequate theory. Statistical theory was almost entirely that which had been developed by the great astronomers and mathematicians concerned with mathematical probability related to errors of observation. It related chiefly to the binomial distribution or the normal distribution of a single variable.
The gathering of public statistics by governments and semi-public agencies was well established. After about 1800 most of the industrialized countries had instituted the official national census. Several non-governmental societies had been set up, chiefly for the purpose of improving the quality of public statistics, for example, the Statistical Society of London (now the Royal Statistical Society) in 1834 and the American Statistical Association in 1839. Actuarial work had become a fairly well-developed and respected profession. The 25-year period from 1853 to 1878 was the era of the great international statistical congresses. Economic statistics moved ahead greatly in this period, with notable improvements in methods of gathering data. Governments were beginning to take physical measurements of their soldiers and were making these data available to anthropometrists.
Among Pearson’s predecessors were men who made significant contributions to the mathematical theory of probability in relation to gambling problems, but they never tested that theory on data and never proposed its application in any other area. Early theorists of this kind were Pierre de Fermat, Blaise Pascal, Christiaan Huygens, and Abraham de Moivre. Other mathematicians wrote about the possible application of probability theory to social phenomena, but they had no data: Jakob (Jacques) Bernoulli wrote on such possible application to economics; Daniel Bernoulli on inoculation as a preventive of smallpox; and Niklaus (Nicholas) Bernoulli, Condorcet, and Poisson, among others, on the credibility of testimony and related legal matters. Before 1800 William Playfair had invented the statistical graph and published many beautiful statistical charts from quite dubious data.
None of these men, then, either cared to test his theories on data or had appropriate data to work with, and contrariwise, many other men worked in statistical agencies tabulating data with very little idea of how to analyze them.
Two groups of persons, the actuaries and the mathematical astronomers, possessed both mathematical acumen and relevant data for testing theory, but neither group proposed a general statistical approach outside of its own field. The great mathematical astronomers of the first half of the nineteenth century, notably Laplace and Gauss, did lay the foundations for modern statistical theory by developing the concept of errors of observation and an impressive accompanying mathematical theory, and the ferment of ideas which they stimulated spread over Europe. Important contributions were made by Friedrich Wilhelm Bessel and Johann Franz Encke in Germany; Giovanni Plana in Italy; Adrien Marie Legendre, Poisson, Jean Baptiste Fourier, Auguste Bravais, and “Citizen” Kramp in France; Quetelet in Belgium; George Biddell Airy and Augustus De Morgan in England; and Thorvald Nicolai Thiele in Denmark. Only a very few persons before Pearson had thought of the statistical analysis of concrete data as a general method applicable to a wide range of problems; one such was Cournot, whose extensive writing, both on the theory of chance and on such matters as wealth and supply and demand, laid the foundations for mathematical economics. And more than any other person in the nineteenth century, Quetelet brought together mathematical theory, the collection of official statistics, and a concern for practical problems and fused the three into a single tool for studying the problems of life. Finally, of course, there was Galton, whose work had a great impact on Pearson and whose close friendship with him had an incalculable influence.
Although these men had put a high value on concrete data, the amount of data to which they had access was paltry beside what soon began to be collected by Pearson and his associates. Pearson always insisted on publishing the original data as well as the statistics derived from them. His primary aim was to develop a methodology for the exploration of life, not the refinement of mathematical theory. Whenever he developed a new piece of statistical theory, he immediately used it on data, and if his mathematics was cumbersome, this did not concern him.
Major contributions
Frequency curves
One of the problems on which Pearson spent a great deal of time and energy was that of deriving a system of generalized frequency curves based on a single differential equation, with parameters obtained by the method of moments. Quetelet seems to have believed that almost all social phenomena would show approximately normal distributions if the number of cases could be made large enough. Before 1890 J. P. Gram and Thiele in Denmark had developed a theory of skew frequency curves. After Pearson published his elaborate and extremely interesting system (1894; 1895), many papers were written on such related topics as the fitting of curves to truncated or imperfectly known distributions and tables of the probability distribution of selected curves.
Chi-square
Having fitted a curve to a set of observations, Pearson needed a criterion to indicate how good the fit was, and so he invented “chi-square” (1900). Quetelet and others who wanted to demonstrate the closeness of agreement between the frequencies in a distribution of observed data and frequencies calculated on the assumption of normal probability merely printed the two series side by side and said, in essence, “Behold!” They had no measure of discrepancy and were apparently not made uncomfortable by the lack of such a measure. Pearson not only devised the measure but he worked out its distribution and had it calculated. He himself never seems to have understood the concept of degrees of freedom, either in relation to chi-square or to his probable-error formulas. Yet chi-square is an enormously useful device with a range of applications far greater than the specific problem for which it was created, and it occupies an important position in modern statistical theory.
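In modern notation, Pearson’s criterion sums the squared discrepancies between observed and expected frequencies, each scaled by the expected frequency. A minimal sketch, with made-up counts (60 rolls judged against a fair-die model) used purely for illustration:

```python
# Pearson's chi-square goodness-of-fit criterion:
#   chi2 = sum over cells of (observed - expected)^2 / expected
# The counts below are hypothetical, chosen only to illustrate the formula.
observed = [8, 12, 9, 11, 10, 10]   # frequencies seen in 60 trials
expected = [10] * 6                 # fair-die model: 60/6 expected per face

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)
```

A small value indicates a close fit; deciding how small is “small” requires the sampling distribution of chi-square, which is precisely what Pearson worked out and had tabulated.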
Correlation
The idea of correlation is due to Galton, who published a paper entitled “Co-relations and Their Measurement, Chiefly From Anthropometric Data” in 1888 and another entitled “Regression Towards Mediocrity in Hereditary Stature” in 1885, and who gave a more widely read statement in Natural Inheritance (1889). The mathematics of the normal correlation surface had been derived earlier in connection with errors made in estimating the position of a point in space. In 1808 Adrain gave the first known derivation of the probability that two such errors will occur together but dealt with uncorrelated errors only. The density function for two related errors was given by Laplace in 1810 and for n related errors by Gauss in 1823 or perhaps earlier. Plana in 1812, studying the probability of errors in surveying, and Bravais in 1846, that of errors in artillery fire, each obtained an equation in which there is a term analogous to r. Being concerned with the probability of the occurrence of error and not with the strength of relationship between errors, these men all studied the density function and paid no attention to the product term in the exponent, which is a function of r. They applied their findings only to errors of observation, and the relation of their work to the correlation surface was noted only long after the important works on correlation had been written. In a study made in 1877 of the height, weight, and age of 24,500 Boston school children, Henry Pickering Bowditch published curves showing the relation of height to weight but missed discovering the correlation between the two variables.
Galton had been seriously hampered in his study of correlation by both lack of data and lack of an efficient routine of computation. He did have data on sweet peas and on the stature of parents and adult offspring for two hundred families. While Pearson began to lecture on correlation, Weldon began to make measurements on shrimp for correlation studies.
In Pearson’s first fundamental paper on correlation, entitled “Regression, Heredity and Panmixia” (1896), he generalized Galton’s conclusions and methods; derived the formula which we now call “Pearson’s product-moment” and two other equivalent formulas; gave a simple routine for computation which could be followed by a person without much mathematical training; stated the general theory of correlation for three variables; and gave the coefficients of the multiple regression equation in terms of the zero-order correlation coefficients.
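The product-moment formula divides the sum of products of deviations from the two means by the product of the root sums of squared deviations. A minimal sketch of that computation, with illustrative data only:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Product-moment correlation: covariance term over the product
    of the root sums of squared deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear made-up data: r comes out at (essentially) 1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

The same arithmetic, organized as a hand-computation routine on deviations from the means, is what made the coefficient usable by workers without much mathematical training.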
There followed a series of great memoirs on various aspects of correlation, some by students or associates of Pearson, such as G. Udny Yule and W. F. Sheppard, but most of them from his own hand. These dealt with such matters as correlation in non-normal distributions, tetrachoric r, correlation between ranks, and correlation when one or both variables are not scaled or when regression is not linear. There were many papers presenting the results of correlation analysis in a great variety of fields. A large amount of labor went into the derivation of the probable error of each of these various coefficients and the tabulation of various probabilities related to correlation. It is fitting that the product-moment correlation coefficient is named the “Pearson r.”
Individual variability
Variation among errors made in observations on the position of a heavenly body had been studied extensively by the great mathematical astronomers. The list of those who before 1850 had written on the “law of facility of error,” derived the formula for the normal curve, and compiled probability tables would be a long one. The term “probable error” had come into widespread use within a few years after Bessel employed the term der wahrscheinliche Fehler in 1815 in a paper on the position of the polar star.
The concept of true variability among individuals is very different from the concept of chance variation among errors in the estimation of a single value. The idea of individual variability is prominent in the writings of Quetelet, Fechner, Ebbinghaus, Lexis, Edgeworth, Galton, and Weldon, but it was not commonly appreciated by other scientists of the nineteenth century. Pearson’s emphasis upon this idea is one of his real contributions to the understanding of life. In that first great paper on asymmetrical frequency curves (1894) he introduced the term “standard deviation” and the symbol σ, and he consistently used this term and this symbol when discussing variation among individuals. However, when writing about sampling variability, he always used the term “probable error,” thus clearly distinguishing variability due to individual differences from variability due to chance errors.
Probable errors of statistics
Pearson himself probably considered that one of his greatest contributions was the derivation of the probable errors of what he called “frequency constants,” together with various tables to facilitate their computation. His method, already well known in other connections, was to write the equation for a statistic, take the differential of both sides of that equation, square, sum, and reduce the result by any algebraic devices he could think of. The process of reduction was often formidable. Even though he was not much concerned with the distinction between a statistic and its parameter and frequently used the former in place of the latter, these probable errors marked a great advance over the previous lack of any measure of the sampling variability of most statistics. In this era new statistics were being proposed on every side, and the amount of energy which went into the derivation of these probable errors was tremendous. With the successful search for exact sampling distributions that has been under way ever since the publication of Student’s work in 1908 and R. A. Fisher’s 1915 paper on the sampling distribution of the correlation coefficient, methods better than Pearson’s probable-error formulas are now available in many cases.
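Pearson’s differentiate-square-and-sum procedure corresponds to what is now called the delta method. In the simplest case, the probable error of a mean, it reduces to 0.6745 times the standard error σ/√n, where 0.6745 is the normal deviate within which half of all errors fall. A minimal numeric sketch with hypothetical values:

```python
from math import sqrt

def probable_error_of_mean(sigma, n):
    """Probable error = 0.6745 * standard error of the mean.
    0.6745 is (approximately) the normal quantile such that half
    of the errors fall within +/- 0.6745 standard deviations."""
    return 0.6745 * sigma / sqrt(n)

# Hypothetical example: individual standard deviation 10, sample of 100.
print(probable_error_of_mean(10.0, 100))
```

The era’s tables were argued in exactly this unit, which is why the conversion factor .6745σ recurs throughout the literature Pearson inherited.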
Publication of tables
An editorial in the first number of Biometrika (unsigned, but always attributed to Pearson) referred to the urgent need for tables to facilitate the work of the statistician and biometrician and promised that such tables would be produced as rapidly as possible. Such tables as were then available were in widely scattered sources, some of them almost impossible to obtain. By 1900, tables of the binomial coefficients, the trigonometric functions, logs, and antilogs were readily available. A large table of the logs of factorials computed in 1824 by F. C. Degen was almost unknown. There were tables of squares, cubes, square roots, and reciprocals, of which Barlow’s is the best known, and there were multiplication tables by Crelle and by Coatsworth. Legendre had published a table of logarithms of the gamma-function, but copies were very scarce. The normal probability function had been extensively tabulated, but always with either the probable error (.6745σ) or the modulus (σ√2) as argument, never the standard error. Poisson had not tabulated the distribution which bears his name, but Bortkiewicz had done so in 1898 (in his Gesetz der kleinen Zahlen).
A list of the tables which have been issued in Biometrika from its second issue in 1902 until the present, or which have appeared in the separate volumes of Tables for Statisticians and Biometricians (1914) or in the Drapers’ Company Series of Tracts for Computers, would be a very long one. Some of these tables are no longer used; others appear to be timeless in value, even after the advent of the electronic computer. The Tables of the Incomplete Beta-function (1934) was among Pearson’s last contributions to science, published when he was 77 years old.
Controversies
The frequent controversies in which Pearson was embroiled cannot be disregarded. In his youth he did battle for such unpopular radical ideas as socialism, the emancipation of women, and the ethics of free thought. A few years later he was involved in a long struggle for the unpopular idea that mathematics should be applied to the study of biology. Much bitterness arose over this question, and the Royal Society, while ready to accept papers dealing with either mathematics or biology, refused to accept papers dealing with both. That refusal was one of the circumstances that led to the founding of Biometrika in 1901, and this in turn gave great impetus to the young sciences of biometry and mathematical statistics: now there was a journal in which mathematical papers on the biological sciences could be published.
In 1904 Galton established the Eugenics Record Office to further the scientific study of eugenics. It became known as the Eugenics Laboratory two years later, when Galton turned it over to Pearson so that he might operate it in connection with his Biometric Laboratory. The Biometric Laboratory, whose existence Pearson dated back to 1895, was a center for training postgraduate workers in this new branch of exact science. In 1911 these two laboratories were united to form the department of applied statistics in University College, with Pearson as its first professor.
Beginning in 1907 the Eugenics Laboratory published numerous very substantial statistical papers on three of the most controversial issues of the day: pulmonary tuberculosis, alcoholism, and mental deficiency and insanity. These appeared in two series entitled “Studies in National Deterioration” and “Questions of the Day and of the Fray.”
In contrast to the idea then current that tuberculosis could be eradicated by improving the environment, Pearson’s statistical studies indicated that the predisposition to tuberculosis was more hereditary than environmental and that there was no clear evidence that patients treated in sanatoria had a higher recovery rate than those treated elsewhere.
Another common assumption at that time was that alcoholic parents produce children with mental and physical deficiencies. The first studies on this subject coming from the Eugenics Laboratory found no marked relation between parental alcoholism and the intelligence, physique, or disease of offspring (1910). Later papers concluded that alcoholism is more likely to be a consequence than a cause of mental defect. Pearson commented that “the time is approaching when real knowledge must take the place of energetic but untrained philanthropy in dictating the lines of feasible social reform” (quoted in Egon S. Pearson 1938, p. 61).
After the American Eugenics Record Office announced in 1912 that mental defect was almost certainly a recessive Mendelian character and advised that “weakness in any trait should marry strength in that trait and strength may marry weakness,” Pearson or his associates marshaled statistical evidence to refute this pronouncement.
Each time Pearson took up such an issue, the reaction of medical authorities and public officials was angry and violent, and their personal attacks on Pearson were prolonged and vituperative; open conflict also developed between the more traditional Eugenics Education Society, of which Galton was honorary president, and the Eugenics Laboratory, of which he had been the founder.
The young sciences of biometry and statistics may well have profited from these major struggles with organized groups that allowed them to break the restraining bonds of apathy, of ignorance, of entrenched authority. Pearson was something of a crusader, and among the qualities a crusader needs are selfconfidence, the courage to fight for his convictions, and a touch of intellectual intolerance. He was a perfectionist and had scant patience with ideas or work which he considered incorrect. Moreover, he was trained for a legal career and from childhood had in his father the example of a successful trial lawyer. However, his first thought was to get at the truth, and, if intellectually convinced of an error, Pearson was ready to admit it. He once published in Biometrika a paper called “Peccavimus” (“We Have Erred”).
Although Pearson made contributions to statistical technique that now appear to be of enduring importance, these techniques are of less importance than what he did in rousing the scientific world from a state of sheer uninterest in statistical studies to one of eager effort by a large number of welltrained persons, who developed new theory, gathered and analyzed statistical data from every field, computed new tables, and reexamined the foundations of statistical philosophy. This is an achievement of fantastic proportions. His laboratory was a world center in which men from all countries studied. Few men in all the history of science have stimulated so many other people to cultivate and to enlarge the fields they themselves had planted. He provided scientists with the concept of a general methodology underlying all science, one of the great contributions to modern thought.
Helen M. Walker
[For the historical context of Pearson’s work, see Statistics, article on The History of Statistical Method; and the biographies of the Bernoulli Family; Condorcet; Cournot; Fisher, R. A.; Galton; Gauss; Laplace; Moivre; Poisson; Quetelet; for discussion of the subsequent development of his ideas, see Goodness of Fit; Multivariate Analysis, articles on Correlation; Nonparametric Statistics, article on Ranking Methods; Statistics, Descriptive, article on Association; and the biography of Yule.]
Works by Pearson
1880 The New Werther. London: Kegan.
(1885) 1946 Clifford, William K. The Common Sense of the Exact Sciences. Edited, and with a preface, by Karl Pearson; newly edited by James R. Newman. New York: Knopf.
1886-1893 Todhunter, Isaac. A History of the Theory of Elasticity and of the Strength of Materials From Galilei to the Present Time. 2 vols. Edited and completed by Karl Pearson. Cambridge Univ. Press.
1887 Die Fronica: Ein Beitrag zur Geschichte des Christusbildes im Mittelalter. Strassburg (then Germany): Trübner.
(1888) 1901 The Ethic of Freethought, and Other Addresses and Essays. London: Black.
(1892) 1937 The Grammar of Science. 3d ed., rev. & enl. New York: Dutton. → A paperback edition was published in 1957 by Meridian.
(1894) 1948 Contributions to the Mathematical Theory of Evolution. I. Pages 1-40 in Karl Pearson, Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press. → First published as “On the Dissection of Asymmetrical Frequency Curves” in Volume 185 of the Philosophical Transactions of the Royal Society of London, Series A.
(1895) 1948 Contributions to the Mathematical Theory of Evolution. II: Skew Variation in Homogeneous Material. Pages 41-112 in Karl Pearson, Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press. → First published in Volume 186 of the Philosophical Transactions of the Royal Society of London, Series A.
(1896) 1948 Mathematical Contributions to the Theory of Evolution. III: Regression, Heredity and Panmixia. Pages 113-178 in Karl Pearson, Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press. → First published in Volume 187 of the Philosophical Transactions of the Royal Society of London, Series A.
1897 The Chances of Death, and Other Studies in Evolution. 2 vols. New York: Arnold.
(1898) 1948 Pearson, Karl; and Filon, L. N. G. Mathematical Contributions to the Theory of Evolution. IV: On the Probable Errors of Frequency Constants and on the Influence of Random Selection on Variation and Correlation. Pages 179-261 in Karl Pearson, Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press. → First published in Volume 191 of the Philosophical Transactions of the Royal Society of London, Series A.
(1900) 1948 On the Criterion That a Given System of Deviations From the Probable in the Case of a Correlated System of Variables Is Such That It Can Be Reasonably Supposed to Have Arisen From Random Sampling. Pages 339-357 in Karl Pearson, Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press. → First published in Volume 50 of the Philosophical Magazine, Fifth Series.
1906 Walter Frank Raphael Weldon: 1860-1906. Biometrika 5:1-52.
1907 A First Study of the Statistics of Pulmonary Tuberculosis. Drapers’ Company Research Memoirs, Studies in National Deterioration, No. 2. London: Dulau.
1910 Elderton, Ethel M.; and Pearson, Karl A First Study of the Influence of Parental Alcoholism on the Physique and Ability of the Offspring. Univ. of London, Francis Galton Laboratory for National Eugenics, Eugenics Laboratory, Memoirs, Vol. 10. London: Dulau.
(1914) 1930-1931 Pearson, Karl (editor) Tables for Statisticians and Biometricians. 2 vols. London: University College, Biometric Laboratory. → Part 1 is the third edition; Part 2 is the first edition.
1914-1930 The Life, Letters and Labours of Francis Galton. 3 vols. in 4. Cambridge Univ. Press.
(1922) 1951 Pearson, Karl (editor) Tables of the Incomplete Gamma-function. Computed by the staff of the Department of Applied Statistics, Univ. of London. London: Office of Biometrika.
1923 On the Relationship of Health to the Psychical and Physical Characters in School Children. Cambridge Univ. Press.
1934 Pearson, Karl (editor) Tables of the Incomplete Betafunction. London: Office of Biometrika.
1936 Old Tripos Days at Cambridge, as Seen From Another Viewpoint. Mathematical Gazette 20:27-36.
Karl Pearson’s Early Statistical Papers. Cambridge Univ. Press, 1948. → Contains papers published between 1894 and 1916.
Supplementary Bibliography
Annals of Eugenics. → Published since 1925 by the Eugenics Laboratory. Karl Pearson was the editor until his death and also contributed articles.
Biometrika: A Journal for the Statistical Study of Biological Problems. → Published since 1901, the journal was founded by W. F. R. Weldon, Francis Galton, and Karl Pearson. Edited by Karl Pearson until his death, and since then by Egon S. Pearson.
Drapers’ Company Research Memoirs Biometric Series. → Published since 1904. Contains a series of major contributions by Pearson and his associates. The series was edited by Pearson.
Filon, L. N. G. 1936 Karl Pearson: 1857-1936. Royal Society of London, Obituary Notices of Fellows 2, no. 5:73-109.
Fisher, R. A. 1915 Frequency Distribution of the Value of the Correlation Coefficient in Samples From an Indefinitely Large Population. Biometrika 10:507-521.
Haldane, J. B. S. 1957 Karl Pearson, 1857-1957: A Centenary Lecture Delivered at University College, London, on May 13, 1957. Biometrika 44:303-313.
Morant, Geoffrey (editor) 1939 A Bibliography of the Statistical and Other Writings of Karl Pearson. Cambridge Univ. Press.
Pearson, Egon S. 1938 Karl Pearson: An Appreciation of Some Aspects of His Life and Work. Cambridge Univ. Press. → First published in 1936 and 1938 in Volumes 28 and 29 of Biometrika. Contains a partial bibliography of Karl Pearson’s writings.
Walker, Helen M. 1958 The Contributions of Karl Pearson. Journal of the American Statistical Association 53:11-22.
Wilks, S. S. 1941 Karl Pearson: Founder of the Science of Statistics. Scientific Monthly 53:249-253.
"Pearson, Karl." International Encyclopedia of the Social Sciences. . Encyclopedia.com. 27 Feb. 2017 <http://www.encyclopedia.com>.
Pearson, Karl 1857-1936
Karl Pearson was one of the principal architects of the modern theory of mathematical statistics. His interests ranged from mathematical physics, astronomy, philosophy, history, literature, socialism, and the law to Darwinism, evolutionary biology, heredity, Mendelism, eugenics, medicine, anthropology, and craniometry. His major contribution, however, by his lights and by posterity’s, was to establish and advance the discipline of mathematical statistics.
The second son of William Pearson and Fanny Smith, Carl Pearson was born in London on March 27, 1857. In 1879 the University of Heidelberg changed the spelling of his name when it enrolled him as “Karl Pearson”; five years later he adopted this variant of his name and eventually became known as “KP.” His mother came from a family of seamen and mariners, and his father was a barrister and Queen’s Counsel. The Pearsons were a family of dissenters and of Quaker stock. By the time Carl was twenty-two he had rejected Christianity and adopted “Freethought” as a nonreligious faith that was grounded in science.
Pearson graduated with honors in mathematics from King’s College, Cambridge University in January 1879. He stayed in Cambridge to work in Professor James Stuart’s engineering workshop and to study philosophy in preparation for his trip to Germany in April. His time in Germany was a period of self-discovery, philosophically and professionally. Around this time, he began to write The New Werther, an epistolary novel on idealism and materialism, published in 1880 under the pseudonym of Loki (a mischievous Scandinavian god). In Heidelberg Pearson abandoned philosophy because “it made him miserable and would have led him to shortcut his career” (Karl Pearson, Letter to Robert Parker, 17 August 1879. Archive reference number: NW/Cor.23. Helga Hacker Pearson papers within Karl Pearson’s archival material held at University College London). Though he considered becoming a mathematical physicist, he discarded this idea because he “was not a born genius” (Karl Pearson, Letter to Robert Parker, 19 October 1879. Archive reference number/922. Karl Pearson’s archival material held at University College London). He stayed in Berlin and attended lectures on Roman international law and philosophy.
He returned to London and studied law at Lincoln’s Inn at the Royal Courts of Justice. He was called to the bar at the end of 1881 but practiced for only a very short time. Instead, he began to lecture on socialism, Karl Marx, Ferdinand Lassalle, and Martin Luther from 1880 to 1881, while also writing on medieval German folklore and literature and contributing hymns to the Socialist Song Book. In the course of his lifetime, he produced more than 650 publications, of which 400 were statistical; over a period of twenty-eight years he founded and edited six academic journals, of which Biometrika is the best known.
Having received the Chair of Mechanism and Applied Mathematics at University College London (UCL) in June 1884, Pearson taught mathematical physics, hydrodynamics, magnetism, electricity, and elasticity to engineering students. Soon after, he was asked to edit and complete William Kingdon Clifford’s The Common Sense of the Exact Sciences (1885) and Isaac Todhunter’s History of the Theory of Elasticity (1886).
THE GRESHAM LECTURES ON STATISTICS
Pearson was a founding member of the Men’s and Women’s Club, established in 1885 for the free and unreserved discussion of all matters concerning relationships of men and women. Among the various members was Marie Sharpe, whom he married in June 1890. They had three children, Sigrid, Helga and Egon. Six months after his marriage, he took up another teaching post in the Gresham Chair of Geometry at Gresham College in the City of London (the financial district), which he held for three years concurrently with his post at UCL. From February 1891 to November 1893, Pearson delivered thirtyeight lectures.
These lectures were aimed at a non-academic audience, whom Pearson wanted to introduce to a way of thinking that would influence how they made sense of the physical world. While his first eight lectures formed the basis of his book The Grammar of Science, the remaining thirty dealt with statistics because he thought this audience would understand insurance, commerce, and trade statistics and could relate to games of chance involving Monte Carlo roulette, lotteries, dice, and coins. In 1891 he introduced the histogram (a type of bar chart), and in 1893 he devised the standard deviation and the variance (to measure statistical variation). Pearson’s early Gresham lectures on statistics were influenced by the work of Francis Ysidro Edgeworth, William Stanley Jevons, and John Venn.
Pearson’s last twelve Gresham lectures signified a turning point in his career owing to the Darwinian zoologist W. F. R. Weldon (1860–1906), who was interested in using a statistical approach for problems of Darwinian evolution. Their emphasis on Darwinian populations of species, underpinned by biological variation, not only implied the necessity of systematically measuring variation but also prompted the reconceptualization of a new statistical methodology, which led eventually to the creation of the Biometric School at University College London in 1894. Earlier vital and social statisticians had been mainly interested in calculating averages and were not concerned with measuring statistical variation.
Pearson adapted the mathematics of mechanics, using the method of moments to construct a new statistical system to interpret Weldon’s asymmetrical distributions, since no such system existed at the time. Using the method of moments, Pearson established four parameters for curve fitting to show how data clustered (the mean) and spread (the standard deviation), whether there was a loss of symmetry (skewness), and whether the shape of the distribution was peaked or flat (kurtosis). These four parameters describe the essential characteristics of any empirical distribution and made it possible to analyze data that resulted in variously shaped distributions.
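The four moment-based summaries can be sketched in modern terms as follows. The sample is hypothetical, and kurtosis is given here in the modern excess form (0 for a normal curve, corresponding to Pearson’s β2 = 3):

```python
from math import sqrt

def moments(xs):
    """The four curve-fitting summaries obtained by the method of moments:
    mean, standard deviation, skewness, and (excess) kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    sd = sqrt(m2)
    skewness = m3 / m2 ** 1.5                   # 0 for a symmetric sample
    kurtosis = m4 / m2 ** 2 - 3                 # 0 for a normal distribution
    return mean, sd, skewness, kurtosis

# A symmetric toy sample: the skewness comes out 0.
print(moments([1, 2, 3, 4, 5]))
```

Higher moments weight extreme deviations more heavily, which is why the third captures asymmetry and the fourth captures peakedness or flatness.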
By the time Pearson finished his statistical lectures in May 1894, he had provided the infrastructure of his statistical methodology. He began to teach statistics at University College in October. By 1895 he had worked out the mathematical properties of the product-moment correlation coefficient (which measures the relationship between two continuous variables) and simple regression (used for the linear prediction between two continuous variables). In 1896 he introduced a higher level of mathematics into statistical theory, the coefficient of variation, the standard error of estimate, multiple regression, and multiple correlation, and in 1899 he established scales of measurement for continuous and discrete variables. Pearson devised more than eighteen methods of correlation from 1896 to 1911, including the tetrachoric, polychoric, biserial, and triserial correlations and the phi coefficient. Inspired and supported by Weldon, Pearson’s major contributions to statistics were: (1) introducing standardized statistical data-management procedures to handle very large sets of data; (2) challenging the tyrannical acceptance of the normal curve as the only distribution on which to base the interpretation of statistical data; (3) providing a set of mathematical statistical tools for the analysis of statistical variation; and (4) professionalizing the discipline of mathematical statistics. Pearson was elected a Fellow of the Royal Society in 1896 and awarded its Darwin Medal in 1898.
Pearson's ongoing work with curve fitting throughout the 1890s meant that he needed a criterion to determine how good a fit was. He continued to improve his methods until he devised his chi-square goodness-of-fit test in 1900 and introduced the concept of degrees of freedom. Although many other nineteenth-century scientists had attempted to find a goodness-of-fit test, they did not give any underlying theoretical basis for their formulas, which Pearson managed to do. The overriding significance of this test was that statisticians could now use statistical methods that did not depend on the normal distribution to interpret their findings. Indeed, the chi-square goodness-of-fit test represented Pearson's single most important contribution to the modern theory of statistics, for it substantially raised the level of practice of mathematical statistics. In 1904 Pearson established the chi-square statistic for discrete variables, to be used in contingency tables. He published the statistical innovations from his Gresham and UCL lectures in a set of twenty-three papers, "Mathematical Contributions to the Theory of Evolution," principally in Royal Society publications from 1893 to 1916, and established the first degree course in statistics in Britain in 1915.
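The statistic at the heart of the test can be sketched as follows (an illustrative modern reconstruction; Pearson's 1900 derivation of the statistic's probability distribution is considerably more involved):

```python
# Sketch of Pearson's chi-square goodness-of-fit statistic: the squared
# deviation of each observed count from its expected count, scaled by the
# expected count, summed over all categories. Referring the total to the
# chi-square distribution gives the probability of so poor a fit arising
# by chance alone.
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Example: a die thrown 60 times, compared with the uniform expectation
# of 10 per face. The statistic here works out to 1.0, a very good fit.
stat = chi_square([8, 9, 11, 12, 10, 10], [10] * 6)
```

Because the expected counts can come from any hypothesized distribution, not just the normal curve, the test freed statisticians to compare data against whatever frequency curve theory suggested.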
PEARSON’S FOUR LABORATORIES
In the twentieth century Pearson founded and managed four laboratories. He set up the Drapers' Biometric Laboratory in 1903 with a grant from the Worshipful Drapers' Company (which funded the laboratory until 1933). The methodology of this laboratory combined his statistical methods with numerous instruments. The problems investigated by the biometricians included natural selection, Mendelian genetics and Galton's law of ancestral inheritance, craniometry, physical anthropology, and theoretical aspects of mathematical statistics. A year after Pearson established the Biometric Laboratory, the Worshipful Drapers' Company gave him a grant to launch an Astronomical Laboratory equipped with a transit circle and a four-inch equatorial refractor.
In 1907 Francis Galton (then eighty-five years old) wanted to step down as director of the Eugenics Record Office, which he had set up three years earlier; he asked Pearson to take over the office, which Pearson subsequently renamed the Galton Eugenics Laboratory. Pearson had by then spent fourteen years developing the foundations of his statistical methodology, and his work schedule was so demanding that he took on this role only as a personal favor to Galton. Because Pearson regarded his statistical methods as unsuitable for problems of eugenics, he further developed Galton's actuarial death rates and family pedigrees for the methodology of the Eugenics Laboratory. The latter procedure led to his twenty-one-volume Treasury of Human Inheritance (1909–1930). In 1924 Pearson set up the Anthropometric Laboratory, made possible by a gift from his student Ethel Elderton. When Galton died in January 1911, he bequeathed his estate to UCL and named Pearson the first professor of eugenics. The Drapers' Biometric and the Galton Eugenics laboratories, which continued to function separately, were incorporated into the Department of Applied Statistics.
Although Pearson was a eugenicist, he eschewed eugenic policies. For him and his British contemporaries (e.g., Herbert Spencer, George Bernard Shaw, H. G. Wells, Marie Stopes, and Virginia Woolf), eugenics was principally a discourse about class, whereas in Germany and America the focus was on racial purity. The British were anxious that the country would be overrun by the poor unless the poor's rate of reproduction declined; the middle classes were thus encouraged to have more children. In any case, eugenics did not lead Pearson to develop any new statistical methods, nor did it play any role in the creation of his statistical methodology.
His wife, Marie Sharpe, died in 1928, and in 1929 he married Margaret Victoria Child, a co-worker in the Biometric Laboratory. Pearson was made emeritus professor in 1933 and given a room in the Zoology Department at UCL, which he used as the office for Biometrika. From his retirement until his death in 1936, he published thirty-four articles and notes and continued to edit Biometrika.
SCHOLARSHIP ON PEARSON
Pearson's statistical work and innovations, together with his philosophy and his ideas about Darwinism, evolutionary biology, Mendelism, eugenics, medicine, and elasticity, have been of considerable interest to innumerable scientists and scholars for more than a century. Throughout the twentieth century, many commentators viewed Pearson as a disciple of Francis Galton who merely expanded Galton's ideas on correlation and regression. Consequently, a number of scholars have falsely assumed that Pearson's motivation for creating a new statistical methodology arose from problems of eugenics. Among writers who have taken this view are Daniel Kevles, Bernard Norton, Donald MacKenzie, Theodore Porter, Richard Soloway, and Tukufu Zuberi. However, using substantial corroborative historical evidence from Pearson's archives, Eileen Magnello (1999) provided compelling documentation that Pearson managed the Drapers' Biometric and the Galton Eugenics laboratories separately; that they occupied separate physical spaces; that he maintained separate financial accounts; that he established very different journals for them; and that he created two completely different methodologies. Moreover, he took on his work in the Eugenics Laboratory very reluctantly and wanted to relinquish the post after one year. Pearson emphasized to Galton that the sort of sociological problems he was interested in pursuing for his eugenics program were markedly different from the research conducted in the Drapers' Biometric Laboratory.
Juxtaposing Pearson with Galton and eugenics has distorted the complexity and totality of Pearson's intellectual enterprises, since there was virtually no relationship between his research in "pure" statistics and his agenda for the eugenics movement. This long-established but misguided impression can be attributed to (1) an excessive reliance on secondary sources containing false assumptions, (2) the neglect of Pearson's voluminous archival material, (3) the use of a minute portion of his 600-plus published papers, (4) a conflation of some of Pearson's biometric and craniometric work with that of eugenics, and (5) a blatant misinterpretation and misrepresentation of Pearsonian statistics.
Continuing to link Galton with Pearson, Michael Bulmer (2003) suggested that the impetus for Pearson's statistics came from his reading of Galton's Natural Inheritance. However, Magnello (2004) argued that this view fails to take into account that Pearson's initial reaction to Galton's book in March 1889 was actually quite cautious. It was not until 1934, almost half a century later, when Pearson was seventy-eight years old, that he reinterpreted the impact of Galton's book on his statistical work in a more favorable light, long after he had established the foundations of modern statistics.
The central role that Weldon played in the development of Pearson’s statistical program has been almost completely overlooked by most scholars, except for Robert Olby (1988) and Peter Bowler (2003), who gave Weldon greater priority than Galton in Pearson’s development of mathematical statistics as it related to problems of evolutionary biology. Weldon’s role in Pearson’s early published statistical papers was acknowledged by Churchill Eisenhart (1974), Stephen Stigler (1986), and A. W. F. Edwards (1993). In all her papers, Magnello addressed Weldon’s pivotal role in enabling Pearson to construct a new mathematically based statistical methodology.
Norton (1978a, 1978b) and Porter (2004) argued that Pearson's iconoclastic and positivistic Grammar of Science played a role in the development of his statistical work. However, Magnello (1999, 2005a) disputed this, arguing that while The Grammar of Science represents Pearson's philosophy of science as a young adult, it does not reveal everything about his thinking and ideas, especially those connected with his development of mathematical statistics. Thus, she maintains, it is not helpful to read this book as an account of what Pearson was to do throughout the remaining forty-two years of his working life.
Although long-standing claims have been made by various commentators throughout the twentieth and early twenty-first centuries that Pearson rejected Mendelism, Magnello (1998) showed that Pearson did not reject Mendelism completely, but accepted its fundamental idea of discontinuous variation. Moreover, Philip Sloan (2000) argued that the biometricians' debates clarified issues in Mendelism that otherwise might not have been developed with the rigor they were to achieve.
Additionally, virtually all historians of science have failed to acknowledge that Pearson's and Galton's ideas, methods, and outlook on statistics were profoundly different. However, Bowler (2003) detected differences in their statistical thinking arising from their different interpretations of evolution, and Stigler acknowledged their divergent approaches to statistics in The History of Statistics (1986). Magnello (1996, 1998, 1999, 2002) explained that whereas Pearson's main focus was goodness-of-fit testing, Galton's emphasis was correlation; that Pearson's mathematics for doing statistics was more complex than Galton's; that Pearson was interested in very large data sets (more than 1,000 observations), whereas Galton was more concerned with smaller data sets of around 100 (owing to the explanatory power of percentages); and that Pearson undertook long-term projects over several years, while Galton wanted faster results. Moreover, Galton thought all data had to conform to the normal distribution, whereas Pearson emphasized that empirical distributions could take on any number of shapes.
Given the pluralistic nature of Pearson's scientific work and the complexity of his many statistical innovations, twinned with his multifaceted persona, Pearson will no doubt continue to interest future scholars. His legacy of establishing the foundations of contemporary mathematical statistics helped to create the modern worldview: his statistical methodology not only transformed our vision of nature but also gave scientists a set of quantitative tools for conducting research, accompanied by a universal scientific language that standardized scientific writing in the twentieth century. His work provided the foundation for statisticians such as R. A. Fisher, who made further advances in the modern theory of mathematical statistics.
SEE ALSO Chi-Square; Regression Analysis; Statistics
BIBLIOGRAPHY
Bowler, Peter J. 2003. Evolution: The History of an Idea. 3rd ed. Berkeley: University of California Press.
Bulmer, Michael. 2003. Francis Galton: Pioneer of Heredity and Biometry. Baltimore, MD: Johns Hopkins University Press.
Edwards, A. W. F. 1993. Galton, Pearson and Modern Statistical Theory. In Sir Francis Galton, FRS: The Legacy of His Ideas, ed. Milo Keynes. London: Palgrave Macmillan.
Eisenhart, Churchill. 1974. Karl Pearson. In Dictionary of Scientific Biography 10, 447–473. New York: Scribner’s.
Hilts, Victor. 1967. Statist and Statistician. New York: Arno Press, 1981.
Kevles, Daniel. 1985. In the Name of Eugenics: Genetics and the Uses of Human Heredity. New York: Knopf.
MacKenzie, Donald. 1981. Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge. Edinburgh: Edinburgh University Press.
Magnello, M. Eileen. 1996. Karl Pearson’s Gresham Lectures: W. F. R. Weldon, Speciation and the Origins of Pearsonian Statistics. British Journal for the History of Science 29: 43–64.
Magnello, M. Eileen. 1998. Karl Pearson’s Mathematisation of Inheritance: From Galton’s Ancestral Heredity to Mendelian Genetics (1895–1909). Annals of Science 55: 35–94.
Magnello, M. Eileen. 1999. The Non-Correlation of Biometrics and Eugenics: Rival Forms of Laboratory Work in Karl Pearson's Career at University College London. History of Science 37: Part 1, 79–106; Part 2, 123–150.
Magnello, M. Eileen. 2002. The Introduction of Mathematical Statistics into Medical Research: The Roles of Karl Pearson, Major Greenwood and Austin Bradford Hill. In The Road to Medical Statistics, eds. Eileen Magnello and Anne Hardy, 95–124. New York and Amsterdam: Rodopi.
Magnello, M. Eileen. 2004. Statistically Unlikely. Review of Francis Galton: Pioneer of Heredity and Biometry, by Michael Bulmer. Nature 428: 699.
Magnello, M. Eileen. 2005a. Karl Pearson and the Origins of Modern Statistics: An Elastician Becomes a Statistician. The Rutherford Journal: The New Zealand Journal for the History and Philosophy of Science and Technology 1 (December). http://rutherfordjournal.org/.
Magnello, M. Eileen. 2005b. Karl Pearson, Paper on the Chi-Square Goodness of Fit Test. In Landmark Writings in Western Mathematics: Case Studies, 1640–1940, ed. Ivor Grattan-Guinness, 724–731. Amsterdam: Elsevier.
Norton, Bernard. 1978a. Karl Pearson and the Galtonian Tradition: Studies in the Rise of Quantitative Social Biology. PhD diss., University College London.
Norton, Bernard. 1978b. Karl Pearson and Statistics: The Social Origin of Scientific Innovation. Social Studies of Science 8: 3–34.
Olby, Robert. 1988. The Dimensions of Scientific Controversy: The Biometrician-Mendelian Debate. British Journal for the History of Science 22: 299–320.
Pearson, Egon. 1936–1938. Karl Pearson: An Appreciation of Some Aspects of His Life and Work. Part 1, 1857–1905. Biometrika (1936): 193–257; Part 2, 1906–1936 (1938): 161–248. (Reprinted Cambridge, U.K.: Cambridge University Press, 1938).
Pearson, Karl. 1914–1930. The Life, Letters and Labours of Francis Galton. 3 vols. Cambridge, U.K.: Cambridge University Press.
Porter, Theodore M. 1986. The Rise of Statistical Thinking: 1820–1900. Princeton, NJ: Princeton University Press.
Porter, Theodore M. 2004. Karl Pearson: The Scientific Life in a Statistical Age. Princeton, NJ: Princeton University Press.
Sloan, Philip R. 2000. Mach’s Phenomenalism and the British Reception of Mendelism. Comptes Rendus de l’Académie des sciences 323: 1069–1079.
Soloway, Richard A. 1990. Demography and Degeneration: Eugenics and the Declining Birthrate in TwentiethCentury Britain. Chapel Hill: University of North Carolina Press.
Stigler, Stephen M. 1986. The History of Statistics: The Measure of Uncertainty before 1900. Cambridge, MA: Belknap Press.
Stigler, Stephen M. 1999. Statistics on the Table: The History of Statistical Concepts and Methods. Cambridge, MA: Harvard University Press.
Zuberi, Tukufu. 2001. Thicker Than Blood: How Racial Statistics Lie. Minneapolis: University of Minnesota Press.
M. Eileen Magnello
Pearson, Karl
PEARSON, KARL
(b. London, England, 27 March 1857; d. Coldharbour, Surrey, England, 27 April 1936),
applied mathematics, biometry, statistics, philosophy and social role of science, eugenics. For the original article on Pearson see DSB, vol. 10.
Pearson largely founded statistics as a mathematical field, and Churchill Eisenhart's excellent entry on Pearson for the original Dictionary of Scientific Biography reasserts his importance as a giant of that field. Eisenhart was in a way defending Pearson against the critique of Ronald Aylmer Fisher and his followers, for whom Pearson, having made the way crooked, was not even a worthy precursor. Pearson and Fisher were embattled for almost two decades, from the late 1910s to Pearson's death in 1936 and beyond. The historical memory of statisticians was shaped in part by this controversy, and in particular by a somewhat technical disagreement over the number of degrees of freedom appropriate for a chi-square test. Fisher convinced many that Pearson, by getting it wrong, revealed profound mathematical incapacities. With this dispute clearly in mind, Eisenhart develops at some length a more charitable view of Pearson's contribution, and Stephen Stigler (1999) has shown more recently that Pearson's formulation reflected a subtly different understanding of a tricky problem rather than a mere mathematical howler.
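The technical point at issue can be sketched for a contingency table with r rows and c columns (a simplified modern summary of the dispute, not either man's own notation): Pearson counted only the single constraint that the cell frequencies sum to the total, while Fisher argued that every marginal total fitted from the data costs a further degree of freedom.

```python
# Degrees of freedom for the chi-square test on an r-by-c contingency table,
# under the two rival prescriptions at the heart of the Pearson-Fisher dispute.
def df_pearson(r, c):
    # Pearson's 1900 rule: number of cells minus one constraint.
    return r * c - 1

def df_fisher(r, c):
    # Fisher's 1922 correction: also subtract the row and column
    # totals estimated from the data, leaving (r-1)(c-1).
    return (r - 1) * (c - 1)
```

For a 2-by-2 table the two prescriptions give 3 and 1 degrees of freedom respectively, which leads to quite different probabilities for the same value of the statistic, hence the practical force of the quarrel.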
Indeed, an important difference of perspective lay behind the battles between these founding statisticians. Pearson was a visionary in his claims for the historical and social significance of the new mathematical science, but his methods were somewhat makeshift, whereas Fisher's mathematical approach was more systematic and coherent. Perhaps the most fundamental difference between the two men is that Fisher was committed to an alliance of statistics and experimentation, while Pearson, who emphasized curve fitting, did not distinguish systematically between observation and experiment. Pearson regarded Fisher's mature work as the misguided deployment of mathematical virtuosity to conjure bold conclusions from scanty data, and Fisher, who saw Pearson mainly as an obstacle to statistics rather than a new point of departure, preferred to trace his own lineage to others such as William Sealy Gosset, who had learned statistics in Pearson's biometric laboratory. Fisher's insistence on his own originality was largely accepted by the next generation of statisticians. Under other circumstances, Fisher's highly original and mathematically elegant program of statistical inference, which, like Pearson's, focused on biological measures, might have been viewed as the fulfillment of Pearson's vision rather than its rejection.
Pearson's Road to Statistics Pearson began working in earnest on statistics in about 1892, and scholars have been greatly interested in how he got there. Pearson himself was acutely conscious of what he owed to Francis Galton's work on statistics, evolution, and eugenics (a field of study concerned with improving the hereditary qualities of a race or breed), and his four-volume Life, Letters and Labours of Francis Galton attests massively to this sense of obligation. But he had been somewhat skeptical of Galton's attempt to subject biology and the human sciences to quantitative reason when he reported on Natural Inheritance to his "Men and Women's Club" in 1889. His faith that statistics was the proper method for the study of evolution owed, in the first instance, to discussions and then collaboration with his biological colleague at University College London, Walter Frank Raphael Weldon. Eileen Magnello (1996) particularly emphasizes Weldon's responsibility for Pearson's change of direction, and discusses at length the role of his quantitative ambitions in their shared program for the statistical study of evolution. Weldon, however, had himself found in Galton's work a convincing alternative to the morphological perspective within which he had been trained, and a key aspect of Weldon's role was indeed to persuade Pearson of Galton's fundamental importance. Weldon and Galton communicated frequently about Pearson's mathematical program, and while they were enthusiastic, they also worried that his scheme of quantitative description tended to suppress the biology.
Stigler (1986) has called attention to Pearson's interactions in the early 1890s with the economist and utilitarian philosopher Francis Ysidro Edgeworth, who had been writing on statistical mathematics since 1883. In his 1892 Newmarch Lectures, which Pearson may have attended, Edgeworth included a discussion of graphical methods of statistics. This work informed Pearson's own important lectures on graphical statistics, delivered at Gresham College from 1891 to 1894. In fact, in the years just before 1890, Pearson was already growing more and more committed to graphical geometry, which he was then teaching to engineering students at University College London. He saw graphs as a means not only to represent scientific problems but also to solve them, and he spoke euphorically of the dawn of a new mathematical epoch. Whereas René Descartes (1596–1650), with his analytic geometry, had made algebra the proper foundation for geometry, Pearson now imagined that a graphical form of geometry could become the master science, providing visually satisfying solutions even to problems of algebra. This program of graphical description lived on through the 1890s and beyond in a statistical method based first of all on fitting curves to data and then comparing these curves with other data (though often algebraically rather than visually). His chi-square test of goodness of fit, introduced in 1900, was a way of assessing probabilistically the fit of data points to a frequency distribution. He used this test, for example, to determine whether some newly discovered skulls could be from the same population (or race) as others that had been excavated nearby, or whether the death rate of a vaccinated population diverged significantly from that of an unvaccinated one.
Pearson's statistical enthusiasm was about science and not primarily a matter of mathematics. He was thrilled to discover, through his exchanges with Weldon, that he possessed the tools to work out a new, quantitative basis for Charles Darwin's theory of evolution, which he had already come to regard as the great scientific advance of the nineteenth century. The statistical journal Pearson and Weldon began to publish in 1901 was titled, significantly, Biometrika, and in the first decade of the twentieth century these biometricians were virtually the only prominent scientific supporters of Darwinian evolution by natural selection. Given the proto-eugenic aspect of Pearson's essays on the "woman's question" beginning in the mid-1880s, it is plausible that Pearson's program of statistical biology had from the outset an important eugenic aspect. In alliance with Galton, whose studies of evolution had been motivated by eugenic ambitions since the time of his first paper on heredity in 1865, Pearson took up eugenics in earnest at the beginning of the new century. Within a decade, it would become an international movement of sociobiological politics. Not all of Pearson's statistical work was biometrical, and much even of the biometry was not directly linked to eugenics. Conversely, eugenic study was often distinct from Pearsonian statistics, and Pearson himself presided over an important project of nonstatistical eugenic study within the eugenics laboratory that Galton had established earlier. Still, as Donald MacKenzie argued in 1981, eugenic ambitions pervaded the new statistics. From craniometry to public health to education and intelligence testing, they were never far below the surface. Pearson's somewhat alarmist public lectures on the eugenic threat of national deterioration were unfailingly expressed in the statistical idiom of differential fertility and high heritability.
The Controversialist While Pearson was especially devoted to biometry, statistics for him was a field of applied mathematics that stood above every particular scientific discipline. The task of building up his new field involved both the development of appropriate technical tools and the training of people, and Pearson worked at both. Almost from the beginning, he set about creating a standard technical language for statistics, fixing or coining such terms as "normal law" and "standard deviation." He worked out a formula, actually a family of formulas, for the coefficient of correlation, which Galton had estimated loosely using graphical methods, and he undertook to systematize the analysis of frequency distributions. He fitted curves using a method of moments, which meant choosing the parameters of a distribution formula to match the average (first moment), the standard deviation (second moment), and so on, of the data points. Pearson's goal was to work out methods of statistics that could be applied to any field whatsoever and that would raise the standard of statistical practice throughout the sciences.
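As a minimal modern illustration of the idea, here is a method-of-moments fit of a normal curve, where matching the first two moments fixes both parameters (an illustrative sketch only; Pearson's own system used higher moments to select and fit a much wider family of frequency curves):

```python
import math

# Method-of-moments fit of a normal curve: equate the curve's two
# parameters to the data's first moment (mean) and second central
# moment (variance).
def fit_normal(xs):
    n = len(xs)
    mu = sum(xs) / n
    sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return mu, sigma

def normal_density(x, mu, sigma):
    # The frequency curve implied by the fitted parameters.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
```

With more flexible curves, the third and fourth moments (skewness and kurtosis) come into play as well, which is what allowed Pearson's system to accommodate the asymmetrical distributions that defeated the normal law.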
It was a Herculean task—of cleaning stables more often than killing lions—since every field had its own distinctive problems and since many did not welcome the imperious if well-meaning interventions of this acerbic outsider. In some cases, as in the emerging science of heredity, there was sharp resistance to the effort to conceive the field as fundamentally statistical. For William Bateson and other pioneers of genetics, science meant experimentation: laboratories and experimental interventions designed to reveal underlying causes acting at the level of individuals. Pearson and Weldon, by contrast, conceived the study of heredity as fundamental to Darwin's theory of evolution by natural selection, and they announced in the introduction to the first volume of Biometrika that evolution is a science of mass phenomena, to be investigated through the study of populations rather than of individuals. Although modern scholarship has made it clear that Pearson never rejected the possible reality of Mendelian genes, and even took on occasion a positive interest in them, the Weldon-Pearson program involved a refashioning of biology that many biologists found unacceptable.
Others, however, welcomed statistics, and some even viewed the biometric program with great favor. The American biologist Raymond Pearl, who was interested in population dynamics, traveled from Baltimore, Maryland, to spend a term in Pearson's laboratory. The Danish botanist Wilhelm Johannsen, who regarded statistics as at least equal in importance to Mendelism for the study of heredity, also came to London to meet the biometricians, though Pearson rebuffed him. The botanist Hugo de Vries, from the Netherlands, was a great admirer of Adolphe Quetelet and of statistics. Still, Pearson fought with all of them, including his student Pearl. The problem was not that they rejected statistics, but that they failed to practice statistics at the level of his expectations or in accordance with his dogmas. In later life, Pearson liked to think of himself as struggling heroically against a conception of science that was indifferent or hostile to statistics, but in fact statistics was springing up in science everywhere. Most of Pearson's battles were provoked not by a rejection of statistics, but by rival statistical practices, dissenters from the church biometric. Many of his opponents were far less skilled in mathematics than Pearson, but some, including Fisher, were capable or even superb mathematicians. Probably Pearson's most distinguished statistical pupils were George Udny Yule and Major Greenwood, and for a time each was very close to him personally as well as scientifically, but later he fought with both. His dispute with Yule over measures of association culminated in a 150-page rebuttal and in Yule's expulsion, for a time, from the community over which Pearson, in effect, presided.
Other controversies included a sharp exchange, and what seem to have been bad or nonexistent personal relations, with Pearson's colleague at University College, the psychologist Charles Spearman, who used correlations among school tests to define a unitary measure of intelligence. Spearman is known to history as an important statistical psychologist, but Pearson thought his methods inadequate and was forthright about saying so. Many of Pearson's disputes involved medicine, and again this was a case of denouncing what he saw as incorrect statistical methods rather than pushing the need for statistics where it was not being used. Often his discontent also reflected a conviction that the environmental causes emphasized by physicians were less important than hereditary ones. One of the best known of these environmental claims concerned the presumed effects of parental alcoholism on the health and ability of the child. Pearson (with Ethel Elderton, a worker in his laboratory) procured and analyzed data from two institutions for children to show statistically that the apparent effects of alcohol on the offspring were either nonexistent or small. Such effects as appeared in the statistics, they proposed, might well be due to hereditary correlations with other mental characteristics of the parents. In this case, the economist John Maynard Keynes joined in to defend the temperance reformers against Pearson's eugenic doubts. Pearson crossed swords with the public health official Arthur Newsholme over the causes of infant mortality, which Pearson preferred to attribute to hereditary weakness rather than to an unhealthy environment, and with the pathologist Almroth Wright over the effectiveness of antityphoid inoculation. In most of these cases Pearson objected that the doctors were taking superficial associations at face value rather than employing the more advanced methods that might allow a deeper understanding of the effective causes.
Statistics and the Social Role of Science Deeper understanding, at least in the form of causal knowledge, is what Pearson the positivist is supposed to have rejected, and indeed he often argued that science can only describe rather than explain, or that causation is merely the limit of correlation. But he used such rhetoric opportunistically, and his philosophy did not keep him from trying to cut through meaningless correlations or from giving explanations in terms of entities that are wholly inaccessible to the senses. The most striking of these is the "ether squirts," whose hydrodynamics, Pearson suggested in The Grammar of Science, might explain the phenomena of physics and chemistry. Pearson's philosophy owed less to the Austrian positivist Ernst Mach than to post-Kantian idealists such as Johann Gottlieb Fichte. The phenomenal world, he supposed, is not an external reality, but is created by the human mind. Yet if the mind can spin our visible world out of itself, it can also conceive, and in this sense create, genes and ether squirts.
Pearson typically invoked philosophical considerations to challenge the supposed limits of knowledge rather than to assert them. Science, he insisted, is a method, applicable to any topic whatsoever. It applies just as well to social life as to the motions of the planets. Although in his maturity he spoke of statistics as paradigmatic of scientific method, he did not undertake to codify that method. Instead, he presented science as a moral virtue. For Pearson, scientific method meant honest, disinterested investigation, so that opinion could be grounded in facts rather than in prejudice or selfinterest. Science makes us citizens, he continued, by teaching us to accept as valid for ourselves only what is valid for everyone. It is tantamount to socialism, a naturalistic basis for knowledge that must sever any tie between religion and rationality. Science involves, in several senses, renunciation: of beliefs grounded in prejudice or selfish interest; of the quest for higher meanings; and of the possibility of direct sensuous contact with a world outside us. However, these sacrifices, which are simultaneously moral and epistemological, can provide the foundation of an efficient social life based on genuine knowledge.
As a young man, Pearson was fascinated by history, especially that of the German Reformation, and by what could be gleaned from it about the historical situation of his own time. The crucial factors of history, he thought, were property and the relations of the sexes, represented in modern times by the movements of workers and of women, and he expected that the era of selfish capitalism would soon give way to a new socialism. The bigotry and ignorance of the German religious reformer Martin Luther (1483–1546), he argued, had made the rise of individualistic capitalism as painful and inefficient as it could have been, and he looked to science rather than class violence to ease the birth of modern collectivism. He took the lead in forming a "Men and Women's Club" to investigate the historical conditions and modern possibilities of relations between the sexes, and he became for a time an important intellectual authority for the women's movement. His growing concerns with statistics and biological evolution were linked to his sense of large historical changes calling for new forms of science. Pearson was concerned equally with the content of science and with the character and roles of the scientist. His ambition to recover some features of the medieval university, especially the intense personal relationship of master to student, crystallized in the form of the biometric and eugenic laboratories, where textbooks were eschewed in favor of closely supervised research. He wanted to avoid reducing statistics, or science generally, to something formulaic, stressing instead that the progress of science depends on cultivated individuality. But it was no easy matter to fuse individuality with impersonal objectivity or to ground wisdom in quantitative methods, and Pearson, whose habit was always to set himself against the conventions of his time, often spoke with regret in later life over the reduction of science to a mere profession.
SUPPLEMENTARY BIBLIOGRAPHY
Aldrich, John. “The Language of the English Biometric School.” International Statistical Review 71 (2003): 109–131.
Eyler, John. Sir Arthur Newsholme and State Medicine, 1885–1935. Cambridge, U.K.: Cambridge University Press, 1997.
Gayon, Jean. Darwin et l’après-Darwin: Une histoire de l’hypothèse de sélection naturelle. Paris: Éditions Kimé, 1992.
Kevles, Daniel J. In the Name of Eugenics: Genetics and the Uses of Human Heredity. New York: Knopf, 1985.
Kingsland, Sharon. Modeling Nature: Episodes in the History of Population Ecology. Chicago: University of Chicago Press, 1985.
MacKenzie, Donald. Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge. Edinburgh, U.K.: Edinburgh University Press, 1981.
Magnello, Eileen. “Karl Pearson’s Gresham Lectures: W. F. R. Weldon, Speciation, and the Origins of Pearsonian Statistics.” British Journal for the History of Science 29 (1996): 43–63.
———. “Karl Pearson’s Mathematization of Inheritance: From Ancestral Heredity to Mendelian Genetics (1895–1909).” Annals of Science 55 (1998): 35–94.
———. “The Non-Correlation of Biometrics and Eugenics: Rival Forms of Laboratory Work in Karl Pearson’s Career at University College London.” History of Science 37, pts. 1 and 2 (1999): 79–106, 123–150.
Matthews, J. Rosser. Quantification and the Quest for Medical Certainty. Princeton, NJ: Princeton University Press, 1995.
Norton, Bernard J. “Metaphysics and Population Genetics: Karl Pearson and the Background to Fisher’s Multi-Factorial Theory of Inheritance.” Annals of Science 32 (1975): 537–553.
———. “Karl Pearson and Statistics: The Social Origins of Scientific Innovation.” Social Studies of Science 8 (1978): 3–34.
Porter, Theodore M. The Rise of Statistical Thinking, 1820–1900. Princeton, NJ: Princeton University Press, 1986.
———. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press, 1995.
———. Karl Pearson: The Scientific Life in a Statistical Age. Princeton, NJ: Princeton University Press, 2004.
Provine, William B. The Origins of Theoretical Population Genetics. Chicago: University of Chicago Press, 1971.
Roll-Hansen, Nils. “The Crucial Experiment of Wilhelm Johannsen.” Biology and Philosophy 4 (1989): 303–329.
Stamhuis, Ida. “The Reaction on Hugo de Vries’s Intracellular Pangenesis: The Discussion with August Weismann.” Journal of the History of Biology 36 (2003): 119–152.
Stigler, Stephen. The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge, MA: Belknap Press of Harvard University Press, 1986.
———. Statistics on the Table: The History of Statistical Concepts and Methods. Cambridge, MA: Harvard University Press, 1999.
Szreter, Simon. Fertility, Class, and Gender in Britain 1860–1940. Cambridge, U.K.: Cambridge University Press, 1996.
Walkowitz, Judith R. City of Dreadful Delight: Narratives of Sexual Danger in Late-Victorian London. Chicago: University of Chicago Press, 1992.
Theodore M. Porter