The normal distribution is the single most important distribution in the social sciences. It is described by the bell-shaped curve defined by the probability density function

f(x) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)],

where exp is the exponential function, μ the mean of the distribution, σ the standard deviation, and σ² the variance. As a matter of convenience, this distribution is often expressed as X ~ N(μ, σ²). If X ~ N(0, 1), so that μ = 0 and σ² = 1, the outcome is the standard normal distribution. The resulting curve is shown in Figure 1, where the horizontal axis indicates values of X in terms of positive and negative integer values of the standard deviation. The curve’s shape is typical of normally distributed variables, even when they have different means and variances.
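As an illustration, the density can be evaluated directly from this formula. The short Python sketch below (using only the standard library; the function name and parameter values are illustrative) computes f(x) for a given mean and standard deviation:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Peak of the standard normal curve is 1/sqrt(2*pi)
print(round(normal_pdf(0.0), 4))  # → 0.3989
```

Note that the peak height depends only on σ, while μ merely shifts the curve along the horizontal axis.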
The normal distribution has two significant features. First, the curve is perfectly symmetrical about the mean of the distribution. As a result, the distribution mean is identical to the two alternative measures of central tendency, namely, the mode (the most frequent value of X ) and the median (the middle value of X ). Second, the mathematical function provides the basis for specifying the number of observations that should fall within select portions of the curve. In particular, approximately 68.3 percent of the
observations will likely fall within one standard deviation of the mean. In the case of the standard normal distribution, this would indicate that more than two-thirds of the observations would have a value between –1 and +1. Moreover, about 95.4 percent of the observations would fall within two standard deviations above and below the mean, and about 99.7 percent would fall within three standard deviations below and above the mean. Hence, relatively few observations are expected in the upper and lower tails of the distribution; the more extreme the departure from the mean, the lower the score’s probability of occurrence.
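These percentages follow from the fact that the normal integral reduces to the error function, which Python's standard library provides. The sketch below (function name is illustrative) reproduces the 68.3/95.4/99.7 figures:

```python
import math

def within_k_sigma(k):
    """P(mu - k*sigma <= X <= mu + k*sigma) for any normal X."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(k, round(100 * within_k_sigma(k), 1))
# → 1 68.3
# → 2 95.4
# → 3 99.7
```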
The normal distribution was first associated with errors of measurement. In the first half of the seventeenth century Galileo Galilei (1564–1642) noticed that the errors in astronomical observations were not totally random. Instead, not only did small errors outnumber large errors, but also the errors tended to be symmetrically distributed around a central value. In the first decade of the nineteenth century the mathematicians Adrien-Marie Legendre (1752–1833) and Carl Friedrich Gauss (1777–1855) worked out the precise mathematical formula, and Gauss demonstrated that this curve provided a close fit to the empirical distribution of observational errors. Gauss also derived the statistical method of least squares from the assumption that errors were normally distributed.
However, the normal distribution also appeared in other mathematical contexts. In the early eighteenth century Abraham de Moivre (1667–1754) showed that certain binomial distributions could be approximated by the same general curve. In fact, the normal curve is the limiting case for a binomial when events have a fifty-fifty chance of occurring and the number of trials goes to infinity. A commonplace illustration is the distribution of coin tosses. In the early nineteenth century Pierre-Simon Laplace (1749–1827), while working on the central limit theorem, showed that the distribution of sample means tends to be normally distributed: The larger the sample size, the closer the fit to normality, a result that holds regardless of the population distribution. Even if the scores in the population are highly skewed, the distribution of sample means will tend toward the normal curve.
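De Moivre's approximation can be illustrated by simulation. In the Python sketch below (the number of tosses and of repeated experiments are arbitrary choices), the count of heads in repeated runs of fair coin tosses clusters around the mean and standard deviation that normal theory predicts:

```python
import random
import statistics

random.seed(42)

n = 100        # fair coin tosses per experiment
trials = 5000  # number of repeated experiments

# Number of heads is Binomial(n, 0.5); de Moivre showed this
# approaches N(n/2, n/4) as n grows
heads = [sum(random.getrandbits(1) for _ in range(n)) for _ in range(trials)]

# Theory predicts mean n/2 = 50 and standard deviation sqrt(n)/2 = 5
print(round(statistics.mean(heads), 1))
print(round(statistics.stdev(heads), 1))
```

With a few thousand repetitions the simulated mean and spread land very close to the theoretical values.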
Despite the fact that many mathematicians contributed to the emergence of the concept, it is Gauss whose name became most strongly linked with the discovery. As a consequence, the eponymic term Gaussian is often used instead of “normal” or “bell-shaped.”
Although the normal distribution was first applied to the description of measurement errors, scientists later began to realize that it also described variation in human phenomena independent of errors of measurement. In 1835 Adolphe Quetelet (1796–1874) applied the normal distribution to many physical attributes, such as height, and in 1869 Francis Galton (1822–1911) extended the same distribution to cover individual differences in ability. The latter application is seen in those psychometric instruments in which test scores are actually defined according to the normal distribution. For instance, the IQ scores on most intelligence tests are assigned in terms of a person’s position in the distribution. Thus, under the assumption that IQ has a mean of 100 and a standard deviation of 15, a score of 130 would place the individual in the upper 2 percent of the population in intellectual ability.
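The IQ example can be checked numerically. The Python sketch below (the function name is illustrative; the parameters 100, 15, and the cutoff of 130 come from the text) computes the proportion of the population expected to score above a given value:

```python
import math

def prob_above(score, mu=100.0, sigma=15.0):
    """Upper-tail probability for a normal variable, via the error function."""
    z = (score - mu) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# A score of 130 lies two standard deviations above the mean
print(round(100 * prob_above(130), 1))  # → 2.3 (percent)
```

The result, roughly 2.3 percent, matches the "upper 2 percent" figure quoted above.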
Indeed, the concept of the normal distribution has become so universal that it now provides the basis of almost all parametric statistical methods. For example, multiple regression analysis and the analysis of variance both assume that the errors of prediction, or residuals, are normally distributed with a mean of zero and a uniform variance. More sophisticated methods such as canonical correlation, discriminant analysis, and multivariate analysis of variance all require a more complex assumption, namely, multivariate normality. This means that the joint distribution of the variables is normally distributed. In the special case of bivariate normality, this assumption signifies that the joint distribution will approximate the shape of a three-dimensional bell. To the extent that the normality assumption is violated, the population inferences associated with these statistical methods will become approximate rather than exact.
Given the prominent place of the normal distribution in the social sciences, it is essential to recognize that not all human attributes or behavioral events are normally distributed. For example, many phenomena display extremely skewed distributions with long upper tails. Examples include the distributions of annual income across households, the box-office performance of feature films, the output of journal articles by scientists, and the number of violent acts committed by male teenagers. Sometimes these departures from normality can be rectified using an appropriate data transformation. For instance, a lognormal distribution becomes normal after a logarithmic transformation. Yet many important variables cannot be normalized in this way. In such cases, researchers may use statistics based on the specific nonnormal distribution or else employ various nonparametric or distribution-free methods. Furthermore, it is likely that the causal processes that generate normal distributions are intrinsically different from those that generate nonnormal distributions. As an example, the former tend to emerge when multiple causal processes are additive, whereas the latter tend to appear when those processes are multiplicative.
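The effect of a logarithmic transformation on a lognormal variable can be demonstrated by simulation. In the Python sketch below (sample size, seed, and the skewness helper are illustrative choices), the sample skewness is computed before and after taking logs:

```python
import math
import random
import statistics

random.seed(7)

# If log(X) ~ N(0, 1), then X is lognormal, with a long upper tail
raw = [random.lognormvariate(0.0, 1.0) for _ in range(20000)]
logged = [math.log(x) for x in raw]

def skewness(data):
    """Sample skewness: the third standardized moment."""
    m = statistics.mean(data)
    s = statistics.pstdev(data)
    return sum(((x - m) / s) ** 3 for x in data) / len(data)

print(round(skewness(raw), 1))     # strongly right-skewed (well above 0)
print(round(skewness(logged), 1))  # roughly symmetric, near 0
```

The raw values show pronounced positive skew, while the logged values are approximately symmetric, which is what makes the transformation useful.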
SEE ALSO Central Limit Theorem; Central Tendencies, Measures of; Distribution, Poisson; Distribution, Uniform; General Linear Model; Mean, The; Mode, The; Regression; Regression Analysis; Social Science; Standard Deviation; Variables, Random; Variance
Patel, Jagdish K., and Campbell B. Read. 1982. Handbook of the Normal Distribution. 2nd ed. New York: Marcel Dekker.
Yang, Hongwei. 2007. Normal Curve. In Encyclopedia of Measurement and Statistics, vol. 2, ed. Neil J. Salkind, 690–695. Thousand Oaks, CA: Sage.
Dean Keith Simonton
In studies of public health, information is frequently collected for variables that are measured on a continuous scale. Examples of such variables include age, weight, and blood pressure. The shape of the distribution associated with these variables is useful for describing the frequency of values across different ranges. More specifically, distributions allow the probability of obtaining a specific value of a variable to be calculated, while providing estimates of the average and range of possible values. The normal distribution is the most widely used distribution for describing continuous variables. It is also frequently referred to as the Gaussian distribution, after the well-known German mathematician Carl Friedrich Gauss (1777–1855).
Normal distributions are a family of distributions characterized by the same general shape. These distributions are symmetrical, with the measured values of the variable more concentrated in the middle than in the tails. They are frequently referred to as "bell-shaped." The area under the curve of a normal distribution represents the sum of the probabilities of obtaining every possible value for a variable. In other words, the total area under a normal curve is equal to one. The shape of the normal distribution is specified mathematically in terms of only two parameters: the mean (μ) and the standard deviation (σ). The standard deviation specifies the amount of dispersion around the mean, whereas the mean is the average value across sampled values of the variable. It is a characteristic of the normal distribution that approximately 95 percent of the possible values for a variable lie within two standard deviations of the mean. This is illustrated in Figure 1.
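That the total area under the curve equals one can be verified by numerical integration. The Python sketch below (the integration limits and step count are arbitrary choices) applies the trapezoidal rule to the standard normal density:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Trapezoidal rule over [-8, 8]; the density beyond 8 sigma is negligible
a, b, steps = -8.0, 8.0, 4000
h = (b - a) / steps
area = sum(normal_pdf(a + i * h) for i in range(steps + 1)) * h
area -= h * (normal_pdf(a) + normal_pdf(b)) / 2  # trapezoid endpoint correction
print(round(area, 6))  # → 1.0
```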
Several biological variables are normally distributed (e.g., blood pressure, serum cholesterol, height, and weight). The normal curve can be used to estimate probabilities associated with these variables. For example, in a population where the birth weight of infants is normally distributed with a mean of 7.2 pounds and a standard deviation of 2.1 pounds, one might wish to find the probability that a randomly chosen infant will have a birth weight of less than 3 pounds. Such information might help in planning for future obstetric services.
Since the normal distribution can have an infinite number of possible values for its mean and standard deviation, it is impossible to calculate the area for each and every curve. Instead, probabilities are calculated for a single curve where the mean is zero and the standard deviation is one. This curve is referred to as the standard normal distribution (Z). A random variable (X) that is normally distributed with mean (μ) and standard deviation (σ) can be easily transformed to the standard normal distribution by the formula Z = (X − μ)/σ.
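The standardization formula makes tail probabilities straightforward to compute. The Python sketch below (the function name is illustrative) applies it to the birth-weight example above, with mean 7.2 and standard deviation 2.1:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ N(mu, sigma^2), via standardization Z = (X - mu)/sigma."""
    z = (x - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Birth weights ~ N(7.2, 2.1^2): probability of a weight under 3 pounds.
# Here z = (3 - 7.2)/2.1 = -2, i.e., two standard deviations below the mean.
p = normal_cdf(3.0, mu=7.2, sigma=2.1)
print(round(p, 4))  # → 0.0228
```

So roughly 2.3 percent of infants in this hypothetical population would weigh under 3 pounds at birth.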
The normal distribution is important to statistical work because most hypothesis tests in common use assume that the random variable being considered has an underlying normal distribution. Fortunately, these tests work very well even if the distribution of the variable is only approximately normal. Examples of such tests include those based on the t, F, or chi-square statistics. If the variable is not normal, alternative nonparametric tests should be considered; however, such tests are inconvenient because they typically are less powerful and less flexible in terms of the types of conclusions that can be drawn. Alternatively, mathematical theory (e.g., the central limit theorem) has proven that normal distribution–based hypothesis testing can be performed if a large enough sample is taken. This latter option rests on an important principle, one largely responsible for the popularity of tests based on the normal distribution: if the sample size is large enough, the shape of the sampling distribution approaches the normal even when the distribution of the variable in question is not normal.
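This principle can be illustrated by simulation. In the Python sketch below (the population, sample size, and number of replications are arbitrary choices), sample means drawn from a strongly skewed exponential population nevertheless cluster the way normal theory predicts:

```python
import math
import random
import statistics

random.seed(1)

# Population: exponential with mean 1 (a highly skewed distribution)
def sample_mean(n):
    return statistics.mean(random.expovariate(1.0) for _ in range(n))

means = [sample_mean(50) for _ in range(5000)]

# Central limit theorem: means of n=50 draws are approximately N(1, 1/50)
print(round(statistics.mean(means), 2))   # near the population mean, 1
print(round(statistics.stdev(means), 2))  # near 1/sqrt(50), about 0.14
```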
Paul J. Villeneuve
(see also: Chi-Square Test; Sampling; Statistics for Public Health )
The common pattern of numbers in which the majority of the measurements tend to cluster near the mean of distribution.
Psychological research involves measurement of behavior. This measurement results in numbers that differ from one another individually but that are predictable as a group. One of the common patterns of numbers involves most of the measurements being clustered together near the mean of the distribution, with fewer cases occurring as they deviate farther from the mean. When a frequency distribution is drawn in pictorial form, the resulting pattern produces the bell-shaped curve that scientists call a normal distribution.
When measurements produce a normal distribution, certain things are predictable. First, the mean, median, and mode are all equal. Second, a scientist can predict how far from the mean most scores are likely to fall. Thus, it is possible to determine which scores are more likely to occur and the proportion of scores likely to be above or below any given score.
Many behavioral measurements result in normal distributions. For example, scores on intelligence tests are likely to be normally distributed. The mean is about 100 and a typical person is likely to score within about 15 points of the mean, that is, between 85 and 115. If the psychologist knows the mean and the typical deviation from the mean (called the standard deviation), the researcher can determine what proportion of scores is likely to fall in any given range. For instance, in the range between one standard deviation below the mean (about 85 for IQ scores) and one deviation above the mean (about 115 for IQ scores), one expects to find the scores of about two thirds of all test takers. Further, only about two and a half percent of test takers will score higher than two standard deviations above the mean (about 130).
Although psychologists rely on the fact that many measurements are normally distributed, there are certain cases where scores are unlikely to be normally distributed. Whenever scores cannot be higher than some upper value or smaller than some lower value, a non-normal distribution may occur. For example, salaries are not normally distributed because there is a lower value (i.e., nobody can make less than zero dollars), but there is no upper value. Consequently, there will be some high salaries that are not balanced by correspondingly low salaries. It is important to know whether scores are normally distributed because it makes a difference in the kind of statistical tests that are appropriate for analyzing and interpreting the numbers.
Berman, Simeon M. Mathematical Statistics: An Introduction Based on the Normal Distribution. Scranton, PA: Intext Educational Publishers, 1971.
Martin, David W. Doing Psychology Experiments. 2nd ed. Monterey, CA: Brooks/Cole, 1985.
A hypothetical mathematical distribution, the normal distribution provides an idealized model for comparison with observed variable distributions, and is the most commonly used mathematical model in statistical inference. In form it is a symmetrical, bell-shaped curve. The normal distribution for any particular variable is defined by its mean and standard deviation.
The mathematical properties of the normal distribution can be used to estimate the proportion in a sample falling above or below any particular reading or measurement for any variable to which the model is being applied. It is said to be relatively ‘robust’ to non-normality in observed variable distributions: in other words, in many circumstances it will serve as a reasonable model, even in cases where observed variable distributions appear to be rather inadequate approximations to normality. Even when a population does not itself have a normal distribution, the distribution of sample means will tend to approximate to a normal distribution. See also VARIATION (STATISTICAL); CENTRAL TENDENCY (MEASURES OF).
The distribution is symmetric about the mean, μ, and its variance is σ². The range of x is infinite (−∞, ∞). Many sampling distributions tend to the normal form as the sample size tends to infinity.