Variances, Statistical Study of
This article discusses statistical procedures related to the dispersion, or variability, of observations. Many such procedures center on the variance as a measure of dispersion, but there are other parameters measuring dispersion, and the most important of these are also considered here. This article treats motivation for studying dispersion, parameters describing dispersion, and estimation and testing methods for these parameters.
Some synonyms or near synonyms for “variability” or “dispersion” are “diversity,” “spread,” “heterogeneity,” and “variation.” “Entropy” is often classed with these.
Why study variability?
In many contexts interest is focused on variability, with questions of central tendency of secondary importance—or of no importance at all. The following are examples from several disciplines illustrating the interest in variability.
Economics. The inequality in wealth and income has long been a subject of study. Yntema (1933) uses eight different parameters to describe this particular variability; Bowman (1956) emphasizes curves as a tool of description [cf. Wold 1935 for a discussion of Gini’s concentration curve and Kolmogorov 1958 for Lévy’s function of concentration; see also Income distribution].
Industry. The variability of industrial products usually must be small, if only in order that the products may fit as components into a larger system or that they may meet the consumer’s demands; the methods of quality control serve to keep this variability (and possible trends with time) in check. [An elementary survey is Dudding 1952; more modern methods are presented in Keen & Page 1953; Page 1962; 1963; see also Quality control, Statistical.]
Psychology. Two groups of children, selected at random from a given grade, were given a reasoning test under different amounts of competitive stress; the group under higher stress had the larger variation in performance. (The competitive atmosphere stimulated the brighter children, stunted the not-so-bright ones: see Hays 1963, p. 351; for other examples, see Siegel 1956, p. 148; Maxwell 1960; Hirsch 1961, p. 478.)
General approaches to the study of variability
The simplest approach to the statistical study of variability consists in the computation of the sample value of some statistic relating to dispersion [see Statistics, descriptive, article on location and dispersion]. Conclusions as to the statistical significance or scientific interpretation of the resulting value, however, usually require selection of a specified family, ℐ, of probability distributions to represent the phenomenon under study. The choice of this family will reflect the theoretical framework within which the investigator performs his experiment(s). In particular, one or several of the parameters of the distributions of ℐ will correspond to the notion of variability that is most relevant to the investigator’s special problem.
The need for the selection of a specified underlying family, ℐ, is typical for statistical methodology in general and has the customary consequences: ideally speaking, each specified underlying family, ℐ, should have a corresponding statistic (or statistical procedure) adapted to it; even if a standard statistic (for example, variance) can be used, its significance and interpretation may vary widely with the underlying family. Unfortunately, the choice of such a family is not always self-evident, and hence the interpretation of statistical results is sometimes subject to considerable “specification error.” [See Errors, article on effects of errors in statistical assumptions.]
Two of the special families of probability distributions that will not be discussed in this article are connected with the methods of factor analysis and of variance components in the analysis of variance.
The factor analysis method analyzes a sample of N observations on an n-dimensional vector (X_{1}, X_{2}, …, X_{n}) by assuming that the X_{i} (i = 1, …, n) are linear combinations of a random error term, a (hopefully small) number of “common factors,” and possibly a number of “specific factors.” (These assumptions determine a family, ℐ.) Interest focuses on the coefficients in the linear combinations (factor loadings). Unfortunately, the method lacks uniqueness in principle. [See Factor analysis; see also the survey by Henrysson 1957.]
The variance components method, in one of its simpler instances, analyzes scalar-valued observations, x_{ijk} (k = 1, …, n_{ij}), on n_{ij} individuals, observed under conditions C_{ij} (i = 1, …, r; j = 1, …, s), starting from the assumption that x_{ijk} = μ + a_{i} + b_{j} + c_{ij} + e_{ijk}, where the a_{i}, b_{j}, c_{ij}, e_{ijk} are independent normal random variables with mean 0 and variances σ_{a}^{2}, σ_{b}^{2}, σ_{c}^{2}, σ_{e}^{2}, respectively. The objective is inference regarding these four variances, in order to evaluate variability from different sources. [See Linear hypotheses, article on analysis of variance.]
Parameters describing dispersion
Scales of measurement. Observations may be of different kinds, depending on the scale of measurement used: classificatory (or nominal), partially ordered, ordered (or ordinal), metric (defined below), and so forth. [See Psychometrics and Statistics, descriptive, for further discussion of scales of measurement.] With each scale are associated transformations that may be applied to the observations and that leave the essential results of the measurement process intact. It is generally felt that parameters and statistical methods should in some sense be invariant under these associated transformations. (For dissenting opinions, see Lubin 1962, pp. 358–359.)
As an example, consider a classificatory scale. Measurement in this case means putting an observed unit into one of several unordered qualitative categories (for instance, never married, currently married, divorced, widowed). Whether these categories are named by English words, by the numbers 1, 2, 3, 4, by the numbers 3, 2, 1, 4, by the numbers 100, 101, 250, 261, or by the letters A, B, C, D does not change the classification as such. Hence, whatever it is that statistical methods extract from classification data should not depend on the names (or the transformations of the names) of the categories. Thus, even if the categories have numbers for their names, as above, it would be meaningless to compute the sample variance from a sample.
Parameters in general. Given a family, ℐ, of probability distributions, an identifiable parameter is a numerical-valued function defined on ℐ. Let P be a generic member of ℐ, and let m be a positive integer. Most of the parameters for describing dispersion discussed in this article can be defined as
E_{P}g(X_{1}, X_{2}, …, X_{m}),
where X_{1}, X_{2}, …, X_{m} are independently and identically distributed according to P and g is an appropriate real-valued function. For example, the variance may be defined as E_{P}[1/2(X_{1} − X_{2})^{2}]. Given a family, ℐ, of probability distributions, one evidently has a wide choice of parameters (choosing a different g will usually yield a different parameter).
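As a numerical check (our own illustration, with hypothetical function names), the two definitions of the variance agree exactly on any sample when the pair expectation is taken over all ordered pairs of observations:

```python
import random

# Two routes to the sample variance (divisor n): around the mean, and as
# half the average squared difference over all ordered pairs (a, b).
def var_around_mean(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def var_pairwise(xs):
    n = len(xs)
    return sum(0.5 * (a - b) ** 2 for a in xs for b in xs) / (n * n)

random.seed(0)
sample = [random.gauss(0, 2) for _ in range(500)]
assert abs(var_around_mean(sample) - var_pairwise(sample)) < 1e-9
```

The agreement is an algebraic identity, not an approximation, which is why the pairwise form can serve as an alternative definition of σ^{2}.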
Different parameters will characterize (slightly or vastly) different aspects of ℐ. For instance, part of the disagreement between the eight methods of assessing variability described by Yntema (1933) stems from the fact that they represent different parameters. Of course, it is sometimes very useful to have more than one measure of dispersion available.
Dispersion parameters. A listing and comparison of various dispersion parameters for some of the scales mentioned above will now be given.
Parameters for classificatory scales. In a classificatory scale let there be q categories, with probabilities θ_{i} (i = 1, 2, …, q); ∑_{i}θ_{i} = 1. The dispersion parameter chosen should be invariant under name change of the categories, so it should depend on the θ_{i} only. If all θ_{i} are equal, diversity (variability) is a maximum, and the parameter should have a large value. If one θ_{i} is 1, so that the others are 0, diversity is 0, and the parameter should conventionally have the value 0. A family of parameters having these and other gratifying properties (for example, a weak form of additivity; see Rényi 1961, eqs. (1.20) and (1.21)) is given by

H_{α}(P) = [1/(1 − α)] log_{2}∑_{i}θ_{i}^{α},   α > 0, α ≠ 1,

the amount of information of order α (entropy of order α). Note that its limit as α → 1,

H_{1}(P) = −∑_{i}θ_{i} log_{2}θ_{i},

is Shannon’s amount of information [see Information theory]. This information measure has a stronger additivity property: Blyth (1959) points out that if the values of X are divided into groups, then the dispersion of X = between-group dispersion + expected within-group dispersion. Miller and Chomsky (1963) discuss linguistic applications.
There are other measures of dispersion for classificatory scales besides the informationlike ones. (For example, see Greenberg 1956.)
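For concreteness, both the entropy of order α and Shannon’s H_{1} can be computed directly from the category probabilities; a minimal sketch (function names are ours):

```python
import math

# H_alpha = log2(sum theta_i^alpha) / (1 - alpha); its limit as alpha -> 1
# is Shannon's H_1 = -sum theta_i log2 theta_i.
def renyi_entropy(thetas, alpha):
    assert alpha > 0 and alpha != 1
    return math.log2(sum(t ** alpha for t in thetas if t > 0)) / (1 - alpha)

def shannon_entropy(thetas):
    return -sum(t * math.log2(t) for t in thetas if t > 0)

uniform = [0.25] * 4          # maximal diversity among q = 4 categories
point = [1.0, 0.0, 0.0, 0.0]  # no diversity at all

assert abs(shannon_entropy(uniform) - 2.0) < 1e-12  # log2(4) bits
assert shannon_entropy(point) == 0.0
assert abs(renyi_entropy(uniform, 0.999) - 2.0) < 1e-3  # H_alpha -> H_1
```

Renaming or permuting the categories leaves these values unchanged, which is exactly the invariance required of a classificatory-scale parameter.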
Parameters for metric scales. On a metric scale observations are real numbers, and all properties of real numbers may be used.
(a) For probability distributions with a density f, there is the information-like parameter

H_{1}(f) = −E log_{2}f(X) = −∫f(x) log_{2}f(x) dx,

whenever the integral exists. This parameter is not invariant under arbitrary transformations of the X-line, although it is under translations. (For interesting maximum properties in connection with rectangular, exponential, and normal distributions, see Rényi 1962, appendix, sec. 11, exercises 12, 17.) For a normal distribution with standard deviation σ, H_{1}(f) = log_{2}(σ√(2πe)).
(b) Traditional measures of dispersion for metric scales are the standard deviation, σ ≥ 0, and the variance, σ^{2} = E_{P}[(X − μ)^{2}], where μ = E_{P}X. As mentioned above, an alternative definition is half the expectation of the square of the difference of two random variables, X_{1} and X_{2}, independently and identically distributed. This definition of σ^{2} suggests a whole string of so-called mean difference parameters, listed below under (c), (d), and (e), all of which, like σ and σ^{2}, are invariant under translations only.
(c) Gini’s mean difference is given by

g = E_{P}|X_{1} − X_{2}| = ∫∫|x_{1} − x_{2}| dP(x_{1}) dP(x_{2}).

The integral at the right is in general form; if X_{1} and X_{2} have the density function f, the integral is

∫∫|x_{1} − x_{2}| f(x_{1})f(x_{2}) dx_{1} dx_{2}.

Wold (1935, pp. 48–49) points out the relationship between this parameter and Cramér’s ω^{2} method for testing goodness of fit. As can be seen in Table 1, below, Gini’s mean difference is a distribution-dependent function of σ.
There are variate difference parameters that involve the square of “higher-order differences”; they are distribution-free functions of σ. An example is

E_{P}[(X_{3} − 2X_{2} + X_{1})^{2}] = 6σ^{2}.

There are also variate difference parameters involving the absolute value of higher-order differences; they are distribution-dependent functions of σ. An example is E_{P}|X_{3} − 2X_{2} + X_{1}|.
(d) By analogy with the first definition of the variance, there are dispersion parameters reflecting absolute variation around some measure of central tendency. Examples are the mean deviation from the mean, μ,

δ_{μ} = E_{P}|X − μ|,

and from the median, Med X,

δ_{Med} = E_{P}|X − Med X|.

These are distribution-dependent functions of σ.
(e) There are dispersion parameters based on other differences. Two examples are the expected value of the range of samples of size n,

E_{P}W_{n} = E_{P}[X_{(n)} − X_{(1)}],

where X_{(1)} = min(X_{1}, X_{2}, …, X_{n}) and X_{(n)} = max(X_{1}, X_{2}, …, X_{n}); and the difference of symmetric quantile points,

ξ_{1−α} − ξ_{α},

where

∫_{−∞}^{ξ_{α}} f(x) dx = α

and f is the density of the probability distribution P. Both these parameters are distribution-dependent functions of σ. Note that this last parameter, the difference of symmetric quantile points, is not based on expected values of random variables.
(f) Another dispersion parameter is the coefficient of variation, σ/μ (given either as a ratio or in per cent), invented to eliminate the influence of absolute size on variability (for example, to compare the variation in size of elephants and of mice). Sometimes it does exactly that (Banerjee 1962); sometimes it does nothing of the sort (Cramér 1945, table 31.3.5). (For further discussion, see Pearson 1897.)
Because they are distribution-dependent functions of σ, the parameters cited under (a), (c), (d), and (e) are undesirable for a study of the variance, σ^{2}, unless one is fairly sure about the underlying family of probability distributions. This will be illustrated below. Despite this drawback, these parameters are, of course, quite satisfactory as measures of dispersion in their own right.
Comparison of dispersion measures. Table 1 lists the quotient of several of the above-mentioned parameters divided by σ, together with other relevant quantities. It gives these comparisons for the distributions of the types listed in the first column, with parameter specifications as indicated in the next two columns. (The parameterization is the same as that in Distributions, statistical, article on special continuous distributions.) The sign “˜” before an entry denotes an asymptotic result (for large n or large μ). Table 1 illustrates how
Table 1. Comparison of dispersion parameters

Distributional form    σ    μ    Med X        σ/μ    Gini/σ     δ_μ/σ     δ_Med/σ    EW_n/σ
Normal                 σ    μ    μ            σ/μ    2/√π       √(2/π)    √(2/π)     –
Exponential            θ    θ    θ log_e 2    1      1          2/e       log_e 2    ˜γ + log_e n^{a}
Double exponential     λ    λ    λ            –      3/(2√2)    1/√2      1/√2       ^{b}
Rectangular            –    –    –            –      2/√3       √3/2      √3/2       ^{b}

a. Here γ is Euler’s constant: γ = 0.5772157 …
b. Not known from the literature.
bad the distribution dependence of these parameters can really be. [See Errors, article on nonsampling errors, for further discussion.]
Multivariate distributions. Most of the parameters discussed for univariate distributions can be generalized to multivariate distributions, usually in more than one fashion. The variance, for instance, is the expected value of one-half the square of the distance between two random points on the real line. Generalization may be attained by taking the distance between two random points in k-space or by taking the content of a polyhedron spanned by k + 1 points in k-space. Thus, a rather great variety of multivariate dispersion parameters are possible. [See Multivariate analysis and, for example, van der Vaart 1965.]
Statistical inference
Shannon’s amount of information. Consider, first, point estimation of Shannon’s amount of information for discrete distributions. Suppose a sample of size n is drawn from the probability distribution, with q categories and probabilities θ_{i}, described earlier. Suppose n_{i} observations fall in the ith category, with ∑_{i}n_{i} = n. Then

Ĥ = −∑_{i}(n_{i}/n) log_{2}(n_{i}/n)

suggests itself as the natural estimator for H_{1}(P) = −∑_{i}θ_{i} log_{2}θ_{i}. The properties of this estimator have been studied by Miller and Madow (1954) and (by a simpler method) by Basharin (1959). The sampling distribution of Ĥ has mean

EĤ = H_{1}(P) − (q − 1)/(2n log_{e}2) + O(1/n^{2}).
(The term O(1/n^{2}) denotes a function of n and the θ_{i} such that for some positive constant, c, the absolute value of the function is less than c/n^{2}.) So for “small” n the bias, the difference between EĤ and H_{1}(P), is substantial. [For low-bias estimators, see Blyth 1959; for a general discussion of point estimation, see Estimation, article on point estimation.]
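The bias is easy to see by simulation; the sketch below (our construction, using the uniform distribution on q = 4 categories) compares the average plug-in estimate with H_{1}(P) and with the −(q − 1)/(2n log_{e}2) term:

```python
import math, random

# Plug-in (maximum likelihood) estimator of Shannon's information.
def h_hat(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

random.seed(1)
q, n, trials = 4, 50, 2000
true_h = 2.0  # H_1 of the uniform distribution on 4 categories, in bits
est = []
for _ in range(trials):
    counts = [0] * q
    for _ in range(n):
        counts[random.randrange(q)] += 1  # draw from the uniform distribution
    est.append(h_hat(counts))
mean_est = sum(est) / trials
predicted_bias = -(q - 1) / (2 * n * math.log(2))  # first-order bias, in bits

assert mean_est < true_h  # the plug-in estimator underestimates H_1
assert abs((mean_est - true_h) - predicted_bias) < 0.01
```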
The variance of one population. Procedures for estimating the variance of a single population and for testing hypotheses about such a variance will now be described.
Point estimation for general ℐ. Let the underlying family, ℐ_{μ}, consist of all probability distributions with density functions and known mean, μ, or of all discrete distributions with known mean, μ. In both cases the theory of U-statistics (see, for example, Fraser 1957, pp. 135–147) shows that the minimum variance unbiased estimator of σ^{2}, given a sample of size n, is

(1/n)∑_{i}(x_{i} − μ)^{2}.
Note that the sampling variance of this estimator, which measures its precision relative to the underlying distribution, P (a member of ℐ_{μ}), is definitely distribution dependent. If ℐ is, again, the family of all absolutely continuous (or discrete) distributions, now with unknown mean, then the uniformly minimum variance unbiased estimator of σ^{2} is

s^{2} = (n − 1)^{−1}∑_{i}(x_{i} − x̄)^{2},

where x̄ is the sample mean. Again, var_{P}s^{2} is very much distribution dependent.
For more restricted families of distributions it is sometimes possible to find other estimators, with smaller sampling variances. Also, if the unbiasedness requirement is dropped, one may find estimators that, although biased, are, on the average, closer to the true parameter value than a minimum variance unbiased estimator: for the family of normal distributions,

(n + 1)^{−1}∑_{i}(x_{i} − x̄)^{2}

is such an estimator of σ^{2}.
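A small simulation illustrates the point (our construction; the two estimators differ only in the divisor, n − 1 versus n + 1):

```python
import random

# For normal samples, dividing the sum of squared deviations by n + 1 gives
# a biased estimator of sigma^2 with smaller mean squared error than the
# unbiased estimator s^2 (divisor n - 1).
random.seed(2)
sigma2, n, trials = 4.0, 10, 20000
mse_unbiased = mse_biased = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 2) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    mse_unbiased += (ss / (n - 1) - sigma2) ** 2
    mse_biased += (ss / (n + 1) - sigma2) ** 2

assert mse_biased < mse_unbiased
```

For normal samples the mean squared errors are exactly 2σ^{4}/(n − 1) and 2σ^{4}/(n + 1), so the divisor n + 1 always wins on this criterion.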
Distribution dependence. To illustrate the dependence of the quality of point estimators upon the underlying family of probability distributions, Table 2 lists the sampling variance of s^{2} for random samples from five different distribution families. It is seen that the quotient var_{P}(s^{2})/var_{N}(s^{2}) (where P indicates some nonnormal underlying
Table 2. The sampling variance of s^{2}

Distribution                                                    var_{P}(s^{2})^{a}
Normal                                                          2σ^{4}/n
Exponential                                                     8σ^{4}/n
Double exponential                                              5σ^{4}/n
Rectangular                                                     (4/5)σ^{4}/n
Pearson Type VII^{b} (f(x) = k(1 + x^{2}a^{−2})^{−p}; p > 5/2)  (μ_{4} − σ^{4})/n, unbounded as p → 5/2

a. The reader should add a term O(1/n^{2}) to each entry. Here μ_{4} = E_{P}(X − μ)^{4}.
b. Example is due to Hotelling 1961, p. 350.
distribution and N a normal one) may vary from 2/5 to ∞. Hence, unless ℐ can be chosen in a responsible way, little can be said about the precision of s^{2} as an estimator of σ^{2} (although for large samples the higher sample moments will be of some assistance in evaluating the precision of this estimator).
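The rectangular end of that range can be checked by simulation; a sketch (our construction):

```python
import random, statistics

# The sampling variance of s^2 under a rectangular (uniform) parent is about
# 2/5 of its value under a normal parent with the same sigma^2.
random.seed(3)
n, trials = 20, 20000

def sampling_var_of_s2(draw):
    vals = []
    for _ in range(trials):
        xs = [draw() for _ in range(n)]
        vals.append(statistics.variance(xs))  # s^2, divisor n - 1
    return statistics.variance(vals)

# Uniform on (0, 1) has sigma^2 = 1/12; match it with a normal parent.
v_rect = sampling_var_of_s2(random.random)
v_norm = sampling_var_of_s2(lambda: random.gauss(0, (1 / 12) ** 0.5))

assert 0.3 < v_rect / v_norm < 0.5  # close to the 2/5 of the text
```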
Normal distributions. Tests and confidence intervals on dispersion parameters for the case of normal distributions will now be discussed. In order to decide whether a sample of n observations may have come from a population with known variance, σ_{0}^{2}, or from a more heterogeneous one, test the hypothesis H_{0}: σ^{2} = σ_{0}^{2} against the one-sided alternative H_{1}: σ^{2} > σ_{0}^{2}, where σ^{2} is the (unknown) variance characterizing the sample (Rao 1952, sec. 6a.1, gives a concrete example of the use of this one-sided alternative). In order to investigate only whether the sample fits into the given population in terms of homogeneity, test H_{0} against H_{2}: σ^{2} ≠ σ_{0}^{2}, a two-sided alternative. If the underlying family is normal, the most powerful level-α test for the one-sided alternative rejects H_{0} whenever
S^{2}/σ_{0}^{2} ≥ χ^{2}_{1−α;n−1}, with S^{2} = ∑_{i}(x_{i} − x̄)^{2}, where χ^{2}_{δ;n−1} is the 100δ per cent point of the chi-square distribution for n − 1 degrees of freedom (so that χ^{2}_{1−α;n−1} is the upper 100α per cent point of the same distribution). [For further discussion of these techniques and the terminology, see Hypothesis testing.]
The most powerful unbiased level-α test for the two-sided alternative rejects H_{0} whenever S^{2}/σ_{0}^{2} ≤ C_{2} or S^{2}/σ_{0}^{2} ≥ C_{1}. Here C_{1} = χ^{2}_{1−β;n−1} and C_{2} = χ^{2}_{γ;n−1}, with β + γ = α (see Lehmann 1959, chapter 5, sec. 5, example 5, and pp. 165 and 129; for tables, see Lindley et al. 1960: α = 0.05, 0.01, 0.001). In practice the nonoptimal equal-tail test is also used, with β = γ = ½α. For the latter test the standard chi-square tables suffice, and the two tests differ only slightly unless the sample size is very small.
The one-sided and two-sided confidence intervals follow immediately from the above inequalities; for example, a two-sided confidence interval for σ^{2} at level α is
S^{2}/C_{1}<σ^{2}<S^{2}/C_{2}
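Under normality such an interval is easy to compute; a sketch using the chi-square quantile function from scipy (the helper name is ours):

```python
import random
from scipy import stats

# Equal-tail confidence interval for sigma^2: S^2/C_1 < sigma^2 < S^2/C_2,
# with S^2 the sum of squared deviations and C_1, C_2 chi-square points.
def var_confidence_interval(xs, alpha=0.05):
    n = len(xs)
    xbar = sum(xs) / n
    s2_sum = sum((x - xbar) ** 2 for x in xs)
    c1 = stats.chi2.ppf(1 - alpha / 2, n - 1)  # upper point
    c2 = stats.chi2.ppf(alpha / 2, n - 1)      # lower point
    return s2_sum / c1, s2_sum / c2

random.seed(4)
xs = [random.gauss(0, 3) for _ in range(30)]
lo, hi = var_confidence_interval(xs)
assert 0 < lo < hi
```

As the surrounding text warns, the nominal coverage of this interval holds only under normality.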
Nonnormal distributions. The above discussion of the distribution dependence of point estimators of dispersion parameters should have prepared the reader to learn that the tests and confidence interval procedures discussed above are not robust against nonnormality. Little has been done in developing tests or confidence intervals for σ^{2} when ℐ is unknown or broad. Hotelling (1961, p. 356) recommends using all available knowledge to narrow ℐ down to a workable family of distributions, then adapting statistical methods to the resulting family.
Mean square differences. For a large family of absolutely continuous distributions with unknown mean, the minimum variance unbiased estimator, s^{2}, was introduced above. An alternative formula is

2s^{2} = [n(n − 1)]^{−1}∑_{i}∑_{j}(x_{i} − x_{j})^{2}.

This formula suggests another estimator of 2σ^{2}, unbiased, but not with minimum variance:

d_{1}^{2} = (n − 1)^{−1}∑_{i=1}^{n−1}(x_{i+1} − x_{i})^{2}.

If the indices 1, 2, …, n in the sample x_{1}, x_{2}, …, x_{n} indicate an ordering of some kind (for example, the order of arrival in a time series), then d_{1}^{2} is called the first mean square successive difference. Similarly,

d_{2}^{2} = (n − 2)^{−1}∑_{i=1}^{n−2}(x_{i+2} − 2x_{i+1} + x_{i})^{2},

the second mean square successive difference, is an unbiased estimator of 6σ^{2}.
If the underlying family, ℐ, is normal, then asymptotically (for large n) the sampling variances of these estimators exceed that of s^{2} (see Kamat 1958). These estimators, although clearly less precise than s^{2}, are of interest because they possess a special kind of robustness, namely robustness against trend. Suppose the observations x_{1}, …, x_{n} have been taken at times t_{1}, …, t_{n} from a time process, X(t) = ϕ(t) + Y, where ϕ is a smoothly varying function (trend) of t, and the distribution of the random variable Y is independent of t (for example, ϕ might describe an expanding economy and Y the fluctuations in it). Let an estimator be sought for var(Y). Most of the trend is then eliminated by considering only the successive differences x_{i} − x_{i+1} = ϕ(t_{i}) − ϕ(t_{i+1}) + y_{i} − y_{i+1}, thus yielding an estimator of var(Y) with much less bias. These methods have been applied to control and record charts by Keen and Page (1953), for example.
Little work has been done on studying the sampling distributions of successive difference estimators in cases where the underlying distribution is nonnormal. Moore (1955) gave moments and approximations for four types of distributions.
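The robustness against trend is easy to demonstrate by simulation; a sketch (our construction, with trend ϕ(t) = 5t):

```python
import random

# With a smooth trend added to the observations, half the first mean square
# successive difference, d1^2 / 2, estimates var(Y) with far less bias than s^2.
random.seed(7)
n, var_y = 200, 1.0
ts = [i / n for i in range(n)]
xs = [5 * t + random.gauss(0, 1) for t in ts]  # phi(t) = 5t plus noise Y

xbar = sum(xs) / n
s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)         # inflated by the trend
d1sq_half = sum((xs[i + 1] - xs[i]) ** 2
                for i in range(n - 1)) / (2 * (n - 1))  # nearly trend-free

assert abs(d1sq_half - var_y) < abs(s2 - var_y)
```

Here s^{2} absorbs the variance of the trend (about 25/12) while successive differencing removes nearly all of it.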
The standard deviation of one population. Since σ = (σ^{2})^{½}, one might feel that the standard deviation, σ, should be estimated by the square root of a reasonable estimator of σ^{2}. This is, indeed, often done, and for large sample sizes the results are quite acceptable. For smaller sample sizes, however, the suboptimality of such estimators is more marked (specifically, Es ≠ σ; if the underlying family is normal, an unbiased estimator is

[(n − 1)/2]^{½}[Γ((n − 1)/2)/Γ(n/2)]s,

where Γ is the gamma function). Therefore, there has been some interest in alternative estimators, like those now to be described.
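The gamma-function correction can be checked by simulation; a sketch (our construction):

```python
import math, random

# c_n = sqrt((n-1)/2) * Gamma((n-1)/2) / Gamma(n/2); for normal samples
# E(s) = sigma / c_n < sigma, so c_n * s is unbiased for sigma.
def c_n(n):
    return math.sqrt((n - 1) / 2) * math.gamma((n - 1) / 2) / math.gamma(n / 2)

random.seed(8)
sigma, n, trials = 2.0, 5, 50000
s_vals = []
for _ in range(trials):
    xs = [random.gauss(0, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s_vals.append(math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1)))
mean_s = sum(s_vals) / trials

assert mean_s < sigma                       # s underestimates sigma
assert abs(c_n(n) * mean_s - sigma) < 0.02  # the corrected estimator does not
```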
Estimation via alternative parameters. In Table 1 it was pointed out that, depending on the underlying family, ℐ, of distributions, certain relations exist between σ and other dispersion parameters, θ, of the form θ = ν_{ℐ}σ. So if one knows ℐ, one may estimate θ by, say, T(x), apply the conversion factor 1/ν_{ℐ}, and find an unbiased estimator of σ.
Thus, the mean successive differences

d_{1} = (n − 1)^{−1}∑_{i=1}^{n−1}|x_{i+1} − x_{i}|,

d_{2} = (n − 2)^{−1}∑_{i=1}^{n−2}|x_{i+2} − 2x_{i+1} + x_{i}|,

are, if ℐ is normal, unbiased estimators of 2σ/√π and 2σ(3/π)^{½}, respectively; their sampling variances are given by Kamat (1958). See Lomnicki (1952) for the sampling variance of [n(n − 1)]^{−1}∑_{i}∑_{j}|x_{i} − x_{j}|, Gini’s mean difference, for normal, exponential, and rectangular ℐ. Again, if
d_{m} = (π/2)^{½}n^{−1}∑_{i}|x_{i} − x̄|

and ℐ is normal, then d_{m} is, for large n, an unbiased estimator of σ; its sampling variance is approximately (π − 2)σ^{2}/(2n), which is close to the absolute lower bound, σ^{2}/(2n). The properties of

d_{Med} = (π/2)^{½}n^{−1}∑_{i}|x_{i} − Me(x)|,

where Me(x) is the sample median, differ slightly, yet favorably, from those of d_{m}. The literature on these and similar statistics is quite extensive.
The last column of Table 1 suggests the use of the sample range, W_{n} = X_{(n)} − X_{(1)}, to estimate σ; the conversion factor now depends on both the underlying distribution and the sample size, n (for normal distributions, see David 1962, p. 113, table 7A.1). With increasing n, the precision of converted sample ranges as estimators of σ decreases rapidly. One may then shift to quasi ranges (X_{(n−r+1)} − X_{(r)}) or, better still, to linear combinations of quasi ranges (see David 1962, p. 107). The use of quasi ranges to obtain confidence intervals for interquantile distances (ξ_{1−α} − ξ_{α}) was also discussed by Chu (1957). This type of estimator employs order statistics. A more efficient use of order statistics is made by the so-called best unbiased linear systematic statistics and by approximations to these [for more information, see Nonparametric statistics, article on order statistics]. These linear systematic statistics are especially useful in case the data are censored [see Statistical analysis, special problems of, article on truncation and censorship]. It should also be mentioned that grouping of data poses special problems for the use of estimators based on order statistics [see Statistical analysis, special problems of, article on grouped observations].
Comparing variances of several populations. As in the example of increased variation on a reasoning test with competitive stress, discussed above, it appears that situations will occur in which interest is focused on differences in variability as the response to differences in conditions. Two groups were compared in the example, but the situation can easily be generalized to more than two groups. Thus, one may want to apply more than two levels of competitive stress, and one may even bring in a second factor of the environment, such as different economic backgrounds (in which case one would have a two-way classification).
Bartlett’s test and the F-test. Consider k populations and k samples, one from each population (in the reasoning-test example, each group of children under a given level of stress would constitute one sample). Let the observations x_{r1}, x_{r2}, …, x_{rn_{r}} be a random sample from the rth population (r = 1, …, k), and let x̄_{r} be the mean of the rth sample. Define

s_{r}^{2} = (n_{r} − 1)^{−1}∑_{i}(x_{ri} − x̄_{r})^{2}  and  ν_{r} = n_{r} − 1,

and the pooled quantities

ν = ∑_{r}ν_{r},   s^{2} = ν^{−1}∑_{r}ν_{r}s_{r}^{2}.
Bartlett’s 1937 test of the hypothesis H_{0}: the k variances are equal, against H_{A}: not all variances are equal, assumes all samples to be drawn from normal distributions and rejects H_{0} if and only if the statistic L is too large, where

L = ν log_{e}s^{2} − ∑_{r}ν_{r} log_{e}s_{r}^{2}.

The true values of the means of the k populations do not influence the outcome of this test. The distribution of L is known to be chi-square, with k − 1 degrees of freedom, for large samples; for samples of intermediate size, it is desirable to use, as a closer approximation, the fact that L/(1 + c), where

c = [3(k − 1)]^{−1}(∑_{r}ν_{r}^{−1} − ν^{−1}),

has approximately the same chi-square distribution. Bartlett’s test is unbiased. Against these virtues there is one outstanding weakness: the test has total lack of robustness against nonnormality [see Errors, article on effects of errors in statistical assumptions].
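The corrected statistic L/(1 + c) can be checked against a library implementation; a sketch (helper names are ours; scipy.stats.bartlett returns the corrected statistic):

```python
import math, random
from scipy import stats

# Bartlett's L = nu*log(s^2) - sum(nu_r * log(s_r^2)), divided by 1 + c.
def bartlett_L_corrected(samples):
    k = len(samples)
    nus = [len(s) - 1 for s in samples]
    means = [sum(s) / len(s) for s in samples]
    s2s = [sum((x - m) ** 2 for x in s) / v
           for s, m, v in zip(samples, means, nus)]
    nu = sum(nus)
    pooled = sum(v * s2 for v, s2 in zip(nus, s2s)) / nu
    L = nu * math.log(pooled) - sum(v * math.log(s2)
                                    for v, s2 in zip(nus, s2s))
    c = (sum(1 / v for v in nus) - 1 / nu) / (3 * (k - 1))
    return L / (1 + c)

random.seed(5)
groups = [[random.gauss(0, 1) for _ in range(12)] for _ in range(3)]
stat, p_value = stats.bartlett(*groups)
assert abs(bartlett_L_corrected(groups) - stat) < 1e-6
```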
For k = 2, Bartlett’s test reduces to a variant of the F-test: reject H_{0}: σ_{1} = σ_{2} in favor of H_{A}: σ_{1} ≠ σ_{2} whenever s_{1}^{2}/s_{2}^{2} is either too large or too small. The one-sided F-test rejects H_{0} in favor of H_{1}: σ_{1} > σ_{2} if s_{1}^{2}/s_{2}^{2} ≥ F_{1−α;ν_{1},ν_{2}}, where F_{1−α;ν_{1},ν_{2}} is the upper 100α per cent point of the F-distribution for ν_{1} and ν_{2} degrees of freedom. The F-test in this context naturally has the same lack of robustness against nonnormality as Bartlett’s test.
Alternate test for variance heterogeneity. Bartlett and Kendall (1946) proposed an alternative approach: apply analysis of variance techniques to the logarithms of the k sample variances. The virtue of this suggestion is that the procedure can be generalized immediately to a test of variances in a two-way classification. Box (1953, p. 330) showed that this test, too, is nonrobust against nonnormality. More robust procedures are described below.
Variances of two correlated samples. McHugh (1953) quotes a study of the effect of age on dispersion of mental abilities; the same group of persons was measured at two different ages. Naturally the two samples are correlated, and the F-test does not apply. Under the assumption that the pairs (x_{11}, x_{21}), …, (x_{1i}, x_{2i}), …, (x_{1n}, x_{2n}) constitute a sample from a bivariate normal distribution with variances σ_{1}^{2} and σ_{2}^{2} and correlation coefficient ρ, the hypothesis H_{0}: σ_{1}^{2} = θσ_{2}^{2} is tested by the statistic
T_{θ} = (s_{1}^{2} − θs_{2}^{2})(n − 2)^{½}/[2θ^{½}s_{1}s_{2}(1 − r^{2})^{½}], where s_{1}^{2} and s_{2}^{2} are as defined in the discussion of Bartlett’s test, above, and r is the sample correlation coefficient. The statistic T_{θ} is distributed under the null hypothesis as Student’s t with n − 2 degrees of freedom. One-sided tests, two-sided tests, and confidence intervals follow in the customary manner. Specifically, the hypothesis H_{0}: σ_{1}^{2} = σ_{2}^{2} is tested by means of the statistic T_{1} = (s_{1}^{2} − s_{2}^{2})(n − 2)^{½}/[2s_{1}s_{2}(1 − r^{2})^{½}].
This method, which was proposed by Morgan (1939) and Pitman (1939), is based on the easily derived fact that the covariance between X + Y and X − Y is the difference between the variances of X and Y, so that the correlation between the sum and difference of the random variables is zero if and only if the variances are equal.
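The sum-and-difference idea translates directly into code; a sketch (our implementation of the idea, not a published routine):

```python
import math, random

# The correlation between u = x1 + x2 and v = x1 - x2 is zero iff the two
# variances are equal; test it as an ordinary zero-correlation t statistic
# on n - 2 degrees of freedom.
def pitman_morgan_t(pairs):
    n = len(pairs)
    u = [a + b for a, b in pairs]
    v = [a - b for a, b in pairs]
    def corr(x, y):
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((p - mx) * (q - my) for p, q in zip(x, y))
        sxx = sum((p - mx) ** 2 for p in x)
        syy = sum((q - my) ** 2 for q in y)
        return sxy / math.sqrt(sxx * syy)
    r_uv = corr(u, v)
    return r_uv * math.sqrt((n - 2) / (1 - r_uv ** 2))

random.seed(6)
# Correlated pairs with equal variances: t should be moderate.
pairs = [(z + random.gauss(0, 1), z + random.gauss(0, 1))
         for z in (random.gauss(0, 1) for _ in range(200))]
# Inflating the second coordinate's variance drives t strongly negative.
pairs_unequal = [(a, 3 * b) for a, b in pairs]

assert abs(pitman_morgan_t(pairs)) < 5
assert pitman_morgan_t(pairs_unequal) < -5
```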
Testing for variance heterogeneity preliminary to ANOVA. The analysis of variance assumes equality of variance from cell to cell. Hence, it is sometimes proposed that the data be run through a preliminary test to check this assumption, also called that of homoscedasticity; variance heterogeneity is also called heteroscedasticity.
There are two objections to this procedure. First, the same data are subjected to two different statistical procedures, so the two results are not independent. Hence, a special theoretical investigation is needed to find out what properties such a double procedure has (see Kitagawa 1963). Second (see Box 1953, p. 333), one should not use Bartlett’s test for such a preliminary analysis, because of its extreme lack of robustness against nonnormality: one might discard as heteroscedastic data that are merely nonnormal, whereas the analysis of variance is rather robust against nonnormality. (An additional important point is that analysis of variance is fairly robust against variance heterogeneity, at least with equal numbers in the various cells.) [See Significance, tests of, for further discussion of preliminary tests.]
In view of the relative robustness of range methods, Hartley’s suggestion (1950, pp. 277–279) of testing for variance heterogeneity by means of range statistics is quite attractive.
Robust tests against variance heterogeneity. Box (1953, sec. 8) offers a more robust k-sample test against variance heterogeneity: each of the k samples is broken up into small, equal, exclusive, and exhaustive random subsets, a dispersion statistic is computed for each subset, and the within-sample variation of these statistics is compared with the between-sample variation. (Box 1953 applies an analysis of variance to the logarithms of these statistics; Moses 1963, p. 980, applies a rank test to the statistics themselves.)
Another approach applies a permutation test, which amounts to a kurtosis-dependent correction of Bartlett’s test (Box & Andersen 1955, p. 23). The results are good, although they are better in the case of known means than in the case of unknown means.
Still another procedure uses rank tests [see Nonparametric statistics, articles on the field and on ranking methods; see also a survey by van Eeden 1964]. Moses (1963, secs. 3, 4) makes some enlightening remarks about things a rank test for dispersion can and cannot be expected to do.
H. Robert van der Vaart
[See also Statistics, descriptive, article on location and dispersion.]
BIBLIOGRAPHY
Banerjee, V. 1962 Experimentelle Untersuchungen zur Gültigkeit des Variationskoeffizienten V in der Natur, untersucht an zwei erbreinen Populationen einer Wasserläuferart. Biometrische Zeitschrift 4:121-125.
Bartlett, M. S. 1937 Properties of Sufficiency and Statistical Tests. Royal Society of London, Proceedings Series A 160:268-282.
Bartlett, M. S.; and Kendall, D. G. 1946 The Statistical Analysis of Variance-heterogeneity and the Logarithmic Transformation. Journal of the Royal Statistical Society, Series B 8:128-138.
Basharin, G. P. 1959 On a Statistical Estimate for the Entropy of a Sequence of Independent Random Variables. Theory of Probability and Its Applications 4:333-337. → First published in Russian in the same year, in Teoriia veroiatnostei i ee primeneniia, of which the English edition is a translation, published by the Society for Industrial and Applied Mathematics.
Blyth, Colin R. 1959 Note on Estimating Information. Annals of Mathematical Statistics 30:71-79.
Bowman, Mary J. 1956 The Analysis of Inequality Patterns: A Methodological Contribution. Metron 18, no. 1/2:189-206.
Box, G. E. P. 1953 Non-normality and Tests on Variances. Biometrika 40:318-335.
Box, G. E. P.; and Andersen, S. L. 1955 Permutation Theory in the Derivation of Robust Criteria and the Study of Departures From Assumptions. Journal of the Royal Statistical Society, Series B 17:1-34.
Chu, J. T. 1957 Some Uses of Quasi-ranges. Annals of Mathematical Statistics 28:173-180.
Cramér, Harald (1945) 1951 Mathematical Methods of Statistics. Princeton Mathematical Series, No. 9. Princeton Univ. Press.
David, H. A. 1962 Order Statistics in Short-cut Tests. Pages 94-128 in Ahmed E. Sarhan and Bernard G. Greenberg (editors), Contributions to Order Statistics. New York: Wiley.
Dudding, Bernard P. 1952 The Introduction of Statistical Methods to Industry. Applied Statistics 1:3-20.
Fraser, Donald A. S. 1957 Nonparametric Methods in Statistics. New York: Wiley.
Greenberg, Joseph H. 1956 The Measurement of Linguistic Diversity. Language 32:109-115.
Hartley, H. O. 1950 The Use of Range in Analysis of Variance. Biometrika 37:271-280.
Hays, William L. 1963 Statistics for Psychologists. New York: Holt.
Henrysson, Sten 1957 Applicability of Factor Analysis in the Behavioral Sciences: A Methodological Study. Stockholm Studies in Educational Psychology, No. 1. Stockholm: Almqvist & Wiksell.
Hirsch, Jerry 1961 The Role of Assumptions in the Analysis and Interpretation of Data. American Journal of Orthopsychiatry 31:474-480. → Discussion paper in a symposium on the genetics of mental disease.
Hotelling, Harold 1961 The Behavior of Some Standard Statistical Tests Under Nonstandard Conditions. Volume 1, pages 319-359 in Berkeley Symposium on Mathematical Statistics and Probability, Fourth, 1960, Proceedings. Edited by Jerzy Neyman. Berkeley and Los Angeles: Univ. of California Press.
Kamat, A. R. 1958 Contributions to the Theory of Statistics Based on the First and Second Successive Differences. Metron 19, no. 1/2:97-118.
Keen, Joan; and Page, Denys J. 1953 Estimating Variability From the Differences Between Successive Readings. Applied Statistics 2:13-23.
Kitagawa, Tosio 1963 Estimation After Preliminary Tests of Significance. University of California Publications in Statistics 3:147-186.
Kolmogorov, A. N. 1958 Sur les propriétés des fonctions de concentrations de M. P. Lévy. Paris, Université, Institut Henri Poincaré, Annales 16:27-34.
Lehmann, E. L. 1959 Testing Statistical Hypotheses. New York: Wiley.
Lindley, D. V.; East, D. A.; and Hamilton, P. A. 1960 Tables for Making Inferences About the Variance of a Normal Distribution. Biometrika 47:433-437.
Lomnicki, Z. A. 1952 The Standard Error of Gini’s Mean Difference. Annals of Mathematical Statistics 23:635-637.
Lubin, Ardie 1962 Statistics. Annual Review of Psychology 13:345-370.
McHugh, Richard B. 1953 The Comparison of Two Correlated Sample Variances. American Journal of Psychology 66:314-315.
Maxwell, A. E. 1960 Discrepancies in the Variances of Test Results for Normal and Neurotic Children. British Journal of Statistical Psychology 13:165-172.
Miller, George A.; and Chomsky, Noam 1963 Finitary Models of Language Users. Volume 2, pages 419-491 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), Handbook of Mathematical Psychology. New York: Wiley.
Miller, George A.; and Madow, William G. (1954) 1963 On the Maximum Likelihood Estimate of the Shannon-Wiener Measure of Information. Volume 1, pages 448-469 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), Readings in Mathematical Psychology. New York: Wiley.
Moore, P. G. 1955 The Properties of the Mean Square Successive Difference in Samples From Various Populations. Journal of the American Statistical Association 50:434-456.
Morgan, W. A. 1939 A Test for the Significance of the Difference Between the Two Variances in a Sample From a Normal Bivariate Population. Biometrika 31:13-19.
Moses, Lincoln E. 1963 Rank Tests of Dispersion. Annals of Mathematical Statistics 34:973-983.
Page, E. S. 1962 Modified Control Chart With Warning Lines. Biometrika 49:171-176.
Page, E. S. 1963 Controlling the Standard Deviation by Cusums and Warning Lines. Technometrics 5:307-315.
Pearson, Karl 1897 On the Scientific Measure of Variability. Natural Science 11:115-118.
Pitman, E. J. G. 1939 A Note on Normal Correlation. Biometrika 31:9-12.
Rao, C. Radhakrishna 1952 Advanced Statistical Methods in Biometric Research. New York: Wiley.
Rényi, Alfréd 1961 On Measures of Entropy and Information. Volume 1, pages 547-561 in Berkeley Symposium on Mathematical Statistics and Probability, Fourth, 1960, Proceedings. Edited by Jerzy Neyman. Berkeley and Los Angeles: Univ. of California Press.
Rényi, Alfréd 1962 Wahrscheinlichkeitsrechnung, mit einem Anhang über Informationstheorie. Berlin: Deutscher Verlag der Wissenschaften.
Siegel, Sidney 1956 Nonparametric Statistics for the Behavioral Sciences. New York: McGraw-Hill.
van der Vaart, H. Robert 1965 A Note on Wilks’ Internal Scatter. Annals of Mathematical Statistics 36:1308-1312.
van Eeden, Constance 1964 Note on the Consistency of Some Distribution-free Tests for Dispersion. Journal of the American Statistical Association 59:105-119.
Wold, Herman 1935 A Study on the Mean Difference, Concentration Curves and Concentration Ratio. Metron 12, no. 2:39-58.
Yntema, Dwight B. 1933 Measures of the Inequality in the Personal Distribution of Wealth or Income. Journal of the American Statistical Association 28:423-433.