The term chi-square (χ²) refers to a distribution, a variable that is χ²-distributed, or a statistical test employing the χ² distribution. A χ² distribution with k degrees of freedom (df) has mean k, variance 2k, and mode k − 2 (if k > 2), and is denoted χ²ₖ. Much of its usefulness in statistical inference derives from the fact that the scaled sample variance (N − 1)s²/σ² of a normally distributed variable is χ²-distributed with df = N − 1. All χ² distributions are asymmetrical, right-skewed, and non-negative. Owing to the broad utility of the χ² distribution, tabled χ² probability values can be found in virtually every introductory statistics text.
A test of the null hypothesis that σ² = σ₀² (e.g., H₀: σ² = 1.8) is conducted by obtaining the sample variance s², computing the test statistic

G = (N − 1)s² / σ₀²   (1)

and consulting values of the χ² distribution with N − 1 df. For a two-tailed test, G is compared to the critical values that cut off the lower and upper (50 × α)% of the distribution. Rejection implies, with confidence 1 − α, that the sample was not drawn from a normally distributed population with variance σ₀².
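As a sketch, the variance test can be carried out in a few lines of Python; the sample values and the hypothesized variance of 1.8 are illustrative, not drawn from any real data set:

```python
import statistics

def variance_chi_square(sample, sigma0_sq):
    """Return G = (N - 1) * s^2 / sigma0^2 and its df, for H0: sigma^2 = sigma0^2."""
    n = len(sample)
    s_sq = statistics.variance(sample)  # unbiased sample variance s^2
    return (n - 1) * s_sq / sigma0_sq, n - 1

# Hypothetical sample of N = 6 scores, testing H0: sigma^2 = 1.8
g, df = variance_chi_square([4.1, 5.0, 3.2, 6.3, 4.8, 5.6], 1.8)
# g is then compared to the lower and upper alpha/2 critical values
# of the chi-square distribution with df = 5.
```

The statistic and df are returned together because both are needed to look up the tabled critical values.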
The χ² goodness of fit test compares two finite frequency distributions: one a set of observed frequency counts in C categories, the other a set of counts expected on the basis of theory or chance. The statistic

G = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ   (2)

is computed, where Oᵢ and Eᵢ are, respectively, the observed and expected frequencies for category i given a fixed total sample size N. G is approximately χ²-distributed with df = C − 1. If the null hypothesis of equality is rejected, the test implies a statistically significant departure from expectations.
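A minimal Python sketch of the goodness-of-fit computation; the die-rolling counts are hypothetical:

```python
def chi_square_gof(observed, expected):
    """Goodness-of-fit statistic: sum of (O_i - E_i)^2 / E_i, with df = C - 1."""
    g = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return g, len(observed) - 1

# 60 rolls of a die hypothesized to be fair, so E_i = 60 / 6 = 10 per face
g, df = chi_square_gof([5, 8, 9, 8, 10, 20], [10] * 6)
# g is compared to the chi-square distribution with df = 5.
```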
This test can be extended to test the null hypothesis that several frequency distributions are independent. For example, given a 3 × 4 contingency table of frequencies, where R = 3 rows (conditions) and C = 4 columns (categories), G may be computed as

G = Σᵢ Σⱼ (Oᵢⱼ − Eᵢⱼ)² / Eᵢⱼ   (3)

and compared against a χ² distribution with df = (R − 1)(C − 1). Expected frequencies Eᵢⱼ are computed as the product of the marginal totals for row i and column j divided by N. Rejection of the null hypothesis implies that not all rows (or columns) were sampled from independent populations. This test may be extended to any number of dimensions.
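The same computation for an R × C table can be sketched as follows; the 3 × 4 table of counts is hypothetical, and the expected counts come from the marginal-totals formula just described:

```python
def chi_square_independence(table):
    """Test of independence for an R x C table of frequency counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    g = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / n  # product of marginals over N
            g += (o - e) ** 2 / e
    return g, (len(table) - 1) * (len(table[0]) - 1)

# Hypothetical 3 x 4 table: 3 conditions (rows) by 4 categories (columns)
g, df = chi_square_independence([[10, 20, 30, 40],
                                 [15, 25, 25, 35],
                                 [20, 20, 30, 30]])
# df = (3 - 1)(4 - 1) = 6
```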
These χ² tests have been found to work well with average expected frequencies as low as 2. However, they are inappropriate if the assumption of independent observations is violated.
A common application of χ² is to test the hypothesis that a sample's parent population follows a particular continuous probability density function. The test is conducted by first dividing the hypothetical distribution into C "bins" of equal width. The frequency expected for each bin (Eᵢ) is approximated by computing the probability of randomly selecting a case from that bin and multiplying by N. Observed frequencies (Oᵢ) are obtained by using the same bin limits in the observed distribution. The one-tailed test is conducted by using equation 2 and comparing the result to the critical value drawn from a χ² distribution with C − 1 df. Note that the number of bins, and the points of division between bins, must be chosen arbitrarily, yet these decisions can have a large impact on conclusions.
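One way to compute the expected bin counts for a hypothesized normal distribution, using only the standard library; the bin edges and parameters here are illustrative:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative probability of a normal(mu, sigma) distribution at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_bin_counts(edges, mu, sigma, n):
    """E_i = N * P(bin i) under the hypothesized normal distribution."""
    return [n * (normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
            for lo, hi in zip(edges[:-1], edges[1:])]

# Hypothetical: N = 100 cases, H0: standard normal, four equal-width bins
e = expected_bin_counts([-2, -1, 0, 1, 2], 0.0, 1.0, 100)
# Observed counts from the same bin limits would then go into equation 2.
```

Cases falling outside the outermost edges would in practice be collected into open-ended end bins; that bookkeeping is omitted here.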
The χ² distribution has many other applications in the social sciences, including Bartlett's test of homogeneity of variance, Friedman's test for median differences, tests for heteroscedasticity, nonparametric measures of association, and likelihood ratios. In addition, χ² statistics form the basis for many model fit and selection indices used in latent variable analyses, item response theory, logistic regression, and other advanced techniques. All of these methods involve the evaluation of the discrepancy between a model's implications and observed data.
SEE ALSO Distribution, Normal
Kristopher J. Preacher
Studies often collect data on categorical variables that can be summarized as a series of counts. These counts are commonly arranged in a tabular format known as a contingency table. For example, a study designed to determine whether or not there is an association between cigarette smoking and asthma might collect data that could be assembled into a 2 × 2 table. In this case, the two columns could be defined by whether the subject smoked or not, while the rows could represent whether or not the subject experienced symptoms of asthma. The cells of the table would contain the number of observations or patients as defined by these two variables.
The chi-square test statistic can be used to evaluate whether there is an association between the rows and columns in a contingency table. More specifically, this statistic can be used to determine whether there is any difference between the study groups in the proportions of the risk factor of interest. Returning to our example, the chi-square statistic could be used to test whether the proportion of individuals who smoke differs by asthmatic status.
The chi-square test statistic is designed to test the null hypothesis that there is no association between the rows and columns of a contingency table. This statistic is calculated by first obtaining, for each cell in the table, the expected number of events that will occur if the null hypothesis is true. When the observed number of events deviates significantly from the expected counts, then it is unlikely that the null hypothesis is true, and it is likely that there is a row-column association. Conversely, a small chi-square value indicates that the observed values are similar to the expected values, leading us to conclude that the null hypothesis is plausible. The general formula used to calculate the chi-square (X²) test statistic is as follows:

X² = Σ (O − E)² / E   (1)

df = (r − 1) × (c − 1)

where O = observed count in a category; E = expected count in that category under the null hypothesis; df = degrees of freedom; and r and c represent the number of rows and columns in the contingency table.

Table 1. Observed values for data presented in a two-by-two table. (Source: Courtesy of author.)

| Variable 2 | Variable 1 present | Variable 1 absent | Total |
|---|---|---|---|
| Present | a | b | a + b |
| Absent | c | d | c + d |
| Total | a + c | b + d | N |
The value of the chi-square statistic cannot be negative and can assume values from zero to infinity. The p-value for this test statistic is based on the chi-square probability distribution and is generally extracted from published tables or estimated using computer software programs. The p-value represents the probability that the chi-square test statistic is as extreme as or more extreme than observed if the null hypothesis were true. As with the t and F distributions, there is a different chi-square distribution for each possible value of degrees of freedom. Chi-square distributions with a small number of degrees of freedom are highly skewed; however, this skewness is attenuated as the number of degrees of freedom increases. In general, the degrees of freedom for tests of hypothesis that involve an r × c contingency table equal (r − 1) × (c − 1); thus, for any 2 × 2 table, the degrees of freedom equal one. A chi-square distribution with one degree of freedom is the distribution of the square of a standard normal variable, and, consequently, either the chi-square or the standard normal table can be used to determine the corresponding p-value.

Table 2. Expected values for data presented in a two-by-two table. (Source: Courtesy of author.)

| Variable 2 | Variable 1 present | Variable 1 absent | Total |
|---|---|---|---|
| Present | (a + b)(a + c)/N | (a + b)(b + d)/N | a + b |
| Absent | (c + d)(a + c)/N | (c + d)(b + d)/N | c + d |
| Total | a + c | b + d | N |
The chi-square test is most widely used to conduct tests of hypothesis that involve data that can be presented in a 2 × 2 table. Indeed, this tabular format is a feature of the case-control study design commonly used in public health research. Within this contingency table, we could denote the observed counts as shown in Table 1. Under the null hypothesis of no association between the two variables, the expected count in each cell is calculated from the observed values using the formula outlined in Table 2.
The use of the chi-square test can be illustrated by using hypothetical data from a study investigating the association between smoking and asthma among adults observed in a community health clinic. The results obtained from classifying 150 individuals are shown in Table 3. As Table 3 shows, among asthmatics the proportion of smokers was 40 percent (20/50), while the corresponding proportion among asymptomatic individuals was 22 percent (22/100). By applying the formula presented in Table 2 to the observed cell counts of 20, 30, 22, and 78 (Table 3), the corresponding expected counts are 14, 36, 28, and 72. The observed and expected counts can then be used to calculate the chi-square test statistic as outlined in Equation 1. The resulting value of the chi-square test statistic is approximately 5.36, and the associated p-value for a chi-square distribution with one degree of freedom is 0.02.

Table 3. Hypothetical data showing the chi-square test. (Source: Courtesy of author.)

| Symptoms of asthma | Ever smoked cigarettes | Never smoked cigarettes | Total |
|---|---|---|---|
| Present | 20 | 30 | 50 |
| Absent | 22 | 78 | 100 |
| Total | 42 | 108 | 150 |

Therefore, if there were truly no association between smoking and asthma, there would be a 2 in 100 probability of observing a difference in proportions at least as large as 18 percent (40% − 22%) by chance alone. We would therefore conclude that the observed difference in the proportions is unlikely to be explained by chance alone, and consider this result statistically significant.
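The arithmetic above can be checked with a short Python snippet; for one degree of freedom, the chi-square p-value reduces to erfc(√(X²/2)), since a chi-square variable with one df is the square of a standard normal variable:

```python
import math

# Observed and expected counts from the smoking-asthma illustration
observed = [20, 30, 22, 78]
expected = [14, 36, 28, 72]  # (row total x column total) / 150 for each cell

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p = math.erfc(math.sqrt(chi_sq / 2.0))  # p-value for df = 1
# chi_sq is about 5.36 and p is about 0.02, matching the text.
```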
Because the construction of the chi-square test makes use of discrete data to estimate a continuous distribution, some authors will apply a continuity correction when calculating this statistic. Specifically,

X² = Σ (|Oᵢ − Eᵢ| − 0.5)² / Eᵢ

where |Oᵢ − Eᵢ| is the absolute value of the difference between Oᵢ and Eᵢ, and the term 0.5 in the numerator is often referred to as Yates' correction factor. This correction factor serves to reduce the chi-square value and therefore increases the resulting p-value. It has been suggested that this correction yields an overly conservative test that may fail to reject a false null hypothesis. However, as long as the sample size is large, the effect of the correction factor is negligible.
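A sketch of the continuity-corrected statistic, applied to the same smoking-asthma counts used above:

```python
def yates_chi_square(observed, expected):
    """Continuity-corrected chi-square: sum of (|O - E| - 0.5)^2 / E."""
    return sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))

# Same 2 x 2 counts as the smoking-asthma example; the correction
# shrinks the uncorrected value of about 5.36 to about 4.50.
corrected = yates_chi_square([20, 30, 22, 78], [14, 36, 28, 72])
```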
When the number of counts in the table is small, the use of the chi-square test statistic may not be appropriate. Specifically, it has been recommended that this test not be used if any cell in the table has an expected count of less than one, or if more than 20 percent of the cells have an expected count of less than five. Under this scenario, Fisher's exact test is recommended for conducting tests of hypothesis.
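For completeness, Fisher's exact test for a 2 × 2 table can be computed from the hypergeometric distribution with nothing beyond the standard library; the two-sided p-value sums the probabilities of all tables with the same margins that are no more likely than the observed one. The example table is hypothetical:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2 x 2 table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    return sum(prob(x)
               for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs + 1e-12)

# Hypothetical small-sample table with some expected counts below five
p = fisher_exact_p(8, 2, 1, 5)  # about 0.035
```

The small tolerance added to `p_obs` guards against floating-point ties when deciding which tables are "as extreme" as the observed one.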
Paul J. Villeneuve
(see also: Normal Distributions; Probability Model; Sampling; Statistics for Public Health; T-Test)
The chi-square test (X²) is the most commonly used method for comparing frequencies or proportions. It is a statistical test used to determine if observed data deviate from those expected under a particular hypothesis. The chi-square test is also referred to as a test of a measure of fit or "goodness of fit" between data.

Table 1. Congenital heart defects in Down and Patau syndrome patients. (Thomson Gale.)

|  | Down syndrome | Patau syndrome | Total |
|---|---|---|---|
| CHD present | 24 | 20 | 44 |
| CHD absent | 36 | 5 | 41 |
| Total | 60 | 25 | 85 |

Table 2. 2 × 2 table summarizing data collected from two groups of patients. (Thomson Gale.)

|  | Down syndrome | Patau syndrome | Total |
|---|---|---|---|
| CHD present | O11 | O12 | r1 |
| CHD absent | O21 | O22 | r2 |
| Total | c1 | c2 | N |

Table 3. Observed and expected frequencies and Χ² for data in Table 1. (Thomson Gale.)

| Observed (O) | Expected (E) | O − E | (O − E)² | (O − E)²/E |
|---|---|---|---|---|
| 24 | 31.1 | −7.1 | 50.41 | 1.62 |
| 20 | 12.9 | 7.1 | 50.41 | 3.91 |
| 36 | 28.9 | 7.1 | 50.41 | 1.74 |
| 5 | 12.1 | −7.1 | 50.41 | 4.17 |
| Total |  |  |  | 11.44 |
Typically, the chi-square test examines whether or not two samples are different enough in a particular characteristic to be considered separate from each other. Chi-square analysis belongs to a type of analysis
known as univariate analysis; this analysis examines the possible effect of one variable (often called the independent variable) upon an outcome (often called the dependent variable).
The chi-square analysis is used to test what is termed the null hypothesis (H0 ). A null hypothesis states there is no significant difference between expected and observed data. Investigators either accept or reject H0 after comparing the value of chi-square to a probability distribution. Chi-square values with low probability lead to the rejection of H0; put another way, this means that a factor other than chance created a large difference between the expected and observed results. Values with a higher probability are accepted; in other words there is no appreciable difference between the observed and expected values.
Chi-square tests only evaluate a single variable, thus they do not take into account the interaction among more than one variable upon the outcome. Therefore, other unseen factors may make the variables appear to be associated even when they are not. Despite this possibility, if properly used, the chi-square test is a very useful tool for the evaluation of associations and can be used as a preliminary analysis of more complex statistical evaluations.
The chi-square test (Χ²) is the most commonly used method for comparing frequencies or proportions. It is a statistical test used to determine if observed data deviate from those expected under a particular hypothesis. The chi-square test is also referred to as a test of a measure of fit or "goodness of fit" between data. Typically, the hypothesis tested is whether or not two samples are different enough in a particular characteristic to be considered members of different populations. Chi-square analysis belongs to the family of univariate analysis, i.e., those tests that evaluate the possible effect of one variable (often called the independent variable) upon an outcome (often called the dependent variable).
The chi-square analysis is used to test the null hypothesis (H0), which is the hypothesis that states there is no significant difference between expected and observed data. Investigators either accept or reject H0, after comparing the value of chi-square to a probability distribution. Chi-square values with low probability lead to the rejection of H0 and it is assumed that a factor other than chance creates a large deviation between expected and observed results. As with all non-parametric tests (that do not require normal distribution curves), chi-square tests only evaluate a single variable, thus they do not take into account the interaction among more than one variable upon the outcome.
A chi-square analysis is best illustrated using an example in which data from a population are categorized with respect to two qualitative variables. Table 1 shows a sample of patients categorized with respect to two qualitative variables, namely, congenital heart defect (CHD; present or absent) and karyotype (trisomy 21, also called Down syndrome, or trisomy 13, also called Patau syndrome). The classification table used in a chi-square analysis is called a contingency table, and this is its simplest form (2 x 2). The data in a contingency table are often defined as row (r) and column (c) variables.
In general, a chi-square analysis evaluates whether or not variables within a contingency table are independent, that is, whether there is no association between them. In this example, independence would mean that the proportion of individuals affected by CHD is not dependent on karyotype; thus, the proportion of patients with CHD would be similar for both Down and Patau syndrome patients. Dependence, or association, would mean that the proportion of individuals affected by CHD is dependent on karyotype, so that CHD would be more commonly found in patients with one of the two karyotypes examined.
Table 1 shows a 2 x 2 contingency table for a chi-square test: CHD (congenital heart defects) found in patients with Down and Patau syndromes.
Chi-square is the sum of the squared difference between observed and expected data, divided by the expected data, in all possible categories:

Χ² = (O11 − E11)²/E11 + (O12 − E12)²/E12 + (O21 − E21)²/E21 + (O22 − E22)²/E22

where O11 represents the observed number of subjects in column 1, row 1, and so on. A summary is shown in Table 2.
The observed frequency is simply the actual number of observations in a cell. In other words, O11 for CHD in the Down-syndrome-affected individuals is 24. Likewise, the observed frequency of CHD in the Patau-syndrome-affected patients is 20 (O12). Because the null hypothesis assumes that the two variables are independent
of each other, expected frequencies are calculated using the multiplication rule of probability. The multiplication rule says that the probability of the occurrence of two independent events X and Y is the product of the individual probabilities of X and Y. In this case, the expected probability that a patient has both Down syndrome and CHD is the product of the probability that a patient has Down syndrome (60/85 = 0.706) and the probability that a patient has CHD (44/85 = 0.518), or 0.706 x 0.518 = 0.366. The expected frequency of patients with both Down syndrome and CHD is the product of the expected probability and the total population studied, or 0.366 x 85 = 31.1.
Table 3 presents observed and expected frequencies and Χ² for data in Table 1.
Before the chi-square value can be evaluated, the degrees of freedom for the data set must be determined. Degrees of freedom are the number of independent variables in the data set. In a contingency table, the degrees of freedom are calculated as the product of the number of rows minus 1 and the number of columns minus 1, or (r-1)(c-1). In this example, (2-1)(2-1) = 1; thus, there is just one degree of freedom.
Once the degrees of freedom are determined, the value of Χ² is compared with the appropriate chi-square distribution, which can be found in tables in most statistical analysis texts. A relative standard serves as the basis for accepting or rejecting the hypothesis. In biological research, the relative standard is usually p = 0.05, where p is the probability that the deviation of the observed frequencies from the expected frequencies is due to chance alone. If p is less than or equal to 0.05, then the null hypothesis is rejected and the data are not independent of each other. For one degree of freedom, the critical value associated with p = 0.05 for Χ² is 3.84. Chi-square values higher than this critical value are associated with a statistically low probability that H0 is true. Because the chi-square value here is 11.44, much greater than 3.84, the hypothesis that the proportion of trisomy-13-affected patients with CHD does not differ significantly from the corresponding proportion for trisomy-21-affected patients is rejected. Instead, it is very likely that there is a dependence of CHD on karyotype.
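The hand calculation can be verified in Python. Note that the value 11.44 reflects expected counts rounded to one decimal place (as in Table 3), while unrounded expected counts give about 11.31; either value far exceeds the critical value of 3.84:

```python
observed = [24, 20, 36, 5]

# Expected counts rounded to one decimal, as in Table 3
e_rounded = [31.1, 12.9, 28.9, 12.1]
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, e_rounded))

# Unrounded expected counts: (row total x column total) / 85 for each cell
e_exact = [60 * 44 / 85, 25 * 44 / 85, 60 * 41 / 85, 25 * 41 / 85]
chi_sq_exact = sum((o - e) ** 2 / e for o, e in zip(observed, e_exact))

# chi_sq is about 11.44 and chi_sq_exact about 11.31; both exceed 3.84.
```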
Figure 1 shows chi-square distributions for 1, 3, and 5 degrees of freedom. The shaded region in each of the distributions indicates the upper 5% of the distribution. The critical value associated with p = 0.05 is indicated. Notice that as the degrees of freedom increases, the chi-square value required to reject the null hypothesis increases.
Because a chi-square test is a univariate test, it does not consider relationships among multiple variables at the same time. Therefore, dependencies detected by chi-square analyses may be unrealistic or non-causal. There may be other unseen factors that make the variables appear to be associated. However, if properly used, the test is a very useful tool for the evaluation of associations and can be used as a preliminary analysis of more complex statistical evaluations.