Reliability, Statistical
Statistical reliability refers to the consistency of measurement. According to Jum C. Nunnally and Ira H. Bernstein in their 1994 publication Psychometric Theory, in classical test theory reliability is defined mathematically as the ratio of true-score variance to total variance. A true score can be thought of as the score an individual would receive if the construct of interest were measured without error. If a researcher developed a perfect way to measure shyness, then each individual's shyness score would be a true score. In the social sciences, constructs are virtually always measured with error, and as such true scores are unknowns that must be estimated by observed scores.
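The classical test theory definition above can be written compactly. The following is a standard sketch of the model, decomposing an observed score X into a true score T and a random error E:

```latex
% Classical test theory: each observed score is a true score plus random error
X = T + E
% Because T and E are assumed uncorrelated, the variances add
\sigma_X^2 = \sigma_T^2 + \sigma_E^2
% Reliability: the ratio of true-score variance to total (observed) variance
r_{xx} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
```

When measurement is error-free (σ²_E = 0), reliability is 1; as error variance grows, reliability approaches 0.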

The expected standard deviation of scores for an individual taking a test many times is known as the standard error of measurement (SEM). The SEM of a score is estimated by

SEM = s_x √(1 − r_xx),

where s_x is the standard deviation of observed scores on the test and r_xx is the estimate of score reliability. The expectation is that if an individual takes a test many times, 68 percent of scores will fall within one SEM and 95 percent of scores will fall within two SEMs of the observed score. Assume that Jamie scored 1,000 on an achievement test with an observed standard deviation of 100 and a score reliability estimate of .84. The SEM for Jamie would be

SEM = 100 √(1 − .84) = 100 √.16 = 100(.40) = 40.

If Jamie took the achievement test many times, one would expect approximately 68 percent of the resulting scores to fall between 960 and 1,040, and approximately 95 percent of the scores to fall between 920 and 1,080.
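Jamie's computation can be sketched in a few lines of Python (the function name `sem` is simply an illustrative label):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = s_x * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

# Jamie's test: observed standard deviation = 100, reliability estimate = .84
jamie_sem = sem(100, 0.84)  # 100 * sqrt(.16) = 40
score = 1000
band_68 = (score - jamie_sem, score + jamie_sem)          # roughly 960 to 1,040
band_95 = (score - 2 * jamie_sem, score + 2 * jamie_sem)  # roughly 920 to 1,080
```

Note that a higher reliability estimate shrinks the SEM and therefore narrows the expected bands around the observed score.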


Reliability can refer to several different types of estimates of consistency. Alternative-form reliability refers to the degree of consistency between two equivalent forms of a measure. Split-half reliability refers to the consistency between two randomly created halves of a single test. The internal consistency statistic coefficient α (also referred to as Cronbach's α) and the related Kuder-Richardson 20 (or KR-20) can be thought of as the mean of all possible split-half estimates. Test-retest reliability is the consistency of scores on separate administrations of the same test. Determining an appropriate delay between the initial and follow-up administrations can be difficult. For example, if the chosen delay is too long, expected changes (e.g., learning) in the respondents may contribute to a low test-retest correlation, which may be mistaken for evidence of invalidity. If the delay is too short, respondents remembering and repeating their responses from the first test may contribute to a high test-retest correlation, which may be mistaken for evidence of validity. Ideally in this context, respondents would receive the exact same score on the test and the retest. If scores tend to go up or down from the test to the retest, the correlation between the test and the retest will be reduced unless all scores rise or fall by the same amount.
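Coefficient α can be computed directly from item-level scores using its usual form, α = (k / (k − 1)) · (1 − Σ item variances / variance of total scores). A minimal sketch in Python; the small data set below is hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Coefficient alpha from a list of item columns.

    `items` holds one list of respondent scores per item;
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    sum_item_var = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Three hypothetical items answered by five respondents
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
alpha = cronbach_alpha(items)  # high for items that rank respondents consistently
```

Items that covary strongly drive α toward 1; in the limiting case where every item column is identical, α equals exactly 1.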

Inter-rater reliability refers to the extent to which judges or raters agree with one another. As an example, assume two judges rate teachers on the extent to which they promote a mastery orientation among students. The consistency of the judges can be estimated by correlating the scores of the two judges. A 1965 study by John E. Overall found that the effective reliability of the judging process (i.e., of both judges together) can also be estimated; effective reliability will generally be better than the reliability of either judge in isolation. One problem in estimating inter-rater reliability is that the judges' scores may reflect both random error and systematic differences in responding. This problem reflects a fundamental concern about the representativeness of judges, and, according to Richard J. Shavelson and Noreen M. Webb in their 1991 publication Generalizability Theory: A Primer, it is addressed in an extension of classical test theory known as generalizability theory.
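Both steps can be sketched in Python. The ratings below are hypothetical, and the effective (composite) reliability is estimated here with the Spearman-Brown formula, a common way of projecting the reliability of a mean of k judges from the single-judge correlation:

```python
from statistics import fmean, pstdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = fmean(x), fmean(y)
    cov = fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (pstdev(x) * pstdev(y))

def effective_reliability(r_single, k):
    """Spearman-Brown estimate for a composite of k judges."""
    return k * r_single / (1 + (k - 1) * r_single)

# Hypothetical mastery-orientation ratings of six teachers by two judges
judge_a = [7, 5, 8, 4, 6, 9]
judge_b = [6, 5, 9, 3, 6, 8]
r_ab = pearson_r(judge_a, judge_b)          # single-judge consistency
r_eff = effective_reliability(r_ab, k=2)    # exceeds either judge alone
```

Because the formula is increasing in k (for positive r), adding judges to the composite raises the effective reliability, which matches Overall's observation that the composite outperforms any single judge.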


The assertion "this measure is reliable" reflects a common misunderstanding about the nature of reliability. Reliability is a property of scores, not of measures, and it is therefore sample and context dependent. A test yielding good reliability estimates with one sample might yield poor reliability estimates with a sample from a different population, or even with the same sample under different testing conditions. The decision about whether to adopt an existing measure should therefore be based on more than a simple examination of score reliability in a few studies. Further, score reliability should be examined on each administration regardless of the measure's empirical history (Thompson, 2002).


Reliability and validity are not necessarily related. Validity concerns the extent to which a measure actually measures what it is intended to measure. Although reliability sets a maximum limit on the correlation that can be observed between two measures, it is possible for scores to be perfectly reliable on a measure that has no validity, or for scores to have no reliability on a measure that is perfectly valid. As an example of the latter, suppose that students in a pharmacology class are given a test of content knowledge at the beginning of the semester and again at the end. Assume that on the pretest the students know nothing and therefore respond randomly, while on the posttest they all receive A's. Estimating reliability via the split-half method or coefficient α on the pretest would suggest very little consistency, as would the test-retest method. The test itself, however, would seem to be measuring content taught during the course (indicating good validity).


Nunnally, Jum C., and Ira H. Bernstein. 1994. Psychometric Theory. 3rd ed. New York: McGraw-Hill.

Overall, John E. 1965. Reliability of Composite Ratings. Educational and Psychological Measurement 25: 1011-1022.

Shavelson, Richard J., and Noreen M. Webb. 1991. Generalizability Theory: A Primer. Newbury Park, CA: Sage Publications.

Thompson, Bruce, ed. 2002. Score Reliability: Contemporary Thinking on Reliability Issues. Newbury Park, CA: Sage Publications.

Jeffrey C. Valentine
