Probability and the Law of Large Numbers

Theoretical and experimental probabilities are linked by the Law of Large Numbers. This law states that if an experiment is repeated numerous times, the relative frequency, or experimental probability, of an outcome will tend to be close to the theoretical probability of that outcome. Here the relative frequency is the quotient of the number of times an outcome occurs divided by the number of times the experiment was performed.
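
The relative frequency described above is a simple quotient, and can be computed directly. The sketch below is an illustration only; the function name and sample data are invented for the example:

```python
def relative_frequency(outcomes, target):
    # Relative frequency = (number of times target occurred) / (number of trials).
    return outcomes.count(target) / len(outcomes)

# For instance, 40 heads observed in 100 coin flips:
flips = ["H"] * 40 + ["T"] * 60
print(relative_frequency(flips, "H"))  # 0.4
```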

The Law of Large Numbers is more than just a general principle. The Swiss mathematician Jakob Bernoulli (1654–1705) was the first to recognize the connection between long-run proportion and probability. His mathematical proof of the Law of Large Numbers appeared in his book Ars Conjectandi ("The Art of Conjecturing"), published after his death in 1713. This principle also plays a key role in the understanding of sampling distributions, enabling pollsters and researchers to make predictions based on statistics.

Demonstrating the Law of Large Numbers

When a fair die is tossed, the likelihood that the number on the top face of the die will be 2 is 1/6 because only one of the six numbers on the die is a 2. So the theoretical probability of rolling a 2 is 1/6, or about 17 percent. If a die is tossed six times, a 2 may be rolled more than once or not at all; hence, the percentage of times that a 2 is rolled will vary from the theoretical probability of 1/6. However, if the die is tossed 600 times, the relative frequency should approximate the theoretical probability. Hence, the number of times the result is 2 after 600 tosses should be fairly close to 1/6 of 600, or 100.
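
The contrast between 6 tosses and 600 tosses is easy to see in a short simulation (a sketch; the helper name is our invention):

```python
import random

def count_twos(tosses):
    """Toss a simulated fair die `tosses` times and count how often a 2 comes up."""
    return sum(1 for _ in range(tosses) if random.randint(1, 6) == 2)

print(count_twos(6))    # could be anything from 0 to 6
print(count_twos(600))  # should land fairly close to 600/6 = 100
```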

Suppose each of 100 people rolls a fair die 600 times while keeping track of the percentage of times a 2 was rolled. There most likely would be variations in their resulting relative frequencies. Still, the vast majority of the relative frequencies would be close to 1/6. If each die were rolled many more times, each of the individual results would tend to be even closer to 1/6.
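
This thought experiment can itself be simulated. The sketch below (function name and the 0.05 closeness threshold are our choices) runs 100 simulated experimenters and checks how many land near 1/6:

```python
import random

def relative_freq_of_twos(tosses=600):
    # Fraction of `tosses` rolls of a fair die that come up 2.
    twos = sum(random.randint(1, 6) == 2 for _ in range(tosses))
    return twos / tosses

# 100 people each roll a die 600 times.
freqs = [relative_freq_of_twos() for _ in range(100)]

# Individual results vary, but nearly all cluster near 1/6, about 0.167.
close = sum(abs(f - 1/6) < 0.05 for f in freqs)
print(min(freqs), max(freqs), close)
```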

Misconceptions about Probability

If a coin is flipped once and it lands heads up, does that mean it will land tails up next time? Certainly not. The Law of Large Numbers does not apply to any individual flip of the coin, but rather to the long-run behavior. If the coin landed heads up nine times in a row, it cannot be assured that the next flip will show tails. The probability that the next flip of the coin will be heads is still 50 percent. Even if many more heads than tails have appeared initially, it should not be expected that heads will appear less often in the future.

Gambling houses and insurance companies use the Law of Large Numbers to set payouts and premiums that encourage customers to participate while assuring the company a profit and protecting it from serious loss. Because gambling houses and insurance companies do a large volume of business, the experimental probability of wins or claims closely matches the theoretical probability. However, an individual gambler or insured customer does not represent a large number of experiments, and individual results will often fall well above or below the average.

Gamblers who have been losing at roulette, or who notice that others have been losing recently at a particular slot machine, should not expect any increased likelihood of winning in the near future. The Law of Large Numbers does not imply that future winnings will occur to compensate for the earlier losses. In the same way, a gambler who is on a lucky streak should not expect a string of losses to balance out the wins.

This is because flips of a fair coin and rolls of a standard die are random, independent events. The result of one flip of a coin does not affect the likelihood of heads on subsequent flips. On each flip, the coin is just as likely to land heads up as tails up, unaffected by any tosses that have already occurred.
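
Independence can be checked empirically. The sketch below (the run length of three and the number of flips are our choices) simulates many fair-coin flips and looks only at flips that immediately follow three heads in a row; the proportion of heads among those flips is still about one half:

```python
import random

random.seed(0)  # fixed seed only so the run is repeatable
flips = [random.random() < 0.5 for _ in range(200_000)]

# Collect the outcome of every flip that follows three heads in a row.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]

print(sum(after_streak) / len(after_streak))  # close to 0.5
```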

see also Predictions; Probability, Experimental; Probability, Theoretical.

Teresa D. Magnus




Mathematician John Kerrich tossed a coin 10,000 times while interned in a prison camp in Denmark during World War II. At various stages of the experiment, the relative frequency would climb above or fall below the theoretical probability of 0.5, but as the number of tosses increased, the relative frequency tended to vary less and stay near 0.5, or 50 percent.

For example, Kerrich recorded the number of heads within each set of ten flips. In his first ten flips, the coin landed heads up four times for a relative frequency of 0.4. In the next ten flips, he observed six heads, bringing his overall relative frequency to 0.5. After 30 tosses, the proportion of heads was 0.567. After 200 tosses, it was 0.502. With this small number of tosses, the proportion of heads fluctuates.

But after 10,000 tosses, Kerrich counted a total of 5,067 heads for a relative frequency of 0.5067, which is fairly close to the theoretical probability.
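
Kerrich's experiment is easy to re-run in software. The sketch below simulates fresh random flips (it does not reproduce Kerrich's actual data) and reports the running proportion of heads at the same checkpoints mentioned above:

```python
import random

random.seed(0)  # repeatable run; the printed values will differ from Kerrich's
heads = 0
for n in range(1, 10_001):
    heads += random.random() < 0.5
    if n in (10, 30, 200, 10_000):
        # Early proportions fluctuate; by 10,000 flips the value hugs 0.5.
        print(f"after {n:>6} flips: proportion of heads = {heads / n:.4f}")
```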