Randomness

Randomness is a term used in the social sciences and mathematics to refer to chance factors operating in such a way that the individual events in a series of events or outcomes exhibit no connection to one another in their occurrence. The events are independent of each other: the occurrence of one event is not linked in any systematic manner to the occurrence of the other events in the series.

Randomness as a quality of a series of events is believed to result from numerous minor causes producing small effects, so that no individual event in the series is systematically predictable. The result of the many small causes, some canceling each other, is the independence of each event from the others. Notably, the operation of causation is not denied in this universe of events and experiences. Rather, the causal background to a series of random events is interpreted as a multiplicity of small influences, some almost infinitesimal in impact, with some causes canceling or partially canceling the influence of others. The outcome is a series of events that occur independently of one another. The word chance is used in popular speech to refer to this condition, in which events are generated independently of each other.

RANDOMNESS AND RANDOM DISTRIBUTIONS

Of great interest to social scientists is the demonstrable fact that some random distributions follow broadly predictable patterns when a series of events occurs in large numbers. Individual outcomes or instances cannot be predicted with any certainty, but a random pattern may follow a broad configuration, so that larger areas in the distribution of random events can be assigned a broad likelihood or probability of occurrence. Examples include the various known or empirical distributions of random events that follow the bilaterally symmetrical distribution that forms the curve known as the normal curve. A related probability distribution is the binomial distribution, in which only two outcomes are possible in a random series of events, such as tossing an unweighted coin a large or even infinite number of times. With a large series of discrete, independent trials of two possible outcomes, the binomial distribution approximates the normal curve of probability. Yet another known distribution of random events is the chi-square distribution, which arises when the χ² statistic is calculated on a number of randomly drawn samples, provided the samples themselves are drawn from a larger universe of randomly distributed frequencies for a number of categories of observations. The tables that summarize the patterns of these known distributions of statistics are available in any book on statistical method.
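
A minimal Python sketch, using only the standard library and illustrative parameter values of my own choosing, can make the binomial-to-normal approximation concrete: tallying the number of heads across many series of fair-coin tosses produces a roughly bell-shaped histogram.

```python
import random
from collections import Counter

def binomial_histogram(n_tosses=50, n_series=20_000, seed=1):
    """Tally the number of heads in each of many series of fair-coin tosses."""
    rng = random.Random(seed)
    return Counter(
        sum(rng.random() < 0.5 for _ in range(n_tosses))
        for _ in range(n_series)
    )

if __name__ == "__main__":
    counts = binomial_histogram()
    # With many independent two-outcome trials, the tallies trace out
    # a roughly bell-shaped (normal) curve centered on n_tosses / 2.
    for heads in sorted(counts):
        print(f"{heads:3d} {'#' * (counts[heads] // 100)}")
```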

STATISTICS AND RANDOMNESS

The field of statistics as it is applied in social science research can be broken into two broad divisions: descriptive statistics and inferential statistics. The purpose of the first division, descriptive statistics, is the summary of data. Data are ordinarily summarized by measures of central tendency, such as the mean, median, and mode. Variation or dispersion in the data can be summarized with the variance and the standard deviation. If two or more variables are measured, a researcher may search for association through measures of correlation.
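
These summary measures are easy to compute with Python's standard library; the data below are invented solely for illustration, and statistics.correlation requires Python 3.10 or later.

```python
import statistics

# Hypothetical data: exam scores and hours of study for seven students.
scores = [62, 75, 75, 81, 84, 90, 93]
hours = [2, 4, 4, 5, 6, 7, 8]

print("mean:   ", statistics.mean(scores))        # central tendency
print("median: ", statistics.median(scores))
print("mode:   ", statistics.mode(scores))
print("stdev:  ", statistics.stdev(scores))       # dispersion
print("corr:   ", statistics.correlation(hours, scores))  # association
```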

The second broad division in statistics is inferential statistics. This division deals with randomness and uses known random distributions as benchmarks against which empirical distributions can be compared for departures from randomness. Such departures can be assessed probabilistically and can be of great value in the assessment of causation. The search for causation begins when an observer determines that observations are correlated or associated, and departures from randomness are of great importance because they provide an initial indication of a pattern of association or correlation.

This second broad area of inferential statistics is itself broken into two large divisions. The first division involves estimation of parameters or universe values. This is attempted when a smaller sample must be drawn from a larger population or universe. The value calculated on a sample is referred to as a statistic. A mean calculated for a sample is a statistic. If the mean were calculated for an entire population, it would be referred to as the parameter. Researchers are often required to estimate parameters based on sample statistics due to limitations in time, personnel, and particularly the cost of conducting research on an entire population. Randomness is an important consideration for estimation of parameters because a sample must be drawn through a random process if an inference is to be made as a probability statement for the parameter value.

This requirement exists because the estimation of the parameter, or population mean, is based on knowledge of the pattern of a series of randomly sampled means, or a sampling distribution of means. This known sampling distribution approximates a normal curve, with the latter's known probabilities for areas under the curve. Random sampling matches the assumptions of randomness for the pattern of means in the sampling distribution and thus enables a researcher to calculate the probability of being in error when using a sample mean to estimate the parameter of the population.
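
A brief simulation sketch, with arbitrary illustrative numbers, shows this sampling distribution at work: even for a skewed population, the means of repeated random samples cluster around the population mean in a roughly normal pattern.

```python
import random
import statistics

rng = random.Random(7)

# A skewed "population" of 100,000 values; its mean is the parameter.
population = [rng.expovariate(1.0) for _ in range(100_000)]
parameter = statistics.mean(population)

# Draw 2,000 random samples of size 50 and record each sample mean.
sample_means = [statistics.mean(rng.sample(population, 50)) for _ in range(2_000)]

print("population mean (parameter):", round(parameter, 3))
print("mean of the sample means:   ", round(statistics.mean(sample_means), 3))
print("spread of the sample means: ", round(statistics.stdev(sample_means), 3))
```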

A second division of inferential statistics involves prescribed procedures for hypothesis testing. Hypothesis testing is used in research as a search for the existence of relationships in a population or larger universe of possible observations. Again, random samples must be used if exact statements of probability are to be made regarding the hypotheses being tested. Research scholars vary in their styles of work, but a typical model utilizing randomness searches for non-randomness, that is, for hypothesized correlated variables. The researcher generates research hypotheses of correlation between categories of empirical observations or variables and then tests for these correlations through statistical procedures, such as a difference-of-means test, chi-square, or tests of significance for randomly sampled measures that yield the sampled correlation coefficients.

In this manner of conducting research, the hypothesis of randomness, or no relationship, which is referred to as the null hypothesis, is cast against a set of empirical frequencies drawn from a random sample. If the test of differences between means, the χ² statistic, or the sampled correlation coefficients yield values so large that they are unlikely to occur by chance in repeated random samples from random distributions with their known statistical patterns, inferences can be made regarding the likelihood of a correlation or relationship between the hypothesized variables in the larger universe from which the samples have been randomly selected. This family of tests is known as significance tests. Thus the observation and understanding of random events enables one to become knowledgeable about random patterns. This knowledge of random patterns is useful to social scientists because it enables them to make inferences about causes in the social world, which is largely a non-random world, and thereby to build theoretical explanations based on empirical research, reaching a better understanding of the complex social world in its many structured variations.
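
As a rough illustration of such a significance test (the data and seed are invented), the sketch below computes a χ² goodness-of-fit statistic for simulated die rolls against the null hypothesis that all six faces are equally likely; 11.07 is the conventional 0.05 critical value for five degrees of freedom.

```python
import random
from collections import Counter

rng = random.Random(3)
rolls = [rng.randint(1, 6) for _ in range(600)]   # simulated die rolls

observed = Counter(rolls)
expected = len(rolls) / 6                          # equal likelihood under the null

chi_square = sum((observed[face] - expected) ** 2 / expected for face in range(1, 7))
print("chi-square statistic:", round(chi_square, 2))
print("reject null at the 0.05 level?", chi_square > 11.07)
```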

SEE ALSO Butterfly Effect; Chaos Theory; Regression Analysis; Residuals

BIBLIOGRAPHY

Blalock, Hubert M., Jr. 1972. Social Statistics. 2nd ed. New York: McGraw-Hill.

Bowerman, Bruce L., and Richard T. O'Connell. 2007. Business Statistics in Practice. 4th ed. Boston: McGraw-Hill/Irwin.

Lindgren, Bernard W. 1968. Statistical Theory. 2nd ed. New York: Macmillan.

Mueller, John H., Karl F. Schuessler, and Herbert L. Costner. 1977. Statistical Reasoning in Sociology. 3rd ed. Boston: Houghton Mifflin.

Spatz, Chris. 1997. Basic Statistics: Tales of Distributions. Pacific Grove, CA: Brooks/Cole.

Walker, Helen M., and Joseph Lev. 1953. Statistical Inference. New York: Holt.

Kenneth N. Eslinger

Randomness

When most people think of randomness, they generally think of a condition with an apparent absence of a regular plan, pattern, or purpose. The word random is derived from the Old French word randon, meaning haphazard. The mathematical meaning is not significantly different from the common usage. Mathematical randomness is exhibited when the next state of a process cannot be exactly determined from the previous state. Randomness involves uncertainty. The most common example of randomness is the tossing of a coin: from the result of a previous toss, one cannot predict with certainty whether the result of the next toss will be heads or tails.

Computers and Randomness

People performing statistical studies or requiring random numbers for other applications obtain them from a table, calculator, or computer. Random digits can be generated by repeatedly selecting from the set of numbers {0, 1, 2, …, 9}. One way of making the selection would be to number ten balls with these digits and then draw one ball at a time without looking, recording the number on the drawn ball and replacing the ball after each successive drawing. The recorded string of digits would be a set of random numbers. Extensive tables of random numbers have been generated in the past. For example, the RAND Corporation used an electronic roulette wheel to compile a book with a million random digits.
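
The ball-drawing procedure is easy to mimic in Python (a small sketch; leaving the generator unseeded stands in for the physical shuffling):

```python
import random

# Draw one "ball" at a time with replacement: each draw picks one of the
# ten digits 0-9, records it, and returns the ball to the pool.
rng = random.Random()                      # unseeded: a new string each run
digits = [rng.choice(range(10)) for _ in range(20)]
print("".join(str(d) for d in digits))     # e.g. a 20-digit random string
```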

Today, rather than using tables, people requiring random numbers more frequently use a calculator or a computer. Computers and calculators have programs that generate random numbers, but the numbers are really not random because they are based on complicated, but nonetheless deterministic, computational algorithms. These algorithms generate a sequence of what are called pseudo-random numbers.
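
A linear congruential generator is one classic example of such a deterministic algorithm; the sketch below uses the widely cited multiplier 16807 and modulus 2^31 - 1, chosen here purely for illustration.

```python
def lcg(seed, a=16807, m=2**31 - 1):
    """Linear congruential generator: deterministic, so the same seed
    always reproduces exactly the same 'random-looking' sequence."""
    state = seed
    while True:
        state = (a * state) % m
        yield state / m            # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Re-creating lcg(seed=42) yields the same five numbers again: pseudo-random.
```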

Using computers to generate random numbers has altered the definition of randomness to involve the complexity of the algorithm used in the computations. It is not possible to achieve true randomness with a computer because there is always some underlying process that, with tremendous computational difficulty, could be duplicated to replicate the pseudo-random number. Physicists consider the emissions from atoms to be a truly random process, and therefore a source of generating random numbers. But the instruments used to detect the emissions introduce limitations on the actual randomness of numbers produced in that process. So at best, the many ways of generating a random number only approximate true randomness. With the advent of computers, mathematicians can define and develop methods to measure the randomness of a given number, but have yet to prove that a number sequence is truly random.

Randomness in Mathematics

Randomness has very important applications in many areas of mathematics. In statistics, the selection of a random sample is important to ensure that a study is conducted without bias. A simple random sample is obtained by assigning every member of the population of interest a numerical label. The appropriate sample size is determined, and the researcher then obtains that many random numbers from a table, a calculator, or a computer. The members of the population whose labels match the random numbers are selected for study. In this way, every member of the population has an equal likelihood of being selected, eliminating any bias that might be introduced by other selection methods. Ensuring the randomness of the selection makes the results of the study more scientifically valid and more likely to be replicated.
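
A short sketch of this selection procedure, with an invented population of labeled members:

```python
import random

# Label every member of the population, then let random numbers pick the
# sample so that each member has an equal chance of selection.
population = [f"member_{i:03d}" for i in range(500)]
sample_size = 25

rng = random.Random(11)
sample = rng.sample(population, sample_size)   # simple random sample, no replacement
print(sample[:5])
```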

There are many other applications of randomness in mathematics. Using solution methods involving random walks, applied mathematicians can obtain solutions for complex mathematical models that are the basis of modern physics. Albert Einstein, and later Norbert Wiener, used the method in the early twentieth century to describe the motion of microscopic particles suspended in a fluid. In the late 1940s, mathematicians Stanislaw Ulam and John von Neumann developed Monte Carlo methods, which apply random numbers to solve deterministic models arising in nuclear physics and engineering. Randomness is also important in the mathematics of cryptography, which is particularly important today and will continue to be in the future as sensitive information is transmitted across the Internet. Seemingly random numbers are used as the keys to encryption systems in use in digital communications.
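
A Monte Carlo method in miniature (a standard textbook example, not drawn from this entry): random points thrown into the unit square estimate the deterministic quantity π, because the fraction landing inside the quarter circle approaches π/4.

```python
import random

def estimate_pi(n_points=1_000_000, seed=5):
    """Estimate pi by Monte Carlo: count random points that fall inside
    the quarter circle of radius 1 inscribed in the unit square."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_points))
    return 4 * inside / n_points

print(estimate_pi())   # approximately 3.14
```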

In more complicated examples, randomness is closely tied to probability. Even seemingly irregular random phenomena exhibit some long-term regularity, and probability theory explains this regularity mathematically. Mathematicians sometimes divide the processes they study into deterministic and probabilistic (or stochastic) models. If a phenomenon can be modeled deterministically, the process can be predicted with certainty using mathematical formulas and relationships. Stochastic models involve uncertainty, but with probability theory the uncertain behavior of the phenomenon is better understood despite the haphazardness. One cannot predict the specific outcome of a coin-tossing experiment, but one can achieve an expectation and understanding of the process using probability theory. Through the use of probability theory, one understands much about topics such as nuclear physics.
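
This long-run regularity is easy to see in a quick simulation (seed and sample sizes are arbitrary): each toss is unpredictable, yet the proportion of heads settles toward the expected 0.5.

```python
import random

rng = random.Random(2)
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(rng.random() < 0.5 for _ in range(n))   # simulate n fair tosses
    print(f"{n:>9} tosses: proportion of heads = {heads / n:.4f}")
```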

Not all processes can be classified as deterministic or stochastic in an obvious manner. Chaos theory is a relatively recent area of mathematical study that helps explain the randomness that appears in some processes that are otherwise considered to be deterministic. The behavior of chaotic systems is dramatically influenced by their sensitivity to small changes in initial conditions. Mathematicians are currently developing methods to understand the underlying order of chaotic systems. Mathematicians apply chaos theory to clarify the apparent randomness of some processes.
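
The logistic map is a standard illustration (not specific to this entry) of how a fully deterministic rule can look random: two starting values differing only in the seventh decimal place soon produce completely different orbits.

```python
def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the deterministic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2000000)
b = logistic_orbit(0.2000001)   # tiny change in the initial condition
for step in (0, 10, 20, 30):
    # The orbits agree at first, then diverge dramatically: sensitivity
    # to initial conditions makes the output appear random.
    print(step, round(a[step], 6), round(b[step], 6))
```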

see also Chaos; Cryptology.

Dick Jardine

Bibliography

Abramowitz, Milton, and Irene Stegun, eds. Handbook of Mathematical Functions. Washington, D.C.: U.S. Department of Commerce, National Bureau of Standards, 1964.

Chaitin, Gregory J. "Randomness and Mathematical Proof." Scientific American 232, no. 5 (1975): 47-52.

Peterson, Ivars. The Jungles of Randomness. New York: John Wiley & Sons, 1998.

RANDOMNESS

Randomness is a term with two principal meanings, one mathematical and the other physical. In the mathematics of probability and statistics, the term can refer to either the notion of "random variable" or the more imprecise concept signified by "random sampling," "at random," or "random distribution." A random variable, best defined as "a function defined on a given sample space" (Feller, 204), is less important than the imprecise "at random" notion it helps to clarify. The latter, in the purely theoretical formulation of probability, is roughly equivalent to the equal likelihood presumed in the basic postulates of probability (ibid. 29). As such, it is a purely theoretical model for the explanation of experimental results that are often neither perfectly random nor truly equally likely.
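
A small sketch (the dice example is supplied here for illustration, not taken from the entry) shows what "a function defined on a given sample space" means in practice: the sample space is the 36 equally likely outcomes of rolling two dice, and the random variable maps each outcome to the sum of the faces.

```python
from itertools import product
from collections import Counter

# Sample space: the 36 equally likely ordered outcomes of rolling two dice.
sample_space = list(product(range(1, 7), repeat=2))

# A random variable is a function on the sample space; here X(outcome) = sum.
X = {outcome: sum(outcome) for outcome in sample_space}

# Its distribution follows from the equal likelihood of the outcomes.
distribution = Counter(X.values())
for value in sorted(distribution):
    print(value, round(distribution[value] / len(sample_space), 3))
```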

In the physical world randomness is closely associated with chance and with the data of such theories as quantum and statistical mechanics, which presuppose random motion of particles for the very formulation of their laws. Randomness thus seems to be a given, or datum, in at least some of the most important areas of science; J. von Neumann has attempted to demonstrate the radical character of this randomness. Nevertheless, it is a peculiarity of statistical theory that the most unexpected experimental results, equally probable or not, can be (approximately) reduced to some sort of statistical regularity. For example, consider the relations between Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac statistics in theoretical physics (ibid. 38-40). This suggests that there is some sort of order underlying even the most "random" of physical events, whether or not science ever in fact discovers it.
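
The contrast between these three statistics can be made concrete with a small counting sketch (the cell and particle numbers are arbitrary): each model counts the placements of particles into cells differently, and each counted placement is treated as equally likely within its own model.

```python
from math import comb

n_cells, r_particles = 4, 2

# Distinguishable particles, any occupancy: n**r equally likely placements.
maxwell_boltzmann = n_cells ** r_particles
# Indistinguishable particles, any occupancy.
bose_einstein = comb(n_cells + r_particles - 1, r_particles)
# Indistinguishable particles, at most one particle per cell.
fermi_dirac = comb(n_cells, r_particles)

print("Maxwell-Boltzmann:", maxwell_boltzmann)   # 16
print("Bose-Einstein:    ", bose_einstein)       # 10
print("Fermi-Dirac:      ", fermi_dirac)         # 6
```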

Bibliography: W. Feller, An Introduction to Probability Theory and Its Applications, v. 1 (New York 1957).

[p. r. durbin]