Monte Carlo Experiments


Monte Carlo experiments consist of statistical resampling techniques, usually implemented on computers, that provide approximate solutions to a variety of mathematical problems. These techniques apply to both deterministic and stochastic problems. The resampling methodology was first introduced in 1908 by William S. Gosset, who used the pseudonym Student in his publications. It was Stanislaw Ulam who, in 1946, thought of automating the procedure by means of the first fast electronic computers. Subsequent work with Nicholas C. Metropolis and John von Neumann produced the first algorithms for computer implementations. The technique is named after the city of Monte Carlo, renowned for its casino, because it relies on a random number generator, as in the game of roulette.

A typical example of the application of Monte Carlo techniques in statistics concerns the evaluation of the properties of an estimator in cases when the exact distribution is difficult or impossible to calculate, or when the asymptotic approximations are either poor or not applicable (e.g., in small samples). For example, the analyst may be interested in evaluating the bias of an estimator θ̂ of a parameter vector θ₀, or the efficiency of θ̂ compared to alternative estimators of θ₀.

The methodology is based on resampling, that is, on replicating the real world M times and calculating M different estimates, one for each replication. The empirical distribution of these estimates approximates the true distribution of the estimator under study.
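Concretely, once the M estimates θ̂₁, …, θ̂_M are in hand, properties such as the bias and the mean squared error of θ̂ are approximated by their empirical counterparts. As a standard sketch, with θ₀ denoting the true parameter value:

```latex
\widehat{\mathrm{Bias}}(\hat{\theta}) = \frac{1}{M}\sum_{m=1}^{M}\hat{\theta}_m - \theta_0,
\qquad
\widehat{\mathrm{MSE}}(\hat{\theta}) = \frac{1}{M}\sum_{m=1}^{M}\left(\hat{\theta}_m - \theta_0\right)^2 .
```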

The implementation of a Monte Carlo experiment is intuitive, although generally computationally intensive. First, the investigator chooses a distribution for the variables included in the model, characterized by the vector of parameters θ₀. Once the artificial values for θ₀ are set, an N-dimensional sample is drawn and an estimator computed from the sample. This procedure is iterated M times, yielding a set of estimates θ̂_m with m = 1, …, M. At that stage it is possible to assess the properties of the estimator, for example by calculating the sample mean and variance of the M estimates.
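A minimal Python sketch of this loop, assuming (purely for illustration) a normal data-generating process with mean θ₀ and the sample mean as the estimator:

```python
import numpy as np

def monte_carlo_experiment(theta_0=1.0, sigma=2.0, N=50, M=10_000, seed=0):
    """Replicate the 'real world' M times and collect M estimates of theta_0."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(M)
    for m in range(M):
        # Draw an N-dimensional sample from the chosen distribution.
        sample = rng.normal(loc=theta_0, scale=sigma, size=N)
        # Compute the estimator on this replication (here, the sample mean).
        estimates[m] = sample.mean()
    # The empirical distribution of the M estimates approximates the
    # true distribution of the estimator.
    bias = estimates.mean() - theta_0
    variance = estimates.var(ddof=1)
    return bias, variance

bias, variance = monte_carlo_experiment()
print(f"estimated bias: {bias:.5f}, estimated variance: {variance:.5f}")
# Sanity check: the variance should be close to sigma^2 / N = 4/50 = 0.08.
```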

Good practice suggests varying the value of θ₀, the sample size N, and the number of iterations M. The methodology is thus computationally expensive, for it requires calculations in different scenarios, all possibly characterized by a large N and a large M.
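A compact sketch of how such scenarios can be organized, again with the illustrative normal/sample-mean setup:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
# Illustrative grids; a real experiment would vary these over ranges of interest.
for theta_0, N, M in itertools.product([0.0, 1.0], [25, 100], [1_000, 10_000]):
    # Draw all M replications at once: one row per replication.
    estimates = rng.normal(loc=theta_0, scale=2.0, size=(M, N)).mean(axis=1)
    print(f"theta_0={theta_0}, N={N:>4}, M={M:>6}: "
          f"bias={estimates.mean() - theta_0:+.4f}, "
          f"variance={estimates.var(ddof=1):.4f}")
```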

One limitation of Monte Carlo studies is that the analyst must completely specify the statistical model. This includes making assumptions about the form and parameters of the distribution of the error term, which is usually assumed to be independent of the explanatory variables. The results of the experiment therefore depend on these assumptions, at a considerable loss of generality.

In constructing a Monte Carlo experiment, the analyst faces strategic choices: the distribution, the degree of variation of the parameters of interest, the trade-off between accuracy and flexibility, the number of iterations M, and the sample size N. In addition, the results usually involve the production of a large number of tables that need to be well organized for the experiment to be meaningful to the reader.

The standard error of a Monte Carlo estimate decreases with the square root of the number of replications M. However, a larger M increases the computational burden. Less computationally expensive alternatives are variance-reducing techniques. These involve, for example, the use of common random numbers, that is, the use of the same pseudorandom numbers when evaluating different strategic choices. This is likely to induce positive correlation between the estimators compared in the experiment, and therefore to reduce the variance of estimated differences.
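A sketch of the common-random-numbers idea, comparing two illustrative estimators of a normal mean (the sample mean and the sample median), once on shared draws and once on independent draws:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_0, N, M = 0.0, 50, 20_000

# Common random numbers: both estimators are computed on the SAME samples.
samples = rng.normal(loc=theta_0, size=(M, N))
diff_crn = samples.mean(axis=1) - np.median(samples, axis=1)

# Independent draws: each estimator sees its own samples.
mean_est = rng.normal(loc=theta_0, size=(M, N)).mean(axis=1)
median_est = np.median(rng.normal(loc=theta_0, size=(M, N)), axis=1)
diff_indep = mean_est - median_est

# The positive correlation under common numbers shrinks the variance
# of the estimated difference between the two estimators.
print(f"variance of difference, common numbers: {diff_crn.var(ddof=1):.6f}")
print(f"variance of difference, independent:    {diff_indep.var(ddof=1):.6f}")
```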

Given the nature of the typical Monte Carlo experiment, it may be unclear to the analyst whether the results apply to a specific population of interest. In that case, the bootstrapping methodology helps in refining the inference on a particular sample, as an alternative to asymptotic approximations. It simply consists of a Monte Carlo experiment in which a specific data set is treated as the population. In each iteration b, with b = 1, …, B, a random sample is drawn with replacement from the data set. An estimate is then computed on each iteration, and the usual Monte Carlo procedures apply.
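A minimal sketch of this nonparametric bootstrap for the standard error of a sample mean; the "observed" data are simulated here only to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)  # the observed sample, treated as the population
B = 5_000

boot_estimates = np.empty(B)
for b in range(B):
    # Resample with replacement from the data set itself.
    resample = rng.choice(data, size=data.size, replace=True)
    boot_estimates[b] = resample.mean()

print(f"sample mean:              {data.mean():.4f}")
print(f"bootstrap standard error: {boot_estimates.std(ddof=1):.4f}")
```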

Applications of Monte Carlo techniques in economics include the investigation of the properties of stochastic models, for example in the real business cycle literature. An example of the application of Monte Carlo methods to deterministic problems is the computation of multidimensional finite integrals. In this case, the integral can be interpreted as the expected value of the integrand applied to a random vector uniformly distributed over the region of integration, which can then be approximated by means of the usual resampling technique.
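A sketch of this interpretation for a two-dimensional integral over the unit square, with an illustrative integrand:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 1_000_000

# Integral of exp(-(x^2 + y^2)) over [0,1]^2, read as the expected value
# of the integrand under a uniform distribution on the region (area = 1).
points = rng.uniform(size=(n_draws, 2))
values = np.exp(-np.sum(points**2, axis=1))
estimate = values.mean()          # times the region's volume, which is 1 here
std_error = values.std(ddof=1) / np.sqrt(n_draws)

print(f"estimate: {estimate:.5f} +/- {std_error:.5f}")
# Exact value: (integral of exp(-x^2) over [0,1])^2, approximately 0.5577.
```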

SEE ALSO Properties of Estimators (Asymptotic and Exact); Student's T-Statistic

BIBLIOGRAPHY

Davidson, Russell, and James G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.

Greene, William H. 2003. Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall.

Judd, Kenneth L. 1998. Numerical Methods in Economics. Cambridge, MA: MIT Press.

Metropolis, Nicholas, and Stanislaw Ulam. 1949. The Monte Carlo Method. Journal of the American Statistical Association 44 (247): 335-341.

Student. 1908. The Probable Error of a Mean. Biometrika 6: 1-25.

Luca Nunziata
