Simulation


I. INDIVIDUAL BEHAVIOR    Allen Newell and Herbert A. Simon

II. ECONOMIC PROCESSES    Irma Adelman

III. POLITICAL PROCESSES    Charles F. Hermann

I INDIVIDUAL BEHAVIOR

“Simulation” is a term now generally employed to denote an approach to the construction of theories that makes essential use of computers and computer-programming languages. In some applications simulation techniques are used in investigating formal mathematical theories–for example, stochastic learning theories. In other applications, the theoretical models of the systems that are simulated are essentially nonquantitative and non-numerical.

We may define simulation more specifically as a method for analyzing the behavior of a system by computing its time path for given initial conditions, and given parameter values. The computation is said to “simulate” the system, because it moves forward in time step by step with the movement of the system it is describing. For the purposes of simulation, the system must be specified by laws that define its behavior, during any time interval, in terms of the state it was in at the beginning of that interval.

Simulation has always been an important part of the armory of applied mathematics. The introduction of modern computers, however, has so reduced the cost of obtaining numerical solutions to systems of equations that simulation has taken on vastly increased importance as an analytic tool. As a consequence, the relative balance of advantage has shifted from simplifying a theory in order to make it solvable toward retaining complexity in order to increase its accuracy and realism. With simulation and other techniques of numerical analysis, computers permit the study of far more complicated systems than could have been investigated at an earlier time.

Simulation with mathematical theories. Consider first the simulation of systems that are described by mathematical models (Newell & Simon 1963a, pp. 368-373). A model consists of a system of equations containing variables, literal constants (often called parameters), and numerical constants. The system of equations may be solved symbolically, by expressing the several variables explicitly in terms of the literal and numerical constants; or the system may be solved numerically, by substituting particular numerical values for the literal constants and then solving the equations for these special cases. For many complex systems of equations, symbolic solutions cannot be obtained, only numerical solutions. The former are clearly preferred when obtainable, for once a symbolic solution has been found, the special cases are readily evaluated by substituting numerical values for the parameters in the solutions instead of in the original equations.

To take a highly simplified example, a learning theory might postulate that the rate at which learning occurs is proportional to the amount of material remaining to be learned. If, for simplicity of exposition, we divide time into discrete intervals and define the variable x(t) as the amount of material that has been learned up to the end of the tth time interval, then the theory can be expressed by a simple difference equation:

x(t + 1) - x(t) = a[b - x(t)],  (1)

where a is a literal constant that we can name intelligence for this learning task, and b is a literal constant that represents the total amount to be learned. The symbolic solution to this simple system is well known to be:

x(t) = b - [b - x(0)](1 - a)^t,  (2)

where x(0) is a literal constant denoting the amount that had already been learned up to the time t = 0. Particular numerical solutions can now be found by assigning numerical values to a, b, and x(0) and by solving (2) for x(t) for any desired values of t. Alternatively, numerical solutions–learning curves–for the system can as readily be computed for given a, b, and x(0) directly from equation (1), transferring the x(t) term to the right-hand side and solving successively for x(1), x(2), x(3), · · ·. This latter procedure–solving (1) numerically–would be called simulation.

The advantages of simulation as a technique for analyzing dynamic systems are not evident from the excessively simple equation (1). Suppose, however, that a, instead of being a literal constant, designated some function, a[b,x(t)], of b and x(t). Only for very special cases could (1) then be solved symbolically to obtain an explicit expression, like (2), for x(t). In general, when we wish to know the time paths of systems expressed by such equations, numerical solution for special cases–that is, simulation–is our main recourse.
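The procedure is easy to make concrete. The following sketch (in Python, with parameter values chosen purely for illustration) computes the learning curve of equation (1) step by step. With a constant a the simulated path can be checked against the symbolic solution (2); with a hypothetical state-dependent a[b, x(t)] no such check is available, and the stepwise computation itself is the analysis.

```python
# Numerical simulation of the learning model of equation (1):
#   x(t+1) - x(t) = a * (b - x(t))
# The parameter values below are illustrative, not taken from the article.

def simulate(a, b, x0, steps):
    """Step the difference equation forward; `a` may be a number or a
    function a(b, x) of the current state, as discussed in the text."""
    x = x0
    path = [x]
    for _ in range(steps):
        rate = a(b, x) if callable(a) else a
        x = x + rate * (b - x)          # equation (1), rearranged
        path.append(x)
    return path

def symbolic(a, b, x0, t):
    """Closed-form solution (2), available only when a is constant."""
    return b - (b - x0) * (1 - a) ** t

# Constant-a case: the simulated and symbolic solutions agree.
path = simulate(a=0.2, b=100.0, x0=0.0, steps=10)
print(path[5], symbolic(0.2, 100.0, 0.0, 5))

# Hypothetical state-dependent a: e.g. learning slows as the task nears
# completion.  No simple closed form; the simulated path is the analysis.
path2 = simulate(a=lambda b, x: 0.3 * (1 - x / b), b=100.0, x0=0.0, steps=10)
print(path2[-1])
```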

A great many of the mathematical theories currently used for the study of learning and other individual behavior are probabilistic or stochastic in character. Such theories pose formidable mathematical problems, and explicit symbolic solutions for their equations are seldom obtainable. Unlike that of a deterministic system, the solution of the equations of a stochastic system does not specify a single time path for the system but assigns a probability distribution to all possible paths of the system. In some cases, certain properties of this probability distribution–certain means and variances–can be obtained symbolically, but the specification of the entire probability distribution can seldom be obtained symbolically. In order to estimate the parameters of the distribution numerically, the system can be simulated by Monte Carlo techniques. That is, random variables can be introduced into the model, and the equations analogous to (1) solved a number of times, each time with different values assigned to the random variables, drawn according to the appropriate probabilities. The numerical solutions to (1) will trace out possible paths of the system, and the probability that a particular path will be followed in the simulation will be proportional to its probability under the assumptions of the model. Then, from the numerical results of the Monte Carlo simulation, numerical estimates can be made of the parameters of the probability distribution. [See Random numbers.]
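As a minimal sketch of the Monte Carlo idea, suppose (purely for illustration; this stochastic rule is not one proposed in the text) that the increment of equation (1) is gained on a given trial only with some probability p. Generating many sample paths and averaging over them yields numerical estimates of the mean and variance of x(t) at each trial.

```python
import random

# Hypothetical stochastic variant of equation (1): on each trial the
# increment a*(b - x) is gained only with probability p; otherwise nothing
# is learned on that trial.  All parameter values are illustrative.

def one_path(a, b, x0, p, trials, rng):
    x = x0
    path = [x]
    for _ in range(trials):
        if rng.random() < p:            # a random event decides this trial
            x = x + a * (b - x)
        path.append(x)
    return path

def monte_carlo(a, b, x0, p, trials, runs, seed=0):
    """Estimate the mean and variance of x(t) at each trial from many runs."""
    rng = random.Random(seed)
    paths = [one_path(a, b, x0, p, trials, rng) for _ in range(runs)]
    means, variances = [], []
    for t in range(trials + 1):
        column = [path[t] for path in paths]
        m = sum(column) / runs
        means.append(m)
        variances.append(sum((v - m) ** 2 for v in column) / runs)
    return means, variances

means, variances = monte_carlo(a=0.2, b=100.0, x0=0.0, p=0.5, trials=20, runs=1000)
print(means[10], variances[10])   # numerical estimates of the distribution parameters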

Information-processing theories. The greatest use of simulation for the study of individual behavior has involved so-called information-processing theories, which are different in important respects from classical mathematical theories. Simulation has exactly the same relation to information-processing theories as to other formal theories–it is a method for discovering how the system described by the theory will behave, in particular circumstances, over a period of time. However, almost no methods other than simulation exist for investigating information-processing theories. For this reason, a phrase like “simulation of cognitive processes” often refers both to an information-processing theory of cognitive processes and to the investigation of the theory by simulation, in the narrower sense of the term. [See Information theory.]

Information-processing theories of individual behavior take the form of computer programs, usually written in programming languages called list-processing languages, especially devised for this purpose. The information-processing theories undertake to explain how a system like the human central nervous system carries out such complex cognitive processes as solving problems, forming concepts, or memorizing. The explanation is sufficiently detailed to predict–by computer simulation–behavior in specific problem situations. That is to say, the theory aims at predicting, not merely some quantitative aspects of behavior (number of errors in a learning situation, for example), but the actual concrete behaviors and verbal outputs of subjects placed in the very same situation and confronted with the identical task. [See Concept formation; Problem solving.]

A specific example will help make clear exactly what is meant by this. Clarkson (1963) has constructed an information-processing theory to explain how a bank officer selects portfolios of securities for trust funds. The trust officer, who has been told the purpose of the trust and the amount of money available for investment, prepares a list of stocks and bonds that can be bought with this sum. If the computer program is provided (as input) the same information about the purpose of the trust and its size, it will also produce (as output) a list of stocks and bonds. A first test of the program, as a theory, is whether it will pick the same list of companies and the same number of shares in each as the trust officer. In an actual test, for example, the program predicted the trust officer would buy 60 shares of General American Transportation, 50 Dow Chemical, 10 IBM, 60 Merck and Company, and 45 Owens-Corning Fiberglas. The trust officer in fact bought 30 Corning Glass, 50 Dow Chemical, 10 IBM, 60 Merck and Company, and 50 Owens-Corning Fiberglas.

A second and more severe test of the theory is whether, in reaching its final choices, it goes through the same stages of analysis, weighs the same factors, considers the same alternatives, as the human it is simulating. As one technique for making this kind of test, the human subject is asked to perform the task while thinking aloud, and his stream of verbalizations (protocol) is recorded. The human protocols are then compared with the trace produced by the computer program while it is performing the same task (for an example of this technique, which Clarkson also used, see Newell & Simon 1963b).

Omar K. Moore and Scarvia B. Anderson (1954) devised a task that required the subject to “recode” certain symbolic expressions. One problem was to recode L1: (r ⊃ -p) · (-r ⊃ q) into L0: -(-q · p). (For purposes of the experiment and of this example, these strings of symbols can be treated as “code messages,” whose meaning need not be known. The expression L1 may be read “parenthesis r horseshoe minus p parenthesis dot parenthesis minus r horseshoe q parenthesis.”) The recoding had to be carried out according to certain rules, which were numbered from 1 to 12. At one point in his thinking-aloud protocol, a subject said, “Now I’m looking for a way to get rid of the horseshoe inside the two brackets that appear on the left and right side of the equation. And I don’t see it. Yeah, if you apply rule 6 to both sides of the equation, from there I’m going to see if I can apply rule 7.”

The same recoding task, recoding L1 into L0 according to the rules specified by Moore and Anderson, was given to a program, the General Problem Solver (GPS), expressing an information-processing theory of human problem solving. One portion of the trace produced by the program read as follows:

Goal: Apply rule 7 to L1.

Subgoal: Change “horseshoe” to “wedge” in left side of L1.

Subgoal: Apply rule 6 to the left side of L1.

Result: L4: (-r ∨ -p) · (-r ⊃ q).

Subgoal: Apply rule 7 to L4.

Subgoal: Change “horseshoe” to “wedge” in right side of L4.

Subgoal: Apply rule 6 to right side of L4.

Part of the test of the adequacy of GPS as a theory of human problem solving would be to decide how closely the segment of the human protocol reproduced above corresponded to the segment of the computer trace. For example, in the illustrative fragment cited here, both the human subject and GPS went down a blind alley, for rule 7 turned out to be not applicable to the expression recoded by rule 6. A theory of human problem solving must predict and explain the mistakes people make, as well as their ability sometimes to achieve solutions.

Information-processing theories can also be tested in more orthodox ways than by comparing protocols with traces. A program can be simulated in an experimental design identical with one that has been employed for human subjects. Statistics from the computer runs can be compared with the statistics of the human performance. This is the principal means that has been used to test the Elementary Perceiver and Memorizer (EPAM), an information-processing theory of human rote learning that will be described briefly later (Feigenbaum 1963). By having EPAM learn nonsense syllables by the standard serial anticipation method, at various simulated memory drum speeds and for lists of different lengths, quantitative predictions were made of the shape of the so-called serial position curve (relative numbers of errors during the learning process for different parts of the list of syllables). These predictions showed excellent agreement with the published data from human experiments. In other experiments in the learning of paired nonsense syllables, EPAM predicted quantitatively the effects upon learning rate of such variables as degree of familiarity with the syllables and degree of similarity between syllables. These predictions also corresponded closely with the published data from several experiments. [See Forgetting and Learning, article on verbal learning.]

In summary, an information-processing theory is expressed as a computer program which, exactly like a system of difference or differential equations, predicts the time path of a system from given initial conditions for particular values of the system parameters. The theory predicts not merely gross quantitative features of behavior but the actual stream of symbolic outputs from the subject. Such theories can be subjected to test in numerous ways; among others, by comparing the behaviors, including the thinking-aloud protocols, of subjects with the computer traces produced by the simulating programs.

Evaluation. The examples reveal both some of the strengths and some of the difficulties in the information-processing approach to theory construction and theory testing. On the positive side, the theories are written in languages that are capable of representing stimuli and responses directly and in detail, without an intermediate stage of translation into mathematical form. This has the further advantage of avoiding virtually all problems of stimulus scaling. The stimuli themselves, and not scales representing some of their gross characteristics, serve as inputs to the theory. Although the computer traces produced by the simulations are not in idiomatic English, the comparison of trace with protocol can be handled by relatively unambiguous coding techniques.

One of the main difficulties of the approach is closely related to one of its strengths—the detail of its predictions. Since different human subjects do not behave in exactly the same way in identical task situations, a simulation that predicts correctly the detail of behavior of one individual will predict the detail of behavior of others incorrectly. Presumably, a theory of individual behavior should consist of general statements about how human beings behave, rather than statements about how a particular human being behaves. At a minimum we would want to require of a theory that modifications to fit the behaviors of different subjects should change only relatively superficial features of the theory–values of particular parameters, say–and that they not necessitate fundamental reconstruction of the whole simulation program. It remains to be seen how fully this requirement will be met by information-processing theories. Up to the present, only a modest amount of investigation has been made of the possibilities of creating variants of simulation programs to fit different subjects.

Generality is needed, too, in another direction. Most of the early simulation programs were constructed to explain human performance in a single task–discovering proofs for theorems in symbolic logic, say, or making moves in chess. It seems unreasonable to suppose that the programs a human uses to solve problems in one task environment are entirely specific to that environment and separate and independent from those he uses for different tasks. The fact that people can be scaled, even roughly, by general intelligence argues against such total specialization of skills and abilities. Hence, an important direction of research is to construct theories that can be applied over a wide range of tasks.

An example of such a theory is GPS, already mentioned. GPS can simulate behavior in any problem environment where the task can be symbolized as getting from a given state of affairs to a desired state of affairs. A wide range of problem-solving tasks can be expressed in this format. (To say that GPS can simulate behavior in any such environments means, not that the program will predict human behavior correctly and in detail, but that the program can at least be made to operate and produce a prediction, a trace, in that environment, for comparison with the human behavior.)

Thus, GPS has actually been given tasks from about a dozen task environments–for example, discovering proofs for theorems in logic, solving the missionaries-and-cannibals puzzle, and solving trigonometric identities–and there is every reason to suppose it can handle a wide range of others.

Levels of explanation. There has been great diversity of opinion in individual psychology as to the appropriate relation between psychological theory and neurophysiology. One extreme of the behaviorist position holds that the laws of psychology should state functional relations between the characteristics of stimulus situations, as the independent variables, and response situations, as the dependent variables, with no reference to intervening variables. A quite different position holds that explanation in psychology should relate behavior to biological mechanisms and that the laws of psychology are essentially reducible to laws of neurophysiology.

Information-processing theories represent a position distinct from either of these (Simon & Newell 1964). Unlike the extreme behaviorist theories, they make strong assumptions about the processes in the central nervous system (CNS) that intervene between stimulus and response and endeavor to explain how the stimulus produces the response. On the other hand, the information-processing theories do not describe these intervening processes in terms of neurophysiological mechanisms but postulate much grosser elements: symbol structures and elementary information processes that create and modify such structures and are, in turn, controlled by the structures.

Behaviorist theories may be called one-level theories, because they refer only to directly observable phenomena. Neurophysiological theories are two-level theories, for they refer both to behavioral events and underlying chemical and biological mechanisms. Information-processing theories are at least three-level theories, for they postulate elementary information processes to explain behavior, with the suggestion that the elementary information processes can subsequently be explained (in one or more stages of reduction) in chemical and biological terms. [See Learning, article on neurophysiological aspects; and Nervous system.]

Elementary information processes and symbol structures play the same role, at this intermediate level of explanation, as that played by chemical reactions, atoms, and molecules in nineteenth-century chemistry. In neither case is it claimed that the phenomena cannot be reduced, at least in principle, to a more microscopic level (neurophysiology in the one case, atomic physics in the other); it is simply more convenient to have a division of labor between psychologists and neurophysiologists (as between chemists and atomic physicists) and to use aggregative theories at the information-processing level as the bridge between them.

Some specific theories. The remainder of this article will be devoted to an account of the psychological substance of some current information-processing theories of cognition. Because space prohibits a complete survey of such theories, the examples will be restricted to problem solving, serial pattern recognition, and rote memory processes. Theories of other aspects of pattern recognition and concept formation will be omitted (Hunt 1962; Uhr 1966), as will theories proposing a parallel, rather than serial, organization of cognitive processes (Reitman 1965).

The theories to be described are incorporated in a number of separate programs, which have not yet been combined into a single information-processing theory of “the whole cognitive man.” Nevertheless, the programs have many similar components, and all incorporate the same basic assumptions about the organization and functioning of the CNS. Hence, the theories are complementary and will be discussed here, not as separate entities, but as components of a theory of cognition (Simon & Newell 1964).

The basic assumptions about the organization and functioning of the CNS are these:

(1) The CNS contains a memory, which stores symbols (discriminable patterns) and composite structures made up of symbols. These composite structures include lists (ordered sets of symbols, e.g., the English alphabet) and descriptions (associations between triads of symbols). The relation “black, opposite, white,” to be translated “white is the opposite of black,” is an example of a description.

(2) The CNS performs certain elementary processes on symbols: storing symbols, copying symbols, associating symbols in lists and descriptions, finding symbols on lists and in descriptions, comparing symbols to determine whether they are identical or different.

(3) The elementary processes are organized hierarchically into programs, and the CNS incorporates an interpretive process capable of executing such programs—determining at each stage what elementary process will be carried out next. The interpretation is serial (not all information-processing theories share this assumption, however), one process being executed at a time.

The memory, symbol structures, elementary processes, and interpreters are the information-processing mechanisms in terms of which observed human behavior is to be explained. Programs constructed from these mechanisms are used to simulate problem solving, memorizing, serial pattern recognizing, and other performances.
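The flavor of these mechanisms can be suggested with a small sketch. Lists and three-term descriptions of the kind just postulated are easy to represent, and several of the elementary processes reduce to a few primitive operations on them; the representation below is our own illustrative choice, not one of the list-processing languages of the period.

```python
# Symbols are represented simply as strings; composite structures are
# lists (ordered sets of symbols) and descriptions (symbol-relation-symbol
# triads), as assumed in points (1) and (2) above.

alphabet = ["A", "B", "C", "D", "E"]                 # a stored list
descriptions = [("black", "opposite", "white")]      # "white is the opposite of black"

def find_on_list(symbol, lst):
    """Elementary process: find a symbol on a list (its position, or None)."""
    return lst.index(symbol) if symbol in lst else None

def find_value(symbol, relation, descs):
    """Elementary process: retrieve the symbol associated with (symbol, relation)."""
    for s, r, v in descs:
        if s == symbol and r == relation:
            return v
    return None

def same(a, b):
    """Elementary process: compare two symbols for identity."""
    return a == b

print(find_on_list("C", alphabet))                    # 2
print(find_value("black", "opposite", descriptions))  # 'white'
print(same("B", "B"))                                 # True
```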

Problem-solving processes. A central task of any theory of human problem solving is to explain how people can find their way to solutions while exploring enormous “mazes” of possible alternative paths. It has been calculated that if a chess player were to consider all the possible outcomes of all possible moves, he would have to examine some 10^120 paths. In fact he does not do this–obviously he could not, in any event–but conducts a highly selective search among a very much smaller number of possibilities. Available empirical data indicate that a problem solver usually explores well under a hundred paths in this and other problem-solving “mazes.”

Experiments based on information-processing theories have shown that such selective searches can be accomplished successfully by programs incorporating rules of thumb, or heuristics, for determining which paths in the maze are likely to lead to solutions (Feigenbaum & Feldman 1963, part 1, secs. 2, 3). Some of the heuristics are very specific to a particular task environment. As a person becomes skilled in such an environment, he learns to discriminate features of the situation that have diagnostic value, and he associates with those features responses that may be appropriate in situations of the specified kind. Thus, an important component in specific skills is a “table of connections,” stored in memory, associating discriminable features with possibly relevant actions. For example, a chess player learns to notice when the enemy’s king is exposed, and he learns to consider certain types of moves (e.g., “checking” moves) whenever he notices this condition. [See Attention.]

Much of the behavior of problem solvers in chess-playing and theorem-proving tasks can be explained in terms of simple “feature-noticing” programs and corresponding tables of connections, or associations. But the programs must explain also how these components in memory are organized into effective programs for selective search. These organizing programs are relatively independent of the specific kind of problem to be solved but are applicable instead (in conjunction with the noticing processes and associations) to large classes of task environments. They are general heuristics for problem solving.

The central GPS heuristic is an organization of elementary processes for carrying out means–end analysis. These processes operate as follows: the symbolized representations of two situations are compared; one or more differences are noted between them; an operator is selected among those associated with the difference by the table of connections; and the operator is applied to the situation that is to be changed. After the change has been made, a new situation has been created, which can be compared with the goal situation, and the process can then be repeated.
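The control structure of that cycle can be written down compactly. The sketch below is not GPS itself: the states, the difference detector, and the table of connections are invented stand-ins (integer states and arithmetic operators), but the loop of comparing, noting a difference, selecting an associated operator, and applying it is the means–end cycle just described.

```python
# A toy means-end analyzer.  States are integers; the "differences" and the
# table of connections below are invented for illustration only.

def differences(current, goal):
    """Noticing process: return the differences between two situations."""
    diffs = []
    if current < goal:
        diffs.append("too-small")
    elif current > goal:
        diffs.append("too-large")
    return diffs

# Table of connections: each difference is associated with operators that
# may remove it (cf. the chess player's table of features and moves).
TABLE = {
    "too-small": [lambda s: s + 1, lambda s: s * 2],
    "too-large": [lambda s: s - 1],
}

def means_end(current, goal, limit=50):
    """Repeat: compare, note a difference, apply an associated operator."""
    trace = [current]
    for _ in range(limit):
        diffs = differences(current, goal)
        if not diffs:                       # no difference left: goal reached
            return trace
        operators = TABLE[diffs[0]]
        # Pick the associated operator that most reduces the remaining difference.
        current = min((op(current) for op in operators),
                      key=lambda s: abs(goal - s))
        trace.append(current)
    return trace

print(means_end(1, 11))    # e.g. [1, 2, 4, 8, 9, 10, 11]
```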

Consider, for example, a subject trying to solve one of the Moore-Anderson “recoding” problems. In this case, the problem is to change L1: r · (-p ⊃ q) to L0: (q ∨ p) · r. The subject begins by saying, “You’ve got to change the places of the r to the other side and you have to change the . . . minus and horseshoe to wedge, and you’ve got to reverse the places from p and q to q and p, so let’s see. . . . Rule 1 (i.e., A · B → B · A) is similar to it because you want to change places with the second part. . . .”

The means–end chain in this example includes the following: notice the difference in order between the terms of L1 and L0; find an operator (rule 1) that changes the order.

Another kind of organization, which may be called planning and abstraction, is also evident in problem solving. In planning, some of the detail of the original problem is eliminated (abstracted). A solution is then sought for the simplified problem, using means-end analysis or other processes. If the solution is found for the abstract problem, it provides an outline or plan for the solution of the original problem with the detail reinserted.

The programs for means-end analysis and for planning, each time they are activated, produce a “burst” of problem-solving activity directed at a particular subgoal or organization of subgoals. The problem-solving program also contains processes that control and organize these bursts of activity as parts of the total exploratory effort. [See Problem solving.]

Rote memory processes. The noticing processes incorporated in the problem-solving theory assume that humans have subprograms for discriminating among symbol structures (e.g., for noticing differences between structures) and programs for elaborating the fineness of these discriminations. They also assume that associations between symbol structures can be stored (e.g., the table of connections between differences and operators). Discrimination and association processes and the structures they employ constitute an important part of the detail of problem-solving activity but are obscured in complex human performances by the higher-level programs that organize and direct search.

In simpler human tasks—memorizing materials by rote—the organizing strategies are simpler, and the underlying processes account for most of the behavioral phenomena. It has been possible to provide a fairly simple information-processing explanation for the acquisition of discriminations and associations (Feigenbaum 1963). (Our description follows EPAM, the program for rote learning mentioned above.) In the learning situation, there develops in memory (a) a “sorting net,” for discriminating among stimuli and among responses, and (b) compound symbol structures, stored in the net, that contain a partial representation, or “image,” of the stimulus as one component and a partial image of the response as another.

When stimuli or responses are highly similar to each other, greater elaboration of the sorting net is required to discriminate among them, and more detail must be stored in the images. A stimulus becomes “familiar” as a result of repeated exposure and consequent elaboration of the sorting net and its stored image. We have already noted that these simple mechanisms have been shown to account, quantitatively as well as qualitatively, for some of the main features of learning rates that have been observed in the laboratory. [See Learning, article on discrimination learning.]
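The central mechanism, the growing sorting net, can also be sketched compactly. The version below is a much simplified stand-in for EPAM rather than Feigenbaum's program: each internal node tests a single letter position of a syllable, leaves hold stimulus images paired with responses, and when a new syllable is confused with a stored one the net grows a test at the first position that distinguishes them.

```python
# A minimal discrimination ("sorting") net in the spirit of EPAM.  This is
# an illustrative simplification, not Feigenbaum's program; the syllables
# and responses below are invented.

class Leaf:
    def __init__(self, stimulus, response):
        self.stimulus, self.response = stimulus, response

class Test:
    def __init__(self, position):
        self.position = position
        self.branches = {}                       # letter -> Leaf or Test

def sort_item(node, stimulus):
    """Sort a stimulus down the net to the leaf it currently reaches."""
    while isinstance(node, Test):
        node = node.branches.get(stimulus[node.position])
    return node                                  # a Leaf, or None if unfamiliar

def learn(node, stimulus, response):
    """Return a (possibly grown) net that discriminates `stimulus`."""
    if node is None:
        return Leaf(stimulus, response)
    if isinstance(node, Leaf):
        if node.stimulus == stimulus:
            return node                          # already familiar
        # Confusion with a stored image: grow a test at the first
        # letter position that distinguishes the two syllables.
        pos = next(i for i, (a, b) in enumerate(zip(node.stimulus, stimulus))
                   if a != b)
        test = Test(pos)
        test.branches[node.stimulus[pos]] = node
        test.branches[stimulus[pos]] = Leaf(stimulus, response)
        return test
    letter = stimulus[node.position]
    node.branches[letter] = learn(node.branches.get(letter), stimulus, response)
    return node

net = None
for syllable, response in [("DAX", "JIR"), ("DOK", "PEL"), ("CEF", "WUB")]:
    net = learn(net, syllable, response)

print(sort_item(net, "DAX").response)            # JIR
```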

Recognition of periodic patterns. As an example of an information-processing explanation of some of the phenomena of concept formation, we consider how, according to such a theory, human subjects detect and extrapolate periodic patterns—e.g., the letter pattern A B M C D M . . . (Simon & Newell 1964, pp. 293-294).

At the core of the explanation is the hypothesis that in performing these tasks humans make use of their ability to detect a small number of basic relations and operations: for example, the relations “same” and “different” between two symbols or symbol structures; the relation “next” on a familiar alphabet; and the operations of finding the difference between and the quotient of a pair of numbers. (These discriminations are not unlike those required as components in problem-solving programs to detect features of symbol structures and differences between them.) With a relatively simple program of these processes, the pattern in a sequence can be detected, the pattern can be described by a symbol structure, and the pattern description can be used by the interpretive process to extrapolate. In the example A B M C D M, the relation “next” is detected in the substring A B C D, and the relation “same” in the Ms. These relations are used to describe a three-letter pattern, whose next member is the “next” letter after D, i.e., E.
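The following sketch illustrates relation detection of this kind. It is not the pattern program described by Simon and Newell: instead of the pointer-and-“next” description given in the text, it assumes only that, at each position within the period, successive occurrences of the pattern are either the same letter or a constant number of steps apart on the alphabet, and it uses the resulting description to extrapolate. For A B M C D M it finds a period of three and continues the sequence with E, F, M, in agreement with the example above.

```python
import string

ALPHABET = string.ascii_uppercase   # the familiar alphabet for the "next" relation

def describe(seq):
    """Find the shortest period such that, at every position within the
    period, successive occurrences are either the same letter ("same") or a
    constant number of alphabet steps apart.  Returns (period, steps)."""
    idx = [ALPHABET.index(c) for c in seq]
    for period in range(1, len(seq)):
        steps, ok = [], True
        for pos in range(period):
            column = idx[pos::period]
            diffs = {b - a for a, b in zip(column, column[1:])}
            if len(diffs) > 1:          # no single "same"/"step" relation holds
                ok = False
                break
            steps.append(diffs.pop() if diffs else None)
        if ok:
            return period, steps
    return None

def extrapolate(seq, n):
    """Continue the sequence for n more letters using its description."""
    period, steps = describe(seq)
    out = list(seq)
    for _ in range(n):
        pos = len(out) % period
        step = steps[pos] if steps[pos] is not None else 0
        prev = out[len(out) - period]
        out.append(ALPHABET[ALPHABET.index(prev) + step])
    return "".join(out)

print(extrapolate("ABMCDM", 3))    # -> 'ABMCDMEFM'
```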

The information-processing theory of serial pattern detection has proved applicable to a wide range of cognitive tasks–including the Thurstone Letter Series Completion Test, symbol analogies tests, and partial reinforcement experiments. [See Concept formation; see also Feldman 1963.]

This article has described the use of simulation in constructing and testing theories of individual behavior, with particular reference to nonnumerical information-processing theories that take the form of computer programs. Examples of some of the current theories of problem solving, memorizing, and pattern recognition were described briefly, and the underlying assumptions about the information-processing characteristics of the central nervous system were outlined. Various techniques were illustrated for subjecting information-processing theories to empirical test.

Allen Newell AND Herbert A. Simon

[Directly related are the entries Decision making, especially the article on psychological aspects; Information theory; Models, mathematical; Problem solving. Other relevant material may be found in Learning theory; Mathematics; Nervous system; Programming; Reasoning and logic; Thinking.]

BIBLIOGRAPHY

Clarkson, Geoffrey P. E. 1963 A Model of the Trust Investment Process. Pages 347-371 in Edward A. Feigenbaum and Julian Feldman (editors), Computers and Thought. New York: McGraw-Hill.

Feigenbaum, Edward A. 1963 The Simulation of Verbal Learning Behavior. Pages 297-309 in Edward A. Feigenbaum and Julian Feldman (editors), Computers and Thought. New York: McGraw-Hill.

Feigenbaum, Edward A.; and Feldman, Julian (editors) 1963 Computers and Thought. New York: McGraw-Hill.

Feldman, Julian 1963 Simulation of Behavior in the Binary Choice Experiment. Pages 329-346 in Edward A. Feigenbaum and Julian Feldman (editors), Computers and Thought. New York: McGraw-Hill.

Green, Bert F. JR. 1963 Digital Computers in Research: An Introduction for Behavioral and Social Scientists. New York: McGraw-Hill.

Hunt, Earl B. 1962 Concept Learning: An Information Processing Problem. New York: Wiley.

Miller, George A.; Galanter, E.; and Pribram, K. H. 1960 Plans and the Structure of Behavior. New York: Holt.

Moore, Omar K.; and Anderson, Scarvia B. 1954 Modern Logic and Tasks for Experiments on Problem Solving Behavior. Journal of Psychology 38:151-160.

Newell, Allen; and Simon, Herbert A. 1963a Computers in Psychology. Volume 1, pages 361-428 in R. Duncan Luce, Robert Bush, and Eugene Galanter (editors), Handbook of Mathematical Psychology. New York: Wiley.

Newell, Allen; and Simon, Herbert A. 1963b GPS, a Program That Simulates Human Thought. Pages 279-293 in Edward A. Feigenbaum and Julian Feldman (editors), Computers and Thought. New York: McGraw-Hill.

Reitman, Walter R. 1965 Cognition and Thought: An Information-processing Approach. New York: Wiley.

Simon, Herbert A.; and Newell, Allen 1964 Information Processing in Computer and Man. American Scientist 52:281-300.

Uhr, Leonard (editor) 1966 Pattern Recognition. New York: Wiley.

II ECONOMIC PROCESSES

Simulation is at once one of the most powerful and one of the most misapplied tools of modern economic analysis. “Simulation” of an economic system means the performance of experiments upon an analogue of the economic system and the drawing of inferences concerning the properties of the economic system from the behavior of its analogue. The analogue is an idealization of a generally more complex real system, the essential properties of which are retained in the analogue.

While this definition is consistent with the denotation of the word, the connotation of simulation among economists active in the field today is much more restricted. The term “simulation” has been generally reserved for processes using a physical or mathematical analogue and requiring a modern high-speed digital or analogue computer for the execution of the experiments. In addition, most economic simulations have involved some stochastic elements, either as an intrinsic feature of the model or as part of the simulation experiment field. Thus, while a pencil-and-paper calculation on a two-person Walrasian economy would, according to the more general definition, constitute a simulation experiment, common usage of the term would require that, to qualify as a simulation, the size of the model must be large enough, or the relationships complex enough, to necessitate the use of a modern computing machine. Even the inclusion of probabilistic elements in the pencil-and-paper game would not suffice, in and of itself, to transform the calculations into a bona fide common-usage simulation. In order to avoid confusion, the term “simulation” will be used hereafter in its more restrictive sense.

To clarify the concept of simulation, it might be useful to indicate how the simulation process is related, on the one hand, to the analytic solution of mathematical models of an economy, and on the other, to the construction of mathematical and econometric models of economic systems.

The relationship between simulation and econometric and mathematical formulations of economic theories is quite intimate. It is these descriptions of economic processes that constitute the basic inputs into a simulation. After a mathematical or econometric model is translated into language a computing machine can understand, the behavior of the model, as described by the machine output, represents the behavior of the economic system being simulated. The solutions obtained by means of simulation techniques are quite specific. Given a particular set of initial conditions, a particular set of parameters, and the time period over which the model is to be simulated, a single simulation experiment yields a particular numerically specified set of time paths for the endogenous variables (the variables whose values are determined or explained by the model). A variation in one or more of the initial conditions or parameters requires a separate simulation experiment which provides a different set of time paths. Comparisons between the original solution and other solutions obtained under specific variations in assumptions can then be used to infer some of the properties of the relationships between input and output quantities in the system under investigation. In general, only very partial inferences concerning these relationships can be drawn by means of simulation experiments. In addition, the results obtained can be assumed to be valid only for values of the parameters and initial conditions close to those used in the simulation experiments. By contrast, traditional mathematical approaches for studying the implications of an economic model produce general solutions by deductive methods. These general solutions describe, in functional form, the manner in which the model relates the endogenous variables to any set of initial conditions and parameters and to time.

The models formulated for a simulation experiment must, of course, represent a compromise between “realism” and tractability. Since modern computers enable very large numbers of computations to be performed rapidly, they permit the step-by-step solution of systems that are several orders of magnitude larger and more complicated than those that can be handled by the more conventional techniques. The representation of economic systems to be investigated with the aid of simulation techniques can therefore be much more complex; there are considerably fewer restrictions on the number of equations and on the mathematical forms of the functions that can be utilized. Simulation, therefore, permits greater realism in the extent and nature of the feed-back mechanisms in the stylized representation of the economy contained in the econometric or mathematical model.

The impetus for the use of simulation techniques in economics arises from three major sources. First, both theory and casual observation suggest that an adequate description of the dynamic behavior of an economy must involve complex patterns of time dependencies, nonlinearities, and intricate interrelationships among the many variables governing the evolution of economic activity through time. In addition, a realistic economic model will almost certainly require a high degree of disaggregation. Since analytic solutions can be obtained for only very special types of appreciably simplified economic models, simulation techniques, with their vastly greater capacity for complexity, permit the use of more realistic analogues to describe real economic systems.

The second driving force behind the use of simulation, one that is not unrelated to the first, arises from the need of social scientists in general and of economists in particular to find morally acceptable and scientifically adequate substitutes for the physical scientist’s controlled experiments. To the extent that the analogue used in the simulation represents the relevant properties of the economic system under study, experimentation with the analogue can be used to infer the results of analogous experiments with the real economy. The effects in the model of specific changes in the values of particular policy instruments (e.g., taxes, interest rates, price level) can be used to draw at least qualitatively valid inferences concerning the probable effects of analogous changes in the real economic system. Much theoretical analysis in economics is aimed at the study of the probable reactions of an economy to specified exogenous changes, but any economic model that can be studied by analytic techniques must of necessity omit so many obviously relevant considerations that little confidence can be placed in the practical value of the results. Since a simulation study can approximate the economy’s behavior and structure considerably more closely, simulation experiments can, at least in principle, lead to conditional predictions of much greater operational significance.

Finally, the mathematical flexibility of simulation permits the use of this tool to gain insights into many phenomena whose intrinsic nature is not at all obvious. It is often possible, for example, to formulate a very detailed quantitative description of a particular process before its essential nature is sufficiently well understood to permit the degree of stylization required for a useful theoretical analysis. Studies of the sensitivity of the results to various changes in assumptions can then be used to disentangle the important from the unimportant features of the problem.

Past uses of simulation. The earliest applications of simulation approaches to economics employed physical analogues of a hydraulic or electrical variety. Analogue computers permit the solution of more or less complex linear or nonlinear dynamic systems in which time is treated as a continuous variable. They also enable a visual picture to be gained of adjustment processes. On the other hand, they are much slower than digital computers and can introduce distortions into the results because of physical effects, such as “noise” and friction, which have no conscious economic analogue.

Subsequent economic simulations have tended to rely primarily on digital computers. As the speed and memory capacity of the computers have improved, the economic systems simulated have become increasingly more complex and more elaborate, and the emphasis in simulation has shifted from use as a mathematical tool for solving systems of equations and understanding economic models to use as a device for forecasting and controlling real economies. In addition, these improvements in computer design have permitted the construction of microanalytic models in which aggregate relationships are built up from specifications concerning the behavioral patterns of a large sample of microeconomic units.

As early as 1892, Irving Fisher recommended the use of hydraulic analogies “not only to obtain a clear and analytical picture of the interdependence of the many elements in the causation of prices, but also to employ the mechanism as an instrument of investigation and by it, study some complicated variations which could scarcely be successfully followed without its aid” (1892, p. 44). It was not until 1950, however, that the first hydraulic analogues of economic systems were constructed. Phillips (1950) used machines made of transparent plastic tanks and tubes through which colored water was pumped to depict the Keynesian mechanism of income determination and to indicate how the production, consumption, stocks, and prices of a commodity interact. Electrical analogues were used to study models of inventory determination (Strotz et al. 1957) and to study the business cycle models of Kalecki (Smith & Erdley 1952), Goodwin (Strotz et al. 1953), and others.

The shift to the use of digital computers began in the late 1950s. In a simulation study on an IBM 650, Adelman and Adelman (1959) investigated the dynamic properties of the Klein-Goldberger model of the U.S. economy by extrapolating the exogenous variables and solving the system of 25 nonlinear difference equations for a period of one hundred years. In this process no indications were found of oscillatory behavior in the model. The introduction of random disturbances of a reasonable order of magnitude, however, generated cycles that were remarkably similar to those characterizing the U.S. economy. On the basis of their study, they concluded that (1) the Klein-Goldberger equations may represent good approximations to the behavioral relationships in the real economy; and (2) their results are consistent with the hypothesis that the fluctuations experienced in modern highly developed societies are due to random perturbations.

Duesenberry, Eckstein, and Fromm (1960) constructed a 14-equation quarterly aggregative econometric model of the U.S. economy. Simulation experiments with this model were used to test the vulnerability of the U.S. economy to depressions and to assess the effectiveness of various automatic stabilizers, such as tax declines, increases in transfer payments, and changes in business savings.

A far more detailed and, indeed, the most ambitious macroeconometric simulation effort to date, at least on this side of the iron curtain, is being carried out at the Brookings Institution in Washington, D.C. The simulation is based on a 400-equation quarterly econometric model of the U.S. economy constructed by various experts under the auspices of the Social Science Research Council (Duesenberry et al. 1965). The Brookings model has eight production sectors. For each of the nongovernment sectors there are equations describing the determinants of fixed investment intentions, fixed investment realizations, new orders, unfilled orders, inventory investment, production worker employment, nonproduction worker employment, production worker average weekly hours, labor compensation rates per hour, real depreciation, capital consumption allowances, indirect business taxes, rental income, interest income, prices, corporate profits, entrepreneurial income, dividends, retained earnings, and inventory valuation adjustment. The remaining expenditure components of the national product are estimated in 5 consumption equations, 11 equations for nonbusiness construction, and several import and export equations. For government, certain nondefense expenditures, especially at the state and local levels, are treated as endogenous variables, while the rest are taken as exogenous. On the income side of the accounts there is a vast array of additional equations for transfer payments, personal income taxes, and other minor items. The model also includes a demographic sector containing labor force, marriage rate, and household formation variables and a financial sector that analyzes the demand for money, the interest rate term structure, and other monetary variables. Finally, aside from a battery of identities, the model also incorporates two matrices: an input-output matrix that translates constant dollar GNP (gross national product) component demands into constant dollar gross product originating by industry, and a matrix that translates industry prices into implicit deflators for GNP components.

The solution to the system is achieved by making use of the model’s block recursive structure. First, a block of variables wholly dependent on lagged endogenous variables is solved. Second, an initial simultaneous equation solution is obtained for variables in the “quantity” block (consumption, imports, industry outputs, etc.) using predetermined variables and using prices from the previous period in place of current prices. Third, a simultaneous equation solution is obtained for variables in the “price” block using the predetermined variables and the initial solution from the quantity block. Fourth, the price block solution is used as a new input to the quantity block, and the second and third steps are iterated until each variable changes (from iteration to iteration) by no more than 0.1 per cent. Fifth, the three blocks are solved recursively.
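The iterative core of this procedure (steps two through four) amounts to alternating between the two simultaneous blocks until a fixed point is reached. The toy sketch below reproduces only that control logic: the two one-variable “block solvers” and their coefficients are invented for illustration and stand in for full simultaneous-equation solutions, while the convergence test is the 0.1 per cent criterion described above.

```python
# Toy illustration of the quantity-block / price-block iteration.  Each
# "block solver" stands in for a full simultaneous-equation solution; the
# linear forms and coefficients are invented for illustration.

def solve_quantity_block(price, predetermined):
    # quantities given prices and predetermined (lagged, exogenous) variables
    return 100.0 + 0.8 * predetermined - 2.0 * price

def solve_price_block(quantity, predetermined):
    # prices given quantities and predetermined variables
    return 1.0 + 0.004 * quantity + 0.001 * predetermined

def solve_period(predetermined, lagged_price, tol=0.001, max_iter=100):
    """Iterate the two blocks until every variable changes by < 0.1 per cent."""
    price = lagged_price                        # step 2: use last period's prices
    quantity = solve_quantity_block(price, predetermined)
    for _ in range(max_iter):
        new_price = solve_price_block(quantity, predetermined)          # step 3
        new_quantity = solve_quantity_block(new_price, predetermined)   # step 4
        converged = (abs(new_price - price) <= tol * abs(price) and
                     abs(new_quantity - quantity) <= tol * abs(quantity))
        price, quantity = new_price, new_quantity
        if converged:
            break
    return quantity, price

print(solve_period(predetermined=50.0, lagged_price=1.0))
```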

Simulation experiments show that the system yields predictions both within and beyond the sample period that lie well within the range of accuracy of other models. Simulation studies have also been undertaken to determine the potential impact of personal-income and excise-tax reductions, of government expenditure increases, and of monetary tightness and ease. Stochastic simulations are also being carried out to ascertain the stability properties of the system.

A rather different approach to the estimation of behavioral relationships for the entire socioeconomic system has been developed by Orcutt and his associates (Orcutt et al. 1961). Starting with the observed behavior of microeconomic decision units rather than macroeconomic aggregates, they estimate behavioral relationships for classes of decision units and use these to predict the behavior of the entire system. For this purpose, functions known as operating characteristics are estimated from sample surveys and census data. These functions specify the probabilities of various outcomes, given the values of a set of status variables which indicate the relevant characteristics of the microeconomic units at the start of a period.

Their initial simulation experiments were aimed at forecasting demographic behavior. For example, to estimate the number of deaths occurring during a given month, the following procedure was applied: an operating characteristic,

Pi(m) = F1[Ai(m - 1), Ri, Si] · F2(m) · F3(m),

was specified. Here, Pi(m) indicates the probability of the death of individual i in month m, Ai(m - 1) is the age of individual i at the start of month m, Ri denotes the race of individual i, and Si denotes the sex of individual i. The function F1 is a set of four age-specific mortality tables, one for each race and sex combination, which describe mortality conditions prevailing during the base month m0. The function F2 is used to update each of these mortality tables to month m. The function F3 is a cyclic function which accounts for seasonal variations in death rates.

This and other operating characteristics were used in a simulation of the evolution of the U.S. population month by month between 1950 and 1960. A representative sample of the U.S. population consisting of approximately 10,000 individuals was constructed. In each month of the calculation a random drawing determined, in accordance with the probabilities specified by the relevant operating characteristics, whether each individual in the sample would die, marry, get divorced, or give birth. A regression analysis was then used to compare the actual and predicted demographic changes during the sample period. Close agreement was obtained. This approach is currently being extended to permit analysis of the consumption, saving, and borrowing patterns of U.S. families, the behavior of firms, banks, insurance companies, and labor unions, etc. When all portions of the microanalytic model have been fitted together, simulation runs will be made to explore the consequences of various changes in monetary and fiscal policies.
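The month-by-month mechanics of such a run are easy to sketch. In the fragment below the sample, the base mortality table, and the trend and seasonal adjustments are all fabricated stand-ins (the actual operating characteristics were estimated from vital statistics); the point is only the structure of the calculation: each month an individual's death probability is computed from the operating characteristic, and a random drawing decides the outcome.

```python
import math
import random

# Illustrative base monthly death probabilities by (race, sex, age group);
# the numbers are invented, standing in for the estimated mortality tables F1.
BASE_TABLE = {
    ("white", "m"): {"0-64": 0.0006, "65+": 0.004},
    ("white", "f"): {"0-64": 0.0004, "65+": 0.003},
    ("nonwhite", "m"): {"0-64": 0.0008, "65+": 0.005},
    ("nonwhite", "f"): {"0-64": 0.0006, "65+": 0.004},
}

def operating_characteristic(age, race, sex, month):
    """Pi(m) = F1[age, race, sex] * F2(m) * F3(m), with invented F2 and F3."""
    group = "65+" if age >= 65 else "0-64"
    f1 = BASE_TABLE[(race, sex)][group]
    f2 = (1.0 - 0.001) ** month                          # slow secular decline
    f3 = 1.0 + 0.1 * math.cos(2 * math.pi * month / 12)  # seasonal (winter) peak
    return f1 * f2 * f3

def simulate(sample, months, seed=0):
    """Month-by-month Monte Carlo run over a sample of individuals."""
    rng = random.Random(seed)
    deaths = 0
    for person in sample:        # person: dict with 'age', 'race', 'sex', 'alive'
        for m in range(months):
            if not person["alive"]:
                break
            p = operating_characteristic(person["age"], person["race"],
                                         person["sex"], m)
            if rng.random() < p:     # the random drawing decides the outcome
                person["alive"] = False
                deaths += 1
            elif m % 12 == 11:
                person["age"] += 1   # age advances once each simulated year
    return deaths

sample = [{"age": a, "race": "white", "sex": "f", "alive": True}
          for a in (25, 40, 70, 82)]
print(simulate(sample, months=120))
```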

So far, we have discussed simulation experiments for prediction purposes only. However, the potential of simulation as an aid to policy formulation has not been overlooked by development planners in either free-enterprise economies or socialist countries. A microanalytic simulation model has been formulated at the U.S.S.R. Academy of Sciences in order to guide the management of the production system of the Soviet economy (Cherniak 1963). The model is quite detailed and elaborate. It starts with individual plants at a specific location and ends up with interregional and interindustry tables for the Soviet Union as a whole. In addition, the U.S.S.R. is contemplating the introduction of a 10-step joint man-computer program. The program is to be cyclic, with odd steps being man-operated and even steps being computer-operated. The functions fulfilled by the man-operated steps include the elaboration of the initial basis of the plan, the establishment of the criteria and constraints of the plan, and the evaluation of results. The computer, on the other hand, will perform such tasks as summarizing and balancing the plan, determining optimal solutions to individual models, and simulating the results of planning decisions. This procedure will be tested at a level of aggregation corresponding to an industrial sector of an economic council and will ultimately be extended to the economy as a whole. I have been unable to find any information on the progress of this work more recent than that in Cherniak (1963).

To take an example from the nonsocialist countries, a large-scale interdisciplinary simulation effort was undertaken at Harvard University, at the request of President John F. Kennedy, in order to study the engineering and economic development of a river basin in Pakistan. The major recommendations of the report (White House 1964) are currently being implemented by the Pakistani government, with funds supplied by the U.S. Agency for International Development.

In Venezuela a 250-equation dynamic simulation model has been constructed by Holland, in conjunction with the Venezuelan Planning Agency and the University of Venezuela. This model, which is based on experience gained in an earlier simulation of a stylized underdeveloped economy (Holland & Gillespie 1963), is being used in Venezuela to compare, by means of sensitivity studies, the repercussions of alternative government expenditure programs on government revenues and on induced imports, as well as to check the consistency of the projected rate of growth with the rate of investment and the rate of savings. An interesting macroeconomic simulation of the Ecuadorian economy, in which the society has been disaggregated into four social classes, is currently being carried out by Shubik (1966).

Monte Carlo studies

The small sample properties of certain statistical estimators cannot be determined using currently available mathematical analysis alone. In such cases simulation methods are extremely useful. By simulating an economic structure (including stochastic elements) whose parameters are known, one generates samples of “observations” of a given sample size. Each sample is then used to estimate the parameters by several estimation methods. For each method, the distribution of the estimates is compared with the true values of the parameters to determine the properties of the estimator for the given sample size. This approach, known as the Monte Carlo sampling method, has frequently been applied by econometricians in studies of the small sample properties of alternative estimators of simultaneous equation models [see Simultaneous equation estimation].
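The logic of such a study can be exhibited with a deliberately simple known structure, a textbook two-equation Keynesian system C = a + bY + u with the identity Y = C + I and I exogenous (our own choice of example, not one drawn from the studies cited). Many small samples are generated from the known structure, and the sampling distribution of ordinary least squares, which is inconsistent here, is compared with that of indirect least squares against the true value of b.

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def draw_sample(a, b, n, rng):
    """One sample from the known structure C = a + bY + u, Y = C + I."""
    I = [rng.uniform(10, 20) for _ in range(n)]
    u = [rng.gauss(0, 2) for _ in range(n)]
    Y = [(a + Ii + ui) / (1 - b) for Ii, ui in zip(I, u)]   # reduced form
    C = [Yi - Ii for Yi, Ii in zip(Y, I)]                   # identity Y = C + I
    return I, C, Y

def monte_carlo(a=10.0, b=0.6, n=20, replications=2000, seed=0):
    rng = random.Random(seed)
    ols, ils = [], []
    for _ in range(replications):
        I, C, Y = draw_sample(a, b, n, rng)
        ols.append(ols_slope(Y, C))                    # OLS of C on Y (inconsistent)
        ils.append(ols_slope(I, C) / ols_slope(I, Y))  # indirect least squares
    mean = lambda v: sum(v) / len(v)
    print("true b =", b)
    print("mean OLS estimate =", round(mean(ols), 3))
    print("mean ILS estimate =", round(mean(ils), 3))

monte_carlo()
```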

Limitations and potential. To a large extent, the very strength of simulation is also its major weakness. As pointed out earlier, any simulation experiment produces no more than a specific numerical case history of the system whose properties are to be investigated. To understand the manner of operation of the system and to disentangle the important effects from the unimportant ones, simulation with different initial conditions and parameters must be undertaken. The results of these sensitivity studies must then be analyzed by the investigator and generalized appropriately. However, if the system is very complex, these tasks may be very difficult indeed. To enable interpretation of results, it is crucial to keep the structure of the simulation model simple and to recognize that, as pointed out in the definition, simulation by no means implies a blind imitation of the actual situation. The simulation model should express only the logic of the simulated system together with the elements needed for a fruitful synthesis.

Another major problem in the use of simulation is the interaction between theory construction and simulation experiments. In many cases simulation has been used as an alternative to analysis, rather than as a supplementary tool for enriching the realm of what can be investigated by other, more conventional techniques. The inclination to compute rather than think tends to permeate a large number of simulation experiments, in which the investigators tend to be drowned in a mass of output data whose general implications they are unable to analyze. There are, of course, notable exceptions to this phenomenon. In the water-resource project (Dorfman 1964), for example, crude analytic solutions to simplified formulations of a given problem were used to pinpoint the neighborhoods of the solution space in which sensitivity studies to variations in initial conditions and parameters should be undertaken in the more complex over-all system. In another instance, insights gained from a set of simulation experiments were used to formulate theorems which, once the investigator’s intuition had been educated by means of the simulation, were proved analytically. Examples of such constructive uses of simulation are unfortunately all too few.

Finally, in many practical applications of simulation to policy and prediction problems, insufficient attention is paid to the correspondence between the system simulated and its analogue. As long as the description incorporated in the simulation model appears to be realistic, the equivalence between the real system and its analogue is often taken for granted, and inferences are drawn from the simulation that supposedly apply to the real economy. Clearly, however, the quality of the input data, the correspondence between the behavior of the outputs of the simulation and the behavior of the analogous variables in the real system, and the sensitivity of results to various features of the stylization should all be investigated before inferences concerning the real world are drawn from simulation experiments.

In summary, simulation techniques have a tremendous potential for both theoretical analysis and policy-oriented investigations. If a model is chosen that constitutes a reasonable representation of the economic interactions of the real world and that is sufficiently simple in its structure to permit intelligent interpretation of the results at the present state of the art, the simulations can be used to acquire a basic understanding of, and a qualitative feeling for, the reactions of a real economy to various types of stimuli. The usefulness of the technique will depend crucially, however, upon the validity of the representation of the system to be simulated and upon the quality of the compromise between realism and tractability. Presumably, as the capabilities of both high-speed computers and economists improve, the limitations of simulation will be decreased, and its usefulness for more and more complex problems will be increased.

Irma Adelman

[For additional treatment of the terminology in this article and other relevant material, see Econometric models, aggregate.]

BIBLIOGRAPHY

General discussions of the simulation of economic processes can be found in Conference on Computer Simulation 1963; Holland & Gillespie 1963; Orcutt et al. 1960; 1961. A comprehensive general bibliography is Shubik 1960. On the application of analogue computers to problems in economic dynamics, see Phillips 1950; 1957; Smith & Erdley 1952; Strotz et al. 1953; 1957. Examples of studies using digital computers are Adelman & Adelman 1959; Cherniak 1963; Dorfman 1964; Duesenberry et al. 1960; 1965; Fromm & Taubman 1966; White House 1964.

Adelman, Irma; and Adelman, Frank L. (1959) 1965 The Dynamic Properties of the Klein-Goldberger Model. Pages 278-306 in American Economic Association, Readings in Business Cycles. Homewood, Ill.: Irwin. → First published in Volume 27 of Econometrica.

Cherniak, Iu. I. 1963 The Electronic Simulation of Information Systems for Central Planning. Economics of Planning 3:23-40.

Conference on Computer Simulation, University of California, Los Angeles, 1961 1963 Symposium on Simulation Models: Methodology and Applications to the Behavioral Sciences. Edited by Austin C. Hoggatt and Frederick E. Balderston. Cincinnati, Ohio: South-Western Publishing.

Dorfman, Robert 1964 Formal Models in the Design of Water Resource Systems. Unpublished manuscript.

Duesenberry, James S.; Eckstein, Otto; and Fromm, Gary 1960 A Simulation of the United States Economy in Recession. Econometrica 28:749-809.

Duesenberry, James S. et al. (editors) 1965 The Brookings Quarterly Econometric Model of the United States. Chicago: Rand McNally.

Fisher, Irving (1892) 1961 Mathematical Investigations in the Theory of Value and Prices. New Haven: Yale Univ. Press.

Fromm, Gary; and Taubman, Paul 1966 Policy Simulations With an Econometric Model. Unpublished manuscript.

Holland, Edward P.; and Gillespie, Robert W. 1963 Experiments on a Simulated Underdeveloped Economy: Development Plans and Balance-of-payments Policies. Cambridge, Mass.: M.I.T. Press.

Orcutt, Guy H. et al. 1960 Simulation: A Symposium. American Economic Review 50:893-932.

Orcutt, Guy H. et al. 1961 Microanalysis of Socioeconomic Systems: A Simulation Study. New York: Harper.

Phillips, A. W. 1950 Mechanical Models in Economic Dynamics. Economica New Series 17:283-305.

Phillips, A. W. 1957 Stabilisation Policy and the Time-forms of Lagged Responses. Economic Journal 67: 265-277.

Shubik, Martin 1960 Bibliography on Simulation, Gaming, Artificial Intelligence and Allied Topics. Journal of the American Statistical Association 55:736-751.

Shubik, Martin 1966 Simulation of Socio-economic Systems. Part 2: An Aggregative Socio-economic Simulation of a Latin American Country. Cowles Foundation for Research in Economics, Discussion Paper No. 203.

Smith, O. J. M.; and Erdley, H. F. 1952 An Electronic Analogue for an Economic System. Electrical Engineering 71:362-366.

Strotz, R. H.; Calvert, J. F.; and Morehouse, N. F. 1957 Analogue Computing Techniques Applied to Economics. American Institute of Electrical Engineers, Transactions 70, part 1:557-563.

Strotz, R. H.; McAnulty, J. C.; and Naines, J. B., Jr. 1953 Goodwin’s Nonlinear Theory of the Business Cycle: An Electro-analog Solution. Econometrica 21:390-411.

White House–[U.S.] Department of Interior, Panel on Waterlogging and Salinity in West Pakistan 1964 Report on Land and Water Development in the Indus Plain. Washington: Government Printing Office.

III POLITICAL PROCESSES

A political game or political simulation is a type of model that represents some aspect of politics. The referent, or “reality,” represented by a simulation-gaming technique may be some existing, past, or hypothetical system or process. Regardless of the reference system or process depicted in a game or simulation, the model is always a simplification of the total reality. Some political features will be excluded. Those elements of political phenomena incorporated in the model are reduced in complexity. The simplification and selective incorporation of a reference system or process produce the assets of parsimony and manageability as well as the liability of possible distortion. These attributes are, of course, equally applicable to other kinds of models, whether they be verbal, pictorial, or mathematical.

Simulations and games differ from other types of models in that their interrelated elements are capable of assuming different values as the simulation or game operates or unfolds. In other words, they contain rules for transforming some of the symbols in the model. For example, a game might begin with the description of a particular situation circulated to players who then are instructed to make responses appropriate for their roles. Initial reactions of some players lead to action by others; they in turn provoke new responses. In this manner not only do situations evolve, but basic changes also occur in the relationships between the players. When the value of one element in the simulation is changed, related properties can be adjusted accordingly. Because of this ability to handle time and change, Brody has described simulations and games as “operating models” (1963, p. 670).

Games versus simulations . To the users of both techniques, the distinctions between games and simulations are still ambiguous, as some current definitions will illustrate.

Brody (1963) and Guetzkow and his associates (1963) distinguish between (1) machine, or computer, simulations, (2) man, or manual, games, and (3) mixed, or man-computer, simulations. Generally the “machine” referred to is a digital computer, although some simulations involve other types of calculating equipment. The extension of the term “simulation” to cover those models which are a mixture of man and machine does not occur in other definitions. For example, the concept “simulation” is confined by Pool, Abelson, and Popkin (1965) to models that are completely operated on a computer. Rapoport (1964) defines a simulation as a technique in which both the assessment of the situation and the subsequent decisions are made in accordance with explicit and formal rules. When either assessment or decisions are made by human beings without formal rules, the technique is described as a game. When both assessment and decision are made by men, Rapoport stipulates that the device is a scenario. A different interpretation is given by Shubik, who suggests that a game “is invariably concerned with studying human behavior or teaching individuals” (1964, p. 71). On the other hand, a simulation is designated as the reproduction of a system or organism in which the actual presence of humans is not essential because their behavior is one of the givens of the simulated environment. The necessary involvement of humans in games also appears in the definitions of Thorelli and Graves (1964) and Dawson (1962). In Dawson’s conceptualization all operating models are simulations, but only those which introduce human players are games.

In these definitions of games and simulations, two distinguishing criteria are recurrent. The techniques are differentiated either by the role specified for human participants or by the role assigned formal rules of change or transformation. When human participation is the distinguishing criterion, a game is described as an operating model in which participants are present and a simulation as a model without players. With rules of transformation as the criterion, the type of model without extensive use of formal rules is a game and the technique with formal rules for handling change is a simulation.

The two distinguishing criteria can be interpreted as opposite poles of a single differentiating property. A model designed with only a limited number of its operating rules explicitly stated requires human players and administrators to define rules as the game proceeds. Conversely, a model designed with the interrelationships between its units specified in formal rules has less use for human decision makers during its operation. Thus, formal rules and human participants are alternative means for establishing the dynamic relationships between a model’s units. Given this interpretation, two operational characteristics can be used for separating games and simulations. One definitional approach is to reserve the term “simulation” for models that are completely programmed (i.e., all operations are specified in advance) and that confine the role of humans to the specification of inputs. Alternatively, simulations can be defined as operating models that, although not necessarily excluding human decision makers, involve such complex programmed rules that either computer assistance or a special staff using smaller calculating equipment is required to determine the consequences of the rules. The latter definition of simulation is used in the remainder of this essay. Accordingly, a game is an operating model whose dynamics are primarily determined by participants and game administrators with a minimum of formal rules (i.e., neither computer nor calculation staff).

The use of “game” to identify one type of dynamic model necessitates distinguishing political gaming from the theory of games and from “parlor games.” Occasionally, game theory and political gaming are applicable to similar political problems. The theory of games, however, is a mathematical approach to conflict situations in which the outcome of play is dependent on the choices of the conflicting parties. To employ game theory, specified conditions must be fulfilled; for example, the players must have knowledge of all their alternatives and be able to assign utilities to each one. Political gaming does not require those conditions, nor does it contain formal solutions to conflict situations. Political gaming also may be confused with “parlor games” designed for entertainment. The two activities can be distinguished by political gaming’s instructional or research focus, its greater complexity, and its more elaborate effort to reproduce a political reference system or process.

To summarize, political games and simulations are operating models whose properties are substituted for selected aspects of past, present, or hypothetical political phenomena. Although consensus is lacking, many definitions imply that games are operating models in which the pattern of interaction and change between the represented units is made by human participants and administrators as the model is played. In simulations, the interaction and change among elements represented in the model are specified in formal rules of transformation which are frequently programmed on a computer. A proposed operational distinction is that a game becomes a simulation when the specified rules become so detailed as to require a separate calculating staff or a computer.

Operating models relevant to politics

Development . One origin of political gaming and simulation is the war game used to explore military strategy and tactics. The Kriegsspiel, or war game, was introduced in the German general staff late in the first quarter of the nineteenth century. Before the beginning of World War II, both the Germans and the Japanese–and perhaps the Russians–are reported (Goldhamer & Speier [1959] 1961, p. 71) to have extended some war games from strictly military exercises to operations which included the representation of political features.

The current development of simulations and games for the study of politics also has been influenced by the use of operating models in the physical sciences (e.g., the study of aerodynamics in wind tunnels), the availability of electronic computers, and the interest in experimental research throughout the social sciences. The extension of the small-group laboratory studies to include the examination of larger social systems has been a vital contribution.

RAND international politics game . The all-man game created at the RAND Corporation in the 1950s was one of the earliest post-World War II games conducted in the United States. As the game is described by Goldhamer and Speier (1959), it involves a scenario, a control team, and a group of players who are divided into teams. To establish the setting for the game, the scenario describes the relevant world features that exist at the outset of play. Each team of national policy makers decides on appropriate policy in response to the circumstances described in the scenario. Action taken by a nation is submitted to the control team, which judges its feasibility and plausibility. If the control team rules that a given policy is realistic, the action is assumed to have occurred. Appropriate teams are notified of the new development. The game evolves as nations respond to the moves of other teams and to the events introduced by the control team (e.g., technological innovations, assassinations, etc.).

In addition to the exercises conducted at RAND, this international politics game has provided the basic format for games at various institutions, including the Center for International Studies at the Massachusetts Institute of Technology (Bloomfield & Whaley 1965). In contrast to the RAND games, most of the university exercises have been conducted with students for instructional purposes. The problems presented by the game scenarios have ranged from historical events to future hypothetical developments. In some games, teams have represented various branches or departments within the same government rather than different nations.

Legislative game . The legislative game was developed by Coleman (1964) to examine collective decision making and bargaining. Although it is an all-man game like the RAND model, the initial conditions of the legislative game are established by a deck of especially designed cards rather than a scenario. Moreover, the game is structured exclusively by rules and the behaviors of players. It does not involve a control team. Six to eleven players assume the role of legislators representing the hypothetical constituencies or regions indicated on randomly assigned cards. These legislators attempt to remain in office by passing issues or bills the voters in their regions favor and by defeating items the voters oppose. In the critical bargaining period which precedes the legislators’ vote on each issue, any player may agree to support issues which are not relevant to his constituents in exchange for a vote from another legislator on an issue crucial for his re-election. After the game’s eight issues have been passed, defeated, or tabled, the legislators are able to determine whether they will be re-elected for another session.

Public opinion game . The public opinion game was designed by Davison (1961) as an instructional device to demonstrate such factors as the development of cross-pressures, the difference between private and public opinion, and the role of both psychological and social forces. Each participant is given the description of an individual whose role the player assumes. The final public opinion of every hypothetical individual is a weighted composite of (1) his private opinion before the issue is discussed, (2) his religious organization’s position, (3) his secondary organization’s position, and (4) his primary group’s position. Although the public opinion game is an all-man exercise like both the RAND and legislative games, it contains to a greater degree than the other two models quantitative rules that determine the outcome. The players, in various combinations, are able to choose the positions assumed by each of the elements which influence an individual’s final public opinion. Once the positions are taken, however, specific rules indicate what values shall be assigned to each opinion component and how they shall be compiled. If the public opinion game incorporated more determinants of opinion (the author acknowledges many simplifications), the resulting complexity of the rules probably would require a calculation staff or computer, thus transforming the game into a simulation according to the present definition.
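
A brief sketch indicates how such a weighted composite might be computed. The weights, the opinion scale, and the example positions below are hypothetical; they are not Davison's actual scoring rules.

```python
# Illustrative weighted-composite rule for a public opinion score.
# The weights, the -1..+1 scale, and the example positions are hypothetical
# assumptions, not the rules of Davison's game.

def final_public_opinion(private, religious, secondary, primary,
                         weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine four opinion components, each on a -1 (opposed) to +1 (favorable) scale."""
    components = (private, religious, secondary, primary)
    return sum(w * c for w, c in zip(weights, components))

# A player whose private opinion opposes an issue but whose groups favor it
# ends up cross-pressured toward a mildly favorable public position.
print(final_public_opinion(private=-0.8, religious=0.5, secondary=0.6, primary=0.7))
```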

Inter-Nation Simulation . The man-machine simulation developed by Guetzkow and his associates (1963), Inter-Nation Simulation (INS), is an example of an operating model whose dynamics are so complex that a calculation staff is required. In each of a series of 50-minute to 70-minute periods at least one exchange occurs between the structured calculations and the participants. Participants assume one of several functionally defined roles that are specified for each government. Unlike the RAND game, neither nations nor the positions within them correspond to specific counterparts in world politics.

In each simulated nation the decision makers allocate military, consumer, and “basic” resources in pursuit of objectives of their own choosing. After negotiating with other nations or alliances, the decision makers allocate their resources for trade, aid, blockades, war, research and development, defense preparations, and the generation of new resources. Programmed rules are used by the calculation staff to determine the net gain or loss in various types of resources that have resulted from the decisions. The programmed rules also determine whether the actions of the government have maintained the support of the politically relevant parts of their nation. If these elites, symbolically represented in the calculations, are dissatisfied, a revolution or election may establish a new government. The calculated results are returned to each nation, and a new cycle of interactions and decisions is begun. Versions of the Inter-Nation Simulation have been used in various institutions in the United States as well as by the Institute for Behavioral Science in Tokyo, Japan, and University College of London, England.
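
The following sketch suggests, in schematic form, the kind of calculation a staff member might perform at the end of a period. The resource-generation rate, the satisfaction formula, and the threshold for losing office are invented for illustration; they are not the published Inter-Nation Simulation equations.

```python
import random

# Schematic end-of-period calculation for one simulated nation.
# The resource-generation rate, the satisfaction formula, and the threshold
# for staying in office are invented; they do not reproduce INS programming.

def end_of_period(basic, consumer, military):
    """Given a nation's allocation of resource units, return next-period resources,
    a crude elite-satisfaction score, and whether the government survives."""
    next_resources = 1.1 * basic                                   # basic allocation generates growth
    satisfaction = 0.6 * consumer + 0.3 * military - 0.1 * basic   # invented weighting
    survives = satisfaction > 20 or random.random() > 0.4          # dissatisfied elites may revolt
    return next_resources, satisfaction, survives

resources, satisfaction, survives = end_of_period(basic=40, consumer=35, military=25)
print(round(resources, 1), round(satisfaction, 1), survives)
```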

Benson and TEMPER simulations . The dynamics of INS are created by a combination of participant activity and quantitative rules. By comparison, the operations of the international simulation constructed by Benson (1961) are completely programmed on a computer. Nine major nations and nine tension areas in contemporary world politics are contained in the program. To begin a simulation exercise, the operator supplies the description of the nine major nations along such dimensions as aggressiveness, alliance membership, foreign bases, trade, and geographical location. From these initial inputs the computer determines the nature of the international system, the extent of each nation’s interest in various areas, and other necessary indices.

The operator then announces an action by one of the major nations against one of the tension areas. Actions are described in terms of the degree of national effort required—from diplomatic protest to total war. From this information the computer program determines the response of each of the other nations. The effects of the initial action and resulting counteractions are then computed for each nation and the international system. With these new characteristics, the operator may specify another action.

A much more complex computer simulation of international relations, TEMPER (Technological, Economic, Military and Political Evaluation Routine), is currently being developed for the Joint War Games Agency of the U.S. Joint Chiefs of Staff (1965). Represented in TEMPER is the interaction between political, economic, cultural, and military features of as many as 39 nations, which are selected from 117 nations included in the program.

Crisiscom . Whereas the Benson simulation and TEMPER are computer representations of international systems, Crisiscom (Pool & Kessler 1965) depicts the psychological mechanisms in individuals that affect their information processing. In its present form, the decision makers represented in Crisiscom deal with foreign policy, and the computer inputs are messages that concern the interaction between international actors. The policy makers and the material they handle, however, might be adapted for other levels of political decision making.

The messages fed into the computer are assessed for affect (attitude or feeling of one actor toward another) and salience (the importance attached to an actor or interaction). Each message is given attention, set aside, or forgotten by the simulated decision maker, according to programmed instructions that reflect psychological observations about the rejection and distortion of information. For example, more attention is given a message that does not contradict previous interpretations. This filtering of communications determines what issues are selected by the policy makers for attention and decision. The filtering program not only influences the process for the selection of issues but also gradually alters the decision maker’s affect matrix. The affect matrix represents the decision maker’s perception of how each actor feels toward all the other actors in the world. In a test of Crisiscom the kaiser and the tsar were simulated. Both were given identical messages about events in the week prior to the beginning of World War I. At the end of the simulated week the differences in their affect matrices and the events to which they were attending reflected quite plausibly differences between the actual historical figures.
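
The filtering logic can be suggested by a short sketch. The thresholds, the affect scale, and the updating rule below are assumptions made for illustration; they are not the actual Crisiscom program.

```python
# Sketch of salience- and consistency-based message filtering with a gradual
# update of the perceived affect between two actors. All thresholds, weights,
# and scales are illustrative assumptions, not the Crisiscom code.

def process_message(affect, source, target, msg_affect, salience):
    """Decide whether a simulated decision maker attends to, sets aside, or forgets
    a message reporting how `source` feels toward `target`."""
    prior = affect[(source, target)]
    consistent = prior * msg_affect >= 0          # does the message confirm prior perceptions?
    if salience > 0.7 or (consistent and salience > 0.3):
        affect[(source, target)] = 0.8 * prior + 0.2 * msg_affect   # attended messages shift affect
        return "attend"
    return "set aside" if consistent else "forget"

affect = {("A", "B"): -0.4}    # the decision maker believes actor A dislikes actor B
print(process_message(affect, "A", "B", msg_affect=-0.6, salience=0.8), affect)
```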

Simulmatics election simulation . The Simulmatics Corporation has applied an all-computer simulation to the 1960 and 1964 American presidential elections (Pool et al. 1965). The electorate is represented in the computer by 480 types of voters. Every voter type is identified by a combination of traits that include geographical region, city size, sex, race, socioeconomic status, political party, and religion. For each voter type information is stored on 52 political characteristics, for example, intention to vote, past voting history, and opinions on various political issues. The empirical data for the 52 “issue-clusters” are drawn from national opinion polls conducted during campaigns since 1952. The aggregation of surveys, made possible by the stability of political attitudes in the United States, permits a more detailed representation of voters than is obtained in any single poll. Before the simulation can be applied to a particular election, the operators must decide what characteristics and issues are most salient. Equations or rules of transformation are written to express the impact of the issues on different kinds of voters.

In the 1960 campaign, a simulation whose equations applied cross-pressure principles to the voters’ religion and party ranked the nonsouthern states according to the size of the Kennedy vote. The simulation’s ordering of the states correlated .82 with the actual Kennedy vote. In 1964, the equations introduced into the computer program primarily represented the effects of three “issue-clusters” (civil rights, nuclear responsibility, and social welfare). This time the percentage of the vote in each state obtained by Johnson was estimated, and the correlation with the actual election was .90.
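
In schematic terms, the stored voter types and issue equations can be combined somewhat as follows. The voter types, the issue weight, and the loyalty figures are invented; they are not the Simulmatics data or equations.

```python
# Sketch of aggregating voter types into a state-level vote estimate.
# Every figure and the cross-pressure adjustment are invented illustrations,
# not the Simulmatics Corporation's data or equations.

voter_types = [
    # (share of state electorate, baseline party loyalty 0..1, position on a salient issue -1..+1)
    (0.35, 0.80, +0.6),
    (0.40, 0.45, -0.2),
    (0.25, 0.20, +0.1),
]

def candidate_share(types, issue_weight=0.25):
    """Estimate vote share as baseline loyalty adjusted by the salient issue cluster."""
    total = 0.0
    for share, loyalty, issue in types:
        support = min(1.0, max(0.0, loyalty + issue_weight * issue))
        total += share * support
    return total

print(f"estimated vote share: {candidate_share(voter_types):.1%}")
```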

Other operating models . The described operating models illustrate various constructions and procedures but do not form an exhaustive list of those techniques currently used in political inquiry. For example, the Simulmatics project is not the only computer simulation of political elections. McPhee (1961), Coleman (1965), and Abelson and Bernstein (1963) have constructed simulations representing not only the election outcome but also the process by which voters form opinions during the campaign. The last model differs from the others by representing a local community referendum.

Other games and simulations range from local politics games (Wood 1964) to games of international balance of power (Kaplan et al. 1960); from simulations of disarmament inspection (Singer & Hinomoto 1965) to those of a developing nation (Scott & Lucas 1966). In addition, other operating models not directly concerned with the study of political phenomena are relevant. Abstract games of logic developed by Layman Allen are used in the study of judicial processes at the Yale Law School. Bargaining games, such as those advanced by Thomas Schelling (1961), and the substantial number of business and management simulations (K. Cohen et al. 1964; American Management Association 1961; Thorelli & Graves 1964) have political applications, as do some of the numerous war games (Giffin 1965).

Uses of political gaming and simulation . The development and operation of a game or simulation demand time, effort, and finances. Moreover, operating models are beset by such difficulties as the question of their political validity. Despite these problems, games and simulations have been considered to be useful techniques in research, instruction, and policy formation.

Research . Many of the research attractions of operating models can be summarized in their ability to permit controlled experimentation in politics. In political reality, determination of the effects of a phenomenon can be severely hindered by confounding influences of potentially related events. Through a carefully designed model, an experimenter can isolate a property and its effects by either deleting the competing mechanisms from the model or holding them constant. Not only can a game or simulation be manipulated, but it can also be subjected to situations without the permanent and possibly harmful consequences that a comparable event might create in the real world.

This ability of the researcher to control the model permits replications and increased access. To establish a generalized pattern, repeated observations are required. A simulation or game can be assigned the same set of initial conditions and played over and over again, whereas in its reference system the natural occurrence of a situation might appear infrequently and perhaps only once. Moreover, replications of an operating model may reveal a greater range of alternative responses to a situation than could otherwise be identified.

Increased access to the objects of their research also leads students of politics to consider operating models. Players in a political game can be continuously observed; a detailed set of their written and verbal communications can be secured; and they can be asked to respond to interviews, questionnaires, or other test instruments at points selected by the researcher. Computer simulations can be instructed to provide a “print-out” at any specified stage of their operation, thus providing a record for analysis. The diversity of means by which data are readily obtained from such models contrasts with the limited access to the crowded lives of actual policy makers and their often sensitive written materials.

In addition to control, replication, and access, gaming and simulation contribute in at least two ways to theory building. First, the construction of a game or a simulation requires the developer to be explicit about the units and relationships that are to exist in his model and presumably in the political reference system it represents. An essential component of the political process which has been ignored in the design of a game is dramatized for the discerning observer. A computer simulation necessitates even more explication of the relationships among the model’s components. An incomplete program may terminate abruptly or result in unintelligible output. Thus, in constructing an operating model a relationship between previously unconnected findings may be discovered. Alternatively, a specific gap in knowledge about a political process may be pinpointed, and hypotheses, required by the model, may be advanced to provide an explanation.

A second value for theory building results from the operating, or dynamic, quality of games and simulations. Static statements of relationships can be transformed into processes which respond to change without the restrictions imposed by such alternatives as linear regression models. According to Abelson and Bernstein (1963, p. 94), “. . . the static character of statistical models and their reliance on linear assumptions seem to place an upper limit on their potential usefulness. . . . Computer simulation offers a technique for formulating systemic models, thus promising to meet this need.”

It should be recognized that these research assets are not shared equally by all political games and simulations. For example, a computer simulation may not generate the range of potential alternatives to a situation provided by a series of games played with policy experts. On the other hand, control and replication are more easily achieved in a programmed computer simulation than in a political game in which players are given considerable latitude in forming their own patterns of interaction.

Instruction . Simulations involving human participants and games are being used in graduate and undergraduate instruction as well as in secondary school teaching. Several evaluations of these techniques for college and university teaching have been reported in political science (e.g., Alger 1963; Cohen 1962) and in other fields (e.g., Thorelli & Graves 1964, especially pp. 25-31; Conference on Business Games 1963). One of these reports (Alger 1963, pp. 185-186) summarizes the frequently cited values of a simulation or game as a teaching aid: “(1) It increases student interest and motivation; (2) It serves as a laboratory for student application and testing of knowledge; (3) It provides insight into the decision-maker’s predicament; (4) It offers a miniature but rich model that facilitates comprehension of world realities.”

In contrast to the positive evaluations of users like Alger, a critical assessment of the instructional merits of political gaming and simulation has been offered by Cohen (1962). On the basis of a game he conducted, Cohen questioned (1) whether the game increased motivation among students not already challenged by the course; (2) whether the game stimulated interest in more than a narrow segment of the entire subject matter; (3) whether the game misrepresented or neglected critical features of political reality, thereby distorting the image of political phenomena; and (4) whether a comparable investment by both instructor and students in more conventional modes of learning might have offered a larger increment in education.

Systematic educational research has yet to clarify the differing evaluations of games and simulations for instruction. In one educational experiment with a political simulation (Robinson et al. 1966), undergraduate students in each of three courses were divided into two groups controlled for intelligence, grade-point average, and certain personality characteristics. One group augmented its regular course activities by a weekly discussion of relevant case studies. The other group in each course participated in a continuing simulation for an equal amount of time. Although most measures of student interest did not indicate significant differences between the groups, students evinced more interest in the case studies, while attendance was higher in the simulation group. Similarly, no direct and unmediated difference was found between the two groups on either mastery of facts or principles. To date, the limited educational research on gaming in other fields (e.g., Conference on Business Games 1963) also has produced ambiguous results.

It may be that the more knowledge students have in the subject area of a model or game, the more complex and detailed it must be to provide a satisfactory learning experience. One instructional use of games and simulations that may prove particularly useful with sophisticated students is to involve them in the construction of a model rather than to simply have them assume the role of players.

Policy formation . In some respects the application of games and simulations as adjuncts to policy formation is only an extension of their use for research and instruction. Policy makers have employed games or man-machine simulations as training aids for governmental officials in the same manner as management-training programs have incorporated business games. In the Center for International Studies game at the Massachusetts Institute of Technology, “questionnaires returned by participants have revealed that responsible officials . . . place a uniformly high value on the special benefits the games provide, particularly in sharpening their perspectives of alternatives that could arise in crisis situations” (Bloomfield & Whaley 1965, p. 866).

Simulation and gaming research performed under government contract or with more or less direct policy implications has concerned such issues as the systemic consequences of the proliferation of nuclear weapons (Brody 1963), the impact on political decision making of situational characteristics associated with crisis (Hermann 1965), and the potential role of an international police force at various stages of disarmament (Bloomfield & Whaley 1965). An application of simulation research for political campaigns is reflected in the election simulations conducted under contract to the National Committee of the Democratic Party by the Simulmatics Corporation (Pool et al. 1965). One of the major areas of U.S. government-related activity involving operating models is war gaming and its extension to cover relevant political aspects of national security. Public material on these activities, however, is limited.

Validity—evaluating correspondence . How are the elements of political reality estimated in a way to make their transmission to an operating model accurate? What are the implications of representing political properties by deterministic or stochastic processes? What are the relative advantages of games compared to simulations? If humans act as participants, how can the motivations of actual political leaders be reproduced? Underlying most of these questions is the problem of validity. Operating models are by definition representations of an existing or potential system or process. They are constructed for the information they can provide about a selected reference system. If in comparison to the performance of the reference system the simulation or game produces spurious results, the model is of little worth. An estimation of the “goodness of fit” or extent of similarity between an operating model and external criteria is, therefore, the central problem of validity.

Different types of criteria or standards can be used for evaluating the correspondence between operating models and their reference system (Hermann 1967). To date, however, the only effort at validity that has been applied to most simulations and games uses no specific criterion of comparison. In this approach, called face validity, the model’s realism is based on impressionistic judgments of observers or participants. If participants are used, face validity may concern estimates of their motivation or involvement. Unfortunately, participants can be motivated in a game that incorrectly represents the designated political environment. Furthermore, if the operating model involves the substitution of one property for another, some feature may give the appearance of being quite unreal even though it replicates the performance of the real-world property it replaces.

When specific criteria for correspondence are established, model validity may be determined by comparing events, variables and parameters, or hypotheses. In the first case, the product or outcome of the operating model is compared to the actual events it was intended to replicate (e.g., the correlation between the state-by-state election returns in the 1960 U.S. presidential campaign and those projected by the Simulmatics Corporation’s model; Pool et al. 1965). The second approach compares the variables and parameters that constitute a model with the real-world properties they are intended to represent (e.g., a current study uses factor analysis of the core variables of a simulation and the quantifiable real-world indices for which the simulation variables have been substituted; validity judgments are made from comparisons of the factors and of the variables that load on each factor; see Chadwick 1966). The third approach to validity compares the statistical results of a number of hypotheses tested in an operating model with comparable tests of the same hypotheses conducted with data drawn from the reference system (e.g., Zinnes 1966). Confirmation of a variety of hypotheses in both the operating model and in the actual political system increases confidence in the model’s validity.
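
The first of these approaches, comparing model outcomes with actual events, amounts to computing an index of agreement such as a correlation coefficient. The figures below are invented for illustration; they are not the Simulmatics projections or returns.

```python
# Sketch of an event-validity check: correlate an operating model's projected
# outcomes with outcomes observed in the reference system. The state figures
# are invented; they are not the Simulmatics projections or returns.

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

projected = [41.2, 55.0, 48.3, 60.1, 52.7]   # model's vote share by state (percent)
observed  = [43.0, 53.5, 47.1, 62.4, 51.0]   # actual returns for the same states (percent)
print(f"event-validity correlation: {pearson_r(projected, observed):.2f}")
```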

With the exception of face validity, each of the described validity approaches requires that values for specified properties be determined with precision for both the operating model and its reference system. Not only is this procedure often difficult, but in some instances it may not be appropriate. Imagine a political game designed as an aid for policy making whose purpose is to display as many different alternative outcomes to an initial situation as possible. The fact that the subsequent course of actual events leads to only one of those outcomes–or perhaps none of them–may not reduce the utility of the exercise for increasing the number of alternatives considered by the policy makers. In other instances, an operating model may be consistent with one of the described validity criteria but remain unsatisfactory for some purposes. One illustration is an election simulation in which the winning political party is decided by a stochastic process. With correct probability settings and frequent elections in the model, the simulation’s distribution of party victories would closely approximate election outcomes in the reference system. Despite the apparent event validity, the simulation would be undesirable for instruction in election politics. The naive participant might conclude from his experience with the collapsed campaign process that party victory is determined exclusively by chance.

In summary, a variety of validity strategies exist. The appropriate strategy varies from model to model and with the purpose for which the game or simulation was designed. Each operating model must be independently validated. At any given time the confidence in the accuracy with which a game or simulation represents its intended political reference system will be a matter of degree. Evaluation of these operating models for the study of politics will depend upon the development and application of procedures for measuring the extent to which each model’s purposes are fulfilled.

Charles F. Hermann

[See also International relations; Strategy.]

BIBLIOGRAPHY

Abelson, Robert P.; and Bernstein, Alex 1963 A Computer Simulation Model of Community Referendum Controversies. Public Opinion Quarterly 27:93-122.

Alger, Chadwick F. 1963 Use of the Inter-Nation Simulation in Undergraduate Teaching. Pages 150-189 in Harold Guetzkow et al., Simulation in International Relations: Developments for Research and Teaching. Englewood Cliffs, N.J.: Prentice-Hall.

American Management Association 1961 Simulation and Gaming: A Symposium. Report No. 55. New York: The Association.

Benson, Oliver 1961 A Simple Diplomatic Game. Pages 504-511 in James N. Rosenau (editor), International Politics and Foreign Policy: A Reader in Research and Theory. New York: Free Press.

Bloomfield, L. P.; and Whaley, B. 1965 The Political–Military Exercise: A Progress Report. Orbis 8:854-870.

Brody, Richard A. 1963 Some Systemic Effects of the Spread of Nuclear Weapons Technology: A Study Through Simulation of a Multi-nuclear Future. Journal of Conflict Resolution 7:663-753.

Chadwick, R. W. 1966 Developments in a Partial Theory of International Behavior: A Test and Extension of Inter-Nation Simulation Theory. Ph.D. dissertation, Northwestern Univ.

Cohen, Bernard C. 1962 Political Gaming in the Classroom. Journal of Politics 24:367-381.

Cohen, Kalman J. et al. 1964 The Carnegie Tech Management Game: An Experiment in Business Education. Homewood, Ill.: Irwin.

Coleman, James S. 1964 Collective Decisions. Sociological Inquiry 34:166-181.

Coleman, James S. 1965 The Use of Electronic Computers in the Study of Social Organization. Archives européennes de sociologie 6:89-107.

Conference on Business Games, Tulane University, 1961 1963 Proceedings. Edited by William R. Dill et al. New Orleans, La.: Tulane Univ., School of Business Administration.

Davison, W. P. 1961 A Public Opinion Game. Public Opinion Quarterly 25:210-220.

Dawson, Richard E. 1962 Simulation in the Social Sciences. Pages 1-15 in Harold Guetzkow (editor), Simulation in Social Science: Readings. Englewood Cliffs, N.J.: Prentice-Hall.

Giffin, Sidney F. 1965 The Crisis Game: Simulating International Conflict. Garden City, N.Y.: Doubleday.

Goldhamer, Herbert; and Speier, Hans (1959) 1961 Some Observations on Political Gaming. Pages 498-503 in James N. Rosenau (editor), International Politics and Foreign Policy: A Reader in Research and Theory. New York: Free Press.

Guetzkow, Harold et al. 1963 Simulation in International Relations: Developments for Research and Teaching. Englewood Cliffs, N.J.: Prentice-Hall.

Hermann, C. F. 1965 Crises in Foreign Policy Making: A Simulation of International Politics. China Lake, Calif.: U.S. Naval Ordnance Test Station, Contract N123 (60S30) 32779A.

Hermann, C. F. 1967 Validation Problems in Games and Simulations With Special Reference to Models of International Politics. Behavioral Science 12:216-233.

Kaplan, Morton A.; Burns, Arthur L.; and Quandt, Richard E. 1960 Theoretical Analysis of the “Balance of Power.” Behavioral Science 5:240-252.

McPhee, William N. (1961) 1963 Note on a Campaign Simulator. Pages 169-183 in William N. McPhee, Formal Theories of Mass Behavior. New York: Free Press.

Pool, Ithiel de Sola; and Kessler, A. 1965 The Kaiser, the Tsar, and the Computer: Information Processing in a Crisis. American Behavioral Scientist 8, no. 9:31-38.

Pool, Ithiel de Sola; Abelson, Robert P.; and Popkin, Samuel L. 1965 Candidates, Issues, and Strategies: A Computer Simulation of the 1960 and 1964 Presidential Elections. Rev. ed. Cambridge, Mass.: M.I.T. Press.

Rapoport, Anatol 1964 Strategy and Conscience. New York: Harper.

Robinson, James A. et al. 1966 Teaching With Inter-Nation Simulation and Case Studies. American Political Science Review 60:53-65.

Schelling, T. C. 1961 Experimental Games and Bargaining Theory. World Politics 14:47-68.

Scott, Andrew M.; and Lucas, William A. 1966 Simulation and National Development. New York: Wiley.

Shubik, Martin 1964 Game Theory and the Study of Social Behavior: An Introductory Exposition. Pages 3-77 in Martin Shubik (editor), Game Theory and Related Approaches to Social Behavior: Selections. New York: Wiley.

Singer, J. David; and Hinomoto, Hirohide 1965 Inspecting for Weapons Production: A Modest Computer Simulation. Journal of Peace Research 1:18-38.

Thorelli, Hans B.; and Graves, Robert L. 1964 International Operations Simulation: With Comments on Design and Use of Management Games. New York: Free Press.

Wood, R. C. 1964 Smith-Massachusetts Institute of Technology Political Game, Documents 1-5. Unpublished manuscript, Massachusetts Institute of Technology, Department of Political Science.

U.S. Joint Chiefs of Staff, Joint War Games Agency 1965 TEMPER. Volume 1: Orientation Manual. DDC AD 470 375L. Washington: The Agency.

Zinnes, Dina A. 1966 A Comparison of Hostile Behavior of Decision-makers in Simulate and Historical Data. World Politics 18:474-502.

Simulation

views updated Jun 27 2018

Simulation

Simulation is used to model efficiently a wide variety of systems that are important to managers. A simulation is basically an imitation, a model that imitates a real-world process or system. In business and management, decision makers are often concerned with the operating characteristics of a system. One way to measure or assess the operating characteristics of a system is to observe that system in actual operation. However, in many types of situations the cost of direct observation can be very high. Furthermore, changing some of the relationships or parameters within a system on an experimental basis may mean waiting a considerable amount of time to collect results on all the combinations that are of concern to the decision maker.

In business and management, a simulation is a mathematical imitation of a real-world system. The use of computers to conduct simulations is not essential from a theoretical standpoint. However, most simulations are sufficiently complex from a practical standpoint to require the use of computers in running them. A simulation can also be considered to be an experimental process. In a set of experimental runs, the decision maker actively varies some of the parameters or relationships in the system. If the mathematical model behind the simulation is valid, the results of the simulation runs will imitate the results of the real system if it were to operate over some period of time.

In order to better understand the fundamental issues of simulation, an example is useful. Suppose a regional medical center seeks to provide air ambulance service to trauma and burn victims over a wide geographic area. Issues such as how many helicopters would be best and where to place them would be in question. Other issues such as scheduling of flight crews and the speed and payload of various types of helicopters could also be important. These represent decision variables that are to a large degree under the control of the medical center. There are uncontrollable variables in this situation as well. Examples are the weather and the prevailing accident and injury rates throughout the medical center's service region.

Given the random effects of accident frequencies and locations, the analysts for the medical center would want to decide how many helicopters to acquire and where to place them. Adding helicopters and flight crews until the budget is spent is not necessarily the best course of action. Perhaps two strategically placed helicopters would serve the region as efficiently as four helicopters of some other type scattered haphazardly about. Analysts would be interested in such things as operating costs, response times, and expected numbers of patients who would be served. All of these operating characteristics would be impacted by injury rates, weather, and any other uncontrollable factors as well as by the variables they are able to control.

The medical center could run their air ambulance system on a trial-and-error basis for many years before they had any reasonable idea what combinations of resources would work well. Not only might they fail to find the best or near-best combination of controllable variables, but also they might very possibly incur an excessive loss of life as a result of poor resource allocation. For these reasons, this decision-making situation would be an excellent candidate for a simulation approach. Analysts could simulate having any number of helicopters available. To the extent that their model is valid, they could identify the optimal number of helicopters needed to maximize service, and where those helicopters could best be stationed in order to serve the population of seriously injured people who would be distributed about the service region. The fact that accidents can be predicted only statistically means that there would be a strong random component to the service system and that simulation would therefore be an attractive analytical tool in measuring the system's operating characteristics.

BUILDING THE MODEL

When analysts wish to study a system, the first general step is to build a model. For most simulation purposes, this would be a statistically based model that relies on empirical evidence where possible. Such a model would be a mathematical abstraction that approximates the reality of the situation under study. Balancing the need for detail with the need to have a model that will be amenable to reasonable solution techniques is a constant problem. Unfortunately, there is no guarantee that a model can be successfully built so as to reflect accurately the real-world relationships that are at play. If a valid model can be constructed, and if the system has some element that is random, yet is defined by a specific probability relationship, it is a good candidate to be cast as a simulation model.

Consider the air-ambulance example. Random processes affecting the operation of such a system include the occurrence of accidents, the locations of such accidents, and whether or not the weather is flyable. Certainly other random factors may be at play, but the analysts may have determined that these are all the significant ones. Ordinarily, the analysts would develop a program that would simulate operation of the system for some appropriate time period, say a month. Then, they would go back and simulate many more months of activity while they collect, through an appropriate computer program, observations on average flight times, average response times, number of patients served, and other variables they deem of interest. They might very well simulate hundreds or even thousands of months in order to obtain distributions of the values of important variables. They would thus acquire distributions of these variables for each service configuration, say the number of helicopters and their locations, which would allow the various configurations to be compared and perhaps the best one identified using whatever criterion is appropriate.

MONTE CARLO SIMULATION

There are several different strategies for developing a working simulation, but two are probably most common. The first is the Monte Carlo simulation approach. The second is the event-scheduling approach. Monte Carlo simulation is applied where the passage of time is not incorporated into the simulation model. Consider again the air ambulance example. If the simulation is set up to imitate an entire month's worth of operations all at once, it would be considered a Monte Carlo simulation. A random number of accidents and injuries would generate a random number of flights with some sort of average distance incorporated into the model. Operating costs and possibly other operating values sought by the analysts would be computed.

The advantage of Monte Carlo simulation is that it can be done very quickly and simply. Thus, many months of operations could be simulated in the ambulance example. From the many months of operational figures, averages and distributions of costs could readily be acquired. Unfortunately, there is also a potentially serious disadvantage to the Monte Carlo simulation approach. If analysts ignore the passage of time in designing the simulation, the system itself may be oversimplified. In the air ambulance example, it is possible to have a second call come in while a flight is in progress which could force a victim to wait for a flight if no other helicopter is available. A Monte Carlo simulation would not account for this possibility and hence could contribute to inaccurate results. This is not to say that Monte Carlo simulations are generally flawed. Rather, in situations where the passage of time is not a critical part of the system being modeled, this approach can perform very well.
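
A minimal sketch shows how such a one-pass monthly simulation might look. The accident rate, flight times, and cost figures are hypothetical, and the Poisson sampler is a textbook method rather than part of any particular package.

```python
import math
import random

# Minimal Monte Carlo sketch of the air-ambulance example: each month is
# simulated in one pass, without tracking when individual flights occur.
# The accident rate, flight durations, and hourly cost are hypothetical.

def poisson(mean):
    """Draw a Poisson-distributed count (Knuth's multiplication method)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def simulate_month(mean_accidents=20.0, cost_per_flight_hour=2500.0):
    """Return (number of flights, operating cost) for one simulated month."""
    flights = poisson(mean_accidents)
    hours = sum(random.uniform(0.5, 2.0) for _ in range(flights))   # round-trip flight times
    return flights, hours * cost_per_flight_hour

months = [simulate_month() for _ in range(1000)]
average_cost = sum(cost for _, cost in months) / len(months)
print(f"average monthly operating cost: ${average_cost:,.0f}")
```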

EVENT-SCHEDULING METHOD

The event-scheduling method explicitly takes into account time as a variable. In the air ambulance example, the hypothetical month-long simulation of the service system would emerge over time. First, an incident or accident would occur at some random location, at some random time interval from the beginning point. Then, a helicopter would respond, weather permitting, the weather being another random component of the model. The simulated mission would require some random time to complete with the helicopter eventually returning to its base. While on that service mission, another call might come in, but the helicopter would probably need to finish its first mission before undertaking another. In other words, a waiting line or queue, a term often used in simulation analysis to indicate there are customers awaiting service, could develop. The event scheduling approach can account for complexities like this where a Monte Carlo simulation may not.

With a computer program set up that would imitate the service system, hundreds of months would be simulated and operating characteristics collected and analyzed through averages and distributions. This would be done for all the relevant decision-variable combinations the analysts wish to consider. In the air ambulance example, these would include various numbers of helicopters and various base location combinations. Once the analysts have collected enough simulated information about each of the various combinations, it is very likely that certain combinations will emerge as being better than others. If one particular design does not rise to the top, at least many of them can usually be eliminated, and those that appear more promising can be subjected to further study.
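
The sketch below adds the element the Monte Carlo version omits: a simulated clock, so that a call arriving while the helicopter is out must wait. The arrival and mission-time distributions and the single-helicopter assumption are hypothetical.

```python
import heapq
import random

# Compact event-scheduling sketch of the air-ambulance example with a single
# helicopter: calls arrive over continuous time and queue while the helicopter
# is busy. The arrival rate and mission durations are hypothetical.

def simulate(hours=720.0, mean_gap=24.0, mean_mission=1.5, seed=1):
    rng = random.Random(seed)
    free_at, waits = 0.0, []
    events = [rng.expovariate(1.0 / mean_gap)]    # future call times (a min-heap)
    while events:
        t = heapq.heappop(events)                 # advance the clock to the next call
        if t > hours:
            break
        start = max(t, free_at)                   # the call waits if the helicopter is out
        waits.append(start - t)
        free_at = start + rng.expovariate(1.0 / mean_mission)
        heapq.heappush(events, t + rng.expovariate(1.0 / mean_gap))
    return sum(waits) / len(waits) if waits else 0.0

print(f"average wait for a helicopter: {simulate():.2f} hours")
```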

PROGRAMMING LANGUAGES

It was noted that, while there is no theoretical need to computerize a simulation, practicality dictates the use of computers. In the air ambulance example, analysts would require thousands of calculations to simulate just one month of operation for one set of decision-variable values. Multiply this by hundreds of monthly simulations, and the prospect of doing it somehow by hand becomes absolutely daunting. Because of this problem, programming languages have been developed that explicitly support computer-based simulation. Using such programs, analysts can develop either of the types of simulations mentioned here, a Monte Carlo simulation or an event-scheduling method simulation, or other types as well.

Commonly Used Languages. Many of the programming languages used in the twenty-first century were first developed in the 1960s and 1970s. SIMSCRIPT, one of the most widely used simulation languages, was first developed in 1963. It is particularly well suited to the event-scheduling method. The language has undergone several incarnations; a recent version, SIMSCRIPT III, was released in 2005. To apply this language, analysts develop a logical flow diagram, or model, of the system they seek to study. SIMSCRIPT is a stand-alone language that can be used to program a wide variety of models. Thus, someone who uses simulation regularly on a variety of problem types might be well served by having this type of language available.

Another widely used language is called GASP IV, first introduced in 1974. It operates more as an add-in set of routines to other high-level programming languages such as FORTRAN or PL/1. With the rapid proliferation of personal computers in recent years, specific simulation software packages, simulation add-ins to other packages, and other capabilities have become widely available. For instance, a simple Monte Carlo simulation can be performed using a spreadsheet program such as Microsoft's Excel. This is possible because Excel has a built-in random number function. However, one must be aware that the validity of such random number functions is sometimes questionable.

Although older languages continue to be used, new simulation software is still being developed and upgraded. VisSim is of more recent origin, dating back to 1989, when it was developed as Windows-based software for modeling and simulating dynamic systems. The newest version, VisSim 7.0 (2008), includes such advanced features as 3D plotting and animation, new random generators, and an improved user interface.

Construction of Simulation Software. One of the basic building blocks within any simulation language or other tool is the random number generator. Ordinarily, such a generator consists of a short set of programming instructions that produce a number that looks uniformly random over some numeric interval, usually a decimal fraction between zero and one. Of course, since the number comes from programming code, it is not really random; it only looks random. Any fraction between zero and one is theoretically as likely as any other. Such numbers can then be combined or transformed into apparently-random numbers that follow some other probability function, such as a normal probability distribution or a Poisson probability distribution.
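
A short sketch illustrates both building blocks: a congruential generator producing uniform fractions, and a transformation of those fractions into normally distributed values. The multiplier, increment, and modulus are common textbook constants, not those of any particular simulation package.

```python
import math

# Sketch of a linear congruential generator for uniform fractions on [0, 1)
# and a Box-Muller transformation into normal deviates. The constants are
# common textbook choices, not those of any specific simulation language.

class LCG:
    def __init__(self, seed=12345, a=1103515245, c=12345, m=2 ** 31):
        self.state, self.a, self.c, self.m = seed, a, c, m

    def uniform(self):
        """Return the next pseudo-random fraction in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

    def normal(self, mean=0.0, sd=1.0):
        """Transform two uniform draws into one normal draw (Box-Muller)."""
        u1 = self.uniform() or 1e-12               # guard against log(0)
        u2 = self.uniform()
        z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        return mean + sd * z

gen = LCG()
print([round(gen.uniform(), 3) for _ in range(3)], round(gen.normal(), 3))
```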

This capability facilitates building simulations that have different types of random components within them. However, if the basic generator is invalid or not very effective, the simulation results may very well be invalid even though the analysts have developed a perfectly valid model of the system being studied. Thus, there is a need for analysts to be sure that the underlying random number generating routines produce output that at least looks random. There is a need for external validity in a simulation model, a need for the model to accurately imitate reality. There is just as critical a need for the building blocks within the model to be valid, for internal validity, which can be a problem when an untested random number generator is employed.

EXPERIENTIAL GAMES

One particularly fast-growing area of simulation applications lies in experiential games. The board games we played as youngsters were basically simulations. Usually, some kind of race was involved: the winner was the player who could maneuver his or her playing pieces around the board fastest, in the face of various obstacles and opponents' moves. The basic random number generator was usually a pair of dice. Computer simulations have expanded the complexity and potential of such gaming a great deal, and rapid advancements in computer-based and online simulations have produced a proliferation of such games. SimCity, first introduced in 1989, is a city-building simulation game that has been upgraded repeatedly over the past twenty years and made available for personal computers and various video-game console systems. The latest incarnation, SimCity Societies, was released in 2007. SimCity has spawned a host of successors and spin-offs, including SimEarth (originally released in 1990), SimRefinery (1993), and SimSafari (1998).

Management and business simulations have been developed that are sufficiently sophisticated for use in the college classroom. Almost all of these consist of specialized computer programs that accept decision sets from the game's players. Once the decision sets are entered, the program simulates some particular period of time, usually a year, and outputs the competitive results in financial and operating measures such as dollar and unit sales, profitability, market share, and operating costs. Some competitors fare better than others because their decisions prove more effective in the face of competition in the computer-simulated marketplace. An important difference between board games and business simulations lies in the complexity of outcomes. The board game traditionally has only one winner, whereas a well-developed business simulation can have several winners, with different players achieving success in different aspects of the simulated market that serves as the game's playing field. Hence, business simulations have become useful and effective learning tools in classroom settings and in online learning. A fundamental reason for this is that simulation permits an otherwise complex system to be imitated at very low cost, in both dollar and human terms; Internet connectivity also permits participation by a widely dispersed group of students.
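
A drastically simplified sketch of the decision-set idea follows. Each player submits a price and an advertising budget, the program simulates one year of a hypothetical market, and it reports unit sales, market share, and profit. The demand rule and all cost figures are invented for illustration and are far simpler than those in a real business simulation.

```python
def simulate_year(decisions, market_size=100_000, unit_cost=20.0):
    # decisions: {player_name: {"price": ..., "advertising": ...}}
    # Invented rule: attractiveness rises with advertising, falls with price.
    scores = {p: d["advertising"] / d["price"] for p, d in decisions.items()}
    total = sum(scores.values())
    results = {}
    for player, d in decisions.items():
        share = scores[player] / total
        units = market_size * share
        profit = units * (d["price"] - unit_cost) - d["advertising"]
        results[player] = {"units": round(units), "share": round(share, 3),
                           "profit": round(profit)}
    return results

print(simulate_year({
    "Team A": {"price": 35.0, "advertising": 50_000},
    "Team B": {"price": 30.0, "advertising": 80_000},
}))
```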

Simulation will continue to prove useful in situations where timely decision making is important and where experimenting with multiple methods and variables is not financially feasible or prudent. Simulation allows viable solutions to be tested informatively before implementation.

SEE ALSO Models and Modeling

Simulation

views updated May 29 2018

Simulation

Simulation, from the Latin simulare, means to "fake" or to "replicate." The Concise Oxford Dictionary of Current English defines simulation as a "means to imitate conditions of (situation etc.) with a model, for convenience or training." Sheldon Ross of the University of California, Berkeley, states less formally that "computer simulations let us analyze complicated systems that can't be analyzed mathematically. With an accurate computer model we can make changes and see how they affect a system." Simulation involves designing and building a model of a system and carrying out experiments to determine how the real system works, how the system can be improved, and how future changes will affect the system (called "what if" scenarios). Computer simulations of systems are effective when performing actual experimentation is expensive, dangerous, or impossible.

One of the principal benefits of using simulation to model a real-world system is that someone can begin with a simple approximation of the process and gradually refine it as his or her understanding of the system improves. This stepwise refinement enables good approximations of complex systems relatively quickly. Also, as refinements are added, the simulation results become more accurate.

The oldest form of simulation is the physical modeling of smaller, larger, or exact-scale replicas. Scaled-down (smaller) replicas include simulations of chemical plants and river-estuary systems. Scaled-up (larger) replicas include systems such as crystal and gene structures. Exact-scale replicas include an aircraft cockpit used for pilot training or a space shuttle simulator to train astronauts.

Simulation is central to the rise of digital computers, and the story starts, strangely enough, with the pipe organ. American inventor Edwin Link (1904-1981) received his inspiration for the first pilot training simulator while working for his father's piano and organ company in the 1930s. Link developed mechanical "trainers" that used a pneumatic system to simulate the movement of aircraft. During World War II (1939-1945), the Link Trainer proved the training value of flight simulation and convinced the U.S. Navy to ask that the Massachusetts Institute of Technology (MIT) develop a computer that would power a general-purpose flight simulator. This endeavor became Project Whirlwind and "evolved into the first real-time, general purpose digital computer [which] made several important contributions in areas as diverse as computer graphics, time-sharing, digital communications, and ferrite-core memories," according to Thomas Hughes in his book Funding a Revolution.

In a society of limited resources and rapid technological change, training challenges are increasingly being addressed by the use of simulation-based training devices. Economic analysis supports the use of simulators as a sound investment, a flexible resource that provides a return for many years. Since the early 1960s, simulation has been one of many methods used to aid strategic decision making in business and industry. As computer technology progresses and the cost of realistic simulation-based training continues to decline, it is becoming increasingly possible to train simultaneously at different geographic locations and to train for situations that cannot be practiced in real life, such as shutting down a nuclear power plant after an earthquake.

Simulations can be classified as being discrete or analog. Discrete event simulation builds a software model to observe the time-based behavior of a system at discrete time intervals or after discrete events in time. For example, customers arrive at a bank at discrete intervals. Between two consecutive time intervals or events, nothing can occur. When the number of time intervals or events is finite, the simulation is called a "discrete event" simulation. The discrete event simulation software can be a high-level general-purpose programming language (C or C++) or a specialized event- or data-driven application (a simulator).
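
As a rough illustration of the event-scheduling idea, the following sketch simulates the bank example with a single teller. It assumes exponentially distributed gaps between arrivals and exponentially distributed service times; both assumptions and all the numbers are invented for illustration. Only two kinds of events exist, an arrival and a service completion, and the simulated clock jumps from one event directly to the next.

```python
import heapq
import random

def simulate_bank(closing_time=480.0, mean_gap=5.0, mean_service=4.0):
    # Future events are kept in a priority queue ordered by event time.
    events = [(random.expovariate(1.0 / mean_gap), "arrival")]
    queue_length, server_busy, served = 0, False, 0
    while events:
        time, kind = heapq.heappop(events)
        if time > closing_time:
            break
        if kind == "arrival":
            queue_length += 1
            # Schedule the next arrival; nothing happens between events.
            heapq.heappush(events,
                           (time + random.expovariate(1.0 / mean_gap), "arrival"))
        else:  # a service completion
            server_busy = False
            served += 1
        if queue_length > 0 and not server_busy:
            # Start serving the next waiting customer and schedule the completion.
            queue_length -= 1
            server_busy = True
            heapq.heappush(events,
                           (time + random.expovariate(1.0 / mean_service), "departure"))
    return served

print(simulate_bank())  # customers served in one simulated eight-hour day
```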

However, in the real world, events can occur at any time, not just at discrete intervals. For example, the water level in a reservoir with given inflow and outflow may change all the time, and the level may be specified to an infinite number of decimal places. In such cases, continuous, or analog, simulation is more appropriate, although discrete event simulation could be used as an approximation. Some systems are neither completely discrete nor completely analog, resulting in the need for combined discrete-analog simulation.
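
The reservoir example can be approximated by stepping a continuous model forward in small time increments. The sketch below uses a simple Euler update with invented inflow and outflow rules; a finer step gives a closer approximation of the truly continuous behavior.

```python
def simulate_reservoir(level=500.0, hours=72, step=0.1):
    # level in thousands of cubic meters; all rates are invented for illustration.
    t = 0.0
    while t < hours:
        inflow = 12.0                       # constant inflow rate per hour
        outflow = 0.02 * level              # outflow grows with the water level
        level += (inflow - outflow) * step  # Euler step: rate times time increment
        t += step
    return level

print(simulate_reservoir())
```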

A common way to simulate the random occurrence of events is to use Monte Carlo simulation. It is named for Monte Carlo, Monaco, where the primary attractions are casinos containing games of chance exhibiting random behavior, such as roulette wheels, dice, and slot machines. The random behavior in such games of chance is similar to how Monte Carlo simulation selects variable values at random to simulate a model. For example, when someone rolls a die, she knows that a 1, 2, 3, 4, 5, or 6 will come up, but she does not know which number will occur on any particular roll. The same is true of the variables used in computer simulations; they have a known range of values but an uncertain value for any particular time or event. A Monte Carlo simulation of a specific model randomly generates values for the uncertain input variables over and over again (in what are called "trials") in order to produce results that can be described statistically, that is, with an estimate of the probability that an actual output from the physical system will fall within a predicted range.
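
The die-rolling illustration can be written directly as a Monte Carlo experiment. The short program below repeats a trial, rolling two dice and recording whether their sum reaches ten, many thousands of times and reports the fraction of trials in which it did; with enough trials the estimate settles near the exact probability of 6/36.

```python
import random

def trial():
    # One trial: roll two dice and note whether the sum is ten or more.
    return random.randint(1, 6) + random.randint(1, 6) >= 10

trials = 100_000
hits = sum(trial() for _ in range(trials))
# The estimate should settle near the exact value of 6/36, about 0.167.
print(hits / trials)
```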

Modeling is both an art and a science. It is an art to decide which features of the physical object need to be included in an abstract mathematical model. Any model must capture what is important and discard even interesting features that are not (uninteresting and irrelevant features are easy to discard). Complexity and processing performance also guide the art of deciding on a minimal set of features to be modeled from the physical object.

The science of simulation is the quantitative description of the relationships between the features being modeled. These relationships dictate a model's transformation from one state to another over time. Even when a mathematical model can eventually be derived in a solvable (closed) form, it may or may not accurately represent the physical system. Computer simulation is preferable when the physical system cannot be mathematically modeled because of the complexity of variables and interacting components. Well-known examples of simulation are flight simulators and business games. However, there are a large number of potential areas for simulation, including service industries, transportation, environmental forecasting, entertainment, and manufacturing. For example, if a company wishes to build a new production line, the line can first be simulated to assess feasibility and efficiency.

Simulation and Computers

Although discrete event simulation can be carried out manually, it can be computationally intensive, lending itself to the use of computers and software. Simulation became widespread after computers became popular tools in scientific and business environments.

At this point, it may be helpful to define the relationship of computer simulation to the related fields of computer graphics, animation, and virtual reality (VR). Computer graphics is the computational study of light and its effect on geometric objects, with the focus on producing meaningful rendered images of real-world or hypothetical objects. Animation is the use of computer graphics to generate a sequence of frames that, when passed before one's eyes very quickly, produce the illusion of continuous motion. Virtual reality is focused on immersive human-computer interaction, as found in devices such as head-mounted displays, position sensors, and data gloves. Simulation is the infrastructure on which these other fields are built: a simulation model must be created and executed and its output analyzed. Simulation is thus the underlying engine that drives the graphics, animation, and virtual reality technologies.

Visual interactive simulation has been available since the late 1970s. Before this, simulation models were simply "black boxes": data going in and results coming out, with the output requiring extensive statistical processing. Using on-screen animations in a simulation model enables the status of a model to be viewed as it progresses; for example, a machine that breaks down may change color to red. This enables visual cues to be passed back to the user instantaneously so that action can be taken.

Although the simulation examples outlined thus far have been physical systems, social situations can also be simulated in the electronic equivalent of role-playing or gaming. Both simulation and gaming can be defined as a sequence of activities in which players participate under overt constraints (agreed-on rules), usually competing toward an objective. The classic examples of simulation games are board games such as chess and Monopoly. Simulation games vary widely and have advanced along with time and technology, making them more interesting, enjoyable, realistic, and challenging.

According to the Interactive Digital Software Association (IDSA), the sale of interactive game simulation software for computers, video consoles, and the Internet generated revenues of $5.5 billion in 1998 for companies such as Nintendo and Sony, second only to the motion picture industry, which generated revenues of $6.9 billion in 1998. In fact, computer companies, including Intel, Apple, and AMD (Advanced Micro Devices), are increasingly designing their central processing units for gaming entertainment performance and not for office applications.

see also Analog Computing; Digital Computing.

William Yurcik

Bibliography

Banks, Jerry, John S. Carson II, and Barry L. Nelson. Discrete-Event Simulation, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1996.

Hughes, Thomas. Funding a Revolution. New York: National Academy Press, 1999.

Khoshnevis, Behrokh. Discrete Systems Simulation. New York: McGraw-Hill, 1994.

Killgore, J. I. "The Planes That Never Leave the Ground." American Heritage of Invention & Technology (Fall 1989): 56-63.

Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 1991.

Macedonia, Michael. "Why Digital Entertainment Drives the Need for Speed." IEEE Computer 33, no. 2 (2000): 124-127.

Simulation

views updated May 21 2018

Simulation

Space explorers venture into the unknown. But the support crews of the space explorers do their best to send their imagination, analysis, and scientific knowledge ahead. Simulation has always been an integral part not only of astronaut training but also of testing engineering designs of hardware and software and all the procedures developed for the mission. The hard work of a dedicated simulation and training support team prepares the astronaut crews to successfully deal with emergencies, while mostly avoiding surprises in the mission execution.

Specific Applications of Simulation

Simulation allows the astronauts to become comfortable with the unfamiliar. The astronauts practice on simulators such as a mock-up of the space shuttle's crew compartment. Pilots practice shuttle approaches and landings with the modified Grumman Gulfstream G-2 corporate jet (otherwise known as the Shuttle Training Aircraft), which mimics the different drag and center of rotation of the shuttle. Mission specialists maneuver cargo in the payload bay or practice satellite retrieval on a simulated manipulator arm.

Demanding crew training regimes at the National Aeronautics and Space Administration's (NASA) Johnson Space Center in Houston, Texas, include single-system trainers that simulate specific functions such as propulsion, guidance, navigation, and communications. All of the single-system training comes together in the shuttle mission simulator (SMS) and the shuttle engineering simulator (SES). The SES simulates rendezvous, station keeping, and docking using a domed display for a realistic full-scale perspective of the shuttle cockpit view. The SMS includes a motion-based simulator for ascent and entry training and a fixed-base simulator for orbit simulations. The SMS simulators imitate the sounds, scenes, and motion of a full shuttle mission, from liftoff to touchdown, to give the astronauts the feel of a real mission.

Every conceivable emergency or malfunction is practiced repeatedly in the simulator. The simulators are also used for problem solving. When the oxygen tank exploded on Apollo 13, for example, ground support teams and backup astronaut crews used the simulator to work solutions and send new procedures to the crew.

Sophisticated for their time, the original simulators were installed in 1962 by the Link company, which pioneered full-flight simulators. But that was the age of room-sized mainframe computers and engineers carrying slide rules in their pockets. Neither the personal computer nor the hand calculator had been developed yet. Tools for mission training are more sophisticated today. The SMS and the SES were upgraded in 1999 with new Silicon Graphics computers and software that increased the display capability by a factor of thirty.

Virtual-Reality Simulators

NASA increasingly uses sophisticated interactive virtual-reality simulators to plan and train for space shuttle and International Space Station operations. In the Johnson Space Center's Virtual Reality Laboratory, astronauts wearing virtual-reality helmets see the payload bay, each other, and the object they are handling. They can practice handing off an object to other astronauts. Handholds for the objects are suspended from ceiling wires calibrated to mimic the object's behavior in zero gravity.

Science teams from around the world also use virtual-reality simulations to coordinate, plan, and execute International Space Station and experiment operations. Virtual-reality databases allow distant users to observe diverse system interactions together.

Less-Sophisticated Tools and Techniques

While NASA is now able to employ sophisticated computer technology for simulating space tasks, realism can be simulated with simpler technologies. Astronaut candidates experience weightlessness on a KC-135 airplane flown in a parabolic path that simulates twenty to thirty seconds of floating in space. Known as the "vomit comet" because of the unsettling effect of sudden weightlessness, the KC-135 simulates zero gravity for astronaut training as well as for microgravity experiments.

Tasks involving the manipulation of massive objects for space shuttle operations or space station construction can be simulated in NASA's Neutral Buoyancy Laboratory (NBL) at the Johnson Space Center. (An object is neutrally buoyant when it has an equal tendency to float or to sink.) Astronauts suit up and train underwater with backup scuba divers for missions such as the repair of the Hubble Space Telescope. Linked with the SMS and the Mission Control Center, astronauts in the NBL can train on specific mission timelines with flight controllers and with astronauts piloting in the cockpit.

To become familiar with a lunar landscape, Apollo astronauts visited volcanic and impact crater sites such as Craters of the Moon National Monument and Meteor Crater. They made geological field trips to Alaska, Hawaii, and Iceland. At Sunset Crater Volcano National Monument outside of Flagstaff, Arizona, geologists created a realistic site for operating in a lunar environment by blasting craters in the cinder field, erecting a mock-up of the lunar lander, and bringing in a lunar rover for the astronauts to drive.

When it all comes together before a launch, the simulations and training prepare the astronauts to confidently go where no one has gone before, except in the imagination.

see also Astronaut, Types of (volume 3); Computers, Use of (volume 3); International Space Station (volumes 1 and 3); Rendezvous (volume 3); Space Shuttle (volume 3).

Linda D. Voss

Simulation

views updated May 23 2018

SIMULATION

Simulation is a special form of untruthfulness. It is an acted lie; for while the lie, properly speaking, is untruthfulness in speech, simulation is untruthfulness in deed.

Simulation is sinful, having the same kind and degree of malice as a lie; i.e., in itself it is venially sinful, but according to circumstances (e.g., when it causes another person grave injury) it can be mortal. However, its sinfulness is not quite so obvious as that of the lie. Words have definite meanings; and so if certain words are used that are contrary in meaning to what one has in mind, it is evident that a lie is being told. But actions are not so definitely significative. Except for a few conventional gestures, actions have no set meaning. Here it is the intention that counts. A woman may dye her hair, not wishing to deceive anyone or to appear what she is not, but simply to beautify herself, which, within certain limits, she has every right to do. Another may do exactly the same thing, but with the definite intention to deceive, e.g., to be taken for another woman. Here the intention vitiates the act, and the result is a sin of simulation.

Simulation can manifest itself in a variety of ways and spring from a multiplicity of motives. A man by affectations in demeanor or speech or dress may pretend to a culture or knowledge or wisdom that is not his own. Another may simulate a professional competence, as in the case of the quack or even of the legitimate doctor who poses as a specialist in areas of medicine other than his own. Another may simulate spiritual gifts and virtues, as does the fortune-teller, the clairvoyant, or the hypocrite. Still another may protest a love and respect for one whom in reality he despises. All of this may be from motives of monetary gain, reputation, power, or simply from uncontrolled feelings of inferiority. Ignorant of his own worth and potential, a man puts on a mask that he might appear in the power and worth of another.

It may seem that there are cases in which simulation is legitimate and even virtuous. For instance, one who is sick may act as though he were quite well so as not to cause inconvenience to others; or he may affect ignorance when he feels that a display of knowledge might embarrass another. These are cases not of simulation but of dissimulation. There is no real pretense here springing from a desire to deceive, but simply silence in order to keep one's secret. In ordinary circumstances one need not speak all one knows, and has no obligation to declare to others the state of his body or soul. By the same token one has a right to dissemble, i.e., to act in such a way as to ward off the curiosity of others. However, if being sick, one should simulate health in order to get a job in which health is required, or if one should pretend ignorance to avoid an obligation, which otherwise would be legitimately imposed upon him, then he would be committing the sin of simulation. For he would not simply be forestalling the curiosity of others, but actually and positively deceiving them.

[s. f. parmisano]

simulation

views updated May 18 2018

simulation Imitation of the behavior of some existing or intended system, or some aspect of that behavior. Examples of areas where simulation is used include communication network design, where simulation can be used to explore overall behavior, traffic patterns, trunk capacity, etc., and weather forecasting, where simulation can be used to predict likely developments in the weather pattern. More generally, simulation is widely used as a design aid for both small and large systems, and is also used extensively in the training of people such as airline pilots or military commanders. It is a major application of digital computers and is the major application of analog computers.

From an implementation viewpoint, a simulation is usually classified as being either discrete event or continuous. For a discrete event simulation it must be possible to view all significant changes to the state of the system as distinct events that occur at specific points in time; the simulation then achieves the desired behavior by modeling a sequence of such events, treating each individually. By contrast, a continuous simulation views changes as occurring gradually over a period of time and tracks the progress of these gradual changes. Clearly the choice between these two in any particular case is determined by the nature of the system to be simulated and the purposes that the simulation is intended to achieve.

Although the distinction between simulation and emulation is not always clear, an emulation is normally “realistic” in the sense that it could be used as a direct replacement for all or part of the system that is emulated. In comparison, a simulation may provide no more than an abstract model of some aspect of a system.

simulate

views updated Jun 08 2018

sim·u·late / ˈsimyəˌlāt/ • v. [tr.] imitate the appearance or character of: red ocher intended to simulate blood | [as adj.] (simulated) a simulated leather handbag. ∎ pretend to have or feel (an emotion): it was impossible to force a smile, to simulate pleasure. ∎ produce a computer model of: future population changes were simulated by computer. DERIVATIVES: sim·u·la·tion / ˌsimyəˈlāshən/ n. sim·u·la·tive / -ˌlātiv/ adj.

simulate

views updated May 29 2018

simulate XVII. — pp. stem of L. simulāre, f. similis SIMILAR; see -ATE3.
So simulation XIV. — OF. or L.