Simulation Theory

A prominent part of everyday thought is thought about mental states. We ascribe states like desire, belief, intention, hope, thirst, fear, and disgust both to ourselves and to others. We also use these ascribed mental states to predict how others will behave. Ability to use the language of mental states is normally acquired early in childhood, without special training. This naïve use of mental state concepts is variously called folk psychology, theory of mind, mentalizing, or mindreading and is studied in both philosophy and the cognitive sciences, including developmental psychology, social psychology, and cognitive neuroscience. One approach to mindreading holds that mental-state attributors use a naïve psychological "theory" to infer mental states in others from their behavior, the environment, or their other mental states, and to predict their behavior from their mental states. This is called the theory theory (TT). A different approach holds that people commonly execute mindreading by trying to simulate, replicate or reproduce in their own minds the same state, or sequence of states, as the target. This is the simulation theory (ST).

Another possible label for simulation is empathy. In one sense of the term, empathy refers to the basic maneuver of feeling one's way into the state of another by "identifying" with the other, or imaginatively putting oneself in the other's shoes. One tries not simply to depict or represent another's state but actually to experience or share it. Of course, mental life may feature empathic acts or events that are not deployed for mindreading. But the term simulation theory primarily refers to an account of mindreading that accords empathy, or simulation, a core role in how we understand, or mindread, the states of others.

Historical Antecedents of the Debate

A historical precursor of the ST/TT debate was the debate between positivists and hermeneutic theorists about the proper methodology for the human sciences. Whereas positivists argued for a single, uniform methodology for the human and natural sciences, early-twentieth-century philosophers like Wilhelm Dilthey and R. G. Collingwood advocated an autonomous method for the social sciences, called Verstehen, in which the scientist or historian projects herself into the subjective perspective or viewpoint of the actors being studied. Contemporary ST, however, makes no pronouncements about the proper methodology of social science; it only concerns the prescientific practice of understanding others. The kernel of this idea has additional historical antecedents. Adam Smith, Immanuel Kant, Arthur Schopenhauer, Friedrich Nietzsche, and W. V. Quine all wrote of the mind's empathic or projective propensities. Kant wrote:

[I]f I wish to represent to myself a thinking being, I must put myself in his place, and thus substitute, as it were, my own subject for the object I am seeking to consider. (Kant 1787/1961, p. 336)

Nietzsche anticipated modern psychology in the following passage:

To understand another person, that is to imitate his feelings in ourselves, we produce the feeling in ourselves after the effects it exerts and displays on the other person by imitating with our own body the expression of his eyes, his voice, his bearing. Then a similar feeling arises in us in consequence of an ancient association between movement and sensation. (Nietzsche 1881/1977, pp. 156–157)

Quine (1960) briefly endorsed an empathy account of indirect discourse and propositional attitude ascription. He described attitude ascriptions as an "essentially dramatic idiom" rather than a scientific procedure, and this encouraged him to see the attitudes as disreputable posits that deserve to be eliminated from our ontology.

The Beginning of the Debate

It was in the 1980s that three philosophers, Robert Gordon, Jane Heal, and Alvin Goldman, first offered sustained defenses of ST as an account of the method of mindreading. They were reacting partly to functionalist ideas in philosophy of mind and partly to emerging research in psychology. According to analytic functionalism, our understanding of mental states is based on commonsense causal principles that link states of the external world with mental states and mental states with one another. For example, if a person is looking attentively at a round object in ordinary light, he is caused to have a visual experience as of something round. If he is very thirsty and believes there is something potable in a nearby refrigerator, he will decide to walk toward that refrigerator. By using causal platitudes of this sort, attributors can infer mental states from the conditions of an agent's environment or from his previous mental states. One might start with beliefs about a target's initial mental states plus beliefs in certain causal psychological principles, feed this information into one's theoretical reasoning system, and let the system infer the "final" states that the target went into or will go into. This TT approach assumes that attribution relies on information about causal principles, so TT is said to be a "knowledge-rich" approach.

Simulationists typically doubt that ordinary adults and children have as much information, or the kinds of information, that TT posits, even at a tacit or unconscious level. ST offers a different possibility, in which attributors are "knowledge-poor" but engage a special mental skill: the construction of pretend states. To predict an upcoming decision of yours, I can pretend to have your goals and beliefs, feed these pretend goals and beliefs into my own decision-making system, let the system make a pretend decision, and finally predict that you will make this decision. This procedure differs in three respects from the theorizing procedure. First, it involves no reliance on any belief by the attributor in a folk-psychological causal principle. Second, it involves the creation and deployment of pretend, or make-believe, states. Third, it utilizes a mental system, here a decision-making system, for a non-standard purpose, for the purpose of mindreading rather than action. It takes the decision-making system "off-line."
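The structural difference can be made vivid with a toy sketch (here in Python, purely for exposition; the decide function and the representations of goals and beliefs are invented illustrations, not drawn from any simulationist's proposal):

```python
# Illustrative sketch only: the state representations and the decide()
# function are invented stand-ins for the attributor's own cognitive
# machinery, not a model proposed by any simulation theorist.

def decide(goals, beliefs):
    """The attributor's own decision-making system (a toy rule):
    pick the first action whose believed outcome satisfies the most
    highly ranked goal."""
    for goal in goals:
        for action, outcome in beliefs.items():
            if outcome == goal:
                return action
    return "do nothing"

def predict_by_simulation(targets_goals, targets_beliefs):
    # Step 1: construct pretend goals and beliefs matching the target's.
    pretend_goals, pretend_beliefs = targets_goals, targets_beliefs
    # Step 2: feed them into one's OWN decision system, taken "off-line";
    #         the resulting pretend decision is not acted on.
    pretend_decision = decide(pretend_goals, pretend_beliefs)
    # Step 3: attribute the resulting decision to the target.
    return pretend_decision

# Example: the target is thirsty and believes the refrigerator holds
# something potable.
print(predict_by_simulation(
    targets_goals=["thirst quenched"],
    targets_beliefs={"walk to the refrigerator": "thirst quenched"},
))  # -> "walk to the refrigerator"
```

The point of the sketch is architectural rather than computational: no store of folk-psychological generalizations is consulted; the prediction falls out of running the attributor's own decision mechanism on pretend inputs and withholding its output from action.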

Daniel Dennett (1987) challenged ST by claiming that simulation collapses into a form of theorizing. If I make believe I am a suspension bridge and wonder what I will do when the wind blows, what comes to mind depends on the sophistication of my knowledge of the physics of suspension bridges. Why shouldn't make-believe mindreading equally depend on theoretical knowledge? Goldman (1989) parried this challenge by distinguishing two kinds of simulation: theory-driven and process-driven simulation. A successful simulation need not be theory driven. If both the initial states of the simulating system and the process driving the simulation are the same as, or relevantly similar to, those of the target system, the simulating system's output should resemble the target's output, enabling the prediction to be accurate.

Heal (1994) also worried about a threat of ST collapsing into TT. If ST holds that one mechanism is used to simulate another mechanism of the same kind, she claimed, then the first mechanism embodies tacit knowledge of theoretical principles of how that type of mechanism operates. Since defenders of TT usually say that folk-psychological theory is known only tacitly, this cognitive science brand of simulation would collapse into a form of TT. This led Heal to reject such empirical claims about sub-personal processes. Instead, she proposed (1998) that ST is in some sense an a priori truth. When we think about another's thoughts, we "co-cognize" with our target; that is, we use contentful states whose contents match those of the target. Heal has claimed that such co-cognition is simulation, and is an a priori truth about how we mindread.

Martin Davies and Tony Stone (2001) criticize Heal's proposed criterion of tacit-knowledge possession. Yet another way to rebut the threat of collapse is to question the assumption that the integrity or robustness of simulation can be sustained only if it is not underpinned by theorizing. On that assumption, simulation would be a sham if it were implemented by theorizing, so ST would have to deny that theorizing is involved at any level. Against this, Goldman (2006) argues that theorizing at an implementation level need not conflict with higher-level simulation, and the latter is what ST insists upon.

Transference

According to the standard account, simulational mindreading proceeds by running a simulation that produces an output state (e.g., a decision) and "transferring" that output state to the target. "Transference" consists of two steps: classifying the output state as falling under a certain concept and inferring that the target's state also falls under that concept. Gordon (1995) worries about these putative steps. Classifying one's output state under a mental concept ostensibly requires introspection, a process of which Gordon is leery. Inferring a similarity between one's own state and a target's state sounds like an analogical argument concerning other minds, which Ludwig Wittgenstein and others have criticized. Also, if the analogy rests on theorizing, this undercuts the autonomy of simulation. Given these worrisome features of the standard account, Gordon proposes a construal of simulation without introspection or inference "from me to you."

Gordon replaces transference with "transformation." When I simulate a target, I "recenter" my egocentric map on the target. In my imagination, the target becomes the referent of the first-person pronoun "I" and his time of action, or decision, becomes the referent of "now." The transformation Gordon discusses is modeled on the transformation of an actor into a character he is playing. Once a personal transformation is accomplished, there is no need to "transfer" my state to him or to infer that his state is similar to mine. But there are many puzzling features of Gordon's proposal. He describes the content of what is imagined, but not what literally takes place. Mindreaders are not literally transformed into their targets (in the way princes are transformed into frogs) and do not literally lose their identity. We still need an account of a mindreader's psychological activities. Unless he identifies the type of his output state and imputes it to the target, how does the activity qualify as mindreading, that is, as believing of the target that she is in state M? Merely being oneself in state M, in imagination, does not constitute the mindreading of another person. One must impute a state to the target, and the state selected for imputation is the output state of the simulation, which must be detected and classified. First-person mental-state detection thereby becomes an important item on the ST agenda, and one on which simulationists differ: some, such as Harris (1992) and Goldman (2006), favor introspection, while others, such as Gordon (1995), resist it.

Different theorists favor stronger or weaker versions of ST, in which "information" plays no role versus a moderate role. Gordon favors a very pure version of ST, whereas Goldman favors more of a hybrid approach, in which some acts of mindreading may proceed wholly by theorizing, and some acts may have elements of both simulation and theorizing. For example, a decision predictor might use a step of simulation to determine what he himself would do, but then correct that preliminary prediction by adding background information about differences between the target and himself. Some theory theorists have also moved toward a hybrid approach by acknowledging that certain types of mindreading tasks are most naturally executed by a simulation-like procedure (Nichols and Stich 2003).

What exactly does ST mean by the pivotal notion of a "pretend state"? Mental pretense may not be essential for simulational mindreading, for example, for the reading of people's emotional states as discussed at the end of this article. But most formulations of ST appeal to mental pretense. Mental pretense is often linked to imagining, but imagining comes in different varieties. One can imagine that something is the case, for example, that Mars is twice as large as it actually is, without putting oneself in another person's shoes. Goldman (2006) proposes a distinction between two types of imagining: suppositional-imagining and enactive-imagining.

Suppositional imagining is what one does when one supposes, assumes, or hypothesizes something to be the case. It is a purely intellectual posture, though its precise connection to other intellectual attitudes, like belief, is a delicate matter. Enactive imagining is not purely intellectual or doxastic. It is an attempt to produce in oneself a mental state normally produced by other means, where the mental state might be perceptual, emotional, or purely attitudinal. You can enactively imagine seeing something (you can visualize it), or you can enactively imagine wanting or dreading something. For purposes of ST, the relevant notion of imagination is enactive imagination. To pretend to be in mental state M is to enactively imagine being in M. If the pretense is undertaken for mindreading, one would imagine being in M and "mark" the imaginative state as belonging to the target of the mindreading exercise.

Can a state produced by enactive imagining really resemble its counterpart state, the state it is meant to enact? And what are the respects of resemblance? Gregory Currie (1995) advanced the thesis that visual imagery is the simulation of vision, and Currie and Ian Ravenscroft extended this proposal to motor imagery. They present evidence from cognitive science and cognitive neuroscience to support these ideas, highlighting evidence of behavioral and neural similarity (Currie and Ravenscroft 2002). Successful simulational mindreading would seem to depend on significant similarity between imagination-produced states and their counterparts. However, perfect similarity, including phenomenological similarity, is not required (Goldman 2006).

Psychological Evidence

Gordon's first paper on ST (1986) appealed to research in developmental psychology to support it. Psychologists Heinz Wimmer and Josef Perner (1983) studied children who watched a puppet show in which a character is outside playing while his chocolate gets moved from the place he put it to another place in the kitchen. Older children, like adults, attribute to the character a false belief about the chocolate's location; three-year-olds, by contrast, do not ascribe a false belief. Another experiment showed that older autistic children resemble three-year-olds in making mistakes on this false-belief task (Baron-Cohen, Leslie, and Frith 1985). This was interesting because autistic children are known for a striking deficit in their capacity for pretend play. Gordon suggested that the capacity for pretense must be critical for adequate mindreading, just as ST proposes. Most developmental psychologists offered a different account of the phenomena, postulating a theorizing deficit as the source of the poor performances by both three-year-olds and autistic children. It was argued that three-year-olds simply do not possess the full adult concept of belief as a state that can be false, and this conceptual "deficit" is responsible for their poor false-belief task performance.

The conceptual-deficit account, however, appears to have been premature. First, when experimental tasks were simplified, three-year-olds and even younger children sometimes passed false-belief tests. Second, researchers found plausible alternative explanations of poor performance by three-year-olds, explanations in terms of memory or executive-control limitations rather than conceptual deficiencies. Thus the idea of conceptual change, assumed to be theoretical change, was undercut. This had been a principal form of evidence for TT and, implicitly, against ST. It has proved difficult to design more direct tests between TT and ST.

Endowment Effect

Shaun Nichols, Stephen Stich, and Alan Leslie (1995) cite empirical tests that allegedly disconfirm ST. One such test involves the "endowment effect": when people are given an item, for example a coffee mug, they come to value it more highly than people who do not possess one. Owners hold out for significantly more money to sell the mug back than nonowners demand when offered a choice between receiving a mug and receiving a sum of money. When asked beforehand to predict what they would do in such a situation, subjects underpredict the price that they themselves subsequently set. Nichols, Stich, and Leslie argue that TT readily explains this underprediction: people simply have a false theory about their own valuations. But ST, they argue, cannot explain it. If simulation is used to predict a choice, there are only two ways it could go wrong: the predictor's decision-making system might operate differently from that of the target, or the wrong inputs might be fed into the decision-making system. The first explanation does not work here, because predictor and target share the very same system. The second explanation also seems implausible, they contend, because the situation is so transparent. This last point, however, runs contrary to the evidence. Research by George Loewenstein and other investigators reveals many cases in which self- and other-predictions go wrong because people are unable to project themselves accurately into the shoes of others, or into their own future shoes. The actual current situation constrains their imaginative construction of future or hypothetical states, which can obviously derail a simulation routine (Van Boven, Dunning, and Loewenstein 2000). So ST has clear resources for explaining underpredictions in endowment-effect cases.
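On this diagnosis the failure lies in the inputs, not in the shared mechanism. A toy numerical sketch (the valuation rule and the figures below are invented for illustration, not taken from the experimental literature) shows how a pretend input anchored on one's current, ownership-free state yields exactly the observed underprediction:

```python
# Toy illustration only: the valuation rule and the numbers are invented
# to show how contaminated pretend inputs, rather than a different
# mechanism, can produce the underprediction.

def selling_price(attachment):
    """Stand-in for the shared decision process: the stronger the felt
    ownership of the mug, the more money the owner holds out for."""
    return 3.0 + 4.0 * attachment

# The target (one's future, mug-owning self) actually feels strong ownership.
actual_price = selling_price(attachment=1.0)      # 7.0

# The predictor runs the very same process, so the mechanism is not at
# fault; but, not yet owning a mug, she anchors the pretend input on her
# current, ownership-free state.
predicted_price = selling_price(attachment=0.2)   # 3.8

assert predicted_price < actual_price  # the characteristic underprediction
```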

Emotion Recognition

One of the best empirical cases for simulation is found in a domain little studied in the first two decades of empirical research on mindreading. This is the domain of detecting emotions by facial expressions. Goldman and Sripada (2005; also Goldman, 2006) survey findings pertaining to three types of emotions: fear, disgust, and anger. For each of these emotions, brain-damaged patients who are deficient in experiencing a given emotion are also selectively impaired in recognizing the same emotion in others' faces. Their mindreading deficit is specific to the emotion they are impaired in experiencing. ST provides a natural explanation of these "paired deficits": normal recognition proceeds by using the same neural substrate that subserves a tokening of that emotion, but if the substrate is damaged, mindreading should be impaired. TT, by contrast, has no explanation that is not ad hoc. TT is particularly unpromising because the impaired subjects retain conceptual ("theoretical") understanding of the relevant emotions.

By what simulational process could normal face-based emotion recognition take place? One possibility involves facial mimicry followed by feedback that leads to (subthreshold) experience of the observed emotion. In other words, normal people undergo traces of the same emotion as the person they observe. This resembles Nietzsche's idea, now supported by research showing that even unconscious perception of faces produces covert, automatic imitation of facial musculature in the observer, and these mimicked expressions can produce the same emotions in the self.

Another possible explanation of emotion recognition is unmediated mirroring, or resonance, in which the observer undergoes the same emotion experience as the observed person without activation of facial musculature. Such "mirror matching" has been identified for a variety of mental phenomena: the same experience that occurs in one person is also produced in someone who merely observes the first. It occurs for events ranging from action with the hands (Rizzolatti et al., 2001), to somatosensory experiences (Keysers et al., 2004), to pain (Singer et al., 2004). For example, if one observes somebody else acting, the same area of the premotor cortex is activated that controls that kind of action; if one observes somebody being touched on the leg, the same area of somatosensory cortex is activated that is activated in the normal experience of being touched on the leg; the same sort of matching applies to pain. This leads Vittorio Gallese (2003) to speak of a "shared manifold" of intersubjectivity, a possible basis for empathy and social cognition more generally. It is unclear whether mirror matching always yields recognition, or attribution, of the experience in question, so perhaps mindreading is not always implicated. But the basic occurrence of mental simulation, or mental mimicry, is strikingly instantiated.

See also Cognitive Science; Folk Psychology; Psychology.

Bibliography

Baron-Cohen, Simon, Alan Leslie, and Uta Frith. "Does the Autistic Child Have a 'Theory of Mind'?" Cognition 21 (1985): 37–46.

Carruthers, Peter, and Peter K. Smith, eds. Theories of Theories of Mind. New York: Cambridge University Press, 1996.

Currie, Gregory. "Visual Imagery as the Simulation of Vision." Mind and Language 10 (1995): 25–44.

Currie, Gregory, and Ian Ravenscroft. Recreative Minds: Imagination in Philosophy and Psychology. Oxford: Oxford University Press, 2002.

Davies, Martin, and Tony Stone. "Mental Simulation, Tacit Theory, and the Threat of Collapse." Philosophical Topics 29 (2001): 127–173.

Davies, Martin, and Tony Stone, eds. Folk Psychology. Oxford: Blackwell, 1995.

Davies, Martin, and Tony Stone, eds. Mental Simulation: Evaluations and Applications. Oxford: Blackwell, 1995.

Dennett, Daniel. "Making Sense of Ourselves." In his The Intentional Stance. Cambridge, MA: MIT Press, 1987.

Gallese, Vittorio. "The Manifold Nature of Interpersonal Relations: The Quest for a Common Mechanism." Philosophical Transactions of the Royal Society of London B 358 (2003): 517–528.

Goldman, Alvin. "Interpretation Psychologized." Mind and Language 4 (1989): 161–185.

Goldman, Alvin. Simulating Minds: The Philosophy, Psychology and Neuroscience of Mindreading. New York: Oxford University Press, 2006.

Goldman, Alvin, and Chandra Sripada. "Simulationist Models of Face-based Emotion Recognition." Cognition 94 (2005): 193–213.

Gopnik, Alison. "How We Know Our Minds: The Illusion of First-Person Knowledge of Intentionality." Behavioral and Brain Sciences 16 (1993): 1–14.

Gopnik, Alison, and Andrew N. Meltzoff. Words, Thoughts, and Theories. Cambridge, MA: MIT Press, 1997.

Gordon, Robert. "Folk Psychology as Simulation." Mind and Language 1 (1986): 158–171.

Gordon, Robert. "Simulation without Introspection or Inference from Me to You." In Mental Simulation, edited by Martin Davies and Tony Stone. Oxford: Blackwell, 1995.

Harris, Paul. "From Simulation to Folk Psychology: The Case for Development." Mind and Language 7 (1992): 120–144.

Heal, Jane. "Simulation vs. Theory Theory: What Is at Issue?" In Objectivity, Simulation and the Unity of Consciousness: Current Issues in the Philosophy of Mind, edited by Christopher Peacocke. Oxford: Oxford University Press, 1994.

Heal, Jane. "Co-cognition and Off-line Simulation: Two Ways of Understanding the Simulation Approach." Mind and Language 13 (1998): 477–498.

Kant, Immanuel. Critique of Pure Reason. Translated by Norman Kemp Smith. London: Macmillan, 1961.

Keysers, C., B. Wicker, V. Gazzola, J.-L. Anton, L. Fogassi, and V. Gallese. "A Touching Sight: SII/PV Activation during the Observation and Experience of Touch." Neuron 42 (2004): 335–346.

Nietzsche, Friedrich. "Daybreak" (1881). In A Nietzsche Reader. Translated by R. J. Hollingdale. Harmondsworth: Penguin, 1977.

Nichols, Shaun, and Stephen P. Stich. Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Oxford University Press, 2003.

Nichols, Shaun, Stephen Stich, and Alan Leslie. "Choice Effects and the Ineffectiveness of Simulation: Response to Kühberger et al." Mind and Language 10 (1995): 437–445.

Perner, Josef. Understanding the Representational Mind. Cambridge, MA: MIT Press, 1991.

Quine, Willard Van Orman. Word and Object. Cambridge, MA: Technology Press, 1960.

Rizzolatti, G., L. Fogassi, and V. Gallese. "Neurophysiological Mechanisms Underlying the Understanding and Imitation of Action." Nature Reviews Neuroscience 2 (2001): 661–670.

Singer, T., B. Seymour, J. O'Doherty, H. Kaube, R. J. Dolan, and C. D. Frith. "Empathy for Pain Involves the Affective but not Sensory Components of Pain." Science 303 (2004): 1157–1162.

Van Boven, L., D. Dunning, and G. Loewenstein. "Egocentric Empathy Gaps between Owners and Buyers: Misperceptions of the Endowment Effect." Journal of Personality and Social Psychology 79 (2000): 66–76.

Wimmer, Heinz, and Josef Perner. "Beliefs about Beliefs: Representation and Constraining Function of Wrong Beliefs in Young Children's Understanding of Deception." Cognition 13 (1983): 103–128.

Alvin I. Goldman (2005)