Cognitive Science


Cognitive science is the interdisciplinary study of mind, in which the concepts and methods of artificial intelligence (AI) are central (Boden forthcoming). The most prominent disciplines within the field are AI, artificial life (A-life), psychology, linguistics, computational neuroscience, and philosophy, especially the philosophy of mind and language. Cognitive anthropology is included too, though it often goes unnoticed under the label of evolutionary psychology.

The many relevant subfields include robotics, whether classical, situated, or evolutionary; studies of enactive vision, where the organism's own movements (of eyes and/or body) provide crucial information for acting in the world; the psychology of human-computer interaction, including various aspects of virtual reality such as avatars; and computational theories of literature, art, music, and scientific discovery. Nonhuman minds are studied by computational ethology and neuroethology, and by A-life.

Who Is a Cognitive Scientist?

Not everyone working in the key disciplines is a cognitive scientist. Only those taking a computational approach to questions about mind are considered cognitive scientists.

Some AI workers, for example, are not cognitive scientists because they have no theoretical interest in human thought. Their aim is to challenge their ingenuity as computer engineers by getting a program or robot to do a task that people either cannot do or do not want to do. If hunches, or experimental evidence, about human psychology can help them achieve that goal, that is fine. But if nonhuman tricks are available, such as looking ahead in a chess game to consider all the legal possibilities for several moves, they will use them. These computer scientists are engaged in technological AI, not psychological AI. Only the latter is a proper part of cognitive science.

Even someone who does have a professional interest in human minds need not be a cognitive scientist. For instance, many social psychologists study patterns of interpersonal behavior without asking about the information processes that underlie them and make them possible. Even some cognitive psychologists insist that they are not cognitive scientists, because they follow James Gibson's (1979) affordance theory of perception, which allows for information pickup but not for information processing. (Their self-description is based on an overly narrow view of what cognitive science covers: Gibsonian insights have become prominent in various areas of cognitive science, such as enactive vision.)

Similarly, many linguists (sociolinguists and historical philologists, for instance) are not primarily concerned with just how language is generated and/or understood. But even those who do focus on these computational matters do not all agree. Chomskian linguistics, for example, was crucial in the rise of cognitive science and has deeply influenced the philosophy of mind; but non-Chomskian accounts of syntax have been developed since. In addition, theories of pragmatics have become at least as prominent as theories of syntax, and pragmatics is an aspect of situatedness, a concept of growing importance within cognitive science as a whole. As for anthropology, most anthropologists see their field as a hermeneutic enterprise, not a scientific one. They reject psychological explanations of culture in general, and computational accounts in particular.

Cognitive Science Is about More than Cognition

It includes cognitive psychology, of course: the study of language, memory, perception, problem solving, and creative thinking. What is more, most research has focused on individual human adult cognition. However, other aspects of mind are studied too: motivation, emotion, choice (Dennett 1984), development, psychopathology, interpersonal phenomena, motor control, and animal psychology.

Consider emotion, for example. The roles of emotion in problem solving, attitude formation, and neurosis were topics of research in AI and computational psychology in the early 1960s. But the problems were too difficult, and were largely dropped. Interest revived later, partly because of neuroscientific work on emotional intelligence and partly because of advances in the computational theory of scheduling in multigoal systems (Sloman 1993). Interdisciplinary conferences on the psychology, neuroscience, computer modeling, and philosophy of emotion blossomed at the turn of the century, when the topic became a prominent aspect of research.

Whether the focus of attention is on development or psychopathology, emotion or motor control, the prime interest for cognitive science is in the abstractly defined computational functions that generate the behavior concerned. But the neural mechanisms that implement them are often studied too. Despite the functionalist doctrine of multiple realizability, many cognitive scientists want to know how psychological functions are actually implemented in the brain. When functionalism began in the 1960s, little attention was paid to the nervous system by philosophers or AI scientists. Since the 1980s, that has been less true.

Indeed, neuroscience as such has become increasingly concerned with computational questions. On the one hand, there are theories (and computer models) of specific neural circuits doing closely specified things: for instance, cells in the retina and/or visual cortex that compute particular visual features, such as light gradients or surface textures, or cells in the female cricket's brain that enable her to discriminate the song of male crickets of the same species and to move accordingly. On the other hand, there are broad-brush theories about the computational functions carried out by large areas of the brain, where the focus is less on specific individual cells than on general neuroanatomy: the different cell types, locations, and connections of the neurons.
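
To make the single-cell case concrete, here is a minimal, hypothetical sketch of the kind of computation such models ascribe to an individual feature-detecting cell: a unit that responds to a local light gradient in a small patch of input. The weights, the patch size, and the function name are illustrative assumptions, not taken from any particular model in the literature.

```python
# Hypothetical sketch of a single "cell" that computes a local light gradient,
# in the spirit of computational models of retinal/cortical feature detectors.
# Weights and data are invented for illustration.

def gradient_detector(patch):
    """Respond to a dark-to-light luminance gradient across a 1-D patch."""
    n = len(patch)
    # Antisymmetric weights ramping from -1 (left) to +1 (right).
    weights = [(2 * i / (n - 1)) - 1 for i in range(n)]
    activation = sum(w * x for w, x in zip(weights, patch))
    return max(0.0, activation)   # simple threshold: fire only for positive gradients

uniform = [0.5, 0.5, 0.5, 0.5, 0.5]   # no gradient
edge    = [0.1, 0.2, 0.5, 0.8, 0.9]   # dark-to-light edge

print(gradient_detector(uniform))   # 0.0: no response
print(gradient_detector(edge))      # positive response to the edge
```

The point of such toy units is only to show what "computing a visual feature" means at the level of a single cell; real models add many layers of biological detail.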

Developmental Issues

Most cognitive scientists study already established phenomena, although many include learning in their subject matter. Some, however, study (and model) mental development. And some do this because they believe that adult psychology cannot be properly understood without knowing how it developed. In short, they see the mind as an epigenetic system, deeply informed by its developmental history.

Epigenesis was stressed long ago by Conrad Waddington in biology and Jean Piaget in psychology. It is self-organized development, grounded in innate predispositions in continual dialectic interaction with the (internal and external) environment. For example, there are inborn dispositions to attend to broadly facelike stimuli, or to human speech-sounds. Once the attention is caught, learning can help develop the infant's pattern recognition and discriminatory powers. In some cases, such as face recognition, the neural mechanisms relevant at different stages have been largely identified.

An epigenetic view is not strictly environmentalist, nor strictly nativist either. Rather, it stresses the dialectical interplay between the two. Late twentieth-century work in developmental neuroscience and developmental psychology has therefore led to a radical reconceptualization of nativism (Elman et al. 1996). Some philosophers of biology have defined new accounts of self-organization and dynamical development accordingly (Oyama 1985).

What It Means to Say That Cognitive Science Is Computational

Cognitive science employs computational models of mind in two senses.

First, the substantive concepts in its theories are computational. The mind is seen as some sort of computational system (just what sort is hotly disputed), and mental structure and mental processes are described accordingly (Haugeland 1997). So whereas many psychologists (and other scientists) use computers to express/clarify their theories, and especially to manipulate their experimental data, only cognitive scientists import computational ideas into their theories.

Second, computer modeling is often, though not always, used to clarify and test computational theories of mind: some work in cognitive science (in AI and psychology, not just in philosophy) employs computational concepts and insights, but with insufficient detail to allow programs to be written. When programming is possible, it provides several advantages. Even program failures can be scientifically illuminating, pointing out lacunae or mistakes in the theory, or fundamental limitations of the methodology being used. However, successes may be even more instructive. For if a program (or a robot) produces a given performance, one knows that the processes it embodies suffice to do so.

Whether real minds (or brains) use similar processes to produce equivalent performance is another matter: just because a program does something in a certain way, it does not follow that people do too. This question can be answered only by empirical evidence. Sometimes, a programmed theory models not only psychological phenomena at various levels, but also the details of their underlying neural base. In such cases, validation requires both psychological and neuroscientific evidence.

The references to computational ideas in the previous two paragraphs cover concepts rooted in two different intellectual traditions, namely, cybernetics and Turing computation. These were closely linked in the years when cognitive science began.

A seminal paper by Warren McCulloch and Walter Pitts (1943) prompted early work both in neural nets and in what is sometimes called GOFAI, or "Good Old-Fashioned AI." (It also influenced the design of the von Neumann computer.) McCulloch and Pitts integrated three key ideas of the early twentieth century: propositional logic, neuron theory, and Turing computation. They proved that anything expressible in the propositional calculus is computable, and can be mapped onto some specifiable neural net. In addition, they suggested that a fourth key idea (feedback, the core concept of cybernetics) could be defined in terms of these networks, in which case purpose and learning could be embodied in them too.
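
The logical half of that result is easy to illustrate. The sketch below is a purely illustrative reconstruction, not McCulloch and Pitts's own formalism: threshold units with binary inputs, fixed weights, and a firing threshold compute elementary propositional functions, and wiring such units together yields any truth function.

```python
# Illustrative sketch of McCulloch-Pitts-style threshold units computing
# propositional functions. Weights and thresholds are chosen for illustration.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# Any propositional formula can then be built by wiring such units together,
# e.g. a NAND unit, from which every other truth function can be composed:
def NAND(a, b): return NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b))
```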

A few years later they published another paper, in which they argued that probabilistic networks are more like brains and can do things naturally that logic-based systems cannot (Pitts and McCulloch 1947). They still insisted, nevertheless, that their original, logical, account was correct in principle.

In short, the concept of computational systems is normally used within the field to cover both GOFAI and connectionism. (Some philosophers, however, restrict it to the former.) Cognitive science includes both.

Sometimes, the reason why a computational theory is not actually modeled is that suitable computer technology does not yet exist. By the same token, many advances in cognitive science have depended partly upon advances in computing technology. These include both increases in size (computing power) and new types of virtual machine, embodying forms of computation that were not possible previously.

In some cases, the core ideas had already been defined long before the technology was available to test/explore them. Parallel distributed processing, for instance, was envisaged over twenty years before computers became powerful enough for it to be implemented in interesting ways. Similarly form-generating interactive diffusion equations and cellular automata were both first defined in the 1950s, but not extensively studied until the advent of large machines and computer graphics in the late 1980s. And genetic algorithms, glimpsed in the 1950s and defined in the late 1960s, were first implemented in the 1980s. Once the technology was available, further questions arose that had not been posed before.
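
As an illustration of how modest the core idea of a genetic algorithm is, relative to the computing power needed to explore it seriously, here is a minimal sketch of the standard loop of selection, crossover, and mutation on a toy "maximize the number of 1s" problem. The problem, parameters, and function names are invented for the example and are not drawn from the article.

```python
# Minimal genetic-algorithm sketch (toy "max-ones" problem): selection,
# crossover, and mutation over bit strings. All parameters are illustrative.
import random

random.seed(0)
LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(bits):
    return sum(bits)                      # count of 1s: the toy objective

def crossover(a, b):
    cut = random.randrange(1, LENGTH)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # keep the fitter half as parents, breed a new generation from them
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]

print(max(fitness(p) for p in population))   # typically close to LENGTH
```

Even this toy version only becomes informative when run many times over large populations, which is exactly the kind of exploration that had to wait for cheap computing power.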

Some Philosophical Problems

Many philosophical disputes arise within cognitive science. One dispute concerns the relative merits of the two AI approaches mentioned above: classical (symbolic) AI and connectionism, or neural networks. The latter is broadly inspired by the basic structure of the brain. (Some recent work in artificial neural networks tries to take more account of the subtleties of real neurons; even so, these models are hugely oversimplified in comparison with the real thing.) There are several types of neural networks, but the one most widely used within cognitive science (and the one of greatest interest to philosophers) is parallel distributed processing, or PDP.

Some researchers champion only one of these AI approaches, whereas others admit both because of their complementary strengths and weaknesses. Symbolic AI, or GOFAI, is better for modeling behaviors that involve hierarchical structure, advance planning, deliberation, and/or strict sequential order. The conscious, deliberative aspects of the mind are best suited to this approach. Connectionism, by contrast, is better for modeling the tacit learning and knowledge involved in pattern recognition, including the fuzzy family resemblances between instances of one and the same concept.
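
The contrast can be made concrete with a toy example. The sketch below shows a single perceptron-style unit learning a fuzzy, family-resemblance category from labeled examples, rather than from an explicit symbolic definition of the kind a GOFAI program would use. The feature vectors, labels, and training parameters are invented for illustration and stand in for no particular psychological data.

```python
# Illustrative sketch: a perceptron-style unit learning a "family resemblance"
# category from examples instead of an explicit symbolic definition.

def train(examples, epochs=20, rate=0.1):
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else 0
            error = label - predicted
            w = [wi + rate * error * xi for wi, xi in zip(w, features)]
            b += rate * error
    return w, b

# Each member has some, but not necessarily all, of the category's typical features.
examples = [
    ([1, 1, 0, 1], 1), ([1, 0, 1, 1], 1), ([0, 1, 1, 1], 1),   # members
    ([0, 0, 1, 0], 0), ([1, 0, 0, 0], 0), ([0, 0, 0, 0], 0),   # non-members
]
w, b = train(examples)

novel = [0, 1, 0, 1]   # an unseen instance sharing several typical features
print(1 if sum(wi * xi for wi, xi in zip(w, novel)) + b > 0 else 0)   # 1: generalizes
```

The knowledge the trained unit has is tacit: it sits in the weights, not in any stated rule, which is why connectionist models suit pattern recognition better than deliberate, sequential reasoning.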

It does not follow that all unconscious mental processes are best modeled by PDP systems. Some psychoneural theories of action errors, including various clinical syndromes, employ hybrid (mixed) models in which the hierarchical aspects represent both conscious and unconscious processing.

internal representation

Another debate concerns the nature and importance of various kinds of internal representation. Connectionist representations are different from GOFAI ones, and several philosophers have argued that they are closer to the neural representations that embody concepts (Churchland 1989; Clark 1989, 1993; Cussins 1990). Computational neuroscience has described further types of representation. One example is emulator systems, which are neural mechanisms whose physical dynamics mimic the temporal changes being represented. Another, based on the anatomy of the cerebellum, is a way of representing motor behavior that is based neither on logic (GOFAI) nor on statistics (PDP), but on non-Euclidean tensor geometry.

Some philosophers follow the AI community and/or the neuroscientists, in accepting that representations may take many different guises, depending on the role they have evolved to play. Others, however, argue that only formal-symbolic structures, expressed in a language of thought, are properly termed representations, and that only these can generate human conceptual/linguistic thought (Fodor 2000).

nature of computation

The nature of computation is a third topic of controversy (Scheutz 2002). Most philosophers define it as Alan Turing did in the 1930s, and his is still the only really clear definition. However, practicing AI scientists think of computation in a number of different ways, based on virtual machines whose properties are different from those of a Turing machine. Moreover, some people are trying to go beyond Turing computation by defining new forms of computers (hypercomputers), some, but not all, of which involve quantum computing. Some of these may turn out to be relevant to human brains, but others will not.
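
For readers who want the Turing-style notion pinned down, the following is a minimal, illustrative sketch of a Turing machine: a finite control, an unbounded tape, and a transition table. The particular machine (one that flips every bit of its input and halts) and all the names in the code are assumptions made for the example, not anything taken from the article.

```python
# Minimal Turing-machine sketch: finite control, unbounded tape, transition table.
# The toy machine below flips every bit of its input and halts.

def run_turing_machine(tape, rules, state="flip", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("10110", rules))   # 01001_ : each bit flipped, then halt
```

Anything computable in Turing's sense can be carried out by some such table-driven machine; the philosophical dispute is over whether that notion exhausts what brains, or hypercomputers, do.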

meaning in the real world

A fourth area of philosophical discussion focuses on whether, and if so how, meaning (intentionality) can be grounded in the real world, and whether it can properly be attributed to programs and/or robots. Evolutionary theories of intentionality rule out GOFAI programs (as do many philosophers), but arguably allow meaning to be ascribed to some evolved robots. The grounding problem, on this view, is solved by the way in which the relevant mechanism has evolved in situated, embodied systems.

Empirical work that is closely related to the problem of intentionality includes research on theory of mind. Very young children are unable to grasp that each person is an agent with their own set of beliefs and interests, which may differ from those of the child. So although the child realizes that adults (and even other children) know many things that they do not, the child does not appreciate that someone else may believe something to be true that the child knows to be false. (This is why infants do not lie: they cannot conceive of doing so.) Normally, theory of mind develops spontaneously at around ages four or five, although in autistic children it apparently does not. In other words, inbuilt predispositions have evolved that lead the young child first to engage with other humans (maintaining eye contact, pointing to direct attention, turn-taking in communication, etc.), and eventually to attribute intentional states to them. Philosophers have asked (for instance) whether they do this by theorizing about other people's minds or by simulating, or empathizing with, them (Davies and Stone 1995).

consciousness in computational terms

A fifth philosophical puzzle concerns whether consciousness could be explained in computational terms, or in any other scientific, naturalistic manner (Heil 2004, Newell 1980, Searle 1993). Research in various disciplines within cognitive science has shown that there is no such thing as the problem of consciousness; rather, there are many problems of consciousness, because the term is used to make many different distinctions. Some of these are much better understood than they were twenty years ago, thanks to computational work in AI, psychology, and neuroscience. Reflective self-consciousness, for example, and the bizarre dissociations of consciousness typical of multiple personality, are intelligible in terms of recursive processing, guiding procedures, and access limitations within complex hierarchical structures for perception, memory, and action.

Considerable controversy, however, still attends the problem of qualia. Some cognitive scientists argue that qualia can be analyzed in terms of complex dispositions for making discriminatory computations (Dennett 1991). Others see them as aspects of an irreducible informational feature of the universe, applying not only to human brains but to atoms as well (Chalmers 1996). Still others make further suggestions, including several based on quantum physics. In short, there are many theories of qualia, and no agreement about what a successful theory might look like.

opposition to orthodox cognitive science

A sixth controversy (or rather, batch of controversies) arises from recent work that opposes orthodox (neo-Cartesian) cognitive science (Cliff, Harvey, and Husbands 1993; Port and van Gelder 1995; Wheeler 2005). This involves both empirical theory/modeling and philosophical discussion. In general, it draws on the traditions of phenomenological philosophy and/or autopoietic biology, rather than Cartesianism. It rejects both symbolic and connectionist AI, and the concept of representation. It highlights embodied systems (not abstract simulations), embedded in their environment and responding directly to it. Examples include situated robotics in AI, dynamical systems theory, ecological psychology, and A-life studies of evolution and coevolution.

Philosophies inspired by these empirical researches include the theory of extended mind (Clark 1997). This starts from the position that minds must necessarily be embodied and that memory storage lies largely outside the skull (ideas familiar within phenomenology and GOFAI, respectively) and goes on to argue that an individual person's mind is extended over the surrounding cultural artifacts: language, customs, and material objects, from palaces to pencils. The claim is that mind is not merely deeply influenced by these things, but is largely constituted by them.

Philosophical questions associated with A-life include whether evolution is a necessary characteristic of life, and whether the concept of autopoiesis captures the essence of life (Bedau 1996; Maturana and Varela 1987). If living things are defined as autopoietic systems, whose physical unity, boundaries, and self-maintenance are attained by self-organized metabolic processes, then questions about the origins of life take on a different color, as do questions about the possibility of strong A-life (life in computer memory), so called by analogy to strong AI.

Philosophers of A-life consider not only the nature of life as such, but how and why it is related to mind. Must all minds be evolved, for example? Autopoietic theorists define all life as involving cognition, while insisting that only linguistic life (i.e., adult humans) involves representations. But questions remain about whether, and if so why, life really is essential for mind. By the same token, questions remain about whether the study of A-life is essentially unrelated to cognitive science or fundamental to it.

culture and cognitive science

Finally, culture-directed research in cognitive science raises philosophical questions too. One concerns the nature of group mind, or as it is more commonly called, distributed cognition (Hutchins 1995). Can one identify aspects of cognition that cannot be attributed to any single individual, but only to a team of enculturated persons acting in concert, and if so, can one model such phenomena in computers? Two more such questions concern the evolution of information-processing mechanisms that underlie important cultural phenomena (religion or aesthetic appreciation, for example) and the evolution of culture as such.

See also Computationalism; Neuroscience; Psychology.

Bibliography

Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge, 2004.

Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Oxford: Oxford University Press, forthcoming.

Bedau, Mark A. "The Nature of Life." In The Philosophy of Artificial Life, edited by Margaret A. Boden. Oxford: Oxford University Press, 1996.

Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996.

Churchland, Paul M. A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, MA: MIT Press, 1989.

Clark, Andy J. Associative Engines: Connectionism, Concepts, and Representational Change. Cambridge, MA: MIT Press, 1993.

Clark, Andy J. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press, 1997.

Clark, Andy J. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, MA: MIT Press, 1989.

Cliff, David; Harvey, Inman; and Husbands, Philip. "Explorations in Evolutionary Robotics." Adaptive Behavior 2 (1993): 73–110.

Cussins, Adrian. "The Connectionist Construction of Concepts." In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden. Oxford: Oxford University Press, 1990.

Davies, Martin, and Tony Stone, eds. Folk Psychology: The Theory of Mind Debate. Oxford: Blackwell, 1995.

Dennett, Daniel C. Consciousness Explained. Boston: Little Brown, 1991.

Dennett, Daniel C. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, MA: MIT Press, 1984.

Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT Press, 1987.

Elman, Jeffrey L., Elizabeth A. Bates, Mark H. Johnson, et al. Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press, 1996.

Fodor, Jerry A. The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press, 2000.

Gibson, James J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979.

Haugeland, John, ed. Mind Design II: Philosophy, Psychology, Artificial Intelligence. 2nd ed. Cambridge, MA: MIT Press, 1997.

Heil, John, ed. Philosophy of Mind: A Guide and Anthology. Oxford: Oxford University Press, 2004. See especially 3–6 and 9–10.

Hutchins, Edwin L. Cognition in the Wild. Cambridge, MA: MIT Press, 1995.

McCulloch, Warren S., and Walter H. Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden. Oxford: Oxford University Press, 1990. Essay first published in 1943.

Maturana, Humberto R., and Francisco J. Varela. The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: New Science Library, 1987.

Newell, Allen. "Physical Symbol Systems." Cognitive Science 4 (1980): 135–183.

Oyama, Susan. The Ontogeny of Information: Developmental Systems and Evolution. Cambridge, U.K.: Cambridge University Press, 1985.

Pitts, Walter H., and Warren S. McCulloch. "How We Know Universals: The Perception of Auditory and Visual Forms." In Embodiments of Mind, edited by Seymour Papert. Cambridge, MA: MIT Press, 1965. Essay first published in 1947.

Port, Robert F., and Timothy J. van Gelder, eds. Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press, 1995.

Scheutz, Matthias, ed. Computationalism: New Directions. Cambridge, MA: MIT Press, 2002.

Searle, John R. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1993.

Sloman, Aaron. "The Mind as a Control System." In Philosophy and the Cognitive Sciences, edited by Christopher Hookway and Donald Peterson. Cambridge, U.K.: Cambridge University Press, 1993.

Wheeler, Michael W. Reconstructing the Cognitive World: The Next Step. Cambridge, MA: MIT Press, 2005.

Margaret A. Boden (1996, 2005)

Cognitive Theory


What do people perceive, think, and know? How do people perceive, think, and know? These questions are of interest to both anthropology and psychology, but whereas psychology emphasizes the second question, the cultural sciences are primarily concerned with the first question. [See Thinking.]

Cultural anthropology in particular has undertaken to describe and catalogue the cognitions typical of the various societies that make up mankind. An ethnography (the description of a culture) is very largely an account of what the people in a particular society perceive, think, and know. Thus, the large and growing archive of descriptive ethnographic materials is a repository of the information available about man’s cognitions regarding the principal concerns of his existence: working, eating, sleeping, making love, treating illness, performing rituals, fabricating tools, fighting, raising children, and so on, through the outline of cultural categories. Traditional styles of ethnographic description have been humanistic rather than formal. But in recent years, as a result of the influence of linguistics on the one hand, and clinical and experimental psychology on the other, formal methods of field work, analysis, and description have been developed which provide more precise, economical, and cross-culturally comparable descriptions of the various kinds of cognition that constitute a culture. Furthermore, there is increased interest in the process of cognition at a level more general, and less conscious, than can be conveniently regarded as appropriate to an ethnographic description of the languages of people in various cultures. These new developments in the formal analysis of culture go beyond the two essential observations that all races or varieties of human beings can perform essentially the same cognitive operations and that what is actually perceived, thought, and known, even in response to the same physical stimuli, varies predictably with culture rather than with physical type.

Although language is conventionally and properly regarded as a fundamental aspect of “culture,” it occupies an ambiguous position in cognitive studies and, as the development of the study of psycholinguistics may suggest, is probably more relevant to psychological than to cultural investigations. Language is employed in many, but by no means all, cognitive processes; there are, for instance, forms of thought in music, painting, and certain of the performing arts that are not linguistic. Language differences are in principle irrelevant to logical and mathematical thinking, which can be reduced from logical or mathematical symbols to linguistic ones in any of dozens of languages. And intraculturally, many of the most interesting individual differences in the cognitive process are observed in dialogues between speakers of the same language whose utterances are equally valid linguistically but not logically. A language provides a small and finite set of elementary signs and symbols, as well as rules for their permissible combination, which permit the construction of a very large number of utterances; but the rules of language do not “contain” the meaning of the utterances any more than the rules for attaching pieces of metal by nuts and bolts, screws, rivets, solder, pins, friction joints, clamps, etc., “contain” the design of an automobile engine. Therefore it would be a mistake to regard the structural description of a language as a description of cognitive process at anything more than a nuts-and-bolts level or for more than a portion of the cognitive operations of its speakers.

Culture as an ideal normative system. Ethnography, although it recognizes the existence of complementary individual variation in role and of individual deviancy from norm, initially describes a culture as an ideal structure that is generated by a group and is an attribute of that group. The formal, sometimes even mathematical, features of a culture thus are to be likened to the geometrical properties of a single object or to the interrelated statements of a highly organized body of knowledge, rather than to the multivariate statistical description of a population. The work of the ethnographer—describing the cognitive processes that have been culturally standardized in a given society —may perhaps best be made clear by an analogy. Let us suppose that a nonmathematician is given the task of describing a new mathematical calculus that is in active use by a group of people who have not formulated their system of calculation in a written text. It has, in other words, been developing informally over the years, is currently being used in its most matured form, and is being taught to new users by example and through oral instruction. The investigator is allowed to interview and observe—that is, he may ask questions during coffee breaks, watch people computing, save scraps of paper from wastebaskets, take photographs of the machines employed, talk a few times with the project director, listen to people teaching one another the right way to do things, and make other such minimally interfering kinds of observations and inquiries. He may even be permitted—and he will certainly be well advised—to join the group as a novice and learn to use the calculus himself.

Now, as he analyzes the data collected in these various ways, he does not merely tabulate the frequencies and intercorrelations of various classes of observed behavior in order to arrive at the calculus; if he did this, he would be giving equal weight to misunderstood jokes, learners’ mistakes, slips of the pen, careless work, gibberish produced by broken computers, and competent professional operations. What he does, instead, is to infer the system of rules that these people are attempting to apply. He will gain the assurance that he is on the way to an adequate understanding of these rules from the logical completeness of the system he infers and from his ability, when using it, to produce behavior that an expert will reward by saying, in effect, “That’s right; that’s good; now you’ve got it.” Of course, a sociologist or a psychologist might say, “But it is the behavior that is real, not this abstract system which no one actually applies perfectly and completely and which is merely the asymptote of the real curve of behaviors.” The investigator replies that culture—conceived in this sense as a collection of formal calculi—is just as real as algebra, Euclidean geometry, and set theory, which are the asymptotes of the “real” behavior of fallible students, mathematicians, and machines. Indeed, he will point out, these other calculi are aspects of a culture, and their apparently greater tangibility stems from the incidental circumstance that they have been the object of more intensive study and explicit description than the calculus which he has been investigating.

Certain aspects of cultures that are understood as ideal normative systems have been subjected to formal analysis. The semantic analysis of kinship terminology and other taxonomic systems by such techniques as componential analysis, the reduction of prescriptive marriage and descent rules to the form of permutation matrices (Bush 1963; Kemeny et al. 1957; Weil 1949; White 1963), the treatment of certain status relationship systems as Guttman scales (Goodenough 1963), and the formalization of the Hindu purity-impurity transformation cycle as a product of Galois groups (Wallace 1966) are examples of this effort to delineate in the most economical form the essential structure of limited aspects of culture. To the extent that the cultural structures thus formally delineated require that some or all of the generating population entertain equivalent cognitive structures, these ideal normative systems give information about cognition.
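
To give a flavor of what reducing marriage and descent rules to permutation matrices amounts to, here is a small, purely illustrative sketch: marriage classes are numbered, and the rules assigning a spouse's class and a child's class are represented as permutations that can be composed and checked algebraically. The four-class system and the particular rules below are invented for the example; they do not describe any actual society and do not reproduce Weil's or White's analyses.

```python
# Illustrative sketch of treating marriage/descent rules as permutations of a
# small set of marriage classes. The four-class system is invented.

# Classes are numbered 0..3. spouse[i] gives the class a member of class i must
# marry into; child[i] gives the class assigned to that couple's children.
spouse = [1, 0, 3, 2]   # a permutation: marriage links classes 0-1 and 2-3
child  = [2, 3, 0, 1]   # a permutation: children belong to the opposite pair

def compose(p, q):
    """Permutation composition: apply q first, then p."""
    return [p[q[i]] for i in range(len(q))]

# Rules can now be checked as algebraic properties of the permutations, e.g.:
# (1) no one marries into his or her own class:
print(all(spouse[i] != i for i in range(4)))   # True
# (2) composite rules are just composed permutations, e.g. the child-class
#     reached via one's spouse's class:
print(compose(child, spouse))                  # [3, 2, 1, 0]
```

The analytic payoff in the cited work is that prescriptive rules become equations between such permutations, so their consistency and their consequences can be worked out formally.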

But it would be naive to suppose that all members of a group maintain identical cognitive structures which are, in effect, the single normative structure revealed by ethnography and formal cultural analysis. Not only are there individuals with deviant or incomplete models but also the existence of complementary specialized roles in every human society requires that a model of the ideal normative system not be completely housed in the brain of every, or even any, single individual. Thus the question must be asked, What is the relationship of the ideal normative system to individual cognitions?

Culture as a cognitive system. Linguists, psychiatrists, philosophers, and social scientists have long been concerned with an issue which may be crudely but adequately stated as a pair of questions: Do all human beings, with more or less accuracy and complexity, follow one single neurologically founded logical calculus, the system that Boole called “the laws of thought” (i.e., the elementary logical calculus which is the root of all formal logical and mathematical reasoning), or are there many logics, mutually inconsistent, generated by differences in language (as the Sapir-Whorf hypothesis might suggest), by other aspects of culture, or by evolutionary level? The evidence to support the notion of logical pluralism has so far been unconvincing. Some “different” logics appear to be merely variants comparable to contrasts in emphasis on class products as opposed to relative products or preferences for probabilistic versus true-false truth values. And some appear to be based on mistaken assumptions about the primitiveness or irrationality of non-Western or ancient thinking. Although there are great differences in the degree of explicitness, the form, and the complexity of reasoning embedded in different linguistic and cultural traditions, and also differences in the determination of situations to which formal reasoning will be applied, it appears that such elementary rational procedures as syllogistic deduction and Mill’s canons of inductive inference are universal. It is probable, furthermore, that the extent of the complexity of rational operations performed without mechanical aid or specialized training by the normal members of any society, regardless of their level of economic and political organization, has an upper limit which is roughly the same for all racial and cultural groups (see Wallace 1961a).

Social science studies of cognition tend to emphasize the description of those perceptions, beliefs, and thoughts which are standardized, repetitive, and conventional in a society. Where such cognitions seem to be shared by all mature persons in the society, there may be little need to consider the individual; but where cognitions are not shared by all, the individual becomes important as a unit of analysis. In the general case the individual may be conceived as the site of a large and complexly organized set of perceptions, thoughts, and knowledge. This assemblage has been variously denoted the “image,” the “mazeway,” and so on; the term refers to the entire structure of the individual’s cognition about himself and the surrounding world, including memories, abstract knowledge, and rules of thought. Although the total description of any one person’s mazeway would doubtless be an impracticably large task, portions of any one mazeway can be described as a set of propositions which, in symbolic form, will approximate an internally consistent system. When one considers the group of individuals who compose a community of any size, with regard to a given aspect of behavior, the sum of the propositions with regard to that aspect may or may not yield an approximation of a logically consistent system. If they do sum to a system, then that sum is referred to as an aspect of “their culture.” In general, summing to culture will occur under two conditions: first, and obviously, if the individual mazeways are identical in content and internally consistent in structure; and second, if the individual mazeways, even if not all identical, sum to a consistent system. Anthropologists have traditionally drawn attention to the existence of identical (shared) structures and to a certain kind of sum (the equivalence structure) of nonshared structures. The two sorts of cultural summing of cognitions are represented schematically in Figure 1.
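
The schematic contrast represented in Figure 1 can be caricatured in a few lines of code: individual mazeways are treated as sets of propositions, and a collection of them "sums to culture" when the pooled set is internally consistent, whether because the sets are identical or because they are complementary. Everything in this sketch, from the propositions to the consistency test, is an invented toy, not a formalization proposed by Wallace.

```python
# Toy sketch of two kinds of "summing to culture": individual proposition sets
# are pooled, and the pool counts as a cultural system only if it is consistent.
# All propositions are invented for illustration.

def consistent(propositions):
    """A pooled set is inconsistent if it contains both p and not-p."""
    return not any(("not " + p) in propositions for p in propositions)

# Shared structure: identical mazeways.
alice = {"the flag has 50 stars", "dollars are currency"}
bob   = {"the flag has 50 stars", "dollars are currency"}

# Equivalence structure: complementary, role-specific knowledge (division of labor).
cook   = {"stew needs two hours", "dollars are currency"}
earner = {"the bank opens at nine", "dollars are currency"}

print(consistent(alice | bob))                            # True: shared
print(consistent(cook | earner))                          # True: complementary
print(consistent(cook | {"not stew needs two hours"}))    # False: no single system
```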

Examples of shared cultural cognitive elements in the United States, for instance, would be a speaking knowledge of basic English phonemes, vocabulary, and syntax; familiarity with the currency; and recognition of the American flag. Not all normal adult persons born and residing in the United States share even these minimal cognitions, but universality is closely approximated in most communities. Nonshared but complementary cognitions are just as readily discovered in division-of-labor systems: for instance, in household management, the wife’s knowledge of how to buy, cook, and serve food is usually complementary to the husband’s knowledge of how to secure enough money to provide the necessary transportation, cooking and sanitary appliances, and eating equipment; the professional pianist’s skills are complementary to the skills of the piano manufacturer and the tuner; and so on. In some areas of behavior, it may be considered improper for complementary specialists to know each others’ specialties, and in some situations (e.g., religious or military operations) it may be impious or illegal for persons with one role even to know, let alone practice, the role of the other. The analysis and classification of cultural structures in terms of the individual cognitive components of which they are the sum has not yet advanced very far.

At the present stage of research into these matters, however, there are technical and semantic difficulties in analyzing the relationship between individual cognitive structures and those cognitive sums that we have here called a “culture.” Obviously, except in the special case of all members of the society sharing the same cognitive structure, culture cannot be considered to be embodied in any one individual even though it is a product of individual cognitions. Thus, ascribing the contents of ethnographic monographs to each and every, and sometimes even any, individual in the society cannot legitimately be done. Furthermore, even in the case of perfectly shared cognitions, the cultural sum may be ethnographically described in a logical transformation of individual cognitive contents and/or cultural sums which is empirically predictive of behavior and elegant in formulation but not descriptive of cognitive content in anybody at all. In what sense are such logical transformations of the cultural sums themselves descriptive of cognitive content or structure of the individual mazeways from which the observed behavior was originally produced? The status of such transformations is precisely like that of newly formulated and proven theorems which were implicit in the axioms formulated by a mathematician but never anticipated by him. No doubt the possibility of performing such transformations has much to do with the generative power of culture; but it is necessary to keep firmly in mind that the actual cognitions of individuals may be different from these transformations and should be described in their own terms. Indeed, the possibility of understanding the dynamics of culture change would seem very largely to lie in the prospect of unraveling this relationship between individual cognition and unrealized summative implications.

Anthony F. C. Wallace

[Directly related are the entries Culture, article on the concept of culture; Componential Analysis; Ethnography; Language, article on language and culture; Linguistics, article on the field; and the biographies of Sapir and Whorf.]

BIBLIOGRAPHY

Bush, Robert R. 1963 An Algebraic Treatment of Rules of Marriage and Descent. Pages 159–172 in Harrison C. White, An Anatomy of Kinship: Mathematical Models for Structures of Cumulated Roles. Englewood Cliffs, N.J.: Prentice-Hall.

Colby, B. N. 1966 Ethnographic Semantics: A Preliminary Survey. Current Anthropology 7:3–32.

Goodenough, Ward H. 1963 Some Applications of Guttman Scale Analysis to Ethnography and Culture Theory. Southwestern Journal of Anthropology 19:235–250.

Hallowell, A. Irving 1955 Culture and Experience. Philadelphia: Univ. of Pennsylvania Press.

Hammel, E. A. (editor) 1965 Formal Semantic Analysis. American Anthropologist New Series 67, no. 5, part 2.

Kemeny, John G. et al. (1957) 1962 Introduction to Finite Mathematics. Englewood Cliffs, N.J.: Prentice-Hall.

Romney, A. Kimball; and D’Andrade, Roy G. (editors) 1964 Transcultural Studies in Cognition. American Anthropologist New Series 66, no. 3.

Wallace, Anthony F. C. 1961a On Being Just Complicated Enough. National Academy of Sciences, Proceedings 47:458–464.

Wallace, Anthony F. C. 1961b Culture and Personality. New York: Random House.

Wallace, Anthony F. C. 1966 Religion: An Anthropological Study. New York: Random House.

Weil, André (1949) 1963 On the Algebraic Study of Certain Types of Marriage Laws. Pages 151–157 in Harrison C. White, An Anatomy of Kinship: Mathematical Models for Structures of Cumulated Roles. Englewood Cliffs, N.J.: Prentice-Hall. → An analysis of the Murngin marriage structure. First published in Claude Lévi-Strauss (editor), Les structures élémentaires de la parenté.

White, Harrison C. 1963 An Anatomy of Kinship: Mathematical Models for Structures of Cumulated Roles. Englewood Cliffs, N.J.: Prentice-Hall.

cognitive theory


cognitive theory A major cluster of theories in social psychology, which focus upon the links between mental processes (such as perception, memory, attitudes, or decision-making) and social behaviour. At a general level such theories are opposed to behaviourism, and suggest that human beings are active in selecting stimuli, constructing meanings, and making sense of their worlds. There are many branches of cognitive theory, including Fritz Heider's cognitive balance theory, Leon Festinger's cognitive dissonance theory, George Kelly's personal construct theory, and attribution theory (see J. R. Eiser, Cognitive Social Psychology, 1980).

cognitive science


cognitive science A multidisciplinary research field involving artificial intelligence, cognitive psychology, linguistics, neuroscience, and philosophy. The goal is to understand the phenomena of thinking and the relationship between brain and mind. Progress depends upon work on computer simulations, perception, language, mental states, and consciousness.
