In everyday life people commonly refer to each other as being smart or slow. The perception that individuals differ widely in mental adeptness—in intelligence—long preceded development of the IQ test, and there is indeed a large vernacular for brilliance, stupidity, and the many points in between. There has been much sparring over the scientific meaning and measurement of intelligence, both in the rowdy corridors of public debate and in the sanctums of academe. But what do we actually know about intelligence? We have learned a lot more in the last decade, some of it surprising even to experts. Moreover, the data form a very consistent pattern showing that differences in intelligence are a biologically grounded phenomenon with immense sociological import.
MEASUREMENT OF INTELLIGENCE
The effort to measure intelligence variation among individuals is a century old. Two strategies for measuring such differences have emerged, the psychometric and the experimental. Both spring from the universal perception that, although all people can think and learn, some are notably better at both than others. Accordingly, intelligence research focuses on how people differ in cognitive competence, not on what is common to all of us. (Other disciplines such as neuroscience and cognitive psychology specialize in the commonalities.) The aim of intelligence research is thus much narrower than explaining the intricacies of how brains and minds function. These intricacies are relevant to intelligence experts, but generally only to the extent that they illuminate why people in all cultures differ so much in their ability to think, know, and learn.
Psychometric (Mental Testing) Strategy. The IQ test represents the psychometric approach to measuring intelligence. Alfred Binet devised the first such test in France to identify children who would have difficulty profiting from regular school instruction. Binet's idea was to sample everyday mental competencies and knowledge that were not tied to specific school curricula, that increased systematically throughout childhood, and that could reliably forecast important differences in later academic performance. The result was a series of standardized, age-graded test items arranged in increasing order of difficulty. A child's score on the test compared the child's level of mental development to that of average children of the same age. Binet's aim was pragmatic and his effort successful.
Innumerable similar tests have been developed and refined in the intervening century (Anastasi 1996; Kaufman 1990). Some are paper-and-pencil tests, called group tests, that can be administered cheaply to many individuals at once and with only a small sacrifice in accuracy. Others, such as the various Wechsler tests, are individually administered: they are given one-on-one by an examiner and require no reading. Today, individually administered intelligence tests are typically composed of ten to fifteen subtests that vary widely in content. The two major categories are the verbal subtests, such as vocabulary, information, verbal analogies, and arithmetic, which require specific knowledge, and the performance subtests, such as block design, matrices, and figure analogies, which require much reasoning but little or no knowledge. The highly technical field that develops and evaluates mental tests, called psychometrics, is one of the oldest and most rigorous in psychology. Its products have been found useful in schools, industry, the military, and clinical practice, where they are widely used.
Professionally developed mental tests are highly reliable, that is, they rank people very consistently when they are retested. A great concern in earlier decades was whether mental tests might be culturally biased. Bias refers to the systematic over- or underestimation of the true abilities of people from certain groups—a "thumb on the scale"—favoring or disfavoring them. There are many specific techniques for uncovering test bias, and all mental tests are screened for bias today before being published. IQ tests generally yield different average scores for various demographic groups, but the consensus of expert opinion is that those average differences are not due to bias in the tests. The consensus among bias experts, after decades of research often trying to prove otherwise, is that the major mental tests used in the United States today do not systematically understate the developed abilities of native-born, English-speaking minorities, including American blacks. The American Psychological Association affirmed this consensus in its 1996 task force report, "Intelligence: Knowns and Unknowns" (Neisser et al. 1996).
The biggest remaining question about IQ tests today is whether they are valid, that is, whether they really measure "intelligence" and whether they really predict important social outcomes. As will be shown later, IQ tests do, in fact, measure what most people mean by the term "intelligence," and they predict a wide range of social outcomes, although some better than others and for reasons not always well understood.
Experimental (Laboratory) Strategy. The experimental approach to measuring differences in general intelligence is older than the psychometric but little known outside the study of intelligence. It has produced no tests of practical value outside research settings, although its products may someday replace IQ tests for many purposes. The approach began in the late 1800s when the great polymath Francis Galton proposed that mental speed might be the essence of intelligence. He therefore set out to measure it by testing how quickly people respond to simple sensory stimuli such as lights or tones. Galton's measures did not clearly correlate with "real-life" indicators of mental ability, such as educational success, so his chronometric approach was quickly dismissed as wrong-headed and far too simplistic to capture anything important about the beautiful complexity of human thought.
Advances in statistics after the mid-twentieth century, however, showed that Galton's data had in fact held considerable promise. New medical and computer technology has since allowed researchers to measure elements of mental processing with a precision Galton could not achieve. The revival of his approach in the 1970s has revolutionized the study of intelligence. It is the new frontier in intelligence research today. No longer producing "fool's gold" but the real thing, the study of elementary cognitive processes has attracted researchers from around the world. It now appears that some differences in complex mental abilities may, in fact, grow from simple differences in how people's brains process information, including their sheer speed in processing.
There is no single experimental approach, but perhaps the dominant one today is the chronometric, which includes studies of inspection time (IT) and reaction time (RT). Chronometric tasks differ dramatically from IQ test items. The aim is to measure the speed of various elementary perceptual and comprehension processes. So, instead of scoring how well a person performs a complex mental task (such as solving a mathematics problem or defining a word), chronometric studies measure how quickly people perform tasks that are so simple that virtually no one gets them wrong. These elementary cognitive tasks (ECTs) include, for example, reporting which of two briefly presented lines is the longer or which of several lights has been illuminated. In the former, an IT task, the score is the number of milliseconds of exposure required to perceive the difference. In the latter, an RT task, the score is the number of milliseconds the subject takes to release a "home button" (called "decision time") in order to press the lighted response button (called "movement time").
Both average speed and variability in speed of reaction are measured over many trials. It turns out that brighter people are not only faster but more consistent in their speed of stimulus apprehension, discrimination, choice, visual search, scanning of short-term memory, and retrieval of information from long-term memory. In fact, variability in speed is more highly correlated with IQ (negatively) than is average speed. ECT performance correlates more highly with IQ as the tasks become more complex, for example, when the number of lights to distinguish among increases from two to four to eight (respectively, one, two, and three "bits" of information). Composites of various speed and consistency scores from different ECTs typically correlate −.5 to −.7 with IQ (on a scale of −1.0 to 1.0, with zero meaning no relation), indicating that both chronometric and psychometric measures tap much the same phenomena. Psychometric and chronometric measures of mental capacity also trace much the same developmental curve over the life cycle, increasing during childhood and declining in later adulthood. Debates among the experimentalists concern how many and which particular elementary cognitive processes are required to account for differences in psychometric intelligence.
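The two chronometric scores just described, average speed and trial-to-trial variability, along with the "bits" carried by 2, 4, or 8 response lights, can be sketched in a few lines. The reaction times below are hypothetical illustrative values, not data from any study, and the function names are invented for illustration.

```python
import math
from statistics import mean, stdev

# Information carried by n equally likely alternatives: 2, 4, and 8 lights
# correspond to 1, 2, and 3 "bits" respectively (log base 2 of n).
def bits(n_alternatives: int) -> float:
    return math.log2(n_alternatives)

# The two chronometric scores taken over many trials for one subject:
# average speed and intraindividual variability (hypothetical RTs, in ms).
def chronometric_scores(rts_ms):
    return {"mean_rt": mean(rts_ms), "rt_sd": stdev(rts_ms)}

print([bits(n) for n in (2, 4, 8)])  # [1.0, 2.0, 3.0]
print(chronometric_scores([310, 295, 330, 305, 300, 320]))
```

Both scores would then be correlated (negatively) with IQ across subjects, with the variability score typically the stronger correlate.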
MEANING OF INTELLIGENCE
The meaning of intelligence can be described at two levels. Nonexperts are usually interested in the practical meaning of intelligence as manifested in daily life. What skills does it reflect? How useful are they in school, work, and home life? In contrast, intelligence researchers tend to be interested in the more fundamental nature of intelligence. Is it a property of the brain and, if so, which property exactly? Or is it mostly a learned set of skills whose value varies by culture? Personnel and school psychologists, like other researchers concerned with the practical implications of mental capability, are often interested in both levels.
Practical Definitions of Intelligence. The practical meaning of intelligence is captured well by the following description, which was published by fifty-two leading experts on intelligence (Gottfredson 1997a). It is based on a century of research on the mental behavior of higher- versus lower-IQ people in many different settings.
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do. (p. 13)
The concept of intelligence refers specifically to an ability that is mental. It does not encompass many of the other personal traits and circumstances that are important in people's lives. It does not include, for instance, strictly physical skills, creativity, or traits of personality and character such as conscientiousness and drive. IQ tests are not intended to measure these other traits. Three practical definitions that are more specific may illuminate better what intelligence means in daily affairs. Each can be translated into the others, but each highlights a different practical aspect of intelligence: the ability to deal with complexity (Gottfredson 1997b), learn (Carroll 1997), and avoid making cognitive errors (Gordon 1997).
Intelligence as the ability to deal with complexity. IQ test items vary widely in content and format, but they often seem esoteric or narrowly academic. Many people in the past took these superficialities as guides to the nature of what IQ tests measure and therefore mistakenly concluded that they cannot be measuring anything of real consequence, at least outside schools. IQ tests' superficial characteristics, however, are irrelevant to their ability to measure intelligence. What matters is the complexity, the amount of mental manipulation, their tasks require: contrasting, abstracting, inferring, finding salient similarities and differences, and otherwise turning things over in one's mind to accomplish the mental task. Complexity is the active ingredient in tests that call forth intelligence. People who score higher on IQ tests are people who deal better with complexity, that is, are more adept at understanding and effectively solving more complex mental challenges.
Any kind of test vehicle or content (words, numbers, figures, pictures, symbols, blocks, mazes, and so on) can be used to create different levels of complexity. IQ tests typically do, in fact, contain subtests with different kinds of content. Forward and backward digit span (two memory subtests) illustrate clearly the notion of mental manipulation and task complexity. In digits forward, individuals are asked to repeat a string of from two to nine digits (say, 3–2–5–9–6) that is presented orally at one digit per second. In digits backward, the individual simply repeats the numbers in reverse order (in this case, 6–9–5–2–3). The one extra element in the second task (mentally reversing the list) greatly increases its complexity, nearly doubling its correlation with IQ.
Number series completion subtests can also seem trivial, but they illustrate how the same simple content can be varied to build increasingly complex mental demands. Consider the following three series: 4, 6, 8, 10, 12,——(easy item); 2, 4, 5, 7, 8, 10,——(moderate); and 9, 8, 7, 8, 7, 6,——(difficult). One must discern the relations between succeeding numbers in order to complete the series, and those relations become increasingly complex across the three series (respectively, add 2 to each successive digit; add 3 to each successive set of two digits; subtract 1 from each successive set of three digits). These are similar to the items found in one of the fifteen subtests of the Stanford–Binet Intelligence Scale (SBIS–IV) for school-aged youth. They require very little knowledge. Instead, their challenge is to use that simple information effectively—to contrast and compare, find relations, and infer rules—in order to solve logical problems in the test setting. IQ tests that require this on-the-spot problem solving are referred to as tests of fluid intelligence—of mental horsepower, if you will.
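The three series rules just described share one underlying form: each successive block of one, two, or three digits repeats the previous block shifted by a constant. A small sketch (the helper name is invented for illustration) makes that common pattern explicit:

```python
# Hypothetical helper illustrating the three number-series rules above.
def extend_blockwise(series, block, delta, n=1):
    """Extend a series whose rule is: each element equals the element
    `block` positions earlier, plus `delta`. Appends `n` new items."""
    out = list(series)
    for _ in range(n):
        out.append(out[-block] + delta)
    return out

# Easy: add 2 to each successive digit (block size 1).
assert extend_blockwise([4, 6, 8, 10, 12], 1, 2)[-1] == 14
# Moderate: add 3 to each successive set of two digits.
assert extend_blockwise([2, 4, 5, 7, 8, 10], 2, 3)[-1] == 11
# Difficult: subtract 1 from each successive set of three digits.
assert extend_blockwise([9, 8, 7, 8, 7, 6], 3, -1)[-1] == 7
```

The test taker's task is the inverse of this sketch: inferring the block size and shift from the given digits rather than applying a stated rule.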
Some IQ subtests require test takers to bring considerable knowledge into the test setting in order to perform well, but they, too, illustrate the principle that the active ingredient in IQ tests is the complexity of their mental demands. Vocabulary, for example, is one of the very best subtests for measuring intelligence. The reason is that people do not learn most words (love, hate) by memorization or direct instruction, but rather by inferring their meanings and their fine nuances in meaning (love, affection, infatuation, devotion, and ardor; hatred, loathing, abhorrence, antipathy, and contempt) from the way other people use them in everyday life. Learning vocabulary is largely a process of distinguishing and generalizing concepts in natural settings.
Table 1 illustrates how vocabulary level reflects differences in the ability to deal with complexity. These results are from an earlier version of the Wechsler Adult Intelligence Scale (WAIS). All the adults tested were able to provide at least a tolerable definition of concrete items such as bed, ship, and penny, but passing rates dropped quickly for more abstract and nuanced concepts such as slice (94 percent), sentence (83 percent), domestic (65 percent), and obstruct (58 percent). Only half could define the words remorse, reluctant, and calamity. Fewer than one in five knew the words ominous and tirade, and only 5 percent could provide even a partial definition of travesty. Anyone who has attended high school, read newspapers and magazines, or watched television will have encountered these words. Vocabulary tests thus gauge the ease with which individuals have routinely "picked up" or "caught onto" concepts they encounter in the general culture. So, too, do the general information subtests that are included in many IQ test batteries ("Why do homeowners buy home insurance?").
Vocabulary, information, and other tests that require considerable prior knowledge are referred to as tests of crystallized intelligence because they measure the knowledge that has formed or crystallized from past problem solving. The greater the mental horsepower, the greater the accumulation. Only knowledge that is highly general and widely available is assessed, however, because otherwise IQ tests would also be measuring the opportunity to learn, not success when given the opportunity to do so. Tests of fluid and crystallized intelligence correlate very highly, despite their very different content, because the key active ingredient in both is the complexity of the problems people must solve.
Intelligence as the ability to learn. One of life's unremitting demands is to learn—that is, to process new information sufficiently well to understand it, remember it, and use it effectively. This is especially so in education and training, but it is also the case in meeting the challenges of everyday life, from learning to use a new appliance to learning the subtle moods of a friend or lover.
IQ level is correlated with the speed, breadth, and depth of learning when learning requires thinking: when it is intentional (calls forth conscious mental effort), insightful (requires "catching on"), and age-related, that is, when older children learn the material more easily than younger children do (because they are mentally more mature); and when the material to be learned is meaningful and hierarchical (mastering earlier elements is essential for learning later ones, as in mathematics). Learning is also correlated with intelligence level when the learning task permits using past knowledge to solve new problems, when the amount of time for learning is fixed, and when the material to be learned is not unreasonably difficult or complex (which would cause everyone to fall back on trial-and-error learning). In short, intelligence is the ability to learn when the material to be learned is moderately complex (abstract, multifaceted, and so on), as distinct from learning by rote or mere memorization.

Table 1. Percentage of Adults Age 16–65 Passing* WAIS Vocabulary Items

| item | % passing | item | % passing |
| --- | --- | --- | --- |
| 1. bed | 100 | 21. terminate | 55 |
| 2. ship | 100 | 22. obstruct | 58 |
| 3. penny | 100 | 23. remorse | 51 |
| 4. winter | 99 | 24. sanctuary | 49 |
| 5. repair | 98 | 25. matchless | 47 |
| 6. breakfast | 99 | 26. reluctant | 50 |
| 7. fabric | 92 | 27. calamity | 50 |
| 8. slice | 94 | 28. fortitude | 36 |
| 9. assemble | 90 | 29. tranquil | 36 |
| 10. conceal | 87 | 30. edifice | 22 |
| 11. enormous | 89 | 31. compassion | 29 |
| 12. hasten | 87 | 32. tangible | 30 |
| 13. sentence | 83 | 33. perimeter | 26 |
| 14. regulate | 80 | 34. audacious | 20 |
| 15. commence | 79 | 35. ominous | 20 |
| 16. ponder | 64 | 36. tirade | 17 |
| 17. cavern | 68 | 37. encumber | 19 |
| 18. designate | 63 | 38. plagiarize | 13 |
| 19. domestic | 65 | 39. impale | 14 |
| 20. consume | 61 | 40. travesty | 5 |

*Passing includes getting at least partial credit.
Source: Matarazzo (1972), table 5, p. 514.
People learn at very different rates. In school, the ratios of learning rates are often four or five to one, and they can go much higher depending on the material. The military has likewise found that recruits differ greatly in how well they learn, which it calls trainability. One study done for the U.S. Army (Fox, Taylor, and Caylor 1969) found, for example, that enlistees in the bottom fifth of ability needed two to six times as many teaching trials and prompts as did their higher-ability peers to reach minimal proficiency in rifle assembly, monitoring signals, combat plotting, and other basic soldiering tasks. Figure 1 illustrates the major differences in trainability at different levels of IQ. People with IQs of about 115 and above can not only be trained in a college format but can even gather and infer information largely on their own. Training for people with successively lower IQs, however, must be made successively less abstract, more closely supervised, and limited to simpler tasks. Low levels of trainability limit not only how much can be learned in a set amount of time but also the complexity of the material that can be mastered with unlimited time.
Intelligence as the ability to avoid common cognitive errors. Intelligence can also be conceived, for practical purposes, as the probability of not making cognitive errors. The notion is that all people make cognitive errors but that brighter people make fewer of them in comparable situations. They make fewer errors in learning, for example, because they learn more quickly and thoroughly. And they make fewer errors of judgment in new and unexpected situations because they are better able to look ahead, assess the likely consequences of different actions and events, spot incongruities and problems, factor more information into their decision making, and perceive alternative courses of action.
Just as items on intelligence tests are scored right versus wrong or better versus worse, so, too, can many decisions in everyday life be classified in this manner. And just as intelligence tests must use many items to assess intelligence level accurately, so, too, does the meaning of intelligence in daily life manifest itself in the accumulation of good and bad decisions, large and small, throughout one's life. The lifetime advantages of higher intelligence are explored later. The point here is simply that intelligence can also be described as the ability to avoid making common errors in judgment and accumulating a harmful record of them.
These three workaday definitions give an intuitive sense of what it means at the level of personal experience to be more versus less intelligent. Intelligence researchers seek to understand intelligence differences in their more fundamental sense, below the surface of everyday observation. As described next, most have adopted a new working definition of intelligence for this purpose.
Psychometric g (Not IQ) as the Research Definition of "Intelligence." The psychometric approach to measuring intelligence cannot by itself tell us what intelligence is fundamentally, say, neurologically. However, it has greatly narrowed the possibilities. Most importantly, it has shown that intelligence is a highly general ability and that it is the backbone or supporting platform for the more specific mental abilities. This finding rests in turn on the discovery of a single, common, and replicable means for isolating for study what most people mean by intelligence. As explained, it is not the IQ but g, which is short for the general mental ability factor (Jensen 1998). The latter has replaced the former as the gold standard for measuring intelligence. Researchers do not yet know exactly what aspect of mind or brain g represents, but g has become the de facto definition of intelligence for most intelligence researchers, some of whom would drop the term "intelligence" altogether.
From the earliest days of mental testing, researchers observed that people who do well on one test tend to do well on all others. That is, all mental tests intercorrelate to some degree. This prompted Charles Spearman, Galton's student and one of the earliest theorists of intelligence, to invent the statistical technique of factor analysis to isolate that common component from any set of mental tests. Once the common factor, g, is statistically extracted from a large, diverse set of tests, each individual's standing on it (the person's g level) can then be calculated. So, too, conversely, can the ability of different tests to measure g (their g-loadings). Among mental tests, IQ tests provide the most accurate measures of g. Scores on the great variety of IQ tests are all highly g-loaded, that is, they all correlate highly with g (with .9 being a typical value for tests of the Wechsler variety). This high correlation means that IQ scores are quite adequate for most practical purposes; therefore, g scores are generally calculated only for research purposes.
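Spearman extracted g with factor analysis proper; as a rough sketch of the same idea, the first principal component of a test battery's intercorrelation matrix can stand in for the common factor. The 3-by-3 correlation matrix below is hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical intercorrelations among three mental tests. The "positive
# manifold" (all correlations positive) is what makes a common factor possible.
R = np.array([[1.00, 0.70, 0.60],
              [0.70, 1.00, 0.65],
              [0.60, 0.65, 1.00]])

# First principal component as a crude stand-in for the g factor.
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
first_pc = eigvecs[:, -1]              # direction of largest shared variance
g_loadings = np.abs(first_pc) * np.sqrt(eigvals[-1])

print(np.round(g_loadings, 2))         # each test's correlation with the "g" stand-in
```

With every pair of tests correlating .60 to .70, all three loadings come out high, which is the pattern that led Spearman to posit a single general factor.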
The replicability of g. Research reveals that the same g dimension characterizes all demographic groups yet studied. Virtually identical g factors have been extracted from all large, diverse sets of mental tests, regardless of which method of factor analysis was used and regardless of the age, sex, or race of the test takers. The same g is called forth by tests that require much cultural knowledge as by ones requiring virtually none. It can be called up by any kind of item content (numbers, letters, shapes, pictures, blocks, and the like), a phenomenon that Spearman called indifference of the indicator.
Mental tests often measure more specific aptitudes in addition to g (say, verbal or spatial ability), but g is the crucial backbone of all mental tests. Efforts to create useful mental tests that do not measure g (for example, verbal aptitude tests that do not tap g) have all failed. Although mental tests are suffused by a common factor, no analogous common factor can be found among different personality tests (which test for extroversion, conscientiousness, sociability, and so on). The absence of a general personality factor illustrates that the general mental ability factor g is not an artifact of factor analysis but a real phenomenon.
To be sure, the existence of the g factor can be obscured by inappropriate testing (for example, when some test takers do not know the language well) and by narrow sampling (when all test takers are similar in intelligence). When allowed to manifest itself, however, the g factor clearly shows itself to transcend the particulars of content and culture. This is not to say that culture cannot affect the development of g or its social significance, but only that culture does not determine its fundamental nature. The nature of g seems to be surprisingly independent of culture, as other sorts of research have confirmed.
The generality of g. The great generality of g is perhaps psychometrics' most crucial discovery about the nature of intelligence. As noted, the identical g factor is the major distinction in mental abilities in all groups of people and tests, regardless of cultural context or content. As also noted, all mental ability tests measure mostly g, no matter what specific abilities they were intended to measure (verbal aptitude, mathematical reasoning, memory, intelligence, and so on). The manifest skills most associated with intelligence in both fact and public perception—reasoning, problem solving, abstract thinking, and learning—are themselves highly general, context-independent thinking skills. The psychometric vehicles (tests and test items) for measuring g are necessarily culture-bound to some degree, but the g abstracted from them appears not to be.
There are, of course, other mental aptitudes, but, unlike g, they seem specific to particular domains of knowledge or activity (language, music, mathematics, manipulating objects in three-dimensional space). Moreover, none of these narrower abilities seem so integral as g to the expression of all the others. Many decades of factor-analytic research on human abilities have confirmed what is called the hierarchical structure of mental abilities (Carroll 1993). As shown in the simplified version in Figure 2, abilities are arrayed from the top down, with the most general placed at the top. Research always finds g at the top of this generality hierarchy for mental abilities.
The generality of intelligence was less clear when researchers relied on IQ as their working definition of intelligence. The reason is that all IQ tests are imperfect measures of g and each often captures the flavor of some specialized ability or knowledge in addition to g. That is, all IQ tests share a large g component, but their small non-g components often differ in size and content. Attempting to understand intelligence by studying IQ scores has been akin to chemists trying to understand the properties of a particular chemical element by each studying samples that were impure to different degrees and with different additives. This ensured a muddied and fractious debate about the essence of intelligence. In contrast, the g factor is a stable, replicable phenomenon. When researchers study g, they can be confident they are studying the same thing, even when the g's they use were extracted from different sets of tests. Moreover, g has the advantage over IQ that it cannot be confused with the attributes or contents of any particular test, because g is always extracted from some large, mixed set of them. One must look below the surface characteristics of IQ tests, to g, to explain the core phenomenon they measure.
The g-loadings of tests and tasks. The ability to classify tests according to their correlation with g is also a major advance in the study of intelligence. It allows research on why tasks vary in their ability to call forth g and thus helps predict where in life higher levels of intelligence are most useful. Stated another way, mental tests can now be used to compare environments, not just people, and figure out why some environments are more cognitively demanding than others.
Evidence suggests that tasks are more g-loaded when they require more complex information processing, for example, when there are more pieces of information, when there are more operations to perform, and when the information is abstract, nested, or incomplete. For instance, spelling and arithmetic tests pose much less complex and g-loaded tasks for adolescents and adults than do vocabulary and mathematical reasoning tests. Spelling and computing well in adolescence and beyond depend less on g level than does comprehending higher-level verbal and mathematical concepts, despite their superficially similar content.
As will be seen, many work tasks and occupations have been ranked in their demands for g. In theory, a g loading can be calculated for virtually everything we do in daily life. Life is like a series of mental tests in the sense that its demands vary considerably in complexity and consequent g-loading. This means that the advantages of being brighter will vary systematically across different life settings according to their cognitive complexity.
The finding that the subtests in an IQ test battery differ systematically in their ability to measure g has been cleverly used to explore the biological as well as the sociological meaning of g. By the method of correlated vectors, the g-loadings of IQ subtests are themselves correlated with other attributes of the subtests. For example, tests' g-loadings have been found to predict the genetic heritability of their scores, degree of inbreeding depression, and the subtests' correlations with brain size, faster glucose metabolism in the brain, and greater complexity and speed of onset of various electroencephalogram (EEG) brain waves. This pattern of correlations reinforces other findings which suggest that g is a biologically grounded capability to process complex information regardless of its explicit content.
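Computationally, the method of correlated vectors reduces to correlating two vectors across subtests: the subtests' g-loadings and some other per-subtest attribute, such as a heritability estimate. The numbers below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Method of correlated vectors, sketched with hypothetical values.
# One entry per subtest in an IQ battery:
g_loadings    = np.array([0.83, 0.74, 0.61, 0.55, 0.90])  # hypothetical
heritability  = np.array([0.70, 0.60, 0.45, 0.50, 0.75])  # hypothetical

# The statistic of interest is simply the correlation of the two vectors:
# do the most g-loaded subtests also show the largest values of the
# biological attribute?
r = np.corrcoef(g_loadings, heritability)[0, 1]
print(round(r, 2))
```

A strongly positive r across many such attributes (heritability, brain correlates, inbreeding depression) is what supports reading g as biologically grounded.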
Mental test scores—including the IQ—are composed of both g and non-g components, however. The non-g component might reflect more specific abilities, specific bits of cultural knowledge, aspects of personality or the testing situation, or other unspecified impurities that are independent of g. The decomposition of test scores into their g versus non-g components is also an enormously important development for understanding the meaning of intelligence. For example, it has been shown that it is almost exclusively the g component, not the non-g components, of tests that accounts for their ability to predict later school achievement and job performance. This considerably narrows the range of possible explanations for why IQ tests predict differences in individuals' later achievement. The explanation cannot reside mostly in the context-specific bits of knowledge that an IQ might reflect, but in the highly general mental capability that g represents in all contexts and cultures.
Experimental Study of the Components of g. If psychometrics has discovered that g is a very general information-processing capability, laboratory studies of intelligence are aimed at teasing out its components or building blocks. The debate among experimentalists has been about whether individual differences in general intelligence are more like differences in computer hardware or computer software. Both views, however, perceive differences in g or IQ as differences among individuals in the speed and quality of their information processing.
The "software" view argues that differences in intellectual performance originate in the better or worse use of the same hardware, for example, in the use of better strategies or algorithms for using information and solving problems. These metacognitive skills might include better allocation of time to the different components of a problem, monitoring of progress or responding to feedback, and otherwise better controlling how the different components of a task are executed. Such studies might look, for example, at the kinds of planning subjects use in solving verbal analogies or the ways they use their time in comprehending a passage of text. In this view, the general factor g reflects not a general underlying ability but the greater conscious use of separate planning and control strategies of general value, in all of which individuals could presumably be trained.
The "hardware" view postulates that differences in the speed and quality of information processing originate in differences in basic brain physiology, such as nerve conduction velocity. The great enthusiasm over the "top-down" software view during the 1970s and 1980s waned as research began more and more to support the claims of the "bottom-up" hardware view of intelligence. People can indeed be observed to use different strategies in solving problems, but differential motivation, effort, or strategy use do not seem to account for IQ differences, and the successful strategies are fairly task-specific.
Although research has not yet proven that differences in lower-level information-processing abilities actually cause differences in higher-level ones, measures closer to the physiological level offer more promising explanations of g (Vernon 1993). For example, simultaneous recordings of subjects' reaction times (RTs) and brain-wave activity (specifically, average evoked potentials [AEPs] measured by the EEG) have shown that speeds of response on elementary cognitive tasks (ECTs) are moderately to highly correlated with the complexity and speed of onset of certain brain waves, both of which occur in less time than is required for conscious awareness of a stimulus. Much other research shows that both ECT and AEP responses are, in turn, moderately to highly correlated with IQ scores and, most importantly, with g itself. The g factor is the only mental ability with which ECT scores correlate.
Accordingly, some intelligence researchers now argue that intelligence may not be an ability per se, but rather a chemical, electrical, or metabolic property of the brain. Specific aptitudes, such as verbal and spatial ones, appear to reside in particular regions of the brain, but g may represent a global property permeating all regions. Nerve conduction velocity is currently being investigated as one such possible global property. Differences in velocity may in turn result from differences in nerve myelination (myelin is the fatty sheath around nerve axons). While still speculative, the velocity and myelination hypotheses are consistent with a well-established pattern of differences that any putative cause of intelligence will have to explain, namely, both the steady rise and subsequent decline of fluid intelligence over the life cycle and the enduring differences in g among people at any single age.
Popular Contending Theories. Any theory of intelligence must take into account the basic facts about intelligence, whether it is measured as IQ or g. These include its high generality, heritability (discussed shortly), and correlations with elementary perceptual and physiological processes of the brain. Some of the theories that are most popular outside expert circles contradict or ignore these facts and thus are not viable contenders to the emerging g theory of intelligence. Others remain untested hypotheses. The major contenders to g theory can be characterized as either specificity or multiplicity theories of intelligence.
Specificity theories. Some scholars have argued that intelligence is not an underlying ability but merely the accumulation of specific bits of knowledge. For them, being "smart" is nothing more than knowing a lot, no matter how much time and effort went into that learning or what was learned. It is akin to the accumulation of marbles in a jar, signifying nothing other than that many marbles have been collected by whatever means. The apparent assumption is that people do not differ in their ability and efficiency in gathering marbles. However, intelligence has to be more than knowledge per se because, among other reasons, differences in intelligence show up on tests that require no knowledge whatsoever. Moreover, as noted, people differ greatly in their ability to acquire knowledge even when given the same opportunity to learn. There are "fast" students and "slow" students, irrespective of motivation and quality of instruction. For many experts, differences in the ability to acquire knowledge are at the heart of intelligence.
Another variant is the cultural specificity theory, which is that intelligence is merely the display of traits, whatever they may be, that are highly regarded in a particular culture. For example, one claim is that because IQ tests are typically developed by white European males, they inevitably measure beliefs, behavior, and knowledge that white European males value but that may have no intrinsic value. Intelligence, they say, might be defined completely differently in another culture, such as skill at hunting, navigating, or cooperating for the general good. The first claim is false and the second is irrelevant, even if true. As noted, the same g is extracted from all diverse sets of mental tests and for all cultural groups. (Besides, Asians tend to do better than whites on tests developed by the latter.) Whether different cultural groups recognize, value, and reward the phenomenon represented by g is interesting and important, but it does not erase the phenomenon itself as a scientific fact any more than rejecting the concept of evolution brings evolution to a halt.
Perhaps the best-known variant is the academic specificity theory, which says that IQ and intelligence are simply "book smarts," a narrow "academic" skill that is useful inside but not outside schools and bookish jobs. According to this theory, intelligence may be an enduring personal trait, but only a narrow one. As will be shown, g is indeed highly useful in education and training. However, the very generality of g—the ability to deal with complexity, to learn, and to avoid mistakes—argues against the narrow "book smarts" conception of intelligence. So, too, does much research, discussed later, on the many practical advantages conferred by higher levels of g. Carpenters as well as bank tellers, sales agents as well as social scientists, routinely deal with complexity on the job and are aided by higher levels of g.
Multiplicity theories. Robert Sternberg (1985) argues that there are several intelligences, including "analytical," "practical," and "creative." Howard Gardner (1983) is famous for postulating eight and possibly nine intelligences: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, intrapersonal, interpersonal, naturalist, and (possibly) existential. Daniel Goleman's 1995 book on "emotional intelligence" has taken the country by storm. All three theories are engaging, are popular in lay circles, and describe undeniably important skills, knowledges, and achievements. All three theories suggest that g, if it exists, is only one of various coequal abilities. This is, indeed, why multiple intelligence theories are so popular. They are often interpreted—wrongly—as suggesting that everyone can be smart in some useful way.
The question, however, is whether the "intelligences" these theories describe are actually comparable to g in any fundamental way. Specifically, are they even abilities, or might they be the products (literary, scientific, or artistic) of exercising g together with specific abilities in specific settings with specific kinds of training and experience? Are the purported intelligences even mental rather than, say, physical abilities or aspects of personality? And for those that are mental abilities, are they comparable to g in their general applicability? Unfortunately, the research necessary for answering these questions credibly has not been conducted. Almost none of the "multiple intelligences" has actually been measured, and none have been shown independent of g in representative samples of the population. Verbal descriptions of them leave many experts doubtful that they are comparable to g in any important way.
Some of them, like emotional intelligence, seem to be a combination of many different traits, some being abilities and others not, some being mental and others not. Verbal definitions suggest that practical intelligence (like "street smarts") may be the accumulation of highly context-specific knowledge gathered through strictly informal experience (for example, knowing the special argot and norms of a particular neighborhood, occupation, or other subculture). Gardner's intelligences are different forms of highly valued cultural accomplishment. As such, they require not only the ability to succeed but also the personality traits, such as drive and persistence, needed to transform that potential into a valued product. This is not to deny that personality is important for great accomplishment, but only that it is useful to distinguish between the separate ability, personality, and other factors contributing to it.
In addition, some of Gardner's intelligences seem to mirror more specific and psychometrically well-studied traits, such as verbal, mathematical, and spatial aptitude. Much research has shown that these so-called group factors are highly correlated with g but appear below it in the hierarchical structure of human mental abilities (see Figure 2). Gardner himself has stated that exemplary levels of all his intelligences require IQ levels over 120, meaning that the eight intelligences are not alternatives to g but narrower abilities pervaded by it. In short, they appear to be different cultural playgrounds for the cognitively rich. All the purported "multiple intelligences" are important topics for study, but they cannot be assumed to be comparable to g in either generality or practical importance by virtue of being labeled "intelligences."
HERITABILITY AND ENVIRONMENTALITY OF INTELLIGENCE
Behavioral genetics is a method for studying the influence of both genes and environments on human behavior. In recent decades the field has shown that mental abilities, personality, vocational interests, psychopathology, and even social attitudes and life events are shaped by both genes and environments (Loehlin 1992; Plomin et al. 1997). More research has been conducted on the heritability of intelligence than on any other psychological trait, and much of it has been longitudinal.
Behavioral genetics focuses on explaining variation in a particular population. Its basic method is to look at similarities between relatives of different degrees of genetic and environmental relatedness: identical twins reared apart, adopted siblings reared together, identical versus fraternal twins, and so on. Such research can also test, among other things, whether specific environmental factors create IQ similarities and differences and, if the research is longitudinal, whether change (and stability) in IQ ranking is due to the operation of genes, environments, or both. It can also test whether two heritable traits or behaviors, such as IQ and academic achievement, share the same genetic and environmental roots.
Such research does not reveal how genes affect intelligence, only that they do. Explanations of how genes influence intelligence will come from molecular genetics, which has only recently isolated the first gene for intelligence. Molecular genetic research also holds promise for detailing exactly how environments might influence the actions of genes.
Individual Differences. Behavioral genetics has focused historically on explaining differences among individuals within a population. The following such findings should be generalized only to the sorts of populations studied so far, most of them Western and none extreme in terms of either deprivation or privilege.
IQ is substantially heritable. Heritability (h2) refers to the percentage of observed differences in a trait (in phenotypes) that is due to differences in genes (genotypes). Estimates of the heritability of IQ typically range between .4 and .8 (on a scale from 0 to 1.0). This means that from 40 to 80 percent of the observed differences in individuals' IQs are due to the genetic differences among them and, conversely, that 20 to 60 percent of IQ differences are environmental in origin. Aptitudes measured by the most g-loaded tests are the most heritable. Aptitudes measured by tests of more specific abilities, such as verbal ability and spatial visualization, are moderately heritable, but less so than g.
IQ heritability rises with age. This discovery was a surprise even to behavioral geneticists because, like virtually all social scientists, they had assumed that environmental effects cumulate over a lifetime to reduce the influence of genes. Not so, apparently. The heritability of IQ is about .4 in the preschool years, rises to .6 by adolescence, and increases to about .8 in late adulthood. The reason for this increase is unclear. The major hypothesis, however, is that "genes drive experience" and lead people to seek different social niches. That is, different genotypes tend to choose, create, and elicit different environments in childhood and beyond, which in turn shape intellectual development. For example, bright and dull youth receive different encouragement and opportunities. They also tend to choose different experiences for themselves, especially as they become more independent of parents, teachers, and other authorities. As individuals take a greater hand in shaping their environments, for better or worse, their IQ phenotypes begin to mirror their IQ genotypes more closely. The correlation between IQ phenotypes and genotypes (which is the square root of heritability) rises to .9 by later adulthood.
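The claim that the phenotype–genotype correlation equals the square root of heritability follows directly from an additive model in which the phenotype is the sum of independent genetic and environmental components. A small simulation, assuming h2 = .8 as cited for late adulthood (the model and numbers are illustrative, not a reproduction of any study):

```python
import math
import random

random.seed(0)
h2 = 0.8          # assumed heritability (the late-adulthood figure cited)
n = 100_000

# Additive model: phenotype = genotype + independent environment,
# with variances scaled so genes account for h2 of the total.
G = [random.gauss(0, math.sqrt(h2)) for _ in range(n)]
E = [random.gauss(0, math.sqrt(1 - h2)) for _ in range(n)]
P = [g + e for g, e in zip(G, E)]

def corr(x, y):
    """Pearson correlation of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / len(x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / len(y))
    return cov / (sx * sy)

print(round(corr(P, G), 2))  # ≈ 0.89, the square root of .8
```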
The surprising rise in heritabilities is consistent with the disappointing results of socioeducational interventions (similar to Head Start) that were designed to raise low childhood IQs. To date, all have exhibited fade-out, meaning that the initial improvements in IQ dissipated within a few years. Improvements in more malleable outcomes (such as fewer children being held back a grade) may be observed, but permanent rises in g are not. The same IQ fade-out occurs with genetically at risk children adopted into more advantaged families: By adolescence, their early favorable IQs fall back to the average for their nonadopted biological relatives.
IQ-relevant environments are partly genetic in origin. Social scientists have tended to think of environments as conditions strictly "out there" to which people are passively "exposed." Children's environments correlate with their genes, however, partly because they passively receive both from their parents. People's environments are also heritable to some degree because people choose, make, remake, elicit, and interpret them. Because people's genetic proclivities help shape their environments, real and perceived, behavioral geneticists often refer to people's proximal environments as, in effect, their extended phenotypes. That is, people's near environments are to some degree an extension of themselves because they are partly products of the person's genotype for intelligence, personality, and the like.
When people's environments are studied with the same behavioral genetic techniques as are their psychological traits and behaviors, research consistently shows that rearing environments, peer groups, social support, and life events are, in fact, moderately heritable. For example, one measure of individual infant and toddler rearing environments found that those environments were 40 percent heritable. Moreover, half of the environmental measure's ability to predict cognitive development could be accounted for by that measure's genetic component. In other words, IQ-relevant environments are partly genetic in origin. This is an example of what behavioral geneticists refer to as the operation of nature via nurture.
Shared family effects on IQ dissipate by adolescence. Behavioral genetic research confirms that environments have substantial influence in creating IQ differences. However, providing yet another surprise, the research showed that environmental influences had been completely misunderstood. Psychologists–behavioral geneticists David Rowe (1994) and Sandra Scarr (1997) call this mistaken view, respectively, "family effects theory" and "socialization theory." This is the still widespread but false assumption that differences between families in their socioeconomic circumstances (income, parental education, occupation, and so on) and child-rearing styles (cold, authoritative, and so on) create differences between their children in ability and personality. These presumed effects are called shared or between-family influences because they affect all children in the family in the same way and thus make children in the same families more alike and children in different families less alike.
As it turns out, such shared effects influence IQ (but not personality) in early childhood, but they disappear by adolescence. Nor is it known what these temporary influences are. The only environmental effects that continue to influence IQ beyond adolescence are nonshared or within-family effects on IQ. Nonshared effects are factors that influence one sibling but not others in a family. What they consist of regarding IQ is not yet known, but they could include random biological events, illness, and differential experiences in parent–child or sibling relationships. Nonshared effects help to explain why biological siblings who grow up together are so different in IQ. They differ by about 12 IQ points, on the average, compared to the average 17-point IQ difference between any two random strangers. Much of that difference is due to their genetic differences, however, because biological siblings share, on the average, only 50 percent of their segregating genes.
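The 12-point and 17-point figures can be checked against the normal model of IQ (mean 100, SD 15). For two independent strangers the expected absolute difference is 2σ/√π, about 17 points; siblings land near 12 points if their IQs correlate about .5, roughly the value implied by 50 percent shared segregating genes under a simple additive model. A hedged simulation of both cases:

```python
import math
import random

random.seed(0)
sigma, n = 15.0, 100_000   # IQ scale: mean 100, SD 15

# Strangers: two independent draws from N(100, 15).
strangers = [abs(random.gauss(100, sigma) - random.gauss(100, sigma))
             for _ in range(n)]

# Siblings: modeled with an IQ correlation of .5 (an assumption,
# the value suggested by 50 percent shared segregating genes
# under a simple additive model).
r = 0.5
sibs = []
for _ in range(n):
    shared = random.gauss(0, sigma * math.sqrt(r))
    a = 100 + shared + random.gauss(0, sigma * math.sqrt(1 - r))
    b = 100 + shared + random.gauss(0, sigma * math.sqrt(1 - r))
    sibs.append(abs(a - b))

print(sum(strangers) / n)  # prints a value close to 17
print(sum(sibs) / n)       # prints a value close to 12
```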
The dissipation of "family effects" and the rising influence of genes with age can be seen clearly in adoption research. The IQs of adopted siblings are similar in childhood but not in adolescence. By adolescence, their IQs also cease to resemble the IQs of their adoptive parents but become more like the IQs of the biological parents they have never known.
Special abilities, ECTs, and school achievement have common genetic roots with g. As noted, there are many mental abilities, whether at the level of ECTs, such as choice reaction time, or at the level of group factors, such as verbal ability. However, they all correlate with g. To the extent that they overlap each other and g phenotypically, that overlap is due almost entirely to a common genetic source. Conversely, only a small portion of the genetic component of specific aptitudes—such as verbal skills, memory skills, and speed of processing—is not g-related. The same general pattern is found for the sizable correlation between academic achievement and IQ. To the degree that they correlate, that similarity is almost entirely genetic; to the degree that they diverge, the cause is mostly environmental.
IQ stability is mostly genetic in origin whereas age-to-age change in IQ rank originates mostly in nonshared environments. Rank in IQ relative to agemates is highly stable. Genes and shared environments both contribute mostly to IQ stability rather than to age-to-age change. It is the nonshared environment that causes age-to-age change. Marked change is rare and tends to be idiosyncratic, transient, and difficult to attribute to any particular event.
Cautions in interpreting heritabilities. High heritabilities do not mean that a trait is not malleable. Heritability and malleability are separate phenomena. Certain heritable conditions (such as diabetes) are treatable and certain nongenetic effects (such as those of lead poisoning) are not. All that a high heritability means is that current differences in environmental conditions do not create much intelligence variation beyond that owing to genetic differences. If environments were equalized for everyone, phenotypic variation might be reduced somewhat, but heritability would rise to 100 percent. In contrast, if environments could be individually tailored to compensate for genetic differences (by providing insulin for diabetics, changing the diets of those with phenylketonuria, providing the best education to the least intelligent, and the like), both heritability and variability would fall.
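The logic of these statements is just variance decomposition: heritability is the genetic share of total phenotypic variance, so shrinking environmental variance raises heritability even though nothing genetic has changed. With purely illustrative numbers:

```python
# Illustrative variance components (not from any dataset): genetic
# variance 60, environmental variance 40, so heritability is .6.
Vg, Ve = 60.0, 40.0
print(Vg / (Vg + Ve))  # 0.6

# Equalizing environments shrinks Ve. Total (phenotypic) variation
# falls, but the genetic share -- heritability -- rises toward 1.0.
for Ve_new in (40.0, 20.0, 0.0):
    total = Vg + Ve_new
    print(Ve_new, total, round(Vg / total, 2))
```

Conversely, environments tailored to offset genetic differences would reduce the genetic component of the variance, lowering both heritability and total variability.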
Moreover, heritability is the degree to which genes explain phenotypic variance in a trait, so a high heritability does not rule out shifts in population averages over time. Something that affects everyone can change a group's average without changing its variability. Recent generations have been getting taller, but height is still highly heritable within generations. The same is true for IQ levels, which have been increasing several points a decade this century in developed countries. Both increases are still scientific puzzles, but some scholars have suggested a common explanation—societywide improvements in nutrition, reduction in disease, and the like. Researchers have yet to establish, however, to what extent the rises in IQ reflect increases in the g versus non-g components of mental tests and thus an increase in g itself.
What is clear, however, is that shared family environments that vary within the normal range of family environments in the developed world do not create lasting differences in IQ. Within the normal range of variation, different families have basically the same effects in promoting mental growth. The key to understanding how environments create IQ differences among age peers lies in understanding nonshared effects. These are the environments, whether biological or social, and both within and outside family settings, that affect siblings differently and make them less alike. The shattering of shared effects theory as an explanation for adult differences in IQ is a revolutionary development, albeit one yet to be accepted by many social scientists. The discovery of lasting nonshared influences opens exciting new ways of thinking about IQ-relevant environments. We may have been looking in all the wrong places. Behavioral genetics provides the best tools at present for ferreting out what those nongenetic factors are.
To reiterate a cautionary note, we do not know the effects of environments that are extreme or that do not allow individuals the personal freedoms that most Westerners enjoy. We do not know, either, what the effects of entirely novel environments or interventions would be, whether social or biological. We can predict, however, that any social or educational intervention would have to fall outside the normal range of variation already studied in order to change the distribution of IQs very much. For instance, supplying a typical middle-class family environment to all lower-class children cannot be expected to narrow the average IQ gap between middle- and lower-class adolescents. Middle-class children themselves range across the entire IQ spectrum (as do lower-class children) despite the advantages (or absence thereof) of middle-class life.
Group Differences. There is little scientific debate anymore about whether valid phenotypic differences exist among races, ethnicities, and social classes. Average group differences in IQ are the rule, not the exception, both worldwide and in the United States. To the extent that the matter has been investigated, group IQ differences appear to reflect differences in g itself and are mirrored by group differences in performance on the simple laboratory tasks described earlier.
Group IQ differences can be pictured as the displacement of the IQ bell curves of some social groups somewhat upward or downward on the IQ continuum compared to others. All groups' bell curves overlap greatly; the differences consist in where along the IQ continuum each group is centered. Ashkenazic Jews tend to score as far above average (about IQ 112) as American blacks score below average (about IQ 85), with most other groups spread in between. It should be noted, however, that black cultural subgroups differ among themselves in average IQ, as do the constituent subgroups of Jews, gentile whites, Asians, Hispanics, and Native Americans.
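The degree of overlap implied by these figures can be computed directly, assuming normal distributions with a common SD of 15 (the common-SD assumption is a simplification). Even for the two most widely separated means cited, 85 and 112, over a third of the two curves' area is shared:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Means cited in the text; a common SD of 15 is an assumption.
mu_high, mu_low, sigma = 112.0, 85.0, 15.0

# For equal-variance normals the curves cross midway between the
# means; the overlapping area is the sum of the two tail regions.
mid = (mu_high + mu_low) / 2.0
overlap = phi((mid - mu_high) / sigma) + (1.0 - phi((mid - mu_low) / sigma))
print(round(overlap, 2))  # 0.37
```

Groups with smaller mean differences overlap correspondingly more, which is why mean displacement, not separation of the distributions, is the accurate picture.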
The most contentious debate regarding intelligence is whether average group IQ differences are partly genetic in origin. The favored assumption in the social sciences for the last half-century has been that race differences are entirely environmental. However, research designed to prove this has not done so. It has succeeded in finding environmental factors that might possibly explain at most a third of the American black–white average difference. This failure does not rule out an entirely environmental explanation based on factors yet to be assessed. It does rule out several factors, however, that were once assumed to account for the bulk of the average difference, namely, family income and social class. Large average IQ differences between black and white children are found at all levels of family income and social class.
Behavioral geneticists have recently developed statistical methods for estimating the extent to which average differences among social groups (races, sexes, and so on) might be due to genetic differences among them. Perhaps not surprisingly, few behavioral geneticists have actually applied those methods to available data, and those who have been willing to do so have experienced unusual barriers to publishing their results. As a result, there is little direct published evidence one way or the other. When surveyed in 1988, about half of IQ experts reported a belief that race and class differences result from both genetic and environmental influences. This should be considered a reasonable but unproven hypothesis.
The earlier caution should be repeated here. Research has so far studied only the normal range of environmental variation within any race or ethnic group. American minority children may more often grow up in extremely deprived environments. Studies of very low-IQ Appalachian communities suggest that biologically unhealthy and cognitively retarded family environments can permanently stunt cognitive development.
Some people fear that any evidence of genetic differences between groups would have dire social consequences. This fear is unwarranted. A demonstration of genetic differences would not dictate any particular political reaction. Both liberal and conservative social policy can humanely accommodate such an eventuality, as some policy analysts and behavioral geneticists have illustrated (Kaus 1992; Rowe 1997). Depending on one's politics, for example, genetic differences by race could be used to argue either for forbidding or for requiring racial preferences in education and employment. Moreover, environmentalism and hereditarianism have both on occasion helped undergird tyrannical regimes that practiced mass murder, for example, respectively, the Stalinist Soviet Union and Nazi Germany. Political extremism (or moderation) is neither guaranteed nor precluded by scientific conclusions one way or the other. Scientific facts and political reactions to them are independent issues. Developing effective social policy does depend, however, on working in concert with the facts, not against them, whatever they may be.
SOCIAL CORRELATES AND CONSEQUENCES OF DIFFERENCES IN INTELLIGENCE
Much research has focused on how individuals' own behavior and life outcomes are affected by their intelligence level. There has been little research yet on what may ultimately interest sociologists more, namely, the ways in which interpersonal contexts and social institutions are shaped by the cognitive levels of the individuals populating them.
Individual Level. American adults clearly value intelligence highly because they rate it second only to good health in importance. Differences in intelligence do, in fact, correlate to some extent with just about everything we value, including mental and physical health, success in school and work, law-abidingness, emotional sensitivity, creativity, altruism, even sense of humor and physical coordination. The scientific question, however, is whether differences in intelligence actually cause any of these differences in people's behaviors and outcomes. Or might intelligence as often be their consequence as their cause?
Questions of causality. Most IQ variability is genetic from adolescence on, meaning that it cannot be mostly "socially constructed." Moreover, to the extent that it has nongenetic sources, evidence leans against their being the usual suspects in social research (parents' income, education, child-rearing practices, and the like). If intelligence is not caused (much) by its major social correlates, does it cause them?
Pieces of an answer are available from experimental and quasi-experimental research conducted by educational, employment, and training psychologists in public, private, and military settings for more than a half-century. Differences in prior mental ability are strong—in fact, the strongest—predictors of later performance in school, training, and on the job when tasks are at least moderately complex. Moreover, the correlations are stronger with objectively measured than with subjectively measured performance outcomes (for example, standardized performance tests rather than teacher grades or supervisor ratings). The military services also have extensive experience attempting to nullify the effects of ability differences on recruits' later performance in training and on the job. Their failed attempts testify to the stubborn functional import of such differences—as does the failure of lengthy job experience to neutralize differences in worker intelligence.
IQ is moderately to highly correlated with a nexus of good outcomes—higher education, high-status jobs, and income growth over a career. In view of the g-loadedness of the educational and occupational worlds, it would be surprising were IQ not found to be an important precursor to these outcomes. IQ is, in fact, the best predictor of later educational level attained, and it helps predict occupational status and income growth even after controlling for education and family background.
IQ is also correlated to varying degrees (negatively) with a nexus of bad outcomes—dropping out of school, unemployment, incarceration, bearing illegitimate children, dependence on welfare, and living in poverty as an adult. This nexus of social pathology has been the focus of recent lively debates about the role of intelligence, where protagonists typically pit intelligence against an array of external factors, including various aspects of family background, to see which is the stronger predictor. Intelligence generally equals or exceeds the predictive ability of any small set of such variables, although the relations tend to be modest in both cases. One possible explanation for the relation of IQ to social pathology is that lack of socioeconomic competitiveness may precipitate a downward social spiral.
However, IQ may play a direct role, too. Committing crimes, bearing illegitimate children, and other such personal behavior may result in part from errors of judgment in conducting one's life, perhaps due in part to lack of foresight and ability to learn from experience. Conversely, higher g may help insulate people from harmful environments. Research has shown, for instance, that higher intelligence is a major attribute of "resilient" children, who prosper despite terrible rearing conditions, and of those who avoid delinquency despite living in delinquent environments. The hypothesis is that their greater ability to perceive options and solve problems constitutes a buffer.
Either the genetic or nongenetic components of phenotypic intelligence might be responsible for its causal impact. Because intelligence is highly heritable, it is reasonable to assume that its causal impact is mostly due to its genetic component. This has, in fact, been found to be the case with its effect on standardized academic achievement, which shares all its genetic roots with IQ. Similar multivariate genetic analyses are now accumulating for various socioeconomic outcomes that depend on mental competence. Educational and occupational level are both moderately heritable, with estimates (for males) ranging from .4 to .7 for education and .3 to .6 for occupation. Part of that genetic portion overlaps the genetic roots of IQ. In the best study so far (Lichtenstein and Pedersen 1997), occupational status was more than half genetic in origin. Half of that genetic portion was shared jointly with the genetic roots of both IQ and years of education, and half was independent of both. The remaining variability in phenotypic occupational status was split between nonshared environmental effects that (1) were shared with education (but not IQ) and (2) were unique to occupation.
One of the biggest confusions in the debate over the causal role of intelligence results from the mistaken equating of intelligence with genetic factors and of social class with nongenetic factors by some of its most visible protagonists. While the former assumption has some justification owing to the high heritability of g, it nonetheless muddies the conceptual waters. The latter assumption is even less warranted, however, because many social "environments" turn out to be moderately genetic. All social "environments" must now be presumed partly genetic until proven otherwise. Because it is not genetically sensitive, virtually all current research on the effects of social and family environments is actually uninterpretable. Progress in the causal analysis of environments and their relation to g will come only when more social scientists begin using genetically sensitive research designs.
Principles of importance. Although the causal role of intelligence has yet to be clarified, research leaves no doubt that people's life chances shift markedly across the IQ continuum. Those shifts in specific life arenas will be discussed later, but it would help first to state four principles that summarize what it means for intelligence to have practical importance in individuals' lives.
First, importance is a matter of better odds. Being bright is certainly no guarantee of happiness and success, nor does being dull guarantee misery and failure. Most low-IQ people marry, work, have children, and are law-abiding citizens. Being brighter than average does, however, systematically tilt the odds toward favorable outcomes. Higher levels of intelligence always improve the odds, sometimes only slightly but often substantially, depending on the outcome in question.
Second, importance varies systematically across different settings and life arenas. Intelligence level tilts the odds of success and failure more in some arenas of life (such as academic achievement) than others (such as good citizenship). For instance, the correlations of IQ with years of schooling completed (.6) and with composites of standardized academic achievement (.8) are over twice the size of its correlations with delinquency (−.2 to −.3). Correlations in the same life arena can also vary depending on the complexity of the tasks involved. For instance, correlations of job performance with test scores range from .2 in unskilled work to .8 in the most cognitively demanding jobs.
Third, importance is relative to other known influences and one's particular aims. Many personal traits and circumstances can affect the odds of success and failure in different arenas of life. Intelligence is never "everything" in the practical affairs of life. Depending on the outcome in question, personality, experience, peers, family background, and the like can tilt the odds of success, sometimes more than intelligence does and sometimes less. As noted earlier, IQ predicts standardized achievement better than it does persistence in education, probably because personality and circumstances affect the latter much more than the former. Weak prediction at the individual level does not mean the predictor is unimportant in a pragmatic sense, as illustrated by the relation between delinquency and social class. The correlation is generally below −.2 but usually thought quite important for policy purposes, as is the similarly low correlation at the individual level between smoking and various health risks.
Fourth, importance is cumulative. Small individual effects can be quite important when they cumulate across the many arenas and phases of one's life. Many of g's daily effects are small, but they are consistent and ubiquitous. Like the small odds favoring the house in gambling, people with better odds win more often than they lose and can thus gradually amass large gains. Likewise, although smart people make "stupid" mistakes, they tend to accumulate fewer of them over a lifetime. Although the odds of any particular unfavorable outcome may not always be markedly higher in the lower IQ ranges, lower-IQ people face worse odds at every turn in life, meaning that their odds for experiencing at least one destructive outcome may be markedly higher.
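The cumulative point can be made concrete with a small probability sketch. The per-arena risk figures below are hypothetical, chosen only to illustrate how modestly worse odds, compounded across several independent arenas of life, produce a markedly higher chance of at least one destructive outcome:

```python
# Hypothetical per-arena risks of an adverse outcome (illustrative values only,
# not figures from the research literature)
lower_iq_risks  = [0.10, 0.12, 0.08, 0.15, 0.10]
higher_iq_risks = [0.04, 0.05, 0.03, 0.06, 0.04]

def p_at_least_one(risks):
    """P(at least one bad outcome) = 1 - product of (1 - p_i), assuming independence."""
    p_none = 1.0
    for r in risks:
        p_none *= 1 - r
    return 1 - p_none

print(f"{p_at_least_one(lower_iq_risks):.2f}")   # 0.44
print(f"{p_at_least_one(higher_iq_risks):.2f}")  # 0.20
```

Even though no single risk here exceeds 15 percent, the compounded chance of at least one bad outcome more than doubles across the two hypothetical groups.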
Education and training. Schooling is the most g-loaded setting through which citizens pass en masse. Its unremitting demand is to learn and, moreover, to learn increasingly complex material as young people progress through it. It therefore highlights the intellectual distinctions among citizens better than does any other life setting and in ways plainly visible to the layperson. To be sure, schools enhance everyone's cognitive development, but they currently seem to have little impact on making people either more alike or less alike in intelligence. Sociologist Christopher Jencks estimated in 1972 that if quantity and quality of schooling were made identical for everyone, such equalization would reduce the variance in test scores by only 20 percent. When Poland's Communist government rebuilt Warsaw after World War II, it allocated housing, schools, and health services without regard to residents' social class. This far-reaching equalization of environments did little or nothing either to equalize the IQs of the next generation of children or to reduce the correlation of their IQs with parental education and occupation (Stein, Susser, and Wald 1978).
As already noted, brighter students and military recruits learn much more from the same learning opportunities and often require less than one-fourth the exposure that their less able peers do for the same degree of learning. This difference in ability to capitalize on learning opportunities also greatly influences the maximum level of attainment youngsters are likely to reach. People with an IQ of 75 (the threshold for mental retardation) have roughly only a 50–50 chance of being able to master the elementary school curriculum; an IQ of about 105 is required for the same odds of getting grades good enough in high school to enter a four-year college; and an IQ of 115 is required for 50–50 odds of doing well enough in college to enter graduate or professional school.
Figure 3, like Figure 1, summarizes accumulated employer experience about the most effective sorts of training for people at different ranges of IQ. Figure 3 is based on research with the Wonderlic Personnel Test (WPT), a short group intelligence test. Above the WPT equivalent of IQ 115–120 (which includes about 10 to 15 percent of the general white population), people can basically train themselves; the middle half of the IQ distribution (IQ 91–110) can learn routines quickly with some combination of written materials, experience, and mastery learning; but people below IQ 80 (10 percent of the general white population) require slow, concrete, highly supervised, and often individualized training. The military is prohibited by law from inducting anyone below this level because of inadequate trainability, and current minimum standards exclude anyone below the equivalent of IQ 85.
Employment. Many studies have found that the major distinction among occupations in the U.S. economy is the cognitive complexity of their constituent tasks. The most complex jobs are distinguished by their greater requirements for dealing with unexpected situations, learning new procedures and identifying problems quickly, recalling task-relevant information, reasoning and making judgments, and similar higher-order thinking skills that are prototypical of intelligence. Other job attributes that correlate highly with the occupational complexity factor include writing, planning, scheduling, analyzing, decision making, supervising, negotiating, persuading, instructing, and self-direction. So, too, do responsibility, criticality, and prestige. As already noted, mental ability test scores correlate most highly with performance in the most complex jobs. That is, differences in intelligence have a bigger impact on performance—"more bang for the buck"—when work is more g-loaded.
Not surprisingly, then, an occupation's overall complexity level correlates extremely highly with the average IQ of its incumbents. As Figure 3 illustrates, all occupations draw applicants from a wide range of IQ, but the minimum and average IQ levels rise with job level. Although wide, the typical IQ recruitment ranges for different occupations do not overlap at the extremes of job level (professional versus unskilled). The average IQ of applicants to middle-level jobs (about IQ 105), such as police work, is 15 IQ points (one standard deviation) lower than for applicants to professional jobs but 15 IQ points higher than for applicants to semiskilled work, such as food service worker. No occupation seems to recruit its workers routinely from below IQ 80.
The foregoing results for jobs suggest a high practical utility for higher intelligence in other aspects of life. Many jobs (child care, sales, accounting, teaching, managing) pose the same mental challenges (persuading, instructing, organizing, and ministering to people) that pervade nonpaid activities (parenting, home and financial management, civic responsibilities, friendships, and so on).
Daily life. Daily life has become considerably more complex during the twentieth century. Increased size, bureaucratization, and regulation of social institutions and services, together with greater reliance on continually changing information technologies, have greatly increased the cognitive complexity of daily life. Life may be physically easier, healthier, and more pleasant today, but it has become mentally more challenging in developed societies. Some of this complexity is captured well by the U.S. Department of Education's 1992 National Adult Literacy Survey (NALS; Kirsch et al. 1993). Although the NALS was not designed as an intelligence test, it closely mimics the key attributes of an IQ test battery: Its intent was to measure complex information-processing skills by sampling a broad range of tasks from universally relevant contexts and contents; the relative difficulty of its items stems from their complexity, not their manifest content; and its three scales reflect one general factor.
Figure 4 illustrates items at different levels of the three NALS subscales; Figure 1 translates the NALS scores into IQ equivalents. These items do not involve esoteric "book smarts" but represent practical, everyday skills in dealing with banks, restaurants, transportation systems, and social agencies; understanding the news and one's options; and providing basic information about oneself. Nonetheless, about 15 percent of white adults and 40 percent of black adults routinely function no higher than Level 1 (225 or less), which corresponds to 80 percent proficiency in skills such as locating an expiration date on a driver's license and totaling a bank deposit. Another 25 percent of whites and 36 percent of blacks routinely function no higher than Level 2 (226–275), which includes proficiency in such skills as locating an intersection on a street map, entering background information on an application for a Social Security card, and determining the price difference between two show tickets.
These are examples of the myriad daily tasks that require some independent learning and reasoning as one navigates life. None may be critical by itself, but the more often one fails such tasks, the more one is hampered in grasping opportunities, satisfying one's needs and desires, and assisting family and friends. A national education panel concluded, in fact, that Level 1 and 2 skills are not sufficient for competing successfully in a global economy or exercising fully the rights and responsibilities of citizenship. Consistent with this, the NALS study found that, compared to adults with Level 5 skills (376–500, reached by about 4 percent of whites and less than 0.5 percent of blacks), adults with Level 1 skills were five times more likely to be out of the labor force, ten times more likely to live in poverty, only 40 percent as likely to be employed full time, and 7 percent as likely to be employed in a managerial or professional job—if employed at all.
Two daily activities where mental competence may have life-and-death implications are driving and health behavior. A large longitudinal study of Australian servicemen found that the death rate from motor vehicle accidents for men with IQs above 100 (52 per 10,000) was doubled at IQ 85–100 (92 per 10,000) and tripled at IQ 80–85 (147 per 10,000). The study authors suggested that the higher death rates might be due to poorer ability to assess risks. Medical research has likewise documented that many nonretarded patients have difficulty reading labels on prescription medicine and following simple physician instructions about self-care and future appointments.
Nexus of social pathology. Table 2 shows how the odds of social pathology fall (or rise) the further one's IQ exceeds (or falls below) the average IQ. It shows the percentages of young white adults in five successive IQ ranges who experience certain bad outcomes. As shown, the odds of incarceration, illegitimate births, poverty as an adult, and the like all double at each successively lower IQ range. The ratios in the last column show how the odds of bad outcomes thus differ greatly even for people who are only somewhat below average (IQ 76–90) versus somewhat above average (IQ 111–125) in IQ. Among these young white adults, for instance, 17 percent of the former IQ group versus only 3 percent of the latter live in poverty as adults, for a ratio of about 5:1. The odds are less discrepant for some bad outcomes (3:2 for divorce and unemployment) but more discrepant for others (7:1 for incarceration and 88:1 for dropping out of high school). The disparities in odds across IQ groups are even more extreme at the extremes of IQ. Good and bad outcomes can be found at all IQ levels, but what is typical differs enormously, as was also illustrated with the NALS data.
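The ratios in Table 2's last column can be recovered directly from the "dull" (IQ 76–90) and "bright" (IQ 111–125) columns. A minimal sketch using three of the reported percentages:

```python
# (dull, bright) percentages as reported in Table 2 (Herrnstein and Murray 1994)
outcomes = {
    "lives in poverty": (16, 3),
    "ever incarcerated (men)": (7, 1),
    "high school dropout": (35, 0.4),
}
for name, (dull, bright) in outcomes.items():
    # Ratio of the odds of the outcome in the two IQ ranges
    print(f"{name}: {dull / bright:.1f}:1")
```

Rounded, these reproduce the roughly 5:1, 7:1, and 88:1 ratios cited above.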
Moreover, the odds of dropping out of school, illegitimate births, poverty, and welfare dependence all increase with lower IQ among siblings within the very same family and even when the families are intact and not poor. There is something about below-average IQ itself that puts individuals at serious social risk, whatever their family circumstances.
Overall life chances. Figure 1 and Table 2 together paint a vivid picture of how greatly overall life chances differ by IQ level. People with IQs below 75 are clearly in the "high-risk" zone, where trainability and employability are very low and the odds of various social pathologies are much elevated. Although risks fall substantially for individuals with IQs only somewhat below average (IQ 76–90), these people still face an "uphill battle" because they are not very competitive for many training programs and jobs. The middle 50 percent of the population (IQ 91–110) is competitive for many of a modern economy's jobs but likely only to be just "keeping up" relative to others. Their brethren of somewhat above-average IQ (IQ 111–125) are more likely "out ahead" socioeconomically because they are highly trainable and competitive for better jobs. Their rates of pathology are also very low. People with IQs above 125 are so competitive cognitively and so seldom hobbled by g-related social pathology that socioeconomic success is truly "theirs to lose."
Interpersonal Context. One of the most fascinating questions in the study of intelligence has received virtually no attention: How does the mix (average and variability) of intelligence levels in a setting—its IQ context—affect behavior in that setting? How might one's fate be affected by the intelligence level of the other people in one's interpersonal settings—of one's parents, siblings, neighbors, friends, and other close compatriots?
The basic issue is this: A difference in IQ of one standard deviation (about 15 points) is socially perceptible and meaningful. Interpersonal communication becomes fraught with increasing difficulty beyond this distance because of larger gaps in vocabulary, knowledge, and ability to draw inferences or "catch on," as well as the emotional discomfort such gaps create. Figure 1 reveals how IQ ranges of about one standard deviation also mark off substantial differences in options for education, training, and career, and thus the likelihood of entering different social niches. As shown in the figure, the normal range of intelligence (IQ 70–130, which includes roughly 95 percent of the general white population) spans four standard deviations of IQ. Socially and cognitively, that is an enormous difference. How, then, do people communicate and congregate across the IQ continuum in their daily lives? The average difference between siblings and spouses is about 12 IQ points, which means that most people in a biological family fall within the range of ready cognitive communicability. Any two random people in the population, however, differ by 17 IQ points, which represents the borderline for communicating effectively and as social equals.
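Both figures in this paragraph follow from the normal distribution of IQ scores (mean 100, standard deviation 15). A quick numerical check, assuming the two members of a random pair are drawn independently:

```python
import math

SD = 15  # IQ standard deviation

# Share of a normal population within +/-2 SD of the mean (IQ 70-130)
within_2sd = math.erf(2 / math.sqrt(2))
print(f"{within_2sd:.3f}")  # 0.954, i.e. roughly 95 percent

# Expected absolute gap between two people drawn at random:
# X - Y is normal with SD sqrt(2)*15, and E|Z| = SD(Z) * sqrt(2/pi)
expected_gap = math.sqrt(2) * SD * math.sqrt(2 / math.pi)
print(f"{expected_gap:.1f}")  # 16.9, close to the 17 points cited above
```

The sibling and spouse figure (about 12 points) is smaller than this random-pair expectation precisely because family members are not independent draws from the population.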
Communication, cooperation, and reciprocity. The ability to communicate as equals constitutes a social tie, as does the ability to trade information and assistance. Such reciprocity is the basis of longer-term cooperation. Lack of reciprocity creates not only social distance but also animosity where reciprocity had been expected. There are many bases for cooperation and reciprocity, but sharing information and helping to solve problems is crucial in many settings. Ethnographic studies of middle school children, for instance, show how patterns of mutual assistance and friendship, rather than resentment and unwillingness either to provide help to classmates or to seek it from them, evolve from similarities and differences in students' competence in answering homework and test items. Similar g-driven interpersonal relations can be expected in many workgroups and other settings in which teammates depend on one another for technical competence.

Table 2. Percentage of Young White Adults with Particular Life Outcomes, by IQ Level

| Life outcome | IQ 75 and below ("very dull") | 76–90 ("dull") | 91–110 ("normal") | 111–125 ("bright") | Over 125 ("very bright") | Ratio of "dull" to "bright" |
|---|---|---|---|---|---|---|
| Out of labor force 1+ mo/yr (men) | 22 | 19 | 15 | 14 | 10 | 4:3 |
| Unemployed 1+ mo/yr (men) | 12 | 10 | 7 | 7 | 2 | 3:2 |
| Divorced in 5 yrs | 21 | 22 | 23 | 15 | 9 | 3:2 |
| % of children below IQ 75 (mothers) | 39 | 17 | 6 | 7 | – | 2:1 |
| Had illegitimate child (women) | 32 | 17 | 8 | 4 | 2 | 4:1 |
| Lives in poverty | 30 | 16 | 6 | 3 | 2 | 5:1 |
| Went on welfare after first child (women) | 55 | 21 | 12 | 4 | 1 | 5:1 |
| Ever incarcerated/doing time (men) | 7 | 7 | 3 | 1 | 0 | 7:1 |
| Chronic welfare recipient (mothers) | 31 | 17 | 8 | 2 | 0 | 8:1 |
| High school dropout | 55 | 35 | 6 | 0.4 | 0 | 88:1 |

Source: Herrnstein and Murray (1994): (respectively) 158, 163, 174, 230, 180, 132, 194, 247/248, 194, 146.
People of markedly different ability levels also tend to have different interests, which further impedes their ability to develop rapport. Assortative mating studies show that individuals explicitly seek mates of similar IQ levels and that spouses' IQs are, in fact, moderately correlated (about .4), perhaps more so than any other personal characteristic (except gender). Cognitive incompatibility is certainly responsible for the extreme social isolation often experienced by both the mentally retarded and the highly gifted. Extremely gifted children, who may be four standard deviations or more above average (IQ 160 and above), often feel, and are treated as, alien. These children are as different from the borderline gifted (IQ 130) as the latter are from the average child (IQ 100). With extraordinary vocabularies for their age, the highly gifted speak virtually a different language from their agemates. Although less extreme, the same type of alienation develops across much smaller gaps in IQ. In short, cognitive similarity seems to affect the formation of social bonds, which themselves are the building blocks of "social structure."
Social separation and segregation. Because rough similarity in g promotes interpersonal reciprocity and rapport, it should not be surprising that people segregate themselves somewhat by cognitive ability when free to do so, marriage being the most intimate example. Segregation occurs along IQ lines for other reasons as well, many related to the functional value of intelligence in obtaining higher education and better work.
In the typical school, students enter grade 1 spanning at least mental ages four to nine, which translates quickly into markedly different grade-equivalent achievement levels. By reducing g variability within learning groups, ability grouping and tracking represent schools' attempt, albeit a perennially controversial one, to accommodate students of different cognitive levels. Its pedagogical merits aside, grouping reinforces friendships within IQ ranges and is but the first of many ways by which schools and employers direct individuals toward different occupational and income groups, and thence into residential neighborhoods, partly along IQ lines.
A 1933 epidemiological survey in New York City documented that the average IQ levels of white school children across a large sample of the city's 273 Health Areas ranged from 74 to 118, a range of three standard deviations. The parents of these children would differ even more in average IQ. Consistent with genetic expectations, parents of any ability level produce children at virtually all ability levels, but their children's average IQ is closer to the population average than is their own.
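That intergenerational pull toward the average can be sketched with the standard regression-to-the-mean formula. The slope of 0.6 used here is an illustrative assumption, not a figure from the source:

```python
MEAN = 100   # population mean IQ
SLOPE = 0.6  # illustrative midparent-to-child regression slope (assumed)

def expected_child_iq(midparent_iq):
    # Children's expected IQ lies between their parents' level and the mean
    return MEAN + SLOPE * (midparent_iq - MEAN)

print(expected_child_iq(130))  # 118.0: bright parents, children nearer 100
print(expected_child_iq(80))   # 88.0: duller parents, children also nearer 100
```

Individual children scatter widely around these expectations, which is why parents of any ability level produce children at virtually all ability levels.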
Social clustering along IQ lines can be expected to increase familiarity, communication, and mutual assistance by enhancing within-group similarity, at least when the groups are minimally competent. Enhanced similarity can elevate the risks of low IQ, however, when IQ clustering results in a critical mass of individuals below some critical threshold in IQ. That threshold may be IQ 75, which is the level below which individuals need considerable assistance from family, friends, or social agencies to live independently in modern societies. When critical mass is reached in a family or community, networks of competent help become overwhelmed by sticky webs of bad judgment, which in turn produce a physically unhealthy and socially dysfunctional environment for all members, as sympathetic social anthropologists have documented.
In any case, greater within-group similarity produces greater between-group dissimilarity and distance. A contested but reasonable hypothesis of Richard Herrnstein and Charles Murray's 1994 book, The Bell Curve, is that society is becoming increasingly stratified along cognitive lines, jeopardizing national unity. That specter raises much anxiety in democratic societies, perhaps accounting for the quick distaste the thesis roused in many quarters. Any societal divisions that g creates would, however, be softened somewhat by g's genetic basis. The laws of genetics guarantee that many children will differ substantially from their parents, producing intergenerational mobility across IQ and social-class lines and thereby assuring some cross-group ties. Whether or not it is increasing over time, and however permeable it may be, social clustering by g is nonetheless considerable. It is therefore a perennial matter of public debate, whether the question be where to locate Section 8 or other public housing or how to integrate social classes and races in educational settings.
Social networks and subcultures of attitudes, behavior, and knowledge. The Bell Curve's thesis about the dangers of cognitive stratification rests on its assumption that different cognitive strata create distinct and somewhat discordant cultures. Sociologist Robert A. Gordon (1997) has outlined at the level of small groups how different IQ contexts do actually represent different subcultures. These different subcultures in turn expose their members to different experiences, risks, knowledge, opinions, assistance, and expectations, as suggested earlier. IQ-discrepant subgroups, for example, differ not so much in the social ideals they espouse as in tolerance for their violation. They also differ in the degree to which they diffuse news and information from the broader culture rather than propagate rumor, misinformation, and even the AIDS virus.
The New York City neighborhoods mentioned earlier differed not only in IQ but also in rates of birth, death, infant mortality, and juvenile delinquency, illustrating that different IQ contexts probably constitute notably different social milieus for developing children. Children of, say, IQ 100 surely live different lives with different opportunities when raised in IQ contexts of 85 versus 115, both of which are common in the United States. Not only is such a child substantially above average in the first context while below average in the second, which creates its own opportunities and obstacles for the child, but there are also significant differences across the two contexts in the quality of ambient advice, information, and personal examples. Children's IQ levels seem not to be permanently affected by their IQ contexts, but their more malleable behaviors and outcomes may be, as studies of youthful career aspirations and delinquency suggest. Epidemiological analyses of the g-related contagion of certain risky health and social behaviors would further illuminate how risks rise or fall according to the level of "local intelligence" in which one is embedded.
Societal Level. The interpersonal contexts that influence an individual's behavior are themselves shaped partly by the g levels of the people inhabiting them, as just described. IQ contexts thus represent an impact of g on an individual level that is above and beyond the effects of that individual's own IQ. IQ contexts have "macro" as well as "micro" effects in a society, however, because they create gradients of information flow, status and stigma, power and influence across a nation. These societal-level effects of g, via IQ contexts, may be the most important of all for a society, and they cry out for sociological analysis. Only a few such analyses have been done, but they illustrate the promise of a sociology of intelligence.
Evolution of social structures. If knowledge is power, then brighter people can be expected to advance further in any society freely allowing its accumulation. What is less obvious, except in hindsight, is that the routes to success may themselves be shaped by enduring variation in g within a population. Wide dispersion in g is a biological fact that all societies must accommodate. What norms and institutions evolve to promote such accommodation, especially where g has high functional value?
Consider the occupational hierarchy, that gradient of occupations from high to low in status, income, and educational requirements, which sociologists have shown to be replicated around the world. The consequences for individuals of their placement in it are clear, but its evolution is not. As described earlier, the major dimension underlying the hierarchy seems to be the complexity, not the content, of the tasks comprising the occupations arrayed along it. The occupational hierarchy is, then, a set of stable task configurations ranked in desirability according to their g-loadedness.
The structural question is how tasks gradually become sorted over time by their g-loadings into more g-homogeneous sets within occupations, thereby creating sharper distinctions in g-loading between occupations. This segregation of tasks by g-loading into a g-based occupational hierarchy most likely gradually arises from the natural sorting and reassignment of people and tasks to each other in the effort to improve aggregate performance of an organization's or society's essential functions. When workers are sorted more consistently by g level into occupations, occupational content can evolve to better fit the typical incumbent. For example, employers can gradually remove easy tasks from, and add complex tasks to, jobs whose usual incumbents are bright, and do the opposite for jobs typically peopled by less bright workers (Gottfredson 1985).
Of course, g is hardly the only contributor to job performance, and job performance is not the only basis for how work and workers are organized in firms and societies. But to the extent that g is the most functionally important worker attribute overall and that people become sorted to work by g level, there will arise a g-based occupational hierarchy whose distinctions gradually expand or contract when the g-related efficiency of sorting workers rises or falls. This theory illustrates how the biological fact of differences in g can constrain the evolution of social institutions. That biological fact clearly rules out common utopian fantasies in which all citizens are assigned, rotated through, or ascend to jobs of equal difficulty and status.
Racial politics. When two social groups differ substantially in average g and g has functional value, they can also be expected to differ in g-related outcomes. The average difference in outcomes will depend on, among other factors, the size of the average group difference in g and the g-loading of the outcome in question. The g-generated differences in outcome have many sociopolitical reverberations, because they are pervasive, frequently large, and sometimes involve races once subjugated. The societal-level reverberations have the power to alter many aspects of a nation's culture. This can be illustrated by the national effort in recent decades to eliminate racial disparities in education and employment despite continuing racial disparities in g.
A key practical dilemma for educators and employers is that unbiased, valid measures of mental ability are generally the best predictors of school and job performance but, owing to phenotypic differences in g across racial groups, they have considerable disparate impact. That is, they screen out disproportionate numbers of candidates from some races. Unless group disparities in g are eliminated, there will continue to be a trade-off between selecting the most able applicants and selecting a racially balanced student body or work force, especially in highly g-loaded settings such as graduate school and the professions. In both employment law and public perceptions, unequal selection rates by race constitute prima facie evidence of illegal discrimination, often making it risky to use g-loaded predictors.
This combination of scientific facts and legal constraints has precipitated in personnel selection psychology a desperate but unsuccessful search for non-g substitutes for mental tests. There turns out to be no substitute for higher-order thinking skills. This failure created additional pressure on the field to reduce employers' legal vulnerability while retaining mental tests by instituting racial preferences. Eventually the U.S. Congress banned the most efficient such "solution" as an undisguised quota (the race-norming of employment tests, which means ranking applicants on separate racial curves). That ban in turn increased the pressure to covertly reduce or eliminate the g component of tests (to remove crucial mental demands), the results of which led to enormous controversy—and litigation—in personnel selection psychology. The same controversial effort to reduce the g-loading of employee selection criteria is now occurring for college admissions in states where racial preferences have been banned or might be. Being the most g-loaded predictor of student performance, the SAT has been the first target. In short, g-related group differences in outcomes have long been driving widespread changes in standards for admission, hiring, promotion, and more, sometimes improving selection and sometimes not, but always causing controversy.
Selection psychology is only one microcosm for observing the sorts of societal waves created by g-related group disparities. Virtually every school practice, from instruction and grouping to discipline to teacher assignment and funding, has been modified in recent decades to neutralize either the reality or the appearance of racial differences in phenotypic intelligence and their real-world effects. Keen disappointment at the failure of these modifications to accomplish that neutralization has itself sparked mutual recriminations between blacks and whites, led to more expansive definitions of discrimination and racism, and in many other ways shifted national politics. As is apparent, the societal-level ramifications of group differences in g hinge critically not only on how large they are and whom they affect but also on how a society explains and reacts to the differences.
Inequality and the democratic paradox. A population's IQ bell curve may bunch up or spread out somewhat with environmental change, and it may shift a bit up or down the IQ continuum over time. Nonetheless, it will remain as much a biological fact as are differences in height. The bell curves for different demographic groups may also shift somewhat relative to each other along the IQ continuum, but gaps will likely persist.
As indicated in Figure 1, the IQ continuum represents a gradient of functional advantage for the individuals and groups arrayed along it. Happiness and regard may be available to all, but money, power, and prestige all tend to flow up the continuum, especially in a free society. Accordingly, envy flows up and stigma down. The IQ continuum is thus a strong current deep within the body politic that influences its surface dynamics in myriad ways and can frustrate efforts to steer a society in certain directions. Perhaps for this reason, political efforts to regulate or defy those dynamics have sometimes been violent in spirit if not in act. A 1980 analysis of genocides earlier in the century found that all but one of the targeted groups (Gypsies) were of apparently higher average intelligence than those seeking to exterminate them, for instance, the Jews in Germany, Armenians in Turkey, Ibos in Nigeria, and the educated classes in Cambodia.
Any humane society will moderate the effects of unequal biological and social advantage, preventing unbridled competition and the degradation of its weaker members. If resources naturally flow up the IQ continuum, societies can consciously redistribute some of them back down it—in a word, by helping. Such is the realm of charity and, increasingly, social policy, although such measures are seldom conceived in terms of helping the less "able" because that in itself would be stigmatizing. More often today, help is couched in terms of assisting the "deprived," as though all social inequality were the result of some social groups illegitimately expropriating from others what would have otherwise naturally accrued to them. Some inequality may be, but much is not.
Extreme egalitarianism is as problematic, however, as unbridled individualism, for it hobbles talent and deadens ambition. John Gardner outlined the trade-offs between promoting individual merit and equalizing social outcomes in his 1984 book, Excellence: Can We Be Equal and Excellent Too? In that eloquent little book, he asked the question that writers from both the political left and right have since tried to answer in more detail: How can we create a valued place for people of all ability levels and bring out the best in all? The proffered answers differ primarily in the difficult trade-offs the authors settle for among personal liberty, equality of socioeconomic outcomes, and an emphasis on human excellence and productivity, three principles that are somewhat inconsistent owing to meaningful differences among people.
Such are the political dilemmas that the deep current of g inevitably creates, yet the debates over their resolution seldom seem cognizant of their roots in human variation. Democracy is itself a social leveler because it grants political equality to people who are in numerous ways biologically unequal. But this strength is also its torment, because democracy excites the desire for yet more leveling, to which biological inequalities—especially intelligence differences—pose an obstacle. Mother Nature is no egalitarian. As Alexis de Tocqueville observed almost 200 years ago ([1835, 1840] 1969), "When there is no more hereditary wealth, class privilege, or prerogatives of birth, and when every man derives his strength from himself alone, it becomes clear that the chief source of disparity between the fortunes of men lies in the mind . . . [T]here would still be inequalities of intelligence which, coming directly from God, will ever escape the laws of man" (pp. 457–458, 538).
Biological diversity in g is a core challenge to democratic societies and to the scholars who are responsible for helping citizens understand how their society works. The challenge is exacerbated as technology advances, because such advance favors higher-g over lower-g people owing to their better ability to capitalize on it. Western democracies view democracy and technology as their twin engines of progress, however, and so haplessly seek solutions to inequality by pursuing yet more of both. That is the democratic paradox. The answer to the dilemma lies not in pursuing the opposite strategy—that is, curtailing both democracy and technology, as is sometimes hinted—but most likely in better understanding how differences in g orchestrate and constrain social life, to the extent that they do. For sociologists of intelligence, there is much to do.
Anastasi, A. 1996 Psychological Testing. New York: Plenum.
Brown, H., R. Prisuta, B. Jacobs, and A. Campbell 1996 Literacy of Older Adults in America: Results from the National Adult Literacy Survey. NCES 97-576. Washington, D.C.: U.S. Department of Education, National Center for Education Statistics.
Carroll, J. B. 1993 Human Cognitive Abilities: A Survey of Factor-Analytic Studies. New York: Cambridge University Press.
——1997 "Psychometrics, Intelligence, and Public Perception." Intelligence 24(1):25–52.
de Tocqueville, A. (1835, 1840) 1969 Democracy in America. New York: Harper Perennial.
Fox, W. L., J. E. Taylor, and J. S. Caylor 1969 Aptitude Level and the Acquisition of Skills and Knowledges in a Variety of Military Training Tasks. (Technical Report 69-6). Prepared for the Department of the Army by the Human Resources Research Office. Washington, D.C.: The George Washington University.
Gardner, H. 1983 Frames of Mind. New York: Basic Books.
Gardner, J. 1984 Excellence: Can We Be Equal and Excellent Too? New York: Harper.
Goleman, D. 1995 Emotional Intelligence. New York: Bantam.
Gordon, R. A. 1997 "Everyday Life as an Intelligence Test: Effects of Intelligence and Intelligence Context." Intelligence 24(1):203–320.
Gottfredson, L. S. 1985 "Education as a Valid but Fallible Signal of Worker Quality: Reorienting an Old Debate About the Functional Basis of the Occupational Hierarchy." In A. C. Kerckhoff, ed., Research in Sociology of Education and Socialization, vol. 5. Greenwich, Conn.: JAI Press.
Gottfredson, L. S. 1997a "Mainstream Science on Intelligence: An Editorial with 52 Signatories, History, and Bibliography." Intelligence 24(1):13–23.
Gottfredson, L. S. 1997b "Why Intelligence Matters: The Complexity of Everyday Life." Intelligence 24(1):79–132.
Herrnstein, R. J., and C. Murray 1994 The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.
Jencks, C., M. Smith, H. Acland, M. J. Bane, D. Cohen, H. Gintis, B. Heyns, and S. Michelson 1972 Inequality: A Reassessment of the Effect of Family and Schooling in America. New York: Harper and Row.
Jensen, A. R. 1998 The g Factor: The Science of Mental Ability. Westport, Conn.: Praeger.
Kaufman, A. S. 1990 Assessing Adolescent and Adult Intelligence. Boston: Allyn and Bacon.
Kaus, M. 1992 The End of Equality. New York: Basic Books.
Kirsch, I. S., A. Jungeblut, L. Jenkins, and A. Kolstad 1993 Adult Literacy in America: A First Look at the Results of the National Adult Literacy Survey. Princeton, N.J.: Educational Testing Service.
Lichtenstein, P., and N. L. Pedersen 1997 "Does Genetic Variance for Cognitive Abilities Account for Genetic Variance in Educational Achievement and Occupational Status? A Study of Twins Reared Apart and Twins Reared Together." Social Biology 44:77–90.
Loehlin, J. C. 1992 Genes and Environment in Personality Development. Newbury Park, Calif.: Sage.
Matarazzo, J. D. 1972 Wechsler's Measurement and Appraisal of Adult Intelligence, 5th ed. Baltimore: Williams and Wilkins.
Neisser, U., G. Boodoo, T. J. Bouchard, A. W. Boykin, N. Brody, S. J. Ceci, D. F. Halpern, J. C. Loehlin, R. Perloff, R. J. Sternberg, and S. Urbina 1996 "Intelligence: Knowns and Unknowns." American Psychologist 51:77–101.
Plomin, R., J. C. DeFries, G. E. McClearn, and M. Rutter 1997 Behavioral Genetics, 3rd ed. New York: W. H. Freeman.
Rowe, D. C. 1994 The Limits of Family Influence: Genes, Experience, and Behavior. New York: Guilford.
——1997 "A Place at the Policy Table? Behavioral Genetics and Estimates of Family Environmental Effects on IQ." Intelligence 24(1):133–158.
Scarr, S. 1997 "Behavior-Genetic and Socialization Theories of Intelligence: Truce and Reconciliation." In R. J. Sternberg and E. Grigorenko, eds., Intelligence, Heredity, and Environment. New York: Cambridge University Press.
Stein, Z., M. Susser, and I. Wald 1978 "Cognitive Development and Social Policy." Science 200:1357–1362.
Sternberg, R. J. 1985 Beyond IQ: A Triarchic Theory of Human Intelligence. New York: Cambridge University Press.
Vernon, P. A. 1993 Biological Approaches to the Study of Human Intelligence. Norwood, N.J.: Ablex.
Wonderlic Personnel Test, Inc. 1992 Wonderlic Personnel Test and Scholastic Level Exam: User's Manual. Libertyville, Ill.: Author.
Linda S. Gottfredson
EMOTIONAL INTELLIGENCE
Paulo N. Lopes

MYTHS, MYSTERIES, AND REALITIES
James W. Pellegrino

TRIARCHIC THEORY OF INTELLIGENCE
Robert J. Sternberg
The term emotional intelligence was introduced in a 1990 article by Peter Salovey and John D. Mayer. They described emotional intelligence as a set of skills that involve the ability to monitor one's own and others' feelings and emotions, to discriminate among them, and to use this information to guide one's thinking and action. Salovey and Mayer introduced the term as a challenge to intelligence theorists to contemplate an expanded role for the emotional system in conceptual schemes of human abilities, and to investigators of emotion who had historically considered the arousal of affect as disorganizing of cognitive activity. In the spirit of Charles Darwin, who, in his 1872 book The Expression of the Emotions in Man and Animals, viewed the emotional system as necessary for survival and as providing an important signaling system within and across species, Salovey and Mayer emphasized the functionality of feelings and described a set of competencies that might underlie the adaptive use of affectively charged information.
Associated Concepts and Formal Definition
The idea of an emotional intelligence was anticipated, at least implicitly, by various theorists who argued that traditional notions of analytic intelligence are too narrow. Emotional intelligence adds an affective dimension to Robert Sternberg's 1985 work on practical intelligence, is consistent with theorizing by Nancy Cantor and John Kihlstrom (1987) about social intelligence, and is directly related to research on children's emotional competencies by Carolyn Saarni (1999) and others. Emotional intelligence is most similar to one of the multiple intelligences characterized by Howard Gardner in Frames of Mind (1983). Gardner delineated intrapersonal intelligence as awareness of one's feelings and the capacity to effect discriminations among these feelings, label them, enmesh them in symbolic codes, and draw upon them as a means of understanding and guiding one's behavior.
Mayer and Salovey described emotional intelligence more specifically in 1997 by outlining the competencies it encompasses. They organized these competencies along four branches: (1) the ability to perceive, appraise, and express emotion accurately; (2) the ability to access and generate feelings when they facilitate cognition; (3) the ability to understand affect-laden information and make use of emotional knowledge; and (4) the ability to regulate emotions to promote growth and well-being.
Individuals can be more or less skilled at attending to, appraising, and expressing their own emotional states. These emotional states can be harnessed adaptively and directed toward a range of cognitive tasks, including problem solving, creativity, and decision-making. Emotional intelligence also includes essential knowledge about the emotional system. The most fundamental competencies at this level concern the ability to label emotions with words and to recognize the relationships among exemplars of the affective lexicon. Finally, emotional intelligence includes the ability to regulate feelings in oneself and in other people. Individuals who are unable to manage their emotions are more likely to experience negative affect and remain in poor spirits.
Measures and Findings
There are two types of measures of emotional intelligence: self-report questionnaires and ability tests. Self-report measures essentially ask individuals whether or not they have various competencies and experiences consistent with being emotionally intelligent. Ability tests require individuals to demonstrate these competencies, and they rely on tasks and exercises rather than on self-assessment. Self-report and ability measures may yield different findings, because asking people about their intelligence is not the same as having them take an intelligence test.
Self-report measures include relatively short scales, such as Nicola Schutte and colleagues' (1998) scale, intended to assess Salovey and Mayer's original model of emotional intelligence, and the Trait Meta-Mood Scale (TMMS), designed to assess people's beliefs about their propensity to attend with clarity to their own mood states and to engage in mood repair. More comprehensive self-report inventories, such as the Bar-On Emotional Quotient Inventory (EQ-i), encompass a larger number of subscales that tap into personality and other traits related to emotional experience and self-reported, noncognitive competencies.
The advantage of self-report measures is that they provide a global self-evaluation of emotional competence. They draw upon a rich base of self-knowledge and reflect people's experiences across different settings and situations. However, these measures have important limitations: they measure perceived, rather than actual, abilities; and they are susceptible to mood and social desirability biases, as well as deliberate or involuntary self-enhancement. Moreover, self-report measures overlap substantially with personality, and it is unclear whether they contribute to the understanding of social and emotional functioning over and above what personality traits might explain.
To overcome such problems, Mayer, David Caruso, and Salovey (1999) developed an ability test of emotional intelligence. Their first test, called the Multidimensional Emotional Intelligence Scale (MEIS), paved the way for a more reliable, better normed, and more professionally produced test, the Mayer, Salovey, and Caruso Emotional Intelligence Test (MSCEIT). This test asks people to process emotional information and use it to solve various problems, and to rate the effectiveness of different strategies for dealing with emotionally arousing situations. It consists of eight tasks, including decoding facial expressions and visual displays of emotion, understanding blends of emotions and emotional dynamics, integrating emotional information with other thinking processes, and managing emotions for purposes of self-regulation and social interaction. The test can be scored using either expert or consensus norms, and Mayer and his colleagues demonstrated in 2001 that these scoring methods yield similar results.
Ability tests of emotional intelligence avoid the self-enhancement and other biases that plague self-report measures, and they are very different from personality inventories. These are substantial advantages. However, these tests also have limitations. To assess emotional regulation, the MSCEIT evaluates people's knowledge of appropriate strategies for handling various situations, rather than their actual skill in implementing these strategies. It is not known to what extent the abilities assessed by ability tests generalize across situations and social or cultural contexts. While they are intended to assess skills, relying on consensus scoring can make it difficult to distinguish enacted skills from adjustment or conformity, especially because emotionally intelligent behavior necessarily reflects attunement to social norms and expectations.
Evidence suggests that emotional intelligence, assessed through ability tests, represents a coherent and interrelated set of abilities, distinct from (but meaningfully related to) traditional measures of intelligence, and developing with age. Initial studies also suggest that ability measures of emotional intelligence are associated with a range of positive outcomes, including lower peer ratings of aggressiveness and higher teacher ratings of prosocial behavior among school children; less tobacco and alcohol consumption among teenagers; higher self-reported empathy, life satisfaction, and relationship quality among college students; and higher manager ratings of effectiveness among leaders of an insurance company's customer claims teams. Emotional intelligence also seems to explain the perceived quality of social relationships over and above what personality traits and traditional measures of intelligence might explain.
Stronger evidence that emotional skills are associated with social adaptation comes from studies with children, using very different measures. In a large number of studies, children's abilities to read emotions in faces, understand emotional vocabulary, and regulate their emotions have been associated with their social competence and adaptation, as rated by peers, parents, and teachers.
Emotional Intelligence in the Schools
During the 1980s and 1990s, the idea that the social problems of young people (e.g., dropping out of school, illicit drug use, teenage pregnancy) can be addressed through school-based prevention programs became popular among educational reformers. Earlier programs focused primarily on social problem-solving skills or conflict resolution strategies. After the 1995 publication of a best-selling trade book on the topic of emotional intelligence by science writer Daniel Goleman, the concept of emotional intelligence gained enormous popular appeal, and school-based programs of social and emotional learning multiplied. These programs usually deal with emotions explicitly, and they can help children to build a feelings vocabulary, recognize facial expressions of emotion, control impulsive behavior, and regulate feelings such as sorrow and anger.
There is evidence that programs of social and emotional learning that are well designed and well implemented can promote children's social and emotional adjustment. Programs such as Promoting Alternative Thinking Strategies (PATHS), the Seattle Social Development Project, and Resolving Conflict Creatively have been evaluated through studies that track children's development over time. Benefits from these programs may include gains in children's social and emotional bonding to school, lowered dropout rates, a reduced incidence of aggressive or risky behaviors, and improvements in cognitive and emotional functioning. However, social and emotional learning programs usually address a very broad range of competencies, and it is not known to what extent the benefits observed in these studies can be attributed specifically to the training of emotional skills. Moreover, the success of these interventions depends on many factors, including the quality and motivation of the teachers, as well as their capacity to promote informal learning and generalization of skills.
Researchers associated with the Collaborative to Advance Social and Emotional Learning (CASEL) and others have drafted useful guidelines to help educators choose, adapt, and implement effective social and emotional learning programs. Important questions remain to be addressed, however. In dealing with others, people draw upon a very wide range of social and emotional skills, and it may be difficult to address all these competencies through formal or explicit instruction. It is not clear exactly which skills to emphasize, how best to teach them, or to what extent they generalize across settings and situations.
Emotional skills may contribute to academic achievement in various ways. The ability to perceive and understand emotions may facilitate writing and artistic expression, as well as the interpretation of literature and works of art. Emotional regulation may help children to handle the anxiety of taking tests, or the frustrations associated with any pursuit requiring an investment of time and effort. It may also facilitate control of attention, sustained intellectual engagement, intrinsic motivation, and enjoyment of challenging academic activities.
See also: Intelligence, subentry on Myths, Mysteries, and Realities.
Bar-On, Reuven. 1997. EQ-i: Bar-On Emotional Quotient Inventory. Toronto: Multi-Health Systems.
Cantor, Nancy, and Kihlstrom, John F. 1987. Personality and Social Intelligence. Englewood Cliffs, NJ: Prentice-Hall.
Gardner, Howard. 1983. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Goleman, Daniel. 1995. Emotional Intelligence. New York: Bantam.
Mayer, John D.; Caruso, David R.; and Salovey, Peter. 1999. "Emotional Intelligence Meets Traditional Standards for an Intelligence." Intelligence 27:267–298.
Mayer, John D., and Salovey, Peter. 1997. "What Is Emotional Intelligence?" In Emotional Development and Emotional Intelligence, ed. Peter Salovey and David Sluyter. New York: Basic Books.
Mayer, John D.; Salovey, Peter; Caruso, David R.; and Sitarenios, Gill. 2001. "Emotional Intelligence as a Standard Intelligence." Emotion 1:232–242.
Saarni, Carolyn. 1999. The Development of Emotional Competence. New York: Guilford Press.
Salovey, Peter, and Mayer, John D. 1990. "Emotional Intelligence." Imagination, Cognition, and Personality 9:185–211.
Salovey, Peter; Mayer, John D.; Goldman, Susan L.; Turvey, Carolyn; and Palfai, Tibor P. 1995. "Emotional Attention, Clarity, and Repair: Exploring Emotional Intelligence Using the Trait Meta-Mood Scale." In Emotion, Disclosure, and Health, ed. James Pennebaker. Washington, DC: American Psychological Association.
Salovey, Peter, and Sluyter, David J., eds. 1997. Emotional Development and Emotional Intelligence: Educational Implications. New York: Basic Books.
Schutte, Nicola S.; Malouff, John M.; Hall, L. E.; Haggerty, D. J.; Cooper, Joan T.; Golden, C. J.; and Dornheim, L. 1998. "Development and Validation of a Measure of Emotional Intelligence." Personality and Individual Differences 25:167–177.
Sternberg, Robert J. 1985. Beyond IQ: A Triarchic Theory of Human Intelligence. Cambridge, Eng.: Cambridge University Press.
Paulo N. Lopes
Introductory treatments of the measurement of intelligence often begin with a discussion of three pioneers in the field: the French psychologist Alfred Binet (1857–1911), the English psychologist Charles Spearman (1863–1945), and the American psychologist Lewis Terman (1877–1956). Binet initiated the applied mental measurement movement when, in 1905, he introduced the first test of general mental ability. Spearman offered support for a psychologically cohesive dimension of general intellectual ability when, in 1904, he showed that a dominant dimension (called g ) appears to run through heterogeneous collections of intellectual tasks. And Terman championed the application of intelligence testing in schools and in the military. Subsequently, Terman also illustrated how tracking intellectually talented youth longitudinally (i.e., via long-term studies) affords fundamental insights about human development in general.
Binet: The Testing of Mental Ability
Binet was not the first to attempt to measure mental ability. Operating under the maxim of the fourth century b.c.e. Greek philosopher Aristotle, that the mind is informed to the extent that one's sensory systems bring in clear and reliable information, the English scientist Francis Galton (1822–1911) and others had aimed to measure intellect through fundamental psychophysical procedures that indexed the strength of various sensory systems. In contrast, Binet examined complex behaviors, such as comprehension and reasoning, directly. In doing so, his methods could not compare to psychophysical assessments in terms of reliability. But Binet more than made up for this in the validity of his assessment procedure in predicting school performance. Binet's insight was to use an external criterion to validate his measuring tool. Thus, he pioneered the empirically keyed or external validation approach to scale construction. His external criterion was chronological age, and test items were grouped such that the typical member of each age group was able to achieve 50 percent correct answers on questions of varying complexity. With Binet's procedure, individual differences in scale scores, or mental age (MA), manifested wide variation around students of similar chronological age (CA). These components were synthesized by William Stern to create a ratio of mental development: MA/CA. This was later multiplied by 100 to form what became known as the intelligence quotient ("IQ"), namely IQ = MA/CA ×100.
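Stern's ratio can be sketched in a few lines of Python (an illustration of the formula in the text, not anything from the original sources; the function name is my own):

```python
# Stern's ratio IQ: mental age divided by chronological age, times 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

# A ten-year-old performing at the level of an average twelve-year-old:
print(ratio_iq(12, 10))  # -> 120.0
```

A child performing exactly at the level typical for his or her age scores 100, the definitional midpoint of the scale.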
Spearman: The Discovery of g
While Binet was creating the first valid test of general intellectual functioning, Spearman was conducting basic research that offered tangible support for the idea that a psychologically cohesive dimension of general intelligence (g) underlies performance on any set of items demanding mental effort. In a groundbreaking publication from 1904 called "'General Intelligence': Objectively Determined and Measured," Spearman showed that g appears to run through all heterogeneous collections of intellectual tasks and test items. On the surface, the items aggregated to form such groupings appear to be a hodgepodge. Yet when such items are all positively correlated and they are summed, the signal carried by each is successively amplified and the noise carried by each is successively attenuated, and the total score paints a clear picture of the attribute under analysis.
Spearman and William Brown formalized this property of aggregation in 1910. The Spearman-Brown prophecy formula estimates the proportion of common or reliable variance running through a composite: rtt = k rxx / [1 + (k - 1) rxx] (where rtt = common or reliable variance, rxx = average item intercorrelation, and k = number of items). This formula reveals how a collection of items with uniformly light (weak) positive intercorrelations (say, averaging rxx = .15) can create a composite dominated by common variance. If fifty rxx = .15 items were available, for example, their aggregation would generate an individual-differences measure having 90 percent common variance (and 10 percent random error). Stated another way, aggregation amplifies signal and lessens noise. As Bert Green stated in his 1978 article "In Defense of Measurement," "given enough sow's ears you can indeed make a silk purse" (p. 666). A large number of weak positive correlations between test items is, in fact, the ideal when measuring broad psychological attributes.
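The fifty-item example can be verified directly. The short Python sketch below implements the Spearman-Brown prophecy formula as just stated (function and variable names are my own, not from the original sources):

```python
# Spearman-Brown prophecy formula: proportion of common (reliable)
# variance in a composite of k items whose average intercorrelation
# is r_xx.
def spearman_brown(k: int, r_xx: float) -> float:
    return (k * r_xx) / (1 + (k - 1) * r_xx)

# Fifty items averaging r_xx = .15 give a composite that is ~90%
# common variance, as the text states:
print(round(spearman_brown(50, 0.15), 2))  # -> 0.9
```

Note that a single item (k = 1) simply returns its own reliability, and that the common-variance proportion climbs toward 1.0 as items are added, which is the formal sense in which aggregation amplifies signal and lessens noise.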
Terman: The Application of IQ
Binet's approach to assessing mental ability was impressive because, unlike psychophysical assessments of sensory systems, his test forecasted teacher ratings and school performance. And Spearman's work identified the dominant dimension responsible for the validity of these forecasts. Subsequently, Terman cultivated the new enterprise of applied psychological testing. For example, he played a key role in America's military effort when he combined forces with the American psychologist Robert Yerkes (1876–1956) to facilitate personnel selection during World War I. The U.S. armed forces needed an efficient means to screen recruits, many of whom were illiterate. One of Terman's students, Arthur Otis, had devised a nonverbal test of general intelligence, and his work was heavily drawn on to build the two group intelligence tests used for the initial screening and appropriate placement of recruits: the Army Alpha (for literates) and the Army Beta (for illiterates). The role that mental measurements played in World War I and, subsequently, in World War II constitutes one of applied psychology's great success stories. Even today, an act of the U.S. Congress mandates a certain minimum score on tests of general mental ability, because training efficiency is compromised prohibitively at IQs less than or equal to 80 (the bottom 10 percent of those tested).
Following World War I, Terman was one of the first to generalize from the utility of military intellectual assessments to problems in America's schools. In the early 1920s, Terman launched one of the most famous longitudinal studies in all of psychology, devoted exclusively to the intellectually gifted (the top 1 percent). Terman, a former teacher himself, was aware of the ability range found in homogeneous groupings based on chronological age and became an advocate of homogeneous grouping based on mental age. Drawing on solid empirical findings from his study of 1,528 intellectually precocious youth (a study that continued after his death in 1956 and into the twenty-first century), he proposed that, at the extremes (say, two standard deviations beyond either side of IQ's normative mean), the likelihood of encountering special student needs increases exponentially. Terman noted that structuring educational settings around chronological age often results in classes of students with markedly different rates of learning (because of markedly different mental ages). Optimal rates of curriculum presentation and complexity vary in gradation throughout the range of individual differences in general intelligence. With IQ centered on 100 and a standard deviation of 16, IQs extending from the bottom 1 percent to the top 1 percent in ability cover an IQ range of approximately 63 to 137. But because IQs are known to go beyond 200, this span covers less than half of the possible range. Leta Hollingworth's classic 1942 study, Children above 180 IQ, provided empirical support for the unique educational needs of this special population. These needs have been empirically supported in every decade since.
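The 63-to-137 range quoted above can be checked with Python's standard library, assuming (as the text implies) a normal distribution of IQ with mean 100 and standard deviation 16; this is a sketch of my own, not part of the original article:

```python
from statistics import NormalDist

# Assumed IQ distribution: normal, mean 100, SD 16 (Stanford-Binet scaling).
iq = NormalDist(mu=100, sigma=16)

# Cut points for the bottom 1 percent and the top 1 percent of scorers:
low, high = iq.inv_cdf(0.01), iq.inv_cdf(0.99)
print(round(low), round(high))  # -> 63 137
```

The 1st and 99th percentiles sit about 2.33 standard deviations from the mean, and 2.33 × 16 ≈ 37 IQ points, which reproduces the approximate 63–137 span in the text.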
The Modern Hierarchical Structure of Mental Abilities
Modern versions of intelligence tests index essentially the same construct that was uncovered at the turn of the twentieth century in Spearman's 1904 work, "'General Intelligence': Objectively Determined and Measured," albeit with much more efficiency and precision. For example, g is a statistical distillate that represents approximately half of what is common among the thirteen subtests that make up the Wechsler Adult Intelligence Scale. As noted by intelligence researcher Ian J. Deary in "Intelligence: A Very Short Introduction," the attribute g represents the research finding that "there is something shared by all the tests in terms of people's tendencies to do well, modestly, or poorly on all of them" (p. 10). In 2001 Deary's team published the longest temporal stability assessment of general intelligence to date (covering a span of sixty-six years, from age eleven to age seventy-seven); they observed a correlation of .62, which rose to over .70 when statistical artifacts were controlled.
John B. Carroll and other modern psychometricians have come to a consensus that mental abilities follow a hierarchical structure, with g at the top of the hierarchy and other broad groups of mental abilities offering psychological import beyond g. Specifically, mathematical, spatial-mechanical, and verbal reasoning abilities all have demonstrated incremental (value-added) validity beyond g in forecasting educational and vocational criteria. Although mathematical, spatial, and verbal reasoning abilities do not have the breadth or depth of external correlates that g does, the incremental validity they offer makes them especially important for educational and vocational planning.
Psychological and Social Correlates of g
Psychologists at poles of the applied educational-industrial spectrum, such as Richard Snow and John Campbell, respectively, have underscored the real-world significance of general intelligence by incorporating it in lawlike empirical generalizations, as in the following two passages:
Given new evidence and reconsideration of old evidence, [g ] can indeed be interpreted as "ability to learn" as long as it is clear that these terms refer to complex processes and skills and that a somewhat different mix of these constituents may be required in different learning tasks and settings. The old view that mental tests and learning tasks measure distinctly different abilities should be discarded. (Snow, p. 22)
General mental ability is a substantively significant determinant of individual differences in job performance for any job that includes information-processing tasks. If the measure of performance reflects the information processing components of the job and any of several well-developed standardized measures used to assess general mental ability, then the relationship will be found unless the sample restricts the variances in performance or mental ability to near zero. The exact size of the relationship will be a function of the range of talent in the sample and the degree to which the job requires information processing and verbal cognitive skills. (Campbell, p. 56)
Modern research on general intelligence has sharpened validity generalizations aimed at forecasting educational outcomes, occupational training, and work performance. Empirical work has also expanded into domains at the periphery of general intelligence's network of external relationships, such as aggression, delinquency and crime, and income and poverty. As benchmarks, general intellectual ability correlates .70–.80 with academic achievement measures, .40–.70 with military training assignments, .20–.60 with work performance (higher correlations reflect greater job complexity), .30–.40 with income, and around .20 with law-abidingness.
An excellent compilation of positive and negative correlates of g can be found in a 1987 work by Christopher Brand that documents a variety of weak correlations between general intelligence and diverse phenomena. For example, g is positively correlated with altruism, sense of humor, practical knowledge, responsiveness to psychotherapy, social skills, and supermarket shopping ability, and negatively correlated with impulsivity, accident-proneness, delinquency, smoking, racial prejudice, and obesity. This diverse family of correlates is especially thought-provoking because it reveals how individual differences in general intelligence "pull" with them cascades of direct and indirect effects.
Charles Murray's 1998 longitudinal analysis of educational and income differences between siblings is also illuminating. Murray studied biologically related siblings who shared the same home of rearing and socioeconomic class yet differed on average by 12 IQ points. He found that the differences in IQ predicted differences in educational achievement and income over the course of 15 years. His findings corroborate those of other studies that use a similar control for family environment, while not confounding socioeconomic status with biological relatedness.
Experts' definitions of general intelligence appear to fit with g's nexus of empirical relationships. Most measurement experts agree that measures of general intelligence assess individual differences pertaining to "abstract thinking or reasoning," "the capacity to acquire knowledge," and "problem-solving ability." Naturally, individual differences in these attributes carry over to human behavior in facets of life outside of academic and vocational arenas. Abstract reasoning, problem solving, and rate of learning touch many aspects of life in general, especially in the computer-driven, information-dense society of the United States in the early twenty-first century.
Biological Correlates of g
General intelligence may be studied at different levels of analysis, and, as documented by Arthur Jensen in "The g Factor," modern measures of g have been linked to a variety of biological phenomena. By pooling studies of a variety of kinship correlates of g (e.g., identical and fraternal twins reared together and apart, and a variety of adoption designs), the heritability of general intelligence in industrialized nations has been estimated to be between 60 and 80 percent. These estimates reflect genetic factors responsible for individual differences between people, not overall level of g. In addition, research teams in molecular genetics, led by Robert Plomin, are working to uncover DNA markers associated with g. Magnetic resonance imaging studies show that total brain volume covaries in the high .30s with g after the variance associated with body size is removed. Glucose metabolism is related to problem-solving behavior, and the highly gifted appear to engage in more efficient problem-solving behavior that is less energy expensive. Also, highly intellectually gifted individuals show enhanced right hemispheric functioning, and electroencephalographic (EEG) phenomena have been linked to individual differences in g. Finally, some investigators have suggested that dendritic arborization (the amount of branching of dendrites in neurons) is correlated with g.
A Continuing Field of Debate
The empirical findings reviewed above are widely accepted among experts in the measurement and individual-differences field. Yet findings pertaining to general intelligence (and the interpretive extrapolations drawn from them) have routinely stimulated contentious debate, and psychologists can be found on all sides of the complex set of issues engendered by assessing individual differences in general intelligence. This is not new, and it is likely to continue. Because psychological assessments are frequently used for allocating educational and vocational opportunities, and because demographic groups differ in test-score and criterion performance, social concerns have followed the practice of intellectual assessment since its beginning in the early 1900s. In the context of these concerns, alternative conceptualizations of intelligence, such as Howard Gardner's theory of multiple intelligences, Daniel Goleman's theory of emotional intelligence, and Robert Sternberg's triarchic theory of intelligence, have generally been positively received by the public. Yet measures of these alternative formulations have not demonstrated incremental validity beyond conventional psychometric tests in predicting important life outcomes such as educational achievement, occupational level, and job performance. This is not to say that there is no room for improvement in the prediction process; innovative measures of mental abilities, however, need to be evaluated against existing measures before one can claim that they capture something new.
See also: Assessment Tools, subentry on Psychometric and Statistical; Binet, Alfred; Intelligence, subentry on Myths, Mysteries, and Realities; Terman, Lewis.
Brand, Christopher. 1987. "The Importance of General Intelligence." In Arthur Jensen: Consensus and Controversy, ed. Sohan Modgil and Celia Modgil. New York: Falmer.
Campbell, John P. 1990. "The Role of Theory in Industrial and Organizational Psychology." In Handbook of Industrial and Organizational Psychology, 2nd edition, ed. Marvin D. Dunnette and Leaetta M. Hough. Palo Alto, CA: Consulting Psychologists Press.
Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge, Eng.: Cambridge University Press.
Deary, Ian J. 2001. Intelligence: A Very Short Introduction. New York: Oxford University Press.
Gottfredson, Linda S., ed. 1997. "Intelligence and Social Policy" (special issue). Intelligence 24 (1).
Green, Bert F. 1978. "In Defense of Measurement." American Psychologist 33:664–670.
Hollingworth, Leta S. 1942. Children above 180 IQ. New York: World Book.
Jensen, Arthur R. 1998. The g Factor: The Science of Mental Ability. Westport, CT: Praeger.
Murray, Charles. 1998. Income, Inequality, and IQ. Washington, DC: American Enterprise Institute.
Snow, Richard E. 1989. "Aptitude-Treatment Interaction as a Framework for Research on Individual Differences in Learning." In Learning and Individual Differences: Advances in Theory and Research, ed. Phillip L. Ackerman, Robert J. Sternberg, and Robert Glaser. New York: Freeman.
Snyderman, Mark, and Rothman, Stanley. 1987. "Survey of Expert Opinion on Intelligence and Aptitude Testing." American Psychologist 42:137–144.
Spearman, Charles. 1904. "'General Intelligence': Objectively Determined and Measured." American Journal of Psychology 15:201–292.
Terman, Lewis. 1925–1959. Genetic Studies of Genius, 4 vols. Stanford, CA: Stanford University Press.
Thorndike, Robert M., and Lohman, David F. 1990. A Century of Ability Testing. Chicago: Riverside.
The theory of multiple intelligences (MI) was developed by Howard Gardner, a professor of cognition and education at Harvard University. Introduced in his 1983 book, Frames of Mind, and refined in subsequent writings, the theory contends that human intelligence is not a single complex entity or a unified set of processes (the dominant view in the field of psychology). Instead, Gardner posits that there are several relatively autonomous intelligences, and that an individual's intellectual profile reflects a unique configuration of these intelligences.
Definition of Intelligence
In his 1999 formulation of MI theory, Intelligence Reframed, Gardner defines intelligence as "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." By considering intelligence a potential, Gardner asserts its emergent and responsive nature, thereby differentiating his theory from traditional ones in which human intelligence is fixed and innate. Whether a potential will be activated is dependent in large part on the values of the culture in which an individual grows up and the opportunities available in that culture, although Gardner also acknowledges the role of personal decisions made by individuals, their families, and others. These activating forces result in the development and expression of a range of abilities (or intelligences) from culture to culture and also from individual to individual.
Gardner's definition of intelligence is unique as well in that it considers the creation of products such as sculptures and computers to be as important an expression of intelligence as abstract problem solving. Traditional theories do not recognize created artifacts as a manifestation of intelligence, and therefore are limited in how they conceptualize and measure it.
Criteria for intelligences. Gardner does not believe that the precise number of intelligences is known, nor does he believe that they can be identified through statistical analyses of cognitive test results. He began by considering the range of adult end-states that are valued in diverse cultures around the world. To uncover the abilities that support these end-states, he examined a wide variety of empirical sources from different disciplines that had never been used together for the purpose of defining human intelligence. His examination yielded eight criteria for defining an intelligence:
- Two criteria derived from biology: (1) an intelligence should be isolable in cases of brain damage, and (2) there should be evidence for its plausibility and autonomy in evolutionary history.
- Two criteria derived from developmental psychology: (3) an intelligence has to have a distinct developmental history along with a definable set of expert end-state performances, and (4) it must exist within special populations such as idiot savants and prodigies.
- Two criteria derived from traditional psychology: (5) an intelligence needs to be supported by the results of skill training for its relatively independent operation, and (6) also by the results of psychometric studies for its low correlation to other intelligences.
- Two criteria derived from logical analysis: (7) an intelligence must have its own identifiable core operation or set of operations, and (8) it must be susceptible to encoding in a symbol system, such as language, numbers, graphics, or musical notation.
To be defined as an intelligence, an ability has to meet most, though not all, of the eight criteria.
Identified intelligences. As of 2001, Gardner has identified eight intelligences:
- Linguistic intelligence, exemplified by writers and poets, describes the ability to perceive and generate spoken or written language.
- Logical-mathematical intelligence, exemplified by mathematicians and computer programmers, involves the ability to appreciate and utilize numerical, abstract, and logical reasoning to solve problems.
- Musical intelligence, exemplified by musicians and composers, entails the ability to create, communicate, and understand meanings made out of sound.
- Spatial intelligence, exemplified by graphic designers and architects, refers to the ability to perceive, modify, transform, and create visual or spatial images.
- Bodily-kinesthetic intelligence, exemplified by dancers and athletes, deals with the ability to use all or part of one's body to solve problems or to fashion products.
- Naturalistic intelligence, exemplified by archaeologists and botanists, concerns the ability to distinguish, classify, and use features of the environment.
- Interpersonal intelligence, exemplified by leaders and teachers, describes the ability to recognize, appreciate, and contend with the feelings, beliefs, and intentions of other people.
- Intrapersonal intelligence, apparent when individuals pursue a particular interest, choose a field of study or work, or portray their life through different media, involves the ability to understand oneself–including emotions, desires, strengths, and vulnerabilities–and to use such information effectively in regulating one's own life.
Gardner does not claim this roster of intelligences to be exhaustive; MI theory is based wholly on empirical evidence, and the roster can therefore be revised with new empirical findings. In the MI framework, all intelligences are equally valid and important, and though significantly independent of one another, they do not operate in isolation. Human activity normally reflects the integrated functioning of several intelligences. An effective teacher, for example, relies on linguistic and interpersonal intelligences, and possesses knowledge of particular subject areas as well.
Relationship to Other Theories
MI theory bears similarities to several other contemporary theories of intelligence, yet it remains distinct. Although it shares a pluralistic view of intelligence with Robert Sternberg's triarchic theory, MI theory organizes intelligences in terms of content areas, and no single cognitive function, such as perception or memory, cuts across all domains. The triarchic theory, in contrast, posits three intelligences differentiated by functional processes, and each intelligence operates consistently across domains.
Daniel Goleman's theory of emotional intelligence resonates with MI theory in that both acknowledge the social and affective aspects of intelligence. Whereas Goleman views intelligence from a moral and ethical perspective, however, Gardner regards all intelligences as value-free: He does not judge individuals as inferior or superior based on their configuration of intelligences, nor does he judge cultures as inferior or superior because they value one intelligence over another.
MI theory has been criticized on two grounds. First, some critics contend that psychometric research finds correlations, not autonomy, among abilities. Gardner has argued that these correlations are largely due to the use of psychometric instruments designed to measure only a given set of abilities. Second, critics have suggested that human intelligence is different from other human capabilities, such as musical talent. Gardner believes that such a narrow use of the word intelligence reflects a Western intellectual mind-set that does not recognize the diversity of roles that contribute to society.
Implications for Educational Practice
The primary intent for developing MI theory was to chart the evolution and topography of the human mind, not to prescribe educational practice. Nonetheless, MI theory has been discussed widely in the educational field and has been particularly influential in elementary education, where it has provided a useful framework for improving school-based practice in the areas of curricula, instruction, and assessment.
Curricula and instruction. From an MI perspective, curricula, particularly for young children, should encompass a broad range of subject areas that include (but go beyond) reading, writing, and arithmetic, because all intelligences are equally valuable. The visual arts, for example, are a serious domain in and of themselves, and not just as a means to improve reading scores. According to MI theory, the talented artist is just as intelligent as the excellent reader, and each has an important place in society. In The Disciplined Mind, Gardner cautions that an authentic MI-based approach goes beyond conveying factual knowledge about various domains: He stresses the importance of promoting in-depth exploration and real understanding of key concepts essential to a domain.
Because each child's biopsychological potential is different, providing a broad range of subject areas at a young age also increases the likelihood of discovering interests and abilities that can be nurtured and appreciated. Educators who work with at-risk children have been particularly drawn to this application of MI theory, because it offers an approach to intervention that focuses on strengths instead of deficits. By the same token, it extends the concept of the gifted child beyond those who excel in linguistic and logical pursuits to include children who achieve in a wide range of domains.
MI theory can be applied to the development of instructional techniques as well. A teacher can provide multiple entry points to the study of a particular topic by using different media, for example, and then encouraging students to express their understanding of the topic through diverse representational methods, such as pictures, writings, three-dimensional models, or dramatizations. Such instructional approaches make it possible for students to find at least one way of learning that is attuned to their predispositions, and they therefore increase motivation and engagement in the learning process. They also increase the likelihood that every student will attain at least some understanding of the topic at hand.
Assessment. When applied to student assessment, MI theory results in the exploration of a much wider range of abilities than is typical in the classroom, in a search for genuine problem-solving or product-fashioning skills. An MI-based assessment requires "intelligence-fair" instruments that assess each intellectual capacity through media appropriate to the domain, rather than through traditional linguistic or logical methods. Gardner also argues that for assessment to be meaningful to students and instructive for teachers, students should work on problems and projects that engage them and hold their interest; they should be informed of the purpose of the task–and the assessment criteria as well; and they should be encouraged to work individually, in pairs, or in a group. Thus, the unit of analysis extends beyond the individual to include both the material and social context.
MI-based assessments are not as easy to design and implement as standard pencil-and-paper tests, but they have the potential to elicit a student's full repertoire of skills and yield information that will be useful for subsequent teaching and learning. As part of Project Spectrum, Gardner and colleagues developed a set of assessment activities and observational guidelines covering eight domains, including many often ignored by traditional assessment instruments, such as mechanical construction and social understanding. Project Spectrum's work also included linking children's assessments to curricular development and bridging their identified strengths to other areas of learning.
Evidence of the Value of the Theory
MI theory has been incorporated into the educational process in schools around the world. There is much anecdotal evidence that educators, parents, and students value the theory, but, as of 2001, little systematic research on the topic has been completed. The main study was conducted by Mindy Kornhaber and colleagues at Harvard University's Project Zero in the late 1990s. They studied forty-one elementary schools in the United States that had been applying MI theory to school-based practice for at least three years. Among the schools that reported improvement in standardized-test scores, student discipline, parent participation, or performance of students with learning differences, the majority linked the improvement to MI-based interventions. Kornhaber's study also illuminates the conditions under which MI theory is adopted by schools and integrated into the educational process.
A central difficulty of research on MI theory in education is attributing changes specifically to the theory, since schools are complex institutions in which cause-and-effect relationships are difficult to isolate. Indeed, because MI theory is meaningful in the context of education only when combined with pedagogical approaches such as project-based learning or arts-integrated learning, it is not possible to study the precise contribution of the theory itself to educational change, only the effect of interventions that are based on it or incorporate it.
See also: Assessment, subentries on Classroom Assessment, Performance Assessment, Portfolio Assessment.
Chen, Jie-Qi; Krechevsky, Mara; and Viens, Julie. 1998. Building on Children's Strengths: The Experience of Project Spectrum. New York: Teachers College Press.
Gardner, Howard. 1993. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Gardner, Howard. 1993. Multiple Intelligences: The Theory in Practice. New York: Basic Books.
Gardner, Howard. 1999. Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic Books.
Gardner, Howard. 2000. The Disciplined Mind: Beyond Facts and Standardized Tests: The K–12 Education That Every Child Deserves. New York: Penguin Books.
Goleman, Daniel. 1995. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books.
Kornhaber, Mindy L. 1999. "Multiple Intelligences Theory in Practice." In Comprehensive School Reform: A Program Perspective, ed. James H. Block, Susan T. Everson, and Thomas R. Guskey. Dubuque, IA: Kendall/Hunt.
Sternberg, Robert J. 1988. The Triarchic Mind: A New Theory of Human Intelligence. New York: Viking.
MYTHS, MYSTERIES, AND REALITIES
Intelligence and intelligence tests are often at the heart of controversy. Some arguments concern the ethical and moral implications of, for example, selective breeding of bright children. Other arguments deal with the statistical basis of various conclusions such as whether tests are biased, or how much of intelligence is genetically determined. What one hears less often is discussion of the construct of intelligence itself: What is intelligence? How does it grow? How and why do people differ intellectually? Questions like these, along with many others, which are central to any discussion of intelligence and intelligence testing, are less often raised, much less answered.
Historical Roots of Intelligence Tests
Intelligence testing began as a more or less scientific pursuit into the nature of differences in human intellect. However, it soon acquired practical significance as a tool for predicting school achievement and selecting individuals for various educational programs. Sir Francis Galton's work in the late 1800s formed the background for much of the research and theory pursued during the twentieth century on the assessment of individual differences in intelligence.
Galton believed that all intelligent behavior was related to innate sensory ability but his attempts to empirically validate that assumption were largely unsuccessful. In France, Alfred Binet and Victor Henri (1896) criticized the approach advocated by Galton in England and James McKeen Cattell in the United States and argued that appropriate intelligence testing must include assessment of more complex mental processes, such as memory, attention, imagery, and comprehension. In 1904, Binet and Théodore Simon were commissioned by the French Minister of Public Instruction to develop a procedure to select children unable to benefit from regular public school instruction for placement in special educational programs. In 1905 Binet and Simon published an objective, standardized intelligence test based on the concepts developed earlier by Binet and Henri. The 1905 test consisted of thirty subtests of mental ability, including tests of digit span, object and body part identification, sentence memory, and so forth. Many of these subtests, with minor modifications, are included in the Stanford-Binet intelligence test of the early twenty-first century.
In 1908 and again in 1911, Binet and Simon published revised versions of their intelligence test. The revised tests distinguished intellectual abilities according to age norms, thus introducing the concept of mental age. The subtests were organized according to the age level at which they could be successfully performed by most children of normal intelligence. As a result, children could be characterized and compared in terms of their intellectual or mental age. The Binet and Simon intelligence test was widely adopted in Europe and in the United States. Lewis Terman of Stanford University developed the more extensive Stanford-Binet test in 1916. This test has been used extensively in several updated versions throughout the United States.
A major change in intelligence testing involved the development of intelligence tests that could be simultaneously administered to large groups. Group tests similar to the original Binet and Simon intelligence test were developed in Britain and the United States. During World War I, group-administered intelligence tests (the Army Alpha and Army Beta tests) were used in the United States to assess the abilities of recruits who could then be selected for various duties based on their performance. In England, from the 1940s to the 1960s, intelligence tests were administered to all children near the age of eleven years to select students for different classes of vocational training.
An enormous number of "mental" tests are available in the early twenty-first century and are typically divided into those involving group versus individual administration. Whereas IQ (the intelligence quotient) was originally reported as the ratio of an individual's mental age to chronological age multiplied by 100 (100 × MA/CA), IQ has long been based upon normative score distributions for particular age groups. All individual and group tests currently yield such deviation IQs where 100 typically represents the 50th percentile and 68 percent of all scores fall between 85 and 115.
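The two scoring schemes can be contrasted in a few lines of standard-library Python; the mental ages, raw scores, and norm parameters below are hypothetical examples, and the deviation scale assumes the common mean of 100 and standard deviation of 15:

```python
from statistics import NormalDist

def ratio_iq(mental_age, chronological_age):
    """Original ratio IQ: 100 x MA / CA."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score, norm_mean, norm_sd):
    """Modern deviation IQ: place a raw score on a scale with
    mean 100 and SD 15 relative to age-group norms."""
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

print(ratio_iq(10, 8))           # 125.0 -- a child two years "ahead"
print(deviation_iq(55, 50, 10))  # 107.5 -- half an SD above the norm group

# With mean 100 and SD 15, about 68% of scores fall between 85 and 115
share = NormalDist(100, 15).cdf(115) - NormalDist(100, 15).cdf(85)
print(round(share, 3))           # 0.683
```

The ratio formula breaks down in adulthood (mental age stops growing while chronological age does not), which is one reason deviation IQs replaced it.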
Factor Theories of Intelligence
What is intelligence and what do these tests actually assess? Very early in the psychological study of intelligence, Charles Spearman (1904) sought to empirically determine the similarities and differences between various mental tests and school performance measures. He found that many seemingly diverse mental tests were strongly correlated with each other. This led him to postulate a general factor of intelligence, g, that all mental tests measure in common while simultaneously varying in how much the general factor contributes to a given test's performance. On the basis of correlational studies, Spearman argued that intelligence is composed of a general factor that is found in all intellectual functioning plus specific factors associated with the performance of specific tasks. Spearman's theoretical orientation and methods of analysis served as the foundation of all subsequent factor analytic theories of intelligence. Spearman (1927) later developed a more complex factor theory introducing more general "group factors" made up of related specific factors. However, he adhered to his main tenet that a common ability underlies all intellectual behavior. For lack of a better definition, he referred to this as a mental energy or force.
The concept that intelligence is characterized by a general underlying ability plus certain task-or domain-specific abilities constitutes the basis of several major theories of intelligence, including those offered by Cyril L. Burt (1949), Philip E. Vernon (1961), and Arthur Jensen (1998). Quite distinct from theories of intelligence that emphasize g are those that emphasize specific abilities that can be combined to form more general abilities. Lloyd L. Thurstone (1924, 1938) developed factor analytic techniques that first separate out specific or primary factors. Thurstone argued that these primary factors represent discrete intellectual abilities, and he developed distinct tests to measure them. Among the most important of Thurstone's primary mental abilities are verbal comprehension, word fluency, numerical ability, spatial relations, memory, reasoning, and perceptual speed.
Raymond Cattell (1963, 1971) attempted a rapprochement of the theories of Spearman and Thurstone. In an attempt to produce a g factor, he combined Thurstone's primary factors to form secondary or higher-order factors. Cattell found two major types of higher-order factors and three minor ones. The major factors were labeled gf and gc, for fluid and crystallized general intelligence. Cattell argued that the fluid intelligence factor represents an individual's basic biological capacity. Crystallized intelligence represents the types of abilities required for most school activities. Cattell labeled the minor general factors gv, gr, and gs for visual abilities, memory retrieval, and performance speed, respectively. Cattell's initial theory has been substantially extended by individuals such as John L. Horn (1979, 1985).
The most recent psychometric research supports a hierarchical model of intellect generally in accord with the outlines of the Cattell-Horn theory. At the top of the hierarchy is g, and beneath it are broad group factors such as gf and gc. Below these broad group factors are more specific or narrow ability factors. The majority of intelligence tests focus on providing overall estimates of g (or gf and gc), since this maximizes the prediction of performance differences among people in other intellectual tasks and situations, including performance in school.
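The statistical idea behind g can be illustrated with a small computation. The sketch below is a toy example, not an analysis of any real test battery: the correlation matrix is invented, and extracting the dominant eigenvector by power iteration stands in for the more elaborate factor analytic methods used historically. Because all the correlations are positive, every test loads positively on the single dominant factor, the "positive manifold" that motivated Spearman's g.

```python
import math

# Invented correlations among four hypothetical mental tests
# (vocabulary, arithmetic, spatial, memory); all positive, as is
# typical of mental test batteries.
R = [
    [1.00, 0.60, 0.45, 0.40],
    [0.60, 1.00, 0.50, 0.35],
    [0.45, 0.50, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
]

def matvec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Power iteration converges to the dominant eigenvector of R; its
# entries, scaled by the square root of the eigenvalue, act as the
# tests' loadings on a single general factor.
v = [1.0] * 4
for _ in range(100):
    w = matvec(R, v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

eigenvalue = sum(x * y for x, y in zip(matvec(R, v), v))
loadings = [x * math.sqrt(eigenvalue) for x in v]
print([round(x, 2) for x in loadings])  # every test loads positively on g
```

The point of the sketch is qualitative: whenever all intercorrelations are positive, the dominant factor assigns every test a positive loading, which is the pattern Spearman interpreted as a common underlying ability.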
Alternative Theoretical Perspectives
The hierarchical model of human intelligence that has evolved from the psychometric or measurement approach is not the only influential perspective on human intellect. A second view of intelligence, that provided by developmental psychology, stems from the theory of intellectual development proposed by the Swiss psychologist Jean Piaget. This tradition is a rich source of information on the growth and development of intellect. A third view on intelligence, the information-processing or cognitive perspective, is an outgrowth of work in cognitive psychology since the 1970s. It provides elaborate descriptions and theories of the specific mental activities and representations that comprise intellectual functioning. The three perspectives are similar with regard to the general skills and activities that each associates with "being or becoming intelligent." For all three, reasoning and problem-solving skills are the principal components of intelligence. A second area of overlap among the three involves adaptability as an aspect of intelligence.
The differences and separate contributions of the three perspectives to an understanding of human intellect also stand out. The emphasis on individual differences within the psychometric tradition is certainly relevant to any complete understanding of intelligence. A theory of intelligence should take into account similarities and differences among individuals in their cognitive skills and performance capabilities. However, a theory of human intellect based solely on patterns of differences among individuals cannot capture all of intellectual functioning unless there is little that is general and similar in intellectual performance.
In contrast, the developmental tradition emphasizes similarities in intellectual growth and the importance of organism-environment interactions. By considering the nature of the changes that occur in cognition, and the mechanisms and conditions responsible for them, one can better understand human intellectual growth and its relationship to the environment. This requires, however, that one focus not just on commonalities in the general course of cognitive growth but also consider how individuals differ in the specifics of their intellectual growth. Such a developmental-differential emphasis seems necessary for a theory to have adequate breadth and to move the study of intelligence away from a static, normative view, in which intelligence changes little over development, toward a more dynamic view that encompasses developmental change in absolute levels of cognitive power.
Finally, the cognitive perspective helps to define the scope of a theory of intelligence by further emphasizing the dynamics of cognition, through its concentration on precise theories of the knowledge and processes that allow individuals to perform intellectual tasks. Psychometric and developmental theories typically give little heed to these processes, yet they are necessary for a theory of intelligence to make precise, testable predictions about intellectual performance.
No theory developed within any of the three perspectives addresses all of the important elements and issues mentioned above. This includes the more recent and rather broad theories such as Howard Gardner's multiple intelligences theory (1983, 1999) and Robert J. Sternberg's triarchic theory (1985). Both theories represent an interesting blending of psychometric, developmental, and cognitive perspectives.
Uses and Abuses of Tests
Above it was noted that testing was developed in response to pragmatic concerns regarding educational selection and placement. The use of intelligence tests for this purpose proliferated from the 1930s through the 1960s as group tests for children became readily available. Since the early 1980s, however, general intelligence testing has declined in public educational institutions. One reason for the diminished use of such tests is a trend away from homogeneous grouping of students and the attendant educational tracking. A second reason is that achievement rather than aptitude testing has become increasingly popular. Not surprisingly, such tests tend to be better predictors of subsequent achievement than aptitude or intelligence tests. Even so, it is an established fact that measures of general intelligence obtained in childhood yield a moderate 0.50 correlation with school grades. They also correlate about 0.55 with the number of years of education that individuals complete.
Intelligence and aptitude tests continue to be used with great frequency in military, personnel-selection, and clinical settings. There are also two major uses of intelligence tests within educational settings. One of these is the assessment of mental retardation and learning disabilities, a use of tests reminiscent of the original reason for development of the Binet and Simon scales in the early 1900s. The second major use is at the postsecondary level. College entrance is frequently based upon performance on measures such as the SAT, first adopted in the United States by the College Entrance Examination Board in 1937. Performance on the SAT, together with high school grades, is the basis for admission to many American colleges and universities. The ostensible basis for using SAT scores is that they moderately predict freshman grade point average, which is precisely what they were originally designed to do. However, considerable debate has arisen about the legitimacy and value of continued use of SAT scores in college admission decisions.
Throughout the history of the testing movement, dating back to the early 1900s and extending to the early twenty-first century, there has been controversy concerning the (mis)use of test results. One of the earliest such debates was between Lewis Terman, who helped develop the revised Stanford-Binet and other tests, and the journalist Walter Lippmann. A frequent issue in debates about the uses and abuses of intelligence tests in society is that of bias. It is often argued that most standardized intelligence tests have differential validity for various racial, ethnic, and socioeconomic groups. Since the tests emphasize verbal skills and knowledge that are part of Western schooling, they are presumed to be unfair tests of the cognitive abilities of other groups. In response to such arguments, attempts have been made to develop culture-fair or culture-free tests. The issue of bias in mental testing is beyond the scope of this brief review; Arthur R. Jensen (1980, 1981) can be consulted for highly detailed treatments of the topic. Evidence in the 1990s suggests that no simple form of bias in either the content or form of intelligence tests accounts for the mean score differences typically observed between racial and ethnic groups.
Factors Affecting Test Scores
Much of the research on intelligence has focused on specific factors affecting test scores. This includes research focused on environmental versus genetic contributions to IQ scores, related issues such as race differences in IQ, and overall population trends in IQ.
One of the most extensively studied and hotly debated topics in the study of intelligence is the contribution of heredity and environment to individual differences in test scores. Given a trait such as measured intelligence on which individuals vary, it is natural to ask what fraction of that variation is associated with differences in genotype (the so-called heritability of the trait) and what fraction is associated with differences in environmental experience. There is a long history of sentiment and speculation with regard to this issue. It has also proven difficult to answer the question in a scientifically credible way, in large measure because of the conceptual and statistical complexity of separating out the respective contributions of heredity and environment. Adding to the complexity is the need to obtain test score data from people with varying kinship and genetic relationships, including identical and fraternal twins, siblings, and adoptive children with their biological and adoptive parents. Nonetheless, evidence has slowly accumulated that heritability is sizeable and that it varies across populations. For IQ, heritability is markedly lower for children, about 0.45, than for adults, where it is about 0.75. This means that with age, differences in test scores increasingly reflect differences in genotype and in individual life experience rather than differences in the families within which individuals were raised. The factors underlying this shift, as well as the mechanisms by which genes contribute to individual differences in IQ scores, are largely unknown. The same can be said for understanding environmental contributions to those differences. A common misconception is that high heritability for a trait like IQ means that the trait is immutable, that the environment has little or no impact, or that learning is not involved. This is wrong, since highly heritable traits such as vocabulary size are known to depend on learning and environmental factors.
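The logic of heritability estimates from kinship data can be conveyed with the simplest such estimator, Falconer's formula, which doubles the difference between identical-twin (MZ) and fraternal-twin (DZ) correlations. The sketch below is illustrative only: the twin correlations are invented round numbers chosen to reproduce the approximate child and adult figures cited above, not data from any actual study, and real behavior-genetic analyses use far more elaborate models.

```python
def falconer_h2(r_mz, r_dz):
    """Crude heritability estimate from twin correlations.

    MZ twins share all their genes, DZ twins about half, so doubling
    the difference in their correlations roughly isolates the genetic
    share of the variance.
    """
    return 2 * (r_mz - r_dz)

# Invented round-number correlations chosen to reproduce the
# approximate figures in the text:
child_h2 = falconer_h2(r_mz=0.80, r_dz=0.575)   # about 0.45
adult_h2 = falconer_h2(r_mz=0.85, r_dz=0.475)   # about 0.75
print(round(child_h2, 2), round(adult_h2, 2))
```

The arithmetic makes the interpretive point concrete: a heritability of 0.75 is a statement about the share of variance in a particular population, not a statement that environment or learning is irrelevant to the trait.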
Perhaps no topic is more controversial than that of race differences in IQ, especially since it is so often tied up with debates about genetic and environmental influences. It is an established fact that there are significant differences between racial and ethnic groups in their average scores on standardized tests of intelligence. In the United States, the typical difference between Caucasians and African Americans is 15 points, or one standard deviation. A difference of this magnitude has been observed for a long period of time, with little evidence that it has declined, despite significant evidence that IQ scores across the world have risen substantially over the last fifty years. The latter phenomenon is known as the "Flynn Effect," and it, like so many other phenomena associated with test scores, still awaits an adequate explanation.
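What a one-standard-deviation difference between group means implies for the score distributions can be made concrete with the normal model on which IQ scales are based (mean 100, SD 15). The following is a purely statistical illustration of distribution overlap under that model, not a claim about any particular data set.

```python
from statistics import NormalDist

# IQ tests are normed to a mean of 100 and a standard deviation of 15.
# Model a group whose mean lies one SD (15 points) below another's.
lower = NormalDist(mu=85, sigma=15)

# Fraction of the lower-mean distribution scoring above 100, the
# higher-mean distribution's average:
share_above = 1 - lower.cdf(100)
print(round(share_above, 3))  # about 0.159
```

Even with means a full standard deviation apart, roughly 16 percent of the lower-scoring distribution lies above the higher-scoring distribution's mean, so the two distributions overlap substantially and group means say little about any given individual.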
There is a tendency to interpret racial and ethnic differences in mean IQ scores as genetically determined, since, as noted above, IQ scores in general have a fairly high level of heritability and the level of heritability seems to be about the same in different racial and ethnic groups. There is, however, no logical basis on which to attribute the mean difference between racial groups to either genetic or environmental factors. As one group of researchers stated, "In short, no adequate explanation of the differential between the IQ means of Blacks and Whites is presently available" (Neisser et al. 1996, p. 97).
Age and Intelligence
Although most intelligence tests are targeted for school-age populations, there are instruments developed for younger age groups. Such tests emphasize the assessment of perceptual and motor abilities. Unfortunately, measures of infant and preschool intelligence tend to correlate poorly with intelligence tests administered during the school years. There does, however, appear to be a high degree of stability between IQ scores obtained in the early primary grades and those obtained at the high school level and beyond. Often this is misinterpreted as indicating that an individual's intelligence does not change as a function of schooling or other environmental factors. What such results actually indicate is that an individual's score relative to his or her age group remains fairly constant. In an absolute sense, an individual of age 16 can solve considerably more difficult items and problems than an individual of age 8. Comparing IQ scores obtained at different ages is akin to comparing apples and oranges, since the composition of the tests changes markedly across age levels.
Research has also studied changes in IQ following early adulthood. A frequent conclusion from research examining age groups ranging from 21 to 60 and beyond is that there is an age-related general decline in intellectual functioning. However, there are serious problems with many such studies since they involve cross-sectional rather than longitudinal contrasts. In those cases where longitudinal data are available, it is less obvious that intelligence declines with age. John L. Horn and Cattell (1967) presented data indicating a possible differential decline in crystallized and fluid intelligence measures. Crystallized intelligence measures focus on verbal skills and knowledge whereas fluid intelligence measures focus on reasoning and problem solving with visual and geometric stimuli. The latter also often place an emphasis on performance speed. Fluid intelligence measures tend to show substantial declines as a function of age, whereas crystallized intelligence measures often show little or no decline until after age 65. Research in the 1990s based on combinations of longitudinal and cross-sectional samples supports the conclusion that there are age-related declines in intelligence, which seem to vary with the type of skill measured, and that the declines are often substantial in the period from age 65 to 80.
After more than 100 years of theory and research on the nature and measurement of intelligence, there is much that researchers know but even more that they do not understand. Still lacking is any agreed-upon definition of intelligence, and many of the empirical findings regarding intelligence test scores remain a puzzle. In their summary paper "Intelligence: Knowns and Unknowns," Ulric Neisser and colleagues (1996) stated:
In a field where so many issues are unresolved and so many questions unanswered, the confident tone that has characterized most of the debate on these topics is clearly out of place. The study of intelligence does not need politicized assertions and recriminations; it needs self restraint, reflection, and a great deal more research. The questions that remain are socially as well as scientifically important. There is no reason to think them unanswerable, but finding the answers will require a shared and sustained effort as well as the commitment of substantial scientific resources. (p. 97)
See also: Gifted and Talented, Education of; Individual Differences, subentry on Abilities and Aptitudes.
Binet, Alfred, and Henri, Victor. 1896. "La Psychologie Individuelle." L'Année Psychologique 2:411–465.
Block, Ned J., and Dworkin, Gerald. 1976. The IQ Controversy. New York: Pantheon Books.
Brody, Nathan. 1992. Intelligence. San Diego, CA: Academic Press.
Burt, Cyril L. 1949. "The Structure of the Mind: A Review of the Results of Factor Analysis." British Journal of Educational Psychology 19:100–111, 176–199.
Carroll, John B. 1978. "On the Theory-Practice Interface in the Measurement of Intellectual Abilities." In Impact of Research on Education, ed. Patrick Suppes. Washington, DC: National Academy of Education.
Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor Analytic Studies. Cambridge, Eng.: Cambridge University Press.
Cattell, Raymond B. 1963. "Theory of Fluid and Crystallized Intelligence: A Critical Experiment." Journal of Educational Psychology 54:1–22.
Cattell, Raymond B. 1971. Abilities: Their Structure, Growth and Action. Boston: Houghton Mifflin.
Flynn, John R. 1987. "Massive IQ Gains in 14 Nations: What IQ Tests Really Measure." Psychological Bulletin 101:171–191.
Gardner, Howard. 1983. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Gardner, Howard. 1999. Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic Books.
Gould, Stephen J. 1996. The Mismeasure of Man. New York: Norton.
Gustafsson, Jan-Eric. 1988. "Hierarchical Models of Individual Differences in Cognitive Abilities." In Advances in the Psychology of Human Intelligence, Vol. 4, ed. Robert J. Sternberg. Hillsdale, NJ: Erlbaum.
Herrnstein, Richard, and Murray, Charles. 1994. The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.
Horn, John L. 1979. "The Rise and Fall of Human Abilities." Journal of Research on Developmental Education 12:59–78.
Horn, John L. 1985. "Remodeling Old Models of Intelligence." In Handbook of Intelligence: Theories, Measurements, and Applications, ed. Benjamin B. Wolman. New York: Wiley.
Horn, John L., and Cattell, Raymond B. 1967. "Age Differences in Fluid and Crystallized Intelligence." Acta Psychologica 26:107–129.
Jensen, Arthur R. 1980. Bias in Mental Testing. New York: Free Press.
Jensen, Arthur R. 1981. Straight Talk on Mental Tests. New York: Free Press.
Jensen, Arthur R. 1998. The g Factor: The Science of Mental Ability. Westport, CT: Praeger.
Lemann, Nicholas. 1999. The Big Test: The Secret History of the American Meritocracy. New York: Farrar, Straus and Giroux.
Neisser, Ulric, et al. 1996. "Intelligence: Knowns and Unknowns." American Psychologist 51:77–101.
Spearman, Charles. 1904. "General Intelligence, Objectively Determined and Measured." American Journal of Psychology 15:201–293.
Spearman, Charles. 1927. The Abilities of Man: Their Nature and Measurement. New York: Macmillan.
Sternberg, Robert J. 1985. Beyond IQ: A Triarchic Theory of Human Intelligence. New York: Cambridge University Press.
Thurstone, Lloyd L. 1924. The Nature of Intelligence. London and New York: Harcourt Brace.
Thurstone, Lloyd L. 1938. Primary Mental Abilities. Psychometric Monographs No. 1. Chicago: University of Chicago Press.
Vernon, Philip E. 1961. The Structure of Human Abilities. London: Methuen.
James W. Pellegrino
TRIARCHIC THEORY OF INTELLIGENCE
The triarchic theory of intelligence is based on a broader definition of intelligence than is typically used. In this theory, intelligence is defined in terms of the ability to achieve success in life, as judged by one's personal standards and within one's sociocultural context. The ability to achieve success depends on capitalizing on one's strengths and correcting or compensating for one's weaknesses. Success is attained through a balance of analytical, creative, and practical abilities, a balance achieved in order to adapt to, shape, and select environments.
Information-Processing Components Underlying Intelligence
According to Robert Sternberg's proposed theory of human intelligence, a common set of universal mental processes underlies all aspects of intelligence. Although the particular solutions to problems that are considered "intelligent" in one culture may be different from those considered intelligent in another, the mental processes needed to reach these solutions are the same.
Metacomponents, or executive processes, enable a person to plan what to do, monitor things as they are being done, and evaluate things after they are done. Performance components execute the instructions of the metacomponents. Knowledge-acquisition components are used to learn how to solve problems or simply to acquire knowledge in the first place. For example, a student may plan to write a paper (metacomponents), write the paper (performance components), and learn new things while writing (knowledge-acquisition components).
Three Aspects of Intelligence
According to the triarchic theory, intelligence has three aspects: analytical, creative, and practical.
Analytical intelligence. Analytical intelligence is involved when the components of intelligence are applied to analyze, evaluate, judge, or compare and contrast. It typically is involved in dealing with relatively familiar kinds of problems where the judgments to be made are of a fairly abstract nature.
In one study, an attempt was made to identify the information-processing components used to solve analogies of the form "A is to B as C is to: D1, D2, D3, D4" (e.g., lawyer is to client as doctor is to [a] nurse, [b] medicine, [c] patient, [d] MD). There is an encoding component, which is used to figure out what each word (e.g., lawyer) means, while the inference component is used to figure out the relation between lawyer and client.
Research on the components of human intelligence has shown that although children generally become faster in information processing with age, not all components are executed more rapidly with age. The encoding component first shows a decrease in processing time with age, and then an increase. Apparently, older children realize that their best strategy is to spend more time in encoding the terms of a problem so that they later will be able to spend less time in making sense of these encodings. Similarly, better reasoners tend to spend relatively more time than do poorer reasoners in global, up-front metacomponential planning when they solve difficult reasoning problems. Poorer reasoners, on the other hand, tend to spend relatively more time in detailed planning as they proceed through a problem. Presumably, the better reasoners recognize that it is better to invest more time up front so as to be able to process a problem more efficiently later on.
Creative intelligence. In work with creative-intelligence problems, Robert Sternberg and Todd Lubart asked sixty-three people to create various kinds of products in the realms of writing, art, advertising, and science. For example, in writing, participants were asked to write very short stories, for which the investigators gave them a choice of titles, such as "Beyond the Edge" or "The Octopus's Sneakers." In art, the participants were asked to produce compositions with titles such as "The Beginning of Time" or "Earth from an Insect's Point of View." Participants created two products in each domain.
Sternberg and Lubart found that creativity is relatively, although not wholly, domain-specific. In other words, people are frequently creative in some domains, but not in others. They also found that correlations with conventional ability tests were modest to moderate, demonstrating that tests of creative intelligence measure skills that are largely different from those measured by conventional intelligence tests.
Practical intelligence. Practical intelligence involves individuals applying their abilities to the kinds of problems that confront them in daily life, such as on the job or in the home. Much of the work of Sternberg and his colleagues on practical intelligence has centered on the concept of tacit knowledge. They have defined this construct as what one needs to know in order to work effectively in an environment one has not been explicitly taught to work in, knowledge that is often never verbalized.
Sternberg and colleagues have measured tacit knowledge using work-related problems one might encounter in a variety of jobs. In a typical tacit-knowledge problem, people are asked to read a story about a problem someone faces, and to then rate, for each statement in a set of statements, how adequate a solution the statement represents. For example, in a measure of tacit knowledge of sales, one of the problems deals with sales of photocopy machines. A relatively inexpensive machine is not moving out of the showroom and has become overstocked. The examinee is asked to rate the quality of various solutions for moving the particular model out of the showroom.
Sternberg and his colleagues have found that practical intelligence, as embodied in tacit knowledge, increases with experience, but that it is how one profits, or learns, from experience, rather than experience per se, that results in increases in scores. Some people can work at a job for years and acquire relatively little tacit knowledge. Most importantly, although tests of tacit knowledge typically show no correlation with IQ tests, they predict job performance about as well as, and sometimes better than, IQ tests.
In a study in Usenge, Kenya, Sternberg and colleagues were interested in school-age children's ability to adapt to their indigenous environment. They devised a test of practical intelligence for adaptation to the environment that measured children's informal tacit knowledge of natural herbal medicines that the villagers used to fight various types of infections. The researchers found generally negative correlations between the test of practical intelligence and tests of academic intelligence and school achievement. In other words, people in this context often emphasize practical knowledge at the expense of academic skills in their children's development.
In another study, analytical, creative, and practical tests were used to predict mental and physical health among Russian adults. Mental health was measured by widely used paper-and-pencil tests of depression and anxiety, while physical health was measured by self-report. The best predictor of mental and physical health was the practical-intelligence measure, with analytical intelligence being the second-best measure and creative intelligence being the third.
Factor-analytic studies seek to identify the mental structures underlying intelligence. Four separate factor-analytic studies have supported the internal validity of the triarchic theory of intelligence. These studies analyzed aspects of individual differences in test performance in order to uncover the basic mental structures underlying test performance. In one study of 326 high school students from throughout the United States, Sternberg and his colleagues used the so-called Sternberg Triarchic Abilities Test (STAT) to investigate the validity of the triarchic theory. The test comprises twelve subtests measuring analytical, creative, and practical abilities. For each type of ability, there are three multiple-choice tests and one essay test. The multiple-choice tests involve verbal, quantitative, and figural content. Factor analysis of the data supported the triarchic theory of human intelligence, yielding relatively separate and independent analytical, creative, and practical factors. The triarchic theory also was consistent with data obtained from 3,252 students in the United States, Finland, and Spain. That study likewise revealed separate analytical, creative, and practical factors of intelligence.
In another set of studies, researchers explored the question of whether conventional education in school systematically discriminates against children with creative and practical strengths. Motivating this work was the belief that the systems in most schools strongly tend to favor children with strengths in memory and analytical abilities.
The Sternberg Triarchic Abilities Test was administered to 326 high-school students around the United States and in some other countries who were identified by their schools as gifted (by whatever standard the school used). Students were selected for a summer program in college-level psychology if they fell into one of five ability groupings: high analytical, high creative, high practical, high balanced (high in all three abilities), or low balanced (low in all three abilities). These students were then randomly divided into four instructional groups, emphasizing memory, analytical, creative, or practical instruction. For example, in the memory condition, they might be asked to describe the main tenets of a major theory of depression. In the analytical condition, they might be asked to compare and contrast two theories of depression. In the creative condition, they might be asked to formulate their own theory of depression. In the practical condition, they might be asked how they could use what they had learned about depression to help a friend who was depressed.
Students who were placed in instructional conditions that better matched their pattern of abilities outperformed students who were mismatched. In other words, when students are taught in a way that fits how they think, they do better in school. Children with creative and practical abilities, who are almost never taught or assessed in a way that matches their pattern of abilities, may be at a disadvantage in course after course, year after year.
A follow-up study examined learning of social studies and science by 225 third-graders in Raleigh, North Carolina, and 142 eighth-graders in Baltimore, Maryland, and Fresno, California. In this study, students were assigned to one of three instructional conditions. In the first condition, they were taught the course they would have learned had there been no intervention, which placed an emphasis on memory. In the second condition, students were taught in a way that emphasized critical (analytical) thinking, and in the third condition they were taught in a way that emphasized analytical, creative, and practical thinking. All students' performance was assessed for memory learning (through multiple-choice assessments) as well as for analytical, creative, and practical learning (through performance assessments).
Students in the triarchic-intelligence (analytical, creative, practical) condition outperformed the other students in terms of the performance assessments. Interestingly, children in the triarchic instructional condition outperformed the other children on the multiple-choice memory tests. In other words, to the extent that one's goal is just to maximize children's memory for information, teaching triarchically is still superior. This is because it enables children to capitalize on their strengths and to correct or to compensate for their weaknesses, allowing them to encode material in a variety of interesting ways.
In another study, involving 871 middle-school students and 432 high school students, researchers taught reading either triarchically or through the regular curriculum. At the middle-school level, reading was taught explicitly. At the high school level, reading was infused into instruction in mathematics, physical sciences, social sciences, English, history, foreign languages, and the arts. In all settings, students who were taught triarchically substantially outperformed students who were taught in standard ways.
The triarchic theory of intelligence provides a useful way of understanding human intelligence. It seems to capture important aspects of intelligence not captured by more conventional theories. It also differs from the theories of Howard Gardner, which emphasize eight independent multiple intelligences (such as linguistic and musical intelligence), and from the theory of emotional intelligence. The triarchic theory emphasizes processes of intelligence, rather than domains of intelligence, as in Gardner's theory. It also views emotions as distinct from intelligence. Eventually, a theory may be proposed that integrates the best elements of all existing theories.
See also: Creativity; Intelligence, subentry on Myths, Mysteries, and Realities.
Gardner, Howard. 1983. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Gardner, Howard. 1999. Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic Books.
Sternberg, Robert J. 1981. "Intelligence and Nonentrenchment." Journal of Educational Psychology 73:1–16.
Sternberg, Robert J. 1993. Sternberg Triarchic Abilities Test. Unpublished test.
Sternberg, Robert J. 1997. Successful Intelligence. New York: Plume.
Sternberg, Robert J. 1999. "The Theory of Successful Intelligence." Review of General Psychology 3:292–316.
Sternberg, Robert J.; Ferrari, Michel; Clinkenbeard, Pamela R.; and Grigorenko, Elena L. 1996. "Identification, Instruction, and Assessment of Gifted Children: A Construct Validation of a Triarchic Model." Gifted Child Quarterly 40 (3):129–137.
Sternberg, Robert J.; Forsythe, George B.; Hedlund, Jennifer; Horvath, Joe; Snook, Scott; Williams, Wendy M.; Wagner, Richard K.; and Grigorenko, Elena L. 2000. Practical Intelligence in Everyday Life. New York: Cambridge University Press.
Sternberg, Robert J.; Grigorenko, Elena L.; Ferrari, Michel; and Clinkenbeard, Pamela R. 1999. "A Triarchic Analysis of an Aptitude-Treatment Interaction." European Journal of Psychological Assessment 15 (1):1–11.
Sternberg, Robert J., and Lubart, Todd I. 1995. Defying the Crowd: Cultivating Creativity in a Culture of Conformity. New York: Free Press.
Sternberg, Robert J., and Rifkin, Bathseva. 1979. "The Development of Analogical Reasoning Processes." Journal of Experimental Child Psychology 27:195–232.
Sternberg, Robert j.; Torff, Bruce; and Grigorenko, Elena L. 1998. "Teaching Triarchically Improves School Achievement." Journal of Educational Psychology 90: 374–384.
Robert J. Sternberg
Empirical work in geropsychology began in the early part of the twentieth century with the observation that there were apparent declines in intellectual performance when groups of young and old persons were compared on the same tasks. This early work was done primarily with measures designed for assessing children or young adults. The intellectual processes used in the development of cognitive structures and functions in childhood, however, are not always the most relevant processes for the maintenance of intelligence into old age, and a reorganization of cognitive structures (for example, mental abilities) may indeed be needed to meet the demands of later life. Nevertheless, certain basic concepts relevant to an understanding of intelligence in childhood remain relevant at adult levels. Changes in basic abilities and measures of intelligence must therefore be studied over much of the life course, even though the manner in which intellectual competence is organized and measured may change with advancing age.
This section will first review the historical background for the study of adult intelligence. Alternative formulations of the conceptual nature of intelligence will then be described. Next, changes in intellectual competence that represent actual decrement that individuals experience will be differentiated from the apparently lower performance of older persons that is not due to intellectual decline, but instead reflects a maintained, but obsolescent, functioning of older cohorts when compared to younger peers. This is the distinction between data on age differences gained from cross-sectional comparison of groups differing in age and data acquired by means of longitudinal studies of the same individuals over time. This discussion will also include information on the ages at which highest levels of intellectual competence are reached, magnitudes of within-generation age changes, and an assessment of generational differences. Some attention will be given to the distinction between academic and practical intelligence. Finally, the influences of health, lifestyles, and education will be considered. This will provide an understanding of why some individuals show intellectual decrement in early adulthood while others maintain or increase their level of functioning well into old age.
Intellectual development from young adulthood through old age has become an important topic because of the increase in average life span and the ever-larger number of persons who reach advanced ages. Assessment of intellectual competence in old age is often needed to provide information relevant to questions of retirement for cause (in the absence of mandatory retirement for reasons of age), or to determine whether sufficient competence remains for independent living, medical decision making, and the control and disposition of property. Level of intellectual competence may also need to be assessed in preparation for entering retraining activities required for late-life career changes.
Four influential theoretical positions have had a major impact on empirical research on intelligence and age. The earliest conceptualization came from Sir Charles Spearman's work in 1904. Spearman suggested that a general dimension of intelligence (g) underlay all purposeful intellectual pursuits. All other components of such pursuits were viewed as task or item specific (s). This view underlies the family of assessment devices that were developed at the beginning of the twentieth century, particularly the work of Alfred Binet and Théodore Simon in France. Thinking of a single, general form of intelligence may be appropriate for childhood, when measurement of intellectual competence is used primarily to predict scholastic performance. However, a singular concept is not useful beyond adolescence, because there is no single criterion outcome to be predicted in adults. Also, there is convincing empirical evidence to support the existence of multiple dimensions of intelligence that display different life-course patterns.
The notion of a single dimension of intelligence became popular during World War I, when Robert Yerkes constructed the Army Alpha intelligence test for purposes of classifying the large number of inductees according to their ability level. Because of the predominantly verbal aspects of this test, it was soon supplemented by performance measures suitable for illiterate or low-literate inductees. Assessing a single dimension of intelligence also widely influenced educational testing. It was around this time that Lewis Terman, a psychologist working at Stanford University, adapted the work of Binet and Simon for use in American schools and introduced the Stanford-Binet test, which dominated educational testing for many decades. Terman was also responsible for the introduction of the IQ concept. He argued that one could compute an index (the intelligence quotient, or IQ) representing the ratio of a person's mental age (as measured by the Stanford-Binet test) to the person's chronological age, multiplied by 100. An IQ value of 100 was thus defined as equivalent to the average performance of a person at a given age, and the IQ range from 90 to 110 represented the middle 50 percent of the normal population. Because there is no linear relation between mental and chronological age past adolescence, however, the Stanford-Binet approach did not work well with adults (see below).
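As a quick illustration of the arithmetic (the function name and the sample values are our own), Terman's ratio IQ is simply mental age divided by chronological age, multiplied by 100:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Terman's ratio IQ: mental age over chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

# A ten-year-old performing at the level of an average twelve-year-old:
print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0
```

The example also makes the adult problem visible: mental-age scores plateau after adolescence while chronological age keeps rising, so the quotient would drift downward for adults regardless of their actual competence.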
An early influential multidimensional theory of intelligence was Edward L. Thorndike's view that different dimensions of intelligence would display similar levels of performance within individuals. Thorndike also suggested that all categories of intelligence possessed three attributes: power, speed, and magnitude (see Thorndike & Woodworth, 1901). This approach is exemplified by the work of David Wechsler. The Wechsler Adult Intelligence Scale (WAIS) consists of eleven distinct subscales of intelligence, derived from clinical observation and earlier mental tests, combined into two broad dimensions: verbal intelligence and performance (nonverbal-manipulative) intelligence. These dimensions can then be combined to form a total global IQ. The global IQ (comparable to the Stanford-Binet IQ) is statistically adjusted to have a mean of 100 for any normative age group. The range of the middle 50 percent of the population is also set to range from 90 to 110. The Wechsler scales were used in the clinical assessment of adults with psychopathologies, and some of the subtests are still used by neuropsychologists to help diagnose the presence of a dementia.
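The Wechsler global IQ is a deviation IQ rather than a ratio: a raw score is expressed as a z-score against the person's own age-group norms and rescaled to a mean of 100. A minimal sketch, assuming the conventional Wechsler standard deviation of 15 (an assumption not stated in the text, but consistent with quartiles falling near 90 and 110, the middle 50 percent cited above):

```python
from statistics import NormalDist

def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Deviation IQ: the raw score's z-score within the age-normative
    group, rescaled to mean 100 and standard deviation 15."""
    z = (raw_score - norm_mean) / norm_sd
    return 100.0 + 15.0 * z

# The 75th percentile of a normal distribution sits about 0.674 SD above
# the mean, i.e., near IQ 110 -- the upper edge of the middle 50 percent.
z75 = NormalDist().inv_cdf(0.75)
print(round(deviation_iq(z75, norm_mean=0.0, norm_sd=1.0), 1))
```

Because the rescaling is done within each age group, the deviation IQ avoids the mental-age/chronological-age ratio that breaks down in adulthood.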
The Wechsler verbal and performance scales are highly reliable in older persons, but measurable differences between the two are often used as a rough estimate of age decline, a use that has not proven to be very reliable. A more significant limitation of the test for research on intellectual aging, however, has been the fact that the factor structure of the scales does not remain invariant across age. As a consequence, most recent studies of intellectual aging in community-dwelling populations have utilized some combination of the primary mental abilities.
Factorially simpler multiple dimensions of intelligence were identified in the work of Louis Leon Thurstone during the 1930s, which was expanded upon by J. P. Guilford in 1967. (The primary mental abilities described by Thurstone and Guilford have also formed the basis for this author's own work; see Schaie, 1996b.) Major intellectual abilities that account for much of the observed individual differences in intelligence include verbal meaning (recognition vocabulary), inductive reasoning (the ability to identify rules and principles), spatial orientation (rotation of objects in two- or three-dimensional space), number skills (facility with arithmetic), word fluency (recall vocabulary), and perceptual speed. Further analyses of the primary intellectual abilities have identified several higher-order dimensions, including those of fluid intelligence (applied to novel tasks) and crystallized intelligence (applied to acculturated information).
The introduction of Piagetian thought into American psychology led some investigators to consider the application of Piagetian methods to adult development. However, Jean Piaget's (1896–1980) original work assumed that intellectual development was complete once the stage of formal operations had been reached during young adulthood. Hence, this approach has contributed only sparsely to empirical work on adult intelligence.
There are also discernible secular trends that cut across theoretical positions on different aspects of adult intelligence. Diana Woodruff-Pak has identified four stages: (1) until the mid-1950s, concerns were predominantly with identifying steep and apparently inevitable age-related decline; (2) the late 1950s through the mid-1960s saw the discovery that there was stability as well as decline; (3) beginning with the mid-1970s, external social and experiential effects that influenced cohort differences in ability levels were identified; and (4) in the 1980s and 1990s the field was dominated by attempts to alter experience and manipulate age differences. Successful demonstrations of the modifiability of intellectual performance have led researchers to expand definitions of intelligence and to explore new methods of measurement.
Conceptualizations of intelligence
Past approaches to the study of intelligence (from its origin in work with children) were primarily concerned with academic or intellectual performance outcomes. Current conceptualizations of intelligence, by contrast, often distinguish between academic, practical, and social intelligence. All of these, however, are basically manifestations of intellectual competence. The contemporary study of intellectual competence has been largely driven by three perspectives: The first sees competence as a set of latent variables (latent variables are not directly observable but are inferred by statistical means, e.g., factor analysis, from sets of observed variables that are related to the "latent variables") that represent permutations of variables identified by studies of basic cognition. This perspective has been characterized as a componential or hierarchical approach. The second perspective views competence as involving specific knowledge bases. The third focuses on the "fit" or congruence between an individual's intellectual competence and the environmental demands faced by the individual.
Componential/hierarchical approaches to intelligence. The three major examples of such approaches are: Robert Sternberg's triarchic theory of intelligence; Paul Baltes' two-dimensional model; and the hierarchical model linking intellectual competence with basic cognition proposed by Sherry Willis and K. Warner Schaie.
Sternberg proposed a triarchic theory of adult intellectual development that involves metacomponents, and experiential and contextual aspects (a metacomponent is a higher-order executive process that plans, monitors, and evaluates the other components). The metacomponential part of the theory involves an information-processing approach to basic cognition, including processes such as encoding, allocation of mental resources, and monitoring of thought processes. The second component of the theory posits that these processes operate at different levels of experience, depending upon the task: the basic components can operate in a relatively novel fashion or, with experience, they may become automatized. For example, a driver may at first have to work out a response to a specific traffic condition; with experience, that response becomes automatic whenever a similar condition is encountered. According to Sternberg, the most intelligent person is the one who can adjust to a change in a problem situation and who can eventually automate the component processes of task solution. The third aspect of the theory is concerned with how the individual applies the metacomponents in adjusting to a change in the environment.
Baltes proposed a two-dimensional componential model of cognition. The first component is identified as the mechanics of intelligence, which represent the basic cognitive processes that serve as underpinnings for all intelligent behavior. The second component of the theory is labeled the pragmatics of intelligence. This is the component that is influenced by experience. Baltes argues that the environmental context is critical to the particular form or manifestation in which pragmatic intelligence is shown. The concept of wisdom has been linked with, and studied within, the pragmatics of intelligence.
Willis and Schaie conceptualized a hierarchical relationship between basic cognition and intellectual competence. Basic cognition is thought to be represented by domains of psychometric intelligence, such as the second-order constructs of fluid and crystallized intelligence and the primary mental abilities associated with each higher-order construct. The cognitive abilities represented in traditional approaches to intelligence are proposed to be universal across the life span and across cultures. When nurtured and directed by a favorable environment at a particular life stage, these processes and abilities develop into cognitive competencies that are manifested in daily life as cognitive performance.
Intellectual competence, as represented in activities of daily living, is seen in the phenotypic expressions of intelligence that are context- or age-specific. The particular activities and behaviors that serve as phenotypic expressions of intelligence vary with the age of the individual, with a person's social roles, and with the environmental context. Problem solving in everyday activities is complex and involves multiple basic cognitive abilities. Everyday competence also involves substantive knowledge, as well as the individual's attitudes and values associated with a particular problem domain.
Age changes in intelligence
A number of longitudinal studies have been conducted in the United States and in Europe covering substantial age ranges. An important new addition is the initiation of longitudinal studies in the very old, providing hope for better information on age changes in the nineties and beyond.
Longitudinal studies of intelligence usually show a peak of intellectual performance in young adulthood or early middle age, with a virtual plateau until early old age and accelerating average decline thereafter. However, it should be noted that different intellectual skills reach peak performance at different ages and decline at different rates. Figure 1 provides an example from the Seattle Longitudinal Study, a large-scale study of community-dwelling adults. This figure shows age changes from twenty-five to eighty-eight years of age for six primary mental abilities. Note that only perceptual speed follows a linear pattern of decline from young adulthood to old age, and that verbal ability does not reach a peak until midlife and still remains above early adult performance in advanced old age. Similar patterns have been obtained in meta-analyses of the WAIS (cf. McArdle, 1994).
It should also be noted that there are wide individual differences in change in intellectual competence over time. For example, when change in large groups of individuals was monitored over a seven-year interval, it was found that the ages 60 to 67 and 67 to 74 were marked by stability of performance, while even from ages 74 to 84 as many as 40 percent of study participants remained stable. However, it was also found that by their mid-sixties almost all persons had significantly declined on at least one ability, while remaining stable on others.
Age differences in intelligence
Findings from age-comparative (cross-sectional) studies of intellectual performance are used to compare adults of different ages at a single point in time. Because of substantial generational differences, these studies show far greater age differences than the within-individual changes observed in longitudinal data. Ages of peak performance are found to be earlier (for later-born cohorts) in cross-sectional studies. Modest age differences are found by the early fifties for some, and by the sixties for most, dimensions of intelligence. On the WAIS, age differences are moderate for the verbal part of the test, but substantial for the performance scales. Because of the slowing in the rate of positive cohort differences (a later-born cohort performs at a higher level than an earlier-born cohort at the same ages), age difference profiles have begun to converge somewhat more with the age-change data from longitudinal studies. Both peak performance and onset of decline seem to be shifting to later ages for most variables.
Figure 2 presents age differences over the age range from twenty-five to eighty-one for samples tested in 1991, found in the Seattle Longitudinal Study, which can be directly compared to the longitudinal data presented in Figure 1. Particularly noteworthy is the fact that cross-sectional age differences are greater than those observed in longitudinal studies—except for numerical skills, which show greater decline when measured longitudinally.
More recent studies of the WAIS with normal individuals (the WAIS has been widely applied to samples with neuropathology or mental illness, so it is important to state that the work referred to is on normal individuals) use approaches that involve latent variable models (see McArdle & Prescott, 1992; Rott, 1990), while other analyses have been conducted at the item level. A study by Sands, Terry, and Meredith (1989) investigated two cohorts spanning the age range from 18 to 61. Improvement in performance was found between the ages of 18 and 40 and between 18 and 54. Between ages 40 and 61, improvement was found for the information, comprehension, and vocabulary subtests, while there was a mixed change (gain on the easy items and decline on the difficult items) on picture completion and a decline on digit symbol and block design (with decline only for the most difficult items of the latter test). The discrepancies between the longitudinal and cross-sectional findings on the WAIS, as well as on the primary mental abilities, can be attributed largely to cohort differences in attained peak level and rate of change arising as a consequence of the different life experiences of successive generations.
Since women, on average, live longer than men, one might ask whether there are differential patterns by sex. Most studies find that there are average-level differences between men and women at all ages, with women doing better on verbal skills and memory, while men excel on numerical and spatial skills. The developmental course of intellectual abilities, however, tends to have parallel slopes for men and women.
Cohort differences in intellectual abilities
Cohort differences in psychometric abilities have been most intensively examined in the Seattle Longitudinal Study. Cumulative cohort differences for cohorts born from 1897 to 1966 are shown in Figure 3 for the abilities discussed above. There is a linear positive pattern for inductive reasoning and verbal memory, a positive pattern for spatial orientation, but curvilinear or negative patterns for numeric ability and perceptual speed. Factors thought to influence these cohort differences include changes in average educational exposure and changes in educational practices, as well as the control of early childhood infectious diseases and the adoption of healthier lifestyles by more recent cohorts. Similar differences have also been found using biologically related parent-offspring dyads compared at approximately similar ages.
The effect of these cohort differences is to increase age differences in intelligence between young and old for those skills where there have been substantial gains across successive generations (e.g., inductive reasoning), but to decrease age differences in instances where younger generations perform more poorly (e.g., number skills). Hence, it should be kept in mind that some older persons seem to perform poorly when compared with their younger peers not because they have suffered mental decline, but because they are experiencing the consequences of obsolescence.
Much of the work done by psychologists on intelligence has concerned those aspects that are sometimes called academic intelligence. However, an equally important aspect of intelligence is concerned with the question of how individuals can function effectively on tasks and in situations encountered on a daily basis. It has been shown that individual differences in performance on everyday tasks can be accounted for by a combination of performance levels on several basic abilities. Competence in various everyday activities (e.g., managing finances, shopping, using medications, using the telephone) involves several cognitive abilities or processes that cut across or apply to various substantive domains. But the particular combination or constellation of basic abilities varies, of course, across different tasks of daily living. It is important to note that the basic abilities are seen as necessary (but not sufficient) antecedents for everyday competence.
Other variables, such as motivation and meaning, and in particular the role of the environment or context, determine the particular types of applied activities and problems in which practical intelligence is manifested. Everyday competence also involves substantive knowledge associated with the particular everyday-problem domain, as well as attitudes and values with regard to the problem domain. Both the sociocultural context and the microenvironment determine the expression of practical intelligence for a given individual. For example, while the ability to travel beyond one's dwelling has been of concern through the ages, comprehending airline schedules and operating computer-driven vehicles are only recent expressions of practical intelligence. The environment also plays an important role in the maintenance and facilitation of everyday competence as people age. Environmental stimulation and challenges, whether they occur naturally or through planned interventions, have been shown to be associated with the maintenance and enhancement of everyday competence in the elderly. Practical intelligence appears to peak in midlife and then decline, following closely the changes observed in the underlying cognitive abilities associated with specific everyday problems.
Influences upon intellectual development
Intellectual competence does not operate within a vacuum. It is affected both by an individual's physiological state (i.e., the individual's state of health and, in old age particularly, the presence or absence of chronic disease) and by the presence or absence of a favorable environmental context and adequate support systems. Figure 4 provides a conceptual schema of the influences that impact the adult development of cognitive/intellectual competence.
Adult cognitive functioning must, of course, be initially based upon both heritable (genetic) influences and the early environmental influences typically experienced within the home of the biological parents. It has been suggested by some behavior geneticists that many early environmental influences are nonshared (i.e., not shared by all members of a family). However, there is retrospective evidence that some early shared environmental influences may affect adult intellectual performance (see Schaie & Zuo, 2001). Both genetic and early environmental factors are thought to influence midlife cognitive functioning. Early environmental influences will, of course, also exert influences on midlife social status. Genetic factors are also likely to be implicated in the rate of cognitive decline in adulthood. Thus far, the best-studied gene in this context is the apolipoprotein E (ApoE) gene, one of whose alleles is thought to be a risk factor for Alzheimer's disease. ApoE status is therefore also considered a factor in cognitive development (the expression of this gene is probably not important prior to midlife).
Influence of health. Considerable information is available on the reciprocal effects of chronic disease and intellectual abilities. It has been observed that decline in intellectual performance in old age is substantially accelerated by the presence of chronic diseases. Conditions such as cardiovascular disease, renal disease, osteoarthritis, and diabetes tend to interfere with lifestyles that are conducive to the maintenance of intellectual abilities, while they also have direct effects on brain functioning. One study found that individuals free of chronic disease perform intellectually at levels characteristic of persons seven years younger who suffer from such diseases. However, it has also been shown that the age of onset of chronic disease is later, and the disease severity is less, when it occurs in individuals functioning at high intellectual levels.
Influence of lifestyles. Many studies have related individual differences in socioeconomic circumstances (and resultant lifestyles) to the maintenance of high levels of intellectual functioning into old age. In particular, it has been shown that individuals who actively pursue intellectually stimulating activities seem to decline at lower rates than those who do not. Such pursuits may include travel, intensive reading programs, participation in clubs and organizations, and cultural and continuing-education activities. Conversely, those individuals whose opportunities for stimulating activities have been reduced due to the loss of a spouse or other factors restricting their social networks may be at greatest risk for decline.
Influence of education. Both the maintenance of intellectually stimulating activities and the pursuit of healthful lifestyles appear to be dependent to a considerable extent on an individual's level of attained education. Over the course of the twentieth century, in the United States, there was an average increase in educational exposure amounting to approximately six years for men and five years for women. This societal shift may be largely responsible for many of the favorable cohort differences in intellectual abilities described in this article. Those advantaged educationally are also more likely to be engaged in intellectually stimulating work experiences. These, in turn, have been shown to have favorable effects on the maintenance of intellectual functions into old age. Finally, it should be noted that while there is eventual age-related decline in intelligence for both the educationally advantaged and disadvantaged, those who start at a high level are likely to retain sufficient intellectual competence throughout life.
K. Warner Schaie
See also Age-Period-Cohort Model; Creativity; Functional Ability; Learning; Problem Solving, Everyday; Wisdom.
Arbuckle, T. Y.; Maag, U.; Pushkar, D.; and Chaikelson, J. S. "Individual Differences in Trajectory of Intellectual Development over 45 Years of Adulthood." Psychology and Aging 13 (1998): 663–675.
Baltes, P. B. "The Aging Mind: Potentials and Limits." Gerontologist 33 (1993): 580–594.
Baltes, P. B.; Mayer, K. U.; Helmchen, H.; and Steinhagen-Thiessen, E. "The Berlin Aging Study (BASE): Overview and Design." Ageing and Society 13 (1993): 483–533.
Binet, A., and Simon, T. "Méthodes Nouvelles pour le Diagnostic du Niveau Intellectuel des Anormaux [New Methods for Diagnosing the Intellectual Level of Abnormals]." L'Année Psychologique 11 (1905): 102–191.
Bosworth, H. B., and Schaie, K. W. "Survival Effects in Cognitive Function, Cognitive Style, and Sociodemographic Variables in the Seattle Longitudinal Study." Experimental Aging Research 25 (1999): 121–139.
Busse, E. W. "Duke Longitudinal Studies of Aging." Zeitschrift für Gerontologie 26 (1993): 123–128.
Cunningham, W. R., and Owens, W. A., Jr. "The Iowa State Study of the Adult Development of Intellectual Abilities." In Longitudinal Studies of Adult Psychological Development. Edited by K. W. Schaie. New York: Guilford, 1983. Pages 20–39.
Eichorn, D. H.; Clausen, J. A.; Haan, N.; Honzik, M. P.; and Mussen, P. H. Present and Past in Middle Life. New York: Academic Press, 1981.
Gribbin, K.; Schaie, K. W.; and Parham, I. A. "Complexity of Life Style and Maintenance of Intellectual Abilities." Journal of Social Issues 36 (1980): 47–61.
Guilford, J. P. The Nature of Human Intelligence. New York: McGraw-Hill, 1967.
Horn, J. L., and Hofer, S. M. "Major Abilities and Development in the Adult Period." In Intellectual Development. Edited by R. J. Sternberg and C. A. Berg. Cambridge, U.K.: Cambridge University Press, 1992.
Hultsch, D. F.; Hertzog, C.; Small, B. J.; McDonald-Miszlak, L.; and Dixon, R. A. "Short-Term Longitudinal Change in Cognitive Performance in Later Life." Psychology and Aging 7 (1992): 571–584.
Matarazzo, J. D. Wechsler's Measurement and Appraisal of Adult Intelligence, 5th ed. Baltimore: Williams & Wilkins, 1972.
McArdle, J. J. "Structural Factor Analysis Experiments with Incomplete Data." Multivariate Behavioral Research 29 (1994): 404–454.
McArdle, J. J., and Prescott, C. A. "Age-Based Construct Validation Using Structural Equation Modeling." Experimental Aging Research 18 (1992): 87–115.
Plomin, R., and Daniels, D. "Why Are Two Children in the Same Family So Different from Each Other?" The Behavioral and Brain Sciences 10 (1987): 1–16.
Poon, L. W.; Sweaney, A. L.; Clayton, G. M.; and Merriam, S. B. "The Georgia Centenarian Study." International Journal of Aging and Human Development 34 (1992): 1–17.
Rott, C. "Intelligenzentwicklung im Alter [Development of Intelligence in Old Age]." Zeitschrift für Gerontologie 23 (1990): 252–261.
Sands, L. P.; Terry, H.; and Meredith, W. "Change and Stability in Adult Intellectual Functioning Assessed by Wechsler Item Responses." Psychology and Aging 4 (1989): 79–87.
Schaie, K. W. "Midlife Influences upon Intellectual Functioning in Old Age." International Journal of Behavioral Development 7 (1984): 463–478.
Schaie, K. W. "The Hazards of Cognitive Aging." Gerontologist 29 (1989): 484–493.
Schaie, K. W. "The Course of Adult Intellectual Development." American Psychologist 49 (1994): 304–313.
Schaie, K. W. "Generational Differences." In Encyclopedia of Gerontology. Edited by J. E. Birren. San Diego, Calif.: Academic Press, 1996a. Pages 567–576.
Schaie, K. W. Intellectual Development in Adulthood: The Seattle Longitudinal Study. Cambridge, U.K.: Cambridge University Press, 1996b.
Schaie, K. W. "The Impact of Longitudinal Studies on Understanding Development from Young Adulthood to Old Age." International Journal of Behavioral Development (2000).
Schaie, K. W.; Plomin, R.; Willis, S. L.; Gruber-Baldini, A.; and Dutta, R. "Natural Cohorts: Family Similarity in Adult Cognition." In Psychology and Aging: Nebraska Symposium on Motivation, 1991. Edited by T. Sonderegger. Lincoln: University of Nebraska Press, 1992.
Schaie, K. W., and Willis, S. L. "Theories of Everyday Competence." In Handbook of Theories of Aging. Edited by V. L. Bengtson and K. W. Schaie. New York: Springer, 1999. Pages 174–195.
Schaie, K. W., and Willis, S. L. "A Stage Theory Model of Adult Cognitive Development Revisited." In The Many Dimensions of Aging: Essays in Honor of M. Powell Lawton. Edited by B. Rubinstein, M. Moss, & M. Kleban. New York: Springer, 2000. Pages 175–193.
Schaie, K. W., and Zuo, Y. L. "Family Environments and Adult Cognitive Functioning." In Context of Intellectual Development. Edited by R. L. Sternberg and E. Grigorenko. Hillsdale, N.J.: Erlbaum, 2001.
Schooler, C.; Mulatu, M. S.; and Oates, G. "The Continuing Effects of Substantively Complex Work on the Intellectual Functioning of Older Workers." Psychology and Aging 14 (1999): 483–506.
Spearman, C. "General Intelligence: Objectively Determined and Measured." American Journal of Psychology 15 (1904): 201–292.
Sternberg, R. J. Beyond IQ: A Triarchic Theory of Human Intelligence. Cambridge, U.K.: Cambridge University Press, 1985.
Terman, L. M. The Measurement of Intelligence. Boston: Houghton, 1916.
Thorndike, E. L., and Woodworth, R. S. "Influence of Improvement in One Mental Function upon the Efficiency of Other Mental Functions." Psychological Review 8 (1901): 247–261, 384–395, 553–564.
Willis, S. L., and Schaie, K. W. "Everyday Cognition: Taxonomic and Methodological Considerations." In Mechanisms of Everyday Cognition. Edited by J. M. Puckett and H. W. Reese. Hillsdale, N.J.: Erlbaum, 1993.
Woodruff-Pak, D. S. "Aging and Intelligence: Changing Perspectives in the Twentieth Century." Journal of Aging Studies 3 (1989): 91–118.
Intelligence is defined as the capacity for learning, reasoning, understanding, and similar forms of mental activity. This definition implies that the concept of intelligence is both multifaceted (i.e., reflective of many aspects of mental ability) and implicative of differences among people (i.e., reflective of degrees of capacity, ability, or aptitude among individuals). Yet this definition does not necessarily relate directly to the definition of intelligence used by scientists. In fact there is no consensus on the definition of intelligence among professionals who study it (e.g., psychologists, educators, computer scientists).
There have been multiple attempts to define intelligence. These definitions can be broadly classified into five large groups: (1) consensus definitions, (2) operational definitions, (3) task-based or psychometric definitions, (4) process-based definitions, and (5) domain definitions.
“Consensus definitions” of intelligence are typically associated with attempts by researchers in the field to consolidate a variety of points of view and produce, collectively, a comprehensive common definition. In this regard two symposia that brought together researchers in the field are important. The first symposium, which took place in 1921 under the title “Intelligence and Its Measurement: A Symposium,” focused on the abilities to learn and adapt to the environment. However, the definitions of these abilities varied. For example, the American psychologist Lewis Terman emphasized abstract thinking, whereas another American psychologist, Edward Thorndike, stressed the importance of providing good responses to questions. The second symposium, which took place in 1986, brought together a new generation of intelligence researchers (e.g., Douglas Detterman, Ulric Neisser, Robert Sternberg). By then the field of intelligence had developed markedly, having produced hundreds of research articles and books. The resulting consensus definition kept the reference to learning and adaptive abilities but expanded to include many other abilities, including meta-cognitive abilities.
Although there is still no single consensus definition of intelligence, the discussions at these symposia, at multiple other meetings, and in the press support a broad definition that includes references to lower-level processes, such as perception and attention, and higher-level processes, such as problem solving, reasoning, and decision making, with regard to learning and demonstrating adaptive behaviors in problem situations. These lower- and higher-level processes are typically characterized along two dimensions: quality and speed. Quality refers to efficacy, or freedom from errors, and speed refers to the time taken to learn or to solve a problem. Intelligence implies few or no errors and high speed in all processes.
“Operational definitions” of intelligence are closely linked to the concept of intelligence testing. Intelligence testing was conceived of and developed by the French psychologists Alfred Binet and Théodore Simon, who first used such a test to identify learning-impaired Parisian children in the early 1900s. The “invention” was welcomed by psychologists around the world, especially in the United States, and resulted in the development of innumerable tests of intelligence. To reflect the wealth of the research and the differential power of intelligence tests in describing individual differences, the American psychologist Edwin Boring noted in 1923 that intelligence was simply what intelligence tests test. Although obviously circular in nature, this definition of intelligence is still powerful. Researchers and practitioners often use the common metric of IQ (intelligence quotient), even though IQ typically reflects many different theoretical positions when generated by different tests of intelligence. For example, the first tests of intelligence by Binet were primarily based on sensory processes; David Wechsler’s tests (which exist in three versions spanning infancy, childhood, and adulthood) measure primary judgment skills. Then there are theory-based tests, such as the tests of Raymond Cattell, which are based on the theory of crystallized (i.e., acquired and learned over the total life span) and fluid (i.e., transformable to novel materials, situations, and tasks) intelligence, and such modern tests of intelligence as the Cognitive Assessment System (by Jack Naglieri and Jagannath Prasad Das) or K-ABC (by Alan and Nadeen Kaufman), which are both based on the theories of the Soviet neuropsychologist Alexander Luria. Yet as long as a test can generate an IQ, it is assumed to measure intelligence.
“Task-based or psychometric” definitions of intelligence are associated with ideas of defining intelligence through tasks that, by agreement among researchers, call for intelligence. One of the first proponents of task-based definitions of intelligence was the American psychologist Charles Spearman (1863–1945). In the early 1900s Spearman proposed that intelligence includes a so-called general factor (g, or mental energy) and task-specific factors. The g-factor can explain the observation that indicators of performance on all intelligence tasks tend to correlate with each other (e.g., doing well on one task typically suggests strong performance on other tasks as well), whereas task-specific factors can explain why these correlations are not perfect (e.g., the performance indicators will differ on tasks that involve reading versus arithmetic). Spearman’s work had a tremendous impact on the field of intelligence: his students and followers included Cattell, Wechsler, Anne Anastasi, Detterman, Arthur Jensen, and many others. Spearman’s work also had opponents. For example, Thorndike argued for three forms of intelligence: abstract, mechanical, and social. Similarly Louis Thurstone argued that several primary mental abilities form intelligence (verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and reasoning). In an attempt to reconcile the theories of Spearman and Thurstone, Cattell proposed a hierarchical theory of intelligence in which lower-level abilities form two higher-order factors, fluid (reasoning with novel stimuli) and crystallized (reasoning with acquired knowledge) intelligence, which in turn contribute to the g-factor.
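Spearman's observation can be illustrated numerically. The sketch below, with an invented (not empirical) correlation matrix for four hypothetical mental tests, extracts a single common factor from the matrix's largest eigenvalue, a rough stand-in for g; the fact that one factor accounts for well over half the variance mirrors the "positive manifold" Spearman described.

```python
import numpy as np

# Hypothetical correlation matrix for four mental tests (illustrative values,
# not real data): all correlations are positive -- Spearman's positive manifold.
R = np.array([
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.55, 0.45],
    [0.50, 0.55, 1.00, 0.50],
    [0.40, 0.45, 0.50, 1.00],
])

# The eigenvector for the largest eigenvalue of R gives each test's loading
# on a single common factor -- a crude approximation of the g-factor.
eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])

# Share of total test variance explained by this single general factor.
g_share = eigvals[-1] / eigvals.sum()
print(np.round(np.abs(g_loadings), 2), round(g_share, 2))
```

With these invented correlations the single factor explains roughly 60 percent of the variance, while the remaining eigenvalues correspond to the task-specific factors that keep the correlations from being perfect.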
Another opponent of Spearman’s was Joy Paul Guilford, who, developing Thurstone’s ideas, stated that intelligence can be represented by 150 abilities that result from different combinations of operations (e.g., cognition and memory), content (e.g., figural and symbolic), and products (e.g., unit and class).
“Process-based” definitions of intelligence are linked to theories that are not test or task based but, rather, capture processes involved in intelligence across tasks, domains, and tests. For example, the so-called triarchic theory of Robert Sternberg postulates three fundamental processes underlying intelligence: (1) analytical processes, which reflect judgment of a quality of an argument; (2) practical processes, which indicate skills of adaptation to situations or environment; and (3) creative processes, which capture skills of generating new knowledge and practices. Each of these types of processes is “constructed” from three different components: (a) knowledge acquisition components, (b) performance components, and (c) metacognitive components. These componential processes can manifest themselves in any area of human functioning.
“Domain-based” definitions of intelligence are typically associated with domains of expertise. For example, Howard Gardner postulates eight dimensions of intelligence. These dimensions, to varying degrees, are present in all people and are recruited when particular types of tasks are performed or in particular domains of expertise. Specifically these intelligences are (1) bodily-kinesthetic, in which athletes excel; (2) musical, demonstrated to a high degree by musicians; (3) interpersonal, characteristic of politicians; (4) intrapersonal, common among philosophers; (5) logical-mathematical, possessed by mathematicians; (6) naturalistic, demonstrated by scientists; (7) verbal-linguistic, characteristic of writers; and (8) visual-spatial, necessary at high levels for engineers. Another example of domain-based definitions of intelligence is the theory of emotional intelligence (developed by Peter Salovey, John Mayer, and Daniel Goleman). This theory specifies intelligence in the domain of emotional functioning as the ability to perceive, appraise, express, access, generate, and regulate emotions and feelings.
The concept of IQ was developed by psychologists and statisticians in such a way that the distribution of scores remains relatively constant over a life span. IQs are compared across people, not within an individual, and are characterized by a population mean of 100 and a standard deviation of 15. Yet intelligence changes developmentally, and these changes occur in a number of ways. It is obvious that the intelligence of a one-year-old cannot be compared with the intelligence of a fifty-year-old, although their IQs can be compared. There are many developmental theories, for example those of Jean Piaget (1896–1980), that demonstrate that children reason in ways distinctly different from adults. Thus if a person had an IQ score of 110 at age one and has an IQ score of 110 at age fifty, this person’s “texture” of intelligence has changed, but his or her relative position among peers has remained constant. A few relevant observations should be noted. First, early childhood intelligence is not a good predictor of level of intelligence later in life. Second, intelligence tends to vary across a person’s life span, with a gradual increase toward middle adulthood and a gradual decline in older age. Third, it has been reported that in the developed world, intelligence tended to increase during the twentieth century (often called the Flynn effect), but it has appeared to stabilize or even decrease in the twenty-first century.
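The normalization described above (mean 100, standard deviation 15) makes it straightforward to translate an IQ score into a percentile rank within the population. A minimal sketch using only the Python standard library (the helper function `iq_percentile` is illustrative, not a standard API):

```python
from statistics import NormalDist

# IQ scores are scaled to a normal distribution with mean 100 and SD 15.
iq_dist = NormalDist(mu=100, sigma=15)

def iq_percentile(score: float) -> float:
    """Percentile rank of an IQ score within the general population."""
    return 100 * iq_dist.cdf(score)

# An IQ of 100 sits at the 50th percentile by construction; 115, one standard
# deviation above the mean, sits near the 84th; 130 sits near the 98th.
print(round(iq_percentile(100)), round(iq_percentile(115)), round(iq_percentile(130)))
```

This also illustrates why IQs can be compared across ages even though the underlying "texture" of intelligence changes: each score is defined only relative to the distribution of same-age peers.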
Individuals in the general population differ in their intelligence. Differences are captured by assessments of intelligence, which include both standardized tests (e.g., K-ABC) and experimental tasks (e.g., computerized tasks administered to register reaction time in response to particular stimuli). To identify the sources of these individual differences, researchers investigate the etiology (i.e., origin) of intelligence.
The etiology of intelligence is typically formulated in psychology as a question of nature and nurture: Does intelligence stem from genes (i.e., nature) or environments (i.e., nurture)? This question can be traced back to ancient times, when it was initially formulated as an “either/or” dilemma. In the early twenty-first century, however, there is a consensus that both hereditary and environmental factors play substantial and complementary roles in the development of intelligence. Two statistical coefficients are typically used to express the contributions of both genes and environments: “heritability,” which shows the amount of variation in intelligence among individuals attributable to genes, and “environmentality,” which captures the variation in intelligence attributable to environment. Both coefficients are relevant only at the level of population analyses and cannot be applied to individuals. Exciting tasks in early twenty-first century research pertain to the identification of specific genes and environments that underlie differences in intelligence. For example, it has been shown that variants in such genes as COMT (a gene responsible for the production of catechol-O-methyl transferase, an enzyme involved in the breakdown of major neurotransmitters) and BDNF (a gene responsible for the production of brain-derived neurotrophic factor, a protein involved in the biochemistry of neuronal survival, growth, and differentiation) are associated with individual differences in cognitive functioning and intelligence. It has also been shown that specific environments, such as those impoverished in or enriched with certain micronutrients (e.g., iodine), lead to individual differences in intellectual functioning. It is important to realize that neither genes nor environment has a deterministic impact on intelligence. The influence of both types of etiological factors, both additive and interactive, is probabilistic and takes place through the brain.
Specifically, there is a body of research establishing which structures and pathways of the brain are associated with solving intellectual tasks and how patterns of brain activation vary among people and across experimental conditions (e.g., rested versus sleep-deprived).
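The heritability and environmentality coefficients described above can be sketched with Falconer's classic twin-study approximation, a deliberate simplification of modern biometric models: heritability is estimated from the gap between identical (MZ) and fraternal (DZ) twin correlations on a trait. The correlation values below are illustrative, not empirical data.

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).

    MZ twins share ~100% of their genes, DZ twins ~50%, so doubling the
    difference in their trait correlations approximates the genetic share
    of population variance (heritability).
    """
    return 2 * (r_mz - r_dz)

# Illustrative (invented) twin correlations for an IQ-like trait.
r_mz, r_dz = 0.85, 0.60

h2 = falconer_heritability(r_mz, r_dz)  # heritability: genetic share of variance
c2 = r_mz - h2                          # shared-environment share
e2 = 1 - r_mz                           # nonshared environment plus measurement error
# Environmentality is the total environmental share, c2 + e2; together with
# h2 the three components partition the population variance.
print(h2, c2, e2)
```

As the text stresses, these are population-level variance decompositions: an h² of 0.5 says nothing about how much of any one individual's intelligence is "genetic."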
Although the majority of experts agree on the importance of both genes and environment in the etiology of intelligence, remnants persist of the raucous debate of the mid-1990s over the arguments put forth in The Bell Curve. Written by the psychologist Richard Herrnstein and the political scientist Charles Murray, The Bell Curve claimed that IQ is hereditary and, as such, the single determinant of a person’s life outcomes. That and similar debates indicate that the concept of intelligence remains a point of disagreement with the capacity to raise charged social issues.
The concept of intelligence is viewed by some as a social construct developed to capture individual differences in cognitive functioning; on this view it has no “permanent” definition or understanding, both of which vary with changes in societal context. Thus yet another disagreement in the literature on intelligence is the debate over whether intelligence is a “social” or a “real” phenomenon. Those who argue that the concept of intelligence is a social construct suggest it was invented by the privileged classes to maintain their privilege. Those who argue that the concept reflects a latent ability that truly differentiates people maintain that it is a helpful differentiating and predictive concept with value in decisions pertaining to education and job placement.
Four types of differences are typically discussed in the study of intelligence: sex differences, ethnic and racial differences, cultural differences, and differences in conditions (e.g., intelligence in deaf and hard-of-hearing people versus hearing people). Males and females tend to demonstrate equivalent or comparable average scores on tests of intelligence. Yet although there are no differences in performance when performance indices are averaged across tasks, there are differences on specific tasks, as well as differences in variability and range. Specifically, males tend to score higher on spatial and visual tasks, on certain tasks requiring memory, on motor tasks involving aiming, and on certain tasks requiring mathematical skills. Females tend to score higher on tasks requiring phonological and semantic processing, verbal production and comprehension, and fine motor skills. Broadly speaking, males demonstrate advantages in spatial reasoning, and females demonstrate advantages in verbal reasoning, but this generalized statement can be challenged by the presence and absence of sex differences on other tasks. As of the early twenty-first century there is no consensus on the profile, stability, and nature of sex differences in intelligence.
Another source of group differences in intelligence is variation in performance among different ethnic and racial groups. Group differences are typically seen on standardized tests of intelligence, especially those that rely heavily on g-theories. The differences among ethnic and racial groups demonstrate the underperformance of Hispanic Americans, Native Americans, and African Americans as compared with Asian and white Americans (of a variety of ethnic backgrounds). The differences are unsystematic, meaning that the profiles of differences vary for different tasks. In other words, there is no systematic differentiation of profiles of abilities among the ethnic or racial groups. It is of special interest that the ethnic gap appears to be smaller or closed when testing is conducted using tasks from process- or domain-based theories of intelligence.
Similarly, people in different cultural groups around the world demonstrate varied performances on intelligence tasks. Moreover, definitions of intelligence vary across cultures as well. Thus what is considered to be “intelligent” behavior among the Luo people of Kenya is different from that of the Yup’ik people of Alaska. A number of researchers have studied so-called implicit theories of intelligence—ideas about intelligence conceived by laypeople. It turns out that definitions of intelligence in the East and the West, for example, are quite different, with Eastern cultures emphasizing more social-emotional components of intelligence and Western cultures emphasizing information-processing aspects of intelligence.
Another source of group differences is the difference among people with special needs. For example, deaf people tend to score lower on tests of intelligence that call for verbal skills, but their performance on tests of spatial reasoning is similar to that of hearing individuals. Blind people score lower on spatial tasks (when administered in Braille), but their scores are average on verbal tasks. Thus characteristics of biological development (e.g., hormonal differences), acculturation, education, and various other peculiarities of development can all be related to group differences in intelligence. At this point there is no definitive answer to why these group differences exist.
SEE ALSO Cognition; Intelligence, Social; IQ Controversy; Memory; Multiple Intelligences Theory; Nature vs. Nurture; Psychology; Psychometrics
Cianciolo, Anna T., and Robert J. Sternberg. 2004. Intelligence: A Brief History. Malden, MA: Blackwell.
Deary, Ian J. 2000. Looking down on Human Intelligence: From Psychometrics to the Brain. Oxford: Oxford University Press.
Mackintosh, N. J. 1998. IQ and Human Intelligence. Oxford: Oxford University Press.
Elena L. Grigorenko
WORLD WAR I
THE INTERWAR PERIOD
WORLD WAR II
THE COLD WAR
Intelligence has a long European pedigree. The term itself originates from the "intelligencers"—specially assigned individuals who, during the reign of England's King Charles I (r. 1625–1649), were tasked with spying on the enemy. In other European countries too there was a growing trend in surveillance of both internal and external threats. However, intelligence as it is now understood is a twentieth-century creation.
The contemporary British system can trace its origins to 1909, when the newly formed Secret Service Bureau split, creating the Security Service (MI5) and the Secret Intelligence Service (MI6), dealing with domestic and foreign targets respectively. In France, following the 1870–1871 Franco-Prussian War, an intelligence organization was founded, though it was disbanded in 1899. In its place an intelligence component was assigned to the Deuxième Bureau of the Army General Staff. This complemented other intelligence organizations, in particular the Foreign Ministry's Cabinet Noir.
In Germany, as part of its General Staff, the earlier Intelligence Bureau was reformed in 1912 into a specialized intelligence and counterespionage division known as Department IIIb. In Italy, the secret services established in the mid-nineteenth century were reorganized many times, and in 1900 the Office of Information was established. The Russian secret police was initially founded in the sixteenth century by Ivan the Terrible. In 1883 the Okhrana was founded, which remained intact until the Bolshevik Revolution of 1917.
World War I was a momentous event that demonstrated for the first time how useful an effective intelligence organization could be. As would be the case with World War II, rapid technological advances were made, particularly in cryptography. In some instances intelligence proved to be decisive. One of the greatest successes for French intelligence was the discovery of the German plans to launch a gas attack on the Allied armies in 1915. Despite such information, some British and French commanders rejected the warning—with dire consequences, as the Battle of Ypres would show.
Examples of good, close intelligence collaboration included the Anglo-French base at Folkestone, in southeast England, set up to conduct agent-running operations into occupied parts of western Europe. Perhaps the greatest coup for the French was their spy situated within the German High Command, who provided a stream of valuable information.
The British intelligence effort was just as important during the war, especially in the field of code breaking. A good example was the interception of the Zimmermann telegram, which revealed Germany's attempt to draw Mexico into the war on its side. The telegram's publication helped precipitate the entry of the United States into the war in 1917.
In contrast to the Allied effort, German intelligence was less effective, suffering mixed fortunes. There were some successes, for instance predicting military developments in Russia, but there were also notable failures, notably the overreliance on open-source information (i.e., freely available material, such as radio broadcasts and published sources), which was susceptible to British deception efforts. At the same time, the Austro-Hungarian intelligence service did produce good results, especially in code breaking, and overall fared relatively well, particularly with regard to the Russian army, whose messages they could read.
Italy's introduction to the war in 1915 resulted in an increase in the size and scope of its intelligence effort, with collection stations opened in numerous European cities, including London, Paris, Madrid, Bern, and St. Petersburg. Russian intelligence, though badly structured, was generally good, particularly in terms of espionage. This was no doubt assisted through collaborative relations with the French. However, the poor structure caused great problems, for unlike some of its allies, Russia had to fight on two fronts.
World War I cannot be considered an "intelligence war" as World War II would be. To be sure, intelligence provided an effective means of gathering information, but in a period when such information was often novel, it was generally believed only when it conformed to the existing preconceptions of military and civilian decision makers. At the same time, however, the success of intelligence during the war served to increase its stature and importance, both militarily and diplomatically.
As a result, intelligence efforts were enlarged in all the major European countries. This was especially true among the victorious powers. In Britain, MI5 grew from nineteen staff members before the war to 844 by war's end. The code-breaking effort was increased, and in 1919 the Government Code and Cipher School was created. It was not until the mid-1930s, however, that the Air Ministry set up an intelligence outfit, and this would prove to be crucial a few years later when hostilities again broke out. Of perhaps greatest importance was the creation in 1936 of the Joint Intelligence Committee—a body composed of the various elements of the intelligence system, designed to produce all-source estimates for military and political decision makers.
France, the other major European victor, also increased its intelligence effort with the introduction of new intelligence organizations, each geared toward different objectives, including code breaking and combating foreign agents within France. Following Benito Mussolini's accession to power in 1922, Italian intelligence became a bureaucratically controlled yet loosely organized system. Mussolini, as would become the norm with other authoritarian dictators, considered himself the supreme intelligence analyst, and he alone was allowed to see the full range of information available.
Of the defeated powers the biggest changes occurred in Germany. The imperial police intelligence system had been dissolved by the Allies, and in its place a new organization was installed with the primary task of providing information on any political threats. In addition, the armed forces retained intelligence units, yet these were also directed toward providing information on internal, not external, threats.
With the Nazi Party's acquisition of power in the early 1930s, intelligence in Germany altered irrevocably. The Third Reich attached huge importance to the gathering of information on potential enemies, in many ways reflecting the insecurity that would dominate Soviet intelligence for so long. Intelligence was therefore omnipresent. Like other areas of the government, competing intelligence organizations strove to dominate Adolf Hitler's affection, concentrating on diplomatic, military, economic, and social-ideological targets.
In Soviet Russia intelligence had become an effective mode of governance with Vladimir Lenin's rise to power. The tsarist Okhrana was replaced by the Bolshevik Cheka, a ruthless organization designed to suppress internal opposition. By 1919 a covert foreign section had been set up to organize and spread the worldwide communist revolution. By the 1930s the Soviet intelligence system, now known as the NKVD (the acronym for Narodnyi Kommissariat Vnutrennyk Del, or People's Commissariat for Internal Affairs), had become Joseph Stalin's omniscient tool of terror, which, while coercing the populace at home, was also remarkably successful at recruiting agents abroad.
After 1918 intelligence had grown to become an integral component of government in all the major European countries. From the mid-1930s onward it became crucial, not least in monitoring the rising German aggression. Traditionally it has been assumed that the Anglo-French appeasement policies of the late 1930s resulted from a failure of intelligence to identify the Nazi threat. Yet intelligence records reveal this explanation to be far too simplistic: information had been gathered on the nature of Germany's diplomacy and its military capabilities, but intelligence was only one cog in policy makers' decisions. Indeed, the failure of German intelligence to gauge the British and French reactions to Germany's invasion of Poland in 1939 was far more disastrous than appeasement ever was.
From the outset, World War II rapidly became an intelligence war. Intelligence played a role in every major campaign, and scholars have debated its impact ever since. While there can be no definitive answer, one simple fact is clear: without intelligence the war would have been unrecognizably different.
One key area was intelligence liaison, which in general terms was effectively maintained in defiance of a common enemy. Polish intelligence and resistance proved to be crucial in this respect, for, by providing the first Enigma machine to British intelligence—enabling the Allies to intercept and decipher German Enigma codes—they achieved what many regard as the greatest intelligence coup of the war. Through Ultra—the code name given to the breaking of the German code—the Allies were able to discern enemy plans. Thus crucial tactical and strategic information was provided and turned out to be decisive in, for instance, the Battle of the Atlantic and the Battle of El Alamein. A corollary of this was the "XX System," or double-cross system. British intelligence managed to identify and intercept every single German spy within its shores. It also was able to "turn" many of them, so that they began to send false information back to Germany. Through Ultra, the Allies were able to observe the German acceptance of and reaction to such information.
A related war effort was the Allied use of deception. In its simplest sense, this involved camouflaging truck and tank movements in the desert so that their tracks could not be observed from the air. At the other end of the scale were the hugely successful campaigns to mislead the Germans about where the 1944 invasion of France (codenamed Operation Overlord) would occur. Through the double-cross system and the fabrication of dummy army bases on the southeast coast of England, the Germans were tricked into believing that the attack would take place at the Pas de Calais, when in fact it would take place farther along the French coast in Normandy.
With the German war machine rolling through Europe, British Prime Minister Winston Churchill set up the Special Operations Executive (SOE), whose task was to "set Europe ablaze." This was intelligence in its covert action sense, and it proved to be extremely useful. Through SOE, European resistance efforts to German occupation could be coordinated and extended. To take one example, Norwegian workers provided the Allies with information regarding German attempts to build an atomic bomb and the fact that a plant in Norway was being used to make heavy water, a crucial stage of the process. Liaising with British intelligence, SOE and the Norwegian resistance were able to severely disrupt these efforts, eventually sinking a ferry laden with all the German stocks in a Norwegian fjord.
Despite such efforts, however, intelligence was not always as effective. There is still debate as to the extent to which the Japanese attack on Pearl Harbor on 7 December 1941 could have been avoided, given the quantity and quality of Japanese messages intercepted. A similar yet more clear-cut case is that of the German invasion of the Soviet Union—Operation Barbarossa—in June 1941. Stalin, as the self-appointed supreme authority on intelligence, could not and would not believe that Hitler would dishonor the 1939 Nazi-Soviet Nonaggression Pact. As a result he chose to ignore the plethora of reliable intelligence that indicated this was precisely what Hitler intended to do. The error was rectified only by the Germans' miscalculation of their own pace of advance, culminating in their defeat at the Battle of Moscow in the winter of 1941–1942.
As a whole, however, Allied intelligence was exceptionally good during World War II. With the exception of the Soviet Union there were efficient chains of command, and intelligence data could flow freely both nationally and internationally. In the Soviet Union this passage was not as simple and often depended on whether intelligence confirmed existing beliefs. Where the Soviet Union did excel, as indeed it did in the 1930s and would continue to do postwar, was in the recruitment of agents.
On the Axis side, intelligence, and in particular intelligence exchange, was more limited, and this can perhaps be seen as an outcome of the relationship between the Axis Powers. German intelligence remained divided and beset by internal competition. Given the Nazis' ideological stance, far more people offered their services to the Allies than they did to the Axis Powers, yet there were some notable exceptions. In Britain, the American William Joyce, more commonly known as "Lord Haw-Haw," provided a stream of pro-German propaganda, for which he was eventually executed. An Abwehr officer, Major Nikolaus Ritter, recruited various agents in Britain, Belgium, and America. Despite being called the "rising star of the Abwehr" by its head, Admiral Wilhelm Canaris, Ritter was also its biggest failing, for he inadvertently revealed all of its agents to an American spy in 1941.
The Germans managed to break several of the Turkish codes, which revealed some brief details of British-U.S.-Soviet discussions. Other intercepted signals in 1943 revealed to the Nazi high command the attempts by the Spanish to distance themselves from Germany. Militarily, in general terms German intelligence was better at the tactical level—individual military situations—than it was on the larger strategic level, and this may have been a direct result of the German inability to penetrate the higher echelons of Allied decision making.
The Italians, before their surrender, also maintained a network of foreign agents, in particular in North Africa, undoubtedly a remnant of Italy's former colonial presence. By the middle part of the war the Italian Military Information Service had a large code-breaking section. Despite collecting a vast amount of information, often through theft as opposed to interception, the Italians appear to have succumbed to Allied deception efforts.
Intelligence during World War II was therefore integral to the day-to-day running of the war. The accuracy and importance of such intelligence can only effectively be judged in hindsight, yet what is crucial is how much weight was placed on such information. While it may be known in the early twenty-first century that some reports were correct and others false, what is more relevant is whether such material was acted upon at the time.
A stark contrast exists between the relative intelligence successes on the Allied side and the intelligence failings on the Axis side. While it is extremely difficult to gauge this difference and impact qualitatively, it is possible to state, as does the official history of British intelligence in the war, that "but for intelligence the war would have taken a very different course."
The postwar period saw a monumental increase in intelligence, and this in part was due to the emergence of the United States as a major intelligence force. Taking its lead from the British system, the Americans in 1941 established the Office of the Coordinator of Information, replaced the following year by the Office of Strategic Services, which in 1947 became the Central Intelligence Agency (CIA). If there had been a nylon curtain separating the powers during the interwar period, an impenetrable iron curtain separated them after 1945, and this had a direct impact on the importance of intelligence liaison.
Before the end of hostilities the British and Americans had identified that the Soviet Union would become the "new Germany," and intelligence efforts were redirected accordingly. Through several formal and informal agreements, the Anglo-American intelligence partnership flourished, bringing into its coalition several other European nations.
To those countries in the West, considerable U.S. assistance was offered, and this ensured that friendly intelligence organizations could be created, particularly in West Germany. Initially a CIA-controlled intelligence network was created there called the Gehlen Organization. In 1956 this became the BND (Bundesnachrichtendienst, or Federal Intelligence Service), which lasted throughout the Cold War. The BND was crucial in the gathering of intelligence on East Germany, as for many it became a "window on the east." Within Berlin, various military missions were established to observe conventional Soviet forces.
Several European countries were important due to their geographical proximity to the Soviet Union: Norway became the ideal spot to monitor Soviet missile and nuclear tests from the 1950s onward, and Turkey likewise was a site for capturing Soviet missile telemetry. Italy, with its large contingent of communists in the postwar period, was a useful base from which to disseminate propaganda, mainly through the sponsorship of terrorist attacks that could be blamed on the communists. In Germany, radio stations were used to spread information, and in many countries large military bases were established.
In the Eastern bloc, with the vast and all-pervasive KGB (Komitet Gosudarstvennoy Bezopasnosti, or Committee for State Security) at its heart, intelligence became synonymous with internal policing. As had long been its staple tradecraft, the Soviet Union, along with its Eastern bloc satellites, excelled in the recruiting of Western agents, and this continued right up until the end of the Cold War. The Soviet signals-intelligence effort, about which very little is known, was vast in scale and scope and included among its triumphs the bugging of numerous foreign embassies in Moscow, including that of the United States.
Intelligence became the staple ingredient of the Cold War, with ever-increasing budgets and ever-advancing technological means. As with World War II, it is extremely difficult to measure the impact of intelligence. Once more, it is possible to identify the often decisive role that intelligence played in individual episodes. In general terms, while it is difficult to quantify, it is possible to state that intelligence was consistently a crucial ingredient in policy making.
Famously, Western intelligence failed to foresee the implosion of the Soviet Union in 1991, as did, it seems, Soviet intelligence. The initial post–Cold War environment was a strange one: for the first time since 1900 there was no easily discernible enemy or threat. Growing throughout the Cold War but really only evident afterward was the rising threat of terrorism. This had moved from its Cold War state-sponsored form to a post–Cold War non-state-sponsored form. As it did so, the threat diversified, culminating in the attacks on the United States in September 2001.
A growing characteristic since 1991 has been the increasing importance of liaison with countries that previously had been considered hostile targets, in particular those in the Middle East. Internal policing and security also have increased in importance, as has the exchange of information. This has become especially crucial in Europe, as many of the targets of counterterrorist intelligence travel through and frequent European cities. By 2004, and in the wake of new high-casualty, high-impact terrorist attacks, intelligence liaison had become the crux of European security policy.
Since World War I intelligence has grown to become the cornerstone of governmental decision making and policy. Whereas its initial goal had been military, this scope has diversified to reflect the nature of the international scene, for as the target changes, so too must the intelligence organization. It is safe to say that intelligence will go through further stages, yet it is by no means clear how, when, or why this will happen. In the meantime intelligence, battered as it may be by the 2003 scandals regarding weapons of mass destruction in Iraq, will remain a permanent fixture of governance.
Andrew, Christopher M. Her Majesty's Secret Service: The Making of the British Intelligence Community. London, 1985.
Bungert, Heike, et al., eds. Secret Intelligence in the Twentieth Century. London, 2003.
Haswell, Jock. Spies and Spymasters: A Concise History of Intelligence. London, 1977.
Hess, Sigurd. "Intelligence Cooperation in Europe, 1990 to the Present." Journal of Intelligence History 3, no. 1 (summer 2003), 61–68.
Hinsley, F. H. British Intelligence in the Second World War. Cambridge, U.K., 1993.
May, Ernest R., ed. Knowing One's Enemies: Intelligence Assessment before the Two World Wars. Princeton, N.J., 1984. Reprint, 1996.
Plougin, Vladimir. Russian Intelligence Services. Vol. 1, The Early Years. Translated by Gennady Bashkov and edited by Claudiu A. Secara. New York, 2000.
Richelson, Jeffrey T. Foreign Intelligence Organizations. Cambridge, Mass., 1988.
Michael S. Goodman
Intelligence is an abstract concept whose definition continually evolves and often depends upon current social values as much as scientific ideas. Modern definitions refer to a variety of mental capabilities, including the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience, as well as the potential to do these things.
Several theories about intelligence emerged in the twentieth century, and with them debate about the nature of intelligence and whether it is determined by hereditary factors, the environment, or both. As methods of assessment developed, experts debated whether intelligence could be measured and how accurately, giving rise to psychometrics, a branch of psychology dealing with the measurement of mental traits, capacities, and processes. Publication in 1994 of The Bell Curve: Intelligence and Class Structure in American Life by Richard J. Herrnstein and Charles Murray stirred the controversy. Their findings pointed to links between social class, race, and intelligence quotient (IQ) scores, despite questions by many about the validity of IQ tests as a measurement of intelligence or a predictor of achievement and success.
Part of the problem regarding intelligence stems from the fact that nobody has adequately defined what intelligence really means. In everyday life, people have a general understanding that some people are "smart," but when they try to define "smart" precisely, they often have difficulty because a person can be gifted in one area and average or below in another. To explain this phenomenon, some psychologists have developed theories to include multiple components of intelligence.
Since about 1970, psychologists have expanded the notion of what constitutes intelligence. Newer definitions of intelligence encompass more diverse aspects of thought and reasoning. For example, American psychologist Robert Sternberg developed a three-part theory of intelligence which states that behaviors must be viewed within the context of a particular culture; that a person's experiences impact the expression of intelligence; and that certain cognitive processes control all intelligent behavior. When all these aspects of intelligence are viewed together, how people use their intelligence becomes more important than the question of how much intelligence a person has. Sternberg has suggested that some intelligence tests focus too much on what a person has already learned rather than on how well a person acquires new skills or knowledge.
Another multifaceted approach to intelligence is Howard Gardner's proposal that people have eight intelligences:
- Musical: Children with musical intelligence are always singing or tapping out a beat. They are aware of sounds others miss. Musical children are discriminating listeners.
- Linguistic: Children with linguistic intelligence excel at reading, writing, telling stories, and doing crossword or other word puzzles.
- Logical-Mathematical: Children with this type of intelligence are interested in patterns, categories, and relationships. They are good at mathematic problems, science, strategy games, and experiments.
- Bodily-Kinesthetic: These children process knowledge through their senses. They usually excel at athletics and sports, dance, and crafts.
- Spatial: These children think in images and pictures. They are generally good at mazes and jigsaw puzzles. They often spend lots of time drawing, building (with blocks, Legos, or erector sets), and daydreaming.
- Interpersonal: This type of intelligence fosters children who are leaders among their peers, are good communicators, and understand the feelings and motives of others.
- Intrapersonal: These children are shy, very aware of their own feelings, and are self-motivated.
- Naturalist: This type of intelligence allows children to distinguish among, classify, and use features of the environment. These children are likely to make good farmers, gardeners, botanists, geologists, florists, and archaeologists. Naturalist adolescents can often name and describe the features of every make of car around them.
There are many different types of intelligence tests, and they do not all measure the same abilities. Although the abilities tested are often correlated with each other, one should not expect scores from an intelligence test that measures a single factor to be similar to scores on another test that measures a variety of factors. Many people are under the false assumption that intelligence tests measure a person's inborn or biological intelligence. Intelligence tests are based on an individual's interaction with the environment and never exclusively measure inborn intelligence. Intelligence tests have been associated with categorizing and stereotyping people. Additionally, knowledge of one's performance on an intelligence test may affect a person's aspirations and motivation to obtain goals. Intelligence tests can also be culturally biased against certain groups.
STANFORD-BINET INTELLIGENCE SCALES Consisting of questions and short tasks arranged from easy to difficult, the Stanford-Binet measures a wide variety of verbal and nonverbal skills. Its fifteen tests are divided into the following four cognitive areas: verbal reasoning (vocabulary, comprehension, absurdities, verbal relations); quantitative reasoning (math, number series, equation building); abstract/visual reasoning (pattern analysis, matrices, paper folding and cutting, copying); and short-term memory (memory for sentences, digits, and objects, and bead memory). A formula is used to arrive at the intelligence quotient, or IQ. An IQ of 100 means that the child's chronological and mental ages match. Traditionally, IQ scores of 90–109 are considered average; scores below 70 indicate mental retardation. Gifted children achieve scores of 140 or above. Revised in 1986, the Stanford-Binet intelligence test can be used with children starting at age two. The test is widely used to assess cognitive development and often to determine placement in special education classes.
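The original formula, used in early ratio-scored versions of the Binet tests, divides mental age by chronological age and multiplies by 100; modern editions instead use deviation scores normed by age group. A minimal sketch of the ratio form (the function name is illustrative):

```python
def ratio_iq(mental_age_months: float, chronological_age_months: float) -> int:
    """Classic ratio IQ: (mental age / chronological age) x 100.

    An IQ of 100 falls out of the arithmetic whenever the two ages
    match, which is why 100 denotes a child performing exactly at
    the level expected for his or her age.
    """
    return round(mental_age_months / chronological_age_months * 100)

# A 10-year-old (120 months) performing at the level of an average
# 12-year-old (144 months):
print(ratio_iq(144, 120))  # -> 120
```

The same arithmetic shows why ratio scoring breaks down in adulthood, when mental-age growth levels off, and why age-normed deviation IQs replaced it.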
WECHSLER INTELLIGENCE SCALES The Wechsler intelligence scales are divided into two sections: verbal and nonverbal, with separate scores for each. Verbal intelligence, the component most often associated with academic success, implies the ability to think in abstract terms using either words or mathematical symbols. Performance intelligence suggests the ability to perceive relationships and fit separate parts together logically into a whole. The inclusion of the performance section in the Wechsler scales is especially helpful in assessing the cognitive ability of children with speech and language disorders or whose first language is not English. The test can be of particular value to school psychologists screening for specific learning disabilities because of the number of specific subtests that make up each section.
KAUFMAN ASSESSMENT BATTERY FOR CHILDREN The Kaufman Assessment Battery for Children (KABC) is an intelligence and achievement test for children ages 2.5–12.5 years. It consists of 16 subtests, not all of which are used for every age group. A distinctive feature of the KABC is that it defines intelligence as problem-solving ability rather than knowledge of facts, which it considers achievement. This distinction is evident in the test's division into two parts—intelligence and achievement—which are scored separately and together. The test's strong emphasis on memory and lesser attention to verbal expression are intended to offset cultural disparities between black and white children. In addition, the test may be given to non-native speakers in their first language and to hearing-impaired children using American Sign Language.
Babies were once thought to enter the world with minds that were blank slates that developed through a lifetime of experiences. As of the early 2000s, it is known that newborns have brains as sophisticated as the most powerful supercomputers, pre-wired with a large capacity for learning and knowledge. In the first few months of life, a baby's brain develops at an amazing rate. At birth, infants have the senses of sight, sound, and touch. At about three or four months, infants begin to develop memory, and it expands quickly. Modern brain imaging techniques have confirmed that children's intelligence is not just hereditary but is also affected greatly by environment. Babies' brains develop faster during their first year than at any other time. By three months, babies can follow moving objects with their eyes, are extremely interested in their surroundings, and can recognize familiar sounds, especially their parents' voices. At six months, infants begin to remember familiar objects, react to unfamiliar people or situations, and realize that objects are permanent. At seven months, babies can recognize their own name. Parents can help their infants develop their intelligence by talking and reading to them, playing with them, and encouraging them to play with a variety of age-appropriate toys.
Toddlers' lives generally revolve around experimenting with and exploring the environment around them. The primary source of learning for toddlers is their families. During their third year, toddlers should be able to sort and group similar objects by their appearance, shape, and function. They also start to understand how some things work, and their memory continues to improve rapidly. They are able to remember and seek out objects that are hidden or moved to a different location. Toddlers should be able to follow two-step instructions and understand contrasting ideas, such as large and small, inside and outside, opened and closed, and more and less. Toddlers also develop a basic understanding of time in relation to their regular activities, such as meals and bedtime.
At age three, preschoolers can say short sentences, have a vocabulary of about 900 words, show great growth in communication, tell simple stories, use words as tools of thought, want to understand their environment, and answer questions. At age four, children can use complete sentences, have a 1,500-word vocabulary, frequently ask questions, and learn to generalize. They are highly imaginative, dramatic, and can draw recognizable simple objects. Preschoolers also should be able to understand basic concepts such as size, numbers, days of the week, and time. They should have an attention span of at least 20 minutes. Children this age are still learning the difference between reality and fantasy. Their curiosity about themselves and the world around them continues to increase.
At age five, children should have a vocabulary of more than 2,000 words. They should be able to tell long stories, carry out directions well, read their own name, count to ten, ask the meaning of words, know colors, begin to know the difference between fact and fiction, and become interested in their surrounding environment, neighborhood, and community. Between the ages of seven and 12, children begin to reason logically and organize their thoughts coherently. However, generally, they can only think about actual physical objects; they cannot handle abstract reasoning. They also begin to lose their self-centered way of thinking. During this age range, children can master most types of conservation experiments and begin to understand that some things can be changed or undone. Early school-age children can coordinate two dimensions of an object simultaneously, arrange structures in sequence, change places or reverse the normal order of items in a series, and take something such as a story, incident, or play out of its usual setting or time and relocate it in another.
Starting at about age 12, adolescents can formulate hypotheses and systematically test them to arrive at an answer to a problem. For example, they can formulate hypotheses based on the phrase "what if." They can think abstractly and understand the form or structure of a mathematical problem. Another characteristic of the later school-age years is the ability to reason contrary to fact. That is, if they are given a statement and asked to use it as the basis of an argument, they are capable of accomplishing the task. Until they reach the age of 15 or 16, adolescents are generally not capable of reasoning as an adult. High school-age adolescents continue to gain cognitive and study skills. They can adapt language to different contexts, master abstract thinking, explore and prepare for future careers and roles, set goals based on feelings of personal needs and priorities, and are likely to reject goals set by others.
Autism is a profound mental disorder marked by an inability to communicate and interact with others. Its characteristics include language abnormalities and restricted, repetitive interests, all appearing in early childhood. As many as two-thirds of children with autistic symptoms are mentally deficient. However, individuals with autism can also be highly intelligent. Autistic individuals typically are limited in their ability to communicate nonverbally and verbally. About half of all autistic people never learn to speak. They are likely to fail to develop social relationships with peers, have limited ability to initiate conversation if they do learn how to talk, and show a need for routine and ritual. Various abnormalities in the autistic brain have been documented. These include variations in the frontal lobes of the brain, which govern control and planning, and in the limbic system, a group of structures in the brain linked to emotion, behavior, smell, and other functions. Autistic individuals may have limited development of the limbic system, which would explain some of the difficulties they face in processing information.
Mental retardation usually refers to people with an IQ below 70. According to the American Psychiatric Association, a mentally retarded person is significantly limited in at least two of the following areas: self-care, communication, home living, social-interpersonal skills, self-direction, use of community resources, functional academic skills, work, leisure, health, and safety. Mental retardation affects roughly 1 percent of the U.S. population. According to the U.S. Department of Education, about 11 percent of school-aged children enrolled in special education programs are students with mental retardation. There are four categories of mental retardation: mild, moderate, severe, and profound. There are many different causes of mental retardation, both biological and environmental. In about 5 percent of cases, retardation is transmitted genetically, usually through abnormalities in chromosomes, such as Down syndrome or fragile X syndrome. Children with Down syndrome have both mental and motor retardation. Most are severely retarded, with IQs between 20 and 49. Fragile X syndrome, in which a segment of the chromosome that determines gender is abnormal, primarily affects males.
Autism —A developmental disability that appears early in life, in which normal brain development is disrupted and social and communication skills are retarded, sometimes severely.
Down syndrome —A chromosomal disorder caused by an extra copy or a rearrangement of chromosome 21. Children with Down syndrome have varying degrees of mental retardation and may have heart defects.
Intelligence quotient (IQ) —A measure of somebody's intelligence, obtained through a series of aptitude tests concentrating on different aspects of intellectual functioning.
Kaufman Assessment Battery for Children —An intelligence and achievement test for children ages 2.5 to 12.5 years.
Psychometrics —The development, administration, and interpretation of tests to measure mental or psychological abilities. Psychometric tests convert an individual's psychological traits and attributes into a numerical estimation or evaluation.
Stanford-Binet intelligence scales —A standardized test designed to measure intelligence through a series of tasks assessing different aspects of intellectual functioning. An IQ score of 100 represents "average" intelligence.
Wechsler intelligence scales —A test that measures verbal and non-verbal intelligence.
Autism symptoms begin in infancy, but the condition is typically diagnosed between the ages of two and five. The symptoms of mental retardation are usually evident by a child's first or second year. In the case of Down syndrome, which involves distinctive physical characteristics, a diagnosis can usually be made shortly after birth. Mentally retarded children lag behind their peers in developmental milestones such as sitting up, smiling, walking, and talking. They often demonstrate lower than normal levels of interest in their environment and less responsiveness to others, and they are slower than other children in reacting to visual or auditory stimulation. By the time a child reaches the age of two or three, retardation can be determined using physical and psychological tests. Testing is important at this age if a child shows signs of possible retardation because alternate causes, such as impaired hearing, may be found and treated. There is no cure for autism or mental retardation.
When to call the doctor
Parents should consult a healthcare professional if their child's intellectual development appears to be significantly slower than that of their peers. Children suspected of having intelligence development problems should undergo a comprehensive evaluation to identify their difficulties as well as their strengths. Since no specialist has all the necessary skills, many professionals might be involved. General medical tests as well as tests in areas such as neurology (the nervous system), psychology, psychiatry, special education, hearing, speech and vision, and physical therapy may be needed. A pediatrician or a child and adolescent psychiatrist often coordinates these tests.
Parents should pay close attention to possible symptoms in their children. Autism is diagnosed by observing the child's behavior, communication skills, and social interactions. Medical tests should rule out other possible causes of autistic symptoms. Criteria that mental health experts use to diagnose autism include problems developing friendships, problems with make-believe or social play, endless repetition of words or phrases, difficulty in carrying on a conversation, obsessions with rituals or restricted patterns, and preoccupation with parts of objects. A diagnosis of mental retardation is made if an individual has an intellectual functioning level well below average and significant limitations in two or more adaptive skill areas. If mental retardation is suspected, a comprehensive physical examination and medical history should be done immediately to discover any organic cause of symptoms. If a neurological cause such as brain injury is suspected, the child may be referred to a neurologist or neuropsychologist for testing.
Armstrong, Thomas, and Jennifer Brannen. You're Smarter than You Think: A Kid's Guide to Multiple Intelligences. Minneapolis, MN: Free Spirit Publishing, 2002.
Brill, Marlene Targ. Raising Smart Kids for Dummies. New York: Wiley Publishing, 2003.
Deary, Ian J. Intelligence: A Very Short Introduction. Oxford, UK: Oxford University Press, 2001.
Georgas, James, et al. Culture and Children's Intelligence: Cross-Cultural Analysis of the WISC-III. Burlington, MA: Academic Press, 2003.
Bailey, Ronald. "The Battle for Your Brain: Science Is Developing Ways to Boost Intelligence, Expand Memory, and More. But Will You be Allowed to Change Your Own Mind?" Reason (February 2003): 25–31.
Bower, Bruce. "Essence of G: Scientists Search for the Biology of Smarts—General Factor Used to Determine Intelligence Level." Science News (February 8, 2003): 92–93.
Furnham, Adrian, et al. "Parents Think Their Sons Are Brighter than Their Daughters: Sex Differences in Parental Self-Estimations and Estimations of their Children's Multiple Intelligences." Journal of Genetic Psychology (March 2002): 24–39.
Gottfredson, Linda S. "Schools and the G Factor." The Wilson Quarterly (Summer 2004): 35–45.
Stanford, Pokey. "Multiple Intelligences for Every Classroom." Intervention in School & Clinic (November 2003): 80–85.
Child Development Institute. 3528 E Ridgeway Road Orange, CA 92867. Web site: <www.cdipage.com>.
American Academy of Child & Adolescent Psychiatry. 3615 Wisconsin Ave. NW, Washington, DC 20016. Web site: <www.aacap.org>.
Rosenblum, Gail. "Baby Brainpower." Sesame Workshop, 2004. Available online at <www.sesameworkshop.org/babyworkshop/library/article.php?contentId=860> (accessed November 10, 2004).
"The Theory of Multiple Intelligences." Human Intelligence, Fall 2001. Available online at <www.indiana.edu/~intell/mitheory.shtml> (accessed November 10, 2004).
Ken R. Wells
The roles of genes and environment in the determination of intelligence have been controversial for more than 100 years. Studies of the question have often been marred by untested assumptions, poor design, and even racism, faults that more modern studies have striven to avoid. Nonetheless, examining the biology of intelligence is an enterprise that continues to be fraught with difficulty, and there remains no real consensus even on how to define the term.
Conventional measures of intelligence are obtained using standard tests, called intelligence quotient tests or, more commonly, IQ tests. These tests have been shown to be reliable and valid. Reliability means that they produce consistent results across repeated administrations, whereas validity means that they measure what they claim to measure. IQ tests measure a person's ability to reason and to solve problems. These abilities are frequently called general cognitive ability, or "g."
Almost all genetic studies of the heritability of intelligence (how much is due to genetics and how much is due to the environment) have been obtained from IQ tests. To understand the studies, therefore, it is important to understand what IQ tests measure, and how their use and interpretation have changed over time.
The standard IQ-measurement approach to intelligence is among the oldest of approaches and probably began in 1876, when Francis Galton investigated how much the similarity between twins changed as they developed over time. Galton's study was concerned with measuring psychophysical abilities, such as strength of handgrip or visual acuity. The concept of general cognitive ability was first described by Charles Spearman in 1904. Later, Alfred Binet and Théodore Simon (1916) evaluated intelligence based on judgment, involving adaptation to the environment, direction of one's efforts, and self-criticism.
Most standard test results now include three scores: VIQ, PIQ, and FSIQ. The VIQ score measures verbal ability (verbal IQ), PIQ measures performance ability (performance IQ), and FSIQ provides an overall measurement (full scale IQ). Commonly used IQ tests include the Stanford-Binet Intelligence Scale, the Wechsler Intelligence Scale for Children (WISC), and the Wechsler Adult Intelligence Scales. The results achieved by individual test-takers on one of these IQ tests are likely to be similar to the results they achieve on the others, and they all aim to measure general cognitive ability (among other things). Measures of scholastic achievement, such as the SAT and the ACT, correlate highly with "g."
Environmental Effects on Intelligence
The study of intelligence must take environmental effects into account. The Flynn effect describes the finding that average IQ scores have increased about 3 points per decade over the past fifty years, with children scoring higher than their parents in each generation. This increase has been linked to multiple environmental factors, including better nutrition, increased schooling, higher educational attainment of parents, less childhood disease, more complex environmental stimulation, lower birth rates, and a variety of other factors.
Males and females have equivalent "g" scores. The question of racial differences and IQ arose when a 10-point IQ difference between African Americans and Americans of European descent was documented. Two adoption studies indicate that the effect may be in part related to environmental factors, including culture. Also, environmental differences similar to those identified with the Flynn effect can be postulated. Studies of black Caribbean children and English children raised in an orphanage in England found that the black Caribbean children had higher IQs than the English children, with mixed-race children in between. A study comparing black children adopted by white families and those adopted into black families in the United States showed that black children raised by whites had higher IQ scores, again suggesting that the environment played a role.
Expanded Concepts of Intelligence
Many of the standard measures of IQ, such as the WISC and the Stanford-Binet, have changed their content over the years. Although they both still report verbal, performance, and total scores, the Wechsler model now offers scores for four additional factors (verbal comprehension, perceptual organization, processing speed, and freedom from distractibility). The Stanford-Binet also yields additional scores, including abstract-visual reasoning, quantitative reasoning, and short-term memory.
However, the majority of research into genetic and environmental variance in IQ has rested on the assumption that general cognitive ability is the essence of intelligence. Newer tests that measure specific abilities, such as creativity, have not been included in genetic studies. The addition of new factors to the Wechsler and Stanford-Binet IQ tests represents a trend toward a broader approach to IQ, and away from the notion that IQ can be understood through the single factor "g."
Family, Twin, and Adoption Studies
Genetic studies have traditionally used models that evaluate how much of the variability in IQ is due to genes and how much is associated with environment. These studies include family studies, twin studies, and adoption studies.
General cognitive ability runs in families. For first-degree relatives (parents, children, brothers, sisters) living together, correlations of "g" for over 8,000 parent-offspring pairs averaged 0.43 (0.0 is no correlation, 1.0 is complete correlation). For more than 25,000 sibling pairs, "g" correlations averaged 0.47. Heritability estimates range from 40 to 80 percent, meaning that 40 to 80 percent of the variation in "g" is attributable to genes.
In twin studies of over 10,000 pairs of twins, monozygotic (genetically identical) twins averaged a 0.85 correlation of "g," whereas for dizygotic (fraternal, genetically like ordinary siblings) same-sex twins the "g" correlations averaged 0.60. These twin studies suggest that heritability (the genetic effect) accounts for about half of the variance in "g" scores.
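The "about half" figure follows from a classic back-of-the-envelope calculation known as Falconer's formula, which doubles the gap between the identical-twin and fraternal-twin correlations (identical twins share essentially all their genes, fraternal twins about half). A minimal sketch, with an illustrative function name:

```python
# Falconer's formula: a rough estimate of broad heritability (h^2)
# from twin correlations. Doubling the difference between the
# monozygotic and dizygotic correlations approximates the share
# of variance attributable to genes.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    return 2 * (r_mz - r_dz)

# Using the "g" correlations cited above:
h2 = falconer_heritability(r_mz=0.85, r_dz=0.60)
print(round(h2, 2))  # 0.5, i.e., about half the variance
```

This is only a first approximation; the formal twin-study models referred to in the text fit genetic and environmental variance components more carefully.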
Adoption studies also provide evidence for substantial heritability of "g." The "g" correlation for identical twins raised apart is similar to that for identical twins raised together, indicating that for genetically identical individuals, differences in environment had little effect on "g." The Colorado Adoption Project (CAP), a study of first-degree relatives separated by adoption, also indicated significant heritability of "g." Thus, classical genetic studies indicate that there is a statistically significant and substantial genetic influence on "g."
Newer genetic research on general cognitive ability has focused on developmental changes in IQ, multivariate relations among cognitive abilities (the contributions of multiple factors), and the specific genes responsible for the heritability of "g." Developmental changes over time were first studied by Galton in 1876. The CAP study followed, over twenty-five years, 245 children who had been separated from their biological parents at birth and adopted by one month of age. This study, and others, showed that the environmental variance in "g" for an adopted child is largely unconnected with the shared adoptive-family upbringing, that is, the environment shared by parents and siblings. For adoptive parents and their adopted children, the parent-offspring correlations for "g" were around zero. For adopted children and their biological mothers, and for children raised by their biological parents, the correlations were similar and increased with age.
Recent studies indicate that heritability increases over time, with infant measures of about 20 percent, childhood measures at 40 percent, and adult measures reaching 60 percent. Why is there an age effect for the heritability of "g"? Part of this could be due to different genes being expressed over time, as the brain develops. The stability of the heritability measure correlates with changes in brain development, with "maturity" of brain structure achieved after adolescence. Also, it is likely that small gene effects early in life become larger as children and adolescents select or create environments that foster their strengths.
Multivariate analyses look beyond general cognitive ability as measured by "g." Current models of cognitive abilities include specific components such as spatial and verbal abilities, processing speed, and memory. Less is known about the heritabilities of these specific cognitive skills. They too show substantial genetic influence, although it is weaker than that found for "g." Multivariate genetic analyses indicate that largely the same genetic factors influence different abilities: a gene found to be associated with verbal ability, for example, may also be associated with spatial ability and other specific cognitive abilities. Four studies have shown that genetic effects on measures of school achievement are highly correlated with genetic effects on "g." Discrepancies between school achievement and "g," as occur with underachievers, are predominantly of environmental origin.
Genes for Intelligence
The search for specific genes associated with IQ is proceeding at a rapid pace with the completion of the Human Genome Project. While defects in single genes, such as the fragile X gene, can cause mental retardation, the heritability of general cognitive ability is most likely due to multiple genes of small effect (called quantitative trait loci, or QTLs) rather than a single gene of large effect. QTLs contribute additively and interchangeably to intelligence.
Genetic studies have identified QTLs associated with "g" on chromosomes 4 and 6. These studies involved both children with high "g" and children with average "g." A QTL identified on chromosome 6 is active in regions of the brain involved in learning and memory. The gene is the insulin-like growth factor 2 receptor gene, or IGF2R, whose exact role in cognition is still unknown. One allele (alternative form) of IGF2R was found to be present 30 percent of the time in two groups of children with high "g," twice the frequency of its occurrence in two groups of children with average "g," and these findings have been replicated in other studies. QTLs associated with "g" have also been identified on chromosome 4. Future identification of QTLs will allow geneticists to begin to answer questions about IQ, development, and gene-environment interaction directly, rather than relying on less specific family, adoption, and twin studies.
In summary, intelligence measurements ranging from specific cognitive abilities to "g" have a complex relationship. Genetic contributions are large, and heritability increases with age. Heritability remains high for verbal abilities during adulthood. Finally, the identification of QTLs associated with "g" and with specific cognitive abilities is just beginning.
see also Behavior; Complex Traits; Eugenics; Fragile X syndrome; Genetic Discrimination; Quantitative Traits; Twins.
and Ruth Abramson
Casse, D. "IQ since 'The Bell Curve.'" Commentary Magazine 106, no. 2 (1998): 33-41.
Chiacchia, K. B. "Race and Intelligence." In Encyclopedia of Psychology, 2nd ed., Bonnie Strickland, ed. Farmington Hills, MI: Gale Group, 2001.
Deary, I. J. "Differences in Mental Ability." British Medical Journal 317 (1998): 1701-1703.
Fuller, J. L., and W. R. Thompson. "Cognitive and Intellectual Abilities." In Foundations of Behavior Genetics. St. Louis, MO: C.V. Mosby Co., 1978.
Plomin, R. "Genetics of Childhood Disorders, III: Genetics and Intelligence." Journal of the American Academy of Child and Adolescent Psychiatry 38 (1999): 786-788.
Sternberg, R. J., and J. C. Kaufman. "Human Abilities." Annual Review of Psychology 49 (1998): 479-502.
Sternberg, R. J., and E. L. Grigorenko. "Genetics of Childhood Disorders, I: Genetics and Intelligence." Journal of the American Academy of Child and Adolescent Psychiatry 38 (1999): 486-488.
Intelligence is the ability to acquire, remember, and use knowledge in order to make judgments, solve problems, and deal with new experiences.
Not every student will have the experience of taking an I.Q. (intelligence quotient) test. However, almost all students do know what it is like to take a standardized test. Unlike a test taken for a class, which usually is based on specific material already covered in the classroom, standardized tests typically include a wide range of items that ask the test-taker to use words, solve problems, and understand relationships among concepts. The results often show how the student performed compared to others of the same age or grade level. For example, if a student scores at the 80th percentile, that means this person performed better than 8 out of 10 students of the same age or grade who took the test.
I.Q. tests are used to compare an individual’s performance to that of others on a sampling of school-related tasks. An individual’s performance on all these tasks is averaged and compared to that of other people of the same age. I.Q. varies among people much the way height does. Most people are close to average height, while a small number are much taller or shorter than average. Similarly, the average I.Q. is 100, and most people fall somewhere between 70 and 130. Those who fall below 70 often are diagnosed with mental retardation, while those with an I.Q. higher than 130 often are considered gifted.
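The ranges quoted above follow from the fact that modern IQ tests are normed to a normal distribution with a mean of 100 and a standard deviation of 15. A quick sketch of that arithmetic using Python's standard library (the variable names are illustrative):

```python
from statistics import NormalDist

# IQ tests are scaled to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Share of people scoring between 70 and 130 (within two SDs of the mean):
within_two_sd = iq.cdf(130) - iq.cdf(70)
print(round(within_two_sd, 3))  # 0.954 -- "most people"

# A score of about 113 corresponds roughly to the 80th percentile:
print(round(iq.cdf(113), 2))  # 0.81
```

So a child at the 80th percentile of a normed test scores only modestly above the mean, which is why most test-takers cluster near 100.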
How did the practice of measuring intelligence get started? Back in 1905, public school administrators in Paris asked psychologist Alfred Binet to come up with a test that would identify mentally retarded children who could benefit from special help outside the regular classroom. It was hoped that this would help relieve the problem of overcrowded classes. The Binet-Simon test that resulted set the stage for intelligence testing throughout the 1900s.
An American psychologist at Stanford University named Lewis Terman revised the Binet-Simon test in 1916. This revision, known as the Stanford-Binet Intelligence Scale, is still in use today, although it has been revised several more times. The latest version includes sections on abstract/visual reasoning, verbal reasoning (word-related problems), quantitative reasoning (number-related problems), and short-term memory.
Wechsler Intelligence Scales
The Wechsler Intelligence Scales are another well-known set of I.Q. tests. Psychologist David Wechsler developed a number of tests in the 1940s, 50s, and 60s that were tailored to children of different ages as well as to adults. Today, many schools use the Wechsler Intelligence Scale for Children–Third Edition (WISC-III) to evaluate children between the ages of 6 and 16. One of Wechsler's contributions is the notion that intelligence can be broken down into two main types of problem-solving: verbal and nonverbal. The Verbal Scale on the WISC-III measures how well children are able to use words to solve problems of different kinds, including some that involve common sense and others that involve more abstract reasoning. The Performance Scale on the test measures how well children use nonverbal abilities to make sense of visual relationships; for instance, by solving a puzzle or deciphering a code.
Lewis Terman and the Stanford Study
In the early 1900s, it was widely believed that gifted people were physically inferior, had unusual interests, and found it difficult to relate to others. In the 1920s, psychologist Lewis Terman and his colleagues at Stanford University launched a study of 1,500 children in California who were gifted (I.Q.s over 130). After tracking the children for several years, the researchers found that they actually were healthier, taller, better adjusted, and more popular than the average child. As these gifted children grew into young adults, they were more likely to attend college, achieve academically, pursue advanced degrees, and go on to higher-level professional positions in fields such as science, writing, and business. They also tended to be more satisfied with their lives as adults.
This dispelled the notion that extremely bright people were "eggheads" who were at a disadvantage socially. It also suggested that I.Q. may be a somewhat reliable predictor of later academic and professional success. At the same time, however, the study showed that a high I.Q. is not a guarantee of success, as there were also many individuals who did not achieve at a high level. While many of the "gifted children" were well educated and had good professional jobs, all levels of employment and income were found.
Intelligence is sometimes classified as left-brained or right-brained, although that is an oversimplification. People can hear words and musical sounds with both ears, but the right ear is believed to have a stronger connection to the left hemisphere of the brain where words and speech are processed. People can hear words spoken into the left ear but they understand them better when assisted by the speech pathway from the right ear. Likewise, the left ear has a stronger connection to the right hemisphere of the brain where musical sounds are processed. People can hear melodies played into the right ear but enjoy them more when assisted by the sound pathway from the left ear.
Defining intelligence is not as simple as giving someone an I.Q. test, however. In fact, scientists still are debating what intelligence really is. Harvard psychologist and education expert Howard Gardner has challenged the notion that there is a single human intelligence, or even just verbal and nonverbal intelligences. Instead, Gardner has proposed a theory of multiple intelligences. He argues that human beings have at least seven separate intelligences, each relatively independent of the others:
- linguistic (reading and writing)
- logical-mathematical (using numbers, solving logic problems)
- spatial (finding one’s way around an environment)
- musical (perceiving and creating patterns of pitch and rhythm)
- bodily-kinesthetic (making precise movements, as in performing surgery or dance)
- interpersonal (understanding others)
- intrapersonal (knowing oneself)
Gardner also believes that any definition of intelligence must take into account what the culture values. For example, while we might consider someone intelligent if that person can use words or numbers well, people of another culture might place more value on skills such as hunting, fishing, or understanding nature. Gardner’s theory favors observation of people over time, rather than short-answer I.Q. tests, for measuring the different intelligence types.
Other psychologists have suggested still other theories for understanding intelligence. Swiss psychologist Jean Piaget believed that intelligence should be defined as adaptation to the environment. Piaget looked at how children display intelligence at different stages of life, from infancy through adolescence, and tried to make generalizations about how they are able to cope with their surroundings and meet new challenges. More recently, psychologist Robert Sternberg proposed a three-sided theory of intelligence, arguing that intelligence actually is composed of three parts: the ability to analyze information to solve problems, the creative ability to incorporate insights and new ideas, and the practical ability to size up situations and survive in the real world.
Most likely, all of these theories are at least partly correct and can contribute to our overall understanding of intelligence. However, updated versions of the tests developed by Alfred Binet and David Wechsler still are used to measure I.Q. Many psychologists and educators have strong feelings for or against giving I.Q. tests. Some argue that I.Q. tests are very useful for predicting how well a particular child will do in school and judging whether that child needs extra support or more challenges. However, others fear that children who test poorly may be stereotyped as low achievers and not given the level of attention they otherwise would have received. Still others believe that specific test questions put individuals from certain ethnic groups at a disadvantage. For example, think about how some of the verbal expressions used by African American or Latino students differ from those used by their Caucasian classmates. A verbal question that asks about a particular word that is commonly used by people of one ethnic group might be unfamiliar to people from other backgrounds. Finally, some experts are concerned that I.Q. tests may underestimate the abilities of people with speech, movement, and other disabilities.
Testing and Evaluation
National Association for Gifted Children, 1707 L Street Northwest, Suite 550, Washington, DC 20036. A national organization for parents and teachers that focuses on the special needs of gifted and talented students. Telephone 202-785-4268 http://www.nagc.org
U.S. Department of Education, 400 Maryland Avenue Southwest, Washington, DC 20202. The department of the federal government that oversees special education programs for mentally retarded and gifted students. Telephone 800-872-5327 http://www.ed.gov
Intelligence is the ability to solve problems. It is also commonly referred to as practical sense or the ability to get along well in all sorts of situations. People cannot see, hear, touch, smell, or taste intelligence. Yet the more of it people have, the better able they are to respond to the world around them. Anyone interested in understanding intelligence will find many theories, definitions, and opinions available.
During the late nineteenth century, scientists became interested in differences in human thinking abilities. Out of this interest evolved the need to distinguish between children who could learn in a school environment and those who could not. At the turn of the twentieth century, Alfred Binet and Theodore Simon developed a set of questions that helped identify children who were having difficulty learning. This set of questions was later used in the United States by Lewis Terman of Stanford University and eventually became the Stanford-Binet Intelligence Test. The main purpose of measuring intelligence is prediction: measuring thinking ability helps psychologists predict children's future learning and, if necessary, develop educational programs that will enhance it.
In addition to general thinking ability, other definitions of intelligence describe the specific ways a child responds to problems. For example, Howard Gardner is interested in how children use different abilities to display intelligence. In particular, he says that children have mathematical, musical, interpersonal, linguistic, spatial, bodily-kinesthetic, intrapersonal, and naturalistic intelligences. He theorizes that children have all these types of intelligence, but have more ability in some than in others. Theorist Daniel Goleman argues that general intelligence, measured by the traditional test, is not as useful in predicting success in life as the measurement of emotional intelligence. He suggests that abilities such as initiative, trustworthiness, self-confidence, and empathy are more important to consider than general intelligence. However, until specific tests of these types of intelligence are developed, it is not possible to determine how much of each type a child has.
Measuring Intelligence as a Comprehensive Process
Although people cannot see intelligence, they can measure it. Psychologists measure intelligence using several instruments, such as the Stanford-Binet scale, the Wechsler Intelligence Scales, and the Kaufman Assessment Battery for Children. These tests measure abilities such as information processing, memory, reasoning, and problem solving. Tasks that measure these abilities include identifying the missing part in a picture, repeating numbers, or defining vocabulary words. These tests also measure a child's ability to respond in an acceptable way to different social situations. A common feature of all established intelligence tests is that the child must be able to see, hear, or speak in order to complete them; a child who can hear and answer questions orally is best positioned to score well.
For children who cannot speak or hear, there are tasks for measuring nonverbal intelligence within each of the tests. Some of these include items such as completing puzzles or reproducing a design using blocks. However, instructions are given verbally, so a child will need to hear and understand questions in order to respond. There are other less frequently administered tests of performance, such as the Test of Non-verbal Intelligence where instructions are given in pantomime. In cases where a child does not speak English, translated intelligence tests that measure the same abilities can be used. In cases where no translation exists, the use of a qualified translator is acceptable.
The method of measuring intelligence of infants and toddlers not old enough to speak is a little different. For example, instead of identifying children who cannot learn, psychologists measure whether the infants or toddlers have developed a common ability by a certain age. One test used with this age group is the Bayley Scales of Infant Development. This test includes tasks such as rolling over, smiling, and imitating sounds. Specifically, psychologists want to know how an infant is developing compared to other infants of the same age.
After psychologists give intelligence tests, they can begin to determine a child's level of intelligence by looking at the number of items the child answered correctly compared with other children of the same age. Correct responses are tallied and converted into an intelligence quotient, or IQ score. The scores of the children in the original norming group are distributed around an average score of 100. Most children have average intelligence and score between 85 and 115. Very few children score in the low range associated with mental retardation or the high range associated with giftedness.
Several important factors should be considered when discussing an IQ score. If the child was having a bad day or if the examiner made an error in scoring, the results would not be typical. Psychologists must consider these factors in addition to how the child performs on other tests that measure the child's academic achievement and typical behavior at home. In other words, intelligence is not based on one score from one test. In fact, levels of intelligence cannot be determined unless all of these factors are considered.
Typically, psychologists who are highly trained and professionally qualified in giving intelligence tests will determine intelligence. For the most part, intelligence tests are administered to elementary school children because learning difficulties are easier to notice when children begin school. In most cases a teacher will suspect something different about the performance of a child and will ask the school psychologist about it. In other cases, parents may want to know if their child is ready for school and will ask about testing services in their community. In either case, assessing a child's level of intelligence helps identify the strengths and weaknesses in the child's learning abilities. This leads to individual learning programs for the child and more useful tasks at school.
Certainly, it is important to consider how individuals process information in order to predict learning performance. For this reason, intelligence tests are necessary, and in some states required by law. However, it is equally important that educators not place children in less demanding classrooms only because they think a child may not be able to learn at a faster pace. As previously discussed, intelligence may be increased. Limiting a child's learning opportunities prevents the child from acquiring new problem-solving skills; educators do not want to frustrate a child who needs a different type of program, and will change learning tasks as needed.
Environmental and Genetic Influences on Intelligence
Typically, the way a child thinks and solves problems stays fairly stable from age six on. After beginning school, a child's ability to think appears to develop at a normal rate through grade levels that match the child's age. If a child is given problem-solving tasks, the child will learn how to solve problems; if not, those skills develop more slowly. Fortunately, children who are not given the chance to learn how to think about problems before attending school can ultimately catch up. Because of this, psychologists are careful when interpreting low intelligence scores in children younger than age six. It also means that while low intelligence cannot be cured, it can be changed. In fact, educational programs are specifically planned to enrich environments in order to improve educational and life skills.
In addition to a child's environment, the intelligence of the child's parents has some influence on the intelligence the child is born with. Questions about the influence of genetics are addressed by looking at characteristics children inherit from their parents. In studies of twins, some scientists have shown that intelligence is largely inherited. Researchers have also found that the higher the parents' IQs, the higher their child's IQ tended to be. At the same time, there was less consistency between the IQs of adoptive parents and their adopted children. Many other factors are related to the development of intelligence, including parent education, family financial status, family size, and early schooling. Parents who provide a rich learning environment and foster good learning behaviors will tend to have children with better than average IQ scores, barring any medical causes of mental retardation.
Low Intelligence Scores
Psychologists who identify low levels of thinking ability will describe the child as having below-average intelligence; a score of sixty-nine or less results in this classification. Unfortunately, this label has potentially negative effects. A child may take a low score as a definite sign that he or she cannot learn, and may then fail to learn, seeming to confirm the classification. Historically, a negative meaning has been attached to low IQ scores, and the suggestion that a child has mental retardation can be devastating to the child's family. Because of the nature of errors in testing and the fact that intelligence can be changed, scores that fall at or around seventy should be interpreted very carefully. Finally, teachers and parents may come to expect that children with this classification cannot learn and that giving them challenging tasks is therefore unnecessary. If, because of a low IQ score or classification, children are not expected to learn, then no one may demand higher performance from them.
It is very difficult to predict outcomes based on the score of an intelligence test, because these tests do not measure other important factors, such as motivation. In addition, intelligence tests are sometimes used inappropriately with minority children who may not understand certain items because of cultural differences. Therefore, any intelligence test score must be fully understood and interpreted with great care.
See also: Developmental Norms
Kamphaus, Randy. Clinical Assessment of Children's Intelligence. Boston: Allyn and Bacon, 1993.
Nuthall, Ena, Ivonne Romero, and Joanne Kalesnik. Assessing and Screening Preschoolers: Psychological and Educational Dimensions, 2nd edition. Boston: Allyn and Bacon, 1999.
An abstract concept whose definition continually evolves and often depends upon current social values as much as scientific ideas. Modern definitions refer to a variety of mental capabilities, including the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience as well as the potential to do so.
Several theories about intelligence emerged in the 20th century, and with them debate about the nature of intelligence: whether it is hereditary, environmental, or both. As methods to assess intelligence developed, so did theorizing about whether intelligence can be measured accurately, a field known as psychometrics. As the 20th century drew to a close, publication of The Bell Curve by Richard J. Herrnstein and Charles Murray in 1994 reignited the controversy. Their findings pointed to links between social class, race, and IQ scores, despite questions by many about the validity of IQ tests as a measurement of intelligence or a predictor of achievement and success.
Part of the problem regarding intelligence stems from the fact that nobody has adequately defined what intelligence really means. In everyday life, we have a general understanding that some people are "smart," but when we try to define "smart" precisely, we often have difficulty because a person can be gifted in one area and average or below in another. To explain this phenomenon, some psychologists have developed theories to include multiple components of intelligence.
Charles Darwin's younger cousin, Sir Francis Galton, inspired by On the Origin of Species, developed a forerunner of 20th-century testing in the 1860s when he set out to prove that intelligence was inherited. He used quantitative studies of prominent individuals and their families.
British psychologist and statistician Charles Spearman in 1904 introduced a central concept of intelligence psychometrics, pointing out that people who perform well on one type of intelligence test tend to do well on others also. This general mental ability that carried over from one type of cognitive testing to another, Spearman named g—for general intelligence. Spearman concluded that g consisted mainly of the ability to infer relationships based on one's experiences. Spearman's work led to the idea that intelligence is focused on a single, main component.
French psychologists Alfred Binet and Theodore Simon followed in 1905, introducing the concept of mental age to match chronological age in children with average ability. In bright children, mental age would exceed chronological age; in slower learners, mental age would fall below chronological age. Simon and Binet's test was introduced into the United States in a modified form in 1916 by Stanford psychologist Lewis Terman , and with it the concept of the intelligence quotient (I.Q.), the mental age divided by chronological age and multiplied by 100.
With the adoption of widespread testing using the Stanford-Binet and two versions created for the Army in World War I, the concept of the intelligence test departed from Binet and Simon's initial view. Intelligence became associated with a fixed, innate, hereditary value. That is, one's intelligence, as revealed by IQ tests, was locked at a certain level because of what was seen as its hereditary basis. Although a number of well-known and respected psychologists objected to this characterization of intelligence, it gained popularity, especially among the public.
At this time, people placed great faith in the role of science in improving society; intelligence tests were seen as a specific application of science that could be used beneficially. Unfortunately, because of the nature of the tests and because of many people's willingness to accept test results uncritically, people of racial minorities and certain ethnic groups were deemed to be genetically inferior with regard to intelligence compared to the majority.
Some early psychologists thought that measuring the speed of sensory processes and reaction times might indicate an individual's intelligence. This approach provided no useful results. Subsequently, tests reflecting white American culture and its values provided the benchmark for assessing intelligence. Although such tests indicate the degree of academic success that an individual is likely to experience, many have questioned the link to the abstract notion of intelligence, which extends beyond academic areas.
According to some scholars, the results of early intelligence testing shaped immigration laws that restricted entry into the United States of supposedly "inferior" groups. This claim seems to have some merit, although many psychologists objected to the conclusions drawn from mass intelligence testing; in large part, the immigration laws reflected the attitudes of Americans in general toward certain groups of people.
In the 1940s, a different view of intelligence emerged. Rejecting British psychologist Charles Spearman's emphasis on a single general factor, g, American psychologist L.L. Thurstone suggested that intelligence consists of a set of specific abilities. He identified seven primary mental abilities: word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory.
Taking Thurstone's concept even further, J.P. Guilford developed a theory in which intelligence comprises five different operations or processes (evaluation, convergent production, divergent production, memory, and cognition), five different types of content (visual, auditory, symbolic, semantic, and behavioral), and six different products (units, classes, relations, systems, transformations, and implications). Each of these components was seen as independent, yielding a theory of intelligence with 5 × 5 × 6 = 150 distinct elements.
In the past few decades, psychologists have expanded the notion of what constitutes intelligence, and newer definitions encompass more diverse aspects of thought and reasoning. For example, psychologist Robert Sternberg developed a three-part (triarchic) theory of intelligence, which holds that behaviors must be viewed within the context of a particular culture (a given behavior might be highly regarded in one culture and held in low regard in another); that a person's experiences affect the expression of intelligence; and that certain cognitive processes control all intelligent behavior. Viewed together, these aspects make the question of how people use their intelligence more important than the question of how much intelligence a person has. Sternberg has also suggested that current intelligence tests focus too much on what a person has already learned rather than on how well a person acquires new skills or knowledge. Another multifaceted approach is Howard Gardner's proposal that people have eight intelligences: logical-mathematical, linguistic, musical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic.
Daniel Goleman has written about emotional intelligence, the capacity to manage one's feelings and to interact and communicate with others; it combines the interpersonal and intrapersonal intelligences of Gardner's model.
One feature that characterizes the newly developing concept of intelligence is that it carries a broader meaning than a single underlying trait (e.g., Spearman's g). The emerging ideas of Sternberg and Gardner suggest that any simple attempt at defining intelligence is inadequate, given the wide variety of skills, abilities, and potential that people manifest.
Some of the same controversies that surfaced in the early years of intelligence testing have recurred repeatedly throughout the century: the relative effects of environment versus heredity, the degree to which intelligence can change, the extent of cultural bias in tests, and even whether intelligence tests provide any useful information at all.
The current approach to intelligence emphasizes how people use the information they possess, not merely the knowledge they have acquired. Intelligence is not a concrete, objective entity, although psychologists have sought various ways to assess it. The particular definition of intelligence that has currency at any given time reflects the social values of the era as much as its scientific ideas.
Despite these newer waves of thinking, however, the practice of intelligence testing remains closely tied to Charles Spearman's ideas. Tests of intelligence tend to mirror the values of our culture, linking intelligence to academic skills such as verbal and mathematical ability, although performance-oriented tests exist.
See also Culture-fair test; Stanford-Binet intelligence scales; Wechsler Intelligence Scales
Gardner, Howard. Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic Books, 1999.
Gould, S.J. The Mismeasure of Man. New York: W.W. Norton, 1996.
Khalfa, Jean, ed. What Is Intelligence? Cambridge: Cambridge University Press, 1994.