Racial Hierarchy: Disproven
The perception of humans as belonging to “racial” categories dates from the Renaissance and the European colonization of the New World and of those parts of the Old World remote from Europe itself. The aboriginal inhabitants of these areas were perceived, in hierarchical fashion, as categorically different from and inferior to the colonists, and they were described in derogatory terms. The European colonizers possessed technological capabilities that were largely lacking, or at least less developed, in the areas being colonized. They took understandable pride in the marine technology that built the ships that carried them there in the first place, yet it was this same technology that moved people from one part of the world to another without their seeing anything of the inhabitants of the regions in between, and this contributed to the perception of the people of the world in categorical “racial” terms. In addition to their marine technology, the literacy of the colonizers and the navigating skills they had learned led them to assume a categorical distinction between their own capabilities and achievements and those of the original inhabitants of the areas being colonized. Inevitably, they looked down on these peoples as being of a lesser order of intellectual worth.
Throughout history, all human groups have felt that they were the best of humankind, and those that looked different and lacked technological sophistication were considered inferior. More than a few psychologically oriented writers of the late twentieth and early twenty-first centuries have treated the expectation that different human groups differ in average intellectual capacity as valid, just as they have accepted the assumption that “races” are valid biological categories. These writers include J. Philippe Rushton, the author of Race, Evolution, and Behavior: A Life History Perspective (1995); Arthur R. Jensen, who wrote The g Factor: The Science of Mental Ability (1998); and Richard Lynn and Tatu Vanhanen, the authors of IQ and the Wealth of Nations (2002).
During the colonial period, both Europeans and Americans assumed that there were no cities in sub-Saharan Africa and that Africans did not pursue an agricultural way of life. In fact there were many urban centers in Africa, and agriculture was well established and widespread. Furthermore, African religious sophistication has been widely documented (see Glazier 2001).
The assumption that “races” are valid biological categories has been essentially disproven by the finding that the variance of inherited traits within subjectively assumed “racial” categories is many times greater than the variance of those same traits between such categories. Quantitative work on this subject clearly demonstrates that “race” is not a valid biological category (see, for example, Fish 2002 and Templeton 2002). When genetic diversity is tested, well over 80 percent of the known range of variation occurs among the individuals of any given population, while only about 6 percent occurs between the populations of different geographical regions.
There is another major reason to reject the folk assumption that locally identified human groups should be expected to differ in biologically inherent capabilities: an appreciation of the nature of the selective forces that demanded understanding and response from the members of the human groups in question. This outlook derives from a full anthropological perspective, including an assessment of how human populations lived in the distant past, when their physical and mental characteristics were being shaped by the evolutionary forces that influenced human chances for survival. As it stands, most assessments of the survival problems faced by different human populations consider only the way those populations live now, not the lifeways of their ancestors in the past.
Looking at the human populations of the world, one thing that needs to be emphasized is that virtually none of them now live the way their ancestors did in the Pleistocene epoch, which ended just over 10,000 years ago. Even the Australian aborigines, so often taken as typifying the lifeway of the “primitive,” were living a late to post-Pleistocene way of life at the time of European invasion and settlement late in the eighteenth century. Starting nearly two million years ago, all ancestral hominid populations lived by hunting and gathering, an essentially similar type of existence in all the occupied portions of the Old World. Hunting involved selecting a prey animal, trotting after it for a number of days until it could go no farther, and then moving in for the kill. This existence put the same pressures on people throughout the inhabited world, and the same was true of the knowledge needed to collect edibles from the plant kingdom. In regard to what people had to figure out, selective pressures did not differ from one part of the inhabited world to another. Of course, the selective forces maintaining pigment in the skin did differ from the tropics to the temperate parts of the world, but these had nothing to do with human problem-solving capabilities.
If there are differences in the capabilities of human populations, these tend to be very different from what so many ethnocentric commentators have assumed. Those whose survival has depended upon certain capabilities that have been relaxed in other human populations have retained what almost certainly had been common to all populations during the Pleistocene. For example, tests in the late twentieth century have shown that Australian aborigines have less near-sightedness and astigmatism than the European-derived people who were testing them. Peoples whose ancestors had most recently survived by hunting have tended to retain more fast-twitch muscle capabilities, which are more frequently found among those who are the best sprinters in the world (see Entine 2000).
If the lifeways of our Pleistocene ancestors required the same problem-solving capabilities throughout the world, so too did the lifeways that followed: it took just as much wit or intelligence to cope with the problems of making a farm work in the absence of any written instructions. The amount of rote learning needed to carry out such a project had to be every bit as daunting as outwitting prey animals was for Pleistocene hunters, or as figuring out what was edible and what was not. There is thus no reason to expect that the innate intellectual capabilities of any population of the world differ to any significant extent from those of any other.
Brace, C. Loring. 1995. The Stages of Human Evolution, 5th ed. Englewood Cliffs, NJ: Prentice-Hall.
Connah, Graham. 2001. African Civilizations: An Archaeological Perspective, 2nd ed. Cambridge, U.K.: Cambridge University Press.
Ehret, Christopher. 1998. The African Classical Age: Eastern and Southern Africa in World History, 1000 BC to AD 400. Charlottesville: University Press of Virginia.
Entine, Jon. 2000. Taboo: Why Black Athletes Dominate Sports and Why We Are Afraid to Talk About It. New York: PublicAffairs.
Fish, Jefferson M., ed. 2002. Race and Intelligence: Separating Science from Myth. Mahwah, NJ: Lawrence Erlbaum.
Glazier, Stephen D., ed. 2001. The Encyclopedia of African and African-American Religions. New York: Routledge.
Jensen, Arthur R. 1998. The g Factor: The Science of Mental Ability. Westport, CT: Praeger.
Lynn, Richard, and Tatu Vanhanen. 2002. IQ and the Wealth of Nations. Westport, CT: Praeger.
Rushton, J. Philippe. 1995. Race, Evolution, and Behavior: A Life History Perspective. New Brunswick, NJ: Transaction Publishers.
Templeton, Alan R. 2002. “Genetic and Evolutionary Significance of Human Races.” In Race and Intelligence: Separating Science from Myth, edited by J. M. Fish, 31–36. Mahwah, NJ: Lawrence Erlbaum.
C. Loring Brace