Ratings for Video Games, Software, and the Internet

Ratings are labeling systems that index media content (e.g., films, television programs, interactive games, recorded music, websites) primarily to control young people's access to particular kinds of portrayals. The underlying assumption is that children and young adolescents are particularly vulnerable to message influences and therefore need to be shielded from certain types of content. The content most typically rated consists of portrayals of sexuality, violence, vulgar language, or adult themes, although this varies from country to country. For example, nudity, all but ignored in the Scandinavian countries, often earns more restrictive ratings in the United States; in Germany, violence and racist speech are of particular concern; Australia explicitly adds suicide to the list of problematic kinds of content.

The Ratings Controversy

Ratings have been controversial since their inception, and attempts to rate the content of new media have intensified the debate. As advances in communication technology increase the amount, accessibility, and vividness of media content that is available to ever-growing audiences, particularly young audiences, questions that first emerged in the early days of motion pictures are revisited. Is rating any different from censorship? Who should do the rating? What kinds of content require ratings? What criteria should be used? What form should the label or advisory take? How do ratings affect audiences, profits, and content producers?

Ratings have been characterized as representing a "middle ground," somewhere between doing nothing at all (i.e., allowing youths unfettered access to any and all content) and government censorship. It is not surprising that how close to the "middle" rating systems are perceived to be depends largely on who is looking. In the United States, for example, some parents and child advocates contend that ratings are inconsistent and often ineffective; they argue that more stringent controls are necessary. Most content producers and civil libertarians, on the other hand, view any attempt to rate or label content, even voluntary systems exercised by nongovernmental bodies, as a threat to free speech. They contend that, at worst, ratings provide a means for full-blown government censorship and, at best, exert a chilling effect on whether and how "nonmainstream ideas" are expressed.

Typically, content ratings are developed in response to public and political pressures to "do something" about media content, pressures that arise when someone makes a case that particular kinds of media depictions threaten youths, if not society in general. This has certainly been the case with each of the new communication media. For example, in the early 1980s, when music videos first confronted parents with violent, sexual, and misogynist images, public pressures to "do something" about popular music resulted in record industry self-labeling of recordings in order to head off government intervention. Similarly, when in the early 1990s the U.S. Congress responded to public outcries about graphic violence in interactive games by threatening regulation, the video game and computer software industries developed parental advisory systems. In the mid-1990s, reports of children gaining easy access to pornographic materials on the World Wide Web fueled an intense debate about controlling children's access to information on the Internet. That debate ultimately led to passage of the Communications Decency Act (CDA), making the display or transmission of "indecent or patently offensive material" to minors a criminal offense. Legal challenges on First Amendment grounds led the U.S. Supreme Court to overturn the CDA in 1997, so controversy over whether and how to protect children on the Internet continues.

Descriptive Versus Evaluative Ratings

A fundamental issue in the debate over ratings concerns the difference between descriptive and evaluative ratings criteria (sometimes termed "rules based" and "standards based" criteria, respectively). Descriptive approaches attempt to classify content on the basis of concrete, objective criteria about which it is presumed very different individuals can agree (e.g., "Does a living creature suffer physical injury or death?"). Evaluative approaches attempt to be more sensitive to situational variations by allowing more subjective judgments (e.g., "Is the nudity artistic, erotic, or pornographic?"), but they risk disagreement over just what terms such as "artistic" and "erotic" mean to different people. Thus, a descriptive system would rule that any website displaying a bare female breast must be assigned the same rating, regardless of whether the image was a painting by Amedeo Modigliani or Peter Paul Rubens, a Playboy centerfold, or an X-rated video. Under an evaluative system, on the other hand, these three websites could each be given different ratings, such as "artistic," "erotic," or "pornographic," with the ultimate ratings depending entirely on the judgment of the person doing the rating. Although evaluative approaches work relatively well within highly homogeneous communities where values and definitions are closely shared, each successive step toward a more heterogeneous audience increases the likelihood of disagreement. Conversely, descriptive approaches increase the likelihood that different observers will agree about what is depicted but are less flexible in accounting for situational nuances. This issue becomes increasingly important as globalization makes the same content available to people in locations with widely different meaning and value systems. It has reached critical proportions with the development of the World Wide Web, a fundamental premise of which is that it reaches the most heterogeneous audience possible.

Age-Based Ratings

A closely related issue concerns whether and how to classify content on the basis of age. As the primary justification for ratings tends to be the protection of youths, it is not surprising that most systems around the world apply age-based advisories (e.g., "parental discretion advised") and/or restrictions (e.g., "no one under seventeen admitted"). In most cases, however, determination of which age restriction to employ is almost totally subjective. Different cultures, indeed individual parents, often disagree about what is or is not appropriate both for thirteen-year-olds in general and for particular thirteen-year-olds. Some of the most convincing testimony to the subjectivity of indexing by age is provided by the sheer number of different ages that various rating systems employ as markers. Depending on which medium and rating system is examined, advisories or restrictions can be found for ages six, seven, thirteen, fourteen, seventeen, eighteen, and twenty-one in the United States alone. Similarly, a survey of ratings in thirty different countries shows that every year between three and twenty-one (with the exception of nine and twenty) is used to mark some kind of content.

The alternative to indexing by age is simply to provide descriptive labels or icons, leaving it up to individual caretakers to decide whether to restrict a particular child's access. For example, the International Content Rating Association (ICRA), a global consortium of representatives from the public sector and the Internet industry, describes website content (e.g., "explicit sexual acts," "passionate kissing," "innocent kissing"), but it leaves judgments about what is or is not appropriate for an individual child entirely up to the parents or caretakers. The ICRA system also provides extensive, relatively objective definitions of the terms used for each label, in an attempt to reduce uncertainty about the meaning of any particular label.
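In computational terms, the ICRA approach amounts to a simple matching step: a site declares descriptive labels, and the parent's software compares those labels against a locally configured blocklist. The following Python sketch illustrates the idea; the label vocabulary and site names are hypothetical stand-ins, not the actual ICRA schema.

```python
# Sketch of a descriptive-label filter in the spirit of the ICRA approach.
# Site names and label vocabulary below are illustrative assumptions.

# Labels each site declares about its own content (self-rating).
SITE_LABELS = {
    "example-art.test": {"nudity"},
    "example-news.test": {"violence"},
    "example-kids.test": set(),
}

def is_blocked(site: str, parental_blocklist: set) -> bool:
    """Block a site if any of its declared labels appears in the
    parent-chosen blocklist. Unrated sites pass through here; a
    stricter policy could treat missing labels as grounds to block."""
    labels = SITE_LABELS.get(site, set())
    return bool(labels & parental_blocklist)

policy = {"nudity", "explicit sexual acts"}
print(is_blocked("example-art.test", policy))   # True
print(is_blocked("example-kids.test", policy))  # False
```

Note that all judgment lives in the parent's choice of `policy`; the labels themselves stay purely descriptive, which is the point of the ICRA design.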

At issue, of course, is the amount of parental effort required by each approach versus the opportunity to tailor control of access to the needs and abilities of individual children. It is relatively easy for a parent to rely on some general statement indicating that a particular game or website is or is not appropriate for children under a specific age. It is relatively demanding for a parent to work through descriptions and definitions of all of the different kinds of rated content that might appear in order to decide what is and is not suitable for a particular child. Moreover, the value of age-based rating is further complicated by evidence that labeling content on the basis of age serves more to attract than to deter some children—a kind of forbidden fruit effect. This phenomenon seems most pronounced among youths who are slightly under a specified age. That is, labeling content as being inappropriate for children under thirteen years increases the appeal of that content among eleven- and twelve-year-olds.

Who Does the Rating?

Still another point of controversy surrounding ratings concerns whether they should be administered by some independent third party or by individuals who are involved in producing or distributing the content. Here, the issue is one of trust. Can consumers trust ratings that game designers or webmasters assign to their own creations, or is this a case of "asking the fox to guard the hen house"? This question increases in importance as the sheer volume of content increases. For example, it is feasible for an independent rating board such as the one employed by the Motion Picture Association of America to view and classify several hundred motion pictures each year. However, the hundreds of hours of television programming that most U.S. households receive daily make such third-party ratings problematic, contributing to the broadcast television industry's decision to adopt a system that allows producers or broadcasters to rate their own material.

These two approaches (i.e., independent ratings and producer ratings) resulted in the two systems that were initially developed to rate interactive games. In this case, the problem was not so much the number of titles developed each year but the sheer amount of time that it takes to move through all aspects of an individual game (often several hundred hours). The Interactive Digital Software Association opted to create the Entertainment Software Rating Board (ESRB), a third-party group, to review videotapes (submitted by game developers) of selected sections of the video games and assign one of five age-indexed rating categories. The ESRB categories and their descriptors are as follows:

EC: early childhood, ages three-plus; should contain no material that parents would find inappropriate,

E: everyone, ages six-plus; may contain minimal violence or some crude language,

T: teen, thirteen-plus; may contain violence, strong language, or suggestive themes,

M: mature, seventeen-plus; may contain intense violence or mature sexual themes, and

A: adult, eighteen-plus; may include graphic depictions of sex or violence.

In contrast to the ESRB, the now defunct Recreational Software Advisory Council (RSAC) developed a self-rating system that enabled software developers to attach descriptive labels, icons, and intensity ratings to computer games. Because computer games are often translated for video game platforms, which are almost impossible to market without an ESRB rating, over time most interactive games opted to use the ESRB system, effectively eliminating the RSAC system. Ironically, both systems ran the risk of public distrust—the RSAC system because it employed self-rating, the ESRB system because it depended on whatever videotape excerpts game designers chose to submit for examination. In both cases, the solution was to provide means for public scrutiny and comment, coupled with sanctions for misrepresentation. If the experience of consumers contradicted the rating assigned by either system, it was a simple process to check the product fully and institute sanctions, including denying a rating to any game that had been misrepresented. To the extent that lack of a rating reduces or closes off distribution channels, the economic threat is presumed to motivate game designers and producers to make accurate content disclosures.

The Internet further complicates many of these rating issues. The vast reservoir of global and continually changing information on the World Wide Web renders third-party rating of all accessible material problematic. Although third-party ratings can identify a limited array of websites that are judged to be inappropriate or appropriate for children, sheer volume implies that tens of thousands of websites must go unrated. Self-rating systems for the web face a different problem—how to convince content producers to rate their websites voluntarily, as well as to rate them accurately. Since a fundamental attribute of the Internet is the provision of a free distribution channel, many content producers may see few incentives to rate their websites, and there are even fewer sanctions for misrepresentation or failure to rate. If one does not care whether or not children can access a website, why bother to rate it? A counterbalance to this, of course, would be the development of an Internet-filtering system that allows only rated websites to be accessed, a strong motivation for content producers who want to reach children.
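The "allow only rated websites" counterbalance described above is, in effect, a default-deny policy: a site with no rating is treated exactly like a restricted one. A minimal sketch, assuming a hypothetical local table of rated sites and made-up rating names:

```python
# Sketch of a default-deny filter: unrated sites are blocked.
# Domain names and rating strings are hypothetical assumptions.

RATED_SITES = {
    "example-education.test": "suitable-for-children",
    "example-adult.test": "adults-only",
}

def allow(site: str) -> bool:
    """Default-deny: an unrated site is treated like a restricted one,
    which gives producers who want young audiences a reason to self-rate."""
    rating = RATED_SITES.get(site)  # None if the site never rated itself
    return rating == "suitable-for-children"

print(allow("example-education.test"))  # True
print(allow("example-unrated.test"))    # False (unrated, so blocked)
```

The design choice here is the economic lever the article describes: because being unrated closes off the audience, producers who want to reach children have an incentive to rate, and to rate accurately.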

Many Voices, Many Values

The multitude of voices, values, and meaning systems inherent in a global communication system also leads to a proliferation of content that people want rated. In addition to advisories for sex, violence, nudity, and vulgar language, various international groups have called for indexing of racist, misogynist, or antireligious content, as well as content that involves other forms of "hate speech," portrayals of drugs or other "taboo" substances, or suicide. As more kinds of content are added to the list, there comes a point at which rating systems become too burdensome to use.

Finally, because the web makes little or no distinction between news, education, art, entertainment, casual conversation, or any other kind of informational context, arguments about whether and how to differentiate content depending on context have become particularly thorny. Should graphic violence in a news report be exempt from rating? Should human genitalia displayed on a health-related website be rated differently from those displayed on an entertainment website? Should these things be rated at all?

On the plus side, the Internet industry has developed the Platform for Internet Content Selection (PICS), a system that enables easy design of content rating systems and makes it possible for parents to specify the kinds of content their children will be able to access on the household computer. Moreover, PICS can simultaneously host both self-rating and third-party systems, making it possible for consumers to combine these approaches.

The ICRA is developing a voluntary, descriptive self-rating system to operate at the heart of a PICS-based system. The ICRA system provides parents the option of (1) examining a list of descriptive labels and definitions developed to index each kind of content and then setting the browser to filter as they choose, (2) accepting the judgment of whatever third-party organization wishes to share the system, or (3) both. Thus, one parent might rely on the third-party judgment of a religious organization, another on a group of educators, and a third might make individual judgments based entirely on a personal assessment of what kinds of information are appropriate to his or her own child.

Such a system depends on there being a critical mass of content websites that have either self-rated or have been indexed by some third party, as well as the option for parents to block websites that have not been rated. Even given such a system, the complexities of developing ratings that fit the needs and desires of parents from around the world make it unlikely that a "perfect" system will ever emerge. What is clear is that the extent to which any rating system is effective depends on thoughtful, active participation of parents who care about the kinds of content to which their children have access.

See also: Communications Decency Act of 1996; First Amendment and the Media; Internet and the World Wide Web; Pornography; Pornography, Legal Aspects of; Ratings for Television Programs; Ratings for Movies; Sex and the Media; V-Chip; Violence in the Media, Attraction to; Violence in the Media, History of Research on.



Donald F. Roberts