The growing participation of African Americans in television, both in front of and behind the camera, has coincided with the radical restructuring of race relations in the United States from the end of World War II to the present day. Throughout this period, the specific characteristics of the television industry have complicated the ways in which these changing relations have been represented in television programming.
Television was conceived as a form of commercialized mass entertainment. Its standard fare—comedy, melodrama, and variety shows—favors simple plot structures, family situations, light treatment of social issues, and reassuring happy endings, all of which greatly limit character and thematic development. Perhaps more than any other group in American society, African Americans have suffered from the tendency of these shows to depict one-dimensional character stereotypes.
Because commercial networks are primarily concerned with the avoidance of controversy and the creation of shows with the greatest possible appeal, African Americans were rarely featured in network series during the early years of television. Since the 1960s, the growing recognition by network executives that African Americans are an important group of consumers has led to greater visibility; however, in most cases, fear of controversy has led programmers to promote an unrealistic view of African-American life. Black performers, writers, directors, and producers have had to struggle against the effects of persistent typecasting and enforced sanitization in exchange for acceptance in white households. Only when African Americans made headway into positions of power in the production of television programs were alternative modes of representing African Americans developed.
Although experiments with television technology date back to the 1880s, it was not until the 1930s that sufficient technical expertise and financial backing were secured for the establishment of viable television networks. The National Broadcasting Company (NBC), a subsidiary of the Radio Corporation of America (RCA), wanted to begin commercial television broadcasting on a wide scale but was interrupted by the outbreak of World War II, and the television age did not commence in earnest until after peace was declared.
In 1948 the three major networks—NBC, the Columbia Broadcasting System (CBS), and the American Broadcasting Company (ABC)—began regularly scheduled prime-time programming. That same year, the Democratic Party adopted
a strong civil rights platform at the Democratic convention, and the Truman administration issued a report entitled To Secure These Rights, the first statement made by the federal government in support of desegregation. Yet these two epochal revolutions—television and the civil rights movement—had little influence on one another for many years. While NBC, as early as 1951, stipulated that programs dealing with race and ethnicity should avoid ridiculing any social or racial group, most network programming rarely reflected the turbulence caused by the agitation for civil rights, nor did activists look to television as a medium for effecting social change. The effort to obtain fair and honest representation of African Americans and African-American issues on television remains a complex and protracted struggle.
In the early years of television, African Americans appeared most often as occasional guests on variety shows. Music entertainment artists, sports personalities, comedians, and political figures of the stature of Ella Fitzgerald, Lena Horne, Sarah Vaughan, Louis Armstrong, Duke Ellington, Cab Calloway, Pearl Bailey, Eartha Kitt, the Harlem Globetrotters, Dewey "Pigmeat" Markham, Bill "Bojangles" Robinson, Ethel Waters, Joe Louis, Sammy Davis Jr., Ralph Bunche, and Paul Robeson appeared in such shows as Milton Berle's Texaco Star Theater (1948–1953), Ed Sullivan's Toast of the Town (1948–1955), the Steve Allen Show (1950–1952; 1956–1961), and Cavalcade of Stars (1949–1952). Quiz shows like Strike It Rich (1951–1958), amateur talent contests like Chance of a Lifetime (1950–1953; 1955–1956), and shows concentrating on sporting events (particularly boxing matches), like The Gillette Cavalcade of Sports (1948–1960), provided another venue in which prominent blacks occasionally took part.
Rarely did African Americans host their own shows. Short-run exceptions included The Bob Howard Show (1948–1950); Sugar Hill Times (1949), an all-black variety show featuring Willie Bryant and Harry Belafonte; the Hazel Scott Show (1950), the first show featuring a black female host; the Billy Daniels Show (1952); and the Nat "King" Cole Show (1956–1957). There were even fewer all-black shows designed to appeal to all-black audiences or shows directed and produced by blacks. Short-lived local productions constituted the bulk of the latter category. In the early 1950s, a black amateur show called Spotlight on Harlem was broadcast on WJZ-TV in New York City; in 1955, the religious Mahalia Jackson Show appeared on Chicago's WBBM-TV.
Comedy was the only fiction-oriented genre in which African Americans were visible participants. Comedy linked television with the deeply entrenched cultural tradition of minstrelsy and blackface practices dating back to the antebellum period. In this cultural tradition, the representation of African Americans was confined either to degrading stereotypes of questionable intelligence and integrity (such as coons, mammies, Uncle Toms, or Stepin Fetchits) or to characterizations of people in willingly subservient positions (maids, chauffeurs, elevator operators, train conductors, shoeshine boys, handypeople, and the like). Beginning in the 1920s, radio comedies had perpetuated this cultural tradition, tailored to the needs of the medium.
The dominant television genre, the situation comedy, was invented on the radio. Like its television successor, the radio comedy—self-contained fifteen-minute or half-hour episodes with a fixed set of characters, usually involving minor domestic or familial disputes painlessly resolved in the allotted time—lent itself to caricature. Since all radio comedy was verbal, it relied for much of its humor on the misuse of language, such as malapropisms and syntax errors; jokes made at the expense of African Americans (and their supposed difficulties with the English language) were a staple of radio comedies.
The first successful radio comedy, and the series that in many ways defined the genre, was Amos 'n' Andy (1929–1960), which employed white actors to depict unflattering black characters. Amos 'n' Andy featured two white comedians, Freeman Gosden and Charles Correll, working in the style of minstrelsy and vaudeville. Another radio show that was successfully transferred to television was Beulah (1950–1953). The character Beulah was originally created for a radio show called Fibber McGee and Molly (1935–1957), in which Beulah was played by Marlin Hurt, a white man. These two shows, which adopted an attitude of contempt and condescending sympathy toward the black persona, were re-created on television with few changes, except that the verisimilitude of the genre demanded the use of black actors rather than whites in blackface and "blackvoice." As with Amos 'n' Andy (1951–1953)—in its first season the thirteenth most-watched show on television—the creators of Beulah had no trouble securing commercial support; both television shows turned out to be as popular as their radio predecessors, though both were short-lived in their network television incarnations.
Beulah (played first by Ethel Waters, then by Louise Beavers) developed the story of the faithful, complacent Aunt Jemima who worked for a white suburban middle-class nuclear family. Her unquestioning devotion to solving familial problems in the household of her white employers, the Hendersons, validated a social structure that forced black domestic workers to profess unconditional fidelity to white families, while neglecting their personal relations to their own kin. When blacks were included in Beulah's personal world, they appeared only as stereotypes. For instance, the neighbor's maid, Oriole (played by Butterfly McQueen), was an even more pronounced Aunt Jemima character; and Beulah's boyfriend, Bill Jackson (played by Percy Harris and Dooley Wilson), the Hendersons' handyperson, was a coon. The dynamics between the white world of the Hendersons and Beulah's black world were those of the perfect object with a defective mirror image. The Hendersons represented a well-adjusted family, supported by a strong yet loving working father whose sizable income made it possible for the mother to remain at home. In contrast, Beulah was condemned to chasing after an idealized version of the family because her boyfriend did not seem too interested in a stable relationship; she was destined to work forever because Bill Jackson did not seem capable of taking full financial responsibility in the event of a marriage. As the show could only exist as long as Beulah was a maid, it was evident that her desires were never to be fulfilled. If Beulah seemed to enjoy channeling all her energy toward the solution of a white family's conflicts, it was because her own problems deserved no solution.
Amos 'n' Andy, on the other hand, belonged to the category of folkish programs that focused on the daily life and family affairs of various ethnic groups. Several such programs, among them Mama (1949–1956), The Goldbergs (1949–1955), and Life with Luigi (1952–1953)—depicting the lives of Norwegians, Jews, and Italians, respectively—were popularized in the early 1950s. In Amos 'n' Andy, the main roles comprised an assortment of stereotypical black characters. Amos Jones (played by Alvin Childress) and his wife, Ruby (played by Jane Adams), were passive Uncle Toms, while Andrew "Andy" Hogg Brown (played by Spencer Williams) was gullible and half-witted. George "Kingfish" Stevens (played by Tim Moore) was a deceiving, unemployed coon, whose authority was constantly being undermined by his shrewd wife Sapphire (played by Ernestine Wade) and overbearing mother-in-law, "Mama" (played by Amanda Randolph). "Lightnin'" (played by Horace Stewart) was a janitor, and Algonquin J. Calhoun (played by Johnny Lee) was a fast-talking lawyer. These stereotypical characters were contrasted, in turn, with serious, level-headed black supporting characters, such as doctors, business people, judges, law enforcers, and so forth. The humorous situations created by the juxtapositions of these two types of characters—stereotypical and realistic—made Amos 'n' Andy an exceptionally intricate comedy and the first all-black television comedy that opened a window for white audiences on the everyday lives of African-American families in Harlem.
Having an all-black cast made it possible for Amos 'n' Andy to neglect relevant but controversial issues like race relations. The Harlem of this show was a world of separate but equal contentment, where happy losers, always ready to make fools of themselves, coexisted with regular people. Furthermore, the show's reliance on stereotypes precluded both the full-fledged development of its characters and the possibility of an authentic investigation into the pathos of black daily life. Even though the performers often showed themselves to be masters of comedy and vaudeville, it is unfortunate that someone like Spencer Williams, who was also a prolific maker of all-black films, would only be remembered by the general public as Andy.
While a number of African Americans were able to enjoy shows like Beulah and Amos 'n' Andy, many were offended by their portrayal of stereotypes, as well as by the marked absence of African Americans from other fictional genres. Black opposition had rallied without success to protest the airing of this kind of show on the radio in the 1930s. When Amos 'n' Andy came to television in 1951, the National Association for the Advancement of Colored People (NAACP) brought suit against CBS over the show's demeaning depiction of blacks, and the organization did not rest until the show was canceled in 1953. Yet the viewership of white and black audiences alike kept Amos 'n' Andy in syndication until 1966. The NAACP's victory in terminating Amos 'n' Andy and Beulah also proved somewhat pyrrhic, since during the subsequent decade the networks produced no dramatic series with African Americans as central characters, while stereotyped portrayals of minor characters continued.
Many secondary comic characters from the radio and cinema found a niche for themselves in television. In the Jack Benny Show (1950–1965), Rochester Van Jones (played by Eddie "Rochester" Anderson) appeared as Benny's valet and chauffeur. For Anderson, whose Rochester had amounted to a combination of the coon and the faithful servant in the radio show, the shift to television proved advantageous, as he was able to give his character greater depth on the television screen. Indeed, through their outlandish employer-employee relationship, Benny and Anderson established one of the first interracial onscreen partnerships in which the deployment of power alternated evenly from one character to the other. The same may not be said of Willie Best's characterizations in shows like The Stu Erwin Show (1950–1955) and My Little Margie (1952–1955). Best tended to confine his antics to the Stepin Fetchit style and thereby reinforced the worst aspects of the master-slave dynamic.
African-American participation in dramatic series was confined to supporting roles in specific episodes in which the color-line tradition was maintained, such as the Philco Television Playhouse (1948–1955), which featured a young Sidney Poitier in "A Man Is Ten Feet Tall" in 1955; the General Electric Theater (1953–1962), which featured Ethel Waters and Harry Belafonte in "Winner by Decision" in 1955; and The Hallmark Hall of Fame (1952–) productions in 1957 and 1959 of Marc Connelly's "Green Pastures," a biblical retelling performed by an all-black cast. African Americans also appeared as jungle savages in such shows as Ramar of the Jungle (1952–1953), Jungle Jim (1955), and Sheena, Queen of the Jungle (1955–1956). The television western, one of the most important dramatic genres of the time, almost entirely excluded African Americans, despite their importance to the real American West. In the case of those narratives set in contemporary cities, if African Americans were ever included, it was only as props signifying urban deviance and decay. A rare exception to this was Harlem Detective (1953–1954), an extremely low-budget, local program about an interracial pair of detectives (with William Marshall and William Harriston playing the roles of the black and white detectives, respectively) produced by New York's WOR-TV.
Despite the sporadic opening of white households to exceptional African Americans and the effectiveness of the NAACP's action in canceling Amos 'n' Andy, the networks succumbed to the growing political conservatism and racial antagonism of the mid-1950s. The cancellation of the Nat "King" Cole Show exemplifies the attitude that prevailed among programmers during that time. Nat "King" Cole had an impeccable record: his excellent musical and vocal training complemented his noncontroversial, delicate, and urbane delivery; he had a nationally successful radio show on NBC in the 1940s; and over forty of his recordings had been listed for their top sales by Billboard magazine between 1940 and 1955. Cole's great popularity was demonstrated in his frequent appearances as guest or host on the most important television variety shows. NBC first backed Cole completely, as is evidenced by the network's willingness to pour money into the show's budget, to expand the show's format from fifteen to thirty minutes, and to experiment with different time slots. Cole also had the support of reputable musicians and singers who were willing to perform for nominal fees. His guests included Count Basie, Mahalia Jackson, Pearl Bailey, and all-star musicians from "Jazz at the Philharmonic." Yet the Nat "King" Cole Show did not gain enough popularity among white audiences to survive the competition for top ratings; nor was it able to secure a stable national sponsor. After about fifty performances, the show was canceled.
African Americans exhibited great courage in these early years of television by supporting some shows and boycotting others. Organizations such as the Committee on Employment Opportunities for Negroes, the Coordinating Council for Negro Performers, and the Committee for the Negro in the Arts constantly fought for greater and fairer inclusion. During the height of the civil rights movement, the participation of African Americans in television intensified. Both Africans and African Americans became the object of scrutiny for daily news shows and network documentaries. The profound effects of the radical recomposition of race relations in the United States and the independence movement in Africa could not go unreported. "The Red and the Black" (January 1961), a segment of the Close Up! documentary series, analyzed the potential encroachment of the Soviet Union in Africa as European nations withdrew from the continent; "Robert Ruark's Africa" (May 1962), a documentary special shot on location in Kenya, defended the colonial presence in the continent. The series See It Now (1951–1958) started reporting on the civil rights movement as early as 1954, when the U.S. Supreme Court ruled to desegregate public schools, and exposed the measures that had been taken to hinder desegregation in Norfolk high schools in an episode titled "The Lost Class of '59," aired in January 1959. CBS Reports (1959–) examined, among other matters, the living conditions of blacks in the rural South in specials such as "Harvest of Shame" (November 1960). In December 1960 NBC White Paper aired "Sit-In," a special report on desegregation conflicts in Nashville. "Crucial Summer" (which started airing in August 1963) was a five-part series of half-hour reports on discrimination practices in housing, education, and employment. 
It was followed by "The American Revolution of '63" (which started airing in September 1963), a three-hour documentary on discrimination in different areas of daily life across the nation.
However, the gains made by the airing of these programs were offset by the effects of poor scheduling, and they were often made to compete with popular series programs and variety and game shows from which blacks had been virtually erased. As the civil rights movement gained momentum, some southern local stations preempted programming that focused on racial issues, while other southern stations served as a means for the propagation of segregationist propaganda.
As black issues came to be scrutinized in news reports and documentaries, African Americans began to appear in the growing genre of socially relevant dramas, such as The Naked City (1958–1963), Dr. Kildare (1961–1966), Ben Casey (1961–1966), The Defenders (1961–1965), The Nurses (1962–1965), Channing (1963–1964), The Fugitive (1963–1967), and Slattery's People (1963–1965). These shows, which usually relied on news stories for their dramatic material, explored social problems from the perspective of white doctors, nurses, educators, social workers, or lawyers. Although social issues were seriously treated, their impact was much diminished by the easy and felicitous resolution with which each episode was brought to a close. Furthermore, the African Americans who appeared in these programs—Ruby Dee, Louis Gossett Jr., Ossie Davis, and others—were given roles in episodes where topics were racially defined, and the color line was strictly maintained.
The short-lived social drama East Side/West Side (1963–1964) proved an exception to this rule. It was the first noncomedy in the history of television to cast an African American (Cicely Tyson) as a regular character. The program portrayed the dreary realities of urban America without supplying artificial happy endings; on occasion, parts of the show were censored because of their liberal treatment of interracial relations. East Side/West Side ran into difficulties when programmers tried to obtain commercial sponsors for the hour during which it was aired; eventually, despite changes in format, it was canceled after little more than twenty episodes.
Unquestionably, the more realistic television genres that evolved as a result of the civil rights movement served as powerful mechanisms for sensitizing audiences to the predicaments of those affected by racism. But as television grew to occupy center stage in American popular entertainment, the gains of the civil rights movement came to be ambiguously manifested. By 1965, a profusion of top-rated programs had begun casting African Americans both in leading and supporting roles. The networks and commercial sponsors became aware of the purchasing power of African-American audiences, and at the same time they discovered that products could be advertised to African-American consumers without necessarily offending white tastes. Arguably, the growing inclusion of African Americans in fiction-oriented genres was premised on a radical inversion of previous patterns. If blacks were to be freed from stereotypical and subservient representation, they were nevertheless portrayed in ways designed to please white audiences. Their emergence as a presence in television was to be facilitated by a thorough cleansing.
A sign of the changing times was the popular police comedy Car 54, Where Are You? (1961–1963). Set in a rundown part of the Bronx, this comedy featured black officers in secondary roles (played by Nipsey Russell and Frederick O'Neal). However, the real turning point in characterizations came with I Spy (1965–1968), a dramatic series featuring Bill Cosby and Robert Culp as Alexander Scott and Kelly Robinson, two secret agents whose adventures took them to the world's most sophisticated spots, where racial tensions did not exist. In this role, Cosby played an immaculate, disciplined, intelligent, highly educated, and cultured black man who engaged in occasional romances but did not appear sexually threatening and whose sense of humor was neither eccentric nor vulgar. While inverting stereotypical roles, I Spy also created a one-to-one harmonious interracial friendship between two men.
I Spy was followed by other top-rated programs. In Mission: Impossible (1966–1973), Greg Morris played Barney Collier, a mechanic and electronics expert and member of the espionage team; in Mannix (1967–1975), a crime series about a private eye, Gail Fisher played Peggy Fair, Mannix's secretary; in Ironside (1967–1975), Don Mitchell played Mark Sanger, Ironside's personal assistant and bodyguard; and in the crime show Mod Squad (1968–1973), Clarence Williams III played Linc Hayes, one of the three undercover police officers working for the Los Angeles Police Department. This trend was manifested in other top-ranked shows: Peyton Place (1964–1969), the first prime-time soap opera, featured Ruby Dee, Percy Rodriguez, and Glynn Turman as the Miles family; in Hogan's Heroes (1965–1971), a sitcom about American prisoners in a German POW camp during World War II, Ivan Dixon played Sergeant Kinchloe; in Daktari (1966–1969), Hari Rhodes played an African zoologist; in Batman (1966–1968), Eartha Kitt appeared as Catwoman; in Star Trek (1966–1969), Nichelle Nichols was Lieutenant Uhura; in the variety show Rowan and Martin's Laugh-In (1968–1973), Chelsea Brown, Johnny Brown, and Teresa Graves appeared regularly; and in the soap opera The Guiding Light (1952–), Cicely Tyson started appearing regularly after 1967.
Julia (1968–1971) was the first sitcom in over fifteen years to feature African Americans in the main roles. It placed seventh in its first season, thereby becoming as popular as Amos 'n' Andy had been in its time. Julia Baker (played by Diahann Carroll) was a middle-class, cultured widow who spoke standard English. Her occupation as a nurse suggested that she had attended college. She was economically and emotionally self-sufficient; a caring parent to her little son Corey (played by Marc Copage); and equipped with enough sophistication and wit to solve the typical comic dilemmas presented in the series. However, many African Americans criticized the show for neglecting the more pressing social issues of their day. In Julia's suburban world, it was not so much that racism did not matter, but that integration had been accomplished at the expense of black culture. Julia's cast of black friends and relatives (played by Virginia Capers, Diana Sands, Paul Winfield, and Fred Williamson) appeared equally sanitized. Ironically, Julia perpetuated some of the same misrepresentations of the black family as Beulah, for despite its elegant trappings, Julia's was yet another female-headed African-American household.
As successful as Julia was the Bill Cosby Show (1969–1971), which featured Bill Cosby as Chet Kincaid, a single, middle-class high school gym teacher. In contrast to Julia, however, this comedy series presented narrative conflicts that involved Cosby in the affairs of black relatives and inner-city friends, as well as in those of white associates and suburban students. The Bill Cosby Show sought to integrate the elements of African-American culture through the use of sound, setting, and character: African-American music played in the background, props reminded one of contemporary political events, Jackie "Moms" Mabley and Mantan Moreland appeared frequently as Cosby's aunt and uncle, and Cosby's jokes often invested events from black everyday life with comic pathos. A less provocative but long-running sitcom, Room 222 (1969–1974), concerned an integrated school in Los Angeles. Pete Dixon (played by Lloyd Haynes), a black history teacher, combined the recounting of important events of black history with attempts to address his students' daily problems. Another comic series, Barefoot in the Park (1970–1971)—with Scoey Mitchell, Tracey Reed, Thelma Carpenter, and Nipsey Russell—was attempted, but failed after thirteen episodes; it was an adaptation of the film by the same name but with African Americans playing the leading roles.
By the end of the 1960s, many of the shows in which blacks could either demonstrate their decision-making abilities or investigate the complexities of their lives had been canceled. Two black variety shows failed due to poor scheduling and lack of white viewer support: The Sammy Davis Jr. Show (1966), the first variety show hosted by a black person since the Nat "King" Cole Show; and The Leslie Uggams Show (1969), the first variety show hosted by a black woman since the Hazel Scott Show. A similar fate befell The Outcasts (1968–1969), an unusual western set in the period immediately following the Civil War. The show, which featured two bounty hunters, a former slave and a former slave owner, and addressed without qualms many of the same controversial themes associated with the civil rights movement, was canceled due to poor ratings. Equally short-lived was Hawk (1966), a police drama shot on location in New York City, which featured a full-blooded Native American detective (played by Burt Reynolds) and his black partner (played by Wayne Grice). An interracial friendship was also featured in the series Gentle Ben (1967–1969), which concerned the adventures of a white boy and his pet bear; Angelo Rutherford played Willie, the boy's close friend. While interracial friendships were cautiously permitted, the slightest indication of romance was instantly suppressed: The musical variety show Petula (1968) was canceled because it showed Harry Belafonte and Petula Clark touching hands.
Despite these limitations, the programs of the 1960s, 1970s, and 1980s represented a drastic departure from the racial landscape of early television. In the late 1940s, African Americans were typically confined to occasional guest roles; by the end of the 1980s, most top-rated shows featured at least one black person. It had become possible for television shows to violate racial taboos without completely losing commercial and viewer sponsorship. However, greater visibility in front of the camera did not necessarily translate into equal opportunity for all in all branches of television: the question remained as to whether discriminatory practices had in fact been curtailed, or had simply survived in more sophisticated ways. It was true that the presence of blacks had increased in many areas of television, including, for example, the national news: Bryant Gumbel co-anchored Today (1952–) from 1982 to 1997; Ed Bradley joined 60 Minutes (1968–) in 1981; and Carole Simpson, a correspondent for ABC World News Tonight since 1982, served as its weekend anchor from 1988 to 2003.
Nevertheless, comedy remained the dominant form for expressing black lifestyles. Dramatic shows centering on the African-American experience have had to struggle to obtain high enough ratings to remain on the air—the majority of the successful dramas have been those in which blacks share the leading roles with white protagonists.
During the 1970s and 1980s, the number of social dramas, crime shows, or police stories centering on African Americans or featuring an African American in a major role steadily increased. Most of the series were canceled within a year. These included The Young Lawyers (1970–1971), The Young Rebels (1970–1971), The Interns (1970–1971), The Silent Force (1970–1971), Tenafly (1973–1974), Get Christie Love! (1974–1975), Shaft (1977), Paris (1979–1980), The Lazarus Syndrome (1979), Harris & Co. (1979), Palmerstown, USA (1980–1981), Double Dare (1985), Fortune Dane (1986), The Insiders (1986), Sonny Spoon (1988), Gideon Oliver (1989), and A Man Called Hawk (1989). The most popular dramatic series with African-American leads were Miami Vice (1984–1989), In the Heat of the Night (1988–1994), and The A-Team (1983–1987). On Miami Vice and In the Heat of the Night, Philip Michael Thomas and Howard Rollins, the black leads, were partnered with better-known white actors who became the more identifiable characters of their series. Perhaps the most popular actor on a dramatic series was the somewhat cartoonish Mr. T, who played Sergeant Bosco "B.A." Baracus on The A-Team, an action-adventure series in which soldiers of fortune set out to eradicate crime. Although in the comedy Barney Miller (1975–1982) Ron Glass played an ambitious middle-class black detective, the guest spots or supporting roles in police series generally portrayed African Americans as sleazy informants, such as Rooster (Michael D. Roberts) on Baretta (1975–1978), or Huggy Bear (Antonio Fargas) on Starsky and Hutch (1975–1979).
In prime-time serials, African Americans appeared to have been unproblematically assimilated into a middle-class lifestyle. Dynasty (1981–1989) featured Diahann Carroll as one of the series' innumerable variations on the "rich bitch" persona; while Knots Landing (1979–1993), L.A. Law (1986–1994), China Beach (1988–1990), and The Trials of Rosie O'Neill (1991–1992) developed storylines with leading black roles as well as interracial romance themes. Later dramatic series featuring African Americans in regularly occurring roles included Homicide: Life on the Street (1993–1999), NYPD Blue (1993–2005), Oz (1997–2003), The Practice (1997–2004), Third Watch (1999–2005), Boston Public (2000–2004), and Six Feet Under (2001–2005), as well as ER (1994–), "24" (2001–), The Wire (2002–), Without a Trace (2002–), Law & Order (1990–) and its spin-offs Law & Order: Special Victims Unit (1999–) and Law & Order: Criminal Intent (2001–), and CSI: Crime Scene Investigation (2000–) and its spin-offs CSI: Miami (2002–) and CSI: New York (2004–).
MTM Enterprises produced some of the most successful treatments of African Americans in the 1980s. In their programs, which often combined drama and satire, characters of different ethnic backgrounds were developed as full, multidimensional figures. Fame (1982–1983) was an important drama about teenagers of different ethnicities coping with the complexities of contemporary life. Frank's Place (1987–1988), an offbeat and imaginative show about a professor who inherits a restaurant in a black neighborhood in New Orleans, provided viewers with a realistic treatment of black community life. Though acclaimed by critics, Frank's Place did not manage to gain a large audience, and the show was canceled after having been assigned four different time slots in one year.
African Americans have been featured in relatively minor and secondary roles on science fiction series. Star Trek's communications officer Lieutenant Uhura (played by Nichelle Nichols) was little more than a glorified telephone operator. Star Trek: The Next Generation (1987–1994) featured LeVar Burton as Lieutenant Geordi La Forge, a blind engineer who can see through a visor. A heavily made-up Michael Dorn was cast as Lieutenant Worf, a horny-headed Klingon officer, and Whoopi Goldberg appeared frequently as the supremely empathetic, long-lived bartender Guinan. In Star Trek: Deep Space Nine (1993–1999), the third Star Trek series, a major role was given to Avery Brooks as Commander Sisko, head of the space station on which much of the show's action takes place, while Star Trek: Voyager (1995–2001) featured Tim Russ as Vulcan security officer Tuvok. Enterprise (2001–2005), the fifth Star Trek series, featured Anthony Montgomery as Ensign Travis Mayweather.
Until recently, blacks played an extremely marginal role in daytime soap operas. In 1966, Another World became the first daytime soap opera to introduce a storyline about a black character, a nurse named Peggy Harris Nolan (played by Micki Grant). In 1968, One Life to Live introduced the character of Carla Hall, the daughter of housekeeper Sadie Gray (played by Lillian Hayman). Embarrassed by her social and ethnic origins, Carla was passing for white in order to be engaged to a successful white doctor. Some network affiliates canceled the show after Carla appeared. Since then, many more African Americans have appeared in soap operas, including Al Freeman Jr., Darnell Williams, Phylicia Rashad, Jackée, Blair Underwood, Nell Carter, Billy Dee Williams, Cicely Tyson, and Ruby Dee. In most cases, character development has been minor, with blacks subsisting on the margins of activity, not at the centers of power. An exception was the interracial marriage between a black woman pediatrician and a white male psychiatrist on General Hospital in 1987. Generations, the only soap opera that focused exclusively on African-American family affairs, was canceled in 1990 after a year-long run. However, The Young and the Restless (1973–) has featured such African-American actors as Kristoff St. John, Victoria Rowell, Shemar Moore, and Tonya Lee Williams in long-running storylines. In addition, black actor James Reynolds joined the cast of Days of Our Lives (1965–) in 1982 as police commander Abe Carver, and continued in the role for more than twenty years, with a short break in the early 1990s to star in Generations. Reynolds's Abe Carver has become one of television's longest-running black characters.
The dramatic miniseries Roots (1977) and Roots: The Next Generations (1979)—more commonly known as Roots II—were unusually successful. For the first time in the history of television, close to 130 million Americans dedicated almost twenty-four hours to following a 300-year saga chronicling the tribulations of African Americans in their sojourn from Africa to slavery and, finally, to emancipation. Yet Roots and Roots II were constrained by the requirements of linear narrative, and characters were seldom placed in situations where they could explore the full range of their historical involvement in the struggle against slavery. The miniseries Beulah Land (1980), a reconstruction of the southern experience during the Civil War, attempted to recapture the success of Roots, but ended up doing no more than reviving some of the worst aspects of Gone with the Wind. Other important but less commercially successful dramatic historical reconstructions include The Autobiography of Miss Jane Pittman (1973), King (1978), One in a Million: The Ron LeFlore Story (1978), A Woman Called Moses (1978), Backstairs at the White House (1979), Freedom Road (1979), Sadat (1983), and Mandela (1987). There are also a number of made-for-television movies based on the civil rights movement, including The Ernest Green Story (1993), Mr. & Mrs. Loving (1996), The Color of Courage (1998), Ruby Bridges (1998), Selma, Lord, Selma (1999), Freedom Song (2000), Boycott (2002), and The Rosa Parks Story (2002).
A number of miniseries and made-for-television movies about black family affairs and romance were broadcast in the 1980s. Crisis at Central High (1981) was based on the desegregation dispute in Little Rock, Arkansas, while Benny's Place (1982), Sister, Sister (1982), The Defiant Ones (1985), and The Women of Brewster Place (1989) were set in various African-American communities. Other more recent examples include The Josephine Baker Story (1990), The Temptations (1998), Introducing Dorothy Dandridge (1999), The Corner (2000), Carmen: A Hip Hopera (2001), The Old Settler (2001), Lackawanna Blues (2005), and Their Eyes Were Watching God (2005).
The 1970s witnessed the emergence of several television sitcoms featuring black family affairs. In these shows, grave issues such as poverty and upward mobility were embedded in racially centered jokes. A source of inspiration for these sitcoms may have been The Flip Wilson Show (1970–1974), the first successful variety show hosted by an
African American. The show, which featured celebrity guests like Lucille Ball, Johnny Cash, Muhammad Ali, Sammy Davis Jr., Bill Cosby, Richard Pryor, and B. B. King, was perhaps best known for the skits Wilson performed. The skits were about black characters (Geraldine Jones, Reverend Leroy, Sonny the janitor, Freddy Johnson the playboy, and Charley the chef) who flaunted their outlandishness to such a degree that most viewers were unable to determine whether they were meant to be cruel reminders of minstrelsy or parodies of stereotypes.
A number of family comedies, mostly produced by Tandem Productions (Norman Lear and Bud Yorkin), became popular around the same time as The Flip Wilson Show: these included All in the Family (1971–1983), Sanford and Son (1972–1977), Maude (1972–1978), That's My Mama (1974–1975), The Jeffersons (1975–1985), Good Times (1974–1979), and What's Happening!! (1976–1979). On Sanford and Son, Redd Foxx and Demond Wilson played father-and-son Los Angeles junk dealers. Good Times, set in a housing development on the South Side of Chicago, portrayed a working-class black family. Jimmie Walker, who played J.J., became an overnight celebrity with his "jive-talking" and use of catchphrases like "Dy-No-Mite." On The Jeffersons, Sherman Hemsley played George Jefferson, an obnoxious and upwardly mobile owner of a dry-cleaning business. As with Amos 'n' Andy, these comedies relied principally on stereotypes—the bigot, the screaming woman, the grinning idiot, and so on—for their humor. However, unlike their predecessor of the 1950s, the comedies of the 1970s integrated social commentary into the joke situations. Many of the situations reflected contemporary discussions in a country divided by, among other things, the Vietnam War. And because of the serialized form of the episodes, most characters were able to grow and learn from experience.
By the late 1970s and early 1980s, the focus of sitcoms had shifted from family affairs to nontraditional familial arrangements. The Cop and the Kid (1975–1976), Diff'rent Strokes (1978–1986), The Facts of Life (1979–1988), and Webster (1983–1987) were about white families and their adopted black children. Several comic formulas were also reworked, as a sassy maid (played by Nell Carter) raised several white children in Gimme a Break! (1981–1987), and a wise-cracking and strong-willed butler (played by Robert Guillaume) dominated the parody Soap (1977–1981). Guillaume later played an equally daring budget director for a state governor in Benson (1979–1986). Several less successful comedies were also developed during this time, including The Sanford Arms (1976), The New Odd Couple (1982–1983), One in a Million (1980), and The Redd Foxx Show (1986).
The most significant comedies of the 1980s were those in which black culture was explored on its own terms. The extraordinarily successful The Cosby Show (1984–1992), the first African-American series to top the annual Nielsen ratings, featured Bill Cosby as Cliff Huxtable, a comfortable middle-class paterfamilias to his Brooklyn family, which included his successful lawyer wife Clair Huxtable (played by Phylicia Rashad) and their five children. The series 227 (1985–1990) starred Marla Gibbs, who had previously played a sassy maid on The Jeffersons, in a family comedy set in a black section of Washington, D.C. A Different World (1987–1993), a spin-off of The Cosby Show, was set in a black college in the South. Amen (1986–1991), featuring Sherman Hemsley as Deacon Ernest Frye, was centered on a black church in Philadelphia. In all of these series, the black-white confrontations that had been the staple of African-American television comedy were replaced by situations in which the humor was provided by the diversity and difference within the African-American community.
Some black comedies—Charlie & Company (1985–1986), Family Matters (1989–1998), The Fresh Prince of Bel-Air (1990–1996), and True Colors (1990–1992)—followed the style set by The Cosby Show. Others, like In Living Color (1990–1994), reworked the combination of variety show and skits in a manner reminiscent of The Flip Wilson Show. Other popular variety and sketch comedy series starring African-American comedians included HBO's The Chris Rock Show (1997–2000) and Dave Chappelle's Chappelle's Show (2003–2005) on Comedy Central. Much of the originality and freshness of these comedies is due to the fact that some of them were produced by African Americans (The Cosby Show, A Different World, The Fresh Prince of Bel-Air, and In Living Color). Carter Country (1977–1979), a sitcom that pitted a redneck police chief against his black deputy (played by Kene Holliday), inspired several programs with similar plot lines: Just Our Luck (1983), He's the Mayor (1986), The Powers of Matthew Star (1982–1983), Stir Crazy (1985), Tenspeed and Brown Shoe (1980), and Enos (1980–1981).
UPN, launched as the United Paramount Network in 1995, has made a staple of programming situation comedies featuring primarily African-American casts, including Moesha (1996–2001), The Parkers (1999–2004), Girlfriends (2000–), One on One (2001–), Half & Half (2002–), All of Us (2003–), Eve (2003–), and Second Time Around (2004–2005). The actor Taye Diggs produced and starred as a hotshot attorney in the UPN dramatic series Kevin Hill (2004–). The Fox network offered the comedy Living Single (1992–1998), starring Queen Latifah, and The Bernie Mac Show (2001–), while the WB had actors Jamie Foxx in The Jamie Foxx Show (1996–2001) and Steve Harvey in Steve Harvey's Big Time (2003–2005). ABC's comedies included The Hughleys (1998–2002), starring D. L. Hughley, and My Wife and Kids (2001–2005), starring Damon Wayans, while cable station Showtime offered a series adaptation of the movie Soul Food (2000–2004). Reality series such as Survivor (2000–), The Amazing Race (2001–), American Idol (2002–), and The Apprentice (2004–) featured African Americans among their participants. UPN's popular reality show America's Next Top Model (2003–) also featured black participants, as well as an African-American host and producer, Tyra Banks.
Local stations, public television outlets, syndication, and cable networks have provided important alternatives for the production of authentic African-American programming. In the late 1960s, local television stations began opening their doors to the production of all-black shows and the training of African-American actors, commentators, and crews. Examples of these efforts include Black Journal (1968–1976), later known as Tony Brown's Journal, a national public affairs program; Soul (1970–1975), a variety show produced by Ellis Haizlip at WNET in New York; Inside Bedford-Stuyvesant (1968–1973), a public affairs program serving the black communities in New York City; and Like It Is, a public affairs show featuring Gil Noble as the outspoken host.
At the national level, public television has also addressed African-American everyday life and culture in such series and special programs as History of the Negro People (1965), Black Omnibus (1973), The Righteous Apples (1979–1981), With Ossie and Ruby (1980–1981), Gotta Make This Journey: Sweet Honey in the Rock (1984), The Africans (1986), Eyes on the Prize (1987), and Eyes on the Prize II (1990). The Public Broadcasting Service (PBS) documentary series American Masters (1986–) featured a number of episodes on African-American artists, including Louis Armstrong, James Baldwin, Duke Ellington, Lena Horne, Sidney Poitier, and others. The American Experience (1988–), another documentary series on PBS, included episodes on the careers of Ida B. Wells, Adam Clayton Powell, Malcolm X, Marcus Garvey, and other important African Americans, along with episodes on topics in black culture and history, including "Roots of Resistance: The Story of the Underground Railroad" (1995), "Scottsboro: An American Tragedy" (2000), and "The Murder of Emmett Till" (2003). In addition, black journalist Gwen Ifill became the moderator of Washington Week (1967–) and senior correspondent for The NewsHour with Jim Lehrer (1995–) on PBS in 1999. Ifill also moderated the first televised debate between the candidates for vice president during the 2004 presidential campaign.
Syndication, the system of selling programming to individual stations on a one-to-one basis, has been crucial for the distribution of shows such as Soul Train (1971–), Solid Gold (1980–1988), The Arsenio Hall Show (1989–1994), The Oprah Winfrey Show (1986–), and The Montel Williams Show (1991–). A wider range of programming has also been made possible by the growth and proliferation of cable services. In the early 1980s, Robert Johnson took out a $15,000 personal loan to start a cable business—Black Entertainment Television (BET)—catering to the African Americans living in the Washington, D.C., area. At that time BET consisted of a few hours a day of music videos. By the early 1990s, the network had expanded across the country, servicing about 25 million subscribers, and had a net worth of more than $150 million. (Its programming had expanded to include black collegiate sports, music videos, public affairs programs, and reruns of, among others, The Cosby Show and Frank's Place.) The Black Family Channel, founded in 1999 as MBC Network, is a black-owned and operated cable network for African-American families with children's programs, sports, news, talk shows, and religious programming.
As late as 1969, children's programming did not include African Americans. The first exceptions were Sesame Street (1969–) and Fat Albert and the Cosby Kids (1972–1989). These two shows were groundbreaking in content and format; they emphasized altruistic themes, the solution of everyday problems, and the development of reading skills and basic arithmetic. Other children's shows that focused on or incorporated African Americans include The Jackson Five (1971), ABC After-School Specials (1972–), The Harlem Globetrotters Popcorn Machine (1974–1976), Rebop (1976–1979), 30 Minutes (1978–1982), Reading Rainbow (1983–2004), Pee-Wee's Playhouse (1986–1991), Saved by the Bell (1989–1993), Saved by the Bell: The New Class (1993–2000), and Where in the World Is Carmen Sandiego? (1991–1996).
Although African Americans have had to struggle against both racial tension and the inherent limitations of television, they have become prominent in all aspects of the television industry. As we enter the twenty-first century, the format and impact of television programming will undergo some radical changes, and the potential to provoke and inform audiences will grow. Television programs are thus likely to become more controversial than ever, but they will also become an even richer medium for effecting social change. Perhaps African Americans will be able to use these technical changes to allay the racial discord and prejudice that persists off-camera in America.
This article primarily explores the racial issues that affected television from its golden years up to the current century. The arrival of digital delivery systems that have enhanced satellite, cable, DVD, and even the Internet has reduced the power and reach of broadcast television. Nevertheless, African Americans continue to be shortchanged by the medium, even with the huge success of Oprah Winfrey, Chris Rock, and a few other black superstars. The more the technology changes, the more it stays the same.
See also Black Entertainment Television (BET); Carroll, Diahann; Cosby, Bill; Davis, Ossie; Dee, Ruby; Film in the United States; Gossett, Louis, Jr.; Minstrels/Minstrelsy; Poitier, Sidney; Radio; Tyson, Cicely; Wilson, Flip
Allen, Robert C., ed. Channels of Discourse, Reassembled: Television and Contemporary Criticism, 2d ed. Chapel Hill: University of North Carolina Press, 1992.
Bogle, Donald. Blacks in American Films and Television: An Encyclopedia. New York: Garland, 1988.
Bogle, Donald. Primetime Blues: African Americans on Network Television. New York: Farrar, Straus and Giroux, 2001.
Brooks, Tim, and Earle Marsh, eds. The Complete Directory to Prime Time Network and Cable TV Shows, 1946–Present. 8th ed. New York: Ballantine, 2003.
Dates, Jannette L., and William Barlow, eds. Split Image: African Americans in the Mass Media, 2d ed. Washington, D.C.: Howard University Press, 1993.
Gray, Herman S. Cultural Moves: African Americans and the Politics of Representation. Berkeley: University of California Press, 2005.
Hunt, Darnell M., ed. Channeling Blackness: Studies on Television and Race in America. New York: Oxford University Press, 2005.
Lommel, Cookie. African Americans in Film and Television. Philadelphia: Chelsea House, 2003.
MacDonald, J. Fred. Blacks and White TV: Afro-Americans in Television Since 1948, 2d ed. Chicago: Nelson-Hall, 1992.
McNeil, Alex. Total Television: A Comprehensive Guide to Programming from 1948 to the Present, 4th ed. New York: Penguin, 1996.
Means Coleman, Robin R. African American Viewers and the Black Situation Comedy: Situating Racial Humor. New York: Garland, 1998.
Neale, Stephen, and Frank Krutnik. Popular Film and Television Comedy. London and New York: Routledge, 1990.
Pulley, Brett. The Billion Dollar BET: Robert Johnson and the Inside Story of Black Entertainment Television. Hoboken, N.J.: Wiley, 2004.
Torres, Sasha, ed. Living Color: Race and Television in the United States. Durham, N.C.: Duke University Press, 1998.
Torres, Sasha. Black, White, and in Color: Television and Black Civil Rights. Princeton, N.J.: Princeton University Press, 2003.
White, Mimi. "What's the Difference? 'Frank's Place' in Television." Wide Angle 13 (1990): 82–93.
Zook, Kristal Brent. Color by Fox: The Fox Network and the Revolution in Black Television. New York: Oxford University Press, 1999.
charles hobson (1996)
chris tomassini (2005)
Under the cover of darkness on 9 December 1992, U.S. forces went ashore at Mogadishu, Somalia, and got an unexpected reception. The night suddenly turned bright, as television lights illuminated the landing area and temporarily blinded marines and navy SEALs equipped with night vision goggles. At the water's edge were hundreds of journalists who had been waiting to film the beginning of Operation Restore Hope, a humanitarian mission to distribute food and other vital supplies to starving Somalis. The news media had turned the beach into a kind of outdoor television studio, much to the distress of the troops.
The advance guard of Operation Restore Hope did not know that television journalists would complicate their landing. Yet the reporters were there because Pentagon officials had alerted them. Military officials hoped for favorable publicity from news stories about the beginning of a mission that they thought would win widespread approbation. But while they notified reporters, Pentagon authorities forgot to tell marine and navy commanders to expect a reception of lights and cameras.
This incident illustrates the complex relationship between the news media—and particularly television journalists—and those who plan and implement U.S. foreign policy. Journalists depend on government officials for information and access—to conferences, briefings, crisis areas, and war zones. Yet they often chafe under the restrictions that policymakers or military commanders impose. Those who formulate or carry out foreign policy depend on TV news to provide them with favorable publicity as well as information about international affairs or channels for building public support. Yet these officials also worry about the power of cameras and reporters to transform events as well as to frame issues, expose secrets, or challenge official policies. Cooperation and mutual dependence are the flip side of tension and conflicting interests.
Since the middle of the twentieth century, television has been closely connected to U.S. foreign policy. What makes TV important is that it is a visual medium that commands large audiences. Continuing technological improvements, including live broadcasting of international events as they take place, have made television a powerful instrument for conveying information, molding public attitudes, and influencing government policies. Yet it is easy to exaggerate or misunderstand the power of television to shape foreign policy. Beginning in the late twentieth century, the U.S. government had to deal with twenty-four-hour news cycles, "real-time" reporting of "breaking" news, and extensive coverage of international events of large significance, such as the terrorist attacks on New York City and Washington, D.C., on 11 September 2001, or with dramatic appeal, such as whether Elián González, the six-year-old refugee, should remain with relatives in Miami or return to his father in Cuba. Television has affected the ways that the U.S. government has made foreign policy and built public support for it. Yet presidents and other high officials with clear objectives and sophisticated strategies for dealing with the news media—for example, George H. W. Bush's administration during the Persian Gulf War and the international crisis that preceded it—have maintained control of foreign policy and commanded public backing for their international agenda.
Yet even before it had such immediacy or reach, television played a significant—and sometimes controversial—role in shaping government actions and popular understanding of international affairs. The Vietnam War was a critical event. It became an American war just as television was becoming the principal source of news for a majority of the U.S. public. It offered lessons—controversial, to be sure—about the role of TV in shaping public attitudes toward international affairs. And it occurred at a time of significant changes in journalism. Despite their devotion to objectivity, balance, and fairness, TV reporters would no longer insist, as Edward R. Murrow had in 1947, on a contract provision limiting the expression of personal opinion in news stories. Vietnam, in short, marked a major transition in the relationship between television and foreign policy.
TV NEWS AND THE EARLY COLD WAR
Although it was a novelty in the United States at the end of World War II, television became an important part of American life during the first postwar decade. Fewer than one out of ten American homes had television in 1950. Five years later the proportion had grown to two-thirds. New stations quickly took to the air and usually affiliated with one of the networks: the National Broadcasting Company (NBC), the Columbia Broadcasting System (CBS), the American Broadcasting Company (ABC), or the short-lived DuMont Television Network.
Even when the networks consisted of a handful of stations, government officials showed keen interest in using television to build public support for U.S. foreign and military policies. Public affairs officers in the State Department said they favored television because it did "a better job than any other medium at depicting foreign policy in action." The department worked with both networks and independent producers to create shows about foreign policy and world affairs or make available for telecast films that it produced itself. Among the most popular series was The Marshall Plan in Action, which premiered on ABC in June 1950 and continued under the title Strength for a Free World until February 1953. Other shows were the interview program Diplomatic Pouch (CBS) and, during the Korean War, The Facts We Face (CBS) and Battle Report—Washington (NBC). In these early days of broadcasting, the networks were eager for programming to fill up their time slots. They also took advantage of opportunities to demonstrate support for American Cold War policies, especially during the McCarthy era.
Evening newscasts became regular features during the late 1940s and early 1950s. Each network aired fifteen-minute programs. CBS and NBC expanded their shows to thirty minutes in September 1963; ABC did not do the same until January 1967. John Daly anchored the ABC broadcast during most of the 1950s. Douglas Edwards held the same position at CBS until Walter Cronkite replaced him in April 1962. The most popular news program in the early 1950s was NBC's Camel News Caravan, with host John Cameron Swayze. With a carnation in his lapel and zest in his voice, Swayze invited viewers to "go hopscotching the world for headlines." After this brisk international tour, there might be stops at a beauty pageant or a stadium for the afternoon's scores. Chet Huntley and David Brinkley succeeded Swayze in October 1956. Anchors and journalists rather than hosts, as Swayze had been, they brought greater depth to the NBC newscasts without making them solemn. They also attracted viewers because of the novelty of their pairing, the contrast in their personalities, and the familiarity of their closing—"Good night, Chet"; "Good night, David." They led the ratings until the late 1960s.
International news was important on each of these programs, yet there were difficulties in covering distant stories, especially on film. Fifteen minutes (less time for commercials, lead-in, and closing) allowed coverage of only a few stories and little time for analysis. Cumbersome and complicated cameras and sound equipment made film reports difficult. Before the beginning of satellite communication in the 1960s, it might take a day or two for film from international locations to get on the air. Despite these limitations, the audience for these newscasts grew steadily. By 1961 surveys showed that the public considered television the "most believable" source of news. Two years later, for the first time, more people said that they relied on television rather than newspapers for most of their news.
Many viewers watched in utter astonishment on 22 October 1962, when President John F. Kennedy informed them about Soviet missiles in Cuba and the possibility of nuclear war. Soviet diplomats got a copy of the speech only an hour before Kennedy went before the cameras. The president's decision to make his demand for the removal of the missiles on television made compromise on this fundamental issue all but impossible. Kennedy spoke directly to Soviet premier Nikita Khrushchev, calling on him to step back from the nuclear brink. It was an extremely skillful use of television as a medium of diplomatic communication. The crisis dominated TV news coverage until its end six days later. The reporting surely influenced public attitudes, but it probably had little direct effect on Kennedy's advisers. Secretary of Defense Robert S. McNamara later revealed that he did not watch television even once during the thirteen days of the Cuban Missile Crisis. Yet during the growing difficulties in Vietnam, Kennedy, his advisers, and those who succeeded them in the White House paid close attention to television news.
THE FIRST TELEVISION WAR
Vietnam did not become a big story on American television until 1965, but it was a controversial one from the time that U.S. military personnel began to play a significant role in combat in the early 1960s. Officials of both the U.S. and South Vietnamese governments were extremely concerned about coverage of the war. Their criticism at first centered on reporting in newspapers and magazines and on wire services, as these news media began sending full-time correspondents to Vietnam several years before NBC's Garrick Utley became the first television journalist based in Saigon, beginning in mid-1964. Yet even though their assignments were brief and their numbers few, TV journalists still found that South Vietnamese authorities scrutinized their reporting and sometimes objected to it, as Utley's colleague, Jim Robinson, learned during one of his occasional trips to Saigon while stationed in NBC's Hong Kong bureau. Offended by one of Robinson's stories, President Ngo Dinh Diem expelled the correspondent from the country in November 1962, despite protests from both the U.S. embassy and journalists in Saigon.
The Kennedy administration used less heavy-handed methods to manage the news from Vietnam. Administration officials tried to play down U.S. involvement in what they described as a Vietnamese war, even as the president sharply increased U.S. military personnel from several hundred to more than sixteen thousand. Yet Kennedy and his advisers rejected the military censorship of news reporting that had prevailed in previous twentieth-century wars, lest such restrictions call attention to a story whose significance they wished to diminish. Instead, U.S. officials in Saigon mixed patriotic appeals "to get on the team" with upbeat statements about South Vietnamese military success and misleading information about what were ostensibly U.S. military advisers who in reality participated in combat operations.
The administration's efforts at news management collapsed during the Buddhist crisis of 1963, as horrifying images of the fiery suicides of monks protesting government restrictions on religious expression appeared in American television news reports and on the front pages of newspapers. What the U.S. embassy called the "press problem" worsened, as reporters not only mistrusted official sources because of their manipulation of information, but contributed to a public debate about whether the Diem government's liabilities were so great that it might not be able to prevail in the war against the National Liberation Front. In important televised interviews in September with Cronkite of CBS and Huntley and Brinkley of NBC on the day that each of those newscasts expanded to thirty minutes, Kennedy reaffirmed the U.S. commitment to South Vietnam while publicly delivering the message that diplomatic emissaries privately conveyed: that the Diem government should make changes to reclaim popular support so it could more effectively prosecute the war. Kennedy also quietly tried to dampen public criticism of Diem, even as his advisers debated how to step up the pressure on the South Vietnamese leader and whether to encourage a coup, by suggesting that the New York Times remove correspondent David Halberstam, whose critical reports had questioned the administration's backing of Diem. The publisher of the Times refused to buckle to presidential pressure; Halberstam remained in Saigon to cover the coup that ousted Diem on 1 November.
The administration of Lyndon B. Johnson in many ways followed its predecessor's pattern of news management as it expanded U.S. involvement in the war in Vietnam in 1964 and 1965. Johnson and his principal advisers believed that domestic support was critical to the U.S. war effort, but worried "that our public posture is fragile." Like its predecessor, the Johnson administration ruled out censorship of the news in favor of a system of voluntary cooperation in withholding certain kinds of military information. "Because we are fools" was the explanation that the president gave one group of journalists for this choice. Yet administration policymakers repeatedly considered censorship and rejected it for fear of damage to official credibility. They also hoped that an ambitious program of public relations would ensure favorable coverage of the U.S. war effort.
Yet the "information problem" continued, even after U.S. policy became "maximum candor and disclosure consistent with the requirements of security." Many reporters distrusted the daily official briefings in Saigon, which they derisively called "The Five O'Clock Follies." While some journalists considered these briefings a mixture of spin, exaggeration, and half-truths, others concluded that the information officers told "outright lies." Evidence for this darker interpretation came from Arthur Sylvester, the assistant secretary of defense for public affairs, who turned an innocuous social occasion in Saigon in July 1965 into a nasty confrontation when he sneered at reporters, "Look, if you think that any American official is going to tell you the truth, you're stupid."
Some White House officials worried about "fragmentary" reports lacking "perspective" on TV newscasts as the networks rapidly increased their news operations in South Vietnam in 1965. Their fears about what TV cameras might reveal became acute when the CBS Evening News showed correspondent Morley Safer's sensational film report about U.S. marines using cigarette lighters to burn civilian huts in a search-and-destroy operation in the village of Cam Ne in August. Pentagon officials charged Safer with staging the incident and tried unsuccessfully to get him removed from his assignment because his Canadian nationality supposedly made it impossible for him to report fairly on what they now called an "American" war.
Safer's story was exceptional. Few reports on TV newscasts in 1965 and 1966 directly questioned U.S. objectives or methods of warfare. Most concentrated on combat that involved what anchors commonly called "our" troops or pilots. Many of these "bang-bang" stories lauded the sophisticated military technology that gave U.S. forces advantages in firepower and mobility or the scale of the U.S. war effort, as troops and supplies poured into South Vietnam. Television news often entertained as it informed by providing many appealing human interest stories about American war heroes or ordinary GIs. Reporters and anchors usually accorded commanders in the field and high policymakers in Washington deferential treatment. Critics of the war—especially those in the more radical organizations—often got skeptical or patronizing coverage, if they got any at all.
Yet TV news also showed the difficulties, dilemmas, and horrors of Vietnam, if only occasionally, from the time that the Johnson administration committed large numbers of U.S. combat troops to the war in 1965. Some reporters quickly recognized fundamental strategic problems, as when ABC's Lou Cioffi asserted in October 1965 that "the United States has brought in a fantastic amount of military power here in Vietnam. But so far we've not been able to figure just exactly how to use it effectively in order to destroy the Vietcong." There were stories about the persistent troubles with pacification programs and the many ways that the war was distorting—and destroying—the lives of Vietnamese civilians. The difficulties and dangers of filming heavy fighting, along with the "queasy quotient" of network production staffs that edited reports for broadcast at the dinner hour, ensured that TV news programs would not show daily, graphic scenes of human suffering. But the newscasts did provide glimpses of severely wounded soldiers, as in Charles Kuralt's report for CBS about an artillery sergeant who clenched a cigar and grimaced as medics dressed the wounds in a leg that surgeons later amputated. A few stories also concerned atrocities, as when CBS's Don Webster described how U.S. troops severed ears from enemy corpses. And some stories could be unsettling, even if they contained no graphic images, as when the CBS Evening News showed a soldier's widow, baby in arms, reading one of her husband's last letters from Vietnam. Such stories were infrequent, yet their power came from what NBC News executive Reuven Frank said television journalism did best, which was the transmission of experience.
Johnson was concerned about the impact of dramatic images and the simplification inherent in half-hour newscasts. He also knew that television audiences were increasing; more than half the American people said they got most of their news from TV. The president's thinking was an example of what sociologist W. Phillips Davison has called "the third-person effect," a belief that mass communications have their greatest influence "not on 'me' or 'you,' but on 'them'" and a tendency to exaggerate the impact "on the attitudes and behavior of others." Johnson, who frequently watched the newscasts on banks of monitors tuned simultaneously to all three major networks, worried about the effects of even a single critical story and sometimes expressed his dismay directly to network news executives, anchors, or reporters. He also repeatedly found what he considered evidence of one-sidedness, unfairness, and bias.
As the war became more controversial and public support for his Vietnam policies declined, Johnson made more extreme charges. He told the president of NBC News in February 1967 that "all the networks, with the possible exception of ABC, were slanted against him," that they were "infiltrated," and that he was "ready to move on them if they move on us." The following month, he alleged that CBS and NBC were "controlled by the Vietcong," and later that year he insisted, "I can prove that Ho [Chi Minh] is a son-of-a-bitch, if you let me put it on the screen—but they [the networks] want me to be the son-of-a-bitch."
When many reporters began to describe the war as a stalemate in mid-1967, the Johnson administration launched a new public relations campaign aimed at persuading the American people that the United States was indeed making progress in achieving its goals in Vietnam. Believing that the "main front" of the war was "here in the United States," Johnson urged his advisers "to sell our product," since he insisted that "the Administration's greatest weakness was its inability to get over the complete story" on Vietnam. The Progress Campaign produced increased public support for Johnson's Vietnam policies. The improvement in the polls reflected the hopeful statements of high officials, including General William C. Westmoreland's famous declaration in a speech at the National Press Club in November 1967 that "we have reached the point when the end begins to come into view."
Such assertions of progress contributed to public disbelief and confusion and to further decline in the president's credibility when the Tet Offensive began in January 1968. TV showed startling scenes of South Vietnam under "hard, desperate, Communist attack," in the words of NBC's Brinkley, as fighting occurred in Saigon as well as in more than one hundred other locations. Some of the film was the most spectacular of the war, including footage on NBC and ABC of General Nguyen Ngoc Loan, the chief of the South Vietnamese police, executing a captured NLF officer after a street battle. Several disturbing reports showed TV journalists suffering wounds on camera.
Some observers have been highly critical of the news coverage of Tet. "The dominant themes of the words and film from Vietnam," wrote Peter Braestrup, "added up to a portrait of defeat for the allies. Historians, on the contrary, have concluded that the Tet offensive resulted in a severe military-political setback for Hanoi in the South." Yet historians continue to debate the results of the Tet fighting; there is no scholarly consensus, despite Braestrup's assertion. Moreover, TV journalists who assessed the battles did not find allied defeat. The most famous evaluation came from Walter Cronkite, who declared in a special, prime-time program on 27 February that "the Vietcong did not win by a knockout, but neither did we." "We are mired in stalemate," he concluded, and the time had come for negotiations to end U.S. involvement in the war.
"If I've lost Cronkite, I've lost the country," Johnson said gloomily. Cronkite's call for disengagement did influence the president, but only in combination with many other indications of deep divisions within the public, the Congress, the Democratic Party, and even his own administration over the war. Fearing that he could not govern effectively for another term, Johnson made his dramatic announcement to millions of stunned television viewers on the evening of 31 March 1968 that he sought negotiations to end the war and would not run again for president.
President Richard M. Nixon believed that he faced even greater opposition than Johnson from the news media in general and television journalists in particular, especially over his handling of the Vietnam War. Nixon usually read daily news summaries rather than watching the newscasts themselves. His marginal comments frequently indicated his displeasure and instructed assistants to "hit" or "kick" a particular correspondent or network for a story that he considered inaccurate or unfair. Presidential aides also maintained lists of journalists—mainly network anchors, White House correspondents, and syndicated columnists or commentators—ranked according to their friendliness toward the administration, which could be used to inflict retaliation or to provide "a special stroke."
Nixon followed a two-pronged strategy to deal with the alleged hostility of television news and to build public support for his Vietnam policies. One part involved direct, often hard-hitting, attacks on the networks. Beginning with his famous speech in Des Moines on 13 November 1969, Vice President Spiro T. Agnew tried to channel popular frustration with the war toward the networks by charging that the executives who ran them were "a tiny and closed fraternity" who "do not represent the views of America." The second part of the strategy was to use television as a medium of direct communication with the American people in order to bypass as much as possible critical reporters, editors, and commentators. As he withdrew U.S. forces from South Vietnam, Nixon urged his aides to use public relations initiatives—increasingly centered on television—to create an image of him "as a strong leader of boldness, courage, decisiveness, and independence" who would settle for nothing less than "peace with honor."
Television coverage of the war diminished as U.S. troops came home and U.S. casualties declined. Those stories that did air gave more attention to the social, political, and economic dimensions of a war that was again becoming mainly a Vietnamese conflict, one that to many Americans lacked the significance of earlier years, one that had simply gone on too long. In a report on the CBS Evening News about fighting in Quang Tri province in April 1972, the camera showed the crumpled bodies of children, refugees who died when their truck hit a land mine. There would be more fighting, correspondent Bob Simon declared, and more that generals, journalists, and politicians would say about those battles. "But it's difficult to imagine what those words can be," Simon concluded. "There's nothing left to say about this war. There's just nothing left to say."
Some critics blamed the extensive, uncensored television coverage for U.S. failure in Vietnam. Robert Elegant, who reported about Vietnam for the Los Angeles Times, insisted that partisanship prevailed over objectivity as journalists "instinctively" opposed the U.S. government and "reflexively" supported "Saigon's enemies." The television screen rather than the battlefield, according to Elegant, determined the outcome of the war by creating a "Vietnam syndrome" that "paralyzed American will" during Saigon's final crisis and that may have led to further troubles in Angola, Afghanistan, and Iran. Elegant offered little evidence to support these inflammatory charges, and Daniel C. Hallin, who carefully studied news media coverage of Vietnam, found many compelling reasons to conclude that television did not somehow lose the war. Hallin argued in The "Uncensored War": The Media and Vietnam (1986) that public opinion had turned against Johnson's handling of the war by early 1967, during a time that TV news coverage was most favorable to administration policies. Moreover, public support for the Korean War diminished even more quickly, yet "television was in its infancy, censorship was tight, and the World War II ethic of the journalist serving the war effort remained strong."
Hallin had the stronger argument, but Elegant's point of view had a greater effect on U.S. policy. As time went on, military officials resented the portrayal of the Vietnam War as part of what Hallin called the "sphere of legitimate controversy." Their belief that TV coverage undermined popular support for the war led to new restrictions on reporting when U.S. troops invaded Grenada in October 1983. Military commanders refused to transport reporters to the combat zone and barred them from the island for several days. Most journalists simply did not believe the official explanation that their exclusion was mainly to ensure their safety. ABC correspondent Steve Shepard was one of several reporters who chartered a boat, only to be turned away by the U.S. Navy as he approached Grenada. The Pentagon provided TV news programs with the only available video of the military operations in Grenada, but it did not include any scenes of combat. Walter Cronkite, who had retired in March 1981 as CBS anchor, deplored the abridgment of the public's right to know. Yet these protests did little to detract from the main story, which closely followed the Reagan administration's position—that U.S. forces had conducted a successful military operation against a potential Cuban-Soviet satellite. The restrictions on the reporting of the Grenada operation were an indication that in government-media relations, there would be no more Vietnams.
THE IRANIAN HOSTAGE CRISIS
No international story other than war dominated television news for as long as the Iranian hostage crisis. The seizure of the staff of the U.S. embassy in Tehran on 4 November 1979 marked the beginning of fourteen months of concentrated, dramatic, and controversial news coverage that affected both public understanding of the hostage crisis and government efforts to resolve it.
TV's treatment of the Iranian hostage crisis invites comparison with its reporting about a similar event—the seizure of the USS Pueblo on 23 January 1968 and the imprisonment of its crew of eighty-two (another crew member died of wounds incurred during the ship's capture). The North Korean capture of this intelligence ship got extensive coverage for several days on all three networks. Yet even when it led the news, the Pueblo seizure seemed to be related to the biggest continuing story at the time—the Vietnam War. Some reporters, such as ABC's diplomatic correspondent John Scali, told viewers that senior U.S. officials believed that the North Koreans had coordinated their action with the North Vietnamese, who were massing troops around the U.S. marine base at Khe Sanh. The beginning of the Tet Offensive a week later eclipsed the Pueblo story, although newscasts occasionally reported about the negotiations to free the crew. No one, at least on TV, counted the days (335) that the sailors remained in captivity. No Western journalist could go to Pyongyang to interview government officials or gauge popular attitudes toward the United States. Without such film reports, the Pueblo story simply could not hold a prominent, continuing position on TV newscasts. Film of some crew members did occasionally appear on the evening news programs. But the North Korean government approved its release; it contained confessions of wrongdoing and apologies, and the network journalists who narrated it made clear that the film was highly suspect. A few interviews with family members dwelled less on their distress or outrage than on whether the face or voice in the film was really their relative's and whether he appeared any different since being imprisoned. The Pueblo was an important story, but in 1968—a year of "horrors and failures," according to CBS's Harry Reasoner—it was not nearly as sensational or shocking or troubling as the assassinations of Martin Luther King Jr. and Robert F. Kennedy, the violence at the Democratic Convention in Chicago, the Soviet invasion of Czechoslovakia, or the war in Vietnam.
The Iranian hostage crisis, by contrast, became a dominant story quickly and remained so throughout its duration, even during the 1980 presidential election campaign. Some journalists did not imagine that it would become a news event of such magnitude. Ted Koppel, who covered the State Department for ABC, thought that this incident, like the detention of U.S. diplomats during an earlier invasion of the embassy in Tehran on 14 February 1979, would be over in hours. Yet the Sunday evening edition of ABC's World News Tonight on the first day of the crisis showed some of the images that did much to stoke public outrage: glimpses of hostages in handcuffs and blindfolds, the burning of an American flag, and a photograph of the Ayatollah Ruhollah Khomeini, who reportedly approved the takeover of the embassy.
Network competition had a notable effect on ABC's coverage of the crisis. In 1977 ABC News, traditionally third in ratings and reputation, got a new president, Roone Arledge, who also headed the network's highly successful sports division. Arledge considered expanding World News Tonight to a full hour as a way of giving it more prominence, but local affiliates were unwilling to cede to the network an additional—and highly profitable—half hour. Arledge then experimented with late night news programming by airing half-hour specials with increasing frequency at 11:30 p.m. (EST). The hostage crisis gave Arledge the opportunity to secure permanent hold of that time slot. ABC, however, did not show its first late-night special until 8 November 1979, nor did it make the program a nightly offering until six days later. The title of the show was both revealing and influential: America Held Hostage.
On 24 March 1980 the program got a new, permanent host, Koppel, and a new name, Nightline. It continued to provide daily coverage, even if the hostage crisis sometimes was not the lead story. Koppel hoped to use the growing capabilities of satellite technology and his skills as an interviewer to create "intercontinental salons" on live TV. Yet the discussion on the debut program was hardly genteel, as Dorothea Morefield, wife of a hostage, asked Ali Agah, the Iranian chargé in Washington, how his government could "continue to hold these innocent people." Some critics found such verbal confrontations contrived and mawkish, with news taking a back seat to show business. Yet television newscasts have long been a mixture of entertainment and information; Nightline expanded the limits of an established genre. And like ABC, the other networks covered the hostage crisis as a human drama as well as an international event, devoting considerable time both to interviews with family members and to the diplomatic efforts to secure the hostages' release. While ABC may have provided the hostage crisis with a melodramatic title, CBS's Walter Cronkite, television's most respected journalist, popularized what became the standard for measuring its duration. He added to his famous sign-off—"and that's the way it is"—a count of the days, eventually 444, that the fifty-two hostages endured captivity.
The Carter administration at first welcomed the heavy news coverage. Administration officials had many chances to explain to viewers that they were taking strong, but measured action—diplomatic initiatives, economic sanctions—to try to resolve the crisis without resorting to military force. Jimmy Carter could concentrate on his role as president, rather than as candidate facing a vigorous challenge for his party's presidential nomination from Senator Edward M. Kennedy of Massachusetts. Indeed, Carter conspicuously refrained from campaign travel in favor of a Rose Garden strategy that played up his responsibilities as national leader. Carter's approval rating surged from 30 to 61 percent during the first month of the hostage crisis. Never before had the Gallup Poll recorded such a sharp improvement.
Yet administration officials soon deplored the extensive television coverage. Hodding Carter, the State Department spokesperson, complained that news reports were complicating negotiations. White House officials found considerable evidence that Iranian demonstrators were playing to the cameras. Yet their efforts to shift public attention away from the hostage crisis simply would not work because of what presidential counsel Lloyd Cutler called "the constant drumbeat of TV news." Deputy Secretary of State Warren Christopher believed that television intensified public anger and frustration as it reported about the failed rescue effort in April 1980, described diplomatic initiatives that seemed ineffective, and relentlessly counted the days. Press secretary Jody Powell expressed his frustration at the end of one long day when there had been demonstrations across from the White House by two antagonistic groups that had shouted and scuffled. He crossed Pennsylvania Avenue late at night, walked into Lafayette Park, and unexpectedly encountered CBS reporter Jed Duvall. The reason for these prolonged difficulties, Powell blurted out, was "the networks with their nose up the Ayatollah's ass."
CENTRAL AMERICA AND THE LEGACIES OF VIETNAM
"No more Vietnams" was a popular slogan in the late 1970s and early 1980s, but there were strong disagreements about the meaning of that simple phrase. Some people wanted to avoid another long, costly, and—most important—unsuccessful war. Like Ronald Reagan's first secretary of defense, Caspar Weinberger, they believed that the United States should use its military forces only to achieve clear objectives that commanded public support and that would lead to victory. Others wanted to refrain from intervention in Third World revolutions or civil wars where no outside power could hope to impose a lasting settlement. Some counseled against a major effort, even to stop the spread of communism, in a peripheral area. Still others were wary of situations in which limited measures—military aid, the dispatch of advisers, covert action—might create pressures for progressively deeper involvement culminating in the commitment of combat troops.
Television viewers often learned about these perspectives as their advocates addressed a major, recurring question: Would Central America become "another Vietnam"? On the evening newscasts, the range of views on this issue—and, more generally, on U.S. policy toward Nicaragua or El Salvador—was much greater than in the early 1960s when television covered the expanding U.S. involvement in Vietnam. Changes in broadcast journalism did not account for this difference. Reporters still relied heavily on official sources of information. Quantitative studies of television newscasts show that officials of the Carter or Reagan administrations and members of Congress most frequently appeared in stories about Central America. What was different was that the sphere of legitimate controversy was broader, in large measure because of the legacy of Vietnam.
Television reporting did occasionally have a notable effect on public attitudes or U.S. policy concerning Central America. One shocking example involved ABC correspondent Bill Stewart, who was reporting about the civil war in Nicaragua. Carrying a white flag and media credentials, Stewart approached a National Guard roadblock in Managua on 20 June 1979. A guard officer ordered him to lie on the ground and then shot him and his interpreter. Stewart's crew filmed the killings from its van; the tape ran not just on ABC but on CBS and NBC as well. The footage was uniquely horrifying; the only comparable incidents that TV had shown were Jack Ruby's shooting of Lee Harvey Oswald and General Loan's execution of the NLF prisoner during the Tet Offensive. Speaking the next day to the Organization of American States, Secretary of State Cyrus Vance deplored "the mounting human tragedy" in Nicaragua. "This terror was brought home vividly to the American people yesterday with the cold-blooded murder … of an American newsman." Vance then issued the Carter administration's first public call for the resignation of Nicaraguan president Anastasio Somoza and his replacement with a "government of national reconciliation."
Eight years later in very different circumstances, television focused on another individual who affected public views about U.S. policy toward Nicaragua. Colonel Oliver North, fired from his position with the National Security Council, testified for several days in July 1987 during televised congressional hearings into the Iran-contra scandal. Dressed in marine uniform, North was poised and passionate. He admitted that he had misled Congress, but was unrepentant. He presented himself as a patriot who had served his country and his president by maintaining the contras, "body and soul." Polls revealed a significant shift in public attitudes toward continued U.S. military aid to the contras, with opponents outnumbering supporters by a margin of more than two to one before North's testimony but opinion almost equally divided after his appearance. Television, however, helped focus attention more on North's personality than on public issues. Polls showed that many Americans agreed with Ronald Reagan, who called North "a national hero." "Olliemania," as some journalists called the phenomenon of his sudden celebrity, helped launch North on a new career as radio talk show host after appeals overturned three felony convictions.
THE GREAT COMMUNICATOR ON THE WORLD STAGE
President Ronald Reagan became known as the Great Communicator, a distinction that earned him both plaudits and derision. Reagan's speeches moved, inspired, and reassured millions of people. Critics, however, insisted that Reagan was an acting president, a performer who brought to the White House the theatrical skills that he had learned in Hollywood and who followed scripts that he had done little, if anything, to create. Reagan, like most contemporary presidents, usually read texts that speechwriters had prepared. Yet sometimes the words and often the ideas were his own. Opponents deplored the troubling oversimplifications in his folksy anecdotes and uplifting stories. Yet many viewers found an authenticity that came from the president's sincerity and conviction. Reagan was extraordinarily successful at using the White House and, indeed, the world, as a stage—or perhaps, more accurately, a studio—as he exploited the medium of television to build public support for his presidency.
White House aides planned Reagan's public appearances with meticulous care as television events. They chose the best camera angles, chalked in toe marks so the president would know exactly where to stand, and positioned reporters to minimize opportunities for unwanted questions. The preparations reflected what the president's assistants called the "line of the day," the story that they wanted to lead the news in order to advance their legislative or international agenda. What viewers saw, Reagan's communications experts thought, was more important than what they heard. When the CBS Evening News ran a critical story in October 1984 about Reagan's use of soothing images to obscure unpopular policies, reporter Lesley Stahl was astounded when White House aide Richard Darman telephoned to congratulate her. "You guys in Televisionland haven't figured it out, have you?" Darman said. "When the pictures are powerful and emotional, they override if not completely drown out the sound. Lesley, I mean it, nobody heard you."
Televised images mattered so much to the Reagan White House partly because of changes in TV news. By the early 1980s about two-thirds of the American public said that television was their primary source of news. Viewers could watch a growing number of news programs, including morning and midday shows as well as the traditional evening broadcasts. During prime time there were popular magazine programs, such as 60 Minutes (CBS) and 20/20 (ABC), as well as brief updates called "newsbreaks." And at the end of the day, there was Nightline. Cable TV, which reached 20 percent of television households in 1981 and more than twice that proportion in 1985, offered more choices. On 1 June 1980 the Cable News Network became the first 24-hour news channel. Greater competition and corporate pressures made network news executives more concerned with ratings and willing to try to increase them by altering the balance between information and entertainment. At CBS, for example, when ratings plunged after Cronkite's retirement, the news director urged the new anchor, Dan Rather, to dress in a sweater to appear friendly and informal and insisted on more "feel-good" features. CBS producers dropped a report about State Department reaction to Israeli bombing in Lebanon to open a slot on the evening news of 30 November 1982 for a story about singing sheep. On all the networks, lighter features, striking visuals, and ten-second sound bites increasingly became ways to attract and hold viewers who had more choices, remote controls, and seemingly shorter attention spans. The communications experts in the White House exploited these trends, packaging presidential appearances to fit the changes in TV news.
Reagan's international trips produced many dramatic and memorable television scenes. The advance planners created public occasions, often in striking surroundings, where the president would be in the spotlight. For example, on the rocky coast of Normandy, Reagan gave a magnificent speech in which he commemorated the fortieth anniversary of D-Day on 6 June 1984 by saluting "the boys of Pointe du Hoc … the champions who helped free a continent … the heroes who helped end a war." White House aide Michael Deaver made sure that the French scheduled Reagan's address so it would air live during the network morning news programs. In another stirring scene, Reagan expressed his fervent anti-communism and his commitment to freedom when he stood before the Brandenburg Gate in West Berlin on 12 June 1987 and cried, "Mr. Gorbachev, tear down this wall." Other trips produced less exalted, but nonetheless effective events. During a trip to South Korea in November 1983, Reagan attended an outdoor service at a chapel within sight of the North Korean border. One military police officer explained that a nearby armored personnel carrier was there for "backdrop." In one notorious case, advance planning failed. Presidential assistants did not learn that SS troops were buried at Bitburg cemetery in West Germany before the White House announced Reagan's visit. The president refused to change his plans, but he also went to Bergen-Belsen, the site of a Nazi concentration camp, where he gave one of his most moving addresses.
Reagan's summits with Mikhail Gorbachev were international media events with considerable symbolic significance. At their first meeting in Geneva in November 1985, Reagan said that he recognized from Gorbachev's smile that he was dealing with a different kind of Soviet leader. Televised images of their close and friendly relations symbolized the international changes that were occurring as the Cold War began to wane. At their summit in Washington, D.C., in December 1987, the most important substantive achievement was the signing of the Intermediate-Range Nuclear Forces treaty. But what mattered as well was what the news media called "Gorby fever," which the Soviet leader stoked by stopping his limousine and plunging into welcoming crowds in downtown Washington. When Reagan reciprocated by traveling to Moscow in May 1988, he followed a schedule that was the result of elaborate planning, including the use of polling and focus groups to test the themes of his speeches. Cameras followed Reagan and Gorbachev as they strolled through Red Square answering questions that appeared to be spontaneous, but some of which had been planted. When a reporter asked about the "evil empire," as Reagan had described the Soviet Union in a famous speech in March 1983, the president replied, "That was another time, another era." The televised scenes beginning in late 1989 of revolutions in Eastern Europe and the opening of the Berlin Wall confirmed Reagan's pronouncement.
THE PERSIAN GULF WAR
On 16–17 January 1991, viewers around the world watched the beginning of a war for the first time ever on live television. As allied bombs and cruise missiles hit targets in Iraq, CNN reporters described what they saw from their hotel in Baghdad during the first hours of the Persian Gulf War. The explosions had severed communications with other U.S. network reporters in the Iraqi capital. Using the telephone, CNN correspondents Peter Arnett, John Holliman, and Bernard Shaw acted much like radio reporters, since they were unable to transmit pictures of what they saw. "Now there's a huge fire that we've just seen," Holliman exclaimed. "And we just heard—whoa. Holy cow. That was a large air burst that we saw. It was filling the sky." "Go for it, guys," a CNN producer told the reporters. "The whole world's watching." One of those viewers was President George H. W. Bush, who said with relief that the war had begun, "just the way it was scheduled."
The reporting on that first night from Baghdad was exceptional, in part because government restrictions did not impede it. But Iraqi authorities established censorship within twenty-four hours after the start of the bombing and U.S. military officials imposed a system of restraints on the news media that had been years in the making.
The outlines of that system emerged in 1984, when the Pentagon responded to complaints about the exclusion of reporters from Grenada by creating a committee chaired by General Winant Sidle, who had served as a military information officer during the Vietnam War. The Sidle panel recognized that technological innovations—portable satellite antennas that made possible live broadcasting from the battlefield—might jeopardize the security of U.S. military operations or the safety of U.S. troops. It called on journalists to refrain from reporting sensitive information as a condition of their accreditation. If commanders decided that military considerations required limited access to combat areas, the Sidle panel recommended that small groups of journalists be allowed into battle zones and then share their reporting with their colleagues.
This pool system got its first test when U.S. troops invaded Panama in December 1989. Military transports carried reporters to base areas, but kept them sequestered during the first days of Operation Just Cause. Complaints about a system that seemed to keep reporters away from military action rather than to facilitate their access to it led to extended discussions between news organizations and military authorities during the buildup of U.S. forces in Saudi Arabia during late 1990. On the eve of Operation Desert Storm, the Pentagon announced new guidelines, which required journalists to participate in pools, exclude restricted information from their articles and broadcasts, and submit their reports to military officials for security review.
The legacies of Vietnam exerted a powerful influence on the Bush administration as it prepared for war. General Colin Powell, the chair of the Joint Chiefs of Staff, believed that Vietnam had shown that the American public simply would not tolerate a prolonged, televised war with heavy casualties. Bush thought that Vietnam had also proven other lessons. He insisted that the United States and its coalition partners must use overwhelming military force that would produce decisive results. The president did a masterly job of building public support for his demand that Iraqi troops withdraw from Kuwait and his determination to use force, if necessary, to achieve that goal. He portrayed Saddam Hussein as a kind of Middle Eastern Hitler—a dictator who brutalized Iraqis, an outlaw who defied fundamental principles of international order, and an aggressor who wanted weapons of mass destruction to threaten other nations. TV news provided other viewpoints, other voices, including Hussein's, as the Iraqi president granted interviews to several Western journalists, including news anchors Peter Jennings of ABC and Rather of CBS. The newscasts also covered the debate over the use of force, which Congress authorized on 12 January 1991. Polls showed that substantial majorities favored military action and endorsed Bush's handling of the crisis. Once the war began, the president enjoyed overwhelming public support for his policies.
TV's war coverage in some ways resembled a miniseries. The networks had distinctive titles—"War in the Gulf," "Showdown in the Gulf"—and accompanying music. A good deal of the coverage emphasized the drama of war—the danger of a sudden attack by Iraqi Scud missiles, the risks of flying into hostile fire, the heroism of U.S. troops, and the sacrifices of their relatives at home.
Yet in all this reporting about war, TV provided an extremely limited view of the fighting. The Defense Department supplied most of the video of the air war. Recorded with night vision equipment, it seemed fantastic and futuristic, something that reminded many viewers of a video game. The pool system ensured that reporters would have few unregulated opportunities to cover combat. A handful of correspondents, like CBS's Bob Simon, went out into the Saudi desert without escorts. Simon's first such excursion produced a report from Khafji, near the border with Kuwait, where U.S. marines were under attack. On his next unescorted trip, Iraqi troops captured him; Simon and his crew remained prisoners until the end of the war. Although network newscasts ran several reports about what they bluntly called censorship, there was little public dissatisfaction with the Pentagon's restrictions. Viewers were far more interested in seeing the briefings of General H. Norman Schwarzkopf, commanding and apparently authoritative performances that made the general the greatest hero of the Gulf War.
The television journalist who got the most attention—much of it unwelcome—was CNN's Peter Arnett. When most reporters decided to put personal safety ahead of the story, Arnett decided to stay in Baghdad. When Iraqi authorities decided to expel the remaining Western TV correspondents during the first week of fighting, they excepted CNN, mainly because producer Robert Wiener had spent months building cooperative relations with government officials in Baghdad. Arnett had no competition, but he did have "minders," information officials who limited his movements and censored his reporting. Repeatedly Arnett pushed against those limits, for example, by arguing that his reports would have greater credibility if he could respond more freely to questions from CNN anchors during live broadcasts. His stories about civilian casualties during allied bombing raids—at a baby milk formula factory and an air raid shelter—stirred enormous controversy in the United States. The most extreme of many attacks on the alleged impropriety of reporting from behind enemy lines came from Republican representative Lawrence Coughlin of Pennsylvania, who denounced Arnett as "the Joseph Goebbels of Saddam Hussein's Hitler-like regime." Arnett disliked becoming part of the story, yet he stayed in Baghdad until the end of the ground war.
Arnett's reporting notwithstanding, Bush administration officials were pleased with television coverage of the Gulf War. Pete Williams, the Pentagon's chief public affairs officer, concluded that the news media had provided "the best war coverage we've ever had." Secretary of State James A. Baker spoke more candidly when he described "that poor demoralized rabble—outwitted, outflanked, outmaneuvered by the U.S. military. But I think, given time, the press will bounce back."
THE CNN EFFECT
By the early 1990s many people had concluded that television news possessed formidable powers to influence the U.S. government's foreign policy. The "CNN effect," as it is usually called, actually has several dimensions. The first is providing a new channel of diplomatic communication, one that allows governments to transmit proposals or engage in dialogue, sometimes with extraordinary speed. Officials in the Bush administration, for example, sometimes used TV to send messages to Saddam Hussein after the invasion of Kuwait, hoping that a public channel might increase the pressure on the Iraqi leader to accede to U.S. demands. Government leaders, however, have long used the news media as channels of diplomacy. Radio, for example, carried Woodrow Wilson's Fourteen Points address of 8 January 1918 to an international audience.
The second dimension of the "CNN effect" is setting the foreign policy agenda—giving certain issues urgency or importance through news reports that capture the interest of millions of viewers and elicit a strong response. The ability to provide live reports from almost anywhere in the world, to transmit dramatic, emotional images, and to show them repeatedly seems to give television greater power than the other news media to alter the priorities that the government assigns to international issues. The third dimension is accelerating official action. Even before the advent of CNN and other twenty-four-hour news channels, Lloyd Cutler, a counsel to President Carter, found that TV news had led to "foreign policy on deadline," as White House officials hurried to take action—to make a statement, to announce a new initiative—before the next newscast.
The final—and most controversial—dimension of the "CNN effect" is forcing government action. George F. Kennan, the foreign service officer who was an architect of containment during the early Cold War, summarized this perspective in a diary entry about U.S. intervention in Somalia in December 1992. Kennan maintained that television pictures of starving Somali children had produced "an emotional reaction, not a thoughtful or deliberate one," but one strong enough to take control of foreign policy decisions from "the responsible deliberative organs of our government."
A closer look at U.S. involvement in Somalia, however, suggests different conclusions than Kennan's about the effects of televised images on government policy decisions. Quantitative studies show that extensive coverage of the famine and fighting in Somalia followed the policy initiatives of the Bush administration in 1992 rather than preceded them. Television coverage surely affected the views of administration officials and gave them confidence that what they thought would be a limited, low-risk humanitarian intervention would have considerable public support. But television pictures of suffering Somalis did not determine the president's decision to dispatch troops. Television had a more decisive effect on President William Jefferson Clinton's decision to terminate Operation Restore Hope when newscasts showed shocking tape in early October 1993 of a crowd in Mogadishu desecrating the corpse of a U.S. soldier who had been killed in a firefight. The U.S. casualties took the president by surprise, and he was not prepared to appeal to angry members of Congress for the continuation of a mission that had suddenly grown dangerous. Instead, Clinton announced that U.S. forces would come home by 31 March 1994.
Television showed the horrors of ethnic cleansing and civil war in Bosnia, and those reports were influential but not decisive in shaping U.S. government action. Scenes of Serb camps with emaciated Muslim and Croat prisoners in August 1992 produced condemnations from the Bush administration. Yet the president and his principal advisers were unwilling to take military action, as they believed that there was no clear exit strategy. Clinton, too, reacted intensely to graphic TV reports of atrocities, such as the casualties that occurred when a mortar shell exploded in a Sarajevo marketplace on 5 February 1994. But he followed no consistent policy. Not until mid-1995 did the Clinton administration approve strong measures, including continuing NATO air strikes, to bring the Bosnian war to an end. Available evidence suggests that the president acted to eliminate a major problem that burdened U.S. foreign policy and that threatened his political prospects. Almost four years later, in March 1999, the United States and its NATO allies again used military force in an air war against Yugoslavia to persuade President Slobodan Milosevic to halt ethnic cleansing in Kosovo. News reports, including many on TV, of brutality against Kosovars contributed to public support for this war. But concern about popular reaction to potential U.S. casualties led Clinton to rule out the use of ground troops, except for peacekeeping.
The "CNN effect" influenced U.S. interventions in Somalia, Bosnia, and Kosovo. TV reports helped set the agenda; at times, officials of the Bush and Clinton administrations had to react—sometimes quickly—to events that dominated the newscasts. But the "CNN effect" was variable, and it was only one of many factors in the process of formulating foreign policy.
THE TWENTY-FIRST CENTURY
The new millennium began with televised celebrations on every continent, hopeful events that suggested that modern communications were bringing closer the creation of Marshall McLuhan's global village. Yet the twenty-first century also brought almost unimaginable scenes of horror and suffering when terrorists flew hijacked airplanes into the twin towers of the World Trade Center and the Pentagon on 11 September 2001. Enormous audiences in the United States and around the world relied on television for news about these disasters. Even government officials watched television because it provided more information more quickly than other available sources. Round-the-clock coverage on the broadcast as well as the cable news channels quickly spread the disbelief, outrage, grief, and uncertainty about the future that were immediate products of these startling events.
Technological changes—especially greater Internet access and the increasing convergence of the computer and the television—may alter viewing habits and change sources of news and information. But for the immediate future, at least, during conflicts, crises, and other important international developments, both public officials and citizens will turn to television news.
Arlen, Michael J. Living-Room War. New York, 1982.
Arnett, Peter. Live from the Battlefield: From Vietnam to Baghdad, 35 Years in the World's War Zones. New York, 1994. A superb memoir by a reporter who covered the Vietnam War for the Associated Press and the Persian Gulf War for CNN.
Barnouw, Erik. Tube of Plenty: The Evolution of American Culture. 2d rev. ed. New York, 1990.
Baughman, James L. The Republic of Mass Culture: Journalism, Filmmaking, and Broadcasting in America Since 1941. 2d ed. Baltimore, 1997.
Bernhard, Nancy E. U.S. Television News and Cold War Propaganda, 1947–1960. New York, 1999.
Braestrup, Peter. Big Story: How the American Press and Television Reported and Interpreted the Crisis of Tet 1968 in Vietnam and Washington. 2 vols. Boulder, Colo., 1977.
Cannon, Lou. President Reagan: The Role of a Lifetime. New York, 2000.
Cutler, Lloyd N. "Foreign Policy on Deadline." Foreign Policy no. 56 (fall 1984): 113–128.
Davison, W. Phillips. "The Third-Person Effect in Communication." Public Opinion Quarterly 47 (spring 1983): 1–15.
Dennis, Everette E., et al. The Media at War: The Press and the Persian Gulf Conflict. New York, 1991.
Elegant, Robert. "How to Lose a War." Encounter 57 (August 1981): 73–90. A harsh and often polemical critique of reporting of the Vietnam War.
Halberstam, David. War in a Time of Peace: Bush, Clinton, and the Generals. New York, 2001.
Hallin, Daniel C. The "Uncensored War": The Media and Vietnam. New York, 1986. The best analysis of news media reporting of Vietnam.
——. We Keep America on Top of the World: Television Journalism and the Public Sphere. New York, 1994. A collection of excellent essays.
Hammond, William M. Public Affairs: The Military and the Media, 1962–1968. Washington, D.C., 1988.
——. Public Affairs: The Military and the Media, 1968–1973. Washington, D.C., 1996.
Isaacs, Arnold R. "The Five O'Clock Follies Revisited: Vietnam's 'Instant Historians' and the Legacy of Controversy." The Long Term View 5 (summer 2000): 92–101. Explains how Vietnam created lasting distrust of the news media among military officers.
Kimball, Jeffrey P. Nixon's Vietnam War. Lawrence, Kans., 1998.
Koppel, Ted, and Kyle Gibson. Nightline: History in the Making and the Making of Television. New York, 1996. Contains an insider account of the birth of the ABC news show Nightline during the Iranian hostage crisis.
MacArthur, John R. Second Front: Censorship and Propaganda in the Gulf War. New York, 1992. A scathing critique of the pool system and the news media's cooperation with it.
Maltese, John Anthony. Spin Control: The White House Office of Communications and the Management of Presidential News. 2d ed. Chapel Hill, N.C., 1994.
Mermin, Jonathan. "Television News and American Intervention in Somalia: The Myth of a Media-Driven Foreign Policy." Political Science Quarterly 112 (autumn 1997): 385–403.
Neuman, Johanna. Lights, Camera, War: Is Media Technology Driving International Politics? New York, 1996.
O'Neill, Michael J. The Roar of the Crowd: How Television and People Power Are Changing the World. New York, 1993.
Pach, Chester J., Jr. "And That's the Way It Was: The Vietnam War on the Network Nightly News." In David Farber, ed. The Sixties: From Memory to History. Chapel Hill, N.C., 1994.
——. "Tet on TV: U.S. Nightly News Reporting and Presidential Policymaking." In Carole Fink, Philipp Gassert, and Detlef Junker, eds. 1968: The World Transformed. New York, 1998.
——. "The War on Television: TV News, the Johnson Administration, and Vietnam." In Marilyn Young and Robert Buzzanco, eds. Blackwell Companion to the Vietnam War. Malden, Mass., 2002.
Small, Melvin. Covering Dissent: The Media and the Anti-Vietnam War Movement. New Brunswick, N.J., 1994.
Stahl, Lesley. Reporting Live. New York, 1999.
Strobel, Warren P. Late-Breaking Foreign Policy: The News Media's Influence on Peace Operations. Washington, D.C., 1997. Includes excellent analysis of TV coverage of military interventions in Somalia, Haiti, and Bosnia.
Tallman, Gary C., and Joseph P. McKerns. "'Press Mess:' David Halberstam, the Buddhist Crisis, and U.S. Policy in Vietnam, 1963." Journalism and Communication Monographs 2 (fall 2000): 109–153.
Utley, Garrick. You Should Have Been There Yesterday: A Life in Television News. New York, 2000. A highly informative memoir by one of television's most important international affairs correspondents.
Woodward, Bob. The Commanders. New York, 1991.
Wyatt, Clarence. Paper Soldiers: The American Press and the Vietnam War. New York, 1993.
See also Dissent in Wars; The National Interest; The Press; Public Opinion.
FRIENDLY PERSUASION: LBJ, TV NEWS, AND THE DOMINICAN REPUBLIC
On 28 April 1965, President Lyndon B. Johnson ordered U.S. marines to the Dominican Republic to protect U.S. citizens during political violence and to prevent communists from seizing power. Johnson was extremely concerned about news coverage of the intervention. He tried unsuccessfully to get CBS to remove television correspondent Bert Quint, whose reports from Santo Domingo cast doubt on whether there was a significant communist threat. But Johnson found other ways to affect television reporting. In an oral history interview, NBC correspondent John Chancellor revealed the following about events of 2 May 1965: "We had a program, a television program on a Sunday afternoon…. I had gone down … to stand in front of the White House and speak a little essay into the camera on what the President's reaction was…. As I stood out there waiting for the program to begin, what I didn't know was that the President was upstairs…. He was alone and looking at me out the window, and he got very curious about what I was doing…. And the guard in the West Wing came out and got me, and I went inside. He said, 'There's a telephone call for you,' and it was the President."
According to the tape of the telephone conversation, Johnson said:
John, … I don't want to be quarrelsome, but I want you to know the facts…. If we don't watch out, the bellyachers are going to run the country and we'll lose our democracy…. Our mission down there [in the Dominican Republic], evacuation, is not half-way through…. [U.S. Ambassador John Bartlow Martin] says that the Latin American … ambassadors, generally, are very favorable to us because we've saved their hide…. While they can't come out and say we're against mother or we support marines in Latin America, … they're very happy…. And … he's [Martin] going to point out [at a press conference] some of … them that have been imported and are known Castro leaders…. Fifty are identified as of last night…. I have to be very careful because I don't want to say a guy [who] disagrees with me is a Communist, or I'm a McCarthy…. The point, though, that I want to get over with you is those on the ground … are very happy that their lives have been spared and we're there…. Number two—the mission is not completed or about to be completed.
Chancellor replied, "All right, I have that clearly in mind," and Johnson said, "Okay, partner."
Chancellor recollected: "And I went out and stood out there—it didn't sound right, what he had told me, but nonetheless … I put it into the piece I'd written…. Then I went back and the following day I was able to determine pretty accurately that what he'd told me was an absolute fabrication, a big lie! I've rarely been as angry. I really was just furious! Presidents use all kinds of tools on reporters to do their work…. I've never really told this to anybody before except a few close friends because you don't go around calling the president a liar. In this case, he was."
Punk rock music began in New York City in the mid-1970s. It was then that bands like Television, the Ramones, Patti Smith, and the Stilettoes (the band that would later become Blondie) set the stage for a new kind of music. “On the nascent New York punk-rock circuit of the mid-1970s,” wrote Kurt Loder in Esquire, “Television was a wondrous curiosity—a scragged-out Bowery quartet that enriched its witty punk-squawk tunes with gorgeous, extended improvisations by two very distinct guitarists, Richard Lloyd and songwriter Tom Verlaine.”
In late 1973 Tom Verlaine was walking through New York’s Bowery section complaining to a friend about the difficulties of finding clubs in which to perform. Together they stumbled upon CBGB’s and its owner, Hilly Kristal. After a casual discussion, Kristal told Verlaine that his band should come by and audition. Until then the bar had featured Irish folk music and served as a biker bar a couple of nights a week.
The band, consisting of Verlaine and Lloyd on guitars, Billy Ficca on drums, and Richard Hell on bass, placed some mimeographed posters around town and bought their own ads. But after only a month of playing one or two nights a week, other like-minded musicians began showing up. The Ramones were looking for a place to play, as were Patti Smith and the Stilettoes.

Members include: Billy Ficca, drums; Richard Lloyd, guitar; Tom Verlaine (born Tom Miller, c. 1950, in New Jersey), vocals and guitar; Fred Smith, bass. Former member: Richard Hell, bass.

Band formed in 1973 in New York City. Began playing at the now-legendary CBGB, 1973; released first single “Little Johnny Jewel,” 1975; first album Marquee Moon, 1977; disbanded, 1979; reunited for a tour and the album Television, 1992.
Early punk music was not so much a rebellion as a counter-revolution. “The first punks were not a new generation,” wrote Bill Flanagan in Musician, “but the underbelly of the 60s generation who remembered the glory of their youth and wanted to reclaim rock from Pink Floyd, the Doobie Brothers, the Moody Blues—whoever they felt had blown it.”
Many people erroneously think that punk rock began in England, but in fact an Englishman took the New York look and sound back to England. Malcolm McLaren was managing the campy glam band the New York Dolls in the mid-1970s. The members of Television wore ripped clothing because they didn’t know how to sew, and McLaren was obsessed with their look. “It was very much like, ’Just play and I’ll do everything else—you’ll have a record out in six months, I guarantee it will be top ten,’” Verlaine recalled in Musician. While Hell liked the idea, neither Lloyd nor Verlaine trusted McLaren; they told him “no thanks.” McLaren went back to England, and within nine months the Sex Pistols surfaced on the London scene, sporting Richard Hell’s hairdo and Television’s ripped-up look.
In 1974 producer Brian Eno helped record a Television demo. Before long, an A&R (artists and repertoire) person at Island Records was calling it half of an album. But none of the band liked the production style of the demo, and they asked to begin again with a different producer. Around this time Richard Hell left Television due to friction among the members and formed the Voidoids. When the Stilettoes broke up, Verlaine invited their bass player, Fred Smith, to join Television.
Meanwhile, Sire Records was offering record deals to many artists. Patti Smith was the first of CBGB’s acts to sign with a label, releasing her ground-breaking album Horses on Arista in 1975. Television released the single “Little Johnny Jewel” in 1975, but instead of accepting a deal with Sire, as the Ramones and the Talking Heads did, Television decided to wait for a better deal.
Finally, in 1977, Elektra Records released Television’s Marquee Moon, which is considered a landmark album. Rolling Stone’s David Fricke wrote, “the stunning ice-blue guitarchitecture and defiant spirit of free-jamming wanderlust on Television’s debut album … blew wide holes through cream-puff AOR rock and the already calcifying primitivism of punk.” In Spin, Andrew Schwartz called Marquee Moon “Television’s one uncontestable masterpiece … the album’s ingeniously orchestrated guitar parts and stark fables of spiritual transcendence amid urban decay left marks still evident in the music of [today’s bands] U2, Sonic Youth, and Ride, to name a few.”
Although Television is always mentioned among the first punkers who vastly influenced British punk and subsequent “alternative” subgenres, their sound was actually much different from that of other bands. As James Rotondi wrote in Guitar Player, “Television’s improvisational bent and poetic streak set them off from most of their contemporaries.” Schwartz felt that “Television plays rock’n’roll, not as high-speed eighth-notes or monolithic bar chords, but as a series of improvisations by a deft, powerful Smith-Ficca rhythm section and two virtuoso guitarists, Verlaine and Lloyd.”
As a songwriter, Verlaine has certainly managed to set the band apart from their contemporaries. His lyrics usually begin as odd narrative tales that eventually lose any discernible story line. Many of his influences came from the flying saucer songs on his childhood radio. Schwartz asserted that “Verlaine draws less from [early rocker] Chuck Berry than from 50s films and 19th-century poets such as Arthur Rimbaud. If there was anger and defiance in the music, it was more in the spirit of [poet] Allen Ginsberg’s ‘Howl’ than of teenage rebellion.” Pulse! noted that Television’s “fat-free twin guitar attack and sparse lyrics helped pave the way for punk rock’s economy.”
Television’s 1978 follow-up album, Adventure, paled by comparison to their debut, although it too impressed critics. The band did not record a third album until 1992, a delay that led many people to believe Television had broken up and later made a comeback; the band insisted they had simply been on hiatus for thirteen years. It was their live performances, however, and not their albums, that made them legendary; their shows were considered rarely equaled in rock. Though they released only two official albums—neither of which sold even 150,000 copies—at least sixteen bootleg releases have surfaced since.
All four members worked on various projects during their “sabbatical.” Verlaine regularly received critical kudos for his solo works. And although not as prolific as Verlaine, Lloyd was also critically lauded for his solo efforts, as well as for his lead guitar work with singer/songwriter Matthew Sweet, and with X’s leadman John Doe on his side projects.
The sparks that flew on stage between Lloyd and Verlaine were considered the same sparks that broke up the band. But their differences were not apparent on 1992’s Television. Spin’s Celia Farber called Television “a damned good, maybe even great, record.” Some critics had mixed feelings, but nobody could deny that Television still had their gifts. Surprisingly, reviewers did not romanticize the comeback, but evaluated it with a careful ear. In Esquire, Loder said that “these gleaming tapestries of (for the most part) straight-through-the-amp Fender guitar sound—now mellowed somewhat, but more compelling than ever—are one of art-rock’s richer rewards.”
As of the mid-1990s Television’s status was unclear. Capitol Records had produced Television as a one-off. Although their reunion tour was well received, the members had no plans to give up their solo work. Regardless of Television’s plans, as Guitar Player proclaimed, “for their balance of subtly shaped tones, their intertwining of rich melodies, their dynamics, and their jagged rhythmic interplay, they are as crucial to modern guitar as any band of the past 20 years.”
Marquee Moon, Elektra Records, 1977.
Adventure, Elektra Records, 1978.
The Blow Up, ROIR CD, 1978.
Television, Capitol Records, 1992.
Verlaine’s solo albums
Flashlight, IRS Records.
Words From the Front, Warner Bros.
Warm and Cool, Rykodisc, 1992.
Lloyd’s solo albums
Field of Fire, Moving Target, 1985.
Real Time, Celluloid, 1987.
Esquire, January 1993.
Guitar Player, January 1993.
Metro Times, October 21, 1992; March 3, 1993.
Musician, September 1992; June 1995.
Pulse!, September 1992; November 1992.
Rolling Stone, October 29, 1992; January 7, 1993.
Spin, November 1992; January 1993.
THE RELATIONSHIP BETWEEN FILM AND TELEVISION
TELEVISION AND FILM BEFORE 1960
FILM ON NETWORK TELEVISION
THE IMPACT OF CABLE AND HOME VIDEO FROM 1980–2000
DIGITAL TECHNOLOGY AND THE FUTURE OF FILM AND TELEVISION
The experience of seeing movies is likely to conjure thoughts of going to a movie theater: the smell of popcorn at the concession stand, the friendly bustle of fellow moviegoers in the lobby, the collective anticipation as the auditorium lights dim, and the sensation of being enveloped by a world that exists, temporarily, in the theater's darkness. Anyone who enjoys movies has vivid memories of going out to see movies; the romance of the movie theater is crucial to the appeal of cinema. But what about all of the movies we experience by staying in? The truth is that most of us born since 1950 have watched many more movies at home, on the glowing cathode-ray tube of a television set, than on the silver screen of a movie theater.
It is not often recognized, but the family home has been the most common site of movie exhibition for more than half of the cinema's first century. In the United States this pattern began with the appearance of commercial broadcast television, starting with the debut of regular prime-time programming in 1948, and has grown with each new video technology capable of delivering entertainment to the home—cable, videocassette recorders (VCRs), direct broadcast satellites (DBS), DVD (digital video disc) players, and video-on-demand (VOD). Over much of this period, watching movies on TV represented a calculated tradeoff for consumers: television offered a cheap and convenient alternative to the movie theater at the cost of a diminished experience of the movie itself. With the introduction of high-definition (HDTV) television sets and high-fidelity audio in the 1990s, however, the humble TV set has grown to be the centerpiece of a new "home theater," which can offer a viewing experience superior in most ways to that of a typical suburban multiplex. In fact, with theaters desperate for additional income, going out to the movies now often involves sitting through a barrage of noisy, forgettable commercials for products aimed mostly at teenagers. In an odd twist, the only hope for avoiding commercials has become to stay in and watch movies on television.
We tend to think of film and television as rival media, but their histories are so deeply intertwined that thinking of them separately is often a hindrance to understanding how the film and television industries operate or how people experience these media in their everyday lives. Starting in the late 1950s, Hollywood studios began to produce substantially more hours of film for television (in the form of TV series) than for movie theaters, and that pattern holds to this day. Since the early 1960s, it has been apparent that feature films are merely passing through movie theaters en route to their ultimate destination on home television screens. As physical artifacts, films may reside in studio vaults, but they remain alive in the culture due almost entirely to the existence of television. Whether films survive on cable channels or on DVD, they rarely appear on any screens other than television screens once they have completed their initial theatrical release. Given the importance of television in the film industry and in film culture, why do we think of film and television separately?
First, when television appeared on the scene, there was already a tradition of defining the cinema in contrast with other media and art forms. Much classic film theory and criticism, for instance, sought to define film as an autonomous medium by comparing it with precedents in theater, painting, and fiction. In each case, the goal was to acknowledge continuities while highlighting the differences that made film unique. Within this framework, it seemed natural to look for the differences between film and television, even as the boundaries between the media blurred and television became the predominant site of exhibition for films produced in Hollywood.
Second, there is an inherent ambiguity in the way that the term "television" functions in common usage, and this complicates efforts to delineate the relationship between film and television. Depending upon the context of usage, the word "television" serves as convenient shorthand for speaking about at least four different aspects of the medium:
- Technology: "Television" is used to identify the complex system of analog and digital video technology used to transmit and receive electronic images and sounds. While electronic signals are transmitted and received virtually simultaneously, the images and sounds encoded in those signals may be live or recorded. In other words, the "liveness" of television—a characteristic often used to distinguish television and film—is inherent in the acts of transmission and reception, but not necessarily in the content that appears on TV screens.
- Consumer Electronics: "Television" also refers to the television set, an electronic consumer good that is integrated into the spaces and temporal rhythms of everyday life. While the movie theater offers a sanctuary, set aside from ordinary life, the TV set is embedded in life. Initially, the TV set was an object found mainly in the family home; increasingly, television screens of all sizes have been dispersed throughout society and can be found in countless informal social settings. As a consumer good, the HDTV set is also becoming a fetish object for connoisseurs of cutting-edge technology—independent of the particular content viewed on the screen.
- Industry: "Television" refers also to the particular structure of commercial television, a government-regulated industry dominated by powerful networks that broadcast programs to attract viewers and then charge advertisers for the privilege of addressing those viewers with commercials. Using the airwaves to distribute content, the television industry initially had no choice but to rely on advertising revenue, which led to the peculiar flow of commercial television—the alternation of segmented programs punctuated regularly by commercials—as well as the reliance on series formats to deliver consistent audiences to advertisers.
- Content: "Television" serves as a general term for the content of commercial television, particularly when comparing film and television. Considering the vast range of content available on television, this usage often leads to facile generalizations, suggesting that there is an inherent uniformity or underlying logic to the programs produced for television.
As a result of the ambiguity involved in the usage of the term "television," there is no sensible or consistent framework for thinking about the relationship of film and television. Instead, a single characteristic often serves as the basis for drawing a distinction between the two forms, even though it may obscure more significant similarities. For example, the common assumption that television is a medium directed at the home, while film is a medium directed at theaters, overlooks the importance of the TV set as a technology for film exhibition. Similarly, the emphasis on television's capacity for live transmission obscures the fact that most TV programs are recorded on film or videotape and that feature films make up a large percentage of TV programming.
Third, film has enjoyed a prestige that only recently has been accorded to television, and this status marker has encouraged people to view film and television separately. Every culture creates hierarchies of taste and prestige, and whether explicitly stated or implicitly assumed, film has had a higher cultural status than television. It has been a sign of success, for example, when an actor or a director moves out of television into movies. Similarly, film critics have enjoyed much greater prestige than any critic who has written about television. The scholarly field of film studies, and universities in general, were slow to welcome the study of television. All of this suggests that there has been an unrecognized, but nevertheless real, investment in a cultural hierarchy that treats film as a more serious and respectable pursuit than television, and this hierarchy supported the assumption that film and television are separate media. Of course, any hierarchy of cultural values is subject to change over time. When a television series like The Sopranos (beginning 1999) achieves greater critical acclaim than virtually any movie of the past decade, it is a signal that values are shifting.
By the time the networks introduced regular prime-time programs in 1948, television's arrival as a popular medium had been anticipated for nearly two decades, during which the public had followed news reports of scientific breakthroughs, public demonstrations, and political debates. Electronics manufacturers spearheaded research into the technology of television broadcasting, which they envisioned as an extension of the existing system of radio broadcasting, in which stations linked to powerful networks broadcast programs to home receivers. The Radio Corporation of America (RCA), which operated the NBC radio network, dominated the electronics industry and lobbied heavily to see its technology adopted by the Federal Communications Commission (FCC) as the industry standard.
b. Philadelphia, Pennsylvania, 25 March 1924
Sidney Lumet's career began at an extraordinary and unique moment in the history of American television. For a few years during the first decade of television, the TV networks broadcast live theatrical performances from studios in New York and Los Angeles to a vast audience nationwide. These ephemeral productions—as immediate and fleeting as any witnessed in the amphitheaters of ancient Greece, yet staged in the blinding glare of commercial television—served as the training ground for a generation of American film directors, which also included Franklin Schaffner, George Roy Hill, Martin Ritt, Arthur Penn, and John Frankenheimer.
Before beginning a fifty-year movie career, Lumet worked at CBS, where he directed hundreds of hours of live television for such series as Danger (1950–1955), You Are There (1953–1957), Climax! (1954–1958), and Studio One (1948–1958). The craft of directing live television, invented through trial and error by pioneers like Lumet, required economy, speed, and precision: concentrated rehearsals with an ensemble of actors, brief blocking of the camera setups, followed by intense concentration on the moment of performance because retakes were out of the question.
Lumet's approach to filmmaking bears traces of this formative experience. Unlike many directors, Lumet begins each film with several weeks of rehearsal in which he and his actors come to a shared understanding of each scene, to ensure that the actual production runs like clockwork. On the set, Lumet works quickly, seldom shooting more than four takes of any shot. He often completes a shooting schedule in thirty days or less, and brings productions in under budget. In an age of superstar directors who may spend years on a single film, Lumet has worked steadily, building a career, scene by scene, film by film, through classics (Dog Day Afternoon, 1975) and clunkers (A Stranger Among Us, 1992).
Lumet's best films—Serpico (1973), Dog Day Afternoon, Running on Empty (1988), and Prince of the City (1981)—are blunt and immediate. What they lack in formal precision, they make up for in the vitality of the performances and the conviction of the storytelling. Lumet can be a superb visual stylist when orchestrating confrontations between actors in confined spaces, but he is generally indifferent to the visual potential of his material and has never seemed concerned with creating a signature style. His approach to filmmaking, with its emphasis on preparation, ensemble acting, and an unobtrusive camera that captures the spontaneity of performance, translates the values of live television into the medium of film.
Twelve Angry Men (1957), Long Day's Journey Into Night (1962), Fail-Safe (1964), The Pawnbroker (1964), The Hill (1965), Serpico (1973), Murder on the Orient Express (1974), Dog Day Afternoon (1975), Network (1976), Prince of the City (1981), The Verdict (1982), Running on Empty (1988), Q&A (1990)
Bogdanovich, Peter. Who the Devil Made It: Conversations with Legendary Film Directors. New York: Ballantine, 1998.
Cunningham, Frank R. Sidney Lumet: Film and Literary Vision. Lexington: University Press of Kentucky, 1991.
Lumet, Sidney. Making Movies. New York: Knopf, 1995.
The Hollywood studios were far from passive bystanders during this period. Having already invested in radio, only to watch the radio industry fall under the control of the companies able to establish networks, the studios hoped to command the television industry as they had dominated the movie industry: by controlling networks that would serve as the key channels of distribution in television. The studios also envisioned alternative uses for television technology that would conform more closely to the economic exchange of the theatrical box office. These included theater television, in which programs would be transmitted to theaters and shown on movie screens, and subscription television, in which home viewers would pay directly for the opportunity to view exclusive programs.
The plans of studio executives were thwarted by the FCC, which stepped in following the Supreme Court's 1948 Paramount decision to investigate whether the major studios, with their record of monopolistic practices in the movie industry, should be allowed to own television stations. While the studios awaited a decision, the established radio networks—CBS, NBC, and ABC—signed affiliate agreements with the most powerful TV stations in the largest cities, leaving the studios without viable options for forming competitive networks. Blocked in their ambitions, the major studios withdrew from television until the mid-1950s. Theater television died in its infancy, and subscription television would not become a major factor for years to come.
In the meantime, smaller studios and independent producers rushed to supply television with programming. The networks initially promoted the idea that television programs should be produced and broadcast live in order to take advantage of the medium's unique qualities. The networks supplied local affiliates with live programs for their evening schedules and a small portion of their daytime schedule, but each affiliate, along with the small group of independent stations that had chosen not to join a network, still needed to fill the long hours of a broadcast day—and there was not yet a backlog of television programs available. Television stations looked to feature films as the only ready source of programming, and the only features available to them came from outside the major Hollywood studios: British companies and such Poverty Row studios as Monogram Pictures and Republic Pictures Corporation. The theatrical market for B movies had begun to dry up after World War II, and these companies eagerly courted this new market for low-budget films, licensing hundreds of titles for broadcast. It has been estimated that 5,000 feature film titles were available to television by 1950.
Responding to the same demand for programs, small-scale independent producers in Hollywood also began to produce filmed series for television. The most visible early producers in the low-budget "telefilm" business (as it came to be known) were the aging cowboy stars William "Hopalong Cassidy" Boyd (1895–1972), Gene Autry (1907–1998), and Roy Rogers (1911–1998), but they were soon joined by veteran film producers like Hal Roach (1892–1992), radio producers like Frederick W. Ziv (1905–2001), and entrepreneurial performers like Bing Crosby (1903–1977) as well as Lucille Ball (1911–1989) and Desi Arnaz (1917–1986), whose Desilu Studio grew to become one of the most successful television studios of the 1950s.
By mid-decade, as the television audience grew and the demand for programming drove prices higher, the major Hollywood studios discovered their own financial incentives for licensing feature films to television and for entering the field of television production. RKO opened the market for the major studios in 1954 when its owner, Howard Hughes, sold the studio's pre-1948 features to General Teleradio, the broadcasting subsidiary of General Tire and Rubber Company that operated independent station WOR in New York. Warner Bros. followed in 1956 by selling its library of 750 pre-1948 features for $21 million. After this financial windfall was earned from titles locked away in studio vaults, the floodgates opened at all of the studios. Soon the television listings were filled with movies scheduled morning, noon, and night. The most famous of these movie programs was New York station WOR's Million Dollar Movie, which broadcast the same movie five evenings in a row. New York-bred filmmakers like Martin Scorsese have spoken fondly of discovering classic Hollywood movies for the first time while watching the Million Dollar Movie. In a very real sense, television served as the first widely available archive of American movies, sparking an awareness of film history and creating a new generation of movie fans.
As the Hollywood studios began to release their films to television, they also began to produce filmed television series. Walt Disney (1901–1966) led the way in 1954 with the debut of Disneyland (1954–1990), the series designed to launch his new theme park. Warner Bros., Twentieth Century Fox, and MGM joined prime time the following year. By the end of the 1950s, Hollywood studios were the predominant suppliers of prime time programs for the networks. The transformation was most obvious at Warner Bros., which at one point in 1959 had eight television series in production and not a single feature film. In order to meet the demand for television programs, Warner Bros. geared up to produce the equivalent of a feature film each working day.
While the studios specialized in high-volume "telefilm" productions made with the efficiency of an assembly line, the most acclaimed television programs of the decade were anthology drama series that offered a new, original play performed and broadcast live each week. In the intensely creative environment required to produce a live production witnessed by millions of viewers, programs such as Studio One (1948–1958) and Playhouse 90 (1956–1961) served as the training ground for a new generation of writers (Paddy Chayefsky, Reginald Rose, Rod Serling), directors (Arthur Penn, Sidney Lumet, John Frankenheimer, Franklin Schaffner, George Roy Hill), and actors (Paul Newman, Rod Steiger, James Dean, Piper Laurie, Kim Hunter, Geraldine Page, and many more) who became the first in a long line of television-trained artists to make the transition into movies.
Diversifying into television may have seemed risky for a studio in the early 1950s, but within a decade television had become firmly entrenched in Hollywood, where the studios had come to depend for their very existence on the income provided by television. Networks and local stations leaned almost exclusively on Hollywood to satisfy their endless need for programming. By the end of the 1950s, 80 percent of network prime-time programming was produced in Hollywood; it had become nearly impossible to turn on a TV set without encountering a film made in Hollywood, whether a television series or a feature film.
The most significant development for the movie studios occurred in 1960, when they came to an agreement with the Screen Actors Guild that allowed them to sell the television rights to films made after 1948. NBC, the network most committed to color television, introduced Hollywood feature films to prime time in September 1961 with the premiere of the series NBC Saturday Night at the Movies (1961–1977). ABC added movies to its prime time schedule in 1962. As the perennial first-place network with the strongest schedule of regular series, CBS did not feel a need to add movies until 1965. Still, the networks embraced feature films so fervently that by 1968 they programmed seven movies a week in prime time, and four of these finished among the season's highest rated programs.
As recent Hollywood releases became an increasingly important component of prime time schedules, the competition for titles quickly drove up the prices. In 1965 the average price for network rights to a feature film was $400,000, but that figure doubled in just three years. The networks publicized the broadcast premiere of recent studio releases as major events. A milestone of the period occurred in 1966, when ABC paid Columbia $2 million for the rights to the studio's blockbuster hit, The Bridge on the River Kwai (1957). Sponsored solely by Ford Motor Company to promote its new product line, the movie drew an audience of 60 million viewers.
As television became a crucial secondary market for the movie industry, movies needed to be produced with the conditions of commercial television in mind. Many of these concessions to the television industry of the 1960s and 1970s contributed to the impression of the cinema's superiority. In an era when a new generation of filmmakers and critics were promoting the idea that film was an art form, television stations and networks chopped movies to fit into 90- or 120-minute time slots and interrupted them every 12 or 13 minutes for commercials. Because of the moral standards imposed on commercial television by advertisers and the FCC, studios soon required directors to shoot "tame" alternate versions of violent or sexually explicit scenes for the inevitable television version. Studios began to balk when directors used wide-screen compositions in which key action occurred at the edges of the frame—outside the narrower dimensions of the television screen. As a reminder, camera viewfinders were etched with the dimensions of the TV frame. Studios also began to use optical printers to create "pan-and-scan" versions of widescreen films. Using this technique, scenes shot in a single take often were cut into a series of alternating closeups, or reframed during the printing process by panning across the image, so that key action or dialogue occurred within the TV frame.
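The arithmetic behind pan-and-scan is simple to sketch. The following snippet is a hypothetical illustration (not drawn from any studio's actual tooling) of how much of a widescreen frame survives inside the narrower television frame when the image is cropped at full height:

```python
# Hypothetical sketch of pan-and-scan arithmetic: cropping a widescreen
# frame to fit a narrower TV frame at full height shows only a fraction
# of the original image's width at any moment.

def visible_fraction(source_ratio: float, target_ratio: float) -> float:
    """Fraction of the source frame's width visible when a wider source
    (e.g., 2.35:1) is cropped to a narrower target (e.g., 4:3 = 1.33:1)
    while preserving the full image height."""
    return min(target_ratio / source_ratio, 1.0)

# CinemaScope-era widescreen shown on a standard-definition 4:3 set:
print(f"{visible_fraction(2.35, 4 / 3):.0%}")  # about 57% of the width
```

At any instant, roughly 43 percent of a 2.35:1 composition falls outside the 4:3 frame, which is why the pan-and-scan process had to choose continually which portion of the image, and which actors, to keep on screen.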
As the cost of television rights for feature films climbed during the 1960s, each of the networks began to develop movies made expressly for television. NBC partnered with MCA Universal to create a regular series of "world premiere" movies, beginning with Fame Is the Name of the Game in 1966. As the network with the lowest-rated regular series, ABC showed the greatest interest in movies made for television. The ninety-minute ABC Movie of the Week premiered in 1968. As executive in charge of the movies, Barry Diller (b. 1942) essentially ran a miniature movie studio at ABC. He supervised the production of 26 movies per year, each made for less than $350,000. Among the many memorable ABC movies during this period were Brian's Song (1971), a tearjerker about a football player's terminal illness starring Billy Dee Williams and James Caan that became the year's fifth highest-rated broadcast, and That Certain Summer (1972), a TV milestone in which Hal Holbrook and Martin Sheen played a gay couple. By 1973 ABC scheduled a Movie of the Week three nights per week. Director Steven Spielberg, whose suspenseful 1971 film Duel managed to sustain excruciating tension even with the commercial breaks of network television, has become the most celebrated graduate of the made-for-TV movie.
As a market for filmed series, theatrical features, and original movies, television contributed substantially to the economic viability of the movie studios during the 1960s and 1970s. In fact, the television market inspired the first round of consolidation in the movie industry, as the rising value of film libraries made the studios appealing targets for conglomerates looking to diversify their investments. As a subsidiary of the conglomerate Gulf + Western, Paramount became the model for the full integration of the movie and TV industries in the late 1970s, when Barry Diller moved from ABC to Paramount, accompanied by his protégé, Michael Eisner (b. 1942). Paramount produced many of the television series that led ABC to the top of the ratings in the 1970s (Happy Days [1974–1984], Laverne and Shirley [1976–1983], Mork and Mindy [1978–1982], and Taxi [1978–1983]), but it also learned how to leverage the familiarity of TV stars and TV properties to create cross-media cultural phenomena. The signal event in this process was Paramount's successful transformation of John Travolta from a supporting player in the TV series Welcome Back, Kotter (1975–1979) into the star of the blockbuster hits Saturday Night Fever (1977) and Grease (1978). The Diller regime also decided to transform the long-cancelled cult-hit TV series Star Trek (1966–1969) into a movie franchise with Star Trek: The Motion Picture (1979), which revived the commercial prospects for a dormant studio property. The Paramount model spread throughout the industry in the 1980s, as Diller became the chairman of Twentieth Century Fox and Eisner became chairman of Walt Disney Studios.
The first three decades of network television in America represent a period of remarkable stability for the television industry. Once the basic structure of the television industry had been established, the television seasons rolled past with comforting familiarity. However, the rapid growth of cable television and home video in the 1980s, followed by a new round of consolidation in the media industries, disrupted the balance of power in the television industry and led to the complete integration of television networks and Hollywood studios.
Cable television began in the 1940s and 1950s as community antenna television (CATV), a solution to reception problems in geographically isolated towns where people had trouble receiving television signals with a home antenna. The turning point for cable television came during the 1970s, when several corporations began to distribute program services by satellite, making it possible to reach audiences on a national—and eventually international—scale without the need for local affiliate stations. Time, Inc. was the first company to launch a satellite-based service when it began distributing its pay channel Home Box Office (HBO) by satellite in 1975. The service began on a small scale, with only a few hundred viewers for its initial broadcast, but it demonstrated that a subscription service for movies and special events could be a viable economic alternative to commercial broadcasting. By the end of the decade, other subscription-based movie channels, including Showtime, the Movie Channel, and HBO's own spinoff network, Cinemax, had followed suit. With these movie channels, and many other new cable channels, cable service expanded rapidly. In 1978, only 17 percent of American households had cable; by 1989, cable penetration had reached 57 percent. This new market was a boon for the studios, which benefited from the increased prices that accompanied the competition for television rights to recently released films, and also for viewers, who were finally able to see complete, unedited feature films in their homes.
Videocassette recorders (VCRs) became a common feature in American homes during the 1980s. Videotape was introduced in 1956, but it was initially used only within the television industry. Its widespread use by television viewers awaited the development of the videocassette by Sony during the 1970s. The consumer market for home VCRs developed slowly at first because Sony and its rival Matsushita developed incompatible systems (Betamax and VHS, respectively). The market also stalled because of a lawsuit filed in 1976 by Disney and Universal against Sony, charging that home videotaping represented a violation of copyright laws. The issue was settled in Sony's favor by a 1984 Supreme Court decision, and the consumer market for VCRs exploded: in 1982 only 4 percent of American households owned a VCR; by 1988 the figure had reached 60 percent.
As a result of the rise of cable and home video, the motion picture industry developed new release patterns that channeled movies from their debut in theaters to their eventual appearance on television through a carefully managed series of exclusive distribution "windows" designed to squeeze the maximum value from each stage of a movie's lifespan in the video age: theatrical release, home video, pay-per-view, pay cable, basic cable, and broadcast television. By the time a movie has made its way down the chain to broadcast TV, and is available for free to television viewers, it has received so much exposure that it is no longer a form of showcase programming.
As these technological developments shook the familiar patterns of the television and movie industries, a series of regulatory changes governing the television industry and relaxed enforcement of antitrust laws by the Reagan-era Justice Department heated up the media industries, subjecting them to a general trend of mergers and acquisitions that swept through corporate America in the 1980s. This climate gave rise to the series of mergers and acquisitions that saw the Big Three networks change hands in 1985 and 1986, which will be discussed in greater detail below. Regulatory changes also produced a sharp increase in the number of television stations, as corporations invested in chains of stations. In 1970, of the 862 stations in the country, only 82 operated independently of the three networks. The number of independent stations doubled in the 1980s. By 1995 there were 1,532 stations, of which 450 were independent of the three major networks. As the number of stations increased, it became possible to create new television networks.
In 1985, the media conglomerate News Corporation, owned by media tycoon Rupert Murdoch, purchased Twentieth Century Fox Studios. Then in 1986, Murdoch purchased six television stations which served as the foundation for launching the Fox Network, led by former Paramount chairman Barry Diller. Because Fox began by programming just a few nights each week, it technically did not meet the FCC definition of a full-fledged network, and therefore was not constrained by FCC rules that prohibited a network from producing its own programs. As a result, Fox served as the paradigm for a new era in the media industries, with a television network stocked with series produced by its corporate sibling, Twentieth Century Fox Television. Programs like The Simpsons (beginning 1989) and The X-Files (1993–2002) grew into network hits and lucrative commercial franchises within a perfect, closed loop of corporate synergy in which all profits remained within the parent company, News Corporation.
Pointing to the loophole that Fox had squeezed through in order to produce its own programs, the networks lobbied for an end to the FCC rules that had kept them from producing programs or sharing in the lucrative syndication market (where programs are sold to local stations and international markets) since the early 1970s. These Financial Interest and Syndication Rules were gradually repealed between 1991 and 1995. The policy change not only gave networks the opportunity to produce their own programs, but it also eliminated the last remaining barriers separating the movie and television industries. Studios quickly formed new television networks or merged with existing networks. Time Warner's WB Network and Viacom's United Paramount Network (UPN) debuted in 1995 (the two were merged into the CW in 2006). ABC came under the control of the Walt Disney Company when Disney acquired the network's parent company, Capital Cities/ABC, in a $19 billion deal announced in 1995. Viacom purchased CBS in 1999, and NBC merged with Vivendi Universal Entertainment in 2004 to form NBC Universal. In this stage of consolidation, the boundaries between film and television are no longer perceived as barriers; rather, they represent opportunities for diversifying a media conglomerate's product lines.
At the turn of the twenty-first century, the boundaries between the media blurred, thanks to the convergence of digital technologies and consolidation in the media industries. Many filmmakers use digital video in place of film throughout the entire filmmaking process, and it is only a matter of time before movies are distributed and projected in theaters using digital technology. The vast libraries of film and television titles that give the conglomerates much of their economic value are being digitized and stored on computer servers. The latest round of mergers in the media industries has created conglomerates that actively promote cross-media synergy. The enticement of extraordinary riches for anyone fortunate enough to be involved in the creation of a hit TV series means that talent no longer flows from TV to movies; many producers, directors, writers, and performers move eagerly between film and television.
MICHAEL MANN
b. Chicago, Illinois, 5 February 1943
Michael Mann is roughly the same age as Martin Scorsese, Francis Coppola, George Lucas, and the other directors of the film-school generation who revived American filmmaking in the 1970s, but he is seldom thought of as a member of that generation, despite the fact that he too attended film school in the 1960s. Like the romantic loners who inhabit his films, Mann followed his own route to the film industry. He attended film school in London, instead of New York or Los Angeles, and while his peers traveled directly from film school to the movie industry, Mann detoured through television, where he learned his craft by writing for the police series Police Story (1973–1977) and Starsky and Hutch (1975–1979) and then by creating the series Vega$ (1978–1981).
Mann understood the potential for rich storytelling inherent in the series format and appreciated the creative authority of the writer-producer in television. In 1981 he directed his first feature film, the accomplished existential thriller Thief, yet returned to television to produce Miami Vice (1984–1989) and Crime Story (1986–1988), two of the most innovative series in television history. In the tradition of the great auteur directors of the studio era, Mann burrowed deeply into an exhausted genre; beneath the familiar façade of the police series, he discovered the darkest impulses of his age and his own voice as an artist. Returning to film, Mann hit his stride at the turn of the millennium, directing at least two classics (The Last of the Mohicans, Heat) and a number of other films (The Insider, Ali, and Collateral) that express his enduring theme—the challenges faced by a man (it is always a man) who attempts to live by a personal moral code in a capricious, corrupting world.
Mann spent his formative years in television drama during the 1970s, when one police series looked exactly like every other. Yet to accompany his narrative voice, he developed a powerful personal style that is as evident in his television series as in his films. When he returned to television with the unfortunately short-lived Robbery Homicide Division (2002–2003), he shot the entire series on digital video (DV). Other television producers and filmmakers have used DV because it is less expensive than film, or because it is easier to manipulate for post-production effects, but Mann discovered the expressive qualities of the medium's hyperrealism. The television series turned out to be a trial run for Collateral, which used DV to transform nighttime Los Angeles into a throbbing, spectral world. Thanks to a visual aesthetic first worked out in television, Mann was able to create one of the most visually striking movies of the time.
Films: Thief (1981), Manhunter (1986), The Last of the Mohicans (1992), Heat (1995), The Insider (1999), Ali (2001), Collateral (2004); Television Series: Miami Vice (1984–1989), Crime Story (1986–1988), Robbery Homicide Division (2002–2003); Other: AFI—The Director—Michael Mann (2002)
Fuller, Graham. "Making Some Light: An Interview with Michael Mann." In Projections 1, edited by John Boorman and Walter Donahue, 262–278. London: Faber & Faber, 1992.
James, Nick. Heat. London: British Film Institute, 2002.
The two-way migration of talent between movies and television first took off in the 1980s, the decade when the director of a few stylish four-minute music videos on MTV could find himself or herself with a contract to direct a feature film. Advances in television set technology and the reduced cost of larger screens made it possible for viewers to appreciate differences in visual styles on television. For the first time in the history of
television, competition gave producers and networks an incentive to create distinctive styles. The proliferation of cable channels and the habits of viewers armed with remote controls made a distinctive visual style as important as character and setting in creating an identity for a television series.
When critics praised the groundbreaking crime series Hill Street Blues (1981–1987) and Miami Vice (1984–1989) in the 1980s, they spoke not only about the stories but also about stylistic innovations: the documentary techniques of Hill Street Blues, the adaptation of a music video aesthetic in Miami Vice, a series created and produced by Michael Mann (b. 1943), who moved easily between TV and movies. David Lynch made a big splash with Twin Peaks (1990–1991), a series that brought Lynch's unique vision to television before losing focus in its second season.
Since then, directors, writers, and producers have continued to alternate between movies and television. Some directors, such as Oliver Stone (with the miniseries Wild Palms [1993]) and John Sayles (with the series Shannon's Deal [1990–1991]), have made token appearances in television. Others have served as executive producers, including Steven Spielberg (with the miniseries Taken, 2002) and George Lucas (with the series The Young Indiana Jones Chronicles, 1992–1993). Several screenwriters have shifted into television because of the storytelling potential of the series format and the creative control of the writer-producer in television. These include Joss Whedon (Buffy the Vampire Slayer, 1997–2003), Aaron Sorkin (The West Wing, 1999–2006), and Alan Ball (Six Feet Under, 2001–2005). There are several writer-directors who move consistently between film and television, depending on the nature of the project, including Michael Mann, Edward Zwick and Marshall Herskovitz, and Barry Levinson. The most successful producer in Hollywood during this era may be Jerry Bruckheimer, who continues to produce blockbuster hits like Armageddon (1998) and Pirates of the Caribbean (2003), while his company produces the three CSI: Crime Scene Investigation television series for CBS.
In order to attract the young adult viewers most desired by advertisers, television networks must attempt to create programs that attract and reward a discriminating audience. In the past, this audience may have been dissatisfied with commercial networks for interrupting or otherwise interfering with a drama or a movie, but they could only dream of an alternative. Today a flick of the remote control takes them directly to movies and uninterrupted drama series available on HBO and Showtime, collected in DVD box sets, and soon via video-on-demand—all experienced in theater-quality high definition with surround sound. Discerning viewers are still drawn to television, but they have acquired a taste for a viewing experience that is increasingly cinematic. In one portent of the future, the commercial networks have switched to widescreen framing for quality drama series like ER (beginning 1994) and The West Wing.
The experience of watching television at home is becoming more like the experience of watching movies on a big screen. The convergence of digital technologies is gradually eliminating the material distinction between film and video. Media corporations would like to move to a model of video-on-demand in which viewers select individual titles from the studio's library. With these changes on the horizon, it is possible to imagine a time in the not-too-distant future when the differences between film and television will be no more than a topic of historical interest.
Anderson, Christopher. Hollywood TV: The Studio System in the Fifties. Austin: University of Texas Press, 1994.
Balio, Tino, ed. Hollywood in the Age of Television. Boston: Unwin Hyman, 1990.
Caldwell, John Thornton. Televisuality: Style, Crisis, and Authority in American Television. New Brunswick, NJ: Rutgers University Press, 1995.
Hilmes, Michele. Hollywood and Broadcasting: From Radio to Cable. Champaign: University of Illinois Press, 1990.
Monaco, Paul. The Sixties: 1960–1969. Berkeley: University of California Press, 2003.
Mullen, Megan Gwynne. The Rise of Cable Programming in the United States: Revolution or Evolution? Austin: University of Texas Press, 2003.
Prince, Stephen. A New Pot of Gold: Hollywood Under the Electronic Rainbow, 1980–1989. Berkeley: University of California Press, 2002.
Wasko, Janet. Hollywood in the Information Age: Beyond the Silver Screen. Austin: University of Texas Press, 1995.
Wasser, Frederick. Veni, Vidi, Video: The Hollywood Empire and the VCR. Austin: University of Texas Press, 2002.
TELEVISION
This entry includes 2 subentries:
Programming and Influence
Technology

Programming and Influence
By 1947, the American Broadcasting Company (ABC), Columbia Broadcasting System (CBS), the Du Mont Network, and the National Broadcasting Company (NBC) had started regularly scheduling television programs on a small number of stations. Many more channels soon commenced operations, and a TV boom began. By 1960 just under 90 percent of all households had one or more sets. Because most channels had network affiliation agreements—96 percent of all stations in 1960—the networks dominated the medium for over thirty years. (Du Mont ceased operations in 1955.) Especially in the evening, when most Americans watched TV, consumers very likely viewed a network program.
In the late 1940s, relatively few advertisers were prepared to follow the American radio model of producing and underwriting the cost of shows. Within a few years, however, and often by accident, the networks and a few advertisers developed individual programs that sparked interest in the medium. This, in turn, encouraged more companies to advertise on TV.
At first, television betrayed both a class and regional bias. The coaxial cable permitting simultaneous network telecasts did not reach Los Angeles, the center of the nation's motion picture industry and home to most popular entertainers, until September 1951. As a result, most network shows originated from New York. And programs tended to have a New York accent. At the same time, programmers often confused their own, more cosmopolitan, tastes with those of viewers. Network executives assumed audiences wanted culturally ambitious fare, at least some of the time. Some simply believed the TV audience was more educated and well-to-do, despite studies indicating little class bias to set ownership.
In the 1950s, television relied on a variety of program types or "genres." The first was the variety program, telecast live with a regular host. Milton Berle and Ed Sullivan starred in two of the most durable variety hours. Individual sponsors produced "dramatic anthologies," original dramas aired live. Although many TV plays were uneven or pretentious, some proved memorable, notably Marty, which was later remade as a feature film starring Ernest Borgnine. Other program types came from network radio: the dramatic series, situation comedy, and quiz (later game) show. They relied on one of radio's oldest objectives: create a consumer habit of tuning to a specific program every day or week. (Many closed with the admonition, "Same time, same station.") CBS, of the four networks, adhered most dutifully to this model of programming.
The success of CBS's situation comedy I Love Lucy (1951–1957) confirmed the network's strategy. More tellingly, repeats of episodes proved almost as popular. This greatly undermined another broadcast industry "rule": that audiences always wanted original programming, even in the summer when replacement series heretofore had been offered. By the late 1950s, most series were filmed. They had an additional advantage over the live telecast. They could not only be rerun in the summer but then rented or "syndicated" for re-airing by individual stations in the United States and overseas. Lucy, it should be noted, was the single most rerun series in the history of television.
TV's dependency on film accelerated in the late 1950s. ABC banked heavily on filmed action/adventure series—first westerns, then detective dramas—many of which gained large followings. CBS and NBC quickly seized on the trend. During the 1958–1959 season, seven of the ten most popular programs, according to the A. C. Nielsen ratings service, were westerns. Most were considerably more sophisticated than television's earliest westerns, such as Hopalong Cassidy and The Lone Ranger, which were plainly aimed at pre-adolescents. The new "adult" westerns and detective series also possessed higher production values. The large audiences especially for westerns also indicated a change in the television audience, as TV spread into smaller cities and towns in the South and West. Filmed programming satisfied small-town audiences, which, as movie exhibitors had long known, greatly preferred westerns over nightclub comedy or original drama.
By the end of the 1950s, the economics of television had become clear. Networks and stations derived most of their revenues from the sale of time to advertisers. Indeed, the stations that the networks owned were their most profitable properties. Producing successful programs was far more risky—too much for most stations to do extensively. Most new television series failed. Yet a popular program could be a moneymaker in syndication. With this prospect in mind, as well as a wish to wrest control from advertisers, the networks gradually began producing more of their own programming. Government regulations, however, severely restricted network participation in entertainment programming in the 1970s and 1980s.
News programming was the great laggard in early TV. The networks produced fifteen-minute early evening weekday newscasts and telecast special events, including the national party conventions and presidential inaugurations. Informational shows were considered "loss leaders," presented to satisfy TV critics and federal regulators. The Federal Communications Commission (FCC) assigned TV licenses, including the limited number that the agency permitted the networks to own. The FCC expected each license holder to devote a small proportion of its schedule to "public interest" programming, including news. Under no pressure to win audiences, news program producers had great latitude in story selection. That said, TV news personnel tended to be political centrists who took their cues from colleagues working at the prestigious newspapers.
For all its shortcomings, early television news had one great journalist, Edward R. Murrow of CBS. Revered for his radio coverage of World War II, Murrow coproduced and hosted the documentary series See It Now, beginning in 1951. Although widely praised and courageous in its treatment of domestic anti-Communism, See It Now never won a large audience. His less critically admired interview program, Person to Person, was far more popular and, indeed, anticipated similar, more celebrity-centered efforts by Barbara Walters of ABC several decades later.
In the early 1960s, NBC and CBS began pouring more of their energies into their early evening newscasts, lengthening them from fifteen to thirty minutes in 1963. (ABC did not do so until 1967 and waited another decade before investing substantially in news.) The early evening newscast strategy reflected the "habit" rule of broadcasting, while proving very profitable. Although audiences did not equal those for entertainment shows later in the evening, the nightly newscasts drew enough viewers to interest advertisers. Similarly successful was NBC's Today show, which premiered in 1952. Aired in the early morning for two hours, Today offered a mix of news and features. ABC eventually developed a competitor, Good Morning America.
In the late 1950s and 1960s, all three networks occasionally produced documentaries, usually an hour long, that explored different public issues. Although they rarely had impressive ratings, documentaries mollified critics and regulators dismayed by the networks' less culturally ambitious programming. The opportunity costs (the value of goods or services that one must give up in order to produce something) of airing documentaries, however, grew with heightened advertiser demand for popular series in the late 1960s. The networks quietly reduced their documentary production. Although most TV critics were dismayed, the FCC, which had earlier encouraged such programming, said nothing. Partly relieving the networks of their former obligations was the Public Broadcasting Service (PBS), created by Congress in 1969. Although chronically underfinanced, PBS managed to produce some public affairs and informational programming, once the preserve of the commercial networks. The commercial network documentary had all but vanished by 1980.
In its place came a new type of news show. CBS's 60 Minutes, which debuted in 1968, was the trendsetter. The documentary's great weakness, according to 60 Minutes producer Don Hewitt, was its slow pacing. Largely because of its devotion of an hour or more to one "serious" issue like German unification, it bored the majority of viewers. Hewitt wanted to make news programming engaging. "Instead of dealing with issues we [will] tell stories," he remarked (Richard Campbell, 60 Minutes and the News, p. 3). And he determined to mix it up. On 60 Minutes, no single topic would absorb more than a quarter hour. The topics covered, in turn, would vary to attract as many in the audience as possible. 60 Minutes came to be known as the first TV "magazine," and it eventually nurtured a large following. Indeed, it became the first news program to compete successfully with entertainment series in evening prime time.
All three networks found airing newsmagazines irresistible. They were considerably cheaper than entertainment programming, and the network could own and produce the program itself rather than pay fees to an independent company. (At the time, the FCC limited network ownership of entertainment programs.) This meant higher profits, even if a 60 Minutes imitator accrued smaller ratings than a rival entertainment series.
The tone of network news changed over time. In the 1950s and early 1960s, TV news programs tended to be almost stenographic. A network newscast report on a cabinet secretary's speech was largely unfiltered. This approach had several explanations. Excessively critical coverage might upset federal regulators. Then, too, broadcast news people tended to share in many of the assumptions of newsmakers, especially in regard to the Cold War with the Soviet Union. Television's coverage of America's involvement in Vietnam, especially during the escalation of U.S. participation (1963–1967), was hardly hostile. Nor was TV's combat footage especially graphic. Still, the inability of the U.S. military to secure South Vietnam, despite repeated claims of progress, shattered the Cold War consensus while fostering a new skepticism toward those in power. So did the attempts by the Nixon administration to cover up scandals associated with the Watergate break-in of 1972. The networks did not cover the Watergate affair as searchingly as some newspapers, the Washington Post or Los Angeles Times, for example. Yet the scandals further damaged relations between government officials and network TV news correspondents. But correspondents had not become leftist ideologues, as many conservatives assumed; network reporters' politics remained strikingly centrist. Rather, TV correspondents tended to mediate government news more warily—regardless of which party controlled the executive branch. Network TV news also became more correspondent-centered. The reporter's interpretation of an announcement—not the announcement itself—dominated most network news accounts.
Still, in times of grave national crisis, network newscasters self-consciously assumed a special role. After the assassination of John F. Kennedy in 1963 and the resignation of Richard M. Nixon in 1974, television journalists sought to reassure and unite the nation. The sociologist Herbert J. Gans dubbed this the "order restoration" function of the national news media. The terrorist attacks of September 2001 prompted a similar response, as well as demonstrations of patriotism not seen on television news since the early Cold War.
Local news programming became especially important to individual stations. Stations initially aired news programs as a regulatory concession. Most followed the networks in expanding their newscasts beyond fifteen minutes in the 1960s. Newscasts were of growing interest to advertisers and became the single most profitable form of local programming. Stations extended the length and frequency of their newscasts. Production values and immediacy increased as stations switched from film to videotape for their stories. As the competition among stations for ratings grew, the news agenda changed. Little time went to serious issues—which were often difficult to capture visually—as opposed to features, show-business news, and, in larger markets, spectacular fires and crimes.
Sporting events had long been a convenient means of filling the schedule. Because their audiences were disproportionately male, however, most sports telecasts could not command the same ratings as popular entertainment series, except for the championship series in baseball and the National Football League (NFL). Moreover, in airing sporting contests, television played favorites. Football proved to be the most "telegenic" sport, and began luring viewers on Sunday afternoons, which had long been considered a time when people would not watch television. Professional football broke another rule by achieving ratings success in prime time, with the debut of Monday night NFL telecasts on ABC in 1970. Cable television in the 1980s and 1990s created more outlets devoted to sports.
With a cable connection, subscribers could improve their TV's reception and greatly increase their programming choices. In the 1980s, the non-cable viewer could select from seven channels; the cable home had thirty-three. More and more consumers preferred to have more options, which multiplied in the 1990s. In the late 1980s, cable reached about half of all households. A decade later, just under 70 percent of all homes had cable.
Although cable offered an extraordinary range of choices, viewer preferences were strikingly narrow. Channels playing to certain specialized tastes enjoyed the greatest success. Eight of the fifteen most watched cable telecasts in the week of 17–23 December 2001 were on Nickelodeon, which programmed exclusively for young children. Professional wrestling and football programs accounted for five more of the most watched telecasts that week.
With cable's spread, the networks saw their share of the evening audience fall from 90 percent in the mid-1970s to just over 60 percent twenty years later. The network early evening newscasts suffered even larger declines. The creation of all-news cable channels, beginning with the Cable News Network (CNN) in 1980, ate away at the authority of the network news programs. Still, CNN's effects should not be overstated. Except during a national crisis, relatively few watched CNN. Entertainment cable channels actually posed the larger problem. The availability of such channels gave viewers alternatives to the newscasts they had not previously had.
All in all, cable had contradictory effects on the networks. News producers, anxious to retain audiences, made their newscasts' agenda less serious and more fixated on scandal (a trend also explained by the end of the Cold War). At the same time, entertainment programs, similarly losing viewers to cable, became more daring. This was not because cable programs, with a few exceptions on pay cable services, violated moral proprieties. Many cable channels aired little other than reruns of network programs and old feature films. For the networks, however, only a more relaxed standard could hold viewers, especially younger ones. While still voluntarily honoring some moral strictures, television series handled violence and sexual relations with a realism unimaginable a generation earlier. Old prohibitions against the use of profanity and nudity were partially relaxed.
No network hurried this trend along more enthusiastically than Fox. Formed in 1986, Fox carried a number of comedies, action dramas, and reality shows (When Good Pets Go Bad), some of which consciously crossed mainstream boundaries of good taste. Fox owner Rupert Murdoch, an Australian publisher of tabloid newspapers, lacked the self-conscious sensibility of his older rivals.
Fox's rise coincided with the relaxation of federal regulations. Between the 1920s and 1970s, the relative scarcity of on-air channels justified government oversight of broadcasting. The radio spectrum only permitted so many stations per community. With cable eliminating this rationale, the FCC in the 1980s systematically deregulated broadcasting. In the late twentieth century, television license holders aired news programs to make money, not to please federal officials. Congress approved this course, and the 1996 Telecommunications Act weakened remaining FCC rules limiting the number of stations that networks and others could own.
Institutional Impacts of Television
The nation's established mass media—radio, films, and newspapers—reacted differently to television's sudden presence in the American home. Radio felt the effects first, as audiences for radio programs, particularly in the evening, dropped sharply in the first half of the 1950s. Radio's relative portability allowed some recovery, especially with the development of the transistor. Then, too, in the 1950s, most Americans only owned one television. Those unhappy with what another family member insisted on watching could listen to a radio elsewhere in the house. Moreover, radio could be a diversion for those doing the dishes or cleaning a room. At the same time, radio listening while driving became much more common as more automobiles were equipped with radios, and the percentage of Americans who owned cars increased. In addition, some radio stations broke with an older industry tradition by targeting a demographic subgroup of listeners, specifically, adolescents. Stations hired disc jockeys who continuously played rock and roll music. Television stations and networks could only offer a few programs tailored to teens. Advertisers prized their parents more. Radio, in that regard, anticipated the direction of television's competitors after the 1960s. Radio stations continued to narrow their formats by age, race, and politics.
Television presented an enormous challenge to the film industry. Theater attendance dropped sharply in the late 1940s and early 1950s; however, box office receipts were declining even before television arrived in many communities. With marginal theaters closing, the studios responded by reducing the number of movies produced per year. To compete with TV, more films had elaborate special effects and were produced in color. (Not until 1972 did most homes have color televisions.) The collapse of film censorship in the mid-1960s gave Hollywood another edge: violence and sexual situations could be portrayed with an unprecedented explicitness that TV producers could only envy.
Although most large studios at first resisted cooperating with the television networks, by the mid-1950s virtually every movie company was involved in some TV production. With some exceptions, most of Hollywood's initial video work resembled the old "B" movie, the cheaper theatrical release of the 1930s and 1940s produced as the second feature for a twin billing or for the smaller theaters, most of which had ceased operations in the late 1950s. In the late 1960s, motion picture firms began producing TV movies, that is, two-hour films specifically for television. At first, they were fairly cheaply mounted and forgettable. But a few had enormous impact. ABC's Roots, telecast in 1977, chronicled the history of an African American family and prompted a new appreciation for family history. Although the TV films remained popular through the 1980s, higher costs caused the networks to lose their enthusiasm for the genre, which all but disappeared from the small screen in the 1990s.
No major mass medium responded more ineffectively to the challenge of television than newspapers. For more than two decades, newspaper publishers refused to regard TV as a threat to their industry. Indeed, the diffusion of television did not initially affect newspaper circulation. In the long run, however, TV undermined the daily newspaper's place in American life. As "baby boomers," those Americans born between 1946 and 1963, reluctantly entered adulthood, they proved less likely to pick up a paper. If they did, they spent less time reading it. Publishers belatedly responded by making their papers more appealing to a generation raised with television. They shortened stories, carried more pictures, and used color. Assuming, not always correctly, that readers already knew the headlines from television, editors insisted that newspaper stories be more analytical. Yet they were losing the war. The more interpretive journalism failed to woo younger readers, while many older readers deemed it too opinionated. Although Sunday sales were fairly stable, daily circulation per household continued to drop.
Like many newspaper publishers, America's political class only slowly recognized television's impact. John F. Kennedy's video effectiveness during the 1960 presidential campaign, however, changed many minds, as did some powerful television political spots by individual candidates later in the decade. TV advertising became an increasingly common electoral weapon, even though its actual impact was debated. Nevertheless, to candidates and their consultants, the perception that television appeals could turn an election mattered more than the reality. And, as the cost of television spots rose, so did the centrality of fundraising to politicians. TV, in that regard, indirectly contributed to the campaign finance problem besetting both political parties by making their leaders more dependent on the monies of large corporations and their political action committees.
Advertisers of goods and services, and not political candidates, were far and away commercial television's greatest patrons. (Political campaigns accounted for 7 percent of all advertising spending—print as well as video—in 1996.) During TV's first decade, sponsors had great power. They typically underwrote entire programs and often involved themselves in aspects of the production. They sought product placement on the set, and sometimes integrated the middle commercial into the story. They also censored scripts. For example, a cigarette manufacturer sponsoring The Virginian forbade a cast member from smoking a cigar on camera.
In the early 1960s, sponsors lost their leverage. The involvement of some in the rigging of popular quiz shows had embarrassed the industry. Members of Congress and others insisted that the networks, and not sponsors, have the ultimate authority over program production (a power the networks themselves had long sought). Concomitantly, more advertisers wanted to enter television, creating a seller's market. Then, too, as the costs of prime time entertainment series rose, so did the expense of sole sponsorship. Advertisers began buying individual spots, as opposed to entire programs. The new economics of television, even more than the fallout over the quiz scandals, gave the networks sovereignty over their schedules. Yet the entry of so many more potential sponsors, demanding masses of viewers, placed added pressure on the networks to maximize their ratings whenever possible. Networks turned away companies willing to underwrite less popular cultural programming, such as The Voice of Firestone, because more revenue could be earned by telecasting series with a wider appeal.
The popularity of cable in the 1980s and 1990s marked a new phase in advertiser-network relations. The "niche marketing" of cable channels like MTV and Nickelodeon greatly eased the tasks of advertising agencies' media buyers seeking those audiences. The networks, on the other hand, confronted a crisis. Although willing to continue to patronize network programs, advertisers made new demands. These did not ordinarily involve specific production decisions, whether, for instance, a character on a sitcom had a child out of wedlock. Instead, media buyers had broader objectives. No longer did they focus exclusively on the size of a program's audience; they increasingly concerned themselves with its composition. A dramatic series like Matlock had a large audience, but a graying one. Friends and Melrose Place, on the other hand, drew younger audiences. Advertisers assumed that younger consumers were far more likely to try new products and brands. Increasingly in the 1990s, the demographics of a series' audience determined its fate. This left viewers not in the desired demographic group in the wilderness of cable.
Balio, Tino, ed. Hollywood in the Age of Television. Boston: Unwin Hyman, 1990.
Baughman, James L. The Republic of Mass Culture: Journalism, Filmmaking, and Broadcasting in America since 1941. 2d ed. Baltimore: Johns Hopkins University Press, 1997.
Bernhard, Nancy E. U.S. Television News and Cold War Propaganda, 1947–1960. Cambridge, U.K.: Cambridge University Press, 1999.
Bogart, Leo. The Age of Television: A Study of Viewing Habits and the Impact of Television on American Life. 3d ed. New York: Frederick Ungar, 1972.
Hallin, Daniel C. We Keep America on Top of the World: Television Journalism and the Public Sphere. London and New York: Routledge, 1994.
———. The "Uncensored War": The Media and Vietnam. New York: Oxford University Press, 1986.
Mayer, Martin. About Television. New York: Harper and Row, 1972. The best, most thoughtful journalistic account of the television industry before the cable revolution.
O'Connor, John E., ed. American History/American Television: Interpreting the Video Past. New York: Frederick Ungar, 1983.
Stark, Steven D. Glued to the Set: The Sixty Television Shows and Events That Made Us Who We Are Today. New York: Free Press, 1997.
Television is the process of capturing photographic images, converting them into electrical impulses, and then transmitting the signal to a decoding receiver. Conventional transmission is by means of electromagnetic radiation, using the methods of radio. Since the early part of the twentieth century, the development of television in the United States has been subject to rules set out by the federal government, specifically the Federal Communications Commission (FCC), and by the marketplace and commercial feasibility.
Image conversion problems were solved in the latter part of the nineteenth century. In 1873 the English engineer Willoughby Smith noted the photoconductivity of the element selenium: its electrical resistance fluctuated when exposed to light. This discovery started the search for a method to change optical images into electric current, and simultaneous developments in Europe eventually led to a variety of mechanical, as opposed to electronic, methods of image transmission.
In 1884 German engineer Paul Nipkow devised a mechanical scanning system using a set of revolving disks in a camera and a receiver. This converted the image by transmitting individual images sequentially as light passed through small holes in the disk. These were then "reassembled" by the receiving disk. The scanner, called a Nipkow disk, was used in experiments in the United States by Charles F. Jenkins and in England by John L. Baird to create a crude television image in the 1920s. Jenkins began operating in 1928 as the Jenkins Television Corporation near Washington, D.C., and by 1931 nearly two dozen stations were in service, using low-definition scanning based on the Nipkow system.
In the 1930s, American Philo T. Farnsworth, an independent inventor, and Vladimir K. Zworykin, an engineer with Westinghouse and, later, the Radio Corporation of America (RCA), were instrumental in devising the first workable electronic scanning system. Funding, interference from competitors, and patent issues slowed advances, but Farnsworth came out with an "image dissector," a camera that converted individual elements of an image into electrical impulses, and Zworykin developed a similar camera device called the iconoscope. Although Zworykin's device was more successful, in the end collaboration and cross-licensing were necessary for commercial development of television.
By 1938, electronic scanning systems had overtaken, or in some cases incorporated elements of, mechanical ones. Advancements made since the early 1900s in the United States, Europe, and Russia by Lee De Forest, Karl Ferdinand Braun, J. J. Thomson, A. A. Campbell Swinton, and Boris Rosing contributed to the commercial feasibility of television transmission. Allen B. DuMont's improvements on the cathode-ray tube in the late 1930s set the standard for picture reproduction, and receivers (television sets) were marketed in New York by DuMont and RCA. The cathode-ray tube receiver, or picture tube, contains electron beams focused on a phosphorescent screen. The material on the screen emits light of varying intensity when struck by the beam, which is controlled by the signal from the camera, reproducing the image on the tube screen in horizontal and vertical lines—the more lines, the more detail. The image is refreshed at a rate of roughly 25 to 30 complete images per second, giving the viewer the perception of motion as effectively as in motion pictures.
Early Commercial Broadcasting
In 1939, the National Broadcasting Company in New York provided programming focused on the New York World's Fair. During the 1930s, RCA president David Sarnoff, a radio programming pioneer, developed research on programming for television, which was originally centered on public events and major news stories. In late 1939, the FCC adopted rules to permit the collection of fees for television services, in the form of sponsored programs. In the industry, the National Television Systems Committee (NTSC) was formed to adopt uniform technical standards. Full commercial program service was authorized by the FCC on 1 July 1941, with provisions that the technical standard be set at 525 picture lines and 30 frames per second. After more than forty years of experimentation, television was on the brink of full commercial programming by the beginning of World War II (1939–1945). After World War II, a television broadcasting boom began and the television industry grew rapidly, from programming and transmitting ("airing") to the manufacturing of standardized television sets.
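The 1941 NTSC figures cited above (525 picture lines, 30 frames per second) imply how quickly the cathode-ray beam must trace the screen. A minimal back-of-the-envelope sketch, with the constants taken from the standard and the helper name our own illustration:

```python
# Scan-rate arithmetic for the 1941 NTSC monochrome standard.
# The line and frame counts come from the standard itself; the
# function name is illustrative, not part of any specification.

LINES_PER_FRAME = 525
FRAMES_PER_SECOND = 30

def horizontal_scan_rate(lines_per_frame: int, frames_per_second: int) -> int:
    """Number of picture lines the electron beam must trace each second."""
    return lines_per_frame * frames_per_second

print(horizontal_scan_rate(LINES_PER_FRAME, FRAMES_PER_SECOND))  # 15750
```

At 525 lines and 30 frames per second, the beam sweeps 15,750 lines every second, which suggests why electronic scanning displaced the revolving Nipkow disks of the mechanical era.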
The development of color television was slower. Color television used the same technology as monochromatic (black and white), but was more complex. In 1940, Peter Goldmark demonstrated a color system in New York that was technically superior to its predecessors, going back to Baird's 1928 experiments with color and Nipkow disks. But Goldmark's system was incompatible with monochromatic sets. The delay in widespread use of color television had more to do with its compatibility with monochromatic systems than with theoretical or scientific obstacles. By 1954, those issues had been resolved, and in 1957 the federal government adopted uniform standards. For most Americans, however, color televisions were cost-prohibitive until the 1970s.
The Future of Television
The last three decades of the twentieth century were filled with as many exciting advancements in the industry as were the first three. Projection televisions (PTVs) were introduced, both front- and rear-projection, with screens as large as 7 feet. Videotape, which had been used by broadcasters since the 1950s, was adapted for home use, either for home video cameras or for recording programmed broadcasting (by the 1980s videocassette recorders—VCRs—were nearly as common as TVs). Cable television and satellite broadcasting began to make inroads into the consumer market, and in the early 2000s digital videodiscs (DVDs) began to replace videotape cassettes as a consumer favorite. Also in the 1970s, advancements were made in liquid crystal display (LCD) technology that eventually led to flatter screens and, in the 1990s, to plasma display panels (PDPs) that allowed for screens over a yard wide and just a few inches thick.
The 1990s brought about a revolution in digital television, which converts analog signals into a digital code (1s and 0s) and provides a clearer image that is less prone to distortion (though errors in transmission or retrieval may result in no image at all, as opposed to a less-than-perfect analog image). First developed for filmmakers in the early 1980s, high-definition television (HDTV) uses around 1,000 picture lines and a wide-screen format, providing a sharper image and a larger viewing area. Also, conventional televisions have an aspect ratio of 4:3 (screen width to screen height), whereas wide-screen HDTVs have an aspect ratio of 16:9, much closer to that of motion pictures.
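The aspect-ratio comparison above can be made concrete with a little arithmetic. A minimal sketch using exact fractions (the function name is our own illustration, not part of any broadcast standard):

```python
from fractions import Fraction

def picture_height(width: int, aspect: Fraction) -> Fraction:
    """Picture height implied by a given width and a width:height aspect ratio."""
    return Fraction(width) / aspect

conventional = Fraction(4, 3)   # conventional television
widescreen = Fraction(16, 9)    # wide-screen HDTV, close to motion pictures

# For two screens of equal width (48 arbitrary units):
print(picture_height(48, conventional))  # 36
print(picture_height(48, widescreen))    # 27
```

For the same width, the 16:9 picture is shorter than the 4:3 picture (27 versus 36 units here), which is why wide-screen material shown on a conventional set appears letterboxed, with blank bars above and below the image.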
Since the late 1980s, the FCC has been aggressively advocating the transition to digital television, largely because digital systems use less of the available bandwidth, thereby creating more bandwidth for cellular phones. Based on technical standards adopted in 1996, the FCC ruled that all public television stations must be digital by May 2003, considered by many to be an overly optimistic deadline. As with the development of color television, the progress of HDTV has been hampered by compatibility issues. The FCC ruled in 1987 that HDTV standards must be compatible with existing NTSC standards. By 2000, however, the focus for the future of HDTV had shifted to its compatibility and integration with home computers. As of 2002, HDTV systems were in place across the United States, but home units were costly and programming was limited.
Ciciora, Walter S. Modern Cable Television Technology: Videos, Voice, and Data Communications. San Francisco: Morgan Kaufmann, 1999.
Federal Communications Commission. Home page at http://www.fcc.gov
Fisher, David E. Tube: The Invention of Television. Washington, D.C.: Counterpoint, 1996.
Gano, Lila. Television: Electronic Pictures. San Diego, Calif.: Lucent Books, 1990.
Trundle, Eugene. Guide to TV and Video Technology. Boston: Newnes, 1996.
Sections within this essay:
Background
Federal Regulation of Licenses, Content, and Advertising
Regulation of Television Broadcast Licenses
Content Regulation: The Fairness Doctrine
Content Regulation: Rules Underlying the Fairness Doctrine
Content Regulation: Obscene, Profane, and Indecent Broadcasts
Regulation of Advertising
Children and Television
Federal Communications Commission
National Telecommunications and Information Administration
American businesses pour billions of dollars each year into marketing their services and products on television. Transmitted to viewers through electromagnetic airwaves, satellite feeds, optical fibers, and cable lines, television programming often transcends state lines. The interstate character of this commercial activity brings regulation of television within the purview of the Commerce Clause of the U.S. Constitution. U.S.C.A. Const. Art. I, section 8, cl. 3. Under the Commerce Clause, federal courts have ruled that Congress has the power to regulate "radio communications," including the power to control the number, location, and activities of broadcasting stations around the country.
Pursuant to this power, Congress passed the Communications Act of 1934, which expanded the definition of "radio communication" to include "signs, signals, pictures, and sounds of all kinds, including all instrumentalities, facilities, apparatus, and services … incidental to such transmission." With the advent of television in the late 1930s and its growth in popularity during the 1940s and 1950s, "radio communication" was eventually interpreted to encompass television broadcasts as well.
The rapid growth of telecommunications also prompted Congress to create the Federal Communications Commission (FCC), an executive branch agency charged with overseeing the telecommunications industry in the United States. The FCC has exclusive jurisdiction to grant, deny, review, and terminate television broadcast licenses. The FCC is also responsible for establishing guidelines, promulgating regulations, and resolving disputes involving various broadcast media. The FCC does not, however, typically oversee the selection of programming that is broadcast. There are exceptions to this general rule, including limits on indecent programming, the number of commercials aired during children's programming, and rules involving candidates for public office. Five commissioners, appointed by the president and confirmed by the Senate, direct the FCC. Commissioners are appointed for five-year terms; no more than three may be from one political party. Within the FCC, the Media Bureau develops, recommends, and administers the policy and licensing programs relating to electronic media, including cable and broadcast television in the United States and its territories.
The FCC enacts and enforces regulations addressing competition among cable and satellite companies and other entities that offer video programming services to the general public. This jurisdiction includes issues such as:
- Mandatory carriage of television broadcast signals
- Commercial leased access
- Program access
- Over-the-air reception devices
- Commercial availability of set-top boxes
- Accessibility of closed captioning and video description on television programming
In 1978 Congress established the National Telecommunications and Information Administration (NTIA) to serve as the policy arm for federal regulation of telecommunications. Together with the FCC, the NTIA formulates and presents official White House positions on a variety of domestic and international telecommunication-related issues.
Federal regulation of television broadcasting preempts any conflicting state or local regulation. However, the federal government's power to regulate television is not absolute. In regulating television, both Congress and the FCC must do so to advance the public interest. Congress and the FCC also must be sensitive to First Amendment concerns. Television broadcast companies are entitled to exercise robust journalistic freedom that is consistent with the right of the public to participate in a diverse marketplace of ideas, a marketplace that itself is tempered by appropriate social, political, esthetic, moral, and cultural values.
The Communications Act of 1934 confers upon the FCC the sole authority to examine applications for television broadcast licenses and to grant, refuse, or revoke them as the public interest, convenience, or necessity requires. Each license granted for the operation of a television station lasts for a term not to exceed eight years and may be renewed for additional terms of up to eight years, each measured from the expiration date of the preceding license.
Pursuant to provisions in the Telecommunications Act of 1996, television in the United States must convert from analog signal broadcast to digital signal. During the transition period, the FCC has temporarily assigned each television station a second station to broadcast the digital signal, while continuing to broadcast the analog on the original channel. Total conversion is expected to be completed in 2006, unless the FCC approves an extension. The FCC is not accepting any applications for new stations until television broadcasting has completed the conversion to digital.
The FCC has broad discretion to establish the qualifications for applicants seeking a television broadcast license and for licensees seeking renewal. The FCC has exercised this discretion to prescribe an assortment of qualifications relating to citizenship, financial solvency, technical prowess, moral character, and other criteria the commission has deemed relevant to determine the fitness of particular applicants to run a television station. The FCC will also compare the programming content proposed by an applicant to the content of existing programming. The FCC favors applicants who will make television entertainment more diverse and competitive.
To limit the concentration of power in television broadcast rights, the FCC has promulgated rules restricting the number of television stations that a licensee may operate. An applicant who has reached the limit may seek an amendment, waiver, or exception to the rule, and no licensee may be denied an additional license until he or she has been afforded a full hearing on the competing public interests at stake. Applicants or licensees who are dissatisfied with a decision issued by the FCC may seek review from the U.S. Court of Appeals for the District of Columbia Circuit, which has exclusive jurisdiction over appeals concerning FCC decisions granting, denying, modifying, or revoking television broadcast licenses. Decisions rendered by the appellate court may be appealed to the U.S. Supreme Court.
The FCC is authorized to assess and collect a schedule of license fees, application fees, equipment approval fees, and miscellaneous regulatory assessments and penalties to cover the costs of its enforcement proceedings, policy and rulemaking activities, and user information services. The commission may establish these charges and review and adjust them every two years to reflect changes in the Consumer Price Index. Failure to timely pay a fee, assessment, or penalty is grounds for dismissing an application or revoking an existing license.
The original rationale for federal regulation of telecommunications was grounded in the finite number of frequencies on which to broadcast. Many Americans worried that if Congress did not exercise its power over interstate commerce to fairly allocate the available frequencies to licensees who would serve the public interest, then only the richest members of society would own television broadcast rights and television programming would become one-dimensional, biased, or slanted. Only by guaranteeing a place on television for differing opinions, some Americans contended, would the truth emerge in the marketplace of ideas. These concerns manifested themselves in the fairness doctrine.
First fully articulated in 1949, the fairness doctrine had two parts: it required broadcasters to (1) cover vital controversial issues in the community; and (2) provide a reasonable opportunity for the presentation of contrasting points of view. Violation of the doctrine could result in a broadcaster losing its license. Not surprisingly, licensees grew reluctant to cover controversial stories out of fear of being punished for not adequately presenting opposing views. First Amendment advocates decried the fairness doctrine as chilling legitimate speech. The doctrine came under further scrutiny in the 1980s when the explosion of cable television stations dramatically expanded the number of media outlets available.
In 1987 the FCC abolished the fairness doctrine by a 4-0 vote, concluding that the free market and not the federal government is the best regulator of news content on television. Individual media outlets compete with each other for viewers, the FCC said, and this competition necessarily involves establishing the accuracy, credibility, reliability, and thoroughness of each story that is broadcast. Over time the public weeds out news providers that prove to be inaccurate, unreliable, one-sided, or incredible.
Despite the death of the fairness doctrine in 1987, two underlying rules that were developed during its existence remained in effect for another 13 years: the personal attack rule and the political editorial rule. The personal attack rule required broadcast licensees to notify persons who were maligned or criticized during their station's coverage of a controversial public issue and to allow the attacked persons to respond over the licensees' airwaves. If the attack was made upon the honesty, character, or integrity of another person, the licensee was required to provide a script or tape of the attack to the person identified and to give that person a reasonable opportunity to respond. The political editorial rule afforded political candidates notice of and opportunity to respond to editorials opposing them or endorsing another candidate.
The personal attack and political editorial rules fell by the wayside in 2000, when the Court of Appeals for the District of Columbia ordered the FCC to either provide a detailed justification for their continued application or abandon them. Initially, the FCC suspended the rules on a temporary basis, but later formally repealed both rules.
Proponents of both the personal attack and political editorial rules, as well as the fairness doctrine, have sometimes called for reinstatement. For example, during the 2004 presidential campaign, a furor erupted when some stations decided to broadcast "Stolen Honor", a documentary critical of presidential candidate John Kerry. However, none of the rules have been reinstated.
Although the demise of the Fairness Doctrine and its underlying rules have given broadcasters greater control over the content of their programming, broadcasters still may not discriminate among candidates for public office. Once a broadcaster permits one candidate for public office to use its facilities, it must afford equal opportunities to all other candidates for the same office. Broadcast stations that willfully or repeatedly fail to provide a legally qualified candidate for elective office reasonable access to their airwaves may subject themselves to sanctions, including revocation of their licenses. The FCC "equal time" provisions apply only to the candidates themselves and not to appearances made by campaign managers or other supporters. The determination of what constitutes a legally qualified candidacy is made by reference to state law.
Within the universe of First Amendment protection, broadcast radio and television stations have been subjected to greater regulation than any other verbal, visual, or printed medium of expression. The licensing process by itself gives the federal government more power over the content of television and radio broadcasts than it has over any print medium. Radio and television stations have been required to carry public service messages that they might not otherwise have chosen to carry, and they have been subjected to censure for broadcasting materials that would not have been punishable if they had been published in another medium.
The United States Code prohibits the broadcast of any material that is "obscene, indecent, or profane," but offers no definition for those terms. Instead, that task is left to the FCC through its rulemaking and adjudicatory functions. Essentially, it is illegal to air obscene programming at any time. To determine what is obscene, the U.S. Supreme Court crafted a three-prong test:
- An average person, applying contemporary community standards, would find that the material, as a whole, appeals to the prurient interest
- The material depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable law
- The material, taken as a whole, lacks serious literary, artistic, political, or scientific value
Federal law also prohibits the broadcast of indecent programming or profane language during certain hours. According to the FCC, indecent programming involves patently offensive sexual or excretory material that does not rise to the level of obscenity. Indecent material cannot be barred entirely, because it is protected by the First Amendment. The FCC has promulgated a rule that bans indecent broadcasts between the hours of 6:00 a.m. and 10:00 p.m. The FCC defines profanity as "including language so grossly offensive to members of the public who actually hear it as to amount to a nuisance." Profanity is also barred from broadcast between 6:00 a.m. and 10:00 p.m.
In 1978 in FCC v. Pacifica Foundation, the U.S. Supreme Court upheld an FCC order finding that a pre-recorded satirical monologue constituted indecent speech with the repeated use of seven "dirty words" during an afternoon broadcast. The Supreme Court acknowledged that the monologue was not obscene and thus could not have been regulated had it been published in print. But the Court distinguished broadcast media from print media, pointing out that radio and television stations are uniquely pervasive in Americans' lives, and are easily accessible by impressionable children who can be inadvertently exposed to offensive materials without adult supervision. Print media, the Court said, do not intrude upon Americans' privacy to the same extent or in the same manner. Thus, the Court concluded that the FCC could regulate indecent speech on radio and television but cautioned that the commission must do so in a manner that does not completely extinguish such speech.
When a station airs obscene, indecent, or profane material, the FCC may revoke the station's license, impose a monetary forfeiture, or issue a warning. One of the highest-profile cases in recent years followed the halftime performance by Janet Jackson and Justin Timberlake at the 2004 Super Bowl. In August 2004, the FCC ordered CBS Broadcasting to pay $550,000 for its broadcast of indecent material. In all, the FCC issued $7.9 million in indecency fines in 2004.
The FCC undertakes investigations into alleged obscene, profane, and indecent material after receiving public complaint. The FCC reviews each complaint to determine whether it appears that a violation may have occurred. If so, the FCC will begin an investigation. The context of the broadcast is the key to determine whether a broadcast was indecent or profane. The FCC analyzes what was aired, the meaning of it, and the context in which it aired. Complaints can be made online, via e-mail or regular mail, or by calling 1-888-CALL-FCC (voice) or 1-888-TELLFCC (TTY).
As cable television gained prominence during the 1980s, it became unclear whether the FCC's rules on indecency and profanity applied to this burgeoning medium. Cable operators do not use broadcast spectrum frequencies, but they are licensed by local communities in the same way broadcast television station operators are licensed by the FCC. Moreover, cable operators partake in the same kind of First Amendment activities as do their broadcast television counterparts.
Congress tried to clarify the responsibilities of cable operators when it passed the Cable Television Consumer Protection and Competition Act of 1992 (CTCPCA). CTCPCA authorized cable channel operators to restrict or block indecent programming. The authorization applied to leased access channels, which federal law requires cable systems to reserve for lease by unaffiliated parties, and public access channels, which include educational, governmental, or local channels that federal law requires cable operators to carry. Cable operators claimed that the statute was fully consistent with the First Amendment because it left judgments about the suitability of programming to the editorial discretion of the operators themselves. But cable television viewers filed a lawsuit arguing that the statute violated the First Amendment by giving cable operators absolute power to determine programming content.
In 1996 the case was appealed to the U.S. Supreme Court, which issued an opinion that was as badly divided as the litigants. In handing down its 5-4 decision in Denver Area Educational Telecommunications Consortium, Inc. v. FCC, the Court first noted that cable television shares the same characteristics of broadcast television that were discussed in the Pacifica case, namely that it is uniquely pervasive, is capable of invading the privacy of viewers' homes, and is easily accessible by children. Despite the similarities, the Court held that CTCPCA had violated the First Amendment by giving cable operators the power to prohibit patently offensive or indecent programming transmitted over public access channels. The Court reasoned that locally accountable bodies composed of community members are better capable of addressing programming concerns, and thus creating a "cable operator's veto" was not the least restrictive means of addressing the appropriateness and suitability of cable television programming.
With respect to leased access channels, the Court ruled that CTCPCA also violated the First Amendment by requiring cable system operators to segregate patently offensive programming on separate channels and then requiring the operators to block those channels from viewer access until individual cable subscribers requested access in writing. The Court said that these requirements had an obvious speech-restrictive effect on viewers and were not narrowly or reasonably tailored to protect children from exposure to indecent materials. The Court cited the V-chip as one less restrictive means of accomplishing the same objective.
The law governing television advertising is more settled than that of obscene, indecent, or profane materials. The First Amendment permits governmental regulation of television advertising and other forms of commercial speech so long as the government's interest in doing so is substantial, the regulations directly advance the government's asserted interest, and the regulations are no more extensive than necessary to serve that interest. This test affords advertisers more First Amendment protection than does the public-interest test under which federal courts review most FCC content-related regulations. In a free enterprise system the law recognizes that consumers depend on unfettered access to accurate and timely information regarding the quality, quantity, and price of various goods and services.
Conversely, society is not served by false, deceptive, or harmful advertisements, and thus regulations aimed at curbing such advertising are typically found to serve a substantial governmental interest. The best example involves the federal ban on cigarette advertising. In 1967 the FCC acted upon citizen complaints against the misleading nature of tobacco advertisements by implementing a rule that required any television station carrying cigarette advertisements to also air public service announcements addressing the health risks posed by tobacco. The rule withstood a court challenge. In addition, two years later Congress passed the Public Health and Cigarette Smoking Act of 1969, which banned all electronic advertising of cigarettes as inherently misleading and harmful. The act took effect in 1971 and survived a court challenge that same year. The law remains in effect today. No federal laws or FCC rules ban alcohol advertising, however.
Unlike other areas of telecommunications law, Congress has allowed states to adopt their own regulations governing false and deceptive advertising. Many states have responded by adopting the Uniform Deceptive Trade Practices Act (UDTPA), which prohibits three specific types of representations: (1) false representations that goods or services have certain characteristics, ingredients, uses, benefits, or quantities; (2) false representations that goods or services are new or original; and (3) false representations that goods or services are of a particular grade, standard, or quality. Under UDTPA, liability may arise for advertisements that are only partially accurate, if the inaccuracies are likely to confuse prospective consumers. Ambiguous representations may require clarification to prevent the imposition of liability. For example, a business that accuses a competitor of being "untrustworthy" may be required to clarify that description with additional information if consumer confusion is likely to result.
The 1990 Children's Television Act (CTA) was passed to increase the amount of educational and informational television programming for children. CTA requires broadcast stations to serve the educational and informational needs of children through their overall programming, including programming specifically designed to serve those needs ("core programming"). Core programming is defined as programming specifically designed to serve the educational and informational needs of children ages 16 and under. CTA requires that broadcasters:
- Provide parents and consumers with advance information about core programs being aired.
- Define the type of programs that qualify as core programs.
- Air at least three hours per week of core educational programming.
- Limit the amount of time devoted to commercials during children's programs.
Fueled in part by growing public sentiment against the increasingly violent nature of television programming, NTIA and FCC officials recommended that federal law give parents greater control over the programming viewed by their children. The Telecommunications Act of 1996 introduced a ratings system that requires television shows to be rated for violence and sexual content. The act also created the so-called V-chip, a receptor inside television sets that gives parents the ability to block programs they find unsuitable for their children. Under the act, authority to establish TV ratings is given to a committee comprised of parents, television broadcasters, television producers, cable operators, public interest groups, and other interested individuals from the private sector.
In 2004, the FCC imposed children's educational and informational programming obligations on digital multicast broadcasters. Effective January 1, 2006, digital broadcasters must provide at least three hours per week of core programming on their main programming stream. The minimum amount of core programming increases for digital broadcasters that multicast, in proportion to the amount of free video programming offered on multicast channels. The FCC also limited the amount of commercial matter in all digital video programming, whether free or pay, that is aimed at an audience 12 years old and under.
Beginning January 1, 2006, the FCC also imposed rules governing and limiting the display of Internet web site addresses during programs directed at children 12 and under. The requirements apply to both analog and digital programming. Moreover, FCC rules prohibit "host-selling." According to the FCC, host-selling is any character endorsement that may prevent child viewers from distinguishing between program and non-program material.
In his book Watching Race, Herman Gray describes television as a medium used to “engage, understand, negotiate, and make sense of the material circumstances of [everyday life]” (Gray 1995, p. 43). A person’s entire worldview is obviously influenced by many factors, but it remains evident that, in many parts of the world, television plays a significant role in shaping public perceptions of race and racial differences.
Racialized groups are relegated to the role of the “invisible other” in television programming in the United States. Viewers of most commercial programming are led to believe racial/ethnic groups are virtually nonexistent, and that those that do exist reside in racial worlds or on television channels of their own. Cultural programming is needed that expresses the range of experiences of African Americans, Native Americans, Asian Americans, and Latino Americans. In the early twenty-first century, however, the major television networks (CBS, NBC, ABC, and FOX) continue to marginalize and stigmatize racial minorities in their programming.
In early television programming, African-American performers occupied stereotypical, unflattering roles. These actors gradually became a part of mainstream society at the expense of self-degradation, and the visual images of blacks on network television created a false portrayal of African-American identity. Such controlling images as the Jezebel, mammy, servant, matriarch, buffoon, minstrel, and slave presented a distorted reality of racial identity for African Americans. Shows such as Amos 'n' Andy (1951–1953), Beulah (1950–1953), and The Jack Benny Program (1950–1965) portrayed African Americans as lacking intellect and seemingly enjoying their subservient and less powerful positions in the world. As an example, the show Beulah typified the “good old-fashioned minstrel show” (Haggins 2001, p. 250). The lead character, Beulah, was the stereotypical domestic servant who was “happy” with her lot in life serving her boss. Similarly, her friend Oriole was her queen-sized “childlike idiot friend,” perpetuating the “pickaninny” stereotype of the bulging-eyed child with thick lips and unkempt hair, eating a slice of watermelon. Beulah’s boyfriend Bill typified the “Uncle Tom” and “Coon” characters, embodying what it allegedly meant to be a black man. While in the presence of whites, Bill was hardworking, dependable, and content. In his “real” state, however, he was lazy and avoided the work and responsibilities he should have held as a man. Although more recent televisual depictions have not been as blatantly racist as this, shows such as Beulah paved the way for controlling images to be constructed, transformed, and perpetuated in all forms of media. Unfortunately, these images continue to create an illusion of African Americans as subservient and of less value, in addition to being criminal, predatory, and a threat to European Americans.
These negative portrayals are a result of slavery and the objectification of African slaves as sexual creatures and servants to the whites who colonized North America, a stigma that remains in place in the twenty-first century.
Television portrayals have, to some degree, begun to challenge many of the long-standing controlling images associated with African Americans. Although they may not be as blatantly racist as they were in the past, networks, reporters, and television writers still perpetuate a subtle form of media racism that R. R. Means Coleman has dubbed “neominstrelsy.” According to Coleman, neominstrelsy refers to contemporary versions of the minstrel images of African Americans that were pervasive in early television programming. The early images set the stage for current media images, which often still function to sustain the myth that African Americans are unscrupulous, lack morals, and are only capable of entertaining others through comedy. According to the media researcher Robert Entman, “images of Blacks are produced by network news [that] reinforce whites’ antagonism toward Blacks,” and these images perpetuate stereotypic depictions and contribute to this cycle of television racism (Entman 1994, p. 516). Because television is one of the most heavily used media for information and entertainment purposes, it is imperative that media outlets and creators begin to rethink how these distorted images create an unrealistic picture of African-American life.
Unlike other racial groups, African Americans have been depicted in many television shows in which they have made up the vast majority of the cast. These shows include, but are not limited to, Good Times (1974–1979), The Cosby Show (1984–1992), A Different World (1987–1993), Moesha (1996–2001), Sister, Sister (1994–1999), Girlfriends (2000–), Half & Half (2002–), and Everybody Hates Chris (2005–). These shows have portrayed the diversity that exists within the African-American community. Yet while different kinds of relationships and life experiences have been portrayed in these shows, the constant is that African Americans are generally confined to shows with a neominstrelsy theme, in which the characters are comedic and, to varying degrees, embody the Jezebel, mammy, matriarch, buffoon, minstrel, and “Stepin Fetchit” stereotypes. The Cosby Show was an exception, however, for it attempted to debunk these stereotypes. Yet while the show was praised for communicating a positive image of African-American identity, it was also criticized for not being “black” enough.
In the 2006–2007 television season, there were fewer than seven programs with a predominately African-American cast. While this may be better than blacks having no visibility at all, little is being done to deconstruct societal beliefs about African Americans. Sadly, the shows are rarely fully developed and confine blacks to the genre of comedy, making it difficult to counter long-standing controlling images that shape real-world perceptions of the African-American community.
Native Americans are rarely portrayed in movies or on television, and when they are shown they are often wearing stereotypical attire (e.g., a headdress) or armed with antiquated weaponry (e.g., a bow and arrow), ready to fulfill the all too familiar image of the “noble” savage. These images perpetuate a negative image of racial/ethnic identity for First Nations people and instill the belief that being Native American is a “thing of the past.” Audiences are led to believe that Native Americans either do not exist or are too small in number to be fairly represented. No matter what period of time in which a story is being told, “contemporary portrayals [of First Nations persons] are typically presented in an historic context” (Merskin 1998, p. 335).
This depiction is a visual representation of a linguistic image accepted as an accurate symbol of First Nations people. They are restricted to an image of being a homogenous group of people lacking any distinctive qualities (e.g., tribes) or heterogeneity. The most pervasive and troubling image is the “conventionalized imagery [that] depicts Indians as wild, savage, heathen, silent, noble, childlike, uncivilized, premodern, immature, ignorant, bloodthirsty, and historical or timeless, all in juxtaposition to the white civilized, mature, modern (usually) Christian American man” (Meek 2006, p. 119). Other stereotypes include the portrayal of Native Americans as drunkards, gamblers, and wards of the government, and these images are too often perceived as accurate representations of the original inhabitants of North America. Television programs with a periodic or a recurring Native American character include, but are not limited to, The Lone Ranger (1949–1957), Dr. Quinn, Medicine Woman (1993–1998), Walker, Texas Ranger (1993–2001), Northern Exposure (1990–1995), MacGyver (1985–1992), and Quantum Leap (1989–1993).
These shows portray First Nations people as either occupying a space on the Western frontier or being virtually invisible on television’s racial landscape. This portrayal is attributed to movie and television “Westerns,” which created the stereotypical genre of media representations. According to the sociologist Steve Mizrach, “Indians are shown as bloodthirsty savages, obstacles to progress, predators on peaceful settlers, enemy ‘hostiles’ of the U.S. Cavalry, etc. … the political context of the Indian Wars completely disappears” (Mizrach 1998). To this day, notes Duane Champagne of the Native Nations Law and Policy Center at the University of California, Los Angeles, “Hollywood prefers to isolate its Indians safely within the romantic past, rather than take a close look at Native American issues in the contemporary world” (Champagne 1994, p. 719). These archaic, inaccurate portrayals are further problematized by “images in which the linguistic behaviors of others are simplified and seen as deriving from those persons’ essences” (Meek 2006, p. 95) and “remind us of an oppressive past” (Merskin 2001, p. 160). Through these depictions, First Nations people are presented as “nonnative, incompetent speaker[s] of English” (Meek 2006, p. 96). This is a strategy used to emphasize “Indian civilized otherness by having the character speak English in monosyllables” (Taylor 2000, p. 375).
U.S. companies have long used images of American Indians for product promotion, mainly to “build an association with an idealized and romanticized notion of the past” (Merskin 2001, p. 160). Products such as Land O’ Lakes butter, Sue Bee honey, Big Chief sugar, and Crazy Horse malt liquor have stereotypic caricatures on their labels that are supposed to reflect Native American ethnicity, but which are actually “dehumanizing, one-dimensional images based on a tragic past” (Merskin 2001, p. 167).
Asian Americans are represented on television as a homogenous group of people whose ethnicity is Chinese, Korean, or Japanese. This worldview of a population that is in reality very ethnically diverse is both problematic and restricting. Stereotypes of Asian Americans emerged from efforts by whites to oppress racial groups deemed inferior, and from a nineteenth-century fear of an Asian expansion into white occupations and communities, often referred to as the “Yellow Peril.” These controlling images emerged in order to reduce Asian-American men and women to caricatures based on how the dominant society perceived their racial and gendered identities.
There are both general cultural stereotypes and gender-specific stereotypes of Asian Americans disseminated in the media. General cultural stereotypes include assumptions that Asian Americans are: (1) the model minority, (2) perpetual foreigners, (3) inherently and passively predatory immigrants who never give back, (4) restricted to clichéd occupations (e.g., restaurant workers, laundry workers, martial artists), and (5) inherently comical or sinister. Controlling, gender-specific images of Asian-American identity include Charlie Chan, Fu Manchu, Dragon Lady, and China Doll. Charlie Chan and Fu Manchu are emasculated stereotypes of Asian men as eunuchs or asexual. Charlie Chan, a detective character, was “effeminate, wimpy,” and “dainty” (Sun 2003, p. 658), as well as “a mysterious man, possessing awesome powers of deduction” (Shah 2003). He was also deferential to whites, “non-threatening, and revealed his ‘Asian wisdom’ in snippets of ‘fortune-cookie’ observations.” Conversely, there is the Fu Manchu character, who is “a cruel, cunning, diabolical representative of the ‘yellow peril’” (Sun 2003, p. 658).
Asian-American women are portrayed as hypersexual, the opposite of “asexual” Asian men (Sun 2003). The Lotus Blossom (i.e., China Doll, Geisha Girl, shy Polynesian beauty) is “a sexual-romantic object,” “utterly feminine, delicate, and welcome respites from their often loud, independent American counterparts” (Sun 2003, p. 659). The Dragon Lady is the direct opposite of the Lotus Blossom. She is “cunning, manipulative, and evil,” “aggressive,” and “exudes exotic danger” (Sun 2003, p. 659). There are also the “added characteristics of being sexually alluring and sophisticated and determined to seduce and corrupt white men” (Shah 2003).
Shows with at least one recurring character of Asian descent include: The Courtship of Eddie’s Father (1969–1972), Happy Days (1974–1984), Quincy, M.E. (1976–1983), All-American Girl (1994–1995), Ally McBeal (1997–2002), Mad TV (1995–), Half & Half (2002–), Lost (2004–), and Grey’s Anatomy (2005–). Examples of how long-held stereotypes of Asian-American women are perpetuated include the character Ling Woo on Ally McBeal and Miss Swan on Mad TV. Ling Woo was an attorney (portrayed by the Chinese actress Lucy Liu) who was “tough, rude, candid, aggressive, sharp tongued, and manipulative” and hypersexualized (Sun 2003, p. 661). She was also a feminist, in stark contrast with past portrayals of Asian women. While some Asian Americans believed Ling was a stereotype breaker, she still perpetuated the Dragon Lady stereotype, especially when she “growl[ed] like an animal, breathing fire at Ally, walking into the office to the music of the Wicked Witch of the West from The Wizard of Oz” (Sun 2003, p. 661).
Miss Swan is an Asian-American character on FOX’s sketch comedy show Mad TV. She is played by the Jewish comedian Alex Borstein and represents an example of “yellowface,” which is the Asian equivalent of blackface and refers to a non-Asian person “performing” an Asian identity. Miss Swan is “a babbling nail salon owner with a weak grasp of the English language” (Armstrong 2000), and she is always depicted as the perpetual foreigner, inherently predatory and restricted to the occupation of nail salon owner. She is also a comic character who speaks broken, unintelligible English. That Borstein is not Asian is something the show’s audience has apparently accepted. Such mixed casting of Asian characters was also a problem with the short-lived sitcom All-American Girl, which portrayed a Korean family but cast only one Korean actor (the comedian Margaret Cho). All the other actors were either Japanese American or Chinese American, thus perpetuating the assumption that Asians are interchangeable and must assimilate to mainstream (white) culture in order to “fit in.”
The controlling images of Asian Americans distort what it means to belong to this very heterogeneous ethnic group. Attempts to diversify television programming were made with All-American Girl, but much more work is needed to accurately represent Asian Americans. Both Lost and Grey’s Anatomy have strong and visible Asian-American actors as part of the regular cast. As the journalist Donal Brown notes, UCLA researchers believe these shows and characters are complex and have great appeal across racial and ethnic groups, but they are “concerned that the Asian American characters on television [are] portrayed in high status occupations perpetuating the ‘model minority’ stereotype” (Brown 2006).
“Latino representation in Hollywood is not keeping pace with the explosion of the U.S. Hispanic population, and depictions of Latinos in television and film too often reinforce stereotypes” (Stevens 2004). Television shows purport to reflect reality in their programs, but they rarely, if ever, do so when casting characters. According to the advocacy group Children Now, Latinos make up over 12.5 percent of the U.S. population, yet only 2 percent of characters on television are Latino (Stevens 2004), a figure that does not include Latinos portraying white (non-ethnic) characters.
Latino Americans have been subjugated and oppressed as immigrants “invading” U.S. culture. Contemporary immigration issues notwithstanding, the most prevalent stereotypes associated with Latino-American males are the glorified drug dealer, the “Latin lover,” the “greaser,” and the “bandito” (Márquez 2004). Latina women are depicted as deviant, “frilly señoritas” or as “volcanic temptresses,” and Latino families, in general, are “unintelligent,” “passive,” “deviant,” and “dependent” (Márquez 2004). These depictions may be rare, but they can undoubtedly have a significant impact on the perceptions and attitudes people develop about individuals of Latin descent. Images of Latino Americans do not reflect the “Latino explosion” in U.S. culture, and they ultimately reinforce the stereotypes that should be countered. These images may not be fully positive or fully negative, but their rarity makes their restrictiveness all the more problematic.
Notable television programs featuring or including a Latino-American character include: Chico and the Man (1974–1978), Luis (2003), The Ortegas (2003), NYPD Blue (1993–2005), Will & Grace (1998–2006), Popstar (2001), George Lopez (2002–), The West Wing (1999–2006), The Brothers Garcia (2000–2003), Taina (2001–2002), Dora the Explorer (2000–), Desperate Housewives (2004–), CSI: Miami (2002–), and Ugly Betty (2006–). Latino-American culture has had tremendous appeal in popular culture, yet members of the different ethnic groups within the Latino community remain marginalized in primetime television programming. One promising program that can potentially debunk these controlling images is Ugly Betty, starring the actress America Ferrera. Betty aspires to success in the fashion industry and faces opposition because she does not fit the industry’s (read: mainstream) cultural standard of beauty. Despite much opposition, she refuses to succumb to societal expectations and remains committed to not compromising her character and integrity.
Ugly Betty is based on an incredibly popular Colombian telenovela (soap opera), Yo soy Betty, la fea, that was very successful in Mexico, India, Russia, and Germany. Although the lead character defies conventional wisdom regarding televisual success, the show may presage an era in which issues concerning racial representation in television are dealt with onscreen as well as off.
Television demographics in the United States should mirror the racial demographics of both the country and the cities within which the television programs take place, but many analyses suggest they do not (see Márquez 2004). This becomes particularly salient for individuals who have had limited interpersonal contact with people from other racial groups. Diversifying television production teams and actors is an effective strategy for eradicating subtle and blatant racism, or “symbolic annihilation” (Merskin 1998). Community activism is also a powerful tool in this regard. Through research and the creation of “diversity development” programs at networks like FOX, the national Children Now organization offers practical approaches to addressing racial representation in the media. Efforts by Children Now and other organizations committed to addressing issues of fair racial and ethnic representation in the media are critical in bringing about such change. It is only through education and formal efforts that programmers, scriptwriters, and other pivotal players can be made aware of the exclusionary nature of television. Awareness of such racism will ideally prompt the television community to become proactive in redefining their role in perpetuating controlling images that continue to plague twenty-first-century portrayals of racial groups.
Armstrong, Mark. 2000. “Mr. Wong, Miss Swan: Asian Stereotypes Attacked.” E!-Online News, August 11. Available from http://www.eonline.com/news.
Brown, Donal. 2006. “Asian Americans Go Missing When It Comes to TV.” Available from http://news.pacificnews.org/news/view_article.html?article_id=2187822d260441c375f65241320819d0.
Champagne, Duane, ed. 1994. Native America: Portrait of the Peoples. Detroit, MI: Visible Ink.
Chihara, Michelle. 2000. “There’s Something About Lucy.” Boston Phoenix, February 28. Also available from http://www.alternet.org/story/290/.
Collins, Patricia Hill. 1990. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Boston: Unwin Hyman.
Entman, Robert. 1994. “Representation and Reality in the Portrayal of Blacks on Network Television Shows.” Journalism Quarterly 71 (3): 509–520.
Gray, Herman. 1995. Watching Race: Television and the Struggle for “Blackness.” Minneapolis: University of Minnesota Press.
Haggins, B. L. 2001. “Why ‘Beulah’ and ‘Andy’ Still Play Today: Minstrelsy in the New Millennium.” Emergences: Journal for the Study of Media & Composite Cultures 11 (2): 249–267.
Inniss, Leslie B., and Joe R. Feagin. 1995. “The Cosby Show: The View from the Black Middle Class.” Journal of Black Studies 25 (6): 692–711.
Mayeda, Daniel. M. “EWP Working to Increase Diversity in Television.” USAsians.net. Available from http://us_asians.tripod.com/articles-eastwest.html.
Means Coleman, R. R. 1998. African American Viewers and the Black Situation Comedy: Situating Racial Humor. New York: Garland.
Meek, B. A. 2006. “And the Injun Goes ‘How!’: Representations of American Indian English in White Public Space.” Language in Society 35 (1): 93–128.
Méndez-Méndez, Serafín, and Diane Alverio. 2002. Network Brownout 2003: The Portrayal of Latinos in Network Television News. Report Prepared for the National Association of Hispanic Journalists. Available from http://www.maynardije.org/resources/industry_studies.
Merskin, Debra. 1998. “Sending up Signals: A Survey of Native American Media Use and Representation in the Mass Media.” Howard Journal of Communications 9: 333–345.
———. 2001. “Winnebagos, Cherokees, Apaches and Dakotas: The Persistence of Stereotyping of American Indians in American Advertising Brands.” Howard Journal of Communications 12: 159–169.
Mizrach, Steve. 1998. “Do Electronic Mass Media Have Negative Effects on Indigenous People?” Available from http://www.fiu.edu/~mizrachs/media-effects-indians.html.
Rebensdorf, Alicia. 2001. “The Network Brown-Out.” AlterNet. Available from http://www.alternet.org.
Shah, Hemant. 2003. “‘Asian Culture’ and Asian American Identities in the Television and Film Industries of the United States.” Studies in Media & Information Literacy Education 3. Also available from http://www.utpjournals.com/simile/issue11/shahX1.html.
Stevens, S. 2004. “Reflecting Reality: A Fordham Professor Testifies before Congress about the Dearth of Latinos on Television and in Film.” Available from http://www.fordham.edu/campus_resources/public_affairs/inside_fordham/inside_fordham_archi/october_2004/news/professor_discusses__16624.asp.
Sun, Chyng Feng. 2003. “Ling Woo in Historical Context: the New Face of Asian American Stereotypes on Television.” In Gender, Race, and Class in Media: A Text-Reader, 2nd ed., edited by Gail Dines and Jean M. Humez. Thousand Oaks, CA: Sage.
Taylor, Rhonda H. 2000. “Indian in the Cupboard: A Case Study in Perspective.” International Journal of Qualitative Studies in Education 13 (4): 371–384.
Tina M. Harris
At the same time radio began to achieve commercial viability in the 1920s, the United States and Britain began experimenting with "television," the wireless transmission of moving pictures. Although Britain was initially somewhat more successful, both countries experienced considerable difficulty in the early stages, for a variety of reasons. In America, many people whose livelihoods were tied to radio were also responsible for developing television. Accordingly, they were in no hurry to see radio, a sure moneymaker, usurped by the new medium. In addition, the Depression greatly slowed the development of television in the 1930s. There was also a tremendous amount of infighting between potential television manufacturers and the Federal Communications Commission (FCC) over establishing uniform technical standards. And finally, just as it seemed as though television was poised to enter American homes, the onset of World War II delayed its ascendancy until the war's end. However, in the late 1940s and early 1950s commercial television exploded on the American market, forever changing the way products are sold, people are entertained, and news events are reported. In the years immediately following World War II, television quickly became America's dominant medium, influencing, shaping, and recording popular culture in a way no other medium has ever equaled.
Although televisions first appeared on the market in 1939, because there were virtually no stations and no established programming, it wasn't until just after World War II that TV began its meteoric rise to media dominance. As John Findling and Frank Thackeray note in Events That Changed America in the Twentieth Century, in 1946 only 7,000 TV sets were sold. However, as television stations began appearing in an increasing number of cities, the number of sets sold rose dramatically: 172,000 in 1948, and more than 5,000,000 in 1950. By 1960 more than 90 percent of American homes had TV sets, a percentage that has only climbed since. Before television, Americans had spent their leisure time in a variety of ways. But as each new station appeared in a particular city, corresponding drops would occur in restaurant business, movie gates, book and magazine sales, and radio audiences. By the early 1960s Americans were watching over 40 hours of TV a week, a number that has remained remarkably stable ever since.
Television originally only had 12 Very High Frequency (VHF) channels—2 through 13. In the late 1940s over 100 stations were competing for transmission on VHF channels. Frequency overcrowding resulted in stations interfering with one another, which led to the FCC banning the issuance of new licenses for VHF channels for nearly four years, at the conclusion of which time stations receiving new licenses were given recently developed Ultra High Frequency (UHF) channels (14 through 88). However, most TV sets needed a special attachment to receive UHF channels, which also had a worse picture and poorer sound than VHF channels. Unfortunately, it was mostly educational, public access, and community channels that were relegated to UHF. Because of the FCC's ban, the three major networks, ABC, CBS, and NBC, were able to corner the VHF channels and dominate the television market until well into the 1980s.
From its introduction in American society, television has proven itself capable of holding its audience riveted to the screen for countless hours. As a result, people who saw television as a means through which to provide culturally uplifting programming to the American public were gravely disappointed. Instead, TV almost immediately became an unprecedentedly effective means of selling products. Most Americans weren't interested in "educational" programming, and if people don't watch, advertisers don't pay for air time in which to sell their products. TV shows in the late 1940s followed the model established by the success of radio; single advertisers paid for whole shows, the most common of which were half-hour genre and variety shows. But in 1950 a small lipstick company named Hazel Bishop changed forever the way companies sold their products.
When Hazel Bishop first began advertising on TV in 1950 they had only $50,000 a year in sales. In two short years of television advertising, and at a time when only 10 percent of American homes had TV sets, that number rose to a stunning $4.5 million. As advertisers flocked to hawk their products on TV, TV executives scrambled to find a way to accommodate as many companies as possible, which would result in astronomical profits for both the advertisers and the networks. TV executives realized that single product sponsorship was no longer effective. Instead, they devised a system of longer breaks during a show, which could be split up into 30 second "spots" and sold to a much larger number of advertisers. Although this advertising innovation led to television's greatest period of profitability, it also led to advertising dictating television programming.
In the early 1950s television advertisers realized they had a monopoly on the American public; they were competing with each other, not with other media, such as books or magazines. Americans watched regardless of what was on. Advertisers discovered that what most people would watch was the least objectionable (and often the most innocuous) show in a given time slot; hence the birth of the concept of "Least Objectionable Programming." A TV show didn't have to be good; it only had to be less objectionable than the other shows in the same time slot. Although more "serious" dramatic television didn't entirely disappear, the majority of shows were tailored to create the mood advertisers thought would result in their consumers being the most amenable to their products. By the mid-1950s lightweight sitcoms dominated the American television market. The relative success and happiness of television characters became easily measurable by the products they consumed.
Prior to the twentieth century, "leisure time" was a luxury enjoyed by generally only the very wealthy. But as the American middle class grew astoundingly fast in the post-World War II boom, a much larger population than ever before enjoyed leisure time, which helped contribute to television's remarkable popularity. Perhaps even more important was the rise of the concept of "disposable income," money that people could spend on their wants rather than their needs. Advertisers paying for the right to influence how people might spend their disposable income largely funded television. As a result, television was ostensibly "free" prior to the late 1970s. Nevertheless, television's cost has always been high; it has played perhaps the single largest role in contributing to America's becoming a consumer culture unparalleled in world history. Countless television shows have achieved an iconic stature in American popular culture, but none have had as powerful an effect on the way Americans live their day-to-day lives as have commercials.
By the mid-1950s it became clear that television's influence would not be confined to the screen. Other forms of media simply could not compete directly with television. As a result, they had to change their markets and formats in order to secure a consistent, though generally much smaller, audience than they had enjoyed before television. Perhaps the most far-reaching consequence of the rise of television is that America went from being a country of readers to a country of watchers. Previously hugely popular national magazines such as Collier's, Life, and The Saturday Evening Post went out of business in the late 1950s. Likewise, as television, an ostensibly more exciting visual medium than radio, adapted programming previously confined to the radio, radio shows quickly lost their audience. Magazines and radio stations responded similarly. Rather than trying to compete with TV, they became specialized, targeting a singular demographic audience. Simultaneously, advertisers grew more savvy in their research and development and realized that highly specific consumer markets could be reached via radio and magazine advertising. Strangely, television's rise to prominence secured the long-term success of radio and magazines; their response to the threat of television eventually resulted in a much larger number of magazines and radio stations than had been available before television. In the late 1990s audiences can find a radio station or magazine that focuses on just about any subject they might want.
Perhaps the industry struck hardest by the advent of television was the American film industry. Hollywood initially considered TV an inferior market, not worthy of its consideration. And understandably so, for in the late 1940s as many as 90 million people a week went to the movies. But television's convenience and easy accessibility proved stiff competition for Hollywood. By the mid-1950s the industry's audience had been reduced to half its former number, and Hollywood never recovered as far as actual theater audiences are concerned. In fact, by the late 1990s only 15 million people a week attended the cinema, and this in a country with twice the population it had in the late 1940s. However, Hollywood, as it seemingly always does, found a way to adapt. Rather than relying exclusively on box-office receipts, Hollywood learned to use television to its advantage. Now aftermarket profits, including the money generated from pay-per-view, cable channels, premium movie channels, and, most of all, video sales and rentals, are as important to a film's success as its box-office take, if not more so.
In addition, television's media domination has contributed greatly to blurring the lines between TV and Hollywood. Most Hollywood studios also produce TV shows on their premises. Furthermore, just as radio stars once made the jump from the airwaves to the silver screen, TV stars routinely make the jump from the small screen to the movies. As a result, many American celebrities can no longer be categorized as simply as they once were; too many stars have their feet in too many different media to validate singular labels. Take, for example, Oprah Winfrey, a television talk-show maven who has also been involved in several successful books, promoted the literary careers of others, frequently appeared as a guest on other talk shows and news magazines, and acted in and produced a number of both television and Hollywood films. Although hers is an extreme example, television's unequaled cultural influence has turned any number of stars who would formerly have been confined to one or two media into omnipresent multimedia moguls.
If radio ushered in the era of "broadcast journalism," TV helped to further define and legitimize it. In addition, television newscasts have changed the way Americans receive and perceive news. By the late 1950s TV reporters had learned to take advantage of emerging technologies and use them to cover breaking news stories live. Television broadcasts and broadcasters grew to hold sway over public opinion. For example, public sentiment against the Vietnam War was fueled by nightly broadcasts of its seemingly senseless death and destruction. Walter Cronkite added further fuel to the growing fire of anger and resentment in 1968 when he declared on-air that he thought the war in Vietnam was a "terrible mistake." When most Americans think of the events surrounding the assassinations of JFK, Martin Luther King Jr., and Bobby Kennedy, the civil rights movement, the first moon walk, the Challenger space shuttle disaster, the Gulf War, and the 1992 riots in Los Angeles after the Rodney King trial verdict, it is the televised images that first come to mind.
Unfortunately, by the late 1990s many Americans had come to rely on television news as their main source of information. In addition to nightly news and news-oriented cable networks, cheap-to-make and highly profitable "news magazines" such as Dateline NBC, 20/20, and 48 Hours have become TV's most common form of programming. Rarely do these shows feature news of much importance; instead, they rely on lurid and titillating reports that do nothing to enrich our knowledge of world events but nevertheless receive consistently high ratings, thus ensuring the continuing flow of advertising dollars. That most Americans rely on television for their information means that most Americans are underinformed; for a full account of a particular story it is still necessary to seek out written records in newspapers, magazines, and books. The problem with relying on television for information is, as Neil Postman writes, "not that television presents us with entertaining subject matter but that all subject matter is presented as entertaining." Accordingly, it is Postman's contention that America's reliance on TV for information is dangerous, for when people "become distracted by trivia, when cultural life is redefined as a perpetual round of entertainments, when serious conversation becomes a form of baby-talk, when, in short, a people become an audience and their public business a vaudeville act, then a nation finds itself at risk."
Because of news broadcasting and the fact that television is the best way to reach the largest number of Americans, television has helped shape American politics in the second half of the twentieth century. Effective television advertising has become crucial to the success or failure of nearly any national election. Unfortunately, such advertising is rarely completely factual or issue-oriented; instead, most such advertisements are used to smear the reputation of a particular candidate's opponent. Perhaps the most famous example occurred in the 1988 Presidential campaign, during which Republican George Bush ran a series of slanted and inflammatory spots about his Democratic opponent, Michael Dukakis. Furthermore, the careers of several presidents have become inextricably intertwined with TV. For example, President Ronald Reagan, a former minor movie star and television product pitchman, used his television savvy so effectively that he came to be known as "the Great Communicator." Conversely, President Bill Clinton, whose initial effective use of television reminded some of JFK's, fell victim to his own marketability when in August of 1998 he admitted in a televised speech to the nation that he had lied about his affair with a young intern named Monica Lewinsky. His admission, which was meant to put the incident behind him, instead spawned a virtual television cottage industry, with literally dozens of shows devoting themselves to continual discussion of his fate, which was ultimately decided in a televised impeachment trial.
Strangely, considering the man was wary of the medium, perhaps no politician's career has been more tied to television than Richard Nixon's. As the vice presidential candidate on Dwight Eisenhower's 1952 Republican Presidential ticket, Nixon came under fire for allegedly receiving illegal funding, and a clamor arose to have him removed from the ticket. On September 23, 1952, Nixon went on TV and delivered a denial of the accusations, which has since become known as "the Checkers speech." More than 1 million favorable letters and telegrams were sent supporting Nixon; he remained on the ticket, and he and Eisenhower won in a landslide. Conversely, only eight years later TV would play a role in Nixon's losing his own bid for the White House. Nixon agreed to a series of televised debates with his much more telegenic opponent, John Fitzgerald Kennedy, and Nixon's pasty face and sweaty brow may have cost him the election. Many historians believe that Kennedy "won" the debates as much by his more polished appearance and manner as by anything he said. Nixon learned from his error: while again running for the Presidency in 1968, he hired a public relations firm to run his campaign. The result was a much more polished, image-conscious, and TV-friendly Nixon; he won the election easily. In the last chapter of Nixon's political career, broadcast and print media helped spur investigations into his involvement in the Watergate affair. The hearings were broadcast live on TV, which helped to turn public opinion against the President, who resigned from office as a result. One of the most famous images in the history of television is that of Nixon turning and waving to the crowd as he boarded the helicopter that removed him from power forever.
Prior to World War II, baseball was widely recognized as America's national pastime. Games were often broadcast live, and the country blissfully spent its summers following the on-the-field exploits of larger-than-life figures such as Babe Ruth. However, after the war other sports grew to prominence, largely because of television, beer, and, until 1970, cigarette advertisers, who saw in sports audiences a target market for their products. Individual sports such as golf and tennis grew in popularity, but team sports such as hockey, basketball, and, most of all, football enjoyed the greatest increases. By the late 1960s, with the advent of the Super Bowl, annually America's most-viewed broadcast, football had surpassed baseball as America's favorite pastime. Because of their nature, team sports have a built-in drama that escalates in intensity over the duration of a season. Americans are drawn to the players in this drama, which has resulted in athletes hawking products perhaps more than any other cultural icons. In fact, as advertising revenues increasingly fund sports, athletes have become perhaps the highest-paid workers in America. Michael Jordan earned a reported $30 million to play for the Chicago Bulls in the 1997-98 season, and in the fall of 1998 pitcher Kevin Brown signed a seven-year deal with the Los Angeles Dodgers worth $105 million. Accompanying their paychecks is a rise in media scrutiny: elite athletes are hounded by paparazzi in a way once reserved for movie stars and royalty. Such is the price of television fame in America.
Despite the ridiculous salaries and the accompanying out-of-control egos of many athletes and owners, it could be argued that television has made sports better for most people. Seeing a game live at a venue remains thrilling, but TV, with its multiple camera angles and slow-motion instant replays, is by far the better way to actually see a game. In addition to the better view, one has all the comforts of home without the hassle of inclement weather, expensive tickets, heavy traffic, and nearly impossible parking. And because of TV's live transmission of sports, certain moments have become a part of America's collective cultural fabric in a way that never would have been possible without television: heroes and goats achieve legendary status nearly immediately. The broadcast of sporting events has proved just as important to our culture as the televising of events of political and social importance. Although not particularly significant in their contribution to human progress, the images of San Francisco 49er Joe Montana's pass to Dwight Clark in the back of the end zone to beat the Dallas Cowboys in the 1981 NFC championship game, or of a ground ball dribbling between Boston Red Sox first baseman Bill Buckner's legs in the sixth game of the 1986 World Series, are, because of television, just as much a part of American culture's visual memory as the image of Neil Armstrong walking on the moon.
As the twentieth century careens to a close and America prepares to embark on a new century, debates over television's influence continue to rage. Is TV too violent? Is television's content too sexually oriented? Has television news coverage become vacuous and reliant on the superfluous and tawdry? Is TV damaging our children and contributing to the fraying of America's social fabric? Regardless of the answers to these and countless other questions, the inarguable fact is that television is the most important popular-culture innovation in history. Our heroes and our villains are crowned and vanquished on television. Sound bites as diverse in intent and inception as "where's the beef," "read my lips," and "just do it" have become permanently and equally ensconced in the national lexicon. Television is not without its flaws, but its accessibility and prevalence have created what never before existed: shared visual cultural touchstones. As one, America mourned the death of JFK, argued about the veracity of the Clarence Thomas/Anita Hill hearings, recoiled in horror as Reginald Denny was pulled from his truck and beaten, and cheered triumphantly as Mark McGwire hoisted his son in the air after hitting his 62nd home run. For better or for worse, in the second half of the twentieth century television dominated and influenced the American cultural landscape in an unprecedented fashion.
—Robert C. Sickels
Baker, William F., and George Dessart. Down the Tube: An Inside Account of the Failure of American Television. New York, Basic Books, 1998.
Caldwell, John Thornton. Televisuality: Style, Crisis, and Authority in American Television. New Brunswick, Rutgers University Press, 1995.
Comstock, George. The Evolution of American Television. Newbury Park, Sage Publications, 1989.
Findling, John E., and Frank W. Thackeray. Events that Changed America in the Twentieth Century. Westport, Connecticut, Greenwood Press, 1996.
Himmelstein, Hal. Television Myth and the American Mind. Westport, Praeger Publishers, 1994.
Postman, Neil. Amusing Ourselves to Death. New York, Penguin Books, 1985.
Stark, Steven D. Glued to the Set: The 60 Television Shows and Events that Made Us Who We Are Today. New York, Free Press, 1997.
Udelson, Joseph H. The Great Television Race: A History of the American Television Industry 1925-1941. The University of Alabama Press, 1982.
Television is a telecommunication device for sending (broadcasting) and receiving video and audio signals. The name television is derived from the Greek root tele, meaning "far," and the Latin visio, meaning "sight"; combined, they mean "seeing at a distance." Broadly speaking, television, or TV, is thus the overall technology used to transmit pictures with sound using radio frequency and microwave signals or closed-circuit connections.
Television programming was regularly broadcast in such countries as the United States, England, Germany, France, and the Soviet Union before World War II (1939–1945). However, television in the U.S., for instance, did not become common in homes until the middle 1950s. By the early 2000s, over 250 million television sets were in use in the U.S., nearly one TV set per person.
German engineer and inventor Paul Julius Gottlieb Nipkow (1860–1940) designed the mechanism in 1884 that made television possible. Nipkow placed a spiral pattern of holes onto a scanning disk (later called a Nipkow disk). He turned the scanning disk in front of a brightly lit picture so that each part of the picture was eventually exposed. This technology was later used inside cameras and receivers to produce the first television images.
The invention of the cathode ray tube in 1897 by German inventor and physicist Karl Ferdinand Braun (1850–1918) quickly made possible the technology that today is called television. Indeed, by 1907, the cathode ray tube was capable of supplying images for early incarnations of the television.
English inventor Alan Archibald Campbell-Swinton (1863–1930) invented electronic scanning in 1908. He used an electron gun to neutralize charges on an electrified screen. Later, he wrote a scientific paper describing the electronic theory behind the concept of television. Russian-born American physicist Vladimir Kosma Zworykin (1889–1982) added to Campbell-Swinton’s idea when he developed an iconoscope camera tube later in the 1920s.
American inventor Charles Francis Jenkins (1867–1934) and English inventor John Logie Baird (1888–1946) used the Nipkow scanning disk in the early 1920s for their developmental work with television. Baird demonstrated his working television model to an audience in 1926, and by 1928 he had set up an experimental broadcast system and demonstrated a color transmission of television. Then Philo Taylor Farnsworth (1906–1971), an American inventor and engineer, invented a television camera that could convert elements of an image into an electrical signal. Farnsworth demonstrated the first completely electronic television system in 1934, a system he eventually patented.
Within 50 years, television had become a dominant form of entertainment and an important way to acquire information. This remains true in the mid-2000s, as the average U.S. citizen spends between two and five hours each day watching television.
Television operates on two principles that underlie how the human brain perceives the visual world. First, if an image is divided into a group of very small colored dots (called pixels), the brain is able to reassemble the individual dots to produce a meaningful image. Second, if a moving image is divided into a series of pictures, with each picture displaying a successive part of the overall sequence, the brain can put all the images together to form a single flowing image. The technology of the television (as well as computers) utilizes these two features of the brain to present images. The dominant basis of the technology is still the cathode ray tube.
A cathode ray tube contains a positively charged region (the anode) and a negatively charged region (the cathode). The cathode is located at the back of the tube. As electrons exit the cathode, they are attracted to the anode. The electrons are also focused electronically into a tight beam, which passes into the central area of the television screen. The central region is almost free of air, so there are few air molecules to deflect the electrons from their path. The electrons travel to the far end of the tube, where they encounter a flat screen coated with a material called a phosphor. When an electron hits a phosphor, the phosphor glows. The electron beam can be swept in a coordinated way across different parts of the phosphor screen, effectively painting the screen (a raster pattern). This process occurs very quickly, about 30 times each second, producing multiple images each second. The resulting pattern of glowing and dark phosphors is what the brain interprets as a moving image.
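The raster pattern described above can be sketched in a few lines of code. This is a toy model, not real CRT timing: it simply visits a small grid of "phosphors" in the left-to-right, top-to-bottom order the beam follows, and counts how many dots a 30-repaints-per-second rate implies.

```python
# Toy model of the raster pattern: the beam visits every phosphor left
# to right along a row, then moves down to the next row, and the whole
# screen is repainted about 30 times each second.
WIDTH, HEIGHT = 8, 6      # a tiny stand-in for a real phosphor grid
REFRESH_RATE = 30         # approximate full repaints per second

def raster_order(width, height):
    """Return the (row, col) positions in the order the beam lights them."""
    return [(row, col) for row in range(height) for col in range(width)]

order = raster_order(WIDTH, HEIGHT)
print(order[0])           # (0, 0): top-left phosphor is lit first
print(order[-1])          # (5, 7): bottom-right phosphor is lit last
dots_per_second = WIDTH * HEIGHT * REFRESH_RATE   # 1,440 in this toy grid
```

A real screen has hundreds of thousands of phosphor dots rather than 48, but the visiting order and the repaint rate work the same way.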
Black-and-white television was the first to be developed, as it utilized the simplest technology: its phosphor is white. Color television followed as the medium became more popular and demands for a more realistic image increased. In a color television, three electron beams are present, called the red, green, and blue beams, and the screen is coated not just with white phosphor but with red, green, and blue phosphors arranged in stripes. Depending on which electron beam is firing and which color phosphor dots are being hit, a spectrum of colors is produced. As with black-and-white television, the brain reassembles the information to produce a recognizable image.
High-definition television (HDTV) is a format that uses digital video compression, transmission, and presentation to produce a much crisper and more lifelike image than is possible using cathode ray tube technology, because more information can be packed into the area of the television screen. Conventional broadcast standards use 525 or 625 horizontal scan lines, each containing roughly 500 to 600 dots (or pixels); put another way, the image carries on the order of 525 × 500 pixels. In contrast, HDTV formats range from 1280 × 720 to 1920 × 1080 pixels. The added level of detail produces a visually richer image.
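The difference in detail is easy to quantify. The sketch below compares the conventional figure of 525 lines of roughly 500 dots against the two common HD formats, 1280 × 720 and 1920 × 1080:

```python
# Pixel-count arithmetic for the formats discussed above.
sd_pixels = 525 * 500        # conventional broadcast image
hd_720p = 1280 * 720         # "720-line" HD format
hd_1080p = 1920 * 1080       # "1080-line" HD format

print(sd_pixels)             # 262500
print(hd_1080p)              # 2073600
print(round(hd_1080p / sd_pixels, 1))   # 7.9, i.e. roughly 8x the detail
```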
Televisions of the 1950s and 1960s utilized an analog signal. The signals were beamed out into the air from the television station, to be collected by an antenna positioned on a building or directly on the television (the familiar "rabbit ears"). Today, the signal is digitized, allowing the electronic pulses to be sent through cable wire to the television, or to a satellite, which then beams the signal to a receiving dish, in a format known as MPEG-2 (the same compression format used for video files on a computer).
The digital signal is less subject to deterioration than is the analog signal. Thus, a better-quality image reaches the television.
Television signals are transmitted on frequencies that are limited in range. Only persons residing within a few dozen miles of a TV transmitter can usually receive clear and interference-free reception. Community antenna television systems, often referred to as CATV, or simply cable, developed to provide a few television signals for subscribers far beyond the service area of big-city transmitters. As time passed cable moved to the big cities. Cable’s appeal, even to subscribers able to receive local TV signals without an outdoor antenna, is based on the tremendous variety of programs offered. Some systems provide subscribers with a choice of hundreds of channels.
Cable systems prevent viewers from watching programs they have not contracted to buy by scrambling or by placing special traps in the subscriber’s service drop that remove selected channels. The special tuner box that descrambles the signals can often be programmed by a digital code sent from the cable system office, adding or subtracting channels as desired by the subscriber.
Wired cable systems generally send their programming from a central site called a head end. TV signals are combined at the head end and then sent down one or more coaxial-cable trunk lines. Along the trunk, signals split away onto shorter branches called spurs to serve individual neighborhoods.
Coaxial cable, even the special type used for CATV trunk lines, is made of material that attenuates the electrical signals passing through it. Signals must be boosted in power periodically along the trunk line, usually every time the signal level has fallen by approximately 20 decibels, the equivalent of the signal having fallen to one-hundredth (1/100th) of its original power. The line amplifiers used must be very sophisticated to handle the wide bandwidth required for many programs without degrading the pictures or adding noise, and they must adjust for changes in the coaxial cable due primarily to temperature changes. The amplifiers are much improved over those used by the first primitive community antenna systems, but even today trunk lines are limited in length to about a dozen miles; not much more than about one hundred line amplifiers can be used along a trunk line before problems become unmanageable.
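The decibel figure above follows the standard power relation, ratio = 10^(dB/10). A quick check of the arithmetic, assuming only that relation (the per-mile attenuation of a real trunk line varies with cable type and frequency, so it is not modeled here):

```python
def db_to_power_ratio(db):
    """Standard decibel relation for power: ratio = 10 ** (dB / 10)."""
    return 10 ** (db / 10)

# A 20 dB drop leaves the signal at one-hundredth of its original power,
# the point at which the text says a line amplifier is needed.
remaining = db_to_power_ratio(-20)
print(remaining)             # 0.01
```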
Cable’s program offerings are entirely confined within the shielded system. The signals provided to subscribers must not interfere with over-the-air radio and television transmissions using the same frequencies. Because the cable system’s offerings are confined within the shielded system, pay-per-view programs can be offered on cable.
Cable is potentially able to import TV signals from a great distance using satellite or terrestrial-microwave relays. Cable systems are required to comply with a rule called Syndex (for "syndication exclusivity"), under which an over-the-air broadcaster can require that imported signals be blocked when the imported stations carry programs the local broadcaster has paid to broadcast.
The current step in CATV technology is the replacement of wire-based coaxial systems with fiber optic service. Fiber optics is the technology in which electrical signals are converted to light signals by solid-state laser diodes. The light waves are transmitted through very fine glass fibers so transparent that a beam of light will travel through the fiber for miles.
Cable’s conversion to fiber optics results in an enormous increase in system bandwidth. Virtually the entire radio and TV spectrum is duplicated in a fiber optic system. As an example, every radio and TV station transmitting over the airwaves can be carried in a single thread of fiberglass; 500 separate television channels on a single cable system is easily handled. A fiber optic CATV system can be used for two-way communication more easily than can a wire-cable plant with electronic amplifiers. Fiber optic cable service greatly increases the support needed for interactive television services.
Plasma televisions have been available commercially since the late 1990s and became popular in the early 2000s. A plasma television does not have a cathode ray tube, so the screen can be very thin: typically about 6 in (15 cm) thick, and in some models as little as 1 in (2.5 cm). This allows the screen to be hung on a wall. Along with plasma televisions, flat-panel LCD televisions are also available; both are considered flat-panel televisions, and some models are used as computer monitors.
In a plasma television, tiny fluorescent cells take the place of the electron gun. Red, green, and blue cells enable a spectrum of colors to be produced, in much the same way as in a conventional television. Each cell contains a gas in the plasma state, a mixture of electrically charged atoms (ions) and negatively charged electrons. When an electrical signal energizes the plasma, the added energy starts a process in which the particles bump into one another, releasing packets of energy called photons. The ultraviolet photons in turn strike a phosphor coating, which then glows.
Rear-projection televisions send the video signal and its images to a projection screen with the use of a lens system. They were introduced in the 1970s but declined in the 1990s as better alternatives became available. Many used large screens, some over 100 in (254 cm). These projection systems are divided into three groups: CRT-based (cathode ray tube), LCD-based (liquid crystal display), and DLP-based (digital light processing). Their quality has improved drastically since the 1970s, so that in the 2000s they remain a viable type of television.
Chrominance —Color information added to a video signal.
Coaxial cable —A concentric cable, in which the inner conductor is shielded from the outer conductor; used to carry complex signals.
Compact disc —Digital recording with extraordinary fidelity.
Field —Half a TV frame, a top to bottom sweep of alternate lines.
Frame —Full TV frame composed of two interlaced fields.
Parallax —Shift in apparent alignment of objects at different distances.
Phosphor —Chemical that gives off colored light when struck by electrons.
ATV stands for advanced television, a name created by the U.S. Federal Communications Commission (FCC) for digital TV (DTV), the television system replacing the current analog system in the United States. In an ATV system, the aspects that produce the image are processed as computer-like data. Digitally processed TV offers several tremendous advantages over analog TV methods: in addition to sharper pictures with less noise, a digital system can be much more frugal in its use of spectrum space. ATV includes high-definition television (HDTV), a digital format for video compression, transmission, and presentation developed in the 1980s, which uses up to 1,080 lines and a wide-screen format to provide a very clear picture compared with traditional 525- and 625-line television.
Most TV frames are filled with information that has not changed from the previous frame. A digital TV system can update only the information that has changed since the last frame. The resulting picture looks to be as normal as the pictures seen for years, but many more images can be transmitted within the same band of frequencies.
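The frame-updating idea above can be sketched with a toy diff of two frames. Real digital TV uses MPEG-2's motion-compensated compression, which is far more elaborate, so this is an illustration of the principle only.

```python
# Toy illustration of frame updating: send only the pixels that changed
# since the previous frame, then rebuild the full frame at the receiver.
def frame_delta(prev, curr):
    """Return {pixel_index: new_value} for every pixel that changed."""
    return {i: v for i, (p, v) in enumerate(zip(prev, curr)) if p != v}

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    return [delta.get(i, p) for i, p in enumerate(prev)]

prev_frame = [0, 0, 0, 5, 5, 5]
curr_frame = [0, 0, 9, 5, 5, 5]       # only one pixel differs
delta = frame_delta(prev_frame, curr_frame)
print(delta)                          # {2: 9}, far smaller than a full frame
print(apply_delta(prev_frame, delta) == curr_frame)   # True
```

Because most frames change very little, the delta is usually a small fraction of a full frame, which is why many more images fit within the same band of frequencies.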
TV audiences have been viewing images processed digitally in this way for years, but the final product has been converted to a wasteful analog signal before it leaves the television transmitter. Satellites have relayed TV as digitally compressed signals to maximize the utilization of the expensive transponder equipment in orbit around the Earth, and the small satellite dishes offered for home reception receive digitally encoded television signals.
ATV will not be compatible with current analog receivers, but it will be phased in gradually in a carefully considered plan that will allow older analog receivers to retire gracefully over time. Television in the 2000s includes such features as DVD (digital versatile disc) players, computers, VCRs (video cassette recorders), computer-like hard drives (to store programming), Internet access, video game consoles, pay-per-view broadcasts, and a variety of other advanced add-on features.
The device known as the television is actually a receiver that is the end point of a system that transmits pictures and sounds at a distance. The process starts with a television camera that converts all image information into electrical signals, which are delivered to homes and businesses through a television antenna, underground fiber optic cable, or satellite. The function of the receiver, or television set, is to unscramble the electrical signals, converting them into sounds and pictures.
Television is one of the greatest technological developments of all time. It did not happen overnight, but developed over a number of years, taking advantage of advances in the sciences and technologies of the time. Television has not only been a source of entertainment worldwide, but it has also linked people through their common experience of witnessing events that are happening in different parts of the world and beyond. For example, on July 20, 1969, about 720 million people all over the world watched on television as astronaut Neil Armstrong walked on the moon.
In the United States, about 99 percent of households own at least one television set. Over 60 percent subscribe to cable television. The average household watches approximately seven hours of television each day.
A group effort
No single person invented the television. Instead, it is the result of scientific research in various countries over several decades. In 1817, Baron Jöns Jakob Berzelius (1779–1848), a Swedish chemist, identified selenium as a chemical element. Selenium was later found to conduct electricity, with its ability to conduct varying with the amount of light hitting it. In 1878, Sir William Crookes (1832–1919), a British chemist and physicist, first mentioned cathode rays (beams of electrons in a glass vacuum tube). These scientific findings occurred separately and would take many years to be applied to the making of television.
In 1884, German engineer Paul Nipkow (1860–1940) built the first crude television with the help of a mechanical scanning disk. Small holes on the rotating disk picked up pieces of images and imprinted them on a light-sensitive selenium tube. A receiver then recreated the image pieces into a whole picture. Nipkow's mechanical invention, crude as it was, employed the scanning principle that would be used by future television cameras and receivers to record and recreate images for a television screen.
Television goes electronic
In 1911, while some scientists were trying to improve on Nipkow's mechanical scanning disk, Scottish electrical engineer Alan Archibald Campbell Swinton (1863–1930) discussed his idea of a "distant electric vision," using cathode rays. Although Swinton never built the electronic television that he so accurately described, other scientists brought into reality his idea of the television set as we know it today.
In 1897, German scientist Karl Ferdinand Braun (1850–1918) invented the cathode ray tube. Inside the glass tube, cathode rays could produce pictures by hitting the fluorescent (glowing) screen at the end of the tube. Boris Rosing of Russia demonstrated in 1907 that the cathode ray tube could serve as the receiver of a television system.
In England, John Logie Baird (1888–1946) experimented with Nipkow's scanning disk in the early 1920s. At around the same time, in the United States, Charles Francis Jenkins (1867–1934) was performing the same experiment. In 1926, Baird was the first to demonstrate the electrical transmission of images in motion.
The invention of television cameras during the 1920s further contributed to the development of television. Philo Farnsworth (1906–1971) of Idaho was only fifteen years old when he figured out the workings of an electronic television system. Farnsworth invented the image dissector tube, an electronic scanner. In 1927, he gave the first public demonstration of electronic television by transmitting the image of a dollar sign. Along with another American, Allen B. Dumont (1901–1965), Farnsworth developed a pickup tube that helped make the home television set a reality by 1939.
At the same time, Russian immigrant Vladimir Zworykin (1889–1982) invented an electronic camera tube called the iconoscope. Both Farnsworth's and Zworykin's cameras used a cathode ray tube as the television receiver for recreating the original images.
The earliest mention of color television was in a German patent in the early 1900s.
In 1928, John Logie Baird used Nipkow's mechanical scanning disk to demonstrate color television. A color television system developed by Hungarian-born American Peter Goldmark (1906–1977) in 1940 did not receive wide acceptance because it was not compatible with black-and-white television sets. It took almost twenty years for color television to become commercially available.
The television is made up of four principal sets of parts: the exterior part or housing, the picture tube, the audio (sound) reception and stereo system, and the electronic components (parts). These electronic parts include cable and antenna input and output devices, a built-in antenna in most television sets, a remote control receiver, computer chips, and access buttons. The remote control, popularly called a "clicker," is an additional part of the television set.
The television housing is made of injection-molded plastic. In injection molding, liquid plastic is forced into a mold with the help of high pressure. The plastic takes on the shape of the mold as it cools. Some television sets may have exterior wooden cabinets. The audio reception and stereo systems are made of metal and plastic.
The picture tube materials consist of glass and a coating made of the chemical phosphor, which glows when struck by electrons. Other picture tube materials include electronic attachments around and at the rear of the tube. Brackets and braces hold the picture tube inside the housing.
The antenna and most of the input-output connections are made of metal. Some of the input-output connections are coated with plastic or special metals to improve the quality of the connection or to insulate it. (The insulation material prevents the escape of heat, electricity, or sound.) The chips (also called microchips) are made of silicon, metal, and solder (a metal that is heated and used to join metals).
Different types of engineers are responsible for designing a television set. These include electronics, audio, video, plastics, and fiber optics engineers. The engineering team may design a bigger television set patterned after an existing model. They may also design new features, including an improved picture, better sound system, or a remote control that can work with other devices, such as a DVD player.
The team members discuss ideas about the new features, redrawing plans as they develop new ideas about the design. After the engineers receive initial approval for manufacturing the set, they make a prototype, or a model, after which the other sets will be patterned. A prototype is important for testing out the design, appearance, and functions of the set. Having a prototype also enables the production engineers to determine the production processes, machining (the cutting, shaping, and finishing by machine), tools, robots, and changes to existing factory production lines.
When the prototype passes a series of strict tests and is finally approved for manufacture by management, the engineers draw detailed plans and produce specifications for the design and production of the model. Specifications include the type of materials needed, their sizes, and workmanship involved in the manufacturing process.
Television uses a process called scanning to capture and then recreate an image. When recording an image, the television camera breaks it down into 525 horizontal lines. Electron beams in the camera tube scan (read) the lines thirty times every second. (In Europe, Australia, and most countries in Asia, each image is separated into 625 lines, with the scanning done at twenty-five times per second.) The television receiver, which is the television set, recreates the images on the screen by using the same electrical signals recorded by the television camera. The picture tube inside the television contains three electron guns that receive the video (image) signals. The electron guns shoot electron beams at the phosphor-coated dots on the screen, scanning the screen in the same pattern that the images were recorded by the camera.
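The scanning numbers above are easy to check with simple arithmetic. Here is a minimal sketch in Python (the function name is our own, for illustration only):

```python
def lines_per_second(lines_per_frame, frames_per_second):
    """Total scan lines drawn each second = lines per image x images per second."""
    return lines_per_frame * frames_per_second

# American system: 525 lines per image, scanned 30 times every second.
print(lines_per_second(525, 30))  # 15750

# European, Australian, and most Asian systems: 625 lines, 25 times per second.
print(lines_per_second(625, 25))  # 15625
```

Notice that both systems end up drawing a similar number of lines per second, so the scanning electronics in each run at comparable speeds.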
Raw materials and components that are manufactured by outside suppliers are ordered. The production line is constructed and tested. Finally, the components that will go into the new television sets are put together on the assembly line.
The Manufacturing Process
1 Television housings are mostly made of plastic. In a process called injection molding, liquid plastic is forced into molds under high pressure. The plastic is allowed to cool and harden. The formed solid plastic pieces are released from the molds, trimmed, and cleaned. They are then assembled to make up the television housing. The molds are designed so that brackets and supports for the various parts of the television set are part of the housing.
2 The television picture tube, also called a cathode ray tube (CRT), is shaped like a funnel. The widest part of the funnel is a slightly curved plate made of glass. The glass is the television screen on which pictures are viewed. A dark tint may be added to the glass plate to improve color. The inside of the screen is covered with tiny dots of phosphors, or chemicals that glow when hit by electrons. The phosphor dots come in the primary colors red, green, and blue.
3 Immediately behind the phosphor layer is a thin metal shadow mask with thousands of small holes. Some shadow masks are made of iron. The better-quality shadow masks are made of a mixture of nickel and iron called Invar that lets the picture tube operate at a higher temperature. Higher temperatures result in brighter pictures.
4 The narrow end of the color picture tube contains three electron guns. Their job is to shoot electron beams at the phosphors, with each gun responsible for a specific color. The shadow mask makes sure the electron guns each shoot at one color of phosphors in a process called scanning (see sidebar). When hit by electrons, the phosphors light up, creating the pictures on the television screen.
After the electron guns are placed inside the picture tube, air is removed from the tube to prevent it from interfering with the movement of the electrons. Then, the end of the tube is closed off with a fitted electrical plug that will be placed near the back of the set.
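How three single-color guns produce a full-color picture comes down to additive mixing: each dot's red, green, and blue phosphors glow at independent strengths, and the eye blends them into one color. A small sketch, assuming the common 0–255 brightness scale (the scale and the helper function are our own illustration, not part of the manufacturing process described here):

```python
def dot_color(red, green, blue):
    """Blend three phosphor strengths (0 = gun off, 255 = full strength) into one dot."""
    for level in (red, green, blue):
        if not 0 <= level <= 255:
            raise ValueError("phosphor strength must be between 0 and 255")
    return (red, green, blue)

white = dot_color(255, 255, 255)   # all three guns at full strength
yellow = dot_color(255, 255, 0)    # red and green light mix to yellow
black = dot_color(0, 0, 0)         # all guns off: the dot stays dark
```

Every color a television displays is some combination of these three phosphor strengths at each dot.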
5 A deflection yoke, consisting of two electromagnetic coils, is fitted around the neck of the picture tube. The electromagnetic coils cause pulses of high voltage to guide the direction and speed of the electron beams as they scan the television screen.
6 The speakers, which go into the housing, are typically made by another company that works closely with the television manufacturer. They are made according to certain characteristics specified by the manufacturer. Wiring, electronic sound controls, and integrated circuitry are assembled in the television set as it travels along the assembly line. An integrated circuit, also called a chip or microchip, is a tiny piece of silicon on which electronic parts and their interconnections are imprinted.
TELEVISION TRICKS THE EYE?
A moving scene that we see on television is actually a series of still (nonmoving) images that are shown in rapid succession. The physical phenomenon called "persistence of vision" enables the retina of the eye to hold on to an image for a fraction of a second longer after the eye has seen it. The brain, which works with the eye, puts these still images together so that the eye perceives them as a single moving scene.
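The timing can be worked out directly: at thirty images per second, each still picture lasts only about a thirtieth of a second, shorter than the time the retina retains an image. A quick calculation (the exact retention time varies by viewer, so this is only illustrative):

```python
frames_per_second = 30                   # American broadcast rate
frame_duration = 1 / frames_per_second   # seconds each still image is shown

# Each image lasts roughly 33 thousandths of a second.
print(round(frame_duration * 1000, 1))  # 33.3
```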
7 After the picture tube and the audio system are assembled in the set, other electronic parts are added to the rear of the set. The antenna, cable jacks, other input and output jacks, and the electronics for receiving remote control signals are prepared as subassemblies on another assembly line or by specialty contractors hired from outside the company. These electronic components are added to the set, and the housing is closed.
Like other precision products, the television requires strict quality control during manufacture. Inspections, laboratory testing, and field testing are constantly conducted during the development of prototypes. The manufacturer has to be sure the resulting product is not only technologically sound but also safe for use in homes and businesses.
Researchers continue to find new ways to improve television sets. The high-definition television (HDTV) system that we have today is a digital television system. The conventional television system transmits signals using radio waves. During transmission, these waves can become distorted, for example, by bad weather. The television set, unable to distinguish between distorted and good-quality waves, converts all the radio waves it receives into pictures. Therefore, the resulting images may not all be of good quality.
Digital television, on the other hand, while also using radio waves, assigns a code to the radio waves. When it comes time to recreate the picture, the television set obtains information from the code on how to display the image. HDTV offers clearer and sharper images with its 1,125-line picture. Compared to the traditional 525-line picture, HDTV offers a far better picture because more lines are scanned by the television camera and receiver. This means more details of the images are included.
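The jump in detail can be put into numbers using the line counts given above (a rough comparison; scan lines measure only vertical detail):

```python
conventional_lines = 525  # traditional American picture
hdtv_lines = 1125         # HDTV picture

# HDTV scans more than twice as many lines per image.
ratio = hdtv_lines / conventional_lines
print(round(ratio, 2))  # 2.14
```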
In the future, digital television could also allow the viewer to choose camera angles while watching a concert or a sports event. The viewer could also communicate with the host of a live program and edit movies on screen.
Flat-panel television screens, such as liquid-crystal display (LCD) and plasma screens, are being perfected to achieve the kind of picture and sound seen in movie theaters. They are also seen as replacements for the present bulky television sets made of cathode ray tubes. The flat screens are not only lightweight but also energy-efficient. However, unless these state-of-the-art technologies become affordable, it will be a while before consumers convert to flat-screen televisions.
- antenna:
- A device used in television to send and receive electromagnetic waves, or waves of electrical and magnetic force brought about by the vibration of electrons.
- cable television:
- The transmission of television programs from television stations to home television sets through fiber optic cable or metal cable.
- cathode ray tube (CRT):
- A vacuum tube whose front part makes up the television screen. In the tube, images are formed by electrons striking the phosphor-coated screen.
- chip:
- Also called microchip, a very small piece of silicon that carries interconnected electronic components.
- electron:
- A small particle within an atom that carries a negative charge, which is the basic charge of electricity.
- fiber optic cable:
- A bundle of hair-thin glass fibers that carry information as beams of light.
- fluorescent:
- Giving off light when exposed to electric current.
- persistence of vision:
- A physical phenomenon in which the retina of the eye holds on to an image for a fraction of a second longer after it has seen the image. The brain, which works with the eye, puts these still images together so that the eye perceives them as a single movement.
- phosphor:
- A chemical that glows when struck by electrons or light.
- satellite:
- An object that is put into space and used to receive and send television signals over long distances.
- scan:
- To move electrons over a surface in order to transmit an image.
- shadow mask:
- A metal sheet with thousands of small holes. It is found behind the phosphor layer of the color picture tube and through which three electron guns at the other end of the tube shoot electron beams. The shadow mask ensures that each gun shoots at the specific phosphor color.
- silicon:
- A nonmetallic element widely used in microchips because it is a semiconductor, conducting electricity only under certain conditions.
For More Information
Graham, Ian. Communications. Austin, TX: Steck-Vaughn Company, 2001.
Graham, Ian. Radio and Television. Austin, TX: Steck-Vaughn Company, 2001.
Parker, Steve, Peter Lafferty, and Steve Setford. How Things Work. New York, NY: Barnes & Noble Books, 2001.
Brown-Kenyon, Paul I., Alan Miles, and John S. Rose. "Unscrambling Digital TV." McKinsey Quarterly (2000): pp. 71–81.
Kubey, Robert, and Mihaly Csikszentmihalyi. "Television Addiction." Scientific American (February 2000).
Early Television Foundation. http://www.earlytelevision.org (accessed on July 22, 2002).
"The Revolution of Television." Technical Press. http://www.tvhandbook.com/History/History_TV.htm (accessed on July 22, 2002).