Source Edition – Chapters 7-9
Chapter 7: Radio
7.1 Radio
7.2 Evolution of Radio Broadcasting
7.3 Radio Station Formats
7.4 Radio’s Impact on Culture
7.5 Radio’s New Future
7.1 Radio
In 1983, radio station WOXY’s new owners changed its format from Top 40 to the up-and-coming alternative rock format, kicking off with U2’s “Sunday Bloody Sunday (WOXY, 2009).” Then located in the basement of a fast-food restaurant in Ohio, the station was a risk for its purchasers, a husband-and-wife team who took a chance on a relatively new format. Their investment paid off with the success of their station. By 1990, WOXY had grown in prestige to become one of Rolling Stone magazine’s top 15 radio stations in the country, and had even been made famous by a reference in the 1988 film Rain Man (Bishop, 2004). In 1998, the station launched a webcast and developed a national following, ranking 12th among Internet broadcasters for listenership in 2004 (Bishop, 2004).
When the station’s owners decided to retire and sell the frequency allocation in 2004, they hoped to find investors to continue the online streaming version of the station. After several months of unsuccessful searching, however, the station went off the air entirely—only to find a last-minute investor willing to fund an Internet version of the station (WOXY).
The online version of the station struggled to make ends meet until it was purchased by the online music firm Lala (Cheng, 2010). The now-defunct Lala sold WOXY to music company Future Sounds Inc., which moved the station and staff from Ohio to Austin, Texas. In March 2010, citing “current economic realities and the lack of ongoing funding,” WOXY.com went off the air with only a day’s notice (Cheng, 2010).
Taken in the context of the modern Internet revolution and the subsequent faltering of institutions such as newspapers and book publishers, the rise and fall of WOXY may seem to bode ill for the general fate of radio. However, taken in the larger context of radio’s history, this story of the Internet’s effect on radio could prove to be merely another leap in a long line of radio revolutions. From the shutting down of all broadcasts during World War I to the eclipse of radio by television during the 1950s, many arbiters of culture and business have prophesied the demise of radio for decades. Yet this chapter will show how the inherent flexibility and intimacy of the medium have allowed it to adapt to new market trends and to continue to have relevance as a form of mass communication.
References
Bishop, Lauren. “97X Farewell,” Cincinnati Enquirer, May 10, 2004, http://www.enquirer.com/editions/2004/05/10/tem_tem1a.html.
Cheng, Jacqui. “Bad Luck, Funding Issues Shutter Indie Station WOXY.com,” Ars Technica (blog), March 23, 2010, http://arstechnica.com/media/news/2010/03/bad-luck-funding-issues-shutter-indie-station-woxycom.ars.
WOXY, “The History of WOXY,” 2009, http://woxy.com/about/.
7.2 Evolution of Radio Broadcasting
Learning Objectives
- Identify the major technological changes in radio as a medium since its inception.
- Explain the defining characteristics of radio’s Golden Age.
- Describe the effects of networks and conglomerates on radio programming and culture.
At its most basic level, radio is communication through the use of radio waves. This includes radio used for person-to-person communication as well as radio used for mass communication. Both of these functions are still practiced today. Although most people associate the term radio with radio stations that broadcast to the general public, radio wave technology is used in everything from television to cell phones, making it a primary conduit for person-to-person communication.
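Because a station’s place on the dial is simply the frequency at which it transmits, a small piece of textbook physics can help make sense of the frequencies mentioned throughout this chapter. (The worked figures below are standard physics offered for illustration; they are not drawn from this chapter’s sources.) A radio wave’s length follows from its frequency:

\lambda = \frac{c}{f}

where c is the speed of light (about 300 million meters per second) and f is the frequency. A station at 1,000 kHz on the AM dial therefore transmits waves roughly 300 meters long, while an FM station at 100 MHz transmits waves about 3 meters long.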
The Invention of Radio
Guglielmo Marconi is often credited as the inventor of radio. As a young man living in Italy, Marconi read a biography of Heinrich Hertz, who had written about and experimented with early forms of wireless transmission. Marconi then duplicated Hertz’s experiments in his own home, successfully sending transmissions from one side of his attic to the other (PBS). He saw the potential for the technology and approached the Italian government for support. When the government showed no interest in his ideas, Marconi moved to England and took out a patent on his device. Rather than inventing radio from scratch, however, Marconi essentially combined the ideas and experiments of other people to make them into a useful communications tool (Coe, 1996).
In fact, long-distance electronic communication has existed since the middle of the 19th century. The telegraph communicated messages through a series of long and short clicks. Transatlantic cables connected even the distant United States and England using this technology. By the 1870s, telegraph technology had been used to develop the telephone, which could transmit an individual’s voice over the same cables used by its predecessor.
When Marconi popularized wireless technology, contemporaries initially viewed it as a way to allow the telegraph to function in places that could not be connected by cables. Early radios acted as devices for naval ships to communicate with other ships and with land stations; the focus was on person-to-person communication. However, the potential for broadcasting—sending messages to a large group of potential listeners—wasn’t realized until later in the development of the medium.
Broadcasting Arrives
The technology needed to build a radio transmitter and receiver was relatively simple, and the knowledge to build such devices soon reached the public. Amateur radio operators quickly crowded the airwaves, broadcasting messages to anyone within range, and by 1912 had incurred government regulatory measures that required licenses and limited broadcast ranges for radio operation (White). This regulation also gave the president the power to shut down all stations, a power notably exercised in 1917 upon the United States’ entry into World War I to keep amateur radio operators from interfering with military use of radio waves for the duration of the war (White).
Wireless technology made radio as it is known today possible, but its modern, practical function as a mass communication medium had been the domain of other technologies for some time. As early as the 1880s, people relied on telephones to transmit news, music, church sermons, and weather reports. In Budapest, Hungary, for example, a subscription service allowed individuals to listen to news reports and fictional stories on their telephones (White). Around this time, telephones also transmitted opera performances from Paris to London. In 1909, this innovation emerged in the United States as a pay-per-play phonograph service in Wilmington, Delaware, that allowed subscribers to listen to specific music recordings on their telephones (White).
In 1906, Massachusetts resident Reginald Fessenden initiated the first radio transmission of the human voice, but his efforts did not develop into a useful application (Grant, 1907). Ten years later, Lee de Forest used radio in a more modern sense when he set up an experimental radio station, 2XG, in New York City. De Forest gave nightly broadcasts of music and news until World War I halted all transmissions for private citizens (White).
Radio’s Commercial Potential
After the World War I radio ban lifted with the close of the conflict in 1919, a number of small stations began operating using technologies that had developed during the war. Many of these stations developed regular programming that included religious sermons, sports, and news (White). As early as 1922, Schenectady, New York’s WGY broadcast over 40 original dramas, showing radio’s potential as a dramatic medium. The WGY players created their own scripts and performed them live on air. This same groundbreaking group also made the first known attempt at television drama in 1928 (McLeod, 1998).
Businesses such as department stores, which often had their own stations, first put radio’s commercial applications to use. However, these stations did not advertise in a way that the modern radio listener would recognize. Early radio advertisements consisted only of a “genteel sales message broadcast during ‘business’ (daytime) hours, with no hard sell or mention of price (Sterling & Kittross, 2002).” In fact, radio advertising was originally considered an unprecedented invasion of privacy, because—unlike newspapers, which were bought at a newsstand—radios were present in the home and spoke with a voice in the presence of the whole family (Sterling & Kittross, 2002). However, the social impact of radio was such that within a few years advertising was readily accepted on radio programs. Advertising agencies even began producing their own radio programs named after their products. At first, ads ran only during the day, but as economic pressure mounted during the Great Depression in the 1930s, local stations began looking for new sources of revenue, and advertising became a normal part of the radio soundscape (Sterling & Kittross, 2002).
The Rise of Radio Networks
Not long after radio’s broadcast debut, large businesses saw its potential profitability and formed networks. In 1926, RCA started the National Broadcasting Company (NBC). NBC soon formed its Red and Blue networks: groups of stations that carried syndicated network programs along with a variety of local shows. Two years after the creation of NBC, the United Independent Broadcasters became the Columbia Broadcasting System (CBS) and began competing with the existing Red and Blue networks (Sterling & Kittross, 2002).
Although early network programming focused mainly on music, it soon developed to include other programs. Among these early innovations was the variety show. This format generally featured several different performers introduced by a host who segued between acts. Variety shows included styles as diverse as jazz and early country music. At night, dramas and comedies such as Amos ’n’ Andy, The Lone Ranger, and Fibber McGee and Molly filled the airwaves. News, educational programs, and other types of talk programs also rose to prominence during the 1930s (Sterling & Kittross, 2002).
The Radio Act of 1927
In the mid-1920s, profit-seeking companies such as department stores and newspapers owned a majority of the nation’s broadcast radio stations, which promoted their owners’ businesses (ThinkQuest). Nonprofit groups such as churches and schools operated another third of the stations. As the number of radio stations outgrew the available frequencies, interference became problematic, and the government stepped into the fray.
The Radio Act of 1927 established the Federal Radio Commission (FRC) to oversee regulation of the airwaves. A year after its creation, the FRC reallocated station frequencies to correct interference problems. The organization reserved 40 high-powered channels, setting aside 37 of these for network affiliates. The remaining 600 lower-powered channels went to stations that had to share frequencies; as one station went off the air at a designated time, another began broadcasting in its place. The Radio Act of 1927 allowed major networks such as CBS and NBC to gain a 70 percent share of U.S. broadcasting by the early 1930s, earning them $72 million in profits by 1934 (McChesney, 1992). At the same time, nonprofit broadcasting fell to only 2 percent of the market (McChesney, 1992).
In protest of the favor that the 1927 Radio Act showed toward commercial broadcasting, struggling nonprofit radio broadcasters created the National Committee on Education by Radio to lobby for more outlets. Basing their argument on the notion that the airwaves—unlike newspapers—were a public resource, they asserted that groups working for the public good should take precedence over commercial interests. Nevertheless, the Communications Act of 1934 passed without addressing these issues, and radio continued as a mainly commercial enterprise (McChesney, 1992).
The Golden Age of Radio
The so-called Golden Age of Radio occurred between 1930 and the mid-1950s. Because many associate the 1930s with the struggles of the Great Depression, it may seem contradictory that such a fruitful cultural occurrence arose during this decade. However, radio lent itself to the era. After the initial purchase of a receiver, radio was free and so provided an inexpensive source of entertainment that replaced other, more costly pastimes, such as going to the movies.
Radio also presented an easily accessible form of media that ran on its own schedule rather than the listener’s. Unlike newspapers or books, which could be picked up at any time, a favorite program aired at a fixed hour, so tuning in became part of listeners’ daily routines as they planned their days around the dial.
Daytime Radio Finds Its Market
During the Great Depression, radio became so successful that another network, the Mutual Broadcasting Network, began in 1934 to compete with NBC’s Red and Blue networks and the CBS network, creating a total of four national networks (Cashman, 1989). As the networks became more adept at generating profits, their broadcast selections began to take on a format that later evolved into modern television programming. Serial dramas and programs that focused on domestic work aired during the day when many women were at home. Advertisers targeted this demographic with commercials for domestic needs such as soap (Museum). Because they were often sponsored by soap companies, daytime serial dramas soon became known as soap operas. Some modern televised soap operas, such as Guiding Light, which ended in 2009, actually began in the 1930s as radio serials (Hilmes, 1999).
The Origins of Prime Time
During the evening, many families listened to the radio together, much as modern families may gather for television’s prime time. Popular evening comedy variety shows such as George Burns and Gracie Allen’s Burns and Allen, the Jack Benny Show, and the Bob Hope Show all began during the 1930s. These shows featured a central host—for whom the show was often named—and a series of sketch comedies, interviews, and musical performances, not unlike contemporary programs such as Saturday Night Live. Performed live before a studio audience, the programs thrived on a certain flair and spontaneity. Later in the evening, so-called prestige dramas such as Lux Radio Theater and Mercury Theatre on the Air aired. These shows featured major Hollywood actors recreating movies or acting out adaptations of literature (Hilmes).
Instant News
By the late 1930s, the popularity of radio news broadcasts had surpassed that of newspapers. Radio’s ability to draw its audiences emotionally close to events made for news that evoked stronger responses, and thus greater interest, than print news could. For example, when the infant son of famed aviator Charles Lindbergh was kidnapped and murdered in 1932, radio networks set up mobile stations that covered events as they unfolded, broadcasting nonstop for several days and keeping listeners updated on every detail while tying them emotionally to the outcome (Brown, 1998).
As recording technology advanced, reporters gained the ability to record events in the field and bring them back to the studio to broadcast over the airwaves. One early example of this was Herb Morrison’s recording of the Hindenburg disaster. In 1937, the airship Hindenburg burst into flames while attempting to land, killing 36 people. Morrison was already on the scene to record the descent, capturing the fateful crash. The entire event was later broadcast, including the sound of the exploding airship, providing listeners with an unprecedented emotional connection to a national disaster. Morrison’s exclamation “Oh, the humanity!” became a common phrase of despair after the event (Brown, 1998).
Radio news became even more important during World War II, when programs such as Norman Corwin’s This Is War! sought to bring more sober news stories to a radio dial dominated by entertainment. The program dealt with the realities of war in a somber manner; at the beginning of the program, the host declared, “No one is invited to sit down and take it easy. Later, later, there’s a war on (Horten, 2002).” In 1940, Edward R. Murrow, a journalist working in England at the time, broadcast firsthand accounts of the German bombing of London, giving Americans a sense of the trauma and terror that the English were experiencing at the outset of the war (Horten, 2002). Radio news outlets were the first to broadcast the attack on Pearl Harbor that propelled the United States into World War II in 1941. By 1945, radio news had become so efficient and pervasive that when President Franklin D. Roosevelt died, only his wife, his children, and Vice President Harry S. Truman were aware of it before the news was broadcast over the public airwaves (Brown).
The Birth of the Federal Communications Commission
The Communications Act of 1934 created the Federal Communications Commission (FCC) and ushered in a new era of government regulation. The organization quickly began enacting influential radio decisions. Among these was the 1938 decision to limit stations to 50,000 watts of broadcasting power, a ceiling that remains in effect today (Cashman). As a result of FCC antimonopoly rulings, RCA was forced to sell its NBC Blue network; this spun-off division became the American Broadcasting Company (ABC) in 1943 (Brinson, 2004).
Another significant regulation with long-lasting influence was the Fairness Doctrine. In 1949, the FCC established the Fairness Doctrine as a rule stating that if broadcasters editorialized in favor of a position on a particular issue, they had to give equal time to all other reasonable positions on that issue (Browne & Browne, 1986). This tenet came from the long-held notion that the airwaves were a public resource and should thus serve the public in some way. Although the regulation remained in effect until 1987, the impact of its core concepts is still debated. This chapter will explore the Fairness Doctrine and its impact in greater detail in a later section.
Radio on the Margins
Despite the networks’ hold on programming, educational stations persisted at universities and in some municipalities. They broadcast programs such as School of the Air and College of the Air as well as roundtable and town hall forums. In 1940, the FCC reserved a set of frequencies in the lower range of the FM radio spectrum for public education purposes as part of its regulation of the new spectrum. The reservation of FM frequencies gave educational stations a boost, but FM proved initially unpopular due to a setback in 1945, when the FCC moved the FM band to a higher set of frequencies (the range that became today’s 88–108 MHz FM dial), ostensibly to avoid problems with interference (Longley, 1968). This change required the purchase of new equipment by both consumers and radio stations, thus greatly slowing the widespread adoption of FM radio.
One enduring anomaly in the field of educational stations has been the Pacifica Radio network. Begun in 1949 to counteract the effects of commercial radio by bringing educational programs and dialogue to the airwaves, Pacifica has grown from a single station—Berkeley, California’s KPFA—to a network of five stations and more than 100 affiliates (Pacifica Network). From the outset, Pacifica aired newer classical, jazz, and folk music along with lectures, discussions, and interviews with public artists and intellectuals. Among Pacifica’s major innovations was its refusal to take money from commercial advertisers, relying instead on donations from listeners and grants from institutions such as the Ford Foundation and calling itself listener-supported (Mitchell, 2005).
Another important innovation on the fringes of the radio dial during this time was the growth of border stations. Located just across the Mexican border, these stations did not have to follow FCC rules or other U.S. broadcasting regulations. Because the stations broadcast at 250,000 watts and higher, their listening range covered much of North America. Their content also diverged—at the time markedly—from that of U.S. stations. For example, Dr. John Brinkley started station XERF just across the border from Del Rio, Texas, after being forced to shut down his station in Kansas, and he used the border station in part to promote a dubious goat gland operation that supposedly cured sexual impotence (Dash, 2008). Besides the goat gland promotion, the station and others like it often carried music, like country and western, that could not be heard on regular network radio. Later border station disc jockeys, such as Wolfman Jack, were instrumental in bringing rock and roll music to a wider audience (Rudel, 2008).
Television Steals the Show
A great deal of radio’s success as a medium during the 1920s and 1930s was due to the fact that no other medium could replicate it. This changed in the late 1940s and early 1950s as television became popular. A 1949 poll of people who had seen television found that almost half of them believed that radio was doomed (Gallup, 1949). Television sets had come on the market by the late 1940s, and by 1951, more Americans were watching television during prime time than ever before (Bradley). Famous radio programs such as The Bob Hope Show were made into television shows, further diminishing radio’s unique offerings (Cox, 2009).
Surprisingly, some of radio’s most critically lauded dramas launched during this period. Gunsmoke, an adult-oriented Western that later became one of television’s longest-running shows, began in 1952; crime drama Dragnet, later made famous in both television and film, aired between 1949 and 1957; and Yours Truly, Johnny Dollar ran from 1949 to 1962, when CBS canceled its remaining radio dramas. However, these respected radio dramas were the last of their kind (Cox, 2002). Although radio was far from doomed by television, its Golden Age was.
Transition to Top 40
As radio networks abandoned the dramas and variety shows that had previously sustained their formats, the soundscape was left to what radio could still do better than any other mass medium: play music. With advertising dollars down and the emergence of better recording formats, it made good business sense for radio to focus on shows that played prerecorded music. As strictly music stations began to rise, new innovations to increase their profitability appeared. One of the most notable and far-reaching of these innovations was the Top 40 station, a concept that supposedly came from watching jukebox patrons continually play the same songs (Brewster & Broughton, 2000). Todd Storz and Gordon McLendon began adapting existing radio stations to fit this new format with great success. In 1956, the creation of limited playlists further refined the format by providing about 50 songs that disc jockeys played repeatedly every day. By the early 1960s, many stations had developed limited playlists of only 30 songs (Walker, 2001).
Another musically fruitful innovation came with the increase of Black disc jockeys and programs created for Black audiences. Because advertisers seeking to reach Black listeners had few other outlets in a media market dominated by White performers, Black-oriented radio became more common on the AM dial. As traditional programming left radio, disc jockeys began to develop as the medium’s new personalities, talking more in between songs and developing followings. Early Black disc jockeys even began improvising rhymes over the music, pioneering techniques that later became rap and hip-hop. This new personality-driven style helped bring early rock and roll to new audiences (Walker, 2001).
FM: The High-Fidelity Counterculture
As music came to rule the airwaves, FM radio drew in new listeners because of its high-fidelity sound capabilities. When radio had primarily featured dramas and other talk-oriented formats, sound quality had simply not mattered to many people, and the purchase of an FM receiver did not compete with the purchase of a new television in terms of entertainment value. As FM receivers decreased in price and stereo recording technology became more popular, however, the high-fidelity trend created a market for FM stations. Largely affluent consumers began purchasing component stereos with the goal of getting the highest sound quality possible out of their recordings (Douglas, 2004). Although this audience often preferred classical and jazz stations to Top 40 radio, they were tolerant of new music and ideas (Douglas, 2004).
Both the high-fidelity market and the growing youth counterculture of the 1960s had similar goals for the FM spectrum. Both groups eschewed AM radio because of its predictable programming, poor sound quality, and overcommercialization, and both wanted to treat music as an important experience rather than as just a trendy pastime or a means to make money. Many adherents of the youth counterculture came from affluent, middle-class families, and their tastes came to define a new era of consumer culture. Together, the goals and market potential of the high-fidelity lovers and the youth counterculture created an atmosphere on the FM dial unlike any that had existed before (Douglas, 2004).
Between 1960 and 1966, the number of households capable of receiving FM transmissions grew from about 6.5 million to some 40 million. The FCC also aided FM by issuing its nonduplication ruling in 1964. Before this regulation, many AM stations had sister stations on the FM spectrum that simply duplicated their AM programming. The nonduplication rule forced FM stations to create their own fresh programming, opening up the spectrum for established networks to develop new stations (Douglas, 2004).
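To get a rough sense of how fast that adoption was (this is simple arithmetic on the figures above, not a statistic from the cited source), growth from 6.5 million to 40 million households over six years implies

\left(\frac{40}{6.5}\right)^{1/6} \approx 1.35,

that is, roughly 35 percent more FM-capable households each year across the period.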
The late 1960s saw new disc jockeys taking greater liberties with established practices; these liberties included playing several songs in a row before going to a commercial break or airing album tracks that exceeded 10 minutes in length. University stations and other nonprofit ventures to which the FCC had given frequencies during the late 1940s popularized this format, and, in time, commercial stations tried to duplicate their success by playing fewer commercials and by allowing their disc jockeys to have a say in their playlists. Although this made for popular listening formats, FM stations struggled to make the kinds of profits that the AM spectrum drew (Douglas, 2004).
In 1974, FM radio accounted for one-third of all radio listening but only 14 percent of radio profits (Douglas, 2004). Large network stations and advertisers began to market heavily to the FM audience in an attempt to correct this imbalance. Stations began tightening their playlists and narrowing their formats to please advertisers and to generate greater revenues. By the end of the 1970s, radio stations were beginning to play specific formats, and the progressive radio of the previous decade had become difficult to find (Douglas, 2004).
The Rise of Public Radio
After the Golden Age of Radio came to an end, most listeners tuned in to radio stations to hear music. The variety shows and talk-based programs that had sustained radio in early years could no longer draw enough listeners to make them a successful business proposition. One divergent path from this general trend, however, was the growth of public radio.
Groups such as the Ford Foundation had funded public media sources during the early 1960s. When the foundation decided to withdraw its funding in the middle of the decade, the federal government stepped in with the Public Broadcasting Act of 1967. This act created the Corporation for Public Broadcasting (CPB) and charged it with generating funding for public television and radio outlets. The CPB in turn created National Public Radio (NPR) in 1970 to provide programming for already-operating stations. Until 1982, in fact, the CPB was NPR’s sole source of funding. Public radio’s first program was All Things Considered, an evening news program that focused on analysis and interpretive reporting rather than breaking news coverage. In the mid-1970s, NPR attracted Washington-based journalists such as Cokie Roberts and Linda Wertheimer to its ranks, giving the coverage a more professional, hard-reporting edge (Schardt, 1996).
However, in 1983, public radio was pushed to the brink of financial collapse. NPR survived in part by relying more on its member stations to hold fundraising drives, now a vital component of public radio’s business model. In 2003, Joan Kroc, the widow of McDonald’s CEO and philanthropist Ray Kroc, left NPR a bequest of more than $200 million that may keep it afloat for many years to come.
Having weathered the financial storm intact, NPR continued its progression as a respected news provider. During the first Gulf War, NPR sent out correspondents for the first time to provide in-depth coverage of unfolding events. Public radio’s extensive coverage of the September 11, 2001, terrorist attacks gained its member stations many new listeners, and its news operation has since expanded (Clift, 2011). Although some have accused NPR of presenting the news with a liberal bias, its listenership in 2005 was 28 percent conservative, 32 percent liberal, and 29 percent moderate. Newt Gingrich, a conservative Republican and former Speaker of the House, has stated that the network is “a lot less on the left” than some may believe (Sherman, 2005). With more than 26 million weekly listeners and 860 member stations in 2009, NPR has become a leading radio news source (Kamenetz, 2009).
Public radio distributors such as Public Radio International (PRI) and local public radio stations such as WBEZ in Chicago have also created a number of cultural and entertainment programs, including quiz shows, cooking shows, and a host of local public forum programs. Storytelling programs such as This American Life have created a new kind of free-form radio documentary genre, while shows such as PRI’s variety show A Prairie Home Companion have revived older radio genres. This variety of popular public radio programming has shifted radio from a music-dominated medium to one that is again exploring its vast potential.
Conglomerates
During the early 1990s, many radio stations suffered the effects of an economic recession. Some stations initiated local marketing agreements (LMAs) to share facilities and resources amid this economic decline. LMAs led to consolidation in the industry as radio stations bought other stations to create new hubs for the same programming. The Telecommunications Act of 1996 further increased consolidation by eliminating a duopoly rule prohibiting dual station ownership in the same market and by lifting the numerical limits on station ownership by a single entity.
As large corporations such as Clear Channel Communications bought up stations around the country, they reformatted stations that had once competed against one another so that each focused on a different format. This practice led to mainstream radio’s present state, in which narrow formats target highly specific demographic audiences.
Ultimately, although the industry consolidation of the 1990s made radio profitable, it reduced local coverage and diversity of programming. Because stations around the country served as outlets for a single network, the radio landscape became more uniform and predictable (Keith, 2010). Much as with chain restaurants and stores, some people enjoy this type of predictability, while others prefer a more localized, unique experience (Keith, 2010).
Key Takeaways
- The Golden Age of Radio covered the period between 1930 and the mid-1950s. It was characterized by radio’s overwhelming popularity and a wide range of programming, including variety, music, drama, and theater programs.
- Top 40 radio arose after most nonmusic programming moved to television. This format used short playlists of popular hits and gained a great deal of commercial success during the 1950s and 1960s.
- FM became popular during the late 1960s and 1970s as commercial stations adopted the practices of free-form stations to appeal to new audiences who desired higher fidelity and a less restrictive format.
- Empowered by the Telecommunications Act of 1996, single media conglomerates have acquired unprecedented numbers of radio stations. Radio station consolidation brings predictability and profits at the expense of unique programming.
Exercises
Please respond to the following short-answer writing prompts. Each response should be a minimum of one paragraph.
- Explain the advantages that radio had over traditional print media during the 1930s and 1940s.
- Do you think that radio could experience another golden age? Explain your answer.
- How has the consolidation of radio stations affected radio programming?
- Characterize the overall effects of one significant technological or social shift described in this section on radio as a medium.
References
Bradley, Becky. “American Cultural History: 1950–1959,” Lone Star College, Kingwood, http://kclibrary.lonestar.edu/decade50.html.
Brewster, Bill and Frank Broughton, Last Night a DJ Saved My Life: The History of the Disc Jockey, (New York: Grove Press, 2000), 48.
Brinson, Susan. The Red Scare, Politics, and the Federal Communications Commission, 1941–1960 (Westport, CT: Praeger, 2004), 42.
Brown, Robert. Manipulating the Ether: The Power of Broadcast Radio in Thirties America (Jefferson, NC: McFarland, 1998), 134–137.
Browne, Ray and Glenn Browne, Laws of Our Fathers: Popular Culture and the U.S. Constitution (Bowling Green, OH: Bowling Green State University Popular Press, 1986), 132.
Cashman, Sean. America in the Twenties and Thirties: The Olympian Age of Franklin Delano Roosevelt (New York: New York University Press, 1989), 328.
Clift, Nick. “Viewpoint: Protect NPR, It Protects Us,” Michigan Daily, February 15, 2011, http://www.michigandaily.com/content/viewpoint-npr.
Coe, Lewis. Wireless Radio: A Brief History (Jefferson, NC: McFarland, 1996), 4–10.
Cox, Jim. American Radio Networks: A History (Jefferson, NC: McFarland, 2009), 171–175.
Cox, Jim. Say Goodnight, Gracie: The Last Years of Network Radio (Jefferson, NC: McFarland, 2002), 39–41.
Dash, Mike. “John Brinkley, the goat-gland quack,” The Telegraph, April 18, 2008, http://www.telegraph.co.uk/culture/books/non_fictionreviews/3671561/John-Brinkley-the-goat-gland-quack.html.
Douglas, Susan. Listening In: Radio and the American Imagination (Minneapolis: University of Minnesota Press, 2004), 266–268.
Gallup, George. “One-Fourth in Poll Think Television Killing Radio,” Schenectady (NY) Gazette, June 8, 1949, http://news.google.com/newspapers?id=d3YuAAAAIBAJ&sjid=loEFAAAAIBAJ&pg=840,1029432&dq=radio-is-doomed&hl=en.
Grant, John. Experiments and Results in Wireless Telegraphy (reprinted from The American Telephone Journal, 49–51, January 26, 1907), http://earlyradiohistory.us/1907fes.htm.
Hilmes, Michele. Radio Voices: American Broadcasting 1922–1952 (Minneapolis: University of Minnesota Press, 1999), 157.
Horten, Gerd. Radio Goes to War: The Cultural Politics of Propaganda During World War II (Los Angeles: University of California Press, 2002), 48–52.
Kamenetz, Anya. “Will NPR Save the News?” Fast Company, April 1, 2009, http://www.fastcompany.com/magazine/134/finely-tuned.html.
Keith, Michael. The Radio Station: Broadcast, Satellite and Internet (Burlington, MA: Focal Press, 2010), 17–24.
Longley, Lawrence D. “The FM Shift in 1945,” Journal of Broadcasting 12, no. 4 (1968): 353–365.
McChesney, Robert W. “Media and Democracy: The Emergence of Commercial Broadcasting in the United States, 1927–1935,” in “Communication in History: The Key to Understanding,” OAH Magazine of History 6, no. 4 (1992).
McLeod, Elizabeth. “The WGY Players and the Birth of Radio Drama,” 1998, http://www.midcoast.com/~lizmcl/wgy.html.
Mitchell, Jack. Listener Supported: The Culture and History of Public Radio (Westport, CT: Praeger, 2005), 21–24.
Museum of Broadcast Communications, “Soap Opera,” http://www.museum.tv/eotvsection.php?entrycode=soapopera.
Pacifica Network, “Pacifica Network Stations,” The Pacifica Foundation, http://pacificanetwork.org/radio/content/section/7/42/.
PBS, “Guglielmo Marconi,” American Experience: People & Events, http://www.pbs.org/wgbh/amex/rescue/peopleevents/pandeAMEX98.html.
Rudel, Anthony. Hello, Everybody! The Dawn of American Radio (Orlando, FL: Houghton Mifflin Harcourt, 2008), 130–132.
Schardt, Sue. “Public Radio—A Short History,” Christian Science Monitor Publishing Company, 1996, http://www.wsvh.org/pubradiohist.htm.
Sherman, Scott. “Good, Gray NPR,” The Nation, May 23, 2005, 34–38.
Sterling, Christopher and John Kittross, Stay Tuned: A History of American Broadcasting, 3rd ed. (New York: Routledge, 2002), 124.
ThinkQuest, “Radio’s Emergence,” Oracle ThinkQuest: The 1920s, http://library.thinkquest.org/27629/themes/media/md20s.html.
Walker, Jesse. Rebels on the Air: An Alternative History of Radio in America (New York: New York University Press, 2001), 56.
White, Thomas. “Broadcasting After World War I (1919–1921),” United States Early Radio History, http://earlyradiohistory.us/sec016.htm.
White, Thomas. “News and Entertainment by Telephone (1876–1925),” United States Early Radio History, http://earlyradiohistory.us/sec003.htm.
White, Thomas. “Pioneering Amateurs (1900–1917),” United States Early Radio History, http://earlyradiohistory.us/sec012.htm.
White, Thomas. “Pre-War Vacuum Tube Transmitter Development (1914–1917),” United States Early Radio History, http://earlyradiohistory.us/sec011.htm.
7.3 Radio Station Formats
Learning Objectives
- Describe the use of radio station formats in the development of modern stations.
- Analyze the effects of formats on radio programming.
Early radio network programming laid the groundwork for television’s format, with many different programs that appealed to a variety of people broadcast at different times of the day. As television’s popularity grew, however, radio could not compete and so turned to fresh programming techniques. A new type of format-driven station became the norm. The evolution of radio station formats was propelled by the development of new types of music, such as psychedelic rock and smooth jazz. Since the beginning of this shift, different stations have tended to focus on the music that certain demographics preferred. For example, many people raised on Top 40 radio of the 1950s and 1960s did not necessarily want to hear modern pop hits, so stations playing older popular songs emerged to meet their needs.
Modern formats take into account aging generations, with certain stations specifically playing the pop hits of the 1950s and early 1960s, and others focusing on the pop hits of the late 1960s, 1970s, and 1980s. These formats have developed to target narrow, defined audiences with predictable tastes and habits. Ratings services such as Arbitron can identify the 10-year age bracket, the education level, and even the political leanings of listeners who prefer a particular format. Because advertisers want their commercials to reach an audience likely to buy their products, this kind of audience targeting is crucial for advertising revenue.
Top Radio Formats
The following top radio formats and their respective statistics were determined by an Arbitron survey that was released in 2010 (Arbitron Inc., 2010). The most popular formats and subformats cover a wide range of demographics, revealing radio’s wide appeal.
Country
Country music as a format includes stations devoted both to older and newer country music. In 2010, the country music format stood as the most popular radio format, beating out even such prominent nonmusic formats as news and talk. The format commanded the greatest listener share and the second largest number of stations dedicated to the style. Favored in rural regions of the country, the country music format—featuring artists like Keith Urban, the Dixie Chicks, and Tim McGraw—appeals to both male and female listeners from a variety of income levels (Arbitron Inc., 2010).
News/Talk/Information
The news/talk/information format includes AM talk radio, public radio stations with talk programming, network news radio, sports radio, and personality talk radio. This format reached nearly 59 million listeners in 2010, appealing particularly to those aged 65 and older; over 70 percent of its listeners had attended college. These listeners also ranked the highest among formats in levels of home ownership (Arbitron Inc., 2010).
Adult Contemporary
Generally targeted toward individuals over 30, the adult contemporary (AC) format favors pop music from the last 15 to 20 years as opposed to current hits. Different subformats, such as hot AC and modern AC, target younger audiences by playing songs that are more current. In 2010, the majority of the AC audience consisted of affluent, married individuals whose politics roughly matched the national average. Adult contemporary listeners ranked highest by format in at-work listening. Hot AC, a subformat of AC that plays more current hits, ranked seventh in the nation. Urban AC, a version of AC that focuses on older R&B hits, ranked eighth in the nation in 2010 (Arbitron Inc., 2010).
Pop Contemporary Hit Radio
Pop contemporary hit radio, or pop CHR, is a subformat of contemporary hit radio (CHR). Other subformats of CHR include dance CHR and rhythmic CHR. Branded in the 1980s, this format encompasses stations that have a Top 40 orientation but draw on a wide range of formats, such as country, rock, and urban (Ford, 2008). In 2010, pop CHR ranked first among teenaged listeners, with 65 percent of its overall listeners under the age of 35. This music, from popular artists such as Taylor Swift, Kanye West, and Shakira, was played in the car more than at home or work and saw its largest listening times in the evenings. Rhythmic CHR, a subformat focusing on a mix of rhythmic pop, R&B, dance, and hip-hop hits, also ranked high in 2010 (Arbitron).
Classic Rock
Classic rock stations generally play rock singles from the 1970s and 1980s, like “Stairway to Heaven,” by Led Zeppelin, and “You Shook Me All Night Long,” by AC/DC. Another distinct but similar format is album-oriented rock (AOR). This format focuses on songs that were not necessarily released as singles, also known as album cuts (Radio Station World). In 2010, classic rock stations ranked fifth in listener figures. These individuals were overwhelmingly men (70 percent) between the ages of 35 and 54 (54 percent). Classic rock was most often listened to in the car and at work, with only 26 percent of its listeners tuning in at home (Arbitron).
Urban Contemporary
The urban contemporary format plays modern hits from mainly Black artists—such as Lil Wayne, John Legend, and Ludacris—featuring a mix of soul, hip-hop, and R&B. In 2010, the format ranked eleventh in the nation. Urban contemporary focuses on listeners in the 18–34 age range (Arbitron).
Mexican Regional
The Mexican regional format is devoted to Spanish-language music, particularly Mexican and South American genres. In 2010, it ranked thirteenth in the nation and held the top spot in Los Angeles, a reflection of the rise in immigration from Mexico, Central America, and South America. Mexican regional’s listener base was over 96 percent Hispanic, and the format was most popular in the Western and Southwestern regions of the country. However, it was less popular in the Eastern regions of the country; in New England, for example, the format held a zero percent share of listening. The rise of the Mexican regional format illustrates the ways in which radio can change rapidly to meet new demographic trends (Arbitron).
An increasingly Spanish-speaking population in the United States has also resulted in a number of distinct Spanish-language radio formats. These include Spanish oldies, Spanish adult hits, Spanish religious, Spanish tropical, and Spanish talk, among others. Tejano, a type of music developed in Hispanic Texan communities, has also gained enough of an audience to become a dedicated format (Arbitron).
Other Popular Formats
Radio formats have become so specialized that ratings group Arbitron includes more than 50 designations. What was once simply called rock music has been divided into such subformats as alternative and modern rock. Alternative rock began as a format played on college stations during the 1980s but developed as a mainstream format during the following decade, thanks in part to the popular grunge music of that era. As this music aged, stations began using the term modern rock to describe a format dedicated to new rock music. This format has also spawned the active rock format, which plays modern rock hits with older rock hits thrown in (Radio Station World).
Nostalgia formats have split into a number of different formats as well. Oldies stations now generally focus on hits from the 1950s and 1960s, while the classic hits format chooses from hits of the 1970s, 1980s, and 1990s. Urban oldies, which focuses on R&B, soul, and other urban music hits from the 1950s, 1960s, and 1970s, has also become a popular radio format. Formats such as adult hits mix older songs from the 1970s, 1980s, and 1990s with a small selection of popular music, while formats such as ’80s hits pick mainly from the 1980s (Radio Station World).
Radio station formats are an interesting way to look at popular culture in the United States. The evolution of nostalgia formats to include new decades nods to the size and tastes of the nation’s aging listeners. Hits of the 1980s are popular enough with their demographic to have entire stations dedicated to them, while other generations prefer stations with a mix of decades. The rise of the country format and the continued popularity of the classic rock format are potential indicators of cultural trends.
Key Takeaways
- Radio station formats target demographics that can generate advertising revenue.
- Contemporary hit radio was developed as a Top 40 format that expanded beyond strictly pop music to include country, rock, and urban formats.
- Spanish-language formats have grown in recent years, with Mexican regional ranking among the nation’s most popular formats and holding the top spot in Los Angeles in 2010.
- Nostalgia genres have developed to reflect the tastes of aging listeners, ranging from mixes of music from the 1970s, 1980s, and 1990s with current hits to formats that pick strictly from the 1980s.
Exercises
Please respond to the following writing prompts. Each response should be a minimum of one paragraph.
- What is the purpose of radio station formats?
- How have radio station formats affected the way that modern stations play music?
- Pick a format, such as country or classic rock, and speculate on the reasons for its popularity.
References
Arbitron Inc., Radio Today: How America Listens to Radio, 2010.
Ford, John. “Contemporary Hit Radio,” September 2, 2008, http://radioindustry.suite101.com/article.cfm/contemporary_hit_radio_history.
Radio Station World, “Oldies, Adult Hits, and Nostalgia Radio Formats,” Radio Station World, 1996–2010, http://radiostationworld.com/directory/radio_formats/radio_formats_oldies.asp.
Radio Station World, “Rock and Alternative Music Formats,” Radio Station World, 1996–2010, http://radiostationworld.com/directory/Radio_Formats/radio_formats_rock.asp.
7.4 Radio’s Impact on Culture
Learning Objectives
- Analyze radio as a form of mass media.
- Describe the effects of radio on the spread of different types of music.
- Analyze the effects of the Fairness Doctrine on political radio.
- Formulate opinions on controversial issues in radio.
Since its inception, radio’s impact on American culture has been immense. Modern popular culture is unthinkable without the early influence of radio. Entire genres of music that are now taken for granted, such as country and rock, owe their popularity and even existence to early radio programs that publicized new forms.
A New Kind of Mass Media
Mass media such as newspapers had been around for years before the existence of radio. In fact, radio was initially considered a kind of disembodied newspaper. Although this idea gave early proponents a useful, familiar way to think about radio, it underestimated radio’s power as a medium. Newspapers had the potential to reach a wide audience, but radio had the potential to reach almost everyone. Neither illiteracy nor even a busy schedule impeded radio’s success—one could now perform an activity and listen to the radio at the same time. This unprecedented reach made radio an instrument of social cohesion as it brought together members of different classes and backgrounds to experience the world as a nation.
Radio programs reflected this nationwide cultural aspect of radio. Vox Pop, a show originally based on person-in-the-street interviews, was an early attempt to quantify the United States’ growing mass culture. Beginning in 1935, the program billed itself as an unrehearsed “cross-section of what the average person really knows” by asking random people an assortment of questions. Many modern television shows still employ this format not only for viewers’ amusement and information but also as an attempt to sum up national culture (Loviglio, 2002). Vox Pop functioned on a cultural level as an acknowledgement of radio’s entrance into people’s private lives to make them public (Loviglio, 2002).
Radio news was more than just a quick way to find out about events; it was a way for U.S. citizens to experience events with the same emotions. During the Ohio and Mississippi river floods of 1937, radio brought the voices of those who suffered as well as the voices of those who fought the rising tides. A West Virginia newspaper explained the strengths of radio in providing emotional voices during such crises: “Thanks to radio…the nation as a whole has had its nerves, its heart, its soul exposed to the needs of its unfortunates…We are a nation integrated and interdependent. We are ‘our brother’s keeper (Brown).’”
Radio’s presence in the home also heralded the evolution of consumer culture in the United States. In 1941, two-thirds of radio programs carried advertising. Radio allowed advertisers to sell products to a captive audience. This kind of mass marketing ushered in a new age of consumer culture (Cashman).
War of the Worlds and the Power of Radio
During the 1930s, radio’s impact and powerful social influence were perhaps most obvious in the aftermath of Orson Welles’s notorious War of the Worlds broadcast. On the night before Halloween in 1938, Welles told listeners of the Mercury Theatre on the Air that they would be treated to an original adaptation of H. G. Wells’s classic science fiction novel of alien invasion, War of the Worlds. The adaptation began as if it were a normal music program that was then interrupted by news reports of an alien invasion. Listeners who had tuned in late did not hear this disclaimer, and many were caught up by the realism of the adaptation, believing it to be an actual news story.
By some estimates, 6 million people listened to the show, with an incredible 1.7 million believing it to be true (Lubertozzi & Holmsten, 2005). Some listeners called loved ones to say goodbye or ran into the street armed with weapons to fight off the invading Martians of the radio play (Lubertozzi & Holmsten, 2005). In Grovers Mill, New Jersey—where the supposed invasion began—some listeners reported nonexistent fires and fired gunshots at a water tower thought to be a Martian landing craft. One listener drove through his own garage door in a rush to escape the area. Two Princeton University professors spent the night searching for the meteorite that had supposedly preceded the invasion (Lubertozzi & Holmsten, 2005). As calls came in to local police stations, officers explained that they were equally concerned about the problem (Lubertozzi & Holmsten, 2005).
Although the story of the War of the Worlds broadcast may seem funny in retrospect, the event traumatized those who believed it. Individuals from every education level and walk of life had been taken in by the program, despite the producers’ warnings before the program, at its intermission, and at its close (Lubertozzi & Holmsten, 2005). This event revealed the unquestioning faith that many Americans had in radio. Radio’s intimate communication style was a powerful force during the 1930s and 1940s.
Radio and the Development of Popular Music
One of radio’s most enduring legacies is its impact on music. Before radio, most popular songs were distributed through piano sheet music and word of mouth. This necessarily limited the types of music that could gain national prominence. Although recording technology had also emerged several decades before radio, music played live over the radio sounded better than it did on a record played in the home. Live music performances thus became a staple of early radio. Many performance venues had their own radio transmitters to broadcast live shows—for example, Harlem’s Cotton Club broadcast performances that CBS picked up and carried nationwide.
Radio networks mainly played swing jazz, giving the bands and their leaders a widespread audience. Popular bandleaders including Duke Ellington, Benny Goodman, and Tommy Dorsey and their jazz bands became nationally famous through their radio performances, and a host of other jazz musicians flourished as radio made the genre nationally popular (Wald, 2009). National networks also played classical music. Often presented in an educational context, this programming had a different tenor than did dance-band programming. NBC promoted the genre through shows such as the Music Appreciation Hour, which sought to educate both young people and the general public on the nuances of classical music (Howe, 2003). NBC also created the NBC Symphony Orchestra, a 92-piece ensemble under the direction of famed conductor Arturo Toscanini. The orchestra gave its first performance in 1937 and was so popular that Toscanini stayed on as conductor for 17 years (Horowitz, 2005). The Metropolitan Opera was also popular; its broadcasts in the early 1930s had an audience of 9 million listeners (Horowitz, 2005).
Regional Sounds Take Hold
The promotional power of radio also gave regional music an immense boost. Local stations often carried their own programs featuring the popular music of the area. Stations such as Nashville, Tennessee’s WSM played early country, blues, and folk artists. The history of this station illustrates the ways in which radio—and its wide range of broadcasting—created new perspectives on American culture. In 1927, WSM’s program Barn Dance, which featured early country music and blues, followed an hour-long program of classical music. George Hay, the host of Barn Dance, used the juxtaposition of classical and country genres to spontaneously rename the show: “For the past hour we have been listening to music taken largely from Grand Opera, but from now on we will present ‘The Grand Ole Opry (Kyriakoudes).’” NBC picked up the program for national syndication in 1939, and it remains one of the longest-running radio programs of all time.
Shreveport, Louisiana’s KWKH aired an Opry-type show called Louisiana Hayride. This program propelled stars such as Hank Williams into the national spotlight. Through this show, country music, formerly a mix of folk, blues, and mountain music, became a genre accessible to the entire nation. Without programs that featured these country and blues artists, Elvis Presley and Johnny Cash would not have become national stars, and country music might not have risen to become a popular genre (DiMeo, 2010).
In the 1940s, other Southern stations also began playing rhythm and blues records recorded by Black artists. Artists such as Wynonie Harris, famous for his rendition of Roy Brown’s “Good Rockin’ Tonight,” were often played by White disc jockeys who tried to imitate Black Southerners (Laird, 2005). During the late 1940s, Memphis, Tennessee’s WDIA began programming entirely for Black audiences, and Atlanta, Georgia’s WERD became the nation’s first station owned and operated by Black individuals. These stations’ disc jockeys often provided a measure of community leadership at a time when few Black individuals were in powerful positions (Walker).
Radio’s Lasting Influences
Radio technology changed the way that dance and popular music were performed. Because of the use of microphones, vocalists could be heard better over the band, allowing singers to use a greater vocal range and create more expressive styles, an innovation that made singers an important part of popular music’s image. Microphones similarly allowed individual performers to be featured playing solos and lead parts, practices that had been less encouraged before radio. The exposure of radio also led to more rapid turnover in popular music. Before radio, jazz bands could play the same arrangement for several years without it getting old, but as broadcasts reached wide audiences, new arrangements and songs had to be produced at a more rapid pace to keep up with changing tastes (Wald).
The spotlight of radio allowed the personalities of artists to come to the forefront of popular music, giving them newfound celebrity. Phil Harris, the bandleader from the Jack Benny Show, became the star of his own program. Other famous musicians used radio talent shows to gain fame. Popular programs such as Major Bowes and His Original Amateur Hour featured unknown entertainers trying to gain fame through exposure to the show’s large audience. Major Bowes used a gong to usher bad performers offstage, often contemptuously dismissing them, but not all the performers struck out; such successful singers as Frank Sinatra debuted on the program (Sterling & Kitross).
Television, much like modern popular music, owes a significant debt to the Golden Age of Radio. Major radio networks such as NBC, ABC, and CBS became—and remain—major forces in television, and their programming decisions for radio formed the basis for television. Actors, writers, and directors who worked in radio simply transferred their talents into the world of early television, using the successes of radio as their models.
Radio and Politics
Over the years, radio has had a considerable influence on the political landscape of the United States. Government leaders have long relied on radio to convey messages to the public, as in President Franklin D. Roosevelt’s “fireside chats.” Radio also served as a propaganda tool during World War II. The War Department established a Radio Division in its Bureau of Public Relations as early as 1941. Programs such as the Treasury Hour used radio drama to raise revenue through the sale of war bonds, but other government efforts took a decidedly political turn. Norman Corwin’s This Is War! was funded by the federal Office of Facts and Figures (OFF) to directly garner support for the war effort. The series prepared listeners to make personal sacrifices—including death—to win the war. It was also directly political, popularizing the idea that the New Deal was a success and bolstering Roosevelt’s image through comparisons with Lincoln (Horten).
FDR’s Fireside Chats
President Franklin D. Roosevelt’s Depression-era radio talks, or “fireside chats,” remain one of the most famous uses of radio in politics. While governor of New York, Roosevelt had used radio as a political tool, so he quickly adopted it to explain the unprecedented actions that his administration was taking to deal with the economic fallout of the Great Depression. He gave his first such speech only a week after his inauguration. Roosevelt had closed all of the banks in the country for 4 days while the government dealt with a national banking crisis, and he used the radio to explain his actions directly to the American people (Grafton, 1999).
Roosevelt’s first radio address set a distinct tone as he employed informal speech in the hopes of inspiring confidence in the American people and of helping them stave off the kind of panic that could have destroyed the entire banking system. Roosevelt understood both the intimacy of radio and its powerful outreach (Grafton, 1999). He was thus able to balance a personal tone with a message that was meant for millions of people. This relaxed approach inspired a CBS executive to name the series the “fireside chats (Grafton, 1999).”
Roosevelt delivered a total of 27 of these 15- to 30-minute addresses to estimated audiences of 30 million to 40 million people, then a quarter of the U.S. population (Grafton, 1999). Roosevelt’s use of radio was a testament both to his own skills and savvy as a politician and to the power and ubiquity of radio during this period. At the time, no other form of mass media could have had the same effect.
Certainly, radio has been used by the government for its own purposes, but it has had an even greater impact on politics by serving as what has been called “the ultimate arena for free speech (Davis & Owen, 1998).” Such infamous radio firebrands as Father Charles Coughlin, a Roman Catholic priest whose radio program opposed the New Deal, criticized Jews, and supported Nazi policies, aptly demonstrated this capability early in radio’s history (Sterling & Kitross). In recent decades, radio has supported political careers, including those of U.S. Senator Al Franken of Minnesota, former New York City mayor Rudy Giuliani, and presidential aspirant Fred Thompson. Talk show hosts such as Rush Limbaugh have gained great political influence, with some even viewing Limbaugh as the de facto leader of the Republican Party (Halloran, 2009).
The Importance of Talk Radio
An important contemporary convergence of radio and politics can be readily heard on modern talk radio programs. Far from being simply chat shows, the talk radio that became popular in the 1980s features a host who takes callers and discusses a wide assortment of topics. Talk radio hosts gain and keep their listeners by sheer force of personality, and some say shocking or insulting things to get their message across. These hosts range from conservative radio hosts such as Rush Limbaugh to so-called shock jocks such as Howard Stern.
Repeal of the Fairness Doctrine
While talk radio dates to the 1920s, the format’s emergence as a contemporary cultural and political force took place during the mid- to late 1980s, following the repeal of the Fairness Doctrine (Cruz, 2007). As you read earlier in this chapter, this doctrine, established in 1949, required any station broadcasting a political point of view over the air to allow equal time to all reasonable dissenting views. Despite its noble intention of safeguarding public airwaves for diverse views, the doctrine had long attracted criticism. Opponents claimed that it had a chilling effect on political discourse because stations, rather than risk government intervention, avoided programs that were divisive or controversial (Cruz, 2007). In 1987, the FCC under the Reagan administration repealed the regulation, setting the stage for an AM talk radio boom; by 2004, the number of talk radio stations had increased 17-fold (Anderson, 2005).
The end of the Fairness Doctrine allowed stations to broadcast programs without worrying about finding an opposing point of view to balance a host’s stated opinions. Radio hosts representing all points of the political spectrum could say anything they wanted—within FCC limits—without fear of rebuttal. Media bias and its ramifications will be explored at greater length in Chapter 14 “Ethics of Mass Media”.
The Revitalization of AM
The migration of music stations to the FM spectrum during the 1960s and 1970s freed up a great deal of space on the AM band for talk shows. With the Fairness Doctrine no longer a hindrance, these programs slowly gained notoriety during the late 1980s and early 1990s. In 1989, talk radio hosts railed against a proposed congressional pay increase, and their listeners became incensed; House Speaker Jim Wright received a deluge of protest faxes from irate talk radio listeners all over the country (Douglas). Ultimately, Congress canceled the pay increase, and various print outlets acknowledged the influence of talk radio on the decision. Propelled by events such as these, the number of talk radio stations rose from only 200 in the early 1980s to more than 850 in 1994 (Douglas).
Coast to Coast AM
Although political programs unquestionably rule AM talk radio, that dial is also home to a kind of show that some radio listeners may have never experienced. Late at night on AM radio, a program airs during which listeners hear stories about ghosts, alien abductions, and fantastic creatures. It’s not a fictional drama program, however, but instead a call-in talk show called Coast to Coast AM. In 2006, this unlikely success ranked among the top 10 AM talk radio programs in the nation—a stunning feat considering its 10 p.m. to 2 a.m. time slot and bizarre format (Vigil, 2006).
Originally started by host Art Bell in the 1980s, Coast to Coast focuses on topics that mainstream media outlets rarely treat seriously. Regular guests include ghost investigators, psychics, Bigfoot biographers, alien abductees, and deniers of the moon landing. The guests take calls from listeners who are allowed to ask questions or talk about their own paranormal experiences or theories.
Coast to Coast’s current host, George Noory, has continued the show’s format. In some markets, its ratings have even exceeded those of Rush Limbaugh’s program (Vigil, 2006). For a late-night show, such high ratings are rare. The success of Coast to Coast is thus a continuing testament to the diversity and unexpected potential of radio (Vigil, 2006).
On-Air Political Influence
As talk radio’s popularity grew during the early 1990s, it quickly became an outlet for political ambitions. In 1992, nine talk show hosts ran for U.S. Congress. By the middle of the decade, it had become common for former—or failed—politicians to take up the format. Former California governor Jerry Brown and former New York City mayor Ed Koch were among the mid-1990s politicians who had AM talk shows (Annenberg Public Policy Center, 1996). Both conservatives and liberals widely agree that conservative hosts dominate AM talk radio. Hosts such as Limbaugh, who began his popular program a year after the repeal of the Fairness Doctrine, have made profitable businesses out of their shows.
During the 2000s, AM talk radio continued to grow. Hosts such as Michael Savage, Sean Hannity, and Bill O’Reilly furthered the trend of popular conservative talk shows, but liberal hosts also became popular through the short-lived Air America network. The network closed abruptly in 2010 amid financial concerns (Stelter, 2010). Although the network was unsuccessful, it provided a platform for such hosts as MSNBC TV news host Rachel Maddow and future Minnesota Senator Al Franken. Other liberal hosts such as Bill Press and Ron Reagan, son of President Ronald Reagan, have also found success in AM political talk radio (Stelter, 2010). Despite these successes, liberal talk radio is often viewed as unsustainable (Hallowell, 2010), and to some, the failure of Air America confirms conservatives’ domination of the AM dial. In response, many prominent liberals, including House Speaker Nancy Pelosi, have advocated reinstating the Fairness Doctrine and forcing stations to offer equal time to contrasting opinions (Stotts, 2008).
Freedom of Speech and Radio Controversies
While the First Amendment of the U.S. Constitution gives radio personalities the freedom to say nearly anything they want on the air without fear of prosecution (except in cases of obscenity, slander, or incitement of violence, which will be discussed in greater detail in Chapter 15 “Media and Government”), it does not protect them from being fired when their controversial comments spark public outrage. Many talk radio hosts, such as Howard Stern, push the boundaries of acceptable speech to engage listeners and boost ratings, but sometimes radio hosts push too far, unleashing a storm of controversy.
Making (and Unmaking) a Career out of Controversy
Talk radio host Howard Stern has managed to build his career on creating controversy—despite being fined multiple times for indecency by the FCC, Stern remains one of the highest-paid and most popular talk radio hosts in the United States. Stern’s radio broadcasts often feature scatological or sexual humor, creating an “anything goes” atmosphere. Because his on-air antics frequently generate controversy that can jeopardize advertising sponsorships and drive away offended listeners—in addition to risking fines from the FCC—Stern has a history of uneasy relationships with the radio stations that employ him. In an effort to free himself of conflicts with station owners and sponsors, in 2005 Stern signed a contract with Sirius Satellite Radio, which is exempt from FCC regulation, so that he could continue to broadcast his show without fear of censorship.
Stern’s massive popularity gives him considerable clout, which has allowed him to weather controversy and continue to have a successful career. Other radio hosts who have gotten themselves in trouble with poorly considered on-air comments have not been so lucky. In April 2007, Don Imus, host of the long-running Imus in the Morning, was suspended for racist and sexist comments made about the Rutgers University women’s basketball team (MSNBC, 2007). Though he publicly apologized, the scandal continued to draw negative attention in the media, and CBS canceled his show to avoid further unfavorable publicity and the withdrawal of advertisers. Though he returned to the airwaves in December of that year on a different station, the episode was a major setback for Imus’s career and his public image. Similarly, syndicated conservative talk show host Dr. Laura Schlessinger ended her radio show in 2010 due to pressure from radio stations and sponsors after her repeated use of a racial epithet on a broadcast incited a public backlash (MSNBC, 2007).
As the examples of these talk radio hosts show, the issue of freedom of speech on the airwaves is often complicated by the need for radio stations to be profitable. Outspoken or shocking radio hosts can draw in many listeners, attracting advertisers to sponsor their shows and bringing in money for their radio stations. Although some listeners may be offended by these hosts and may stop tuning in, as long as the hosts continue to attract advertising dollars, their employers are usually content to allow the hosts to speak freely on the air. However, if a host’s behavior ends up sparking a major controversy, causing advertisers to withdraw their sponsorship to avoid tarnishing their brands, the radio station will often fire the host and look to someone who can better sustain advertising partnerships. Radio hosts’ right to free speech does not compel their employer to give them the forum to exercise it. Popular hosts like Don Imus may find a home on the air again once the furor has died down, but for radio hosts concerned about the stability of their careers, the lesson is clear: there are practical limits on their freedom of speech.
Key Takeaways
- Radio was unique as a form of mass media because it had the potential to reach anyone, even the illiterate. Radio news in the 1930s and 1940s brought the emotional impact of traumatic events home to the listening public in a way that gave the nation a sense of unity.
- Radio encouraged the growth of national popular music stars and brought regional sounds to wider audiences. The effects of early radio programs can be felt both in modern popular music and in television programming.
- The Fairness Doctrine was created to ensure fair coverage of issues over the airwaves. It stated that radio stations must give equal time to contrasting points of view on an issue. An enormous rise in the popularity of AM talk radio occurred after the repeal of the Fairness Doctrine in 1987.
- The need for radio stations to generate revenue places practical limits on what radio personalities can say on the air. Shock jocks like Howard Stern and Don Imus test, and sometimes exceed, these limits and become controversial figures, highlighting the tension between freedom of speech and the need for businesses to be profitable.
Exercises
Please respond to the following writing prompts. Each response should be a minimum of one paragraph.
- Describe the unique qualities that set radio apart from other forms of mass media, such as newspapers.
- How did radio bring new music to places that had never heard it before?
- Describe political talk radio before and after the Fairness Doctrine. What kind of effect did the Fairness Doctrine have?
- Do you think that the Fairness Doctrine should be reinstated? Explain your answer.
- Investigate the controversy surrounding Don Imus and the comments that led to his show’s cancellation. What is your opinion of his comments and CBS’s reaction to them?
References
Anderson, Brian. South Park Conservatives: The Revolt Against Liberal Media Bias (Washington D.C.: Regnery Publishing, 2005), 35–36.
Annenberg Public Policy Center, Call-In Political Talk Radio: Background, Content, Audiences, Portrayal in Mainstream Media, Annenberg Public Policy Center Report Series, August 7, 1996, http://www.annenbergpublicpolicycenter.org/Downloads/Political_Communication/Political_Talk_Radio/1996_03_political_talk_radio_rpt.PDF.
Brown, Manipulating the Ether, 140.
Cashman, America in the Twenties and Thirties, 329.
Cruz, Gilbert. “GOP Rallies Behind Talk Radio,” Time, June 28, 2007, http://www.time.com/time/politics/article/0,8599,1638662,00.html.
Davis, Richard, and Diana Owen, New Media and American Politics (New York: Oxford University Press, 1998), 54.
DiMeo, Nate. “New York Clashes with the Heartland,” Hearing America: A Century of Music on the Radio, American Public Media, 2010, http://americanradioworks.publicradio.org/features/radio/b1.html.
Douglas, Listening In, 287.
Grafton, John. ed., Great Speeches—Franklin Delano Roosevelt (Mineola, NY: Dover, 1999), 34.
Halloran, Liz. “Steele-Limbaugh Spat: A Battle for GOP’s Future?” NPR, March 2, 2009, http://www.npr.org/templates/story/story.php?storyId=101430572.
Hallowell, Billy. “Media Matters’ Vapid Response to Air America’s Crash,” Big Journalism, January 26, 2010, http://bigjournalism.com/bhallowell/2010/01/26/media-matters-vapid-response-to-air-americas-crash.
Horowitz, Joseph. Classical Music in America: A History of Its Rise and Fall (New York: Norton, 2005), 399–404.
Horten, Radio Goes to War, 45–47.
Howe, Sondra Wieland. “The NBC Music Appreciation Hour: Radio Broadcasts of Walter Damrosch, 1928–1942,” Journal of Research in Music Education 51, no. 1 (Spring 2003).
Kyriakoudes, Louis. The Social Origins of the Urban South: Race, Gender, and Migration in Nashville and Middle Tennessee, 1890–1930 (Chapel Hill: University of North Carolina Press, 2003), 7.
Laird, Tracey. Louisiana Hayride: Radio and Roots Music Along the Red River (New York: Oxford University Press, 2005), 4–10.
Loviglio, Jason. “Vox Pop: Network Radio and the Voice of the People” in Radio Reader: Essays in the Cultural History of Radio, ed. Michele Hilmes and Jason Loviglio (New York: Routledge, 2002), 89–106.
Lubertozzi, Alex and Brian Holmsten, The War of the Worlds: Mars’ Invasion of Earth, Inciting Panic and Inspiring Terror From H. G. Wells to Orson Welles and Beyond (Naperville, IL: Sourcebooks, 2005), 7–9.
MSNBC, Imus in the Morning, April 4, 2007.
Stelter, Brian. “Liberal Radio, Even Without Air America,” New York Times, January 24, 2010, http://www.nytimes.com/2010/01/25/arts/25radio.html.
Sterling and Kitross, Stay Tuned, 182.
Sterling and Kitross, Stay Tuned, 199.
Stotts, Bethany. “Pelosi Supports Return of Fairness Doctrine,” Accuracy in Media Column, June 26, 2008, http://www.aim.org/aim-column/pelosi-support-return-of-fairness-doctrine.
Vigil, Delfin. “Conspiracy Theories Propel AM Radio Show Into the Top Ten,” San Francisco Chronicle, November 12, 2006, http://articles.sfgate.com/2006-11-12/news/17318973_1_radio-show-coast-cold-war.
Wald, Beatles Destroyed Rock ’n’ Roll, 95–96.
Wald, Elijah. How the Beatles Destroyed Rock ’n’ Roll: An Alternative History of American Popular Music (New York: Oxford University Press, 2009), 100–104.
Walker, Rebels on the Air, 53–54.
7.5 Radio’s New Future
Learning Objectives
- Distinguish among satellite radio, HD radio, Internet radio, and podcasting.
- Identify the development of new radio technologies.
Although the future of radio has been doubted many times throughout its history, the medium has endured. The inherent portability of radio gives it an advantage over types of media that require an individual’s full attention, such as television or print, and its simplicity has lent itself to a variety of uses.
In recent years, new technologies have promised to extend radio’s reach and broaden the kinds of programming it offers. Satellite and HD radio have increased the amount and diversity of available programming by making more stations available. Internet radio has increased the accessibility of radio communication, and practically anyone with access to a computer can create subscription podcasts to distribute around the world. These new technologies promise to make radio an enduring, innovative form of media.
Satellite Radio
In 1998, the FCC awarded licenses to two businesses interested in creating a radio version of cable television—without the cables. These licenses marked the beginning of satellite radio, and the two companies became XM and Sirius. The networks sold special receivers that could pick up satellite transmissions broadcasting a wide range of formats on different channels, and listeners paid a monthly fee for the commercial-free programming.
Like cable television, satellite radio was not required to censor its disc jockeys or guests for profanity. This attracted somewhat controversial radio personalities known for their conflicts with the FCC, such as Howard Stern and Opie and Anthony. The networks also drew hosts such as NPR’s Bob Edwards and Bruce Springsteen’s guitarist “Little” Steven Van Zandt to create their own shows. Because listeners paid one price for access to all of the channels, disc jockeys experienced less pressure to adhere to the limited playlist style of programming that was the norm for terrestrial radio stations (Breen, 2005). In 2008, Sirius and XM merged to form Sirius XM. In 2010, the company recorded its first profits (Reuters, 2010).
HD Radio
Developed around 2001 to help terrestrial radio stations compete with emerging satellite radio technology, HD radio is essentially a digital transmission of radio signals that results in less static and better sound quality, even for AM stations. Upgraded quality is not the major benefit of HD radio, however; the technology allows signals to be compressed so that one station can air so-called shadow stations on the same frequency as its regular broadcast. Although listeners need an HD radio receiver to pick up these channels, they pay no subscription fee; individual stations decide what programming the extra channels will carry (Pogue, 2009).
Stations such as NPR’s WAMU in Washington, DC, broadcast different types of programming on their shadow channels. For example, the station’s 88.5-1 broadcasts the regular analog schedule of WAMU, while 88.5-2 broadcasts bluegrass and country music programming, and 88.5-3 broadcasts public radio programs not available on the analog version of 88.5 (American University Radio).
HD radio allows broadcasters to air content that they would otherwise put aside in favor of more commercial programs. WAMU’s bluegrass and country shadow station, for example, plays content that was once broadcast over the airwaves but later relegated to the Internet in favor of more marketable programs. HD radio allowed the station to reintroduce these programs without risking its financial stability, and it offers a host of similar programming possibilities for traditional radio.
Internet Radio and Podcasting
The need to broadcast over the airwaves is both a strength and a limitation of radio. Although technological advances of the past 50 years, such as audio recorders and microphones, have made creating a radio program simple, finding a way to broadcast that program has long presented difficulties for the average person. The expansion of the Internet, however, has changed this limitation into a manageable hurdle for businesses and individuals alike.
Internet Radio
At its core, Internet radio is simply the streaming of audio programs through the medium of the Internet. As early as 1994, radio stations such as Chapel Hill, North Carolina’s WXYC were broadcasting their signals over the Internet, thereby potentially reaching a worldwide audience (WXYC). Soon, online-only radio stations were created to broadcast programs. Services such as Live365, founded in 1999, have acted as distributors for Internet radio programs, charging broadcasters fees to stream their programs to a large listening audience.
Another type of Internet radio service is Pandora. Rather than distributing existing programs, this radio website allows users to create their own custom music stations. A listener creates a Pandora account and types in a song, composer, or artist, and the service creates a station that plays songs similar to the user’s selection. Pandora’s analysis of music attempts to collect as many details about a song as possible, from lyrics to instrumentation to harmony, and then categorizes songs according to these attributes, making it possible for listeners to customize their own stations based on one or more of the cataloged attributes. The listener can delete unwanted songs from the playlist and create new stations as well. Pandora currently relies on on-screen advertising and has implemented audio advertisements as well (Beltrone, 2009). Other music services such as Yahoo! Music, AOL Radio, and Jango offer radio stations with multiple programmed genres.
Problems of Internet Broadcasting
Despite the rise of Internet radio over the past several years, its success has never been a sure thing. As the trend gained momentum, many inexperienced broadcasters confronted the issue of royalties for the first time, and many experienced broadcasters encountered new legal issues related to streaming. Stations that broadcast over the airwaves must pay publishing royalties to the musicians and songwriters behind the recordings they play. Rather than pay an individual musician or songwriter each time a recording is played, however, broadcasters—including radio stations, coffee shops, and restaurants—pay for a blanket license that allows them to play any song. As Internet broadcasting grew, musicians and record labels began demanding royalties from Internet stations and specifying new licensing restrictions. For instance, Pandora’s license specifies that users cannot replay a song on demand without purchasing it, nor can they skip more than six songs per hour.
Other issues arose as terrestrial stations began streaming on the Internet. Since its inception, the medium has struggled with such concerns as whether advertisers should pay for commercials played over the Internet as well as over the air and what types of licenses Internet radio stations should use. In time, the federal government mediated an agreement between broadcasters and record companies with the Webcasters Settlement Act of 2009. This legislation designated Internet-only stations as pure-play stations and divided them according to the types of coverage they offer. Each category pays royalties in a different way, an arrangement intended to ensure both fair compensation for artists and the future viability of Internet radio (Albenesius, 2009).
Podcasting
Unlike Internet radio, podcasting employs downloadable rather than streamed programs. The term podcasting itself stems from the use of MP3 players such as Apple’s iPod to play programs on demand. Many terrestrial stations have adopted podcasting to supplement their traditional over-the-air broadcasting. Because podcasts are single programs rather than continuous stations, they are an easier medium to produce than Internet radio.
Some podcast producers, such as Mignon Fogarty, have created programs that led to book deals and a steady income. Fogarty’s weekly Grammar Girl: Quick and Dirty Tips podcast focuses on simple grammar rules. Within a year of its inception, the podcast racked up 1 million downloads and received national acclaim (Faherty, 2007). Podcasting does not fit neatly into the traditional concept of radio, yet there is no question that it follows in the footsteps of past radio programs. Just as radio evolved from a medium for soap operas and live music into one for talk shows and recorded music, podcasting offers a window into what radio may become in the years ahead.
Key Takeaways
- Radio’s flexibility as a medium has allowed it to adjust to the fluctuations of audience tastes and markets.
- Satellite radio is a subscription-based service, while HD radio is provided at no cost by current radio providers.
- Internet radio and podcasting have allowed many new programs and stations to be broadcast at low cost.
Exercises
Please respond to the following writing prompts. Each response should be a minimum of one paragraph.
- Define satellite radio, HD radio, Internet radio, and podcasting.
- How has each of these mediums fared in terms of popularity?
- Pick one of these mediums and predict its future success given its current popularity.
End-of-Chapter Assessment
Review Questions
Section 1
- Name three major changes that have affected the development of radio.
- What made the period from the 1930s to the 1950s radio’s golden age?
- How have large networks affected the development of radio?
- Has the corporate consolidation of radio over the past decades made radio better or worse in your opinion? Explain your answer.
Section 2
- How and why do modern radio stations employ radio formats?
- What is your opinion about the effects of formats on the current state of radio?
- Describe your favorite radio format and explain how the advertising is marketed to you.
Section 3
- What makes radio unique among forms of mass media?
- Explain the ways radio affected the development of your favorite genre of music.
- How do you think popular music would be heard and spread if there was no radio?
- What do you think political talk radio would presently be like if the Fairness Doctrine had not been repealed?
Section 4
- How do you think new radio technologies will affect traditional radio broadcasting over the next 10 years?
- Of the four new radio technologies listed in Section 7.5 “Radio’s New Future”, which do you think has the most potential to succeed? Explain your answer.
Critical Thinking Questions
- Taken as a whole, has government regulation been good or bad for radio? Explain your answer using specific examples.
- Given the rise of tightly formatted radio stations, do you think it is still possible to have a truly popular music? Why or why not?
- Do you think radio should be treated as a public resource or a private commodity? If your view was made law, how would it affect radio programming?
- If radio is a public resource, how should issues of freedom of speech and censorship be handled?
- Given the history of radio, do you think that new innovations in radio will make radio more democratic and accessible, or will regulatory and market forces control access?
Career Connection
New technologies in radio have created new radio career possibilities. As podcasting, Internet radio, satellite radio, and HD radio have fueled demand for new content, opportunities have emerged for self-starters to create and host their own radio programs or become freelance radio journalists.
Consider some of the uses for podcasting and radio journalism. Useful links for researching careers in these areas, among others you may find through your own research, include http://transom.org/ and http://www.airmedia.org/. Based on your research and ideas, identify a career field in online radio that you may wish to pursue. Think about ways that people in this career field have employed radio. Now answer the following questions:
- How have people used radio in your chosen career?
- How have new technologies, such as podcasting and Internet radio, allowed for new uses of radio in this career?
- How could you use radio in your career even if you weren’t necessarily a radio producer or journalist?
- What kinds of projects or initiatives could you undertake that would involve radio?
References
Albenesius, Chloe. “Internet Radio Reaches Deal on Royalty Rates,” PC Magazine, July 7, 2009, http://www.pcmag.com/article2/0,2817,2349813,00.asp.
American University Radio, WAMU, “Schedules,” http://wamu.org/programs/schedule/.
Beltrone, Gabriel. “Pandora’s Back,” The Big Money, July 23, 2009, http://www.thebigmoney.com/articles/monetize/2009/07/23/pandora-s-back.
Breen, Bill. “Written in the Stars,” Fast Company, February 1, 2005, http://www.fastcompany.com/magazine/91/open_stars.html.
Faherty, John. “‘Grammar Girl’ Podcasts Rule Online,” USA Today, March 8, 2007, http://www.usatoday.com/tech/webguide/internetlife/2007-03-08-grammar-girl_N.htm.
Pogue, David. “HD Radio Crying Out to Be Heard,” New York Times, April 8, 2009, http://www.nytimes.com/2009/04/09/technology/personaltech/09pogue.html.
Reuters, “Sirius XM Posts Profit, Its First Since Merger,” New York Times, February 25, 2010, http://www.nytimes.com/2010/02/26/technology/26radio.html.
WXYC, “Simulcast,” http://wxyc.org/about/simulcast.
Chapter 8: Movies
8.1 Movies
8.2 The History of Movies
8.3 Movies and Culture
8.4 Issues and Trends in Film
8.5 The Influence of New Technology
8.1 Movies
In 2009, many moviegoers were amazed by the three-dimensional (3-D) film Avatar, which grossed over $1.8 billion in theaters worldwide, $1.35 billion from 3-D sales alone (Gray, 2010). Following suit, dozens of other movie studios released 3-D films, resulting in lesser box office successes such as Alice in Wonderland, Clash of the Titans, and Shrek Forever After. Many film reviewers and audiences seemed adamant—3-D movies were the wave of the future.
However, could this eye-popping technology actually ruin our moviegoing experience? Brian Moylan, a critic for Gawker.com, argues that it already has. The problem with 3-D, he says, is that “It is so mind-numbingly amazing that narrative storytelling hasn’t caught up with the technology. The corporate screenwriting borgs are so busy trying to come up with plot devices to highlight all the newfangled whoosiwhatsits—objects being hurled at the audience, flying sequences, falling leaves, glowing Venus Flytraps—that no one is really bothering to tell a tale (Moylan).”
James Cameron, director of Avatar, agrees. “[Studios] think, ‘what was the takeaway lessons from Avatar? Oh you should make more money with 3-D.’ They ignore the fact that we natively authored the film in 3-D, and [they] decide that what we accomplished in several years of production could be done in an eight week (post-production 3-D) conversion [such as] with Clash of the Titans (Baig, 2010).” Cameron’s point is this: While films such as Avatar (2009) and Beowulf (2007) were created exclusively for 3-D, many other filmmakers have converted their movies to 3-D after filming was already complete. Clash of the Titans was widely criticized because its 3-D effects were hastily added in postproduction (Baig, 2010).
What effect does this have on audiences? Aside from complaints of headaches and nausea (and the fact that some who wear glasses regularly find it uncomfortable or even impossible to wear 3-D glasses on top of their own), many say that the new technology simply makes movies look worse. The film critic Roger Ebert has repeatedly denounced the technology, noting that movies such as The Last Airbender look like they’re “filmed with a dirty sheet over the lens (Ebert, 2010).” 3-D technology can make a movie look fuzzier, darker, and generally less cinematically attractive. However, movie studios are finding 3-D films attractive for another reason.
Because seeing a movie in 3-D is considered a “premium” experience, consumers are expected to pay higher prices. And with the increasing popularity of IMAX 3D films, tickets may surpass $20 per person (Stewart & McClintock, 2010). This gives 3-D films an advantage over 2-D ones as audiences are willing to pay more.
The recent 3-D boom has often been compared to the rise of color film in the early 1950s, though some maintain that 3-D is just a fad. Will the technology affect the future of filmmaking? With a host of new 3-D technologies for the home theater released in 2010, many are betting that it will. Director James Cameron, however, is unsure of the technology’s continuing popularity, arguing that “If people put bad 3-D in the marketplace they’re going to hold back or even threaten the emerging of 3-D (Baig).” What is important, he maintains, is the creative aspect of moviemaking—no technology can replace good filmmaking. In the end, audiences will determine the medium’s popularity. Throughout the history of film, Technicolor dyes, enhanced sound systems, and computer-generated graphics have driven huge box-office revenues; however, it is ultimately viewers who determine what a good movie is and who set the standard for future films.
References
Baig, “Cameron: 3D Promising, But Caution Needed.”
Baig, Edward. “‘Avatar’ Director James Cameron: 3D Promising, but Caution Needed,” USA Today, March 11, 2010, http://content.usatoday.com/communities/technologylive/post/2010/03/james-cameron/1.
Ebert, Roger, review of The Last Airbender, directed by M. Night Shyamalan, Chicago Sun Times, June 30, 2010, http://rogerebert.suntimes.com/apps/pbcs.dll/article?AID=/20100630/REVIEWS/100639999.
Gray, Brandon. “‘Avatar’ is New King of the World,” Box Office Mojo, January 26, 2010, http://boxofficemojo.com/news/?id=2657.
Moylan, Brian. “3D is Going to Ruin Movies for a Long Time to Come,” Gawker, http://gawker.com/#!5484085/3d-is-going-to-ruin-movies-for-a-long-time-to-come.
Stewart, Andrew and Pamela McClintock, “Big Ticket Price Increase for 3D Pics,” Variety, March 24, 2010, http://www.variety.com/article/VR1118016878.html?categoryid=13&cs=1.
8.2 The History of Movies
Learning Objectives
- Identify key points in the development of the motion picture industry.
- Identify key developments of the motion picture industry and technology.
- Identify influential films in movie history.
The movie industry as we know it today grew out of a series of 19th-century technological developments: the creation of photography, the discovery of the illusion of motion by combining individual still images, and the study of human and animal locomotion. The history presented here begins at the culmination of these developments, where the idea of the motion picture as an entertainment industry first emerged. Since then, the industry has seen extraordinary transformations, some driven by the artistic visions of individual participants, some by commercial necessity, and still others by accident. The history of the cinema is complex, and for every important innovator and movement listed here, others have been left out. Nonetheless, after reading this section you will understand the broad arc of the development of a medium that has captured the imaginations of audiences worldwide for over a century.
The Beginnings: Motion Picture Technology of the Late 19th Century
While the experience of watching movies on smartphones may seem like a drastic departure from the communal nature of film viewing as we think of it today, in some ways the small-format, single-viewer display is a return to film’s early roots. In 1891, the inventor Thomas Edison, together with William Dickson, a young laboratory assistant, came out with what they called the kinetoscope, a device that would become the predecessor to the motion picture projector. The kinetoscope was a cabinet with a window through which individual viewers could experience the illusion of a moving image (Gale Virtual Reference Library) (British Movie Classics). A perforated celluloid film strip with a sequence of images on it was rapidly spooled between a light bulb and a lens, creating the illusion of motion (Britannica). The images viewers could see in the kinetoscope captured events and performances that had been staged at Edison’s film studio in West Orange, New Jersey, especially for the Edison kinetograph (the camera that produced kinetoscope film sequences): circus performances, dancing women, cockfights, boxing matches, and even a tooth extraction by a dentist (Robinson, 1994).
As the kinetoscope gained popularity, the Edison Company began installing machines in hotel lobbies, amusement parks, and penny arcades, and soon kinetoscope parlors—where customers could pay around 25 cents for admission to a bank of machines—had opened around the country. However, when friends and collaborators suggested that Edison find a way to project his kinetoscope images for audience viewing, he apparently refused, claiming that such an invention would be a less profitable venture (Britannica).
Because Edison hadn’t secured an international patent for his invention, variations of the kinetoscope were soon being copied and distributed throughout Europe. This new form of entertainment was an instant success, and a number of mechanics and inventors, seeing an opportunity, began toying with methods of projecting the moving images onto a larger screen. However, it was the invention of two brothers, Auguste and Louis Lumière—photographic goods manufacturers in Lyon, France—that saw the most commercial success. In 1895, the brothers patented the cinématographe (from which we get the term cinema), a lightweight film projector that also functioned as a camera and printer. Unlike the Edison kinetograph, the cinématographe was light enough for easy outdoor filming, and over the years the brothers used the camera to take well over 1,000 short films, most of which depicted scenes from everyday life. In December 1895, in the basement lounge of the Grand Café on the Boulevard des Capucines in Paris, the Lumières held the world’s first-ever commercial film screening, a sequence of about 10 short scenes, including the brothers’ first film, Workers Leaving the Lumière Factory, a segment lasting less than a minute and depicting workers leaving the family’s photographic instrument factory at the end of the day, as shown in Figure 8.3 (Encyclopedia of the Age of Industry and Empire).
Believing that audiences would get bored watching scenes that they could just as easily observe on a casual walk around the city, Louis Lumière claimed that the cinema was “an invention without a future (Menand, 2005),” but a demand for motion pictures grew at such a rapid rate that soon representatives of the Lumière company were traveling throughout Europe and the world, showing half-hour screenings of the company’s films. While cinema initially competed with other popular forms of entertainment—circuses, vaudeville acts, theater troupes, magic shows, and many others—eventually it would supplant these various entertainments as the main commercial attraction (Menand, 2005). Within a year of the Lumières’ first commercial screening, competing film companies were offering moving-picture acts in music halls and vaudeville theaters across Great Britain. In the United States, the Edison Company, having purchased the rights to an improved projector that they called the Vitascope, held their first film screening in April 1896 at Koster and Bial’s Music Hall in Herald Square, New York City.
Film’s profound impact on its earliest viewers is difficult to imagine today, inundated as many of us are by video images. However, the sheer volume of reports about early audiences’ disbelief, delight, and even fear at what they were seeing suggests that viewing a film was an overwhelming experience for many. Spectators gasped at the realistic details in films such as Robert Paul’s Rough Sea at Dover, and at times people panicked and tried to flee the theater during films in which trains or moving carriages sped toward the audience (Robinson). Even the public’s perception of film as a medium was considerably different from the contemporary understanding: the moving image was seen as an improvement upon the photograph—a medium with which viewers were already familiar—which is perhaps why the earliest films documented events in brief segments but didn’t tell stories. During this “novelty period” of cinema, audiences were more interested in the phenomenon of the film projector itself, so vaudeville halls advertised the kind of projector they were using (for example, “The Vitascope—Edison’s Latest Marvel”) (Balcanasu et al.), rather than the names of the films (Britannica Online).
By the close of the 19th century, as public excitement over the moving picture’s novelty gradually wore off, filmmakers were also beginning to experiment with film’s possibilities as a medium in itself (not simply, as it had been regarded up until then, as a tool for documentation, analogous to the camera or the phonograph). Technical innovations allowed filmmakers like Parisian cinema owner Georges Méliès to experiment with special effects that produced seemingly magical transformations on screen: flowers turned into women, people disappeared with puffs of smoke, a man appeared where a woman had just been standing, and other similar tricks (Robinson).
Not only did Méliès, a former magician, invent the “trick film,” which producers in England and the United States began to imitate, but he was also the one to transform cinema into the narrative medium it is today. Whereas filmmakers before him had only ever created single-shot films lasting a minute or less, Méliès began joining these short films together to create stories. His 30-scene A Trip to the Moon (1902), a film based on a Jules Verne novel, may have been the most widely seen production in cinema’s first decade (Robinson). However, Méliès never developed his technique beyond treating the narrative film as a staged theatrical performance; his camera, representing the vantage point of an audience facing a stage, never moved during the filming of a scene. In 1912, Méliès released his last commercially successful production, The Conquest of the Pole, and from then on, he lost audiences to filmmakers experimenting with more sophisticated techniques (Encyclopedia of Communication and Information).
The Nickelodeon Craze (1904–1908)
One of these innovative filmmakers was Edwin S. Porter, a projectionist and engineer for the Edison Company. Porter’s 12-minute film, The Great Train Robbery (1903), broke with the stagelike compositions of Méliès-style films through its use of editing, camera pans, rear projections, and diagonally composed shots that produced a continuity of action. Not only did The Great Train Robbery establish the realistic narrative as a standard in cinema, it was also the first major box-office hit. Its success paved the way for the growth of the film industry, as investors, recognizing the motion picture’s great moneymaking potential, began opening the first permanent film theaters around the country.
Known as nickelodeons because of their 5 cent admission charge, these early motion picture theaters, often housed in converted storefronts, were especially popular among the working class of the time, who couldn’t afford live theater. Between 1904 and 1908, around 9,000 nickelodeons appeared in the United States. It was the nickelodeon’s popularity that established film as a mass entertainment medium (Dictionary of American History).
The “Biz”: The Motion Picture Industry Emerges
As the demand for motion pictures grew, production companies were created to meet it. At the peak of nickelodeon popularity in 1910 (Britannica Online), there were 20 or so major motion picture companies in the United States. However, heated disputes often broke out among these companies over patent rights and industry control, leading even the most powerful among them to fear fragmentation that would loosen their hold on the market (Fielding, 1967). Because of these concerns, the 10 leading companies—including Edison, Biograph, Vitagraph, and others—formed the Motion Picture Patents Company (MPPC) in 1908. The MPPC was a trade group that pooled the most significant motion picture patents and established an exclusive contract between these companies and the Eastman Kodak Company as a supplier of film stock. Also known as the Trust, the MPPC’s goal was to standardize the industry and shut out competition through monopolistic control. Under the Trust’s licensing system, only certain licensed companies could participate in the exchange, distribution, and production of film at different levels of the industry—a shut-out tactic that eventually backfired, leading the excluded, independent distributors to organize in opposition to the Trust (Britannica Online).
The Rise of the Feature
In these early years, theaters were still running single-reel films, which came at a standard length of 1,000 feet, allowing for about 16 minutes of playing time. However, companies began to import multiple-reel films from European producers around 1907, and the format gained popular acceptance in the United States in 1912 with Louis Mercanton’s highly successful Queen Elizabeth, a three-and-a-half reel “feature,” starring the French actress Sarah Bernhardt. As exhibitors began to show more features—as the multiple-reel film came to be called—they discovered a number of advantages over the single-reel short. For one thing, audiences saw these longer films as special events and were willing to pay more for admission, and because of the popularity of the feature narratives, features generally experienced longer runs in theaters than their single-reel predecessors (Motion Pictures). Additionally, the feature film gained popularity among the middle classes, who saw its length as analogous to the more “respectable” entertainment of live theater (Motion Pictures). Following the example of the French film d’art, U.S. feature producers often took their material from sources that would appeal to a wealthier and better educated audience, such as histories, literature, and stage productions (Robinson).
As it turns out, the feature film was one factor that brought about the eventual downfall of the MPPC. The inflexible structure of the Trust’s exhibition and distribution system made the organization resistant to change. When the movie studio and Trust member Vitagraph began to release features like A Tale of Two Cities (1911) and Uncle Tom’s Cabin (1910), the Trust forced it to exhibit the films serially in single-reel showings to keep with industry standards. The MPPC also underestimated the appeal of the star system, a trend that began when producers chose famous stage actors like Mary Pickford and James O’Neill to play the leading roles in their productions and to grace their advertising posters (Robinson). Because of the MPPC’s inflexibility, independent companies were the only ones able to capitalize on two important trends that were to become film’s future: multiple-reel features and star power. Today, few people would recognize names like Vitagraph or Biograph, but the independents that outlasted them—Universal, Goldwyn (which would later merge with Metro and Mayer), Fox (later 20th Century Fox), and Paramount (the later version of the Lasky Corporation)—have become household names.
Hollywood
As moviegoing increased in popularity among the middle class, and as feature films began keeping audiences in their seats for longer periods of time, exhibitors found a need to create more comfortable and richly decorated theater spaces to attract their audiences. These “dream palaces,” so called because of their often lavish embellishments of marble, brass, gilding, and cut glass, not only came to replace the nickelodeon theater, but also created the demand that would lead to the Hollywood studio system. Some producers realized that the growing demand for new work could only be met if films were produced on a regular, year-round system. However, this was impractical under the existing system, which often relied on outdoor filming and was predominantly based in Chicago and New York—two cities whose weather conditions prevented outdoor filming for a significant portion of the year. Various companies attempted filming in warmer locations such as Florida, Texas, and Cuba, but the place where producers eventually found the most success was a small industrial suburb of Los Angeles called Hollywood.
Hollywood proved to be an ideal location for a number of reasons. Not only was the climate temperate and sunny year-round, but land was plentiful and cheap, and the location allowed close access to a number of diverse topographies: mountains, lakes, desert, coasts, and forests. By 1915, more than 60 percent of U.S. film production was centered in Hollywood (Britannica Online).
The Art of Silent Film
While the development of narrative film was largely driven by commercial factors, it is also important to acknowledge the role of individual artists who turned it into a medium of personal expression. Motion pictures of the silent era were generally simplistic in nature: they were acted in overly animated movements to engage the eye and accompanied by live music, played by musicians in the theater, and by written titles that created a mood and narrated the story. Within the confines of this medium, one filmmaker in particular emerged to transform the silent film into an art and to unlock its potential as a medium of serious expression and persuasion. D. W. Griffith, who entered the film industry as an actor in 1907, quickly moved into a directing role in which he worked closely with his camera crew to experiment with shots, angles, and editing techniques that could heighten the emotional intensity of his scenes. He found that by practicing parallel editing, in which a film alternates between two or more scenes of action, he could create an illusion of simultaneity. He could then heighten the tension of the film’s drama by alternating between cuts more and more rapidly until the scenes of action converged. Griffith used this technique to great effect in his controversial film The Birth of a Nation, which will be discussed in greater detail later in this chapter. Other techniques that Griffith employed to new effect included panning shots, through which he was able to establish a sense of scene and engage his audience more fully in the experience of the film, and tracking shots, or shots that traveled with the movement of a scene (Motion Pictures), which allowed the audience—through the eye of the camera—to participate in the film’s action.
MPAA: Combating Censorship
As film became an increasingly lucrative U.S. industry, prominent industry figures like D. W. Griffith, slapstick comedian/director Charlie Chaplin, and actors Mary Pickford and Douglas Fairbanks grew extremely wealthy and influential. Public attitudes toward stars and toward some stars’ extravagant lifestyles were divided, much as they are today: On the one hand, these celebrities were idolized and imitated in popular culture, yet at the same time, they were criticized for representing a threat, on and off screen, to traditional morals and social order. And much as it does today, the news media liked to sensationalize the lives of celebrities to sell stories. Comedian Roscoe “Fatty” Arbuckle, who worked alongside future icons Charlie Chaplin and Buster Keaton, was at the center of one of the biggest scandals of the silent era. When Arbuckle hosted a marathon party over Labor Day weekend in 1921, one of his guests, model Virginia Rappe, was rushed to the hospital, where she later died. Reports of a drunken orgy, rape, and murder surfaced. Following World War I, the United States was in the middle of significant social reforms, such as Prohibition, and many feared that movies and their stars could threaten the moral order of the country. Because of the nature of the crime and the celebrity involved, these fears became inextricably tied to the Arbuckle case (Motion Pictures). Even though autopsy reports ruled that Rappe had died from causes for which Arbuckle could not be blamed, the comedian was tried for manslaughter (and eventually acquitted), and his career was ruined.
The Arbuckle affair and a series of other scandals only increased public fears about Hollywood’s impact. In response to this perceived threat, state and local governments increasingly tried to censor the content of films that depicted crime, violence, and sexually explicit material. Deciding that they needed to protect themselves from government censorship and to foster a more favorable public image, the major Hollywood studios organized in 1922 to form an association they called the Motion Picture Producers and Distributors of America (later renamed the Motion Picture Association of America, or MPAA). Among other things, the MPAA instituted a code of self-censorship for the motion picture industry. Today, the MPAA operates a voluntary rating system: producers may submit a film for review, and the resulting rating alerts viewers to the age-appropriateness of the film while still protecting the filmmakers’ artistic freedom (Motion Picture Association of America).
Silent Film’s Demise
In 1925, Warner Bros. was just a small Hollywood studio looking for opportunities to expand. When representatives from Western Electric offered to sell the studio the rights to a new technology they called Vitaphone, a sound-on-disc system that had failed to capture the interest of any of the industry giants, Warner Bros. executives took a chance, predicting that the novelty of talking films might be a way to make a quick, short-term profit. Little did they anticipate that their gamble would not only establish them as a major Hollywood presence but also change the industry forever.
The pairing of sound with motion pictures was nothing new in itself. Edison, after all, had commissioned the kinetoscope to create a visual accompaniment to the phonograph, and many early theaters had orchestra pits to provide musical accompaniment for their films. Even the smaller picture houses with lower budgets almost always had an organ or piano. When Warner Bros. purchased the Vitaphone technology, it planned to use it to provide prerecorded orchestral accompaniment for its films, thereby increasing their marketability to the smaller theaters that didn’t have their own orchestra pits (Gochenour, 2000). In 1926, Warner debuted the system with the release of Don Juan, a costume drama accompanied by a recording of the New York Philharmonic Orchestra; the public responded enthusiastically (Motion Pictures). By 1927, after a $3 million campaign, Warner Bros. had wired more than 150 theaters in the United States, and it released its second sound film, The Jazz Singer, in which the actor Al Jolson improvised a few lines of synchronized dialogue and sang six songs. The film was a major breakthrough. Audiences, hearing an actor speak on screen for the first time, were enchanted (Gochenour). Radio, a new and popular entertainment, had been drawing audiences away from the picture houses for some time, but with the birth of the “talkie,” or talking film, audiences returned to the cinema in large numbers, lured by the promise of seeing and hearing their idols perform (Higham, 1973). By 1929, three-fourths of Hollywood films had some form of sound accompaniment, and by 1930, the silent film was a thing of the past (Gochenour).
“I Don’t Think We’re in Kansas Anymore”: Film Goes Technicolor
Although techniques for adding color to film, such as tinting and hand painting, had been available for some time (Georges Méliès, for instance, employed a crew to hand-paint many of his films), neither method ever caught on. Hand painting became impractical with the advent of mass-produced film, and tinting, which filmmakers discovered interfered with the transmission of sound, was abandoned with the rise of the talkie. In 1922, however, Herbert Kalmus’s Technicolor company introduced a dye-transfer technique that allowed it to produce a full-length film, The Toll of the Sea, in two primary colors (Gale Virtual Reference Library). Because only two colors were used, the appearance of The Toll of the Sea (1922), The Ten Commandments (1923), and other early Technicolor films was not very lifelike. By 1932, Technicolor had designed a three-color system with more realistic results, and for the next 25 years, all color films were produced with this improved system. Disney’s Three Little Pigs (1933) and Snow White and the Seven Dwarfs (1937), along with live-action films like MGM’s The Wizard of Oz (1939) and Gone With the Wind (1939), experienced early success using Technicolor’s three-color method.
Despite the success of certain color films in the 1930s, Hollywood, like the rest of the United States, was feeling the impact of the Great Depression, and the expenses of special cameras, crews, and Technicolor lab processing made color films impractical for studios trying to cut costs. Therefore, it wasn’t until the end of the 1940s that Technicolor would largely displace the black-and-white film (Motion Pictures in Color).
Rise and Fall of the Hollywood Studio
The spike in theater attendance that followed the introduction of talking films changed the economic structure of the motion picture industry, bringing about some of the largest mergers in industry history. By 1930, eight studios produced 95 percent of all American films, and they continued to grow even during the Depression. The five most influential of these studios—Warner Bros., Metro-Goldwyn-Mayer, RKO, 20th Century Fox, and Paramount—were vertically integrated; that is, they controlled every stage of a film’s life, from production through release, distribution, and even exhibition. Because they owned theater chains worldwide, these studios controlled which movies exhibitors ran, and because they “owned” a stock of directors, actors, writers, and technical assistants by contract, each studio produced films of a particular character.
The late 1930s and early 1940s are sometimes known as the “Golden Age” of cinema, a time of unparalleled success for the movie industry; by 1939, film was the 11th-largest industry in the United States, and during World War II, when the U.S. economy was once again flourishing, two-thirds of Americans were attending the theater at least once a week (Britannica Online). Some of the most acclaimed movies in history were released during this period, including Citizen Kane and The Grapes of Wrath. However, postwar inflation, a temporary loss of key foreign markets, the advent of television, and other factors combined to bring that rapid growth to an end. In 1948, United States v. Paramount Pictures—the antitrust case that mandated competition and forced the studios to relinquish control over theater chains—dealt the final devastating blow from which the studio system would never recover. Control of the major studios reverted to Wall Street, where the studios were eventually absorbed by multinational corporations, and the powerful studio heads lost the influence they had held for nearly 30 years (Baers, 2000).
Post–World War II: Television Presents a Threat
While economic factors and antitrust legislation played key roles in the decline of the studio system, perhaps the most important factor in that decline was the advent of the television. Given the opportunity to watch “movies” from the comfort of their own homes, the millions of Americans who owned a television by the early 1950s were attending the cinema far less regularly than they had only a few years earlier (Motion Pictures). In an attempt to win back diminishing audiences, studios did their best to exploit the advantages film held over television. For one thing, television broadcasting in the 1950s was all in black and white, whereas the film industry had the advantage of color. While producing a color film was still an expensive undertaking in the late 1940s, two changes in the early 1950s made color not only more affordable but also more realistic in appearance. In 1950, as the result of antitrust legislation, Technicolor lost its monopoly on the color film industry, allowing other providers to offer more competitive pricing on filming and processing services. At the same time, Kodak came out with a multilayer film stock that made it possible to use more affordable cameras and to produce a higher quality image. Kodak’s Eastmancolor option was an integral component in converting the industry to color: in the late 1940s, only 12 percent of features were in color, but by 1954 (after the release of Kodak Eastmancolor), more than 50 percent of movies were (Britannica Online).
Another clear advantage on which filmmakers tried to capitalize was the sheer size of the cinema experience. With the release of the epic biblical film The Robe in 1953, 20th Century Fox introduced the method that would soon be adopted by nearly every studio in Hollywood: an anamorphic lens technology, marketed as CinemaScope, that allowed filmmakers to squeeze a wide-angle image onto conventional 35-mm film stock, thereby increasing the aspect ratio (the ratio of a screen’s width to its height) of their images to nearly twice that of the standard screen of the day. This wide-screen format increased the immersive quality of the theater experience. Nonetheless, even with these advancements, movie attendance never again reached the record numbers it experienced in 1946, at the peak of the Golden Age of Hollywood (Britannica Online).
Mass Entertainment, Mass Paranoia: HUAC and the Hollywood Blacklist
The Cold War with the Soviet Union began in 1947, and with it came the widespread fear of communism, not only from the outside but equally from within. To counter this perceived threat, the House Un-American Activities Committee (HUAC) commenced investigations to locate communist sympathizers in America who were suspected of conducting espionage for the Soviet Union. In the highly conservative and paranoid atmosphere of the time, Hollywood, as the source of a mass-cultural medium, came under fire amid fears that subversive, communist messages were being embedded in films. In November 1947, more than 100 people in the movie business were called to testify before HUAC about their own and their colleagues’ involvement with communist affairs. Ten of those investigated refused to cooperate with the committee’s questions. These 10, later known as the Hollywood Ten, were fired from their jobs and sentenced to serve up to a year in prison. The studios, already slipping in influence and profit, were eager to cooperate in order to save themselves, and a number of producers signed an agreement stating that no communists would work in Hollywood.
The hearings, which recommenced in 1951 with the rise of Senator Joseph McCarthy’s influence, turned into a kind of witch hunt as witnesses were asked to testify against their associates, and a blacklist of suspected communists evolved. Over 324 individuals lost their jobs in the film industry as a result of blacklisting (the denial of work in a certain field or industry) and the HUAC investigations (Georgakas, 2004; Mills, 2007; Dresler et al., 2005).
Down With the Establishment: Youth Culture of the 1960s and 1970s
Movies of the late 1960s began attracting a younger demographic, as a growing number of young people were drawn in by films like Sam Peckinpah’s The Wild Bunch (1969), Stanley Kubrick’s 2001: A Space Odyssey (1968), Arthur Penn’s Bonnie and Clyde (1967), and Dennis Hopper’s Easy Rider (1969)—all revolutionary in their genres—that displayed a sentiment of unrest toward conventional social orders and included some of the earliest instances of realistic and brutal violence in film. These four films grossed so much money at the box office that producers began churning out low-budget copycats to draw in a new, profitable market (Motion Pictures). While this led to a rise in youth-culture films, few of them saw great success. However, the new liberal attitudes toward depictions of sex and violence in these films represented a sea change in the movie industry that manifested itself in many movies of the 1970s, including Francis Ford Coppola’s The Godfather (1972), William Friedkin’s The Exorcist (1973), and Steven Spielberg’s Jaws (1975), all three of which saw great financial success (Britannica Online; Belton, 1994).
Blockbusters, Knockoffs, and Sequels
In the 1970s, with the rise of work by Coppola, Spielberg, George Lucas, Martin Scorsese, and others, a new breed of director emerged. These directors were young and film-school educated, and they brought a sense of professionalism, sophistication, and technical mastery to their work, leading to a wave of blockbuster productions, including Close Encounters of the Third Kind (1977), Star Wars (1977), Raiders of the Lost Ark (1981), and E.T.: The Extra-Terrestrial (1982). The computer-generated special effects that were available at the time also contributed to the success of a number of large-budget productions. In response to these and several earlier blockbusters, movie production and marketing techniques began to shift, with studios investing more money in fewer films in the hopes of producing more big successes. For the first time, the hefty sums producers and distributors invested didn’t go to production costs alone; distributors were discovering the benefits of TV and radio advertising and finding that doubling their advertising costs could increase profits as much as three or four times over. With the opening of Jaws, one of the five top-grossing films of the decade (and the highest-grossing film of all time until the release of Star Wars in 1977), Hollywood embraced the wide-release method of movie distribution, abandoning the release methods of earlier decades, in which a film would debut in only a handful of select theaters in major cities before gradually becoming available to mass audiences. Jaws was released in 600 theaters simultaneously, and the big-budget films that followed opened in anywhere from 800 to 2,000 theaters nationwide on their opening weekends (Belton; Hanson & Garcia-Myers, 2000).
The major Hollywood studios of the late 1970s and early 1980s, now run by international corporations, tended to favor the conservative gamble of the tried and true, and as a result, the period saw an unprecedented number of high-budget sequels—as in the Star Wars, Indiana Jones, and Godfather films—as well as imitations and adaptations of earlier successful material, such as the plethora of “slasher” films that followed the success of the 1978 thriller Halloween. Additionally, corporations sought revenue sources beyond the movie theater, looking to the video and cable releases of their films. Introduced in 1975, the VCR had become nearly ubiquitous in American homes by 1998, with 88.9 million households owning the appliance (Rosen & Meier, 2000). Cable television’s growth was slower, but ownership of VCRs gave people a new reason to subscribe, and cable subsequently expanded as well (Rogers). The newly introduced concept of film-based merchandise (toys, games, books, etc.) allowed companies to increase profits even more.
The 1990s and Beyond
The 1990s saw the rise of two divergent strands of cinema: the technically spectacular blockbuster, with computer-generated special effects, and the independent, low-budget film. The capabilities of special effects were greatly enhanced when studios began manipulating film digitally. Early examples of this technology can be seen in Terminator 2: Judgment Day (1991) and Jurassic Park (1993). Films with an epic scope—Independence Day (1996), Titanic (1997), and The Matrix (1999)—also employed a range of computer-animation techniques and special effects to wow audiences and draw more viewers to the big screen. Toy Story (1995), the first fully computer-animated film, and those that came after it, such as Antz (1998), A Bug’s Life (1998), and Toy Story 2 (1999), displayed the improved capabilities of computer-generated animation (Sedman, 2000). At the same time, independent directors and producers, such as the Coen brothers and Spike Jonze, enjoyed increased popularity, often for lower-budget films that audiences were more likely to watch on video at home (Britannica Online). A prime example is the 1996 Academy Awards program, in which independent films dominated the Best Picture category: only one nominee, Jerry Maguire, came from a big film studio, while the rest were independent films. The growth of both independent movies and special-effects-laden blockbusters continues to the present day. You will read more about current issues and trends and the future of the movie industry later in this chapter.
Key Takeaways
- The concept of the motion picture was first introduced to a mass audience through Thomas Edison’s kinetoscope in 1891. However, it wasn’t until the Lumière brothers released the cinématographe in 1895 that motion pictures were projected for audience viewing. In the United States, film established itself as a popular form of entertainment with the nickelodeon theater in the 1910s.
- The release of The Jazz Singer in 1927 marked the birth of the talking film, and by 1930 silent film was a thing of the past. Technicolor emerged for film around the same time and found early success with movies like The Wizard of Oz and Gone With the Wind. However, people would continue to make films in black and white until the late 1950s.
- By 1915 most of the major film studios had moved to Hollywood. During the Golden Age of Hollywood, these major studios controlled every aspect of the movie industry, and the films they produced drew crowds to theaters in numbers that have still not been surpassed. After World War II, the studio system declined as a result of antitrust legislation that took power away from studios and of the invention of the television.
- During the 1960s and 1970s, there was a rise in films—including Bonnie and Clyde, The Wild Bunch, 2001: A Space Odyssey, and Easy Rider—that celebrated the emerging youth culture and a rejection of the conservatism of the previous decades. This also led to looser attitudes toward depictions of sexuality and violence in film. The 1970s and 1980s saw the rise of the blockbuster, with films like Jaws, Star Wars, Raiders of the Lost Ark, and The Godfather.
- The adoption of the VCR by most households in the 1980s reduced audiences at movie theaters but opened a new mass market of home movie viewers. Improvements in computer animation led to more special effects in film during the 1990s with movies like The Matrix, Jurassic Park, and the first fully computer-animated film, Toy Story.
Exercises
Identify four films that you would consider to be representative of major developments in the industry and in film as a medium that were outlined in this section. Imagine you are using these films to explain movie history to a friend. Provide a detailed explanation of why each of these films represents significant changes in attitudes, technology, or trends and situate each in the overall context of film’s development. Consider the following questions:
- How did this movie influence the film industry?
- What has been the lasting impact of this movie on the film industry?
- How were the film industry and its technology different before this film?
References
Baers, Michael. “Studio System,” in St. James Encyclopedia of Popular Culture, ed. Sara Pendergast and Tom Pendergast (Detroit: St. James Press, 2000), vol. 4, 565.
Balcanasu, Andrei Ionut, Sergey V. Smagin, and Stephanie K. Thrift, “Edison and the Lumiere Brothers,” Cartoons and Cinema of the 20th Century, http://library.thinkquest.org/C0118600/index.phtml?menu=en%3B1%3Bci1001.html.
Belton, John. American Cinema/American Culture. (New York: McGraw-Hill, 1994), 284–290.
Britannica Online, s.v. “Kinetoscope,” http://www.britannica.com/EBchecked/topic/318211/Kinetoscope/318211main/Article.
Britannica Online, s.v. “nickelodeon.”
Britannica Online, s.v. “History of the Motion Picture,” http://www.britannica.com/EBchecked/topic/394161/history-of-the-motion-picture.
British Movie Classics, “The Kinetoscope,” British Movie Classics, http://www.britishmovieclassics.com/thekinetoscope.php.
Dictionary of American History, 3rd ed., s.v. “Nickelodeon,” by Ryan F. Holznagel, Gale Virtual Reference Library.
Dresler, Kathleen, Kari Lewis, Tiffany Schoser, and Cathy Nordine, “The Hollywood Ten,” Dalton Trumbo, 2005, http://www.mcpld.org/trumbo/WebPages/hollywoodten.htm.
Encyclopedia of Communication and Information (New York: MacMillan Reference USA, 2002), s.v. “Méliès, Georges,” by Ted C. Jones, Gale Virtual Reference Library.
Fielding, Raymond. A Technological History of Motion Pictures and Television (Berkeley: University of California Press, 1967), 21.
Gale Virtual Reference Library, “Motion Pictures in Color,” in American Decades, ed. Judith S. Baughman and others, vol. 3, Gale Virtual Reference Library.
Gale Virtual Reference Library, Europe 1789–1914: Encyclopedia of the Age of Industry and Empire, vol. 1, s.v. “Cinema,” by Alan Williams, Gale Virtual Reference Library.
Georgakas, Dan. “Hollywood Blacklist,” in Encyclopedia of the American Left, ed. Mari Jo Buhle, Paul Buhle, and Dan Georgakas, 2004, http://writing.upenn.edu/~afilreis/50s/blacklist.html.
Gochenour, Phil. “Birth of the ‘Talkies’: The Development of Synchronized Sound for Motion Pictures,” in Science and Its Times, vol. 6, 1900–1950, ed. Neil Schlager and Josh Lauer (Detroit: Gale, 2000), 577.
Hanson, Steve and Sandra Garcia-Myers, “Blockbusters,” in St. James Encyclopedia of Popular Culture, ed. Sara Pendergast and Tom Pendergast (Detroit: St. James Press, 2000), vol. 1, 282.
Higham, Charles. The Art of the American Film: 1900–1971. (Garden City: Doubleday & Company, 1973), 85.
Menand, Louis. “Gross Points,” New Yorker, February 7, 2005, http://www.newyorker.com/archive/2005/02/07/050207crat_atlarge.
Mills, Michael. “Blacklist: A Different Look at the 1947 HUAC Hearings,” Modern Times, 2007, http://www.moderntimes.com/blacklist/.
Motion Picture Association of America, “History of the MPAA,” http://www.mpaa.org/about/history.
Motion Pictures, “Griffith,” Motion Pictures, http://www.uv.es/EBRIT/macro/macro_5004_39_6.html#0011.
Motion Pictures, “Post World War I US Cinema,” Motion Pictures, http://www.uv.es/EBRIT/macro/macro_5004_39_10.html#0015.
Motion Pictures, “Pre World War II Sound Era: Introduction of Sound,” Motion Pictures, http://www.uv.es/EBRIT/macro/macro_5004_39_11.html#0017.
Motion Pictures, “Pre World-War I US Cinema,” Motion Pictures: The Silent Feature: 1910-27, http://www.uv.es/EBRIT/macro/macro_5004_39_4.html#0009.
Motion Pictures, “Recent Trends in US Cinema,” Motion Pictures, http://www.uv.es/EBRIT/macro/macro_5004_39_37.html#0045.
Motion Pictures, “The War Years and Post World War II Trends: Decline of the Hollywood Studios,” Motion Pictures, http://www.uv.es/EBRIT/macro/macro_5004_39_24.html#0030.
Robinson, David. From Peep Show to Palace: The Birth of American Film (New York: Columbia University Press, 1994), 43–44.
Rogers, Everett. “Video is Here to Stay,” Center for Media Literacy, http://www.medialit.org/reading-room/video-here-stay.
Rosen, Karen and Alan Meier, “Power Measurements and National Energy Consumption of Televisions and Video Cassette Recorders in the USA,” Energy, 25, no. 3 (2000), 220.
Sedman, David. “Film Industry, Technology of,” in Encyclopedia of Communication and Information, ed. Jorge Reina Schement (New York: MacMillan Reference, 2000), vol. 1, 340.
8.3 Movies and Culture
Learning Objectives
- Recognize how movies reflect cultural attitudes, trends, and events.
- Indicate how movies influence culture.
Movies Mirror Culture
The relationship between movies and culture involves a complicated dynamic; while American movies certainly influence the mass culture that consumes them, they are also an integral part of that culture, a product of it, and therefore a reflection of prevailing concerns, attitudes, and beliefs. In considering the relationship between film and culture, it is important to keep in mind that, while certain ideologies may be prevalent in a given era, not only is American culture as diverse as the populations that form it, but it is also constantly changing from one period to the next. Mainstream films produced in the late 1940s and into the 1950s, for example, reflected the conservatism that dominated the sociopolitical arenas of the time. However, by the 1960s, a reactionary youth culture began to emerge in opposition to the dominant institutions, and these antiestablishment views soon found their way onto the screen—a far cry from the attitudes most commonly represented only a few years earlier.
In one sense, movies could be characterized as America’s storytellers. Not only do Hollywood films reflect certain commonly held attitudes and beliefs about what it means to be American, but they also portray contemporary trends, issues, and events, serving as records of the eras in which they were produced. Consider, for example, films about the September 11, 2001, terrorist attacks: Fahrenheit 9/11, World Trade Center, United 93, and others. These films grew out of a seminal event of the time, one that preoccupied the consciousness of Americans for years after it occurred.
Birth of a Nation
In 1915, director D. W. Griffith established his reputation with the highly successful film The Birth of a Nation, based on Thomas Dixon’s novel The Clansman, a prosegregation narrative about the American South during and after the Civil War. At the time, The Birth of a Nation was the longest feature film ever made, at almost 3 hours, and contained huge battle scenes that amazed and delighted audiences. Griffith’s storytelling ability helped solidify the narrative style that would go on to dominate feature films, and his experiments with techniques such as close-ups, jump cuts, and parallel editing helped make the film an artistic achievement.
Griffith’s film found success largely because it captured the social and cultural tensions of the era. As American studies specialist Lary May has argued, “[Griffith’s] films dramatized every major concern of the day (May, 1997).” In the early 20th century, fears about recent waves of immigrants had led to certain racist attitudes in mass culture, with “scientific” theories of the time purporting to link race with inborn traits like intelligence and other capabilities. Additionally, the dominant political climate, largely a reaction against populist labor movements, was one of conservative elitism, eager to attribute social inequalities to natural human differences (Darity). According to a report by the New York Evening Post after the film’s release, even some Northern audiences “clapped when the masked riders took vengeance on Negroes (Higham).” However, the outrage many groups expressed about the film is a good reminder that American culture is not monolithic, that there are always strong contingents in opposition to dominant ideologies.
While critics praised the film for its narrative complexity and epic scope, many others were outraged and even started riots at several screenings because of its highly controversial, openly racist attitudes, which glorified the Ku Klux Klan and blamed Southern Blacks for the destruction of the war (Higham). Many Americans joined the National Association for the Advancement of Colored People (NAACP) in denouncing the film, and the National Board of Review eventually cut a number of the film’s racist sections (May). However, it’s important to keep in mind the attitudes of the early 1900s: at the time, the nation was deeply divided, and Jim Crow laws and segregation were widely enforced. Nonetheless, The Birth of a Nation was the highest-grossing movie of its era, and in 1992 the Library of Congress classified it among the “culturally, historically, or aesthetically significant films” in U.S. history.
“The American Way”
After World War I and until the bombing of Pearl Harbor in 1941, American films generally reflected the neutral, isolationist stance that prevailed in politics and culture. However, after the United States was drawn into the war, the government enlisted Hollywood to help with the war effort, opening the federal Bureau of Motion Picture Affairs in Los Angeles. Bureau officials served in an advisory capacity on the production of war-related films, an effort with which the studios cooperated. As a result, films tended toward the patriotic and were produced to inspire feelings of pride and confidence in being American and to clearly establish that America and its allies were forces of good. For instance, the critically acclaimed Casablanca (1942) paints a picture of the ill effects of fascism, illustrates the values that heroes like Victor Laszlo hold, and depicts America as a place for refugees to find democracy and freedom (Digital History).
These early World War II films were sometimes overtly propagandist, intended to influence American attitudes rather than present a genuine reflection of American sentiments toward the war. Frank Capra’s Why We Fight films, for example, the first of which was produced in 1942, were developed for the U.S. Army and later shown to general audiences; they delivered a war message through narrative (Koppes & Black, 1987). As the war continued, however, filmmakers opted to forgo patriotic themes for a more serious reflection of American sentiments, as exemplified by films like Alfred Hitchcock’s Lifeboat.
Youth versus Age: From Counterculture to Mass Culture
In Mike Nichols’s 1967 film The Graduate, Dustin Hoffman, as the film’s protagonist, enters into a romantic affair with the wife of his father’s business partner. However, Mrs. Robinson and the other adults in the film fail to understand the young, alienated hero, who eventually rebels against them. The Graduate, which brought in more than $44 million at the box office, reflected the attitudes of many members of a young generation growing increasingly dissatisfied with what they perceived to be the repressive social codes established by their more conservative elders (Dirks).
This baby boomer generation came of age during the Korean and Vietnam wars. Not only did the youth culture express cynicism toward the patriotic, prowar stance of their World War II–era elders, but they also displayed a fierce resistance toward institutional authority in general, an antiestablishment attitude epitomized in the 1967 hit film Bonnie and Clyde. In the film, a young outlaw couple sets out on a cross-country bank-robbing spree until they are killed in a violent police ambush at the film’s close (Belton).
Bonnie and Clyde’s violence provides one example of the ways films at the time were testing the limits of permissible on-screen material. The youth culture’s liberal attitudes toward formerly taboo subjects like sexuality and drugs began to emerge in film during the late 1960s. Like Bonnie and Clyde, Sam Peckinpah’s 1969 Western The Wild Bunch displays an early example of aestheticized violence in film. The wildly popular Easy Rider (1969)—containing drugs, sex, and violence—may owe a good deal of its initial success to liberalized audiences. And in the same year, Midnight Cowboy, one of the first Hollywood films to receive an X rating (in this case for its sexual content), won three Academy Awards, including Best Picture (Belton). As the release and subsequently successful reception of these films attest, what at the decade’s outset had been countercultural had, by the decade’s close, become mainstream.
The Hollywood Production Code
When the MPAA (originally the MPPDA) first banded together in 1922 to combat government censorship and to promote artistic freedom, the association attempted a system of self-regulation. However, by 1930—in part because of the transition to talking pictures—renewed criticism and calls for censorship from conservative groups made it clear to the MPPDA that this loose system of self-regulation was not enough protection. As a result, the MPPDA instituted the Production Code, or Hays Code (after MPPDA president William H. Hays), which remained in place until 1967. The code, which according to motion picture producers concerned itself with ensuring that movies were “directly responsible for spiritual or moral progress, for higher types of social life, and for much correct thinking (History Matters),” was strictly enforced starting in 1934, putting an end to most public complaints. However, many people in Hollywood resented its restrictiveness. After a series of Supreme Court cases in the 1950s regarding the code’s restrictions on freedom of speech, the Production Code grew weaker until it was finally replaced in 1967 with the MPAA rating system (American Decades Primary Sources, 2004).
MPAA Ratings
As films like Bonnie and Clyde and Who’s Afraid of Virginia Woolf? (1966) tested the limits on violence and language, it became clear that the Production Code was in need of replacement. In 1968, the MPAA adopted a ratings system to identify films in terms of potentially objectionable content. By providing officially designated categories for films that would not have passed Production Code standards of the past, the MPAA opened a way for films to deal openly with mature content. The ratings system originally included four categories: G (suitable for general audiences), M (equivalent to the PG rating of today), R (restricted to adults over age 16), and X (equivalent to today’s NC-17).
The MPAA ratings system, with some modifications, is still in place today. Before release in theaters, films are submitted to the MPAA board for a screening, during which advisers decide on the most appropriate rating based on the film’s content. However, studios are not required to have the MPAA screen releases ahead of time, and some studios release films without an MPAA rating at all. Commercially, less restrictive ratings are generally more beneficial, particularly in the case of adult-themed films that risk earning the most restrictive rating, the NC-17, because some movie theaters will not screen a movie rated NC-17. When filmmakers receive a more restrictive rating than they were hoping for, they may resubmit the film for review after editing out objectionable scenes (Dick, 2006).
The New War Film: Cynicism and Anxiety
Unlike the patriotic war films of the World War II era, many of the films about U.S. involvement in Vietnam reflected strong antiwar sentiment, criticizing American political policy and portraying war’s damaging effects on those who survived it. Films like Dr. Strangelove (1964), M*A*S*H (1970), The Deer Hunter (1978), and Apocalypse Now (1979) portray the military establishment in a negative light and dissolve clear-cut distinctions, such as the “us versus them” mentality, of earlier war films. These, and the dozens of Vietnam War films that were produced in the 1970s and 1980s—Oliver Stone’s Platoon (1986) and Born on the Fourth of July (1989) and Stanley Kubrick’s Full Metal Jacket (1987), for example—reflect the sense of defeat and lack of closure Americans felt after the Vietnam War and the emotional and psychological scars it left on the nation’s psyche (Dirks, 2010; Anderegg, 1991). A spate of military and politically themed films emerged during the 1980s as America recovered from defeat in Vietnam, while at the same time facing anxieties about the ongoing Cold War with the Soviet Union.
Fears about the possibility of nuclear war were very real during the 1980s, and some film critics argue that these anxieties were reflected not only in overtly political films of the time but also in the popularity of horror films, like Halloween and Friday the 13th, which feature a mysterious and unkillable monster, and in the popularity of the fantastic in films like E.T.: The Extra-Terrestrial, Raiders of the Lost Ark, and Star Wars, which offer imaginative escapes (Wood, 1986).
Movies Shape Culture
Just as movies reflect the anxieties, beliefs, and values of the cultures that produce them, they also help to shape and solidify a culture’s beliefs. Sometimes the influence is trivial, as in the case of fashion trends or figures of speech. After the release of Flashdance in 1983, for instance, torn T-shirts and leg warmers became hallmarks of the fashion of the 1980s (Pemberton-Sikes, 2006). However, sometimes the impact can be profound, leading to social or political reform, or the shaping of ideologies.
Film and the Rise of Mass Culture
During the 1890s and up until about 1920, American culture experienced a period of rapid industrialization. As people moved from farms to centers of industrial production, urban areas began to hold larger and larger concentrations of the population. At the same time, film and other methods of mass communication, such as advertising and radio, were developing, and their messages concerning tastes, desires, customs, speech, and behavior spread from these population centers to outlying areas across the country. The effect of early mass-communication media was to wear away regional differences and create a more homogenized, standardized culture.
Film played a key role in this development, as viewers began to imitate the speech, dress, and behavior of their common heroes on the silver screen (Mintz, 2007). In 1911, the Vitagraph company began publishing The Motion Picture Magazine, America’s first fan magazine. Originally conceived as a marketing tool to keep audiences interested in Vitagraph’s pictures and major actors, The Motion Picture Magazine helped create the concept of the film star in the American imagination. Fans became obsessed with the off-screen lives of their favorite celebrities, like Pearl White, Florence Lawrence, and Mary Pickford (Doyle, 2008).
American Myths and Traditions
American identity in mass society is built around certain commonly held beliefs, or myths about shared experiences, and these American myths are often disseminated through or reinforced by film. One example of a popular American myth, one that dates back to the writings of Thomas Jefferson and other founders, is an emphasis on individualism—a celebration of the common man or woman as a hero or reformer. With the rise of mass culture, the myth of the individual became increasingly appealing because it provided people with a sense of autonomy and individuality in the face of an increasingly homogenized culture. The hero myth finds embodiment in the Western, a film genre that was popular from the silent era through the 1960s, in which the lone cowboy, a seminomadic wanderer, makes his way in a lawless, and often dangerous, frontier. An example is 1952’s High Noon. From 1926 until 1967, Westerns accounted for nearly a quarter of all films produced. In other films, like Frank Capra’s 1946 movie It’s a Wonderful Life, the individual triumphs by standing up to injustice, reinforcing the belief that one person can make a difference in the world (Belton). And in more recent films, hero figures such as Indiana Jones, Luke Skywalker (Star Wars), and Neo (The Matrix) have continued to emphasize individualism.
Social Issues in Film
As D. W. Griffith recognized nearly a century ago, film has enormous power as a medium to influence public opinion. Ever since Griffith’s The Birth of a Nation sparked strong public reactions in 1915, filmmakers have been producing movies that address social issues, sometimes subtly, and sometimes very directly. More recently, films like Hotel Rwanda (2004), about the 1994 Rwandan genocide, or The Kite Runner (2007), a story that takes place in the midst of a war-torn Afghanistan, have captured audience imaginations by telling stories that raise social awareness about world events. And a number of documentary films directed at social issues have had a strong influence on cultural attitudes and have brought about significant change.
In the 2000s, documentaries, particularly those of an activist nature, were met with greater interest than ever before. Films like Super Size Me (2004), which documents the effects of excessive fast-food consumption and criticizes the fast-food industry for promoting unhealthy eating habits for profit, and Food, Inc. (2009), which examines corporate farming practices and points to the negative impact these practices can have on human health and the environment, have brought about important changes in American food culture (Severson, 2009). Just 6 weeks after the release of Super Size Me, McDonald’s took the supersize option off its menu and since 2004 has introduced a number of healthy food options in its restaurants (Sood, 2004). Other fast-food chains have made similar changes (Sood, 2004).
Other documentaries intended to influence cultural attitudes and inspire change include those made by director Michael Moore. Moore’s films present a liberal stance on social and political issues such as health care, globalization, and gun control. His 2002 film Bowling for Columbine, for example, addressed the Columbine High School shootings of 1999, presenting a critical examination of American gun culture. While some critics have accused Moore of producing propagandistic material under the label of documentary because of his films’ strong biases, his films have been popular with audiences, with four of his documentaries ranking among the highest-grossing documentaries of all time. Fahrenheit 9/11 (2004), which criticized the second Bush administration and its involvement in the Iraq War, earned $119 million at the box office, making it the most successful documentary of all time (Dirks, 2006).
Key Takeaways
- As products of mass culture, movies reflect cultural attitudes, trends, and concerns:
- D. W. Griffith’s film The Birth of a Nation, presenting a racist perspective on the U.S. Civil War and its aftermath, reflected racist concerns of the era in which it was produced.
- During World War II, films reflected the patriotic, prowar sentiments of the time.
- In the 1960s and 1970s, with the rise of an antiestablishment youth culture, movies adopted more liberal stances toward sexuality and violence and displayed a cynicism toward established social structures.
- After the failure of the Vietnam War, films reflected a more ambivalent attitude toward war.
- The MPAA rating system, established in 1968, gave filmmakers greater freedom in the content they were able to portray on screen.
- Movies shape cultural attitudes and customs, as audiences adopt the attitudes and styles of the characters they watch on screen. Filmmakers may use their movies to influence cultural attitudes toward certain social issues, as in Fahrenheit 9/11 and Super Size Me.
Exercises
- Consider three films you have watched in the last year. In what ways have these films reflected current concerns, trends, or attitudes? Of these movies, which do you think have the most potential to shape cultural attitudes or bring about social change? How do you think these movies might bring about this change?
- Locate a film that has been remade and watch the original and remade versions. Besides the obvious changes in fashion and technology, what other differences do you notice that reflect the cultural attitudes, trends, and events of the era in which each version was produced?
References
American Decades Primary Sources, “The Production Code of the Motion Picture Producers and Distributors of America, Inc.—1930–1934,” American Decades Primary Sources, ed. Cynthia Rose (Detroit: Gale, 2004), vol. 4, 12–15.
Anderegg, Michael. Introduction to Inventing Vietnam: The War in Film and Television, ed. Michael Anderegg (Philadelphia: Temple University Press, 1991), 6–8.
Belton, American Cinema/American Culture, 286.
Belton, John. Introduction to Movies and Mass Culture, ed. John Belton, 12.
Darity, William A. “Birth of a Nation,” Encyclopedia of the Social Sciences, 2nd ed., ed. William A. Darity, Jr., Gale Virtual Reference Library, 1:305–306.
Dick, Kirby. Interview by Terry Gross, Fresh Air, NPR, September 13, 2006, http://www.npr.org/templates/story/story.php?storyId=6068009.
Digital History, Review of Casablanca, directed by Michael Curtiz, Digital History, http://www.digitalhistory.uh.edu/historyonline/bureau_casablanca.cfm.
Dirks, Tim. “1980s Film History,” Filmsite, 2010, http://www.filmsite.org.
Dirks, Tim. “Film History of the 2000s,” Filmsite; Washington Post, “The 10 Highest-Grossing Documentaries,” July 31, 2006, http://www.washingtonpost.com/wp-dyn/content/graphic/2006/07/31/GR2006073100027.html.
Dirks, Tim. Review of The Graduate, directed by Mike Nichols, Filmsite, http://www.filmsite.org/grad.html.
Doyle, Jack. “A Star is Born: 1910s,” Pop History Dig, 2008, http://www.pophistorydig.com/?tag=film-stars-mass-culture.
Higham, Art of the American Film, 13.
History Matters, “Complete Nudity is Never Permitted: The Motion Picture Code of 1930,” http://historymatters.gmu.edu/d/5099/.
Koppes, Clayton R. and Gregory D. Black, Hollywood Goes to War: How Politics, Profits and Propaganda Shaped World War II Movies (Los Angeles: The Free Press, 1987), 122.
May, Lary. “Apocalyptic Cinema: D. W. Griffith and the Aesthetics of Reform,” in Movies and Mass Culture, ed. John Belton (New Brunswick, NJ: Rutgers University Press, 1997), 26.
Mintz, Steven. “The Formation of Modern American Mass Culture,” Digital History, 2007, http://www.digitalhistory.uh.edu/database/article_display.cfm?HHID=455.
Pemberton-Sikes, Diana. “15 Movies That Inspired Fashion Trends,” The Clothing Chronicles, March 3, 2006, http://www.theclothingchronicles.com/archives/217-03032006.htm.
Severson, Kim. “Eat, Drink, Think, Change,” New York Times, June 3, 2009, http://www.nytimes.com/2009/06/07/movies/07seve.html.
Sood, Suemedha. “Weighing the Impact of ‘Super Size Me,’” Wiretap, June 29, 2004. http://www.wiretapmag.org/stories/19059/.
Wood, Robin. Hollywood from Vietnam to Reagan (New York: Columbia University Press, 1986), 168.
8.4 Issues and Trends in Film
Learning Objectives
- Recognize the role the major Hollywood studios have in shaping the movie industry today.
- Identify the major economic concerns involved in the production and distribution of films.
- Describe the effects of piracy on the movie industry.
Filmmaking is both a commercial and an artistic venture. The current economic situation in the film industry, with increased production and marketing costs and lower audience turnout in theaters, often sets the standard for the films big studios are willing to invest in. If you have wondered why studios have released so many remakes and sequels in recent years, this section may help you understand the motivating factors behind those decisions.
The Influence of Hollywood
In the movie industry today, publicity and product are two sides of the same coin. Even films that get a lousy critical reception can do extremely well in ticket sales if their marketing campaigns manage to create enough hype. Similarly, two comparable films can produce very different results at the box office if they have been given different levels of publicity. This explains why What Women Want, starring Mel Gibson, brought in $33.6 million in its opening weekend in 2000, while a few months later The Million Dollar Hotel, also starring Gibson, brought in only $29,483 during its opening weekend (Nash Information Services, 2000; Nash Information Services, 2001). Unlike in the days of the Hollywood studio system, actors alone no longer draw audiences to a movie. The owners of the nation’s major movie theater chains are keenly aware that a film’s success at the box office has everything to do with studio-generated marketing and publicity. What Women Want was produced by Paramount, one of the film industry’s six leading studios, and widely released (on 3,000 screens) after an extensive marketing effort, while The Million Dollar Hotel was produced by Lionsgate, an independent studio without the marketing budget necessary to fill enough seats for a wide release on opening weekend (Epstein, 2005).
The Hollywood “dream factory,” as Hortense Powdermaker labeled it in her 1950 book on the movie industry (Powdermaker), manufactures an experience that is part art and part commercial product (James, 1989). While the studios of today are less factory-like than they were in the vertically integrated studio system era, the coordinated efforts of a film’s production team can still be likened to a machine calibrated for mass production. The films the studios churn out are the result of a capitalist enterprise that ultimately looks to the “bottom line” to guide most major decisions. Hollywood is an industry, and as in any other industry in a mass market, its success relies on control of production resources and “raw materials” and on its access to mass distribution and marketing strategies to maximize the product’s reach and minimize competition (Belton). In this way, Hollywood has an enormous influence on the films to which the public has access.
Ever since the rise of the studio system in the 1930s, the majority of films have originated with the leading Hollywood studios; today, the six big studios control 95 percent of the film business (Dick). In the early years, audiences were familiar with the major studios, their rosters of actors and directors, and the types of films each studio was likely to release. All of that changed with the decline of the studio system; screenwriters, directors, scripts, and cinematographers no longer worked exclusively with one studio, so these days, while moviegoers are likely to know the name of a film’s director and major actors, it’s unusual for them to identify a film with the studio that distributes it. Studios, however, are no less influential. The previews of coming attractions that play before a movie begins are controlled by the studios (Busis, 2010). Online marketing, TV commercials, and advertising partnerships with other industries—the name of an upcoming film, for instance, appearing on Coke cans—are available tools for the big-budget studios that can commit millions to prerelease advertising. And even though studios no longer own the country’s movie theater chains, the films produced by the big six are the ones the multiplexes invariably show: unlike films by independents, big-studio movies are a safe bet to sell tickets.
The Blockbuster Standard
While it may seem like the major studios are making heavy profits, moviemaking today is a much riskier, less profitable enterprise than it was in the studio system era. The massive budgets required for producing and globally marketing a film are huge financial gambles; in fact, most movies cost the studios more to market and produce—upward of $100 million—than their box-office returns ever generate. With such high stakes, studios have come to rely on the handful of blockbuster films—movies like Titanic, Pirates of the Caribbean, and Avatar—that keep them afloat (New World Encyclopedia). The blockbuster film becomes a touchstone, not only for production values and story lines but also for moviegoers’ expectations. Because studios know they can rely on certain predictable elements to draw audiences, they tend to invest the majority of their budgets in movies that fit the blockbuster mold. Remakes, movies with sequel setups, and films based on best-selling novels or comic books are safer bets than original screenplays or movies with experimental or edgy themes.
James Cameron’s Titanic (1997), the second-highest-grossing movie of all time, saw such success largely because it was based on a well-known story, contained predictable plot elements, and was designed to appeal to the widest possible range of audience demographics with romance, action, expensive special effects, and an epic scope—meeting the blockbuster standard on several levels. The film’s astronomical $200 million production cost was a gamble indeed, requiring the backing of two studios, Paramount and 20th Century Fox (Hanson & Garcia-Myers). However, the rash of high-budget, high-grossing films that have appeared since—Harry Potter and the Sorcerer’s Stone and its sequels (2002–2011), Avatar (2009), Alice in Wonderland (2010), The Lord of the Rings films (2001–2003), The Dark Knight (2008), and others—indicates that, for the time being, the blockbuster standard will continue to drive Hollywood production.
The Role of Independent Films
While the blockbuster still drives the industry, the formulaic nature of most Hollywood films of the 1980s, 1990s, and 2000s has opened a door for independent films to make their mark on the industry. Audiences have welcomed movies like Fight Club (1999), Lost in Translation (2003), and Juno (2007) as a change from standard Hollywood blockbusters. Few independent films reached mainstream audiences during the 1980s, but a number of developments in that decade paved the way for their increased popularity in the years that followed. The Sundance Film Festival (originally the U.S. Film Festival) began in Park City, Utah, in 1980 as a way for independent filmmakers to showcase their work. Since then, the festival has grown to garner more public attention and now often represents an opportunity for independents to find backing from larger studios. In 1989, Steven Soderbergh’s sex, lies, and videotape, released by Miramax, became the first independent film to break out of the art-house circuit and find its way into the multiplexes.
In the 1990s and 2000s, independent directors like the Coen brothers, Wes Anderson, Sofia Coppola, and Quentin Tarantino made significant contributions to contemporary cinema. Tarantino’s 1994 film, Pulp Fiction, garnered attention for its experimental narrative structure, witty dialogue, and nonchalant approach to violence. It was the first independent film to break $100 million at the box office, proving that there is still room in the market for movies produced outside of the big six studios (Bergan, 2006).
The Role of Foreign Films
English-born Michael Apted, former president of the Directors Guild of America, once said, “Europeans gave me the inspiration to make movies…but it was the Americans who showed me how to do it (Apted, 2007).” Major Hollywood studio films have dominated the movie industry worldwide since Hollywood’s golden age, yet American films have always been in a relationship of mutual influence with films from foreign markets. From the 1940s through the 1960s, for example, American filmmakers admired and were influenced by the work of overseas auteurs—directors like Ingmar Bergman (Sweden), Federico Fellini (Italy), François Truffaut (France), and Akira Kurosawa (Japan), whose personal, creative visions were reflected in their work (Pells, 2006). The concept of the auteur was particularly important in France in the late 1950s and early 1960s, when French filmmaking underwent a rebirth in the form of the New Wave movement. The French New Wave was characterized by an independent production style that showcased the personal authorship of its young directors (Bergan). The influence of the New Wave was, and continues to be, felt in the United States: the generation of young, film-school-educated directors that became prominent in American cinema in the late 1960s and early 1970s owes a good deal of its stylistic technique to the work of French New Wave directors.
In the current era of globalization, the influence of foreign films remains strong. The rapid growth of the entertainment industry in Asia, for instance, has led to an exchange of style and influence with U.S. cinema. Remakes of a number of popular Japanese horror films, including The Ring (2002), Dark Water (2005), and The Grudge (2004), have fared well in the United States, as have Chinese martial arts films like Crouching Tiger, Hidden Dragon (2000), Hero (2002), and House of Flying Daggers (2004). At the same time, U.S. studios have recently tried to expand into the growing Asian market by purchasing the rights to films from South Korea, Japan, and Hong Kong for remakes with Hollywood actors (Lee, 2005).
Cultural Imperialism or Globalization?
With the growth of Internet technology worldwide and the expansion of markets in rapidly developing countries, American films are increasingly finding their way into movie theaters and home DVD players around the world. In the eyes of many people, the problem is not the export of a U.S. product to outside markets, but the export of American culture that comes with that product. Just as films of the 1920s helped to shape a standardized, mass culture as moviegoers learned to imitate the dress and behavior of their favorite celebrities, contemporary film is now helping to form a mass culture on the global scale, as the youth of foreign nations acquire the American speech, tastes, and attitudes reflected in film (Gienow-Hecht, 2006).
Staunch critics, feeling helpless to stop the erosion of their national cultures, accuse the United States of cultural imperialism (the deliberate conquest of one culture by another to spread capitalism), carried out through flashy Hollywood movies and commercialism. At the same time, others argue that the worldwide impact of Hollywood films is an inevitable part of globalization, a process that erodes national borders and opens the way for a free flow of ideas between cultures (Gienow-Hecht, 2006).
The Economics of Movies
With control of over 95 percent of U.S. film production, the big six Hollywood studios—Warner Bros., Paramount, 20th Century Fox, Universal, Columbia, and Disney—are at the forefront of the American film industry, setting the standards for distribution, release, marketing, and production values. However, the high costs of moviemaking today are such that even successful studios must find moneymaking potential in crossover media—computer games, network TV rights, spin-off TV series, DVD and Blu-ray Disc releases, toys and other merchandise, books, and other after-market products—to help recoup their costs. The drive for aftermarket marketability in turn dictates the kinds of films studios are willing to invest in (Hansen & Garcia-Meyers).
Rising Costs and Big Budget Movies
In the days of the vertically integrated studio system, filmmaking was a streamlined process, neither as risky nor as expensive as it is today. When producers, directors, screenwriters, art directors, actors, cinematographers, and other technical staff were all under contract with one studio, turnaround time for the casting and production of a film was often as little as 3 to 4 months. Beginning in the 1970s, after the decline of the studio system, the production costs for films increased dramatically, forcing the studios to invest more of their budgets in marketing efforts that could generate presales—that is, sales of distribution rights for a film in different sectors before the movie’s release (Hansen & Garcia-Meyers). This is still true of filmmaking today. With contracts that must be negotiated with actors, directors, and screenwriters, and with extended production times, costs are dramatically higher than they were in the 1930s—when a film could be made for around $300,000 (Schaefer, 1999). By contrast, today’s average production budget, not including marketing expenses, is close to $65 million (Nash Information Services).
Consider James Cameron’s Avatar, released in 2009, which cost close to $340 million, making it one of the most expensive films of all time. Where does such an astronomical budget go? When weighing the total costs of producing and releasing a film, about half of the money goes to advertising. In the case of Avatar, the film cost $190 million to make and around $150 million to market (Sherkat-Massoom, 2010; Keegan, 2009). Of that $190 million production budget, part goes toward above-the-line costs, those that are negotiated before filming begins, and part to below-the-line costs, those that are generally fixed. Above-the-line costs include screenplay rights; salaries for the writer, producer, director, and leading actors; and salaries for directors’, actors’, and producers’ assistants. Below-the-line costs include the salaries for nonstarring cast members and technical crew, use of technical equipment, travel, locations, studio rental, and catering (Tirelli). For Avatar, the reported $190 million doesn’t include money for research and development of 3-D filming and computer-modeling technologies required to put the film together. If these costs are factored in, the total movie budget may be closer to $500 million (Keegan). Fortunately for 20th Century Fox, Avatar made a profit over these expenses in box-office sales alone, raking in $750 million domestically (to make it the highest-grossing movie of all time) in the first 6 months after its release (Box Office Mojo, 2010). However, keep in mind that Avatar was released in both 2-D and 3-D. Because 3-D tickets cost more than tickets at traditional 2-D theaters, the box-office returns are somewhat inflated.
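To make the budget arithmetic concrete, here is a minimal sketch in Python that tallies a film budget using the above-the-line/below-the-line split described above. The individual line items are hypothetical placeholders; only the $190 million production and $150 million marketing totals echo the Avatar reporting cited in the text.

```python
# A minimal, illustrative film-budget tally. All individual line items are
# hypothetical; only the totals echo the Avatar figures cited in the text.

above_the_line = {          # negotiated before filming begins
    "screenplay rights": 5_000_000,
    "writer/producer/director salaries": 40_000_000,
    "leading actors": 30_000_000,
}
below_the_line = {          # generally fixed costs
    "supporting cast and technical crew": 45_000_000,
    "equipment, locations, studio rental": 50_000_000,
    "travel and catering": 20_000_000,
}
marketing = 150_000_000     # roughly what Avatar's campaign reportedly cost

production = sum(above_the_line.values()) + sum(below_the_line.values())
total = production + marketing

print(f"Production budget: ${production:,}")                   # $190,000,000
print(f"Total with marketing: ${total:,}")                     # $340,000,000
print(f"Share spent on advertising: {marketing / total:.0%}")  # 44%
```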
The Big Budget Flop
However, for every expensive film that has done well at the box office, there are a handful of others that have tanked. Back in 1980, when United Artists (UA) was a major Hollywood studio, its epic western Heaven’s Gate cost nearly six times its original budget: $44 million instead of the proposed $7.6 million. The movie, which bombed at the box office, was the largest failure in film history at the time, losing at least $40 million and forcing the sale of the studio to MGM (Hall & Neale, 2010). Since then, Heaven’s Gate has become synonymous with commercial failure in the film industry (Dirks).
More recently, the 2005 movie Sahara lost $78 million, making it one of the biggest financial flops in film history. The film’s initial production budget of $80 million eventually doubled to $160 million, due to complications with filming in Morocco and to numerous problems with the script (Bunting, 2007).
Piracy
Movie piracy used to be perpetrated in two ways: Either someone snuck into a theater with a video camera, turning out blurred, wobbly, off-colored copies of the original film, or somebody close to the film leaked a private copy intended for reviewers. In the digital age, however, crystal-clear bootleg copies increasingly appear illegally on DVD and the Internet, posing a much greater threat to a film’s profitability. Even safeguard techniques like digital watermarks are frequently sidestepped by tech-savvy pirates (France, 2009).
In 2009, an unfinished copy of 20th Century Fox’s X-Men Origins: Wolverine appeared online 1 month before the movie’s release date in theaters. Within a week, more than 1 million people had downloaded the pirated film. Similar situations have occurred in recent years with other major movies, including The Hulk (2003) and Star Wars Episode III: Revenge of the Sith (2005) (France, 2009). According to a 2006 study sponsored by the MPAA, Internet piracy and other methods of illegal copying cost major Hollywood studios $6.1 billion in the previous year (Hiestand, 2006). The findings of this report have since been called into question, with investigators claiming that there was no clear methodology for how researchers estimated those figures (Sandoval, 2010). Nonetheless, the ease of theft made possible by the digitization of film, along with improved file-sharing technologies like BitTorrent, a peer-to-peer protocol for transferring large quantities of information between users, has put increased financial strain on the movie industry.
Key Takeaways
- A film’s performance at the box office is often directly related to the studio marketing budget that backs it.
- Because of high marketing and production costs, the major studios have increasingly come to rely on blockbuster films to keep themselves profitable.
- Independent films found increased popularity in the 1990s and 2000s, in part because they represented a break from the predictable material often released by studios.
- With the rise of digital filming technology and online movies, movie piracy has become an increasing concern for Hollywood.
Exercises
In Section 8.3 “Issues and Trends in Film”, you learned that blockbuster films rely on certain predictable elements to attract audiences. Think about recent blockbusters like Alice in Wonderland, Avatar, and Pirates of the Caribbean and consider the following:
- What elements do these films have in common? Why do you think these elements help to sell movies?
- How have the big Hollywood studios shaped these elements?
- How do economic concerns, like box-office totals, promote predictable elements?
- Have these movies been affected by piracy? If so, how? If not, why?
References
Apted, Michael. “Film’s New Anxiety of Influence,” Newsweek, December 28, 2007. http://www.newsweek.com/2007/12/27/film-s-new-anxiety-of-influence.html.
Belton, American Cinema/American Culture, 61–62.
Bergan, Film, 60.
Bergan, Ronald. Film (New York: Dorling Kindersley, 2006), 84.
Box Office Mojo, “Avatar,” May 31, 2010, http://boxofficemojo.com/movies/?id=avatar.htm.
Bunting, Glenn F. “$78 Million of Red Ink?” Los Angeles Times, April 15, 2007, http://articles.latimes.com/2007/apr/15/business/fi-movie15/4.
Busis, Hillary. “How Do Movie Theaters Decide Which Trailers to Show?” Slate, April 15, 2010, http://www.slate.com/id/2246166/.
Dick, Kirby. Interview, Fresh Air.
Dirks, “The History of Film: the 1980s.”
Epstein, Edward Jay. “Neither the Power nor the Glory: Why Hollywood Leaves Originality to the Indies,” Slate, October 17, 2005, http://www.slate.com/id/2128200.
France, Lisa Respers. “In Digital Age, Can Movie Piracy Be Stopped?” CNN, May 2, 2009, http://www.cnn.com/2009/TECH/05/01/wolverine.movie.piracy/.
Gienow-Hecht, Jessica C. E. “A European Considers the Influence of American Culture,” eJournal USA, February 1, 2006, http://www.america.gov/st/econenglish/2008/June/20080608094132xjyrreP0.2717859.html.
Hall, Sheldon and Stephen Neale, “Super Blockbusters: 1976–1985,” in Epics, Spectacles, and Blockbusters: A Hollywood History (Detroit: Wayne State Univ. Press, 2010), 231.
Hansen and Garcia-Meyers, “Blockbusters,” 283.
Hiestand, Jesse. “MPAA Study: ’05 Piracy Cost $6.1 Bil.,” Hollywood Reporter, May 3, 2006. http://business.highbeam.com/2012/article-1G1-146544812/mpaa-study-05-piracy-cost-61-bil.
James, Caryn. “Critic’s Notebook: Romanticizing Hollywood’s Dream Factory,” New York Times, November 7, 1989, http://www.nytimes.com/1989/11/07/movies/critic-s-notebook-romanticizing-hollywood-s-dream-factory.html.
Keegan, “How Much Did Avatar Really Cost?”
Keegan, Rebecca. “How Much Did Avatar Really Cost?” Vanity Fair, December 2009, http://www.vanityfair.com/online/oscars/2009/12/how-much-did-avatar-really-cost.html.
Lee, Diana. “Hollywood’s Interest in Asian Films Leads to Globalization,” UniOrb.com, December 1, 2005, http://uniorb.com/ATREND/movie.htm.
Nash Information Services, “Glossary of Movie Business Terms,” The Numbers, http://www.the-numbers.com/glossary.php.
Nash Information Services, “The Million Dollar Hotel,” The Numbers: Box Office Data, Movie Stars, Idle Speculation, http://www.the-numbers.com/2001/BHOTL.php.
Nash Information Services, “What Women Want,” The Numbers: Box Office Data, Movie Stars, Idle Speculation, http://www.the-numbers.com/2000/WWWNT.php.
New World Encyclopedia, s.v. “Film Industry,” www.newworldencyclopedia.org/entry/Hollywood.
Pells, Richard. “Is American Culture ‘American’?” eJournal USA, February 1, 2006, http://www.america.gov/st/econ-english/2008/June/20080608102136xjyrreP0.3622858.html.
Powdermaker, Hortense. “Hollywood, the Dream Factory,” http://astro.temple.edu/~ruby/wava/powder/intro.html.
Sandoval, Greg. “Feds Hampered by Incomplete MPAA Piracy Data,” CNET News, April 19, 2010, http://news.cnet.com/8301-31001_3-20002837-261.html.
Schaefer, Eric. “Bold! Daring! Shocking! True!”: A History of Exploitation Films, 1919–1959 (Durham, NC: Duke University Press, 1999), 50.
Sherkat-Massoom, Mojgan. “10 Most Expensive Movies Ever Made,” Access Hollywood, 2010, http://today.msnbc.msn.com/id/34368822/ns/entertainment-access_hollywood/?pg=2#ENT_AH_MostExpensiveMovies.
Tirelli, Aldo-Vincenzo. “Production Budget Breakdown: The Scoop on Film Financing,” Helium, http://www.helium.com/items/936661-production-budget-breakdown-the-scoop-on-film-financing.
8.5 The Influence of New Technology
Learning Objectives
- Identify the impact of home-entertainment technology on the motion picture industry.
- Recognize the role the DVD market plays in the economics of moviemaking.
- Describe the impact of digital cinematography on the film industry.
New technologies have a profound impact, not only on the way films are made, but also on the economic structure of the film industry. When VCR technology made on-demand home movie viewing possible for the first time, filmmakers had to adapt to a changing market. The recent switch to digital technology also represents a turning point for film. In this section, you will learn how these and other technologies have changed the face of cinema.
Effects of Home Entertainment Technology
The first technology for home video recording, Sony’s Betamax cassettes, hit the market in 1975. The device, a combined television set and videocassette recorder (VCR), came with the high price tag of $2,495, making it a luxury still too expensive for the average American home. Two years later, RCA released the vertical helical scan (VHS) system of recording, which would eventually outsell Betamax, though neither device was yet a popular consumer product. Within several years, however, the concept of home movie recording and viewing was beginning to catch on. In 1979, Columbia Pictures released 20 films for home viewing, and a year later Disney entered the market with the first authorized video rental plan for retail stores. In 1983, VCRs were still relatively uncommon, found in just 10 percent of American homes, but within 2 years the device had found a place in nearly one-third of U.S. households (Entertainment Merchant Association).
At the same time, video rental stores began to spring up across the country. In 1985, three major video rental chains—Blockbuster, Hastings, and Movie Gallery—opened their doors. The video rental market took off between 1983 and 1986, reaching $3.37 billion in 1986. Video sales that year came to $1 billion, for total revenue of more than $4 billion, marking the first time in history that video would eclipse box-office revenues ($3.78 billion that year) (Entertainment Merchant Association).
Video sales and rentals opened a new mass market in the entertainment industry—the home movie viewer—and offered Hollywood an extended source of income from its films. On the other hand, the VCR also introduced the problem of piracy.
VCRs Legal, Just Barely
In an age when Hollywood was already struggling financially because of increased production costs, Sony’s release of home video recording technology became a major source of anxiety for Hollywood studios. If people could watch movies in their own homes, would they stop going to the movies altogether? In the 1976 case Sony Corp. of America v. Universal City Studios, Universal Studios and the Walt Disney Company sued Sony in the U.S. District Court for the Central District of California. The suit argued that because Sony was manufacturing a technology that could potentially be used to break copyright law, the company was therefore liable for any copyright infringement committed by VCR purchasers. The District Court ruled in Sony’s favor, but the studios appealed and the Court of Appeals reversed the decision. Sony then appealed to the Supreme Court, where the case was again hotly debated. Part of the struggle was the recognition that the case had wider implications: Does a device with recording capabilities conflict with copyright law? Is an individual guilty of copyright infringement if she records a single movie in her own home for her own private use?
Eventually, in 1984, the Supreme Court ruled that Sony and other VCR manufacturers could not be held liable for copyright infringement. This case represented an important milestone for two reasons. It opened up a new market in the entertainment sector, enabling video rental and home movie sales. Additionally, the case set a standard for determining whether a device with copying or recording capability violated copyright law. The court ruled that because nonprofit, noncommercial home recording did not constitute copyright violation, VCR technology had legitimate legal uses, and Sony and other companies could not be held liable for any misuse of their devices. More recently, this case has posed interpretive challenges in legal battles and in debates over file sharing on the Internet (Spruill & Adler, 2009).
The Optical Disc System
In 1980, around the time when consumers were just beginning to purchase VCRs for home use, Pioneer Electronics introduced another technology, the LaserDisc, an optical storage disc that produced higher quality images than did VHS tapes. Nonetheless, because of its large size (12 inches in diameter) and lack of recording capabilities, this early disc system never became popular in the U.S. market. However, the LaserDisc’s successor, the digital versatile disc (DVD), was a different story. Like the LaserDisc, the DVD is an optical storage disc—that is, a device whose encoded information follows a spiral pattern on the disc’s surface and can be read when illuminated by a laser diode. However, unlike the analog-formatted LaserDisc, the DVD’s information storage is entirely digital, allowing for a smaller, lighter, more compressed medium.
The first DVDs were released in stores in 1997, impressing consumers and distributors with their numerous advantages over the VHS tape: sharper-resolution images, compactness, higher durability, interactive special features, and better copy protection. In only a few years, sales of DVD players and discs surpassed those of VCRs and videos, making the DVD the most rapidly adopted consumer electronics product of all time (Entertainment Merchant Association).
In 1999, the movie rental market was revolutionized by Netflix, which had been founded in California in 1997 as a DVD-rental-by-mail service. That year, the company began offering an online subscription service: Subscribers selected movies that they wanted to see on Netflix’s website, and the discs arrived in their mailboxes a few days later, along with a prepaid return envelope. This allowed users to select from thousands of movies and television shows in the privacy of their own homes.
More recently, DVD technology has been surpassed by the Blu-ray Disc format, intended for storing and producing high-definition video. Released in 2006, Blu-ray Discs have the same physical dimensions as DVDs, but because they are encoded to be read by lasers with a shorter wavelength, the discs have more than five times the storage capacity of a DVD (Blu-ray.com). By 2009 there were 10.7 million Blu-ray Disc players in U.S. homes (Molbaek, 2009). However, the technology has yet to replace the DVD in rental stores and among the majority of U.S. consumers.
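The “more than five times” figure is easy to verify from the standard single-layer capacities of the two formats (4.7 GB for a DVD, 25 GB for a Blu-ray Disc); a quick check:

```python
# Single-layer capacities of the two disc formats (standard figures).
dvd_gb = 4.7
bluray_gb = 25.0

print(f"Blu-ray holds {bluray_gb / dvd_gb:.1f}x as much data as a DVD")
# Blu-ray holds 5.3x as much data as a DVD
```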
DVD Revenues and Decline
DVD rentals and sales make up a major source of revenue for the movie industry, accounting for nearly half of the returns on feature films. In fact, for some time the industry has been exploiting the profitability of releasing some films directly to DVD without ever premiering them in theaters or of releasing films on DVD simultaneously with their theater releases. According to one estimate, for every movie that appears in theaters, there are three that go straight to DVD (Court, 2006). While direct-to-DVD has become synonymous with poor production values and ill-conceived sequels, there are a number of reasons why a studio might bypass the multiplexes. Prequels and sequels of box-office hits, shot on a lower production budget, are often released this way and can generate considerable income from the niche market of hard-core fans. The fourth American Pie film, Bring It On: In It to Win It, and Ace Ventura Jr.: Pet Detective are all examples of successful direct-to-DVD films. However, in other cases, the costs of theatrical promotion and release may simply be too high for a studio to back. This is especially true among independently produced films that lack the big-studio marketing budgets. Slumdog Millionaire (2008) was almost one of these cases. However, the film did make it to theaters, going on to win eight Academy Awards in 2009, including Best Picture (Charity, 2009). Finally, a film may go straight to DVD when its content is too controversial to be released in theaters. For example, almost all porn films are direct-to-DVD releases.
Between 2005 and 2008, the number of direct-to-DVD releases grew 36 percent as studios began to see the profitability of the strategy (Barnes, 2008). After a movie’s success at the box office, a prequel, sequel, or related movie might earn the same profit, dollar for dollar, at the rental store if filmmakers slash the production budget, often replacing the original celebrity actors with less expensive talent. In 2008, direct-to-DVD brought in around $1 billion in sales (Barnes, 2008).
Despite the profitability of the DVD market, the economic downturn that began in 2007, along with the concurrent release of Blu-ray Disc technology and online digital downloads, has brought about a decline in DVD sales among U.S. consumers (Garrett, 2008). With the rise in digital downloads, Netflix broadened its appeal in 2007 by offering subscribers streaming movies and TV shows. This allowed viewers to watch programs on their computers, handheld devices, the Nintendo Wii, the Sony PlayStation 3, and the Microsoft Xbox 360 without ever handling a disc.
Additionally, by late 2007 film studios also became anxious over another trend: the Redbox rental system. Redbox, an American company that places DVD rental vending machines in pharmacies, grocery stores, and fast-food chains around the country, had placed kiosks in approximately 22,000 locations by 2009 (Barnes, 2009). For the movie industry, the trouble isn’t the widespread availability of Redbox rentals, it’s the price. As of March 2010, customers can rent DVDs from a Redbox kiosk for only $1 per day, which has already led to a severe decline in rental revenue for the film industry (Diorio, 2009). According to the traditional pricing model, prices for rentals are based on a release window; newly released films cost more to rent for a specified period of time after their release. When customers can rent both older and newly released movies at the same low price, rentals don’t produce the same returns (Hiestand, 2006). The sketch after this paragraph illustrates the pricing model.
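As a rough illustration of the release-window model described above, the sketch below prices a rental higher during a new-release window and drops it afterward. The window length and dollar amounts are invented for illustration; only Redbox’s flat $1 per day figure comes from the text.

```python
from datetime import date

def window_price(release: date, today: date,
                 window_days: int = 90,        # hypothetical window length
                 new_release_price: float = 4.99,
                 catalog_price: float = 2.99) -> float:
    """Traditional release-window rental pricing (illustrative numbers)."""
    days_out = (today - release).days
    return new_release_price if days_out <= window_days else catalog_price

REDBOX_FLAT = 1.00  # Redbox charged $1 per day regardless of release date

print(window_price(date(2010, 1, 15), date(2010, 2, 1)))  # 4.99 (new release)
print(window_price(date(2010, 1, 15), date(2010, 6, 1)))  # 2.99 (catalog title)
```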
Hollywood has also suffered major losses from online piracy. Since 2007, studios have been teaming up to turn this potential threat into a source of income. Now, instead of illegally downloading their favorite movies from file-sharing sites, fans can go to legal, commercial-supported sites like Hulu.com, where they can access a selected variety of popular movies and TV shows for the same price as accessing NBC, ABC, and CBS—free. In June 2010, Hulu launched a fee-based service, Hulu Plus, in addition to its free service, for users who want access to even more programs, such as Glee (Reuters, 2010). Hulu doesn’t allow viewers to download the films to their home computers, but it does provide a home-viewing experience through online streaming of content (Hulu, 2010).
The Industry Goes Digital
In an industry where technological innovations can transform production or distribution methods over the course of a few years, it’s incredible to think that most movies are still captured on celluloid film, the same material that Thomas Edison used to capture his kinetoscope images well over a century ago. In 2002, George Lucas’s Star Wars Episode II: Attack of the Clones became the first major Hollywood movie filmed on high-definition digital video. However, the move to digitally filmed movies has been gradual; much of the movie industry—including directors, producers, studios, and major movie theater chains—has been slow to embrace this major change in filming technology. At the time that Lucas filmed Attack of the Clones, only 18 theaters in the country were equipped with digital projectors (Kirsner, 2006).
However, digital cinematography has become an increasingly attractive, and increasingly popular, option for a number of reasons. For one thing, during production it eliminates the need to reload film. A scene that would traditionally be interrupted for reloads and multiple takes can now be shot in one continuous take, because no raw film stock is being consumed in the process (Kirsner, 2006). The digital format streamlines the editing process as well. Rather than scanning the images into a computer before adding digital special effects and color adjustments, companies with digitally filmed material can send it electronically to the editing suite. Additionally, digital files aren’t susceptible to scratching or wear over time, and they are capable of producing crystal-clear, high-resolution images (Taub, 2009).
For distributers and production companies, digitally recorded images eliminate the costs of purchasing, developing, and printing film. Studios spend around $800 million each year making prints of the films they distribute to theaters and additional money on top of that to ship the heavy reels (Burr, 2002). For a film like Attack of the Clones, widely released in 3,000 theaters, printing and shipping costs for 35-mm film would be around $20 million (Burr, 2002). On the other hand, with digital format, which requires no printing and can be sent to theaters on a single hard drive, or, as the system develops, over cable or satellite, these costs are virtually eliminated (Carvajal, 2005; Burr).
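Working the shipping example through makes the savings vivid: at the roughly $20 million quoted for a 3,000-theater release, each 35-mm print runs to several thousand dollars, a cost that digital distribution nearly eliminates. A back-of-envelope check:

```python
# Back-of-envelope arithmetic using the figures quoted in the text.
print_and_ship_total = 20_000_000   # quoted cost of a wide 35-mm release
theaters = 3_000

per_print = print_and_ship_total / theaters
print(f"Printing/shipping per theater: ${per_print:,.0f}")  # ~$6,667

annual_print_spend = 800_000_000    # industry-wide annual print spend cited above
print(f"Industry-wide, roughly ${annual_print_spend:,} per year is "
      f"largely avoidable with digital distribution")
```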
In part, the change has been gradual because, for theaters, the costs of making the digital switch (around $125,000 for a high-quality digital projector) (Reuters, 2003) are high, and the transformation offers them fewer short-term incentives than it does distributors, who could save a significant amount of money with digital technology. Furthermore, theaters have already heavily invested in their current projection equipment for 35-mm film (Carvajal). In the long run, the high-definition picture capabilities of digital movies might boost profits as more moviegoers turn out at the theaters, but there are no guarantees. In the meantime, the major studios are negotiating with leading theater chains to underwrite some of the conversion expenses (McCarthy, 2009).
Another financial pitfall of digital film is, surprisingly, the cost of storage once a film is out of major circulation. For major studios, a significant portion of revenues—around one-third—comes from the rerelease of old films. Studios spend just over $1,000 per film each year to keep their 35-millimeter masters in archival storage (Cieply, 2007). Keeping the film stock at controlled temperature and moisture levels prevents degradation, so masters are often stored in mines, where these conditions can be met most reliably (Cieply, 2007).
Digital data, however, for all its sophistication, is actually less likely to last than traditional film; DVDs can degrade rapidly, with only a 50 percent chance of lasting up to 15 years (Cieply, 2007), while hard drives must be operated occasionally to prevent them from locking up. As a result, the storage cost for digital originals comes closer to $12,500 per film per year (Cieply, 2007). Moreover, as one generation of digital technology gives way to another, files have to be migrated to newer formats to prevent originals from becoming unreadable.
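Using the per-title figures above, a quick comparison shows how rapidly digital archiving costs compound relative to film:

```python
# Cumulative archival cost per title, using the per-year figures cited above.
film_per_year = 1_000      # 35-mm master in climate-controlled storage
digital_per_year = 12_500  # digital original, including periodic migration

for years in (10, 25, 50):
    print(f"{years:>2} yrs: film ${years * film_per_year:>7,} "
          f"vs digital ${years * digital_per_year:>8,}")
# 10 yrs: film $ 10,000 vs digital $ 125,000
# 25 yrs: film $ 25,000 vs digital $ 312,500
# 50 yrs: film $ 50,000 vs digital $ 625,000
```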
The Resurgence of 3-D
After World War II, as movie attendance began to decline, the motion picture industry experimented with new technologies to entice audiences back into increasingly empty theaters. One such gimmick, the 3-D picture, offered the novel experience of increased audience “participation” as monsters, flying objects, and obstacles appeared to invade the theater space, threatening to collide with spectators. The effect was achieved by manipulating filming equipment to work like a pair of human eyes, mimicking the depth of field produced through binocular vision. By joining two cameras together and spacing them slightly apart with their lenses angled fractionally toward one another, filmmakers could achieve an effect similar to that created by the overlapping fields of vision of the right and left eye. In theaters, the resulting images were played simultaneously on two separate projectors. The 3-D glasses spectators wore were polarized to filter the images so that the left eye received only “left eye” projections and the right eye received only “right eye” projections (Buchanan, 2008).
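A hedged sketch of the geometry just described: with two cameras a small interaxial distance apart and converged on a plane, a point’s on-screen disparity depends on how far it sits from that convergence plane. The simple parallel-camera model below is illustrative only, not a description of any particular rig, and all parameter values are assumptions.

```python
def disparity_mm(depth_m: float,
                 interaxial_m: float = 0.065,   # ~ human eye spacing
                 focal_mm: float = 35.0,        # assumed lens focal length
                 convergence_m: float = 5.0) -> float:
    """Image-plane disparity for a point at depth_m (parallel-camera model).

    Zero at the convergence plane; negative values appear to pop out in
    front of the screen, positive values recede behind it.
    """
    return focal_mm * interaxial_m * (1 / convergence_m - 1 / depth_m)

for z in (2.0, 5.0, 20.0):
    print(f"object at {z:>4} m -> disparity {disparity_mm(z):+.2f} mm")
# object at  2.0 m -> disparity -0.68 mm   (in front of the screen)
# object at  5.0 m -> disparity +0.00 mm   (on the screen plane)
# object at 20.0 m -> disparity +0.34 mm   (behind the screen)
```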
3-D was an instant sensation. House of Wax, the first big-budget 3-D movie, released in 1953, brought in over $1 million during its first 3 weeks in theaters, making it one of the most successful films of the year. Best of all for investors, 3-D could be created with fairly inexpensive equipment. For this reason, a boom of 3-D development soon occurred nationwide. Forty-six 3-D movies were filmed in a span of 2 years. However, 3-D proved to be a brief success, with its popularity already beginning to wane by the end of 1953 (Hayes, 2009).
3-D soon migrated from the realm of common popular entertainment to novelty attraction, appearing in IMAX cinemas, as an occasional marketing draw for kids’ movies, and in theme-park classics like Captain EO and Honey, I Shrunk the Audience. Captain EO, a Disneyland attraction from 1986 to 1993, featured pop sensation Michael Jackson in his heyday. Following Jackson’s death, the film was rereleased for a limited time in 2010 (Rivera, 2009).
Despite the marginal role 3-D has played since the midcentury fad died out, new technologies have brought about a resurgence in the trend, and the contemporary 3-D experience seems less like a gimmick and more like a serious development in the industry. DreamWorks animation CEO Jeffrey Katzenberg, for one, likened the new 3-D to the introduction of color (McCarthy). One of the problems that led to the decline of 3-D in the 1950s was the “3-D headache” phenomenon audiences began to experience as a result of technical problems with filming (Hayes). To create the 3-D effect, filmmakers need to calculate the point where the overlapping images converge, an alignment that had to be performed by hand in those early years. And for the resulting image to come through clearly, the parallel cameras must run in perfect sync with one another—a near impossibility with 35-millimeter film, which introduces some distortion by the very fact of its motion through the camera.
Today the 3-D headache is a thing of the past, as computerized calibration makes perfect camera alignment a reality and as the digital recording format eliminates the celluloid-produced distortion. Finally, a single digital projector equipped with a photo-optical device can now perform the work of the two synchronized projectors of the past. For the theater chains, 3-D provides the first real incentive to make the conversion to digital. Not only do audiences turn out in greater numbers for an experience they can’t reproduce at home, even on their HD television sets, but theaters are also able to charge more for tickets to see 3-D films. In 2008, for example, Journey to the Center of the Earth, which grossed $102 million, earned 60 percent of that money through 3-D ticket sales, even though it played in 3-D on only 30 percent of its screens (McCarthy). Two of the top-grossing movies of all time, Avatar (2009) and Alice in Wonderland (2010), were both released in 3-D.
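The Journey to the Center of the Earth figures imply a striking per-screen premium for 3-D, which a quick calculation makes explicit (assuming, for simplicity, that the 30/70 screen split held for the full run):

```python
# Per-screen comparison from the figures cited in the text.
gross = 102_000_000
share_3d_revenue, share_3d_screens = 0.60, 0.30

rev_per_screen_3d = (gross * share_3d_revenue) / share_3d_screens
rev_per_screen_2d = (gross * (1 - share_3d_revenue)) / (1 - share_3d_screens)

print(f"3-D screens out-earned 2-D screens "
      f"{rev_per_screen_3d / rev_per_screen_2d:.1f}x per screen")
# 3-D screens out-earned 2-D screens 3.5x per screen
```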
Key Takeaways
- The introduction of the VCR in the late 1970s made home movie viewing easy. The VCR was replaced by DVD technology in the late 1990s, which is currently being replaced by Blu-ray Disc technology.
- DVD sales and rentals account for about a third of film revenues. Some films are released straight to DVD without ever appearing in theaters.
- Star Wars Episode II: Attack of the Clones (2002) was the first big-budget film to be recorded digitally. Since then, many more films have been made with digital cinematography. However, a full-scale industry change has been gradual, mainly because of the costs of conversion.
- Three-dimensional movies were a fad in the 1950s. In recent years, because of improved technologies, 3-D movies have seen a resurgence.
Exercises
Imagine you work for a major Hollywood studio and you are negotiating a contract with a large theater chain to switch to a digital projection system. Consider the following:
- What are the pros and cons of this switch?
- How have digital projection systems affected the motion picture industry?
- How has digital film affected the DVD market?
End-of-Chapter Assessment
Review Questions
Section 1
- Explain the importance of Georges Méliès’s work in the development of cinematography.
- Why was the MPPC formed?
- What caused the movie industry to move to Hollywood?
- Describe the factors that led to the rise and fall of the Hollywood studio system.
- What impact did the HUAC investigations have on Hollywood?
Section 2
- Explain audience reactions to The Birth of a Nation. How did this film reflect the culture of its time?
- Explain the role Frank Capra’s Why We Fight films played in World War II cinema.
- What does The Graduate reflect about the culture of the late 1960s?
- Explain how American individualism is reinforced in popular films.
- Name some films that have had an impact on social issues.
Section 3
- Why might studios invest nearly half of their budgets in marketing efforts?
- List the six major Hollywood studios today and explain their influence on the film industry.
- What economic factors have led to the blockbuster standard?
- Explain the influence of foreign films on American cinema.
- What factors have led to the increase of film piracy?
Section 4
- Explain the significance of the Sony Corp. of America v. Universal City Studios case.
- Why are some movies released direct-to-DVD?
- Explain the reluctance of major theater chains to switch to the digital system.
- What are some advantages of digital cinematography?
- Why did the 3-D movie trend fizzle out in the 1950s?
Critical Thinking Questions
- Imagine you are a film studies teacher and you choose to show excerpts from The Birth of a Nation in your class to illustrate its significance in film history. One student is highly offended by the film and stops to voice her concerns to you after class. Taking into consideration the things you have learned about the history of cinema and the relationship between film and culture, how would you explain your choice to this student?
- Assume you want to create a documentary to raise awareness about a social issue that concerns you. What issue would you address and what would you choose to document? Whom would you interview, where would you go, and so on?
- How would you respond to a visitor from another country who accuses the United States of cultural imperialism through the export of American movies?
- Imagine you want to produce a remake of a movie from the 1980s. Choose a movie that you think would be a blockbuster. Create a marketing plan that includes merchandise tie-ins and sources of revenue beyond the box office.
- After its decline in the 1950s, 3-D experienced a brief comeback in the 1980s. Based on what you know about the movie industry of the time and the culture of the 1980s, why might this have occurred?
Career Connection
Research the career of a Hollywood producer. Identify the different types of producers involved in a production and the tasks each is expected to perform. Do people in this career specialize in a certain genre of film? If so, which genre would you specialize in, and why?
References
Barnes, Brooks. “Movie Studios See a Threat in Growth of Redbox,” New York Times, September 6, 2009, http://www.nytimes.com/2009/09/07/business/media/07redbox.html.
Barnes, Brooks. “Direct-to-DVD Releases Shed Their Loser Label,” New York Times, January 28, 2008, http://www.nytimes.com/2008/01/28/business/media/28dvd.html.
Blu-ray.com, “Blu-ray Disc,” http://www.blu-ray.com/info/.
Buchanan, Matt. “Giz Explains 3D Technologies,” Gizmodo (blog), November 12, 2008, http://gizmodo.com/5084121/giz-explains-3d-technologies.
Burr, Ty. “Will the ‘Star Wars’ Digital Gamble Pay Off?”
Burr, Ty. “Will the ‘Star Wars’ Digital Gamble Pay Off?” Entertainment Weekly, April 19, 2002, http://archives.cnn.com/2002/SHOWBIZ/Movies/04/19/ew.hot.star.wars/.
Carvajal, “Nurturing Digital Cinema.”
Carvajal, Doreen. “Nurturing Digital Cinema,” New York Times, May 23, 2005, http://www.nytimes.com/2005/05/22/technology/22iht-movies23.html.
Charity, Tom. “Review: Why Some Films Go Straight to DVD,” CNN, February 27, 2009, http://www.cnn.com/2009/SHOWBIZ/Movies/02/27/review.humboldt/index.html.
Cieply, Michael. “The Afterlife is Expensive for Digital Movies,” New York Times, December 23, 2007, http://www.nytimes.com/2007/12/23/business/media/23steal.html.
Court, Robert W. “Straight to DVD,” New York Times, May 6, 2006, Opinion section, http://www.nytimes.com/2006/05/06/opinion/06cort.html.
Diorio, Carl. “$1 DVD Rentals Costing Biz $1 Bil: Study,” Hollywood Reporter, December 7, 2009, http://www.hollywoodreporter.com/news/1-dvd-rentals-costing-biz-92098.
Entertainment Merchant Association, “A History of Home Video and Video Game Retailing,” http://www.entmerch.org/industry_history.html.
Entertainment Merchant Association, “History of Home Video”; Dirks, “History of Film: the 1990s,” Filmsite.
Garrett, Dianne. “DVD Sales Down 3.6% in ’07,” Variety, January 7, 2008, www.variety.com/article/VR1117978576?refCatId=20.
Hayes, “Short History of 3D Movies.”
Hayes, John. “‘You See Them WITH Glasses!’ A Short History of 3D Movies,” Wide Screen Movies Magazine, 2009, http://widescreenmovies.org/wsm11/3D.htm.
Hiestand, Jesse. “MPAA Study: ’05 Piracy Cost $6.1 Bil.,” Hollywood Reporter, May 3, 2006. http://business.highbeam.com/2012/article-1G1-146544812/mpaa-study-05-piracy-cost-61-bil.
Hulu, “Media Info,” 2010, http://www.hulu.com/about.
Kirsner, Scott. “Studios Shift to Digital Movies, but Not Without Resistance,” New York Times, July 4, 2006, http://www.nytimes.com/2005/05/22/technology/22iht-movies23.html?scp=15&sq=digital%20movie&st=cse.
McCarthy, “Tech Behind 3D’s Big Revival.”
McCarthy, Erin. “The Tech Behind 3D’s Big Revival,” Popular Mechanics, April 1, 2009, http://www.popularmechanics.com/technology/digital/3d/4310810.
Molbaek, Henning. “10.7 Million Blu-Ray Players in U.S. Homes,” DVDTown.com, January 9, 2009, http://www.dvdtown.com/news/107-million-blu-ray-players-in-us-homes/6288.
Reuters, “Hulu Launches Paid Subscription TV Service,” Fox News, June 30, 2010, http://www.foxnews.com/scitech/2010/06/30/hulu-starts-paid-subscription-tv-service/.
Reuters, “Movie Theaters Going Digital,” CNN, December 24, 2003, http://www.cnn.com/2003/TECH/ptech/12/24/digital.movietheater.reut/index.html.
Rivera, Heather Hust. “Captain EO Returns to Disneyland Resort.” Disney Parks Blog, December 18, 2009. http://disneyparks.disney.go.com/blog/2009/12/captain-eo-returns-to-disneyland-resort/.
Spruill, Willie and Derek Adler, “Sony Corp. of America v. Universal City Studios,” Downloading & Piracy project for Laura N. Gasaway’s Cyberspace Law Seminar, University of North Carolina School of Law, 2009, http://www.unc.edu/courses/2009spring/law/357c/001/Piracy/cases.htm.
Taub, Eric A. “More Digital Projectors, Coming to a Theater Near You,” Gadgetwise (blog), New York Times, June 18, 2009, http://gadgetwise.blogs.nytimes.com/2009/06/18/its-a-4k-world-after-all/.
Chapter 9: Television
9.1 The Evolution of Television
9.2 The Relationship Between Television and Culture
9.3 Issues and Trends in the Television Industry
9.4 Influence of New Technologies
9.1 The Evolution of Television
Learning Objectives
- Identify two technological developments that paved the way for the evolution of television.
- Explain why electronic television prevailed over mechanical television.
- Identify three important developments in the history of television since 1960.
Since replacing radio as the most popular mass medium in the 1950s, television has played such an integral role in modern life that, for some, it is difficult to imagine being without it. Both reflecting and shaping cultural values, television has at times been criticized for its alleged negative influences on children and young people and at other times lauded for its ability to create a common experience for all its viewers. Major world events such as the John F. Kennedy and Martin Luther King Jr. assassinations and the Vietnam War in the 1960s, the Challenger shuttle explosion in 1986, the 2001 terrorist attacks on the World Trade Center, and the impact and aftermath of Hurricane Katrina in 2005 have all played out on television, uniting millions of people in shared tragedy and hope. Today, as Internet technology and satellite broadcasting change the way people watch television, the medium continues to evolve, solidifying its position as one of the most important inventions of the 20th century.
The Origins of Television
Inventors conceived the idea of television long before the technology to create it appeared. Early pioneers speculated that if audio waves could be separated from the electromagnetic spectrum to create radio, so too could TV waves be separated to transmit visual images. As early as 1876, Boston civil servant George Carey envisioned complete television systems, and a year later he put forward drawings for a “selenium camera” that would enable people to “see by electricity” (Federal Communications Commission, 2005).
During the late 1800s, several technological developments set the stage for television. The invention of the cathode ray tube (CRT) by German physicist Karl Ferdinand Braun in 1897 played a vital role as the forerunner of the TV picture tube. Initially created as a scanning device known as the cathode ray oscilloscope, the CRT effectively combined the principles of the camera and electricity. It had a fluorescent screen that emitted a visible light (in the form of images) when struck by a beam of electrons. The other key invention during the 1880s was the mechanical scanner system. Created by German inventor Paul Nipkow, the scanning disk was a large, flat metal disk with a series of small perforations arranged in a spiral pattern. As the disk rotated, light passed through the holes, separating pictures into pinpoints of light that could be transmitted as a series of electronic lines. The number of scanned lines equaled the number of perforations, and each rotation of the disk produced a television frame. Nipkow’s mechanical disk served as the foundation for experiments on the transmission of visual images for several decades.
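Because each perforation traces one scan line and each rotation yields one frame, the disk’s geometry locks resolution and frame rate together. The short sketch below makes that relationship explicit; the figures used are illustrative, in the range of Baird-era mechanical systems, not the specifications of any particular broadcast.

```python
def mechanical_tv(holes: int, rotations_per_second: float):
    """Scan-line count and line throughput of a Nipkow-disk system.

    Each hole traces one scan line per rotation, and one full rotation
    produces one frame, so resolution and frame rate are locked together.
    """
    lines = holes                       # one line per perforation
    frames_per_second = rotations_per_second
    lines_per_second = lines * frames_per_second
    return lines, frames_per_second, lines_per_second

# Illustrative figures in the range of early mechanical systems.
lines, fps, lps = mechanical_tv(holes=30, rotations_per_second=12.5)
print(f"{lines} lines at {fps} frames/s -> {lps:.0f} lines scanned per second")
# 30 lines at 12.5 frames/s -> 375 lines scanned per second
```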
In 1907, Russian scientist Boris Rosing used both the CRT and the mechanical scanner system in an experimental television system. With the CRT in the receiver, he used focused electron beams to display images, transmitting crude geometrical patterns onto the television screen. The mechanical disk system was used as a camera, creating a primitive television system.
Mechanical Television versus Electronic Television
From the early experiments with visual transmissions, two types of television systems came into existence: mechanical television and electronic television. Mechanical television developed out of Nipkow’s disk system and was pioneered by British inventor John Logie Baird. In 1926, Baird gave the world’s first public demonstration of a television system at Selfridge’s department store in London. He used mechanical rotating disks to scan moving images into electrical impulses, which were transmitted by cable to a screen. Here they showed up as a low-resolution pattern of light and dark. Baird’s first television program showed the heads of two ventriloquist dummies, which he operated in front of the camera apparatus out of the audience’s sight. In 1928, Baird extended his system by transmitting a signal between London and New York. The following year, the British Broadcasting Corporation (BBC) adopted his mechanical system, and by 1932, Baird had developed the first commercially viable television system and sold 10,000 sets. Despite its initial success, mechanical television had several technical limitations. Engineers could get no more than about 240 lines of resolution, meaning images would always be slightly fuzzy (most modern televisions produce images of more than 600 lines of resolution). The use of a spinning disk also limited the number of new pictures that could be seen per second, resulting in excessive flickering. The mechanical aspect of television proved to be a disadvantage that required fixing in order for the technology to move forward.
At the same time Baird (and, separately, American inventor Charles Jenkins) was developing the mechanical model, other inventors were working on an electronic television system based on the CRT. While working on his father’s farm, Idaho teenager Philo Farnsworth realized that an electronic beam could scan a picture in horizontal lines, reproducing the image almost instantaneously. In 1927, Farnsworth transmitted the first all-electronic TV picture: a single straight line scratched onto a square piece of painted glass, which, when rotated 90 degrees, showed that the system could register motion.
Farnsworth barely profited from his invention; during World War II, the government suspended sales of TV sets, and by the time the war ended, Farnsworth’s original patents were close to expiring. However, following the war, many of his key patents were modified by RCA and were widely applied in broadcasting to improve television picture quality.
After the two systems had coexisted for several years, electronic television sets began to replace mechanical ones. With better picture quality, no noise, a more compact size, and fewer visual limitations, the electronic system was far superior to its predecessor and rapidly improving. By 1939, the last mechanical television broadcasts in the United States had been replaced with electronic broadcasts.
Early Broadcasting
Television broadcasting began as early as 1928, when the Federal Radio Commission authorized inventor Charles Jenkins to broadcast from W3XK, an experimental station in the Maryland suburbs of Washington, DC. Silhouette images from motion picture films were broadcast to the general public on a regular basis, at a resolution of just 48 lines. Similar experimental stations ran broadcasts throughout the early 1930s. In 1939, RCA subsidiary NBC (National Broadcasting Company) became the first network to introduce regular television broadcasts, transmitting its inaugural telecast of the opening ceremonies at the New York World’s Fair. The station’s initial broadcasts transmitted to just 400 television sets in the New York area, with an audience of 5,000 to 8,000 people (Lohr, 1940).
Television was initially available only to the privileged few, with sets ranging from $200 to $600—a hefty sum in the 1930s, when the average annual salary was $1,368 (KC Library). RCA offered four types of television receivers, which were sold in high-end department stores such as Macy’s and Bloomingdale’s, and received channels 1 through 5. Early receivers were a fraction of the size of modern TV sets, featuring 5-, 9-, or 12-inch screens. Television sales prior to World War II were disappointing—an uncertain economic climate, the threat of war, the high cost of a television receiver, and the limited number of programs on offer deterred numerous prospective buyers. Many unsold television sets were put into storage and sold after the war.
NBC was not the only commercial network to emerge in the 1930s. RCA radio rival CBS (Columbia Broadcasting System) also began broadcasting regular programs. So that viewers would not need a separate television set for each individual network, the Federal Communications Commission (FCC) convened an industry panel, the National Television System Committee (NTSC), to outline a single technical standard. In 1941, the panel recommended a 525-line system and an image rate of 30 frames per second. It also recommended that all U.S. television sets operate using analog signals (broadcast signals made of varying radio waves). Analog signals were replaced by digital signals (signals transmitted as binary code) in 2009.
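Those 1941 numbers fixed the system’s horizontal scan rate, a figure that stayed with analog broadcasting for decades; the arithmetic is simple:

```python
# The 1941 NTSC monochrome standard.
lines_per_frame = 525
frames_per_second = 30

line_rate_hz = lines_per_frame * frames_per_second
print(f"Horizontal scan rate: {line_rate_hz:,} lines per second")
# Horizontal scan rate: 15,750 lines per second (the high-pitched whine of
# analog TV sets came from circuitry running near this frequency)
```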
With the outbreak of World War II, many companies, including RCA and General Electric, turned their attention to military production. Instead of commercial television sets, they began to churn out military electronic equipment. In addition, the war halted nearly all television broadcasting; many TV stations reduced their schedules to around 4 hours per week or went off the air altogether.
Color Technology
Although it did not become available until the 1950s or popular until the 1960s, the technology for producing color television was proposed as early as 1904, and was demonstrated by John Logie Baird in 1928. As with his black-and-white television system, Baird adopted the mechanical method, using a Nipkow scanning disk with three spirals, one for each primary color (red, green, and blue). In 1940, CBS researchers, led by Hungarian television engineer Peter Goldmark, used Baird’s 1928 designs to develop a concept of mechanical color television that could reproduce the color seen by a camera lens.
Following World War II, the National Television System Committee (NTSC) worked to develop an all-electronic color system that was compatible with black-and-white TV sets, gaining FCC approval in 1953. A year later, NBC made the first national color broadcast when it telecast the Tournament of Roses Parade. Despite the television industry’s support for the new technology, it would be another 10 years before color television gained widespread popularity in the United States, and black-and-white TV sets outnumbered color TV sets until 1972 (Klooster, 2009).
The Golden Age of Television
The 1950s proved to be the golden age of television, during which the medium experienced massive growth in popularity. Mass-production advances made during World War II substantially lowered the cost of purchasing a set, making television accessible to the masses. In 1945, there were fewer than 10,000 TV sets in the United States. By 1950, this figure had soared to around 6 million, and by 1960 more than 60 million television sets had been sold (World Book Encyclopedia, 2003). Many of the early television program formats were based on network radio shows and did not take advantage of the potential offered by the new medium. For example, newscasters simply read the news as they would have during a radio broadcast, and the network relied on newsreel companies to provide footage of news events. However, during the early 1950s, television programming began to branch out from radio broadcasting, borrowing from theater to create acclaimed dramatic anthologies such as Playhouse 90 (1956) and The U.S. Steel Hour (1953) and producing quality news film to accompany coverage of daily events.
Two new types of programs—the magazine format and the TV spectacular—played an important role in helping the networks gain control over the content of their broadcasts. Early television programs were developed and produced by a single sponsor, which gave the sponsor a large amount of control over the content of the show. By increasing program length from the standard 15-minute radio show to 30 minutes or longer, the networks substantially increased advertising costs for program sponsors, making sole sponsorship prohibitively expensive. Magazine programs such as the Today show and The Tonight Show, which premiered in the early 1950s, featured multiple segments and ran for several hours. They were also screened on a daily, rather than weekly, basis, drastically increasing advertising costs. As a result, the networks began to sell spot advertisements that ran for 30 or 60 seconds. Similarly, the television spectacular (now known as the television special) featured lengthy music-variety shows that were sponsored by multiple advertisers.
In the mid-1950s, the networks brought back the radio quiz-show genre. Inexpensive and easy to produce, the format caught on, and by the end of the 1957–1958 season, 22 quiz shows were being aired on network television, including CBS’s The $64,000 Question. Shorter than some of the new types of programs, quiz shows enabled single corporate sponsors to have their names displayed on the set throughout the show. The popularity of the quiz-show genre plunged at the end of the decade, however, when it was discovered that most of the shows were rigged. Producers provided some contestants with the answers to the questions in order to pick and choose the most likable or controversial candidates. When a slew of contestants accused the show Dotto of being fixed in 1958, the networks rapidly dropped 20 quiz shows. A New York grand jury probe and a 1959 congressional investigation effectively ended prime-time quiz shows for 40 years, until ABC revived the genre with its launch of Who Wants to Be a Millionaire in 1999 (Boddy, 1990).
The Rise of Cable Television
Formerly known as Community Antenna Television, or CATV, cable television was originally developed in the 1940s in remote or mountainous areas, including in Arkansas, Oregon, and Pennsylvania, to enhance poor reception of regular television signals. Cable antennas were erected on mountains or other high points, and homes connected to the towers would receive broadcast signals.
In the late 1950s, cable operators began to experiment with microwave transmission to bring in signals from distant cities. Taking advantage of their ability to receive long-distance broadcast signals, operators branched out from providing a local community service and began focusing on offering consumers more extensive programming choices. Rural parts of Pennsylvania, which had only three channels (one for each network), soon had more than double the original number of channels as operators began to import programs from independent stations in New York and Philadelphia. The wider variety of channels and clearer reception the service offered soon attracted viewers from urban areas. By 1962, nearly 800 cable systems were operational, serving 850,000 subscribers.
Cable’s exponential growth was viewed as competition by local TV stations, and broadcasters campaigned for the FCC to step in. The FCC responded by placing restrictions on the ability of cable systems to import signals from distant stations, which froze the development of cable television in major markets until the early 1970s. When gradual deregulation began to loosen the restrictions, cable operator Service Electric launched the service that would change the face of the cable television industry—pay TV. The 1972 Home Box Office (HBO) venture, in which customers paid a subscription fee to access premium cable television shows and video-on-demand products, was the nation’s first successful pay cable service. HBO’s use of a satellite to distribute its programming made the network available throughout the United States. This gave it an advantage over the microwave-distributed services, and other cable providers quickly followed suit. Further deregulation provided by the 1984 Cable Act enabled the industry to expand even further, and by the end of the 1980s, nearly 53 million households subscribed to cable television. In the 1990s, cable operators upgraded their systems by building higher-capacity hybrid networks of fiber-optic and coaxial cable. These broadband networks provide a multichannel television service, along with telephone, high-speed Internet, and advanced digital video services, using a single wire.
The Emergence of Digital Television
Following the FCC standards set out during the early 1940s, television sets received programs via analog signals made of radio waves. The analog signal reached TV sets through three different methods: over the airwaves, through a cable wire, or by satellite transmission. Although the system remained in place for more than 60 years, it had several disadvantages. Analog systems were prone to static and distortion, resulting in a far poorer picture quality than films shown in movie theaters. As television sets grew increasingly larger, the limited resolution made scan lines painfully obvious, reducing the clarity of the image. Companies around the world, most notably in Japan, began to develop technology that provided newer, better-quality television formats, and the broadcasting industry began to lobby the FCC to create a committee to study the desirability and impact of switching to digital television. A more efficient and flexible form of broadcast technology, digital television uses signals that translate TV images and sounds into binary code, working in much the same way as a computer. This means they require much less frequency space and also provide a far higher quality picture. In 1987, the Advisory Committee on Advanced Television Services began meeting to test various TV systems, both analog and digital. The committee ultimately agreed to switch from analog to digital format in 2009, allowing a transition period in which broadcasters could send their signal on both an analog and a digital channel. Once the switch took place, many older analog TV sets were unusable without a cable or satellite service or a digital converter. To retain consumers’ access to free over-the-air television, the federal government offered $40 gift cards to people who needed to buy a digital converter, expecting to recoup its costs by auctioning off the old analog broadcast spectrum to wireless companies (Steinberg, 2007). These companies were eager to gain access to the analog spectrum for mobile broadband projects because this frequency band allows signals to travel greater distances and penetrate buildings more easily.
The Era of High-Definition Television
Around the same time the U.S. government was reviewing the options for analog and digital television systems, companies in Japan were developing technology that worked in conjunction with digital signals to create crystal-clear pictures in a wide-screen format. High-definition television, or HDTV, attempts to create a heightened sense of realism by providing the viewer with an almost three-dimensional experience. It has a much higher resolution than standard television systems, using around five times as many pixels per frame. First available in 1998, HDTV products were initially extremely expensive, priced between $5,000 and $10,000 per set. However, as with most new technology, prices dropped considerably over the next few years, making HDTV affordable for mainstream shoppers.
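The pixel figure quoted above can be checked with simple arithmetic. The sketch below uses common digital frame sizes purely for illustration; the exact multiple depends on which standard-definition format is taken as the baseline, which is why sources say "around five times."

```python
# Pixels per frame for common frame sizes (illustrative, not exhaustive).
sd_ntsc = 720 * 480    # NTSC-style standard definition: 345,600 pixels
sd_pal = 720 * 576     # PAL-style standard definition: 414,720 pixels
full_hd = 1920 * 1080  # full high definition: 2,073,600 pixels

print(f"HD vs. NTSC SD: {full_hd / sd_ntsc:.1f}x")  # 6.0x
print(f"HD vs. PAL SD:  {full_hd / sd_pal:.1f}x")   # 5.0x
```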
As of 2010, nearly half of American viewers watch television in high definition, the fastest adoption of a TV technology since the introduction of the VCR in the 1980s (Stelter, 2010). The new technology is also encouraging people to watch for longer stretches: according to the Nielsen Company, which measures TV viewership, households with HDTV watch 3 percent more prime-time television—programming screened between 7 and 11 p.m., when the largest audience is available—than their standard-definition counterparts (Stelter, 2010). The same report claims that the cinematic experience of HDTV is bringing families back together in the living room, in front of the large wide-screen TV, and out of the kitchen and bedroom, where individuals tend to watch television alone on smaller screens. However, these viewing patterns may change again soon as the Internet plays an ever-larger role in how people watch TV programs. The impact of new technologies on television is discussed in much greater detail in Section 9.4 “Influence of New Technologies” of this chapter.
Key Takeaways
- Two key technological developments in the late 1800s played a vital role in the evolution of television: the cathode ray tube and the scanning disk. The cathode ray tube, invented by German physicist Karl Ferdinand Braun in 1897, was the forerunner of the TV picture tube; it had a fluorescent screen that emitted visible light (in the form of images) when struck by a beam of electrons. The scanning disk, devised by German inventor Paul Nipkow, was a large, flat metal disk that could be used as a rotating camera and served as the foundation for experiments on the transmission of visual images for several decades.
- Out of the cathode ray tube and the scanning disk, two types of primitive television systems evolved: mechanical systems and electronic systems. Mechanical television systems had several technical disadvantages: Low resolution caused fuzzy images, and the use of a spinning disk limited the number of new pictures that could be seen per second, resulting in excessive flickering. By 1939, all mechanical television broadcasts in the United States had been replaced by electronic broadcasts.
- Early televisions were expensive, and the technology was slow to catch on because development was delayed during World War II. Color technology was delayed even further because early color systems were incompatible with black-and-white television sets. Following the war, television rapidly replaced radio as the new mass medium. During the “golden age” of television in the 1950s, television moved away from radio formats and developed new types of shows, including the magazine-style variety show and the television spectacular.
- Since 1960, several key technological developments have taken place in the television industry. Color television gained popularity in the late 1960s and began to replace black-and-white television in the 1970s. Cable television, initially developed in the 1940s to serve viewers in rural areas, switched its focus from local to national programming, offering an extensive number of channels. In 2009, the traditional analog system, which had been in place for 60 years, was replaced with digital television, giving viewers a higher-quality picture and freeing up frequency space. As of 2010, nearly half of American viewers have high-definition television, which offers a crystal-clear, wide-screen picture and a cinematic experience at home.
Exercises
Please respond to the following writing prompts. Each response should be a minimum of one paragraph.
- Prior to World War II, television was in the early stages of development. In the years following the war, the technical development and growth in popularity of the medium were exponential. Identify two ways television evolved after World War II. How did these changes make postwar television superior to its predecessor?
- Compare the television you use now with the television from your childhood. How have TV sets changed in your lifetime?
- What do you consider the most important technological development in television since the 1960s? Why?
References
Boddy, William. “The Seven Dwarfs and the Money Grubbers,” in Logics of Television: Essays in Cultural Criticism, ed. Patricia Mellencamp (Bloomington, IN: Indiana University Press, 1990), 98–116.
Federal Communications Commission, “Visionary Period, 1880’s Through 1920’s,” Federal Communications Commission, November 21, 2005, http://www.fcc.gov/omd/history/tv/1880-1929.html.
KC Library, Lone Star College: Kingwood, “American Cultural History 1930–1939,” http://kclibrary.lonestar.edu/decade30.html.
Klooster, John. Icons of Invention: The Makers of the Modern World from Gutenberg to Gates (Santa Barbara, CA: ABC-CLIO, 2009), 442.
Lohr, Lenox. Television Broadcasting (New York: McGraw Hill, 1940).
Steinberg, Jacques. “Converters Signal a New Era for TVs,” New York Times, June 7, 2007, http://www.nytimes.com/2007/06/07/technology/07digital.html.
Stelter, Brian. “Crystal-Clear, Maybe Mesmerizing,” New York Times, May 23, 2010, http://www.nytimes.com/2010/05/24/business/media/24def.html.
World Book Encyclopedia (2003), s.v. “Television.”
9.3 Issues and Trends in the Television Industry
Learning Objectives
- Explain the influence of sponsors on program content.
- Describe the major trends among the broadcasting and cable networks.
When television was in its infancy, producers modeled the new medium on radio. Popular radio shows such as the police drama Dragnet and the western series Gunsmoke were adapted for television, and new TV shows were sponsored by single advertisers, just as radio shows had been. Television was dominated by three major networks—NBC, ABC, and CBS—which together accounted for more than 95 percent of all prime-time viewing until the late 1970s. Today, the television industry is far more complex: programs are sponsored by multiple advertisers, programming is controlled by major media conglomerates, and the three major networks no longer dominate the airwaves but instead share their viewers with numerous cable channels. Several factors account for these trends, including technological developments, government regulations, and the creation of new networks.
The Influence of Corporate Sponsorship
Early television programs were often developed, produced, and supported by a single sponsor, which sometimes reaped the benefits of having its name inserted into the program’s title—Colgate Comedy Hour, Camel Newsreel, Goodyear TV Playhouse. However, as production costs soared during the 1950s (a single one-hour TV show cost a sponsor about $35,000 in 1952 compared with $90,000 at the end of the decade), sponsors became increasingly unable to bear the financial burden of promoting a show single-handedly. This suited the broadcast networks, which disliked the influence sponsors exerted over program content. Television executives, in particular NBC’s Sylvester L. “Pat” Weaver, advocated the magazine concept, in which advertisers purchased one- or two-minute blocks rather than the entire program, just as magazines contained multiple advertisements from different sponsors. The presence of multiple sponsors meant that no one advertiser controlled the entire program.
Although advertising agencies relinquished control of production to the networks, they retained some influence over the content of the programs they sponsored. As one executive commented, “If my client sells peanut butter and the script calls for a guy to be poisoned eating a peanut butter sandwich, you can bet we’re going to switch that poison to a martini (Newcomb).” Sponsors continue to influence program content indirectly, funding shows whose content they approve of and pulling money from those they do not. For example, in 1995, consumer goods giant Procter & Gamble, the largest television advertiser, announced it would no longer sponsor salacious daytime talk shows. The company provided producers with details about its guidelines, pulling out of shows it deemed offensive and supporting shows that dealt with controversial subject matter responsibly. Communications heavyweight AT&T took a similar path, reviewing shows after they were taped but before they aired in order to make sponsorship decisions on an individual basis (Advertising Age, 1995). In 2009, advertisers used their financial might to take a stand against Fox News host Glenn Beck, who offended viewers and sponsors alike with his incendiary comments that President Obama was a “racist” with a “deep-seated hatred for white people.” Sponsors of the Glenn Beck TV talk show began to remove advertising spots from the program in protest of Beck’s comments. A spokeswoman for Progressive car insurance said, “We place advertising on a variety of programming with the goal of reaching a broad range of insurance consumers who might be interested in our products. We also seek to avoid advertising on programming that our customers or potential customers may find extremely offensive (Spain, 2009).” Other shows that have lost advertisers include NBC’s long-running sketch comedy show Saturday Night Live, BET’s Hot Ghetto Mess, and the ABC sitcom Ellen.
Public Television and Corporate Sponsorship
Corporate sponsorship does not just affect network television; even public television has become subject to the influence of advertising. Established in 1969, the Public Broadcasting Service (PBS) developed out of a report by the Carnegie Commission on Educational Television, which examined the role of educational, noncommercial television in society. The report recommended that the government finance public television in order to provide diversity of programming during the network era—a service created “not to sell products” but to “enhance citizenship and public service (McCauley, 2003).” Public television was also intended to provide universal access for viewers in rural areas and for those who could not afford private television services. PBS focused on educational program content, targeting viewers who were less appealing to the commercial networks and advertisers, such as adults over 50 and children under 12.
The original Carnegie Commission report recommended that Congress create a federal trust fund, financed by a manufacturer’s excise tax on the sale of TV sets, to support public television. Following intense lobbying by the National Association of Broadcasters, the proposal was removed from the legislation that established the service. As a result, public television subsists on viewer contributions and federal funding, and the latter has been drastically reduced in recent years. Although a 2007 proposal by President George W. Bush to eliminate more than half of the federal allocation to public broadcasting ($420 million out of $820 million) was defeated, PBS has become increasingly dependent on corporate sponsorship to stay afloat; by 2006, corporate sponsors funded more than 25 percent of all public television. Sponsorship has saved many programs that would otherwise have been lost, but critics have bemoaned the creeping commercialism of public television. When PBS began selling banner advertisements on its website in 2006, Gary Ruskin, executive director of the consumer group Commercial Alert, commented, “It’s just one more intrusion of the commercial ethos into an organization that was supposed to be firmly noncommercial. The line between them and the commercial networks is getting fuzzier and fuzzier (Gold, 2006).” Despite such criticisms, the drop in federal funding has forced public television executives to keep seeking creative sources of financial backing. In 2009, PBS shortened the length of time companies were required to sponsor some programs in an effort to attract advertisers (Stelter, 2009). As of 2010, the future of PBS remained uncertain. With better-funded cable channels offering the niche-interest shows that were traditionally public television’s domain (the BBC nature series Planet Earth was shown on the Discovery Channel, while the historical dramas John Adams and The Tudors appeared on the premium cable channels HBO and Showtime), PBS has been left to rely on decades-old standbys such as Nova and Nature to attract audiences (McGrath, 2008). Only time will tell how PBS fares in the face of this competition.
The Rise and Fall of the Network
The period between 1950 and 1970 is historically recognized as the network era. Aside from a small portion of airtime controlled by public television, the three major networks (known as the Big Three) dominated the television industry, collectively accounting for more than 95 percent of prime-time viewing. In 1986, Rupert Murdoch, head of the multinational company News Corp, launched the Fox network, challenging the dominance of the Big Three. In its infancy, Fox was at best a minor irritation to the other networks. With fewer than 100 affiliated stations (the other networks each had more than 200), coverage of just 80 percent of the nation’s households (compared with the Big Three’s 97 percent), and a single show (The Late Show Starring Joan Rivers), Fox was barely a consideration in the ratings war. During the early 1990s, these dynamics began to change. Targeting young viewers and Black audiences with shows such as Beverly Hills 90210, Melrose Place, In Living Color, and The Simpsons, Fox established itself as an edgy, youth-oriented network. It lured affiliates away from other networks to increase its viewership and extended its programming schedule beyond the initial 2-night-a-week broadcasts. By the time the fledgling network acquired the rights to National Football League (NFL) games in 1994 (a $1.58 billion deal covering 4 years of games), Fox was a worthy rival to the other three broadcast networks. Its success turned the Big Three into the Big Four. In the 1994–1995 television season, 43 percent of U.S. households were watching the Big Four at any given moment during prime time (Poniewozik, 2009).
Fox’s success prompted the launch of several smaller networks in the mid-1990s. UPN (owned by Paramount, which had recently been acquired by Viacom) and WB (owned by media giant Time Warner) both debuted in January 1995. Using strategies similar to Fox’s, the networks initially broadcast programs 2 nights a week, expanding to a 6-day schedule by 2000. Targeting young and minority audiences with shows such as Buffy the Vampire Slayer, Moesha, Dawson’s Creek, and The Wayans Bros., the new networks hoped to draw stations away from their old network affiliations. However, rather than repeating Fox’s success, UPN and WB struggled to make an impact. Unable to attract many affiliate stations, the two fledgling networks reached fewer households than their larger rivals because they were unavailable in some smaller cities. High start-up costs, relatively low audience ratings, and increasing production expenses spelled the end of the “netlets,” a term coined by Variety magazine for minor-league networks that lacked a full week’s worth of programming. After losing $1 billion each, parent companies CBS (which had split from Viacom) and Time Warner agreed to merge UPN and WB, creating the CW network in 2006. Targeting the desirable 18–34 age group, the network retained the most popular shows from before the merger—America’s Next Top Model and Veronica Mars from UPN, and Beauty and the Geek and Smallville from WB—and launched new shows such as Gossip Girl and The Vampire Diaries. Despite its cofounders’ claims that the CW would be the “fifth great broadcast network,” the collaboration got off to a shaky start. When the network was frequently outperformed by the Spanish-language broadcaster Univision in 2008 and ratings declined among its target audience, critics began to question its future (Grego, 2010). However, the relative success of shows such as Gossip Girl and 90210 in 2009 gave the network a foothold with its intended demographic, quashing rumors that co-owners CBS Corporation and Warner Bros. might disband it. Warner Bros. Television Group President Bruce Rosenblum said, “I think the built-in assumption and the expectation is that the CW is here to stay (Collins, 2009).”
Cable Challenges the Networks
A far greater challenge to network television than the emergence of smaller competitors was the increasing dominance of cable television. Between 1994 and 2009, the percentage of U.S. households watching the Big Four networks during prime time plummeted from 43 percent to 27 percent (Poniewozik, 2009). Two key factors influenced the rapid growth of cable television networks: industry deregulation and the use of satellites to distribute local TV stations around the country.
During the 1970s, the growth of cable television was restricted by FCC regulations, which protected broadcasters by establishing franchising standards and enforcing anti-siphoning rules that prevented cable from taking sports and movie programming away from the networks. In the late 1970s, however, a court ruled that the FCC had exceeded its authority, and the anti-siphoning rules were repealed. This decision paved the way for cable movie channels, contributing to the exponential growth of cable in the 1980s and 1990s. Further deregulation under the 1984 Cable Communications Policy Act removed restrictions on cable rates, enabling operators to charge what they wanted for cable services as long as there was effective competition to the service (a standard that over 90 percent of all cable markets could meet). Other deregulatory policies during the 1980s, including the elimination of public-service requirements and of limits on advertising in children’s programming, further expanded the scope of cable programming. Deregulation was intended to encourage competition within the industry, but it instead enabled local cable companies to establish monopolies all over the country. In 1989, U.S. Senator Al Gore of Tennessee commented, “Precipitous rate hikes of 100 percent or more in one year have not been unusual since cable was given total freedom to charge whatever the market will bear…. Since cable was deregulated, we have also witnessed an extraordinary concentration of control and integration by cable operators and program services, manifesting itself in blatantly anticompetitive behavior toward those who would compete with existing cable operators for the right to distribute services (Zaretsky, 1995).” Regulation of basic cable rates was reintroduced in 1992, by which time more than 56 million households (over 60 percent of households with televisions) subscribed to a cable service.
The growth of cable TV was also assisted by a national satellite distribution system, pioneered by Time Inc., founder of the cable network HBO. In 1975, the corporation used satellite transmission to beam the “Thrilla in Manila”—the historic heavyweight boxing match between Muhammad Ali and Joe Frazier—into people’s homes. Shortly afterward, entrepreneur Ted Turner, owner of the independent Atlanta-based station WTBS, uplinked his station’s signal to the same satellite as HBO, enabling cable operators to downlink the station on one of their channels. Initially provided free to subscribers to encourage interest, the station offered TV reruns, wrestling, and live sports from Atlanta. Having created the first “superstation,” Turner expanded his realm by founding the 24-hour news network CNN in 1980. By the end of that year, 28 national programming services were available, and the cable revolution had begun. Over the next decade, the industry underwent a period of rapid growth and rising popularity, and by 1994 viewers could choose from 94 basic and 20 premium cable services.
Narrowcasting
Because the proliferation of cable channels provided viewers with so many choices, broadcasters began to move away from mass-oriented programming in favor of more targeted shows. Whereas the broadcast networks sought to obtain the widest audience possible by avoiding programs that might only appeal to a small minority of viewers, cable channels sought out niche audiences within specific demographic groups—a process known as narrowcasting. In much the same way that specialist magazines target readers interested in a particular sport or hobby, cable channels emphasize one topic, or group of related topics, that appeal to specific viewers (often those who have been neglected by broadcast television). People interested in current affairs can tune into CNN, MSNBC, Fox News, or any number of other news channels, while those interested in sports can switch on ESPN or TSN (The Sports Network). Other channels focus on music, shopping, comedy, science fiction, or programs aimed at specific cultural or gender groups. Narrowcasting has proved beneficial for advertisers and marketers, who no longer need to time their communications based on the groups of people who are most likely to watch television at certain times of the day. Instead, they concentrate their approach on subscription channels that appeal directly to their target consumers.
Impact on Networks
The popularity of cable television has forced the Big Four networks to rethink their approach to programming over the past three decades. Because of its narrowcasting mode of distribution and exhibition, cable TV has offered more explicit sexual and violent content than broadcast television. To compete with cable channels for viewers, broadcast networks have loosened restrictions on graphic material and now frequently feature partial nudity, violence, and coarse language. This has increased viewership of mildly controversial shows such as CSI, NCIS, Grey’s Anatomy, and Private Practice, while opening the networks to attacks from conservative advocacy groups that object to the more extreme content.
The broadcast networks are also increasingly adopting narrowcasting as a programming strategy. Newer networks, such as the CW, deliberately target the 18–34 age group (women in particular); since its inception, the CW has replaced urban comedies such as Everybody Hates Chris with female-oriented series such as Gossip Girl and The Vampire Diaries. Older networks schedule similar programs that appeal to specific groups in adjacent time slots to retain viewers for as long as possible. For example, ABC sitcoms Modern Family and Cougar Town run back to back, while Fox follows the reality police series Cops with the crime-fighting show America’s Most Wanted.
Despite these responses to the challenge from cable, the broadcast networks’ share of the total audience has declined each year. Between 2000 and 2009, the networks lost around 8 million viewers (Bianco, 2009).
Key Takeaways
- During the 1950s, the cost of producing a single television show increased as shows became longer and production costs soared. Sponsorship on network television shifted from single sponsorship, in which a program was entirely supported and produced by one advertiser, to multiple sponsorship, in which advertisers bought 1- or 2-minute spots on the show. Although no one advertiser controlled the content of the show, sponsors had some say in the program’s subject matter. Sponsors have retained some control over program content by withdrawing funding from shows that are deemed to have offensive or inappropriate content.
- Public television was created to enhance citizenship and also to provide a television service for people in rural areas or those who could not afford to pay for a private television service. Despite its origins as a noncommercial entity, public television has increasingly had to turn to commercial sponsorship to stay afloat. Government funding for public television has declined over the years, and competition from niche cable channels has rendered its future uncertain.
- Between 1950 and 1970, the Big Three networks (ABC, CBS, and NBC) accounted for around 95 percent of prime-time viewing. The addition of Fox in 1986 created the Big Four; however, attempts to create an additional major network have been unsuccessful. CBS-owned UPN and Time Warner-owned WB merged in 2006 to create the CW. Targeted at women aged 18–34, the CW consistently ranks a low fifth in the ratings.
- The primary challenge to network television has been the rapid growth of cable, which expanded dramatically in the 1980s and 1990s as a result of industry deregulation and the use of satellites to distribute local channels to a national audience (pioneered by HBO in the 1970s). Cable broadcasters use a process known as narrowcasting to target niche audiences, with channels usually focused on a single topic, such as news, weather, shopping, or comedy. Competition from cable has forced network television to loosen its restrictions regarding sex and violence on shows, and the networks have turned increasingly to narrowcasting in an effort to retain audiences. Despite these efforts, competition from cable and other sources has caused the Big Four networks’ share of the prime-time viewing audience to drop from 43 percent in 1994 to 27 percent in 2009.
Exercises
Please respond to the following short-answer writing prompts. Each response should be a minimum of one paragraph.
- Choose one of the Big Four networks and print out its weekly programming schedule. Watch the network’s prime-time programs over the course of a week, noting the target demographic for each show. Observe the advertising sponsors that support each show and compare how the products and services fit with the intended audience.
- Does the network make use of narrowcasting to air shows with the same demographic in adjacent time slots?
- How do the types of products and services advertised during each show change depending on the content and target audience?
- Does the network cater to one target audience in particular?
- How has the rise of cable television affected the Big Four networks? What trends have emerged out of this competition?
References
Advertising Age, “Speak Up About Talk Shows,” November 27, 1995, http://adage.com/article?article_id=84233.
Bianco, Robert. “The Decade in Television: Cable, the Internet Become Players,” USA Today, December 29, 2009, http://www.usatoday.com/life/television/news/2009-12-28-decadeTV28_CV_N.htm.
Collins, Scott. “With Ratings Comeback, has CW Finally Turned the Corner?” Los Angeles Times, April 7, 2009, http://latimesblogs.latimes.com/showtracker/2009/04/last-week-the-cw-scored-its-best-ratings-in-nearly-five-months-ordinarily-this-might-not-sound-like-huge-news-but-cw-is-a.html.
Gold, Matea. “Marketing Tie-ins Finding Their Way to PBS Sponsors,” Baltimore Sun, October 23, 2006, http://articles.baltimoresun.com/2006-10-23/features/0610230151_1_pbs-corporate-underwriters-public-television.
Grego, Melissa. “How The CW Stays Undead,” Broadcasting and Cable, February 1, 2010, http://www.broadcastingcable.com/article/446733-How_The_CW_Stays_Undead.php.
McCauley, Michael P. Public Broadcasting and the Public Interest (Armonk, NY: M. E. Sharpe, 2003), 239.
McGrath, Charles. “Is PBS Still Necessary?” New York Times, February 17, 2008, http://www.nytimes.com/2008/02/17/arts/television/17mcgr.html.
Newcomb, Encyclopedia of Television, 2170.
Poniewozik, James. “Here’s to the Death of Broadcast,” Time, March 26, 2009, http://www.time.com/time/magazine/article/0,9171,1887840,00.html.
Spain, William. “Advertisers Deserting Fox News’ Glenn Beck,” MarketWatch, August 14, 2009, http://www.marketwatch.com/story/advertisers-deserting-fox-news-glenn-beck-2009-08-14.
Stelter, Brian. “PBS to Shorten Time Commitments for Sponsorships,” New York Times, May 7, 2009, http://www.nytimes.com/2009/05/08/business/media/08adco.html.
Zaretsky, Adam M. “The Cable TV Industry and Regulation,” Regional Economist, July 1995, http://research.stlouisfed.org/publications/regional/95/07/CableTV.pdf.
9.4 Influence of New Technologies
Learning Objectives
- Describe the difference between satellite television and cable television.
- Identify two of the major satellite companies in today’s market.
- Identify ways in which the Internet has affected content delivery and viewing patterns.
The experience of watching television is rapidly changing with the progression of technology. No longer restricted to a limited number of channels on network television, or even to a TV schedule, viewers are now able to watch exactly what they want to watch, when they want to watch it. Nontelevision delivery systems such as the Internet, which enables viewers to download traditional TV shows onto a computer, laptop, iPod, or smartphone, are changing the way people watch television. Meanwhile, cable and satellite providers are enabling viewers to purchase TV shows to watch at their convenience through the use of video-on-demand services, changing the concept of prime-time viewing. Digital video recording (DVR) systems such as TiVo, which enable users to record particular shows onto the system’s computer memory, are having a similar effect.
Although TV audiences are becoming increasingly fragmented, they are also growing because of the convenience and availability of new technology. In 2009, Nielsen’s Three Screen Report, which encompassed television, cell phone, and computer usage, reported that the average viewer watched more than 151 hours of television per month, up 3.6 percent from the previous year (Semuels, 2009). Viewers might not all be sitting together in the family room watching prime-time shows on network TV between 7 and 11 p.m., but they are watching.
The War Between Satellite and Cable Television
The origins of satellite television can be traced to the space race of the 1950s, when the United States and the Soviet Union were competing to put the first satellite into space. Soviet scientists accomplished the goal first with the launch of Sputnik in 1957, galvanizing Americans (who feared falling behind in space technology during the Cold War) into intensifying their efforts, which resulted in the creation of the National Aeronautics and Space Administration (NASA) in 1958. AT&T launched Telstar, the first active communications satellite, on July 10, 1962, and the first transatlantic television signal—a black-and-white image of a U.S. flag waving in front of the Andover Earth Station in western Maine—was transmitted that same day. However, the television industry did not utilize satellites for broadcasting purposes until the late 1970s, when PBS introduced its Public Television Satellite Service. Satellite distribution then caught on among broadcasters: between 1978 and 1984, pioneering cable channels such as HBO, TBS (Turner Broadcasting System), and CBN (Christian Broadcasting Network, later the Family Channel) used satellites to deliver their programming.
The trouble with early satellite television systems was that once people purchased a satellite dish, they had free access to every basic and premium cable service broadcasting via satellite. The FCC had an “open skies” policy, under which users had as much right to receive signals as broadcasters had to transmit them. Initially, satellite receiver systems were prohibitively expensive for most families, costing more than $10,000. However, as the price of a dish dropped toward the $3,000 mark in the mid-1980s, consumers began to view satellite TV as a cheaper, higher-quality alternative to cable: after the initial purchase of a dish system, the actual programming—consisting of more than 100 cable channels—was free. Cable broadcasters lobbied the government for legal assistance and, under the 1984 Cable Act, were allowed to encrypt their satellite feeds so that only people who purchased a decoder from a satellite provider could receive the channels.
Following the passage of the Cable Act, the satellite industry took a dramatic hit. Sales of the popular direct-to-home (DTH) systems (precursors to the smaller, more powerful direct broadcast satellite systems introduced in the 1990s) that had offered free cable programming slumped from 735,000 units in 1985 to 225,000 units a year later, and around 60 percent of satellite retailers went out of business. The industry’s sudden drop in popularity was exacerbated by cable operators’ large-scale antidish advertising campaigns, which depicted satellite dishes as unsightly. Although sales picked up in the late 1980s with the introduction of integrated receiving and decoding units and the arrival of program packages, which saved consumers the time and effort of signing up for individual programming services, the industry’s growth was stunted by piracy—the theft of satellite signals. Of the 1.9 million units manufactured between 1986 and 1990, fewer than 500,000 were receiving signals legally (Thibedeau, 2000). The problem was ultimately addressed by the Satellite Broadcasting and Communications Association (SBCA), created in 1986 by the merger of two trade organizations: the Society of Private and Commercial Earth Stations (SPACE), which comprised manufacturers, distributors, and retailers of direct-to-home systems, and the Direct Broadcast Satellite Association (DBSA), which represented companies interested in direct broadcast satellite systems. The SBCA set up an antipiracy task force and, with the FBI’s help, aggressively pursued signal pirates.
Once the piracy problem was under control, the satellite industry could move forward. In 1994, four major cable companies launched a first-generation direct broadcast satellite (DBS) system called PrimeStar. The system, a small-dish, satellite-delivered program service designed specifically for home reception, was the first of its kind to succeed in the United States. Within a year, PrimeStar was beaming 67 channels into 70,000 homes for a monthly fee of $25 to $35 (in addition to a hardware installation fee of $100 to $200). By 1996, competitors DirecTV and the EchoStar Dish Network had entered the industry, and Dish Network’s cheaper prices were forcing its rivals to drop their fees. DirecTV acquired PrimeStar’s assets in 1999 for around $1.82 billion, absorbing its rival’s 2.3 million subscribers (Junnarker, 1999).
The Current Satellite Market: DirecTV versus Dish Network
As of 2010, the two biggest players in the satellite TV industry are DirecTV and Dish Network. Assisted by the passage of the Satellite Television Home Viewers Act in 1999, which enabled satellite providers to carry local TV stations (putting them on an equal footing with cable television), both companies have grown rapidly over the past decade. In the first quarter of 2010, DirecTV boasted 18.6 million subscribers, placing it ahead of Dish Network, which reported 14.3 million (Paul, 2010). Dish Network courts customers hit by the economic downturn, aggressively cutting its prices and emphasizing its low rates. Conversely, DirecTV targets affluent consumers, emphasizing quality and choice in its advertising campaigns and investing in advanced services and products, such as multiroom viewing (which lets a subscriber watch a show in one room, pause it, and continue watching the same show in another room), to differentiate itself from rival satellite and cable companies.
Since the 1999 legislation put satellite television in direct competition with cable, the major satellite companies have increasingly pitted themselves against cable broadcasters, offering consumers numerous incentives to switch providers. One of these incentives is the addition of premium networks for satellite subscribers, in the same vein as the premium cable channel HBO. In 2005, DirecTV expanded its 101 Network channel to include original programming, and in 2007, with the NBC daytime soap opera Passions, it became the first satellite provider to air first-run episodes of a broadcast television series. The channel aired first-run episodes of the football drama Friday Night Lights in 2008 and, a year later, set its sights on the male over-35 demographic by obtaining syndication rights to the popular HBO series Oz and Deadwood. Commenting on the satellite company’s programming plans, DirecTV executive vice president for entertainment Eric Shanks said, “We’d like to become a pre-cable window for these premium channels (Carter, 2009).” In other words, the company hopes to purchase HBO shows such as Sex and the City before HBO sells the series to basic-cable channels like TBS.
In another overt bid to lure cable customers to satellite television, both DirecTV and Dish Network offer comprehensive movie and sports packages, benefiting from their additional channel capacity (satellite TV providers typically offer around 350 channels, compared with 180 on cable) and their ability to carry international channels often unavailable on cable. In the mid-2000s, the satellite companies also began encroaching on cable TV’s domination of bundled packages by offering all-in-one phone, Internet, and television services. Despite being ideally suited to offering such bundles through their single telecommunications pipe into the house, cable companies such as Comcast, Cox, and Time Warner had developed a reputation for poor service at extortionate prices. In the first three quarters of 2004, the eight largest cable providers (with the exception of bankrupt Adelphia) lost 552,000 basic-cable subscribers. Between 2000 and 2004, cable’s share of the TV market fell from 66 percent to 62 percent, while the share of U.S. households with satellite TV increased from 12 percent to 19 percent (Belson, 2004). Despite reports that cash-strapped consumers are switching off pay-TV services to save money in strained economic times, satellite industry revenues have risen steadily over the past decade.
The Impact of DVRs and the Internet: Changing Content Delivery
Over the past two decades, the viewing public has become increasingly fragmented as a result of growing competition between cable and satellite channels and traditional network television stations. Now, TV audiences are being presented with even more options. Digital video recorders (DVRs) like TiVo allow viewers to select and record shows they can watch at a later time. For example, viewers can set their DVRs to record all new (or old) episodes of the show Deadliest Catch and then watch the recorded episodes whenever they have free time.
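Conceptually, a DVR’s “record every episode” feature is just a standing query against the electronic program guide. The sketch below is a simplified illustration of that idea, not TiVo’s actual software; the guide entries, field names, and the season_pass function are all invented for the example.

```python
# A toy "season pass": scan a program guide and queue every airing of a series.
guide = [
    {"title": "Deadliest Catch", "start": "2010-06-01 21:00", "new": True},
    {"title": "Evening News", "start": "2010-06-01 22:00", "new": True},
    {"title": "Deadliest Catch", "start": "2010-06-02 02:00", "new": False},  # rerun
]

def season_pass(guide, series, new_only=False):
    """Return the guide entries a DVR would schedule for recording."""
    return [entry for entry in guide
            if entry["title"] == series and (entry["new"] or not new_only)]

# Record only new episodes of the show, regardless of when they air.
for episode in season_pass(guide, "Deadliest Catch", new_only=True):
    print("scheduled:", episode["start"])
```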
Advertisers can use DVR data to track which shows are being viewed, and DVRs are even capable of targeting viewers with specific ads when they play back a recorded program. In 2008, consumer groups battled cable companies and lawmakers to protect the privacy of viewers who did not wish to be tracked this way, prompting Nielsen to make such tracking optional.
Nontelevision delivery systems such as the Internet allow viewers to download their favorite shows at any time, on several different devices. The Internet has typically been bad news for traditional media: newspapers, magazines, the music industry, video rental companies, and bookstores have all suffered since its arrival. Unlike other media, however, television has so far survived the Internet’s effects. Television remains the dominant source of entertainment for most Americans, who use new media in conjunction with traditional TV viewing, watching vast quantities of television in addition to streaming numerous YouTube videos and catching up on missed episodes via the networks’ web pages. In the third quarter of 2008, the average American watched 142 hours of television per month, an increase of 5 hours from the same quarter the previous year. Internet use averaged 27 hours per month, an increase of an hour and a half between 2007 and 2008 (Stross, 2009).
New Viewing Outlets: YouTube and Hulu
Of the many recent Internet phenomena, few have made as big an impact as the video-sharing website YouTube. Created by three former PayPal employees in 2005, the site enables users to upload personal videos, television clips, music videos, and snippets of movies that can be watched by other users worldwide. Although it initially drew unfavorable comparisons with the original music-sharing site Napster (see Chapter 6 “Music”), which was buried under an avalanche of copyright infringement lawsuits, YouTube survived the controversy by forming agreements with media corporations, such as NBC Universal Television, to legally broadcast video clips from shows such as The Office. In 2006, the company, which was then showing more than 100 million video clips per day, was purchased by Google for $1.65 billion (MSNBC, 2006). Correctly predicting that the site was the “next step in the evolution of the Internet,” Google CEO Eric Schmidt has watched YouTube’s popularity explode since the takeover. As of 2010, YouTube serves more than 2 billion clips per day and allows people to upload 24 hours of video every minute (YouTube). To secure its place as the go-to entertainment website, YouTube is expanding its boundaries by developing a movie rental service and showing live music concerts and sporting events in real time. In January 2010, Google signed a deal with the Indian Premier League, making 60 league cricket matches available on YouTube’s IPL channel and attracting 50 million viewers worldwide (Timmons, 2010).
While YouTube remains focused on user-generated material, viewers looking for commercial videos of movies and TV shows are increasingly turning to Hulu. Established in 2007 through a deal between NBC Universal, News Corporation, and a number of leading Internet companies (including Yahoo!, AOL, MSN, and MySpace), the site gives users free access to an entire library of video clips and syndicates its material to partner distribution sites. The videos include full episodes of current hit shows such as House, Saturday Night Live, and The Simpsons, as well as older hits from the studios’ television libraries. Supported through advertising, the venture, which is available only to viewers in the United States, became the premier video broadcast site on the web within 2 years. In July 2009, the site drew more than 38 million viewers and delivered more videos than any site except YouTube (Salter, 2009). Over the year as a whole, Hulu generated an estimated $120 million in revenue and increased its advertiser base to 250 sponsors (Salter, 2009). Its advertising model appeals to viewers, who watch only 2 minutes of promotion per 22 minutes of programming, compared with 8 minutes in a typical half-hour broadcast slot. Limiting sponsorship to one advertiser per show has made recall rates twice as high as those for the same advertisements on television, benefiting sponsors as well as viewers.
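A quick back-of-the-envelope calculation shows how different the two ad loads are. The minute figures come from the paragraph above; the assumption that 22 minutes of programming plus 8 minutes of commercials fill a standard 30-minute broadcast slot is mine, for illustration.

```python
# Share of total airtime devoted to ads, using the figures cited above.
program_min = 22          # minutes of programming in a half-hour slot
hulu_ad_min, tv_ad_min = 2, 8

hulu_share = hulu_ad_min / (program_min + hulu_ad_min)
tv_share = tv_ad_min / (program_min + tv_ad_min)

print(f"Hulu: {hulu_share:.0%} of airtime is advertising")  # ~8%
print(f"TV:   {tv_share:.0%} of airtime is advertising")    # ~27%
```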
Some critics and television executives claim that the Hulu model has been too successful for its own good, threatening the financial underpinnings of cable TV by reducing DVD sales and bypassing carriage fees—in 2009, Fox pulled most episodes of It’s Always Sunny in Philadelphia from Hulu’s site. At the networks’ request, Hulu also shut off access to its programming from Boxee, a fledgling service that enabled viewers to stream online video to their TV sets. “We have to find ways to advance the business rather than cannibalize it,” stated the distribution chief at TNT, a network that refused to stream episodes of shows such as The Closer on Hulu’s site (Rose, 2009). However, many television executives realize that if they do not cannibalize their own material, others will. When a viral video of the Saturday Night Live short “Lazy Sunday” hit the web in 2005, generating millions of hits on YouTube, NBC did not earn a dime. The broadcast networks—the Big Four and the CW—have also begun streaming shows for free in an effort to stop viewers from watching episodes on other websites.
Video-on-Demand
Originally introduced in the early 1990s, the concept of video on demand (VOD)—a pay-per-view system that allows viewers to order or download a film via television or the Internet and watch it at their convenience—was not immediately successful because ordering a movie cost far more than buying or renting it from a store. Another early complaint was that studios withheld movies until long after they were available on DVD, by which time most people who wanted to see a film had already done so. Both of these disadvantages have since been remedied: movies are now released on VOD at the same time as on DVD, at competitive rental prices. Currently, most cable and satellite TV providers offer some form of on-demand service, either VOD, which provides movies 24 hours a day and gives viewers all the functionality of a DVD player (the ability to pause, rewind, or fast-forward films), or near video on demand (NVOD), which broadcasts multiple copies of a film or program at short intervals but does not allow viewers to control playback.
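The difference between VOD and NVOD comes down to scheduling. A short sketch of the NVOD staggering idea follows; the two-hour running time, the six channels, and the nvod_start_times helper are illustrative assumptions rather than any provider’s actual configuration.

```python
# NVOD staggering: the same film starts on several channels at fixed offsets,
# so a viewer's maximum wait equals the stagger interval, not the film length.
def nvod_start_times(runtime_min, channels):
    """Evenly staggered start offsets (in minutes) across the channels."""
    interval = runtime_min // channels
    return [ch * interval for ch in range(channels)]

offsets = nvod_start_times(runtime_min=120, channels=6)
print("start offsets:", offsets)  # [0, 20, 40, 60, 80, 100]
print("maximum wait:", 120 // 6, "minutes")  # a new showing every 20 minutes
```

Full VOD eliminates the wait entirely by sending each household its own stream, which is why it requires a two-way cable or Internet connection rather than simple one-way broadcast.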
As an alternative to cable or satellite VOD, viewers can also readily obtain movies and television shows over the Internet, via free services such as YouTube and Hulu or through paid subscriptions to sites that stream movies to a computer. The online DVD rental service Netflix began giving subscribers instant access to its catalog of older TV programs and films in 2007, while Internet giant Amazon.com set up a rival service resembling the pay-per-view model in 2008. Viewers can also stream free episodes of their favorite shows via cable and broadcast networks’ websites. With the increasing popularity of smartphones—cell phones with built-in applications and Internet access—viewers are using VOD to watch television while out of the house. Having discovered that consumers are willing to watch entire TV episodes or even films on their smartphones, industry executives are looking for ways to capitalize on smartphone technology. In 2010, News Corporation’s Fox Mobile Group was planning to launch Bitbop, a service that would stream TV episodes to smartphones for $9.99 a month. Discussing the project, Bitbop architect Joe Bilman said that “the marriage of on-demand content and mobility has the power to light a fire in the smartphone space (Stelter, 2010).” The shift from traditional television viewing to online viewing is making a small but noticeable dent in the $84 billion cable and satellite industry: between the beginning of 2008 and the end of 2009, an estimated 800,000 U.S. households cut the cable cord in favor of web viewing (Schonfeld, 2010).
Interactive Television
Moving a step beyond VOD, cable and satellite TV providers are combining aspects of traditional television viewing with online content to create an entirely new way of watching shows—interactive television (iTV). Using an additional set-top box and their remote control, viewers can utilize several different features that go beyond simply watching a television show. For example, interactive television enables users to take part in quiz shows, vote for a favorite contestant on a game show, view highlights or look up statistics during sports matches, create a music playlist or photo slideshow, and view local information such as weather and traffic through a ticker under a current TV program. Software such as Microsoft’s UltimateTV, released in 2001, even brought interactivity to individual television shows. For example, a viewer watching CBS crime series CSI can click on the interactive icon in the corner of the screen and obtain instant information about forensic analysis techniques, along with an episode guide, character biographies, and a map of the show’s Las Vegas setting.
Interactive television is beginning to take on the social format of the web, linking viewers with online communities who use communication tools such as Twitter and Skype IM to discuss what they just saw on television in real time. When popular musical comedy show Glee hit the screens in 2009, marketing experts at Fox pushed for a strong online presence, airing the pilot episode well in advance of the actual season debut and generating buzz on social networking sites such as Twitter and Facebook. Once the show gained widespread popularity, Fox launched an interactive hypertrailer on its website, allowing viewers to click on and “like” the show’s cast members on Facebook. The Glee cast also participates in weekly “tweet-peats,” which feature live Twitter feeds that scroll across the bottom of the screen during reruns of the show, providing behind-the-scenes details and answering fan questions. The CW network uses a similar technique with its “TV to Talk About” campaign, a tagline that changes from ad to ad to include iterations such as “TV to text about,” “blog about,” or “tweet about.” Its website offers forums where viewers can discuss episodes and interact with video extras, photos, and background clips about various shows. Online television forum Television Without Pity provides viewers with an alternative place for discussion that is not affiliated with any one network.
Despite the shift toward interactive television, one barrier that manufacturers seem unwilling to cross is building full Internet access into TV sets. Although Internet-enabled televisions began trickling into the market in 2008 and 2009, many industry executives remained skeptical of their potential. In February 2009, Sony spokesman Greg Belloni said, “Sony’s stance is that consumers don’t want an Internet-like experience with their TVs, and we’re really not focused on bringing anything other than Internet video or widgets to our sets right now (Richtel, 2009).” Although some analysts predict that up to 20 percent of televisions will be Internet-enabled by 2012, the consulting firm Deloitte anticipates the continued concurrent use of TV sets alongside laptops, MP3 players, and other browser-enabled devices (Deloitte, 2010).
Key Takeaways
- The first satellite television signal was transmitted in 1962; however, the television industry did not begin using satellites for broadcasting purposes until the late 1970s. Early problems with satellite TV included the high cost of a satellite dish and, after the passage of the Cable Act in 1984, the theft of satellite signals. Once piracy was under control, satellite television companies began to emerge and become profitable. The two biggest current satellite television providers are DirecTV, which targets affluent consumers, and Dish Network, which targets lower-earning consumers. Since 1999 legislation enabled satellite companies to broadcast local channels, satellite TV has become a viable threat to cable. Satellite companies attempt to lure cable customers by offering premium channels, sports and movie packages, and competitive prices.
- Unlike some other forms of media, television is so far surviving the impact of the Internet. However, the web is changing content delivery methods and the way people conceive of television and program scheduling. New viewing outlets such as YouTube and Hulu enable viewers to watch online video clips, entire episodes of TV shows, and movies free of charge (although Hulu also offers paid content with a wider selection of programs to offset losses in network advertising revenue). Video-on-demand services, now available through most cable and satellite providers, allow viewers to order movies or TV programs at their convenience rather than adhering to a fixed programming schedule. VOD is also available through Internet sites such as Amazon.com and Netflix, allowing people to stream shows and video clips to their smartphones and watch television on the go. Thanks to the influence of the Internet, television is also becoming more interactive, with providers combining aspects of traditional viewing and online content. This is manifested in two ways: new features that provide viewers with hundreds of additional options while they watch their favorite shows (for example, the ability to look up a news story or get a weather update), and social television, which encourages viewers to combine TV viewing with social networking (for example, by blogging or joining an online chat forum about a show).
Exercises
Please respond to the following short-answer writing prompts. Each response should be a minimum of one paragraph.
- What is the difference between satellite and cable television? In today’s market, who is winning the battle for consumers?
- Aside from DirecTV and Dish Network, what other satellite options do consumers have? How do these options differ from DirecTV and Dish Network?
- How have the Internet and DVRs affected your television-viewing habits?
End-of-Chapter Assessment
Review Questions
- Section 1
- What were some of the technological developments that paved the way for the evolution of television, and what role did they play?
- What factors contributed to the dominance of electronic television over mechanical television?
- Why was color technology slow to gain popularity following its development?
- What were some of the important landmarks in the history of television after 1960?
- Section 2
- What cultural factors influenced television programming between 1950 and 2010?
- How did television influence culture between 1950 and 2010? How are television and culture interrelated?
- Section 3
- How can corporate sponsors influence television programming?
- What factors have influenced the decline of the major networks since 1970? How have the networks adapted to changes in the industry?
- How does cable television differ from network television? How has the growth of cable been influenced by industry legislation?
- Section 4
- What are the main differences between satellite television and cable television? What factors influenced the growing popularity of satellite television in the 1980s and 1990s?
- Who are the two main competitors in the satellite television industry? How do they differ?
- How is the Internet changing content delivery methods and viewing patterns?
Critical Thinking Questions
- Do television programs just reflect cultural and social change, or do they influence it?
- Television audiences are becoming increasingly fragmented as a result of competition from cable and satellite companies and nontelevision delivery systems such as the Internet. What are the potential social implications of this trend?
- How can broadcast networks compete against satellite and cable operators?
- Critics frequently blame television for increasing levels of violence and aggression in children. Do broadcasters have a social responsibility to their viewers, and if so, how can they fulfill it?
- Supporters of public television argue that it serves a valuable role in the community, whereas opponents believe it is outdated. Is public television still relevant in today’s society, or should funding be cut completely?
Career Connection
Whether online viewing outlets continue to grow in popularity or viewers return to more traditional methods of watching television, broadcasters are likely to remain dependent on advertising sponsors to fund their programming. Advertising sales executives work for a specific network and sell TV time to agencies and companies, working within budgets to ensure that clients make effective use of their advertising time.
Read through the U.S. Bureau of Labor Statistics overview of a career in advertising sales. You can find it at: http://www.bls.gov/oco/ocos297.htm.
Then, read BNET’s analysis of the television advertising industry at http://industry.bnet.com/media/10008136/truth-in-network-tv-advertising-and-what-to-do-about-it/. Once you have looked at both sites, use the information to answer these questions:
- According to the Bureau of Labor Statistics website, employment of advertising sales agents is expected to increase by 7 percent between 2010 and 2018, about average for all professions. What reasons does the site give for this increase? What factors might offset the growth?
- What predictions does the BNET article make about future trends in the Big Four networks’ advertising sales? How might this affect career prospects?
- Based on the analysis in the BNET article and the information on the Bureau of Labor Statistics website, how is the advertising sales industry likely to change and develop?
- As the Bureau of Labor Statistics website points out, creativity is an invaluable trait for advertising sales executives. Using the information on both sites, compile a list of creative ways to attract new clients to the ailing broadcast networks.
9.2 The Relationship Between Television and Culture
Learning Objectives
- Identify ways in which American culture is reflected on television.
- Identify ways in which television affects the development of American culture.
Since becoming an integral part of American life in the 1950s, television has both reflected and nurtured cultural mores and values. From the escapist dramas of the 1960s, which consciously avoided controversial issues and glossed over life’s harsher realities in favor of an idealized portrayal, to the profusion of reality TV shows in recent years, on which participants discuss even the most personal and taboo issues, television has held up a mirror to society. But the relationship between social attitudes and television is reciprocal; broadcasters have often demonstrated their power to influence viewers, either consciously through slanted political commentary, or subtly, by portraying controversial relationships (such as single parenthood, same-sex marriage, or interracial couples) as socially acceptable. The symbiotic nature of television and culture is exemplified in every broadcast, from family sitcoms to serious news reports.
Cultural Influences on Television
In the 1950s, most television entertainment programs ignored current events and political issues. Instead, the three major networks (ABC, NBC, and CBS) developed prime-time shows that would appeal to a general family audience. Chief among these types of shows was the domestic comedy—a generic family comedy that was identified by its character-based humor and usually set within the home. Seminal examples included popular 1950s shows such as Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Presenting a standardized version of the White middle-class suburban family, domestic comedies portrayed the conservative values of an idealized American life. Studiously avoiding prevalent social issues such as racial discrimination and civil rights, the shows focused on mostly White middle-class families with traditional nuclear roles (mother in the home, father in the office) and implied that most domestic problems could be solved within a 30-minute time slot, always ending with a strong moral lesson.
Although these shows depicted an idealized version of American family life, many families in the 1950s were traditional nuclear families. Following the widespread poverty, political uncertainty, and physical separation of the war years, many Americans wanted to settle down, have children, and enjoy the peace and security that family life appeared to offer. During the booming postwar era, a period of optimism and prosperity, the traditional nuclear family flourished. However, the families and lifestyles presented in domestic comedies did not encompass the overall American experience by any stretch of the imagination. As historian Stephanie Coontz points out, “the June Cleaver or Donna Stone homemaker role was not available to the more than 40 percent of black women with small children who worked outside the home (Coontz, 1992).” Although nearly 60 percent of the U.S. population was labeled middle class by the mid-1950s, 25 percent of all families and more than 50 percent of two-parent Black families were poor. Migrant workers suffered horrific deprivations, and racial tensions were rife. None of this was reflected in the world of domestic comedies, where even the Hispanic gardener in Father Knows Best was named Frank Smith (Coontz, 1992).
Not all programs in the 1950s were afraid to tackle controversial social or political issues. In March 1954, journalist Edward R. Murrow broadcast an unflattering portrait of U.S. Senator Joseph McCarthy on his show See It Now. McCarthy, who chaired the Senate’s Permanent Subcommittee on Investigations, had launched inquiries into potential Communist infiltration of U.S. institutions. Murrow thought that McCarthy’s aggressive tactics posed a potential threat to civil liberties, and his program cast the senator from Wisconsin in a harsh light by pointing out contradictions in his speeches. The broadcast led to such an uproar that McCarthy was formally reprimanded by the U.S. Senate (Friedman, 2008).
Entertainment programs also tackled controversial issues. The long-running television western Gunsmoke, which aired on CBS from 1955 to 1975, flourished in a Cold War society, where U.S. Marshal Matt Dillon (James Arness) stood up to lawlessness in defense of civilization. The characters and community in Gunsmoke faced relevant social issues, including the treatment of minority groups, the meaning of family, the legitimacy of violence, and the strength of religious belief. During the 1960s, the show adapted to the desires of its viewing audience, becoming increasingly aware of and sympathetic to ethnic minorities, in tune with the national mood during the civil rights era. This adaptability helped the show to become the longest-running western in TV history.
Violence and Escapism in the 1960s
During the 1960s, television news broadcasts brought the realities of world events into people’s living rooms in vivid detail. The CBS Evening News with Walter Cronkite, which debuted in 1962, quickly became the country’s most popular newscast, and by the end of the decade, journalist Walter Cronkite was known as the most trusted man in America. Despite the optimism that followed John F. Kennedy’s election to the presidency at the beginning of the decade, the 1960s soon took an ominous turn. Shocked viewers tuned in to Cronkite’s broadcast on November 22, 1963, to learn about the assassination of their president. During the next few days, viewers followed every aspect of the tragedy on television, from the tremor in Cronkite’s voice as he removed his glasses and announced the news of Kennedy’s death, to the frantic scenes from Dallas police headquarters where the assassin, Lee Harvey Oswald, was gunned down by nightclub owner Jack Ruby, to the thousands of mourners lining up next to the president’s flag-draped coffin.
Around the same time as Kennedy’s assassination, horrific images from Vietnam were streaming into people’s living rooms during the nation’s first televised war. With five camera crews on duty in the Saigon bureau, news crews captured vivid details of the war in progress. Although graphic images were rarely shown on network TV, several instances of violence reached the screen, including a CBS report in 1965 that showed Marines lighting the thatched roofs of the village of Cam Ne with Zippo lighters and an NBC news report in 1968 that aired a shot of South Vietnamese General Nguyen Ngoc Loan executing a captive on a Saigon street. Further images, of children being burned and scarred by napalm and prisoners being tortured, fueled the antiwar sentiments of many Americans. In addition to the devastation caused by the president’s death and the Vietnam War, Americans were also feeling the pressure of the Cold War—the clash between the United States and the Soviet Union in the years following World War II. This pressure was especially great during periods of tension throughout the 1950s and 1960s, such as the 1962 Cuban Missile Crisis, a confrontation that caused many people to fear nuclear war.
As a result of the intense stress many Americans faced during the 1960s, broadcasters and viewers turned to escapist programs such as I Dream of Jeannie, a fantasy show about a 2,000-year-old genie who marries an astronaut, and Bewitched, a supernatural-themed show about a witch who tries to live as a suburban housewife. Both shows typified the situation comedy, or sitcom, a comedy genre featuring a recurring cast of characters who resolve zany situations based on their everyday lives. Other popular sitcoms of the 1960s included The Beverly Hillbillies, a show about a poor backwoods family who move to Beverly Hills, California, after finding oil on their land, and Gilligan’s Island, the ultimate escapist comedy about seven characters shipwrecked on an uncharted island. None of these sitcoms acknowledged the political unrest of the outside world, providing audiences with a welcome diversion from real life. Other than the occasional documentary, TV programming in the 1960s presented a sharp dichotomy between prime-time escapist comedy and hard news.
Diversity and Politics in the 1970s
During the 1970s, broadcasters began to diversify families on their shows to reflect changing social attitudes toward formerly controversial issues such as single parenthood and divorce. Feminist groups including the National Organization for Women (NOW), the National Women’s Political Caucus, and the Coalition of Labor Union Women pushed for equality on issues such as pay and encouraged women to enter the workforce. In 1973, the U.S. Supreme Court affirmed women’s right to abortion in Roe v. Wade, giving them control over their reproductive rights. Divorce rates skyrocketed during the 1970s as states adopted no-fault divorce laws, and the change in family dynamics was reflected on television. Between 1972 and 1978, CBS aired the socially controversial sitcom Maude. Featuring a middle-aged feminist living with her fourth husband and divorced daughter, the show exploded the dominant values of the White middle-class domestic sitcom and its traditional gender roles. Throughout its run, Maude tackled social and political issues such as abortion, menopause, birth control, alcoholism, and depression. During its first four seasons, the show ranked in the top 10 of the Nielsen ratings, illustrating the changing tastes of a viewing audience that had come of age during the era of civil rights and Vietnam protests and developed a taste for socially conscious television. Other 1970s sitcoms took the same approach, including Maude’s CBS predecessor, All in the Family, which covered issues ranging from racism and homophobia to rape and miscarriage, and The Mary Tyler Moore Show, which reflected changing attitudes toward women’s rights by featuring television’s first never-married independent career woman as the central character. Even wholesome family favorite The Brady Bunch, which ran from 1969 to 1974, featured a non-nuclear family, reflecting the rising number of blended families in American society.
In addition to changing family dynamics on sitcoms and other prime-time shows, variety and comedy sketch shows developed a political awareness in the 1970s that reflected audiences’ growing appetite for social and political commentary. Sketch comedy show Saturday Night Live (SNL) premiered on NBC in 1975 and has remained on air ever since. Featuring a different celebrity guest host every week and relatively unknown comedy regulars, the show parodies contemporary popular culture and politics, lambasting presidential candidates and pop stars alike. Earlier NBC sketch comedy show Laugh-In, which ran from 1968 to 1973, also featured politically charged material, though it lacked the satirical bite of later series such as SNL. By the end of the decade, television broadcasting reflected a far more politically conscious and socially aware viewing audience.
The Influence of Cable Television in the 1980s
Until the mid-1980s, the top three networks (ABC, NBC, and CBS) dominated television broadcasting in the United States. However, as cable services gained popularity following the deregulation of the industry in 1984, viewers found themselves with a multitude of options. Services such as Cable News Network (CNN), Entertainment and Sports Programming Network (ESPN), and Music Television (MTV) profoundly altered the television landscape in the worlds of news, sports, and music. New markets opened up for these innovative program types, as well as for older genres such as the sitcom. During the 1980s, a revival of family sitcoms took place with two enormous hits: The Cosby Show and Family Ties. Both featured a new take on modern family life, with the mothers working outside of the home and the fathers pitching in with housework and parental duties. Despite their success on network television, sitcoms faced stiff competition from cable’s variety of choices. Between 1983 and 1994, the weekly broadcast audience share (the percentage of televisions in use that are tuned to a particular program) for network television dropped from 69 to 52, while cable networks’ share rose from 9 to 26 (Newcomb, 2004).
With a growing number of households subscribing to cable TV, concern began to grow about the levels of violence to which children were being exposed. In addition to regularly broadcast network programs, cable offered viewers the chance to watch films and adult-themed shows at all hours, many of which had far more violent content than normal network programming. One study found that by the time an average child leaves elementary school, he or she has witnessed 8,000 murders and more than 100,000 other acts of violence on television (Blakey, 2002). Although no conclusive links have been drawn between witnessing violence on television and carrying out violence in real life, the loosening boundaries regarding sexual and violent content on television are a persistent cause of concern for many parents. For more information on the social effects of violence in the media, please refer to Chapter 2 “Media Effects”.
Specialization in the 1990s and 2000s
Although TV viewership is growing, the vast number of cable channels and other, newer content delivery platforms means that audiences are thinly stretched. In recent years, broadcasters have been narrowing the focus of their programming to meet the needs and interests of an increasingly fragmented audience. Entire cable channels devoted to cooking, music, news, African American interests (see sidebar below), weather, and courtroom drama enable viewers to choose exactly what type of show they want to watch, and many news channels are further specialized according to viewers’ political opinions. This trend toward specialization reflects a more general shift within society, as companies cater increasingly to smaller, more targeted consumer bases. Business magazine editor Chris Anderson explains, “We’re leaving the watercooler era, when most of us listened, watched and read from the same relatively small pool of mostly hit content. And we’re entering the microculture era, when we are all into different things (Gunther, 2006).” Just as cable broadcasters are catering to niche markets, Internet-based companies such as Amazon.com and Netflix are taking advantage of this concept by selling large numbers of books, DVDs, and music albums with narrow appeal. Section 9.3 “Issues and Trends in the Television Industry” and Section 9.4 “Influence of New Technologies” of this chapter will cover the recent trends and issues of this era in television.
Black Entertainment Television (BET)
Launched in 1980, Black Entertainment Television (BET) was the first television network in the United States dedicated to the interests of African American viewers. The basic-cable franchise was created in Washington, DC, by media entrepreneur Robert Johnson, who initially invested $15,000 in the venture. Within a decade, he had turned the company into a multimillion-dollar enterprise, and in 1991 it became the first Black-controlled company listed on the New York Stock Exchange. The company was sold to Viacom in 2001 for approximately $3 billion.
Pre-dating MTV by a year, BET initially focused on Black-oriented music videos but soon diversified into original urban-oriented programs and public affairs shows. Although BET compensated somewhat for the underrepresentation of Blacks on television (African Americans made up just 8 percent of prime-time characters in 1980, while accounting for 12 percent of the population), viewers complained about the portrayal of stereotypical images and inappropriate violent or sexual behavior in many of the rap videos shown by the network. In a 2004 interview with BET vice president of communications Michael Lewellen, former BET talk show host Bev Smith said, “We had videos on BET in those days that were graphic but didn’t proliferate as they seem to be doing now. That’s all you do seem to see are scantily dressed women who a lot of African American women are upset about in those videos (Fox News, 2004).” Despite the criticisms, BET remained the No. 1 cable network among Blacks ages 18 to 34 in 2010 and retained an average audience of 524,000 total viewers during the first quarter of that year (Forbes, 2010).
Television’s Influence on Culture
Despite the arrival of this microculture era with its variety of niche markets, television remains the most important unifying cultural presence in the United States. During times of national crisis, television news broadcasts have galvanized the country by providing real-time coverage of major events. When terrorists crashed planes into the World Trade Center towers in 2001, 24-hour TV news crews provided stunned viewers around the world with continuous updates about the attack and its aftermath. Meanwhile, network blockbusters such as Lost and 24 have united viewers in shared anticipation, spawning numerous blogs, fan sites, and speculative workplace discussions about characters’ fates.
Televised news coverage has had several cultural effects since the 1950s. Providing viewers with footage of the most intense human experiences, televised news has been able to reach people in a way that radio and newspapers cannot. The images themselves have played an important role in influencing viewer opinion. During coverage of the civil rights movement, for example, footage of a 1963 attack on civil rights protesters in Birmingham, Alabama, showed police blasting African American demonstrators—many of them children—with fire hoses. Coupled with images of angry White segregationist mobs squaring off against Black students, the news footage did much to sway public opinion in favor of liberal legislation such as the 1964 Civil Rights Act. Conversely, when volatile pictures of the race riots in Detroit and other cities in the late 1960s hit the airwaves, horrified viewers perceived a need for a return to law and order. The footage helped create an anti-civil-rights backlash that encouraged many viewers to vote for conservative Republican Richard Nixon in the 1968 presidential election.
During the past few decades, mass-media news coverage has gone beyond swaying public opinion through mere imagery. Trusted centrist voices such as that of Walter Cronkite, who was known for his impartial reporting of some of the biggest news stories in the 1960s, have been replaced by highly politicized news coverage on cable channels such as conservative Fox News and liberal MSNBC. As broadcasters narrow their focus to cater to more specialized audiences, viewers choose to watch the networks that suit their political bias. Middle-of-the-road network CNN, which aims for nonpartisanship, frequently loses out in the ratings wars against Fox and MSNBC, both of which have fierce groups of supporters. As one reporter put it, “A small partisan base is enough for big ratings; the mildly interested middle might rather watch Grey’s Anatomy (Poniewozik, 2010).” Critics argue that partisan news networks cause viewers to have less understanding of opposing political opinions, making them more polarized.
Social Controversy
The issue of whether television producers have a responsibility to promote particular social values continues to generate heated discussion. When the unmarried title character in the CBS series Murphy Brown—a comedy show about a divorced anchorwoman—got pregnant and chose to have the baby without any involvement from the father, then–Vice President Dan Quayle referenced the show as an example of degenerating family values. Linking the 1992 Los Angeles riots to a breakdown of family structure and social order, Quayle lambasted producers’ poor judgment, saying, “It doesn’t help matters when prime-time TV has Murphy Brown, a character who supposedly epitomizes today’s intelligent, highly paid professional woman, mocking the importance of fathers by bearing a child alone, and calling it just another ‘lifestyle choice (Time, 1992).’” Quayle’s outburst sparked lively debate between supporters and opponents of his viewpoint, with some praising his outspoken social commentary and others dismissing him as out of touch with America and its growing number of single mothers.
Similar controversy arose with the portrayal of openly gay characters on prime-time television shows. When the lead character on the ABC sitcom Ellen came out in 1997 (2 weeks after Ellen DeGeneres, the actress who played the role, announced that she was gay), she became the first leading gay character on both broadcast and cable networks. The show proved to be a test case for the nation’s tolerance of openly gay characters on prime-time TV and became the subject of much debate. Embraced by liberal supporters and lambasted by conservative objectors (evangelical Baptist minister Jerry Falwell infamously dubbed her “Ellen DeGenerate”), both the actress and the show furthered the quest to make homosexuality acceptable to mainstream audiences. Although Ellen was canceled the following year (amid disagreements with producers about whether it should contain a parental advisory warning), DeGeneres successfully returned to television in 2003 with her own talk show. Subsequent shows with prominent gay characters were quick to follow in Ellen’s footsteps. According to the Gay & Lesbian Alliance Against Defamation (GLAAD), 18 lesbian, gay, bisexual, or transgender characters accounted for 3 percent of scripted series regulars in the 2009–2010 broadcast television schedule, up from 1.3 percent in 2006 (Mitchell, 2009).
Creating Stars via Reality Television
Emerging out of the 1948 TV series Candid Camera, in which people were secretly filmed responding to elaborate practical jokes, reality television aimed to capture real, unscripted life on camera. The genre developed in several different directions, from home-video clip shows (America’s Funniest Home Videos, America’s Funniest People) to true-crime reenactment shows (America’s Most Wanted, Unsolved Mysteries) to thematic shows based on professions of interest (Project Runway, Police Women of Broward County, Top Chef). Near the turn of the millennium, the genre began to lean toward more voyeuristic shows, such as MTV’s The Real World, an unscripted “documentary” that followed the lives of seven strangers selected to live together in a large house or apartment in a major city. The show drew criticisms for glamorizing bad behavior and encouraging excessive drinking and casual sex, although its ratings soared with each successive controversy (a trend that critics claim encouraged producers to actively stage rating-grabbing scenarios). During the late 1990s and 2000s, a wave of copycat reality TV shows emerged, including the voyeuristic series Big Brother, which filmed a group of strangers living together in an isolated house full of cameras in an attempt to win large amounts of cash, and Survivor, a game show in which participants competed against each other by performing endurance challenges on an uninhabited island. Survivor’s success as the most popular show on television in the summer of 2000 ensured the continued growth of the reality television genre, and producers turned their attention to reality dating shows such as The Bachelor, Temptation Island, and Dating in the Dark. Cheap to produce, with a seemingly never-ending supply of willing contestants and eager advertising sponsors, reality TV shows continue to bring in big ratings. As of 2010, singing talent competition American Idol is television’s biggest revenue generator, pulling in $8.1 million in advertising sales every 30 minutes it is on the air (Bond, 2010).
Reality TV has created the cultural phenomenon of the instant celebrity. Famous for simply being on the air, reality show contestants are extending their 15 minutes in the spotlight. Kate Gosselin, star of Jon & Kate Plus 8, a cable TV show about a couple who have eight children, has since appeared in numerous magazine articles, and in 2010 she starred on celebrity reality dance show Dancing with the Stars. Survivor contestant Elisabeth Hasselbeck became a co-host on TV talk show The View, and several American Idol contestants (including Kelly Clarkson and Carrie Underwood) have become household names. The genre has drawn criticism for creating a generation that expects to achieve instant wealth without having to try very hard and also for preying on vulnerable people whom critics call “disposable.” When Britain’s Got Talent star Susan Boyle suffered a public meltdown in 2009 after the stress of transitioning from obscurity to stardom in an extremely short time period, the media began to point out the dangers of reality television. In 2009, TheWrap.com investigated the current lives of former stars of reality shows such as The Contender, Paradise Hotel, Wife Swap, and Extreme Makeover, and found that at least 11 participants had committed suicide as an apparent result of their appearances on screen (Adams, 2009; Feldlinger).
Key Takeaways
- Television has been reflecting changing cultural values since it first gained popularity after World War II. During the 1950s, most programs ignored current events and political issues in favor of family-friendly domestic comedies featuring White suburban middle-class families. The intense stress of the 1960s, caused by political events such as the Vietnam War and the Cuban Missile Crisis, led people to turn to the escapist fare offered by fantasy sitcoms, which stood in sharp contrast to the hard-news broadcasts of the era. Social consciousness during the 1970s prompted television producers to reflect changing social attitudes regarding single parenthood, women’s roles, and divorce, and sitcom families began to reflect the increasing number of non-nuclear families in society. The increasing popularity of cable TV in the 1980s led to an explosion of news and entertainment channels, some of which raised concerns about the levels of violence on television. During the 1990s and 2000s, TV networks became more specialized, catering to niche markets in order to meet the needs of an increasingly fragmented audience.
- Television reflects cultural values, and it also influences culture. One example of this is the polarization of cable TV news, which is no longer centrist but caters to individual political tastes. Critics argue that this influences cable news viewers’ opinions and makes them less open to opposing political viewpoints. Entertainment programs also play an influential role within society: by portraying once-controversial family arrangements, such as single parenthood or gay couples, as acceptable, TV shows have the power to shape viewers’ attitudes. In recent years, broadcasters have created the concept of the instant celebrity through the genre of reality television. Contestants on reality TV shows now permeate every aspect of culture and the media, from the music charts to popular magazines and newspapers.
Exercises
Please respond to the following short-answer writing prompts. Each response should be a minimum of one paragraph.
- Choose a popular sitcom from the past 50 years you are familiar with (you can view episodes on Hulu.com to refamiliarize yourself if necessary). Using the ideas in this section as a starting point, identify three ways in which your chosen sitcom reflects or reflected American culture.
- Spend a few days reviewing news coverage on Fox News and MSNBC. How is coverage of similar news stories different? Do you think partisan news networks can affect public opinion? Why or why not?
References
Adams, Guy. “Lessons From America on the Dangers of Reality Television,” Independent (London), June 6, 2009, http://www.independent.co.uk/news/world/americas/lessons-from-america-on-the-dangers-of-reality-television-1698165.html.
Blakey, Rea. “Study Links TV Viewing Among Kids to Later Violence,” CNN Health, March 28, 2002, http://archives.cnn.com/2002/HEALTH/parenting/03/28/kids.tv.violence/index.html.
Bond, Paul. “‘Idol’ Listed as TV’s Biggest Revenue Generator,” Hollywood Reporter, May 5, 2010, http://www.hollywoodreporter.com/hr/content_display/news/e3i8f1f42046a622bda2d602430b16d3ed9.
Coontz, Stephanie. “‘Leave It to Beaver’ and ‘Ozzie and Harriet’: American Families in the 1950s,” in The Way We Never Were: American Families and the Nostalgia Trip (New York: BasicBooks, 1992), 28.
Feldlinger, Frank. “TheWrap Investigates: 11 Players Have Committed Suicide,” TheWrap, http://www.thewrap.com/television/article/thewrap-investigates-11-players-have-committed-suicide-3409.
Forbes, “BET Networks Unveils New African American Consumer Market Research and New Programming at 2010 Upfront Presentation,” April 14, 2010, http://www.forbes.com/feeds/prnewswire/2010/04/14/prnewswire201004141601PR_NEWS_USPR_____NE86679.html.
Fox News, The O’Reilly Factor, “Is Black Entertainment Television Taking a Disturbing Turn?” Fox News, May 26, 2004, http://www.foxnews.com/story/0,2933,120993,00.html.
Friedman, Michael J. “‘See It Now’: Murrow vs. McCarthy,” in Edward R. Murrow: Journalism at Its Best, publication of U.S. Department of State, June 1, 2008, http://www.america.gov/st/democracyhr-english/2008/June/20080601110244eaifas8.602542e-02.html.
Gunther, Marc. “The Extinction of Mass Culture,” CNN Money, July 12, 2006, http://money.cnn.com/2006/07/11/news/economy/pluggedin_gunther.fortune/index.htm.
Mitchell, Wendy. “GLAAD Report: Gay Characters on Network TV Still on the Rise,” Entertainment Weekly, September 30, 2009, http://hollywoodinsider.ew.com/2009/09/30/glaad-report-gay-characters-on-rise/.
Newcomb, Horace, ed., Encyclopedia of Television (New York: Fitzroy Dearborn, 2004), 389.
Poniewozik, James. “CNN: Can a Mainstream News Outlet Survive?” Time, May 3, 2010, http://www.time.com/time/magazine/article/0,9171,1983901,00.html.
Time, “Dan Quayle vs. Murphy Brown,” June 1, 1992, http://www.time.com/time/magazine/article/0,9171,975627,00.html.