A Brief Social History of Game Play1
Dmitri Williams, University of Illinois at Urbana-Champaign
Who has played video games? Where have they played them? And how have games
helped or hindered social networks and communities? This chapter answers these historical
questions for the birthplace of video games—the United States—although many other
industrialized countries have had similar patterns. In the U.S., our collective stereotype conjures
up an immediate image: Isolated, pale-skinned teenage boys sit hunched forward on a sofa in
some dark basement space, obsessively mashing buttons. In contrast, the statistics and accounts
tell a very different story—one of often vibrant social settings and diverse playing communities.
Why do American conceptions of gamers diverge from reality?
The explanation is that for video game media, the sociopolitical has been inseparable
from the practical. Social constructions, buttressed by the news media over the past 30 years,
have created stereotypes of game play that persist within generations. This chapter will explain
both the imagery and the reality. Moving from the descriptive to the analytical, it begins with the
basic trends and figures: who played, when, where and why, and how changes in technology
have impacted the social side of gaming. An immediate pattern appears—for both industrial and
political reasons, the early 1980s were a crucial turning point in the social history of video game
play. What began as an open and free space for cultural and social mixing was quickly
transformed through social constructions that had little to do with content, the goals of the
producers, or even demand. The legacy of that era persists today, influencing who plays, how we view games, and even how we investigate their uses and effects.

1 The author wishes to thank Kurt Squire, Constance Steinkuehler and the editors for their comments on a draft of this chapter.
Setting the Stage
Figure A gives industry revenues for home and arcade play, standardized to 1983 dollars.
The data show what game historians have already presented through narratives (Herman, 1997;
Herz, 1997; Kent, 2000; Sheff, 1999): a slow adoption during the 1970s led to a massive spike
in popularity during the Atari heyday of the early 1980s, followed by the collapse of that
company and the industry’s eventual revival in the late 1980s by Nintendo. The late 1980s also
saw the beginning of play moving from public to private spaces. Over the past 10 years, games
have steadily become more popular to the point where they are now considered mainstream
media, competing with newspapers, television, radio and film for attention and dollars.
Figure A. Industry breakdown: Home game vs. arcade sales, in millions of dollars.2
Aside from the early hobbyist era (Levy, 1994), commercial gaming efforts did not gain
traction until the early 1970s. Long before arcades and Atari, the first mass-marketed home game
machine, Magnavox’s Odyssey, aimed for the mainstream family audience. Seeking to break the
nascent link between gaming technology and male youth culture, Magnavox marketed their
machine as the new electronic hearth, complete with advertising tableaux of families joined in
domestic bliss around the new machines. Such tableaux can have powerful idealizing and
norming functions (Marchand, 1985). From 1972 until the early 1980s, manufacturers tried to
promote the idea of gaming as a mainstream activity. This meant convincing parents that console
games could unite their families, while also convincing single adults that arcade games had sex
appeal. It is a measure of the strong stereotypes about age and gaming that the following will surprise many readers: these marketing efforts had some early success. Game
play in public spaces began as an adult activity, with games first appearing in bars and nightclubs
before the eventual arcade boom. Then, when arcades first took root, they were populated with a
wide mixing of ages, classes and ethnicities. This mixing was quite similar to what Gabler
(1999) describes for turn-of-the-century nickelodeons—populist and energetic.
There is no doubt that mainstream corporate and political forces helped to dull down these public spaces (a theme to be taken up shortly). However, the single biggest cause of gaming’s decline in the 1980s was the spectacular collapse of the Atari corporation, an event so traumatic that it appeared to destroy the entire industry. But while pundits and investors alike thought Atari’s collapse was proof of a faddish product and a fickle consumer, it was really no more than inept management (Cohen, 1984; Sheff, 1999). In actuality, the rise of Atari and the game industry had created a new kind of consumer, one increasingly comfortable with interactive electronic devices. The Nintendo revival of the late 1980s proved that demand had in fact not magically disappeared (it continued to flourish in Japan). Still, Nintendo’s marketing and distribution solidified games as the province of children for the next 10 years.

2 For comparability, these data have been adjusted for inflation and standardized to their 1983 values. Unadjusted values would show higher totals in recent years. Data source: Amusement & Music Operators Association, Nintendo, PC Data, NPD, Veronis Suhler, Vending Times (1978-2001).
The Crash and the New Consumer
At first, the industry’s collapse was easy to explain as just another example of short-attention-span American tastes: first disco polyester, then Pet Rocks, and now Pac-Man.
However, a closer look at the demographics and demand shows that video games helped usher in
a new kind of consumer, one increasingly aware of new tools and new possibilities. Consumers
were beginning to embrace home computers, compact discs, and the concept of digital systems
as convenient and powerful entertainment tools.
The demand for video games should be viewed as part of a larger trend in entertainment
consumption. Games’ initial rise and temporary decline occurred during periods of overall
increasing demand for entertainment products. Large increases in productivity and income gains
have increased Americans’ incentives to work even more while also giving families more
discretionary income: less time, but more money. Much of this trend is due to the large-scale rise
in hours worked by U.S. women (Schor, 1991). Time has become scarcer, and Americans have
been steadily spending more and more of their income to enjoy it to the fullest (Vogel, 2001). In
1970, Americans were spending 4.3% of their incomes on recreation and entertainment, but by
1994, that figure had grown to 8.6% (The national income and product accounts of the United
States, 1929-1976, 1976; Survey of current business, 1996). It follows that consumers have been
quick to adopt digital technologies that can be enjoyed more efficiently. For example, Americans
spend a great deal of time playing card games and board games (Schiesel, 2003), but it is far
easier and faster (if more expensive) to play Risk on a computer than on a tabletop with dice.
A New Electronic Hearth: The Rise of Home Computing and Games
Throughout the 1980s, a combination of economic and technological forces moved play
away from social, communal and relatively anarchic early arcade spaces, and into the controlled
environments of the sanitized mall arcade (or “family fun center”) or into the home. The idea of
a home game machine—once confusing and new to consumers3—seemed less remarkable in a home with microprocessors embedded in everything from PCs to blenders. This acceptance can
also be viewed as part of a general transition of technology-based conveniences away from
public areas and into private ones (Cowan, 1983; Putnam, 2000).
Since their inception, video games have been harbingers of the shift from analog to
digital technology for both consumers and producers. They made major portions of a generation
comfortable and technoliterate enough to accept personal computers (Lin & Lepper, 1987; Rogers, 1985), electronic bulletin boards, desktop publishing, compact discs and the Web, and have
pushed the development of microprocessors, artificial intelligence, compression technologies,
broadband networks and display technologies (Burnham, 2001). Games functioned as stepping stones to the more complex and powerful world of home computers. Figure B shows the dual trends in adoption for home game systems and home computers. Notably, games preceded computers at every step of adoption, and have continued to be in more homes since their arrival.

3 For example, the first home game machine, the Magnavox Odyssey, had trouble with consumers in part because many incorrectly assumed it would only work on a Magnavox television set (Herman, 1997).
Figure B. Consoles and computers come to the home, 1977-2000 (% penetration).4

4 Data source: Consoles, Nintendo of America, Amusement & Music Operators Assoc., The Economist, www.icwhen.com; Computers, National Science Foundation, Roper surveys, Census Bureau, Statistical Abstracts of the United States.

The Rise of Networks

The last major trend affecting the social site of gaming is the more recent move towards networked game play. Beginning with text-based networked games called “MUDs” and proceeding to graphical versions called “MMORPGs,” online games have emerged as an important new and social game format (see Chan & Vorderer, in this volume). But although the history of these PC-based games suggests a vibrant social universe (Dibbell, 2001, 2003; Mulligan & Petrovsky, 2003; Turkle, 1995), the casual gamer is unlikely to invest the time or money to wade into them. Such games are extremely profitable, but are still a minor part of game play (Croal, 2001; Palumbo, 1998). Instead, it is the current wave of more mainstream online game adoption that has firms investing (Kirriemuir, 2002),5 and that will have social implications. This adoption is promising because it has begun to expand the game market beyond the traditionally younger, male audience that plays console games. Data from the Pew Internet and American Life Project (see Table 1) illustrate that higher percentages of racial minorities and women are playing than white men, and that surprisingly large numbers of older adults are playing as well.6
The major drivers for this phenomenon are not the games themselves, but the addition of
other players via the Internet (Griffiths, Davies, & Chappell, 2003; Kline & Arlidge, 2002). As
one online gamer said, “[meeting new people is] the most interesting aspect of the game. This
gives it a social dimension. There’s another person behind every character” (Pham, 2003, p. C1).
5 Cell phone-based games have a similar appeal, and may drive phone use (Schwartz, 1999).
6 These data were graciously supplied to the author by Senior Research Scientist John Horrigan of the Pew Internet
and American Life Project in an email.
Table 1
Online Gaming Demographics
National adult sample; respondents who answered yes to “ever play a . . .” All users: 37%.
Note. Data were collected by the Pew Internet and American Life Project in June and July, 2002.
The presence of competitors and collaborators introduces a social element that has been missing
from some gamers’ experiences since the early-1980s heyday of the arcade (Herz, 1997).
Gamers in general—but especially arcade players (Garner, 1991; Meadows, 1985;
Ofstein, 1991)—were able to enter a world based purely on talent and hard work, not social
status. The resulting social element of game play has always been one of the medium’s strongest
appeals (see Raney, Smith and Baker, in this volume). For those who felt marginalized,
unchallenged, or unable to participate in other mainstream activities, game play allowed for the
contestation of issues that were less easily dealt with in everyday life. For the awkward, the
underclass, or the socially restricted player, success at a game translated into a level of respect
and admiration previously unavailable outside of the arcade. There was no gender or status bias
in arcade competition, and the machine didn’t care if the player was popular, rich or an outcast.
As Herz put it, “It didn’t matter what you drove to the arcade. If you sucked at Asteroids, you just sucked” (Herz, 1997, p. 47). Much like on playing fields, social conventions and
interactions within an arcade were separate from those of “real life.” Inside the arcade, gamers
assumed roles separate from their outside personae and adhered to a strict set of rules governing game play, including a ban on physical aggression and a recognition of the hierarchy of skill.
Arcades were social magnets in the early 1980s, attracting a range of players to their
populist settings. An 18-year-old girl was quoted in Newsweek describing her local arcade:
“Look at all these people together—blacks, whites, Puerto Ricans, Chinese. This is probably the
one place in Boston where there are not hassles about race” (1981). Class barriers were similarly
low in the early years. Herz notes their similarity to pinball parlors where
sheltered suburban teens might actually come into contact with working-class kids, high-school
dropouts, down-and-out adults, cigarettes, and other corrupting influences, which made the place
a breeding ground for parental paranoia, if not for crime. (Herz, 1997, p. 44)
As forbidden fruit, the appeal to video gamers was apparent. Not only could people mix with others of different ages, ethnicities and classes that they were otherwise kept apart from, they could also form friendships, compete, and establish an identity. Said one player, looking
back on the era, “Sure, all my favorites were there, but it was the magic of the place at large, and
the people there that were a major draw” (Killian, 2002).
While early arcades represented a key social site for play, consoles and PC games in
homes were equally important. Mitchell (1984) studied 20 families from a range of backgrounds
to see what the impact of adding a console game machine was to family life. She found that
family and sibling interaction increased, that no detrimental trends were found in schoolwork
(there was actually a slight improvement), that none of the children or parents became more
aggressive, that boys played more than girls, that girls gained a sense of empowerment, and that
all of the families saw games as a bridge to personal computers. She further concluded that home
video games brought families together more than any other activity in recent memory, chiefly by
displacing time spent watching television (Mitchell, 1985). Murphy (1984) found that homes with video games had family interactions similar to those without them. Instead, in nearly every
case, links to deviant behavior were found to correlate with parental variables such as
supervision and pressure for achievement.
The Causes of New Media Ambivalence
Games are a contentious subject in modern American society not solely because of their
inherent qualities, but because they are a wholly new medium of communication, something
guaranteed to provoke suspicion and ambivalence. In America and elsewhere, the advent of
every major medium has been greeted with utopian dreams of democracy, but also with tales and
visions of woe and social disorder or unrest (Czitrom, 1982; Neuman, 1991). This pattern has
been consistent, dating from the telegraph (Standage, 1999) and persisting through nickelodeons (Gabler, 1999), the telephone (Fischer, 1992), newspapers (Ray, 1999), movies (Lowery & DeFleur, 1995), radio (S. Douglas, 1999), television (Schiffer, 1991), and now both video games and the Internet. Video games are simply the latest in a long
series of media to endure criticism. Typically, the actual source of the tension lies not in the new
medium, but in preexisting social issues. The tensions over new media are surprisingly
predictable, in part because the issues that drive them are enduring ones such as intra-class strife
and societal guilt.
Understanding how and why the medium is assigned blame can tell us a great deal about
the tensions and real social problems that are actually at issue. Often, focusing attention on the
medium is a convenient way of assigning blame while ignoring complex and troubling problems.
Media coverage of new technology often generates a climate in which consumers of news media
are terrified of phenomena that are unlikely to occur. Just as importantly, they are also guided
away, purposefully or not, from complicated and troubling systemic social issues (Glassner,
1999). This is not a new trend, and not particular to America. Across a wide variety of cultures,
the dangers most emphasized by a society are not the ones most likely to occur, but are instead
the ones most likely to offend basic moral sensibilities or that can be harnessed to enforce some
social norm (M. Douglas, 1992; M. Douglas & Wildavsky, 1982). Most tragically, the guilt over
mistreatment of our children can manifest itself in a painfully unjust way by casting the children
themselves as the source of the problem. Resorting to the trope of the “bad seed,” or blaming an
external force like media, provides an excuse to ignore the primary risk factors associated with
juvenile crime and violence, which are abuse from relatives, neglect, malnutrition, and above all,
poverty (Glassner, 1999).
The evidence presented so far suggests that strong social forces have shaped our
reactions to games, and the subsequent comfort level of certain groups to remain players.
This phenomenon has a direct point of origin in the early 1980s, and represents a host of
social issues not directly related to games themselves. Most prominently, arcades were
threatening to conservative forces. As seen in media coverage, arcades were mixing
grounds for homeless children and lawyers, housewives and construction workers, and
countless other socially impermissible combinations. The lashing out against arcades that
followed was, according to the research, unjustified. For example, Time reported that
children in arcades were susceptible to homosexual cruisers, prostitution and hard liquor
(Skow, 1982). It was no coincidence that the years 1981 and 1982 marked the start of the
news media’s dystopian frames of misspent youth, or fears of injury, drug use and the
like (Williams, 2003)—and that this was precisely the period of the conservative Reagan
administration’s rise to power. In seeking to throw off what it perceived as the general
social, cultural and moral malaise of the 1970s, the Reagan administration campaigned on
a platform that especially highlighted the culpability and irresponsibility of single
“welfare queen” mothers (Gilens, 1999). This political agenda led to frames about
truancy, unsupervised children and the negative influences of electronic media—
especially arcades—working as babysitters for unconscionable working mothers.
Internet Cafés: Old Wine in New Bottles
Just as with arcades, the uncontrolled space of the Internet has predictably raised
concerns about who is interacting with whom, and what morally questionable activities might be
taking place. The case of Internet cafés—a modern combination of anarchic arcade space and
private network—shows the same patterns and concerns reoccurring.
Much like early arcades, many Internet cafés are marked by the same dark lighting and
socially inclusive atmosphere (McNamara, 2003; Yee, Zavala, & Marlow, 2002). The networked
game play inside the cafés is in many ways a return to the aesthetic and values of the early
arcades: the spaces are morally questionable, challenging, anarchic, uncontrolled, racially diverse
and community oriented. Fears and reaction to the Internet cafés are also remarkably similar to
arcades, probably owing to a parallel set of concerns and punditry centering around computer
use. Table 2 illustrates that the same themes occur 20 years apart, suggesting that the same issues
of social control and parental guilt are still operating.
These current concerns may or may not be valid ones, but the history of moral panics and
public criticisms of public amusement gathering spaces—whether it is a nickelodeon, pinball
hall, or arcade—suggests that they are likely overstated and hiding other social tensions.
Table 2
Comparing Coverage of Early Arcades With Current Coverage of Internet Cafés

The site operator defends the activity:
Early arcade coverage (1981-1982): “I baby-sat a bunch of kids here all summer. It may have cost them money, but they were here, they were safe, and they didn’t get into trouble.”*
Internet café coverage: “I think that anything that helps keep kids off the street and out of trouble is a good thing . . . Here there are no cigarettes, no drugs, no alcohol. Here the kids come to be with their friends.”**

Early arcade coverage: “Taking a cue from the pool-troubled elders of the mythical River City, communities from Snellville, Ga., to Boston have recently banned arcades or restricted adolescent access.”†
Internet café coverage: “It’s not hard to imagine what Professor Harold Hill would have said upon entering the dim recesses of Cyber HQ in Eagle Rock. ‘Trouble, with a capital T and that rhymes with C and that stands for computer game.’”**

Note. *(Skow, 1982) **(McNamara, 2003) †(Langway, 1981)
Areas of Struggle: Age, Gender and Place
Games, much like other new technologies, have been a means of social control. This is
illustrated by presenting the everyday practices, the social construction and the framing of three
issues surrounding game play: age, gender and place.
Dad, put down the joystick and back away slowly: Games and Age
During the mid-1980s and the 1990s, video games were constructed as the province of
children. Today, as an all-ages phenomenon once again, they have begun to reenter the social
mainstream. Is this adoption the result of American culture’s slow and steady acceptance of
gaming technology, or simply the result of an aging user base? The evidence suggests that there
were both cohort and age effects (Glenn, 1977) at work over the last quarter-century. Today,
youths adopt game technology at the same time as many Generation X players continue to play
past adolescence. As a result, the average age of players has been rising steadily and, according
to the industry, is now 29 (Top Ten Industry Facts, 2004). One cohort effect is relatively easy to
isolate: the generations that ignored video games in the late 1970s and early 1980s have
continued to stay away. Those who played and stopped rarely returned; by 1984, Baby Boomers
had dramatically decreased their play, probably because of the powerful social messages they
were suddenly getting about the shame and deviancy of adult gaming (Williams, 2003). Another
reason may have been that the culture of games still caters primarily to adolescents, despite
adults who want more mature content (Kushner, 2001; Russo, 2001).
It should be reemphasized that the popular conception of game use as a purely child-
centric phenomenon did not emerge until well after games had entered the popular
consciousness, and home games became widespread. This is not surprising since the initial video
game boom occurred in adult spaces such as bars and nightclubs. But it was not until the late
1990s that this frame finally began to dissipate, perhaps because such reporting had become so at
odds with actual use. For example, Roper data showed that adult home game play was at 43%
during 1993, the same year Time reported that grownups “don’t get it” (Elmer-DeWitt, 1993).
But they did “get it,” and in the 1990s, adults were seemingly able to come out of the video
games closet. Much of this stems from the social cachet (and disposable income) that Generation X members gained upon entering independent adulthood. This trend was also likely reinforced
by a transition within news magazines to younger writers for the video games beat.
Games and Gender

The research shows a clear gender gap in video game play, but one that has only been measured for adolescents. Nearly every academic study and survey of the social impact of
games, regardless of its focus, has noted that males play more often than females (Buchman &
Funk, 1996; Dominick, 1984; Griffiths, 1997; Michaels, 1993; Phillips, Rolls, Rouse, &
Griffiths, 1995). Some of the gender preferences may be the results of socialization and parental
influence (Scantlin, 1999). Parents may have been discouraging girls at the same time they were
encouraging boys to play. For example, Ellis (1984) found that parents exerted far more control
over their daughters’ ability to go to arcades than their sons’.
Why should this be? The explanation involves the gendering of technology. For males,
technology has long been an empowering and masculine pursuit that hearkens back to the
wunderkind tinkerers of the previous century. The heroic boy inventor image was first made
fashionable through the carefully managed and promoted exploits of Thomas Edison and then
Guglielmo Marconi (S. Douglas, 1987). Since then, technology has remained a socially
acceptable pursuit for boys, and one that may offer them a sense of identity and empowerment
that they are not getting elsewhere (Chodorow, 1994; Rubin, 1983). One theory maintains that
boys are driven to technology in large part because it helps them develop their self-identity at a
time when, unlike girls, they are being forced into independence (Chodorow, 1994). Male tastes
are privileged through content as well, reinforcing the choice.
This explanation fits the experience of males, but it does not fully explain why so few females pursue technological interests, leaving women on the sidelines of science and technology. For example, women are dramatically underrepresented as engineers and scientists,
despite outperforming men in science and math in high school (Seymour, 1995). The percentage
of female engineering Ph.D.’s who graduated in 1999 was an all-time high of only 15%.7 If there
are fewer women in technology, it must be for one of two reasons: one, women are not capable of or naturally interested in technology; or two, women have been systematically socialized away
from technology. Despite media framing, there is no evidence to suggest that biology plays a
role. There is, however, ample evidence pointing to a social construction of science as a male
pursuit (Jansen, 1989).8 Flanagan argues that this is a direct result of the threat that female
empowerment through technology poses to male power; women who use technology are not only
less dependent on men but less monitored and controllable (Flanagan, 1999). The world of video
games is a direct extension of this power relationship (McQuivey, 2001). Female characters,
when they do appear, tend to be objects rather than protagonists, resulting in generally negative
gender stereotypes (J. Funk, 2001; Gailey, 1993; Knowlee et al., 2001). Additionally, a male,
heterosexual viewpoint is assumed, with most characters playing the role of the strong, assertive
man seeking glory through violence with the reward of female companionship (Consalvo, 2003).
But while women experience frustration with their inability to identify with in-game characters,
male designers are largely unaware of the problem (K. Wright, 2002).
The effects of such social constructions are very real: the connection between video game play and later technological interest has become a gender issue in early adolescence, and persists throughout the lifespan. Females are socialized away from game play, creating a self-fulfilling prophecy for technology use: girls who do not play become women who do not use computing technology (Cassell & Jenkins, 1999; Gilmore, 1999), and certainly do not aspire to make games. In my interviews with game makers over two years, I spoke with almost no women. It is no surprise, then, that an industry-wide masculine culture has developed in which a male point of view is nearly the only point of view. Despite the untapped sales potential of the female audience, this culture is unlikely to undergo any sea change in the near future so long as men dominate the ranks of game makers.

7 Data are from the National Science Foundation’s Survey of Earned Doctorates Summary report, 1999.

8 A recent study of implicit attitudes found that both women and men see science as a male domain (O'Connell,
Place, the Final Frontier
In addition to the powerful social forces that have moderated gamers’ behavior and
access to the technology, changes in both technology and space have impacted play. As Spigel
(1992) has shown, the introduction of a new technology or appliance into the home can have a
tremendous impact on social relations within families and communities. Writing from a more
community-based perspective, Putnam has argued that electronic media harm local conversation and sociability (Putnam, 2000). Putnam has suggested that video games are yet another media technology that further atomizes communities by drawing individuals out of public spaces. But in this case, Putnam’s line of analysis misses the actual
sequence of events, and presumes incorrectly that game play, regardless of location, is isolating.
The diversity of early arcade play had been drastically reduced by the mid 1980s (Herz,
1997), when games were played primarily in homes (J. B. Funk, 1993; Kubey & Larson, 1990).
For play to be social, a group had to gather around a television set. Evidence suggests that in the
mid-1980s, home play hit a low point for sociability (Murphy, 1984). The correlation for
sociability and home console play was still positive, but was not as large as for arcade play (Lin & Lepper, 1987). One reason for this temporary drop was that the earliest home games usually
only allowed for one or two players, as compared to the four-player consoles that became
popular in the early 1990s. Once more games and console systems were made to satisfy the
demand for more players, the trend reversed. By 1995, researchers were finding that play was
highly social again (Phillips et al., 1995).
Some of the move toward the home was precipitated by advances in technology, and
some by changes in the home itself. Over thirty years, technology has lowered the cost of
processing and storage to the point where home game units are comparable to arcade units;
convenience has moved games into homes. But other less obvious forces have kept game
technology moving into more isolated spaces within homes. From 1970 to 2000, the average U.S.
home size rose from 1,500 square feet to 2,200 square feet, but this space became more
subdivided than ever before (O'Briant, 2001). Ten percent more homes had four or more
bedrooms than in 1970, even though Americans are having fewer children ("In census data, a room-by-room picture of the American home," 2003). Consequently, there is less shared space
within homes and more customized, private space for individuals. More than half of all U.S.
children have a video game player in their bedroom (Roberts, 2000; Sherman, 1996). In much
the same way that Putnam described televisions moving people off of communal stoops and into
houses, games and computers have been moving people out of living rooms and into bedrooms
and home offices.
Games, along with other mass media, may have separated families within their own houses, causing less intergenerational contact, while at the same time opening up access to new
social contacts of all types via networked console systems and PCs. The result is a mix of
countervailing social forces—less time with known people and more exposure to new people
from a broader range of backgrounds. Whether or not this virtual networking is qualitatively
better or worse for social networks than in-person game play is an issue that has received little
attention. However, despite the physical separation of game players, the desire to play together
has remained constant.
If the social history of video games can teach us anything, it is that humans will use
games to connect with each other, that technology changes the means (and thus the quality) of
those connections, and that this will all generate concern. These conclusions have implications
for researchers, both in how we should study gaming and in how we should consider gamers.
The Academic Agenda, and a Suggestion
The political climate and news media coverage have had a direct and dramatic effect on
the gaming research agenda. The most prominent figure in the U.S. health care system, Surgeon
General C. Everett Koop, was widely cited when he claimed in 1982 that video games were
hazardous to the health of young people, created aberrant behavior, increased tension and a
disposition for violence (Lin & Lepper, 1987). Although there was no science to back this
assertion, researchers understandably went looking for it. In reviewing the resulting literature 10
years later, Funk concluded, “Despite initial concern, current research suggests that even frequent
video game playing bears no significant relationship to the development of true
psychopathology. For example, researchers have failed to identify expected increases in
withdrawal and social isolation in frequent game players” (Funk, 1992, pp. 53-54).
The effects work on violent games and aggression has similar origins. However, simply
because the motivations for the research have their roots in sociopolitical fears does not mean
that the research must necessarily be invalid. Looking for effects is certainly a worthwhile
activity. However, where the research can be found lacking is in its failure to incorporate social
variables. The typical experiment brings subjects into a laboratory and has them play alone.
Without the social context in place, it is not clear what such studies are
capturing. Sherry suggests that the dominant format of laboratory studies of players playing
alone against a computer may be testing for an effect that does not occur normally (Sherry, 2003;
Sherry & Lucas, 2003). The long history of social play lends weight to such criticisms.
Again, this is not to say that effects will not be found. Instead, it is to suggest that social
variables have been ignored in the models. In fact, some social variables may well cause stronger
effects. Others may moderate or reverse them. Although many of the leading researchers on
media violence have recently noted this omission (C. Anderson et al., 2003), no work has
included social variables. Until experimentalists incorporate the actual circumstances of play,
they will be open to criticisms of external validity. The solution does not involve creating new
theories. The social learning approach used by most effects researchers is based on observational
modeling and is in fact highly applicable and adaptable to social settings. The problem is that the
settings and the social actors who might be modeled have been excluded.
In the fast lanes of the Information Superhighway, the speculations and initial research on
gaming and online community have begun. Despite Putnam’s ominous warnings about the
Internet (2000), some researchers (Howard, Rainie, & Jones, 2001; Rheingold, 1993; Wellman &
Gulia, 1999) have borrowed from Habermas (1998), Anderson (1991) and Oldenburg (1997) to
explore the civic and social utopias that might be created in online spaces, including games.
Others (Nie, 2001; Nie & Erbring, 2002; Nie & Hillygus, 2002) have argued that far darker
outcomes are likely, especially in games (Ankney, 2002). This utopian/dystopian research
agenda will be the backdrop as we enter an era of virtual gaming communities. Will these
networked games help us cross social boundaries and create new relationships, or take us even
farther away from our dwindling civic structures? While empirical evidence remains scant, some
may wonder where the game player is in the discussion.
Players Aren’t Passive
Games research is the child of mainstream U.S. social science communication research.
As such, it is not particularly surprising that attention has remained focused on what games do to
people rather than what people do with games. One problem with this preference is that the
games-playing audience is plainly an active one. Some researchers have argued that this activity
will make effects stronger (C. Anderson & Bushman, 2001). That may turn out to be true.
Nevertheless, digital interactive media have made audience agency obvious to even the casual
observer. These media have destabilized the assumption of an inactive or gullible consumer
by rudely introducing an inescapably active—or interactive—component.
Power, agency and control have spread both upstream to the producer and downstream to
the consumer (Neuman, 1991). It is difficult to suggest that Internet users do not have a high
level of choice, agency and activity. Likewise, video games are plainly an active medium. The
starting point for video game research should be a theoretical framework that allows for active
users in real social contexts. But when game players go so far as to actively participate in the
creation of the content, we must consider them anew. For example, more than one-quarter of
EverQuest players say they have made some original artwork or fiction based on the game
(Griffiths et al., 2003). Counter-Strike players create wholly new forms of self-expression within
their game (T. Wright, Boria, & Breidenbach, 2002). “Modders” take the content creation tools
given to them by the manufacturers and create new worlds and objectives consistent with their
own preferences, not the producers’ (Katz & Rice, 2002; "PC Gamer "It" list," 2003). Researchers
considering direct effects or limited effects models will have to come to grips with a population
that takes a vigorous role in the practice and creation of their medium, not simply its
consumption.
As Stephenson (1967) noted long ago,
Social scientists have been busy, since the dawn of mass communication research, trying to
prove that the mass media have been sinful where they should have been good. The media
have been looked at through the eyes of morality when, instead, what was required was a
fresh glance at people existing in their own right for the first time. (p. 45)
The social history of video games makes plain that we should consider not only the active
ways that gamers participate in the medium, but the long tradition of the way they play together.