Communication Theory ISSN 1050-3293
ORIGINAL ARTICLE
Slipping Racism into the Mainstream:
A Theory of Information Laundering
Adam Klein
Department of Humanities, Communication Studies and Public Relations, Mercy College, Dobbs Ferry,
NY, USA
Many studies in recent years have addressed the notable ways that Internet features such
as blogs and search engines have democratized the community of information seekers and
providers; however, fewer investigations have addressed the darker element that has emerged
from that same democratic sphere: the huge resurgence and transformation of
racist communities across cyberspace. This article presents a new theory of information
laundering to explain the process by which racial hate speech is becoming legitimized
through a borrowed network of online associations. This Internet-specific theory builds
upon research of "information-based" racist propaganda to explain how today's search
engines, social networks, and political blogs unwittingly enable purveyors of bigotry to
infiltrate mainstream spaces of public discourse.
doi:10.1111/j.1468-2885.2012.01415.x
Today, hate group activity is more present in society than it has been in decades. In
early 2011, the Southern Poverty Law Center (SPLC), one of the leading watchdogs
of hate activity in the United States, reported a staggering spike in the number of
organized hate groups across the country. What were once 602 reported cases of active
hate organizations, factions, and chapters across the US in 2000 had risen to 1,002
groups just a decade later, a 66% increase in activity (U.S. Hate Groups Top, 2011).
Just one day after that report was released, the Simon Wiesenthal Center, a group
which tracks racial extremism worldwide, unveiled its annual Digital Terrorism &
Hate Report that documented over 14,000 hate sites, blogs, and social networks
now operating across the web ("2011 Digital," 2011). Over the past decade, hate
organizations have steadily relocated their central bases into the decentralized network
of cyberspace, from hardcore skinhead gangs to the neo-Nazi party faithful. In this new
virtual reality, Klan hoods have been replaced by a much thicker cloth of anonymity,
and the book-burning rallies of yesterday have become today’s white power music
downloads, racist chat rooms, and picture galleries of hate-based youth culture.
As with many social movements that have gradually gone digital over the last
15 years, media scholars have begun to study the connection between an increase
Corresponding author: Adam Klein; e-mail: adgklein@gmail.com (A.K.)
Communication Theory 22 (2012) 427–448 © 2012 International Communication Association
Theory of Information Laundering A. Klein
in online presence and the growth in real-world participation within hate-group
communities (Berlet & Vysotsky, 2006; Duffy, 2003; Schafer, 2002; Weatherby &
Scroggins, 2006). The underlying question of "why" this noticeable spike in hate-group
activity is emerging seems to be answered, at least in part, by the simultaneous
expansion of these groups' online communities, ranging from outright racist to
politically violent websites, and that connection has been affirmed by many
of the hate groups themselves (Simi & Futrell, 2006). Reno Wolf, founder of the
National Association for the Advancement of White People (NAAWP), proclaimed,
"We get a lot of members off the Internet ... In fact, we figured out that in the
last couple of months, about 12 percent of those who visit our website really follow
through and join the organization" (Swain & Nieli, 2003, p. 124). Of course, as
Conant (2009) observed, "It's hard to conduct accurate surveys of racists, who tend
to exaggerate their strength and importance" (p. 30). What cannot be overlooked,
however, is the sheer number of hate websites, which have increased by the thousands
in the last 5 years, along with the traffic flowing to these online communities;
a revitalized and highly vocal hate movement is nonetheless on the rise
in the United States.
But beyond the growing community, the World Wide Web has brought its
own unique properties of communication into the information age, such as search
engines, blogs, and social networks. Just as these properties have affected not only
the flow, but also, subsequently, the form of traditional media content (i.e., journalism),
so too have they begun to reshape the appearance and profile of hate
speech in cyberspace. When thought of as a rhetorical strategy, hate speech should
be understood as the tactical employment of words, images, and symbols, as well
as links, downloads, news threads, conspiracy theories, politics, and even pop culture,
all of which have become the complex machinery of effective inflammatory
rhetoric. Whillock (1995) observed how, "[r]ather than seeking to win adherence
through superior reasoning, hate speech seeks to move an audience by creating
a symbolic code for violence" (p. 32). She called these rhetorical codes "hate
appeals," in which preexisting cultural and historical stereotypes are tapped
through mainstream vehicles such as information, politics, and even humor
(Billig, 2001). The effect of this rhetorical strategy is to denigrate a designated
"out-group" in a way that appeals to the majority's "in-group" mentalities (Allport,
1954; Bosmajian, 1983), thereby advancing the second goal: to recruit a
following.
Today, it appears that the goal of recruitment has been modernized through a less
obvious but perhaps more effective means of communication: cyberspace.
Online, the languages of bigotry, such as those expressed by white supremacist,
anti-Semitic, or anti-immigrant groups, have begun to converge in a mutually
beneficial relationship with other elements that share similar themes of race, ethnicity,
and nationality, such that the lines that once separated racism from political
extremism are harder to distinguish. In 2009, the SPLC reported that "Militiamen,
white supremacists, anti-Semites, nativists, tax protesters and a range of other activists
of the radical right are cross-pollinating and may even be coalescing" in a political
climate that "has many authorities worried" (Keller, para. 8).
Research on the hateful rhetoric of racist movements has traditionally focused on
those toxic yet effective messages of cultural intolerance, racial superiority, or fear in
a given society (Breton, Galeotti, Salmon, & Wintrobe, 2002; Daniels, 1997; Feagin,
2006; George, 1959; Tsesis, 2002). These regenerative themes are often expressed in
the overt language of bigotry and political extremism, communicated through vehicles
ranging from underground newspapers and talk radio to more present-day forms
like "white power" rock music or antigay signs displayed outside the
funerals of fallen soldiers. However, many of today's racist organizations have
exchanged these genres of strident hate for the ubiquitous Internet currencies of
online search terms, news and research websites, blogging, and social networks. This
article focuses on the means by which marginal hate speech has steadily become
part of the mainstream World Wide Web.
As a contribution to existing studies on the forms of hateful rhetoric (Cohen,
2003; Dobratz, 2001; Ezekial, 1995; Loeb, 1999), this research aims to move beyond
the examination of content to pose a new theory of process, called information
laundering. This process, first conceptualized in Klein's (2010) study of 26 white
supremacist websites, is an Internet-specific theory.¹ Information laundering illustrates
how the Internet's unique properties allow subversive social movements not
only to grow globally, but also to quietly legitimize their causes through a borrowed
network of associations. This article will examine, first, the reasons why the Internet
has created such ideal conditions for allowing racist movements to transform, conceal,
and seamlessly merge their agendas into the popular domains of online culture.
Then, a deeper investigation into four crucial domains of the web (search engines,
social networks, news sites, and the blogosphere) will demonstrate how traditional
hateful rhetoric successfully converges with mainstream communities in cyberspace.
A web of possibilities
The Internet presents an ideal counterculture environment, in which alternative
forms of information, specialized media interests, and marginalized communities
have flourished because of the endless and unrestricted digital space that allows
for these more narrow genres and groups to thrive. Prior to the Internet, fringe
groups, in particular, had to rely upon the relatively inexpensive and often ineffective
currencies of mass media (tabloids, pamphlets, self-published books, and local
radio) to disseminate their ideologies. However, for racist subcultures, the public,
niche-driven communities and anonymous nature of the World Wide Web
presented a unique opportunity to relocate their movements out of society's spotlight
and into the unguarded, anything-goes atmosphere of cyberspace. The
converging media space has afforded racist organizations, which had been transparent
within traditional media outlets, the ability to reintroduce and redefine themselves
as equal residing members of the interconnected digital culture.
Cyberspace presents a rather exclusive opportunity for hate movements' niche
communities to tap into those intersecting online interests of politics,
current events, opinions, and, most of all, information. If we pause to recall a time
not so long ago when trusted information was equated with tattered books
on library shelves and scholarly journals, then it would seem strange that new
media could alter that valued system so abruptly. But in less than a decade, the
Internet had become "the most important source of information" for more than
70% of Americans, ranking higher than books, newspapers, television, and radio,
according to UCLA's 2003 Internet Report (UCLA Internet Project, 2003). Shenk
(1999) characterized online information as a major contributing factor to what he
called society's growing problem of "data smog." He noted, "Information overload
has surely been accelerated and highlighted by the popularization of the Internet"
(p. 26). But an even greater problem than the sheer quantity of data is the nature
and quality of what constitutes public information in cyberspace today, and it is that
regression of factual content that has, more than any other factor, paved the way for
hate groups' ascendancy onto the mainstream information superhighway.
Online, informational content can include everything from news sites with
headlines that change by the hour, to virtual encyclopedias, e-books, and a wealth
of scholarly databases and libraries. But it also includes opinion blogs, political
news threads, video uploads and music downloads, wikis and tweets. Adding to the
never-ending mixture of data, opinion, and popular culture is an amalgam of bad
information as well. The parameters of what is considered "trusted information"
have widened in the virtual world, primarily because the drivers of that content are
an anonymous and unrestricted public far less scrupulous about the kinds of
facts they publish. One might argue that, whatever the false perceptions
of trusted information in cyberspace, true knowledge is what
really matters in any medium. But for hate groups especially, perception is reality.
Despite the Internet's lack of gatekeepers, its questionable fusion of fact and
opinion, and the open boundaries that define "information" there, people continue
to go online to seek out new knowledge. For the racist organization, the general
perception of an information superhighway, something they can belong to, signifies
a rich opportunity to finally plug their movements into a mainstream circuit. As Don
Black, founder of the largest White Power website, Stormfront.org, recalls,
It was with the exponential growth of the Internet ...that we first had the
opportunity to reach potentially millions with our point of view. These are
people who for the most part have never attended one of our meetings or have
never subscribed to any of our publications. We were for the first time able to
reach a broad audience. (Swain & Nieli, 2003, p. 155).
Turning intolerance into information
The transitional journey, from traditional to new media, does not merely reflect a flow
of followers from one venue into the next, but much more so a transformative process
that is changing the perception of racism itself through the Internet. In its newly
fashioned form, traditional cultural extremism has taken on a variety of intellectual
façades, from Holocaust denial "research" organizations (anti-Semitic groups) to
"traditional family" advocacies (antigay/lesbian campaigns), and other mainstream
profiles. Online, websites like Stormfront and the Vanguard News Network have
sought to coalesce their racist agendas with mainstream politics, primarily of the
far right, borrowing from hot-button issues like the affirmative action debate, the
anti-immigration movement, or questions of the first African-American President's
"true" birthplace.
successfully tap into the new wave of online politics, blogs, search engines, and
social networks, in order to build the greater illusion of legitimacy and conventional
support for their causes.
Information laundering defined
The process by which hate groups are steadily building a cyber presence and online
network is similar to that of other social movements whose common goals are audience,
attention, and action, each of which has proven to be exponentially more achievable
through the global interconnectivity of the web. And yet, while racist groups like
the National Alliance or the New Black Panthers do seek a greater following and
notoriety, the Internet can also deliver these radical right and left organizations
something else: the perception of legitimacy. Today, those earlier themes of
intolerance, superiority, and fear have moved into the subtle background of what are
now "community websites" on their face, seemingly part of an educational, political,
or social movement just like any other. In this strategy, hate speech is replaced with
new themes of scientific information, political opinion, and social interaction, as well
as news reporting, some even borrowed from legitimate sources. But creating the
"user-friendly" website that looks and acts just like a Craigslist (i.e., Stormfront.org)
or a Wikipedia (i.e., Metapedia.com) is only one small part of the process.
The concept of information laundering explains how the formats and constructs
of cyberspace can act much like a system of money laundering. Typical money
laundering is defined as a "process by which criminals conceal or disguise the
proceeds of their crimes or convert those proceeds into goods and services. It
allows criminals to infuse their illegal money into the stream of commerce, thus
corrupting financial institutions" (Federal Bureau of Investigation, 2009). Just as
there are countless ways for criminals to launder their illegal funds into financial
institutions, the Internet has provided a limitless platform that enables hate groups
to disguise and convert their form of illegitimate currency, hate-based information,
into what is rapidly becoming acceptable web-based knowledge, thus washed
virtually "clean" by the system. Through the Internet, hate groups are entering
mainstream culture by attaining legitimacy from the established media currencies
of the cyber community, primarily search engines and interlinking social networks.
These conventional pathways can unwittingly lead an online information seeker to
extremist content that, as this article will show, has already been designed to appear
to them as educational, political, scientific, and even spiritual in nature.
More importantly, however, this network can also allow content from these
websites to travel outside of their domain, to merge with mainstream spaces like
YouTube, Facebook, political blogs, and even occasionally news cycles. Such was
recently the case when, in 2010, mass e-mails sent by New York GOP gubernatorial
candidate Carl Paladino reached the mainstream media, revealing, among other
racist content, a video depicting President Barack Obama as an African tribesman
beating a set of bongos in "his" native garb. In fact, this video, called "Obongo,"
actually originated on the white supremacist video-sharing website Podblanc.com
in 2008. But even more significant than any one specific example like the Paladino
incident, which erupted into a national news story (a small public relations victory
for the White Power community), is the larger effect that the proliferation of online
hate speech is having on public discourse today.
In her work on cyber racism, Daniels (2009) writes, "The least recognized – and,
hence, most insidious – threat posed by white supremacy online is the epistemological
menace to our accumulation and production of knowledge about race, racism, and
civil rights in the digital era" (p. 8). In other words, these online communities intend
to shape the way that white society thinks about "nonwhite" society. Their
goal is to digitally introduce a seemingly legitimate campaign of "racial enlightenment" into
the mainstream discourse of the web – to corrupt it. In this objective, the Internet
becomes the key to activating the process of information laundering, turning hateful
rhetoric into public knowledge. In the late-1990s, when the first hate websites were
suddenly emerging online, research in this area would have been both premature and
highly inconclusive. Today, however, we can more clearly see what the Internet has
done to transform the meaning of information, and what that, in turn, has done for
hate speech.
Theoretical groundwork
In their research on "White Supremacists, Oppositional Culture and the World Wide
Web," Adams and Roscigno (2005) speculated that the "growth of hate-oriented
websites likely stems from a variety of conditions" (p. 763). Chief among them,
they cite the Internet as being a "relatively cheap and efficient tool for disseminating
organizational information/propaganda to a mass audience," as well as providing a
"free space" with limited media and political constraint.
However, even more than the Internet’s inexpensive and unregulated real estate,
into which intolerant societies have migrated and flourished, there is another essential
element that this new media space provides: access into a network of information.
While purveyors of hate speech and propaganda are typically associated with
extremist sentiments, incendiary language, and outright fabrications, researchers
have increasingly found that information has become their chosen vernacular.
Johnson-Cartee and Copeland (2004) write, "Contrary to what
many believe, propagandists frequently provide reliable and accurate information to
their intended audience" (p. 155).
This research aims to expand upon their work and other existing theories of racist
propaganda by offering the possibility of another factor that has helped to solidify
a permanent place for these fringe elements in cyberspace. That is the legitimizing
factor of an interconnected information superhighway of web directories, research
engines, news outlets, and social networks that collectively funnel into and out of
today's hate websites. For information seekers, the result of this funneling process is
a wider array of perspectives, and thus a broader understanding of any given topic.
However, for propaganda providers like religious hate groups, the same process
inadvertently lends the credibility and reputation of authentic websites to the
illegitimate few to which they are nonetheless connected. Such is the case with many
of today's leading search engines, like Google, which unwittingly filter directly into
hate websites, or video-sharing networks like YouTube, which host their venomous
content every day. This theoretical process, which we call information laundering, is unique
to cyberspace because cyberspace provides the ideal media environment through which false
information and counterfeit political movements can be washed clean by a system of
advantageous associations.
However, before we break this theory down by its various processes and specific
examples, it is important to examine two theoretical building blocks of information
laundering. These concepts have helped to explain how propaganda and false
information are sometimes overlooked, and even authenticated, by the most educated
of minds and reputable gatekeepers. The theories of white propaganda (Jowett &
O’Donnell, 1999) and academic/technical ethos (Borrowman, 1999) follow a long line
of scholarly pursuits in the field of propaganda, some of which we have already
discussed. But these two contributions, in particular, lend a current perspective to
the modified nature of propaganda in the information age.
Jowett and O'Donnell define propaganda as a highly functional communicative
device that is "associated with control and is regarded as a deliberate attempt to
alter or maintain a balance of power that is advantageous to the propagandist" (p.
15). Jowett and O'Donnell's "Model of Propaganda" demonstrates the separation
of information and persuasion according to purpose. This model illustrates a split
in the communication process between these two forces, with propaganda existing
somewhere in between, often by design, and thus going unnoticed by the
receiver. Jowett and O'Donnell distinguish three forms of propaganda: "white,"
"gray," and "black" (ranging from selective facts to outright fabrications). This
research is concerned specifically with the nature of "white" propaganda,
which deliberately blurs the line between persuasion and information. The
result of this distortion is a produced message that appears "reasonably close to the
truth ... presented in a manner that attempts to convince the audience that the
sender is a 'good guy' with the best ideas and political ideology" (p. 16).
While "gray" propaganda delves deeper into more dishonest practices like false
advertising and statistics tampering, "black" propaganda is the most recognized form,
expressed in amplified public deceits like those of the Nazi era. However, in today's
media-savvy society, "white" propaganda might be the most effective of the three
because of its ability to penetrate mainstream issues. As noted, racist movements
have moved away from recognizable hate symbols and toward day-to-day
politics and public affairs to communicate their agendas. News and politics
surrounding issues like immigration and affirmative action, in particular, provide
racial extremists with ideal fodder for their ongoing narratives of distrust, anger,
and fear of nonwhite citizens. Squires (2007) explains, "Whiteness is ... depicted as
an identity under siege in affirmative action discourse" (p. 80). Online, themes like
these have been carefully sewn into forums and blogs that are intended to read just
like any daily news feed or online editorial. Considered separately, many of
these "white" propaganda news articles do bear elements of accurate information.
Viewed collectively, however, it becomes clear that these reports are really part
of a larger mechanism that continually feeds one race-based news story after another
in order to produce a distinctly racist point of view.
Like "white" propaganda, Borrowman's concept of an academic and techno-ethos
also explains the manipulation of information through the media, but
more specifically it considers the credibility-building qualities of this process
in cyberspace. In his study on the educational pitfalls of cyberspace, Borrowman
considers the example of students who use the Internet to research the Holocaust.
He observes how, through an open network, students could be led directly to
Holocaust denial websites that are structured to appear as reputable research
centers, with professional titles, university affiliations, links to published literature,
and academic-sounding mission statements. He explains:
When academic ethos is at work, a reader is convinced that the writer is a
rational, reasonable, intelligent individual who is engaging in an honest
dialogue ... readers are led to believe that a writer is being ethical and fair in the
construction of his or her argument. For Holocaust deniers the construction of
such an ethos is enormously important (p. 7).
Beyond the contextual methods for constructing the appearance of "an expert in
a given field," Borrowman asserts that a new techno-ethos has evolved in cyberspace.
He contends, "techno-ethos is the credibility or authority that is constructed online
in the programming proficiency demonstrated in a flashy Web site." Today, this can
be achieved with relative ease through either technical know-how or simply hiring
a qualified website designer. The fruits of this labor can be extremely effective with
the net generation, whose critical thinking skills Borrowman fears have been partially
replaced by what he calls "critical surfing." In other words, young researchers have
become accustomed to giving credence to websites simply on the basis of their
professional designs, visual appeal, sophisticated options, and the volume of their visitors.
With so many young adults taking these features as the mark of credible
information, it should come as little surprise that so many hate groups are lining up
to acquire the best website designers that money can buy.
A theoretical model of information laundering
Information laundering involves not only hate organizations' attention to the
messaging and formats of their websites, but also, especially, their shrewd insinuation
into the mainstream pathways and stopping points by which surfers will eventually
make their way to these sites. Academic and techno-ethos can only transform the
perceived character of today's racist organizations online. These qualities, along
with "white" propaganda tactics, are critical elements of the theory of information
laundering, but only to the extent that hate groups have learned to employ them
inside their own websites for the information-seeking visitor. The other half of the
formula exists well beyond the homepage, in the pathway that led to these illegitimate
spaces. This all-important external factor, the "information superhighway,"
was not created by hate movements, but it has nevertheless become their most crucial
accomplice in recruitment.
In cyberspace, the pathways to false knowledge and propaganda are the same
as those that lead to legitimate and credible resources. It is as if beneficiaries like
the White Power movement have slid into a new Dewey Decimal System and
contaminated it, but few have noticed their presence there. Today, racist websites
have become conveniently integrated and interconnected into the central currencies
of information, politics, and popular culture (see Figure 1). This research divides
these currencies into four major categories: search engines (discovery), research and
news (information), political blogs (opinion), and social networks (expression).
The following section aims to illustrate the central processes of information
laundering by providing explanations and examples from each of the four currencies
and the representative websites that utilize them. Those websites – Stormfront, The
Institute for Historical Review, The National Socialist Movement, American Renaissance,
The Charles Darwin Research Institute, and Metapedia – are next-generation
sites that best represent the qualities and characteristics of information laundering.
In terms of their size and significance, Stormfront and American Renaissance
each receive considerably more daily visitors to their websites than their watchdog
counterparts' sites, like NAACP.org, ADL.org, and GLAAD.org (Alexa, 2011).
[Figure 1 depicts the model of information laundering: hate speech (extremism) flows through racist websites and the surrounding currencies of cyberspace – search engines, news and research, political blogs, and social networks – and emerges as public knowledge (mainstream).]
Figure 1 Model of information laundering.
Search engines
Like most activity in cyberspace, the process of laundering hate speech into information
begins at the primary entrance point for most online research and day-to-day
inquiries: the search engine. A typical search engine, like Google, Yahoo, Bing, or
Ask, ranks the most relevant websites upfront in its results pages, usually
based upon the "popularity" and "freshness" of those sites (Lewandowski, 2008).
The more popular the site, the more likely it has mass appeal, and therefore an
assumed relevance to a large group of users. At the same time, certain websites might
contain the most recent and, again, relevant content, regardless of their prominence.
Such sites are considered "fresh" and can quickly be allocated a higher ranking
in the results pages. Other factors that determine search engine relevancy include a
website's location with regard to the user's server, the breadth of the query itself (the
scope of information sought), existing business agreements between search engines
and websites, and other emerging factors. These very same factors can also work to
the benefit of websites that are less than reputable when it comes to the information
they provide, but that are nonetheless popular among a large group of users, current
in terms of the content they offer on a specific topic (i.e., race), or simply close in
namesake to the search terms entered by the user (e.g., "American Renaissance,"
the white nationalist website).
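The ranking logic described above can be sketched in a few lines of code. The following toy scorer is a hypothetical illustration only, not the algorithm of any real search engine (whose ranking systems are proprietary and vastly more complex); the site names and weights are invented. It shows how a ranker that weighs only popularity and freshness, with no signal for a site's credibility, can place a fresh but disreputable site above a stale but reputable one.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    popularity: float  # share of links/visits, normalized to 0-1
    freshness: float   # recency of content updates, normalized to 0-1
    reputable: bool    # editorial credibility -- invisible to the ranker

def relevance(site: Site, w_pop: float = 0.6, w_fresh: float = 0.4) -> float:
    """Score a site on popularity and freshness only; the ranker has
    no input for the accuracy or intent of the site's content."""
    return w_pop * site.popularity + w_fresh * site.freshness

sites = [
    Site("stale-archive.example", popularity=0.5, freshness=0.1, reputable=True),
    Site("fresh-fringe.example", popularity=0.6, freshness=0.9, reputable=False),
]

# Rank as a naive engine would: highest combined score first.
ranked = sorted(sites, key=relevance, reverse=True)
```

Here the disreputable site scores 0.72 against the reputable site's 0.34 and is listed first, mirroring how popularity- and freshness-driven ranking can surface "fresh" extremist content above established resources.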
The web information company Alexa.com, which tracks the ‘‘traffic metrics’’ of
activity in cyberspace, also details the ‘‘clickstream’’ for every given website noting the
immediate pathway that led a user into that particular site (Alexa, 2011). While the
leading hate sites examined in this study are frequently reached via other racist
websites, Alexa has consistently cited Google as their number one preceding page,
followed closely by Yahoo. Let us briefly consider an illustrative experiment in the
process of information laundering. As a preliminary test, this research entered four
keywords into four leading search engines. The search terms ‘‘Holocaust,’’ ‘‘Islam,’’
‘‘White people,’’ and ‘‘Black people’’ were entered into the Google, Yahoo, Bing, and
Ask.com search engines, and only the first three pages of results were reviewed. From
this basic observational approach, all four search engines yielded at least two results
that were sponsored by white supremacist websites or racist inscribers.
On the first page of results, Google listed a link to a webpage about black people,
which immediately led to another page containing ugly racial slurs and hateful jokes.
On their second page of results for ‘‘White people,’’ the search engine giant had a
direct listing for a National Socialist website that contained inner links to Holocaust
revisionist (denial) information. Both Bing and Yahoo offered listings on their pages
for anti-Islamic websites, which denigrated both the religion and the Muslim people.
The Ask.com engine listed the ‘‘Holocaust Hoax’’ website among its otherwise
legitimate set of search results (historical, commemorative, and biographical websites
about the Holocaust), thereby giving credence to an illegitimate page aimed solely
at debunking the murder of millions. Other search results included direct links to
the White Revolution homepage, a separatist organization, and several racist joke
websites. It is important to note that within many of these initial hits, one finds links
to other racist websites even more virulent than the first, thereby threading together
in just one or two moves the radical fringe elements of cyberspace to a mainstream
search engine.
Currently, the creators and contributors of racist websites remain virtually free
to publish and propagate whatever material they choose. If thought of as a giant
bookstore where both products and ideas are sold, the Internet really has no discretion
over what items will stock its shelves. All one needs in this day and age to publish
an idea or ignite a cause is a website, an Internet Service Provider (ISP), and some
level of recognition by a major search engine to disseminate that content. Most of
the time ISPs and search engines are like storage facilities, unaware of the content
they host. And without incentive to do so, why would they? For ISPs that specialize
in web hosting, there is no legal ground that would deem them a ‘‘publisher’’ should
one of their websites or blogs publish something illegal. In fact, according to Shyles
(2003), it is in the best interest of an ISP to ‘‘avoid exercising any kind of editorial
control or parental screening in order to avoid liability as publishers'' (p. 343). For
the hate website, this is good news because it allows any hate-based organization
to post virtually any manner of content on the web with ease.
News and research
The research and news pathway often represents the second stop for information-
seekers on the web. Depending upon the nature of the investigation, and the
investigator, a news source like CNN.com can create an important buffer between
an informative tool and the hate website it references. On the other hand, some
research content can be as random as search engine results. Public encyclopedias like
Wikipedia are a prime example of this potential because their content can be added
or altered by just about anyone, and provide links to the topics they reference, such
as racist websites. For their own part, hate organizations have used the common
and trusted appearance of these popular research sites as the archetypes for building
their own knock-off ‘‘public information centers,’’ the most popular of which is
Metapedia.com (see Figure 2).2
Metapedia is advertised as an alternative source of information. This white
nationalist site is designed to offer Internet-users an educational outlet on tens of
thousands of subjects, providing of course, a racial spin on their explanations (i.e.,
a search for the word ‘‘homosexual’’ on Metapedia automatically yields a definition
of the term ‘‘sodomite’’). Metapedia, which looks and behaves just like Wikipedia, is
also offered in ten different languages.
In Daniels’ (2009) study on cyber racism, she observed what she called ‘‘cloaked
websites’’ that ‘‘conceal, disguise, obfuscate their authorship in order to advance a
political agenda’’ (p. 120). A cloaked website itself, Metapedia is not a self-proclaimed
white supremacist community, but rather presented as an ‘‘alternative encyclopedia’’
that is updated daily. The SPLC identifies Metapedia as an indication that racist
Figure 2 Metapedia webpage on ‘‘Revisionism laws information.’’
communities are attempting to reach mainstream academics, but reaffirms that,
while their academic ‘‘subjects sound familiar ...their definitions don’t’’ (Southern
Poverty Law Center, 2007). Websites like Metapedia that mimic the appearance of
mainstream sites are reaching for the aesthetic Internet standard that Borrowman
called techno-ethos. Other hate sites have employed subtler means of tapping into
the academic ethos of their visitors by using content intended to signal that ‘‘this page
is a trusted information source.’’ Some of these scholarly signifiers, typically found
on the homepage, include noted academic credentials like ‘‘Ph.D.,’’ loose university
affiliations, and even looser intellectual associations, such as the ‘‘Charles Darwin
Research Institute.’’ The Holocaust Denial website Institute for Historical Review
offers its own library and archives section with noted ‘‘articles, reviews, books and
essays.’’ These are the calling cards of false academic achievement that together signify
‘‘a more legitimate’’ anti-Semitic movement. Racist websites utilize these signifiers in
order to falsely acquire a certified status in the eyes of their visitors.
These kinds of academic inquiries negotiate an ongoing struggle between the
legitimate providers of information and the hate-filled agencies that infiltrate their
venues of research, literature, and occasionally, even the mainstream news. Such was
the case with Fox News, which, twice in 2008, booked racist guests on programs
where they were then identified as an ‘‘Internet journalist’’ and ‘‘free speech activist’’
(Southern Poverty Law Center, 2009). The ‘‘Internet journalist,’’ Andy Martin, had
appeared on ‘‘Hannity’s America’’ to talk about then-Senator Obama’s conspiratorial
association with convicted terrorist William Ayers, but had also been known in
litigious circles as an outspoken racist. The Anti-Defamation League [ADL] had
followed Martin’s history of ‘‘filing literally hundreds of lawsuits’’ marked in phrases
like ‘‘crooked, slimy Jew’’ and sentiments like ‘‘I am able to understand how the
Holocaust took place, and with every passing day feel less and less sorry that it did.’’
Fox News would later apologize for accidentally hosting these guests. However, for
the racist movements they represent, these incidents signify a successful penetration
onto the mainstream information stage. Paul Farhi (2010) is one Washington Post
journalist who has observed a recent increase in fringe-based news stories emerging
in his profession:
Stories that might have been dismissed as marginal or kooky in an earlier age
now command serious scrutiny from mainstream news organizations ...The
news media’s romance with the fringe may be a stark reflection of how the
business has changed in just the past few years. Before there was an Internet,
before the explosion of sources of news and commentary, mainstream news
organizations could maintain something like a gatekeeper role, downplaying or
ignoring stories they deemed unfit for public consumption. (p. 35)
Meanwhile, internally, the websites of fringe movements have also sought to
reflect the borrowed appearance and content of mainstream news organizations on
their own homepages. This common practice of information laundering, referred
to here as mainstreaming, is regularly found on the National Socialist Movement
website that features direct links to news stories from the New York Times, Fox News,
CNN.com, and other mainstream outlets (NSM News, 2009). In every case, these
mainstream news stories are centered around unrelated but useful race-centered
issues and events, such as corrupt Mexican police officers trafficking drugs into the
United States, or local assaults carried out at the hands of an African-American
assailant. These unbiased, legitimate news stories take on new meanings when
strung together to present a collective narrative of nonwhite crimes and corruption.
Through a regular system of mainstreaming, these ideas take on the appearance of
being directly supported by the New York Times or Fox News. For now, the practice
of creating links to other websites without those sites’ permission is considered fair
use under the current interpretations of Internet copyright laws, which works well to
the advantage of today’s interconnecting hate websites (Copyright, 2011, para. 4).
Political blogs
From news and research, the next sphere of information gathering on the web, the
political blogosphere, represents a much broader level of unrestrained civic discourse.
Certainly less conventional than news and research in terms of its gatekeepers (the
public), the blogosphere is far more concerned with the opinions of everyday people
than with the facts and drawn conclusions of experts and reporters. The blogosphere
is by far the grayest area of information in cyberspace; a venue for citizens to share
their perspectives and feelings about political and social issues, and from which
other users can examine those perspectives to form their own opinions. Like public
encyclopedias, blogs and political pages are easily arrived at through the initial
search engine inquiry. However, in terms of trusted information, a greater degree
of separation exists between the filters of news and research and the vast medley of
online blogs.
For a hate group, the blogosphere presents a great opportunity to breach the
intellectual base of everyday people. The journey into the fringe element of cyberspace
often runs through these corridors of public debate, which speak to issues of legitimate
political and social concern but can all the same be exploited by hate groups, who need
not identify their affiliations with an extreme agenda. In a 2010 letter to the Federal
Communications Commission [FCC], more than 30 organizations, including Free
Press and the Center for Media Justice, asserted, ‘‘The Internet has made it harder for
the public to separate the facts from bigotry masquerading as news’’ (Nagesh, 2010,
para. 2). Calling on the FCC to return its attention to racist speech in the mainstream,
the coalition highlighted how today’s hate mongers ‘‘spread lies and hate under the
cloak of anonymity, and sometimes, the guise of credibility ...blogs are filled with
the hateful messages and misinformation of anonymous posters’’ (Future of Media,
2010, p. 19). A prominent example of hateful rhetoric emerging from the blogosphere
occurred in the wake of the Bernie Madoff Ponzi scheme scandal in 2008, when the
ADL reported that ‘‘popular blogs devoted to finance’’ were being ‘‘flooded’’ with
anti-Semitic comments (Anti-Defamation League, 2008). The ADL posted examples
from mainstream blogs including nymag.com, dealbreaker.com, and portfolio.com,
as well as from the discussion boards of forbes.com, and sunsentinel.com.
Some blogs are known for their frequent hosting of racially charged rhetoric
stemming from recurring political themes. The grassroots conservative Free Republic
website is one such blog, a self-described forum for issues of ‘‘God, Family, Country,
Life & Liberty.’’ While these topics are widely covered in context in blogs and
lengthy discussion boards, other subjects, such as the ‘‘conspiracy of President Barack
Obama’s citizenship’’ and ‘‘The Homosexual Agenda,’’ are also debated. Though
most blogs that engage these discussions are political in tone rather than racist or
homophobic in context, hateful voices occasionally find their way into this heated
venue. One such blogger was James von Brunn, who posted his thoughts on the Free
Republic about then-President Elect Obama’s citizenship regularly in 2008. In one
lengthy post he wrote, ‘‘Obama has lived for 48 years without leaving any footprints --
none! ... This is no accident. It is being performed on purpose with Media help --
but to serve whom & why???’’ (Roth, 2009). Six months after posting this blog,
Von Brunn would walk into the U.S. Holocaust Museum in Washington, DC and
attempt a shooting spree that would claim the life of one African-American guard,
Officer Stephen Johns. In addition to belonging to the online birther movement, Von
Brunn also utilized the web to sell his propagandist book Kill the Best Gentiles! on his
anti-Semitic site holywesternempire.com.
Of course, political blogs collectively represent some of the best of what the World
Wide Web has to offer, a new democratic sphere unto itself. Certainly, issues of race
and identity, and the politics that surround them are important topics that belong in
the blogosphere. However, this study contends that a strong divide does exist between
the forums of racial debate and blogs that truly circulate hate speech. Whereas some
would argue that racism exists at one end of the spectrum, and healthy racial debate
at the other, these two forms of communication are separated entirely by motive
and emphasis. Those who blog on a topic like affirmative action, for instance, are
likely motivated by political, social, or economic concerns, and as such, their contexts
place the emphasis on matters of equality, fairness, or jobs. Those who engage in
hateful rhetoric through the affirmative action debate are only really motivated by
the issue of identity and social intolerance, and this is seen in their words that place
an emphasis on a people rather than the matter at hand. However, with information
laundering, there is a value in noting not only the separation between the two forms
of race communications, but more importantly, the way that hate movements and
their websites borrow from the arguments and appearance of those closest to them
on the debate side. In fact, that is their precise strategy. In this way, we see a complex
dilemma in identifying modern hateful rhetoric in online blogs and politics that looks
and sounds just like the extreme views of the far right or the far left.
Online, hate groups posing as political causes have learned to seize on language
that insinuates our cultural differences without directly revealing their racist agenda.
For instance, anti-Israel blogs often open a convenient gateway into anti-Semitic
discourse, much the same way that anti-immigration blogs tap into primed anti-
Hispanic sentiments. Like the coded language of hate speech, Bratich (2008) noted
how the World Wide Web also has a ‘‘penchant for housing conspiratorial rumor
mongering’’ (p. 86). He writes, ‘‘while a conspiracy theory may in itself be based
on ‘bad information,’ it is made even more ‘foul’ because it circulates through
an untrustworthy medium.’’ In their study, Stempel, Hargrove, and Stempel (2007)
further examined that medium, citing Internet blogs as one of the two most common
media outlets for producing conspiracy theorists. From a national survey, they
concluded: ‘‘We found evidence of robust positive associations between belief in
conspiracy theories and higher consumption of nonmainstream media (blogs and
tabloids)’’ (p. 366). And so, in the blogosphere hate groups have found an ideal space
for injecting their narratives of racial and ethnic conspiracies among a welcoming
cyberculture of paranoia.
Social networks
In addition to blogs, news, research, and politics, the growing universe of social
networks in cyberspace has begun to provide racist movements with a cultural
segue into the online youth culture. More than just a friend-building network,
social networks represent dynamic spaces where individual and cultural identities
are expressed, tried on, and shared. Like other online prospects, racist organizations
have honed in on this valuable chance to share and spread their own cultural
identities, tastes, and preferences, and of course, to recruit. However, what makes
social networks so attractive to hate groups, in addition to the college kids that
populate these spaces, is how welcoming the culture can be, where new friends and
ideas are accepted with little reservation in the click of a mouse.
In May 2009, many Internet users read about the first high-profile case of a
popular social network colliding with the elements of hate speech in cyberspace. The
website is Facebook, the largest online community in cyberspace today and home
to a staggering 20% of global Internet users every month. Facebook has been an
online social network since 2004, but only a few years later, it has become a haven
for a few unexpected residents of the community: Holocaust denial groups. Among
the sea of faces and profiles, mainly high school and college students, a new wave
of memberships has surfaced dedicated to the cause of denying that the Holocaust
ever occurred; that it was in fact a Jewish conspiracy. Those who had heard this
same rhetoric voiced before quickly recognized its subtext and responded, but not to
the Holocaust deniers or the White Pride groups that had also converged onto the
mainstream space. Instead, attorneys like Brian Cuban decided to address Facebook
directly, demanding that the Internet giant remove all profiles that espoused any
form of racist sentiment. In an open letter, Cuban wrote to Facebook CEO Mark
Zuckerberg to clarify that ‘‘the Holocaust denial movement is nothing more than a
pretext to allow the preaching of hatred against Jews and to recruit other like-minded
individuals to do the same’’ (MacMillan, 2009). Ultimately, Facebook executives
decided not to remove the Holocaust denial groups, citing that no clear violation
of their terms of service had been made.
‘‘Terms of Service.’’ At the end of the day, these three words often constitute the
only real law of the land in cyberspace. Regardless of whether the claim of hate speech
was valid in this particular context, or the fact that Facebook is primarily a youth-
based website, the apparent approval of Holocaust denial groups in the mainstream
social network represented a legitimizing victory for the hate community. For racist
movements, websites like Facebook and YouTube present the most ideal platforms
for sharing their own information, views, and communities with the masses in a
mainstream venue virtually free of gatekeepers and oversight. More importantly,
in addition to building a cyber pathway into racist subculture, mainstream social
networks have inadvertently supplied racist movements with something far more
valuable than just another platform from which to convey messages of hate or links to
their domains. They deliver the net generation. The college crowd that fills the pages
of sites like Facebook and Twitter is the primary reason that hate groups have begun
to build their own profiles and links on these sites. White supremacist Matthew
Hale openly declares, ‘‘We generally reach out to the private colleges and universities
... because we want to have the elite. We are striving for that, focusing on winning
the best and the brightest of the young generation’’ (Swain & Nieli, 2003, p. 124).
In Klein’s (2010) study of 26 white supremacist websites, YouTube video links
were found to be, by far, the most common form of media convergence exhibited in
Figure 3 Media convergence on Creativityalliance.com/video.htm
these sites. Multimedia convergence itself becomes another form of mainstreaming
through which hate websites can use the connections to socially conventional domains
like YouTube to add legitimacy to their name. For the young Internet-user, already
quite familiar with YouTube, the mere appearance of its borrowed presence on a
white power website like CreativityAlliance.com only makes the racist sentiments
expressed on its CA-TV webcast seem all the more socially acceptable (see Figure 3).
Social networks represent the last domain of the information laundering process
used by today’s hate groups to achieve a more accepted presence in cyberspace.
The result of funneling hateful rhetoric through these four channels of discovery,
information, opinion, and expression can be seen in growing examples of culturally
intolerant sentiments that surface in mainstream outlets like Fox News and CNBC,3
which, in turn, recharge the online base of bigotry. That base continues to grow
by the thousands. Through this opportune system of associations, some cleverly
acquired and others gained by sheer happenstance, racist movements are able to digitally
launder hateful rhetoric through Internet channels in order to produce a loose form
of accepted ‘‘public discourse.’’
Discussion
Today, the information laundering process is playing out in cyberspace with little
public awareness, or media investigation, about where certain mainstream racist
sentiments originated. For instance, vilifying accusations about a president’s religion,
or claims about his nationality, do not emerge from scholarly, political, or public
debates. Rather, they begin on the fringes, in white power websites, and only through
the web where they have found a successful pathway to work racist subthemes into
mainstream issues. In a recent debate forum at the University of Virginia’s Miller
Center for Public Affairs, Farhad Manjoo (2010) contended:
What we’re seeing more and more is that the extreme points of views that
we’re getting [that] couldn’t have been introduced into national discussion in
the past are being introduced now by this sort of entry mechanism ...people put
it on blogs, and then it gets picked up by cable news, and then it becomes a
national discussion.
In his statement, Manjoo observes the noticeable shift that has taken place in
the tone of public discourse in recent years, which he partly attributes to the ‘‘entry
mechanism’’ of the Internet. Several examples have illustrated that phenomenon
Table 1 From the Web Into the Mainstream.

Connecting through the web:
1995: Stormfront.org is launched, the first website of the white nationalist community.
1999: VDARE, the pro-white, anti-immigrant website, is established. It becomes a popular base for nativist and anti-Hispanic rhetoric.
2007: Creativity Alliance, a white supremacist religious website, begins streaming ‘‘CA-TV’’ on YouTube.
2008: Online rumors and blogs surface about Barack Obama’s ‘‘true Kenyan birthplace.’’ A National Review Online article calls on Senator Obama to release his birth certificate.
2008: James von Brunn begins blogging about the ‘‘missing birth certificate’’ on the conservative blog, Free Republic.
2009: Holocaust denial groups emerge on Facebook and begin social networking.
2011: The Simon Wiesenthal Center reports over 14,000 hate sites now operating on the web. Stormfront.org receives more visitors than the NAACP, GLAAD, or ADL websites.

Surfacing in the mainstream:
2003: VDARE is designated a ‘‘hate group’’ by the Southern Poverty Law Center for publishing racist and anti-Semitic essays.
2005: Creativity Alliance founder Matt Hale is sentenced to 40 years in prison for attempting to solicit the murder of a judge.
2009: James von Brunn enters the U.S. Holocaust Museum and kills African American Security Officer Stephen Johns.
2011: A CBS News/New York Times poll finds that 25% of US citizens believe that President Obama was not born in the United States.
2011: The Southern Poverty Law Center reports 1,002 hate groups active in the United States, a 66% increase from a decade ago.
2012: A racially charged Super Bowl ad features a Chinese girl gloating (in broken English) about America’s economic demise.
2012: VDARE founder Peter Brimelow is invited to speak at the Conservative Political Action Conference (CPAC).
(see Table 1), including the aforementioned acceptance of Holocaust denial groups
within Facebook’s social network, or the mainstreaming of the birther movement,
which actually began in the blogosphere in early 2008. It was then elevated by a
National Review Online editorial that was the first to call on then-Senator Obama
to ‘‘release his birth certificate’’ (Geraghty, 2008). The birther pattern is a prime
example of the process of information laundering, because from the fringe blogs to
the National Review article, the movement graduated into the mainstream discourse
when it became a regular talking point of the President’s first years in office, despite
the release of certified birth documents.
A similar pattern can be observed in the case of VDARE.com, a website that began
as an anti-immigration movement in 1999. By 2003, the Southern Poverty Law Center
had added VDARE to its list of hate groups in the US due to the website’s noticeable
transition from political advocacy to a popular haven for the nativist, pro-White
community, and its publishing of anti-Hispanic and anti-Semitic essays (Southern
Poverty Law Center, 2012). In 2012, VDARE followed the birther movement onto
the mainstream stage when its white nationalist founder, Peter Brimelow, was invited
to speak at the Conservative Political Action Conference (CPAC), demonstrating
another legitimization of the racist agenda.
In many ways, VDARE exemplifies the prototypical modern racism where,
online, essays like ‘‘Freedom vs. Diversity’’ and ‘‘Why Immigrants Kill’’ are packaged
as scholarly and political discourse, and then certified in a corner of the public
domain. The Family Research Council (FRC), an antigay propaganda group that
touts ‘‘research’’ on its website regarding the ills of homosexuality, has followed that
same model. In 2010, the FRC managed to slip into the mainstream ‘‘52 times’’
when its spokespersons were invited on Fox, CNN, and other major news networks
‘‘to discuss issues ranging from the repeal of ‘Don’t Ask, Don’t Tell’ to the 2012
presidential campaign’’ (News Networks, 2011).
Certainly, not every media outlet that discusses racial issues or hosts a radical
speaker is a beacon for bigotry. Issues like race, identity, citizenship, and diversity
are important public matters to report upon, debate, and examine. But with the
knowledge that racist movements are beginning to draw sound bites and followers
from the same well as political news shows, media watchdogs increasingly advocate
that those mainstream sources of information must become more cognizant of where
extremist viewpoints really originate, and also, where they can lead (Nagesh, 2010).
The interconnected and loose information environment of cyberspace has created
the perfect conditions for racist and xenophobic discourse to diffuse into conventional
public affairs domains. The theory of information laundering has presented a digital
process by which racist rhetoric and extremist agendas cycle through the information
currencies of search engines, political blogs, news boards and social networks on a
daily basis, producing a more legitimized hate speech. Online, the ‘‘symbolic code
for violence’’ that Whillock (1995) once posited hate groups were seeking to create
is effectively being realized, because the web offers admission to social movements
like VDARE, and communities like Facebook, where ‘‘hate appeals’’ can take on the
encoded and protected form of political debates, or popular culture, subsequently
affecting the mainstream dialogue. Future studies might apply information laun-
dering theory to examine how the mainstream news media’s dialogue surrounding
issues of nationality, race, or ethnicity in society has subsequently become more
acceptably extreme since the popular establishment of the Internet in the mid-1990s.
Focused investigations could attempt to identify and track the content of specific
racist websites (i.e., slogans, campaigns, videos) that potentially emerge in more
mainstream venues of social debate, such as select political blogs of the far right or
far left.
While the growing number of racist organizations’ websites is still just a
fraction of the World Wide Web, the fact that they have built new inroads to the
informational, political, and cultural centers of cyberspace is no less significant a
development by itself. That is because for every degree to which the dial of intolerance
is moved closer to ‘‘normal,’’ a higher tolerance for hate speech is established among
the public. The lines between political debate and hateful rhetoric slowly become
increasingly ambiguous as the flow of information is diluted by racial undercurrents,
and cultural stereotypes quietly become just another part of the American vernacular.
In effect, by lowering the bar of what qualifies as acceptable expressions of bigotry,
we simultaneously lower the bar of civility and common sense in our culture.
Notes
1 Some material in this article has been sourced from A. Klein’s (2010) A Space for Hate:
The White Power Movement’s Adaptation Into Cyberspace, with the kind permission of
Litwin Books, LLC.
2 Metapedia webpage image retrieved in Google Images, from
http://conservapedia.com/images/1/1b/
3 In an appearance on the CNBC Show ‘‘The Big Idea with Donny Deutsch,’’ conservative
commentator Ann Coulter said, ‘‘We [Christians] just want Jews to be perfected ...We
consider ourselves perfected’’ (‘‘Hate in the Mainstream,’’ 2008). Her commentary
quickly surfaced among the online community of anti-Semitic websites, where it was
celebrated in headlines, video clips, and discussion boards.
References
Adams, J., & Roscigno, V. J. (2005). White supremacists, oppositional culture and the World
Wide Web. Social Forces,84(2), 759777.
Alexa. (2011). Alexa: The web information company. Retrieved from http://alexa.com/
Allport, G. W. (1954). The nature of prejudice. Reading, MA: Addison-Wesley Publishing Co.
Anti-Defamation League. (2008, December 19). Mainstream web sites flooded with
anti-Semitic comments in wake of Madoff scandal. Retrieved from http://www.adl.org/
PresRele/Internet_75/5422_12.htm
Berlet, C., & Vysotsky, S. (2006). Overview of U.S. white supremacists groups. Journal of Political and Military Sociology, 34(1), 30–32.
Billig, M. (2001). Humour and hatred: The racist jokes of the Ku Klux Klan. Discourse & Society, 12, 291–313.
Bosmajian, H. A. (1983). The language of oppression. Lanham, MD: University Press of
America.
Borrowman, S. (1999). Critical surfing: Holocaust deniability and credibility on the web. College Teaching, 47(2), 44–54.
Bratich, J. (2008). Conspiracy panics: Political rationality and popular culture. Albany, NY:
SUNY Press.
Breton, A., Galeotti, G., Salmon, P., & Wintrobe, R. (2002). Political extremism and
rationality. Cambridge, England: Cambridge University Press.
Cohen, A. (2003). White Power music is an effective recruiting tool. In White supremacy groups. Farmington Hills, MI: Greenhaven Press.
Conant, E. (2009, May 4). Rebranding hate in the age of Obama. Newsweek, 153(18), 30.
Copyright and fair use: Connecting to other websites (2011). Retrieved from http://fairuse.
stanford.edu/Copyright_and_Fair_Use_Overview/chapter6/6-c.html
Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights.
Lanham, MD: Rowman & Littlefield Publishers, Inc.
Daniels, J. (1997). White lies: Race, class, gender, and sexuality in white supremacist discourse.
New York, NY: Routledge.
Dobratz, B. A. (2001). The role of religion in the collective identity of the white racialist movement. Journal for the Scientific Study of Religion, 40(2), 287–302.
Duffy, M. E. (2003). Web of hate: A fantasy theme analysis of the rhetorical vision of hate groups online. Journal of Communication Inquiry, 27(3), 373–390.
Ezekiel, R. S. (1995). The racist mind: Portrait of American neo-Nazis and Klansmen. New York, NY: Viking.
Farhi, P. (2010, December). From the fringe to the mainstream: How "scandals" of dubious validity or relevance end up attracting so much media attention. American Journalism Review, 32–37.
Feagin, J. R. (2006). Systemic racism: A theory of oppression. New York, NY: Routledge.
Federal Bureau of Investigation. (2009). 2009 financial crimes report. Retrieved from
http://www.fbi.gov/stats-services/publications/financial-crimes-report-2009/financial-
crimes-report-2009#asset
Future of media and information needs of communities in a digital age (2010). Retrieved
from http://fjallfoss.fcc.gov/ecfs/document/view?id=7020450549
George, A. L. (1959). Propaganda analysis: A study of the inferences made from Nazi
propaganda in World War II. Evanston, IL: Row, Peterson.
Geraghty, J. (2008, June 9). Obama could debunk some rumors by releasing his birth
certificate. National Review Online. Retrieved from http://www.nationalreview.
com/campaign-spot/9490/obama-could-debunk-some-rumors-releasing-his-birth-
certificate
Johnson-Cartee, K. S., & Copeland, G. A. (2004). Strategic political communication:
Rethinking social influence, persuasion, and propaganda. Lanham, MD: Rowman &
Littlefield Publishers Inc.
Jowett, G. S., & O’Donnell, V. (1999). Propaganda and persuasion. Thousand Oaks, CA:
SAGE Publications.
Keller, L. (2009). The second wave: Evidence grows of far-right militia resurgence. Retrieved
from http://www.splcenter.org/get-informed/intelligence-report/browse-all-
issues/2009/fall/the-second-wave
Klein, A. G. (2010). A space for hate: The White Power movement’s adaptation into cyberspace.
Duluth, MN: Litwin Books, LLC.
Lewandowski, D. (2008). Search engine user behavior: How can users be guided to quality content? Information Services & Use, 28, 261–268.
Loeb, H. A. (1999). Words have consequences: Reframing the hate speech debate. Human Rights: Journal of the Section of Individual Rights & Responsibilities, 26(4), 11–13.
MacMillan, D. (2009). Facebook’s Holocaust controversy. Retrieved from http://www.
businessweek.com/technology/content/may2009/tc20090512_104433.htm
Manjoo, F. (2010). UVA debate: Does the internet help or hurt democracy? Retrieved from
http://www.pbs.org/newshour/bb/media/jan-june10/miller_06-01.html
Nagesh, G. (2010). Groups want FCC to police hate speech on talk radio, cable news
networks. Retrieved from http://thehill.com/blogs/hillicon-valley/technology/100833-
groups-want-fcc-to-police-hate-speech
News networks regularly promote anti-gay Family Research Council on air (2011). Retrieved
from http://equalitymatters.org/factcheck/201112120002
NSM News (2009). Retrieved from http://www.nsm88.org/
Roth, Z. (2009). Von Brunn revered by some Neo-Nazis. Retrieved from http://
tpmmuckraker.talkingpointsmemo.com/2009/06/von_brunn_revered_by_some_neo-
nazis.php
Schafer, J. A. (2002). Spinning the web of hate: Web-based hate propagation by extremist organizations. Journal of Criminal Justice and Popular Culture, 9(2), 69–88.
Shenk, D. (1999). Data smog: Surviving the information glut. San Francisco, CA: HarperEdge.
Shyles, L. (2003). Deciphering cyberspace: Making the most of digital communication
technology. Thousand Oaks, CA: Sage Publications.
Simi, P., & Futrell, R. (2006). Cyberculture and the endurance of white power activism. Journal of Political and Military Sociology, 34(1), 115–142.
Simon Wiesenthal Center. (2011). 2011 digital terrorism & hate report launched at Museum
of Tolerance New York. Retrieved from http://www.wiesenthal.com/site/apps/
nlnet/content2.aspx?c=lsKWLbPJLnF&b=6478433&ct=9141065
Southern Poverty Law Center. (2007, Winter). Aryan encyclopedia takes off. Intelligence
Report, 128. Retrieved from http://www.splcenter.org/intel/intelreport/
article.jsp?aid=863.
Southern Poverty Law Center. (2009, Spring). Extremism in the media. Intelligence Report,
133. Retrieved from http://www.splcenter.org/intel/intelreport/article.jsp?aid=1007
Southern Poverty Law Center. (2012). VDARE. Retrieved from http://www.splcenter.org/
vdare-foundation
Squires, C. R. (2007). Dispatches from the color line: The press and multiracial America.
Albany, NY: State University of New York Press.
Stempel, C., Hargrove, T., & Stempel, G., III (2007). Media use, social structure, and belief in 9/11 conspiracy theories. Journalism & Mass Communication Quarterly, 84(2), 353–372.
Swain, C. M., & Nieli, R. (2003). Contemporary voices of white nationalism in America.
Cambridge, England: Cambridge University Press.
Tsesis, A. (2002). Destructive messages: How hate speech paves the way for harmful social
movements. New York, NY: New York University Press.
UCLA Internet Project. (2003). Internet peaks as America’s most important source of
information. Retrieved from http://www.anderson.ucla.edu/x3829.xml
U.S. Hate Groups Top 1000 (2011). Retrieved from http://www.splcenter.org/
get-informed/news/us-hate-groups-top-1000
Weatherby, G. A., & Scroggins, B. (2006). A content analysis of persuasion techniques used on white supremacist websites. Journal of Hate Studies, 4(9), 9–31.
Whillock, R. K. (1995). The use of hate as a stratagem for achieving political and social goals.
In R. K. Whillock, & D. Slayden (Eds.), Hate speech. Thousand Oaks, CA: SAGE
Publications.