Online Astroturfing: A Theoretical Perspective
Completed Research Paper
Jerry Zhang
University of Texas at San Antonio
jerry.zhang@utsa.edu
Darrell Carpenter
University of Texas at San Antonio
darrell.carpenter@utsa.edu
Myung Ko
University of Texas at San Antonio
myung.ko@utsa.edu
ABSTRACT
Online astroturfing refers to coordinated campaigns where messages supporting a specific agenda are distributed via the
Internet. These messages employ deception to create the appearance of being generated by an independent entity. In other
words, astroturfing occurs when people are hired to present certain beliefs or opinions on behalf of their employer through
various communication channels. The key component of astroturfing is the creation of false impressions that a particular idea
or opinion has widespread support. Although the concept of astroturfing in traditional media outlets has been studied, online
astroturfing has not been investigated intensively by IS scholars. This study develops a theoretically-based definition of
online astroturfing from an IS perspective and discusses its key attributes. Online astroturfing campaigns may ultimately have
a substantial influence on both Internet users and society. Thus a clear understanding of its characteristics, techniques and
usage can provide valuable insights for both practitioners and scholars.
Keywords
Internet, astroturfing, deception, persuasion.
INTRODUCTION
Internet users who seek to gain knowledge on a particular subject or gauge support for various opinions frequently search the
web for references. It is common for web references to contain both factual information and a comments section where web
page viewers can post their individual opinions. The volume of information available through Internet resources is growing
rapidly and most computer-savvy people consider the Internet a primary source of reliable information. Facilitated by
powerful search engines, Internet users have access to a broad spectrum of opinions regarding popular issues, even those
opinions with sparse support.
From a social psychology perspective, an individual’s beliefs on a particular subject are often influenced by others’ beliefs
(Kelman 1958). Therefore, the beliefs of Internet users are likely to be influenced by the information and opinions provided
by other Internet users. Additionally, some Internet users have begun to doubt the veracity of information released by
organizations and public authorities. As a result, many users have turned to alternative information sources such as social
networks, blogs, and other forms of interactive online communication, which they believe are more authentic (Quandt 2012).
Peer-provided information has been extensively used in the e-commerce domain. It is normal practice for Internet users to
view product reviews and feedback from other consumers when contemplating an unfamiliar purchase. Poor reviews and
feedback ratings will likely have a negative impact on intentions to buy a particular product while positive reviews and
feedback may provide confidence in a particular purchase (Chen et al. 2008; Dellarocas et al. 2007; Hu et al. 2006; Senecal et
al. 2004). The same effects can often be observed in relation to political figures during election cycles. The reputation of a
particular candidate may be severely tarnished by undesirable media coverage or social network discussions (Ratkiewicz et
al. 2011a). Accordingly, the opinions of potential voters may be weakened or changed completely as a result of unfavorable
media coverage and damning social commentary, while other candidates, organizations, agendas, and opinions may gain
favor with voters.
Unfortunately, some of the information received from the Internet is falsified to manipulate the reader’s opinions (Cho et al.
2011; Cox et al. 2008; Daniels 2009; Mackie 2009; MacKinnon 2011; Stajano et al. 2011). In many cases this falsified
information is crafted to appear as if it was posted by autonomous Internet users when it was, in fact, released by paid agents
of parties with an interest in spreading a particular message. This type of activity is referred to as astroturfing, which entails
the imitating or faking of popular grassroots opinions or behaviors (Hoggan 2009; McNutt 2010). The term comes from the
brand name “AstroTurf”, which is a synthetic grass used on sports fields.
The concept of astroturfing is not new in the non-digital world. This perception management technique has been used in
politics, public relations, and marketing for years. However, the Internet has provided convenient opportunities for users to
post opinions in an anonymous fashion. Communicating anonymously on the Internet provides users with a sense of security
much like talking to others in a completely dark room in which nobody can see each other (McKenna et al. 2000). This cloak
of anonymity provides an opportunity for users to pretend they are someone else, thus making the Internet an ideal platform
for astroturfing. With the rapid growth of online outlets, astroturfing can be used to spread information throughout the digital
world via online forums, comments, blogs, and social networks (Mustafaraj et al. 2010; Ratkiewicz et al. 2011a; Ratkiewicz
et al. 2011b). There is evidence suggesting that some large organizations are using online astroturfing through public relations firms to create posts that discredit their critics (Greenwald et al. 2005; Norman 2004). Other organizations have utilized paid individuals to propagate favorable images online (MacKinnon 2011).
Although the utilization of online astroturfing has been studied in the fields of sociology (McNutt et al. 2007) and political
science (Mattingly 2006), it has not received much attention from IS scholars. The examination of the phenomenon in other
fields does not address online astroturfing as a socio-technical strategy, its potential impacts on business technology
investments, or its potential impact on the entire Internet. Online astroturfing may be leveraged as a vehicle to promote a deceptively positive image or to damage targets’ reputations through false claims. Additionally, if not controlled, it may
undermine the veracity of genuine information resources and diminish the value of Internet interactive technologies. Thus,
online astroturfing has specific implications for the IS discipline with a focus in the cyber security realm. The purpose of this
study is to define online astroturfing from an IS theoretical perspective and to discuss its critical attributes. This discussion of
its traits provides valuable insights to IS scholars and practitioners and serves as a catalyst for future research endeavors.
ONLINE ASTROTURFING
The term “astroturfing” was used by Senator Lloyd Bentsen to describe “the artificial grassroots campaigns created by public
relations (PR) firms” (Stauber 2002). Organizations that engage in astroturfing activities usually hire public relations or
lobbying firms to simulate grassroots campaigns (McNutt 2010). In other words, astroturfing occurs when groups of people
are hired to present certain beliefs or opinions, which these people do not really possess, through various communication
channels. In most cases, the hired groups and individuals support arguments or claims in their employer’s favor while
challenging critics and denying adverse claims (Cho et al. 2011). If successful, astroturfing creates falsified impressions
among decision makers or the general public and achieves the goal of persuasion. Traditionally, the scope and influence of
astroturfing are limited by the strength of financial support behind the effort since hiring public relations firms to generate and
disseminate these false messages can be costly (Hoggan 2009). Therefore, Lyon and Maxwell (2004) describe astroturfing as
“a form of costly state falsification”.
Traditional astroturfing has primarily targeted decision and policy makers. Examples include a massive public health campaign suggesting people use disposable cups in order to prevent the spread of disease from shared metal cups (Lee 2010); a group of “grassroots” lobbyists posting messages in support of the General Mining Act of 1872 while being funded by corporate sponsors who have strong interests in maintaining the provisions of that Act (Lyon et al. 2004); and a leaked memo from a US oil industry organization indicating its plan to deploy thousands of employees to protest proposed climate change legislation (Mackenzie et al. 2009).
While traditional astroturfing was effective in certain domains, the Internet has fundamentally changed the rules of social
communication. Since it is difficult to authenticate an individual online, it has become easy to create false identities and
advocate a belief or opinion while posing as a group of spontaneous individuals. Additionally, as noted by Stajano and Wilson (2011), online communication and social networks allow a single individual to create multiple aliases to give others the impression that many people share the same opinion. Namely, astroturfers strive to create the false impression that given ideas or opinions are held by a large portion of the population. The combination of anonymity (McKenna et al.
2000) and interactivity (Morris et al. 1996) enabled by the Internet communication paradigm has provided a technical
platform and opportunity for astroturfing. Web-based systems can be exploited in a variety of ways to achieve the desired
result: a single professional blogger can control several distinct blogs; a person can create different profiles on social
networks; users can post reviews and comments on many e-commerce and political sites. The scope and gravity of these
deceptive online actions are increasing as compared to traditional astroturfing (Tumasjan et al. 2010).
Online astroturfing has become a tool of choice because it typically costs less and exerts broader influence than traditional approaches (Mackie 2009). The
fraudulent perceptions disseminated through astroturfing can be classified as both identity-based and message-based
according to Hancock’s (2007) taxonomy of deception. Astroturf messages falsely represent the identities of the poster and
also deliver deceptive or misleading information. Therefore, we define online astroturfing as the dissemination of deceptive
opinions by imposters posing as autonomous individuals on the Internet with the intent of promoting a specific agenda.
We posit that despite the low cost of posting messages online, initiating an effective astroturfing campaign requires
substantial human capital, ample computational resources, and a strategic management protocol. Within these parameters, the
astroturfing messages can be falsified or genuine; the targets are determined by the purpose of the campaign; the motivation
may be political, commercial or military; the communication method may be one-way or interactive; and the communication
process may be automated or human controlled.
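
As an illustration of this attribute space, the following sketch captures the campaign dimensions just listed as a simple data structure. It is our own illustrative example in Python; the class and field names are assumptions and do not come from any cited source.

from dataclasses import dataclass
from enum import Enum


class Motivation(Enum):
    POLITICAL = "political"
    COMMERCIAL = "commercial"
    MILITARY = "military"


class Control(Enum):
    AUTOMATED = "automated"
    HUMAN = "human"


@dataclass
class AstroturfCampaign:
    # The dimensions posited above: message veracity, target, motivation,
    # communication method, and whether the process is automated or human controlled.
    messages_falsified: bool   # messages can be falsified or genuine
    target: str                # determined by the purpose of the campaign
    motivation: Motivation     # political, commercial, or military
    interactive: bool          # one-way (False) or interactive (True)
    control: Control           # automated or human controlled


# A hypothetical, human-operated commercial campaign targeting product reviews.
example = AstroturfCampaign(
    messages_falsified=True,
    target="product review sections",
    motivation=Motivation.COMMERCIAL,
    interactive=True,
    control=Control.HUMAN,
)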
MOTIVATIONS
Motivations for astroturfing are based on the benefits derived from manipulating the opinions of message receivers. In the
public relations industry, online astroturfing is referred to as a third party manipulation technique (Mackie 2009). Several
prominent examples of astroturfing in business and politics have been documented in both academic and popular literature.
Wal-Mart hired a public relations firm to reinforce its favorable public image and discredit critics (Daniels 2009). The firm
launched two websites in 2006: www.forwalmart.com and www.paidcritics.com. The first website was used to propagate the
positive contributions of Wal-Mart to working families while the second one was used to discredit critics of Wal-Mart by
asserting that they were “paid critics”. IBM and some other large corporations openly encourage employees to blog in favor
of their employers and against competitors (Cox et al. 2008). Mustafaraj and Metaxas (2010) found concrete evidence of
online astroturfing via Twitter during the Massachusetts senate race between Martha Coakley and Scott Brown. To smear one
of the candidates, perpetrators leveraged several Twitter accounts and generated hundreds of tweets in a brief period, thus
reaching a wide audience and potentially influencing the election outcome. On both the Amazon and Barnes & Noble websites, fake positive reviews have been discovered that were intended to influence customers’ purchase decisions for the benefit of multiple parties, including vendors, publishers, and authors (Hu et al. 2011). While astroturfing is often associated with
business and politics, it has also been used for national strategic and tactical purposes. After the terrorist attacks of 9/11, the
Office of Strategic Influence was created within the Pentagon for the purpose of “flooding targeted areas with information”.
Even though this particular office only existed for a short time, similar operations are still employed by the Pentagon to praise its military operations (Pfister 2011).
METHODS
Online astroturfing activities can be initiated by automated systems or human operators. However, Jakobsson (2012) notes
that automation techniques are required to reach an effective scale. Once information from an astroturfing campaign is
disseminated, many legitimate users may fall victim to the scheme and begin propagating the counterfeit information
(Ratkiewicz et al. 2011b). As a result, the effect of the astroturfing campaign is amplified. Several recent publications
highlight automated astroturfing activities conducted via Twitter. Chu et al. (2010) examined Twitter users by classifying
them as human, bot or cyborg. In contrast to other online social networks, Twitter allows the use of bots or automated
programs that can post tweets when the account owners are absent. Cyborgs are a combination of human and automated
actors and are further classified as either bot-assisted humans or human-assisted bots. The ability to employ cyborgs blurs the
lines between humans and bots for astroturfing activities. Metaxas and Mustafaraj (2010) and Ratkiewicz et al. (2011b) have
discussed different techniques that can be used to detect automated astroturfing accounts on Twitter. We posit that despite the
potential for cyborgs, bots on social networks should be readily distinguishable from genuine Internet users because their
message traffic is typically unidirectional and they cannot intelligently interact with other users. On the other hand,
astroturfing campaigns employing a large number of human operators are possible with sufficient financial support and
strategic management (MacKinnon 2011). Professional astroturfers can advocate their employer’s opinions anywhere
through user-generated content without using automated tools. They can also infiltrate microblogs, social networks,
chatrooms, and comment sections of targeted websites. Compared to automated mechanisms, human astroturfers may be
characterized as less efficient, but potentially more effective. While human astroturfers are, in fact, autonomous individuals,
they are not spontaneous as the opinions they espouse are designated by their employers. However, the messages they post
are carefully tailored to the specific environment they have infiltrated thus providing them with the ability to adapt quickly as
conditions change. Human astroturfers are also able to interact with legitimate Internet users thereby making their messages
more convincing. We theorize that without sufficient knowledge of astroturfing techniques, typical Internet users can be
easily deceived by this method. As stated by Mackie (2009), “the Internet is vulnerable to astroturfing by the powerful and
wealthy”.
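
To illustrate the distinction drawn above between unidirectional bot traffic and interactive human behavior, the following sketch computes two simple interaction features for an account and flags traffic that looks automated. It is our own Python illustration; the field names and thresholds are arbitrary assumptions, not a validated detector.

from typing import Dict, List


def interaction_features(posts: List[Dict]) -> Dict[str, float]:
    """Compute a reply ratio and interlocutor diversity from an account's posts.

    Each post is a dict such as {"text": "...", "in_reply_to": "user_b"},
    where "in_reply_to" is None for broadcast (non-reply) messages.
    """
    if not posts:
        return {"reply_ratio": 0.0, "unique_interlocutors": 0}
    replies = [p for p in posts if p.get("in_reply_to")]
    return {
        "reply_ratio": len(replies) / len(posts),
        "unique_interlocutors": len({p["in_reply_to"] for p in replies}),
    }


def looks_automated(posts: List[Dict]) -> bool:
    """Flag accounts whose message traffic is mostly one-way."""
    features = interaction_features(posts)
    return features["reply_ratio"] < 0.1 and features["unique_interlocutors"] < 2


# An account that only broadcasts the same slogan is flagged as bot-like.
broadcast_only = [{"text": "Vote for X!", "in_reply_to": None} for _ in range(20)]
print(looks_automated(broadcast_only))  # True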
MECHANISMS FOR EFFECTIVE ASTROTURFING
The previous sections discussed the motivation for online astroturfing and the ways astroturfed messages are disseminated
through the Internet. Although many scholars believe online astroturfing is effective and difficult for users to detect (Chu et
al. 2010; Hu et al. 2011; Mustafaraj et al. 2010; Ratkiewicz et al. 2011b), the mechanisms behind successful online
astroturfing have not been directly investigated. How does online astroturfing change the readers’ minds? What makes users
believe some online astroturfing messages while doubting others? In this section we explore the mechanisms behind effective
online astroturfing and present propositions based on existing theoretical foundations.
From a social psychology perspective, the influence created by online astroturfing is consistent with informational social
influence or social proof (Cialdini 2001b). Informational social influence is exerted when a subject accepts information from
other people as evidence to be weighed when forming one’s own judgment (Deutsch et al. 1955). The application of social
proof is to “use peer power whenever it is available” (Cialdini 2001a). According to Deutsch and Gerard (1955), the effect of
informational social influence will be most salient when people are ambiguous about subjects or situations. Therefore, when
Internet users are uncertain about a particular subject, they may seek and accept information provided by other users on the
Internet. However, the manner in which readers process information must also be considered. The Elaboration Likelihood
Model (Cacioppo et al. 1986; Petty et al. 1996) suggests that in a central route, people tend to examine the content of the
persuasive message very carefully, while in a peripheral route, people do not process the actual argument of the message
through cognitive effort but rely on other characteristics of the message which are more accessible and obvious. Petty and
Cacioppo (1986) contend that when people are highly motivated and willing to process the message, they will scrutinize the
persuasive argumentation carefully. In this case, a strong argument is more efficient than a weak argument. However, when
people are unmotivated they tend to rely on simple cues in the message such as the conviction or passion conveyed by the
poster. Thus, the decision to rely on the strength of argument, peripheral cues, or both is highly dependent on the receiver’s
level of involvement.
In online astroturfing, the goal of the message sender is to convince the receiver that the message content is a heartfelt,
rational, and defensible opinion held by a social peer. Ultimately, the message sender seeks to either alter the receiver’s
opinion or create doubts about a particular viewpoint through a coordinated campaign of deceptive information
dissemination. Therefore, the effect of online astroturfing can be defined as the degree to which an astroturfing campaign
alters the receiver’s opinion or level of conviction regarding a particular subject. Based on a synthesis of the Elaboration
Likelihood Model and informational social influence theory (Cialdini 2001b), we contend that the effects of online
astroturfing are related to four important mechanisms: multiple sources (Harkins et al. 1981b, 1987), uncertainty (Wooten et
al. 1998), perceived similarities (Cialdini 2001a), and receivers’ motivations (Cacioppo et al. 1986; Metzger 2007).
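
As a reading aid for the propositions that follow, these relationships can be summarized schematically. The notation below is our own shorthand; the cited works do not specify a functional form, and the expression is meant only to organize the propositions, not to state a testable model.

E = f(S, U, P;\, I), \qquad \frac{\partial E}{\partial S} \neq 0, \quad \frac{\partial E}{\partial U} \neq 0, \quad \frac{\partial E}{\partial P} \neq 0

where E is the effect of an online astroturfing campaign on the receiver’s opinion or conviction, S is the number of apparent information sources, U is the receiver’s uncertainty about the subject, P is the perceived similarity between sender and receiver, and I is the receiver’s level of involvement, which moderates the strength of these relationships.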
Multiple Source Effect
The multiple source effect was first identified by Harkins and Petty (1981a). In their experiment, they found that the subject
groups receiving multiple arguments from multiple sources were most persuaded when compared to other groups; the subject
groups receiving a single argument from multiple sources were less persuaded; and the subject groups receiving multiple
arguments from a single source were least persuaded. Their study indicated that both the number of sources and the number
of arguments play important roles in persuasion. Later, Harkins and Petty (1987) conducted another experiment to investigate
the reasons why multiple sources enhance processing. Consistent with the earlier research, their results showed that multiple sources enhance message processing because recipients view arguments from different sources as distinct perspectives offered by different individuals. In the online environment, most user-generated content has little or no verifiable identity attached to it; instead, arbitrary identifiers such as screen names or IP addresses are used. Thus, from a technical perspective, it is quite easy for an online astroturfer to operate under multiple identities, and readers are likely to perceive these identities as independent information sources provided by a number of different users. Accordingly, the following is proposed.
Proposition 1: The number of information sources influences the effect of online astroturfing.
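
To make the multiple source mechanism concrete from a detection standpoint, the sketch below groups near-identical messages posted under different screen names; a large cluster of distinct apparent sources carrying the same argument is exactly the pattern Proposition 1 concerns. This is our own illustrative Python example with hypothetical field names, not a method proposed in this paper or in the cited detection work.

import re
from collections import defaultdict
from typing import Dict, List, Tuple


def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivially edited copies still match."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()


def apparent_source_counts(posts: List[Dict]) -> List[Tuple[str, int]]:
    """Return each distinct message with the number of screen names posting it.

    Each post is a dict such as {"user": "alice", "text": "Great product!"}.
    """
    users_by_message = defaultdict(set)
    for post in posts:
        users_by_message[normalize(post["text"])].add(post["user"])
    return sorted(
        ((msg, len(users)) for msg, users in users_by_message.items()),
        key=lambda item: item[1],
        reverse=True,
    )


posts = [
    {"user": "u1", "text": "This candidate cut my taxes!"},
    {"user": "u2", "text": "This candidate cut my taxes!!"},
    {"user": "u3", "text": "this candidate cut my taxes"},
    {"user": "u4", "text": "I had a different experience."},
]
print(apparent_source_counts(posts)[0])  # ('this candidate cut my taxes', 3)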
Receivers’ Uncertainty
Intuitively, if users are uncertain about a particular subject, they are more likely to be influenced by the information provided
by others. In this case, informational social influence can be used to change or unsettle the receiver’s opinions (Wooten et al.
1998). Conversely, if one is very knowledgeable or experienced regarding a particular subject, he or she will be less likely to
accept others’ thoughts or opinions (Deutsch et al. 1955). In political or advertisement campaigns, individuals who are
uncertain about the candidate or product are likely to be vulnerable to online astroturfing. Therefore we believe that the
message receivers’ uncertainty is a major factor in astroturfing effectiveness and the following is proposed.
Proposition 2: Uncertainty influences the effect of online astroturfing.
Perceived Similarities
Similarity is another key factor in informational social influence. If the information receiver perceives himself or herself as
similar to the sender, the receiver is more likely to be influenced or adopt the opinions embodied in the message. Sometimes
the similarities of peers can be even more compelling than the message itself (Cialdini 2001a). In contrast, if a product review
is written from the perspective of a vendor or manufacturer, the potential consumer will be less likely to be influenced by this
advocated opinion. Cialdini (2001a) contends that “influence is often best exerted horizontally rather than vertically”. The
premise of peer power is that it has to come, or appear to come, from a peer. On the Internet, astroturfers do not have any
connection with information receivers, but they are adept at making messages sound as if they are generated by someone
similar to the receiver. To enhance the social influence created by online astroturfing, Kinniburgh and Denning (2006)
suggest a strategy of supporting “homegrown” blogs that do not appear to be written by an authoritative figure. Thus, even
though the spatial and social distance between information senders and receivers is large, the technology can be manipulated
to shorten the psychological distance and allow information receivers to perceive astroturfers as peers. Accordingly, the
following is proposed.
Proposition 3: Perceived similarities influence the effect of online astroturfing.
Levels of Involvement
Based on the previously discussed tenets of the Elaboration Likelihood Model (Cacioppo et al. 1986; Petty et al. 1996),
motivation or level of involvement is a critical factor in a user’s decision to either critically analyze data or rely on peripheral
cues. Metzger (2007) found that Internet information seekers with high motivation will likely evaluate opinions carefully
based on the quality of the information while low motivation information seekers will look to salient cues. Internet users are
similar to other information seekers in that they are more likely to use central route processing when motivated. Conversely,
when motivation or ability to judge the quality and trustworthiness of online sources is low, Internet users will likely rely on
peripheral or heuristic processing. Therefore, we believe that levels of involvement will act as a moderator on the effect of
online astroturfing and the following is proposed.
Proposition 4: Level of involvement moderates the effect of online astroturfing.
LIMITATIONS AND CONCLUSION
A successful online astroturfing campaign relies on both skillful deceivers and vulnerable receivers. It is a powerful weapon
used to launch asymmetric attacks designed to deceive innocent voters, consumers, and other information seekers. With
comparatively modest resources, an online astroturfing campaign is able to generate substantial social influence over a target.
The nature of Internet communication makes it relatively difficult to collect and examine data from astroturfing activities.
Although techniques have been developed to detect online astroturfing (Ratkiewicz et al. 2011a; Ratkiewicz et al. 2011b),
they are only effective on certain kinds of automated astroturfing systems and specific media. Once a user’s opinion has been
influenced, it is almost impossible to restore it to the pre-influence state. Additionally, once an astroturfing
campaign gains traction, the fraudulent information will likely be redistributed by the manipulated users and become
indistinguishable from other user-generated content. Thus, Ratkiewicz et al. (2011b) suggest that identifying and terminating
online astroturfing at the initiation stage is critical.
In the present study we defined online astroturfing as the dissemination of deceptive opinions by imposters posing as
autonomous individuals on the Internet with the intention of promoting a specific agenda. It can be motivated by political,
business, or military agendas and initiated by automated mechanisms or human actors. Additionally, we examined the
theoretical underpinnings of related research to identify the attributes of online astroturfing. Finally, we developed a set of
propositions based on the theoretical foundations to serve as the basis for future research. This study contributes to the
limited body of knowledge related to online astroturfing by identifying four key concepts that are likely to influence the
effectiveness of this tactic and have implications for both general IS research and specific areas of cyber security research
such as perception management and protection of information resources. These key concepts include the multiple source
effect, receiver uncertainty, perceived similarities between the sender and receiver, and the level of the receiver’s
involvement. However, our discussion regarding the effective attributes of online astroturfing may not be conclusive and
should be supplemented by further investigation. We contend that the escalation of astroturfing activity could have a
profound effect on the credibility of all Internet information resources, and this study provides additional insights into the attributes and mechanisms behind this phenomenon, which are of interest to the scholarly community, policy makers, and practitioners.
REFERENCES
1. Cacioppo, J.T., Petty, R.E., Kao, C.F., and Rodriguez, R. "Central and peripheral routes to persuasion: An individual
difference perspective," Journal of Personality and Social Psychology (51:5) 1986, p 1032.
2. Chen, Y., and Xie, J. "Online consumer review: Word-of-mouth as a new element of marketing communication mix,"
Management science (54:3) 2008, pp 477-491.
3. Cho, C., Martens, M., Kim, H., and Rodrigue, M. "Astroturfing Global Warming: It Isn’t Always Greener on the Other
Side of the Fence," Journal of Business Ethics (104:4), 2011/12/01 2011, pp 571-587.
4. Chu, Z., Gianvecchio, S., Wang, H., and Jajodia, S. "Who is tweeting on twitter: human, bot, or cyborg?," Proceedings
of the 26th Annual Computer Security Applications Conference, ACM, 2010, pp. 21-30.
5. Cialdini, R.B. "Harnessing the science of persuasion," Harvard Business Review (79:9) 2001a, pp 72-81.
6. Cialdini, R.B. Influence: Science and practice Allyn and Bacon Boston, MA, 2001b.
7. Cox, J.L., Martinez, E.R., and Quinlan, K.B. "Blogs and the corporation: managing the risk, reaping the benefits," The
Journal of Business Strategy (29:3) 2008, pp 4-12.
8. Daniels, J. "Cloaked websites: propaganda, cyber-racism and epistemology in the digital era," New Media & Society
(11:5), August 1, 2009 2009, pp 659-683.
9. Dellarocas, C., Zhang, X.M., and Awad, N.F. "Exploring the value of online product reviews in forecasting sales: The
case of motion pictures," Journal of Interactive Marketing (21:4) 2007, pp 23-45.
10. Deutsch, M., and Gerard, H.B. "A study of normative and informational social influences upon individual judgment,"
The journal of abnormal and social psychology (51:3) 1955, p 629.
11. Greenwald, R., Gilliam, J., Smith, D., Tully, K., Gordon, C.M., Cheek, D., Brock, J., Florio, R., Frizzell, J., and
Cronkite, W. Wal-Mart: The high cost of low price Disinformation Company, 2005.
12. Hancock, J.T. "Digital deception," Oxford Handbook of Internet Psychology, 2007, pp 289-301.
13. Harkins, S.G., and Petty, R.E. "The multiple source effect in persuasion," Personality and Social Psychology Bulletin
(7:4) 1981a, p 627.
14. Harkins, S.G., and Petty, R.E. "The Multiple Source Effect in Persuasion: The Effects of Distraction," Personality and Social Psychology Bulletin (7:4) 1981b, pp 627-635.
15. Harkins, S.G., and Petty, R.E. "Information utility and the multiple source effect," Journal of personality and social
psychology (52:2) 1987, p 260.
16. Hoggan, J. Climate cover-up: The crusade to deny global warming Greystone Books, 2009.
17. Hu, N., Liu, L., and Sambamurthy, V. "Fraud detection in online consumer reviews," Decision Support Systems (50:3)
2011, pp 614-626.
18. Hu, N., Pavlou, P.A., and Zhang, J. "Can online reviews reveal a product's true quality?: empirical findings and
analytical modeling of Online word-of-mouth communication," Proceedings of the 7th ACM conference on
Electronic commerce, ACM, 2006, pp. 324-330.
19. Jakobsson, M. The Death of the Internet Wiley-IEEE Computer Society Press, 2012.
20. Kelman, H.C. "Compliance, identification, and internalization: Three processes of attitude change," The Journal of
Conflict Resolution (2:1) 1958, pp 51-60.
21. Lee, C.W. "The roots of astroturfing," 2010.
22. Lyon, T.P., and Maxwell, J.W. "Astroturf: Interest Group Lobbying and Corporate Strategy," Journal of Economics &
Management Strategy (13:4) 2004, pp 561-597.
23. Mackenzie, K., and Pickard, J. "Lobbying memo splits US oil industry," in: Financial Times, London (UK), United
Kingdom, London (UK), 2009, p. 1.
24. Mackie, G. "Astroturfing Infotopia," Theoria: A Journal of Social & Political Theory (56:119) 2009, pp 30-56.
25. MacKinnon, R. "China's 'Networked Authoritarianism'," Journal of Democracy (22:2) 2011, pp 32-46.
26. Mattingly, J.E. "Radar Screens, Astroturf, and Dirty Work: A Qualitative Exploration of Structure and Process in
Corporate Political Action," Business and Society Review (111:2) 2006, pp 193-221.
27. McKenna, K.Y.A., and Bargh, J.A. "Plan 9 from cyberspace: The implications of the Internet for personality and social
psychology," Personality and social psychology review (4:1) 2000, pp 57-75.
28. McNutt, J., and Boland, K. "Astroturf, technology and the future of community mobilization: Implications for nonprofit
theory," J. Soc. & Soc. Welfare (34) 2007, p 165.
29. McNutt, J.G. "Researching Advocacy Groups: Internet Sources for Research about Public Interest Groups and Social
Movement Organizations," Journal of Policy Practice (9:3-4) 2010, pp 308-312.
30. Metzger, M.J. "Making sense of credibility on the Web: Models for evaluating online information and recommendations
for future research," Journal of the American Society for Information Science and Technology (58:13) 2007, pp
2078-2091.
31. Morris, M., and Ogan, C. "The Internet as Mass Medium," Journal of Computer-Mediated Communication (1:4) 1996,
pp 0-0.
32. Mustafaraj, E., and Metaxas, P. "From obscurity to prominence in minutes: Political speech and real-time search," 2010.
33. Norman, A. The Case Against Wal-Mart Brigantine Media, 2004.
34. Petty, R.E., and Cacioppo, J.T. Attitudes and persuasion: Classic and contemporary approaches Westview Press, 1996.
35. Pfister, D.S. "The logos of the blogosphere: Flooding the zone, invention, and attention in the Lott imbroglio," American Forensic Association, 2011, pp. 141-162.
36. Quandt, T. "What’s left of trust in a network society? An evolutionary model and critical discussion of trust and societal
communication," European Journal of Communication (27:1), March 1, 2012 2012, pp 7-21.
37. Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Flammini, A., and Menczer, F. "Detecting and tracking political
abuse in social media," Proc. of ICWSM, 2011a.
38. Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., and Menczer, F. "Truthy: mapping the
spread of astroturf in microblog streams," in: Proceedings of the 20th international conference companion on World
wide web, ACM, Hyderabad, India, 2011b, pp. 249-252.
39. Senecal, S., and Nantel, J. "The influence of online product recommendations on consumers’ online choices," Journal of
Retailing (80:2) 2004, pp 159-169.
40. Stajano, F., and Wilson, P. "Understanding scam victims: seven principles for systems security," Commun. ACM (54:3)
2011, pp 70-75.
41. Stauber, J., and Rampton, S. Toxic Sludge Is Good For You: Lies, Damn Lies and the Public Relations Industry, 2002.
42. Tumasjan, A., Sprenger, T.O., Sandner, P.G., and Welpe, I.M. "Predicting elections with twitter: What 140 characters
reveal about political sentiment," Proceedings of the fourth international aaai conference on weblogs and social
media, 2010, pp. 178-185.
43. Wooten, D.B., and Reed II, A. "Informational influence and the ambiguity of product experience: Order effects on the weighting of evidence," Journal of Consumer Psychology, 1998.