Zhang et al. Online Astroturfing: A Theoretical Perspective
Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, Illinois, August 15-17, 2013. 1
Online Astroturfing: A Theoretical Perspective
Completed Research Paper
Jerry Zhang
University of Texas at San Antonio
Darrell Carpenter
University of Texas at San Antonio
Myung Ko
University of Texas at San Antonio
Online astroturfing refers to coordinated campaigns where messages supporting a specific agenda are distributed via the
Internet. These messages employ deception to create the appearance of being generated by an independent entity. In other
words, astroturfing occurs when people are hired to present certain beliefs or opinions on behalf of their employer through
various communication channels. The key component of astroturfing is the creation of false impressions that a particular idea
or opinion has widespread support. Although the concept of astroturfing in traditional media outlets has been studied, online
astroturfing has not been investigated intensively by IS scholars. This study develops a theoretically-based definition of
online astroturfing from an IS perspective and discusses its key attributes. Online astroturfing campaigns may ultimately have
a substantial influence on both Internet users and society. Thus a clear understanding of its characteristics, techniques and
usage can provide valuable insights for both practitioners and scholars.
Keywords
Internet, astroturfing, deception, persuasion.
Internet users who seek to gain knowledge on a particular subject or gauge support for various opinions frequently search the
web for references. It is common for web references to contain both factual information and a comments section where web
page viewers can post their individual opinions. The volume of information available through Internet resources is growing
rapidly and most computer-savvy people consider the Internet a primary source of reliable information. Facilitated by
powerful search engines, Internet users have access to a broad spectrum of opinions regarding popular issues, even those
opinions with sparse support.
From a social psychology perspective, an individual’s beliefs on a particular subject are often influenced by others’ beliefs
(Kelman 1958). Therefore, the beliefs of Internet users are likely to be influenced by the information and opinions provided
by other Internet users. Additionally, some Internet users have begun to doubt the veracity of information released by
organizations and public authorities. As a result, many users have turned to alternative information sources such as social
networks, blogs, and other forms of interactive online communication, which they believe are more authentic (Quandt 2012).
Peer-provided information has been extensively used in the e-commerce domain. It is normal practice for Internet users to
view product reviews and feedback from other consumers when contemplating an unfamiliar purchase. Poor reviews and
feedback ratings will likely have a negative impact on intentions to buy a particular product while positive reviews and
feedback may provide confidence in a particular purchase (Chen et al. 2008; Dellarocas et al. 2007; Hu et al. 2006; Senecal et
al. 2004). The same effects can often be observed in relation to political figures during election cycles. The reputation of a
particular candidate may be severely tarnished by undesirable media coverage or social network discussions (Ratkiewicz et
al. 2011a). Accordingly, the opinions of potential voters may be weakened or changed completely as a result of unfavorable
media coverage and damning social commentary, while other candidates, organizations, agendas, and opinions may gain
favor with voters.
Unfortunately, some of the information received from the Internet is falsified to manipulate the reader’s opinions (Cho et al.
2011; Cox et al. 2008; Daniels 2009; Mackie 2009; MacKinnon 2011; Stajano et al. 2011). In many cases this falsified
information is crafted to appear as if it was posted by autonomous Internet users when it was, in fact, released by paid agents
of parties with an interest in spreading a particular message. This type of activity is referred to as astroturfing, which entails
the imitating or faking of popular grassroots opinions or behaviors (Hoggan 2009; McNutt 2010). The term comes from the
brand name “AstroTurf”, which is a synthetic grass used on sports fields.
The concept of astroturfing is not new in the non-digital world. This perception management technique has been used in
politics, public relations, and marketing for years. However, the Internet has provided convenient opportunities for users to
post opinions in an anonymous fashion. Communicating anonymously on the Internet provides users with a sense of security
much like talking to others in a completely dark room in which nobody can see each other (McKenna et al. 2000). This cloak
of anonymity provides an opportunity for users to pretend they are someone else, thus making the Internet an ideal platform
for astroturfing. With the rapid growth of online outlets, astroturfing can be used to spread information throughout the digital
world via online forums, comments, blogs, and social networks (Mustafaraj et al. 2010; Ratkiewicz et al. 2011a; Ratkiewicz
et al. 2011b). There is evidence suggesting that some large organizations are using online astroturfing through public relations firms to create posts that discredit their critics (Greenwald et al. 2005; Norman 2004). Other organizations have utilized paid individuals to propagate favorable images online (MacKinnon 2011).
Although the utilization of online astroturfing has been studied in the fields of sociology (McNutt et al. 2007) and political
science (Mattingly 2006), it has not received much attention from IS scholars. The examination of the phenomenon in other
fields does not address online astroturfing as a socio-technical strategy, its potential impacts on business technology
investments, or its potential impact on the entire Internet. Online astroturfing may be leveraged as a vehicle to promote a deceptively positive image or to damage targets’ reputations through false claims. Additionally, if not controlled, it may undermine the veracity of genuine information resources and diminish the value of Internet interactive technologies. Thus, online astroturfing has specific implications for the IS discipline, particularly in the cyber security realm. The purpose of this
study is to define online astroturfing from an IS theoretical perspective and to discuss its critical attributes. This discussion of
its traits provides valuable insights to IS scholars and practitioners and serves as a catalyst for future research endeavors.
The term “astroturfing” was used by Senator Lloyd Bentsen to describe “the artificial grassroots campaigns created by public
relations (PR) firms” (Stauber 2002). Organizations that engage in astroturfing activities usually hire public relations or
lobbying firms to simulate grassroots campaigns (McNutt 2010). In other words, astroturfing occurs when groups of people
are hired to present certain beliefs or opinions, which these people do not really possess, through various communication
channels. In most cases, the hired groups and individuals support arguments or claims in their employer’s favor while
challenging critics and denying adverse claims (Cho et al. 2011). If successful, astroturfing creates falsified impressions
among decision makers or the general public and achieves the goal of persuasion. Traditionally, the scope and influence of
astroturfing are limited by the strength of financial support behind the effort, since hiring public relations firms to generate and
disseminate these false messages can be costly (Hoggan 2009). Therefore, Lyon and Maxwell (2004) describe astroturfing as
“a form of costly state falsification”.
Traditional astroturfing has primarily targeted decision and policy makers. Examples include: a massive public health campaign suggesting people use disposable cups in order to prevent the spread of disease from shared metal cups (Lee 2010); a group of “grassroots” lobbyists posting messages in support of the General Mining Act of 1872 while being funded by corporate sponsors who had strong interests in maintaining the provisions of that Act (Lyon et al. 2004); and a leaked memo from a US oil industry organization indicating its plan to deploy thousands of employees to protest proposed climate change legislation (Mackenzie et al. 2009).
While traditional astroturfing was effective in certain domains, the Internet has fundamentally changed the rules of social
communication. Since it is difficult to authenticate an individual online, it has become easy to create false identities and
advocate a belief or opinion while posing as a group of spontaneous individuals. Additionally, as noted by Stajano and Wilson (2011), online communication and social networks allow a single individual to create multiple aliases to give others the impression that there are many people sharing the same opinion. Namely, astroturfers strive to create the false impression that the given ideas or opinions are held by a large portion of the population. The combination of anonymity (McKenna et al.
2000) and interactivity (Morris et al. 1996) enabled by the Internet communication paradigm has provided a technical
platform and opportunity for astroturfing. Web-based systems can be exploited in a variety of ways to achieve the desired
result: a single professional blogger can control several distinct blogs; a person can create different profiles on social
networks; users can post reviews and comments on many e-commerce and political sites. The scope and gravity of these
deceptive online actions are increasing as compared to traditional astroturfing (Tumasjan et al. 2010).
Online astroturfing has become a tool of choice because it typically costs less and exerts broader influence than traditional methods (Mackie 2009). The
fraudulent perceptions disseminated through astroturfing can be classified as both identity-based and message-based
according to Hancock’s (2007) taxonomy of deception. Astroturf messages falsely represent the identities of the poster and
also deliver deceptive or misleading information. Therefore, we define online astroturfing as the dissemination of deceptive
opinions by imposters posing as autonomous individuals on the Internet with the intent of promoting a specific agenda.
We posit that despite the low cost of posting messages online, initiating an effective astroturfing campaign requires
substantial human capital, ample computational resources, and a strategic management protocol. Within these parameters, the
astroturfing messages can be falsified or genuine; the targets are determined by the purpose of the campaign; the motivation
may be political, commercial or military; the communication method may be one-way or interactive; and the communication
process may be automated or human controlled.
Motivations for astroturfing are based on the benefits derived from manipulating the opinions of message receivers. In the
public relations industry, online astroturfing is referred to as a third party manipulation technique (Mackie 2009). Several
prominent examples of astroturfing in business and politics have been documented in both academic and popular literature.
Wal-Mart hired a public relations firm to reinforce its favorable public image and discredit critics (Daniels 2009). The firm launched two websites in 2006: the first was used to propagate the positive contributions of Wal-Mart to working families, while the second was used to discredit critics of Wal-Mart by asserting that they were “paid critics”. IBM and some other large corporations openly encourage employees to blog in favor
of their employers and against competitors (Cox et al. 2008). Mustafaraj and Metaxas (2010) found concrete evidence of
online astroturfing via Twitter during the Massachusetts senate race between Martha Coakley and Scott Brown. To smear one
of the candidates, perpetrators leveraged several Twitter accounts and generated hundreds of tweets in a brief period, thus
reaching a wide audience and potentially influencing the election outcome. On both the Amazon and Barnes & Noble
websites, fake positive reviews have been discovered, intended to influence the purchase decisions of customers to benefit
multiple parties including vendors, publishers, and authors (Hu et al. 2011). While astroturfing is often associated with
business and politics, it has also been used for national strategic and tactical purposes. After the terrorist attacks of 9/11, the
Office of Strategic Influence was created within the Pentagon for the purpose of “flooding targeted areas with information”.
Even though this particular office only existed for a short time, similar operations are still employed by the Pentagon to promote its military operations (Pfister 2011).
Online astroturfing activities can be initiated by automated systems or human operators. However, Jakobsson (2012) notes
that automation techniques are required to reach an effective scale. Once information from an astroturfing campaign is
disseminated, many legitimate users may fall victim to the scheme and begin propagating the counterfeit information
(Ratkiewicz et al. 2011b). As a result, the effect of the astroturfing campaign is amplified. Several recent publications
highlight automated astroturfing activities conducted via Twitter. Chu et al. (2010) examined Twitter users by classifying
them as human, bot or cyborg. In contrast to other online social networks, Twitter allows the use of bots or automated
programs that can post tweets when the account owners are absent. Cyborgs are a combination of human and automated
actors and are further classified as either bot-assisted humans or human-assisted bots. The ability to employ cyborgs blurs the
lines between humans and bots for astroturfing activities. Metaxas and Mustafaraj (2010) and Ratkiewicz et al. (2011b) have
discussed different techniques that can be used to detect automated astroturfing accounts on Twitter. We posit that despite the
potential for cyborgs, bots on social networks should be readily distinguishable from genuine Internet users because their
message traffic is typically unidirectional and they cannot intelligently interact with other users. On the other hand,
astroturfing campaigns employing a large number of human operators are possible with sufficient financial support and
strategic management (MacKinnon 2011). Professional astroturfers can advocate their employer’s opinions anywhere
through user-generated content without using automated tools. They can also infiltrate microblogs, social networks,
chatrooms, and comment sections of targeted websites. Compared to automated mechanisms, human astroturfers may be
characterized as less efficient, but potentially more effective. While human astroturfers are, in fact, autonomous individuals,
they are not spontaneous as the opinions they espouse are designated by their employers. However, the messages they post
are carefully tailored to the specific environment they have infiltrated thus providing them with the ability to adapt quickly as
conditions change. Human astroturfers are also able to interact with legitimate Internet users thereby making their messages
more convincing. We theorize that without sufficient knowledge of astroturfing techniques, typical Internet users can be
easily deceived by this method. As stated by Mackie (2009), “the Internet is vulnerable to astroturfing by the powerful and […]”.
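The claim above that bot traffic is typically unidirectional and non-interactive can be illustrated with a toy heuristic. This is only a sketch: the `Account` fields and the 5% interaction threshold are illustrative assumptions, not a validated detection method from the literature.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts: int             # total messages posted
    replies_sent: int      # messages that respond to another user
    replies_answered: int  # incoming replies the account responded to

def looks_automated(acct: Account, threshold: float = 0.05) -> bool:
    """Flag accounts whose message traffic is almost entirely one-way.

    Rationale: bots broadcast messages but rarely sustain back-and-forth
    interaction. The 5% cutoff is an illustrative assumption.
    """
    if acct.posts == 0:
        return False
    interaction_ratio = (acct.replies_sent + acct.replies_answered) / acct.posts
    return interaction_ratio < threshold

# A pure broadcaster looks bot-like; a conversational account does not.
print(looks_automated(Account(posts=500, replies_sent=2, replies_answered=0)))   # True
print(looks_automated(Account(posts=80, replies_sent=35, replies_answered=20)))  # False
```

Note that a cyborg (a bot-assisted human or human-assisted bot) would land between these two extremes, which is precisely why such simple ratios blur once human operators intervene.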
The previous sections discussed the motivation for online astroturfing and the ways astroturfed messages are disseminated
through the Internet. Although many scholars believe online astroturfing is effective and difficult for users to detect (Chu et
al. 2010; Hu et al. 2011; Mustafaraj et al. 2010; Ratkiewicz et al. 2011b), the mechanisms behind successful online
astroturfing have not been directly investigated. How does online astroturfing change the readers’ minds? What makes users
believe some online astroturfing messages while doubting others? In this section we explore the mechanisms behind effective
online astroturfing and present propositions based on existing theoretical foundations.
From a social psychology perspective, the influence created by online astroturfing is consistent with informational social
influence or social proof (Cialdini 2001b). Informational social influence is exerted when a subject accepts information from
other people as evidence to be weighed when forming one’s own judgment (Deutsch et al. 1955). The application of social
proof is to “use peer power whenever it is available” (Cialdini 2001a). According to Deutsch and Gerard (1955), the effect of
informational social influence will be most salient when people are ambiguous about subjects or situations. Therefore, when
Internet users are uncertain about a particular subject, they may seek and accept information provided by other users on the
Internet. However, the manner in which readers process information must also be considered. The Elaboration Likelihood
Model (Cacioppo et al. 1986; Petty et al. 1996) suggests that in a central route, people tend to examine the content of the
persuasive message very carefully, while in a peripheral route, people do not process the actual argument of the message
through cognitive effort but rely on other characteristics of the message which are more accessible and obvious. Petty and
Cacioppo (1986) contend that when people are highly motivated and willing to process the message, they will scrutinize the
persuasive argumentation carefully. In this case, a strong argument is more effective than a weak one. However, when
people are unmotivated they tend to rely on simple cues in the message such as the conviction or passion conveyed by the
poster. Thus, the decision to rely on the strength of argument, peripheral cues, or both is highly dependent on the receiver’s
level of involvement.
In online astroturfing, the goal of the message sender is to convince the receiver that the message content is a heartfelt,
rational, and defensible opinion held by a social peer. Ultimately, the message sender seeks to either alter the receiver’s
opinion or create doubts about a particular viewpoint through a coordinated campaign of deceptive information
dissemination. Therefore, the effect of online astroturfing can be defined as the degree to which an astroturfing campaign
alters the receiver’s opinion or level of conviction regarding a particular subject. Based on a synthesis of the Elaboration
Likelihood Model and informational social influence theory (Cialdini 2001b), we contend that the effects of online
astroturfing are related to four important mechanisms: multiple sources (Harkins et al. 1981b, 1987), uncertainty (Wooten et
al. 1998), perceived similarities (Cialdini 2001a), and receivers’ motivations (Cacioppo et al. 1986; Metzger 2007).
Multiple Source Effect
Multiple source effect was first identified by Harkins and Petty (1981a). In their experiment, they found that the subject
groups receiving multiple arguments from multiple sources were most persuaded when compared to other groups; the subject
groups receiving a single argument from multiple sources were less persuaded; and the subject groups receiving multiple
arguments from a single source were least persuaded. Their study indicated that both the number of sources and the number
of arguments play important roles in persuasion. Later, Harkins and Petty (1987) conducted another experiment to investigate
the reasons why multiple sources enhance processing. The results of their study are consistent with the previous research and
showed that multiple sources enhance message processing due to recipients’ perceptions that arguments from different
sources are more likely to be viewed as different perspectives provided by different individuals. In the context of the online
environment, most user-generated content has little or no verifiable identity attached to it and instead arbitrary identifiers
such as screen names or IP addresses are used. Thus, from a technical perspective, it is quite easy for an online astroturfer to
mask himself or herself through different identities, and users are likely to perceive these identities as independent information sources. Accordingly, Internet users are likely to believe the information is being provided by a number of different users. Thus, the following is proposed.
Proposition 1: The number of information sources influences the effect of online astroturfing.
Receivers’ Uncertainty
Intuitively, if users are uncertain about a particular subject, they are more likely to be influenced by the information provided
by others. In this case, informational social influence can be used to change the receiver’s opinions or cause them to waver (Wooten et al.
1998). Conversely, if one is very knowledgeable or experienced regarding a particular subject, he or she will be less likely to
accept others’ thoughts or opinions (Deutsch et al. 1955). In political or advertisement campaigns, individuals who are
uncertain about the candidate or product are likely to be vulnerable to online astroturfing. Therefore we believe that the
message receivers’ uncertainty is a major factor in astroturfing effectiveness and the following is proposed.
Proposition 2: Uncertainty influences the effect of online astroturfing.
Perceived Similarities
Similarity is another key factor in informational social influence. If the information receiver perceives himself or herself as
similar to the sender, the receiver is more likely to be influenced or adopt the opinions embodied in the message. Sometimes
the similarities of peers can be even more compelling than the message itself (Cialdini 2001a). In contrast, if a product review
is written from the perspective of a vendor or manufacturer, the potential consumer will be less likely to be influenced by this
advocated opinion. Cialdini (2001a) contends that “influence is often best exerted horizontally rather than vertically”. The
premise of peer power is that it has to come, or appear to come, from a peer. On the Internet, astroturfers do not have any
connection with information receivers, but they are adept at making messages sound as if they are generated by someone
similar to the receiver. To enhance the social influence created by online astroturfing, Kinniburgh and Denning (2006)
suggest a strategy of supporting “homegrown” blogs that do not appear to be written by an authoritative figure. Thus, even
though the spatial and social distance between information senders and receivers is large, the technology can be manipulated
to shorten the psychological distance and allow information receivers to perceive astroturfers as peers. Accordingly, the
following is proposed.
Proposition 3: Perceived similarities influence the effect of online astroturfing.
Levels of Involvement
Based on the previously discussed tenets of the Elaboration Likelihood Model (Cacioppo et al. 1986; Petty et al. 1996),
motivation or level of involvement is a critical factor in a user’s decision to either critically analyze the message or rely on peripheral cues. Metzger (2007) found that Internet information seekers with high motivation will likely evaluate opinions carefully
based on the quality of the information while low motivation information seekers will look to salient cues. Internet users are
similar to other information seekers in that they are more likely to use central route processing when motivated. Conversely,
when motivation or ability to judge the quality and trustworthiness of online sources is low, Internet users will likely rely on
peripheral or heuristic processing. Therefore, we believe that levels of involvement will act as a moderator on the effect of
online astroturfing and the following is proposed.
Proposition 4: Level of involvement moderates the effect of online astroturfing.
The perfect online astroturfing campaign relies on both skillful deceivers and vulnerable receivers. It is a powerful weapon
used to launch asymmetric attacks designed to deceive innocent voters, consumers, and other information seekers. With
comparatively modest resources, an online astroturfing campaign is able to generate substantial social influence over a target.
The nature of Internet communication makes it relatively difficult to collect and examine data from astroturfing activities.
Although techniques have been developed to detect online astroturfing (Ratkiewicz et al. 2011a; Ratkiewicz et al. 2011b),
they are only effective on certain kinds of automated astroturfing systems and specific media. Once a user’s opinion has been
influenced it is almost impossible to restore the opinion to the pre-influence state. Additionally, once an astroturfing
campaign gains traction, the fraudulent information will likely be redistributed by the manipulated users and become
indistinguishable from other user-generated content. Thus, Ratkiewicz et al. (2011b) suggest that identifying and terminating
online astroturfing at the initiation stage is critical.
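As a rough sketch of what detection at the initiation stage might look like, the following toy function flags messages that many distinct accounts post in near-identical form within a short window, echoing the burst pattern observed in the Coakley-Brown Twitter incident. The window size, the account threshold, and the whitespace-and-case normalization are illustrative assumptions, not the actual technique of Ratkiewicz et al.

```python
from collections import defaultdict

def flag_coordinated_bursts(posts, window_secs=3600, min_accounts=5):
    """Flag texts that many distinct accounts push out within a short window.

    `posts` is a list of (timestamp_secs, account_id, text) tuples. Texts are
    normalized (lowercased, whitespace collapsed) before grouping; the window
    and account threshold are illustrative assumptions.
    """
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[" ".join(text.lower().split())].append((ts, account))

    flagged = []
    for text, hits in by_text.items():
        hits.sort()  # order by timestamp
        for ts0, _ in hits:
            # distinct accounts posting this text within the window from ts0
            accounts = {acct for ts, acct in hits if ts0 <= ts <= ts0 + window_secs}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged

# Six accounts posting the same smear within minutes get flagged;
# an ordinary isolated post does not.
posts = [(t * 60, f"user{t}", "Candidate X lied about taxes!") for t in range(6)]
posts += [(7200, "alice", "Lovely weather today")]
print(flag_coordinated_bursts(posts))  # ['candidate x lied about taxes!']
```

A real detector would of course need fuzzy text matching and network features, since campaign messages are rarely verbatim duplicates; this sketch only captures the burst-of-distinct-accounts signal.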
In the present study we defined online astroturfing as the dissemination of deceptive opinions by imposters posing as
autonomous individuals on the Internet with the intention of promoting a specific agenda. It can be motivated by political,
business, or military agendas and initiated by automated mechanisms or human actors. Additionally, we examined the
theoretical underpinnings of related research to identify the attributes of online astroturfing. Finally, we developed a set of
propositions based on the theoretical foundations to serve as the basis for future research. This study contributes to the
limited body of knowledge related to online astroturfing by identifying four key concepts that are likely to influence the
effectiveness of this tactic and have implications for both general IS research and specific areas of cyber security research
such as perception management and protection of information resources. These key concepts include the multiple source
effect, receiver uncertainty, perceived similarities between the sender and receiver, and the level of the receiver’s
involvement. However, our discussion regarding the effective attributes of online astroturfing may not be conclusive and
should be supplemented by further investigation. We contend that the escalation of astroturfing activity could have a
profound effect on the credibility of all Internet information resources and this study provides additional insights on attributes
and mechanisms behind this phenomenon, which are of interest to the scholarly community, policy makers, and practitioners.
References

1. Cacioppo, J.T., Petty, R.E., Kao, C.F., and Rodriguez, R. "Central and peripheral routes to persuasion: An individual difference perspective," Journal of Personality and Social Psychology (51:5) 1986, p 1032.
2. Chen, Y., and Xie, J. "Online consumer review: Word-of-mouth as a new element of marketing communication mix,"
Management Science (54:3) 2008, pp 477-491.
3. Cho, C., Martens, M., Kim, H., and Rodrigue, M. "Astroturfing Global Warming: It Isn’t Always Greener on the Other
Side of the Fence," Journal of Business Ethics (104:4) 2011, pp 571-587.
4. Chu, Z., Gianvecchio, S., Wang, H., and Jajodia, S. "Who is tweeting on twitter: human, bot, or cyborg?," Proceedings
of the 26th Annual Computer Security Applications Conference, ACM, 2010, pp. 21-30.
5. Cialdini, R.B. "Harnessing the science of persuasion," Harvard Business Review (79:9) 2001a, pp 72-81.
6. Cialdini, R.B. Influence: Science and practice Allyn and Bacon Boston, MA, 2001b.
7. Cox, J.L., Martinez, E.R., and Quinlan, K.B. "Blogs and the corporation: managing the risk, reaping the benefits," The
Journal of Business Strategy (29:3) 2008, pp 4-12.
8. Daniels, J. "Cloaked websites: propaganda, cyber-racism and epistemology in the digital era," New Media & Society
(11:5) 2009, pp 659-683.
9. Dellarocas, C., Zhang, X.M., and Awad, N.F. "Exploring the value of online product reviews in forecasting sales: The
case of motion pictures," Journal of Interactive Marketing (21:4) 2007, pp 23-45.
10. Deutsch, M., and Gerard, H.B. "A study of normative and informational social influences upon individual judgment,"
The Journal of Abnormal and Social Psychology (51:3) 1955, p 629.
11. Greenwald, R., Gilliam, J., Smith, D., Tully, K., Gordon, C.M., Cheek, D., Brock, J., Florio, R., Frizzell, J., and
Cronkite, W. Wal-Mart: The high cost of low price Disinformation Company, 2005.
12. Hancock, J.T. "Digital deception," The Oxford Handbook of Internet Psychology, 2007, pp 289-301.
13. Harkins, S.G., and Petty, R.E. "The multiple source effect in persuasion," Personality and Social Psychology Bulletin
(7:4) 1981a, p 627.
14. Harkins, S.G., and Petty, R.E. "The multiple source effect in persuasion: The effects of distraction," Personality and Social Psychology Bulletin (7:4) 1981b, pp 627-635.
15. Harkins, S.G., and Petty, R.E. "Information utility and the multiple source effect," Journal of Personality and Social Psychology (52:2) 1987, p 260.
16. Hoggan, J. Climate cover-up: The crusade to deny global warming Greystone Books, 2009.
17. Hu, N., Liu, L., and Sambamurthy, V. "Fraud detection in online consumer reviews," Decision Support Systems (50:3)
2011, pp 614-626.
18. Hu, N., Pavlou, P.A., and Zhang, J. "Can online reviews reveal a product's true quality?: empirical findings and
analytical modeling of Online word-of-mouth communication," Proceedings of the 7th ACM conference on
Electronic commerce, ACM, 2006, pp. 324-330.
19. Jakobsson, M. The Death of the Internet Wiley-IEEE Computer Society Press, 2012.
20. Kelman, H.C. "Compliance, identification, and internalization: Three processes of attitude change," The Journal of
Conflict Resolution (2:1) 1958, pp 51-60.
21. Lee, C.W. "The roots of astroturfing," 2010.
22. Lyon, T.P., and Maxwell, J.W. "Astroturf: Interest Group Lobbying and Corporate Strategy," Journal of Economics &
Management Strategy (13:4) 2004, pp 561-597.
23. Mackenzie, K., and Pickard, J. "Lobbying memo splits US oil industry," in: Financial Times, London (UK), United
Kingdom, London (UK), 2009, p. 1.
24. Mackie, G. "Astroturfing Infotopia," Theoria: A Journal of Social & Political Theory (56:119) 2009, pp 30-56.
25. MacKinnon, R. "China's 'Networked Authoritarianism'," Journal of Democracy (22:2) 2011, pp 32-46.
26. Mattingly, J.E. "Radar Screens, Astroturf, and Dirty Work: A Qualitative Exploration of Structure and Process in
Corporate Political Action," Business and Society Review (111:2) 2006, pp 193-221.
27. McKenna, K.Y.A., and Bargh, J.A. "Plan 9 from cyberspace: The implications of the Internet for personality and social
psychology," Personality and Social Psychology Review (4:1) 2000, pp 57-75.
28. McNutt, J., and Boland, K. "Astroturf, technology and the future of community mobilization: Implications for nonprofit
theory," J. Soc. & Soc. Welfare (34) 2007, p 165.
29. McNutt, J.G. "Researching Advocacy Groups: Internet Sources for Research about Public Interest Groups and Social
Movement Organizations," Journal of Policy Practice (9:3-4) 2010, pp 308-312.
30. Metzger, M.J. "Making sense of credibility on the Web: Models for evaluating online information and recommendations
for future research," Journal of the American Society for Information Science and Technology (58:13) 2007, pp 2078-2091.
31. Morris, M., and Ogan, C. "The Internet as Mass Medium," Journal of Computer-Mediated Communication (1:4) 1996.
32. Mustafaraj, E., and Metaxas, P. "From obscurity to prominence in minutes: Political speech and real-time search," 2010.
33. Norman, A. The Case Against Wal-Mart, Brigantine Media, 2004.
34. Petty, R.E., and Cacioppo, J.T. Attitudes and persuasion: Classic and contemporary approaches, Westview Press, 1996.
35. Pfister, D.S. "The logos of the blogosphere: Flooding the zone, invention, and attention in the Lott imbroglio,"
Argumentation and Advocacy (47:3), American Forensic Association, 2011, pp. 141-162.
36. Quandt, T. "What’s left of trust in a network society? An evolutionary model and critical discussion of trust and societal
communication," European Journal of Communication (27:1) 2012, pp 7-21.
37. Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Flammini, A., and Menczer, F. "Detecting and tracking political
abuse in social media," Proceedings of ICWSM, 2011a.
38. Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., and Menczer, F. "Truthy: mapping the
spread of astroturf in microblog streams," in: Proceedings of the 20th international conference companion on World
wide web, ACM, Hyderabad, India, 2011b, pp. 249-252.
39. Senecal, S., and Nantel, J. "The influence of online product recommendations on consumers’ online choices," Journal of
Retailing (80:2) 2004, pp 159-169.
40. Stajano, F., and Wilson, P. "Understanding scam victims: seven principles for systems security," Communications of the
ACM (54:3) 2011, pp 70-75.
41. Stauber, J., and Rampton, S. Toxic Sludge Is Good For You: Lies, Damn Lies and the Public Relations Industry, Common
Courage Press, 2002.
42. Tumasjan, A., Sprenger, T.O., Sandner, P.G., and Welpe, I.M. "Predicting elections with Twitter: What 140 characters
reveal about political sentiment," Proceedings of the Fourth International AAAI Conference on Weblogs and Social
Media, 2010, pp. 178-185.
43. Wooten, D.B., and Reed II, A. "Informational influence and the ambiguity of product experience: Order effects on the
weighting of evidence," Journal of Consumer Psychology, 1998.