Social Media Ethics: A Rawlsian Approach to Hypertargeting
and Psychometrics in Political and Commercial Campaigns
MORTEN BAY, Center for the Digital Future at the USC Annenberg School, USA
Targeted social media advertising based on psychometric user profiling has emerged as an effective way of
reaching individuals who are predisposed to accept and be persuaded by the advertising message. This article
argues that in the case of political advertising, this may present a democratic and ethical challenge. Hypertargeting
methods such as psychometrics can "crowd out" political communication with opposing views due to
individual attention and time limitations, creating inequities in the access to information essential for voting
decisions. Psychometrics also appears to have been used to spread both information and misinformation
through social media in recent elections in the U.S. and Europe. This article is an applied ethics
study of these methods in the context of democratic processes and compared to purely commercial situations.
The ethical approach is based on the theoretical, contractarian work of John Rawls, which serves as a lens
through which the author examines whether the rights of individuals, as Rawls attributes them, are violated
by this practice. The article concludes that within a Rawlsian framework, use of psychometrics in commercial
advertising on social media platforms, though not immune to criticism, is not necessarily unethical. In
a democracy, however, the individual cannot abandon the consumption of political information, and since
using psychometrics in political campaigning makes access to such information unequal, it violates Rawlsian
ethics and should be regulated.
CCS Concepts: • Human-centered computing → Social media; • Security and privacy → Social aspects
of security and privacy; • Information systems → Social advertising;
Additional Key Words and Phrases: Social media, ethics, data collection, targeting, psychometrics
ACM Reference format:
Morten Bay. 2018. Social Media Ethics: A Rawlsian Approach to Hypertargeting and Psychometrics in Political
and Commercial Campaigns. ACM Trans. Soc. Comput. 1, 4, Article 16 (December 2018), 14 pages.
https://doi.org/10.1145/3281450
1 INTRODUCTION
Collecting data about a social media user's behavior to achieve higher precision in the targeting
of online ads is now a common practice, and several studies have shown the high efficacy of such
methods [12,42,44,68]. The construction of a data profile on a user is one of the main monetization
tools for providers of social media services, and the tools of this trade are constantly evolving [64,
80]. Recently, some attention has been paid to the concept of psychometrics and their utility in
describing personal traits of social media users to subsequently predict their behavior when exposed
Authors’ addresses: M. Bay, 2021 1/2 Talmadge Street, Los Angeles, CA 90027; email: mortenbay@live.com.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2018 Association for Computing Machinery.
2469-7818/2018/12-ART16 $15.00
https://doi.org/10.1145/3281450
ACM Transactions on Social Computing, Vol. 1, No. 4, Article 16. Publication date: December 2018.
to hyper-targeted advertising [32,40]. The psychometric trend has spilled over from advertising
and marketing into other realms of strategic social media persuasion, most notably politics.
Psychometrics is but one method of extremely efficient hypertargeting that is currently moving from
the commercial sphere into the sphere of public discourse and democratic processes. The main
question explored here is whether there is an ethical difference between use of psychometrics for
political versus commercial purposes.
Though it can be argued that both situations have a similar, asymmetrical power balance,
persuasion tactics in, e.g., elections, are part of the democratic process itself, and for this reason they
arguably deserve a higher level of normative scrutiny. The ethical discussion is particularly
pertinent as regulators consider what kind of legislation social media should adhere to. For example,
in traditional media forms, such as TV, radio, and print, political advertising is regulated differently
than commercial advertising. But even though the Internet and social media are significant
channels for political advertising, political campaigning on the Internet is not regulated nearly
as strictly as on other platforms. As Dykhne [14] shows, the Federal Election Commission (FEC) has
made a distinction between TV and online ads, even though the same campaign videos that are
expensive to run on TV can appear on social media for free. According to Dykhne, a campaign
video uploaded and shared by a Facebook user would require no disclosure of who is behind it,
since disclosure is only required if a fee was paid to show the ad. It is legal for a campaign to
distribute a link to the video through social media and e-mail without the need for disclosure.
Furthermore, the FEC exempts small-size ads from disclosure if the disclosure cannot be "conveniently printed"
[14, p. 355]. The target of this exemption was originally bumper stickers and badge pins, but it has
been extended in practice to online ads such as those on Google and Facebook due to their size
on the screen. The FEC has been unable to rule definitively on this matter due to disagreement
among the commissioners.
Facebook has recently implemented a requirement of disclosure for political advertisers on the
platform [48], after its association with analytics firm Cambridge Analytica became the subject
of public scrutiny. It was revealed that the personal data of at least 87 million Facebook users
had been collected by a Cambridge Analytica researcher in 2014 and used for political purposes [10].
Although the users had all accepted both Facebook's Terms of Service, which at the time allowed
for this specific type of data collection, and the conditions stated by the app used to collect the
data, some scholars argue that the available information did not sufficiently enable the user to give
truly informed consent [77]. These events ignited a new debate over political communication on
social media and how political advertising should be regulated. The ethical concerns related to this
debate are the motivation for this article.
1.1 Method
Building on a previously presented version of this article [5], the present study is an applied ethics
case in which the ethics of political philosopher John Rawls are applied to the use of psychometrics
in social media campaigning. Applying ethical theory to evaluate the viability of a scientific
research tool/method or a given technology is a widespread method of ethical inquiry
within Science and Technology Studies (STS). Regardless of the theoretical background, these
inquiries are often used for evaluation purposes, e.g., to assess whether it is prudent to employ or
even legally approve a certain practice, method, or tool/instrument/technology from a moral
standpoint. In applied ethics, parameters such as efficiency, capability, or accuracy that might normally
inform decision-making or policy processes are not important in themselves. These parameters
only become important in the ethical evaluation if conditions imbue them with moral value. For
example, a particular method's accuracy score does not influence the ethical evaluation unless it
can convincingly be argued that the score should be prioritized over other, more general, ethical
principles, such as human rights. The method analyzed here, psychometrics, was famously used
as an analytical model in the data collection and processing efforts of Cambridge Analytica, a
company that was hired to collect and process data during the 2016 election campaign of Donald
Trump.
1.2 Literature Review
The question of whether the use of psychometrics in social media is ethical has not been the
subject of much academic exploration. The ethics of using psychometrics in clinical psychology for
individual diagnosis have been explored at great length [8,60,65], but the application of the method
specifically to social media marketing and political campaigning remains largely unexplored. Studies of
advertising more broadly have engaged with ethics for decades, and the industry has adopted
ethical codes that are applied to the practice. Murphy [58] provides an excellent overview of the many
decades of work building ethical frameworks for marketing, and Robinson [64] has even shown
how data mining can be incorporated into the ethics statements of relevant industry associations.
As early as 1999, Austin and Reed [41] proposed a set of ethics guidelines for Internet marketing
towards children, but though these are highly applicable overall, they do not account for the
higher accuracy of current targeting methods. Research involving human participants on social
media has also been explored from an ethical standpoint [7,57,75,83], but the majority of these
explorations are focused on research for scientific purposes, rather than political or commercial
communication.
The related method of microtargeting has also been explored [3,17,81] from an ethical
standpoint by several scholars. Microtargeting is the currently dominant targeting method in social
media marketing, but it also differs from the use of psychometrics in a significant way: predictions
about how a user will behave when given a certain stimulus through social media are (most
often) made based on data the user has somehow agreed to surrender. Since psychometrics
represents a scientific method of psychological evaluation, conclusions can be drawn about the mental
health/state and personality of the individual that far exceed predictions made through the most
popular microtargeting methods in use today.
Still, Barocas [4] shows how microtargeting in political campaigning can contribute to a
chilling effect, in which voters will not express their opinion because of privacy concerns, and how
microtargeting could lead campaigns to promote divisive wedge issues. He also cites Howard as
stating that microtargeting can be used for voter disenfranchisement, for example, by targeting the
communication of certain information to one voter group but not another.
Another related research area is Data Ethics, which can be viewed as a subfield of Information
Ethics and the ethics of information technology. Floridi and Taddeo [25] consider Data Ethics more
encompassing than the term initially indicates, and argue that terms such as "computer ethics,"
"machine ethics," and "robo-ethics" are becoming obsolete, as "it is not the hardware that causes
ethical problems, it is what the hardware does with the software and the data that represents the
source of our new difficulties" [25]. They define Data Ethics as a branch of ethics that involves
three "axes of research": the ethics of data, the ethics of algorithms, and the ethics of practices.
The ethics of data axis is concerned with the collection and analysis of data, i.e., issues of privacy,
discrimination, risk, transparency, and more. The ethics of algorithms concerns itself with the
design of algorithms, bias, and auditing, and it is also here that many questions regarding machine learning
and AI would be discussed, according to Floridi and Taddeo. Finally, the ethics of practices
concerns "the responsibilities and liabilities of people and organizations in charge of data
processes" [25]. This article is concerned with the latter axis, as the situation under analysis here
concerns the use of psychometrics in the context of political and commercial advantage and not, say,
for research benefiting humanity overall.
1.3 Rawlsian Ethics
John Rawls has been called "the most important political philosopher of the 20th Century" [13].
His main contributions are his theoretical conceptions of liberty and of justice as fairness. In his
seminal 1971 book A Theory of Justice, Rawls reinvigorated the concept of the social contract
in discussions of how an ideal society should be structured. He further argued against relying on
utilitarianism as a primary principle for decision-making, since even those solutions that bring the
biggest benefits to the largest groups of people can be so unfair toward the minority that they are
unethical and socially undesirable. Instead of the dominant utilitarian mode of the 20th and early
21st centuries, Rawls argues for structuring society on a number of principles focused on equality
of opportunity. These include the maximin principle, under which decision-makers should choose
the option whose worst possible outcome is least bad; the principle of guaranteeing basic
liberties; and the related difference principle, which states that any inequalities of power, social
status, or distribution of income must benefit those least advantaged in society [61].
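The maximin rule described above can be read as a simple decision procedure: rank each option by its worst possible outcome and pick the option whose worst outcome is least bad. A minimal sketch, with a payoff table invented purely for illustration (the option names and numbers are not from Rawls):

```python
def maximin(options):
    """Maximin rule: options maps each choice to its list of
    possible payoffs; return the choice whose worst payoff is
    the highest among all the worst payoffs."""
    return max(options, key=lambda name: min(options[name]))

# Illustrative payoffs only: policy_a has a higher upside but a
# far more severe worst case than policy_b.
policies = {
    "policy_a": [10, 2, -5],
    "policy_b": [4, 3, 1],
}

# Maximin prefers policy_b: its worst outcome (1) beats -5.
print(maximin(policies))
```

A purely utilitarian rule would instead compare expected or total payoffs; the contrast is the point Rawls draws on when he rejects utilitarianism for basic social structures.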
To aid in the decision-making process and the creation of fair basic structures in society, Rawls
proposes looking through an imaginary "veil of ignorance" from the "original position," in which
ideal structures can be created without regard to prior knowledge of status, relations, or conditions.
Looking at the other people with whom we are building a society through the veil, we cannot see
whatever wealth, distinction, primacy, knowledge, history, or background they have brought with
them to the original position starting point. We are ignorant of these matters, and any ethical
principles being discussed must therefore be based only on what would be of benefit to everyone,
regardless of other, personal characteristics. The term "social contract" is normally used for
such a collective agreement about ethical principles that help define the basic structure of society.
The so-called "contractarian" tradition in ethics and philosophy can be traced back to ancient
Greek philosophers and was refined by philosophers such as Hobbes and Rousseau in the 1600s
and 1700s, until it was revived and reframed by Rawls in the 1970s [58].
Employing such Rawlsian ethics and principles when discussing ethics in information technology
or data is not new. Robinson [2] has done impressive work on a Rawlsian approach to data
mining in general, and Hoffman [37] has laid out the many ways in which Rawlsian ethics apply to
different technologies. Rawls' veil of ignorance entails equal access to information in a fair society,
according to Don Fallis [22], as it would be impossible for those constructing a fair society to do
so without such information. The point of the veil of ignorance is to make decisions without any
preconceptions in a situation where stakeholders possess similar amounts of information. Van den
Hoven and Rooksby also argue specifically that access to information is a candidate to be one of
Rawls' primary goods, i.e., something that everyone has a right to obtain in a fair and just
society [38].
But Rawls is in fact even more specific in his assertion that to exercise their political liberties and
make use of their primary goods in the democratic process, members of society require "assurance
of a more even access to public media" [26, p. 149]. Rawls sees it as imperative that there is equal
access to the educational resources necessary to make informed decisions in the deliberative
process he calls "Public Reason" [63, p. 216]. This, as I shall show below, has significant implications
for the use of psychometrics in social media campaigning.
2 PSYCHOMETRICS FOR ADVERTISING ON COMMON SOCIAL MEDIA
PLATFORMS
Psychometrics, understood broadly as personality traits and behaviors that can be
evaluated/measured and scored for different purposes, is a field with a long history that can be traced
all the way back to Darwin [44]. While there are some standard psychometric models, oftentimes
social media platforms will create their own metrics to build the social profile needed to increase
the accuracy of targeted advertising.
Facebook's use of psychometrics for advertising purposes has been explored in popular media
[47]. Garcia-Martinez [29], describing a (since abandoned) tool constructed by the Facebook
data team, explains how its algorithm would start "...spitting out...every ethnic stereotype you
can imagine." The biases in such algorithms have been documented and critiqued by Noble [59,
66], Srinivasan [72], and many others. Garcia-Martinez equates the high utilization of payday
loans among some minority groups with affluent San Francisco residents' ability to buy $100 yoga
pants, arguing that targeted advertising treats everyone equally, even if the advertised products are
differentiated by income [6, 13th para.]. However, as I will show in the following, societal conditions
related to social media platforms complicate these matters.
2.1 The Cambridge Analytica/Facebook Scandal
In 2015, Youyou, Kosinski, and Stillwell published a much-noticed paper showing that
software was able to predict personality traits more effectively than humans, at least within the
confines set by the paper [82]. Using only users' Facebook Likes as the main source of data,
computers running predictive analytics software were able to place users more accurately within the
so-called Big Five model than the users' Facebook friends could. The Big Five or Five-Factor
psychometrics model measures the prevalence of five personality traits that spell OCEAN: Openness
to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism [54]. A total of
86,220 participants filled out a 100-item questionnaire that was used to score the different traits,
and this self-evaluation was then compared to Big Five scores provided by Facebook friends with
varying relations to the participants, as well as to the above-mentioned software basing its evaluation
on Likes. The computer-based evaluations had an average accuracy of 0.56, compared to an average
accuracy of 0.49 for the human evaluation.
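The accuracy figures of 0.56 and 0.49 are correlations between predicted trait scores and the participants' own questionnaire scores. A minimal sketch of how such an agreement score can be computed as a Pearson correlation (the score values below are invented for illustration and are not data from the study):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of
    trait scores: covariance divided by the product of the
    standard deviations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical Openness scores: self-reported via questionnaire
# vs. predicted from Facebook Likes (values invented).
self_reported = [3.2, 4.1, 2.8, 4.6, 3.9]
predicted = [3.0, 4.3, 3.1, 4.4, 3.6]

print(pearson(self_reported, predicted))
```

In the study, this kind of correlation was computed per judge (computer model or human friend) and averaged, yielding the 0.56 versus 0.49 comparison cited above.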
This caught the attention of the media, but also of Aleksandr Kogan, an assistant professor
at Cambridge University, where the above study was performed. Based on his knowledge of
the model, Kogan provided a similar model to Strategic Communications Laboratories (SCL),
a company that uses psychological modeling to influence voter groups. In 2013, Cambridge
Analytica (CA) was formed as an SCL subsidiary that would focus on U.S. elections. The 2016
campaigns of Ted Cruz and Donald Trump employed CA, which was also connected to the Leave
campaign in the UK "Brexit" referendum [32]. Beierle et al. [6] point to the widespread belief that
CA played a significant role in the election of Donald Trump using the psychometrics model.
In 2018, it was reported that Aleksandr Kogan had collected personal data from more than
50 million Facebook users in 2014, data that were later used for hyper-targeting in the 2016
presidential election [31]. This number was later adjusted to 87 million by Facebook [21]. Kogan
accomplished this by creating an app for the Facebook platform that provided a personality test
for the users. A smaller number of users consented to letting the app collect data about them
during the test and use them for unspecified analysis. Facebook's advertising and data collection
rules at the time (these were changed in 2015) made it possible for Kogan to also collect data
from people in the consenting users' extended networks, leading to the high number of affected
users [21,56]. Thus, simply by accepting Facebook's terms of service, users were exposed to data
collection by third parties, which raises questions of contractual responsibility and accountability.
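The amplification mechanism described here, in which a relatively small set of consenting users exposes data from their much larger extended networks, can be sketched as a one-hop expansion over a friend graph. The graph and names below are invented purely for illustration:

```python
# Toy friend graph: each user maps to their list of friends
# (names and connections invented for illustration).
friends = {
    "ann": ["bo", "cy", "di"],
    "bo": ["ann", "ed"],
    "cy": ["ann", "fay", "gus"],
}

def affected_users(consenting, graph):
    """Return the consenting users plus everyone in their friend
    lists -- the set whose data a one-hop collection rule exposes,
    even though only the consenting users agreed."""
    reached = set(consenting)
    for user in consenting:
        reached.update(graph.get(user, []))
    return reached

# Two consents expose six users in total; "ed" stays out because
# his only friend, bo, did not consent.
print(sorted(affected_users({"ann", "cy"}, friends)))
```

Even in this toy example two consents expose three times as many users, which is the same structural effect that turned a personality-test app's consenting audience into tens of millions of affected accounts.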
2.2 The Ability to Exit Social Media
In marketing ethics, the contract-based theories of John Rawls have been explored by several
scholars [15,16,55]. Freeman [27] expands Rawls' conception of a fair social contract to
business contracts, writing that a contract is only fair "if the parties to the contract would agree to it in
ignorance of their actual stakes," echoing Rawls' "veil of ignorance" concept [61]. Is the contract
that users enter into with Facebook such a "fair" contract? The ethics of these end-user license
agreements and similar texts have been discussed elsewhere, particularly in light of the privacy
concerns they give rise to [40,51], as well as their asymmetrical nature [34].
Some scholars argue that the pervasiveness of social media may apply pressure on the individual
to engage with them or become a social outcast [28,79]. Yet ultimately, the decision to share data
still rests with the individual, even in the face of a contractual relationship characterized by
asymmetry. The cost/benefit analysis of what a user gets in return for giving away personal information
and agreeing to become exposed to targeted advertising is important, as is a discussion of the
asymmetrical nature of the contract offered, but both topics are outside the scope of this article.
Instead, I will pose the question: Is an exit from social media possible if you want to remain a
functioning member of a democratic society? Can you acquire the information needed to be an
informed voter without engaging with social media? Since close to half the population of the entire
world uses social media, when one counts the worldwide use of Facebook and local platforms in,
e.g., China and Russia [73], is it even possible to say that you are meaningfully engaged with other
members of your society if you do not do so through social media? Ling and Schroeder [67] have
indicated how social media are increasingly becoming a Durkheimian social fact, since they are a
prerequisite for communication in a majority of social groups. We may not yet be in a situation
where participation in society requires social media participation in the same way that it requires
the use of money or roads, but it is fair to say that social media is fast becoming a dominant mode
of communication between humans.
As for the question of social media's unavoidability in the democratic process, I argue in the
following that it has now become difficult to acquire the requisite information needed to be an
informed voter without social media. Political operatives and elected officials have seized upon social
media as a way of communicating directly with voters without the filters and gatekeeping of
news media [74]. More importantly, news outlets use social media to promote their news stories in
a way that was not even possible before social media [46]. They choose this push strategy because
news consumers can now mostly be found on social media. In the U.S., a recent survey showed that
65% of Americans get their news from social media, which is more than any other single media
category. For 44%, Facebook specifically is their main source of news [70]. In a survey conducted
during the 2016 presidential election campaign period, 44% of American survey respondents said
they had read news about the election on social media prior to being asked. TV still held the top
spot for election news, but even so, the highest percentage for any preferred media category for
election content was local TV news programs at 57% [30], which means that the 44% who read
election news on social media is a quite significant number.1
With a large majority of Americans accessing news through social media and close to half the
population using social media for election news in 2016, it is not unreasonable to argue that
social media is now the most significant channel for distribution of content regarding political
discourse. Abandoning social media may therefore seriously impact a person's ability to be a fully
informed voter.
2.3 Social Contracts and Social Media
John Rawls argues for fair social contracts in a society that is difficult for individuals to abandon,
but also argues that individuals who have entered into such contracts freely and fairly must honor
them to ensure social cooperation. As mentioned, data collection for targeting of commercial
advertising is—technically—accepted freely by the user when accepting the terms of service on a
1Note that this survey was conducted in January 2016, before the post-convention ramp-up of the election campaigns.
social media platform. Issues of fairness may arise, such as ethical problems related to the platform
changing the terms and conditions without alerting the user, not upholding its part of the
contract, selling psychometric data to third parties without informing the user, or not making it
clear that psychometric profiling is taking place. Rawlsian fairness in this sense would also require
that users are informed about the conditions of the contract in a way that is equally accessible to
everyone. It is the responsibility of the social media platform to convey the terms of use in a
manner that is simple enough for all users and cannot be overlooked, particularly considering the
asymmetrical power relation between platform and user. However, if users enter into the contract
fully informed, Rawls provides no basis for the argument that the use of psychometrics is unethical
for commercial purposes. But as I will now proceed to show, the opposite is the case for political
advertising.
3 PSYCHOMETRICS IN POLITICAL PERSUASION ON SOCIAL MEDIA
When it comes to using psychometrics in social media as a persuasive tactic for political or strategic
purposes, conditions are different, and there are other ethical concerns at play. At the time of
writing, it is entirely possible to be a member of society without engaging with social media. It
is substantially more difficult to disengage from society altogether. In Rawlsian ethics, society is
constructed with fairness as a guiding principle, which in Rawls' view requires social cooperation
from all of society's members. Considering the singular individual outside society is pointless in this
context, not least because psychometrics and social media are contingent on interactions between humans.
The Rawlsian question now becomes: Is it fair to use psychometrics in social media, given that we
strive for a just society? Can a society that holds fairness as a guiding principle allow for the use
of psychometrics in social media when the intent is political?
Rawls is quite clear on this point. For a society to be just, it must be "well-ordered" [62, p. 8].
This means that the basic principles governing individuals and society, chosen by those who
constructed the society, must be transparent to the members of that society. The mechanisms of
the system of governance must be clear to these individuals, and they must be able to participate
in these mechanisms. This does not mean that, e.g., law enforcement in society must be completely
transparent and nothing can be classified. But it does mean that members of the society must find
transparency in the mechanisms through which something is kept secret from them and must
agree that this ability should be given to law enforcement [62].
In other words, Rawls argues that in a just and fair society, its members must be able to monitor
the mechanisms of democracy to ensure that society stays "well-ordered," and they must have
access to the information needed to do so. Here, we hit upon the first challenge when it comes
to the use of psychometrics in social media in situations of political persuasion. I argue that
the precise targeting of information delivery may isolate the individual from other information
sources, if the volume of the information delivered through targeting is so high that it effectively
drowns out other sources.
3.1 The Case of Michigan in the 2016 U.S. Presidential Election
During the 2016 presidential election campaign in the U.S., the state of Michigan was key to Donald
Trump's victory. It was also one of the states where Trump's margin of victory was smallest, with only
a 0.2% vote-share advantage over his opponent, Hillary Clinton. It seems fair to assume that with such a
small margin, any number of events could have contributed to the result going in one direction or
the other. In the jigsaw puzzle of variables that gave Trump this small lead, the removal or
reversal of even a single variable could have caused his advantage to shrink (or grow).
Thus, the information available to the voters may have been crucial in deciding the Michigan
vote. A direct causality between the information made available to Michigan voters and the result
has yet to be established. But it is not unreasonable to assume that the available information could
be part of the equation. Looking at the state of Michigan in the 2016 U.S. presidential election,
Kaminska et al. [43] found that misleading news information, or what the authors call "junk news,"
was shared on Twitter as often as what the authors define as "professional" news, each making up
approximately 33% of the total content shared in the days leading up to the election. The authors
prefer the term "junk news" over "fake news" due to the undefinable character of the
latter term, which, as Caplan, Hanson, and Donovan have also pointed out, has been co-opted by
political factions who use it differently [11].
The Howard et al. study cited by Kaminska et al. looks at all election-related news content
posted to Twitter by users deemed to be located in Michigan [39]. The study does not differentiate
between types of users, except for the roughly 2% that were designated as bots. This means that the
studied tweets would include tweets from Michigan-based news outlets, but not from those based
in, say, New York. In other words, Michigan-based Twitter users would likely be exposed both
to news content from other users and to tweets from established "professional" news outlets.
However, studies going all the way back to Lazarsfeld in 1944 [45] show that only very few voters
let their positions be impacted directly by news, and that news stories usually have to be conveyed
by peers or thought leaders to change voters' minds. This type of two-step flow is exactly how news
is shared on social media, and since the dawn of online media studies, scholars have shown how
users tend to trust their peers over institutions, some bringing Lazarsfeld's findings into the social
media era [36,53,71,78].
The Howard et al. study can therefore be viewed as analyzing only the news content that has
a high likelihood of inuencing voter choice, precisely because it is based on content posted by
Michigan-based users, i.e., peers. With the equal distribution of “junk” and “professional” news in
the studied tweets, even if you ignore the existence of filter bubbles and echo chambers [23] and
assume that every voter is equally exposed to the different sources of information, every other
piece of influential, election-related news seen by voters through Twitter was false or misleading.
When half the news stories conveyed in this manner are misleading, it can result in the crowding
out of useful, truthful information, which then becomes more difficult to obtain.
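The crowding-out claim can be made concrete with a small back-of-the-envelope model. The attention budget and feed shares below are illustrative assumptions of mine, not figures taken from the studies cited:

```python
def expected_truthful_items(attention_budget, truthful_share):
    """Expected number of truthful news items a voter consumes, given a
    fixed attention budget and the share of truthful items in the feed.
    Because attention is finite, every junk item displaces a truthful one."""
    return attention_budget * truthful_share

# A hypothetical voter with attention for 20 election-related tweets:
all_professional = expected_truthful_items(20, 1.0)  # junk-free feed -> 20.0
michigan_like = expected_truthful_items(20, 0.5)     # 50/50 feed -> 10.0
print(all_professional, michigan_like)
```

Under the roughly 50/50 split reported for Michigan, a voter's finite attention yields half as many truthful items as a junk-free feed would; this linear displacement is the crowding-out effect at issue.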
This is a clear violation of Rawls’ rule on the transparency of democratic mechanisms and the
ability of individuals to obtain the necessary information to express themselves in a democratic
system. Making truthful information harder to access also hinders individuals’ ability to express
themselves democratically. Now, most voters do not get all their information from social media,
just a substantial part of it [70]. Also, as mentioned above, voters can freely choose not to engage
in social media without disengaging from society. However, even if voters choose to do so, the
spillover effect from social media into other types of news media, as well as communication with
peers, is enough, I argue, to have a powerful impact on the overall information accessed by voters.
This can be seen in studies of how so-called “fake news” stories were picked up by traditional media
after first having appeared on social media [49, 50].
In other words, the impact on the generally available information pool happens regardless, and
an individual who is not on social media would likely still base voting decisions on at least some
social media-borne information.
4 VIOLATIONS
With Rawls’ aforementioned requirement of equality of media and information access in mind,
we hit upon the first of the two ways in which the use of psychometrics in political campaigning on
social media violates Rawlsian ethics. It is tempting to believe that there are almost no limits on
spaces to store or relay information on the Internet. Even if you assume that to be the case, the
emergence of the “attention economy” [52,69,76] showed that there is clearly a limit to how much
users of online services can consume of the information presented to them online. Persuasion on
social media is thus a zero-sum game and part of a persuader’s mission is to succeed in presenting
information in a way that blocks out competing, contradictory information.
As mentioned above, machine-learning-based psychometric targeting has been shown to target
individuals better than humans can, thus creating a situation where tailor-made information is
relayed at the individual level on social media. Now consider the situation as seen in Michigan,
where just as much misinformation was presented to the individual as information. Taking a cue
from Floridi [24], and assuming that untruthful information is in fact not information at all, but
misinformation, this means that the individual targeted by psychometric-based campaigning is
deprived of the full and free access to factual information required to participate in the democratic
process as Rawls understands it, at least insofar as the individual uses social media to access such
information. In Rawls’ ideal scenario, an individual would always be able to draw information
from sources other than social media because of equality in media and information access. So, is it
not just a question of educating the public to not trust social media for this kind of information?
I argue that it is not, in light of the expectations of users when accessing this type of information
on social media, and I will illustrate this in the next section.
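The individual-level tailoring described above can be sketched in a few lines. This is a deliberately simplified illustration: the page names, trait weights, and message variants are all invented for this example, whereas real systems (such as the models studied by Youyou et al. [82]) learn such weights from large sets of labeled user data:

```python
# Hypothetical, hand-picked weights: how strongly liking a page is taken
# to signal the "openness" trait of the Big Five. Real systems estimate
# such weights statistically; these values are purely illustrative.
OPENNESS_WEIGHTS = {
    "modern_art_page": 0.8,
    "philosophy_page": 0.6,
    "truck_page": -0.4,
    "gun_club_page": -0.7,
}

def openness_score(likes):
    """Sum the (invented) weights of a user's likes; higher = more open."""
    return sum(OPENNESS_WEIGHTS.get(like, 0.0) for like in likes)

def pick_ad_variant(likes, threshold=0.0):
    """Tailor the political message to the predicted trait. Each user is
    shown only the variant aimed at them -- the asymmetry at issue here."""
    if openness_score(likes) > threshold:
        return "change-and-novelty framing"
    return "tradition-and-security framing"

print(pick_ad_variant(["modern_art_page", "philosophy_page"]))  # change-and-novelty framing
print(pick_ad_variant(["truck_page", "gun_club_page"]))         # tradition-and-security framing
```

Even in this toy version, the ethically salient point is visible: no two users are guaranteed to see the same information about the same candidate or issue.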
4.1 Expectations of Transparency
Using once again the example of Facebook: the company and its representatives have stated several
times that they wish for the platform to be an objective and transparent venue for debate in which
all sides can be equally represented in a pluralist vision not unlike Rawls’ [61–63]. There has been
much critique of the assumed objectivity and transparency of social media platforms [28, 59, 64,
72], but for the sake of the argument, I will go forward with the assumption that the social media
platforms in question have pure intentions and are at least working towards such a vision.
Facebook’s terms and conditions for advertisers [18] as well as its community standards [19]
are quite clear. Advertisers cannot make statements that are factually incorrect or are intended
to mislead the public. This also applies to sponsored posts in users’ news feeds. Those
using psychometrics to target users with misinformation or “fake news” are thus in violation of
Facebook’s rules. However, much of this misinformation is also spread through sock puppet (fake)
accounts, enabling a peer-to-peer virality. This is also in violation of Facebook’s rules, this time
with regards to both the terms and conditions for users [20] as well as the community guidelines.
I am not attempting to state the obvious here, that sources of fake news and misinformation on
Facebook are in violation of Facebook’s own rules. But I argue that users cannot be blamed for
expecting those rules to be followed by others and enforced by Facebook.
4.2 Hijacking Users’ Information Sources to Transmit Misinformation
If users have a reasonable expectation that Facebook’s own vision of transparency and pluralism is
foundational to the platform, and the rules are there to inhibit the spread of misinformation, then
the burden is—initially, at least—not on the users to separate misinformation from information.
Requiring a higher level of media and information literacy of users may be a pragmatic, quick-
and-dirty solution, but it would be forcing users to take responsibility for the unethical behavior
of others. From the standpoint of Rawlsian basic principles, it is simply unethical, in a well-ordered,
just, and fair society, to hijack (as in Michigan) half of the information available and instead present
misinformation, since it blocks the ability of individuals to express themselves fully as members
of a democratic society.
The role of psychometrics here is the hijacking part. It is important to note here that not all of
the misinformation, fake news, or junk news spread in Michigan came from the Trump campaign.
However, according to both Anderson and Horvath [1] and Grusin [33], some of it did and
was distributed to users through methods employing psychometrics. This targeting is so precise
that it is possible for misinformation to crowd out information presented to the individual user,
which—as mentioned—may be less problematic in a commercial context. But users who simply
engage with social media have no expectation of political campaigning that uses psychometrics,
and they are not free to disengage from the effects of social media campaigning, as that would
also mean disengaging from Rawlsian public reasoning.
Again, the user is free to leave social media or ignore advertising, but to be a moral individual
who participates in the democratic process, as Rawls prescribes, the user must be open to an array
of viewpoints [62,63] and therefore cannot simply tune out.
4.3 Uneven Information Access by Definition
Could psychometrics-based political campaigning on social media be used in ways that benefit the
individual’s ability to participate in the manner Rawls considers to be that person’s duty? I argue
that it cannot, and this brings me to the second way that this sort of use of psychometrics is in
violation of Rawlsian principles.
As mentioned above, Rawls considers it imperative to the free expression of an individual’s
political liberty that there be even access to information through public media. The word “even”
is important here, as it relates to the equities that dominate Rawls’ work. The purpose of psycho-
metrics in political campaigning on social media is to tailor the message as much as possible to
the individual user. This is, at its very foundation, a principle of inequity and asymmetry.
One of the few areas in which Rawls agrees with his contemporaries Habermas and Foucault
[26] is that there can be imbalances in communication between sender and receiver and that these
imbalances can be expressed in power relations. For at least Rawls and Habermas, this touches
upon the ethicality of democratic discourse itself, with Rawls arguing that individuals must enter
freely and equally into the public reasoning [63] and Habermas arguing that any sort of discourse
in the public sphere must be held to certain norms of truthfulness and fairness for it to benet
democracy [35]. Use of psychometrics in political social media campaigns runs counter to this.
The extreme precision and individual-level addressing of the user crowd out other viewpoints and
reduce the amount of pluralism in the discourse as mentioned above. But it also automatically
creates a power asymmetry that would not be acceptable under Rawls’ and Habermas’ doctrines
of democratic discourse mentioned above. They both argue for equity in the discourse, but if a
user sees only one aspect of one viewpoint, and another user only sees a different aspect of that
same viewpoint because of hypertargeting, then this equity does not exist. One user may not have
access to the same information as the other.
Another violation happens at the organizational level. If the individual user’s attention is a
battleground to be fought over through hypertargeting, then lack of access to the amounts of
data required to produce reliable psychometric-based predictions would put smaller players in the
political landscape at a disadvantage, inhibiting pluralism. Such a division of power would also
run counter to how Rawls proposes society’s basic structures should be constructed.
In other words, this is problematic even when the communication does not involve misinformation.
A principle of inequity lies at the very heart of hyper-targeted political communication, of which
psychometrics is the instrument du jour.
5 CONCLUSION
There are many other ethical aspects regarding the use of psychometrics in targeted advertising
on social media not discussed here. One example is the fact that psychometric methods such as
the “Big Five” method mentioned above can be viewed as unethical, involuntary psychological
assessment [2, 9]. Closer to the communication, media, and information studies fields, it can also
be debated whether psychometrics should be used in any sort of communication tactic (commer-
cial advertising included), particularly with the emergence of location- and identity-aware media
platforms in public spaces. What happens when psychometric measurement in advertising and
persuasion reveals something about you in public that you do not even know yourself yet? As
mentioned above, however, consent given under fair conditions may support this type of advertising
under Rawlsian ethics. The long-running debate over social profiles also raises questions
about the consequences of dening a person by what can, almost certainly, only be a small part of
a larger picture, even with the best psychometrics in place.
These and many other discussions will likely flare up in the future as psychometrics and other
means of hypertargeting take up larger and larger roles in our daily lives. The purpose of this
article’s application of Rawlsian ethics to psychometric-enabled political social media campaigning
is to contribute insights to social media ethics, the ethics of microtargeting, and data ethics in light of
the rise of psychometrics. As this article has only touched upon the bare minimum of the ethical
questions raised by psychometrics in political campaigning on social media, much still needs to be
explored. Further research could investigate the actual, general efficacy of psychometrics in this
context, and further inquiries should be made into the ethics of psychometrics in commercial
advertising contexts. Another avenue of inquiry could be explorations of other ethical systems
than Rawls’. How would a virtue ethicist, a utilitarian, or a libertarian view this topic? The policy
and regulation questions following from the above conclusions are also open for exploration.
REFERENCES
[1] Berit Anderson and Brett Horvath. 2017. The Rise of the Weaponized AI Propaganda Machine. Retrieved from
https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine.
[2] Azy Barak. 1999. Psychological applications on the internet: A discipline on the threshold of a new millennium. Appl.
Prev. Psychol. 8, 4 (1999), 231–245. DOI:https://doi.org/10.1016/S0962-1849(05)80038-1
[3] Oana Barbu. 2014. Advertising, microtargeting and social media. Procedia Soc. Behav. Sci. 163 (2014), 44–49. DOI:
https://doi.org/10.1016/j.sbspro.2014.12.284
[4] Solon Barocas. 2012. The price of precision: Voter microtargeting and its potential harms to the democratic process.
In Proceedings of the 1st Edition Workshop on Politics, Elections, and Data. 31–36.
[5] Morten Bay. 2018. The ethics of psychometrics in social media: A Rawlsian approach. In Proceedings of the 51st Hawaii
International Conference on System Sciences.
[6] Felix Beierle, Kai Grunert, Sebastian Gondor, and Viktor Schluter. 2017. Towards psychometrics-based friend recom-
mendations in social networking services. In Proceedings of the IEEE 6th International Conference on AI and Mobile
Services (AIMS’17). 105–108. DOI:https://doi.org/10.1109/AIMS.2017.22
[7] Jacqueline Lorene Bender, Alaina B. Cyr, Luk Arbuckle, and Lorraine E. Ferris. 2017. Ethics and privacy implications
of using the internet and social media to recruit participants for health research: A privacy-by-design framework for
online recruitment. J. Med. Internet Res. 19, 4 (2017), e104. DOI:https://doi.org/10.2196/jmir.7029
[8] David J. Berndt. 1983. Ethical and professional considerations in psychological assessment. Prof. Psychol. Res. Pract.
14, 5 (1983), 580.
[9] Allen Buchanan and Dan W. Brock. 1986. Deciding for Others. Milbank Q. 64 (1986), 17. DOI:https://doi.org/10.2307/
3349960
[10] Carole Cadwalladr and Emma Graham-Harrison. 2018. Revealed: 50 million Facebook profiles harvested for Cambridge
Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/
17/cambridge-analytica-facebook-influence-us-election.
[11] Robyn Caplan, Lauren Hanson, and Joan Donovan. 2018. Dead reckoning - Navigating content moderation after “fake
news.” Data and Society (2018), 40.
[12] Jianqing Chen and Jan Stallaert. 2014. An economic analysis of online advertising using behavioral targeting. MIS Q.
38, 2 (2014), 429–A7. DOI:https://doi.org/10.2139/ssrn.1787608
[13] Brian Duignan. 2010. The 100 Most Influential Philosophers of All Time. Britannica Educational Publishing.
[14] Irina Dykhne. 2018. Persuasive or deceptive? Native advertising in political campaigns. South. Calif. Law Rev. 91
(2018), 339–373.
[15] Georges Enderle. 2016. How can business ethics strengthen the social cohesion of a society? J. Bus. Ethics (2016),
1–11. DOI:https://doi.org/10.1007/s10551-016- 3196-5
[16] Georges Enderle and Patrick E. Murphy. 2009. Ethics and corporate social responsibility for marketing in the global
marketplace. In SAGE Handbook of International Marketing (2009), 504–531.
[17] Jeffrey C. Esparza. 2015. The personal computer vs. the voting rights act: How modern mapping technology and
ethically polarized voting work together to segregate voters. UMKC Law Rev. 84 (2015), 235–261.
[18] Facebook. Advertising Policies. Facebook.com. Retrieved from https://www.facebook.com/policies/ads/#.
[19] Facebook. Community Standards. Facebook.com. Retrieved from https://www.facebook.com/communitystandards.
[20] Facebook. Terms of Service. Facebook.com. Retrieved from https://www.facebook.com/terms.
[21] Facebook. 2018. Hard Questions: Q&A with Mark Zuckerberg on Protecting People’s Information | Facebook
Newsroom. FB.com. Retrieved from https://newsroom.fb.com/news/2018/04/hard-questions-protecting-peoples-
information/.
[22] Don Fallis. 2007. Information ethics for twenty-first century library professionals. Libr. Hi Tech 25, 1 (2007), 23–36.
[23] Seth Flaxman, Sharad Goel, and Justin M. Rao. 2016. Filter bubbles, echo chambers, and online news consumption.
Public Opin. Q. 80, S1 (2016), 298–320. DOI:https://doi.org/10.1093/poq/nfw006
[24] Luciano Floridi. 2008. Understanding epistemic relevance. Erkenntnis 69, 1 (2008), 69–92.
[25] Luciano Floridi and Mariarosaria Taddeo. 2016. What is data ethics? Philos. Trans. R. Soc. A Math.
Phys. Eng. Sci. 374, 2083 (2016), 1–5.
[26] Michel Foucault. 1970. The archaeology of knowledge. Soc. Sci. Inf. 9, 1 (1970), 175–185. DOI:https://doi.org/10.1177/
053901847000900108
[27] R. Edward Freeman and William M. Evan. 1979. A stakeholder theory of the modern corporation: Kantian capitalism.
Ethical Theory Bus. 3 (1979), 97–106. Retrieved from http://www.business.uzh.ch/professorships/strategy/stu/
BS/lecture/Evan_Freeman_1988.pdf.
[28] Christian Fuchs. 2017. Social Media: A Critical Introduction. Sage.
[29] Antonio Garcia-Martinez. 2017. I’m an ex-Facebook exec: Don’t believe what they tell you about ads. The Guardian.
Retrieved from https://www.theguardian.com/technology/2017/may/02/facebook-executive-advertising-data-
comment.
[30] Jeffrey Gottfried, Michael Barthel, Elisa Shearer, and Amy Mitchell. 2018. Where Americans Are Getting News
About the 2016 Presidential Election. Retrieved from http://www.journalism.org/2016/02/04/the-2016-presidential-
campaign-a-news-event-thats-hard-to-miss/.
[31] Kevin Granville. 2018. Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens. The New York
Times. Retrieved from https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.
html.
[32] Hannes Grassegger and Mikael Krogerus. 2017. The data that turned the world upside down. Vice. Retrieved from
https://motherboard.vice.com/en_us/article/big-data-cambridge-analytica-brexit-trump.
[33] Richard A. Grusin. 2017. Donald trump’s evil mediation. Theory Event 20, 1 (2017), 86–99.
[34] David J. Gunkel. 2018. Gaming the System: Deconstructing Video Games, Games Studies, and Virtual Worlds. Indiana
University Press.
[35] Jürgen Habermas. 1990. Moral Consciousness and Communicative Action. MIT Press.
[36] Itai Himelboim, Ruthann Weaver Lariscy, Spencer F. Tinkham, and Kaye D. Sweetser. 2012. Social media and online
political communication: The role of interpersonal informational trust and openness. J. Broadcast. Electron. Media 56,
1 (2012), 92–115. DOI:https://doi.org/10.1080/08838151.2011.648682
[37] Anna Lauren Hoffmann. 2017. Beyond distributions and primary goods: Assessing applications of Rawls in information
science and technology literature since 1990. J. Assoc. Inf. Sci. Technol. 68, 7 (2017), 1601–1618. DOI:https://doi.org/
10.1002/asi.23747
[38] Jeroen Van den Hoven and Emma Rooksby. 2008. Distributive justice and the value of information: A (broadly)
Rawlsian approach. In Information Technology and Moral Philosophy, Jeroen Van Den Hoven and John Weckert (Eds.).
Cambridge University Press, New York, New York.
[39] Philip N. Howard, Samantha Bradshaw, Gillian Bolsover, Lisa-Maria Neudert, and Bence Kollanyi. 2017. Junk
News and Bots During the U.S. Election: What Were Michigan Voters Sharing Over Twitter? Retrieved from
http://275rzy1ul4252pt1hv2dqyuf.wpengine.netdna-cdn.com/wp-content/uploads/2017/07/2206.pdf.
[40] David John Hughes, Moss Rowe, Mark Batey, and Andrew Lee. 2012. A tale of two sites: Twitter vs. Facebook and
the personality predictors of social media usage. Comput. Human Behav. 28, 2 (2012), 561–569. DOI:https://doi.org/
10.1016/j.chb.2011.11.001
[41] M. Jill Austin and Mary Lynn Reed. 1999. Targeting children online: Internet advertising ethics issues. J. Consum.
Mark. 16, 6 (1999), 590–602.
[42] Justin P. Johnson. 2013. Targeted advertising and advertising avoidance. RAND J. Econ. 44, 1 (2013), 128–144.
DOI:https://doi.org/10.1111/1756-2171.12014
[43] Monica Kaminska, John D. Gallacher, Bence Kollanyi, Taha Yasseri, and Philip Howard. 2017. Social Media and News
Sources During the 2017 UK General Election. Oxford, UK.
[44] Robert M. Kaplan and Dennis P. Saccuzzo. 2001. Psychological Testing: Principles, Applications, and Issues (5th ed.).
[45] Paul F. Lazarsfeld, Bernard Berelson, and Hazel Gaudet. 1944. The People’s Choice: How the Voter Makes Up His Mind
in a Presidential Campaign. Duell, Sloan and Pearce, New York.
[46] Chei Sian Lee and Long Ma. 2012. News sharing in social media: The effect of gratifications and prior experience.
Comput. Human Behav. 28 (2012). DOI:https://doi.org/10.1016/j.chb.2011.10.002
[47] Sam Levin. 2017. Facebook told advertisers it can identify teens feeling “insecure” and “worthless.” The Guardian.
Retrieved from https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens.
[48] David Lumb. 2018. Facebook will label political ads and note who paid for them. Engadget. Retrieved from
https://www.engadget.com/2018/04/06/facebook-label-political-ads-and-note-who-paid/.
[49] Neil MacFarquhar and Andrew Rossback. 2017. How Russian propaganda spread from a parody website to
Fox News. The New York Times. Retrieved from https://www.nytimes.com/interactive/2017/06/07/world/europe/
anatomy-of-fake-news-russian-propaganda.html.
[50] Sapna Maheshwari. 2017. How fake news goes viral: A case study. The New York Times. Retrieved from
https://www.nytimes.com/2016/11/20/business/media/how-fake-news-spreads.html.
[51] Veronica Marotta, Kaifu Zhang, and Alessandro Acquisti. 2017. Not all privacy is created equal: The welfare impact
of targeted advertising. SSRN. Retrieved from https://ssrn.com/abstract=2951322
[52] Alice E. Marwick. 2015. Instafame: Luxury selfies in the attention economy. Public Cult. 27, 1 (2015), 137–160.
DOI:https://doi.org/10.1215/08992363-2798379
[53] Alice E. Marwick and Danah Boyd. 2014. Networked privacy: How teenagers negotiate context in social media. New
Media Soc. 16, 7 (2014), 1051–1067. DOI:https://doi.org/10.1177/1461444814543995
[54] R. R. McCrae and O. P. John. 1992. An introduction to the five-factor model and its applications. J. Pers. 60, 2 (1992),
175–215. DOI:https://doi.org/10.1111/j.1467-6494.1992.tb00970.x
[55] David McPherson. 2013. Vocational virtue ethics: Prospects for a virtue ethic approach to business. J. Bus. Ethics 116,
2 (2013), 283–296. DOI:https://doi.org/10.1007/s10551-012- 1463-7
[56] Rani Molla. 2018. Facebook lost nearly $50 billion in market cap since the Cambridge Analytica data scandal.
Recode. Retrieved from https://www.recode.net/2018/3/20/17144130/facebook-stock-wall-street-billion-market-cap.
[57] Megan A. Moreno, Natalie Goniu, Peter S. Moreno, and Douglas Diekema. 2013. Ethics of social media research:
Common concerns and practical considerations. Cyberpsychol. Behav. Soc. Netw. 16, 9 (2013), 708–713. DOI:https://
doi.org/10.1089/cyber.2012.0334
[58] Patrick E. Murphy. 2002. Ethics in social marketing. J. Public Policy Mark. 21, 1 (2002), 168–169.
[59] Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
[60] William O’Donohue and Kyle E. Ferguson. 2003. Handbook of Professional Ethics for Psychologists: Issues, Questions,
and Controversies. Sage Publications.
[61] John Rawls. 1971. A Theory of Justice. Harvard University Press.
[62] John Rawls. 2001. Justice as Fairness: A Restatement. Harvard University Press.
[63] John Rawls. 2005. Political Liberalism. Columbia University Press.
[64] Stephen Cory Robinson. 2015. The good, the bad, and the ugly: Applying Rawlsian ethics in data mining marketing.
J. Media Ethics Explor. Quest. Media Moral. 30, 1 (2015), 19–30. DOI:https://doi.org/10.1080/08900523.2014.985297
[65] John Rust and Susan Golombok. 2014. Modern Psychometrics: The Science of Psychological Assessment. Routledge.
[66] Safiya Umoja Noble. 2013. Google search: Hyper-visibility as a means of rendering black women and girls
invisible. InVisible Culture: An Electronic Journal for Visual Culture 19 (2013), 1–35. Retrieved from
http://ivc.lib.rochester.edu/google-search-hyper-visibility-as-a-means-of-rendering-black-women-and-girls-invisible/.
[67] Ralph Schroeder and Rich Ling. 2014. Durkheim and Weber on the social implications of new information and com-
munication technologies. New Media and Society 16, 5 (2014), 789–805. DOI:https://doi.org/10.1177/1461444813495157
[68] Jan H. Schumann, Florian von Wangenheim, and Nicole Groene. 2014. Targeted online advertising: Using reciprocity
appeals to increase acceptance among users of free web services. J. Mark. 78, 1 (2014), 59–75. DOI:https://doi.org/10.
1509/jm.11.0316
[69] Guosong Shao. 2009. Understanding the appeal of user-generated media: A uses and gratification perspective. Internet
Res. 19, 1 (2009), 7–25. DOI:https://doi.org/10.1108/10662240910927795
[70] Elisa Shearer and Jeffrey Gottfried. 2017. News use across social media platforms 2017. Pew Research
Center—Journalism & Media. Retrieved from http://www.journalism.org/2017/09/07/news-use-across-social-media-
platforms-2017/.
[71] Wanita Sherchan, Surya Nepal, and Cecile Paris. 2013. A survey of trust in social networks. ACM Comput. Surv. 45, 4
(2013), 1–33. DOI:https://doi.org/10.1145/2501654.2501661
[72] Ramesh Srinivasan. 2017. Whose Global Village?: Rethinking how Technology Shapes Our World. NYU Press.
[73] Statista. 2018. Global social media ranking 2018 | Statistic. Statista.com. Retrieved from https://www.statista.com/
statistics/272014/global-social- networks-ranked- by-number- of-users/.
[74] Cass R. Sunstein. 2018. #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
[75] Leanne Townsend and Claire Wallace. 2016. Social media research: A guide to ethics. University of Aberdeen. Re-
trieved from https://www.gla.ac.uk/media/media_487729_en.pdf.
[76] Zeynep Tufekci. 2013. “Not this one”: Social movements, the attention economy, and microcelebrity networked
activism. Am. Behav. Sci. 57, 7 (2013), 848–870.
[77] Zeynep Tufekci. 2018. Facebook’s surveillance machine. The New York Times. Retrieved from https://www.nytimes.
com/2018/03/19/opinion/facebook-cambridge- analytica.html.
[78] Jason Turcotte, Chance York, Jacob Irving, Rosanne M. Scholl, and Raymond J. Pingree. 2015. News recommendations
from social media opinion leaders: Effects on media trust and information seeking. J. Comput. Commun. 20, 5 (2015),
520–535. DOI:https://doi.org/10.1111/jcc4.12127
[79] Sherry Turkle. 2011. Alone Together. Basic Books.
[80] Tracy L. Tuten and Michael R. Solomon. 2014. Social media marketing strategy. In Social Media Marketing. Sage,
63–96.
[81] Dennis G. Wilson. 2017. The ethics of automated behavioral microtargeting. AI Matters 3, 3 (2017), 56–64. DOI:
https://doi.org/10.1145/3137574.3139451
[82] Wu Youyou, Michal Kosinski, and David Stillwell. 2015. Computer-based personality judgments are more accurate
than those made by humans. Proc. Natl. Acad. Sci. U.S.A. 112, 4 (2015), 1036–1040. DOI:https://doi.org/10.1073/pnas.
1418680112
[83] Michael Zimmer. 2010. “But the data is already public”: On the ethics of research in Facebook. Ethics Inf. Technol. 12,
4 (2010), 313–325.
Received April 2018; revised September 2018; accepted September 2018
ACM Transactions on Social Computing, Vol. 1, No. 4, Article 16. Publication date: December 2018.