Media and Communication (ISSN: 2183-2439)
2016, Volume 4, Issue 4, Pages X-X
Doi: 10.17645/mac.v4i4.579
Facebook’s Emotional Contagion Experiment as a Challenge to
Research Ethics
Jukka Jouhki 1,*, Epp Lauk 2, Maija Penttinen 1, Niina Sormanen 2 and Turo Uskali 2
1 Department of History and Ethnology, University of Jyväskylä, 40014 Jyväskylä, Finland; E-Mails:
(J.J.), (M.P.)
2 Department of Communication, University of Jyväskylä, 40014 Jyväskylä, Finland; E-Mails: (E.L.), ni- (N.S.), (T.U.)
* Corresponding author
Submitted: 31 January 2016 | Accepted: 12 April 2016 | Published: in press (October 2016)
This article analyzes the ethical discussion focusing on the Facebook emotional contagion experiment published by the
Proceedings of the National Academy of Sciences in 2014. The massive-scale experiment manipulated the News Feeds
of a large number of Facebook users and succeeded in showing that emotional contagion also occurs in online
environments. However, the experiment caused ethical concerns within and outside academia mainly for two
intertwined reasons, the first revolving around the idea of research as manipulation, and the second focusing on the
problematic definition of informed consent. The article concurs with recent research that social media and big
data research pose a significant challenge to research ethics, the practice and views of which are grounded in
the pre-social-media era and reflect the classical ethical stances of utilitarianism and deontology.
Big data; emotional contagion; Facebook; informed consent; manipulation; methodology; privacy; research ethics; social media; user data
This article is part of the issue “Successes and Failures in Studying Social Media: Issues of Methods and Ethics”, edited
by Epp Lauk and Niina Sormanen (University of Jyväskylä, Finland).
© 2016 by the authors; licensee Cogitatio (Lisbon, Portugal). This article is licensed under a Creative Commons Attribu-
tion 4.0 International License (CC BY).
1. Introduction
In June 2014 the Proceedings of the National Academy
of Sciences (PNAS) published an article entitled “Exper-
imental Evidence of Massive-Scale Emotional Conta-
gion Through Social Networks”. It was about an experiment,
henceforth “the Facebook experiment” or “the experiment”,
conducted by Adam D. I. Kramer from Facebook’s
Core Data Science Team together with Jamie
E. Guillory and Jeffrey T. Hancock from Cornell
University. The article provided experimental evidence about
emotional contagion, a phenomenon that has been
widely studied before but mostly in offline environ-
ments. In January 2012, the research team manipulat-
Henceforth, “the Facebook experiment” or “the experiment”.
ed the News Feeds of a massive number (N = 689,003)
of Facebook users for a week, reducing the amount of
emotional content in their feeds. After analyzing over
three million posts and over 122 million words, the re-
sults showed that when the amount of positive status
updates published in their News Feed was reduced, us-
ers published more negative status updates and fewer
positive updates. Conversely, when the amount of
negative status updates was reduced, users published
more positive status updates and fewer negative up-
dates. Moreover, the less emotional content the users
were exposed to, the fewer words they used in their
status updates. (Kramer, Guillory, & Hancock, 2014).
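The dependent measure behind these figures (shares of positive and negative words per status update) can be sketched in a few lines; the word lists and function below are hypothetical illustrations, not the linguistic software actually used by Kramer et al.:

```python
# Illustrative sketch only: the experiment used automated word counting to
# classify the emotional content of status updates. These tiny word lists
# and this function are hypothetical stand-ins for the real software.
POSITIVE = {"happy", "great", "love", "nice"}
NEGATIVE = {"sad", "angry", "hate", "awful"}

def emotion_rates(post):
    """Return the shares of positive and negative words in one status update."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    total = len(words) or 1  # avoid division by zero for empty posts
    return {
        "positive": sum(w in POSITIVE for w in words) / total,
        "negative": sum(w in NEGATIVE for w in words) / total,
        "words": len(words),
    }

rates = emotion_rates("So happy today, the weather is great!")
```

Aggregating such per-post rates over millions of updates yields the kind of small average differences between conditions that the study reported.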
The research suggested that emotional states “can
be transferred to others via emotional contagion, leading
people to experience the same emotions without
their awareness” (Kramer et al., 2014, p. 8788). Emo-
tional contagion had been proved earlier (e.g. Barsade,
2002; Huntsinger, Lun, Sinclair, & Clore, 2009; Kramer
et al., 2014, p. 8788 also refer to several other studies),
but proving that it happens “outside of in-person inter-
action” and particularly in the increasingly popular so-
cial media was new (see e.g. Ferrara & Yang, 2015 for a
similar but more recent study). Moreover, as there are
common conceptions about positive social media post-
ings making people sad or envious (e.g. Copeland,
2011), the experiment produced valuable information
to the contrary. The experiment suggested that peo-
ple’s “hearts and minds”, as Schroeder (2014, p. 3) puts
it, can be manipulated online, for good or ill. (See also
Shah, Capella, & Neuman, 2015; Summers-Effler, Van
Ness, & Hausmann, 2015, p. 472; cf. Parkinson & Man-
stead 2015, p. 377.)
Academic and non-academic reactions to the
study, defined as ethically controversial (Ananny, 2015,
p. 101; Harriman & Patel, 2014; Pejovic & Musolesi,
2015, p. 18; Simon, 2014; Thorson & Wells, 2015, p.
10), were mixed. On a broader view, the heterogeneity
of the views on the ethics of the experiment is a sign of
how contested and fluid the concept of privacy is (e.g.
Ess, 2013, p. 260). Moreover, as Facebook cooperates
with several universities such as Cornell, Stanford and
Harvard (see e.g. Cheng, Adamic, Dow, Kleinberg, &
Leskovec, 2014; Friggeri, Adamic, Eckles, & Cheng,
2014; Sun, Rosenn, Marlow, & Lento, 2009), the
experiment has raised debate about whose research ethics
prevail in such joint ventures: those of a private
company or those of an academic research institution. In
this article, we focus on the academic but also look to
some extent at the non-academic ethical commentary
on the Facebook experiment, and ask what it tells us
about ethical research issues in the current era of so-
cial media research.
The ethical discussion presented in this article is
founded on an integrative literature review (see e.g.
Card, 2010; Torraco, 2005) that we conducted by
searching major journal databases such as Science Di-
rect, Google Scholar, Sage Journals, and Ebsco Academ-
ic Search Elite for articles covering the experiment. As a
result we obtained articles from journals such as Re-
search Ethics; Big Data & Society; Media, Culture & So-
ciety; Nature; and Information, Communication &
Society. In addition to journal articles, we searched for
conference proceedings on the experiment as well as
scholarly analyses of the issue published in blogs and
other internet sites. Some news and magazine articles
as well as blog posts were also included in order to of-
fer some non-academic views on the issue.
Overall, our approach to the ethical discussion re-
volving around the Facebook experiment is essayistic in
nature (see e.g. Ceserani, 2010; Cornelissen, Gajew-
skade, Piekkari, & Welch, 2012, pp. 198-199), which
means that we prefer exploring and discussing the top-
ic in a heuristic manner: we tend to concentrate on
raising questions rather than putting forward definite
results based on empirical research. However, we do
argue that there are two crucial themes of debate
which sum up the ethical discussion revolving around
the experiment: research as manipulation (discussed in
Section 3) and the related informed consent (discussed
in Section 4). Moreover, we suggest that the debates
about the ethics of human-subject big data research,
while demanding a rethink of research ethics, still reflect
the classical divide between the utilitarian and the deon-
tological points of view. In the next section we will intro-
duce some key questions of research ethics in the era of
social media. Then we move on to present the Facebook
experiment and the ensuing ethical discussion.
2. Research Ethics and the Human Subject
The views on research ethics generally put into practice
in any academic research can be seen as balancing be-
tween two classic moral philosophical stances. Utilitar-
ianism attempts to calculate the morality of an act by
estimating the total amount of happiness or suffering
produced by the act, while deontology views certain
actions as immoral or moral per se, regardless of their
consequences. Both these stances are applied, for ex-
ample, in social media research when scholars contem-
plate the effect of their study on the subjects’ privacy:
the utilitarian view of privacy might allow certain incur-
sions into privacy if the result is the greater good,
whereas from the deontological point of view, a certain
level of privacy is a right that should not be violated, for
example, by conducting a study without receiving the in-
formed consent of the subjects of the study (Ess, 2013,
pp. 256-262; Shrader-Frechette, 2000). Both stances are
problematic, and neither of them is applied in research
without any consideration of the otheror in moral de-
cision-making outside of academia, for that matter. At
any rate, the utilitarian emphasis on avoidance of harm
and the more deontological value of receiving informed
consent from research subjects are considered the two
most significant imperatives of research ethics in studies
with human participants (e.g. the British Psychological
Society, 2010). Actual policies as to how exactly the im-
peratives are defined and in what situations they apply
(e.g. in big data research) vary significantly.
This article is based on an unpublished conference paper by
Jouhki et al. (2015).
One of the key ethical principles of the Association
of Internet Researchers (AoIR), that the greater the
vulnerability of the subject of study, the greater the ob-
ligation of the researcher to protect the subject, is a
good example of how challenging it is to formulate
specific rules of ethical research (Markham & Buchanan,
2012, pp. 4-5). Obviously, protecting the research
subject depends on how one defines both the harm
that might be inflicted on an unprotected person and
what counts as a research subject. The context of any research
setting means that ethical codes are not so much strict
rules as incentives to individual researchers to reflect
on the moral ground of their research and make ethical
decisions using their own judgment of what is in fact
practicable in the circumstances. Especially when in-
formed consent cannot be obtained in human-subject
research, the benefits of the study should outweigh the
harm of any invasion of privacy.
Often anonymity is seen as enough to ensure the
no-harm rule in cases of non-experimental (e.g. purely
observational) research. In experiments that affect the
participants’ behavior, the rules are stricter (See e.g.
Vainio, 2012; Vanderpool, 1996.). The level of sensitivi-
ty required for the decision-making to be ethically suf-
ficient is a constant topic of debate. For example, a
research institution or a commercial company engaging
in research might hold the view that obeying the law is
enough to make the research ethical (Hudson &
Bruckman, 2004, pp. 132-133). If the participants are
not harmed in any way during the data gathering, an
ethically sensitive researcher, whether working in a
private company or a university, might still take into
account the hypothetical situation that a research par-
ticipant at some point learns about his or her role in
the research and is offended (i.e. harmed) by having
been a participant without having given consent (e.g.
Hudson & Bruckman, 2004, pp. 136-138). Moreover, an
ethically sensitive researcher might treat public content
on the internet (e.g. tweets, blog posts) as intimate parts
of their creator’s personhood. Most researchers, how-
ever, use this content without securing informed con-
sent (Hesse, Moser, & Riley, 2015, p. 27).
The fact that data are accessible and public does
not necessarily mean that using them is not jeopardizing
privacy and is thus ethically justified (see e.g. boyd,
2014; Marx, 2013; Tinati, Halford, Carr, & Pope, 2014, p.
673; Zimmer, 2010). The boundaries between private
and public information, especially on the internet, are
frustratingly ambiguous, contested and changing (Markham
& Buchanan, 2012, p. 6; Ess, 2007, p. 499; see also
Rooke, 2013; Rosenberg 2010; Weeden, 2012, pp. 42-
43). Even when a researcher wants to have participants’
informed consent to take part in a study, it might be
impossible for him or her to obtain it if the research in
question concerns, for example, massive data mining
processes and projects. (The author danah boyd wants
her name to be written in lower case.) Moreover, big data researchers
often ignore the whole question of informed consent
because they define their data as either public or pro-
prietary (Paolillo, 2015, p. 49). Also, when there is no
direct contact between the researchers and their hu-
man subjects it is questionable whether the subjects
should even be called participants. Besides, when
experiments are performed on them, it is unclear whether
they are to be subject to the same ethical research
scrutiny as human-subject study participants normally
are (Hutton & Henderson, 2015, p. 178; Kahn, Vayena,
& Mastroianni, 2014, p. 13677.). Even if a researcher
did in such cases manage to obtain the participants’
consent, there would be no real guarantee that it was
indeed informed (Flick, 2016, pp. 15-17).
To problematize the issue further, even if informed
consent was verified and the researcher was allowed
to use the participants’ personal data, the data might
also include information about people (e.g. contacts of
the users) who had not given their informed consent
(Phillips, 2011, p. 32). Thus it is no surprise that a large
number of extensive data mining projects are carried
out without informing the groups or individuals target-
ed by the researchers; the only measure taken to pro-
mote the ethicality of the research is making sure that
the participants are anonymous, thus ensuring confi-
dentiality (Lindsay & Goldring, 2010; Zwitter, 2014, p.
5; see also Sormanen et al., 2016).
In contrast, when conducting qualitative research
like virtual ethnography or, more specifically, partici-
pant observation, in smaller internet forums, obtaining
the consent of participants is technically relatively
easy. However, it is rarely done because of the possibil-
ity that knowing that they are being observed might
cause participants to act differently from usual, which
would skew the data. Then again, in practice, many
scholars do not seek informed consent because they
are afraid it would be denied (e.g. Hine, 2000, pp. 23-
24.). Sometimes participant observation, even without
consent, is impossible (e.g. in the case of private discussion
groups), so the researcher might engage in deception
(e.g. an invented alias) in order to gain access to
the group of participants. As Brotsky and Giles (2007,
pp. 95-96) observe, stating what is indeed rather obvious, covert
participant observation is “highly controversial from an
ethical position”, but as in most completed research
projects with ethical research challenges, it is ultimate-
ly justified by reference to the benefits brought by the
results. Sometimes even informed consent does not
create an authentic consensual atmosphere, for exam-
ple if the subjects of the research do not feel they have
been treated fairly or if the purpose of the research is
not felt to be morally valuable enough (Kennedy,
Elgesem, & Miguel, 2015). Lastly, even if informed con-
sent is received, there is the problem of the level of in-
formedness. How can a researcher be sure that the
research subject has sufficiently understood the pur-
pose and the consequences of the research? (E.g.
Escobedo, Guerrero, Lujan, Ramirez, & Serrano, 2007;
Svanteson, 2007, p. 72.)
3. The Facebook Experiment as Manipulation
On Facebook, the News Feed is practically a list of sta-
tus updates of the contacts in a user’s network. The
updates shown in or omitted from the News Feed de-
pend on “a ranking algorithm that Facebook continual-
ly develops and tests in the interest of showing viewers
the content they will find most relevant and engaging”.
Facebook is thus like any traditional media as it pro-
vides content to its users selectively, but where it dif-
fers from the old media is that the content is modified
individually according to what the medium evaluates to
be the optimally engaging experience. (Kramer et al.,
2014, p. 8788.) Users accept this practice when signing
up for Facebook.
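The omission mechanism described above can be sketched roughly as follows; the omission probability, helper names, and example posts are hypothetical illustrations, not Facebook's actual ranking code:

```python
import random

# Illustrative sketch only: in the experiment, posts classified as emotional
# were withheld from a viewer's News Feed with some per-user probability.
# The rate (0.9), the classifier, and the post texts below are hypothetical.
def filter_feed(posts, is_emotional, omit_prob, rng):
    """Return the feed with emotional posts randomly withheld."""
    return [
        p for p in posts
        if not (is_emotional(p) and rng.random() < omit_prob)
    ]

feed = ["happy news!", "project update", "sad story"]
emotional = {"happy news!", "sad story"}
reduced = filter_feed(feed, lambda p: p in emotional, 0.9, random.Random(0))
```

The point of the sketch is that nothing is added to the feed; content a user would otherwise have seen is merely withheld, which is how the experiment reduced the amount of emotional content.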
In their massive-scale experiment, Kramer et al.
(2014) tested the emotional engagement of Facebook
users by modifying their News Feed. The “experiment
on the manipulative power of Facebook feeds”, as Pea-
cock (2014, p. 8) described it, was criticized almost
immediately upon publication of the article. Bloggers
claimed Facebook made users “sad for a psych experi-
ment” (Grimmelmann, 2014) or the company was us-
ing people as “lab rats” (a blogger quoted by Rushe,
2014). According to The Guardian’s poll (Fishwick,
2014), most people who read about the experiment
were not surprised that Facebook would experiment
on user data the way it did but, at the same time, they
declared they had now “lost trust” in Facebook and
were considering closing their account. The “secret”
experiment, as The Guardian called it, “sparked out-
rage from people who felt manipulated by the compa-
ny”. It can be speculated that had Facebook known
what the public reaction to their experiment was going
to be, they would not have published it. danah boyd
(2014; see also Paolillo, 2015, p. 49) suggests that the in-
tended PR outcome of the experiment from Facebook’s
point of view was to show that Facebook can downplay
negative content in their service and thus make custom-
ers happier. Presumably this was seen as better for users
and better for Facebook, as experimentation is how
websites make their services better (Halavais, 2015, pp.
689-690; Kahn et al., 2014, p. 13677).
It is possible that many people missed the benevo-
lent intention of the research team and concentrated
on the contestable ethics of their method. The criticism
about the experiment reached such levels that Face-
book’s researcher and the first author of the article,
Adam Kramer, defended the experiment on his own
Facebook page, pointing to the minimal “actual impact
on people”. During the week of the experiment, he ex-
plained, the users who were affected “produced an av-
erage of one fewer emotional word, per thousand
words”. (Kramer, 2014.) The magnitude of the impact
was perhaps unknown to many critics of the experi-
ment, as many objected to it on the grounds that Face-
book was “controlling the emotions” of its users.
Moreover, regardless of the magnitude of the impact
of the experiment, Facebook’s user agreement
can be interpreted to mean that its users
allow researchers to experiment on them.
Thus, many ethicists would agree with Meyer
(2014), who published a statement with five co-authors
and on behalf of 27 other ethicists “to disagree with
these sweeping condemnations” of Facebook’s ethics
in the experiment. She wrote that “the experiment was
controversial, but it was not an egregious breach of ei-
ther ethics or law.” If Facebook is permitted to mine
user data and study users for personal profit but aca-
demics are not permitted to use that information and
learn from it, it “makes no one better off” (Meyer,
2014). However, for many critics it was more a matter
of ethical principle than actual impact. For example,
Kleinsman and Buckley (2015, p. 180) rejected Meyer’s
statement and claimed that “[i]f an experiment is in
‘breach of either ethics or law,’ then whether it is an
‘egregious’ breach or not is irrelevant.” In this view,
there is no grey area in research ethics, and conse-
quently, a person as a subject of research is, in a binary
way, either harmed or not harmed.
Many scholars were even more critical than Kleins-
man and Buckley (2015). Recuber (2016), for example,
noted how quick scholars were to draw analogies be-
tween the Facebook experiment and Milgram’s (1963)
infamous experiment analyzing obedience to
authority, as well as to the Stanford Prison experiment,
also known as the Zimbardo experiment (Haney, Banks,
& Zimbardo, 1973; Zimbardo, 1973), that studied the
psychological effects of becoming a prisoner or a
guard. According to Recuber, there were indeed some
similarities between the Facebook experiment and the
two notorious experiments from the 1960s and 1970s, one being
the fact that all three studied the researchers’ ability to
manipulate change in the participants’ behavior. How-
ever, the Facebook experiment was different in its fail-
ure to reflect on this aspect (Recuber, 2016, pp. 46-47).
The user reactions studied in the Facebook experiment
were caused by the observers but the power relations
between the experimenters and the experimentees
were downplayed or normalized, and not at all prob-
lematized. This, at least to Recuber, is a typical and in-
sidious element of contemporary big data research.
(Recuber, 2016.) When the number of research sub-
jects is so high, individually they tend to vanish in the
haze of the overarching term “big data”. However, the
“power” exerted per capita over the participants in the
Facebook experiment can be viewed as rather minimal
(albeit massive in scale). The experiments carried out
by Milgram and Zimbardo, on the other hand, caused
their participants to suffer severe physical and psycho-
logical stress.
The ethics of human-subject research is mainly
about protecting the subject. In this sense, the Face-
book experiment was found ethically questionable.
Strict assessments of the experiment conclude that the
study indeed “harmed” its participants (albeit almost
unnoticeably), because it changed the participants’
mood (e.g. Bryman & Bell, 2015, p. 141; Grimmelmann,
2014; Kleinsman & Buckley, 2015, p. 181). However, if
harming is defined as changing a participant’s mood,
then a vast quantity of empirical research on humans is
harmful, especially research that requires face-to-face
interaction. In general, big data studies or techniques to
test or predict personality or actions might not be legally
problematic but they do undermine a “sense of individ-
uality on a personal level”, claims Schroeder (2014, p. 7).
Facebook has experimented on its users before,
and has published research about it (see e.g. Bond et
al., 2012; Chan, 2015, p. 1081; Simonite, 2012). How-
ever, these experiments were explicit in their intention
to influence users. For example, in 2010 on the day of
the US congressional elections, Facebook encouraged
randomly assigned users to vote, managed to increase
voting activity, and afterwards published an article
about it in Nature (Bond et al., 2012). Moreover, in
2012 Mark Zuckerberg, the CEO of Facebook, used Fa-
cebook to encourage people to register as organ do-
nors, after which organ donor enrollment increased
significantly in the US (Simonite, 2012). These forms of
“manipulation” did not raise as much ethical debate as
the experiment we discuss here did. The reason for this
might be that people see explicit forms of intended ma-
nipulation as more acceptable than covert forms, even if
the explicit manipulation attempts to elicit significantly
greater change in the subject than the covert form.
Research ethics are often implemented more strict-
ly in the academic world than in the corporate research
environment. Then again, the ethical views of social
media users might be quite flexible, and a lot of how
users relate to being studied and experimented on by
researchers depends on the application of the results
(Kennedy et al. 2015, pp. 8-10). It seems like people do
not want to be experimented on for the sake of an ex-
periment but they are more likely to accept it if the ex-
periment might result in some kind of benefit for
themselves or others. Many people also do not mind
commercials or other manipulations, even outright
propaganda, as they are often part of the deal be-
tween users and service providers (cf. Searls 2015 on
ad blockers). In the case of the Facebook experiment,
even though scholars did not read any status updates,
some people still felt that their privacy was violated.
The problem in these kinds of cases is often the fact
that one has a feeling of being private while actually
being public (Kennedy et al., 2015, p. 13). According to
Chan (2015, p. 1080), the fact that neither Facebook
nor Cornell University, the two parties involved in
conducting the study, apparently anticipated the pub-
lic backlash they would face for the data manipulation
shows “the vast disconnect between the research cul-
ture of big data (whether based in corporate or aca-
demic institutions) and the general public’s cultural […]”.
4. The Problem of Informed Consent
It is the “informed” in informed consent that is the
other major ethical research issue in the experiment
that worried both the general public and academia (see
e.g. Kahn et al., 2014, p. 13677). Cornell University re-
searchers (Guillory and Hancock) analyzed the data af-
ter Facebook (Kramer) had collected them. The study
therefore did not go through an ethical review at Cor-
nell University, which might have been critical of how
the informed consent of the participants was going to
be secured (Paolillo, 2015, p. 50). In the article, re-
search ethics is discussed in two sentences (Kramer et
al., 2014, p. 8789). The first sentence states that the
researchers themselves did not read any of the texts
analyzed for the experiment as a linguistic software
program was used to analyze the data. The other sen-
tence declares that the data collection “was consistent
with Facebook’s Data Use Policy, to which all users
agree prior to creating an account on Facebook, consti-
tuting informed consent for this research.” In other
words, the authors interpreted Facebook’s user
agreement to mean informed consent.
In that case, the level of informedness is highly de-
batable, as most users of Facebook do not read or
completely understand the data use policy (Flick, 2016,
p. 17; see also Kennedy et al., 2015, pp. 10-15). When a
user accepts the terms and signs up for Facebook, he
or she is informed that the service provider will use the
personal data for all sorts of things (Facebook, 2015a).
The user might give their consent but is most likely not
well informed, since the description of the data use
policy is not very precise (see e.g. Grady, 2015, p. 885;
Sloan, Morgan, Burnap, & Williams, 2014, p. 16.). For
example, at the time of the experiment, the research
use of personal data was not mentioned although, fol-
lowing the wide publicity the experiment received, it
has subsequently been added to the policy.
Kleinsman and Buckley (2015; see also Bail, 2015, p.
23) hold the view that because the authors of the Fa-
cebook experiment could have asked for proper in-
formed consent from the users, they should have done
so. It does not matter whether the research is unlikely
to cause harm or if it is beneficial or otherwise im-
portant: consent is always essential if it can be ob-
tained. The scholars should at least have informed
those users who were affected afterwards (Recuber,
2016, p. 54; see also McKelvey, Tiessen, & Simcoe,
2015, pp. 580-581). A month after the publication of
the experiment, PNAS’s Editor-in-Chief, Inder M. Ver-
ma (2014), added a foreword to the contested article.
It was entitled “Editorial Expression of Concern and
Correction” and it defended the authors’ ethical choic-
es by separating Facebook’s data collection process
from the actions of Cornell University. Readers were
reminded that it was a non-academic private company
(= Facebook’s Kramer) that gathered the data, and the
academics (= Cornell’s Guillory and Hancock) only ana-
lyzed them. However, as the responsibility fell partly on
the journal (see e.g. Kahn et al., 2014, p. 13679), Verma
(2014, p. 10779; see also Schroeder, 2014, pp. 2-3) did
concede that perhaps everything was “not fully con-
sistent with the principles of obtaining informed con-
sent and allowing people to opt out.”
Many human-subject big data scientists know that a
strict interpretation of the opting-out option makes
their research extremely difficult. The problem is fur-
ther complicated by the fact that in many cases the da-
ta are not collected by academics but by third parties
such as Facebook. Should the data collectors abide by
the ethical research norms of academia? If they did,
there would be a lot of ethical problems, particularly
with data produced by third parties, such as filmed
footage, photographs, Google Street View data, tele-
vised rock concert recordings, and so on. Even if partic-
ipant anonymity was secured, the human subjects in
these cases could not opt out. If scholars did not have
to worry about opting out as an ethical norm, they could
team up with someone outside of academia to do their
“dirty work” (see e.g. Kahn et al., 2014, p. 13677; Wrzus
& Mehl, 2015, p. 264; cf. boyd, 2014.). On the other
hand, one could say that a person can opt out of any Fa-
cebook experiment by not signing up for Facebook in the
first placejust like a potential participant in a psychol-
ogy experiment can decide not to attend the experiment
if he or she does not want to be manipulated.
In general, an ethically pragmatic social media us-
er’s informed consent is more like meta-informedness,
or “implicit informed consent” (Bryman & Bell, 2015, p.
139), where the user knows that for example Facebook
will do various known and unknown things with its user
data but is unlikely to do anything that is morally too
dubious, although it has been observed that users
tend to underestimate the level of their privacy when
they are excited about a social media application (Kehr,
Kowatsch, Wentzel, & Fleisch, 2015). For most users,
Facebook’s data policy is thus a reasonably informed
and fair trade-off between the user who gets to use the
service without a fee, and the service provider who gets
to sell the data to third parties such as advertisers (Ken-
nedy et al., 2015, p. 12; see also Hutton & Henderson,
2015, p. 178). This is actually the common logic of com-
mercial media, and the “ethical fig leaf” (O’Hara, Ngu-
yen, & Haynes, 2014, p. 4) of a social media researcher.
As Chan (2015, p. 1080; see also Aiken & Mahon,
2014, p. 4) notes, Facebook’s data use policy “enables
any user to potentially become an experiment subject
without need for prior consent”. In the end, a scholar
interested in research ethics might ask if there is any-
thing ethically new in the Facebook experiment. People
were studied without their knowledge, but they had
allowed it by signing up for Facebook (Schroeder,
2014, p. 3; see also Zwitter, 2014, p. 1). Certainly, companies
have conducted experiments with only vaguely
informed consent before, as have psychologists, so
many people consider the Facebook experiment merely a
recent example of an old ethical research issue
(Schroeder, 2014, pp. 1-2; cf. Selinger & Hartzog, 2016).
In a way, the Facebook user agreement is similar to
the informed consent form the participants in most
psychological experiments have to fill out. Participants
are informed that they (and the data they will produce)
will be used for scientific purposes, but they might not
know exactly what those purposes are; they might
even be deceived about the real purpose of
the study to which they have consented. The message
of informed consent is, “I trust you. Do what is need-
ed.” Perhaps the only new aspect in this case is that
there are over a billion people on Facebook every day.
It is an essential networking tool for a large number of
people, many of whom are dependent (to a greater or
lesser extent) on the service. This means that its user
agreement is not necessarily an ethical act between two
equal parties: opting out of an experiment becomes
tantamount to opting out of a significant part of one’s social
life (see e.g. Gertz, 2016). One might therefore suggest
that a participant may be sufficiently informed, but the
question of consent remains more controversial.
After multiple critical reviews of the experiment,
Mike Schroepfer, the Chief Technology Officer for Fa-
cebook, wrote an apologetic post for Facebook’s News-
room. According to him, they should have “considered
non-experimental ways” to do the research. Also, the
research would have “benefited from more extensive
review by a wider and more senior group of people”.
Schroepfer also notes that they did not inform the
public about the experiment well enough (Schroepfer,
2014). In the same post, Schroepfer introduced a new research
framework that Facebook is going to implement. It includes
clearer guidelines for researchers, a more extensive
review stage, and training (including on privacy
and security matters), as well as the establishment of a
special research website (Facebook, 2015b).
Describing the new guidelines, Schroepfer
announced that an enhanced review process
would be conducted prior to research if the intended
research focused on “studying particular groups or
populations (such as people of a certain age) or if it re-
lated to content that may be considered deeply per-
sonal (such as emotions).” Also, a further review would
be conducted if there was any collaboration with the
academic community. The statement ends by trying
to convince the reader, supposedly a daily Facebook
user, that Facebook wants to do research “in a way
that honors the trust you put in us by using Facebook
every day” (Schroepfer, 2014). This seems to be Facebook’s
way of admitting that the experiment lacked informed
consent. Perhaps for PR reasons, as well as due
to potential legal issues, Schroepfer could not state outright
that the experiment failed to obtain informed
consent (cf. Verma, 2014).
5. Discussion
In this article we have shown how the debate around
the Facebook experiment brings up two crucial and in-
terrelated themes of research ethics: research as ma-
nipulation, and the problem of informed consent. The
debate around the experiment shows that the era of
big data research demands some rethinking of research
ethics. Although the two key issues presented here are
not unique to contemporary research, having been
debated for decades before big data research emerged
(see e.g. Faden & Beauchamp, 1986; Roelcke, 2004),
the unprecedentedly large number of human subjects
involved in such research calls for special scrutiny.
At the same time, it seems that the
ethical evaluation of such experiments is based on the
classical ethical stances of utilitarianism or deontology.
The proponent of the former sees little or no harm
done in such an experiment and no loss of happiness
caused by it, while the proponent of the latter consid-
ers that, regardless of the degree of actual harm, hu-
man integrity has been violated (see e.g. Ess, 2013, pp.
256-262; Harman & Cornelius, 2015, p. 58; Shrader-
Frechette, 2000).
Reaching any ethical consensus about the Facebook
experiment is further impeded by disagreements over
the definition of key concepts such as the “harm” done
to human subjects, and their “informed consent”.
When academic research ethics is so vague, it might
seem simpler for scholars to let the law and user
agreements define the ethics of research.
However, according to Chan (2015, p. 1082; see also
Paolillo, 2015, p. 50; Burgess & Bruns, 2015, p. 99),
commercial companies’ ethical research standards
should not be allowed to spread to the academic world.
Flick (2016; see also Halavais, 2015, p. 592) agrees and
thinks that the commercial and academic sectors should
negotiate and agree on standards, but without making
any concessions in the commercial companies’ favor.
However, as universities’ opportunities to cooperate
with private companies working with big data increase,
so does the temptation to leave the problematic ethics
of data collection to those companies.
Mike Schroepfer, the Chief Technology Officer of
Facebook, stated that Facebook should have communi-
cated “clearly why and how” they did the experiment
(Schroepfer, 2014). The statement implies that a per-
son is deprived of optimal well-being if the reasons and
methods of any actions carried out on him or her are not
properly communicated. On the other hand, one could
easily claim the opposite: a person suffers less when he
or she does not know or notice anything about such ac-
tions. As Stilgoe (2015, pp. 46-47) observes, the Face-
book experiment was rare in being openly published and
publicly scrutinized, since most such experiments are
conducted in secret. We can wonder if people were out-
raged about the experiment because Facebook altered
its users’ states of mind or because it reminded them
that their states of mind are being altered all the time by
all kinds of things, people and organizations (see e.g.
boyd, 2016; see also Kehr et al., 2015).
At the same time, Kennedy et al. (2015, p. 2) ob-
serve that there has been little research about what
social media users themselves actually think about being
observed, studied and, we would add,
experimented on. This is rather disconcerting, given
the massive number of people that use social media
and are in some form or other observed and experi-
mented on by researchers. Perhaps surprisingly, the
social media users Kennedy et al. (2015, pp. 3-4) stud-
ied seemed to be concerned about privacy, but mainly
about social privacy. That is, they wanted to be sure
that they could choose which individuals in their net-
work have access to their personal information. They
were not so worried about institutional privacy, or “the
mining of personal information by social media plat-
forms, commercial companies and governments”.
Although we are talking about only one study, there is
reason to suggest that the ethical criticism of the Face-
book experiment made by academics might not reflect
users’ worries. This is a topic that should be further
studied, as it would be relevant for research ethics in
the era of social media to be more grounded in the us-
er level. A more holistic and inclusive ethical research
study would ensure that researchers do more than de-
fine what is morally optimal in big data research; or, as
Tama Leaver (2013) states, “Big Data needs Big Ethics,
and we don’t have them yet.”
If we go further into the ethical implications of so-
cial media experiments that aim to enhance user expe-
rience, we are faced with a more profound ethical
challenge than a discussion of manipulation and in-
formed consent reveals. If in Facebook we are fed im-
agery that further filters our experiences of the “real”
world, then what are the ethical ramifications of re-
searchers teaming up with companies that aim to give
people “the experience they want” (Simonite, 2012)?
Would the companies be in charge of the “hard ethical
choice…of what content to show…without oversight,
transparency, or informed consent” (boyd, 2014)? The
way media and new media influence our perceptions
of reality has already been widely studied (e.g. Fair-
clough, 1995; Macey, Ryan, & Springer, 2014) but there
has been little consideration so far of the ethics of aca-
demics taking part in these kinds of studies.
The way big data is “all at once essential, valuable,
difficult to control, and ubiquitous” seems to be re-
flected in our complex, context-dependent attitudes
toward it (Puschmann & Burgess, 2014, p. 1695). Gertz
(2016, p. 56) notes that despite the Facebook contro-
versy, the number of Facebook users is still growing. At
the same time, users’ autonomy seems to be diminish-
ing. From this it can be concluded that many users do
not mind the asymmetrical relationship they have with
the service provider. As Ess (2013, p. 254) notes, “our
engagements with new digital media appear to bring in
their wake important transformations in our sense of
self and identity.” Our “foundational conception of au-
tonomous self” that has legitimated concepts of priva-
cy that “modern liberal-democratic” states respect
seems to be changing. Perhaps the question we should
ask is primarily existential rather than ethical, as Gertz
(2016, p. 61) suggests. According to him, we should
first think about the increasingly significant role tech-
nology plays in our lives. If we accept it, then we can
have a more meaningful discussion on the ethics of
scholars experimenting with it.
Conflict of Interests
The authors declare no conflict of interests.
References
Aiken, M., & McMahon, C. (2014). A primer of research
in mediated environments: Reflections on cyber-
methodology. SSRN Working Papers. Retrieved from
Ananny, M. (2015). Toward an ethics of algorithms: Con-
vening, observation, probability, and timeliness. Sci-
ence, Technology, and Human Values, 41(1), 93-117.
Bail, C. A. (2015). Taming big data: Using app technology
to study organizational behavior on social media. So-
ciological Methods & Research. Retrieved from
Barsade, S. (2002). The ripple effect. Emotional conta-
gion and its influence on group behavior. Administra-
tive Science Quarterly, 47(4), 644-675.
Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Mar-
low, C., Settle, J. E., & Fowler, J. H. (2012). A 61-
million-person experiment in social influence and po-
litical mobilization. Nature, 489(7415), 295-298.
boyd, d. (2010, April). Publicity and privacy in web 2.0.
Keynote speech at WWW2010, Raleigh, USA. Re-
trieved from
boyd, d. (2014). What does the Facebook experiment
teach us? Growing anxiety about data manipulation.
The Message. Retrieved from
boyd, d. (2016). Untangling research and practice: What
Facebook’s “emotional contagion” study teaches us.
Research Ethics, 12(1), 4-13.
The British Psychological Society (2010). Code of human
research ethics. Leicester: The British Psychological
Society. Retrieved from
Brotsky, S. R., & Giles, D. (2007). Inside the “Pro-ana”
community: A covert online participant observation.
Eating Disorders: The Journal of Treatment & Preven-
tion, 15(2), 93-109.
Bryman, A., & Bell, E. (2015). Business research methods.
Oxford: Oxford University Press.
Burgess, J., & Bruns, A. (2015). Easy data, hard data: The
politics and pragmatics of Twitter research after the
computational turn. In G. Langlois, J. Redden, & G.
Elmer (Eds.), Compromised data: From social media
to big data (pp. 93-111). New York: Bloomsbury Academic.
Card, N. A. (2010). Literature review. In N. J. Salkind
(Ed.), Encyclopedia of research design (pp. 726-729).
Thousand Oaks: Sage.
Ceserani, R. (2010). The essayistic style of Walter Ben-
jamin. Primerjalna književnost, 33(1), 83-92.
Chan, A. (2015). Big data interfaces and the problem of
inclusion. Media, Culture & Society, 37(7), 1078-
Cheng, J., Adamic, L. A., Dow, P. A., Kleinberg, J., &
Leskovec, J. (2014). Can cascades be predicted? Pro-
ceedings of the 23rd International Conference on
World Wide Web (pp. 925-936). New York: ACM. Re-
trieved from
Copeland, L. (2011). The anti-social network. Slate. Re-
trieved from
Cornelissen, J., Gajewskade, M., Piekkari, R., & Welch, C.
(2012). Writing up as a legitimacy seeking process:
Alternative publishing recipes for qualitative re-
search. In S. Gillian & C. Cassel (Eds.), Qualitative or-
ganizational research: Core methods and current
challenges (pp. 185-203). London: Sage.
Escobedo, C., Guerrero, J., Lujan, G., Ramirez, A., &
Serrano, D. (2007). Ethical issues with informed con-
sent. Bio-Ethics, 1, 1-8. Retrieved from
Ess, C. M. (2007). Internet research ethics. In A. Joinson
(Ed.), Oxford handbook of internet psychology (pp.
481-502). Oxford: Oxford University Press.
Ess, C. M. (2013). Global media ethics? Issues, require-
ments, challenges, resolutions. In S. J. A. Ward (Ed.),
Global media ethics: Problems and perspectives (pp.
253-271). West Sussex: Wiley-Blackwell.
Facebook (2015a). Terms of service. Retrieved from
Facebook (2015b). Research at Facebook. Retrieved
Faden, R. R., & Beauchamp, T. L. (1986). A History and
theory of informed consent. Oxford: Oxford Universi-
ty Press.
Fairclough, N. (1995). Media discourse. London: Bloomsbury.
Ferrara, E., & Yang, Z. (2015). Measuring emotional con-
tagion in social media. PloS ONE, 10(11). Retrieved
Fishwick, C. (2014, June 30). Facebook’s secret mood ex-
periment: Have you lost trust in the social network?
The Guardian. Retrieved from www.theguardian.
Flick, C. (2016). Informed consent and the Facebook
emotional manipulation study. Research Ethics,
12(1), 14-28.
Friggeri, A., Adamic, L. A., Eckles, D., & Cheng, J. (2014).
Rumor cascades. Proceedings of the Eighth Interna-
tional AAAI Conference on Weblogs and Social Media
(ICWSM) (pp. 101-110). Palo Alto, CA: AAAI Press.
Retrieved from
Gertz, N. (2016). Autonomy online: Jacques Ellul and the
Facebook emotional manipulation study. Research
Ethics, 12(1), 55-61.
Grady, C. (2015). Enduring and emerging challenges of
informed consent. The New England Journal of Med-
icine, 372(9), 855-862.
Grimmelmann, J. (2014). As flies to wanton boys. The
Laboratorium. Retrieved from http://laboratorium.
Halavais, A. (2015). Bigger sociological imaginations:
Framing big social data theory and methods. Infor-
mation, Communication & Society, 18(5), 583-594.
Haney, C., Banks, C., & Zimbardo, P. (1973). A study of
prisoners and guards in a simulated prison. Naval Re-
search Reviews, 30(9), 4-17.
Harman, L. B., & Cornelius, F. (2015). Ethical health in-
formatics. Burlington, MA: Jones & Bartlett.
Harriman, S., & Patel, J. (2014). The ethics and editorial
challenges of internet-based research. MBC Medi-
cine, 12, 124-127.
Hesse, B. W., Moser, R. P., & Riley, W. T. (2015). From
big data to knowledge in the social sciences. The An-
nals of American Academy of Political and Social Sci-
ence, 659(1), 16-32.
Hine, C. (2000). Virtual ethnography. London: Sage.
Hudson, J. M., & Bruckman, A. (2004). Go away: Partici-
pant objections to being studied and the ethics of
chatroom research. The Information Society, 20(2),
Huntsinger, J. R., Lun, J., Sinclair, S., & Clore, G. L. (2009).
Contagion without contact: Anticipatory mood
matching in response to affiliative motivation. Per-
sonality and Social Psychology Bulletin, 35(7), 909-
Hutton, L., & Henderson, T. (2015). “I didn’t sign up for
this!”: Informed consent in social network research.
Proceedings of the Ninth International AAAI Confer-
ence on Web and Social Media (ICWSM) (pp. 178-
187). Palo Alto, CA: AAAI Publications. Retrieved
Jouhki, J., Lauk, E., Penttinen, M., Rohila, J., Sormanen,
N., & Uskali, T. (2015, November). Social media per-
sonhood as a challenge to research ethics: Exploring
the case of the Facebook experiment. Paper present-
ed at the Social Media Research Symposium,
Jyväskylä, Finland.
Kahn, J. P., Vayena, E., & Mastroianni, A. C. (2014). Opin-
ion: Learning as we go: Lessons from the publication
of Facebook's social-computing research. Proceed-
ings of the National Academy of Sciences of the Unit-
ed States of America, 111(38), 13677-13679.
Kehr, F., Kowatsch, T., Wentzel, D., & Fleisch, E. (2015).
Blissfully ignorant: The effects of general privacy
concerns, general institutional trust, and affect in the
privacy calculus. Information Systems Journal, 25,
Kennedy, H., Elgesem, D., & Miguel, C. (2015). On fair-
ness: User perspectives on social media data mining.
Convergence: The International Journal of Research
into New Media Technologies. Retrieved from
Kleinsman, J., & Buckley, S. (2015). Facebook study: A little
bit unethical but worth it? Journal of Bioethical
Inquiry, 12(2), 179-182.
Kramer, A. (2014, June 29). Facebook post. Facebook. Retrieved
from
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014).
Experimental evidence of massive-scale emotional
contagion through social networks. Proceedings of
the National Academy of Sciences of the United
States of America, 111(24), 8788-8790.
Leaver, T. (2013, September). Birth, death and Facebook.
Paper presented at Adventures in Culture in Tech-
nology (ACAT) Seminar Series, Perth. Retrieved from
Lindsay, S., & Goldring, J. (2010). Anonymizing data for
secondary use. In A. J. Mills, G. Durepos, & E. Wiebe
(Eds.), Encyclopedia of case study research (pp. 25-
27). London: Sage.
Macey, D. A., Ryan, K. M., & Springer, N. J. (Eds.). (2014).
How television shapes our worldview: Media repre-
sentations of social trends and change. New York:
Lexington Books.
Markham, A., & Buchanan, E. (2012). Ethical decision-
making and Internet research. Recommendations
from the AoIR ethics working committee (version
2.0). Chicago: Association of Internet Researchers.
Retrieved from
Marx, G. T. (2013). An ethics for the new (and old) sur-
veillance. In F. Flammini, R. Stola, & G. Franceschetti
(Eds.), Effective surveillance for homeland security:
Balancing technology and social issues (pp. 2-20).
Boca Raton, FL: Taylor & Francis.
McKelvey, F., Tiessen, M., & Simcoe, L. (2015). A consen-
sual hallucination no more? The Internet as simula-
tion machine. European Journal of Cultural Studies,
18(4-5), 577-594.
Meyer, M. N. (2014). Misjudgments will drive social trials
underground. Nature, 511(7509), 265.
Milgram, S. (1963). Behavioral study of obedience. Jour-
nal of Abnormal and Social Psychology, 67(4), 371-
O’Hara, K., Nguyen, M.-H. C., & Haynes, P. (2014). Intro-
duction. In K. O’Hara, M.-H. C. Nguyen, & P. Haynes
(Eds.), Digital enlightenment yearbook 2014: Social
networks and social machines, surveillance and em-
powerment (pp. 3-24). Amsterdam: IOS Press.
Paolillo, J. C. (2015). Network analysis. In A. Geor-
gakopoulou & T. Spilioti (Eds.), The Routledge hand-
book of language and digital communication (pp. 36-
54). London: Routledge.
Parkinson, B., & Manstead, A. S. R. (2015). Current emo-
tion research in social psychology: Thinking about
emotions and other people. Emotion Review, 7(4),
Peacock, S. E. (2014). How web tracking changes user
agency in the age of Big Data: The used user. Big Da-
ta & Society, 1(2), 1-11. Retrieved from http://
Pejovic, V., & Musolesi, M. (2015). Anticipatory mobile
computing. A survey of the state of the art and re-
search challenges. ACM Computing Surveys, 47(3), 1-
Phillips, M. L. (2011). Using social media in your re-
search. Experts explore the practicalities of observing
human behavior through Facebook and Twitter.
gradPSYCH, 9(4), 32.
Recuber, T. (2016). From obedience to contagion: Dis-
courses of power in Milgram, Zimbardo, and the Fa-
cebook experiment. Research Ethics, 12(1), 44-54.
Roelcke, V. (2004). Introduction: Historical perspectives
on human subjects research during the 20th century,
and some implications for present day issues in bio-
ethics. In V. Roelcke & G. Maio (Eds.), Twentieth cen-
tury ethics of human subjects research: Historical
perspectives on values, practices, and regulations
(pp. 11-18). Stuttgart: Franz Steiner Verlag.
Rooke, B. (2013). Four pillars of internet research ethics
with Web 2.0. Journal of Academic Ethics, 11(4), 265-
Rosenberg, Å. (2010). Virtual world research ethics and
the private/public distinction. International Journal
of Internet Research Ethics, 3(1), 23-37.
Rushe, D. (2014, October 2). Facebook sorry – almost –
for secret psychological experiment on users. The
Guardian. Retrieved from
Schroeder, R. (2014). Big Data and the brave new world
of social media research. Big Data & Society, 1(2), 1-
Searls, D. (2015, November 6). Ad blockers and the next
chapter of the Internet. Harvard Business Review.
Retrieved from
Selinger, E., & Hartzog, W. (2016). Facebook’s emotional
contagion study and the ethical problem of co-opted
identity in mediated environments where users lack
control. Research Ethics, 12(1), 35-43.
Shah, D. V., Cappella, J. N., & Neuman, W. R. (2015). Big
data, digital media, and computational social science:
Possibilities and perils. The Annals of the American
Academy of Political and Social Science, 659(1), 6-13.
Shrader-Frechette, K. (2000). Ethics of scientific research.
London: Rowman & Littlefield.
Simon, J. R. (2014). Corporate research ethics: Whose
responsibility? Annals of Internal Medicine, 161(12),
Simonite, T. (2012, June 13). What Facebook knows. MIT
Technology Review. Retrieved from
Sloan, L., Morgan, J., Burnap, P., & Williams, M. (2014).
Who tweets? Deriving the demographic characteris-
tics of age, occupation and social class from Twitter
user meta-data. PLoS ONE, 10(3), 1-20.
Sormanen, N., Rohila J., Lauk, E., Uskali T., Jouhki J., &
Penttinen M. (2016). Chances and challenges of
computational data gathering and analysis: The case
of issue-attention cycles on Facebook. Digital Jour-
nalism, 4(1), 55-74.
Stilgoe, J. (2015). Experiment earth: Responsible innova-
tion in Geoengineering. London: Routledge.
Summers-Effler, E., Van Ness, J., & Hausmann, C. (2015).
Peeking in the black box: Studying, theorizing, and
representing the micro-foundations of day-to-day in-
teractions. Journal of Contemporary Ethnography, 44
(4), 450-479.
Sun, E., Rosenn, I., Marlow, C., & Lento, T. (2009). Ge-
sundheit! Modeling contagion through Facebook
news feed. Proceedings of the Third International
ICWSM Conference (pp. 146-153). Palo Alto, CA: AAAI
Publications. Retrieved from
Svantesson, D. J. B. (2007). Private international law and
the Internet. Alphen aan den Rijn: Kluwer Law International.
Thorson, K., & Wells, C. (2015). Curated flows: A frame-
work for mapping media exposure in the digital age.
Communication Theory. Retrieved from http://ssc.
Tinati, R., Halford, S., Carr, L., & Pope, C. (2014). Big data:
Methodological challenges and approaches for socio-
logical analysis. Sociology, 48(4), 663-681.
Torraco, R. J. (2005). Writing integrative literature re-
views: Guidelines and examples. Human Resource
Development Review, 4(3), 356-367.
Vainio, A. (2012). Beyond research ethics: Anonymity as
‘ontology’, ‘analysis’ and ‘independence’. Qualitative
Research, 13(6), 685-698.
Vanderpool, H. Y. (1996). The ethics of research involving
human subjects: Facing the 21st century. Frederick:
University Publishing Group.
Verma, I. M. (2014). Editorial expression of concern and
correction. Proceedings of the National Academy of
Sciences, 111(29), 10779.
Weeden, M. R. (2012). Ethics and on-line research
methodology. Journal of Social Work Values and Eth-
ics, 9(1), 40-51.
Wrzus, C., & Mehl, M. R. (2015). Lab and/or field? Meas-
uring personality processes and their social conse-
quences. European Journal of Personality, 29(2), 250-
Zimbardo, P. (1973). On the ethics of intervention in
human psychological research: With special refer-
ence to the Stanford prison experiment. Cognition,
2(2), 243-256.
Zimmer, M. (2010). “But the data is already public”: On
the ethics of research in Facebook. Ethics and Infor-
mation Technology, 12(4), 313-325.
Zwitter, A. (2014). Big Data ethics. Big Data & Society,
1(2), 1-6.
About the Authors
Jukka Jouhki (PhD, Docent) is a Cultural Anthropologist working as a Senior Lecturer of Ethnology at
the Department of History and Ethnology, and a member of the Social Media Research Institute at
University of Jyväskylä, Finland. Jouhki’s research interests include democracy, nationalism, imagined
communities, online gambling, old and new media, as well as various issues in human-technology re-
lations, and cultural phenomena related to them.
Epp Lauk (PhD) is Professor of Journalism and Head of the Department of Communication at the Uni-
versity of Jyväskylä, Finland. Her research and publications focus on journalism cultures and history,
media and journalism in Central and East European countries, media self-regulation and innovations
in journalism.
Maija Penttinen is an undergraduate student at the Department of History and Ethnology, University
of Jyväskylä. Her research interests include both political participation and civic action on social me-
dia, in addition to studying the integration of social networking sites as platforms for everyday activi-
ties. She is currently working on her Master’s thesis on everyday activities and experiences
manifested in research literature concerning the social networking site Facebook.
Niina Sormanen (MA) is a PhD candidate in Organizational Communication and Public Relations (PR)
at the University of Jyväskylä, Department of Communication. Her research interests include com-
municative behavior and power relations in the social media context. Her PhD thesis is focused on
the interplay of organizational and media professionals and individuals in the social media context
and uses of social media in building their communicative power.
Turo Uskali (PhD) is the Head of Journalism and Senior Research Scholar at the Department of Com-
munication, University of Jyväskylä, Finland. He leads several research projects focusing on innova-
tions in journalism. The most recent ones focus on mobile data journalism, and wearables. Uskali is
also an Associate Professor at the University of Bergen, Norway, and he has authored or co-authored
seven books about the evolution of global journalism and the changes in media industries.
... The rapid diffusion of social media posts containing viral challenges has, in turn, triggered the spread of the unconventional behaviors encouraged by these challenges. Social media serves as a highly effective medium for viral challenges to generate and flourish rapidly as ordinary users can act as both the pioneers and propagators of user generated content in the online realm [18,23,28,32]. As such, viral social media challenges present an interesting case study for applying behavioral contagion theory, which attempts to explain how an individual s behavior can be indirectly influenced by observing the behavior of others [35,47]. ...
... Our research makes a unique contribution by studying both prosocial and potentially risky viral social media challenges through the theoretical lens of behavioral contagion [35,47]. As such, our research sets out to answer the following high-level research questions: To answer these questions, we conducted 30 semi-structured interviews with college students (ages [18][19][20][21][22][23][24][25][26][27] at two large public universities in the United States. Participants had to have ACM Trans. ...
... The Facebook emotional contagion study, which found that emotional states can be transmitted indirectly and unknowingly through observing posts made by one s Facebook friends, is likely the most well-known and controversial application of contagion theory in the HCI literature [1,23,26,41]. Yet, understanding if and how behavior propagates through social networks is also an emerging area of HCI research. Polansky et al. [35] first coined the term behavioral contagion and defined it as a form of social influence in which the behavior of an individual is influenced indirectly by observing the behavior of others. ...
Full-text available
Viral social media challenges have erupted across multiple social media platforms. While social media users participate in prosocial challenges designed to support good causes, like the Ice Bucket Challenge, some challenges (e.g., Cinnamon Challenge) can also potentially be dangerous. To understand the influential factors, experiences, and reflections of young adults who participated in a viral social media challenge in the past, we conducted interviews with 30 college students (ages 18-27). We applied behavioral contagion theory as a qualitative lens to understand whether this theory could help explain the factors that contributed to their participation. We found that behavior contagion theory was useful but not fully able to explain how and why young social media users engaged in viral challenges. Thematic analyses uncovered that overt social influence and intrinsic factors (i.e., social pressure, entertainment value, and attention-seeking) also played a key role in challenge participation. Additionally, we identified divergent patterns between prosocial and potentially risky social media challenges. Those who participated in prosocial challenges appeared to be more socially motivated as they saw more similarities between themselves and the individuals that they observed performing the challenges and were more likely to be directly encouraged by their friends to participate. In contrast, those who performed potentially risky challenges often did not see similarities with other challenge participants, nor did they receive direct encouragement from peers; yet, half of these participants said they would not have engaged in the challenge had they been more aware of the potential for physical harm. We consider the benefits and risks that viral social media challenges present for young adults with the intent of optimizing these interactions by mitigating risks, rather than discouraging them altogether.
... To provide insight, Facebook cooperated with Cornell University researchers to conduct an experiment on Facebook users entitled "Experimental evidence of Massive-scale Emotional Contagion through Social Networks". In this research, Facebook changed users' news feeds with the same information but with varying degrees of negative and positive aspects to determine how it would affect the users' reaction towards the post [39]. For example, if two news stories had the same information, the positive post would show the news in a good light, while the negative post would show the news in a bad light. ...
... This is called "emotional contagion". Both Facebook and Cornell did a lot of data mining, so it was thought that the research had enough data to be called scientific research [39]. ...
Full-text available
Businesses are starting to use the Metaverse to expand their service network and establish new value co-creation for customers. However, businesses may need to carefully assess the ethical implications of their data collection and utilisation procedures for business sustainability. This paper examines the ethical concerns surrounding the usage of the Metaverse by organisations to obtain a competitive edge. This research was based on an exploratory assessment of business ethics and a Metaverse business model. A structured literature review was selected as the study’s design to get a better understanding of the issue. This research provides preliminary insights into the Metaverse and its business ethics, suggesting that any business must have a transparent policy regarding its Metaverse applications to foster a culture of ethics. This research aims to promote a constructive discussion on the issue of ethics in the context of the Metaverse that arises when an organisation conducts a violation or misuses user data. This paper is useful for people in the fields of technology and public policy, such as academics, businesspeople, and policymakers.
... That led to potential future threats being missed. For example, Facebook and Cambridge Analytica 4 used malevolent privacy techniques to persuade individuals to abdicate their privacy and ethical principles [16,21]. ...
The rationale of this work is based on the current user trust discourse of Artificial Intelligence (AI). We aim to produce novel HCI approaches that use trust as a facilitator for the uptake (or appropriation) of current technologies. We propose a framework (HCTFrame) to guide non-experts to unlock the full potential of user trust in AI design. Results derived from a data triangulation of findings from three literature reviews demystify some misconceptions of user trust in computer science and AI discourse, and three case studies are conducted to assess the effectiveness of a psychometric scale in mapping potential users' trust breakdowns and concerns. This work primarily contributes to the fight against the tendency to design technical-centered vulnerable interactions, which can eventually lead to additional real and perceived breaches of trust. The proposed framework can be used to guide system designers on how to map and define user trust and the socioethical and organisational needs and characteristics of AI system design. It can also guide AI system designers on how to develop a prototype and operationalise a solution that meets user trust requirements. The article ends by providing some user research tools that can be employed to measure users' trust intentions and behaviours towards a proposed solution.
... In computational social science, passive and captive research individuals provide behavioural data, which are not always left open to facilitate reproducibility and transparency. Moreover, ethics concerns have long been raised (Giglietto and Rossi, 2012) about big data commonly being extracted from users' social media accounts without their explicit permission, and sometimes even after manipulating the feedback users receive (Jouhki et al., 2016). ...
Computational social science is being scrutinised, and concerns have been expressed about the lack of transparency and inclusivity in some of its research. How, then, can computational social science be reformulated to adopt participatory and inclusive practices? And which aspects must be carefully considered to make this reformulation possible? We present a practical case that addresses the challenge of collectively studying social interactions within community-based mental health care. This study is done by revisiting and revising social science methods such as social dilemmas and game theory, and by incorporating the use of digital interfaces to run experiments in the field. The research can be framed within the emergent citizen social science, or social citizen science, where shared practices are still lacking. We have identified five key steps of the research process to be considered when introducing participatory and inclusive practices: research framing, research design, experimental spaces, data sources, and actionable knowledge. Social dilemma and game theory methods and protocols need to be reconsidered as an experiential activity that enables participants to self-reflect. Co-design dynamics and the building of a working group outside academia are important to initiate socially robust knowledge co-production. Research results should support evidence-based policies and collective actions put forward by civil society. The inclusion of underserved groups is discussed as a way forward to new avenues of computational social science, jointly with its intricate ethical aspects. Finally, the paper also provides some reflections exploring the particularities of a further enhancement of social dimensions in citizen science.
... In fact, scholars fear perceptions of surveillance when prospective research designs are adopted without participant awareness. For example, the Facebook emotion contagion study [16], which did not seek consent from people whose Facebook feeds were modified for experimental purposes, was heavily critiqued on ethical grounds [17]. Pertinent here is the position of boyd and Crawford, who noted that experiments conducted without participant awareness can reinforce the troubling perception of the technologies as "Big Brother, enabling invasions of privacy, decreased civil freedoms, and increased state and corporate control" [10]. ...
Research has revealed the potential of social media as a source of large-scale, verbal, and naturalistic data for human behavior both in real-time and longitudinally. However, the in-practice utility of social media to assess and support wellbeing will only be realized when we account for extraneous factors. A factor that might confound our ability to make inferences is the phenomenon of the "observer effect": that individuals may deviate from their otherwise typical social media use because of the awareness of being monitored. This paper conducts a causal study to measure the observer effect in longitudinal social media use. We operationalized the observer effect in two dimensions of social media (Facebook) use: behavioral and linguistic changes. Participants consented to Facebook data collection over an average retrospective period of 82 months and an average prospective period of 5 months around the enrollment date to our study. We measured how they deviated from their expected social media use after enrollment. We obtained expected use by extrapolating from historical use using time-series (ARIMA) forecasting. We find that the deviation in social media use varies across individuals based on their psychological traits. Individuals with high cognitive ability and low neuroticism immediately decreased posting after enrollment, and those with high openness significantly increased posting. Linguistically, most individuals decreased the use of first-person pronouns, reflecting lowered sharing of intimate and self-attentional content. While some increased posting about public-facing events, others increased posting about family and social gatherings. We validate the observed changes with respect to psychological traits drawing from psychology and behavioral science theories, such as self-monitoring, public self-consciousness, and self-presentation. The findings provide recommendations to correct observer effects in social media data-driven assessments of human behavior.
... Sometimes, the results of these studies are publicly shared, such as a 61-million-person experiment to increase voter turnout (Bond et al., 2012), or the infamous emotional contagion experiment (Kramer et al., 2014). In both cases, there was major public backlash about the impact of these systems and the questionable ethics of such experiments (Jouhki et al., 2016; Zittrain, 2014; Zuboff, 2015). ...
Existing social media platforms (SMPs) make it incredibly difficult for researchers to conduct studies on social media, which in turn has created a knowledge gap between academia and industry about the effects of platform design on user behavior. To close the gap, we introduce Yourfeed, a research tool for conducting ecologically valid social media research. We introduce the platform architecture, as well as key opportunities such as assessing the effects of exposure to content on downstream beliefs and attitudes, measuring attentional exposure via dwell time, and evaluating heterogeneous newsfeed algorithms. We discuss the underlying philosophy of interoperability for social media and future developments for the platform.
... Protecting users' autonomy & anonymity requires researchers to reflect on whether applied social media interventions or analysis methods could be manipulative or invasive. A negative example that might have harmed research subjects is an experiment in which Facebook altered users' timelines to show them more positive or negative content in order to investigate emotional contagion (Jouhki, Lauk, Penttinen, Sormanen, & Uskali, 2016). Further, ethical issues might arise if analysis methods are used to infer sensitive information or make predictions about individuals based on social media data, for example whether someone is likely to become depressed (Laacke, Mueller, Schomerus, & Salloch, 2021). ...
En route to the unravelling of today’s multiplicity of societal challenges, making sense of social data has become a crucial endeavour in Information Systems (IS) research. In this context, Social Media Analytics (SMA) has evolved to a promising field of data-driven approaches, guiding researchers in the process of collecting, analysing, and visualising social media data. However, the handling of such sensitive data requires careful ethical considerations to protect data subjects, online communities, and researchers. Hitherto, the field lacks consensus on how to safeguard ethical conduct throughout the research process. To address this shortcoming, this study proposes an extended version of a SMA framework by incorporating ethical reflection phases as an addition to methodical steps. Following a design science approach, existing ethics guidelines and expert interviews with SMA researchers and ethicists serve as the basis for redesigning the framework. It was eventually assessed through multiple rounds of evaluation in the form of focus group discussions and questionnaires with ethics board members and SMA experts. The extended framework, encompassing a total of five iterative ethical reflection phases, provides simplified ethical guidance for SMA researchers and facilitates the ethical self-examination of research projects involving social media data.
The Routledge Handbook of Translation and Methodology provides a comprehensive overview of methodologies in translation studies, including both well-established and more recent approaches. The Handbook is organised into three sections, the first of which covers methodological issues in the two main paradigms to have emerged from within translation studies, namely skopos theory and descriptive translation studies. The second section covers multidisciplinary perspectives in research methodology and considers their application in translation research. The third section deals with practical and pragmatic methodological issues. Each chapter provides a summary of relevant research, a literature overview, critical issues and topics, recommendations for best practice, and some suggestions for further reading. Bringing together over 30 eminent international scholars from a wide range of disciplinary and geographical backgrounds, this Handbook is essential reading for all students and scholars involved in translation methodology and research.
This article discusses Internet research ethics, which promises to become an ever-more robust and significant field within information ethics, on the one hand, and research ethics more broadly, on the other. As new venues emerge for human-human and human-machine interaction, it seems certain that new ethical conundrums will emerge. But the overall history of Internet research ethics includes at least some convergence on key values and rights, while at the same time preserving important local differences with regard to approaches to ethical decision making and implementation of basic rights and principles - even across East-West divides. This trajectory suggests not the certainty of finding resolutions to every ethical problem that comes along, but rather the sense of finding such resolutions in the face of new difficulties, with sufficient frequency and success to encourage further efforts to do so.
Advancing theory in media exposure and effects requires contending with an increasing level of complexity and contingency. Building on established theoretical concerns and the research possibilities enabled by large social datasets, we propose a framework for mapping information exposure of digitally situated individuals. We argue that from the perspective of an individual's personal communication network, comparable processes of "curation" are undertaken by a variety of actors-not only conventional newsmakers but also individual media users, social contacts, advertisers, and computer algorithms. Detecting the competition, intersection, and overlap of these flows is crucial to understanding media exposure and effects today.
The recent Facebook study about emotional contagion has generated a high-profile debate about the ethical and social issues in Big Data research. These issues are not unprecedented, but the debate highlighted that, in focusing on research ethics and the legal issues about this type of research, an important larger picture is overlooked about the extent to which free will is compatible with the growth of deterministic scientific knowledge, and how Big Data research has become central to this growth of knowledge. After discussing the ‘emotional contagion study’ as an illustration, these larger issues about Big Data and scientific knowledge are addressed by providing definitions of data, Big Data and of how scientific knowledge changes the human-made environment. Against this background, it will be possible to examine why the uses of data-driven analyses of human behaviour in particular have recently experienced rapid growth. The essay then goes on to discuss the distinction between basic scientific research as against applied research, a distinction which, it is argued, is necessary to understand the quite different implications in the context of scientific as opposed to applied research. Further, it is important to recognize that Big Data analyses are both enabled and constrained by the nature of data sources available. Big Data research is bound to become more widespread, and this will require more awareness on the part of data scientists, policymakers and a wider public about its contexts and often unintended consequences.
The paper explores research ethics in the era of social media and big data by discussing a debated Facebook experiment about emotional contagion.
Online social networking is rising. Adolescent boys and girls are social network users, but they are not cautious enough. Even if they know the ways to protect themselves against the Internet’s risks, just a few of them behave consistently. Most teenagers start using smartphones and social networks when they are 11 years old or younger. According to many of them, the Internet is just a game. In face-to-face social interactions they have many friends, but they are uncomfortable. The peer community is no longer a safe place; it demands growing competitiveness and self-defence capabilities. Adolescent boys and girls who are less at ease in the peer community are also more likely to be victims of cyberbullying. Data source: Laboratorio Adolescenza and SIMA, Survey 2016 (sample size: 2,000 12-14 year old Italian students).
Experiments in geoengineering - intentionally manipulating the Earth's climate to reduce global warming - have become the focus of a vital debate about responsible science and innovation. Drawing on three years of sociological research working with scientists on one of the world's first major geoengineering projects, this book examines the politics of experimentation. Geoengineering provides a test case for rethinking the responsibilities of scientists and asking how science can take better care of the futures that it helps bring about. This book gives students, researchers and the general reader interested in the place of science in contemporary society a compelling framework for future thinking and discussion.
This paper examines the peculiar essayistic style of Walter Benjamin. All of the various genres of his writing have an allegorical and surrealistic quality, with hidden and private (and essayistic) meanings, but also with a prophetic, hallucinatory tension.