Accost, Accede, or Amplify: Attitudes towards COVID-19 Misinformation on WhatsApp in India
Rama Adithya Varanasi
Information Science
Cornell University
New York, NY, USA
rv288@cornell.edu
Joyojeet Pal
School of Information
University of Michigan
Ann Arbor, MI, USA
joyojeet@umich.edu
Aditya Vashistha
Information Science
Cornell University
New York, NY, USA
adityav@cornell.edu
ABSTRACT
Social media has witnessed an unprecedented growth in users based in low-income communities in the Global South. However, much remains unknown about the drivers of misinformation in such communities. To fill this gap, we conducted an interview-based study to examine how rural and urban communities in India engage with misinformation on WhatsApp. We found that misinformation led to bitterness and conflict – rural users who had higher social status heavily influenced the perceptions and engagement of marginalized members. While urban users relied on the expertise of gatekeepers for verification, rural users engaged in collective deliberations in offline spaces. Both rural and urban users knowingly forwarded misinformation. However, rural users propagated hyperlocal misinformation, whereas urban users forwarded misinformation to reduce their efforts to assess information credibility. Using a public sphere lens, we propose that the reactions to misinformation provide a view of Indian society and its schisms around class, urbanity, and social interactions.
CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI.
KEYWORDS
Misinformation, WhatsApp, encrypted platforms, rural, disinformation, public sphere
ACM Reference Format:
Rama Adithya Varanasi, Joyojeet Pal, and Aditya Vashistha. 2022. Accost, Accede, or Amplify: Attitudes towards COVID-19 Misinformation on WhatsApp in India. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3491102.3517588
1 INTRODUCTION
The risks of misinformation are extremely high for millions of new social media users in low-income communities who may lack the capacity to verify information [108]. In this environment,
misinformation has led to devastating events, including lynchings [124] and deaths of several people [62]. Yet, the overwhelming share of research on misinformation is situated in Western environments [59, 76, 113], in turn leading to interventions that disregard the linguistic, cultural, and political settings in the Global South [2, 22, 77, 96]. A growing body of work has taken this challenge head-on, especially in large nation states like Brazil and India that have seen massive increases in misinformation in the public sphere [9, 64, 86, 97]. However, much of the existing research has focused mainly on political misinformation shared by urban users [9, 37, 50, 64, 80, 97], who have grown into social media use after having been Internet users for a while. Little is published on the drivers of misinformation in low-income, rural settings where users are new to online information environments and often lag behind their urban counterparts in digital skills [25, 115].
We present a comparative examination of attitudes and experiences, using semi-structured interviews with nineteen WhatsApp users from rural communities and nine users from urban communities in India. Our work considers the commonalities and differences between participants in how they interact with misinformation, what factors shape their perceptions and engagements, and which strategies they employ to combat the risks of misinformation. To understand everyday misinformation experiences of our participants, we focused on health misinformation propagating on closed WhatsApp groups. Our findings paint a troubling picture, wherein WhatsApp groups of both communities were fraught with COVID-19 health misinformation, which resulted in conflict and harms in offline and online settings. We found that people who had higher social status controlled and influenced the perceptions and engagement of marginalized people in rural areas. We also found important differences in how participants assess information credibility. While both rural and urban participants forwarded misinformation in their WhatsApp groups even when they were unsure of the authenticity, rural participants deliberately propagated hyperlocal disinformation to reinforce communal biases. While urban participants only occasionally pushed back against misinformation messages online, rural participants employed community-driven tactics, such as deliberations in social gatherings and announcements through repurposed vehicles with loudspeakers that traversed physical spaces.
To analyze collective engagement practices on WhatsApp groups and relate them to offline social practices, we turn to the public sphere, a lens to study public interactions undertaken by individuals in a community for different kinds of collective purposes, such as opinion formation [57]. Building on the prior work that examined modern social media spaces through the lens of the public sphere [23, 74, 92], we find ways in which WhatsApp groups, as online public spheres in rural communities, were mainly accessible to people with power, who promoted their own interpretations of misinformation. Hierarchies allowed the reinforcement of beliefs aligned with dominant sociopolitical ideologies. Yet, we also found spaces people created to collectively deliberate and assist people to contest and make sense of misinformation – both in-person and on WhatsApp groups, as an outcome of forwarded online misinformation. We end by discussing the importance of offline public spheres in fighting misinformation, especially in rural communities. Our study contributes to the study of misinformation in HCI4D and CSCW literature as follows.
(1) We show how rural communities, largely understudied, engage with misinformation on closed WhatsApp groups, providing complementary perspectives to the prior computational studies conducted on public groups [50, 64].
(2) We contrast how people in rural and urban communities discover, verify, propagate, and combat misinformation.
(3) Using the lens of public sphere, we provide rich insights to design features that could enable effective participation of marginalized members of the community in collective deliberations to curb the spread of misinformation.
2 BACKGROUND AND RELATED WORK
Misinformation is inaccurate [48], incomplete [78], or vague [68] information that is shared by individuals with varied motivations [129]. Misinformation literature has its roots in rumor studies. While misinformation and rumors have several commonalities (e.g., both may resurface despite corrections [70, 71]), unlike rumors whose veracity cannot be verified [35, 43], misinformation is always deemed as inaccurate.
Research on online misinformation has grown rapidly with the rising influence of social media in the Global North. In particular, HCI scholars have studied misinformation with diverse stakeholders (e.g., journalists) [18, 47, 99, 109, 112, 113] on various media (e.g., news, tweets) [9, 30, 31, 50, 54, 59, 60] through a range of disciplinary perspectives [27, 46, 67, 79, 82]. Within the Global South, social media has witnessed an unprecedented growth, often exploiting new technology users’ inexperience in dealing with digital disinformation [116], leading to dramatic consequences such as lynchings [124], civic unrest [98], and political polarization [9, 10]. Despite this, the tools and approaches to studying misinformation continue to be situated largely in the Global North [34, 87, 125].
A nascent but growing body of work in HCI has started investigating misinformation in the Global South to understand misinformation propagation [50, 64, 97], to build a typology of misinformation messages [9, 11, 80], and more recently to understand misinformation engagement practices [45, 79, 123]. Although this line of work offers useful insights about how people in the Global South engage with misinformation, little work has been done to study experiences of rural communities who have limited digital awareness and different mental models around emergent technologies [19, 20, 36, 115]. Our study contributes to this line of research by asking the following RQ: What are the key differences and commonalities between rural and urban communities in how they (1) discover and verify, (2) propagate, and (3) combat misinformation on closed WhatsApp groups?
2.1 Navigating Misinformation
Discovering and Verifying Misinformation. Prior research on misinformation discovery has focused on different processes that individuals use to label information as accurate or misinformation. These processes can be either driven by an individual’s internal perceptions [46] or their external actions [76]. Internal perceptions are often influenced by the message source [76, 79, 118], content characteristics (e.g., the overall tone of the message [31, 46]), design cues (e.g., number of shares, likes, or comments), and individuals’ personality traits [31, 75, 95] when assessing information credibility [51].
Conversely, external actions refer to the assistance individuals take from interpersonal and institutional resources to verify information veracity [118], such as reaching out to individuals for verification [26, 59] and online verification [53]. Within the Global South, urban users often relinquish their responsibility of verification to social media platforms and news agencies [58]. Our work extends this body of work by characterizing the internal perceptions and external practices of rural social media users that shape their opinions around misinformation.
Propagating Misinformation. People often share misinformation unintentionally [59] when emotional proximity to the subject and prior beliefs cloud their judgment [89, 100]. Even on realizing that they shared misinformation, people often justify their action by putting responsibility on other people for not doing their own research [16]. On rare occasions, individuals explicitly correct themselves by deleting the message or adding clarification for their imagined audience [83, 119] to be a socially responsible citizen [16]. People also deliberately share misinformation [119], contributing to faster and farther spread of misinformation than factual information [12, 127]. Prior literature outlines diverse motivations for people to share disinformation, including receiving financial benefits [28] and safeguarding individual or communal agendas [9, 24, 82, 128], among others. We contribute to this line of research by describing the sociotechnical factors and practices that shape how misinformation propagates in urban and low-income rural communities.
Combating Misinformation. Scholars have also studied the tools, techniques, and strategies people use to curb misinformation in their everyday lives. One strategy is to provide people multiple warnings ahead of time about potential exposure to misinformation [44, 76, 104]. However, the timing of such warnings remains controversial, as a few studies have shown reduced effectiveness of such warnings post-exposure to misinformation [44, 104], while more recent work shows contrary evidence [27]. Another strategy that people often use to correct misinformation is providing an alternative evidence-based narrative [76, 105, 113]. Prior studies have also explored the scope of computational tools in assisting people to fight misinformation by providing visual aids and data-driven crowdsourced cues to distinguish misinformation [66, 93, 106, 107]. Our work expands Uddin et al.’s [122] research in this area by examining the range of local and contextualized strategies rural participants use to combat misinformation and contrasting them with those used by urban participants. Taken together, our findings present a critical and timely addition to the literature on the drivers of engagement with misinformation by studying different communities in India.
2.2 Public Sphere as a Lens
To examine the engagement practices of rural and urban communities with misinformation, we draw upon the notion of the public sphere. The concept of the public sphere was put forth by German social theorist Jürgen Habermas to describe an aspect of social life where individuals with common needs and issues can gather freely to deliberate with an intent to influence collective action [57]. The term has a broad and multi-faceted meaning, ranging from “strong” public spheres that engage in formal and large-scale deliberations to achieve a political outcome to more “weak” ones that engage in small-scale discussions and information exchanges to achieve opinion formation [49]. A common theme across these different types of public spheres is that the information that flows through them pertains to the public at large. This is in contrast to the private sphere, which is defined as a space where an individual experiences a degree of freedom, privacy, and authority, unrestricted by external interventions [57]. Historically, women and people of color have been excluded from participation in the public sphere. As a result, women have mostly kept to the private sphere for childcare and domestic work, whereas men have dominated the public sphere [101].
The theoretical development of the public sphere attributed important characteristics to physical spaces. First, for successful collective deliberations, the public sphere should reflect the absence of power structures by enabling free and equal standing among participants through mutual respect, empathy, and solidarity [56, 65, 114]. Second, the public sphere should also be marked by equal participation and exchange of ideas [33], with a focus on the common good [114] and willingness by members to reevaluate competing validity claims [55]. Although Habermas himself distinguishes the normative principles of his theory from the practical ones, subsequent research on the public sphere has criticized the theory for its inadequate conception of unequal communities and their participation in the public sphere [49]. In recent years, technological tools, such as online forums [40], collaborative spaces [15], and instant messaging apps like WhatsApp [124], have replaced traditional public spheres as new spaces of collective deliberation. These tools provide inter-connectivity and democratic communication with potential audiences through the ability to reshare, comment, like, or react [41, 94, 103].
In order to understand the contribution of such technologies to the formation and maintenance of the online public sphere, Dahlgren [42] proposed three key dimensions, namely structural, interactional, and representational. The structural dimension provides an understanding of how configurations of online technologies affect access and inclusion of different communities. The interactional dimension focuses on the exchange of opinion among different group members. Lastly, the representational dimension focuses on the plurality of such exchanges, engaging ways in which different communities are represented in such deliberations. For example, from the lens of the public sphere, WhatsApp enables users to create and maintain public and private groups (structural) and to share information with different individuals, such as colleagues, friends, and family (interactional). These groups also provide a space for collective deliberations and discourse that cross geographic boundaries by facilitating the exchange of global news, user-generated content, and state announcements between diverse individuals in the communities (representational) [88, 90, 91]. In our study, we discuss our findings through important characteristics of the public sphere to examine how rural and urban communities engage in collective deliberations around misinformation.
3 METHODOLOGY
To answer our research questions, we draw on 28 semi-structured interviews conducted with rural and urban participants who were part of active WhatsApp groups across four states in India. We used a breadth approach to reach a more dispersed set of respondents across parts of India. The interviews spanned four months (Aug–Oct, 2020) and were conducted remotely during the period of rising COVID-19 cases in India.
Participant Recruitment. We used a combination of snowball and convenience sampling to recruit participants from different rural and urban communities. We present distinct characteristics of participants from these communities in the demographic section. To recruit rural participants, we contacted six well-known non-governmental organizations that work with rural communities on community health issues. Of these, three organizations that were running COVID-19-related programs expressed interest.
We partnered with them and recruited participants through their established networks. Based on our diverse recruitment criteria, the organizations reached out to potential participants through both WhatsApp messages and telephone calls, explained the purpose of our study to them, and then gave us the contact information of those who expressed an interest in participating. All of our interactions with the organizations and participants took place remotely. We set up pre-interview conversations with interested candidates to understand their participation in WhatsApp groups and experience with COVID-19 misinformation. All shortlisted participants were part of several WhatsApp groups and routinely saw COVID-19-related messages. We also recruited participants who did not possess smartphones but were intermediated users of WhatsApp (i.e., even though they did not have a WhatsApp account, they routinely saw messages through other people).
To recruit urban participants, we composed a WhatsApp-based recruitment message containing basic study details along with the first author’s email address, phone number, and a direct URL to contact him on WhatsApp. We then shared the message in our active WhatsApp groups of friends, acquaintances, and colleagues. We also requested our social connections to forward it in their own personal active WhatsApp groups. We set up a pre-interview conversation with the interested candidates to understand their participation in WhatsApp groups and experience with COVID-19 misinformation. During the recruitment phase, we made an explicit decision to recruit low-income users from rural areas, in part because of their relatively higher isolation from major forms of information environments [85] in comparison to urban low-income users who are exposed to other forms of information [111]. Overall, we recruited nineteen rural and nine urban participants. After each interview, we refined our probes, stopping when we reached theoretical saturation in our participants’ narratives.
Participant Demographics. Boundaries between rural and urban communities are complex to tease out precisely [126], both because of the fluidity of peoples’ movements and connectivity, and the increasing ubiquity of services across regions. However, some important distinctions remain in place. Rural participants were in geographically smaller contiguous settlements. Our sampled villages had an average of 2,700 people, while small towns averaged 80,000 residents. While respondents in both locations had offline connections besides online engagement with other members of their WhatsApp groups, a key characteristic of rural populations was physical proximity with their group members. This proximity was layered over other hierarchies – participants in the WhatsApp groups included village council members and community elders besides community health workers, low-income vendors, homemakers, young graduates, NGO workers, and laborers. Out of the nineteen rural participants, eight were women, five of whom were homemakers and three participated in social work. Six participants had less than twelve years of formal education, ten participants were graduates, and three were postgraduates. Most participants belonged to low-income households, with a median monthly family income of $145. This is in line with similar studies in the Global South with low-income communities [63, 115]. The majority of the participants practiced Hinduism (n=15), which is roughly comparable to the national spread on religion, while the rest were Muslim (n=2) and Christian (n=1). Rural participants spoke Hindi, Oriya, and Telugu. Most of the households had limited experience with technology, and smartphones were often shared among multiple family members. The Internet was mainly accessed through mobile phones. Rural participants used WhatsApp and Facebook Messenger for everyday communication. They also used YouTube and apps similar to TikTok (e.g., Chingari, Mitron) for daily entertainment.
In contrast, the urban respondents were from cities with a population greater than one million. These participants comprised students, information technology consultants, homemakers, and white collar workers in finance, market research, and law. Five out of nine urban participants were women. The majority of these women participants were employed (n=4). Six participants were graduates and the rest were postgraduates. Most urban participants were from middle-income households with a median monthly family income of $530. With respect to faith and religion, six participants practiced Hinduism, two were Christians, and one was Muslim. Urban participants mainly conversed in English or Hindi. Table 1 shows the demographic details of our participants. Unlike their rural counterparts, all the urban participants had exclusive ownership of and access to their smartphones. They also invested in paid subscriptions to high-speed broadband connections, in addition to mobile data plans. In addition to WhatsApp, urban participants also used Snapchat, Telegram, and Instagram for daily communication. They also actively used YouTube, Twitter, LinkedIn, and Facebook.
Procedure. Due to rising cases of COVID-19 worldwide, we developed a remote interview protocol with diverse options, including video call on Zoom or WhatsApp and voice call via mobile network, to account for participants’ preferences and Internet bandwidth issues. We also asked participants to choose 1–2 WhatsApp group(s) with the most COVID-related activity in the last three months. Participants were then encouraged to peruse the group before the interview and shortlist interesting COVID-related conversations (using the “favorite” option in WhatsApp or by writing them down). During the interview process, we asked participants to share the shortlisted messages along with the relevant context. This process helped us develop a better understanding of the participants’ use of these WhatsApp groups. The participants shared messages by either sharing their smartphone screen through Zoom, forwarding messages over WhatsApp to us, or reading the messages out loud.
The interview protocol was divided into three parts. We started the interview with questions that explored how participants discovered and verified the authenticity of misinformation messages (e.g., “Can you walk us through a few messages that you trusted the most in the group?”). The next part explored their motivations for interacting with misinformation (e.g., “What different factors do you take into consideration while forwarding a message?”). The last part investigated their willingness and strategies to curb misinformation (e.g., “What do you do with the COVID related messages that you don’t trust? Why do you use this strategy?”). We conducted interviews in the language preferred by participants and recorded the conversation with participants’ consent. The interviews lasted an average of 60 minutes (min: 45 minutes, max: 2 hours). The participants were not provided with any compensation for the interviews.
Data Collection and Analysis. We collected a total of 32.5 hours of interview data. All the recordings were translated into English and transcribed. As the interviews were conducted in multiple languages, appropriate care was taken to ensure consistency in the translation process. The interview sessions also produced 84 misinformation messages and 59 pages of detailed notes. The first author, who conducted the interviews, also engaged in the qualitative coding using inductive thematic analysis [29]. We took multiple passes on the transcribed data, notes, and the misinformation messages to conduct open coding. We avoided using any presupposed codes and instead let the codes emerge from our data. Credibility was established through prolonged engagement with the data by all authors. Major disagreements in this process were resolved through multiple rounds of peer-debriefing in which all the authors participated [39]. Over the course of analysis, all authors regularly met to: (1) discuss coding plans, (2) develop a preliminary codebook, (3) review the codebook and refine/edit codes, and (4) develop categories and themes. Finally, using an abductive approach [120], we further structured, mapped, and categorized the themes around the key concepts of public sphere, such as discourse, opinion formation, structure, and representation, among others. At the end of multiple passes, our collaborative analysis produced 51 codes. Example codes included social verification strategies, power structures influencing trust, and communal angle around forwarding. Codes were further condensed into five key themes around misinformation discovery, misinformation propagation, combating misinformation, effects of misinformation, and misinformation characteristics. Our final codebook, themes, and the prevalence of each code are provided in the Appendix.
Participants: Rural | Urban
Count: 19 | 9
Gender: Women: 8; Men: 11 | Women: 5; Men: 4
Age (years): Min: 20; Max: 57; Avg: 34; St.D: 12.43 | Min: 20; Max: 53; Avg: 31; St.D: 12.17
Education: High school or less: 6; Graduate: 10; Post-graduate: 3 | Graduate: 6; Post-graduate: 3
Location (states): Maharashtra: 8; Andhra Pradesh: 3; Orissa: 4; Telangana: 4 | Maharashtra: 2; Andhra Pradesh: 2; Orissa: 4; Telangana: 1
Group types: Friends: 10; Family: 6; Colleagues: 5; Society: 5; Self-help: 4 | Friends: 6; Family: 10; Colleagues: 4; COVID-19: 3
Group size: Min: 12; Max: 103; Avg: 31; St.D: 26.25 | Min: 12; Max: 45; Avg: 31; St.D: 10.72
Family income (Rupees/month): Min: 3,500; Max: 18,000; Avg: 8,182.35; St.D: 4,231.16 | Min: 21,000; Max: 55,000; Avg: 34,300; St.D: 10,500
Smartphone ownership: Own personally: 3; Own & share: 8; Only share: 5; No access: 3 | Own personally: 7; Own & share: 2; Only share: 0; No access: 0
Profession types: Govt. employed: 4; NGO: 4; Salaried: 4; Business: 4; Unemployed: 3 | NGO: 2; Salaried: 6; Business: 1; Unemployed: 1
Professional experience (years): Min: 3; Max: 17; Avg: 6 | Min: 1; Max: 22; Avg: 8.5
Table 1: Participants’ demographic details across Rural and Urban communities

Positionality & Ethical Considerations. Our study was approved by the IRB at Cornell University. All participants were briefed about the implications of the study. During recruitment, to accommodate
unanticipated personal issues, participants were given the option
to reschedule, cancel or conduct the interview in multiple sessions.
Due to the exploratory nature of our research, we did not influence participants’ existing beliefs and interaction practices with misinformation messages through any form of intervention. At the same time, we shared our understanding of COVID-19 misinformation by sharing simple strategies to identify misinformation compiled by the Ministry of Health.1 To safeguard participants’ misinformation interaction practices, all the participants’ personally identifiable information, including their narratives and shared messages, was anonymized.
The qualitative interpretation of the resultant data is shaped by our prior research and experience with studying misinformation in the Global South. All authors are of Indian origin and have extensive experience conducting field research with both rural and urban populations in the Global South. The interviews were conducted by the first author, a PhD student, whose urbanity, education, and social status placed him in a position of power, especially with participants in rural areas. All authors have routinely received several of these COVID-related misinformation messages on WhatsApp groups they are part of. Our commitment to studying the impact of misinformation in rural settings is part of a longer-term engagement of understanding the benefits and harms of emergent technologies to marginalized communities with limited digital literacy and know-how. We view HCI research from the lens of postcolonial computing [61], aiming to conduct formative research to understand how people in low-resource environments interact with misinformation, experience associated harms and burdens, and devise novel strategies to curb its propagation.

1 https://www.mohfw.gov.in
4 DESCRIPTION OF WHATSAPP GROUPS
We now briefly describe the WhatsApp groups and conversations our participants shortlisted to discuss with us. WhatsApp is a popular social media platform in both rural and urban India. The WhatsApp groups that the participants shortlisted were well-known in their social circles. A majority of the groups were accessible through an invitation link. In the remaining groups, participants were manually added by the administrators. However, in such groups, several members were made administrators to make it easier to add members. All the members of the group were allowed to post without any restrictions, and administrators rarely performed any moderation activities, such as removing members from the group.
Urban Groups. Our urban participants were part of diverse WhatsApp groups of friends, family, and colleagues. Friend groups mainly comprised people who had known each other from school or college. Family groups had extended family members along with distant relatives. Colleague groups often had peers with shared identity (e.g., working at the same company) and interest (e.g., group for informal conversations). Half of these groups existed for over two years (avg = 3.3 years) while the other half were formed within the last six months in response to the COVID-19 pandemic. Unlike other groups where the pandemic was one of many topics of discussion, these groups were dedicated to discussions on COVID-19. On average, these groups had 21 participants (min=12, max=28). Conversations and forwarded messages shared with us from these WhatsApp groups were mostly in English, with periodic conversations in code-mixed Hindi and Marathi.
Rural Groups. Along with WhatsApp groups of friends and family members, our rural participants also shortlisted other types of WhatsApp groups that we call Society groups and Self-help groups. Society groups were made by Panchayat (village administration) staff and consisted of mostly men. These groups were used by Panchayat staff to share general information and facilitate informal conversations. These groups were also the largest in our study, accommodating 80–103 members. Self-help groups were formed by community health workers (CHWs) to assist women with specific needs. These groups had conversations around health, childcare, and household work, among other topics. These groups were relatively small, averaging about 25 members. Most of the messages in these groups were in the local language.
Types of Misinformation on WhatsApp Groups. Overall, participants shared 84 COVID-related potential misinformation messages with us during the interview. We manually fact-checked the veracity of all the messages with two different leading fact-checking websites, Alt News [5] and India Today [6]. A total of 65 messages had misinformation as verified by third-party fact-checkers. These messages were global misinformation and were highly pervasive in both rural and urban areas, with the same content appearing in different languages. Within these, there were four main types of COVID-related misinformation. The first type of messages were communal in nature, specifically targeting minority communities and blaming them for spreading coronavirus (e.g., Figure 2.A). The other types of misinformation included (1) fictitious data (e.g., number of new infections), (2) false information (e.g., fake government mandates around lockdown), and (3) false health recommendations (e.g., herbal and homeopathic treatments of COVID-19).
Apart from the global misinformation, we also discovered messages that were highly contextual and contained misinformation specific to a particular local community. We call these messages hyperlocal misinformation. In addition to following similar themes as the global misinformation, these messages contained inaccurate facts about individuals in the community who contracted COVID-19, such as manipulated content along with images of sick or dead individuals to wrongfully show deaths from COVID-19. These messages, unlike the global misinformation messages, often contained personally identifiable information (e.g., name, address) of the individuals in the community. Our manual fact-checking process that we followed for verifying global misinformation proved ineffective against these messages. As a result, we relied on triangulation, wherein we asked community members to corroborate the narrative and events described in these messages [121]. We captured their understanding of the message, and if we received multiple accounts of the events described in the message from two or more people, we labelled it as hyperlocal misinformation.
5 FINDINGS
Our findings are organized as follows. We first discuss factors that participants consider during discovery and verification of misinformation on WhatsApp groups. Then, we present reasons people report for propagating misinformation in other WhatsApp groups. Finally, we discuss strategies they use to counter or combat misinformation in these groups.
5.1 Discovery
Offline Intermediaries. Our participants had varying levels of access to smartphones, which influenced their exposure to misinformation and the experience with associated harms. In rural areas, some participants did not own personal smartphones. Instead, they either shared a smartphone with other family members or were secondary users of WhatsApp in the sense that they participated in closed WhatsApp groups through other people, like family members, who had a WhatsApp account [102]. Often, primary WhatsApp users relayed the dialogue back and forth between the secondary users and members of the WhatsApp groups. In line with the findings from prior work, these secondary users were typically women [8], elderly, and low-wage workers (e.g., casual vendors or day laborers). Even when women had their own WhatsApp account, male family members often shaped how, and with whom, they engaged in discussions in online settings. Manoj, a senior male, described how WhatsApp use was gendered in his village:
“All of the WhatsApp groups mostly have men. You will only find a few women. Women have keypad phones... in our village whatever information comes in these groups, it goes through men. They forward it to their wives... sometimes women have their own groups like with health workers... they are not the main ones like ours. I know 10–15 main groups. There is not a single woman in them.”
By “main” groups, Manoj meant active WhatsApp groups that have key influential people (e.g., village administrators) as members. Inability to access and participate in these main groups systematically excluded women from collective deliberation processes in offline and online public spheres.
The discernible power asymmetries enabled primary users (or intermediary-users) to play a big role in influencing secondary users’ (or beneficiary-users’) trust and awareness around misinformation. For example, for Lata, a housewife, her husband shaped her understanding around the COVID-19 pandemic by saying things like “What I am sharing with you is useful and important, not what others say!” or “You would have seen this COVID nonsense on TV, you should listen to what I am sharing.” During the interview, Lata shared multiple messages containing misinformation, which she believed to be true because her husband shared them. He in turn had ostensibly got these messages on social media, which he believed more than news in mainstream media. We also saw other examples of secondary users who were professionally related to primary users. Harshit, an NGO employee, described how he felt “responsible” to share COVID-related updates with laborers who did not own smartphones or use WhatsApp. He often read forwarded messages aloud or showed videos that he received in a WhatsApp group of people of his caste to the laborers during breaks between work sessions.
During our interview, we found that many messages that Harshit
shared contained false health advice. Other intermediaries, such
as government representatives and community health workers,
who similarly mediated access to information, based on what they
consumed on WhatsApp groups, exercised their authority in the
community by playing this role of informal broadcasters for community members who lacked direct access to devices. For example, government representatives during informal social gatherings and community health workers in women-only group meetings decided who is part of the conversations, what information is deemed as credible, and which recommendations are worth acting upon when discussing WhatsApp messages. Often people with marginalized identities were hesitant to express their opinions freely in these public spaces and were discouraged from participating in the process of collective deliberation. Similar power structures also permeated online public spheres of the closed WhatsApp groups.

Discovery – Rural: intermediaries controlled access and influenced trust; high trust in institutional actors and video messages (Settings: home, offline private; WhatsApp groups, online public) – Urban: influencers shared opinions and shaped trust; reliance on source and message characteristics (Settings: home and small groups, offline private; WhatsApp groups, online public)
Verification – Rural: collective deliberations in small groups using interpersonal networks (Setting: social gatherings, offline public) – Urban: gatewatching by “experts”; collective deliberations and use of digital skills (Settings: main WhatsApp groups, online public; satellite WhatsApp groups, online public)
Propagation – Rural: shared hyperlocal and communal misinformation; members coerced to propagate misinformation (Setting: WhatsApp groups, online public) – Urban: shared misinformation that could be potentially useful; shared misinformation to save verification time (Setting: WhatsApp groups, online public)
Combat – Rural: focused on hyperlocal messages; aggressive confrontation and public service announcements (Setting: community, offline public) – Urban: focused on select messages; used soft nudges and persuasion (Setting: WhatsApp groups, online public)
Figure 1: Table summarizing key practices of rural and urban participants around misinformation and the corresponding spheres in which they occur.
In-network Influencers. Unlike some rural participants, all urban participants had access to personal smartphones, uninterrupted internet access, and membership in multiple WhatsApp groups. No participant relied on an intermediary to access information in the WhatsApp groups. We found that participants’ perceptions of information credibility were shaped by people with whom they had strong ties offline. We call such people in-network influencers, who included family members, relatives, friends, neighbors, and colleagues with strong ties. Unlike intermediaries in rural areas, influencers did not control participants’ access to the information or interaction in WhatsApp groups. Instead, they shaped participants’ trust by playing the role of a sounding board and lending their opinions when solicited.
For example, Ankit, a senior accountant, periodically sat with his 32-year-old son to discuss COVID-related WhatsApp messages that he found suspicious and took his son’s suggestions on which messages to trust. He shared how he came to trust a message that falsely claimed an Ayurvedic medicine prevents COVID-19 (see Figure 2.B), when his son endorsed it. Similarly, Meena, a young graduate student, instructed her parents to forward suspicious messages to her before forwarding these messages to the wider audience, to avoid any embarrassment in case the forwarded messages turned out to be fake. During dinner, Meena routinely shared her opinions on some of the messages with her parents and encouraged them to think critically. She acknowledged how her parents’ blind acceptance of such messages contributed to regular emotional distress. Even though influencers proactively shaped participants’ opinions, unlike the case of intermediaries and beneficiaries in rural regions, participants could still proactively engage in agentic practices around the messages. Meena described how, despite her objections, her parents ordered a homeopathic medicine that was mentioned as a cure for COVID-19 in a WhatsApp forward (see Figure 2.C).
Sender Influence. When judging credibility of WhatsApp messages, both rural and urban participants placed a lot of emphasis on “who” sent the message. Rural participants who used WhatsApp groups reported high levels of trust in local government representatives (e.g., panchayat members) and community health workers who routinely shared COVID-related messages on WhatsApp groups as well as in offline settings. Participants reasoned that local government representatives had a long history of working with their community, establishing trust, and sharing reliable information (e.g., via local government pamphlets and circulars).
However, local government representatives often inadvertently forwarded WhatsApp messages containing false health advice, often when the information in the messages aligned well with their own personal beliefs. Arti, a senior coordinator of community health workers, recounted that a few of her subordinates forwarded messages containing multiple inaccurate home-based remedies and misinformation against a minority community. Arti elaborated on one particular instance:
“During training, one health worker shared a video of a Muslim vegetable seller licking produce. She felt that it was true. She told me that other Muslim vendors in her area also do like this. I did not question her because I didn’t want to make her feel bad... For my peers, I feel local people and local news is the only source to verify information if something they get on WhatsApp is fake or not... if these messages are fake they start to believe wrong things. I don’t think they do it on purpose.”
Local government representatives and health workers were well aware of people’s trust in them and the influence of their messages. In an environment of uncertainty on what is officially known or communicated through the state, these representatives played an important part in peoples’ information environments. For example, Govind, a panchayat official, described, “If I send a COVID-related message to people in the village, people feel that if sir [Govind] has sent the message then it will be right.” Often these representatives shared COVID-related prescriptive messages that were beyond the scope of their official duties. Consequently, the officials became a vehicle for expressing and enforcing biases.
Source and Message Characteristics. Urban participants highlighted greater trust in forwarded messages that contained links or names of news sources well-aligned with their political views. Yasmin, an analyst, described how she particularly trusted messages that had the link of the original article from news websites like the Times of India.
Urban participants had varying opinions on the credibility of WhatsApp messages that linked to a Twitter, Facebook, or Instagram post. In general, we found that younger participants felt that they could trust such messages and believed that people would be more hesitant to share fake messages on public platforms like Twitter than on closed WhatsApp groups. Yasmin expressed how she trusted Facebook and Twitter posts more because, unlike WhatsApp groups, she could control whose messages appeared in her feed, for example, by following journalists and unfollowing people who share misinformation. Moreover, the engagement in the form of likes and comments on the posts also helped participants like Yasmin assess credibility. On the other hand, older participants in urban areas expressed explicit distrust of messages that had Facebook or Instagram links. Ankit shared how he did not trust messages especially from Facebook because “anyone could create an account, but on WhatsApp someone has to use his own number.”
Our urban respondents also expressed relative distrust of video-based social media platforms like YouTube and TikTok and were somewhat aware of deepfake technology. On the contrary, rural participants trusted video messages the most. They felt that video messages are a living account that something happened and provided contextual cues to gauge if things are presented out of context. Poornima, an NGO worker, described:
“I trust video messages the most because I can understand what I am seeing. We cannot really believe all the text messages. A few can be fake also. You don’t know where they copied the text from. In the video, we can see what actually happened so I trust those kind of messages more. If someone I don’t know sends a suspicious message in the WhatsApp groups, I do not believe it unless it’s a video message.”
Like Poornima, several rural participants had no mental models of deepfake technologies. Unlike text messages, video messages were perceived as “harder to edit.” These results provide a cause for concern as prior research shows that a significant portion of misinformation on WhatsApp propagates in multimedia formats [98]. Urban participants described how they developed intuitive “skills” to assess message credibility. For example, Jatin, a recent graduate, described how the use of emoticons in COVID-related forwards was a red flag for him. Urban participants also reported other cues, such as message length, engagement of members with a message, and the number of times they received the same message from different group members, to judge authenticity. In contrast, rural participants trusted form-factor elements of WhatsApp messages that mimicked the style of government-sanctioned physical pamphlets and circulars (e.g., containing official logos, signature of purported officials).
5.2 Verification
Online Gatewatchers. Participants did not always verify the credibility of messages they found suspicious. But when they did, they used a gamut of strategies. Urban participants relied on verification from a few gatewatchers in the WhatsApp groups, who did that as a goodwill gesture or social obligation towards group members. Gatewatching is a practice of examining different gates or sources of information and selecting the most appropriate pieces, aligning with the interests of the intended community [32, Chapter 2] [112]. Gatewatchers in our study were typically credentialed or knowledgeable members of the groups, like nurses. They kept a close eye on the incoming group messages and notified the group when they perceived a particular message as misinformation. In most cases, this responsibility was entrusted to individuals by the group members, instead of individuals volunteering for the role. For example, Sai described how his sister, a constable, was expected to notify members if anyone posted false information about COVID-related lockdowns. Instead of engaging with group members who shared misinformation, she simply diverted the group members’ attention to more accurate resources like trusted websites. Gatewatchers reported becoming the resource that group members sometimes privately reached out to for verifying the reliability of WhatsApp messages.
A few urban participants formed parallel COVID-specific groups to increase communities’ awareness and assist them in verifying health-related messages. These WhatsApp groups acted as knowledge networks in the public sphere [13]. The founding members of these groups took on the work of disseminating only verified
messages, and often deliberated with each other to agree on a message. Pooja, a member of one such group, recounted how the group helped her find the appropriate treatment when her friend tested positive, by differentiating between messages containing false health advice (e.g., unverified homeopathic cures) and authentic medical information.

Figure 2: (A) A video of a Muslim vegetable vendor applying saliva on produce. Videos like these were used to spread inaccurate claims of Muslims spreading COVID-19; (B) A promotional message claiming an Ayurvedic medicine to cure and prevent COVID-19; (C) A forward claiming a homeopathic medicine to be preventive, with links and contact information to buy the medicine online.
Tech-savviness. On an individual level, some urban participants relied on what they called “tech-savviness” to distinguish misinformation from authentic information. They occasionally searched online for the text and images in messages that they found suspicious and leveraged external apps to check veracity. For example, Rahul followed medical journalists on Twitter and often looked up their tweets to verify information he came across on WhatsApp groups. Other participants reported cross-verifying with news snippet apps, such as Inshorts [7]. However, none of our participants used fact-checking websites like Alt News [5] and many lacked awareness about professional fact-checking services. Compared to older participants, we found younger urban participants to be more skilled in verifying information. Most respondents expressed greater trust in their own ability to differentiate between fake and credible information than they did in that of others. This divide was particularly seen in urban respondents’ characterization of rural social media users.
Offline Social Congregations. Unlike their urban counterparts, rural participants rarely used digital skills to verify misinformation. They often ignored global WhatsApp messages and cared more about verifying hyperlocal misinformation, which received very limited attention from leading news sources and fact-checkers, thereby rendering online strategies less meaningful for them. Instead, to verify hyperlocal messages, they relied on interpersonal networks in offline public spheres. Many rural participants relied on in-person social congregations during which they discussed topics of national and regional importance, including the COVID-19 pandemic. They often read the messages aloud and sought peers’ perspectives. For example, when Swaroop received multiple forwards about the first COVID-related death in a nearby village, he reached out to his friends in that village to verify the same. When his friends denied the occurrence, he shared that information back in the group. During such offline discussions, rural participants also deliberated collective actions that they took in their respective online WhatsApp groups. For instance, Usha, a female rural resident, described:
“We routinely discuss messages that we get on WhatsApp in our women-only meetups. One of the important discussions around that is whether to forward the messages to other groups or not. There are occasions when my friends are not able to decide if they should forward a message. We have a discussion to decide if the messages are credible or fake and we then forward them accordingly.”
Other collective actions that we found included asking group members to implement health advice or warning them about messages they collectively assessed as fake. Consequently, these deliberations in offline public spheres (i.e., social gatherings) heavily influenced their actions in online public spheres (i.e., WhatsApp groups). We now outline various factors which impacted how and why participants propagated misinformation.
5.3 Propagating Misinformation
Community Obligation. Many participants, both in rural (n=11) and urban (n=4) areas, admitted sharing misinformation on COVID-19 in the past. Some of them said they were driven by a sense of civic obligation and shared any information that could potentially increase members’ awareness. Some deliberately shared fake messages to instill a sense of fear of COVID-19 in people who were behaving irresponsibly – Govind knowingly forwarded a “fake video in a WhatsApp group showing how patients were mistreated in hospital” because his friends were not taking the pandemic seriously.
Figure 3: (A) Sender sharing misinformation and asking the group members to verify; (B) An example of religious misinformation that urban participants chose to ignore; (C) An urban participant sharing educational resources to correct the sender who shared a fake message and wrote “forwarded as received”; (D) A garbage truck repurposed to make public announcements on loudspeakers to dispel fake WhatsApp forwards.
Others intentionally forwarded suspicious messages to assess the opinions of group members. Ankit, a senior accountant, forwarded suspicious messages around lockdown rules hoping that someone would correct him if they contained misinformation. In doing so, he felt he was saving “valuable time” which could be invested in other high priority tasks rather than “frustrating fact-checking processes.” These findings are in line with prior research that shows that participants valued opinions of other members in their circles to reduce their own authentication effort [38, 117]. Moreover, participants like Ankit added keywords such as “forwarded as received” or “shared as received” towards the end of messages to safeguard themselves from any backlash from others for sharing misinformation (see Figure 3.C).
Self-interest. Some participants shared misinformation knowingly to protect and preserve their own community members from COVID-19, despite the potential harm the misinformation could cause. Renu, a homemaker, described how she forwarded messages holding a particular community more responsible for rising infections to all her WhatsApp groups:
“This message was showing Muslim people cleaning their plates with tongue and keeping it. Also there was local news on when [Muslim] people came back from Delhi, the cases had gone up. I knew this could be fake and increase communal tension, but I felt I should share [the message] for others safety in the neighborhood.”
Another piece of communal misinformation that participants in rural and urban areas shared was a video of a mentally unstable Muslim vegetable vendor applying saliva to his produce (see Figure 2.A). Our Muslim participants like Afzal, a vegetable vendor whose shop was forcibly closed for several days because of fake messages related to the incident, believed that such messages were spread by various Hindu communities to sabotage his community’s livelihood. Rural participants also reported that they deliberately crafted hyperlocal misinformation to protect their interests. For instance, Dara, a village head, shared how some community members started a hyperlocal disinformation campaign to oust people from their jobs:
“A message became viral that four migrant women laborers from our village, who cook food in a primary health center, got infected. The whole village went into a frenzy, turning the situation into a dangerous witch hunt. People forced them to quarantine for 15 days despite a negative test result. They had closed the roads that led to their houses to prevent people from coming in contact with them.”
Eventually this absence forced Dara to find other recruits for the work. This case highlights the horrific impact of misinformation and rumor-mongering, especially on marginalized populations, in close-knit communities [35, 59, 71, 89].
Coercion.
Several women participants were also coerced by people around them to forward messages containing misinformation in their WhatsApp groups. In urban areas, influencers pushed their own opinions and prescriptive actions in offline and online public spheres. An example of such an offline public sphere was groups of housewives who gathered in their free time to socialize. Renu was an influencer of one such group, which she formed in her society, and played an active role in managing group activities like 'kitty parties'². Renu shaped the opinions of group members about how they should respond to the pandemic, including deciding which messages were discussed during gatherings, applied in their daily lives, and ignored or forwarded to other offline and online groups.
2 Typically a women-only gathering in which a pool of cash, called a kitty, is donated weekly by all the members, with one rotating winner each week allowed to take the whole kitty home.
However, other women in Renu's group shared how they felt coerced to implement her suggestions. Rural women experienced similar coercive practices. A few women described how male family members forced them to share COVID-related messages with neighbors and relatives in their subscribed WhatsApp groups to promote the male family members' enterprises. Bhanu, a tailor by profession, described how her husband made her market and share
messages related to Arsenic Album (a homeopathic medicine) in
her WhatsApp groups. These messages included specific details, such as instructions for ordering, the doctor's visiting card, and the details of the clinic selling the medicine. Bhanu obliged to such requests to avoid familial tensions.
5.4 Combating Misinformation
Persuasion.
The participants used a range of strategies to actively fight misinformation in online and offline public spheres. Some urban participants tried to reason with those who shared misinformation. Most such engagements took place in WhatsApp groups, and participants were aware that pushing back could snowball into a conflict. Instead of directly saying that a message contained misinformation, they used a discursive strategy, i.e., posing relevant questions to senders and encouraging them to reflect and share their perspectives. Another strategy included sharing educational resources relevant to the misinformation in the hope that the sender would read them (see Figure 3.C). For example, when Uma received a message that said eating basil leaves can prevent COVID-19 infections, she responded with a link to the ISRC website³ that explained how alternative medications were not proven to be effective against COVID-19. Often, participants softly pushed back against suspicious messages using phrases like "please check", particularly when they had strong ties with the sender. They felt such soft nudges encouraged senders to reflect on the shared messages without leading to a confrontation in the group. Participants rarely took a strong stand. But when they did, convincing people that the shared message contained misinformation took painstaking back-and-forth communication. Pooja shared:
"My uncle had posted a fake message in our group about how my neighbor has died because of COVID-19. So, I went to my neighbor, confirmed everything, and posted a reply saying that it's not true. I also asked him not to believe these messages. He replied back saying that he saw this in the local media and why should he not believe what comes in the media? Then I told him that I had found out personally that it is not true. I showed him the evidence that I procured for my neighbor's death. It was due to kidney failure... It took a lot of time to convince him."
Several participants also believed that the frequency of their pushback actions had a direct impact on their image and the trust they had built. Consequently, they made deliberate choices not to fight misinformation in certain situations and let it propagate in the group, particularly when the sender had considerable authority or when they felt that the message was not going to cause direct harm to group members. For example, some participants perceived insignificant harm to their WhatsApp group members from religious and spiritual misinformation (see Figure 3.B) or from home remedies that claimed to boost immunity.
3 https://indscicov.in/for-public/busting-hoaxes/
Oline Combating.
While rural participants mostly ignored global
misinformation messages because of their relevance to only “city
3https://indscicov.in/for-public/busting-hoaxes/
people”, they were concerned about hyperlocal misinformation and
felt the need to strongly push back against people spreading such
messages. Hyperlocal misinformation, unlike global misinforma-
tion, personally aected rural participants, thereby motivating them
to engage in more confrontational strategies. Many times this led to
verbal and physical altercations with the senders. But, unlike urban
participants who only engaged in conversations in the group where
fake messages were sent, rural participants adopted methods that
penetrated the community’s online as well as oine public spheres.
Moni, a student in a small town, described how WhatsApp misinformation about her father's friend led to a heated debate in an offline gathering:
"A few members had sent a fake message saying that my father's friend died because of contracting COVID-19. Somebody drafted a message with his picture stating that his was the first death in the town. Till his dead body came back, they were spreading this false information. This led to a heated debate the next day between my friend and the other people... We even called some of the people who put it on status and asked them to take the messages down."
When hyperlocal misinformation created unrest among the community members, community leaders adapted local techniques to target large parts of rural communities at once. For example, Dara described how they made public service announcements via makeshift loudspeakers on garbage trucks (or Ghantagadi) that traversed several areas to dispel harmful hyperlocal misinformation (see Figure 3.D). The vehicle was sent out multiple times a day to dispel hyperlocal misinformation that was being spread against laborers and Muslim vendors. Other villages hired a Dhindhora (drummer) whose job was to go from one street to another with a drum and make similar announcements. These localized methods helped village administrators cut across group boundaries and reach village members who were part of multiple online WhatsApp groups. Moreover, people recognized and related to instruments like the Ghantagadi as trust symbols, bolstering the authenticity of such efforts. However, these strategies propagated corrections in offline public spheres much more slowly than fake WhatsApp forwards traveled in the online public spheres.
6 DISCUSSION
Our findings offer a window into how rural residents navigated misinformation on WhatsApp groups and how their experiences differed from those of urban residents. Using 'public sphere' as a theoretical lens, we find that certain qualities accelerated as well as inhibited the spread of misinformation, specifically within the structural, interactional, and representational dimensions [42]. As a starting point, critical structural characteristics of rural communities – the close-knit relationships, the hierarchies, and, in India, the politically charged schisms between groups – offer both causes and consequences immediately relevant to misinformation. On one hand, the information people access is likely to be mediated by the caste, community, and religious networks they belong to [73]; on the other, the consequences and propensities for confrontations are likely to be higher in communities where these faultlines are sharp [19], and mob justice is dealt with lightly or ineffectively by the law [52], enabled by the very social media that fuels it [86].
Structural and Representational Issues.
According to Dahlgren [42], while the structural dimension of the public sphere is a key indicator of the inclusivity of the collective in online discursive spaces, the representational dimension focuses on the plurality of online exchanges by such communities. Our findings show that rural and urban groups participated in public spheres with different structural dynamics to form and shape opinions on health misinformation. For example, men in rural communities not only participated in WhatsApp groups (online public spheres), but also gathered in the evenings to discuss topics of local interest (offline public spheres). These online public spheres replicated the social structures present in the offline public spheres, for example, separating public spheres based on gender or caste. This meant that women in rural communities had limited representation and were mostly excluded from dominant publics like the 'main' WhatsApp groups (online public spheres) and discouraged from attending certain social gatherings in which men routinely participated (offline public spheres). Instead, women had to participate in these dominant publics through intermediaries, but such arrangements meant that they received only limited, biased information based on what men thought women should know. Moreover, their own perspectives and opinions around misinformation rarely reached back to these online public spheres, contributing to the lack of feminist knowledge production.
Prior feminist literature [21, 116] has documented how women's knowledge is suppressed by people who occupy higher positions in patriarchal societal structures. In our study, such people had inordinate influence on how marginalized members engaged in collective deliberations in the WhatsApp groups and offline social gatherings. Such power dynamics subdue marginalized participants' voices, inhibiting them from expressing their perspectives [114]. In response to various exclusions by dominant publics that discourage participation of marginalized groups, Fraser [49] and Bardzell [21] advocated developing multiple alternate public spheres, also known as subaltern counterpublics. We saw some evidence of counterpublics in the form of women-only groups in which women discussed and deliberated on messages they received on WhatsApp. While these practices are in line with recent studies [115] and helped women form an understanding of their identity, interests, and needs, these spaces were far from ideal. The opinions and deliberations in these spaces were dominated by older women, women in positions of power, and community health workers who themselves struggled to distinguish between credible and fake information.
One solution to these issues could be to leverage assets-based community development [72, 84], which advocates working with individuals and marginalized groups to identify and mobilize their assets or capacities to attain a shared vision. This approach could be useful to build and strengthen the capabilities of marginalized members in discovering misinformation in offline and online settings by providing training resources to grassroots organizations.
Generating such training resources for capacity building would require strategic partnerships between fact-checking agencies and grassroots organizations, especially to translate current fact-checking knowledge into accessible formats and generalizable lessons that can counter hyperlocal misinformation in rural contexts. Alt News [5], a leading fact-checking agency in India, has taken a step in this direction by creating fact-checking video tutorials for novice technology and social media users [1].
Interactional Benefits.
The interactional dimension focuses on the fluidity of the exchange of views and opinions among different group members [42]. The formation and affordances of public spheres played a key role in influencing how people interacted with misinformation in the WhatsApp groups. People in rural communities engaged in collective deliberations during social gatherings (offline public spheres) to form and shape opinions around suspicious messages they received in the WhatsApp groups (online public spheres). Similarly, people in urban areas formed parallel WhatsApp groups (online public spheres) to collectively verify misinformation propagating in their other WhatsApp groups. These parallel efforts to counter misinformation resemble satellite public spheres, which Squires [110] defined as a public that seeks separation from other publics, for reasons other than oppressive relations, but is involved in wider public discourses from time to time. The satellite public spheres in our study allowed participants to seek separation from larger public spheres (i.e., larger WhatsApp groups) and engage in deliberations that they could not have in the larger groups. Unlike Fraser's counterpublics, members in satellite public spheres enjoyed equal membership in satellite as well as larger public spheres.
Satellite public spheres enabled people to reflect and engage more deeply through deliberations to identify and judge misinformation [17]. This reflective process was especially beneficial in the context of contemporary research indicating that individuals' intuitive and hasty processing plays an important role in promoting belief in false content (e.g., scrolling quickly through news feeds or messages) [81]. The satellite public spheres also helped participants avoid social desirability bias, a prevalent issue in online groups where individuals disregard fact-checked information and instead trust the misinformation shared in the group in order to be viewed favorably by the group members [75].
One way to strengthen the collective deliberations and sensemaking in the online satellite public spheres could be to integrate new features that support users in identifying misinformation and starting a dialogue. For example, a WhatsApp conversational agent could passively monitor posts in WhatsApp groups and notify group members if any post matches false posts in public databases built by fact-checkers, such as the one by Tattle Civic Technologies [4]. The agent could also be integrated with WhatsApp's web search feature⁴ to show a preview of Google search results for suspicious WhatsApp forwards. Although some of these capabilities could be less useful for countering hyperlocal misinformation in rural regions, which rarely gets fact-checked and is hard to verify online, at the very least the agent can help shape the mental models of novice users around risks of misinformation by showing them how messages that seem credible could be false or misleading.
4 https://blog.whatsapp.com/search-the-web
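To make the matching step of this agent idea concrete, the following is a minimal sketch in Python, assuming a small, locally cached list of fact-checked claims; the example claims, the similarity threshold, and the notify_group() hook are illustrative assumptions and not part of WhatsApp, Tattle Civic Technologies, or any existing fact-checking API.

# Minimal sketch: match an incoming group message against a locally cached
# list of fact-checked claims using crude fuzzy string similarity.
# The claim list, threshold, and notify_group() hook are illustrative only.
from difflib import SequenceMatcher

FACT_CHECKED_CLAIMS = [
    # (claim text, verdict) -- in practice exported from a fact-checker database
    ("Eating basil leaves prevents COVID-19 infection", "false"),
    ("Arsenicum Album 30 cures COVID-19", "false"),
]

def similarity(a: str, b: str) -> float:
    # Normalized similarity between two lowercased strings (0.0 to 1.0).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_message(message: str, threshold: float = 0.6):
    # Return the best-matching debunked claim, or None if nothing is close enough.
    best = max(FACT_CHECKED_CLAIMS, key=lambda claim: similarity(message, claim[0]))
    return best if similarity(message, best[0]) >= threshold else None

def notify_group(message: str) -> None:
    # A deployed agent would post this note back into the group thread.
    match = check_message(message)
    if match:
        claim, verdict = match
        print(f"Heads up: this forward resembles a claim rated '{verdict}': {claim}")

notify_group("Eating basil (tulsi) leaves can prevent COVID-19 infections!")

In a real deployment the matching would also have to handle code-mixed and regional-language text, which is precisely where hyperlocal rural misinformation is likely to slip through, as noted above.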
The collective deliberations and sensemaking that our participants engaged in within larger WhatsApp groups had a few bottlenecks. For example, most participants let suspicious information propagate in their WhatsApp groups unless they sensed an immediate threat to their communities. To avoid embarrassment and conflicts, they rarely signaled their doubts to group members who shared misinformation. When they engaged in such conversations, they found the
process laborious and stressful. One approach to enable people to register their doubts could be to allow them to anonymously react to misinformation, flag messages, and provide alternate narratives. These features could also increase the agency of marginalized members who hesitated to challenge the more influential individuals in their groups. Recent research on collective deliberations to counter online misinformation by leveraging the wisdom of crowds has shown promise [14, 69]. Twitter is also exploring a community-driven approach that allows people to identify misleading information and write notes that provide informative context [3]. However, given the important differences between closed WhatsApp groups and Twitter (e.g., a closed platform of people with strong ties vs. a public platform of people with weaker ties), future work should carefully examine whether adding such capabilities to closed WhatsApp groups would lead to generative conversations, or devolve them into utter chaos, severing the very social ties that our participants aimed to preserve on closed WhatsApp groups.
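Purely as an illustrative sketch of the anonymous flagging idea above (not an existing WhatsApp capability), and assuming a per-group secret salt and a notification threshold that we do not prescribe, flags could be stored as salted hashes of member identifiers so that repeat flags are deduplicated while only aggregate counts ever surface back to the group:

# Minimal sketch: anonymous, threshold-based flagging of suspect messages.
# The field names, salt, and threshold are illustrative assumptions only.
import hashlib
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AnonymousFlags:
    salt: str                 # per-group secret so member tokens cannot be trivially reversed
    threshold: int = 3        # distinct flags needed before the group sees a notice
    _tokens: dict = field(default_factory=lambda: defaultdict(set))

    def _token(self, member_id: str) -> str:
        # Salted hash stands in for the member's identity.
        return hashlib.sha256((self.salt + member_id).encode()).hexdigest()

    def flag(self, message_id: str, member_id: str) -> None:
        # Record a flag without storing who flagged in the clear.
        self._tokens[message_id].add(self._token(member_id))

    def should_surface(self, message_id: str) -> bool:
        # Only an aggregate count crosses back into the group, never names.
        return len(self._tokens[message_id]) >= self.threshold

flags = AnonymousFlags(salt="group-secret")
for member in ["renu", "uma", "pooja"]:
    flags.flag("msg-123", member)
print(flags.should_surface("msg-123"))  # True once three distinct members have flagged

Whether such aggregate signals would defuse, rather than inflame, the group dynamics described above is exactly the kind of open question future work should examine.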
7 LIMITATIONS AND CONCLUSION
Our study provides comparative insight into how rural and urban participants partake in different types of public spheres to discover and interact with prevalent health misinformation. People with power dominated how members of both rural and urban communities developed trust and, in turn, shared information. We also saw stark differences in how rural and urban participants fought misinformation. In conclusion, we have outlined different ways in which rural and urban communities can safeguard themselves from the harms and burdens of online misinformation.
Our work has several limitations. In addition to the inherent limitations of qualitative research, such as a small sample size, our study took place during the COVID-19 pandemic, because of which our interactions with participants gravitated towards COVID-19-related misinformation. More work is needed to generalize our findings beyond health- and COVID-related misinformation. We also acknowledge that our study does not take into account the viewpoints of many user groups in urban regions who may have different experiences of engaging with information on private WhatsApp groups. We plan to conduct a follow-up study with more diverse populations, such as low-income, low-literate urban residents and migrant urban communities, to identify the commonalities and differences between users of varying socioeconomic status, urbanity, and gender identities.
REFERENCES
[1]
[n.d.]. FactEd by Alt News - YouTube. https://www.youtube.com/channel/
UCIyajsbcEWqEQlFzJPntldQ.
[2]
[n.d.]. The focus of misinformation debates shifts south. https://www.niemanlab.org/2018/12/the-focus-of-misinformation-debates-shifts-south/.
[3]
[n.d.]. Introducing Birdwatch, a community-based approach to misinformation. https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation. [Online; accessed 25-January-2021].
[4] [n.d.]. Tattle - Tattle. https://tattle.co.in/datasets.
[5] 2021. Alt News. https://www.altnews.in/. [Online; accessed 15-April-2021].
[6]
2021. India Today. https://www.indiatoday.in/fact-check. [Online; accessed
15-April-2021].
[7]
2021. Inshorts:Stay informed. https://inshorts.com/. [Online; accessed 15-April-
2021].
[8]
Syed Ishtiaque Ahmed, Md Romael Haque, Irtaza Haider, Jay Chen, and Nicola Dell. 2019. "Everyone Has Some Personal Stuff": Designing to Support Digital Privacy with Shared Mobile Phone Use in Bangladesh. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[9]
Syeda Zainab Akbar, Anmol Panda, Divyanshu Kukreti, Azhagu Meena, and
Joyojeet Pal. 2021. Misinformation as a Window into Prejudice: COVID-19
and the Information Environment in India. Proceedings of the ACM on Human-
Computer Interaction 4, CSCW3 (Jan. 2021), 1–28. https://doi.org/10.1145/
3432948
[10]
Syeda Zainab Akbar, Ankur Sharma, Himani Negi, Anmol Panda, and Joyojeet
Pal. 2020. Anatomy of a Rumour: Social media and the suicide of Sushant Singh
Rajput. arXiv preprint arXiv:2009.11744 (2020).
[11]
Md Sayeed Al-Zaman. 2021. A Thematic Analysis of Misinformation in India
during the COVID-19 Pandemic. International Information & Library Review
(2021), 1–11.
[12]
Hunt Allcott and Matthew Gentzkow. 2017. Social Media and Fake News in
the 2016 Election. Journal of Economic Perspectives 31, 2 (May 2017), 211–236.
https://doi.org/10.1257/jep.31.2.211
[13]
Verna Allee. 2000. Knowledge networks and communities of practice. OD
practitioner 32, 4 (2000), 4–13.
[14]
Jennifer Allen, Antonio A Arechar, Gordon Pennycook, and David G Rand. 2020. Scaling up fact-checking using the wisdom of crowds. Preprint available at https://psyarxiv.com/9qdza (2020).
[15]
Mary Grace Antony and Ryan J Thomas. 2010. 'This is citizen journalism at its finest': YouTube and the public sphere in the Oscar Grant shooting incident. New Media & Society 12, 8 (2010), 1280–1296.
[16]
Ahmer Arif, John J. Robinson, Stephanie A. Stanek, Elodie S. Fichet, Paul
Townsend, Zena Worku, and Kate Starbird. 2017. A Closer Look at the Self-
Correcting Crowd: Examining Corrections in Online Rumors. In Proceedings of
the 2017 ACM Conference on Computer Supported Cooperative Work and Social
Computing. ACM, Portland Oregon USA, 155–168. https://doi.org/10.1145/
2998181.2998294
[17]
Bence Bago, David G Rand, and Gordon Pennycook. 2020. Fake news, fast and
slow: Deliberation reduces belief in false (but not true) news headlines. Journal
of experimental psychology: general (2020).
[18]
Eytan Bakshy, Solomon Messing, and Lada A Adamic. 2015. Exposure to ide-
ologically diverse news and opinion on Facebook. Science 348, 6239 (2015),
1130–1132.
[19]
Shakuntala Banaji. 2018. Vigilante publics: Orientalism, modernity and Hindutva
fascism in India. Javnost-The Public 25, 4 (2018), 333–350.
[20]
Shakuntala Banaji, Ramnath Bhat, Anushi Agarwal, Nihal Passanha, and Mukti
Sadhana Pravin. 2019. WhatsApp vigilantes: An exploration of citizen reception
and circulation of WhatsApp misinformation linked to mob violence in India.
(2019).
[21]
Shaowen Bardzell. 2010. Feminist HCI: Taking Stock and Outlining an Agenda
for Design. In Proceedings of the SIGCHI Conference on Human Factors in Comput-
ing Systems (CHI ’10). Association for Computing Machinery, Atlanta, Georgia,
USA, 1301–1310. https://doi.org/10.1145/1753326.1753521
[22]
Christoph Bartneck and Jun Hu. 2009. Scientometric analysis of the CHI proceed-
ings. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (CHI ’09). Association for Computing Machinery, New York, NY, USA,
699–708. https://doi.org/10.1145/1518701.1518810
[23]
Dominik Batorski and Ilona Grzywińska. 2018. Three dimensions of the public
sphere on Facebook. Information, Communication & Society 21, 3 (2018), 356–
374.
[24]
Y. Benkler, R. Faris, and H. Roberts. 2018. Network Propaganda: Manipulation,
Disinformation, and Radicalization in American Politics. Oxford University Press.
https://books.google.com/books?id=MVRuDwAAQBAJ
[25]
Nicola J. Bidwell. 2020. Women and the Sustainability of Rural Community
Networks in the Global South. In Proceedings of the 2020 International Conference
on Information and Communication Technologies and Development (Guayaquil,
Ecuador) (ICTD2020). Association for Computing Machinery, New York, NY,
USA, Article 21, 13 pages. https://doi.org/10.1145/3392561.3394649
[26]
Petter Bae Brandtzaeg, Marika Lüders, Jochen Spangenberg, Linda Rath-Wiggins, and Asbjørn Følstad. 2016. Emerging journalistic verification practices concerning social media. Journalism Practice 10, 3 (2016), 323–342.
[27]
Nadia M. Brashier, Gordon Pennycook, Adam J. Berinsky, and David G. Rand.
2021. Timing matters when correcting fake news. Proceedings of the National
Academy of Sciences 118, 5 (Feb. 2021). https://doi.org/10.1073/pnas.2020043118
[28]
Joshua A Braun and Jessica L Eklund. 2019. Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism 7, 1 (2019), 1–21.
[29]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology.
Qualitative research in psychology 3, 2 (2006), 77–101.
[30]
J Scott Brennen, Felix M Simon, and Rasmus Kleis Nielsen. 2021. Beyond (Mis)
Representation: Visuals in COVID-19 Misinformation. The International Journal
of Press/Politics 26, 1 (2021), 277–299.
[31]
Michael V Bronstein, Gordon Pennycook, Adam Bear, David G Rand, and Ty-
rone D Cannon. 2019. Belief in fake news is associated with delusionality,
dogmatism, religious fundamentalism, and reduced analytic thinking. Journal
of applied research in memory and cognition 8, 1 (2019), 108–117.
[32]
Axel Bruns. 2005. Gatewatching: Collaborative online news production. Vol. 26.
Peter Lang.
[33]
Michael X Delli Carpini, Fay Lomax Cook, and Lawrence R Jacobs. 2004. Public
deliberation, discursive participation, and citizen engagement: A review of the
empirical literature. Annu. Rev. Polit. Sci. 7 (2004), 315–344.
[34]
Gautami Challagulla, Margaret Young, and Edward Cutrell. [n.d.]. Local Pocket
Internet and Global Social Media Bridging the Digital Gap: Facebook and Youth
Sub-Stratum in Urban India. ([n. d.]).
[35]
Priyank Chandra and Joyojeet Pal. 2019. Rumors and Collective Sensemaking:
Managing Ambiguity in an Informal Marketplace. Association for Computing
Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300563
[36]
Jay Chen, Michael Paik, and Kelly McCabe. 2014. Exploring internet security
perceptions and practices in urban ghana. In Proceedings of the Tenth USENIX
Conference on Usable Privacy and Security (SOUPS ’14). USENIX Association,
USA, 129–142.
[37]
Liang Chen, Xiaohui Wang, and Tai-Quan Peng. 2018. Nature and Diffusion of Gynecologic Cancer-Related Misinformation on Social Media: Analysis of Tweets. Journal of Medical Internet Research 20, 10 (2018), e11515. https://doi.org/10.2196/11515
[38]
Xinran Chen, Sei-Ching Joanna Sin, Yin-Leng Theng, and Chei Sian Lee. 2015. Why Students Share Misinformation on Social Media: Motivation, Gender, and Study-Level Differences. The Journal of Academic Librarianship 41, 5 (Sept. 2015), 583–592. https://doi.org/10.1016/j.acalib.2015.07.003
[39]
John W Creswell and Dana L Miller. 2000. Determining validity in qualitative
inquiry. Theory into practice 39, 3 (2000), 124–130.
[40]
Lincoln Dahlberg. 2001. The Internet and democratic discourse: Exploring the
prospects of online deliberative forums extending the public sphere. Information,
communication & society 4, 4 (2001), 615–633.
[41]
Lincoln Dahlberg. 2011. Re-Constructing Digital Democracy: An Outline of
Four ‘Positions’. New Media & Society 13, 6 (Sept. 2011), 855–872. https:
//doi.org/10.1177/1461444810389569
[42]
Peter Dahlgren. 2005. The Internet, Public Spheres, and Political Communication:
Dispersion and Deliberation. Political Communication 22, 2 (April 2005), 147–162.
https://doi.org/10.1080/10584600590933160
[43]
N. DiFonzo, P. Bordia, American Psychological Association, and Inc Ovid Tech-
nologies. 2007. Rumor Psychology: Social and Organizational Approaches.
American Psychological Association. https://books.google.com/books?id=
XlHYAAAAIAAJ
[44]
Ullrich KH Ecker, Stephan Lewandowsky, and David TW Tang. 2010. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition 38, 8 (2010), 1087–1100.
[45]
Gowhar Farooq. 2018. Politics of Fake News: How WhatsApp Became a Potent
Propaganda Tool in India. Media Watch 9, 1 (2018), 106–117. https://doi.org/10.
15655/mw/2018/v9i1/49279
[46]
Martin Flintham, Christian Karner, Khaled Bachour, Helen Creswick, Neha
Gupta, and Stuart Moran. 2018. Falling for Fake News: Investigating the Con-
sumption of News via Social Media. In Proceedings of the 2018 CHI Conference
on Human Factors in Computing Systems - CHI ’18. ACM Press, Montreal QC,
Canada, 1–10. https://doi.org/10.1145/3173574.3173950
[47]
Claudia Flores-Saviaga and Saiph Savage. 2020. Fighting disaster misinformation in Latin America: the #19S Mexican earthquake case study. Personal and Ubiquitous Computing (2020), 1–21.
[48]
Christopher Fox. 1983. Information and misinformation. An investigation of the
notions of information, misinformation, informing, and misinforming. (1983).
[49]
Nancy Fraser. 1990. Rethinking the public sphere: A contribution to the critique
of actually existing democracy. Social text 25/26 (1990), 56–80.
[50]
Kiran Garimella and Dean Eckles. 2020. Images and misinformation in political
groups: evidence from WhatsApp in India. arXiv preprint arXiv:2005.09784
(2020).
[51]
Christine Geeng, Savanna Yee, and Franziska Roesner. 2020. Fake News on
Facebook and Twitter: Investigating How People (Don’t) Investigate. In Proceed-
ings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM,
Honolulu HI USA, 1–14. https://doi.org/10.1145/3313831.3376784
[52]
Annie Gowen. 2018. As mob lynchings fueled by WhatsApp sweep India,
authorities struggle to combat fake news. The Washington Post (2018), NA–NA.
[53]
Lucas Graves. 2016. Deciding what’s true: The rise of political fact-checking in
American journalism. Columbia University Press.
[54]
Andrew Guess, Brendan Nyhan, and Jason Reifler. 2018. Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. European Research Council 9, 3 (2018), 4.
[55]
Jürgen Habermas. 1990. Moral consciousness and communicative action. MIT
press.
[56] Jurgen Habermas. 1996. Between Facts and Norms: Contributions to a Discourse
Theory of Law and Democracy. MIT Press.
[57]
Jürgen Habermas. 1991. The structural transformation of the public sphere: An inquiry into a category of bourgeois society. MIT Press.
[58]
Md Mahfuzul Haque, Mohammad Yousuf, Ahmed Shatil Alam, Pratyasha Saha,
Syed Ishtiaque Ahmed, and Naeemul Hassan. 2020. Combating Misinformation
in Bangladesh: Roles and Responsibilities as Perceived by Journalists, Fact-
checkers, and Users. Proceedings of the ACM on Human-Computer Interaction 4,
CSCW2 (2020), 1–32.
[59]
Y. Linlin Huang, Kate Starbird, Mania Orand, Stephanie A. Stanek, and Heather T.
Pedersen. 2015. Connected Through Crisis: Emotional Proximity and the Spread
of Misinformation Online. In Proceedings of the 18th ACM Conference on Com-
puter Supported Cooperative Work & Social Computing - CSCW ’15. ACM Press,
Vancouver, BC, Canada, 969–980. https://doi.org/10.1145/2675133.2675202
[60]
Eslam Hussein, Prerna Juneja, and Tanushree Mitra. 2020. Measuring misinfor-
mation in video search platforms: An audit study on YouTube. Proceedings of
the ACM on Human-Computer Interaction 4, CSCW1 (2020), 1–27.
[61]
Lilly Irani, Janet Vertesi, Paul Dourish, Kavita Philip, and Rebecca E Grinter.
2010. Postcolonial computing: a lens on design and development. In Proceedings
of the SIGCHI conference on human factors in computing systems. 1311–1320.
[62]
Md Saiful Islam, Tonmoy Sarkar, Sazzad Hossain Khan, Abu-Hena Mostofa Kamal, S. M. Murshid Hasan, Alamgir Kabir, Dalia Yeasmin, Mohammad Ariful Islam, Kamal Ibne Amin Chowdhury, Kazi Selim Anwar, Abrar Ahmad Chughtai, and Holly Seale. 2020. COVID-19–Related Infodemic and Its Impact on Public Health: A Global Social Media Analysis. The American Journal of Tropical Medicine and Hygiene 103, 4 (Oct. 2020), 1621–1629. https://doi.org/10.4269/ajtmh.20-0812
[63]
Esther Han Beol Jang, Philip Garrison, Ronel Vincent Vistal, Maria Theresa D.
Cunanan, Maria Theresa Perez, Philip Martinez, Matthew William Johnson,
John Andrew Evangelista, Syed Ishtiaque Ahmed, Josephine Dionisio, Mary
Claire Aguilar Barela, and Kurtis Heimerl. 2019. Trust and Technology Repair
Infrastructures in the Remote Rural Philippines: Navigating Urban-Rural Seams.
Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 99 (nov 2019), 25 pages.
https://doi.org/10.1145/3359201
[64]
R Tallal Javed, Mirza Elaaf Shuja, Muhammad Usama, Junaid Qadir, Waleed
Iqbal, Gareth Tyson, Ignacio Castro, and Kiran Garimella. 2020. A First Look at
COVID-19 Messages on WhatsApp in Pakistan. arXiv preprint arXiv:2011.09145
(2020).
[65]
Dan M Kahan, William H Simon, et al. 1999. Deliberative politics: Essays on democracy and disagreement. Oxford University Press on Demand.
[66]
Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin L Arendt, Samira Shaikh, and Wenwen Dou. 2019. Vulnerable to misinformation? Verifi!. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 312–323.
[67]
Natascha A Karlova and Karen E Fisher. 2013. A social diffusion model of misinformation and disinformation for understanding human information behaviour. (2013).
[68]
Natascha A Karlova and Jin Ha Lee. 2011. Notes from the underground city of
disinformation: A conceptual investigation. Proceedings of the American Society
for Information Science and Technology 48, 1 (2011), 1–9.
[69]
Jooyeon Kim, Behzad Tabibian, Alice Oh, Bernhard Schölkopf, and Manuel
Gomez-Rodriguez. 2018. Leveraging the crowd to detect and reduce the spread of
fake news and misinformation. In Proceedings of the eleventh ACM international
conference on web search and data mining. 324–332.
[70]
Peter Krafft, Kaitlyn Zhou, Isabelle Edwards, Kate Starbird, and Emma S. Spiro. 2017. Centralized, Parallel, and Distributed Information Processing during Collective Sensemaking (CHI '17). Association for Computing Machinery, New York, NY, USA, 12 pages. https://doi.org/10.1145/3025453.3026012
[71]
Peter M. Krafft and Emma S. Spiro. 2019. Keeping Rumors in Proportion: Managing Uncertainty in Rumor Systems. Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300876
[72]
John Kretzmann and John P. McKnight. 1996. Assets-based community development. National Civic Review 85, 4 (1996), 23–29. https://doi.org/10.1002/ncr.4100850405
[73]
Vijesh V Krishna, Lagesh M Aravalath, and Surjit Vikraman. 2019. Does caste
determine farmer access to quality information? PloS one 14, 1 (2019), e0210721.
[74]
Lisa M Kruse, Dawn R Norris, and Jonathan R Flinchum. 2018. Social media as
a public sphere? Politics on social media. The Sociological Quarterly 59, 1 (2018),
62–84.
[75]
David MJ Lazer, Matthew A Baum, Yochai Benkler, Adam J Berinsky, Kelly M Greenhill, Filippo Menczer, Miriam J Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, et al. 2018. The science of fake news. Science 359, 6380 (2018), 1094–1096.
[76]
Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. 2012. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest 13, 3 (Dec. 2012), 106–131. https://doi.org/10.1177/1529100612451018
[77]
Sebastian Linxen, Christian Sturm, Florian Brühlmann, Vincent Cassau, Klaus
Opwis, and Katharina Reinecke. 2021. How WEIRD is CHI?. In Proceedings of
the 2021 CHI Conference on Human Factors in Computing Systems. Association
for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/
3411764.3445488
[78]
Robert M Losee. 1997. A discipline independent denition of information.
Journal of the American Society for information Science 48, 3 (1997), 254–269.
[79]
Zhicong Lu, Yue Jiang, Cheng Lu, Mor Naaman, and Daniel Wigdor. 2020. The
Government’s Dividend: Complex Perceptions of Social Media Misinformation
in China. In Proceedings of the 2020 CHI Conference on Human Factors in Com-
puting Systems. ACM, Honolulu HI USA, 1–12. https://doi.org/10.1145/3313831.
3376612
[80]
Caio Machado, Beatriz Kira, Vidya Narayanan, Bence Kollanyi, and Philip
Howard. 2019. A Study of Misinformation in WhatsApp Groups with a Focus
on the Brazilian Presidential Elections.. In Companion Proceedings of The 2019
World Wide Web Conference (San Francisco, USA) (WWW ’19). Association for
Computing Machinery, New York, NY, USA, 1013–1019.
[81]
Cameron Martel, Gordon Pennycook, and David G Rand. 2020. Reliance on emo-
tion promotes belief in fake news. Cognitive research: principles and implications
5, 1 (2020), 1–20.
[82]
Alice E Marwick. 2018. Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review 2, 2 (2018), 474–512.
[83]
Alice E Marwick and Danah Boyd. 2011. I tweet honestly, I tweet passionately:
Twitter users, context collapse, and the imagined audience. New media & society
13, 1 (2011), 114–133.
[84]
Alison Mathie and Gord Cunningham. 2005. Who is Driving Development? Reflections on the Transformative Potential of Asset-based Community Development. Canadian Journal of Development Studies / Revue canadienne d'études du développement 26, 1 (Jan. 2005), 175–186. https://doi.org/10.1080/02255189.2005.9669031
[85]
Preeti Mudliar. 2018. Public WiFi is for men and mobile internet is for women:
Interrogating politics of space and gender around WiFi hotspots. Proceedings of
the ACM on Human-Computer Interaction 2, CSCW (2018), 1–24.
[86]
Rahul Mukherjee. 2020. Mobile witnessing on WhatsApp: Vigilante virality and
the anatomy of mob lynching. South Asian Popular Culture 18, 1 (2020), 79–101.
[87]
David Nemer. 2016. Online favela: The use of social media by the marginalized
in Brazil. Information technology for development 22, 3 (2016), 364–379.
[88]
Midas Nouwens, Carla F. Griggio, and Wendy E. Mackay. 2017. "WhatsApp Is for
Family; Messenger Is for Friends": Communication Places in App Ecosystems. In
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
ACM, Denver Colorado USA, 727–735. https://doi.org/10.1145/3025453.3025484
[89]
Onook Oh, Manish Agrawal, and H Raghav Rao. 2013. Community intelligence
and social media services: A rumor theoretic analysis of tweets during social
crises. MIS quarterly (2013), 407–426.
[90]
Kenton P. O’Hara, Michael Massimi, Richard Harper, Simon Rubens, and Jessica
Morris. 2014. Everyday Dwelling with WhatsApp. In Proceedings of the 17th
ACM Conference on Computer Supported Cooperative Work & Social Computing -
CSCW ’14. ACM Press, Baltimore, Maryland, USA, 1131–1143. https://doi.org/
10.1145/2531602.2531679
[91]
Duncan Omanga. 2019. WhatsApp as ‘digital publics’: the Nakuru Analysts
and the evolution of participation in county governance in Kenya. Journal of
Eastern African Studies 13, 1 (2019), 175–191.
[92]
Mustafa Oz, Pei Zheng, and Gina Masullo Chen. 2018. Twitter versus Facebook:
Comparing incivility, impoliteness, and deliberative attributes. New media &
society 20, 9 (2018), 3400–3419.
[93]
Pinar Ozturk, Huaye Li, and Yasuaki Sakamoto. 2015. Combating rumor spread on social media: The effectiveness of refutation and warning. In 2015 48th Hawaii International Conference on System Sciences. IEEE, 2406–2414.
[94]
Z. Papacharissi. 2010. A Private Sphere: Democracy in a Digital Age. Wiley.
https://books.google.com/books?id=cNMhtalEJDUC
[95]
Gordon Pennycook and David G. Rand. 2020. Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking. Journal of Personality 88, 2 (2020), 185–200. https://doi.org/10.1111/jopy.12476
[96]
Gordon Pennycook and David G. Rand. 2021. The Psychology of Fake News.
Trends in Cognitive Sciences 25, 5 (May 2021), 388–402. https://doi.org/10.1016/
j.tics.2021.02.007
[97]
Julio C. S. Reis, Philipe Melo, Kiran Garimella, Jussara M. Almeida, Dean Eckles,
and Fabrício Benevenuto. 2020. A Dataset of Fact-Checked Images Shared
on WhatsApp During the Brazilian and Indian Elections. Proceedings of the
International AAAI Conference on Web and Social Media 14, 1 (2020), 903–908.
[98]
Gustavo Resende, Philipe Melo, Hugo Sousa, Johnnatan Messias, Marisa Vas-
concelos, Jussara Almeida, and Fabrício Benevenuto. 2019. (Mis) information
dissemination in WhatsApp: Gathering, analyzing and countermeasures. In The
World Wide Web Conference. 818–828.
[99]
Jenny Roberts. 2009. Ignorance is effectively bliss: Collateral consequences, silence, and misinformation in the guilty-plea process. Iowa L. Rev. 95 (2009), 119.
[100]
Ralph L Rosnow. 1991. Inside rumor: A personal journey. American psychologist
46, 5 (1991), 484.
[101]
R. Ryle. 2011. Questioning Gender: A Sociological Exploration. SAGE Publications.
https://books.google.com/books?id=CHHz_p-j9hMC
[102]
Nithya Sambasivan, Ed Cutrell, Kentaro Toyama, and Bonnie Nardi. 2010. Inter-
mediated Technology Use in Developing Communities. In Proceedings of the 28th
International Conference on Human Factors in Computing Systems - CHI ’10. ACM
Press, Atlanta, Georgia, USA, 2583. https://doi.org/10.1145/1753326.1753718
[103]
Mike S Schäfer. 2015. Digital public sphere. The international encyclopedia of
political communication (2015), 1–7.
[104]
Yaacov Schul. 1993. When warning succeeds: The eect of warning on success
in ignoring invalid information. Journal of Experimental Social Psychology 29, 1
(1993), 42–62.
[105]
Colleen M Seifert. 2002. The continued influence of misinformation in memory: What makes a correction effective? In Psychology of Learning and Motivation. Vol. 41. Elsevier, 265–292.
[106]
Imani N Sherman, Elissa M Redmiles, and Jack W Stokes. 2020. Designing
Indicators to Combat Fake Media. arXiv preprint arXiv:2010.00544 (2020).
[107]
Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news
detection on social media: A data mining perspective. ACM SIGKDD explorations
newsletter 19, 1 (2017), 22–36.
[108]
Manish Singh. 2019. Reliance Jio partners with Facebook to launch literacy program for first-time internet users in India. Retrieved January 7, 2021 from https://social.techcrunch.com/2019/07/03/reliance-jio-facebook-digital-literacy-udaan-india/
[109]
Laura Spinney. 2019. In Congo, fighting a virus and a groundswell of fake news.
[110]
Catherine R Squires. 2002. Rethinking the black public sphere: An alternative
vocabulary for multiple public spheres. Communication theory 12, 4 (2002),
446–468.
[111]
Janaki Srinivasan and Jenna Burrell. 2013. Revisiting the fishers of Kerala, India. In Proceedings of the Sixth International Conference on Information and Communication Technologies and Development: Full Papers - Volume 1. 56–66.
[112]
Kate Starbird, Dharma Dailey, Owla Mohamed, Gina Lee, and Emma S. Spiro.
2018. Engage Early, Correct More: How Journalists Participate in False Rumors
Online during Crisis Events. In Proceedings of the 2018 CHI Conference on Human
Factors in Computing Systems. ACM, Montreal QC Canada, 1–12. https://doi.
org/10.1145/3173574.3173679
[113]
Kate Starbird and Leysia Palen. 2011. "Voluntweeters" self-organizing by digital
volunteers in times of crisis. In Proceedings of the SIGCHI conference on human
factors in computing systems. 1071–1080.
[114]
Marco R Steenbergen, André Bächtiger, Markus Spörndli, and Jürg Steiner.
2003. Measuring political deliberation: A discourse quality index. Comparative
European Politics 1, 1 (2003), 21–48.
[115]
Sharifa Sultana and Susan R. Fussell. 2021. Dissemination, Situated Fact-Checking, and Social Effects of Misinformation among Rural Bangladeshi Villagers During the COVID-19 Pandemic. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 436 (Oct. 2021), 34 pages. https://doi.org/10.1145/3479580
[116]
Sharifa Sultana, François Guimbretière, Phoebe Sengers, and Nicola Dell. 2018.
Design within a patriarchal society: Opportunities and challenges in designing
for rural women in bangladesh. In Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems. 1–13.
[117]
Shalini Talwar, Amandeep Dhir, Puneet Kaur, Nida Zafar, and Mel Alrasheedy.
2019. Why do people share fake news? Associations between the dark side
of social media use and fake news sharing behavior. Journal of Retailing and
Consumer Services 51 (2019), 72–82.
[118]
Edson C Tandoc, Richard Ling, Oscar Westlund, Andrew Duffy, Debbie Goh, and Lim Zheng Wei. 2018. Audiences' Acts of Authentication in the Age of Fake News: A Conceptual Framework. New Media & Society 20, 8 (Aug. 2018), 2745–2763. https://doi.org/10.1177/1461444817731756
[119]
Emily Thorson. 2016. Belief Echoes: The Persistent Effects of Corrected Misinformation. Political Communication 33, 3 (July 2016), 460–480. https://doi.org/10.1080/10584609.2015.1102187
[120]
Stefan Timmermans and Iddo Tavory. 2012. Theory construction in qualitative
research: From grounded theory to abductive analysis. Sociological theory 30, 3
(2012), 167–186.
[121]
Data Source Triangulation. 2014. The use of triangulation in qualitative research.
In Oncology nursing forum, Vol. 41. 545.
[122]
Borhan Uddin, Nahid Reza, Md Saiful Islam, Hasib Ahsan, and Mohammad Ruhul
Amin. 2021. Fighting Against Fake News During Pandemic Era: Does Providing
Related News Help Student Internet Users to Detect COVID-19 Misinformation?.
In 13th ACM Web Science Conference 2021. 178–186.
[123]
Rama Adithya Varanasi, Aditya Vashistha, and Nicola Dell. 2021. Tag a Teacher: A Qualitative Analysis of WhatsApp-Based Teacher Networks in Low-Income Indian Schools. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM Press.
[124]
Feeza Vasudeva and Nicholas Barkdull. 2020. WhatsApp in India? A case study
of social media related lynchings. Social Identities 26, 5 (2020), 574–589.
[125] Shriram Venkatraman. 2017. Social media in south India. UCL Press.
[126]
S. Vidyarthi, S. Mathur, and S. Agrawal. 2017. Understanding India's New Approach to Spatial Planning and Development: A Salient Shift? Oxford University Press.
[127]
Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The Spread of True and False
News Online. Science 359, 6380 (March 2018), 1146–1151. https://doi.org/10.
1126/science.aap9559
[128]
Claire Wardle, Hossein Derakhshan, et al. 2018. Thinking about 'information disorder': formats of misinformation, disinformation, and mal-information. In Ireton, Cherilyn; Posetti, Julie (Eds.), Journalism, 'fake news' & disinformation. Paris: Unesco (2018), 43–54.
[129]
Lina Zhou and Dongsong Zhang. 2007. An ontology-supported misinformation
model: Toward a digital misinformation library. IEEE Transactions on Systems,
Man, and Cybernetics-Part A: Systems and Humans 37, 5 (2007), 804–813.
Misinformation discovery (23.10%): 283
    Dependency for discovery: 23
    Message source as trust influencer: 35
    Message sender & relation with them: 21
    Power structures influencing trust: 16
    Credentials impacting trust: 22
    Message characteristics influence trust: 24
    Physical rumors: 19
    Personal perceptions & experiences: 12
    Digital verification strategies: 21
    Social verification strategies: 28
    Challenges with verification: 22
    Issues: 26
    Fake discovery not possible/important: 14
Misinformation propagation (22.61%): 277
    Lack of interest to forward: 19
    Pressure to forward/interact: 25
    Physical dissemination: 19
    Forwarding through cross-app discussion: 18
    Forwarding as a status: 20
    Take back forwarding messages: 17
    Increasing awareness through forwarding: 34
    Forwarding as a reflex: 21
    Forwarding because it feels right: 29
    Communal angle for forwarding: 12
    Economic angle for forwarding: 24
    Result of a positive emotion: 17
    Result of a negative emotion: 22
Misinformation combat (27.43%): 336
    Positive motivations for combating: 19
    Lack of motivation for combating: 32
    Persuasion - educating the sender: 35
    Persuasion - please check/soft pushback: 21
    Reprimanding techniques: 36
    Training/guidance around misinformation: 15
    Delegation to gatekeepers: 23
    Delegation to subject experts: 43
    Networked corrections: 36
    Delegation to govt. authorities: 25
    Local challenges with combating misinformation: 17
    Localized combating techniques: 23
    Arguments around misinformation: 11
Misinformation Effects (17.22%): 211
    Behavior change around misinformation: 29
    Physical actions based on forwards: 24
    Harmful communal actions: 45
    Purposeful "no action": 15
    Physical actions for Others: 19
    Physical actions for Self: 14
    Emotional impact - Positive: 28
    Emotional impact - Negative: 12
    Don't think about effects: 25
Misinformation characteristics (9.63%): 118
    Misinformation Group characteristics: 45
    Not trustworthy messages: 35
    Trustworthy messages: 38
Table 2: The complete codebook that resulted from our analysis of interviews, showing our five themes and 53 codes, including the prevalence (%) of each theme and the total count for each theme/code. (The count for each theme is the sum of the counts of all codes within that theme.)