International Journal of Communication 15(2021), 1200–1219    1932–8036/20210005
Copyright © 2021 (Dani Madrid-Morales, Herman Wasserman, Gregory Gondwe, Khulekani Ndlovu, Etse
Sikanku, Melissa Tully, Emeka Umejei, and Chikezie Uzuegbunam). Licensed under the Creative Commons
Attribution Non-commercial No Derivatives (by-nc-nd). Available at http://ijoc.org.
Motivations for Sharing Misinformation:
A Comparative Study in Six Sub-Saharan African Countries
DANI MADRID-MORALES
University of Houston, USA
HERMAN WASSERMAN
University of Cape Town, South Africa
GREGORY GONDWE
University of Colorado Boulder, USA
KHULEKANI NDLOVU
University of Cape Town, South Africa
ETSE SIKANKU
Ghana Institute of Journalism, Ghana
MELISSA TULLY
University of Iowa, USA
EMEKA UMEJEI
American University Nigeria, Nigeria
CHIKEZIE UZUEGBUNAM
University of Cape Town, South Africa
Dani Madrid-Morales: dmmorale@Central.uh.edu
Herman Wasserman: herman.wasserman@uct.ac.za
Gregory Gondwe: gregory.gondwe@colorado.edu
Khulekani Ndlovu: ndlkhu005@myuct.ac.za
Etse Sikanku: etse.sikanku@gmail.com
Melissa Tully: melissa-tully@uiowa.edu
Emeka Umejei: emeka.umejei@aun.edu.ng
Chikezie Uzuegbunam: chikezieuzuegbunam@gmail.com
Date submitted: 2020-03-06

In most African countries, “fake news,” politically motivated disinformation, and misinformation in the media were common occurrences before these became a preoccupation in the Global North. However, with a fast-growing population of mobile users, and the popularization of apps such as WhatsApp, misinformation has become much
more pervasive across the continent. Researchers have shown that perceived exposure to
false information is high in some African countries, and yet citizens often share made-up
news intentionally. This article explores the motivations and contributing factors for
sharing misinformation in six sub-Saharan African countries. Our analysis of 12 focus
groups with university students reveals two common motivations: civic duty and fun. The
sharing of political (dis)information was uneven, but common among students with high
levels of self-reported political engagement. We also present an array of cues used to
determine credibility, which often determines the shareability of information. Cross-
national differences are also discussed.
Keywords: disinformation, “fake news,” social media, information sharing, sub-Saharan
Africa, focus groups
The notion of “fake news” and the related concepts of “misinformation” and “disinformation”¹ rapidly became areas of scholarly inquiry after the 2016 U.S. presidential election, covering issues ranging from election manipulation through the media to the implications for mainstream news practices (Tandoc, 2019).
Though the case of the United States is most often discussed, multiple countries around the world have been
grappling with different expressions of “fake news” for some time. Several of these are in the Global South:
Chile (Valenzuela, Halpern, Katz, & Miranda, 2019), Nigeria (Okoro & Emmanuel, 2018), India (Udupa &
McDowell, 2017), and South Africa (Roper, 2019). However, scholarship in the area still reflects a lack of
geographical diversity, and consequently, responses by scholars and policy makers continue to display a limited
purview of the phenomenon. As in the case of journalism studies more generally, research on social media,
including its use for the spread of misinformation, has “often failed to include adequate diversity on matters of
geography, culture, and language as well as race, class, and gender” (Lewis & Molyneux, 2018, p. 19).
Though “fake news” is sometimes presented as a novel scholarly topic in recent literature, false
news as a phenomenon in Africa and elsewhere predates the era of social media (Mäkinen & Kuira, 2008).
Journalists in these regions have always had to learn to treat journalism as a contested area, vulnerable to
manipulation by governments and powerful social elites (Mutsvairo & Bebawi, 2019). Recent discourses on
“fake news” have, however, given authorities a new opportunity to restrict freedom of expression on social
media, particularly on Facebook and WhatsApp (Dwyer & Molony, 2019). These platforms have become
prime sources of viral content, some of which comes from reputable sources, whereas some could easily be
characterized as either mis- or disinformation. The pervasiveness of this type of content is such that
perceived exposure to made-up political news stories in countries such as Kenya, Nigeria, and South Africa
has been found to be higher than in the United States (Wasserman & Madrid-Morales, 2019), the country
that has attracted the most research to date.
¹ For the remainder of the article, we use the terms “misinformation” and “fake news” interchangeably to refer to all the expressions and formats in which made-up and inaccurate information has been found to be common.

The growth of misinformation around the world has been described by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as an “emerging global problem” (UNESCO, 2018, p. 7). The phenomenon has been exacerbated by new ways of accessing news, namely websites and social media
platforms. This is true of contexts as diverse as the United States (Bigman, Smith, Williamson, Planey, & Smith,
2019), Britain (Chadwick & Vaccari, 2019), Singapore (Tandoc, Lim, & Ling, 2020), Kenya (Wahutu, 2019),
and South Africa (Roper, 2019). This changing pattern of news consumption is often interlinked with a sharp
decline in trust in mainstream news sources. A report by the London School of Economics and Political Science’s
Truth, Trust and Technology (T3) Commission described the “erosion of trust” as one of the “five evils” arising
from the spread of disinformation, which poses a threat to the ability of individuals to make informed decisions
(London School of Economics and Political Science, 2020, p. 11). This means that not only is determining the
veracity of information online of increasing importance, but also knowing the factors that shape the sharing of
such information is a crucial first step toward improving the quality of online discourse. Although geographically
limited, previous research has identified a handful of factors that contribute to misinformation being shared
online (e.g., Bigman et al., 2019; Duffy, Tandoc, & Ling, 2019; Sterrett et al., 2019). This article assesses how
applicable these are to the sub-Saharan African context by analyzing data from 12 focus groups with university
students in six countries (Ghana, Kenya, Nigeria, South Africa, Zambia, and Zimbabwe).
Satire, Rumors, and Misinformation in Sub-Saharan Africa
Although current manifestations of “fake news” in Africa call for comparative analyses with similar
practices elsewhere in the world (Wasserman, 2020), the phenomenon also has a much longer history that
is tied to rumors and satire. Satire has in fact been an alternative channel of information when trust in the
mainstream media, often either owned by the state or oriented toward social elites, is low (Moehler &
Singh, 2011). In Africa, satire can be used to express social reality in contexts where other forms of
journalism might be suppressed (Mano, 2007). Examples of such satirical news content include the Kenyan
comedic program Redykulass and puppet show XYZ (Ogola, 2010). Drawing on the Zimbabwean experience,
Willems (2011) argues that satirical forms such as cartoons and comic strips not only challenge those in
power and reframe officially sanctioned constructions of reality but also self-reflexively help readers to laugh
at their circumstances and powerlessness, and in so doing, help them cope with their everyday experiences.
Social media such as Facebook and Twitter have become especially productive sites for the circulation of
activist and protest messages, as well as satirical memes, jokes, videos, and similar artifacts used to
challenge political power in African contexts (Mare, 2020; Tully & Ekdale, 2014; Uzuegbunam, 2020).
Therefore, an analysis of misinformation in Africa, as elsewhere, stands to benefit from a historical
and cultural perspective on the way media technologies operate within African societies. Nyamnjoh (2005)
has argued for an approach that is attentive to the continuities between older, indigenous forms of
communication, and newer technologies, as Africans are “daily modernising the indigenous and indigenising
the modern with novel outcomes” (p. 4). Oral traditions, which include satire, gossip, and jokes, underpin
the creative adaptations of technology by African media users. Nyamnjoh (2005) sees such creativity as
“not only informed by cultures amenable to conviviality, interdependence and negotiation, but also by
histories of deprivation, debasement and cosmopolitanism” (p. 4). African values of “solidarity,
interconnectedness and interdependence” (Nyamnjoh, 2005, p. 16) have shaped the way African people
interact with digital media since its inception, and it should be assumed that these values and practices will
also inform the way people engage with misinformation on such platforms.
Media technologies, especially mobile phones, have long been instrumental in the spread of rumor
and misinformation. One example is the 2007–08 elections in Kenya, where rumors about election rigging,
ethnic hate speech, and calls for violence spread via SMS technology (Goldstein & Rotich, 2010; Mäkinen &
Kuira, 2008). Rumors and falsehoods being spread through these circuits of informal exchanges of
information may result from the lack of trust in the mainstream news media but could also be read as a
result of political disillusionment. This would explain, argues Shoki (2020), conspiracy theories such as the
one circulating in South Africa about Nelson Mandela having died in 1985, not in 2013, and suggesting that
an imposter negotiated the terms of the democratic transition, which proved to be unfavorable to the Black
majority. Such rumors may arise as a result of feelings of disempowerment and serve as an alternative
narrative to better explain the causes of current circumstances.
Long before the emergence of extreme speech on social media platforms and WhatsApp, older
information and communication technologies (ICTs), such as mobile phones, websites, and blogs, were seen
as sites of contestation between different political and social forces in African countries (Goldstein & Rotich,
2010). More recently, mobile phones, this time especially used to interact with social media, have proved
to be an important factor in African elections, as seen in Sierra Leone (Dwyer & Molony, 2019) and Nigeria
(Orji, 2019). WhatsApp, especially, facilitated the spread of misinformation, and it became difficult to trust
political information received on this platform during election times (Dwyer & Molony, 2019, p. 119). Beyond
electoral periods, the prevalence of misinformation on social media platforms has also had a limiting impact
on the exercise of journalism, and media organizations have had to play the additional role of educating
audiences about the dangers of misinformation (Electoral Commission of South Africa, 2019).
Information Sharing on Social Media
The sharing of news on social media platforms, particularly when it helps spread misinformation,
has raised concerns that these practices could negatively reshape online culture and limit the ability of online
media to contribute to the democratic process (Chadwick & Vaccari, 2019, p. 7). This is in line with findings
by Wagner and Boczkowski (2019), who argue that the consumption of “fake news” is linked to a general
distrust and cynicism about the credibility of the whole news ecosystem. More broadly, the perception that
information cannot be trusted can lead to the development of what researchers at the London School of
Economics and Political Science (2020) label the “five evils” of misinformation (p. 11). The first is confusion:
Citizens are unsure whom and what to believe among an abundance of sources and information. The second
is cynicism: Misinformation has further eroded public confidence in mainstream news sources and has
contributed to the “fomenting of social antagonism.” The third is fragmentation: Citizens are being divided
into “truth publics” with parallel realities and narratives online. The fourth evil is irresponsibility: An increase
in information generated outside of news organizations with ethical codes has caused a lack of transparency
and accountability. The fifth is apathy: Declining trust in political information may make citizens less likely
to participate in political processes.
In an established democracy such as the United Kingdom, in 2019, more than half of social media
users (57.7%) reported that they had recently come across news on these platforms of which the veracity
was in doubt. What is more, a high percentage (42.8%) admitted to having shared false or inaccurate news,
of whom 17.3% said they thought the news was false at the time of sharing it (Chadwick & Vaccari, 2019).
Wasserman and Madrid-Morales (2019) found similar results in a study of three African countries. To the
question, “Have you ever shared a political news story online that you thought at the time was made up?”,
29% of Kenyans, 18% of Nigerians, and 25% of South Africans answered “yes.” Based on the distinction
between knowing at the time of sharing that news is false and only finding out later that it was untrue,
Chadwick and Vaccari (2019, p. 14) distinguish between the concepts of “misinformation” (“unintentional
behavior that inadvertently misleads”) and “disinformation” (“intentional behavior that purposely
misleads”). However, as the authors point out, very little is currently known about the reasons and
motivations prompting people to share news online.
One factor to consider is the social identity of the user. In a study with university students in the
United States, Bigman and colleagues (2019) found that race is a strong predictor. Black students reported
“both seeing and posting more content about race on social media” (p. 14). They see their study as providing
evidence that “selective sharing is likely to result in racially differentiated retransmission of news about
disparate racial impact” (p. 14). Chadwick and Vaccari (2019) found that users who willingly and/or knowingly
shared false information on social media platforms were “likely to be male, younger, and more interested in
politics” (p. 5). Not only social position but also political orientation was found to play a role in the likelihood
of British social media users sharing false information. Supporters of the Conservative Party and those with
right-wing leanings were found to be more likely to share inaccurate or false news. This corresponds with the
findings by Guess, Nagler, and Tucker (2019) that during the 2016 U.S. presidential election, conservatives
and those who are extremely conservative were more likely to share “fake news” on Facebook. Unlike in
Chadwick and Vaccari’s study, gender was not found to be a strong predictor in the United States. However,
age was, as those over 65 were the most likely to share links to false news (Guess et al., 2019). More recent
findings from the United States (Jamieson & Albarracín, 2020) seem to confirm that there is an association
between the consumption of conservative media and belief in conspiracy theories.
When asked to reflect on the reasons why they share news on social media, the top three reasons
provided by British respondents (Chadwick & Vaccari, 2019) were the following: “To express my feelings”
(65.5%); “To inform others” (also 65.5%); and “To find out other people’s opinions” (51.1%). These reasons
display an orientation toward civic participation or purpose. Duffy and associates (2019) explored the social
utility of sharing “fake news” in Singapore and draw comparisons between the sociality of “fake news” and
rumor; both are used to “cope with uncertainty, build relationships, and for self-enhancement” (p. 3). The
main types of news stories that are shared, the authors argue, are those that have a high informational
utility (“news you can use”), which resonate with their own lives and have a high emotional impact (p. 5).
They encourage an understanding of sharing practices that looks beyond the political implications of sharing
“fake news” to the interpersonal and social uses for sharer and recipient. Sharing news is seen as
contributing to social cohesion. Users are motivated by the emotional impact the news is seen to have, the
relevance it might have for the receiver, and the sender’s intention to “provide advice or warning” (Duffy et
al., 2019, p. 10). Sharing “fake news,” the authors argue, can therefore be seen as a sign of trust between
sender and recipient. This suggests that “what is sharedand reciprocatedis more than just news or
information; it is also a marker of trust, fellow-feeling and mutuality” (Duffy et al., 2019, p. 10).
Rumor can, however, also be detrimental to the social fabric in advanced democracies, as Petersen,
Osmundsen, and Arceneaux (2018) show. The authors found that in Denmark and the United States, when
hostile rumors are shared, the aim is to “coordinate the attention and action of the audience with the goal
of mobilizing against the target group and signal their willingness to engage in conflict escalation (i.e.,
helping push the collective over the tipping point for collective action)” (p. 4). The motivations behind
sharing this kind of false rumor online can be partisan in nature: to mobilize against a political opponent
or to rail against the whole political system (p. 6). Using experimental and observational data, the authors
conclude that, at least in the two countries they studied, the overriding psychological motivation
underpinning the sharing of false news is the latter: a “need for chaos.”
The applicability to the African context of the motivations outlined above has not been thoroughly
explored. The only study to have investigated audiences’ engagement with “fake news” in an African context
found three reasons that help explain the sharing of “fake news” in Kenya and Nigeria (Chakrabarti, Rooney,
& Kweon, 2018). First, there is the desire to be “in the know” socially, so sharing “fake news” becomes a
form of social currency. This may not be unique to Kenya and Nigeria; the long-standing use of humor in
African societies, which has been noted to play a politically progressive role on the continent, may amplify
the social capital obtained through sharing satirical information. Second, there is a sense of civic duty that
might lead social media users to share warnings of impending disasters or crises. Even if the information
turns out to be false, the potential harm that could result from not informing others may be seen as
outweighing the dangers of spreading false information. And, third, there is the sense that information is
democratic and needs to be passed on. Users may take the popularity or virality of a shared piece of
information as an indication of its veracity (Chakrabarti et al., 2018, p. 44). This motivation may be
especially relevant in African countries where the state exercises a great deal of control or ownership over
the media, which may lead to a decline in trust in mainstream media (Wasserman & Madrid-Morales, 2019).
In this article, as we seek to explore the motivations for African audiences’ consumption and sharing
of false information online, these social, cultural, political, and economic factors should be borne in mind. A
technologically determinist approach that foregrounds the platform on which misinformation is shared
should be avoided in favor of an audience-centered, contextually informed understanding of the motivations
for sharing misinformation. Against this background, we address the following research questions:
RQ1: How often and where do university students encounter misinformation online?
RQ2: How do students decide what information to share on social media, and to what extent do type of
content and source affect shareability across countries?
Method
This article uses data collected between August 2019 and January 2020 in 12 focus groups with
university students in six sub-Saharan African countries. Discussions lasted between 50 and 90 minutes and
were conducted in English. In contrast to surveys, in-depth interviews, or experiments, all of which have
been used in previous studies examining misinformation sharing practices (Chadwick & Vaccari, 2019;
Valenzuela et al., 2019), focus groups offer “richer, more complex and more nuanced information”
(Kamberelis & Dimitriadis, 2013, p. 40). The method is deemed appropriate as it could make an important
contribution to an exploratory study such as this one, given the lack of previous research on sharing practices
among young people in the six countries we studied.
Sampling
Two focus groups (FGs), one with undergraduate and one with graduate students, were convened
in each country. Ninety-four participants joined the 12 discussions, in groups ranging from five
(postgraduate FG in Nigeria and Kenya) to 15 (undergraduate FG in Zimbabwe) participants. Our sample
skews toward undergraduates (64%), men (52%), and students in the social sciences (69%). To recruit
participants, we used a combination of convenience and snowball sampling. Focus group moderators
requested assistance from undergraduate students in their classes and seminars to enlist other students in
different departments. We tried to assemble a representative and balanced sample in terms of disciplines,
so not all students who volunteered were invited to attend. A consequence of the recruitment process was
that some participants knew each other, but in most cases, they were strangers. Aside from education level,
no other traits were used to enforce sampling quotas. During the recruitment, students were told they would
attend a group discussion about news consumption in general to minimize the risk of priming. Participation
was voluntary. In Kenya, students received 500 Kenyan shillings (approximately US$5) at the end of the
focus group discussion. In Ghana, students were offered food and drinks (equaling approximately 30
Ghanaian cedis, or US$5). Others participated without remuneration. This difference in incentives was
caused by different funding sources for each country. We observed no differences in participation quality across the groups attributable to remuneration (as indicated by the length and depth of conversation in the transcripts). The study procedures received approval from relevant review boards for the
protection of human subjects in research.
In selecting countries, we tried to reflect various political and media systems. South Africa has a high
level of media freedom; self-regulation; an open, participatory media culture; and an established digital media
sphere. Kenya is an East African country with a vibrant independent press as well as a strong presence of
international media, and it has a vocal, active community of social media users. In West Africa, we selected
Nigeria, Africa’s most populated country, which has a strong private media sector as well as a dynamic online
community, and Ghana, which is not only one of the continent’s most stable democracies but also has a free
and diverse media system. Zambia, located in Southern Africa, has a functioning democracy, but has seen
several regressive episodes in terms of media freedoms in recent years. Finally, Zimbabwe has a repressive
media environment and high levels of state ownership and interference in the media. At the same time, there
are also several examples of how Zimbabwean citizens have used alternative channels, including social media
platforms such as Facebook, to undermine authoritarian control of the media. These differences are reflected
in commonly used indexes that measure democracy and press freedom in the world, such as The Economist
Intelligence Unit’s Democracy Index (2020) and Reporters Without Borders’ (2020) Press Freedom Index. We
summarize the categories and scores for the six selected countries in Table 1.
Table 1. Location, Political, and Press Freedom Indexes of Selected Countries (2019).

Country        The Economist Democracy Index*   RSF Press Freedom Index^         Region
Ghana          Flawed democracy (6.63)          Satisfactory situation (20.81)   West Africa
Kenya          Hybrid regime (5.18)             Noticeable problems (32.44)      East Africa
Nigeria        Hybrid regime (4.12)             Difficult situation (36.50)      West Africa
South Africa   Flawed democracy (7.24)          Satisfactory situation (22.19)   Southern Africa
Zambia         Hybrid regime (5.09)             Difficult situation (36.38)      Southern Africa
Zimbabwe       Authoritarian (3.16)             Difficult situation (42.23)      Southern Africa

Source: *The Economist Intelligence Unit (2020); ^Reporters Without Borders (2020).
Research Design
A common interview guide for all countries was designed by two of the authors, partly based on a
study by Duffy et al. (2019). The suitability of the questionnaire, stimuli, and structure was pretested with
a group of graduate university students in South Africa. The guide was structured around four sections,
each containing key questions asked in all focus groups, and additional questions to be asked at the facilitators’ discretion. In the first bloc, we asked participants about their media consumption practices (e.g., “How
many of you have a Twitter account? Tell me what you use it for.”). Next, and after having seen the first
stimulus (see Figure 1), participants were asked about their news-sharing practices, and motivations for
sharing (e.g., “Would you consider sharing these posts? Why or why not?”).
Figure 1. Stimulus #1 presented to all focus-group participants.
In the third bloc, and after having been presented with a country-specific stimulus in the form of a (real)
made-up news story (see Figure 2), we asked them about practices of sharing political information (e.g.,
“Can you recall sharing a story that you later found out was not fully accurate?”). Next, groups were asked
about their attitudes toward misinformation (e.g., “How much of a problem do you think misinformation and
fake news are where you live?”), and how they personally cope with it (e.g., “What do you usually do when
somebody shares news that you know is made up?”).
Figure 2. Country-specific stimuli #2 presented to participants.
In the first stimulus, which was shown to all focus groups, participants were presented with one image
depicting a Facebook post and one showing a tweet. Each social media post reproduced a “real” example of
misinformation: one was about plastic rice from China, and the other was about harmful mobile phone
radiation. These cases were selected from a database kept by Africa Check. The second stimulus consisted of
a screenshot of a “real” false news story about politics published online. Though a different item was chosen
for each country, all of them came from equivalent news sources, and included multiple cues that could help
audiences flag them as fabricated news: for example, outlandish claim(s), thin sourcing, no clear authorship,
poor editing, manipulated images, unclear timing of events. To guarantee uniformity across countries, the lead
author preselected two or three stories based on three criteria: (a) the story referenced a divisive local political
actor; (b) it was published on a local news website; and (c) it included several of the cues identified above.
Preselected stories were shared with facilitators for feedback. When multiple stories were considered a good
fit, the one that appeared to have been circulated most widely was retained. For example, the story shown to
Kenyan participants, titled “Raila is harvesting, now feted as ‘Person of the Year in Africa,’” was originally
published on a now defunct blog called “Daily Active Kenya,” had no clear byline, appeared next to a list of
several dubious headlines, and was poorly edited. As part of the debriefing process at the end of each
discussion, facilitators briefed participants about the inaccuracy and origin of the stimuli.
Data Analysis
All discussions were audio recorded and transcribed verbatim. The analysis of the data was conducted
using NVivo, a software package used for computer-assisted qualitative text analysis. One of the authors first
coded each transcript using a list of themes compiled from the questionnaire and from discussion summaries
prepared by each facilitator. During this process, as new themes emerged, they were incorporated into the
list. All transcripts were coded a second time to search for instances of the new themes. After this, the input
of another author was sought to confirm the validity of the coding, and to identify areas of discrepancy. Once
these were resolved, the list of themes (eight) and codes (68) was organized around the proposed research
questions. A copy of the list of themes and codes is available from the authors on request. During the analysis
of the data, and whenever possible, comparisons were drawn among countries, with special attention being
paid to differences that could be linked to the country selection criteria presented earlier.
Findings
In the next two sections, we report the results of the data analysis. The first section addresses RQ1
(frequency of perceived exposure to misinformation online, and sources that students associate with
inaccurate information), and the second section provides an answer to RQ2 (motivations for sharing [false]
news on social media, and determinants of shareability).
Perceived Prevalence of Misinformation Online
As previous research has suggested, for university students in all six countries, misinformation
appears to be a common occurrence, particularly on certain social media platforms, WhatsApp and Facebook
being mentioned most often. In other words, the answer to the first part of RQ1 is that students, regardless
of country and demographic, believe they are exposed to misinformation very frequently. References to the
ubiquity of misinformation emerged spontaneously in most of the discussions (except for Ghana), before
facilitators introduced the stimuli, or asked questions on the issue. For example, describing their daily news
consumption, a South African undergraduate student said, “I consume a lot of social media. So, I used to
get a lot of trash and a lot of clickbait stuff. I really had to tailor my social media to include lots of trustworthy
kind of sources.” Similarly, a graduate student in Zimbabwe noted, “I wanted to say WhatsApp is mainly
used to [get news] nowadays, but sometimes it transmits fake news. So, you have to verify because you’ll
spread something that is not true.” Relatedly, an undergraduate in Zambia claimed that his preferred news
sources are Mwebantu Media and Prime TV (two private media companies), because they “are the closest
to telling the truth; and, as you know, nowadays, it is hard to tell what media is telling the truth.” Macro-
level country characteristics (i.e., media freedom, political system) did not appear to affect perceived
exposure to misinformation among students.
Our data suggest that the answer to the second part of RQ1 is that students feel they are exposed
to misinformation the most on social media. Although a few participants said that legacy media (radio, TV,
and for a tiny minority, newspapers) remain their prime sources of news, the vast majority referred to the
digital platforms WhatsApp, Facebook, Twitter, and Google as their preferred sources. Of these, Twitter
seemed much less popular in Zambia and Zimbabwe, and among graduate students in general, the usage
of other platforms for information seeking (e.g., Instagram, Reddit, Snapchat, YouTube) was much less
widespread. Students reported the highest levels of misinformation on WhatsApp, a messaging platform
that allows group conversations and makes it relatively easy to forward information from other sources to
family, friends, and colleagues. To some, social media is the preferred choice to keep informed because
news stories are “curated” by friends and family members, which, some say, adds an extra layer of trust.
In other cases, the reasons are more pragmatic. Many mobile phone data bundles come with unlimited
access to WhatsApp. Also, some students do not have regular electricity at home, so battery-powered mobile
phones are the only way news can be accessed.
Discussions about inaccurate information on WhatsApp in Kenya, South Africa, and Nigeria
quickly led to claims that “older people” (e.g., parents and grandparents) share a lot of unverified
information because they lack an understanding of how social media works and tend to trust content
without verifying it.
For the older generation, social media has taken them by storm. They are excited about
it. I don’t think they realize what is fake and what is not. . . . My dad gets more likes on
his Facebook posts than I would ever get. But if he shares it, one person shares it to three
more, then it becomes such a big thing. (graduate student, Kenya)
You have a lot of people who are used to going to newspapers. When they’re told the
news, they automatically take it as fact, because that’s the way the world worked before.
In this age, where information is decentralized and anybody can say something is true,
they still have those old habits of those days. Older generations, who are new to social
media will start accepting fake news. (undergraduate student, Kenya)
In contrast to “older” generations, students feel they are much less vulnerable to misinformation. While
generational differences were not mentioned in the other three countries, in Zambia, some undergraduates
studying in the capital, Lusaka, did allude to geographical differences:
Not everyone can tell whether the story is true or not. In my village in Zambezi [in West
Zambia], people easily believe such stories without questioning beyond. People like us, who
can tell if the story is true or not, should not be in the forefront of making such stories viral.
While discussions in some Global North contexts on news, veracity, and trust have been linked to
belligerent antimedia narratives, we did not find evidence of such narratives in our sample. Instead,
participants often referred to one media house or another as their preferred source for trusted news. In
most cases, these go-to sources are the websites/mobile apps of privately owned domestic media (e.g., The
Nation in Kenya, Mail & Guardian in South Africa, and Channels TV in Nigeria). Government-controlled TV
stations were mentioned by no more than one or two students in South Africa (SABC), Ghana (GBC), and
Zambia (ZNBC). While some students did say that they follow blogs and independent commentators, these
sources did not seem to be widely used. International media, such as CNN, The New York Times, or the BBC,
which might be the target of attacks by politicians and citizens elsewhere, were described as sources whose
information can be trusted. In the words of a Zimbabwean undergraduate student, “online, there are some
sources that normally lie, but there are some authentic sources like the BBC.”
The overwhelming majority of students regard issues related to misinformation online as a major
source of concern, particularly as it relates to the political process. A student in South Africa brought up the
example of the 2019 elections in South Africa and associated the rise of Freedom Front Plus, a right-wing
White nationalist party, with misinformation online. In Nigeria, a postgraduate student recalled that
President Buhari once said “fake news is as worse as genocide,” a claim we could not verify, to echo what
other participants appeared to be most concerned about: the dangerous connection between politically
motivated misinformation and the fueling of ethnic tensions in Nigeria. This seemed to be a strong reason
most Nigerian students (all undergraduates and postgraduates but one) supported the enactment of
legislation to limit “fake news,” even if it may result in the restriction of personal freedoms. In other
countries, the division around support/opposition to tougher legislation appeared along generational lines
(undergraduates were against regulation, and postgraduates in favor of regulation). The only country in
which misinformation was not unanimously seen as a “big problem” was Zambia, where several participants
highlighted what they saw as the “positive” side of “fake news”:
“Fake news” or misinformation are neither good or bad. They serve two purposes. The
first one is negative: misleading the community and causing divisions. However, “fake
news” also plays a positive role. First, it makes boring news interesting. It is entertaining
and makes everyone regardless of their education level engage in conversations. Second,
“fake news” triggers the truth. By presenting a different point of view, it forces people to question more, thereby making the media clarify their subliminal messages. (graduate
student, Zambia)
In summary, across all six countries, we found that students believe misinformation is widespread,
particularly on social media, but not on mainstream media. Except for some Zambian students, the majority
saw the high prevalence of disinformation as a significant problem politically (e.g., it might lead to violence
and conflict) and socially (e.g., some citizens, such as the elderly and those living in rural areas, might be
prone to believing hoaxes).
Determining the Shareability of Information Online
With RQ2, we sought to explain under which circumstances students become willing contributors
to the dissemination of inaccurate information. There are three takeaways from our analysis: (1) various
cues are used to determine the veracity of information; (2) motivations to share a news item, even if known
to be inaccurate, are dependent on the topic; and (3) the political use of humor appears to be central when
examining how young people interact with politically motivated (mis)information. Next, we present evidence
to support each of these claims.
(1) In all countries, except for Zambia and Zimbabwe, students often used cues to determine what
content to share, even if this might sometimes lead to them sharing inaccurate information. This became
apparent when we presented them with a tweet from “UserABC” (not real), who had a blue check next to
their name, a sign that the account has been verified. For some, the blue check meant the story, a hoax
about plastic rice produced in China, was potentially true. Using this as a definite cue, some said they would
share it immediately; others said they would further investigate whether it was legitimate. A large majority,
however, remained adamant that it was clearly a fake story and would not merit their attention. Other cues
that were mentioned included the number of followers, the lack of likes, comments, retweets and other
metrics, the writing style, and the use of excessive punctuation. Cues used to justify the decision not to
share stimulus #2 included that the source did not seem legitimate, the layout of the website was “off,” the
editing was poor, and they had no recollection of the same story being published in mainstream media. The
skillful recognition of these cues would seem to indicate that some university students are quite media
literate. However, the existence of cues indicating that a story might not be true did not always act as a
deterrent for sharing a story, as we discuss next.
(2) A comparison of reactions to the first and second stimulus reveals that the topic of a story
affects its shareability. We found that stories about health and food (as in the case of stimulus #1), as well
as posts/tweets about scams, safety, and terrorism were evaluated differently than news about politics
(stimulus #2). Those who said they would share the former mostly invoked one reason to do so: to create
awareness. This motivation appears to resonate with findings reported in the literature on sharing practices
in other countries (Chadwick & Vaccari, 2019; Duffy et al., 2019). In the typology proposed by Chakrabarti
and colleagues (2018), this would fall between an act of civic duty and a sense that information is democratic
and should be passed on. For many students, the notion of civic duty applied, regardless of the veracity of
the story, as the following exchange among undergraduate students in Zambia shows:
Student A: This is a matter of life, and it should be taken seriously. Who knows what the
Chinese want? Maybe they want to kill all of us, and take our country. Just like everyone
else, I would also share the news.
Student B: I would wait before I choose to believe this story. Three bowls making one
plastic bag? I doubt it. But it’s on Twitter, so it could be true. But can we find such a story
on BBC or Al-Jazeera?
Student C: Why would you want to wait until you verify? How long will that take you? You
will end up killing people by not sharing. I think there’s no harm in sharing. If it’s not true,
it will harm no one. If it’s true, then it will save some lives. How many fake news stories
do people, even us here, share on Facebook? Sometimes we even know about them being
a lie, yet we go on to share.
The same motivation, a sense of civic duty combined with a “just in case” attitude, applies to other stories
students said they would share, such as terror-related incidents for Kenyan participants, or, for Nigerians,
news about anti-African xenophobic attacks in South Africa.
There were not many students who thought they would share the second stimulus, and the
motivations outlined above do not seem to apply in the case of a political story. First, most of the participants
expressed their lack of interest in politics, which seemed to make them less likely to react to the stimulus
(e.g., a Ghanaian graduate student explained, “I will first of all read it, but then I wouldn’t share because
I’m not interested in politics so that’s why I wouldn’t even want to share”; and a Kenyan graduate student
stated, “I wouldn’t even care. For me, anything with politics, hands off”). However, in each country there
were students who described themselves as politically aware and engaged. These students said they would
share the news story because it aligned with their political views, or because it would spark some debate.
The intensity and length of the discussion around the shareability of the second stimulus differed across
countries. Kenyan and South African students in our sample appeared politically apathetic and did not find
the story we showed worth sharing. Students in our Ghanaian, Nigerian, and Zambian focus groups appeared
to be much more politically engaged, and that seemed to make them more likely to share the story. Overall,
the Zimbabwean focus group discussion turned out to be the most nuanced. Three positions on the issue
could be seen: (a) those who would not share the story; (b) those who, being politically engaged, would
post it (e.g., an undergraduate student from Zimbabwe said, “I joined a political WhatsApp group. I would
send to that because the people in that group would be interested in that”); and (c) those who did not see
sharing the story as a problem in terms of spreading misinformation per se, but as a form of losing social
capital. These quotes summarize the debate well: “I would get negative comments because people would
trust me as a source of that news” (undergraduate student, Zimbabwe); and, “I don’t share such political
stories because of some intimidation that you [see] on social media. Some would say your WhatsApp is
being followed. Sometimes you feel afraid to send such stories” (undergraduate student, Zimbabwe).
Finally, (3) we found humor, and the use of parody, to be a factor influencing the sharing of political
(mis)information. These are some of the initial responses to the second stimulus:
You may say it’s news from NewsDay [a reputable Zimbabwean newspaper], but overall,
I’ll treat it as a joke. When I’m forwarding it, I’m not forwarding news but a joke. As long
as it’s a joke, I don’t need to verify. (graduate student, Zimbabwe)
I will laugh. I feel like writing several laughing emojis before I say anything serious,
because the picture alone is already funny. This can be a meme or something before I
now start to criticize what the president is doing. (undergraduate student, Nigeria)
I’ll share this on WhatsApp, on a political party page. Then I’d try and take a few jabs at
Mahama [former Ghanaian President], and then make some funny comments about his
choosing of running mate. (graduate student, Ghana)
Humor, gossip, and satire seem to provide a refuge for media users overwhelmed with serious or depressing
news. And yet, with many saying they would post fake and fabricated stories about politicians to poke fun
at those in power, we found no references to the idea that sharing misinformation is caused by a desire to
create chaos (Petersen et al., 2018). Rather, the sharing practices students reported seem to point to the
importance of conviviality and community, as found in more traditional networks of orality, or the use of
earlier forms of ICTs to resist government control/abuse. As a Zambian graduate student put it,
This stems from our history, where the media was mostly used for entertainment, and the
elites used it for news. From colonial times to the most recent years, the media was for
status. Reading a newspaper was a sign of education and wealth. Most people listened to
radios, and usually for entertainment purposes. So, when social media was introduced,
the mentality did not change, except that this time the people were allowed to create
content. However, the content they create does not really reflect who they are, except
that they do it for entertainment.
Discussion and Conclusion
Building on previous research on the prevalence of misinformation in sub-Saharan Africa, this study
provides more depth to empirical data that showed online users on the continent are oftentimes contributors
to the spread of inaccurate information (Wasserman & Madrid-Morales, 2019). The first finding of the study
indicates that misinformation is experienced as a common occurrence, and is seen as a cause for concern.
As noted in the literature, age was a factor in the sharing of misinformation: Students apportioned blame
for this to an older generation of media users. Although this is just the perception of a young demographic
rather than a finding based on cross-generational sampling, this perception resonates with Guess and
colleagues’ (2019) study of the use of misinformation during the 2016 U.S. election, which found older
Americans to be more likely to share “fake news.”
A second finding was that young media consumers are discerning users who rely on various cues
to evaluate the veracity of information. Although our respondents were well-versed in the affordances of
social media, they did not report a general mistrust of established news media and indicated that established
sources would serve as benchmarks for evaluating the veracity of information. In this regard, our findings
deviated somewhat from some views expressed in the literature about the Global North (London School of
Economics and Political Science, 2020; Wagner & Boczkowski, 2019), which indicate that the use and
distribution of misinformation stem from a cynicism toward the media as a whole. In verbalizing sharing
practices, students in all six countries often spoke of the current media environment as one in which
discerning what is true and what is not is increasingly difficult. While students do not seem to distrust all
media or see state-owned media as the epitome of false information, there seems to be no single source
that is trusted by all (or most). Although media literacy as a concept was not familiar to most students, they
described behaviors and practices that could be viewed as applying media literacy skills (e.g., seeking out
additional sources, and verifying claims found on social media). Importantly, additional research needs to
determine what students in sub-Saharan Africa think media literacy is and what it ought to be. Little research
has examined this issue in this context, and grounded empirical research is needed.
Only a handful of the motivations for sharing misinformation found in the literature (Chadwick &
Vaccari, 2019; Chakrabarti et al., 2018; Duffy et al., 2019; Guess et al., 2019; Petersen et al., 2018) could
be matched to those provided by our respondents, and different motivations applied to politically and
nonpolitically motivated content. In all six countries, the sharing of health-related (mis)information (also
news about terrorism, political violence, and scams) was attributed to a sense of civic duty. This confirms
the social utility of information sharing noted in the literature. As previously found in Singapore (Tandoc et
al., 2020), and in Nigeria and Kenya (Chakrabarti et al., 2018), our respondents indicated the need to warn
others as a likely motivation for sharing.
Political motivations have often been highlighted as a reason for sharing misinformation, whether to
mobilize against a target group or to rail against the whole system (Petersen et al., 2018). Political orientation
has also been noted as a motivating factor for sharing misinformation (Jamieson & Albarracín, 2020). In our
study, the sharing of political news stories revealed differences across countries. A country’s political culture
and media system seemed to be linked to the way users interact with false information. In Zimbabwe, where
press freedom is weak and authoritarianism is still a reality, the sharing of political (mis)information was
presented as a courageous act, even if done in WhatsApp groups, where encryption is sophisticated. At the
other end, in South Africa and Kenya, both of which have a vibrant media sector and a (flawed) but functioning
democracy, students appeared to be the least motivated to share political news. South African and Kenyan
students seemed to be much less politically engaged than those in countries where participants said they would
share not only the stimulus we presented but also other similar stories about politics. Some participants said
they would do so because it could help them advance their political motives, while others suggested that their
goal when sharing political (mis)information is ridiculing those in power.
It is especially this latter practice that points to a gap in the most recent literature on
misinformation. The political use of humor we see in sharing practices differs in nature from the orchestrated
political campaigns described in the literature on misinformation elsewhere. Our findings seem better aligned
with the extensive literature on the political uses of satire in scholarship on African media and political
communication, especially in contexts where news media is repressed by the state or captured by elites.
This finding also emphasizes the need to better root studies on politically motivated disinformation within
the sub-Saharan African context. Though the boundaries between satire used for political ends and malicious
or misleading information may be nebulous, the long social history of such practices in Africa makes this an
important factor to consider. Given the entrenched role of satirical and humorous content in informal
networks of media use in Africa (Nyamnjoh, 2005; Willems, 2011), and the progressive uses to which these
types of intentionally false, albeit not misleading, content have been put, media users on the continent
might be less resistant to sharing information that they know is untrue.
References
Bigman, C. A., Smith, M. A., Williamson, L. D., Planey, A. M., & Smith, S. M. (2019). Selective sharing on
social media: Examining the effects of disparate racial impact frames on intentions to retransmit
news stories among US college students. New Media & Society, 21(11/12), 2691–2709.
doi:10.1177/1461444819856574
Chadwick, A., & Vaccari, C. (2019). News sharing on UK social media: Misinformation, disinformation, and
correction (O3C 1). Online Civic Culture Centre. Retrieved from
https://www.lboro.ac.uk/research/online-civic-culture-centre/news-events/articles/o3c-1-survey-
report-news-sharing-misinformation/
Chakrabarti, S., Rooney, C., & Kweon, M. (2018). Verification, duty, credibility: Fake news and ordinary
citizens in Kenya and Nigeria. BBC. Retrieved from http://downloads.bbc.co.uk/mediacentre/bbc-
fake-news-research-paper-nigeria-kenya.pdf
Duffy, A., Tandoc, E., & Ling, R. (2019). Too good to be true, too good not to share: The social utility of
fake news. Information, Communication & Society. Advanced online publication.
doi:10.1080/1369118X.2019.1623904
Dwyer, M., & Molony, T. (Eds.). (2019). Social media and politics in Africa: Democracy, censorship and
security. London, UK: Zed.
The Economist Intelligence Unit. (2020). Democracy index 2019: A year of democratic setbacks and
popular protest. Retrieved from https://www.eiu.com/topic/democracy-index
Electoral Commission of South Africa. (2019). Electoral Commission launches online reporting platform for
digital disinformation. Retrieved from https://www.elections.org.za/ieconline/Report-digital-
disinformation
Goldstein, J., & Rotich, J. (2010). Digitally networked technology in Kenya’s 2007–08 post-election crisis. In S. Ekine (Ed.), SMS uprising: Mobile activism in Africa (pp. 124–137). Cape Town, South
Africa: Pambazuka.
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news
dissemination on Facebook. Science Advances, 5(1), eaau4586. doi:10.1126/sciadv.aau4586
Jamieson, K. H., & Albarracín, D. (2020). The relation between media consumption and misinformation at
the outset of the SARS-CoV-2 pandemic in the US. Harvard Kennedy School Misinformation
Review, 1(2), 1–22. doi:10.37016/mr-2020-012
Kamberelis, G., & Dimitriadis, G. (2013). Focus groups: From structured interviews to collective
conversations. London, UK: Routledge. doi:10.4324/9780203590447
Lewis, S. C., & Molyneux, L. (2018). A decade of research on social media and journalism: Assumptions,
blind spots, and a way forward. Media and Communication, 6(4), 11–23.
doi:10.17645/mac.v6i4.1562
London School of Economics and Political Science. (2020). Tackling the information crisis: A policy
framework for media system resilience. Retrieved from http://www.lse.ac.uk/media-and-
communications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf
Mäkinen, M., & Kuira, M. W. (2008). Social media and postelection crisis in Kenya. The International
Journal of Press/Politics, 13(3), 328–335. doi:10.1177/1940161208319409
Mano, W. (2007). Popular music as journalism in Zimbabwe. Journalism Studies, 8(1), 61–78.
doi:10.1080/14616700601056858
Mare, A. (2020). Popular communication in Africa: An empirical and theoretical exposition. Annals of the
International Communication Association, 44(1), 81–99. doi:10.1080/23808985.2019.1623060
Moehler, D. C., & Singh, N. (2011). Whose news do you trust? Explaining trust in private versus public
media in Africa. Political Research Quarterly, 64(2), 276–292. doi:10.1177/1065912909349624
Mutsvairo, B., & Bebawi, S. (2019). Journalism educators, regulatory realities, and pedagogical
predicaments of the “fake news” era: A comparative perspective on the Middle East and Africa.
Journalism & Mass Communication Educator, 74(2), 143–157. doi:10.1177/1077695819833552
Nyamnjoh, F. B. (2005). Africa’s media, democracy, and the politics of belonging. London, UK: Zed.
Ogola, G. O. (2010). “If you rattle a snake, be prepared to be bitten”: Popular culture, politics and the
Kenyan news media. In H. Wasserman (Ed.), Popular media, democracy and development in
Africa (pp. 123–136). London, UK: Routledge.
Okoro, N., & Emmanuel, N. O. (2018). Beyond misinformation: Survival alternatives for Nigerian media in
the “post-truth” era. African Journalism Studies, 39(4), 67–90.
doi:10.1080/23743670.2018.1551810
Orji, N. (2019). Social media and elections in Nigeria: Digital influence on election observation,
campaigns, and administration. In M. Dwyer & T. Molony (Eds.), Social media and politics in
Africa: Democracy, censorship and security (pp. 152–172). London, UK: Zed.
Petersen, M. B., Osmundsen, M., & Arceneaux, K. (2018). The “need for chaos” and the sharing of hostile
political rumors in advanced democracies. PsyArXiv Preprints. doi:10.31234/osf.io/6m4ts
Reporters Without Borders. (2020). 2020 world press freedom index. Retrieved from
https://rsf.org/en/ranking
Roper, C. (2019). South Africa. In N. Newman, R. Fletcher, A. Kalogeropoulos, & R. K. Nielsen (Eds.),
Reuters Institute digital news report 2019 (pp. 148–149). Retrieved from
https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-06/DNR_2019_FINAL_0.pdf
Shoki, W. (2020, January 15). On conspiracy theories. Retrieved from
https://africasacountry.com/2020/01/on-conspiracy-theories
Sterrett, D., Malato, D., Benz, J., Kantor, L., Tompson, T., Rosenstiel, T., . . . & Loker, K. (2019). Who
shared it? Deciding what news to trust on social media. Digital Journalism, 7(6), 783–801.
doi:10.1080/21670811.2019.1623702
Tandoc, E. C. (2019). The facts of fake news: A research review. Sociology Compass, 13(9), e12724.
doi:10.1111/soc4.12724
Tandoc, E. C., Lim, D., & Ling, R. (2020). Diffusion of disinformation: How social media users respond to
fake news and why. Journalism, 21(3), 381–398. doi:10.1177/1464884919868325
Tully, M., & Ekdale, B. (2014). Sites of playful engagement: Twitter hashtags as spaces of leisure and
development in Kenya. Information Technologies & International Development, 10(3), 67–82.
doi:10.1057/9781137404299_6
Udupa, S., & McDowell, S. D. (Eds.). (2017). Media as politics in South Asia. London, UK: Routledge.
doi:10.4324/9781315267159
UNESCO. (2018). Journalism, “fake news” and disinformation. Paris, France: Author.
Uzuegbunam, C. E. (2020). A critical analysis of transgressive user-generated images and memes and
their portrayal of dominant political discourses during Nigeria’s 2015 general elections. In M. N.
Ndlela & W. Mano (Eds.), Social media and elections in Africa (Vol. 2, pp. 223–243). Cham,
Switzerland: Palgrave Macmillan. doi:10.1007/978-3-030-32682-1_12
Valenzuela, S., Halpern, D., Katz, J. E., & Miranda, J. P. (2019). The paradox of participation versus
misinformation: Social media, political engagement, and the spread of misinformation. Digital
Journalism, 7(6), 802–823. doi:10.1080/21670811.2019.1623701
Wagner, M. C., & Boczkowski, P. J. (2019). The reception of fake news: The interpretations and practices
that shape the consumption of perceived misinformation. Digital Journalism, 7(7), 870–885.
doi:10.1080/21670811.2019.1653208
Wahutu, J. S. (2019). Fake news and journalistic “rules of the game.” African Journalism Studies, 40(4),
13–26. doi:10.1080/23743670.2019.1628794
Wasserman, H. (2020). Fake news from Africa: Panics, politics and paradigms. Journalism, 21(1), 3–16.
doi:10.1177/1464884917746861
Wasserman, H., & Madrid-Morales, D. (2019). An exploratory study of “fake news” and media trust in
Kenya, Nigeria and South Africa. African Journalism Studies, 40(1), 107–123.
doi:10.1080/23743670.2019.1627230
Willems, W. (2011). Comic strips and “the crisis”: Postcolonial laughter and coping with everyday life in
Zimbabwe. Popular Communication: The International Journal of Media and Culture, 9(2), 126–145.
doi:10.1080/15405702.2011.562099