Crowdsourcing Facts in Conflicts: Towards a Truth-Telling Mechanism to Counter Fake News


Abstract

The unprecedented rise of mis- and disinformation surrounding conflicts around the globe poses an imminent threat to humanitarian assistance and conflict resolution. This paper reflects on previous approaches to harness the so-called "wisdom of the crowd" by collecting reliable and real-time information from primary sources during crises. Using insights from game theory, we present the design of a novel crowdsourcing platform thought to increase the reliability and objectivity of conflict-related information. In contrast to previous approaches, our design incorporates a mechanism that sets incentives for citizen reporters to tell the truth by disclosing privately held information as accurately as possible. Moreover, the proposed platform offers the possibility to certify news based on a multi-stage review process and directly connects reporters and media outlets via an integrated marketplace. By enabling media outlets to access original and verified information from primary sources, the platform is anticipated to become an attractive sourcing tool for media outlets that otherwise would purchase news from secondary sources. We argue that a platform with these design features can help counter the spread of fake news on conflicts and thereby contribute to more effective humanitarian assistance and peacebuilding.
Contact Address
The International Journal of Humanitarian Studies
P.O. Box 51049 Riyadh 11543 Kingdom of Saudi Arabia - Fax: 4647851
A Peer-Reviewed Journal Issued Every Four Months by King Salman Humanitarian Aid and Relief Center
Supervisor General
His Excellency Dr. Abdullah Bin Abdulaziz Al-Rabeeah
Counselor at the Royal Court and Supervisor General of the King Salman
Humanitarian Aid and Relief Center
Editor-in-Chief
Dr. Aqeel Bin Jamaan Al-Ghamdi
Assistant Supervisor General of King Salman Humanitarian Aid
and Relief Center for Planning and Development Affairs
Contents

Crowdsourcing Facts in Conflicts: Towards a Truth-Telling Mechanism to Counter Fake News - Dr. Manuel Schubert, Dr. Fahad Alsharif (p. 3)
The Legality of International Humanitarian Intervention in International Policy - Dr. Mohammed Ennadi (p. 25)
Fundamental Environmental Challenges in the Arab World: A Socio-Security Perspective - Dr. Khaled Kazem Aboudouh (p. 47)
Counter-Terrorism and Humanitarian Aid: Repercussions and Solutions - Dr. Hanan Ahmed Elfouly (p. 67)
Development of Public Education in Yemen: Case of Kingdom of Saudi Arabia Assistance - Dr. Ziad Mohammed Al-Mehwari (p. 99)
Issue (5) September 2021 | Safar 1443
Crowdsourcing Facts in Conflicts
Towards a Truth-Telling Mechanism to Counter Fake News

Dr. Manuel Schubert - Germany
Managing Director at Behavia, Behavioral Public Policy and Economics GmbH

Dr. Fahad Alsharif - Saudi Arabia
Senior Resident Research Fellow, King Faisal Center for Research & Islamic Studies
Keywords: Crowdsourcing, Violent Conflicts, Fake News, Journalism, Game Theory
Crowdsourcing Facts in Conicts
Towards a Truth-Telling Mechanism to Counter Fake New
Crowdsourcing Facts in Conicts
Towards a Truth-Telling Mechanism to Counter Fake News(1)
Dr. Manuel Schubert
Dr. Fahad Alsharif
Saudi Arabia
“People need information as much as water, food, medicine or shelter.
Information can save lives, livelihoods and resources.”
Markku Niskala, Secretary General IFRC (2005)
The advancement of social media has affected societies worldwide, including
the way we produce, consume, and respond to information. Social media help to
disseminate news and knowledge and contribute to more inclusive and diverse
coverage. With currently around 3.8 billion users worldwide (Statista, 2021), social
media exert a signicant inuence over public opinions, shaping the cultural, social,
and political lenses through which societies are perceived.
Despite their positive implications, social media also have a dark side. They have been carriers of falsehoods and manipulations (Gupta et al., 2013; Tran et al., 2020). The spread of fake news has nurtured polarization in, and destabilization of, societies, undermining cohesiveness and widening existing inequalities. Fake news usually follows a malicious agenda: it intends to alter the perceptions, beliefs, and behaviors of people by changing the way we look at products, events, or people (U.S. Homeland Security 2018). It offers simple explanations for complex problems, purposefully triggering psychological heuristics and biases that induce ad-hoc judgments and reinforce negative sentiments (Schubert et al. 2020).
The ultimate goal of fake news is mass deception and disinformation, either
intending to increase people’s willingness to perform a desired action, e.g.,
purchasing a product, supporting a political position, changing a policy, or vice
versa, to abstain from an undesired action by creating confusion or inducing
uncertainty. Although fake news is not a new phenomenon, the volume and scale of
distributed mis- and disinformation in recent years are unprecedented. Today's social media diffuse false information significantly faster, deeper, and more broadly than accurate news, particularly in environments of uncertainty (Vosoughi et al. 2018; Pennycook et al., 2020).
In the context of humanitarian aid, false information can have a severe impact
on the living conditions and livelihood of millions of people. During humanitarian
crises, accurate and trustworthy information is vital for affected communities,
local authorities, policymakers, as well as aid organizations (Hannides 2015, Tran
et al., 2020). Humanitarian emergencies, especially those arising from armed conflicts, are often deeply politicized. Multiple conflict parties seek to influence their national and international media representation to change public opinion and foreign relations in their favor. In Syria, for example, media reports are found to capture less than 10% of the estimated total number of conflict-related events and exhibit systematic actor-specific biases (Baliki, 2017). In South Sudan, the UN states that social media "has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats, or post outright messages of incitement" (UN Security Council, 2016: 10). Hence, disseminating false information can undercut aid effectiveness and substantially impede conflict reconciliation (Igoe 2018, Bunce 2019, Tran et al. 2020).
Research in various disciplines has started to explore the dynamics of mis-
and disinformation, including possible ways to alleviate their impact on news
consumers’ behavior and attitudes. Besides government regulations, prominent approaches focus on improving the media literacy and information agency of media consumers (Kahan et al., 2017; Semakula et al., 2017; Meyer et al., 2020; Lorenz-Spreen et al., 2020; Kozyreva et al., 2020; Schubert et al., 2020).
In the humanitarian sector, research has focused more on the supply side of news. One of the most promising methods that have recently ventured into this field is crowdsourcing.(5) Crowdsourcing relies on the wisdom of the crowd. Emerging from psychological research and cognitive science, the idea is that the average of multiple estimates tends to be more accurate than any single estimate (Surowiecki, 2004; Fiechter & Kornell, 2021). This assumption has been found to be remarkably accurate: the wisdom of the crowd outperforms individual estimates in forecasting future events (Mellers et al. 2014, Turner et al. 2014), in judging probabilities (Ariely et al. 2000; Lee and Danileiko 2014), and in knowledge and information-processing tasks (Steyvers et al. 2009, Yi et al. 2012, Bennett et al. 2018).
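The averaging effect behind the wisdom of the crowd can be illustrated with a minimal simulation (a sketch with hypothetical parameters, not part of the platform design): independent, noisy estimates of the same quantity largely cancel out when averaged.

```python
import random

def crowd_vs_individual(true_value=100.0, n=500, noise=30.0, seed=7):
    """Compare the crowd's mean estimate with the average individual error
    for n independent, noisy estimates of the same true quantity."""
    rng = random.Random(seed)
    estimates = [true_value + rng.gauss(0, noise) for _ in range(n)]
    crowd_error = abs(sum(estimates) / n - true_value)
    avg_individual_error = sum(abs(e - true_value) for e in estimates) / n
    return crowd_error, avg_individual_error

crowd_err, indiv_err = crowd_vs_individual()
print(crowd_err < indiv_err)  # the averaged estimate is far closer to the truth
```

With these parameters, the crowd's mean error is typically an order of magnitude smaller than a typical individual's error, which is the cancellation of independent errors that the literature cited above documents.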
These astonishing results combined with the rapid advancement of social media
technology have given rise to crowdsourcing in humanitarian assistance and, more
broadly, to the emergence of citizen journalism. Crowdsourced information is of
particular help in emergencies and conflict environments where data collection is extremely difficult and, at the same time, verified information is increasingly needed (Kahl et al., 2012). A variety of crowd-based techniques have already been launched with the objective to gather (more) reliable and (more) objective information in conflicts and crises (e.g., Butler 2013, Shiel 2013, Rigterink and Baliki 2019). However, despite their promising potential of becoming the new reporting standard, these attempts have largely failed to attract sufficiently high numbers of individuals who are willing to share their information on conflict events truthfully.(6)
Crowdsourcing Facts in Conicts
Towards a Truth-Telling Mechanism to Counter Fake New
Moreover, most platforms are not self-sustaining: they require regular funding from external parties to maintain and develop their operations.
This is the starting point of our paper. How can we better harness the crowd's wisdom in the fight against fake news in conflict environments and increase the reliability and objectivity of news? How can more people be encouraged to share privately held information on recent events and incidents? What is needed to secure continuous platform revenues in a self-sufficient manner?
In this study, we try to provide answers to these questions. Using insights from game theory, we present the design of a new crowdsourcing platform that incorporates an incentivized truth-telling mechanism. By applying a bonus-and-penalty system to validate news content, the mechanism makes it the dominant strategy for individuals to disclose privately held information as accurately as possible. Moreover, the proposed platform offers the possibility to certify news based on a multi-stage review process and directly links all sides of the market, reporters and media outlets. We believe that a platform that incorporates these key design features can strengthen the role of journalism as the Fourth Estate in an age of mis- and disinformation, help shape a more nuanced picture of complex realities, and contribute to more effective humanitarian assistance and conflict resolution.
This paper is organized as follows: First, we present key design features of existing
crowdsourcing platforms and highlight the main differences to our approach.
Second, we introduce the new platform’s basic architecture to certify news based
on a multi-stage review process. Third, we discuss the truth-telling mechanism,
including the bonus-and-penalty system to stimulate the disclosure of privately held
information. Fourth, we present additional and optional features of the platform
to increase uptake rates. Finally, we discuss limitations and offer concluding remarks.
Crowdsourcing in Humanitarian Action
A series of crowd-based initiatives have had considerable success in the field of humanitarian action and crisis management. Despite the growing interest, however, notable knowledge and methodological gaps remain in the literature. Hence, providing a comprehensive overview of all major crowdsourcing platforms in the humanitarian sector would be beyond this paper's scope. Instead, we organize a small selection of well-known platforms and their main design parameters and features to portray similarities and key differences, particularly vis-à-vis the new platform architecture proposed in this paper. For further discussions of existing platforms, including various case studies, we refer to the works of Shiel (2013), Munro (2013), and Hunt and Specht (2019).
Probably the most prominent crowdsourcing platform in the humanitarian sector is Ushahidi (Swahili for "testimony"). Based in Kenya, Ushahidi was developed to map reports of violence in Kenya after the post-election violence in 2008. Ushahidi is an open-source software platform for crowdsourcing reports from citizen reporters on the ground. People can use their mobile phones to send text messages, emails, or Tweets to a central platform on which messages are gathered, stored, and visualized on a map (Figure 1). Platform users can see the data points on Google Maps as well as the details of individual reports, which can be filtered by various categories. In addition, reports can be verified by requiring a minimum number of similar reports to reach the platform.(7) The main user group of Ushahidi is civil society organizations (CSOs), but the platform is also frequently used by journalists, governmental organizations, and research institutes. By 2018, Ushahidi had been used more than 150,000 times in over 160 countries, crowdsourcing more than 50 million reports from citizens (Ushahidi 2018).
Figure 1: Example of Ushahidi incident and timeline mapping (OSGeoLive 2009)
Crowdsourcing Facts in Conicts
Towards a Truth-Telling Mechanism to Counter Fake New
Ushahidi provides the software engine for various other crowdsourcing platforms. For instance, the Crisis Tracker maps violent incidents in remote border regions encompassing the northeastern Democratic Republic of Congo and the eastern Central African Republic. The Syria Tracker, the former lighthouse project of the Humanitarian Tracker, used the Ushahidi platform to gather information on violent incidents and humanitarian needs in Syria. The Digital Humanitarian Network (DHN), a network-of-networks, supported humanitarian efforts in response to the 2010 Haiti earthquake and the 2012/13 typhoons in the Philippines. It also assisted the UN OCHA in mapping the conflict in Libya (Standby Task Force, 2011). Both the Syria Tracker and the DHN have been suspended, according to their websites.(8)
Besides these Ushahidi-based platforms, the Qatar Computing Research Institute and the Masdar Institute of Technology have developed an independent system called is a demand-driven platform to crowdsource the verification of unconfirmed reports during crises. Media outlets or humanitarian organizations can send a verification request to, which is then shared with its reporters, pre-registered citizens called "Digital Detectives." These detectives search the web for clues that can help to verify the request (Meier 2015). As a novelty, reporters are awarded points for the accuracy of their submissions. The higher their point score, the more weight will be placed on their future search results.'s service is not publicly available yet; its website is currently offline, so it is unclear if and when will be launched.
Table (1) provides a high-level comparison of the five crowdsourcing platforms that have been introduced above. The leftmost column shows key features and design parameters. Important aspects, such as major differences and similarities, are highlighted in colors. The rightmost column shows the features of the new platform proposed in this paper, which are explained in more detail in subsequent sections.
Table (1): Comparison of selected platforms and key features against the new platform architecture

Feature              | 1. Ushahidi       | 2. Crisis Tracker | 3. Syria Tracker  | 4. DHN           | 5.                          | 6. New platform
Focus                | Incident mapping  | Incident mapping  | Incident mapping  | Incident mapping | News content                        | News content
Main user group      | CSOs              | CSOs              | CSOs              | CSOs             | Media outlets                       | Media outlets
Geotagging           | Yes               | Yes               | Yes               | Yes              | Yes                                 | Yes
Regions              | Worldwide         | Africa            | Syria             | Worldwide        | Worldwide, Qatar-based              | Worldwide
Reporters            | Everyone          | Everyone          | Everyone          | Everyone         | Everyone                            | Everyone
Verification         | Yes, but optional | Yes, but optional | Yes, but optional | Unknown          | Yes                                 | Yes, multi-stage
Payment to reporters | No                | No                | No                | No               | Non-monetary points based on accuracy | Yes, based on the RSP
Match-making         | No                | No                | No                | No               | Unknown                             | Yes
Status               | Active            | Active            | Inactive          | Inactive         | Unknown(9)                          | N/A
As shown in Table 1, Ushahidi and its partner platforms (platforms 1-4) differ from and the proposed new platform in various dimensions. Below we discuss four key differences in more detail and complement the discussion by explaining the rationale and strategic reasoning behind the new platform presented in this paper.

First, the Ushahidi-based platforms primarily focus on supporting CSOs, especially relief organizations, by providing geographical mappings of incidents. and our platform design pursue a different approach: they focus on providing information, based on both quantitative and qualitative data, to international media outlets as a primary user group.
Second, our approach differs from and the other platforms by concentrating primarily on conflicts, i.e., one specific type of humanitarian crisis. The reason why we address media outlets and conflict-related information is rooted in the rapidly changing dynamics of the sector: as technological advancement generates massive amounts of unverified data during crises, it is increasingly difficult for professional journalists to find reliable, timely, and trustworthy information on conflicts. As Bunce (2019: 52-53) points out: "In an ideal world, journalists would fact-check and verify all such claims before they published them." In practice, however, "journalists look online for existing content – press releases, marketing material, social media posts and previously published news articles and newswire articles – and they frequently republish this content with only slight amendments." Our platform is designed to bridge this gap and provide international media outlets and newswires with a rapid method to cross-check unconfirmed reports and/or source new, verified information from primary sources on the ground.
Third, none of the existing platforms offers payments to their reporters. While applies a non-monetary point system, only our platform explicitly accounts for adequate compensation. This aspect is essential as it refers to a systemic challenge faced by many crowdsourcing portals: people typically volunteer in crowdsourcing and other social initiatives out of a sense of altruism or reciprocity, which gives them a warm glow of doing something good (Camerer, 2003; Jin et al., 2010). However, such intrinsic motivations might not always be present at sufficiently high levels among communities suffering from acute hardship. Van der Windt and Humphreys (2016), therefore, caution that the data generated through volunteer-based systems like the Ushahidi-based platforms might be less representative, potentially subject to tendentious reporting, and misrepresented due to self-selection problems. Our design seeks to overcome these challenges by providing direct compensation to reporters with the aim of generating sufficiently high participation rates.
Fourth, most platforms are not self-sustaining but require institutional and periodic funding by external parties in order to maintain operations and develop their portals. Their actual beneficiaries - CSOs, media outlets, and government authorities - gain access to valuable information free of charge and do not co-share the costs of running the platforms. Our design embeds a match-making feature to rectify this flaw, directly connecting the media sector's supply and demand sides and thereby securing sustainable funding. To be more precise, media outlets and other customers will be charged a small service fee to receive the information of interest. The fees are used to compensate the reporters and finance the costs of maintaining and developing the platform on a not-for-profit basis. The following box provides more details on the underlying business case.
A New Crowdsourcing Platform

In this section, we introduce the basic architecture of the new crowdsourcing platform for conflict news. The general idea is to let the crowd review any quantitative and qualitative content during a multi-stage process. As a novelty compared to existing platforms, our design involves reporters from opposing conflict zones. Content that passes the process will be certified as verified and objective.(11) The review process provides the framework for the truth-telling mechanism, which will be explained in the coming section.
Box 1: Business case of the new platform

The business case of the proposed new platform arises from the ongoing transformation of the international media sector. For decades, international media have suffered from significant cost pressure, leading to deep cuts in the number of foreign correspondents around the world (Sambrook, 2010). The vast majority of today's news outlets do not have foreign correspondents anymore (Bunce, 2019). For example, Scott et al. (2018) report that only twelve English-language media outlets remain that regularly publish original articles on humanitarian affairs. The rest purchase and repurpose content produced by secondary sources.

The proposed platform is designed to offer an alternative for media outlets that cannot afford to deploy foreign correspondents of their own. Instead of buying and republishing old content from secondary sources, it enables these media outlets to publish original news based on verified information gathered from primary sources. Furthermore, since the process does not involve any intermediaries, obtaining relevant information and news-on-demand is expected to be significantly less costly than in the current models.
We start by dening the minimum
criteria that reports have to meet during
the review process: In order for a report to
qualify as “certied,”
1) any quantitative data of a report
must be veried by at least one other
source and
2) any qualitative information must
be reviewed by at least one other
We further assume that the review and certification process is implemented through a central platform, i.e., a mobile application connecting and communicating with a server system. Technically, the system needs to provide secure payment methods, time-stamping, and geotagging within a pre-defined radius. The app could either be an add-on to existing crowdsourcing engines, such as Ushahidi, to existing social media platforms, such as Twitter or Facebook, or a standalone application.
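The geotagging requirement could be enforced server-side with a simple great-circle distance check. The sketch below is illustrative only (the function name, coordinates, and radius are our own assumptions, not part of the platform specification).

```python
import math

def within_radius(lat1, lon1, lat2, lon2, radius_km):
    """Haversine check: is a reporter's geotag (lat1, lon1) within a
    pre-defined radius of the reported incident location (lat2, lon2)?"""
    earth_radius_km = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance_km = 2 * earth_radius_km * math.asin(math.sqrt(a))
    return distance_km <= radius_km

# A reporter less than a kilometre from the incident qualifies for a 5 km radius:
print(within_radius(33.510, 36.290, 33.515, 36.295, radius_km=5))  # True
```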
Based on these assumptions, the basic design of the application incorporates four distinct stages to produce reports that meet the above-mentioned minimum criteria of certification:

1) Stage – data entry: In the first stage, a person from a conflict party's region, labeled reporter A, uploads a raw report on a news-relevant incident. The raw report can include statistics, e.g., on casualties, images, clips, and a verbal explanation of what has happened according to the reporter's view. After uploading, the report proceeds to the second stage.
2) Stage – cross-check: The application informs all other reporters from the same geographical area about the newly uploaded report and invites them to verify any quantitative data. The first reporter accepting, labeled reporter B, is tasked to cross-check the numbers, dates, and visual content contained in the raw report. If s/he considers the quantitative data to be accurate enough, e.g., within a pre-defined acceptable error margin of 10% for numerical data, s/he clicks 'approve.' This will push the report to the next stage. Otherwise, s/he clicks 'reject.' In this case, the report goes back to stage 1, and reporter A is requested to revise the data and restart or, alternatively, cancel the review process.
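The acceptable error margin for reporter B's cross-check amounts to a simple relative-difference test. The 10% threshold follows the example in the text; the function name and interface are our own illustration.

```python
def within_margin(reported, verified, margin=0.10):
    """Return True if reporter A's reported value agrees with reporter B's
    independently checked value within the acceptable relative error margin."""
    if verified == 0:
        return reported == 0
    return abs(reported - verified) / abs(verified) <= margin

# 14 casualties cross-checked against a report of 15 lies within the 10% margin:
print(within_margin(15, 14))  # True
```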
3) Stage – de-biasing: The application invites reporters from the other conflict region to review the qualitative content. The first reporter accepting the invitation, labeled reporter C, is tasked to assess the second party's perspective by integrating a few lines containing alternative explanations for the incident and/or by inserting comments on overly tendentious statements that should be changed or deleted. Like a Word document with tracked changes, all edits and comments made by reporter C are visibly tracked and become an integral part of the report's history log. Once reporter C has completed his/her review, s/he clicks 'submit.' Then, the report is locked and proceeds to the next stage.
4) Stage – marketplace: Finally, the report reaches an online marketplace. Here it can be purchased by a media outlet or any other interested party, e.g., CSOs, government authorities, or researchers. The media outlet owns the exclusive rights, i.e., it can edit the report according to its own standards as it sees fit for publication. In essence, this stage does not differ from standard news production processes, which collate and integrate (background) information provided by freelance reporters and/or secondary sources. In contrast to current practice, however, a media outlet is obliged to publish the original report, including possible edits and comments, as supplementary material in an online appendix to qualify its news piece as "based on certified information." Figure 2 summarizes the four stages of the basic design.
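The four stages can be summarized as a small state machine. This is an illustrative sketch of the flow described above; class and method names are our own and not a specification of the platform.

```python
from enum import Enum, auto

class Stage(Enum):
    DATA_ENTRY = auto()    # stage 1: reporter A uploads a raw report
    CROSS_CHECK = auto()   # stage 2: reporter B verifies quantitative data
    DE_BIASING = auto()    # stage 3: reporter C reviews qualitative content
    MARKETPLACE = auto()   # stage 4: certified report offered for purchase

class Report:
    """Minimal model of a report moving through the four review stages."""

    def __init__(self, content):
        self.content = content
        self.stage = Stage.DATA_ENTRY
        self.rejected = False  # R: set if reporter B rejects the data
        self.changes = 0       # C: edits/comments made by reporter C

    def submit(self):
        self.stage = Stage.CROSS_CHECK

    def cross_check(self, approve):
        if approve:
            self.stage = Stage.DE_BIASING
        else:  # rejected reports go back to stage 1 for revision
            self.rejected = True
            self.stage = Stage.DATA_ENTRY

    def review(self, n_changes):
        self.changes += n_changes  # tracked edits become part of the history log
        self.stage = Stage.MARKETPLACE

r = Report("raw report on an incident")
r.submit()
r.cross_check(approve=True)
r.review(n_changes=1)
print(r.stage.name, r.changes)  # MARKETPLACE 1
```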
Setting Incentives to “Tell the Truth”

After the basic architecture of the certification process has been defined, we now turn to the mechanism which should incentivize reporters to tell the truth. According to game theory, a mechanism reveals the "truth" if every person can achieve the best outcome for themselves just by being truthful, regardless of what the others do (e.g., Fudenberg & Tirole, 1991; Vazirani et al., 2007). Applied to crowdsourcing news, a truth-telling mechanism for reporters must set monetary incentives in a way that a reporter is always better off if s/he reports as truthfully as possible (under full anonymity conditions).
To integrate this logic, we first need to determine a default payment scheme. In this scheme, each reporter's payment is linked to a reference price, henceforth called the "Report Standard Price" (RSP). The RSP is a market-compatible price for reports acceptable to media outlets. It may, for example, follow a pre-defined pricing tableau depending on a report's length, timeliness, and available supplements, such as images and clips.
Once the certied report is purchased
by a media outlet, the application automat-
ically triggers secured payments: In the
default condition, reporter A receives the
RSP for submitting the initial raw report.
Reporter B receives a xed percentage of
the RSP, e.g., 5%, for his/her data cross-
check, and reporter C receives, e.g., 10%
of the RSP for his/her content review
and revision. In sum, a media outlet is
charged 115% of the RSP for the certi-
ed report. Figure 3 illustrates the default
payment scheme at every stage of the
review process.
Figure 2: Multi-stage review process in the new crowdsourcing platform
Crowdsourcing Facts in Conicts
Towards a Truth-Telling Mechanism to Counter Fake New
Incentives to tell the truth can now be incorporated through a bonus-and-penalty system, i.e., a performance-based payment function that adjusts the above-mentioned default payments.

Each adjustment, i.e., a corrected mistake, triggers a penalty or a bonus for the respective reporter during the review process. Mistakes result in penalties, i.e., they reduce the default payment of a reporter by a pre-defined amount. Spotting mistakes, in turn, is rewarded with bonuses, i.e., every spotted mistake increases a reporter's payment by a given amount. The underlying rationale is: the better the provided input, the higher the payment; correspondingly, media outlets will only pay the full price for reports that have proven to be robust upon initial submission. Figure 4 depicts an example containing specific bonus and penalty parameters to further illustrate the logic.
Figure 3: Basic payment scheme in the new platform. Reporter A (zone 1, geotagged) submits the raw report and receives 100% of the RSP; reporter B (zone 1, geotagged) approves or rejects the data and receives 5% of the RSP; reporter C (zone 2, geotagged) changes and/or comments on the content and receives 10% of the RSP; the media outlet purchases the certified report for 115% of the RSP.

Figure 4: Illustration of the payment scheme including the bonus-and-penalty system. With R equal to 1 if the report was rejected at stage 2 (0 otherwise) and C equal to the number of changes and comments at stage 3, the payments are: reporter A receives (100% - 4% * R - 1% * C) * RSP; reporter B receives (5% + 2% * R) * RSP; reporter C receives (10% + 0.5% * C) * RSP; and the media outlet pays (115% - 2% * R - 0.5% * C) * RSP.
In this example, bonuses amount to 2% of the RSP in case a report is rejected at stage 2 and 0.5% of the RSP for every content change that was made to the report at stage 3. Penalties, on the other hand, are twice as high as the bonuses, i.e., they amount to 4% in case a report was rejected at stage 2 and to 1% for every content edit. The asymmetry between penalties and bonuses guarantees that a certified report's total cost never exceeds a maximum ceiling – in our example, 115% of the RSP.
To illustrate these parameters' effect on payments, consider the following hypothetical scenario with an RSP of USD 100: Reporter A submits a brief report on a violent incident that was allegedly carried out on 31 August 2021, killing 15 people. The report is reviewed by reporter B from the same conflict zone. B witnessed the incident herself and can also confirm the estimated number of fatalities. She thus approves the data in the raw report (R=0), and the report moves to the next stage. At stage 3, the report is qualitatively reviewed by reporter C, who lives in the opposing party's area, conflict zone 2. According to his view, the initial report omitted valuable information. Therefore, he adds a short paragraph to explain the strategic setting, including the motivation behind the incident, e.g., embedding it as a tactical move of a more extensive, preventive operation. This addition is counted as one change (C=1). After that, he submits the report including tracked changes. It reaches the marketplace as a verified report (stage 4).

The certified report can now be purchased by a media agency or any other interested party for a total price of USD 114.50. The price is automatically determined by the RSP and the edits that were required to certify the report. In the example above, reporter C would receive a total of USD 10.50 for conducting the qualitative review (USD 10) and one edit (+USD 0.50). Reporter B would receive USD 5 for confirming the statistics and dates when cross-checking the quantitative part of the report. Reporter A would receive 'only' USD 99, as one change was needed at stage 3 to de-bias the raw report. The change triggers a penalty of USD 1, which is deducted from the default RSP of USD 100. Figure 5 provides an overview based on this hypothetical scenario.
Figure 5: Hypothetical example of a review process and corresponding payments

Reporter A (conflict zone 1): uploads a raw report on an alleged attack on 31/08/2021 killing 15 people. Payment: (100% - 4% * 0 - 1% * 1) * USD 100 = USD 99.
Reporter B (conflict zone 1): can confirm the date and the number of casualties and therefore approves the data (R=0). Payment: (5% + 2% * 0) * USD 100 = USD 5.
Reporter C (conflict zone 2): adds a short paragraph on the strategic setting in which the incident took place (C=1). Payment: (10% + 0.5% * 1) * USD 100 = USD 10.50.
Media outlet: purchases the certified report from the marketplace, revises and publishes it. Price: (115% - 2% * 0 - 0.5% * 1) * USD 100 = USD 114.50.
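The payment arithmetic above can be sketched in a few lines of Python. The percentage parameters (review fees of 5% and 10%, bonuses of 2% and 0.5%, penalties of 4% and 1%, and the 115% ceiling) are those of the hypothetical example, not final platform values:

```python
def payments(rsp: float, rejections: int, changes: int) -> dict:
    """Split the proceeds of a certified report, following the
    bonus-and-penalty scheme of the hypothetical example.

    rsp        -- Report Standard Price in USD
    rejections -- R, data points rejected at stage 2
    changes    -- C, content edits made at stage 3
    """
    reporter_a = (1.00 - 0.04 * rejections - 0.01 * changes) * rsp  # author
    reporter_b = (0.05 + 0.02 * rejections) * rsp  # quantitative cross-check
    reporter_c = (0.10 + 0.005 * changes) * rsp    # qualitative review
    price = (1.15 - 0.02 * rejections - 0.005 * changes) * rsp     # buyer
    return {"A": reporter_a, "B": reporter_b, "C": reporter_c, "price": price}

# The scenario above: no rejections, one content edit, RSP of USD 100.
example = payments(rsp=100, rejections=0, changes=1)
```

Note that with these parameters the three payouts always sum exactly to the buyer’s price: every penalty deducted from reporter A is split evenly between a reviewer bonus and a price reduction for the purchasing outlet.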
Crowdsourcing Facts in Conflicts
Towards a Truth-Telling Mechanism to Counter Fake News
The example illustrates how the truth-
telling mechanism is embedded in the
review process. The bonus-and-pen-
alty system induces incentives for each
reporter to tell the truth as accurately as
possible. The underlying rationales can
be summarized as follows:
For reporter A: ‘The more robust my initially submitted report, the higher my payment.’
For reporter B: ‘The better my cross-check, the higher my payment.’
For reporter C: ‘The better my review, the higher my payment.’
For the media outlet: ‘We only pay the full price if the report is fully robust’, i.e., if it passed all stages without any changes or comments.
Additional Features
After illustrating the new crowd-
sourcing platform’s basic design and
its embedded truth-telling mecha-
nism, a series of essential features and
potential add-ons need to be consid-
ered to ensure privacy, usability, and
objectivity. They are briefly explained
in the following:
Complaint system
If a rejection, change, or comment
appears to be unjustied, reporter A
should be allowed to file a request for
a reviewer quality check to the applica-
tion provider. If a review proves to be
inaccurate, the respective deduction is
transferred back to reporter A.
Automated re-review and suspension
Automated (manual or algorithmic)
re-reviews on a sample of rejec-
tions, changes, or comments should
be conducted on a regular basis
irrespective of any complaints filed by
reporter A. These re-reviews should
follow best practice in the verification of
user-generated content, including trian-
gulation (e.g., Silverman 2014). If a
review proves to be inaccurate, the
respective deduction has to be trans-
ferred back to reporter A.
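A minimal sketch of how such a sampling routine might look, assuming reviews are identified by simple string IDs; the 10% sampling rate is an invented placeholder:

```python
import random

def sample_for_rereview(review_ids, rate=0.10, seed=None):
    """Draw a random subset of reviews for an automated quality check,
    independent of any complaints filed by reporters."""
    if not review_ids:
        return []
    rng = random.Random(seed)  # fixed seed only for reproducible audits
    k = min(len(review_ids), max(1, round(rate * len(review_ids))))
    return rng.sample(review_ids, k)
```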
Reputation building
Every reporter who uploads reports,
reviews data and/or content should
interact via an avatar on the platform.
This will lead to users building up
reputation over time – like eBay or
Amazon. Besides purchasing via the
marketplace, media outlets can also
directly request a specific ‘avatar’ with
a proven track record to submit a report
on demand (like the logic applied by
the platform). This option is
mirroring the real-world interaction
between media outlets and credible
freelance reporters and, in addition,
allows matchmaking between multiple
parties simultaneously.
User categories
After a piloting phase per region,
users could be classified into ‘accredited
reporters,’ e.g., after reaching a
minimum activity level or a specific
registration, and normal ‘reporters,’ e.g.,
laypeople or citizen journalists. Profes-
sional journalists, on the other hand,
should be grouped in another dedicated
user category. The user categories could
be linked to different payment schemes
reecting the users reputation and
Issue No. (5) September 2021 / Safar 1443
Following research on peace-jour-
nalism and nudging, reporters can be
asked to respond to de-biasing questions
before being admitted to uploading or
revising a report. These and other inter-
ventions can be designed to intercept
emotional distress and anger, reducing
the impact of affect and subjectivity on a
report or review. In addition, users cate-
gorized as ‘accredited reporters’ may
be required to complete short online
courses on conflict-sensitive reporting
on a regular basis to maintain their
privileged user status.
Seniority rule
After the launching period, normal
‘reporters’ on the platform could be
restricted to only submit reports and
run cross-checks (stages 1 and 2).
Only ‘accredited reporters’ would be
admitted to all stages, i.e., reviewing
and revising qualitative content (stage
3). This would prevent users from
creating fake accounts or pursuing ‘hit
and run’ strategies.
User activity rule
Admission to review qualitative
content could also depend on user
activity. For example, a report submitted
by a power user, someone with more
than 500 submitted reports/reviews, may
only be reviewed by another power user.
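The seniority and user activity rules can be combined into a single admission check. The user model and field names below are hypothetical; the 500-report power-user threshold and the stage numbering follow the text:

```python
from dataclasses import dataclass

@dataclass
class User:
    category: str        # "reporter" or "accredited"
    contributions: int   # submitted reports plus completed reviews

POWER_USER_THRESHOLD = 500

def may_review(stage: int, author: User, reviewer: User) -> bool:
    """Seniority rule: stage 3 (qualitative review) is reserved for
    accredited reporters. Activity rule: a power user's report may
    only be reviewed by another power user."""
    if stage == 3 and reviewer.category != "accredited":
        return False
    if author.contributions > POWER_USER_THRESHOLD:
        return reviewer.contributions > POWER_USER_THRESHOLD
    return True
```

Under this check a layperson can still cross-check data at stage 2 but cannot revise qualitative content, which limits the payoff of fake accounts and ‘hit and run’ strategies.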
Geographical verification
In line with common practice in
crowdsourcing, submitted reports, rejec-
tions, or changes/comments need to be
geotagged within a pre-defined radius
(e.g., 10km) and time-stamped. With the
help of ltering, requests to complete
stages 2 and 3 will only be shown to
those users whose positions match the
location requirements.
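A sketch of this filter, assuming positions arrive as plain (latitude, longitude) pairs; the haversine formula approximates the great-circle distance, and the 10 km default mirrors the example radius:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two lat/lon points in km."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def matches_location(report_pos, user_pos, radius_km=10.0):
    """True if a user's geotag falls within the pre-defined radius of the
    reported incident, i.e., the user qualifies for stages 2 and 3."""
    return distance_km(*report_pos, *user_pos) <= radius_km
```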
Privacy protection
High-end data security, storage,
and encryption methods should be
employed to guarantee personal safety
and, if required, users’ full anonymity.
Payments should be organized via
blockchain technology. Data storage
facilities and legal obligations should
preferably fall under a country’s juris-
diction with high standards in terms of
data protection.
Report standard price
The general Report Standard Prices
(RSPs) should be a function of length
and supporting material, such as, for
example, additional documents, clips
or photographs. The RSPs should
match normal market prices but remain
well below the costs for reports from
secondary sources. For professional
journalists, on-top charges are possible
and can be organized through a custom-
ized charging procedure.
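One possible shape of such a pricing function. All rates below are invented placeholders that would need to be calibrated against local market prices during piloting:

```python
def report_standard_price(words: int, attachments: int,
                          professional: bool = False) -> float:
    """Illustrative RSP: a base rate growing with report length, plus a
    surcharge per piece of supporting material (documents, clips, photos).
    Professional journalists receive an on-top charge."""
    base = 50.0 + 0.05 * words     # placeholder: USD 50 plus 5 cents per word
    material = 10.0 * attachments  # placeholder: USD 10 per attachment
    price = base + material
    if professional:
        price *= 1.5               # placeholder on-top factor
    return price
```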
Advanced crowdsourcing
Upon request of a media outlet,
the reviews at stages 2 and 3 could
be rolled out to an extended sample,
producing more than one review per
stage (against additional charges). The
maximum extent of data robustness
may be reached, for example, after
collating at least 150 observations
from users.
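Once several independent estimates of the same quantity (say, the number of fatalities) are collected, they can be combined with a robust statistic such as the median. The 150-observation robustness target follows the text; everything else is illustrative:

```python
from statistics import median

# Observations after which data robustness is treated as maximal (see text).
ROBUSTNESS_TARGET = 150

def aggregate(estimates):
    """Combine independent numeric estimates into a single crowd figure."""
    return {
        "value": median(estimates),  # outlier-robust point estimate
        "n": len(estimates),
        "robust": len(estimates) >= ROBUSTNESS_TARGET,
    }
```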
Concluding Remarks
The unprecedented rise of mis- and
disinformation during the last years
makes it increasingly difficult for media
outlets and relief organizations to find
reliable, timely, and trustworthy information
on armed conflicts and crises.
Fake news and biased reporting impede
the formation of balanced, multifac-
eted opinions and thus undermine
humanitarian and diplomatic efforts
to improve the living conditions of
affected societies.
In the last years, researchers and
practitioners have started to explore the
potential of crowdsourcing to fight the
spread of unverified information worldwide.
The basic idea is to harness the
crowd’s wisdom to gather (more) reli-
able and objective information on crises.
Despite the growing need and interest
in these approaches, previous attempts
to crowdsource data in conflict environments
have been confronted with a
series of challenges.
Considering these challenges,
we presented the design of a novel
crowdsourcing platform to harness the
“wisdom of the crowd” and increase the
reliability and objectivity of conflict-related
news. Our proposal incorporates
a multi-stage review process that can
produce certified news meeting well-defined
criteria, in contrast to existing
platforms. An integrated bonus-and-
penalty system sets incentives for citi-
zen-reporters to disclose and verify
privately held information as accu-
rately as possible. As another novelty,
our platform directly connects both
sides of the news market, reporters
and media outlets, via an integrated
marketplace. By enabling media outlets
to publish original news based on verified
information gathered from primary
sources, the platform is anticipated to
become an attractive sourcing tool to
media outlets that otherwise would
need to purchase news from secondary
and more expensive sources. We believe
that a platform with the key design
features outlined in this paper can help
contribute to the production and dissemination
of more reliable and comprehensive
information on topics of public and
international interest around
the world.
However, we would like to address
a couple of limitations in our concluding
remarks: First, the design outlined
in this paper is only a draft, a first
blueprint. Depending on whether the
design should be implemented as an
add-on to existing platforms or as a
standalone solution, the technical
specication and the minimum viable
product are still to be developed,
especially regarding secure payment
methods and privacy aspects. Second,
the payment schedules presented in this
paper serve illustrative purposes
only. To avoid a crowding out of
intrinsic motivations, payments and
service fees would need to be revis-
ited during the development stage,
pretested, and, if required, adjusted.
Third, our design proposal does not
yet address usability aspects, a critical
prerequisite to generate traffic in online
communities and increase uptake
rates (Shiel, 2013). In a similar vein,
the launch of a new platform would
need to be accompanied by promotion
activities to create enough awareness
among potential reporters and media
outlets in regions of interest. Fourth,
even a fully operational crowdsourcing
platform that integrates all major features
presented in this paper is unlikely to
replace manual investigations and verifications,
especially in highly sensitive cases.
In light of these limitations, we consider
the proposed platform as a starting point
for further discussions on incentives and
fees, usability, security, and data privacy
and invite interested researchers, practi-
tioners and journalists alike to join the
debate and contribute to the development
of a new approach to crowdsource facts in
conict environments.
1- The authors are indebted to Dr. Julia Stauf, Dr. Anja Reitemeyer, Muhannad Al-Saho, Dr. Maha
Bashri and Julia Hafenrichter for providing valuable support and comments to improve earlier ver-
sions of this paper. The authors owe thanks to the reviewers as well as seminar participants at Passau
University for their comments and suggestions.
2- Managing Director at Behavia – Behavioral Public Policy and Economics GmbH, Germany.
3- Affiliate and Adjunct Assistant Professor (Lehrbeauftragter), Passau University, Germany.
4- Senior Resident Research Fellow, King Faisal Center for Research & Islamic Studies, Saudi Arabia.
5- Munro (2013) points out that the term crowdsourcing has multiple meanings in the context of hu-
manitarian work. It is frequently used as “sourcing information from the crowd”, also called citi-
zen-reporting (Munro 2013: 214).
6- The general concept of citizen journalism has been subject to intensive debate in the last years.
Content created by citizens is argued to lack objectivity and quality, falling short of meeting the
ethical standards associated with the journalist profession and possibly contributing to the spread of
mis- and disinformation (e.g., Maher, 2006, Mutsvairo and Salgado, 2020). As we focus on citizen
reporting in conflict environments, i.e. on specific situations in which professional journalists lack
access to primary sources, our approach is based on the idea that citizen reporting complements
professional journalism practices and thereby raises the quality and objectivity of news on current conflicts.
7- An alternative and widely used method to verify reports is triangulation, i.e. comparing submissions
with reports on other platforms, e.g. Twitter or YouTube.
8- See Hunt and Specht (2019) for a review of 51 significant deployments of crowdsourcing initiatives
in response to humanitarian crises between 2010 and 2016.
9-’s website is currently offline (last accessed on 29/03/2021).
10- The suggested review process is broadly in line with best practice in the field (e.g., Silverman 2014).
Yet, its basic design does not include triangulation with other sources.
11- The suggested review process is broadly in line with best practice in the field (e.g., Silverman 2014).
Yet, its basic design does not include triangulation with other sources.
Ariely, D., Au, W., Bender, R., Budescu, D., Dietz, C., Gu, H., and G. Zauberman (2000). The effects of
averaging subjective probability estimates between and within judges. Journal of Experimental Psychology:
Applied 6: 130–147.
Baliki, G. (2017). Measuring Media Bias in Conflict: Evidence from Syria. In Empirical Advances in the
Measurement and Analysis of Violent Conflict. Berlin: Humboldt-Universität zu Berlin.
Bennett, S., Benjamin, A., Mistry, P., and M. Steyvers (2018). Making a crowd wiser: Benefits of individual
metacognitive control on crowd performance. Computational Brain & Behavior 1: 90–99.
Bunce, M. (2019). Humanitarian Communication in a Post-Truth World. Journal of Humanitarian Affairs
1(1): 49–55.
Butler, D. (2013). Crowdsourcing goes mainstream in typhoon response. Nature News.
Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University
Press, Princeton, New Jersey.
Fiechter, J., and N. Kornell (2021). How the wisdom of crowds, and of the crowd within, are affected by
expertise. Cognitive Research.
Fudenberg, D., and J. Tirole (1991). Game Theory. Cambridge, Mass. and London: MIT Press.
Gupta, A., Lamba, H., Kumaraguru, P., and Joshi, A. (2013). Faking sandy: Characterizing and identifying
fake images on twitter during hurricane sandy. Proceedings of the 22nd International Conference on
World Wide Web: 729-736.
Hannides, T. (2015). Humanitarian Broadcasting in Emergencies: A Synthesis of Evaluation Findings,
Research Report. London: BBC Media Action.
Hunt, A., and D. Specht (2019). Crowdsourced mapping in crisis zones: collaboration, organization and
impact. Journal of International Humanitarian Action, 4(1).
Igoe, M. (2017). Q&A: Fighting “Fake News” to Aid Development. Devex.
fighting-fake-news-to-aiddevelopment-90585 (last accessed on 01/03/2021).
Jin, B., Park, J., and H. Kim (2010). What Makes Online Community Members Commit? A Social Exchange
Perspective. Behaviour & Information Technology 29(6): 587-599.
Kahan, D., Peters, E., Dawson, E., and P. Slovic (2017). Motivated numeracy and enlightened self-govern-
ment. Behavioural Public Policy 1: 54–86.
Kahl, A., McConnell, C., and W. Tsuma (2012). Crowdsourcing as a Tool in Conflict Prevention. Conflict
Trends 2012(1): 27–34.
Kozyreva, A., Lewandowsky, S., and R. Hertwig (2020). Citizens Versus the Internet: Confronting Digital
Challenges With Cognitive Tools. Psychological Science in the Public Interest 21(3): 103–156.
Lee, M., and I. Danileiko (2014). Using cognitive models to combine probability estimates. Judgment and
Decision Making 9: 259–273.
Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., and R. Hertwig (2020). How behavioural sciences
can promote truth, autonomy and democratic discourse online. Nature Human Behaviour: 1-8.
Martin-Shields, C. (2013). The Technologist’s Dilemma: Ethical Challenges of Using Crowdsourcing
Technology in Conflict and Disaster-Affected Regions. Georgetown Journal of International Affairs 14(2):
Meier, P. (2015). How to Become a Digital Sherlock Holmes and Support Relief Efforts. iRevolutions Blog. (last accessed on 21/03/2021).
Mellers, B., Ungar, L., Baron, J., Ramos, J., Gurcay, B., Fincher, K., and P. Tetlock (2014). Psychological
strategies for winning a geopolitical forecasting tournament. Psychological Science 25: 1106–1115.
Meyer, T., Marsden, C., and I. Brown (2020). Regulating Internet Content with Technology: Analysis of
Policy Initiatives Relevant to Illegal Content and Disinformation Online in the European Union.
In Disinformation and Digital Media as a Challenge for Democracy 6. Edited by Terzis G., Kloza D.,
Kużelewska, E., and D. Trottier. Intersentia: Cambridge: 309-326.
Maher, V. (2006). Citizen Journalism is Dead. Media in Transition Blog. School of Journalism & Media
Studies, Rhodes University.
Munro, R. (2013). Crowdsourcing and the crisis-affected community. Lessons learned and looking forward
from Mission 4636. Information Retrieval 16: 210–266.
Mutsvairo B., and S. Salgado (2020). Is citizen journalism dead? An examination of recent developments in
the eld. Journalism. November 2020.
OSGeoLive (2009). Ushahidi. Incident Timeline & Mapping.
view/ushahidi_overview.html (last accessed on 27/02/2021).
Pennycook, G., McPhetres, J., Zhang, Y., and D. Rand (2020). Fighting COVID-19 misinformation on
social media: Experimental evidence for a scalable accuracy nudge intervention. Psychological Science.
Rigterink, A., and G. Baliki (2019). The wisdom of seeking crowd wisdom. Reflections on the ethics of
using crowdsourcing and crowdseeding to collect data in conict zones. Discussion paper.
Sambrook, R. (2010). Are Foreign Correspondents Redundant? Oxford: Reuters.
Schubert, M., Bashri, M., Jayet, C., Vautrin, E., and J. Stauf (2020). Shared Truth. V20 Global Values Policy
Brief: 6-12.
Scott, M., Wright, K. and M. Bunce (2018). The State of Humanitarian Journalism. Norwich: University of
East Anglia.
Semakula, D., Nsangi, A., Oxman, A., Oxman, M., Austvoll-Dahlgren, A., Rosenbaum, S., Morelli, A.,
Glenton, C., Lewin, S., Kaseje, M., Chalmers, I., Fretheim, A., Kristoffersen, D., and N. Sewankambo
(2017). Effects of the Informed Health Choices Podcast on the Ability of Parents of Primary School
Children in Uganda to Assess Claims about Treatment Effects: A Randomised Controlled Trial. Lancet
390(10092): 389–398.
Shiel, A. (2013). Conflict Crowdsourcing: Harnessing the power of crowdsourcing for organizations
working in conflict and the potential for a crowdsourced portal for conflict-related information. McGill
Silverman, Craig, ed. (2014). Verification Handbook: An Ultimate Guideline on Digital Age Sourcing for
Emergency Coverage. Maastricht: European Journalism Centre.
Standby Task Force (2011). Libya Crisis Map Deployment 2011 Report. https://www.standbytaskforce.
org/2011/09/01/libya-crisis-map-report/ (last accessed on 27/02/2021).
Statista (2021). Number of social network users worldwide from 2017 to 2025.
statistics/278414/number-of-worldwide-social-network-users/ (last accessed on 01/03/2021).
Steyvers, M., Lee, M., Miller, B., and P. Hemmer (2009). The wisdom of crowds in the recollection of order
information. Proceedings of the 22nd International Conference on Neural Information Processing
Systems: 1785–1793.
Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective
wisdom shapes business, economies, societies, and nations. New York, NY: Random House.
Tran, T., Valecha, R., Rad, P., and H. Rao (2020). An Investigation of Misinformation Harms Related to
Social Media during Two Humanitarian Crises. Information Systems Frontiers.
Turner, B., Steyvers, M., Merkle, E. C., Budescu, D., and T. Wallsten (2014). Forecast aggregation via
recalibration. Machine Learning 95: 261–289.
UN Security Council (2016). Interim Report of the Panel of Experts on South Sudan Established Pursuant to
Security Council Resolution 2206.
U.S. Homeland Security (2018). Countering False Information on Social Media in Disasters and
Emergencies. Social Media Working Group for Emergency Services and Disaster Management.
Ushahidi (2018). Impact report: 10 years of innovation. 10 years of global impact.
Van der Windt, P., and M. Humphreys (2016). Crowdseeding in Eastern Congo: Using Cell Phones to
Collect Conflict Events Data in Real Time. Journal of Conflict Resolution 60(4): 748–781.
Vazirani, V., Nisan, N., Roughgarden, T., and E. Tardos (2007). Algorithmic Game Theory. Cambridge, UK:
Cambridge University Press.
Vosoughi, S., Roy, D. and Aral, S. (2018). The Spread of True and False News Online. Science 359(6380):
Yi, S., Steyvers, M., Lee, M., and M. Dry (2012). The wisdom of the crowd in combinatorial problems.
Cognitive Science 36: 452–470.