ONLINE SILENCE: WHY DO PEOPLE NOT CHALLENGE OTHERS WHEN POSTING MISINFORMATION?
Selin Gurgun, Emily Arden-Close and Keith Phalp
Faculty of Science and Technology, Bournemouth University, Poole, UK, and
Raian Ali
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
Abstract
Purpose: There is a scarcity of research studies on why people remain inactive when
encountering and recognising misinformation online. The main aim of this paper is to provide
a groundwork for future research into why users do not challenge misinformation on digital
platforms by generating hypotheses through a synthesis of pertinent literature, including
organisational behaviour, communication, human-computer interaction (HCI), psychology
and education.
Design/methodology/approach: Given the lack of directly related literature, this paper
synthesised findings from adjacent fields where they might apply, as the
tendency to withhold opinions or feedback is a well-documented practice in offline
interaction.
Findings: Following the analysis of relevant literature, the potential reasons for online
silence towards misinformation can be divided into six categories: self-oriented, relationship-
oriented, others-oriented, content-oriented, individual characteristics and technical factors.
Originality: Although corrections coming from peers can effectively combat misinformation,
several studies showed that people in cyberspace do not take such action. To the best of the
authors’ knowledge, there has been virtually no research investigating
why people refrain from challenging others who post misinformation online. Thus, this paper
attempts to address this gap and identify reasons in adjacent domains. The reasons provide
a starting point for researching interventions to reduce reluctance and abstinence regarding
the challenge of misinformation. The findings can be beneficial beyond the area of
challenging misinformation and are extensible to other types of content and communication
that people are hesitant to discuss and challenge, such as online injustice, prejudice and
hate speech.
Keywords—Misinformation, Challenging misinformation, Digital platforms, Online
silence
1. INTRODUCTION
Many users on digital platforms may not see themselves as key players in mitigating
misinformation, but many may contribute to its dissemination through their silence. One of
the main challenges in combatting misinformation is that when users notice it online, most
ambivalently ignore it and do not respond to others to challenge it (Chadwick and Vaccari,
2019; Vicol, 2020). We call this behaviour “Online Silence”. One might argue that, in not
challenging such misinformation, users are complicit in its spread by being silent. In this
context, silence is construed as a type of self-censorship, which refers to the choice not to
speak out despite the belief that something should be said (Pinder and Harlos, 2001). The
way social media limits the exposure of diverse opinions and promotes a common narrative
with like-minded people, known as an “echo chamber” (Cinelli et al., 2021), is also argued to
amplify users’ cognitive biases, such as confirmation bias, which refers to the tendency to
search or remember information that confirms or supports one’s previous opinions
(Westerwick et al., 2017; Zhao et al., 2020). Thus, those who agree with the mindset of the
false information, knowing it is indeed false, and see no harm in spreading it, may not only
remain silent but also help to disseminate it by showing their support for the content through
“liking” or commenting positively on it. Studying this group of users and their motives is
beyond our scope as we only concentrate on self-censorship, which refers to remaining
silent and suppressing the will to respond when something should be said.
With the widespread use of the internet and Social Networking Sites (SNS) for information
seeking and users becoming content creators and broadcasters on such sites, information
starts to spread widely even before its accuracy can be verified. Although many terms such
as disinformation, fake news, and rumours are used interchangeably to refer to incorrect or
misleading information, throughout this paper, the term “misinformation” will be used as an
umbrella term, as it refers widely to any type of false information, regardless of the intention
or format.
On SNS, fake news spreads six times faster than true news (Vosoughi et al., 2018) and
misinformation is engaged with more than factual posts (Edelson et al., 2021). A recent
study also showed that using words related to conspiracy theories in a tweet increases its
chances of being shared (Visentin et al., 2021). In such an environment, it becomes even
more important to combat the spread of misinformation. Even though prior research showed
that approximately 5% of people’s news consumption is comprised of misinformation (Acerbi
et al., 2022), the effects of information operations should not be underestimated as the
spectrum of ramifications and potential problems is quite broad. This spectrum ranges from
affecting consumers’ attitudes toward brand image (Visentin et al., 2019; Borges-Tiago et al.,
2020) and consumers’ purchase intentions (Mishra and Samu, 2021) to altering people’s
attitudes toward issues such as climate change (Lutzke et al., 2019) or voting behaviour
(Cantarella et al., 2023). Social media companies took proactive measures to combat
misinformation after the 2016 United States election prompted concerns over misinformation
online (Pourghomi et al., 2017). However, much as companies and scholars have raised
concerns about the issue, the spread and severe impact of misinformation are still being
seen in critical domains, e.g. public health information related to the COVID-19 pandemic
(Cinelli et al., 2020; Galvão, 2021) and political and humanitarian domains, e.g. the
Ukraine–Russia war (Park et al., 2022). This points to a need to supplement these measures with
new approaches and techniques employing individuals as not only reporters but also
activists in challenging misinformation.
“Social corrections”, which refer to correction attempts made by social sources, i.e. peers
and contacts who are usually a primary source of information, have been identified as a
possible intervention (Bode and Vraga, 2018, 2021b; Walter and Murphy, 2018; Walter et al., 2021).
Such corrections reduce the spread of misinformation by influencing other users who observe them
(Vraga and Bode, 2018), and they are even more effective when users provide credible sources to
refute the information provided (Vraga and Bode, 2017). Therefore, it is important that users
who post misinformation are informed about this when possible. However, although
evidence shows that users’ corrections are as effective as algorithmic corrections (Bode and
Vraga, 2018), people can be hesitant to take any action to correct misinformation (Chadwick
and Vaccari, 2019; Tandoc et al., 2020; Tully et al., 2020).
Research on social corrections as an intervention strategy to combat misinformation has
not investigated why people refrain from correcting misinformation, or why they do not
feel the need to make that correction in the first place. Existing research has mainly focused
on strategies to motivate people to challenge misinformation rather than the barriers that
prevent users from doing so. Evidence suggests that exposure to norms (i.e. whether
individuals believe others typically correct misinformation) (Koo et al., 2021), the belief that loved
ones would want them to correct it (Xiao, 2022), people’s perception of the severity of the
influence of misinformation on others (Sun et al., 2022) and social identity threat (i.e. when
misinformation threatens a person’s social status in a group) (Cohen et al., 2020) seem to be
effective in motivating people to correct misinformation. Factors such as interpersonal
relationships with the poster, where, for example, participants were more likely to correct
friends or family members (Tandoc et al., 2020), and the format of the content, where, for
example, individuals were less likely to correct content presented in meme format (Lyons,
2017), also seem to affect users’ intention to challenge others who post misinformation.
In addition, Bode and Vraga (2021a) showed that social factors such as age, education and
reliance on mainstream news and social media also affect willingness to challenge
misinformation. A qualitative study that interviewed participants in Vietnam revealed that the
degree of closeness with the sharer, the age of the sharer and also whether the environment
in which correction takes place is public or private, impact the decision to correct
misinformation (Rohman, 2021).
Previous studies exploring the barriers that prevent or discourage people from challenging
misinformation were limited in scope. Following a series of interviews in the United States
with physicians and nurses, the obstacles that they face when they correct health
misinformation on social media were identified in three categories: intrapersonal (e.g., the
lack of time and the perception of limited positive outcomes), interpersonal (e.g., fear of
being harassed and bullied) and institutional (e.g., a lack of institutional support and social
media training) (Bautista et al., 2021). Another recent study based on interviews with 102
people in the UK regarding COVID vaccines also found that the social norm of conflict
avoidance affects people’s responses to vaccine misinformation (Chadwick et al., 2022).
While prior studies present a broad range of factors that motivate people to challenge
misinformation, it is also essential to identify the barriers that prevent them from doing so
and, consequently, develop interventions to eliminate those barriers as a preliminary step.
Offering motivational strategies does not necessarily eliminate people’s barriers to
challenging misinformation. For example, disclaimers from social media platforms
(Colliander, 2019) or source ratings, as suggested in (Kim et al., 2019), do not necessarily
serve to overcome barriers such as fear of receiving a hostile response from the one who
shared misinformation or fear of harming the relationship with them.
The findings of this paper pave the way for future research to understand the barriers that
keep users silent when they see misinformation. Furthermore, exploring
these factors that have been overlooked previously might be a starting point for enhancing
the current design of digital platforms. The persuasive systems design (PSD) model (Oinas-
Kukkonen and Harjumaa, 2009) has been used in various domains to promote behaviour
change, such as health (Orji and Moffatt, 2018), fitness (Oyibo and Vassileva, 2019) and e-
learning (Widyasari et al., 2019). It provides principles and techniques to design socio-
technical solutions that can influence attitudes and behaviour by prompting cognitive,
behavioural, psycho-social and other psychological processes. PSD strategies are extensive
and based on well-established theories of behaviour change. As the model has already been
used to change online behaviour, e.g., gaming (Alrobai et al., 2016; Adib et al., 2021) and
cyber security (Misra et al., 2017), it can also be a helpful reference model in designing for
motivating users to challenge misinformation. The purpose of this paper is twofold. The first
is to identify reasons for not challenging misinformation and to pave the way for future
research on the topic, and the second is to propose persuasive socio-technical interventions
to motivate
users to speak up when they encounter misinformation on SNS.
In this research, the term “challenging” rather than “correcting” is used for two reasons. First,
“correcting” is an absolute statement that presumes the person doing the correction is
accurate; however, the person who intends to correct may not actually be correct. Second,
this paper addresses not only corrections but also disagreements or disputes over the
content. In other words, the term “challenging” serves a broader function than simply
correcting.
Taken together, this work is a starting point of a broader study that aims to contribute to
altering the trend of seeing misinformation but not questioning it. This research does not
intend to be a systematic review, scoping review, or narrative review as, to our knowledge,
there is no literature primarily focused on this topic. This paper aims to synthesise pertinent
literature on related topics and provide potential reasons regarding users’ silence towards
misinformation online. This paper is organised as follows. Section 2 presents a literature
review and an overview of challenging misinformation. Section 3 provides insights into
possible reasons people are reluctant to challenge misinformation. We conclude and present
possible solutions and future work directions in Section 4.
2. BACKGROUND AND MOTIVATION
As many as 53% of US adults obtain news from social media (Shearer and Mitchell, 2021).
Besides being a fertile ground for misinformation, social media also offers opportunities to
mitigate the problem (Anderson, 2018; Djordjevic, 2020). In addition to algorithmic
approaches or machine learning-based solutions, individuals’ active participation in
conversations to challenge misinformation can help reduce misinformation (Bode and Vraga,
2018; Margolin et al., 2018).
Data from several studies suggests that correcting misinformation is not common on social
media. Almost 80% of social media users in the UK have not told anyone who shared false
news on social media that the news they shared was false or exaggerated (Chadwick and
Vaccari, 2019). Another UK survey found that although 58% of respondents reported having
encountered content that they thought was false, only 21% said they did something to correct
it (Vicol, 2020). Recent research in the US regarding the correction of misinformation about
COVID-19 on social media revealed similar findings: of the 56.6% who reported that
they saw misinformation, only 35.1% said they corrected someone (Bode and Vraga, 2021c).
Similarly, in Singapore, 73% of social media users dismiss fake news posts on social media
without taking further action (Tandoc et al., 2020). Some studies have documented that
actively interacting with corrections when exposed to unconfirmed claims is rare (Zollo et al.,
2017), SNS users are not consistently motivated to correct misinformation publicly (Cohen et
al., 2020), and explicit corrections of other users are rare in the online environment (Arif et al.,
2017).
Reporting misinformation is one of the techniques provided by social media, enabling
individuals to mark a post as false anonymously. However, much as reporting helps diminish
the problem, it requires several steps and does not allow users to express their opinions;
thus, it does not help to generate a constructive and meaningful dialogue. Such dialogue
may also have the extra benefit of altering the beliefs and enhancing the critical literacy of the
social sources posting or sharing misinformation and their audience, which can, in turn, foster
long-term behaviour change. Active engagement with false posts to challenge them by
deliberation, argumentation or questioning is crucial to decreasing misinformation
dissemination and to cultivating a diverse environment, as it increases exposure to distinct ideas. Individuals
modify their beliefs about misinformation after seeing another user being corrected on social
media (Vraga and Bode, 2017). Users are also less likely to spread rumours when they feel
they could be confronted with a counterargument, criticism or warning (Tanaka et al., 2013;
Ozturk et al., 2015) and are more likely to comment critically on posts when they are exposed
to critical comments from other users (Colliander, 2019).
3. POTENTIAL REASONS FOR AVOIDING CHALLENGING MISINFORMATION ONLINE
The online sphere has been construed as fundamentally different from offline spaces, offering
an environment where individuals are less restricted from expressing ideas. Some believe
that the risks of expressing opinions and sharing content online are lower than doing the
same offline (Papacharissi, 2002; Ho and McLeod, 2008; Luarn and Hsieh, 2014). However,
according to social information processing theory (Walther, 1992), although interpersonal
development requires more time in a computer-mediated environment than face-to-face
(FtF), users may develop similar levels of interpersonal relations. This theory suggests that,
regardless of the medium, people are driven to form impressions and build connections and
that language and symbols in computer-mediated communication (CMC) are as important as
nonverbal cues in FtF communication. Based on these findings, the constraints people have
when expressing contradictory opinions or challenging people in an online environment
might not differ from those in person.
In FtF communication, withholding opinions, abstaining from participating in discussions or
remaining silent even though there is an issue that needs intervention may all occur.
Refraining from providing opinions occurs in various environments, such as enterprise
environments where employees withhold information purposefully (Morrison and Milliken,
2000; Dyne et al., 2003; Milliken et al., 2003) or in classrooms, where students keep silent
during discussions (Fassinger, 1995; Jaworski and Sachdev, 1998; Rocca, 2010).
The silence of employees in organisations is considered to be driven by several motivations
and should not be taken as a sign of acceptance. According to Pinder and Harlos (2001) and
Dyne et al. (2003), there are three types of silence based on employee motives:
acquiescent, defensive and prosocial. Acquiescent silence is passive behaviour and is
motivated by a lack of desire to speak up. An employee could withhold their ideas due to the
belief that speaking up will not change the situation or that they are unable to make a
difference. Defensive silence is described as proactive behaviour and is motivated by the
intention of protecting oneself. An employee could remain silent due to fear of the
consequences of expressing ideas, such as getting fired or demoted. Finally, prosocial
silence is defined as withholding ideas based on positive intentions for others or the
organisation. An employee could be reticent due to a desire to protect others from
embarrassment or trouble.
In a qualitative study investigating why employees remain silent (Milliken et al., 2003), fear of
being viewed or labelled negatively and damaging valued relationships were the most
frequently mentioned reasons. Even in situations where silence might have devastating
consequences, people might still choose not to speak up. Bienefeld and Grote (2012)
revealed that although speaking up is critical for flight safety, aircrew members are reluctant
to do so because of the adverse outcomes of speaking up. Their most common reason for
not speaking was their desire to maintain a good relationship with the team and not lose the
other crew members’ acceptance and trust. In addition, captains were afraid of embarrassing
first officers, and first officers were concerned that captains would view them as
troublemakers if they contradicted captains. Reasons that hinder employees from speaking
up align closely with the educational psychology literature investigating reasons for student
participation in the classroom. Barriers to participating in class range from negative outcome
expectations or evaluation apprehension to fear of appearing unintelligent or inadequate to
one’s peers or instructors (Fassinger, 1995; Rocca, 2010). Logistics such as class size,
seating arrangement, mandatory participation and the instructor’s influence also affect
students’ participation (Rocca, 2010). Taken together, these studies provide important
insights into why people refrain from entering conversations, speaking up, or questioning in
offline environments.
In CMC, users also choose to be silent. They refrain from discussing their ideas (Hampton et
al., 2014), posting content about political and social issues (McClain, 2021), commenting on
questionable news (Stroud et al., 2016) and correcting misinformation (Chadwick and
Vaccari, 2019; Tandoc et al., 2020). The concept of silence or non-participation in the online
environment is conceptualised as lurking or passive SNS use, which refers to passively
viewing and not posting or participating in an online community (Nonnecke and Preece,
2001). In trying to understand the factors affecting lurking behaviour, Amichai-Hamburger et
al. (2016) proposed a model with three main reasons for remaining passive: individual
differences (need for gratification, personality dispositions, time available and self-efficacy),
social-group processes (socialisation, type of community, social loafing, responses to
delurking and quality of the response) and technological setting (technical design flaws and
the privacy and safety of the group). However, the concept of lurking or passive SNS use
may not provide a comprehensive explanation of self-silencing behaviour in the online
environment, as self-silencing might be due to reasons other than just passively browsing or
choosing to observe over participating.
There is a lack of evidence on what prevents people from challenging online misinformation.
Such active behaviour can effectively complement existing technical and socio-technical
solutions, such as those based on A.I. and natural language processing (NLP) (de Oliveira et
al., 2021), the analysis of the profile of media sources (Nakov, 2020) and crowdsourcing that
enables the reporting of misinformation (Kim et al., 2018). The active role of users can also
combat misinformation beyond the public online forums and cover closed platforms such as
messaging groups and possibly cover languages and dialects that current A.I. and NLP
solutions do not cover. We first scanned the literature to identify papers relevant to our
aim of identifying reasons for not challenging misinformation. We used combinations of
keywords to search for articles which contained them in their title, abstract or keywords list.
The search sentence we used was [factors OR reasons OR determinants OR barriers] AND
[challeng* OR question* OR correct* OR counteract* OR debunk* OR refut*] AND
[misinformation OR fake news OR disinformation OR rumour OR rumor OR false
information]. Our search yielded only two relevant papers, even after manually searching the
proceedings of known conferences in the area covering research in media, A.I. and
computational linguistics. After that, we extended our search to include papers that studied
why people remain silent when they see issues, whether misinformation or opinions, in other
environments such as the workplace or classroom. These reasons can provide a starting
point to understanding online silence in the context of online misinformation. Our objective
was not to systematically determine which and how many articles provided a relevant result
but to identify a theory-informed set of reasons for online silence by drawing a parallel with
adjacent behaviours. We stopped the search when reaching a degree of saturation, i.e.
finding the same identified reasons when reviewing new papers. To enhance presentation,
we grouped the elicited reasons, based on their similarity, into six categories: self-oriented,
relationship-oriented, others-oriented, content-oriented, individual characteristics and
technical factors (see Figure 1).
Figure 1. Potential reasons why people do not challenge misinformation
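To make the search strategy above concrete, the following is a minimal sketch (in Python, with invented names; not the authors' actual tooling) of how the boolean search sentence could be assembled programmatically before being pasted into a bibliographic database's advanced-search field. Multi-word terms such as fake news may need quoting depending on the database.

```python
# Minimal sketch: assemble the paper's boolean search sentence.
# Variable names are illustrative; this is not the authors' actual tooling.

FACTOR_TERMS = ["factors", "reasons", "determinants", "barriers"]
ACTION_TERMS = ["challeng*", "question*", "correct*",
                "counteract*", "debunk*", "refut*"]
TOPIC_TERMS = ["misinformation", "fake news", "disinformation",
               "rumour", "rumor", "false information"]

def or_group(terms):
    """Join a list of terms into a parenthesised OR clause."""
    return "(" + " OR ".join(terms) + ")"

# Combine the three OR groups with AND, mirroring the search sentence above.
query = " AND ".join(or_group(group) for group in
                     (FACTOR_TERMS, ACTION_TERMS, TOPIC_TERMS))
print(query)
# (factors OR reasons OR determinants OR barriers) AND (challeng* OR ...)
```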
3.1. Self-oriented reasons
3.1.1. Fear of being attacked
Anonymity and lack of visual cues in CMC may lead to an online disinhibition effect,
whereby users act with less restraint in cyberspace than they would in real life, loosening
social norms and restrictions (Suler, 2004). Internet users who are aware that
cyberspace enables and provides ample opportunities for hostile communication may be
reluctant to engage in challenging misinformation due to fear of being attacked or becoming
the victim of cyberbullying, which is deliberate, repeated hostile behaviour to harm others
using information and communication technologies (Slonje et al., 2013).
The negative consequences of cyberbullying are known to be intense (Barlett, 2015) and
include depression (Patchin and Hinduja, 2006) or emotional distress (Cao et al., 2020).
Therefore, fear of cyber aggression may thwart users from expressing their deviant opinions.
For instance, college students and young adults avoid expressing their opinions regarding
politics in the online environment because of online outrage (Vraga et al., 2015; Powers et
al., 2019). Evidence also suggests that users exposed to cyberbullying tend to decrease or
abandon their usage of SNS (Cao et al., 2020; Urbaniak et al., 2022).
The fear of being attacked might emanate from the polarisation of social media. Social
media enable an environment encouraging homophily, where individuals with the same
beliefs and opinions get together and become homogeneous (Cinelli et al., 2021). While
convenient, algorithms showing users customised content based on their interests and views
facilitate further polarisation. It is therefore likely that users who are aware that radicalisation
and extremism are prevalent in the online environment are more prone to keeping silent. For
instance, 32% of users who never or rarely share content about political or social issues
cited the fear of being attacked as the reason for not posting (McClain, 2021). These findings
suggest that users might refrain from challenging misinformation due to fear of being
attacked in the online environment.
3.1.2. Desire to protect self-image
Impression management (also known as self-presentation) is how people try to control how
others see them (Leary and Kowalski, 1990). According to this approach, people sometimes
modify their behaviour to create positive impressions in the eyes of others by monitoring and
assessing others’ perceptions of themselves. People seek to manage their impressions on
social media (Paliszkiewicz and Ma˛dra-Sawicka, 2016), where users are able to curate their
images easily (Weinstein, 2014). They selectively disclose information to create a desirable,
ideal and socially acceptable image (Zhao et al., 2008). In order to meet the expectations of
the audience, they post information that their audience will find non-offensive (Marwick and
Boyd, 2011) and avoid engaging in controversial topics (Sleeper et al., 2013).
According to impression management theory (Leary and Kowalski, 1990), individuals are
motivated to make a positive impression rather than act as they feel they should. In this
case, it can be speculated that although individuals think they should correct misinformation
(Bode and Vraga, 2021c), they may remain silent owing to the risk of creating a negative
impression, as conflicts, negative feedback and political discussions on social media are not
desirable (Thorson, 2014; Koutamanis et al., 2015; Vraga et al., 2015).
3.1.3. Lack of self-efficacy
Self-efficacy theory, derived from social cognitive theory, focuses on the interconnections
between behaviour, outcome expectancies and self-efficacy (Bandura, 1977). According to
this theory, self-efficacy refers to a person’s judgement of their own ability to determine how
successfully they can perform a specific behaviour. Simply put, self-efficacy is a person’s
belief in their own capacity to succeed.
Outcome expectancies, defined as a person’s perception of the consequences of their
actions, have the potential to influence self-efficacy (Bandura, 1977). The theory suggests
that efficacy beliefs influence outcome expectancies. More precisely, people’s outcome
expectancies are heavily influenced by their assessments of how well they would perform in
various settings. In this case, we can speculate that individuals might not challenge
misinformation due to a perceived lack of efficacy in achieving the behaviour (e.g., a
perceived lack of knowledge). Indeed, when individuals feel equipped enough to express
their opinions, they are more likely to speak up on a political issue regardless of what the
majority thinks (Lasorsa, 1991). Similarly, Tandoc et al. (2020) found that personal efficacy is
one of the main factors affecting a user’s decision to correct fake news. In sum, users may
avoid challenging others due to their belief that their abilities are insufficient to succeed or
that their efforts would not make any difference.
3.1.4. Lack of accountability
The bystander effect suggests that people are less likely to offer help in an emergency when
other people are present because of the diffusion of responsibility (Darley and Latané,
1968). Although this phenomenon is associated with emergencies in physical space, it is
also examined in virtual environments (Fischer et al., 2011), such as participation in
conversations in an online learning environment or cyberbullying (Hudson and Bruckman,
2004; You and Lee, 2019). It was shown that one of the reasons for not intervening in
cyberbullying or not participating in conversations is that the participants delegate the
responsibility for intervention to other bystanders.
In SNS, misinformation can be seen by many people, which might lead to a diffusion of
responsibility in which people do not feel accountable for correcting misinformation. They
might regard their responsibility as lower because they think others might be more
accountable for correcting misinformation.
3.2. Relationship-oriented reasons
3.2.1. Fear of isolation
Asch (1956) demonstrated empirically that individuals adjust their behaviour in order to fit in
with the group. Relying on the Asch conformity experiment, Noelle-Neumann (1974)
introduced the Spiral of Silence theory, which proposes that people gauge the public opinion
climate and, if they perceive that their opinion is in the minority, they are more likely to hold
back their opinion, while if they think their opinion is in the majority, they tend to speak out
confidently. One of the main reasons for conforming is the fear of isolation. Noelle-Neumann
(1974) argues that because of our social nature, we are afraid of being isolated from our
peers and losing their respect. In order to avoid disapproval or social sanctions, people
constantly monitor their environment and decide whether to express their opinions.
Being isolated or ostracised (ignored or excluded by others) is painful as it triggers several
physiological, affective, cognitive and behavioural responses (Williams and Nida, 2011).
Therefore, users on SNSs conform, comply, or obey so they are not excluded (Williams et
al., 2000). One of the reasons they do not correct others might be the fear of being isolated
as they want to fit in the group.
3.2.2. The desire to maintain relationships
Maintaining social ties plays a pivotal role in psychological well-being (Kawachi and
Berkman, 2001). The need to belong is one of the fundamental needs (Williams and
Sommer, 1997) and is linked to psychological and physical well-being (Baumeister and
Leary, 1995). The pursuit of belonging also exists in cyberspace (Williams et al., 2000) and
on SNS such as Facebook (Covert and Stefanone, 2018), Instagram and Twitter (Hayes et
al., 2018). In SNS, users interact with a large number of friends. “Friends” on SNS
encompass both strong and weak ties and include friends, family, neighbours, colleagues, or
romantic partners, but also acquaintances or consequential strangers (Fingerman, 2009), all
of whom are sources of social capital (Antheunis et al., 2015). One of the main motivations
for using SNS is the desire to maintain relationships (Joinson, 2008; Dunne et al., 2010).
As a result, people might be more cautious in their interactions. Indeed, Gallrein et al. (2019)
found that individuals tend to withhold negative interpersonal feedback as they perceive it
has the potential to harm their relationships. In another study exploring why employees do
not speak up about issues or concerns, Milliken et al. (2003) reported that fear of damaging
relationships is the second most common reason for withholding opinions.
As conflicts may pose a threat to users’ sense of belonging, users may avoid challenging or
confronting others on social media.
3.3. Others-oriented reasons
3.3.1. Concerns about the negative impact on others
Individuals might withhold their opinions or refrain from challenging others for altruistic
purposes, such as fear of embarrassing or offending others. For instance, in organisations,
employees remain silent because of the concern that speaking up might upset, embarrass or
in some way harm other people (Milliken et al., 2003).
Withholding opinions out of concern for others also manifests in the public arena. A
Norwegian study exploring freedom of expression across different social conventions and
norms found that citizens withheld their opinions in the public domain due to
fear of offending others (Steen-Johnsen and Enjolras, 2016). On social media, users also
adhere to the norm of not offending others. A qualitative study showed that users hesitate to
counteract misinformation due to fear of embarrassing the sharer, preferring to use private
communication to minimise the risk (Rohman, 2021).
3.3.2. Normative beliefs
Perceived norms are people’s understanding of the prevalent set of rules regulating the
behaviour that group members can enact (Lapinski and Rimal, 2005). They can be divided
into two categories: descriptive norms and injunctive norms. While descriptive norms explain
beliefs regarding the prevalence of a behaviour, injunctive norms explain others’ perceived
approval of that behaviour (Rimal and Real, 2003; Lapinski and Rimal, 2005). Engagement
in a behaviour is influenced by these two norms, that is, by the extent to which the behaviour
is prevalent and approved by others (Berkowitz, 2003).
Social norms play an important role as behavioural antecedents in many contexts, as well as
in the context of the correction of misinformation. Koo et al. (2021) showed that when
individuals perceive corrective actions as common, they are more motivated to correct
misinformation. In their experimental study, Gimpel et al. (2021) found that emphasising the
socially desirable action of reporting false news using an injunctive social norm increases the
rate of reporting fake news. Although individuals think it is normative to correct someone
(Bode and Vraga, 2021c), it is not easy to engage in a dialogue to challenge the content
poster due to the norms that govern social interactions. On Facebook, for example, where
everyone’s social contacts could see the entire conversation, heated interactions and public
discussions were viewed as norm violations (McLaughlin and Vitak, 2012).
Taken together, as prevalence and approval by others influence engagement in the
behaviour, it can be proposed that people do not challenge others since they perceive doing
so to be unusual and unacceptable on social media.
3.4. Content-oriented reasons
3.4.1. Issue relevance
One reason that individuals choose to remain silent might be the extent to which the content
is personally relevant or important to them. Indeed, studies have found that people are more
willing to correct misinformation when the news story is personally relevant to them or their
loved ones (Tandoc et al., 2020). Another study showed that people skipped past false posts
without thoroughly reading them as they did not find them interesting or relevant enough to
read fully (Geeng et al., 2020).
3.4.2. Issue importance
Issue importance also influences people’s willingness to speak out publicly on a contentious
topic. The greater the perceived importance, the more willing people are to speak out (Moy
et al., 2001; Gearhart and Zhang, 2014). Consequently, it might be argued that people’s
avoidance of correcting false news can be related to the content’s importance, relevance, or
appeal.
3.5. Individual characteristics
Although there are some contextual influences on people’s decisions to discuss or confront,
individual factors such as demographics (e.g., age, sex and education level) may influence
the decision to engage in these conversations. For example, in their study about correction
experiences on social media regarding COVID-19, Bode and Vraga (2021a) found that
respondents with more education were more likely to engage in correction, and older
respondents were less likely to report correcting others.
Personality traits might also influence users’ willingness to challenge. The five-factor model
of personality describes five dimensions of personality: extraversion, agreeableness,
conscientiousness, openness to experience and neuroticism (McCrae and John, 1992).
Personality traits influence the frequency and patterns of social interactions in political
discussions (Hibbing et al., 2011; Gerber et al., 2012), commenting on online news (Wu and
Atkin, 2017) and students’ participation in controversial discussions in the classroom
(Gronostay, 2019).
In the context of politics, there is an association between extraversion and the tendency to
discuss politics (Mondak and Halperin, 2008; Hibbing et al., 2011). As challenging someone
requires asking questions or voluntarily providing corrections, extraversion might be
positively associated with engaging in conversations to correct misinformation. It may
influence people’s willingness to approach controversial dialogues on social media. Since
agreeable individuals focus on being acceptable in the eyes of others (Graziano and Tobin,
2002) and agreeableness is associated with conflict avoidance, it might be negatively related
to challenging others. Individuals high in openness to experience are amenable to new ideas
and experiences. It can be speculated that openness to experience might be positively
associated with approaching conversations to question and learn the perspective of the
sharer. Neuroticism describes individuals who are unstable and troubled by negative
emotions such as worry and stress (McCrae and John, 1992). Therefore, neuroticism might
be negatively related to approaching conversations to challenge or correct misinformation.
Perspective-taking and empathic concern could also impact a user’s decision to challenge.
According to the empathy-altruism hypothesis, empathy for another person generates an
altruistic drive to improve that individual’s welfare (Batson, 1987). As people try to establish
a positive self-presentation on SNS (Zhao et al., 2008) and sharing misinformation could hurt
one’s reputation (Altay et al., 2022), it can be speculated that users may develop empathy
for the sharer and therefore refrain from challenging misinformation to protect them from
negative feelings.
3.6. Technical factors
Features and affordances on digital platforms impact users’ engagement in several ways.
These features can encourage young people to express themselves on political issues
(Lane, 2020), or affect a user’s decision on whether to interact with the content, as they may
choose not to engage by using available features rather than commenting (Zhu et al., 2017;
Wu et al., 2020a, 2020b). Features like “hide a post”, “unfollow”, “snooze” and reactions
(smiley face, angry face) can be utilised by users as avoidance strategies for not
commenting (Wu et al., 2020b).
Research has shown that the way in which information is displayed on the interface can
have an impact on users’ misinformation sharing behaviour (Avram et al., 2020; Di
Domenico et al., 2021). To illustrate, people who are exposed to high engagement metrics
(i.e., the numbers of likes and shares) are more likely to like or share false content without
verifying it (Avram et al., 2020). In addition, when misinformation is presented in a way that
the source precedes the message, users are less likely to share it due to a lack of trust (Di
Domenico et al., 2021). Social media design also affects users’ misinformation sharing
behaviour (Fazio, 2020). Integrating friction into a design, for example, a question to make a
user pause and think before sharing information, reduces misinformation sharing (Fazio,
2020).
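As a minimal illustration of such friction, the sketch below (illustrative Python with invented names, not any platform's real API) interposes an accuracy prompt between the share action and its completion, so the user pauses and thinks before the post is forwarded.

```python
# Minimal sketch of design friction: an accuracy prompt before sharing.
# `ask_user` stands in for a platform UI dialog; all names are invented.

def share_with_friction(post, ask_user):
    """Show a pause-and-think prompt; forward the post only on confirmation."""
    prompt = ("How accurate do you think this post is? "
              "Do you still want to share it?")
    if ask_user(prompt):
        return {"action": "shared", "post_id": post["id"]}
    return {"action": "cancelled", "post_id": post["id"]}

# Example: a user who reconsiders and declines to share.
post = {"id": 42, "text": "Breaking: miracle cure discovered!"}
print(share_with_friction(post, ask_user=lambda question: False))
# {'action': 'cancelled', 'post_id': 42}
```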
Given that features, affordances and interface design have an impact on whether to engage
with the content or how to engage on SNS, it can also be argued that they may also affect
users’ decisions to challenge misinformation. The lack of tools provided by the platforms and
the way SNS are designed might affect users’ tendency to be silent when encountering
misinformation.
4. CONCLUSION
Despite many studies demonstrating that individuals in cyberspace do not question false
information (Chadwick and Vaccari, 2019; Tandoc et al., 2020; Tully et al., 2020; Vicol, 2020;
Bode and Vraga, 2021c), there is much we still do not know about why people remain silent
when they encounter misinformation.
By synthesising insights from various bodies of literature, we presented hypotheses about
why people refrain from challenging misinformation when they encounter it online. Although
there is a commonly held belief that social media is a disinhibited environment where
individuals discuss or express anything they like with little concern, studies show that users
may feel restrained in some circumstances while expressing their opinions online (Thorson,
2014) or correcting misinformation (Tandoc et al., 2020). Identifying why people do not
engage in conversations to question or correct the content might help devise socio-technical
measures to encourage people to challenge misinformation and contribute to mitigating its
spread.
Scholars from different fields have proposed tools, design considerations, or systems to
cultivate constructive discussions in online environments, e.g. web-based learning
environments (Lazonder et al., 2003; Yiong-Hwee and Churchill, 2007; Hew and Cheung,
2008) and political deliberation (Semaan et al., 2015; Lane, 2020). To the best of our
knowledge, no application has been identified that facilitates challenging misinformation
online. Given that incorporating design approaches into digital behaviour change
interventions is successful in many diverse areas (Elaheebocus et al., 2018), design
considerations can be extended to encourage users to disagree, question, or correct.
As the PSD model has been employed in a variety of methods to encourage behaviour
change (Torning and Oinas-Kukkonen, 2009), it offers an opportunity to motivate users to
challenge misinformation. Table I fleshes out a few design suggestions to illustrate the
potential of using PSD strategies. These strategies aim to influence attitudes and behaviour.
Reduction (the design strategy of reducing complex behaviour into simple tasks): Stickers in
instant messaging and SNS are used for many reasons, such as facilitating self-expression,
filling the conversation or managing self-impression (Tang et al., 2021). Prefabricated
stickers with questions to challenge, such as “Did you fact-check this information?”, may
help users question the content in a quick and impersonal way.

Suggestion (the design strategy that offers fitting suggestions): Sentence openers are one of
the successful techniques used in online learning environments to cultivate students’
participation (Lazonder et al., 2003; Albertson, 2020). A sentence opener is a pre-defined
mechanism to start a sentence (Yiong-Hwee and Churchill, 2007), for example, “This
information is false because . . .”. A user chooses a sentence opener and adds their
comment to finish the argument, such as “the company made a statement that this is false.”
Sentence openers may help users provide well-constructed arguments and challenge
misinformation more quickly.

Self-monitoring (the design strategy that enables users to monitor their status or progress):
Affect labelling, or labelling one’s feelings, helps users to regulate their emotions (Torre and
Lieberman, 2018). The idea behind using such an approach is to provide insight into users’
expressions of emotion. A tone detector is a tool that provides feedback to users about how
their comment is likely to sound to someone reading it: as the user writes, an indicator on a
scale of emotions takes shape as word choices, style and punctuation are identified. This
may increase the willingness to challenge misinformation, as one of the reasons for
refraining from doing so is the fear of seeming aggressive.

Recognition (the design strategy that provides public recognition for performing the target
behaviour): A badge can provide public recognition. Users who occasionally correct
misinformation can display the badge on their profile, so other users can see that the badge
holder has taken the initiative to challenge misinformation on social media. Badges of the
type “Trusted Fact Checker” may motivate users to challenge misinformation more
frequently.

Normative influence (the design strategy that displays norms regarding how most people
behave and what behaviour they approve): A message that conveys other users’
acceptance of and positive attitudes towards correcting misinformation on social media may
motivate users. An example is, “Did you know that on this website, 80% of users correct
others when they spot misinformation?”

Praise (the design strategy that uses praise as feedback on people’s behaviour): A
notification or message after correcting misinformation may motivate users. For example,
“Your message is the third to dispute this content. Your contribution helps the fight against
misinformation.”

Rewards (the design strategy that rewards people for performing the target behaviour): A
reward such as points after each correction may motivate users to correct misinformation
more often. Such points, when accumulated, could translate to a free subscription to a
media outlet, e.g., one affiliated with the site hosting the discussion forums.

Table I. Design suggestions based on Persuasive System Design (PSD) strategies
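To illustrate the self-monitoring row above, the following is a minimal sketch of a lexicon-based tone detector. The word lists are invented for illustration; a deployed tool would use a validated affect lexicon or a trained classifier.

```python
# Minimal sketch of a tone detector for the self-monitoring strategy.
# Word lists are invented placeholders, not a validated affect lexicon.

HOSTILE_WORDS = {"stupid", "idiot", "liar", "nonsense", "ridiculous"}
SOFTENING_WORDS = {"perhaps", "maybe", "wonder", "think", "source", "could"}

def tone_score(comment: str) -> float:
    """Crude score in roughly [-1, 1]: negative reads hostile, positive calm."""
    words = [w.strip(".,!?\"'").lower() for w in comment.split()]
    if not words:
        return 0.0
    hostile = sum(w in HOSTILE_WORDS for w in words)
    soft = sum(w in SOFTENING_WORDS for w in words)
    exclamations = comment.count("!")  # punctuation as a crude intensity cue
    return (soft - hostile - 0.5 * exclamations) / len(words)

print(tone_score("You idiot, this is nonsense!"))                  # negative: hostile
print(tone_score("I wonder, could you share a source for this?"))  # positive: calmer
```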
4.1. Research limitations and future research directions
4.1.1. Research limitations
This paper has several limitations. Chief among them is the lack of previous research
primarily focused on this topic, which made it impossible to conduct a systematic literature
review. Instead, this paper seeks to synthesise relevant literature on related topics and
provide potential reasons for users’ silence towards misinformation online. It serves as a
basis for further studies. We synthesised published literature from a range of fields, including
organisational behaviour, communication, human-computer interaction (HCI), psychology
and education. Despite this broad range of topics, we may have missed literature from other
research areas. Further, as we searched only published literature, we missed research from
the grey literature.
4.1.2. Future research directions
Future research should investigate such reasons through primary studies. Additionally,
research has revealed that open discussion and direct confrontation are more acceptable in
Western societies (Morris et al., 1998; Friedman et al., 2006). A systematic study conducted
across cultures on large samples to determine which reasons are common characteristics of
human socialisation and which, if any, are distinctive to individual and national experiences
may prove useful in the context of challenging misinformation. Another area for future work
would involve examining whether challenging behaviour varies according to the
individuals’ psychometrics and perception of the group and others. For example, research
has shown that those who perceive themselves as having less power than the offender are
less likely than those who feel they have higher power to confront the offender, despite
finding their comments inappropriate (Ashburn-Nardo et al., 2014). Accordingly, the
likelihood of challenging may differ depending on the perceived power of the person being confronted
(e.g. their boss at work).
Technical solutions have been mainly based on A.I. One direction of research could relate to
improving social media design to empower people to speak up when they see
misinformation. The design ideas presented in this paper are only a starting point and are
intended as hypotheses about how the design of social media might motivate people to
challenge misinformation. Future research may focus on the actual design and
implementation of these ideas. Given the strong link to user experience, methods such as
co-design (Sanders, 2002; Sanders and Stappers, 2014) can be more effective in
maintaining the balance between correction requirements and other requirements, including
connectedness to others and ease of use. We proposed interventions based on the PSD
model. More research is needed to determine whether persuasive techniques are likely to
break through this hesitancy and increase the perceived utility of challenging
misinformation. More research is also required to identify user groups according to their
different reactions and preferences regarding such persuasive interventions. Our identified barriers
could also be used to interpret other types of passive online behaviour. Research showed
that many individuals who observe instances of racism or prejudice do not attempt to
confront the perpetrators (Dickter and Newton, 2013). Future research could investigate our
hypothesised reasons for online silence beyond the domain of challenging misinformation.