Why People Post Benevolent and Malicious Comments Online
DOI:10.1145/2739042
Explaining motivations for online comments, this study looks to help establish a positive, nonthreatening online comment culture.
BY SO-HYUN LEE AND HEE-WOONG KIM
With the proliferation of smart devices and mobile and social network environments, the social side effects of these technologies, including cyberbullying through malicious comments and rumors, have become more serious. Malicious online comments have emerged as an unwelcome social issue worldwide. In the U.S., a 12-year-old girl committed suicide after being targeted for cyberbullying in 2013 [20]. In Singapore, 59.4% of students experienced at least some form of cyberbullying, and 28.5% were the targets of nasty online comments in 2013 [10]. In Australia, Charlotte Dawson, who at one time hosted the "Next Top Model" TV program, committed suicide in 2012 after being targeted with malicious online comments. In Korea, where the damage caused by malicious comments is severe, more than 20% of Internet users, from teenagers to adults in their 50s, posted malicious comments in 2011 [9].
Recognizing the harm done by malicious comments, many concerned people have proposed anti-cyberbullying efforts to prevent it. In Europe, one such campaign was The Big March, the world's first virtual global effort to establish a child's right to be safe from cyberbullying. The key motivation behind these campaigns is not just to stop the posting of malicious comments but also to motivate people to instead post benevolent comments online. Research in social networking has found that benevolent comments online are not alone but coexist in cyberspace with many impulsive and illogical arguments, personal attacks, and slander [14]. Such comments are not made in isolation but as part of attacks that amount to cyberbullying.
Key insights

- People post benevolent comments to encourage and help others, feel self-satisfaction, and promote the social good; they post malicious comments to express anger, resolve feelings of dissatisfaction, and even have "fun."
- One problem with online comments is lack of responsibility due to anonymity; one response might be ethical education for Internet users and use of real identities with stronger regulation.
- Developers could apply text filters to their systems to detect forbidden and malicious texts and post the identities of users most active in posting benevolent comments on their sites.

Both cyberbullying and malicious comments are increasingly viewed as a social problem due to their role in suicides and other real-world crimes. However, the online environment generally lacks a system of barriers to prevent privacy invasion, personal attacks, and cyberbullying, and the barriers that do exist are weak. Social violence as an online phenomenon is increasingly pervasive, manifesting itself through social divisiveness.
Research is needed to find ways to use otherwise socially divisive factors to promote social integration. However, most previous approaches to online comments have focused on analyzing them in terms of conceptual definition, current status, and the cyberbullying that involves the writing of malicious comments [1, 8, 13, 16, 21, 22]. Still lacking is an understanding of why people post malicious comments in the first place or, likewise, why they post benevolent comments that promote social integration. Unlike previous studies that focused on cyberbullying itself as a socially divisive phenomenon, this study, which we conducted in Korea in 2014, involved in-depth interviews with social media users in regard to both malicious and benevolent comments. To combat the impropriety represented by the culture of malicious comments and attacks, our study sought to highlight the problem of malicious comments based on the reasons people post comments. Here, we outline an approach toward shaping a healthier online environment with fewer malicious comments and more benevolent ones that promote social integration.
Methodology

As an exploratory study, we took an interview approach. Unlike previous studies, where the research typically reflected the perspective of elementary, middle, or high school students, we included in-depth interviews with a broader range of age groups. Questions dealt with reasons for benevolent and malicious comments, problems associated with online comments, and suggestions for addressing the problems.

As a qualitative study, we adopted the convenience-sampling approach for selecting interviewees. For qualitative researchers, it is the relevance of interview subjects to the research topic, rather than their representativeness, that determines how they select participants [4]. The interviewees should be able to explain the reasons or motivations for such postings. We thus checked whether interview subjects had posted comments online.
Our 110 interview subjects ranged from students in their teens to adults in their 50s. The number was determined by confirming theoretical saturation [18], indicating no additional relevant responses, or codes, emerged from additional data collection and analysis. By grouping 10 interview subjects at each stage, we were able to analyze interview data based on the coding at each stage. After conducting interviews over 11 stages with the 110 subjects, we could no longer find new codes. For this reason, we limited ourselves to the 110 subjects, of whom three did not complete their interviews. We thus included 107 subjects in the analysis (see Table 1). The average interview time per participant was 30 to 40 minutes. We gave gift certificates for books to participants to encourage sincere responses.
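This staged stopping rule can be made concrete. The following is a minimal sketch, assuming transcripts arrive in batches of 10 subjects and that a hypothetical extract_codes step stands in for the manual open coding of one batch; neither name comes from the article.

```python
# Sketch of the staged theoretical-saturation check described above.
# `batches` is a list of transcript batches (10 subjects each) and
# `extract_codes(batch)` returns the set of codes found in that batch;
# both are hypothetical stand-ins for the manual coding process.

def saturation_stage(batches, extract_codes):
    """Return the first stage that yields no new codes, or None."""
    seen = set()
    for stage, batch in enumerate(batches, start=1):
        new_codes = extract_codes(batch) - seen
        if stage > 1 and not new_codes:
            return stage  # nothing new emerged: saturation confirmed
        seen |= new_codes
    return None  # saturation not yet confirmed; keep collecting data
```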
Using the open coding approach,19
we subjected transcripts of the inter-
views to content analysis, permitting
inclusion of a large amount of textual
information and systematic identifi-
cation of its properties.7 Coding was
performed by two researchers, one of
whom, to avoid potential bias, was not
involved in data collection. With open
coding, each coder examined the inter-
view transcripts line by line to identify
codes within the textual data. We then
grouped the codes into categories.
The inter-rater agreement scores
for the entire coding process averaged
0.79, with Cohen’s Kappa scores aver-
aging 0.78, indicating an acceptable
level of inter-rater reliability.9 Inter-
rater disagreements were reconciled
through discussion between the two
raters. We then grouped the identi-
fied codes into broader categories
that reflected commonalities among
the codes. We further counted the fre-
quency of relevant responses, or codes,
for each category.
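For readers who want the computation spelled out, here is a minimal sketch of the two reliability figures above (raw percentage agreement and Cohen's Kappa) plus the per-category frequency shares reported in Tables 2 and 3; the function names and data layout are our own illustration, not the authors' code.

```python
from collections import Counter

def agreement_and_kappa(labels_a, labels_b):
    """Raw agreement and Cohen's Kappa for two coders' category labels."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both coders independently pick category c.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:  # degenerate case: one category throughout
        return observed, 1.0
    return observed, (observed - expected) / (1 - expected)

def category_shares(codes):
    """Frequency and percentage share per category, as in Tables 2 and 3."""
    counts = Counter(codes)
    total = sum(counts.values())
    return {c: (k, round(100 * k / total, 1)) for c, k in counts.items()}
```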
Results

Table 2 outlines the reasons for posting benevolent comments online. Five of the seven main ones accounted for 85.9% of the total: encouragement (36.3%); self-satisfaction (21.5%); providing advice or help (11.8%); supporting other benevolent comments (8.9%); and actualizing the social good (7.4%). Many respondents said they write benevolent comments to "encourage," or give hope or courage to, someone and think such an attitude can yield positive change. People also post benevolent comments to "provide advice and help" to others. Ranked next among the main reasons for posting benevolent comments were "support for other benevolent comments" and "actualizing society's good." These revealed themselves in agreeing with others' benevolent comments, following others in the selected online context ("online context norm"), or preventing malicious comments and spreading benevolent comments. Posting benevolent comments for such reasons made the people doing the posting feel "satisfaction," motivating them to continue to post further benevolent comments.

Table 3 outlines the reasons for posting malicious comments. Five of the seven main ones accounted for 85.0% of the total: resolving a feeling of dissatisfaction (28.1%); hostility (20.3%); low self-control (15.7%); supporting other malicious comments (12.4%); and fun (8.5%).
Table 1. Demographics of interview participants.

Demographic | Category     | Frequency (%)
Gender      | Male         | 49 (45.8%)
            | Female       | 58 (54.2%)
Age (years) | <19          | 11 (10.3%)
            | 20-29        | 41 (38.4%)
            | 30-39        | 35 (32.7%)
            | 40-49        | 10 (9.3%)
            | >50          | 10 (9.3%)
Job         | Employed     | 30 (28.0%)
            | Professional | 24 (22.4%)
            | Student      | 40 (37.4%)
            | Homemaker    | 7 (6.6%)
            | Other        | 6 (5.6%)
Total       |              | 107 (100%)
Table 2. Reasons for benevolent comments.

Reason | Example responses | Frequency (%)
Encouragement | To console, encourage, praise, or support someone | 49 (36.3%)
Self-satisfaction | For one's satisfaction, one's sense of well-being | 29 (21.5%)
Providing advice or help | To help someone; belief that one's faith or opinion can influence others | 16 (11.8%)
Supporting other comments | Agreeing with or sympathizing with others' benevolent comments | 12 (8.9%)
Actualizing society's good | To spread benevolent comments, prevent malicious comments, protect those receiving malicious comments | 10 (7.4%)
Online context norm | Mob psychology, sense of unity | 7 (5.2%)
Information sharing and communication | To communicate with others, share information, inform of one's praiseworthy behavior | 7 (5.2%)
Others | People are essentially good-hearted; want to maintain offline relationships; want to receive a gift certificate | 5 (3.7%)
Total | | 135 (100%)
Many interview subjects said people write malicious comments to express "anger and feelings of inferiority" and to get attention. "Hostility toward others" suggests malicious commenters attack and slander a certain person's blog or bulletin board and spread groundless rumors elsewhere. "Low self-control" suggests people post malicious comments irresponsibly, for "fun" and to release "stress." "Supporting other malicious comments" and "online context norm," or following the general pattern of the selected online context, show how often people post malicious comments simply by following others.

The figure here outlines the identified problems with online comments ("anonymity" ranked highest, at 42.8%) and suggestions for addressing them. Writing comments online allows people to participate without a certification process, creating an environment in which they can criticize, curse, and malign others in the course of expressing their opinions. "Lack of responsibility" was cited by 30.3% of interview subjects. Most people posting malicious comments do not appreciate the potential seriousness of their effect, or that the comments actually belong to a category involving some kind of violence; they impulsively write comments without a sense of responsibility for their effect. Moreover, "online context climate," or the environmental conditions of the online context, represented 11.0% of the interview subjects, implying that as people become less caring and more disdainful of others, their internal dissatisfaction can be expressed through malicious comments. There also appears to be a "lack of regulation and punishment" for malicious comments and a rise of "commercial selfishness" (such as programs that trigger witch hunts and prompt advertising by commenters).

Among suggestions from interview subjects on how to deal with malicious online comments, improving awareness through "educational programs and campaigns" ranked highest, with 39.7%, meaning organized education for Internet users was seen as a way to ensure a civil public dialogue. A more proactive type of Internet ethical education targeting teenagers should be a priority in schools, as well as at home, to teach commenters to appreciate the seriousness of malicious comments and the potential for cyber violence. Such programs should highlight Internet users' responsibility for their online comments. They should also promote the idea of writing benevolent comments as a way to limit malicious comments and increase benevolent comments. Given the seriousness of anonymity in online comments, "use of real identity" ranked next, with 29.3%. Many interview subjects suggested using real identities (such as real names or photos) as a way to reduce malicious comments in the online context.

Another suggestion from interview subjects was "more and stronger regulation," with 20.8%. Enforcing official punishment may be difficult online, but inadequate punishment is one of the reasons for malicious comments. Accordingly, many interview subjects endorsed the idea of stronger regulation and legal punishment for malicious comments. Yet another suggestion was "role of management," with 6.4%. Many interview subjects highlighted the role of managers of social media providers in monitoring and deterring online comments, especially malicious ones. Social media management should have a role in resolving how to reduce malicious comments and promote benevolent comments in a social media context; for example, a management team might develop an algorithm for filtering online comments and identifying malicious writers, then implement a supporting system (see the sketch below). Other suggestions included "a reward program for reporting malicious comments," "clarifying the range of privacy," "abolishing any program that triggers a witch hunt," and "managing the culture of the online community." Despite the value of our findings, it would be useful to further test their robustness by replicating the study in countries other than Korea, in light of cultural differences.
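As one reading of that suggestion, the sketch below flags writers whose comments are repeatedly classified as malicious. The keyword rule, the names, and the threshold are all illustrative assumptions; the interviewees proposed the idea, not this design.

```python
from collections import defaultdict

BANNED_TERMS = {"idiot", "loser"}  # hypothetical stand-in list

def is_malicious(comment: str) -> bool:
    """Toy classifier; a real system might use user reports or a trained model."""
    return bool(set(comment.lower().split()) & BANNED_TERMS)

def flag_writers(comments, threshold=3):
    """comments: iterable of (user_id, text). Returns users to review."""
    hits = defaultdict(int)
    for user_id, text in comments:
        if is_malicious(text):
            hits[user_id] += 1
    return {user for user, count in hits.items() if count >= threshold}
```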
Discussion

Motivations for malicious comments identified in the study involved targeting people's mistakes. Conversely, most benevolent comments involved encouragement and compliments to help people in difficult or risky situations, showing that malicious comments are a primary reason for the degradation of online social networks.
Table 3. Reasons for malicious comments.

Reason | Example responses | Frequency (%)
Resolving a feeling of dissatisfaction | To express one's dissatisfaction; show a feeling of inferiority or frustration; desire to attract others' attention; jealousy | 43 (28.1%)
Hostility toward others | To hurt others, criticize others, show hatred toward someone | 31 (20.3%)
Poor self-control | Thoughtlessness, lack of responsibility, having a weak personality | 24 (15.7%)
Supporting other comments | Agreeing that blame is deserved; agreeing with others' malicious comments | 19 (12.4%)
Fun | Just for fun, enjoyment | 13 (8.5%)
Releasing stress or alleviating tension | To release stress, express anger generated offline | 10 (6.5%)
Online context norm | A general atmosphere of writing malicious comments; a social atmosphere where people cannot trust one another | 8 (5.2%)
Others | Differences in outlook or perspective; stimulating content of articles; ignorance of content; judgmental | 5 (3.3%)
Total | | 153 (100%)
[Figure: Problems and suggestions for online comments.]
Problems of online comments: anonymity (42.8%); lack of responsibility (30.3%); online context climate (11.0%); lack of regulation and punishment (8.0%); commercial selfishness (5.8%); others (2.1%).
Suggestions for online comment problems: educational programs and campaigns (39.7%); use of real identity (29.3%); more and stronger regulation (20.8%); role of management (6.4%); others (3.8%).
Moreover, the abolition of anonymity and the intensification of punishment in social media can be effective in reducing malicious comments and rumors. However, the potential violation of freedom of expression also risks trivializing the online social network itself. Because anonymous forms of freedom of expression have always been controversial in theoretical and normative spheres of social research [5, 6, 11, 12, 17], careful consideration of any limiting of comments is necessary before a ban might be contemplated.
To establish an environment of healthy online commentary, the damage caused by the spread of false or toxic information must be measured quickly, and socioeconomic support for potential victims must be organized. Measures are required to minimize damage when toxic information is posted in social media and to investigate the legal steps that might be needed to pursue awards for financial damage. Beyond imposing legal punishment on the people posting malicious comments, addressing victims' mental pain is even more urgent; a well-organized legal support program for such damage is necessary.

Also needed are ways to enforce social media users' right to control their personal information, as well as to verify distorted information. Such systems would involve monitoring to detect distorted information, services for filtering information, an "information confirmation system," and laws supporting the rights of users concerning their consent and choices in how their personal information is used. Regarding malicious comments, Internet portal sites should adopt preventive measures (such as systems to report malicious comments, disclose commenters' IP addresses, and create lists of prohibited words).
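As a concrete, deliberately simple illustration of the last of those measures, the sketch below screens a comment against a prohibited-word list at submission time; the list entries and the hold-for-review policy are assumptions, not recommendations from the study.

```python
import re

# Hypothetical prohibited-word list; a portal would maintain its own.
PROHIBITED = ["kill yourself", "idiot"]
PATTERN = re.compile("|".join(map(re.escape, PROHIBITED)), re.IGNORECASE)

def screen_comment(text: str) -> str:
    """Return 'hold' for moderator review if a prohibited phrase appears."""
    return "hold" if PATTERN.search(text) else "publish"

# Example: screen_comment("You idiot!") -> 'hold'
```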
Education in socially appropriate and legal use of social media is necessary to minimize the social, cultural, and economic gaps among the approaches and applications prevalent today. Education along these lines should emphasize not only the role of the producer but also that of information users, and it should instill a sense of responsibility for information. Such an effort could be accompanied by encouraging benevolent comments. Educational programs and campaigns could also be directed at motivating the posting of benevolent comments. Our study found Internet users post benevolent comments mainly to encourage and help others, often making them because other Internet users have already done so, and to gain a sense of satisfaction from their action. Efforts to develop social norms of posting benevolent comments should also consider the reasons identified here for positive posts. People, especially teens, tend to take collective action in the use of social media [15]. Our study further found that people post benevolent comments and malicious comments due to the online context, as in Table 2 and Table 3. It is therefore important for all online sites that accept comments to develop social norms of posting benevolent comments through educational programs and campaigns.
Conclusion

Unrestricted by time and space, online communication has led to increasing numbers of both benevolent and malicious comments, with the latter including impromptu and irrational personal abuse and defamation. Because malicious comments can provoke pain and even violence in cyberspace, they have emerged as a serious social issue, including allegedly causing their targets to commit suicide. To combat the abuse represented by the culture of malicious comments and attacks, our study investigated their sources and role in social disintegration. Our study also suggested ways to address the problem and increase the number of benevolent comments that can contribute to social integration and harmony.

The study has several implications for research, as it was among the first to comprehensively consider the reasons for posting comments, identify related problems, and explain how to address them. Previous research explored general reasons (such as to redirect feelings, revenge, jealousy, and boredom) for cyberbullying based on data collected from high school students [22], but there was a lack of understanding among researchers as to the motivators that lead to posting malicious comments and benevolent comments, respectively. Our study thus adds value to the literature by explaining the reasons for malicious and benevolent comments and how they differ. Although previous research classified types of cyberbullying and investigated the consequences of cyberbullying [10, 16], missing was an understanding of the problems related to online comments in general. This study thus adds value to the literature by identifying the relevant problems and advancing our understanding of the phenomenon.
Meanwhile, several studies have discussed strategies school guidance counselors and parents might use to prevent cyberbullying and proposed coping strategies for students [2, 16]. Extending this previous research, we have contributed by explaining how to manage the problems of online comments as a way to reduce malicious ones and promote benevolent ones.
Our results also suggest how social media service providers, educational institutions, and government policymakers might be able to establish a positive, nonthreatening online comment culture. Educators must understand why students post malicious, as well as benevolent, comments. They can consider updating their curricula or teaching content by adding ethical issues related to online comments. That is, based on the reasons we identified, they can guide their students on why to post (such as for self-satisfaction and society's advancement through benevolent comments), what to post (such as support), and how to post (such as self-expression). They can likewise teach why not to post (such as social problems), what not to post (such as rage), and how not to post (such as poor self-control). Students especially should be educated in a way that instills a sense of responsibility for their postings. If they come to have a sense of responsibility and perceive that posting malicious comments is a form of violence, cyberbullying would likely be reduced. Educators might also consider launching campaigns that promote the posting of benevolent comments, establishing social norms of conduct that would reach many other people online and be accepted by them.
Policymakers should understand that government regulations and corresponding legal punishment can be useful in regulating cyberbullying, especially in the form of malicious online comments. Our results further suggest cyberbullying, especially through such comments, should be regulated at the government level. Many people also believe legal penalties for posting malicious comments should be strengthened. Both regulation and punishment have a role to play in reducing and even preventing malicious online comments.
For social media service providers, including information systems developers, our results suggest they should consider requiring real identities for postings. When people access social media and post comments anonymously, they think less about what they post. Requiring true identities would cause them to be more careful and responsible. Our results also suggest providers of social media services can apply text filters to their systems. Because certain texts are used repeatedly in cyberbullying or malicious comments, providers should be persuaded to develop a system that detects such texts and alerts them as to when to possibly take action against the people posting them; a minimal sketch of such a detector appears below. Such a filtering function could reduce the number of all kinds of malicious comments.
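One way to read "certain texts are used repeatedly" is as near-verbatim repetition across comments. The following sketch counts normalized comment texts and alerts a moderator when one recurs; the normalization, the threshold of 5, and the print-based alert are all assumptions for illustration.

```python
from collections import Counter

def normalize(text: str) -> str:
    """Collapse case and whitespace so near-verbatim copies match."""
    return " ".join(text.lower().split())

def repeated_texts(comments, min_count=5):
    """Return texts posted at least `min_count` times."""
    counts = Counter(normalize(c) for c in comments)
    return [text for text, k in counts.items() if k >= min_count]

def alert_moderators(comments):
    for text in repeated_texts(comments):
        print(f"ALERT: recurring comment text needs review: {text!r}")
```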
Conversely, social media service providers should consider posting lists that rank the users most active in posting benevolent comments on their sites. Because people generally enjoy self-expression, these rankings could motivate more people to post positive comments, helping develop a new social norm in which malicious comments are unwelcome and the people posting them are scorned. A minimal sketch of such a ranking follows.
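A leaderboard of this kind reduces to counting benevolent comments per user. In this sketch, how a comment is judged benevolent is left to an upstream flag, a hypothetical input, since the study only proposes publishing the ranking:

```python
from collections import Counter

def benevolent_leaderboard(comments, top_n=10):
    """comments: iterable of (user_id, is_benevolent). Top users by count."""
    counts = Counter(user for user, benevolent in comments if benevolent)
    return counts.most_common(top_n)

# Example:
# benevolent_leaderboard([("ann", True), ("bob", False), ("ann", True)])
# -> [('ann', 2)]
```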
Acknowledgment

This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2012S1A3A2033291).
References

1. Aricak, T., Siyahhan, S., Uzunhasanoglu, A., Saribeyoglu, S., Ciplak, S., Yilmaz, N., and Memmedov, C. Cyberbullying among Turkish adolescents. Cyberpsychology and Behavior 11, 3 (2008), 253–261.
2. Bhat, C.S. Cyber bullying: Overview and strategies for school counsellors, guidance officers, and all school personnel. Australian Journal of Guidance and Counselling 18, 1 (2008), 53–66.
3. Fleiss, J.L. Statistical Methods for Rates and Proportions. John Wiley & Sons, Inc., New York, 1981.
4. Flick, U. An Introduction to Qualitative Research. Sage, Thousand Oaks, CA, 1998.
5. Froomkin, M. Flood control on the information ocean: Living with anonymity, digital cash, and distributed databases. Journal of Law and Commerce 15, 2 (1996), 395–453.
6. Froomkin, M. Legal issues in anonymity and pseudonymity. The Information Society 15, 2 (1999), 113–127.
7. Hodder, I. The Interpretation of Documents and Material Culture. Sage, Thousand Oaks, CA, 1994.
8. Kim, S.K. A study on the restriction of freedom of expression on the Internet. The Journal of Comparative Law 11, 3 (2011), 47–72.
9. Korea Internet and Security Agency. Internet Ethical Culture Research. Seoul, Korea, 2012; http://isis.kisa.or.kr/board/index.jsp?pageId=040100&bbsId=7&itemId=786
10. Kwan, G.C.E. and Skoric, M.M. Facebook bullying: An extension of battles in school. Computers in Human Behavior 29, 1 (2013), 16–25.
11. Long, G.P. Who are you? Identity and anonymity in cyberspace. University of Pittsburgh Law Review 55 (1993), 1177–1213.
12. Marsh, T.D. In defense of anonymity of the Internet. Res Gestae (Apr. 2007), 24–32.
13. Park, H.J. A critical study on the introduction of the cyber contempt. Anam Law Review 28, 1 (2009), 315–347; http://kiss.kstudy.com/journal/thesis_name.asp?tname=kiss2002&key=2751961
14. Poster, M. CyberDemocracy: Internet and the Public Sphere. University of California, Irvine, 1995; http://www.hnet.uci.edu/mposter/writings/democ.html
15. Seo, H., Houston, J.B., Knight, L.A.T., Kennedy, E.J., and Inglish, A.B. Teens' social media use and collective action. New Media & Society 16, 6 (2014), 883–902.
16. Slonje, R., Smith, P.K., and Frisén, A. The nature of cyberbullying and strategies for prevention. Computers in Human Behavior 29, 1 (2013), 26–32.
17. Stieglitz, E.J. Anonymity on the Internet: How does it work, who needs it, and what are its policy implications? Cardozo Arts and Entertainment Law Journal 24, 3 (2007), 1395–1424.
18. Strauss, A. and Corbin, J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Sage, Newbury Park, CA, 1990.
19. Strauss, A. and Corbin, J. Basics of Qualitative Coding. Sage, Thousand Oaks, CA, 1998.
20. The Guardian. Florida cyberbullying: Girls arrested after suicide of Rebecca Sedwick, 12 (Oct. 15, 2013); http://www.theguardian.com/world/2013/oct/15/florida-cyberbullying-rebecca-sedwick-two-girls-arrested
21. Vandebosch, H. and Van Cleemput, K. Defining cyberbullying: Qualitative research into the perceptions of youngsters. Cyberpsychology and Behavior 11, 4 (2008), 499–503.
22. Varjas, K., Talley, J., Meyers, J., Parris, L., and Cutts, H. High school students' perceptions of motivations for cyberbullying: An exploratory study. Western Journal of Emergency Medicine 11, 3 (2010), 269–273.
So-Hyun Lee (sohyun1010@yonsei.ac.kr) is a Ph.D. candidate in the Graduate School of Information at Yonsei University, Seoul, Korea.

Hee-Woong Kim (kimhw@yonsei.ac.kr), the corresponding author, is the Underwood Distinguished Professor in the Graduate School of Information at Yonsei University, Seoul, Korea.

© 2015 ACM 0001-0782/15/11 $15.00
Watch the authors discuss their work in this exclusive Communications video: http://cacm.acm.org/videos/why-people-post-benevolent-and-malicious-comments-online