Article
... From this perspective, users might react negatively to a ban perceived as a violation of free speech (Etzioni, 2019). Extant research, however, has not considered users' reactions to a ban intended to limit aggression on SNS. ...
... Addressing this research gap is pivotal for SNS because scholars in law and ethics disagree on what constitutes offensive or harmful speech, and on whether such forms of speech should be restricted (DuMont, 2016; Etzioni, 2019; Nielsen, 2018). Concurrently, there is growing recognition that high levels of online incivility are detrimental to the well-being of users (Bacile et al., 2018; Muddiman, 2017) and have negative aggregate consequences for social and political debates (Antoci et al., 2019; Gervais, 2019; Soral et al., 2018). ...
... To the best of our knowledge, this is the first study to model the factors that influence the acceptance of restrictions on free speech by SNS. We contribute to the literature on the positions taken by consumers in relation to the protection of the right to free speech (Etzioni, 2019; Klein, 2018; Nielsen, 2018). In particular, we demonstrate that the ban's perceived violation of free speech and perceived unfairness lead observers to engage in negative word of mouth. ...
Article
Full-text available
Social networking sites (SNS) routinely ban aggressive users. Such bans are sometimes perceived as a limitation to the right to free speech. While research has examined SNS users' perceptions of online aggression, little is known about how observers make trade‐offs between free speech and the desire to punish aggression. By focusing on reactions to an SNS ban, this study explores under what circumstances users consider the protection of the right to free speech as more important than the suppression of aggression. We propose a model of moderated mediation that explains under what circumstances online aggression increases the acceptance of a ban. When posts display aggression, the ban is less likely to be perceived as violating free speech and as unfair. Consequently, aggression reduces the likelihood that users will protest through negative word of mouth. Moreover, users protest against an SNS ban only when this affects an in‐group user (rather than an out‐group user). This in‐group bias, however, diminishes when an in‐group aggressor targets a high warmth out‐group user. The study raises managerial implications for the effective management of aggressive interactions on SNS and for the persuasive communication of a decision to ban a user engaging in aggressive behavior.
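The moderated-mediation logic described in this abstract (aggression → lower perceived free-speech violation → less negative word of mouth, moderated by whether the banned user is in-group) can be sketched with simulated data. Everything below is illustrative: the variable names, coding, and effect sizes are assumptions for the sketch, not the study's measures or estimates.

```python
import numpy as np

# Simulated data; none of this comes from the study itself.
rng = np.random.default_rng(42)
n = 2000
x = rng.integers(0, 2, n).astype(float)   # post displays aggression (1) or not (0)
w = rng.integers(0, 2, n).astype(float)   # banned user is in-group (1) / out-group (0)

# a-path with moderation: aggression lowers the perceived free-speech violation,
# less so when the banned user is in-group (assumed coefficients)
m = 2.0 - 1.0 * x + 0.5 * x * w + rng.normal(0.0, 0.3, n)
# b-path: perceived violation drives negative word of mouth
y = 0.5 + 0.8 * m + 0.1 * x + rng.normal(0.0, 0.3, n)

ones = np.ones(n)
# Mediator model: m ~ 1 + x + w + x:w
a_coef, *_ = np.linalg.lstsq(np.column_stack([ones, x, w, x * w]), m, rcond=None)
# Outcome model: y ~ 1 + m + x
b_coef, *_ = np.linalg.lstsq(np.column_stack([ones, m, x]), y, rcond=None)

def indirect(w_val):
    """Conditional indirect effect of aggression on negative WOM via m."""
    return (a_coef[1] + a_coef[3] * w_val) * b_coef[1]

# Aggression reduces negative WOM through lower perceived violation,
# and the reduction is weaker when the banned user is in-group.
print(indirect(0.0), indirect(1.0))
```

The conditional indirect effect, (a1 + a3·w)·b1, is the quantity a moderated-mediation analysis reports at each level of the moderator; in practice this is estimated with bootstrapped confidence intervals rather than a single least-squares fit.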
... We have comprehensively described the difference between the two [39]. ...
Preprint
Full-text available
Hate speech is a specific type of controversial content that is widely legislated as a crime and must therefore be identified and blocked. However, due to the sheer volume and velocity of the Twitter data stream, hate speech detection cannot be performed manually. To address this issue, several studies have been conducted on hate speech detection in European languages, whereas little attention has been paid to low-resource South Asian languages, leaving social media vulnerable for millions of users. In particular, to the best of our knowledge, no study has been conducted on hate speech detection in Roman Urdu text, which is widely used in the subcontinent. In this study, we scraped more than 90,000 tweets and manually parsed them to identify 5,000 Roman Urdu tweets. Subsequently, we employed an iterative approach to develop annotation guidelines and used them to generate the Hate Speech Roman Urdu 2020 corpus. The tweets in this corpus are classified at three levels: Neutral-Hostile, Simple-Complex, and Offensive-Hate speech. As another contribution, we used five supervised learning techniques, including a deep learning technique, and evaluated and compared their effectiveness for hate speech detection. The results show that Logistic Regression outperformed all other techniques, including the deep learning technique, for two levels of classification, achieving an F1 score of 0.906 for distinguishing between Neutral and Hostile tweets, and 0.756 for distinguishing between Offensive and Hate speech tweets.
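A classifier of the kind compared in this study (the abstract names Logistic Regression as the best performer) is commonly built as TF-IDF features feeding a logistic model. The sketch below shows that shape only; the toy English texts and labels are placeholders, not the Roman Urdu corpus, and the study's actual preprocessing, features, and hyperparameters may differ.

```python
# Minimal TF-IDF + Logistic Regression sketch; data and settings are
# illustrative assumptions, not the paper's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

texts = [
    "you people are worthless get out of here",   # hostile
    "i will make you regret this you coward",     # hostile
    "everyone like you deserves to suffer",       # hostile
    "shut up nobody wants your kind here",        # hostile
    "what a lovely morning for a walk",           # neutral
    "congratulations on the new job my friend",   # neutral
    "the match yesterday was really exciting",    # neutral
    "thanks for sharing this helpful recipe",     # neutral
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = hostile, 0 = neutral

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Training-set F1 only; the paper's scores (0.906 / 0.756) are on its own
# annotated corpus, not reproducible from this toy data.
f1 = f1_score(labels, clf.predict(texts))
print(f1)
```

In practice the corpus would be split into train and test sets (or cross-validated), and the F1 score reported on held-out tweets rather than on the training data as done here for brevity.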