Fig 4
Source publication
One of the main goals of any online social community is to promote a stable, or perhaps growing, membership built around topics of shared interest. Yet communities are not impermeable to the potentially damaging effects of those few participants who choose to behave in a manner counter to established norms of behavior. Typical...
Context in source publication
Context 1
... injected posts were selected randomly from a large pool (about 300) of actual NCP posts moderated by the site moderators, available in the dataset. The overall F-rate (computed using classic precision and recall metrics) is reported in Figure 4. We compare the performance with that of a moderator checking 50% of the posts at each round. ...
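The "F-rate" mentioned in this context appears to be the standard F-measure combining precision and recall; under that assumption, a minimal statement of the quantities involved, in terms of true positives (TP), false positives (FP), and false negatives (FN), is:

```latex
\[
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}.
\]
```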
Similar publications
In this modern era, infectious diseases such as H1N1, SARS, and Ebola are spreading much faster than at any time in history. Efficient approaches are therefore desired to monitor and track the diffusion of these deadly epidemics. Traditional compartmental epidemiology models are able to capture disease-spreading trends through the contact network, h...
Citations
... Qualitative and quantitative studies of bad behavior in online settings have been carried out considering newsgroups [26], online chat and video communities [36], and online multiplayer video games [37]. More recently, an emerging line of work has focused on misbehavior and deception in forums [38,39], community question answering [40], and social networks in general [39]. ...
Use of online social networks has grown dramatically since the first Web 2.0 technologies were deployed in the early 2000s. Our ability to capture user data, in particular behavioral data, has grown in concert with increased use of these social systems. In this study, we survey methods for modeling and analyzing online user behavior. We focus on negative behaviors (social spamming and cyberbullying) and mitigation techniques for these behaviors. We also provide information on the interplay between privacy and deception in social networks and conclude by looking at trending and cascading models in social media. WIREs Data Mining Knowl Discov 2017, 7:e1203. doi: 10.1002/widm.1203
... As noted in the literature review, punishments in online communities, although applied, are shown not to be truly effective [21]. In this instance, incentives are applied instead to encourage good behavior. ...
We construct a two-species evolutionary game model of an online society consisting of ordinary users and behavior enforcers (moderators). Among themselves, moderators play a coordination game, choosing between being "positive" or "negative" (or harsh), while ordinary users play a prisoner's dilemma. When interacting, moderators motivate good behavior (cooperation) among the users through punitive actions, while the moderators themselves are encouraged or discouraged in their strategic choice by these interactions. We show the following results: (i) the ω-limit set of the proposed system is sensitive, in closed form, both to the degree of punishment and to the proportion of moderators; (ii) the basin of attraction of the Pareto-optimal strategy can be computed exactly; (iii) for certain initial conditions the system is self-regulating. These results partially explain the stability of many online user communities such as Reddit. We illustrate our results with examples from this online system.
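As a rough illustration of the kind of model this abstract describes (not the authors' formulation), the sketch below runs discrete-time replicator dynamics for the two populations; the payoff matrices and the punishment parameters alpha and p_mod are purely illustrative assumptions.

```python
# Illustrative sketch only (not the paper's model): two-population replicator
# dynamics in which ordinary users play a prisoner's dilemma whose defection
# payoff is reduced by moderator punishment, and moderators play a 2x2
# coordination game. All payoffs, alpha, and p_mod are assumed values.
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i * ((A x)_i - x.T A x)."""
    fitness = A @ x
    avg = x @ fitness
    x = x + dt * x * (fitness - avg)
    return x / x.sum()  # keep the state on the probability simplex

# Users: prisoner's dilemma, rows/columns = (cooperate, defect).
PD = np.array([[3.0, 0.0],
               [5.0, 1.0]])
alpha = 4.0    # assumed severity of punishment
p_mod = 0.6    # assumed proportion of moderators

# Punishment lowers the defector's payoff in proportion to moderator presence.
PD_punished = PD.copy()
PD_punished[1, :] -= alpha * p_mod

# Moderators: coordination game between "positive" and "negative" styles.
COORD = np.array([[2.0, 0.0],
                  [0.0, 1.0]])

users = np.array([0.4, 0.6])  # initial shares: cooperators, defectors
mods = np.array([0.5, 0.5])   # initial shares: positive, negative moderators

for _ in range(5000):
    users = replicator_step(users, PD_punished)
    mods = replicator_step(mods, COORD)

print("final cooperation share among users:", round(users[0], 3))
print("final 'positive' share among moderators:", round(mods[0], 3))
```

With these illustrative numbers, punishment flips the users' game so that cooperation dominates, which is the qualitative effect the abstract attributes to moderator intervention.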
Peer-to-peer and distributed systems are generally susceptible to sybil attacks. Online social networks (OSNs), due to their vast user base and open-access nature, are also prone to such attacks. Current state-of-the-art algorithms for sybil attack detection make use of the inherent social graph created among registered users of the OSN service, relying on the inherent trust relationships among these users. No effort is made to combine other characteristic behaviors of sybil users with properties of the social graph of OSNs to improve the detection accuracy of sybil attacks. Sybil identities are also used as gateways for spreading spam content in OSNs [6]. The proposed approach exploits this behavior of sybil users to improve the detection accuracy of existing sybil detection algorithms. In the proposed approach, the content generated/published by each user is used along with the topological properties of the social graph of registered users. A machine learning model assigns a fractional value called a "trust value", which denotes the amount of legitimate content generated by the user. A modification to the sybil detection algorithm is proposed that makes use of each user's trust value to improve the accuracy of detecting a sybil identity. A real dataset crawled from Facebook is used for analysis and experiments. Analytical results show the superiority of the proposed solution. Results are compared with SybilGuard and SybilShield, showing a ~14% decrease in false positive rates with minimal effect on the acceptance rate or false negative rate of the sybil detection algorithms. Moreover, the proposed modification does not affect the performance of existing sybil detection algorithms and can be implemented in a distributed manner.
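As a hedged illustration of the idea (not the paper's actual algorithm), the sketch below learns a content-based "trust value" per user and blends it with a placeholder graph-based sybil score; the feature names, toy data, and blending rule are invented for the example.

```python
# Hypothetical sketch, not the proposed system: learn a per-user "trust value"
# from content features, then blend it with a placeholder graph-based sybil
# score. Feature names, toy data, and the blending weight are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-post content features: [num_links, num_mentions, is_duplicate]
X_posts = np.array([[0, 1, 0], [5, 8, 1], [1, 0, 0],
                    [6, 9, 1], [0, 2, 0], [7, 6, 1]])
y_posts = np.array([1, 0, 1, 0, 1, 0])  # 1 = legitimate content, 0 = spam-like

content_model = LogisticRegression().fit(X_posts, y_posts)

def trust_value(user_posts):
    """Fraction of a user's posts that the content model labels legitimate."""
    if len(user_posts) == 0:
        return 0.5  # no content: no evidence either way
    return float(np.mean(content_model.predict(np.asarray(user_posts))))

def combined_sybil_score(graph_score, user_posts, w=0.5):
    """Blend a graph-based sybil score (0 = honest, 1 = sybil) with content trust."""
    return w * graph_score + (1 - w) * (1.0 - trust_value(user_posts))

# A user with spam-like content and a moderately suspicious graph position.
print(combined_sybil_score(0.6, [[4, 7, 1], [5, 9, 1]]))
```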
In this demo, we present Abuse User Analytics (AuA), an analytical framework aiming to provide key information about the behavior of online social network users. AuA efficiently processes data from users' discussions and renders information about users' activities in an easy-to-understand graphical fashion, with the goal of identifying deviant or abusive activities. Using animated graphics, AuA visualizes users' degree of abusiveness, measured by several key metrics, over user-selected time intervals. It is therefore possible to visualize how users' activities lead to complex interaction networks, and to highlight the degenerative connections among users and within certain threads.
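A small sketch of the kind of per-interval metric such a tool might compute (not AuA's actual code): the fraction of each user's posts flagged abusive within each time window. The post fields and the flagging itself are invented for illustration.

```python
# Hypothetical sketch: per-user abusiveness ratio over fixed time windows.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"user": "alice", "time": datetime(2016, 5, 1, 10), "abusive": False},
    {"user": "alice", "time": datetime(2016, 5, 1, 22), "abusive": True},
    {"user": "bob",   "time": datetime(2016, 5, 2, 9),  "abusive": True},
]

def abusiveness_by_window(posts, window=timedelta(days=1)):
    """Map (user, window index) -> fraction of that user's posts flagged abusive."""
    start = min(p["time"] for p in posts)
    counts = defaultdict(lambda: [0, 0])      # (user, bucket) -> [abusive, total]
    for p in posts:
        bucket = (p["time"] - start) // window
        key = (p["user"], bucket)
        counts[key][1] += 1
        counts[key][0] += int(p["abusive"])
    return {k: abusive / total for k, (abusive, total) in counts.items()}

print(abusiveness_by_window(posts))
```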