Computational and Mathematical Organization Theory (2020) 26:365–381
Social cybersecurity: an emerging science
Published online: 16 November 2020
© The Author(s) 2020
Abstract

With the rise of online platforms where individuals could gather and spread information came the rise of online cybercrimes aimed at taking advantage of not just single individuals but collectives. In response, researchers and practitioners began trying to understand this digital playground and the way in which individuals who were socially and digitally embedded could be manipulated. What is emerging is a new scientific and engineering discipline—social cybersecurity. This paper defines this emerging area, provides case examples of the research issues and types of tools needed, and lays out a program of research in this area.
Keywords Social cybersecurity · Social network analysis · Dynamic network analysis · Social media analytics · Review
In today's high-tech world, beliefs, opinions, and attitudes are shaped as people engage with others in social media and through the internet. Stories from credible news sources and findings from science are challenged by actors who are actively engaged in influence operations on the internet. Lone wolves and large propaganda machines alike disrupt civil discourse, sow discord, and spread disinformation. Bots, cyborgs, trolls, sock-puppets, deep fakes, and memes are just a few of the technologies used in social engineering aimed at undermining civil society and supporting adversarial or business agendas. How can social discourse without undue influence persist in such an environment? What are the types of tools and theories needed to support such open discourse?
Today scientists from a large number of disciplines are working collaboratively to develop these new tools and theories. Their work has led to the emergence of a new area of science—social cybersecurity. Herein, this emerging scientific area is described. Illustrative case studies are used to showcase the types of tools and theories needed. New theories and methods are also described.
* Kathleen M. Carley
1 Center for Informed Democracy and Social Cybersecurity, Carnegie Mellon University, Pittsburgh, PA, USA
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
1 Social cybersecurity
In response to these cyber-mediated threats to democracy, a new scientific discipline has emerged—social cybersecurity. As noted by the National Academies of Sciences (NAS 2019), social cybersecurity is an applied computational social science with two objectives:

“characterize, understand, and forecast cyber-mediated changes in human behavior and in social, cultural, and political outcomes; and

build a social cyber infrastructure that will allow the essential character of a society to persist in a cyber-mediated information environment that is characterized by changing conditions, actual or imminent social cyberthreats, and cyber-mediated threats.”
Social cybersecurity is both a new scientific and a new engineering field. It is a computational social science with a large foot in the area of applied research. Drawing on a huge range of disciplines, the new technologies and findings in social cybersecurity have near immediate application on the internet. The findings and methods are relevant to policy makers, scholars, and corporations.
Social cybersecurity uses computational social science techniques to identify, counter, and measure (or assess) the impact of communication objectives. The methods and findings in this area are critical, and advance industry-accepted practices for communication, journalism, and marketing research. The field itself has a theory, application, and policy component. The methods build on work in high dimensional network analysis, data science, machine learning, natural language processing, and agent-based simulation. These methods are used to provide evidence about who is manipulating social media and the internet for or against you or your organization, what methods are being used, and how these social manipulation methods can be countered. They also support cyber diplomacy (Goolsby 2020).
Social cybersecurity uses computational social science techniques to identify, counter, and measure (or assess) the impact of influence campaigns, and to identify and inoculate those at risk against such campaigns. The methods and findings in this area are critical, and advance practices for intelligence and forensics research. These methods also provide scalable techniques for assessing and predicting the impact of influence operations carried out through social media, and for securing social activity on the internet and mitigating the effects of malicious and undue influence. As such they are critical for creating a more secure and resilient society.
Influence campaigns vary widely, and who is at risk depends in part on those conducting the influence campaign and in part on the context. For example, in our research we found that influence campaigns appearing to come from state-level actors during the elections in Western Europe and the US from 2016 to 2020 were often aimed at minorities. For example, they targeted women, ethnic minorities, and the LGBTQ community. In contrast, in India as COVID-19 ramped up, internal non-state groups launched anti-Muslim campaigns. As movies like Black Panther and Captain Marvel were released, individuals launched
campaigns against the movies. In elections in the Asia Pacific region, influence campaigns often take the form of promoting pro-China candidates. Many influence campaigns are aimed at specific individuals, trying to recruit them to a new cause or engage them in insider threat activity.
Social cybersecurity is distinct from cybersecurity. Cybersecurity is focused on machines, and how computers and databases can be compromised. In contrast, social cybersecurity is focused on humans and how these humans can be compromised, converted, and relegated to the unimportant. Where cybersecurity experts are expected to understand the technology, computer science, and engineering, social cybersecurity experts are expected to understand social communication and community building, statistics, social networks, and machine learning. Social cybersecurity is also distinct from cognitive security. Cognitive security is focused on human cognition and how messages can be crafted to take advantage of normal cognitive limitations. In contrast, social cybersecurity is focused on humans situated in society and how the digital environment can be manipulated to alter both the community and the narrative. Where cognitive security experts are expected to understand psychology, social cybersecurity experts are expected to have broader social science expertise.
In our research we have found that there is some work in social cybersecurity that draws on most scientific fields. In a recent study of the field, we identified 1437 papers up through 2019. Each journal was coded by the dominant scientific fields that it is associated with. The result was a set of 43 disciplines. In Fig. 1 we show the discipline-to-discipline network, where the links indicate the number of articles that draw on both disciplines. The size of a node indicates the number of articles associated with that discipline.
Fig. 1 Network diagram of the interdisciplinary nature of the field of social cybersecurity. Nodes are disciplines and are sized by number of articles. Link weights are the number of articles associated with both disciplines
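A discipline-to-discipline network like the one in Fig. 1 can be built from paper-level discipline codings with a simple co-occurrence count. The sketch below is illustrative only: the discipline labels and paper codings are invented, not the actual 1437-paper, 43-discipline dataset described above.

```python
from collections import Counter
from itertools import combinations

# Hypothetical paper-to-discipline codings (labels are illustrative).
papers = [
    {"computer science", "communication"},
    {"computer science", "sociology"},
    {"sociology", "communication"},
    {"computer science", "communication", "political science"},
]

node_size = Counter()    # node size: number of articles per discipline
edge_weight = Counter()  # link weight: articles drawing on both disciplines

for disciplines in papers:
    node_size.update(disciplines)
    edge_weight.update(combinations(sorted(disciplines), 2))

print(node_size["computer science"])                       # 3
print(edge_weight[("communication", "computer science")])  # 2
```

Each paper contributes one count to every discipline it is coded with, and one count to every pair of disciplines it spans; visualizing the resulting weighted network yields a diagram of the kind shown in Fig. 1.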
As seen in Fig.1, social cybersecurity is very much a computational social sci-
ence, with a strong interdisciplinary focus drawing on the social and communica-
tion and computer sciences. The dominant methods are social network analysis,
data mining, and artificial intelligence (which includes language technologies and
machine learning). The areas that have dominated research in this area are largely
ones that draw on theories from diverse disciplines.
Within social cybersecurity, artificial intelligence is coupled with social network analysis to provide new tools and metrics to support the decision maker. Recent research in social cybersecurity is enabling new tools to support research methodology and metrics-based decision making for communicators. The following case studies highlight the types of research findings made possible in the area of social cybersecurity using these new tools. After presenting these case studies we then turn to a discussion of a new orienting approach for doing research in social cybersecurity, referred to as the BEND framework. Collectively, these items provide a glimpse into the core of this emerging scientific field.
What are the dominant themes in social cybersecurity? As can be seen in Fig. 2, the dominant research area currently is disinformation. This is followed by research on user behavior and networks on the web, and then research on politics and democracy. In Fig. 2 each node represents a research topic and the size of the node reflects the number of articles on that topic. There are two caveats. First, the size of the disinformation node is growing rapidly with all the new papers related to COVID-19 and the elections. Second, the reader may wonder why privacy does not appear. This is because privacy was viewed as a separate field unto itself, and papers in that area were not included in the analysis.
2 Case study 1: building community in social media
Fig. 2 Research topic areas in social cybersecurity

In Ukraine there was a group of young men sending out provocative images of women. They didn't know each other; they were just posting images they liked. Bots were used in an influence campaign to send out tweets mentioning each other and multiple of these young men at once. This led the men to learn of others who, like
them, were sending out these images. They formed an online group—a topic oriented community. Once formed, the bots now and then tweeted information about where to get guns and ammunition, and how to get involved in the fight. Why did this work?
The cyber landscape is populated by topic oriented communities—groups of actors all communicating with each other about a topic of interest. Each actor can be in many topic oriented communities. Actors can be people, bots, cyborgs (a person with bot assistance), trolls (a person seeking to disrupt, or a corporation or government account using a fake persona and often engaging in hate speech and identity bashing), and so forth. Members of a topic oriented community are loosely connected by the fact that they interact with each other. For example, they might friend, follow, retweet, mention, reply to, quote, or like each other. Some actors will be opinion leaders, some will have a disproportionate ability to get messages to the community (super-spreaders), some will be highly involved in the mutual give and take of an on-going discussion (super-friends), and some will just be lurking on the sidelines. The members of topic oriented communities are also loosely connected because they are sending or receiving messages about the same topics. For example, they are all discussing the Army-Navy game. Some actors will be more actively engaged and send more messages. Topic oriented communities range in size and in how they are organized; e.g., plane spotters are a vast community that is only slightly connected. Through new tools and research methodologies that measure communication impacts via social media, it is now possible to measure and visualize how topic oriented communities that become overly connected become echo-chambers. (Note: An echo-chamber is a pathologic form of a topic oriented community in which the level of connection is extremely dense and the topic highly shared and narrow.) Messages sent within echo-chambers reach all members quickly, and such groups can often be readily excited to respond emotionally rather than rationally to outside information.
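The distinction between a loosely connected community and an echo-chamber can be made concrete with network density, the fraction of possible ties that are present. A minimal sketch follows; the 0.8 cutoff is an illustrative assumption, not an empirically validated echo-chamber threshold.

```python
def density(n_nodes, edges):
    """Fraction of possible undirected ties that are actually present."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible if possible else 0.0

# Illustrative threshold only, not an empirically validated cutoff.
ECHO_THRESHOLD = 0.8

loose_community = {("a", "b"), ("c", "d")}          # slightly connected
echo_chamber = {("a", "b"), ("a", "c"), ("a", "d"),
                ("b", "c"), ("b", "d"), ("c", "d")}  # every pair tied

print(density(4, loose_community))  # about 0.33 (2 of 6 possible ties)
print(density(4, echo_chamber))     # 1.0, flagged as echo-chamber-like
```

A topic oriented community whose density stays well below the threshold behaves like the plane-spotter example above; one whose density approaches 1.0 exhibits the dense, narrow connectivity that characterizes an echo-chamber.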
In Ukraine, bots were used to send communications which basically introduced these young men to each other through the use of @mentions. These bots also sent provocative images. The young men then began to follow each other, forming a topic oriented community. In Ukraine, influencers created or controlled the bots that conducted a “build” campaign to misinform (engendering social connections between the young men by mentioning them together). At the same time, they conducted an “enhance” campaign by rebroadcasting some images and pointing to others, and an “excite” campaign with new positive language. Once the group was established, a “distort” campaign appeared, bringing in information relative to the revolution.

For additional details on this case study see Benigni et al. (2019).
3 Case study 2: increasing communicative reach in social media
Syrian expats and sympathizers with ISIS were engaged in social media conversations. This included listening to the preachings of a prominent Imam. A group of actors infiltrated this group and redirected attention to a site collecting money for the children of Syria. How was this done?
In social media, your followers may not receive your messages, or your messages may not be prioritized so that they appear prominently to those concerned with your messages. Social media platforms use your social network position (how you are connected to others) and the content of your message to decide whom to recommend your message to, when, and in what order. Whom you mention in posts, which hashtags you use, whether you use memes or link to YouTube videos, the frequency with which you post, and the number of others who follow you or like your posts all impact whether your message is prioritized.
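The actual ranking algorithms of social media platforms are proprietary, but the idea that network position and engagement features combine into a priority score can be sketched with a toy weighted model. The features and weights below are illustrative assumptions only, not any platform's real algorithm.

```python
import math

def priority_score(post):
    """Toy ranking score. Real platform algorithms are proprietary;
    the features and weights here are illustrative assumptions only."""
    return (0.4 * math.log1p(post["followers"])
            + 0.3 * math.log1p(post["likes"])
            + 0.2 * post["mentions"]
            + 0.1 * post["hashtags"])

feed = [
    {"id": "quiet",   "followers": 50, "likes": 1,  "mentions": 0, "hashtags": 0},
    {"id": "boosted", "followers": 50, "likes": 40, "mentions": 3, "hashtags": 2},
]
ranked = sorted(feed, key=priority_score, reverse=True)
print([p["id"] for p in ranked])  # ['boosted', 'quiet']
```

Even with identical follower counts, the post with more likes, mentions, and hashtags rises to the top of the ranking; this is the mechanism that the bot campaigns in the case studies exploit.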
In this Syrian ex-pat community, influencers created and controlled a social influence bot, the Firibi gnome bot. This bot was used to conduct a sophisticated influence campaign. Multiple copies of the bot were released, which proceeded to send out messages mentioning each other and so engaged in a “build” campaign to misinform. The result was a topic oriented community of bots, which meant that messages from any one bot would be recommended to others interested in similar topics. Then these bots started following and retweeting messages from an Imam, who may not have been aware of this activity. This “boosted” the social influence of the Imam and “engaged” the bots with the community. Since the Imam was a super-spreader, this also meant that messages from the Firibi gnome would be prioritized to the Imam's followers. Then the Firibi gnome bot engaged in an “enhance” campaign and started sending messages recommending the charity website. This message was then prioritized.
For additional details on this case study see Benigni et al. (2017).
4 Case study 3: conspiracies in social media
As COVID-19 spread, so too did disinformation regarding the pandemic. Thousands of disinformation stories were spread, focusing on false cures and prevention techniques, false characterizations of government response, and claims of leaders having COVID even when they did not. Throughout, a number of conspiracy stories appeared and began to gain traction. How was this done?
In social media, messages gain traction through amplification and saturation. A message from a single actor or site can be amplified and spread by bots and trolls. This means those messages will get shared disproportionately and may even be made to trend. The more a story is shared, the more it gets prioritized to individuals who do not pay attention to the original source but may be following users who follow those who follow the source. Further, in social media the presence of a story on two or more platforms does not mean that the story has been independently validated. Indeed, stories often appear on one platform, and bots and trolls are used to push them to other platforms. Marketing firms hired to spread disinformation can place the same story simultaneously in different forms on each of the platforms. Twitter, e.g., is often used to garner attention to YouTube videos and to promulgate stories that first appear in blogs or on Facebook. The more platforms a story appears on, the more it saturates the digital space, and the more real it seems, particularly when it is accompanied by images and videos and supported by celebrities and authorities.
A number of conspiracies surrounded COVID-19. One is that the virus was created in a US lab and carried to Wuhan by US soldiers engaged in a war game; another is that it was created by Bill Gates and then spread as step 1 in a plan to create a new world order. Stories regarding some of these conspiracies appeared on Chinese state-sponsored media. Bots surrounding these media then retweeted the stories—thus amplifying their reach. Related stories and even a “plandemic” video were released on multiple media, e.g., Facebook, YouTube, Twitter, and Instagram. Bots further amplified these messages or sent messages with the related URLs. Trolls employed hate speech to denigrate those who tried to counter the conspiracy messages, were implicated by the conspiracy messages, or were anticipated not to believe the conspiracy messages. The same conspiracy stories, often providing additional details, also appeared on purported “fake news” news-sites, which are websites that purport to be news agencies but either are not news sites or have dubious editorial procedures and are known for spreading disinformation. Large numbers of bots surround these sites and would send out messages with URLs to these sites. Even larger numbers of bots retweeted messages referencing these sites. The result was a topic oriented community of conspiracy theorists, bots, and trolls around various conspiracy thrusts, an increase in the number of conspiracies that were spreading, the re-appearance of conspiracy stories even after they were banned, and increasingly elaborate conspiracies such as the new world order.
For additional details on this case study see Huang (2019).
5 The BEND framework
The foregoing case studies indicate the types of issues that need to be considered
by a social cybersecurity researcher or practitioner. They also suggest the need for
new technologies such as those to identify disinformation, bots, trolls, cyborgs and
memes. Finally they point to the need for new theories to make sense of the way
in which influence plays out in social media. One such transdisciplinary theory is
referred to as the BEND framework.
Influence campaigns are often described in terms of the 4Ds—distract, distort, dismay, and disrupt (Nimmo 2015). These are often used to describe information operations by Russia. However, as the three case studies illustrate, influence operations don't involve just messages with distract, distort, dismay, or disrupt campaigns. Our research suggests that a broader understanding of information maneuvers is needed. Specifically, information campaigns that are successful typically impact both community and narrative. That is, maneuvers are conducted that alter both who is communicating with whom as well as what is being communicated. Further, the 4Ds are essentially negative maneuvers; that is, they are problem-creation not problem-solution maneuvers. In Ukraine, images were used to excite; in Syria, messages were used to explain; in COVID-19, explain and enhance messages were used. While in each case these are not the only kinds of messages used, the point is, there was more than just the 4Ds.

The BEND framework argues that influence campaigns are comprised of sets of narrative and structural maneuvers, carried out by one or more actors by engaging
others in the cyber environment with the intent of altering topic-oriented communities and the position of actors within these communities. A topic-oriented community is a group of actors who are more or less talking to each other about more or less the same thing. This engagement with the topic-oriented community is often assisted by or carried out by bots, trolls, and cyborgs in addition to human users. This engagement is aimed at manipulating either or both the narrative (what is being talked about) and the community (who is talking to whom). Bots, trolls, cyborgs, and humans engage with others in cyberspace in ways designed to, and send messages that are constructed to, take advantage of three things: the technology, the mind and emotions, and the world view. Technology: these activities are designed to exploit the algorithms that prioritize the order in which messages are presented and the recommendation algorithms, so that messages selected by the perpetrator appear first, often, and trend, and the goods, services, URLs, and actors they mention are recommended to the readers. Mind and emotion: these activities are designed to exploit natural human biases and reflexes such as confirmation bias, escalation of commitment, and the fight-or-flight reflex. World view: these activities are designed to make use of human social cognition, the set of heuristics we use to make sense of vast quantities of data in terms of groups—such as the generalized other and stereotyping.
In social cybersecurity, theories and methods go hand-in-hand. Hence, associated with the BEND theory is a methodology for empirically assessing which maneuvers are being used and for measuring the impact of social media communication research, planning, and objectives (Beskow and Carley 2019). The BEND framework thus has associated with it a set of methods and tools for looking at who engaged in what information maneuvers directed at whom with what impact. The BEND framework characterizes communication objectives, and so the maneuvers, into 16 objectives: 8 are aimed at shaping the social networks of who is communicating with whom and 8 are aimed at shaping the narrative. For the social network, there are four positive objectives (the four B's) and four negative objectives (the four N's). Similarly, for shaping the narrative there are four positive objectives (the four E's) and the four traditional negative objectives (the four D's). These are described in Table 1.
The BEND framework is the product of years of research, beginning in late December of 2013, on disinformation and other forms of communication-based influence campaigns, and on the communication objectives of Russia and other adversarial communities, including terror groups such as ISIS. It draws on findings regarding the communication objectives and tactics of adversarial actors (Benigni et al. 2017; Lucas and Nimmo 2015; Manheim 1994), political influence (Howard and Kollanyi 2016; Howard et al. 2018; Huckfeldt and Sprague 1995), marketing (Webster 2020), and psychology (Sanborn and Harris 2013). The BEND framework addresses these communication objectives and tactics from a transdisciplinary perspective. Early evidence suggests that excite, enhance, dismay, and distort may be the most common communication objectives used to spread disinformation.
The BEND framework is more than a description of the maneuvers shown in Table 1. It begins with identifying the type of user who is conducting a maneuver or is targeted by such a maneuver. Actors are characterized by whether they are bots, trolls, news agencies, government actors, celebrities, or are influential super-friends
Table 1 BEND communication objectives

Manipulating the narrative
Positive (the four E's):
Engage: Messages that bring up a related but relevant topic
Explain: Messages that provide details on or elaborate the topic
Excite: Messages that elicit a positive emotion such as joy or excitement
Enhance: Messages that encourage the topic-group to continue with the topic
Negative (the four D's):
Dismiss: Messages about why the topic is not important
Distort: Messages that alter the main message of the topic
Dismay: Messages that elicit a negative emotion such as sadness or anger
Distract: Messages about a totally different and irrelevant topic

Manipulating the social network
Positive (the four B's):
Back: Actions that increase the importance of the opinion leader or create a new opinion leader
Build: Actions that create a group or the appearance of a group
Bridge: Actions that build a connection between two or more groups
Boost: Actions that grow the size of the group or make it appear that it has grown
Negative (the four N's):
Neutralize: Actions that decrease the importance of the opinion leader
Nuke: Actions that lead to a group being dismantled or breaking up, or appearing to be broken up
Narrow: Actions that lead to a group becoming sequestered from other groups or marginalized
Neglect: Actions that reduce the size of the group or make it appear that the group has grown smaller
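The 16 maneuvers in Table 1 can be encoded as a small data structure, which is useful when tagging messages or actions with maneuver labels. This is only a sketch of the taxonomy itself; the actual BEND measures in ORA-PRO are far more elaborate than a lookup table.

```python
# The 16 BEND maneuvers from Table 1: 8 narrative (E's and D's)
# and 8 social-network (B's and N's) objectives.
BEND = {
    ("narrative", "positive"): ["Engage", "Explain", "Excite", "Enhance"],
    ("narrative", "negative"): ["Dismiss", "Distort", "Dismay", "Distract"],
    ("network", "positive"):   ["Back", "Build", "Bridge", "Boost"],
    ("network", "negative"):   ["Neutralize", "Nuke", "Narrow", "Neglect"],
}

all_maneuvers = [m for group in BEND.values() for m in group]
print(len(all_maneuvers))  # 16

def valence(maneuver):
    """Look up what a maneuver manipulates and whether it is positive or negative."""
    for (target, sign), names in BEND.items():
        if maneuver in names:
            return target, sign
    raise KeyError(maneuver)

print(valence("Nuke"))  # ('network', 'negative')
```

Grouping the maneuvers by target (narrative vs. network) and valence mirrors the structure of Table 1 and makes it straightforward to aggregate, for a given campaign, how much effort went into each quadrant.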
(those with a high number of reciprocated ties in social media), super-spreaders (those who reach a high number of others with their messages, e.g., a large number of followers), or are otherwise influential in social media (e.g., send a large number of messages). Actors being targeted can be topic oriented communities or individual actors. Then each of the messages is characterized for which of the 16 maneuvers the message is consistent with, using a set of discrete measures. Finally, impact in terms of change in the target is assessed from a social network perspective and a content perspective, exploring how the target has changed over time.
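The super-spreader and super-friend distinction can be illustrated directly from directed interaction edges: reach corresponds to out-degree, while super-friendship corresponds to reciprocated ties. The edge list below is toy data, and this degree-counting sketch is far simpler than the indicators actually used in ORA-PRO.

```python
from collections import Counter

# Toy directed interaction edges (sender, receiver), e.g. mentions or retweets.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"),
         ("b", "f"), ("f", "b"), ("b", "g"), ("g", "b")]
edge_set = set(edges)

out_degree = Counter(s for s, _ in edges)                        # reach
reciprocated = Counter(s for s, r in edges if (r, s) in edge_set)  # mutual ties

# Super-spreader: highest reach; super-friend: most reciprocated ties.
super_spreader = max(out_degree, key=out_degree.get)
super_friend = max(reciprocated, key=reciprocated.get)
print(super_spreader, super_friend)  # a b
```

Note that the two roles need not coincide: actor "a" broadcasts to four others who never respond, while actor "b" has fewer ties but both of them are mutual.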
Associated with the BEND maneuvers are a series of measures and indicators for each of the objectives. These have been operationalized, are now part of the ORA-PRO social media tools, and have been tested on Twitter data. They were used in assessing data during various NATO exercises, Naval exercises, elections, and disasters. We find that in many cases complex influence campaigns involve using multiple BEND objectives, as was described in the three case studies.
6 Using social network analysis and artificial intelligence
One of the key tools in social cybersecurity is high dimensional dynamic social network analysis. Social network analysis is the analysis of who interacts with whom. Network techniques have long been used in intelligence for identifying groups and tracking adversarial actors, and by marketers for identifying key informants and opinion leaders. With social media, such techniques have been expanded to enable scalable solutions for massive data that take into account multiple types of relations among actors as well as relations among resources, ideas, and so forth. Today such high dimensional dynamic network techniques underlie social media analysis. The two types of interaction data—(1) who likes or retweets whom and (2) the content of the messages—are treated as networks. The techniques to identify these interactions are embedded in ORA-PRO and are used for identifying topic-groups and the influential actors within these groups; this depth of analysis is not possible with other off-the-shelf analysis tools. Running social network techniques on social media provides indicators that can then be used in machine learning tools to identify actors and messages of interest such as bots, cyborgs, and trolls.
Artificial intelligence (AI) techniques, particularly machine learning and natural language processing techniques, are also key tools in social cybersecurity. AI, and particularly machine learning (ML), are often pointed to as force multipliers in dealing with the vast quantity of digital data available today. Such technologies are clearly of value; however, they are not the panacea envisioned. The problems faced by the military in social cyberwar are continually changing and often occur only once; thus, new techniques for responding are continuously needed. Further, current AI and ML techniques are often focused on easily measured data rather than the more volatile socio-political-cultural context.
Language technologies are used for translation, sentiment analysis, and stance detection. Most sentiment tools simply inform the reader whether a message containing a word of interest is positive or negative, which often has no relation to the sentiment about the word of interest. We find that as much as 50% of the time the sentiment toward the word of interest is the opposite of the sentiment of the message as a whole. Consider the sentiment toward the U.S. in the message “I hate Russian interference in social media and treatment of the U.S. as evil”. The sentiment of the message as a whole is negative, but it is positive toward the U.S. In contrast, the NetMapper system used with the BEND framework identifies the sentiment about the word of interest, and measures a set of subconscious cues in the message to assess the sender's emotional state.
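The gap between message-level and target-level sentiment can be demonstrated with a toy lexicon-and-window sketch. This is purely illustrative; NetMapper's actual cue-based method is far more sophisticated, and the lexicon and two-word window below are assumptions made for the example.

```python
# Toy sentiment lexicon (illustrative only).
POS = {"great", "love", "good"}
NEG = {"hate", "bad", "evil"}

def score(words):
    """Net sentiment: +1 per positive word, -1 per negative word."""
    return sum((w in POS) - (w in NEG) for w in words)

def targeted_score(words, target, window=2):
    """Sentiment of only the words within `window` positions of the target."""
    i = words.index(target)
    context = words[max(0, i - window): i + window + 1]
    context.remove(target)
    return score(context)

msg = "i hate the bad press smearing the great new policy".split()
print(score(msg))                     # -1: the message as a whole reads negative
print(targeted_score(msg, "policy"))  # +1: sentiment toward "policy" is positive
```

A whole-message tool would label this post negative and, by naive attribution, report negative sentiment toward "policy", even though the words around the target are positive, which is precisely the mismatch described above.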
Machine learning techniques are frequently used to identify bots, false statements, and messages on particular topics. An example is BotHunter. Such tools can rapidly estimate the likelihood that an actor is a bot. This can indeed support analysis and help a communicator understand an adversary's communication objectives. However, these tools, which are based on “supervised” learning, have a limited shelf life. First, they require large training sets. Training sets need to be created by humans tediously coding messages and the senders of messages into the categories required for the AI tool. Today, bots are evolving faster than the tools to find them, in large part because it takes too long to create training sets. Training sets are often biased—e.g., sentiment training sets are biased toward lower middle-class ways of expressing sentiment in English. The AI tools themselves give probability scores and no explanation of why they reached the conclusions they did. Bot detection tools often disagree because the tools were “trained” differently—leaving the ultimate decision in the hands of the analyst. These factors reduce how long these technologies will be useful and in what contexts. Today's technology advances are being made in developing AI techniques that do not require massive training sets and that provide explanations—BotRecommender is such a tool.
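The dependence of supervised detectors on their training sets can be seen even in a minimal model. The sketch below is a one-feature decision stump trained on made-up labeled examples; it is not BotHunter's actual feature set or model. The point it illustrates: whatever threshold is learned is only as good as the labeled data, so when bot behavior shifts, a detector trained on old labels quietly stops working.

```python
# Hypothetical labeled training data: (posts_per_day, is_bot).
train = [(120, 1), (95, 1), (80, 1), (10, 0), (25, 0), (40, 0)]

def fit_stump(data):
    """Pick the single posting-rate threshold that maximizes training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in data}):
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = fit_stump(train)
print(threshold)          # 80
print(100 >= threshold)   # True  — classified as bot
print(30 >= threshold)    # False — classified as human
```

A bot operator who simply slows accounts below the learned threshold evades this detector entirely, and fixing it requires a new hand-labeled training set, which is the shelf-life problem described above.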
For disinformation, the issues are legion and there are many types of disinformation, as illustrated in Table 2. Fact-checking tools using humans or human-AI teams are providing valuable guidance but so far take a long time to determine if a story contains an inaccuracy. Assessing intent is difficult—was the sender intentionally trying to deceive (disinformation) or were they just mistaken (misinformation)? Many disinformation campaigns are not based on inaccurate facts, but on innuendo, flights of illogic, reasoning from data taken out of context, and so on. Many times, stories labeled as disinformation are simply alternative interpretations of facts. AI only helps for some types of disinformation. It is less useful the more unique the storyline and the faster the story spreads.
AI techniques are useful only as part of the toolkit. AI can support classifying messages
by BEND objectives, or classifying the perpetrators into types such as bots, trolls, and news
agents. The BEND framework and associated tools, some of which employ AI, can be
used to assess how communications are spreading and to measure the impact. For example,
MemeHunter was used to identify a Russian influence campaign engaged
in a dismay objective, implying that, compared to Russia, NATO was weak because the
heads of many countries’ defense organizations were women rather than strong male military leaders. This
meme was spread by bots and humans alike.
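As a purely illustrative sketch of maneuver classification, the fragment below tags messages with BEND maneuver names using invented keyword lists; real systems such as NetMapper and ORA rely on linguistic cues and network structure, not keyword matching.

```python
# Naive keyword tagger, sketched only to make the idea concrete.
# The maneuver names follow the BEND framework; the cue lists are invented.
BEND_CUES = {
    "excite":  ["great news", "amazing", "can't wait"],
    "dismay":  ["weak", "hopeless", "disaster"],
    "dismiss": ["fake news", "nothing to see", "ignore"],
    "distort": ["what they won't tell you", "the real story"],
}

def tag_maneuvers(message):
    """Return the BEND maneuvers whose cue phrases appear in the message."""
    text = message.lower()
    return sorted(m for m, cues in BEND_CUES.items()
                  if any(cue in text for cue in cues))

print(tag_maneuvers("Compared to Russia, NATO looks weak and hopeless."))
# prints ['dismay']
```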
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Table 2 Types of “disinformation”

| Disinformation type | Example | Potential for AI techniques to detect |
| --- | --- | --- |
| Fake news (story made to look like news) | Naval destroyer crash in Hurricane Harvey | AI could be used to identify sites, and do fact checking |
| Fabrication with visual | Parkland student ripping up Constitution | AI could be used to create and identify fake images |
| Fabrication without visual | Opposition peso scam in the Philippines | AI might be of some assistance in finding all instances of the story |
| Propaganda | Duterte’s helicopter scaring off the Chinese | AI could help classify underlying BEND objectives |
| Conspiracy | Pizzagate | AI could be used to do fact checking |
| Misleading, due to misquoting | Captain Marvel: Brie Larson is a racist/sexist | AI could be used to do fact checking, and stance checking |
| Misleading, due to being out of context | Voting makes you lose your hunting license | AI might provide support tools |
| Innuendo and illogic | Anti-vax campaign | AI might provide some support but won’t solve |
7 Research directions
Social cybersecurity is an exciting and emerging field. The BEND framework
and its associated theory and methods, together with BotHunter, MemeHunter, and other
such tools, are examples of the kind of work central to this area. However, much
remains to be done. Indeed, there are seven core research areas.
1. Social Cyber-Forensics: Social cyber-forensics is concerned with identifying who
is conducting social cybersecurity attacks. Often the concern is with the type of
actor rather than the specific actor. Further, this can involve cross-platform assessment,
with the need to track down the source of information. New ways to track
and build linkages at scale are needed.
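One small piece of the linkage problem can be sketched as follows. This is a hypothetical illustration using only screen-name similarity; real cross-platform attribution would combine many signals (posting times, shared URLs, writing style), and all account names below are invented.

```python
import difflib

def link_accounts(platform_a, platform_b, cutoff=0.8):
    """Link accounts across two platforms by closest screen-name match."""
    links = []
    for name in platform_a:
        match = difflib.get_close_matches(name, platform_b, n=1, cutoff=cutoff)
        if match:
            links.append((name, match[0]))
    return links

accounts_a = ["patriot_eagle_1776", "daily_truth_news", "jane_doe"]
accounts_b = ["patriot.eagle.1776", "dailytruthnews", "unrelated_user"]
print(link_accounts(accounts_a, accounts_b))
```

The two coordinated-looking handles are linked while the ordinary account is not; at scale, candidate links like these would feed a human-in-the-loop forensic review rather than an automatic attribution.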
2. Information Maneuvers: The key here is to understand the strategies used to
conduct an attack and the intent of those strategies. Can we, for example, expand
upon BEND to identify sets of maneuvers that are consistently used together, or
in a particular order, to effect a particular impact? Improved abilities to detect
maneuvers and provide early warning that an attack is starting are needed, particularly
cross-platform.
3. Motive Identification: The goal here is to understand what the perpetrator’s motive
is. Why is the attack being conducted? Multiple motives have been seen. These
include conducting influence campaigns: for fun, to create havoc, to polarize
society, to alleviate boredom, for money, to market goods or
services, to gain personal influence, and to generate community. There are likely
to be other reasons as well. Being able to identify and track motives quickly and
at scale is an important area for new research.
4. Diffusion: In this area the objective is to trace, and even predict, the spread of an
influence campaign. A sub-aspect of this is to trace, and even predict, the movement
of the components of a campaign such as people, ideas, beliefs, memes,
videos, and images. Tracing the attackers and the impact of the attack across and
through multiple social media platforms is key. Live monitors that suggest when diffusion
is about to explode, peak, or peter out are needed, as are improved theories of and
methods for monitoring diffusion, particularly cross-platform.
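The kind of spread such monitors would track can be sketched with a minimal independent-cascade model; the follower graph and forwarding probability below are invented for illustration.

```python
import random

random.seed(7)

# Who sees a message when a node posts it (invented toy network).
followers = {
    "seed": ["a", "b", "c"],
    "a": ["d", "e"], "b": ["e", "f"], "c": [],
    "d": ["g"], "e": ["g", "h"], "f": [], "g": [], "h": [],
}

def cascade(start, p=0.6):
    """Independent cascade: each new adopter forwards to each follower with prob p."""
    adopted, frontier = {start}, [start]
    while frontier:
        nxt = []
        for node in frontier:
            for f in followers.get(node, []):
                if f not in adopted and random.random() < p:
                    adopted.add(f)
                    nxt.append(f)
        frontier = nxt
    return adopted

reach = cascade("seed")
print(len(reach), "accounts reached:", sorted(reach))
```

A live monitor would repeatedly re-estimate `p` and the frontier size from observed reshares to flag when a cascade is about to explode or peter out.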
5. Effectiveness of Information Campaigns: The goal of this area is to quantify the
effectiveness of the social cybersecurity attack. This includes both the short-term
and the long-term impact. It also involves creating improved measures of impact,
such as polarization or mass hysteria, rather than the traditional measures of
reach such as number of followers, likes, and recommendations. While some measures
cannot be computed in real time, real-time estimates of potential impact would
be of value. New theories about impact and effect, as well as new techniques to
measure effect, are needed.
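One candidate impact measure beyond follower counts, sketched here on an invented network, is the Krackhardt-Stern E-I index: the proportion of ties across groups (external) versus within groups (internal). Values near -1 suggest a segregated, polarized network.

```python
def ei_index(edges, group):
    """E-I index: (external - internal) / total ties; -1 fully segregated, +1 fully mixed."""
    external = sum(1 for u, v in edges if group[u] != group[v])
    internal = len(edges) - external
    return (external - internal) / (external + internal)

# Invented two-community network with a single bridging tie.
group = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # dense within group 0
         ("d", "e"), ("e", "f"), ("d", "f"),   # dense within group 1
         ("c", "d")]                           # one bridge

print(round(ei_index(edges, group), 2))  # -0.71: mostly internal ties
```

Tracked over time, a falling E-I index would be one real-time signal that a campaign is succeeding at polarizing a community.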
6. Mitigation: There are two related goals here. The first is to understand how a
social cybersecurity attack can be countered or mitigated. The second is to understand
how communities can become more resilient to attacks. Many different avenues
of research can be pursued here. Some examples are the use of agent-based models
to assess the relative impact of interventions, scalable techniques for teaching
critical thinking for social media, and basic research on the characteristics of
resilient communities. New empirical results, transdisciplinary theories, models,
and ways of measuring resilience and mitigation in this space are needed.
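A minimal agent-based sketch of the first kind of study: agents on a ring adopt a narrative once enough neighbors have, and a hypothetical "media literacy" intervention makes a fraction of agents resistant. All parameters are invented for illustration.

```python
import random

def run(n=200, rounds=30, neighbors=4, threshold=2, literacy=0.0, seed=1):
    """Return final adoption count under a given literacy-intervention rate."""
    rng = random.Random(seed)
    adopted = [i < 5 for i in range(n)]              # 5 initial spreaders
    resistant = [rng.random() < literacy for _ in range(n)]
    for _ in range(rounds):
        new = adopted[:]
        for i in range(n):
            if adopted[i] or resistant[i]:
                continue
            circle = [(i + d) % n for d in range(1, neighbors + 1)]
            if sum(adopted[j] for j in circle) >= threshold:
                new[i] = True
        adopted = new                                 # simultaneous update
    return sum(adopted)

baseline = run(literacy=0.0)
treated = run(literacy=0.4)
print(baseline, treated)  # the intervention should shrink final adoption
```

Even a toy model like this lets one compare interventions (literacy rates, seeding counts, thresholds) before committing to expensive field experiments.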
7. Governance: The objective here is to understand what policies and laws are
needed so that people can continue to use the internet without fear of undue influence,
and so that an informed democracy can survive. This is a key area as it brings
together issues of legality, rights, and education. This area needs to bring together
all the diverse perspectives and diverse knowledge to develop actionable governance.
8 Conclusion
As noted, social cybersecurity is an emerging scientific and engineering discipline.
While there are thousands working in this space, more research, and more coordination
of that research, is needed. Work in this area began as interdisciplinary and is
becoming transdisciplinary. Two words of caution. First, for individuals new to the
area it is easy to conclude that little is known and that only a few individuals
are working in this area. There are several reasons for this. First, the research
is spread across hundreds of venues, with no one conference or journal being dominant.
Second, there is some research in most disciplines, but in each of the extant disciplines
this is a fringe area. What we have found is that most researchers in this area
do not know of others outside their own group. Often faculty in this area do not even
know of others in their own university. Greater outreach and better ways of collaborating
and coordinating across groups are needed. This is beginning to happen. In Fig. 3, the
collaboration network, based on who co-authors with whom, for 2018 and late 2019
is shown. As can be seen, the central core has grown, and there are now links where
none existed before. This growth is largely due to the Department of Defense Minerva
program and the Knight Foundation, both of which began to support research in
social cybersecurity and to encourage collaboration.
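The construction behind a co-authorship network like the one in Fig. 3 can be sketched in a few lines: every pair of authors on a paper gets a link, weighted by the number of papers the pair shares. The paper data below are invented.

```python
from collections import Counter
from itertools import combinations

# Invented author lists, one per paper.
papers = [
    ["Author A", "Author B"],
    ["Author A", "Author B", "Author C"],
    ["Author C", "Author D"],
]

# Weighted edge list: each co-authored paper adds 1 to the pair's link weight.
edges = Counter()
for authors in papers:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1

print(dict(edges))
```

Comparing two such edge sets built from different years shows exactly the growth described above: new links where none existed, and heavier weights in the core.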
Second, it is easy to think of this area as one where computer science and artificial
intelligence will provide the solutions. Artificial intelligence solutions often aim
at the easy things, such as fact checking, and currently require large amounts of training
data. But this is a fast-moving area where training sets are difficult to come by and
are often out of date by the time they are created. It is easy to fall prey to stories that
claim success because they mined extremely large data sets. But this fails to recognize
that what is being discovered is mean behavior, and that most social change
and social activity is on the fringe and in the margins. What both approaches fail to
recognize is that at its heart, social cybersecurity is about people as social beings,
and it is people as social beings who are impacting and being impacted. To be sure,
artificial intelligence and data science are critical to this area; however, they should
be in supporting positions, not the driver’s seat. What is needed is a socially informed
computational social science, led by humans as social beings.
Acknowledgements This paper is the outgrowth of research in the center for Computational Analysis of
Social and Organizational Systems (CASOS), and the center for Informed Democracy and Social-cybersecurity
(IDeaS) at Carnegie Mellon University. This work was supported in part by both centers, the
Knight Foundation, and the Office of Naval Research through the Minerva program N00014-16-1-2324
and the Office of Naval Research grants N000141812108 and N00014182106. The views and conclusions
contained in this document are those of the authors and should not be interpreted as representing the official
policies, either expressed or implied, of the Knight Foundation, the Office of Naval Research, or the U.S.
Government.

Fig. 3 Evolving co-authorship network. The top image shows the central core in 2018 and the bottom
image shows the central core in 2019. Each node is an author, and the links are weighted by the number of
papers those two authors co-authored
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is
not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission
directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licen
Kathleen M. Carley received the Ph.D. degree from Harvard University, Cambridge, MA, USA in 1984
in sociology; two S.B. degrees from the Massachusetts Institute of Technology, Cambridge, MA,
USA in 1978, in political science and in economics; and an H.D. from the University of Zurich. She is a
professor at Carnegie Mellon University in the Institute for Software Research in the School
of Computer Science, in Pittsburgh, PA, USA, where she directs the center for Computational Analysis of
Social and Organizational Systems (CASOS) and the center for Informed Democracy and Social-cybersecurity
(IDeaS). She is the CEO and President of Carley Technologies Inc., d.b.a. Netanomics, in
Sewickley, PA, USA. Her research examines complex socio-technical systems using high-dimensional
dynamic networks, agent-based models, machine learning, and text mining.
... As online manipulation has gained increased attention, several new analytical frameworks have been developed to characterize online manipulation techniques. The BEND framework examines how actors use narrative and structural maneuvers such as creating excitement or bridging groups in targeting and manipulating actors and groups (Carley, 2020), which will be discussed in the next section; the ABC(D) framework describes the actors-behavior-content-distribution of the manipulation (Alaphilippe, 2020); and the SCOTCH (source, channel, objective, target, composition, hook) framework presents a campaign overview to summarize the actions (Blazek, 2021). ...
... To characterize online manipulation, we used the BEND framework (Carley, 2020). We performed annotation of the BEND maneuvers in each of the one-week networks using the built-in BEND maneuver annotation in the ORA-PRO software v. (Altman et al., 2020). ...
Full-text available
Social media platforms are information battlegrounds where actors or communities compete to influence ideas and beliefs. These platforms can benefit government and health organizations by quickly disseminating pertinent information about the COVID-19 vaccine to a large population. However, at the same time, the social-cyberspace domain has made it easy for counter-messages to gain mainstream support and widespread propagation. What were once isolated fringe groups can now distribute—on a massive scale—false healthcare narratives in the form of disinformation and conspiracy theories. Twitter is a popular online medium where COVID-19 pro- and anti-vaccine communities strive to convince large audiences of their particular stances. This chapter explores how these competing groups use different types of online manipulation techniques to spread information and influence their followers on Twitter. Using a corpus of COVID-19 tweets, we identify the key players and communities spreading the pro-vaccine- and anti-vaccine-related messages and their beliefs. We then analyze the labeled messages to determine how messages influence targets and impact the overall network using a novel influence assessment approach referred to as the BEND framework.
... Une tendance forte consiste aujourd'hui à faire converger différentes disciplines de manière à considérer ces questions de sécurité dans un cadre moins dispersé et plus complet, car moins orienté strictement sur la technique informatique ou sur les technologies de laboratoire, mais davantage vers les problèmes à résoudre dont la nature est aussi sociale, économique ou juridique. La sociologie et la criminologie y participent dans une constellation d'autres disciplines (Carley 2020). La notion de trace s'intègre progressivement au débat (Pollitt et al. 2018). ...
Le rôle de la police scientifique est d’abord d’exploiter les traces laissées lors d’activités criminelles. Elle est aujourd’hui équipée de technologies de traçabilité si puissantes que celles-ci ont, en peu de temps, démultiplié la quantité et la variété de données mises à disposition de l’enquête judiciaire et du renseignement criminel. Or cette évolution rapide a paradoxalement eu pour conséquence une remise en question du rôle, du statut et de l’action de la police scientifique : qu’attend-on aujourd’hui de ces services ? Que sont-ils supposés conclure à partir de données devenues aussi considérables que spécialisées et fragmentées ? L’auteur décrit comment la police scientifique évolue vers une nouvelle discipline appelée « traçologie ». Celle-ci s’oppose à l’hyper-spécialisation en encourageant les professionnels à adopter une vision d’ensemble essentielle pour résoudre des enquêtes complexes, analyser la criminalité sérielle et renseigner l’action de sécurité. Un ouvrage manifeste, principalement destiné aux criminalistes et criminologues concernés par l’avenir de la police scientifique, mais aussi à tous les professionnels de la sécurité, qui trouveront dans ces pages des méthodes et des modèles directement applicables, aux étudiants en sciences criminelles, aux chercheurs en quête d’interdisciplinarité et au public intéressé par les méthodes d’investigation et curieux d’en découvrir les arcanes.
... In literature, users engaged in conspiracy activities are often identified as accounts who employ specific conspiracy-related keywords or share URLs from conspiracy websites [28,29,30,20,31,21,32,33]. However, some of these users may be bots or trolls attempting to spread panic and skepticism in authorities by pushing alternative explanations for events [60,61,62,63]. Misinformed users may inadvertently spreading conspiracy theories [35,36,34]. ...
Full-text available
The discourse around conspiracy theories is currently thriving amidst the rampant misinformation prevalent in online environments. Research in this field has been focused on detecting conspiracy theories on social media, often relying on limited datasets. In this study, we present a novel methodology for constructing a Twitter dataset that encompasses accounts engaged in conspiracy-related activities throughout the year 2022. Our approach centers on data collection that is independent of specific conspiracy theories and information operations. Additionally, our dataset includes a control group comprising randomly selected users who can be fairly compared to the individuals involved in conspiracy activities. This comprehensive collection effort yielded a total of 15K accounts and 37M tweets extracted from their timelines. We conduct a comparative analysis of the two groups across three dimensions: topics, profiles, and behavioral characteristics. The results indicate that conspiracy and control users exhibit similarity in terms of their profile metadata characteristics. However, they diverge significantly in terms of behavior and activity, particularly regarding the discussed topics, the terminology used, and their stance on trending subjects. Interestingly, there is no significant disparity in the presence of bot users between the two groups, suggesting that conspiracy and automation are orthogonal concepts. Finally, we develop a classifier to identify conspiracy users using 93 features, some of which are commonly employed in literature for troll identification. The results demonstrate a high accuracy level (with an average F1 score of 0.98%), enabling us to uncover the most discriminative features associated with conspiracy-related accounts.
... As we traverse the philosophical nexus, we illuminate the profound implications that this breach has on the delicate balance between technological advancements, governance mechanisms, and international cooperation. Drawing from social theory [19], we analyze the cascading effects of this cyber event through the lens of Niklas Luhmann's Systems Theory [20]. The attack, a disruption in the intricate dance of communication channels, creates a ripple that extends beyond the digital realm. ...
Full-text available
Cybersecurity in politics has emerged as a critical and intricate realm intersecting technology, governance, and international relations. In this interconnected digital context, political entities confront unparalleled challenges in securing sensitive data, upholding democratic procedures, and countering cyber threats. This study delves into the multifaceted landscape of political cybersecurity, examining the evolving landscape of cyberattacks, their impact on political stability, and strategies for bolstering digital resilience. The intricate interplay between state-sponsored hacking, disinformation campaigns, and eroding public trust underscores the imperative for robust cybersecurity measures to safeguard political system integrity. Through an extensive exploration of real-world case studies, policy frameworks, and collaborative initiatives, this research illuminates the intricate network of technological vulnerabilities, geopolitical dynamics, and ethical concerns that shape the dynamic evolution of cybersecurity in politics. Amidst evolving digital landscapes, the imperative for agile and preemptive cybersecurity strategies is paramount for upholding the stability and credibility of political institutions.
... This has two implications: Theoretically, it implicates reconsidering the existing behavioral models of the social cyber-attack intentions in low resource languages such as Arabic. For example considering an extension of the BEND social-cyber security framework (Carley 2020). Practically, our research implicates the need to consider semantics and context to detect those attacks effectively. ...
Conference Paper
Full-text available
Social Cyber-attacks such as propaganda, conspiracy theories, anger, and hate discourse are very old phenomena that inflict harm to humans, organizations, national security, public officials, democratization efforts, careers, and policies. There have been significant efforts to identify anti-US speech on social media which includes propaganda. Such efforts largely ignore the attempts by other countries to manipulate social media in some regions including the Middle East. Research in this area is computational and solely focuses on fine tuning language models to detect general propaganda attacks. This paper addresses a new category of propaganda attacks that are tied to state-linked accounts that spread anti-US propaganda by taking advantages of specific geo-political crises in the Middle East. We investigated the role of general language models and training data to detect those forms of propaganda. Our study concludes that existing propaganda training data is unable to successfully detect targeted propaganda. We propose a contextualized span detection approach to identify these types of propaganda and show that our targeted training models work significantly better compared to the existing general propaganda detection approaches.
... [14] addressed the privacy issues at the level of individuals and developed a model to study how the consumers' concerns about privacy, security and trust in addition to their risk beliefs can impact their engagement in e-commerce transactions. Moreover [15] focused on the concept of social cybersecurity and how individuals can be compromised. The work in [16] propose a model for online retail industry to have a clear understanding of the factors influencing online consumers' intentions toward online purchase across gender. ...
Full-text available
As individuals increasingly engage with the digital landscape, they face a multitude of risks associated with their online activities and the security of their personal information. Individuals seek guidance in balancing the benefits and risks of the digital transformation. To effectively mitigate these risks, it is essential to establish a comprehensive Digital Risk Assessment Framework tailored to individual users. In this research, an a interpretive study have been carried out to propose a novel Digital Security Management Framework. The main contribution of this study is providing a novel approach by examining the recent recorded threats against individuals, quantifying these threats, and proposing a novel digital risk framework detailing the list of threats and the corresponding risk treatment options tailored for individuals. The scenario of the case study is a family that use personal computers to access banking and investment accounts online, engage in online shopping and also frequently use social media to share artwork and opinions. 17 types of digital risks were identified and the probability of loss and impact of each risk have been quantified using Bernoulli distribution f(L;p). The quantified values were used to prioritise mitigation measures. According to the results, and the proposed framework, suitable treatment option(s) was recommended for each risk. The results show that online scams present the biggest financial risk to individuals, that security incidents present a moderate risk, and that communication-based harms (e.g. bullying and radicalization) are difficult to quantify.
Conference Paper
The 2022 Russian invasion of Ukraine is a war being fought both on the physical battlefield and online. This paper studies Telegram activity in the first weeks of the invasion, applying social cybersecurity methods to characterize the information environment on a platform that is popular in both Ukraine and Russia. In a study of over 4 million Telegram messages, we find a contentious environment where channel discussions often contain content with the opposite stance of their associated main channels. We apply the BEND framework to characterize potential disinformation maneuvers on the platform, finding that the English-language community is the most contested in the information space. In addition to the specific analysis of the Russian invasion, we demonstrate the utility of Telegram as a useful platform in social cybersecurity research.
Conference Paper
Governments around the world leverage social media to enact public diplomacy. In this article, we examine Chinese diplomatic communication on Twitter during two highly controversial events through a social cybersecurity lens: then-Speaker Pelosi’s visit to Taiwan in early August 2022 and Taiwanese President Tsai’s visit to the U.S. in early April 2023. We identify a small set of Chinese state-affiliated accounts that consistently tweet the most and are retweeted the most, demonstrating the highly centralized nature of China’s external messaging. Using the BEND framework, we quantify social-cyber maneuvers used by the Chinese state to target U.S. and Taiwanese officials. We find they target individuals and ideas they perceive as direct challengers to the One China principle, neutralizing specific Taiwanese officials who support independence, while broadly dismissing and critiquing U.S. leaders, domestic affairs, and foreign policies. Our findings have implications for the study of online influence strategies and understanding China’s broader diplomatic goals.
Full-text available
Utilizing social media data is imperative in comprehending critical insights on the Russia–Ukraine cyber conflict due to their unparalleled capacity to provide real-time information dissemination, thereby enabling the timely tracking and analysis of cyber incidents. The vast array of user-generated content on these platforms, ranging from eyewitness accounts to multimedia evidence, serves as an invaluable resource for corroborating and contextualizing cyber attacks, facilitating the attribution of malicious actors. Furthermore, social media data afford unique access to public sentiment, the propagation of propaganda, and emerging narratives, offering profound insights into the effectiveness of information operations and shaping counter-messaging strategies. However, hardly any reported studies of the Russia–Ukraine cyber war have harnessed social media analytics. This paper presents a comprehensive analysis of the crucial role of social-media-based cyber intelligence in understanding Russia’s cyber threats during the ongoing Russo–Ukrainian conflict. This paper introduces an innovative multidimensional cyber intelligence framework and utilizes Twitter data to generate cyber intelligence reports. By leveraging advanced monitoring tools and NLP algorithms, like language detection, translation, sentiment analysis, term frequency–inverse document frequency (TF-IDF), latent Dirichlet allocation (LDA), Porter stemming, n-grams, and others, this study automatically generated cyber intelligence for Russia and Ukraine. Using 37,386 tweets originating from 30,706 users in 54 languages from 13 October 2022 to 6 April 2023, this paper reports the first detailed multilingual analysis of the Russia–Ukraine cyber crisis in four cyber dimensions (geopolitical and socioeconomic; targeted victim; psychological and societal; and national priority and concerns). It also highlights challenges faced in harnessing reliable social-media-based cyber intelligence.
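One term-weighting step of the kind of NLP pipeline described above, TF-IDF, can be sketched in a few lines: a term is weighted by its frequency in a tweet, discounted by how many tweets contain it. The tweet texts and resulting scores below are our own illustration, not data from the paper.

```python
import math

# Hand-rolled TF-IDF over a few made-up tweet texts.
tweets = [
    "phishing campaign targets bank customers",
    "ddos attack hits government portal",
    "bank portal restored after ddos attack",
]
docs = [t.split() for t in tweets]
n_docs = len(docs)

def tf_idf(term: str, doc: list[str]) -> float:
    """Term frequency in `doc`, discounted by how many tweets contain the term."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)  # document frequency
    return tf * math.log(n_docs / df)

# "bank" appears in two of three tweets but "phishing" in only one,
# so "phishing" is the more distinctive term for the first tweet.
print(tf_idf("phishing", docs[0]), tf_idf("bank", docs[0]))
```

A production pipeline would layer the paper's other steps (language detection, stemming, n-grams, LDA) on top of this representation.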
Full-text available
Political communication is the process of putting information, technology, and media in the service of power. Increasingly, political actors are automating such processes, through algorithms that obscure motives and authors yet reach immense networks of people through personal ties among friends and family. Not all political algorithms are used for manipulation and social control however. So what are the primary ways in which algorithmic political communication—organized by automated scripts on social media—may undermine elections in democracies? In the US context, what specific elements of communication policy or election law might regulate the behavior of such “bots,” or the political actors who employ them? First, we describe computational propaganda and define political bots as automated scripts designed to manipulate public opinion. Second, we illustrate how political bots have been used to manipulate public opinion and explain how algorithms are an important new domain of analysis for scholars of political communication. Finally, we demonstrate how political bots are likely to interfere with political communication in the United States by allowing surreptitious campaign coordination, illegally soliciting either contributions or votes, or violating rules on disclosure.
Full-text available
The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS’ unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and “lone wolf” attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS-supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS’ sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contributes to the group’s propaganda dissemination through retweets.
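The general idea of detecting a community from a handful of annotated accounts can be sketched as a toy label propagation over a retweet graph. This is not the IVCC algorithm itself (its details are not given here), only a simplified neighbor-majority stand-in over a hypothetical graph.

```python
from collections import Counter

# Hypothetical retweet neighborhoods: user -> users they interact with.
edges = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a"],
    "d": ["e"], "e": ["d", "f"], "f": ["e"],
}
labels = {"a": "OEC", "e": "other"}  # seed annotations

# Iterate: each unlabeled user takes the majority label of its
# already-labeled neighbors, until labels stop spreading.
for _ in range(5):
    for user, nbrs in edges.items():
        if user in labels:
            continue
        votes = Counter(labels[n] for n in nbrs if n in labels)
        if votes:
            labels[user] = votes.most_common(1)[0][0]

print(labels)
```

The actual IVCC approach additionally exploits the heterogeneous (multi-edge-type) structure of the network and a trained classifier rather than simple majority votes.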
In a constantly changing media landscape, A Cognitive Psychology of Mass Communication is the go-to text for any course that examines mass communication from a psychological perspective. Now in its seventh edition, the book continues its exploration of how our experiences with media affect the way we acquire and process knowledge about the world and how this knowledge influences our attitudes and behavior. Updates include end-of-chapter suggestions for further reading, new research and examples for a more global perspective, as well as an added emphasis on the power of social media in affecting our perceptions of reality and ourselves. While including real-world examples, the book also integrates psychology and communication theory along with reviews of the most up-to-date research. The text covers a diversity of media forms and issues, ranging from commonly discussed topics such as politics, sex, and violence, to lesser-studied topics, such as emotions and prosocial media. Readers will be challenged to become more sensitized and to think more deeply about their own media use as they explore research on behavior and media effects. Written in an engaging, readable style, the text is appropriate for graduate or undergraduate audiences. The accompanying companion website also includes resources for both instructors and students. For students: Chapter outlines and review questions. Useful links. For instructors: Guidelines for in-class discussions. Sample syllabus. Summaries.
Social influence bot networks are used to affect discussions in social media. While traditional social network methods have been used in assessing social media data, they are insufficient to identify and characterize social influence bots, the networks in which they reside, and their behavior. However, these bots can be identified, their prevalence assessed, and their impact on groups measured using high-dimensional network analytics. This is illustrated using data from three different activist communities on Twitter—the “alt-right,” ISIS sympathizers in the Syrian revolution, and activists of the Euromaidan movement. We observe a new kind of behavior that social influence bots engage in—repetitive @mentions of each other. This behavior is used to manipulate complex network metrics, artificially inflating the influence of particular users and specific agendas. We show that this bot behavior can affect network measures by as much as 60% for accounts that are promoted by these bots. This requires a new method to differentiate “promoted accounts” from actual influencers. We present this method. We also present a method to identify social influence bot “sub-communities.” We show how an array of sub-communities across our datasets are used to promote different agendas, from more traditional foci (e.g., influence marketing) to more nefarious goals (e.g., promoting particular political ideologies).
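The metric-inflation effect described above can be sketched with a toy mention network: a ring of bots that all @mention one target sharply raises that target's in-degree centrality. The accounts and numbers below are illustrative, not from the paper's datasets.

```python
def in_degree_centrality(edges, nodes):
    """Fraction of other nodes that mention each node at least once."""
    mentioners = {n: set() for n in nodes}
    for src, dst in edges:
        mentioners[dst].add(src)
    return {n: len(s) / (len(nodes) - 1) for n, s in mentioners.items()}

humans = ["alice", "bob", "carol", "dave", "target"]
edges = [("alice", "bob"), ("bob", "target"), ("carol", "dave")]

before = in_degree_centrality(edges, humans)["target"]

# Add five bots whose only activity is @mentioning the promoted account.
bots = [f"bot{i}" for i in range(5)]
bot_edges = [(b, "target") for b in bots]
after = in_degree_centrality(edges + bot_edges, humans + bots)["target"]

print(before, after)  # the target's centrality jumps once the bots mention it
```

Here the target goes from being mentioned by one of four other accounts to six of nine, which is why differentiating "promoted accounts" from actual influencers requires looking at who the mentioners are, not just the raw metric.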
Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns for both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.
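The concentration statistic reported above (under 1 percent of accounts generating almost a third of messages) is straightforward to compute from per-account message counts. The counts below are made up for illustration.

```python
def top_share(counts, fraction):
    """Share of all messages produced by the top `fraction` of accounts."""
    ordered = sorted(counts, reverse=True)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

# 100 hypothetical accounts: one hyperactive (bot-like), the rest modest.
msg_counts = [500] + [10] * 99
print(top_share(msg_counts, 0.01))  # top 1% produce about a third of messages
```

Skewed shares like this are a common first signal that automated accounts are concentrating the conversation.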
Democratic politics is a collective enterprise, not simply because individual votes are counted to determine winners, but more fundamentally because the individual exercise of citizenship is an interdependent undertaking. Citizens argue with one another, they inform one another, and they generally arrive at political decisions through processes of social interaction and deliberation. This book is dedicated to investigating the political implications of interdependent citizens within the context of the 1984 presidential election campaign as it was experienced in the metropolitan area of South Bend, Indiana. Hence, this is a community study in the fullest sense of the term. National politics is experienced locally through a series of filters unique to a particular setting. And this study is concerned with understanding that setting and its consequences for the exercise of democratic citizenship. Several different themes structure the undertaking: the dynamic implications of social communication among citizens, the importance of communication networks for citizen decision making, the exercise of citizen purpose in locating sources of information, the constraints on individual choice that arise as a function of contexts and environments, and the institutional and organizational effects that operate on the flow of information within particular settings. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Beskow DA, Carley KM (2019) Social cybersecurity: an emerging national security requirement. Military Review, March–April 2019. https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/Mar-Apr-2019/117-Cybersecurity/b/
Goolsby R (2020) Developing a new approach to cyber diplomacy. Future Force 6(2):8–15
Huang B (2019) Learning user latent attributes on social media. Ph.D. thesis, Institute for Software Research, Carnegie Mellon University
Huckfeldt RR, Sprague J (1995) Citizens, politics and social communication: information and influence in an election campaign. Cambridge University Press, Cambridge