Abstract

We present a framework to motivate safe online behavior that interprets prior research and uses it to evaluate some of the nonprofit online safety education efforts. Self-efficacy and response efficacy have the most consistent impact on safe behavior and also interact with risk perceptions. Fear is most likely to work when threat information is coupled with information about how to cope with the threat. Safety requires the time and expense of obtaining protective software and keeping it updated. When users are deeply involved in online safety, they are likely to carefully consider the pros and cons of arguments made for and against online safety practices. Collective moral responsibility encourages safe online behavior. The average user can be induced to take a more active role in online safety, and relatively modest, carefully targeted interventions can prove effective in promoting it.
Online safety is everyone’s responsibility—a concept much
easier to preach than to practice.
COMMUNICATIONS OF THE ACM March 2008/Vol. 51, No. 3 71
How can we encourage Internet users to assume more responsibility for protecting themselves online? Four-fifths of all home computers lack one or more core protections against virus, hacker, and spyware threats [6], while security threats in the workplace are shifting to the desktop [7], making user education interventions a priority for IT security professionals. So, it is logical to make users the first line of defense [10, 12]. But how?
Here, we present a framework to motivate safe online behavior
that interprets prior research and uses it to evaluate some of the
current nonprofit online safety education efforts. We will also
describe some of our own (i-Safety) findings [4] from a research
project funded by the National Science Foundation (see Table 1).
By Robert LaRose, Nora J. Rifon, and Richard Enbody
The most obvious safety message is fear. This strat-
egy is found at all online safety sites in Table 1.
Sometimes it works. Among students enrolled in
business and computer science courses, awareness of
the dangers of spyware was a direct predictor of
intentions to take protective measures [2].
More formally, threat appraisal is the process by
which users assess threats toward
themselves, including the severity
of the threats and one’s suscepti-
bility to them. Examples of these
and the other user education
strategies found here, along with
the names of related variables
found in prior research and
empirical evidence supporting
them, are shown in Table 2. The
subheadings in the table are organized around the headings in this
article and reflect key concepts in
Protection Motivation Theory
(threat appraisal, coping
appraisal, rewards and costs), the
Elaboration Likelihood Model
(involvement), and Social Cognitive Theory (self-reg-
ulation). The interested reader will find an overview
of these theories in [1].
It is unfortunate that communication about risk is surprisingly, well, risky. It often fails to motivate safe behavior or has weak effects. And there can be “boomerang effects,” named for the shape of the nonlinear relationships sometimes found between safe behavior and fear [8, 11]. Moderate amounts of fear encourage safe behavior. Low amounts of fear diminish safety, because the threat is not seen as important enough to address. However, intense fear can also inhibit safe behavior, perhaps because people suppress their fear rather than cope with the danger.
In our own research involving students from social
science courses (who were probably not as knowl-
edgeable as students depicted in [2], but perhaps
closer to typical users), we found a boomerang point-
ing in the opposite direction. Moderate levels of threat
susceptibility were the least related to safe behaviors
like updating security patches and scanning for spy-
ware, while users with both high and low levels of per-
ceived threat were more likely to act safely. The point
is, without knowing the level of risk perceived by each
individual, threatening messages have the potential to
discourage safe behavior.
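A boomerang is, statistically, a curvilinear relationship, which a purely linear model will miss. A minimal sketch with synthetic data (the data and the quadratic form are ours, purely to illustrate how such a pattern is detected):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: perceived threat susceptibility (1-7 scale) and a
# safe-behavior index with a U-shaped ("boomerang") relationship.
threat = rng.uniform(1, 7, 500)
behavior = 0.5 * (threat - 4) ** 2 + rng.normal(0, 0.5, 500)

# A straight line misses the pattern; adding a quadratic term captures it.
lin_coeffs = np.polyfit(threat, behavior, deg=1)
quad_coeffs = np.polyfit(threat, behavior, deg=2)

def r_squared(coeffs, x, y):
    """Proportion of variance explained by the fitted polynomial."""
    resid = y - np.polyval(coeffs, x)
    return 1 - resid.var() / y.var()

print("linear R^2:   ", round(r_squared(lin_coeffs, threat, behavior), 2))
print("quadratic R^2:", round(r_squared(quad_coeffs, threat, behavior), 2))
# The quadratic model fits far better, flagging the nonlinear relationship.
```

Testing for the quadratic term before designing a fear appeal is one way to avoid threatening the users who are already at the extremes of perceived risk.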
Users also evaluate their ability to respond to threats
by performing a coping appraisal. Building self-effi-
cacy, or confidence in one’s abilities and in the safety
measures used, is perhaps the most effective educa-
tion strategy. Self-efficacy is the belief in one’s own
ability to carry out an action in pursuit of a valued
goal, such as online safety. Perceived behavioral con-
trol is a related concept that builds on the notions of
controllability and perceived ease of use to predict
intentions to enact safety protections [2]. Self-efficacy is distinguishable from “actual” skill levels in that we may feel confident tackling situations we have not encountered before and, conversely, may not feel confident enacting behavior we mastered only during a visit from the IT person months ago.
Beliefs about the efficacy of safety measures are
also important. It’s called response efficacy in the pre-
sent framework although others have identified it as
the relative advantage of online protections [5]. Our
confidence in our computer’s capability to handle
advanced protective measures (computing capacity in
[5]) is another response efficacy issue. Self-efficacy
and response efficacy have the most consistent impact
on safe behavior across many safety issues, and we [4]
and others [2, 5] have verified their importance in the
online safety domain.
Efficacy has a direct impact on safe behavior, but
also interacts with risk perceptions. Fear is most likely to work when threat information is coupled with information about how to cope with the threat, since the coping information raises self-efficacy. When messages arouse fears but don’t offer a rational means for dealing with the fear, people are likely to deny that the danger exists or believe it is unlikely to affect them [11]. In Internet terms, that defines the “newbie.”
Not all user education sites include self-efficacy
messages; some that do set unrealistic expectations:
“You can easily keep yourself safe if you just perform
these two dozen simple network security tasks daily.”
Still, persuasion attempts are a proven approach to building self-efficacy; anxiety reduction is another. But both can backfire if safety measures are complex, are perceived to be ineffective, or have the possibility of making matters worse. The most effective approach is to help users master more difficult self-protection measures.
Table 1. Online safety user education sites. Sponsors include Consumer Information Security, the National Cyber Security Alliance, the U.S. Department of Justice, The Guardian Angels, Parry Aftab (The Privacy Lawyer™), the Internet Education Foundation, the U.K. Government, the Federal Trade Commission, Homeland Security, and Michigan State University.
Mismatches among threat perceptions, self-efficacy, and response efficacy could explain why so many users fail to enact simple spyware protections [2, 9]
and also the inconsistent findings of previous research.
Some may not perceive the seriousness of the threat,
novice users (such as those surveyed in [9]) may not
have the self-efficacy required to download software
“solutions,” while others may
doubt the effectiveness of the pro-
tection. In a sample comprised
mainly of industry professionals
[5], a self-efficacy variable (per-
ceived ease of use) did not predict
intentions to enact spyware protec-
tions, but perceptions of response
efficacy (relative advantage) did.
Possibly the industry professionals
had uniformly high levels of self-
efficacy but divergent views on the
effectiveness of spyware protections
so only the latter was important.
Users perform a mental calculus of the rewards and costs associated with both safe and unsafe behavior. The advantages of safe behavior are not always self-evident, and there are negative outcomes (the cons) associated with safe behavior: safety requires the time and expense of obtaining protective software and keeping it updated. The negatives must be countered so that fearful users don’t invoke them as rationalizations for doing nothing. We can also encourage safety by disparaging the rewards of unsafe behavior, such as those touted by parties who make unscrupulous promises if we just “click here.”
Another tactic is to stress the positive outcomes of good, that is, safe behavior. Eliminating malware is in itself a positive outcome, but the secondary personal benefits of more efficient computer operation, reduced repairs, and increased productivity also deserve attention. In one study [5] a status outcome, enhancing one’s self-image as a technical or moral leader, was an important predictor of safe behavior. The ability to observe the successful safety behavior of others (visibility in [5]) or to observe them on a trial basis for ourselves (trialability [5]) also encourages safety.

Table 2. Framework for motivational user education strategies.

Threat Appraisal
Emphasize Threat. “Nearly all computer systems are susceptible to viruses, Trojan horses, and worms if they are connected to the Internet” (Staysafeonline). Verified variables: Threat susceptibility [1]; Awareness [1, 2].
Emphasize Threat. “You could lose important personal information or software that’s stored on your hard drive” (Consumer Information Security). Verified variables: Threat severity [1].

Coping Appraisal
Build Self-Efficacy. “Install firewalls for your family—it is not difficult.” Verified variables: Self-efficacy [1, 2, 4]; Controllability [2]; Ease of use [2]; Perceived behavioral control [1, 2].
Build Response Efficacy. “By having a firewall on guard, coupled with up-to-date AVS, this can repel the vast majority of attacks from the outside.” (Itsafe). Verified variables: Response efficacy [1, 4]; Perceived usefulness [2]; Relative advantage [5].

Rewards and Costs
Downplay Rewards of Unsafe Behavior. “So what if you have to reregister every time you visit a Web site? What do you get out of personalization anyway?” Verified variables: Not tested.
Minimize Costs of Safe Behavior. “Safety protections are easy to use and take only moments each day.” (i-Safety). Verified variables: Perceived ease of use [2].
Highlight Benefits of Safe Behavior. “You will find that a safe computer will run better and cost you less money and effort in the long run” (i-Safety). Verified variables: Attitude toward behavior [1, 2]; Image [1, 5]; Visibility [5]; Trialability [5].

Involvement
Make Safety Relevant. “Keeping your computer safe is the key to maintaining your privacy” (i-Safety). Verified variables: Involvement [this article].

Self-Regulation
Activate Social Norms. “A mentor is a student who has received the valuable Internet safety information that i-SAFE offers, and teams up with other students” (i-SAFE). Verified variables: Perceived social norm [1, 2].
Stress Responsibility. “A call to action: be a cyber secure citizen!” Verified variables: Personal responsibility [4]; Moral compatibility [1, 5].
Build Good Habits. “Update your protections at the same time each week” (i-Safety). Verified variables: Habit strength [4].
When users are deeply
involved in the subject of
online safety they are
likely to carefully con-
sider all of the pluses and
minuses of arguments
made for and against
online safety practices.
Personal relevance is an
indicator of involvement.
In the research we will
describe, 44% of the par-
ticipants said that online
safety was highly relevant,
but the other 56% had
lower levels of involve-
ment. However, many
users (11% of our sam-
ple) did not find online safety relevant at all.
Although safety involvement was related to self-efficacy (a significant positive correlation of 0.25) and to response efficacy (a 0.4 correlation), involvement is conceptually and empirically distinct from both.
Involvement matters. Along with our ability to
process information free from distraction or confu-
sion, involvement determines the types of arguments
likely to succeed. Here, we argue that even minor
deficiencies in involvement make a difference in
response to online safety education. When involve-
ment or our ability to process information is low,
individuals are likely to take mental shortcuts (heuris-
tics), such as relying on the credibility of a Web site
rather than reading its privacy policy. That is when
the boomerang effects we mentioned earlier can hap-
pen. The fear shuts down rational thinking about the
threat to the point that users may deny the impor-
tance of the threat and choose unsafe actions [11].
When involvement is high, users are likely to elaborate: they are likely to think arguments through, provided they are presented with clear information and are not distracted from reflection. This is known as the Elaboration Likelihood Model (ELM) [8].
“Phishcatchers” exploit ELM. The fear-inducing
news that one’s account has been compromised can
overwhelm careful thinking even among the highly
safety conscious. Spoofed URLs and trusted logos
provide peripheral cues that convince users to “just
click here,” an action that requires little or no self-effi-
cacy and, they promise, will be an entirely effective
response. IT professionals tacitly enlist the peripheral
processing route of ELM when they broadcast dire
warnings about current network security threats
through trusted email addresses.
However, what if the message from the IT depart-
ment is itself a spoof?
How can threats that
attack individual desktops
and escape the notice of
network security profes-
sionals be countered?
Next, we argue for an approach that promotes user involvement along with personal responsibility and that builds user self-efficacy.
Behavioral theories change as unexpected new problems are encountered. A news story about our project prompted a letter criticizing “the professors” for assuming that online safety was the user’s problem. This led us to uncover the role of personal responsibility. There is evidence that collective moral responsibility encourages safe online behavior [5], but not personal responsibility. Indeed, personal responsibility is theoretically an indicator of involvement [8], but we found the two were weakly correlated (r = 0.20), and so a different conceptual approach was required. We realized that personal responsibility is a form of self-regulation in Social Cognitive Theory: users act safely when personal standards of responsibility instruct them to.
In our surveys, those who agree that “online safety is my personal responsibility” are significantly more likely to protect themselves than those who do not agree (Table 3). The likelihood of taking many commonly recommended safety measures is related to feeling personally responsible, with large “responsibility gaps” noted for perhaps the most daunting safety measure, firewall protection, and also the easiest, erasing cookies. However, surveys alone cannot establish the direction of causation. It could be that personal responsibility is a post hoc rationalization after users acquire self-efficacy and safe surfing habits, and does not itself cause safe behavior.
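The significance tests behind Table 3 are chi-square comparisons of 2 x 2 cross-tabulations: agreement with the responsibility statement by adoption of each precaution. A minimal sketch, using invented counts rather than the survey’s actual figures:

```python
def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 table of observed counts:
    rows = agree / don't agree, cols = takes precaution / doesn't."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs in enumerate([a, b, c, d]):
        # Expected count under independence: row total * column total / n.
        expected = row[i // 2] * col[i % 2] / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: of 300 respondents who agree safety is their
# responsibility, 210 intend to use a firewall; of 266 who don't, 120 do.
chi2 = chi_square_2x2([[210, 90], [120, 146]])
print(round(chi2, 1))  # well above 3.84, the p < 0.05 cutoff at 1 df
```

With one degree of freedom, any statistic above 3.84 is significant at p < 0.05, which is the criterion reported under Table 3.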
So, we investigated personal responsibility in a
controlled experiment involving 206 college students
from an introductory mass communication class.
Table 3. Personal responsibility and online safety precautions. Basis: an online survey administered to 566 undergraduate students in November 2004, comparing respondents who agree versus disagree that “Online safety is my personal responsibility.” All differences are statistically significant based on chi-square analyses (* p < 0.05; ** p < 0.001). Precautions assessed (“In the next month I am likely to…”): update virus protection**, scan with a hijack eraser*, scan with anti-spyware*, update operating system patches**, erase cookies**, use a spam filter**, use a pop-up blocker**, use a firewall**, update browser patches**.
We split the group into high- and low-efficacy conditions
at the median value of a multi-item index. We con-
trolled for involvement based on responses to a multi-
item index also included in the pretest. As we noted
earlier, about half our sample was highly involved
(that is, stated that online safety was highly relevant),
so splitting the group at the median separated the
“safety fanatics” from the rest. This resulted in four
groups: high involvement/high self-efficacy (n = 41), low involvement/low self-efficacy (n = 38), high involvement/low self-efficacy (n = 64), and low involvement/high self-efficacy (n = 63).
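The grouping procedure amounts to two median splits over the pretest indices. The scores below are synthetic; only the splitting logic mirrors the design described above:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Synthetic multi-item pretest indices for 206 participants (1-7 scales).
involvement = rng.uniform(1, 7, 206)
self_efficacy = rng.uniform(1, 7, 206)

# Split each measure at its median, yielding four involvement/efficacy cells.
hi_inv = involvement >= np.median(involvement)
hi_eff = self_efficacy >= np.median(self_efficacy)

groups = Counter(
    ("high" if i else "low", "high" if e else "low")
    for i, e in zip(hi_inv, hi_eff)
)
for (inv, eff), n in sorted(groups.items()):
    print(f"{inv} involvement / {eff} self-efficacy: n = {n}")
```

Because the two indices are only modestly correlated, the four cells come out unequal in size, as they did in our sample.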
Prior to taking the posttest, half the respondents in each of the four groups were randomly selected to visit a Web page with online safety tips from Consumer Reports, with the heading “Online Safety is Everyone’s Job!” and a brief paragraph arguing that it was the readers’ responsibility to protect themselves. That was the personal responsibility treatment condition. The other half of the sample was randomly assigned to a Web page headed “Online Safety isn’t My Job!” arguing that online protection was somebody else’s job, not the reader’s. That was the irresponsibility treatment condition.
The results are shown in the accompanying figure.
The vertical axes indicate average scores on an eight-
item index of preventive safety behaviors, such as
intentions to read privacy policies before downloading
software and restricting instant messenger connec-
tions. After controlling for pretest scores, the personal
responsibility treatment caused increases in online
safety intentions in all conditions except one: Those
with low self-efficacy and low safety involvement had
lower safety intentions when told that safety was their
personal responsibility than when they were told it
was not (the lower line on the graph to the right).
Thus, those who are not highly involved in online
safety and who are not confident they can protect
themselves—a description likely to
fit many newer Internet users—
were evidently discouraged to learn
that safety was their responsibility.
The positive effect of the personal
responsibility manipulation was
greatest in the high involvement,
high self-efficacy condition and
high involvement users (the left-hand graph) exhibited more protective behavior than users with less involvement (the right-hand graph).
When safety maintenance
behaviors (for example, updating
virus and anti-spyware protec-
tions) were examined, the pattern
for the low safety involvement
group reversed. There, the argu-
ment about personal responsibility
caused those with high self-efficacy
to be less likely to engage in routine maintenance than
the argument against personal responsibility. We spec-
ulate that those who are confident but not highly
involved in online safety reacted by resolving to fix the
problems after the fact rather than incur the burden of
regular maintenance. The other groups had the
expected improvements in safety maintenance inten-
tions with the personal responsibility message. However, there was very little difference between treatments for the low involvement/low self-efficacy group, perhaps because they felt unable to carry out basic maintenance tasks.
Thus, it is possible to improve safety behavior by
emphasizing the user’s personal responsibility. How-
ever, the strategy can backfire when addressed to those
who are perhaps most vulnerable; namely, those who
Figure 1. Experimental results for safety prevention intentions. Panels: high safety involvement (left) and low safety involvement (right), each plotting high versus low self-efficacy conditions. Note: Overall F(8,197) = 23.9, p < 0.001. Treatment x Involvement x Self-efficacy F(7,197) = 2.69, p < 0.02. The dependent variable, Prevention Intentions (mean = 4.13, standard deviation = 1.39, range = 1–7), is an eight-item additive index of prevention intentions assessed on a 7-point scale, with total scores divided by 8.
are uninterested in safety and who lack the self-confi-
dence to implement protection. The personal respon-
sibility message can also backfire when directed to
bold (or perhaps, foolhardy) users, those who think
they can recover from security breaches but who are
not involved enough to apply routine maintenance.
In the present research safety involvement was a
measured variable rather than a manipulated one.
However, safety involvement might also be manipu-
lated by linking it to a more personally relevant issue,
privacy. This is substantiated by the high correlation
(0.72) we found between privacy and safety involve-
ment. Privacy is often conceived as a social policy or
information management issue [3], but safety threats
affect privacy, too, by releasing personal information
or by producing unwanted intrusions. Within an
organization, the privacy of the firm might be linked
to personal involvement through employee evaluation
policies that either encourage safe practices or punish
safety breaches.
Among all of the factors we have discussed, personal responsibility, self-efficacy, and response efficacy were the ones most related to intentions to engage in safe online behavior in our research [4]. Intentions are directly related to actual behavior. Self-efficacy has a direct impact on behavior over and above its effect on good intentions [2, 5]. Still, there are factors that intervene between intentions and behavior, especially when the protective measures are relatively burdensome and require attention over long periods of time, as is the case for online safety.
Other sources of self-regulation can be tapped. Social norms also affect safety intentions [see 5] if we believe that our spouses and co-workers wish that we would be safer online. Having a personal action plan helps, as does a consistent context for carrying out the safe behavior. That builds habit strength. Another stratagem is offering ourselves incentives for executing our safety plan (for example, a donut break after the daily protection update). That is action control [1], and it has proven effective in managing long-term health risks that are analogous to the network security problem.
Personalized interventions are critical. Seemingly
obvious but undifferentiated communication strate-
gies such as alerting users to spyware (found in [2, 5])
could have unwelcome effects. While there are differ-
ences bygender and age [5], our experimental data
suggests that a more refined audience segmentation
approach is required. User education Web sites could
screen visitors with “i-safety IQ” quizzes that would
route them to appropriate content. Instead of serving
as one-shot repositories of safety tips, online interven-
tions might encourage repeat visits to build self-effi-
cacy and maintain action control. User-side applica-
tions that detect problem conditions and alert users to
their risks and potential protective measures and walk
them through implementation would also help.
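Such screening could be as simple as scoring a short quiz on the two dimensions examined in our experiment and routing visitors to matching content. A hypothetical sketch; the thresholds, segment names, and page labels are invented for illustration:

```python
def route_visitor(involvement_score: int, efficacy_score: int) -> str:
    """Route a visitor to safety content based on an "i-safety IQ" style
    screening quiz. Scores are hypothetical 0-10 quiz subscales."""
    involved = involvement_score >= 5
    confident = efficacy_score >= 5

    if involved and confident:
        return "advanced-tips"           # responsibility appeals work well here
    if involved and not confident:
        return "step-by-step-tutorials"  # build self-efficacy first
    if not involved and confident:
        return "maintenance-reminders"   # stress routine upkeep, not duty
    return "why-safety-matters"          # raise involvement before duty appeals

print(route_visitor(8, 8))  # → advanced-tips
print(route_visitor(2, 2))  # → why-safety-matters
```

The routing mirrors the experimental finding that a personal responsibility appeal helps involved, confident users but can backfire for the uninvolved and unconfident.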
We conclude that the average user can be induced
to take a more active role in online safety. Progress has
been made in uncovering the “pressure points” for
effective user education. Here, we have attempted to
fit these into a logical and consistent framework. Still,
much work needs to be done to better understand
online safety behavior, including experimental studies
that can validate the causes of both safe and unsafe
behavior. More diverse populations must also be stud-
ied since much of the currently available research has
focused either on uncharacteristically naïve [9] or
savvy [2] groups. Our experimental findings suggest
that relatively modest, if carefully targeted, interven-
tions can be effective in promoting online safety.
Thus, improving user responsibility for overall online
safety is a desirable and achievable goal.
References
1. Abraham, C., Sheeran, P., and Johnson, M. From health beliefs to self-regulation: Theoretical advances in the psychology of action control. Psychology and Health 13 (1998), 569–591.
2. Hu, Q. and Dinev, T. Is spyware an Internet nuisance or public menace? Commun. ACM 48, 8 (Aug. 2005), 61–65.
3. Karat, C.-M., Brodie, C., and Karat, J. Usable privacy and security for personal information management. Commun. ACM 49, 1 (Jan. 2006).
4. LaRose, R., Rifon, N., Liu, X., and Lee, D. Understanding online safety behavior: A multivariate model. International Communication Association (May 27–30, 2005, New York).
5. Lee, Y. and Kozar, K.A. Investigating factors affecting the adoption of anti-spyware systems. Commun. ACM 48, 8 (Aug. 2005), 72–77.
6. National Cyber Security Alliance. AOL/NCSA Online Safety Study.
7. National Cyber Security Alliance. Emerging Internet Threat List.
8. Petty, R. and Cacioppo, J. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Springer-Verlag, New York, 1986.
9. Poston, R., Stafford, T.F., and Hennington, A. Spyware: A view from the (online) street. Commun. ACM 48, 8 (Aug. 2005), 96–99.
10. Thompson, R. Why spyware poses multiple threats to security. Commun. ACM 48, 8 (Aug. 2005), 41–43.
11. Witte, K. Putting the fear back into fear appeals: The Extended Parallel Process Model. Communication Monographs 59, 4 (1992), 329–349.
12. Zhang, X. What do consumers really know about spyware? Commun. ACM 48, 8 (Aug. 2005), 44–48.
Robert LaRose is a professor in the Department of Telecommunication, Information Studies, and Media at Michigan State University, East Lansing, MI.
Nora J. Rifon is a professor in the Department of Advertising, Public Relations, and Retailing at Michigan State University, East Lansing, MI.
Richard Enbody is an associate professor in the Department of Computer Science and Engineering at Michigan State University, East Lansing, MI.
©2008 ACM 0001-0782/08/0300 $5.00
DOI: 10.1145/1325555.1325569
... Literature on different aspects of this issue is rich. Comprising for example strands on the central role of user awareness (Bulgurcu et al., 2010;Corradini & Nardelli, 2018;Culnan et al., 2008;D´Arcy et al., 2009;Macabante et al., 2019;Spears & Barki, 2010), user responsibility (de Bruijn & Janssen, 2017;Filipczuk et al., 2019;LaRose et al., 2008), or the impact of different types of organizations (Acuna et al., 2021;Balozian & Leidner, 2017). Still, we lack a comprehensive understanding of users' cyber security behavior (Chen et al., 2021;Jenkins et al., 2021) and especially the interrelations of the different building blocks of organizational cyber security management. ...
... Filipczuk and colleagues (2019) confirm these findings as users assigned with a high level of responsibility for their own digital behavior easily adopted it and acted according to security guidelines. Both studies (Filipczuk et al., 2019;LaRose et al., 2008) showed that personal responsibility plays an important role in improving user behavior. ...
... However, internalizing responsibility requires user awareness and user IT capabilities. Our results confirm previous findings that in the absence of user awareness or insufficient user IT capabilities, responsibility is not internalized and users thus do not show desirable, i.e., cyber security compliant, behavior (Furnell et al., 2007;LaRose et al., 2008). ...
Conference Paper
Full-text available
Desirable user behavior is key to cyber security in organizations. However, a comprehensive overview on how to manage user behavior effectively, in order to support organizational cyber security, is missing. Building on extant research to identify central components of organizational cyber security management and on a qualitative analysis based on 20 semi-structured interviews with users and IT-Managers of a European university, we present an integrated model on this issue. We contribute to understanding the interrelations of namely user awareness, user IT-capabilities, organizational IT, user behavior, and especially internalized responsibility and relation to organizational cyber security.
... This circumstance results in a deadweight reduction where clients may have favoured the new form, and designers would like to keep up fewer variants. However, the client isn't refreshing because of potential dangers which are nothing but misty advantages [23,26,28]. In this job, we are involved in the pieces that are the most outstanding consumer for the product update process. ...
... Worm containment: Programming reports on servers are starting at yet an incapable first-line safeguard against worms [28] because executives are careful about introducing refreshesleaving the product defenceless. We accept that robotic updates will become more and more typical, with programming dealers introducing refreshments without customer consent. ...
Full-text available
Windows updates adjust the programming capacities by fixing bugs, evolving highlights, and altering the user interface. Now and again, changes are welcome, even envisioned, and now and then, they are undesirable on the off chance that clients delay or don't introduce refreshes. It can have genuine security suggestions for their PC. Updates are one of the essential components for amending found vulnerabilities when a client doesn't refresh and remains helpless against an expanding number of assaults. For situations where refuge updates are not implemented or gradually introduced, end customers are at an enlarged risk of maliciousness. Program makers have sought to remove consumers from the software upgrade circle to improve safety. In any case, customer inclusion in system updates ruins essential, all updates are not needed and necessary restarts can adversely affect customers. The programmer used a multi-strategy approach to gather information from 37 clients on Windows 7 for the meeting, studying and PC logs. The programmer thought about what the clients believe is going on their PCs (meeting and study information), what clients need to occur on their PC (meeting and review information), and what was going on (log information). They found that 28 out of our 37 members had a misconception about what was going on their PC, what's more, that over a portion of the members couldn't execute their aims for the PC board.
... 391) . If individuals perceive their degree of information security self-efficacy as being low, they will likely perform only simple tasks to protect their information; those with a high sense of self-efficacy will be more inclined to confront challenges and actively work to make their information as secure as possible (Anderson & Agarwal, 2010;Larose et al., 2008;Lee & Larsen, 2009;Woon et al., 2005) . ...
... The effect of response efficacy on expected adaptive behavior has been supported in numerous studies (Crossler et al., 2019;Crossler et al., 2014;LaRose et al., 2008;Li et al., 2019;Liang & Xue, 2010;Vance et al., 2012) . However, response efficacy may be a less powerful predictor than self-efficacy (Crossler et al., 2019;Li et al., 2019;Thompson et al., 2017;Tu et al., 2015) . ...
Digital technologies are ubiquitous, and the proliferation of attacks on information assets is a corollary of their ubiquity. Thus, information security (IS) appears to be a crucial issue for individuals and managers. While attempts to identify the factors that guide the information security behavior (ISB) of actors are not new, such identification remains more necessary and topical than ever. From this perspective, this empirical study contributes to a better understanding of the cognitive and socialization factors that influence ISB. Using a second-order hierarchical model with partial least squares structural equation modeling (PLS-SEM), we test for the first time the applicability of protection motivation theory (PMT) and social bond theory (SBT) to the information security technology awareness (ISTA) and malware protection behavior (MPB) of 430 students. First, our results demonstrate that combining PMT and SBT produces a more robust model for analyzing ISTA and MPB compared to considering each of these theories separately. Secondly, ISTA partially mediates social bonds and protection motivation and thus, could be a root security behavior. If we underline the preponderant role of involvement, the significant difference observed in the ISB of both genders is related to the strongest influence of females’ social connections on ISTA. In particular, this result is explained by more homogeneous effects of socialization factors for females than for males. We suggest that the design of ISTA programs and education should be better adapted to the different cognitive and socialization factors of individuals, notably with an emphasis on social bonds and, more specifically, on involvement. We also provide detailed recommendations on how practitioners can improve individuals’ ISB.
... In some countries, major banks provide customers with free USB shields to encrypt transaction data and provide digital authentication [28]. This might be necessary because Internet users may view Internet security as the responsibility of society or firms [41]. Such actions might change users' trust in the Internet, which in turn improves their coping perceptions and increases their adaptive coping behaviors. ...
Extant research seldom focuses on maladaptive security coping behaviors. Applying the extended parallel process model, this study develops a research model to reveal the processes underlying users' adaptive and maladaptive security coping behaviors. The model is empirically examined along with alternative models. Results show that perceived coping efficacy is the most influential factor in promoting adaptive coping behaviors and deterring maladaptive coping behaviors. Fear plays a mediating role in the threat appraisal process and leads to both adaptive and maladaptive coping behaviors. Trust in the Internet, as the contextual factor, influences the threat and coping appraisal processes and adaptive coping behaviors.
... Some information security research using PMT has focused on a few key factors (e.g., [46,47]), while other research [14] argues for including many factors from the full nomology. In the current context, self-efficacy and response efficacy have been found to have the most significant influence on intentions to adopt protection-motivated behaviors [55]. For example, in the security context, self-efficacy and response efficacy were strongly influential in online account password management behavior [103]. ...
Current information security behavior research assumes that lone individuals make a rational, informed decision about security technologies based on careful consideration of personally available information. We challenge this assumption by examining how herd behavior influences users' security decisions when coping with security threats. The results show that uncertainty about a security technology leads users to discount their own information and imitate others. We found that imitation tendency has a more substantial effect on security decisions than the personal perceived efficacy of the security technology. It is essential for researchers and managers to consider how herd behavior influences users' security decisions.
... Users are the weakest link in, and a primary target of, cybersecurity-related threats (Siponen, 2000); hence, more studies are needed to understand users' security responses and behavior (Lebek et al., 2013). Self-efficacy has been shown to influence information security behavior (LaRose et al., 2008). A survey study by Woon et al. (2005) also demonstrated that perceived severity, response cost, perceived susceptibility and self-efficacy influence users' cybersecurity actions. ...
Purpose: Phishing attacks are the most common cyber threats targeted at users. Digital nudging in the form of framing and priming may reduce user susceptibility to phishing. This research focuses on two types of digital nudging, framing and priming, and examines the impact of framing and priming on users' behavior (i.e. action) in a cybersecurity setting. It draws on prospect theory, instance-based learning theory and dual-process theory to generate the research hypotheses.

Design/methodology/approach: A 3 × 2 experimental study was carried out to test the hypotheses. The experiment consisted of three levels for framing (i.e. no framing, negative framing and positive framing) and two levels for priming (i.e. with and without priming).

Findings: The findings suggest that priming users to information security risks reduces their risk-taking behavior, whereas positive and negative framing of information security messages regarding potential consequences of the available choices do not change users' behavior. The results also indicate that risk-averse cybersecurity behavior is associated with greater confidence with the action, greater perceived severity of cybersecurity risks, lower perceived susceptibility to cybersecurity risks resulting from the action and lower trust in the download link.

Originality/value: This research shows that digital nudging in the form of priming is an effective way to reduce users' exposure to cybersecurity risks.
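A priming effect of the kind reported in that experiment can be illustrated with a simple two-proportion z-test comparing risky clicks between primed and unprimed groups. The counts below are purely hypothetical, invented for illustration; they are not data from the study:

```python
import math

# Hypothetical counts: participants who clicked a risky download link,
# with and without a security-risk prime (illustrative numbers only)
clicked_primed, n_primed = 42, 150
clicked_unprimed, n_unprimed = 68, 150

p1 = clicked_primed / n_primed          # click rate with priming
p2 = clicked_unprimed / n_unprimed      # click rate without priming

# Pooled two-proportion z-test
p_pool = (clicked_primed + clicked_unprimed) / (n_primed + n_unprimed)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_primed + 1 / n_unprimed))
z = (p1 - p2) / se

# A z below -1.96 would indicate significantly less risk-taking
# in the primed group at the 5% level
print(round(p1, 2), round(p2, 2), round(z, 2))
```

With these illustrative counts the primed group clicks less often and the difference is significant; a 3 × 2 design would additionally test the framing factor and the interaction, typically via a factorial ANOVA or logistic regression.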
The latest advances in data-driven marketing, such as real-time personalization, have increasingly made consumers more vulnerable. In response, some consumers deliberately falsify information in order to redress the balance of power, a practice that constitutes a serious threat to the digital economy. The topic of falsification is still largely under-researched in information systems and marketing. Based on protection motivation theory, the author conceptualizes privacy controls as a source of information and the falsification of information as a coping response, with vulnerability representing the threat appraisal mechanism and self-efficacy the coping appraisal mechanism. Through a within-subject experiment (n = 207), the results of the mediation analysis for repeated measures show that the effect of privacy controls as a source of information on the falsification of information is fully mediated by vulnerability and self-efficacy. The author provides insights for managers regarding the significant trade-off between reducing consumer vulnerability and maintaining the usefulness of the data.
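The mediation logic described in that abstract (privacy controls → vulnerability / self-efficacy → falsification) can be sketched with a minimal Baron–Kenny-style regression decomposition. The variable names and effect sizes below are assumptions for illustration, and the simple between-subject OLS sketch stands in for the study's actual within-subject repeated-measures analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical variables: x = salience of privacy controls,
# m = perceived vulnerability (mediator), y = falsification of information.
# Data are simulated with a pure indirect effect (no direct x -> y path).
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)   # path a
y = 0.7 * m + rng.normal(scale=0.5, size=n)   # path b

# Path a: regress the mediator on the predictor
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]

# Paths b and c': regress the outcome on mediator and predictor together
b, c_prime = np.linalg.lstsq(
    np.column_stack([np.ones(n), m, x]), y, rcond=None
)[0][1:]

indirect = a * b  # mediated effect
# Full mediation: sizable indirect effect, direct effect c' near zero
print(round(indirect, 2), round(c_prime, 2))
```

In a full analysis the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates, but the decomposition into paths a, b, and c' is the same.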
Employee engagement in unhygienic cyber practices (UCP) is a concern for organizations across the world. This study explores the effects of personal cognitive and environmental factors in decreasing workers' engagement in UCP in a developing country. The study employs a simplified version of the theory of reciprocal determinism (triadic reciprocality), a model comprising three factors: personal, environment, and behavior. Data were collected from working MBA students in Ethiopia. The key results show that the personal factor of self-regulation related to acceptable cyber practices decreased workers' engagement in UCP, while self-efficacy did not in the research setting. The environmental factor of computer monitoring (CM) decreased workers' engagement in UCP, while the availability of security education and training awareness (SETA) programs did not. Both CM and SETA programs had positive effects in improving self-efficacy, but only SETA programs positively impacted self-regulation. Thus, this study enhances our understanding of the influences on end-user security behavior (i.e., engagement in UCP) in a developing country.
Defined as the use of another person's personally identifiable information to make a profit or commit crimes, identity theft has become a critical problem for society as a whole, and the development of Internet technology has made the problem worse. Results of an empirical assessment, including a non-linear quadratic effect, show that when users perceive more control over the identity theft threat, they are more likely to seek solutions, feel that prevention is their responsibility, and form stronger intentions to take identity theft prevention actions. Interestingly, our theory and empirical results suggest that quadratic effects exist among critical constructs in our theoretical model, that the underlying complexity requires further investigation, and that linear models may not be sufficient. We further contribute by developing theory and empirically validating non-linear quadratic effects among key constructs related to identity theft in the IT security literature.
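The quadratic-versus-linear comparison at the heart of that argument can be sketched by fitting both models and comparing fit. The data below are simulated with an assumed diminishing-returns shape purely for illustration; the variable names and coefficients are not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Hypothetical data: perceived control over the identity theft threat
# (1-7 scale) and prevention intention, simulated with a concave
# (diminishing-returns) quadratic relationship plus noise
control = rng.uniform(1, 7, n)
intention = 1.0 + 1.2 * control - 0.08 * control**2 + rng.normal(0, 0.3, n)

# Fit a linear and a quadratic model by least squares
lin = np.polyfit(control, intention, 1)
quad = np.polyfit(control, intention, 2)   # quad[0] is the x^2 coefficient

# Compare residual sums of squares: a clearly better quadratic fit
# signals that a purely linear model is insufficient
rss_lin = np.sum((np.polyval(lin, control) - intention) ** 2)
rss_quad = np.sum((np.polyval(quad, control) - intention) ** 2)
print(round(quad[0], 3), rss_quad < rss_lin)
```

A negative recovered x² coefficient indicates the concave shape: each additional unit of perceived control adds less to prevention intention. In practice the improvement in fit would be tested formally (e.g., an F-test on the nested models) rather than by comparing raw residual sums of squares.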
Understanding users' individual differences may provide clues to help identify computer users who are prone to act insecurely. We examine factors that impact home users' reported computer security behaviour. We conducted two online surveys with a total of 650 participants to investigate the relationship between self-reported security behaviour and users' knowledge, motivation, confidence, risk propensity and sex-typed characteristics. We found that all of these factors impacted security behaviour, with knowledge as the most important predictor. We further show that a user's affinity to feminine or masculine characteristics is a better determinant of security behaviour than using binary male/female descriptors. Our study enabled us to confirm earlier results in the literature in a non-organisational setting, and to extend the literature by studying additional factors and by comparing the relative importance of each factor as a predictor of security behaviour.
Recent media attention to spyware [2, 5, 7, 8] has brought to light the blunt intrusion into individual privacy and the uncertain hidden cost of free access to Internet sites, along with freeware and shareware. Most spyware programs belong to the more benign category of adware that delivers targeted pop-up ads based on a user's Web surfing habits. The more malicious type of spyware tracks each keystroke of the user and sends that information to its proprietors. Such information could be used for legitimate data mining purposes or it could be abused by others for identity theft and financial crimes.
There are indications of late that the use of anti-spyware software is on the rise, with more than 100 million Internet users downloading Lavasoft's free anti-spyware software [2]. Some big-name companies are also beginning to address the spyware issue, including Microsoft, which currently has a beta version of its own anti-spyware available to Microsoft Windows users for download. However, a Gartner survey finds only 10% of respondents were taking sufficiently aggressive steps to minimize spyware infestations [5] and a Forrester survey found that even though 55% of consumers knew what spyware was, only 40% were running anti-spyware programs routinely [7].
Technology has revolutionized information collection and distribution to the point where marketers have expanded and implemented new technologies to enable efficient consumer information acquisition. Such sophisticated data collection methods have raised serious concerns about consumer privacy, as some marketers have quickly discovered ways to abuse this power.
The paper reviews the theoretical concepts included in a range of social cognitive models which have identified psychological antecedents of individual motivation and behaviour. Areas of correspondence are noted and core constructs (derived primarily from the theory of planned behaviour and social cognitive theory) are identified. The role of intention formation, self-efficacy beliefs, attitudes, normative beliefs and self-representations are highlighted and it is argued that these constructs provide a useful framework for modelling the psychological prerequisites of health behaviour. Acknowledging that intentions do not translate into action automatically, recent advances in our understanding of the ways in which prior planning and rehearsal can enhance individual control of action and facilitate the routinisation of behaviour are considered. The importance of engaging in preparatory behaviours for the achievement of many health goals is discussed and the processes by which goals are prioritised, including their links to self-representations, are explored. The implications of social cognitive and self-regulatory theories for the cognitive assessment of individual readiness for action and for intervention design in health-related settings are highlighted.
The fear appeal literature is diverse and inconsistent. Existing fear appeal theories explain the positive linear results occurring in many studies, but are unable to explain the boomerang or curvilinear results occurring in other studies. The present work advances a theory integrating previous theoretical perspectives (i.e., Janis, 1967; Leventhal, 1970; Rogers, 1975, 1983) that is based on Leventhal's (1970) danger control/fear control framework. The proposed fear appeal theory, called the Extended Parallel Process Model (EPPM), expands on previous approaches in three ways: (a) by explaining why fear appeals fail; (b) by re‐incorporating fear as a central variable; and (c) by specifying the relationship between threat and efficacy in propositional forms. Specific propositions are given to guide future research.
IBM T.J. Watson Research Center in Hawthorne, NY, focuses on the design and development of SPARCLE, policy authoring and transformation tools that enable organizations to create machine-readable policies for real-time enforcement decisions. Usable privacy and security technology is a critical need in the management of personal information and should be part of the initial design considerations for technology applications, systems, and devices that involve personal information collection, access, and communication. SPARCLE will enable individuals to know that the policies are enforced within organizations by their own processes. The prototype workbench transforms natural language rules into machine-readable XML code through the use of natural language parsing technology. The tools promise to give organizations a verifiable path from the written form of a privacy rule to real-time enforcement decisions regarding access to personal information.
The misuse of technology by spyware, and the danger its hijacking of systems presents to security and privacy, are discussed. The increased costs due to unnecessary consumption of bandwidth on individual PCs, and the labor costs of rebuilding systems to ensure they are no longer corrupt, are virtually unquantifiable. System degradation is time consuming for the individual PC user and even more so for network administrators managing corporate networks. Spyware is a significant threat to the effective functioning and continued growth of the Internet.
Spyware is the latest epidemic security threat for Internet users. There are various types of spyware programs (see Table 1) creating serious problems such as copying and sending personal information, consuming CPU power, reducing available bandwidth, annoying users with endless pop-ups, and monitoring users' computer usage. As spyware makes the Internet a riskier place and undermines confidence in online activities, Internet users stop purchasing at online stores, a consequence that clearly disrupts e-business.