Abstract

We present a framework for motivating safe online behavior that interprets prior research and uses it to evaluate several of the nonprofit online safety education efforts. Self-efficacy and response efficacy have the most consistent impact on safety behavior and also interact with risk perceptions. Fear appeals are most likely to work when threat information is coupled with information about how to cope with the threat. Safety takes time and expense: protective software must be obtained and kept updated. When users are deeply involved in online safety, they are likely to carefully consider the pros and cons of arguments made for and against online safety practices. Collective moral responsibility encourages safe online behavior. The average user can be induced to take a more active role in online safety, and relatively modest, carefully targeted interventions can prove effective in promoting it.
PROMOTING PERSONAL RESPONSIBILITY FOR INTERNET SAFETY
Online safety is everyone’s responsibility—a concept much
easier to preach than to practice.
COMMUNICATIONS OF THE ACM March 2008/Vol. 51, No. 3 71
How can we encourage Internet users to assume
more responsibility for protecting themselves online?
Four-fifths of all home computers lack one or more
core protections against virus, hacker, and spyware
threats [6], while security threats in the workplace are
shifting to the desktop [7], making user education
interventions a priority for IT security professionals. So, it is
logical to make users the first line of defense [10, 12]. But how?
Here, we present a framework to motivate safe online behavior
that interprets prior research and uses it to evaluate some of the
current nonprofit online safety education efforts. We will also
describe some of our own (i-Safety) findings [4] from a research
project funded by the National Science Foundation (see Table 1).
By Robert LaRose, Nora J. Rifon, and Richard Enbody
THREAT APPRAISAL: THE FEAR FACTOR
The most obvious safety message is fear. This strat-
egy is found at all online safety sites in Table 1.
Sometimes it works. Among students enrolled in
business and computer science courses, awareness of
the dangers of spyware was a direct predictor of
intentions to take protective measures [2].
More formally, threat appraisal is the process by
which users assess threats toward
themselves, including the severity
of the threats and one’s suscepti-
bility to them. Examples of these
and the other user education
strategies found here, along with
the names of related variables
found in prior research and
empirical evidence supporting
them, are shown in Table 2. The
subheadings in the table are
organized around the headings in this
article and reflect key concepts in
Protection Motivation Theory
(threat appraisal, coping
appraisal, rewards and costs), the
Elaboration Likelihood Model
(involvement), and Social Cognitive Theory (self-reg-
ulation). The interested reader will find an overview
of these theories in [1].
It is unfortunate that communication about
risk is surprisingly, well, risky. It often fails to
motivate safe behavior or has weak effects.
And there can be “boomerang effects,”
named for the shape of the nonlinear rela-
tionships sometimes found between safe
behavior and fear [8, 11]. Moderate amounts
of fear encourage safe behavior. Low amounts of fear
diminish safety, because the threat is not seen as
important enough to address. However, intense fear
can also inhibit safe behavior, perhaps because people
suppress their fear rather than cope with the danger.
In our own research involving students from social
science courses (who were probably not as knowl-
edgeable as students depicted in [2], but perhaps
closer to typical users), we found a boomerang point-
ing in the opposite direction. Moderate levels of threat
susceptibility were the least related to safe behaviors
like updating security patches and scanning for spy-
ware, while users with both high and low levels of per-
ceived threat were more likely to act safely. The point
is, without knowing the level of risk perceived by each
individual, threatening messages have the potential to
discourage safe behavior.
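The reversed boomerang is easiest to see by binning users on perceived threat and comparing group means: the moderate group scores lowest on safe behavior. A minimal sketch, using invented numbers purely to illustrate the pattern (not data from our study):

```python
# Hypothetical illustration of the reversed boomerang: group users by
# perceived threat susceptibility and compare mean safety-behavior scores.
# All numbers are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

# (perceived susceptibility 1-7, safety behavior 1-7) pairs, invented
users = [
    (1, 5.2), (1, 4.8), (2, 5.0),   # low threat perception, fairly safe
    (3, 3.1), (4, 2.9), (4, 3.3),   # moderate threat perception, least safe
    (6, 5.1), (7, 5.5), (7, 4.9),   # high threat perception, fairly safe
]

def bucket(susceptibility):
    """Assign a user to a low/moderate/high threat-perception group."""
    if susceptibility <= 2:
        return "low"
    if susceptibility <= 5:
        return "moderate"
    return "high"

groups = {}
for s, behavior in users:
    groups.setdefault(bucket(s), []).append(behavior)

means = {name: round(mean(vals), 2) for name, vals in groups.items()}
print(means)  # the moderate group scores lowest: the reversed boomerang
```
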
COPING APPRAISAL: BUILDING CONFIDENCE
Users also evaluate their ability to respond to threats
by performing a coping appraisal. Building self-effi-
cacy, or confidence in one’s abilities and in the safety
measures used, is perhaps the most effective educa-
tion strategy. Self-efficacy is the belief in one’s own
ability to carry out an action in pursuit of a valued
goal, such as online safety. Perceived behavioral con-
trol is a related concept that builds on the notions of
controllability and perceived ease of use to predict
intentions to enact safety protections [2]. Self-effi-
cacy is distinguishable from “actual” skill levels in
that we may feel confident tackling situations we
have not encountered before and, conversely, may
not feel confident enacting behavior we mastered
only during a visit from the IT person months ago.
Beliefs about the efficacy of safety measures are
also important. It’s called response efficacy in the pre-
sent framework although others have identified it as
the relative advantage of online protections [5]. Our
confidence in our computer’s capability to handle
advanced protective measures (computing capacity in
[5]) is another response efficacy issue. Self-efficacy
and response efficacy have the most consistent impact
on safe behavior across many safety issues, and we [4]
and others [2, 5] have verified their importance in the
online safety domain.
Efficacy has a direct impact on safe behavior, but
also interacts with risk perceptions. Fear is most likely
to work if the threat information is coupled with
information about how to cope with the threat, since the
coping information raises self-efficacy. When mes-
sages arouse fears but don’t offer a rational means for
dealing with the fear, people are likely to deny the
danger exists or is unlikely to affect them [11]. In
Internet terms, that defines the “newbie.”
Not all user education sites include self-efficacy
messages; some that do set unrealistic expectations:
“You can easily keep yourself safe if you just perform
these two dozen simple network security tasks daily.”
Still, persuasion attempts are a proven approach to
Table 1. Online safety user education sites.

Name                           Sponsor                              URL
Staysafeonline                 National Cyber Security Alliance     www.staysafeonline.info
iSAFE                          U.S. Department of Justice           www.isafe.org
CyberAngels                    The Guardian Angels                  www.cyberangels.org
Cybersmart                     National Cyber Security Alliance     www.cybersmart.org
WiredSafety                    Parry Aftab, The Privacy Lawyer™     www.wiredsafety.org
GetNetWise                     Internet Education Foundation        www.getnetwise.org
Itsafe                         U.K. Government                      itsafe.gov.uk
Consumer Information Security  Federal Trade Commission             www.ftc.gov/infosecurity
Us-Cert                        Homeland Security                    www.us-cert.gov
i-Safety                       Michigan State University            www.msu.edu/~isafety
building self-efficacy; anxiety reduction is another. But
both can backfire if safety measures are complex, are
perceived to be ineffective, or have the possibility of
making matters worse. The most effective approach is
to help users master more difficult self-protection
tasks.
Mismatches among threat perceptions, self-effi-
cacy, and response efficacy could explain why so many
users fail to enact simple spyware protections [2, 9]
and also the inconsistent findings of previous research.
Some may not perceive the seriousness of the threat,
novice users (such as those surveyed in [9]) may not
have the self-efficacy required to download software
“solutions,” while others may
doubt the effectiveness of the pro-
tection. In a sample composed
mainly of industry professionals
[5], a self-efficacy variable (per-
ceived ease of use) did not predict
intentions to enact spyware protec-
tions, but perceptions of response
efficacy (relative advantage) did.
Possibly the industry professionals
had uniformly high levels of self-
efficacy but divergent views on the
effectiveness of spyware protections
so only the latter was important.
REWARDS AND COSTS: THE PROS
AND CONS OF SAFETY
Users perform a mental calculus of the rewards
and costs associated with both safe and unsafe
behavior. The advantages of safe behavior are not
always self-evident, and there are negative outcomes
(the cons) associated with safe behavior. Safety takes
time and expense: protective software must be
obtained and kept updated. The negatives must be
countered so that fearful users don’t invoke them as
rationalizations for doing nothing. We can also
encourage safety by disparaging the rewards of unsafe
behavior, such as those touted by parties who make
unscrupulous promises if we just
“click here.”
Another tactic is to stress the positive outcomes of
good, that is, safe behavior. Eliminating malware is in
itself a positive outcome, but the secondary personal
benefits of more efficient computer operation, reduced
repairs, and increased productivity also deserve attention.
In one study [5] a status outcome, enhancing one’s
self-image as a technical or moral leader, was an
important predictor of safe behavior. The ability to observe
the successful safety behavior of others (visibility in [5]) or
Table 2. Framework for motivational user education strategies.

Threat Appraisal
  Emphasize Threat Susceptibility: “Nearly all computer systems are susceptible to viruses,
  Trojan horses, and worms if they are connected to the Internet” (Staysafeonline).
  Verified variables: Threat susceptibility [1]; Awareness [1, 2]
  Emphasize Threat Severity: “You could lose important personal information or software
  that’s stored on your hard drive” (Consumer Information Security).
  Verified variables: Threat severity [1]

Coping Appraisal
  Build Self-Efficacy: “Install firewalls for your family—it is not difficult.” (Cybersmart).
  Verified variables: Self-efficacy [1, 2, 4]; Controllability [2]; Ease of use [2];
  Perceived behavioral control [1, 2]
  Build Response Efficacy: “By having a firewall on guard, coupled with up-to-date AVS,
  this can repel the vast majority of attacks from the outside.” (Itsafe).
  Verified variables: Response efficacy [1, 4]; Perceived usefulness [2]; Relative advantage [5]

Rewards/Costs
  Downplay Rewards of Unsafe Behavior: “So what if you have to reregister every time
  you visit a Web site? What do you get out of personalization anyway?” (i-Safety).
  Verified variables: Not tested
  Minimize Costs of Safe Behavior: “Safety protections are easy to use and take only
  moments each day.” (i-Safety). Verified variables: Perceived ease of use [2]
  Highlight Benefits of Safe Behavior: “You will find that a safe computer will run better
  and cost you less money and effort in the long run” (i-Safety).
  Verified variables: Attitude toward behavior [1, 2]; Image [1, 5]; Visibility [5];
  Trialability [5]

Involvement
  Make Safety Relevant: “Keeping your computer safe is the key to maintaining your
  privacy” (i-Safety). Verified variables: Involvement [This article]

Self-Regulation
  Activate Social Norms: “A mentor is a student who has received the valuable Internet
  safety information that i-SAFE offers, and teams up with other students” (i-SAFE).
  Verified variables: Perceived social norm [1, 2]
  Stress Responsibility: “A call to action: be a cyber secure citizen!” (Staysafeonline).
  Verified variables: Personal responsibility [4]; Moral compatibility [1, 5]
  Build Good Habits: “Update your protections at the same time each week” (i-Safety).
  Verified variables: Habit strength [4]
to try protections for ourselves on a trial basis (trialability
[5]) also encourages safety.
INVOLVEMENT: CENTRAL OR PERIPHERAL
PERSUASION?
When users are deeply
involved in the subject of
online safety, they are
likely to carefully con-
sider all of the pluses and
minuses of arguments
made for and against
online safety practices.
Personal relevance is an
indicator of involvement.
In the research we will
describe, 44% of the par-
ticipants said that online
safety was highly relevant,
but the other 56% had
lower levels of involve-
ment. However, many
users (11% of our sam-
ple) did not find online safety relevant at all.
Although safety involvement was related to self-effi-
cacy (a significant positive correlation of 0.25) and
to response efficacy (0.4 correlation), involvement is
conceptually and empirically distinct from both.
Involvement matters. Along with our ability to
process information free from distraction or confu-
sion, involvement determines the types of arguments
likely to succeed. Here, we argue that even minor
deficiencies in involvement make a difference in
response to online safety education. When involve-
ment or our ability to process information is low,
individuals are likely to take mental shortcuts (heuris-
tics), such as relying on the credibility of a Web site
rather than reading its privacy policy. That is when
the boomerang effects we mentioned earlier can hap-
pen. The fear shuts down rational thinking about the
threat to the point that users may deny the impor-
tance of the threat and choose unsafe actions [11].
When involvement is high users are likely to elabo-
rate: They are likely to think arguments through, pro-
vided they are presented with clear information and
are not distracted from reflection. This is known as
the Elaboration Likelihood Model (ELM) [8].
“Phishcatchers” exploit ELM. The fear-inducing
news that one’s account has been compromised can
overwhelm careful thinking even among the highly
safety conscious. Spoofed URLs and trusted logos
provide peripheral cues that convince users to “just
click here,” an action that requires little or no self-effi-
cacy and, they promise, will be an entirely effective
response. IT professionals tacitly enlist the peripheral
processing route of ELM when they broadcast dire
warnings about current network security threats
through trusted email addresses.
However, what if the message from the IT depart-
ment is itself a spoof?
How can threats that
attack individual desktops
and escape the notice of
network security profes-
sionals be countered?
Next, we argue for an
approach that promotes
user involvement along
with personal responsibil-
ity and that builds user
self-efficacy.
SELF-REGULATION: TAKING
RESPONSIBILITY
Behavioral theories change as unexpected new problems
are encountered. A news story about our project
prompted a letter criticizing “the professors” for
assuming that online safety was the user’s problem.
This led us to uncover the role of personal responsibility.
There is evidence that collective moral responsibility
encourages safe online behavior [5], but not
personal responsibility. Indeed, personal responsibility
is theoretically an indicator of involvement [8],
but we found the two were weakly correlated (r = 0.20),
and so a different conceptual approach was
required. We realized that personal responsibility is
a form of self-regulation in Social Cognitive Theory:
Users act safely when personal standards of responsibility
instruct them to.
In our surveys those who agree that “online
safety is my personal responsibility” are
significantly more likely to protect themselves
than those who do not agree (Table 3). The
likelihood of taking many commonly recommended
safety measures is related to feeling
personally responsible, with large “responsibility
gaps” noted for perhaps the most daunting
safety measure, firewall protection, and also the easiest,
erasing cookies. However, surveys alone cannot
establish the direction of causation. It could be that
personal responsibility is a post hoc rationalization
after users acquire self-efficacy and safe surfing habits,
and does not itself cause safe behavior.
So, we investigated personal responsibility in a
controlled experiment involving 206 college students
from an introductory mass communication class. We
Table 3. Personal responsibility and online safety precautions.

Basis: An online survey administered to 566 undergraduate students in November 2004. All
differences are statistically significant based on chi-square analyses (*p < 0.05; **p < 0.001).

“Online safety is my personal responsibility”
In the next month I am likely to…          % of those who agree   % of those who don’t agree
Update virus protection**                          80                       66
Scan with a hijack eraser*                         54                       43
Scan with anti-spyware*                            80                       66
Update operating system patches**                  70                       53
Erase cookies**                                    71                       46
Use a spam filter**                                68                       45
Use a pop-up blocker**                             84                       65
Use a firewall**                                   80                       58
Update browser patches**                           65                       44
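The chi-square analyses behind Table 3 compare adoption rates between the agree and don’t-agree groups. A minimal sketch of one such 2x2 test; the split of the 566 respondents between the two groups is assumed here (400 vs. 166) purely for illustration, since Table 3 reports only percentages:

```python
# 2x2 chi-square test in pure Python. Group sizes are hypothetical;
# Table 3 reports only percentages, not counts.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# "Update virus protection": 80% of agreers vs. 66% of non-agreers.
# Assume 400 respondents agreed and 166 did not (invented split).
agree_yes, agree_no = round(400 * 0.80), round(400 * 0.20)
disagree_yes, disagree_no = round(166 * 0.66), round(166 * 0.34)

stat = chi_square_2x2(agree_yes, agree_no, disagree_yes, disagree_no)
print(round(stat, 2))  # well above 6.63, the 1-df critical value at p = 0.01
```

With these assumed counts the statistic comfortably exceeds the p < 0.001 threshold, consistent with the ** flag in the table.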
split the group into high- and low-efficacy conditions
at the median value of a multi-item index. We con-
trolled for involvement based on responses to a multi-
item index also included in the pretest. As we noted
earlier, about half our sample was highly involved
(that is, stated that online safety was highly relevant),
so splitting the group at the median separated the
“safety fanatics” from the rest. This resulted in four
groups: high involvement/high self-efficacy (n = 41),
low involvement/low self-efficacy (n = 38), high
involvement/low self-efficacy (n = 64), and low
involvement/high self-efficacy (n = 63).
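The grouping procedure just described, a median split on each pretest index crossed into four cells, can be sketched as follows; the participant scores are invented for illustration, and ties at the median are assigned to the low group by convention:

```python
# Sketch of the 2x2 median-split grouping used in the experiment.
# Scores are hypothetical pretest index values, not the study's data.
import statistics

def median_split(scores):
    """Label each participant 'high' or 'low' relative to the group median.

    Ties at the median go to the low group (a convention choice).
    """
    med = statistics.median(scores.values())
    return {pid: ("high" if s > med else "low") for pid, s in scores.items()}

involvement = {"p1": 6.1, "p2": 2.4, "p3": 5.0, "p4": 3.3}
efficacy    = {"p1": 5.5, "p2": 2.0, "p3": 2.8, "p4": 6.0}

inv = median_split(involvement)
eff = median_split(efficacy)

# Cross the two splits into the four experimental cells.
cells = {pid: (inv[pid], eff[pid]) for pid in involvement}
print(cells)  # each invented participant lands in a different cell
```
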
Prior to taking the posttest, half the respondents in
each of the four groups were randomly selected to visit
a Web page with online safety tips from Consumer
Reports, with the heading “Online Safety is Everyone’s
Job!” and a brief paragraph arguing that it was the
reader’s responsibility to protect themselves. That was
the personal responsibility treatment condition. The
other half of the sample was randomly assigned to a
Web page headed “Online Safety isn’t My Job!” that
argued online protection was somebody else’s
job, not the reader’s. That was the irresponsibility
treatment.
The results are shown in the accompanying figure.
The vertical axes indicate average scores on an eight-
item index of preventive safety behaviors, such as
intentions to read privacy policies before downloading
software and restricting instant messenger connec-
tions. After controlling for pretest scores, the personal
responsibility treatment caused increases in online
safety intentions in all conditions except one: Those
with low self-efficacy and low safety involvement had
lower safety intentions when told that safety was their
personal responsibility than when they were told it
was not (the lower line on the graph to the right).
Thus, those who are not highly involved in online
safety and who are not confident they can protect
themselves—a description likely to
fit many newer Internet users—
were evidently discouraged to learn
that safety was their responsibility.
The positive effect of the personal
responsibility manipulation was
greatest in the high involvement,
high self-efficacy condition and
high involvement users (the left-
hand graph) exhibited more
protective behavior than users with
less involvement (the right-hand
graph).
When safety maintenance
behaviors (for example, updating
virus and anti-spyware protections)
were examined, the pattern
for the low safety involvement
group reversed. There, the argument
about personal responsibility
caused those with high self-efficacy
to be less likely to engage in routine maintenance than
the argument against personal responsibility. We speculate
that those who are confident but not highly
involved in online safety reacted by resolving to fix the
problems after the fact rather than incur the burden of
regular maintenance. The other groups had the
expected improvements in safety maintenance intentions
with the personal responsibility message. However,
there was very little difference between
treatments for the low involvement/low self-efficacy
group, perhaps because they felt unable to carry out
basic maintenance tasks.
Thus, it is possible to improve safety behavior by
emphasizing the user’s personal responsibility. How-
ever, the strategy can backfire when addressed to those
who are perhaps most vulnerable; namely, those who
[Figure: Experimental results for safety prevention intentions. Two line graphs plot mean
prevention-intention scores under the irresponsibility and responsibility treatments: the
left-hand panel shows the high safety involvement groups, the right-hand panel the low
safety involvement groups, each with separate lines for high and low self-efficacy.
Note: Overall F(8,197) = 23.9, p < 0.001. Treatment x Involvement x Self-efficacy
F(7,197) = 2.69, p < 0.02. The dependent variable, prevention intentions (mean = 4.13,
standard deviation = 1.39, range = 1–7), is an eight-item additive index of prevention
intentions assessed on a 7-point scale, with total scores divided by 8.]
are uninterested in safety and who lack the self-confi-
dence to implement protection. The personal respon-
sibility message can also backfire when directed to
bold (or perhaps, foolhardy) users, those who think
they can recover from security breaches but who are
not involved enough to apply routine maintenance.
In the present research safety involvement was a
measured variable rather than a manipulated one.
However, safety involvement might also be manipu-
lated by linking it to a more personally relevant issue,
privacy. This is substantiated by the high correlation
(0.72) we found between privacy and safety involve-
ment. Privacy is often conceived as a social policy or
information management issue [3], but safety threats
affect privacy, too, by releasing personal information
or by producing unwanted intrusions. Within an
organization, the privacy of the firm might be linked
to personal involvement through employee evaluation
policies that either encourage safe practices or punish
safety breaches.
Among all of the factors we have discussed, personal
responsibility, self-efficacy, and response efficacy
were the ones most related to intentions to engage in
safe online behavior in our research [4]. Intentions are
directly related to actual behavior, and self-efficacy has
a direct impact on behavior over and above its effect
on good intentions [2, 5]. Still, there are factors that
intervene between intentions and behavior, especially
when the protective measures are relatively burdensome
and require attention over long periods of time,
as is the case for online safety.
Other sources of self-regulation can be tapped.
Social norms also affect safety intentions [5] if we
believe that our spouses and co-workers wish that we
would be safer online. Having a personal action plan
helps, as does a consistent context for carrying out the
safe behavior; that builds habit strength. Another
stratagem is offering ourselves incentives for executing
our safety plan (for example, a donut break after the
daily protection update). That is action control [1],
and it has proven effective in managing long-term
health risks that are analogous to the network security
problem.
Personalized interventions are critical. Seemingly
obvious but undifferentiated communication strate-
gies such as alerting users to spyware (found in [2, 5])
could have unwelcome effects. While there are differences
by gender and age [5], our experimental data
suggest that a more refined audience segmentation
approach is required. User education Web sites could
screen visitors with “i-safety IQ” quizzes that would
route them to appropriate content. Instead of serving
as one-shot repositories of safety tips, online interven-
tions might encourage repeat visits to build self-effi-
cacy and maintain action control. User-side applications
that detect problem conditions, alert users to
their risks and potential protective measures, and walk
them through implementation would also help.
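The screening idea above, a short “i-safety IQ” quiz that routes visitors to content matched to their segment, can be sketched as a simple rule table. The thresholds and content labels are hypothetical; the routing rules follow our experimental findings (avoid responsibility appeals for low-involvement/low-efficacy users, and stress maintenance routines for confident-but-uninvolved users):

```python
# Hypothetical quiz router implementing the audience segmentation
# suggested by the experiment. Thresholds and page names are invented.

def route(involvement, self_efficacy, threshold=4.0):
    """Map quiz scores (1-7 scales) to an education strategy."""
    high_inv = involvement >= threshold
    high_eff = self_efficacy >= threshold
    if high_inv and high_eff:
        return "responsibility-appeal"     # responds best to "it's your job"
    if high_inv and not high_eff:
        return "skill-building-tutorials"  # build self-efficacy first
    if not high_inv and high_eff:
        return "maintenance-habit-tips"    # counter the fix-it-later reflex
    return "gentle-awareness-intro"        # responsibility appeals backfire here

print(route(6.0, 6.5))  # responsibility-appeal
print(route(2.5, 1.5))  # gentle-awareness-intro
```

A deployed site would of course validate such rules against its own audience before relying on them.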
We conclude that the average user can be induced
to take a more active role in online safety. Progress has
been made in uncovering the “pressure points” for
effective user education. Here, we have attempted to
fit these into a logical and consistent framework. Still,
much work needs to be done to better understand
online safety behavior, including experimental studies
that can validate the causes of both safe and unsafe
behavior. More diverse populations must also be stud-
ied since much of the currently available research has
focused either on uncharacteristically naïve [9] or
savvy [2] groups. Our experimental findings suggest
that relatively modest, if carefully targeted, interven-
tions can be effective in promoting online safety.
Thus, improving user responsibility for overall online
safety is a desirable and achievable goal.
REFERENCES
1. Abraham, C., Sheeran, P., and Johnson, M. From health beliefs to self-regulation: Theoretical advances in the psychology of action control. Psychology and Health 13 (1998), 569–591.
2. Hu, Q. and Dinev, T. Is spyware an Internet nuisance or public menace? Commun. ACM 48, 8 (Aug. 2005), 61–65.
3. Karat, C.-M., Brodie, C., and Karat, J. Usable privacy and security for personal information management. Commun. ACM 49, 1 (Jan. 2006), 56–57.
4. LaRose, R., Rifon, N., Liu, X., and Lee, D. Understanding online safety behavior: A multivariate model. International Communication Association (May 27–30, 2005, New York).
5. Lee, Y. and Kozar, K.A. Investigating factors affecting the adoption of anti-spyware systems. Commun. ACM 48, 8 (Aug. 2005), 72–77.
6. National Cyber Security Alliance. AOL/NCSA Online Safety Study, 2005; www.staysafeonline.info/pdf/safety_study_2005.pdf.
7. National Cyber Security Alliance. Emerging Internet Threat List, 2006; www.staysafeonline.info/basics/Internetthreatlist06.html.
8. Petty, R. and Cacioppo, J. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Springer-Verlag, New York, 1986.
9. Poston, R., Stafford, T.F., and Hennington, A. Spyware: A view from the (online) street. Commun. ACM 48, 8 (Aug. 2005), 96–99.
10. Thompson, R. Why spyware poses multiple threats to security. Commun. ACM 48, 8 (Aug. 2005), 41–43.
11. Witte, K. Putting the fear back into fear appeals: The Extended Parallel Process Model. Communication Monographs 59, 4 (1992), 329–349.
12. Zhang, X. What do consumers really know about spyware? Commun. ACM 48, 8 (Aug. 2005), 44–48.
Robert LaRose (larose@msu.edu) is a professor in the
Department of Telecommunication, Information Studies, and Media
at Michigan State University, East Lansing, MI.
Nora J. Rifon (rifon@msu.edu) is a professor in the Department
of Advertising, Public Relations, and Retailing at Michigan State
University, East Lansing, MI.
Richard Enbody (enbody@cse.msu.edu) is an associate professor
in the Department of Computer Science and Engineering at Michigan
State University, East Lansing, MI.
©2008 ACM 0001-0782/08/0300 $5.00
DOI: 10.1145/1325555.1325569
... Literature on different aspects of this issue is rich. Comprising for example strands on the central role of user awareness (Bulgurcu et al., 2010;Corradini & Nardelli, 2018;Culnan et al., 2008;D´Arcy et al., 2009;Macabante et al., 2019;Spears & Barki, 2010), user responsibility (de Bruijn & Janssen, 2017;Filipczuk et al., 2019;LaRose et al., 2008), or the impact of different types of organizations (Acuna et al., 2021;Balozian & Leidner, 2017). Still, we lack a comprehensive understanding of users' cyber security behavior (Chen et al., 2021;Jenkins et al., 2021) and especially the interrelations of the different building blocks of organizational cyber security management. ...
... Filipczuk and colleagues (2019) confirm these findings as users assigned with a high level of responsibility for their own digital behavior easily adopted it and acted according to security guidelines. Both studies (Filipczuk et al., 2019;LaRose et al., 2008) showed that personal responsibility plays an important role in improving user behavior. ...
... However, internalizing responsibility requires user awareness and user IT capabilities. Our results confirm previous findings that in the absence of user awareness or insufficient user IT capabilities, responsibility is not internalized and users thus do not show desirable, i.e., cyber security compliant, behavior (Furnell et al., 2007;LaRose et al., 2008). ...
Conference Paper
Full-text available
Desirable user behavior is key to cyber security in organizations. However, a comprehensive overview on how to manage user behavior effectively, in order to support organizational cyber security, is missing. Building on extant research to identify central components of organizational cyber security management and on a qualitative analysis based on 20 semi-structured interviews with users and IT-Managers of a European university, we present an integrated model on this issue. We contribute to understanding the interrelations of namely user awareness, user IT-capabilities, organizational IT, user behavior, and especially internalized responsibility and relation to organizational cyber security.
... Other studies have examined PMT in relation to individual end-users in interactions with data backups [10], home computers [7,40], smartphone locking [3,4], passwords [111], mobile payment apps [96], Tor browser [97], and more. Researchers often incorporate other behavioral change theories in addition to PMT to explain security behaviors, highlighting the role of factors such as personal responsibility [9,56,91], attitudes [93], social norms [42,52,93], psychological capital [11], and prior negative experience [61] in addition to threat and coping appraisals. ...
... Nevertheless, prior work has provided inconclusive results on which one is more effective between threat and coping appeals. Some studies have identified coping appraisal as a significant predictor of security intention or behavior [9,11,18,40,42,48,52,56,61,72,93], but among these studies many also found significant effects for threat-related constructs, either threat severity or vulnerability alone [11,42,48,52,61] or both [7,93]. Other studies suggest that threat perceptions did not directly impact intention or behavior, but moderated the effect of coping-related constructs [72] or had interaction effects with the subject's occupational background [18]. ...
Preprint
Full-text available
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords. Our online experiment (n=1,386) compared the effectiveness of a threat appeal (highlighting negative consequences of breached passwords) and a coping appeal (providing instructions on how to change the breached password) in a 2x2 factorial design. Compared to the control condition, participants receiving the threat appeal were more likely to intend to change their passwords, and participants receiving both appeals were more likely to end up changing their passwords; both comparisons have a small effect size. Participants' password change behaviors are further associated with other factors such as their security attitudes (SA-6) and time passed since the breach, suggesting that PMT-based nudges are useful but insufficient to fully motivate users to change their passwords. Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
... Our study took inspiration from van Bavel et al.'s study, in which the authors separated threat and coping appeals in examining their effects on online security behavior including choosing a secure connection, selecting a trusted vendor, choosing a strong password, and logging out); the authors found that the coping appeal was as effective as both appeals combined, but not so the threat appeal alone [542]. Other survey-based studies also validated constructs within the coping appraisal as significant predictors of security intention or behavior, usually through structural equation modeling [49,63,102,202,216,237,252,280,289,363,463]. ...
... Prior work has integrated PMT with other theories and highlighted the importance of other factors such as personal responsibility [49,280,455], attitudes [463], social norms/influences [216,252,463], psychological capital [63], prior negative experience [289], and more. ...
Thesis
As much as consumers express desires to safeguard their online privacy, they often fail to do so effectively in reality. In my dissertation, I combine qualitative, quantitative, and design methods to uncover the challenges consumers face in adopting online privacy behaviors, then develop and evaluate different context-specific approaches to encouraging adoption. By examining consumer reactions to data breaches, I find how consumers' assessment of risks and decisions to take action could be subject to bounded rationality and potential biases. My analysis of data breach notifications provides another lens for interpreting inaction: unclear risk communications and overwhelming presentations of recommended actions in these notifications introduce more barriers to action. I then turn to investigate a broader set of privacy, security, and identity theft protection practices; the findings further illuminate individual differences in adoption and how impractical advice could lead to practice abandonment. Leveraging these insights, I investigate how to help consumers adopt online privacy-protective behaviors in three studies: (1) a user-centered design process that identified icons to help consumers better find and exercise privacy controls, (2) a qualitative study with multiple stakeholders to reimagine computer security customer support for serving survivors of intimate partner violence, and (3) a longitudinal experiment to evaluate nudges that encourage consumers to change passwords after data breaches, taking inspiration from the Protection Motivation Theory. These three studies demonstrate how developing support solutions for consumers requires varying approaches to account for the specific context and population studied. My dissertation further suggests the importance of critically reflecting on when and how to encourage adoption. While inaction could sometimes be misguided, it could also result from rational cost-benefit deliberations or resignation in the face of practical constraints.
... The awareness of hazards and the gravity of threats from social media are also influenced by the protective behavior of family, friends, leaders, or peers (Buhi et al., 2009). Learning defensive information from others may also increase a person's capacity to deal with threats (Pahnila et al., 2007; LaRose et al., 2008; Zhang et al., 2009). In the workplace, Pahnila et al. (2007) found that social norms positively affected people's intentions to follow security protection behaviors. ...
Article
Full-text available
Information and Communication Technology (ICT) has profoundly impacted social, psychological, and physical well-being, presenting positive and negative effects. This study primarily explores the negative aspects, specifically technostress, and its influence on mobile learning (m-learning) among university students, a crucial area with limited research, especially post-Covid-19. We aimed to develop a model for m-learning usage among undergraduates, investigating how factors like technostress, technology addiction, and technophobia could impede its benefits. Utilizing the Theory of Planned Behavior (TPB), we adopted a mixed-method approach, conducting an online survey with 1,144 students and in-depth interviews with 30 students from Riyadh, Saudi Arabia. The quantitative data, analyzed using Structural Equation Modeling (Smart PLS 3.8), validated our model, highlighting the significance of the three proposed factors on m-learning usage. Qualitatively, we gained insights into technostress, technophobia, and technology adoption barriers. Our findings suggest that m-learning can enhance academic performance, but its efficacy is subject to overcoming these identified barriers. This study contributes to educational research by emphasizing the need to address technological adoption challenges in higher education.
... 30 Given that the current work features voluntary rather than compulsory use, the motivation sought is intrinsic and self-determined. Consistent with previous work, which demonstrates that confidence in one's ability is among the most important antecedents to security-related behaviors in voluntary use scenarios, 51 we hypothesize: ...
Article
We draw on the Protection Motivation Theory (PMT) to design interventions that encourage users to change breached passwords. Our online experiment ( n =1,386) compared the effectiveness of a threat appeal (highlighting the negative consequences after passwords were breached) and a coping appeal (providing instructions on changing the breached password) in a 2×2 factorial design. Compared to the control condition, participants receiving the threat appeal were more likely to intend to change their passwords, and participants receiving both appeals were more likely to end up changing their passwords. Participants’ password change behaviors are further associated with other factors, such as their security attitudes (SA-6) and time passed since the breach, suggesting that PMT-based interventions are useful but insufficient to fully motivate users to change their passwords. Our study contributes to PMT’s application in security research and provides concrete design implications for improving compromised credential notifications.
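As a minimal illustrative sketch (not the authors' actual experimental materials), the 2×2 factorial design described in this abstract amounts to randomly allocating each participant to one of four cells formed by crossing the threat appeal (shown or not) with the coping appeal (shown or not); `assign_conditions` below is a hypothetical helper written for this illustration:

```python
import itertools
import random

# The four cells of a 2x2 factorial design: each tuple is
# (threat_appeal_shown, coping_appeal_shown). (False, False) is the control.
CONDITIONS = list(itertools.product([False, True], repeat=2))

def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant to one of the four factorial cells."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

# Sample size matching the abstract (n = 1,386).
assignments = assign_conditions(1386)
control_group = [a for a in assignments if a == (False, False)]
```

With simple random allocation as above, each cell receives roughly a quarter of the participants; a real experiment might instead use blocked randomization to guarantee equal cell sizes.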
Article
Full-text available
This study aims to identify and analyze the main research streams on digital safety and online privacy for higher education students. Students are at risk of cybercrime, including online harassment, identity theft, and exposure to inappropriate content. A systematic literature review was conducted on digital security, cybersecurity, Internet security, online safety, and the use of digital technologies. The results show that publications on digital security for higher education students have increased consistently over the last two decades. The main research streams are: digital risk and student perspectives, digital children, digital teenagers, online harassment, and digital behavior. In conclusion, this study provides a broad assessment of digital security for university students and highlights the importance of future research on improving awareness and safety in digital environments for teenagers.
Article
Full-text available
Privacy literacy is recognized as a crucial skill for safeguarding personal privacy online. However, self-assessed privacy literacy often diverges from actual literacy, revealing the presence of cognitive biases. The protection motivation theory (PMT) is widely used to explain privacy protection behavior, positing that whether individuals take defensive measures depends on their cognitive evaluation of threats and coping capabilities. However, the role of cognitive biases in this process has been understudied in previous research. This study focuses on Chinese digital natives and examines the differential impacts of subjective and objective privacy literacy on privacy protection behavior, as well as the role of cognitive biases in privacy decision-making. The results show that there is no significant correlation between subjective and objective privacy literacy, and a bias exists. When privacy concern is used as a mediating variable, there are significant differences in the paths through which subjective and objective privacy literacy influence privacy protection behavior. Furthermore, privacy literacy overconfidence moderates the relationship between privacy concern and privacy protection behavior. The findings confirm the influence of cognitive biases in privacy behavior decision-making and extend the PMT. This study also calls for the government to enhance privacy literacy training for digital natives to improve their privacy protection capabilities.
Article
The latest advances in data-driven marketing, such as real-time personalization, have increasingly made consumers more vulnerable. In response, some consumers deliberately falsify information in order to redress the balance of power, a practice that constitutes a serious threat to the digital economy. The topic of falsification is still largely under-researched in information systems and marketing. Based on protection motivation theory, the author conceptualizes privacy controls as a source of information and the falsification of information as a coping response, with vulnerability representing the threat appraisal mechanism and self-efficacy the coping appraisal mechanism. Through a within-subject experiment (n = 207), the results of the mediation analysis for repeated measures show that the effect of privacy controls as a source of information on the falsification of information is fully mediated by vulnerability and self-efficacy. The author provides insights for managers regarding the significant trade-off between reducing consumer vulnerability and maintaining the usefulness of the data.
Article
Full-text available
Recent media attention to spyware [2, 5, 7, 8] has brought to light the blunt intrusion into individual privacy and the uncertain hidden cost of free access to Internet sites, along with freeware and shareware. Most spyware programs belong to the more benign category of adware that delivers targeted pop-up ads based on a user's Web surfing habits. The more malicious type of spyware tracks each keystroke of the user and sends that information to its proprietors. Such information could be used for legitimate data mining purposes or it could be abused by others for identity theft and financial crimes.
Article
Full-text available
There are indications of late that the use of anti-spyware software is on the rise, with more than 100 million Internet users downloading Lavasoft's free anti-spyware software [2]. Some big-name companies are also beginning to address the spyware issue, including Microsoft, which currently has a beta version of its own anti-spyware available to Microsoft Windows users for download. However, a Gartner survey finds only 10% of respondents were taking sufficiently aggressive steps to minimize spyware infestations [5] and a Forrester survey found that even though 55% of consumers knew what spyware was, only 40% were running anti-spyware programs routinely [7].
Article
Full-text available
Technology has revolutionized information collection and distribution to the point where marketers have expanded and implemented new technologies to enable efficient consumer information acquisition. Such sophisticated data collection methods have raised serious concerns about consumer privacy, as some marketers have quickly discovered ways to abuse this power.
Article
The paper reviews the theoretical concepts included in a range of social cognitive models which have identified psychological antecedents of individual motivation and behaviour. Areas of correspondence are noted and core constructs (derived primarily from the theory of planned behaviour and social cognitive theory) are identified. The role of intention formation, self-efficacy beliefs, attitudes, normative beliefs and self-representations are highlighted and it is argued that these constructs provide a useful framework for modelling the psychological prerequisites of health behaviour. Acknowledging that intentions do not translate into action automatically, recent advances in our understanding of the ways in which prior planning and rehearsal can enhance individual control of action and facilitate the routinisation of behaviour are considered. The importance of engaging in preparatory behaviours for the achievement of many health goals is discussed and the processes by which goals are prioritised, including their links to self-representations, are explored. The implications of social cognitive and self-regulatory theories for the cognitive assessment of individual readiness for action and for intervention design in health-related settings are highlighted.
Article
The fear appeal literature is diverse and inconsistent. Existing fear appeal theories explain the positive linear results occurring in many studies, but are unable to explain the boomerang or curvilinear results occurring in other studies. The present work advances a theory integrating previous theoretical perspectives (i.e., Janis, 1967; Leventhal, 1970; Rogers, 1975, 1983) that is based on Leventhal's (1970) danger control/fear control framework. The proposed fear appeal theory, called the Extended Parallel Process Model (EPPM), expands on previous approaches in three ways: (a) by explaining why fear appeals fail; (b) by re‐incorporating fear as a central variable; and (c) by specifying the relationship between threat and efficacy in propositional forms. Specific propositions are given to guide future research.
Article
IBM T.J. Watson Research Center in Hawthorne, NY, focuses on the design and development of SPARCLE, policy authoring and transformation tools that enable organizations to create machine-readable policies for real-time enforcement decisions. Usable privacy and security technology is a critical need in the management of personal information, and should be part of the initial design considerations for technology applications, systems, and devices that involve personal information collection, access, and communications. SPARCLE will enable individuals to know that the policies are enforced within organizations by their own processes. The prototype workbench transforms natural language rules into machine-readable XML code through the use of natural language parsing technology. The tools promise to give organizations a verifiable path from the written form of a privacy rule to real-time enforcement decisions regarding access to personal information.
Article
The misuse of technology and the hijacking of systems by spyware, which present a danger to security and privacy, are discussed. Increased costs due to unnecessary consumption of bandwidth on individual PCs and the labor costs of rebuilding systems to ensure they are no longer corrupt are virtually unquantifiable. System degradation is time consuming for the individual PC user and even more so for network administrators managing corporate networks. Spyware is a significant threat to the effective functioning and continued growth of the Internet.
Article
Spyware is the latest epidemic security threat for Internet users. There are various types of spyware programs (see Table 1) creating serious problems such as copying and sending personal information, consuming CPU power, reducing available bandwidth, annoying users with endless pop-ups, and monitoring users' computer usage. As spyware makes the Internet a riskier place and undermines confidence in online activities, Internet users stop purchasing at online stores---a consequence that clearly disrupts e-business.