Keeping Up With The Joneses: Assessing Phishing Susceptibility in an Email Task


Abstract

Most prior research on preventing phishing attacks focuses on technology to identify and prevent the delivery of phishing emails. The current study supports an ongoing effort to develop a user profile that predicts when phishing attacks will be successful. We sought to identify the behavioral, cognitive, and perceptual attributes that make some individuals more vulnerable to phishing attacks than others. Fifty-three participants responded to a number of self-report measures (e.g., dispositional trust) and completed the 'Bob Jones' email task, which was designed to empirically evaluate phishing susceptibility. Over 92% of participants were to some extent vulnerable to phishing attacks. Additionally, individual differences in gender, trust, and personality were associated with phishing vulnerability. Applications and implications for future research are discussed.
Kyung Wha Hong1, Christopher M. Kelley2, Rucha Tempa2,
Emerson Murphy-Hill1 & Christopher B. Mayhorn2
1Department of Computer Science, 2Department of Psychology,
North Carolina State University, Raleigh, NC
Cybersecurity involves a complex interaction
between users and technology. While security
threats might take a variety of forms such as viruses
or worms delivered via nefarious websites or USB
drives, theft via social engineering tactics such as phishing is becoming increasingly common and costly. Loss of time and increased stress levels are
the immediate personal costs (Hardee, West, &
Mayhorn, 2006). Long term personal costs are
likely as well, such as decreased trust and usage of
the internet for banking, shopping, and other
conveniences (Dhamija & Tygar, 2005; Kelley,
Hong, Mayhorn, & Murphy-Hill, 2012). In terms of
economic losses, a recent survey of 4,500 adults (Gartner, 2007) indicates that phishing attacks caused a loss of $3.2 billion, with an average of $866 lost per phishing occurrence. Moreover, phishing targeted at
administrators can compromise entire systems and
user communities (Schwartz, 2011).
The goal of this research is to develop a user profile that predicts when and where phishing attacks will be successful. Such a user profile could
be useful to help identify behavioral, cognitive, and
perceptual differences that make some users more
susceptible to phishing than others. For instance,
individual differences in trust and cognitive and
attentional capacity have been identified separately
as contributing to phishing susceptibility. However, no one has constructed a unified user profile that combines these individual differences to proactively identify users who are prone to being successfully phished.
Method
Fifty-three undergraduate students were recruited to complete an experiment (Table 1). Participants were tested individually in sessions that lasted approximately two hours and were given extra credit as compensation.
The experiment was completed in two stages: participants first completed an online survey and then a laboratory session.
Self-report measures. Participants completed a
survey that measured demographic characteristics
such as age, gender, and primary language as well
as previous experiences with phishing, online
purchasing behavior, and general computing
behavior (based on Eveland, Shah, & Kwak, 2003;
Yoshioka, Washizaki, & Maruyama, 2008).
Participants also responded to measures of
dispositional trust (Merritt & Ilgen, 2008),
impulsivity (Neyste & Mayhorn, 2009), and
personality (Gosling, Rentfrow, & Swann, 2003).
Table 1
Participant Characteristics
Age: M = 20.20, range = 18-27
Frequencies were also recorded for gender, English as primary language, and Computer Science major.
Behavioral measures. To empirically assess
phishing susceptibility, participants completed an
email task where they were asked to access a
Google Mail account for a character named “Bob
Jones” and categorize 14 email messages (Table 2).
Table 2
Email Messages Divided by Category

Email Category   n
Spam             1
Malware          1
Phishing         7
Legitimate       5
Total            14
Figure 1 shows one of the phishing emails used as stimuli in this experiment. The email appears to be from CareerBuilder, a legitimate website representing a real company (complete with its logo), and it seems to offer useful information to the user. However, the links included in the email actually lead to a website unrelated to CareerBuilder's official website. Disguising the sender or source of an email to make it look like a legitimate company is a typical tactic used to create phishing emails.
Figure 1
Example Phishing Email
Participants were given the following instructions:
When you are going through each email, do as you normally do. For example, if you normally read each email carefully, do as you usually do; if you usually skim through each message quickly, that's fine, too. After going through an email, you have to make a decision about it. If you think the email is legitimate and you'd like to respond (e.g., reply, click on a link, download a file), then mark it 'Important'. If you think the email is legitimate but doesn't need any response and you would just like to archive it, leave it as it is. If you think the email is not legitimate, suspicious, or spam, then 'Delete' it.
After providing informed consent and completing the self-report measures online, participants visited the laboratory, where a battery of cognitive tests and the Bob Jones email task were administered. The cognitive tests included measures of working memory capacity (WMC; Unsworth, Heitz, Schrock, & Engle, 2005), crystallized intelligence (Shipley, 1986), spatial ability (Peters et al., 1995; Vandenberg & Kuse, 1978), and sustained attention (Temple et al., 2000).
Upon completion of the cognitive tests, instructions
for the Bob Jones email task were delivered.
Finally, participants were debriefed and dismissed.
Responses to self-report measures were
captured via an online survey tool, Qualtrics, and
the results of the cognitive tests and the Bob Jones
email task were entered into SPSS for analysis.
Survey Results
Prior phishing experience. Many respondents
indicated that they had previous phishing
experience via email. For instance, 25% reported
glancing at the contents of a phishing email whereas
36% admitted to completely reading a phishing
message. Thirty percent were compelled to ask
someone else whether they thought the email was
authentic whereas 11% reported contacting an
authority (e.g., bank). The most severe phishing
consequences seemed to be relatively rare with 15%
clicking on a link, 8% installing a virus/malware,
and 6% entering personal information. Of those
who entered personal information, name (6%) and
password (6%) comprised the information provided
to phishers. The most frequent consequences of participants' worst experiences included "noticed unusual activity in an
online account” (15%) and “reduced online
activity” (15%). Based on this previous experience,
89% agreed that they were “confident that they can
tell the difference between a legitimate email and
one sent by a scammer.”
Behavioral Results
Bob Jones email task performance. To ascertain phishing susceptibility, a score ranging from 0 (perfect ability) to 100 (no ability) was calculated for each participant's ability to identify phishing emails. The data suggested that more than 92%
of participants were susceptible to phishing with
only 4 participants (7.5% of the sample)
successfully identifying all of the phishing emails
and approximately 52% misclassifying more than
half of the phishing emails. Since phishing also
impacts the ability of people to identify legitimate
emails, the number of authentic emails that were
incorrectly deleted was assessed. Fifty-four percent
deleted at least one authentic email.
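The scoring just described can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes the score is simply the percentage of phishing emails the participant failed to delete, and the email ids and responses shown are hypothetical.

```python
def phishing_susceptibility(responses):
    """Return a score from 0 (perfect ability) to 100 (no ability).

    `responses` maps each phishing email's id to the participant's
    action. 'delete' is the correct response to a phishing email;
    'important' or 'archive' counts as a misclassification.
    """
    misses = sum(1 for action in responses.values() if action != 'delete')
    return 100.0 * misses / len(responses)

# Hypothetical participant who misclassifies 2 of 7 phishing emails:
responses = {1: 'delete', 2: 'delete', 3: 'important', 4: 'delete',
             5: 'archive', 6: 'delete', 7: 'delete'}
print(round(phishing_susceptibility(responses), 1))  # 28.6
```

Under this reading, the 4 participants who identified every phishing email would score 0, and a participant who misclassified more than half would score above 50.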
Individual differences correlated with accuracy. Analyses of the ability to correctly identify phishing emails revealed that gender, trust, and personality were correlated with phishing vulnerability. For example, women were less likely than men to correctly identify phishing emails, t(51) = -2.15, p = .036.
Dispositional trust, extraversion, and openness to new experience were correlated with deleting legitimate emails. Specifically, less trusting individuals, r(52) = -.30, p = .034, introverts, r(53) = -.29, p = .054, and those less open to new experiences, r(53) = -.435, p = .002, were more likely to delete legitimate emails.
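A gender difference of the kind reported above is typically tested with an independent-samples t-test. The sketch below computes the pooled-variance statistic from scratch on hypothetical scores; it illustrates the test itself and is not a re-analysis of the study's data.

```python
import math
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Pooled-variance independent-samples t statistic and its df."""
    n_a, n_b = len(group_a), len(group_b)
    df = n_a + n_b - 2
    # Pool the two sample variances, weighted by their degrees of freedom.
    pooled_var = ((n_a - 1) * variance(group_a) +
                  (n_b - 1) * variance(group_b)) / df
    t = (mean(group_a) - mean(group_b)) / math.sqrt(
        pooled_var * (1 / n_a + 1 / n_b))
    return t, df

# Hypothetical susceptibility scores (higher = more susceptible):
men = [40, 35, 50, 45, 30]
women = [60, 55, 70, 50, 65]
t, df = independent_t(men, women)
print(round(t, 1), df)  # -4.0 8
```

A negative t here indicates the first group scored lower, mirroring the direction of the reported t(51) = -2.15.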
Severity of email misclassification. In addition, because misclassifying some emails could have more severe consequences than others, five classes of email severity were created, ranging from 1 to 5:
Class 1: legitimate email (no danger)
Class 2: spam email or email sent to numerous recipients (no danger, but less useful)
Class 3: phishing email redirecting to an unexpected site (no danger)
Class 4: phishing email with a danger of losing less critical information
Class 5: phishing email with a danger of losing money or critical information
Thus, when an email was
misclassified a severity score was assigned based on
the participant’s response (e.g., their classification)
and the consequence of misclassifying that
particular email (Table 3). For example, if a
participant responded with ‘important’ for a
phishing email in email severity class 4, the severity
score for this response was assigned a score of 4.
However, if this participant responded with ‘delete’
for a phishing email in email severity class 5, the
severity score for this response was assigned a score
of 0. A total severity score due to misclassification
was calculated as the sum of severity scores for
each email response and ranged from 0 (no
consequence) to 23 (severe consequence).
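As an illustration of this scoring scheme, the sketch below generalizes from the two worked examples in the text; Table 3's full mapping is not reproduced here, so the assumption that a dangerous response earns the email's class value while a safe response earns 0 is ours.

```python
def severity_score(emails):
    """Total severity score for one participant's responses.

    `emails` is a list of (severity_class, is_phishing, response)
    tuples. Assumption: marking a phishing email 'important' scores
    its class value (the danger was realized); deleting it scores 0.
    Deleting a legitimate email scores its class value (class 1).
    """
    total = 0
    for sev_class, is_phishing, response in emails:
        if is_phishing and response == 'important':
            total += sev_class  # fell for the phishing email
        elif not is_phishing and response == 'delete':
            total += sev_class  # discarded a legitimate email
    return total

# The two worked examples from the text:
print(severity_score([(4, True, 'important')]))  # 4
print(severity_score([(5, True, 'delete')]))     # 0
```

Summing these per-email scores over the 14 messages yields the total severity score, bounded by 0 (no consequence) and 23 (severe consequence).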
Table 3
Severity Scores Based on Email Severity Class and Participant Response
Results revealed an average severity score of 14.24. What's more, only 2% of participants correctly classified all emails, indicating that approximately 98% would have experienced adverse consequences resulting from email misclassification.
Discussion
While the topic of phishing and social engineering is not new, the current focus on the human side of the HCI equation promises to expand our knowledge in this area. The preliminary results of the current study illustrate a number of findings.
First, results suggest a disconnect between
participants’ self-reported data and the empirical
data collected from the Bob Jones email task.
Specifically, approximately 92% of participants
misclassified phishing emails even though 89%
indicated they were confident of their ability to
identify phishing emails. These results suggest a
majority of participants were not only susceptible to
phishing attacks, but overconfident in their ability to
protect themselves from such attacks. Second, only
2% of the participants suffered no adverse
consequences due to misclassification of emails
during the task. Third, individual differences such
as gender, dispositional trust, and personality appear
to be associated with the ability to correctly
categorize emails as either legitimate or phishing.
While these results are interesting, they should
be interpreted with caution given several potential
methodological and analytical limitations. For
instance, reliance on self-report of prior behavior
may be subject to memory biases. Likewise, the
behavioral measure (Bob Jones email task) could be
described as artificial because participants were
asked to role play; however, this methodology has
been validated with prior research (Sheng et al.,
2010). Moreover, analysis of the consequences of
participants’ email misclassification severity was
based on a preliminary coding scheme developed by
an individual rater. Current efforts are underway to
provide inter-rater reliability for this measure and
additional measures used in the Bob Jones email
task. The sample recruited for the current study
consisted of college students. However, efforts are
currently underway to recruit a more diverse set of
participants (i.e., a non-student sample of working
professionals). Recently, we collected data from
volunteers employed at a government agency.
Future analyses will compare the students and non-students to determine what similarities are common to the two groups and, more importantly, how they vary in terms of phishing susceptibility.
Future Research and Application
These results contribute to an ongoing effort to
develop a user profile that identifies those most at
risk of being phished. One implication might be the
ability to recommend a tailored anti-phishing
training tool to a user who is determined to be
vulnerable to phishing attack. Moreover, our efforts
to investigate individual differences in phishing
susceptibility are exemplified in a recent paper that
describes how people from different cultures
conceptualize phishing (Tembe, Hong, Murphy-
Hill, Mayhorn, & Kelley, 2013).
Further research will focus on refining this
profiling procedure and using it to inform the design
of a usable and effective tool to help users combat
phishing attacks. Our plan is to develop a training tool whose content reflects the results from this study in addition to the content of conventional training tools (e.g., disguised email sources, poor grammar, urgency cues).
Moreover, we will analyze how our anti-phishing
tool contributes to protecting users from the severe
consequences of phishing attacks compared to other
tools that are currently on the market.
Acknowledgments
This research was supported by a National Security Agency grant to the fourth and fifth authors.

References
Dhamija, R., & Tygar, J. D. (2005). The battle against
phishing: Dynamic security skins. Paper presented at the
ACM International Conference Proceeding Series.
Eveland, W. P., Shah, D. V., & Kwak, N. (2003). Assessing
causality in the cognitive mediation model: A panel study
of motivations, information processing, and learning
during campaign 2000. Communication Research, 30(4),
359-386. doi: 10.1177/0093650203253369
Gartner. (2007). Gartner survey shows phishing attacks
escalated in 2007; more than $3 billion lost to these
attacks. Retrieved from
Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A
very brief measure of the big-five personality domains.
Journal of Research in Personality, 37(6), 504-528.
Hardee, J. B., West, R., & Mayhorn, C. B. (2006). To
download or not to download: An examination of
computer security decision making. Interactions, 13(3).
Kelley, C. M., Hong, K. W., Mayhorn, C. B., & Murphy-Hill,
E. (2012). Something smells phishy: Exploring
definitions, consequences, and reactions to phishing.
Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 56(1), 2108-2112.
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created
equal: Dispositional and history-based trust in human-
automation interactions. Human Factors: The Journal of
the Human Factors and Ergonomics Society, 50(2), 194-
Neyste, P. G., & Mayhorn, C. B. (2009). Perceptions of
cybersecurity: An exploratory analysis. Proceedings of
the 17th world congress of the international ergonomics
association. Beijing, China.
Peters, M., Laeng, B., Latham, K., Jackson, M., Zaiyouna, R.,
& Richardson, C. (1995). A redrawn Vandenberg and Kuse Mental Rotations Test: Different versions and factors that affect performance. Brain and Cognition, 28(1), 39-58.
Schwartz, M. J. (2011). Spear phishing attacks on the rise,
InformationWeek. Retrieved from
Sheng, S., Holbrook, M., Kumaraguru, P., Cranor, L. F., &
Downs, J. (2010). Who falls for phish?: A demographic
analysis of phishing susceptibility and effectiveness of
interventions. Proceedings of the 28th international
conference on Human factors in computing systems.
Atlanta, Georgia, USA.
Shipley, W. C. (1986). Shipley institute of living scale. Los
Angeles, CA: Western Psychological Services.
Tembe, R., Hong, K. W., Murphy-Hill, E., Mayhorn, C. B., &
Kelley, C. M. (2013). American and Indian
Conceptualizations of Phishing. Proceedings of the 3rd
Workshop on Socio-Technical Aspects in Security and Trust.
Temple, J. G., Warm, J. S., Dember, W. N., Jones, K. S.,
LaGrange, C. M., & Matthews, G. (2000). The effects of
signal salience and caffeine on performance, workload,
and stress in an abbreviated vigilance task. Human
Factors: The Journal of the Human Factors and
Ergonomics Society, 42(2), 183-194.
Unsworth, N., Heitz, R. P., Schrock, J. C., & Engle, R. W.
(2005). An automated version of the operation span task.
Behavior Research Methods, 37(3), 498-505.
Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a
group test of three-dimensional spatial visualization.
Perceptual and Motor Skills, 47(2), 599-604.
Yoshioka, N., Washizaki, H., & Maruyama, K. (2008). A survey on security patterns. Progress in Informatics, 5(5).
... Openness to Experience. Openness to experience, or the willingness to try new things, was significantly positively correlated with phishing susceptibility (Alseadon, 2014 behavior which may indicate less susceptibility to phishing (Hong et al., 2013). Contrary to these findings, Pattinson et al. (2012) found that high-openness users who were not informed that they were in a phishing study were better able to correctly manage phishing emails than low-openness users. ...
... Extraversion, or level of outgoingness, was a significant predictor of phishing susceptibility such that more extraverted users were more susceptible (Alseadon, 2014;Lawson et al., 2020). Another study also demonstrated that less extraverted users delete legitimate emails more often than more extraverted users, which may affect risk of falling for a phishing attack (Hong et al., 2013). Pattinson et al. (2012), however, found that high-extraversion users who were not informed that they were in a phishing study were better able to correctly manage phishing emails than low-extraversion users. ...
... Dispositional trust, or the tendency to believe in others' positive attributes, was shown to be a significant positive predictor of phishing susceptibility (scale developed by McKnight et al., 2004;Alseadon, 2014;Workman, 2008;Wright et al., 2009). Additionally, Hong et al. (2013) demonstrated that less trusting users were more likely to delete legitimate emails, a behavior that could potentially lower risk of falling for an attack. Although Moody et al. (2017) did not find significant main effects, they demonstrated that some subscales of dispositional trust, as well as distrust, measured using a scale from Moody et al. (2014), may be predictors of phishing susceptibility with stronger effects occurring when the user knows the apparent sender of the phishing email compared to when they do not. ...
Phishing attack countermeasures have previously relied on technical solutions or user training. As phishing attacks continue to impact users resulting in adverse consequences, mitigation efforts may be strengthened through an understanding of how user characteristics predict phishing susceptibility. Several studies have identified factors of interest that may contribute to susceptibility. Others have begun to build predictive models to better understand the relationships among factors in addition to their prediction power, although these studies have only used a handful of predictors. As a step toward creating a holistic model to predict phishing susceptibility, it was first necessary to catalog all known predictors that have been identified in the literature. We identified 32 predictors related to personality traits, demographics, educational background, cybersecurity experience and beliefs, platform experience, email behaviors, and work commitment style.
Phishing attacks pose substantial threats to the security of individuals and organizations. Although current anti-phishing tools achieve high accuracy rates and present a potential solution to this problem, users are often reluctant to rely on the predictions of these competent tools. However, we continue to lack a means of resolving this reluctance—or even an explanation for it. To address this need and advance toward a solution, we investigate the factors that influence users' reliance on anti-phishing tools. Over the course of two studies, we test the effects of tool attributes (i.e., accuracy and frequency of phishing email predictions) and develop a model based on the notions of trust and distrust. Countering the common conjecture that tools are not accurate enough, we find that users' under-reliance is not an artifact of the insufficient accuracy of tools, as even in a 100% accuracy condition, users were under-reliant on tools. Rather, we find that while accuracy increases users' trust in tools, full reliance is inhibited by users' distrust, which is driven by a lack of transparency regarding tools' functionalities and the quantity of predictions provided. Thus, overall, our study shows the limits of accuracy in engendering reliance and explains the under-reliance phenomenon by showing that due to lack of knowledge or understanding, some users prefer to rely on their own inferior judgment instead of trusting and relying on the predictions provided by highly accurate tools.
This study explores the psychological aspects of social engineering by analyzing personality traits in the context of spear-phishing attacks. Phishing emails were constructed by leveraging multiple vulnerable personality traits to maximize the success of an attack. The emails were then used to test several hypotheses regarding phishing susceptibility by simulating a series of spear-phishing campaigns inside a software development company. The company’s employees underwent a standard Big Five personality test, four different phishing emails over four weeks, and cybersecurity training. The results were aggregated before and after the cybersecurity course, and binary logistic regression analyses were performed at each phase of the phishing attack. The results show that personality traits correlate with phishing susceptibility under certain circumstances and pave the way for new methods of protecting individuals from phishing attacks.
Purpose: Social engineering attacks rely on compromising users’ confidential information usually using quid pro quo methods. Understanding the psychological reasons underlying the motivation for falling prey is imperative to developing successful defenses. Cybercriminals depend on human vulnerability rather than technology with an overreliance on technical solutions for protection rather than behavioral control models. Cyber sextortion is a type of quid pro quo social engineering that is under-researched. Hence, investigating the individual differences in security behavior and susceptibility to sextortion attacks using personality-based models is crucial. Methodology: Applying a quantitative methodology with online questionnaires, data was collected and analyzed using standard multiple regressions and Spearman's correlations in light of risky cyber security behavior (RCSB) scale correlating positively and negatively with extraversion, openness, agreeableness, neuroticism, and conscientiousness. Findings: The findings indicated the hypothesis of scoring high in the RCSB scale positively correlating negatively with conscientiousness was supported, although the overall regression analysis proved to be statistically significant. Social desirability to not admit risky cyber behaviors was apparent; however, the overall score for RCSB did show slightly risker behavior, indicating participants’ vulnerability to cyber sextortion. Originality/Value: This study supports that risky security behavior could be predicted by the personality of individuals. Developing and incorporating learning materials on how to mitigate the risks of cyber sextortion with organizational security awareness and training programs becomes highly crucial. Understanding the impact of conscientiousness, openness, extraversion, agreeableness, and neuroticism are necessary to safeguard against emerging attacks by means of cyber sextortion.
Phishing emails have certain characteristics, including wording related to urgency and unrealistic promises (i.e., “too good to be true”), that attempt to lure victims. To test whether these characteristics affected users’ suspiciousness of emails, users participated in a phishing judgment task in which we manipulated 1) email type (legitimate, phishing), 2) consequence amount (small, medium, large), 3) consequence type (gain, loss), and 4) urgency (present, absent). We predicted users would be most suspicious of phishing emails that were urgent and offered large gains. Results supporting the hypotheses indicate that users were more suspicious of phishing emails with a gain consequence type or large consequence amount. However, urgency was not a significant predictor of suspiciousness for phishing emails, but was for legitimate emails. These results have important cybersecurity-related implications for penetration testing and user training.
In the phishing email literature, recent researchers have given much attention to individual differences in phishing susceptibility from the perspective of the Big Five personality traits. Although the effectiveness and advantages of the phishing susceptibility measures in the signal detection theory (SDT) framework have been verified, the cognitive mechanisms that lead to individual differences in these measures remain unknown. The current study proposed and examined a theoretical path model to explore how the Big Five personality traits, related knowledge and experience and the cognitive processing of emails (i.e., mail elaboration) influence users’ susceptibility to phishing emails. A sample of 414 Chinese participants completed the 44-item Big Five Personality Inventory (BFI-44), Mail Elaboration Scale (MES), Web Experience Questionnaire, Experience with Electronic Mail Scale, Knowledge and Technical Background Test and a demographic questionnaire. The phishing susceptibility measures were calculated after the participants finished an email legitimacy task in a role-playing scenario. The results showed that the general profile of the “victim personality” included low conscientiousness, low openness and high neuroticism, and Internet experience and computer and web knowledge played an important role. All of these factors have significant indirect effects on phishing susceptibility by influencing mail elaboration. Moreover, the probabilities of checking for further information or deleting the email reflect the sensitivity of email judgment. These findings reveal the mediating role of cognitive processing between individual factors and phishing susceptibility. The theoretical implications of this study for the phishing susceptibility literature and its applications to phishing risk interventions or training programs are discussed.
Full-text available
Initial research on using crowdsourcing as a collaborative method for helping individuals identify phishing messages has shown promising results. However, the vast majority of crowdsourcing research has focussed on crowdsourced system components broadly and understanding individuals' motivation in contributing to crowdsourced systems. Little research has examined the features of crowdsourced systems that influence whether individuals utilise this information, particularly in the context of warnings for phishing emails. Thus, the present study examined four features related to warnings derived from a mock crowdsourced anti‐phishing warning system that 438 participants were provided to aid in their evaluation of a series of email messages: the number of times an email message was reported as being potentially suspicious, the source of the reports, the accuracy rate of the warnings (based on reports) and the disclosure of the accuracy rate. The results showed that crowdsourcing features work together to encourage warning acceptance and reduce anxiety. Accuracy rate demonstrated the most prominent effects on outcomes related to judgement accuracy, adherence to warning recommendations and anxiety with system use. The results are discussed regarding implications for organisations considering the design and implementation of crowdsourced phishing warning systems that facilitate accurate recommendations.
When interacting with computers or digital artifacts, individuals tend to replicate interpersonal trust and distrust mechanisms to calibrate their trust. Such mechanisms involve cognitive processes that individuals rely on before making a decision to trust or distrust. With the worldwide increase in email traffic, both the academic literature and professionals warn of insider threats, that is, coming from inside an organization, in particular those created by legitimate users who have decided to trust a phishing email. This article offers a cognitive approach to the decision whether to trust a phishing email. After reviewing the literature on decision making concerning a cognitive perspective, interpretation, trust, distrust, online deception, and insider threats, we present a study conducted on 249 participants designed to ascertain how they interpreted phishing emails and decided whether or not to trust them. We noted that certain elements eliciting trust or distrust remained invariable regardless of the participant. We show examples of phishing emails designed to maximize (or minimize) the decision to trust (or distrust), and lastly consider the limitations and ethical questions raised by this research.
Full-text available
One hundred fifty-five participants completed a survey on Amazon’s Mechanical Turk that assessed characteristics of phishing attacks and requested participants to describe their previous experiences and the related consequences. Results indicated almost all participants had been targets of a phishing with 22% reporting these attempts were successful. Participants reported actively engaging in efforts to protect themselves online by noticing the “padlock icon” and seeking additional information to verify the legitimacy of e-retailers. Moreover, participants indicated that phishers most frequently pose as members of organizations and that phishing typically occurs via email yet they are aware that other media might also make them susceptible to phishing scams. The reported consequences of phishing attacks go beyond financial loss, with many participants describing social ramifications such as embarrassment and reduced trust. Implications for research in risk communication and design roles by human factors/ergonomics (HF/E) professionals are discussed.
Conference Paper
Full-text available
Using Amazon's Mechanical Turk, fifty American and sixty-one Indian participants completed a survey that assessed characteristics of phishing attacks, asked participants to describe their previous phishing experiences, and report phishing consequences. The results indicated that almost all participants had been targets, yet Indian participants were twice as likely to be successfully phished as American participants. Part of the reason appears to be that American participants reported more frequent efforts to protect themselves online such as by looking for the padlock icon in their browser. Statistical analyses indicated that American participants agreed more with items for characteristics of phishing, consequences of phishing and the types of media where phishing occurs, suggesting more cautiousness and awareness of phishing.
Security has become an important topic for many software systems. Security patterns are reusable solutions to security problems. Although many security patterns and techniques for using them have been proposed, it is still difficult to adapt security patterns to each phase of software development. This paper provides a survey of approaches to security patterns. By classifying these approaches, we illustrate a direction for their integration and identify future research topics.
This two-wave national panel study was designed to test the causal claims of the “cognitive mediation model.” The data indicate strong support for the following causal relationships predicted by the model: (a) surveillance motivations influence information processing, (b) information processing influences knowledge, and (c) motivations influence knowledge only indirectly through information processing. However, additional analyses demonstrated that these variables are not related in a simple unidirectional causal pattern. Instead, panel analyses found that most of these relationships are mutually causal. Future research should consider the reciprocal nature of relationships between information processing and knowledge, particularly as it relates to the study of the knowledge gap hypothesis.
Conference Paper
In this paper we present the results of a roleplay survey instrument administered to 1001 online survey respondents to study both the relationship between demographics and phishing susceptibility and the effectiveness of several anti-phishing educational materials. Our results suggest that women are more susceptible than men to phishing and that participants between the ages of 18 and 25 are more susceptible to phishing than other age groups. We explain these demographic factors through a mediation analysis. Educational materials reduced users' tendency to enter information into phishing webpages by 40%; however, some of the educational materials we tested also slightly decreased participants' tendency to click on legitimate links.
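The mediation analysis mentioned above can be sketched in code. The data, variable names, and effect sizes below are entirely hypothetical and simulated for illustration; this is not the authors' analysis, only a minimal demonstration of how an indirect effect through a mediator is estimated:

```python
import numpy as np

# Hypothetical simulated data: X (demographic group) -> M (mediator,
# e.g. protective online habits) -> Y (phishing susceptibility).
rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)
m = 0.8 * x + rng.normal(0, 1, n)            # mediator driven by X
y = 0.6 * m + 0.1 * x + rng.normal(0, 1, n)  # outcome mostly via the mediator

def ols_slopes(predictors, outcome):
    # Least-squares slopes, with an intercept column prepended.
    design = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(design, outcome, rcond=None)[0][1:]

total = ols_slopes([x], y)[0]       # c:  total effect of X on Y
a = ols_slopes([x], m)[0]           # a:  X -> M
b, direct = ols_slopes([m, x], y)   # b and c': Y regressed on M and X together

# For OLS with these three regressions, c = c' + a*b holds exactly in-sample,
# so the indirect path a*b quantifies how much of the X -> Y effect
# flows through the mediator.
indirect = a * b
```

A large indirect effect relative to the direct effect is the signature of mediation: the demographic difference operates mostly through the mediating variable.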
The concept of making security decision making fundamental to the design of user-facing security features is described. A series of decision-making scenarios were designed to systematically vary by decision domain, risk, and gain-to-loss ratio in an effort to determine how computer users might respond to potential security decisions. Fifty-six students enrolled at a public university volunteered to participate in a study that used a 2×2×3 repeated measures factorial design. The study used performance on a scenario-based decision task to draw conclusions about how risk and gain-to-loss ratio might affect decision-making within the domains of computing and non-computing security decisions. Combining the evaluation approach with potential alterations of security warnings should allow designers to improve security systems.
When time is limited, researchers may be faced with the choice of using an extremely brief measure of the Big-Five personality dimensions or using no measure at all. To meet the need for a very brief measure, 5- and 10-item inventories were developed and evaluated. Although somewhat inferior to standard multi-item instruments, the instruments reached adequate levels in terms of: (a) convergence with widely used Big-Five measures in self, observer, and peer reports, (b) test–retest reliability, (c) patterns of predicted external correlates, and (d) convergence between self and observer ratings. On the basis of these tests, a 10-item measure of the Big-Five dimensions is offered for situations where very short measures are needed, personality is not the primary topic of interest, or researchers can tolerate the somewhat diminished psychometric properties associated with very brief measures.
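A brief Big-Five measure of this kind is scored by averaging each dimension's two items, one of which is reverse-keyed. The sketch below assumes the standard TIPI item order and a 7-point response scale; treat the pairings as illustrative rather than a substitute for the published scoring key:

```python
def score_tipi(responses):
    """Score a 10-item brief Big-Five inventory (ratings on a 1-7 scale,
    in the standard TIPI item order)."""
    rev = lambda x: 8 - x  # reverse-key an item on a 7-point scale
    pairs = {
        "Extraversion":        (responses[0], rev(responses[5])),
        "Agreeableness":       (rev(responses[1]), responses[6]),
        "Conscientiousness":   (responses[2], rev(responses[7])),
        "Emotional Stability": (rev(responses[3]), responses[8]),
        "Openness":            (responses[4], rev(responses[9])),
    }
    # Each trait score is the mean of its direct-keyed and reverse-keyed item.
    return {trait: sum(items) / 2 for trait, items in pairs.items()}

# Alternating 5s and 3s yields 5.0 on every trait once reverse-keying is applied.
scores = score_tipi([5, 3, 5, 3, 5, 3, 5, 3, 5, 3])
```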
Conference Paper
Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users. We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme. We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields. Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a "skin" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match. We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.
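The core idea, that browser and server independently derive the same per-user, per-transaction image from a shared secret, can be sketched as follows. This is a toy illustration, not the paper's implementation; the HMAC-based seed derivation and the ASCII "image" are assumptions made purely for demonstration:

```python
import hmac
import hashlib

def skin_seed(shared_secret: bytes, user: str, txn_id: str) -> bytes:
    # Both browser and server derive the same seed from the shared secret,
    # so they can render identical abstract images without transmitting them.
    msg = f"{user}:{txn_id}".encode()
    return hmac.new(shared_secret, msg, hashlib.sha256).digest()

def render_pattern(seed: bytes, size: int = 8) -> list:
    # Toy "abstract image": an 8x8 block pattern driven by the seed bits.
    bits = "".join(f"{b:08b}" for b in seed)[: size * size]
    return ["".join("#" if bits[r * size + c] == "1" else "." for c in range(size))
            for r in range(size)]

# If both renderings match, the user can conclude both sides hold the same
# secret; an attacker without the secret cannot predict the pattern.
server_img = render_pattern(skin_seed(b"secret", "alice", "txn-42"))
browser_img = render_pattern(skin_seed(b"secret", "alice", "txn-42"))
```

Because the seed changes with every transaction, a spoofed page cannot replay a previously observed skin for a new session.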
A new paper-and-pencil test of spatial visualization was constructed from the figures used in the chronometric study of Shepard and Metzler (1971). In large samples, the new test displayed substantial internal consistency (Kuder-Richardson 20 = .88), high test-retest reliability (.83), and consistent sex differences over the entire range of ages investigated. Correlations with other measures indicated strong association with tests of spatial visualization and virtually no association with tests of verbal ability.
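The Kuder-Richardson Formula 20 reported here is KR-20 = k/(k-1) * (1 - Σ p_i q_i / σ²), where k is the number of items, p_i is the proportion of respondents passing item i, q_i = 1 - p_i, and σ² is the variance of total scores. A minimal sketch with made-up binary item data:

```python
def kr20(item_matrix):
    # item_matrix: one row per respondent, each a list of 0/1 item scores.
    k = len(item_matrix[0])
    n = len(item_matrix)
    # Per-item pass proportions p_i and the sum of p_i * q_i.
    p = [sum(row[i] for row in item_matrix) / n for i in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)
    # Population variance of the total scores.
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var)
```

When items co-vary perfectly the statistic reaches 1.0, and when they are unrelated it falls toward 0, which is why a value of .88 indicates substantial internal consistency.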