Breaching the Human Firewall: Social engineering in
Phishing and Spear-Phishing Emails
Marcus Butavicius
National Security and Intelligence, Surveillance and Reconnaissance (ISR) Division
Defence Science and Technology Group
Edinburgh, South Australia
Email: marcus.butavicius@dsto.defence.gov.au
Kathryn Parsons
National Security and Intelligence, Surveillance and Reconnaissance (ISR) Division
Defence Science and Technology Group
Edinburgh, South Australia
Email: kathryn.parsons@dsto.defence.gov.au
Malcolm Pattinson
Business School
University of Adelaide
Adelaide, South Australia
Email: malcolm.pattinson@adelaide.edu.au
Agata McCormac
National Security and Intelligence, Surveillance and Reconnaissance (ISR) Division
Defence Science and Technology Group
Edinburgh, South Australia
Email: agata.mccormac@dsto.defence.gov.au
Abstract
We examined the influence of three social engineering strategies on users' judgments of how safe it is
to click on a link in an email. The three strategies examined were authority, scarcity and social proof,
and the emails were either genuine, phishing or spear-phishing. Of the three strategies, the use of
authority was the most effective strategy in convincing users that a link in an email was safe. When
detecting phishing and spear-phishing emails, users performed the worst when the emails used the
authority principle and performed best when social proof was present. Overall, users struggled to
distinguish between genuine and spear-phishing emails. Finally, users who were less impulsive in
making decisions generally were less likely to judge a link as safe in the fraudulent emails. Implications
for education and training are discussed.
Keywords
human-computer interaction, cyber security, phishing, empirical evaluation
1 Introduction
Phishing emails are emails sent with malicious intent that attempt to trick recipients into providing
information or access to the sender. Typically, the sender masquerades as a legitimate entity and crafts
the email to try and persuade the user to perform an action. This action may involve revealing sensitive
personal information (e.g., passwords) and / or inadvertently providing access to their computer or
network (e.g., through the installation of malware) (Aaron and Rasmussen 2010; APWG 2014; Hong
2012). In a recent survey of Australian organisations, the most common security incident reported
(45%) was that of employees opening phishing emails (Telstra Corporation 2014). While the direct
financial costs of such cyber-attacks in 2013 were estimated at a staggering USD $5.9 billion (RSA
Security 2014), there is also a range of other negative consequences for organisations that can be just
as harmful (Alavi et al. 2015). These include damage to reputation, loss of intellectual property and
sensitive information, and the corruption of critical data (Telstra Corporation 2014).
A more sinister development in cyber-attacks has been the increase in spear-phishing (Hong 2012). In
contrast with phishing emails, which tend to be more generic and are sent in bulk to a large number of
recipients, spear-phishing emails are sent to, and created specifically for, an individual or small group
of individuals (APWG 2014). When directed towards senior executives and high-ranking staff, such
attacks are known as 'whaling'. These targets typically have greater access to sensitive corporate
information and may have privileged access accounts when compared to the average user. Spear-
phishing emails include more detailed contextual information to increase the likelihood of a recipient
falling victim to them (Hong 2012). For example, they may include information relevant to the
recipient’s personal or business interests to increase the likelihood that the recipient will respond.
Such attacks are increasingly deployed by criminals who are attempting to commit financial crimes
against specific targets, corporate spies involved in stealing intellectual property and sensitive
information, and hacktivists who wish to draw attention to their cause (APWG 2014).
Phishing and spear-phishing remain ongoing threats because they circumvent many technical
safeguards by targeting the user, rather than the system (Hong 2012). Previous phishing studies have
attempted to understand these human issues by studying the visual and structural elements of emails
that influence people (Jakobsson 2007; Furnell 2007; Parsons et al. 2013). However, phishing emails
also frequently use social engineering to coerce the target into responding (Samani and McFarland
2015), and there is a lack of research examining the influence of social engineering strategies.
1.1 The Influence of Social Engineering Strategies
Social engineering refers to the psychological manipulation of people into disclosing information or
performing an action (Mitnick and Simon 2002). This paper focuses on how three different social
engineering strategies influence users’ response to emails. To our knowledge, no previous studies on
phishing have manipulated social engineering strategies in a controlled user study. However,
psychological persuasion has been studied extensively in other contexts such as advertising and
helping behaviour (Knowles and Linn 2004).
Although there is disagreement in the literature as to how to categorise persuasion strategies (Shadel
and Park 2007; Pratkanis 2007), the most widely accepted classification of psychological persuasion
strategies is by Cialdini (2007). Cialdini’s (2007) summary includes six principles of persuasion. Three
of the tactics, namely, reciprocation, consistency and liking, are more dependent on a mutual,
recurring relationship, and are therefore less suited to the lab-based scenario of our study, in which
the users do not have any relationship with the sender. Hence, our study focused on Cialdini’s (2007)
other three principles: social proof, scarcity and authority.
Social proof suggests that people are more likely to comply with a request if others have already
complied (Cialdini 2007). In a phishing context, emails that specify the offer has already been taken up
by other people are likely to be more persuasive. Scarcity is based on the idea that people are more
likely to value something that is rare or limited. Hence, people are more likely to be influenced by
emails that claim an offer is only available for a short time. The authority principle indicates that
people are more likely to comply with a request that appears to be from a respected authority figure.
Hence, an email with a request from the CEO of an organisation should be more effective than the
same request from a less influential person. A recent survey of phishing emails reported that, between
August 2013 and December 2013, authority was the most prevalent social engineering technique used
in such attacks, followed closely by scarcity, particularly in emails that requested information on
account details (Akbar 2014).
1.2 The Design of Phishing Studies
There are only two previous studies on human susceptibility to phishing emails that have involved the
direct manipulation of Cialdini’s (2007) influence techniques. Neither of these were controlled user
studies, and they yielded contradictory results. While Wright et al. (2014) found that authority was the
least influential technique in tricking people into falling for phishing scams, Halevi et al. (2015) found
the same strategy to be the most influential. Both these studies used the methodology that has been
become known in the literature as 'real phishing’, whereby users unknowingly receive emails as part of
the test in their normal inbox (see also Jagatic et al. 2007).
To address these contradictory findings and further examine the issue of how Cialdini’s techniques
may influence people’s susceptibility to phishing emails, we decided to use an alternative, lab-based
approach (see also Parsons et al. 2013; Pattinson et al. 2012). Using this methodology, users were
presented with emails in a controlled setting and their behaviour was logged. While such studies may
lack the real-world face validity of 'real phishing’ experiments, they have the advantage that they
provide greater control and more comprehensive measurement of user behaviour. For example, ‘real
phishing’ studies do not measure how people make decisions on genuine emails. In our study, by
testing performance on both genuine and fraudulent emails, we can apply an approach to evaluation
known as Signal Detection Theory (SDT: Green and Swets 1966). Previously, we applied this approach
to the analysis of human detection performance of phishing emails (Parsons et al. 2013). SDT has also
been applied to a wide range of other applications including human face recognition and image
identification, biometric system assessment, economics and neuroscience (Butavicius 2006; Fletcher
et al. 2008; Gold and Shadlen 2007; Hanton et al. 2010). SDT allows us to estimate two measures:
discrimination and bias. Discrimination measures how well people can distinguish between genuine
and fraudulent emails and bias measures people’s tendency to classify an email as either genuine or
fraudulent. In contrast, 'real phishing' studies cannot estimate discrimination and bias measures
because they only collect behavioural responses to phishing emails.¹
¹ 'Real phishing' studies can only calculate hits (i.e., the number of correct detections of a phishing email) but not false alarms (i.e., the number of incorrect judgments of a genuine email as phishing).
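To make the point concrete, the following minimal sketch (in Python, on invented judgments; variable names are ours) shows why estimating discrimination requires a false-alarm rate from genuine emails as well as a hit rate from fraudulent ones:

```python
# Sketch: why discrimination and bias need both email classes.
# Each judgment is (is_fraudulent, judged_unsafe); data are invented.
judgments = [
    (True, True), (True, False),    # fraudulent emails (signal trials)
    (False, False), (False, True),  # genuine emails (noise trials)
]

signal = [judged for fraud, judged in judgments if fraud]
noise = [judged for fraud, judged in judgments if not fraud]

hit_rate = sum(signal) / len(signal)        # fraudulent judged unsafe
false_alarm_rate = sum(noise) / len(noise)  # genuine judged unsafe

# A 'real phishing' study observes only hit_rate: a cautious user who
# flags everything and a discerning user who flags only frauds can show
# the same hit rate; false_alarm_rate is what tells them apart.
print(hit_rate, false_alarm_rate)
```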
1.3 Individual differences
Finally, we need to understand what makes some individuals better at detecting phishing emails than
others. Halevi et al. (2015) and Pattinson et al. (2012) have shown that a number of individual
differences (e.g., personality characteristics and familiarity with computers) can influence how people
respond to phishing emails. In the current study, we included the Cognitive Reflection Test (CRT:
Frederick 2005), which measures how impulsive people are when making decisions (i.e., cognitive
impulsivity). Sagarin and Cialdini (2004) have argued that resisting persuasion techniques requires
more cognitive resources than accepting them. Accordingly, previous research has shown that higher
individual levels of impulsivity in decision making are associated with poorer performance in the
detection of phishing emails (Parsons et al. 2013). In this study, we sought to replicate this finding. In
addition, we tested whether cognitive impulsivity was associated with the ability to detect spear-
phishing emails. By understanding what individual differences influence information security
behaviours, we can begin to assist in the training and education of users to resist phishing attacks.
1.4 Aims of the research
In summary, the current study seeks to address the following questions:
• How do the three social engineering strategies of authority, scarcity and social proof influence users’ judgments on the safety of links in emails?
• Does the influence of these techniques vary across different types of emails, i.e., genuine, phishing and spear-phishing emails?
• How well can people detect phishing and spear-phishing emails?
• How does a user’s impulsivity in making decisions affect their ability to judge the safety of a link in an email?
In what follows, we will describe the methodology of our experiment, present a statistical analysis of
the results of our study, and then discuss the findings and implications of this work, with a focus on
training and education.
2 Methodology
2.1 Participants
Our convenience sample consisted of 121 students from a large South Australian university, and they
were recruited via email invitation. At the time of the study, these students were enrolled in
undergraduate and postgraduate level courses including finance, international business, accounting,
marketing, management and entrepreneurship. Approximately half of the participants (60) had
undertaken most of their tertiary education in Australia. The majority of the students were female
(68%), all were 18 years or older and most were between 20-29 years of age (62%).
2.2 Emails
Our experiment used 12 emails, which were either genuine, phishing or spear-phishing emails. To
create the emails, we consulted with university IT security staff, who provided examples of phishing
emails that had been sent to university email accounts. These phishing emails had been sent within the
previous six months and, based on recipient-reporting and system monitoring, had been identified as
the most successful attacks against students and staff. These standard phishing emails (i.e., not spear-
phishing) were used as templates to form the phishing emails in our experiment. For safety purposes,
we disabled students’ access to the internet during the study, and also modified the actual phishing
link by a single character. We also collected genuine emails that had been received by students of the
university to provide as a template for the remaining emails. These were used to create the genuine
and spear-phishing emails in our study, where the only difference between the two emails was the link.
For genuine emails we used a legitimate link, while for spear-phishing emails, we used the modified,
illegitimate links from the actual phishing emails as previously specified.
In all phishing and spear-phishing emails, the displayed text for a link was a description such as “Click
here” or “Take the survey” rather than the actual link, and participants were advised, both verbally and
in writing at the start of the experiment, that if they “hover over a link, it will show you where it would take
you”. Although the names and contact details in the emails were fictitious, the position titles in the
genuine and spear-phishing emails were actual positions at the university. Participants were advised
that, when judging the emails, they were to assume that the emails had all been sent to them
deliberately (i.e., they had not received them by mistake) and that the topics in the emails were
relevant to them (i.e., “if the email mentions a piece of software, assume that you are interested in that
software”).
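The cue participants were taught to check, a mismatch between a link's display text and its underlying destination, can be illustrated with a short sketch; the HTML snippet, the helper class and the domain in it are our own invented examples, not materials from the study:

```python
# Sketch: surfacing the mismatch between a link's display text and its
# real destination, the cue participants were told to check by hovering.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (display_text, href) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Invented email body: friendly display text hides the true destination.
body = '<a href="http://examp1e-survey.com/login">Take the survey</a>'
parser = LinkExtractor()
parser.feed(body)
for text, href in parser.links:
    print(f"displayed: {text!r} -> actual destination: {href}")
```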
In order to include the appropriate social engineering strategy, we added phrases to the emails that
appealed to these strategies. There were four conditions testing the effects of social engineering:
• Authority: The email appeared to come from a person or institution of authority (e.g., CEO, CIO) and the language used was more authoritative.
• Social proof: The email encouraged the participants to take a particular action because other people, often peers, had already undertaken this action (e.g., “Over 1000 students will study overseas in 2014. Will you be one of them?”).
• Scarcity: The email included information suggesting that an offer was limited (e.g., a limited time to respond or a limited number of places available on a course).
• None: The email did not contain any phrases appealing to authority, social proof or scarcity strategies.
Each participant saw the same 12 emails. For each type of email (i.e., genuine, phishing and spear-
phishing), we applied each of the four social engineering treatments once (i.e., authority, social proof,
scarcity and none).
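For concreteness, a minimal sketch of the resulting 3 x 4 within-subjects design (condition labels are taken from the paper; the code itself is our illustration):

```python
# Sketch: the 3 x 4 within-subjects design yields the 12 emails each
# participant judged (each strategy applied once per email type).
from itertools import product

email_types = ["genuine", "phishing", "spear-phishing"]
strategies = ["authority", "social proof", "scarcity", "none"]

design = list(product(email_types, strategies))
assert len(design) == 12
for email_type, strategy in design:
    print(f"{email_type:>15} email with {strategy} strategy")
```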
2.3 Procedure
Participants were allocated to separate lab-based sessions with a maximum of twenty students in each.
Each student completed the experiment independently via computer. A research assistant explained
the procedure and remained present in the room throughout the experiment to answer any questions
and to ensure students worked independently. Participants were not explicitly told they were
participating in an experiment on phishing. This is because previous research has shown that
informing people they will view phishing emails artificially raises their awareness of phishing attacks
for the duration of the experiment via a psychological process known as priming (Parsons et al. 2013).
The experiment was delivered using the Qualtrics online survey software. Participants were shown
each email separately and were asked to provide a ‘Link Safety’ judgment (i.e., ‘It is okay to click on the
link in this email’). Responses were provided on a five-point Likert scale where “1” = strongly disagree,
“2” = disagree, “3” = neither agree nor disagree, “4” = agree and “5” = strongly agree. The emails were
presented to participants in a different, random order for each session. After participants had judged
all emails, they were then asked to complete the CRT in Qualtrics.
3 Results
To summarise the overall results, we recoded the ‘Link Safety’ judgments into a binary variable (‘Safe
to click?’) such that scores of 4 (‘agree’) and 5 (‘strongly agree’) were classified as ‘safe’ and all
remaining responses were classified as ‘unsafe’. A summary of all participants’ responses to links
within the experiment can be seen in Figure 1. Participants correctly determined that legitimate links
in genuine emails were safe to click 77% of the time. However, in spear-phishing emails, where the link
was always unsafe, they incorrectly judged the link to be safe 71% of the time. Almost half the sample
(45%) did not judge any of the links in the spear-phishing emails as unsafe. For standard phishing
emails, the percentage of responses that incorrectly judged the link to be safe dropped to 37%, and 10% of
the participants did not judge any of the links in the phishing emails as unsafe.
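The recoding step can be made explicit with a minimal sketch (the function and variable names are ours):

```python
# Sketch: collapsing the five-point 'Link Safety' ratings into the
# binary 'Safe to click?' variable used in the summary analysis.
def recode_safe_to_click(rating: int) -> bool:
    """Ratings of 4 (agree) and 5 (strongly agree) count as 'safe'."""
    return rating >= 4

ratings = [1, 3, 4, 5, 2]
print([recode_safe_to_click(r) for r in ratings])
# [False, False, True, True, False]
```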
Next we analysed performance using a 4 x 3 Repeated Measures Analysis of Variance on the original
five point Link Safety ratings. There were four levels of social engineering strategy: scarcity, social
proof, authority and none. There were three levels of email type: genuine, phishing and spear-
phishing. As displayed in Figure 2, there was a significant overall influence of email type (Wilks'
Lambda = .38, F(2,119) = 80.09, p < .001, multivariate partial η² = .62). In other words, there was a
significant variation in decisions on the safety of the link depending on whether the email was genuine,
phishing or spear-phishing.
There was also a significant influence of social engineering strategy on the ‘Link Safety’ judgments of
participants (Wilks' Lambda = .85, F(3,118) = 6.89, p < .001, multivariate partial η² = .15). Overall,
participants judged the links in phishing emails that conveyed authority as the safest (see Figure 2).
Pairwise comparisons showed that the mean ‘Link Safety’ rating when the authority tactic was present
was significantly higher than those for the other social engineering strategies (Mean difference Authority − Scarcity = .32, CI95% = [.12, .525], SE = .08, p < .001; Mean difference Authority − Social Proof = .33, CI95% = [.11, .55], SE = .08, p < .05). The mean for authority, although higher than the mean for emails with an absence of any social engineering strategy, was not significantly so (Mean difference Authority − None = .198, CI95% = [−.02, .42], SE = .08, p = .097).
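An analysis of this shape can be sketched as follows; we use statsmodels' AnovaRM and a paired t-test as stand-ins for the multivariate tests and pairwise comparisons reported above, and the file and column names are assumptions for illustration:

```python
# Sketch: 4 (strategy) x 3 (email type) repeated-measures ANOVA on the
# five-point 'Link Safety' ratings, plus one pairwise follow-up.
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Long-format data, one rating per participant x strategy x email type;
# file and column names (pid, strategy, email_type, rating) are assumed.
df = pd.read_csv("link_safety_long.csv")

aov = AnovaRM(df, depvar="rating", subject="pid",
              within=["strategy", "email_type"]).fit()
print(aov.anova_table)

# Pairwise follow-up, e.g. authority vs scarcity, averaging each
# participant's ratings over the three email types.
wide = df.pivot_table(index="pid", columns="strategy", values="rating")
t, p = ttest_rel(wide["authority"], wide["scarcity"])
print(f"authority vs scarcity: t = {t:.2f}, p = {p:.3f}")
```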
Figure 1: Summary of ‘Safe to click?’ judgments across the experiment. Results are displayed for the
three types of emails (genuine, spear-phishing and phishing). Correct responses are indicated in
grey, while incorrect responses are shown in black.
As can be seen in Figure 2, there was a significant interaction between the main effects of social
engineering strategy and email type (Wilks' Lambda = .471, F(6,115) = 21.5, p < .001, multivariate partial η² =
.53). While the effect of social engineering strategy was similar for genuine and spear-phishing emails,
the influence of those strategies was qualitatively different for phishing emails. Specifically, for
phishing emails, when there was an absence of any social engineering strategy, the mean ‘Link Safety’
score was actually higher than when a social engineering strategy was present. In addition, the lowest
mean rating was associated with the social proof attempts.
Using the Signal Detection Theory (SDT) approach, we calculated A’ and B’’, which are non-parametric
measures of discrimination and bias, respectively (Stanislaw and Todorov 1999). Discrimination
measures how well someone can distinguish between a fraudulent email and a genuine email. A score
of 1 for A’ means that discrimination ability is perfect while a score of 0.5 means that fraudulent emails
cannot be distinguished from genuine emails. Bias measures someone’s tendency to respond one way
or the other, i.e., their bias towards saying an email is fraudulent or that it is genuine, regardless of
how well they can discriminate between them. B’’ scores can range from -1 (everything is classified as
fraudulent) to 1 (everything is classified as genuine) while zero indicates no response bias.
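Both measures can be computed directly from a hit rate and a false-alarm rate; the sketch below is our implementation of the non-parametric formulas given by Stanislaw and Todorov (1999), not the authors' analysis code:

```python
# Sketch: non-parametric discrimination (A') and bias (B'') from a hit
# rate h (fraudulent emails judged unsafe) and false-alarm rate f
# (genuine emails judged unsafe), per Stanislaw and Todorov (1999).
def a_prime(h: float, f: float) -> float:
    if h == f:
        return 0.5  # chance-level discrimination
    hi, lo = max(h, f), min(h, f)
    a = 0.5 + ((hi - lo) * (1 + hi - lo)) / (4 * hi * (1 - lo))
    return a if h >= f else 1 - a  # mirror for below-chance performance

def b_double_prime(h: float, f: float) -> float:
    num = h * (1 - h) - f * (1 - f)
    den = h * (1 - h) + f * (1 - f)
    bias = num / den if den else 0.0
    return bias if h >= f else -bias  # sign(h - f) convention

# Example: a user who flags 60% of frauds but also 20% of genuine emails.
print(a_prime(0.6, 0.2), b_double_prime(0.6, 0.2))
```

Here a positive B’’ corresponds to the bias towards classifying emails as genuine that is reported below.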
Figure 2: Errorbar plot of means (+/- 1 SE) for ‘Link Safety’ ratings for each combination of Social
Engineering Strategy (Y axis) and Deceit Effort (separate lines).
First, we looked at these SDT measures in relation to detecting spear-phishing emails (see Table 1).
According to the SDT framework, the decision-making task of the user is to distinguish between a
‘signal’ and ‘noise’ (Green and Swets 1966). Using the binary variable ‘Safe to click?’, we defined noise trials as
cases where the email was genuine, and signal trials as the spear-phishing emails. As described in
Section 2.2, spear-phishing emails differed from the genuine emails only in the maliciousness of the
embedded link. In this way, the ‘signals’ the user is trying to detect are the spear-phishing attempts,
and the only cue is the legitimacy of the link. Secondly, we calculated SDT measures for users’ ability to
detect phishing emails. In this case, the ‘signals’ that the user is trying to detect are the phishing
attacks, and the distinguishing cues may include not just the hyperlink but also the legitimacy of the
sender, consistency, personalisation, and spelling or grammatical irregularities (see Parsons et al.
2015).
                      Authority    Scarcity    None
Spear-phishing  A'       0.50        0.51      0.59
                B''      0.00        0.01      0.07
Phishing        A'       0.72        0.82      0.67
                B''      0.14        0.03      0.11

Table 1. Signal Detection Theory measures for phishing and spear-phishing emails across the
different social engineering conditions.
Not surprisingly, users were better able to detect phishing emails (Mean A’ = 0.78) than spear-
phishing emails (Mean A’ = 0.59). The relative effectiveness of the different social engineering
strategies was the same for phishing and spear-phishing emails. Authority was the most successful
strategy for confusing individuals as to the legitimacy of the fraudulent email and social proof was the
least successful. When the fraudulent email used the authority strategy, participants were unable to
reliably detect spear-phishing at all (A’ = 0.5).
The major difference between the two analyses was for the emails with an absence of any social
engineering strategy. For these emails, the ability to discriminate between genuine and fraudulent
emails was relatively high for the spear-phishing emails (i.e., second best after social proof) whereas it
was relatively lower for standard phishing emails (i.e., performance was worst of all conditions). In
detecting phishing and spear-phishing attacks, users were biased towards responding that an email
was legitimate in all but one of the conditions of the experiment.
Averaged ‘Link Safety’ judgments for individuals were compared against their scores on the CRT using
Spearman’s rank correlation coefficients (ρ). There was a significant negative correlation between CRT
scores and link safety judgments for spear-phishing (ρ = -.23, p = .014, N = 112) and phishing (ρ = -.3,
p = .001, N = 112) emails. In other words, participants who were less impulsive in decision-making
were more likely to judge a link in a fraudulent email as unsafe. However, there was no significant
correlation between performance on the CRT and link safety judgments on genuine emails (ρ = -.01, p
= .973, N = 114).
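This correlation analysis can be sketched with scipy (the file and column names are illustrative assumptions):

```python
# Sketch: Spearman's rank correlation between CRT scores and each
# participant's average 'Link Safety' rating per email type.
import pandas as pd
from scipy.stats import spearmanr

# Columns assumed: crt_score, mean_safety_spear, mean_safety_phish.
df = pd.read_csv("participants.csv")

for col in ["mean_safety_spear", "mean_safety_phish"]:
    # nan_policy="omit" drops participants with missing data, which is
    # why the reported Ns can differ across the three correlations.
    rho, p = spearmanr(df["crt_score"], df[col], nan_policy="omit")
    print(f"CRT vs {col}: rho = {rho:.2f}, p = {p:.3f}")
```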
4 Discussion
In our study, the social engineering strategy that was most likely to influence users to judge that an
email link was safe was authority, and the least effective strategy was social proof. The effectiveness of
authority in our experiment, although in contrast with Wright et al.’s (2014) findings, supports the
results of Halevi et al. (2015). In addition, our results concur with lab-based research into social
engineering strategies in messages within emails (Guéguen and Jacob 2002) and marketing (Sagarin
and Cialdini 2004) and are consistent with extensive research in other areas of psychology that suggest
a strong tendency for people to be obedient towards authority (Blass 1999; Milgram 1974). The relative
effectiveness of the social engineering strategies in our study was similar for both phishing and spear-
phishing emails.
Overall, users demonstrated a bias towards classifying an email as genuine rather than fraudulent,
which is to be expected given that most emails in the wild are in fact genuine. Given the additional
contextual information included in spear-phishing emails, it was also not surprising that participants
were far worse at detecting them than the generic phishing emails in our experiment. However, what
was alarming was the particularly poor performance of participants in trying to detect spear-phishing
emails when appeals to authority were present. Taken as a whole, the participants were unable to
reliably distinguish between spear-phishing and genuine emails when the email contained reference to
an authority figure. What makes this particularly concerning is that:
a) the heightened effort and vigilance expected of users in a lab-based experiment should
improve performance in comparison to real life,
b) participants were explicitly told how to check the real destination of a link in an email before
the start of the experiment, and
c) the malicious link destinations were obviously unrelated to the content of the email.
Our findings are particularly worrying given the increase in spear-phishing in the wild reported in
recent analyses of cyber-attacks (APWG 2014; Hong 2012; Samani and McFarland 2015) and the
dominance of the authority persuasion technique within them (Akbar 2014). In fact, the success of
such deceitful tactics, as demonstrated in our study, may partly explain their increased popularity.
Interestingly, the use of any social engineering technique in phishing emails appeared less effective
than no technique at all. This may be due to an inoculation effect against this type of persuasion
(McGuire 1970), whereby users have been exposed to so many generic phishing emails that attempt to
use social engineering that they have learnt to resist the persuasion attempt and not to respond to
them. Such inoculation to persuasion has been demonstrated in marketing contexts (Friestad and
Wright 1994; Szybillo and Heslin 1973). However, a simpler explanation may also account for the
findings. It may be that the presence of any social engineering strategy in standard phishing emails,
where no significant effort has been made to target an individual using inside knowledge, is in fact a
cue to the malicious intent behind the email. Without the necessary context, the persuasion may
appear inappropriate and therefore raise the suspicions of the user.
Participants who were less impulsive in decision making were more likely to judge the links in
phishing emails as dangerous. Our findings replicated those of previous research that found that
lower cognitive impulsivity was associated with resistance to phishing email attacks (Parsons et al.
2013). However, our results also suggest that lower cognitive impulsivity can protect against
targeted, high effort attacks such as spear-phishing. In addition, lower cognitive impulsivity did not
adversely influence the judgments of genuine emails.
One of the limitations of our study was the use of a convenience sample of university students enrolled
in subjects on business and information systems. Such a sample may not necessarily reflect the
abilities of the wider population and, therefore, this limits the generalisability of our findings. As a
result, we propose that future research should seek to replicate our study on a larger, more diverse
sample.
The fact that our study found a potential link between someone’s preference for a decision making
style and their susceptibility to phishing has implications for future research in this area and,
ultimately, the development of a training solution. Cognitive impulsivity is linked to what is known as
dual processing models of persuasion (Chaiken et al. 1996). These models assume that we have two
modes of processing information. The first mode, known as the ‘central’ mode, uses systematic
processing that is highly analytical and detailed. The second mode, known as the ‘peripheral’ mode, is
heuristic in nature and is more influenced by superficial cues. By using heuristics rather than detailed
analysis, the ‘peripheral mode’ is faster and uses fewer cognitive resources than the ‘central’ mode.
Humans have evolved to use a large number of heuristics that allow us to function effectively in a
range of different scenarios (Gigerenzer et al. 1999). However, this efficiency comes at a cost because
these heuristics are less accurate than the analytical approach associated with the ‘central’ mode. The
CRT measures someone’s tendency to use the ‘central’ mode more than the ‘peripheral’ mode and
therefore can account for the increase in errors in judging emails by people high in cognitive
impulsivity.
Research has shown that our style of decision making can be modified, at least in the short term. For
example, Pinillos et al. (2011) showed that completing the CRT itself can activate ‘central’ mode
processing for subsequent tasks. Therefore, training people to defend against phishing attacks could
focus on activating ‘central’ mode processing when people are judging emails. In the short-term, future
research should investigate whether pre-testing with the CRT can improve phishing email
discrimination. In the long-term, research may involve the development of structured analytic
techniques similar in style to the techniques that are commonly used by intelligence analysts (e.g.,
Heuer 1999). Such techniques activate the ‘central’ mode of processing so that an analyst is less likely
to make an incorrect assessment of intelligence by falling back on the natural tendency towards faster,
but less accurate ‘peripheral’ processing. A possible training solution to phishing may require that we
develop and teach analogous structured analytic techniques for assessing the legitimacy of emails. In
other words, rather than simply warning users about the threat posed by malicious emails or providing
them with specific examples, we may eventually be able to train people to use more effective strategies
to detect phishing and spear-phishing attacks.
5 References
Aaron, G., and Rasmussen, R. 2010. Global Phishing Survey: Trends and Domain Name Use in
2H2009. Lexington, MA: AntiPhishing Working Group (APWG).
Akbar, N. 2014. Analysing Persuasion Principles in Phishing Emails. Master's thesis, University of
Twente.
Alavi, R., Islam, S., Mouratidis, H., and Lee, S. 2015. “Managing Social Engineering Attacks
Considering Human Factors and Security Investment,” Proceedings of the Ninth International
Symposium on Human Aspects of Information Security & Assurance, HAISA2015, pp 161-171.
APWG Working Group 2014. “Global Phishing Survey: Trends and Domain Name Use in 2H2014,”
http://www.antiphishing.org/download/document/245/APWG_Global_Phishing_Report_2H_2014.pdf Retrieved 21 June, 2015.
Blass, T. 1999. “The Milgram Paradigm After 35 Years: Some Things We Now Know About Obedience
to Authority,” Journal of Applied Social Psychology (29), pp 955-978.
Butavicius, M.A. 2006. “Evaluating and Predicting the Performance of An Identification Face
Recognition System in An Operational Setting,” Australian Society for Operations Research
Bulletin (25:2), pp 2-13.
Cialdini, R. B. 2007. Influence: The Psychology of Persuasion (Revised ed.). New York: HarperCollins.
Chaiken, S., Wood, W., and Eagly, A. H. 1996. “Principles of Persuasion,” in Social Psychology:
Handbook of Basic Principles, Guilford, pp 702-744.
Fletcher, K.I., Butavicius, M.A., and Lee, M.D. 2008. “Attention to Internal Face Features in
Unfamiliar Face Matching,” British Journal of Psychology (99), pp 379-394.
Frederick, S. 2005. “Cognitive Reflection and Decision Making,” Journal of Economic Perspectives
(19:4), pp 25-42.
Furnell, S. 2007. “Phishing: Can We Spot The Signs?,” Computer Fraud & Security (3), pp 10-15.
Gigerenzer, G., and Todd, P.M. 1999. Simple Heuristics That Make Us Smart. New York: Oxford
University Press.
Gold, J.I., and Shadlen, M.N. 2007. “The Neural Basis of Decision Making,” Annual Review of
Neuroscience (30), pp 535-574.
Green, D.M. and Swets, J.A. 1966. Signal Detection Theory and Psychophysics. New York: Wiley.
Guéguen, N., and Jacob, C. 2002. “Solicitation by E-mail and Solicitor's Status: A Field Study of Social
Influence on the Web,” CyberPsychology & Behavior (5:4), pp 377-383.
Hanton, K., Sunde, J., Butavicius, M.A., and Gluscevic, V. 2010. “Infrared Image Enhancement and
Human Detection Performance Measures,” International Journal of Intelligent Defence
Support Systems (3:1-2), pp 5-21.
Halevi, T., Memon, N., and Nov, O. 2015. “Spear-Phishing in the Wild: A Real-World Study of
Personality, Phishing Self-efficacy and Vulnerability to Spear-Phishing Attacks,” Social Science
Research Network, DOI: http://dx.doi.org/10.2139/ssrn.2544742 Retrieved 2 July, 2015.
Heuer, R.J. Jnr. 1999. Psychology of Intelligence Analysis. Langley, VA: Central Intelligence Agency
Press.
Hong, J. 2012. “The State of Phishing Attacks,” Communications of the ACM (55:1), pp 74-81.
Jagatic, T., Johnson, N., Jakobssen, M., and Menczer, F. 2007. “Social Phishing,” Communications of
the ACM (50:10), pp 94-100.
Jakobsson, M. 2007. “The Human Factor in Phishing,” Privacy & Security of Consumer Information
(7), pp 1-19.
Knowles, E.S., and Linn, J.A. 2004. Resistance and Persuasion. Mahwah, NJ: Lawrence Erlbaum.
Milgram, S. 1974. Obedience to Authority. New York: Harper & Row.
Mitnick, K., and Simon, W. 2002. The Art of Deception: Controlling the Human Element of Security.
Indianapolis, IN: Wiley.
Parsons, K., Butavicius, M., Calic, D., McCormac, A., Pattinson, M., and Jerram, C. 2015. “Do Users
Focus on the Correct Cues to Identify Phishing Emails?” Proceedings of the Australasian
Conference on Information Systems, ACIS2015.
Parsons, K., McCormac, A., Pattinson, M., Butavicius, M., and Jerram, C. 2013. “Phishing for the
Truth: A Scenario-Based Experiment of Users’ Behavioural Response to Emails,” Security and
Privacy Protection in Information Processing Systems - IFIP Advances in Information and
Communication Technology, Springer, pp 366-378.
Pattinson, M., Jerram, C., Parsons, K.M., McCormac, A., and Butavicius, M.A. 2012. “Why Do Some
People Manage Phishing Emails Better than Others?” Information Management & Computer
Security (20:1), pp 18-28.
Pinillos, N.Á., Smith, N., Nair, G.S., Marchetto, P., and Mun, C. 2011. “Philosophy's New Challenge:
Experiments and Intentional Action,” Mind and Language (26:1), pp 115-139.
RSA Security 2014. “RSA Online Fraud Report: 2013 A Year in Review,”
http://www.emc.com/collateral/fraud-report/rsa-online-fraud-report-012014.pdf Retrieved 29
April, 2015.
Sagarin, B.J., and Cialdini, R.B. 2004. “Creating Critical Consumers: Motivating Receptivity by
Teaching Resistance,” in Resistance and Persuasion, Lawrence Erlbaum, pp 259-282.
Samani, R. and McFarland, C. 2015. “Hacking the Human Operating System: The Role of Social
Engineering within Cybersecurity,” http://www.mcafee.com/au/resources/reports/rp-hacking-human-os.pdf Retrieved 1 June, 2015.
Stanislaw, H., and Todorov, N. 1999. “Calculation of Signal Detection Theory Measures,” Behavior
Research Methods, Instruments & Computers (31:1), pp 137-149.
Telstra Corporation 2014. Telstra Cyber Security Report 2014. http://www.telstra.com.au/business-
enterprise/download/document/telstra-cyber-security-report-2014.pdf Retrieved 3 June, 2015.
Wright, R.T., Jensen, M.L., Bennett Thatcher, J., Dinger, M., and Marett, K. 2014. “Research Note:
Influence Techniques in Phishing Attacks: An Examination of Vulnerability and Resistance,”
Information Systems Research (25:2), pp 385-400.
Acknowledgements
This project was supported by a Premier’s Research and Industry Fund granted by the South
Australian Government Department of State Development.
Copyright
Copyright: © 2015 authors. This is an open-access article distributed under the terms of the Creative
Commons Attribution-Non Commercial 3.0 Australia License, which permits non-commercial use,
distribution, and reproduction in any medium, provided the original author and ACIS are credited.
... Previous research in this area already studied users' susceptibility to phishing [5,11] and proposed different education methods [7,26,27]. Multiple studies analyzed how psychological and technical vectors are used in phishing attacks [6,12,13,34]. These studies rely on experiments with a limited number of participants (typically students), often in a (potentially artificial) lab environment. ...
... Caputo et al. send three phishing e-mails to 1,359 participants, show training material to a subset of them, and analyze whether it prevents participants from clicking on links in phishing e-mails [8]. Butavicius et al. ask 121 university students to classify 12 e-mails into legitimate, phishing, and spearphishing [6]. Rajivan et al. perform a two-phase experiment in which 105 participants first create phishing e-mails, which are classified in a second phase along with legitimate e-mails by 340 other participants [34]. ...
... In this section, we discuss our main results and compare them with previous publications. We can confirm the results of Butavicius et al. that an authoritative tone increases the susceptibility of users to fall for phishing e-mails [6]. However, Butavicius et al. tested only authority, scarcity (similar to the vector "trust" in our data set), and social proof, which we do not have in our data set. ...
Chapter
Phishing is in practice one of the most common attack vectors threatening digital assets. An attacker sends a legitimate-looking e-mail to a victim to lure her on a website with the goal of tricking the victim into revealing credentials. A phishing e-mail can use both technical (e.g., a forged link) and psychological vectors (e.g., an authoritarian tone) to persuade the victim. In this paper, we present an analysis of more than 420,000 phishing e-mails sent over more than 1.5 years by a consulting company offering awareness trainings. Our data set contains detailed information on how users interact with the e-mails, e.g., when they click on links and what psychological vectors are used in the e-mails to convince the recipient of its legitimacy. While previous studies often used lab environments, the e-mails in our data set are sent to real users during their day-to-day work so that we can study their behavior in a genuine setting. Our results indicate a continually decreasing click rate (from 19% to 10%) with progressing awareness training. We also found some psychological vectors, including an authoritative tone and curiosity, to be more effective than others to trick a user into falling for this type of scam e-mails.
... [12][13][14][15][16][17][18][19] in spam and phishing related to deliveries (2021) ...
Preprint
Full-text available
Cyberattacks are constantly evolving and phishing activities have risen steeply in the last few years. As the number of online users is increasing so as the phishing attacks and scams are increasing too. It is even more surprising in the presence of the most sophisticated technical security measures and online users are continually becoming the victim of phishing attacks that cause financial and emotional loss. Phishing attacks involve deceiving a target user into revealing their most important personal information such as ID, password, username, bank card, or other sensitive information to the cybercriminals. The typical way to instigate a phishing attack by sending malicious emails that may contain malware or a link to a phishing website. It is evident from various phishing reports that despite the most sophisticated and expensive technical security measures, the phishing attacks are proved to be still successful. This is happening because phishing techniques bypass technical security measures and try to exploit vulnerabilities associated with human and use social engineering to reach its target. Therefore, in this situation, anti-phishing awareness is the most effective tool that can protect internet users against phishing attacks. Anti-phishing awareness material can be delivered in a number of methods; however, the effectiveness of these awareness delivery methods is an open question among the researcher community and the anti-phishing awareness program designers. Which method is more effective in anti-phishing awareness-raising, increasing overall users' confidence in dealing with phishing emails, and which method users preferred more? In an attempt to address all these questions, we conducted experimental research involving online users with different demographic backgrounds. We design and deliver and online anti-phishing awareness-raising material in three formats, video-based, text-based, and infographic-based. We found all training methods significantly improve the accuracy rate of identifying phishing and genuine emails. The training decreased the false-negative rate and also reduced the false positive rate among the participants of all training groups when compared with a control group. However, our study did not find one awareness delivery method significantly more effective than other methods in transferring knowledge. However, the study found video and infographic methods as most preferred by the users. This study also found an interesting result that the difference between the accuracy of identifying phishing emails of participants who received training in their preferred learning method and the accuracy of participants who received training in other methods was not significantly different. These results serve researchers, students, organizations, cybersecurity expert, and security awareness program designers, who are interested in understating the relationship between different awareness rising delivery methods and their effectiveness in educating internet users about prevention from phishing attacks. 4
... They also leverage the data for illegal purposes, such as using social engineering and spear-phishing attacks to trick potential victims into providing confidential information, perform fraudulent transactions, or install malware on victims' computers (Butavicius et al., 2016;. Whereas traditional phishing involves sending email to a massive number of individuals but expecting only a small response, in spear-phishing the attackers have personal information about their victims and can target specific individuals within an organization -often chief executive officers, executives, or accounting and finance personnel ). ...
Book
The prevalence of cyber-dependent crimes and illegal activities that can only be performed using a computer, computer networks, or other forms of information communication technology has significantly increased during the last two decades in the USA and worldwide. As a result, cybersecurity scholars and practitioners have developed various tools and policies to reduce individuals' and organizations' risk of experiencing cyber-dependent crimes. However, although cybersecurity research and tools production efforts have increased substantially, very little attention has been devoted to identifying potential comprehensive interventions that consider both human and technical aspects of the local ecology within which these crimes emerge and persist. Moreover, it appears that rigorous scientific assessments of these technologies and policies "in the wild" have been dismissed during the process of encouraging innovation and marketing. Consequently, governmental organizations, public and private companies allocate a considerable portion of their operations budgets to protecting their computer and internet infrastructures without understanding the effectiveness of various tools and policies in reducing the myriad of risks they face. Unfortunately, this practice may complicate organizational workflows and increase costs for government entities, businesses, and consumers. The success of the evidence-based approach in improving the performances of a wide range of professions (for example, medicine, policing, and education) leads us to believe that an evidence-based cybersecurity approach is critical for improving cybersecurity efforts. This book seeks to explain the foundation of the evidence-based cybersecurity approach, reviews its relevance in the context of existing security tools and policies, and the authors provide concrete examples of how adopting this approach could improve cybersecurity operations and guide policymakers' decision-making process. The evidence-based cybersecurity approach explained aims to support security professionals', policymakers', and individual computer users' decision-making processes regarding the deployment of security policies and tools by calling for rigorous scientific investigations of the effectiveness of these policies and mechanisms in achieving their goals in protecting critical assets. This book illustrates how this approach provides an ideal framework for conceptualizing an interdisciplinary problem like cybersecurity because it stresses moving beyond decision-makers political, financial, social backgrounds, and personal experiences when adopting cybersecurity tools and policies. This approach is also a model in which policy decisions are made based on scientific research findings. https://www.routledge.com/Evidence-Based-Cybersecurity-Foundations-Research-and-Practice/Pomerleau-Maimon/p/book/9781032062761
... Butavicius et al. [22] performed a phishing study with 121 students. These researchers found a significant negative correlation between CRT scores and link safety judgments for spear-phishing (ρ < -.23, p < .014, ...
... Some studies have adopted a narrow focus, i.e., they examine the effect on just one or a few variables. For example, it has been shown that more impulsive people are likelier to judge links as safe in fraudulent emails (Butavicius et al., 2015). The role of impulsivity in phishing has been demonstrated in several studies (Mayhorn, Welk, Zelinska, & Murphy-Hill, 2015;Pattison, Jerram, Parsons, McCormac, & Butavicius, 2012;Neupane, Saxena, Maximo, & Kana, 2016). ...
Preprint
Full-text available
Self-disclosure of personal information is generally accepted as a security risk. Nonetheless, many individuals who are concerned about their privacy will often voluntarily reveal information to others. This inconsistency between individuals' expressed privacy concern and the willingness to divulge personal information is referred to as privacy paradox. Several arguments have been proposed to explain the inconsistency. One set of arguments centers around the possible effects of differences in personality characteristics, such as the Big Five factors. In the current article, we examine the role of one personality characteristic, impulsivity, in explaining the relationship between privacy concern and information disclosure. We report the results of a survey-based study that consisted of two hundred and forty-two (242) usable responses from subjects recruited on Amazon Mechanical Turk. The results show that one of the three dimensions of impulsivity, motor impulsivity, directly influences the extent of information disclosure and also moderates the relationship between privacy concern and information disclosure. Furthermore, our study shows impulsivity explains more variance in information disclosure than explained by the Big Five factors only.
... Social engineering, which aims to exploit internet users' weaknesses, plays an essential role in phishing cases (Bullée et al., 2015;Krombholz et al., 2015). Fraudsters target users through emails, including highly sophisticated and challenging social engineering tactics, to solicit financial or personal information (Butavicius et al., 2016;Clark, 2017). Phishing emails are also utilized to lure users into opening attachments or clicking links containing malicious content, thereby facilitating the installation of malware such as ransomware on the target systems or devices (Gomes et al., 2020). ...
Article
Full-text available
This empirical study is an exploration of the influence methods, fear appeals, and urgency cues applied by phishers to trick or coerce users to follow instructions presented in coronavirus-themed phishing emails. To that end, a content analysis of 208 coronavirus-themed phishing emails has been conducted. We identified nine types of phishing messages crafted by phishers. Phishing emails purporting to provide information about the spread of the disease were the most common type of unsolicited emails. Authority, liking and commitment emerged as the most common influence methods. Fear appeals and urgency cues were present in almost all of the sampled phishing messages. Finally, the analysis of coronavirus-themed phishing emails revealed a shift in the modus operandi of phishers. The implications of these results are discussed in this paper.
... To produce more realistic simulation results, probability distributions need to be assigned to attack steps and defenses to describe the efforts required for adversaries to exploit certain attack steps. For example, a user clicking a Spearphishing Link follows a Bernoulli distribution with parameter 0.71 [6]. In addition, the defenses in enterpriseLang currently have only Boolean values (TRUE/FALSE) to indicate their status. ...
Article
Full-text available
Enterprise systems are growing in complexity, and the adoption of cloud and mobile services has greatly increased the attack surface. To proactively address these security issues in enterprise systems, this paper proposes a threat modeling language for enterprise security based on the MITRE Enterprise ATT&CK Matrix. It is designed using the Meta Attack Language framework and focuses on describing system assets, attack steps, defenses, and asset associations. The attack steps in the language represent adversary techniques as listed and described by MITRE. This entity-relationship model describes enterprise IT systems as a whole; by using available tools, the proposed language enables attack simulations on its system model instances. These simulations can be used to investigate security settings and architectural changes that might be implemented to secure the system more effectively. Our proposed language is tested with a number of unit and integration tests. This is visualized in the paper with two real cyber attacks modeled and simulated.
Chapter
Abstract. The emergence of synthetic media such as deep fakes is considered to be a disruptive technology shaping the fight against cybercrime as well as enabling political disinformation. Deep faked material exploits humans’ interpersonal trust and is usually applied where technical solutions of deep fake authentication are not in place, unknown, or unaffordable. Improving the individual’s ability to recognise deep fakes where they are not perfectly produced requires training and the incorporation of deep fake-based attacks into social engineering resilience training. Individualised or tailored approaches as part of cybersecurity awareness campaigns are superior to a one-size-fits-all approach, and need to identify persons in particular need for improvement. Research conducted in phishing simulations reported that persons with educational and/or professional background in information technology frequently underperform in social engineering simulations. In this study, we propose a method and metric to detect overconfident individuals in regards to deep fake recognition. The proposed overconfidence score flags individuals overestimating their performance and thus posing a previously unconsidered cybersecurity risk. In this study, and in line with comparable research from phishing simulations, individuals with IT background were particularly prone to overconfidence. We argue that this data-driven approach to identifying persons at risk enables educators to provide a more targeted education, evoke insight into own judgement deficiencies, and help to avoid the self-selection bias typical for voluntary participation.
Thesis
Full-text available
Absztrakt Az adathalászat jelenségét a kriminológiaelméleti áttekintés után kérdőíves saját kutatással vizsgáltam egyetemi hallgatókra fókuszálva. Elsősorban azt kutattam, hogyan és miért lesz valaki adathalászat áldozata. A leggyengébb láncszem a rendszerben maga a felhasználó, mert nélküle az eszközös védelem sem működik. Eredményeim alapján arra következtettem, hogy a legnagyobb védőhatása az informatikai tudásnak és az internet tisztességes használói közösségének van. Kulcsszavak: kiberbűnözés, adathalászat, kérdőíves vizsgálat
Article
Phishing emails have certain characteristics, including wording related to urgency and unrealistic promises (i.e., “too good to be true”), that attempt to lure victims. To test whether these characteristics affected users’ suspiciousness of emails, users participated in a phishing judgment task in which we manipulated 1) email type (legitimate, phishing), 2) consequence amount (small, medium, large), 3) consequence type (gain, loss), and 4) urgency (present, absent). We predicted users would be most suspicious of phishing emails that were urgent and offered large gains. Results supporting the hypotheses indicate that users were more suspicious of phishing emails with a gain consequence type or large consequence amount. However, urgency was not a significant predictor of suspiciousness for phishing emails, but was for legitimate emails. These results have important cybersecurity-related implications for penetration testing and user training.
Article
Full-text available
Recent research has begun to focus on the factors that cause people to respond to phishing attacks. In this study a real-world spear-phishing attack was performed on employees in organizational settings in order to examine how users’ personality, attitudinal and perceived efficacy factors affect their tendency to expose themselves to such an attack. Spear-phishing attacks are more sophisticated than regular phishing attacks as they use personal information about their intended victim and present a stronger challenge for detection by both the potential victims as well as email phishing filters.While previous research showed that certain phishing attacks can lure a higher response rate from people with a higher level of the personality trait of Neuroticism, other traits were not explored in this context. The present study included a field-experiment which revealed a number of factors that increase the likelihood of users falling for a phishing attack: the factor that was found to be most correlated to the phishing response was users’ Conscientiousness personality trait. The study also found gender-based difference in the response, with women more likely to respond to a spear-phishing message than men. In addition, this work detected negative correlation between the participants subjective estimate of their own vulnerability to phishing attacks and the likelihood that they will be phished. Put together, the finding suggests that vulnerability to phishing is in part a function of users’ personality and that vulnerability is not due to lack of awareness of phishing risks. This implies that real-time response to phishing is hard to predict in advance by the users themselves, and that a targeted approach to defense may increase security effectiveness.
Conference Paper
Using a role-play scenario experiment, 117 participants were asked to manage 50 emails. To test whether knowing that they were undertaking a phishing study affected participants' decisions, only half were informed that the study was assessing the ability to identify phishing emails. Results indicated that the participants who were informed that they were undertaking a phishing study were significantly better at correctly managing phishing emails and took longer to make decisions. This was not caused by a bias towards judging an email as a phishing attack but, instead, by an increase in the ability to discriminate between phishing and real emails. Interestingly, participants who had formal training in information systems performed more poorly overall. Our results have implications for the interpretation of previous phishing studies, the design of future studies, and for training and education campaigns, as they suggest that when people are primed about phishing risks, they adopt a more diligent screening approach to emails.
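The distinction drawn here between response bias and discrimination ability is the classic signal-detection separation of criterion from sensitivity. A minimal sketch of how the two are computed (a hit is a phishing email correctly flagged; a false alarm is a genuine email flagged; the counts below are invented):

    from statistics import NormalDist

    def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        # Loglinear correction keeps rates away from 0 and 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias (positive = conservative)
        return d_prime, criterion

    print(d_prime_and_criterion(hits=20, misses=5,
                                false_alarms=4, correct_rejections=21))

A higher d' with an unchanged criterion is exactly the "better discrimination, no extra bias" pattern the study reports for informed participants.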
Research
Managing the protection of information assets has become an objective of paramount importance in an organizational context. An Information Security Management System (ISMS) has the unique role of ensuring that adequate and appropriate security tools are in place to protect information assets. Security is typically seen in three dimensions: technology, organization, and people. Undoubtedly, the socio-technical challenges have proven the most difficult to tackle. Social Engineering Attacks (SEAs) are one such socio-technical challenge and considerably increase security risks: they seek access to information assets by exploiting organizational vulnerabilities and targeting human frailties. Dealing effectively and adequately with SEAs requires practical security benchmarking together with control-mechanism tools, which in turn require investment to support security and, ultimately, organizational goals. This paper contributes to this area. In particular, it proposes a language for managing SEAs using concepts such as actor, risk, goal, security investment, and vulnerability. The language supports in-depth investigation of human factors as one of the main causes of SEAs. It also assists in the selection of appropriate mitigation mechanisms in light of the available security investment. Finally, the paper uses a real incident in a financial institution to demonstrate the applicability of the approach.
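As a rough illustration of the named concepts only (the paper defines the actual syntax and semantics of its language), the core vocabulary could be sketched as simple data structures; every field name below is an assumption for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Actor:
        name: str
        goals: list = field(default_factory=list)
        vulnerabilities: list = field(default_factory=list)

    @dataclass
    class Risk:
        description: str
        exploits: str            # which vulnerability the SEA targets
        mitigation_cost: float   # security investment needed to mitigate

    teller = Actor('bank teller',
                   goals=['serve customers quickly'],
                   vulnerabilities=['deference to apparent authority'])
    risk = Risk('caller impersonates an executive',
                'deference to apparent authority', 5000.0)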
Article
Experimental philosophers have gathered impressive evidence for the surprising conclusion that philosophers' intuitions are out of step with those of the folk. As a result, many argue that philosophers' intuitions are unreliable. Focusing on the Knobe Effect, a leading finding of experimental philosophy, we defend traditional philosophy against this conclusion. Our key premise relies on experiments we conducted which indicate that judgments of the folk elicited under higher quality cognitive or epistemic conditions are more likely to resemble those of the philosopher. We end by showing how our experimental findings can help us better understand the Knobe Effect.
Book
The Psychology of Intelligence Analysis has been required reading for intelligence officers studying the art and science of intelligence analysis for decades. Richards Heuer, Jr. discusses in the book how fundamental limitations in human mental processes can prompt people to jump to conclusions and employ other simplifying strategies that lead to predictably faulty judgments known as cognitive biases. These analytic mindsets cannot be avoided, but they can be overcome through the application of more structured and rigorous analytic techniques including the Analysis of Competing Hypotheses.
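The Analysis of Competing Hypotheses that Heuer recommends can be illustrated with a toy matrix: score each piece of evidence against each hypothesis and prefer the hypothesis with the least disconfirming evidence. The hypotheses, evidence labels, and scores below are invented for illustration (-2 = strongly inconsistent ... +2 = strongly consistent):

    evidence_scores = {
        'H1: targeted spear-phish': {'E1': +2, 'E2': -1, 'E3': +1},
        'H2: mass phishing run':    {'E1': -2, 'E2': +2, 'E3':  0},
    }

    def inconsistency(scores):
        # ACH weighs disconfirming evidence most heavily, so sum only negatives.
        return sum(s for s in scores.values() if s < 0)

    for hypothesis, scores in evidence_scores.items():
        print(hypothesis, 'inconsistency =', inconsistency(scores))
    # The hypothesis with inconsistency closest to zero survives best.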
Article
Phishing is a major threat to individuals and organizations. Along with billions of dollars lost annually, phishing attacks have led to significant data breaches, loss of corporate secrets, and espionage. Despite the significant threat, potential phishing targets have little theoretical or practical guidance on which phishing tactics are most dangerous and require heightened caution. The current study extends persuasion and motivation theory to postulate why certain influence techniques are especially dangerous when used in phishing attacks. We evaluated our hypotheses using a large field experiment that involved sending phishing messages to more than 2,600 participants. Results indicated a disparity in levels of danger presented by different influence techniques used in phishing attacks. Specifically, participants were less vulnerable to phishing influence techniques that relied on fictitious prior shared experience and were more vulnerable to techniques offering a high level of self-determination. By extending persuasion and motivation theory to explain the relative efficacy of phishers' influence techniques, this work clarifies significant vulnerabilities and lays the foundation for individuals and organizations to combat phishing through awareness and training efforts.
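One hedged sketch of how the relative danger of two influence techniques might be compared in such a field experiment is a two-sample test on click proportions; the counts below are invented, and the paper's actual analysis may differ:

    from statsmodels.stats.proportion import proportions_ztest

    clicks = [180, 95]         # recipients who clicked, per technique
    recipients = [1300, 1300]  # messages sent, per technique

    stat, p = proportions_ztest(clicks, recipients)
    print(f"z = {stat:.2f}, p = {p:.4f}")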
Article
The ability to detect and recognise dangerous objects at a safe distance is a very important task in a number of defence, police and security applications. In this paper, we look at ways of increasing the effectiveness of infrared imagery for object recognition through processes such as super-resolution image reconstruction and deconvolution. We propose two techniques for assessing image-quality improvement, operator assessment and edge detection, and report on some initial work recently undertaken.
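The abstract does not say which deconvolution method is used; as a generic illustration of the technique, below is a minimal Richardson-Lucy deconvolution written directly in NumPy/SciPy (the point-spread function and toy scene are assumptions):

    import numpy as np
    from scipy.signal import convolve

    def richardson_lucy(image, psf, iterations=30):
        """Iteratively sharpen `image`, assuming it was blurred by `psf`."""
        estimate = np.full(image.shape, 0.5)  # flat initial estimate
        psf_mirror = psf[::-1, ::-1]          # flipped PSF for the correction step
        for _ in range(iterations):
            blurred = convolve(estimate, psf, mode='same')
            ratio = image / (blurred + 1e-12)  # avoid division by zero
            estimate *= convolve(ratio, psf_mirror, mode='same')
        return estimate

    # Toy usage: blur a synthetic hot spot with a uniform PSF, then restore it.
    psf = np.ones((5, 5)) / 25.0
    scene = np.zeros((64, 64))
    scene[30:34, 30:34] = 1.0
    observed = convolve(scene, psf, mode='same')
    restored = richardson_lucy(observed, psf)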
Article
Dr Steven Furnell at Plymouth University has conducted research into why some computer users still cannot tell the difference between an official email and a phishing scam. Furnell examines the increasing sophistication of phishing emails and why users remain vulnerable.