Proceedings of the Tenth International Symposium on
Human Aspects of Information Security & Assurance (HAISA 2016)
A systematic Gap Analysis of Social Engineering Defence
Mechanisms Considering Social Psychology
P. Schaab1, K. Beckers1 and Sebastian Pape2
1Technische Universität München (TUM)
2Goethe Universität Frankfurt
e-mail: {peter.schaab, beckersk}@in.tum.de; Sebastian.Pape@m-chair.de
Abstract
Social engineering is the acquisition of information about computer systems by methods that rely heavily on non-technical means. While the technical security of most critical systems is high, the systems remain vulnerable to attacks from social engineers. Social engineering is a technique that: (i) does not require any (advanced) technical tools, (ii) can be used by anyone, and (iii) is cheap. Traditional penetration testing approaches often focus on vulnerabilities in network or software systems. Few approaches even consider the exploitation of humans via social engineering. While the number of social engineering attacks and the damage they cause rise every year, the defences against social engineering do not evolve accordingly. Hence, employees' awareness of these attacks remains low. We examined the psychological principles of social engineering and which psychological techniques induce resistance to persuasion applicable to social engineering. The techniques examined are the enhancement of persuasion knowledge, attitude bolstering and influencing decision making. While research exists elaborating on security awareness, the integration of resistance against persuasion has not been done. Therefore, we analysed current defence mechanisms and provide a gap analysis based on research in social psychology. Based on our findings, we provide guidelines on how to improve social engineering defence mechanisms such as security awareness programs.
Keywords
social engineering, security management, persuasion, human-centred defence
mechanisms
1. Introduction
Although security technology improves, the human user remains the weakest link in system security. It is therefore widely accepted that people are the main vulnerability of any organization's security, as well as the most
challenging aspect of system security (Mitnick and Simon, 2011). This is emphasized by many security consultants, as well as by genuine attackers who have accessed critical information via social engineering (Gragg, 2003). Early on, Gulati (2003) reported that cyber attacks cost U.S. companies $266 million every year and that 80% of these attacks are a form of social engineering. A study in 2011 showed that nearly half of the large companies considered and a third of the small companies had fallen victim to 25 or more social engineering attacks in the two preceding years (Dimensional Research, 2011). The study further shows that costs per incident usually vary
between $25,000 and over $100,000. Furthermore, surveys like Verizon's 'Data Breach Investigations Report' (2012; 2013) show the impact of social engineering. Even though awareness of the phenomenon of social engineering has increased, at least in the literature, the share of breaches involving it has grown from 7% in 2012 to 29% in 2013 according to these studies. In addition, current security awareness programs are apparently ineffective (Pfleeger et al., 2014). These alarming numbers raise the question of whether the existing approaches to awareness of and defence against social engineering are fundamentally incomplete.
Frangopoulos et al. (2010) consider the psychological aspects of social engineering and relate them to persuasion techniques. In contrast to our work, theirs is not based on a literature review of behavioural psychology but on the expertise of the authors. Moreover, their scope is broader and considers physical measures as well as security standards. Our results classify existing research on IT security and persuasion and contribute a structured gap analysis. In addition, Frangopoulos et al. (2012) transfer the knowledge of psychosocial risks, e.g. the influence of headaches and colds on decisions, from a managerial and organisational point of view to an information security view.
Our hypothesis is that the psychological aspects behind social engineering and user psychology are not considered to their full extent. For instance, Ferreira et al. (2015) catalogue psychological principles used in social engineering and relate these principles to previous research by Cialdini (2009), Gragg (2003), and Stajano and Wilson (2011).
However, these principles have to be the fundamental concern of any security
defence mechanism against social engineering. Thus, we contribute a list of concepts
that address social engineering defence mechanisms. We analyse in particular what
IT security recommends in comparison to recommendations given by social
psychology. The results of our analysis reveal fundamental gaps in today’s security
awareness approach. We provide a road map that shows how to address these gaps in
the future. Our road map is an instrumental vision towards reducing the social
engineering threat by addressing all relevant psychological aspects in its defence.
2. Methodology
Our research was guided by the methodology outlined in Fig. 1. We initialized the work with a working definition of social engineering (Sect. 3) and surveyed the state of the art from the viewpoint of computer science, in particular with regard to IT security (Sect. 4), and separately from the viewpoint of social psychology (Sect. 5). We used the meta search engines Google Scholar and Scopus, which index the main libraries of IEEE, ACM, Springer, Elsevier and numerous further publishers. Based on the findings of our literature survey, we identified requirements and techniques from the social sciences for defending against social engineering and mapped these to the defence mechanisms used in IT security today (Sect. 6). We outline the resulting gap and present a vision for overcoming these shortcomings of current IT security defences (Sect. 6). Finally, we conclude and provide directions for future research (Sect. 7).
Figure 1: Methodology
3. Definition of Social Engineering
Although there is no agreed-upon definition of social engineering, the common idea arising from the available definitions is that social engineering is the acquisition of confidential, private or privileged information by methods including both technical and non-technical means (Manske, 2009). This common idea is quite general, as it includes means of gaining information access such as shoulder surfing, dumpster diving, etc. However, it especially refers to social interaction as a psychological process of manipulating or persuading people into disclosing such information (Thornburgh, 2004). Unlike the former methods of accessing information, the latter is more complex and more difficult to resist, as persuasion is grounded in psychology. In this context, persuasion can be viewed as “any instance in which an active attempt is made to change a person’s mind” (Petty and Cacioppo, 1996, p.4).
The concept of ‘optimism bias’ states that people believe that others, not themselves, fall victim to misfortune (Weinstein, 1980). Additionally, they tend to overestimate their ability to influence an event’s outcome. Hence people think that they (i) will not be targeted by social engineering and (ii) are more likely to resist than their peers.
To actually raise resistance, we analyse how information security awareness can be increased. In alignment with Kruger and Kearney (2006), we define information security awareness as the degree to which employees understand the need for security measures and adjust their behaviour to prevent security incidents. Furthermore, in accordance with Veseli (2011), we focus on the information security dimensions attitude (how a person feels about the topic) and behaviour (what a person does), as they are an expression of conscious and unconscious knowledge (what a person knows).
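As an illustrative sketch (not part of the original study), the three awareness dimensions could be aggregated into a single score in the spirit of Kruger and Kearney's prototype. The weights and the 0–100 scale below are assumptions chosen for illustration only, not values from the paper:

```python
# Hypothetical aggregation of the three awareness dimensions into one score.
# Weights and scale are illustrative assumptions, not taken from the paper.
DIMENSION_WEIGHTS = {"knowledge": 0.30, "attitude": 0.20, "behaviour": 0.50}

def awareness_score(scores: dict[str, float]) -> float:
    """Weighted average over the three dimensions; each score on a 0-100 scale."""
    missing = set(DIMENSION_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(weight * scores[dim] for dim, weight in DIMENSION_WEIGHTS.items())

print(awareness_score({"knowledge": 80, "attitude": 60, "behaviour": 40}))  # → 56.0
```

In this sketch, an employee who knows the policy well but behaves insecurely still receives a low overall score, reflecting the paper's emphasis on behaviour over mere knowledge.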
4. An Analysis of Social Engineering Defence Mechanisms in IT
Security
After having established the concept of social engineering, we analyse how the threat
of social engineering is met in IT security. As the main vulnerability exploited by
social engineering is inherent in human nature, it is the human element in systems
that needs to be addressed. Thus, we concentrate on human-based defence mechanisms. Predominantly, three human-based mitigation methods are proposed:
Policies, audits and security awareness programs, as indicated in Table 1. User awareness and security policies dominate the recommendations for defending against social engineering (Scheeres, 2008).

| Dimension | Defence Mechanism | Description |
| Knowledge / Attitude | Policy Compliance | Foundation of information security; system standards and security levels; guidelines for user behaviour |
| Knowledge / Attitude | Security Awareness Program | Familiarity with security policy; knowledge about sensitive, valuable information; basic indicators and suspicious behaviour connected to social engineering attacks; (recognition of being manipulated) |
| Behaviour | Audit | Test employee susceptibility to social engineering; identify weaknesses of policy and security awareness program |

Table 1: Defence mechanisms used in IT security
Security Policies. Any information security is founded on its policy (Mitnick and Simon, 2011). Furthermore, policies provide instructions and guidelines on how users should behave. It is especially hard to address social engineering in security policies, since people need to know how to respond to ambiguous requests (Gragg, 2003). By safeguarding information through policy, users should not be left uncertain when deciding whether certain information is sensitive or not. Necessarily, these policies need to be enforced consistently throughout the system.
Security Awareness Programs. Once a security policy is established, all users need to be trained in security awareness programs to follow the policy, practices and procedures (Mitnick and Simon, 2011; Thornburgh, 2004). In general, the literature agrees upon the cornerstones of an awareness program. First of all, familiarity with the security policy needs to be established. It is important that everyone in the organization knows what kind of information is sensitive, and hence particularly valuable for an attacker. Secondly, knowledge about social engineering is to be conveyed. This includes the basics of social engineering and how attacks work in detail. This should help employees understand the reasons for the related security policies, which simply contain rules and usually not the reasoning behind them. The idea is that understanding why these policies were defined will increase compliant behaviour among employees. In addition, the taught knowledge should reach beyond the rules in the policies and in particular cover indicators of social engineering attacks and what behaviour could be suspicious, such as requesting confidential information or refusing to provide personal or contact information. Gragg (2003) demands additional training for key personnel that includes inoculation, forewarning and a reality check, see Section 5.
Audit. Conducting audits is complementary to the above approaches (Thornburgh, 2004). Audits serve to test the susceptibility to social engineering attacks (Mitnick and Simon, 2011). Hence, they test the effectiveness and identify the weaknesses of the other methods (Winkler and Dealy, 1995). In this particular case, classic audits or penetration tests need an extension to social engineering penetration testing, as done by Bakhshi et al. (2008). This extension is not trivial, since it tests humans, who can get upset, and the works council needs to be involved.
5. Relevant Defence Mechanisms in Social Psychology
The intention of security awareness programs is to inform about social engineering and sensitive information. It is assumed that by knowing about the threat of social engineering, users are less likely to be susceptible to such attacks. Only a few researchers have found this to be insufficient, a finding that appears to be ignored by most others. Gragg (2003) considers the psychological principles of persuasion behind social engineering. Ferreira et al. (2015) have established a framework of psychological principles. These exhibit the ability to influence and potentially manipulate a person’s attitude, beliefs and behaviour. Gragg therefore recommends techniques to build resistance against persuasion, borrowed from social psychology,
to be included in awareness programs. An overview of these methods is given in Table 2.

| Dimension | Defence Mechanism | Description |
| Knowledge / Attitude | Persuasion Knowledge | Information about tactics used in persuasion attempts and their potential influence on attitude and behaviour; information about appropriate coping tactics |
| Knowledge / Attitude | Forewarning | Warning of message content and persuasion attempt |
| Knowledge / Attitude | Attitude Bolstering | Thought process strengthening security attitude |
| Knowledge / Attitude | Reality Check | Demonstration of vulnerability to perceive risk of persuasion |
| Behaviour | Inoculation | Exposure to persuasive attempts and arguments of a social engineer; provision of counter arguments to resist persuasion |
| Behaviour | Decision Making | Repeated exposure to “similar” decision making situations |

Table 2: Defence mechanisms against persuasion borrowed from social psychology

Inoculation. A user is exposed to the persuasive attempts of a social engineer; he is put into a situation a social engineer would put him in. Thereby he is exposed to arguments that a social engineer may use. He is also given counter arguments that he can use to resist the persuasion. This works the same way as preventing the spread of a disease by inoculation, and it induces resistance to persuasion.
Forewarning. Forewarning of the message content and of the persuasion attempt behind the message triggers resistance to a social engineering attack. The intention is not only to warn about the persuasive attempt of a social engineer, but in particular to warn that the arguments are manipulative and deceptive. An example of this technique would be a warning about fraudulent IT support calls asking for user login and password.
Reality Check. As people tend to believe that they are invulnerable due to optimism bias, users need to realize that they are in fact vulnerable. Therefore, it has to be demonstrated to them that they are vulnerable, to make them perceive the risks and the training to be effective. However, any such effort has to be careful not to cause an amount of frustration that leads people to conclude that their security efforts are useless. The balance between demonstrating the vulnerability and assuring people that they can make a difference in social engineering defence is vital for the success of defences.
Even though it appears that most programs are limited in extent or impact, it is unclear how much attention these proposals receive in security practice. Nevertheless, research in the field of psychology over the past five decades has shown inoculation to be the most consistent and reliable method to induce resistance to persuasion (Miller et al., 2013). We are not aware of any study directly analysing the effect of inoculation on resistance to social engineering. We are convinced that the principles behind inoculation are sound, and we will analyse their effect on people in a future empirical study. In addition, Gragg (2003) has already adopted inoculation as a valuable mechanism for resistance to social engineering. Nevertheless, further techniques exist in social psychology to train resistance to persuasion:
Persuasion Knowledge. The aim of security awareness programs is for users to show resistance toward persuasion in the case of a social engineering attack. This resistance is increased if a user is concerned about being deceived (Friestad and Wright, 1994). Persuasion knowledge consists of information about the tactics used in persuasive situations, their possible influence on attitudes and behaviour, their effectiveness and appropriateness, the persuasive agent’s motives, and coping strategies (Fransen et al., 2015; Friestad and Wright, 1994). Activated persuasion knowledge usually elicits either suspicion about the persuasive agent’s motives, scepticism about arguments, or perceptions of manipulation or deception. Furthermore, it points to response options and selects coping tactics believed to be appropriate (Friestad and Wright, 1994). This positive relationship between persuasion knowledge and resistance to persuasive attempts is demonstrated by Briñol et al. (2015): people are aware of persuasive attempts when they have knowledge about persuasion, and respond appropriately. This means educating users not only about common social engineering attack methods (e.g. phishing) but
particularly about the psychological principles used in social engineering is an absolute necessity. As people also enhance their persuasion knowledge through experience in social interactions, inoculation plays a vital role. Knowledge about coping tactics is, as indicated, essential to evaluate response options and to cope with persuasive attempts.
Attitude Bolstering. Awareness and knowledge of the security policy, its implications and its guidelines on, e.g., confidential information are necessary to make use of attitude bolstering. The self, or existing beliefs and attitudes, is strengthened, and therefore the vulnerability to persuasive attempts can be reduced (Fransen et al., 2015). In this process people generate thoughts that support their attitudes (Lydon et al., 1988). As demonstrated by Xu and Wyer (2012), it is possible to generate a bolstering mind-set that decreases the effectiveness of persuasive attempts. This is possible even when the cognitive behaviour leading to this bolstering mind-set was performed in an unrelated, earlier situation.
Decision Making. Information is processed by two different systems, as explained by Kahneman (2003): intuition and reasoning. Decisions are made based on either one. Butavicius et al. (2015) found that the preferred decision making style is linked to the susceptibility to persuasion, i.e. phishing. Decisions based on heuristics or mental shortcuts are intuitive, impulsive judgements that are more likely to be influenced by persuasive attempts. Interestingly, however, it seems that the style of decision making can be modified by training. This would imply that recurring exposure to different social engineering approaches helps in establishing effective strategies to cope with social engineering. Furthermore, it demonstrates that education about the threats of social engineering alone is not sufficient.
6. A Gap Analysis of Missing Defence Mechanisms in IT Security
against Social Engineering
As indicated above, the available defence mechanisms can be classified into the dimensions attitude and behaviour, which in turn are expressions of knowledge. Table 3 presents a mapping of defence mechanisms, comparing suggestions from IT security against techniques known in social psychology. When comparing the dimension attitude, the limited scope of IT security becomes evident. As established in Section 4, in the dimension attitude IT security considers the establishment of a policy and security awareness programs. The purpose of security awareness programs is twofold. Firstly, they are concerned with getting users to know and adhere to the established policy. Secondly, their scope is usually limited to the provision of basic knowledge about social engineering. In comparison, social psychology offers distinctly more, although some approaches may be at least partly covered. Forewarning can be seen as included in the education on social engineering basics, as the malicious intention of social engineers certainly belongs to basic knowledge about social engineering. But persuasion knowledge goes beyond social engineering basics, as it includes knowledge about persuasion strategies as well as counter tactics to rely on in any persuasive situation. Reliance on attitude bolstering requires good knowledge of the security policy. Again, IT security takes the first step in user
education, but fails in the second step, the enhancement of this knowledge. The use of attitude bolstering implies not only knowledge about the policy but also about its implications, and a thought process initiated by each user that strengthens his attitude to, e.g., keep sensitive information private. The necessity to perform a reality check can be deduced directly from the concept of ‘optimism bias’, as illustrated in Section 3. It might partially be covered in security awareness programs; a reality check might be done for, e.g., spam mails. But as this particular reality check has a technical background, people tend to dismiss their possible failure as a technical detail while at the same time greatly underestimating their personal susceptibility. It is therefore important to demonstrate their failure to them in a non-technical environment as well.
| Dimension | IT Defence Mechanisms | Psychological Defence Mechanisms |
| Knowledge / Attitude | Policy Compliance | - |
| Knowledge / Attitude | Security Awareness Program | Forewarning |
| Knowledge / Attitude | - | Persuasion Knowledge |
| Knowledge / Attitude | - | Attitude Bolstering |
| Knowledge / Attitude | - | Reality Check |
| Behaviour | Audit | - |
| Behaviour | - | Inoculation |
| Behaviour | - | Decision Making |

Table 3: Comparison of defence mechanisms suggested in IT security and social psychology
Table 3 presents another crucial finding: the dimension behaviour is underrepresented in IT security. The only suggestion made for this dimension is to verify correct behaviour via audits. But IT security fails to actually enhance secure behaviour. Training correct behaviour as part of security awareness programs is, as indicated in Section 4, recommended by only a few authors and is usually at most done for spam mails. Even though this is an application of inoculation, spam is only one possible social engineering attack, and a particularly technical one at that. The focus should again also be set on the persuasive nature of social engineering attacks; hence trainings could, for example, include role plays. Additionally, it has been shown effective to alter the decision making process by conducting decision trainings in which users make a “similar” decision in various forms.
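The mapping of Table 3 can be rendered as a small data structure (an illustrative sketch, not an artifact of the paper): each psychological defence mechanism is paired with the IT security mechanism that at least partly covers it, and entries without a counterpart mark the gap.

```python
# Illustrative rendering of Table 3: for each psychological defence mechanism,
# the IT security mechanism that (at least partly) covers it, or None if uncovered.
coverage = {
    "Forewarning": "Security Awareness Program",
    "Persuasion Knowledge": None,
    "Attitude Bolstering": None,
    "Reality Check": None,
    "Inoculation": None,
    "Decision Making": None,
}

# The gap: psychological mechanisms with no IT security counterpart.
gaps = [mechanism for mechanism, it_defence in coverage.items() if it_defence is None]
print(gaps)
```

Of the six psychological mechanisms, only forewarning is even partly covered by today's IT security practice; the remaining five constitute the gap the analysis identifies.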
7. Conclusions and Future Work
Previously, we have discussed gaps in IT security. As indicated, both dimensions, attitude and behaviour, are represented inadequately in IT security when compared to recommendations from social psychology. To close this gap, we envision a two-step improvement of available security awareness programs (as shown in Table 4). In a first step, persuasion resistance trainings should be conducted. They should take a broad approach to social engineering, including psychological principles and their effects, possible counter strategies, and the initiation of attitude bolstering. As optimism bias is a strong enabler of successful social engineering, it would be desirable to
demonstrate their susceptibility to users. This step is particularly promising, as it is feasible with little monetary effort. The second step is persuasive situation role plays. It is conceivable to include experiential exercises in this step, as well as repeated decision trainings that force users to re-evaluate their knowledge and attitude by making a “similar” decision multiple times. This step is more effortful, and it might suffice to educate only key personnel, as it includes “live” training sessions guided by possibly costly trainers, actors or, generally, personnel capable of creating persuasive situations.
| Dimension | Future Defence Mechanism |
| Attitude | Persuasion resistance training |
| Behaviour | Persuasive situation role plays |

Table 4: Envisioned training steps as part of security awareness programs
8. References
Bakhshi, T., Papadaki, M. and Furnell, S., 2008. A Practical Assessment of Social Engineering Vulnerabilities. In N. L. Clarke and S. Furnell, eds. 2nd International Conference on Human Aspects of Information Security and Assurance (HAISA 2008), Plymouth, UK, July 8–9, 2008. Proceedings. University of Plymouth, pp. 12–23.
Briñol, P., Rucker, D.D. and Petty, R.E., 2015. Naïve theories about persuasion: Implications
for information processing and consumer attitude change. International Journal of
Advertising, 34(1), pp.85–106.
Butavicius, M. et al., 2015. Breaching the Human Firewall: Social engineering in Phishing
and Spear-Phishing Emails. Australasian Conference on Information Systems, pp.1–11.
Cialdini, R.B., 2009. Influence: the psychology of persuasion EPub editi., New York: Collins.
Dimensional Research, 2011. The Risk of Social Engineering on Information Security: A Survey of IT Professionals.
Ferreira, A., Coventry, L. and Lenzini, G., 2015. Principles of Persuasion in Social Engineering and Their Use in Phishing. In T. Tryfonas and I. Askoxylakis, eds. Human Aspects of Information Security, Privacy, and Trust. Lecture Notes in Computer Science. Springer International Publishing, pp. 36–47.
Frangopoulos, E.D., Eloff, M.M. and Venter, L.M., 2010. Psychological considerations in Social Engineering - The "ψ-wall" as defense. Proceedings of the IADIS International Conference Information Systems, pp. 1–20.
Frangopoulos, E.D., Eloff, M.M. and Venter, L.M., 2012. Psychosocial Risks: can their effects on the Security of Information Systems really be ignored? In N. L. Clarke and S. Furnell, eds. 6th International Symposium on Human Aspects of Information Security and Assurance (HAISA 2012), Crete, Greece, June 6–8, 2012. Proceedings. University of Plymouth, pp. 52–63.
Fransen, M.L. et al., 2015. Strategies and motives for resistance to persuasion: an integrative
framework. Frontiers in psychology, 6(August), pp.1–12.
Friestad, M. and Wright, P., 1994. The Persuasion Knowledge Model: How People Cope with
Persuasion Attempts. Journal of Consumer Research, 21(1), pp.1–31.
Gragg, D., 2003. A multi-level defense against social engineering. SANS Reading Room.
Gulati, R., 2003. The Threat of Social Engineering and your defense against it. SANS Reading
Room.
Kahneman, D., 2003. A perspective on judgment and choice: mapping bounded rationality.
The American psychologist, 58(9), pp.697–720.
Kruger, H.A. and Kearney, W.D., 2006. A prototype for assessing information security awareness. Computers & Security, 25(4), pp.289–296.
Lydon, J., Zanna, M.P. and Ross, M., 1988. Bolstering Attitudes by Autobiographical Recall:
Attitude Persistence and Selective Memory. Personality and Social Psychology Bulletin,
14(1), pp.78–86. Available at: http://psp.sagepub.com/content/14/1/78.abstract.
Manske, K., 2009. An Introduction to Social Engineering. Information Security Journal: A
Global Perspective, 9(5), pp.1–7.
Miller, C.H. et al., 2013. Boosting the Potency of Resistance: Combining the Motivational
Forces of Inoculation and Psychological Reactance. Human Communication Research, 39(1),
pp.127–155.
Mitnick, K.D. and Simon, W.L., 2011. The art of deception: Controlling the human element of
security, John Wiley & Sons.
Petty, R.E. and Cacioppo, J.T., 1996. Attitudes and persuasion: Classic and contemporary
approaches, Boulder, CO, US: Westview Press.
Pfleeger, S.L., Sasse, M.A. and Furnham, A., 2014. From Weakest Link to Security Hero:
Transforming Staff Security Behavior. Journal of Homeland Security and Emergency
Management, 11(4), pp.489–510.
Sagarin, B.J. et al., 2002. Dispelling the illusion of invulnerability: The motivations and
mechanisms of resistance to persuasion. Journal of Personality and Social Psychology, 83(3),
pp.526–541.
Scheeres, J.W., 2008. Establishing the human firewall: reducing an individual’s vulnerability to social engineering attacks.
Stajano, F. and Wilson, P., 2011. Understanding Scam Victims: Seven Principles for Systems
Security. Commun. ACM, 54(3), pp.70–75.
Thornburgh, T., 2004. Social Engineering: The “Dark Art.” In Proceedings of the 1st Annual
Conference on Information Security Curriculum Development. InfoSecCD ’04. New York,
NY, USA: ACM, pp. 133–135.
Verizon, 2012. Data Breach Investigations Report. Available at:
http://www.verizonenterprise.com/resources/reports/rp_data-breach-investigations-report-
2012-ebk_en_xg.pdf [Accessed January 13, 2016].
Verizon, 2013. Data Breach Investigations Report. Available at:
http://www.verizonenterprise.com/resources/reports/rp_data-breach-investigations-report-
2013_en_xg.pdf [Accessed January 13, 2016].
Veseli, I., 2011. Measuring the Effectiveness of Information Security Awareness Program.
Gjøvik University College.
Weinstein, N.D., 1980. Unrealistic Optimism About Future Life events. Journal of Personality
and Social Psychology, 39(5), pp.806–820.
Winkler, I.S. and Dealy, B., 1995. Information Security Technology?...Don't Rely on It. A
Case Study in Social Engineering. In Fifth USENIX Security Symposium. pp. 1–6.
Xu, A.J. and Wyer, R.S.J., 2012. The Role of Bolstering and Counterarguing Mind-Sets in
Persuasion. Journal of Consumer Research, 38(5), pp.920–932. Available at:
http://www.jstor.org/stable/10.1086/661112.