Requirements Engineering and
Tool-Support for Security and Privacy
Dr. rer. nat. Sebastian Pape
Habilitation thesis submitted in
fulfillment of the requirements for
the academic title Dr. habil.
(Doctor habilitatus)
Submitted to the Faculty of Computer Science and Mathematics
of the Johann Wolfgang Goethe University,
Frankfurt am Main, Germany
September 2020
For my beloved wife Aline
Requirements Engineering and Tool-Support for Security and Privacy
Dr. rer. nat. Sebastian Pape
Abstract
Addressing security and privacy problems in practice requires a solid elicitation of requirements before a solution is attempted. In this thesis, specific challenges of the areas of social
engineering, security management and privacy enhancing technologies are analyzed:
Social Engineering
An overview of existing tools usable for social engineering is provided and defenses
against social engineering are analyzed. Serious games are proposed as a more pleasant way to raise
employees’ awareness and to train them.
Security Management
Specific requirements for small and medium-sized energy providers are analyzed
and a set of tools to support them in assessing security risks and improving their security is proposed.
Larger enterprises are supported by a method to collect security key performance indicators for different
subsidiaries and with a risk assessment method for apps on mobile devices. Furthermore, a method to
select a secure cloud provider – the currently most popular form of outsourcing – is provided.
Privacy Enhancing Technologies
Relevant factors for the users’ adoption of privacy enhancing technologies
are identified and economic incentives and hindrances for companies are discussed. Privacy by design
is applied to integrate privacy into the use cases e-commerce and internet of things.
Preface
When my research career started with a diploma thesis on the Ajtai-Dwork crypto system [144, 145], my focus
was on the most theoretical aspects of cryptography. During my dissertation [150, 151], which addressed
visual cryptography [149, 152] and non-transferable anonymous credentials [146–148], I broadened my view
to also consider the environment in which the proposed approach would be applied. Naturally, the next
step was to consider not only the environment itself, but also the economic and legal aspects, as well as the
usability and user acceptability, treating privacy and security as cross-sectoral and interdisciplinary topics.
The habilitation thesis in front of you, “Requirements Engineering and Tool-Support for Security and
Privacy”, forms the basis of my journey through research on social engineering, security management and
privacy-enhancing technologies. It has been written to fulfil the requirements for the academic title Dr. habil.
(Doctor habilitatus) at the Faculty of Computer Science and Mathematics of the Johann Wolfgang Goethe
University, Frankfurt am Main. The main part of this thesis was written in September 2020 although most of
the research was conducted over a number of years. My habilitation thesis was submitted in September 2020
and accepted by the extended faculty council in January 2021 as a cumulative thesis. Depending on what the
publishers allow, the related papers in the appendix could either be the final published versions or in some
cases pre-prints, which would be accepted author versions.
I would like to express my deepest gratitude to Prof. Dr. Kai Rannenberg, who gave me the opportunity to
pursue my habilitation at his chair and accompanied my academic career with helpful and supportive advice,
a constant willingness for discussions, invaluable conversations and constant support. I would also like to
take this opportunity to thank him most sincerely for writing an expert report on this habilitation thesis. I
would like to extend my sincere thanks to the external reviewers Prof. Dr.-Ing. Felix Freiling and Prof. Dr.
Melanie Volkamer for taking on and promptly delivering the expert reports. I also wish to thank everybody
involved at the Faculty of Computer Science and Mathematics of the Johann Wolfgang Goethe University for
their support and the smooth course of my habilitation process, in particular dean and head of the habilitation
commission Prof. Dr.-Ing. Lars Hedrich as well as the commission’s members Prof. Dr. Uwe Brinkschulte,
Prof. Dr. Detlef Krömker, and Prof. Dr. Mirjam Minor.
Scientific work thrives on the suggestions, hints and criticisms of active and interested discussion partners.
I would like to take this opportunity to thank all my colleagues at the Johann Wolfgang Goethe University
and the projects I have been involved in for their wonderful cooperation. In particular, I am grateful to all my
co-authors for the fruitful discussions and joint efforts: Dina Aladawy, Kristian Beckers, Sören Bleikertz,
Maren Braun, Xinyuan Cai, Julian Dax, Trajce Dimkov, Veronika Fries, Ludger Goeke, Akos Grosz, David
Harborth, Majid Hatamian, Vera Hazilov, Jan Jürjens, Dennis-Kenji Kipker, Jörg Lässig, Benedikt Ley, Fabio
Massacci, Toni Mastelic, Federica Paci, Niklas Paul, Wolter Pieters, Volkmar Pipek, Alejandro Quintanar,
Peter Schaab, Michael Schmid, Christopher Schmitz, Daniel Schosser, André Sekula, Jelena Stankovic,
Mattea Stelter, Daniel Tasche, and Welderufael B. Tesfay.
To all the persons mentioned here, I would like to express my sincere and heartfelt thanks. Needless to
say, any errors and inaccuracies are entirely my responsibility. I hope you enjoy reading this thesis.
Frankfurt, April 2021
Sebastian Pape
Contents
Preface vii
List of Figures xi
List of Tables xiii
1 Introduction 1
2 Social Engineering 3
2.1 Social Engineering Tools and Defenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Survey on Tools for Social Engineering . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Mapping of Defenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Serious Games on Social Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 HATCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 PROTECT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 CyberSecurity Awareness Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Security Management 17
3.1 Security Risk Assessment and Security Management for Small and Medium Energy Providers 18
3.1.1 Requirement Elicitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.2 Tool-Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 Security Risk Assessment for Large Enterprises . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.1 Comparison of Subsidiaries’ Security Levels in E-Commerce . . . . . . . . . . . 26
3.2.2 Security Risk Management for Smartphone Apps . . . . . . . . . . . . . . . . . . 28
3.3 Cloud Service Provider Security for Customers . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.1 Secure Cloud Provider Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 Supporting Security Assessments . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4 Privacy Enhancing Technologies 37
4.1 Users’ Technology Acceptance and Economic Incentives . . . . . . . . . . . . . . . . . . 38
4.1.1 User Concerns and Technology Acceptance Models . . . . . . . . . . . . . . . . . 38
4.1.2 Economic Incentives . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2 Privacy by Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2.1 E-Commerce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.2.2 Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5 Discussion and Conclusion 53
Bibliography 57
A Social Engineering 73
A.1 A Serious Game for Eliciting Social Engineering Security Requirements . . . . . . . . . . 75
A.2 HATCH: Hack And Trick Capricious Humans – A Serious Game on Social Engineering . 87
A.3 A systematic Gap Analysis of Social Engineering Defence Mechanisms considering Social Psychology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.4 Social engineering defence mechanisms and counteracting training strategies . . . . . . . 107
A.5 A Structured Comparison of Social Engineering Intelligence Gathering Tools . . . . . . . 131
A.6 PERSUADED: Fighting Social Engineering Attacks with a Serious Game . . . . . . . . . 149
A.7 PROTECT - An Easy Configurable Serious Game to Train Employees Against Social Engineering Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
A.8 Systematic Scenario Creation for Serious Security-Awareness Games . . . . . . . . . . . . 185
A.9 Conceptualization of a CyberSecurity Awareness Quiz . . . . . . . . . . . . . . . . . . . 205
A.10 Case Study: Checking a Serious Security-Awareness Game for its Legal Adequacy . . . . 223
B Security Management 237
B.1 Defining the Cloud Battlefield – Supporting Security Assessments by Cloud Customers . . 239
B.2 Elicitation of Requirements for an inter-organizational Platform to Support Security Management Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
B.3 Easing the Burden of Security Self-Assessments . . . . . . . . . . . . . . . . . . . . . . . 263
B.4 A structured comparison of the corporate information security . . . . . . . . . . . . . . . 275
B.5 Aggregating Corporate Information Security Maturity Levels of Different Assets . . . . . 293
B.6 ESARA: A Framework for Enterprise Smartphone Apps Risk Assessment . . . . . . . . . 313
B.7 An Insight into Decisive Factors in Cloud Provider Selection with a Focus on Security . . 331
B.8 Selecting a Secure Cloud Provider: An Empirical Study and Multi Criteria Approach . . . 353
B.9 LiSRA: Lightweight Security Risk Assessment for Decision Support in Information Security 383
B.10 On the use of Information Security Management Systems by German Energy Providers . . 413
C Privacy Enhancing Technologies 441
C.1 Examining Technology Use Factors of Privacy-Enhancing Technologies: The Role of Perceived Anonymity and Trust . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
C.2 Anreize und Hemmnisse für die Implementierung von Privacy-Enhancing Technologies im Unternehmenskontext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
C.3 Towards an Architecture for Pseudonymous E-Commerce – Applying Privacy by Design to Online Shopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
C.4 JonDonym Users’ Information Privacy Concerns . . . . . . . . . . . . . . . . . . . . . . 485
C.5 Assessing Privacy Policies of Internet of Things Services . . . . . . . . . . . . . . . . . . 503
C.6 Applying Privacy Patterns to the Internet of Things’ (IoT) Architecture . . . . . . . . . . . 519
C.7 Why Do People Pay for Privacy? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
C.8 How Privacy Concerns and Trust and Risk Beliefs Influence Users’ Intentions to Use Privacy-Enhancing Technologies – The Case of Tor . . . . . . . . . . . . . . . . . . 549
C.9 How Privacy Concerns, Trust and Risk Beliefs and Privacy Literacy Influence Users’ Intentions to Use Privacy-Enhancing Technologies – The Case of Tor . . . . . . . . . 561
C.10 Explaining the Technology Use Behavior of Privacy-Enhancing Technologies: The Case of Tor and JonDonym . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
List of Figures
2.1 The Relation between HATCH [16], PROTECT [72] and CyberSecurity Awareness Quiz [158] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 The THREAT-ARREST Advanced Training Platform [117] . . . . . . . . . . . . . . . . . 8
2.3 HATCH Cards: Psychological Principle, Social Engineering Attack, Attacker Type . . . 10
2.4 HATCH: Adaption of Emergency and Escape Plan for the Game . . . . . . . . . . . . . . 10
2.5 HATCH: Scenario for an Energy Provider . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.6 HATCH: Persona Card for Jonas, an Accountant . . . . . . . . . . . . . . . . . . . . . . . 12
2.7 HATCH: Overview of Scenario Creation Process [94] . . . . . . . . . . . . . . . . . . . . 13
2.8 PROTECT [72]: Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.9 CyberSecurity Awareness Quiz [158]: Graphical User Interface . . . . . . . . . . . . . . 15
2.10 CyberSecurity Awareness Quiz [158]: Gathering and Analyzing Content about Attacks . . 15
3.1 Size of the participating energy providers [162] . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Motivation, Benefits and Expectations to Implement an ISMS [162] . . . . . . . . . . . . 20
3.3 Status of each ISMS implementation phase [162] . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Portal Mock-up: Security Measures Module [46] . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Portal: Input Section [189] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.6 Portal: Modules for Updates to Maturity Levels [189] . . . . . . . . . . . . . . . . . . . . 23
3.7 Portal: Benchmarking Section [189] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.8 Portal: Risk Assessment Section by LiSRA [188] . . . . . . . . . . . . . . . . . . . . . . 25
3.9 LiSRA: Overview [188] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.10 LiSRA: General Risk Computation Process [188] . . . . . . . . . . . . . . . . . . . . . . 26
3.11 AHP Applied to Security Controls in E-Commerce [187] . . . . . . . . . . . . . . . . . . 27
3.12 ESARA: Architecture Overview [92] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.13 Consensus Assessments Initiative Questionnaire in Version 3.1 [39] . . . . . . . . . . . . 32
3.14 CPS [161] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.15 System Model with Relations Between Entities and Components [24] . . . . . . . . . . . 34
3.16 Attacking Other Customers Through Side-channels in Hardware and/or Software [24] . . . 35
4.1 JonDonym Users, IUIPC, Path Estimates and Adjusted R²-values of the Structural Model [80] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2 Tor Users, IUIPC, Path Estimates and Adjusted R²-values of the Structural Model [83] . 40
4.3 Tor Users, IUIPC & OPLIS, Path Estimates and Adjusted R²-values of the Structural Model [84] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.4 Tor/Jondonym Users, TAM, Path Estimates and Adjusted R²-values of the Structural Model [90] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.5 Data Flow Diagram for Different Architectures in E-Commerce [157] . . . . . . . . . . . 46
4.6 Three-layer service delivery model [154] . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.7 Privacy Patterns Applied to the IoT / Cloud Computing / Fog Computing Architecture [154] 50
List of Tables
1.1 Mapping of Papers to Requirement Elicitation and Tool-Support . . . . . . . . . . . . . . 2
2.1 Overview of Social Engineering Phases by Milosevic [130] . . . . . . . . . . . . . . . . . 4
2.2 Tools vs. Attack Type Knowledge [19] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Comparison of Defense Mechanism Suggested in IT Security and Social Psychology [184] 6
2.4 Mapping of Defense Mechanisms Against Attacks Based on Psychological Principles [184] 7
3.1 AHP Applied to Different Aggregation Types for Security Controls for Multiple Assets [186] 28
3.2 Coverage of Top 10 Mobile App Risks [169] by ESARA . . . . . . . . . . . . . . . . . . 29
3.3 CCM-Item and CAIQ-Question Numbers per Domain (version 3.1) [161] . . . . . . . . . 32
4.1 Tor and Jondonym Users, TAM, Total effects [90] . . . . . . . . . . . . . . . . . . . . . . 42
4.2 Tor and Jondonym Users, TAM, Multi-Group Analysis [90] . . . . . . . . . . . . . . . . . 42
4.3 Results of the coding for the open questions including quotes . . . . . . . . . . . . . . . . 43
4.4 Tor and Jondonym Users, Logistic Regression Model for Willingness to Donate/Pay [89] . 45
4.5 Privacy Threats Mapped to Architecture Variants in E-Commerce [157] . . . . . . . . . . 47
4.6 Parameters for the Framework to Assess Privacy Policies [165] . . . . . . . . . . . . . . . 49
4.7 Summary Statistics of Examined Policies [165] . . . . . . . . . . . . . . . . . . . . . . . 50
Chapter 1
Introduction
It’s important to recognize that you can’t have
100 percent security and also then have 100
percent privacy and zero inconvenience.
Barack Obama
With several data breaches and data leakages every month [99], data security and privacy issues have
reached the middle of society. However, tackling security and privacy issues is not an easy task, since both
of them not only involve all technical layers, but are also highly interdisciplinary.
Often the boundary between security and privacy is blurred in both directions: If the socio-technical
systems in use are not secure, all user data is at risk of leaking in a breach. In fact, the Cloud Security Alliance
lists data breaches as the top threat to cloud computing in its last three reports [36, 37, 40].
Vice versa, if personal data leaks, that data can also be used for further attacks on the individuals or their
companies, e. g. by attacks on the ’reset password’ mechanism of many sites [108, 111, 175, 185] or other
means of social engineering.
Experience has shown that security and privacy problems are often hard, and even if a solution exists in
theory or academia, it is not self-evident that it works in practice. On the one
hand, there is a gap between research and practice [139]: even when solutions exist in academia, they are
often not applied in practice, or only decades later. On the other hand, a solution which is secure
in theory is not automatically secure in practice. This holds for technical measures, e. g.
cryptographic algorithms whose implementations can be attacked by side-channel attacks [115, 116], but also
for humans, who regularly struggle to use programs and mechanisms designed by engineers [69, 195, 212].
In order to address problems in practice, a solid elicitation of requirements is essential before a solution
is attempted. In this thesis, specific challenges of the areas of social engineering,
security management and privacy enhancing technologies are analyzed:
Social Engineering
(cf. Sect. 2) The main challenge when countering social engineering is that all its
defenses need to consider human behavior, which – contrary to technical systems – is in general not
deterministic, but depends on a variety of other factors. While a variety of tools exists, most of them
support attackers rather than defenders. This is not necessarily the fault of the tools, since many of
them were not designed for social engineering attacks. Besides the disadvantage on the tool side,
it is also a hard task to raise awareness and train employees, since in general their main task is not
fending off social engineering attacks. Sect. 2 provides an overview of existing tools usable
for social engineering and analyzes defenses against social engineering. Additionally, serious games
are proposed as a more pleasant way to make employees aware and to train them.
Security Management
(cf. Sect. 3) One of the challenges for security management is that information
security can only be measured indirectly [25], e. g. by using metrics and KPIs [1] which aim to
approximate the real status of information security. Unfortunately, security management often goes
together with compliance, which means that measures are sometimes applied not to increase the
security, but to demonstrate compliance in order to pre-empt claims for damages should the company
be successfully attacked. This also means that security risk assessment has to be a fundamental part of
security management and often requires information security management systems to be implemented.
Naturally, requirements differ for small and large companies. Sect. 3 analyzes specific requirements for
small and medium-sized energy providers and proposes a set of tools to support them in assessing
security risks and improving their security. Larger enterprises are supported by a method to collect
security key performance indicators for different subsidiaries and by a risk assessment method for
apps on mobile devices. Furthermore, as the currently most popular form of outsourcing, the selection
of a secure cloud provider is discussed.
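Such indirect measurement typically aggregates several normalized indicators into one weighted figure. The following sketch illustrates the general idea only; the indicator names, values and weights are invented for illustration and do not stem from the thesis:

```python
# Sketch: approximating the state of information security with a weighted
# average of indirect indicators. All names, values and weights below are
# invented for illustration.

def security_kpi(indicators, weights):
    """Weighted average of indicator values, each normalized to [0, 1]."""
    assert set(indicators) == set(weights)
    total = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total

indicators = {
    "patch_coverage": 0.92,      # share of systems patched on time
    "awareness_training": 0.75,  # share of employees trained this year
    "incident_drills": 0.50,     # share of planned drills executed
}
weights = {"patch_coverage": 3, "awareness_training": 2, "incident_drills": 1}

print(round(security_kpi(indicators, weights), 3))
```

The weighting reflects that some indicators approximate the real security status better than others; the resulting figure is a proxy, not a direct measurement.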
Privacy Enhancing Technologies
(cf. Sect. 4) For privacy enhancing technologies the main challenge is
their dissemination. Often companies do not want to integrate privacy enhancing technologies into
their services, because their business model is built on collecting the users’ data, or they think that
they might need the collected data later, or because they simply do not know how to integrate them
without harming usability or performance. On the other hand, even where stand-alone privacy enhancing
technologies exist, the users’ adoption is quite low, and in particular for laymen it can be a cumbersome
task to get them working [69]. Sect. 4 therefore aims to identify relevant factors for the users’
adoption of privacy enhancing technologies. Besides that, economic incentives and hindrances are
discussed, and privacy by design is applied to integrate privacy into the use cases e-commerce and
internet of things.
Section 5 elaborates on commonalities of the three areas and sketches future work. For a better overview, a
mapping of papers to requirement elicitation and tool-support is provided in Tab. 1.1. However, there is no
clear border, since many of the papers on tools not only rely on previous results, but also include a short
elicitation of requirements, e. g. an experiment.
Table 1.1: Mapping of Papers to Requirement Elicitation and Tool-Support

Section  Topic                            Requirement Elicitation           Tool-Support
2        Social Engineering               A.3, A.4, A.5, A.10               A.1, A.2, A.6, A.7, A.8, A.9
3        Risk Assessment &                B.2, B.7, B.10                    B.1, B.3, B.4, B.5, B.6, B.8, B.9
         Security Management
4        Privacy Enhancing                C.1, C.2, C.4, C.5, C.7,          C.3, C.6
         Technologies                     C.8, C.9, C.10
Chapter 2
Social Engineering
My work is a game, a very serious game.
M. C. Escher
The European Network and Information Security Agency, ENISA, defines social engineering as a
technique that exploits human weaknesses and aims to manipulate people into breaking normal security
procedures [143]. In most cases, maliciously motivated attackers aim to gain access to their victim’s
commercial, financial, sensitive or private information in order to use it against them or cause harm
otherwise [8].
“The biggest threat to the security of a company is not a computer virus, an unpatched hole in a key
program or a badly installed firewall. In fact, the biggest threat could be you [. . . ] What I found personally to
be true was that it’s easier to manipulate people rather than technology [. . . ] Most of the time organizations
overlook that human element.”¹ These words from Kevin Mitnick, a former hacker who now works as an IT
security consultant, were spoken in a BBC interview almost two decades ago and are still of utmost
importance today.
The latest Data Breach Investigations Report [12] supports Mitnick’s statement and reports another
increase of financially motivated social engineering, where the attacker directly asks for money, e. g. by
impersonating CEOs or other high-level executives. Social engineering attacks represent a continuing threat
to employees of organizations. Given the wide availability of different tools and information sources [19], it is a
challenging task to keep up to date with recent attacks on employees, since new attacks are being developed
and modifications of known attack scenarios are emerging. For example, during the last year, scammers
have already varied their approach and now also ask for the purchase and transfer of online gift cards² in order to
scam employees. Additionally, scammers also base attacks on the current news situation, such as COVID-19
ransomware [182] or fake websites [15].
Furthermore, a social engineering attack is often only the first step of a larger attack, in which the attacker
uses the information gained there for further attacks [12]. According to Milosevic [130], a social engineering
attack itself consists of multiple phases, as summarized in Table 2.1. In the first phase, the attacker conducts
surveillance to identify persons with access to the information the attacker desires. The second phase focuses
on finding out as much about these persons as possible to help the attacker manipulate the victims. Based
on that information, the attacker starts building a relationship with the victim (pretexting phase). Afterwards,
the attacker exploits the built-up trust in the relationship and evaluates the gathered information in the
post-exploitation phase.
However, most organizations have difficulties addressing this issue adequately. According to Mitnick,
most companies rather purchase heavily standardized security products, such as firewalls or intrusion detection
systems, than consider potential threats of social engineering attacks [132]. Peltier [166] supports this
observation and states that technology-based countermeasures should be applied whenever possible. However,
he also claims that no hardware or software is able to fully protect an organization against social engineering
¹ https://news.bbc.co.uk/2/hi/technology/2320121.stm
² https://twitter.com/sjmurdoch/status/1217449265112535040
Table 2.1: Overview of Social Engineering Phases by Milosevic [130]
Phase                   Description
Pre-Engagement          Find targets with sufficient access to information/knowledge to perform an attack.
Intelligence Gathering  Gather information on each of the valid targets. Choose the ones to attack.
Pretexting              Use gathered information to build a relationship with the target. Gain victims’ trust to access additional information.
Exploitation            Use the built-up trust to get the desired information.
Post-Exploitation       Analyze the attack and the retrieved information. If necessary, return to a previous phase to continue the chain of attack until the final information has been retrieved.
attacks. Furthermore, social engineering is highly interdisciplinary; however, most defense strategies are
devised by IT security experts, who tend to have a background in information systems rather than in psychology [184].
The remainder of this chapter is structured as follows:
• Sect. 2.1 discusses tools for and defenses against social engineering.
– Sect. 2.1.1 describes a survey on tools for social engineering [19] (cf. Sect. A.5).
– Sect. 2.1.2 surveys defense strategies and compares them to findings in social psychology [183, 184] (cf. Sect. A.3, A.4).
• Sect. 2.2 sketches the purpose and relations of the different serious games introduced in the subsections.
– Sect. 2.2.1 describes the serious game HATCH, along with its two different applications [16, 17] (cf. Sect. A.1, A.2), a legal assessment of them [153] (cf. Sect. A.10), and a structured method to generate appropriate scenarios to adapt HATCH to different domains [94] (cf. Sect. A.8).
– Sect. 2.2.2 describes the serious game PROTECT [72] (cf. Sect. A.7) and its predecessor Persuaded [7] (cf. Sect. A.6).
– Sect. 2.2.3 describes the concept for a CyberSecurity Awareness Quiz [158] (cf. Sect. A.9).
The respective papers can be found in Appendix A, and the author’s contribution for each paper is indicated
in Tab. ?? on page ??.
2.1 Social Engineering Tools and Defenses
Even if companies are aware of social engineering attacks, they have only a limited number of tools available
to support them. This might be one of the reasons for the aforementioned preference for heavily standardized
security products. In Sect. 2.1.1, we will discuss the findings of a survey on tools for social engineering.
The alternative of hiring penetration testing companies that attack the company’s employees and clients in
order to reveal weaknesses in their defenses does not seem promising: besides the need to
address legal issues, which requires high effort upfront [211], experiments indicate that this approach might
be counterproductive due to humans’ demotivation when confronted with the testing results [53].
In order to provide an overview of further alternatives besides trainings and awareness campaigns, we
will compare defense mechanisms suggested in IT security and (social) psychology in Sect. 2.1.2. The
idea behind the comparison is that social engineering targets humans, and while IT security is a rather new
discipline, (social) psychology can be traced back to the ancient Greeks [196]. Therefore, we can expect to
find further concepts to fight social engineering there.
2.1.1 Survey on Tools for Social Engineering
In the process of social engineering described in Tab. 2.1, the first two phases rely heavily on information
gathering. For that purpose, a number of tools are available. On the one hand, these tools may be used by a
social engineer to prepare an attack. On the other hand, they could also provide an organization with
an excellent alternative to pen testing or awareness trainings, as they allow the organization to analyze its own possible vulnerabilities.
For that purpose, we conducted a structured survey on the tools’ capabilities [19] and contribute the following:
• a classification of existing tools regarding categories such as proposed purpose, price, perceived
usability, and visualization of results;
• a survey of information types retrieved by the tools regarding information about company employees
and their communication channels, as well as related information, e. g. company policies;
• a mapping of tools to certain types of social engineering attacks (phishing, baiting, impersonation).
For the mapping study, we first determined, for each information type (e. g. email, friends, (private) location,
co-workers, company location) and each considered social engineering attack (phishing, baiting, impersonation),
whether the information type helps to execute the attack. Then, for each of the investigated tools (e. g. Cree.py,
Maltego) or websites (e. g. LinkedIn, Xing), we investigated which information they could provide. Combining
these tables finally yields Tab. 2.2, which indicates which of the tools may be useful for which of the social
engineering attacks.
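The combination of the two tables can be thought of as a join of two relations. The following sketch illustrates this with a small, hypothetical subset of the survey data (the entries are simplified examples, not the complete results):

```python
# Sketch: joining (tool -> information types) with (information type ->
# attack types) to obtain (tool -> attack types), as done for Tab. 2.2.
# The data below is a simplified, illustrative subset of the survey.

tool_provides = {
    "LinkedIn": {"personal_information", "co_workers"},
    "Cree.py": {"private_locations"},
    "theHarvester": {"email"},
}

info_helps_attack = {
    "personal_information": {"P", "I"},  # P = Phishing, I = Impersonation
    "co_workers": {"I"},
    "private_locations": {"P", "I"},
    "email": {"P"},
}

def tools_vs_attacks(tool_provides, info_helps_attack):
    """A tool supports an attack type if it provides at least one
    information type that helps to execute that attack."""
    return {
        tool: sorted(set().union(*(info_helps_attack[i] for i in infos)))
        for tool, infos in tool_provides.items()
    }

print(tools_vs_attacks(tool_provides, info_helps_attack))
```

With the full survey data in place of the illustrative subset, this join reproduces the structure of Tab. 2.2.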
Table 2.2: Tools vs. Attack Type Knowledge [19]
Information Type | Cree.py | Gitrob | KnowEm | LinkedIn | Maltego | Namechk | Recon-ng | Spokeo | theHarvester | Wayback Machine | Wireshark | Xing
Telephone Number P P
Friends P,I P,I P,I P,I P,I
Personal Information P,I P,I P,I P,I P,I P,I
Private Locations P,I P,I P,I
E-Mail P P P P P
Instant Messenger P P P P P
Co-Workers: New Employee I I I
Co-Workers: Hierarchies I I I
Lingo
Facilities: Security-Measures B,I B,I
Facilities: Company Location B,I B,I B,I B,I B,I B,I
Websites P P P P
with P for Phishing, I for Impersonation, and B for Baiting
Taking a closer look at the table, one notices that the only information type that social engineering
tools do not provide today is the so-called company lingo, i.e. the abbreviations and specific words used in a
company or domain. However, given the postings in business-oriented social network sites, it may only be a
matter of time before machine learning and big data analysis fill this gap.
Further results of our survey were that none of the investigated tools or websites offered specific help for
the defense against social engineering attacks. None of the tools provided any kind of risk assessment for the
collected information, neither for the employees themselves nor for chief information (security) officers. Moreover, none
of the tools was able to suggest removing certain information, or adding fake or bogus information
to the publicly listed information. Fake information would make it easy to identify later social engineering
attacks that relied on it.
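The idea of planted fake information corresponds to what is elsewhere called a honeytoken or canary: a unique bogus detail published per source, whose later use reveals that an attacker harvested that source. A minimal sketch (the names, addresses and registry design are our own illustrative assumptions, not features of any surveyed tool):

```python
# Sketch: planting unique fake ("canary") contact details per public source.
# A later message addressed to a canary reveals a harvesting-based attack
# and the source it was scraped from. All data here is invented.

import secrets

def plant_canary(source, registry):
    """Create a unique fake address for one public listing and record it."""
    address = f"contact-{secrets.token_hex(4)}@example.com"
    registry[address] = source
    return address

def harvested_from(address, registry):
    """Return the scraped source if the address is a canary, else None."""
    return registry.get(address)

registry = {}
canary = plant_canary("company LinkedIn page", registry)

print(harvested_from(canary, registry))                     # -> company LinkedIn page
print(harvested_from("real.person@example.com", registry))  # -> None
```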
Additionally, most of the tools were easy to use, opening the field not only to skilled professionals, but
also to non-experts or even script-kiddies.
2.1.2 Mapping of Defenses
As we have seen in the previous section, most of the tools rather support the attacker than the defender.
Therefore, it’s worth to have a closer look at different defenses against social engineering.
For that purpose, we surveyed the state of the art [
183
,
184
] from a computer science, namely IT security
viewpoint, as well as from the viewpoint of social psychology. Following Kruger and Kearney
[119]
, social
engineering awareness was considered to consist of the three dimensions knowledge, attitude and behavior.
Therefore, the identified defense mechanisms were mapped to this three dimensions. Furthermore, defense
mechanisms from IT security and social psychology can be mapped against each other as shown in Tab. 2.3.
Table 2.3: Comparison of Defense Mechanisms Suggested in IT Security and Social Psychology [184]

Dimension             IT Defense Mechanism          Psychological Defense Mechanism
Attitude              Policy Compliance             –
                      Security Awareness Program    Forewarning
                      –                             Persuasion Knowledge
                      –                             Attitude Bolstering
                      –                             Reality Check
Knowledge/Behavior    Audit                         –
                      –                             Inoculation
                      –                             Decision Making
When comparing the different mechanisms, it is visible that defense mechanisms within IT security have not reached their full potential yet. Within the attitude dimension, they mostly consist of security awareness trainings and programs and the definition of security policies. In comparison, social psychology offers, with forewarning, a mechanism similar to awareness raising. However, persuasion knowledge (including knowledge about persuasion strategies as well as counter-tactics) and attitude bolstering (which relies on good knowledge of security policies and their implications to create a bolstering mind-set) go far beyond the described defense mechanisms from IT security. Last but not least, carefully designed reality checks could help people realize that they are in fact vulnerable. However, reality checks have to be carried out very carefully in order to avoid frustration and effects similar to those of social engineering penetration testing.
From an IT security perspective, audits (including penetration testing specifically for social engineering) are a typical defense mechanism within the behavior dimension. However, since penetration testers generally focus on the detection of attacks rather than on any kind of training or reality check for their victims, care has to be taken, besides the aforementioned setup effort, not to demotivate employees. On the other hand, from a social psychological point of view, inoculation, i. e. putting employees in a situation similar to one a social engineer would put them in so they can train counter-arguments, and training on decision making, i. e. avoiding impulsive decisions, which often benefit social engineering attackers, might enable employees to withstand a real attack.
Altogether, the gap analysis shows that defense mechanisms from IT security can be improved and, in particular, consider the behavior dimension far too little.
Another contribution of the literature survey is a review of psychological principles which foster impulsive decisions and prevent deeper reasoning. By mapping them to the applicability of (psychological) defense mechanisms, we find that attacks exploiting authority, social proof or distraction are mainly defendable through the dimension of attitude, while attacks based on liking, similarity, deception, reciprocation and consistency require training of both dimensions, attitude and behavior (cf. Tab. 2.4).
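The mapping just summarized can be read as a simple lookup from exploited principle to the defense dimensions that need training. The following sketch is our own illustration, encoding only the prose summary above (not every cell of Tab. 2.4):

```python
# Illustrative lookup: which defense dimensions need training to counter
# attacks exploiting a given psychological principle. Assignments follow
# the textual summary of the gap analysis, not the full table.
DEFENSE_DIMENSIONS = {
    # principles mainly defendable through the attitude dimension
    "authority":     {"attitude"},
    "social proof":  {"attitude"},
    "distraction":   {"attitude"},
    # principles requiring training of both attitude and behavior
    "liking":        {"attitude", "behavior"},
    "similarity":    {"attitude", "behavior"},
    "deception":     {"attitude", "behavior"},
    "reciprocation": {"attitude", "behavior"},
    "consistency":   {"attitude", "behavior"},
}

def required_dimensions(principles):
    """Union of defense dimensions needed to counter the given principles."""
    dims = set()
    for principle in principles:
        dims |= DEFENSE_DIMENSIONS.get(principle, set())
    return dims
```

For a training exploiting both authority and liking, for example, the lookup yields that attitude and behavior need to be addressed.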
As a result, we recommend integrating persuasion knowledge and resistance training into trainings or awareness campaigns against social engineering. Furthermore, attitude bolstering has been shown to be effective in decreasing the effectiveness of persuasion attempts [218], and could be vital when users are not only shown their failures but also their successful attempts to prevent a social engineering attack.
Table 2.4: Mapping of Defense Mechanisms Against Attacks Based on Psychological Principles [184]
(Columns list the exploited principles: Authority; Social Proof; Liking, Similarity, Deception; Commitment, Reciprocation, Consistency; Distraction. Rows list the defense mechanisms: in the attitude dimension, Persuasion Knowledge and Forewarning apply to all five principle groups, Attitude Bolstering to three and Reality Check to two of them; in the knowledge/behavior dimension, Inoculation applies to two principle groups and Decision Making to all five.)
2.2 Serious Games on Social Engineering
In order to address the issues identified and developed in the previous sections, we have designed three serious games. Serious games have built a reputation for getting employees of companies involved in security activities in an enjoyable and sustainable way. While still preserving a playful character, serious games are used, e. g., for security education and threat analysis [52, 197, 198, 214, 215]. Since at that time none of the games was specifically developed for social engineering, all three games proposed in this section aim to address social engineering, each in a different way (cf. Fig. 2.1). We start with a brief overview of the games and their relation in this section. They are described in more detail in the following subsections.
Figure 2.1: The Relation between HATCH [16], PROTECT [72] and CyberSecurity Awareness Quiz [158]
HATCH [16, 17] (cf. Sect. 2.2.1) aims to identify social engineering threats and to develop them into security requirements. Since we noticed that most tools for social engineering do not help the defenders, the main idea of this game is to support the defenders in a systematic threat elicitation. Players develop attacks on their colleagues based on their existing knowledge of work processes, skills and preferences. As a result, a list of possible SE threats is generated that can be used to improve work processes and security policies. The advantage over a threat analysis by experts is that the employees of a department or a company know the real work processes very well, so it is easier to train them in social engineering than to have experts study all work processes. Furthermore, when asked about real processes, many employees will most likely not reveal what they are really doing but rather demonstrate their knowledge of how the process looks in the official process definition.
While HATCH helps to develop and refine security policies, PROTECT [72] (cf. Sect. 2.2.2) aims to offer employees an environment where they can learn and train the application of defenses. In the long run, the game raises the employees' security awareness and also helps to bolster their attitudes.
Since PROTECT is based on security policies, it is naturally somewhat generic and cannot address all recent variations of certain attacks. Therefore, the idea of the CyberSecurity Awareness Quiz [158] (cf. Sect. 2.2.3) is to have a quiz based on the latest attacks and their variations. After a new attack or variation emerges, all it takes is the development of a new question, which can then be used within the quiz immediately.
Besides the interplay of the three games themselves, which already provides a tool chain to defend against social engineering, it is also important to allow the integration of the games into a more general training platform, such as the THREAT-ARREST [117] advanced training platform (cf. Fig. 2.2). The THREAT-ARREST project (https://www.threat-arrest.eu/) received funding from the European Union's Horizon 2020 research and innovation program and aims to develop an advanced training platform incorporating emulation, simulation, serious gaming and visualization capabilities to adequately prepare stakeholders with different types of responsibility and levels of expertise to defend high-risk cyber systems and organizations against advanced, known and new cyber-attacks.
Figure 2.2: The THREAT-ARREST Advanced Training Platform [117]
The integration into the platform allows combining the efforts within the serious game with other components such as the emulation and the simulation tool. This contributes to a continuous evaluation of the individual trainees' performance and of the effectiveness of the training programs. Within the platform, for each trainee, the results of the serious games, the emulation, the simulation and the training tool are brought together to spot possible gaps in the employee's knowledge or awareness. If knowledge gaps are identified, it can be checked whether a training on the specific topic already exists as a serious game, simulation or emulation in the cyber range system. If no appropriate training can be identified, this might indicate the need to produce a new training, tailored to the organizational needs and the trainee types.
2.2.1 HATCH
Hack and Trick Capricious Humans (HATCH) is a physical (tabletop) serious game on social engineering [16, 17]. The game is available in two versions, a real-life scenario and a generic version. Each version of the
game pursues a slightly different objective: The real-life scenario aims to derive social engineering security requirements for a company or one of its departments. Therefore, a real environment is modeled and players attack their colleagues in order to identify real attack vectors. The generic version of the game aims to raise the players' awareness of social engineering threats and to educate them in detecting this kind of attack. In order not to unnecessarily expose and blame colleagues during a training session, it is based on a virtual scenario with personas as attack victims [153]. The initial scenario consists of a layout of a medium-sized office and ten employees as personas, printed on cards that contain fictional descriptions of them. By definition, personas are imaginary, however realistic, descriptions of stakeholders or future users of a service or product, who have names, jobs, feelings, goals, and certain needs and requirements [62].
In both versions two decks of cards are used: psychological principles and social engineering attacks. Psychological principle cards state and describe human behaviors or patterns that are often exploited by social engineers, for example: 'Distraction - While you distract your victims by whatever retains their interests, you can do anything to them'. The psychological principle card patterns are based on the work of Stajano and Wilson [200], who describe why attacks on scam victims may succeed. We extended the set of behavioral patterns by patterns found in work on social engineering from Gulati [75] and Peltier [166]. The social engineering cards, on the other hand, name and define some of the most common social engineering attacks, for example dumpster diving, which is 'the act of analyzing documents and other things in a garbage bin of an organization to reveal sensitive information'. The used attack techniques are mostly based on the work of Krombholz et al. [118]. Again, we extended the set of attack techniques by work of Gulati [75], Peltier [166], and Chitrey et al. [34].
When playing the game, each player draws one psychological principle card and three social engineering attack cards and reads the respective descriptions. Each player then has the task of choosing a victim (depending on the version, either a colleague or a persona) who fits the psychological principle card and of elaborating an attack using the social engineering attack card that best matches victim and psychological principle. Players take turns revealing their cards and describing the social engineering attack they came up with. The other players discuss the proposed attack, award points for the attack's feasibility and viability, and rate whether it is compliant with the descriptions on this player's cards. The total score of each player is calculated at the end of the group rating, and the player with the highest score wins the game. At the end of the game, all players briefly reflect on the proposed social engineering attacks and derive potential security threats. The following list provides an overview of the steps of the game:
1. Each player draws a card from the deck of human behavioral principles, e. g. the "Need and Greed" principle.
2. Each player draws three cards from the deck of social engineering attack techniques, e. g. phishing.
3. Each player develops an attack targeting one of the personas in the scenario based on the drawn cards.
4. Each player presents his/her attack to the group and the other members of the group discuss if the attack is feasible.
5. The players get points based on how viable their attack is and whether the attack was compliant with the drawn cards. The player with the most points wins the game.
6. As debriefing, the perceived threats are discussed and the players reflect on their attacks.
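The round structure above can be sketched as a short simulation. This is purely our illustration of the rules: card texts, the automatic pick of an attack card and the numeric ratings are invented placeholders, since in the actual game attacks are developed and rated by the players themselves.

```python
import random

# Placeholder decks and personas, invented for illustration only.
PRINCIPLES = ["Need and Greed", "Distraction", "Authority"]
ATTACKS = ["Phishing", "Dumpster Diving", "Baiting", "Tailgating"]
PERSONAS = ["Jonas", "Maria", "Tom"]

def play_round(players, rng=random):
    """Simulate one HATCH round and return (winner, scores)."""
    scores = {}
    for player in players:
        principle = rng.choice(PRINCIPLES)   # step 1: draw a principle card
        hand = rng.sample(ATTACKS, 3)        # step 2: draw three attack cards
        victim = rng.choice(PERSONAS)        # step 3: target a persona
        attack = hand[0]                     # placeholder for the player's choice
        # steps 4-5: the group rates feasibility (0-2) and card compliance (0-1);
        # here both ratings are randomized stand-ins for the group discussion
        scores[player] = rng.randint(0, 2) + rng.randint(0, 1)
    return max(scores, key=scores.get), scores

winner, scores = play_round(["Alice", "Bob"], rng=random.Random(7))
```

Step 6, the debriefing, is intentionally left out: it is a discussion among the players and has no meaningful programmatic counterpart.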
Figure 2.3 shows samples of the cards. The original card layout is shown in Fig. 2.3a and a version
developed later with the help of Kristina Femmer, a professional designer, is shown in Fig. 2.3b. As discussed,
the game can be played either with an imaginary (virtual) scenario or a (realistic) scenario that reflects the
real working environment. We describe both scenario types in more detail in the following.
Realistic Scenarios
The basic gameplay of HATCH has already been described above. For the realistic scenario, the aim was
to identify a list of social engineering threats. Players should develop attacks on their colleagues based on
their existing knowledge of work processes, skills and preferences. In order to foster creativity, a game plan
Figure 2.3: HATCH Cards: Psychological Principle, Social Engineering Attack, and Attacker Type; (a) Card Version 1 [16], (b) Card Version 2. The attacker type card 'Outside Attacker' reads: 'An outsider is new to the organization and has to establish trust to its employees.'
is developed based on an existing emergency and escape plan (cf. Fig. 2.4a). Emergency and evacuation plans are a good source to build upon since they include a site plan which suits our needs and their layout is standardized [102]. Furthermore, they are publicly available in corridors, need to be updated frequently, and are – depending on the type of building – in most countries required by law (cf. [29, § 4 Abs. 4]). This way, a game plan can be created with low effort, e. g. by just adding images or icons of the co-workers and some assets (cf. Fig. 2.4b).

Figure 2.4: HATCH: Adaptation of an Emergency and Escape Plan for the Game; (a) Emergency and Escape Plan, (b) Adapted Game Plan
Besides training and awareness raising, the result is a list of possible social engineering threats that can
be used to improve work processes and security policies. The advantage over a threat analysis by experts is
that the employees of a department or a company know the real work processes very well, so it is easier to
train them in social engineering than to have experts study all work processes. Beckers and Pape [16] showed that the realistic scenario was helpful for the elicitation of context-specific attacks, as it utilizes the domain knowledge of the players and their observations and knowledge of daily work and processes.
Virtual Scenarios
Virtual scenarios are used when HATCH is employed for training and awareness purposes [17]. The basic gameplay of HATCH with a virtual scenario is the same as with a realistic scenario. However, instead
gameplay of HATCH with a virtual scenario is the same as with a realistic scenario. However, instead
of a plan of the real working environment along with co-workers, a map of a virtual environment is used.
The virtual environment consists of a plan of a department or company (see Fig. 2.5) and for each of the
employees shown in the plan there is a persona card that outlines the basic characteristics of the employee
(see Fig. 2.6). The players’ task now is to come up with an attack that is as plausible as possible on the basis
of the drawn cards and that exploits the characteristics of the employees present in the game. The attack
found is then evaluated for plausibility by the players.
Figure 2.5: HATCH: Scenario for an Energy Provider
Besides the initial virtual scenario with a simple office environment, scenarios for a maritime environment, an energy provider and a consulting company [94] now exist. They all contain a basic map along with detailed persona descriptions on additional cards. Like the cards, the initial versions of the scenarios have been reworked with the help of a professional designer.
Legal Assessment
It is generally accepted that management has a legal obligation to maintain and operate IT security measures as part of the company's own compliance – this includes training employees with regard to social engineering attacks. Therefore, at first glance, the use of a serious game for awareness raising and training against social engineering attacks seems to be fine. On the other hand, however, the question is whether and to what extent employees must tolerate the associated measures and, if necessary, also participate in them. The field of conflict between the employee's freedom and the company's security involves issues relating to labor law and data protection law, as well as to corporate compliance and corporate governance.
Figure 2.6: HATCH: Persona Card for Jonas, an Accountant. The card reads: 'Jonas is an accountant and takes care of finance, in particular of invoices from suppliers. He is familiar with data analysis and databases. He is concerned regarding the availability and integrity of the databases. Jonas spends a lot of time learning new analysis methods.'
While there are reports on the use of serious games in the corporate sector [56], the body of literature specific to serious games aiming to raise awareness and allow security training is rather small. Regarding compliance and serious games, there is a lot of work on using serious games to increase compliance, but not on the compliance of the games themselves. In the area of social engineering, most of the work is focused on social engineering penetration testing. Hatfield [93] discusses the ethics of social engineering penetration testing, and Kuhn and Willemsen [121] and Zimmer and Helle [220] discuss social engineering penetration testing from a legal perspective with regard to labor law. Therefore, we have investigated the legal challenges of making use of the game HATCH, and in particular the circumstances for HATCH's two different scenario types [153] (based on previous work by the same authors [113]).
As a result, our legal assessment showed large differences between the two scenario types. In the realistic scenario, employees' personal characteristics are part of the game, so care needs to be taken not to unnecessarily expose the personality of the employees, e. g. by accidentally revealing details of another employee such as long breaks or political, religious or sexual preferences not known to other players before – all of which could become part of the game. Furthermore, it cannot be ensured that some players do not use the environment of the game for some (additional) harassment at work. This holds even when the employees ask for or volunteer to play the scenario with a realistic environment, in which they would suggest social engineering attacks on each other. Thus, for training and awareness raising, the virtual scenario should be used. On the other hand, if the employer can demonstrate a reasonable interest, i. e. if the game is used for threat analysis, the use of the game with a realistic scenario may be admissible.
Scenario Creation
Awareness campaigns benefit from addressing the target audience as specifically as possible [11]. Transferred to HATCH, this refers mostly to the virtual scenarios and the need to have them as specific as possible. For that purpose, we investigated how to systematically develop a new scenario suitable for consulting companies. Our approach [94] also tackles the problem that, although many serious games for IT security exist, it is still hard to find an accurately fitting serious game for the environment of a specific organization.
In 2011, Faily and Flechais [62] introduced a method for developing personas that is based on grounded theory, a "[. . . ] systematic, yet flexible guideline for collecting and analyzing qualitative data" [33]. Our proposed scenario creation process consists of six steps, as shown in Fig. 2.7:
Figure 2.7: HATCH: Overview of Scenario Creation Process [94]
Conduct interviews with relevant stakeholders (stage 1) and transcribe them (stage 2). Code the answers in two rounds: open and axial coding (stage 3). Open and axial coding are typical for qualitative analyses: for open coding, textual data is analyzed line by line to identify certain phenomena and attach adequate codes to them; for axial coding, previously assigned codes are examined to identify certain relationships among them, which are then summarized into concepts and categories [42]. Following Faily and Flechais' method [62], develop propositions from the codes (stage 4), such as 'more consultants are hired for a project than clients' and 'generally, the consulting team consists of 4 to 5 people'. Summarize these propositions, assign them to concepts and categorize them (stage 5). As the last step, select appropriate propositions and use them as characteristics for persona narratives (stage 6).
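Stages 4 to 6 can be illustrated with a small data-handling sketch. The propositions below reuse the examples from the text; the category labels, persona name and narrative template are invented for illustration and are not the study's actual codes.

```python
# Example propositions as (category, text) pairs; the first two texts are
# taken from the text above, the category labels are invented.
propositions = [
    ("team size", "generally, the consulting team consists of 4 to 5 people"),
    ("staffing", "more consultants are hired for a project than clients"),
    ("travel", "consultants travel to the client site weekly"),
]

def build_persona(name, role, selected):
    """Stage 6: turn selected propositions into a short persona narrative."""
    facts = "; ".join(text for _, text in selected)
    return f"{name} is a {role}. Context: {facts}."

# Stage 5/6: select appropriate propositions and generate the narrative.
persona = build_persona(
    "Clara", "consultant",
    [p for p in propositions if p[0] != "travel"],
)
```

In practice, of course, the selection and the narrative writing are editorial decisions; the sketch only shows how coded propositions feed the persona characteristics.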
The result of the evaluation of our method for creating a new scenario for HATCH was that it was effective:
All participants of the evaluation sessions agreed that the derived scenario and its personas are realistic.
However, since it was also very time-consuming, the required effort only makes sense if the scenario can be
used several times by an organization or can be transferred to another, similarly structured organization.
2.2.2 PROTECT
The serious game PROTECT [
72
] builds on its predecessor Persuaded [
7
], thus both games share the same
gaming principle. Players draw cards in a patience like manner from a pile and besides special cards, the
pile contains attacks and defenses. If an attack is drawn, the player gets confronted with a possible social
engineering threat and has to select a defense mechanism. The correct defense mechanism is a pattern of
behavior ensuring a secure outcome, e. g. as described in a security policy. An example of the user interface
and for a presented attack is shown in Fig. 2.8.
In order to make the game more challenging and increase the long-term motivation to play the game, in
the basis version, players need to ensure that they have the correct defense card on their hands when an attack
is drawn from the pile of cards. For that purpose, they may use special cards to view the next three cards on
the pile or to discard the next card on the pile. By playing anticipatorily, players can navigate through the
pile, collecting defense cards and discarding attacks they do not have a correct defense card, yet.
PROTECT is an advancement of Persuaded in several directions. First of all, Persuaded was more or less static in the sense that the cards were represented by images, making it quite difficult to build a new deck. PROTECT allows defining card decks in its configuration file and can therefore be adapted with low effort to new scenarios or security policies. As a consequence, several different card decks exist, e. g. scenarios for the maritime transport or electronic cancer registration domains. Furthermore, the gameplay can be changed by configuring various game settings to allow a progression between difficulty levels and various other challenges, letting players get familiar with attacks and defenses while also keeping up their long-term motivation to start new games. This makes PROTECT a family of games, with Persuaded being a specific member of that family.
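To illustrate the idea of configurable decks – the actual PROTECT configuration format is not reproduced here, so the field names and structure below are hypothetical – a deck could be defined and queried roughly as follows:

```python
import json

# Hypothetical deck definition; the real PROTECT configuration file may
# look entirely different. It only illustrates that attacks, defenses and
# special cards can be swapped without touching the game logic.
deck_config = json.loads("""
{
  "scenario": "maritime transport",
  "difficulty": 1,
  "cards": [
    {"type": "attack",  "name": "Phishing", "defense": "Report suspicious mail"},
    {"type": "defense", "name": "Report suspicious mail"},
    {"type": "special", "name": "View next three cards"}
  ]
}
""")

def matching_defense(attack_name, cards):
    """Look up which defense card a drawn attack card requires."""
    for card in cards:
        if card["type"] == "attack" and card["name"] == attack_name:
            return card["defense"]
    return None
```

The point of such a data-driven design is that a new scenario or security policy only requires a new deck file, not a new build of the game.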
As a further enhancement, all configuration options are accessible via an application programming interface, allowing PROTECT not only to serve as a stand-alone application, but also to be easily embedded into a training platform. The training platform, e. g. the THREAT-ARREST platform, can then control the game's difficulty by changing configuration parameters based on the player's achievements in previous games or in other trainings within the platform.
Figure 2.8: PROTECT [72]: Graphical User Interface
We have evaluated PROTECT by asking five practitioners to play the game and provide feedback. The feedback was generally positive, in particular emphasizing that once players are familiar with the game, it contributes to bolstering the players' attitude by making them confident that they might be able to defend against certain social engineering attacks in the future. Although the game is, of course, less complex than reality, it also contributed to inoculation by letting the players react repeatedly to a limited number of attacks.
2.2.3 CyberSecurity Awareness Quiz
A general challenge for serious games is to cope with new attacks and variations of attacks. For most of the existing games, including PROTECT, adapting the game in a timely manner takes a lot of effort. Therefore, the idea of the CyberSecurity Awareness Quiz [158] is in particular to allow fast updates of the game's content to cover recent threats.
During the game conceptualization, we defined the following requirements: i) As discussed beforehand, the game should refer to recent real-world threats. ii) Since we expect only a limited number of new attacks, the game should be lightweight, short and playable on mobile phones, with the idea that it could be played occasionally (e. g. when traveling in trams or subways). As a result, we decided to aim for a quiz-like game as shown in Fig. 2.9. Since the game type is straightforward – players answer questions and can either play single-player to compete for a high score or use several multi-player modes to compete against each other – the main focus in this section is on a systematic process to create questions based on current affairs and attacks observed in the wild.
The first step of the process consists of the procurement of information on current social engineering attacks, as shown in Fig. 2.10. Within that step, relevant sources regularly publishing content related to social engineering attacks – news websites, websites about information security, websites of institutions, blogs or even Twitter – are collected, preferably in a structured manner (e. g. in standardized formats like RSS, which stands for RDF Site Summary or, depending on the version, Really Simple Syndication). These sites are then automatically checked with appropriate tools for updates on new attacks. As of now, the updated information has to be manually reviewed by a game content editor to assess whether the content of the web feed is relevant.
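As an illustration of the automated checking step, the sketch below parses an RSS feed and reports only items not seen before, so that an editor is notified about new attack reports only once. It is a simplified stand-in for the actual collection tools; the feed content is a hard-coded example.

```python
import xml.etree.ElementTree as ET

# Hard-coded example feed standing in for a real RSS source.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><guid>a1</guid><title>New phishing wave targets banks</title></item>
  <item><guid>a2</guid><title>Voice-cloning scam reported</title></item>
</channel></rss>"""

def new_items(feed_xml, seen_guids):
    """Return (guid, title) pairs for feed items whose GUID was not seen yet."""
    fresh = []
    for item in ET.fromstring(feed_xml).iter("item"):
        guid = item.findtext("guid")
        if guid not in seen_guids:
            fresh.append((guid, item.findtext("title")))
            seen_guids.add(guid)  # remember so the editor is notified only once
    return fresh
```

Running this periodically against the subscribed feeds and forwarding the fresh items to the game content editor corresponds to steps 3 to 5 of the process in Fig. 2.10.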
Figure 2.9: CyberSecurity Awareness Quiz [158]: Graphical User Interface
Figure 2.10: CyberSecurity Awareness Quiz [158]: Gathering and Analyzing Content about Attacks. The depicted process comprises seven steps – (1) research for appropriate web feed services, (2) subscribe to relevant web feeds, (3) pull new web feeds, (4, 5) notify for new content, (6) review of original content on the websites, (7) assess web feed – carried out by a content collector, a feed aggregator, a game content editor and a content manager.
If a relevant attack is identified, a question is formulated based on the attack. In order to allow the creation of quizzes based on a certain topic or on attacks popular in a certain time frame or region, metadata such as the category of an attack (e. g. phishing) is assigned to the question. In the next steps, correct and incorrect answers are assigned to the question.
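The metadata-based selection can be sketched as a simple filter over the question pool. The fields shown are assumptions for illustration, not the quiz's actual data model.

```python
from datetime import date

# Example question pool; field names and contents are illustrative only.
questions = [
    {"text": "Which detail marks this mail as phishing?",
     "category": "phishing", "region": "EU", "added": date(2020, 5, 2)},
    {"text": "What should you do with a found USB stick?",
     "category": "baiting", "region": "EU", "added": date(2020, 3, 1)},
]

def build_quiz(pool, category=None, since=None):
    """Select questions matching a topic and/or a time frame."""
    return [q for q in pool
            if (category is None or q["category"] == category)
            and (since is None or q["added"] >= since)]
```

A quiz on recent phishing attacks, for instance, would combine both filters; leaving both unset returns the full pool.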
Several tools are provided to allow the quiz manager either to manually create quizzes or to create them based on certain metadata. Players can then choose from a list of provided quizzes the topic they are interested in. Alternatively, the CyberSecurity Awareness Quiz can be connected to a training platform, in the same manner as described for PROTECT in the previous section. This way, the platform is able to choose a quiz based on the player's performance in other parts of the training.
Chapter 3
Security Management
There are risks and costs to a program of
action – but they are far less than the long
range cost of comfortable inaction.
John F. Kennedy
Security risk assessments should be at the core of any digitally evolved organization. Often they are part of the organization's constant effort for compliance. While a certification following ISO/IEC 27001 [104] is mandatory in some domains, i. e. critical infrastructure, there are also economic reasons even for small and medium enterprises to get ISO/IEC 27001 certified [96].
Naturally, approaches and challenges differ between small and large enterprises: Small enterprises often struggle with the necessary know-how since they cannot afford to entrust someone full-time with security. Therefore, even the introduction of an information security management system, which is required by the ISO/IEC 27001 standard, is a serious challenge for them. As a consequence, they often ask specialists for consultation. On the other side of the spectrum, large enterprises often have dedicated security specialists, but struggle more often with being split into several units. Their challenge is to set up a consistent security level across all units and to allow their management to oversee the system and its security needs as a whole. For that purpose, the collection of key performance indicators is vital for large enterprises.
However, there are also security challenges common to small, medium and large enterprises. The outsourcing of processes and services has been part of strategic decisions for decades [172], and in particular the outsourcing of IT was a trend in the 1990s [59]. Even with the occasional reverse trend of backsourcing [213], this development has been reinforced by cloud computing, which has emerged as the new computing paradigm over the last ten years. Cloud computing enables consumers to purchase, on demand, conveniently and cost-efficiently, computing power and storage capacity from specialized providers that can be rapidly provisioned and released with minimal management effort [138]. Recent studies claim that cloud computing has left the hype phase behind and can already be considered the norm for IT [27]. As cloud adoption is still a kind of IT outsourcing, it also comes with security concerns from the customers. On the other hand, in certain scenarios there might also be benefits to security, since cloud service providers (CSPs) enjoy economies of scale in terms of security as well. Therefore, they are able to hire security specialists and thereby achieve a higher security level than most client companies would with an in-house data center [77, 109]. In either case, it is a challenging task for customers to assess the security of cloud service providers and to select the most secure one.
The remainder of this chapter is structured as follows:
• Sect. 3.1 focuses on the risk assessment for small and medium enterprises, in particular energy
providers.
– Sect. 3.1.1 reports on the background and elicits requirements for a tool supporting
them with information security [46, 162] (cf. Sect. B.2, B.10).
– Sect. 3.1.2 describes certain aspects of the developed tool, namely an inter-organizational security
platform [189] (cf. Sect. B.3) and a lightweight risk assessment tool [188] (cf. Sect. B.9).
• Sect. 3.2 focuses on risk assessment for large enterprises.
– Sect. 3.2.1 describes an approach to compare the security levels of subsidiaries of large
enterprises [187] (cf. Sect. B.4), with a refinement about different aggregation functions for
security maturity levels defined for multiple assets [186] (cf. Sect. B.5).
– Sect. 3.2.2 proposes an approach for the risk assessment of mobile apps [92] (cf. Sect. B.6).
• Sect. 3.3 discusses security assessments for customers of cloud service providers.
– Sect. 3.3.1 first sketches the best practice in companies for secure cloud service provider
selection [155] (cf. Sect. B.7) to motivate the proposal of a semi-automated approach for secure
cloud service provider selection [161] (cf. Sect. B.8).
– Sect. 3.3.2 proposes a model for security assessments of cloud service providers [24] (cf.
Sect. B.1).
The respective papers can be found in Appendix B and the author’s contribution for each paper is indicated
in Tab. ?? on page ??.
3.1 Security Risk Assessment and Security Management for Small
and Medium Energy Providers
Critical infrastructures are of vital importance to a nation’s society and economy because their failure would
result in sustained supply shortages causing a significant disruption of public safety and security. In 2016,
malicious software in nuclear power plants was reported [201], followed by further reports, e. g. warnings
about hackers attacking German energy providers in 2018 [194].
With the European Program for Critical Infrastructure Protection (EPCIP) and its counterpart, the
German critical infrastructure protection program KRITIS [112], governments aimed to lay the groundwork
for more secure critical infrastructures. The new regulation challenged critical infrastructure providers in
many ways. Besides general challenges such as understanding the definitions and requirements (cf. [28,
p. 150ff]) and challenges from other areas, e. g. coping with the energy transition, energy providers needed to
register a contact point, establish processes to report security incidents, implement security requirements
following a security catalog (§11 Abs. 1a respectively 1b EnWG), and establish and certify an information
security management system (ISMS).
The SIDATE project [49] aimed to support small and medium energy providers in coping with the security
requirements. Since most of the small and medium sized German energy providers were in a similar situation
and not directly competing against each other, the idea was to support their collaboration using
a web-based platform. For that purpose, we conducted a survey among all German energy providers and
elicited criteria from energy providers on how such a platform should be designed [46].
3.1.1 Requirement Elicitation
Due to the new regulations in Germany, energy providers are required to obtain IT security certificates.
Especially small and medium-sized energy providers struggle to fulfill these new requirements. To get a
general idea of how they could be supported, we conducted a survey among all energy providers and held a
series of workshops to discuss their needs and how a tool supporting them needs to be designed.
Survey on Establishment of an Information Security Management System
The investigation focused on the introduction of an information security management system (ISMS) and
how German energy providers deal with information security in general. For that purpose, we surveyed
German energy providers in autumn 2016 when they had just learned about the requirements and in autumn
2018, two years later and roughly half a year after they had to provide the certification of their ISMS. The
new regulation offered the chance to take a closer look at a large number of energy providers introducing an
ISMS to get ready for certification at the same time [162].
The questionnaires covered sections about general information, organizational aspects, ISMS and ISMS
maintenance (only in 2018), the office IT, and networking and organizational aspects of the industrial
control system of the energy providers, which are reported in more detail in technical reports [47, 48, 50, 156].
In 2016 (2018), we (physically) mailed to all 881 (890) energy providers listed in August 2016 (September
2018) [31] by the Federal Network Agency (German: Bundesnetzagentur or BNetzA), the German regulatory
office for the electricity, gas, telecommunications, post and railway markets [30]. We received a total of 61 (84)
replies, resulting in a response rate of 6.9% (9.4%).
We asked the energy providers about the number of supply points and the number of employees, as shown
in Fig. 3.1, to get an idea of their size. We checked with Spearman’s rank correlation for similarities
between the number of employees and the number of supply points and found for 2016 (2018) ρ-values of
0.725 (0.496) with p-values lower than 10⁻⁵, indicating a strong (moderate) relationship. Therefore, we argue
that it is sufficient to consider the number of supply points and refer in the following to the size of an energy
provider following the definition above. A comparison with the study by Müller et al. [135] shows that we
had more small energy providers than they had.
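The size comparison above relies on Spearman’s rank correlation, i. e. the Pearson correlation of the ranks. A minimal, dependency-free sketch of that check could look as follows; the employee and supply-point numbers are invented for illustration and are not the survey data:

```python
from statistics import mean

def rank(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data: number of employees vs. number of supply points
employees     = [12, 45, 7, 300, 80, 25, 150, 9]
supply_points = [900, 4000, 500, 30000, 2000, 7000, 16000, 700]
print(round(spearman_rho(employees, supply_points), 3))
```

A ρ close to 1 indicates the strong monotone relationship reported above; in practice a statistics library would also provide the p-value.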
(a) Number of Employees (b) Number of Supply Points
Figure 3.1: Size of the participating energy providers [162]
We also tested the similarity of the data for 2016 and 2018 with a two-one-sided t-test (TOST) [191] for
the energy providers’ size, and since for ε = 0.5 the p-value of 0.027 was within the 95% confidence interval,
we assume that the participating energy providers were similarly distributed within both surveys.
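The TOST procedure runs two one-sided tests against the equivalence bounds ±ε and takes the larger of the two p-values. The sketch below illustrates that logic; it uses a large-sample normal approximation instead of the t distribution, and the two samples are invented, not the survey data:

```python
from math import erf, sqrt
from statistics import mean, variance

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_equivalence(x, y, epsilon):
    """Two one-sided tests for equivalence of means within +/- epsilon.
    Sketch only: the t distribution is approximated by the standard normal."""
    d = mean(x) - mean(y)
    se = sqrt(variance(x) / len(x) + variance(y) / len(y))  # Welch-style SE
    p_lower = 1.0 - norm_cdf((d + epsilon) / se)  # H0: d <= -epsilon
    p_upper = norm_cdf((d - epsilon) / se)        # H0: d >= +epsilon
    return max(p_lower, p_upper)

# Invented, standardized size scores for two survey years
x = [0.2, 0.1, 0.0, 0.3, 0.4, -0.1, 0.2, 0.1]
y = [0.0, 0.2, 0.1, -0.2, 0.3, 0.2, 0.0, 0.1]
print(tost_equivalence(x, y, 0.5) < 0.05)  # small p-value: equivalent within ±0.5
```

Rejecting both one-sided null hypotheses supports the claim that the two samples are similarly distributed within the chosen margin.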
We also asked about the reasons why the energy providers were implementing an ISMS and, in 2018, additionally
about the perceived benefits and the future expectations regarding the ISMS (cf. Fig. 3.2). Unsurprisingly,
legal requirements were the largest factor. However, it also showed that in 2016 most of them were also
expecting a security improvement, which even more of them reported as a benefit in 2018. The result is an
indication that the German critical infrastructure protection program at least succeeded in making the energy
providers implement an ISMS. Most likely, most of the energy providers would not have implemented one
without being forced.
In order to get an idea of the status quo of the implementation of ISMS, we asked for each of the 18
phases whether the phase was finished, begun, planned or not yet planned (cf. Fig. 3.3). Given the regulation, it
again came as no surprise that almost all energy providers had started in 2016 and most of them were
finished in 2018. However, we were also aware that some of the small energy providers spent quite some
effort on demonstrating that they do not fulfill the definition of a critical infrastructure and therefore
need neither to implement an ISMS nor to obtain a corresponding certificate. This is in line with Müller et al.
[135] and once more confirms that legal obligations were the main driver to implement an ISMS. Further
results showed that the energy providers as a whole overestimated the needed duration for the implementation
by roughly 20%. Furthermore, most of them only planned external support for the implementation of the
ISMS but not for running it [156].
1 Fulfilling Legal Requirements
2 Improving Information Security
3 Better Representation of IT Processes
4 Better External Representation of IT Processes
5 (Re-)Structuring of Relevant Business Processes
(a) Top 5 Reasons 2016 + Benefits and Expectations 2018
1 Legal Requirements
2 Business Processes are Depending on IT
3 Increased Threats
4 Public Discussion on IT-Security
5 Outsourcing of Services
(b) Top 5 Reasons 2018
Figure 3.2: Motivation, Benefits and Expectations to Implement an ISMS [162]
(a) 2016 (b) 2018
1 Target Setting and Scoping
2 ISMS Policy Development
3 Overview of the existing security architecture
4 Performing Risk Analyses
5 Elaboration of Catalog of Security Measures
6 Design of the New Security Architecture
7 Description of Quality and Risk Manag. Interf.
8 Development of a Migration Process
9 Elaboration of the Req. Documentation
10 Structure of the Security Organization
11 Implementation of Management Processes
12 Formulation of Security Architecture (Rules)
13 Measures of Sensitization and Training
14 Implementation of Security Measures
15 Final Project Scope Analysis
16 Preparation for Certification Auditing
17 Execution of Business and Organizational Audits
18 Incident-Management Support
Figure 3.3: Status of each ISMS implementation phase [162]
Requirement Elicitation for Inter-Organizational Security Platform
Besides the surveys, we also gained some insights from workshops within the SIDATE project [49] with personnel
from energy providers responsible for IT security [46, 189]. Since most of the German energy providers
were in the same situation and not directly competing against each other, the idea was to support
their collaboration using a web-based platform. For that purpose, we elicited criteria from energy providers
in the workshops on how such a platform should be designed [46].
We conducted three two-hour workshops with different stakeholder groups, with a total of eleven experts
from eight energy providers. Most participants were IT security officers or IT managers from energy
providers, but representatives from national interest groups were present as well.
In the first workshop, we elicited the platform’s requirements and the experts’ expectations in a moderated
discussion. The following modules were considered helpful by the experts: a wiki, a forum, a questions and
answers module, a glossary, training modules for further education for security officers and other employees,
checklists, a place to exchange documents, benchmarks, security assessment modules and a general module
to support the launch of an ISMS.
Because the platform processes highly sensitive data, data privacy requirements had a very high priority
for the stakeholders, e. g. having different user interface views to anonymize individuals and organizations
towards external experts, and having restricted and moderated access for new members. Furthermore, the
integration into existing workflows played a central role, e. g. the self-assessment should provide individual
checklists and tools according to ISO/IEC 27001 [104] and should contribute to the internal information
security audit. Besides that, the general usability of the platform was mentioned as an essential requirement.
Based on these results from the first workshop, a design workshop with eight members from the project
partners was conducted. As a result, the most relevant modules for the energy providers were identified and
several mock-ups visualizing the platform’s functionalities were sketched (cf. Fig. 3.4):
• A security assessment module, allowing the energy providers to assess and benchmark their security
level.
• A security measures module, providing information and recommendations to energy providers (including
the practical experiences of other energy providers) about security measures they can implement in
order to strengthen their IT security.
• A question and answer module, allowing the energy providers to share their experiences both with
other energy providers and with external experts.
In the third workshop, the developed mock-ups were presented and the experts were asked for mandatory
and nice-to-have requirements the platform had to fulfill to be usable for them. We clustered the answers into
three major categories: (1) platform members and confidentiality/data privacy, (2) integration into existing
workflows, and (3) general usability of the platform.
Platform Participants and Data Privacy
As already discussed during the first workshop, participants had
essential concerns about privacy with respect to the sensitive IT-security related data they would share
across the platform. Interestingly, these concerns did not refer much to the platform itself or its operator
but to other members. Participants were most worried about the participation of external experts like
information security consultants or lawyers. Even if they saw an advantage in the qualified and skilled
feedback from such persons, they were afraid that the platform could be misused for advertising purposes and
that non-reliable members could use the content of individual energy providers to identify them and exploit
possible security flaws. This resulted in the following list of requirements:
R.1
The platform should support restricted and moderated access for new members; members need to
be validated by the platform operator and have to agree to suitable terms of use in order to get
access.
R.2
Some participants of the platform should not be able to see the corresponding author of content
within the platform, e. g. external experts should not be able to identify energy providers. A
reputation system should allow the energy providers to assess the quality of a contribution or the
reputation of an external expert.
R.3
Alternatively, no third parties such as external experts should be allowed on the platform, but
energy providers should be able to mark content as ‘expert approved’ to allow the indirect passing
on of experts’ assessments and opinions.
Integration into Existing Workflows
The effort necessary for using the platform should not exceed the
potential benefit, and the platform should be integrated into the users’ existing workflows. This resulted in
the following list of requirements:
Figure 3.4: Portal Mock-up: Security Measures Module [46]
I.1
The self-assessment module should provide individual checklists and tools that help the users to
implement required information security measures, i. e. the fulfillment of statutory provisions such
as the implementation of an ISMS according to ISO/IEC 27001 [104].
I.2
The self-assessment should contribute to internal information security audits, e. g. the regular
validation of measures and processes.
I.3
The export of results from the self-assessments, e. g. for internal reports or other processes, should
be possible.
General Usability of the Platform
The requirements focusing on usability were very general and not
specific to the platform, such as expectations that the platform should be well-structured and maintained,
and should have a moderator who ensures that new topics/questions are created in the right section and
prevents duplicates.
3.1.2 Tool-Support
Based on the requirements elicitation, two different tools were developed: an inter-organizational security
platform [189] and the risk assessment tool LiSRA [188]. LiSRA is connected to the platform
via a REST API. This way, LiSRA can make use of the data within the platform, and users of the platform do
not need another tool or website to make use of it.
Inter-Organizational Security Platform
In the previous section, we elicited requirements for an inter-organizational security platform with the idea
that energy providers can exchange expertise when working on similar problems [51, 192]. We do not
discuss the requirements regarding platform participants and data privacy any further, since these were either
organizational processes outside the platform or can be covered by the underlying framework Liferay.1
The requirements regarding the integration into existing workflows were fulfilled in the following way.
The self-assessment was based on maturity levels for security controls in ISO/IEC 27001 (I.1), as shown
in Fig. 3.5, but can be further improved by a more specific standard (e. g. ISO/IEC 27019 [105] for the
electric sector). By referring to the security maturity levels of the COBIT framework [101], which are also
defined in ISO/IEC 15504 [103], the self-assessment can also contribute to internal security audits (I.2).
Figure 3.5: Portal: Input Section [189]
Individual checklists and tools were covered by a document exchange module (I.1), which allows the members
to exchange and discuss documents. In order to keep the information up to date, a portlet was developed,
which asked users for missing or outdated maturity levels as shown in Fig. 3.6. The export function of the
security maturity levels was implemented via a REST API (I.3).
Figure 3.6: Portal: Modules for Updates to Maturity Levels [189]
Figure 3.7 shows the benchmark functionality. Besides an overview for each of ISO/IEC 27001’s
control (sub)groups, it shows the maturity levels of the other users of the platform along with a scale on which the
user’s own maturity level is compared to the others. Note that the current security maturity level can also
1 https://www.liferay.com/
easily be changed here, without the need to switch to the input section. In contrast to the risk assessment tool
described in the next section, there is no further evaluation here besides the comparison to other users.
Figure 3.7: Portal: Benchmarking Section [189]
Risk Assessment Tool: LiSRA
As already discussed in the previous section, the lightweight security risk assessment tool (LiSRA) [188] is
integrated via a REST API into the inter-organizational security platform, as shown in Fig. 3.8. The bar at the
top shows the calculated risk value in the range from 0 to 1. LiSRA is based on attack trees, and therefore
each attack scenario can be examined in the lower half of the user interface in the form of a risk value per scenario.
The corresponding attack trees can also be investigated in detail. On the lower right is a list of controls
answered by the user and relevant for the investigated attack tree.
LiSRA is designed with a particular focus on the special needs of SMEs. Therefore, a key requirement is
to mainly use already existing data and to keep the user’s input to a minimum while ensuring good analysis
results at the same time. To meet the requirements, LiSRA expects input from both users and domain experts,
who may be associated with the platform provider. The general concept consists of four phases and is
illustrated in Fig. 3.9:
Phase 1: Expert Input
LiSRA assumes that organizations within a particular domain are basically exposed to
similar attacks; e. g. the National Electric Sector Cybersecurity Organization Resource (NESCOR) [137]
lists domain-specific attack scenarios for the electric sector. Therefore, in the first phase domain
experts initially set up the framework for particular domains (e. g. the electric sector) by constructing
parameterized attack trees that are linked to security controls. In a later step, the user can select the
domain in which their organization operates so that the risk assessment only considers attack trees that
are relevant for the respective domain.
Phase 2: User Input
The only user inputs required are the maturity levels of the organization’s security
controls. They are used to model the implemented security practices of the organization in a lightweight
manner. For many organizations this causes only little extra effort because they have already collected
this information, e. g. within their ISMS. As already mentioned, LiSRA is integrated into the
Figure 3.8: Portal: Risk Assessment Section by LiSRA [188]
Figure 3.9: LiSRA: Overview [188]
inter-organizational security platform, and thus can benefit from the platform’s update modules (cf.
Fig. 3.6) reducing the need to bother the user with a long list of maturity levels to fill in.
Phase 3: Risk Computation
The general risk computation process is illustrated in Fig. 3.10. Before the
risk computation can start, the control dependencies are resolved. This is needed because many of
the controls depend on each other, so that their effect cannot be assessed independently [193].
Therefore, the effective maturity levels may be lower than the actual maturity levels due to control
dependencies. Then, the total risk is derived from scenario risks that are calculated based on both the
probability of adverse impact and its severity. The probability of adverse impact is the probability
that an attack is initiated and succeeds. Both factors are calculated using attack trees and depend on
the chosen attacker type, e. g. a script kiddie goes for the cheapest attack while a nation-state attacker
chooses the attack with the highest chance of success. The probability of attack success depends not only
on the attacker, but also on the maturity level of the assigned controls. It is subsequently aggregated up
the tree until the final attack goal is reached.
Phase 4: Recommender Application
The next step, once the risk has been computed, is to identify the
most beneficial security control to improve by increasing its maturity level. One option for that is to
manually inspect the results of the risk analysis: if the total risk indicates the need for action, one can
go through the list of scenarios to find the high-risk scenarios and identify related security controls.
However, LiSRA also offers a recommender application that automates the inspection process to
find the most beneficial security control. By most beneficial, we mean the most effective and most
cost-efficient security control. Most existing approaches evaluate new security activities in isolation
from security activities already in place, and they ignore that multiple overlapping activities can be
of complementary, substitutive, or dependent nature, which leads to an over-investment in security
measures [21]. LiSRA explicitly addresses both aspects without bothering the user. Transparency
is of crucial importance for the acceptance of recommender systems such as LiSRA; it describes to
which extent users understand why a particular item is recommended to them [171].
Therefore, besides the recommendations themselves, the rationale behind the recommendations is also
presented to the user by a graphical explanation interface.
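The aggregation of attack success probabilities up an attack tree (Phase 3) can be illustrated as follows. This is a deliberately simplified sketch, not the published LiSRA model: the linear discount per maturity level, the AND/OR combination rules and all numbers are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                  # "leaf", "and", "or"
    base_success: float = 0.0  # leaf: success chance against an unprotected target
    control_maturity: int = 0  # leaf: maturity level (0..5) of the assigned control
    children: List["Node"] = field(default_factory=list)

def p_success(node: Node) -> float:
    if node.kind == "leaf":
        # Assumption: each maturity level reduces the success chance linearly.
        return node.base_success * (1 - node.control_maturity / 5)
    probs = [p_success(c) for c in node.children]
    if node.kind == "and":  # all sub-attacks must succeed
        out = 1.0
        for p in probs:
            out *= p
        return out
    # "or": the attack succeeds if any branch succeeds
    out = 1.0
    for p in probs:
        out *= (1 - p)
    return 1 - out

def scenario_risk(tree, p_initiation, severity):
    """Risk in [0, 1]: probability of adverse impact times its severity."""
    return p_initiation * p_success(tree) * severity

# Toy scenario: (phishing AND weak credentials) OR an unpatched service.
tree = Node("or", children=[
    Node("and", children=[
        Node("leaf", base_success=0.6, control_maturity=2),  # awareness training
        Node("leaf", base_success=0.5, control_maturity=4),  # password policy
    ]),
    Node("leaf", base_success=0.4, control_maturity=3),      # patch management
])
print(round(scenario_risk(tree, p_initiation=0.8, severity=0.7), 3))
```

The total risk over all scenarios would then be derived from such per-scenario values; how they are combined is a design decision of the concrete model.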
Figure 3.10: LiSRA: General Risk Computation Process [188]
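The recommender’s core idea, trying each maturity-level improvement and ranking controls by risk reduction per unit cost, can be sketched as follows. The toy risk function, control weights and costs are invented for illustration and, unlike LiSRA, ignore control dependencies:

```python
def total_risk(maturity):
    """Toy risk model over three controls with maturity levels 0..5."""
    base = {"awareness": 0.5, "patching": 0.3, "backup": 0.2}
    return sum(w * (1 - maturity[c] / 5) for c, w in base.items())

def recommend(maturity, cost):
    """Rank controls by risk reduction per unit cost for a one-level upgrade."""
    current = total_risk(maturity)
    ranked = []
    for control, level in maturity.items():
        if level >= 5:
            continue  # already at the highest maturity level
        trial = dict(maturity, **{control: level + 1})
        reduction = current - total_risk(trial)
        ranked.append((reduction / cost[control], control))
    ranked.sort(reverse=True)
    return ranked

maturity = {"awareness": 1, "patching": 3, "backup": 5}
cost = {"awareness": 2.0, "patching": 1.0, "backup": 3.0}
for score, control in recommend(maturity, cost):
    print(control, round(score, 3))
```

Presenting both the ranking and the per-control risk reductions to the user corresponds to the transparency requirement discussed above.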
LiSRA was implemented in Java, and the evaluation showed that it is robust against logical transformations of
the underlying attack trees, performs well, and is perceived as useful by a focus group. However, it
has some limitations. On the one hand, it does not consider adaptive or multiple-shot adversaries. On the
other hand, it is particularly designed for small and medium enterprises, where a certain maturity level for a
security control is consistent across the whole scope. Larger organizations might have different maturity levels
for security controls in different security zones or even for different assets. Subsequent work adds to the
visualization of the attack trees [190].
3.2 Security Risk Assessment for Large Enterprises
As we have just seen when discussing the limitations of LiSRA, small and medium enterprises might have
different needs than large enterprises. Therefore, within this section, we consider two different challenges: In
Sect. 3.2.1, a larger company with several subsidiaries wants to compare its subsidiaries’ security levels.
Section 3.2.2 discusses risk management for mobile devices, namely for smartphone apps.
3.2.1 Comparison of Subsidiaries’ Security Levels in E-Commerce
In general, the comparison of subsidiaries’ security levels could be done by treating them as different
companies and applying a separate LiSRA instance to each of them. However, LiSRA focuses on the risk
assessment of a single enterprise along with recommendations for improvements, and the basic problem
of comparing subsidiaries’ security levels is closer to a multicriteria decision making problem. A natural
approach for that is the analytic hierarchy process (AHP) [179, 180], which makes the prioritization of
controls – implicitly already included in LiSRA – explicit. The AHP allows a structured comparison of
security controls’ maturity levels and also provides a ranking. The AHP breaks down a complex evaluation
problem into manageable sub-problems by using pairwise comparisons. The comparisons use a predefined
scale to assign numerical values according to different preferences. Based on the pairwise comparisons, a
square matrix is derived and its normalized eigenvector is used as a numerical measure of the preferences.
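The AHP step just described can be sketched as follows. The pairwise judgments are invented for illustration, and the principal eigenvector is approximated by the common geometric-mean method rather than an eigensolver:

```python
from math import prod

def ahp_priorities(matrix):
    """Priorities from a reciprocal pairwise-comparison matrix
    (geometric-mean approximation of the normalized principal eigenvector)."""
    n = len(matrix)
    gm = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Saaty-scale judgments for three hypothetical security controls:
# A is 3x as important as B and 5x as important as C; B is 2x C.
pairwise = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
weights = ahp_priorities(pairwise)
print([round(w, 3) for w in weights])
```

The resulting weights correspond to the priority column of the evaluation: they sum to one and rank the controls by relative importance.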
Figure 3.11: AHP Applied to Security Controls in E-Commerce [187]
Again, we select security controls based on ISO/IEC 27001 along with the COBIT framework, since
this data is already available for all subsidiaries in the ISMS system of the large enterprise. To not further
complicate the problem, we only consider e-commerce subsidiaries within the large enterprise to avoid the
need for cross-domain comparisons [187]. Figure 3.11 shows a part of the evaluation. The priority column
reflects the weight of each security control within its subgroup and of each subgroup as part of the whole.
The different results in the company columns stem from different maturity levels for the security controls.
Since the maturity levels are determined per asset within the subsidiaries, there are multiple maturity levels
for each security control. The most natural approach would be to extend the AHP by one level and consider
assets as another subcategory of the security controls, which would also allow prioritizing them. However,
the AHP only works with a fixed set of categories, leading to problems if only some companies have a certain
asset, e. g. a file server. One can solve the problem by only considering assets which are common to all
subsidiaries. However, this would draw only a limited picture. As a solution, an asset class for all remaining,
unspecified assets could be introduced, but then again the question is which aggregation should be used.
Therefore, we investigated different aggregation types [186]. Unfortunately, the process of aggregating
maturity levels is neither well documented nor comprehensively studied or understood (from a psychological
perspective), so most of this labor is done by rule of thumb [205]. We investigated four aggregation types
– namely the minimum, maximum, average and median – to compare their different potential impacts on
decision making.
Regarding average and median, strengths and weaknesses have been discussed in the scientific literature.
Averages are strongly influenced by extreme values. Although the scale has only a limited range from zero
to five, in this context this could lead to an over- or underestimation of control maturity. Minimum and
maximum further alleviate potential misrepresentations of control maturity, as they provide the numerical
range of scores and expose potential outliers [23].
It is also worth briefly considering the different optimization strategies information security managers might
follow if they knew the aggregation method: Using the minimum would reward improving only the worst
values. Seen as the weakest link of a chain, this could make sense in some scenarios. Using the maximum
rewards improving only the best value, or doing nothing if it is already at five, which is probably not desirable.
Using the average rewards improving any value; most likely the easiest or cheapest ones are increased first.
Using the median may lead to a markedly two-fold security level, with half of the services being insecure and half
of the services being secure.
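The four aggregation types can be compared on a small, invented set of per-asset maturity levels for one control; note how the maximum hides the weakly protected asset that the minimum exposes:

```python
from statistics import mean, median

def aggregate(levels, how):
    """Aggregate per-asset maturity levels (0..5) of one security control."""
    return {"min": min, "max": max, "avg": mean, "median": median}[how](levels)

# Maturity levels of the same control, assessed per asset (illustrative):
levels = [2, 5, 3, 5, 1]
for how in ("min", "max", "avg", "median"):
    print(how, aggregate(levels, how))
```

Here the maximum (5) suggests a fully mature control although one asset sits at level 1, which is exactly the distortion discussed above.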
We also investigated the different aggregations with real-world data, as shown in Tab. 3.1. One can notice
that the maximum differs most from all other aggregations. Considering also the different strategies, we
therefore recommend using the average or the minimum as aggregation method.
Aggregation/Proportion Company1 Company2 Company3 Company4 Company5
Average 16.7% (4.) 15.4% (5.) 19.8% (1.) 18.3% (3.) 19.5% (2.)
Median 16.7% (4.) 16.3% (5.) 19.8% (1.) 18.8% (2.) 18.1% (3.)
Minimum 16.6% (4.) 14.6% (5.) 21.3% (1.) 18.7% (2.) 18.5% (3.)
Maximum 17.5% (2.) 15.6% (5.) 16.1% (4.) 16.2% (3.) 24.2% (1.)
Table 3.1: AHP Applied to Different Aggregation Types for Security Controls for Multiple Assets [186]
3.2.2 Security Risk Management for Smartphone Apps
With a rising number of apps for smartphones and a great diversity of developers ranging from spare-time
developers to large companies, it is more difficult than ever to assess the risk of a certain app. Despite
approaches to raise their awareness [127], spare-time developers, but also large enterprises, rely on libraries
from other parties, which often spy on the users. However, none of the app stores offers a dedicated security
or privacy score for such apps. With policies such as “bring your own device” (BYOD), the lines
between personal use and use for work are blurred. BYOD is an attractive employee IT ownership model
that enables employees to bring and use their personal devices in enterprises. It provides more flexibility
and productivity for the employees, but may impose some serious privacy and security risks since personal
matters are mixed with work. One of the arising problems of BYOD is that, in order to benefit from it, the IT
security governance may not be as strict as it could be for a smartphone only used for work. But even if
users accepted allow and block lists, the decision which apps to block would need to be made by the
IT department. Decisions should be made as a trade-off between the necessity of the app for business (or
personal) purposes and the risk with regard to enterprise assets.
For that purpose, we propose Enterprise Smartphone Apps Risk Assessment (ESARA) as a novel
framework aimed at supporting enterprises in protecting their data against adversaries and unauthorized
access [92]. ESARA makes use of different approaches from the literature and combines them with an app
behavior analyzer and an app perception analyzer [91] to get a more realistic and holistic picture of installed
apps. Requirements for the development of ESARA were:
(E.1) Reuse of existing approaches
(E.2) Limiting the necessary effort (since there is a large number of apps)
(E.3) Scalability, in the sense that it should be easy to rely on external services and to allow several companies
to share the same infrastructure
(E.4) Independence from app markets, since even after several years none of them offers a decent security or
privacy score
(E.5) Involving employees for feedback when using an app
(E.6) Involving employees in decisions
Figure 3.12 shows an overview of the proposed architecture of ESARA, which consists of three main
modules: the employee’s smartphone, a server and the enterprise IT department. On the employee’s device,
an app is running that analyzes the behavior of a certain installed app and ultimately communicates the
results to the employee (behavior analyzer). To respect the employees’ privacy while trying to identify
security-intrusive apps, only the apps’ permission requests are analyzed, which is not as intrusive as run-time
monitoring, where one could conclude what an employee was doing. Furthermore, information is only sent
to the server when confirmed by the user, optionally along with reviews regarding each privacy- and security-
invasive activity that the employees observe. The behavior analyzer also stores the employee’s security and
Security Risk Assessment for Large Enterprises
[Figure: smartphone (GUI, behavior analyzer, employee-based perception database), server/outsourced service (malware checker, vulnerability checker, perception analyzer, user-based perception database) and IT department (security policies, risk assessment, recommendation generator), connected by flows of reports, warnings, classified reviews and black/white/grey lists to and from the employee and the app market.]
Figure 3.12: ESARA: Architecture Overview [92]
privacy perception in the perception database and receives results regarding the perception analysis and risk
assessment from the other two modules.
The server or an outsourced service is supposed to check apps for vulnerabilities and malicious activities by running a malware and vulnerability scanner; it therefore does not collect any data from employees and also avoids deploying these checkers on resource-constrained smartphones [32]. Diverse vulnerability checkers are available and fulfill the requirements [123, 124, 141, 142, 173]. This server/service is also responsible for analyzing employees' and other users' perception (e. g. from the app stores) of the security and privacy behavior of apps (perception analyzer). For that purpose, natural language processing (NLP) techniques (e. g. tokenization, stemming and removing stop words) along with sentiment analysis techniques are used to find both positive and negative reviews with regard to privacy and security aspects. If security policies are put in place, black, white and gray lists can also be distributed via this server/service.
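The review-filtering step of the perception analyzer can be sketched in a few lines. The sketch below is illustrative only and uses crude keyword matching; the actual pipeline described above employs proper tokenization, stemming, stop-word removal and sentiment analysis, and all term lists here are hypothetical.

```python
# Minimal sketch of the perception analyzer's review filtering
# (illustrative; the real pipeline uses NLP tokenization, stemming,
# stop-word removal and sentiment analysis techniques).
import re

SECURITY_TERMS = {"privacy", "permission", "tracking", "leak", "secure"}
NEGATIVE_TERMS = {"bad", "worried", "scary", "steals", "invasive"}
POSITIVE_TERMS = {"good", "safe", "trustworthy", "transparent"}

def classify_review(review):
    """Classify an app-store review with respect to security/privacy."""
    tokens = set(re.findall(r"[a-z]+", review.lower()))  # crude tokenization
    if not tokens & SECURITY_TERMS:
        return "irrelevant"  # review does not touch security/privacy at all
    if tokens & NEGATIVE_TERMS:
        return "negative"
    if tokens & POSITIVE_TERMS:
        return "positive"
    return "neutral"

print(classify_review("This app is invasive, worried about my privacy"))  # negative
print(classify_review("Nice game, runs smoothly"))  # irrelevant
```

Only the reviews classified as security-relevant would then be forwarded to the IT department, which keeps the manual workload per app small.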
The enterprise IT department makes the final decision about which app to place on which list – either manually or automatically by defining certain rule sets. For that purpose, reports written by the employees along with the automated analyses from the perception analyzer are used.
Table 3.2: Coverage of Top 10 Mobile App Risks [169] by ESARA

No.  Risk                                       Malware  Vuln.    Behavior  Perception
                                                Checker  Checker  Analyzer  Analyzer
 1   Activity monitoring and data retrieval     X        –        (X)       (X)
 2   Unauthorized dialing, SMS, and payments    X        –        X         (X)
 3   Unauthorized network connectivity          (X)      –        –         (X)
 4   UI impersonation                           –        –        –         (X)
 5   System modification                        X        –        –         X
 6   Logic or time bomb                         X        (X)      –         –
 7   Sensitive data leakage                     (X)      X        X         X
 8   Unsafe sensitive data storage              –        X        –         (X)
 9   Unsafe sensitive data transmission         –        X        –         X
10   Hardcoded passwords/keys                   X        X        –         –
Security Management
For the evaluation of ESARA, besides the evaluation of the behavior and perception analyzer, we checked its coverage of the most prevalent mobile app risks taken from Veracode's top 10 mobile app risks [169] and investigated the robustness of ESARA in the assessment and detection of each individual risk. Table 3.2 also connects ESARA's components with the mobile app risks, demonstrating that ESARA covers all the listed risks, nearly all of them by at least two components. Besides that, all requirements defined above are considered. As future work, a real implementation along with testing outside of a laboratory environment and user studies to investigate the users' acceptance are planned.
3.3 Cloud Service Provider Security for Customers
Cloud computing has been emerging as the new computing paradigm over the last ten years, enabling consumers to flexibly purchase computing power and storage capacity on demand, conveniently and cost-efficiently, from specialized providers. Recent studies claim that cloud computing has left the hype phase behind and can already be considered the norm for IT [27]. However, besides the potential economic benefits of cloud adoption, there are also security concerns, as it represents a form of IT outsourcing and exhibits technological peculiarities concerning size, structure and geographical dispersion [120]. As a consequence, cloud customers are often afraid of losing control over their data and applications and of being exposed to data loss, data compliance and privacy risks. On the other hand, there may also be benefits to security in the cloud, since a cloud service provider (CSP) enjoys economies of scale in terms of security as well, being able to invest more and thereby achieve a higher security level on a much larger scale than most client companies would with an in-house data center [77, 109].
So one would expect that a cloud customer will most likely engage with a CSP demonstrating a high level of security. In practice, however, there are two challenges. First, selecting the most secure CSP is not straightforward. With the outsourcing, the customer also delegates the implementation of security controls to the CSP. From a CSP's view, its main objective is to make profit. Therefore, it can be assumed that the CSP does not want to invest more than necessary in security. In addition to the different objectives of customer and CSP, there is the problem that security – compared to other provider attributes like cost or performance – is not easily measurable and there are no precise metrics to quantify it [25].
The consequences are twofold. It is not only hard for the tenant to assess the security of outsourced services, it is also hard for the CSPs to demonstrate their security capabilities. Even if a CSP puts a lot of effort into security, it will be hard to demonstrate it to the customer, since malicious CSPs will pretend to do the same. This imbalance of knowledge has long been known as information asymmetry [6] and, together with the cost of cognition to identify a good provider and negotiate a contract [207], has been widely studied in economics.
The contribution of this section is twofold. First, we present an investigation of how companies choose a CSP [155] and propose a method to support the selection of a secure provider [161]. Second, we propose a model to support the systematic analysis of attacks on cloud customers [24].
3.3.1 Secure Cloud Provider Selection
In this section, we first report about the investigation of the role of security in cloud service provider selection.
We then briefly describe the Consensus Assessment Initiative Questionnaire (CAIQ), a questionnaire from
the Cloud Security Alliance (CSA) to determine the security of CSPs. Based on the questionnaire we then
propose a method to compare the security of multiple CSPs.
Decisive Factors in Cloud Service Provider Selection
We investigated organizations' practices when selecting a CSP and expected to verify the importance of security. Furthermore, we expected customers as well as CSPs to come up with security assurance methods to verify or, respectively, demonstrate their security efforts. For that purpose, we interviewed practitioners from eight German companies who deal with CSP selection [155].
The respondents were asked about criteria and requirements for the selection of a CSP instead of being directly asked about the role of security. While security was rarely mentioned first, it was sooner or later addressed in all the discussions. Mentioned selection criteria were costs, size of the provider, and ease of use.
Others already pointing in the direction of security were trust, compliance, and confidentiality of their users' data. The participants also gave insights into the processes of CSP selection within their organizations. Some were using multiple CSPs and choosing them per project or task, for some the provider decision was made on a higher hierarchical level, and several respondents admitted that the choice of a CSP was made by chance, e. g. simply choosing any convenient provider to make the first steps in the cloud, because a developer already had some experience with it, or just because the company had a voucher.
We further investigated the moderate interest in security and found that most respondents were more focused on mitigating risks, e. g. by regarding the location of a provider as an indication of trustworthiness or considering the criticality of data placed in the cloud in relation to the security level. Further answers revealed that some respondents assumed many users trust their providers without any proof, in particular when sticking with a large CSP such as Amazon; they referred to the "IBM effect", stating that "no one ever got fired for buying IBM" applies to Amazon's AWS nowadays. This supports the assumption that the requirement on security is extrinsically motivated by compliance.
Another objective was to gain some insight into whether and how the respondents verified the security levels of their CSPs. Here the respondents mostly named non-technical measures such as certification, (financial) audits checking the capability of a CSP to grant compensations, and contractual agreements. Besides that, a few respondents named security tests and two also presented their own risk evaluation or, respectively, questionnaire for the CSP. However, several respondents also expressed skepticism when talking about assurance, criticizing external auditors or service level agreements as toothless and pointing out that the need to control or verify everything despite having outsourced is unnecessarily costly.
In summary, the collected findings on the role of security in CSP selection were ambiguous. Security was never the first answer of the respondents and most of them could not provide specific security requirements. On the other hand, security as a requirement was present in all the discussions, and showed up particularly as availability and in rare cases as confidentiality. In the investigated sample we could rarely find any elaborate process of eliciting requirements and then coming to a rational decision which CSP to select. Instead, CSPs were chosen based on vouchers, by chance, or by the management because of established relationships. Another identified pattern was that companies often try to 'first get into the cloud' and then optimize costs and sometimes security (lift and shift). In all phases of the selection – the requirements elicitation, the decision-making process and the use of assurance technologies – there seems to be a gap between research and practice. This gap is quite common in a lot of areas [139].
Consensus Assessments Initiative Questionnaire
It seems that questionnaires to the CSPs are the only way of gathering information on the security of a CSP, in particular before a business relationship is established, which might allow testing certain security parameters. An obvious strategy for the cloud customer is to ask the CSP to answer a set of questions from a proprietary questionnaire and then try to fix the most relevant issues in the service level agreements. However, this makes the evaluation process inefficient and costly for the customers and the CSPs.
To standardize the requests and render them unnecessary, the Cloud Security Alliance [41], a non-profit organization with the aim of promoting best practices for providing security assurance within cloud computing, has provided the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ). The CCM [39] is designed to guide cloud vendors in improving and documenting the security of their services and to assist potential customers in assessing the security risks of a CSP.
Each control consists of a control specification which describes a best practice to improve the security of the offered service. These controls are mapped to other industry-accepted security standards, regulations, and controls frameworks, e. g. ISO/IEC 27001/27002/27017/27018, NIST SP 800-53, PCI DSS, and ISACA COBIT.
For each control in the CCM, the CAIQ [38] contains one or more associated 'yes or no' questions asking whether the CSP has implemented the respective control (see Tab. 3.3 for an overview of the CAIQ's structure and Fig. 3.13 for some example questions).
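Because each CAIQ question is a yes/no answer attached to a control domain, one straightforward way to condense the answers – shown here purely as an illustration, not as the broker's actual classification and scoring scheme from the method below – is the fraction of 'yes' answers per domain:

```python
# Illustrative per-domain scoring of CAIQ answers: the fraction of
# "yes" answers within each control domain (the broker's actual
# classification and scoring may be more elaborate).
def score_caiq(answers):
    """answers: list of (domain_id, 'yes' or 'no') tuples."""
    totals, yes = {}, {}
    for domain, answer in answers:
        totals[domain] = totals.get(domain, 0) + 1
        yes[domain] = yes.get(domain, 0) + (answer == "yes")
    return {d: yes[d] / totals[d] for d in totals}

# Hypothetical answers for two of the domains from Table 3.3:
answers = [("EKM", "yes"), ("EKM", "yes"), ("EKM", "no"), ("IAM", "yes")]
print(score_caiq(answers))  # e.g. {'EKM': 0.666..., 'IAM': 1.0}
```

Such per-domain scores can then feed the category mapping and ranking described in the selection method below.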
Figure 3.13: Consensus Assessments Initiative Questionnaire in Version 3.1 [39]
Table 3.3: CCM-Item and CAIQ-Question Numbers per Domain (version 3.1) [161]
ID Domain CCM CAIQ
AIS Application & Interface Security 4 9
AAC Audit Assurance & Compliance 3 13
BCR Business Continuity Management & Operational Resilience 11 22
CCC Change Control & Configuration Management 5 10
DSI Data Security & Information Lifecycle Management 7 17
DCS Datacenter Security 9 11
EKM Encryption & Key Management 4 14
GRM Governance and Risk Management 11 22
HRS Human Resources 11 24
IAM Identity & Access Management 13 40
IVS Infrastructure & Virtualization Security 13 33
IPY Interoperability & Portability 5 8
MOS Mobile Security 20 29
SEF Security Incident Management, E-Discovery & Cloud Forensics 5 13
STA Supply Chain Management, Transparency and Accountability 9 20
TVM Threat and Vulnerability Management 3 10
Total 133 295
Selection of a Secure Cloud Service Provider
Overall, the CAIQ (in version 3.1) contains 295 questions. As an experiment, we asked participants to decide for an imaginary scenario which of two CSPs offers the more suitable security [161]. In order to keep the experiment manageable, we only gave the participants a small subset (20 questions and answers) of the CAIQ. While most of them were able to correctly identify the more suitable CSP, the participants were not confident about the ease of use and usefulness of the manual approach – mainly because, even though they worked only on a subset, the prospect of doing the comparison with the full set of 295 questions, identifying the related questions and comparing the results seemed cumbersome to them.
Given that in practice more than two CSPs need to be compared, a more automated approach is necessary. As of March 2020, the Cloud Security Alliance listed 733 CSPs with 690 CAIQs and 106 certifications. (Note that some companies list the self-assessment along with their certification, while some do not provide their self-assessment once they have obtained a certification.)
32
Cloud Service Provider Security for Customers
For that purpose, we developed an approach that facilitates the comparison of the security posture of CSPs based on answers to the CAIQ (cf. Fig. 3.14). The three main actors involved are the tenant, the alternative CSPs, and a cloud broker. A cloud broker is an intermediary between the CSPs and the tenant, helping the tenant to choose a provider tailored to his (security) needs (cf. NIST Cloud Computing Security Reference Architecture [74]). The suggested approach consists of three phases:

Figure 3.14: CSP Selection [161]
1. In the setup, the broker has to assess the answers of the CSPs to the CAIQ (classification and scoring) and defines security categories which are mapped to the CAIQ's questions. The list of security categories is then provided to the tenant.

2. The tenants map their security requirements to the security categories provided by the broker and prioritize them. The tenants also need to provide a (rough) description of their service requests or lists of CSPs they want to compare.

3. The broker first selects candidate CSPs delivering the services requested by the customers or uses the provided list for a start. The broker then ranks the candidate providers based on the prioritization of security categories specified by the customers and the answers that the CSPs gave to the CAIQ. The list of ranked CSPs is then returned to the customers, who can use the list as part of their selection process, i. e. by manually comparing the top 5 candidates or using the result of the security comparison as a building block where other factors such as costs and performance are also considered.
The approach to rank CSPs adopts the Analytic Hierarchy Process (AHP) [179], similar to what we have described in Sect. 3.2.1 already. In the same manner, the output of the presented approach is a hierarchy where each CSP gets an overall score and a score for each security category, allowing the customer not only to use the overall result of the ranking, but also to trace each CSP's strengths and weaknesses. This allows the customers further reasoning or an adaptation of the requirements/scoring should they not be confident in the result.
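The ranking step can be conveyed with a deliberately simplified sketch: each CSP holds a score per security category and the tenant's priorities weight them. Note this is a plain weighted sum, not the full AHP pairwise-comparison procedure the method uses; all provider names, categories and numbers are made up.

```python
# Simplified sketch of the broker's ranking step. The actual method
# uses AHP pairwise comparisons; a normalized weighted sum over the
# tenant's category priorities conveys the basic idea.
def rank_csps(category_scores, priorities):
    total = sum(priorities.values())
    weights = {c: p / total for c, p in priorities.items()}  # normalize priorities
    overall = {
        csp: sum(weights[c] * s for c, s in scores.items())
        for csp, scores in category_scores.items()
    }
    # Highest overall security score first
    return sorted(overall.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical per-category scores (e.g. derived from CAIQ answers):
scores = {
    "CSP-A": {"encryption": 0.9, "compliance": 0.5},
    "CSP-B": {"encryption": 0.6, "compliance": 0.8},
}
priorities = {"encryption": 3, "compliance": 1}  # tenant values encryption most
print(rank_csps(scores, priorities))  # CSP-A ranks first here
```

Because each CSP keeps its per-category scores, the customer can still inspect individual strengths and weaknesses behind the overall ranking, as described above.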
The presented approach is the first approach for CSP selection with an effective way to measure and compare the security of a provider. Previous works have considered security as a relevant criterion for the comparison and ranking of CSPs [43, 66, 70, 76, 164, 203, 216]. However, most of the approaches did not suggest a method for the collection of data about the CSPs' security. Closest to the presented approach is the one by Habib et al. [76], which identified the CAIQ as a data source, but did not specify in which way the data should be used. The proposed approach could be used as a building block for the existing approaches to CSP selection that also consider other provider attributes like cost and performance.
3.3.2 Supporting Security Assessments
In this section, we introduce a high-level approach to support cloud customers in their security assessments of clouds [24]. The idea is to capture the security requirements of cloud customers as well as characteristics of attackers. The model can be used for deriving new security threats from existing scenarios, as well as for describing and analyzing new what-if scenarios by changing characteristics of the involved parties.
System Model
We define a model of a cloud environment on the Infrastructure-as-a-Service layer consisting of entities and system components, as shown in Figure 3.15.
[Figure: entities (provider, manufacturer, developer, customer, third party) and their components (administration, technical support, hardware, software, usage, appliance, data), connected by physical or logical access of type privileged, unprivileged, or none.]
Figure 3.15: System Model with Relations Between Entities and Components [24]
Entities represent subjects which are involved in a cloud service, directly or indirectly, while components represent objects of which a cloud service is composed. The entities include: a cloud service provider who manages and operates a cloud infrastructure, which includes hardware and software resources; the manufacturer who produces the hardware resources used by the provider; a developer who produces the software resources used by the provider; the customer who uses the cloud service; and third parties which are not directly involved in providing or using Infrastructure-as-a-Service, but can represent users on higher layers of the cloud service (e. g. Software-as-a-Service). Each entity has one or more components, which can be accessed physically or logically, e. g. the provider has an administration maintaining the software (logical access) and a technical support team maintaining the hardware (physical access). Each entity or component can have multiple instances when used for describing an attack scenario, e. g. there can be several customers.
The relationship between entities and their components, as well as between components themselves, is
defined through different access levels: privileged means full access with all the privileges for configuring
and manipulating a component; unprivileged means limited access to functionality or an interface of a
component; and none means no access at all. Access levels are directed and transitive: A can use its access
to B in order to manipulate C, when B has access to C.
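The directed, transitive access relation described above is simply reachability in a directed graph, which can be sketched as follows (the entity/component names in the example instance are hypothetical):

```python
# Sketch of the directed, transitive access relation of the system
# model: if A can access B and B can access C, then A can (indirectly)
# manipulate C. Modeled as reachability in a directed access graph.
def can_access(edges, src, dst):
    """edges: dict mapping node -> set of directly accessible nodes."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, ()))
    return False

# Hypothetical instance of the model's relations:
edges = {
    "customer": {"appliance"},
    "appliance": {"software"},
    "software": {"hardware"},
}
print(can_access(edges, "customer", "hardware"))  # True, via appliance and software
```

Edges with access level 'none' are simply absent from the graph; distinguishing privileged from unprivileged edges would only require labeling them.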
Different archetypes describe the contributors to an attack: malicious (intentionally contributing to an attack); ostrich (knowingly contributing to an attack); charlatan (failing to acquire essential knowledge about contributing to an attack); stepping stone (unknowingly contributing to an attack). The malicious and ostrich archetypes are driven by goals, e. g. causing damage or monetary reasons, and their skill level determines the success of reaching such goals. The charlatan and stepping stone archetypes have low skills, which renders their goal of providing a secure cloud service to their customers unsuccessful. The ostrich can also be called lazy, and the term sloppy can be used for charlatans and stepping stones.
Evaluation of the System Model
We evaluated the system model by applying it to already known attacks and investigating its modeling ability. Here, we consider a side-channel attack. The setup of a side-channel attack scenario consists of a customer who tries to attack another customer by placing a virtual machine on the same physical server and observing the system's behavior [178]. In this case almost all entities are involved, as shown in Fig. 3.16.
[Figure: attacker and victim customers, each with an appliance, connected via the provider's software and hardware, which are supplied by the developer and the manufacturer, respectively.]
Figure 3.16: Attacking Other Customers Through Side-channels in Hardware and/or Software [24]
The CSP configures and chooses the hardware and software (operating system, hypervisor, etc.), which are supplied by the manufacturer and the developer, respectively. The input of the manufacturer and the developer depends on their archetypes. In this scenario it is not reasonable to consider them malicious, but the remaining range from ostrich to defender may result in input from low-quality hardware/software to specially hardened variants counteracting side-channel attacks. The CSP also influences the feasibility of side-channel attacks, since he configures the system and has to justify his choices of the used software and hardware.
In the considered side-channel attacks, one customer (the attacker) attacks another customer (the victim) by using his appliance to observe characteristics of the hardware directly or via the software. The attacker tries to gather information by eavesdropping on the data processed in the attacked appliance of the other involved customer. The attacked customer can hardly do anything to protect himself against side-channel attacks besides paying to use physical resources exclusively. However, if the CSP is a defender, the CSP can monitor appliance integrity from the software in order to protect the customer [9, 65], provide recovery options once an intrusion has been detected and removed [114], or install a secured environment like SICE [10].
Since we positively evaluated the system model with three more attacks, and were also able to construct four "what-if" scenarios describing theoretical new attacks, we conclude that the proposed system model is helpful for a customer in assessing cloud computing security. In particular, the system model helps to identify characteristics of the involved service providers which can aid attacks by other parties.
Chapter 4
Privacy Enhancing Technologies
If you care about privacy online, you need to
actively protect it.
Roger Dingledine
Bruce Schneier states [131]: "Surveillance is the business model of the internet. Everyone is under constant surveillance by many companies, ranging from social networks like Facebook to cellphone providers." One of the reasons for surveilling users is a rising economic interest in the internet [20]. However, users are not helpless and can make use of privacy-enhancing technologies (PETs) to protect themselves. Examples of PETs include services which allow anonymous communication, such as Tor [206] or JonDonym [110].
Tor and JonDonym are low-latency anonymity services which redirect packets in a certain way to hide metadata (the sender's and, in the case of a hidden service, optionally the receiver's internet protocol (IP) address) from passive network observers. While Tor and JonDonym differ technically, they are highly comparable with respect to their general technical structure and use cases. Tor offers an adapted browser including the Tor client for using the Tor network, the "Tor Browser". Similarly, the "JonDoBrowser" includes the JonDo client for using the JonDonym network.
However, the entities operating these PETs differ. Tor is operated by a non-profit organization with thousands of voluntarily operated servers (relays) and an estimated 2 million daily users according to the Tor Project [206], or an estimated 8 million daily users according to Mani et al. [126]. Tor is free to use, with the option for users to donate to the Tor Project. JonDonym is run by a commercial company with servers (mix cascades) operated by independent and non-interrelated organizations or private individuals who all publish their identity. A limited service is available for free, and different premium rates allow users to overcome the limitations. The actual number of users is unknown, since the service does not keep track of it. However, while the number of users of anonymization services is large enough to conduct studies and evaluate the running systems, it is quite low compared to the total number of internet users, which was estimated at 4.13 billion in 2019 [35]. Far less than 1% of internet users use anonymization networks.
In order to investigate why there is no broader adoption of anonymization services, some retrospective requirements engineering seems to be necessary: investigating users' privacy concerns and their technology acceptance to find factors promoting the use of PETs. Since Tor is one of the most prominent PETs, the hope is that the insights can also be transferred to other PETs.
Besides the users’ perspective, it is also important to investigate the economic side: Are users willing to
pay for PETs and which incentives and hindrances exist for companies to implement PETs?
Besides stand-alone PETs, another way to promote privacy is to integrate it into existing services or to design services with privacy in mind (privacy by design). The last part therefore deals with the application of privacy by design to online shopping and the internet of things.
The remainder of this chapter is structured as follows:
• Sect. 4.1 discusses how the distribution of PETs could be increased by investigating users' concerns, technology acceptance and willingness to pay as well as business models for PETs.
– Sect. 4.1.1 investigates technology use factors for the anonymization networks JonDonym [79, 80] (cf. Sect. C.1, C.4) and Tor [83, 84] (cf. Sect. C.8, C.9) and compares them [90] (cf. Sect. C.10).
– Sect. 4.1.2 assesses incentives for customers to pay for PETs [89] (cf. Sect. C.7) as well as incentives and barriers for companies to build a business model on PETs [88] (cf. Sect. C.2).
• Sect. 4.2 discusses the application of privacy by design.
– Sect. 4.2.1 discusses different architectures for pseudonymous online shopping [157] (cf. Sect. C.3).
– Sect. 4.2.2 investigates privacy by design in the internet of things by investigating privacy policies [165] (cf. Sect. C.5) and privacy patterns [154] (cf. Sect. C.6).
The respective papers can be found in Appendix C and the author’s contribution for each paper is indicated
in Tab. ?? on page ??.
4.1 Users’ Technology Acceptance and Economic Incentives
For PETs like the anonymization networks Tor [206] or JonDonym [110], which allow anonymous communication, there has been a lot of research [133, 181], but the large majority of it is of a technical nature and does not consider the users and their perceptions. However, the number of users is essential for anonymization networks, since an increasing number of (active) users also increases the anonymity set. The anonymity set is the set of all possible subjects who might be related to an action [168]; thus a larger anonymity set may make it more difficult for an attacker to identify the sender or receiver of a message. Therefore, it is crucial to understand the reasons for users' intention to use a PET and the obstacles preventing it [3].
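The effect of the anonymity set size can be illustrated with a toy calculation: against an attacker with no further background knowledge, the chance of correctly linking an action to one specific user among n equally likely candidates is 1/n (a deliberate simplification; real attackers may have side information that skews these probabilities).

```python
# Illustrative effect of the anonymity set size: with n equally likely
# candidate subjects and no further information, a naive attacker's
# probability of correctly linking an action to one user is 1/n.
def guess_probability(anonymity_set_size):
    if anonymity_set_size < 1:
        raise ValueError("anonymity set must contain at least one subject")
    return 1.0 / anonymity_set_size

for n in (10, 1000, 2_000_000):  # up to Tor-scale daily user numbers
    print(f"n={n}: {guess_probability(n):.7f}")
```

This is why every additional active user benefits all other users of the network.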
However, for the distribution of a PET it is not only important to understand the users' intention to use the PET, but also the users' willingness to pay for the service, which would allow companies to build a business model upon the provision of the service. The main challenge in motivating the user to pay for an anonymization service is that the user can barely notice a working PET like an anonymization network directly. Noticing it is in most cases the result of a limitation of throughput, performance, or response time. Indirect effects such as less profiling are also hard to detect, and even harder to attribute to a PET in place. This makes it hard for a company to sell, and for the user to understand, the advantages of these types of PETs. As a consequence, it is hard for a company to come up with a business model, and thus the further distribution of PETs is hindered.
Therefore, besides investigating the users' intention to use a PET in Sect. 4.1.1, we also investigate in Sect. 4.1.2 the economic side of PETs from the perspective of the users' willingness to pay and from the perspective of a business owner providing a PET as a service.
4.1.1 User Concerns and Technology Acceptance Models
To investigate the users' intention to use Tor or JonDonym we made use of two different popular structural equation models [78]:
Internet Users' Information Privacy Concerns (IUIPC) is a construct by Malhotra et al. [125] for measuring and explaining privacy concerns of online users. IUIPC is operationalized as a second-order construct¹ of the sub-constructs collection, awareness and control. That means the user's concerns are determined by concerns about data on the user in relation to the value or received benefits, by concerns about the control users have over their own data, and by concerns about his or her awareness regarding organizational privacy practices. IUIPC then influences trusting beliefs and risk beliefs, which in turn influence the user's behavior, which in the original research was the release of personal information to a marketing service provider. The trusting and risk beliefs refer to the users' perceptions about the behavior of online firms (in general) to protect or lose the users' personal information.
¹ For an extensive discussion on second-order constructs see Steward [202].
Technology Acceptance Model (TAM) was developed by Davis [44, 45] based on the theory of reasoned action (TRA) by Fishbein and Ajzen [63] and the theory of planned behavior (TPB) by Ajzen [5]. According to the TRA, a person's behavioral intention determines that person's behavior. The behavioral intention itself is influenced by the person's subjective norms and attitude toward the behavior. The subjective norms refer to a person's normative beliefs and normative pressure to perform or not perform the behavior. The attitude relies on the person's beliefs about the behavior and its consequences. TPB is an extension of the TRA with the same overall structural process: the behavioral intention is influenced by several components and influences the behavior. However, the TPB adds perceived behavioral control, which refers to a person's perception regarding the ease or difficulty of performing a given behavior in a given situation.
Internet Users' Information Privacy Concerns
We conducted a survey among users of the anonymization services JonDonym (141 valid questionnaires [80, 85]) and Tor (124 valid questionnaires [83, 86]) to investigate how the users' privacy concerns influence their behavioral intention to use the service.
behavioral intention to use the service.
For that purpose we used the IUIPC construct [125, 159, 160]. The IUIPC construct has been used in various contexts, such as the internet of things [136], internet transactions [95] and mobile apps [174], but so far it had not been applied to a PET such as an anonymization service. There is a major difference between PETs and the other services regarding the application of the IUIPC instrument. The other services had a certain use for their customers (primary use) and the users' privacy concerns were investigated for the use of the service. The concepts of trusting and risk beliefs matched that in a way that they were referring to 'general companies' which may provide a service to the user based on data they receive. However, for anonymization services, providing privacy is the primary purpose. Therefore, it is necessary to distinguish between trusting and risk beliefs with respect to technologies which aim to protect personal data (PETs) and regular internet services. As a consequence, the trust model within IUIPC's causal model was extended by trusting beliefs in Tor/JonDonym.
We tested the model using SmartPLS version 3.2.6 [177]. The measurement model was consistent, and the
checks for reliability and validity were fine on both data sets. Figure 4.1 shows the structural equation
model for JonDonym users and Fig. 4.2 the one for Tor users.
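In PLS-SEM, the reliability and validity checks typically include Cronbach’s alpha, composite reliability (CR) and average variance extracted (AVE). The following sketch shows how these criteria can be computed; the loadings and scores are purely illustrative values, not estimates from our survey data, and the computation is a generic one, not the exact SmartPLS procedure.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability from standardized outer loadings."""
    squared_sum = loadings.sum() ** 2
    return squared_sum / (squared_sum + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted from standardized outer loadings."""
    return (loadings ** 2).mean()

# Illustrative outer loadings for one construct (hypothetical values)
loadings = np.array([0.82, 0.78, 0.85])
cr = composite_reliability(loadings)   # common threshold: CR > 0.7
construct_ave = ave(loadings)          # common threshold: AVE > 0.5
```

Common rules of thumb require CR above 0.7 and AVE above 0.5 for each construct before the structural model is interpreted.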
Figure 4.1: JonDonym Users, IUIPC, Path Estimates and Adjusted R²-values of the Structural Model [80]
Privacy Enhancing Technologies
Figure 4.2: Tor Users, IUIPC, Path Estimates and Adjusted R²-values of the Structural Model [83]
The models for JonDonym and Tor users turned out to be very similar. Most of the relations were as
expected; somewhat surprising was the result that the general trusting and risk beliefs had no significant
effect on the use behavior. However, given the rather small effect sizes, the sample size may simply not
have been large enough to show a significant relationship. In any case, the trust in JonDonym or Tor had a
by far larger influence on the behavioral intention and the use behavior. The result shows that the
reputation of being a trustworthy provider or service is crucial for an anonymization service provider. The
results also show that users with a higher level of privacy concern tend to trust their anonymization
service provider more, which might be affected by the fact that we only asked users of the respective PET.
In general, if there is a reliable measure of the use behavior, it is a better indicator than the users’
behavioral intention to use a service. Since we questioned actual users, we could use their use frequency
of the services. However, for Tor it showed that the influence of the behavioral intention on the actual
use behavior was rather small.
Users’ attitudes and behavioral intentions often differ from the actual behavioral decisions they make, a
phenomenon often denoted as the ‘privacy paradox’ [67]. Two possible explanations come to mind: i) users
balance the potential risks against the benefits they gain from the service (privacy calculus) [54];
ii) users are concerned but lack the knowledge to react in a way that would reflect their needs [208].
However, since we surveyed active users of Tor, neither argumentation fits. Regarding the privacy calculus,
we have already discussed how PETs differ from regular internet services. Regarding the lack of knowledge,
the users have already installed the PET and use it. Nevertheless, it is still important to investigate the
users’ capabilities, since users need a certain amount of knowledge in order to adequately evaluate the
given level of privacy [163, 208].
For that purpose, we added the users’ privacy literacy, measured with the “Online Privacy Literacy Scale”
(OPLIS) [128], to the model. It showed that the users’ privacy literacy positively influences their trusting
beliefs in Tor (cf. Fig. 4.3). Therefore, educating users and increasing their privacy literacy should add
to the behavioral intention of using Tor. We will further investigate the influence of the behavioral
intention on the actual use behavior by making use of the TAM model in the next subsection.
Technology Acceptance Models
Within the same survey, we also asked the participants about certain constructs we could use in a TAM
model [82]: how they perceived the usefulness, the ease of use and the anonymity of the PET. Since we had
already identified trust in the PET as a major driver of the behavioral intention, we included it as well.
The resulting model, covering both JonDonym and Tor users, is shown in Fig. 4.4 [90]. The model shows
significant relationships for all paths already known from the TAM model, with three noteworthy observations:
• There are three main drivers of the PETs’ perceived usefulness: perceived anonymity, trust and
perceived ease of use, which together explain almost two-thirds of its variance. This demonstrates that for PETs
Figure 4.3: Tor Users, IUIPC & OPLIS, Path Estimates and Adjusted R²-values of the Structural Model [84]
Figure 4.4: Tor/JonDonym Users, TAM, Path Estimates and Adjusted R²-values of the Structural Model [90]
the two newly added variables, perceived anonymity and trust in the PET, can be important antecedents
in a technology acceptance model for PETs.
• Similar to the IUIPC model, trust in the PET is the most important factor for behavioral intention,
once more emphasizing trust in the PET as a highly relevant concept when determining the drivers of
users’ use behavior of PETs.
• Since the effects of perceived anonymity and trust in the PET on behavioral intention and actual use
behavior were partially indirect, we calculated the total effects. Table 4.1 shows that the total effects
on behavioral intention are relatively large and highly statistically significant.
Table 4.1: Tor and JonDonym Users, TAM, Total Effects [90]

Total effect          Effect size   P-value
PA → BI               0.446         < 0.001
PA → USE              0.177         < 0.001
Trust_PETs → BI       0.511         < 0.001
Trust_PETs → USE      0.203         < 0.001

BI: Behavioral Intention   PA: Perceived Anonymity   USE: Actual Use Frequency
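A total effect aggregates the direct path and all mediated paths: it is the direct path coefficient plus, for each indirect path, the product of the coefficients along that path. The following sketch illustrates this computation on a small stand-in model; the coefficient values are placeholders chosen for illustration and do not reproduce our estimates.

```python
# Structural model as a DAG: edge -> standardized path coefficient.
# Coefficients are illustrative placeholders, not the fitted estimates.
paths = {
    ("PA", "Trust"): 0.65,
    ("PA", "PU"): 0.45,
    ("Trust", "PU"): 0.25,
    ("Trust", "PEOU"): 0.40,
    ("Trust", "BI"): 0.35,
    ("PEOU", "PU"): 0.25,
    ("PEOU", "BI"): 0.30,
    ("PU", "BI"): 0.25,
    ("BI", "USE"): 0.40,
}

def total_effect(src: str, dst: str) -> float:
    """Sum of coefficient products over all directed paths src -> dst."""
    if src == dst:
        return 1.0
    return sum(coeff * total_effect(mid, dst)
               for (s, mid), coeff in paths.items() if s == src)

# e.g. the total effect of perceived anonymity on behavioral intention
# aggregates the paths via Trust, PU and PEOU
effect_pa_bi = total_effect("PA", "BI")
```

In our models, the indirect paths run through trust, perceived usefulness and perceived ease of use, which is why the total effects in Table 4.1 are larger than the individual path coefficients.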
To investigate the differences between JonDonym and Tor and also to further investigate the small effect
of behavioral intention on actual use behavior, we conducted a multigroup analysis to test whether there are
statistically significant differences between JonDonym and Tor users as shown in Tab. 4.2.
Table 4.2: Tor and JonDonym Users, TAM, Multi-Group Analysis [90]

Relationship           Path coeff.  Path coeff.  P-value     P-value   Difference   P-value
                       (JonDonym)   (Tor)        (JonDonym)  (Tor)     path coeff.  (JonDonym vs Tor)
PA → Trust_PETs        0.597        0.709        < 0.001     < 0.001   0.112        0.865
PA → PU                0.543        0.369        < 0.001     < 0.001   0.174        0.088
Trust_PETs → BI        0.416        0.232        < 0.001     0.010     0.184        0.064
Trust_PETs → PU        0.173        0.304        0.035       0.008     0.131        0.823
Trust_PETs → PEOU      0.378        0.431        < 0.001     < 0.001   0.053        0.657
PU → BI                0.183        0.300        0.046       0.002     0.117        0.805
PEOU → BI              0.206        0.371        0.011       < 0.001   0.165        0.929
PEOU → PU              0.182        0.300        0.039       < 0.001   0.118        0.830
BI → USE               0.679        0.179        < 0.001     0.029     0.500        < 0.001

BI: Behavioral Intention   PEOU: Perceived Ease of Use   PA: Perceived Anonymity   USE: Actual Use Frequency
PU: Perceived Usefulness of Protecting Users’ Privacy
The most significant difference between JonDonym and Tor users was the effect of behavioral intention on
actual use, with a path coefficient of 0.679 for JonDonym and 0.179 for Tor. Less significant observations
were that the effects of trust on behavioral intention and of perceived anonymity on perceived usefulness
were slightly larger for JonDonym users. A possible explanation could be the structure of the two services:
JonDonym is a profit-oriented company that charges for the unlimited use of the PET [110], while Tor is a
community-driven project based on donations.
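Conceptually, the multigroup analysis tests whether a path coefficient differs between the two user groups more than expected by chance. One common nonparametric variant permutes the group labels and re-estimates the coefficient difference. The sketch below uses a simple OLS slope as a stand-in for a single PLS path estimate; it illustrates the idea but is not the exact procedure implemented in SmartPLS.

```python
import numpy as np

def slope(x: np.ndarray, y: np.ndarray) -> float:
    """OLS slope of y on x (stand-in for a single PLS path estimate)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return float((x_c @ y_c) / (x_c @ x_c))

def permutation_test(x1, y1, x2, y2, n_perm=2000, seed=1):
    """Two-sided permutation p-value for a group difference in slope."""
    rng = np.random.default_rng(seed)
    observed = abs(slope(x1, y1) - slope(x2, y2))
    x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
    n1 = len(x1)
    count = 0
    for _ in range(n_perm):
        # Reassign group membership at random and re-estimate the difference
        idx = rng.permutation(len(x))
        g1, g2 = idx[:n1], idx[n1:]
        if abs(slope(x[g1], y[g1]) - slope(x[g2], y[g2])) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

For example, x could hold the behavioral intention scores and y the use frequencies of the JonDonym and Tor samples, respectively; a small p-value then mirrors the significant BI → USE difference in Table 4.2.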
To gather some reasons for the observed differences and to possibly identify further differences between
the services from a user perspective, we included five open questions in the survey and collected altogether
626 statements, which we coded in two phases [33] with initial and focused coding. The results are shown in
Tab. 4.3. The left column lists the three concepts technical issues, beliefs and perceptions, and economical
issues, each of which comprises several subconcepts. The results were then clustered into statements common
to both PETs, such as feature requests (Tor.1, Jon.1), statements only referring to Tor, such as statements
about malicious exit nodes (Tor.2), and statements only referring to JonDonym, such as concerns about the
location of mix cascades