Methods for analyzing the
relation of user experience
and information security in
the project SDIKA
Max Sauer, Markus Toran
Table of Contents
Abstract
1 Showcase Secure Digital Identities Karlsruhe (SDIKA)
2 Analysis methods
2.1 SecureUse Score (M1)
2.2 Heuristic Evaluation (M2)
2.3 Security Usability Symmetry (M3)
2.4 Cognitive Walkthrough (M4)
2.5 Heuristic Walkthrough (M5)
2.6 Method of Gonzalez et al. (M6)
2.7 Method of Alarifi et al. (M7)
2.8 Thinking Aloud (M8)
2.9 Goals, Operators, Methods and Selection (GOMS) (M9)
2.10 Diary studies (M10)
2.11 Focus groups (M11)
2.12 Eye Tracking (M12)
2.13 Questionnaires (M13-M22)
3 Selection criteria
4 Selection
4.1 Data collection type and integration time
4.2 Type, qualifications and number of evaluators
4.3 Fixed costs, variable costs and scope
4.4 Digital identity solutions, SDIKA use cases and stakeholders
4.5 Advanced filtering
5 Conclusion
6 Acknowledgments
7 References
Authors
Imprint
Abstract
The project “Schaufenster Sichere Digitale Identitäten Karlsruhe” (engl. Showcase Secure Digital Identities Karlsruhe, SDIKA for short) is one of four nationwide projects funded by the German Federal Ministry of Economics and Climate Protection (BMWK) as part of the showcase program “Secure Digital Identities”. SDIKA aims to realize wide-ranging, cross-use-case identities in open ecosystems. To achieve this goal, the system must be both usable and secure for end users; otherwise, there is a risk that it will not be accepted by software end users. A high-quality user experience (UX) is not only a quality requirement of software; it can also influence, positively or negatively, the information security of the software in use. Information security mechanisms can be misused or even ignored by end users if the system has UX deficits. In addition, mechanisms that lead to complex operating processes may result in the system not being used at all. To design systems that offer good UX and protect information adequately, so that both the individual aspects and their overall effect meet the requirements, methods for evaluating the relation between the two are needed. This paper gives an overview of such analysis methods (based on a systematic literature review by Max Sauer et al. that will be published in the future) and selects suitable analysis methods for the SDIKA project using defined selection criteria.
1 Showcase Secure Digital Identities Karlsruhe (SDIKA)
The project “Schaufenster Sichere Digitale Identitäten Karlsruhe” (engl. Showcase Secure Digital Identities Karlsruhe, SDIKA for short) is one of four nationwide projects funded by the German Federal Ministry of Economics and Climate Protection (BMWK) as part of the showcase program “Secure Digital Identities” [1]. SDIKA pursues the goal of realizing far-reaching, cross-use-case identities in open ecosystems. Technical, semantic and economic interoperability are intended to promote the use of solutions that ensure high functionality, information security and digital sovereignty [2]. The project follows a triad of developing the "SDI-X system", demonstrating and evaluating it in the regional showcase, and establishing a cross-regional ecosystem. The "SDI-X system" is intended to enable each acceptance point to connect to all identity solutions available in the ecosystem via a local "SDI-X adapter". Individuals and organizations should be able to choose freely between identity solutions of different types (in a cloud or locally on a smartphone). Application-specific and self-sovereign electronic identities [3] (for example, the electronic ID card) are to be supported. The benefits will be demonstrated to citizens and organizations in Karlsruhe and the Rhine-Neckar Metropolitan Region (MRN) using use cases from the areas of health, mobility, digital planning and construction, digital urban society and e-government. SDIKA creates a municipal ecosystem of acceptance points and trust services [4] and makes digital identities tangible for Karlsruhe and MRN residents in their everyday lives [5].
Figure 1 shows main work packages of the project SDIKA.
Figure 1: Project overview of SDIKA
(Illustration from the project consortium)
2 Analysis methods
In the following, the identified methods (after a systematic literature review from Max Sauer et al. that will be
published in future) for analyzing the relation of user experience (UX) and information security are explained.
2.1 SecureUse Score (M1)
SecureUse Score [6] is a metric that yields a usability score and a security score. The usability score is assigned by a usability or UX expert; it is the sum of the individual scores for effectiveness, efficiency and satisfaction. Effectiveness is rated on a scale from 0 to 5 (0: failed; 1–4: partially fulfilled; 5: completely fulfilled). Efficiency is rated on a scale from 0 to 5 (0: failed; 1: fulfilled slowly, up to 5: fulfilled very quickly). Satisfaction is rated on a scale from 0 to 5, e.g. using the System Usability Scale (see M20). The security score is assigned by a security expert using a scale from 0 to 15.
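The scoring scheme described above can be sketched in a few lines of code. This is an illustrative sketch only, assuming the structure stated in the text (three usability sub-scores of 0–5 each, one security score of 0–15); the function name and return format are not from the source.

```python
# Illustrative sketch of the SecureUse Score described above (assumed
# structure: usability = effectiveness + efficiency + satisfaction,
# each rated 0-5 by a UX expert; security rated 0-15 by a security expert).

def secure_use_score(effectiveness: int, efficiency: int,
                     satisfaction: int, security: int) -> dict:
    """Combine expert ratings into a usability and a security score."""
    for name, value, top in [("effectiveness", effectiveness, 5),
                             ("efficiency", efficiency, 5),
                             ("satisfaction", satisfaction, 5),
                             ("security", security, 15)]:
        if not 0 <= value <= top:
            raise ValueError(f"{name} must be between 0 and {top}")
    return {"usability": effectiveness + efficiency + satisfaction,  # 0-15
            "security": security}                                    # 0-15

# Example: a system rated 4/3/5 on the usability sub-scores, 12 on security.
print(secure_use_score(4, 3, 5, 12))  # {'usability': 12, 'security': 12}
```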
2.2 Heuristic Evaluation (M2)
Heuristic Evaluation [7] is used to evaluate a system against design principles called heuristics. UX and/or information security experts test a system using these heuristics, which allows UX and information security problems to be identified. Originally, Nielsen and Molich [7] defined nine heuristics that relate only to usability; however, heuristics that include information security now exist [3–13]. In the context of information security, Heuristic Evaluation has been used to evaluate audio CAPTCHAs [16].
2.3 Security Usability Symmetry (M3)
Security Usability Symmetry [17] represents a variant of Heuristic Evaluation (M2). Quality in Use Integrated
Measurement (QUIM) [18] is used to define heuristics. The first step is to collect usability and security
scenarios, then to identify relevant factors and finally to define measurable heuristics. With the help of the
defined heuristics, a system can be evaluated and a list of problems can be collected, analogous to Heuristic
Evaluation (M2). Additionally, metrics could be used to evaluate severity or frequency of a usability or security
problem. For example, Security Usability Symmetry has been used to evaluate Automated Teller Machines
(ATMs) [17].
2.4 Cognitive Walkthrough (M4)
Using Cognitive Walkthrough [19], UX and/or information security experts put themselves in the perspective of a hypothetical user and perform given tasks, giving plausible explanations from the user's point of view for choosing a specific step. This is done together with previously defined questions, such as "Will the end users try to achieve the right result?". The experts then discuss the influence of UX and information security during the individual tasks [19]. For example, Cognitive Walkthrough has been used to evaluate the usability and information security of desktop firewalls [20].
2.5 Heuristic Walkthrough (M5)
Heuristic Walkthrough [19] is a combination of Heuristic Evaluation (M2) and Cognitive Walkthrough (M4). The
system is evaluated using two walkthroughs. In the first walkthrough, a task-oriented evaluation is performed.
Evaluators perform predefined tasks and evaluate the system based on various questions, analogous to the
Cognitive Walkthrough (M4). The second walkthrough is a free-form evaluation. Evaluators now navigate
through the system without defined tasks, but remember the findings from the first run, and evaluate the
system using various heuristics [21], analogous to Heuristic Evaluation (M2). Heuristic Walkthrough has been
applied to four static analysis tools for information security [22].
2.6 Method of Gonzalez et al. (M6)
Gonzalez et al. [23] published a measurement model for secure and usable e-commerce websites in addition
to the defined heuristics of Heuristic Evaluation (M2). With the help of this model, heuristics of the categories
human-computer interaction (HCI), usability and security can be measured using various metrics [23]. For
example, 30 different e-commerce websites were evaluated with this method [23].
2.7 Method of Alarifi et al. (M7)
Alarifi et al. [15] have not only defined heuristics, but also a model for evaluating the security and usability of
e-banking platforms. In summary, security and usability problems are initially identified and a solution for
conflicting problems of both properties is defined. Then, the system is evaluated using the defined heuristics
[15]. This method was used to evaluate the usability and security of five banking systems [15].
2.8 Thinking Aloud (M8)
Using Thinking Aloud [24], evaluators test the application while expressing their thoughts verbally. This makes clear how the software and its functions are understood by the evaluators. Through this constant communication and justification of action decisions, UX and information security problems as well as positive aspects can be collected. One variant of the method is "Retrospective Testing". In contrast to "Constructive Interaction", the subjects do not think aloud during the testing of the application, but only afterwards: the test session is recorded on video and commented on verbally after usage [24]. In the context of security, Thinking Aloud was used to identify usability and information security vulnerabilities of an internet voting system that affect the information security of the voting process [25].
2.9 Goals, Operators, Methods and Selection (GOMS) (M9)
Goals, Operators, Methods and Selection (GOMS) [26] can be used to estimate the scope of system
functionality, assess the effect of the interface on users, and identify specific UX problems users have with the
system. In addition, quantitative statements about learning and execution time can be made. GOMS includes
the following aspects:
• Goals: Goals that the users should achieve through the system. Example: shut down the computer.
• Operators: All actions of a method used to achieve a goal. Example: press the Windows icon.
• Methods: Contain all relevant operators. Example: shut down the computer via the Windows icon.
• Selection Rules: Rules for selecting one method when several exist and one should be preferred. An alternative here would be to shut down using the key combination "Ctrl + Alt + Del".
By defining goals, it can be checked whether the goals are achieved with the help of the defined methods,
which consist of various operators [26].
In the context of information security, GOMS has been used to assess the usability of e-government websites
in order to gather usability issues that affect information security [27].
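The GOMS structure described above can be sketched as a small data model with a selection rule. The method names, operators and time estimates below are hypothetical illustrations, not values from the source or from [26].

```python
# Minimal illustrative GOMS model for the "shut down computer" example
# above. Operators and time estimates are hypothetical, not from the source.

from dataclasses import dataclass

@dataclass
class Method:
    name: str
    operators: list      # elementary actions, performed in order
    est_seconds: float   # assumed execution-time estimate for the method

GOAL = "shut down computer"

methods = [
    Method("via Windows icon",
           ["press Windows icon", "click power symbol", "click 'Shut down'"],
           est_seconds=4.5),
    Method("via key combination",
           ["press Ctrl + Alt + Del", "click power symbol", "click 'Shut down'"],
           est_seconds=3.8),
]

def select_method(candidates):
    """Selection rule: prefer the method with the lowest estimated time."""
    return min(candidates, key=lambda m: m.est_seconds)

chosen = select_method(methods)
print(f"Goal '{GOAL}' -> {chosen.name} ({len(chosen.operators)} operators)")
```

Whether a goal is achievable can then be checked by verifying that at least one method exists whose operators the interface actually supports.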
2.10 Diary studies (M10)
A diary study can be used to collect qualitative data on user behavior, activities and user experience over a
longer period of time. During the defined reporting period, participants keep a diary and document relevant
information while using a given system [28]. Mare et al. [29] used a diary study to obtain information about the
daily use of different authentication methods.
2.11 Focus groups (M11)
Focus groups [30] can be used to assess users' needs and perceptions. For this purpose, users discuss questions or concerns about a system; a moderator leads the discussion and maintains the focus of the group. This allows spontaneous reactions and ideas of users to emerge and makes it possible to observe group dynamics. A focus group is not suitable as a sole analysis method, because there are often differences between user statements and actual user behavior; it should therefore always be supplemented by direct observation of users during system use. Marky et al. [31] used focus groups to evaluate their 3D authentication method.
2.12 Eye Tracking (M12)
Eye tracking makes it possible to visualize the course of a person's gaze with the help of eye trackers. It reveals which elements are perceived by users, for how long, and in which order, as well as which elements distract from the relevant areas. Furthermore, it can answer whether the structure of the system enables information to be found quickly without extensive searching [32]. Since the acquired data must subsequently be interpreted in the respective context, eye tracking is not suitable as a stand-alone method and should be combined with other methods [33]. In the context of information security, the usability of various websites has been investigated to determine whether it has a detrimental effect on security indicators, for example whether a slim log-in area encourages users to stop checking the URL for fraud [34].
2.13 Questionnaires (M13-M22)
Questionnaires are often used to establish a numerical profile of the sample (e.g., the proportion of the sample in different age groups) or to assess the frequency of occurrence of opinions, attitudes, experiences, processes, behaviors, or predictions [35]. Ten questionnaires were identified, which are listed below.
• Usefulness, Satisfaction and Ease of Use (USE) [36] (M13)
• UX Needs Scale [37] (M14)
• Attrakdiff 2 [38] (M15)
• After-Scenario Questionnaire (ASQ) [39] (M16)
• Short Version of User Experience Questionnaire (UEQ-S) [40] (M17)
• Subjective Mental Effort Question (SMEQ) [41] (M18)
• User Experience Questionnaire (UEQ) [42] (M19)
• System Usability Scale (SUS) [43] (M20)
• Questionnaire for User Interface Satisfaction (QUIS) [44] (M21)
• Lewis’ Post-Study System Usability Questionnaire (PSSUQ) [45] (M22)
The questionnaires were used to evaluate UX in the area of information security [34, 39–46].
3 Selection criteria
In order to select among the identified analysis methods (see chapter 2) for SDIKA, selection criteria are used. They were discussed with several project members and are explained in the following.
• Data collection type
The system should be evaluated qualitatively and quantitatively. For this purpose, a "true/false" metric for
each characteristic "quantitative" and "qualitative" is used.
• Integration time
Each method must be applicable already in the design phase, because the methods are needed to evaluate
prototypes. A "true/false" metric is used for evaluation.
• Type of evaluators
The type of evaluators indicates whether experts (UX, information security or system experts) and/or
representative software end users are involved. The evaluation is done by a "true/false" metric with the
expressions "experts" and "users".
• Qualifications of evaluators
If evaluators must have certain qualifications (UX, information security or system qualifications), they must
be available in SDIKA. The evaluation is done by a "true/false" metric with the characteristics "UX",
"information security" and "system".
• Number of evaluators
If a specific number of evaluators is required, they must be available in SDIKA.
• Fixed costs
If fixed costs of a method are incurred, e.g. for eye tracker, they must be affordable in SDIKA. The fixed costs
of the methods are relatively assigned to the categories "low", "normal" and "high".
• Variable costs
If variable costs of a method are incurred, e.g. for UX experts, they must be affordable in SDIKA. The variable
costs of the methods are relatively assigned to the categories "low", "normal" and "high".
• Scope
The respective method can be used to evaluate at least one of the following attributes of UX and information
security: satisfaction, efficiency, effectiveness, memorability, attention, confidentiality, integrity or
availability. The attributes were defined using expert interviews with project members. The assessment is
done using a "true/false" metric per attribute.
• Suitability for digital identity solutions
The methods must be applicable to digital identity solutions. For example, heuristics that only apply to e-government are not sufficient. The evaluation is done by a "true/false" metric.
• Suitability for the SDIKA use cases
The method should be able to evaluate the SDIKA use case. That means it should be useful for different use
cases of the domains of e-government, health, mobility, digital planning and construction, and digital urban
society. The evaluation is done by a "true/false" metric.
• Consideration of feedback from all relevant SDIKA stakeholders
The respective method must be able to incorporate feedback from the following SDIKA stakeholders:
Identity holders (individuals and organizations), acceptance points (for-profit or not-for-profit), identity
issuers, ID solution providers, multipliers (working groups, organizations, professional user groups,
advocacy groups), societal/general economic stakeholders (organized civil/urban society, federal
government, BMWK), research/academia and software developers.
4 Selection
In the following, a selection of the identified analysis methods (see section 2) is made using the previously
defined selection criteria (see section 3) for SDIKA.
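The selection procedure applied in this chapter can be thought of as filtering method records by "true/false" criteria: a method survives only if all mandatory criteria hold. A minimal sketch follows; the criterion names and the method data are hypothetical illustrations, not the actual evaluation results from the tables below.

```python
# Illustrative sketch of the filtering procedure used in this chapter:
# each method is a record of boolean ("true/false") criteria, and a method
# is selected only if all mandatory criteria are fulfilled.
# The criterion values here are hypothetical, not taken from the tables.

methods = {
    "A": {"usable_in_design_stage": False, "fits_identity_solutions": True},
    "B": {"usable_in_design_stage": True,  "fits_identity_solutions": True},
    "C": {"usable_in_design_stage": True,  "fits_identity_solutions": False},
}

mandatory = ["usable_in_design_stage", "fits_identity_solutions"]

selected = [name for name, criteria in methods.items()
            if all(criteria[c] for c in mandatory)]
print(selected)  # ['B']
```

Graded criteria such as the cost categories "low"/"normal"/"high" would be handled analogously, with a threshold check instead of a boolean test.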
4.1 Data collection type and integration time
In the following, the methods are evaluated using the criteria data collection type and integration time. The
result can be seen in table 1. An "X" indicates that a criterion is fulfilled; an empty cell indicates that it is not.
Table 1: Evaluation using data collection type and integration time
Data collection type Integration time
Qualitative Quantitative Usage in design stage
M1 X [6] X [6]
M2 X [51] X [51] X [7]
M3 X [17] X [17] X [17]
M4 X [52] X [53]
M5 X [52] X [17] X [21]
M6 X [23] X [23] X [23]
M7 X [15] X [15] X [15]
M8 X [24] X [24]
M9 X [26] X [26] X [54]
M10 X [28] X [28]
M11 X [30] X [30]
M12 X [55] X [55] X [33]
M13 X [55] X [36]
M14 X [36] X [37]
M15 X [37] X [38]
M16 X [56] X [39]
M17 X [39] X [40]
M18 X [40] X [41]
M19 X [41] X [42]
M20 X [42] X [57]
M21 X [44] X [58]
M22 X [45] X [45]
In summary, all methods can already be used in the design phase of software. It should be mentioned, however, that diary studies (M10) require a prototype with a certain degree of maturity that can be evaluated over a longer period of time. This requirement is not met by the SDIKA prototypes, so diary studies (M10) are not suitable for SDIKA. The results for the data collection type are considered in the advanced filtering (section 4.5), because the selected methods should pursue both a qualitative and a quantitative approach.
4.2 Type, qualifications and number of evaluators
In the following section, the methods are evaluated using type, qualifications and number of evaluators. The
result can be seen in table 2. An "X" indicates that a criterion is fulfilled; an empty cell indicates that it is not.
Table 2: Evaluation using type, qualification and number of evaluators
Type Qualifications Number
Users Experts UX Information security System
M1 X [6] X [6] X [6] X [6] X [7] Min. 1 [6]
M2 X [7] X [7] Min. 1 [7]
M3 X [17] X [17] Min. 1 [17]
M4 X [53] X [53] X [53] X [53] Min. 1 [53]
M5 X [21] X [21] X [21] X [21] Min. 1 [21]
M6 X [23] X [23] Min. 1 [23]
M7 X [15] X [15] Min. 1 [15]
M8 X [24] Min. 1 [24]
M9 X [26] X [26] X [26] Min. 1 [26]
M10 X [28] Min. 1 [28]
M11 X [30] Min. 2 [30]
M12 X [33] Min. 1 [33]
M13 X [36] Min. 1 [36]
M14 X [37] Min. 1 [37]
M15 X [56] Min. 1 [56]
M16 X [39] Min. 1 [39]
M17 X [40] Min. 1 [40]
M18 X [41] Min. 1 [41]
M19 X [42] Min. 1 [42]
M20 X [43] Min. 1 [43]
M21 X [44] Min. 1 [44]
M22 X [45] Min. 1 [45]
In summary, all researched methods are suitable in the context of type, qualification and number of
evaluators, because the experts and representative end users needed for each method are available in SDIKA.
4.3 Fixed costs, variable costs and scope
In the following section, the methods are evaluated using fixed costs, variable costs and scope.
The fixed costs of Eye Tracking (M12) are classified as “high”, because separate, expensive hardware (an eye
tracker) is required. The fixed costs of the remaining methods are classified as "low", because no separate
hardware is required.
The variable costs of the expert-based methods are rated higher than those of the user-based methods. Given the similar number of evaluators required for the user- and expert-based methods (see section 4.2), the variable costs of the user-based methods were classified as "low". The number of evaluators required by the methods is low (at most 2, see table 2). Moreover, only SecureUse Score (M1), Cognitive Walkthrough (M4) and Heuristic Walkthrough (M5) require at most one information security expert, and SecureUse Score (M1), Cognitive Walkthrough (M4), Heuristic Walkthrough (M5) and GOMS (M9) require at most one UX expert. Because no other specific evaluator qualifications are required and the number of evaluators needed is low, the variable costs of the expert-based methods are classified as "normal" rather than "high".
Furthermore, the scope is evaluated. Abbreviations are assigned to the individual attributes for this purpose:
Satisfaction, Efficiency, Effectiveness, Memorability, Attention, Confidentiality, Integrity and Availability.
The result can be seen in table 3. An "X" indicates that a criterion is fulfilled; an empty cell indicates that it is not.
Table 3: Evaluation using fixed costs, variable costs and scope
Fixed costs | Variable costs | Scope (Satisfaction, Efficiency, Effectiveness, Memorability, Attention, Confidentiality, Integrity, Availability)
M1 low low X [6] X [6] X [6] X [6] X [6] X [6]
M2 low normal X [16] X [16] X [16] X [16] X [16] X [16] X [16] X [16]
M3 low normal X [17] X [17] X [17] X [17] X [17] X [17] X [17] X [17]
M4 low normal X [20] X [20] X [20] X [20] X [20] X [20] X [20] X [20]
M5 low normal X [21] X [21] X [21] X [21] X [21] X [21] X [21] X [21]
M6 low normal X [23] X [23] X [23] X [23] X [23] X [23] X [23] X [23]
M7 low normal X [15] X [15] X [15] X [15] X [15] X [15] X [15] X [15]
M8 low low X [25] X [25] X [25] X [25] X [25] X [25] X [25] X [25]
M9 low normal X [26] X [26] X [26] X [26] X [26] X [26] X [26] X [26]
M10 low low X [29] X [29] X [29] X [29] X [29] X [29] X [29] X [29]
M11 low low X [31] X [31] X [31] X [31] X [31] X [31] X [31] X [31]
M12 high low X [32] X [32] X [32] X [32] X [32]
M13 low low X [36] X [36] X [36] X [36] X [36]
M14 low low X [37] X [37] X [37] X [37] X [37] X [37] X [37] X [37]
M15 low low X [38] X [38]
M16 low low X [39] X [39] X [39]
M17 low low X [40] X [40]
M18 low low
M19 low low X [42] X [42] X [42] X [42]
M20 low low X [43] X [43] X [43]
M21 low low X [44] X [44] X [44] X [44]
M22 low low X [45] X [45] X [45] X [45] X [45]
The fixed costs of Eye Tracking (M12) are the highest: they amount to 10105 euros for the Tobii Pro Spark eye tracker (incl. software) [59], which is justifiable in relation to the benefit in SDIKA. UX and system experts are available in SDIKA, and the variable cost of an information security expert performing the methods is also reasonable. With regard to scope, each method must be able to evaluate at least one of the UX/information security attributes mentioned above. SMEQ (M18) does not meet this requirement and therefore cannot be used in SDIKA. Every other method evaluates at least one of the attributes.
4.4 Digital identity solutions, SDIKA use cases and stakeholders
The following section shows the suitability for digital identity solutions and SDIKA use cases as well as the
ability to consider feedback from all relevant SDIKA stakeholders. For this purpose, a "true/false" metric is
used in each case. The results can be found in table 4. An "X" indicates that a criterion is fulfilled; an empty cell indicates that it is not.
Table 4: Evaluation using digital identity solutions, SDIKA use cases and stakeholders
Digital identity solutions SDIKA use cases SDIKA stakeholders
M1 X [6] X [6] X [6]
M2 X [7] X [7] X [7]
M3 X [17]
M4 X [17] X [17] X [17]
M5 X [21] X [21] X [21]
M6 X [23]
M7 X [15]
M8 X [24] X [24] X [24]
M9 X [26] X [26] X [26]
M10 X [28] X [28] X [28]
M11 X [30] X [30] X [30]
M12 X [32] X [32] X [32]
M13 X [36] X [36] X [36]
M14 X [37] X [37] X [37]
M15 X [38] X [38] X [38]
M16 X [39] X [39] X [39]
M17 X [40] X [40] X [40]
M18 X [41] X [41] X [41]
M19 X [42] X [42] X [42]
M20 X [43] X [43] X [43]
M21 X [44] X [44] X [44]
M22 X [45] X [45] X [45]
It becomes clear that Security Usability Symmetry (M3) and the methods of Gonzalez et al. (M6) and Alarifi et al. (M7) cannot be used for digital identity solutions [10, 12, 18]. In addition, the SDIKA use cases cannot be evaluated with these three methods [10, 12, 18]. For these reasons, Security Usability Symmetry (M3), the method of Gonzalez et al. (M6) and the method of Alarifi et al. (M7) cannot be used in SDIKA. All other methods may be used.
4.5 Advanced filtering
17 methods remain after the filtering in sections 4.1–4.4: the four expert-based methods Heuristic Evaluation (M2), Cognitive Walkthrough (M4), Heuristic Walkthrough (M5) and GOMS (M9); SecureUse Score (M1), which is both expert- and end-user-based; and the twelve end-user-based methods Thinking Aloud (M8), focus groups (M11), Eye Tracking (M12) and the questionnaires except SMEQ (M18). The remaining methods are now discussed and selected.
Because both an expert-based and an end-user-based assessment are to be performed in SDIKA, filtering is first performed among the remaining expert-based analysis methods. SecureUse Score (M1) and GOMS (M9) can be understood as subsets of Heuristic Evaluation (M2) and are therefore redundant. Cognitive Walkthrough (M4) and Heuristic Evaluation (M2) have been further developed into Heuristic Walkthrough (M5), which moreover follows both a qualitative and a quantitative approach.
Of the end-user-based methods, Eye Tracking (M12) can be combined with Thinking Aloud (M8) in its "Retrospective Testing" variant [60]. First, Eye Tracking (M12) is performed and the end user's click behavior is recorded synchronously; the recording is commented on verbally afterwards. Finally, a questionnaire can be filled out by the end user. The UX Needs Scale (M14) is suitable for this purpose, as it assesses most of the UX and information security attributes relevant from SDIKA's point of view. Thus, the questionnaires USE (M13), Attrakdiff 2 (M15), ASQ (M16), UEQ-S (M17), UEQ (M19), QUIS (M21) and PSSUQ (M22) are omitted. However, because the UX Needs Scale (M14) assesses UX on a very abstract level and usability should be assessed in more detail, it is recommended to combine the UX Needs Scale (M14) with SUS (M20), which is specifically designed for usability and well evaluated in the literature [57]. In addition, workshops can be conducted in which the system to be evaluated is discussed with the help of focus groups (M11) consisting of SDIKA stakeholders. Among the end-user-based methods, Eye Tracking (M12) alone already fulfills the criterion of a quantitative and qualitative survey approach [55].
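Since SUS (M20) is recommended as part of the final questionnaire, its standard scoring procedure is worth noting: ten items are answered on a 1–5 scale, odd-numbered items contribute (answer − 1), even-numbered items contribute (5 − answer), and the sum is multiplied by 2.5 to yield a score from 0 to 100. A minimal sketch:

```python
# Standard scoring of the System Usability Scale (SUS): ten items answered
# on a 1-5 scale; odd-numbered items contribute (answer - 1), even-numbered
# items contribute (5 - answer); the sum is scaled by 2.5 to a 0-100 score.

def sus_score(answers):
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("expected ten answers in the range 1-5")
    contributions = [(a - 1) if i % 2 == 0 else (5 - a)
                     for i, a in enumerate(answers)]  # i = 0 is item 1 (odd)
    return 2.5 * sum(contributions)

# Example: a moderately positive response pattern.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```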
5 Conclusion
22 methods for analyzing the relation of UX and information security were identified, followed by the selection of the researched methods for SDIKA. For this purpose, the selection criteria data collection type, integration time, type of evaluators, qualification of evaluators, number of evaluators, fixed costs, variable costs, scope, suitability for the SDIKA use cases, suitability for digital identity solutions and consideration of all relevant SDIKA stakeholders were defined. The researched methods were evaluated and filtered against the selection criteria, leaving 17 analysis methods. A further filtering step examined to what extent methods overlapped and could be combined. Among the expert-based methods, Heuristic Walkthrough (M5) was selected because it is a further development of Heuristic Evaluation (M2) and Cognitive Walkthrough (M4). Among the end-user-based methods, Eye Tracking (M12) was chosen, followed by Thinking Aloud (M8, Retrospective Testing) and a final questionnaire combining the UX Needs Scale (M14) and the System Usability Scale (M20). If needed, focus groups (M11) consisting of SDIKA stakeholders can additionally be used to discuss the system under test.
In summary, the selected methods will be used in particular for the SDIKA use case of the digital registration
certificate in the topic area of e-government. So far, the digital application for a registration certificate is
made via the service portal Baden-Württemberg, where users can log in with a user name and password or
with the electronic ID function. A conceivable authentication variant would use a digital certificate of the ID
card that is initially stored in the wallet and then presented to the Baden-Württemberg service portal. The
selected methods are to be used for evaluation in this setting.
6 Acknowledgments
The content of this paper is a result of the project "SDIKA - Schaufenster Sichere Digitale Identitäten Karlsruhe"
(Showcase Secure Digital Identities Karlsruhe). The goal of the project is to use digital identities to connect
people, organizations, and processes. The values of digital sovereignty, fairness, and interoperability are
guiding principles of the project and of the regional showcase. This project is supported by the Federal
Ministry for Economic Affairs and Climate Action (BMWK) on the basis of a decision by the German Bundestag.
In addition, we thank our project partner raumobil GmbH for its cooperation and feedback.
7 References
[1] “Showcase programme ‘Secure Digital Identities.’” https://www.digitale-technologien.de/DT/
Navigation/EN/ProgrammeProjekte/AktuelleTechnologieprogramme/Sichere_Digitale_Identitaeten/si
chere_digitale_ident.html (accessed May 02, 2023).
[2] M. Weinreuter, S. Alpers, and A. Oberweis, “Method for Eliciting Requirements in the area of Digital
Sovereignty (MERDigS),” in 8th International Congress on Information and Communication Technology,
London, England, in press.
[3] M. Kubach, C. H. Schunck, R. Sellung, and H. Roßnagel, “Self-sovereign and Decentralized identity as the
future of identity management?,” 2020, doi: 10.18420/OIS2020_03.
[4] S. Schwalm, D. Albrecht, and I. Alamillo, “eIDAS 2.0: Challenges, perspectives and proposals to avoid
contradictions between eIDAS 2.0 and SSI,” 2022, doi: 10.18420/OID2022_05.
[5] “SDIKA,” SDIKA. https://www.sdika.de/ (accessed Nov. 02, 2022).
[6] S. Dutta, S. Madnick, and G. Joyce, “SecureUse: Balancing Security and Usability Within System Design,”
in 18th international Conference on Human-Computer Interaction, C. Stephanidis, Ed., in
Communications in Computer and Information Science. Cham: Springer International Publishing, 2016,
pp. 471–475. doi: 10.1007/978-3-319-40548-3_78.
[7] J. Nielsen and R. Molich, “Heuristic evaluation of user interfaces,” in Proceedings of the SIGCHI conference
on Human factors in computing systems Empowering people, Seattle, Washington, United States: ACM
Press, 1990, pp. 249–256. doi: 10.1145/97243.97281.
[8] F. Falconi, A. Moquillaza, J. Aguirre, and F. Paz, “Developing and Validating a Set of Usability and Security
Metrics for ATM Interfaces,” in Design, User Experience, and Usability: UX Research and Design, M. M.
Soares, E. Rosenzweig, and A. Marcus, Eds., in Lecture Notes in Computer Science. Cham: Springer
International Publishing, 2021, pp. 225–241. doi: 10.1007/978-3-030-78221-4_16.
[9] A. Yeratziotis, D. Pottas, and D. Greunen, “A Usable Security Heuristic Evaluation for the Online Health
Social Networking Paradigm,” International Journal of Human-computer Interaction, vol. 28, 2012, doi:
10.1080/10447318.2011.654202.
[10] D. Napoli, “Developing Accessible and Usable Security (ACCUS) Heuristics,” in 2018 CHI Conference on
Human Factors in Computing Systems, in CHI EA ’18. New York, NY, USA: Association for Computing
Machinery, 2018, pp. 1–6. doi: 10.1145/3170427.3180292.
[11] J. R. C. Nurse, S. Creese, M. Goldsmith, and K. Lamberts, “Guidelines for usable cybersecurity: Past and
present,” in 2011 Third International Workshop on Cyberspace Safety and Security (CSS), 2011, pp. 21–26.
doi: 10.1109/CSS.2011.6058566.
[12] P. Jaferian, K. Hawkey, A. Sotirakopoulos, M. Velez-Rojas, and K. Beznosov, “Heuristics for evaluating IT
security management tools,” in Proceedings of the Seventh Symposium on Usable Privacy and Security,
in SOUPS ’11. New York, NY, USA: Association for Computing Machinery, 2011, pp. 1–20. doi:
10.1145/2078827.2078837.
[13] D. Feth and S. Polst, “Heuristics and Models for Evaluating the Usability of Security Measures,” in
Proceedings of Mensch und Computer 2019, in MuC’19. New York, NY, USA: Association for Computing
Machinery, 2019, pp. 275–285. doi: 10.1145/3340764.3340789.
[14] S. Ambore, H. Z. Dogan, and E. Apeh, “Development of Usable Security Heuristics for Fintech,” in 34th
British HCI Conference (HCI2021), BCS Learning & Development, 2021. doi: 10.14236/ewic/HCI2021.12.
[15] A. Alarifi, M. Alsaleh, and N. Alomar, “A model for evaluating the security and usability of e-banking
platforms,” Computing, vol. 99, no. 5, pp. 519–535, 2017, doi: 10.1007/s00607-017-0546-9.
[16] V. Fanelle, S. Karimi, A. Shah, B. Subramanian, and S. Das, “Blind and Human: Exploring More Usable
Audio CAPTCHA Designs,” presented at the Sixteenth Symposium on Usable Privacy and Security
(SOUPS 2020), 2020, pp. 111–125.
[17] C. Braz, A. Seffah, and D. M’Raihi, “Designing a Trade-Off Between Usability and Security: A Metrics
Based-Model,” in Human-Computer Interaction – INTERACT 2007, in Lecture Notes in Computer Science,
vol. 4663. Springer Berlin Heidelberg, 2007, pp. 114–126. doi: 10.1007/978-3-540-74800-7_9.
[18] A. Seffah, M. Donyaee, R. B. Kline, and H. K. Padda, “Usability measurement and metrics: A consolidated
model,” Software Qual J, vol. 14, no. 2, pp. 159–178, 2006, doi: 10.1007/s11219-006-7600-8.
[19] C. Wharton, J. Rieman, C. Lewis, and P. Polson, “The cognitive walkthrough method: a practitioner’s
guide,” in Usability inspection methods, USA: John Wiley & Sons, Inc., 1994, pp. 105–140.
[20] A. Herzog and N. Shahmehri, “Usability and Security of Personal Firewalls,” in New Approaches for
Security, Privacy and Trust in Complex Environments, H. Venter, M. Eloff, L. Labuschagne, J. Eloff, and R.
von Solms, Eds., in IFIP International Federation for Information Processing. Boston, MA: Springer US,
2007, pp. 37–48. doi: 10.1007/978-0-387-72367-9_4.
[21] A. Sears, “Heuristic Walkthroughs: Finding the Problems Without the Noise,” International Journal of
Human–Computer Interaction, vol. 9, no. 3, pp. 213–234, 1997, doi: 10.1207/s15327590ijhc0903_2.
[22] J. Smith, L. N. Q. Do, and E. Murphy-Hill, “Why can’t johnny fix vulnerabilities: a usability evaluation of
static analysis tools for security,” in Proceedings of the Sixteenth USENIX Conference on Usable Privacy
and Security, in SOUPS’20. USA: USENIX Association, 2020, pp. 221–238.
[23] R. M. Gonzalez, M. V. Martin, J. Munoz-Arteaga, F. Alvarez-Rodriguez, and M. A. Garcia-Ruiz, “A
measurement model for secure and usable e-commerce websites,” in 2009 Canadian Conference on
Electrical and Computer Engineering, 2009, pp. 77–82. doi: 10.1109/CCECE.2009.5090096.
[24] J. Nielsen, Usability engineering. Boston: Academic Press, 1993.
[25] K. Marky, V. Zimmermann, M. Funk, J. Daubert, K. Bleck, and M. Mühlhäuser, “Improving the Usability
and UX of the Swiss Internet Voting Interface,” in Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems, Honolulu HI USA: ACM, 2020, pp. 1–13. doi: 10.1145/3313831.3376769.
[26] B. John and D. Kieras, “The GOMS Family of Analysis Techniques: Tools for Design and Evaluation,” 1994.
[27] A. Din, Usable Security using GOMS: A Study to Evaluate and Compare the Usability of User Accounts on
E-Government Websites. 2015.
[28] “Diary Studies: Understanding Long-Term User Behavior and Experiences,”
Nielsen Norman Group. https://www.nngroup.com/articles/diary-studies/ (accessed Nov. 16, 2022).
[29] S. Mare, M. Baker, and J. Gummeson, “A study of authentication in daily life,” in Proceedings of the
Twelfth USENIX Conference on Usable Privacy and Security, in SOUPS ’16. USA: USENIX Association, 2016,
pp. 189–206.
[30] J. Nielsen, “Focus Groups in UX Research,” Nielsen Norman
Group. https://www.nngroup.com/articles/focus-groups/ (accessed Nov. 16, 2022).
[31] K. Marky, M. Schmitz, V. Zimmermann, M. Herbers, K. Kunze, and M. Mühlhäuser, “3D-Auth: Two-Factor
Authentication with Personalized 3D-Printed Items,” in Proceedings of the 2020 CHI Conference on
Human Factors in Computing Systems, Honolulu HI USA: ACM, 2020, pp. 1–12. doi:
10.1145/3313831.3376189.
[32] A. Bojko, “Eye Tracking in User Experience Testing: How to Make the Most of It,” Proceedings of the 14th
Annual Conference of the Usability Professionals’ Association (UPA). Montréal, Canada, 2005.
[33] M. Manhartsberger and N. Zellhofer, “Eye tracking in usability research: What users really see,” in
Usability Symposium 2005, Vienna, Austria, 2005, pp. 161–166.
[34] A. Darwish and E. Bataineh, “Eye tracking analysis of browser security indicators,” in 2012 International
Conference on Computer Systems and Industrial Informatics, 2012, pp. 1–6. doi:
10.1109/ICCSII.2012.6454330.
[35] J. Rowley, “Designing and using research questionnaires,” Management Research Review, vol. 37, no. 3,
pp. 308–330, 2014, doi: 10.1108/MRR-02-2013-0027.
[36] A. Lund, “Measuring Usability with the USE Questionnaire,” Usability and User Experience Newsletter of
the STC Usability SIG, vol. 8, 2001.
[37] C. Lallemand and V. Koenig, “Lab testing beyond usability: challenges and recommendations for
assessing user experiences,” Journal of Usability Studies, vol. 12, no. 3, pp. 133–154, 2017.
[38] M. Hassenzahl, M. Burmester, and F. Koller, “AttrakDiff: Ein Fragebogen zur Messung wahrgenommener
hedonischer und pragmatischer Qualität,” in Mensch & Computer 2003, in Berichte des German Chapter
of the ACM, vol. 57. Wiesbaden: Vieweg+Teubner Verlag, 2003, pp. 187–196. doi: 10.1007/978-3-322-
80058-9_19.
[39] J. Lewis, “Psychometric evaluation of an after-scenario questionnaire for computer usability studies:
The ASQ,” ACM SIGCHI Bulletin, vol. 23, pp. 78–81, 1991, doi: 10.1145/122672.122692.
[40] M. Schrepp, A. Hinderks, and J. Thomaschewski, “Design and Evaluation of a Short Version of the User
Experience Questionnaire (UEQ-S),” International Journal of Interactive Multimedia and Artificial
Intelligence, vol. 4, 2017, doi: 10.9781/ijimai.2017.09.001.
[41] J. Sauro and J. S. Dumas, “Comparison of three one-question, post-task usability questionnaires,” in
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston MA USA: ACM,
2009, pp. 1599–1608. doi: 10.1145/1518701.1518946.
[42] B. Laugwitz, T. Held, and M. Schrepp, “Construction and Evaluation of a User Experience Questionnaire,”
in HCI and Usability for Education and Work, in Lecture Notes in Computer Science, vol. 5298. Springer
Berlin Heidelberg, 2008, pp. 63–76. doi: 10.1007/978-3-540-89350-9_6.
[43] J. Brooke, “SUS: A quick and dirty usability scale,” Usability Evaluation In Industry, 1995.
[44] J. P. Chin, V. A. Diehl, and K. L. Norman, “Development of an instrument measuring user satisfaction of
the human-computer interface,” in Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, in CHI ’88. New York, NY, USA: Association for Computing Machinery, 1988, pp. 213–218. doi:
10.1145/57167.57203.
[45] J. R. Lewis, “IBM computer usability satisfaction questionnaires: Psychometric evaluation and
instructions for use,” International Journal of Human-Computer Interaction, vol. 7, no. 1, pp. 57–78, 1995,
doi: 10.1080/10447319509526110.
[46] M. Fassl, L. T. Gröber, and K. Krombholz, “Exploring User-Centered Security Design for Usable
Authentication Ceremonies,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing
Systems, in CHI ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 1–15. doi:
10.1145/3411764.3445164.
[47] W. Wang, D. Arya, N. Novielli, J. Cheng, and J. L. C. Guo, “ArguLens: Anatomy of Community Opinions On
Usability Issues Using Argumentation Models,” in Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems, in CHI ’20. New York, NY, USA: Association for Computing Machinery, 2020,
pp. 1–14. doi: 10.1145/3313831.3376218.
[48] K. Marky, M.-L. Zollinger, P. Roenne, P. Y. A. Ryan, T. Grube, and K. Kunze, “Investigating Usability and
User Experience of Individually Verifiable Internet Voting Schemes,” ACM Trans. Comput.-Hum. Interact.,
vol. 28, no. 5, 2021, doi: 10.1145/3459604.
[49] M. Korir, S. Parkin, and P. Dunphy, “An Empirical Study of a Decentralized Identity Wallet: Usability,
Security, and Perspectives on User Control,” in Eighteenth Symposium on Usable Privacy and Security
(SOUPS 2022), 2022, pp. 195–211.
[50] A. L. Stephano and D. P. Groth, “USEable security: interface design strategies for improving security,” in
Proceedings of the 3rd international workshop on Visualization for computer security, in VizSEC ’06. New
York, NY, USA: Association for Computing Machinery, 2006, pp. 109–116. doi: 10.1145/1179576.1179598.
[51] M. González, L. Masip, A. Granollers, and M. Oliva, “Quantitative analysis in a heuristic evaluation
experiment,” Advances in Engineering Software, vol. 40, no. 12, pp. 1271–1278, 2009, doi:
10.1016/j.advengsoft.2009.01.027.
[52] M. P. González, J. Lorés, and A. Granollers, “Assessing Usability Problems in Latin-American Academic
Webpages with Cognitive Walkthroughs and Datamining Techniques,” in Usability and
Internationalization. HCI and Culture, N. Aykin, Ed., in Lecture Notes in Computer Science. Springer
Berlin, Heidelberg, 2007, pp. 306–316. doi: 10.1007/978-3-540-73287-7_38.
[53] M. Georgsson, N. Staggers, E. Årsand, and A. Kushniruk, “Employing a user-centered cognitive
walkthrough to evaluate a mHealth diabetes self-management application: A case study and beginning
method validation,” Journal of Biomedical Informatics, vol. 91, p. 103110, 2019, doi:
10.1016/j.jbi.2019.103110.
[54] M. Schrepp and T. Held, “Anwendung von GOMS-Analysen und CogTool in der Design-Praxis,” in Mensch
& Computer 2010, Oldenbourg Wissenschaftsverlag, 2010, pp. 351–360. doi: 10.1524/9783486853483.351.
[55] K. R. Grzyb, A. Rösler, J. Rockstroh, G. Quint, and T. Bartel, “Eye-Tracking in Usability-Tests - Die
häufigsten Fehler und wie man sie vermeiden kann,” 2016, doi: 10.18420/MUC2016-UP-0149.
[56] M. Hassenzahl, F. Koller, and M. Burmester, “Der User Experience (UX) auf der Spur: Zum Einsatz von
www.attrakdiff.de,” Usability Professionals 2008, 2008.
[57] S. Mclellan, A. Muddimer, and S. Peres, “The Effect of Experience on System Usability Scale Ratings,”
Journal of Usability Studies, vol. 7, 2011.
[58] S. Naeini and S. Mostowfi, “Using QUIS as a Measurement Tool for User Satisfaction Evaluation (Case
Study: Vending Machine),” vol. 2015, pp. 14–23, 2015, doi: 10.5923/j.ijis.20150501.03.
[59] “Enter the world of eye tracking with Tobii Pro Spark.” https://www.tobii.com/products/eye-
trackers/screen-based/tobii-pro-spark (accessed Apr. 15, 2023).
[60] F. Elbabour, O. Alhadreti, and P. Mayhew, “Eye tracking in retrospective think-aloud usability testing: is
there added value?,” J. Usability Studies, vol. 12, no. 3, pp. 95–110, 2017.
Authors
Max Sauer (M.Sc.) is a research scientist at the FZI Forschungszentrum Informatik
in Karlsruhe in the research division Software Engineering (SE). In addition, he is
a PhD student at the Karlsruhe Institute of Technology (KIT) in the research group
Information Systems. His current research concerns the conflict area between
user experience (UX) and information security.
email: sauer@fzi.de
Markus Toran is a master's student of informatics at the Karlsruhe Institute of
Technology (KIT), specializing in IT security. He is particularly interested in
human-centered IT security and is currently writing his master's thesis at the FZI
Forschungszentrum Informatik on UX and IT security for digital identities.
email: toran@fzi.de
Imprint
Editor
FZI Forschungszentrum Informatik
Haid-und-Neu-Str. 10-14
76131 Karlsruhe
+49 721 9654-0
fzi@fzi.de
www.fzi.de
The editor's work is licensed under the Creative
Commons Attribution 4.0 International license (CC BY
4.0). You can read the license terms here:
http://creativecommons.org/licenses/by/4.0