Helping John to Make Informed Decisions on Using Social Login
Farzaneh Karegar
Karlstad University
Nina Gerber
Technische Universität Darmstadt
Melanie Volkamer
Karlstad University and Technische Universität Darmstadt
Simone Fischer-Hübner
Karlstad University
ABSTRACT

Users make two privacy-related decisions when signing up for a new Service Provider (SP): (1) whether to use an existing Single Sign-On (SSO) account of an Identity Provider (IdP), or not, and (2) the information the IdP is allowed to share with the SP under specific conditions. From a privacy point of view, the use of existing social network-based SSO solutions (i.e. social login) is not recommended. This advice, however, comes at the expense of security, usability, and functionality. Thus, in principle, it should be up to the user to consider all advantages and disadvantages of using SSO and to consent to requested permissions, provided that she is well informed. Another issue is that existing social login sign-up interfaces are often not compliant with legal privacy requirements for informed consent and Privacy by Default. Accordingly, our research focuses on enabling informed decisions and consent in this context. To this end, we identified users' problems and usability issues from the literature and an expert cognitive walkthrough. We also elicited end user and legal privacy requirements for user interfaces (UIs) providing informed consent. This input was used to develop a tutorial to inform users on the pros and cons of sign-up methods and to design SSO sign-up UIs for privacy. A between-subject laboratory study with 80 participants was used to test both the tutorial and the UIs. We demonstrate an increase in the level to which users are informed when deciding and providing consent in the context of social login.
CCS CONCEPTS
• Security and privacy → Social aspects of security and privacy; Privacy protections; Usability in security and privacy.

KEYWORDS
Informed Decision, Usable Privacy, Privacy by Design, GDPR, Single Sign-On
ACM Reference Format:
Farzaneh Karegar, Nina Gerber, Melanie Volkamer, and Simone Fischer-Hübner. 2018. Helping John to Make Informed Decisions on Using Social Login. In Proceedings of SAC 2018: Symposium on Applied Computing (SAC 2018). ACM, New York, NY, USA, 10 pages.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
SAC 2018, April 9–13, 2018, Pau, France
© 2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-5191-1/18/04. . . $15.00
Single Sign-On (SSO) solutions provided by social networks are broadly deployed nowadays. Facebook, Google, and Twitter are three top English-speaking services that also act as Identity Providers (IdPs) [ ], enabling authentication to another Service Provider (SP), also known as a relying party. When signing up¹ to websites offering a social login besides a manual sign-up option, users encounter two privacy-related decisions. First, they need to decide whether to sign up using the social login method, and second, they need to decide whether to consent to granting the SP permissions to access personal information from their social network profile, under specific conditions. Contrary to a manual sign-up method for SPs, a social login relieves users of the need to recall many sets of credentials, and it is less time consuming, as the personal information is forwarded directly from the IdP to the SP. However, the social network also becomes a single point of failure, as without the network and the account one cannot sign in to the SP. Moreover, the social network also learns which services its customers communicate with, and when; thus the SSO method enables increased user profiling. Further privacy issues result from the way that permissions to share personal information from the social network profile are granted to an SP. The current UIs for signing up with social login methods and for consenting to share information, with unclearly displayed opt-out instead of clear opt-in choices, make it difficult for users to conceive, notice and control what they share: e.g. previous studies show that participants were not aware of the information they consented to share, or even that the service provider also had the right to access this information in the future [ ]. Thus, users do not give their informed consent since, contrary to current practice, users should be fully informed as to what they are consenting to when they use social login methods. Enabling an informed decision is a known problem in the privacy context [ ] and a prerequisite for obtaining consent for data processing, which must be freely given, specific and informed, according to Art. 4 (11) of the EU General Data Protection Regulation (GDPR) [ ]. An informed decision does not necessarily imply that people should select the most privacy-friendly method or disclose less personal information, even though this should be offered to the user as the default option, according to the Data Protection by Default
principle postulated by Art. 25 GDPR. An informed decision means that the individual can decide based on insights into the different sign-up methods and into the personal information about them that can be shared, with whom, and under which conditions.

¹Sign-up: registration for the first time.
The objective of our research presented here is to develop and evaluate means to empower users to make informed decisions in the context of social login methods. To support users in deciding whether to use social logins, a tutorial was developed. Furthermore, to achieve informed consent to share personal information, new UI concepts based on 'Drag and Drop' and 'Question and Answer' were designed, developed and tested.
Tsormpatzoudi et al. [ ] emphasize the importance of involving end users as stakeholders in the Privacy by Design process, which involves multiple disciplines including usability design, as the end users should ultimately profit from Privacy by Design. Also, Cavoukian stresses that the Privacy by Design principle Respect for Privacy extends to the need for UIs to be "human-centered, user-centric and user-friendly, so that informed privacy decision may be reliably exercised" [ ]. For developing our UIs, we follow a Privacy by Design and human-centered approach involving end users as stakeholders by addressing end user-specific and legal privacy requirements from the beginning and throughout the UI development cycle.
The remainder of this paper is structured as follows: First, users' misconceptions and problems identified in i) the literature, ii) a cognitive walkthrough, and iii) a legal analysis help us define design requirements for obtaining informed decisions from users, and are presented in Section 2. Meeting those requirements resulted in i) the general knowledge necessary to make an informed decision (transferred into a short tutorial in Section 3), and ii) effective new UIs to enable informed consent (Section 3). Both the tutorial and the UIs were developed in an iterative process and evaluated in a lab user study with 80 participants (Section 4). Results are discussed in Sections 5 and 6. Section 7 discusses related work and Section 8 concludes the paper.
Firstly, to propose solutions that help users make informed decisions, users' problems, misconceptions, and usability problems of the current Facebook UIs are analysed. Then, relevant requirements to counter these problems and to support better-informed decisions are elicited. To this end, a literature review, a legal analysis, and an expert cognitive walkthrough (CW) of the current user interfaces of the Facebook SSO (see Figure 1) were conducted. The results are reported in this section.
2.1 Literature Review, CW and Derived Requirements
A CW is an expert review method in which interface experts imitate users, walking through a series of tasks [ ]. We defined tasks with different types of users in mind to identify as many problems as possible. Two experts (authors of the paper) worked together through the current user interfaces of Facebook SSO to identify usability issues and potential user problems. The problems detected (denoted with P#) in the CW and the literature review [ ] helped to elicit requirements (denoted with R#), which can be categorised into three groups: sign-up/in, underlying process, and consent form related issues. For findings and problems common to the CW and the literature review, we avoid redundancy and cite the results in the literature review.

Figure 1: Facebook SSO authorization dialogues. The right one appears if Edit This is clicked, as indicated by the dashed line.
Sign-up/in Related Issues. The problems encountered in the CW and the requirements elicited in this group pertain to the decision a user must make about how to sign up for an SP.

P1: When a user wants to select between the sign-up methods, there is no source of information available by which she can gain knowledge about the properties of the methods and what happens if she selects them, i.e. the advantages, the disadvantages, and the steps each method requires for the sign-up process. R1: There is a need for a proper source of information for users who need more knowledge to decide which method to use for sign-up.

P2: Depending on the current practices of the SP and the sign-up methods offered, different personal data may be requested by each method. However, before choosing a method, the type of personal data requested by each method is not conveyed to users. R2: The personal data that each method requests should be communicated to users before the selection is made.
Process Related Issues. Several studies report that being unaware of the underlying process (e.g. how the sign-up takes place) when using SSO systems causes problems and misconceptions for users [ ]. The related requirements derived from the problems identified in the literature and in the CW are listed below:

P3: Sun et al. [ ] reported that all of their participants expressed great concern about IdP phishing attacks once they were informed of this issue; half of the participants (51%), even when prompted, could not find any distinguishing features on a bogus Google login form. R3: The necessity to check for phishing attacks should be communicated to users.

P4: Results from the CW emphasise that users may get confused by new dialogues that open up showing the Facebook web page and then disappear again during the sign-up process. R4: The direction of movements and the various steps should be clear to users during the entire sign-up process.
P5: Sun et al.'s and Arianezhad's studies [ ] shed light on participants' security misconceptions when they use SSO solutions. For example, among 19 participants including both experts and lay users, just four correctly answered that their IdP passwords were not learned by the SP [ ]. R5: Users should not only be informed about the personal information that is shared with the SP but also that their credentials for the IdP are not shared.
Informed Consent Related Issues. The requirements categorised in this group relate to the lack of knowledge and meaningful transparency about the information an SP receives from an IdP, and under which conditions. In other words, there are problems related to improper consent forms.
P6: Results of different user studies show that there is a mismatch between the access rights participants believe they grant and the access rights they actually grant to SPs. Bauer et al. [ ] show that participants have little insight into the level of access that SPs actually receive: 38% of participants erroneously believed that the SP could access the attribute just once. In addition, Sun et al. [ ] report that most of their participants were uncertain about the types of data that they shared, and did not know that SPs can post messages back to the IdP on their behalf. R6: Users should be properly informed about the data they share with the SP, for how long, and the kind of access rights the SP gets, based on their permissions.
P7: Over the years, interface designers have trained users to repeatedly click through dialogues to finish their primary tasks. Bauer et al. [ ] report that participants' understanding of the information IdPs shared with SPs was based on preconceptions rather than the content of the authorisation dialogues. In Egelman's study [ ], participants also failed to notice changes made to the dialogues, which is due to habituation. R7: Proper substitutes for the current common integrated design solutions in authorisation dialogues, which are robust against habituation, should be considered.
P8: Robinson et al. [ ] report that most participants did not realise that they were giving access to their personal information even if they had marked it with a privacy level other than public. R8: Users should be made aware of the irrelevance of privacy settings² to the shared information, and proper design solutions should be considered to alert users when conflicts occur.
P9: As emerged in the CW, the public profile³ information, which is always pre-selected to be shared by default and is unchangeable, is not clearly defined. Users can have various interpretations of this item. R9: A clear description of the exact personal information included in the public profile should be provided to users.
P10: One study [ ] reports that the vast majority of its participants (84%) did not know that they could change the sharing decisions they had made previously; at the same time, almost half (48%) of the participants reported that the availability of an effective audit tool would make them use an IdP more often. R10: Users should be made aware of the possibility to revoke granted permissions and how to do so.
P11: Results from the CW show that improper language and the size of objects used in the current Facebook authorisation interfaces are among the reasons why users' attention may be diverted so that they finish the sign-up task before gaining proper knowledge of what is shared, and how. For example, the big button whose click signifies consent has an improper label, Continue as [user's name], and its size and colour dominate all the other objects on the screen. The problems also include the ambiguous link for changing the selected data (very small, with an unrelated name), an uncommunicative sentence conveying the write access accompanied by an inappropriate lock icon, and the very small, hard-to-see links to the privacy policy and terms of service of the SP (see Figure 1). R11: Language and the size of objects should be designed to help users not only finish the task but finish it while they are informed and their privacy is not invaded.

²Controls available on many social networks and other websites that allow users to limit who can access their profile and what information visitors can see.
³Includes all information that is public by default (e.g. cover photo), made publicly available by users, or published publicly by others to Facebook, and is linked to a user's account.
2.2 Legal Requirements

As pointed out in [ ], the legal privacy principles have Human-Computer Interaction (HCI) implications, as they describe "mental processes and behavior of the end user that must be supported in order to adhere to the principle". In this section, we elicit legal requirements related to transparency and informed consent pursuant to the GDPR [ ] and derived from Opinion 10/2004 of the Art. 29 Data Protection Working Party [ ] that have a potential impact on the design of authorisation dialogues. According to Art. 4 (11) GDPR, consent is defined as "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her". This definition implies that the following legal requirements are of special importance for our authorisation user interfaces:
R12: Consent should be given by a clear affirmative action (Art. 4 (11) GDPR). According to Recital 32 of the GDPR, the affirmative action could include ticking a box, choosing technical settings or other statements which clearly indicate the data subject's acceptance of the proposed processing of his or her personal data. Thus, implicit and opt-out consent, and particularly silence, pre-ticked boxes or inactivity, are presumed inadequate to confer consent. Opt-out choices for pre-selected data items that are not minimal, i.e. not needed for the purpose of the requested service, would also violate the Data Protection by Default principle of Art. 25 GDPR.
R13: Consent needs to be informed (Art. 4 (11) GDPR). Pursuant to Art. 13 (1) GDPR, and stressed in Recital 42, when personal data are collected from a data subject (e.g. in the authorisation dialogue), the data subject should at least be made aware of the identity of the controller and the intended purposes of the processing. Furthermore, according to Art. 13 (2) GDPR, the controller shall provide the data subject with further information to ensure fair and transparent processing. Such policy information includes, but is not limited to, information on recipients/categories of recipients, the period for which the personal data will be stored, and data subject rights, including the right to withdraw consent at any time.
R14: Policy information to be provided pursuant to Art. 13 GDPR needs to be given to the data subject in a concise, transparent, intelligible and easily accessible form (Art. 12 (1) GDPR). To make policy information more transparent and easily accessible, the Art. 29 Data Protection Working Party recommended in its Opinion 10/2004 [ ] providing policy information in a multi-layered format, where a short privacy notice on the top layer must offer individuals the core information, i.e. the identity of the controller and the data processing purposes, and a clear indication must be given of how the individual can access the other layers presenting the additional policy information. Furthermore, from the Data Protection by Default principle (Art. 25 GDPR), we derive:
R15: Only the minimal data needed for a service should be mandatory; other data items should be optional or voluntary.
Facebook's social login UIs (Figure 1) do not comply with the legal requirements of the GDPR for informed consent. In regard to the requirement for a clear affirmative action (R12), even though users still have to click a button to finish the sign-up process and provide consent, there is no clear instruction, since the Continue as [user's name] button does not mean Agree. Opt-out choices that are hidden on a second layer, and pre-selected data items that are not mandatory, not only violate the affirmative action requirement but also fail to comply with the Data Protection by Default principle (i.e. R12 and R15 are violated). Moreover, information about the data processing purposes is not displayed in the UIs. In other words, the required policy information is neither made transparent nor easily accessible, as required by R14.
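The two legal requirements above, and the violations just described, can be expressed as a mechanical check over a consent form's state. The following sketch is ours, not from the paper; all names are illustrative. It flags pre-selected non-minimal items (R15, and by extension R12) and a missing affirmative action (R12):

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    mandatory: bool    # needed for the purpose of the requested service
    preselected: bool  # shown to the user as already ticked / already shared

@dataclass
class ConsentForm:
    items: list
    affirmed: bool  # user performed a clear affirmative action (e.g. clicked "Agree")

def gdpr_violations(form):
    """Return violations of R12 (clear affirmative, opt-in consent) and
    R15 (only minimal data mandatory; everything else opt-in by default)."""
    problems = []
    for item in form.items:
        # Pre-ticked boxes do not confer consent (Recital 32); pre-selecting
        # non-minimal data also violates Data Protection by Default (Art. 25).
        if item.preselected and not item.mandatory:
            problems.append(f"R15: optional item '{item.name}' is pre-selected")
    if not form.affirmed:
        problems.append("R12: no clear affirmative action recorded")
    return problems
```

A form modelled on Figure 1, with optional profile items pre-selected behind an Edit This link and only a Continue button, would trigger both checks; a form with empty opt-in choices and an explicit Agree action passes.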
Based on the findings in the previous two sections, we discuss whether the identified issues can be addressed by improving transparency and showing corresponding information in new user interfaces. However, to avoid overwhelming users with a surfeit of information, we split the required information into i) the group that is independent of the concrete SP and is required to make an informed choice of sign-up method (such as the pros and cons of the social login option), and ii) the group that is dependent on the particular SP and is required for providing informed consent (such as the requested data items, the identity of the controller, and the purposes of processing). In this section, we discuss how the requirements are addressed by the design of the tutorial and the new UIs for Facebook social login.
We developed a tutorial aimed at empowering users to make informed decisions about the method they select to sign up for an SP. It was developed in an iterative manner, i.e. integrating feedback from academic experts to improve the content and its understandability. The tutorial can be used independently of concrete user interfaces and contains two parts: 1) a brief process description of sign-up and sign-in, and 2) explanations of the advantages and disadvantages.
The rst part, describing the steps involved in each method, mainly
addresses the following two requirements: R1 and R4. The second
part explains the advantages and disadvantages of the social login
compared to manual sign-up methods. The elaboration on disadvan-
tages of the social login method in the tutorial also includes some
information about the phishing problem, and the conditions of data
sharing in the context of social login, e.g. write access and validity
duration of access, which may cause privacy issues. Consequently,
the second part addresses requirements R1,R3, and R6 identied in
Section 3.2, and in particular the possibility of the write access and
duration of access granted.
Moreover, the advantages and disadvantages of social logins in comparison to the manual sign-up method listed in the tutorial were identified from the literature, such as [ ], and from brainstorming with academic experts. The advantages and disadvantages encompass user-related issues and are classified into two categories: (1) authentication-related items and (2) items related to data sharing. The authentication-related advantage is that no new password is required for every website registration, and the disadvantage in this category is the fact that the social network becomes a single point of failure. Relevant to data sharing, using social logins saves users' time, most importantly when they want to sign up for a website. On the other hand, the disadvantage is mostly the lack of privacy. When evaluating the effects of the tutorial in Section 5.1, we considered these four advantages and disadvantages. The detailed descriptions of the evaluations are made available separately online together with the content of the tutorial⁴.
Note that for both sign-up methods we consider that the same information is requested by the SP; as this varies based on the specific SP, the relevant knowledge (i.e. the information requested from users in each method) is omitted from the tutorial and is provided on the sign-up page of the SP (R2).
User Interfaces and Informed Consent. Aiming to address the related requirements in Section 2 to help users give informed consent, we developed new interfaces for the sign-up process using Facebook SSO. Here, we describe how the end user and legal requirements given in Section 2 are met by the proposed interfaces. The proposed user interfaces are depicted in Figures 2 and 3.
To actively involve users with an affirmative action in the selection of the personal information they share (R12), a drag-and-drop method is employed instead of pre-selected checkboxes. Pettersson et al. [ ] suggest Drag And Drop Agreements (DADAs) as an alternative way for users to express consent by moving graphic representations of their data to receivers' locations on a map: the user not only had to pick a set of predefined data but had to choose the correct personal data symbol(s) and drop them on the correct receiver symbol. However, DADAs remained a proposal and were never tested in usability studies. Section 7 elaborates more on alternative designs for obtaining informed consent. In our newly proposed UIs, we have one receiver, the SP, rather than several. Users drag the mandatory or optional information and drop it into a single dedicated box (Figure 2) to indicate what they want to share. They then click the corresponding button to accept the sharing of what they selected. However, when innovative interfaces become prevalent, habituation might reappear and detract from the reported short-term benefits [ ]. Thus, to make the proposed UIs robust against habituation and to meet requirement R7, each data item could, for example, have a specific place in the white box represented by a meaningful, relevant icon. However, testing against habituation is deferred to future work. It should be noted that the data items to be shared are considered separately and not as a single Public Profile set (R9), as in the current Facebook SSO interfaces (Figure 1).
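The drag-and-drop flow just described can be summarised as a small state model: nothing is pre-selected, mandatory and optional items are kept apart, and accepting is only valid once all mandatory items have been dropped. This is a minimal sketch of the interaction logic, with illustrative names, not the paper's implementation:

```python
class DragDropConsent:
    """Model of the Figure 2 interaction: opt-in by drag and drop (R12, R15)."""

    def __init__(self, mandatory, optional):
        self.mandatory = set(mandatory)   # minimal data the service needs
        self.optional = set(optional)     # shared only if actively dropped
        self.dropped = set()              # starts empty: no pre-selection

    def drop(self, item):
        """User drags a data item into the sharing box."""
        if item not in self.mandatory | self.optional:
            raise ValueError(f"unknown data item: {item}")
        self.dropped.add(item)

    def accept(self):
        """Clicking the accept button only succeeds once all mandatory
        items have been dragged into the box; returns (ok, detail)."""
        if not self.mandatory <= self.dropped:
            return (False, self.mandatory - self.dropped)
        return (True, self.dropped)
```

For example, with `email` mandatory and `friend list` optional, `accept()` fails until `drop("email")` has been performed, and `friend list` is only shared if the user drops it herself.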
Our proposed authorisation dialogues contain multi-layered privacy notices to meet requirement R14. The information required by R6 and R10 and the legal requirement R13 is provided as part of the top-layer short privacy notice. In the UI, optional data is clearly marked and separated from the mandatory data (R15). We first provide the identity of the SP and the purposes of data sharing to meet R13 and R14. Furthermore, information about the duration of access, the level of access the SP gets (e.g. write or read access), the possibility of access revocation, the fact that the IdP credentials are not shared with the SP, and the independence of sharing personal information from the privacy settings on the IdP (Figure 2) is also provided, to meet requirements R5-R6, R8, R10 and R13. Besides, clearly visible links to the full privacy policy are provided (R14).
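The content of such a two-layer notice can be sketched as a simple structure: a short top layer carrying the core facts, plus a link to the lower layer with the full policy. All field names and values below are illustrative assumptions, not taken from the paper's UI:

```python
# Top-layer short notice for a fictitious SP (requirement labels in comments).
short_notice = {
    "controller": "PhotoHex",                                   # identity of the SP (R13)
    "purposes": ["account creation", "photo printing"],         # processing purposes (R13)
    "access_duration": "until you revoke it",                   # duration of access (R6)
    "access_level": "read only; no posting on your behalf",     # level of access (R6)
    "revocation": "permissions can be revoked at any time",     # revocation (R10)
    "credentials": "your IdP password is never shared",         # credentials note (R5)
    "settings_note": "IdP privacy settings do not limit this",  # independence note (R8)
    "full_policy_url": "https://example.org/privacy",           # lower layer (R14)
}

def render_top_layer(notice):
    """Assemble the concise top-layer notice text (R14)."""
    lines = [notice["controller"] + " requests data for: " + ", ".join(notice["purposes"])]
    for key in ("access_duration", "access_level", "revocation",
                "credentials", "settings_note"):
        lines.append(notice[key])
    lines.append("Full privacy policy: " + notice["full_policy_url"])
    return "\n".join(lines)
```

The point of the structure is that every fact the requirements call for has a dedicated slot, so a missing field fails loudly instead of silently disappearing from the dialogue.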
However, it is a well-known problem that users often ignore privacy notices, as they are long, time-consuming to read and difficult to understand. Furthermore, providing too many or repetitive privacy notices can result in habituation: users repeatedly click on notices without considering their content. Even with short, multi-layered privacy notices, much of the information may not seem relevant to users. Many data practices are anticipated and obvious, may not cause concern, or may not apply to a user's current interaction with an SP [ ]. Another approach is to force interaction with a notice, which can reduce habituation effects [ ]. Rowbotham et al. demonstrated that combining an introductory video, standard consent language, and an interactive quiz on a tablet-based system can improve comprehension of clinical research study procedures and risks [ ]. Therefore, we designed a second authorisation dialogue (Figure 3) to actively involve users and force them to pay attention to the conditions of data sharing, by integrating a question and answer (Q&A) method. Users must answer some questions and check their responses. In the case of wrong answers, the correct responses are shown to the users, who must select the right answers and check them again. When answering the questions, users can revert to the first authorisation dialogue and read the short notices.
Figure 2: Drag & Drop interface (first authorisation dialogue)
The purpose of our user study is twofold: 1) to evaluate whether reading the tutorial helps users make better-informed decisions, i.e. informed choices when they select a method to sign up for a website, and 2) to analyse the extent to which the new interfaces help users give better-informed consent when granting permission to share their personal data, in comparison to the Facebook social login UIs. Framing these questions as hypotheses, we tested the following:
H1: Compared to the user group who do not read the tutorial, users who receive the tutorial make better-informed decisions when they select a sign-up method. In other words, they have a better understanding of the advantages and disadvantages of the sign-up methods they select than users who do not receive the tutorial.

H2: Users who use the new interfaces have a better understanding of how, and under which conditions, permission is granted, compared to the group who experience the current Facebook SSO interfaces. Specifically, the users of the new interfaces know better about the irrelevance of Facebook privacy settings to the shared data items (R8), the possibility of access revocation (R10 and R13) and the fact that the SPs do not learn their IdP credentials (R5).

H3: Users who use the new interfaces have a better understanding of the data items to which access is granted (and to which access is not granted) during the authorisation process, compared to the group who experience the current Facebook SSO interfaces. Specifically, they know better for which data items and for how long read access is granted (or not granted) and whether write access is granted to the SP (R6, partly R13).

Figure 3: Q&A interface (second authorisation dialogue)
4.1 Ethics, Recruitment & Demographics
All necessary steps were taken to adhere to the Swedish Research
Council’s principles of ethical research [
]. This includes obtaining
informed consent, not using participants’ actual or sensitive data
to sign up and debrieng participants at the end of the study.
Participants were recruited via social media, mailing lists, and paper flyers posted across the university and at public places in the city center. When signing up for an appointment in the lab, participants were asked to confirm their eligibility, i.e. that they were at least 18 years old and had a Facebook account. Participants were randomly assigned to one of the four groups of the study. They received either a lunch coupon for the university canteen or a gift card on completion of the study, depending on where they were recruited.
In total 80 people, all of whom had Facebook accounts, par-
ticipated in our study. Among them, 45 had already experienced
Facebook login. The age range is 19 - 60 years (M=32.7, SD=10.7).
SAC 2018, April 9–13, 2018, Pau, France F. Karegar et al.
Except for four participants who hold only a high-school degree, the others have, or are pursuing, degrees in various subjects, including Psychology, Political Science, Applied Mathematics, Geography, Nursing, and Architecture. One has a degree in Computer Engineering. Table 1 shows our participants' demographics. Using the IUIPC questionnaire (ten questions covering control, awareness, and collection, each on a 7-point Likert scale), we assessed that participants are rather concerned about information privacy (M=56.31, SD=8.65, Min=27, Max=70).
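As a concrete illustration of the scale used here, the ten 7-point IUIPC items summed per participant give a score between 10 and 70, consistent with the reported Min=27 and Max=70. The following is a minimal sketch; the function name and scoring-by-sum are our illustration, while the item set (control, awareness, collection) follows Malhotra et al. [16]:

```python
def iuipc_score(responses):
    """Sum ten 7-point Likert responses (1 = strongly disagree,
    7 = strongly agree). Valid totals range from 10 to 70."""
    if len(responses) != 10:
        raise ValueError("IUIPC uses ten items")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on a 1-7 scale")
    return sum(responses)

# A maximally concerned respondent scores 70; the scale midpoint is 40.
print(iuipc_score([7] * 10))   # 70
print(iuipc_score([4] * 10))   # 40
```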
Table 1: Demographics – in total and per group

Properties                                  Total   G1     G2     G3     G4
Age
  18-25                                     25      6      6      5      8
  26-32                                     23      7      5      5      6
  33-39                                     14      2      4      4      4
  40-46                                     7       3      0      3      1
  47-53                                     5       1      3      1      0
  54-60                                     6       1      2      2      1
Gender
  Male                                      31      8      7      7      9
  Female                                    49      12     13     13     11
Educational background
  High school                               4       1      1      1      1
  Bachelor                                  37      9      11     8      9
  Master                                    19      8      2      4      5
  PhD                                       20      2      6      7      5
English proficiency level
  Elementary                                6       1      2      2      1
  Limited                                   16      5      4      3      4
  Professional                              25      6      6      5      8
  Full professional                         25      8      5      7      5
  Native                                    8       0      3      3      2
Privacy concern values using IUIPC for awareness, control and collection
  Mean                                      56.31   55.50  53.00  58.55  57.75
  SD                                        0.97    2.02   2.10   1.55   1.92
Using password manager
  No                                        59      18     12     16     13
  Yes                                       19      2      7      4      6
  Do not know                               2       0      1      0      1
Previous experience of Facebook SSO login
  No                                        31      6      7      9      9
  Yes                                       45      11     12     11     11
  Do not know                               4       3      1      0      0
4.2 Study Design
A functional mock-up of the sign-up process for a fictitious photo printing website, PhotoHex, was developed using the Axure prototyping tool. The mock-up provided all the interfaces needed to sign up to the website using Facebook SSO, simulating both the real Facebook interfaces and our proposed UIs (Figures 2 and 3).
A between-subject study with four groups was conducted: Group 1 (G1) read the tutorial and signed up using the new interfaces, Group 2 (G2) did not receive the tutorial and signed up using the new interfaces, Group 3 (G3) read the tutorial and signed up using the current Facebook interfaces, and Group 4 (G4) did not receive the tutorial and signed up using the current Facebook interfaces.
We conducted the study with participants individually. Since we did not want our participants to be primed for privacy, we did not reveal our full study purpose until afterwards. We carefully and ethically obfuscated the purpose, both during the recruitment phase and during our interactions with participants in the study session, using some dummy questions. The stated goal of the study was introduced as a usability test of a photo printing website, PhotoHex, and the true goal of the study was revealed in the debriefing session.
The Internet Users' Information Privacy Concerns (IUIPC) instrument was developed to measure people's general concerns about organisations' information handling practices [16].
Figure 4 provides an overview of the study design and the collected data types. The study is divided into the following phases:
Welcome and Demographics: The moderator welcomed and thanked the participants, provided them with information about the study and the PhotoHex website, and asked them to sign the informed consent form for participation. After signing, they were requested to complete the survey, starting with demographics (including familiarity with English). Participants' privacy concerns were then assessed using the IUIPC. At the end of this first phase, participants were informed that the PhotoHex website provides either a manual sign-up for an account or a Facebook social login.
Tutorial: Next, those assigned to complete the tutorial, G1 and G3, were prompted to contact the study moderator to receive the tutorial, provided on paper. Once finished reading the tutorial, they were asked to complete the survey. Participants in G2 and G4 simply completed the survey, without intervention.
Sign-up Option: Participants were asked which sign-up option they preferred to use for the PhotoHex website. They were invited to justify their decisions and to provide the advantages and disadvantages of the method they selected, as free text.
Role Play: Independent of the method chosen in the Sign-up phase, participants received instructions about signing up for the PhotoHex website using the Facebook SSO option, while role-playing a persona called Elsa. Information on Elsa was provided on a role-playing card that included her Facebook credentials. Using a persona serves a dual purpose: 1) it allows full control of the information each participant encounters, providing a standard experience that can be compared between participants, and 2) ethical reasons: it helps us avoid handling sensitive participant information that would need to be disclosed for the study, e.g. birth dates or page likes on Facebook. Although role-playing may affect the ecological validity of the results, it does not severely affect comparisons between the different conditions, as the premises remain the same.
Task on the website: Participants signed up using either the new interfaces or the current Facebook ones.
Questions about the experienced task: Once signed up, the moderator asked participants to continue the survey, answering questions about their experience using PhotoHex. These included open and multiple-choice questions regarding the granted access, as well as questions to deduce the users' satisfaction, using the System Usability Scale (SUS) questionnaire [ ]. At the end of this phase, we also asked our participants if they had used Facebook SSO login before our study.
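The SUS questionnaire mentioned above is scored in Brooke's standard way: ten items answered on a 1–5 scale, where odd items contribute (response - 1) and even items (5 - response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that standard scoring (the function name is our illustration):

```python
def sus_score(responses):
    """Brooke's SUS scoring: ten 1-5 Likert items mapped to 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS has ten items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even ones negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-positive answers (5 on odd items, 1 on even items) give 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```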
Debriefing: At the end, participants were debriefed on the actual purpose of the study and asked for feedback on the tutorial and interfaces. The moderator then reimbursed and thanked the participants.
This section deals with the effect that receiving the tutorial has on users' ability to make an informed decision when choosing a sign-up method.
The study was conducted in Sweden, in English.
Helping John to Make Informed Decisions on Using Social Login SAC 2018, April 9–13, 2018, Pau, France
Figure 4: Study design including the paths for the four different groups.
5.1 Tutorial
As described in Section 4.2, saving users' time and removing the need for a new password are the advantages, while being a single point of failure and not respecting users' privacy are the disadvantages of SSO solutions. Considering these, we used a closed coding approach for the free-text question. To examine hypothesis H1, we grouped those who read the tutorial and those who did not (G1+G3 vs. G2+G4), and we compared the number of correct and false advantages and disadvantages mentioned in the free-text questions, based on the selected sign-up method, for each group. The UIs could not have any effect on H1 because participants read the tutorial and answered the questions related to it before experiencing the UIs.
Free-text. A Kruskal-Wallis test showed significant differences in the correctly identified advantages between participants who received the tutorial and those who did not (χ2(1)=8.36, p=.004). Participants who received the tutorial were able to identify more advantages correctly (M=1.23, SD=0.58) than participants who did not receive the tutorial (M=0.90, SD=0.38). Although participants who received the tutorial were also able to identify more disadvantages correctly (M=0.93, SD=0.57) than participants who did not (M=0.73, SD=0.51), this result is not statistically significant. Figure 5 depicts how often the four items were provided by the participants in the different groups. Participants who did not receive the tutorial were less aware of the fact that using Facebook introduces a single point of failure and that this option may raise privacy concerns.
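For two groups, the Kruskal-Wallis H statistic is chi-square distributed with one degree of freedom, which is how χ2(1) values of this kind arise. A minimal, standard-library sketch of the two-sample case follows; the sample counts are invented for illustration and are not study data (scipy.stats.kruskal computes the same statistic plus a p-value):

```python
def kruskal_wallis_2(a, b):
    """Two-sample Kruskal-Wallis H statistic (chi-square distributed
    with df=1), using midranks for ties and the usual tie correction."""
    pooled = sorted(a + b)
    # Assign a midrank to every distinct value in the pooled sample.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n, na, nb = len(pooled), len(a), len(b)
    ra = sum(rank_of[x] for x in a)  # rank sum of group a
    rb = sum(rank_of[x] for x in b)
    h = 12 / (n * (n + 1)) * (ra ** 2 / na + rb ** 2 / nb) - 3 * (n + 1)
    # Correction for ties: divide by 1 - sum(t^3 - t) / (n^3 - n).
    ties = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    return h / (1 - ties / (n ** 3 - n))

# e.g. counts of correctly named advantages in two illustrative groups
print(kruskal_wallis_2([2, 1, 1, 2, 1, 2], [1, 0, 1, 1, 0, 1]))
```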
Twelve participants in the group who received the tutorial, and eight in the group without the tutorial, selected the Facebook SSO and did not mention any false disadvantages or advantages (Figure 5). On the other hand, the fact that sign-up takes less time was erroneously mentioned as an advantage of the manual option among participants who selected this method. Not being privacy-friendly compared to the Facebook SSO, and being a single point of failure, were the false disadvantages listed by participants who selected the manual option.
5.2 UI
This section describes and discusses the effect of the new UIs on users' ability to give informed consent. The results are reported for two groups: participants who experienced the new UIs and participants who used the current Facebook interfaces, regardless of whether they read the tutorial. Since half of the participants who experienced the new UIs and half of the participants who used the current Facebook SSO dialogues received the tutorial, to test hypotheses H2a and H2b we first checked whether receiving the tutorial had an effect on giving informed consent when confronted with the authorisation dialogues. We found no significant effect of receiving the tutorial on giving informed consent, including the understanding of how, under which conditions, and to which items access is granted (all p>.05).
We used non-parametric tests since the assumptions of normality and homogeneity of variances were violated.
Figure 5: Percentages of participants who mentioned each of the items in the free-text question (M: manual, FB: Facebook). A: Privacy-friendly, B: No single point of failure, C: No need for a new password, D: Sign-up takes less time.
To examine H2a, participants' answers to three statements about Facebook information sharing with PhotoHex, described in Table 2, were analysed. In detail, the statements cover participants' comprehension of the relation between privacy settings in the Facebook profile and sharing information with the SP (S1), access revocation (S2), and sending Facebook credentials to the SP (S3). Using a Kruskal-Wallis test, we found significant effects of the type of interfaces used on participants' ability to correctly evaluate all three statements (see Table 2), with more participants who used the new interfaces evaluating all three statements correctly than participants who used the current Facebook authorisation dialogues, as depicted in Figure 6. Thus, H2a is supported.
After completing the sign-up process, participants were presented with a list of fifteen different types of personal information, as depicted in Figure 7, and had to indicate whether they had shared the particular information with PhotoHex or not. In Figure 7, the first three types from the left are mandatory and the next two are optional requested information, while the remaining ten are dummy information types not requested in the authorisation dialogues. We populated the list with dummy information to avoid the correct answer being the selection of the shared option for all items. To test H2b, we compared how many of the fifteen presented information types were correctly recalled as shared or not shared by the participants who used the new interfaces and by those who used the current Facebook authorisation dialogues. Using a Kruskal-Wallis test, we found significant differences for the type of interfaces used (χ2(1)=26.53, p<.001), with participants using the new interfaces recalling a greater number of shared and not-shared information types correctly (M=83.50%, SD=24.20%) than participants who used the current Facebook authorisation dialogues (M=48.67%, SD=27.97%).

Table 2: Results of the Kruskal-Wallis test for participants' ability to correctly evaluate the statements. TF: True/False question. MC: Multiple-Choice question.

Statement                                                     df   χ2      Sig.
(S1) Your privacy setting for your Birthday on Facebook is
     only friends. Thus, although it is selected, Facebook
     is not allowed to share it with the website (TF).        1    12.77
(S2) You can cancel the permission you give to Facebook to
     share your selected information with PhotoHex (TF).      1    10.32   .001**
(S3) The website can sign you up because it knows your
     Facebook password (TF).                                  1    8.25    .004**
(S4) PhotoHex has write access to post something to your
     Facebook profile on behalf of you (TF).                  1    21.07
(S5) PhotoHex will be able to request the information you
     selected ... (MC)                                        1    55.47
Figure 6: Answers to the statements for the two groups (new or old UIs).
We also measured the number of 'not sure' answers given by each participant across all listed data types. A Kruskal-Wallis test demonstrated significant differences for the type of interfaces used (p<.001). Participants using the current Facebook authorisation dialogues expressed higher levels of uncertainty (M=5.48, SD=5.12) than participants using the new interfaces (M=1.55, SD=2.86).
Accordingly, we deduce that involving participants more actively in the process of selecting the information to be shared helps them pay more attention to what is shared and decreases their level of uncertainty. We further evaluated the effect of using the new UIs on participants' understanding of the access granting process by comparing their answers to two statements addressing their conceptions of write access (S4) and of the duration of the access token (S5). Contrary to the other statements, which were true/false questions, S5 was a multiple-choice question with five answers, one of which was correct: PhotoHex can request the information you selected for 60 days or until you cancel your permission. We identified significant effects of the type of interfaces used on participants' ability to correctly evaluate both statements (see Table 2), again with more participants who used the new interfaces evaluating both statements correctly than participants who used the current authorisation dialogues of Facebook (see Figure 6). Thus, H2b is also supported.
In this section, we report the level of users' satisfaction and the efficiency, to show the trade-off the new UIs bring, and highlight the parts of the proposed UIs requiring potential improvement. We also report the effects of participants' privacy concerns and previous experience of Facebook login on the dependent variables used for testing our hypotheses (see Sections 5.1 and 5.2).
The SUS and efficiency values are displayed in Table 3. The mean SUS value for the new UIs in total is 61.76, which, although acceptable according to Brooke's work [ ], is still low. Using a univariate analysis of variance (ANOVA), we found a significant effect of receiving the tutorial on the SUS values (F(1, 76)=5.62, p=.020, partial η2=0.07).
The time reported in Table 3 is the duration of the sign-up for the website using Facebook SSO with each set of UIs. Using a Kruskal-Wallis test, no effect of receiving the tutorial was found on the efficiency (χ2(1)=0.62, p=.43). The total time to complete the sign-up process using the new UIs is approximately 3 and 4 times longer than the time required for signing up using the current Facebook UIs, for the group who read the tutorial and the group who did not, respectively. However, the efficiency of the Facebook UIs is achieved by not adhering to legal requirements of the GDPR whose fulfilment may be time-consuming for users. A simple click-through dialogue providing insufficient policy information is presented, with opt-out (instead of opt-in) choices hidden on a second layer that only appears if the user clicks on the Edit This link. This means that the user can simply click Continue as [user's name] (Figure 1) without being confronted with required policy information (such as the data processing purposes) that should be read, and without having to perform any active affirmative actions or choices for selecting the data items to be shared (i.e., as pointed out in Section 2.2, R12, R13, and R15 are violated). If Facebook implemented effective UIs that were legally compliant with the GDPR, they would also demand more activity from the user and could therefore not be as efficient as the current Facebook interfaces.
For the new UIs, the reported time consists of the time to authenticate to Facebook (entering the username and password), the time for sharing information using DADA, and the time to answer the quiz questions (Q&A). The mean time for DADA is 78.70s for the group who read the tutorial and 94.50s for the rest. The mean time for Q&A is 115.75s and 135.65s, respectively. The DADA and Q&A times depend on the number of information items to be shared and the number of questions to be answered, accordingly. Requesting fewer mandatory information items, eliminating Q&A statements that do not depend on the specific service provider, and including such statements only the first time a user selects Facebook as an IdP on a website, would all contribute to reducing this time.
Regarding the tutorial, a Kruskal-Wallis test showed no significant relationship between previous experience of Facebook login and the number of correctly and falsely identified advantages and disadvantages. The IUIPC values, and the IUIPC awareness values, are also not significantly correlated with the advantages and disadvantages mentioned by participants. Finally, considering the UIs, previous experience of Facebook login, IUIPC values, and IUIPC awareness values are not significantly correlated with the number of personal information items correctly recalled as shared or not shared, nor with the ability to correctly evaluate the five statements, except for statements S5 and S2. Using non-parametric Spearman's rank correlation, there is a significant positive relationship between the ability to correctly evaluate S5 and the IUIPC awareness values for participants who said they would choose the manual option to sign up for the PhotoHex website (ρ = .257, p=.047*), and a significant positive relationship between the ability to correctly evaluate S2 and the IUIPC awareness values for participants who said they would choose the Facebook login (ρ = .566, p=.009**).
The assumptions of normality and homogeneity of variance were satisfied.
Figure 7: Number of information items recalled as shared, not shared, or not sure by participants who used the new and the old UIs.
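Spearman's rank correlation, as used here, is Pearson's r computed on the midranks of the two variables. A minimal, standard-library sketch (scipy.stats.spearmanr gives the same rho plus a p-value; the data in the example are purely illustrative):

```python
def _midranks(xs):
    """Replace each value by its midrank (ties share the average rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[order[j]] == xs[order[i]]:
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + 1 + j) / 2
        i = j
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = _midranks(x), _midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```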
Table 3: Perceived usability (SUS values) and efficiency (time) of the current UIs (C) and new UIs (N) participants experienced.

                   Without tutorial        With tutorial
Type of UI         SUS      Time (s)       SUS      Time (s)
C (n=40)     M     73.25    70.85          75.88    74.15
             SD    13.55    39.62          12.49    37.26
N (n=40)     M     56.13    291.40         67.38    240.65
             SD    11.96    83.79          14.22    78.43
The related work of this paper includes research relevant to informational tutorials and research on improving informed consent.
Prior work on the effectiveness of tutorials has mostly tried to change users' attitudes toward online behaviour and security tools and techniques. For example, Albayram et al. [ ] investigated the effectiveness of informational videos on improving users' adoption rate of two-factor authentication. However, Albayram et al. [ ] did not directly measure the gain in participants' post-video knowledge. In contrast, with the tutorial proposed in this paper we did not want to nudge users' behaviour towards a specific sign-up method; rather, the aim was to improve users' knowledge about the available options, which could help them make decisions consciously. In the context of social logins, Ronen et al. [ ] also observed changes in users' selection of sign-up methods, with different identity providers, after they were exposed to the benefits they would receive, and the personal information they had to share, for each individual option.
Earlier work on helping users to be aware of the information they share, and on preventing leakage of personal information in the context of social login, has proposed different methods. In particular, a proposal by Wang et al. [ ] suggests new interfaces based on the limitations of the Facebook authorisation dialogues at the time their work was prepared. However, the extent to which users might understand and pay attention to what was actually shared using the proposed new interfaces was not evaluated. Javed and Shehab [ ] investigated the effects of animated authorisation dialogues for Facebook. Another proposal by Javed and Shehab [ ] used eye tracking to force users to read the permission dialogue, but they did not report on the cost to users in terms of time and satisfaction. Also, Karegar et al. [ ] studied users' recall of personal information disclosure in authorisation dialogues in which the desired data could be selected by checking boxes. They also investigated the effect of previewing the selected information on improving users' attention before giving consent.
Early pioneering work on HCI solutions for informed consent was done by the PISA project which, as pointed out in Section 2, conducted important research on how to map legal privacy principles to possible HCI design solutions [ ], suggesting the concept of Just-In-Time-Click-Through Agreements (JITCTAs) as a possible solution for obtaining consent. Two clicks (i.e. one click to confirm that one is aware of the proposed processing, and another to consent to it) or ticking a box have also been suggested by different European legal experts and data commissioners as means for representing the data subject's consent [ ]. Pettersson et al. [ ], building on the PISA project results, developed the alternative concept of DADAs to address the problem of habituation, to which JITCTAs are vulnerable. In this paper, we adapted DADAs to fit our context of selecting the personal information to be shared with the SP using Facebook SSO.
To the best of our knowledge, there is no prior work that tries to enforce informed consent while also measuring it, considering the legal requirements, in a user study. In our proposed interfaces, aligned with the GDPR, personal information is not selected by default and users are actively involved in the selection. Moreover, the conditions of data sharing have received special attention; being aware of such information is necessary for consent to be considered informed, and informed consent is measured as a function of users' knowledge about what they share and under which conditions.
Our objective in this work is to empower users to make informed decisions in the context of signing up for an SP using a social network as an IdP. A tutorial was designed to inform users of the pros and cons of using social logins. Moreover, we designed UIs for enabling informed consent for sharing personal information, following human-centred and privacy-by-design approaches by addressing end-user-specific and legal privacy requirements from the beginning and throughout the UI development cycle. Our evaluations show that the tutorial notably helps users to improve their knowledge about the benefits of the sign-up options available to them; however, more investigation is required into how to ideally communicate the pros and cons of services that may threaten users' privacy. For our proposed UIs, informed consent was enforced with the help of an active involvement of users via 'Drag and Drop' and 'Question and Answer'. A between-subject user study shows that our new UIs are significantly more effective in helping users to provide informed consent compared to the current authorisation dialogues of the social network. Hence, affirmative actions like 'Drag and Drop' that require users to carefully check opt-in choices, as well as interactive knowledge testing and feedback, are examples of effective HCI concepts for informed consent UIs. However, their use comes at a cost. We continue to work on decreasing the gap between legally compliant authorisation dialogues and usable user-centric ones, while considering different modes of affirmative actions that could direct users' attention to the information they disclose, the robustness of the methods against habituation, and the effects of providing data processing purposes.
This research has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 653454. It is also supported by the German Federal Ministry of Education and Research (BMBF) within MoPPa and co-funded by the DFG as part of project D.1 within the RTG 2050 "Privacy and Trust for Mobile Users". The authors thank Bridget Kane for proofreading the manuscript and John Sören Pettersson, Henrik Andersson, and Maria Wahl for their help in reaching more participants.
Yusuf Albayram, Mohammad Maifi Hasan Khan, and Michael Fagan. 2017. A Study on Designing Video Tutorials for Promoting Security Features: A Case Study in the Context of Two-Factor Authentication (2FA). International Journal of Human-Computer Interaction 33, 11 (2017), 927–942.
Majid Arianezhad, L Jean Camp, Timothy Kelley, and Douglas Stebila. 2013.
Comparative Eye Tracking of Experts and Novices in Web Single Sign-on. In
CODASPY. ACM, 105–116.
Art. 29 Data Protection Working Party. 2004. Opinion 10/2004 on More Har-
monised Information Provisions. Available from:
policies/privacy/docs/wpdocs/2004/wp100_en.pdf. (2004).
Lujo Bauer, Cristian Bravo-Lillo, Elli Fragkaki, and William Melicher. 2013. A
Comparison of Users’ Perceptions of and Willingness to Use Google, Facebook,
and Google+ Single-sign-on Functionality. In DIM. ACM, 25–36.
R. Böhme and S. Köpsell. 2010. Trained to Accept?: A Field Experiment on
Consent Dialogs. In CHI. ACM, 2403–2406.
Cristian Bravo-Lillo, Lorrie Cranor, Saranga Komanduri, Stuart Schechter, and
Manya Sleeper. 2014. Harder to Ignore? Revisiting Pop-Up Fatigue and Ap-
proaches to Prevent It. In SOUPS. USENIX Association, 105–111.
John Brooke. 2013. SUS: A Retrospective. Journal of Usability Studies 8, 2 (2013),
Ann Cavoukian. 2009. Privacy by Design: The 7 Foundational Principles. Imple-
mentation and Mapping of Fair Information Practices. Information and Privacy
Commissioner of Ontario, Canada (2009).
Serge Egelman. 2013. My Profile is My Password, Verify Me!: The Privacy/Convenience Tradeoff of Facebook Connect. In CHI. ACM, 2369–2378.
Batya Friedman, Edward Felten, and Lynette I. Millett. 2000. Informed Consent
Online: A Conceptual Model and Design Principles. University of Washington
Computer Science & Engineering Technical Report 00–12–2 (2000).
Ruti Gafni and Dudu Nissim. 2014. To Social Login or not Login? Exploring Factors Affecting the Decision. Issues in Informing Science and Information Technology 11 (2014), 57–72.
Yousra Javed and Mohamed Shehab. 2016. Investigating the Animation of Appli-
cation Permission Dialogs: A Case Study of Facebook. In DPM. Springer, 146–162.
Yousra Javed and Mohamed Shehab. 2017. Look Before You Authorize: Using
Eye-Tracking to Enforce User Attention Towards Application Permissions. PoPET
2, 2 (2017), 23–37.
Farzaneh Karegar, Daniel Lindegren, John Sören Pettersson, and Simone Fischer-
Hübner. 2017. Assessments of a Cloud-Based Data Wallet for Personal Identity
Management. In Information Systems Development: Advances in Methods, Tools
and Management (ISD2017 Proceedings).
Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. 2010. Research Methods in Human-Computer Interaction. Wiley Publishing.
Naresh K. Malhotra, Sung S. Kim, and James Agarwal. 2004. Internet Users’
Information Privacy Concerns (IUIPC): The Construct, the Scale, and a Causal
Model. Information Systems Research 15, 4 (2004), 336–355.
Andrew S. Patrick and Steve Kenny. 2003. From Privacy Legislation to Interface
Design: Implementing Information Privacy in Human-Computer Interactions. In
PET. Springer, 107–124.
John Sören Pettersson, Simone Fischer-Hübner, Ninni Danielsson, Jenny Nilsson,
Mike Bergmann, Sebastian Clauss, Thomas Kriegelstein, and Henry Krasemann.
2005. Making PRIME Usable. In SOUPS. ACM, 53–64.
Ashwini Rao, Florian Schaub, Norman Sadeh, Alessandro Acquisti, and Ruogu
Kang. 2016. Expecting the Unexpected: Understanding Mismatched Privacy
Expectations Online. In SOUPS. USENIX Association, 77–96.
Nicky Robinson and Joseph Bonneau. 2014. Cognitive Disconnect: Understanding
Facebook Connect Login Permissions. In COSN. 247–258.
Shahar Ronen, Oriana Riva, Maritza Johnson, and Donald Thompson. 2013. Taking Data Exposure into Account: How Does It Affect the Choice of Sign-in Accounts?. In CHI. ACM, 3423–3426.
Michael C Rowbotham, John Astin, Kaitlin Greene, and Steven R Cummings. 2013.
Interactive Informed Consent: Randomized Comparison with Paper Consents.
PloS one 8, 3 (2013), e58603.
San-Tsai Sun, Eric Pospisil, Ildar Muslukhov, Nuray Dindar, Kirstie Hawkey, and
Konstantin Beznosov. 2011. What Makes Users Refuse Web Single Sign-on?: An
Empirical Investigation of OpenID. In SOUPS. ACM, Article 4, 20 pages.
San-Tsai Sun, Eric Pospisil, Ildar Muslukhov, Nuray Dindar, Kirstie Hawkey, and Konstantin Beznosov. 2013. Investigating Users' Perspectives of Web Single Sign-On: Conceptual Gaps and Acceptance Model. TOIT 13, 1, Article 2 (2013), 35 pages.
The European Parliament and the Council of the European Union. 2016. Regula-
tion (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016
on the protection of natural persons with regard to the processing of personal
data and on the free movement of such data (General Data Protection Regulation).
Pagona Tsormpatzoudi, Bettina Berendt, and Fanny Coudert. 2015. Privacy by De-
sign: From Research and Policy to Practice–the Challenge of Multi-disciplinarity.
In APF. Springer, 199–212.
Anna Vapen, Niklas Carlsson, Anirban Mahanti, and Nahid Shahmehri. 2015.
Information Sharing and User Privacy in the Third-party Identity Management
Landscape. In CODASPY. ACM, 151–153.
Vetenskapsrådet. 2002. Forskningsetiska principer inom humanistisk-samhällsvetenskaplig forskning. [Ethical principles for research in the humanities and social sciences.]
Na Wang, Jens Grossklags, and Heng Xu. 2013. An Online Experiment of Privacy
Authorization Dialogues for Social Applications. In CSCW. ACM, 261–272.
... The objective of this study is to contribute additional insights to the findings of recent consent studies, including the ways in which different presentations of consent form content (e.g., Perrault and Keating, 2017;Perrault and McCullock, 2019) and the inclusion of interactive elements (e.g., Karegar et al., 2018) affected participant comprehension and engagement. This study also looks at the impact of individual differences on participant engagement with online consent forms. ...
... Bravo-Lillo et al. (2014) use the term "habituation" to describe the tendency of users to begin ignoring relevant details when they are confronted with the same or similar thing repeatedly. There is evidence that users have become habituated to online consent forms to the extent that they will click through rather than reading the terms, which calls into question whether their consent is truly informed (Karegar et al., 2018;Lindegren et al., 2019). Research has demonstrated that including elements which require participants to interact with the content rather than simply read it is an effective way to direct a participant's attention to changes made to content (Bravo-Lillo et al., 2014). ...
... Research has demonstrated that including elements which require participants to interact with the content rather than simply read it is an effective way to direct a participant's attention to changes made to content (Bravo-Lillo et al., 2014). This finding has been confirmed in multiple studies, showing that the inclusion of interactive elements in consent forms is an effective way to combat habituation (Karegar et al., 2018;Lindegren et al., 2019). ...
Informed consent is an important part of the research process; however, some participants either do not read or skim the consent form. When participants do not read or comprehend informed consent, then they may not understand the potential benefits, risks, or details of the study before participating. This study used previous research to develop experimentally manipulated online consent forms utilizing various presentations of the consent form and interactive elements. Participants ( n = 576) were randomly exposed to one of six form variations. Results found that the highly interactive condition was significantly better for comprehension than any of the other conditions. The highly interactive condition also performed better for readability, though not significantly. Further research should explore the effects of interactive elements to combat habituation and to engage participants with the parts of the consent form unique to the study.
... Using secure authentication methods (Zimmermann and Gerber 2020a; Marky et al. 2020a; Zimmermann et al. 2019b; Renaud and Zimmermann 2019; Mayer et al. 2019; Zimmermann et al. 2018b; Renaud and Zimmermann 2018a, b; Marky et al. 2018; Zimmermann et al. 2017a; Gerber and Zimmermann 2017; Zimmermann and Gerber 2017); using end-to-end encryption (E2EE) for their digital communications (Brendel and Gerber 2019; Gerber et al. 2018c, d; Ghiglieri et al. 2018; Zimmermann et al. 2017a, b); making informed decisions regarding the handling of their private data (Gerber et al. 2021; Stöver et al. 2021; Balthasar et al. 2021; Schürmann et al. 2020; Marky et al. 2020b, c; Kulyk et al. 2020a, b; Gerber et al. 2019a, b; Zimmermann et al. 2019a, c; Gerber et al. 2018a, b; Karegar et al. 2018; Zimmermann et al. 2018a; Kulyk et al. 2018a, b). For example, in a lab study in which 41 participants interacted with twelve different authentication schemes chosen based on an objective rating scheme (Bonneau et al. 2012; Zimmermann et al. 2019b, 2018b), we found that the classic password, followed directly by fingerprint, was rated best by users in terms of preference, usability, and intention to use, and users expected the fewest problems and effort with this scheme (Zimmermann and Gerber 2020b). Further research, consequently focusing on the password as the most favoured authentication scheme, revealed that hybrid password meters, i.e., password meters that include password feedback, a feedback nudge, and additional guidance for creating a secure password, enable users to create better passwords compared to each intervention (feedback, nudge, additional guidance) on its own. ...
... Further studies aimed, for example, to identify users' mental models of smart homes (Zimmermann et al. 2019a) or privacy threats in smart environments (Zimmermann et al. 2019c). Other experimental lab and online studies explored, for example, how users can be supported in selecting privacy-preserving mobile apps (Gerber et al. 2017a, b; Kulyk et al. 2019), deciding about using Single Sign-On (SSO) login (Karegar et al. 2018), or how they perceive and interact with cookie consent notices (Kulyk et al. 2020a, 2018a). ...
In this article, we highlight current research directions in the Technikpsychologie research area, using the example of the interdisciplinary research work of FAI (Work and Engineering Psychology Research Group at the Technical University of Darmstadt) and the articles included in this special issue. To this end, we relate the articles in this special issue from the research areas of road traffic planning (Hupfer et al.), usable IT security and privacy solutions (Renaud), social aspects of technically mediated communication (Diefenbach), human-centered interface design (Mucha et al.), aviation safety (Santel), human-centered design of autonomous vehicles (Lindner & Stoll), and perceptual psychology-oriented product design (Zandi & Khanh) to current research projects at FAI. Practical Relevance: Technical products only offer added value by efficiently supporting users in achieving their goals if they have been developed appropriately for the context of use and the individual characteristics of the users. The human-centered design of, especially technical, products reflects this through an iterative and participatory development process. In this article, we describe nine examples of such human-centered design of technology products. The research results and the methods presented provide insights for developers and decision-makers in the fields of transportation, IT, vehicle development and general product design.
... Providers can learn on which other sites people use their credentials and when [40]; mitigations: CMS-provided integration, privacy-friendly identity providers ...
Modern websites frequently use and embed third-party services to facilitate web development, connect to social media, or for monetization. This often introduces privacy issues as the inclusion of third-party services on a website can allow the third party to collect personal data about the website's visitors. While the prevalence and mechanisms of third-party web tracking have been widely studied, little is known about the decision processes that lead to websites using third-party functionality and whether efforts are being made to protect their visitors' privacy. We report results from an online survey with 395 participants involved in the creation and maintenance of websites. For ten common website functionalities we investigated if privacy has played a role in decisions about how the functionality is integrated, if specific efforts for privacy protection have been made during integration, and to what degree people are aware of data collection through third parties. We find that ease of integration drives third-party adoption but visitor privacy is considered if there are legal requirements or respective guidelines. Awareness of data collection and privacy risks is higher if the collection is directly associated with the purpose for which the third-party service is used.
... It turned out that users are often unaware of the privacy issues with existing IdP solutions. Our analyses suggest that video tutorials can be an efficient way to inform users: statistical tests showed significant differences in the correctly identified advantages between participants who received a tutorial on single sign-on and those who did not, and the perceived usability of a more elaborate user interface, which supported users in making more informed decisions, also increased [9]. ...
... Single sign-on with credentials from popular services (e.g., Apple, Google, Twitter, Facebook): providers can learn on which other sites people use their credentials and when [31]; mitigations: CMS-provided integration, privacy-friendly identity providers ...
In disasters and humanitarian emergencies, affected people actively use mobile technologies. This generates large amounts of data that can contain important information for aid organizations, for example about the nature and extent of the disaster, or requests for help from those affected. The analysis of these data and the subsequent provision of the results can be carried out by digital volunteers in humanitarian aid, above all by organizations of the Digital Humanitarian Network (DHN) or Virtual Operations Support Teams (VOST). This kind of digital organizational structure enables new forms of engagement, which, however, can also give rise to skepticism and mistrust, especially among response organizations. As numerous examples illustrate, intermediary organizations or individuals can help reduce these.
From smart homes to highly energy-optimized office buildings and smart cities, living in smart spaces requires that the inhabitants feel comfortable with the level of data being collected about them in order to provide smartness. However, this consent is usually given on, or at best before, the very first interaction. Thus, firstly, consent might vary over the time of usage. Secondly, it is not always obvious whether data is currently being collected or not. This paper addresses two missing elements in the interaction with a smart environment: first, the general concept of the dynamicity of consent to data collection; second, the provision of a physical interaction to gather and change consent, and of physical feedback on the current data collection status. By physical feedback we mean visual, haptic, or acoustic feedback, in order to allow natural perception by the users in the physical space. For both components we provide examples which show how one could make both the current status and the consent physical, and we discuss the user perception. We argue that a physical interaction to start potentially privacy-invasive data collection is a useful enrichment of legal consent, and that a physically visible status is helpful for making a decision.
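The dynamic-consent idea in this abstract can be pictured as a small state machine: consent can be granted or revoked at any point during usage, data collection is permitted only while consent is in effect, and a physical indicator always mirrors the current status. A speculative Python sketch (the class and the LED strings are illustrative stand-ins, not from the paper):

```python
class DynamicConsent:
    """Consent that can change over time; a simulated physical
    indicator always mirrors the current data-collection status."""

    def __init__(self):
        self.granted = False

    def grant(self):
        # E.g., triggered by flipping a physical consent switch.
        self.granted = True

    def revoke(self):
        # Consent can be withdrawn at any later point during usage.
        self.granted = False

    def collection_active(self):
        # Data may only be collected while consent is in effect.
        return self.granted

    def indicator(self):
        # Stand-in for a visual/haptic/acoustic status signal.
        return "LED: red (collecting)" if self.granted else "LED: off (idle)"


consent = DynamicConsent()
print(consent.indicator())        # idle before any consent is given
consent.grant()
print(consent.indicator())        # collection now visibly active
consent.revoke()
print(consent.collection_active())
```

Coupling the indicator to the same state that gates collection is the point: status and permission cannot drift apart, which addresses the "is data being collected right now?" uncertainty the paper raises.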
Online privacy policies are the primary mechanism for informing users about data practices of online services. In practice, users ignore privacy policies as policies are long and complex to read. Since users do not read privacy policies, their expectations regarding data practices of online services may not match a service's actual data practices. Mismatches may result in users exposing themselves to unanticipated privacy risks such as unknowingly sharing personal information with online services. One approach for mitigating privacy risks is to provide simplified privacy notices, in addition to privacy policies, that highlight unexpected data practices. However, identifying mismatches between user expectations and services' practices is challenging. We propose and validate a practical approach for studying Web users' privacy expectations and identifying mismatches with practices stated in privacy policies. We conducted a user study with 240 participants and 16 websites, and identified mismatches in collection, sharing and deletion data practices. We discuss the implications of our results for the design of usable privacy notices, service providers, as well as public policy.
Habituation is a key factor behind the lack of attention towards permission authorization dialogs during third party application installation. Various solutions have been proposed to combat the problem of achieving attention switch towards permissions. However, users continue to ignore these dialogs, and authorize dangerous permissions, which leads to security and privacy breaches. We leverage eye-tracking to approach this problem, and propose a mechanism for enforcing user attention towards application permissions before users are able to authorize them. We deactivate the dialog’s decision buttons initially, and use feedback from the eye-tracker to ensure that the user has looked at the permissions. After determining user attention, the buttons are activated. We implemented a prototype of our approach as a Chrome browser extension, and conducted a user study on Facebook’s application authorization dialogs. Using participants’ permission identification, eye-gaze fixations, and authorization decisions, we evaluate participants’ attention towards permissions. The participants who used our approach on authorization dialogs were able to identify the permissions better, compared to the rest of the participants, even after the habituation period. Their average number of eye-gaze fixations on the permission text was significantly higher than that of the other group's participants. However, examining the rate at which participants denied a dangerous and unnecessary permission, the hypothesized increase from the control group to the treatment group was not statistically significant.
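The gating mechanism this abstract describes, keeping the decision buttons deactivated until the eye-tracker confirms fixations on the permission text, amounts to counting fixation events per permission against a threshold. A hypothetical Python sketch of that logic (the class name, callback shape, and fixation threshold are assumptions, not details from the study's prototype):

```python
class PermissionDialog:
    """Authorization dialog whose Allow/Deny buttons stay disabled
    until the eye-tracker reports enough fixations on each permission."""

    def __init__(self, permissions, min_fixations=1):
        self.min_fixations = min_fixations
        # Fixation counts per permission, fed by eye-tracker events.
        self.fixations = {p: 0 for p in permissions}

    def on_fixation(self, permission):
        # Callback for each eye-gaze fixation whose coordinates
        # fall on a permission's text region.
        if permission in self.fixations:
            self.fixations[permission] += 1

    def buttons_enabled(self):
        # Decision buttons activate only after every permission has
        # received at least `min_fixations` fixations.
        return all(n >= self.min_fixations for n in self.fixations.values())


dialog = PermissionDialog(["email", "friend list", "post on your behalf"],
                          min_fixations=2)
dialog.on_fixation("email")
dialog.on_fixation("email")
dialog.on_fixation("friend list")
print(dialog.buttons_enabled())  # still disabled: not all permissions attended to
```

In a real deployment the fixation events would come from the eye-tracker's gaze stream mapped onto the dialog's layout; the sketch only shows the enable/disable decision that sits on top of it.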
Research Methods in Human-Computer Interaction is a comprehensive guide to performing research and is essential reading for both quantitative and qualitative methods. Since the first edition was published in 2009, the book has been adopted for use at leading universities around the world, including Harvard University, Carnegie-Mellon University, the University of Washington, the University of Toronto, HiOA (Norway), KTH (Sweden), Tel Aviv University (Israel), and many others. Chapters cover a broad range of topics relevant to the collection and analysis of HCI data, going beyond experimental design and surveys, to cover ethnography, diaries, physiological measurements, case studies, crowdsourcing, and other essential elements in the well-informed HCI researcher's toolkit. Continual technological evolution has led to an explosion of new techniques and a need for this updated 2nd edition, to reflect the most recent research in the field and newer trends in research methodology. This Research Methods in HCI revision contains updates throughout, including more detail on statistical tests, coding qualitative data, and data collection via mobile devices and sensors. Other new material covers performing research with children, older adults, and people with cognitive impairments.
This paper investigates the effectiveness of informational videos that are designed to provide an introduction to two-step verification (i.e., 2FA) and in turn seeks to improve the adoption rate of 2FA among users. Towards that, eight video tutorials based on three themes (e.g., Risk, Self-efficacy, and Contingency) were designed and a three-way between-group study with 399 participants on Amazon’s MTurk was conducted. Furthermore, a follow-up study was run to see the changes in participants’ behavior (e.g., enabling of 2FA). The Self-efficacy and Risk themes were found to be most effective in making the videos more interesting, informative, and useful. Willingness to try 2FA was found to be higher for participants who were exposed to both the Risk and Self-efficacy themes. Participants’ decision regarding actually enabling 2FA was found to be significantly correlated with how interesting, informative, and useful the videos were. Implications of our findings in a broader context are discussed in the paper.
The concept of Privacy by Design (PbD) is a vision for creating data-processing environments in a way that respects privacy and data protection in the design of products and processes from the start. PbD has been inspired by and elaborated in different disciplines (especially law and computer science). Developments have taken place in research and policy, with the General Data Protection Regulation to be adopted by the European Parliament in 2016 and to enter into force in 2018. It is now time to use the results for practical guidance on how to achieve the goals defined by the legislation. In this paper, we summarise lessons learned from the special session on Multidisciplinary Aspects of PbD organised at the Annual Privacy Forum 2015. In particular, we identify important current and future implementation challenges of PbD. These are: terminology, legal compliance, different disciplines’ understandings, the role of the data protection officer, the involvement of all stakeholders, and education. We conclude by emphasising the importance of approaching PbD in an interdisciplinary way.
Third party applications play an important role in enhancing a social network user’s online experience. These applications request various permissions from the users at install-time. However, these permissions are often ignored, and the users end up granting access to sensitive information. This motivates the need for techniques that can attract user attention towards the requested permissions and make users read and understand the permissions before authorizing them. We investigate the animation of application permission dialogs. Using a real-life analogy of luggage screening at airport security checkpoints, we attempt to draw user attention towards the application’s requested permissions. We map the various elements involved at an airport security checkpoint to our context through the use of avatars, and present the permissions one by one. The user makes a decision on each permission based on its provided details. The permission details include its description, type, and an example of the user’s personal information to communicate the potential information disclosure in the event of its authorization. We developed a prototype of our proposed animated dialog design for Facebook applications, and compared it with Facebook’s existing dialog designs. Our preliminary evaluation on 16 participants with the help of their eye-tracking data shows that the use of animation and personal information examples on a permission authorization dialog is effective.
We study Facebook Connect's permissions system using crawling, experimentation, and user surveys. We find several areas in which it works differently than many users and developers expect. More permissions can be granted than developers intend. In particular, permissions that allow a site to post to the user's profile are granted on an all-or-nothing basis. While users generally understand what data sites can read from their profile, they generally do not understand the full extent of what sites can post. In the case of write permissions, we show that user expectations are influenced by the identity of the requesting site, although this has no impact on what is actually enforced. We also find that users generally do not understand the way Facebook Connect permissions interact with Facebook's privacy settings. Our results suggest that users understand detailed, granular messages better than those that are broad and vague.