Explicitness of Consequence Information in
Privacy Warnings: Experimentally
Investigating the Effects on Perceived Risk,
Trust, and Privacy Information Quality
Completed Research Paper
Gökhan Bal
Goethe University Frankfurt
Grüneburgplatz 1, 60323 Frankfurt, Germany
goekhan.bal@m-chair.de
Abstract
Utility that modern smartphone technology provides to individuals is most often
enabled by technical capabilities that are privacy-affecting by nature, i.e. smartphone
apps are provided with access to a multiplicity of sensitive resources required to
implement context-sensitivity or personalization. Due to the ineffectiveness of current
privacy risk communication methods applied in smartphone ecosystems, individuals’
risk assessments are biased and accompanied with uncertainty regarding the potential
privacy-related consequences of long-term app usage. Warning theory suggests that an
explicit communication of potential consequences can reduce uncertainty and enable
individuals to make better-informed cost-benefit trade-off decisions. We extend this
design theory to the field of information privacy warning design by experimentally
investigating the effects of explicitness in privacy warnings on individuals’ perceived
risk and trustworthiness of smartphone apps. Our results suggest that explicitness leads
to more accurate risk and trust perceptions and provides an improved foundation for
informed decision-making.
Keywords: Information Privacy, Risk Communication, Mobile Computing, Privacy
Behavior, Privacy Calculus
Introduction
The use of information technology by individuals in everyday life is driven by their striving for utility.
Modern smartphone technology in particular has rapidly become indispensable for many people, as it offers utility in manifold ways. Smartphones provide users with access to the Internet, connectivity in
various ways, context-sensitivity, personalization, and entertainment. Utility is enabled and promoted by
two main factors. Firstly, smartphone applications (“apps”) are provided with access to a multiplicity of
device resources required to implement utility (e.g. access to positioning features to implement location-based services). Secondly, the major app markets are centralized and open, i.e. apps can be developed and
provided to customers easily by anyone. In this way, the global app economy became worth $68 billion in
2013 and is projected to grow to $143 billion in 2016 (Vision Mobile 2014).
While offering significant utility, smartphone app usage also considerably affects users' information privacy¹. This is caused by the same capabilities of smartphone platforms that enable utility, specifically,
allowing apps to access and process a multiplicity of sensitive resources on the device such as positioning
features, the contacts database, or device sensors. Numerous research studies reveal that users are often
not aware of sensitive information flows (Egele et al. 2011; Enck et al. 2010), which, as a consequence, negatively influences users' ability to assess the potential privacy risks of app usage. Risk assessments are instead accompanied by uncertainty regarding the actual consequences of usage behavior (Acquisti
and Grossklags 2005). Current privacy risk communication methods in smartphone ecosystems have
been shown to be ineffective in informing the users about privacy-affecting properties of individual apps.
Smartphone users instead rely on other trust anchors such as app ratings, user reviews, or visual
appearance in risk assessments and decision-making (Chia et al. 2012; Krasnova et al. 2013).
Warning theory suggests that a more explicit communication of potential risks can reduce uncertainty and
help individuals make more informed decisions regarding product use (Laughery and Smith 2006). So
far, the potential effects of explicitness have not yet been examined empirically in privacy risk
communication. Grounded in warning design theory and smartphone privacy literature, we propose a
design for privacy warnings with explicit consequence information and experimentally study its effects on
perceived privacy information quality, perceived privacy risk, perceived trustworthiness, and app
preference in the context of smartphone app markets. We contribute to privacy warning design theory
with knowledge to improve the quality of individuals’ privacy risk and trust assessments by providing an
improved foundation for making informed decisions. The research results are relevant for app providers,
app market providers, and privacy warning research.
The paper is structured as follows. Section 2 provides the theoretical background of our research. This
includes relevant theories from information privacy research, a discussion of the nature of privacy risks in
smartphone app ecosystems to recognize the potential privacy-related consequences of app usage, and
relevant design knowledge from warning research. Section 3 presents our proposed design for privacy
warnings for smartphone apps with explicit consequence information and concludes with a set of testable
hypotheses. Section 4 elaborates on the used methodology to test the effectiveness of our proposed
warning scheme, including a description of the experimental user study that we have conducted. Section 5
presents the results of the user study, which we discuss in Section 6. Implications and limitations of our
findings are discussed in Section 7, while Section 8 concludes the article.
Theoretical Background
The discussion of prior literature starts with a literature-based privacy risk analysis of smartphone app
usage. The aim is to recognize and conceptualize potential privacy-related consequences of app usage to
inform the design of more explicit privacy warnings. Next, we provide an overview of relevant theories
from information privacy research to explain factors that influence individuals’ privacy behavior. We then
discuss relevant theories from warning research that provide design knowledge for effective warnings.
Finally, we discuss the effectiveness of existing privacy indicators in smartphone app ecosystems to
further motivate the need for an improved privacy risk communication.
¹ In the rest of this article, we refer to the concept of information privacy when using the term privacy.
Privacy Risks in Smartphone App Usage
Prior to designing warnings, a hazard analysis to understand the nature and the causes of potential risks
should be performed (Smith-Jackson and Wogalter 2006). The privacy-affecting nature of smartphone
app usage mainly emerges due to technical capabilities of smartphones that are primarily intended to
enable utility. For example, apps are provided with access to sensitive resources in order to enable
personalized or context-sensitive services. Since many apps in addition have Internet-access capabilities,
apps in principle can share any collected sensitive information with external entities (e.g. app provider or
advertisement networks). Using the information-flow tracking system TaintDroid, Enck et al. (2010, 2011)
demonstrated that many Android apps leak sensitive data without informing the user. Similar behavior
has been observed on the iOS platform (Egele et al. 2011). On most smartphone platforms, applications
have to request permissions to access sensitive resources, which the users have to accept if they intend to
use the application or the respective functionality. Permission-based security mechanisms are meant to
enforce informed consent and to provide privacy transparency. Another function of those mechanisms is
to drive app developers to follow the principle of least privilege. However, application analysis tools
revealed that many apps are over-privileged, i.e. they request more permissions than actually required to
implement functionality (Felt, Chin, et al. 2011; Wei et al. 2012). This suggests that app developers do not
always comply with the principle of least privilege, regardless of whether harm is intended or not.
Irrespective of the permissions granted, Davi et al. (2010) showed that current permission-based security
mechanisms are vulnerable and can be circumvented. Thus, malicious applications can gain unauthorized
access to sensitive resources. Moreover, inter-application communication capabilities allow apps to
exchange messages, which is another way to circumvent permission-control mechanisms (Chin et al.
2011). Enck et al. (2009) argue that privacy and security risk assessments should not isolate single
permissions from each other. Rather, risk assessments should consider potentially dangerous
combinations of permissions (e.g. Internet access in combination with call listening and audio recording
capabilities) since the overall risk might be greater than the sum of the risks posed by single permissions.
A more sophisticated risk assessment framework for smartphone apps has been proposed by Zhang et al.
(2013). It calculates an intrusiveness score for individual apps that is based on the apps’ individual data-
access patterns. The context of access (e.g., whether or not the app is idle during data access) receives particular consideration in the assessment. The researchers' analysis of 33 apps across four different
categories revealed that each app appears to have an individual privacy fingerprint based on its data-
access behavior. These findings underline the need for considering the application-specific and dynamic
aspects of data access in privacy risk assessments (Bal 2012). The potential for harming user privacy when long-term smartphone data is available has been further demonstrated by several data-mining-based
information extraction approaches. These show how individuals implicitly reveal additional information
about themselves when access to sensitive resources on the device is granted to third parties. Weiss and
Lockhart (2011) used available accelerometer data to accurately predict user traits such as sex, height, and
weight. Min et al. (2013) could infer smartphone users’ social network structure by analyzing
communication logs on the devices. Similar results are demonstrated by Eagle et al. (2009). The
possibility to predict individuals’ personality traits by analyzing smartphone usage data has also been
demonstrated (Chittaranjan et al. 2011). Further, individuals’ movement patterns can be predicted by
using smartphone data (Phithakkitnukoon et al. 2010). The potential for user identification and authentication by analyzing smartphone data has been demonstrated in numerous studies (Conti et al.
2011; E. Shi et al. 2011; W. Shi et al. 2011).
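To make the combination-oriented view of risk more concrete, the following Python sketch illustrates how a simple rule-based check could flag permission sets that contain risky combinations in the spirit of Enck et al. (2009). The permission names and combination rules are hypothetical examples chosen for illustration; they are not taken from any of the cited analysis frameworks.

```python
# Illustrative sketch (hypothetical permission names and rules): flag apps whose
# requested permissions contain combinations that are riskier together than each
# permission is in isolation, following the idea of Enck et al. (2009).

# Combinations considered dangerous when they co-occur in a single app.
DANGEROUS_COMBINATIONS = [
    {"INTERNET", "RECORD_AUDIO", "READ_PHONE_STATE"},  # record calls and exfiltrate them
    {"INTERNET", "PRECISE_LOCATION"},                   # track and share the user's location
    {"INTERNET", "READ_CONTACTS"},                      # upload the contacts database
]

def dangerous_combinations(requested_permissions):
    """Return every dangerous combination fully covered by the app's permission set."""
    requested = set(requested_permissions)
    return [combo for combo in DANGEROUS_COMBINATIONS if combo <= requested]

if __name__ == "__main__":
    app = {"name": "Example Flashlight",
           "permissions": ["INTERNET", "PRECISE_LOCATION", "CONTROL_VIBRATION"]}
    hits = dangerous_combinations(app["permissions"])
    if hits:
        print(f"{app['name']}: risky permission combinations found: {hits}")
    else:
        print(f"{app['name']}: no flagged combinations")
```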
Table 1 summarizes the different factors that have an impact on the privacy risks in smartphone app
usage. The risk analysis suggests that the risks and their potential consequences are multifaceted and not
limited to the leakage of single information items. We conclude the hazard analysis by emphasizing the
importance of considering the dynamic and long-term aspects of data access when designing privacy
warnings. We want to note that the severity of privacy risks is further dependent on factors such as the
purpose and functionality of the app, or on the app provider’s intentions to share personal data with third
parties. In this work we abstract from those additional factors to gain a first understanding of the potential
effects of explicitness and suggest that future work on privacy warning design should consider more
factors. In this study, we focus on the risks coming from data-mining approaches (cf. Table 1).
Table 1. Privacy Risk Factors in Smartphone Usage

Data Leakage: Many apps leak private data items without informing the user (Egele et al. 2011; Enck et al. 2010). Data leakages represent the most basic level of information privacy risks. They violate the principles of user control and transparency.

Over-privilege: Many apps possess more privileges to access private data than required to implement their functionality. Consequently, the privacy principles of data minimization and purpose-binding are threatened (Felt, Greenwood, et al. 2011; Wei et al. 2012).

Privilege escalation: Permission-based security mechanisms are vulnerable and can be circumvented. Consequently, there is a risk of unauthorized access to private data by harmful apps (Davi et al. 2010). This risk can be mitigated by improved implementations of permission-based security mechanisms.

Inter-application communication: Apps can communicate with each other. In this way, private data can be shared with apps that do not possess the required permissions (Chin et al. 2011). While this capability provides some benefits, the risk is unauthorized access to private data.

Combinations of permissions: Some permission combinations are dangerous, e.g. an app with Internet, audio recording, and phone listening capabilities could record phone calls and share the recordings with external servers (Enck et al. 2009). The risk is permission misuse.

Out-of-context data access: Apps might access sensitive resources outside of the usage context (Zhang et al. 2013). This poses a risk to the principles of purpose-binding and transparency.

Data mining: Smartphone data can be used to infer additional information about the user (e.g. Min et al. 2013). The privacy risk is the unintended, implicit revelation of private information, which consequently biases users' risk awareness.
Information Privacy: Influencing Factors of Privacy Behavior
Individuals' privacy behavior² is affected by various factors. While privacy concerns have been shown to
negatively influence privacy behavior (Choi and Choi 2007; Malhotra et al. 2004; Smith et al. 1996; Zhou
2011), Milne and Boza (1999) suggest that building trust is more effective than trying to reduce privacy
concerns. The positive effects of trust on privacy behavior have been demonstrated in various contexts
(Doney and Cannon 1997; George 2004; Pavlou 2003; Slyke et al. 2006; Smith et al. 2011; Wang et al.
2004; Xu et al. 2005; Zhou 2011). Trust, on the other hand, has been shown to be positively affected by the
presence of privacy seals or notices (LaRose and Rifon 2006; Wang et al. 2004). Consequently, privacy
behavior can be influenced by the availability of privacy notices. Furthermore, trust has been shown to
reduce perceived risk (Jarvenpaa et al. 2000; Xu et al. 2005; Zhou 2011), while also a direct, negative
relationship between perceived risk and privacy behavior has been suggested (Malhotra et al. 2004; Xu et
al. 2005). Moreover, the privacy calculus theory suggests that privacy behavior is the outcome of a risk-
benefit analysis of individuals in which potential risks of behavior are weighed against the benefits (Dinev
and Hart 2006; Smith et al. 2011). An accurate calculus requires the availability of both benefit and risk
information that serves as input to the calculus. Potential benefits are most often clear from the purpose of a service (e.g. personalization). Furthermore, the benefits are what drive individuals toward products. Privacy risk information, on the other hand, is not always present and accessible. On the contrary, privacy decision-making is often based on incomplete information and consequently leads to uncertainty regarding the potential consequences of privacy behavior (Acquisti and Grossklags 2007). Regarding the usefulness of reducing uncertainty by providing more risk information, Acquisti and Grossklags (2007) argue that even with access to complete information, individuals would be unable to process and act optimally on large amounts of data; consequently, people rely on simplified mental models and approximate strategies in privacy decision-making. Lederer et al. (2004), however, argue that illuminating information disclosure can constructively contribute to the user's mental model and understanding of privacy risks. We therefore aim to design privacy indicators that more accurately and efficiently communicate the potential consequences of privacy behavior. The ultimate goal is to provide individuals with an improved foundation for privacy calculi that helps in identifying the less privacy-intrusive choices when alternatives are available. We consult warning theory in the following to inform our design search process for more effective privacy warnings.

² In the rest of this article, we use the term "privacy behavior" to denote any behavior-related outcome of information privacy, e.g. behavioral intentions to use, actual behavior, or willingness to share data.
Warning Theory
Warnings are safety communications used to inform people about hazards so that undesirable
consequences are avoided or minimized (Wogalter 2006), i.e., warnings are artifacts designed for risk
communication. The term “hazard” covers all kinds of potential vulnerabilities and risks encountered
during product use, when performing tasks, and in the environment. In this work, the hazards of interest
are privacy risks caused by information technology use, particularly the long-term use of smartphone
apps. According to the hazard control hierarchy concept from safety engineering, warnings should only be
the last resort of hazard control (Wogalter 2006). The first in the sequence should be designing the hazard
out or eliminating it, e.g. through alternative design. The next suggested action in the sequence is
protecting people from the hazard to reduce the likelihood that people come in contact with the hazard.
Designing out the privacy risks of smartphone app usage, as identified in our privacy risk analysis, would
require removing the data-access capabilities that enable utility. This would negatively influence the user
experience of smartphone usage and is therefore not considered an option. Protecting users from the
hazard, on the other hand, can be achieved by providing users with enhanced privacy control mechanisms
to better control sensitive-information flows and adapt the privacy risks according to their preferences.
While some tools have been developed to provide users with a more fine-grained control over personal
data (Bai et al. 2010; Conti et al. 2010; Zhou et al. 2011), the inherent limitations of such privacy tools are
related to their usability and perceived usefulness. Firstly, such tools often require a certain degree of
technical knowledge. Secondly, the perceived usefulness of privacy protection tools is affected by the
perceived privacy risks of a product. Thus, privacy control tools are only useful if users have a fair
understanding of the risks. Moreover, the effectiveness of such tools is seldom evaluated. All things
considered, we believe that privacy risk communication is a key area which needs improved design work.
The Communications-Human Information Processing (C-HIP) model describes a set of stages involved in
warning processing that can be used to systematically examine and improve the effectiveness of warnings
(Smith-Jackson and Wogalter 2006). The consecutive stages are defined as attention switch (the warning
must have characteristics that make it noticeable), attention maintenance (the warning must be able to
hold attention so that users can encode its message), comprehension (the receiver of the warning must be
able to extract meaning from the warning and activate relevant information in memory), attitudes/beliefs
(the warning must be able to influence the receiver’s hazard-related attitudes and beliefs), motivation (the
warning must be able to push the receiver towards safe behavior), and behavior (the receiver must
perform the safe behavior). Warning theory further provides design knowledge for improving warning
effectiveness in the different stages of warning processing. Suggested factors to influence warning
effectiveness are size (Barlow and Wogalter 1991), location (Frantz and Rhoades 1993), timing (Egelman
et al. 2009), colors used (Kline et al. 1993; Young 1991), signal words or pictorials, message length
(Laughery and Wogalter 1997), or explicitness (Laughery et al. 1993).
Explicitness has emerged as a particularly efficacious design factor for warning effectiveness (Laughery
and Wogalter 2006). Explicitness in the context of warnings is defined as information that is specific,
detailed, clearly stated, and leaves nothing implied, while explicitness can refer to hazard information,
consequence information, or the instructions to avoid the hazard. Laughery and Smith (2006) suggest a
set of principles regarding explicitness that may be useful for the warning designer, e.g. "do not assume
everybody knows.”, “do not rely on inference.”, “be careful about assuming that hazards and consequences
are open and obvious.”, “technical jargon is usually not a good way to achieve explicitness.”. Laughery and
Wogalter (2006) suggest that more specific information about hazards and consequences can reduce
uncertainty and enable people to make better-informed cost-benefit trade-off decisions regarding the
need to comply. It is suggested that explicitness of warning information is especially effective and useful
when the injury severity is high. Further, warning research suggests that the contents of risk
communication should focus on risk severity instead of likelihood of injury since risk severity has been
shown to be the better predictor of perceived risk (Wogalter et al. 1999). The effects of explicitness of
consequence information in privacy warnings so far have not been empirically investigated.
Privacy Warning Effectiveness and Alternative Approaches
Information security and privacy warnings have been shown to be ineffective in risk communication.
Individuals often have difficulties in understanding security warnings (Bravo-Lillo et al. 2011). Privacy
indicators such as privacy policies are not processed by consumers since they tend to be too long or
written in a technical or juridical language (Milne and Culnan 2004). In smartphone ecosystems specifically, app markets currently offer no reliable privacy risk indicators (Chia et al. 2012). A
commonly used approach to provide privacy indicators and risk information is to confront smartphone users
with permission requests to access sensitive resources. Research studies have already demonstrated the
ineffectiveness of this approach (Kelley et al. 2012). As a result of the ineffectiveness of existing privacy
indicators, people instead act based on their general attitudes or rely on other trust anchors such as user
reviews or app ratings (Chia et al. 2012; Krasnova et al. 2013).
In attempts to provide individuals with more effective privacy risk information, researchers have designed
and tested a variety of alternative privacy indicators. Kelley et al. (2009) made use of the nutrition-label
approach to design consumer privacy notices. Their attempt was to bring text-based privacy policies to a
format which allows for more efficient information gathering and easier comparison between different
policies. While the researchers could show that comparison is easier, the contents of the indicators are
still based on privacy policies that do not provide information about the potential risks of a service. The
P3P standard enables machine-readable privacy policies that can be retrieved automatically by web
browsers and other user agents. Cranor et al. (2002) have developed the Privacy Bird as a P3P user agent
that can compare P3P policies against users' privacy preferences. However, privacy risk communication of
Privacy Bird focuses on potential policy mismatches and does not inform about potential consequences.
The approach taken by Lin et al. (2012) is to model privacy information as expectations. The rationale of
this approach is to rely on the accuracy of crowd opinions regarding the appropriateness of permission requests. While we believe that expert knowledge in particular can help, no risk information is provided that
can help the users in making more informed decisions. Thompson et al. (2013) tackled another aspect of
privacy risk communication, namely helping users to attribute data access to the software responsible for
it. The purpose is to help users identify misbehavior. Another approach tested is the metaphor of eyes on the display that grow as data is disclosed (Schlegel et al. 2011). A problem of this approach is
that it might disturb the primary task in the long run. Another factor that has been shown to be relevant in
privacy risk communication is the timing of information (Egelman et al. 2009).
Altogether, privacy risk communication in the digital world is lagging behind risk communication in the
physical world. Privacy warnings or indicators are most often ineffective in influencing privacy behavior.
Camp (2009) suggests that established risk communication methods from warning research should be
applied to improve the effectiveness of computer security and privacy warnings. We recognize that the
long-term data-access behavior of apps significantly influences the privacy-related consequences of app
usage and, based on knowledge from warning theory, we suggest introducing explicit consequence
information into privacy warnings.
Designing Explicit Privacy Warnings for Apps
The key findings from our privacy warning design search process are as follows. Privacy behavior can be
positively influenced by providing trust indicators (e.g. privacy seals) or by employing risk communication
methods (e.g. privacy warnings). Privacy decision-making is often accompanied with uncertainty
regarding potential consequences (Acquisti and Grossklags 2005). As a consequence, individuals rely on
other trust anchors than risk information (Chia et al. 2012; Krasnova et al. 2013). Computer security
warning research suggests applying established methods from general warning research (Camp 2009) to
security and privacy warning design. A specific factor that can help to reduce uncertainty in privacy
decision-making is explicitness in warning messages (Laughery et al. 1993). Explicitness can be added to
the hazard information, the consequence information, or the instructions to avoid the hazard. While
privacy research has so far focused on informing about the hazard (e.g. potential data access or intentions
to share with third parties), we want to investigate the potential effects of explicitness in consequence
information. To the best of our knowledge, this has not yet been empirically investigated in the context of
information privacy risk communication. Regarding the nature of potential consequences of privacy
behavior, our hazard analysis suggests that smartphone applications differ in their data-access behavior
(Zhang et al. 2013), thus considering dynamic data-access patterns in risk analyses is essential (Bal 2012).
The implicit revelation of private information through the potential application of data-mining techniques is one type of potential consequence of long-term privacy behavior. We propose adding information about such potential revelations to privacy warnings as a concrete instantiation of privacy-related consequences. We
believe that this will reduce uncertainty about the risks and consequently lead to more accurate privacy
risk assessments, i.e. individuals will be better able to distinguish apps with high risk severity from apps
with low risk severity. Figure 1 conceptually illustrates our design suggestion for the warning’s contents. It
includes both hazard information (data-access permissions) and consequence information (implicit
revelations). Since timing is another critical factor in warning design and placement (Egelman et al.
2009), we suggest introducing such warning information into app markets in order to maximize the
chances to influence users’ behavior (e.g. selecting the less privacy-intrusive app when alternatives are
available). In that way, contact with the hazard can be avoided.
Figure 1. Privacy Warning with Explicit Consequence Information
We used the proposed conceptual design in an experimental study to investigate its effects on individuals’
risk and trust perceptions and their privacy behavior. Backed up by existing theory, we formulated a set of
testable hypotheses that we used in our study to investigate its effectiveness. Since we aim to contribute design knowledge for more effective privacy risk communication methods, our
hypotheses are directed and express positive expectations regarding the outcomes. Yet, effective privacy
warnings do not necessarily increase risk perception or decrease trust in general; rather, they aid users in
making better assessments regarding these two variables. For example, with improved privacy risk
communication, individuals should be better able to distinguish apps with high privacy risk severity from
apps with low risk severity and perceived risk and trust should be affected accordingly. Inspired by the
notions used in a previous study about physical-product warning explicitness (Laughery et al. 1993), we
use the following abbreviations to classify apps with varying privacy risk severity and number of requested
permissions in our testable hypotheses. H-S refers to apps with high privacy risk severity; L-S refers to
apps with low privacy risk severity. Analogously, L-P refers to apps with a low number of data-access
permission requests; H-P refers to apps with a high number of permission requests.
Due to the ineffectiveness of current privacy risk communication in smartphone app markets (e.g.
permission request screens), users rely on other trust anchors (Chia et al. 2012). We specifically assume
that users base their judgments on the quantity of permission requests rather than on the quality of their
privacy-related consequences when no other cues are available, even though the relationship between the
assumption, we are primarily interested in scenarios in which the quantity and the quality of permission
requests regarding privacy consequences are diametrically opposed (H-S/L-P and L-S/H-P), and therefore we set aside the "trivial" cases (L-S/L-P and H-S/H-P). We want to measure individuals' risk perceptions with
two related variables. On the one hand, we are interested in measuring individuals’ perceived privacy risk
for individual apps as an absolute measure. On the other hand, we investigate the participants' ability to
sort different apps according to their privacy-intrusiveness, which we consider as a measure of relative
perceived privacy risk among different alternatives. The latter will provide more direct insight into
individuals’ ability to distinguish privacy-intrusive apps from less privacy-intrusive apps. To find support
for our expectations, we test the following hypotheses:
Hypothesis 1a: The perceived privacy risk of an H-S/L-P app will be higher when explicit
consequence information is available in privacy warnings.
Hypothesis 1b: The perceived privacy risk of an L-S/H-P app will be lower when explicit
consequence information is available in privacy warnings.
Hypothesis 1c: Individuals will rank an H-S/L-P app as more privacy-intrusive when explicit
consequence information is available in privacy warnings.
Hypothesis 1d: Individuals will rank an L-S/H-P app as less privacy-intrusive when explicit
consequence information is available in privacy warnings.
Hypothesis 1e: The perceived privacy risk of an H-S/L-P app will be higher than the perceived
privacy risk of an L-S/H-P app when explicit consequence information is available in privacy
warnings.
Prior research further suggests that trust is affected by the presentation of privacy notices, and a mediating effect of perceived risk on trust has also been suggested (Lim 2003). Based on these findings, we
suggest that more effective privacy notices will affect the trustworthiness of an app:
Hypothesis 2a: The perceived trustworthiness of an H-S/L-P app will be lower when explicit
consequence information is available in privacy warnings.
Hypothesis 2b: The perceived trustworthiness of an L-S/H-P app will be higher when explicit
consequence information is available in privacy warnings.
Hypothesis 2c: The perceived trustworthiness of an H-S/L-P app will be lower than the perceived
trustworthiness of an L-S/H-P app when explicit consequence information is available in privacy
warnings.
We expect that the new privacy warnings will improve risk assessment abilities and that, as a consequence, privacy decisions will improve. We therefore hypothesize that privacy behavior will shift towards safer choices when potential consequences are made explicit:
Hypothesis 3a: App preference will be affected by the availability of explicit consequence information
in privacy warnings.
Hypothesis 3b: App preference will result in the selection of apps with lower privacy risk severity
when explicit consequence information is available.
Prior research suggests that the presence of privacy notices positively affects the perceived
trustworthiness of a service provider (LaRose and Rifon 2006; Wang et al. 2004). In our context, app
market providers are service providers offering smartphone apps to smartphone users. Based on these
findings, we further expect that more effective privacy notices in an app market will have a positive effect
on the perceived trustworthiness of the app market as the service provider:
Hypothesis 4: The perceived trustworthiness of an app market will be higher when it provides
privacy information with explicit consequence information.
We introduce explicit privacy warnings to provide additional trust and risk information, but ultimately we
aim to increase the quality of privacy information. We expect that adding explicit consequence
information to privacy warnings will increase the perceived privacy information quality. We further
expect that explicit consequence information in privacy warnings will make the comparison of apps
regarding their privacy properties easier and more enjoyable:
Hypothesis 5a-5f: The perceived information quality (amount / believability / interpretability /
relevance / understandability / correctness) of privacy warnings will be higher when explicit
consequence information is available.
Hypothesis 6: The perceived enjoyment of comparing apps regarding their privacy properties will be
higher when explicit consequence information is available in privacy warnings.
Method
We tested the effectiveness of the proposed privacy risk communication scheme with explicit consequence
information by conducting an online experiment in which individual participants were presented with several app descriptions in a fictive app market called "SMART App Shop". The anonymous online experiment was conducted at the beginning of 2014 over a period of seven weeks.
Participants
Participants were recruited through advertisements on online platforms such as Facebook and student forums on university web sites, as well as by e-mail. Recipients of the advertisement were asked to further spread the
advertisement to friends, colleagues, or relatives. We invited people to partake in an online experiment
about "Experiences with App Discovery". We wanted to mitigate priming effects by not mentioning the topic of privacy upfront. There were no specific requirements for participation; thus, the sample was drawn from the
general population. The participants remained anonymous and they were not compensated by any
financial means. In total, 94 participants started the experiment, out of which 71 participants completed it
(attrition rate = 24.47%). In the data analysis, we only used the data from the 71 individuals who
completed the experiment. The sample included 24 female and 47 male participants with an average age
of 32.87 (SD = 10.02; range = 18 - 60). Regarding experiences with smartphone usage, 52.1% of the
participants had the most experience with the Android platform, 35.2% with the iOS platform, 7% with the
Symbian platform, 4.2% with the Windows Phone platform, and 1.4% with the Blackberry platform. The
average number of apps that the participants used regularly (at least once a week) was 10.07 (SD = 11.81).
Regarding the period of using a smartphone, 9.8% of the participants used a smartphone for less than 1
year, 38% between 1-3 years, and 52.1% for more than 3 years.
Materials
We implemented the experiment as an online survey using the LimeSurvey³ software. The survey first
collected some demographic information and information about prior experiences with smartphone
usage. We further measured privacy concerns with smartphone apps and general trust towards apps.
Since existing scales for privacy concerns and perceived trustworthiness were not specific enough for our
purposes, we developed the required measurement instruments that are more targeted towards our
smartphone app scenarios⁴. The smartphone privacy concern scale we used consists of 10 items (Cronbach's
Alpha = .891; example item: “It is important to me to know which personal data is accessed by my apps.”).
The trust scale consists of 3 items (Cronbach's Alpha = .783; example item: "I believe that apps only
access personal data when it is required for the functionality.”). Both scales were rated on 6-point Likert
scales ranging from "strongly disagree" to "strongly agree". We further included the 10-item General Self-Efficacy scale in our survey (example item: "I can always manage to solve difficult problems if I try hard enough."), which had to be rated on the proposed 4-point Likert scale ranging from "not at all true" to "exactly true" (Schwarzer and Jerusalem 1995). To measure perceived privacy risk of individual apps, we
slightly modified the 3-item perceived risk scale developed by Featherman and Pavlou (2003) to make it
applicable for our purposes (example item: "Using this app would cause me to lose control over the privacy of my personal information."). To measure perceived trustworthiness of specific apps, we used a slightly adapted version of the trust scale developed by Wang et al. (2004) (example item: "I believe that the app provider is not likely to sell my personal information."). We measured six dimensions of perceived privacy information quality based on the scales developed by Lee et al. (2002). The six dimensions that we were interested in were amount (example item: "The privacy information is of sufficient volume."), believability (4 items; example item: "The privacy information is believable."), interpretability (4 items; example reversed item: "The privacy information is difficult to interpret."), relevance (3 items; example item: "The privacy information is useful."), understandability (4 items; example item: "The privacy information is easy to comprehend."), and correctness (4 items; example reversed item: "The privacy information is incorrect."). To measure the perceived trustworthiness of the fictive app market used in the study, we used another 8-item perceived trustworthiness scale (example item: "I can count on SMART App Shop to protect my privacy."), based on the trust instruments developed by Jarvenpaa et al. (2000) and Fogel and Nehmad (2009). Finally, we used and slightly adapted the 2-item enjoyment and ease of comparison scale (example item: "Comparing apps in the SMART App Shop regarding their privacy friendliness was an enjoyable experience.") used by Kelley et al. (Kelley, Bresee, et al. 2009).

³ https://www.limesurvey.org/

⁴ The scales were developed by employing PCA-based exploratory factor analysis on a pool of items that we iteratively developed and improved in multiple rounds of focus group meetings.
Experimental Design
The design of our experiment is partly adapted from and inspired by the experiments conducted by
Laughery et al. (1993) to study the effectiveness of explicit information in physical product warnings. We
used a between-subjects design, while we manipulated the contents of the privacy warnings in the
experimental part of the survey. Figure 2 illustrates the process of the experiment. The survey started with
the collection of demographic information and questions about smartphone usage experiences, followed
by the smartphone app privacy concern, trust, and self-efficacy scales. We explicitly measured privacy
concern and trust before the actual experiment started to avoid any bias that might have been caused by
the experimental scenario. Subsequently, introductory information about the experiment was presented to explain the experimental setting, in which the participants act as smartphone users who search for apps with specific functionality (a flashlight app and a weight diary tool) and therefore visit
the fictive app market called “SMART App Shop” to use its search function. The actual search was
simulated and part of the background story, i.e. the participants did not have to type in anything. The
random group assignment took place in the background, via random number generation, while the introductory page was shown. Following the introductory page, participants were presented with a list of four flashlight apps
and respective descriptions that ostensibly resulted from the search (cf. Figure 3). The following
information about the apps was provided: generic name (e.g. “Flashlight 1”), icon, short description of the
core functionality, and privacy information. The list of apps was static, i.e. all participants were presented
the same set of apps with the same icons and descriptions in the same order, only the privacy information
presented was different between the two experimental conditions (without explicit consequence
information in the control group and with explicit consequence information in the experimental group).
To test our hypotheses, we manipulated the privacy information of the apps in two ways. Firstly, explicit consequence information (framed parts in Figure 3) was shown only in the experimental group (between-subjects design), while the control group was presented only the permission requests. Secondly, the four apps presented were varied in their number of permissions requested and in their privacy risk severity based on the actual permissions requested (within-subjects design). In this way, we created the different app profiles required for hypothesis testing (cf. Figure 4). As mentioned before, we did not use the apps matching the "trivial" profiles for hypothesis testing. Yet, we introduced them to the
experiment for the sake of completeness. The set of permission requests that we used in our experiment is
a subset of existing permissions in the Android framework and smartphone ecosystem, some of which have no or little privacy consequences (e.g. "control vibration" or "prevent device from sleeping") and others of which have high privacy consequences (e.g. "precise location" or "read call history"). To create the required app profiles, we used a respective subset of those permissions for each app. The potential privacy-related consequences per app were then derived from the permissions granted to the individual app. For example, an app that requested only permissions with low or no privacy consequences was classified as having low privacy risk severity, whereas an app that had at least one permission with high privacy consequences was classified as having high privacy risk severity.
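The severity rule described above can be captured in a few lines. The following Python sketch illustrates the rule (an app is labeled high risk severity as soon as at least one requested permission carries high privacy consequences) together with one possible way of deriving explicit consequence text from permissions. The permission-to-consequence mapping and the wording are hypothetical; the mapping actually used in the study was derived qualitatively, and the warning texts shown to participants are those in Figure 3.

```python
# Illustrative sketch of the severity rule and consequence derivation described above.
# The permission-to-consequence mapping below is a hypothetical example.

HIGH_CONSEQUENCE = {
    "PRECISE_LOCATION": "movement patterns and frequently visited places",
    "READ_CALL_HISTORY": "social relationships and communication habits",
}
LOW_CONSEQUENCE = {"CONTROL_VIBRATION", "PREVENT_DEVICE_FROM_SLEEPING"}

def risk_severity(permissions):
    """'high' if at least one requested permission has high privacy consequences."""
    return "high" if any(p in HIGH_CONSEQUENCE for p in permissions) else "low"

def consequence_warning(permissions):
    """Explicit consequence information derived from the requested permissions."""
    consequences = [HIGH_CONSEQUENCE[p] for p in permissions if p in HIGH_CONSEQUENCE]
    if not consequences:
        return "No implicit revelations of private information are expected."
    return "Over time, this app could implicitly reveal: " + "; ".join(consequences) + "."

if __name__ == "__main__":
    # Hypothetical profiles: few permissions but high severity vs. more permissions but low severity.
    high_severity_low_permissions = ["PRECISE_LOCATION"]
    low_severity_high_permissions = ["CONTROL_VIBRATION", "PREVENT_DEVICE_FROM_SLEEPING"]
    for name, perms in [("H-S/L-P app", high_severity_low_permissions),
                        ("L-S/H-P app", low_severity_high_permissions)]:
        print(f"{name}: severity = {risk_severity(perms)}")
        print(f"  {consequence_warning(perms)}")
```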
Figure 2. Process of the experiment with between-subjects design.
Figure 3. The flashlight apps as presented in the experiment. The parts in dotted frames
(explicit consequence information) were only shown in the experimental condition.
Figure 4. Risk severity and permission profiles of the apps used in the experiment.
The classification of the permissions regarding the degree of their potential privacy consequences was performed in a rational, qualitative manner prior to the start of the study. We introduced two
app categories (flashlight apps and health apps) to split hypothesis testing between these two categories.
While we targeted the flashlight app profiles towards the hypotheses concerning user perceptions
(hypotheses 1a-e and 2a-c), the health app profiles were designed in a way to best help testing the
hypotheses regarding privacy behavior (app selection according to hypotheses 3a and 3b). All hypotheses
could in principle also have been tested with a single category of apps, but we wanted to limit the required number of apps per category (more app profiles would have been necessary) and to avoid the boredom effects of using only one category. We were not interested in effects caused by the category of the apps. The first question asked the
participants to state their preferred flashlight app based on the available information. Next, they were
asked to put the four flashlight apps into an order regarding their privacy-intrusiveness. After that, the
participants had to answer the questions regarding perceived privacy risk and perceived trustworthiness
of the apps. These questions had to be answered individually for each app. The same procedure followed
in the next app category (health apps) on the following page of the survey. After this experimental part,
the group splitting ended and all participants had to answer the questions about perceived privacy
information quality in the SMART App Shop, perceived trustworthiness of the SMART App Shop, and
enjoyment of comparison of apps in the SMART App Shop. The experiment ended afterwards.
Results
Prior to hypothesis testing, we were interested in whether the two experimental groups that resulted from the random assignment come from the same population. The assignment resulted in 33 participants in
the control group and 38 participants in the experimental group. We calculated Mann-Whitney U tests⁵ to
determine whether the two groups differ regarding the control variables age, self-efficacy, privacy
concern, trust, period of using smartphones, and number of apps used regularly. The test statistics did not
reveal any significant differences between the two groups regarding the tested parameters. To test the
distribution regarding gender and smartphone platform used, we calculated Pearson’s chi-square test
statistics. The test did not reveal any statistically significant difference between the two groups regarding those parameters. We therefore found no evidence that the two samples stem from different populations with respect to the tested characteristics.

⁵ In our data analyses, we calculated only non-parametric tests since normality tests revealed that our data are significantly non-normal; the assumptions for parametric tests are therefore not met.
To test Hypothesis 1a, we conducted a Mann-Whitney U test with perceived privacy risk of the “Flashlight
4” app (H-S/L-P) as dependent variable and the experimental condition as independent variable.
Perceived privacy risk of this app in the control group shows a mean rank of 33.91, while the mean rank in
the experimental group is 37.82. The mean ranks differ in the expected direction. However, the test
statistics revealed that the difference is not statistically significant (cf. Table 2). Hypothesis 1a is
therefore not supported. To test Hypothesis 1b, we calculated a Mann-Whitney U test with perceived
privacy risk of the “Flashlight 1” app (L-S/H-P) as dependent variable (cf. Table 2). The mean rank of
perceived privacy risk of this app in the control group was 38.56 and in the experimental group 33.78. The
test results suggest rejecting Hypothesis 1b, since the difference was not statistically significant.
In the ranking task, the participants sorted the four apps according to the perceived privacy-intrusiveness.
We used the assigned ranking position (1 to 4) of each app as its perceived privacy-intrusiveness value.
Thus, a higher value is associated with higher perceived privacy-intrusiveness. We conducted a Mann-
Whitney U test with the participant-assigned privacy-intrusiveness rank of the “Flashlight 4” app (H-S/L-
P) as the dependent variable and the experimental condition as the independent variable. The mean rank
of privacy-intrusiveness of this app in the control group is 30.12 and in the experimental group 41.11. The
test statistics (cf. Table 2) revealed that the mean ranks differ significantly, p < .01.
Hypothesis 1c is therefore supported. We calculated the same test with the privacy-intrusiveness rank of
the “Flashlight 1” app (L-S/H-P). The mean privacy-intrusiveness rank in the control group is 42.36 and
in the experimental group 30.47. The test statistics (cf. Table 2) revealed a statistically significant
difference between the two mean ranks, p < .01. Therefore, Hypothesis 1d is supported.
To test Hypothesis 1e, we designed the apps "Flashlight 1" (L-S/H-P) and "Flashlight 4" (H-S/L-P)
accordingly. We conducted a Wilcoxon Signed Rank test for dependent samples (experimental group)
with perceived privacy risk of “Flashlight 1” (Mdn = 3.00) and “Flashlight 4” (Mdn = 3.83) as input
variables. The test revealed a statistically significant difference, z = -1.738, p < .05, r = -.28. As a
consequence, Hypothesis 1e is supported.
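For illustration, the following Python sketch (using SciPy and made-up rating data, not the study data) shows how this type of analysis can be computed: a two-sided Mann-Whitney U test with the effect size r = z/√N, and a Wilcoxon signed-rank test for the within-group comparison of two apps rated by the same participants. The z value is obtained from the normal approximation of U, ignoring tie corrections for brevity.

```python
import numpy as np
from scipy import stats

def mann_whitney_with_r(group_a, group_b):
    """Two-sided Mann-Whitney U test plus the effect size r = z / sqrt(N)."""
    u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    n1, n2 = len(group_a), len(group_b)
    mu_u = n1 * n2 / 2.0                               # mean of U under the null hypothesis
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # SD of U (no tie correction)
    z = (u - mu_u) / sigma_u
    return u, p, z / np.sqrt(n1 + n2)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Hypothetical 6-point perceived-risk ratings (control n = 33, experimental n = 38).
    control = rng.integers(1, 7, size=33)
    experimental = rng.integers(2, 7, size=38)
    u, p, r = mann_whitney_with_r(control, experimental)
    print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}, r = {r:.2f}")

    # Within-group comparison of two apps rated by the same participants
    # (Wilcoxon signed-rank test, as used for Hypotheses 1e and 2c).
    risk_app_a = rng.integers(1, 7, size=38)
    risk_app_b = rng.integers(2, 7, size=38)
    w, p_w = stats.wilcoxon(risk_app_a, risk_app_b)
    print(f"Wilcoxon W = {w:.1f}, p = {p_w:.3f}")
```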
To test Hypothesis 2a, we calculated a Mann-Whitney U test with perceived trustworthiness of the
“Flashlight 4” app (H-S/L-P) as dependent variable. The mean rank of perceived trustworthiness of this
app in the control group is 39.06, while the mean rank in the experimental group is 33.34. The test
statistics (cf. Table 2) revealed that the difference is not statistically significant. Hypothesis 2a is therefore not supported. To test Hypothesis 2b, we conducted the same test with the perceived
trustworthiness of the “Flashlight 1” app (L-S/H-P) as dependent variable. The mean rank of perceived
trustworthiness of this app in the control group was 26.62 and in the experimental group 44.14. The
difference was statistically significant, p < .001 (cf. Table 2). Hypothesis 2b is therefore supported.
A Wilcoxon Signed Rank test for dependent samples revealed that perceived trustworthiness of
“Flashlight 1” (Mdn = 4.50) differs significantly from perceived trustworthiness of “Flashlight 4” (Mdn =
2.25) in the experimental group, z = -3.501, p < .001, r = -.57. Hypothesis 2c is therefore supported.
To test Hypotheses 3a and 3b, we analyzed the app preferences in the health application category, where the apps,
except for “Health 2”, did not differ in the numbers of permissions requested but in the privacy risk
severity. In the control group, 12.1% of the participants preferred “Health 1”, 60.6% preferred “Health 2”,
3% preferred "Health 3", and 24.2% preferred "Health 4". In the experimental group, 15.8% preferred "Health 1", 57.9% preferred "Health 2", 23.7% preferred "Health 3", and 2.6% preferred "Health 4". A Pearson's chi-square test revealed a significant association between the experimental condition and app preference, χ2(3) = 12.05, p < .01, Cramér's V = .412. Thus, Hypothesis 3a is
supported. Regarding the privacy risk severity of the preferred apps, in the control group, 75.8% of the
participants preferred an app with low privacy risk severity, and 24.2% preferred an app with high privacy
risk severity. In the experimental group, 97.4% of the participants preferred an app with low privacy risk
severity, and 2.6% preferred an app with high privacy risk severity. There was a significant association between the type of privacy warning presented and the privacy risk severity of the preferred apps, χ2(1) = 7.45, p < .01, Cramér's V = .324. Hypothesis 3b is therefore supported.
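The corresponding analysis for the preference data can be sketched as follows (Python with SciPy). The group-by-preference counts are reconstructed from the percentages reported above (control n = 33, experimental n = 38) for illustration; this is not the authors' analysis script.

```python
import numpy as np
from scipy import stats

def chi_square_with_cramers_v(contingency_table):
    """Pearson chi-square test of association plus Cramér's V as effect size."""
    table = np.asarray(contingency_table, dtype=float)
    chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, p, dof, v

if __name__ == "__main__":
    # App preference counts (Health 1..4) per group, reconstructed from the
    # reported percentages (control group n = 33, experimental group n = 38).
    preferences = [
        [4, 20, 1, 8],   # control group
        [6, 22, 9, 1],   # experimental group
    ]
    chi2, p, dof, v = chi_square_with_cramers_v(preferences)
    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {v:.3f}")
```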
Hypothesis 4 was tested by conducting a Mann-Whitney U test with the perceived trustworthiness of
the SMART App Shop as dependent variable and the experimental condition as independent variable.
Perceived trustworthiness of the SMART App Shop in the control group shows a mean rank of 28.35 and
in the experimental group 42.64. The test statistics (cf. Table 2) revealed that the difference is statistically
significant, p < .01. Hypothesis 4 is therefore supported.
We further calculated individual Mann-Whitney U tests with each privacy information quality scale as
dependent variable and the experimental condition as independent variable. The test results (cf. Table 2)
indicate that the mean ranks were higher in the experimental group in all six dimensions; only for the correctness dimension was the difference not statistically significant. Hypotheses 5a-5e are therefore supported, while Hypothesis 5f is not supported.
Finally, a Mann-Whitney U test with perceived enjoyment of comparison as dependent variable and
experimental condition as independent variable revealed that enjoyment of comparison was significantly higher in the experimental group, p < .01 (cf. Table 2). Hypothesis 6 is therefore supported.
Table 2. Results of Mann-Whitney U tests; ¹(H-S/L-P): app with high privacy risk severity and low number of permissions; ²(L-S/H-P): app with low privacy risk severity and high number of permissions; Inf. Qual. = Information Quality; the testable hypotheses are indicated per construct; mean ranks are based on the total sample (n = 71); * p < .05, ** p < .01, *** p < .001

Scale                              | Hyp. | Mean Rank Control | Mean Rank Experimental | U      | sig. | r
Perceived Risk (H-S/L-P)¹          | 1a   | 33.91             | 37.82                  | 558.00 | .214 | -.09
Perceived Risk (L-S/H-P)²          | 1b   | 38.56             | 33.78                  | 542.50 | .166 | -.12
Privacy Rank (H-S/L-P) **          | 1c   | 30.12             | 41.11                  | 433.00 | .009 | -.28
Privacy Rank (L-S/H-P) **          | 1d   | 42.36             | 30.47                  | 417.00 | .005 | -.31
Trustworthiness (H-S/L-P)¹         | 2a   | 39.06             | 33.34                  | 526.00 | .123 | -.14
Trustworthiness (L-S/H-P)² ***     | 2b   | 26.62             | 44.14                  | 317.50 | .000 | -.42
Trust SMART App Shop **            | 4    | 28.35             | 42.64                  | 374.50 | .002 | -.35
Inf. Qual. (Amount) *              | 5a   | 31.23             | 40.14                  | 469.50 | .034 | -.22
Inf. Qual. (Believability) *       | 5b   | 31.42             | 39.97                  | 476.00 | .040 | -.21
Inf. Qual. (Interpretability) **   | 5c   | 28.83             | 42.22                  | 390.50 | .003 | -.33
Inf. Qual. (Relevance) *           | 5d   | 31.64             | 39.79                  | 483.00 | .048 | -.20
Inf. Qual. (Understandability) **  | 5e   | 28.36             | 42.63                  | 375.00 | .002 | -.35
Inf. Qual. (Correctness)           | 5f   | 32.62             | 38.93                  | 515.50 | .100 | -.15
Enjoyment of Comparing **          | 6    | 28.17             | 42.80                  | 368.50 | .002 | -.36
Discussion
When comparing perceived privacy risks of apps between the two experimental groups (Hypotheses 1a and 1b), we observed notable differences in the expected direction; however, these differences were not statistically significant. Explicitness nevertheless had a significant effect on individuals’ ability to sort different apps according to their actual privacy-intrusiveness. Participants provided with explicit consequence information were more accurate in distinguishing apps with high privacy risk severity from apps with low privacy risk severity (Hypotheses 1c, 1d, and 1e). These results support the theory that communicating explicit consequence information in privacy warnings helps people recognize the actual risk severity of apps. A potential explanation for the absence of significant effects on (absolute) perceived privacy risk when comparing the same apps between the groups is the uncertainty that accompanies people’s assessments of consequences and that stems from the incompleteness of the available information (Acquisti and Grossklags 2005). Uncertainty might have inflated perceived risk and consequently created a “baseline” perceived risk in the control group, independent of the number of permissions requested. Thus, variance in perceived risk might be driven by different mechanisms in the two conditions.
We further expected an effect of explicitness on the perceived trustworthiness of apps. In particular, the trustworthiness of apps with low privacy risk severity increased significantly when explicit risk communication was used (Hypothesis 2b), while the difference in perceived trustworthiness was not significant when the app had a high privacy risk severity (Hypothesis 2a). There was also a significant effect of explicitness on the trustworthiness of two apps with diametrically opposed risk severity and permission profiles (Hypothesis 2c). This further indicates the effectiveness of explicitness in supporting more accurate trust assessments. Beyond helping users make more accurate risk and trust assessments of individual apps, the introduction of explicit consequence information had an overall positive and significant effect on the perceived trustworthiness of the app market (Hypothesis 4). Individuals perceived the app market with an explicit communication of privacy risks as being more dedicated to protecting user privacy. This is in line with previous findings suggesting a general positive effect of the availability of privacy notices on the perceived trustworthiness of a service (LaRose and Rifon 2007).
In terms of influencing privacy behavior, a more explicit communication of potential privacy risks helped participants identify the safer options more accurately (Hypotheses 3a and 3b). Explicit information significantly reduced the preference for apps with high risk severity. This suggests that such privacy indicators provide an improved foundation for performing privacy calculi. Finally, perceived privacy information quality increased significantly with regard to the amount, believability, interpretability, relevance, and understandability of privacy information (Hypotheses 5a–5e). Especially the positive effect on perceived interpretability can be regarded as an indicator of reduced uncertainty, since participants were more confident in their understanding of the risks. Another metric for the efficiency of the proposed explicit warnings is the perceived enjoyment of comparing apps, which was significantly higher when explicit risk information was presented (Hypothesis 6). Thus, participants did not only prefer the safer options; the process of identifying the safer option was also perceived as more efficient. This suggests that explicit privacy warnings perform well in risk communication.
Practical Implications and Limitations
A practical implication of our results is that information privacy warning design should make risk communication more explicit regarding the potential consequences of privacy behavior. The use of information technology is part of people’s everyday life, and individuals can choose information technology products from a vast number of alternatives. Products vary in their functionality as well as in their privacy-impacting properties. Individuals should therefore be provided with a fair basis for informed decision-making regarding service selection and use. Our results suggest that adding explicitness to consequence information is an appropriate way to inform people about the privacy-impacting properties of services and leads to more accurate risk assessments. Platform providers such as app markets can increase their trustworthiness by introducing a privacy warning scheme with more explicit consequence information, while the consequences for individual service providers are not necessarily negative. Especially if the privacy risk severity of their services is low, service providers can profit from the increased trustworthiness owing to explicitness. As a theoretical implication of our findings, we suggest that privacy warning research should enhance design theory development with new perspectives on privacy risk
conceptualizations. We investigated the effects of explicitness of consequence information using only a small set of potential consequences inspired by data mining-based information revelation potentials. Future research could elaborate on additional potential consequences or on improved conceptualizations of potential consequences. Furthermore, when including perceived privacy risk as a variable in research, we suggest controlling for potential effects of uncertainty on perceived risk.
There are some limitations to this study. First, the study and its findings are limited to the domain of smartphone apps. While we believe that privacy-related consequences of privacy behavior are similar in other domains (e.g., social network services), services with different characteristics might entail different consequences to consider. Further, we investigated the effects with a relatively small number of participants, a limited number of apps, and a small set of potential consequences. The effects might be different in a real setting with more targeted consequence information. For use in practice, a more detailed search for potential consequences should be performed. So far, there are no established ways of anticipating and reliably describing data mining-based consequences; however, based on research results that demonstrate the potential in general, we believe that this is an open and interesting research challenge. Furthermore, we did not empirically validate the privacy risk severity profiles of the apps used in this experiment, which poses a potential threat to the external validity of the study. In the experimental group, participants may have been influenced by the explicit consequence information without paying attention to the quantity and quality of permission requests. Our experimental design does not allow us to make statements regarding this aspect. To elaborate on this, additional conditions could be added to the experimental design in which the permissions are manipulated in addition to the presence of explicit consequence information. Also, we did not consider factors such as purpose-binding or context, which add to the perception of privacy risks and to the privacy risk severity of apps, nor did we integrate our control variables into our statistical tests to better control for their effects using different statistical techniques. Especially regarding contextual influences on privacy behavior, we assumed a rational perspective on decision-making, i.e., we based our research on the assumption that higher privacy risks lead to lower willingness to divulge personal information and vice versa. Nevertheless, John et al. (2009) showed in a series of experiments that disclosure is responsive to contextual factors that have little to do with the actual costs and benefits of divulging information; for example, the mere presentation of assurances regarding a privacy-friendly handling of revealed personal information can reduce the willingness to divulge personal information. However, our focus was on improving individuals’ ability to understand the potential risks of divulging sensitive information, and our results provide first indications of the effectiveness of privacy warnings with explicit consequence information.
Conclusion
In this study, we have experimentally tested the effects of adding explicitness to consequence information in privacy risk communication in the context of smartphone app markets. Our results suggest that explicitness helps individuals perform risk and trust assessments more accurately and more efficiently. When provided with explicit consequence information, people were more accurate in sorting apps according to their true privacy-intrusiveness; that is, they were better able to distinguish apps with high privacy risk severity from apps with low privacy risk severity. Making consequences more explicit also had an effect on the perceived trustworthiness of apps; especially apps with low privacy risk severity profit from explicitness. An indication of reduced uncertainty in decision-making is that adding explicitness to privacy warnings also positively affected perceived privacy information quality, especially the dimensions of amount, believability, interpretability, relevance, and understandability. Further, participants perceived the comparison of apps regarding their privacy-intrusiveness as more efficient. Our systematic privacy risk analysis of smartphone app usage forms another contribution towards the conceptualization of privacy-related consequences of privacy behavior. We proposed explicit privacy warnings as a measure to reduce uncertainty in privacy risk assessments and to provide individuals with a better basis for making informed decisions regarding app selection and use. Existing privacy risk communication methods have been shown to be ineffective, and users tend to trust other anchors such as user reviews. Our study demonstrates the potential for improving individuals’ privacy risk and trust assessments by providing more useful risk information. Not only users can profit; app providers and app markets can also increase their trustworthiness. We contribute design knowledge to the field of privacy risk communication, particularly in the domain of smartphone technology.
References
Acquisti, A., and Grossklags, J. 2005. “Privacy and rationality in individual decision making,” IEEE Security & Privacy (3:1), pp. 26–33.
Acquisti, A., and Grossklags, J. 2007. “What can behavioral economics teach us about privacy,” Digital Privacy: Theory, Technologies and Practices, pp. 367–377.
Bai, G., Gu, L., Feng, T., Guo, Y., and Chen, X. 2010. “Context-Aware Usage Control for Android,” in Security and Privacy in Communication Networks, Springer, pp. 326–343.
Bal, G. 2012. “Revealing Privacy-Impacting Behavior Patterns of Smartphone Applications,” in MoST 2012 - Proceedings of the Mobile Security Technologies Workshop 2012, San Francisco, USA.
Barlow, T., and Wogalter, M. 1991. “Increasing the surface area on small product containers to facilitate communication of label information and warnings,” in Proceedings of Interface ’91.
Bravo-Lillo, C., Cranor, L., Downs, J., Komanduri, S., and Sleeper, M. 2011. “Improving Computer Security Dialogs,” in Human-Computer Interaction – INTERACT 2011, Lecture Notes in Computer Science, P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, and M. Winckler (eds.), (Vol. 6949) Berlin, Heidelberg, pp. 18–35.
Camp, L. J. 2009. “Mental Models of Privacy and Security,” IEEE Technology and Society Magazine (28:3), pp. 37–46.
Chia, P. H., Yamamoto, Y., and Asokan, N. 2012. “Is this App Safe? A Large Scale Study on Application Permissions and Risk Signals,” in WWW ’12 - Proceedings of the 21st International Conference on World Wide Web, Lyon, France, pp. 311–320.
Chin, E., Felt, A. P., Greenwood, K., and Wagner, D. 2011. “Analyzing inter-application communication in Android,” in Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services - MobiSys ’11, New York, New York, USA, p. 239.
Chittaranjan, G., Blom, J., and Gatica-Perez, D. 2011. “Mining Large-scale Smartphone Data for Personality Studies,” Personal and Ubiquitous Computing (17:3), pp. 433–450.
Choi, S. S., and Choi, M.-K. 2007. “Consumer’s Privacy Concerns and Willingness to Provide Personal Information in Location-Based Services,” in The 9th International Conference on Advanced Communication Technology, pp. 2196–2199.
Conti, M., Nguyen, V. T. N., and Crispo, B. 2010. “CRePE: Context-related Policy Enforcement for Android,” in ISC’10 - Proceedings of the 13th International Conference on Information Security, October 25-28, 2010, Boca Raton, FL, USA, pp. 331–345.
Conti, M., Zachia-Zlatea, I., and Crispo, B. 2011. “Mind how you answer me!: transparently authenticating the user of a smartphone when answering or placing a call,” in Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, pp. 249–259.
Cranor, L. F., Arjula, M., and Guduru, P. 2002. “Use of a P3P user agent by early adopters,” in Proceedings of the ACM Workshop on Privacy in the Electronic Society - WPES ’02, New York, New York, USA, pp. 1–10.
Davi, L., Dmitrienko, A., Sadeghi, A.-R., and Winandy, M. 2010. “Privilege Escalation Attacks on Android,” in Proceedings of the 13th International Conference on Information Security - ISC’10, Lecture Notes in Computer Science, M. Burmester, G. Tsudik, S. Magliveras, and I. Ilić (eds.), (Vol. 6531) Berlin, Heidelberg, pp. 346–360.
Dinev, T., and Hart, P. 2006. “An Extended Privacy Calculus Model for E-Commerce Transactions,” Information Systems Research (17:1), pp. 61–80.
Doney, P., and Cannon, J. 1997. “An Examination of the Nature of Trust in Buyer-Seller Relationships,” The Journal of Marketing (61:2), pp. 35–51.
Eagle, N., Pentland, A. S., and Lazer, D. 2009. “Inferring Social Network Structure using Mobile Phone Data,” Proceedings of the National Academy of Sciences, p. 9.
Egele, M., Kruegel, C., Kirda, E., and Vigna, G. 2011. “PiOS: Detecting Privacy Leaks in iOS Applications,” in Proceedings of the 18th Annual Network & Distributed System Security Symposium (NDSS), 6-9 February 2011, San Diego, California.
Egelman, S., Tsai, J., Cranor, L. F., and Acquisti, A. 2009. “Timing is everything?: the effects of timing and placement of online privacy indicators,” in Proceedings of the 27th International Conference on Human Factors in Computing Systems - CHI ’09, New York, New York, USA, p. 319.
Enck, W., Gilbert, P., Chun, B., Cox, L. P., Jung, J., McDaniel, P., and Sheth, A. N. 2010. “TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones,” in Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI).
Enck, W., Octeau, D., McDaniel, P., and Chaudhuri, S. 2011. “A Study of Android Application Security,” in SEC’11 - Proceedings of the 20th USENIX Conference on Security, 8-12 August 2011, San Francisco, USA.
Enck, W., Ongtang, M., and McDaniel, P. 2009. “On Lightweight Mobile Phone Application Certification,” in Proceedings of the 16th ACM Conference on Computer and Communications Security - CCS ’09, New York, New York, USA, p. 235.
Featherman, M. S., and Pavlou, P. A. 2003. “Predicting e-services adoption: a perceived risk facets perspective,” International Journal of Human-Computer Studies (59:4), pp. 451–474.
Felt, A. P., Chin, E., Hanna, S., Song, D., and Wagner, D. 2011. “Android permissions demystified,” in Proceedings of the 18th ACM Conference on Computer and Communications Security - CCS ’11, New York, New York, USA, p. 627.
Felt, A. P., Greenwood, K., and Wagner, D. 2011. “The effectiveness of application permissions,” in WebApps’11 - Proceedings of the 2nd USENIX Conference on Web Application Development, p. 7.
Fogel, J., and Nehmad, E. 2009. “Internet social network communities: Risk taking, trust, and privacy concerns,” Computers in Human Behavior (25:1), pp. 153–160.
Frantz, J. P., and Rhoades, T. P. 1993. “A Task-Analytic Approach to the Temporal and Spatial Placement of Product Warnings,” Human Factors: The Journal of the Human Factors and Ergonomics Society (35:4), pp. 719–730.
George, J. F. 2004. “The theory of planned behavior and Internet purchasing,” Internet Research (14:3), pp. 198–212.
Jarvenpaa, S. L., Tractinsky, N., and Vitale, M. 2000. “Consumer trust in an Internet store,” Information Technology and Management (1), pp. 45–71.
John, L. K., Acquisti, A., and Loewenstein, G. F. 2009. “The Best of Strangers: Context Dependent Willingness to Divulge Personal Information,” SSRN Electronic Journal.
Kelley, P. G., Bresee, J., Cranor, L. F., and Reeder, R. W. 2009. “A ‘nutrition label’ for privacy,” in Proceedings of the 5th Symposium on Usable Privacy and Security - SOUPS ’09, New York, New York, USA, p. 1.
Kelley, P. G., Cesca, L., Bresee, J., and Cranor, L. F. 2009. “Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach,” in CHI ’10 - Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, pp. 1573–1582.
Kelley, P. G., Consolvo, S., Cranor, L. F., Jung, J., Sadeh, N., and Wetherall, D. 2012. “A Conundrum of Permissions: Installing Applications on an Android Smartphone,” in Proceedings of USEC 2012, pp. 1–12.
Kline, P. B., Braun, C. C., Peterson, N., and Silver, N. C. 1993. “The Impact of Color on Warnings Research,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting (37:14), pp. 940–944.
Krasnova, H., Eling, N., Schneider, O., Wenninger, H., Widjaja, T., and Buxmann, P. 2013. “Does this App Ask for Too Much Data? The Role of Privacy Perceptions in User Behavior Towards Facebook Applications and Permission Dialogues,” in Proceedings of the 21st European Conference on Information Systems, pp. 1–12.
LaRose, R., and Rifon, N. 2006. “Your privacy is assured - of being disturbed: websites with and without privacy seals,” New Media & Society (8:6), pp. 1009–1029.
LaRose, R., and Rifon, N. J. 2007. “Promoting i-Safety: Effects of Privacy Warnings and Privacy Seals on Risk Assessment and Online Privacy Behavior,” The Journal of Consumer Affairs (41:1), pp. 127–149.
Laughery, K. R., Vaubel, K. P., Young, S. L., Brelsford, J. W., and Rowe, A. L. 1993. “Explicitness of Consequence Information in Warnings,” Safety Science (16:5-6), pp. 597–613.
Laughery, K. R., and Wogalter, M. S. 2006. “Designing Effective Warnings,” Reviews of Human Factors and Ergonomics (2:1), pp. 241–271.
Laughery, K., and Smith, D. 2006. “Explicit Information in Warnings,” in Handbook of Warnings, M. S. Wogalter (ed.), Lawrence Erlbaum Associates, Mahwah, NJ, pp. 419–428.
Laughery, K., and Wogalter, M. 1997. “Risk Perception and Warnings,” in Handbook of Human Factors and Ergonomics, G. Salvendy (ed.), New York, NY: Wiley-Interscience.
Lederer, S., Hong, J., Dey, A., and Landay, J. 2004. “Personal Privacy Through Understanding and Action: Five Pitfalls for Designers,” Personal and Ubiquitous Computing.
Lee, Y. W., Strong, D. M., Kahn, B. K., and Wang, R. Y. 2002. “AIMQ: a methodology for information quality assessment,” Information & Management (40), pp. 133–146.
Lim, N. 2003. “Consumers’ perceived risk: sources versus consequences,” Electronic Commerce Research and Applications (2:3), pp. 216–228.
Lin, J., Amini, S., Hong, J., Sadeh, N., Lindqvist, J., and Zhang, J. 2012. “Expectation and Purpose: Understanding Users’ Mental Models of Mobile App Privacy through Crowdsourcing,” in Proceedings of the 14th ACM International Conference on Ubiquitous Computing - Ubicomp 2012.
Malhotra, N. K., Kim, S. S., and Agarwal, J. 2004. “Internet Users’ Information Privacy Concerns (IUIPC): The Construct, the Scale, and a Causal Model,” Information Systems Research (15:4), pp. 336–355.
Milne, G. R., and Boza, M.-E. 1999. “Trust and concern in consumers’ perceptions of marketing information management practices,” Journal of Interactive Marketing (13:1), pp. 5–24.
Milne, G. R., and Culnan, M. J. 2004. “Strategies for reducing online privacy risks: Why consumers read (or don’t read) online privacy notices,” Journal of Interactive Marketing (18:3), pp. 15–29.
Min, J., Wiese, J., Hong, J. I., and Zimmerman, J. 2013. “Mining Smartphone Data to Classify Life-Facets of Social Relationships,” in Conference on Computer Supported Cooperative Work and Social Computing 2013.
Pavlou, P. 2003. “Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model,” International Journal of Electronic Commerce (7:3), pp. 101–134.
Phithakkitnukoon, S., Horanont, T., Di Lorenzo, G., Shibasaki, R., and Ratti, C. 2010. “Activity-Aware Map: Identifying Human Daily Activity Pattern Using Mobile Phone Data,” in Human Behavior Understanding, Lecture Notes in Computer Science, A. A. Salah, T. Gevers, N. Sebe, and A. Vinciarelli (eds.), (Vol. 6219) Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 14–25.
Schlegel, R., Kapadia, A., and Lee, A. J. 2011. “Eyeing your exposure: quantifying and controlling information sharing for improved privacy,” in Proceedings of the Seventh Symposium on Usable Privacy and Security - SOUPS ’11, New York, New York, USA, p. 1.
Schwarzer, R., and Jerusalem, M. 1995. “Generalized Self-Efficacy Scale,” in Measures in Health Psychology: A User’s Portfolio. Causal and Control Beliefs, J. Weinmann, S. Wright, and M. Johnston (eds.), Windsor, England: NFER-NELSON, pp. 35–37.
Shi, E., Niu, Y., Jakobsson, M., and Chow, R. 2011. “Implicit Authentication through Learning User Behavior,” in Information Security, Lecture Notes in Computer Science, M. Burmester, G. Tsudik, S. Magliveras, and I. Ilić (eds.), (Vol. 6531) Springer Berlin Heidelberg, pp. 99–113.
Shi, W., Yang, J., and Jiang, Y. 2011. “Senguard: Passive user identification on smartphones using multiple sensors,” in Proceedings of the 2011 IEEE 7th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 141–148.
Van Slyke, C., Johnson, R., and Jiang, J. 2006. “Concern for Information Privacy and Online Consumer Purchasing,” Journal of the Association for Information Systems (7:6), pp. 415–444.
Smith, H. J., Dinev, T., and Xu, H. 2011. “Information Privacy Research: An Interdisciplinary Review,” MIS Quarterly (35:4), pp. 989–A27.
Smith, H. J., Milberg, S. J., and Burke, S. J. 1996. “Information Privacy: Measuring Individuals’ Concerns About Organizational Practices,” MIS Quarterly (20:2), pp. 167–196.
Smith-Jackson, T. L., and Wogalter, M. S. 2006. “Methods and Procedures in Warning Research,” in Handbook of Warnings, Lawrence Erlbaum Associates, pp. 23–33.
Thompson, C., Johnson, M., Egelman, S., Wagner, D., and King, J. 2013. “When it’s better to ask forgiveness than get permission,” in Proceedings of the Ninth Symposium on Usable Privacy and Security - SOUPS ’13, p. 1.
Vision Mobile. 2014. “Developer Economics 2014 Q1.”
Wang, S., Beatty, S. E., and Foxx, W. 2004. “Signaling the trustworthiness of small online retailers,” Journal of Interactive Marketing (18:1), pp. 53–69.
Wei, X., Gomez, L., Neamtiu, I., and Faloutsos, M. 2012. “Permission Evolution in the Android Ecosystem,” in Proceedings of ACSAC 2012.
Weiss, G. M., and Lockhart, J. W. 2011. “Identifying user traits by mining smart phone accelerometer data,” in Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data - SensorKDD ’11, New York, New York, USA, pp. 61–69.
Wogalter, M. S. 2006. “Purposes and Scope of Warnings,” in Handbook of Warnings, M. S. Wogalter (ed.), Lawrence Erlbaum Associates, Mahwah, NJ, pp. 3–9.
Wogalter, M. S., Young, S. L., Brelsford, J. W., and Barlow, T. 1999. “The Relative Contributions of Injury Severity and Likelihood Information on Hazard-Risk Judgments and Warning Compliance,” Journal of Safety Research (30:3), pp. 151–162.
Xu, H., Teo, H., and Tan, B. C. Y. 2005. “Predicting the Adoption of Location-Based Services: The Role of Trust and Perceived Risk,” in ICIS 2005 Proceedings, pp. 897–910.
Young, S. L. 1991. “Increasing the Noticeability of Warnings: Effects of Pictorial, Color, Signal Icon and Border,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, (Vol. 35), pp. 580–584.
Zhang, F., Shih, F., and Weitzner, D. 2013. “No surprises: measuring intrusiveness of smartphone applications by detecting objective context deviations,” in Proceedings of the 12th ACM Workshop on Privacy in the Electronic Society - WPES ’13, New York, New York, USA, pp. 291–296.
Zhou, T. 2011. “The impact of privacy concern on user adoption of location-based services,” Industrial Management & Data Systems (111:2), pp. 212–226.
Zhou, Y., Zhang, X., Jiang, X., and Freeh, V. W. 2011. “Taming information-stealing smartphone applications (on Android),” in Proceedings of the 4th International Conference on Trust and Trustworthy Computing (TRUST’11), pp. 93–107.