Who Wants to Know What When? Privacy Preference
Determinants in Ubiquitous Computing
Scott Lederer, Jennifer Mankoff
Group for User Interface Research
University of California
Berkeley, CA 94720 USA
{lederer, jmankoff}@cs.berkeley.edu
Anind K. Dey
Intel Research, Berkeley
Intel Corporation
Berkeley, CA 94720 USA
anind@intel-research.net
ABSTRACT
We conducted a questionnaire-based study of the relative
importance of two factors, inquirer and situation, in
determining the preferred accuracy of personal information
disclosed through a ubiquitous computing system. We
found that privacy preferences varied by inquirer more than
by situation. That is, individuals were more likely to apply
the same privacy preferences to the same inquirer in
different situations than to apply the same privacy
preferences to different inquirers in the same situation. We
are applying these results to the design of a user interface
for managing everyday privacy in ubiquitous computing.
Keywords
Ubiquitous Computing, Privacy, Social and Legal Issues
INTRODUCTION
As part of our efforts to develop a user interface for
managing personal privacy in ubiquitous computing, we
conducted a questionnaire-based study to determine the
relative importance of two factors, the inquirer’s identity
and the user’s situation at the time of inquiry, in
determining the preferred accuracy of personal information
disclosed to an inquirer through a ubiquitous computing
system. Adjustable accuracy is an important element of
privacy management in ubiquitous computing, allowing for
nuanced disclosure approximating that of traditional social
life. For example, by disclosing one’s location as “New York
City” but not “the corner of 6th Ave. and West 13th St.,” one
reveals just enough information to satisfy certain inquirers
without breaching one’s desired level of privacy. We
developed a study to help identify the stronger determinant
of a user’s preferred accuracy of information disclosure in
ubiquitous computing: the identity of the inquirer, or the
user’s situation at the time of inquiry.
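To make adjustable accuracy concrete, the following sketch blurs a
disclosed location to a requested granularity. It is our own
illustration, not part of any system described here; the Location
fields and accuracy labels are hypothetical.

    # Hypothetical sketch of adjustable-accuracy location disclosure.
    from dataclasses import dataclass

    @dataclass
    class Location:
        street_corner: str  # e.g., "the corner of 6th Ave. and West 13th St."
        city: str           # e.g., "New York City"

    def disclose_location(loc: Location, accuracy: str) -> str:
        # Return the location at the requested accuracy level;
        # any unrecognized level is treated as "undisclosed".
        if accuracy == "precise":
            return f"{loc.street_corner}, {loc.city}"
        if accuracy == "vague":
            return loc.city  # enough for some inquirers, no more
        return "undisclosed"

    loc = Location("the corner of 6th Ave. and West 13th St.", "New York City")
    print(disclose_location(loc, "vague"))  # -> New York City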
PRIVACY ABSTRACTIONS AND METAPHORS
We have previously identified three user-level abstractions
for a ubiquitous computing privacy management system:
inquirer, situation, and accuracy preferences [4]. Their
essential relation is that the inquirer’s identity and the
user’s situation at the time of inquiry together determine
which of the user’s preferences will moderate the accuracy
of disclosed information. In this study, we sought to
determine which factor is the stronger preference
determinant: inquirer or situation.
Our three abstractions map partially to Anne Adams’s
findings [1], which show that perception of privacy in an
audio/video-captured environment is shaped by the
interrelation of (1) the perceived identity of the information
receiver (comparable to our inquirer), (2) the perceived
usage of the information (which we do not address in this
paper), (3) the subjective sensitivity of the disclosed
information (made adjustable by our accuracy preferences),
and (4) the context in which the information is disclosed
(comparable to our situation). In this paper we report on the
relative importance of two of these abstractions, inquirer
and situation, in determining users’ privacy preferences in
ubiquitous computing.
In the study design, we represented sets of accuracy
preferences as metaphorical “faces”. That is, moderating the
accuracy of personal information with a given set of
preferences is akin to wearing a given “face” in a social
situation. This approach is rooted in the work of social
psychologist Erving Goffman, who furthered the theory that
an individual actively yet intuitively monitors and adjusts
his behavior in the presence of others in an attempt to
control their conceptions of his identity [3]. The notion of
fragmented identity pervades user interfaces in the forms of
pseudonyms and profiles and has been used in research on
privacy user interfaces on the desktop [e.g., 2].
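A minimal sketch of this relation, assuming a face is simply a named
set of per-attribute accuracy levels and that the (inquirer,
situation) pair indexes the user's preferences. All names, values,
and the fallback rule are our own illustration:

    # Hypothetical encoding: a face is a named set of accuracy preferences,
    # and the inquirer's identity plus the user's situation together select
    # which face moderates disclosure.
    TRUE_FACE  = {"identity": "actual",    "activity": "actual",      "location": "actual"}
    VAGUE_FACE = {"identity": "pseudonym", "activity": "vague",       "location": "vague"}
    BLANK_FACE = {"identity": "anonymous", "activity": "undisclosed", "location": "undisclosed"}

    # Preferences keyed by (inquirer, situation).
    preferences = {
        ("spouse_so", "working_lunch"):  TRUE_FACE,
        ("spouse_so", "social_evening"): TRUE_FACE,
        ("employer",  "working_lunch"):  TRUE_FACE,
        ("employer",  "social_evening"): VAGUE_FACE,
    }

    def face_for(inquirer: str, situation: str) -> dict:
        # Fall back to the most private face when no preference is set.
        return preferences.get((inquirer, situation), BLANK_FACE)

    print(face_for("stranger", "working_lunch"))  # -> the blank face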
STUDY DESIGN
Our scenario-based web questionnaire was designed to
measure the relative importance of inquirer and situation in
determining an individual’s privacy preferences in
ubiquitous computing environments.
The website asked each subject to imagine she had a cell
phone containing her name (true name and a set of
pseudonyms) and profile (primary and secondary email
addresses, occupation, and interests) and capable of
automatically determining her location and activity.
Interested parties could collect some or all of this
information in real-time through various services (e.g., a
remote friend could determine the subject’s location
through a website, or a nearby merchant could directly
query the phone for the subject’s contact information and
interests). The phone contained a set of three “faces”, each
of which specified the accuracy of the personal information
an inquirer could collect about the subject while she wore
that face (see Table 1).
Subjects were asked to assign a face to each of the eight
personal information disclosure events representing the
cross-product of two situations:
Working Lunch: Downtown with a colleague,
Social Evening: Live music club with two friends;
and four inquirers:
Spouse/Significant Other (remotely located),
Employer (remotely located),
Stranger (proximately located),
Merchant (proximately located).
For each event, subjects chose from the three faces to blur the
information disclosed to that inquirer in that situation, or
created a custom face by specifying the accuracy levels of
personal information to disclose.
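For concreteness, the eight events can be enumerated as the
cross-product of the inquirer and situation labels; the abbreviated
labels and the response format below are ours, not the study's:

    # The study's eight disclosure events as a cross-product.
    from itertools import product

    inquirers  = ["spouse_so", "employer", "stranger", "merchant"]
    situations = ["working_lunch", "social_evening"]

    events = list(product(inquirers, situations))
    assert len(events) == 8  # 4 inquirers x 2 situations

    # One subject's answers then map each event to a face name:
    # response[(inquirer, situation)] = "true" | "vague" | "blank" | <custom>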
RESULTS
We posted the questionnaire to community websites across
the U.S. and to engineering students at UC Berkeley,
resulting in a sample size of 130. Note these results are self-
reported and based on imaginary scenarios; future work
should emphasize the analysis of empirical data.
Our primary concern was not which face a subject chose for
a given event, but rather which factor had a greater
influence over that choice. We wanted to know whether
users would be more likely to (1) assign a given face to
handle a given inquirer in all situations, or (2) assign a given
face to handle all inquirers in a given situation.
We found that the inquirer’s identity is a stronger
determinant of privacy preferences than is the user’s
situation. The mean number of different faces used across
the four inquirers in the working lunch situation was 2.72
(SD: 0.84); the mean in the social evening situation was 2.58
(SD: 0.89). This shows that within a given situation,
subjects did vary faces across inquirers. In contrast, for a
given inquirer, subjects generally did not vary faces across
situations. Table 2 shows that when the inquirer is a
significant other, stranger, or merchant, the situation (or, at
least, the two situations covered in the study) is a weak
determinant of the choice of face. One subject wrote, “The
recipient is more important than the context, because the
information will likely outlive the circumstances.” Another
wrote, “For me, ‘who’ is all that matters. If I don't trust the
person with personal information, I wouldn't want to give
them any information at any time. If I do trust the person,
I'm willing to give out information freely.”
When the inquirer is the subject’s employer, situation
becomes a stronger determinant of face. 45.4% of subjects
assigned a different face to employers in different
situations, more than twice as many as for any other
inquirer. One subject wrote, “[D]uring the work day, or
after-hours during crunch time, I'd want my boss/coworkers
to find [me] - after hours I'd rather be more anonymous.”
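Both measures reported in this section can be recovered from
per-event face assignments. The sketch below, continuing the
hypothetical response format from the study-design sketch, computes
a subject's face variety within a situation and the per-inquirer
same-face tally of Table 2; the embedded response is fabricated to
make the sketch runnable and is not study data.

    # Hypothetical analysis sketch over per-event face assignments.
    from statistics import mean

    inquirers  = ["spouse_so", "employer", "stranger", "merchant"]
    situations = ["working_lunch", "social_evening"]

    # One fabricated response, for illustration only:
    responses = [{
        ("spouse_so", "working_lunch"): "true",  ("spouse_so", "social_evening"): "true",
        ("employer",  "working_lunch"): "true",  ("employer",  "social_evening"): "vague",
        ("stranger",  "working_lunch"): "blank", ("stranger",  "social_evening"): "blank",
        ("merchant",  "working_lunch"): "vague", ("merchant",  "social_evening"): "vague",
    }]

    def face_variety(resp, situation):
        # Distinct faces one subject used across inquirers in one situation.
        return len({resp[(inq, situation)] for inq in inquirers})

    def same_face(resp, inquirer):
        # Whether the subject kept one face for this inquirer in both situations.
        return resp[(inquirer, situations[0])] == resp[(inquirer, situations[1])]

    for s in situations:  # cf. the 2.72 and 2.58 means above
        print(s, mean(face_variety(r, s) for r in responses))

    for inq in inquirers:  # cf. Table 2's same-face counts
        print(inq, sum(same_face(r, inq) for r in responses))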
CONCLUSIONS AND FUTURE WORK
Study results show that (1) identity of the information
inquirer is a stronger determinant of privacy preferences
than is the situation in which the information is collected,
and (2) situation is nonetheless an important determinant,
especially when the inquirer is the user’s employer. These
results imply that designers of privacy user interfaces for
ubiquitous computing should strongly consider
emphasizing the inquirer as the primary index and the
situation as a secondary index into the user’s privacy
preferences.
REFERENCES
1. Adams, A. Multimedia information changes the whole
privacy ballgame. Proc. of Computers, Freedom and
Privacy 2000, 25-32.
2. boyd, d. Faceted Id/entity: Managing representation in a
digital world. M.S. Thesis, Mass. Inst. of Tech., (2002).
3. Goffman, E. The Presentation of Self in Everyday Life.
Anchor, Doubleday, New York, (1959).
4. Lederer, S., Dey, A.K., & Mankoff, J. A Conceptual
Model and a Metaphor of Everyday Privacy in
Ubiquitous Computing Environments. Technical Report
CSD-02-1188, Univ. of California, Berkeley, (2002).
                       Personal Information Accuracy
Face    Identity     Profile                       Activity      Location
True    Actual       Primary email, Occupation,    Actual        Actual
                     Interests
Vague   Pseudonym    Secondary email, Interests    Vague         Vague
Blank   Anonymous    Undisclosed                   Undisclosed   Undisclosed

Table 1. For each disclosure event, subjects could assign any
of these three faces or create a custom face.
Inquirer         Same face      Different face
Spouse / S.O.    109 (83.8%)    21 (16.2%)
Employer          71 (54.6%)    59 (45.4%)
Stranger         101 (77.7%)    29 (22.3%)
Merchant         112 (86.2%)    18 (13.8%)

Table 2. Number of subjects who assigned the same face
or different faces to each inquirer in the two situations.