Towards Context Adaptive Privacy Decisions
in Ubiquitous Computing
Florian Schaub, Bastian Könings, Michael Weber, Frank Kargl
Institute of Media Informatics, Ulm University, Germany
Email: {florian.schaub | bastian.koenings | michael.weber}@uni-ulm.de
DIES Group, University of Twente, The Netherlands; Institute of Distributed Systems, Ulm University, Germany
Email: f.kargl@utwente.nl
Abstract—In ubiquitous systems, control of privacy settings will
be increasingly difficult due to the pervasive nature of sensing and
communication capabilities. We identify challenges for privacy
decisions in ubiquitous systems and propose a system for in
situ privacy decision support. When context changes occur, the
system adapts a user’s privacy preferences to the new situation.
As a consequence, recommendations can be offered to the user
or sharing behavior can be automatically adjusted to help the
user maintain a desired level of privacy. The system learns from
user interaction and behavior to improve decision accuracy. In
this paper, we outline the main components of our system and
illustrate its operation with an ambient assisted living use case.
I. INTRODUCTION
Future ubiquitous or pervasive computing environments will
be characterized by the integration of sensing, processing,
and communication capabilities in the physical environment.
Everyday objects will become smart [1], [2] in the sense
that they can perceive their environment and communicate
and interact with each other. The main goal is to facilitate
more natural and less obtrusive interaction between users and
computing systems in order to better support users in their
activities. Context awareness and implicit interaction are seen
as basic ingredients towards realizing this vision.
But with the promises of ubiquitous computing also arise
novel privacy issues, as pointed out early on by Weiser [3].
Almost invisible sensing capabilities ubiquitously embedded
in the environment, paired with increasing storage and pro-
cessing capabilities, also enable large scale surveillance of
users [4]. Research on privacy in ubiquitous computing has
mainly focused on how sensitive information can be
protected and shared in a privacy-preserving manner [5], [6].
As location awareness is one of the key characteristics in
ubiquitous computing, location privacy became a dominant
topic [7], [8]. One common approach is the obfuscation of
information to reduce its quality. Other approaches aim to
hide the information owner via anonymization or the use of
pseudonyms. Iachello and Hong [9] give a good overview
of privacy controls for information sharing, e.g., based on
privacy policies. Approaches based on spheres [6] or territorial
privacy [10] employ physical metaphors to reduce the complexity
of privacy controls in ubiquitous systems.
While these approaches offer means for controlling privacy,
their configuration is commonly left to the user. But users will
have a hard time defining privacy settings that match their
actual privacy preferences due to the complexity of systems,
the multitude of entities (technical and human), and changing
context. In particular, manually pre-defining what information
should be available to which entity (human or technical)
in any given situation is infeasible in such scenarios.
The abstract nature of privacy implications makes it difficult
to presuppose the desired level of privacy for a specific
situation [11]. Instead, privacy decision making should be
supported in situ, i.e., in the situation where a privacy decision
is required [12]. The current context needs to be considered
in order to help users effectively control their desired level of
privacy in any given situation.
Existing approaches for supporting in situ privacy decision
making either focus on enhancing awareness of information
flow or on controlling specific information items, such as
location. Most systems [13], [14] only become active when
triggered by external requests for the user’s information.
However, we argue that passive observations dominate in
ubiquitous computing scenarios, i.e. entities sense, process,
and communicate information about users without explicitly
requesting access or interacting with them [10]. Therefore, not
only interactors but also other physically or virtually present
entities must be taken into account for privacy decisions
as well as privacy controls. We propose to utilize context-
awareness to dynamically adapt privacy preferences in a
privacy decision model to help a user maintain a desired level
of privacy, either by providing recommendations to the user
or via automatic reconfiguration. By adapting to the user, the
system can learn the user’s preferred trade-off between user
involvement and automated enforcement.
In this paper, we first discuss privacy decision issues in
ubiquitous systems (Sec. II). Then, we outline our system and
privacy decision process (Sec. III) and exemplify it with a use
case (Sec. IV). We conclude the paper with an outlook on
challenges and future work (Sec. V).
II. PRIVACY DECISIONS IN UBIQUITOUS COMPUTING
Making adequate privacy decisions in information systems
is already difficult today, for example, when sharing infor-
mation on social networking sites [15]. The characteristics of
ubiquitous systems further increase the difficulty of specifying
privacy settings that match the user’s actual privacy prefer-
ences. The following challenges can be identified:
Information explosion: The amount of information about
a user grows with the number of sensors. Information exists on
multiple levels of abstraction with varying semantic richness.
Privacy decisions for sharing this information must be made on
relevant abstraction levels to be comprehensible to users. For
example, privacy decisions for indoor location sharing should
be based on rooms rather than on specific sensors.
Context explosion: With an increasing number of smart
entities and environments, the number of situations in which
privacy decisions are required explodes due to exponential growth
in the number of potential context configurations. Taking all context situations
into account a priori is infeasible and would prevent efficient
privacy decisions. Instead, privacy decisions must be adapted
to previously unknown situations.
Physical boundaries dissolve: The integration of sensors
and communication capabilities into the environment enables
virtual entities to participate in physical environments. Users
may not know that they are being observed or by whom. Thus,
sharing decisions must consider physically present entities as
well as virtual and remote entities, which may be unknown.
Users must be made aware of the extent of their physical-
virtual mixed environment.
Observations and disturbances: Physically and virtually
present entities may observe users without actively interacting
with them. Furthermore, not only exchanged information may
be privacy sensitive. A user’s activities, interaction partners,
external state, posture, and behavior may also need to be
protected. Therefore, privacy decisions must not only pertain
to sensitive information but also observations of the user.
Smart entities may also cause disturbances in a user’s phys-
ical environment, e.g., via audiovisual output, automation, or
robots. Such disturbances can impact a user’s privacy and must
be included in privacy decisions in ubiquitous systems.
Abstract privacy implications: Due to the complexity
introduced by the characteristics above, implications of privacy
decisions become too abstract to be estimated properly by the
user in advance. Therefore, a continuous re-evaluation process
is required to support the user [12].
III. IN SITU PRIVACY DECISION SUPPORT
We propose a system for supporting a user’s privacy deci-
sions in situ, i.e., in the context in which they are required, following
the notion of contextual integrity [11]. Instead of requiring
static definition of privacy settings beforehand, our system
approximates the user’s privacy preferences and adapts them
to the current context. The system can then either recommend
sharing decisions and actions or autonomously reconfigure
privacy settings. The goal is to support users in maintaining a
level of privacy that adequately fits their privacy needs for the
current activity in the current context. This means that privacy
settings should neither be too tight nor too open [12].
In order to provide adequate decision support and properly
adapt privacy preferences, we take into account the issues
named in the previous section. In particular, territorial privacy
aspects, such as physical-virtual mixed environments, obser-
vations, and disturbances, factor into privacy decisions alongside
information privacy aspects. Furthermore, the system adapts to
the specific user by learning from explicit sharing decisions,
implicit user behavior, and reactions to system actions. The
personalization to a specific user has the advantage of better
emulating that user’s privacy decision process. It also helps
to decide when to involve the user in the decision process by
providing recommendations only and when privacy decisions
can be realized autonomously.
We assume that the system is implemented as a personal
trusted agent and supports privacy decisions of a single user.
The system’s main components are the context model, the
privacy decision engine, and realization and enforcement of
adapted privacy preferences. In the following, we will discuss
them in more detail together with the privacy decision process.
Figure 1 provides an overview.
A. Context Model
To facilitate privacy decisions on an appropriate abstraction
level, we distinguish between decision level and system level
(see Fig. 1). The system level handles context acquisition
and provides semantically enriched information to the deci-
sion level. Thus, the system level enables context awareness
but also filters context information and maps it to semantic
concepts required for decisions. Semantic mappings can be
derived from a pre-defined or learnt world model. On the
decision level, the context model only contains components
relevant for privacy decision making. The main components
on the decision level are the user, who performs an activity in
an environment.
The user has resources, which can be information items but
also devices or sensors, and an observable state, i.e., the user’s
posture or bodily expression. An activity is always user-centric
and has some abstract goal. An activity involves the user, user
resources, and interactors, i.e., entities the user engages with.
The environment spans the physical and virtual realm and
is user-centric in the sense that it is defined by the user’s
physical location. It is assigned a type, i.e., a semantic label,
such as home or work, based on system level input. The
environment consists of physical and virtual entities and
staging. Humans, software agents, services, and smart devices
are all represented as entities with the ability to perceive,
process and communicate information. The staging defines an
environment’s physical and virtual configuration. For example,
the position of walls, windows, and screens, and how virtual
entities are connected to the physical environment. While the
staging shares system level characteristics, it can influence
privacy decisions (e.g., an unfamiliar office) and is therefore
represented on the decision level.
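As a rough illustration, the decision-level components described above could be captured in a handful of typed records. The following Python sketch is purely illustrative: the class and field names (User, Activity, Environment, Entity, staging as a plain dictionary) are our shorthand for the concepts in the text, not the actual data model of the system.

    # Illustrative sketch of the decision-level context model (names are
    # hypothetical shorthand for the concepts described in the text).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Entity:
        name: str                  # human, software agent, service, or smart device
        virtual: bool = False      # physically present or virtually participating

    @dataclass
    class User:
        name: str
        resources: List[str] = field(default_factory=list)  # information items, devices, sensors
        state: str = "neutral"     # observable state, e.g., posture or bodily expression

    @dataclass
    class Activity:
        goal: str                  # abstract, user-centric goal
        interactors: List[Entity] = field(default_factory=list)
        used_resources: List[str] = field(default_factory=list)

    @dataclass
    class Environment:
        env_type: str              # semantic label from the system level, e.g., "home"
        entities: List[Entity] = field(default_factory=list)
        staging: dict = field(default_factory=dict)  # physical/virtual configuration

    @dataclass
    class Context:
        user: User
        activity: Activity
        environment: Environment

    # Example: a user watching television at home, observed by a smart TV.
    alice = User("Alice", resources=["calendar", "vital signs"], state="relaxed")
    tv = Entity("smart TV")
    ctx = Context(alice, Activity("watch television", interactors=[tv]),
                  Environment("home", entities=[tv]))
    print(ctx.environment.env_type, [e.name for e in ctx.environment.entities])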
Entities can have different roles. In our previously defined
territorial privacy model [16], an observer is an entity that
can perceive the user, either directly or via other observers
forwarding information. A disturber is an entity that has the
ability to disturb a user’s activity, e.g., a person walking into
a room disrupting a conversation or a cleaning robot becom-
ing active when the user watches television. When the user
engages with an entity in an activity, the entity becomes an
interactor.

Fig. 1. Overview of the privacy decision process.

An entity can be observer, disturber, and interactor
at the same time. Entities can be organized hierarchically to
simplify privacy decisions. For example, sharing decisions can
reference a person, while privacy settings will also apply to
the person’s devices, e.g., a mobile phone.
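Role assignment and the entity hierarchy could, for example, be modeled as role flags plus a parent-child relation, so that a sharing decision referencing a person also covers that person's devices. The sketch below is a hypothetical illustration; the propagation rule and names are assumptions, not part of the system described here.

    # Hypothetical sketch: entity roles and hierarchical sharing decisions.
    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class Entity:
        name: str
        roles: Set[str] = field(default_factory=set)   # subset of {"observer", "disturber", "interactor"}
        children: List["Entity"] = field(default_factory=list)  # e.g., a person's devices

    def allowed_entities(entity: Entity, granted: Set[str]) -> Set[str]:
        """Expand a sharing decision for an entity to all entities below it."""
        result = set()
        if entity.name in granted:
            # A decision that references a person also applies to the person's devices.
            stack = [entity]
            while stack:
                e = stack.pop()
                result.add(e.name)
                stack.extend(e.children)
        else:
            for child in entity.children:
                result |= allowed_entities(child, granted)
        return result

    phone = Entity("Bob's phone", roles={"observer"})
    bob = Entity("Bob", roles={"observer", "interactor"}, children=[phone])
    print(allowed_entities(bob, granted={"Bob"}))   # contains Bob and Bob's phone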
Privacy decisions are further facilitated by assigning trust
to entities in the context model and ambient trust to the
environment, i.e., determined by its type and staging. For this
purpose, we are currently developing entity trust evaluation
mechanisms based on social trust concepts [17].
B. Privacy Decision Engine
The context model allows the system to reason about which context
items are affected by a context transition. A change in context
C leads to a new context situation C′. The transition T_{C→C′}
is defined as the difference between C and C′ and captures all
changes between them. When a transition occurs, the privacy
decision engine (PDE) evaluates T_{C→C′} to determine which
protection-worthy context items are affected by it (cf.
Fig. 1). Protection-worthy items can be the user’s resources,
but also the user’s state, the activity, or its interactors. The
protection worthiness (or privacy relevance) of context items for a
given context is determined by the user’s privacy preferences.
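As a simplified picture of transition evaluation, a transition can be represented as the set of items that differ between two context snapshots, and the affected protection-worthy items as the overlap between those changed items and the items the stored preferences mark as privacy relevant. The sketch below assumes contexts flattened to key-value pairs, which the real context model is not limited to.

    # Simplified sketch: deriving a transition T between contexts C and C'
    # and the protection-worthy items it affects. Contexts are flattened to
    # key-value pairs here purely for illustration.

    def transition(c_old: dict, c_new: dict) -> dict:
        """Items that were added, removed, or changed between C and C'."""
        keys = set(c_old) | set(c_new)
        return {k: (c_old.get(k), c_new.get(k))
                for k in keys if c_old.get(k) != c_new.get(k)}

    def affected_items(t: dict, privacy_relevant: set) -> set:
        """Protection-worthy context items touched by the transition."""
        return set(t) & privacy_relevant

    c0 = {"user.state": "walking", "activity": "walk to bathroom", "entities": ("MS",)}
    c1 = {"user.state": "unresponsive", "activity": "walk to bathroom", "entities": ("MS",)}
    t = transition(c0, c1)
    print(t)                                                  # {'user.state': ('walking', 'unresponsive')}
    print(affected_items(t, {"user.state", "vital signs"}))   # {'user.state'}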
As a user’s true privacy preferences exist only inside the
user’s mind, our system can only approximate them. Thus,
the PDE matches privacy preferences stored in the knowledge
base to the context transition T_{C→C′} and the new situation
C′ in order to infer an adapted privacy preference for C′
in the reasoning step (see Fig. 1). We employ case-based
reasoning [18] to avoid extensive a priori knowledge acqui-
sition. Privacy preferences are stored as cases, which are
retrieved by evaluating the similarity of previous situations
with the current one. Similar cases are then adapted to the
new context resulting in an adapted privacy preference that
is retained as a new case. We plan to use personality-based
privacy profiles to govern adaptation of privacy preferences.
The main idea is to determine the user’s personality type [19]
before initial system use to select a privacy profile in order
to speed up the bootstrapping process of learning the user’s
personal preferences. The privacy profile then serves as a
basis for adapting privacy preferences and is subsequently
further adjusted to the user by learning from the user’s explicit
decisions, behavior, and reaction to system actions.
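A minimal case-based reasoning cycle for privacy preferences might look like the following sketch: retrieve the stored case whose context is most similar to the new situation, reuse its preference as the adapted preference, and retain the result as a new case. The similarity measure and case structure shown here are placeholder assumptions for illustration only.

    # Hypothetical CBR sketch: retrieve the most similar stored preference case
    # for a new context and retain the adapted preference as a new case.

    def similarity(ctx_a: dict, ctx_b: dict) -> float:
        """Fraction of matching context attributes (placeholder measure)."""
        keys = set(ctx_a) | set(ctx_b)
        if not keys:
            return 0.0
        return sum(ctx_a.get(k) == ctx_b.get(k) for k in keys) / len(keys)

    def retrieve(case_base: list, new_ctx: dict):
        """Return the stored case most similar to the new context."""
        return max(case_base, key=lambda case: similarity(case["context"], new_ctx))

    case_base = [
        {"context": {"env": "home", "activity": "watch tv"},
         "preference": {"share vital signs": "never"}},
        {"context": {"env": "home", "activity": "emergency help"},
         "preference": {"share vital signs": "medical personnel"}},
    ]

    new_ctx = {"env": "home", "activity": "emergency help", "user.state": "unresponsive"}
    best = retrieve(case_base, new_ctx)
    adapted = {"context": new_ctx, "preference": dict(best["preference"])}  # reuse step
    case_base.append(adapted)                                               # retain step
    print(best["preference"])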
Based on the adapted privacy preference, the PDE infers
multiple privacy policy variants (see Fig. 1). While a privacy
preference describes the user’s privacy goal on the decision
level, a privacy policy describes one way of achieving the
privacy preference on the system level. Thus, C′ may offer
different alternatives, represented by privacy policies, for real-
izing the privacy preference. It may also be possible that the
privacy preference cannot be realized in the current context.
In that case, the privacy policy would suggest terminating
the activity. For each privacy policy variant a confidence
score is calculated based on how well it fits the adapted
privacy preference. Based on the confidence scores, the PDE
selects the most appropriate policy candidate or triggers user
involvement if the confidence is below a certain threshold (see
Fig. 1). The specific threshold is determined by the user’s
personality and previous privacy decisions.
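Policy selection could then be pictured as choosing the highest-scoring variant and falling back to user involvement whenever that score is below the personality-dependent threshold. The rule below is an assumed illustration, not the actual scoring function.

    # Illustrative sketch: selecting a privacy policy variant by confidence,
    # or deferring to the user when confidence is too low.
    from typing import List, Optional, Tuple

    def select_policy(variants: List[Tuple[str, float]],
                      threshold: float) -> Optional[str]:
        """variants: (policy name, confidence in [0, 1]); returns None to ask the user."""
        if not variants:
            return None
        policy, confidence = max(variants, key=lambda v: v[1])
        return policy if confidence >= threshold else None

    variants = [("share vital signs with Dan", 0.82),
                ("share only a fall alert", 0.55),
                ("terminate activity", 0.10)]
    print(select_policy(variants, threshold=0.7))   # autonomous selection
    print(select_policy(variants, threshold=0.9))   # None -> involve the user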
The system learns from explicit user decisions, as well as
reactions to the system’s realization of privacy policies. These
cues are used to adjust stored privacy preferences and tailor
interaction thresholds to the user’s expectations.
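Purely as an assumed example, tailoring the involvement threshold could be as simple as a small feedback update: raise the threshold when the user overrides an autonomous decision (ask more often), lower it when the user confirms one (act autonomously more often).

    # Assumed sketch of tailoring the user-involvement threshold from feedback.
    def update_threshold(threshold: float, user_overrode: bool,
                         step: float = 0.05) -> float:
        """Raise the threshold after an override, lower it after a confirmation."""
        threshold += step if user_overrode else -step
        return min(0.95, max(0.05, threshold))

    t = 0.7
    t = update_threshold(t, user_overrode=True)    # ~0.75
    t = update_threshold(t, user_overrode=False)   # back to ~0.70
    print(round(t, 2))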
C. Realization and Enforcement
Next, the selected privacy policy must be realized on the
system level (see Fig. 1). Our privacy policies combine terri-
torial privacy and information privacy aspects. First, territorial
privacy mechanisms [10], [16] are employed to prune the
number of physical and virtual entities granted access to
the user’s private territory. The private territory is defined
by a territorial privacy boundary that separates desired and
undesired entities. The entities remaining inside the private
territory have defined observation or disturbance channels to
the user, or more specifically, to protection-worthy context
items.
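Drawing the territorial privacy boundary can be pictured as partitioning the present entities into desired and undesired observers. The admission rule used in the sketch below (an allow list plus a minimum trust score) is a placeholder assumption; in the proposed system the rule follows from the adapted privacy preference.

    # Illustrative sketch: drawing a territorial privacy boundary by separating
    # desired from undesired entities (the admission rule is a placeholder).
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Entity:
        name: str
        trust: float        # assumed trust score in [0, 1]
        observer: bool      # can the entity perceive the user?

    def draw_boundary(entities: List[Entity], allowed: set,
                      min_trust: float = 0.5) -> Tuple[List[Entity], List[Entity]]:
        """Split present entities into those inside and outside the private territory."""
        inside, outside = [], []
        for e in entities:
            if e.name in allowed or (e.observer and e.trust >= min_trust):
                inside.append(e)
            else:
                outside.append(e)
        return inside, outside

    present = [Entity("monitoring system", 0.9, observer=True),
               Entity("cleaning robot", 0.4, observer=True),
               Entity("unknown web service", 0.1, observer=True)]
    inside, outside = draw_boundary(present, allowed={"monitoring system"})
    print([e.name for e in inside], [e.name for e in outside])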
Next, our privacy policies define granularity adjustments for
specific information items. For example, instead of the user’s
exact position only the street address or city can be provided.
Similarly, the granularity of the user’s identity can be adjusted,
e.g., anonymous, pseudonymous, or full identity. Granularity
adjustments can also be defined for other information types.
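Granularity adjustment can be sketched as a lookup along an ordered scale per information type. The levels shown (exact position, street address, city; full identity, pseudonym, anonymous) follow the examples in the text, while the function and data are hypothetical illustrations.

    # Sketch of granularity adjustment for information items. The level
    # ordering follows the examples in the text; the data is illustrative.

    GRANULARITY = {
        "location": ["exact position", "street address", "city", "none"],
        "identity": ["full identity", "pseudonym", "anonymous", "none"],
    }

    def adjust(item_type: str, value_by_level: dict, level: str):
        """Return the item at the requested granularity, or nothing at all."""
        if level not in GRANULARITY[item_type]:
            raise ValueError(f"unknown level {level!r} for {item_type}")
        return value_by_level.get(level)

    location_views = {"exact position": (48.40, 9.99),
                      "street address": "Example Street 1, Ulm",
                      "city": "Ulm"}
    print(adjust("location", location_views, "city"))               # 'Ulm'
    print(adjust("identity", {"pseudonym": "user-1742"}, "pseudonym"))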
Depending on the environment, different strategies for pol-
icy realization and varying degrees of enforcement are possi-
ble [10]. In personal and shared personal environments, such
as the home, system components are under the user’s control
allowing trust assumptions in terms of policy realization. In
shared and public environments, the user generally has less
control. Yet, trusted computing or collaborative mechanisms
can support enforcement of privacy policies.
IV. USE CASE: AMBIENT ASSISTED LIVING
In the following, we discuss a use case to illustrate the
proposed system. Alice is an elderly person living alone in
her home equipped with ambient assisted living technology.
One day, Alice falls on her way to the bathroom and remains
unconscious. The monitoring system (MS) detects Alice’s fall
and wants to inform Dan, her doctor, with a warning message
including her vital signs.
Our system detects multiple context transitions. Before the
fall (C0), Alice’s activity is walking to the bathroom and the MS
is active but can only process information locally. In transition
T_{C0→C1}, Alice’s fall is detected on the system level. Her
state on the decision level changes to unresponsive. The PDE
analyzes T_{C0→C1} and determines that no privacy adaptation
is required. Next, the MS initiates a new activity on behalf of
Alice in T_{C1→C2}. This activity involves the entities Alice, the MS,
and Dan and requires access to Alice’s vital sign sensors. The
PDE matches this activity to the emergency help activity which
allows sharing of vital signs with remote medical personnel.
In the current context, the PDE derives a privacy policy that
allows the MS to pass information on to Dan. The selected
privacy policy also includes Dan’s warning system, to which the
message is sent in order to reach Dan. Note that this system is not
part of the adapted privacy preference, but is required on the
system level to realize the adapted privacy preference.
Dan’s warning system receives the emergency message and
informs Dan. Dan immediately drives to Alice and arrives
at her front door shortly after. Dan’s arrival triggers another
context transition T_{C2→C3}. A new entity is added in C3 as
a disturber. The system authenticates the entity as Dan. The
PDE matches Alice’s state unresponsive and Dan’s attribute
doctor with Alice’s privacy preference to receive medical help
in emergencies. The derived privacy policy states that Dan
can enter the house. The policy is realized by the house
automation system that automatically opens the door for Dan.
Dan enters the house and can help Alice. Note that if Alice
had been conscious, the PDE could have involved her
in the privacy decision by asking her whether she wants to let Dan in.
The adapted privacy preferences would be stored and validated
later on when Alice is well again.
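To tie the steps together, the emergency decision in this use case can be sketched as a simple match of Alice's state and Dan's attributes against a stored preference. The rule representation below is our own illustrative assumption.

    # Illustrative sketch of the emergency decision from the use case:
    # match Alice's state and Dan's attributes against a stored preference.

    preference = {
        "condition": {"user.state": "unresponsive", "entity.attribute": "doctor"},
        "policy": ["share vital signs", "open front door"],
    }

    def matches(condition: dict, situation: dict) -> bool:
        """True if every attribute required by the preference holds in the situation."""
        return all(situation.get(k) == v for k, v in condition.items())

    situation = {"user.state": "unresponsive", "entity.attribute": "doctor",
                 "entity.name": "Dan"}
    if matches(preference["condition"], situation):
        for action in preference["policy"]:
            print("realize:", action)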
V. OUTLOOK
In this paper, we proposed a system for context adaptive
privacy decision support in ubiquitous computing. Our system
takes both physical and virtual entities into account and
extends privacy decisions not only to information sharing but
also to observations and disturbances. The system’s privacy
decisions can be used to support the user in her privacy
decision making or to autonomously reconfigure the system.
The goal is to maintain a privacy level that aligns with the
user’s privacy preferences and activity in the current context,
and is neither too open nor too restrictive. While focusing on a
single user enables improved personalization, it also introduces
challenges, such as handling shared resources, e.g., devices
owned or operated by multiple users.
The development of the system is currently work in
progress. Next, we plan to further refine the context model
and validate its generality and expressiveness by applying it
to a diverse set of use cases. At the same time, we are
working on the integration of trust aspects in the context
model. For the privacy decision engine, we are currently
working on suitable context and knowledge representations
to enable efficient reasoning and inference of preferences and
policies on different abstraction levels. We plan to implement
the privacy decision engine in a prototype system to evaluate
the accuracy of privacy decisions in user studies.
REFERENCES
[1] M. Weiser, “The computer for the 21st century,” Scientific American,
vol. 265, no. 3, pp. 94–104, 1991.
[2] J. Bohn, V. Coroamă, M. Langheinrich, F. Mattern, and M. Rohs, “Living
in a world of smart everyday objects – social, economic, and ethical
implications,” Human and Ecological Risk Assessment, vol. 10, no. 5,
pp. 763–785, 2004.
[3] M. Weiser, “Some computer science issues in ubiquitous computing,”
Communications of the ACM, vol. 36, no. 7, pp. 75–84, 1993.
[4] M. Langheinrich, “Privacy by Design - Principles of Privacy-Aware
Ubiquitous Systems,” in UbiComp’01. Springer, 2001, pp. 273–291.
[5] ——, “A privacy awareness system for ubiquitous computing environ-
ments,” in UbiComp’02. Springer, 2002, pp. 315–320.
[6] J. I. Hong and J. A. Landay, “An architecture for privacy-sensitive
ubiquitous computing,” in MobiSys’04. ACM, 2004, pp. 177–189.
[7] A. Beresford and F. Stajano, “Location privacy in pervasive computing,”
IEEE Pervasive Computing, vol. 2, no. 1, pp. 46–55, 2003.
[8] J. Krumm, “A survey of computational location privacy,” Personal and
Ubiquitous Computing, vol. 13, no. 6, pp. 391–399, 2008.
[9] G. Iachello and J. Hong, “End-user privacy in human-computer interac-
tion,” Foundations and Trends in Human-Computer Interaction, vol. 1,
no. 1, pp. 1–137, 2007.
[10] B. Könings and F. Schaub, “Territorial Privacy in Ubiquitous Comput-
ing,” in WONS’11. IEEE, 2011, pp. 104–108.
[11] H. Nissenbaum, Privacy in Context: Technology, Policy, and the
Integrity of Social Life. Stanford University Press, 2009.
[12] L. Palen and P. Dourish, “Unpacking ‘privacy’ for a networked world,”
in CHI ’03. ACM, 2003, pp. 129–136.
[13] M. Wu, “Adaptive Privacy Management for Distributed Applications,”
Ph.D. Thesis, Lancaster University, 2007.
[14] N. Sadeh, J. Hong, L. Cranor, I. Fette, P. Kelley, M. Prabaker, and J. Rao,
“Understanding and capturing people’s privacy policies in a mobile
social networking application,” Personal and Ubiquitous Computing,
vol. 13, no. 6, pp. 401–412, Aug. 2009.
[15] A. Acquisti and R. Gross, “Imagined communities: Awareness, infor-
mation sharing, and privacy on the facebook,” in PET’06. Springer,
2006, pp. 36–58.
[16] B. Könings, F. Schaub, M. Weber, and F. Kargl, “Towards Territorial
Privacy in Smart Environments,” in Intelligent Information Privacy
Management Symposium. AAAI, 2010.
[17] C. Castelfranchi and R. Falcone, Trust Theory: A Socio-Cognitive and
Computational Model, 1st ed. John Wiley & Sons, 2010.
[18] A. Kofod-Petersen and A. Aamodt, “Contextualised ambient intelligence
through case-based reasoning,” in ECCBR’06. Springer, 2006.
[19] M. Korzaan and N. Brooks, “Demystifying Personality and Privacy: An
Empirical Investigation into Antecedents of Concerns for Information
Privacy,” Journal of Behavioral Studies in Business, pp. 1–17, 2009.