Article

Ad Hoc Privacy Management in Ubiquitous Computing Environments


Abstract

Privacy in ubiquitous computing is often discussed on a technical level with a focus on cryptography and anonymization. On the other hand, it is equally important to concentrate on user-side aspects of privacy control, i.e., to enable users of ubicomp applications to practice privacy dynamically and in an intuitive way. In our work we review previous approaches to user-side management of private information in smart environments and motivate a new ad hoc, semi-automatic privacy control. We present a privacy-focused, data-mining-powered interaction model with ubicomp services and provide a prototype environment for evaluating this model. This environment can be used to perform and capture privacy-related service interaction in a user study with real users as well as in a simulation with agents standing in for users and their privacy-related service interaction. Finally, we present results of a first simulation that provides an initial test of the proposed interaction model in the prototype environment.
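The "data-mining-powered interaction model" can be pictured as a classifier trained on a user's past disclosure decisions. The following is a minimal sketch, assuming scikit-learn and an invented feature set (service type, requested data item, location, time of day); the features, interaction log, and model choice are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a disclosure-decision learner (illustrative, not the
# paper's implementation): a decision tree trained on past user choices.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical interaction log: (service, data item, location, time of day)
# plus the user's past decision (1 = disclose, 0 = withhold).
history = [
    ("navigation", "location", "downtown", "day",   1),
    ("navigation", "location", "home",     "night", 0),
    ("ad_display", "identity", "mall",     "day",   0),
    ("calendar",   "schedule", "office",   "day",   1),
    ("ad_display", "location", "mall",     "day",   0),
    ("calendar",   "schedule", "home",     "night", 0),
]
X_raw = [row[:4] for row in history]
y = [row[4] for row in history]

encoder = OrdinalEncoder()          # categorical context -> numeric features
X = encoder.fit_transform(X_raw)
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# A new service request: decide automatically or suggest to the user.
request = [("navigation", "location", "downtown", "night")]
p_disclose = model.predict_proba(encoder.transform(request))[0][1]
print(f"P(disclose) = {p_disclose:.2f}")
```

In the semi-automatic spirit of the abstract, a confident prediction could be applied silently while a borderline one is surfaced to the user as a suggestion.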


... A fully automatic solution is also not possible or desirable, as discussed in the related work (Section 2.1). Hence, a semi-automatic approach such as the one proposed by Bünnig could be more appropriate, where a data-mining-powered interaction model learns from a user's decisions which information to disclose to which service in which situation, and at any later point in time uses that knowledge to decide automatically or to suggest a disclosure in that context [11]. This ad hoc approach could save users from having to make frequent decisions [11]. The implementation manner is a system-level issue, while our focus is on the user level. ...
... To protect online privacy and subsequent implications in the physical world, many often kept their Internet usage to a minimum. Seven participants (P1, P2, P7, P8, P11, P12, P14) reported minimal sharing of personal information over social networking sites, especially when traveling, as they were afraid of having that information passed to burglars and then getting burgled: "It is not good to share your holiday pictures while you are still on holiday. This might give burglars a chance" [P1]. ...
Article
The emergence of ubiquitous computing (UbiComp) environments has increased the risk of undesired access to individuals’ physical space or their information, anytime and anywhere, raising potentially serious privacy concerns. Individuals lack awareness and control of the vulnerabilities in everyday contexts and need support and care in regulating disclosures to their physical and digital selves. Existing GUI-based solutions, however, often feel physically interruptive, socially disruptive, time-consuming and cumbersome. To address such challenges, we investigate the user interaction experience and discuss the need for more tangible and embodied interactions for effective and seamless natural privacy management in everyday UbiComp settings. We propose the Privacy Care interaction framework, which is rooted in the literature of privacy management and tangible computing. Keeping users at the center, Awareness and Control are established as the core parts of our framework. This is supported with three interrelated interaction tenets: Direct, Ready-to-Hand, and Contextual. Direct refers to intuitiveness through metaphor usage. Ready-to-Hand supports granularity, non-intrusiveness, and ad hoc management, through periphery-to-center style attention transitions. Contextual supports customization through modularity and configurability. Together, they aim to provide an experience of embodied privacy care with varied interactions that are calming and yet actively empowering. The framework provides designers of such care with a basis to refer to, to generate effective tangible tools for privacy management in everyday settings. Through five semi-structured focus groups, we explore the privacy challenges faced by a sample set of 15 older adults (aged 60+) across their cyber-physical-social spaces. The results show conformity to our framework, demonstrating the relevance of the facets of the framework to the design of privacy management tools in everyday UbiComp contexts.
... These solutions are close to an automated approach but also rely on machine learning, making them capable of adapting to changes and requiring some level of user involvement. Existing solutions avoid interrupting the user unless it is strictly necessary, i.e., low inference confidence [8,31,33]. ...
... Prediction Certainty: Prediction certainty relates to the decision for a particular interaction. It has been the main variable considered when deciding to interrupt the user for further input [8,31,33] and is defined solely by the system's ability to correctly infer the user's privacy preferences. Depending on the user and context, its role may vary immensely. ...
Conference Paper
This paper presents an organized set of variables that can aid intelligent privacy agents in predicting the best and necessary moments to interrupt users in order to give them control and awareness over their privacy, avoiding information overload or over choice.
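The interruption decision these excerpts describe reduces to a threshold on prediction certainty. A minimal sketch under that assumption follows; the threshold value and three-way policy are illustrative, not taken from the cited papers.

```python
# Sketch of a certainty-gated interruption policy (thresholds are invented).
def handle_request(p_disclose: float, auto_threshold: float = 0.9) -> str:
    """Act silently when the model is confident either way;
    otherwise interrupt the user for an explicit decision."""
    if p_disclose >= auto_threshold:
        return "disclose automatically"
    if p_disclose <= 1.0 - auto_threshold:
        return "withhold automatically"
    return "ask the user"   # low certainty -> interruption is justified

for p in (0.97, 0.55, 0.04):
    print(p, "->", handle_request(p))
```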
... [7] claimed that privacy is a set of policies and conditions that require a system to protect users. [8] defined privacy as the protection of personal data from malicious use, allowing only certain entities to access personal information by making it visible to them. Moreover, [9] indicated that privacy is more important in applications that store location information, to prevent it from being shared with others. ...
Article
Full-text available
In this study, we aimed at exploring the current practices of using mobile phones by students in Saudi Arabia and identifying the potential data privacy risks that they are exposed to. The key finding from the study was that although a majority of students are aware of data privacy, many of the respondents were willing to chat with unknown people online and a considerable number of those participants opened URLs sent from strangers. This is a serious data privacy risk, since users may disclose personal information without realizing that such websites are deceptive, exposing them to threats such as theft of personal accounts, identity documents, and personal photos. Based on these findings, and due to the lack of research and commercial efforts in detecting malicious URLs on mobile devices, we developed a system, Suspicious URL Detector, which supports mobile users in seamlessly verifying URLs before they are opened. Our system is composed of a mobile app and a URL verification server that is backed by PhishTank [1], a user-contributed blacklist of suspicious URLs. The mobile app is also designed to be user-friendly and lightweight to ensure it performs well on mobile devices with limited resources.
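The verification flow the paper describes (the app consults a server before opening a URL) might look roughly like the sketch below. The blacklist here is a local set standing in for the PhishTank-backed server, and the function names are invented for illustration.

```python
# Sketch of the app-side URL check against a server-maintained blacklist.
from urllib.parse import urlparse

BLACKLIST = {"phish.example.com", "login-verify.example.net"}  # illustrative

def verify_url(url: str) -> str:
    """Return a verdict the mobile app can show before opening the URL."""
    host = urlparse(url).hostname or ""
    if host in BLACKLIST:
        return "blocked: reported as suspicious"
    return "no report found: proceed with caution"

print(verify_url("http://phish.example.com/update"))
print(verify_url("https://example.org/news"))
```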
... More particularly, there are some specific definitions of privacy that can be very useful for the case of protecting personal information in social media. For instance, Bünnig and Cap [12] defined privacy as "protecting personal information from being misused by malicious entities and allowing certain authorized entities to access that personal information by making it visible to them" (as cited in [13]). Additionally, Alan Westin [9] described privacy as the right to let people decide when, how, and to what extent their information is exposed to others (as cited in [14]). ...
Article
Full-text available
With the increased use of social networking platforms, especially with the inclusion of sensitive personal information, it has become important for those platforms to have adequate levels of security and privacy. This research aimed to evaluate the usability of privacy in the WhatsApp, Twitter, and Snapchat applications. The evaluation was conducted based on the structured analysis of privacy (STRAP) framework. Seven expert evaluators performed heuristic evaluations and applied the 11 STRAP heuristics to the privacy policy statements and settings provided in the WhatsApp, Twitter, and Snapchat applications. This study provides useful information for designers and developers of social media applications as well as users of the apps. In particular, the results indicate that Snapchat had the highest severity rating, followed by Twitter and WhatsApp. Moreover, the most notable severity rating for all the apps was regarding the ability to revoke consent, where all of the apps had a very high number of usability problems.
... Privacy is essential in an extensive range of applications that seek to protect the user's information, such as their location and other details. As privacy is a diverse concept, no single definition can include all the characteristics related to it [4]. There is no privacy standard for controlling users' personal information settings, as these vary from site to site. ...
Article
Full-text available
http://jictra.com.pk/index.php/jictra/article/view/189
... 3) Technical dimension: this dimension aims to protect privacy through technical specifications by controlling (automatically and/or manually) data and information. Bünnig and Cap [28] described privacy as protecting personal information from malicious and unauthorized entities. Another definition of privacy is to hide some details from others [29]. ...
Article
Full-text available
A large amount of sensitive data is transferred, stored, processed, and analyzed daily in Online Social Networks (OSNs). Thus, an effective and efficient evaluation of the privacy level provided in such services is necessary to meet user expectations and comply with the requirement of the applicable laws and regulations. Several prior works have proposed mechanisms for evaluating and calculating privacy scores in OSNs. However, current models are system-specific and assess privacy only from the user’s perspective. There is still a lack of a universal model that can quantify the level of privacy and compare between different systems. In this paper, we propose a generic framework to (i) guide the development of privacy metrics and (ii) to measure and assess the privacy level of OSNs, more specifically microblogging systems. The present study develops an algorithmic model to compute privacy scores based on the impact of privacy and security requirements, accessibility, and difficulty of information extraction. The proposed framework aims to provide users as well as system providers with a measure of how much the investigated system is protecting privacy. It allows also comparing the privacy protection level between different systems. The privacy score framework has been tested using real microblogging social networks and the results show the potential of the proposed privacy scoring framework.
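The kind of score aggregation described above can be sketched as a weighted combination per attribute. The formula, attributes, and numbers below are assumptions for exposition, not the paper's actual model.

```python
# Sketch of a per-attribute privacy score (all values are illustrative).
ATTRIBUTES = {
    # name: (impact 0..1, accessibility 0..1, extraction difficulty 0..1)
    "real name": (0.9, 0.8, 0.1),
    "location":  (0.8, 0.5, 0.4),
    "interests": (0.4, 0.9, 0.2),
}

def protection_score(impact, accessibility, difficulty):
    """Toy rule: exposure grows with impact and accessibility and
    shrinks with extraction difficulty; score = 1 - exposure."""
    return 1.0 - impact * accessibility * (1.0 - difficulty)

scores = {name: protection_score(*vals) for name, vals in ATTRIBUTES.items()}
overall = sum(scores.values()) / len(scores)
print(scores)
print(f"overall privacy score: {overall:.2f}")
```

Averaging per-attribute scores is only one plausible aggregation; a framework like the one proposed would also weight attributes by the applicable privacy and security requirements.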
Chapter
Digitalization is changing society and bringing about a shift in values that makes a societal negotiation of new boundaries necessary. In this contribution, the discourse is placed in the field of tension between fictitious ideal positions, introduced as (digital) feudalism and (digital) enlightenment. Feudalism stands symbolically for a form of rule in which individual estates dominate (here, the providers of digital products), while enlightenment stands symbolically for the position of the informed and free individual. For topics such as data protection, privacy, and surveillance, and with examples from the digital economy, the kind of societal debate that is emerging is outlined in the form of theses. It is important that the question of the digital society's path be answered by enlightened people and not dictated by a development that can afterwards only be taken note of.
Chapter
Services in smart environments usually require personal information to customize their behavior for the specific needs of a user. Traditionally, users express privacy preferences in precompiled policies to control which information is disclosed to services within smart environments. A limitation of policies is that they are hard to create and maintain when the potentially communicated information or the context that influences a disclosure decision are highly diverse and hard to predict. Managing privacy ad hoc, in the moment when a service requests personal information, circumvents those problems. A drawback of ad hoc privacy control is the increased privacy-related user interaction during service usage. This can be balanced by an assistance that handles personal information dynamically based on context information influencing a disclosure decision. In this paper we describe a simulation environment to evaluate a context-based, data-mining-driven disclosure assistance and present related results.
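An agent-based simulation of this kind can be pictured as a loop in which agents answer service requests according to a hidden preference rule, producing the labeled interactions the disclosure assistance learns from. The sketch below is a simplified assumption of such a loop; the contexts, services, and the agent's rule are invented.

```python
# Sketch of an agent-driven evaluation loop for a disclosure assistant.
import random
from collections import Counter

random.seed(0)
CONTEXTS = [("shop", "day"), ("home", "night"), ("office", "day")]
SERVICES = ["navigation", "ad_display"]

def agent_decision(service, place, time):
    """Hidden ground-truth preference of the simulated user (illustrative):
    disclose only to navigation, and never at home at night."""
    return service == "navigation" and not (place == "home" and time == "night")

# Capture labeled interactions, as the prototype environment would.
log = []
for _ in range(200):
    service = random.choice(SERVICES)
    place, time = random.choice(CONTEXTS)
    log.append((service, place, time, agent_decision(service, place, time)))

# Baseline assistant: majority decision per service, ignoring context.
majority = {s: Counter(d for s2, _, _, d in log if s2 == s).most_common(1)[0][0]
            for s in SERVICES}
hits = sum(majority[s] == d for s, _, _, d in log)
print(f"context-blind baseline accuracy: {hits / len(log):.2f}")
```

A context-aware learner, like the assistance evaluated in the chapter, should beat this context-blind baseline exactly on the context-dependent cases (here, navigation requests at home at night).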
Article
Ubiquitous social networking (USN) can be seen as an evolution of ubiquitous computing supporting the social well-being of people in their everyday lives. The vision of USN focuses on enhancing social interactions among its participants during users' physical meetings. This target is leading towards important challenges such as social sensing, enabling social networking and privacy protection. In this paper we firstly investigate the methods and technologies for acquisition of the relevant context for promotion of sociability among inhabitants of USN environments. Afterwards, we review architectures and techniques for enabling social interactions between participants. Finally, we identify privacy as the major challenge for networking in USN environments. Consequently, we depict design guidelines and review privacy protection models for facilitating personal information disclosure.
Article
Improving human communication during face-to-face meetings is nowadays possible by transferring online social networking benefits to the physical world. This is enabled by the ubiquitous social networking services that became available by means of wirelessly interconnected smart devices, automatically exchanging personal user data. The main goal of these services is to facilitate the initialisation of relationships between people who do not know each other, but they probably should. Given that sharing of personal information is an intrinsic part of ubiquitous social networking, these services are subject to crucial privacy threats. Inspired by the usability and privacy limitations of existing design solutions, we identify, describe and qualitatively evaluate four drawbacks to be avoided when designing ubiquitous social networking applications. By addressing these drawbacks, services become more functional and more oriented to ensure the end users' privacy, thus contributing to the long-term success of this technology.
Article
Despite the great success of online social networks, there is still no automated way to facilitate communication between people in the physical environment. Ubiquitous social networking services aim at transferring online social networking benefits to the physical world, by facilitating advantageous relationships during physical meetings between people who do not know each other, but probably should. In this paper, we present a potential solution for establishing ubiquitous social networking services by integrating online social networks with opportunistic networks. This solution, called local social networks, focuses on uncovering relevant connections between people nearby, by providing a platform for automatic exchange of user personal information in order to discover interpersonal affinities. Firstly, we define and discuss the concept, advantages, preliminary architecture and potential future applications of local social networks as well as introduce the first prototype, named Spiderweb. Afterwards, we present results of a qualitative investigation that researched whether 16 active online social network users would accept ubiquitous social networking services. The results revealed that all the participants perceived the usefulness of these services and 14 of them would be willing to accept all the necessary requirements for the establishment of local social networks and consequently be potential users.
Ubiquitous social networking services offer new opportunities for developing advantageous relationships by uncovering hidden connections that people share with others nearby. As sharing of personal information is an intrinsic part of ubiquitous social networking, these services are subject to crucial privacy threats. In order to contribute to the design of privacy management systems, we present results of a mixed methods study that investigated the influential factors behind the variation of human data sensitivity under different circumstances. The results indicate that users' information sensitivity decreases as the relevance of data disclosure for the initiation of relationships with others increases. We suggest privacy designers should take into account the purpose of disclosure and the environment as primary indexes for data disclosure. Other influential factors, i.e. activity, mood, location familiarity, number of previous encounters and mutual friends, were also discovered to influence participants' data disclosure, but as factors of secondary importance.
Nowadays, mobile social networks are capable of promoting social networking benefits during physical meetings, in order to leverage interpersonal affinities not only among acquaintances, but also between strangers. Due to their foundation on automated sharing of personal data in the physical surroundings of the user, these networks are subject to crucial privacy threats. Privacy management systems must be capable of accurate selection of data disclosure according to human data sensitivity evaluation. Therefore, it is crucial to research and comprehend an individual's personal information disclosure decisions happening in ordinary human communication. Consequently, in this paper we provide insight into influential factors of human data disclosure decisions, by presenting and analysing results of an empirical investigation comprising two online surveys. We focus on the following influential factors: inquirer, purpose of disclosure, access & control of the disclosed information, location familiarity and current activity of the user. This research can serve as relevant input for the design of privacy management models in mobile social networks.
Article
Full-text available
Over the past few years, a number of mobile applications have emerged that allow users to locate one another. Some of these applications are driven by a desire from enterprises to increase the productivity of their employees. Others are geared towards supporting social networking scenarios or security-oriented scenarios. The growing number of cell phones sold with location tracking technologies such as GPS or A-GPS along with the emergence of WiFi-based location tracking solutions could lead to mainstream adoption of some of these applications. At the same time, however, a number of people have expressed concerns about the privacy implications associated with this class of software, suggesting that broad adoption may only happen to the extent that these concerns are adequately addressed. In this article, we report on work conducted at Carnegie Mellon University in the context of PeopleFinder, an application that enables cell phone and laptop users to selectively share their locations with others (e.g. friends, family, and colleagues). The objective of our work has been to better understand people's attitudes and behaviors towards privacy as they interact with such an application, and to explore technologies that empower users to more effectively and efficiently specify their privacy preferences.
Article
Full-text available
Privacy-Enhancing Technologies (PET) are the technical answer to social and legal privacy requirements. PET become constituents of tools to manage users' personal data. Users can thereby control their individual digital identity, i.e. their individual partial identities in an online world. Existing commercially available identity management systems (IMS) do not yet provide privacy-enhancing functionality. We discuss general concepts and mechanisms for privacy-enhancing IMS (PE-IMS) in detail and highlight where existing IMS need to be improved in order to deliver them. Derived from general concepts and incorporating existing mechanisms, we define a component-based architecture for PE-IMS. This architecture describes the basic building blocks a PE-IMS must include, and so it is meant to be used as a fundamental concept for PE-IMS in practice. Finally, we give an outlook on the future development concerning IMS. Keywords: Identity, Privacy, Identity Management System, Privacy-Enhancing Technologies, PET, Privacy-Enhancing Identity Management System, Multilateral Security
Article
Full-text available
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only “hard” areas but also outliers and noise.
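The effects the study measures can be reproduced in miniature with off-the-shelf ensembles. A minimal sketch using scikit-learn follows; the synthetic dataset and parameters are illustrative, not those of the empirical study, and the library defaults stand in for the paper's tree and Naive-Bayes inducers.

```python
# Sketch comparing a single tree with bagged and boosted ensembles.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),  # unstable learner
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "adaboost": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name:12s} accuracy: {acc:.3f}")
```

Consistent with the paper's decomposition, bagging mainly reduces the variance of the unstable tree, while boosting can reduce both bias and variance.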
Article
Full-text available
To participate in meaningful privacy practice in the context of technical systems, people require opportunities to understand the extent of the systems' alignment with relevant practice and to conduct discernible social action through intuitive or sensible engagement with the system. It is a significant challenge to design for such understanding and action through the feedback and control mechanisms of today's devices. To help designers meet this challenge, we describe five pitfalls to beware when designing interactive systems, on or off the desktop, with personal privacy implications. These pitfalls are: obscuring potential information flow, obscuring actual information flow, emphasizing configuration over action, lacking coarse-grained control, and inhibiting existing practice. They are based on a review of the literature, on analyses of existing privacy-affecting systems, and on our own experiences designing a prototypical user interface for managing privacy in ubiquitous computing. We illustrate how some existing research and commercial systems, our prototype included, fall into these pitfalls and how some avoid them.
Conference Paper
Full-text available
KDD is a complex and demanding task. While a large number of methods have been established for numerous problems, many challenges remain to be solved. New tasks emerge requiring the development of new methods or processing schemes. As in software development, the development of such solutions demands careful analysis, specification, implementation, and testing. Rapid prototyping is an approach which allows crucial design decisions to be made as early as possible. A rapid prototyping system should support maximal re-use and innovative combinations of existing methods, as well as simple and quick integration of new ones. This paper describes YALE, a free open-source environment for KDD and machine learning. YALE provides a rich variety of methods which allows rapid prototyping for new applications and makes costly re-implementations unnecessary. Additionally, YALE offers extensive functionality for process evaluation and optimization, which is a crucial property for any KDD rapid prototyping tool. Following the paradigm of visual programming eases the design of processing schemes. While the graphical user interface supports interactive design, the underlying XML representation enables automated applications after the prototyping phase. After a discussion of the key concepts of YALE, we illustrate the advantages of rapid prototyping for KDD on case studies ranging from data pre-processing to result visualization. These case studies cover tasks like feature engineering, text mining, data stream mining and tracking of drifting concepts.
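YALE's nested operator chains are expressed in XML; the compositional idea, though not YALE's actual vocabulary, can be sketched in a few lines. The operator names below are invented for illustration.

```python
# Sketch of a composable KDD operator chain (operator names are invented).
from typing import Callable

Operator = Callable[[list], list]

def chain(*ops: Operator) -> Operator:
    """Compose operators left to right, like nesting them in a process tree."""
    def run(data: list) -> list:
        for op in ops:
            data = op(data)
        return data
    return run

def drop_short(rows):
    """Filter step: keep rows with at least two features."""
    return [row for row in rows if len(row) >= 2]

def normalize(rows):
    """Scale each row by its maximum absolute value (toy preprocessing)."""
    return [[x / (max(map(abs, row)) or 1) for x in row] for row in rows]

process = chain(drop_short, normalize)
print(process([[2.0, 4.0], [5.0], [1.0, -3.0, 6.0]]))
```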
Article
Full-text available
Various privacy aspects of location-based applications are discussed. Global Positioning System (GPS) and phone-based technologies fall under location-based applications. The first step in protecting users' location privacy is notifying them of requests for private information. LocServ, a middleware service that lies between location-based applications and location-tracking technologies, was created to support various location-based applications. Validators check the acceptability of a privacy policy and determine whether a system should accept a request. Potential validator components include user confirmation, user data and context, and external services.
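The validator pipeline can be sketched as a chain of checks a location request must pass, with user confirmation as the escalation path; the validator logic below is invented for illustration.

```python
# Sketch of a LocServ-style validator chain (all logic is illustrative).
def policy_validator(request):
    """Reject requesters outside the user's allowed set."""
    return request["requester"] in {"family", "colleague"}

def context_validator(request):
    """Only share location during working hours."""
    return 9 <= request["hour"] < 17

def user_confirmation(request):
    """Fallback validator: would prompt the user; auto-denied in this sketch."""
    return False

VALIDATORS = [policy_validator, context_validator]

def accept(request):
    if all(check(request) for check in VALIDATORS):
        return True
    return user_confirmation(request)  # escalate instead of silently denying

print(accept({"requester": "colleague", "hour": 11}))  # True
print(accept({"requester": "stranger",  "hour": 11}))  # False (would prompt)
```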
Article
Full-text available
Ubiquitous computing stands to redefine established notions of privacy as it introduces regular, pervasive sensing of personal information such as identity, location, and activity. To effectively and comfortably manage the disclosure of personal information, end-users will require a coherent conceptual model of privacy and convenient user interfaces to manage it. We describe a conceptual framework designed to support personal privacy management in ubiquitous computing by empowering users to adjust the precision of disclosed information. The framework relies on three key strategies: encapsulation, a priori configuration, and manual configuration. We describe a prototypical user interface built to instantiate this framework and we report the results of a formative evaluation of the framework. Results show our approach is superior to simple, automated disclosure paradigms but can be further refined.
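Adjusting the precision of disclosed information can be sketched as picking a coarser rung on a granularity ladder; the ladder and trust mapping below are invented examples, not the framework's actual encoding.

```python
# Sketch of precision-based disclosure: coarser levels reveal less.
LOCATION_LADDER = [
    "exact coordinates",  # level 0: full precision
    "street block",       # level 1
    "city district",      # level 2
    "city",               # level 3
    "undisclosed",        # level 4: nothing revealed
]

def disclose_location(trust_level: int) -> str:
    """Map a requester's trust (0 = none .. 4 = full) to a precision level."""
    level = len(LOCATION_LADDER) - 1 - trust_level
    level = max(0, min(level, len(LOCATION_LADDER) - 1))
    return LOCATION_LADDER[level]

for trust in (4, 2, 0):
    print(f"trust={trust} -> {disclose_location(trust)}")
```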
Article
Full-text available
Privacy is a severe problem facing pervasive computing. The fundamental question arises: Who gets to know personal data stored on mobile devices? Current devices have access controls for the user of the device, but do not consider the environment from a privacy aspect. The user has limited control over which personal data is offered at different locations. However, this information offered already allows the generation of various profiles of the device's user, for example location profiles. To improve the user's privacy, we propose a situation-based control over the data published and the services offered. Comparable to "normal life", this identity management allows the device to present different subsets of the user's identity depending on the perceived context.
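Situation-based control of the published data can be sketched as a mapping from the perceived context to a facet of the full identity; the contexts and facets below are invented for illustration.

```python
# Sketch of situation-dependent identity subsets (values are invented).
FULL_IDENTITY = {
    "name": "Alice Example",
    "email": "alice@example.org",
    "location": "office",
    "calendar": "busy until 15:00",
}

FACETS = {                        # attributes each situation may expose
    "office": {"name", "email", "calendar"},
    "shopping_mall": {"email"},   # pseudonymous contact only
    "home": set(),                # publish nothing
}

def published_identity(situation: str) -> dict:
    allowed = FACETS.get(situation, set())  # unknown context -> nothing
    return {k: v for k, v in FULL_IDENTITY.items() if k in allowed}

print(published_identity("office"))
print(published_identity("shopping_mall"))
```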
Conference Paper
Although privacy is broadly recognized as a dominant concern for the development of novel interactive technologies, our ability to reason analytically about privacy in real settings is limited. A lack of conceptual interpretive frameworks makes it difficult to unpack interrelated privacy issues in settings where information technology is also present. Building on theory developed by social psychologist Irwin Altman, we outline a model of privacy as a dynamic, dialectic process. We discuss three tensions that govern interpersonal privacy management in everyday life, and use these to explore select technology case studies drawn from the research literature. These suggest new ways for thinking about privacy in socio-technical environments as a practical matter.
Chapter
In this short paper we describe the architectural concept of a Citizen Digital Assistant (CDA) and preliminary results of our implementation. A CDA is a mobile user device, similar to a Personal Digital Assistant (PDA). It supports the citizen when dealing with public authorities and proves his rights, if desired even without revealing his identity. Requirements for secure and trusted interactions in e-Government solutions are presented and shortcomings of state-of-the-art digital ID cards are considered. The Citizen Digital Assistant eliminates these shortcomings and enables citizen-controlled communication providing the secure management of digital documents, identities, and credentials.
Conference Paper
User-centric identity management will be necessary to protect users' privacy in an electronic society. However, designing such systems is a complex task, as the expectations of the different parties involved in electronic transactions have to be met. In this work we give an overview of the actual situation in user-centric identity management and point out problems encountered there. In particular, we present the current state of research and mechanisms useful to protect the user's privacy. Additionally, we show security problems that have to be borne in mind while designing such a system and point out possible solutions. Thereby, we concentrate on attacks on linkability and identifiability, and possible protection methods.
Article
Most people do not often read privacy policies because they tend to be long and difficult to understand. The Platform for Privacy Preferences (P3P) addresses this problem by providing a standard machine-readable format for web site privacy policies. P3P user agents can fetch P3P privacy policies automatically, compare them with a user's privacy preferences, and alert and advise the user. Developing user interfaces for P3P user agents is challenging for several reasons: privacy policies are complex, user privacy preferences are often complex and nuanced, users tend to have little experience articulating their privacy preferences, users are generally unfamiliar with much of the terminology used by privacy experts, users often do not understand the privacy-related consequences of their behavior, and users have differing expectations about the type and extent of privacy policy information they would like to see. We developed a P3P user agent called Privacy Bird. Our design was informed by privacy surveys and our previous experience with prototype P3P user agents. We describe our design approach, compare it with the approach used in other P3P user agents, evaluate our design, and make recommendations to designers of other privacy agents.
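The agent's core step, comparing a fetched policy against stored preferences, can be sketched with dictionaries standing in for parsed P3P XML; the matching rules below are simplified illustrations, not real P3P or APPEL semantics.

```python
# Sketch of a Privacy-Bird-style policy check (simplified, not real P3P).
SITE_POLICY = {                  # stand-in for a parsed site policy
    "purpose": {"admin", "develop", "telemarketing"},
    "retention": "indefinitely",
}
USER_PREFS = {                   # stand-in for the user's preferences
    "forbidden_purposes": {"telemarketing"},
    "acceptable_retention": {"no-retention", "stated-purpose"},
}

def check_policy(policy, prefs):
    """Return warnings where the site's policy conflicts with preferences."""
    warnings = []
    bad = policy["purpose"] & prefs["forbidden_purposes"]
    if bad:
        warnings.append(f"disallowed purposes: {sorted(bad)}")
    if policy["retention"] not in prefs["acceptable_retention"]:
        warnings.append(f"retention too long: {policy['retention']}")
    return warnings

for warning in check_policy(SITE_POLICY, USER_PREFS):
    print("warn the user:", warning)
```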
Conference Paper
Protecting personal privacy is going to be a prime concern for the deployment of ubiquitous computing systems in the real world. With daunting Orwellian visions looming, it is easy to conclude that tamper-proof technical protection mechanisms such as strong anonymization and encryption are the only solutions to such privacy threats. However, we argue that such perfect protection for personal information will hardly be achievable, and propose instead to build systems that help others respect our personal privacy, enable us to be aware of our own privacy, and to rely on social and legal norms to protect us from the few wrongdoers. We introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system. Even though such a system cannot guarantee our privacy, we believe that it can create a sense of accountability in a world of invisible services that we will be comfortable living in and interacting with.
Article
Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recently-proposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error rates higher than those of C4.5 and C4.5rules. We then propose a number of modifications resulting in an algorithm RIPPERk that is very competitive with C4.5rules with respect to error rates, but much more efficient on large samples. RIPPERk obtains error rates lower than or equivalent to C4.5rules on 22 of 37 benchmark problems, scales nearly linearly with the number of training examples, and can efficiently process noisy datasets containing hundreds of thousands of examples.
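What RIPPER-style learners produce is an ordered rule list with a default class. The sketch below applies such a rule set to new examples; the rules are invented and the induction step itself is omitted.

```python
# Sketch of classifying with an ordered rule list, as produced by
# RIPPER-style learners; the rules themselves are invented.
RULES = [
    # (conditions, class): fire the first rule whose conditions all hold
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "rain", "wind": "strong"}, "no"),
]
DEFAULT_CLASS = "yes"

def classify(example: dict) -> str:
    for conditions, label in RULES:
        if all(example.get(k) == v for k, v in conditions.items()):
            return label
    return DEFAULT_CLASS        # no rule fired -> default/majority class

print(classify({"outlook": "sunny", "humidity": "high"}))  # -> no
print(classify({"outlook": "overcast", "wind": "weak"}))   # -> yes
```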
C. Bünnig, "Learning context based disclosure of private information," in The Internet of Things & Services - 1st Intl. Research Workshop, Valbonne, France, Sep. 2008.
W3C, "The Platform for Privacy Preferences 1.0 (P3P1.0) Specification," Apr. 2002. Available at http://www.w3.org/TR/P3P/ (accessed December 12, 2008).