ABSTRACT: It has become cliché to observe that new information technologies endanger privacy. Typically, the threat is viewed as coming from Big Brother (the government) or Company Man (the firm). But for a nascent data practice we call "self-surveillance," the threat may actually come from ourselves. Using various existing and emerging technologies, such as GPS-enabled smartphones, we are beginning to measure ourselves in granular detail - how long we sleep, where we drive, what we breathe, what we eat, how we spend our time. And we are storing these data casually, perhaps promiscuously, somewhere in the "cloud," and giving third parties broad access. This data practice of self-surveillance will decrease information privacy in troubling ways. To counter this trend, we recommend the creation of the Personal Data Guardian, a new profession that manages Personal Data Vaults, which are repositories for self-surveillance data. In Part I, we describe the emerging data practice of self-surveillance, which has been enabled by various new measurement and communication technologies. We explain how self-surveillance can produce substantial benefits to both the individual and society, in both intrinsic and instrumental terms. Unfortunately, such benefits may never be achieved without substantial privacy costs. Part II makes threshold clarifications about those privacy costs. It proffers two different metrics by which privacy might be measured and explains why the rise of self-surveillance will entail a net loss of privacy under either metric. We also point out that the problem of self-surveillance (our surveilling ourselves) is, fortunately, more tractable than related privacy problems, such as third-party surveillance of us and our surveillance of third parties.
Having cleared this brush, we turn to our central proposal - the creation of the Personal Data Guardian, a professional whose job it is to maintain a client's self-surveillance data in a Personal Data Vault. In addition to providing technical specifications of this approach, we outline the specific legal relations, which include a fiduciary relationship, between client and Guardian. In addition, we recommend the creation of an evidentiary privilege, similar to a trade secret privilege, that protects self-surveillance data held by a licensed Guardian. Finally, Part IV answers objections that our solution is implausible or useless. We conclude by pointing out that various legal, technological, and self-regulatory attempts at safeguarding privacy from new digital, interconnected technologies have not been particularly successful. Before self-surveillance becomes a widespread practice, some new innovation is needed. In our view, that innovation is a new "species," the Personal Data Guardian, created through a fusion of law and technology and released into the current information ecosystem.
ABSTRACT: The increasing ubiquity of the mobile phone is creating many opportunities for personal context sensing, and will result in massive databases of individuals' sensitive information incorporating locations, movements, images, text annotations, and even health data. In existing system architectures, users upload their raw (unprocessed or unfiltered) data streams directly to content-service providers and have little control over their data once they opt in. We present Personal Data Vaults (PDVs), a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. Introducing a PDV gives users flexible and granular access control over data. To reduce the burden on users and improve usability, we explore three mechanisms for managing data policies: Granular ACL, Trace-audit and Rule Recommender. We have implemented a proof-of-concept PDV and evaluated it using real data traces collected from two personal participatory sensing applications.
Proceedings of the 2010 ACM Conference on Emerging Networking Experiments and Technology (CoNEXT 2010), Philadelphia, PA, USA, November 30 - December 3, 2010
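The PDV abstract above describes per-consumer, per-field access control ("Granular ACL") applied to raw data streams before sharing. A minimal sketch of that idea follows; the `Policy` record, `apply_policy` helper, and field names are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    consumer: str        # content-service provider requesting the data
    allowed_fields: set  # fields this consumer may see

def apply_policy(record: dict, policy: Policy) -> dict:
    """Filter a raw sensor record down to the fields the policy allows."""
    return {k: v for k, v in record.items() if k in policy.allowed_fields}

# A raw self-surveillance record is filtered before leaving the vault:
record = {"lat": 34.07, "lon": -118.44, "heart_rate": 72, "ts": 1700000000}
coarse = Policy(consumer="traffic-service", allowed_fields={"lat", "lon", "ts"})
shared = apply_policy(record, coarse)
```

Under this sketch, the traffic service receives location and time but never the health field, which is the essence of granular, consumer-specific filtering.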
ABSTRACT: As mobile phones advance in functionality and capability, they are being used for more than just communication. Increasingly, these devices are being employed as instruments for introspection into habits and situations of individuals and communities. Many of the applications enabled by this new use of mobile phones rely on contextual information. The focus of this work is on one dimension of context, the transportation mode of an individual when outside. We create a convenient classification system, requiring no specific device position or orientation, that uses a mobile phone with a built-in GPS receiver and an accelerometer. The transportation modes identified include whether an individual is stationary, walking, running, biking, or in motorized transport. The overall classification system consists of a decision tree followed by a first-order discrete Hidden Markov Model and achieves an accuracy level of 93.6% when tested on a dataset obtained from sixteen individuals.
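The two-stage design above (an instantaneous classifier whose noisy per-sample labels are smoothed by a first-order HMM) can be sketched with Viterbi decoding over a "sticky" transition model. The state set matches the abstract, but the transition and emission probabilities here are illustrative assumptions, not the paper's trained values, and a fixed confusion rate stands in for the decision tree.

```python
import math

STATES = ["still", "walk", "run", "bike", "motor"]

def trans(i, j):
    # Sticky transitions: a transportation mode tends to persist
    # between consecutive samples.
    return 0.9 if i == j else 0.1 / (len(STATES) - 1)

def emit(state, observed_label):
    # Assume the per-sample (decision-tree) classifier is right 80% of the time.
    return 0.8 if state == observed_label else 0.2 / (len(STATES) - 1)

def viterbi(observed):
    """Most likely smoothed state sequence given noisy per-sample labels."""
    V = [{s: math.log(1.0 / len(STATES)) + math.log(emit(s, observed[0]))
          for s in STATES}]
    back = []
    for obs in observed[1:]:
        col, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: V[-1][p] + math.log(trans(p, s)))
            col[s] = V[-1][best] + math.log(trans(best, s)) + math.log(emit(s, obs))
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    last = max(STATES, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# A lone "run" glitch inside a walking segment gets smoothed away:
smoothed = viterbi(["walk", "walk", "run", "walk", "walk"])
```

The HMM stage is what lifts accuracy over the raw per-sample classifier: isolated misclassifications are overruled by the persistence prior.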
ABSTRACT: PEIR, the Personal Environmental Impact Report, is a participatory sensing application that uses location data sampled from everyday mobile phones to calculate personalized estimates of environmental impact and exposure. It is an example of an important class of emerging mobile systems that combine the distributed processing capacity of the web with the personal reach of mobile technology. This paper documents and evaluates the running PEIR system, which includes mobile handset based GPS location data collection, and server-side processing stages such as HMM-based activity classification (to determine transportation mode); automatic location data segmentation into "trips"; lookup of traffic, weather, and other context data needed by the models; and environmental impact and exposure calculation using efficient implementations of established models. Additionally, we describe the user interface components of PEIR and present usage statistics from a two-month snapshot of system use. The paper also outlines new algorithmic components developed based on experience with the system and undergoing testing for inclusion in PEIR, including: new map-matching and GSM-augmented activity classification techniques, and a selective hiding mechanism that generates believable proxy traces for times a user does not want their real location revealed.
Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services (MobiSys 2009), Kraków, Poland, June 22-25, 2009
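One stage of the PEIR pipeline above is automatic segmentation of a location trace into "trips". A minimal sketch of one plausible rule, splitting whenever the gap between consecutive GPS fixes exceeds a dwell threshold, is shown below; the threshold, record format, and function name are assumptions, since the abstract does not specify the deployed segmentation rules.

```python
def segment_trips(fixes, max_gap_s=300):
    """Split (timestamp, lat, lon) fixes into trips at gaps > max_gap_s seconds."""
    trips, current = [], []
    for fix in sorted(fixes, key=lambda f: f[0]):
        # A long silence between fixes is treated as a dwell: close the
        # current trip and start a new one.
        if current and fix[0] - current[-1][0] > max_gap_s:
            trips.append(current)
            current = []
        current.append(fix)
    if current:
        trips.append(current)
    return trips

fixes = [(0, 34.00, -118.40), (60, 34.01, -118.41),       # morning trip
         (4000, 34.05, -118.45), (4060, 34.06, -118.46)]  # later trip
trips = segment_trips(fixes)
```

Each resulting trip can then be fed independently to the activity classifier and the impact/exposure models.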
ABSTRACT: Mobile phones and accompanying network layers provide a platform to capture and share location, image, and acoustic data. This substrate enables participatory sensing: coordinated data gathering by individuals and communities to explore the world around them. Realizing such widespread and participatory sensing poses difficult challenges. In this paper, we discuss one particular challenge: creating a recruitment service to enable sensing organizers to select well-suited participants. Our approach concentrates on finding participants based on geographic and temporal coverage, as determined by context-annotated mobility profiles that model transportation mode, location, and time. We outline a three-stage recruitment framework designed to be parsimonious so as to limit risk to participants by reducing the location and context information revealed to the system. Finally, we illustrate the utility of the framework, along with the corresponding modeling technique for mobility information, by analyzing data from a pilot mobility study consisting of ten users.
Location and Context Awareness, 4th International Symposium, LoCA 2009, Tokyo, Japan, May 7-8, 2009, Proceedings
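Selecting participants for geographic and temporal coverage, as in the recruitment framework above, can be sketched as greedy set cover over (area, time-slot) cells. The cell representation, profile format, and `recruit` helper are illustrative assumptions, not the paper's mobility model.

```python
def recruit(profiles, needed, k):
    """Greedily pick up to k participants covering the most needed cells."""
    chosen, covered = [], set()
    candidates = dict(profiles)
    while candidates and len(chosen) < k:
        # Pick the candidate whose profile covers the most still-uncovered cells.
        best = max(candidates, key=lambda p: len(candidates[p] & (needed - covered)))
        gain = candidates.pop(best) & (needed - covered)
        if not gain:
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

profiles = {
    "alice": {("downtown", "am"), ("campus", "pm")},
    "bob":   {("campus", "pm")},
    "carol": {("suburb", "am"), ("downtown", "pm")},
}
needed = {("downtown", "am"), ("campus", "pm"), ("downtown", "pm")}
chosen, covered = recruit(profiles, needed, k=2)
```

Note that only coarse coverage cells, not raw traces, are revealed to the selector, which matches the framework's parsimony goal.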
ABSTRACT: Participatory design (PD) involves users in all phases of design to build systems that fit user needs while simultaneously helping users understand complex systems. We argue that traditional PD techniques can benefit participatory sensing: community-based participatory research (CBPR) projects in which complex technologies, such as sensing networks using mobile phones, are the research instruments. Based on our pilot work on CycleSense, a community-based data gathering system for bicycle commuters, we discuss the benefits and challenges of PD in participatory sensing settings, and outline a method to integrate PD into the research process.
Proceedings of the Tenth Conference on Participatory Design, PDC 2008, Bloomington, Indiana, USA, October 1-4, 2008
ABSTRACT: Imagers are an increasingly significant source of sensory observations about human activity and the urban environment. ImageScape is a software tool for processing, clustering, and browsing large sets of images. Implemented as a set of web services with an Adobe Flash-based user interface, it supports clustering by both image features and context tags, as well as re-tagging of images in the user interface. Though expected to be useful in many applications, ImageScape was designed as an analysis component of DietSense, a software system under development at UCLA to support (1) the use of mobile devices for automatic multimedia documentation of dietary choices with just-in-time annotation, (2) efficient post facto review of captured media by participants and researchers, and (3) easy authoring and dissemination of the automatic data collection protocols. A pilot study, in which participants ran software that enabled their phones to autonomously capture images of their plates during mealtime, was conducted using an early prototype of the DietSense system, and the resulting image set was used in the creation of ImageScape. ImageScape will support two kinds of users within the DietSense application: the participants in dietary studies will have the ability to easily audit their images, while the recipients of the images, health care professionals managing studies and performing analysis, will be able to rapidly browse and annotate large sets of images.
Proceedings of the 4th Workshop on Embedded Networked Sensors, EmNets 2007, Cork, Ireland, June 25-26, 2007
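ImageScape, described above, clusters images by both image features and context tags. One simple way to combine the two signals is a blended distance (Euclidean over feature vectors, Jaccard over tag sets) driving a naive threshold-based grouping, sketched below; the weights, threshold, and grouping rule are illustrative assumptions, not ImageScape's actual method.

```python
import math

def distance(a, b, w=0.5):
    """Blend feature-space distance with tag dissimilarity."""
    feat = math.dist(a["features"], b["features"])
    union = a["tags"] | b["tags"]
    jaccard = 1 - len(a["tags"] & b["tags"]) / len(union) if union else 0.0
    return w * feat + (1 - w) * jaccard

def cluster(images, threshold=0.6):
    """Assign each image to the first cluster whose seed is within threshold."""
    clusters = []
    for img in images:
        for c in clusters:
            if distance(c[0], img) <= threshold:
                c.append(img)
                break
        else:
            clusters.append([img])
    return clusters

images = [
    {"features": [0.10, 0.20], "tags": {"plate", "lunch"}},
    {"features": [0.12, 0.18], "tags": {"plate", "dinner"}},
    {"features": [0.90, 0.90], "tags": {"receipt"}},
]
groups = cluster(images)
```

Tag overlap lets visually similar images of different meals still group together, which is why clustering on features alone would be weaker for dietary review.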
ABSTRACT: The rapid adoption of mobile phones by society over the last decade and the increasing ability to capture, classify, and transmit a wide variety of data (image, audio, and location) have enabled a new sensing paradigm - where humans carrying mobile phones can act as sensor systems. Human-in-the-loop sensor systems raise many new challenges in areas of sensor data quality assessment, mobility and sampling coordination, and user interaction procedures.
ABSTRACT: Pervasive computing’s historical driver applications include environmental monitoring, safety and security, home and office productivity and guided experience of cultural activities. It also suggests and offers the means to achieve new types of expression in art and entertainment, which has not been a significant area of research despite its cultural and socio-economic importance and unique requirements. This paper discusses motivations and requirements for a pervasive computing architecture for expression, and it presents a specific approach being developed at UCLA in collaboration among research groups in engineering, theater, film and television.
ABSTRACT: Ecce Homology, a physically interactive new-media work, visualizes genetic data as calligraphic forms. A novel computer-vision user interface allows multiple participants, through their movement in the installation space, to select genes from the human genome for visualizing the Basic Local Alignment Search Tool (BLAST), a primary algorithm in comparative genomics. Ecce Homology was successfully installed in the UCLA Fowler Museum, 6 November 2003 - 4 January 2004.
ABSTRACT: With the advent of tiny networked devices, Mark Weiser's vision of a world embedded with invisible computers is coming of age. Due to their small size and relative ease of deployment, sensor networks have been utilized by zoologists, seismologists and military personnel. In this paper, we investigate the novel application of sensor networks to the film industry. In particular, we are interested in augmenting film and video footage with sensor data. Unobtrusive sensors are deployed on a film set or in a television studio and on performers. During the filming of a scene, sensor data such as light intensity, color temperature and location are collected and synchronized with each film or video frame. Later, editors, graphics artists and programmers can view this data in synchronization with film and video playback. For example, such data can help define a new level of seamless integration between computer graphics and real world photography. A real-time version of our system would allow sensor data to trigger camera movement and cue special effects. In this paper, we discuss the design and implementation of the first part of our embedded film set environment, the augmented recording system. Augmented recording is a foundational component for the UCLA Hypermedia Studio's research into the use of sensor networks in film and video production. In addition, we have evaluated our system in a television studio.
IEEE International Conference on Pervasive Computing and Communications (PerCom), March 2004
ABSTRACT: This paper presents a vision of digital technology for the museum as a dynamic connection-making tool that defines new genres and enables new experiences of existing works. The following media-rich interactive installations and performances developed at the HyperMedia Studio, a digital media research unit in the UCLA (University of California Los Angeles) School of Theater, Film and Television are described: (1) "...two, three, many Guevaras," an interactive database that analyzes the message and relevance of Latin American revolutionary Ernesto "Che" Guevara through the artworks he inspired; (2) "Time&Time Again...," a distributed interactive installation that extends media navigation to a site-specific context with both Web- and body-based interfaces; (3) "Invocation & Interference," which explores the cultural practices that regularly overlap and collide, producing unexpected readings and relational interpretations, as experienced from a car traveling in the Argentine pampas; and (4) "Hamletmachine," an installation featuring an original audio performance of "Hamletmachine" by the German playwright Heiner Muller. Also described are the recent UCLA performance collaborations "Fahrenheit 451," "Macbett," and "The Iliad Project." Core technologies are discussed, including instrumented objects and environments, dynamic media control, databases, distributed glue, aesthetic framework, context, presence, and process.
ABSTRACT: This paper describes the architecture of a new control system and associated scripting language currently under development in a collaboration between computer scientists, engineers, and artists. The system is designed to facilitate the creation of real-time relationships between people and media elements in live performance and installation artworks. It draws on the experience of the UCLA HyperMedia Studio in producing media-rich artistic works and suggests an approach also useful for prototyping "interactive" and "smart" spaces for entertainment and education.
Entertainment Computing: Technologies and Applications, IFIP First International Workshop on Entertainment Computing (IWEC 2002), May 14-17, 2002, Makuhari, Japan