Conference Paper

Understanding lifelog sharing preferences of lifeloggers



Lifelogging enables users, the lifeloggers, to passively capture images from a first-person perspective using wearable cameras, ultimately creating a visual diary that encodes every possible aspect of their life in unprecedented detail. This growing phenomenon has raised several privacy concerns for lifeloggers (people wearing the device) and bystanders (anyone captured in the images). In this paper, we present a user study to understand lifeloggers' preferences for sharing images captured in different scenarios with different audience groups. Our findings motivate the need for privacy-preserving techniques that automatically recommend sharing decisions, helping lifeloggers avoid misclosure, i.e. wrongly sharing a sensitive image with one or more sharing groups.



... In addition to concerns about information overload [18], privacy was frequently raised as a significant issue for the potential widespread adoption of lifelogging [14,26,37,59]. Even without the public dissemination of lifelogging data that we associate with the practice today, privacy was seen as a roadblock due to legal issues and the recording of data beyond oneself (e.g., bystanders or sensitive settings [19,26,68]). ...
... Though as lifelogging technologies evolved, sharing became a more integrated component, and early research regarding sharing preferences revealed an interest in sharing the majority of data, even amidst privacy concerns [37]. There has since been a great deal of research that has examined the privacy, ethical, and social implications of lifelogging generally [1,14,26], including implications for research ethics (particularly in the context of informed consent and scope of data [42,48,58]). ...
As the process of creating and sharing data about ourselves becomes more prevalent, researchers have access to increasingly rich data about human behavior. Framed as a fictional paper published at some point in the not-so-distant future, this design fiction draws from current inquiry and debate into the ethics of using public data for research, and speculatively extends this conversation into even more robust and more personal data that could exist when we design new technologies in the future. By looking to how the precedents of today might impact the practices of tomorrow, we can consider how we might design policies, ethical guidelines, and technologies that are forward-thinking.
... This was also noted in Hoyle et al.'s more recent analysis [19] and Korayem et al. [24] propose a framework to automatically address this issue. While Chowdhury et al. [10] find that lifeloggers exhibit little concern for the privacy of bystanders, other work [9] from the perspective of bystanders finds many are unwilling to have their images used without consent, with privacy preferences depending on the context and content of the photos. The authors argue lifelogging applications must understand context in order to make appropriate privacy decisions. ...
Low cost digital cameras in smartphones and wearable devices make it easy for people to automatically capture and share images as a visual lifelog. Having been inspired by a US campus based study that explored individual privacy behaviours of visual lifeloggers, we conducted a similar study on a UK campus, however we also focussed on the privacy behaviours of groups of lifeloggers. We argue for the importance of replicability and therefore we built a publicly available toolkit, which includes camera design, study guidelines and source code. Our results show some similar sharing behaviour to the US based study: people tried to preserve the privacy of strangers, but we found fewer bystander reactions despite using a more obvious camera. In contrast, we did not find a reluctance to share images of screens but we did find that images of vices were shared less. Regarding privacy behaviours in groups of lifeloggers, we found that people were more willing to share images of people they were interacting with than of strangers, that lifelogging in groups could change what defines a private space, and that lifelogging groups establish different rules to manage privacy for those inside and outside the group.
The Microsoft SenseCam is a small lightweight wearable camera used to passively capture photos and other sensor readings from a user's day-to-day activities. It can capture up to 3,000 images per day, equating to almost 1 million images per year. It is used to aid memory by creating a personal multimedia lifelog, or visual recording of the wearer's life. However the sheer volume of image data captured within a visual lifelog creates a number of challenges, particularly for locating relevant content. Within this work, we explore the applicability of semantic concept detection, a method often used within video retrieval, on the novel domain of visual lifelogs. A concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning. By doing so it determines the probability of a concept's presence. We apply detection of 27 everyday semantic concepts on a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were then evaluated on a subset of 95,907 images, to determine the precision for detection of each semantic concept. We conduct further analysis on the temporal consistency, co-occurrence and trends within the detected concepts to more extensively investigate the robustness of the detectors within this novel domain. We additionally present future applications of concept detection within the domain of lifelogging.
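The concept-detection idea described above — a supervised binary classifier per concept, mapping low-level features to a probability of the concept's presence — can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the features, labels, and logistic-regression detector are hypothetical placeholders standing in for real lifelog descriptors and trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake low-level features: 200 images x 64-dim descriptors (placeholder data).
features = rng.normal(size=(200, 64))
# Fake per-concept binary labels (1 = concept present in the image).
labels = {
    "indoors": rng.integers(0, 2, size=200),
    "people": rng.integers(0, 2, size=200),
}

def train_detector(X, y, lr=0.1, steps=200):
    """Tiny logistic-regression detector: low-level features -> P(concept)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)        # gradient step on weights
        b -= lr * np.mean(p - y)                  # gradient step on bias
    return w, b

# One supervised detector per semantic concept.
detectors = {c: train_detector(features, y) for c, y in labels.items()}

# Probability of each concept's presence for a new image.
new_image = rng.normal(size=64)
scores = {c: float(1.0 / (1.0 + np.exp(-(new_image @ w + b))))
          for c, (w, b) in detectors.items()}
```

A real system would replace the synthetic features with visual descriptors extracted from lifelog images and train one such detector for each of the 27 concepts.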
Conference Paper
While media reports about wearable cameras have focused on the privacy concerns of bystanders, the perspectives of the `lifeloggers' themselves have not been adequately studied. We report on additional analysis of our previous in-situ lifelogging study in which 36 participants wore a camera for a week and then reviewed the images to specify privacy and sharing preferences. In this Note, we analyze the photos themselves, seeking to understand what makes a photo private, what participants said about their images, and what we can learn about privacy in this new and very different context where photos are captured automatically by one's wearable camera. We find that these devices record many moments that may not be captured by traditional (deliberate) photography, with camera owners concerned about impression management and protecting private information of both themselves and bystanders.
A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users' (and bystanders') expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid later burdensome review of all collected images; 2) a combination of factors including time, location, and the objects and people appearing in the photo determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, despite reporting almost no opposition or concerns expressed by bystanders over the course of the study.
We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”
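The temporal pyramid mentioned above generalizes the spatial pyramid to time: per-frame features are pooled over the whole clip, then over nested halves, quarters, and so on, and the pooled vectors are concatenated. A minimal sketch under assumed mean pooling (the paper's exact pooling and features are not reproduced here):

```python
import numpy as np

def temporal_pyramid(frame_feats, levels=2):
    """Pool per-frame features over nested temporal segments and concatenate.

    Level 0 pools the whole clip; level k splits it into 2**k segments,
    approximating temporal correspondence between clips of different lengths.
    """
    pooled = []
    for level in range(levels + 1):
        for segment in np.array_split(frame_feats, 2 ** level, axis=0):
            pooled.append(segment.mean(axis=0))
    return np.concatenate(pooled)

# Placeholder input: 30 frames, each with an 8-dim feature vector.
feats = np.random.default_rng(1).normal(size=(30, 8))
desc = temporal_pyramid(feats, levels=2)
# Levels 0..2 give 1 + 2 + 4 = 7 segments, so the descriptor is 7 * 8 = 56-dim.
```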
In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.
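The "spectral and coarsely localized information" underlying the spatial envelope can be illustrated by pooling magnitude-spectrum energy over a coarse grid of image cells; a learned linear map would then project this descriptor onto perceptual axes such as naturalness or openness. This is a simplified sketch, not the authors' descriptor, and the grid size and pooling are assumptions:

```python
import numpy as np

def spectral_descriptor(image, grid=4):
    """Coarsely localized spectral energy over a grid x grid cell layout.

    Each cell contributes the mean magnitude of its 2-D Fourier spectrum,
    capturing dominant spatial structure without segmenting objects.
    """
    h, w = image.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            cells.append(np.abs(np.fft.fft2(cell)).mean())
    return np.array(cells)

# Placeholder grayscale image; real input would be a natural scene.
img = np.random.default_rng(2).normal(size=(64, 64))
desc = spectral_descriptor(img)
# A regression learned from labeled scenes would map `desc` to the
# perceptual dimensions (naturalness, openness, roughness, ...).
```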
As access to information changes with increased use of technology, privacy becomes an increasingly prominent issue among technology users. Privacy concerns should be taken seriously because they influence system adoption, the way a system is used, and may even lead to system disuse. Threats to privacy are not only due to traditional security and privacy issues; human factors issues such as unintentional disclosure of information also influence the preservation of privacy in technology systems. A dual pronged approach was used to examine privacy. First, a broad investigation of younger and older adults' privacy behaviors was conducted. The goal of this study was to gain a better understanding of privacy across technologies, to discover the similarities, and identify the differences in what privacy means across contexts as well as provide a means to evaluate current theories of privacy. This investigation resulted in a categorization of privacy behaviors associated with technology. There were three high level privacy behavior categories identified: avoidance, modification, and alleviatory behavior. This categorization furthers our understanding about the psychological underpinnings of privacy concerns and suggests that 1) common privacy feelings and behaviors exist across people and technologies and 2) alternative designs which consider these commonalities may increase privacy. Second, I examined one specific human factors issue associated with privacy: disclosure error. This investigation focused on gaining an understanding of how to support privacy by preventing misclosure. A misclosure is an error in disclosure. When information is disclosed in error, or misclosed, privacy is violated in that information not intended for a specific person(s) is nevertheless revealed to that person. The goal of this study was to provide a psychological basis for design suggestions for improving privacy in technology which was grounded in empirical findings. 
The study furthers our understanding about privacy errors in the following ways: First, it demonstrates for the first time that both younger and older adults experience misclosures. Second, it suggests that misclosures occur even when technology is very familiar to the user. Third, it revealed that some misclosure experiences result in negative consequences, suggesting misclosure is a potential threat to privacy. Finally, by exploring the context surrounding each reported misclosure, I was able to propose potential design suggestions that may decrease the likelihood of misclosure.
Conference Paper
The SenseCam is a passively capturing wearable camera, worn around the neck, that takes an average of almost 2,000 images per day, which equates to over 650,000 images per year. It is used to create a personal lifelog or visual recording of the wearer's life and generates information which can be helpful as a human memory aid. For such a large amount of visual information to be of any use, it is accepted that it should be structured into "events", of which there are about 8,000 in a wearer's average year. In automatically segmenting SenseCam images into events, it is desirable to automatically emphasise more important events and decrease the emphasis on mundane/routine events. This paper introduces the concept of novelty to help determine the importance of events in a lifelog. By combining novelty with face-to-face conversation detection, our system improves on previous approaches. In our experiments we use a large set of lifelog images, a total of 288,479 images collected by 6 users over a time period of one month each.
A survey on life logging data capturing
  • L M Zhou
  • C Gurrin
Zhou, L.M. and Gurrin, C., 2012. A survey on life logging data capturing. SenseCam 2012.
Sensitive Lifelogs: A Privacy Analysis of Photos from Wearable Cameras
  • R Hoyle
Hoyle, R., et al., 2015, April. Sensitive Lifelogs: A Privacy Analysis of Photos from Wearable Cameras. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1645-1648). ACM.
Combining image descriptors to effectively retrieve events from visual lifelogs
  • A R Doherty
Doherty, A.R., et al., 2008, October. Combining image descriptors to effectively retrieve events from visual lifelogs. In Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval (pp. 10-17). ACM.
Privacy behaviors of lifeloggers using wearable cameras
  • R Hoyle
Hoyle, R., et al., 2014, September. Privacy behaviors of lifeloggers using wearable cameras. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (pp. 571-582). ACM.