Conference Paper

Eliciting Security & Privacy-Informed Sharing Techniques for Multi-User Augmented Reality

... Recent work addresses this gap by embedding privacy considerations into development processes, aligning the expertise of XR developers and privacy specialists. For example, Rajaram et al. [36] employed scenario-based threat modeling in multi-user AR to balance competing goals of usability, feasibility, and security/privacy. Ruth et al. [41] proposed design goals that accommodate security needs while maintaining functionality, exemplified by a content-sharing prototype for multi-user AR. ...
... Understanding these challenges is critical to bridging the gap between user needs and constraints in hands-on implementation, motivating our focus on the obstacles designers and developers face during the development process for VR (RQ2). Prior AR research indicates that incorporating privacy considerations often requires balancing competing design goals, such as usability, feasibility, and privacy [36]. However, comprehensive guidelines for designing effective and compliant VR privacy interfaces are still lacking. ...
Preprint
Full-text available
Extended reality (XR) devices have become ubiquitous. They are equipped with arrays of sensors, collecting extensive user and environmental data, allowing inferences about sensitive user information users may not realize they are sharing. Current VR privacy notices largely replicate mechanisms from 2D interfaces, failing to leverage the unique affordances of virtual 3D environments. To address this, we conducted brainstorming and sketching sessions with novice game developers and designers, followed by privacy expert evaluations, to explore and refine privacy interfaces tailored for VR. Key challenges include balancing user engagement with privacy awareness, managing complex privacy information with user comprehension, and maintaining compliance and trust. We identify design implications such as thoughtful gamification, explicit and purpose-tied consent mechanisms, and granular, modifiable privacy control options. Our findings provide actionable guidance to researchers and practitioners for developing privacy-aware and user-friendly VR experiences.
... Additionally, Rajaram et al. [37] highlight the growing use of AR in collaborative settings and the associated risks of data breaches, unauthorized access, and privacy violations. They conducted a user study to explore concerns and preferences about security and privacy in shared AR environments, using the findings to propose techniques such as granular access controls, encryption, and feedback mechanisms to address vulnerabilities. ...
... Our findings highlight the need to reflect on UI design in MR environments and educate users on potential MR-specific threats, building on the emphasis from [37] on integrating security and privacy considerations into collaborative MR systems while balancing usability, feasibility, and access control. We believe there is a pressing need to optimize UI design with intuitive security indicators that align with the user's mental model and cognitive load in MR space. ...
Preprint
Mixed Reality (MR) devices are being increasingly adopted across a wide range of real-world applications, ranging from education and healthcare to remote work and entertainment. However, the unique immersive features of MR devices, such as 3D spatial interactions and the encapsulation of virtual objects by invisible elements, introduce new vulnerabilities leading to interaction obstruction and misdirection. We implemented latency, click redirection, object occlusion, and spatial occlusion attacks within a remote collaborative MR platform using the Microsoft HoloLens 2 and evaluated user behavior and mitigations through a user study. We compared responses to MR-specific attacks, which exploit the unique characteristics of remote collaborative immersive environments, and traditional security attacks implemented in MR. Our findings indicate that users generally exhibit lower recognition rates for immersive attacks (e.g., spatial occlusion) compared to attacks inspired by traditional ones (e.g., click redirection). Our results demonstrate a clear gap in user awareness and responses when collaborating remotely in MR environments. Our findings emphasize the importance of training users to recognize potential threats and enhanced security measures to maintain trust in remote collaborative MR systems.
...
• Conducting multiple iterations of the process to achieve saturation (Dinh et al., 2023; Faber et al., 2022; Hegde et al., 2023; Hirzle et al., 2023; Impedovo et al., 2013; Turakhia et al., 2023)
• Utilising multiple persons in the process of developing and/or evaluating codes (Çolakoglu et al., 2023; Dinh et al., 2023; Faber et al., 2022; Hegde et al., 2023; Hirzle et al., 2023; Impedovo et al., 2013; Rajaram et al., 2023; Turakhia et al., 2023)
A review of 76 papers published between 2019 and 2023, identified in a PRISMA study on XR development strategies and policies, did not reveal any codes for developing an XR project. Within these data, the researchers developed codes for recordings of participants' responses (Williams, 2020), literature on AR use cases, benefits, or obstacles (Nassereddine, 2019), and interview transcripts (Karre et al., 2019). ...
... This in-depth knowledge and experience provided greater insights into the content of the XR development strategy paper to identify suitable codes.

Reference                 Example codes                                    Code format                                                   Sector
(Dinh et al., 2023)       1.2 Convenient access to data                    Number-only with phrase                                       Healthcare
(Faber et al., 2022)      Challenges in Research; Confidence; EMK          Phrase; single word; letter-only abbreviation                 Education
(Hirzle et al., 2023)     C3 Contribution or main findings; C1 Category    Letter-number with phrase; letter-number with a single word   Technology
(Karre et al., 2019)      Lack of efficient methods/tools                  Phrase                                                        Technology
(Rajaram et al., 2023)    Granularity of sharing; Transparency             Phrase; single word                                           Technology
(Houghton et al., 2017)   Social and psychological; Valuing                Phrase; single word                                           Healthcare
(Impedovo et al., 2013)   Participation                                    Single word                                                   Education
(Turakhia et al., 2023)   Competencies                                     Single word                                                   Education
(Hegde et al., 2023)      Availability and utilisation of resources        Phrase                                                        Healthcare
(Nassereddine, 2019)      PreCon7 ...
Article
Full-text available
The Covid-19 pandemic highlighted the importance of virtual systems during physical isolation. Extended Reality (XR) is an alternative to disruptions of physical reality. Therefore, there is a need to increase the development of XR projects. This paper provides a reference codebook to identify critical elements for developing or assessing any XR project. The review of the paper "Analysis of Caribbean XR Survey Creates an XR Development Strategy as a Path to the Regional Metaverse Evolution" provided the codes. The analysis of the development strategy's elements identified factors that encourage XR project creation, completion, and accelerated development. The codebook consists of 24 codes, grouped into categories: strategy and policy, financial, software, human resources, training, geographic, industry sector, design, UX, and I4.0. It employs a three-step process: code familiarisation, code application, and analysis and assessment of coded information. A concise summary table facilitates easy usage. The codebook provides a systematic approach to analyse XR development from ideation to Proof of Concept. It enables stakeholders to identify core requirements, prioritise factors of influence, allocate resources, and select target markets. These codes also assist in evaluating ongoing initiatives and identifying areas for improvement or refinement. Stakeholders can use the codebook for post-mortem analyses to inform strategic actions and optimise future XR projects. This paper's value is a clearly defined set of codes influencing XR project development with a recommended usage process. Stakeholders can leverage these codes to unlock the potential of XR projects to enhance their impact, originality, and market effectiveness.
... Specifically, we applied three types of priming: context, creativity, and environmental priming. With context priming (similar to [100]), we used paper documents to foster an understanding of the document organization scenario and its requirements. We implemented creativity priming with sci-fi movies to inspire designs based on new form factors [2]. ...
Conference Paper
Full-text available
Augmented Reality (AR) promises to enhance daily office activities involving numerous textual documents, slides, and spreadsheets by expanding workspaces and enabling more direct interaction. However, there is a lack of systematic understanding of how knowledge workers can manage multiple documents and organize, explore, and compare them in AR environments. Therefore, we conducted a user-centered design study (N = 21) using predefined spatial document layouts in AR to elicit interaction techniques, resulting in 790 observation notes. Thematic analysis identified various interaction methods for aggregating, distributing, transforming, inspecting, and navigating document collections. Based on these findings, we propose a design space and distill design implications for AR document arrangement systems, such as enabling body-anchored storage, facilitating layout spreading and compressing, and designing interactions for layout transformation. To demonstrate their usage, we developed a rapid prototyping system and exemplify three envisioned scenarios. With this, we aim to inspire the design of future immersive offices.
... Building on these challenges, the integration of privacy, security, and user data protection in XR collaboration further complicates the design of asynchronous systems. Effective access control is crucial in XR environments to manage permissions and prevent unauthorised access to shared virtual content and physical spaces [19]. Privacy concerns arise from environmental sensing, where unintended capture of users' surroundings can occur without consent. ...
Preprint
Full-text available
Asynchronous communication has become increasingly essential in the context of extended reality (XR), enabling users to interact and share information immersively without the constraints of simultaneous engagement. However, current XR systems often struggle to support effective asynchronous interactions, mainly due to limitations in contextual replay and navigation. This paper aims to address these limitations by introducing a novel system that enhances asynchronous communication in XR through the concept of MemoryPods, which allow users to record, annotate, and replay interactions with spatial and temporal accuracy. MemoryPods also feature AI-driven summarisation to ease cognitive load. A user evaluation conducted in a remote maintenance scenario demonstrated significant improvements in comprehension, highlighting the system's potential to transform collaboration in XR. The findings suggest broad applicability of the proposed system across various domains, including direct messaging, healthcare, education, remote collaboration, and training, offering a promising solution to the complexities of asynchronous communication in immersive environments.
... Refs. 38,39) or the sharing of personalized content across users (Refs. 40,41) might make such pervasive personalization more societally beneficial. ...
Article
Full-text available
We are currently in a period of upheaval, as many new technologies are emerging that open up new possibilities to shape our everyday lives. Particularly, within the field of Personalized Human-Computer Interaction we observe high potential, but also challenges. In this article, we explore how an increasing amount of online services and tools not only further facilitates our lives, but also shapes our lives and how we perceive our environments. For this purpose, we adopt the metaphor of personalized ‘online layers’ and show how these layers are and will be interwoven with the lives that we live in the ‘human layer’ of the real world.
... Erebus [3] proposed an access control framework for third-party AR applications to prevent intentional and accidental data gathering and transmission. Rajaram et al. [4] focused on access control for sharing AR content, identifying other regular users of the application as potential privacy threats. ...
Article
Full-text available
Metaverse technologies are transforming how distant individuals interact in immersive virtual environments. These technologies, in combination with the latest developments in 3D sensing, will create a hybrid metaverse blending virtual and physical spaces. However, high-resolution 3D sensing poses privacy risks from unrestricted spatial data sharing. It may inadvertently expose the human visual appearance and private objects and spaces in the users’ physical environments to the shared metaverse. To mitigate these challenges, this article surveys the visual privacy issues in the era of the metaverse. It then introduces a visual privacy control method enabling privacy protection in the current metaverse and beyond. Our goal is to empower users to control the visual privacy of spatial objects by comprehending their inherent semantics. The design allows users to define their privacy protection levels or, if preferred, delegate this responsibility to an automated control system. We present two use-case scenarios to demonstrate this concept.
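The semantic, per-object control this article describes can be pictured as a policy filter applied to a scene before it is shared. The sketch below is an illustrative assumption, not the authors' implementation: the `SpatialObject`, `PrivacyPolicy`, and share/abstract/remove actions are hypothetical names standing in for whatever the user-defined or automated control system actually provides.

```python
from dataclasses import dataclass, field
from enum import Enum

class Action(Enum):
    SHARE = "share"        # stream the object as-is
    ABSTRACT = "abstract"  # replace with a generic placeholder mesh
    REMOVE = "remove"      # exclude from the shared scene entirely

@dataclass
class SpatialObject:
    label: str    # semantic class from 3D scene understanding
    mesh_id: str

@dataclass
class PrivacyPolicy:
    # User-defined mapping from semantic class to action; anything
    # unlisted falls back to the most conservative default.
    rules: dict = field(default_factory=dict)
    default: Action = Action.REMOVE

    def decide(self, obj: SpatialObject) -> Action:
        return self.rules.get(obj.label, self.default)

def filter_scene(objects, policy):
    """Apply the policy to each sensed object before sharing."""
    shared = []
    for obj in objects:
        action = policy.decide(obj)
        if action is Action.SHARE:
            shared.append(obj)
        elif action is Action.ABSTRACT:
            shared.append(SpatialObject(label=obj.label, mesh_id="placeholder"))
        # Action.REMOVE: the object never enters the shared metaverse
    return shared
```

An automated controller, as the article suggests, could populate `rules` from learned defaults instead of explicit user choices, keeping `REMOVE` as the fail-safe for unrecognized objects.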
Preprint
Extended Reality (XR) experiences involve interactions between users, the real world, and virtual content. A key step to enable these experiences is the XR headset sensing and estimating the user's pose in order to accurately place and render virtual content in the real world. XR headsets use multiple sensors (e.g., cameras, inertial measurement unit) to perform pose estimation and improve its robustness, but this provides an attack surface for adversaries to interfere with the pose estimation process. In this paper, we create and study the effects of acoustic attacks that create false signals in the inertial measurement unit (IMU) on XR headsets, leading to adverse downstream effects on XR applications. We generate resonant acoustic signals on a HoloLens 2 and measure the resulting perturbations in the IMU readings, and also demonstrate both fine-grained and coarse attacks on the popular ORB-SLAM3 and an open-source XR system (ILLIXR). With the knowledge gleaned from attacking these open-source frameworks, we demonstrate four end-to-end proof-of-concept attacks on a HoloLens 2: manipulating user input, clickjacking, zone invasion, and denial of user interaction. Our experiments show that current commercial XR headsets are susceptible to acoustic attacks, raising concerns for their security.
Article
In Mixed Reality (MR), users can collaborate efficiently by creating personalized layouts that incorporate both personal and shared virtual objects. Unlike in the real world, personal objects in MR are only visible to their owner. This makes them susceptible to occlusions from shared objects of other users, who remain unaware of their existence. Thus, achieving unobstructed layouts in collaborative MR settings requires knowledge of where others have placed their personal objects. In this paper, we assessed the effects of three visualizations, and a baseline without any visualization, on occlusions and user perceptions. Our study involved 16 dyads (N=32) who engaged in a series of collaborative sorting tasks. Results indicate that the choice of visualization significantly impacts both occlusion and perception, emphasizing the need for effective visualizations to enhance collaborative MR experiences. We conclude with design recommendations for multi-user MR systems to better accommodate both personal and shared interfaces simultaneously.
Chapter
Augmented Reality (AR) applications are becoming increasingly popular and are being used in various fields, including education, entertainment, and healthcare. However, these applications face numerous security challenges, such as data privacy, authentication, and authorization. In this chapter, we explore the use of Artificial Intelligence and Machine Learning techniques to enhance the security of AR applications. We discuss the different security challenges faced by AR applications and provide an overview of the current state-of-the-art security solutions. We then introduce several novel approaches to secure AR applications using AI/ML techniques, including deep learning and reinforcement learning.
Article
Modern AR applications collect a wide range of data to leverage context-specific functionalities. This includes data that might be private or security-critical (e.g., the camera view of a private home), calling for protective measures, especially in collaborative settings where data is inherently shared. A literature review revealed a lack of development support for privacy and security in collaborative AR. This makes it difficult for developers to find the time and resources to include protection mechanisms, leading to very limited options for end-users to control what data about them is shared. To address this problem, we present TARPS, a development Toolbox for enhancing collaborative AR applications with Privacy and Security protection mechanisms. TARPS is an out-of-the-box solution to add protection features to collaborative AR applications in a configurable manner. In developer interviews, the idea of TARPS was well received, and an end-user study with an application created using TARPS showed that the included protection features were usable and accepted by end-users.
Article
Full-text available
We present a device-centric analysis of security and privacy attacks and defenses on extended reality (XR) devices. We present future research directions and propose design considerations to help ensure the security and privacy of XR devices.
Conference Paper
Full-text available
Gesture elicitation studies represent a popular and resourceful method in HCI to inform the design of intuitive gesture commands, reflective of end-users' behavior, for controlling all kinds of interactive devices, applications, and systems. In the last ten years, an impressive body of work has been published on this topic, disseminating useful design knowledge regarding users' preferences for finger, hand, wrist, arm, head, leg, foot, and whole-body gestures. In this paper, we deliver a systematic literature review of this large body of work by summarizing the characteristics and findings of N=216 gesture elicitation studies subsuming 5,458 participants, 3,625 referents, and 148,340 elicited gestures. We highlight the descriptive, comparative, and generative virtues of our examination to provide practitioners with an effective method to (i) understand how new gesture elicitation studies position in the literature; (ii) compare studies from different authors; and (iii) identify opportunities for new research. We make our large corpus of papers accessible online as a Zotero group library at https://www.zotero.org/groups/2132650/gesture_elicitation_studies.
Conference Paper
Full-text available
Current wearable AR devices create an isolated experience with a limited field of view, vergence-accommodation conflicts, and difficulty communicating the virtual environment to observers. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we introduce Augmented Augmented Reality (AAR) by combining a wearable AR display with a wearable spatial augmented reality projector. To explore this idea, a system is constructed to combine a head-mounted actuated pico projector with a Hololens AR headset. Projector calibration uses a modified structure from motion pipeline to reconstruct the geometric structure of the pan-tilt actuator axes and offsets. A toolkit encapsulates a set of high-level functionality to manage content placement relative to each augmented display and the physical environment. Demonstrations showcase ways to utilize the projected and head-mounted displays together, such as expanding field of view, distributing content across depth surfaces, and enabling bystander collaboration.
Conference Paper
Full-text available
" Emerging technologies such as Augmented Reality (AR), have the potential to radically transform education by making challenging concepts visible and accessible to novices. In this project, we have designed a Hololens-based system in which collaborators are exposed to an unstructured learning activity in which they learned about the invisible physics involved in audio speakers. They learned topics ranging from spatial knowledge, such as shape of magnetic fields, to abstract conceptual knowledge, such as relationships between electricity and magnetism. We compared participants' learning, attitudes and collaboration with a tangible interface through multiple experimental conditions containing varying layers of AR information. We found that educational AR representations were beneficial for learning specific knowledge and increasing participants' self-efficacy (i.e., their ability to learn concepts in physics). However, we also found that participants in conditions that did not contain AR educational content, learned some concepts better than other groups and became more curious about physics. We discuss learning and collaboration differences, as well as benefits and detriments of implementing augmented reality for unstructured learning activities.
Conference Paper
Full-text available
We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6-DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts, comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature's virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature interaction.
Conference Paper
Full-text available
Despite the availability of software to support Affinity Diagramming (AD), practitioners still largely favor physical sticky notes. Physical notes are easy to set up, can be moved around in space, and offer flexibility when clustering unstructured data. However, when working with mixed data sources such as surveys, designers often trade off the physicality of notes for analytical power. We propose Affinity Lens, a mobile-based augmented reality (AR) application for Data-Assisted Affinity Diagramming (DAAD). Our application provides just-in-time quantitative insights overlaid on physical notes. Affinity Lens uses several different types of AR overlays (called lenses) to help users find specific notes, cluster information, and summarize insights from clusters. Through a formative study of AD users, we developed design principles for data-assisted AD and an initial collection of lenses. Based on our prototype, we find that Affinity Lens supports easy switching between qualitative and quantitative ‘views’ of data, without surrendering the lightweight benefits of existing AD practice.
Article
Full-text available
Novel collaborative technologies afford new modes of behavior, which are often not regulated by established social norms. In particular, shared augmented reality (AR) - where multiple users can create, attach, and interact with the same virtual elements embedded into the physical environment - has the potential to interrupt current social norms of behavior. The objective of our study is to shed light on the ways in which shared AR challenges existing behavioral expectations. Using a simulated lab experimental design, we performed a study of users' interactions in a shared AR setting. Content analysis of participants' interviews reveals users' concerns over the preservation of their self- and social identity, as well as concerns related to personal space and the sense of psychological ownership over one's body and belongings. Our findings also point to the need for regulation of shared AR spaces and design of the technology's control mechanisms.
Conference Paper
Full-text available
Virtual Reality enables users to explore content whose physics are only limited by our creativity. Such limitless environments provide us with many opportunities to explore innovative ways to support productivity and collaboration. We present Spacetime, a scene editing tool built from the ground up to explore novel interaction techniques that empower single-user interaction while maintaining fluid multi-user collaboration in immersive virtual environments. We achieve this by introducing three novel interaction concepts: the Container, a new interaction primitive that supports a rich set of object manipulation and environmental navigation techniques; Parallel Objects, which enables parallel manipulation of objects to resolve interaction conflicts and support design workflows; and Avatar Objects, which supports interaction among multiple users while maintaining individual users' agency. Evaluated by professional Virtual Reality designers, Spacetime supports powerful individual and fluid collaborative workflows.
Article
Full-text available
Are the many formal definitions and frameworks of privacy consistent with a layperson’s understanding of privacy? We explored this question and identified mental models and metaphors of privacy, conceptual tools that can be used to improve privacy tools, communication, and design for everyday users. Our investigation focused on a qualitative analysis of 366 drawings of privacy from laypeople, privacy experts, children, and adults. Illustrators all responded to the prompt “What does privacy mean to you?” We coded each image for content, identifying themes from established privacy frameworks and defining the visual and conceptual metaphors illustrators used to model privacy. We found that many non-expert drawings illustrated a strong divide between public and private physical spaces, while experts were more likely to draw nuanced data privacy spaces. Young children’s drawings focused on bedrooms, bathrooms, or cheating on schoolwork, and seldom addressed data privacy. The metaphors, themes, and symbols identified by these findings can be used for improving privacy communication, education, and design by inspiring and informing visual and conceptual strategies for reaching laypeople.
Conference Paper
Full-text available
Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.'s $1 for rapid user interface prototypes. However, most existing recognizers are limited to a particular input modality and/or pre-trained set of gestures, and cannot be easily combined with other recognizers. In particular, creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skill. Inspired by $1's easy, cheap, and flexible design, we present the GestureWiz prototyping environment that provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner. We present a series of experiments with designers and crowds to show that GestureWiz can perform with reasonable accuracy and latency. We demonstrate advantages of GestureWiz when recreating gesture-based interfaces from the literature and conducting a study with 12 interaction designers that prototyped a multimodal interface with support for a wide range of novel gestures in about 45 minutes.
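The $1 recognizer referenced in this abstract follows a well-documented pipeline: resample the stroke to a fixed number of points, rotate so the first point's angle from the centroid is zero, scale to a reference square, translate to the origin, then score templates by average point-to-point distance. A minimal Python sketch of that pipeline (a simplified illustration, independent of GestureWiz, which replaces the automated matcher with human recognizers):

```python
import math

N = 32  # number of resampled points per gesture path

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    # Resample a stroke to n roughly equidistant points.
    interval = path_length(pts) / (n - 1)
    D, out, pts = 0.0, [pts[0]], list(pts)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if D + d >= interval:
            t = (interval - D) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the start of the next segment
            D = 0.0
        else:
            D += d
        i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def rotate_to_zero(pts):
    # Rotate so the angle from centroid to first point is zero.
    c = centroid(pts)
    theta = math.atan2(pts[0][1] - c[1], pts[0][0] - c[0])
    cos, sin = math.cos(-theta), math.sin(-theta)
    return [((p[0] - c[0]) * cos - (p[1] - c[1]) * sin + c[0],
             (p[0] - c[0]) * sin + (p[1] - c[1]) * cos + c[1]) for p in pts]

def scale_and_translate(pts, size=250.0):
    # Scale to a reference square, then center on the origin.
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    pts = [(p[0] * size / w, p[1] * size / h) for p in pts]
    c = centroid(pts)
    return [(p[0] - c[0], p[1] - c[1]) for p in pts]

def normalize(pts):
    return scale_and_translate(rotate_to_zero(resample(pts)))

def recognize(candidate, templates):
    # templates: dict mapping name -> already-normalized point list
    cand = normalize(candidate)
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        d = sum(math.dist(a, b) for a, b in zip(cand, tmpl)) / N
        if d < best_d:
            best, best_d = name, d
    return best, best_d
```

The full $1 additionally refines the rotation with a golden-section search; this sketch keeps only the rotate-to-zero step for brevity.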
Article
Full-text available
Mixed reality (MR) technology is now gaining ground due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, only a few efforts address the privacy and security implications of this technology. This survey paper aims to bring these risks to light and to review the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as a number of works from the larger area of mobile devices, wearables, and the Internet of Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions for MR security and privacy are also discussed.
Conference Paper
Full-text available
With the rapid deployment of Internet of Things (IoT) technologies and the variety of ways in which IoT-connected sensors collect and use personal data, there is a need for transparency, control, and new tools to ensure that individual privacy requirements are met. To develop these tools, it is important to better understand how people feel about the privacy implications of IoT and the situations in which they prefer to be notified about data collection. We report on a 1,007-participant vignette study focusing on privacy expectations and preferences as they pertain to a set of 380 IoT data collection and use scenarios. Participants were presented with 14 scenarios that varied across eight categorical factors, including the type of data collected (e.g. location, biometrics, temperature), how the data is used (e.g., whether it is shared, and for what purpose), and other attributes such as the data retention period. Our findings show that privacy preferences are diverse and context dependent; participants were more comfortable with data being collected in public settings rather than in private places, and are more likely to consent to data being collected for uses they find beneficial. They are less comfortable with the collection of biometrics (e.g. fingerprints) than environmental data (e.g. room temperature, physical presence). We also find that participants are more likely to want to be notified about data practices that they are uncomfortable with. Finally, our study suggests that after observing individual decisions in just three data-collection scenarios, it is possible to predict their preferences for the remaining scenarios, with our model achieving an average accuracy of up to 86%.
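The abstract's result that three observed decisions suffice to predict a participant's remaining preferences is a modeling claim whose details are not given here. As a toy illustration only, a similarity-weighted vote over shared categorical factors captures the flavor of such a predictor; the function names and the ±1 voting scheme are assumptions, not the paper's actual model:

```python
def factor_overlap(a, b):
    # Similarity = fraction of shared categorical factors with equal values.
    shared = [k for k in a if k in b]
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

def predict_comfort(observed, scenario):
    """Predict whether a user is comfortable with a new data-collection
    scenario, given a few observed (scenario_factors, comfortable) decisions.

    Each observed decision votes +1 (comfortable) or -1 (uncomfortable),
    weighted by how similar its factors are to the new scenario.
    """
    score = sum((1 if comfortable else -1) * factor_overlap(s, scenario)
                for s, comfortable in observed)
    return score > 0
```

A real model would also learn per-factor weights (e.g., biometrics mattering more than room temperature, as the study found), which this sketch deliberately omits.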
Conference Paper
Full-text available
Virtual reality (VR) head-mounted displays (HMDs) allow for a highly immersive experience and are currently becoming part of living room entertainment. Current VR systems focus mainly on increasing the immersion and enjoyment of the user wearing the HMD (HMD user), resulting in all bystanders (non-HMD users) being excluded from the experience. We propose ShareVR, a proof-of-concept prototype using floor projection and mobile displays in combination with positional tracking to visualize the virtual world for the non-HMD user, enabling them to interact with the HMD user and become part of the VR experience. We designed and implemented ShareVR based on the insights of an initial online survey (n=48) with early adopters of VR HMDs. We ran a user study (n=16) comparing ShareVR to a baseline condition, showing how interaction using ShareVR led to an increase in enjoyment, presence, and social interaction. As a last step, we implemented several experiences for ShareVR, exploring its design space and giving insights for designers of co-located asymmetric VR experiences.
Conference Paper
Full-text available
Research has brought forth a variety of authentication systems to mitigate observation attacks. However, there is little work on shoulder surfing as it occurs in the real world. We present the results of a user survey (N=174) in which we investigate actual stories about shoulder surfing on mobile devices from both users and observers. Our analysis indicates that shoulder surfing mainly occurs in an opportunistic, non-malicious way. It usually does not have serious consequences, but it evokes negative feelings for both parties, resulting in a variety of coping strategies. Observed data was personal in most cases and ranged from information about interests and hobbies to login data and intimate details about third persons and relationships. Our work thus contributes evidence of shoulder surfing in the real world and informs implications for the design of privacy protection mechanisms.
Conference Paper
Full-text available
We propose combining shape-changing interfaces and spatial augmented reality to extend the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects' optical appearance in high detail. This way, devices can render currently challenging features such as high-frequency texture or fast motion. We frame this combination in the context of computer graphics, with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device, enriched with spatial augmented reality through projection mapping, to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change.
Conference Paper
Full-text available
Data glasses carry promising potential for hands-free interaction, but they also raise various concerns among potential users. To gain insights into the nature of these concerns, we investigate how potential usage scenarios are perceived by device users and their peers. We present results of a two-step approach: a focus group discussion with 7 participants, and a user study with 38 participants. In particular, we look into differences between the usage of data glasses and more established devices such as smartphones. We provide quantitative measures for scenario-related social acceptability and point out factors that can influence user attitudes. Based on our quantitative and qualitative results, we derive design implications that may support the development of head-worn devices and applications with improved social acceptability.
Conference Paper
Full-text available
Recently there has been an increase in research toward using hand gestures for interaction in the field of Augmented Reality (AR). These works have primarily focused on researcher-designed gestures, while little is known about user preference and behavior for gestures in AR. In this paper, we present our guessability study for hand gestures in AR, in which 800 gestures were elicited for 40 selected tasks from 20 participants. Using the agreement found among gestures, a user-defined gesture set was created to guide designers toward consistent, user-centered gestures in AR. Wobbrock's surface taxonomy has been extended to cover dimensionalities in AR, and with it, characteristics of the collected gestures have been derived. Common motifs that arose from the empirical findings were applied to obtain a better understanding of users' thinking and behavior. This work aims to lead to consistent, user-centered gesture design in AR.
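The agreement measure referenced here is typically the score from Wobbrock et al.'s guessability methodology: for each task (referent), participants' proposals are grouped by identical gestures, and each group contributes the square of its share of all proposals. A brief sketch, with hypothetical proposal data:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent: sum over groups of identical
    gesture proposals of (group size / total proposals) squared,
    following Wobbrock et al.'s guessability methodology. Ranges from
    1/len(proposals) (all distinct) to 1.0 (unanimous)."""
    total = len(proposals)
    counts = Counter(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# Hypothetical proposals from 20 participants for one AR task:
proposals = ["pinch"] * 12 + ["tap"] * 5 + ["swipe"] * 3
print(round(agreement_score(proposals), 3))  # → 0.445
```

Tasks with high agreement scores are the ones whose most frequent gesture is promoted into the user-defined gesture set.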
Conference Paper
Full-text available
Privacy mechanisms are important in mixed-presence (collocated and remote) collaborative systems. These systems try to achieve a sense of co-presence in order to promote fluid collaboration, yet it can be unclear how actions made in one location are manifested in the other. This ambiguity makes it difficult to share sensitive information with confidence, impacting the fluidity of the shared experience. In this paper, we focus on mixed reality approaches (blending physical and virtual spaces) for mixed presence collaboration. We present SecSpace, our software toolkit for usable privacy and security research in mixed reality collaborative environments. SecSpace permits privacy-related actions in either physical or virtual space to generate effects simultaneously in both spaces. These effects will be the same in terms of their impact on privacy but they may be functionally tailored to suit the requirements of each space. We detail the architecture of SecSpace and present three prototypes that illustrate the flexibility and capabilities of our approach.
Conference Paper
Remotely instructing and guiding users in physical tasks has shown promise across a wide variety of domains. While it has been the subject of many research projects, current approaches are often limited in the communication bandwidth (lacking context and spatial information) or interactivity (unidirectional, asynchronous) between the expert and the learner. Existing mixed-reality systems for this purpose impose rigid configurations on the expert and the learner. We explore the design space of bi-directional mixed-reality telepresence systems for teaching physical tasks and present Loki, a novel system that explores the various dimensions of this space. Loki leverages video, audio, and spatial capture along with mixed-reality presentation methods to allow users to explore and annotate the local and remote environments, and to record and review their own performance as well as their peer's. Loki's system design also enables easy transitions between different configurations within the explored design space. We validate its utility through a varied set of scenarios and a qualitative user study.
Conference Paper
Home is a person's castle, a private and protected space. Internet-connected devices such as locks, cameras, and speakers might make a home "smarter" but also raise privacy issues, because these devices may constantly and inconspicuously collect, infer, or even share information about people in the home. To explore user-centered privacy designs for smart homes, we conducted a co-design study in which we worked closely with diverse groups of participants to create new designs. This study helps fill the gap between studies of users' privacy concerns and privacy tools designed solely by experts. Our participants' privacy designs often relied on simple strategies, such as data localization, disconnection from the Internet, and a private mode. From these designs, we identified six key design factors: data transparency and control, security, safety, usability and user experience, system intelligence, and system modality. We discuss how these factors can guide design for smart home privacy.
Conference Paper
End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants, and therefore the representativeness of their results. Furthermore, the quality of the results from such studies generally lacks any formal means of evaluation. In this paper, we address some of the limitations of elicitation studies through the creation of the Crowdlicit system along with the introduction of end-user identification studies, which are the reverse of elicitation studies. Crowdlicit is a new web-based system that enables researchers to conduct online and in-lab elicitation and identification studies. We used Crowdlicit to run a crowd-powered elicitation study based on Morris's "Web on the Wall" study (2012) with 78 participants, arriving at a set of symbols that included six new symbols different from Morris's. We evaluated the effectiveness of 49 symbols (43 from Morris and six from Crowdlicit) by conducting a crowd-powered identification study. We show that the Crowdlicit elicitation study resulted in a set of symbols that was significantly more identifiable than Morris's.
Conference Paper
Remote collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people in different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction for remote collaboration, preserving the benefits of both while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found that participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential as a complement. Participant feedback collected after trying our hybrid system provided directions for improvement.
Conference Paper
3D pointing is an integral part of Virtual Reality interaction. Typical pointing devices rely on 3D trackers and are thus subject to fluctuations in the reported pose, i.e., jitter. In this work, we explored how different levels of rotational jitter affect pointing performance and whether different selection methods can mitigate the effects of jitter. Towards this, we designed a Fitts' Law experiment with three selection methods. In the first method, subjects used a single controller to both point at and select the object. In the second method, subjects used the controller in their dominant hand to point at objects and the trigger button of a second controller, held in their non-dominant hand, to select them. In the third method, subjects used the controller in their dominant hand to point at objects and pressed the space bar on a keyboard to select them. During the pointing task we added five levels of jitter: no jitter; 0.5°, 1°, and 2° uniform noise; and White Gaussian noise with a 1° standard deviation. Results showed that the Gaussian noise and the 2° jitter level significantly reduced participants' throughput. Moreover, subjects made fewer errors when they performed the experiment with two controllers. Our results inform the design of 3D user interfaces, input devices, and interaction techniques.
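The jitter conditions and the throughput measure can be sketched as follows. The noise injection mirrors the stated uniform and Gaussian levels; the throughput formula is the standard Shannon formulation of Fitts' law, since the abstract does not give the study's exact effective-width corrections.

```python
import math
import random

def jittered_direction(yaw, pitch, level_deg, dist="uniform"):
    """Perturb a pointing direction (in degrees) with rotational jitter.
    'uniform' draws offsets in [-level_deg, +level_deg]; 'gaussian' uses
    zero-mean noise with level_deg as the standard deviation, mirroring
    the study's 1-degree-SD White Gaussian condition."""
    if dist == "uniform":
        dy = random.uniform(-level_deg, level_deg)
        dp = random.uniform(-level_deg, level_deg)
    else:
        dy = random.gauss(0.0, level_deg)
        dp = random.gauss(0.0, level_deg)
    return yaw + dy, pitch + dp

def fitts_throughput(distance, width, movement_time):
    """Fitts' law throughput in bits/s, Shannon formulation:
    ID = log2(D/W + 1), throughput = ID / movement time."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time

# Hypothetical trial: 0.30 m target distance, 0.03 m width, 0.8 s:
print(fitts_throughput(0.30, 0.03, 0.8))  # ≈ 4.32 bits/s
```

Comparing mean throughput per jitter level across participants is what supports the reported finding that the Gaussian and 2° conditions degrade performance.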
Conference Paper
A primary goal of research in usable security and privacy is to understand the differences and similarities between users. While past researchers have clustered users into different groups, past categories of users have proven to be poor predictors of end-user behaviors. In this paper, we perform an alternative clustering of users based on their behaviors. Through the analysis of data from surveys and interviews of participants, we identify five user clusters that emerge from end-user behaviors: Fundamentalists, Lazy Experts, Technicians, Amateurs, and the Marginally Concerned. We examine the stability of our clusters through a survey-based study of an alternative sample, showing that the clustering remains consistent. We conduct a small-scale design study to demonstrate the utility of our clusters in design. Finally, we argue that our clusters complement past work in understanding privacy choices, and that our categorization technique can aid in the design of new computer security technologies.
Conference Paper
There is a significant gap in the body of research on cross-device interfaces. Research has largely focused on enabling them technically, but when and how users want to use cross-device interfaces is not well understood. This paper presents an exploratory user study with XDBrowser, a cross-device web browser we are developing to enable non-technical users to adapt existing single-device web interfaces for cross-device use while viewing them in the browser. We demonstrate that an end-user customization tool like XDBrowser is a powerful means to conduct user-driven elicitation studies useful for understanding user preferences and design requirements for cross-device interfaces. Our study with 15 participants elicited 144 desirable multi-device designs for five popular web interfaces when using two mobile devices in parallel. We describe the design space in this context, the usage scenarios targeted by users, the strategies used for designing cross-device interfaces, and seven concrete mobile multi-device design patterns that emerged. We discuss the method, compare the cross-device interfaces from our users and those defined by developers in prior work, and establish new requirements from observed user behavior. In particular, we identify the need to easily switch between different interface distributions depending on the task and to have more fine-grained control over synchronization.
Conference Paper
System design using novel forms of interaction is commonly argued to be best driven by user-driven elicitation studies. This paper describes the challenges faced, and the lessons learned, in replicating Morris's Web on the Wall guessability study, which used Wizard of Oz to elicit multimodal interactions around Kinect. Our replication involved three steps. First, based on Morris's study, we developed a system, Kinect Browser, that supports 10 common browser functions using popular gestures and speech commands. Second, we developed custom experiment software for recording and analysing multimodal interactions using Kinect. Third, we conducted a study based on Morris's design. However, after first using Wizard of Oz, Kinect Browser was used in a second elicitation task, allowing us to analyse and compare the differences between the two methods. Our study demonstrates the effects of using mixed-initiative elicitation, with significant differences from user-driven elicitation without system dialogue. Given the recent proliferation of guessability studies, our work extends the methodology to obtain reproducible and implementable user-defined interaction sets.
Conference Paper
Public information displays are evolving from passive screens into more interactive and smarter ubiquitous computing platforms. In this research we investigate applying gesture interaction and Augmented Reality (AR) technologies to make public information displays more intuitive and easy to use. We focus especially on designing intuitive gesture-based interaction methods to use in combination with an augmented virtual mirror interface. As an initial step, we conducted a user study to identify the gestures that users feel are natural for performing common tasks when interacting with augmented virtual mirror displays. We report initial findings from the study, discuss design guidelines, and suggest future research directions.
Article
Modern applications increasingly rely on continuous monitoring of video, audio, or other sensor data to provide their functionality, particularly on platforms such as the Microsoft Kinect and Google Glass. Continuous sensing by untrusted applications poses significant privacy challenges for both device users and bystanders. Even honest users will struggle to manage application permissions using existing approaches. We propose a general, extensible framework for controlling access to sensor data on multi-application continuous sensing platforms. Our approach, world-driven access control, allows real-world objects to explicitly specify access policies. This relieves the user's permission management burden while mediating access at the granularity of objects rather than full sensor streams. A trusted policy module on the platform senses policies in the world and modifies applications' "views" accordingly. For example, world-driven access control allows the system to automatically stop recording in bathrooms or remove bystanders from video frames, without the user being prompted to specify or activate such policies. To convey and authenticate policies, we introduce passports, a new kind of certificate that includes both a policy and, optionally, the code for recognizing a real-world object. We implement a prototype system and use it to study the feasibility of world-driven access control in practice. Our evaluation suggests that world-driven access control can effectively reduce the user's permission management burden in emerging continuous sensing systems. Our investigation also surfaces key challenges for future access control mechanisms for continuous sensing applications.
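The trusted policy module's mediation can be sketched as a filter over sensor frames. The Frame and Policy types, the location-keyed policy lookup, and all field names below are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A single sensor frame as an application would see it."""
    location: str
    bystanders: list = field(default_factory=list)

@dataclass
class Policy:
    """A world-advertised policy (cf. the paper's 'passports');
    these fields are illustrative assumptions."""
    blocks_recording: bool = False
    redact_bystanders: bool = False

# Policies 'sensed' in the world, keyed by location. In the real system
# these would be recognized from objects/markers, not looked up in a dict.
POLICIES = {
    "bathroom": Policy(blocks_recording=True),
    "cafe": Policy(redact_bystanders=True),
}

def mediated_view(frame):
    """Trusted policy module: modify an application's view of a sensor
    frame according to policies found in the world, so the user never
    has to specify or activate them."""
    policy = POLICIES.get(frame.location, Policy())
    if policy.blocks_recording:
        return None  # drop the frame entirely (e.g., bathrooms)
    if policy.redact_bystanders:
        return Frame(frame.location, bystanders=[])  # strip bystanders
    return frame

print(mediated_view(Frame("bathroom")))           # frame dropped: None
print(mediated_view(Frame("cafe", ["alice"])))    # bystanders removed
```

The key design point is that applications only ever receive the mediated view, so permission decisions ride along with the world rather than with user prompts.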
Conference Paper
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge, and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The units are individually auto-calibrating and self-localizing, and they create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences possible with RoomAlive and discuss the design challenges of adapting any game to any room.
Article
A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users' (and bystanders') expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid later burdensome review of all collected images; 2) a combination of factors including time, location, and the objects and people appearing in the photo determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, despite reporting almost no opposition or concerns expressed by bystanders over the course of the study.
Article
Researchers are making efforts to reduce legacy bias, which is a limitation of current elicitation methods. There are many open challenges in updating elicitation methods to incorporate production, priming, and partner techniques. Gesture elicitation is emerging as a potential approach to address this challenge. Gesture elicitation has been applied to a wide variety of emerging interaction and sensing technologies, including touchscreens, depth cameras, styli, foot-operated UIs, multidisplay environments, mobile phones, multimodal gesture-and-speech interfaces, stroke alphabets, and above-surface interfaces. One advantage of gesture elicitation is that the technique is not limited to current sensing technologies. It enables interaction designers to focus on end users' desires as opposed to settling for what is technically convenient at the moment.
Article
Augmented reality (AR) devices are poised to enter the market. It is unclear how the properties of these devices will affect individuals' privacy. In this study, we investigate the privacy perspectives of individuals when they are bystanders around AR devices. We conducted 12 field sessions in cafés and interviewed 31 bystanders regarding their reactions to a co-located AR device. Participants were predominantly split between having indifferent and negative reactions to the device. Participants who expressed that AR devices change the bystander experience attributed this difference to subtleness, ease of recording, and the technology's lack of prevalence. Additionally, participants surfaced a variety of factors that make recording more or less acceptable, including what they are doing when the recording is being taken. Participants expressed interest in being asked permission before being recorded and in recording-blocking devices. We use the interview results to guide an exploration of design directions for privacy-mediating technologies.