Michael Nebeling’s research while affiliated with University of Michigan and other places


Publications (83)


Rapid Prototyping of Augmented Reality & Virtual Reality Interfaces
  • Conference Paper

April 2019 · 86 Reads · 3 Citations

Michael Nebeling

This course introduces participants to rapid prototyping techniques for augmented reality and virtual reality interfaces. Participants will learn both physical prototyping with paper and Play-Doh and digital prototyping via new visual authoring tools for AR/VR. The course is structured into four sessions. After an introduction to AR/VR prototyping principles and materials, the next two sessions are hands-on, allowing participants to practice new physical and digital prototyping techniques. These techniques use a combination of new paper-based AR/VR design templates and smartphone-based capture and replay tools, adapting Wizard of Oz for AR/VR design. The fourth and final session allows participants to test and critique each other's prototypes while checking them against emerging design principles and guidelines. The instructor has previously taught these techniques to broad student audiences with a wide variety of non-technical backgrounds, including design, architecture, business, medicine, education, and psychology, who shared a common interest in user experience and interaction design. The course is targeted at non-technical audiences, including HCI practitioners, user experience researchers, and interaction design professionals and students. A useful byproduct of the course will be a small portfolio piece: a first AR/VR interface designed iteratively and collaboratively in teams.


360proto: Making Interactive Virtual Reality & Augmented Reality Prototypes from Paper

April 2019 · 370 Reads · 84 Citations

We explore 360 paper prototyping to rapidly create AR/VR prototypes from paper and bring them to life on AR/VR devices. Our approach is based on a set of emerging paper prototyping templates specifically for AR/VR. These templates resemble the key components of many AR/VR interfaces, including 2D representations of immersive environments, AR marker overlays and face masks, VR controller models and menus, and 2D screens and HUDs. To make prototyping with these templates effective, we developed 360proto, a suite of three novel physical-digital prototyping tools: (1) the 360proto Camera for capturing paper mockups of all components simply by taking a photo with a smartphone and seeing 360-degree panoramic previews on the phone or stereoscopic previews in Google Cardboard; (2) the 360proto Studio for organizing and editing captures, for composing AR/VR interfaces by layering the captures, and for making them interactive with Wizard of Oz via live video streaming; (3) the 360proto App for running and testing the interactive prototypes on AR/VR capable mobile devices and headsets. Through five student design jams with a total of 86 participants and our own design space explorations, we demonstrate that our approach with 360proto is useful for creating relatively complex AR/VR applications.
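
A minimal TypeScript sketch may help picture the compositing step: the 360-degree paper capture is treated as an equirectangular image whose visible slice follows the viewer's yaw, with screen-fixed layers such as HUDs drawn on top. The PrototypeLayer shape and drawViewport function are illustrative assumptions, not 360proto's actual code.

```typescript
// Hypothetical sketch of 360proto-style layer compositing (not the
// tool's real API). A 360 environment capture is an equirectangular
// image; overlays (HUDs, menus) are screen-fixed and drawn on top.

interface PrototypeLayer {
  image: HTMLImageElement;   // captured paper mockup
  equirectangular: boolean;  // true for 360 environment layers
}

function drawViewport(
  ctx: CanvasRenderingContext2D,
  layers: PrototypeLayer[],
  yawDegrees: number,        // viewer heading from device orientation
  fovDegrees = 90,
): void {
  const { width, height } = ctx.canvas;
  for (const layer of layers) {
    if (layer.equirectangular) {
      // Map yaw to a horizontal slice of the panorama (wrap-around
      // at the seam is ignored here for brevity).
      const imgW = layer.image.width;
      const cropW = (fovDegrees / 360) * imgW;
      const cropX = ((((yawDegrees % 360) + 360) % 360) / 360) * imgW;
      ctx.drawImage(layer.image, cropX, 0, cropW, layer.image.height,
                    0, 0, width, height);
    } else {
      // Screen-fixed layers are simply stacked in order.
      ctx.drawImage(layer.image, 0, 0, width, height);
    }
  }
}
```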



Figure 2: An AR scene with custom 2D marker in HoloBuilder
Figure 3: ProtoAR's [20] 360-degree capture of Play-Doh model (top); captured quasi-3D object (middle); marker-based AR preview (bottom)
Figure 4: GestureWiz's rapid prototyping interfaces for recording (Requester Interface) and recognition (Wizard of Oz Interface) of arbitrary and potentially multi-modal gesture sets, e.g., single-stroke, multi-stroke, and mid-air 3D (adapted from [30])
The Trouble with Augmented Reality/Virtual Reality Authoring Tools
  • Conference Paper
  • Full-text available

October 2018 · 1,060 Reads · 158 Citations

There are many technical and design challenges in creating new, usable and useful AR/VR applications. In particular, non-technical designers and end-users are facing a lack of tools to quickly and easily prototype and test new AR/VR user experiences. We review and classify existing AR/VR authoring tools and characterize three primary issues with these tools based on our review and a case study. To address the issues, we discuss two new tools we designed with support for rapid prototyping of new AR/VR content and gesture-based interactions geared towards designers without technical knowledge in gesture recognition, 3D modeling, and programming.


Arboretum and Arbility: Improving Web Accessibility Through a Shared Browsing Architecture

October 2018 · 80 Reads · 14 Citations

Steve Oney · Alan Lundgard · Rebecca Krosnick · [...] · Walter S. Lasecki

Many web pages developed today require navigation by visual interaction: seeing, hovering, pointing, clicking, and dragging with the mouse over dynamic page content. These forms of interaction are increasingly popular as developer trends have moved from static, logically structured pages to dynamic, interactive pages. However, they are also often inaccessible to blind web users, who tend to rely on keyboard-based screen readers to navigate the web. Despite existing web accessibility standards, engineering web pages to be equally accessible via both keyboard and visuomotor mouse-based interactions is often not a priority for developers. Improving access to this kind of visual and interactive web content has been a long-standing goal of HCI researchers, but the barriers have proven too varied and unpredictable to be overcome by the proposed solutions: promoting guidelines and best practices, automatically generating accessible versions of pre-existing web pages, or developing human-assisted solutions, such as screen and cursor sharing, which tend to diminish an end user's agency. In this paper we present a real-time, collaborative approach to helping blind web users overcome inaccessible parts of existing web pages. We introduce Arboretum, a new architecture that enables any web user to seamlessly hand off controlled parts of their browsing session to remote users, while maintaining control over the interface via a "propose and accept/reject" mechanism. We illustrate the benefit of Arboretum by using it to implement Arbility, a browser that allows blind users to hand off targeted visual interaction tasks to remote crowd workers. We evaluate the entire system in a study with 9 blind web users, showing that Arbility allows them to interact with web content that was previously difficult to access via a screen reader alone.
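
The "propose and accept/reject" mechanism is, at its core, a small message protocol between the end user's browser and a remote helper. A plausible TypeScript sketch of that protocol follows; the Proposal and SessionController names are hypothetical, not Arboretum's published API.

```typescript
// Hypothetical sketch of a "propose and accept/reject" hand-off
// protocol in the spirit of Arboretum (not its actual architecture).

type ProposedAction =
  | { kind: "click"; selector: string }
  | { kind: "type"; selector: string; text: string };

interface Proposal {
  id: string;
  action: ProposedAction;
  description: string;  // announced via the end user's screen reader
}

interface Verdict {
  id: string;
  accepted: boolean;
}

// The end user keeps agency: remote actions queue as proposals and
// are only applied to the page after an explicit accept.
class SessionController {
  private pending = new Map<string, Proposal>();

  propose(p: Proposal): void {
    this.pending.set(p.id, p);
  }

  decide(v: Verdict): ProposedAction | null {
    const p = this.pending.get(v.id);
    this.pending.delete(v.id);
    return p && v.accepted ? p.action : null;  // null = rejected/unknown
  }
}
```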


Fig. 2. AR Furniture Placement: All family members can use their own mobile devices to place virtual models of new furniture in the room. 
Fig. 4. Example implementation of XD-AR, integrating with HoloLens, Tango, and RoomAlive. Extensions to support Windows Mixed Reality headsets and ARCore are possible directions of future work. 
Fig. 5. The calibration process of the shared world anchor on HoloLens (left) and Tango (right). The direction of the z-axis is indicated by the blue features. 
Fig. 6. The two proof of concept applications in use: Furniture Placement (left) and Shooter Game (right). 
XD-AR: Challenges and Opportunities in Cross-Device Augmented Reality Application Development

June 2018 · 2,037 Reads · 41 Citations

Proceedings of the ACM on Human-Computer Interaction

Augmented Reality (AR) developers face a proliferation of new platforms, devices, and frameworks. This often leads to applications being limited to a single platform and makes it hard to support collaborative AR scenarios involving multiple different devices. This paper presents XD-AR, a cross-device AR application development framework designed to unify input and output across hand-held, head-worn, and projective AR displays. XD-AR's design was informed by challenging scenarios for AR applications, a technical review of existing AR platforms, and a survey of 30 AR designers, developers, and users. Based on the results, we developed a taxonomy of AR system components and identified key challenges and opportunities in making them work together. We discuss how our taxonomy can guide the design of future AR platforms and applications and how cross-device interaction challenges could be addressed. We illustrate this by using XD-AR to implement two challenging AR applications from the literature in a device-agnostic way.
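
A central pattern in cross-device AR frameworks is normalizing device-specific input (touch, air-tap, projected touch) into one event type so application logic stays device-agnostic. The TypeScript sketch below shows that pattern under assumed names (SelectEvent, InputHub); it illustrates the general idea, not XD-AR's actual interfaces.

```typescript
// Hypothetical input-unification sketch in the spirit of XD-AR.
// Device adapters translate native input into a shared world-space ray.

type Vec3 = [number, number, number];

interface SelectEvent {
  worldRay: { origin: Vec3; direction: Vec3 };
  device: "handheld" | "headworn" | "projective";
}

type SelectHandler = (e: SelectEvent) => void;

class InputHub {
  private handlers: SelectHandler[] = [];
  onSelect(h: SelectHandler): void { this.handlers.push(h); }
  emit(e: SelectEvent): void { this.handlers.forEach(h => h(e)); }
}

// Application code subscribes once and never branches on device type;
// a HoloLens-style adapter would derive the ray from head gaze plus
// air-tap, while a phone adapter would unproject the touch point.
const hub = new InputHub();
hub.onSelect(e => console.log(`select via ${e.device}`, e.worldRay));
hub.emit({
  device: "handheld",
  worldRay: { origin: [0, 1.6, 0], direction: [0, 0, -1] },
});
```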


Fig. 1. The collaboration scenarios targeted by 360Anywhere lie in the "same time, different place" quadrant of the Time/Space Matrix, i.e., synchronous, remote collaboration. 
Fig. 2. (1) 360Anywhere's Configuration with activated framework components; (2) the Collaborator UI as seen by a remote participant; (3) the Collaborator UI with Gaze Awareness as seen by a local participant. 
Fig. 3. The component-based architecture of our framework. It takes a 360-degree stream and a user-provided configuration as input (left) and outputs a corresponding remote collaboration system that can be used with a variety of devices (right). The two components highlighted in gray are always contained in the system; the remaining components can be activated/deactivated by the user to tailor the result to their needs. Through the Collaborator UI, the user can directly interact with the components marked with an asterisk. 
Fig. 5. The design jam set-up: (1) an Acer Predator Notebook connected to (2) a RICOH Theta S 360-degree camera for the local participants; (3) a projector connected to a Microsoft SurfaceBook; (4) icons provided for scenario #1; (5) another Acer Predator Notebook for one remote collaborator; (6) an ASUS ZenFone AR for the other remote collaborator; (7) a 4K screen acting as the second projection in scenario #3; and (8) hieroglyphic symbols for scenario #2. 
Fig. 6. Example 360-degree snapshot persisted by the Session component (T1S1). The inlay shows how T1 approached explaining the hieroglyphic symbols in scenario #2. 
360Anywhere: Mobile Ad-hoc Collaboration in Any Environment using 360 Video and Augmented Reality

June 2018 · 423 Reads · 35 Citations

Proceedings of the ACM on Human-Computer Interaction

360-degree video is increasingly used to create immersive user experiences; however, it is typically limited to a single user and not interactive. Recent studies have explored the potential of 360 video to support multi-user collaboration in remote settings. These studies identified several challenges with respect to 360 live streams, such as the lack of gaze awareness, out-of-sync views, and missed gestures. To address these challenges, we created 360Anywhere, a framework for 360 video-based multi-user collaboration that, in addition to allowing collaborators to view and annotate a 360 live stream, also supports projection of annotations in the 360 stream back into the real-world environment in real-time. This enables a range of collaborative augmented reality applications not supported with existing tools. We present the 360Anywhere framework and tools that allow users to generate applications tailored to specific collaboration and augmentation needs with support for remote collaboration. In a series of exploratory design sessions with users, we assess 360Anywhere's power and flexibility for three mobile ad-hoc scenarios. Using 360Anywhere, participants were able to set up and use fairly complex remote collaboration systems involving projective augmented reality in less than 10 minutes.
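
The configuration step described above can be pictured as a config object that toggles optional components while the 360 stream and Collaborator UI stay always on (the gray components in Fig. 3). The component names below follow the paper's figures, but the code (Anywhere360Config, assemble) is an assumed illustration, not the framework's implementation.

```typescript
// Hypothetical sketch of 360Anywhere's config-driven assembly.

interface Anywhere360Config {
  annotations: boolean;    // draw on the 360 live stream
  gazeAwareness: boolean;  // show where collaborators are looking
  projection: boolean;     // project annotations back into the room
  session: boolean;        // persist 360 snapshots
}

interface Component { name: string; start(): void }

function assemble(config: Anywhere360Config): Component[] {
  const component = (name: string): Component =>
    ({ name, start: () => console.log(`${name} active`) });

  // The 360 stream and Collaborator UI are always present; everything
  // else is activated by the user-provided configuration.
  const active = [component("stream"), component("collaborator-ui")];
  if (config.annotations) active.push(component("annotations"));
  if (config.gazeAwareness) active.push(component("gaze-awareness"));
  if (config.projection) active.push(component("projection"));
  if (config.session) active.push(component("session"));
  return active;
}

assemble({ annotations: true, gazeAwareness: true, projection: false, session: true })
  .forEach(c => c.start());
```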


Figure 6. Example integration of the GestureWiz library and mapping of gestures to corresponding functions in our YouTube app. 
GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes

April 2018 · 530 Reads · 36 Citations

Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.'s $1 for rapid user interface prototypes. However, most existing recognizers are limited to a particular input modality and/or pre-trained set of gestures, and cannot be easily combined with other recognizers. In particular, creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skill. Inspired by $1's easy, cheap, and flexible design, we present the GestureWiz prototyping environment that provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner. We present a series of experiments with designers and crowds to show that GestureWiz can perform with reasonable accuracy and latency. We demonstrate advantages of GestureWiz when recreating gesture-based interfaces from the literature and in a study with 12 interaction designers who prototyped a multimodal interface with support for a wide range of novel gestures in about 45 minutes.
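
The human-powered recognition loop can be sketched compactly: a recorded gesture is broadcast to several human wizards and the fastest answer wins, which is one way a Wizard of Oz recognizer can reach interactive latencies. The GestureQuery and Wizard types below are hypothetical stand-ins, not GestureWiz's real interfaces.

```typescript
// Hypothetical sketch of GestureWiz-style human-powered recognition.

interface GestureQuery {
  clipUrl: string;       // recorded gesture clip shown to wizards
  candidates: string[];  // the currently defined gesture set
}

type Wizard = (q: GestureQuery) => Promise<string>;

// Broadcasting to redundant wizards and racing their answers trades
// crowd cost for latency.
async function recognize(q: GestureQuery, wizards: Wizard[]): Promise<string> {
  return Promise.race(wizards.map(w => w(q)));
}

// Stub wizard that answers after a random delay, standing in for a
// crowd worker behind a labeling UI.
const stubWizard: Wizard = q =>
  new Promise(resolve =>
    setTimeout(() => resolve(q.candidates[0]), 200 + Math.random() * 400));

recognize(
  { clipUrl: "gesture-042.webm", candidates: ["swipe-left", "circle", "pinch"] },
  [stubWizard, stubWizard, stubWizard],
).then(label => console.log("recognized:", label));
```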


User-Driven Design Principles for Gesture Representations

April 2018 · 101 Reads · 27 Citations

Many recent studies have explored user-defined interactions for touch and gesture-based systems through end-user elicitation. While these studies have facilitated the user-end of the human-computer dialogue, the subsequent designs of gesture representations to communicate gestures to the user vary in style and consistency. Our study explores how users interpret, enact, and refine gesture representations, adapting techniques from recent elicitation studies. To inform our study design, we analyzed gesture representations from 30 elicitation papers and developed a taxonomy of design elements. We then conducted a partnered elicitation study with 30 participants producing 657 gesture representations accompanied by think-aloud data. We discuss design patterns and themes that emerged from our analysis, and supplement these findings with an in-depth look at users' mental models when perceiving and enacting gesture representations. Finally, based on the results, we provide recommendations for practitioners in need of "visual language" guidelines to communicate possible user actions.


ProtoAR: Rapid Physical-Digital Prototyping of Mobile Augmented Reality Applications

April 2018 · 312 Reads · 93 Citations

The latest generations of smartphones with built-in AR capabilities enable a new class of mobile apps that merge digital and real-world content depending on a user's task, context, and preference. But even experienced mobile app designers face significant challenges: creating 2D/3D AR content remains difficult and time-consuming, and current mobile prototyping tools do not support AR views. There are separate tools for this; however, they require significant technical skill. This paper presents ProtoAR, which supplements rapid physical prototyping using paper and Play-Doh with new mobile cross-device multi-layer authoring and interactive capture tools to generate mobile screens and AR overlays from paper sketches, and quasi-3D content from 360-degree captures of clay models. We describe how ProtoAR evolved over four design jams with students to enable interactive prototypes of mobile AR apps in less than 90 minutes, and discuss the advantages and insights ProtoAR can give designers.
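
The quasi-3D effect from 360-degree clay captures reduces to picking, each frame, the capture whose angle best matches the viewer's current angle around the AR marker. A minimal TypeScript sketch of that mapping, assuming evenly spaced captures (the function name is illustrative, not ProtoAR's code):

```typescript
// Hypothetical frame selection for a ProtoAR-style quasi-3D object:
// N photos of a Play-Doh model taken around a turntable act as a
// sprite ring indexed by the viewer's angle around the marker.

function frameForAngle(viewAngleDegrees: number, frameCount: number): number {
  // Normalize to [0, 360) and snap to the nearest captured angle.
  const normalized = ((viewAngleDegrees % 360) + 360) % 360;
  return Math.round(normalized / (360 / frameCount)) % frameCount;
}

// With 36 captures (one per 10 degrees), a viewer at 95 degrees around
// the marker sees frame 10, which reads as a 3D object once the device
// moves continuously.
console.log(frameForAngle(95, 36)); // -> 10
```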


Citations (70)


... Studies have demonstrated that eye-tracking can be utilized to estimate factors such as confidence [5], personality [3], attention [41], and cognitive load [46]. Additionally, eye-tracking is employed in diverse interaction tasks like gaze-based typing [10,26], menu navigation [23], and object selection [13]. The choice of eye-tracking device varies depending on the data type and task. ...

Reference:

SensPS: Sensing Personal Space Comfortable Distance between Human-Human Using Multimodal Sensors
SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR
  • Citing Conference Paper
  • October 2024

... Finally, the accessibility of data stories specifically concerns immersive systems, for which there are still no established assistive technologies available (e.g., screen readers). A recent trend encourages accessibility and inclusiveness in mixed reality, as shown by workshops at ACM CHI and IEEE ISMAR (e.g., [53]), although so far none consider data storytelling specifically. We see an opportunity related to the accessibility of immersive data stories in their potential for multi-modal input and multi-sensory output (DS: Interaction (modality)). ...

Designing Inclusive Future Augmented Realities
  • Citing Conference Paper
  • May 2024

... Those patterns of DD, previously known as dark patterns [7], receive increasing attention from researchers and legislators, and a growing number of regulations have been implemented [20] to protect users against potentially harmful consequences caused by digital platforms and services that incorporate DD into their products. While the latest scientific work increasingly broadens the perspective on DD to also cover emerging technologies like augmented and virtual reality [22,28], robots [30], or socially-acting computers that base their behavior on LLMs [2], such practices are already widespread and well adopted in more established technologies and use cases, e.g., e-commerce platforms and webshops [34]. Therefore, taking regulatory action is inevitable, as research indicates that users may fall victim to such practices despite being aware of the manipulation [5]. ...

What Makes XR Dark? Examining Emerging Dark Patterns in Augmented and Virtual Reality through Expert Co-Design
  • Citing Article
  • April 2024

ACM Transactions on Computer-Human Interaction

... Future authoring tools may embrace a paradigm shift to enhance consideration for viewer autonomy. Some systems [47,55] have demonstrated such possibilities. For example, REFRAME [55] allows creators to anticipate and address potential threats in the early design stage from a user's perspective by personifying various threats as characters in storyboards. ...

Reframe: An Augmented Reality Storyboarding Tool for Character-Driven Analysis of Security & Privacy Concerns
  • Citing Conference Paper
  • October 2023

... Specifically, we applied three types of priming: context, creativity, and environmental priming. With context priming (similar to [100]), we used paper documents to foster an understanding of the document organization scenario and its requirements. We implemented creativity priming with sci-fi movies to inspire designs based on new form factors [2]. ...

Eliciting Security & Privacy-Informed Sharing Techniques for Multi-User Augmented Reality
  • Citing Conference Paper
  • April 2023

... The color cue, influenced by luminance contrast, enhances depth perception when combined with other cues, where proximity-luminance covariance applies to color as a pictorial depth cue [31]. Research shows that color augmentation improves depth perception in VR, especially against darker backgrounds [2,50]. Partial occlusion combined with the color red produces stronger depth perception and faster judgments, making variations in the red spectrum effective depth cues [31]. ...

Color-to-Depth Mappings as Depth Cues in Virtual Reality
  • Citing Conference Paper
  • October 2022

... This way, the environments themselves do not need to be aligned; rather, a common anchor is established within each collaborator's environment. Herskovitz et al. provide a toolkit capable of displaying collaborators through a portal or a world-in-miniature display, or of anchoring them to a common element of both rooms, such as a chair or table [9]. The idea of anchoring has also been explored in other works [6,7,10]. ...

XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
  • Citing Article
  • November 2022

Proceedings of the ACM on Human-Computer Interaction

... For instance, dynamic elements such as lighting conditions, moving objects (e.g., vehicles, temporary structures), and pedestrian activity are typically absent from pre-captured 3D models. This absence can lead to inconsistencies between virtual elements and the real environment, ultimately degrading the user experience [54]. Consequently, developers frequently resort to repeated on-site visits, which can be costly, time-consuming, and logistically challenging. ...

XR tools and where they are taking us: characterizing the evolving research on augmented, virtual, and mixed reality prototyping and development tools
  • Citing Article
  • September 2022

XRDS Crossroads The ACM Magazine for Students

... Although many prototyping methods have been applied to support rapid AR prototyping [6,25], the support for creating interactive behavior in AR is limited in the context of goal-driven prototyping of AR use cases from end users' perspective [26]. More specifically, the techniques utilized by prior research are commonly found to be application-oriented and focus on the interactions that the target users are supposed to perform using the application. ...

Rapid prototyping for XR: SIGGRAPH 2022 course
  • Citing Conference Paper
  • August 2022

... 2016] emphasizes the importance of leveraging AR to make complex concepts more accessible and engaging, particularly in topics like human health and biology. In [Cárdenas Gasca et al., 2022], the design of AR exhibitions for sensitive narratives is explored, emphasizing the importance of storytelling-like structures and user interaction in creating immersive experiences. Their work highlights the potential of AR to convey complex historical and cultural narratives in a way that is both engaging and educational, making it a valuable tool for museum curators and designers. ...

AR Exhibitions for Sensitive Narratives: Designing an Immersive Exhibition for the Museum of Memory in Colombia
  • Citing Conference Paper
  • June 2022