Yi Fei Cheng’s research while affiliated with Carnegie Mellon University and other places


Publications (14)


Fig. 2: Spacetop EA device (top left) and interface (bottom left), and a user wearing the device (right). Note that for users the background appeared transparent (i.e., optical see-through), not black as depicted.
Fig. 3: Themes. We report on how considerations, such as the participants' tasks, influenced their workspace arrangement behaviors, such as the number of windows they opened. In addition, we highlight alternative workflows that involve the use of AR in tandem with physical displays and tasks. Finally, we summarize participants' perceptions of the value and challenges of using AR.
Augmented Reality In-the-Wild: Usage Patterns and Experiences of Working with AR Laptops in Real-World Settings
  • Preprint

February 2025 · 40 Reads

Yi Fei Cheng · Ari Carden · [...]
Augmented Reality (AR) is increasingly positioned as a tool for knowledge work, providing beneficial affordances such as a virtually limitless display space that integrates digital information with the user's physical surroundings. However, for AR to supplant traditional screen-based devices in knowledge work, it must support prolonged usage across diverse contexts. Until now, few studies have explored the effects, opportunities, and challenges of working in AR outside a controlled laboratory setting and for an extended duration. This gap in research limits our understanding of how users may adapt its affordances to their daily workflows and what barriers hinder its adoption. In this paper, we present findings from a longitudinal diary study examining how participants incorporated an AR laptop -- Sightful's Spacetop EA -- into their daily work routines. 14 participants used the device for 40-minute daily sessions over two weeks, collectively completing 103 hours of AR-based work. Through survey responses, workspace photographs, and post-study interviews, we analyzed usage patterns, workspace configurations, and evolving user perceptions. Our findings reveal key factors influencing participants' usage of AR, including task demands, environmental constraints, social dynamics, and ergonomic considerations. We highlight how participants leveraged and configured AR's virtual display space, along with emergent hybrid workflows that involved physical screens and tasks. Based on our results, we discuss both overlaps with current literature and new considerations and challenges for the future design of AR systems for pervasive and productive use.







Controllers or Bare Hands? A Controlled Evaluation of Input Techniques on Interaction Performance and Exertion in Virtual Reality

November 2023 · 138 Reads · 24 Citations

IEEE Transactions on Visualization and Computer Graphics

Virtual Reality (VR) systems have traditionally required users to operate the user interface with controllers in mid-air. More recent VR systems, however, integrate cameras to track the headset's position inside the environment as well as the user's hands when possible. This allows users to directly interact with virtual content in mid-air just by reaching out, thus discarding the need for hand-held physical controllers. However, it is unclear which of these two modalities—controller-based or free-hand interaction—is more suitable for efficient input, accurate interaction, and long-term use under reliable tracking conditions. While interacting with hand-held controllers introduces weight, it also requires less finger movement to invoke actions (e.g., pressing a button) and allows users to hold on to a physical object during virtual interaction. In this paper, we investigate the effect of VR input modality (controller vs. free-hand interaction) on physical exertion, agency, task performance, and motor behavior across two mid-air interaction techniques (touch, raycast) and tasks (selection, trajectory-tracing). Participants reported less physical exertion, felt more in control, and were faster and more accurate when using VR controllers compared to free-hand interaction in the raycast setting. Regarding personal preference, participants chose VR controllers for raycast but free-hand interaction for mid-air touch. Our correlation analysis revealed that participants' physical exertion increased with selection speed, quantity of arm motion, variation in motion speed, and bad postures, following ergonomics metrics such as consumed endurance and rapid upper limb assessment. We also found a negative correlation between physical exertion and the participant's sense of agency, and between physical exertion and task accuracy.





Citations (8)


... Spatial audio is central to audio augmentation, such that virtual sounds are perceived as emanating from specific locations in 3D space [56]. Humans rely on multiple sensory modalities when they engage with their environment [35,46,67], and the auditory sense remains highly significant for localization even when visual cues are limited, sometimes replacing visual information altogether (e.g., "watching" television from another room) [14,29]. Research indicates that the use of spatial audio encourages users to adopt a more active role in spatial navigation, leading to more accurate cognitive maps [13], while simultaneously reinforcing the sense of presence in XR environments [36]. ...

Reference:

AudioMiXR: Spatial Audio Object Manipulation with 6DoF for Sound Design in Augmented Reality
New Ears: An Exploratory Study of Audio Interaction Techniques for Performing Search in a Virtual Reality Environment
  • Citing Conference Paper
  • October 2024

... Since file management mostly consists of sub-tasks such as dragging files to other folders or applying specific operations (e.g., deleting, duplicating, copying, compressing) to multiple files, a multi-selection function would be more useful. Furthermore, providing an efficient method for managing multiple objects in XR becomes more crucial, as recent XR devices are capable of presenting multiple windows at once [9,35]. ...

Predicting the Noticeability of Dynamic Virtual Elements in Virtual Reality
  • Citing Conference Paper
  • May 2024

... AR applications can support content arrangement by suggesting and refining object placement through methods such as surface detection [87,92,121], object auto-clustering [117], or object relocation [80,91]. Full automation that minimizes manual effort has also been explored, such as adaptation to the physical environment [19,38,97], user context [34,60,80], or original layouts [20]. However, as adaptation results can deviate from actual intentions, users reported preferring to retain control [80,117]. ...

InteractionAdapt: Interaction-driven Workspace Adaptation for Situated Virtual Reality Environments
  • Citing Conference Paper
  • October 2023

... People are able to choose a variety of input methods in most Augmented Reality (AR) and Virtual Reality (VR) interfaces, and so understanding user input preferences is important. Input methods such as hand gestures and handheld controllers are widely used in AR and VR [19,33,39,46,47,50]. However, user preferences may change over time. ...

Controllers or Bare Hands? A Controlled Evaluation of Input Techniques on Interaction Performance and Exertion in Virtual Reality
  • Citing Article
  • November 2023

IEEE Transactions on Visualization and Computer Graphics

... AvatarPoser [33] and its subsequent work [8,34] predict full-body poses based on head and hand poses tracked by commercial mixed reality devices. HOOV [86] extends hand tracking beyond the field of view of head-mounted cameras using inertial signals captured at the wrist. ...

HOOV: Hand Out-Of-View Tracking for Proprioceptive Interaction using Inertial Sensing
  • Citing Conference Paper
  • April 2023

... Recent advances in natural input techniques, like hand gestures [14,29,60] and speech recognition [4,35,55], have enabled more intuitive and controller-free interactions. However, these methods are often limited by a lack of haptic feedback [40,47], reduced precision [11,46], and user fatigue [6,28] over extended periods. To overcome these challenges, the focus is now on developing robust input sensing techniques that can transform everyday surfaces into interactive, touch-sensitive interfaces. ...

ComforTable User Interfaces: Surfaces Reduce Input Error, Time, and Exertion for Tabletop and Mid-air User Interfaces
  • Citing Conference Paper
  • October 2022

... This way, the environments themselves do not need to be aligned, but rather a common anchor is established within each collaborator's environment. Herskovitz et al. provide a toolkit capable of displaying collaborators through a portal, world-in-miniature display, or by anchoring them to a common element of both rooms, such as a chair or table [9]. The idea of anchoring has also been explored in other works [6,7,10]. ...

XSpace: An Augmented Reality Toolkit for Enabling Spatially-Aware Distributed Collaboration
  • Citing Article
  • November 2022

Proceedings of the ACM on Human-Computer Interaction

... By combining machine learning techniques for spatial understanding as well as object segmentation and classification (e.g., Augmented Object Intelligence [11]), our approach for transforming virtual objects relative to their 3D centroid can be extended to physical objects as well. Such a system could create a virtual replica of relevant physical objects [31] and apply diminished reality techniques [8] to remove the physical objects from view. This would allow the virtual replica objects to be transformed for each user in the same way as in the Decoupled Hands approach. ...

Towards Understanding Diminished Reality
  • Citing Conference Paper
  • April 2022
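The excerpt above mentions transforming virtual objects relative to their 3D centroid. As a minimal illustrative sketch (not the cited system's implementation; the function and helper names here are hypothetical), the per-object transform amounts to: translate the vertices so the centroid sits at the origin, apply the rotation or scale, then translate back, leaving the centroid fixed.

```python
def transform_about_centroid(points, linear_map):
    """Apply a linear transform (e.g., rotation or scale) to an object's
    vertices about the object's 3D centroid, leaving the centroid fixed.

    `points` is a list of (x, y, z) vertex tuples; `linear_map` maps one
    centroid-relative point to its transformed position.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n  # centroid of the vertex cloud
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    out = []
    for x, y, z in points:
        # translate to origin, transform, translate back
        tx, ty, tz = linear_map((x - cx, y - cy, z - cz))
        out.append((tx + cx, ty + cy, tz + cz))
    return out

def yaw_90(p):
    """90-degree rotation about the z-axis: (x, y, z) -> (-y, x, z)."""
    x, y, z = p
    return (-y, x, z)

# Rotate a unit-square object in the z=0 plane about its own center.
square = [(1.0, 1.0, 0.0), (3.0, 1.0, 0.0), (3.0, 3.0, 0.0), (1.0, 3.0, 0.0)]
rotated = transform_about_centroid(square, yaw_90)
# the centroid (2, 2, 0) is unchanged by the rotation
```

Because each object is transformed about its own centroid rather than a shared world origin, the same rotation can be applied per user without displacing the object, which is what makes the per-user "Decoupled Hands" style of manipulation described in the excerpt possible.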