April 2025 · 1 Read
April 2025 · 1 Read
April 2025
February 2025 · 46 Reads · 1 Citation
Augmented Reality (AR) is increasingly positioned as a tool for knowledge work, providing beneficial affordances such as a virtually limitless display space that integrates digital information with the user's physical surroundings. However, for AR to supplant traditional screen-based devices in knowledge work, it must support prolonged usage across diverse contexts. Until now, few studies have explored the effects, opportunities, and challenges of working in AR outside a controlled laboratory setting and for an extended duration. This gap in research limits our understanding of how users may adapt its affordances to their daily workflows and what barriers hinder its adoption. In this paper, we present findings from a longitudinal diary study examining how participants incorporated an AR laptop, Sightful's Spacetop EA, into their daily work routines. Fourteen participants used the device for 40-minute daily sessions over two weeks, collectively completing 103 hours of AR-based work. Through survey responses, workspace photographs, and post-study interviews, we analyzed usage patterns, workspace configurations, and evolving user perceptions. Our findings reveal key factors influencing participants' usage of AR, including task demands, environmental constraints, social dynamics, and ergonomic considerations. We highlight how participants leveraged and configured AR's virtual display space, along with emergent hybrid workflows that involved physical screens and tasks. Based on our results, we discuss both overlaps with current literature and new considerations and challenges for the future design of AR systems for pervasive and productive use.
November 2024 · 4 Reads
October 2024 · 7 Reads · 2 Citations
October 2024 · 7 Reads · 2 Citations
May 2024 · 12 Reads · 8 Citations
May 2024 · 11 Reads · 2 Citations
November 2023 · 141 Reads · 33 Citations
IEEE Transactions on Visualization and Computer Graphics
Virtual Reality (VR) systems have traditionally required users to operate the user interface with controllers in mid-air. More recent VR systems, however, integrate cameras to track the headset's position inside the environment as well as the user's hands when possible. This allows users to directly interact with virtual content in mid-air just by reaching out, eliminating the need for hand-held physical controllers. However, it is unclear which of these two modalities (controller-based or free-hand interaction) is more suitable for efficient input, accurate interaction, and long-term use under reliable tracking conditions. While interacting with hand-held controllers introduces weight, it also requires less finger movement to invoke actions (e.g., pressing a button) and allows users to hold on to a physical object during virtual interaction. In this paper, we investigate the effect of VR input modality (controller vs. free-hand interaction) on physical exertion, agency, task performance, and motor behavior across two mid-air interaction techniques (touch, raycast) and tasks (selection, trajectory-tracing). Participants reported less physical exertion, felt more in control, and were faster and more accurate when using VR controllers compared to free-hand interaction in the raycast setting. Regarding personal preference, participants chose VR controllers for raycast but free-hand interaction for mid-air touch. Our correlation analysis revealed that participants' physical exertion increased with selection speed, quantity of arm motion, variation in motion speed, and poor posture, following ergonomics metrics such as consumed endurance and rapid upper limb assessment. We also found a negative correlation between physical exertion and the participants' sense of agency, and between physical exertion and task accuracy.
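Correlation analyses of the kind reported in this abstract (e.g., the negative correlation between physical exertion and task accuracy) are typically expressed as Pearson's r. The following is a minimal illustrative sketch with made-up data, not the study's actual measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up data: as exertion rises, accuracy falls, so r is negative.
exertion = [1.0, 2.0, 3.0, 4.0, 5.0]
accuracy = [0.95, 0.90, 0.85, 0.80, 0.75]
r = pearson_r(exertion, accuracy)
```

A value of r near -1 indicates a strong negative linear relationship, matching the direction of the effect the authors report.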
October 2023 · 14 Reads · 27 Citations
... Spatial audio is central to audio augmentation, such that virtual sounds are perceived as emanating from specific locations in 3D space [56]. Humans rely on multiple sensory modalities when they engage with their environment [35,46,67], and the auditory sense remains highly significant for localization even when visual cues are limited, sometimes replacing visual information altogether (e.g., "watching" television from another room) [14,29]. Research indicates that the use of spatial audio encourages users to adopt a more active role in spatial navigation, leading to more accurate cognitive maps [13], while simultaneously reinforcing the sense of presence in XR environments [36]. ...
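The directional cues described above can be approximated in plain stereo with the standard constant-power pan law, which keeps perceived loudness constant as a source moves across the azimuth. This is a minimal sketch for illustration, not the spatialization pipeline used in the cited work:

```python
import math

def constant_power_gains(azimuth_deg):
    """Map a source azimuth in [-90, 90] degrees (hard left to hard right)
    to left/right channel gains using the constant-power pan law, so that
    gain_left**2 + gain_right**2 == 1 at every pan position."""
    # Normalize azimuth to a pan angle in [0, pi/2].
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

# A source straight ahead (0 degrees) is equally loud in both channels.
left, right = constant_power_gains(0.0)
```

Full 3D spatial audio additionally uses head-related transfer functions and distance attenuation, but the same principle of direction-dependent gains underlies it.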
October 2024
... For instance, users can become giants [10], have novel body parts [125], or even adopt non-humanoid avatars [11,72]. There is also growing attention to how audio in VR, such as changes to the avatar's voice [19,23,25,61] or the presence of footstep sounds [65], impacts the immersive experience. Similarly, audio is gaining attention in remote collaboration contexts [116,129]. ...
October 2024
... Since file management mostly consists of sub-tasks such as dragging files to other folders or applying specific operations (e.g., deleting, duplicating, copying, compressing) to multiple files, a multi-selection function would be more useful. Furthermore, providing an efficient method for managing multiple objects in XR becomes more crucial, as recent XR systems are capable of presenting multiple windows at once [9,35]. ...
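The batch sub-tasks described above reduce to applying one operation across a selection set. A minimal sketch of that pattern follows; all names here are illustrative and not taken from the cited system:

```python
# Hypothetical multi-selection model: a set of selected files plus a
# single operation applied to every member, as in batch file management.

selection = {"report.pdf", "photo.png", "notes.txt"}

def apply_to_selection(files, operation):
    """Apply one operation to every selected file; return name -> result."""
    return {f: operation(f) for f in sorted(files)}

def compress(name):
    # Placeholder for a real compression step.
    return name + ".zip"

result = apply_to_selection(selection, compress)
```

The design point is that selection state is decoupled from the operation, so delete, duplicate, or copy can reuse the same dispatch.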
May 2024
... To realize such adaptive MR UI behaviors, recent research formulates the problem as multi-objective optimization [9,10,16,17,22,25,26,32]. The user's goals are formulated as a set of objective functions, and placements are selected that maximize or minimize these objectives. ...
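The formulation described above, scoring candidate placements against several objectives and selecting the best, can be sketched as a weighted sum. The objective names, weights, and candidates below are invented for illustration:

```python
# Illustrative weighted-sum multi-objective selection of a UI placement.
# Objectives and weights are hypothetical, not from the cited papers.

def visibility(placement):
    return placement["visibility"]   # higher is better

def reach_cost(placement):
    return placement["distance"]     # lower is better

def best_placement(candidates, w_vis=1.0, w_reach=0.5):
    # Maximize visibility while penalizing distance from the user.
    return max(candidates,
               key=lambda p: w_vis * visibility(p) - w_reach * reach_cost(p))

candidates = [
    {"id": "wall",  "visibility": 0.9, "distance": 2.0},
    {"id": "table", "visibility": 0.7, "distance": 0.5},
]
chosen = best_placement(candidates)
```

Real adaptive-UI systems typically use many more objectives (occlusion, semantic relevance, ergonomics) and a proper solver rather than exhaustive scoring, but the structure is the same.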
October 2023
... People can choose among a variety of input methods in most Augmented Reality (AR) and Virtual Reality (VR) interfaces, so understanding user input preferences is important. Input methods such as hand gestures and handheld controllers are widely used in AR and VR [19,33,39,46,47,50]. However, user preferences may change over time. ...
November 2023
IEEE Transactions on Visualization and Computer Graphics
... Aliakbarian et al. [19] also used a generative model, FLAG, to learn the conditional distribution of full-body poses given the head and hands, proposing an optimized pose prior and a new approach based on conditional normalizing flows to generate high-quality poses. To reduce the impact of the HMD's limited visual range on hand interaction, Streli et al. [20] proposed HOOV, which supplements the headset information with continuous signals from a wristband to estimate the current hand position outside the visible field of view. Some other studies suggest adding additional signal sources to track pelvic motion. ...
April 2023
... Recent advances in natural input techniques, like hand gestures [14,29,60] and speech recognition [4,35,55], have enabled more intuitive and controller-free interactions. However, these methods are often limited by a lack of haptic feedback [40,47], reduced precision [11,46], and user fatigue [6,28] over extended periods. To overcome these challenges, the focus is now on developing robust input sensing techniques that can transform everyday surfaces into interactive, touch-sensitive interfaces. ...
October 2022
... For example, options to adjust how much an agent can intervene would preserve a team's autonomy and allow GenAI agents to assist without overstepping or overwhelming them. Another approach, leveraging the flexible collaboration spaces that MR allows [44], would be to let teams seamlessly transition between individual, subgroup, and collective interaction modes. This would enable groups to balance collective evaluation and individual agency within their problem-solving processes as they deem appropriate. ...
November 2022
Proceedings of the ACM on Human-Computer Interaction
... By combining machine learning techniques for spatial understanding as well as object segmentation and classification (e.g., Augmented Object Intelligence [11]), our approach for transforming virtual objects relative to their 3D centroid can be extended to physical objects as well. Such a system could create a virtual replica of relevant physical objects [31] and apply diminished reality techniques [8] to remove the physical objects from view. This would allow the virtual replica objects to be transformed for each user in the same way as in the Decoupled Hands approach. ...
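Transforming an object relative to its 3D centroid, as described above, means computing the centroid and applying the transform in centroid-local coordinates so the object changes in place instead of drifting. A minimal sketch with uniform scaling (the cited work's actual transforms and representations may differ):

```python
# Sketch of a centroid-relative transform: uniform scaling of a point
# cloud about its own 3D centroid rather than the world origin.

def centroid(points):
    """Arithmetic mean of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def scale_about_centroid(points, factor):
    """Scale every point about the centroid so the object grows or
    shrinks in place; the centroid itself is left unchanged."""
    cx, cy, cz = centroid(points)
    return [(cx + (x - cx) * factor,
             cy + (y - cy) * factor,
             cz + (z - cz) * factor) for (x, y, z) in points]

tetra = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]
doubled = scale_about_centroid(tetra, 2.0)
```

Rotation works the same way: translate by the negated centroid, rotate, translate back, which is why per-user transforms can be applied to a replica without moving it relative to the scene.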
April 2022