October 2024 · 4 Reads
September 2024 · 36 Reads
We introduce SonoHaptics, an audio-haptic cursor for gaze-based 3D object selection. SonoHaptics addresses challenges around providing accurate visual feedback during gaze-based selection in Extended Reality (XR), e.g., lack of world-locked displays in no- or limited-display smart glasses and visual inconsistencies. To enable users to distinguish objects without visual feedback, SonoHaptics employs the concept of cross-modal correspondence in human perception to map visual features of objects (color, size, position, material) to audio-haptic properties (pitch, amplitude, direction, timbre). We contribute data-driven models for determining cross-modal mappings of visual features to audio and haptic features, and a computational approach to automatically generate audio-haptic feedback for objects in the user's environment. SonoHaptics provides global feedback that is unique to each object in the scene, and local feedback to amplify differences between nearby objects. Our comparative evaluation shows that SonoHaptics enables accurate object identification and selection in a cluttered scene without visual feedback.
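To make the cross-modal mapping idea above concrete, the following minimal Python sketch shows one way visual features could be translated into audio-haptic parameters. The feature names, value ranges, and mapping rules are assumptions chosen for illustration only; they do not reproduce SonoHaptics' data-driven models.

# Hypothetical sketch of a cross-modal mapping in the spirit of SonoHaptics.
# The actual system derives its mappings from data-driven models; the feature
# names, ranges, and rules below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class VisualFeatures:
    brightness: float   # 0.0 (dark) .. 1.0 (bright), stand-in for color
    size: float         # approximate object radius in meters
    azimuth: float      # horizontal angle to the object in degrees (-90 left .. +90 right)
    material: str       # e.g. "metal", "wood", "glass"

@dataclass
class AudioHapticCue:
    pitch_hz: float     # brighter objects map to higher pitch
    amplitude: float    # larger objects map to stronger amplitude (0..1)
    pan: float          # -1 (left) .. +1 (right), follows object position
    timbre: str         # waveform/texture chosen per material

MATERIAL_TO_TIMBRE = {"metal": "sine", "wood": "triangle", "glass": "square"}

def map_features(v: VisualFeatures) -> AudioHapticCue:
    """Map visual features to audio-haptic properties using illustrative rules."""
    pitch = 220.0 + v.brightness * (880.0 - 220.0)        # 220-880 Hz range
    amplitude = min(1.0, 0.2 + v.size * 2.0)              # clamp to [0, 1]
    pan = max(-1.0, min(1.0, v.azimuth / 90.0))           # stereo direction
    timbre = MATERIAL_TO_TIMBRE.get(v.material, "sine")
    return AudioHapticCue(pitch, amplitude, pan, timbre)

if __name__ == "__main__":
    cue = map_features(VisualFeatures(brightness=0.8, size=0.15, azimuth=30.0, material="glass"))
    print(cue)

In this sketch each object in a scene would receive its own cue ("global feedback"); the paper additionally describes local feedback that amplifies differences between nearby objects, which is not modeled here.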
May 2024 · 9 Reads
April 2023 · 49 Reads · 10 Citations
April 2023 · 17 Reads · 15 Citations
April 2023 · 68 Reads · 37 Citations
March 2023 · 182 Reads
Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily life, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
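XAIR is a set of design guidelines rather than an algorithm, so the following Python sketch is only an illustration of how a designer might encode a "when / what / how" decision for a single AI output. The trigger conditions, explanation types, and modalities are invented for this sketch and are not the framework's actual recommendations.

# Illustrative-only encoding of a "when / what / how" explanation decision
# inspired by the XAIR framing. All thresholds, categories, and modalities
# below are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class AIOutput:
    confidence: float        # model confidence in its recommendation (0..1)
    user_requested: bool     # did the user explicitly ask "why?"
    user_is_busy: bool       # e.g. user is mid-task and should not be interrupted

@dataclass
class ExplanationPlan:
    when: str   # "now" or "on_demand"
    what: str   # e.g. "why" or "confidence"
    how: str    # AR presentation modality, e.g. "world_anchored_label" or "audio"

def plan_explanation(out: AIOutput) -> ExplanationPlan:
    # WHEN: explain immediately if the user asked, or if confidence is low
    # and the user is not busy; otherwise keep the explanation available on demand.
    if out.user_requested or (out.confidence < 0.5 and not out.user_is_busy):
        when = "now"
    else:
        when = "on_demand"
    # WHAT: explicit requests get a "why" explanation; otherwise surface confidence.
    what = "why" if out.user_requested else "confidence"
    # HOW: prefer an unobtrusive world-anchored label; fall back to audio
    # when the user is busy and should not be visually distracted.
    how = "audio" if out.user_is_busy else "world_anchored_label"
    return ExplanationPlan(when, what, how)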
March 2023 · 36 Reads · 19 Citations
October 2022 · 35 Reads · 11 Citations
October 2022 · 9 Reads · 13 Citations
... The proposed work builds on prior research on thumb-to-finger interactions [50,54], microgestures [25,42,43,45], on-hand interfaces [10], and explorations using physical objects as part of the interaction design [19,24]. Yet, even with the increasing interest in making XR interfaces more practical for everyday real-world interactions, addressing the physical elements of these interactions is still a challenge for developers [1]. ...
April 2023
... Similarly, MimiCook [43] combines a depth camera and projector to deliver on-the-spot guidance during cooking. Computer-vision-based approaches are popular for guiding users through various tasks [20,22,32,47,55]. For instance, AR Cooking [20] used 3D animations of cookware on AR glasses. ...
April 2023
... Implementation of the Universal Design Specification for AR as an interface for AI: utilized explainable AI recommendations from Xu et al. (2023) and UD heuristics for AR from Szentirmai & Murano (2023), based on the aspects summarized in Fig. 1. The prototype is compatible with system-level assistive technologies like voice commands and screen readers, allowing users to start the application seamlessly. ...
April 2023
... Hand-eye coordination is also crucial for the Apple Vision Pro, improving its spatial computing capabilities and accessibility [1]. This dual-modal interaction has been proven effective in numerous scenarios, e.g., menu selection [36,30], text input [19,29,61], and page browsing [1,35], significantly reducing the learning curve and aligning the user experience more closely with natural human behaviors. Additionally, Bao et al. have explored how hand-eye coordination can aid in selecting and translating objects within environments with heavy 3D occlusions, further illustrating the practical applications and advantages of this interaction technique [4]. ...
March 2023
... Prior research has developed tools to support hand-object interaction data collection. One such tool is ARnnotate [40], which assists users in performing various hand poses while the system records 3D hand positions, 3D object bounding boxes, images, and additional metadata. Tools like ARnnotate could be utilized to gather data for expanding the GraV dataset we provide or for generating personalized GraV data in real-time. ...
October 2022
... For example, in CAPturAR [54] and ProGesAR [55], users can prototype context-aware IoT applications by recording the human's movement or by using proxies through AR-based interfaces. In MechARSpace [56], users can author AR-enhanced toys with a two-way binding between AR content and physical sensors and actuators from a plug-and-play IoT toolkit. While these tools do not focus on visualisation aspects, there are some that do: in PapARVis Designer [12] users can extend static visualisations in physical books by augmenting them with virtual content. ...
October 2022
... Additionally, a new interactive scene synthesis tool was proposed to help designers group object arrangements or quickly identify potential synthesis results [33]. [34] proposed an environment in which designers can create scenes for augmented reality (AR) users in VR by extracting indoor scene information using Microsoft HoloLens 2-based AR and reconstructing the extracted scene data into a virtual environment. Furthermore, an authoring environment in which designers can directly reconstruct and synthesize scenes based on an immersive virtual environment was proposed [35]. ...
April 2022
... Freehand gesture. Gesture interaction is widely used in current AR systems [40,50,37], due to its precision and resemblance to daily behaviour. Users can interact through various gestures, such as pinch, tap, drag, and dual-hand interaction [8,38]. ...
October 2021
... Wang and others have combined light painting with augmented reality (AR) technology. By moving a light source along these AR trajectories, users can create accurate traces on the photograph, thereby bridging the gap between actual output and planned trajectories and enhancing the quality of the creative content (Wang et al., 2021). Nestor Z. Salamon and others have proposed using light painting videos combined with computation to optimise the photo environment and light painting patterns. ...
May 2021
... For instance, Uriu et al. [50] developed a sensor-equipped frying pan that offers context-sensitive information like the pan's current temperature. Also, AdapTutAR [22] is a machine task tutoring system designed to monitor users in following tutorials and offer feedback through AR glasses. However, we often need help with mundane tasks or in situations where instrumenting ourselves or the environment is not easy. ...
May 2021