Tianyi Wang’s research while affiliated with Purdue University West Lafayette and other places


Publications (20)


SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR
  • Conference Paper

October 2024 · 4 Reads · Michael Nebeling · [...]

Figure 2: The "bouba/kiki" effect: Which one is 'bouba'? Which one is 'kiki'? Cross-modal correspondences enable us to perceive features across multiple sensory modalities, such as shapes visually or aurally.
Figure 3: Perception Study Setup: Participants were shown a cube that varied in color lightness and size. They used the right thumbstick of a Quest Pro controller to manipulate the pitch of an audio signal (left-right direction) and the amplitude of a vibration signal (up-down), and the left controller trigger button to confirm their selection after choosing the best-matching pitch and vibration amplitude. In-ear stereo earphones were used for audio feedback. Four linear resonant actuators positioned at cardinal directions on a wristband provided haptic feedback (wristband illustrated to maintain anonymity).
Figure 6: Pearson's correlation coefficients r for lightness/size to pitch/amplitude mappings. In one-to-one mappings, participants could change only one of pitch or amplitude at a time while only one of lightness or size changed. In compound mappings, participants could change both pitch and amplitude values at once while both the lightness and size of the cube changed simultaneously.
Figure 7: The evaluation study compared SonoHaptics against four baseline feedback techniques: No Feedback, Static, Text-to-speech, and Visual feedback.
Figure 9: Average selection time for the feedback techniques. The effect of technique was statistically significant.
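For context on the analysis mentioned in Figure 6, a Pearson correlation between a varied visual feature and the audio value a participant selected for it can be computed as in the short Python sketch below; the data points are invented for illustration and are not the study's data.

import numpy as np

# Invented example data: presented lightness levels and the pitch (Hz)
# a participant selected for each level.
lightness = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
chosen_pitch = np.array([220.0, 330.0, 410.0, 600.0, 850.0])

# Pearson's r between the visual feature and the chosen audio feature.
r = np.corrcoef(lightness, chosen_pitch)[0, 1]
print(f"Pearson's r (lightness -> pitch): {r:.2f}")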


SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR
  • Preprint
  • File available

September 2024 · 36 Reads

We introduce SonoHaptics, an audio-haptic cursor for gaze-based 3D object selection. SonoHaptics addresses challenges around providing accurate visual feedback during gaze-based selection in Extended Reality (XR), e.g., lack of world-locked displays in no- or limited-display smart glasses and visual inconsistencies. To enable users to distinguish objects without visual feedback, SonoHaptics employs the concept of cross-modal correspondence in human perception to map visual features of objects (color, size, position, material) to audio-haptic properties (pitch, amplitude, direction, timbre). We contribute data-driven models for determining cross-modal mappings of visual features to audio and haptic features, and a computational approach to automatically generate audio-haptic feedback for objects in the user's environment. SonoHaptics provides global feedback that is unique to each object in the scene, and local feedback to amplify differences between nearby objects. Our comparative evaluation shows that SonoHaptics enables accurate object identification and selection in a cluttered scene without visual feedback.
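To make the mapping idea in the abstract concrete, here is a minimal Python sketch of translating an object's visual features into audio-haptic cue parameters. The feature names, value ranges, and linear correspondences are illustrative assumptions, not SonoHaptics' actual data-driven models.

# Hypothetical sketch: mapping an object's visual features to audio-haptic
# cue parameters via assumed cross-modal correspondences (lighter -> higher
# pitch, larger -> stronger vibration). Ranges and mappings are invented.
from dataclasses import dataclass

@dataclass
class VisualFeatures:
    lightness: float  # 0.0 (dark) .. 1.0 (light)
    size: float       # 0.0 (small) .. 1.0 (large)

@dataclass
class AudioHapticCue:
    pitch_hz: float       # pitch of the audio signal
    vibration_amp: float  # haptic amplitude, 0.0 .. 1.0

def lerp(lo: float, hi: float, t: float) -> float:
    # Clamp t to [0, 1] and interpolate linearly between lo and hi.
    return lo + (hi - lo) * max(0.0, min(1.0, t))

def map_object_to_cue(obj: VisualFeatures) -> AudioHapticCue:
    return AudioHapticCue(
        pitch_hz=lerp(220.0, 880.0, obj.lightness),
        vibration_amp=lerp(0.2, 1.0, obj.size),
    )

if __name__ == "__main__":
    cue = map_object_to_cue(VisualFeatures(lightness=0.8, size=0.3))
    print(f"pitch={cue.pitch_hz:.0f} Hz, amplitude={cue.vibration_amp:.2f}")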






Figure 3: Highlights of Survey Results with 506 End-Users about Their Needs and Preferences for XAI in Everyday AR Scenarios.
Figure 7: Application of XAIR to Two Everyday AR Scenarios. In the second scenario, the hand icon indicates that explanations are manually triggered (likewise below). The figures only present the default, concise explanations; detailed explanations are described in the main text of Sec. 6.
Figure 8: CSI Scores of Design Workshops in Study 3.
Figure 9: End-User Evaluation of the AR System in Study 4. (a) Study setup. (b) Evaluation scores. Users had positive experiences in both tasks. Note that the tasks were evaluated separately and are not meant to be compared against each other.
Figure 10: Version 1 before the 1st Iterative Expert Workshop (Study 2)
XAIR: A Framework of Explainable AI in Augmented Reality

March 2023 · 182 Reads

Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily lives, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
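As a toy illustration of the "when / what / how" structure the abstract describes, the Python sketch below encodes an explanation plan as three choices. The enum values and the example decision rule are invented for illustration and are not XAIR's actual guidelines.

# Toy illustration of a "when / what / how" explanation plan, inspired by
# the XAIR abstract above. All values and the rule below are invented.
from dataclasses import dataclass
from enum import Enum, auto

class Trigger(Enum):        # "when" to explain
    AUTOMATIC = auto()
    ON_USER_REQUEST = auto()

class Content(Enum):        # "what" to explain
    CONCISE = auto()
    DETAILED = auto()

class Modality(Enum):       # "how" to present the explanation in AR
    TEXT_OVERLAY = auto()
    AUDIO = auto()

@dataclass
class ExplanationPlan:
    when: Trigger
    what: Content
    how: Modality

def plan_explanation(user_is_busy: bool, user_asked: bool) -> ExplanationPlan:
    # Invented rule: give detailed explanations only on request; otherwise
    # stay concise and pick a modality that fits the user's current activity.
    if user_asked:
        return ExplanationPlan(Trigger.ON_USER_REQUEST, Content.DETAILED, Modality.TEXT_OVERLAY)
    return ExplanationPlan(
        Trigger.AUTOMATIC,
        Content.CONCISE,
        Modality.AUDIO if user_is_busy else Modality.TEXT_OVERLAY,
    )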





Citations (15)


... The proposed work builds on prior research on thumb-to-finger interactions [50,54], microgestures [25,42,43,45], interfaces on the hand [10], and explorations using physical objects as part of the interaction design [19,24]. Yet, even with the increasing interest in making XR interfaces more practical for everyday real-world interactions, addressing the physical elements of these interactions is still a challenge for developers [1]. ...

Reference:

GraV: Grasp Volume Data for the Design of One-Handed XR Interfaces
Ubi Edge: Authoring Edge-Based Opportunistic Tangible User Interfaces in Augmented Reality
  • Citing Conference Paper
  • April 2023

... Similarly, MimiCook [43] combines a depth camera and projector to deliver on-the-spot guidance during cooking. Computer-vision-based approaches are popular for guiding users through various tasks [20,22,32,47,55]. For instance, AR Cooking [20] used 3D animations of cookware on AR glasses. ...

InstruMentAR: Auto-Generation of Augmented Reality Tutorials for Operating Digital Instruments Through Recording Embodied Demonstration

... Implementation of the Universal Design Specification for AR as an interface for AI: utilized explainable AI recommendations from Xu et al. (2023) and UD heuristics for AR from Szentirmai & Murano (2023), based on the following aspects (Fig. 1). The prototype is compatible with system-level assistive technologies like voice commands and screen readers, allowing users to start the application seamlessly. ...

XAIR: A Framework of Explainable AI in Augmented Reality
  • Citing Conference Paper
  • April 2023

... Hand-eye coordination is also crucial for the Apple Vision Pro, improving its spatial computing capabilities and accessibility [1]. This dual-modal interaction has been proven effective in numerous scenarios, e.g., menu selection [36,30], text input [19,29,61], and page browsing [1,35], significantly reducing the learning curve and aligning the user experience more closely with natural human behaviors. Additionally, Bao et al. have explored how hand-eye coordination can aid in selecting and translating objects within environments with heavy 3D occlusions, further illustrating the practical applications and advantages of this interaction technique [4]. ...

Gaze Speedup: Eye Gaze Assisted Gesture Typing in Virtual Reality
  • Citing Conference Paper
  • March 2023

... Prior research has developed tools to support hand-object interaction data collection. One such tool is ARnnotate [40], which assists users in performing various hand poses while the system records 3D hand positions, 3D object bounding boxes, images, and additional metadata. Tools like ARnnotate could be utilized to gather data for expanding the GraV dataset we provide or for generating personalized GraV data in real-time. ...

ARnnotate: An Augmented Reality Interface for Collecting Custom Dataset of 3D Hand-Object Interaction Pose Estimation
  • Citing Conference Paper
  • October 2022

... For example, in CAPturAR [54] and ProGesAR [55], users can prototype context-aware IoT applications by recording human movement or by using proxies through AR-based interfaces. In MechARspace [56], users can author AR-enhanced toys with a two-way binding between AR content and physical sensors and actuators from a plug-and-play IoT toolkit. While these tools do not focus on visualisation aspects, there are some that do: in PapARVis Designer [12], users can extend static visualisations in physical books by augmenting them with virtual content. ...

MechARspace: An Authoring System Enabling Bidirectional Binding of Augmented Reality with Toys in Real-time

... Additionally, a new interactive scene synthesis tool was proposed to help designers group object arrangements or quickly identify potential synthesis results [33]. [34] proposed an environment in which designers can create scenes for augmented reality (AR) users in VR by extracting indoor scene information using Microsoft HoloLens 2-based AR and reconstructing the extracted scene data into a virtual environment. Furthermore, an authoring environment in which designers can directly reconstruct and synthesize scenes based on an immersive virtual environment was proposed [35]. ...

ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality
  • Citing Conference Paper
  • April 2022

... Freehand gesture. Gesture interaction is widely used in current AR systems [40,50,37], due to its precision and resemblance to daily behaviour. Users can interact through various gestures, such as pinch, tap, drag, and dual-hand interaction [8,38]. ...

GesturAR: An Authoring System for Creating Freehand Interactive Augmented Reality Applications
  • Citing Conference Paper
  • October 2021

... Wang and others have combined light painting with augmented reality (AR) technology. By moving a light source along these AR trajectories, users can create accurate traces on the photograph, thereby bridging the gap between the actual output and the planned trajectories and enhancing the quality of the creative content (Wang et al., 2021). Nestor Z. Salamon and others have proposed using light painting videos combined with computation to optimise the photo environment and light painting patterns. ...

LightPaintAR: Assist Light Painting Photography with Augmented Reality
  • Citing Conference Paper
  • May 2021

... For instance, Uriu et al. [50] developed a sensor-equipped frying pan that offers context-sensitive information like the pan's current temperature. Also, AdapTutAR [22] is a machine task tutoring system designed to monitor users in following tutorials and offer feedback through AR glasses. However, we often need help with mundane tasks or in situations where instrumenting ourselves or the environment is not easy. ...

AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality
  • Citing Conference Paper
  • May 2021