Article · PDF available

Transvision: A hand-held augmented reality system for collaborative design

Authors: Jun Rekimoto

Abstract

This paper presents a shared augmented reality system called TransVision. TransVision augments a real tabletop with computer graphics objects. Two or more participants hold a palmtop-sized see-through display and look at the world through it. They share the same virtual environment within the real-world environment. Since users are not isolated from the real world, natural means of mutual communication such as body gestures can be used effectively during collaboration. This paper describes the architecture of the TransVision system and reports some early experiences.
[Figure: TransVision system architecture. Palmtop video see-through displays (each a palmtop TV plus a CCD camera, connected via NTSC) are driven by a graphics workstation. A 3D sensor provides position & orientation tracking over RS232C; the workstation applies the perspective transformation, generates the image, and superimposes graphics objects on the real objects of the tabletop. Buttons on each display handle interaction control.]
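The diagram describes a standard video see-through pipeline: the 3D sensor reports each display's pose, the workstation projects the graphics objects with a matching perspective transformation, and the rendered result is superimposed on the camera's NTSC video. A minimal sketch of one iteration of that loop in Python with NumPy; the device calls (`read_pose()`, `grab_frame()`, `show()`) are hypothetical placeholders, not TransVision's API:

```python
import numpy as np

def perspective_matrix(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def superimpose(frame, vertices_world, view, proj):
    """Project world-space points and draw them over the camera frame."""
    h, w = frame.shape[:2]
    for v in vertices_world:
        clip = proj @ view @ np.append(v, 1.0)
        if clip[3] <= 0.0:             # point is behind the camera
            continue
        ndc = clip[:3] / clip[3]       # normalized device coordinates
        x = int((ndc[0] + 1.0) / 2.0 * w)
        y = int((1.0 - ndc[1]) / 2.0 * h)
        if 0 <= x < w and 0 <= y < h:
            frame[y, x] = (0, 255, 0)  # graphics object over real video
    return frame

# Per-frame loop (hypothetical device calls):
#   view = np.linalg.inv(read_pose())        # 3D sensor -> view matrix
#   frame = grab_frame()                     # CCD camera image
#   proj = perspective_matrix(45.0, 4/3, 0.1, 10.0)
#   show(superimpose(frame, model_points, view, proj))  # palmtop TV
```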
... Based on the theory of Computer Supported Cooperative Work, AR provides a richer sense of coexistence for collaborators. In the early years, AR collaboration methods were most often used to enhance face-to-face, symmetrical work (Rekimoto, 1996). These systems all allow collaborators to work face-to-face with AR devices. ...
... In some scenarios, multiple people must simultaneously operate on the same object locally. The information they are provided needs to be symmetric (i.e., the collaborators have the same basic roles and capabilities) (Benko et al., 2014; Rekimoto, 1996). In this case, asymmetric (Grandi et al., 2019) remote systems may not adapt well. ...
... Collaboration has long been the focus of spatial computing tools based on Augmented and Virtual Reality [8]. Some of the earliest research projects in the field concerned the use of handheld AR displays for 3D object inspection in teams [6], and collaborative scientific visualization using head-mounted displays [9]. Today, AR and VR headsets are growing more available in certain parts of the world, enabling a rich range of collaborative experiences both in-person (such as games on the Tilt Five) and remotely (through social VR worlds like Rec Room and VRChat). ...
... We also used a PC Client to reset the position of the ... (tools referenced in the citing work's footnotes: Xreal Light, xreal.com; Unity, unity.com; Google ARCore, developers.google.com/ar; XReal SDK, developer.xreal.com) ...
... Collaboration in and across VEs can be collocated or distributed, and synchronous or asynchronous [61], [62]. Compared to VR, collocated collaboration in AR/MR benefits from face-to-face communication [63], [64], comparing well to unmediated face-to-face communication [65]. It preserves nonverbal cues (Exp. 1) [66] and serves well in communicating common ground [67] or spotting errors [68]. ...
Preprint
Full-text available
Problem solving is a composite cognitive process, invoking a number of systems and subsystems, such as perception and memory. Individuals may form collectives to solve a given problem together, in collaboration, especially when complexity is thought to be high. To determine if and when collaborative problem solving is desired, we must first quantify collaboration. For this, we investigate the practical virtue of collaborative problem solving. Using visual graph analysis, we perform a study with 72 participants in two countries and three languages. We compare ad hoc pairs to individuals and nominal pairs, solving two different tasks on graphs in visuospatial mixed reality. The average collaborating pair does not outdo its nominal counterpart, but it does have a significant trade-off against the individual: an ad hoc pair uses 1.46 more time to achieve 4.6 higher accuracy. We also use the concept of task instance complexity to quantify differences in complexity. As task instance complexity increases, these differences largely scale, though with two notable exceptions. With this study we show the importance of using nominal groups as a benchmark in collaborative virtual environments research. We conclude that a mixed reality environment does not automatically imply superior collaboration.
... Hand-held tablet technology can allow participants to view a mixed reality of 3D graphics overlaid onto a camera feed of the real world [24]. These types of approaches mitigate the issue of diverted attention from which 2D displays suffer. ...
Preprint
Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed reality HMD allows a human to tell more quickly and accurately where the robot is going to move than the compared baselines.
... Collaborative VR/AR involves multiple users interacting with virtual objects or shared environments [10]. One of the earliest works in collaborative augmented reality was Rekimoto's Transvision system, which made it possible for multiple people to simultaneously access graphical elements on a tabletop [11]. Since then, a great deal of work has been conducted in this field across different sectors, from entertainment [12] to industrial contexts [13,14]. ...
Article
Full-text available
This work explores the application of collaborative virtual and augmented reality in a cloud continuum context, focusing on designing, implementing, and verifying three reference architectures for collaborative VR/AR software deployment. The architectures differ in their distribution of computational load: one handles everything in the cloud, one balances the load between the cloud and the edge, and the last concentrates the load entirely on the edge. The architectures were initially outlined through sequence and component diagrams and then implemented using the most appropriate technologies and frameworks. For each architecture, a specific application was developed and deployed on the components of that architecture to test its proper functioning. Finally, the scenarios were stress-tested in simulation with a significant number of users, employing tools such as Cloud Analyst to analyze performance and present well-defined and implemented reference architectures.
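The three architectures differ only in where the workload is placed, which suggests a compact way to think about the trade-off. A toy sketch, with illustrative names and a deliberately crude latency model (nothing here is from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    rendering: str    # where frames are produced: "cloud" or "edge"
    state_sync: str   # where shared scene state is reconciled

ALL_CLOUD  = Deployment(rendering="cloud", state_sync="cloud")
CLOUD_EDGE = Deployment(rendering="edge",  state_sync="cloud")
ALL_EDGE   = Deployment(rendering="edge",  state_sync="edge")

def expected_round_trip_ms(d, cloud_rtt=60.0, edge_rtt=8.0):
    """First-order estimate: each cloud-placed stage costs a cloud RTT."""
    return sum(cloud_rtt if stage == "cloud" else edge_rtt
               for stage in (d.rendering, d.state_sync))

for d in (ALL_CLOUD, CLOUD_EDGE, ALL_EDGE):
    print(d, expected_round_trip_ms(d))
```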
... While the initial prototypes for collaborative AR date back almost three decades [6,37,47], recent surveys [33,43] indicate that this remains a prominent and extensively discussed topic. It continues to pose open research questions and challenges across various research domains [15,32,46]. ...
Article
Full-text available
Collaboration is a key aspect of immersive visual data analysis. Due to its inherent benefit of seeing co-located collaborators, augmented reality is often useful in such collaborative scenarios. However, to enable the augmentation of the real environment, different types of technology are available. While there are constant developments in specific devices, each of these device types provides different premises for collaborative visual data analysis. In our work we combine handheld, optical see-through, and video see-through displays to explore and understand the impact of these different device types in collaborative immersive analytics. We conducted a mixed-methods collaborative user study where groups of three performed a shared data analysis task in augmented reality, with each user working on a different device, to explore differences in collaborative behaviour, user experience, and usage patterns. Both quantitative and qualitative data revealed differences in user experience and usage patterns. For collaboration, the different display types influenced how well participants could participate in the collaborative data analysis; nevertheless, there was no measurable effect on verbal communication.
... Also, there are some challenges with the human-computer interaction (HCI) mode of 3D assembly instructions. For instance, engineers regularly do not use their hands to directly manipulate the tasks but instead use 2D interfaces such as displays, mouse, and keyboard [83]. Figure 12 shows a graphical representation of the current assembly procedures in aerospace industries, where tablet-based manuals are used along with cloud-based updates from manufacturers. ...
Chapter
Full-text available
The concept of Augmented Reality (AR) has existed in the field of aerospace for several decades in the form of Head-Up Display (HUD) or Head-Worn Display (HWD). These displays enhance Human-Machine Interfaces and Interactions (HMI2) and allow pilots to visualize the minimum required flight information while seeing the physical environment through a semi-transparent visor. Numerous research studies are still being conducted to improve pilot safety during challenging situations, especially during low visibility conditions and landing scenarios. Besides flight navigation, aerospace engineers are exploring many modern cloud-based AR systems to be used as remote and/or AI-powered assist tools for field operators, such as maintenance technicians, manufacturing operators, and Air Traffic Control Officers (ATCO). Thanks to the rapid advancement in computer vision and deep neural network architectures, modern AR technologies can also scan or reconstruct the 3D environment with high precision in real time. This feature typically utilizes the depth cameras onboard or independent from the AR devices, helping engineers rapidly identify problems during an inspection and implement the appropriate solutions. Some studies also suggest 3D printing of reconstructed models for additive manufacturing. This chapter covers several aspects and potentials of AR technology in the aerospace sector, including those already adopted by the companies and those currently under research.
Article
In Mixed Reality (MR), users can collaborate efficiently by creating personalized layouts that incorporate both personal and shared virtual objects. Unlike in the real world, personal objects in MR are only visible to their owner. This makes them susceptible to occlusions from shared objects of other users, who remain unaware of their existence. Thus, achieving unobstructed layouts in collaborative MR settings requires knowledge of where others have placed their personal objects. In this paper, we assessed the effects of three visualizations, and a baseline without any visualization, on occlusions and user perceptions. Our study involved 16 dyads (N=32) who engaged in a series of collaborative sorting tasks. Results indicate that the choice of visualization significantly impacts both occlusion and perception, emphasizing the need for effective visualizations to enhance collaborative MR experiences. We conclude with design recommendations for multi-user MR systems to better accommodate both personal and shared interfaces simultaneously.
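The detection problem underneath such visualizations is geometric: from one user's viewpoint, a shared object occludes a personal object when their screen-space extents overlap and the shared object is closer. A minimal sketch of that test, assuming axis-aligned screen rectangles and camera-space depths (names are illustrative, not from the paper):

```python
def occludes(shared_rect, shared_depth, personal_rect, personal_depth):
    """True if `shared_rect` overlaps `personal_rect` on screen and the
    shared object lies in front of the personal one for this viewer."""
    ax0, ay0, ax1, ay1 = shared_rect      # (left, top, right, bottom)
    bx0, by0, bx1, by1 = personal_rect
    overlap = ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
    return overlap and shared_depth < personal_depth
```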
Article
Research has identified applications of handheld-based VR, which utilizes handheld displays or mobile devices, for developing systems that involve users in mixed reality (MR) without the need for head-worn displays (HWDs). Such systems can potentially accommodate large groups of users participating in MR. However, we lack an understanding of how group sizes and interaction methods affect the user experience. In this paper, we aim to advance our understanding of handheld-based MR in the context of multiplayer, co-located games. We conducted a study (N = 38) to understand how user experiences vary by group size (2, 4, and 8) and interaction method (proximity-based or pointing-based). For our experiment, we implemented a multiuser experience for up to ten users. We found that proximity-based interaction that encouraged dynamic movement positively affected social presence and physical/temporal workload. In bigger group settings, participants felt less challenged and less positive. Individuals had varying preferences for group size and interaction type. The findings of the study will advance our understanding of the design space for handheld-based MR in terms of group sizes and interaction schemes. To make our contributions explicit, we conclude our paper with design implications that can inform user experience design in handheld-based mixed reality contexts.
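The two interaction methods compared reduce to two different triggers: proximity-based selection fires on a distance threshold between the handheld device and a target (which is what pushes users to move), while pointing-based selection fires on the angle between the device's forward ray and the target. A sketch of both, with hypothetical parameters:

```python
import numpy as np

def proximity_select(device_pos, target_pos, radius=0.3):
    """Trigger when the device is within `radius` metres of the target."""
    return np.linalg.norm(np.asarray(device_pos, float)
                          - np.asarray(target_pos, float)) < radius

def pointing_select(ray_origin, ray_dir, target_pos, max_angle_deg=5.0):
    """Trigger when the device's forward ray passes near the target."""
    d = np.asarray(ray_dir, float)
    to_target = np.asarray(target_pos, float) - np.asarray(ray_origin, float)
    cos_a = to_target @ d / (np.linalg.norm(to_target) * np.linalg.norm(d))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < max_angle_deg
```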
Conference Paper
Full-text available
The MR Toolkit Peer Package is an extension to the MR Toolkit that allows multiple independent MR Toolkit applications to communicate with one another across the Internet. The master process of an MR Toolkit application can transmit device data to other remote applications, and receive device data from remote applications. Application-specific data can also be shared between independent applications. Nominally, any number of peers may communicate together in order to run a multiprocessing application, and peers can join or leave the collaborative application at any time. The Peer Package is introduced and the theory of its operation is explained. The authors' experience with a demonstration program they have written, called multi-player handball, which uses the Peer Package, is discussed.
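The pattern the Peer Package describes (every application sends its device data to all known peers, with peers free to join or leave at any time) maps naturally onto connectionless sockets. A minimal Python sketch of that idea over UDP, not the MR Toolkit API itself:

```python
import json
import socket

class Peer:
    """Shares device data with a changing set of remote peers."""

    def __init__(self, port, peers=()):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", port))
        self.sock.setblocking(False)
        self.peers = set(peers)            # (host, port) pairs

    def send_device_data(self, data):
        packet = json.dumps(data).encode()
        for addr in self.peers:
            self.sock.sendto(packet, addr)

    def poll(self):
        """Yield (peer, data) for pending packets; senders become peers."""
        while True:
            try:
                packet, addr = self.sock.recvfrom(4096)
            except BlockingIOError:
                return
            self.peers.add(addr)           # a joining peer announces itself
            yield addr, json.loads(packet)

# e.g. peer = Peer(9000, peers=[("host-b", 9000)])
#      peer.send_device_data({"tracker": [0.1, 0.2, 0.3]})
#      for addr, data in peer.poll(): ...
```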
Conference Paper
Full-text available
This TechNote introduces a novel interaction technique for small-screen devices such as palmtop computers or hand-held electronic devices, including pagers and cellular phones. Our proposed method uses the tilt of the device itself as input. Using both tilt and buttons, it is possible to build several interaction techniques ranging from menus and scroll bars to more complicated examples such as a map browsing system and a 3D object viewer. During operation, only one hand is required to both hold and control the device. This feature is especially useful for field workers.
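The core of such a tilt interface is a small mapping: read the tilt angles, apply a dead zone so a resting hand does not scroll, and scale the remainder into a scroll velocity. A minimal sketch; the `read_tilt()` call in the usage comment is a hypothetical sensor placeholder:

```python
def tilt_to_scroll(pitch_deg, roll_deg, dead_zone=5.0, gain=3.0):
    """Map device tilt to a (dx, dy) scroll velocity with a dead zone."""
    def axis(angle):
        if abs(angle) < dead_zone:
            return 0.0                           # ignore tremor near level
        sign = 1.0 if angle > 0 else -1.0
        return gain * (angle - sign * dead_zone) # smooth ramp past the zone
    return axis(roll_deg), axis(pitch_deg)

# While a button is held:  dx, dy = tilt_to_scroll(*read_tilt()); pan(dx, dy)
```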
Article
Full-text available
We are exploring how virtual reality theories can be applied to palmtop computers. In our prototype, called the Chameleon, a small 4-inch hand-held monitor acts as a palmtop computer with the capabilities of a Silicon Graphics workstation. A 6D input device and a response button are attached to the small monitor to detect user gestures and input selections for issuing commands. An experiment was conducted to evaluate our design and to see how well depth could be perceived on the small screen compared to a large 21-inch screen, and the extent to which movement of the small display (in a palmtop virtual reality condition) could improve depth perception. Results show that with very little training, perception of depth in the palmtop virtual reality condition is about as good as corresponding depth perception in a large (but static) display. Variations to the initial design are also discussed, along with issues to be explored in future research. Our research suggests that palmtop virtual reality may support effective navigation and search and retrieval in rich and portable information spaces.
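The depth cue a movable palmtop display exploits is motion parallax: the virtual camera tracks the physical display, so near objects shift across the screen more than far ones as the device moves. A sketch of the per-frame camera update (the tracker call in the comment is a hypothetical placeholder):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """View matrix for a camera at `eye` looking toward `target`."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    f = target - eye; f /= np.linalg.norm(f)      # forward
    s = np.cross(f, up); s /= np.linalg.norm(s)   # right
    u = np.cross(s, f)                            # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye
    return view

# Each frame the 6D tracker on the monitor supplies `eye`; re-rendering
# from that eye is what produces motion parallax as the palmtop moves:
#   view = look_at(read_tracker_position(), scene_center)
```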
Article
Full-text available
Current user interface techniques such as WIMP or the desktop metaphor do not support real-world tasks, because the focus of these user interfaces is only on human-computer interactions, not on human-real world interactions. In this paper, we propose a method of building computer-augmented environments using a situation-aware portable device. This device, called NaviCam, has the ability to recognize the user's situation by detecting color-code IDs in real-world environments. It displays situation-sensitive information by superimposing messages on its video see-through screen. The combination of ID-awareness and a portable video see-through display solves several problems with current ubiquitous computing systems and augmented reality systems.
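NaviCam's loop (recognize an ID in the camera image, look up the situation-sensitive message, superimpose it on the video see-through view) can be sketched as below. The detector and drawing calls are hypothetical placeholders standing in for the original color-code recognizer:

```python
# Example ID-to-message table (illustrative data, not NaviCam's).
MESSAGES = {
    17: "Prof. Tanaka is away until 3pm",
    42: "Seminar room: booked 2-4pm",
}

def navicam_step(frame, detect_ids, draw_text):
    """Recognize color-code IDs in `frame` and overlay their messages."""
    for code_id, (x, y) in detect_ids(frame):    # hypothetical detector
        message = MESSAGES.get(code_id)
        if message is not None:
            draw_text(frame, message, (x, y))    # hypothetical overlay call
    return frame
```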
Conference Paper
Palmtop displays have been extensively studied, but most of them simply refocus information in the real or virtual world. The palmtop display for dextrous manipulation (PDDM) proposed in this paper allows users to manipulate a remote object as if they were holding it in their hands. The PDDM system has a small LCD, a 3D mouse, and a mechanical linkage (force display). When the user locks onto an object in the center of the palmtop display, s/he can manipulate the object through motion input on the palmtop display with haptic sensation. In this paper, the features of the PDDM with haptic sensation are described; then four operating methods and the haptic representation methods for a trial model are proposed and evaluated. (See the Video Figure in the CHI'96 Video Program.)
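Lock-and-manipulate can be expressed as a small state machine: while unlocked, moving the display merely re-aims it; once the centered object is locked, the display's motion delta is applied to the object instead. A sketch with hypothetical 4x4-matrix poses, leaving the haptic channel out:

```python
import numpy as np

class PDDMController:
    """Sketch of PDDM-style lock-and-manipulate (haptics not modelled)."""

    def __init__(self):
        self.locked = None       # the object being manipulated, if any
        self.last_pose = None    # display pose from the previous frame

    def lock(self, centered_object):
        self.locked = centered_object

    def release(self):
        self.locked = None

    def update(self, display_pose):
        """While locked, transfer the display's frame-to-frame motion
        onto the locked object's transform (both are 4x4 matrices)."""
        if self.locked is not None and self.last_pose is not None:
            delta = display_pose @ np.linalg.inv(self.last_pose)
            self.locked.transform = delta @ self.locked.transform
        self.last_pose = display_pose
```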
Conference Paper
A virtual environment, created by computer graphics and an appropriate user interface, can be used in many application fields, such as teleoperation, telecommunication, and real-time simulation. Furthermore, if this environment could be shared by multiple users, there would be more potential applications. Discussed in this paper is a case study of building a prototype of a cooperative work environment using a virtual environment, where more than two people can solve problems cooperatively, including design strategies and implementation issues. An environment where two operators can directly grasp, move, or release stereoscopic computer graphics images by hand is implemented. The system is built by combining head-position-tracked stereoscopic displays, hand gesture input devices, and graphics workstations. Our design goal is to utilize this type of interface for a future teleconferencing system. In order to provide good interactivity for users, we discuss potential bottlenecks and their solutions. The system allows two users to share a virtual environment and to organize 3D objects cooperatively.
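One of the interactivity bottlenecks in such a shared environment is deciding who may move an object at a given moment; a common rule is to treat a grasp as acquiring ownership, so only the grasping user's updates are applied. A minimal sketch of that rule (illustrative, not the paper's implementation):

```python
class SharedObject:
    """An object whose pose may only be changed by its current grasper."""

    def __init__(self, pose):
        self.pose = pose
        self.owner = None             # user id of the grasper, if any

    def grasp(self, user):
        if self.owner is None:        # first grasp wins; others refused
            self.owner = user
            return True
        return False

    def move(self, user, new_pose):
        if self.owner == user:        # silently ignore non-owners
            self.pose = new_pose

    def release(self, user):
        if self.owner == user:
            self.owner = None
```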
Michael J. Zyda, Rich Gossweiler, John Morrison, Sandeep Singhal, and Michael Macedonia. Networked virtual environments. In A panel at Virtual Reality Annual International Symposium (VRAIS) '95, pp. 230–231, 1995.
Lennart E. Fahlén, Charles Grant Brown, Olov Ståhl, and Christer Carlsson. A space based model for user interaction in shared synthetic environments. In INTERCHI'93, pp. 43–48, 1993.