Article · PDF available

Transvision: A hand-held augmented reality system for collaborative design

Authors:
Jun Rekimoto

Abstract

This paper presents a shared augmented reality system called TransVision. TransVision augments a real table-top with computer graphics objects. Two or more participants hold a palmtop-sized see-through display and look at the world through it. They share the same virtual environment within the real world environment. Since users are not isolated from the real world, natural mutual communication cues such as body gestures can be used effectively during collaboration. This paper describes the architecture of the TransVision system and reports some early experiences.
[Figure: TransVision system architecture. Each participant holds a palmtop video see-through display built from a palmtop TV and a CCD camera (NTSC video in and out). A graphics workstation performs position & orientation tracking from a 3D sensor (connected over RS232C), applies the perspective transformation, generates the graphics image, and superimposes it on the camera video; buttons on the display drive interaction control.]
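Read as a per-frame loop, the pipeline in the figure is compact enough to sketch in code. The Python below is a hypothetical reconstruction, not the original system's code: all names and constants (focal length, screen centre, the stubbed sensor pose) are illustrative assumptions.

```python
import numpy as np

def read_sensor_pose():
    # Stub for the RS232C 3D sensor: the real system polls a serial port.
    # Returns display position (metres) and orientation (3x3 rotation)
    # in table coordinates.
    return np.array([0.05, 0.05, -0.6]), np.eye(3)

def view_matrix(position, rotation):
    # World-to-camera transform derived from the tracked display pose.
    m = np.eye(4)
    m[:3, :3] = rotation.T
    m[:3, 3] = -rotation.T @ position
    return m

def project(points_world, view, f=800.0, cx=320.0, cy=240.0):
    # Pinhole perspective transformation onto the palmtop screen.
    homo = np.c_[points_world, np.ones(len(points_world))]
    cam = homo @ view.T
    return np.c_[f * cam[:, 0] / cam[:, 2] + cx,
                 f * cam[:, 1] / cam[:, 2] + cy]

def frame(shared_scene_points):
    # One frame of the pipeline: track -> perspective transformation ->
    # image generation -> superimpose on the NTSC camera video.
    position, rotation = read_sensor_pose()
    view = view_matrix(position, rotation)
    return project(shared_scene_points, view)  # drawn over the live video

if __name__ == "__main__":
    cube = np.array([[x, y, z] for x in (0.0, 0.1)
                     for y in (0.0, 0.1) for z in (0.0, 0.1)])
    print(frame(cube))
```

Because every participant's display is tracked against the same table coordinates, running this loop per device is what makes the graphics objects appear shared.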
... As embodiment is natural in co-located systems, awareness of others is better perceived since the bodies and consequential communication cues [118] follow social rules. For instance, in Rekimoto's [228] system, users could easily interpret others' deictic gestures. Rekimoto also found that users are mostly absorbed in the workspace. ...
... Despite these limitations, the movements of the tablet are visible to all, which everyone understands as manipulating the AR world in co-located contexts. In our survey, the only work focusing on awareness is Rekimoto et al.'s [228], which proposed adding an eye gaze equivalent as an awareness cue. ...
... Hybrid systems using personal 2D screens can provide user-friendly personal 2D views and UIs [62,196,228]. ...
Thesis
I studied the benefits and limitations of Augmented Reality Head-Mounted Displays (AR-HMDs) for collaborative 3D data exploration. Before conducting any projects, I saw benefits in AR-HMDs' immersive features: they merge the interaction, visualization, collaboration, and physical spaces of their users. Multiple collaborators can then see and interact directly with 3D visuals anchored within their physical space. AR-HMDs usually rely on stereoscopic 3D displays, which provide additional depth cues compared to 2D screens and help users understand 3D datasets better. Because AR-HMDs allow users to see each other within the workspace, seamless switches between discussion and exploration phases are possible. Interacting within those visualizations allows fast and intuitive direct 3D interactions, which give others cues about one's intentions; for example, moving an object by grabbing it is a strong cue about what a person intends to do with that object. Those cues are important for everyone to understand what is currently going on. Finally, because the users' physical space is not occluded, common but important tools such as whiteboards and workstations running simulations remain easily accessible without taking off the headsets. That being said, although AR-HMDs have been studied for decades, their computing power before the release of the HoloLens in 2016 was not sufficient for efficient exploration of 3D data such as ocean datasets. Moreover, previous researchers were more interested in how to make AR possible than in how to use AR. Consequently, despite all the qualities one may expect before working with AR-HMDs, almost no work discussed the exploration of such 3D datasets. Furthermore, AR-HMDs are ill suited for 2D input, which is nevertheless common in established exploration tools such as ParaView or CAD software, with which users such as scientists and engineers are already proficient. I therefore theorize about the situations in which AR-HMDs are preferable. They seem preferable when the purpose is to share insights with multiple collaborators and to explore patterns together, and when the exploration tools can be minimal compared to what workstations provide, as most of the preparatory work and simulations can be done beforehand. I thus combine AR-HMDs with multi-touch tablets: the AR-HMDs merge the visualizations, some 3D interactions, and the collaborative space within the users' physical space, while the tablets provide 2D input and the usual graphical user interfaces most software offers (e.g., buttons and menus). I then studied low-level interactions necessary for data exploration with this new hybrid system, namely the selection of points and regions inside datasets. The techniques my co-authors and I chose possess different levels of directness, which we investigated. As this PhD aims at studying AR-HMDs within collaborative environments, I also studied their capacity to adapt the visuals of a given anchored 3D object to each collaborator. This is similar to the relaxed "What-You-See-Is-What-I-See" paradigm that allows, e.g., multiple remote users to see and edit different parts of a shared document simultaneously. Finally, I am currently (i.e., at the time of writing this PhD) studying the use of this new system for the collaborative 3D exploration of ocean datasets that my collaborators at Helmholtz-Zentrum Geesthacht, Germany, are working on.
This PhD provides a state of the art of AR used within collaborative environments. It also gives insights into the impact of 3D interaction directness on 3D data exploration. Finally, it gives designers insights into the use of AR for collaborative scientific data exploration, with a focus on oceanography.
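The relaxed "What-You-See-Is-What-I-See" idea mentioned above can be captured in a few lines. The sketch below is our illustration, not the thesis's code: a shared anchored object keeps one pose for everyone while letting each collaborator override how it is rendered; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SharedObject:
    # One anchored 3D object; identity and pose are shared by everyone.
    name: str
    pose: tuple                                          # room position, same for all
    default_style: dict = field(default_factory=dict)
    per_user_style: dict = field(default_factory=dict)   # user id -> overrides

    def style_for(self, user_id):
        # Relaxed WYSIWIS: all users agree on *what* and *where* the
        # object is, but each may see a different rendering of it.
        return {**self.default_style, **self.per_user_style.get(user_id, {})}

iso_surface = SharedObject("ocean_temp", pose=(0, 1, 0),
                           default_style={"colormap": "viridis"})
iso_surface.per_user_style["alice"] = {"colormap": "plasma", "clip": "z>0"}
print(iso_surface.style_for("alice"))   # Alice's personal view
print(iso_surface.style_for("bob"))     # Bob keeps the shared default
```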
... Additional works before 2008: [16], [17], [24], [26], [27], [36], [90], [106], [127], [133], [144], [149], [156], [161], [162] (15 papers). ...
... Co-located Collaboration: As embodiment is natural in co-located systems, awareness of others is better perceived since the bodies and consequential communication cues [69] follow social rules. For instance, in Rekimoto's [144] system, users could easily interpret others' deictic gestures. Rekimoto also found that users are mostly absorbed in the workspace. ...
... Despite these limitations, the movements of the tablet are visible to all, which everyone understands as manipulating the AR world in co-located contexts. In our survey, the only work focusing on awareness is Rekimoto et al.'s [144], which proposed adding an eye gaze equivalent as an awareness cue. ...
Article
Full-text available
In Augmented Reality (AR), users perceive virtual content anchored in the real world. It is used in medicine, education, games, navigation, maintenance, product design, and visualization, in both single-user and multi-user scenarios. Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work (AR-CSCW), combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, role symmetry (whether the roles of users are symmetric), technology symmetry (whether the hardware platforms of users are symmetric), and output and input modalities. We derive design considerations for collaborative AR environments and identify under-explored research topics, including heterogeneous hardware configurations and 3D data exploration. This survey is useful for newcomers to the field, readers interested in an overview of CSCW in AR applications, and domain experts seeking up-to-date information.
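The survey's coding dimensions translate naturally into a record type. The sketch below is our illustration, not the survey's actual coding sheet: enum values and field names are assumptions, with Transvision coded as an example entry under our own reading.

```python
from dataclasses import dataclass
from enum import Enum

class Space(Enum):
    COLOCATED = "co-located"
    REMOTE = "remote"

class Time(Enum):
    SYNCHRONOUS = "synchronous"
    ASYNCHRONOUS = "asynchronous"

@dataclass
class SurveyEntry:
    # One paper coded along the survey's classification dimensions.
    citation: str
    space: Space
    time: Time
    role_symmetric: bool      # do all users play the same role?
    tech_symmetric: bool      # do all users run the same hardware?
    output_modality: str      # e.g. "HMD", "handheld", "projection"
    input_modality: str       # e.g. "buttons", "touch", "gaze"

transvision = SurveyEntry("Rekimoto 1996", Space.COLOCATED, Time.SYNCHRONOUS,
                          role_symmetric=True, tech_symmetric=True,
                          output_modality="handheld see-through",
                          input_modality="buttons")
print(transvision)
```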
... Using these technologies, a number of interesting application areas were being explored. For example, Rekimoto [Rekimoto, 1996b], Billinghurst [Billinghurst et al., 1997], and Schmalstieg [Schmalstieg et al., 1996, Szalavári et al., 1998] were exploring how AR could be used to enhance face-to-face collaboration, developing systems that allowed people in the same location to see and interact with shared AR content (see Figure 3.4). A range of different medical applications of AR were being explored, such as visualization of laparoscopic surgery [Fuchs et al., 1998], X-ray visualization in the patient's body [Navab et al., 1999], and image-guided surgery [Leventon, 1997]. ...
... Other researchers have explored the use of augmented reality to support face-to-face collaboration and remote collaboration. Projects such as Studierstube [Schmalstieg et al., 1996, Szalavári et al., 1998], Transvision [Rekimoto, 1996b], and AR2 Hockey [Ohshima et al., 1998] allow users to see each other as well as 3D virtual objects in the space between them. Users can interact with the real world at the same time as the virtual images, supporting spatial cues and facilitating very natural collaboration. ...
Book
A Survey of Augmented Reality summarizes almost fifty years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s, there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. This monograph provides an overview of the common definitions of AR and shows how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display, and input devices. The author also reviews design guidelines and provides some examples of successful AR applications. The work concludes with a summary of directions for future work and a review of some of the areas that are currently being researched. A Survey of Augmented Reality is an invaluable resource for researchers and practitioners. It provides an ideal starting point for those who want an overview of the technology and to undertake research and development in the field.
... During the 1970s and 1980s, research in this field focused mainly on augmented reality systems for aeronautics and space [Furness III 1986]. Then, in the 1990s, research projects multiplied in other application areas such as medicine [Bajura 1992] and collaborative work [Rekimoto 1996], and on improving particular aspects of AR systems such as mobility [Feiner 1997], transparency [Kancherla 1996], and tracking [Azuma 1993], notably with the arrival of the ARToolKit tag-detection software library [Kato 1999]. ...
Thesis
The research conducted during this PhD thesis is part of the activities of the OPERA joint laboratory (OPtique EmbaRquée Active) involving ESSILOR-LUXOTTICA and the CNRS. The objective is to contribute to the development of the "glasses of the future", integrating dimming, focusing, or display functions that continuously adapt to the scene and to the user's gaze. These new devices will need perception, decision, and action capabilities, and will have to respect constraints on bulk, weight, energy consumption, and processing time. They therefore have obvious connections with robotics. In this context, the research consisted in investigating the structure and construction of such systems in order to identify their challenges and difficulties. To do so, the first task was to build emulators of various types of active glasses, which allow efficient prototyping and evaluation of various functions. In this prototyping and testing phase, these emulators naturally rely on a modular software architecture typical of robotics. The second part of the thesis focused on prototyping a key component of the glasses of the future, one that adds a further low-power constraint: the gaze-tracking system, also called an eye tracker. The principle of an assembly of photodiodes processed by a neural network was proposed. A simulator was developed, together with a study of the influence of the photodiode layout and of the network's hyper-parameters on the eye tracker's performance.
... People will be crossing the street at the same time and in the same place." This comment leads us to believe that in certain situations, a shared AR experience (Rekimoto, 1996), where multiple users can see the same virtual elements, may help guide pedestrian traffic more efficiently. As a result, personal and shared (augmented) reality should both be considered when designing AR eHMIs. ...
Article
Full-text available
Wearable augmented reality (AR) offers new ways of supporting the interaction between autonomous vehicles (AVs) and pedestrians due to its ability to integrate timely and contextually relevant data into the user's field of view. This article presents novel wearable AR concepts that assist crossing pedestrians in multi-vehicle scenarios where several AVs frequent the road from both directions. Three concepts with different communication approaches for signaling responses from multiple AVs to a crossing request, as well as a conventional pedestrian push button, were simulated and tested within a virtual reality environment. The results showed that wearable AR is a promising way to reduce crossing pedestrians' cognitive load when the design offers both individual AV responses and a clear signal to cross. The willingness of pedestrians to adopt a wearable AR solution, however, is subject to different factors, including costs, data privacy, technical defects, liability risks, maintenance duties, and form factors. We further found that all participants favored sending a crossing request to AVs rather than waiting for the vehicles to detect their intentions, pointing to an important gap and opportunity in the current AV-pedestrian interaction literature.
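The winning design combines two cues: a response from each individual AV plus one clear, global crossing signal. A minimal sketch of how those replies might be aggregated follows; this is a hypothetical protocol for illustration, not the paper's implementation.

```python
def aggregate_av_responses(responses):
    """Combine individual AV replies into the two cues the study found
    helpful: per-vehicle responses plus one global 'cross' signal.
    `responses` maps an AV id to True if that vehicle yields."""
    individual = {av: ("yielding" if ok else "not yielding")
                  for av, ok in responses.items()}
    # Only show the global crossing signal once *every* approaching
    # AV has acknowledged the crossing request.
    cross_now = bool(responses) and all(responses.values())
    return individual, cross_now

individual, cross_now = aggregate_av_responses({"AV-1": True, "AV-2": True})
print(individual, "=> CROSS" if cross_now else "=> WAIT")
```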
... Some of the early works on collaborative AR demonstrated that shared AR technology that displays virtual artifacts within the user's physical space [87] can support collaborative design processes [77,78,91]. Moreover, collocated collaborative AR may be a particularly effective solution for collaboration in the context of 3D artifacts' creation [7,8,31]. ...
Article
Design and co-creation activities around 3D artifacts often require close collocated coordination between multiple users. Augmented reality (AR) technology can support collocated work by enabling users to flexibly work with digital objects while still being able to use the physical space for coordination. With most current research focusing on remote AR collaboration, less is known about collocated collaboration in AR, particularly in relation to interpersonal dynamics between the collocated collaborators. Our study aims at understanding how shared augmented reality facilitated by mobile devices (mobile augmented reality or MAR) affects collocated users' coordination. We compare the coordination behaviors that emerged in a MAR setting with those in a comparable fully physical setting by simulating the same task: co-creation of a 3D artifact. Our results demonstrate the importance of the shared physical dimension for participants' ability to coordinate in the context of collaborative co-creation. Namely, participants working in a fully physical setting were better able to leverage the work artifact itself for their coordination needs, working in a mode that we term artifact-oriented coordination. Conversely, participants collaborating around an AR artifact leveraged the shared physical workspace for their coordination needs, working in what we refer to as space-oriented coordination. We discuss implications for AR-based collaboration and propose directions for designers of AR tools. CCS Concepts: • Human-centered computing • Collaborative and social computing • Empirical studies in collaborative and social computing
... In 1999, ARToolKit was developed by H. Kato at the Nara Institute of Science and Technology. In 1999, Kato and Billinghurst published their paper (Kato and Billinghurst, 1999) about using an HMD and markers for a conferencing system, based on the method proposed by Rekimoto (Rekimoto, 1996). ARToolKit is a software library for tracking visual markers and registering them in camera space (http://www.hitl.washington.edu/artoolkit/). ...
Conference Paper
Full-text available
Nowadays, there is a growing interest in Augmented Reality (AR) as a field of research, as well as a domain for developing a broad variety of applications. Since the coining of the phrase "augmented reality" in 1990, the technology has come a long way from research laboratories and big international companies into the pockets of millions of users all over the world. AR's popularity among younger generations has inspired an effort to utilize AR as a tool for education. For teachers starting with AR educational authoring, we selected some important milestones in the history of the field, focusing on the specific domain of educational applications. We comment on the Videoplace and Construct3D projects in more detail. Finally, we draw a few implications from the available literature for educational authoring.
... The concept of collaboration in the Augmented Reality context took its first steps shortly after the first AR systems [4] emerged. One of the first approaches was the Transvision system from Rekimoto [5], which allowed multiple users to share virtual graphics content placed on a table. This system also enabled the users to interact with the content by choosing an action (e.g., selection or manipulation) to be performed through a physical device. ...
Article
Full-text available
Augmented Reality (AR) functionalities may be effectively leveraged in collaborative service scenarios (e.g., remote maintenance, on-site building, street gaming, etc.). Standard development cycles for collaborative AR require coding for each specific visualization platform and implementing the necessary control mechanisms over the shared assets. In order to face this challenge, this paper describes SARA, an architecture to support cross-platform collaborative Augmented Reality applications based on microservices. The architecture is designed to work over the concept of collaboration models, which regulate the interaction and permissions of each user over the AR assets. Five of these collaboration models were initially integrated in SARA (turn-, layer-, ownership-, and hierarchy-based, and unconstrained), and the platform enables the definition of new ones. Thanks to the reusability of its components, during the development of an application SARA enables focusing on the application logic while avoiding the implementation of the communication protocol, data-model handling, and orchestration between the different, possibly heterogeneous, devices involved in the collaboration (i.e., mobile or wearable AR devices using different operating systems). To describe how to build an application based on SARA, a prototype for HoloLens and iOS devices has been implemented. The prototype is a collaborative voxel-based game in which several players work together in real time on a piece of land, adding or eliminating cubes in a collaborative manner to create buildings and landscapes. Turn-based and unconstrained collaboration models are applied to regulate the interaction. The development workflow for this case study shows how the architecture serves as a framework to support the deployment of collaborative AR services, enabling the reuse of collaboration-model components and agnostically handling client technologies.
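The collaboration-model idea is the architectural core here and is easy to illustrate. The Python sketch below is a hypothetical reading of the abstract, not SARA's actual microservice API: a common interface decides whether a user may act on a shared asset, with turn-based and unconstrained variants like those used in the voxel game.

```python
from abc import ABC, abstractmethod

class CollaborationModel(ABC):
    # Decides which user may act on a shared AR asset, in the spirit of
    # SARA's turn/layer/ownership/hierarchy/unconstrained models.
    @abstractmethod
    def may_edit(self, user, asset):
        ...

class Unconstrained(CollaborationModel):
    def may_edit(self, user, asset):
        return True                       # anyone may add or remove cubes

class TurnBased(CollaborationModel):
    def __init__(self, users):
        self.users = list(users)
        self.turn = 0
    def may_edit(self, user, asset):
        return user == self.users[self.turn]
    def next_turn(self):
        self.turn = (self.turn + 1) % len(self.users)

model = TurnBased(["hololens-user", "ios-user"])
print(model.may_edit("ios-user", "voxel-12"))   # False: not their turn yet
model.next_turn()
print(model.may_edit("ios-user", "voxel-12"))   # True
```

Keeping the permission check behind one interface is what lets new models be defined without touching the application logic, which appears to be the reuse the abstract describes.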
Article
Full-text available
Psychological ownership defines how we behave in and interact with the social world and the objects around us. Shared Augmented Reality (shared AR) may challenge conventional understanding of psychological ownership because virtual objects created by one user in a social place are available for other participants to see, interact with, and edit. Moreover, confusion may arise when one user attaches a virtual object in a shared AR environment onto the physical object that is owned by a different user. The goal of this study is to investigate tensions around psychological ownership in shared AR. Drawing on prior work, we developed a conceptualization of psychological ownership in shared AR in terms of five underlying dimensions: possession, control, identity, responsibility, and territoriality. We studied several shared AR scenarios through a laboratory experiment that was intended to highlight normative tensions. We divided participants into pairs, whereby one participant in each pair created the virtual object (object-creator) and placed it over the other person's (space proprietor) physical object or space. We recorded participants’ perceptions of psychological ownership along the 5 dimensions through surveys and interviews. Our results reveal that the paired participants failed to form a mutual understanding of ownership over the virtual objects. In addition, the introduction of virtual objects called into question participants’ sense of psychological ownership over the physical articles to which the virtual objects were attached. Building on our results, we offer a set of design principles for shared AR environments, intended specifically to alleviate psychological ownership-related concerns. Herein, we also discuss the implications of our findings for research and practice in this field.
Conference Paper
Full-text available
The MR Toolkit Peer Package is an extension to the MR Toolkit that allows multiple independent MR Toolkit applications to communicate with one another across the Internet. The master process of an MR Toolkit application can transmit device data to other remote applications, and receive device data from remote applications. Application-specific data can also be shared between independent applications. Nominally, any number of peers may communicate together in order to run a multiprocessing application, and peers can join or leave the collaborative application at any time. The Peer Package is introduced and the theory of its operation is explained. The authors' experience with a demonstration program which they have written, called multi-player handball, that uses the Peer Package is discussed
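The peer model described here, with every application both pushing its device data and polling for others', can be sketched with plain sockets. The code below is an illustrative stand-in assuming a UDP transport and a JSON packet format; the actual MR Toolkit Peer Package API and wire format are not given in this abstract.

```python
import json
import socket
import time

def send_device_data(sock, peers, tracker_pose):
    # The master process pushes device data to every known peer.
    packet = json.dumps({"type": "device", "pose": tracker_pose}).encode()
    for host, port in peers:
        sock.sendto(packet, (host, port))

def poll_peer_data(sock):
    # Drain whatever has arrived this frame; peers may join or leave at
    # any time, so an absence of packets is not an error.
    sock.setblocking(False)
    updates = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            updates.append((addr, json.loads(data)))
    except BlockingIOError:
        pass
    return updates

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 9999))
    send_device_data(sock, [("127.0.0.1", 9999)], [0.0, 1.2, 0.4])
    time.sleep(0.1)                      # let the loopback packet arrive
    print(poll_peer_data(sock))
```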
Conference Paper
Full-text available
This TechNote introduces a novel interaction technique for small screen devices such as palmtop computers or hand-held electronic devices, including pagers and cellular phones. Our proposed method uses the tilt of the device itself as input. Using both tilt and buttons, it is possible to build several interaction techniques ranging from menus and scroll bars to more complicated examples such as a map browsing system and a 3D object viewer. During operation, only one hand is required to both hold and control the device. This feature is especially useful for field workers.
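Since the contribution is the tilt-as-input mapping, a small transfer function shows the idea. The sketch below is our guess at such a mapping, not the paper's: a dead zone keeps the view still when the device is held level, and all constants are illustrative.

```python
def tilt_to_scroll(pitch_deg, dead_zone=5.0, gain=0.5, max_speed=20.0):
    # Map the device's tilt angle to a scrolling velocity (lines/frame).
    # Within the dead zone the view stays still; beyond it, tilting
    # further scrolls faster, up to a cap.
    if abs(pitch_deg) < dead_zone:
        return 0.0
    speed = gain * (abs(pitch_deg) - dead_zone)
    return min(speed, max_speed) * (1 if pitch_deg > 0 else -1)

# One-handed operation: a button selects, tilt moves.
for angle in (2, 10, 45):
    print(angle, "->", tilt_to_scroll(angle))
```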
Article
Full-text available
We are exploring how virtual reality theories can be applied to palmtop computers. In our prototype, called the Chameleon, a small 4-inch hand-held monitor acts as a palmtop computer with the capabilities of a Silicon Graphics workstation. A 6D input device and a response button are attached to the small monitor to detect user gestures and input selections for issuing commands. An experiment was conducted to evaluate our design and to see how well depth could be perceived on the small screen compared to a large 21-inch screen, and the extent to which movement of the small display (in a palmtop virtual reality condition) could improve depth perception. Results show that with very little training, perception of depth in the palmtop virtual reality condition is about as good as corresponding depth perception in a large (but static) display. Variations to the initial design are also discussed, along with issues to be explored in future research. Our research suggests that palmtop virtual reality may support effective navigation and search and retrieval in rich and portable information spaces.
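The core coupling, where physically moving the display moves the virtual viewpoint, is what produces the motion-parallax depth cue the experiment measured. The sketch below illustrates that coupling under our own naming; it is not the Chameleon implementation.

```python
import numpy as np

def palmtop_camera(display_pos, anchor=np.zeros(3)):
    # Chameleon-style coupling (sketch): the tracked position of the
    # hand-held display *is* the virtual camera position, aimed at an
    # anchored point in the information space.
    forward = anchor - display_pos
    forward /= np.linalg.norm(forward)
    return display_pos, forward      # camera origin and viewing direction

# Sweeping the palmtop left to right yields motion parallax on screen.
for x in (-0.1, 0.0, 0.1):
    origin, direction = palmtop_camera(np.array([x, 0.0, 0.5]))
    print(origin, direction.round(2))
```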
Article
Full-text available
Current user interface techniques such as WIMP or the desktop metaphor do not support real-world tasks, because the focus of these user interfaces is only on human-computer interactions, not on human-real world interactions. In this paper, we propose a method of building computer augmented environments using a situation-aware portable device. This device, called NaviCam, has the ability to recognize the user's situation by detecting color-code IDs in real world environments. It displays situation-sensitive information by superimposing messages on its video see-through screen. The combination of ID-awareness and a portable video see-through display solves several problems with current ubiquitous computing systems and augmented reality systems.
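NaviCam's loop, recognizing a color-code ID, looking up situation-sensitive information, and superimposing it on the video, can be sketched as follows. Detection is stubbed out and the ID database is invented for illustration; none of these names come from the paper.

```python
# Invented example entries; the real system maps marker IDs to
# context-sensitive messages about rooms, people, and devices.
ID_DATABASE = {
    17: "Office: occupant away until 3pm",
    42: "This printer: toner low",
}

def detect_color_code(frame):
    # The real NaviCam scans the live video for striped color-code
    # markers; here we pretend the frame already carries its ID.
    return frame.get("visible_id")

def augment(frame):
    # ID-awareness loop: recognize, look up, superimpose.
    marker_id = detect_color_code(frame)
    message = ID_DATABASE.get(marker_id)
    if message:
        frame["overlay"] = message    # drawn over the see-through video
    return frame

print(augment({"visible_id": 42}))
```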
Conference Paper
Palmtop displays have been extensively studied, but most of them simply refocus information in the real or virtual world. The palmtop display for dextrous manipulation (PDDM) proposed in this paper allows users to manipulate a remote object as if they were holding it in their hands. The PDDM system has a small LCD, a 3D mouse, and a mechanical linkage (force display). When the user locks onto an object in the center of the palmtop display, s/he can manipulate the object through motion input on the palmtop display with haptic sensation. In this paper, the features of the PDDM with haptic sensation are described, then four operating methods and the haptic representation methods for a trial model are proposed and evaluated. (See the Video Figure in the CHI '96 Video Program.)
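The lock-then-manipulate cycle is the key interaction and can be sketched as a small state machine. The class below is a hypothetical reconstruction: the names and the one-to-one motion mapping are our assumptions, and the haptic channel is omitted.

```python
class PDDMController:
    # Sketch of PDDM's cycle: centre an object on the palmtop display,
    # lock onto it, and let subsequent motion of the display move the
    # object as if it were held in the hand.
    def __init__(self):
        self.locked = None
        self.grab_display = None
        self.grab_object = None

    def lock(self, obj, display_pos):
        self.locked = obj
        self.grab_display = display_pos
        self.grab_object = list(obj["pos"])

    def move_display(self, display_pos):
        if self.locked:   # object follows the display's relative motion
            delta = [d - g for d, g in zip(display_pos, self.grab_display)]
            self.locked["pos"] = [o + d
                                  for o, d in zip(self.grab_object, delta)]

    def release(self):
        self.locked = None

pddm = PDDMController()
cup = {"pos": [0.0, 0.0, 1.0]}
pddm.lock(cup, display_pos=[0.0, 0.0, 0.0])
pddm.move_display([0.1, 0.0, 0.0])
print(cup["pos"])    # [0.1, 0.0, 1.0]
```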
Conference Paper
A virtual environment, which is created by computer graphics and an appropriate user interface, can be used in many application fields, such as teleoperation, telecommunication, and real-time simulation. Furthermore, if this environment could be shared by multiple users, there would be more potential applications. Discussed in this paper is a case study of building a prototype of a cooperative work environment using a virtual environment, where two or more people can solve problems cooperatively, including design strategies and implementation issues. An environment where two operators can directly grasp, move, or release stereoscopic computer graphics images by hand is implemented. The system is built by combining head-position-tracked stereoscopic displays, hand gesture input devices, and graphics workstations. Our design goal is to utilize this type of interface for a future teleconferencing system. In order to provide good interactivity for users, we discuss potential bottlenecks and their solutions. The system allows two users to share a virtual environment and to organize 3-D objects cooperatively.
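The cooperative grasp/move/release behavior described here implies a simple ownership rule so that two operators cannot move one object at once. The sketch below illustrates one such rule under our own naming; the paper's actual protocol may differ.

```python
class SharedScene:
    # Minimal shared-environment rule: an object can be grasped, moved,
    # and released, and a grasp by one operator blocks the other until
    # release.
    def __init__(self):
        self.objects = {}          # name -> {"pos": [...], "held_by": None}

    def grasp(self, user, name):
        obj = self.objects[name]
        if obj["held_by"] is None:          # first hand wins
            obj["held_by"] = user
        return obj["held_by"] == user

    def move(self, user, name, pos):
        obj = self.objects[name]
        if obj["held_by"] == user:
            obj["pos"] = pos                # mirrored on both workstations

    def release(self, user, name):
        if self.objects[name]["held_by"] == user:
            self.objects[name]["held_by"] = None

scene = SharedScene()
scene.objects["block"] = {"pos": [0, 0, 0], "held_by": None}
print(scene.grasp("operator-1", "block"))   # True
print(scene.grasp("operator-2", "block"))   # False until released
```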
Michael J. Zyda, Rich Gossweiler, John Morrison, Sandeep Singhal, and Michael Macedonia. Networked virtual environments. In A panel at Virtual Reality Annual International Symposium (VRAIS) '95, pp. 230–231, 1995.
Lennart E. Fahlén, Charles Grant Brown, Olov Ståhl, and Christer Carlsson. A space based model for user interaction in shared synthetic environments. In INTERCHI '93, pp. 43–48, 1993.