
Abstract

In this paper we discuss Augmented Reality (AR) displays in a general sense, within the context of a Reality-Virtuality (RV) continuum, encompassing a large class of "Mixed Reality" (MR) displays, which also includes Augmented Virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different Mixed Reality display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three-dimensional taxonomic framework comprising Extent of World Knowledge (EWK), Reproduction Fidelity (RF), and Extent of Presence Metaphor (EPM). A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.
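To make the three taxonomy axes concrete, the following minimal sketch models them as a simple data structure. The 0-to-1 scalar encoding and the example entries are our own illustrative simplification; the paper treats these dimensions qualitatively and does not prescribe numeric values.

```python
from dataclasses import dataclass

# Illustrative sketch only: the paper defines the three axes
# qualitatively; the 0.0-1.0 scalars and the example entries below
# are hypothetical, not values from the paper.
@dataclass
class MRDisplay:
    name: str
    ewk: float  # Extent of World Knowledge: 0 = world unmodelled, 1 = fully modelled
    rf: float   # Reproduction Fidelity: 0 = low (e.g. wireframe), 1 = photorealistic, real-time
    epm: float  # Extent of Presence Metaphor: 0 = monitor-based viewing, 1 = fully immersive

displays = [
    MRDisplay("monitor-based video overlay (AR)", ewk=0.2, rf=0.4, epm=0.1),
    MRDisplay("optical see-through HMD (AR)", ewk=0.5, rf=0.6, epm=0.8),
    MRDisplay("mostly modelled virtual world (AV)", ewk=0.9, rf=0.7, epm=0.9),
]

for d in displays:
    print(f"{d.name}: EWK={d.ewk:.1f}, RF={d.rf:.1f}, EPM={d.epm:.1f}")
```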
... With the rapid advancement of digital technology, Extended Reality (XR), encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), has gradually become a cutting-edge trend and research focus in the field of education (Milgram et al., 1995). These innovative technologies, by constructing immersive and interactive virtual environments, have ignited endless possibilities for improving educational practices, thus attracting widespread attention and active investment from educational institutions and experts around the world (Johnson et al., 2016). ...
... For example, using an HMD, we can superimpose multiple viewpoints on the current viewpoint, but if an HMD is not available, we need to project them using a nearby projector or display them using a public display. For this purpose, the presentation of multiple viewpoints must be designed with the reality-virtuality continuum [73], shown in Fig. 13, in mind. The reality-virtuality continuum treats the range from complete virtuality to complete reality as a continuous scale. ...
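As a rough illustration of the device-selection reasoning in the excerpt above, the sketch below maps an availability flag and a hypothetical position on the reality-virtuality continuum to a presentation modality. The `virtuality` parameter and the 0.5 threshold are assumptions for illustration, not values from the cited work.

```python
def choose_presentation(virtuality: float, hmd_available: bool) -> str:
    """Select how to present additional viewpoints to a user.

    `virtuality` is a hypothetical position on the reality-virtuality
    continuum, 0.0 (fully real) to 1.0 (fully virtual); the 0.5
    threshold is an assumption for illustration only.
    """
    if hmd_available:
        # An HMD can superimpose the extra viewpoints directly.
        return "superimpose viewpoints on the current view via HMD"
    if virtuality < 0.5:
        # Closer to the real end: anchor the content in the environment.
        return "project viewpoints with a nearby projector"
    # Closer to the virtual end: a shared screen carries the rendering.
    return "show viewpoints on a public display"

print(choose_presentation(0.3, hmd_available=False))
```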
Article
Full-text available
To acquire information from the real world and respond appropriately to life's circumstances, vision is indispensable for humans. However, due to its ubiquitous nature, we often perceive the world unconsciously, thereby overlooking the opportunity to contemplate the significance of sight. Seeing goes beyond being a mere method of gathering information; it is an act of uncovering new perspectives and engaging in profound exploration. Theories on creative problem-solving strongly advocate for the advantages of adopting multiple viewpoints. By generating a multitude of alternatives through information gleaned from diverse perspectives, we enhance our ability to expand the range of choices available to us, thus facilitating more effective problem-solving. In this paper, we present Posthuman CollectiveEyes, a digital platform that enriches the human act of visual perception by integrating diverse viewpoints such as collective human, augmented human, and nonhuman viewpoints, and constructs posthuman viewpoints from the diverse viewpoints. In the design of Posthuman CollectiveEyes, we adopt the more-than-human perspective, widely employed in the social sciences to analyze the impact of technology on human actions and decision-making in organizations and societies. This perspective enables us to uncover knowledge that conventional human-centered approaches cannot capture, as the objective of Posthuman CollectiveEyes is to expand human cognitive capabilities through enhanced visual perception. The novel contribution of our approach lies in demonstrating that the design of innovative digital platforms aimed at enhancing human abilities necessitates a fresh design approach that incorporates the more-than-human perspective.
... In 1994, an important article published by Milgram and three colleagues (MILGRAM et al., 1994) presented what came to be known as the "reality-virtuality continuum" or "Milgram continuum", which demonstrates that the technological terms "VR/AR/XR" are not synonyms. In Figure 1 it can be seen that VR sits at the far right end while the "real" world lies at the far left end; the virtual environment thus becomes a simulation of the real environment. ...
Article
Full-text available
Individuals with Autism Spectrum Disorder (ASD) face many barriers to their development, especially regarding the acquisition and progress of social and/or cognitive skills that affect their daily routine and behavior. This systematic literature review aimed to analyze existing studies and map the context of the alternative application of Virtual Reality (VR) technology as a therapeutic aid in the treatment of individuals with ASD and social and/or cognitive skill deficits. The Methodi Ordinatio methodology was used for this systematic review, which, through its eligibility criteria, selected 20 final articles for evaluation. The research found that, given ongoing technological progress, applying the characteristics of VR not only brings benefits for skill development in individuals with ASD but has also proven effective, convenient, and very promising.
... Meanwhile, immersive technologies are becoming readily available for more interactive, engaging, and efficient medical practices. Figure 1 describes the properties of immersive technologies by juxtaposing their two opposite ends, reality and virtuality (6, 7). In this research, immersive technologies will be used interchangeably with Extended Reality (XR), an umbrella term encapsulating Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). ...
Article
Full-text available
Objectives: This research focuses on how built environment experts can contribute to MXR-enabled digital innovation as part of the multidisciplinary team effort to ensure post-pandemic resilience in the healthcare built environment. The goal of this research is to help healthcare providers, built environment experts, and policy makers respectively: (1) Advocate the benefits of MXR for innovating health and social care; (2) Spark debate across networks of expertise to create a health-promoting environment; and (3) Understand the overriding priorities in making effective pathways to the implementation of MXR.

Methods: To highlight the novelty of this research, the study relies on two qualitative methodologies: exploratory literature review and semi-structured interviews. Based on the evaluation of prior works and cross-national case studies, hypotheses are formulated from three arenas: (1) Cross-sectional Initiatives for Post-pandemic Resilience; (2) Interoperability and Usability of Next-gen Medicines; and (3) Metaverse and New Forms of Value in Future Healthcare Ecosystems. To verify those hypotheses, empirical findings are derived from in-depth interviews with nine key informants.

Results: The main findings are summarized under the following three themes: (1) Synergism between Architecture and Technology; (2) Patient Empowerment and Staff Support; and (3) Scalable Health and Wellbeing in Non-hospital and Therapeutic Settings. Firstly, both the built environment and healthcare sectors can benefit from the various capabilities of MXR through cross-sectional initiatives, evidence-based practices, and participatory approaches. Secondly, a confluence of the knowledge and methods of HCI and HBI can increase the interoperability and usability of MXR for patient-centered and value-based healthcare models. Thirdly, the MXR-enabled technological regime will largely shape new forms of value in healthcare premises by fostering more decentralized, preventive, and therapeutic characteristics in future healthcare ecosystems.

Conclusion: Whether virtual or physical, our healthcare systems have placed great emphasis on the rigor of the evidence-based approach linking health outcomes to a clinical environment. Henceforth, built environment experts should seek closer ties with the MXR ecosystems for the co-production of scalable health and wellbeing in non-hospital and therapeutic settings. Ultimately, this is to improve resource efficiency in the healthcare sector while considering the transition of health resources towards in silico status by increasing the implementation of MXR.
... AR devices are hardware tools that overlay virtual information onto the real world, enhancing what we see, hear, and feel [61]. These devices can be categorized in several ways, but the following classification covers and separates current devices in a practical manner [62]: (1) Headsets and Smart Glasses, (2) Handheld Devices, (3) Projection-based Systems, (4) Spatial Augmented Reality (SAR), (5) Head-Up Displays (HUD), and (6) AR Contact Lenses. ...
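The six categories in the excerpt above can be captured as a simple enumeration. The sketch below is only an illustrative encoding of that list; the class and member names are ours, not code from the cited survey.

```python
from enum import Enum, auto

# Illustrative encoding of the six AR device categories listed above.
class ARDeviceCategory(Enum):
    HEADSET_OR_SMART_GLASSES = auto()
    HANDHELD_DEVICE = auto()
    PROJECTION_BASED_SYSTEM = auto()
    SPATIAL_AR = auto()
    HEAD_UP_DISPLAY = auto()
    AR_CONTACT_LENSES = auto()

for category in ARDeviceCategory:
    print(category.name)
```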
Article
Full-text available
An Augmented Reality (AR) system is a technology that overlays digital information, such as images, sounds, or text, onto a user’s view of the real world, providing an enriched and interactive experience of the surrounding environment. It has evolved into a potent instrument for improving human perception and decision-making across various domains, including industrial, automotive, healthcare, and urban planning. This systematic literature review aims to offer a comprehensive understanding of AR technology, its limitations, and implementation challenges in the most significant areas of application in engineering and beyond. The review will explore the state-of-the-art AR techniques, their potential use cases, and the barriers to widespread adoption, while also identifying future research directions and opportunities for innovation in the rapidly evolving field of augmented reality. This study serves as a compilation of the existing technologies in the subject, especially useful for beginners in AR or as a starting point for developers who seek to innovate or implement new technologies, making clear the limitations and current challenges that could arise.

Keywords: augmented reality; survey; literature review; immersive environments; mixed reality; urban planning; intelligent transportation; AR devices; user engagement; virtual objects; image processing; real-time interaction
Article
To effectively communicate the archaeological remains of the distant past is a challenge: little may be left to see, and the culture may be very difficult to comprehend. This paper compares two technological approaches to communicating Roman archaeology in museums: virtual reality and tangible interaction. Although very different in rationale, design, and implementation, the two explorative studies share the same aim of engaging visitors with important exhibits. The challenge is to effectively communicate the exhibit's original and cultural context. In ‘Views of the Past’, virtual reality was used to support an environmental narrative experience in which fragments of history are found scattered in the 3D reconstructed forum of Augustus in Rome. In ‘My Roman Pantheon’, a tangible interactive installation, visitors act as Romans living along Hadrian's Wall, making offerings to the deities of the Roman pantheon in order to secure their protection. In both explorative studies, the combination of features (virtual reality + narratives, tangible + acting) makes visitors feel ‘cultural presence’, where the perception of a place is combined with an awareness of the culture and an understanding of the past. Although they rely on very different sensory channels (sight for virtual reality, touch for tangible interaction), both are promising mechanisms for designing effective visitor experiences in challenging cultural heritage settings.
Chapter
This chapter discusses the use of extended reality (XR) for the design and development of online learning applications using real-time physics simulations. The authors propose an instructional design based on the ARCS motivational model to improve aspects of the presentation, organization, and distribution of learning content for an online XR learning application, which presents an interactive and exploratory learning environment for physics education through a real-time physics simulator of dynamics and kinematics. Depending on the characteristics of an XR-compliant device and platform, the authors can offer XR experiences that range from augmented reality (AR) to non-immersive and immersive virtual reality (VR) environments in a single application, available to any device with internet capabilities. To evaluate the instructional design of the XR application, the authors present an assessment using John Keller's attention, relevance, confidence, and satisfaction (ARCS) learning motivation model, which makes it possible to analyze the correlation between students' motivation and the learning technique.
Chapter
In recent years, the term Augmented Humanity (AH) has been increasingly used in the research community. AH is a term that refers to the use of advanced technologies to enhance human capabilities, whether physical, cognitive, or social. Technologies used to achieve AH range from wearable devices such as smart watches to cybernetic implants and advanced prostheses. The purpose of this article is to present a proposal for a taxonomy of AH. After describing the main research topics that have been worked on in this area, the reasons why such a taxonomy is needed are explained. Then, this article presents three main approaches to AH: the enhancement of physical, intellectual, and social capabilities. In addition, several challenges of providing a taxonomy of AH are introduced. Finally, it may be concluded that this proposed AH taxonomy will help to organize the field of research and establish a common knowledge base and terminology for future studies.
Article
Full-text available
Typical teleoperator display systems to date have relied on standard monoscopic video as the primary feedback link. Research clearly shows that using stereoscopic video (SV) is generally a significant improvement over monoscopic video (MV) for most teleoperation tasks. Abundant as the benefits of SV are, they can be dramatically increased by superimposing stereoscopic graphics (SG) on the SV image. The resulting SV + SG system can greatly enhance the benefits of SV alone in the understanding of a remote world. This combination of media results in an exploration tool uniquely suited to examining different environments, whether remote, microscopic, or artificial. The SV + SG technology can also serve as a command tool for controlling manipulators (remote, microscopic, or artificial) in those environments. An SV + SG system was implemented, and an experiment was performed to determine whether or not SG images were actually perceived as existing at the desired location in the SV view. The experiment compared the performance of a virtual SV + SG pointer with that of a real-world pointer, and showed that the virtual pointer could be positioned as accurately as the real pointer, with only a slightly higher variance.
Article
Full-text available
This paper describes a number of aspects of a display technology under development, which involves the integration of stereoscopic computer graphics and stereoscopic video displays. The background and justification for this development are discussed, based on the need of operators of remotely controlled vehicles and/or manipulators to estimate absolute sizes and locations of objects at a remotely viewed site. The basic technology involves superimposing an interactively controllable computer-generated stereographic cursor onto a stereoscopically viewed video image. Absolute measurements can be made with this system, based on relative comparison of cursor position with target object location. Experimental results are presented in which the ability of subjects to perform such tasks was investigated. In general, results were promising; subjects were able to align virtual pointers with real targets essentially as well as they were able to manipulate real objects. A number of implications of this technology for the enhancement of three dimensional video displays are discussed.
Article
Full-text available
Mixed Reality (MR) visual displays, a particular subset of Virtual Reality (VR) related technologies, involve the merging of real and virtual worlds somewhere along the 'virtuality continuum' which connects completely real environments to completely virtual ones. Augmented Reality (AR), probably the best known of these, refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects. The converse case on the virtuality continuum is therefore Augmented Virtuality (AV). Six classes of hybrid MR display environments are identified. However, quite different groupings are possible, and this demonstrates the need for an efficient taxonomy, or classification framework, according to which essential differences can be identified. An approximately three-dimensional taxonomy is proposed, comprising the following dimensions: extent of world knowledge, reproduction fidelity, and extent of presence metaphor.
Article
Full-text available
One of the most promising and challenging future uses of head-mounted displays (HMDs) is in applications where virtual environments enhance rather than replace real environments. To obtain an enhanced view of the real environment, the user wears a see-through HMD that superimposes 3D computer-generated objects on his/her real-world view. This see-through capability can be accomplished using either an optical or a video see-through HMD. We discuss the tradeoffs between optical and video see-through HMDs with respect to technological, perceptual, and human factors issues, and describe our experience designing, building, using, and testing these HMDs.
This paper describes studies on the perception of virtual object locations. It explores the behavior of several factors related to depth perception, especially the effect of inter-pupillary distance (IPD) mismatch and the interplay of image blur and binocular disparity. IPD mismatch (caused by errors in estimating this parameter) results in a perceptual error in the depth of virtual objects. Image blur is also a source of error in depth representation; in some cases it was found to be a very strong depth cue. The results of a series of experiments on IPD mismatch and image blur are also presented.
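As a rough illustration of why IPD mismatch produces a systematic depth error, the sketch below applies the standard similar-triangles model of screen disparity. The function, parameter names, and example values are our own simplification for illustration; they are not the experimental model used in the paper.

```python
def perceived_depth(intended_depth: float, screen_dist: float,
                    ipd_render: float, ipd_viewer: float) -> float:
    """Depth (metres) at which a stereo point is perceived when it is
    rendered for ipd_render but viewed by someone with ipd_viewer.

    Simplified similar-triangles model of screen disparity; our own
    illustration of the IPD-mismatch effect, not the paper's model.
    """
    # On-screen disparity the renderer produces for the intended depth.
    disparity = ipd_render * (1.0 - screen_dist / intended_depth)
    # Depth at which the viewer's eye rays intersect for that disparity.
    return ipd_viewer * screen_dist / (ipd_viewer - disparity)

# Example: a point intended at 2 m, screen at 1 m, rendered for a 65 mm
# IPD but viewed with a 60 mm IPD, is perceived at ~2.18 m, not 2 m.
print(perceived_depth(2.0, 1.0, 0.065, 0.060))
```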
Article
A telerobot control system using stereoscopic viewing has been developed. The objective of the system is to implement a world-model mapping capability, using live stereo video to provide an operator with three-dimensional image information of an unstructured environment, and using stereo computer graphics renderings of wire-frame models of sought-after features in the environment to relate robotic task information. The operator visually correlates or matches the stereo video image of the environment with the graphic image in order to register the world-model space to the manipulator's environment. This allows operator control of the manipulator through teleoperation in unstructured environments, with a changeover to autonomous operation when the task can be constrained and becomes repetitive. Details of the robot control, stereo imaging, and system control components are provided.
Article
This paper describes our studies on the perception of virtual object locations. We explore the behavior of various factors related to depth perception, especially the interplay of fuzziness and binocular disparity. Experiments measure this interplay with the use of a three-dimensional display. Image fuzziness is ordinarily seen as an effect of aerial perspective in two-dimensional displays of distant objects. We found that it can also be a source of depth representation in three-dimensional display space. This effect occurs with large individual variations: for subjects who have good stereopsis it does not significantly affect depth perception, but it can be a very strong depth cue for subjects who have weak stereopsis. A subsequent experiment measured the effects when both the size and brightness of sharp stimuli are adjusted to match a standard fuzzy stimulus. The results suggest that the fuzziness cue at short range is explained by other cues (i.e. the size cue and the brightness cue). This paper presents the results of a series of such experiments.
Article
Along with the marriage of motion pictures and computers has come an increasing interest in making images appear to have a greater degree of realness or presence, or 'realspace imaging'. Such topics as high-definition television, 3-D, fisheye lenses, surrogate travel, and 'cyberspace' reflect this interest. These topics are usually piled together and are unparsable, with the implicit assumptions that the more resolution, the more presence, and the more presence, the better. This paper proposes a taxonomy of the elements of realspace imaging. The taxonomy is organized around six sections: (1) monoscopic imaging, (2) stereoscopic imaging, (3) multiscopic imaging, (4) panoramics, (5) surrogate travel, and (6) real-time imaging.
Article
The ability to make on-line adjustments to stereoscopic camera position parameters dynamically, during execution of telemanipulation tasks, allows one to maintain a theoretically 'optimal' camera configuration in response to changing viewing conditions. Associated with this, however, is the problem of the observer being forced to adapt to a (continuously) changing relationship between perceived inter-object distances in the depth plane and the corresponding real distances. One problem in particular is the potential conflict between varying stereoscopic depth cues and unchanging size cues. Two experiments were performed. In the first, we investigated how depth judgement ability varied with unsignalled changes in camera convergence distance. This resulted in significant changes in distance judgement, with overestimation for increases in camera separation and underestimation for decreases. Short-term feedback on judgement error was sufficient to correct the changes. In the second experiment, on-screen calibrated depth cues were added by means of overlaid stereoscopic computer graphics, causing the significant estimation errors found in the first experiment to disappear. The implication of this is that distance/depth judgement can in principle be rescaled to compensate for perceptual conflicts caused by changing camera configuration, by providing either real or virtual depth scaling cues at the task site.
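A first-order way to see why unsignalled changes in camera separation rescale depth judgements: for small depth intervals, the disparity difference between two objects grows linearly with the camera baseline, so an observer still calibrated to the old baseline misjudges the interval by roughly the ratio of baselines. The sketch below encodes this approximation; it is our own simplification (parallel cameras, small intervals), not the analysis from the paper.

```python
def judged_depth_interval(true_interval: float, baseline_now: float,
                          baseline_calibrated: float) -> float:
    """Depth interval an observer reports when the camera baseline has
    changed without the observer recalibrating.

    First-order approximation: disparity differences scale linearly
    with baseline, so judged intervals scale by the baseline ratio.
    Our own simplification, not the paper's analysis.
    """
    return true_interval * (baseline_now / baseline_calibrated)

# Unsignalled increase in camera separation from 65 mm to 90 mm makes a
# 10 cm depth interval look like ~13.8 cm: overestimation, as reported.
print(judged_depth_interval(0.10, 0.09, 0.065))
```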