Figure 2 - uploaded by Abraham G. Campbell
Source publication
In recent years, an increasing number of Mixed Reality (MR) applications have been developed using agent technology — both for the underlying software and as an interface metaphor. However, no unifying field or theory currently exists that can act as a common frame of reference for these varied works. As a result, much duplication of research is ev...
Contexts in source publication
Context 1
... Reality Agent recorded in scientific literature and was years ahead of similar efforts. The Invisible Person (Psik et al., 2003) is based on the ALIVE system and employs a humanoid virtual character in an effort to engage visitors at the Vienna Museum of Technology in a game of Tic Tac Toe. The game board is digitally added onto the floor, and both user and character control the game via body postures and hand gestures. The character’s internal state is based on an emotional system that directs its actions, facial expressions and manner of interaction with the user. Feedback from visitors to the exhibition commends the lifelikeness of the agent. Storytelling engines also explicitly model individual agents’ goals and motivations in order to create dynamic narratives from the interplay of these goals and the user’s actions. One of the first projects to apply digital narrative to Mixed Reality is the cultural heritage application GEIST (Kretschmer et al., 2001). GEIST immerses the user in a thrilling adventure involving events from the Thirty Years’ War. As the user roams the old town of Heidelberg, he can enter certain ‘hotspots’ in which ghosts from the past appear in the form of virtual characters. They plead for the user’s help in solving the mystery surrounding their death, creating a quest around the city in which the user has to learn about the history of places and events in order to succeed. In the physical domain, GEIST agents are limited to sensing the user’s position and orientation, while in the virtual domain, which is populated with spatially aligned models of buildings and other virtual objects, interaction is much more varied and versatile. However, due to the spirit nature of the GEIST agents, corporeal presence of the ghosts is inhibited, as they appear translucent and float in midair.
Another prominent example of MR storytelling is the Mixed-Reality Interactive Storytelling (MRIS) project (Charles et al., 2004), which allows a user to immerse himself in a spy thriller story in the role of the villain. It does so by capturing the user’s image in real time, extracting it from the background, and inserting it into a virtual world populated by autonomous synthetic actors, with which the user then interacts using natural language and gestures. The resulting image is projected onto a large screen facing the user, who sees his own image embedded in the virtual stage alongside the synthetic actors. Notably, when viewed from the perspective of Milgram’s continuum of MR displays (see Figure 2), MRIS is a rare example of Augmented Virtuality, i.e. a virtual world with added ‘real’ components. Finally, Virtual Gunslinger (Hartholt et al., 2009) is a more recent example of a Mixed Reality storytelling experience. In it, the user plays the character of a cowboy in a Wild West saloon who gets challenged to a duel. The user is placed in an environment featuring a real bar counter and a virtual bartender and outlaw, both of which are projected onto screens placed in the room. The user can interact with the agents using natural language dialogues and gestures, e.g. moving his arm as if to pull a gun when duelling with the outlaw. Common to all these strong agents is that the agent architecture facilitates the development of agents that exhibit realistic and lifelike behaviour. But strong agent systems are also often used to realise highly complex and distributed systems that deal with ...
Context 2
... (1998, 1999) and Duffy (2000), who claim that the social environment includes the agent’s interactions with other agents or human users. Within the context of this research, embodiment is seen as the strong provision of environmental context (structural coupling) with a social element included. An agent is embodied if it is situated in a particular environment, has a body, and senses and interacts with that environment and any other individuals located therein. This definition of embodiment coincides with that of Dourish (2001), who emphasised the importance of an embodied approach to Human-Computer Interaction in light of new developments in Ubiquitous and Social Computing, and proposed a number of design guidelines for the development of Embodied Interaction. Early Artificial Intelligence (AI) research focused on reasoning based upon the search of abstract symbol structures (Newell and Simon, 1976). But this unembodied approach, sometimes referred to as ‘Good Old Fashioned AI’ (Haugeland, 1985), had a number of flaws. As noted by Steels (2000) and Dautenhahn (1999), humans have a tendency to ‘animate’ the world and are unlikely to see an unembodied agent as intelligent. Therefore embodiment has, in recent years, come to be seen as an important requirement in the development of an intelligent system (Duffy et al., 2005). The move away from the unembodied approach was triggered by a series of papers by Brooks (1991a,b), who emphasised the situatedness and embodiment of an agent. Brooks’ popularisation of the reactive approach served as a catalyst for the creation of a more embodied approach to AI, in which an agent must be structurally coupled with its environment if it is to be seen as intelligent.
While robot agents are embodied in a physical form, using sensors and actuators to perceive and act upon the physical world, virtual agents can also be considered embodied in their simulated environment, at least to the extent to which the simulation manages to create a structural coupling between the agent and the simulated environment. Both strands are motivated by the desire to create agents that are capable of behaving and interacting in an intelligent manner with other agents. Crucially, both robotic and virtual agents can be considered synthetic characters, although differently embodied, i.e. in physical or digital form. Key to the definition of Mixed Reality Agents, as outlined within this paper, is the idea of a Mixed Reality environment. Milgram and Kishino (1994) define MR in terms of their Reality-Virtuality Continuum (Figure 2) whereby Mixed Reality is the space between a purely physical (or ‘real’, as they describe it) environment and a purely virtual environment. Each MR environment can be seen, to a greater or ...
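The qualitative ordering of Milgram and Kishino's Reality-Virtuality Continuum can be pictured with a small sketch. The numeric `virtuality` score and the 0.5 threshold below are illustrative assumptions made here for demonstration; the original taxonomy defines no numeric boundaries, only the ordering real environment, Augmented Reality, Augmented Virtuality, virtual environment.

```python
from enum import Enum


class ContinuumRegion(Enum):
    """Regions of Milgram and Kishino's Reality-Virtuality Continuum."""
    REAL = "real environment"
    AUGMENTED_REALITY = "augmented reality"
    AUGMENTED_VIRTUALITY = "augmented virtuality"
    VIRTUAL = "virtual environment"


def classify(virtuality: float) -> ContinuumRegion:
    """Map an illustrative position on the continuum to a region.

    `virtuality` lies in [0, 1]: 0 = purely physical, 1 = purely virtual.
    The 0.5 boundary between AR and AV is an arbitrary choice for this
    sketch; the continuum itself is, well, continuous.
    """
    if not 0.0 <= virtuality <= 1.0:
        raise ValueError("virtuality must lie in [0, 1]")
    if virtuality == 0.0:
        return ContinuumRegion.REAL
    if virtuality == 1.0:
        return ContinuumRegion.VIRTUAL
    # Everything strictly between the endpoints is Mixed Reality:
    # mostly-real environments with virtual additions read as AR,
    # mostly-virtual environments with real additions read as AV.
    return (ContinuumRegion.AUGMENTED_REALITY if virtuality < 0.5
            else ContinuumRegion.AUGMENTED_VIRTUALITY)
```

On this sketch, a system overlaying virtual labels on a camera view of the physical world would sit near the real end, while a system such as MRIS, which inserts the user's real image into a virtual stage, would fall on the Augmented Virtuality side, e.g. `classify(0.8)`.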
Similar publications
As computing and displays become more pervasive and wireless networks are increasing the connections between people and things, humans inhabit both digital and physical realities. In this paper we describe our prototype Mirror Worlds framework, which is designed to fuse these realities: Fusality. Our goal for Fusality is to support innovative resea...
Citations
... Lucas et al. [49] also demonstrated that virtual human interviewers can enhance service members' disclosure of mental health symptoms. Moreover, virtual agents in mixed reality environments offer unique opportunities for blending virtual and physical interactions [33,57]. For example, Kim et al. [41] demonstrated how subtle environmental interactions, such as airflow influencing both virtual and real objects, can increase the sense of social presence in mixed reality environments by making virtual agents seem more aware of and connected to the physical space around them. ...
Robotic ultrasound systems have the potential to improve medical diagnostics, but patient acceptance remains a key challenge. To address this, we propose a novel system that combines an AI-based virtual agent, powered by a large language model (LLM), with three mixed reality visualizations aimed at enhancing patient comfort and trust. The LLM enables the virtual assistant to engage in natural, conversational dialogue with patients, answering questions in any format and offering real-time reassurance, creating a more intelligent and reliable interaction. The virtual assistant is animated as controlling the ultrasound probe, giving the impression that the robot is guided by the assistant. The first visualization employs augmented reality (AR), allowing patients to see the real world and the robot with the virtual avatar superimposed. The second visualization is an augmented virtuality (AV) environment, where the real-world body part being scanned is visible, while a 3D Gaussian Splatting reconstruction of the room, excluding the robot, forms the virtual environment. The third is a fully immersive virtual reality (VR) experience, featuring the same 3D reconstruction but entirely virtual, where the patient sees a virtual representation of their body being scanned in a robot-free environment. In this case, the virtual ultrasound probe mirrors the movement of the probe controlled by the robot, creating a synchronized experience as it touches and moves over the patient's virtual body. We conducted a comprehensive agent-guided robotic ultrasound study with all participants, comparing these visualizations against a standard robotic ultrasound procedure. Results showed significant improvements in patient trust, acceptance, and comfort. Based on these findings, we offer insights into designing future mixed reality visualizations and virtual agents to further enhance patient comfort and acceptance in autonomous medical procedures.
... MIX, a concept initially introduced by Milgram & Kishino (1994), describes the continuum between fully virtual and fully real environments. MIX refers to a real environment that allows for real-time interaction with virtual experiences (Cheok et al., 2009; Holz et al., 2011; Milgram & Kishino, 1994). It encompasses a broad definition that includes a range of applications, from digital layers in camera views to the use of physical objects for interaction with digital screens and the development of virtual reality interactions with haptic feedback (Lindgren & Johnson-Glenberg, 2013). ...
This study used network meta-analysis to investigate the impact of the use of interactive learning environments (ILE) tools (augmented reality (AR), virtual reality (VR), mixed reality (MIX), and interactive digital games (GAME)) in science education on learning outcomes. A total of 53 primary studies were retrieved from the literature according to the inclusion/exclusion criteria. According to this, MIX demonstrated the largest effect size compared to conventional teaching (CON) for both cognitive and affective outcomes. AR exhibited a larger effect size than VR for affective outcomes but did not differ significantly from VR or GAME for cognitive outcomes. GAME outperformed CON for cognitive outcomes but did not differ significantly from AR or VR for either outcome. VR’s effect size on cognitive outcomes was not significantly different from CON but was significantly higher for affective outcomes. Indirect comparisons revealed no significant differences between MIX and the other ILE formats for either outcome. Network analysis ranked AR as the most effective format for both cognitive and affective outcomes. These findings highlight the potential of ILEs, particularly AR, for enhancing learning outcomes. Limitations of the study include the lack of direct comparisons for MIX, high heterogeneity, and publication bias in some binary comparisons. More primary studies are needed to address these limitations and increase the generalizability of the findings.
... Due to the various advantages it provides, Mixed Reality (MR) has seen increased usage in recent years. MR is not always distinctly separated from AR, and many researchers consider the two synonymous, but it can be said to be a hybrid of AR and VR (Morimoto et al., 2022). This is because MR results from blending virtual computer graphic objects with a real 3D scene, or from incorporating physical world elements into a digital environment (Pan et al., 2006; Holz et al., 2011). MR stands out because it alleviates the limitations of VR, such as its exclusion of the physical environment, and of AR, such as its inability to interact with 3D data packets (Sakai et al., 2020). ...
... The concept of agents and humans coexisting in a shared MR environment has introduced greater intelligence into MR experiences. Prior studies have highlighted the general benefits of MR supported by CVAs [2,32], enhancing both task performance and the social aspects of user interaction [32,51]. This enhancement boosts engagement, motivation, and performance in virtual settings [51,22]. ...
... We employed the CVA classification design architecture [21] and the MiRAS (Mixed Reality Agents) Cube Taxonomy [32] to integrate a CVA named "Katie" into the MR application for on-demand user guidance. We explored two different modalities for Katie: a Voice-only version and an Embodied form, which incorporated the voice with basic lip-syncing and idle movements (see Figure 1). ...
Conversational Virtual Agents (CVAs) offer a promising approach for enhancing user task performance in Mixed Reality (MR) environments. This paper explores the integration of a CVA into an MR application designed to assist in solving a 2D physical puzzle, offering enhanced spatial cognitive capabilities. Using the CVA classification architecture and the MiRAS (Mixed Reality Agents) Cube Taxonomy, we developed an MR system with a state-aware assistant to guide users in a puzzle-solving task only when requested. The primary research question is whether or not the CVA needs to be embodied. We conducted a study with 34 participants to investigate the influence of Voice-only and Embodied CVAs on puzzle-solving performance, user interactions with the assistant, the assistant's social presence, overall system cognitive workload, and users' perceptions of future system use. Both modalities showed equivalent outcomes regarding the number of position- and orientation-related queries, perceived usability, message and affective understanding, performance, frustration, and usefulness. However, results showed that Voice-only CVA significantly enhanced task efficiency: participants completed puzzles more quickly and accurately, reporting lower effort than in the Embodied condition. These findings suggest that Voice-only CVAs may be more effective for tasks like puzzle solving, where auditory guidance alone appears sufficient to support better performance.
... Researchers have also depicted the virtual and real worlds as two ends of a line and defined the environments between these two worlds as MR. In other words, in MR, virtual and real spaces are spatially merged (Holz et al., 2011). In this context, it can be said that MR is a combination of both VR and AR. ...
This study aims to conduct a bibliometric analysis of the articles published on Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) research in the field of education. This study also aims to provide more comprehensive information on research trends by conducting a systematic review based on bibliometric analysis data. Therefore, this study was designed using multiple research methods. In this direction, bibliometric analysis was conducted first. After the bibliometric analysis, the systematic review technique was used to evaluate the most cited studies. VOSviewer was used to analyze bibliometric data, and the MaxQda program was used to analyze systematic review data. In this study, the findings showed that educational research conducted with AR and VR started to be conducted in the 1990s. On the other hand, it was determined that the integration of MR research into education began in the mid-2000s. The findings showed that the keywords virtual reality, augmented reality, education, medical education, simulation, and mixed reality, respectively, were used more in the studies found in Web of Science. Also, it was observed that research on AR, VR, and MR was mostly conducted in the United States of America and China. On the other hand, it was concluded that the studies were published more in "Education and Information Technology" and "Interactive Learning Environment" journals. Three publications by Guido Makransky ranked in the top ten in terms of citation count. Similarly, Makransky ranked first among the authors who published the most articles. Finally, it was observed that the studies conducted with these technologies were mostly written by two, three, and four authors.
... An MR environment combines the virtual and physical domains in a way that is spatially coherent. The fusion of these domains creates an environment where both the virtual and physical elements coexist seamlessly (Holz et al., 2011). ...
Designing the built environment with an inclusive approach that allows equal access to everyone is necessary for reducing social inequalities. Creating a physical environment that includes all segments of society and activates all senses is crucial in museum spaces. Digital interactions used in museum spaces offer new possibilities and interfaces to eliminate inequalities and increase inclusiveness in museum spaces. This study examines the integration of digital interactives in museums with a focus on the social museum concept. Through a comprehensive review of literature spanning museum studies on social museums, digital interactives, and inclusive museum concepts, this research investigates the role of digital interactives in fostering social engagement and facilitating interactive learning experiences within museum settings. Drawing on the theoretical framework and practical examples, the paper explores how to employ digital technologies strategically to enhance visitor interaction, promote inclusivity, and facilitate knowledge sharing in social museum environments. This research demonstrates the transformative potential of digital interactives in museums as social spaces and provides a comprehensive understanding of the dynamic relationship between technology, museum practices, and social inclusion.
... This definition was extended by Holz et al. with the proposal of a mixed reality agent (MiRA), which is defined as an agent embodied in an MR environment. 3 These agents exist at the intersection of physical reality, virtual reality, and Agenthood as shown in Figure 3. In this article, a digital being has significantly similar characteristics to a MiRA; however, given the limits of MR, which are discussed later, a digital being possesses capabilities that go beyond those of a MiRA and encompasses a wider range of characteristics. ...
... 17 These and similar works focus on how humans perceive digital beings. Works that focus on how digital beings perceive and interact with the physical world are Holz et al. 3,18 and Skarbez et al. 19 and are related to the work presented in this article. ...
... Holz et al. 3 presented the MiRA Cube Taxonomy as a categorization system for MiRAs. It introduces a taxonomy based on three axes: corporeal presence (the degree of physical or virtual representation, i.e. embodiment, in MR), interactive capacity (the ability to sense and act upon the physical/virtual environment), and the level of agency (the extent of the agent's autonomy and capabilities). ...
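The three axes of the MiRA Cube can be pictured as a small data structure. The following sketch is illustrative only: the 0-to-1 scores, field names, and the example placement of a GEIST-style ghost are assumptions made here for demonstration, not values prescribed by the taxonomy.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MiRAProfile:
    """Position of an agent along the three MiRA Cube axes.

    Each axis uses an illustrative 0.0-1.0 scale (0 = entirely
    virtual / weak, 1 = entirely physical / strong); the original
    taxonomy does not prescribe numeric values.
    """
    corporeal_presence: float    # physical vs. virtual representation in MR
    interactive_capacity: float  # sensing/acting on the physical vs. virtual environment
    agency: float                # degree of autonomy and capability

    def describe(self) -> str:
        """Render the dominant side of each axis as a short phrase."""
        def side(value: float, low: str, high: str) -> str:
            return low if value < 0.5 else high

        return (f"{side(self.corporeal_presence, 'virtually', 'physically')} embodied, "
                f"{side(self.interactive_capacity, 'virtually', 'physically')} interactive, "
                f"{side(self.agency, 'weak', 'strong')} agency")


# Hypothetical placement: a translucent ghost character that mainly senses
# the user's position in the physical world but acts in the virtual one,
# driven by a goal-based storytelling engine.
ghost = MiRAProfile(corporeal_presence=0.2, interactive_capacity=0.3, agency=0.8)
print(ghost.describe())  # virtually embodied, virtually interactive, strong agency
```

A physical robot with an autonomous architecture would instead score high on all three axes, landing in the opposite corner of the cube.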
Work in the metaverse and development of taxonomies to describe aspects of these types of realities have been from a human perspective. Consequently, very little attention has been paid to the perspective of the digital beings that inhabit these worlds and how they can view and interact with us in our realities. This article introduces the Dyadic-XV taxonomy, a taxonomy created when the eXtendiVerse taxonomy is reflected and viewed from the perspective of digital beings. This work builds on prior taxonomies that focus on the Reality-Virtuality continuum and shows that the Sociality axis of the XV and Dyadic-XV taxonomies reveals potentially new research areas built on prior work on intelligent agents. New digital beings like Ensomadroids and Pragmavatars are presented, which can be used as a path forward from the earlier works to new areas of research. This taxonomy will be valuable to game developers, social simulation practitioners and virtual world designers.
... One of the technologies that can present digital objects in the real world and interact with them by using touch is Mixed Reality. Mixed Reality refers to the real environment that allows interaction along with virtual experiences [7]. Mixed Reality is a combination of Augmented Reality and Virtual Reality that offers the ability to interact physically with virtual objects in the real world [8], [9]. ...
In the current era, learning has been touched by technology that develops very rapidly, one example being the metamorphosis of books from print to digital form. Therefore, we need books in digital form, which are called digital books. Digital books will be more interesting if they can turn our environment into a digital space. One technology that can present digital objects in the real world and interact with them using touch is mixed reality. The purpose of the study is to produce a mixed reality-based digital book that is valid for improving students’ numeracy literacy skills. This study used Borg and Gall’s research and development pattern method. The data were collected through expert validation for product testing. In media validation, which includes general aspects such as aspects of learning presentation, aspects of language feasibility, and graphic aspects, the result meets the valid criteria with an average score from 5 validators of 92.8%. Material validation includes general aspects such as aspects of material substance, learning aspects, and beneficial aspects; the result meets the valid criteria with an average score from 5 validators of 92.4%. Based on the results of media and material validation of the digital book, it can be concluded that it is suitable for use. The implication of this study is to make it easier for students to learn using digital books. Keywords: digital book, mixed reality, literacy, numeracy
... However, their review focused on the degree of embodiment of social agents in the environment, rather than on multi-embodiment and identity. Holz et al. [31] later presented a survey and taxonomy of Mixed Reality Agents in 2011, limited to categorising social agents embodied in mixed reality into axes of agency, corporeal presence, and interactive capacity. Deng et al. [18] comprehensively reviewed physical embodiment in socially interactive robots. ...
... We found 5 papers that focused on this challenge. These papers included 2 user studies investigating augmented reality agents [14] and blended reality characters [69], and 3 review papers on social agents across the reality spectrum [32], mixed reality agents [31], and migration between devices [24]. ...
... Holz et al. [32] surveyed social agents across Milgram's reality-virtuality continuum. Holz et al. [31] categorised Mixed Reality Agents (MiRA), defining them as agents embodied in a mixed-reality environment that can migrate between environments and exist as blended-reality agents, composed of a mix of real and virtual components. Robert and Breazeal [69] found that children were more engaged with a blended-reality character that seamlessly transitioned from a physical robot to a virtual form, than with a purely screen-based agent. ...
Multi-embodied agents can have both physical and virtual bodies, moving between real and virtual environments to meet user needs, embodying robots or virtual agents alike to support extended human-agent relationships. As a design paradigm, multi-embodiment offers potential benefits to improve communication and access to artificial agents, but there are still many unknowns in how to design these kinds of systems. This paper presents the results of a scoping review of the multi-embodiment research, aimed at consolidating the existing evidence and identifying knowledge gaps. Based on our review, we identify key research themes of: multi-embodied systems, identity design, human-agent interaction, environment and context, trust, and information and control. We also identify 16 key research challenges and 12 opportunities for future research.