Article

A survey of visual, mixed, and augmented reality gaming


Abstract

Visual, mixed, and augmented realities have historically been applied to the gaming application domain. This article provides a survey of visual, mixed, and augmented reality gaming in both academic and commercial contexts. Both indoor and outdoor mixed and augmented reality gaming are explored. The different games are presented via the three major display technologies: head-mounted display, handheld display, and spatial immersive display. A number of academic mixed and augmented reality research projects are described that provide an overview of the current state of the art. A set of example commercial games is also examined to provide context for the state of games on the market.


... At the intersection of these two emerging areas of personal computing, we focus on lifeloggers engaging in mixed reality, for which the lifelog should reflect perception of both the physical and virtual elements of the experienced world. For example, engaging in mixed reality gaming [49], interacting with virtual avatars [17], or watching augmented reality television [40] are life experiences with strong virtual-world elements that should be reflected in the lifelog just like other experiences involving the physical world [18]. In this context, we define "mixed reality lifelogging" as the practice of recording one's actions, life events, and experiences in a mixed reality world rendered by an HMD or similar device; see Figure 1 for a visual illustration of the concept. ...
... Mixed reality systems leverage context-dependent virtual content aligned with the physical environment towards augmented perception and amplified intelligence [5]. Representative applications span gaming [49], education [28], cultural heritage [32], entertainment [38], assistive technology [55], and psychological interventions [39]. For more details, we refer readers to Speicher et al.'s [48] examination of mixed reality concepts and Billinghurst et al.'s [8] comprehensive survey of augmented reality. ...
... Our preliminary experiment for the technical evaluation of Lifelog-MR was conducted with a limited set of virtual objects. However, other types of virtual content relevant for mixed reality lifelogging, such as gaming [49], virtual avatars [17], or virtual TV screens [40] displayed in the physical world, represent interesting future explorations. Also, our prototype considered only life tags [2] to represent the lifelog, a format suitable in our evaluation due to its simplicity and capacity to capture the essence of the experienced environment, but alternative visualizations may be better suited for querying and consuming mixed reality lifelog content, such as worlds-in-miniature approaches [30], visualizations of proxemic dimensions [9], or object-based navigation techniques [26]. ...
... See e.g. [10] for 3D modeling and 3D printing, [11] for computer-aided design, [14] for 3D animation, [3,17] for interactive anatomical modeling and education, [13] for prototyping and manufacturing, [15] for face recognition, [5] for surveillance systems; and the surveys [15,16]. The wide applicability and potential vulnerabilities of the 3D object models raised concerns about security and access control. ...
... Lemma 2. Let $C_j$ be as defined in (16). Then, for $j = 1, 2, \ldots, N$, we have ...
... However, it is worthwhile to mention that this will happen most likely when the points of $P_N$ and $O_N$ lie on the boundaries of their domains and based on the permutation process. We see also that the scaling factor must be $\tfrac{1}{12}$, not $\tfrac{1}{9}$, in the case when $P_0$ is the origin and we use (16). ...
Preprint
We compute precise estimates for the dimensions of 3D-encryption techniques for 3D point clouds which use permutations and rigid body motions, and in which geometric stability is to be guaranteed. Few attempts have been made in this direction. One attempt was established using the notions of dimensional and spatial stability by Jolfaei et al. (2015), who also proposed a 3D object encryption algorithm, claiming that it preserves dimensional and spatial stability. However, as we mathematically prove, neither the algorithm nor the associated estimates are correct. We introduce more rigorous definitions of the geometric stability of such 3D data encryption algorithms, followed by dimensionality measures
... It would appear that AmI and AR are two distinct areas of scientific research with different visions, paths, and supporting technology, enabling applications of Human-Computer Interaction (HCI) that make users more effective at performing tasks in the physical world. Newcomers to HCI, interested in applying the technologies of AmI and AR for prototyping new interactive computer systems, would have little difficulty in seeing AmI and AR as conceptually distinct due to differences in their supporting technology and typical applications [footnote 1], different scientific communities [footnote 2], and also attempts to distinguish between the two by reduction to specific ... Footnote 1: Traditional application areas for AmI have been healthcare [4] and ambient assisted living [38,52], while AR has been applied to video games [66], television [71], computer-supported collaborative work [51], and cultural heritage [16]; see Sadri [54] and Dunne et al. [22] for comprehensive surveys of AmI, and Azuma et al. [6,7] and Billinghurst et al. [13] for reviews of AR technology and applications. Footnote 2: Although contributions related to AmI and AR can be found at traditional SIGCHI venues, such as CHI, UIST, and DIS, distinct communities have emerged with their own, specialized dissemination venues, such as ISAmI (https://www.isami-conference. ...
... A technical definition from Azuma [7], which has stood the test of time [13], specifies that an AR system (1) combines the real and the virtual, (2) is interactive in real time, and (3) is registered in 3D. For general surveys of AR, we refer readers to Azuma et al. [6,7] and Billinghurst et al. [13], while other surveys have focused on specific applications of AR, such as video games [66], AR for television [71], computer-supported collaborative work [51], cultural heritage [16], and specific aspects of the technology and methods used for scientific investigation in AR [20,31,32]. ...
... The development of extended reality systems has enabled a plethora of applications ranging from industry [9] to arts [10], from medicine [11] to gaming [12], as well as, of course, scientific research [13]. In the context of exposure therapies [14], augmented reality (AR) specifically allows exposing subjects to fearful or even dangerous stimuli in a safe and controlled manner [15], a possibility that paved the way to the development of new stimulation protocols [16,17]. ...
Conference Paper
Full-text available
How relevant are specific perceptual features for the quick categorization of evolutionarily relevant stimuli? We used augmented reality to measure the nature/nurture role in processing the hairiness of spiders. A sample of 75 more-or-less spider-fearful participants was administered emotionally subliminal holograms representing phobic (hairy or glabrous spiders), generically fearful (bear), or neutral (deer, birds) stimuli while recording their electroencephalographic (EEG) activity. Event-related potentials (ERPs) showed significant differences between hairy and glabrous spiders at early latencies (120 ms), suggesting that the spider's hairiness is distinguished already at pre-conscious stages of visual processing.
... Through video tracking, the library overlays virtual visuals on the physical world [7]. In the same year, Bruce Thomas created ARQuake, the world's first outdoor AR game [12]. ...
... In addition, augmented reality simulations in educational settings go beyond simple visual engagement [18]- [20]. They provide a multi-sensory experience that replicates real-life situations, allowing learners to safely explore, experiment, and learn from their mistakes without facing real-world repercussions. ...
... These applications are not restricted to a specific area; rather, they include a wide range of fields and industries such as education [39], prototyping and manufacturing [64], medicine [31,34,47], construction and architecture [11], virtual reality [48], augmented reality [7], 3D visualization [9], and various industries like film, animation [43], and gaming [55]. This widespread use of 3D content in a variety of applications and technologies makes describing, representing, and securing 3D content an urgent issue. ...
Article
Full-text available
Using the topological equivalence between the Riemann sphere $\mathbb{S}$ and the extended complex plane $\overline{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$, where $\mathbb{C}$ is the field of complex numbers, we establish 2D-bijective representations of 3D point clouds. Points of 3D point clouds are mapped into the Riemann sphere $\mathbb{S}$, and a stereographic projection is implemented to map the points into the complex plane $\mathbb{C}$. The way the 3D objects are mapped into $\mathbb{S}$ may be varied for various applications. To prove the accuracy and efficiency of the proposed 2D representation of 3D objects, we apply this correspondence to 3D point cloud encryption. We utilize chaotic permutations, chaotic circuits, and Latin cubes in addition to the stereographic projection representation to construct our scheme. The permutation steps using chaotic maps and Latin cubes are carried out on the object data points in both $\mathbb{S}$ and $\overline{\mathbb{C}}$, while the chaotic circuits are applied to 2D projections of the 3D objects. To the best of our knowledge, no earlier work employed stereographic projections for 3D object encryption. Experimental simulations of this method show high encryption strength and strong confusion and diffusion properties based on quantitative and statistical measures.
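The stereographic-projection representation described in this abstract can be illustrated with a short sketch. The snippet below is an illustrative assumption, not the authors' exact mapping: each 3D point is normalized to the unit sphere with its radius kept as a separate channel so the representation stays invertible, and the sphere point is mapped to the complex plane via the standard projection w = (x + iy)/(1 − z); points at the north pole would map to the point at infinity of the extended plane.

```python
import numpy as np

def to_complex_plane(points):
    """Map a 3D point cloud to (complex value, radius) pairs via the
    Riemann sphere. The radius channel keeps the mapping invertible.
    This is a generic illustration, not the paper's exact scheme."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts, axis=1)          # distance from the origin
    unit = pts / r[:, None]                  # point on the unit sphere S
    x, y, z = unit.T
    w = (x + 1j * y) / (1.0 - z)             # stereographic projection (z != 1)
    return w, r

def from_complex_plane(w, r):
    """Invert the projection and restore the original radii."""
    denom = 1.0 + np.abs(w) ** 2
    x = 2.0 * w.real / denom
    y = 2.0 * w.imag / denom
    z = (np.abs(w) ** 2 - 1.0) / denom
    return np.stack([x, y, z], axis=1) * r[:, None]

cloud = np.random.randn(1000, 3)
w, r = to_complex_plane(cloud)
assert np.allclose(from_complex_plane(w, r), cloud)
```

The paper's chaotic permutations, Latin cubes, and chaotic circuits would then operate on these 2D representations; the sketch only covers the projection step.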
... XR is currently experiencing rapid growth [2]. The literature has highlighted the potential of XR to enhance gaming and socialization [79,81], arts and design [67], e-commerce advertisements [51], and education [57]. The rise of XR technology has prompted discussions on deceptive design (also known as "dark patterns"), i.e., user interface design that researchers deem manipulative [6,49], from experts in engineering [39,73], security and privacy [8,21], cognitive science [20], and humanities and social science [67]. ...
Article
Full-text available
The well-established deceptive design literature has focused on conventional user interfaces. With the rise of extended reality (XR), understanding deceptive design's unique manifestations in this immersive domain is crucial. However, existing research lacks a full, cross-disciplinary analysis of how XR technologies enable new forms of deceptive design. Our study reviews the literature on deceptive design in XR environments. We use thematic synthesis to identify key themes. We found that XR's immersive capabilities and extensive data collection enable subtle and powerful manipulation strategies. We identified eight themes outlining these strategies and discussed existing countermeasures. Our findings show the unique risks of deceptive design in XR, highlighting implications for researchers, designers, and policymakers. We propose future research directions that explore unintentional deceptive design, data-driven manipulation solutions, user education, and the link between ethical design and policy regulations.
... Unlike non-wearable devices, wearable devices are no longer equipped with conventional input devices, e.g., keyboards, mice, or touch panels. In order to input commands and text in various application scenarios [7-11], speech/voice recognition is a feasible input method. However, speech/voice recognition suffers from many downsides. ...
Article
Full-text available
Text input using hand gestures is an essential component of human–computer interaction technology, providing users with a more natural and enriching interaction experience. Nevertheless, current gesture input methods have a variety of issues, including a high learning cost for users, poor input performance, and reliance on hardware. To solve these problems and better meet interaction requirements, a hand recognition-based text input method called iHand is proposed in this paper. In iHand, a two-branch hand recognition algorithm combining a landmark model and a lightweight convolutional neural network is used. The landmark model is used as the backbone network to extract hand landmarks, and then an optimized classification head, which preserves the spatial structure of the landmarks, is designed to classify gestures. When the landmark model fails to extract hand landmarks, a lightweight convolutional neural network is employed for classification. Regarding the way letters are entered, to reduce the learning cost, the sequence of letters is mapped onto a two-dimensional layout, and users can type with seven simple hand gestures. Experimental results on public datasets show that the proposed hand recognition algorithm achieves high robustness compared to state-of-the-art approaches. Furthermore, we tested the performance of users' initial use of iHand for text input. The results showed that iHand's average input speed was 5.6 words per minute, while the average input error rate was only 1.79%.
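The two-branch design summarized above (a landmark branch with an image-based CNN fallback) boils down to a small piece of control flow. In the sketch below, `extract_landmarks`, `landmark_head`, and `lightweight_cnn` are placeholder stubs standing in for the paper's components, not real APIs, and the letter layout shown is purely illustrative.

```python
from typing import Optional
import numpy as np

# --- placeholder components standing in for the paper's models ---
def extract_landmarks(frame: np.ndarray) -> Optional[np.ndarray]:
    """Hypothetical landmark detector: returns Nx2 hand landmarks, or None
    when no landmarks can be extracted from the frame."""
    return None  # stub

def landmark_head(landmarks: np.ndarray) -> int:
    """Hypothetical classification head operating on landmark coordinates."""
    return 0  # stub: gesture id

def lightweight_cnn(frame: np.ndarray) -> int:
    """Hypothetical lightweight CNN used as the fallback branch."""
    return 0  # stub: gesture id

def classify_gesture(frame: np.ndarray) -> int:
    """Two-branch recognition: prefer the landmark branch, fall back to the
    image-based CNN when landmark extraction fails."""
    landmarks = extract_landmarks(frame)
    if landmarks is not None:
        return landmark_head(landmarks)
    return lightweight_cnn(frame)

# The recognized gestures index into a 2D letter layout; this layout is an
# illustrative assumption, not the layout used by iHand.
LAYOUT = [list("abcdefg"), list("hijklmn"), list("opqrstu"), list("vwxyz..")]
```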
... A variety of transparent displays are available as consumer products, with the majority catering to augmented reality (AR) applications such as AR glasses or heads-up displays (HUDs). These products display visual objects and auxiliary information within the user's field of view for various applications, including gaming [52], driving assistance [1], surgical assistance [56], and construction safety [27]. ...
Preprint
Full-text available
Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamicity of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
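The abstract describes formalizing how a pattern shown on a transparent display is transformed by the camera's optical pipeline before it reaches the classifier. The snippet below is a toy stand-in for such a transformation model; the additive composition, lens blur, exposure scaling, and sensor noise are assumed components chosen for illustration, not EvilEye's actual model.

```python
import numpy as np
import cv2

def optical_pipeline(scene: np.ndarray, display_pattern: np.ndarray,
                     exposure: float = 0.9, blur_sigma: float = 1.2,
                     noise_std: float = 0.01) -> np.ndarray:
    """Toy model of how a pattern placed in front of the lens is captured:
    additive composition, lens blur, exposure scaling, sensor noise.
    All parameters are illustrative assumptions."""
    composed = np.clip(scene + display_pattern, 0.0, 1.0)
    blurred = cv2.GaussianBlur(composed.astype(np.float32), (0, 0), blur_sigma)
    exposed = np.clip(exposure * blurred, 0.0, 1.0)
    noisy = exposed + np.random.normal(0.0, noise_std, exposed.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

# Example: evaluate a candidate perturbation as the camera would see it.
scene = np.random.rand(224, 224, 3).astype(np.float32)
pattern = 0.05 * np.random.rand(224, 224, 3).astype(np.float32)
captured = optical_pipeline(scene, pattern)
```

An attacker optimizing a perturbation would evaluate the victim classifier on such "captured" frames rather than on the raw digital pattern, which is the essence of projecting a digital attack into the physical domain.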
... Extended Reality (XR), broadly encompassing virtual, augmented, and mixed reality technologies, can potentially revolutionize fields such as education, healthcare, and gaming [5,79,93]. The primary ethos for XR is to provide immersive, interactive, and realistic experiences for users. ...
Preprint
Full-text available
Understanding the location of ultra-wideband (UWB) tag-attached objects and people in the real world is vital to enabling a smooth cyber-physical transition. However, most UWB localization systems today require multiple anchors in the environment, which can be very cumbersome to set up. In this work, we develop XRLoc, providing an accuracy of a few centimeters in many real-world scenarios. This paper delineates the key ideas which allow us to overcome the fundamental restrictions that prevent a single anchor point from localizing a device to within an error of a few centimeters. We deploy a VR chess game using everyday objects as a demo and find that our system achieves 2.4 cm median accuracy and 5.3 cm 90th-percentile accuracy in dynamic scenarios, performing at least 8x better than state-of-the-art localization systems. Additionally, we implement a MAC protocol to furnish these locations for over 10 tags at update rates of 100 Hz, with a localization latency of about 1 ms.
... "Mixed reality is a combination of virtual and augmented reality, where real and virtual images are interwoven, allowing for interaction and manipulation of the real and virtual environments." [1] Mixed reality interactive games are a form of game that combines elements of the virtual and real world, using mixed reality technology to merge virtual elements with the real environment, enabling players to experience virtual game content in a real setting. The characteristics of mixed reality interactive games include: ...
Article
This paper proposes a user experience model based on mixed reality interactive games. The model combines the characteristics of user experience and mixed reality games, and through user surveys and the establishment of a user experience model, it verifies that sensory experience, interactive experience, emotional experience, and behavioral experience are the main factors in mixed reality games, which can provide guidance for the design of mixed reality games.
... Nowadays, 3D object models have been widely explored in numerous application domains for 3D data representation, visualization, and storage. Examples of these domains include virtual and augmented reality [3], 3D modeling and 3D printing [18], computer-aided design [19], animation [24], gaming [31], interior design and architecture [5], interactive anatomical modeling [33], education [10], prototyping and manufacturing [22], face recognition [28], and surveillance systems [12]. The 3D object models can be developed in the form of 3D point clouds [13], meshes [21], and other geometric models [16]. ...
Preprint
Full-text available
Three-dimensional point-cloud data has become enormously abundant with the emergence of numerous technologies for 3D data acquisition, processing, and visualization. Encryption algorithms have recently been introduced to ensure secure storage and communication for this type of data. However, maintaining the correctness and geometric stability of such algorithms remains a key challenge towards the construction of reliable, trustworthy, and practical ciphers of 3D point clouds. Few attempts have been made to establish geometrically stable algorithms for 3D point cloud encryption without compromising cipher robustness. In particular, Jolfaei et al. [IEEE Transactions on Information Forensics and Security, vol. 10, no. 2, pp. 409-422, 2015] proposed a 3D object encryption algorithm along with geometric notions of dimensional and spatial stability. However, these notions are not consistent, and the geometric stability and correctness of that cipher are not guaranteed, as we show through counterexamples. In this paper, we introduce an enhanced cipher with correctness, reversibility, and geometric stability guarantees. The soundness and significance of the proposed scheme are demonstrated by rigorous mathematical proofs, extensive experimentation, and comparisons against state-of-the-art methods.
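As a toy illustration of the class of ciphers discussed here (a key-derived permutation followed by a rigid body motion), the sketch below encrypts a point cloud and verifies reversibility. It is not the authors' enhanced cipher: the key schedule and the rotation construction are simplified assumptions, and no claim is made about its cryptographic strength or geometric stability.

```python
import numpy as np

def _key_rng(key: int) -> np.random.Generator:
    return np.random.default_rng(key)

def _key_rotation(rng) -> np.ndarray:
    """Key-derived rotation matrix via QR decomposition (toy key schedule)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:          # force a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    return q

def encrypt(points: np.ndarray, key: int) -> np.ndarray:
    rng = _key_rng(key)
    perm = rng.permutation(len(points))   # key-derived permutation
    R = _key_rotation(rng)                # key-derived rigid body motion
    t = rng.normal(size=3)
    return (points[perm] @ R.T) + t

def decrypt(cipher: np.ndarray, key: int) -> np.ndarray:
    rng = _key_rng(key)
    perm = rng.permutation(len(cipher))   # same key -> same permutation/motion
    R = _key_rotation(rng)
    t = rng.normal(size=3)
    plain_permuted = (cipher - t) @ R     # invert the rigid motion
    out = np.empty_like(plain_permuted)
    out[perm] = plain_permuted            # invert the permutation
    return out

cloud = np.random.rand(500, 3)
assert np.allclose(decrypt(encrypt(cloud, key=42), key=42), cloud)  # reversibility
```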
... AR lies on a reality-virtuality continuum with the real-world environment on one pole and the virtual environment on the other (Milgram et al., 1995). It is a form of mixed environment where digital objects or information are superimposed on a user's view of the physical world (Bower et al., 2014; Thomas, 2012), in contrast to augmented virtuality (AV), "where real-world content is transplanted into a virtual environment" (Bower et al., 2014, p. 2). Whilst Milgram et al. (1995) categorized AR and AV as types of mixed reality, Farshid et al. (2018) created a more granular continuum with six types of reality and virtuality: (1) reality, (2) augmented reality, (3) virtual reality, (4) mixed reality, (5) augmented virtuality, and (6) virtuality. Figure 1 shows the reality-virtuality continuum adapted from Farshid et al. (2018). ...
Chapter
Full-text available
There is a rise in the integration of augmented reality (AR) in education in recent years. This emerging medium is less explored in literacy education, yet the new textual form is necessary for adapting to the changes in the world of literacy. The chapter examines students' active use of bodily movement in AR-mediated literacy learning and addresses the issue of the relationships between literacy, body, space, and new media in a primary school classroom in Australia. The findings presented are based on the systemic functional-multimodal discourse analysis of classroom observations. Data includes the multimodal instructions provided by an AR app as well as students' work samples, specifically their AR compositions. These findings provide insights into students' agentive meaning making processes and teachers' leadership in expanding students' metalanguages in AR-mediated learning.
... Collaborative virtual environments (CVEs) tend to metaphorically reduce the distance between users by using virtual reality (VR) or augmented reality (AR) to create a shared workspace among the users. There are applications of CVEs in many domains, e.g., education and training [1], entertainment and gaming [2], manufacturing [3], architecture [4], and engineering [5]. While CVEs are becoming widespread, there is still a lack of perception of other users' activity and emotional states [6]. ...
Article
Full-text available
Collaborative virtual environments allow people to work together while being distant. At the same time, empathic computing aims to create a deeper shared understanding between people. In this paper, we investigate how to improve the perception of distant collaborative activities in a virtual environment by sharing users’ activity. We first propose several visualization techniques for sharing the activity of multiple users. We selected one of these techniques for a pilot study and evaluated its benefits in a controlled experiment using a virtual reality adaptation of the NASA MATB-II (Multi-Attribute Task Battery). Results show (1) that instantaneous indicators of users’ activity are preferred to indicators that continuously display the progress of a task, and (2) that participants are more confident in their ability to detect users needing help when using activity indicators.
... XR is an emerging technology that simulates realistic environments for users. XR techniques have provided revolutionary user experiences in various application scenarios (e.g., training [19,21], education [38], product/architecture design [46], gaming [41], remote conferencing/tours [24,45], etc.). According to a recent report from MarketsandMarkets Research [5], the XR market size is expected to reach USD 125.2 billion by 2026 from USD 33.0 billion in 2021, at a Compound Annual Growth Rate (CAGR) of 30.6% during the forecast period. ...
Preprint
Extended Reality (XR) includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). XR is an emerging technology that simulates a realistic environment for users. XR techniques have provided revolutionary user experiences in various application scenarios (e.g., training, education, product/architecture design, gaming, remote conferencing/tours, etc.). Due to the high computational cost of rendering real-time animation on resource-limited devices and the constant interaction with user activity, XR applications often face performance bottlenecks, and these bottlenecks have a negative impact on the user experience of XR software. Thus, performance optimization plays an essential role in many industry-standard XR applications. Even though identifying performance bottlenecks in traditional software (e.g., desktop applications) is a widely explored topic, those approaches cannot be directly applied to XR software due to the different nature of XR applications. Moreover, XR applications developed in different frameworks such as Unity and Unreal Engine show different performance bottleneck patterns, and thus bottleneck patterns from Unity projects cannot be applied to Unreal Engine (UE)-based XR projects. To fill the knowledge gap in XR performance optimization for Unreal Engine-based XR projects, we present the first empirical study on performance optimizations from seven UE XR projects, 78 UE XR discussion issues, and three sources of UE documentation. Our analysis identified 14 types of performance bugs, including 12 types of bugs related to UE settings issues and two types of C++ source-code-related issues. To further assist developers in detecting performance bugs based on the identified bug patterns, we also developed a static analyzer, UEPerfAnalyzer, that can detect performance bugs in both configuration files and source code.
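Although UEPerfAnalyzer itself is not reproduced here, the general idea of flagging performance-relevant settings in configuration files can be sketched as follows. The rule table is purely illustrative (the listed console variables and thresholds are assumptions, not the bug patterns mined in the study), and the parsing is a plain line scan rather than a full UE .ini parser.

```python
import re
from pathlib import Path

# Illustrative rules only: each entry maps a config key to a predicate that
# flags a potentially performance-hostile value. These are NOT the patterns
# identified in the empirical study.
RULES = {
    "r.ScreenPercentage": lambda v: float(v) > 100,
    "r.Shadow.MaxResolution": lambda v: int(v) > 2048,
    "gc.TimeBetweenPurgingPendingKillObjects": lambda v: float(v) > 120,
}

def scan_config(path: str):
    """Return (line_no, key, value) tuples for settings matching a rule."""
    findings = []
    for line_no, line in enumerate(Path(path).read_text().splitlines(), 1):
        m = re.match(r"\s*([\w.]+)\s*=\s*(\S+)", line)
        if not m:
            continue
        key, value = m.groups()
        rule = RULES.get(key)
        try:
            if rule and rule(value):
                findings.append((line_no, key, value))
        except ValueError:
            pass  # non-numeric value; skip
    return findings
```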
... AR can enhance people's perception of and interaction with their surroundings and can also help them more easily perform real-world tasks [2]. Over the past few decades, technological advancements have significantly increased the adoption of AR technology in a wide range of application domains, including industrial maintenance [3,1,4,5], education [6][7][8], gaming [9,10], and collaborative work [11][12][13][14][15]. ...
... In contrast, optical see-through devices consist of an optical combiner or holographic waveguides (the lenses), which enable images transmitted by a projector to be overlaid on the same lenses through which the real world is normally viewed. In this way, the user directly views reality augmented with the virtual objects (VOs) overlaid onto it [7,19]. The different techniques are summarized in Figure 3. ...
Article
Full-text available
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and determine which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates, and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has been traditionally and mainly used in orthopedics, in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens optical viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggest that AR is an innovative technology with numerous advantages, finding applications in several new surgical domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
... According to Kruse (2008) [12], Gagné's nine events of instruction model focuses on the outcomes or behaviours resulting from training. Besides, this model has contributed significantly to instructional technology, especially in designing web-based instruction [13]. Hence, Gagné's nine instructional events are considered compatible with adult behaviours and adult learning styles; Ference & Vockell (1994) [14] agreed that Gagné's events of instruction are compatible with adult learners' characteristics. ...
Article
Full-text available
Games have been proven an effective approach to improve learning and have become a new tool for training delivery. The development of the gamification framework involves integrating design processes, namely input, process, and output. The design phase is considered essential to guide the flow of the gamification framework. It offers a safer, interactive, and entertaining learning environment for construction-related workers. This paper aims to report on the approach to designing a gamification framework for hazard identification training in construction, using Garris's Input-Process-Outcome game model as the basis. It focuses on the three main design elements: instructional design, game characteristics, and user characteristics. The study outlines two objectives: (1) to identify the game attributes and the Gagné's Nine Events instructional design methods which support effective learning, and (2) to determine the users' characteristics of self-directed learning. Mixed methods were used to extract the attributes and elements of the design phase. Content analysis was carried out to determine the model of instructional design and the game attributes. The findings show 12 attributes of the game identified through content analysis, and that Gagné's Nine Events instructional design methods can support effective learning. Meanwhile, a questionnaire survey was subsequently administered to determine the users' ability for self-directed learning and their decision-making style, to which 319 construction-related workers responded. Data were analysed using mean comparison. The results showed that construction-related workers belong to the independent learners' category and are inclined to 'vigilant' and 'brooding' types of decision-making style. Following the aim of this paper, these findings were incorporated into the design phase of the game framework.
... Since the 2010s resurgence of interest in consumer-grade virtual reality (VR) [48], VR has successfully established itself on the consumer market [47]. However, as with any emerging technology, differences can exist between how industry and academia perceive a technology and how consumers perceive it [49,50]. ...
... In order to give the user of a virtual reality (VR) system high immersion and to allow free interaction between the user and the virtual environment, accurate and low-latency tracking is one of the most important requirements for the VR system [1]. This accurate and fast tracking is expected to be used in a variety of applications, not only in entertainment [2] but also in medicine [3], engineering and design [4], military [5], education [6], virtual prototyping [7], and architecture and cultural heritage [8]. To meet the high requirements of tracking, the lighthouse localization system of the HTC Vive uses inertial measurements and light data from the inertial measurement unit (IMU) and the photodiodes in the tracked object, such as the HMD, controller, and tracker [9]. ...
Article
Full-text available
The lighthouse localization system has recently been developed and used for localization in virtual reality (VR). Not only for VR but also for general indoor positioning systems (IPSs), it has several advantages over existing methods, including low cost, a wide detection area, and easy setup. Here, we adopt a stereo configuration of lighthouses for improved sensing performance and propose a novel calibration method for this stereo configuration. For the stereo calibration, the exact positions of sufficient corresponding points in the two sensor coordinate systems need to be determined. A printed checkerboard is widely used for stereo camera systems because it is easy to construct and its accuracy is guaranteed owing to its printing accuracy. However, in the case of the lighthouse system, it is very difficult or impossible to construct a highly accurate calibration board similar to the checkerboard, mainly because of manufacturing errors. In this study, we use a receiver sensor and a two-axis linear stage equipped with micrometers. By moving predetermined distances along the x and y directions on the linear stage, we can obtain multiple-point information with high accuracy, which can then be used for the stereo calibration of the two lighthouses. In this paper, the calibration and pose estimation procedures are described in detail, and the pose estimation result of the perspective-n-point (PnP) method is compared with that of the triangulation method. Finally, the pose estimation accuracy of the proposed system is compared with that of a commercial system that is widely used for highly accurate medical applications.
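Once accurate corresponding points have been collected in both lighthouse coordinate frames using the micrometer stage, the extrinsic calibration reduces to a least-squares rigid alignment. The sketch below uses the standard SVD-based (Kabsch/Umeyama) solution; this is a generic method for the problem, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def rigid_align(points_a: np.ndarray, points_b: np.ndarray):
    """Find rotation R and translation t minimizing ||R a_i + t - b_i||^2,
    i.e. the pose of frame A expressed in frame B, from corresponding
    3D points (Nx3 arrays)."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                 # proper rotation (det = +1)
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(30, 3))                   # stage positions, frame A
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] = -R_true[:, 0]
B = A @ R_true.T + np.array([0.5, -0.2, 1.0]) + rng.normal(0, 1e-3, A.shape)
R_est, t_est = rigid_align(A, B)
print(np.allclose(R_est, R_true, atol=1e-2))           # True
```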
... This paper will then evaluate four AR apps in detail according to five criteria and highlight their educational potential, including ideas for classroom activities that embrace and enhance their use. (Milgram et al., 1995, p. 283) It is a form of mixed reality where digital graphical information is superimposed on a user's view of the physical world (Bower et al., 2014; Thomas, 2012). In essence, AR "uses displays, tracking, and other technologies to enhance (augment) the user's view of a real-world environment with synthetic objects or information" (LaViola et al., 2017, p. 8). Some of the distinguishing characteristics of AR include: ...
Article
Empirical studies of augmented reality (AR) in education have suggested a vast range of educational benefits, including deep learning (Wu et al., 2013), collaborative learning in locative storytelling (Chinthammit & Thomas, 2014), higher-order thinking skills (Bower et al., 2015), increased student engagement in play-based literacy practices (Yamada-Rice et al., 2017), 21st century skills (Wang et al., 2018), and spatial thinking (George et al., 2020). Yet, in language and literacy teaching, little is known about using AR to enhance inquiry learning, and to encourage students to experiment with affordances of new media to develop critical and creative knowledge. Although teachers are expected to support students in learning, both with and through new digital forms, there is a slow uptake of new media, such as AR, in Australian contexts. This is possibly due to a lack of mainstream understanding about what AR is and what educational affordances it offers. The lack of classroom research and a “conceptual framework regarding the implementation of technologies such as Augmented Reality system” remain an impeding factor to effective explorations of AR in education (Bower et al., 2014, p.7).
... Nonetheless, the fields that have recently boosted its potential are virtual reality (VR) and augmented reality (AR). Indeed, the potential applications of AR/VR technology to a multitude of sectors such as online education [5], healthcare [6,7], entertainment [8,9], communication [10,11] and/or gaming industry [12,13] have created an ever-growing demand for more realistic and immersive AR/VR experiences. ...
Article
Full-text available
This paper summarizes the OpenEDS 2020 Challenge dataset, the proposed baselines, and results obtained by the top three winners of each competition: (1) Gaze prediction Challenge, with the goal of predicting the gaze vector 1 to 5 frames into the future based on a sequence of previous eye images, and (2) Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye image frames. Both competitions were based on the OpenEDS2020 dataset, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display with two synchronized eye-facing cameras. The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into 2 subsets, one for each competition task. The proposed baselines, based on deep learning approaches, obtained an average angular error of 5.37 degrees for gaze prediction, and a mean intersection over union score (mIoU) of 84.1% for semantic segmentation. The winning solutions were able to outperform the baselines, obtaining up to 3.17 degrees for the former task and 95.2% mIoU for the latter.
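The two evaluation metrics quoted above, angular gaze error and mean intersection-over-union, can be computed with a few lines of NumPy. The functions below are a generic reference; the challenge's actual evaluation scripts may differ in details such as averaging order or class handling.

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-sample angle in degrees between predicted and ground-truth
    3D gaze vectors (Nx3 arrays)."""
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    g = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(p * g, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def mean_iou(pred_labels: np.ndarray, gt_labels: np.ndarray, n_classes: int) -> float:
    """Mean intersection-over-union across classes for semantic segmentation."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred_labels == c, gt_labels == c).sum()
        union = np.logical_or(pred_labels == c, gt_labels == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```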
... This brings supplementary opportunities for appealing gameplay involving interaction with real-world objects (e.g. hiding behind real walls from attackers) and people [167], [168]. Additionally, AR visualizations are not concerned with many of the challenges linked to VR gaming, including avoiding real-world obstacles (e.g. ...
Article
Full-text available
Virtual self-avatars have been increasingly used in Augmented Reality (AR) where one can see virtual content embedded into physical space. However, little is known about the perception of self-avatars in such a context. The possibility that their embodiment could be achieved in a similar way as in Virtual Reality opens the door to numerous applications in education, communication, entertainment, or the medical field. This article aims to review the literature covering the embodiment of virtual self-avatars in AR. Our goal is (i) to guide readers through the different options and challenges linked to the implementation of AR embodiment systems, (ii) to provide a better understanding of AR embodiment perception by classifying the existing knowledge, and (iii) to offer insight on future research topics and trends for AR and avatar research. To do so, we introduce a taxonomy of virtual embodiment experiences by defining a "body avatarization" continuum. The presented knowledge suggests that the sense of embodiment evolves in the same way in AR as in other settings, but this possibility has yet to be fully investigated. We suggest that, whilst it is yet to be well understood, the embodiment of avatars has a promising future in AR and conclude by discussing possible directions for research.
... Accordingly, the selected development engine should support as many platforms as possible. Due to the further application possibilities, it could also be used to create virtual or mixed reality applications, as well as to support simulations and other experiments [71]. The virtual model of ZalaZONE is publicly available in several data formats under the MIT License for further research and evaluation. ...
Article
Full-text available
Testing self-driving vehicles is still a new and immature process; a globally harmonised procedure is expected much later. The resource-demanding nature of real-world tests makes it indispensable to develop and improve the efficiency of virtual-environment-based testing methods. Accordingly, a novel X-in-the-Loop framework is proposed to fully exploit the recent advances in info-communication technologies, vehicle automation, and testing and validation requirements. This methodology connects physical and virtual testing in real time with high correlation while completely blurring the sharp boundaries between them. Measurement results confirm the superior performance of the 5G communication link in providing a stable, real-time connection between the real world and its virtual representation. A live demonstration proved the presented concept at the newly constructed Hungarian proving ground for automated driving. The performed investigation also includes comprehensive benchmarking, focusing on the most up-to-date automotive testing frameworks. The analysis considers the methodologies and techniques applied by the most relevant actors in the automotive testing sector worldwide. Accordingly, the newly developed testing framework is evaluated and validated in light of the state-of-the-art methods used by the automotive industry.
... With the rapid development of virtual reality technology, virtual reality systems appear in various places in life. Existing consumer head-mounted displays use a completely immersive design without visual cues from the surrounding real environment. Due to the loss of external visual information during the virtual reality (VR) experience, users are prone to a certain degree of insecurity, which affects the VR experience. ...
Article
Full-text available
During the immersive virtual reality experience, because the visual senses are completely enclosed in the virtual environment, users are unable to perceive environmental changes in the real world and develop feelings of insecurity, which affects their experience in virtual reality systems. In order to improve people's sense of security during the immersive virtual reality experience, this paper designs four ways to interact with the real world from within a virtual environment, so that users can obtain real-world information while immersed. The paper tests users' changes in their sense of security during immersive virtual reality experiences under the influence of the various interaction methods and conducts a psychological analysis. Twenty-one volunteers were recruited to participate in the experiment, and their sense of personal safety and the psychological effects were tested. The experimental results show that the method of using the Yolcat neural network to segment the captured images by category and then fuse the segmented images into the virtual environment can improve the user's sense of security without destroying immersion.
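The core of the highest-rated interaction method, fusing segmented real-world regions into the rendered virtual frame, can be expressed as a simple mask-based composite. The segmentation model is treated as a black box here; `segment_person` is a placeholder name standing in for the instance-segmentation network used in the study.

```python
import numpy as np

def segment_person(camera_frame: np.ndarray) -> np.ndarray:
    """Placeholder for the instance-segmentation model: returns a float mask
    in [0, 1] with 1 where a selected real-world class (e.g. a person) is
    visible. Replace with an actual segmentation network."""
    return np.zeros(camera_frame.shape[:2], dtype=np.float32)  # stub

def fuse_into_vr(vr_frame: np.ndarray, camera_frame: np.ndarray) -> np.ndarray:
    """Blend segmented real-world pixels into the virtual frame so the user
    keeps awareness of selected objects around them."""
    mask = segment_person(camera_frame)[..., None]       # HxWx1 blend mask
    return (mask * camera_frame + (1.0 - mask) * vr_frame).astype(vr_frame.dtype)
```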
... In contrast to these VR examples, challenges are different for enabling co-located interaction in AR settings. AR technology has been applied to a wide variety of leisure and gaming activities [52], e.g., with the widely popular games Ingress [41] and Pokemon Go [42], for augmenting and game-balancing table tennis [1,2] and foosball gameplay [47], or for enabling digitally augmented tabletop board games [25]. These approaches use AR technologies to provide additional benefits for users, but were not applied to HMD-based systems that use AR technology in general and therefore have unique challenges, such as one-sided visualization of AR content for the HMD users, resulting in exclusion of bystanders. ...
Conference Paper
Full-text available
Head-Mounted Displays (HMDs) are the dominant form of enabling Virtual Reality (VR) and Augmented Reality (AR) for personal use. One of the biggest challenges of HMDs is the exclusion of people in the vicinity, such as friends or family. While recent research on asymmetric interaction for VR HMDs has contributed to solving this problem in the VR domain, AR HMDs come with similar but also different problems, such as conflicting information in visualization through the HMD and projection. In this work, we propose ShARe, a modified AR HMD combined with a projector that can display augmented content onto planar surfaces to include the outside users (non-HMD users). To combat the challenge of conflicting visualization between augmented and projected content, ShARe visually aligns the content presented through the AR HMD with the projected content using an internal calibration procedure and a servo motor. Using marker tracking, non-HMD users are able to interact with the projected content using touch and gestures. To further explore the arising design space, we implemented three types of applications (collaborative game, competitive game, and external visualization). ShARe is a proof-of-concept system that showcases how AR HMDs can facilitate interaction with outside users to combat exclusion and instead foster rich, enjoyable social interactions.
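One common way to keep projected 2D content aligned with a tracked planar surface, in the spirit of ShARe's alignment between HMD and projector views, is to estimate a homography from marker correspondences and pre-warp the image before projection. The snippet below shows this generic step with OpenCV; ShARe's actual calibration additionally involves an internal procedure and a servo motor that are not reproduced here.

```python
import numpy as np
import cv2

def prewarp_for_projector(content: np.ndarray,
                          marker_pts_content: np.ndarray,
                          marker_pts_projector: np.ndarray) -> np.ndarray:
    """Warp 'content' so that marker points defined in content space land on
    the detected marker points in projector space.

    marker_pts_content / marker_pts_projector: Nx2 float arrays (N >= 4)
    of corresponding 2D points."""
    H, _ = cv2.findHomography(marker_pts_content.astype(np.float32),
                              marker_pts_projector.astype(np.float32),
                              method=cv2.RANSAC)
    h, w = content.shape[:2]            # keep the content resolution (assumption)
    return cv2.warpPerspective(content, H, (w, h))
```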
... Another possible classification entails the variety of involved perceptual channels, because, despite the spatial immersion, sound, haptics, smell, etc. could either be neglected or still play their specific roles. Finally, VR applications can be classified according to their goal, being either serious, e.g., product design and manufacturing [3], rehabilitation [12] and healthcare [15], training (especially in complex tasks where real training could be expensive and/or dangerous) [23], cultural heritage [2], and learning [10], or purely entertainment-oriented, e.g., VR games [21]. The last-mentioned work is, to the best of our knowledge (and quite strangely, given the recent spread of VR games), the most recent comprehensive survey on VR in games. ...
... Head-mounted displays (HMDs) for Virtual Reality (VR) are finally available for the consumer market. Today, consumers mainly use VR for entertainment applications including 3D movies and games [16]. A large field of view (FOV), high visual fidelity as well as the visual and auditory encapsulation can create truly immersive experiences with almost unlimited opportunities. ...
Research
Full-text available
Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge, as neither the user's hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome. We want to develop an apparatus that tracks the user's hands and a physical keyboard, and visualizes them in VR. In a text input study with 32 participants, Pascal Knierim [1] investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With that apparatus, experienced typists benefited from seeing their hands and reached almost outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable high typing performance.
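For reference, the two quantities reported in such text-entry studies, words per minute and error rate, are conventionally computed as below (one common convention: a "word" is five characters, and the uncorrected error rate is the character-level edit distance between stimulus and transcription divided by the longer of the two). This is a generic reference computation, not the study's exact analysis code.

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard WPM metric: five characters per word."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def uncorrected_error_rate(stimulus: str, transcribed: str) -> float:
    return edit_distance(stimulus, transcribed) / max(len(stimulus), len(transcribed), 1)

print(words_per_minute("the quick brown fox", seconds=12.0))   # ~19 WPM
```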
... We believe that a more natural integration of AR technology with the traditional escape room can support more interactive and imaginative narratives to help achieve "total immersion." As shown in Fig. 1, augmented reality registers computer-generated virtual objects with the physical world (Thomas 2012). Augmented reality lets users experience a new scene in which the real scene and virtual objects are seamlessly blended together (Billinghurst et al. 2015). ...
Article
Full-text available
An escape room is a live-action adventure game in which players search for clues, solve puzzles, and complete assigned tasks. This paper proposes a novel escape room system combining augmented reality and deep learning technology. The system adopts a client–server architecture and can be divided into the server module, the smart glasses module, and the player–hardware interaction module. The player–hardware interaction module consists of subsystems, each of which includes a Raspberry Pi 3. HoloLens is used as the smart glasses in the experiments of this paper. The server communicates with all the Raspberry Pis and the HoloLens through the TCP/IP protocol and manages all the devices to drive the game flow by following the process timeline. The smart glasses module provides two display modes, i.e., AR 3D model display and 2D text clue display. In the first mode, the SDK Vuforia is used for detection and tracking of markers. In the second mode, the scene images captured by the HoloLens camera are sent to a pre-trained image classifier based on a deep convolutional neural network. Considering both the image category and the game status value, the server decides which text clue image is to be displayed on the HoloLens. The accuracy of the image classification model reaches 94.9%, and images can be correctly classified under a certain range of rotation angles and partial occlusion. The integration of AR, deep learning, electronics, and escape room games opens up exciting new directions for the development of escape rooms. Finally, a mini escape room built with the system is analyzed to show that the proposed system can support more complicated narratives, demonstrating its potential for achieving immersion.
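The server-side decision described above, choosing which text-clue image to show based on both the recognized image category and the current game status, together with the TCP/IP push to the glasses, can be sketched as follows. The class names, status values, file names, and wire format are illustrative assumptions, not the system's actual protocol.

```python
import socket

# Illustrative mapping: (recognized image category, game status) -> clue image.
CLUE_TABLE = {
    ("bookshelf", 0): "clue_find_key.png",
    ("bookshelf", 1): "clue_already_solved.png",
    ("painting", 0): "clue_look_behind.png",
}

def choose_clue(image_category: str, game_status: int) -> str:
    """Pick the clue image considering both classifier output and game state."""
    return CLUE_TABLE.get((image_category, game_status), "clue_default.png")

def send_clue(glasses_addr: tuple, clue_id: str) -> None:
    """Push the chosen clue identifier to the smart glasses over TCP/IP.
    The newline-terminated text protocol here is an assumption."""
    with socket.create_connection(glasses_addr, timeout=2.0) as conn:
        conn.sendall((clue_id + "\n").encode("utf-8"))

# Example: the classifier reports 'painting' while the game is in status 0.
# send_clue(("192.168.0.42", 9000), choose_clue("painting", 0))
```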
... A place-based analysis was presented by Graham et al. [71]. Thomas [184] surveys the area from a technical perspective. A detailed psychological study of motivations to play a successful location-based AR game is presented by Zsila et al. [206], who argue that a loss of a sense of reality and a competitive motivation lead to problematic behavior among players. ...
Article
Full-text available
With the CoViD-19 pandemic, location awareness technologies have seen renewed interest due to the numerous contact tracking mobile application variants developed, deployed, and discussed. For some, location-aware applications are primarily a producer of the geospatial Big Data required for vital geospatial analysis and visualization of the spread of the disease in a state of emergency. For others, comprehensive tracking of citizens constitutes a dangerous violation of fundamental rights. Commercial web-based location-aware applications both collect data and, through spatial analysis and connection to services, provide value to users. This value is what motivates users to share increasingly private and comprehensive data. The willingness of users to share data in return for services has been a key concern with web-based variants of the technology since the beginning. With a focus on two privacy-preserving CoViD-19 contact tracking applications, this survey walks through the key steps of developing a privacy-preserving context-aware application: from types of applications and business models, through architectures and privacy strategies, to representations.
... Augmented Reality (AR) is another exciting field and there are several recent developments [3,7] that potentially transform the way people play games [31,59] or experience online shopping [55,63]. The most popular mobile platforms (e.g. ...
Article
Full-text available
Augmented Reality (AR) applications can reshape our society enabling novel ways of interactions and immersive experiences in many fields. However, multi-user and collaborative AR applications pose several challenges. The expected user experience requires accurate position and orientation information for each device and precise synchronization of the respective coordinate systems in real-time. Unlike mobile phones or AR glasses running on battery with constrained resource capacity, cloud and edge platforms can provide the computing power for the core functions under the hood. In this paper, we propose a novel edge cloud based platform for multi-user AR applications realizing an essential coordination service among the users. The latency critical, computation intensive Simultaneous Localization And Mapping (SLAM) function is offloaded from the device to the edge cloud infrastructure. Our solution is built on open-source SLAM libraries and the Robot Operating System (ROS). Our contribution is threefold. First, we propose an extensible, edge cloud based AR architecture. Second, we develop a proof-of-concept prototype supporting multiple devices and building on an AI-based SLAM selection component. Third, a dedicated measurement methodology is described, including energy consumption aspects as well, and the overall performance of the system is evaluated via real experiments.
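Since the prototype described above is built on ROS, the device-side offloading can be pictured as a node that streams camera frames to the edge and subscribes to the pose estimates coming back from the SLAM service. The topic names and message choices below are assumptions for illustration only, not the project's actual interfaces.

```python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped

def on_pose(msg: PoseStamped) -> None:
    # Pose estimated by the SLAM function running in the edge cloud.
    p = msg.pose.position
    rospy.loginfo("device pose: (%.3f, %.3f, %.3f)", p.x, p.y, p.z)

def main() -> None:
    rospy.init_node("ar_device_offload")
    bridge = CvBridge()
    pub = rospy.Publisher("/edge_slam/image_raw", Image, queue_size=1)  # assumed topic
    rospy.Subscriber("/edge_slam/pose", PoseStamped, on_pose)           # assumed topic
    cap = cv2.VideoCapture(0)
    rate = rospy.Rate(30)               # stream frames at 30 Hz
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()

if __name__ == "__main__":
    main()
```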
Article
As augmented reality (AR) systems proliferate and the technology gets smaller and less intrusive, we imagine a future where many AR users will interact in the same physical locations (e.g., in shared work places and public spaces). While previous research has explored AR collaboration in these spaces, our focus is on co-located but independent work. In this paper, we explore co-located AR user behavior and investigate techniques for promoting awareness of personal workspace boundaries. Specifically, we compare three techniques: showing all virtual content, visualizing bounding box outlines of content, and a self-defined workspace boundary. The findings suggest that a self-defined boundary led to significantly more personal workspace encroachments.
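The encroachment notion used in the study can be operationalized with a simple axis-aligned bounding-box test: content placed by one user is flagged when its bounds intersect another user's self-defined workspace volume. The representation below is a generic sketch, not the paper's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AABB:
    """Axis-aligned bounding box in world coordinates (meters)."""
    lo: np.ndarray   # (3,) min corner
    hi: np.ndarray   # (3,) max corner

    def intersects(self, other: "AABB") -> bool:
        return bool(np.all(self.lo <= other.hi) and np.all(other.lo <= self.hi))

def encroachments(my_workspace: AABB, other_content: list) -> list:
    """Return indices of other users' content boxes that enter my
    self-defined workspace boundary."""
    return [i for i, box in enumerate(other_content)
            if my_workspace.intersects(box)]

ws = AABB(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))
boxes = [AABB(np.array([0.9, 0.9, 0.9]), np.array([1.5, 1.5, 1.5])),
         AABB(np.array([2.0, 2.0, 2.0]), np.array([3.0, 3.0, 3.0]))]
print(encroachments(ws, boxes))   # -> [0]
```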
Article
Full-text available
Background: Knowledge construction in the context of children's science education is an important part of fostering the development of early scientific literacy. Nevertheless, children sometimes struggle to comprehend scientific knowledge due to the presence of abstract notions. Objective: This study aimed to evaluate the efficacy of augmented reality (AR) games as a teaching tool for enhancing children's understanding of optical science education. Methods: A total of 36 healthy Chinese children aged 6-8 years were included in this study. The children were randomly divided into an intervention group (n=18, 50%) and a control group (n=18, 50%). The intervention group received 20 minutes of AR science education using 3 game-based learning modules, whereas the control group was asked to learn the same knowledge for 20 minutes with a non-AR science learning app. Predict-observe-explain tests for 3 topics (animal vision, light transmission, and color-light mixing) were conducted for all participants before and after the experiment. Additionally, the Intrinsic Motivation Inventory, which measures levels of interest-enjoyment, perceived competence, effort-importance, and tension-pressure, was administered to the children after the experiment. Results: There was a statistically significant difference in light transmission (z = −2.696; P = .008), color-light mixing (z = −2.508; P = .01), and total predict-observe-explain test scores (z = 2.458; P = .01) between the 2 groups. There were also variations between the groups in terms of levels of interest-enjoyment (z = −2.440; P = .02) and perceived competence (z = −2.170; P = .03) as measured by the Intrinsic Motivation Inventory. Conclusions: The randomized controlled trial confirmed that the AR-based science education game we designed can correct children's misconceptions about science and enhance the effectiveness of science education.
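The between-group comparisons reported above (z statistics with p-values for two small independent samples) are the kind of statistics produced by rank-based tests such as the Mann-Whitney U test. The snippet below shows a generic way to obtain U, an approximate z, and p with SciPy; it is a reference computation, not necessarily the authors' analysis script, and it omits the tie correction in the normal approximation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mann_whitney_z(group_a, group_b):
    """Mann-Whitney U test with a normal-approximation z statistic."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    res = mannwhitneyu(a, b, alternative="two-sided")
    n1, n2 = len(a), len(b)
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # no tie correction
    z = (res.statistic - mu) / sigma
    return res.statistic, z, res.pvalue

# Example with two groups of 18 scores each (synthetic data).
rng = np.random.default_rng(1)
u, z, p = mann_whitney_z(rng.normal(1.0, 1.0, 18), rng.normal(0.2, 1.0, 18))
print(f"U={u:.1f}, z={z:.2f}, p={p:.3f}")
```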
Article
Augmented reality (AR) is a field of knowledge that has been developed since the middle of the last century. Its use has been spreading because of its usefulness, and more recently because mobile platforms have become widespread and accessible. AR has been applied in several fields of activity, including Education and Training, because AR has several advantages over other teaching methods. In this paper, we search and analyze surveys and reviews of AR to present a brief history and a definition of the field. We also classify our sample under a scheme we developed in past work and present examples of technologies and applications of AR in each field. Finally, we perform a deeper analysis of the publications on Education and Training, the advantages and issues of AR in this field, and some research trends.
Article
Full-text available
The usefulness of a website depends on the prospect of satisfied website users. In improving the usefulness of a university's website, it is essential to present precise data and disseminate prompt information to students. This study was conducted to analyse and recommend a better process for the information display (content and work processes) of university website development. Questionnaires were distributed, and feedback was received from 350 respondents. The aim of the study was to evaluate the effectiveness of the university websites' content and processes based on five main aspects: i) content and organization structure, ii) linkages and navigation, iii) language, iv) education content, and v) options and performance. The analysis showed that most question items indicate a positive perception of the improvements needed to enhance the university website portal, especially in terms of links, content organization, and live help desk requirements, which can directly provide an easier means of obtaining information and speed up data retrieval. In summary, this study also outlines a proposal for academic website display to increase the serviceability of the site to its users.
Chapter
Many employees in the service sector face daily challenges that are shaped by indoor navigation. Nurses have to adapt their routes to respond to sudden patient requests, while maintenance workers often have to locate machines in unfamiliar premises. This leads to stress and inefficient routing decisions. Using the Design Science Research method, an augmented reality application for the Microsoft HoloLens was developed that is able to navigate users through buildings and display routing information as augmented virtual arrows at critical points. In a user study (n = 20), we show that routing based on virtual arrows requires less mental effort than the conventional 2D visualizations familiar above all from systems such as Google Maps or TomTom.
Chapter
Augmented Reality (AR) is a field of knowledge that emerged in the middle of the last century, and its use has been spreading because of its usefulness, but also because mobile platforms have become accessible to most users. AR’s characteristics are valued in several fields of human activity, including Education and Training, where AR is pointed out as useful to the learning process. In this paper we search for and analyse surveys and reviews of AR. We present a definition of AR, and we create a two-dimensional classification scheme for AR: one dimension for the fields of application of AR, and one for the technologies of AR.
Article
Augmented Reality (AR) offers the possibility to enrich the real world with digitally mediated content, thereby increasing the quality of many everyday experiences. While some research areas such as cultural heritage, tourism, or medicine attract strong technological investment, AR for gaming struggles to become a widespread commercial application. In this article, a novel framework for AR kid games is proposed, which the authors have already used for other AR applications such as Cultural Heritage and the Arts. The framework includes several layers: a series of AR kid puzzle games built on an intermediate structure that can serve as a standard for developing different applications, a smart configuration tool, and general guidelines together with long-life usage tests and metrics. The proposed application is designed to augment the puzzle experience but can easily be extended to other AR gaming applications. Once the user has assembled the real puzzle, AR functionality within the mobile application can be unlocked, bringing the puzzle characters to life and creating a seamless game that merges AR interactions with the physical puzzle. The main goals and benefits of the framework are a novel set of AR tests and metrics for the pre-release phase (to support developers and the commercial launch) and, in the release phase, measures for long-life app optimization, usage tests, and hints to final users, together with measures to guide design policy, providing a method for automatically testing quality and popularity improvements. Moreover, smart configuration tools, as part of the general framework, enable multi-app and eventually multi-user development, facilitating the serialization of the applications. Results were obtained from a large-scale user test with about 4 million users across a set of eight gaming applications, providing the scientific community with a workflow for implicit quantitative analysis in AR gaming. Different data analytics built on the data collected by the framework show that the proposed approach is affordable and reliable for long-life testing and optimization.
Article
Full-text available
Augmented reality artwork is an emerging field of art, including grand transformations of entire building facades carried out with detailed 3D models customised to the building's unique surface. The extension of projectors into the realm of small mobile devices affords an opportunity to extend such AR art to the personal scale of the individual, allowing the user to customise and transform their surroundings on an ad hoc basis. In this paper, a system for the modification and augmentation of a mobile user's surroundings is proposed, together with the technical challenges such a system raises.
Article
Full-text available
This demo shows BattleBoard 3D, an Augmented Reality (AR) based game prototype featuring the use of LEGO for the physical and digital pieces. The design concepts, the physical setting, and the user interface for the game are illustrated and described. Based on qualitative studies of children playing the game, we illustrate design issues for AR board games.
Article
Full-text available
An abstract is not available.
Article
Full-text available
The VR Studio was founded in 1992 to explore the potential of Virtual Reality technology for theme park attractions. This paper presents an overview of the VR Studio's history, from the location-based entertainment attractions developed for DisneyQuest, to research in using virtual reality technology for theme park design. The goal is to present many of the lessons learned during 10 years of building interactive virtual worlds. In particular, the paper will focus on the challenge of creating location-based virtual reality attractions for the mass audience.
Article
Full-text available
The Historic Oakland Cemetery in downtown Atlanta provides a unique setting for exploring the challenges of location-based mixed-reality experience design. Our objective is to entertain and educate visitors about historically and culturally significant events related to the deceased inhabitants of the cemetery. We worked with the constraints and affordances of the physical environment of the cemetery to design an audio-based dramatic experience. The dramatic narrative is realized through voice actors who play the parts of cemetery residents and tell stories about the time periods in which they lived. The experience provides navigation and linearity through a main narrator who guides visitors to various gravesites. While at each grave, the visitor can choose from several categories of content using a handheld controller. Formative evaluations conducted with users in the cemetery indicate strengths of the current experience and suggest ideas for continued development.
Article
Full-text available
Human Pacman is a novel interactive entertainment system that ventures to embed the natural physical world seamlessly with a fantasy virtual playground by capitalizing on mobile computing, wireless LAN, ubiquitous computing, and motion-tracking technologies. Our human Pacman research is a physical role-playing augmented-reality computer fantasy together with real human–social and mobile gaming. It emphasizes collaboration and competition between players in a wide outdoor physical area which allows natural wide-area human–physical movements. Pacmen and Ghosts are now real human players in the real world, experiencing mixed computer graphics fantasy–reality provided by wearable computers. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between real and virtual worlds. We believe human Pacman is pioneering a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.
Article
Full-text available
We describe a software framework for rapidly developing and deploying self-contained, multi-user Augmented Reality applications on a variety of commercially available handheld computers (PDAs).
Article
Full-text available
Mixed Reality (MR) visual displays, a particular subset of Virtual Reality (VR) related technologies, involve the merging of real and virtual worlds somewhere along the 'virtuality continuum' which connects completely real environments to completely virtual ones. Augmented Reality (AR), probably the best known of these, refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects. The converse case on the virtuality continuum is therefore Augmented Virtuality (AV). Six classes of hybrid MR display environments are identified. However, quite different groupings are possible, and this demonstrates the need for an efficient taxonomy, or classification framework, according to which essential differences can be identified. An approximately three-dimensional taxonomy is proposed comprising the following dimensions: extent of world knowledge, reproduction fidelity, and extent of presence metaphor.
Article
Full-text available
In this paper we present the results of a qualitative, empirical study exploring the impact of immersive technologies on presence and engagement, using the interactive drama Façade as the object of study. In this drama, players are situated in a married couple's apartment, and interact primarily through conversation with the characters and manipulation of objects in the space. We present participants' experiences across three different versions of Façade – augmented reality (AR) and two desktop computing based implementations, one where players communicate using speech and the other using typed keyboard input. Through interviews and observations of players, we find that immersive AR can create an increased sense of presence, confirming generally held expectations. However, we demonstrate that increased presence does not necessarily lead to more engagement. Rather, mediation may be necessary for some players to fully engage with certain interactive media experiences.
Article
Full-text available
This paper presents the design of pOwerball, a novel augmented reality computer game for children aged 8-14. The pOwerball was designed to bring together children with and without a physical or learning disability and to encourage social interactions surrounding the play. The contribution of this design case is twofold. From a design perspective, pOwerball exemplifies an emerging class of computer games where the interaction style and game mechanics support social interactions amongst the players. From a methodological perspective, we describe the various ways children became involved in our design process; we highlight the related difficulties and successes in the context of relevant research literature.
Article
Full-text available
Augmented reality (AR) makes it possible to create games in which virtual objects are overlaid on the real world, and real objects are tracked and used to control virtual ones. We describe the development of an AR racing game created by modifying an existing racing game, using an AR infrastructure that we developed for use with the XNA game development platform. In our game, the driver wears a tracked video see-through head-worn display, and controls the car with a passive tangible controller. Other players can participate by manipulating waypoints that the car must pass and obstacles with which the car can collide. We discuss our AR infrastructure, which supports the creation of AR applications and games in a managed code environment, the user interface we developed for the AR racing game, the game's software and hardware architecture, and feedback and observations from early demonstrations.
Article
Full-text available
Augmented tabletops can be used to create multi-modal and collaborative environments in which natural interactions with tangible objects that represent virtual (digital) information can be performed. Such environments are considered potentially interesting for many different applications. In this paper, we address the question of whether or not it makes sense to use such environments to design learning experiences for young children. More specifically, we present the "Read-It" application that we have created to illustrate how augmented tabletops can support the development of reading skills. Children of five to seven years old were actively involved in designing and testing this application. A pilot experiment was conducted with a prototype of the Read-It application, in order to confirm that it does indeed meet the a priori expectations. We hope that the Read-It application will inspire the development of more tabletop applications that are targeted at specific user groups and activities.
Article
Full-text available
In this paper we discuss a case study for which we applied a customized augmented reality display – the Virtual Showcase – as a new platform for digital storytelling. Different storytelling components are identified and examples for their specific realization are explained. Our case study focuses on communicating scientific information to a novice audience in a museum context. Addressing first user feedback, we describe our current efforts of improvement.
Article
Full-text available
In this paper we discuss Augmented Reality (AR) displays in a general sense, within the context of a Reality-Virtuality (RV) continuum, encompassing a large class of "Mixed Reality" (MR) displays, which also includes Augmented Virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different Mixed Reality display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three dimensional taxonomic framework, comprising: Extent of World Knowledge (EWK), Reproduction Fidelity (RF) and Extent of Presence Metaphor (EPM). A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.
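To make the three taxonomy dimensions above concrete, the following minimal Python sketch (not taken from the paper; the numeric scales and example values are assumptions chosen purely for illustration) represents a display configuration as a point along Extent of World Knowledge, Reproduction Fidelity, and Extent of Presence Metaphor, so that configurations can be compared programmatically:

# Illustrative sketch only: the 0..1 scales and example placements below are
# assumptions, not values from the taxonomy paper.
from dataclasses import dataclass

@dataclass
class MRDisplayProfile:
    name: str
    extent_of_world_knowledge: float   # EWK: 0 = world unmodelled, 1 = fully modelled
    reproduction_fidelity: float       # RF: 0 = simple wireframe, 1 = photorealistic real-time
    extent_of_presence_metaphor: float # EPM: 0 = monitor-based window, 1 = fully immersive

    def distance_to(self, other: "MRDisplayProfile") -> float:
        """Euclidean distance in taxonomy space, for rough comparisons."""
        return ((self.extent_of_world_knowledge - other.extent_of_world_knowledge) ** 2 +
                (self.reproduction_fidelity - other.reproduction_fidelity) ** 2 +
                (self.extent_of_presence_metaphor - other.extent_of_presence_metaphor) ** 2) ** 0.5

# Hypothetical example placements along the continuum:
monitor_ar = MRDisplayProfile("monitor-based AR overlay", 0.3, 0.5, 0.1)
hmd_ar     = MRDisplayProfile("video see-through HMD AR", 0.5, 0.6, 0.7)
print(monitor_ar.distance_to(hmd_ar))

Representing each dimension as a normalized score is only one possible reading of the framework; the point of the sketch is simply that the taxonomy gives display systems comparable coordinates rather than a single label.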
Article
Full-text available
By introducing virtual content onto a regular tabletop, augmented reality (AR) offers a compelling display medium for enhancing tabletop games. Coupled with tangible user interfaces, it constitutes a truly novel environment for games. Our research investigates the possibilities and constraints for successful game design, starting with principles drawn from tabletop and conventional computer games, and moving towards designs unique to AR. Our first major game, AR Tankwar, was recently demonstrated at GenCon Indy 2005 to over 300 players and many more spectators. This paper presents a short overview of AR Tankwar along with results from questionnaires and player discussion at the convention.
Article
Full-text available
This paper describes a recent program for game-based learning within a mixed-reality environment, the Situated Multimedia Arts Learning Lab [SMALLab]. In the program, our research team collaborated with a 9th grade Language Arts teacher to design and deliver a new learning game and associated curriculum. Through the process of game-design and game-play, students advance their understanding of metaphor. We outline the theoretical basis upon which design decisions were made, and describe the rationale for choosing Language Arts as the subject area for this program. Three goals structure our research: (1) to advance students' understanding of literary devices with an emphasis on metaphor; (2) to engage otherwise under-performing students through game-based learning that is student-centered, collaborative, and based in reflective practice; and (3) to demonstrate effective game-based learning using a mixed-reality platform in a conventional classroom context. Twenty-four students attending a large suburban high school in the southwest United States participated in this learning experience once a week for seven weeks during the Fall of 2007. Our data indicates that these students attained a more globally coherent model of metaphor in the course of their participation, that they found both the game-design and the game-play process stimulating and rewarding, and that, given the necessary scaffolding, a mixed-reality learning environment can be effectively employed to teach standards-based curriculum in a conventional high school classroom.
Article
Full-text available
This paper presents a novel computer entertainment system which recaptures human touch and physical interaction with the real-world environment as essential elements of the game play, whilst also maintaining the exciting fantasy features of traditional computer entertainment. Our system called ‘Touch-Space’ is an embodied (ubiquitous, tangible, and social) computing based Mixed Reality (MR) game space which regains the physical and social aspects of traditional game play. In this novel game space, the real-world environment is an essential and intrinsic game element, and the human’s physical context influences the game play. It also provides the full spectrum of game interaction experience ranging from the real physical environment (human to human and human to physical world interaction), to augmented reality, to the virtual environment. It allows tangible interactions between players and virtual objects, and collaborations between players in different levels of reality. Thus, the system re-invigorates computer entertainment systems with social human-to-human and human-to-physical touch interactions.
Conference Paper
Full-text available
In modern society, people increasingly lack social interaction, although it is beneficial to work and personal life. Airhockey Over a Distance addresses this issue by recreating the social experience facilitated by physical game play in a distributed environment. We networked two airhockey tables and augmented them with a videoconference. Concealed mechanics on each table allow for a physical puck to be shot back and forth between the two locations. Supporting the hitting of a fast-moving, tangible puck between the two players creates a compelling social game experience, which was confirmed by about 30 players. Our preliminary findings suggest that our casual physical game supports social interactions and contributes to an increased connectedness between people who are geographically apart.
Conference Paper
Full-text available
We describe two games in which online participants collaborated with mobile participants on the city streets. In the first, the players were online and professional performers were on the streets. The second reversed this relationship. Analysis of these experiences yields new insights into the nature of context. We show how context is more socially than technically constructed. We show how players exploited (and resolved conflicts between) multiple indications of context including GPS, GPS error, audio talk, ambient audio, timing, local knowledge and trust. We recommend not overly relying on GPS, extensively using audio, and extending interfaces to represent GPS error.
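As one illustration of the recommendation above to represent GPS error rather than trusting a single fix, the following minimal Python sketch (a hypothetical example, not code from the project; thresholds and names are assumptions) treats a position as a point plus an error radius and reports game-zone membership as definite, possible, or outside:

# Hypothetical sketch: classify zone membership under GPS uncertainty.
import math

def classify_zone_membership(player_xy, gps_error_m, zone_center_xy, zone_radius_m):
    """Return 'inside', 'possible', or 'outside' given positional uncertainty."""
    d = math.hypot(player_xy[0] - zone_center_xy[0], player_xy[1] - zone_center_xy[1])
    if d + gps_error_m <= zone_radius_m:
        return "inside"        # even the worst-case fix lies within the zone
    if d - gps_error_m <= zone_radius_m:
        return "possible"      # the error circle overlaps the zone boundary
    return "outside"

# Example: a player ~10.8 m from the zone centre with an 8 m error radius.
print(classify_zone_membership((10.0, 4.0), gps_error_m=8.0,
                               zone_center_xy=(0.0, 0.0), zone_radius_m=15.0))

Surfacing the "possible" state to players or orchestrators is one simple way an interface can represent GPS error instead of hiding it.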
Conference Paper
Full-text available
Physical and social interactions are constrained, and natural interactions are lost, in most present digital family entertainment systems [5]. Magic Cubes strive to bring computer storytelling, the doll's house, and the board game back into reality so that children can interact socially and physically as we did in the old days. Magic Cubes are novel augmented reality systems that explore the use of cubes to interact with a three-dimensional virtual fantasy world. Magic Cubes encourage discussion, idea exchange, collaboration, and social and physical interactions among families.
Conference Paper
Full-text available
In this paper we present the results of a qualitative, empirical study exploring the impact of immersive technologies on presence and engagement, using the interactive drama Façade as the object of study. In this drama, players are situated in a married couple's apartment, and interact primarily through conversation with the characters and manipulation of objects in the space. We present participants' experiences across three different versions of Façade - augmented reality (AR) and two desktop computing based implementations, one where players communicate using speech and the other using typed keyboard input. Through interviews and observations of players, we find that immersive AR can create an increased sense of presence, confirming generally held expectations. However, we demonstrate that increased presence does not necessarily lead to more engagement. Rather, mediation may be necessary for some players to fully engage with certain interactive media experiences.
Conference Paper
Full-text available
The augurscope is a portable mixed reality interface for outdoors. A tripod-mounted display is wheeled to different locations and rotated and tilted to view a virtual environment that is aligned with the physical background. Video from an onboard camera is embedded into this virtual environment. Our design encompasses physical form, interaction and the combination of a GPS receiver, electronic compass, accelerometer and rotary encoder for tracking. An initial application involves the public exploring a medieval castle from the site of its modern replacement. Analysis of use reveals problems with lighting, movement and relating virtual and physical viewpoints, and shows how environmental factors and physical form affect interaction. We suggest that problems might be accommodated by carefully constructing virtual and physical content.
Conference Paper
Full-text available
There is a growing interest in developing technologies for creating interactive dramas (13, 22). Evaluating them, however, remains an open research problem. In this paper, we present a method for evaluating the technical and design approaches employed in a conversation-centered interactive drama. This method correlates players' subjective experience during conversational breakdowns, captured using retrospective protocols, with the corresponding AI processing in the input language understanding and dialog management subsystems. The methodology is employed to analyze conversation breakdowns in the interactive drama Façade. We find that the narrative cues offered by an interactive drama, coupled with believable character performance, can allow players to interpretively bridge system limitations and avoid experiencing a conversation breakdown. Further, we find that, contrary to standard practice for task-oriented conversation systems, using shallowly understood information as part of the system output hampers the player experience in an interactive drama.
Conference Paper
Full-text available
We present Muddleware, a communication platform designed for mixed reality multi-user games for mobile, lightweight clients. An approach inspired by Tuplespaces, which provides decoupling of sender and receiver, is used to address the requirements of a potentially large number of mobile clients. A hierarchical database built on XML technology allows convenient prototyping and simple, yet powerful queries. Server-side extensions address persistence and autonomous behaviors through hierarchical state machines. The architecture has been tested with a number of multi-user games and is also used by non-entertainment applications.
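The following minimal Python sketch illustrates the general idea of a tuplespace-style, hierarchically organized game-state store queried with XPath-like expressions, loosely inspired by the description above; it is not the Muddleware API, and the element names and queries are hypothetical:

# Illustrative sketch: a tiny hierarchical state store with XPath-style reads,
# so that writers and readers are decoupled (a reader only needs a query,
# not a reference to whoever produced the state).
import xml.etree.ElementTree as ET

class HierarchicalStateStore:
    def __init__(self):
        self.root = ET.Element("game")

    def write(self, parent_path: str, tag: str, **attrs) -> ET.Element:
        """Add a node under the first element matching parent_path ('.' = root)."""
        parent = self.root if parent_path == "." else self.root.find(parent_path)
        if parent is None:
            raise KeyError(f"no element matches {parent_path!r}")
        return ET.SubElement(parent, tag, {k: str(v) for k, v in attrs.items()})

    def read(self, xpath: str):
        """Return all elements matching the (limited) XPath expression."""
        return self.root.findall(xpath)

store = HierarchicalStateStore()
store.write(".", "players")
store.write("players", "player", id="1", score=0, zone="courtyard")
store.write("players", "player", id="2", score=5, zone="hall")

# Decoupled reader: any client can query state without knowing who wrote it.
for p in store.read(".//player[@zone='hall']"):
    print(p.get("id"), p.get("score"))

A real implementation would add networking, change notification, and persistence, but the decoupling of sender and receiver through declarative queries is the property the abstract emphasizes.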
Conference Paper
Full-text available
It is rapidly becoming clear that entertainment will be one of the killer applications of future wireless networks. More specifically, mobile gaming is predicted to be worth $1.2 billion by the year 2006 to providers in the U.S. alone [20]. The driving force behind this is the introduction of powerful feature rich handsets and ubiquitous access to high performance wireless networks. However, mobile applications face issues that are subtly different from fixed network applications, including fluctuating connectivity, network QoS and host mobility issues. To investigate the requirements of future mobile applications we have deployed a wireless MAN consisting of GPRS and IEEE 802.11 hotspots based on Mobile IPv6 around the city of Lancaster and have built an augmented reality game designed to evaluate future mobile application requirements. In this paper we introduce Real Tournament, a prototype multi-player mobile game, which uses handheld computers augmented with an array of sensors to enable true mobile interaction in a real-world environment. We then evaluate current approaches to real-time interaction and follow by outlining our own architecture more suited to wireless environments and based on the peer-to-peer approach. The approach provides adaptation, shared state, and consistency mechanisms in order to provide support for scalable, low latency, soft real time mobile applications.
Conference Paper
Full-text available
Human Pacman is an interactive ubiquitous and mobile entertainment system that is built upon position and perspective sensing via Global Positioning System and inertia sensors; and tangible human-computer interfacing with the use of Bluetooth and capacitive sensors. Although these sensing-based subsystems are weaved into the fabric of the game and are therefore translucent to players, they are nevertheless the technical enabling forces behind Human Pacman. The game strives to bring the computer gaming experience to a new level of emotional and sensory gratification by embedding the natural physical world ubiquitously and seamlessly with a fantasy virtual playground. We have progressed from the old days of 2D arcade Pacman on screens, with incremental development, to the popular 3D game console Pacman, and the recent mobile online Pacman. With our novel Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile-gaming that emphasizes collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human-physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer graphics fantasy-reality provided by wearable computers equipped with GPS and inertia sensors for players' position and perspective tracking. Virtual cookies and actual tangible physical objects with Bluetooth devices and capacitive sensors are incorporated into the game play to provide novel experiences of seamless transitions between real and virtual worlds. In short, we believe Human Pacman is pioneering a new form of gaming that is based on sensing technology and anchored on physicality, mobility, social interaction, and ubiquitous computing.
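Outdoor games of this kind need to map GPS fixes into a local game frame before virtual items such as cookies can be placed relative to the players. The sketch below (a hypothetical illustration, not the Human Pacman implementation; coordinates and the pickup radius are made up) uses a simple equirectangular approximation, which is adequate over the few hundred metres of a wide-area game arena:

# Illustrative sketch: convert GPS lat/lon to local east/north metres and test
# proximity to a virtual item. Equirectangular approximation, fine for a
# small arena; not the actual game's tracking code.
import math

EARTH_RADIUS_M = 6_371_000.0

def gps_to_local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Return (east, north) offsets in metres from a chosen arena origin."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(origin_lat_deg), math.radians(origin_lon_deg)
    east = (lon - lon0) * math.cos(lat0) * EARTH_RADIUS_M
    north = (lat - lat0) * EARTH_RADIUS_M
    return east, north

def distance_m(a_xy, b_xy):
    return math.hypot(a_xy[0] - b_xy[0], a_xy[1] - b_xy[1])

# Hypothetical arena origin, player fix, and virtual cookie position:
origin = (1.29450, 103.77380)
pacman = gps_to_local_xy(1.29460, 103.77395, *origin)
cookie = gps_to_local_xy(1.29458, 103.77392, *origin)
if distance_m(pacman, cookie) < 3.0:   # assumed pickup radius in metres
    print("cookie collected")

In practice the inertia sensors mentioned in the abstract would be fused with such fixes to smooth position and supply heading, but the local planar frame is the common starting point.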
Conference Paper
Full-text available
Pervasive games provide a new type of game combining new technologies with the real environment of the players. While this already poses new challenges to the game developer, requirements are even higher for pervasive Augmented Reality games, where the real environment is additionally enhanced by virtual game items. In this paper we review the technological challenges to be met in order to realize pervasive AR games, show how they go beyond those of other pervasive games, and present how our AR framework copes with them. We further show how these approaches are applied to three pervasive AR games and draw conclusions regarding future requirements for supporting this type of game.
Conference Paper
Full-text available
In this paper we present the MAR (Mobile Augmented Reality) Toolkit, an easy-to-use augmented reality toolset for building multi-user mobile phone games. It is built on top of MUPE, a Nokia-developed open source mobile platform based on Java that takes into account the special qualities of mobile technology. MAR Toolkit contains four components: a map interface (MAP), a physical object tagger (POT), a public display (PUD), and a silent communicator (SIC). We have successfully demonstrated MAR Toolkit by implementing a game named Mupeland Yard, based on the classic board game Scotland Yard. In user testing we found that common usability issues related to mobile technology and the MUPE platform troubled the tests. However, the POT component in particular raised interest among developers. We found that using graphical 2D tags to provide location information for augmented reality games is a simple and robust alternative to the more technology-intensive GPS and cell-ID information.
Conference Paper
Full-text available
We introduce a local collaborative environment for gaming. In our setup, multiple users can interact with the virtual game and the real surroundings at the same time. They are able to communicate with other players during the game. We describe an augmented reality setup for multiple users with see-through head-mounted displays, allowing dedicated stereoscopic views and individualized interaction for each user. We use face-snapping for fast and precise direct object manipulation. With face snapping and the subdivision of the gaming space into spatial regions, the semantics of actions can be derived from the geometric actions of the user. Further, we introduce a layering concept allowing individual views onto the common data structure. The layer concept makes privacy management very easy by simply manipulating the common data structure. Moreover, by assigning layers to spatial regions carefully, special privacy management is often not necessary. Moving objects from one region into ...
Article
A new type of interactive attraction in a mixed reality environment, BLADESHIPS, has been designed and developed. BLADESHIPS is a game in which players compete with each other by controlling virtually expressed belt-shaped flying objects, "ships", with their hands in a real environment. They control their own ships by avoiding other ships and real/virtual obstacles, and try to drive enemy ships into hitting the obstacles. The occlusion between real and virtual objects and the collision of virtual ships with the real environment are represented to increase the realism of the game.
Article
This paper presents a prototype developed as part of the Backseat gaming project. The aim of the project is to explore how to make use of mobile properties for developing compelling and fun game experiences. The prototype is developed for use in a highly mobile situation, that of a car passenger, and is realized by the use of mobile devices and the user's physical location while travelling at speed to merge the virtual content and the surrounding road context into an augmented reality game.
Article
Computer games lack the social bonding and collective physical exercise benefits that sports provide. To overcome these limitations, we have been investigating how to apply the benefits of sport, in particular the workout and social bonding effect, in a distributed setting. We designed, developed, and evaluated Breakout for Two, which allows people who are miles apart to play a physically exhausting ball game together. We had over a thousand players who interacted through a life-size video-conference screen using a regular soccer ball as an input device. In an evaluative study, 56 players were interviewed and said that they got to know the other player better, had more fun, became better friends, and were happier with the transmitted audio and video quality in comparison to those who played the same game using a nonexertion keyboard interface. These results suggest that sports over a distance is an exciting new field with an "exertion interface" that encourages remote interaction, where players can achieve both a work-out and socializing.
Conference Paper
This paper presents ARCHEOGUIDE, a novel system offering augmented reality tours of archaeological sites. The system is based on wearable and mobile computers, networking technology, and real-time computer graphics, 3D animation, and visualization techniques. The user can take part in tours adapted to his profile and automatically receive information based on his position and orientation, as calculated by a hybrid technique making use of GPS, compass, and image-based tracking. The user can interact with the device via multi-modal interaction techniques and request navigation and other information. ARCHEOGUIDE has been tested at the archaeological site of Olympia in Greece.
Article
The University of South Australia has successfully developed through-walls collaboration, which allows users in the field to work in real time with users indoors who have access to reference materials, a global picture, and advanced technology. The technology supports ubiquitous workspaces, augmented reality (AR), and wearable computers. AR wearable computer technology provides digital images, videos, and voice information that are geospatially mapped to the recording point, giving control center personnel better situational awareness. The indoor system provides appropriate visualizations to support situational awareness for control room experts. A particularly powerful feature of the through-walls system is that both indoor and outdoor users can provide an AR overlay with multimedia data such as images, text, video, and sound for highlighting information.
Article
This paper introduces a collaborative shooting game - "RV-Border Guards," which uses Mixed Reality (MR) technologies. This system is designed to emphasize MR-specific features for entertainment. Three players wearing HMDs cooperatively battle with virtual invaders flying around them in the MR space. Each player is armed with a virtual gear such as a helmet and a gun, and can intuitively interact with the MR space using easy gestures. Total reality of the MR space is carefully tuned. This project tries to achieve a novel multi-player entertainment, which has never been realized without MR technologies.
Article
This sketch presents a prototype developed as part of the Backseat gaming project. The aim of the project is to explore how to make use of mobile properties for developing compelling and fun game experiences. The prototype is developed for use in a highly mobile situation, that of a car passenger, and is realized by the use of mobile devices and the user's physical location while travelling at speed to merge the virtual content and the surrounding road context into an augmented reality game. In this research, in addition to location, we also introduce variables such as speed, direction, timing, changing surroundings, fast movement of manipulable objects, and multiple entries and exits.
Article
An example of an application enhanced by the "manipulation-by-projection" technique, this cooperative game allows players to visually and intuitively control a robot with projectors. Players interchangeably move and connect their projected images to create a path that leads the robot to its goal.
Article
In this paper, the mobile outdoor gaming system ARQuake is discussed from an implementation point of view. The modifications to the original source code from Id Software are described, with a focus on the changes made for tracking devices, video overlays, firing weapons, and tweaks to the game to improve its visual quality. The game runs under GNU/Linux on a standard laptop mounted to a custom built backpack, containing a variety of equipment necessary to support mobile augmented reality.