Conference Paper

Transferability of Spatial Maps: Augmented Versus Virtual Reality Training

... A comparison of the effect of VR and AR spatial memory training on short-term and long-term memory has shown that, even though VR outperforms AR in the immediate post-training test, AR is better suited for long-term spatial memory transfer [57]. Nonetheless, physical displacements have been shown to be important in acquiring spatial ability skills [58]. ...
... Immediate feedback after actions can also be useful, as it can help users know when and why they succeeded or failed, recognize their errors, recover strategies, reduce their frustration and increase their motivation. To this purpose, multimodal feedback is often provided through congratulation messages, hints or events, such as objects disappearing when correctly selected or the sound of clapping hands to highlight the correctness of a given answer [4]- [6], [13], [20], [41], [43], [50], [57], [59], [106], [112], [125], [133], [162], [208], [210]- [212]. Finally, since the benefits of cognitive training might only be noticed in the long term, healthy competition is greatly appreciated, as it can positively affect patients' psychological well-being and promote social networking. ...
... [Excerpt from a table of interaction devices, the studies adopting them, and per-row study counts:]
- System for driving simulation: [67], [170], [...]
- Treadmill: [94], [115], [146], [151], [217], [218] (6)
- Pressure sensitive plate: [15], [22], [23], [88], [147], [148] (6)
- Cycle-ergometer: [50], [111], [149], [203], [208] (5)
- Total: 198
Touchless:
- Movement-based: sensorized gloves [191] (1); acceleration-based techniques [8], [12], [231] (3)
- EMG-based: EMG sensors [52], [125] (2)
- Vision-based: RGB, RGB-D or IR cameras with sensors or markers [6], [9], [25], [29], [30], [38], [40], [46], [55], [57], [60], [66], [71], [72], [74], [85], [92], [97], [145], [150], [152]-[154], [156]-[159], [173], [176], [181], [182], [191], [196], [197], [204], [206], [211], [219]-[221], [228], [230], [235] (43)
- Eye tracking-based: eye tracker [213], [233] (2)
- EEG-based: EEG wearable device [2], [51], [53], [72], [75], [93], [110], [139], [214] (9) ...
Article
Full-text available
Exergames and serious games, based on standard personal computers, mobile devices and gaming consoles or on novel immersive Virtual and Augmented Reality techniques, have become popular in the last few years and are now applied in various research fields, including the cognitive assessment and training of heterogeneous target populations. Moreover, the adoption of Web-based solutions, together with the integration of Artificial Intelligence and Machine Learning algorithms, could bring countless advantages for both patients and clinical personnel, such as allowing the early detection of some pathological conditions, improving the efficacy of and adherence to rehabilitation processes through the personalisation of training sessions, and optimizing the allocation of resources by the healthcare system. The current work proposes a systematic survey of existing solutions in the field of cognitive assessment and training. We evaluate the visualization and interaction technologies commonly adopted and the measures taken to fulfil the needs of the pathological target populations. Moreover, we analyze how implemented solutions are validated, i.e. the chosen experimental designs, data collection and analysis. Finally, we consider the availability of the applications and raw data to the large community of researchers and medical professionals, and the actual application of proposed solutions in standard clinical practice. Despite the potential of these technologies, research is still at an early stage. Although accessible immersive virtual reality headsets have recently been released and interest in vision-based techniques for tracking body and hand movements is growing, many studies still rely on non-immersive virtual reality (67.2%), mainly mobile devices and personal computers, and on standard gaming tools for interaction (41.5%). Finally, we highlight that although the research community's interest in this field keeps growing, the sharing of datasets (10.6%) and implemented applications (3.8%) should be promoted, and the number of healthcare structures that have successfully introduced the new technological approaches in the treatment of their patients remains limited (10.2%).
... Benefits of AR and VR for learning have been studied extensively, showing that both can support memorization [13][14][15][16][17]. It is important to note that the amount of time that passes before short-term and long-term recall tests varies depending on the study [14,[18][19][20][21]. Generally, short-term memory recall is considered to happen immediately after a memorization task (within 30 s), while long-term recall refers to any event where the memory is probed at a later time. ...
... Gacem et al. [14] compared the effects of different confirmation and highlighting techniques on learning the spatial location of elements around the user, but did not investigate the FOV. In a similar study, Caluya et al. [13] compared learning of spatial locations in AR and VR, and found that VR resulted in better short-term recall, while AR led to better long-term memorization. They also noted that although participants were asked to stand at a pre-defined location, they tended to step backwards in both environments to increase their view of the world. ...
... We considered the short-term performance of users exposed to many successive iterations of memory stimuli. Previous studies [13,14] showed that participants memorized more locations over multiple iterations of learning the locations. Furthermore, most studies mentioned in related work conducted memory tests immediately afterwards and in the same location. ...
Article
One of the main targets of criticism of head-mounted displays (HMDs) is the field of view (FOV) size, whether in virtual or augmented reality. This limitation is prominent with optical see-through head-mounted displays (OST-HMDs), as those with narrow overlay FOV (OFOV) sizes only provide a small window to view virtual objects. We investigated whether restricting this OFOV negatively affects a user's ability to memorize spatial locations in a simulation of a work environment, and consequently, long-term memory transfer to an equivalent scenario in the real world two days later. To find empirical evidence, we conducted a within-subjects experiment with 18 participants performing in three phases with an OST-HMD, simulated on an immersive HMD. For each phase, they viewed the training scenario with a different OFOV size of the augmentable area (30°, 70°, 110° diagonal). Results from recall tests showed that smaller OFOV sizes did not significantly affect users' performance on either the short-term or the transfer tests, but HMD data revealed that users rotated their heads less with a 110° OFOV. We also found that the proximity of the objects to be memorized had an interaction effect with smaller OFOV sizes. Our findings could have implications for the design and HMD choices of augmented training.
... In their recent work, Munoz et al. [41] presented a framework for developing an AR-based mobile application to assess human spatial short-term memory. In [42], Caluya et al. studied the impact of the spatial memory training framework on short- and long-term memory when conducted in AR and VR environments. ...
Preprint
Full-text available
Augmented reality (AR) offers novel ways to design, curate, and deliver information to users by integrating virtual, computer-generated objects into a real-world environment. This work presents an AR-based human memory augmentation system that uses computer vision (CV) and artificial intelligence (AI) to replace the internal mental representation of objects in the environment with an external augmented representation. The system consists of two components: (1) an AR headset and (2) a computing station. The AR headset runs an application that senses the indoor environment, sends data to the computing station for processing, receives the processed data, and updates the external representation of objects using a virtual 3D object projected into the real environment in front of the user's eyes. The computing station performs computer vision-based indoor environment self-localization, object detection, and object-to-location binding using first-person view (FPV) data received from the AR headset. We designed a behavioral study to evaluate the usability of the system. In a pilot study with 26 participants (12 females and 14 males), we investigated human performance in an experimental task that involved remembering the positions of objects in a physical space and displaying the positions of the learned objects on the two-dimensional (2D) map of the space. We conducted the studies under two conditions---that is, with and without using the AR system. We investigated the usability of the system, subjective workload, and performance variables under both conditions. The results showed that the AR-based augmentation of the mental representation of objects indoors reduced cognitive load and increased performance accuracy.
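The object-to-location binding described above can be pictured as a small external store that maps detected object labels to positions in a room-fixed coordinate frame, which the headset then uses to project a 3D marker at the remembered spot. The sketch below is only an illustration of that idea; the class and function names are assumptions, not the authors' implementation, and real detections would come from the computing station's CV pipeline.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]  # position in a room-fixed frame (metres)

@dataclass
class Detection:
    """One object detection reported by the computing station (hypothetical)."""
    label: str      # e.g. "keys"
    position: Vec3  # 3D position estimated from the first-person-view data

class ObjectLocationStore:
    """External representation of object locations (illustrative sketch only)."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Vec3] = {}

    def update(self, detection: Detection) -> None:
        # Bind (or re-bind) the object label to its most recently observed position.
        self._bindings[detection.label] = detection.position

    def recall(self, label: str) -> Optional[Vec3]:
        # Return the stored position so the headset can project a 3D marker there.
        return self._bindings.get(label)

# Illustrative use with made-up detections (not the authors' data or API):
store = ObjectLocationStore()
store.update(Detection("keys", (1.2, 0.8, 0.0)))
store.update(Detection("mug", (2.4, 0.9, 0.3)))
print(store.recall("keys"))  # -> (1.2, 0.8, 0.0)
```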
... [Excerpt from a table of related work:]
- [31] (2018): conducting experiments with the assistance of AR in conjunction with physical chemistry equipment
- [32] (2018): visualizing chemical molecular models by displaying them on labelled cards to help students learn chemistry more clearly
- [33] (2018): enabling users to understand laboratory safety rules with mixed reality technology through smart glasses ...
Article
Full-text available
Featured Application: Virtual chemical laboratory system based on augmented reality. Abstract: In the natural science curriculum, chemistry is a very important domain. However, when conducting chemistry experiments, safety issues need to be taken seriously, and excessive material waste may be caused during the experiment. Based on the science curriculum for 11-year-old students, this paper proposed a virtual chemistry laboratory designed by combining a virtual experiment application with physical teaching materials. The virtual experiment application created a virtual laboratory environment by using selected experimental equipment cards in combination with augmented reality (AR) technology. The physical teaching materials included all virtual equipment required for the experiment units. Each piece of equipment had corresponding cards for learners to choose from and use in specific experimental operations. It was hoped that students would be able to achieve the desired learning effectiveness of experimental teaching while reducing the waste of experimental materials through the virtual experimental environment. This study employed quasi-experimental and questionnaire survey methods to evaluate both learning effectiveness and learning motivation. Eighty-one students and eight elementary school teachers were surveyed as research subjects. The experimental results revealed that significant differences in learning effectiveness existed between the experimental group and the control group, indicating that the application of AR technology to teaching substantively helped enhance students' learning effectiveness and motivation. In addition, the results of the teacher questionnaire demonstrated that the virtual chemistry laboratory proposed in this study could effectively assist with classroom teaching.
... Matheis et al. (2007), for example, claimed that VR provides a viable medium for the learning and assessment of spatial memory ability. Similarly, Caluya et al. (2018) conducted an experiment to evaluate the impact of training on short-term and long-term memory. They found that VR-mediated training outperformed augmented reality-mediated training in a short-term memory test, while the opposite result was found in a long-term memory test two days later. ...
Article
Full-text available
Supporting smooth target acquisition is an important objective in immersive virtual reality (VR) environments. However, in traditional VR systems, users are obliged to search for objects relying on the visual channel. Such eyes-engaged technologies may significantly degrade the interaction efficiency and user experience, particularly when users have to turn their head frequently to search for virtual objects in the limited field of view of a head-mounted display. In this paper, we report a two-stage study which investigates the capability of VR users to acquire spatial targets without eye engagement (i.e., eyes-free target acquisition). First, we measure the eyes-free performance of users in terms of control accuracy and subjective task load. Second, we evaluate the effects of eyes-free acquisition on memory capacity, spatial offset, and task completion time in the context of a VR game. Starting from a set of 54 spatial positions, we identify 18 optimal locations (half on the left side of the user's body and half on the right) that allow both accurate and comfortable target acquisition without visual attention. After a short training period, users could accurately and quickly acquire 17 targets in a VR game with an average offset of 10.5 cm and an average completion time of 2.7 s. According to our results, we suggest how to optimize the spatial layout, number of targets, target locations, and interaction techniques for eyes-free acquisition in VR applications. Our work can serve as a foundation for future development of eyes-free methods of target acquisition in VR.
... In our work, we were interested in co-located, synchronous MR interaction that affords real-time coordination of short-term cooperative tasks. Such settings are relevant for many applications of AR support for cooperation, including making sense of physical and virtual environments [43], interaction on artistic installations [20], signaling joint attention [44], MR-assisted surgery [2], emergency room support [cf. 45], MR-assisted control room and simulation settings [e.g., 10], maintenance and other support settings [e.g., 15], co-located MR games [e.g., 24] and many other related tasks. ...
Article
Mixed reality (MR) cooperation scenarios are becoming more and more interesting for business and research as powerful wearable devices like head-mounted displays (HMDs) become commercially available. A lot of work focuses on remote MR cooperation settings, like remote maintenance support, and on co-located scenarios in which participants cooperate over a longer period of time. Despite this, MR also has great potential for supporting real-time co-located cooperation that requires short-term decisions and interactions. However, little is known about how this support can be provided. To bridge this gap, we conducted an experiment using an MR visual search task performed by dyads. Based on related work, visual search was chosen to represent typical challenges of short-term cooperative MR tasks. The aim of the experiment was to explore how the participants coordinate their searches and how this influences their performance in a task. We found that participants mainly used embodied and verbal cues to coordinate their searches (rather than virtual cues provided by the HMD) and that less communication worked significantly better, which is in (partial) contrast to existing findings. We discuss potential reasons for and impacts of these findings.
Article
Full-text available
Over the past decade, virtual reality (VR) has re-emerged as a popular technology trend. This is mainly due to the recent investments from technology companies that are improving VR systems while increasing consumer access and interest. Amongst many applications of VR, one area that is particularly promising is pedagogy. The immersive, experiential learning offered by VR provides new training and learning opportunities driven by the latest versions of affordable, highly immersive and easy-to-use head-mounted display (HMD) systems. VR has been tested as a tool for training across diverse settings with varying levels of success in the past. However, there is a lack of recent review studies that investigate the effectiveness, advantages, limitations, and feasibility of using VR HMDs in training. This review aims to investigate the extent to which VR applications are useful in training, specifically for professional skill and safety training contexts. In this paper, we present the results from a systematic review of the effectiveness of VR-based simulation training from the past 30 years. As a secondary aim, the methodological trends of application and practical challenges of implementing VR in training curricula were also assessed. The results suggest that there is generally high acceptance amongst trainees for VR-based training regardless of the technology limitations, usability challenges and cybersickness. There is evidence that VR is useful for training cognitive skills, such as spatial memory, learning and remembering procedures, and psychomotor skills. VR is also found to be a good alternative where on-the-job training is either impossible or unsafe to implement. However, many of the training effectiveness studies reviewed lack experimental robustness due to limited study participants and questionable assessment methods. These results map out the currently known strengths and weaknesses of VR HMDs and provide insight into required future research areas as the new era of VR HMDs evolves.
Conference Paper
Full-text available
Spatial memory is a powerful way for users to become expert with an interface, because remembering item locations means that users do not have to carry out slow visual search. Spatial learning in the real world benefits greatly from landmarks in the environment, but user interfaces often provide very few visual landmarks. In this paper we explore the use of artificial landmarks as a way to improve people's spatial memory in spatially-stable grid menus called CommandMaps. We carried out three studies to test the effects of three types of artificial landmarks (standard grid, simple anchor marks, and a transparent image) on spatial learning. We found that for small grid menus, the artificial landmarks had little impact on performance, whereas for medium and large grids, the simple anchor marks significantly improved performance. The simple visual anchors were faster and less error-prone than the visually richer transparent image. Our studies show that artificial landmarks can be a valuable addition to spatial interfaces.
Conference Paper
Full-text available
Research on how to take advantage of Augmented Reality and Virtual Reality applications and technologies in the domain of manufacturing has brought forward a great number of concepts, prototypes, and working systems. Although comprehensive surveys have taken account of the state of the art, the design space of industrial augmented and virtual reality keeps diversifying. We propose a visual approach towards assessing this space and present an interactive, community-driven tool which supports interested researchers and practitioners in gaining an overview of the aforementioned design space. Using such a framework, we collected and classified relevant publications in terms of application areas and technology platforms. This tool shall facilitate initial research activities as well as the identification of research opportunities. Thus, we lay the groundwork; forthcoming workshops and discussions shall address its refinement.
Conference Paper
Full-text available
The objective of this study is to investigate the relationship between the features of augmented reality (AR) and human memorization ability. This relationship rests on two features: AR can provide information associated with specific locations in the real world, and humans can easily memorize information if it is visually associated with specific locations. To investigate this relationship, we conduct a pilot user study in which blocks are picked from a set of drawers. As a result, significant differences are found between a situation in which visual information is displayed at the location of each drawer in the real world and one in which textual information is displayed at an unrelated location.
Conference Paper
Full-text available
This study aims to investigate the effectiveness of Augmented Reality (AR) on users' memory skills when it is used as an information display method. By definition, AR is a technology which overlays virtual images on the real world. These computer-generated images naturally contain location information about the real world. It is also known that humans can easily memorize and remember information if it is retained along with locations in the real world. Thus, we hypothesize that displaying annotations using AR may have a better effect on the user's memory skill if they are associated with the location of the target object in the real world rather than with an unrelated location. A user study was conducted with 30 participants in order to verify our hypothesis. As a result, a significant difference was found between the situation in which information was associated with the location of the target object in the real world and the one in which it was connected with an unrelated location. In this paper, we present the test results and explain the verification based on the results.
Article
Full-text available
In everyday life, the brain is bombarded with a multitude of concurrent and competing stimuli. Only some of these enter consciousness and memory. Attention selects relevant signals for in-depth processing depending on current goals, but also on the intrinsic properties of stimuli. We combined behavior, computational modeling, and functional imaging to investigate mechanisms supporting access to memory based on intrinsic sensory properties. During fMRI scanning, human subjects were presented with pictures of naturalistic scenes that entailed high levels of competition between possible target objects. Following a retention interval of 8 s, participants judged the location (same/different) of a target object extracted from the initial scene. We found that memory performance at retrieval increased with increasing object salience at encoding, indicating a "prior entry" for salient information. fMRI analyses revealed encoding-related activation in the posterior parietal cortex, selectively for salient objects that were later remembered. Moreover, parietal cortex showed increased functional coupling with the medial-temporal lobe, for remembered objects only. These findings reveal a parietotemporal circuit that integrates available sensory cues (based on attention-grabbing saliency signals) and current memory requirements (storing objects' locations) to encode object-related spatial information in working memory.
Article
Full-text available
Proposes a framework for the conceptualization of a broad range of memory phenomena that integrates research on memory performance in young children, the elderly, and individuals under stress with research on memory performance in normal college students. One basic assumption is that encoding operations vary in their attentional requirements. Operations that drain minimal energy from limited-capacity attentional mechanisms are called automatic. Automatic operations function at a constant level under all circumstances, occur without intention, and do not benefit from practice. Effortful operations, such as rehearsal and elaborative mnemonic activities, require considerable capacity, interfere with other cognitive activities also requiring capacity, are initiated intentionally, and show benefits from practice. A second assumption is that attentional capacity varies both within and among individuals. Depression, high arousal levels, and old age are variables thought to reduce attentional capacity. The conjunction of the two assumptions of the framework yields the prediction that the aged and individuals under stress will show a decrease in performance only on tasks requiring effortful processing. Evidence from the literature on development, aging, depression, arousal, and normal memory is presented in support of the framework, and four experiments with 301 subjects aged 5-40 years are described.
Conference Paper
Full-text available
Menus are a primary control in current interfaces, but there has been relatively little theoretical work to model their performance. We propose a model of menu performance that goes beyond previous work by incorporating components for Fitts' Law pointing time, visual search time when novice, Hick-Hyman Law decision time when expert, and for the transition from novice to expert behaviour. The model is able to predict performance for many different menu designs, including adaptive split menus, items with different frequencies and sizes, and multi-level menus. We tested the model by comparing predictions for four menu designs (traditional menus, recency and frequency based split menus, and an adaptive 'morphing' design) with empirical measures. The empirical data matched the predictions extremely well, suggesting that the model can be used to explore a wide range of menu possibilities before implementation. Author Keywords: Menus, Hick-Hyman Law, Fitts' Law, performance modelling, adaptive behaviour.
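To illustrate how such components can fit together, the sketch below combines a Fitts' Law pointing term with a decision term that blends novice visual search and expert Hick-Hyman decision time. The coefficient values and the linear blend controlled by an expertise parameter are illustrative assumptions, not the parameters or exact composition of the model proposed in the paper.

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Fitts' Law pointing time (s): a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def hick_hyman_time(n_items: int, a: float = 0.2, b: float = 0.15) -> float:
    """Hick-Hyman decision time (s): a + b * log2(n)."""
    return a + b * math.log2(n_items)

def visual_search_time(n_items: int, per_item: float = 0.25) -> float:
    """Novice linear visual search time (s); per-item cost is an assumed constant."""
    return per_item * n_items

def menu_selection_time(distance: float, width: float, n_items: int, expertise: float) -> float:
    """Predicted selection time, blending novice and expert decision components.

    expertise in [0, 1]: 0 = pure novice (visual search), 1 = pure expert (Hick-Hyman).
    The linear blend is an illustrative simplification of the novice-to-expert transition.
    """
    decision = (1 - expertise) * visual_search_time(n_items) + expertise * hick_hyman_time(n_items)
    return decision + fitts_time(distance, width)

# Example: a 12-item menu, target 300 px away, 24 px tall
for e in (0.0, 0.5, 1.0):
    print(f"expertise={e:.1f}: {menu_selection_time(300, 24, 12, e):.2f} s")
```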
Conference Paper
Full-text available
Interface designers normally strive for a design that minimises the user's effort. However, when the design's objective is to train users to interact with interfaces that are highly dependent on spatial properties (e.g. keypad layout or gesture shapes), we contend that designers should consider explicitly increasing the mental effort of interaction. To test the hypothesis that effort aids spatial memory, we designed a "frost-brushing" interface that forces the user to mentally retrieve spatial information, or to physically brush away the frost to obtain visual guidance. We report results from two experiments using virtual keypad interfaces - the first concerns spatial location learning of buttons on the keypad, and the second concerns both location and trajectory learning of gesture shape. The results support our hypothesis, showing that the frost-brushing design improved spatial learning. The participants' subjective responses emphasised the connections between effort, engagement, boredom, frustration, and enjoyment, suggesting that effort requires careful parameterisation to maximise its effectiveness. Author Keywords: Skill acquisition, education, training, gesture stroke, pen input, text entry, spatial memory, learning.
Article
Full-text available
Because fire rescue personnel often enter unfamiliar buildings to perform critical tasks like rescues, the importance of finding new and improved ways to train route navigation is becoming paramount. This research was designed to compare three methods for training firefighters to navigate a rescue route in an unfamiliar building. Thirty firefighters from the Madison County, Alabama, area were trained to navigate through the Administrative Science Building at The University of Alabama in Huntsville. The firefighters, who had not had any experience with the Administrative Science Building prior to the experiment, were randomly assigned to one of three experimental training groups: Blueprint, Virtual Reality, or No Training (Control). After training, we measured the total navigation time and number of wrong turns exhibited by firefighters in the actual building. Participants were required to rescue a mock baby (a life-sized doll) following the specific trained route. Measures of test performance were compared among groups by using analyses of variance (ANOVAs). The results indicated that firefighters trained with virtual reality or blueprints performed a quicker and more accurate rescue than those without training. Furthermore, the speed and accuracy of rescue performance did not differ significantly between virtual reality and blueprint training groups. These results indicate that virtual reality training, if constructed and implemented properly, may provide an effective alternative to current navigation training methods. The results are discussed with regard to theories of transfer of training and human performance in virtual environments.
Conference Paper
Full-text available
Virtual reality (VR) and augmented reality (AR - overlaying virtual objects onto the real world) offer interesting and widespread possibilities to study different components of human behaviour and cognitive processes. One aspect of human cognition that has been frequently studied using VR technology is spatial ability. Research ranges from training studies that investigate whether and/or how spatial ability can be improved by using these new technologies to studies that focus on specific aspects of spatial ability for which VR is an efficient investigational tool. In this paper we first review studies that used VR technologies to study different aspects of spatial ability. Then, results and findings are presented from one of the first large-scale studies (215 students) that investigated the potential of an AR application to train spatial ability. Author Keywords: Virtual reality, augmented reality, spatial ability, training.
Article
Full-text available
In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term "working memory") that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1.
Conference Paper
Various guidance techniques have been proposed to help users to quickly and effectively locate objects in large and dense environments such as supermarkets, libraries, or control rooms. Little research, however, has focused on their impact on learning. These techniques generally transfer control from the user to the system, making the user more passive and reducing kinesthetic feedback. In this paper, we present an experiment that evaluates the impact of projection-based guidance techniques on spatial memorization. We investigate the roles of user (handheld) vs. system control (robotic arm) guidance and of kinesthetic feedback on memorization. Results show (1) higher recall rates with system-controlled guidance, (2) no significant influence of kinesthetic feedback on recall under system control, and (3) the visibility and noticeability of objects impact memorization.
Conference Paper
NeverMind is an interface and application designed to support human memory. We combine the memory palace memorization method with augmented reality technology to create a tool to help anyone memorize more effectively. Preliminary experiments show that content memorized with NeverMind remains longer in memory compared to general memorization techniques. With this project, we hope to make the memory palace method accessible to novices and demonstrate one way augmented reality can support learning.
Article
Spatial memory is an important facet of human cognition – it allows users to learn the locations of items over time and retrieve them with little effort. In human-computer interfaces, a strong knowledge of the spatial location of controls can enable a user to interact fluidly and efficiently, without needing to visually search for relevant controls. Computer interfaces should therefore be designed to provide support for developing the user's spatial memory, and they should allow the user to exploit it for rapid interaction whenever possible. However, existing systems offer varying support for spatial memory. Many modern interfaces break the user's ability to remember spatial locations, by moving or re-arranging items; others leave spatial memory underutilised, requiring slow sequences of mechanical actions to select items rather than exploiting users' strong ability to index items and controls by their on-screen locations. The aim of this paper is to highlight the importance of designing for spatial memory in HCI. To do this, we examine the literature using an abstract-to-concrete approach. First, we identify important psychological models that underpin our understanding of spatial memory, and differentiate between navigation and object-location memory (with this review focusing on the latter). We then summarise empirical results on spatial memory from both the psychology and HCI domains, identifying a set of observable properties of spatial memory that can be used to inform design. Finally, we analyse existing interfaces in the HCI literature that support or disrupt spatial memory, including space-multiplexed displays for command and navigation interfaces, different techniques for dealing with large spatial data sets, and the effects of spatial distortion. We intend for this paper to be useful to user interface designers, as well as other HCI researchers interested in spatial memory. Throughout the text, we therefore emphasise important design guidelines derived from the work reviewed, as well as methodological issues and topics for future research.
Article
Augmented Reality is a promising technology for product lifecycle development, but it is still not established in industrial facilities. The most relevant issues to be addressed relate to ergonomics: avoiding the discomfort of Head-Worn Displays, allowing the operators to have free hands and improving data visualization. In this work we study the possibility of using projection-based Augmented Reality (projected AR) as an optimal solution for technical visualization on industrial workbenches. In particular, text legibility in projected AR is difficult to optimize since it is affected by many parameters: environment conditions, text style, and the material and shape of the target surface. This problem is poorly addressed in the literature and in this specific industrial field. We analyze the legibility of a set of colors prescribed by international standards for industrial environments on six widely used industrial workbench surfaces. We compared the performance of 14 subjects using projected AR with that using a traditional LCD monitor. We collected about 2500 measurements (times and errors) through the use of a test application, followed by qualitative interviews. The results showed that, as regards legibility, projected AR can be used in place of traditional monitors in most cases. Another non-trivial finding is that the influence on legibility of surface irregularities (e.g., grooves, prominences) is more important than that of surface texturization. A possible limitation for the use of projected AR is given by the blue color, whose performance turned out to be lower than that of other colors with every workbench surface.
Article
Smartphones are useful personal assistants and omnipresent communication devices. However, collaboration is not among their strengths. With the advent of embedded projectors this might change. We conducted a study with 56 participants to find out if map navigation and spatial memory performance among users and observers can be improved by using a projector phone with a peephole interface instead of a smartphone with its touchscreen interface. Our results show that users performed map navigation equally well on both interfaces. Spatial memory performance, however, was 41% better for projector phone users. Moreover, observers of the map navigation on the projector phone were 25% more accurate when asked to recall locations of points of interest after they watched a user performing map navigation.
Article
The Method of Loci (MOL) is an ancient mnemonic strategy used to enhance serial recall. Traditionally, the MOL is carried out by imagining navigating a familiar environment and "placing" the to-be-remembered items in specific locations. For retrieval, the mnemonist re-imagines walking through the environment, "looking" for those items in order. Here we test a novel MOL method, where participants use a briefly studied virtual environment as the basis for the MOL and applied the strategy to 10 lists of 11 unrelated words. When our virtual environments were used, the MOL was as effective, compared to an uninstructed control group, as the traditional MOL where highly familiar environments were used. Thus, at least for naïve participants, a highly detailed environment does not support substantially better memory for verbal serial lists.
Conference Paper
A theoretical account is presented on how locations of interface objects are learned and how the mechanisms underlying location learning interact with the representativeness of object labels. The account is embodied in a computational cognitive model built within the ACT-R/PM cognitive architecture [1, 2] and is supported by point-of-gaze and performance data collected in empirical research. The model interacts with the same software under the same experimental task conditions as study participants and replicates both performance and the finer-grained point-of-gaze data. Drawing from the data and model, location learning is characterized as a process that occurs as a by-product of interaction such that, without specific intent to do so, users can gradually learn the locations of the interface objects to which they attend. Characteristics of the user interface shape this learning process, however, by constraining the set of possible strategies for interaction. Locations are learned more quickly when the least-effortful strategy available in the interface explicitly requires retrieval of location knowledge.
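ACT-R models typically capture this kind of gradual, use-driven learning through base-level activation, which rises with each occasion a chunk (here, an object's location) is attended or retrieved and decays with time. The sketch below computes the standard ACT-R base-level learning equation; it is a generic illustration of that mechanism, not the specific model described above.

```python
import math
from typing import Sequence

def base_level_activation(times_since_use: Sequence[float], decay: float = 0.5) -> float:
    """ACT-R base-level activation: B = ln(sum_j t_j^(-d)).

    times_since_use: elapsed times (s) since each occasion the location was
    attended or retrieved; decay d = 0.5 is the conventional default.
    """
    return math.log(sum(t ** (-decay) for t in times_since_use))

# A location attended on every trial (many short lags) ends up more active than
# one attended only once, i.e. it is retrieved faster and more reliably.
frequent = base_level_activation([5, 20, 40, 70, 100])
rare = base_level_activation([100])
print(f"frequently attended: {frequent:.2f}, rarely attended: {rare:.2f}")
```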
Conference Paper
In this paper we present an experiment that aims at understanding the influence that (visual) grid-based structuring of user interfaces can have on spatial and content memory. By the term grid we refer to two different aspects. On the one hand, it relates to structured alignment, the layout of objects on a canvas. On the other hand, a grid can also be indicated visually by inserting lines that form an array which divides a canvas into smaller fields. In both cases we detected a strong positive influence on spatial memory. On content memory, however, grids have a less beneficial influence: the structured alignment has a positive effect only if grid lines are visible, yet the visibility of grid lines always leads to worse results in content memory performance, independent of the spatial arrangement.
Conference Paper
Order picking is one of the most important process steps in logistics. Because of their flexibility, human beings cannot be replaced by machines. But if workers in order picking systems are equipped with a head-mounted display, Augmented Reality can improve the information visualization. In this paper the development of such a system - called Pick-by-Vision - is presented. The system is evaluated in a user study performed in a real storage environment. Important logistics figures as well as subjective measures were collected. The results show that a Pick-by-Vision system can considerably improve industrial order picking processes.
Article
Order picking is one of the most important process steps in logistics. Due to their flexibility, human beings cannot be replaced by machines. But if workers in order picking systems are equipped with a head-mounted display, Augmented Reality can improve the information visualization. In this paper the development of such a Pick-by-Vision system is presented. It is evaluated in a user study performed in a real storage environment. Important logistics figures as well as subjective measures were collected. The results show that Pick-by-Vision can improve order picking processes considerably.
Article
Many studies of bottom-up visual attention have focused on identifying which features of a visual stimulus render it salient--i.e., make it "pop out" from its background--and on characterizing the extent to which salience predicts eye movements under certain task conditions. However, few studies have examined the relationship between salience and other cognitive functions, such as memory. We examined the impact of visual salience in an object-place working memory task, in which participants memorized the position of 3-5 distinct objects (icons) on a two-dimensional map. We found that their ability to recall an object's spatial location was positively correlated with the object's salience, as quantified using a previously published computational model (Itti et al., 1998). Moreover, the strength of this relationship increased with increasing task difficulty. The correlation between salience and error could not be explained by a biasing of overt attention in favor of more salient icons during memorization, since eye-tracking data revealed no relationship between an icon's salience and fixation time. Our findings show that the influence of bottom-up attention extends beyond oculomotor behavior to include the encoding of information into memory.
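The reported relationship amounts to correlating per-object salience scores (e.g. from an Itti-Koch-style saliency model) with spatial recall error, while checking that salience is unrelated to fixation time during encoding. The sketch below shows that analysis shape on synthetic numbers; it is not the study's data or code.

```python
import numpy as np
from scipy import stats

# Synthetic per-object values for illustration only (not the study's data).
salience      = np.array([0.12, 0.35, 0.48, 0.62, 0.71, 0.80, 0.15, 0.55])  # model salience
recall_error  = np.array([3.1, 2.6, 2.2, 1.8, 1.5, 1.2, 2.9, 1.9])          # placement error (cm)
fixation_time = np.array([410, 395, 420, 405, 415, 400, 390, 410])          # ms during memorization

# Higher salience should go with lower recall error (better memory)...
r_err, p_err = stats.pearsonr(salience, recall_error)
# ...while salience should be unrelated to fixation time if overt attention is not the cause.
r_fix, p_fix = stats.pearsonr(salience, fixation_time)

print(f"salience vs recall error:  r={r_err:.2f} (p={p_err:.3f})")
print(f"salience vs fixation time: r={r_fix:.2f} (p={p_fix:.3f})")
```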
Article
1. The accuracy with which subjects pointed to targets in extrapersonal space was assessed under a variety of experimental conditions. 2. When subjects pointed in the dark to remembered target locations, they made substantial errors. Errors in distance, measured from the shoulder to the target, were sometimes as much as 15 cm. Errors in direction, also measured from the shoulder, were smaller. 3. An analysis of the information transmitted by the location of the subject's finger about the location of the target showed that the information about the target's distance was consistently lower than the information about its direction. 4. The errors in distance persisted when subjects had their arm in view and pointed in the light to remembered target locations. 5. The errors were much smaller when subjects used a pointer to point to the target or when they were asked to reproduce the position of their finger after it had been passively moved to the target. 6. From these findings we conclude that subjects have a reasonably accurate visual representation of target location and are able to effectively use kinesthetically derived information about target location. We therefore suggest that errors in pointing result from errors in the sensorimotor transformation from the visual representation of the target location to the kinematic representation of the arm movement.
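The distance/direction decomposition used here is purely geometric: both error components are measured from the shoulder, so distance error is the difference in radial distance and direction error is the angle between the shoulder-to-target and shoulder-to-finger vectors. A minimal sketch with made-up coordinates:

```python
import numpy as np

def pointing_errors(shoulder: np.ndarray, target: np.ndarray, finger: np.ndarray):
    """Decompose a pointing response into distance and direction errors,
    both measured from the shoulder as in the study described above."""
    to_target = target - shoulder
    to_finger = finger - shoulder
    # Distance error: difference in radial distance from the shoulder (cm).
    distance_error = np.linalg.norm(to_finger) - np.linalg.norm(to_target)
    # Direction error: angle between the two shoulder-centred vectors (degrees).
    cos_angle = np.dot(to_target, to_finger) / (np.linalg.norm(to_target) * np.linalg.norm(to_finger))
    direction_error = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return distance_error, direction_error

# Illustrative coordinates in cm (not data from the study).
shoulder = np.array([0.0, 0.0, 0.0])
target   = np.array([40.0, 10.0, 30.0])
finger   = np.array([32.0, 9.0, 26.0])   # undershoots in distance, small angular error
d_err, a_err = pointing_errors(shoulder, target, finger)
print(f"distance error: {d_err:.1f} cm, direction error: {a_err:.1f} deg")
```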
Conference Paper
This paper presents cognitive studies and analyses relating to how augmented reality (AR) interacts with human abilities in order to benefit manufacturing and maintenance tasks. A specific set of applications is described in detail, as well as a prototype system and the software library that it is built upon. An integrated view of information flow to support AR is also presented, along with a proposal for an AR media language (ARML) that could provide interoperability between various AR implementations.
F. A. Yates. The Art of Memory. Random House UK, 1966.