Conference Paper · PDF Available

Vicarious Learning with a Digital Educational Game: Eye-Tracking and Survey-Based Evaluation Approaches


Abstract and Figures

The paper presents an empirical study with a digital educational game (DEG) called 80Days that aims at teaching geographical content. The goal of the study is twofold: (i) investigating the potential of the eye-tracking approach for evaluating DEGs; (ii) studying the issue of vicarious learning in the context of DEGs. Twenty-four university students were asked to view videos of gameplay of two micro-missions of 80Days, which varied with regard to the position of the non-player character (NPC) window (i.e. lower right vs. upper left) and the delivery of cognitive hints (i.e. with vs. without) in this text window. Eye movements of the participants were recorded with an eye-tracker. Learning effect and user experience were measured with questionnaires and interviews. Significant differences between the pre- and post-learning assessment tests suggest that observers can benefit from passive viewing of the recorded gameplay. However, the hypotheses that the game versions with cognitive hints and with the NPC window in the upper left corner would induce stronger visual attention and thus a better learning effect were not supported.
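The pre- to post-test comparison reported above is typically run as a paired-samples t-test. A minimal sketch using only the standard library follows; the scores and the 0-10 scale are hypothetical, not the study's data:

```python
import math
from statistics import mean, stdev

# Hypothetical pre-/post-test scores for 8 observers (0-10 scale)
pre  = [4, 5, 3, 6, 4, 5, 2, 5]
post = [6, 7, 5, 8, 5, 7, 4, 6]

# Paired design: test whether the mean per-person gain differs from zero
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # paired-samples t statistic
print(f"mean gain = {mean(diffs):.2f}, t({n - 1}) = {t:.2f}")
```

With SciPy available, `scipy.stats.ttest_rel(post, pre)` yields the same statistic along with a p-value.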
... Previous research has already shown a relationship between the number of fixations and performance in different tasks (e.g., expert vs. novice problem solving: [59], [60]; a puzzle-like task: [54]; reading: [55]; watching a video with geographical content: [95]). In line with this, we observed that performance was lower for users who made more fixations while solving the task. ...
Article
Research on instructional design provides inconsistent results on the use of game elements in cognitive tasks or learning. Cognitive load theory suggests that game elements increase extraneous cognitive load and thus may distract the users. In contrast, from an emotional design perspective, the use of game elements is argued to increase performance by providing a more interesting and motivating task environment. To contribute to this debate, the current study investigated the effect of game elements on behavioral performance, attention, and motivation. We designed two versions of the number line estimation task—one with game elements and one without. Participants completed both versions of the task while their eye-fixation behavior was recorded. Results indicated that participants paid attention to game elements, that is, they fixated them, although they were not necessary to complete the task. However, no difference in estimation accuracy was observed between the two task versions. Moreover, the task version with game elements was rated to be more attractive, stimulating, and novel, and participants reported experiencing greater flow. In sum, these data indicate that game elements seem to capture attention but also increase motivational aspects of learning tasks rather than decreasing performance.
... However, for example Kickmeier-Rust, Hillemann, and Albert (2011) have shown that eye tracking can be successfully applied to measure the quality of serious games. Based on eye-tracking results, Law, Mattheiss, Kickmeier-Rust and Albert (2010) have argued that the layout of a game plays a bigger role than its content in capturing user attention. In general, for game-based learning research, eye tracking can provide new knowledge about how learning happens in games, what game elements can be used to enhance learning, how to focus the player's attention on important game elements, how to avoid evaluation gulfs, and how feedback and graphical implementations are perceived. ...
Article
In this paper we summarize first findings from eedu Elements user experience studies. eedu Elements is a game that makes the whole Finnish maths curriculum (primary school) available to players all over the world; it is optimized for tablets and smartphones. The game is based on the teachable agent approach, which means that in the game students can teach skills to their game characters. The research focused on evaluating the implementation of the game and exploring players' attitudes towards it. The participants were Finnish (N = 43) and Irish (N = 22) primary school students. Group interviews, eye tracking, and observation were used to evaluate the eedu Elements game. In general, the game was experienced as a good or excellent learning game in all the studied age groups. The results showed that it took less than two minutes to learn how to play the game, but the players still expected to learn the gameplay even faster. It seems that users' demands are nowadays so high that the learning curve should be very steep. Thus ease of use can be regarded as one of the crucial aspects influencing the diffusion of new game-based solutions. Furthermore, the eye-tracking results indicated that players' perception patterns varied a lot and some players even missed relevant information while playing. The results showed that eye tracking can provide important information about the quality of a game's design, which can be used to improve the game's user interface and gameplay. However, eye-movement data must be interpreted with care, because we cannot be sure that the player understands everything that he or she is paying attention to. Thus, eye tracking should be complemented with offline methods such as retrospective interviews.
Article
This paper presents the first findings from Math Elements user experience (UX) studies. Math Elements is a game that makes the whole Finnish maths K-2 curriculum (kindergarten and primary school grades 1 and 2) available to players all over the world. The game is based on the teachable agent approach, which means that in the game players can teach math skills to their game characters. The research focused on evaluating the implementation of the game, exploring players' opinions about it, and studying how the game fits into classroom use. The participants were Finnish (N = 111) and Irish (N = 42) primary school pupils. In both cases interviews, game log data, and observation were used to evaluate the UX. The Finnish study was conducted in two phases. First, one first-grade class (N = 23) participated in a focus-group study in which they played the Math Elements game in small groups; finally, eight of the pupils participated in an eye-tracking study. Second, the class introduced the game in their school, after which all first and second graders of the school played the game daily during a three-week period. The Irish case study differed from the Finnish study, and the results are not directly comparable: the Irish pupils (fourth and fifth graders) played the game for 50 minutes as part of their regular schoolwork. In general, Math Elements was experienced as an engaging learning game in all studied age groups, and it was found to fit well into classroom use in certain contexts. The paper presents the details of the conducted UX studies and discusses the meaning of UX in educational games.
Article
Virtual reality (VR) and game-based learning strategies have rarely been investigated together with a keen focus on motivational processing. This lack of understanding on motivational support of VR game-based learning has hindered the design of such environments to effectively and efficiently support intended learning processes. The study revealed relationships between learners’ motivational processing and perceived game features in a VR learning environment for delivering introductory archaeology content to college students. The first part of the study adopted the complementary concurrent mixed-method design, which applied qualitative results to clarify quantitative findings to delineate motivational support perceived by 40 participants. The second part employed quantitative survey data only from the same sample to reveal perceived game features and relationships between motivational support and game features. Findings suggest that learners’ motivational processing was supported by the Confidence and Satisfaction components of the ARCS motivational design model. Additionally, not all motivational components were supported by perceived game features according to multiple regression analyses. The discussion of the findings is focused on in what areas and to what extent multimedia-rich VR elements might compete with game-based learning in the same learning environment for learners’ limited cognitive and behavioral learning capacities.
Chapter
Computing machinery allows the creation of intelligent, personalized, adaptive systems and programs that consider the characteristics, interests, and needs of individual users and user groups. In the field of serious games, storytelling and gaming approaches are used as motivational instruments for suspenseful, engaging learning, or personalized training and healthcare. This chapter describes models and mechanisms for the development of personalized, adaptive serious games with a focus on digital educational games (DEG). First, the term adaptation is defined—both in general and in the context of games—and basic mechanisms such as the concept of flow are described. Then, player and learner models are analyzed for classification of player characteristics. For the control of serious games, adaptive storytelling and sequencing mechanisms are described. In particular, the concept of Narrative Game-based Learning Objects (NGLOBs) is presented, which considers the symbiosis of gaming, learning, and storytelling in the context of an adaptive DEG. Finally, the presented theoretical concepts, models, and mechanisms are discussed in the course of the 80Days project as a DEG best-practice example—which considers authoring, control, and evaluation aspects, and its practical implementation in 80Days using the authoring framework StoryTec.
Conference Paper
The paper addresses the need to enrich education with modern tools that enhance the learning process while complying with the individual preferences and abilities of the learner. It presents a conceptual adaptive data model designed for a serious computer game. The model applies biomedical indicators and stimuli affecting the ability to learn and concentrate.
Article
Educational game research tends to rely too often on behavioral activity rather than cognitive activity. How learning happens is methodologically very challenging to pin down and is therefore usually avoided. In this paper we tackle the game-based learning process with the eye-tracking method. In particular, the study focuses on exploring the meaning of cognitive feedback in the game-based learning process. Based on perceptual data we evaluate the effectiveness of cognitive feedback and identify game elements that may hinder the learning process. The results indicated that players' perception patterns vary a lot and some players even miss relevant information while playing. It seems that the sooner a player notices the cognitive feedback and grasps its meaning, the more effectively they can play the game. The signaling method should be strong enough to highlight all the necessary elements. On the other hand, extraneous elements should be eliminated from the game world in order to avoid incidental processing at crucial moments. The results also showed that eye tracking can provide important information about the game-based learning process and game designs. However, eye-movement data must be interpreted with care, because we cannot be sure that the player understands everything that he or she is paying attention to. Thus, eye tracking should be complemented with offline methods. In this study a retrospective interview was used as a complementary method; it turned out to be very useful and increased the validity of the results.
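How soon a player notices a feedback window is commonly quantified with the time-to-first-fixation metric. A minimal sketch follows; the fixation log, AOI coordinates, and function name are hypothetical illustrations, not taken from the study:

```python
# Hypothetical fixation log: (onset_ms, x, y, duration_ms)
fixations = [
    (0,   400, 300, 220),
    (240, 150, 520, 180),
    (440, 700,  80, 260),  # lands inside the feedback window
    (720, 680,  95, 310),
]

# Hypothetical area of interest for the feedback window: (x0, y0, x1, y1)
feedback_aoi = (640, 40, 800, 160)

def time_to_first_fixation(fixations, aoi):
    """Onset of the first fixation inside the AOI, or None if it was never fixated."""
    x0, y0, x1, y1 = aoi
    for onset, x, y, _dur in fixations:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return onset
    return None

print(time_to_first_fixation(fixations, feedback_aoi))  # -> 440
```

A late (or absent) first fixation on the feedback AOI would flag exactly the kind of missed information the study describes.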
Article
Full-text available
The challenge of educational game design is to develop solutions that please as many players as possible but are still educationally effective. How learning happens in games is methodologically very challenging to pin down and is therefore usually avoided. In this paper we tackle this challenge with the eye-tracking method. The aim of this research is to study the meaning of cognitive feedback in educational games and to evaluate the usefulness of eye tracking in game-based learning research and game design. Based on perceptual data we evaluated the playing behavior of 43 Finnish and Austrian children aged from 7 to 16. Four different games were used as test-beds. The results indicated that players' perception patterns varied a lot and some players even missed relevant information while playing. The results showed that extraneous elements should be eliminated from the game world in order to avoid incidental processing at crucial moments. Animated content easily grabs the player's attention, which may disturb learning activities. Especially low performers and inattentive players have difficulties distinguishing important from irrelevant content and tend to stick to salient elements regardless of their importance for a task. However, it is not reasonable to exclude all extraneous elements, because doing so decreases engagement and immersion. Thus, balancing extraneous and crucial elements is essential. Overall, the results showed that eye tracking can provide important information about the game-based learning process and game designs. However, the perceptual data must be interpreted with care, because we cannot be sure that the player understands everything that he or she is paying attention to. Thus, eye tracking should be complemented with offline methods like the retrospective interview that was successfully used in this research.
Conference Paper
This paper demonstrates the use of eye-tracking data to examine the effect of interacting with an adaptive educational game on learning. Empirical results indicate that there is no significant correlation between the length of fixation duration and the extent of knowledge gain, and that players paid attention to the instructions prior to gameplay. The challenge of constructing a predictive model is also undertaken.
Article
Full-text available
College students either interacted directly with an intelligent tutoring system, called AutoTutor, by contributing to mixed-initiative dialog, or they simply observed, as vicarious learners, previously recorded interactive sessions. The mean pretest-to-posttest effect size (Cohen's d) across two studies was 1.86 in the interactive conditions and 1.12 in standard vicarious conditions. In Experiment 1, redundant onscreen printed text produced an effect size of 0.43, but the difference was not significant. In addition, the image of a talking head presenting AutoTutor's contributions to the dialog while displaying facial expressions, gestures, and gaze did not produce learning gains beyond those produced by the voice alone. In Experiment 2, the effect size was 0.71 when interactive tutoring was contrasted with the standard vicarious condition, but only 0.38 when compared to a collaborative vicarious condition.
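The effect sizes quoted above are Cohen's d values. A minimal sketch of the pooled-standard-deviation form for two independent groups follows; the score lists are hypothetical, not the experiments' data:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (two independent groups)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled_var)

# Hypothetical posttest vs. pretest scores
posttest = [7.0, 8.0, 6.5, 9.0, 7.5]
pretest  = [5.0, 5.5, 4.5, 6.0, 5.0]
print(round(cohens_d(posttest, pretest), 2))
```

For within-subject pretest-to-posttest gains, as in these experiments, variants that standardize by the pretest or gain-score standard deviation are also common; the pooled form above is just the textbook baseline.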
Chapter
Full-text available
This section considers the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability and gaining insight into human performance) and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately, but this book seeks to tie them together. For usability analysis, the user's eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be one of several inputs, combined with mouse, keyboard, sensors, or other devices. Interestingly, the principal challenges for both retrospective and real-time eye tracking in human-computer interaction (HCI) turn out to be analogous. For retrospective analysis, the problem is to find appropriate ways to use and interpret the data; it is not nearly as straightforward as it is with more typical task performance, speed, or error data. For real-time use, the problem is to find appropriate ways to respond judiciously to eye-movement input and avoid over-responding; it is not nearly as straightforward as responding to well-defined, intentional mouse or keyboard input. We will see in this chapter how these two problems are closely related. These uses of eye tracking in HCI have been highly promising for many years, but progress in making good use of eye movements in HCI has been slow to date. We see promising research work, but we have not yet seen wide use of these approaches in practice or in the marketplace. We will describe the promises of this technology, its limitations, and the obstacles that must still be overcome. Work presented in this book and elsewhere shows that the field is indeed beginning to flourish.
Conference Paper
Full-text available
The World Wide Web has become a ubiquitous information source and communication channel. With such an extensive user population, it is imperative to understand how web users view different web pages. Based on an eye-tracking study of 30 subjects on 22 web pages from 11 popular web sites, this research intends to explore the determinants of ocular behavior on a single web page: whether it is determined by individual differences of the subjects, different types of web sites, the order of web pages being viewed, or the task at hand. The results indicate that gender of subjects, the viewing order of a web page, and the interaction between page order and site type influence online ocular behavior. Task instruction did not significantly affect web viewing behavior. Scanpath analysis revealed that the complexity of web page design influences the degree of scanpath variation among different subjects on the same web page. The contributions and limitations of this research, and future research directions, are discussed.
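Scanpath variation of the kind analyzed above is often measured with a string-edit (Levenshtein) distance over AOI sequences, where each fixated region is coded as a letter. A minimal sketch follows; the scanpath strings are hypothetical:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical scanpaths: each letter is an AOI visited in order
subject_1 = "ABCDB"
subject_2 = "ABDDC"
print(levenshtein(subject_1, subject_2))  # -> 2
```

Averaging pairwise distances across subjects on the same page gives a simple index of how much scanpaths diverge, which is one way the "degree of scanpath variation" above can be operationalized.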
Article
Full-text available
We investigated the impact of dialogue and deep-level-reasoning questions on vicarious learning in 2 studies with undergraduates. In Experiment 1, participants learned material by interacting with AutoTutor or by viewing 1 of 4 vicarious learning conditions: a noninteractive recorded version of the AutoTutor dialogues, a dialogue with a deep-level-reasoning question preceding each sentence, a dialogue with a deep-level-reasoning question preceding half of the sentences, or a monologue. Learners in the condition where a deep-level-reasoning question preceded each sentence significantly outperformed those in the other 4 conditions. Experiment 2 included the same interactive and noninteractive recorded condition, along with 2 vicarious learning conditions involving deep-level-reasoning questions. Both deep-level-reasoning-question conditions significantly outperformed the other conditions. These findings provide evidence that deep-level-reasoning questions improve vicarious learning.
Article
Full-text available
Two studies considered the interplay between user-perceived usability (i.e., pragmatic attributes), hedonic attributes (e.g., stimulation, identification), goodness (i.e., satisfaction), and beauty of 4 different MP3-player skins. As long as beauty and goodness stress the subjective valuation of a product, both were related to each other. However, the nature of goodness and beauty was found to differ. Goodness depended on both perceived usability and hedonic attributes. Especially after using the skins, perceived usability became a strong determinant of goodness. In contrast, beauty largely depended on identification; a hedonic attribute group, which captures the product's ability to communicate important personal values to relevant others. Perceived usability as well as goodness was affected by experience (i.e., actual usability, usability problems), whereas hedonic attributes and beauty remained stable over time. All in all, the nature of beauty is rather self-oriented than goal-oriented, whereas goodness relates to both.
Article
For hundreds of years verbal messages - such as lectures and printed lessons - have been the primary means of explaining ideas to learners. In Multimedia Learning Richard Mayer explores ways of going beyond the purely verbal by combining words and pictures for effective teaching. Multimedia encyclopedias have become the latest addition to students' reference tools, and the World Wide Web is full of messages that combine words and pictures. Do these forms of presentation help learners? If so, what is the best way to design multimedia messages for optimal learning? Drawing upon 10 years of research, the author provides seven principles for the design of multimedia messages and a cognitive theory of multimedia learning. In short, this book summarizes research aimed at realizing the promise of multimedia learning - that is, the potential of using words and pictures together to promote human understanding.
Article
Eye tracking is a technique whereby an individual’s eye movements are measured so that the researcher knows both where a person is looking at any given time and the sequence in which the person’s eyes are shifting from one location to another. Tracking people’s eye movements can help HCI researchers to understand visual and display-based information processing and the factors that may impact the usability of system interfaces. In this way, eye-movement recordings can provide an objective source of interface-evaluation data that can inform the design of improved interfaces. Eye movements also can be captured and used as control signals to enable people to interact with interfaces directly without the need for mouse or keyboard input, which can be a major advantage for certain populations of users, such as disabled individuals. We begin this article with an overview of eye-tracking technology and progress toward a detailed discussion of the use of eye tracking in HCI and usability research. A key element of this discussion is to provide a practical guide to inform researchers of the various eye-movement measures that can be taken and the way in which these metrics can address questions about system usability. We conclude by considering the future prospects for eye-tracking research in HCI and usability testing.
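Two of the basic eye-movement measures this literature relies on, fixation count and total dwell time per area of interest (AOI), reduce to a simple aggregation over a fixation log. The log below is hypothetical, assuming raw gaze coordinates have already been mapped to AOI labels:

```python
from collections import defaultdict

# Hypothetical fixation log: (aoi_label, duration_ms)
fixation_log = [
    ("menu", 180), ("map", 420), ("map", 310),
    ("npc_window", 260), ("map", 150), ("npc_window", 200),
]

dwell = defaultdict(int)   # total fixation time per AOI (ms)
count = defaultdict(int)   # number of fixations per AOI

for aoi, duration in fixation_log:
    dwell[aoi] += duration
    count[aoi] += 1

for aoi in dwell:
    print(f"{aoi}: {count[aoi]} fixations, {dwell[aoi]} ms total dwell")
```

Comparing such per-AOI totals across interface variants is the usual first step in the usability analyses the article describes.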
Article
Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and compared participants who were allowed interactive control of the visualization to those who were not allowed control. In Experiment 1, interactivity produced better performance than passive viewing, but the advantage of interactivity disappeared in Experiment 2 when visual input for the two conditions in a yoked design was equalized. In Experiments 2 and 3, differences in how interactive participants manipulated the visualization were large and related to performance. In Experiment 3, non-interactive participants who watched optimal movements of the display performed as well as interactive participants who manipulated the visualization effectively and better than interactive participants who manipulated the visualization ineffectively. Spatial ability made an independent contribution to performance on the spatial reasoning task, but did not predict patterns of interactive behavior. These experiments indicate that providing participants with active control of a computer visualization does not necessarily enhance task performance, whereas seeing the most task-relevant information does, and this is true regardless of whether the task-relevant information is obtained actively or passively.