Proceedings of the 1st International Joint Conference of DiGRA and FDG
© 2016 Authors. Personal and educational classroom use of this paper is allowed; commercial use requires specific permission from the author.
Bridging the Physical Learning
Divides: A Design Framework for
Embodied Learning Games and
Simulations
Edward Melcer
Game Innovation Lab, NYU Tandon School of Engineering
5 MetroTech Center
Brooklyn, NY 11201
eddie.melcer@nyu.edu
Katherine Isbister
Social Emotional Technologies Lab, Department of Computational Media
University of California, Santa Cruz
Santa Cruz, CA 95064
katherine.isbister@ucsc.edu
ABSTRACT
Due to the broad conceptual usage of the term embodiment across diverse research domains, existing embodied learning games and simulations utilize a large
breadth of design approaches that often result in seemingly unrelated systems. This
becomes problematic when trying to critically evaluate the usage and effectiveness of
embodiment within existing designs, as well as when trying to utilize embodiment in the
design of new games and simulations. In this paper, we present our work on combining
differing conceptual and design approaches for embodied learning systems into a unified
design framework. We describe the creation process for the framework, explain its
dimensions, and provide examples of its use. Our design framework will benefit
educational game researchers by providing a unifying foundation for the description,
categorization, and evaluation of designs for embodied learning games and simulations.
Keywords
Embodiment, Embodied Learning Games and Simulations, Design Framework
INTRODUCTION
Recent work on educational systems has shown the benefits of incorporating physicality,
motion, and embodiment into designs. Reported benefits include improved spatial recall and mental manipulation (Clifton, 2014; Rieser, Garing, & Young, 1994); more intuitive interfaces,
interactions, and mappings (Shelley, Lyons, Zellner, & Minor, 2011; Wyeth, 2008);
increased engagement (Bhattacharya, Gelsomini, Pérez-Fuster, Abowd, & Rozga, 2015;
Edge, Cheng, & Whitney, 2013; Yannier, Koedinger, & Hudson, 2013); greater positive
feelings towards learning content and science in general (Lindgren, Tscholl, & Moshell,
2013; Wei, Chen, & Chen, 2015; Yannier et al., 2013); and enhanced collaboration
(Ahmet, Jonsson, Sumon, & Holmquist, 2011; S. Price, Rogers, Scaife, Stanton, & Neale,
2003; Yannier et al., 2013). This direction stems from the concept that cognition does not occur only in the mind but is also supported by bodily activity (Shapiro, 2010), situated in and interacting with our physical and social environment (Clark, 2008; Dourish, 2001).
However, when examining existing embodied learning games and simulations closely,
we find a large breadth of designs that result in seemingly unrelated systems (see Figure
1). This becomes problematic when trying to understand where and how embodiment
occurs in these systems, and which design elements help to facilitate embodied learning.
The problem is further aggravated by limited empirical validation of many systems
(Zaman, Vanden Abeele, Markopoulos, & Marshall, 2012), and a broad conceptual usage
of embodiment and related terms across diverse domains such as Human-
Computer Interaction (HCI), learning science, neuroscience, linguistics, and philosophy
(Birchfield et al., 2008; Rohrer, 2007; Ziemke, 2002). Therefore, for designers seeking to
utilize embodiment (i.e., an emergent property from the interactions between brain, body,
and the physical/social environment [Hummels & van Dijk, 2014]), the differences in
approach to physicality, collaboration, and interaction pose a significant hurdle. One
approach that can bridge conceptual differences between existing systems and domains is
the creation of a design framework (Ens & Hincapié-ramos, 2014; Robinett, 1992).
Figure 1: A spectrum of different embodied learning
systems. Left to right - Interactive Slide (Malinverni,
López Silva, & Parés, 2012), Electronic Blocks (Wyeth,
2008), Embodied Poetry (Kelliher et al., 2009),
SpatialEase (Edge et al., 2013), Eco Planner (Esteves &
Oakley, 2011).
BACKGROUND
Our goal in providing an embodied learning design framework is to bridge conceptual
gaps and resulting design choices made from the differing uses of embodiment in various
domains. In this section we present an overview of design frameworks, embodiment and
its application in educational games and simulations, and embodied learning taxonomies.
Design Frameworks
Design frameworks can help designers conceptualize nuances of particular technologies
and formalize the creative process (Ens & Hincapié-ramos, 2014). In interface design,
design frameworks have been used to provide terminology to categorize ideas (B. A.
Price, Baecker, & Small, 1993) as well as organize complex concepts into logical
hierarchies (Plaisant, Carr, & Shneiderman, 1995). Design frameworks are created by
treating a set of taxonomical terms as orthogonal dimensions in a design space, and the
resulting matrix provides structure for classification and comparison of designs (Robinett,
1992). The completed design framework provides a means to critically examine designs
of existing systems and encourage new designs by providing a unifying foundation for
the description and categorization of systems. Furthermore, the methodical filling-in of
this structure helps to categorize existing concepts, differentiate ideas, and identify
unexplored terrain (Ens & Hincapié-ramos, 2014).
Embodiment, Embodied Cognition, and Embodied Interaction in
Educational Games and Simulations
Embodiment and related terms such as embodied cognition and embodied interaction
have many different interpretations and applications across a wide range of academic
domains. HCI tends to view embodiment from a phenomenological perspective where
embodiment is a physical and social phenomenon that unfolds in real time and space as a
part of the world in which we are situated (Dourish, 2001). However, learning science
views tend to be oriented more purely toward the body as the central focus of embodiment
(Johnson-Glenberg, Birchfield, Tolentino, & Koziupa, 2014; Rohrer, 2007). Moreover,
Ziemke (2002) has noted this divide, identifying six different uses of
embodiment across research domains (i.e., structural coupling, historical embodiment,
physical embodiment, organismoid embodiment, organismic embodiment, and social
embodiment). In order to encompass a large corpus of embodied designs in our design
framework, we take a broad perspective of embodiment: centering it around the notion
that human reasoning and behavior are connected to, or influenced by, our bodies and their
physical/social experience and interaction with the world (S. Price & Jewitt, 2013). This
is seen as an iterative relationship, where reasoning and behavior can shape interaction as well as the other way around; the relationship is also complex because of the context, time, space, emotion, etc. in which interaction is situated.
Embodied cognition is a similarly important but divided term for education, with Wilson
(2002) identifying six distinct views of embodied cognition where 1) cognition is
situated; 2) cognition is time-pressured; 3) we off-load cognitive work onto the
environment; 4) the environment is part of the cognitive system; 5) cognition is for
action; and 6) off-line cognition is body-based. In learning science, embodied cognition
considers how human cognition is fundamentally grounded in sensory-motor processes
and in our body's internal states (Ionescu & Vasc, 2014). As a result of this body-centric
perspective, learning science games and simulations explicitly addressing embodied
cognition tend to focus on the utilization of sensors to map full-body interaction and
congruency to learning content through the use of gestures (Barendregt & Lindström,
2012; Howison, Trninic, Reinholz, & Abrahamson, 2011; Johnson-Glenberg et al., 2014),
or to track whole-body enactment of learning material (Hatton, Campana, Danielescu, &
Birchfield, 2009; Lindgren et al., 2013). Conversely, HCI and subdomains such as
Tangible Embodied Interaction (TEI) view embodied cognition from a body-in-action
perspective where cognition is a coordination achieved through our brain, our body, and
the dynamic relationships between our body and the physical- and social environment
(Clark, 1997; Hummels & van Dijk, 2014). The resulting embodied cognition oriented
games and simulations in HCI and TEI tend to focus on a more social and collaborative
design, with sensors utilizing physical action as input into virtual or mixed reality worlds
(Clifton, 2014; Mickelson & Ju, 2011; Nakayama et al., 2014).
Embodied interaction is a term coined by Dourish (2001) to capture a number of research
trends and ideas in HCI around tangible computing, social computing, and ubiquitous
computing. It refers to the creation, manipulation, and sharing of meaning through
engaged interaction with artifacts (Dourish, 2001), and includes material objects and
environments in the process of meaning making and action formation (Streeck, Goodwin,
& LeBaron, 2011). Games and simulations utilizing embodied interaction tend to place
the player in a physical space where they can physically manipulate interactive tangible
tabletops, blocks, and objects (Bakker, Hoven, & Antle, 2011; Chu, Clifton, Harley,
Pavao, & Mazalek, 2015; Esteves & Oakley, 2011; Rikić, 2013).
Embodied Learning Taxonomies
Similar to the many interpretations of embodiment, embodied learning frameworks and
taxonomies also have vastly different interpretations of physicality, motion,
collaboration, and interaction. Johnson-Glenberg et al. (2014) created an embodied
learning taxonomy that specifies the strength of embodiment as a combination of the
amount of motoric engagement, gestural congruency to learning content, and immersion.
Black et al. (2012) created the Instructional Embodiment Framework (IEF), which consists
of various forms of physical embodiment (i.e., direct, surrogate, and augmented) as well
as imagined embodiment (i.e., explicit and implicit) where the individual can embody
action and perception through imagination. In the TEI field, Fishkin's taxonomy (2004)
for the analysis of tangible interfaces views embodiment as the distance between input
and output where embodiment can be full (output device is input device), nearby (output
is directly proximate to input device), environmental (output is "around" the user), or
distant (output is on another screen or in another room). A related framework by Price
(2008) for tangible learning environments focuses on different possible artifact-
representation combinations and the role that they play in shaping cognition. The
physical-digital links of these combinations are conceptualized into four distinct
dimensions: location—the different location couplings between physical artifacts and
digital representations; dynamics—the flow of information during interaction (e.g., is
feedback immediate or delayed); correspondence—the degree to which the physical
properties of objects are closely mapped to the learning concepts; and modality—
different representation modalities in conjunction with artifact interaction.
TOWARDS A DESIGN FRAMEWORK FOR EMBODIED LEARNING
GAMES AND SIMULATIONS
Creating the Design Framework
To create our design framework, we conducted an extensive literature review for
published examples of embodied learning games and simulations in venues such as CHI,
TEI, FDG, and Interaction Design and Children (IDC). Notably, all games are embodied to some extent. Therefore, for the purposes of this research, only papers that explicitly mentioned embodiment or related terms (e.g., embodied learning, embodied cognition, embodied interaction, etc.) were collected and used in the literature
review. We also performed a tree search of references and citations from the initial papers
collected and seminal papers concerning embodiment. In addition, we examined related
frameworks and taxonomies in subdomains and communities such as TEI (Fishkin, 2004;
O’Malley & Fraser, 2004; S. Price, 2008), embodiment and embodied learning (Black et
al., 2012; Johnson-Glenberg et al., 2014), and mixed reality (Ens & Hincapié-ramos,
2014; Rogers, Scaife, Gabrielli, Smith, & Harris, 2002). Our final list contains papers
describing designs for a total of 48 different embodied learning games and simulations
(for the complete list of designs and their categorization within our design framework, go
to: http://edwardmelcer.net/research/supplementary_framework_table.pdf). This list is
not intended to be exhaustive, but does represent a diverse selection of designs that could
be drawn upon when creating a design framework. Bottom-up open coding was then
performed following the process described by Ens & Hincapié-ramos (2014) in order to
distill a set of 25 candidate dimensions that fit concepts found in the reviewed literature
and designs. Candidate dimensions were iteratively reduced and combined into a set
small enough for a concise framework. Afterwards, we presented our framework to
experts in HCI, game design, and learning science for feedback and additional
refinements. The final design framework consists of the 7 dimensions shown in Table 1. We
further organized the dimensions into three groups based on their overarching design
themes within the construct of embodiment (i.e., physical body and interactions, social
interactions, and the world where interaction is situated).
Group                  Dimension        Values
Physical Interaction   Physicality      Embodied, Enacted, Manipulated, Surrogate, Augmented
                       Transforms       PPt, PDt, DPt
                       Mapping          Discrete, Co-located, Embedded
                       Correspondence   Symbolic, Literal
Social Interaction     Mode of Play     Individual, Collaborative, Competitive
                       Coordination     Other Player(s), NPC(s), None
World                  Environment      Physical, Mixed, Virtual
Table 1: Our design framework for embodied learning systems. Similar dimensions are
clustered under a group based on an overarching design theme, and the different values
for each dimension are shown.
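Because the framework treats a set of taxonomical terms as orthogonal dimensions with enumerated values, it lends itself to a simple machine-readable encoding. The following sketch is ours and purely illustrative (the paper prescribes no implementation, and all class and field names are our own naming); it captures Table 1 as one enumerated type per dimension plus a record type for a single design.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the seven dimensions in Table 1.
# Value names follow the table; class and field names are our own.

class Physicality(Enum):
    EMBODIED = "embodied"
    ENACTED = "enacted"
    MANIPULATED = "manipulated"
    SURROGATE = "surrogate"
    AUGMENTED = "augmented"

class Transform(Enum):
    PPT = "physical action -> physical effect"
    PDT = "physical action -> digital effect"
    DPT = "digital action -> physical effect"

class Mapping(Enum):
    DISCRETE = "discrete"
    CO_LOCATED = "co-located"
    EMBEDDED = "embedded"

class Correspondence(Enum):
    SYMBOLIC = "symbolic"
    LITERAL = "literal"

class ModeOfPlay(Enum):
    INDIVIDUAL = "individual"
    COLLABORATIVE = "collaborative"
    COMPETITIVE = "competitive"

class Coordination(Enum):
    OTHER_PLAYERS = "other player(s)"
    NPCS = "NPC(s)"
    NONE = "none"

class Environment(Enum):
    PHYSICAL = "physical"
    MIXED = "mixed"
    VIRTUAL = "virtual"

@dataclass(frozen=True)
class EmbodiedLearningDesign:
    """One system described as a point in the seven-dimensional design space."""
    name: str
    physicality: Physicality
    transform: Transform
    mapping: Mapping
    correspondence: Correspondence
    mode_of_play: ModeOfPlay
    coordination: Coordination
    environment: Environment
```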
Design Space Dimensions
Physicality describes how learning is physically embodied in a system and consists of
five distinct values. 1) The embodied value refers to an embodied cognition and learning
science approach where the body plays the primary constituent role in cognition (Shapiro,
2010). This form of embodiment focuses on gestural congruency and how the body can
physically represent learning concepts (Johnson-Glenberg et al., 2014). For instance, a
full body interaction game where players contort their bodies to match letters shown on a
screen (Paul, Goh, & Yap, 2015). 2) The enacted value refers to Direct Embodiment from
the IEF (Black et al., 2012), and to enactivism which focuses on knowing as physically
doing (Holton, 2010; Li, 2012). This form of embodiment focuses more on
acting/enacting out knowledge through physical action of statements or sequences. For
example, a gravitational physics game where players walk along (i.e., enact) the trajectory
an asteroid would travel in the vicinity of planets and their gravitational forces (Lindgren
et al., 2013). 3) The manipulated value refers to the tangible embodied interactions of
TEI (Marshall, Price, & Rogers, 2003) and the use of manipulatives in learning science
(Pouw, van Gog, & Paas, 2014). This form of embodiment arises from utilization of
embodied metaphors and interactions with physical objects (Bakker, Antle, & van den
Hoven, 2012), and the objects' physical embodiment of learning concepts (Ishii, 2008; S.
Price, 2008). 4) The surrogate value refers to the IEF concept of Surrogate Embodiment,
where learners manipulate a physical agent or "surrogate" representative of themselves to
enact learning concepts (Black et al., 2012). This form of embodiment is often used in
systems with an interactive physical environment that is directly tied to a real-time virtual
simulation (Gnoli et al., 2014; Kuzuoka, Yamashita, Kato, Suzuki, & Kubota, 2014). 5)
The augmented value refers to the IEF notion of Augmented Embodiment, where
combined use of a representational system (e.g., avatar) and augmented feedback system
(e.g., Microsoft Kinect and TV screen) embed the learner within an augmented reality
system. This form of embodiment is most commonly found in systems where learners'
physical actions are mapped as input to control digital avatars in virtual environments
(Lyons, Silva, Moher, Pazmino, & Slattery, 2013; Nakayama et al., 2014).
Transforms conceptualize a space, describing the relationships between physical or
digital actions and the resulting physical or digital effects in the environment (Rogers et
al., 2002). We utilize the transform types of Physical action => Physical effect (PPt),
Physical action => Digital effect (PDt), and Digital action => Physical effect (DPt) from
Rogers et al. (2002) to describe the many forms of existing systems.
Mapping borrows the notion of Embodiment from Fishkin's (2004) taxonomy and Location from Price's (2008) tangible learning environment framework, which describe the different spatial locations of output in relation to the object or action triggering the effect (i.e., how input is spatially mapped to output). Mappings can be discrete—input
and output are located separately (e.g., an action triggers output on a nearby screen); co-
located—input and output are contiguous (e.g., an action triggers output that is directly
adjacent or overlaid on the physical space); and embedded—input and output are
embedded in the same object.
Correspondence builds upon the notion of Physical Correspondence from Price's (2008)
tangible learning environment framework which refers to the degree to which the
physical properties of objects are closely mapped to the learning concepts. We expand
this concept to also include physical actions (e.g., congruency of gestures or physical
manipulations to learning concepts). Correspondence can be symbolic—objects and
actions act as common signifiers to the learning concepts (e.g., arranging programming
blocks to learn coding); or literal—physical properties and actions are closely mapped to
the learning concepts and metaphor of the domain (e.g., playing an augmented guitar to
learn finger positioning).
Mode of Play specifies how individuals socially interact and play within a system. The
system can facilitate individual, collaborative, or competitive play for learner(s). Plass et al. (2013) found differing learning benefits for each mode of play, suggesting it is also an
important dimension to consider for learning outcomes.
Coordination highlights how individuals in a system may have to socially coordinate
their actions (Oullier, de Guzman, Jantzen, Lagarde, & Kelso, 2008) in order to
successfully complete learning objectives. Social coordination can occur with other
players and/or in a socio-collaborative experience with digital media typically in the form
of NPCs (Tolentino, Savvides, & Birchfield, 2010). Conversely, social coordination can also be a limited focus of a design, and may not occur or even be supported.
Environment refers to the learning environment in which the educational content is
situated. Environments can be either physical, mixed, or virtual (Rogers et al., 2002).
While transforms conceptualize a space through the description of actions and effects, the
environment dimension focuses on the actual space where learning occurs. For instance, a
PDt transform can occur in drastically different learning environments (see Figure 2). In
some systems, a player's physical actions are tracked but only used as input to control a
virtual character in a virtual environment (Lyons et al., 2013). In other systems, the
player's physical actions are tracked and mapped to control digital effects overlaid on an
augmented physical space or mixed reality environment (Lindgren et al., 2013). Others
still have players situated in a completely physical environment where their physical
actions are tracked primarily to keep score or digitally maintain information related to
learning content that is displayed during the interaction (Gnoli et al., 2014).
Figure 2: Three systems illustrating PDt transforms in
different learning environments. Left - physical actions
are mapped as input into a virtual environment (Lyons et
al., 2013). Middle - physical actions are mapped as input
into a mixed reality environment that is overlaid on
physical space (Lindgren et al., 2013). Right - physical
actions occur in a physical learning environment and are
only tracked to digitally maintain and display
information related to the physical interaction (Gnoli et
al., 2014).
APPLYING THE DESIGN FRAMEWORK FOR EMBODIED LEARNING
GAMES AND SIMULATIONS
Example 1 - Categorizing Existing Games and Simulations
One fundamental feature of any framework is its descriptive capability. To exemplify
how designs of existing embodied learning games and simulations can be described using
our framework, we applied it to the 48 systems identified in our earlier literature review.
For each design, we assigned dimensional values and cataloged the results (see
http://edwardmelcer.net/research/supplementary_framework_table.pdf). This methodical
approach provided us with a means to systematically compare and contrast the different
designs (Ens & Hincapié-ramos, 2014). One important point to note is that our
framework does not perfectly partition every design into dimensional values. There were
some cases where multiple values within a dimension would match a single design or the
design description would leave a chosen value open to interpretation. However, we
believe these minor discrepancies are acceptable, since the intent of a design framework is to make the designer aware of important design choices and help them weigh the potential benefits of these choices, rather than to provide a set of arbitrary sorting
bins (Ens & Hincapié-ramos, 2014).
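As an illustration of what this systematic comparison can look like in practice, the sketch below is ours, building on the encoding given after Table 1. It seeds a catalog with one encoded design and groups entries that share a dimensional signature; SpatialEase's physicality, transform, mapping, and environment values come from the category descriptions below, while values the paper leaves unstated are flagged as assumptions in the comments.

```python
from collections import defaultdict

# Hypothetical catalog of encoded designs; in practice this would hold all
# 48 reviewed systems. SpatialEase's first four dimension values follow the
# Full-Body Congruency description; the remaining three are assumptions.
catalog = [
    EmbodiedLearningDesign(
        name="SpatialEase",
        physicality=Physicality.EMBODIED,       # full-body congruency
        transform=Transform.PDT,                # body motion -> digital feedback
        mapping=Mapping.DISCRETE,               # sensor input, output on a screen
        correspondence=Correspondence.LITERAL,  # assumption: body mirrors target pose
        mode_of_play=ModeOfPlay.INDIVIDUAL,     # assumption: not stated in the paper
        coordination=Coordination.NONE,         # assumption: not stated in the paper
        environment=Environment.MIXED,          # augmented video feedback of the player
    ),
    # ... the remaining cataloged designs would follow.
]

def group_by_signature(designs):
    """Group designs that share physicality, mapping, and environment."""
    groups = defaultdict(list)
    for d in designs:
        groups[(d.physicality, d.mapping, d.environment)].append(d.name)
    return dict(groups)

print(group_by_signature(catalog))
```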
During the analysis and cataloging process, clusters of similar designs emerged that were reasonably described by 9 distinct categories (see Figures 3 & 4). We found the majority of reviewed designs (42 of 48) to be a very good fit for one of the categories, even though the 9 categories represent only a small portion of the full design space expressed by the
framework. Similar to the assignment of dimensional values, categories are not absolute.
Therefore, we include designs with minor variations in a category so long as they fit
closely to the overall characteristics of that group.
Figure 3: A parallel coordinates graph showing the
categories found during analysis of existing designs that
utilize embodied and enacted physicality.
Embodied Physicality Categories (Figure 3)
Full-Body Congruency describes designs that employ full-body interactions with all or a
portion of the body being utilized as input into a mixed reality environment. The mapping
of input to output is discrete and sensor-based (e.g., utilizing some form of IR or
computer vision tracking), where players see augmented video feedback of themselves
moving to match virtual objects or actions depicted on a screen. The educational focus of
these systems is on mirroring a learning concept through bodily or gestural congruency,
and instances include using the body to match shapes of alphabet letters (Edge et al.,
2013; Paul et al., 2015; Yap, Zheng, Tay, Yen, & Do, 2015) and geometric shapes
(Mickelson & Ju, 2011).
Finger-Based Congruency is conceptually similar to full-body congruency in that the
educational focus of designs is on mirroring a learning concept through physical or
gestural congruency. However, the interaction focus is instead on usage of fingers to
achieve this congruency. This results in an embedded mapping of input to output on a
physical device (e.g., tablet) where gameplay is situated in a virtual environment.
Examples of this design category include usage of fingers to represent the numbers in a
part-whole relation (Barendregt & Lindström, 2012) and the velocity of a moving object
(Davidsson, 2014).
Enacted Physicality Categories (Figure 3)
Whole-Body Position is one of the largest sets of systems categorized (7 designs) and
focuses on tracking simple aspects of a player's body, such as their location in physical
space, to enact learning concepts in a mixed reality environment. These systems typically
focus on augmenting the physical space with a co-located mapping of input through
motion tracking and output through top down projections (Kelliher et al., 2009; Lindgren
et al., 2013) or through different modalities such as sound (Antle, Droumeva, & Corness,
2008).
Embedded Phenomena is a class of simulations that embed imaginary dynamic
phenomena—scientific or otherwise—into the physical space of classrooms (Moher,
Hussain, Halter, & Kilb, 2005). As a result of this design approach, interaction revolves
around enacting techniques performed by real-world professionals in order to measure and utilize devices embedded into the physical classroom environment that provide augmented feedback about a specific phenomenon. Examples of this design category
include simulations of earthquake trilateration (Moher et al., 2005) and subterranean
water flow (Novellis & Moher, 2011).
Figure 4: A parallel coordinates graph showing the
categories found during analysis of existing designs that
utilize manipulated, surrogate, and augmented
physicality.
Manipulated Physicality Categories (Figure 4)
Tangible Blocks describe designs that utilize notions of tangibility and embodied
interaction from HCI and TEI communities combined with concepts of modularity from
Computer Science. Players physically manipulate/program a set of tangible blocks with
embedded sensing capabilities and feedback systems. These blocks interact within the
physical environment and are usually symbolically representative of physical computing
concepts (Schweikardt & Gross, 2008; Wyeth & Purchase, 2002; Wyeth, 2008).
Tangible Tabletops describe designs that similarly utilize notions of tangibility and
embodied interaction, but instead focus on the usage of symbolic tangibles or gestures in
conjunction with a virtual world displayed on an interactive tabletop. These setups are
commonly found in public spaces such as museums and typically facilitate large scale
social interactions. Tangible tabletop designs have been employed to teach educational
concepts around energy consumption (Esteves & Oakley, 2011), nanoscale (MoraGuiard
& Pares, 2014), and African concepts for mapping history (Chu et al., 2015).
Tangible Objects describe designs that utilize tangibles and embodied interaction as
input into virtual learning environments. Physical manipulation of the tangible object
results in a discrete and intuitive mapping to a virtual representation of learning content.
Tangible object designs have been utilized to teach a variety of concepts such as urban
planning (Shelley et al., 2011) and heart anatomy (Skulmowski, Pradel, Kühnert,
Brunnett, & Rey, 2016).
Surrogate Physicality Categories (Figure 4)
Tangible Spaces build upon a space-centered view of tangible embodied interaction
where interactive spaces rely on combining physical space and tangible objects with
digital displays (Hornecker & Buur, 2006). The design focus is on creating a tangible
physical environment for the player to actively manipulate—complete with a physical
surrogate avatar that the player controls—and discretely mapping physical changes in that
space to a virtual world that either mirrors or augments the physical one. Tangible spaces
have been used to teach programming (Fernaeus & Tholander, 2006), animal foraging
behavior (Gnoli et al., 2014), and diurnal motion of the sun (Kuzuoka et al., 2014).
Augmented Physicality Categories (Figure 4)
"Touchless" Motion-Based designs employ a discrete mapping of players' physical
actions as input into a virtual world. The use of a "touchless" interaction paradigm
exploits sensing devices which capture, track, and decipher body movements and gestures so that players do not need to wear additional aids (Bartoli, Corradi, Milano, &
Valoriani, 2013). Unlike full-body congruency, the focus is not on mirroring a learning concept through the body, but instead on mapping a player's physical actions to control a digital avatar in the virtual world. As a result, rather than seeing a video of
themselves, players will see silhouettes, digital avatars, or a first-person perspective.
These systems have been utilized to teach concepts around geometric shapes (Kynigos,
Smyrnaiou, & Roussou, 2010), climate change (Lyons et al., 2013), and peer-directed
social behaviors (Bhattacharya et al., 2015).
Example 2 - Identifying Problematic Design Spaces
One benefit of our design framework is that it allows us to systematically examine design
elements of existing systems, identifying potential problematic design spaces. As an
example of this usage, we examine the Tangible Earth system (see Figure 5) where the
authors had to create and use an assessment framework to identify/understand problems
the system encountered (Kuzuoka et al., 2014). Tangible Earth is designed to support
learning of the sun's diurnal motion and the earth's rotation. It consists of a doll-like avatar, a globe and rotating table to represent the earth and its rotation, an electric light representing the sun, and a laptop running a VR universe simulator. Learners would physically manipulate the rotation of the earth and the position/rotation of the avatar to observe simulated changes in the sun's position from the avatar's perspective.
Figure 5: The Tangible Earth embodied learning system
(Kuzuoka et al., 2014).
One of the more significant problems identified by Kuzuoka et al. (2014) for Tangible
Earth was that learners spent very little time looking at the tangibles themselves (e.g.,
globe, lamp, and avatar), instead focusing primarily on the VR simulation in the laptop.
This proved to be especially problematic for manipulation of the avatar, where users
would frequently forget the position of its body and orientation of its head. This often
caused the sun to appear or disappear unexpectedly in the simulation, confusing learners and obscuring the learning concepts. By analyzing this issue with our design framework, we identified a
potential problematic design space (see Figure 6). Learners had difficulty remembering
the position of a physical agent representative of themselves (surrogate embodiment)
because all of their physical actions were mapped to digital effects (PDt) in a simulated
world (virtual environment). This difficulty makes sense, considering that remembering the
physical position/orientation of a surrogate avatar in both the real world and the virtual
world simultaneously would introduce a significant amount of extraneous cognitive load
(Plass, Moreno, & Brünken, 2010). As a result, our design framework suggests that the
intersection of surrogate embodiment, PDt transforms, and virtual environments is a
problematic design space that should be carefully considered when designing future
embodied learning systems.
Figure 6: Problematic design space identified by
Tangible Earth (Kuzuoka et al., 2014).
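For concreteness, Tangible Earth can also be written in the encoding sketched after Table 1. The physicality, transform, and environment values below follow the analysis above, and the mapping value follows the Tangible Spaces category description; the remaining values are our illustrative reading rather than the authors' categorization.

```python
# Tangible Earth in the Table 1 encoding (illustrative; see assumptions below).
tangible_earth = EmbodiedLearningDesign(
    name="Tangible Earth",
    physicality=Physicality.SURROGATE,      # doll-like avatar stands in for the learner
    transform=Transform.PDT,                # physical manipulation drives digital effects
    mapping=Mapping.DISCRETE,               # output appears on a separate laptop screen
    correspondence=Correspondence.LITERAL,  # assumption: globe and lamp mirror earth and sun
    mode_of_play=ModeOfPlay.INDIVIDUAL,     # assumption: not stated in the paper
    coordination=Coordination.NONE,         # assumption: not stated in the paper
    environment=Environment.VIRTUAL,        # learning content lives in the VR simulation
)

def in_problematic_space(design):
    """The surrogate + PDt + virtual intersection flagged in Figure 6."""
    return (design.physicality is Physicality.SURROGATE
            and design.transform is Transform.PDT
            and design.environment is Environment.VIRTUAL)

assert in_problematic_space(tangible_earth)
```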
Example 3 - Identifying Design Gaps
Another benefit of our design framework is that it allows us to methodically fill in the
framework with existing systems to identify gaps and unexplored terrain (Ens &
Hincapié-ramos, 2014). As an illustration of this usage, we fill in example pairings
between the two dimensions of Physicality and Transforms (see Table 2). This provides
examples of relevant combinations between these two dimensions in the embodied
learning systems literature.
Transforms   Physicality    Example systems
PPt          Enacted        Scratch Direct Embodiment (Fadjo & Black, 2012)
PPt          Manipulated    Electronic Blocks (Wyeth, 2008); roBlocks (Schweikardt & Gross, 2008)
PDt          Embodied       SpatialEase (Edge et al., 2013); Word Out! (Paul et al., 2015); Mathematical Imagery Trainer (Howison et al., 2011)
PDt          Enacted        Embodied Poetry (Hatton et al., 2009); AquaRoom (Novellis & Moher, 2011); MEteor (Lindgren et al., 2013)
PDt          Manipulated    Mapping Place (Chu et al., 2015); MoSo Tangibles (Bakker et al., 2011); Eco Planner (Esteves & Oakley, 2011)
PDt          Surrogate      Hunger Games (Gnoli et al., 2014); Tangible Programming Space (Fernaeus & Tholander, 2006); Tangible Earth (Kuzuoka et al., 2014)
PDt          Augmented      Human SUGOROKU (Nakayama et al., 2014); Bump Bash (Bartoli et al., 2013); Sorter Game (Kynigos et al., 2010)
DPt          Augmented      ALERT (Lahey, Burleson, Jensen, Freed, & Lu, 2008)
(All remaining Physicality and Transforms pairings are unfilled design gaps.)
Table 2: Example pairings between the Physicality and Transform dimensions.
Examining Table 2, we find several design gaps for existing embodied learning games
and simulations. Some of the more potentially useful pairings in the identified design
gaps are Embodied + PPt, Manipulated + DPt, Surrogate + PPt, and Surrogate + DPt,
where interesting future system designs could evolve from utilizing one of these pairings.
For instance, using a Surrogate + PPt pairing could lead to the design of physically
embodied educational board games. Additionally, a Surrogate + DPt pairing could lead to
an asymmetric computational thinking game where one player controls and interacts with
a physical avatar while another player digitally designs the physical courses and obstacles
for the first player to complete.
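This filling-in can also be mechanized. Given a catalog of encoded designs, the unexplored pairings are simply the cross product of two dimensions minus the pairings actually observed. A minimal sketch, building on the earlier encoding and the hypothetical `catalog` list from the first example:

```python
from itertools import product

def physicality_transform_gaps(designs):
    """Return every Physicality x Transform pairing with no cataloged design."""
    observed = {(d.physicality, d.transform) for d in designs}
    return [pair for pair in product(Physicality, Transform)
            if pair not in observed]

# With the full 48-design catalog, this would reproduce the gaps discussed
# above, e.g., Embodied + PPt and Surrogate + DPt.
for phys, trans in physicality_transform_gaps(catalog):
    print(f"Design gap: {phys.name} + {trans.name}")
```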
CONCLUSION AND FUTURE WORK
In this paper, we presented our design framework for embodied learning games and
simulations based on a detailed analysis of 48 existing embodied learning systems and
related frameworks/taxonomies from subdomains and communities such as TEI, HCI,
embodiment and embodied learning, and mixed reality. A design framework allows us to
systematically understand, analyze, and differentiate design elements of existing
embodied learning systems. This ultimately aids us in determining where and how
embodiment occurs in an educational system, and guides the application of specific
design choices in future systems. Future work will build games and simulations
addressing design gaps and problematic spaces identified by our framework, and test the
efficacy and learning outcomes of these systems. In broader application, this design
framework can also be used to guide construction of systems that methodically examine
questions of when and how embodied learning should be used within games/simulations, which will help to further ground the framework and clarify its interpretation.
BIBLIOGRAPHY
Ahmet, Z., Jonsson, M., Sumon, S. I., & Holmquist, L. E. (2011). Supporting embodied
exploration of physical concepts in mixed digital and physical interactive settings.
In Proceedings of TEI ’11.
Antle, A. N., Droumeva, M., & Corness, G. (2008). Playing with The Sound Maker: Do
Embodied Metaphors Help Children Learn? In Proceedings of the 7th international
conference on Interaction design and children - IDC ’08 (p. 178).
Bakker, S., Antle, A. N., & van den Hoven, E. (2012). Embodied metaphors in tangible
interaction design. In Personal and Ubiquitous Computing (Vol. 16).
Bakker, S., Hoven, E. van den, & Antle, A. N. (2011). MoSo Tangibles: Evaluating
Embodied Learning. In Proceedings of the fifth international conference on
Tangible, embedded, and embodied interaction - TEI ’11 (pp. 85–92).
Barendregt, W., & Lindström, B. (2012). Development and evaluation of Fingu: a
mathematics iPad game using multi-touch interaction. In Proceedings of the 11th
International Conference on Interaction Design and Children (pp. 204–207).
Bartoli, L., Corradi, C., Milano, P., & Valoriani, M. (2013). Exploring Motion-based
Touchless Games for Autistic Children’s Learning. In Proceedings of the 12th
International Conference on Interaction Design and Children (pp. 102–111).
Bhattacharya, A., Gelsomini, M., Pérez-Fuster, P., Abowd, G. D., & Rozga, A. (2015).
Designing Motion-Based Activities to Engage Students with Autism in Classroom
Settings. In IDC 2015 (pp. 69–78).
Birchfield, D., Thornburg, H., Megowan-Romanowicz, M. C., Hatton, S., Mechtley, B.,
Dolgov, I., & Burleson, W. (2008). Embodiment, Multimodality, and Composition:
Convergent Themes across HCI and Education for Mixed-Reality Learning
Environments. Advances in Human-Computer Interaction, 2008, 1–19.
Black, J. B., Segal, A., Vitale, J., & Fadjo, C. L. (2012). Embodied cognition and learning
environment design. In Theoretical foundations of learning environments (pp. 198–
223).
Chu, J. H., Clifton, P., Harley, D., Pavao, J., & Mazalek, A. (2015). Mapping Place:
Supporting Cultural Learning through a Lukasa-inspired Tangible Tabletop
Museum Exhibit. In Proceedings of the 9th international conference on Tangible,
embedded, and embodied interaction - TEI ’15 (pp. 261–268).
Clark, A. (1997). Being there: Putting brain, body, and world together again. MIT Press.
Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford University Press.
Clifton, P. (2014). Designing embodied interfaces to support spatial ability. In
Proceedings of TEI ’14 (pp. 309–312).
Davidsson, M. (2014). Finger Velocity – A Multimodal Touch Based Tablet Application
for Learning the Physics of Motion. In Mobile as a Mainstream–Towards Future
Challenges in Mobile Learning (pp. 238–249).
Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction.
Edge, D., Cheng, K., & Whitney, M. (2013). SpatialEase: Learning Language through
Body Motion. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’13) (pp. 469–472).
Ens, B., & Hincapié-ramos, J. D. (2014). Ethereal Planes: A Design Framework for 2D
Information Spaces in 3D Mixed Reality Environments. In Proceedings of the 2nd
ACM symposium on Spatial user interaction.
Esteves, A., & Oakley, I. (2011). Design for interface consistency or embodied
facilitation? In CHI 2011 Embodied Interaction: Theory and Practice in HCI
Workshop (pp. 1–4).
Fadjo, C. L., & Black, J. B. (2012). You’re In the Game: Direct Embodiment and
Computational Artifact Construction. In Proceedings of the International
Conference of the Learning Sciences: Future of Learning (Vol. 2: Symposia).
Fernaeus, Y., & Tholander, J. (2006). Finding Design Qualities in a Tangible
programming space. In CHI 2006 Proceedings ’06 (pp. 447–456).
Fishkin, K. P. (2004). A taxonomy for and analysis of tangible interfaces. Personal and
Ubiquitous Computing, 8(5), 347–358.
Gnoli, A., Perritano, A., Guerra, P., Lopez, B., Brown, J., & Moher, T. (2014). Back to
the future: Embodied Classroom Simulations of Animal Foraging. In Proceedings of
the 8th International Conference on Tangible, Embedded and Embodied Interaction
- TEI ’14 (pp. 275–282).
Hatton, S., Campana, E., Danielescu, A., & Birchfield, D. (2009). Stratification:
Embodied Poetry Works by High School Students. In Proceedings of the 5th ACM
Conference on Creativity & Cognition (pp. 463–464).
Holton, D. L. (2010). Constructivism + embodied cognition = enactivism: theoretical and
practical implications for conceptual change. In AERA 2010 Conference.
Hornecker, E., & Buur, J. (2006). Getting a grip on tangible interaction. In Proceedings
of the SIGCHI conference on Human Factors in computing systems - CHI ’06.
Howison, M., Trninic, D., Reinholz, D., & Abrahamson, D. (2011). The mathematical
imagery trainer: From Embodied Interaction to Conceptual Learning. In
Proceedings of the 2011 annual conference on Human factors in computing systems
- CHI ’11. doi:10.1145/1978942.1979230
Hummels, C., & van Dijk, J. (2014). Seven Principles to Design for Embodied
Sensemaking. In Proceedings of the Ninth International Conference on Tangible,
Embedded, and Embodied Interaction - TEI ’14. doi:10.1145/2677199.2680577
Ionescu, T., & Vasc, D. (2014). Embodied Cognition: Challenges for Psychology and
Education. Procedia - Social and Behavioral Sciences, 128, 275–280.
Ishii, H. (2008). Tangible bits: beyond pixels. In Proceedings of the 2nd international
conference on Tangible and Embedded Interaction (TEI ’08).
Johnson-Glenberg, M. C., Birchfield, D. A., Tolentino, L., & Koziupa, T. (2014).
Collaborative embodied learning in mixed reality motion-capture environments:
Two science studies. Journal of Educational Psychology, 106(1), 86–104.
Kelliher, A., Birchfield, D., Campana, E., Hatton, S., Johnson-Glenberg, M., Martinez,
C., … Uysal, S. (2009). SMALLab: A mixed-reality environment for embodied and
mediated learning. In MM’09 - Proceedings of the 2009 ACM Multimedia
Conference, with Co-located Workshops and Symposiums (pp. 1029–1031).
Kuzuoka, H., Yamashita, N., Kato, H., Suzuki, H., & Kubota, Y. (2014). Tangible Earth:
Tangible Learning Environment for Astronomy Education. In Proceedings of the
second international conference on Human-agent interaction (pp. 23–27).
Kynigos, C., Smyrnaiou, Z., & Roussou, M. (2010). Exploring rules and underlying
concepts while engaged with collaborative full-body games. In Proceedings of the
9th International Conference on Interaction Design and Children (p. 222).
Lahey, B., Burleson, W., Jensen, C. N., Freed, N., & Lu, P. (2008). Integrating video
games and robotic play in physical environments. In Proceedings of the 2008 ACM
SIGGRAPH symposium on Video games - Sandbox ’08 (Vol. 1, p. 107).
Li, Q. (2012). Understanding enactivism: a study of affordances and constraints of
engaging practicing teachers as digital game designers. Educational Technology
Research and Development, 60(5), 785–806. doi:10.1007/s11423-012-9255-4
Lindgren, R., Tscholl, M., & Moshell, J. M. (2013). MEteor: Developing Physics
Concepts Through Body- Based Interaction With A Mixed Reality Simulation. In
Physics Education Research Conference - PERC ’13 (pp. 217–220).
Lyons, L., Silva, B. L., Moher, T., Pazmino, P. J., & Slattery, B. (2013). Feel the burn:
Exploring Design Parameters for Effortful Interaction for Educational Games. In
Proceedings of the 12th International Conference on Interaction Design and
Children - IDC ’13 (pp. 400–403).
Malinverni, L., López Silva, B., & Parés, N. (2012). Impact of Embodied Interaction on
Learning Processes: Design and Analysis of an Educational Application Based on
Physical Activity. In Proceedings of IDC ’12 (pp. 60–69).
Marshall, P., Price, S., & Rogers, Y. (2003). Conceptualising tangibles to support
learning. In IDC ’03: Proceedings of the 2003 conference on Interaction design and
children (pp. 101–109). doi:10.1145/953536.953551
Mickelson, J., & Ju, W. (2011). Math Propulsion: Engaging Math Learners Through
Embodied Performance & Visualization. In Proceedings of the fifth international
conference on Tangible, embedded, and embodied interaction - TEI ’11 (p. 101).
Moher, T., Hussain, S., Halter, T., & Kilb, D. (2005). Roomquake: embedding dynamic
phenomena within the physical space of an elementary school classroom. In
Proceedings of ACM CHI 2005 Conference on Human Factors in Computing
Systems (Vol. 2, pp. 1665–1668). doi:10.1145/1056808.1056992
MoraGuiard, J., & Pares, N. (2014). Child as the measure of all things: the body as a
referent in designing a museum exhibit to understand the nanoscale. In IDC ’14.
Nakayama, T., Adachi, T., Muratsu, K., Mizoguchi, H., Namatame, M., Sugimoto, M., …
Takeda, Y. (2014). Human SUGOROKU: Learning Support System of Vegetation
Succession with Full-body Interaction Interface. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’14) (pp. 2227–2232).
Novellis, F., & Moher, T. (2011). How Real is “Real Enough”? Designing Artifacts and
Procedures for Embodied Simulations of Science Practices. In Proceedings of the
10th International Conference on Interaction Design and Children (pp. 90–98).
O’Malley, C., & Fraser, S. (2004). Literature review in learning with tangible
technologies.
Oullier, O., de Guzman, G. C., Jantzen, K. J., Lagarde, J., & Kelso, J. A. S. (2008). Social
coordination dynamics: measuring human bonding. Social Neuroscience, 3(2).
Paul, F. C., Goh, C., & Yap, K. (2015). Get Creative With Learning: Word Out! A Full
Body Interactive Game. In Proceedings of the 33rd Annual ACM Conference
Extended Abstracts on Human Factors in Computing Systems - CHI EA ’15.
Plaisant, C., Carr, D., & Shneiderman, B. (1995). Image-browser taxonomy and
guidelines for designers. IEEE Software, 12(2), 21–32.
Plass, J. L., Moreno, R., & Brünken, R. (2010). Cognitive Load Theory. Cambridge
University Press.
Plass, J. L., O’Keefe, P. A., Homer, B. D., Case, J., Hayward, E. O., Stein, M., & Perlin,
K. (2013). The impact of individual, competitive, and collaborative mathematics
game play on learning, performance, and motivation. Journal of Educational
Psychology, 105(4), 1050–1066. doi:10.1037/a0032688
Pouw, W. T. J. L., van Gog, T., & Paas, F. (2014). An Embedded and Embodied
Cognition Review of Instructional Manipulatives. Educational Psychology Review,
26(1), 51–72. doi:10.1007/s10648-014-9255-5
Price, B. A., Baecker, R. M., & Small, I. S. (1993). A Principled Taxonomy of Software
Visualization. Journal of Visual Languages & Computing, 4(3), 211–266.
Price, S. (2008). A representation approach to conceptualizing tangible learning
environments. In Proceedings of the 2nd international conference on Tangible and
embedded interaction TEI 08 (p. 151). doi:10.1145/1347390.1347425
Price, S., & Jewitt, C. (2013). A multimodal approach to examining “embodiment” in
tangible learning environments. In Proceedings of TEI ’13 (pp. 43–50).
Price, S., Rogers, Y., Scaife, M., Stanton, D., & Neale, H. (2003). Using tangibles to
promote novel forms of playful learning. Interacting with Computers, 15(2), 169–
185. doi:10.1016/S0953-5438(03)00006-7
Rieser, J. J., Garing, A. E., & Young, M. F. (1994). Imagery, Action, and Young
Children’s Spatial Orientation: It's Not Being There That Counts, It's What One Has
in Mind. Child Development, 65(5), 1262. doi:10.2307/1131498
Rikić, M. (2013). Buildasound. In Proceedings of the 7th international conference on
Tangible, embedded, and embodied interaction - TEI ’13 (pp. 395–396).
Robinett, W. (1992). Synthetic Experience: A Taxonomy, Survey of Earlier Thought, and
Speculations on the Future. Technical report.
Rogers, Y., Scaife, M., Gabrielli, S., Smith, H., & Harris, E. (2002). A conceptual
framework for mixed reality environments: Designing novel learning activities for
young children. Presence, 11(6), 677–686.
Rohrer, T. (2007). The body in space: Dimensions of embodiment. In Body, language
and mind (pp. 339–378).
Schweikardt, E., & Gross, M. (2008). The robot is the program: interacting with
roBlocks. In Proceedings of the second international conference on Tangible,
embedded, and embodied interaction - TEI ’08 (pp. 167–168).
Shapiro, L. (2010). Embodied Cognition. Routledge.
Shelley, T., Lyons, L., Zellner, M., & Minor, E. (2011). Evaluating the Embodiment
Benefits of a paper-based TUI for Spatially Sensitive Simulations. In Extended
Abstracts of the 2011 Conference on Human Factors in Computing Systems (p.
1375). doi:10.1145/1979742.1979777
Skulmowski, A., Pradel, S., Kühnert, T., Brunnett, G., & Rey, G. D. (2016). Embodied
learning using a tangible user interface: The effects of haptic perception and
selective pointing on a spatial learning task. Computers & Education, 92-93, 64–75.
Streeck, J., Goodwin, C., & LeBaron, C. (2011). Embodied interaction: language and
body in the material world. Embodied Interaction Language and Body in the
Material World, 1–28.
Tolentino, L., Savvides, P., & Birchfield, D. (2010). Applying game design principles to
social skills learning for students in special education. In Proceedings of FDG ’10.
Wei, C., Chen, H., & Chen, N. (2015). Effects of Embodiment-Based Learning on
Perceived Cooperation Process and Social Flow. In 7th World Conference on
Educational Sciences (pp. 608–613). Elsevier B.V.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review,
9(4), 625–636. doi:10.3758/BF03196322
Wyeth, P. (2008). How Young Children Learn to Program With Sensor, Action, and
Logic Blocks. Journal of the Learning Sciences, 17(4), 517–550.
Wyeth, P., & Purchase, H. C. (2002). Tangible programming elements for young
children. In CHI ’02 extended abstracts on Human factors in computing systems -
CHI '02 (p. 774). doi:10.1145/506443.506591
Yannier, N., Koedinger, K. R., & Hudson, S. E. (2013). Tangible collaborative learning
with a mixed-reality game: Earthshake. Artificial Intelligence in Education.
Yap, K., Zheng, C., Tay, A., Yen, C.-C., & Do, E. Y.-L. (2015). Word out! Learning the
Alphabet through Full Body Interactions. In Proceedings of the 6th Augmented
Human International Conference on - AH ’15 (pp. 101–108).
Zaman, B., Vanden Abeele, V., Markopoulos, P., & Marshall, P. (2012). Editorial: The
evolving field of tangible interaction for children: The challenge of empirical
validation. Personal and Ubiquitous Computing, 16, 367–378.
Ziemke, T. (2002). What’s that Thing Called Embodiment? In Proceedings of the 25th
Annual meeting of the Cognitive Science Society (pp. 1305–1310).