
Exploring How Role and Background Influence Through Analysis of Spatial Dialogue in Collaborative Problem-Solving Games


Abstract

This study examines how different roles and background knowledge transform players’ dyadic conversations into spatial dialogues in a virtual cellular biology game. Cellverse is a collaborative virtual reality (VR) game designed to teach cell biology. Players work in pairs, assuming the role of either a Navigator, with reference material and a global view through a tablet, or an Explorer, with a more detailed interactive view of the cell through a VR headset and hand controllers. The game is designed so players must collaborate in order to complete the game. Our results show that roles influenced their reference perspectives at a level of statistical significance. Furthermore, players with high prior knowledge tried to reduce their partner’s mental effort by giving spatial information from their point of view, thus producing fewer occurrences of spatial unawareness. Results of this study suggest that designers can build in different roles and leverage different background knowledge to prompt effective partnerships during collaborative games.
Cigdem Uz-Bilgin & Meredith Thompson & Melat Anteneh
© Springer Nature B.V. 2020
Keywords: Spatial dialogue · Collaborative games · Virtual reality · Prior knowledge · Science education
Collaborative problem-solving (CPS) is an essential skill in
education and the workplace, yet CPS skills are challenging to
develop and to assess (Fiore et al. 2017). Virtual simulations
may provide an avenue for both developing and measuring
twenty-first century skills such as CPS. Immersive virtual re-
ality (VR) has the potential to create a shared environment that
could enable users to collaborate and evaluate their perfor-
mances (Ens et al. 2019). An important step in CPS is being
able to understand the problem and establish a common lan-
guage (Duncan and West 2018; Sawyer 2017). This study sets
up a CPS scenario where pairs of players collaboratively ex-
plore a shared environment from two different viewpoints: an
Explorer embedded within a cell and a Navigator who has a
more global, yet less detailed view. Having clearly defined
roles and a range of expertise helps foster positive interdependence among team members (Johnson et al. 1991), wherein successful completion of the activity requires a joint effort (Weber and Kim 2015). Players must find ways to describe
the strange and unfamiliar environment of a human lung cell
to a partner when neither partner has a full understanding of
what the other one can see. In this study, we focus on how
these players establish a shared understanding through the conversations about the environment during the VR-based game.
VR is a simulated environment that creates realistic expe-
riences by providing users with regular sensory feedback
(Johnson et al. 2016). VR has been used for education and
training in a wide variety of fields including medicine (Abe
et al. 2018), education (Freina and Ott 2015), gaming, and
entertainment (Liszio and Masuch 2016), and to assist in spa-
tial learning (León et al. 2018; Tascón et al. 2017). A properly
designed VR environment can be capable of evoking a sense
of presence and immersion in users interacting with and with-
in it (Sherman and Craig 2018). Spatial presence, the percep-
tion of existing within a space, and spatial immersion, the
feeling of being physically present in a space, are important
contributors to users' enjoyment of VR environments (Shafer
et al. 2019). When implemented correctly, VR environments
can generate levels of engagement that surpass those of tradi-
tional screen-based content (Zaman et al. 2015). The degree to
which users are immersed in the environment and feel that
they are physically present depends on several factors including how realistically users are represented in the environment (e.g., avatars) and what equipment is used to interact with the environment (e.g., head-mounted displays) (Seibert and Shafer 2017; Zaman et al. 2015). The majority of VR setups utilize a headset and some sort of handheld controller used to navigate within and interact with the environment (Hahn
* Cigdem Uz-Bilgin, The Education Arcade, Massachusetts Institute of Technology, Cambridge, MA, USA
Published online: 14 September 2020
Journal of Science Education and Technology (2020) 29:813–826
Witmer and Singer (1998) divide spatial presence in VR
into three contributing (control, sensory, and realism) and one
detracting (distraction) components. Although spatial pres-
ence is typically measured using subjective measures such
as scales and questionnaires, researchers have also developed
some performance and psychological ways to measure spatial
presence (Laarni et al. 2015). Minimizing distractions and
maximizing users' ability to selectively focus on the environment have been found to improve spatial presence (Hite et al. 2019; Tussyadiah et al. 2017). Hite et al. (2019) found that spatial acuity, spatial rotation ability, and understanding of angular geometry all contributed to students' feelings of presence during a VR science lesson. Coxon et al. (2016) found a correlation between spatial presence and self-reported visuospatial imagery as measured by the MEC-SPQ's visual spatial imagery scale.
Many of the same strategies used when navigating in the
real world can be applied to virtual environments. Easily nav-
igable spaces, virtual or otherwise, are designed in such a way
that facilitates users' spatial orientation, the ability to determine one's position and heading towards a destination in an
environment (Pietropaolo and Crusio 2012). The inability to
locate oneself in an environment or recognize landmarks, also
known as spatial disorientation or spatial unawareness, can be
assessed in VR settings (Kober et al. 2013). Similar to CPS,
effective collaboration in a VR setting requires users to estab-
lish a shared perspective either physically or verbally
(Pouliquen-Lardy et al. 2016). However, many natural phys-
ical motions that are normally used to indicate directions or
objects (e.g., pointing) are not easily transferred through hand
controllers into a virtual environment, resulting in a loss of
information (Giusti et al. 2012).
Spatial dialogue is information communicated by collabo-
rators to establish a common mental representation of the en-
vironment (Pouliquen-Lardy et al. 2016). Spatial dialogue can
also communicate explicit spatial information such as spatial
presence, spatial unawareness, spatial orientation, navigation,
and viewpoint information. Zaman et al. (2015) found that
participants performing a collaborative VR task communicat-
ed better when they were able to establish a shared perspective
with their partner. The study also showed that uttered phrases
were typically categorized as being either egocentric (from the
viewpoint of the speaker) or exocentric (from a viewpoint
other than that of the speaker). Pouliquen-Lardy et al. (2016)
further divided these classifiers into five codes: neutral (independent of viewpoint), ego-centered (egocentric), addressee-centered (from the viewpoint of the listener), object-centered (from the viewpoint of a reference object), and other-centered (from the viewpoint of more than one reference). In their study, participants were split into groups of two, each with one manipulator and one guide, in order to
observe remote collaboration in VR. Manipulators used sig-
nificantly more ego-centered language while guides used pri-
marily addressee-centered language that required their part-
ners to make more mental rotations. Utilizing different view-
points requires different levels of cognitive effort (Pouliquen-
Lardy et al. 2016; Schober 1996). Giving and receiving spatial
information that is not ego-centered takes more time and re-
quires higher mental workload, especially in the case of
addressee-centered representation (Pouliquen-Lardy et al.
2016; Schober 1996). Speakers who consider the information
and speak from the perspective of the listener during commu-
nication minimize the overall mental workload for listeners
(Duran et al. 2011; Pouliquen-Lardy et al. 2016; Schober 1996).
Through this study, we sought to understand how individ-
uals collaborating in a cross-platform (VR to tablet) collabo-
rative game communicate spatial information by examining
their dialogue during collaboration. This is a novel study that
explores spatial dialogues in a VR collaborative educational
cell biology game that is designed so that players with different roles must collaborate in order to complete the game. This
study shows that a collaborative VR game might be an effective way to understand individuals' spatial information processing. The study aims to examine how players discuss the cell as a spatial domain. Moreover, this study aims to fill the gap in the literature by exploring how prior content knowledge impacts players' dialogue during gameplay. Specifically,
the present study poses the following research questions:
1. How do individuals communicate spatial information during a role-playing, cross-platform collaborative game?
2. How does an individual's role and prior knowledge in
biology impact their dialogue during gameplay?
The participants were 8 pairs of individuals (16 students total) who were observed while playing the game. The intended audience for the game is high school students; however, in this initial study researchers sampled slightly below (middle school students)
and slightly above (recent high school graduates) the
intended audience. This divergent sampling strategy is
well suited to an exploratory study (Bickman and Rog
2008, p. 92), as it shows a wide range of responses to the
game. The participants included four middle school (MS)
students, four high school (HS) students, and eight stu-
dents in their first semester of a Biotechnology Workforce
Program (BWP). The BWP program is a post-high school
workforce development program that prepares students to
work in biotechnology laboratories. The middle school
students had taken 1 year of life science; the high school
students had taken life science in middle school and high
school biology, and the BWP participants had taken high
school biology and were focused on biology in their pro-
gram. The focus of the game was understanding the cel-
lular environment. Background knowledge of participants
their drawing, and asking them about their past and cur-
rent biology courses. In our research, we have collected
hundreds of drawings of cells, and have found that these
drawings and interviews are the best way to determine a
holistic view of individuals' level of understanding and
recall of cells (Wang et al., 2019). After these analyses,
researchers included 8 BWP students with high prior
knowledge and 8 middle/high school students with low
prior knowledge in this study. Students were paired ac-
cording to their prior knowledge and their relationship
level. Table 1 and Table 2 contain participants' gender,
relationship level, VR experience, prior knowledge, and
other demographic information about participants.
The Game Cellverse
Goal of the Game and Background
Cellverse is a collaborative VR game designed to teach cell biology, as shown in
Fig. 1. The goal of the game is to learn about cells by looking
for clues in the cell in order to diagnose the type of cystic
fibrosis in the cell. Cystic fibrosis is a genetic disease where
individuals have a malformed protein called cystic fibrosis
transmembrane conductance regulator (CFTR). Malformed
CFTR prohibits the exchange of ions across the cell
membrane in special types of lung cells called ionocytes
(Montoro et al. 2018); this faulty ion exchange prevents tiny
hairs called cilia from moving back and forth to sweep mucus
out of the lungs. This same CFTR malfunction can be caused
by a few different genetic mutations, which manifest in the
CFTR proteins either as truncated proteins, misfolded pro-
teins, too few proteins on the membrane, or no proteins at all
(CF Foundation 2019). During the game, players review back-
ground information to figure out what types of clues they
should search for in the environment, and then search for those clues collaboratively to reach the game goal of diagnosing the cell.
Roles and Resources
Cellverse is a collaborative game purposefully designed for two players. The roles of Navigator
and Explorer are linked to the mode of technology used in
the game (Wang et al., 2019). The Explorer views the
virtual cell through a VR HMD. VR is well suited for
providing a detailed, interactive view of the cell, and
allowing the player to develop spatial awareness for the
cell. The Explorer view includes limited text, in the form
of "just in time" information about the organelle being
selected as shown in Fig. 2. The Navigator views the
virtual cell through a tablet, which allows the player to
have a holistic view of the cell and to access text and
image-based reference material about CFTR, about organelles, and about possible therapies to address the patient's
CF type, as shown in Fig. 3. The only way to achieve the
goal of diagnosing the cell is for the pair to share infor-
mation with each other so they can understand what clues
to look for (Navigator shares with Explorer) and to find
them in the virtual cell (Explorer shares with the
Navigator), verify that the clues match the CF type (both
players jointly), and select an appropriate therapy (both
players jointly). Cellverse requires players to learn or discover the locations of organelles and their connections, and to use navigation and orientation to understand them all during gameplay. Distributing information across two platforms,
Table 1 Gender, relationship level, VR experience, and demographic information for middle and high school student study participants

Pair   Role       Gender  Year in school  Gamer?  Prior VR     Relationship       Years of relationship  Prior knowledge
Pair 1 Navigator  Female  11th grade      No      No           Very good friends  Many years             Low
Pair 1 Explorer   Female  11th grade      No      Once         Very good friends  Many years             Low
Pair 2 Navigator  Female  9th grade       No      No           Sister             Entire life            Low
Pair 2 Explorer   Female  10th grade      No      No           Sister             Entire life            Low
Pair 3 Explorer   Male    8th grade       Yes     Once         Very good friends  Many years             Low
Pair 3 Navigator  Male    8th grade       Yes     Maybe twice  Very good friends  Many years             Low
Pair 4 Explorer   Male    8th grade       Yes     No           Very good friends  Many years             Low
Pair 4 Navigator  Male    8th grade       Yes     No           Very good friends  Many years             Low
VR and tablet, requires players to understand how their information is similar to or different from their partner's perspective of the environment, and to communicate spatial information.
There are also two scales in the game: nano view and macro view (see Figs. 4 and 5). Cellverse allows students to shift
between these two scales, enabling users to compare spatial
relationships of the objects.
Study Procedure
Players were informed that the objective of Cellverse was to
work together to figure out what is wrong with the cell and
that they had 40 min of time to play the game. Participants
decided who would take on the role of the Explorer and who
would take on the role of the Navigator. The pair discussed
who wanted to be in VR and who wanted to use the tablet, and
came to the decision without influence from the researcher.
They did not switch the roles while playing the game, but at
the end of the game they had the opportunity to try different
roles and technologies. Instructions on the VR headset, con-
trollers, tablet, and key points of the game were introduced
before the game started. After they were set up with their
respective technology (headset and controllers or tablet), each
player learned basic features associated with their view
through an individual tutorial. The tutorial introduces the
players to the capabilities they have in the game. The player
in virtual reality begins in a vesicle, which is a small fluid-
filled sac or vacuole. The vesicle is a less dense environment
containing only some RNA and fluid; this allows the player to
focus on learning how to use the hand controllers and move
around the cell. The player on the tablet does a tutorial that
introduces the player to the menus available to them: informa-
tion about cystic fibrosis, information about organelles, and
the ability to pinch to zoom out and the ability to rotate the
cell. After completing the tutorials, the players started the
game. They played the game side by side and were able to
talk with one another throughout the experience (see Fig. 1). A
member of the research team provided technical support and
answered technical questions during gameplay, but players
were not given extra information about the game content.
The teams played until they either finished the game or reached
40 min of gameplay. During gameplay, players were recorded
with a video camera and lapel microphones. Dialogues be-
tween team members were transcribed and analyzed.
Data Analysis
The unit of analysis for this study was the approximately
40 min of dialogue between the two players. Conversations
(dialogue between the two players) were transcribed by the
research team while watching videos. Each researcher
watched two videos and transcribed, then checked two tran-
scripts by watching the videos and listening to the conversa-
tions that were transcribed by the other researchers.
Researchers read the transcripts and highlighted the segments
related to spatial context. Codes were developed from the
literature (etic) and from a review of the patterns of dialogue
in the transcriptions (emic) (Miles and Huberman 1994). The
Table 2 Gender, relationship level, VR experience, and demographic information for study participants from the Biotechnology Workforce Development Program

Pair   Role       Gender  Year in school  Gamer?  Prior VR     Relationship                        # years           Prior knowledge
Pair 5 Navigator  Female  13th grade      No      No           Acquaintances (met in GBA program)  Less than a year  High
Pair 5 Explorer   Male    13th grade      No      Once         Acquaintances (met in GBA program)  Less than a year  High
Pair 6 Navigator  Male    13th grade      No      Once         Acquaintances (met in GBA program)  Less than a year  High
Pair 6 Explorer   Male    13th grade      Yes     A few times  Acquaintances (met in GBA program)  Less than a year  High
Pair 7 Explorer   Male    13th grade      No      Once         Acquaintances (met in GBA program)  Less than a year  High
Pair 7 Navigator  Female  13th grade      No      No           Acquaintances (met in GBA program)  Less than a year  High
Pair 8 Explorer   Male    13th grade      No      No           Knew in high school                 A few years       High
Pair 8 Navigator  Female  13th grade      No      No           Knew in high school                 A few years       High
Fig. 1 A pair of players during gameplay, showing the Navigator
viewing the cell through a tablet, and the Explorer with a VR HMD and
hand controllers
category of navigation direction codes (etic) was based in
part on those from the study conducted by Pouliquen-Lardy
et al. (2016). These codes were related to giving navigation
direction: neutral, ego-centered, addressee-centered, and ob-
ject-centered. The emic codes were developed from reviewing
the data in light of the research questions. Codes included
navigation question, spatial reference, spatial unawareness,
spatial orientation, and reference to a specific viewpoint.
These codes were compiled into a codebook that included
information about the codes, their description, and example
utterances that fit in these codes. Researchers followed the
following steps to create the codebook (Hruschka et al. 2004):
1) Coders 1 and 2 developed a codebook together based on
initial review of the transcriptions and the literature.
2) Coders 1 and 2 coded the same transcript from one
gameplay session separately.
3) Coder 1 and coder 2 discussed and came to a consensus
on problematic codes. For example, coder 2 used the "spatial orientation" code when a player got closer to an organelle or an object in the cell; after discussion, the coders also decided to code "spatial orientation" when players turned around and looked at organelles from different perspectives. After the coders came to a consensus, they
modified the codebook.
Fig. 2 The Explorer's view of the cell, showing a more detailed, interactive view of the virtual cell
Fig. 3 The Navigator's view of the cell, showing reference information about five types of cystic fibrosis in the upper left hand corner, as well as icons for highlighting organelles (microscope) and therapies (Rx bottle) in the lower right hand corner
4) Coders 1 and 2 coded the remaining transcripts separately
(7 conversations).
5) A random subset of transcripts (4 of the 7 conversations) was chosen and checked for consistency between the coders. This subset showed an inter-coder reliability percentage of 92%.
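The 92% figure above is simple percent agreement: the proportion of coding decisions on which both coders made the identical call. A minimal sketch of that computation, using illustrative ratings rather than the study's data:

```python
def percent_agreement(coder1, coder2):
    """Percent agreement: share of coding decisions where both coders
    assigned the same value (1 = code present, 0 = code absent)."""
    assert len(coder1) == len(coder2), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return 100.0 * matches / len(coder1)

# Hypothetical presence/absence decisions for one code across 8 utterances.
coder1 = [1, 0, 1, 1, 0, 0, 1, 0]
coder2 = [1, 0, 0, 1, 0, 0, 1, 1]
agreement = percent_agreement(coder1, coder2)  # 6 of 8 match -> 75.0
```

Note that simple agreement does not correct for chance; measures such as Cohen's kappa are often reported alongside it for that reason.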
The codebook that describes the codes and gives examples
of utterances is below (also see Table 3):
a) Navigation directions: utterances that are about giving
directions while navigating in VR:
i) Neutral: directions are independent of viewpoint (e.g., "Go to the ribosome").
ii) Ego-centered: directions are from the speaker's viewpoint (e.g., "The ER is in front of me").
iii) Addressee-centered: directions are from the listener's viewpoint (e.g., "Nucleus is on your right").
iv) Object-centered: directions are from the viewpoint of a reference object (e.g., "Look at the organelle that is in front of the ER").
b) Navigation question: utterances that are questions related to navigation (e.g., "Where am I going now?", "Where should I go?")
c) Spatial reference: utterances that reference an absolute or relative location in the environment (when players find out where they are, identify organelles present in the environment, are aware of objects surrounding them, etc.; e.g., "Now I see, Golgi Body, here it is")
d) Spatial unawareness: utterances about the inability to locate oneself in the environment (e.g., "Where am I? What is surrounding me?")
e) Spatial orientation: utterances about the ability to determine one's position and where one is heading in the environment (getting closer to organelles, turning around and looking from different perspectives, etc.; e.g., "I am zooming in", "I am getting closer")
f) Viewpoint: utterances about either the Navigator's or the Explorer's point of view, indicating that one or the other player recognizes that they have different views of the cell or that they can see different things (e.g., "Do you see me? Do you see where am I pointing?")
Transcripts of conversations were imported into spread-
sheets with one row for each utterance per player, and a sep-
arate column for each code. Each utterance was coded for
every code, either by marking a "1" in the code column if the code was present or a "0" if it was absent. This data format allowed us to analyze the frequencies of and relationships between codes in the conversation, and also to model the conversation as a network of interrelated ideas through Epistemic Network Analysis (ENA) (Shaffer 2017; Shaffer et al. 2016). A network consists of nodes (objects or ideas) and relationships between nodes (ties or connections); in ENA, the nodes represent the coded data (i.e., stages and themes) and the relationships between nodes "indicate when two codes occur within the same segmentation of time" (Fisher et al. 2016, p. 1). Each code column is represented by a node on the network graph. During ENA, the
researcher establishes a moving frame, consisting of a set of
rows of utterances, and tracks how ideas are connected during
that segment of the conversation. This enables the analysis to
consider not only the topics of conversation but also the num-
ber of times the topics are discussed. We defined conversa-
tions as all lines of data aligned with a single value of speaker
type (Navigator vs. Explorer) subsetted by pairs (collabora-
tors). For example, one conversation consisted of all the lines
aligned with one Navigator and Explorer pair.
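The moving-frame accumulation described above can be sketched in a few lines. This is a deliberate simplification of ENA that only counts raw code co-occurrences within a sliding window of utterances; the code names, window size, and coded rows are hypothetical, and ENA's normalization and dimensional reduction steps are omitted:

```python
from itertools import combinations

CODES = ["SR", "Unawareness", "Orientation", "Neutral",
         "Ego", "Addressee", "Viewpoint"]

def cooccurrences(rows, window=3):
    """For each unordered pair of codes, count the moving frames
    (windows of consecutive utterances) in which both codes appear."""
    counts = {pair: 0 for pair in combinations(CODES, 2)}
    for start in range(len(rows)):
        frame = rows[start:start + window]
        present = {code for row in frame for code, v in row.items() if v}
        for pair in combinations(CODES, 2):
            if pair[0] in present and pair[1] in present:
                counts[pair] += 1
    return counts

# Hypothetical coded utterances: one dict per row, 1 = code present.
rows = [
    {"SR": 1, "Viewpoint": 1},   # "Do you see that?"
    {"SR": 1},                   # "Golgi body, here it is"
    {"Unawareness": 1},          # "Where am I?"
    {"SR": 1, "Neutral": 1},     # "Go to the ribosome"
]
counts = cooccurrences(rows, window=3)
```

Codes that co-occur often within the same frame end up with heavy ties, which is what the node-and-tie diagrams in the Results section visualize.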
We first conducted a chi-square test for independence to
investigate the association between spatial codes and back-
ground knowledge (middle/high vs. BWP students) or role
of the player (Navigator or Explorer). This test is used to compare observed frequencies in each categorical variable (background knowledge and role of the player) (Pallant 2007). Moreover, we compared spatial utterances by applying ENA to our data.
Fig. 4 Micro view in the game
Fig. 5 Nano-scope view in the game
We used ENA to further investigate the
relationship between the players' background knowledge
and role and the way they discussed spatial ideas during the
conversation. Our ENA model included the following spatial
codes: Spatial Reference, Spatial Unawareness, Spatial
Orientation, Neutral, Ego-centered, Addressee-centered, and
Viewpoint. We removed the codes with insufficient frequency (fewer than 5 occurrences) from the ENA analysis (object-centered, other-centered), because their relationships could not be depicted.
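A chi-square test of independence of this kind compares observed counts in a 2 × 2 contingency table (e.g., role × presence of a code) against the counts expected if the two variables were independent. A hand-rolled sketch with made-up counts, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]]: rows are groups, columns are code present/absent."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_sums = (a + b, c + d)
    col_sums = (a + c, b + d)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: Navigator vs. Explorer utterances with/without
# one code; df = (2 - 1) * (2 - 1) = 1, critical value 3.84 at p = .05.
table = [[60, 647],
         [25, 686]]
statistic = chi_square_2x2(table)
```

In practice a library routine such as `scipy.stats.chi2_contingency` would also return the p-value and apply continuity corrections where appropriate.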
Chi-Square Test for Independence Results
A total of 1418 utterances were produced by Navigators (707
utterances) and Explorers (711 utterances). There was no sig-
nificant difference in the number of utterances between roles.
Of those 1418 utterances, 1030 were coded using the code-
book. Spatial reference was the code with the highest frequen-
cy among other codes (58%) (see Fig. 6), because spatial
reference was used as a top-level code to identify any time
when players talked about the game environment, thus engag-
ing in spatial dialogue. The second most stated utterance type
was viewpoint (13%), marking when one of the players ac-
knowledged they had a different view of the information than
their partner. The code most often attributed to directions was
"Neutral," with a frequency of 8%. Direction utterances (neutral, ego-centered, addressee-centered, and object-centered) were about giving directions while navigating in VR. Collaborators almost never used object-centered directions. Navigators produced significantly more neutral (χ²(1, N = 1030) = 12.91, p < .05) and addressee-centered (χ²(1, N = 1030) = 9.77, p < .05) utterances. However, Explorers produced significantly more spatial unawareness (χ²(1, N = 1030) = 4.58, p < .05) and ego-centered (χ²(1, N = 1030) = 13.24, p < .05) utterances. For the other codes, there were no significant differences by role.
More total utterances were produced by BWP students
(734 utterances) than MS/HS students (690 utterances). MS/
HS students produced significantly more spatial unawareness
utterances (χ²(1, N = 1030) = 10.32, p < .05). On the other hand, BWP students produced significantly more addressee-centered (χ²(1, N = 1030) = 3.83, p < .05) and viewpoint (χ²(1, N = 1030) = 41.4, p < .05) utterances than MS/HS students (see Fig. 7).
Epistemic Network Analysis Results
Navigator vs. Explorer
ENA network graphs show how frequently each code occurs through the size of its node, and the co-occurrences of codes within the reference frame through the linear connections between nodes.
(SR) is at the center of both graphs because all of these codes
include spatial reference. The axes indicate the highest percent
of variance explained by a dimension. In the present study,
Neutral and spatial unawareness codes are in the positive Y
direction, so conversations that are located more positively
contain more neutral references or more discussions of feeling
lost. Students with a higher frequency of co-occurrences of the codes "neutral" and "spatial unawareness" would have strong connections located in the positive Y direction. ENA explains 34.5% of the variance in coding co-occurrences along the X-axis and 20.7% of the variance on the Y-axis. Along the X-axis, a Mann-Whitney test showed that the overall network of the Explorer (Mdn = 0.99, N = 8) was statistically significantly different from that of the Navigator (Mdn = 1.06, N = 8; U = 4.00, p < .05, r = .88). Individual network diagrams explain what codes are causing this shift; the individual
diagrams are shown in Fig. 8.
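A Mann-Whitney U of this kind can be computed by direct pair counting over the two groups' network positions; the positions below are illustrative values, not the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U by brute-force pair counting (ties count 0.5).
    Returns the smaller of the two U values, as conventionally reported."""
    u_x = 0.0
    for xi in x:
        for yi in y:
            if xi > yi:
                u_x += 1.0
            elif xi == yi:
                u_x += 0.5
    return min(u_x, len(x) * len(y) - u_x)

# Illustrative X-axis network positions for two groups of 8 players each.
explorers  = [-1.2, -1.0, -0.9, -1.1, -0.5, 0.2, -0.8, -1.3]
navigators = [0.9, 1.1, 1.2, 0.8, -0.3, 1.0, 1.3, 0.7]
u = mann_whitney_u(explorers, navigators)
```

A small U relative to the maximum (here 8 × 8 = 64) means the two groups' positions barely overlap, which is the pattern the paper reports for Explorer and Navigator networks.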
The network diagram for the Navigator shows stronger ties
between the codes SR-Viewpoint and SR-Neutral and a weak
connection between SR-Spatial unawareness. The network
diagram for the Explorer shows ties between the codes SR-Viewpoint and SR-Spatial unawareness, and a weak connection between SR-Neutral and
SR-Ego-centered. Viewpoint is a common and frequently
used code both for Explorers and Navigators, because they
tried to figure out whether they were seeing the same things,
or if they could see each other in the environment during the
conversation. A sample excerpt from a conversation is includ-
ed below, with codes in parentheses.
Table 3 Codes and example utterances

Code                   Example utterances
Spatial reference      "So all those red things are ribosomes. The blue things are transfer RNA. Let us see what else."
Spatial unawareness    "What is surrounding me? Where am I?"
Viewpoint              "Oh look! I can see you there!" "Can you see me?"
Navigation directions  Neutral: "I will go to the ribosomes"; Ego-centered: "I am in front of the ER."; Addressee-centered: "It is just in front of you."; Object-centered: "You need to go in front of the ER"
Navigation question    "Where should I go?" "Maybe I should go there?"
NAV1: OK it seems like it works now. OK, so you are still in the nucleus, but I'm going to try and light up the Golgi body. Do you see that? (Viewpoint and SR)
EXP1: Yup.
NAV1: Alright so now I'm going to light up the ER and ribosomes. Should be purple.
EXP1: Yep.
NAV1: Cool. Ummmm. OK I see now. So now I'm going to have you [try].. (SR)
The difference in diagrams between the Navigator and
Explorer can be explained by SR-ego-centered, SR-neutral,
and SR-spatial unawareness utterances. The network graphs
demonstrate that Explorers have stronger connections be-
tween spatial reference and spatial unawareness than
Navigators. This finding shows that students in the VR envi-
ronment talked about spatial unawareness and spatial refer-
ence together. As shown in the dialogue below, while
attempting to identify objects in VR, the Explorer initially
references spatial information in the environment even while
having the feeling of being lost.
EXP1: I'm just trying to get to the nano mode. I kind of like have to find a prompt for it (for going into the nano). I cannot see.. (Spatial unawareness)
NAV1: I'm trying to think
EXP1: Now I see, Golgi body, here it is. (Spatial reference)
According to chi-square analysis, Explorers produced more ego-centered utterances, indicating a preference for their own perspective. The ENA network diagram also showed a connection between ego-centered and spatial reference in utterances by Explorers but not Navigators. In the dialogue below, the Explorer describes his position from his own perspective using an ego-centered utterance, and at the end of the conversation he describes seeing proteins "floating around" him, which indicates spatial referencing.
EXP2: No, I think I'm inside the ribosome. (Ego-centered)
NAV2: You're in the ribosome? Can you see proteins?
EXP2: They're the blue things floating around?
Fig. 6 Frequency of utterances and comparisons of utterances according to the role of the producer (Explorer vs. Navigator), p < .05
Fig. 7 Frequency of utterances and comparisons of utterances according to school type (BWP vs. MS/HS), *p < .05
NAV2: Yeah.
EXP2: Yup, I can see them. (Spatial reference)
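The role comparison above rests on a chi-square test over utterance counts. As a minimal sketch, a 2x2 Pearson chi-square can be computed directly; note that the counts below are invented for illustration only, since the study's actual frequencies are reported in Fig. 6.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: Explorer, Navigator; columns: ego-centered, all other utterances.
# These counts are invented stand-ins, not the study's data.
chi2 = chi_square_2x2(40, 160, 10, 190)
print(round(chi2, 2))  # 20.57; exceeds 3.84, the df = 1 critical value at alpha = .05
```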
Network diagrams showed that Navigators had stronger
connections between a neutral description of the environment
and spatial reference than Explorers. Navigators preferred not
to take a perspective while directing Explorers.
NAV3: Go to the protein. (Neutral)
EXP3: So what's this? It looks like a giant clump of it. Maybe it has to do with something. [Explorer is looking up and focusing on giant clump]
NAV3: So it does not... it's supposed to be, like, kind of neatly folded. It's just...
EXP3: ...like all over the place. And look at these - look at them - [they] are like flying around. Some of them aren't. And so it looks like these are all in clumps. (Spatial reference)
NAV3: Yeah.
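ENA builds its networks by accumulating co-occurrences of codes across nearby utterances. The sketch below shows a simplified version of that accumulation step; the window size and code names are illustrative, and real ENA adds normalization and dimensional reduction that this omits.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(coded_turns, window=4):
    """Count pairwise code co-occurrences within a sliding window of
    utterances. coded_turns is a list of sets, one set of codes per turn."""
    ties = Counter()
    for i in range(len(coded_turns)):
        # Union of all codes appearing in the last `window` turns.
        recent = set().union(*coded_turns[max(0, i - window + 1):i + 1])
        for pair in combinations(sorted(recent), 2):
            ties[pair] += 1
    return ties

# Toy transcript: each set holds the codes assigned to one utterance.
turns = [{"spatial_reference"}, {"spatial_unawareness"},
         {"spatial_reference", "viewpoint"}, set()]
ties = cooccurrence_counts(turns, window=3)
print(ties[("spatial_reference", "spatial_unawareness")])  # 3
```

Stronger ties in the resulting Counter correspond to the thicker edges seen in the network diagrams, such as the Explorer's spatial reference / spatial unawareness connection.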
MS/HS Students vs. BWP Students
To explore how the players' background may influence their spatial dialogue, we also created a network graph for each group of students, one for MS/HS students and one for BWP students. When MS/HS students and BWP students were compared, ENA explained 25.7% of the total variance in coding co-occurrences along the X-axis and 32.3% of the total variance on the Y-axis. Along the X-axis, a Mann-Whitney test showed that the overall network for BWP (Mdn = 2.40, N = 4) was statistically significantly different from the overall network for MS/HS (Mdn = 2.24, N = 4, U = 15.00, p = .05, r = 0.87), suggesting that players' biology background had an effect on dialogue. The individual network diagrams are shown in Fig. 9.
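The Mann-Whitney comparison above operates on each dyad's network position along the X-axis. A minimal sketch of the U statistic, using invented centroid values for the four dyads per group (not the study's data), is:

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: count of pairs where group_a's value
    exceeds group_b's, with ties counted as 0.5. Adequate for the
    tiny samples involved here (N = 4 per group)."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Invented ENA X-axis centroids for illustration; the study reported
# U = 15.00 for its four BWP and four MS/HS dyads.
bwp = [2.51, 2.40, 2.38, 2.30]
mshs = [2.29, 2.24, 2.20, 2.10]
print(mann_whitney_u(bwp, mshs))  # 16.0 (the maximum for two groups of four)
```

With only four dyads per group, an exact (permutation-based) p value is appropriate, which is presumably why the reported p sits right at the .05 boundary.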
The network diagram for the BWP students shows ties to the codes SR-Neutral, SR-Viewpoint, and SR-Spatial orientation, and weak connections between SR-Addressee-centered and SR-Ego-centered. The diagram for middle/high school students shows ties to the codes SR-Neutral, SR-Spatial unawareness, and SR-Viewpoint (strong), and also connections to the SR-Spatial orientation and SR-Ego-centered codes.
When these diagrams were compared, BWP students had stronger connections between SR-Spatial orientation and SR-Addressee-centered direction than middle/high school students had. In other words, BWP students used different points of view (addressee-centered, ego-centered, and neutral) while discussing objects in the environment with their partners. In the example dialogue of BWP students, the Navigator is trying to help the Explorer by taking his perspective (coded as addressee-centered). This was a pattern among the BWP students; they tried to understand the Explorer's viewpoint in order to support their partners while looking for organelles or clues in the environment.
BWP Students:
NAV1: OK so looks like you are in the nucleus again. So that's kind of where I wanted you to be. So I want you to look at that. I'm going to turn some of the proteins on. Look at the Golgi apparatus in front of you. Look at the proteins and Golgi body.
Fig. 8 Individual network diagrams for Navigator and Explorer
EXP1: Yup. So those are the Golgis, those are the rough ER. I see. (Spatial reference)
NAV1: OK, do you see [anything]... looking around that seems off to you?
EXP1: Ummmmm... I see well there's a bunch of like little ribosomes in the... (Spatial reference)
NAV1: Purple or pink? [referring to organelle colors]
EXP1: In the purple.
NAV1: OK I'll turn the Golgi body off so you are not confused and so you can see better.
On the other hand, MS/HS students had stronger connections between spatial reference and spatial unawareness than BWP students. MS and HS students expressed unawareness of their location, rather than using different points of view to support each other's navigation processes. As seen in the example below, both Navigator and Explorer are having difficulty understanding the virtual environment, thus showing spatial unawareness.
MS/HS Students:
EXP4: What, it's disappearing. (Spatial unawareness)
NAV4: What are you trying to find? [NAV looks at ...]
EXP4: I do not know. (Spatial unawareness)
NAV4: Okay.
EXP4: Okay, I'm just like floating here. I do not know what I'm supposed to do. Oh, woah. Well. I clicked something. (Spatial unawareness)
Discussion

Participants who played Cellverse faced a few challenges. They had to understand the dynamic and novel environment of a three-dimensional cell. They had to decipher an ill-defined problem space, determine the resources available to them, figure out which clues were associated with the type of cystic fibrosis (the role of the Navigator), seek those clues in the cellular environment (the role of the Explorer), and communicate this information in order to diagnose the cell. Each of these challenges required the players to develop a way of communicating about a novel environment. In observing how players approached these challenges, we found that different roles and different levels of biology knowledge influenced their conversations during the game.
Players discussed the environment from a number of perspectives. When dialogues between pairs were analyzed, we found utterances that could be categorized as spatial reference, spatial unawareness, spatial orientation, navigation directions (neutral, ego-centered, addressee-centered, and object-centered), viewpoint, and navigation questions. While the codes for navigation direction (neutral, ego-centered, addressee-centered, and object-centered) already existed in the literature, spatial reference, spatial unawareness, and spatial orientation were novel codes identified in this study. Viewpoint and spatial reference were the most frequently used utterances by both Explorers and Navigators.
While playing the game, both Navigators and Explorers tried to figure out whether they were seeing the same things, or if they could see each other in VR, which is consistent with the results of Giusti et al. (2012). We did not provide players with any clues before the game began; as a result, collaborators spent the first part of their gameplay figuring out their viewpoints. The different information provided to the players created a necessity for collaboration, resulting in positive interdependence (Johnson et al. 1991). After understanding their partner's viewpoint, players could take different perspectives while referencing objects in the space, which is important for navigation and wayfinding (Miniaci and De Leonibus 2018). We found
that the reference perspective used by Explorers and Navigators did vary significantly. Explorers used more ego-centered references, referring to their own perspective, which is not surprising as the HMD put the Explorer at the center of the action. On the other hand, Navigators used more addressee-centered and neutral references to help Explorers navigate and find clues in the game, which reflects the more global viewpoint of the cell provided on the tablet. Pouliquen-Lardy et al.'s (2016) study also found that the player giving directions used more addressee-centered and neutral references, while the participant moving the virtual objects according to the instructions used more ego-centered references. Although the roles were similar in the two studies, the technology and perspectives differed. In our study, the Navigator views the virtual cell through a tablet, which allows the player to have a holistic view of the cell, and the Explorer views the virtual cell through a VR HMD. VR is well suited for providing a detailed, interactive view of the cell from an ego-centered perspective. However, in Pouliquen-Lardy et al.'s (2016) study, both the guide (Navigator) and manipulator (Explorer) were in VR and had the same perspectives with different reference materials. Regardless of these differences, task distribution produced similar findings in both studies: Navigators/guides are better able to describe the environment from the perspective of the Explorers/manipulators, but Explorers/manipulators are not as able to speak in terms of the Navigator/guide perspective.
This study supports the idea that role or task distribution is an important factor that affects the processing of spatial information. Giving spatial information that is addressee-centered requires taking the point of view of the other player, which
requires a higher mental workload and takes more time than speaking from the player's own perspective (Michelon and Zacks 2006; Pouliquen-Lardy et al. 2016; Schober 1996). Zaman et al. (2015) also explored how pairs used spatial references in a natural and embodied way in a shared-perspective VR. They found that directions were given from the perspectives of the speaker and listener significantly more often than from other frames of reference (object-centered and other-centered). In this study, collaborators almost never used object-centered or other-centered directions. This makes sense, because talking from the viewpoint of others requires more perspective changes and leads to a higher mental workload for both members of the team (Pouliquen-Lardy et al. 2016).
The study found evidence that players felt spatial presence during the game in an authentic way: by reviewing dialogues during gameplay. Prior research on the potential benefits of VR environments has focused on how VR experiences can create a sense of spatial presence in users. A number of studies have explored spatial presence using questionnaires and self-report surveys (Cheng and Tsai 2019; Seibert and Shafer 2017). Moreover, interviews (Garau et al. 2008) and think-aloud protocols (Turner et al. 2003) have also been used for measuring spatial presence, and researchers are still considering different ways of measuring spatial presence (Laarni et al. 2015). Pouliquen-Lardy et al. (2016) asked participants about their feeling of being present in the environment to determine their levels of spatial presence; that study did not find any difference between collaborators. Instead of using scales, questionnaires, or interviews, we analyzed utterances that players produced that give clues about their feeling of spatial presence. This strategy is more authentic than questionnaires and self-report surveys because we are examining participants' actions in the moment, rather than asking for their perception of their actions after they have finished the game. Examining conversations during gameplay revealed a number of spatial references, providing evidence that players were developing a sense of spatial presence within the environment.
Additionally, we found that the role each player took influenced the way they made spatial references. Players in VR used more ego-centered utterances, suggesting that the VR technology became an integrated part of the speaker's viewpoint (Riva and Waterworth 2014) and supporting past findings that feelings of presence can be enhanced with head-mounted displays (Bruder et al. 2009). According to Wirth et al. (2007), spatial presence occurs when the player accepts the medium as his or her primary ego reference frame, because the sensation of being located is connected to the mediated environment. Media, user characteristics, and activities in the environment are all associated with the feeling of being physically situated within the environment (Laarni et al. 2015). In the present study, the HMD that isolates the player from the real environment and the ability to interact with the objects via handheld controllers are both media-specific factors that might affect players' feeling of being there. Interacting with organelles and searching for clues by exploring the virtual environment might also result in feelings of spatial presence. As the interactive aspects of technology improve and the boundary between the technology and reality becomes more seamless, we should see larger gains in spatial presence resulting from experiences in VR (Regenbrecht and Schubert 2002).
Fig. 9 Individual network diagrams for BWP and middle/high school students
We also found that background knowledge influenced how players described the environment. When BWP and MS/HS students were compared, BWP students produced more addressee-centered and viewpoint utterances. On the other hand, MS/HS students produced more utterances related to spatial unawareness than BWP students did. BWP students, with higher prior knowledge about cell biology, the main topic of the game, tried to minimize the overall effort of their collaborators. Familiarity with the game topic might give players a conceptual framework of what to expect in the environment, thereby reducing the effort required to give spatial information from different points of view. In other words, participants with more prior knowledge do not have to focus as hard on the game and can take on the additional mental workload of performing the mental rotations necessary for non-ego-centered dialogue.
Previous studies found that participants' learning and levels of self-efficacy increased when they knew the names and main concepts before performing activities in VR (Meyer et al. 2019). In the present study, BWP students might have had the opportunity to focus on navigating and finding clues to complete the game instead of dealing with new concepts in VR, unlike the MS/HS students. In this respect, familiarity with the game topic might affect players' feeling of being there and the type of spatial information they shared.
Conclusion

Cellverse is a collaborative problem-solving game that gives players the opportunity to establish a shared understanding of the spatial representation of a virtual cell as a three-dimensional environment through their discussion during the game. Players made frequent spatial references during the game while trying to find organelles, exploring the locations of organelles, and giving directions to each other in order to navigate in the environment. We found that different roles and different levels of biology knowledge influenced how players used spatial references in their conversations during the game.
This study makes a twofold contribution to the body of literature. First, this study shows that a collaborative VR game might be an authentic way to understand individuals' spatial information processing. The dialogues we observed aligned with prior research on spatial dialogue; thus we were able to examine how players discussed the cell as a spatial domain. Analyzing dialogue between collaborators with epistemic network analysis gave clues about how pairs shared spatial information. This method does not require recalling experiences, one of the disadvantages of interviews, and avoids distracting players with think-alouds during gameplay. Observing interactions between partners as they experienced the game provides an authentic view into their thinking and actions "in the moment." This method of studying spatial understanding through examining dialogue during collaborative problem-solving could be useful for future research.
Second, findings from this study can guide designers as they think about how to scaffold collaborative cross-platform experiences. Learning how to communicate with others who have different levels of knowledge and different roles is essential to successful collaborative problem-solving. Navigators with low prior knowledge might be supported with additional materials or aids to help them guide the Explorers in VR. For instance, an initial spatial representation of the game environment might be given to Navigators to encourage them to use more addressee-centered perspectives with Explorers. This representation might result in better performance in navigation, more efficient searches for organelles or clues needed during problem-solving in VR, and lower mental workload for Explorers. Explorers could be supported with a map during the game, in order to see where they are. This spatial representation of a cell might be crucial for players with low prior knowledge who do not have adequate ideas about where the organelles are or the general structure of a cell.
Our results demonstrate that players' roles (task and technology distribution) and familiarity with the game topic both have an impact on the context of spatial information shared during gameplay. In the future, different factors can be studied in order to develop a better understanding of spatial processing between collaborators. Gameplay experience, familiarity with VR technology, and spatial abilities including mental rotation ability, spatial reasoning, or object location memory might impact learners' performances. Moreover, Cellverse is a cell biology game in which players need to learn or discover the locations of organelles and their connections, and which requires navigation and orientation to understand them all during gameplay. In the future, different games with different contexts might be used that require processing both spatial information and content knowledge. These games will help us further explore and understand the important domain of collaborative problem-solving.
Funding This material is based upon work supported by Oculus.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflict of interest.
Ethics Approval All procedures performed in studies involving human
participants were in accordance with the ethical standards of the institu-
tional and national research committee. Research on human subjects has
been approved by the Institutional Review Board of Massachusetts
Institute of Technology.
Consent to Participate Informed consent was obtained from all individ-
ual participants included in the study.
References

Abe, T., Raison, N., Shinohara, N., Khan, M. S., Ahmed, K., & Dasgupta, P. (2018). The effect of visual-spatial ability on the learning of robot-assisted surgical skills. Journal of Surgical Education, 75(2).
Bickman, L., & Rog, D. J. (2008). The SAGE handbook of applied social research methods. Sage Publications.
Bruder, G., Steinicke, F., Rothaus, K., & Hinrichs, K. (2009). Enhancing presence in head-mounted display environments by visual body feedback using head-mounted cameras. In 2009 International Conference on CyberWorlds (pp. 43–50). IEEE.
CF Foundation. (retrieved July 2019). Basics of the CFTR protein. CF Foundation.
Cheng, K.-H., & Tsai, C.-C. (2019). A case study of immersive virtual field trips in an elementary classroom: Students' learning experience and teacher-student interaction behaviors. Computers & Education, 140, 103600.
Coxon, M., Kelly, N., & Page, S. (2016). Individual differences in virtual reality: Are spatial presence and spatial ability linked? Virtual Reality, 20(4), 203–212.
Duncan, J., & West, R. E. (2018). Conceptualizing group flow: A framework. Educational Research and Reviews, 13(1), 1–11.
Duran, N. D., Dale, R., & Kreuz, R. J. (2011). Listeners invest in an assumed other's perspective despite cognitive cost. Cognition, 121(1), 22–40.
Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M. (2019). Revisiting collaboration through mixed reality: The evolution of groupware. International Journal of Human-Computer Studies, 131, 81–98.
Fisher, K. Q., Hirshfield, L., Siebert-Evenstone, A., Arastoopour, G., & Koretsky, M. (2016). Network analysis of interactions between students and an instructor during design meetings. In Proceedings of the American Society for Engineering Education. ASEE.
Fiore, S. M., Graesser, A., Greiff, S., Griffin, P., Gong, B., Kyllonen, P.,
& Soulé, H. (2017). Collaborative problem solving:
Considerations for the national assessment of educational progress.
National Center for Educational Statistics.
Freina, L., & Ott, M. (2015). A literature review on immersive virtual reality in education: State of the art and perspectives. In The International Scientific Conference eLearning and Software for Education (Vol. 1, p. 133). Carol I National Defence University.
Garau, M., Friedman, D., Ritter Widenfeld, H., Antley, A., Brogni, A., & Slater, M. (2008). Temporal and spatial variations in presence: Qualitative analysis of interviews from an experiment on breaks in presence. Presence, 17(3), 293–309.
Giusti, L., Xerxes, K., Schladow, A., Wallen, N., Zane, F., & Casalegno,
F. (2012). Workspace configurations: Setting the stage for remote
collaboration on physical tasks. In Proceedings of the 7th Nordic
Conference on Human-Computer Interaction: Making Sense
Through Design (pp. 351-360). ACM.
Hahn, J. F. (2017). Virtual reality library environments. American Library Association.
Hite, R. L., Jones, M. G., Childers, G. M., Ennes, M., Chesnutt, K., Pereyra, M., & Cayton, E. (2019). Investigating potential relationships between adolescents' cognitive development and perceptions of presence in 3-D, haptic-enabled, virtual reality science instruction. Journal of Science Education and Technology, 28(3).
Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods, 16(3), 307–331.
Johnson, D. W., Johnson, R. T., Ortiz, A. E., & Stanne, M. (1991). The impact of positive goal and resource interdependence on achievement, interaction, and attitudes. J Gen Psychol, 118(4), 341–347.
Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A.,
& Hall, C. (2016). NMC horizon report: 2016 higher Education
Edition. Austin: The New Media Consortium.
Kober, S. E., Wood, G., Hofer, D., Kreuzig, W., Kiefer, M., & Neuper, C. (2013). Virtual reality in neurologic rehabilitation of spatial disorientation. Journal of Neuroengineering and Rehabilitation, 10(1).
Laarni, J., Ravaja, N., Saari, T., Böcking, S., Hartmann, T., & Schramm, H. (2015). Ways to measure spatial presence: Review and future directions. In Immersed in Media (pp. 139–185). Cham: Springer.
León, I., Tascón, L., Ortells-Pareja, J. J., & Cimadevilla, J. M. (2018). Virtual reality assessment of walking and non-walking space in men and women with virtual reality-based tasks. PloS One.
Liszio, S., & Masuch, M. (2016). Designing shared virtual reality gaming experiences in local multi-platform games. In International Conference on Entertainment Computing (pp. 235–240). Cham: Springer.
Meyer, O. A., Omdahl, M. K., & Makransky, G. (2019). Investigating the
effect of pre-training when learning through immersive virtual real-
ity and video: A media and methods experiment. Computers &
Education, 140, 103603.
Michelon, P., & Zacks, J. M. (2006). Two kinds of visual perspective taking. Percept Psychophys, 68(2), 327–337.
Miles, M. A., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (pp. 50–72). Thousand Oaks: SAGE Publications.
Miniaci, M. C., & De Leonibus, E. (2018). Missing the egocentric spatial reference: A blank on the map. F1000Research, 7.
Montoro, D. T., Haber, A. L., Biton, M., Vinarsky, V., Lin, B., Birket, S. E., et al. (2018). A revised airway epithelial hierarchy includes CFTR-expressing ionocytes. Nature, 560(7718), 319–324.
Pallant, J. (2007). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows. McGraw-Hill Education (UK).
Pietropaolo, S., & Crusio, W. E. (2012). Learning spatial orientation. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Boston: Springer.
Pouliquen-Lardy, L., Milleville-Pennel, I., Guillaume, F., & Mars, F. (2016). Remote collaboration in virtual reality: Asymmetrical effects of task distribution on spatial processing and mental workload. Virtual Reality, 20(4), 213–220.
Regenbrecht, H., & Schubert, T. E. (2002). Real and illusory interactions enhance presence in virtual environments. Presence: Teleoperators and Virtual Environments, 11(4), 425–434.
Riva, G., & Waterworth, J. A. (2014). Being present in a virtual world. In The Oxford handbook of virtuality (pp. 205–221).
Sawyer, K. (2017). Group genius: The creative power of collaboration. Basic Books.
Schober, M. F. (1996). Addressee-and object-centered frames of refer-
ence in spatial descriptions. In American Association for Artificial
Intelligence, Working Notes of the 1996 AAAI Spring Symposiumon
Cognitive and Computational Models of Spatial Representation,
Seibert, J., & Shafer, D. M. (2017). Control mapping in virtual reality:
Effects on spatial presence and controller naturalness. Virtual
Reality, 22(1), 79-88.
Shafer, D. M., Carbonara, C. P., & Korpi, M. F. (2019). Factors affecting enjoyment of virtual reality games: A comparison involving consumer-grade virtual reality technology. Games for Health Journal, 8(1), 15–23.
Sherman, W. R., & Craig, A. B. (2018). Understanding virtual reality:
Interface, application, and design. Morgan Kaufmann.
Tascón, L., García-Moreno, L. M., & Cimadevilla, J. M. (2017). Almeria spatial memory recognition test (ASMRT): Gender differences emerged in a new passive spatial task. Neuroscience Letters, 651.
Tussyadiah, I. P., Wang, D., & Jia, C. H. (2017). Virtual reality and attitudes toward tourism destinations. In Information and communication technologies in tourism 2017 (pp. 229–239). Springer, Cham.
Shaffer, D. W. (2017). Quantitative ethnography. Madison: Cathcart Press.
Shaffer, D. W., Collier, W., & Ruis, A. R. (2016). A tutorial on epistemic network analysis: Analyzing the structure of connections in cognitive, social, and interaction data. Journal of Learning Analytics, 3(3), 9–45.
Turner, S., Turner, P., Carroll, F., O'Neill, S., Benyon, D., McCall, R., et al. (2003). Re-creating the Botanics: Towards a sense of place in virtual environments. Paper presented at the 3rd UK Environmental Psychology Conference, Aberdeen, 23–25 June 2003.
Wang, A., Thompson, M., Roy, D., Pan, K., Perry, J., Tan, P., Eberhart, R., & Klopfer, E. (2019). Iterative user and expert feedback in the design of an educational virtual reality biology game. Interactive Learning Environments, 1–18.
Weber, M. S., & Kim, H. (2015). Virtuality, technology use, and engagement within organizations. J Appl Commun Res, 43(4), 385–407.
Wirth, W., Hartmann, T., Böcking, S., Vorderer, P., Klimmt, C., Schramm, H., et al. (2007). A process model of the formation of spatial presence experiences. Media Psychol, 9(3), 493–525.
Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual environments: A presence questionnaire. Presence Teleop Virt, 7(3).
Zaman, C. H., Yakhina, A., & Casalegno, F. (2015). nRoom: An immersive virtual environment for collaborative spatial design. In Proceedings of the International HCI and UX Conference in Indonesia (pp. 10–17). ACM.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
... Moreover, we approach game development through a framework of Design-based Research, or DBR (Ameel & Reeves, 2008;Sandoval & Bell, 2004). Ongoing user testing, studies with various types of users, and reviews by subject matter experts have enabled us to collect valuable qualitative and quantitative data that have enhanced our understanding of how to incorporate authenticity, interactivity, and collaboration in VR learning games (Thompson et al., 2018b;Wang et al., 2019;Thompson et al., 2020;Uz Bilgin, Anteneh, & Thompson, 2020;Wang, 2020;Uz Bilgin, Anteneh, & Thompson, 2021;. ...
... Establishing rules and developing distinct roles for users are both useful ways of encouraging collaboration within VR environments (Uz-Bilgin et al., 2020). Earlier studies have described the benefits of establishing collaborative roles in VR. ...
... Cellverse has been in development since Summer of 2017 and has undergone numerous iterations, which we have discussed in other publications (Wang et al., 2019;Thompson et al., 2020;Uz Bilgin, Anteneh, & Thompson, 2020;Uz Bilgin & Thompson, 2020;Wang, 2020). In this paper, we look across all of the studies and papers to synthesize our experiences as lessons learned and best practices," in designing learning games that include authenticity, interactivity, and collaboration. ...
Full-text available
Virtual reality has become an increasingly important topic in the field of education research, going from a tool of interest to a tool of practice. In this paper, we document and summarize the studies associated with our 4-year design project, Collaborative Learning Environments in Virtual Reality (CLEVR). Our goal is to share the lessons we gleaned from the design and development of the game so that others may learn from our experiences as they are designing, developing, and testing VR for learning. We translate “lessons learned” from our user studies into “best practices” when developing authentic, interactive, and collaborative experiences in VR. We learned that authentic representations can enhance learning in virtual environments but come at a cost of increased time and resources in development. Interactive experiences can motivate learning and enable users to understand spatial relationships in ways that two dimensional representations cannot. Collaboration in VR can be used to alleviate some of the cognitive load inherent in VR environments, and VR can serve as a context for collaborative problem solving with the appropriate distribution of roles and resources. The paper concludes with a summation of best practices intended to inform future VR designers and researchers.
... Learners' interactions were coded in 13 articles (17.1%), other educational activities were coded in ten studies [12], [16], [17], [19], [70]- [75]. Coded data of simulated practices were used in three studies [76]- [78]. ...
... Other reliability tests were used less frequently, such as Krippendorff's test in two articles [97], [98] and Cronbach's alpha in one article [64]. On the other hand, five articles reported only the percentage of agreement between coders [57], [68], [69], [75], [100], and five articles reported that an agreement was tested among coders, without specifying the type of agreement used [79], [84], [90], [91], [101]. ...
... [105] define collaborative learning as a "set of teaching and learning strategies promoting student collaboration in small groups in order to optimize their own and each other's learning". ENA was used to study students' collaboration with each other and their mentors to frame, examine, and solve complex problems in different settings, for example, collaborative problem-solving (CPS) [71], [73], [75] communities of inquiry [48], computer supportive collaborative learning (CSCL) [49], project-based engineering [19] and scientific reasoning processes [16]. ...
Full-text available
Over the past decade, epistemic network analysis (ENA) has emerged as a quantitative ethnography tool for modeling discourse in different types of human behaviors. This article offers a comprehensive systematic review of ENA educational applications in empirical studies ( $\text{n}=76$ ) published between 2010 and 2021. We review the ENA methods that research has relied on, the use of educational theories, their method of application, comparisons across groups and the main findings. Our results show that ENA has helped visually model the coded interactions and illustrate the connection strength among elements of network models. The applications of ENA have expanded beyond discourse analysis to several new areas of inquiry such as modeling surveys, log files or game play. Most of the reviewed articles used ENA based on educational theories and frameworks ( $\text{n}=53$ , 69.7%), with one or more theories per article, while 23 articles (30.3%) did not report theoretical grounding. The implementation of ENA has enabled comparisons across groups and helped augment the insights of other methods such as process mining, however there is little evidence that studies have exploited the quantitative potential of ENA. Most of the reviewed studies used ENA on small sample size with manually coded interactions with few examples of large samples and automated coding.
... When comparing a single-user, system-based teaching version with a two-user version in which one user acted as a teacher, Simeone et al. [72] showed that the two-user version scored higher on overall preference and clarity than the single-user version, which used animation sequences for teaching. Thompson et al. [83] and Uz-Bilgin et al. [86] explored how to foster collaboration between students in a VR/tablet-based educational game and defined a guiding role, similar to that of a teacher, that was provided with all the necessary information on a tablet. Their results show that such roles can shape collaboration between users. ...
... Since clipboards are less cognitively demanding than an overlay interface [6], one was implemented to show the teaching sheets to the teachers, as proposed by Thompson et al. [83] and Uz-Bilgin et al. [86] (see Figure 1b). Animal parts that were currently highlighted were also highlighted in the teaching sheets by appearing in a green font, so that the teacher always knew what to talk about. ...
... However, the same rich and immersive virtual environments that can spark spatial understanding can also cause extraneous cognitive load [8]. Studies suggest that collaboration between two partners can help mitigate cognitive load through spatially focused dialogue between the partners [9]. Furthermore, collaborative problem-solving (CPS) skills are extremely valuable for individuals to learn and refine. ...
Conference Paper
In this paper we explore the intersection of three domains: immersive virtual reality (VR), collaborative problem solving, and the development of spatial skills. We conducted a narrative review of literature published between 2011 and 2021, searching seven well-known databases for articles that included keywords associated with spatial skills, collaboration, and immersive virtual reality. The searches resulted in a thorough review of 21 articles, which included examples of virtual, physical, and applied spatial collaborative tasks. Most of the tasks were performance tasks involving the spatial skills of object location and spatial navigation, requiring participants to locate objects in a virtual world. Results of these studies also suggest specific design features that can benefit collaboration among users engaging in spatial tasks. This study contributes to the literature by synthesizing research on collaboration and spatial skills in virtual reality and distilling suggestions for designers of VR experiences.
... All players made gains in their understanding of the cellular environment, as evidenced through their drawings, yet the players in the stereoscopic view made greater gains in representing the process of translation. The first-person viewpoint helped players see translation as a spatial process, which is consistent with research on VR (Jensen and Konradsen, 2020; Uz-Bilgin et al., 2020). All players witnessed the process of translation and were able to interact with the environment. ...
Purpose: This study isolates the effect of immersion on players' learning in a virtual reality (VR)-based game about cellular biology by comparing two versions of the game with the same level of interactivity and different levels of immersion. The authors identify immersion and additional interactivity as two key affordances of VR as a learning tool. A number of research studies compare VR with two-dimensional or minimally interactive media; this study focuses on the effect of immersion as a result of the head-mounted display (HMD).
Design/methodology/approach: In the game, players diagnose a cell by exploring a virtual cell and searching for clues that indicate one of five possible types of cystic fibrosis. Fifty-one adults completed all aspects of the study. Players took pre- and post-assessments and drew pictures of cells and translation before and after the game. Players were randomly assigned to play the game with the HMD (stereoscopic view) or without the headset (non-stereoscopic view). Players were interviewed about their drawings and experiences at the end of the session.
Findings: Players in both groups improved in their knowledge of the cell environment and the process of translation. Players who experienced the immersive stereoscopic view showed a more positive learning effect on the content assessment, and stronger improvement in their mental models of the process of translation between pre- and post-drawings, compared to players who played the two-dimensional game.
Originality/value: This study suggests that immersion alone has a positive effect on conceptual understanding, especially in helping learners understand spatial environments and processes. These findings set the stage for a new wave of research on learning in immersive environments; research that moves beyond determining whether immersive media correlate with more learning, toward a focus on the types of learning outcomes that are best supported by immersive media.
Conference Paper
Area measurement has a high priority in mathematics school education. Nevertheless, many students have problems understanding the concept of area measurement. An AR tool for visualizing square units on objects in the real world is developed to enable teachers to support understanding already in primary school. This work-in-progress paper presents the initial test version and discusses the first teaching experiment results. The students’ feedback and use of the app showed possible adaptations of the AR tool, e.g., that the idea of dynamic geometry could be incorporated in the future.
The 8th annual International Conference of the Immersive Learning Research Network (iLRN2022) was the first iLRN event to offer a hybrid experience, with two days of presentations and activities on the iLRN Virtual Campus (powered by ©Virbela), followed by three days on location at the FH University of Applied Sciences BFI in Vienna, Austria.
The advanced visualisation and interactive capabilities make immersive virtual reality (IVR) attractive for educators to investigate its educational benefits. This research reviewed 64 studies published in 2016–2020 to understand how science educators designed, implemented, and evaluated IVR-based learning. The immersive design features (sensory, actional, narrative, and social) originally suggested by Dede provided the framework for the analysis of IVR designs. Educators commonly adopted IVR to better aid visualisation of abstract concepts and enhance learning experience. IVR applications tended to have sensory and actional features, leaving out narrative and social features. Learning theories did not appear to play a strong role in the design, implementation, and evaluation of IVR-based learning. Participants generally reported their IVR experiences as positive on engagement and motivation but the learning outcomes were mixed. No particular immersive design features were identified to result in better learning outcomes. Careful consideration of the immersive design features in alignment with the rationales for adopting IVR and evaluation methods may contribute to more productive investigations of the educational benefits of IVR to improve science teaching and learning.
This study focuses on an educational game titled Cellverse, a two-player cross-platform VR project intended to teach high school biology students about cell structure and function. In Cellverse, players work in pairs to explore a human lung cell and diagnose and treat a dangerous genetic disorder. Cellverse is being designed by the Collaborative Learning Environments in Virtual Reality (CLEVR) team, an interdisciplinary team consisting of game designers, educational researchers, and graduate and undergraduate students. Using a design-based research approach, we have enlisted the help of both subject matter experts and user testers to iteratively design and improve Cellverse. The objective of this paper is to share how user and expert feedback can inform and enhance the development of learning games. We describe how we gather and synthesize information to review and revise our game from in-game observations, semi-structured interviews, and video data. We discuss the input of subject matter experts, present feedback from our user testers, and describe how input from both parties influenced the design of Cellverse. Our results suggest that including feedback from both experts and users has provided information that can clarify gameplay, instruction, subject portrayal, narrative, and in-game goals.
Immersive virtual reality (VR) is predicted to have a significant impact on education, but most studies investigating learning with immersive VR have reported mixed results when compared to low-immersion media. In this study, a sample of 118 participants was used to test whether a lesson presented either in immersive VR or as a video could benefit from the pre-training principle as a means of reducing cognitive load. Participants were randomly assigned to one of two method conditions (with/without pre-training) and one of two media conditions (immersive VR/video). The results showed an interaction between media and method, indicating that, in the immersive VR condition, pre-training had a positive effect on knowledge (d = 0.81), transfer (d = 0.62), and self-efficacy (d = 0.64) directly following the intervention, and on self-efficacy (d = 0.84) in a one-week delayed post-test. No effect was found for any of these variables within the video condition.
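The effect sizes reported above (e.g., d = 0.81) are standardized mean differences. A minimal sketch of the pooled-standard-deviation form of Cohen's d, using hypothetical score lists rather than the study's data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 in the denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores for two conditions
with_pretraining = [2, 3, 4, 5, 6]
without_pretraining = [1, 2, 3, 4, 5]
print(round(cohens_d(with_pretraining, without_pretraining), 2))  # 0.63
```

By the usual rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the knowledge effect above (d = 0.81) in the large range.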
Collaborative Mixed Reality (MR) systems are at a critical point in time as they are soon to become more commonplace. However, MR technology has only recently matured to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus on creating the enabling technology. In parallel, but largely independently, the field of Computer Supported Cooperative Work (CSCW) has focused on the fundamental concerns that underlie human communication and collaboration over the past 30-plus years. Since MR research is now on the brink of moving into the real world, we reflect on three decades of collaborative MR research and try to reconcile it with existing theory from CSCW, to help position MR researchers to pursue fruitful directions for their work. To do this, we review the history of collaborative MR systems, investigating how the common taxonomies and frameworks in CSCW and MR research can be applied to existing work on collaborative MR systems, exploring where they have fallen behind, and look for new ways to describe current trends. Through identifying emergent trends, we suggest future directions for MR, and also find where CSCW researchers can explore new theory that more fully represents the future of working, playing and being with others.
Virtual presence describes a user's perception of a virtual reality (VR) environment (VRE), specifically, of their involvement (sense of control within a virtual environment with minimal distractions) and immersion (multi-input sensory engagement providing apparent realism of objects and interactions). In education, virtual presence is a significant construct, as highly immersive VREs have been linked to users reporting memorable and exciting teaching experiences. Prior research has described that adults and children report different levels of presence when subjected to identical VREs, suggesting cognition may play some role in users' perceptions of presence. According to Piaget, concrete operational development is a watershed moment when adolescents develop the ability to understand abstract concepts and make assessments of what is and is not reality. This period in cognitive development may influence children's and adolescents' perceptions of presence. This is an exploratory study of seventy-five 6th-grade and seventy-six 9th-grade students who participated in an instructional module about cardiac anatomy and physiology using 3-D, haptic-enabled VR technology. When surveyed on their perceptions of virtual presence, there were no reported differences between grade levels. When assessed using a Piagetian inventory of cognitive development, the analyses indicated that the sixth-grade students' understanding of spatial rotation and angular geometry was positively correlated with perceived control and negatively correlated with distraction. This study suggests that the spatial acuity of younger learners plays an important role when using VR technologies for science learning. This research raises questions about the relevance of users' cognitive development when using emergent VR technologies in the K–12 science classroom.
Far space and near space refer to different spatial scales in which we unfold our behaviour. On the one hand, classical visuospatial neuropsychological tests assess spatial abilities in near space; on the other, far space typically involves newer spatial memory tasks in which participants act within an environment, either interacting with objects or searching for targets. The Boxes Room Task is a virtual test that assesses spatial memory in far space. Based upon this task, a new test was developed in which participants could not move about within the context but could perceive it from a specific viewpoint. In this work, both versions of the task were compared with one another. Furthermore, they were also compared with the results of the 10/36 spatial recall test, a task assessing spatial memory in near space. Two conditions were applied in all tasks, in both stable and rotated contexts. Our study included one hundred and twenty healthy young participants, who were divided into two groups. The first group performed the Walking Space Boxes Room Task. The second group performed the Non-Walking Space Boxes Room Task as well as a traditional neuropsychological test for near space assessment, the 10/36 spatial recall test. Results showed that orientation in the non-walking space was more difficult than in the walking space. Additionally, our test showed that men outperformed women in both virtual reality-based tasks, although they did not in the traditional 10/36 spatial recall test. In short, this work shows that virtual-reality technologies provide tools to assess spatial memory that are more sensitive than traditional tests in detecting small performance changes.
Objective: This study investigated psychological responses to playing videogames using a virtual reality (VR) head-mounted display (HMD). We also investigated how cybersickness impacts the sense of presence one feels in the virtual environment, as well as how cybersickness affects enjoyment.
Materials and methods: Participants played randomly assigned VR games that varied in the level of sensory conflict they provided: "Lucky's Tale," "Elite: Dangerous," and "Minecraft." Results were compared based on two headset conditions: the Oculus DK2 and the recently released Oculus Rift Consumer Version (CV1).
Results: Cybersickness was not reduced by playing games with a VR HMD of higher technological quality (the Oculus Rift CV1). Furthermore, cybersickness responses were significantly different based on the level of sensory conflict in the games. Games with less sensory conflict, such as third-person platformer games or space and flight simulator games, produce less cybersickness in players than first-person games. Enjoyment of VR games was shown to be directly influenced by a sense of spatial presence, which was affected by interactivity and realism. Findings suggest that the variables that are important to the enjoyment of console, mobile, or motion-based games are consistently important to enjoyment of VR games.
Conclusion: Better technology does not affect the frequency or severity of cybersickness for players, but sensory conflict has a significant impact on how sick users become. Additionally, we present a model that indicates how enjoyment is produced in VR gaming experiences. These findings bear further investigation as new methods of interacting with VR games are developed.
The academic evidence examining the educational influences of immersive virtual reality (VR) with head-mounted displays (HMDs) has been relatively limited until now, in particular for virtual field trips, which allow teachers to guide students to explore learning elements in virtual environments. This study therefore invited 24 elementary school students to engage in an immersive virtual field trip that was part of a 2-week summer camp on the learning subject of social studies. The students' learning experiences (i.e., perceived presence, motivational belief change, and attitudes) were investigated, and the teacher-student interaction behaviors in the learning activity were explored. The results showed that the students' motivation was generally enhanced, particularly in the reduction of test anxiety. The important role of perceptions of spatial presence and experienced realism in the students' motivational beliefs was also addressed. Moreover, different behavioral patterns of teacher-student interactions during the virtual field trips were identified by lag sequential analysis. This work initiated pedagogical research probing how HMD-based VR technology can be applied in classrooms for teachers to lead their students on virtual field trips. The proposed instructional strategies for appropriately guiding students' learning during immersive virtual field trips are a further contribution of this study.
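Lag sequential analysis, as used above to identify teacher-student interaction patterns, starts from a matrix of lag-1 transition counts between coded behaviors. A minimal sketch of that counting step with hypothetical behavior codes; real analyses then test each transition against its expected frequency (e.g., with adjusted residuals) to decide which sequences are significant.

```python
from collections import Counter

def lag1_transitions(sequence):
    """Count how often each behavior code is immediately followed by another."""
    return Counter(zip(sequence, sequence[1:]))

# Hypothetical coded behaviors: T = teacher guidance, S = student response
sequence = ["T", "S", "T", "S", "S", "T"]
counts = lag1_transitions(sequence)
print(counts)  # e.g. the transition ('T', 'S') occurs twice
```

From these counts one can read off conditional probabilities such as P(S follows T) = counts[("T", "S")] divided by the number of T occurrences that have a successor.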