Integrating mixed reality spatial learning analytics into secure
electronic exams
Michael Cowling
Central Queensland University,
Australia
Mathew Hillier
Monash University,
Australia
James Birt
Bond University,
Australia
This paper presents an approach to using mixed reality (MR) technologies in supervised summative
electronic exams. The student learning experience is increasingly replete with a rich range of digital
tools, but we rarely see these same e-tools deployed for higher stakes supervised assessment, despite the
increasing maturity of technologies that afford authentic learning experiences. MR, including
augmented and virtual reality, enables educators to provide rich, immersive learner centred experiences
that have unique affordances for collecting a range of learning analytics on student performance. This is
especially so in disciplines such as health, engineering, and physical education requiring a spatial
dimension. Yet, in many institutions, paper-based exams still dominate, in some measure due to
concerns over security, integrity and scalability. This is despite the authenticity of assessments used for high
stakes judgements being a key concern for educators and institutions seeking to produce employment-ready 21st
century graduates. We therefore present a proposal for how MR pedagogies can be deployed in supervised
examination contexts in a manner that is secure, reliable, and scalable.
Keywords: Mixed reality, learning analytics, electronic exams, spatial pedagogy
Introduction
In many higher education institutions paper-based exams still dominate higher stakes supervised assessment.
Researchers such as Hillier and Fluck (2013) cite concerns over security, integrity, and scalability with respect to
using digital technology in exam halls. However, there is an increasing need to bring these outdated means of
assessment into line with the digitally rich work and education practices of today. In disciplines that require the
examination of spatial skills, such as health, engineering, architecture and physical education, paper-based higher
stakes assessment restricts examiners' ability to assess students' spatial understanding (Roca-González, Martin-
Gutierrez, García-Dominguez & del Carmen Mato Carrodeguas, 2017). This is concerning because research
suggests that manipulating physical objects is valuable for creating a feedback loop for learning in spatial
disciplines (Paas & Sweller, 2014).
Mixed reality (MR) technologies - comprising augmented reality (AR), virtual reality (VR) and 3D printing -
allow for spatial skills assessment (Birt, Moore & Cowling, 2017), but uptake in education has been hindered by
cost and by the expertise and capability required. This is changing with the recent wave of low-cost immersive 3D MR
hardware and powerful interactive 3D visualisation software platforms such as Unity3D. However, while the
latest MR technology has been deployed for formative learning with respect to spatial capabilities, these
technologies have yet to be deployed in electronic exams (e-exams).
A barrier to deploying MR technology for e-exams is that spatial data gathering using MR technology (primarily
mobile devices) relies on having access to an internet connection and networked data storage. However, for e-
exams to be reliable and secure, reliance on an internet connection presents a point of potential system failure,
while the use of hard-to-control mobile devices presents risks to assessment integrity. Therefore, a question in
deploying MR for e-exams is: “How can an e-exam effectively be administered whilst still allowing the use of
MR technologies for spatial visualisation and data gathering?”. This paper addresses this question by outlining an
MR learning solution for spatial analytics and assessment that can be merged into an existing e-exam system,
providing a spatial visualisation experience for students while addressing the security and authenticity required
for an examination.
Background
It is increasingly recognised that there is value in the use of e-exams, both to address the digital preferences of
students and to address the cost and logistics of conducting exams. Recent research has shown an increase in
students' preferences for using a keyboard over pen and paper for exams (Hillier & Grant, 2018). In light of
increasing student numbers and constrained funding for higher education, the potential for cost savings from
eliminating paper and from increased marking efficiency may be seen as attractive. The adoption of computers for
exams also offers the opportunity for a greatly expanded pedagogical landscape in the exam room (Hillier &
Fluck, 2013), such that academics will be able to design a wider range of assessment tasks that draw upon a
range of multimedia technologies and sophisticated software tools. Students will be able to utilise new tools in
responding to the problems set by the examiner. This enhances the university's ability to accredit graduates as
being able to solve problems and operate in modern workplaces (Adams, Cummins, Davis & Yuhnke, 2016).
In disciplines such as health and engineering, visualisation is increasingly being used in teaching classrooms as
a key means of improving learning, skills and outcomes, particularly as more disciplines in higher education
support the development of practical skills (Höffler, 2010). To enhance students' conceptualisation,
manipulation, application, retention of knowledge and practical skills, specific learning design characteristics
are recommended for MR visualisations in the classroom (Mayer, 2014; Moreno & Mayer, 2007). In part, these
visualisations must prime the learner’s perception, engage their motivations, draw on prior knowledge, avoid
working memory overload through specific learning objectives, provide multiple presentation modalities, move
learners from shallow to deeper learning and allow learners to apply and build mental models (Hwang & Hu,
2013; Mayer, 2014).
The fundamental assumptions of MR visualisation and its use in the classroom are: that no single technology
offers a silver bullet for students to grasp specific concepts (Moreno & Mayer, 2007); that multiple representations
must take advantage of the differences between the representations (Ainsworth, 2014); and that students learn
through a variety of approaches (Mayer, 2014). This reflects the position of proponents of blended learning
approaches, who have long appreciated and advocated for multiple modes of presentation, delivery and content
(Bernard, Borokhovski, Schmid, Tamim & Abrami, 2014). Many disciplines, especially those with STEAM
(Sciences, Technology, Engineering, Arts and Mathematics) subject matter, are suitable for 3D MR
presentations if they benefit from the observation that multiple 3D modes of engagement can be reinforcing and
synergistic within the pedagogy (Birt & Cowling, 2017).
To assist with this innovation, technologies such as 3D printing, AR, VR and mobile bring your own devices
(BYOD) are becoming commercially available and can thus be incorporated into the classroom. MR,
a continuum of these innovative technologies, provides a framework to position real and virtual worlds
(Milgram & Kishino, 1994), resulting in the development of new paradigms, tools, techniques, and
instrumentation that allow visualisations at different and multiple scales and the design and implementation of
comparative MR pedagogy across multiple disciplines (Magana, 2014). The 2016 NMC Higher Education
Horizon Report (Johnson, Adams Becker, Cummins, Estrada, Freeman & Hall, 2016) and Technology Outlook
for Australian Tertiary Education Report (Adams et al., 2016) specifically highlight these technologies as key
educational technologies and drivers for learner engagement.
Finally, tying these concepts together are learning analytics. Learning analytics is a growing field, especially in
education, where it is perceived that learning analytics can help to understand student behaviour. As education
becomes more digital, more data on this behaviour can be collected, analysed and mined to understand how
successful the student learning process is (Siemens & Baker, 2012). However, researchers such as Beer, Tickner
and Jones (2014) note that this process requires a clear understanding of the context of the data
being collected and how it might effectively be used.
Ferguson et al. (2016) identified that the use of learning analytics to improve and innovate learning and
teaching is still in its infancy, and requires significant action to drive work in education and training, including
work at the institutional level, as well as work at the practice level to ensure learning analytics are developed
that make good use of pedagogy. Currently, data is often collected to inform learning analytics predominantly
through a learning management system (LMS), identifying characteristics of students that place them at higher risk
of failure, leading to the use of learning analytics for “early warning” type systems (e.g. Macfadyen & Dawson,
2010).
Moreover, despite this recognition that the use of learning analytics is still in the early stages and the collection
and analysis of data is often a problem, work has been completed on the use of learning analytics in virtual
learning environments (VLEs), particularly through virtual worlds such as Second Life. For instance, work by
Agudo-Peregrina et al. (2014) looks at the use of student participation in VLEs to predict student success and
performance in their coursework, with the main finding of the work being that whilst there is some correlation in
online courses, no significant correlation exists for students studying face-to-face. Similarly, using MR outside
of the VLE space, Aljohani and Davis (2012) looked at the use of learning analytics in a mobile environment,
and in particular in a pervasive learning environment, coining the terms mobile learning analytics (MLA) and
pervasive learning analytics (PLA) respectively to describe this approach, and noted how these could be used to
help teachers understand learners' patterns of interaction between themselves and their context. In particular,
their system SCROLL made use of historical contextual information about students' geolocated positions to help
students effectively recall what they had written at those locations.
However, the use of other features of AR or VR systems, such as the spatial positioning of digital objects or
student interaction with these objects, appears to be absent from the literature, as does their implementation in a
locked-down electronic exam system. Our proposal, therefore, is to use the affordances of an MR system, in
particular one that involves the manipulation of digital objects, to record learning analytics for student
interactions that involve the spatial positioning of digital objects within the MR environment, as well as student
interaction with these objects. This data will need to be captured in a secure exam environment, but can then be
replayed for the academic to give a feel for how students are proceeding with their learning, so that it can be
judged by an expert. We call this type of analytics spatial learning analytics and the resulting data MR spatial
memories.
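
To make the idea of an MR spatial memory more concrete, the sketch below shows one possible data model for such a trace: a time-stamped series of pose samples plus discrete interaction events. It is written as Unity C# serialisable types to match the Unity3D platform discussed in this paper; the class and field names (SpatialMemory, PoseSample, InteractionEvent) are illustrative assumptions for this sketch, not part of any existing system.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative data model for an "MR spatial memory": a time-stamped trace of
// where a digital object was positioned and how the student interacted with it.
// All class and field names are assumptions made for this sketch, not an existing API.
[Serializable]
public class PoseSample
{
    public float time;          // seconds since the task started
    public Vector3 position;    // X, Y, Z of the tracked object
    public Quaternion rotation; // orientation of the tracked object
}

[Serializable]
public class InteractionEvent
{
    public float time;
    public string label;        // e.g. "annotation_placed:left_ventricle"
}

[Serializable]
public class SpatialMemory
{
    public string studentId;
    public string taskId;
    public List<PoseSample> poses = new List<PoseSample>();
    public List<InteractionEvent> events = new List<InteractionEvent>();

    // Serialise the trace so it can be stored during an exam and replayed
    // or uploaded to an LMS afterwards.
    public string ToJson() => JsonUtility.ToJson(this);
}
```

Under this kind of structure, a marker could later step through the pose list to replay the student's manipulation of the digital object alongside their recorded interactions.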
Building a mixed reality spatial memory system
The case study for this paper will be an MR system integrating spatial learning analytics to facilitate the
learning of anatomy by health science and medicine students through a visualisation of the human heart. A
comparison of the traditional and MR pedagogies is shown in Figure 1.
Figure 1. Pedagogies for teaching anatomy with a heart diagram/model
The Heart MR system, first proposed in Birt & Cowling (2016), allows students to visualise a heart model using
a MR system built into their BYOD smart mobile device, together with a commercially available Google
Cardboard. Once the application is running, students can present a coloured cube to the system, which is
translated into an interactive model of the heart. As they rotate, yaw and pitch the cube, the heart will move and
annotations will appear for the student explaining the different parts of the heart. This is achieved through AR
using Vuforia and Unity3D. Simultaneously, learning analytics are collected on the student's interaction with
the app. To understand what learning analytics information could be collected, and how insights could be
derived from this information by learners, a model presented by Davenport, Harris and Morison (2010) was
adapted for this work. This includes recording data for reporting, alerting, extrapolating, modelling,
recommending and simulating the model. In this case the data required is the X, Y, Z position and rotational
information, recorded 24 times per second, together with recorded answers to anatomical recall questions. Data
from the learning analytics system can then be stored or transferred to an LMS and used by key stakeholders to
interpret student learning outcomes and responses. Using this approach, an MR system can be used to collect data on student
performance in spatial analysis and to provide coaching to students on the process. During term, this can be
done formatively, with students coached through the process. However, at the end of term, if students are
required to complete an exam, the question remains of how this spatial data can be translated and used for
summative purposes, considering the security and controls expected of a supervised summative exam. This is
where integration with existing electronic exam systems becomes important.
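
As an indication of how the 24-samples-per-second capture described above might be implemented, the following sketch shows a Unity C# component that records the pose of the marker-anchored heart model into the SpatialMemory structure sketched earlier, and logs answers to recall questions as interaction events. The component name, fields and sampling approach are illustrative assumptions rather than the actual Heart MR implementation.

```csharp
using UnityEngine;

// Minimal sketch of a spatial analytics recorder, assuming the heart model is
// driven by the tracked cube and exposed to this component as a Transform.
public class SpatialAnalyticsRecorder : MonoBehaviour
{
    public Transform trackedModel;   // heart model anchored to the AR marker
    public float sampleRate = 24f;   // samples per second, as used in Heart MR
    public SpatialMemory memory = new SpatialMemory();

    private float nextSampleTime;

    void Update()
    {
        // Sample the model's pose at (approximately) the configured rate.
        if (Time.time < nextSampleTime) return;
        nextSampleTime = Time.time + 1f / sampleRate;

        memory.poses.Add(new PoseSample
        {
            time = Time.time,
            position = trackedModel.position,
            rotation = trackedModel.rotation
        });
    }

    // Called by the question UI when a student answers an anatomical recall item.
    public void LogAnswer(string questionId, string answer)
    {
        memory.events.Add(new InteractionEvent
        {
            time = Time.time,
            label = questionId + ":" + answer
        });
    }
}
```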
Integrating mixed reality with electronic exams
Existing e-exam systems implement a secure environment in several ways, either using institution supplied
equipment or student owned equipment (BYOD). Hillier and Fluck (2013) have argued that BYOD is likely the
only viable approach to large scale deployment of e-exams. In terms of securing BYOD, one of two approaches
is used: install lock-down software within the student's resident operating system on the device, or start
the device using an alternative operating system from a network or secondary storage device. The former can be
quite invasive and raises risks of interfering with the ongoing operation of the device while the latter avoids any
interference with data on the student's internal drive. It is common that only approved applications and
documents can be opened, and network access may be removed or limited to ‘whitelisted’ resources, frequently
via the use of a 'secure' browser. Lockdown techniques are available for computers running MacOS, Windows
or Linux, but these are almost always implemented on desktop and laptop class computers rather than mobile devices.
This presents a barrier for the use of MR pedagogy because MR technology commonly makes use of mobile
devices such as Android phones and increasingly leverages stereoscopic headsets to create a greater sense of
immersion. Therefore, the challenge is to enable Heart MR to fit into a typical secure e-exam environment using
desktop/laptop centric operating systems. One approach is to use emulation software (such as Genymotion,
www.genymotion.com) or an Android virtual machine running in VirtualBox to allow the Android system to run within a desktop OS.
This in turn would allow Heart MR to work within or in conjunction with existing secure electronic exam
solutions. An example Android app running in Genymotion on Linux (Ubuntu) is shown in Figure 2. The cube
shown in this image is being generated dynamically through visualisation triggered by markings on a physical
object (cube) presented to a webcam attached to a laptop computer. Specifically, spatial and rotational
information regarding the cube is captured at 24 times per second and animated, producing the digital
visualisation.
Figure 2. Mixed Reality Pedagogy Running in a Secure Environment through Genymotion
However, limitations currently mean that the use of stereoscopic headsets as required in the original Heart MR
application (see Figure 1) would not be viable. Instead, a webcam will need to be used to capture data, with the
visualisation displayed on a regular computer screen. The use of the emulator will allow the same version
of the app to be presented to students during the term and for use in exams. This means better continuity
between formative and summative assessment can be achieved where the unique affordances of MR pedagogy
can be integrated into a secure exam environment for the benefit of students and subsequent assessment by staff.
As a first step for exam use, the app can be customised to remove information elements (such as annotations)
that students can then be asked to replace as part of their examination. Learning analytics data can be collected
from students as they manipulate the app, providing an extra layer of assessment information that can be used by
markers. More advanced designs can take advantage of the unique spatial affordances of MR to allow new
insights. However, a question remains as to whether the lack of a stereoscopic headset in the exam will allow
students to experience the simulation with the same impact as was experienced formatively, given that the
immersive nature of MR has been shown to have a positive impact on learning.
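
One way such a design could respect the offline, locked-down requirements discussed in this section is to keep all analytics on local storage for the duration of the exam and only hand the resulting file to the e-exam system (or LMS) for upload and replay after the sitting. The sketch below illustrates that idea; the exam-mode flag, file naming and persistence path are assumptions made for this sketch rather than features of any existing e-exam platform.

```csharp
using System.IO;
using UnityEngine;

// Sketch of exam-mode persistence: no network calls are made during the exam;
// the spatial memory is written to local storage so the secure exam environment
// can collect it for marking or replay once the sitting is over.
public class ExamModeLogger : MonoBehaviour
{
    public SpatialAnalyticsRecorder recorder;
    public bool examMode = true;    // assumed to be set by the exam build/configuration

    void OnApplicationQuit()
    {
        if (!examMode || recorder == null) return;

        string fileName = "spatial_memory_" + recorder.memory.studentId + ".json";
        string path = Path.Combine(Application.persistentDataPath, fileName);
        File.WriteAllText(path, recorder.memory.ToJson());
        // A marker-facing tool could later load this JSON and re-animate the
        // heart model frame by frame to "replay" the student's manipulation.
    }
}
```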
Conclusion
The affordances of MR allow for sophisticated use and analysis of spatial attributes, and this is becoming more
important as a number of discipline areas seek to assess in more authentic ways. Education is looking to adopt
technologies such as MR; however, portions of the education system, such as exams, appear to be slow to catch up
with this trend. This paper presents a method to enable the use of MR to collect analytics and incorporate these
into a secure exam environment. Future work on this project will conduct live trials of the proposed method.
References
Agudo-Peregrina, A. F., Iglesias-Pradas, S., Conde-González, M. A., & Hernández-García, A. (2014). Can we predict
success from log data in VLEs? Classification of interactions for learning analytics and their relation with
performance in VLE-supported F2F and online learning. Computers in Human Behavior, 31, 542-550.
doi:10.1016/j.chb.2013.05.031.
Aljohani, N. R., & Davis, H. C. (2012). Significance of learning analytics in enhancing the mobile and pervasive
learning environments. In 6th International Conference on Next Generation Mobile Applications, Services and
Technologies (NGMAST), September 2012, 70-74. https://doi.org/10.1109/NGMAST.2012.49
Adams Becker, S., Cummins, M., Davis, A., & Yuhnke, B. (2016). NMC Technology Outlook for Australian Tertiary
Education: A Horizon Project Regional Report. Austin, Texas: New Media Consortium.
Ainsworth, S. (2014). The multiple representation principle in multimedia learning. In R. E. Mayer (Ed.), The
Cambridge handbook of multimedia learning (2nd edn), (pp. 464–486). Cambridge University Press.
https://doi.org/10.1017/CBO9781139547369.024
Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond: moving from rhetoric to
reality. In Proceedings of the 31st Annual Conference of the Australasian Society for Computers in
Learning in Tertiary Education (ASCILITE 2014), 242-250. https://doi.org/10.14742/apubs.2014.1092
Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended
learning and technology use in higher education: from the general to the applied. Journal of Computing in
Higher Education, 26(1), 87-122. doi:10.1007/s12528-013-9077-3.
Birt, J. & Cowling, M. (2016). Mixed reality in higher education: Pedagogy before technology. 2016 Australian
Learning Analytics Summer Institute Workshop (ALASI).
Birt, J., Moore, E., & Cowling, M. (2017). Improving paramedic distance education through mobile mixed reality
simulation. Australasian Journal of Educational Technology, 33(6). doi:10.14742/ajet.3596.
Birt, J., & Cowling, M. (2017). Toward future 'mixed reality' learning spaces for STEAM education. International
Journal of Innovation in Science and Mathematics Education, 25(4), 1.
Davenport, T., Harris, J.G., & Morison, R. (2010). Analytics at Work. Harvard Business School Press: Boston.
Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., Rienties, B., Ullmann, T., & Vuorikari, R.
(2016). Research Evidence on the Use of Learning Analytics: Implications for Education Policy. Joint
Research Centre, Seville, Spain.
Hillier, M., & Fluck, A. (2013). Arguing again for e-exams in high stakes examinations. In ASCILITE-Australian
Society for Computers in Learning in Tertiary Education Annual Conference, 385-396.
Hillier, M., & Grant, S. (2018). Towards authentic e-Exams at scale: robust networked Moodle. Working paper,
Monash University.
Höffler, T.N. (2010). Spatial ability: Its influence on learning with visualisations - a meta-analytic review,
Educational Psychology Review, 22(3), 245-269. doi:10.1037/e578522012-018.
Hwang, W.Y., & Hu, S.S. (2013). Analysis of peer learning behaviors using multiple representations in virtual reality
and their impacts on geometry problem solving, Computers and Education, (62), 308-319.
doi:10.1016/j.compedu.2012.10.005.
Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). NMC Horizon Report:
2016 Higher Education Edition. Austin, Texas: The New Media Consortium.
Macfadyen L., & Dawson S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof
of concept. Computers and Education, 54(2), 588-599. doi:10.1016/j.compedu.2009.09.008.
Magana, A. J. (2014). Learning strategies and multimedia techniques for scaffolding size and scale cognition,
Computers and Education, (72), 367-377. doi:10.1016/j.compedu.2013.11.012.
Mayer, R. E. (2014). The Cambridge handbook of multimedia learning (2nd ed). New York, NY: Cambridge
University Press.
Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS on
Information and Systems, 77(12), 1321-1329.
Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments, Educational Psychology Review,
19(3), 309-326. https://doi.org/10.1007/s10648-007-9047-2
Paas, F., & Sweller, J. (2014). Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.),
The Cambridge handbook of multimedia learning. New York, NY: Cambridge University Press.
Roca-González, C., Martin-Gutierrez, J., García-Dominguez, M., & del Carmen Mato Carrodeguas, M. (2017).
Virtual Technologies to Develop Visual-Spatial Ability in Engineering Students. Eurasia Journal of
Mathematics, Science & Technology Education, 13(2). doi:10.12973/eurasia.2017.00625a.
Siemens, G., & Baker, R. S. J. (2012) Learning Analytics and Educational Data Mining: Towards Communication
and Collaboration, presented at the Second International Conference on Learning Analytics and
Knowledge (LAK12). https://doi.org/10.1145/2330601.2330661
Please cite as: Cowling, M., Hillier, M. & Birt, J. (2018). Integrating mixed reality spatial learning analytics
into secure electronic exams. In M. Campbell, J. Willems, C. Adachi, D. Blake, I. Doherty, S. Krishnan, S.
Macfarlane, L. Ngo, M. O’Donnell, S. Palmer, L. Riddell, I. Story, H. Suri & J. Tai (Eds.), Open Oceans:
Learning without borders. Proceedings ASCILITE 2018 Geelong (pp. 330-334).
https://doi.org/10.14742/apubs.2018.1972