Draft – originally published in: Spitzer, M., Rosenberger, M., Ebner, M. (2020) Simulation data visualization using mixed reality with Microsoft HoloLens™, In: New Perspectives on Virtual and Augmented Reality, Daniela, L. (ed.), pp. 147-162, London: Routledge, https://doi.org/10.4324/9781003001874
Simulation data visualization using mixed reality with Microsoft HoloLens™
Michael Spitzer, Manfred Rosenberger and Martin Ebner
Abstract
Simulations and test beds are difficult topics, especially for inexperienced students or
new employees in the mechanical engineering domain. In this chapter, a HoloLens app and a
CAD/Simulation workflow are introduced to visualize CAD models and sensor and simulation
data of a test run on an air conditioning system test bed. The main challenge is to visualize
temperature and pressure changes within opaque parts, such as tubes, compressors, condensers
or electrical expansion valves. The HoloLens app supports various simulations and CAD
visualizations. As an example, we implemented colouring of the temperature and pressure changes of
the test bed parts as Mixed-Reality (MR) overlays. The main purpose of the HoloLens app is to
reduce the learning effort and time to understand such simulations and test bed settings.
Additionally, the app could be used as a communication tool between different departments to
transfer experiences and domain specific knowledge.
Keywords: HoloLens, Augmented Reality (AR), Mixed Reality (MR), Technology Enhanced
Learning, Wearable Enhanced Learning
Introduction
With the growth of technology, new possibilities are arising to improve the learning process. In
recent years, several smart glasses have emerged on the market at a reasonable price. One of these devices
is the HoloLens. Microsoft released their Head-Mounted Display (HMD) as a Development
Edition in March 2016 (Tsunoda, 2016). The HoloLens is a see-through head-mounted device.
Microsoft defines their HMD as a Mixed-Reality (MR) device.
MR covers a broad spectrum of the Reality-Virtuality Continuum between the real
environment and the virtual environment. An MR environment visualizes real-world and virtual-world
entities within a single display (Milgram, Takemura, Utsumi, & Kishino, 1995). Within the
HoloLens display, real-world objects are shown as they are because of the see-through display;
virtual objects are displayed as holograms.
In this chapter, we follow the idea of using this technology to visualize complex processes
in the field of view to make those processes more efficient. Smart glasses were used in several
studies to explain complex processes and to improve the learning outcome. The studies are listed
and analysed in the State-of-the-Art Analysis section.
We strongly adhere to two main concepts of successful learning: learning by doing and
interaction (Dewey, 1916) and learning through visualization (Holzinger & Ebner, 2003;
Vygotsky, 1978). Holograms or 2D planes such as browser windows or other 2D applications can
be pinned to world space at specific locations in the room. Figure 10.1, a screen shot made with
HoloLens, shows a pinned browser window above the test bed. This feature can be used to add
context-based information to the learning situation.
The HoloLens can be used as an untethered solution; hence, no infrastructure is necessary. All
functionality can be programmed into the app. This is very useful because our experience has shown
that IT infrastructure in the educational domain often lags far behind the state of the art
(Spitzer & Ebner, 2015). In this chapter, we describe how to use the HoloLens for learning
purposes. We developed a HoloLens app with two modes, VR and AR, to support learners in
various stages of the test bed design and implementation process. The app concept is described in
the prototype section. The main purpose of the app is to show invisible processes such as
temperature and pressure changes in the field of view of the user to make it easier to understand
the test bed functionality.
Overview of the state of the art
This section provides an overview of current trends and uses of wearable enhanced learning
relevant to MR. Smart glasses open new possibilities for learning scenarios. With smart glasses,
many issues of other hardware solutions are solved. Smart phones and tablets can use AR
technology, but they must be carried by hand during the learning scenario. Tablets are often large
and heavy, which could influence the learning experience, particularly when learners need their
hands free (Leighton & Crompton, 2017).
Figure 10.1 Pinned browser window above the test bed.
We have already used smart glasses in education and have developed a prototype to support
distance learning. Users of smart glasses, in that case the ReconJet (Recon, 2018), were able
to establish a real-time video/audio stream and the instructor drew shapes to mark objects in
the video stream. The prototype was tested in two use-cases. The first was an industrial
use-case: a maintenance procedure of a production machine. It is comparable to the test
bed use-case described in this chapter. To be able to maintain a machine or a
thermodynamic test bed, the functionality of such a device must be clear. With the
distance learning approach, we gave live support during the process. With the HoloLens
app, we provide a solution to enable unsupervised learning. The second use-case of the
distance learning approach was a generic fine-motor-skills task: to assemble a wooden toy
without any a priori information. The experiments showed that supporting the subject during
assembly of the toy by displaying information in the smart glasses was very effective
(Spitzer, Nanic, & Ebner, 2018). Additionally, we developed a scenario for learning
knitting while using smart glasses. Learners need both hands to perform the task of
knitting; therefore, smart glasses were used (Spitzer & Ebner, 2016).
Smart glasses were used in several other educational scenarios: In the medical domain,
Google Glass was used to solve communication and surgical education challenges (Moshtaghi et al.,
2015). Additionally, in context-aware learning scenarios, smart glasses were used to give support
during a physics experiment of visualizing invisible processes (frequency of sound) (Kuhn et al.,
2016). This learning scenario faced challenges similar to those of our use-case.
AR technology not only supports the learning scenario in educational settings; it also raises
the motivation of the learners by providing descriptive visualizations of context-based information
(Freitas & Campos, 2008). AR technology has the potential to engage and motivate learners and
help them to understand real-world challenges more efficiently by providing additional
context-based information during the learning scenario (Lee, 2012).
Use-case and learning scenario
In this chapter, a practical example is carried out. The department of thermodynamics
finds it difficult to transfer its domain-specific knowledge to other departments or
customers. This knowledge-related problem can be solved by providing additional information
such as learning material and descriptive visualizations. Additionally, they face the challenge of
training and teaching new researchers or students the use of thermodynamics test beds. In this
chapter, the following questions are addressed:
• RQ1: Are MR applications suitable for supporting learning scenarios in which students or new employees learn how to use thermodynamic test beds?
• RQ2: Can MR applications be used to foster the transfer of domain-specific knowledge to experts of other domains or to customers?
This section describes the life cycle of a thermal test bed, beginning with the Computer-Aided
Design (CAD) construction, followed by the assembly and start-up procedure of such a test bed. The
main motivation for using an MR application in a learning scenario is that new employees or students
can already train on the test bed during its design phase. Usually, they must
wait until the whole test bed is built and operational. The research idea is that during the assembly
phase the missing parts can be placed as holograms in the field of view to verify their correct
position and to gain a better understanding of how the test bed will operate once it is finished.
Additionally, basic training scenarios such as localization of the parts within the test bed can be
performed in early stages. Figure 10.2 shows a comparison between the usual learning approach
and the MR learning scenario. Usually, in the CAD phase there is no learning material available
since the test bed is in the construction phase; the only available data is the CAD, which is
improved iteratively. During the assembly phase, some changes can be made to the test bed. In the
usual learning approach, the learner can only perform learning scenarios while using the test bed in
the operational state. In the MR scenario, the learner should be able to walk through simulated
test-bed experiments in VR mode. The two modes of the HoloLens app are explained in detail in
the prototype section. For the VR mode, one reasonable state of the CAD is sufficient to create a
VR learning environment. The learner is able to train in VR mode even if no physical test bed has
been built. During the assembly phase, more and more parts are physically available. The missing
parts can be placed as hologram overlays, and the learner can perform spatial learning (location
of parts) in AR mode.
Additionally, simulation data can be visualized to train learning situations that have already
been simulated. During the operational phase, previously performed test runs can be displayed to train
learning situations even if the test bed is not in service. In our new learning approach, learners can
perform learning and training activities in all three states of the test bed life cycle.
Figure 10.2 Learning approaches.
Design of the learning scenario
A learning scenario should always reference a clearly defined objective. In the validation method,
the context and the learner influence the design of the learning scenario (Airasian et al., 2001;
Bloom, Englehart, Furst, Hill, & Krathwohl, 1956; Starr, Manaris, & Stalvey, 2008;
Weidenmann, 1993). The design of such a learning scenario can be compared to the requirements
of the engineering phase for product development. There is a need for detailed pre-examination and
requirements analysis (Meyer, 2003; Ross & Schoman, 1977).
Additionally, aspects of learning psychology have to be considered (Schulmeister, 2004).
The learning objective can be separated into knowledge and competences. Knowledge can be
grouped into conceptual, procedural and declarative knowledge. Knowledge is the basis for
acquiring competences. The acquired competences can be transferred to other domains (Heyse &
Erpenbeck, 2004; Hudson & Miller, 2005; North, 2011). We use a context-based approach to
support the learner, by showing location-based information of parts, temperature and pressure.
Several learning methods use location to help learners remember content. One of these methods is
called Loci, a method of memory enhancement using spatial memory to recall information. This
method was developed in ancient Rome and Greece to support memorizing long numbers and texts
(Yates, 1966). The Loci method has already been implemented in a mobile app called Loci
Spheres. The app has been evaluated in an in-the-wild study. Visual stimuli such as spatial and
panning loci provide higher perceived system support (Wieland, Müller, Pfeil, & Reiterer, 2017).
Research method
This section describes the method we used to develop the software artefact. We used the
prototyping approach we had already used in another learning scenario (Spitzer et al., 2018).
Prototyping is very effective when new technology is introduced, in our case, the HoloLens. The
prototyping approach is necessary for learning how the device works, which features are available
and how sophisticated those features are before developing a full-scale system (Alavi, 1984).
First, we identified the basic requirements: to show invisible, physical conditions. Then
we built the prototype and tested it with key users. We improved the prototype iteratively to
follow agile software development principles (Cockburn, 2001). The prototyping approach was
necessary for quickly evaluating a functional software artefact.
The prototype was tested with subjects from the target group in a qualitative evaluation
(thinking aloud) to gather early feedback. This feedback was then considered for improving the app
iteratively. We have followed this approach in other studies (Spitzer & Ebner, 2015, 2017). After
the thinking-aloud test, the users answered the following questions:
• Do you understand the functionality of the thermodynamic test bed?
• What are the risks of operating the test bed?
• Please describe the parts of the test bed and their functionality.
The purpose of these questions was to identify issues with the learning situation at a very
early stage. When the prototype reaches a more mature state, a final evaluation will take place
to verify the success of the learning situation compared to other learning materials such as
paper-based manuals and video instructions. The target group for the MR test bed is made up of
new employees or students in the mechanical engineering domain who are not yet familiar with
real-world simulations and test beds. They can train and learn how to use a thermodynamic
test bed without being at risk.
Prototype
We implemented a MR application to visualize the temperature and pressure of a car air
conditioning unit test bed. The first step was to create a CAD model of the test bed. The test bed
consists of the following components:
1. Frame
2. Compressor
3. Condenser
4. Filter
5. Expansion valve
6. Vaporizer
7. Measuring probes
8. Analogue gauges
9. Emergency switch
10. Pipes
The test bed model is shown in Figure 10.3.
Figure 10.3 Test bed CAD.
The next step was to define measurement points, which were then used to visualize
measurement and simulation data. At first, we implemented the VR mode. The advantage of
providing a VR mode is that learners are able to see the 3D model as a hologram even though the
test bed is not physically built yet.
Figure 10.4 VR mode of the test bed.
Figure 10.4 shows the test bed in VR mode. The whole test bed is displayed as a
hologram without any related real-world context. The blue cursor is used as a pointing area
and is controlled by the HoloLens user's head movement. Instead of a mouse or other
pointing devices, head movement (direction of view) is used to determine the region of interest.
The blue cursor is always centred in the user's direction of view. This kind of user
interaction was unfamiliar to all subjects, but they adapted to this input technique quickly.
When the blue cursor hits a part of the model, the part is highlighted grey. Figure 10.4 shows
the VR mode with the condenser highlighted. A video of the VR mode is available (Spitzer,
2018b). All parts of the model are augmented with a text description and title. When the user
air-taps a part, the information is displayed in 3D space.
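The head-gaze selection described above can be sketched in a few lines. The actual app relies on the HoloLens's physics raycasting in Unity; the following Python stand-in illustrates the same idea with a simple ray/axis-aligned-bounding-box test, and the part names and box coordinates are made up for illustration.

```python
# Illustrative sketch: selecting the part hit by the head-gaze ray.
# The real app uses Unity raycasts; this shows the idea with the
# classic slab method for ray/axis-aligned-bounding-box intersection.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Return the entry distance t if the ray hits the box, else None."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:                 # ray parallel to this slab
            if o < lo or o > hi:
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    if t_near <= t_far and t_far >= 0:
        return max(t_near, 0.0)
    return None

def pick_part(head_pos, gaze_dir, parts):
    """Pick the nearest part whose bounding box the gaze ray hits."""
    best, best_t = None, float("inf")
    for name, (bmin, bmax) in parts.items():
        t = ray_hits_aabb(head_pos, gaze_dir, bmin, bmax)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best

# Hypothetical part bounding boxes (metres, in room coordinates):
parts = {
    "condenser": ((1.0, 0.0, 2.0), (1.5, 0.5, 2.5)),
    "compressor": ((-1.0, 0.0, 3.0), (-0.5, 0.5, 3.5)),
}
print(pick_part((1.2, 0.2, 0.0), (0.0, 0.0, 1.0), parts))  # condenser
```

The picked part is the one that would be highlighted grey under the cursor.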
The part information is not implemented within the HoloLens application itself. All
text information is accessed from a web server. The advantage of this approach is that the
text can be easily adapted to various training and learning situations without reprogramming
the HoloLens application. The part information is shown as a text plane in 3D space, which
adapts to the user’s distance and rotation automatically.
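A minimal sketch of this server-based lookup, assuming a hypothetical endpoint and JSON payload shape; the real service's URL and format are not specified in the text.

```python
# Hypothetical sketch of the server-side part lookup: part descriptions
# live on a web server, so texts can be changed without rebuilding the
# HoloLens app. Endpoint name and JSON shape are assumptions.
import json
from urllib.request import urlopen

BASE_URL = "http://example.org/testbed/parts"   # placeholder address

def parse_part_info(payload):
    """Extract title and description from the (assumed) JSON payload."""
    data = json.loads(payload)
    return data["title"], data["description"]

def fetch_part_info(part_id, opener=urlopen):
    """Load the label text for one part; called when the user air-taps it."""
    with opener(f"{BASE_URL}/{part_id}") as resp:
        return parse_part_info(resp.read())

# Example payload as the app might receive it:
sample = '{"title": "Verdampfer", "description": "Evaporates the coolant."}'
print(parse_part_info(sample))  # ('Verdampfer', 'Evaporates the coolant.')
```

Keeping the parser separate from the HTTP call makes the label text easy to swap per learning situation, which is the advantage the text describes.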
The vaporizer (‘Verdampfer’) is selected, and the current pressure from a simulation or
an already performed test run is displayed. The white button below (description – ‘Beschreibung’)
opens a more detailed description of the selected part. A pole connects the part with the
corresponding text information to create the appropriate context. Users can freely move the text
information within the 3D space.
This functionality (VR mode and part description) can be used in the early test-bed
development stage to train and teach students or new employees. They learn the exact 3D position
of parts in the test bed and can connect the related context information (part descriptions) more
efficiently.
When the user taps on the tubes, the animation of temperature or pressure distribution in
the test bed for a simulated or already performed test run is displayed. White rings indicate the
flow direction of the coolant. Every measurement point is coloured according to the measured or
simulated value. The lowest value of the simulation is coloured blue and the highest value is
coloured red. For the visualization of the tubes between the measurement points, we use colour
interpolation. Figure 10.5 shows an example of colourization in VR mode. The next step was to
connect the software prototype with the simulation data. We exported the simulation data from a
simulation tool as comma-separated values. To visualize current measurements of an ongoing test
run, we connected the measurement software with the HoloLens app via a web service.
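The colouring rule described above (lowest value blue, highest red, linear interpolation in between) can be sketched as follows; the CSV column names are assumptions, since the exact export format of the simulation tool is not given.

```python
# Sketch of the colouring step: measurement points are mapped onto a
# blue-to-red scale (lowest value blue, highest red), and tube segments
# between points are coloured by linear interpolation. CSV column names
# ("point", "temperature") are assumptions about the export format.
import csv, io

BLUE, RED = (0, 0, 255), (255, 0, 0)

def value_to_colour(v, v_min, v_max):
    """Linearly blend blue -> red over the simulated value range."""
    t = 0.0 if v_max == v_min else (v - v_min) / (v_max - v_min)
    return tuple(round(b + t * (r - b)) for b, r in zip(BLUE, RED))

def colour_measurements(csv_text, column="temperature"):
    """Map each measurement point in the CSV export to an RGB colour."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    lo, hi = min(values), max(values)
    return {r["point"]: value_to_colour(float(r[column]), lo, hi) for r in rows}

sim = "point,temperature\nP1,20.0\nP2,60.0\nP3,100.0\n"
print(colour_measurements(sim))
# P1 -> (0, 0, 255), P2 -> (128, 0, 128), P3 -> (255, 0, 0)
```

The same blend function can colour intermediate tube vertices between two measurement points by interpolating their values first.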
Figure 10.5 VR mode placed above the real-world test bed.
The red and blue airflows indicate hot and cold air. This information is useful for safety
reasons. Operators should stay away from hot sections of the test bed. A lot of information is
transferred to the learner with the VR mode even though the test bed does not physically exist yet.
During the assembly (second) phase, users can switch to the AR mode. Figure 10.6 shows a screen
shot of the AR mode. A video capture of the HoloLens app is available (Spitzer, 2018a). The blue
cursor indicates the view of the user. The compressor hologram is shown and can be tapped to show
the description. The same functionality as in the VR mode is provided. The major difference is that
part holograms are shown only when the cursor hits a part of the model. Since all parts were already
physically available, this approach was the obvious choice. We decided not to implement occlusion (hidden
surface detection) because it seemed better to show the simulation regardless of whether other parts
are located between the tubes and the line of sight. We decided to implement the visualization of
the AR mode similarly to the VR mode to provide a continuous learning experience in all test-bed
phases.
Figure 10.6 AR mode.
The visualization process and workflow are shown in Figure 10.7. First, we created the
CAD model of the test bed. Then we calculated simulations of different learning scenarios for
several test-run types. The values for the simulation (temperature and pressure) were provided
via a Representational State Transfer (REST) web service. This web service was also used to access stored
measurement data of real test runs. The next step was to create a visualization of the temperature
and pressure distribution. We decided to colour the tubes according to the simulated or measured
values. This process is a general approach, independent of the device used. The same process
can be used to display the learning scenario on a tablet or smart phone. Only the last step is
adapted to the HoloLens. The focus of this process was to create a generic architecture for
multiple learning scenarios.
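The live-measurement path of this workflow can be sketched as a simple polling step; the endpoint URL and JSON field names are placeholders, not the project's actual interface.

```python
# Sketch of the live-measurement path of the workflow: the app
# periodically polls the (assumed) REST endpoint and feeds the values
# into the colouring step. URL and JSON field names are assumptions.
import json

MEASUREMENT_URL = "http://example.org/testbed/measurements/latest"  # placeholder

def parse_measurements(payload):
    """Turn the (assumed) JSON body into {measurement point: value}."""
    return {m["point"]: float(m["value"]) for m in json.loads(payload)}

def poll_once(fetch):
    """One polling step; `fetch` returns the raw HTTP body (injected for testing)."""
    return parse_measurements(fetch(MEASUREMENT_URL))

# Example body as the web service might return it:
body = '[{"point": "P1", "value": "21.5"}, {"point": "P2", "value": "84.0"}]'
print(poll_once(lambda url: body))  # {'P1': 21.5, 'P2': 84.0}
```

Because the device-independent steps end at this dictionary, the same polling code could drive a tablet or smart-phone view, as the text notes.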
Figure 10.7 Information visualization with mixed reality.
Discussion and conclusion
With our prototype, we addressed two research questions. First, we investigated whether MR
applications are suitable for supporting learning in the thermodynamic domain. The app was very
helpful even for non-experts. We tested the learning situation with five internal test users in a
qualitative evaluation. Additionally, we tested the learning situation with external audiences at
several public events (open houses) without performing formal evaluations. The test users did not
receive any detailed a priori information about the test bed. They explored the HoloLens app mainly in
AR mode with the physical test bed on site. We used a thinking-aloud approach to validate their
understanding of the test bed and the running simulations.
The main purpose of the qualitative evaluation was to identify major flaws in our
design. We are now improving the app according to the feedback and will perform thorough
evaluations of the app and the learning scenario after the prototype reaches a more mature state.
We are now implementing the following improvements:
• The highlighting of the parts in the field of view is sometimes confusing when users look around quickly. This issue is fixed by delaying the highlighting until the user stops looking around and focuses on a certain part.
• The poles and part information stay in the field of view even after the user has selected another part. This issue is fixed by closing all open text fields and poles after the user selects another part. In this way, only the currently selected part is displayed.
• The user interface is at a very early stage. Since the Microsoft HoloLens is very new, it is very challenging to design and implement a sophisticated user interface. There are no long-term guidelines or experiences in building UIs in 3D space for smart glasses. We are now improving the UI iteratively. We are considering more accurate algorithms for adapting label size to the distance of the user so that label placement will not block the field of view. The most challenging part is to show only as much information as is needed for the learning context; we do not want to pollute the field of view with unnecessary information that could distract and confuse the user.
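The first improvement above, delaying the highlight until the gaze settles, can be sketched as a small dwell-time filter; the 0.5-second threshold is an assumed value, not one reported by the authors.

```python
# Sketch of the first improvement: the highlight only appears after the
# gaze has rested on the same part for a short dwell time, so quick
# glances around the room no longer flicker the highlighting.
# The 0.5 s threshold is an assumption, not a value from the study.

class DwellHighlighter:
    def __init__(self, dwell_seconds=0.5):
        self.dwell = dwell_seconds
        self._candidate = None   # part currently under the cursor
        self._since = None       # time the cursor landed on it
        self.highlighted = None  # part currently highlighted

    def update(self, gazed_part, now):
        """Call every frame with the part under the cursor (or None)."""
        if gazed_part != self._candidate:
            self._candidate, self._since = gazed_part, now
        if self._candidate is None:
            self.highlighted = None
        elif now - self._since >= self.dwell:
            self.highlighted = self._candidate
        return self.highlighted

h = DwellHighlighter()
h.update("condenser", 0.0)         # gaze lands on the condenser
print(h.update("condenser", 0.3))  # still within dwell time -> None
print(h.update("condenser", 0.6))  # dwell reached -> condenser
```

A glance that moves away before the dwell time elapses never triggers a highlight, which is exactly the confusing behaviour the fix removes.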
All test users understood the functionality quickly. It was very helpful that various
simulations could be executed to show different results to the user. It took some time to become
familiar with the input techniques of the device, but they were very intuitive for most of the users;
even wearers of regular glasses did not have problems. One issue was that placing virtual content in a
room had to be thoroughly considered. The more content was added, the more confusing it was
for users. Hence, only context-based content should be displayed; all unrelated content should be
hidden. Another issue was that users were able to move the description labels freely in the room
without any restrictions. After some time, they forgot where they had put the labels in the room.
The poles connecting parts to their descriptions were very helpful in that case. In general, all users
enjoyed watching the MR content with the HoloLens and were very motivated to investigate the
test bed. Since motivation is a key factor in increasing learning effectiveness, technology is very
useful in this learning scenario (Dickinson, 1995) (RQ1). We further investigated how this
technology can be used to foster the transfer of domain-specific knowledge to experts of other
domains or to customers. The feedback of the target group was very promising. With such
technology, it is easier to compare different settings of a certain machine, test bed or other device,
and the effects of the adaptations can be visualized efficiently without understanding the device
fully. This makes it easier to justify decisions when parameterizing the test bed. By visualizing
invisible content in the field of view of the user, even non-technical experts could better understand
complex processes (RQ2).
The introduced learning prototype should also be evaluated according to the aspects
mentioned in this section and compared with alternative learning methods or learning support.
Table 10.1 compares possible alternative learning approaches. This incomplete comparison also
provides an outlook on further evaluations of different learning approaches in the described
learning context. Since our focus in this work was a prototypical implementation, further research
will be performed in the evaluation of this approach compared with other approaches.
The costs of the implementation of the different types of learning support must be
considered. In our case, the effort was justified by the fact that it could be very dangerous and
expensive if students or employees make mistakes while using the test bed (e.g. due to high
temperature and pressure). In other learning situations, the outcome in relation to the effort and
costs should be discussed and considered.
Acknowledgements
This chapter was written at VIRTUAL VEHICLE Research Center in Graz, Austria. The authors
would like to acknowledge the financial support of the COMET K2 – Competence Centers for
Excellent Technologies Programme of the Federal Ministry for Transport, Innovation and
Technology (bmvit), the Federal Ministry for Digital, Business and Enterprise (bmdw), the
Austrian Research Promotion Agency (FFG), the Province of Styria and the Styrian Business
Promotion Agency (SFG).
Table 10.1 Alternative learning support: features/drawbacks (±).

Test bed process description performed by an expert on site
(+) questions are answered by the expert
(+) the expert can explain coherences that are not documented anywhere
(+) existing learning material can additionally be used
(-) support from an expert can be expensive and time consuming
(-) invisible/inaudible signals cannot be perceived by the learner

Test bed simulation with a domain-specific tool on a PC off-site
(+) once the user learns how to use the software, many training and learning situations can be simulated and tested without any further help
(-) domain-specific software is often difficult to use
(-) very time consuming to become familiar with such highly specialized software
(-) information is completely decoupled from the test bed

Previously recorded learning situation or simulation
(+) acceptable effort to create learning content
(-) nearly no interactive participation of the user except for browsing/repeating sections of the video
(-) not every detail can be covered by video
(-) a video has a fixed viewing angle
(-) information is completely decoupled from the test bed

Infra-red goggles
(+) easy to use, and easy to reuse for other learning situations
(-) show only current temperature in pipes; no simulation (VR) mode; test bed must already be built and must be activated and running while observed by the learner
(-) no additional context-based information available

Mixed reality glasses (HoloLens)
(+) augmentation of invisible/inaudible processes
(+) spatial awareness; objects are placed/attached to the real test bed
(+) context-related information can be displayed directly on the test bed
(+) intuitive navigation in 3D space (zoom/pan/context information)
(+) effective learning/recognition effort because of the usage of real-world context in AR mode
(-) high implementation effort
(-) expensive hardware (smart glasses)
(-) infrastructure (WLAN, server) necessary to display simulations and measurement values with HoloLens
References
Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C.
(2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of
educational objectives (L. W. Anderson & D. R. Krathwohl, Eds.; 2nd ed.). New York, NY:
Allyn & Bacon.
Alavi, M. (1984, June). An assessment of the prototyping approach to information systems
development. Communications of the ACM, 27(6), 556–563. doi:10.1145/358080.358095
Bloom, B. S., Englehart, M., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of
educational objectives: Handbook I. Cognitive domain. New York, NY: David McKay.
Cockburn, A. (2001). Agile software development (Vol. 177). Cary, NC: Addison-Wesley
Professional.
Dewey, J. (1916). Democracy and education. An introduction to the philosophy of education
(reprint 1997). Rockland, NY: Free Press.
Dickinson, L. (1995). Autonomy and motivation a literature review. System, 23(2), 165–174.
doi:10.1016/0346-251X(95)00005-5. Retrieved from
www.sciencedirect.com/science/article/pii/0346251X95000055
Freitas, R., & Campos, P. (2008). Smart: A system of augmented reality for teaching 2nd grade
students. In Proceedings of the 22nd British HCI Group Annual conference on People and
Computers: Culture, creativity, interaction – Volume 2 (pp. 27–30). Swindon, UK: BCS
Learning & Development Ltd. Retrieved from
http://dl.acm.org/citation.cfm?id=1531826.1531834
Heyse, V., & Erpenbeck, J. (2004). Kompetenztraining. Informations-und Trainingsprogramme,
2. Retrieved from www.ciando.com/img/books/extract/3799263675_lp.pdf
Holzinger, A., & Ebner, M. (2003). Interaction and usability of simulations & animations: A
case study of the flash technology. In Proceedings of the human-computer interaction interact
2003 (pp. 777–780). IOS Press.
Hudson, P., & Miller, S. P. (2005). Designing and implementing mathematics instruction for
students with diverse learning needs. Boston, MA: Pearson/Allyn and Bacon.
Kuhn, J., Lukowicz, P., Hirth, M., Poxrucker, A., Weppner, J., & Younas, J. (2016, October).
gPhysics – Using smart glasses for head-centered, context-aware learning in physics
experiments. IEEE Transactions on Learning Technologies, 9(4), 304–317.
doi:10.1109/TLT.2016.2554115
Lee, K. (2012, March 1). Augmented reality in education and training. TechTrends, 56(2), 13–
21. doi:10.1007/s11528-012-0559-3
Leighton, L. J., & Crompton, H. (2017). Augmented reality in K-12 education. In G. Kurubacak &
H. Altinpulluk (Eds.), Mobile technologies and augmented reality in open education (pp. 281–
290). Hershey, PA: IGI Global. doi:10.4018/978-1-5225-2110-5.ch014. Retrieved from
www.igi-global.com/gateway/chapter/178247
Meyer, H. (2003). Zehn Merkmale guten Unterrichts. Pädagogik, 10(2003), 36–43.
Microsoft. (2017, November). Microsoft HoloLens. Retrieved 2017-11-27, from
www.microsoft.com/en-us/hololens
Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1995). Augmented reality: A class of
displays on the reality-virtuality continuum. In Telemanipulator and telepresence technologies
(Vol. 2351, pp. 282–293). International Society for Optics and Photonics.
doi:10.1117/12.197321
Moshtaghi, O., Kelley, K. S., Armstrong, W. B., Ghavami, Y., Gu, J., & Djalilian, H. R. (2015).
Using google glass to solve communication and surgical education challenges in the operating
room. The Laryngoscope, 125(10), 2295–2297. doi:10.1002/lary.25249. Retrieved from
https://onlinelibrary.wiley.com/doi/abs/10.1002/lary.25249
North, K. (2011). Wissen in Organisationen. In Wissensorientierte Unternehmensführung:
Wertschöpfung durch Wissen (pp. 35–68). Wiesbaden: Gabler. doi:10.1007/978-3-8349-6427-
4_3
Recon. (2018). ReconJet. Retrieved 2018-03-26, from www.reconinstruments.com/products/jet/
Ross, D. T., & Schoman, K. E. (1977, January). Structured analysis for requirements definition.
IEEE Transactions on Software Engineering, SE-3(1), 6–15. doi:10.1109/TSE.1977.229899
Schulmeister, R. (2004). Didaktisches Design aus hochschuldidaktischer Sicht – ein Plädoyer
für offene Lernsituationen. In U. Rinn & D. M. Meister (Eds.), Didaktik und Neue Medien (Vol.
21, pp. 19–49). Münster: Waxmann. Retrieved from www.zhw.uni-
hamburg.de/pdfs/Didaktisches_Design.pdf
Spitzer, M. (2018a). Test bed – AR mode. Retrieved 2018-02-10, from
https://youtu.be/LB7SiJKzHc8
Spitzer, M. (2018b). Test bed – VR mode. Retrieved 2018-02-10, from https://youtu.be/Bx-
lmA9pc_E
Spitzer, M., & Ebner, M. (2015). Collaborative learning through drawing on iPads. In EdMedia:
World Conference on Educational Media and Technology (pp. 806–815). Vancouver, BC:
Association for the Advancement of Computing in Education (AACE).
Spitzer, M., & Ebner, M. (2016, June 29). Use cases and architecture of an information system to
integrate smart glasses in educational environments. In Proceedings of EdMedia: World
Conference on Educational Media and Technology 2016 (pp. 51–58). Vancouver, BC: AACE.
Spitzer, M., & Ebner, M. (2017, June 19). Project based learning: From the idea to a finished Lego
Technic artifact, assembled by using smart glasses. In Proceedings of EdMedia: World
Conference on Educational Media and Technology 2017 (pp. 196–209). United States:
Association for the Advancement of Computing in Education.
Spitzer, M., Nanic, I., & Ebner, M. (2018, January 27). Distance learning and assistance using smart
glasses. Education Sciences, 8(1), 21. doi:10.3390/educsci8010021
Starr, C. W., Manaris, B., & Stalvey, R. H. (2008, March). Bloom’s taxonomy revisited:
Specifying assessable learning objectives in computer science. SIGCSE Bulletin, 40(1), 261–
265. doi:10.1145/1352322.1352227
Tsunoda, K. (2016, February). Introducing first ever experiences for the Microsoft HoloLens
development edition. Retrieved 2018-01-17, from
https://blogs.windows.com/devices/2016/02/29/introducing-first-ever-experiences-for-the-
microsoft-hololens-development-edition/
Vygotsky, L. (1978). Interaction between learning and development. Readings on the
Development of Children, 23(3), 34–41.
Weidenmann, B. (1993). Instruktionsmedien (Arbeiten zur Empirischen Pädagogik und
Pädagogischen Psychologie nr. 27.). München: Hochschule der Bundeswehr.
Wieland, J., Müller, J., Pfeil, U., & Reiterer, H. (2017). Loci spheres: A mobile app concept
based on the method of loci. In M. Burghardt, R. Wimmer, C. Wolff, & C. Womser-Hacker
(Eds.), Mensch und Computer 2017 – Tagungsband (pp. 227–238). Regensburg: Gesellschaft
für Informatik e.V. doi: 10.18420/muc2017-mci-0235. Retrieved from
https://dl.gi.de/handle/20.500.12116/3265
Yates, F. A. (1966). The art of memory. Chicago, IL: University of Chicago Press.