Enlightening Patients with Augmented Reality
Andreas Jakl*, Anna-Maria Lienhart†, Clemens Baumann‡, Arian Jalaeefar§, Alexander Schlager¶, Lucas Schöffer||, Franziska Bruckner**
Institute of Creative\Media/Technologies
University of Applied Sciences St. Pölten, Austria
Figure 1: The EPAR app in use, showing how strabismus works. The Augmented Reality visualization helps users explore the three-dimensional effects, the surgery and the recovery process. Interactive storytelling engages the user through multiple methods.
ABSTRACT
Enlightening Patients with Augmented Reality (EPAR) enhances patient education with new possibilities offered by Augmented Reality. Medical procedures are becoming increasingly complex, and printed information sheets are often hard for patients to understand. EPAR developed an augmented reality prototype that helps patients with strabismus to better understand the processes of examinations and eye surgeries. By means of interactive storytelling, three target groups identified through user personas were able to adjust the level of information transfer based on their interests. We performed a two-phase evaluation with a total of 24 test subjects, resulting in a final system usability score of 80.0. For interaction prompts concerning virtual 3D content, visual highlights were considered to be sufficient. Overall, participants thought that an AR system as a complementary tool could lead to a better understanding of medical procedures.
Index Terms: Human-centered computing—Mixed / augmented reality; Human-centered computing—Interface design prototyping; Human-centered computing—Interaction design theory, concepts and paradigms; Human-centered computing—Usability testing
*e-mail: andreas.jakl@fhstp.ac.at
†e-mail: dh181804@fhstp.ac.at
‡e-mail: clemens.baumann@fhstp.ac.at
§e-mail: arian.jalaeefar@fhstp.ac.at
¶e-mail: alexander.schlager@fhstp.ac.at
||e-mail: lucas.schoeffer@fhstp.ac.at
**e-mail: franziska.bruckner@fhstp.ac.at
1 INTRODUCTION
Low health literacy is a well-known and serious issue. Doak et al. [14] state that 1 in 5 American adults lacks the skills to fully understand the implications of processes related to their health, including patient education or their own post-intervention responsibilities. Research
suggests that audio or computer-aided instructions may be helpful.
Studies have shown that spoken instructions lead to a higher rate
of understanding compared to text for adults with lower literacy
skills [51]. Multimedia systems have additional positive effects on
patients’ educational needs [56]. Combining these aspects through a
smartphone can provide a great benefit in patient education.
We developed and evaluated a prototype Augmented Reality (AR)
mobile application called Enlightening Patients with Augmented
Reality (EPAR). The app is designed for patient education about
strabismus and the corresponding eye surgery. It is intended to be
used in addition to the doctors’ mandatory consultations (for the
EPAR App in use see Fig. 1). The project was completed within
10 months and combines multiple educational approaches that have
been identified as useful in previous literature. Furthermore, it
evaluates the results through user experience testing based on patient
education personas. Our approach as described in this paper:
• Definition of scenarios of patient education for examinations and operations for strabismus.
• Interactive storytelling for health education based on developed storyboards for immersive AR visualization.
• 3D modelling and sound, including audio recording and text-to-speech generation for content creation.
• Technical prototype mobile AR app implementation with special focus on usability.
• User experience and interaction evaluation of the prototype.
To achieve optimized results, the development process was split into two phases, with the results of the first evaluation being incorporated into a second, improved prototype. Specifically, the aim was to gain insights into the three main research questions (see Sect. 5.1): which types of interaction prompts work best for virtual objects, whether the 3D augmentation adds value for users, and whether it is clear how user interactions with 3D objects modify the story.
2 STATE OF THE ART
Patient education needs to target a broad audience as health affects
everyone from children to the elderly. Because of this, research
results from different target groups were used as a base for the
storytelling design within EPAR.
2.1 Educational AR
As described in a survey by [16], benefits of AR in education commonly observed by different papers are: better understanding of content, long-term memory retention, better collaboration and increased motivation. The meta-study performed by [42] established an "effect size" achieved by the learning tools through comparing the results of the control group with the AR group. While the effect turned out to be widely variable, the mean effect size was calculated as 0.56, which is considered moderate.
In contrast, the challenges of AR-based learning identified by [36] are: attention tunneling, usability difficulties, ineffective classroom integration and learner differences.
A case study for developing AR apps for teaching was performed
by [13]. In their evaluation with two groups of pupils, they compared
the learning effect of AR with traditional multimedia content. The AR-supported learning process turned out to have a stronger effect on increasing knowledge, according to a pre-test and post-test questionnaire. In addition, participating teachers observed higher motivation. Another study showing that students could improve their learning and achieve better academic results with AR was [53]. They developed a butterfly garden in AR with the aim that biology students could walk around the campus and observe virtual butterflies, which resulted in heightened motivation to study. [3] showed that perceived usefulness and enjoyment are
important factors when developing learning platforms for AR.
What makes AR so effective for learning are its spatial quali-
ties. Because of the 3D properties of AR, the story appears more
interesting and the user can pay attention for a longer period of
time [28].
Within the medical field, most AR prototypes for medical applications focus on support and education for doctors and medical students. [27] supports doctors in planning and completing oral medical procedures, while [30] helps mentors and mentees with the augmentation of surgical tele-monitoring. Further educational aspects in this field include AR edutainment systems for learning bone anatomy [48], AR for medical diagnosis [8] and rehabilitation [46].
Only a few projects focus on the connection between Vir-
tual / Augmented Reality and patient education. Several studies
have been performed using the Virtual Environment for Radiother-
apy Training (VERT) system. The linear accelerator is displayed in
a 3D environment as well as the planned radiotherapy based on the
underlying data of computed tomography (CT). In the study by [52], 83 % of the interviewed patients understood the planning and implementation of radiotherapy better with the aid of 3D visualization.
The perception of VERT was studied by [50] as an innovative in-
formation delivery tool for prostate cancer patients. They identified
the advantages of using VERT as a patient information tool and
examined the level of knowledge gain because of the system. [23]
investigated if education with VERT influenced the residual set-up
errors during irradiation.
EPAR also builds on the results of the AR prototype "VIPER – Virtual Patient Education in Radiotherapy" developed by Alexander Raith [37] at St. Pölten University of Applied Sciences. Here, radiotherapy patient education is represented by a freely placeable hologram and audio recordings of real sounds heard during therapy, while a pre-recorded speaker explains the process. A prototype was
tested by health experts for patient education in radiotherapy and
was then evaluated with a questionnaire. Results showed that experts
believed that patients who use VIPER are able to understand the
radiotherapy process better. It also reduces their anxiety about the
procedure as a whole. They did not expect or were not sure that the
application would additionally save time.
As the scenario of strabismus is centered on misaligned three-dimensional movements of the eyes, visualization through AR could lead to a better understanding compared to traditional 2D paper- or
video-based patient education. Therefore, part of the questionnaire
and specifically research question 2 explicitly query whether the
application would be a good complementary tool to increase under-
standing. This information is crucial to develop tools that provide
value to users.
2.2 Interactive Storytelling
AR enables storytelling to be rethought in new ways: it builds on the concept of digital storytelling, which has established itself as an important topic over the last 20 years, e.g. in [10, 21, 29, 31].
For an investigation of storytelling in Augmented and Virtual
reality, the concepts of interactivity and immersion are mentioned
by various sources [12, 17, 35, 40]. Bucher [7] modifies already
established dramaturgies with interactive elements: while the Inter-
active Three-Act Structure and the Interactive Five-Act Structure
follow rather linear storylines, the Interactive Beat-Based Structure
is already oriented towards web and gaming formats.
For Ryan [40], immersion occurs when a user enters a virtual world, is absorbed in it, and no longer distinguishes between reality and virtuality. This immersion can either be achieved by technical setups such as VR experiences, or, alternatively, the spectator can be immersed into a story world by imagination and narration alone. She combines this concept of mental immersion with interactive storytelling into a Poetics of Immersion based on spatial, temporal and emotional categories: "spatial immersion, the response to setting; temporal immersion, the response to story; and emotional immersion, the response to characters." The poetics of interactive dramaturgies are also exemplified in a typology connecting interactivity with the Epic Plot, the Epistemic Plot, the Dramatic Plot and the Soap Opera Plot.
The advance of digital technologies has a significant influence on interactive storytelling, as digital storytelling is increasingly shaped by computer-supported storytelling systems. Multiple artificial intelligence (AI)-enhanced storytelling experience managers include the Automated Story Director (ASD), Player-Specific Stories via Automatically Generated Events (PaSSAGE), and Player-Specific Automated Storytelling (PAST) [38]. For [55], computer
technologies were viable for the development of the StoryCube. In
this tangible interactive interface, children create their own 3D en-
vironment in which a story takes place. In their user study, they
discovered that the children were able to easily learn the interface,
and that the interactive technology motivated and inspired them.
Digital technologies not only made it possible to use AI supported
systems and 3D graphics for storytelling, they have also paved the
way for interactive fiction. Interactive hypertext has been in use since the end of the 20th century and has recently again gained attention for its potential storytelling capabilities. Tools like Twine¹ are incorporated for writing interactive stories. They are also used to plan stories with branching-path narratives that are subsequently developed on other platforms like Unity or Ren'Py² [11].

¹https://twinery.org/, accessed: November 12th, 2019
Several studies investigated the process of developing interactive stories specifically for VR and AR in various areas. Azuma [2]
identified three strategies for location-based Mixed and Augmented
Reality storytelling: in Reinforcing, a significant historical or archi-
tectural location is chosen and complemented with new information
by audio, video or 3D augmentation. For Reskinning, no specific location is necessary; instead, an arbitrary environment is enhanced by specific virtual content suitable for the story. Remembering is
also connected to specific locations but more focused on personal
histories, as the created content, e.g. of a wedding, can be watched at
the actual location where it took place. Further projects focusing on
storytelling aspects in AR include storytelling for urban tourism [34],
AR transmedia storytelling for cultural content within printed pub-
lications [41] or the use of mixed reality storytelling for learning
handicraft [47].
The story and content development of EPAR also strongly refers
to the results of the following two projects: [5] produced a narrative
story set in a future mining-scenario, made specifically for govern-
ment decision-makers. The goal was to make complex and hard
to grasp technologies understandable in an interactive and visually
appealing way. One of their insights was that a stylized aesthetic
was more visually appealing and easier to view than a photo-realistic
representation. There were critical comments about the ability to
mislead, though, by showing images that represent an idealized real-
ity. In that regard, it is a challenge to find the ideal position between
a stylized approach and accuracy.
[57] explored how the level of interactivity within VR storytelling
affects the knowledge-gain for educational purposes in the area of
immunology. They created an interactive VR story with three dif-
ferent versions: 1) low interactivity because of automated systems,
2) high interactivity with many user-controlled action-possibilities
without disturbing the dramaturgy of the application, and 3) medium
interactivity where system automatization and user-controlled ac-
tion are combined. In contrast to their hypothesis, evaluation data
showed no difference in students’ learning gains due to the level of
interactivity. However, results of the questionnaire and interview
suggested that a higher level of interactivity affected the engagement
as well as the attention of students regarding the learning material.
3 DEVELOPING INTERACTIVE STORIES FOR AR
As described in Sect. 2, interactive storytelling has been used for
Augmented Reality applications in various areas. The goal of EPAR
was to develop an interactive story for patient education that en-
hances the users’ feeling of exploring the topic of strabismus and
eye surgery on their own.
3.1 Story Development
Story development was performed during a period of 2 months. We
started with detailed research on eye-surgery processes [6,44, 49]. In
addition, we had an expert on our team, Anna-Maria Lienhart, who
has seven years of experience studying and working with strabismus
in a clinical setting. Based on her expertise and the additional
literature, she identified the main points to convey to users and was
responsible for creating the text later in the project.
After the initial research, we simplified the process into a mini-
mum number of steps – namely opening the conjunctiva, detaching
the eye muscle, recession / resection of the eye muscle, and stitching
– so the viewer would understand the surgery and the methods behind
it without being overwhelmed. We also gathered general information
on strabismus and the healing phase after the surgery.
²https://www.renpy.org/, accessed: November 12th, 2019

Figure 2: The structure of the story as seen in Twine.

With that information, we had a rough layout of the plot and divided the interactive storyline into three chapters (which also correspond to acts in a story) that users would be able to start viewing in any order. The first chapter / act serves as an introduction and
displays how eye muscles are set up in general, as well as which
different functions they have, how healthy eyes are working and
what happens if eyes are misaligned inwards as convergent squint
(esotropia), outwards as divergent squint (exotropia), and upwards
(hypertropia) or downwards (hypotropia) as vertical squint. The
second chapter /act is a step-by-step explanation of the eye surgery
itself. Here, the pace of the story is the quickest and the information
is conveyed in a visually exciting way, containing more animations
than the other chapters. The third chapter / act takes place after the
surgery and gives information about the course of recovery and the
healing process. It resolves the story and releases the tension of the
second act. As viewers can choose the order in which they view the chapters, the story they see will be either linear or rather non-linear. There are no avatars in the story; the main character is
the viewer with the goal to find information about the strabismus
surgery.
3.2 Interaction Design
As we guide the user through the edited information, we explored
an interactive storytelling concept. Two main goals were impor-
tant: first, we wanted the viewer to experience the story and at the
same time receive the most relevant information (according to our
research) from the beginning to the end. Second, the users should be
able to interact with the AR-environment – and if interested, have
access to more information at certain points of the story line. As
explored by [57], we assumed that a higher level of interaction (e.g.,
increase of the number of user interactions in contrast to automated
system responses) would affect the engagement with the topic of
strabismus.
In order to enhance the feeling of interaction, we chose the structure Vector (with Optional Side Branches) suggested by [40]. This structure describes a narrative that is quite straightforward, with opportunities to explore side branches of the story and additional information. Therefore, what mostly changes from play-through to play-through of the EPAR application is the length of the experience. The viewers are able to listen to more information or they can take shorter routes, skipping the "bonus material".

Figure 3: The interaction triangle we built for the interactive story.
We used the open-source tool Twine to lay out the interactivity of the story. The primary goal of Twine is to output an HTML page that a viewer can look at, clicking through hyperlinks to get to different parts of the story. However, as seen in the State of the Art (see Sect. 2), Twine is also effective for planning an interactive story, like an interactive screenplay, without the viewer ever seeing the text, only the implementation. See Fig. 2 for a representation of the story structure for EPAR in Twine.
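To illustrate, a branching structure of this kind can be expressed in Twine's Twee notation; the passages below are a hypothetical sketch, not the actual EPAR script:

```
:: Surgery Intro
The surgeon opens the conjunctiva to reach the eye muscle.
[[Watch the animation->Opening Animation]]
[[Go to the next step->Detach Muscle]]

:: Opening Animation
(The AR animation of opening the conjunctiva plays.)
[[Continue->Detach Muscle]]

:: Detach Muscle
The eye muscle is detached before recession or resection.
[[Hear more details->Muscle Details]]
[[Skip to stitching->Stitching]]
```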
We built an interaction triangle as an interactive storytelling concept which would be applicable for the whole app (see Fig. 3). We divided the content of each chapter (act) into scenes (sequences). At the beginning of each scene, the information is accessible by text and sound to all viewers. After this initial information part, there is a fork in the story and users can choose to either click on the screen to trigger an animation which shows what was heard before, or go to the next scene immediately. The app always allows them to rewind and thus restart the scene; the user can replay the animation or listen to the information again.
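A minimal sketch of this per-scene loop as a state machine, assuming hypothetical method names (the actual EPAR implementation is not published):

```csharp
// Per-scene interaction triangle: info -> fork (animation | next scene),
// with rewind restarting the scene at any time.
public enum SceneState { Info, AwaitChoice, Animation }

public class SceneController
{
    private SceneState state = SceneState.Info;

    // Called when the initial text / audio information has finished.
    public void OnInfoFinished() => state = SceneState.AwaitChoice;

    // Fork option 1: tapping the highlighted object plays the animation.
    public void OnObjectTapped()
    {
        if (state == SceneState.AwaitChoice) state = SceneState.Animation;
    }

    // Fork option 2: the user skips directly to the next scene.
    public void OnNextScene() { state = SceneState.Info; /* load next scene */ }

    // Rewind restarts the current scene; animation and audio can replay.
    public void OnRewind() { state = SceneState.Info; /* replay information */ }
}
```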
Furthermore, we developed three interaction prompt concepts that made sense for the story line. Because we wanted the viewer to
interact with the app by tapping on the 3D models, it made sense to
us to visually highlight the part of the model that is clickable. The
second kind of interaction prompt was haptic feedback in addition
to the visual highlight. As a third type of interaction prompt, we
implemented the previous two types together with an auditory signal intended to alert the viewer.
After writing the script in Twine, we created storyboards. Since
we wanted the storyboards to be interactive and clickable, we de-
cided to create them in Adobe XD³.
4 AR PROTOTYPE SETUP
After the initial storytelling setup, the prototype was centered on
aspects of the technical implementation as well as content creation.
The team spent 5 months developing and refining the prototype
until the first stage evaluation. To integrate feedback and learnings
from this step, an additional 2 months were allowed for further
improvements towards the second stage evaluation.
³https://www.adobe.com/products/xd.html, accessed: November 17th, 2019
Figure 4: The high-level architecture components of the AR Prototype.
4.1 3D Development Environment
The prototype application was built upon the Unity engine⁴, version 2018.3.3f1. The main reason for using Unity was its framework
to manage graphical assets such as 3D models, to include audio as
well as capabilities for building and customizing user interfaces and
interaction through C#. For the app architecture, learnings from
previous projects have been included [26].
We developed the prototype for the Android platform by Google
and its ARCore Augmented Reality subsystem [19], as Android has
a worldwide smartphone OS market share of around 87 % [25] and
therefore potentially reaches the biggest number of users. Unity has
cross-platform support, which allows porting to other devices like
the Apple iPhone in the future for commercial deployment.
With ARCore we have the possibility to place digital objects
into the real world as seen through the live camera view. These
can be anchored to well-trackable places, either by automatically
finding suitable real-world surfaces, or manually by users tapping
on an on-screen location to place the 3D object at the corresponding
real-world position. However, as our target group personas include
people who have never used AR before, we decided to use a pre-
designed paper target image ("marker") in the real world as an anchor, using a feature called Augmented Images by Google [20]. The position, rotation
and scale of the virtual object is then tied to the paper sheet, which
is handed out to users in the patient education use case.
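With the ARCore SDK for Unity, anchoring content to such a marker can be sketched as follows (class and field names are illustrative assumptions, not the actual EPAR code):

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class MarkerAnchorController : MonoBehaviour
{
    public GameObject storyContent; // root object of the AR story scene
    private readonly List<AugmentedImage> images = new List<AugmentedImage>();

    void Update()
    {
        // Query augmented images whose tracking state changed this frame.
        Session.GetTrackables<AugmentedImage>(images, TrackableQueryFilter.Updated);
        foreach (var image in images)
        {
            if (image.TrackingState == TrackingState.Tracking)
            {
                // Tie position, rotation and scale of the content to the sheet.
                var anchor = image.CreateAnchor(image.CenterPose);
                storyContent.transform.SetParent(anchor.transform, false);
                storyContent.SetActive(true);
            }
        }
    }
}
```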
Ideally, the graphics on the sheet should be carefully designed so that they give users instructions on how to get started with the app, while at the same time achieving a high tracking quality score as measured by an ARCore tool (arcoreimg). Our final marker was classified with a score of 85 (out of 100) and achieved satisfactory tracking quality; Google recommends a minimum of 75 [20]. The whole application used a single marker to show all scenes, to make physical paper handling easier for users. Our evaluation setup specifically required space for the user to physically walk around the table where the marker was placed, to allow a free choice of interaction style (moving the marker vs. moving the phone).

⁴https://unity3d.com/what-is-a-game-engine/, accessed: November 19th, 2019
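For reference, evaluating a candidate marker with the arcoreimg tool is a single command that prints a quality score from 0 to 100 (the file name here is illustrative):

```
arcoreimg eval-img --input_image_path=epar_marker.png
```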
The first app version only showed the on-screen text "scan the image" after the app was started. During the first tests, several users were not sure which image to scan. Consequently, we extended the in-app guidance to show the image to scan as a transparent overlay until the user aligned the on-screen image with the printed marker.
4.2 Flow Control System
After successfully detecting the marker, the core app content gets
anchored to it and is set to visible. To facilitate creating and changing
the story, an XML based flow control system was developed. The
system separates the story into acts, sequences and actions (see
Fig. 4).
The acts form the main chapters of the story, starting with the tutorial. Then, depending on the user's choice, the story branches into: 1) information about squinting, 2) the surgery, and 3) post-operative care. These acts contain different sequences which represent all possible decisions the user could take. In the sequences, it is possible to: 1) define sequential actions, for example fading objects in / out, 2) create outlines around those objects to visually show the user possible interaction areas, 3) add a delay to the sequence, 4) switch to a different act or sequence, and 5) add text that is shown on the screen as well as transformed into speech.
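Since the schema itself is not published, the following hypothetical excerpt only illustrates how such an act / sequence / action separation could look; all element and attribute names are assumptions:

```xml
<act id="surgery">
  <sequence id="open-conjunctiva">
    <!-- Text is shown on screen and also transformed into speech. -->
    <text tts="true">First, the surgeon opens the conjunctiva.</text>
    <action type="fade-in" target="scalpel" />
    <!-- Outline marks the object as a possible interaction area. -->
    <outline target="scalpel" />
    <delay seconds="5" />
    <!-- Default branch if the user moves on without interacting. -->
    <goto sequence="detach-muscle" />
  </sequence>
</act>
```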
In addition to the methods performed during the evaluation, the app also generates CSV-formatted log files. These record user metrics that were used to track the user's progress through the application, as an additional automated measure for the future, even for larger studies that are not performed by the evaluation team together with test subjects. The team participated in a science fair event to intensively test the data logging stability and quality.
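A minimal sketch of such progress logging, assuming an illustrative file name and column layout (the paper does not specify the exact format):

```csharp
using System;
using System.IO;
using UnityEngine;

public static class ProgressLogger
{
    private static readonly string LogPath =
        Path.Combine(Application.persistentDataPath, "epar_log.csv");

    // One row per user event: timestamp, act, sequence, action taken.
    public static void LogEvent(string act, string sequence, string action)
    {
        string row = $"{DateTime.UtcNow:o},{act},{sequence},{action}\n";
        File.AppendAllText(LogPath, row);
    }
}
```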
4.3 Driving User Attention
Every time text is shown to the user (see Fig. 5), the app transforms it to a speech sound file using the cloud service from Microsoft Azure⁵.
With neural voices, the generated speech is indistinguishable from
humans [24]. As such, it has benefits for rapid prototyping where
development teams can iterate on different versions of the text. Addi-
tionally, it potentially allows for quick multi-lingual version creation.
With our added caching of generated speech files, the system only
needs to download each audio file once. For real-life deployments,
it should be evaluated if traditional audio recording of professional
speakers makes a difference and provides benefits compared to the
corresponding costs.
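A sketch of this synthesis-with-caching step using the Azure Speech SDK for C#; the subscription key, region, voice and cache layout are placeholders, not the actual EPAR configuration:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

public static class SpeechCache
{
    public static async Task<string> GetAudioFileAsync(string text, string cacheDir)
    {
        // Reuse a previously synthesized file so each text is downloaded once.
        string file = Path.Combine(cacheDir, $"{(uint)text.GetHashCode():X8}.wav");
        if (File.Exists(file)) return file;

        var config = SpeechConfig.FromSubscription("<key>", "<region>");
        config.SpeechSynthesisVoiceName = "en-US-JennyNeural"; // a neural voice
        using var audioOutput = AudioConfig.FromWavFileOutput(file);
        using var synthesizer = new SpeechSynthesizer(config, audioOutput);
        await synthesizer.SpeakTextAsync(text);
        return file;
    }
}
```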
An important goal of the evaluation was to find out which interac-
tion prompt works best for AR content (see Sect. 5.1). This specific
research question requires a direct representation in the app develop-
ment process. Three different approaches were implemented:
• Visual (highlight): a glowing and pulsating yellow border around the object the user can tap on (see Fig. 5).
• Haptic (phone vibration): the phone briefly vibrates, synchronized to the visual highlights.
• Auditive (sound): a notification sound is played.
After five seconds of user inactivity in a sequence where inter-
action is possible, interaction prompts or combinations of them are
triggered (depending on the pre-defined state in the configuration
XML file). The aim was to inform the user about the possible AR
interactions.
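A sketch of this inactivity trigger in Unity, with illustrative method names standing in for the actual highlight and sound logic:

```csharp
using UnityEngine;

public class PromptTrigger : MonoBehaviour
{
    public float idleSeconds = 5f;   // inactivity threshold
    public bool useHaptic, useAudio; // per-chapter flags from the XML config
    private float lastInteraction;

    void Update()
    {
        if (Input.touchCount > 0) lastInteraction = Time.time;

        if (Time.time - lastInteraction > idleSeconds)
        {
            ShowPrompts();
            lastInteraction = Time.time; // avoid re-triggering every frame
        }
    }

    void ShowPrompts()
    {
        HighlightInteractiveObjects();          // visual: pulsating outline
        if (useHaptic) Handheld.Vibrate();      // haptic: brief vibration
        if (useAudio) PlayNotificationSound();  // auditive: notification sound
    }

    void HighlightInteractiveObjects() { /* enable outline effect */ }
    void PlayNotificationSound() { /* play an AudioSource clip */ }
}
```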
⁵https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/unity, accessed: October 20th, 2019
Figure 5: Screenshot of the EPAR app. Visually highlighted arrows
(pulsating yellow border) enable the user to interact with the virtual 3D
objects and to influence the storytelling. The visible patient education text is also spoken aloud to improve accessibility for users with low health literacy.
4.4 Content Creation
In the eye surgery use case of the EPAR app, the human eye 3D
model was the most used and thus the most important asset. Early
designs were based on a photo-realistic look. However, feedback
from test users during development indicated that more abstract
shapes and animations reduced fear and anxiety of the actual surgery.
This was especially noticeable for details like cutting the eye muscle
with a scalpel. Another advantage of a more artificial look is that it
helps to avoid the Uncanny Valley effect described by Mori [32]: an artificial object that looks very close to reality but not perfectly human can cause negative affinity. As such, a better approach is often to aim for a less photo-realistic look [45]. These insights match the results of [5], who found that stylized aesthetics in virtual environments are more visually appealing than realistic representations.
All other assets were designed to match the look of the human
eye, e.g. for choosing the acts through the menu (eye, scalpel and
bed) or for visualizing specific steps (e.g., eye drops). The goal
was to achieve a continuous identity for the whole application. 3D
modeling was performed using Autodesk Maya 2019.
5 EVALUATION
5.1 Evaluation Design
In addition to the storytelling concept, a major focus during the
development process was the usability of the prototype. Therefore,
a two-stage usability-test was developed in a period of 2 months
(formative testing) [4, 18, 39].
Based on the research area of this paper, the following main research questions were defined:
1. Which interaction prompt (visual; visual & haptic; visual & haptic & auditive) seems comfortable and goal-oriented to the user? What difficulties and obstacles appeared?
2. Does the three-dimensional augmentation of the content add subjective value for the user?
3. Is it clear to the user how interactions with 3D objects modify the story?
First stage evaluation: in an earlier stage of the prototype's lifecycle, a Cognitive Walkthrough was used to evaluate the effectiveness of the preliminary design concepts [4, 9, 39, 43, 54]. The focus was on observing the actions performed by users and measuring the relevant attributes of a usability test, such as usefulness, effectiveness, etc. Six experts in the subject areas AR, UI and healthcare (3 male, 3 female; 2 from each subject area) were asked to test the prototype ([1] argue that for heuristic evaluations, 7 ± 2 participants are sufficient). Representative tasks such as starting the app and receiving information about strabismus and the surgery were given. To ensure that the whole functionality of the application got tested, the tasks were written down as step-by-step instructions. Additionally, the participants were encouraged to "think aloud" to get insights into their thought process [4, 39, 43, 54].
The test took place in a lab-like test setup. Formative tests require
extensive interaction between the participant and the test modera-
tor. Therefore, a sit-by session was performed. The qualitative and
quantitative data collected in the first evaluation stage came from
the verbal protocol documentation, the post-task questionnaire docu-
mentation and the scores from the post-test questionnaire (the stan-
dardized usability questionnaire “System Usability Scale” – SUS,
plus additional questions on the storytelling concept) [4, 15, 33, 39].
The evaluation results were used to revise the prototype.
Second stage evaluation: for the second stage of the evaluation, possible end users were recruited. The defined user profiles (personas) were used to get representative participants. At the beginning of the project, three persona profiles were defined, each corresponding to a certain level of technical knowledge:
• Low technical knowledge was defined as having a non-technical job, not studying a technical subject, using smartphones and computers only sparingly, and not having experience with AR.
• Intermediate technical knowledge was defined as not having a technical job but being proficient in using a computer and smartphones, without experience in AR.
• High technical knowledge was defined as having a technical job and having experience with AR and VR applications.
The test group consisted of 18 participants (six per persona profile; 9 male, 9 female; the low technical knowledge group had a mean age of 49.8, the intermediate technical knowledge group a mean age of 28.5, and the high technical knowledge group a mean age of 31.2). The number 18 was chosen based on [1], who estimated that 16 ± 4 is the most efficient number of participants in a user study. For the test, the interaction prompts were implemented in the following way: chapter 1 used visual prompts only, chapter 2 visual & haptic, and chapter 3 visual & haptic & auditive. The test took place in the same lab-like single-room sit-by setup as the first evaluation [4, 39], conducted over two days, moderated by the same researcher, with one research assistant helping with the observation. The test setup was optimized
to the use of AR. The participants were standing and had the chance
to walk around the marker which was placed on a round standing
table. For every test the same mid-range smartphone (Nokia 7.1) was
used. The participants had to perform representative tasks (such as answering questions through exploring the application, e.g., "Which is the most frequent infant type of strabismus?") which led them through different chapters with different interaction prompts. Every user was administered all the tasks, but the order changed from user to user so that the influence of one interaction prompt on another was counterbalanced. Because this phase focused on testing usability as well as interactive storytelling, the task descriptions were quite open –
the participants had the chance to decide on their own how to reach
the goal. Furthermore, they were encouraged to “think aloud”. The
participants were tested individually. Every test lasted between 30
and 45 minutes. The qualitative and quantitative data collected in
the second evaluation stage came from the verbal protocol documen-
tation, the post-task questionnaire /interview documentation and the
scores from the post-test questionnaire (System Usability Scale –
SUS, plus additional questions on the storytelling concept).
Exclusion criteria for the first and second stage of the usability test
were: reading problems and difficulties; being underage; headache after using the smartphone / tablet / laptop for 10 minutes. These were confirmed by the participants beforehand. The latter criterion was introduced to ensure that possible known issues with viewing screens did not directly affect the outcome of the survey. These are common issues with Mixed Reality scenarios – mostly for head-mounted displays, but studies show that smartphones, too, can cause symptoms like disorientation or temporary eye dryness [22].
5.2 Evaluation Results
The collected data from both tests was analyzed. The verbal protocol
documentation results, as well as those from the post-task question-
naire /interview, were summarized with a focus on the most frequent
participants’ statements and comments. The first part of the post-
test questionnaire, the SUS, was evaluated in accordance with the
requirements from the literature. For both parts of the questionnaire,
the central tendency as well as the minimum and maximum values
were calculated.
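For reference, SUS scoring follows a fixed rule: on the ten five-point items, odd-numbered items contribute (response - 1) and even-numbered items (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

```csharp
// Computes the System Usability Scale score for one participant.
// responses: the ten answers on a 1-5 scale, in questionnaire order.
public static double SusScore(int[] responses)
{
    if (responses.Length != 10)
        throw new System.ArgumentException("SUS requires exactly 10 items.");
    int sum = 0;
    for (int i = 0; i < 10; i++)
        sum += (i % 2 == 0) ? responses[i] - 1  // items 1, 3, 5, 7, 9
                            : 5 - responses[i]; // items 2, 4, 6, 8, 10
    return sum * 2.5; // final score ranges from 0 to 100
}
```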
First stage evaluation: the results from the first evaluation were used to revise the prototypical application. These were the main tasks identified:
• Revise the start / tracking procedure: 2 out of 6 had difficulties. Statement from participant 1: "It wasn't clear to me that I have to track the whole sheet to start the application."
• Revise the main menu appearance: 4 out of 6 had difficulties. Statement from participant 4: "I was searching for a burger menu – I haven't realized that the three-dimensional objects should present the menu." 2 out of 6 tried to turn the smartphone to view the application in a horizontal view.
• Revise the appearance of the textual information: 3 out of 6 thought it was not necessary that the application explains the menu items every time when returning to the main menu. 4 out of 6 criticized the appearance of the text. Statement from participant 2: "That is quite annoying."
• Revise the menu navigation features.
• Revise the voice of the application: 1 out of 6 mentioned "I don't like the voice." The first prototype did not yet use the neural voice.
• Revise the different types of interaction requests.
The first evaluation reached a median SUS score of 67.5 (mean value 72.9, mode value 92.5, minimum value 32.5, maximum value 92.5; n = 6).
Second stage evaluation: the second test phase was performed with representative end users with different levels of technical knowledge, recruited according to the three persona profiles (low, intermediate, high) defined at the beginning of the project.
Over all three groups (n = 18), a median SUS score of 80.0 (mean value 82.2, mode value 77.5, minimum value 52.5, maximum value 97.5) was reached (see Fig. 6). In every single group, the median SUS score was higher than the value of 68 recommended in the literature [33]:
• Persona group low technical knowledge (n = 6): median SUS score of 77.5
• Persona group intermediate technical knowledge (n = 6): median SUS score of 90.0
• Persona group high technical knowledge (n = 6): median SUS score of 82.5

Figure 6: Results of the System Usability Scale (SUS) for EPAR (n = 18, post-test questionnaire).
The implemented improvements based on the main tasks iden-
tified in the first stage evaluation proved to have a positive impact
on the AR app. In comparison to the first test stage, the starting
procedure of the application was clear for almost all participants.
The main menu (the improved menu can be seen on the phone screen
in Fig. 4) was recognized as such; only one of the test users tried to flip the smartphone from the designed-for portrait mode to landscape mode. The feedback concerning the text appearance was positive. Some participants wanted to speed up the text display to gain more control when reading the text on their own instead of listening to the neural voice. (For a detailed table of questions and results, please see the additional material.)
Main results for research question 1 (which interaction prompt seems comfortable and goal-oriented to the user): 14 out of 18 provided the feedback that a visual highlight – such as the implemented yellow frame around the interactive area – is sufficient as an interaction prompt. 4 out of those 14 mentioned that it would be comfortable if a further textual / spoken sentence such as "Click on the scalpel to get further information" were available.
The combination of visual highlight and vibration interaction
prompt was comfortable for 2 out of 18 participants. Other test users
even mentioned that the vibration was annoying or that they thought
someone was calling.
2 out of 18 users liked the combination of visual highlight, vibra-
tion and audio interaction prompt most. Others felt stressed as well
as confused; two users stated, “I connect the vibration and sound
with danger.” (see Fig. 7).
Main results for research question 2 (the benefit of the 3D augmentation of the content): the participants provided positive feedback. During the test it was noticeable that the participants could repeat and summarize the content quite well, which is important when enlightening patients – it forms the basis for a well-founded decision for or against a medical examination or therapy. This result is corroborated by these questionnaire values:
Statement 4 from the post-test questionnaire: "I think the three-dimensional augmentation of the content is useful / reasonable and informative." – median value: 4.5, mean value: 4.4, mode value: 5.0, minimum value: 3.0, maximum value: 5.0. [Questionnaire key: 1 = do not agree at all, 5 = fully agree]
Statement 12 from the post-test questionnaire: "I think the use of the application as a complementary tool can lead to a better understanding of the content of a medical consultation." – median value: 5.0, mean value: 4.4, mode value: 5.0, minimum value: 1.0, maximum value: 5.0.
Figure 7: Frequency of answers about the interaction prompts (n =
18, post-test questionnaire).
Main results for research question 3 (whether it is clear to the user how the interaction with the three-dimensional objects modifies the story): both quantitative results from the post-test questionnaire and qualitative results from the post-task questionnaire / interview contribute to our findings. Additional background information about the evaluation is important to answer this question:
At the beginning of the test session, the test administrator ex-
plained the scenario to every participant: the test users should imag-
ine that they have strabismus and they want a surgery to correct the
eye deviation; they are at the hospital, right before the consultation
with the doctor. To test the different features of the application, the
users had to complete tasks. After each task, a short interview with
predefined questions was conducted. The following description is
an example of one of the tasks (task 2): “You want to find out more
about the different types of strabismus. Which is the most common
type of infant strabismus? How can you get this information?” The
idea was that the participants should reach a goal; how they got there
was their decision. The users received the tasks in different order.
Overall, the participants stated in the post-task interview and the post-test questionnaire that they had a feeling of control over the system and knew how to interact with it (Statement 3: median value: 4.0, mean value: 4.2, mode value: 5.0, minimum value: 3.0, maximum value: 5.0). The conclusion from the observations and the interviews is that the users perceived visual highlights as a useful interaction prompt, but in some cases they did not notice them.
For this reason, some features were not used by a few test users: e.g., 10 out of 18 participants tried to select the eye muscle to make the eyes move, while the others did not notice the possibility. A goal for future improvements is to revise the highlights as interaction prompts, for example by making them larger, especially because the users mentioned that this prompt type was their favorite and the most useful one. The following results from the post-test questionnaire support these findings:
Statement 6 from the post-test questionnaire: "The interaction possibilities with the system and the three-dimensional objects were clear to me." – median value: 4.0, mean value: 4.2, mode value: 5.0, minimum value: 1.0, maximum value: 5.0.
Statement 8 from the post-test questionnaire: "I had no difficulties in interacting with the three-dimensional objects." – median value: 4.0, mean value: 4.3, mode value: 4.0, minimum value: 3.0, maximum value: 5.0.
6 DISCUSSION AND CONCLUSION
Within the research project Enlightening Patients with Augmented Reality, a mobile app prototype was developed to enhance knowledge about strabismus, the connected surgery and the recovery process. Our state-of-the-art survey concluded that most medical AR applications are focused on supporting doctors or educating medical students. Therefore, our prototype was designed to accompany the mandatory consultation with doctors and to help patients decide whether they want to proceed with surgery.
In order to design a sufficient storyline, detailed research on strabismus and eye surgery was conducted. The validity was cross-checked by one of our team members, who has vast expertise in this medical field. We simplified the information and conceptualized a plot with three chapters (acts), each containing various scenes that could be watched by users in a self-chosen order.
Our state-of-the-art research about interactive storytelling showed
us that a high level of interactivity positively affects the commit-
ment of participants to a specific topic. We designed an iterative
interaction triangle for each scene (sequence) that was applicable
for the whole app. Within this triangle, the user can click on further
information, rewind, or skip and go to the next topic. Furthermore,
the open-source tool Twine proved to be valuable for laying out
the interactive story, but we needed Adobe XD to build a clickable
interactive storyboard.
The prototype was developed for the Android platform & AR-
Core and built upon the Unity engine, as this technical framework
would be extendable to iOS systems in the future. One of the most
important research questions within the EPAR project was which
interaction prompt was the most suitable for an AR application. We
therefore developed three different prompts with visual, haptic and
auditive highlights and stimulations.
We also performed a two-phase evaluation with a total of 24 test
subjects. These were selected based on developed personas to repre-
sent different possible user groups of patient education tools. Based
on the feedback gathered in the first phase (system usability score
(SUS) of 67.5), we improved the application. The generated insights
turned out to be very valuable, leading to a SUS of 80.0 in phase
2. An additional detailed analysis of possible interaction prompts
(visual, haptic, auditive) for the virtual 3D holograms revealed that
visual highlights were considered sufficient. Overall, participants
thought that an AR system as a complementary tool for medical
patient education could lead to a better understanding, which is the
basis for making a well-informed decision.
The ultimate target group of patients that suffer from strabis-
mus usually adopt one eye as their main eye; as such, their three-
dimensional vision is impaired so they would not be able to use a
head-mounted display for its expected purpose. For smartphone-
based AR, the display is still 2D. Therefore, we do not expect strabis-
mus patients to have a different outcome when using a smartphone-
based AR system. However, this should be confirmed by a follow-up
survey.
A limitation of our research is that most of the discussed findings
are based on questionnaires or observations. Including additional
objective measures might reveal further insights (e.g. comparing the
perceived user activity with the measured activity such as the user’s
physical movement in the real world). Even though we covered a
broad range of test users by including various subject experts as well
as people recruited based on patient education personas, conducting
tests with real patients might have an influence on some results.
However, as the application could potentially influence patients’
decisions when it comes to performing or cancelling a surgery, an
ethics approval would be needed to perform such a study in Austria.
ACKNOWLEDGMENTS
Our research is funded by the Austrian Ministry of Digital and
Economic Affairs within the FFG COIN project Immersive Media
Lab (866856). We further wish to thank Peter Judmair and Laura
Zauner for their feedback as well as Georg Vogt and Manuel Mader
for their support within the project.
REFERENCES
[1]
R. Alroobaea and P. Mayhew. How Many Participants are Really
Enough for Usability Studies? Aug. 2014. doi:
10.1109/SAI.2014.
6918171
[2]
R. Azuma. Location-Based Mixed and Augmented Reality Story-
telling. In W. Barfield, ed.,
Fundamentals of Wearable Computers and
Augmented Reality, Second Edition
, pp. 259–276. CRC Press, Aug.
2015. doi: 10.1201/b18703-15
[3]
A. Balog and C. Pribeanu. The Role of Perceived Enjoyment in the
Students’ Acceptance of an Augmented Reality Teaching Platform: a
Structural Equation Modelling Approach.
Studies in Informatics and
Control, 19:319–330, Sept. 2010. doi: 10.24846/v19i3y201011
[4]
C. M. Barnum.
Usability testing essentials: ready, set– test
. Morgan
Kaufmann Publishers, Burlington, MA, 2011.
[5]
T. Bednarz, D. Filonik, A. Buchan, and L. Ogden-Doyle. Future-
mine VR As Narrative Decision Making Tool. In
Proceedings of the
24th ACM Symposium on Virtual Reality Software and Technology
,
VRST ’18, pp. 50:1–50:2. ACM, New York, NY, USA, 2018. doi:
10.
1145/3281505.3281581
[6]
A. Berke. Augenmuskeln und Augenbewegungen.
Optometrie
,
(1/2000):13–27, 2000.
[7]
J. Bucher.
Storytelling for virtual reality: Methods and principles for
crafting immersive narratives
. Routledge, Jan. 2017. doi:
10.4324/
9781315210308
[8]
Centre for Innovation, Leiden University. Seeing clearly: How aug-
mented reality can help medical students understand complex trans-
plants. Accessed: November. 10th, 2019.
[9]
M. Conyer. User and usability testing - how it should be undertaken?
Australasian Journal of Educational Technology
, 11(2), Dec. 1995.
doi: 10.14742/ajet.2075
[10]
C. Crawford.
Chris Crawford on interactive storytelling
. New Riders,
Berkeley, Calif.?, second edition ed., 2013.
[11]
R. Crawford and Y. Chen. From hypertext to hyperdimension Neptu-
nia: The future of VR visual novels: The potentials of new technolo-
gies for branching-path narrative games. In
2017 23rd International
Conference on Virtual System Multimedia (VSMM)
, pp. 1–7, Oct.
2017. ISSN: 2474-1485. doi: 10.1109/VSMM.2017.8346298
[12]
C. Cruz-neira, D. Sandin, C. Cruz-neira, D. J. Sandin, and T. A. Defanti.
Surround-Screen Projection-Based Virtual Reality: The Design and
Implementation of the CAVE. In
in Proceedings of Computer Graphics
(SIGGRAPH) Proceedings, Annual Conference Series
, pp. 135–142,
1993.
[13]
M. de Paiva Guimar
˜
aes, B. C. Alves, R. S. Durelli, R. de FR Guimar
˜
aes,
and D. C. Dias. An approach to developing learning objects with aug-
mented reality content. In
International Conference on Computational
Science and Its Applications, pp. 757–774. Springer, 2018.
[14]
C. Doak, L. Doak, and J. Root. Teching patients with low literacy
skills.
AJN The American Journal of Nursing
, 96:16M, 12 1996. doi:
10.1097/00000446-199612000-00022
[15]
M. R. Drew, B. Falcone, and W. L. Baccus. What Does the System Us-
ability Scale (SUS) Measure? In A. Marcus and W. Wang, eds.,
Design,
User Experience, and Usability: Theory and Practice
, vol. 10918, pp.
356–366. Springer International Publishing, Cham, 2018. doi:
10.1007/
978-3-319-91797-9 25
[16]
K. Dutta. Augmented Reality for E-Learning. vol. Augmented Reality,
Mobile & Wearable. RWTH Aachen, Germany, 2015.
[17]
R. D
¨
orner, W. Broll, P. Grimm, and B. Jung, eds.
Virtual
und Augmented Reality (VR/AR): Grundlagen und Methoden der
Virtuellen und Augmentierten Realit¨
at
. Springer Vieweg, 2 ed., 2019.
doi: 10.1007/978-3-662-58861-1
[18]
A. D
¨
unser and M. Billinghurst. Evaluating Augmented Reality Sys-
tems. In B. Furht, ed.,
Handbook of Augmented Reality
, pp. 289–307.
Springer New York, New York, NY, 2011. doi:
10.1007/978-1-4614-0064
-6 13
[19]
Google Developers. Fundamental concepts of ARCore. Accessed:
November. 19th, 2019.
[20]
Google Developers. Recognize and augment images. Accessed:
November. 19th, 2019.
[21]
H. Hageb
¨
olling, ed.
Interactive Dramaturgies: New Approaches in
202
Multimedia Content and Design
. X.media.publishing. Springer-Verlag,
Berlin Heidelberg, 2004. doi: 10.1007/978-3-642-18663-9
[22]
J. Han, S. H. Bae, and H.-J. Suk. Comparison of visual discomfort
and visual fatigue between head-mounted display and smartphone.
Electronic Imaging, 2017(14):212–217, 2017.
[23]
H. Hansen, B. Nielsen, A. Boejen, and A. Vestergaard. Teaching Cancer
Patients the Value of Correct Positioning During Radiotherapy Using
Visual Aids and Practical Exercises.
Journal of Cancer Education
, 33,
Oct. 2016. doi: 10.1007/s13187-016-1122-2
[24]
X. Huang. Microsoft’s new neural text-to-speech service helps ma-
chines speak like people |Blog |Microsoft Azure, Sept. 2018.
[25]
IDC. Smartphone Market Share - OS. Accessed: November. 14
th
,
2019.
[26]
A. Jakl, L. Sch
¨
offer, M. Husinsky, and M. Wagner. Augmented reality
for industry 4.0: Architecture and user experience. In
11th Forum
Media Technology 2018, pp. 38–42, 11 2018.
[27]
J. Jiang, Z. Huang, W. Qian, Y. Zhang, and Y. Liu. Registration
Technology of Augmented Reality in Oral Medicine: A Review.
IEEE
Access, 7:53566–53584, 2019. doi: 10.1109/ACCESS.2019.2912949
[28]
N. Kara, C. C. Aydin, and K. Cagiltay. Investigating the Activities of
Children toward a Smart Storytelling Toy.
Educational Technology &
Society, 16:28–43, 2013.
[29]
J. Lambert.
Digital Storytelling: Capturing Lives, Creating
Community. Routledge, 2013. Google-Books-ID: 6 2yxOSn1xYC.
[30]
C. Lin, D. Andersen, V. Popescu, E. Rojas-Mu
˜
noz, M. E. Cabrera,
B. Mullis, B. Zarzaur, K. Anderson, S. Marley, and J. Wachs. A
First-Person Mentee Second-Person Mentor AR Interface for Surgical
Telementoring. In
2018 IEEE International Symposium on Mixed and
Augmented Reality Adjunct (ISMAR-Adjunct)
, pp. 3–8, Oct. 2018.
doi: 10.1109/ISMAR-Adjunct.2018.00021
[31]
C. H. Miller.
Digital Storytelling: A Creator’s Guide to Interactive
Entertainment
. Taylor & Francis, 2004. Google-Books-ID: kW-
Fosl5j3fIC.
[32]
M. Mori, K. F. MacDorman, and N. Kageki. The uncanny valley [from
the field].
IEEE Robotics Automation Magazine
, 19(2):98–100, June
2012. doi: 10.1109/MRA.2012.2192811
[33]
T. Nathan. How To Use The System Usability Scale (SUS) To Evaluate
The Usability Of Your Website, July 2015.
[34]
R. N
´
obrega, J. Jacob, A. Coelho, J. Weber, J. Ribeiro, and S. Ferreira.
Mobile location-based augmented reality applications for urban tourism
storytelling. In
2017 24º Encontro Portuguˆ
es de Computac¸˜
ao Gr´
afica e
Interac¸˜
ao (EPCGI)
, pp. 1–8, Oct. 2017. doi:
10.1109/EPCGI.2017.8124314
[35]
K. Pimentel and K. Teixeira. Virtual reality - through the new looking
glass. 1993. doi: 10.5860/choice.30-5051
[36]
I. Radu. Augmented reality in education: a meta-review and cross-
media analysis.
Personal and Ubiquitous Computing
, 18(6):1533–
1543, 2014.
[37]
A. Raith. VIPER – Virtual Patient Education in Radiotherapy. Master’s
thesis, Fachhochschule St. P¨
olten, 2017.
[38]
A. Ramirez and V. Bulitko. Automated Planning and Player Model-
ing for Interactive Storytelling.
IEEE Transactions on Computational
Intelligence and AI in Games
, 7(4):375–386, Dec. 2015. doi:
10.1109/
TCIAIG.2014.2346690
[39]
J. Rubin and D. Chisnell.
Handbook of usability testing: how to plan,
design, and conduct effective tests
. Wiley Pub, Indianapolis, IN, 2nd
ed ed., 2008. OCLC: ocn212204392.
[40]
M.-L. Ryan.
Narrative as Virtual Reality II: Revisiting Immersion and
Interactivity. Johns Hopkins University Press, 2015.
[41]
D. Santano and H. Thwaites. Augmented Reality Storytelling:
A Transmedia Exploration. In
2018 3rd Digital Heritage
International Congress (DigitalHERITAGE) held jointly with 2018
24th International Conference on Virtual Systems Multimedia
(VSMM 2018)
, pp. 1–4, Oct. 2018. doi:
10.1109/DigitalHeritage.2018.
8809996
[42]
M. E. C. Santos, A. Chen, T. Taketomi, G. Yamamoto, J. Miyazaki,
and H. Kato. Augmented Reality Learning Experiences: Survey of
Prototype Design and Evaluation.
IEEE Transactions on Learning
Technologies, 7(1):38–56, 2014. doi: 10.1109/TLT.2013.37
[43] S. E. Schaeffer. Usability Evaluation for Augmented Reality. 2014.
[44]
T. Schmidt. Operativ-technische Grundlagen der Augen-
muskelchirurgie. p. 7, 2006.
[45]
V. Schwind, K. Wolf, and N. Henze. Avoiding the uncanny valley in
virtual character design.
Interactions
, 25(5):45–49, Aug. 2018. doi:
10.
1145/3236673
[46]
X. Song, L. Ding, J. Zhao, J. Jia, and P. Shull. Cellphone Augmented
Reality Game-based Rehabilitation for Improving Motor Function and
Mental State after Stroke. In
2019 IEEE 16th International Conference
on Wearable and Implantable Body Sensor Networks (BSN)
, pp. 1–
4, May 2019. ISSN: 2376-8894, 2376-8886. doi:
10.1109/BSN.2019.
8771093
[47]
Y. Song, N. Zhou, Q. Sun, W. Gai, J. Liu, Y. Bian, S. Liu, L. Cui,
and C. Yang. Mixed Reality Storytelling Environments Based on
Tangible User Interface: Take Origami as an Example. In
2019 IEEE
Conference on Virtual Reality and 3D User Interfaces (VR)
, pp. 1167–
1168, Mar. 2019. ISSN: 2642-5254, 2642-5246. doi:
10.1109/VR.2019.
8798114
[48]
P. Stefan, P. Wucherer, Y. Oyamada, M. Ma, A. Schoch, M. Kanegae,
N. Shimizu, T. Kodera, S. Cahier, M. Weigl, M. Sugimoto, P. Fallavol-
lita, H. Saito, and N. Navab. An AR edutainment system supporting
bone anatomy learning. In
2014 IEEE Virtual Reality (VR)
, pp. 113–
114, Mar. 2014. ISSN: 1087-8270, 2375-5334. doi:
10.1109/VR.2014.
6802077
[49] H. Steffen and H. Kaufmann. Strabismus. 2019.
[50] A. Stewart-Lord, M. Brown, S. Noor, J. Cook, and O. Jallow. The utilisation of virtual images in patient information giving sessions for prostate cancer patients prior to radiotherapy. Radiography, 22(4):269–273, 2016. doi: 10.1016/j.radi.2016.05.002
[51] T. G. Sticht et al. Auding and reading: A developmental model. 1974.
[52] J. Sulé-Suso, S. Finney, J. Bisson, S. Hammersley, S. Jassel, R. Knight, C. Hicks, S. Sargeant, K. Lam, J. Belcher, D. Collins, R. Bhana, F. Adab, C. O'Donovan, and A. Moloney. Pilot study on virtual imaging for patient information on radiotherapy planning and delivery. Radiography, 21, Mar. 2015. doi: 10.1016/j.radi.2015.02.002
[53] W. Tarng, K.-L. Ou, C.-S. Yu, F.-L. Liou, and H.-H. Liou. Development of a virtual butterfly ecological system based on augmented reality and mobile learning technologies. Virtual Reality, 19(3):253–266, Nov. 2015. doi: 10.1007/s10055-015-0265-5
[54] C. Tho, H. Pranoto, H. L. H. Spits Warnars, E. Abdurachman, and B. Soewito. Usability Testing Method in Augmented Reality. IEEE, S.l., 2017. OCLC: 1096674928.
[55] D. Wang, L. He, and K. Dou. StoryCube: Supporting children's storytelling with a tangible tool. The Journal of Supercomputing, 70, Oct. 2013. doi: 10.1007/s11227-012-0855-x
[56] J. Wofford, D. Currin, R. Michielutte, and M. Wofford. The multimedia computer for low-literacy patient education: a pilot project of cancer risk perceptions. MedGenMed: Medscape General Medicine, 3(2):23, 2001.
[57] L. Zhang, D. A. Bowman, and C. N. Jones. Exploring Effects of Interactivity on Learning with Interactive Storytelling in Immersive Virtual Reality. In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1–8, Sept. 2019. ISSN: 2474-0489, 2474-0470. doi: 10.1109/VS-Games.2019.8864531