Deep Reinforcement Learning in Immersive Virtual
Reality Exergame for Agent Movement Guidance
Aviv Elor
Department of Computational Media
University of California, Santa Cruz
Santa Cruz, CA, USA
aelor@ucsc.edu
Sri Kurniawan
Department of Computational Media
University of California, Santa Cruz
Santa Cruz, CA, USA
skurina@ucsc.edu
Abstract—Immersive Virtual Reality applied to exercise games
has a unique potential to both guide and motivate users in per-
forming physical exercise. Advances in modern machine learning
open up new opportunities for more significant intelligence in
such games. To this end, we investigate the following research
question: What if we could train a virtual robot arm to guide
us through physical exercises, compete with us, and test out
various double-jointed movements? This paper presents a new
game mechanic driven by artificial intelligence to visually assist
users in their movements through the Unity Game Engine,
Unity ML-Agents, and the HTC Vive Head-Mounted Display. We
discuss how deep reinforcement learning through Proximal Policy
Optimization and Generative Adversarial Imitation Learning
can be applied to complete physical exercises from the same
immersive virtual reality game. We examine our mechanics with
four users through protecting a virtual butterfly with an agent
that visually helps users as a cooperative “ghost arm” and an
independent competitor. Our results suggest that deep learning
agents are effective at learning game exercises and may provide
unique insights for users.
Index Terms—Exercise Games (Exergames), Serious Games,
Head Mounted Display (HMD), Immersive Virtual Reality (iVR),
Project Butterfly (PBF), Machine Learning, Deep Reinforcement
Learning, Imitation Learning, Artificial Intelligence
I. INTRODUCTION
Physical activity is an essential part of daily living, yet
48.3% of the 40 million older adults in the United States
are classified as inactive [1], [2]. Inactivity leads to a decline of health with significant motor degradation: a loss of coordination, movement speed, gait, balance, muscle mass, and cognition [1]–[3]. The medical benefits of regular physical activity include weight loss and a reduced risk of heart disease and certain cancers [4]. However, compliance with regular physical activity is often low due to high costs, lack of motivation, lack of accessibility, and low education [2]. As a result, exercise is often perceived as a chore rather than a fun activity.
Immersive Virtual Reality (iVR) and the growing use of games for health and well-being have shown great
promise in addressing these issues. The ability to create
stimulating and re-configurable virtual worlds has been shown
to improve exercise compliance, accessibility, and performance
analysis [5]–[7]. Other studies have suggested that engaging
in a virtual environment during treatment can distract from
pain and discomfort while motivating the user to achieve
their personal goals [8], [9]. Additional success has been
reported in using virtual environments for a broad range of
health interventions from a psychological and a physiological
perspective [10], [11]. Some of the biggest challenges that
these studies found were technological constraints such as cost,
inaccurate motion capture, non-user friendly systems, and a
lack of accessibility [6], [12], [13].
The past five years have seen explosive growth in iVR systems, with a projected 200 million head-mounted display systems sold on the consumer market since 2016 [14]. This mass adoption has been driven in part by a decrease
in hardware cost and a corresponding increase in usability.
From these observations, we argue that the integration of
iVR as a serious game for health can offer a cost-effective
and more computationally adept option for exercise. These
systems provide a method for conveying 6-DoF information
(position and rotation), while also learning from user behavior
and movement. While a number of works have explored iVR environments for physical exercise [5], [7],
[11], we present our paper as an exploration of making these
environments more physically intelligent through machine
learning. Specifically, we leverage the integration of the Unity
Game Engine, ML-Agents, Deep Reinforcement Learning,
and a custom in-house iVR exercise game. Through these
technologies, we examine how neural network agents can
augment a playable experience where a virtual robot arm
assists user exercise masked as a task of protecting butterflies
from incoming projectiles.
A. Virtual Reality and Machine Learning
Virtual games provide controlled environments and simu-
lations for a wide range of Artificial Intelligence and Ma-
chine Learning applications. Game AI has been extensively researched across mechanical control, behavior learning, player modeling, procedural content, and assisted gameplay [15].
Applying machine learning to the virtual game domain opens
up a playground for researchers to find appropriate learning
techniques and solve various reward-based tasks [16]. For
example, Conde et al. showcased reinforcement learning for the behavioral animation of autonomous virtual agents in a town [17]. Huang et al. demonstrated imitation learning through a 2D GUI to control a MATLAB-simulated robot in sorting objects [18]. Yeh et al. explored Microsoft Kinect exercise with a Support Vector Machine (SVM) classifier for quantified
balance performance [19]. Additionally, agent learning in an
iVR environment may be especially advantageous for assistive
applications.
The computational requirements and data throughput of modern iVR systems can be leveraged for therapeutic gamification [7], [20], [21], postural analysis [22], and accurate research data collection [23]. This is important
because iVR systems must have accurate motion capture and
low latency of a user’s position and rotation from the physical
world to reduce motion sickness [24]. As a result, iVR systems
are becoming more powerful, immersive, accurate at capturing
user behavior, and affordable to the average consumer [14].
Some researchers are recognizing the potential of utilizing
machine learning and AI with iVR systems. Zhang et al. explored an iVR environment for human-demonstrated robot skill acquisition [25]. The authors describe a deep neural network policy for training teleoperated robots and illustrate that mapping learned policies through VR HMDs is challenging. Utilizing an HTC Vive, a PR2 telepresence robot, and a PrimeSense 3D camera, the authors successfully trained their neural network to control a robot by collecting users' 6-DoF poses and color-depth images of player movement. In terms of utilizing machine learning
to support player movement, we found two recent studies
through our literature review. Kastanis et al. described a method
of reinforcement learning for training virtual characters to
guide participants to a location in an iVR environment [26].
The authors used presence theory to predict uncomfortable
interpersonal distance for human players and successfully
incentivized study participants to move away from trained
virtual agents. Rovira et al. examined how reinforcement learning could be used to guide user movement in iVR by projecting a 6-DoF predictive path for user collision avoidance
[27].
While several works have utilized machine learning for games, and researchers have begun looking at iVR as a medium for human-agent learning, few works have explored agents for iVR exergaming. iVR exercises
can provide a vehicle for real-time motion capture and inverse
kinematics of player movement. Such data could enable the
analysis of confounding postural issues, such as slouched
backs and other movement biases, and could adapt the game in
real-time to maximize exercise outcome. With these previous
works in mind, we consider the following question: what if
we could have a predictive model that could inform us of our
movement trajectory in a virtual exercise game?
B. Study Goals and Contribution
The prior work discussed in this section has demonstrated
that deep reinforcement learning can enable promising pre-
dictive models for system control and user behavior. Little
work has been done in exploring machine learning from
6-DoF user exercise movement (or movement in general)
for iVR experiences. Through this project, which we call
“Illumination Butterfly (IB),” we aim to explore how deep
reinforcement learning can inform iVR exergames in terms of
user movements and game mechanics. Specifically, the goals
of this study are to:
1) Examine Deep Reinforcement Learning for a Double-
Jointed Virtual Arm to model physical exercise move-
ments through 6-DoF interaction with Immersive Virtual
Environments.
2) Explore the capabilities of Generative Adversarial Imita-
tion Learning (GAIL) and Proximal Policy Optimization
(PPO) for learning in-game physical exercises.
3) Evaluate the trained agent for cooperative and competi-
tive exercise applications between human users.
Our serious game explores neural network-driven 3DUI
interaction techniques by using two emergent machine learning
algorithms (GAIL and PPO) to see how a virtual robot arm
can both cooperatively and competitively guide users in their
movements. This project stems from previous iVR games de-
signed through the interpretation of exercise theory and human
anatomy. We expand our work from Elor et al.'s previous
exploration into serious games for upper-extremity exercise
movement: a multi-year interdisciplinary exploration between
local healthcare professionals, roboticists, game developers,
and disability learning centers at Santa Cruz, California [7],
[28]–[31]. Through leveraging machine learning, we hope to enable Project IB as a new computational experience for understanding human exercise and robotic behavior via a virtual butterfly.
This project may be a step forward for other researchers
interested in integrating “physical intelligence” via predictive
models of user movement for other iVR exergames.
II. SYSTEM DESIGN
The system in this paper is based on “Project Butterfly”
(PBF), a serious iVR game for exercise previously explored
by Elor et al [28]. We heavily modified PBF to create a
new gaming experience directed at AI guided upper extremity
exercises. Our version of PBF was developed in the Unity
2019.2.18f1 Game Engine with SteamVR 2.0 and incorporates the 2018 HTC Vive Pro, a widely adopted commercial VR system developed by HTC and Valve that uses outside-in tracking through a constellation of "lighthouse" laser base stations for pose collection in a 4 x 4 m 3D space [14], [32], [33]. The Vive has been validated in previous studies for therapeutic gamification [7], [20], [21], postural analysis [22], and accurate research data collection [23].
The objective of the game is to protect a virtual butter-
fly from inclement weather and projectiles by covering the
avatar with a translucent “bubble shield” using the HTC Vive
Controller. The player is thus required to follow the path of the butterfly to within plus or minus 0.1 meters, which enables dynamic control of the pace and position of a prescribed
exercise. The player is awarded a score point for every half
second they successfully protect the butterfly, with both audio
and haptic feedback to notify them that they were successful.
By protecting the butterfly, the player changes the world around them: meadows become brighter, trees grow, and the rain slows down. Conversely, if the butterfly is not protected, no positive feedback occurs and the world does not change. The game can
be tailored to each player’s speed and range of motion through
a dynamic evaluator interface. Previously, Elor et al. explored PBF with post-stroke and older users to analyze the feasibility of the game with exo-skeletal assistance for two exercises [28], but it was not designed or tested for the neural-network-guided upper-extremity movements and varying custom exercises reported in this paper.
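To make the scoring mechanic concrete, the sketch below restates the protection check in Python. It is an illustrative reconstruction rather than the game's actual Unity (C#) code: the 0.1 m threshold and half-second scoring interval come from the text above, while the function names, the state layout, and the assumption that the half-second window resets when protection lapses are ours.

```python
import math

PROTECT_RADIUS_M = 0.1   # bubble must stay within +/- 0.1 m of the butterfly
SCORE_INTERVAL_S = 0.5   # one score point per half second of protection

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def update_score(bubble_pos, butterfly_pos, protected_time_s, dt_s, score):
    """Accumulate protection time each frame; award a point per full interval.

    Returns the updated (protected_time_s, score, is_protected) state.
    A real implementation would also fire the audio/haptic success feedback.
    """
    is_protected = distance(bubble_pos, butterfly_pos) <= PROTECT_RADIUS_M
    if is_protected:
        protected_time_s += dt_s
        if protected_time_s >= SCORE_INTERVAL_S:
            protected_time_s -= SCORE_INTERVAL_S
            score += 1  # player successfully protected the butterfly
    else:
        # Assumption: the half-second window resets when protection lapses.
        protected_time_s = 0.0
    return protected_time_s, score, is_protected
```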
To explore the application of deep-learning agents for visu-
ally guided upper-limb exercise, we created a new modified
version of PBF, which included the following changes from
the previous version:
1) A modified “Reacher Agent,” a double-jointed arm con-
trolled by predictive torque [34], was added into the
player controller with the reward given when protecting
a virtual butterfly.
2) A training scene for 16 parallel agents and three butterfly
movements was created, as shown in Figure 1.
3) A “ghost arm” game mechanic was added for user visual
guided movements with the original PBF game modes,
and a “human vs agent” game mode was added for
competitive analysis.
To the best of our knowledge, this study is one of the first
to leverage an immersive VR HMD such as the HTC Vive
with deep reinforcement learning to examine visually assisting
agents for exergaming.
A. Machine Learning Environment and Agent Design
Project IB has been fully integrated with Unity ML-Agents,
an open-source Unity plugin that enables games and simula-
tions to serve as environments for training intelligent agents.
The experimental plugin enables a python server to train agents
in development environments through reinforcement learning,
imitation learning, neuroevolution, and other emerging Ten-
sorflow based algorithms [32], [35], [36]. We targeted upper-
extremity torque and angular momentum as metrics to predict
for our model. Having our AI model examine these metrics
at the elbow and shoulder joints is advantageous. Torque is
important as it used to describe the movement and force
produced by the muscles surrounding the joint [37]–[40].
Prior research has examined the torque of upper-body exercise
for more in-depth injury assessment; for example, Perrin et
al. demonstrated that bilateral torque enables clinicians to
more accurately set guidelines in the rehabilitation of varying
athletic groups [41]. Additionally, angular momentum provides
a metric to monitor user movement performance over several
exercises, ensuring safety and preventing overuse [42].

Fig. 1. Project IB Training Scene and AI Agents. Agents act as a double-jointed virtual arm with observations on the shoulder, elbow, and end effector joints. Sixteen agents were set up in parallel to train through the Python ML-Agents library, with an action space of +/- 1.0 for actuating pitch and roll torques on the elbow and shoulder joints, respectively. A reward of +0.01 is given to the agent for every frame the end effector successfully remains on the butterfly. The training scene tasks agents to collectively learn three exercise movements: Horizontal Shoulder Rotation, Forward Arm Raise, and Side Arm Raise.

Fig. 2. Project IB Imitation Learning and User Demonstration. A user demonstrates how to protect a butterfly. Vive Trackers are placed on the user's shoulder and elbow joints to record fixed-joint movement dynamics. The agent is set to heuristic control to observe the user's joint torques, angular momentum, and hand (bubble) position. A reward of +0.01 is given to the user for every frame the bubble successfully remains on the butterfly. The recorded demonstration is then used to augment the reward during parallel agent training with GAIL & PPO.

Several
other studies have explored the benefits of quantifying angular
momentum for robotic assistance [43], the severity of lower
body gait impairment [44], [45], and how it contributes to
whole-body muscle movement [46]. Predicting average torque and angular momentum through an AI model may provide insights into user movements and inform future assistive robotic designs, allowing Project Butterfly to be re-evaluated with exo-skeletal assistance [28], [47].
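For context on how a Python process drives such an environment, the sketch below attaches to a running training scene through the mlagents_envs low-level API and steps the parallel agents with random torques in [-1, 1] as a stand-in policy. It assumes a 2020-era ML-Agents release (attribute names such as behavior_specs and action_size changed across versions), and the rollout length is arbitrary.

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment

# Attach to a Unity editor or standalone build running the training scene.
# file_name=None waits for the editor; pass a build path for headless runs.
env = UnityEnvironment(file_name=None)
env.reset()

# One behavior is registered for the double-jointed Reacher-style agents.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(1000):  # a short illustrative rollout
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    n_agents = len(decision_steps)
    # Stand-in policy: random continuous torques in [-1, 1] per joint axis.
    actions = np.random.uniform(-1.0, 1.0, size=(n_agents, spec.action_size))
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```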
With our target predictions in mind, we chose to utilize
the Unity ML-Agents Reacher Agent and Deep Deterministic Continuous Control, as it observes and predicts agent fixed-joint dynamics to complete a given virtual task [35], [36].

Fig. 3. Project IB exercise movements for Horizontal Shoulder Rotation (HSR), Forward Arm Raise (FAR), and Side Arm Raise (SAR). Movement directions are indicated by the labels A-B-C followed by C-B-A for one repetition.

We
modified the agent to act as a double-jointed virtual arm with
specific control and observation on the shoulder, elbow, and
end effector joints. This allows our agents to collectively learn from an action space of +/- 1.0, where the agent observes joint torques, angular momentum, and butterfly position to predict shoulder and elbow torques. The agent was given a +0.01 reward for every game-engine frame update in which the bubble (end effector) successfully remained on the butterfly. Three exercises were targeted for the agent to learn: Horizontal Shoulder Rotation (HSR), Forward Arm Raise (FAR), and Side Arm Raise (SAR), as shown in Figure 3. These movements were chosen because they are conventional movement modalities required for active daily living [28], [47].
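The sketch below summarizes this agent interface: the observation fields, the clipped torque actions, and the +0.01 per-frame reward. It is a schematic Python rendering of behavior the paper implements in Unity; the type and function names are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

FRAME_REWARD = 0.01  # reward per frame the end effector stays on the butterfly

@dataclass
class ArmObservation:
    """What each agent observes every frame (schematic field names)."""
    shoulder_torque: np.ndarray   # current torque at the shoulder joint
    elbow_torque: np.ndarray      # current torque at the elbow joint
    angular_momentum: np.ndarray  # angular momentum of the arm segments
    butterfly_pos: np.ndarray     # target position to track

def apply_action(raw_action: np.ndarray) -> np.ndarray:
    """Clamp the policy output to the +/- 1.0 action space before it is
    scaled into pitch/roll torques on the shoulder and elbow joints."""
    return np.clip(raw_action, -1.0, 1.0)

def frame_reward(end_effector_on_butterfly: bool) -> float:
    """+0.01 per game-engine frame update that the bubble/end effector
    remains on the butterfly; otherwise no reward."""
    return FRAME_REWARD if end_effector_on_butterfly else 0.0
```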
To examine agent learning, we chose to explore two learning
algorithms: Proximal Policy Optimization (PPO) and Gen-
erative Adversarial Imitation Learning (GAIL). PPO is a
policy gradient method of reinforcement learning that samples parallel agent interactions with an environment and optimizes the agent's objective through stochastic gradient descent [48]. GAIL is an imitation learning method where
inverse reinforcement learning is applied to augment the policy
reward signal through a recorded expert demonstration [49].
In short, GAIL provides a medium for the agent to imitate
the user’s exercise, and PPO helps the agent find the maximal
reward policy to protect the butterfly.
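For reference, the clipped surrogate objective that PPO maximizes, reproduced from Schulman et al. [48], is:

```latex
L^{CLIP}(\theta) =
  \hat{\mathbb{E}}_t\!\left[ \min\!\left( r_t(\theta)\,\hat{A}_t,\;
  \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t \right) \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
```

where the hat-A term is the estimated advantage at timestep t and epsilon is the clipping coefficient. In the "GAIL + PPO" configuration described next, the per-step reward that PPO optimizes is the extrinsic game reward plus a discriminator-based imitation reward weighted by the configured GAIL strength.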
B. Agent Training
Two training sessions were examined through Project IB:
parallel agent training (as shown in Figure 1) with PPO only,
and PPO with GAIL. We examined the PPO only model to
determine the agent's performance when solving for maximal reward, and the GAIL + PPO model to see if user demonstrations can influence the training process and/or personalize agents
to the user’s movement biases. For GAIL, a demonstration
was recorded for each butterfly exercise movement by a
human demonstrator, as shown in Figure 2. To record human
demonstration, a user was tasked with demonstrating to the
agent how to protect the butterfly through arm movement.
Vive Trackers were placed at the user’s elbow and shoulder
joints for agent observation of movement dynamics. This was
achieved by creating virtual fixed joints in Unity and inputting rigid-body torque and angular momentum into the heuristic agent model. Users demonstrated ideal movements to the agent for about two minutes per exercise.

Fig. 4. Project IB Training Results from TensorBoard for one million steps. Results are viewed from the cumulative 16 agents trained in parallel for the three PBF exercises. The "PPO Only" model attained the highest reward, with an 11.4% increase compared to the "GAIL + PPO" model. Darker lines indicate smoothed results and lighter lines indicate raw data.
Training was done with sixteen agents in parallel, as shown in Figure 1. Model parameters were tuned in each trainer config.yaml file as recommended for the Unity ML-Agents v3.X.X plugin [35], [36]. The training parameters differed between "PPO Only" and "GAIL + PPO," where GAIL was added as a parameter to the PPO reward signal with a strength of 1%. Full tuning parameters and trained models can be found at https://github.com/avivelor/UnityMachineLearningForProjectButterfly. Each training model was run for one million steps at a time scale of 100 through the Unity ML-Agents API. This was equivalent to about a couple of hours of training per model, during which agents attempted to learn Horizontal Shoulder Rotation, Forward Arm Raise, and Side Arm Raise.
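To illustrate how GAIL enters as a 1%-strength reward signal, the snippet below mirrors the structure of such a trainer configuration as a Python dict. The behavior name, demo path, and all hyperparameter values are placeholders following the reward-signal schema of 2020-era ML-Agents; the actual tuned configuration files live in the repository linked above.

```python
# Schematic "GAIL + PPO" trainer configuration, written as a Python dict
# mirroring the trainer config.yaml schema. Every value is a placeholder
# except the reward-signal structure: extrinsic reward at full strength,
# GAIL at 1% strength, as described in the text.
gail_ppo_config = {
    "ReacherButterflyBehavior": {      # hypothetical behavior name
        "trainer": "ppo",
        "max_steps": 1_000_000,        # one million steps, as in the paper
        "time_horizon": 1000,          # placeholder
        "reward_signals": {
            "extrinsic": {
                "strength": 1.0,
                "gamma": 0.99,
            },
            "gail": {
                "strength": 0.01,      # the 1% GAIL strength from the text
                "gamma": 0.99,
                "demo_path": "Demos/ButterflyExercise.demo",  # hypothetical
            },
        },
    },
}
```

For the "PPO Only" run, the gail entry is simply omitted, leaving the extrinsic protection reward as the sole signal.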
C. Training Results
Training results between the two models can be seen in
Figure 4. Both models demonstrated a promising learning rate through one million steps for the 16 parallel agents.

Fig. 5. Project IB Cooperative Gameplay with Trained Agent. The user controls the bubble shield through the controller as a transparent "ghost" arm appears through the user to help guide and predict user movement in protecting the butterfly.
However, the “PPO Only” model attained the highest reward
with an 11.4% increase compared to the “GAIL + PPO”
model. This may imply that the human demonstrator was imperfect in gameplay, and/or that the motion dynamics recorded through the Vive Trackers require higher precision. The human demonstrator in Figure 2 attained a mean score of 48 across all three movements, which may suggest that the GAIL + PPO model successfully imitated the demonstrator to the best of its ability. While the imitation learning model
did receive less reward, the GAIL + PPO model may be
useful in understanding user movement bias and weakness.
Personalizing agents from user demonstrations may open up
pathways to autonomously adjust exercise difficulty around
user day-to-day movement capabilities. Subsequently, a future evaluation must be done with a larger number of users to understand the potential for personalization and for tuning user movement with GAIL as a reward parameter during training.
For the PPO Only model, deep reinforcement learning alone demonstrated that PPO is highly capable of learning exercise movements by protecting the butterfly. When comparing the results of Figure 4 to the Reacher Agent reported by Juliani et al. in the Unity ML-Agents Toolkit, the PPO Only model for Project IB received a 41.2% increase in cumulative reward [36]. This may suggest that games like PBF are an ideal environment for utilizing double-jointed movements, as PBF was designed for upper-extremity exercise by Elor et al. [28].
With training complete, the double-jointed arm for Project IB was used to provide visual guidance for iVR exercise with PBF by overlaying the IB Agent as a transparent "ghost arm," as shown in Figure 5. We then performed a small pilot study to see how the PPO Only model competed with human players.
III. USER STUDY
For this study’s scope, we sought to explore how our trained
PPO agent would compare to human players. Four users
from the University of California Santa Cruz were recruited
to compete against the trained “PPO only” model in PBF.
Participants were adult college students from UCSC (one female, three males; mean age 23.5 years, standard deviation 1.73 years). Each exercise was played for one
minute at ten repetitions per minute. A score point is awarded
for every crystal the user blocks with the bubble shield on
the butterfly. A research administrator was always present to
monitor user experience and followed a strict written protocol
when interacting with users. Specifically, user testing sessions
consisted of the following protocol steps:
1) Preparation: The study administrator sanitized the iVR
equipment, made sure all equipment was fully charged,
and personally ran a session of Project IB to check the
quality of motion capture data communication.
2) Introduction: The administrator instructed the user to
remain still and relax. The user was verbally informed
about the three exercise movements and the goal of
protecting the butterfly. The user was then given a one
minute tutorial for each exercise to protect the butterfly
with the cooperative IB Agent “ghost arm.” An example
of this stage can be seen in Figure 5.
3) Rest: The user was instructed to relax for 90 seconds
before performing the exercise with Project IB. This was
done before every new exercise was administered.
4) Exercise: Users completed 60 seconds of gameplay
while competing against the Project IB agent, and the
user’s final game score was recorded. Upon completion
of one set, the Rest stage was repeated. An example
of this stage can be seen in Figure 6. This stage was
repeated until the user successfully completed all three
exercises during competition with the agent.
IV. RESULTS AND DISCUSSION
Each of the four users from the pilot user study successfully
competed with the Project IB agent. The resulting final scores
between the users and agent can be seen in Table I. The Project
IB agent was able to complete exercises just as well as (and even slightly better than) the users for the Horizontal Shoulder Rotation movement. Nevertheless, gameplay indicated that the users were able to slightly outperform the agent for the Forward Arm Raise and Side Arm Raise exercises. Side Arm Raise appeared to have the highest standard deviation for the
agent and the users, indicating a mixed performance. All users
reported that they felt the movements were “tiring” at the speed
of ten repetitions per minute (requiring a slow and controlled
movement in following the butterfly).
Fig. 6. Project IB Competitive Gameplay with Trained Agent. The user competes with the Project IB agent to collect the most crystals while protecting the butterfly. The agent is set to the right of the user and is tasked with protecting its own butterfly. Crystal paths and the human-versus-agent avatar representation are shown in the scene and game views.

TABLE I
Results in [mean (standard deviation)] format for human-versus-agent gameplay. Users were adult college students from UCSC (N=4, F=1, M=3, age = 23.5 +/- 1.73). Each exercise was played for one minute at 10 reps per minute. One score point is awarded for every crystal the user blocks with the bubble shield on the butterfly.

Exercise                     | User Score  | Agent Score
Horizontal Shoulder Rotation | 46.6 (1.15) | 47.3 (0.58)
Forward Arm Raise            | 45.6 (0.58) | 44.0 (1.00)
Side Arm Raise               | 33.3 (4.04) | 31.0 (1.73)

While the initial results of Project IB are promising, there are many limitations to consider. More users must compete against both the "PPO Only" and the "PPO + GAIL" models to understand the efficacy of these models, and unlearned exercises should also be explored. More demonstrations and imitation-learning tuning parameters should be explored with GAIL, such that
each model is tailored to each user’s movement capabilities
for a normalized comparison. Furthermore, a more in-depth
investigation must be done to understand the effects of the
cooperative “ghost arm” agent to examine if it is assistive
from a presence, immersion, embodiment, and self-reported
performance perspective. For example, how does the ghost arm
compare to the visual guidance from crystals or no guidance at
all? These limitations are being considered for future studies
with our pilot data in mind.
V. CONCLUSION
Through this paper, we presented a novel game mechanic for
iVR exercise games that employed deep reinforcement learn-
ing and immersive virtual environments to learn from and help
guide double-jointed exercise movements. We demonstrated
how to convert a previously explored iVR exercise game for
machine learning agents. We showcased a methodology of uti-
lizing Generative Adversarial Imitation Learning and Proximal
Policy Optimization to exercise with virtual butterflies. We
examined two differing models for training our agents, with
and without imitation learning. We demonstrated a promising
learning rate through training 16 agents in parallel throughout
one million steps. We evaluated one of the trained models with
a set of four young adults to explore competitive applications
with the agent as a game mechanic. The results suggest that, with the right training parameters, the model can compete with and match human-level performance in iVR for some exercises after a single training session.
In the future, we hope to explore unlearned exercises and validate a greater range of deep learning models through more extensive user testing to examine their effects on user performance, immersion, and self-reported perception. Our long-term goal is to develop an at-home recovery game that uses machine learning to adapt exercise difficulty and assistance. Subsequently, we plan to explore more machine learning algorithms and input parameters, such as biofeedback and musculoskeletal simulation, to inform gameplay progression.
The incorporation of predictive runtime models to identify muscle weaknesses may further aid in customizing movements for an individual user, helping maximize their exercise by ensuring the targeted muscles are used for a given movement. To
this end, there are more butterflies to learn from as we continue
working towards achieving greater physical intelligence.
ACKNOWLEDGMENT
We thank Professor Angus Forbes of UC Santa Cruz for
his advice during this project and the many participants who
volunteered for this study.
REFERENCES
[1] L. M. Howden and J. A. Meyer, Age and sex composition, 2010. US
Department of Commerce, Economics and Statistics Administration,
US . . . , 2011.
[2] CDC, "BRFSS survey data and documentation 2017," Centers for Disease Control and Prevention, 2017.
[3] H. Sandler, Inactivity: physiological effects. Elsevier, 2012.
[4] P. Z. Pearce, “Exercise is medicine™,” Current sports medicine reports,
vol. 7, no. 3, pp. 171–175, 2008.
[5] D. Corbetta, F. Imeri, and R. Gatti, “Rehabilitation that incorporates
virtual reality is more effective than standard rehabilitation for improving
walking speed, balance and mobility after stroke: a systematic review,”
Journal of physiotherapy, vol. 61, no. 3, pp. 117–124, 2015.
[6] H. Mousavi Hondori and M. Khademi, “A review on technical and clin-
ical impact of microsoft kinect on physical therapy and rehabilitation,”
Journal of Medical Engineering, vol. 2014, 2014.
[7] A. Elor, M. Teodorescu, and S. Kurniawan, "Project star catcher: A novel immersive virtual reality experience for upper limb rehabilitation," ACM Transactions on Accessible Computing (TACCESS), vol. 11, no. 4, p. 20, 2018.
[8] H. G. Hoffman, W. J. Meyer III, M. Ramirez, L. Roberts, E. J.
Seibel, B. Atzori, S. R. Sharar, and D. R. Patterson, “Feasibility of
articulated arm mounted oculus rift virtual reality goggles for adjunctive
pain control during occupational therapy in pediatric burn patients,"
Cyberpsychology, Behavior, and Social Networking, vol. 17, no. 6, pp.
397–401, 2014.
[9] H. G. Hoffman, G. T. Chambers, W. J. Meyer, L. L. Arceneaux, W. J.
Russell, E. J. Seibel, T. L. Richards, S. R. Sharar, and D. R. Patterson,
“Virtual reality as an adjunctive non-pharmacologic analgesic for acute
burn pain during medical procedures,” Annals of Behavioral Medicine,
vol. 41, no. 2, pp. 183–191, 2011.
[10] P. J. Standen and D. J. Brown, “Virtual reality in the rehabilitation
of people with intellectual disabilities,” Cyberpsychology & behavior,
vol. 8, no. 3, pp. 272–282, 2005.
[11] J. Diemer, G. W. Alpers, H. M. Peperkorn, Y. Shiban, and
A. Mühlberger, "The impact of perception and presence on emotional
reactions: a review of research in virtual reality,” Frontiers in psychology,
vol. 6, 2015.
[12] J. Crosbie, S. Lennon, J. Basford, and S. McDonough, “Virtual reality
in stroke rehabilitation: still more virtual than real,” Disability and
rehabilitation, vol. 29, no. 14, pp. 1139–1146, 2007.
[13] P. J. Costello, Health and safety issues associated with virtual reality:
a review of current literature. Advisory Group on Computer Graphics,
1997.
[14] M. Beccue and C. Wheelock, “Research report: Virtual reality
for consumer markets,” Tractica Research, Tech. Rep., Q4 2016.
[Online]. Available: https://www.tractica.com/research/virtual-reality-
for-consumer-markets/
[15] G. N. Yannakakis and J. Togelius, “A panorama of artificial and com-
putational intelligence in games,” IEEE Transactions on Computational
Intelligence and AI in Games, vol. 7, no. 4, pp. 317–335, 2014.
[16] J. Fürnkranz, "Machine learning in games: A survey," Machines that
learn to play games, pp. 11–59, 2001.
[17] T. Conde, W. Tambellini, and D. Thalmann, “Behavioral animation
of autonomous virtual agents helped by reinforcement learning,” in
International Workshop on Intelligent Virtual Agents. Springer, 2003,
pp. 175–180.
[18] D.-W. Huang, G. Katz, J. Langsfeld, R. Gentili, and J. Reggia, “A
virtual demonstrator environment for robot imitation learning,” in 2015
IEEE International Conference on Technologies for Practical Robot
Applications (TePRA). IEEE, 2015, pp. 1–6.
[19] S.-C. Yeh, M.-C. Huang, P.-C. Wang, T.-Y. Fang, M.-C. Su, P.-Y. Tsai,
and A. Rizzo, “Machine learning-based assessment tool for imbalance
and vestibular dysfunction with virtual reality rehabilitation system,"
Computer methods and programs in biomedicine, vol. 116, no. 3, pp.
311–318, 2014.
[20] A. Borrego, J. Latorre, M. Alcañiz, and R. Llorens, "Comparison of oculus rift and htc vive: feasibility for virtual reality-based exploration, navigation, exergaming, and rehabilitation," Games for health journal,
vol. 7, no. 3, pp. 151–156, 2018.
[21] S. M. Palaniappan and B. S. Duerstock, “Developing rehabilitation
practices using virtual reality exergaming,” in 2018 IEEE International
Symposium on Signal Processing and Information Technology (ISSPIT).
IEEE, 2018, pp. 090–094.
[22] F. Soffel, M. Zank, and A. Kunz, “Postural stability analysis in virtual
reality using the htc vive,” in Proceedings of the 22nd ACM Conference
on Virtual Reality Software and Technology. ACM, 2016, pp. 351–352.
[23] D. C. Niehorster, L. Li, and M. Lappe, “The accuracy and precision of
position and orientation tracking in the htc vive virtual reality system for
scientific research,” i-Perception, vol. 8, no. 3, p. 2041669517708205,
2017.
[24] H. K. Kim, J. Park, Y. Choi, and M. Choe, “Virtual reality sickness
questionnaire (vrsq): Motion sickness measurement index in a virtual
reality environment," Applied ergonomics, vol. 69, pp. 66–73, 2018.
[25] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and
P. Abbeel, “Deep imitation learning for complex manipulation tasks from
virtual reality teleoperation,” in 2018 IEEE International Conference on
Robotics and Automation (ICRA). IEEE, 2018, pp. 1–8.
[26] I. Kastanis and M. Slater, “Reinforcement learning utilizes proxemics:
An avatar learns to manipulate the position of people in immersive
virtual reality," ACM Transactions on Applied Perception (TAP), vol. 9,
no. 1, pp. 1–15, 2012.
[27] A. Rovira and M. Slater, “Reinforcement learning as a tool to make
people move to a specific location in immersive virtual reality,” Inter-
national Journal of Human-Computer Studies, vol. 98, pp. 89–94, 2017.
[28] A. Elor, S. Lessard, M. Teodorescu, and S. Kurniawan, “Project butterfly:
Synergizing immersive virtual reality with actuated soft exosuit for
upper-extremity rehabilitation,” in 2019 IEEE Conference on Virtual
Reality and 3D User Interfaces (VR). IEEE, 2019, pp. 1448–1456.
[29] A. Elor, S. Kurniawan, and M. Teodorescu, “Towards an immersive
virtual reality game for smarter post-stroke rehabilitation,” in 2018 IEEE
International Conference on Smart Computing (SMARTCOMP). IEEE,
2018, pp. 219–225.
[30] A. Elor, M. Powell, E. Mahmoodi, N. Hawthorne, M. Teodorescu,
and S. Kurniawan, “On shooting stars: Comparing cave and hmd
immersive virtual reality exergaming for adults with mixed ability," ACM
Transactions on Computing for Healthcare.
[31] A. Elor and A. Song, “isam: Personalizing an artificial intelligence
model for emotion with pleasure-arousal-dominance in immersive virtual
reality,” in 2020 15th IEEE International Conference on Automatic Face
and Gesture Recognition (FG 2020)(FG), pp. 583–587.
[32] Unity Technologies, “Unity real-time development platform — 3d, 2d
vr ar,” Internet: https://unity.com/ [Jun. 06, 2019], 2019.
[33] HTC-Corporation, “Vive vr system,” Vive, November 2018, https://www.
vive.com/us/product/vive-virtual-reality-system/.
[34] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa,
D. Silver, and D. Wierstra, “Continuous control with deep reinforcement
learning,” arXiv preprint arXiv:1509.02971, 2015.
[35] M. Lanham, Learn Unity ML-Agents–Fundamentals of Unity Machine
Learning: Incorporate new powerful ML algorithms such as Deep
Reinforcement Learning for games. Packt Publishing Ltd, 2018.
[36] A. Juliani, V.-P. Berges, E. Vckay, Y. Gao, H. Henry, M. Mattar, and
D. Lange, “Unity: A general platform for intelligent agents,” arXiv
preprint arXiv:1809.02627, 2018.
[37] J. M. Burnfield, K. R. Josephson, C. M. Powers, and L. Z. Rubenstein,
“The influence of lower extremity joint torque on gait characteristics in
elderly men,” Archives of physical medicine and rehabilitation, vol. 81,
no. 9, pp. 1153–1157, 2000.
[38] L. Ballaz, M. Raison, C. Detrembleur, G. Gaudet, and M. Lemay, “Joint
torque variability and repeatability during cyclic flexion-extension of the
elbow," BMC sports science, medicine and rehabilitation, vol. 8, no. 1,
p. 8, 2016.
[39] A. K. Gillawat and H. J. Nagarsheth, “Human upper limb joint torque
minimization using genetic algorithm,” in Recent Advances in Mechan-
ical Engineering. Springer, 2020, pp. 57–70.
[40] K. Kiguchi and Y. Hayashi, “An emg-based control for an upper-limb
power-assist exoskeleton robot," IEEE Transactions on Systems, Man,
and Cybernetics, Part B (Cybernetics), vol. 42, no. 4, pp. 1064–1071,
2012.
[41] D. H. Perrin, R. J. Robertson, and R. L. Ray, “Bilateral isokinetic peak
torque, torque acceleration energy, power, and work relationships in
athletes and nonathletes,” Journal of Orthopaedic & Sports Physical
Therapy, vol. 9, no. 5, pp. 184–189, 1987.
[42] J. Hamill and K. M. Knutzen, Biomechanical basis of human movement.
Lippincott Williams & Wilkins, 2006.
[43] M. T. Farrell and H. Herr, “Angular momentum primitives for human
turning: Control implications for biped robots,” in Humanoids 2008-8th
IEEE-RAS International Conference on Humanoid Robots. IEEE, 2008,
pp. 163–167.
[44] S. M. Bruijn, P. Meyns, I. Jonkers, D. Kaat, and J. Duysens, “Control
of angular momentum during walking in children with cerebral palsy,"
Research in developmental disabilities, vol. 32, no. 6, pp. 2860–2866,
2011.
[45] C. Nott, R. R. Neptune, and S. Kautz, “Relationships between frontal-
plane angular momentum and clinical balance measures during post-
stroke hemiparetic walking,” Gait & posture, vol. 39, no. 1, pp. 129–134,
2014.
[46] R. R. Neptune and C. P. McGowan, “Muscle contributions to whole-
body sagittal plane angular momentum during walking,” Journal of
biomechanics, vol. 44, no. 1, pp. 6–12, 2011.
[47] M. Ora Powell, A. Elor, M. Teodorescu, and S. Kurniawan, "Openbutterfly: Multimodal rehabilitation analysis of immersive virtual reality for physical therapy," American Journal of Sports Science and Medicine,
vol. 8, no. 1, pp. 23–35, 2020.
[48] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Prox-
imal policy optimization algorithms,” arXiv preprint arXiv:1707.06347,
2017.
[49] J. Ho and S. Ermon, “Generative adversarial imitation learning,” in
Advances in neural information processing systems, 2016, pp. 4565–
4573.
... Additional users must be included to assess the efficacy of the "PPO Only" and "PPO + GAIL" models and investigate unlearned exercises. [5] ...
... The middle ray detects the player -0. 5 Hitting a wall 1 ...
... Getting closer/further from the player -0. 5 Taking too long to win ...
... Facts like these demonstrate the need for investment in game design development in clinical environments. Considering that stimulating personalised interfaces, and with detailed feedback systems, added to options for modulating the difficulties of activities within playful immersive virtual reality environments, have been promoting greater engagement in autonomy while performing tasks, and increasing the chances for new tasks to be performed (Elor and Kurniawan 2020). ...
... Currently, works such as by Elor and Kurniawan (2020) have been using deep reinforcement learning by integrating game engine systems (Unity Game Engine); ML-Agents together with serious games, such as immersive virtual reality exergames with the HTC Vive Head-Mounted Display, to generate machine learning capable of creating promising predictive models, to control the system and understand user behaviour. Specifically, this technology utilized neural network agents to enhance the gameplay experience, in which a virtual robot arm assisted the user in exercising to protect butterflies from projectiles. ...
... Through recent studies (Elor and Kurniawan 2020;Willwacher and Korn 2021) there is a concern with precision in capturing and analysing human movement data, through an automated and increasingly complex AI, to produce exercises with an adapted level of difficulty to the possibilities and interests of the patient. This requires a level of detail through a system of rules developed by experienced professionals (experts) from different areas, to produce through AI, a decision tree or a machine learning system. ...
Chapter
This chapter describes the role artificial intelligence (AI) and augmented reality (AR) play in preventing natural disasters such as pandemics, earthquakes, cyclones, etc. Disaster relief agencies can process large volumes of fragmented and complex data with the help of AI. Consequently, it will be able to generate valuable information that can be acted upon more quickly. Despite its infancy, AR will soon become a key digital tool across many industries. Most companies are implementing technologies such as Cognitive AI, Big Data, Augmented Reality, and Cloud Computing. In the recent pandemic situation, AR has been used for remote assistance, diagnostics, checklists, and training. Enhancing e-health experiences by combining AR and AI can also help resolve legal disputes. As part of its legislative process, the European Union (EU) has prioritized this awareness. Due to this, it is vital to emphasize that these new technologies during disaster emergency management are also supported by European legal standards.
... 7) Quiz: Games of questions and answers, in which one or more players earn points based on their answers [58]. 8) Simulation: Games that seek to imitate some area of real life giving primary attention to realism [59], [60], [61], [62], [63], [64], [65], [66], [67], [68], [69], [70], [71], [72], [73], [74]. 9) Maze: Games in which the player or the players must go through a maze to earn points or reach a specific point [34], [75], [76]. ...
... It should be noted that games in several of the selected publications were defined as serious games. Although this category is not related to a single genre, in the calculation of our study, they represent 16% of the total, with 12% corresponding to the Simulation genre [59], [60], [62], [63], [64], [65], [66], [71], [73], and 4% to Education [56], [57], [74]. ...
... 1) Game/Physics Engine: Game and physics engines are software specialised in 2D or 3D rendering and provide easy tools for physics simulation and collision detection, animation, graphic scenarios, etc. [18], [19], [20], [22], [24], [27], [28], [38], [43], [45], [47], [48], [52], [54], [57], [59], [60], [62], [63], [67], [69], [72], [73], [88], [90], [91]. ...
Article
Full-text available
italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">Context : Games are a well-established scenario to test AI and Multi-Agent Systems (MAS) proposals due to their popularity and defiance. However, there is no big picture of the application of this technology to games, the evolution of the kind of problem tackled, or the game scenarios in which agents have been experimented. Objective : To perform a systematic mapping to characterise the state of the art in the field of MAS applied to virtual games and to identify trends, strengths, and gaps for further research. Method: A Systematic Mapping Study has been conducted to find primary studies in the field. A search was performed on title, abstracts, and keywords, whilst classification, data extraction, and further analysis were performed according to specific criteria focused on MAS papers with experimentation and evidence in a game scenario. Results : 78 studies published between 1998 and 2021 were found. Studies have been classified according to the MAS problem faced and the agent reasoning strategy. We detect that Machine Learning is the most common AI technique for MAS in games, considering both reinforcement learning and evolutionary techniques. MAS are used in a variety of gaming genres, especially in Real-Time Strategy (RTS), Sports and Simulation. Conclusions : RTS and Sports games are well-suited for concrete MAS problems such as multi-agent planning and task allocation. Expanding evidence and experimentation on other aspects related to scalability and usability issues is discussed. Those MAS problems and experiments that remain slightly modelled on games or are not thoroughly studied yet have been also identified.
... The scene and game views display crystal routes as well as avatars of people and agents. [71]. (d) The 'Ant-Man Vision' experience's method [72]. ...
... In the context of home rehabilitation games, Aviv Elor et al. developed an upper limb exercise component that assists users in learning and guiding their exercise movements [71]. To personalize the rehabilitation strategy and adapt to the exercise difficulty and assistance, they employed a technique called generative imitation learning (GAIL) and proximal strategy optimization (PPO) with the use of virtual butterflies, as depicted in Figure 13(c). ...
Article
Full-text available
The integration of artificial intelligence (AI) and virtual reality (VR) has expanded the possibilities of research in various scientific fields. Virtual reality simulations driven by artificial intelligence has revolutionary significance in the fields of education, healthcare, and entertainment. However, there is a lack of comprehensive investigation to systematically summarize the basic characteristics and development trajectory of metaverse visual content generated by artificial intelligence. This survey focused on the construction of intelligent metaverse scene content, aiming to bridge this gap by studying the application of artificial intelligence in content generation. It explores scene content generation, simulation biology, personalized content, and intelligent agents. By analyzing the current situation in this field and identifying common features, this survey provides a detailed description of the methods for constructing intelligent scenes in the metaverse. The main contribution of this study is a comprehensive analysis of the current situation of intelligent visual content production in the metaverse, highlighting emerging trends. It aims to motivate researchers to use artificial intelligence to enhance VR experience and promote the creation of immersive metaverse simulation scenarios. The discussion on methods for constructing intelligent scene content in the metaverse suggests that, in the era of intelligence, it has the potential to become the dominant approach for content creation in metaverse scenes.
... By creating an experimental setup, this research proposed the use of immersive learning environment for training security agents for emergency response. Elor and Kurniawan (2020) used deep reinforcement learning in immersive VR and presented a novel game mechanic for exercise games that suggested human-level performance is possible with agents in immersive environment if we use right parameters. Jacobson et al. (2008) proposed the implementation of multi-user immersive learning experiences with intelligent agents to support the learning gain and engagements. ...
... With an average of score of 5.33, participants think machine learning can help in self-directed learning in immersive environments. This result has endorsed the previous studies where machine learning and artificial agents are proposed for immersive learning (Dyulicheva & Glazieva, 2022;Elor & Kurniawan, 2020). ...
Article
Full-text available
Integration of extended reality (XR) in education is becoming popular to change the traditional classroom with immersive learning environments. The adoption of immersive learning is accelerating as an innovative approach for science and engineering subjects. With new powerful interaction techniques in XR and the latest developments in artificial intelligence, interactive and self-directed learning are becoming important. However, there is a lack of research exploring these emerging technologies research with kinesthetic learning or “hands-one learning" as a pedagogical approach using real-time hand interaction and agent-guided learning in immersive environment. This paper proposes a novel approach that uses machine learning agents to facilitate interactive kinesthetic learning in science and engineering education through real-time hand interaction in virtual world. To implement the following approach, this paper uses a chemistry-related case study and presents a usability evaluation conducted with 15 expert reviewers and 2 subject experts. NASA task load index is used for cognitive workload measurement, and the technology acceptance model is used for measuring perceived ease of use and perceived usefulness in the evaluations. The evaluation with expert reviewers proposed self-directed learning using trained agents can help in the end-user training in learning technical topics, and controller-free hand interaction for kinesthetic tasks can improve hands-on learning motivation in virtual laboratories. This success points to a novel research area where agents embodied in an immersive environment using machine learning techniques can forge a new pedagogical approach where they can act as both teachers and assessors.
... We put our mechanisms to the test by having four people work together to keep a digital butterfly safe, with the aid of an agent that may appear both as a friendly "ghost arm" and a fierce foe. Based on our findings, deep learning bots may be useful for learning game tasks and offering players fresh perspectives [7]. This study investigates the feasibility of developing a virtual reality (VR) game for children with Down syndrome who are confined to wheelchairs. ...
Article
Full-text available
Some of the most significant computational ideas in neuroscience for learning behavior in response to reward and penalty are reinforcement learning algorithms. This technique can be used to train an artificial intelligent (AI) agent to serve as a virtual assistant and a helper. The goal of this study is to determine whether combining a reinforcement learning-based Virtual AI assistant with play therapy. It can benefit wheelchair-bound youngsters with Down syndrome. This study aims to employ play therapy methods and Reinforcement Learning (RL) agents to aid children with Down syndrome and help them enhance their abilities like physical and mental skills by playing games with them. This Agent is designed to be smart enough to analyze each patient's lack of ability and provide a specific set of challenges in the game to improve that ability. Increasing the game's difficulty can help players develop these skills. The agent should be able to assess each player's skill gap and tailor the game to them accordingly. The agent's job is not to make the patient victorious but to boost their morale and skill sets in areas like physical activities, intelligence, and social interaction. The primary objective is to improve the player's physical activities such as muscle reflexes, motor controls and hand-eye coordination. Here, the study concentrates on the employment of several distinct techniques for training various models. This research focuses on comparing the reinforcement learning algorithms like the Deep Q-Learning Network, QR-DQN, A3C and PPO-Actor Critic. This study demonstrates that when compared to other reinforcement algorithms, the performance of the AI helper agent is at its highest when it is trained with PPO-Actor Critic and A3C. The goal is to see if children with Down syndrome who are wheelchair-bound can benefit by combining reinforcement learning with play therapy to increase their mobility.
Chapter
This chapter introduces Augmented Reality (AR) as one of the most prominent technologies promoting immersive and sensorial experiences. The research aims to describe how Artificial Intelligence (AI) has been used through game engines in complex feedback systems (Deep Learning), as well as customizing efficient models to engage and monitor human interaction in an immersive way in activities such as health and well-being. It was possible to observe that different fields such as military training, education, cultural exhibition, and many others, are applying activities and developing many interactions mediated through AR. In addition, with the rise of wearables, it is clear there is a growing need to expand mechanisms that make the experience more realistic and interactive. One of the possible proposals would be the expansion of immersive environments with AR projection. Another growing area is entertainment, bringing new connections with Virtual Reality (VR) in the newest generation of exergames. The intention is to connect body movements with games through the sensorial immersion of AR, empowering new possibilities for the evolution of video games. This chapter presents a review of literature with the initial experiences involving AR and exergames through Deep Learning and to discuss the possible future of this emerging connection of sensoriality and body movements, supported by the AR tools and devices.
Conference Paper
Full-text available
Amblyopia is the most common neurological eye disorder worldwide, decreasing vision in approximately 1-5% of the global population. To reduce this loss of visual acuity, occlusion therapy is often beneficial for people with Amblyopia, yet this conventional rehabilitation method often suffers from low compliance and adherence rates due to its repetitive nature. Im-mersive Virtual Reality (iVR) applied to occlusion therapy games has a unique potential to engage players and stimulate their visual acuity. Advances in modern untethered Head-Mounted Display (HMD) systems that increase the accessibility of iVR open up new opportunities for more significant serious games interventions with Amblyopia. To this end, this paper investigates Project Star Catcher, a novel serious iVR HMD exergame refactored as a short 3-minute gaze-based color-matching intervention, and examined the game's effects on the vision of adults with Amblyopia. We present a pilot study [N=51 adults with Amblyopia] that measures changes in LogMAR visual acuity between the serious game (moving a drone to catch shooting stars) vs placebo (moving a drone in an empty starry sky) and younger adults (under 39 years old) vs older adults (39+ years old) from a mean age split comparison between users. Our results suggest that 3-minutes of serious iVR gameplay with experiences such as Project Star Catcher can significantly improve near distance visual acuity by over a mean of 6.2 letters when compared to placebo. We also found that older adults improve their near-distance visual acuity seven times greater than younger adults from gameplay. This paper concludes with discussion and considerations on utilizing iVR HMD serious games for visual acuity and Amblyopia.
Conference Paper
Full-text available
Robots are being taught by increasingly broad populations of people who provide training data for machine learning algorithms. Many studies over the past decade have begun demonstrating reproducible robot teaching methodologies and have highlighted benefits in human-robot interaction (HRI). However, there have been few investigations of what it is like for the people teaching these robots. In this study, we consider how teaching a skill to a robot arm performing a reaching task (as opposed to observing the robot self-learning) influences a user's emotional experience and perceptions of the robot. In a 2×2 experiment (N=160), we varied the agent's learning technique (user reinforcement feedback or robot self-learning) and expressiveness (static agent face or performance-based valence expression with head following), using an online WebGL virtual environment to enable remote HRI. Our results demonstrate that users experience significantly more trust, believability, and emotional response when teaching the robot than when observing it learn, effects that can be amplified by agent expressiveness.
Conference Paper
Full-text available
Emotion, a crucial element of mental health, is not often explored in the field of immersive Virtual Reality (iVR). Enabling personalized affective iVR experiences may be incredibly useful for the expansion and evaluation of serious games. To further this direction of research, we present a playable iVR experience in which the user evaluates the emotion of images through an immersive Self-Assessment Manikin (iSAM). This game explores a pilot system for efficient online fine-tuning of a user's Pleasure-Arousal-Dominance (PAD) emotional model using personalized deep learning. We discuss adapting the International Affective Picture System (IAPS), in which our Artificial Intelligence (AI) model responds with a personalized image after learning from ten user-supplied answers during an iVR session. Lastly, we evaluated our iVR experience in an initial pilot study of four users. Our preliminary results suggest that iSAM can successfully learn from user affect to better predict a 'happy' personalized image than the static base model.
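The paper's model architecture is not given here, so the following is only a minimal sketch of the idea of per-user fine-tuning from ten in-session ratings, using stand-in feature vectors and a small scikit-learn regressor rather than the study's own pipeline.

```python
# Minimal sketch, assuming stand-in features and model; the study's pipeline differs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X_user = rng.random((10, 128))              # hypothetical image feature vectors
y_user = rng.uniform(1, 9, (10, 3))         # ten SAM ratings on a 1-9 PAD scale

head = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
head.fit(X_user, y_user)                    # per-user fine-tuning step

candidates = rng.random((50, 128))          # IAPS-style candidate image features
happiest = candidates[head.predict(candidates)[:, 0].argmax()]  # max predicted pleasure
```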
Article
Full-text available
Upper limb injury often requires repetitive, long-term physical rehabilitation, which can result in low adherence due to the monotonous and internally motivated nature of the exercises. Immersive Virtual Reality (iVR) systems enhanced with games can address these challenges by providing a platform for adaptable sensing and analytical tools to track progress, personalize therapy, and increase long-term engagement. This paper explores such a system through an iVR-based experience for upper-extremity rehabilitation called "OpenButterfly," in which users follow movements to protect a virtual butterfly. OpenButterfly enables a dynamically controllable environment for individual exercise by utilizing motion capture, a biomechanical model of torque and angular momentum, and a biometric pipeline for brainwave, heart-rate, and skin-conductance analysis. We examine this experience with five adult users with varying degrees of injury over the course of eight weeks. Our results suggest that experiences like OpenButterfly provide a strong platform for long-term physical therapy engagement, analysis, and recovery. Lastly, the paper concludes with considerations for future research into adaptive iVR physio-rehabilitation.
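As a hedged illustration of the kind of quantity such a biomechanical model produces, the sketch below computes the gravitational torque of an arm segment about the shoulder from tracked positions (tau = r × F). The masses and positions are hypothetical; OpenButterfly's actual model is not reproduced here.

```python
# Illustrative torque estimate from motion-capture positions (hypothetical values).
import numpy as np

def gravity_torque(joint_pos, com_pos, segment_mass, g=9.81):
    """Torque about a joint from a segment's weight: tau = r x F."""
    r = com_pos - joint_pos                       # lever arm, joint -> center of mass
    F = np.array([0.0, -segment_mass * g, 0.0])   # gravity in a Y-up (Unity-style) frame
    return np.cross(r, F)

shoulder = np.array([0.0, 1.40, 0.0])             # hypothetical tracked shoulder (m)
arm_com = np.array([0.25, 1.30, 0.10])            # hypothetical arm center of mass (m)
print(gravity_torque(shoulder, arm_com, segment_mass=3.5))  # torque vector in N*m
```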
Article
Full-text available
Inactivity and a lack of engagement with exercise are a pressing health problem in the United States and beyond. Immersive Virtual Reality (iVR) is a promising medium to motivate users through engaging virtual environments. Currently, modern iVR lacks a comparative analysis between research- and consumer-grade systems for exercise and health. This paper examines two such iVR mediums: the Cave Automated Virtual Environment (CAVE) and the Head-Mounted Display (HMD). Specifically, we compare the room-scale Mechdyne CAVE and HTC Vive Pro HMD with a custom in-house exercise game designed so that user experiences were as consistent as possible between both systems. To ensure that our findings generalize to users of varying abilities, we recruited forty participants with and without cognitive disabilities, given that iVR environments and games can differ in the cognitive challenge they pose to users. Our results show that, across all abilities, the HMD excelled in game performance, biofeedback response, and player engagement. We conclude with considerations for utilizing iVR systems for exergaming with users across cognitive abilities.
Chapter
Full-text available
Minimization of joint torque has been a keen interest of researchers seeking to predict the trajectory that achieves a desired position. Dynamic equations are used to define the objective function, and the ranges of motion of the human upper-limb joints are set as constraints. The MATLAB genetic algorithm (GA) toolbox is used to minimize the joint torques, with the desired position defined as a nonlinear constraint. The optimization problem consists of eleven objectives and thirty-one variables. Torques at the joints are fed into the objective function such that the magnitude of the torque is minimized. The variables fall into four groups: angular displacements, angular velocities, and angular accelerations (ten of each), plus one variable for the time of rotation. GA parameters must be chosen for the developed objective function; the analytic hierarchy process (AHP) is used to determine them. The results obtained are satisfactory.
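The study itself uses MATLAB's GA toolbox with AHP-selected parameters. As a rough Python analogue of the encoding (31 bounded variables) and the GA loop, consider the toy sketch below; its torque objective is a placeholder, not the paper's dynamic equations, and the bounds are stand-ins for the joint range-of-motion constraints.

```python
# Toy GA sketch under stated assumptions: placeholder objective, stand-in bounds.
import numpy as np

rng = np.random.default_rng(0)
n_vars, pop_size, n_gen = 31, 60, 200
low, high = -np.pi, np.pi                      # stand-in joint-variable bounds

def torque_cost(x):
    # Placeholder: the paper derives torque magnitudes from dynamic equations.
    return np.sum(x ** 2)

pop = rng.uniform(low, high, (pop_size, n_vars))
for _ in range(n_gen):
    fitness = np.array([torque_cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
    children = []
    for i in range(pop_size // 2):
        cut = rng.integers(1, n_vars)                         # one-point crossover
        a, b = parents[i], parents[(i + 1) % len(parents)]
        children.append(np.concatenate([a[:cut], b[cut:]]))
    children = np.array(children) + rng.normal(0, 0.05, (pop_size // 2, n_vars))
    pop = np.clip(np.vstack([parents, children]), low, high)  # enforce bounds

best = pop[np.argmin([torque_cost(ind) for ind in pop])]
print(torque_cost(best))
```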
Conference Paper
Full-text available
Immersive Virtual Reality paired with soft robotics may be synergized to create personalized assistive therapy experiences. Virtual worlds, powered by newly available low-cost, high-performance commercial Virtual Reality (VR) devices, can stimulate the user to enable engaging and accurate physical therapy, and soft robotic wearables are a versatile tool in such stimulation. This preliminary study investigates a novel rehabilitative VR experience, Project Butterfly (PBF), that synergizes VR Mirror Visual Feedback Therapy with soft robotic exoskeletal support. Nine users of varying ability explore an immersive, gamified physio-therapy experience by following and protecting a virtual butterfly, complemented by an actuated robotic wearable that motivates and assists the user in performing rehabilitative physical movement. Specifically, the goals of this study are to evaluate the feasibility, ease of use, and comfort of the proposed system. The study concludes with a set of design considerations for future immersive physio-rehab robotic-assisted games.
Article
Full-text available
Modern immersive virtual reality experiences have the unique potential to motivate patients undergoing physical therapy to perform intensive, repetitive, task-based treatment, and can be utilized to collect real-time user data to track adherence and compliance rates. This article reports the design and evaluation of an immersive virtual reality game using the HTC Vive for upper limb rehabilitation, titled "Project Star Catcher" (PSC), aimed at users with hemiparesis. The game mechanics were adapted from modified Constraint-Induced Therapy (mCIT), an established therapy method in which users are asked to use the weaker arm by physically binding the stronger arm. Our adaptation changes the binding from physical to psychological by providing various types of immersive stimulation to encourage use of the weaker arm. PSC was evaluated by users with combined developmental and physical impairments as well as stroke survivors. The results suggest that we were successful in providing a motivating experience for performing mCIT as well as a cost-effective solution for real-time data capture during therapy. We conclude the article with a set of considerations for immersive virtual reality therapy game design.
Conference Paper
Full-text available
Traditional forms of physical therapy and rehabilitation are often based on therapist observation and judgment, and this process can be inaccurate, expensive, and untimely. Modern immersive Virtual Reality systems provide a unique opportunity to make the therapy process smarter. In this paper, we present an immersive virtual reality stroke rehabilitation game based on a widely accepted therapy method, Constraint-Induced Therapy, evaluated by nine post-stroke participants. We implement our game as a dynamically adapting system that accounts for the user's motor abilities while recording real-time motion-capture and behavioral data. The game can also be used for tele-rehabilitation, effectively allowing therapists to connect with the participant remotely while having access to 90+ Hz real-time biofeedback data. Our quantitative and qualitative results suggest that our system is useful in increasing the affordability, accuracy, and accessibility of post-stroke motor treatment.
Article
This study aims to develop a motion sickness measurement index for virtual reality (VR) environments. The VR market is at an early stage of market formation and technological development, and research on the side effects of VR devices, such as simulator motion sickness, is therefore lacking. In this study, we used the simulator sickness questionnaire (SSQ), which has traditionally been used to measure simulator motion sickness. To measure motion sickness in a VR environment, 24 users performed target-selection tasks using a VR device. The SSQ was administered immediately after each task, and task order was counterbalanced using a Latin square design. The existing SSQ was revised to develop a VR sickness questionnaire for use as the measurement index in VR environments. In addition, the target-selection method and button size were found to be significant factors affecting motion sickness in a VR environment. The results of this study are expected to inform the measurement of simulator sickness and the design of VR devices in future studies.
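For reference, standard SSQ scoring (Kennedy et al., 1993) rates sixteen symptoms on a 0-3 scale, sums them into three overlapping raw subscale scores, and applies fixed weights; the sketch below shows only the weighting step with hypothetical raw sums (the item-to-subscale mapping is omitted).

```python
# Standard SSQ weighting (Kennedy et al., 1993); raw inputs are symptom-score sums.
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

print(ssq_scores(3, 4, 2))  # hypothetical raw sums after one VR task
```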