HTC Vive MeVisLab integration via OpenVR for
medical applications
Jan Egger*, Markus Gall, Jürgen Wallner, Pedro Boechat, Alexander Hann, Xing Li, Xiaojun Chen, Dieter Schmalstieg
1 Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria; 2 BioTechMed-Graz, Krenngasse 37/1, Graz, Austria; 3 Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz, Austria; 4 Department of Internal Medicine I, Ulm University, Albert-Einstein-Allee 23, Ulm, Germany; 5 Shanghai Jiao Tong University, School of Mechanical Engineering, Shanghai, China

Abstract

Virtual Reality (VR), an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain. Examples are intervention planning, training and simulation. This is especially useful for medical operations where an aesthetic outcome is important, such as facial surgeries. Unfortunately, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly via drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.

Introduction

Virtual Reality (VR) places a user inside a computer-generated environment. This paradigm is becoming increasingly popular, because computer graphics have progressed to a point where the images are often indistinguishable from the real world. The computer-generated images previously presented in movies, games and other media are now detached from the physical surroundings and presented in new immersive head-mounted displays (HMDs), like the Oculus Rift, the PlayStation VR or the HTC Vive (Fig 1). Virtual Reality not only places a user inside a computer-generated environment; it is able to completely immerse a user in a virtual world, removing any restrictions on what a user can do or experience [1–3]. Besides movies, games, and other media, the medical area has great potential for the new VR devices,
because they are also now able to process and display high-resolution medical image data
acquired with modern CT (Computed Tomography) and MRI (Magnetic Resonance Imaging)
scanners [4]. Examples are surgery trainers and simulators, and preoperative surgical planning
[5], [6]. Others already working in the area of medical Virtual Reality are Nunnerley et al. [7],
who tested the feasibility of an immersive 3D virtual reality wheelchair training tool for people
with spinal cord injury. They designed a wheelchair training system that used the Oculus
Rift headset and a joystick. Newbutt et al. [8] studied the usage of the Oculus Rift in autism
patients. Their study explores the acceptance, willingness, sense of presence and immersion of
participants with autism spectrum disorder (ASD). The examination of digital pathology slides
with the virtual reality technology and the Oculus Rift has been explored by Farahani et al. [9].
They applied the Oculus Rift and a Virtual Desktop software to review lymph node cases for
digital pathology. A usability comparison between HMD (Oculus Rift) and stereoscopic desk-
top displays (DeepStream3D) in a VR environment with pain patients has been performed by
Tong et al. [10]. Twenty chronic pain patients assessed the severity of physical discomforts by
trying both displays, while watching a virtual reality pain management demo in a clinical set-
ting. An interactive 3D virtual anatomy puzzle for learning and simulation has been designed
and tested by Messier and colleagues [11]. The virtual anatomy puzzle is supposed to help users
to learn the anatomy of various organs and systems by manipulating virtual 3D data. It was
implemented with an Oculus Rift. A computer-based system, which can record a surgical pro-
cedure with multiple depth cameras and reconstruct in three dimensions the dynamic geome-
try of the actions and events that occur during the procedure, has been introduced by Cha
et al. [12]. Equipped with a virtual reality headset, such as the Oculus Rift, the user was able to
walk around the reconstruction of the procedure room, while controlling the playback of the recorded surgical procedure with simple VCR controls (e.g., play, pause, rewind, and fast forward).

Fig 1. Illustration of a person using the HTC Vive and controllers (photo filtered with PRISMA 2.2.1 under an iPhone 6s running iOS 9.3.3).

Xu et al. [13] studied the accuracy of the Oculus Rift during cervical spine mobility
measurement by designing a virtual reality environment to guide participants to perform cer-
tain neck movements. Subsequently, the cervical spine kinematics was measured by both the
Oculus Rift tracking system and a reference motion tracker. The Oculus Rift has been applied
by Kim et al. [14] as a cost-effective tool for studying visual-vestibular interactions in self-
motion perception. The vection strength has been measured in three conditions of visual
compensation for head movement (1. compensated, 2. uncompensated and 3. inversely com-
pensated). King [15] developed an immersive VR environment for diagnostic imaging. The
environment consisted of a web server acquiring data from volumes loaded within the 3D
Slicer platform [16] and forwarding them to a Unity application (http://unity3d.com) to render the scene for the Oculus Rift. However, to the best of the authors' knowledge, no work has described the integration of the HTC Vive into the MeVisLab (http://www.mevislab.de) platform [17] yet. We developed and implemented a new module for the medical prototyping platform
MeVisLab that provides an interface via the OpenVR library to head mounted displays, enabl-
ing the direct and uncomplicated usage of the HTC Vive in medical applications. Unlike the Oculus Rift, the HTC Vive offers room-scale tracking, which allows users to walk around virtual objects and enables a more advanced immersion and inspection.
This contribution is organized as follows: Section 2 introduces details of the methods, Sec-
tion 3 presents experimental results and Section 4 concludes the paper and gives an outlook on
future work.

Materials and methods

In this section, the materials and methods that have been used for the integration of the HTC
Vive into the MeVisLab environment via the OpenVR library are introduced.
For testing and evaluating our software integration, we used multiple high-resolution CT
(Computed Tomography) scans from the clinical routine. The resolution of the scans consisted
of 512x512 voxels in the x- and the y-direction with an additional few hundred slices in the z-
direction. The scans varied in anatomy and location of pathology, including datasets from
patient skulls with cranial defects. In Fig 2, 3D visualizations of patient skulls with cranial
defects on the left (left) and right side (right) are shown. The medical scans are freely available for download and use in your own research projects; however, we kindly ask users to cite our work [18], [19]. All relevant data are hosted at the public repository Figshare.
Note that the datasets from our clinical partners have not been altered or downsampled in
any way. Hence, we assess the visual quality and evaluate the frames per second (fps) when displaying original-sized scans inside the HTC Vive.

MeVisLab

This paragraph describes the medical imaging platform MeVisLab (Medical Visualization Laboratory), which includes a software development kit (SDK) used to interface with the HTC Vive. The MeVisLab platform provides basic and advanced algorithms for medical image processing and visualization. It also includes an environment for programming new modules in C++, as well as an environment for implementing user interfaces with MDL script (MeVis Description Language). MeVisLab allows the construction of dataflow networks using a variety of image processing
modules. Developers can employ visual programming by combining pre-installed modules into complex networks using the graphical user interface. Alternatively, the Python scripting language can be used. Finally, C++ programmers can implement individually designed modules. Furthermore, it is possible to design user interfaces with MDL script, hiding the complexity of user-interface programming from non-technical users. Fig 3 shows a simple MeVisLab network with a small number of modules: a module for loading the image data, an algorithm module, which applies a threshold function in this example, and a viewer module for the visualization.
In summary, the MeVisLab network concept is based on a graphical representation of modules with specific functions for image processing. MeVisLab uses three different types of modules (Fig 4): ML modules (blue) are processing modules, Open Inventor modules (green) provide scene graphs in 3D, and Macro modules (brown) combine other modules. Module connectors located on the bottom of a module are inputs, and connectors located at the top are outputs. The shape of the inputs and outputs defines the connection type: a triangle for ML images, a rectangle for a base object indicating pointers to data structures, and a half circle for an Open Inventor scene (see Figs 3 and 4). Data transmission between connectors is enabled by a connection, visually represented by a blue line, as shown in Fig 3 on the right side. In addition to these data connections, a parameter connection can also be established. This enables the connection of single parameters between modules and is indicated by a two-sided arrow. For further information, please see the MeVisLab Reference Manual (last accessed in January 2017).
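
To make the C++ route described above more concrete, the following sketch shows the rough skeleton of an ML processing module, loosely modeled on what the MeVisLab Project Wizard generates. The class, field and macro names are quoted from memory of the public ML SDK and should be treated as assumptions; this is not the actual HTCVive implementation presented later.

```cpp
// Minimal ML module sketch, loosely following the skeleton generated by
// the MeVisLab Project Wizard. API names (addDouble, handleNotification,
// touchOutputImageFields, the ML_MODULE_CLASS_* macros) are quoted from
// memory and should be verified against the MeVisLab SDK documentation.
#include "mlModuleIncludes.h"

ML_START_NAMESPACE

class ExampleThreshold : public Module
{
public:
  ExampleThreshold() : Module(1, 1)   // one image input, one image output
  {
    // Parameter field that appears in the module's automatic panel.
    _thresholdFld = addDouble("threshold");
    _thresholdFld->setDoubleValue(400.0);
  }

  // Called by the framework whenever a field changes (GUI or network).
  void handleNotification(Field* field) override
  {
    if (field == _thresholdFld)
    {
      // Mark the output image as modified so connected viewers update.
      touchOutputImageFields();
    }
  }

private:
  DoubleField* _thresholdFld;

  ML_MODULE_CLASS_HEADER(ExampleThreshold)
};

ML_MODULE_CLASS_SOURCE(ExampleThreshold, Module)

ML_END_NAMESPACE
```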

Fig 2. 3D visualization of a patient skull with a cranial defect on the left side (left) and 3D visualization of a patient skull with a cranial defect on the right side (right).

HTC Vive

This paragraph states the technical specifications of the VR headset HTC Vive, which is developed by HTC and Valve Corporation and was released in April 2016. In contrast to the current
version of the Oculus Rift, the HTC Vive is designed to turn a room into 3D space. Two sta-
tionary reference units track the user’s head and handheld controllers, while the user is free to
walk around and manipulate virtual objects. The HTC Vive uses an organic light-emitting
diode (OLED) display and provides a combined resolution of 2160x1200 (1080x1200 per eye)
with a refresh rate of 90 Hz and a field of view (FOV) of about 110 degrees. The head mounted
display weighs about 555 grams and has HDMI 1.4, DisplayPort 1.2 and USB 2.0 connections.
Fig 3. Basic module processing pipeline with a network of three modules: An image source for loading the data set into the network, an
algorithm module to process the data and a viewer module that enables the visualization of the image.
Fig 4. Types of modules under MeVisLab: ML modules (blue) are page-/patch-based and demand-driven, Open Inventor modules (green) provide scene graphs in 3D and Macro modules (brown) consist of a combination of other modules.
Moreover, the HTC Vive has a 3.5 mm audio jack for headphones and a built-in microphone.
Finally, the HTC Vive has a front-facing camera for blending real-world elements into the vir-
tual world. For further information, please see the HTC Vive website (last accessed in January 2017).

OpenVR

This paragraph gives a short description of OpenVR, which is a software development kit and
application programming interface developed by Valve. The C++ library supports the HTC
Vive and other VR headsets via the SteamVR software platform (http://steamvr.com). Thus,
the OpenVR API provides a way to connect and interact with VR displays without relying on a
specific hardware vendor’s SDK. OpenVR is implemented as a set of C++ interface classes
with virtual functions, intended to also support future versions of the hardware. For further information, please see the corresponding GitHub repository (https://github.com/ValveSoftware/openvr, last accessed in January 2017). The OpenVR GitHub repository supports Microsoft Windows, Linux and macOS. Moreover, it includes examples in C++ for Microsoft Visual Studio.
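
As a concrete illustration of the API style, the following fragment sketches the initialization path in the spirit of the shipped samples. It only uses calls from the public openvr.h header; error handling is shortened, and enum and property names follow the current header, so they may differ slightly in older SDK versions.

```cpp
// Sketch of the OpenVR initialization path; error handling is minimal.
#include <openvr.h>
#include <cstdint>
#include <cstdio>

int main()
{
    vr::EVRInitError initError = vr::VRInitError_None;

    // Connect to the SteamVR runtime as a scene (rendering) application.
    vr::IVRSystem* hmd = vr::VR_Init(&initError, vr::VRApplication_Scene);
    if (initError != vr::VRInitError_None)
    {
        std::printf("Unable to init VR runtime: %s\n",
                    vr::VR_GetVRInitErrorAsEnglishDescription(initError));
        return 1;
    }

    // Per-eye render target size recommended by the runtime; it is larger
    // than the 1080x1200 panel resolution to compensate for lens distortion.
    uint32_t width = 0, height = 0;
    hmd->GetRecommendedRenderTargetSize(&width, &height);

    // Refresh rate reported by the headset (90 Hz for the HTC Vive).
    float hz = hmd->GetFloatTrackedDeviceProperty(
        vr::k_unTrackedDeviceIndex_Hmd, vr::Prop_DisplayFrequency_Float);

    std::printf("Render target %ux%u per eye at %.0f Hz\n", width, height, hz);

    vr::VR_Shutdown();
    return 0;
}
```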
Fig 5 shows a high-level workflow diagram of the communication and interaction between
MeVisLab and the HTC Vive device via OpenVR. The MeVisLab platform provides several modules to import and load medical data, e.g., in DICOM format (http://dicom.nema.org). The medical image data can be processed and visualized with a variety of modules
under MeVisLab. Options include the visualization in 2D and 3D, or medical image analysis
algorithms, like the segmentation of anatomical or pathological structures. Moreover, MeVi-
sLab modules that derive directly from ITK (Insight Segmentation and Registration Toolkit, http://itk.org) and VTK (The Visualization Toolkit, http://vtk.org) can be applied.
Our new HTCVive module communicates with the HTC Vive via OpenVR. However, our
MeVisLab module may also be able to communicate with other VR devices, like the Oculus Rift (https://www.oculus.com), because the OpenVR API provides a way to connect and interact with VR displays without relying on a specific hardware vendor's SDK. The HTCVive module transfers the image data to the HMD and reports the position and the orientation of the HMD in the real world back to MeVisLab. The position and orientation enable further visualiza-
tions under MeVisLab, like rendering the user’s view in a second 3D viewer on a conventional
screen, so users without HMD can see the VR view, too. For our implementation, we followed
the hellovr_opengl source code example from OpenVR (https://github.com/ValveSoftware/openvr/tree/master/samples/hellovr_opengl). In doing so, we use an initialization method to start the SteamVR runtime and then render a distorted view for the right and left eye.
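
The per-frame part of such a loop reduces to roughly the following skeleton: wait for the predicted poses, render each eye into an OpenGL texture, and submit both textures to the compositor, which applies the lens distortion. This is a sketch, not the module's actual source: the texture handles and the per-eye renderer are placeholders, and the texture-type enum name follows the current openvr.h (older SDKs, including the 1.0.2 version used here, named it differently).

```cpp
// Per-frame skeleton in the spirit of the hellovr_opengl sample.
#include <openvr.h>
#include <cstdint>

// Placeholder: the application renders its scene for one eye into an FBO.
static void renderEyeToTexture(vr::EVREye /*eye*/) { /* application-specific */ }

static void renderFrame(uint32_t leftEyeTexId, uint32_t rightEyeTexId)
{
    // Blocks until shortly before vsync and returns the poses predicted
    // for the moment the new frame will be displayed.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                     nullptr, 0);

    // The head pose (a 3x4 row-major transform) drives both eye cameras.
    if (poses[vr::k_unTrackedDeviceIndex_Hmd].bPoseIsValid)
    {
        renderEyeToTexture(vr::Eye_Left);
        renderEyeToTexture(vr::Eye_Right);
    }

    // Submit both eye textures for distortion and display on the HMD.
    vr::Texture_t left  = { (void*)(uintptr_t)leftEyeTexId,
                            vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::Texture_t right = { (void*)(uintptr_t)rightEyeTexId,
                            vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
    vr::VRCompositor()->Submit(vr::Eye_Right, &right);
}
```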
Fig 6 shows the overall MeVisLab network with our HTCVive module (blue) in the lower left
surrounded by a yellow rectangle. In this example network, the (medical) data is imported
via the WEMLoad module (here named DataLoad) and directly passed via its output (rectan-
gle) to the HTCVive module (rectangle input at the bottom of the HTC Vive module). Fur-
thermore, the loaded data is passed via a WEMModify module (blue, lower right side), an
SoWEMRenderer module (green, right side in the middle) and an SoSeparator module (green,
upper right side) to another SoSeparator module (named 3DUserView at the top). The win-
dow on the right side belongs to the 3DUserView module at the top and displays what the user
wearing the HTC Vive headset sees. Note that the HMD tracking coordinates are transferred
via direct parameter connections in our example. The remaining modules in the network
example from Fig 6 are mainly arithmetic and matrix modules to decompose (Decompose-
Matrix and DecomposeMatrix2) and compose (ComposeMatrix) the rotation and translation
for a correct visualization. The interface and parameters of our HTCVive module are shown
on the left side (Panel HTCVive). Finally, a SteamVR status window is presented in the lower right corner (black), indicating that the HMD, the two HTC Vive controllers and the two HTC Vive Lighthouse base stations are ready and running (green).
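
The decomposition carried out by the DecomposeMatrix modules can be illustrated in code: OpenVR reports each device pose as a 3x4 row-major rigid-body transform, so the translation and rotation can be read off directly. The helper below is a hypothetical sketch under that assumption, not part of the module's actual source.

```cpp
// Hypothetical sketch of what the DecomposeMatrix step does with the HMD
// pose reported by OpenVR (a 3x4 row-major device-to-world transform).
#include <openvr.h>
#include <array>

struct Pose
{
    std::array<float, 3> translation;   // HMD position in tracking space
    std::array<float, 9> rotation;      // row-major 3x3 orientation
};

static Pose decompose(const vr::HmdMatrix34_t& m)
{
    Pose p;
    // Last column: position of the headset in the tracked room.
    p.translation = { m.m[0][3], m.m[1][3], m.m[2][3] };
    // Upper-left 3x3 block: orientation. For a tracked rigid body this is
    // a pure rotation, so no scale or shear handling is needed.
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            p.rotation[r * 3 + c] = m.m[r][c];
    return p;
}
```

Recombined by the ComposeMatrix module, such a rotation/translation pair can drive the camera of the 3DUserView viewer, so that a conventional desktop window mirrors the headset view as described above.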

Results

The aim of this contribution was to investigate the feasibility of using the Virtual Reality device
HTC Vive under the medical prototyping platform MeVisLab. We present the direct rendering
and visualization of (medical) image data in the head mounted device using the publicly avail-
able OpenVR library. The software integration was achieved under Microsoft Windows 8.1
with MeVisLab version 2.8.1 (21-06-2016) for Visual Studio 2015 x64 (http://www.mevislab.de/download/) and the OpenVR SDK version 1.0.2 (https://github.com/ValveSoftware/openvr).
The MeVisLab module was implemented as an image processing module in C++ using the
MeVisLab Project Wizard and the Microsoft Visual Studio 2015 Community Edition. The HTC
Vive MeVisLab module (HTCVive) has an input for the medical image data, which is trans-
ferred and displayed in the HTC Vive. Additionally, the HTCVive module provides the data at
an output (rectangle) for visualization in a standard undistorted 3D view, and the module provides the HMD tracking coordinates of the HTC Vive (Fig 6, right window). Finally, an extra window can be created for a distorted view of the virtual reality input for the left and the right eye (Fig 7 without texture and Fig 8 with texture). For the evaluation, we tested several medical datasets, like patient skulls with cranial defects and scans from gastrointestinal tracts for usage in virtual colonoscopy (Fig 9). As a hardware setting, we applied two computers: a desktop PC and a MacBook Pro laptop, both running Windows 8.1 Enterprise. The hardware configuration of the desktop PC consisted of an Intel Core i7-3770 CPU @ 3.40GHz, 16 GB RAM and an NVIDIA GeForce GTX 970 graphics card, and the hardware configuration of the laptop consisted of an Intel Core i7-4850HQ CPU @ 2.30GHz, 16 GB RAM and an NVIDIA GeForce GT 750M graphics card.

Fig 5. High-level workflow diagram demonstrating the communication and interaction between the MeVisLab platform and the HTC Vive device via the OpenVR library.

Fig 6. Screenshot of the comprehensive MeVisLab network with the HTCVive module (blue, lower left corner surrounded by a yellow rectangle) and its interface and parameters on the left side (Panel HTCVive). Additionally, the module provides the HMD tracking coordinates of the HTC Vive, which can be used to have the same position and viewing direction as the actual user wearing the HTC Vive in an extra 3D view (Viewer SoSeparator(3DUserView) on the right side).

Fig 7. The distorted view of the VR input for the right and left eye without texture.

Fig 8. The distorted view of the VR input for the right and left eye with texture.

Fig 9. A scan of the gastrointestinal tract imported and displayed in the HTC Vive for usage in virtual colonoscopy.

Figs 10 and 11 show the frame timing results of
both configurations for the CPU (top) and the GPU (bottom). The frame timing results come
directly from SteamVR and can be activated and displayed via the SteamVR status window.
As seen, the default view splits the CPU and GPU performance into a pair of stacked graphs: The blue sections represent the time spent by the application, which is further split between application-scene and application-other. Application-scene is the amount of work performed between the return of WaitGetPoses (which returns the pose(s) used to render the scene) and the submission of the second eye texture. Application-other is any time spent after this for rendering the application's companion window, etc. Note that the CPU timing does not capture any parallel work being performed, for example, on the application's main thread. The brown section (other) in the GPU timing reflects any GPU bubbles, context switching overhead, or GPU work from other applications getting scheduled in between other segments of work for that frame (examining this in more detail requires the use of gpuview). For more information, please see the SteamVR documentation on frame timing (last accessed in January 2017).
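
The same per-frame timing data that SteamVR plots can also be polled programmatically through the compositor API. The following sketch is an assumption based on the current openvr.h; the struct field names may differ across SDK versions.

```cpp
// Hypothetical sketch: polling the compositor's per-frame timing data,
// i.e., the numbers behind the SteamVR frame timing graphs.
#include <openvr.h>
#include <cstdio>

void printLastFrameTiming()
{
    vr::Compositor_FrameTiming timing;
    timing.m_nSize = sizeof(vr::Compositor_FrameTiming);  // required by the API

    // unFramesAgo = 0 requests the most recently completed frame.
    if (vr::VRCompositor()->GetFrameTiming(&timing, 0))
    {
        std::printf("frame %u: app GPU %.2f ms, compositor GPU %.2f ms\n",
                    timing.m_nFrameIndex,
                    timing.m_flTotalRenderGpuMs,
                    timing.m_flCompositorRenderGpuMs);
    }
}
```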
As shown in the frame timing diagram in Fig 10, a sufficient framerate for the desktop PC
configuration could be achieved, because ten frames per second are already considered as real-
time or interactive in computer graphics applications [20]. Hence, the medical datasets could be displayed in a way that is very pleasant to the human eye inside the HTC Vive device. In contrast, the framerates on the laptop were in general not sufficient, resulting in flickering inside the HTC Vive. For a short period of time, this is acceptable as a temporary (mobile) solution, but it is currently not suitable for longer working periods. However, the laptop and its hardware are also not considered HTC Vive ready, and we expect to soon see laptops that can handle the HTC Vive requirements, so our solution can also be used as a portable application.

Fig 10. Frame timing result diagrams of the CPU (top) and the GPU (bottom) for a Windows desktop PC with an Intel Core i7-3770 CPU @ 3.40GHz, 16 GB RAM and an NVIDIA GeForce GTX 970 graphics card.

Fig 11. Frame timing result diagrams of the CPU (top) and the GPU (bottom) for a Windows laptop with an Intel Core i7-4850HQ CPU @ 2.30GHz, 16 GB RAM and an NVIDIA GeForce GT 750M graphics card.

Conclusion

In this open access article, we demonstrated the successful integration and evaluation of the
HTC Vive headset into the research prototyping platform MeVisLab via the OpenVR library.
OpenVR is a software development kit and application programming interface developed by
Valve (https://github.com/ValveSoftware/openvr) for supporting SteamVR (HTC Vive) and other
VR headsets. Hence, our integration is device and vendor independent and can also be used
with other devices. The proposed software integration has been carried out with the C++ implementation of a MeVisLab image processing module that provides inputs for medical
datasets in different formats. Thus, our module is easy to use and can be added to existing
MeVisLab networks or applied in new ones. Via drag and drop, data outputs under MeVisLab
can be directly connected to an input of our module to transfer the data into the HTC Vive
headset and display it there. Furthermore, dataset operations and manipulations, like transla-
tion, rotation, segmentations, etc. that have been performed in advance with corresponding
MeVisLab modules will also directly be shown inside the headset. Highlights of the proposed contribution are as follows:

- Successful integration of OpenVR into MeVisLab
- Solution enables MeVisLab networks to connect to VR headsets
- Real-time visualization of medical data in VR under MeVisLab
- Proof of concept evaluation with the HTC Vive headset
- HTC Vive MeVisLab module can be added to existing MeVisLab networks
Areas of future work include the evaluation of our integration with a greater amount of medical data and formats. Although we did not encounter motion sickness in our tests, this might be an issue for some users and needs to be explored systematically. In addition, we plan a MeVisLab module communicating with the Oculus Rift head-mounted display or, as an alternative, the extension of our VR module to also support the Oculus Rift once it is fully supported by the OpenVR library, and an evaluation on non-Windows setups like Linux or macOS. Moreover, we are working on applying our solution to support computer-aided reconstructions
of facial defects [21], [22] with photorealistic rendering in VR [23]. This will enable an even
more realistic assessment of pre-operative planning results. Furthermore, we investigate
options for performing image-guided therapy tasks inside VR, for example, the planning of
facial interventions using so-called miniplates [24] and virtual stent simulations to treat aneu-
rysms [25–29]. Capturing and reconstructing 3D models of tumors in the gastrointestinal
tract during diagnostic endoscopic procedures will ease the planning for the endoscopic
removal without the need for prolonged or repeated anesthesia [30]. Using segmentation algo-
rithms implemented in MeVisLab [31–33] will help identify areas of tumor spread in the 3D
model due to surface characteristics. Interactive segmentation [34], [35] might even be done
with the HTC Vive controllers. Additionally, reviewing 3D models of tumors in virtual space
will provide a platform for training to improve recognition of these lesions from different
angles. We would like to pursue the development of an Augmented Reality (AR) module for
MeVisLab, to support novel devices like the Microsoft HoloLens, because optical see-through
head-mounted displays seem to be very promising during surgical navigation [36]. Last but
not least, we plan to integrate OpenVR into other research platforms, like OsiriX [37], 3D
Slicer [38] or the two Medical Imaging (Interaction) Toolkits (MITK) from Beijing in China (http://www.mitk.net) [39] and Heidelberg in Germany (http://www.mitk.org) [40], respectively.

Acknowledgments

The work received funding from BioTechMed-Graz in Austria (https://biotechmedgraz.at, “Hardware accelerated intelligent medical imaging”), the 6th Call of the Initial Funding Program from the Research & Technology House (F&T-Haus) at the Graz University of Technology (https://www.tugraz.at, “Interactive Planning and Reconstruction of Facial Defects”, PI: Jan Egger) and was supported by the TU Graz Open Access Publishing Fund. Dr. Xiaojun Chen receives support from the Natural Science Foundation of China (Grant No.: 81511130089) and the Foundation of Science and Technology Commission of Shanghai Municipality (Grant Nos.: 14441901002, 15510722200 and 16441908400). Moreover, the
authors want to thank their colleagues Laurenz Theuerkauf, Peter Mohr and Denis Kalkofen
for their support. Furthermore, we are planning to provide the module as Open Source to
the research community in the future. Until then, the module can be requested from the
authors. Finally, a video demonstrating the integration can be found on the authors' YouTube channel.

Competing interests: The authors declare no competing interests.
Author Contributions
Conceptualization: JE MG.
Data curation: JW XC XL.
Formal analysis: JE MG.
Funding acquisition: JE DS XC.
Investigation: JE MG.
Methodology: JE MG.
Project administration: JE DS.
Resources: JE MG JW PB AH XL XC DS.
Software: JE MG PB.
Supervision: JE DS.
Validation: JE MG.
Visualization: JE MG.
Writing – original draft: JE.
Writing – review & editing: JE DS.

References

1. Schmalstieg D, Höllerer T. Augmented Reality: Principles and Practice. Addison-Wesley Professional; 1st ed., 2016; 1–528.
2. Bowman DA, McMahan RP. Virtual reality: How much immersion is enough? IEEE Computer 2007; 40
(7): 36–43.
3. Rheingold H. Virtual Reality: Exploring the Brave New Technologies. Simon & Schuster Adult Publish-
ing Group 1991; 1–384.
4. McCloy R, Stone R. Virtual reality in surgery. BMJ 2001; 323(7318): 912–915. PMID: 11668138
5. Reitinger B, Bornik A, Beichel R, Schmalstieg D. Liver surgery planning using virtual reality. IEEE Com-
put Graph Appl. 2006; 26(6): 36–47. PMID: 17120912
6. Gor M, McCloy R, Stone R, Smith A. Virtual reality laparoscopic simulator for assessment in gynaecol-
ogy. BJOG 2003; 110(2): 181–7. PMID: 12618163
7. Nunnerley J, Gupta S, Snell D, King M. Training wheelchair navigation in immersive virtual environ-
ments for patients with spinal cord injury—end-user input to design an effective system. Disabil Rehabil
Assist Technol. 2016; 4: 1–7.
8. Newbutt N, Sung C, Kuo HJ, Leahy MJ, Lin CC, Tong B. Brief Report: A Pilot Study of the Use of a Vir-
tual Reality Headset in Autism Populations. J Autism Dev Disord. 2016; 46(9): 3166–76. https://doi.org/10.1007/s10803-016-2830-5 PMID: 27272115
9. Farahani N, Post R, Duboy J, Ahmed I, Kolowitz BJ, Krinchai T, et al. Exploring virtual reality technology and the Oculus Rift for the examination of digital pathology slides. J Pathol Inform. 2016; 7: 22. PMID: 27217972
10. Tong X, Gromala D, Gupta D, Squire P. Usability Comparisons of Head-Mounted vs. Stereoscopic
Desktop Displays in a Virtual Reality Environment with Pain Patients. Stud Health Technol Inform.
2016; 220: 424–31. PMID: 27046617
11. Messier E, Wilcox J, Dawson-Elli A, Diaz G, Linte CA. An Interactive 3D Virtual Anatomy Puzzle for
Learning and Simulation—Initial Demonstration and Evaluation. Stud Health Technol Inform. 2016;
220: 233–40. PMID: 27046584
12. Cha YW, Dou M, Chabra R, Menozzi F, State A, Wallen E, et al. Immersive Learning Experiences for
Surgical Procedures. Stud Health Technol Inform. 2016; 220: 55–62. PMID: 27046554
13. Xu X, Chen KB, Lin JH, Radwin RG. The accuracy of the Oculus Rift virtual reality head-mounted dis-
play during cervical spine mobility measurement. J Biomech. 2015; 48(4): 721–4. https://doi.org/10.1016/j.jbiomech.2015.01.005 PMID: 25636855
14. Kim J, Chung CY, Nakamura S, Palmisano S, Khuu SK. The Oculus Rift: a cost-effective tool for study-
ing visual-vestibular interactions in self-motion perception. Front Psychol. 2015; 6: 248. https://doi.org/10.3389/fpsyg.2015.00248 PMID: 25821438
15. King F. An immersive virtual reality environment for diagnostic imaging. Thesis (Master, Computing),
Queen’s University, Kingston, Ontario, Canada, 2015; 1–60.
16. Egger J, Kapur T, Nimsky C, Kikinis R. Pituitary Adenoma Volumetry with 3D Slicer. PLoS ONE 7(12):
e51788. PMID: 23240062
17. Egger J, Tokuda J, Chauvin L, Freisleben B, Nimsky C, Kapur T, et al. Integration of the OpenIGTLink
network protocol for image-guided therapy with the medical platform MeVisLab. Int J Med Robot. 2012;
8(3): 282–90. PMID: 22374845
18. Gall M, Li X, Chen X, Schmalstieg D, Egger J. Cranial Defect Datasets. ResearchGate 2016.
19. Gall M, Li X, Chen X, Schmalstieg D, Egger J. Computer-Aided Planning and Reconstruction of Cranial
3D Implants. IEEE Engineering in Medicine and Biology Society 2016; 1179–1183.
20. Ai Z, Evenhouse R, Leigh J, Charbel F, Rasmussen M. Cranial implant design using augmented reality
immersive system. Stud Health Technol Inform. 2007; 125: 7–12. PMID: 17377223
21. Gall M, Li X, Chen X, Schmalstieg D, Egger J. Computer-Aided Planning of Cranial 3D Implants. Int J
CARS 11 2016; (Suppl 1): S241.
22. Li X, Xu L, Zhu Y, Egger J, Chen X. A semi-automatic implant design method for cranial defect restora-
tion. Int J CARS 11 2016; (Suppl 1): S241–S243.
23. Vogelreiter P, Wallner J, Reinbacher K, Schwenzer-Zimmerer K, Schmalstieg D, Egger J. Global Illumi-
nation Rendering for High-Quality Volume Visualization in the Medical Domain. Face 2 Face – Science
meets Arts, Medical University of Graz, Department of Maxillofacial Surgery, 2015.
24. Gall M, Wallner J, Schwenzer-Zimmerer K, Schmalstieg D, Reinbacher K, Egger J. Computer-aided
Reconstruction of Facial Defects. IEEE Engineering in Medicine and Biology Society 2016; Posters,
Orlando, FL, USA.
25. Egger J, Mostarkić Z, Maier F, Kaftan JN, Grosskopf S, Freisleben B. Fast self-collision detection and
simulation of bifurcated stents to treat abdominal aortic aneurysms (AAA). 29th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society 2007; 6231–6234.
26. Lu J, Egger J, Wimmer A, Großkopf S, Freisleben B. Detection and visualization of endoleaks in CT
data for monitoring of thoracic and abdominal aortic aneurysm stents. SPIE Medical Imaging 2008;
27. Renapurkar RD, Setser RM, O’Donnell TP, Egger J, Lieber ML, Desai MY, et al. Aortic volume as an
indicator of disease progression in patients with untreated infrarenal abdominal aneurysm. Eur J Radiol.
2012; 81: e87–93. PMID: 21316893
28. Greiner K, Egger J, Großkopf S, Kaftan JN, Dörner R, Freisleben B. Segmentation of aortic aneurysms in CTA images with the statistic approach of the active appearance models. Bildverarbeitung für die Medizin (BVM) 2008; 51–55.
29. Egger J, Mostarkic Z, Grosskopf S, Freisleben B (2007) Preoperative Measurement of Aneurysms and
Stenosis and Stent-Simulation for Endovascular Treatment. IEEE International Symposium on Biomed-
ical Imaging (ISBI): From Nano to Macro 2007; 392–395.
30. Alcantarilla PF, Bartoli A, Chadebecq F, Tilmant C, Lepilliez V. Enhanced imaging colonoscopy facili-
tates dense motion-based 3D reconstruction. Conf Proc IEEE Eng Med Biol Soc. 2013; 7346–9.
31. Egger J, Kapur T, Dukatz T, Kolodziej M, Zukic D, Freisleben B, et al. Square-Cut: A Segmentation
Algorithm on the Basis of a Rectangle Shape. PLoS ONE 7(2): e31064. https://doi.org/10.1371/journal.pone.0031064 PMID: 22363547
32. Schwarzenberg R, Freisleben B, Nimsky C, Egger J. Cube-Cut: Vertebral Body Segmentation in MRI-
Data through Cubic-Shaped Divergences. PLoS ONE 9(4): e93389. https://doi.org/10.1371/journal.pone.0093389 PMID: 24705281
33. Egger J. PCG-Cut: Graph Driven Segmentation of the Prostate Central Gland. PLoS ONE 8(10):
e76645. PMID: 24146901
34. Egger J, Lüddemann T, Schwarzenberg R, Freisleben B, Nimsky C. Interactive-cut: real-time feedback
segmentation for translational research. Comput. Med. Imaging Graphics 2014; 38(4): 285–295.
35. Egger J, Busse H, Brandmaier P, Seider D, Gawlitza M, Strocka S, et al. Interactive Volumetry of Liver
Ablation Zones. Sci Rep. 2015; 5: 15373. PMID: 26482818
36. Chen X, Xu L, Wang Y, Wang H, Wang F, Zeng X, et al. Development of a surgical navigation system
based on augmented reality using an optical see-through head-mounted display. J Biomed Inform.
2015; 55: 124–31. PMID: 25882923
37. Rosset A, Spadola L, Ratib O. OsiriX: an open-source software for navigating in multidimensional
DICOM images. J Digit Imaging 2004; 17(3): 205–16. PMID: 15534753
38. Egger J, Kapur T, Fedorov A, Pieper S, Miller JV, Veeraraghavan H, et al. GBM Volumetry using the 3D
Slicer Medical Image Computing Platform. Sci Rep. 2013; 3: 1364. PMID: 23455483
39. Tian J, Xue J, Dai Y, Chen J, Zheng J. A novel software platform for medical image processing and ana-
lyzing. IEEE Trans Inf Technol Biomed. 2008; 12(6): 800–12. PMID: 19000961
40. Klemm M, Kirchner T, Gröhl J, Cheray D, Nolden M, Seitel A, et al. MITK-OpenIGTLink for combining
open-source toolkits in real-time computer-assisted interventions. Int J Comput Assist Radiol Surg.
2017; 12(3): 351–361. PMID: 27687984
... Immersive Extended Reality (XR) technology, which includes Augmented Reality (AR), Virtual Reality and Mixed Reality (MR), has been proposed [4][5][6][7][8][9][10][11][12][13][14][15] to improve the visualisation of, and interaction with, anatomical models of the heart for educational and medical applications. ...
... MeVisLab (accessed on 30 May 2021) integrated a VR headset to enable medical applications using OpenVR [11], for a proof-of-concept of VR visualisation of 3D CT images. 3D Slicer VR [12], an extension that enabled communication between 3D Slicer ...
Full-text available
The intricate nature of congenital heart disease requires understanding of the complex, patient-specific three-dimensional dynamic anatomy of the heart, from imaging data such as three-dimensional echocardiography for successful outcomes from surgical and interventional procedures. Conventional clinical systems use flat screens, and therefore, display remains two-dimensional, which undermines the full understanding of the three-dimensional dynamic data. Additionally, the control of three-dimensional visualisation with two-dimensional tools is often difficult, so used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping for visualisation such as volume rendering, multiplanar reformatting, flow visualisation and advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interactions. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.
... VR-platform has designed an efficient and accurate collision algorithm. Open the "Physical Collision" panel in the VR-Platform editor, select the model to which collision detection effects need to be added, and add collision attributes [21], and then, hide the model so that visitors cannot cross when they use the angle of view of the collision detection camera to roam. Finally, browsing is prohibited in some places in the scene, as shown in Figure 9. (4) Simulate different weather effects. ...
Full-text available
With the rise of some concepts such as “Digital Earth” and “digital city,” how to realize the three-dimensional reconstruction and visualization of the urban art landscape is becoming a current research hotspot. This study takes the VR-Platform as the development platform and combines it with 3dsMax technology to study the dynamic simulation technology of urban 3D art landscapes. Additionally, this study combines the dynamic simulation technology of urban three-dimensional artistic landscape, designs the urban three-dimensional artistic landscape dynamic simulation system from the aspects of landscape modeling, texture mapping, drive integration, and so on, and finally realizes the basic functions such as roaming interaction, map navigation, information measurement, and visual display of the urban three-dimensional artistic landscape dynamic simulation system, which has a certain practical application value.
... This would result in an elegant solution for integrating worldwide research findings instantly in a single environment, which is not possible for the existing desktop solutions; or where there is anything whatsoever in this direction, then only as suboptimal extension plug-ins. Such an online environment can also offer an uncomplicated usage of recently arising, portable augmented reality (AR) [19,20] and virtual reality (VR) devices [21], by accessing the online environment from a VR/AR-ready web browser, such as Mozilla, Firefox, or Google Chrome, thus also removing the need for downloading, installing, and configuring a comprehensive platform package. In summary, an online environment would make the use of AR and VR devices more widespread in the medical domain, especially for medical teaching, training, and web conferencing. ...
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster ( ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and a facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include the automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed, to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points were achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing these with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future additional implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which get more and more common, as veterinary clinics and centers get more and more equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also thinkable, such as those in optical measuring techniques, astronomy, or archaeology.
... Currently, VR has been investigated for managing stress caused by anxiety disorders [1], navigation and tracking systems [2], minimizing hand tremor [3], and in medical simulators that teach subjects how to use surgical tools through realistic scenarios in virtual environments [4]. A simulator used the MeVisLab platform merged with the HTC headset to interact fast and instantly in medical scenarios [5]. Its medical data enabled an immersive environment through the HTC headset. ...
Full-text available
Virtual reality (VR) technology plays a significant role in many biomedical applications. These VR scenarios increase the valuable experience of tasks requiring great accuracy with human subjects. Unfortunately, commercial VR controllers have large positioning errors in a micro-manipulation task. Here, we propose a VR-based framework along with a sensor fusion algorithm to improve the microposition tracking performance of a microsurgical tool. To the best of our knowledge, this is the first application of Kalman filter in a millimeter scale VR environment, by using the position data between the VR controller and an inertial measuring device. This study builds and tests two cases: (1) without sensor fusion tracking and (2) location tracking with active sensor fusion. The static and dynamic experiments demonstrate that the Kalman filter can provide greater precision during micro-manipulation in small scale VR scenarios.
... After rendering, we use a head-mounted display (HMD), the HTC Vive (HTC Corporation China), to allow students to completely immerse themselves in the virtual world. [41] The advantage of using such HMD lies in the degree of immersion that can be realized. The success of our VR laparoscopy training platform is evidenced by students' improved scores after training. ...
Full-text available
Backgroud: Virtual reality (VR) technology represents the future of medical education due to its unique advantages, especially with the Covid-19 pandemic lasting. We developed a laparoscopic VR surgery collaborative training platform hoping to shed light on future medical education in China. Methods: We constructed a VR surgery training platform and designed surgery curriculum on laparoscopic cholecystectomy (LC). 36 first-year postgraduate students in China standardized training program for resident doctor (C-STRD) from the Third Xiangya Hospital of Central South University were enrolled for validation trials. In the Phase I trial, 12 students performed LC in the exploration mode. After training in the surgery learning mode, they performed LC again. The LC scores before and after training were compared. In the Phase II trial, another 12 students were randomly assigned to either the collaborative group or the control group. The former trained with a senior surgeon collaboratively in the surgery learning mode and then performed LC alone in the exploration mode. The latter trained in the surgery learning mode by themselves and performed LC in the exploration mode. The LC scores between groups were compared. The user experience (intention to use, skills improvement, usability, degree of enjoyment) were analyzed through questionnaires from the above 24 students. Interest in surgery learning of Phase I students was compared with 12 students who didn’t experience the VR platform. Results: In Phase I trial, the mean LC scores of the students were elevated from 56.83 to 61.17 (p=0.042) after learning in surgery learning mode. In Phase II trial, collaborative group students had higher scores than their rivals (67.17 vs 61.33, p=0.014). Most students have a positive users’ experience regarding the intention to use and skills improvement. Collaborative group students had higher evaluation regarding usability. Students who experienced the VR platform were significantly more interested in future surgery learning (3.60 vs 2.58, p <0.05). Conclusion: Our study constructed a VR platform for collaborative surgery training, which showed an excellent training effect. Medical students rated the platform highly, and their interest in learning increased.
Purpose: Virtual reality (VR) can provide an added value for diagnosis and/or intervention planning. Several VR software implementations have been proposed but they are often application dependent. Previous attempts for a more generic solution incorporating VR in medical prototyping software (MeVisLab) were still lacking functionality precluding easy and flexible development. Methods: We propose an alternative solution that uses rendering to a graphical processing unit (GPU) texture to enable rendering arbitrary Open Inventor scenes in a VR context. It facilitates flexible development of user interaction and rendering of more complex scenes involving multiple objects. We tested the platform in planning a transcatheter cardiac stent placement procedure. Results: This approach proved to enable development of a particular implementation that facilitates planning of percutaneous treatment of a sinus venosus atrial septal defect. The implementation showed it is intuitive to plan and verify the procedure using VR. Conclusion: An alternative implementation for linking OpenVR with MeVisLab is provided that offers more flexible development of VR prototypes which can facilitate further clinical validation of this technology in various medical disciplines.
In the field of joint surgery, the computer-aided design of knee prostheses suitable for the Chinese population requires a large quantity of anatomical knee data. In this study, we propose a new method that uses 3D Slicer software to automatically measure the morphological parameters of the distal femur. First, 141 femur samples were segmented from CT data to establish the femoral shape library. Next, balanced iterative reducing and clustering using hierarchies (BIRCH) combined with iterative closest point (ICP) and generalised procrustes analysis (GPA) were used to achieve fast registration of the femur samples. The statistical model was automatically calculated from the registered femur samples, and an orthopaedic surgeon marked the points on the statistical model. Finally, we developed an automatic measurement system using 3D Slicer software, and a deformable model matching method was applied to establish the point correspondence between the statistical model and the other samples. By matching points on the statistical model to corresponding points in other samples, we measured all other samples. We marked six points and measured eight parameters. We evaluated the performance of automatic matching by comparing the points marked manually with those matched automatically and verified the accuracy of the system by comparing the manual and automatic measurement results. The results indicated that the average error of the automatic matching points was 1.03 mm, and the average length error and average angle error measured automatically by the system were 0.37 mm and 0.63°, respectively. These errors were smaller than the intra-rater and inter-rater errors measured manually by two different surgeons, which showed that the accuracy of our automatic method was high. Taken together, this study established an accurate and automatic measurement system for the distal femur based on the secondary development of 3D Slicer software to assist orthopaedic surgeons in completing the measurements of big data and further promote the improved design of Chinese-specific knee prostheses.
Wearable technology is an emerging field that has the potential to revolutionize healthcare. Advances in sensors, augmented reality devices, the internet of things, and artificial intelligence offer clinically relevant and promising functionalities in the field of surgery. Apart from its well-known benefits for the patient, minimally invasive surgery (MIS) is a technically demanding surgical discipline for the surgeon. In this regard, wearable technology has been used in various fields of application in MIS such as the assessment of the surgeon’s ergonomic conditions, interaction with the patient or the quality of surgical performance, as well as in providing tools for surgical planning and assistance during surgery. The aim of this chapter is to provide an overview based on the scientific literature and our experience regarding the use of wearable technology in MIS, both in experimental and clinical settings.
The quality and efficiency of aircraft maintenance are the key to ensure flight safety and on‐time rate, which mainly depend on the techniques and experience of maintenance engineer. Generally, exercises on physical prototypes are used to improve the maintenance capability of engineers, but this will waste a lot of consumables and easily cause safety accidents. With the development of computer technology, maintenance training in a virtual environment has become an advanced and reliable solution. In this paper, a virtual training system of aircraft maintenance based on gesture recognition interaction is established. Leap Motion is used as a sensor to construct a hybrid machine learning gesture recognition model, so as to obtain natural human–computer interaction experience. In the recognition model, the initial weight matrix and the number of hidden layer nodes in the back propagation neural network are jointly optimized by the Particle Swarm Optimization algorithm with self‐adaption inertial weight. This optimization algorithm achieved a recognition rate of 81.26% in the dynamic gesture database constructed in this paper, which is higher than other available algorithms. A preliminary usability evaluation in university classrooms shows that the teaching system in this paper can achieve a better interactive experience. A training system for aircraft virtual maintenance using gesture recognition is presented. To improve the effect of gesture recognition, a joint optimization method for the initial weight matrix and the number of hidden layer nodes in the BPNN model is proposed. For the inertia weight in the adopted PSO optimization algorithm, a new formula is constructed to make it adaptive to the gesture recognition process.
Full-text available
PurposeDue to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. MethodsMITK-OpenIGTLink is presented as a network interface within MITK that allows easy to use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization. ResultsWe present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of \(640\times 480\) and \(1920\times 1080\) is possible at up to 512 and 128 Hz, respectively. Conclusion With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence (
Full-text available
Cranioplasty is among the oldest surgical procedures and trauma, infections, tumors and compression caused by brain edema are some of the reasons for the removal of bone. However, the reconstruction becomes difficult, if the defect is large and located in the fronto-orbital region, which is an area requiring aesthetic considerations. Thus, good preoperative evaluation, surgical planning and preparation, and accurate restoration of the anatomical contours are mandatory for a satisfactory outcome. Computer-aided planning of cranial 3D implants has gained importance during the last decade due to the limited time in clinical routine, especially in emergency cases and to avoid complications. But, state-of-the-art techniques are still quiet time consuming due to a low level approach. Generally speaking, CT scans are used to design an implant, often using non-medical software, which is not really appropriate. As a result, neurosurgeons spend hours with tedious low level operations on polygonal meshes for designing a satisfactory 3D implant. Commercial implant modeling software, like MIMICS, Biobuild or 3D-Doctor, is not always available, but if it is, using such software can be very complex. In this paper, an alternative software allowing fast, semi-automatic planning of cranial 3D implants under MeVisLab is presented.
In this contribution, a novel method for computer-aided planning of facial surgery using miniplates is proposed. The planning software is able to fit the most common miniplates to the desired position on a 3D facial model, with the placement respecting the local surface curvature. The implants can then be manufactured with 3D printing technology.
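The paper's fitting code is not published here; as a minimal sketch of one way to respect local curvature, the control points of a (hypothetical) miniplate model can be snapped onto the facial surface with VTK's cell locator:

#include <vtkSmartPointer.h>
#include <vtkCellLocator.h>
#include <vtkPolyData.h>
#include <vtkPoints.h>

// Project each control point of a miniplate model onto the facial surface
// mesh, so that the plate follows the local surface curvature.
void snapPlateToSurface(vtkPolyData* face, vtkPoints* platePoints)
{
  vtkSmartPointer<vtkCellLocator> locator = vtkSmartPointer<vtkCellLocator>::New();
  locator->SetDataSet(face);
  locator->BuildLocator();

  for (vtkIdType i = 0; i < platePoints->GetNumberOfPoints(); ++i)
  {
    double p[3], closest[3], dist2;
    vtkIdType cellId;
    int subId;
    platePoints->GetPoint(i, p);
    locator->FindClosestPoint(p, closest, cellId, subId, dist2);
    platePoints->SetPoint(i, closest); // move the plate point onto the mesh
  }
  platePoints->Modified();
}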
To inspire young students (grades 6–12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the way the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data and display them to the end user with consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation, such as textbooks and slides, are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can easily be extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms (a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2) and two different user interaction devices (a space mouse and traditional keyboard controls).
Researchers have shown that immersive Virtual Reality (VR) can serve as an unusually powerful pain control technique. However, research assessing the reported symptoms and negative effects of VR systems indicates that it is important to ascertain whether these symptoms arise from the use of particular VR display devices, particularly for users who are deemed "at risk," such as chronic pain patients. Moreover, these patients have specific and often complex needs and requirements, and because basic issues such as comfort may trigger anxiety or panic attacks, it is important to examine basic questions of the feasibility of using VR displays. Therefore, this repeated-measures experiment was conducted with two VR displays: the Oculus Rift head-mounted display (HMD) and Firsthand Technologies' immersive desktop display, DeepStream3D. The characteristics of these two immersive displays differ: one is worn, enabling patients to move their heads, while the other is peered into, allowing less head movement. To assess the severity of physical discomforts, 20 chronic pain patients tried both displays while watching a VR pain management demo in clinical settings. Results indicated that participants experienced higher levels of simulator sickness using the Oculus Rift HMD. However, results also indicated other preferences between the two VR displays among patients, including physical comfort levels and sense of immersion. Few studies have compared the usability of specific VR devices with chronic pain patients using a therapeutic virtual environment in pain clinics. Thus, the results may help clinicians and researchers to choose the most appropriate VR displays for chronic pain patients and guide VR designers in enhancing the usability of VR displays for long-term pain management interventions.
In this contribution, a prototype for semi-automatic computer-aided planning and reconstruction of cranial 3D implants is presented. The software prototype guides the user through the workflow, beginning with loading and mirroring the patient's head to obtain an initial curvature of the cranial implant. However, naïve mirroring is not sufficient for an implant, because human heads are in general too asymmetric. Thus, the user can perform Laplacian smoothing, followed by Delaunay triangulation, to generate an aesthetically pleasing and well-fitting implant. Finally, our software prototype allows saving the designed 3D model of the implant as an STL file for 3D printing. The 3D-printed implant can be used for further pre-interventional planning or even as the final implant for the patient. In summary, our findings show that a customized MeVisLab prototype can be an alternative to complex commercial planning software, which may not be available in a clinic.
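As a rough illustration of the smoothing and export steps (not the actual prototype code), VTK, on which platforms like MeVisLab build, provides a Laplacian smoothing filter and an STL writer; the iteration count and relaxation factor below are illustrative values:

#include <vtkSmartPointer.h>
#include <vtkSmoothPolyDataFilter.h>
#include <vtkSTLWriter.h>
#include <vtkPolyData.h>

// Apply Laplacian smoothing to the mirrored implant patch and
// write the result as an STL file for 3D printing.
void smoothAndExport(vtkPolyData* implant, const char* path)
{
  vtkSmartPointer<vtkSmoothPolyDataFilter> smooth =
      vtkSmartPointer<vtkSmoothPolyDataFilter>::New();
  smooth->SetInputData(implant);
  smooth->SetNumberOfIterations(50); // illustrative iteration count
  smooth->SetRelaxationFactor(0.1);  // small steps keep the overall shape stable
  smooth->Update();

  vtkSmartPointer<vtkSTLWriter> writer = vtkSmartPointer<vtkSTLWriter>::New();
  writer->SetFileName(path);
  writer->SetInputConnection(smooth->GetOutputPort());
  writer->SetFileTypeToBinary();
  writer->Write();
}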
This paper introduces a computer-based system designed to record a surgical procedure with multiple depth cameras and to reconstruct in three dimensions the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be immersively examined at a later time; equipped with a Virtual Reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded surgical procedure with simple VCR-like controls (play, pause, rewind, fast forward). The reconstruction can be annotated in space and time to provide users with more information about the scene. We expect such a system to be useful in applications such as the training of medical students and nurses.
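A minimal sketch of such VCR-like playback over timestamped reconstruction frames, with names and structure chosen for illustration rather than taken from the described system:

#include <vector>
#include <algorithm>
#include <cstddef>

// VCR-style playback clock over a sequence of timestamped capture instants.
// Each index corresponds to one frame of dynamic textured geometry.
enum class PlayState { Playing, Paused };

struct Player {
  std::vector<double> timestamps;   // seconds, sorted ascending, non-empty
  double clock = 0.0;               // current playback time
  double rate  = 1.0;               // 1 = play, -1 = rewind, 2 = fast forward
  PlayState state = PlayState::Paused;

  // Advance the clock once per rendered VR frame, clamped to the recording.
  void tick(double dt) {
    if (state == PlayState::Playing)
      clock = std::clamp(clock + rate * dt, timestamps.front(), timestamps.back());
  }

  // Index of the reconstruction frame to display at the current clock time.
  std::size_t currentFrame() const {
    auto it = std::upper_bound(timestamps.begin(), timestamps.end(), clock);
    return (it == timestamps.begin())
               ? 0
               : static_cast<std::size_t>(it - timestamps.begin() - 1);
  }
};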
Purpose: A user-centred design was used to develop and test the feasibility of an immersive 3D virtual reality wheelchair training tool for people with spinal cord injury (SCI). Method: A Wheelchair Training System was designed and modelled using the Oculus Rift headset and a Dynamic Control wheelchair joystick. The system was tested by clinicians and by expert wheelchair users with SCI. Data from focus groups and individual interviews were analysed using a general inductive approach to thematic analysis. Results: Four themes emerged: Realistic System, which described the advantages of a realistic virtual environment; Wheelchair Training System, which described participants' thoughts on the wheelchair training applications; Overcoming Resistance to Technology, the obstacles to introducing technology within the clinical setting; and Working outside the Rehabilitation Bubble, which described the protective hospital environment. Conclusions: The Oculus Rift Wheelchair Training System has the potential to provide a virtual rehabilitation setting that could allow wheelchair users to learn valuable community wheelchair skills in a safe environment. Nausea appears to be a side effect of the system, which will need to be resolved before it can become a viable clinical tool.
The application of virtual reality technologies (VRTs) for users with autism spectrum disorder (ASD) has been studied for decades. However, a gap remains in our understanding of VRT head-mounted displays (HMDs). As newly designed HMDs have become commercially available (in this study, the Oculus Rift™), the need to investigate newer devices is immediate. This study explored the willingness, acceptance, sense of presence and immersion of ASD participants. Results revealed that all 29 participants (mean age = 32; 33% with IQ < 70) were willing to wear the HMD. The majority of the participants reported an enjoyable experience and high levels of "presence", and were likely to use HMDs again. IQ was found to be independent of the willingness to use HMDs and of the related VRT immersion experience.