
Integration of the HTC Vive into the Medical Platform MeVisLab

Abstract

Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR receives a lot of attention in computer games, but it also has great potential in other areas, such as the medical domain. Examples are the planning, simulation, and training of medical interventions, for instance facial surgeries, where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from one's own application is needed. Furthermore, most researchers do not build their medical applications from scratch, but rather use platforms such as MeVisLab, Slicer, or MITK. These platforms have in common that they integrate and build upon libraries like ITK and VTK, while providing a more convenient graphical interface to them for the user. In this contribution, we demonstrate the use of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a dedicated module. This enables the direct and uncomplicated use of head-mounted displays, such as the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can be connected directly to our VR module via drag-and-drop and is rendered inside the HTC Vive for an immersive inspection.
Jan Egger, Markus Gall, Jürgen Wallner, Pedro Boechat, Alexander Hann, Xing Li, Xiaojun Chen, Dieter Schmalstieg
Graz University of Technology, Institute for Computer Graphics and Vision, Graz, Austria; BioTechMed-Graz, Graz, Austria
Medical University of Graz, Department of Maxillofacial Surgery, Graz, Austria
Ulm University, Department of Internal Medicine I, Ulm, Germany
Shanghai Jiao Tong University, School of Mechanical Engineering, Shanghai, China
ACKNOWLEDGMENTS

The work received funding from BioTechMed-Graz in Austria ("Hardware accelerated intelligent medical imaging"), the 6th Call of the Initial Funding Program from the Research & Technology House (F&T-Haus) at the Graz University of Technology (PI: Jan Egger, Ph.D., Ph.D.) and ClinicIMPPACT (610886). Dr. Xiaojun Chen is supported by the Natural Science Foundation of China (Grant No. 81511130089) and the Foundation of the Science and Technology Commission of Shanghai Municipality (Grants No. 14441901002, 15510722200 and 16441908400). A video demonstrating the integration of the HTC Vive into the medical platform MeVisLab is available on the following YouTube channel: https://www.youtube.com/c/JanEgger/videos

INTRODUCTION
Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR receives a lot of attention in computer games, but it also has great potential in other areas, such as the medical domain. Examples are the planning, simulation, and training of medical interventions, for instance facial surgeries, where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from one's own application is needed. Furthermore, most researchers do not build their medical applications from scratch, but rather use platforms such as MeVisLab, Slicer, or MITK. These platforms have in common that they integrate and build upon libraries like ITK and VTK, while providing a more convenient graphical interface to them for the user.
In this contribution, we demonstrate the use of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a dedicated module. This enables the direct and uncomplicated use of head-mounted displays, such as the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can be connected directly to our VR module via drag-and-drop and is rendered inside the HTC Vive for an immersive inspection (Figure 1).
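
To make the OpenVR-based integration more concrete, the following minimal sketch shows how an application typically connects to a head-mounted display through OpenVR. This is an illustrative snippet of our own, not the authors' module code; the error handling and output are assumptions.

```cpp
// Minimal OpenVR initialization sketch (illustrative only).
// Requires the OpenVR SDK (openvr.h) and a running SteamVR runtime.
#include <openvr.h>
#include <cstdio>

int main() {
  vr::EVRInitError initError = vr::VRInitError_None;
  // Connect to the VR runtime as a scene (rendering) application.
  vr::IVRSystem* hmd = vr::VR_Init(&initError, vr::VRApplication_Scene);
  if (initError != vr::VRInitError_None) {
    std::printf("OpenVR init failed: %s\n",
                vr::VR_GetVRInitErrorAsEnglishDescription(initError));
    return 1;
  }
  // Query the per-eye render target size recommended for the connected HMD.
  uint32_t width = 0, height = 0;
  hmd->GetRecommendedRenderTargetSize(&width, &height);
  std::printf("Recommended per-eye render target: %u x %u\n", width, height);

  vr::VR_Shutdown();
  return 0;
}
```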
METHODS

Data: For testing and evaluating the integration, we used several high-resolution computed tomography (CT) acquisitions from the clinical routine.
Workflow: A high-level workflow diagram showing the communication and interaction between MeVisLab and the HTC Vive via OpenVR is presented in Figure 2.
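
As a hedged illustration of the per-frame communication depicted in Figure 2, the sketch below shows a typical OpenVR compositor loop: the application waits for fresh device poses, renders both eye views, and submits the resulting textures to the headset. The OpenGL texture handles are assumed to be produced elsewhere by the application's renderer; this is not the authors' actual render loop.

```cpp
// Sketch of one frame of an OpenVR compositor loop (illustrative only).
// leftEyeTexId/rightEyeTexId are assumed OpenGL textures containing the
// already-rendered left and right eye views of the medical scene.
#include <openvr.h>
#include <cstdint>

void submitFrame(uint32_t leftEyeTexId, uint32_t rightEyeTexId) {
  // Block until the compositor provides the tracked poses for this frame.
  vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
  vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                   nullptr, 0);

  // ... render the scene for each eye into the two textures here ...

  // Hand the per-eye textures to the compositor for display on the HMD.
  vr::Texture_t leftTex = {
      reinterpret_cast<void*>(static_cast<uintptr_t>(leftEyeTexId)),
      vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
  vr::Texture_t rightTex = {
      reinterpret_cast<void*>(static_cast<uintptr_t>(rightEyeTexId)),
      vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
  vr::VRCompositor()->Submit(vr::Eye_Left, &leftTex);
  vr::VRCompositor()->Submit(vr::Eye_Right, &rightTex);
}
```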
Network: The overall MeVisLab network with our HTCVive module is presented in Figure 3. In this network, the medical data is loaded via a WEMLoad module (named DataLoad) and passed directly to the HTCVive module (the rectangular input connector at the bottom of the module).
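
For readers unfamiliar with MeVisLab module development, the heavily simplified C++ skeleton below sketches the general shape such a module can take. The class and field names are hypothetical, the ML calls (Module, addBool, handleNotification) follow the MeVisLab SDK module API as documented, and the authors' actual HTCVive module may be structured quite differently.

```cpp
// Hypothetical skeleton of a MeVisLab ML module wrapping OpenVR
// (illustrative sketch only; not the authors' actual HTCVive code).
#include "mlModuleIncludes.h"  // MeVisLab ML module base classes
#include <openvr.h>

ML_START_NAMESPACE

class HTCVive : public Module {
public:
  HTCVive() : Module(0 /*inputs*/, 0 /*outputs*/), _hmd(nullptr) {
    handleNotificationOff();
    _startFld = addBool("start");  // assumed field toggled in the GUI panel
    handleNotificationOn();
  }

  // Called by MeVisLab whenever one of the module's fields changes.
  void handleNotification(Field* field) override {
    if (field == _startFld && _startFld->getBoolValue()) {
      vr::EVRInitError err = vr::VRInitError_None;
      _hmd = vr::VR_Init(&err, vr::VRApplication_Scene);
      // ... start the render loop feeding the connected scene to the HMD ...
    }
  }

private:
  BoolField* _startFld;
  vr::IVRSystem* _hmd;

  ML_MODULE_CLASS_HEADER(HTCVive)
};

ML_END_NAMESPACE
```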
CONCLUSIONS

We developed a new module for the medical prototyping platform MeVisLab that provides an interface to head-mounted displays via the OpenVR library, enabling the direct and uncomplicated use of the HTC Vive under MeVisLab.
The OpenVR API provides a way to connect and interact with Virtual Reality displays without relying on a specific hardware vendor's SDK. Thus, our module could also communicate with other VR devices, such as the Oculus Rift.
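
To illustrate this hardware independence, the snippet below queries descriptive properties of whatever HMD is connected through the same generic OpenVR calls, whether it is a Vive, a Rift, or another supported headset. It is our own illustrative sketch, not part of the presented module.

```cpp
// Query vendor/model of the connected HMD via OpenVR (illustrative only).
#include <openvr.h>
#include <cstdio>

void printHmdInfo(vr::IVRSystem* hmd) {
  char buffer[vr::k_unMaxPropertyStringSize];
  // Properties are queried generically; no vendor-specific SDK is involved.
  hmd->GetStringTrackedDeviceProperty(vr::k_unTrackedDeviceIndex_Hmd,
      vr::Prop_ManufacturerName_String, buffer, sizeof(buffer), nullptr);
  std::printf("Manufacturer: %s\n", buffer);
  hmd->GetStringTrackedDeviceProperty(vr::k_unTrackedDeviceIndex_Hmd,
      vr::Prop_ModelNumber_String, buffer, sizeof(buffer), nullptr);
  std::printf("Model: %s\n", buffer);
}
```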
Overall, the goal of this contribution was to investigate the
feasibility of using the HTC Vive under the medical prototyping
platform MeVisLab.
1. The successful integration of OpenVR with MeVisLab
has been demonstrated;
2. The developed solution allows MeVisLab programs
to connect to virtual reality headset devices;
3. Real-time visualization of medical data in VR is now
possible under MeVisLab;
4. For proof of concept, the integration has been tested
with the HTC Vive device;
5. The HTC Vive module can be used in new MeVisLab
networks or added to existing ones.
Fig. 1: The HTC Vive.
Fig. 2: High-level workflow diagram showing the communication and interaction between MeVisLab and the HTC Vive via OpenVR.
RESULTS

The integration was successfully achieved under Microsoft Windows 8.1 with MeVisLab 2.8.1 (21-06-2016, Visual Studio 2015, x64) and the OpenVR SDK 1.0.2 (Figure 4).
There are several areas for future work, such as evaluating our integration with a wider range of medical data formats.
Fig. 3: The overall MeVisLab network with the HTCVive module; its interface and parameters are shown on the left side.
Fig. 4: Demonstration of the integration of the HTC Vive into the medical platform MeVisLab.