Towards 5G Telementoring in VR-Assisted
Heart Transplantation Using HoloLens 2
Bastian Dewitz, Roman Bibo, Sebastian Kalkhoff, Sobhan Moazemi,
Artur Liebrecht, Christian Geiger, Frank Steinicke, Hug Aubin and Falko Schmid
Universitätsklinikum Düsseldorf, Moorenstraße 5, 40225 Düsseldorf
Universität Hamburg, Vogt-Kölln-Straße 30, 22527 Hamburg
Hochschule Düsseldorf, Münsterstraße 156, 40476 Düsseldorf
Abstract: In this work-in-progress paper, we present the current state of a research project
regarding the use of the HoloLens 2 (HL2) as a single optical see-through head-mounted
display (OST-HMD) in medical telementoring. In the past, several projects have demonstrated
the potential of 3D reconstruction of operations and advanced communication using
3D annotations in augmented reality (AR). In our research project, we develop a system
to support the process of heart transplantation (HTX), which poses great challenges
due to its inherent requirements. We present first findings from the technical development
and deployment in the clinical environment, as well as limitations of the HL2.
Keywords: HoloLens 2, Augmented Reality, Virtual Reality, Telementoring, Transplantation, HTX, 5G, Annotation, Communication
1 Introduction
HTX is a process that may benefit greatly from technological advances in telementoring and
AR-devices due to time-critical and logistical challenges. While the explantation of a donor
organ is possible in many hospitals in the whole Eurotransplant region, only some specialized
hospitals perform the implantation. The time span between explantation of the donor heart
and its implantation into the body of the recipient must not exceed four hours to avoid increasing risks
due to ischemia. In case of HTX, the explantation is usually performed by a team of surgeons
which is sent from the implanting hospital to the remote location. They work together with a
second, remote team of surgeons, who are located at the implanting hospital and perform the
heart implantation when the organ arrives. Typically, the heart explantation is carried out in
parallel with explantations of other organs, such as liver, lung or kidney. Therefore, not only
time but also space in the operating room is limited. Additionally, the explantation takes place
in an unfamiliar environment in an external hospital. A crucial part of the transplantation is the
evaluation and precise explantation of the donor organ at the explantation site. In most cases,
the means for communication and synchronization between the two teams are limited
to occasional phone calls; full remote observation and support of the explantation is
still not common today. In many cases, the explanting surgeons are also less experienced
and can benefit from real-time communication with the implanting team.
Candidates for supporting this process of explantation using mixed-reality technology
are OST-HMDs such as the HL2. These head-worn devices allow displaying diverse media
in real time and can be connected to the internet to allow a remote surgeon to virtually
join an explantation. One of the main applications of the HL2 is the support of workers by
remote experts in industrial settings, using software such as the preinstalled Remote Assist.
In this paper, we present the current state of an experimental system which is tailored
for remote telementoring in HTX and which relies only on the built-in sensors of the HL2.
The system is intended to capture a 3D-reconstruction of an explantation site, stream the
data in real-time to the implantation surgeons and allow for advanced communication using
annotations. During development, we encountered various challenges which are presented
and discussed in this paper.
The remainder of this paper is structured as follows: First, we give an overview of related
work in this field of research. Second, we break down the pipeline (see Figure 1) of the
system and present key challenges. Third, we discuss the presented approaches and solutions
as well as remaining challenges. Finally, we summarize key findings, draw a conclusion
regarding the presented approach and give an outlook on future developments.
2 Related Work
The HL2 has been successfully used in research projects regarding AR as a tool in clinical
procedures. In the past five years, one of the main research fields for HoloLens 1 and 2 has
been medical applications [PBC21]. One of the most common uses to support surgeries
using HL2 is displaying macroscopic anatomical structures (e.g. bones [PIL+18, MUHO19],
organs [BBS+19] and vessels [PIL+18]) as 3D models or images and other information, such as
annotations [RMLST+20, GJS+21, LRvA+19] as overlays in a digitally augmented operating
room [BECS22]. For telementoring systems in the medical domain, STAR [RMLST+20] and
ARTEMIS [GJS+21] are elaborate recent research projects that showcase the potential
of such systems: The remote clinical procedure is captured and reconstructed in 3D at
a different location, allowing expert surgeons to support decision making and procedures
by adding annotations in 3D space. A current limitation is that these systems do not rely solely on the
built-in sensors of the OST-HMD, but require a prepared environment that is equipped with
additional sensors. In other cases, the HL2 is used without any additional sensors, and Microsoft
Dynamics 365 Remote Assist, which is shipped with the HL2, serves as the software for streaming video and
communication [vdPAvG22]. Actual deployment in clinical procedures is still limited, and
the most common types of study are phantom experiments and system setups [BECS22]. Due
to the critical nature of operations, the HL2 is moving only slowly from proof-of-concept towards
actual application in the operating room and clinical testing [CNH+21, DBS+21].
Modifications of the HL2 (and the HoloLens 1) have been used to enhance the device’s
capabilities by adding new sensors or by replacing existing ones with better hardware [CUBW21,
GBD+16, LYW+18]. In previous cases, the means to attach additional sensors have been
developed individually, and only in rare cases is a 3D model publicly available (e.g. [Cla20]).
3 Technical Challenges in Using HL2 for HTX
Figure 1: Planned pipeline of this research project. (A): Recording of the situs using a single
HL2 at the remote explantation site, (B): Processing of the data on the HL2, (C): Streaming via
5G using a handheld smartphone, (D): 3D reconstruction of the situs at the hospital, (E):
Annotations in Virtual Reality, (F): Display of annotations at the remote location.
3.1 Recording
The HL2 is equipped with an RGB camera and a time-of-flight depth sensor (AHAT).
While the AHAT sensor has a resolution of 512 x 512 pixels with a frame rate of 45 fps (which
drops to 5 fps when no hands are present in the depth image), the RGB camera provides
different profiles according to the application’s needs. The AHAT sensor can be accessed
using the official Research Mode [UBG+20] and publicly available wrappers [Wen21, Gsa21].
The resolution of the RGB camera and the depth sensor is considerably low for the intended
scenario, and only a small section of the image is relevant for the examination of the situs
from afar (ca. 240 x 180 pixels in the RGB image and 75 x 50 pixels in the AHAT image,
respectively; see also Figure 6).
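To illustrate why restricting transmission to this small section is attractive, a back-of-the-envelope calculation of uncompressed data rates helps. The sketch below assumes 24-bit RGB at 30 fps (one of several available profiles; the exact frame rate is an assumption) and 16-bit AHAT depth at 45 fps:

```python
def raw_rate_mbit_s(width, height, bits_per_pixel, fps):
    """Uncompressed stream data rate in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

# Full sensor streams (RGB profile 1920x1080 at 30 fps assumed)
print(raw_rate_mbit_s(1920, 1080, 24, 30))  # ~1493 Mbit/s (RGB)
print(raw_rate_mbit_s(512, 512, 16, 45))    # ~189 Mbit/s (AHAT)

# Only the section relevant for examining the situs (values from above)
print(raw_rate_mbit_s(240, 180, 24, 30))    # ~31 Mbit/s (RGB crop)
print(raw_rate_mbit_s(75, 50, 16, 45))      # ~2.7 Mbit/s (AHAT crop)
```

Even the cropped raw streams exceed the roughly 7 Mbit/s measured after compression in Section 3.2, which underlines why encoding is unavoidable despite its quality cost.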
A critical weak spot was found for the intended scenario of using the HL2 as a device for
capturing an operation: The viewing direction of the RGB camera is aligned with the view of a user
in a face-to-face scenario, with the area of interest directly in front of the user. In contrast,
the area of interest during surgery is located in the lower part of the surgeon's field of view.
A typical ergonomic posture is standing with the head tilted slightly downwards,
around 20° to 30°, as visible on the left side of Figure 1, just in front of the operation situs.
Especially when surgical loupes are used, the surgeon needs to keep this position to focus
on the operation. When the color camera of the HL2 is aligned with the operation situs, the
surgeon is forced to tilt his or her head downwards at a much larger angle (around 50°). This
forced posture is problematic from an ergonomic point of view. To counter this problem,
we followed two approaches to deflect the view direction of the color camera by
(a) Prism module (PM). (b) Mirror module (MM). (c) MM attached to HL2.
Figure 2: 3D-printed mounts which can be attached to the HL2-mount. The mounts were
printed using a Zortrax M300 Dual printer.
(a) View without modification. Head tilt ca. 50°. (b) Distorted view using PM. Head tilt ca. 35°. (c) Mirrored view using MM (flipped). Head tilt ca. 20°.
Figure 3: View with and without modifications on a life-size model of a human heart recorded
with the HL2. The width of one square is 10 mm. The opening for open-heart surgery is
typically not bigger than the area of the depicted checkerboard.
constructing: (1) a mirror module and (2) a prism module. Both were designed as modules
that can be attached to a 3D-printed HL2-mount (visible in Figure 2c).
The HL2-mount is designed to allow attaching diverse modules to the HL2 as an experimental
device. It can be tightly screwed to the HL2 using two M2x16 mm screws to prevent it
from falling down and thereby posing a hazard to the patient. The mount was developed in
an iterative prototyping process to reduce weight and allow heat dissipation by adding
a supported gap between the HL2-mount and the HL2, as the whole body of the processing unit
acts as a heat sink and covering it increases the device temperature significantly. A technical
drawing of the HL2-mount can be seen in Appendix A. Further, the CAD models are
available online (see appendix). The first module, the mirror module (MM) (see Figure 2b)
deflects the camera view at an angle of 35°. It uses a circular mirror with a diameter of 50
mm that is placed directly on top of the camera lens. An additional cover plate is added
below the mirror (see Figure 2b) to reduce reflections that disturb the AHAT sensor; this
decreases the usable image area of the AHAT sensor to a resolution of 512x320 pixels. The
mirror is positioned on top of a 3D-printed rim with a diameter of 0.5 mm that is intended
to mechanically prevent the mirror from detaching, and it is secured with a glued-on back-
plate. The second module, the prism module (PM) (see Figure 2a), uses a wedge prism with
a ray deviation of 15° and a diameter of 25 mm. The physical properties of the prism produce
distortion and visible chromatic aberrations in the image (see Figure 4a). The distortion
(a) Close-up of chromatic aberrations at edges using PM. (b) Manual correction of chromatic aberration by shifting red and blue channels along y.
Figure 4: Chromatic aberrations produced by the PM and result of correction.
can be compensated using an adequate model during camera calibration, and the chromatic
aberrations can be sufficiently reduced by shifting the red and blue channels of the image in
the y-direction (see Figure 4b). While both methods for deflecting the camera view work, in
the case of surgery, the MM supports the natural posture of surgeons better than the PM.
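The channel-shift correction can be prototyped in a few lines. The following is a minimal sketch using OpenCV and NumPy; the shift amounts of ±2 pixels are placeholders that would have to be determined per device, e.g. as part of the calibration procedure:

```python
import cv2
import numpy as np

def correct_chromatic_aberration(img_bgr, shift_r=2, shift_b=-2):
    """Shift the red and blue channels along y to realign them with green.

    shift_r and shift_b are given in pixels (positive = downwards); the
    actual values depend on the prism and must be found empirically.
    np.roll wraps pixels around at the image border, which is acceptable
    for shifts of a few pixels in a prototype.
    """
    b, g, r = cv2.split(img_bgr)
    r = np.roll(r, shift_r, axis=0)
    b = np.roll(b, shift_b, axis=0)
    return cv2.merge((b, g, r))
```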
3.2 Network Transmission
For network transmission, MixedReality-WebRTC (MRRTC) [Mic22] can be used. MRRTC
is a framework for Unity that makes it easy to establish a connection between two peers using
a STUN or TURN server which can be accessed in the public internet. It automatically
transmits encoded video and audio over media channels or arbitrary data over data channels
in different modes (each channel can be set to reliable and ordered, if necessary) with SRTP-
encryption. While MRRTC provides an easy way of connecting the explantation site and
hospital “out-of-the-box”, it also has some major disadvantages. MRRTC is mostly intended
to be used in video call scenarios. The available bandwidth is continuously renegotiated
and compression is set to allow communication with a low latency. The encoding reduces
the depth resolution to 8 bit, which is visible as discrete steps and generates noise at sharp
corners, reducing the overall quality compared to raw data (see Figure 5). Further, the
synchronization of streams is also difficult to implement as no timestamps are transmitted
in the current version of MRRTC and the delay of individual streams can vary over time.
Overall, the combined data rate of all streams after compression was measured at around 7
Mbit/s. To allow a network connection between the remote explantation site and the hospital
in the HTX use case, 5G seems a promising technology, as its speed and latency can be considered
sufficient for transmitting this amount of data. Handheld 5G-capable smartphones, which
allow independence from the existing infrastructure at the explantation site, may be used
to establish a 5G internet connection, either as mobile hotspots or via a cable connection
using USB NCM.
Figure 5: Reconstructed point cloud of a human hand after 8 bit encoding using MRRTC
at a high bandwidth (direct cable connection) (left) and as raw data (10 bit) (right).
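The quantization loss of the 8-bit encoding can be reproduced offline. The sketch below assumes the encoder maps the 10-bit depth range linearly onto 8 bit, so that one 8-bit step corresponds to four depth units (roughly 4 mm at the AHAT near range):

```python
import numpy as np

# Synthetic 10-bit depth values in millimetres around the working distance
depth_mm = np.linspace(400, 500, 1000).astype(np.uint16)

encoded = (depth_mm >> 2).astype(np.uint8)  # 1024 levels -> 256 levels
decoded = encoded.astype(np.uint16) << 2    # reconstruct, now in 4 mm steps

print(np.abs(depth_mm.astype(int) - decoded.astype(int)).max())  # up to 3 mm
```

These discrete 4 mm steps match the staircase artifacts visible in the left point cloud of Figure 5.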
3.3 Reconstruction in 3D
With an appropriate camera model, the mirror module and the prism module can be cali-
brated to map color from the RGB image to depth values in the AHAT image (see Figure
6). The depth and color information can then be rendered as point cloud or 3D mesh which
can be evaluated on a 2D screen or in 3D in a virtual reality or AR environment. For a
proof-of-concept, the RGB image is currently mapped to the depth image using 2D image
transformations (scaling and translating the image to fit the 3D structures). The combined
depth and color information is then rendered as a point cloud of quads with a diameter of
1 mm. The real-time reconstruction using compute shaders in Unity 3D seems promising
(see Figure 3b) and a procedure for calibration using the standardized methods of OpenCV
is currently under development. This procedure will be integrated into the software to allow
a fast calibration as part of the device preparation before the explantation.
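As an illustration of how such a calibration could look with OpenCV's standard routines, the sketch below uses a checkerboard with 10 mm squares as visible in Figure 3; the board geometry and the set of calibration images are placeholders:

```python
import glob
import cv2
import numpy as np

PATTERN = (7, 5)      # inner corners per row/column (placeholder)
SQUARE_MM = 10.0      # square width as in Figure 3

# 3D reference points of the checkerboard corners in board coordinates
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Note: images taken through the mirror module must be flipped first
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```

Whether the standard distortion models are adequate for the asymmetric distortion introduced by the prism remains to be validated.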
Some challenges that emerge when reconstructing a 3D model from a single depth sensor
need to be addressed in the future. While smooth surfaces are reconstructed quite well (see
Figure 7a and Figure 7b), edges are difficult to perceive without further processing (see
Figure 7c). Some parts of the reconstructed environment are occluded by other parts of the
environment or by the hands of the surgeon. Further, the noise at a distance of around 50
cm, which is typical for HTX operations, is considerably high, which makes some filtering
necessary.
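For completeness, here is a minimal back-projection sketch under a pinhole camera model, combined with the 5x5 Gaussian filtering shown in Figure 7b; the intrinsics and the synthetic depth image are placeholder values that would come from the calibration and the AHAT stream:

```python
import cv2
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy):
    """Back-project a depth image (in mm) to an HxWx3 point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))

# Synthetic example: flat surface at 45 cm in the 512x320 usable AHAT area
depth_raw = np.full((320, 512), 450, np.uint16)
fx = fy = 250.0                 # placeholder intrinsics
cx, cy = 256.0, 160.0

# Smoothing reduces the sensor noise at ~50 cm (cf. Figure 7b)
depth_smooth = cv2.GaussianBlur(depth_raw.astype(np.float32), (5, 5), 0)
points = depth_to_points(depth_smooth, fx, fy, cx, cy)
```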
3.4 Annotations and Communication
To extend the communication beyond video and audio, 3D annotations are a reasonable
choice that has been implemented in previous projects. Literature shows a sufficient accuracy
for many surgical procedures, with a deviation below 1 cm [GSB+20, GPW+19]. With a
correct 3D reconstruction, the transfer of 3D labels from the reconstruction to the real
environment becomes possible. Different means of interaction can be implemented to allow
the drawing of labels, such as 2D drawing on a monitor or 3D drawing in virtual reality. In the
case of open-heart surgery, the placement of labels is, however, a more difficult task, as tissue
(a) RGB view with MM (flipped along y). (b) AHAT depth image with 512x320 cutout resolution. (c) Combined partly colored point cloud.
Figure 6: Recording of a model of a human heart at a typical working distance of ca. 45 cm.
and organs constantly move or are occluded by the surgeon's hands or tools. Computer-
vision algorithms may make it possible to attach annotations to specific locations on the tissue
and track them over time [YLS+12, LSQ+16, BNSD17]; AI-based methods can also be considered
[MBN21, MPMA+22]. Considering the low resolution of the RGB and AHAT depth video
streams, this requires specialized algorithms, and it is unclear to what degree tracking of
labels will be possible.
The limited field of view of the HL2 allows content to be displayed in only a small part of
a user's field of view. Tests showed that it is possible to wear the HL2 on top of
typical surgical loupes with individually fitted oculars. This suggests using the HL2 as a
secondary monitor that displays 3D-registered annotations and other media content in the
upper part of the field of view while leaving a clear view of the situs in the lower part.
This has already been shown to be a feasible approach in previous research work
[YCR+17]. Considering the difficult situation in the operating room, some questions need
to be researched in the future, such as the coloring and size of 3D lines, which actions need
to be communicated to the remote location, and how accurately they can be interpreted.
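As a baseline for anchoring labels to moving tissue, sparse optical flow can track an annotation's anchor pixel between consecutive RGB frames. The sketch below uses OpenCV's pyramidal Lucas-Kanade tracker; this is not one of the cited tissue-tracking methods, merely a simple starting point, and the frames and anchor coordinates are placeholders:

```python
import cv2
import numpy as np

LK_PARAMS = dict(
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_anchor(prev_gray, next_gray, anchor_xy):
    """Track a single annotation anchor between two grayscale frames.

    Returns the new (x, y) position, or None if tracking failed,
    e.g. because the point was occluded by hands or instruments.
    """
    p0 = np.array([[anchor_xy]], dtype=np.float32)  # shape (1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None, **LK_PARAMS)
    if status[0][0] == 1:
        return float(p1[0][0][0]), float(p1[0][0][1])
    return None
```

Given the low resolution of the relevant image region, such a generic tracker would likely have to be replaced by the specialized or AI-based methods discussed above.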
4 Discussion
Although the HL2 enables remote support with its shipped software, this software is tailored to
other applications, such as maintenance of industrial plants using video annotation. The
annotation of 3D reconstructions, as intended in this project, seems like a useful application
that is worth exploring, especially in the field of telementoring. The presented project
consists of several complex problems that need to be addressed simultaneously and in a
coordinated manner to ensure a successful project. The presented approaches seem promising
and a gradual refinement in the future may lead to an experimental system in the context of
heart transplantation. While still some work has to be done, we are confident that testing
the application under clinical conditions will soon be possible.
One bottleneck that will persist in the current pipeline is using the HL2 as the OST-HMD.
Although it can be considered state-of-the-art, we repeatedly reached the limits of the device's
capabilities during development. A big limitation we encountered was the processing power
of the device itself. While the camera can record video material at a resolution of 1920x1080
(a) Reconstruction with MRRTC-encoded depth data. (b) Reconstruction with 5x5 Gaussian filtering. (c) Side view: Missing pixels at steep edges.
Figure 7: Close-up rendering of the reconstructed point cloud of a life-size model of a human
heart at a distance of 45 cm.
pixels, the processing power is not sufficient to support this resolution in transmission. We
considered using a cutout of the relevant areas in the image to reduce bandwidth, but the
frame rate dropped well below 5 fps with all methods tried (OpenCV for Unity, C# in Unity,
compute shaders), which was ultimately considered too low for real-time interaction. During
development, MRRTC was unfortunately marked as deprecated, so an alternative way
of transmitting data will have to be found to ensure future support and compatibility.
Overall, the mirror module seems to be the most promising candidate for deflecting the RGB
camera, as the posture of surgeons does not have to be adjusted; it will therefore be the primary
approach in this application and in similar use cases. The prism can be used in a similar way,
but, in this case, its deflection is not strong enough. While both approaches successfully
modify the view of the HL2, it would be preferable for future OST-HMDs to be equipped
with an additional tilted camera.
5 Conclusion and Future Work
In this paper, we presented the most important steps in the pipeline of a project regarding 5G
telementoring in HTX. Even though some features of the HL2, such as video resolution and
computation power, were identified beforehand as possibly insufficient for this scenario,
developing a running system yielded interesting insights for future developments. The following
key findings were made with our current prototype: (1) In surgical applications, the camera
view direction of the HL2 can be deflected to allow recording of the region of interest without
forcing an artificial posture on users. (2) The 5G network transmission of 3D data is possible
using WebRTC, although important details are lost due to encoding. For critical data, other
means of streaming need to be implemented. (3) A 3D reconstruction is possible using data
from the built-in sensors of the HL2, although the resolution is rather low and it is not clear
if it will be sufficient for this use case. (4) The HL2 can be worn above surgical loupes to
provide an on-demand view of annotations. A big challenge for tracking annotations in this
specific use case will be moving tissue and occlusions. While there are still some challenges
ahead of this project, the prospects are positive and an actual deployment using 5G in
surgeries will be an interesting use case in the field of mixed-reality-assisted telementoring.
References
[BBS+19] Henrik Brun, Robin Anton Birkeland Bugge, LKR Suther, Sigurd Birkeland,
Rahul Kumar, Egidijus Pelanis, and Ole Jacob Elle. Mixed reality holograms
for heart surgery planning: first user experience in congenital heart disease.
European Heart Journal-Cardiovascular Imaging, 20(8):883–888, 2019.
[BECS22] Manuel Birlo, PJ Eddie Edwards, Matthew Clarkson, and Danail Stoyanov.
Utility of optical see-through head mounted displays in augmented reality-
assisted surgery: a systematic review. Medical Image Analysis, page 102361,
2022.
[BNSD17] Sylvain Bernhardt, Stéphane A Nicolau, Luc Soler, and Christophe Doignon.
The status of augmented reality in laparoscopic surgery as of 2016. Medical
image analysis, 37:66–90, 2017.
[Cla20] Thomas Clarke. Hololens 2 Mount for ZED Mini. https://www.thingiverse.com/thing:4561113, 2020. Accessed: June 12, 2022.
[CNH+21] Fabio A Casari, Nassir Navab, Laura A Hruby, Philipp Kriechling, Ricardo
Nakamura, Romero Tori, Fátima de Lourdes dos Santos Nunes, Marcelo C
Queiroz, Philipp Fürnstahl, and Mazda Farshad. Augmented reality in or-
thopedic surgery is emerging from proof of concept towards clinical studies:
a literature review explaining the technology and current state of the art.
Current Reviews in Musculoskeletal Medicine, 14(2):192–203, 2021.
[CUBW21] Zubin Choudhary, Jesus Ugarte, Gerd Bruder, and Greg Welch. Real-time
magnification in augmented reality. In Symposium on Spatial User Interaction,
pages 1–2, 2021.
[DBS+21] Cyrill Dennler, David E Bauer, Anne-Gita Scheibler, José Spirig, Tobias
Götschi, Philipp Fürnstahl, and Mazda Farshad. Augmented reality in the
operating room: A clinical feasibility study. BMC musculoskeletal disorders,
22(1):1–9, 2021.
[GBD+16] Mathieu Garon, Pierre-Olivier Boulet, Jean-Philippe Doiron, Luc Beaulieu,
and Jean-François Lalonde. Real-time high resolution 3d data on the hololens.
In 2016 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR-Adjunct), pages 189–191. IEEE, 2016.
[GJS+21] Danilo Gasques, Janet G Johnson, Tommy Sharkey, Yuanyuan Feng,
Ru Wang, Zhuoqun Robin Xu, Enrique Zavala, Yifei Zhang, Wanze Xie, Xin-
ming Zhang, et al. Artemis: A collaborative mixed-reality system for immer-
sive surgical telementoring. In Proceedings of the 2021 CHI Conference on
Human Factors in Computing Systems, pages 1–14, 2021.
[GPW+19] Christina Gsaxner, Antonio Pepe, Jürgen Wallner, Dieter Schmalstieg, and
Jan Egger. Markerless image-to-face registration for untethered augmented
reality in head and neck surgery. In International Conference on Medical Im-
age Computing and Computer-Assisted Intervention, pages 236–244. Springer,
2019.
[Gsa21] Christina Gsaxner. HoloLens2-Unity-ResearchModeStreamer. https://github.com/cgsaxner/HoloLens2-Unity-ResearchModeStreamer, 2021. Accessed: June 12, 2022.
[GSB+20] Rocco Galati, Michele Simone, Graziana Barile, Raffaele De Luca, Carmine
Cartanese, and G Grassi. Experimental setup employed in the operating room
based on virtual and mixed reality: analysis of pros and cons in open abdomen
surgery. Journal of healthcare engineering, 2020, 2020.
[LRvA+19] Florentin Liebmann, Simon Roner, Marco von Atzigen, Davide Scaramuzza,
Reto Sutter, Jess Snedeker, Mazda Farshad, and Philipp Fürnstahl. Pedicle
screw navigation using surface digitization on the microsoft hololens. Inter-
national journal of computer assisted radiology and surgery, 14(7):1157–1165,
2019.
[LSQ+16] Bingxiong Lin, Yu Sun, Xiaoning Qian, Dmitry Goldgof, Richard Gitlin, and
Yuncheng You. Video-based 3d reconstruction, laparoscope localization and
deformation recovery for abdominal minimally invasive surgery: a survey. The
International Journal of Medical Robotics and Computer Assisted Surgery,
12(2):158–178, 2016.
[LYW+18] Christoph Leuze, Grant Yang, Gordon Wetzstein, Mahendra Bhati, Amit
Etkin, and Jennifer McNab. Marker-less co-registration of mri data to a sub-
ject’s head via a mixed reality device. In Proc. Intl. Soc. Mag. Reson. Med,
volume 26, page 1481, 2018.
[MBN21] Adam Mylonas, Jeremy Booth, and Doan Trang Nguyen. A review of artifi-
cial intelligence applications for motion tracking in radiotherapy. Journal of
Medical Imaging and Radiation Oncology, 65(5):596–611, 2021.
[Mic22] Microsoft. MixedReality-WebRTC. https://github.com/microsoft/MixedReality-WebRTC, 2022. Accessed: June 12, 2022.
[MPMA+22] David Männle, Jan Pohlmann, Sara Monji-Azad, Nikolas Löw, Nicole Rotter,
Jürgen Hesser, Annette Affolter, Anne Lammert, Angela Schell, Benedikt
Kramer, et al. Development of ai based soft tissue shift tracking during surgery
to optimize frozen section analysis. Laryngo-Rhino-Otologie, 101(S 02), 2022.
[MUHO19] Daisuke Mitsuno, Koichi Ueda, Yuka Hirota, and Mariko Ogino. Effective ap-
plication of mixed reality device hololens: simple manual alignment of surgical
field and holograms. Plastic and reconstructive surgery, 143(2):647–651, 2019.
[PBC21] Sebeom Park, Shokhrukh Bokijonov, and Yosoon Choi. Review of microsoft
hololens applications over the past five years. Applied Sciences, 11(16):7259,
2021.
[PIL+18] Philip Pratt, Matthew Ives, Graham Lawton, Jonathan Simmons, Nasko
Radev, Liana Spyropoulou, and Dimitri Amiras. Through the HoloLens looking
glass: augmented reality for extremity reconstruction surgery using 3d
vascular models with perforating vessels. European radiology experimental,
2(1):1–7, 2018.
[RMLST+20] Edgar Rojas-Muñoz, Chengyuan Lin, Natalia Sanchez-Tamayo, Maria Euge-
nia Cabrera, Daniel Andersen, Voicu Popescu, Juan Antonio Barragan, Ben
Zarzaur, Patrick Murphy, Kathryn Anderson, et al. Evaluation of an aug-
mented reality platform for austere surgical telementoring: a randomized con-
trolled crossover study in cricothyroidotomies. NPJ digital medicine, 3(1):1–9,
2020.
[UBG+20] Dorin Ungureanu, Federica Bogo, Silvano Galliani, Pooja Sama, Xin Duan,
Casey Meekhof, Jan Stühmer, Thomas J Cashman, Bugra Tekin, Johannes L
Schönberger, et al. Hololens 2 research mode as a tool for computer vision
research. arXiv preprint arXiv:2008.11239, 2020.
[vdPAvG22] Kees van der Putten, Mike B Anderson, and Rutger C van Geenen. Looking
through the lens: The reality of telesurgical support with interactive technol-
ogy using microsoft’s hololens 2. Case Reports in Orthopedics, 2022, 2022.
[Wen21] Wenhao. HoloLens2-ResearchMode-Unity. https://github.com/petergu684/HoloLens2-ResearchMode-Unity, 2021. Accessed: June 12, 2022.
[YCR+17] Jang W Yoon, Robert E Chen, Karim ReFaey, Roberto J Diaz, Ronald Reimer,
Ricardo J Komotar, Alfredo Quinones-Hinojosa, Benjamin L Brown, and
Robert E Wharen. Technical feasibility and safety of image-guided parieto-
occipital ventricular catheter placement with the assistance of a wearable head-
up display. The International Journal of Medical Robotics and Computer As-
sisted Surgery, 13(4):e1836, 2017.
[YLS+12] Michael C Yip, David G Lowe, Septimiu E Salcudean, Robert N Rohling,
and Christopher Y Nguan. Tissue tracking and registration for image-guided
surgery. IEEE transactions on medical imaging, 31(11):2169–2182, 2012.
Appendix
A
Technical drawing of the HoloLens 2 Base Mount. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.
B
Technical drawing of the HoloLens 2 mirror module. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.
C
Technical drawing of the HoloLens 2 prism module. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.
Article
Background: Wearable technology is growing in popularity as a result of its ability to interface with normal human movement and function. Methods: Using proprietary hardware and software, neuronavigation images were captured and transferred wirelessly via a password-encrypted network to the head-up display. The operating surgeon wore a loupe-mounted wearable head-up display during image-guided parieto-occipital ventriculoperitoneal shunt placement in two patients. Results: The shunt placement was completed successfully without complications. The tip of the catheter ended well within the ventricles away from the ventricular wall. The wearable device allowed for continuous monitoring of neuronavigation images in the right upper corner of the surgeon's visual field without the need for the surgeon to turn his head to view the monitors. Conclusions: The adaptable nature of this proposed system permits the display of video data to the operating surgeon without diverting attention away from the operative task. This technology has the potential to enhance image-guided procedures.