Towards 5G Telementoring in VR-Assisted
Heart Transplantation Using HoloLens 2
Bastian Dewitz∗,†, Roman Bibo∗, Sebastian Kalkhoff∗, Sobhan Moazemi∗,
Artur Liebrecht∗, Christian Geiger⋆, Frank Steinicke†, Hug Aubin∗and Falko Schmid∗
∗Universitätsklinikum Düsseldorf, Moorenstraße 5, 40225 Düsseldorf
†Universität Hamburg, Vogt-Kölln-Straße 30, 22527 Hamburg
⋆Hochschule Düsseldorf, Münsterstraße 156, 40476 Düsseldorf
Abstract: In this work-in-progress paper, we present the current state of a research project on the use of the HoloLens 2 (HL2) as a single optical see-through head-mounted display (OST-HMD) in medical telementoring. In the past, several projects have demonstrated the potential of 3D reconstruction of operations and advanced communication using 3D annotations in augmented reality (AR). In our research project, we develop a system to support the process of heart transplantation (HTX), which poses great challenges due to its inherent requirements. We present first findings from the technical development and deployment in the clinical environment, as well as limitations of using the HL2.
Keywords: HoloLens 2, Augmented Reality, Virtual Reality, Telementoring, Transplantation, HTX, 5G, Annotation, Communication
1 Introduction
HTX is a process that may benefit greatly from technological advances in telementoring and AR devices due to its time-critical and logistical challenges. While the explantation of a donor organ is possible in many hospitals across the Eurotransplant region, only a few specialized hospitals perform the implantation. The time span between the explantation of the donor heart and its implantation into the body of the recipient is limited to four hours to avoid increasing risks due to ischemia. In the case of HTX, the explantation is usually performed by a team of surgeons that is sent from the implanting hospital to the remote location. They work together with a second, remote team of surgeons, who are located at the implanting hospital and perform the heart implantation when the organ arrives. Typically, the heart explantation is carried out in parallel with explantations of other organs, such as liver, lung, or kidney. Therefore, not only time but also space in the operating room is limited. Additionally, the explantation takes place in an unfamiliar environment in an external hospital. A crucial part of the transplantation is the evaluation and precise explantation of the donor organ at the explantation site. The means of communication and synchronization between the two teams are usually limited to occasional phone calls, and full remote observation and support of the explantation is still not common today. In many cases, the explanting surgeons are also less experienced and can benefit from real-time communication with the implanting team.
Candidates for supporting this process of explantation using mixed-reality technology
are OST-HMDs such as the HL2. These head-worn devices allow displaying diverse media
in real-time and can be connected to the internet to allow a remote surgeon to virtually
join an explantation. One of the main applications of the HL2 is the support of on-site workers by experts from afar, e.g., in industrial settings using software such as the preinstalled Remote Assist.
In this paper, we present the current state of an experimental system tailored for remote telementoring in HTX that relies only on the built-in sensors of the HL2. The system is intended to capture a 3D reconstruction of the explantation site, stream the data in real-time to the implanting surgeons, and allow for advanced communication using
annotations. During development, we encountered various challenges, which are presented and discussed in this paper.
The remainder of this paper is structured as follows: First, we give an overview of related work in this field of research. Second, we break down the pipeline of the system (see Figure 1) and present key challenges. Third, we discuss the presented approaches and solutions as well as the remaining challenges. Finally, we summarize key findings, draw a conclusion regarding the presented approach, and give an outlook on future developments.
2 Related Work
The HL2 has been successfully used in research projects on AR as a tool in clinical procedures. In the past five years, one of the main research fields for the HoloLens 1 and 2 has been medical applications [PBC21]. One of the most common uses of the HL2 to support surgeries is displaying macroscopic anatomical structures (e.g., bones [PIL+18, MUHO19], organs [BBS+19], and vessels [PIL+18]) as 3D models or images, along with other information such as annotations [RMLST+20, GJS+21, LRvA+19], as overlays in a digitally augmented operating room [BECS22]. Among telementoring systems in the medical domain, STAR [RMLST+20] and ARTEMIS [GJS+21] are elaborate recent research projects that showcase the potential of such systems: the remote clinical procedure is captured and reconstructed in 3D at a different location to allow expert surgeons to support decision making and procedures by adding annotations in 3D space. A current limitation is that they do not rely solely on the built-in sensors of the OST-HMD but require a prepared environment equipped with additional sensors. In other cases, the HL2 is used without any additional sensors, and Microsoft Dynamics 365 Remote Assist, which is shipped with the HL2, is used as software for streaming video and communication [vdPAvG22]. The actual deployment in clinical procedures is still limited, and the most common types of studies are phantom experiments and system setups [BECS22]. Due to the critical situation in operations, the HL2 is moving only slowly from proof-of-concept to actual application in the operating room and clinical testing [CNH+21, DBS+21].
Modifications of the HL2 (and the HoloLens 1) have been used to enhance the device's capabilities by adding new sensors or by replacing existing ones with better hardware [CUBW21, GBD+16, LYW+18]. In previous cases, the means to attach additional sensors have been developed individually, and only in rare cases is a 3D model publicly available (e.g., [Cla20]).
3 Technical Challenges in Using HL2 for HTX
Figure 1: Planned pipeline of this research project. (A): Recording of the situs using a single HL2 at the remote explantation site; (B): processing of the data on the HL2; (C): streaming via 5G using a handheld smartphone; (D): 3D reconstruction of the situs at the hospital; (E): annotations in virtual reality; (F): display of the annotations at the remote location.
3.1 Recording
The HL2 is equipped with an RGB camera and a time-of-flight depth sensor (AHAT). While the AHAT sensor has a resolution of 512 x 512 pixels at a frame rate of 45 fps (which drops to 5 fps when no hands are present in the depth image), the RGB camera provides different profiles according to the application's needs. The AHAT sensor can be accessed using the official Research Mode [UBG+20] and publicly available wrappers [Wen21, Gsa21]. The resolution of the RGB camera and the depth sensor is considerably low for the intended scenario, and only a small section of the image is relevant for the examination of the situs from afar (ca. 240 x 180 pixels in the RGB image and 75 x 50 pixels in the AHAT image, respectively; see also Figure 6).
For the intended scenario of using the HL2 as a device for capturing an operation, a critical weak spot was found: the viewing direction of the RGB camera is aligned with the view of a user in a face-to-face scenario, with an area of interest directly in front of the user. In contrast, the area of interest during surgery is located in the lower part of the surgeon's field of view. A typical ergonomic posture is standing with a slightly downward-tilted head, around 20° to 30°, as visible on the left side of Figure 1 just in front of the operation situs. Especially when surgical loupes are used, the surgeon needs to keep this position to focus on the operation. When the color camera of the HL2 is aligned with the operation situs, the surgeon is forced to tilt his or her head downwards at a much larger angle (around 50°). This forced posture is problematic from an ergonomic point of view. To counter this problem, we followed two approaches to deflect the view direction of the color camera: (1) a mirror module and (2) a prism module (see Figure 2).
Figure 2: 3D-printed mounts that can be attached to the HL2-mount: (a) prism module (PM); (b) mirror module (MM); (c) MM attached to the HL2. The mounts were printed using a Zortrax M300 Dual printer.
Figure 3: View with and without modifications on a life-size model of a human heart, recorded with the HL2: (a) view without modification, head tilt ca. 50°; (b) distorted view using the PM, head tilt ca. 35°; (c) mirrored view using the MM (flipped), head tilt ca. 20°. The width of one square is 10 mm. The opening for open-heart surgery is typically not bigger than the area of the depicted checkerboard.
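To a first approximation (a simplification we assume here, not a measured model), the required head tilt decreases by the deflection angle of the attached optics:

$$ \theta_{\text{head}} \approx \theta_{\text{unmodified}} - \theta_{\text{deflection}} $$

With an unmodified tilt of ca. 50°, the 15° ray deviation of the prism module predicts a tilt of ca. 35°, and the 35° deflection of the mirror module a tilt of ca. 15° to 20°, in line with the values annotated in Figure 3.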
Both were designed as modules that can be attached to a 3D-printed HL2-mount (visible in Figure 2c).
The HL2-mount is designed to allow attaching diverse modules to the HL2 as an experimental device. It can be tightly screwed to the HL2 using two M2x16 mm screws to prevent it from falling down and thereby posing a hazard to the patient. The mount was developed in an iterative prototyping process to reduce weight and allow heat dissipation by adding a supported gap between the HL2-mount and the HL2, as the whole body of the processing unit acts as a heat sink and covering it increases the device temperature significantly. A technical drawing of the HL2-mount can be seen in Appendix A. Further, the CAD models are available online (see the appendix). The first module, the mirror module (MM) (see Figure 2b), deflects the camera view at an angle of 35°. It uses a circular mirror with a diameter of 50 mm that is placed directly at the top of the camera lens. An additional cover plate is added below the mirror (see Figure 2b) to reduce reflections that disturb the AHAT sensor, which decreases the usable image area of the AHAT sensor to a resolution of 512x320 pixels. The mirror is positioned on top of a 3D-printed rim with a diameter of 0.5 mm, which is intended to mechanically prevent the mirror from detaching, and is secured with a glued-on backplate. The second module, the prism module (PM) (see Figure 2a), uses a wedge prism with a ray deviation of 15° and a diameter of 25 mm. The physical properties of the prism produce distortion and visible chromatic aberrations in the image (see Figure 4a).
Figure 4: Chromatic aberrations produced by the PM and the result of the correction: (a) close-up of chromatic aberrations at edges using the PM; (b) manual correction by shifting the red and blue channels along y.
The distortion can be compensated using an adequate camera model during calibration, and the chromatic aberrations can be sufficiently reduced by shifting the red and blue channels of the image in the y-direction (see Figure 4b). While both methods for deflecting the camera view work, in the case of surgery, the MM supports the natural posture of surgeons better than the PM.
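As an illustration of this correction step, the following minimal Python/NumPy sketch shifts the red and blue channels along y; the shift amounts are hypothetical placeholders that would be determined once per device during calibration:

```python
import numpy as np

def correct_chromatic_aberration(rgb, shift_red=2, shift_blue=-2):
    """Reduce the PM's chromatic aberration by shifting the red and blue
    channels along the y-axis. The shift values (in pixels) are hypothetical
    and would be measured during calibration."""
    out = rgb.copy()
    out[..., 0] = np.roll(rgb[..., 0], shift_red, axis=0)   # red channel
    out[..., 2] = np.roll(rgb[..., 2], shift_blue, axis=0)  # blue channel
    return out  # np.roll wraps at the border; negligible for small shifts
```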
3.2 Network Transmission
For network transmission, MixedReality-WebRTC (MRRTC) [Mic22] can be used. MRRTC is a framework for Unity that makes it easy to establish a connection between two peers using a STUN or TURN server accessible on the public internet. It automatically transmits encoded video and audio over media channels, or arbitrary data over data channels in different modes (each channel can be set to reliable and ordered, if necessary), with SRTP encryption. While MRRTC provides an easy way of connecting the explantation site and the hospital “out of the box”, it also has some major disadvantages. MRRTC is mostly intended to be used in video-call scenarios. The available bandwidth is continuously renegotiated, and compression is set to allow communication with a low latency. The encoding reduces the depth resolution to 8 bit, which is visible as discrete steps and generates noise at sharp corners, reducing the overall quality compared to raw data (see Figure 5). Further, the synchronization of streams is difficult to implement, as no timestamps are transmitted in the current version of MRRTC and the delay of individual streams can vary over time. Overall, the combined data rate of all streams after compression was measured at around 7 Mbit/s. To allow a network connection between the remote explantation site and the hospital in the HTX use case, 5G seems a promising technology, as its speed and latency can be considered sufficient for transmitting this amount of data. Handheld 5G-capable smartphones, which allow independence from the existing infrastructure at the explantation site, may be used to establish a 5G internet connection as mobile hotspots or via a cable connection to the USB NCM port.
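The loss introduced by the 8-bit encoding can be illustrated with a short back-of-the-envelope sketch in Python (our illustration, assuming the AHAT near range of ca. 1 m is mapped linearly onto one 8-bit channel):

```python
import numpy as np

# Hypothetical linear mapping of AHAT depth (mm, near range up to ~1 m)
# onto a single 8-bit channel, as the video encoding effectively enforces.
depth_mm = np.linspace(0.0, 1000.0, 10001, dtype=np.float32)
encoded = np.round(depth_mm / 1000.0 * 255.0)   # 8-bit code values 0..255
decoded = encoded * (1000.0 / 255.0)            # reconstructed depth in mm

step = 1000.0 / 255.0                           # ~3.9 mm per code value
error = np.abs(decoded - depth_mm).max()        # ~2 mm maximum rounding error
print(f"step: {step:.1f} mm, max error: {error:.1f} mm")
```

Steps of ca. 4 mm explain the discrete bands visible in Figure 5 when compared to the ca. 1 mm resolution of the raw 10-bit data.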
Figure 5: Reconstructed point cloud of a human hand after 8-bit encoding using MRRTC at a high bandwidth (direct cable connection) (left) and as raw data (10 bit) (right).
3.3 Reconstruction in 3D
With an appropriate camera model, the mirror module and the prism module can be calibrated to map color from the RGB image to depth values in the AHAT image (see Figure 6). The depth and color information can then be rendered as a point cloud or a 3D mesh, which can be evaluated on a 2D screen or in 3D in a virtual reality or AR environment. For a proof of concept, the RGB image is currently mapped to the depth image using 2D image transformations (scaling and translating the image to fit the 3D structures). The combined depth and color information is then rendered as a point cloud of quads with a diameter of 1 mm. The real-time reconstruction using compute shaders in Unity 3D seems promising (see Figure 3b), and a procedure for calibration using the standardized methods of OpenCV is currently under development. This procedure will be integrated into the software to allow a fast calibration as part of the device preparation before the explantation.
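A minimal sketch of how such a calibration could look, assuming a checkerboard with 10 mm squares as in Figure 3 and a set of grayscale captures `frames` taken through the attached module (the board size and variable names are our assumptions; the project's actual procedure is still under development):

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the hypothetical checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 10.0  # mm

obj_points, img_points = [], []
for gray in frames:  # `frames`: grayscale RGB-camera captures (assumed given)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# The intrinsics K and the distortion coefficients jointly absorb the
# additional distortion introduced by the prism or mirror module.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, frames[0].shape[::-1], None, None)
```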
Some challenges that emerge when reconstructing a 3D model from a single depth sensor need to be addressed in the future. While smooth surfaces are reconstructed quite well (see Figure 7a and Figure 7b), edges are difficult to perceive without further processing (see Figure 7c). Some parts of the reconstructed environment are occluded by other parts of the environment or by the hands of the surgeon. Further, the noise at a distance of around 50 cm, which is typical for HTX operations, is considerably high, which makes some filtering necessary.
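For illustration, a simplified Python sketch of the unprojection and filtering steps, assuming a plain pinhole model with intrinsics K (the real AHAT sensor requires the Research Mode calibration instead):

```python
import cv2
import numpy as np

def depth_to_point_cloud(depth_mm, K):
    """Unproject a depth image (values in mm) into camera-space 3D points.
    K is a 3x3 pinhole intrinsic matrix; this is a simplification of the
    actual AHAT sensor model."""
    depth = cv2.GaussianBlur(depth_mm, (5, 5), 0)  # 5x5 filter as in Fig. 7b
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels without a valid depth
```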
3.4 Annotations and Communication
To extend the communication beyond video and audio, 3D annotations are a reasonable choice and have been implemented in previous projects. The literature shows a sufficient accuracy for many surgical procedures, with a deviation below 1 cm [GSB+20, GPW+19]. With a correct 3D reconstruction, the transfer of 3D labels from the reconstruction to the real environment becomes possible. Different means of interaction can be implemented to allow the drawing of labels, such as 2D drawing on a monitor or 3D drawing in virtual reality.
Figure 6: Recording of a model of a human heart at a typical working distance of ca. 45 cm: (a) RGB view with the MM (flipped along y); (b) AHAT depth image with 512x320 cutout resolution; (c) combined, partly colored point cloud.
In the case of open-heart surgery, however, the placement of labels is a more difficult task, as tissue and organs constantly move or are occluded by the surgeon's hands or tools. Computer-vision algorithms may allow attaching annotations to specific locations on the tissue and tracking them over time [YLS+12, LSQ+16, BNSD17], and AI-based methods can also be considered [MBN21, MPMA+22]. Considering the low resolution of the RGB and AHAT depth video streams, this requires specialized algorithms, and it is unclear to what degree tracking of labels will be possible.
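One classical candidate from this family is sparse optical flow. The following sketch (our example, not the project's chosen method) tracks annotation anchor points between consecutive grayscale frames with pyramidal Lucas-Kanade; the window and pyramid parameters are assumptions that would need tuning for the low-resolution streams:

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

def track_annotation_anchors(prev_gray, curr_gray, anchors):
    """anchors: Nx1x2 float32 array of pixel positions where labels were
    attached. Returns the positions recovered in the current frame."""
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, anchors, None, **LK_PARAMS)
    return new_pts[status.ravel() == 1]  # keep only successfully tracked points
```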
The limited field of view of the HL2 only allows displaying content in a small part of a user's field of view. Tests showed that it is possible to wear the HL2 on top of typical surgical loupes with individually fitted oculars. This suggests using the HL2 as a secondary monitor that displays 3D-registered annotations and other media content in the upper part of the field of view while keeping a clear view of the situs in the lower part. This has already been researched as a feasible approach in previous work [YCR+17]. Considering the difficult situation in the operating room, some questions need to be researched in the future, such as the coloring and size of 3D lines, which actions need to be communicated to the remote location, and how accurately they can be interpreted.
4 Discussion
Although the HL2 enables remote support with the shipped software, this software is tailored to other applications, such as the maintenance of industrial plants using video annotations. The annotation of 3D reconstructions, as intended in this project, seems like a useful application that is worth exploring, especially in the field of telementoring. The presented project consists of several complex problems that need to be addressed simultaneously and in a coordinated manner to ensure success. The presented approaches seem promising, and gradual refinement may lead to an experimental system in the context of heart transplantation. While some work remains to be done, we are confident that testing the application under clinical conditions will soon be possible.

One bottleneck that will persist in the current pipeline is using the HL2 as the OST-HMD. Although it can be considered state of the art, we repeatedly reached the limits of the device's capabilities during development. A big limitation we encountered was the processing power of the device itself. While the camera can record video material at a resolution of 1920x1080 pixels, the processing power is not sufficient to support this resolution in transmission.
Figure 7: Close-up rendering of the reconstructed point cloud of a life-size model of a human heart at a distance of 45 cm: (a) reconstruction with MRRTC-encoded depth data; (b) reconstruction with 5x5 Gaussian filtering; (c) side view showing missing pixels at steep edges.
It was considered to use a cutout of the relevant areas in the image to reduce bandwidth, but the frame rate dropped well below 5 fps with all tested methods (OpenCV for Unity, C# in Unity, compute shaders), which was ultimately considered too low for real-time interaction. During development, MRRTC was unfortunately marked as deprecated, so an alternative way of transmitting data has to be found in the future to ensure continued support and compatibility.
Overall, the mirror mount seems to be the most promising candidate for deflecting the RGB camera, as the surgeons' posture does not have to be adjusted; it will therefore be the primary approach in this application and in similar use cases. The prism can be used in a similar way,
but, in this case, the deflection is not strong enough. While both approaches successfully
modify the view of the HL2, it would be preferable for future OST-HMDs to be equipped
with an additional tilted camera.
5 Conclusion and Future Work
In this paper, we presented the most important steps in the pipeline of a project regarding 5G
telementoring in HTX. Even though some features of the HL2, such as video resolution and computation power, were determined beforehand to be possibly insufficient for this scenario, developing a running system yielded interesting insights for future developments. The following key findings were made with our current prototype: (1) In surgical applications, the camera
view direction of the HL2 can be deflected to allow recording of the region of interest without
forcing an artificial posture on users. (2) The 5G network transmission of 3D data is possible
using WebRTC, although important details are lost due to encoding. For critical data, other
means of streaming need to be implemented. (3) A 3D reconstruction is possible using data
from the built-in sensors of the HL2, although the resolution is rather low and it is not clear
if it will be sufficient for this use case. (4) The HL2 can be worn above surgical loupes to provide an on-demand view of annotations. A big challenge for tracking annotations in this specific use case will be moving tissue and occlusions. While there are still some challenges
ahead of this project, the prospects are positive and an actual deployment using 5G in
surgeries will be an interesting use case in the field of mixed-reality-assisted telementoring.
References
[BBS+19] Henrik Brun, Robin Anton Birkeland Bugge, LKR Suther, Sigurd Birkeland,
Rahul Kumar, Egidijus Pelanis, and Ole Jacob Elle. Mixed reality holograms
for heart surgery planning: first user experience in congenital heart disease.
European Heart Journal-Cardiovascular Imaging, 20(8):883–888, 2019.
[BECS22] Manuel Birlo, PJ Eddie Edwards, Matthew Clarkson, and Danail Stoyanov.
Utility of optical see-through head mounted displays in augmented reality-assisted surgery: a systematic review. Medical Image Analysis, page 102361,
2022.
[BNSD17] Sylvain Bernhardt, Stéphane A Nicolau, Luc Soler, and Christophe Doignon.
The status of augmented reality in laparoscopic surgery as of 2016. Medical
image analysis, 37:66–90, 2017.
[Cla20] Thomas Clarke. Hololens 2 Mount for ZED Mini. https://www.thingiverse.
com/thing:4561113, 2020. Accessed: June 12, 2022.
[CNH+21] Fabio A Casari, Nassir Navab, Laura A Hruby, Philipp Kriechling, Ricardo
Nakamura, Romero Tori, Fátima de Lourdes dos Santos Nunes, Marcelo C
Queiroz, Philipp Fürnstahl, and Mazda Farshad. Augmented reality in or-
thopedic surgery is emerging from proof of concept towards clinical studies:
a literature review explaining the technology and current state of the art.
Current Reviews in Musculoskeletal Medicine, 14(2):192–203, 2021.
[CUBW21] Zubin Choudhary, Jesus Ugarte, Gerd Bruder, and Greg Welch. Real-time
magnification in augmented reality. In Symposium on Spatial User Interaction,
pages 1–2, 2021.
[DBS+21] Cyrill Dennler, David E Bauer, Anne-Gita Scheibler, José Spirig, Tobias
Götschi, Philipp Fürnstahl, and Mazda Farshad. Augmented reality in the
operating room: A clinical feasibility study. BMC musculoskeletal disorders,
22(1):1–9, 2021.
[GBD+16] Mathieu Garon, Pierre-Olivier Boulet, Jean-Philippe Doiron, Luc Beaulieu,
and Jean-François Lalonde. Real-time high resolution 3d data on the hololens.
In 2016 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR-Adjunct), pages 189–191. IEEE, 2016.
[GJS+21] Danilo Gasques, Janet G Johnson, Tommy Sharkey, Yuanyuan Feng,
Ru Wang, Zhuoqun Robin Xu, Enrique Zavala, Yifei Zhang, Wanze Xie, Xinming Zhang, et al. Artemis: A collaborative mixed-reality system for immersive surgical telementoring. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2021.
[GPW+19] Christina Gsaxner, Antonio Pepe, Jürgen Wallner, Dieter Schmalstieg, and
Jan Egger. Markerless image-to-face registration for untethered augmented
reality in head and neck surgery. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 236–244. Springer, 2019.
[Gsa21] Christina Gsaxner. HoloLens2-Unity-ResearchModeStreamer. https:
//github.com/cgsaxner/HoloLens2-Unity-ResearchModeStreamer, 2021.
Accessed: June 12, 2022.
[GSB+20] Rocco Galati, Michele Simone, Graziana Barile, Raffaele De Luca, Carmine
Cartanese, and G Grassi. Experimental setup employed in the operating room
based on virtual and mixed reality: analysis of pros and cons in open abdomen
surgery. Journal of healthcare engineering, 2020, 2020.
[LRvA+19] Florentin Liebmann, Simon Roner, Marco von Atzigen, Davide Scaramuzza,
Reto Sutter, Jess Snedeker, Mazda Farshad, and Philipp Fürnstahl. Pedicle
screw navigation using surface digitization on the microsoft hololens. International journal of computer assisted radiology and surgery, 14(7):1157–1165, 2019.
[LSQ+16] Bingxiong Lin, Yu Sun, Xiaoning Qian, Dmitry Goldgof, Richard Gitlin, and
Yuncheng You. Video-based 3d reconstruction, laparoscope localization and
deformation recovery for abdominal minimally invasive surgery: a survey. The
International Journal of Medical Robotics and Computer Assisted Surgery,
12(2):158–178, 2016.
[LYW+18] Christoph Leuze, Grant Yang, Gordon Wetzstein, Mahendra Bhati, Amit
Etkin, and Jennifer McNab. Marker-less co-registration of mri data to a subject's head via a mixed reality device. In Proc. Intl. Soc. Mag. Reson. Med, volume 26, page 1481, 2018.
[MBN21] Adam Mylonas, Jeremy Booth, and Doan Trang Nguyen. A review of artifi-
cial intelligence applications for motion tracking in radiotherapy. Journal of
Medical Imaging and Radiation Oncology, 65(5):596–611, 2021.
[Mic22] Microsoft. MixedReality-WebRTC. https://github.com/microsoft/
MixedReality-WebRTC, 2022. Accessed: June 12, 2022.
[MPMA+22] David Männle, Jan Pohlmann, Sara Monji-Azad, Nikolas Löw, Nicole Rotter,
Jürgen Hesser, Annette Affolter, Anne Lammert, Angela Schell, Benedikt
Kramer, et al. Development of ai based soft tissue shift tracking during surgery
to optimize frozen section analysis. Laryngo-Rhino-Otologie, 101(S 02), 2022.
[MUHO19] Daisuke Mitsuno, Koichi Ueda, Yuka Hirota, and Mariko Ogino. Effective ap-
plication of mixed reality device hololens: simple manual alignment of surgical
field and holograms. Plastic and reconstructive surgery, 143(2):647–651, 2019.
[PBC21] Sebeom Park, Shokhrukh Bokijonov, and Yosoon Choi. Review of microsoft
hololens applications over the past five years. Applied Sciences, 11(16):7259,
2021.
[PIL+18] Philip Pratt, Matthew Ives, Graham Lawton, Jonathan Simmons, Nasko
Radev, Liana Spyropoulou, and Dimitri Amiras. Through the hololens™look-
ing glass: augmented reality for extremity reconstruction surgery using 3d
vascular models with perforating vessels. European radiology experimental,
2(1):1–7, 2018.
[RMLST+20] Edgar Rojas-Muñoz, Chengyuan Lin, Natalia Sanchez-Tamayo, Maria Euge-
nia Cabrera, Daniel Andersen, Voicu Popescu, Juan Antonio Barragan, Ben
Zarzaur, Patrick Murphy, Kathryn Anderson, et al. Evaluation of an aug-
mented reality platform for austere surgical telementoring: a randomized con-
trolled crossover study in cricothyroidotomies. NPJ digital medicine, 3(1):1–9,
2020.
[UBG+20] Dorin Ungureanu, Federica Bogo, Silvano Galliani, Pooja Sama, Xin Duan,
Casey Meekhof, Jan Stühmer, Thomas J Cashman, Bugra Tekin, Johannes L
Schönberger, et al. Hololens 2 research mode as a tool for computer vision
research. arXiv preprint arXiv:2008.11239, 2020.
[vdPAvG22] Kees van der Putten, Mike B Anderson, and Rutger C van Geenen. Looking
through the lens: The reality of telesurgical support with interactive technol-
ogy using microsoft’s hololens 2. Case Reports in Orthopedics, 2022, 2022.
[Wen21] Wenhao. HoloLens2-ResearchMode-Unity. https://github.com/
petergu684/HoloLens2-ResearchMode-Unity, 2021. Accessed: June
12, 2022.
[YCR+17] Jang W Yoon, Robert E Chen, Karim ReFaey, Roberto J Diaz, Ronald Reimer,
Ricardo J Komotar, Alfredo Quinones-Hinojosa, Benjamin L Brown, and
Robert E Wharen. Technical feasibility and safety of image-guided parieto-occipital ventricular catheter placement with the assistance of a wearable head-up display. The International Journal of Medical Robotics and Computer Assisted Surgery, 13(4):e1836, 2017.
[YLS+12] Michael C Yip, David G Lowe, Septimiu E Salcudean, Robert N Rohling,
and Christopher Y Nguan. Tissue tracking and registration for image-guided
surgery. IEEE transactions on medical imaging, 31(11):2169–2182, 2012.
Appendix
A

Technical drawing of the HoloLens 2 Base Mount. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.
B

Technical drawing of the HoloLens 2 mirror module. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.
C

Technical drawing of the HoloLens 2 prism module. All measurements in mm. The CAD model is available at https://github.com/DHLD-UKD/HoloLens2-Modifications.