AR-Assisted Surgical Guidance System for Ventriculostomy
Sangjun Eom* (Department of Electrical and Computer Engineering, Duke University)
Seijung Kim (Department of Computer Science, Duke University)
Shervin Rahimpour (Department of Neurosurgery, University of Utah)
Maria Gorlatova§ (Department of Electrical and Computer Engineering, Duke University)
Figure 1: The overall setup of our system using OptiTrack cameras for real-time tracking and HoloLens 2 as an AR device (left), and
the AR view of ventricle and EVD catheter holograms overlaid for surgical guidance, and a cube hologram for localization (right).
ABSTRACT
Augmented Reality (AR) is increasingly used in medical applications
for visualizing medical information. In this paper, we present an AR-
assisted surgical guidance system that aims to improve the accuracy
of catheter placement in ventriculostomy, a common neurosurgical
procedure. We build upon previous work on neurosurgical AR,
which has focused on enabling the surgeon to visualize a patient’s
ventricular anatomy, to additionally integrate surgical tool tracking
and contextual guidance. Specifically, using accurate tracking of
optical markers via an external multi-camera OptiTrack system, we
enable Microsoft HoloLens 2-based visualizations of ventricular
anatomy, catheter placement, and the distance from the catheter tip
to its target. We describe the system we developed,
present initial hologram registration results, and comment on the
next steps that will prepare our system for clinical evaluations.
Index Terms: Human-centered computing—Human computer
interaction (HCI)—Interaction paradigms—Mixed / augmented reality;
1 INTRODUCTION
The surgical field of view can often be limited when looking through
an endoscope or working through a narrow incision. Especially in
neurosurgery, a surgeon often prefers to take minimally invasive
approaches to avoid inadvertent injuries to vascular or nervous structures
due to a limited field of view [6]. In the case of bedside cranial
procedures, surgeons rely on their expertise, external anatomical
landmarks, and imaging such as computed tomography (CT) to "see
through" the skull. Among these neurosurgical procedures,
ventriculostomy involves the placement of an external ventricular drain
(EVD), a procedure that entails a twist drill craniotomy
and subsequent placement of a catheter in the lateral ventricles to
*E-mail: sangjun.eom@duke.edu
E-mail: seijung.kim@duke.edu
E-mail: shervin.rahimpour@utah.edu
§E-mail: maria.gorlatova@duke.edu
drain cerebrospinal fluid. EVD placement is one of the most
common neurosurgical procedures, performed more than 20,000 times
annually in the U.S. alone [7]. However, while this procedure relies
on external landmarks, internal ventricular anatomy can vary by pa-
tient and pathology. Hence, we developed an AR-assisted guidance
system for neurosurgery with EVD as the target application.
Our AR-assisted surgical guidance system, shown in Fig. 1, uses
an external 6-camera OptiTrack system for real-time tracking of a
collection of optical markers, attached to different objects, enabling
Microsoft HoloLens 2-based intraoperative visualization of both the
hologram of the patient's ventricles and the hologram of the inserted
EVD catheter. OptiTrack is often used for precise motion capture in
video games, movie production, and Virtual Reality (VR). Unlike
previous work, which uses a single fiducial marker to display
the hologram of the patient's ventricles alone, our system's use
of multiple optical markers allows displaying multiple holograms,
including holograms of moving objects (i.e., the EVD catheter).
We enable OptiTrack and HoloLens 2 to work together via the
transformation of their coordinate systems. Towards this, we devel-
oped a localization procedure that uses a combination of optical and
fiducial markers. In this paper, we compare the results for two vari-
ants of this procedure. We also demonstrate a complete workflow for
AR-assisted ventriculostomy, including the generation of a ventricu-
lar hologram from a patient’s CT scan, tracking and holographically
representing the catheter, and calculating the distance from the tip
of the inserted catheter to the point the neurosurgeons aim to reach
(ipsilateral foramen of Monro). We believe that our system is the
first to integrate tool tracking with image registration in EVD for
adding more contextual guidance to AR-assisted ventriculostomy.1
In the rest of this paper, we discuss the related work in Section 2,
describe our system architecture in Section 3, detail our system’s
AR-based visualizations in Section 4, share our preliminary results in
Section 5, and comment on our future work directions in Section 6.
2 RELATED WORK
Image registration, specifically overlaying a 3D model of the preop-
erative scan of the patient with the surgeon’s view, has been explored
for different types of surgeries [4,5,10]. The most common approach
1 The video of AR-assisted ventriculostomy is provided at
https://sites.duke.edu/sangjuneom/arevd/
Table 1: Accuracy results of instrument manipulation on a hologram-overlaid
target in various surgical applications.

Papers                 Areas           Devices     Results (mm)
Schneider et al. [10]  EVD             HoloLens 1  5.20 ± 2.60
Li et al. [5]          EVD             HoloLens 1  4.34 ± 1.63
Andress et al. [1]     Orthopedic      HoloLens 1  4.47 ± 2.91
Rose et al. [9]        Otolaryngology  HoloLens 1  2.47 ± 0.46
is the use of fiducial or optical markers to determine the hologram
overlay location [2]. Fiducial markers within the Vuforia
engine, which is natively integrated with the Microsoft HoloLens
software stack, have been used in several neurosurgical AR applica-
tions [10]. This method achieves a mean hologram registration error
of 1-3mm. Similar results have been demonstrated with other types
of fiducial markers, namely ARToolKit [1, 8] and AprilTags [3].
The use of a single fiducial marker limits the line of sight and
robustness of marker detection. For example, Vuforia uses feature
extraction for identifying trackable features and ARToolKit uses
contour detection for identifying corners of fiducial markers. Both
algorithms require the fiducial marker to be in the line of sight and
marker pattern to be clearly visible to the camera, thus compromising
the robustness of marker detection. These limitations affect the
accuracy of hologram alignments for image registration.
The OptiTrack system overcomes both limitations by expanding
the line of sight to a broader angle with multiple cameras, and
providing robust real-time tracking with multiple optical markers
while maintaining high accuracy of hologram alignment. Multiple
optical markers were used with a flatbed camera system in [11];
however, instead of an AR device, an external monitor was used to
display an augmented video to assist the surgeons.
Various types of surgeries involve instrument manipulation
tasks in which image registration provides guidance that improves
task accuracy. EVD placement
(which is typically a freehand procedure) with no visualization of
internal anatomy can result in inaccurate placement of the catheter,
especially for more junior surgeons [7].
hologram has been shown to improve the accuracy of catheter place-
ment. Table 1 shows the accuracy results of instrument manipulation
for various applications. In particular, the EVD application reported
a reduction in the distance between the tip of the catheter and its
intended target from 11.26mm [5] to 4-5mm [5, 10]. Hologram drift
over time or under erratic user movement, as well as a low
first-attempt success rate, remain challenges for image registration.
A mean drift of 1.41mm using Vuforia marker detection
was reported for EVD [10].
Integrating tool tracking with image registration can enable pro-
viding additional contextual guidance to the surgeons [12]. In EVD,
both the angle and the inserted length of the catheter are impor-
tant for its accurate placement, yet surgeons have a limited view
of the catheter that is inserted. A holographic representation of the
catheter can provide an important visual reference for the surgeon.
In this work, we enable catheter tracking and demonstrate catheter
tracking-based contextual guidance.
3 OVERALL ARCHITECTURE
In this section, we describe our system’s hardware specifications, our
approach to tracking different objects that comprise our integrated
setup, and the transformation of world coordinate systems between
the two core components of our system, HoloLens 2 and OptiTrack.
Hardware Setup: The proposed setup includes a surgeon-worn
HoloLens 2 and six Flex 3 OptiTrack cameras, shown in Fig. 1, each
with a 57.5-degree field of view and an 800nm long-pass
infrared (IR) filter. The OptiTrack cameras are distributed around
the table to capture a full 360-degree angular view for stable
and accurate tracking of optical markers.
Figure 2: (a) 3D printed localization marker with both fiducial
(ARToolKit) and optical markers attached. (b) H-shaped 3D printed
mount for the catheter, attached to the inner stylet. (c) Dimensions
of the phantom model with eight attached optical markers and Kocher's
points. (d) A mold filled with jello inside the phantom model.
We use the OptiTrack
cameras to track all objects including the HoloLens 2; tracked object
positions and orientations are transmitted back to the HoloLens 2
for visualization within our Unity-based AR app. HoloLens 2 only
tracks one fiducial marker (e.g., the ARToolKit-based marker shown
in Fig. 2a) to localize in order to transform its world coordinates to
OptiTrack’s coordinate system.
The HoloLens 2 and the OptiTrack system communicate
via a desktop-based server set up with the Motive Application
Programming Interface (API). The server receives marker positions
from the OptiTrack cameras 100 times per second via wired USB
connections. Tracked objects' coordinates, calculated on the
server, are transmitted to the HoloLens 2 wirelessly in real time at
the same rate. These settings achieve imperceptible latency and
minimize visual artifacts that can be associated with lower frame-rate
object tracking (spatial jitter, judder, unexpected disappearance
of holograms).
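The relay loop described above can be sketched as follows. This is a minimal illustration only: it assumes a simple UDP relay, and the `get_tracked_poses()` function is a hypothetical stand-in for the Motive API queries our server actually performs.

```python
import json
import socket
import time

TICK_HZ = 100  # the server receives OptiTrack marker positions at 100 Hz

def get_tracked_poses():
    # Stand-in for the Motive API query; returns a fixed pose for illustration.
    return {"catheter": {"pos": [0.10, 0.25, 0.40],
                         "rot": [0.0, 0.0, 0.0, 1.0]}}

def stream_poses(sock, headset_addr, n_ticks):
    # Relay each tick's tracked poses to the headset as one UDP datagram.
    for _ in range(n_ticks):
        payload = json.dumps(get_tracked_poses()).encode("utf-8")
        sock.sendto(payload, headset_addr)
        time.sleep(1.0 / TICK_HZ)
```

A per-tick datagram keeps the headset-side state simple: the AR app can always render the most recently received pose, so a single lost packet causes at most one tick of stale data.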
Tracked Objects: There are a total of four objects that the OptiTrack
system tracks in real-time. 1) HoloLens 2 has three optical
markers attached to it, and is tracked in real-time to adjust the AR
view with the surgeon's perspective.2 2) A localization marker that
allows HoloLens 2 and OptiTrack to operate in the same coordinate
space, as described in detail in the next paragraph. We designed
two localization markers: a 2D Vuforia marker, shown in Fig. 1b,
and a 3D ARToolKit marker, shown in Fig. 2a. 3) The phantom
model of the patient’s head, shown in Figs. 2c and d. We created
the phantom model to be anatomically similar to a patient’s head,
for testing, analysis, and evaluation of our system. The phantom
model has eight optical markers attached to its surface, as shown
2 This approach to headset pose tracking is similar to the methods used in
modern VR systems, such as Oculus Quest 2 and HTC Vive.
Figure 3: Transformation between two world coordinate systems
through the tracking of a localization marker. This diagram shows a
3D ARToolKit marker; we also experimented with a 2D Vuforia marker.
in Fig. 2c. The phantom model also shows two Kocher’s points,
which in EVD serve as the entry points for the EVD catheter. In
the phantom model we pre-drilled holes at these points, each with a
diameter of approximately 6mm. Inside the phantom model, there
is a 3D printed mold filled with red jello to imitate the target, as
shown in Fig. 2d. We used jello as a low-cost material that imitates
the texture of the ventricle and is capable of holding the catheter
position after the placement. 4) The EVD catheter. It is not possible
to attach optical markers directly to the catheter, which is a thin tube
with a 3.3mm diameter. Rather, we designed a 3D printed mount,
and attached four optical markers to it, each facing in a different
direction. The 3D printed mount is attached to the top of the inner
stylet, inserted inside the catheter to maintain the tube’s rigidity for
the insertion. EVD catheter mount design and attachment are shown
in Fig. 2b.
Transformation of World Coordinates: HoloLens 2 and Opti-
Track operate in different world coordinates. To be able to perform
transformations between the different coordinate systems, both the
HoloLens and the OptiTrack need to locate the same target, to be
used as a reference when calculating the differences in coordinate
values. To enable this, we created a custom rigid object (i.e., the
localization marker) that can be tracked by both OptiTrack and
HoloLens 2, from which both systems compute the same centroid
point used to calculate the differences.
The transformation of world coordinates from OptiTrack, {O},
to HoloLens 2, {H}, denoted T^H_O, is shown in Eq. 1 and Fig. 3.
The transformation between HoloLens 2 and the fiducial markers {F}
on the localization marker, T^H_F, is obtained by either the ARToolKit
or the Vuforia marker detection method. We placed the optical markers
on the localization marker to register it as a rigid body, {R}, whose
centroid is as close as possible to that of the fiducial markers. This
minimizes the transformation between the fiducial marker and the
rigid body, T^F_R. The transformation between the rigid body of the
localization marker and OptiTrack, T^R_O, is then obtained from
OptiTrack's real-time tracking.

T^H_O = T^H_F · T^F_R · T^R_O    (1)
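As a concrete illustration of Eq. 1, each transform can be represented as a 4x4 homogeneous matrix and the chain composed by matrix multiplication. The numeric values below are hypothetical, not measurements from our system: a 90-degree yaw between the HoloLens and fiducial frames, an identity for the (minimized) fiducial-to-rigid-body offset, and a pure translation between the rigid body and OptiTrack.

```python
import numpy as np

def make_transform(R, t):
    # Build a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example values (meters).
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T_H_F = make_transform(Rz90, [0.1, 0.0, 0.2])   # HoloLens <- fiducial
T_F_R = np.eye(4)                               # fiducial <- rigid body (minimized)
T_R_O = make_transform(np.eye(3), [0.0, 0.5, 0.0])  # rigid body <- OptiTrack

# Eq. 1: T_H_O maps OptiTrack coordinates into the HoloLens 2 frame.
T_H_O = T_H_F @ T_F_R @ T_R_O

p_O = np.array([0.0, 0.0, 0.0, 1.0])  # OptiTrack origin, homogeneous
p_H = T_H_O @ p_O                     # the same point in HoloLens coordinates
```

Keeping T_F_R near identity, as the centroid-matching step aims to do, means errors in fiducial detection propagate into T_H_O almost directly, which is why accurate marker detection matters for the overall registration.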
4 HOLOGRAPHIC VISUALIZATION
High accuracy of image registration is of utmost importance in
surgical applications. We take advantage of OptiTrack's high-accuracy
tracking of each marker position, as seen in Fig. 3, for
registering the ventricular hologram and tracking the EVD catheter.
In EVD applications, a patient-specific ventricular model needs
to be extracted from the patient's CT scan (since ventricular anatomy
varies by patient). We first imported the DICOM data of a patient's
CT scan into 3D Slicer, an open-source image analysis and visualization
tool commonly used in medical applications. Then, we applied a
threshold value to extract the lateral ventricle with the foramen of
Monro and the skull, as seen in Fig. 4, rendered the combined 3D
model, and exported it to load into Unity.
Figure 4: Extraction of 3D models of the skull frame and ventricle from
the patient's CT scan.
The 3D model of the skull was initially
extracted together with the ventricle to evaluate the alignment with
the phantom model; however, the ventricular hologram alone is
sufficient for the EVD. The extraction of the ventricular model from
the patient’s DICOM data is a preoperative step that will increase
surgery preparation time. In our experience, it has been taking
30 minutes on average. We believe that the extraction time can
be shortened by automating the threshold selection. Examples of
ventricular hologram overlays can be seen in Figs. 1 and 5.
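The threshold step can be illustrated on a synthetic volume. The intensity values below are rough Hounsfield-unit approximations chosen for illustration only; the actual pipeline operates on real DICOM data inside 3D Slicer, where the threshold is chosen interactively.

```python
import numpy as np

# Synthetic stand-in for a CT volume in Hounsfield units (HU).
volume = np.full((64, 64, 64), 40.0)   # brain parenchyma, roughly 20-45 HU
volume[20:40, 20:40, 20:40] = 8.0      # CSF-filled ventricle, roughly 0-15 HU
volume[:4, :, :] = 1200.0              # bone, roughly +700 HU and above

# Threshold-based segmentation: keep voxels in each structure's HU range.
ventricle_mask = (volume > 0.0) & (volume < 15.0)
skull_mask = volume > 700.0
```

Automating the choice of these threshold values, rather than selecting them manually per scan, is the main lever we see for reducing the ~30 minute preoperative extraction time.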
The phantom model of a patient’s head we use for our experiments
(shown in Fig. 2c) does not correspond to a specific patient for
whom we have the ventricular CT scan. Thus, to evaluate our image
registration, we obtained the CT scan of the phantom model and
extracted its 3D model using the same threshold-based approach. This allows
us to directly measure the misalignment between the physical and
the virtual (i.e., AR) representations of the phantom model; we
report these results in the next section.
In EVD, once the catheter is inserted through a small opening
in the skull, the tip of the catheter is no longer visible, making it
difficult for surgeons to estimate its true location. A hologram of
the inserted catheter can bring additional guidance by showing the
depth and the angle of the insertion, and the distance from the tip
of the catheter to the ventricle. Since EVD catheters have standard
dimensions, we represent the catheter with a similarly-shaped virtual
object: namely, a cylinder with 3.3mm diameter and 36cm length.
The resulting visualization is shown in Figs. 1b and 5. We are
currently working on improving the robustness of catheter tracking.
Currently, while providing useful visualizations in some conditions,
it results in spatial jitter in others. We will report on these results in
future work.
5 PRELIMINARY RESULTS
Our system is intended to be used as follows. First, during the
system initialization phase, the user performs the eye calibration for
HoloLens 2, and the connectivity between the HoloLens 2 and the
OptiTrack is established. Following the initialization, the surgeon
navigates around the localization marker until a white cube hologram
appears, as shown in Fig. 1, which indicates that the fiducial marker
is detected. Once the cube hologram is accurately positioned on the
marker, the surgeon presses a button to compute the transformation
of the world coordinate system using the detected location. After the
transformation, the holograms of the ventricle and the EVD catheter
appear in the surgeon’s view, and the surgeon performs the insertion
of the catheter through Kocher’s point. The surgeon removes the
inner stylet when catheter placement is complete.
Localization: The performance of the two marker detection meth-
ods we examined (namely, ARToolKit and Vuforia) differed signifi-
cantly. We summarize our observations in Table 2. Over 15 trials,
we observed that the ARToolKit localization marker was detected
more readily, while the Vuforia localization marker was sensitive to
user distance and orientation. Detection stability was unreliable
for ARToolKit: the hologram alignment degraded with the user's
movement. For Vuforia, once the marker was detected, a good
alignment of the cube hologram was maintained. The mean ±
standard deviation of the user navigation time was 15.11 ± 7.57
seconds for ARToolKit and 12.71 ± 7.05 seconds for Vuforia.
The average latency of data communication between OptiTrack
and HoloLens 2 was 12.32 ms. The data stream of marker positions
maintained a seamless AR experience, reflecting changes in the
tracked models with no data loss.

Table 2: Comparison between the ARToolKit and Vuforia localization
methods in detection and hologram alignment.

Detection Method          ARToolKit        Vuforia
Marker Type               3D Fiducial      2D Fiducial
Localization Time (s)     15.11 ± 7.57     12.71 ± 7.05
Registration Error (mm)   x: 1.39 ± 0.57   x: 0.96 ± 0.43
                          y: 1.17 ± 0.83   y: 1.11 ± 0.27
                          z: 1.39 ± 0.88   z: 1.44 ± 1.05
Detection Flexibility     Easy, flexible   Strict to angle, distance
Detection Stability       Unreliable       Reliable

Figure 5: Procedure of AR-assisted ventriculostomy: placing the
catheter at the foramen of Monro.
Hologram Alignment: Over 15 trials, we measured the difference
in alignment between the physical phantom model and its
holographic representation, using a digital caliper, along the x, y,
and z coordinates (in the coordinate system shown in Fig. 3). As
shown in Table 2, we observed mean (x, y, z) registration errors of
(1.39 ± 0.57, 1.17 ± 0.83, 1.39 ± 0.88)mm when using ARToolKit,
and (0.96 ± 0.43, 1.11 ± 0.27, 1.44 ± 1.05)mm when using Vuforia.
The detection stability and higher accuracy of Vuforia are key to
keeping the image registration robust and accurate. We will thus
use Vuforia as our localization method in our subsequent research.
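The per-axis mean ± standard deviation figures above are straightforward to compute from per-trial caliper measurements. The numbers below are made-up placeholders, since the raw 15-trial data is not reproduced here; the computation is what matters.

```python
import numpy as np

# Hypothetical caliper misalignments (mm): one row per trial, columns x/y/z.
errors = np.array([[1.0, 0.9, 1.5],
                   [0.8, 1.3, 1.2],
                   [1.1, 1.1, 1.6]])

mean = errors.mean(axis=0)         # per-axis mean registration error
std = errors.std(axis=0, ddof=1)   # per-axis sample standard deviation
```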
Catheter Placement: After the catheter is inserted, the surgeon
removes the inner stylet and the jello maintains the position of the
catheter placement, as shown in Fig. 5. In our system, we added
contextual guidance in the form of a text display above the ventricle
hologram, which shows the calculated distance from the tip of the
catheter to the foramen of Monro (which surgeons target in EVD). This distance
calculation is obtained in real time, using the tracked positions of the
phantom model and the catheter. We believe that our system is the
first to provide such guidance for the EVD procedure, which may
enable its use in training novice neurosurgeons in conducting this
procedure. However, the guidance we currently present is subject to
object tracking errors. We will evaluate its correctness by comparing
the distance we calculate with the physical distance between the
foramen of Monro and the tip of the catheter.
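The distance readout can be sketched as follows. This is an illustrative sketch under two assumptions stated here, not our exact implementation: the tip position is inferred from the tracked mount pose using the known 36cm catheter length (Section 4), and all coordinates are in the same (OptiTrack) frame after the Eq. 1 transformation. The numeric inputs are hypothetical.

```python
import numpy as np

CATHETER_LENGTH_M = 0.36  # standard EVD catheter length: 36 cm (Section 4)

def catheter_tip(mount_pos, unit_dir, length=CATHETER_LENGTH_M):
    # The mount sits at the top of the inner stylet, so the tip lies one
    # catheter length along the insertion direction from the mount.
    return np.asarray(mount_pos) + length * np.asarray(unit_dir)

def distance_to_target_mm(tip, target):
    # Euclidean distance from the catheter tip to the target, in millimeters.
    return 1000.0 * float(np.linalg.norm(np.asarray(tip) - np.asarray(target)))

# Hypothetical tracked values, in meters, in a shared coordinate frame.
tip = catheter_tip(mount_pos=[0.0, 0.5, 0.0], unit_dir=[0.0, -1.0, 0.0])
d_mm = distance_to_target_mm(tip, target=[0.0, 0.135, 0.0])
```

Because the tip position is derived from the mount pose rather than observed directly, any tracking error in the mount's orientation is amplified over the catheter length, which is one reason robust mount tracking matters for the correctness of this readout.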
6 CONCLUSIONS AND FUTURE WORK
Among various surgical applications of AR, neurosurgery benefits
from AR guidance to maintain minimal invasiveness while ensuring
safety. In this paper we integrate AR into a neurosurgical application
by creating an AR-assisted surgical guidance system with image
registration and tool tracking. We demonstrate the visualization
of ventricular anatomy as guidance for surgeons, and the holographic
projection of the catheter tip, for EVD catheter placement.
We are currently preparing our system to be evaluated in clinical
settings, for the accuracy of AR-assisted catheter placement and
for other potential benefits of contextual guidance that our system
will provide. We have identified the following next steps to prepare
our system for the user studies. First, we will continue to improve
the robustness of tool tracking by optimizing the number and the
distribution of markers on the 3D-printed mount, and by configuring
the number, positions, orientations, and other parameters of the Op-
tiTrack cameras. Furthermore, we will develop more user-friendly
and intuitive workflows for the surgeons. The current procedure
requires pre-operative steps of extracting a patient-specific ventricu-
lar model and intraoperative steps for performing the localization,
increasing the overall surgery preparation time. We will improve our
system by automating threshold selection of different DICOM data,
and by providing additional AR-based visual guidance for system
localization. We expect to start conducting initial user studies, which
will evaluate different elements of our system with both novice and
experienced neurosurgeons, within the next 6 months.
ACKNOWLEDGMENTS
We thank Emily Eisele for her work on this project during an NSF
REU Program at Duke University. This work was supported in
part by NSF grants CSR-1903136 and CNS-1908051, NSF CA-
REER Award IIS-2046072, IBM Faculty Award, and by an AANS
Neurosurgery Technology Development Grant.
REFERENCES
[1] S. Andress, A. Johnson, M. Unberath, A. F. Winkler, K. Yu, J. Fotouhi, S. Weidert, G. M. Osgood, and N. Navab. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. Journal of Medical Imaging, 5(2):021209, 2018.
[2] L. Chen, T. W. Day, W. Tang, and N. W. John. Recent developments and future challenges in medical mixed reality. In Proc. IEEE ISMAR, 2017.
[3] W. Gibby, S. Cvetko, A. Gibby, C. Gibby, K. Sorensen, E. G. Andrews, J. Maroon, and R. Parr. The application of augmented reality–based navigation for accurate target acquisition of deep brain sites: advances in neurosurgical guidance. Journal of Neurosurgery, 1(aop):1–7, 2021.
[4] N. Haouchine, J. Dequidt, I. Peterlik, E. Kerrien, M.-O. Berger, and S. Cotin. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery. In Proc. IEEE ISMAR, 2013.
[5] Y. Li, X. Chen, N. Wang, W. Zhang, D. Li, L. Zhang, X. Qu, W. Cheng, Y. Xu, W. Chen, et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. Journal of Neurosurgery, 131(5):1599–1606, 2018.
[6] A. Meola, F. Cutolo, M. Carbone, F. Cagnazzo, M. Ferrari, and V. Ferrari. Augmented reality in neurosurgery: a systematic review. Neurosurgical Review, 40(4):537–548, 2017.
[7] B. R. O'Neill, D. A. Velez, E. E. Braxton, D. Whiting, and M. Y. Oh. A survey of ventriculostomy and intracranial pressure monitor placement practices. Surgical Neurology, 70(3):268–273, 2008.
[8] L. Qian, A. Deguet, and P. Kazanzides. ARssist: augmented reality on a head-mounted display for the first assistant in robotic surgery. Healthcare Technology Letters, 5(5):194–200, 2018.
[9] A. S. Rose, H. Kim, H. Fuchs, and J.-M. Frahm. Development of augmented-reality applications in otolaryngology–head and neck surgery. The Laryngoscope, 129:S1–S11, 2019.
[10] M. Schneider, C. Kunz, A. Pal'a, C. R. Wirtz, F. Mathis-Ullrich, and M. Hlaváč. Augmented reality–assisted ventriculostomy. Neurosurgical Focus, 50(1):E16, 2021.
[11] S. Skyrman, M. Lai, E. Edström, G. Burström, P. Förander, R. Homan, F. Kor, R. Holthuizen, B. H. Hendriks, O. Persson, et al. Augmented reality navigation for cranial biopsy and external ventricular drain insertion. Neurosurgical Focus, 51(2):E7, 2021.
[12] T. Song, C. Yang, O. Dianat, and E. Azimi. Endodontic guided treatment using augmented reality on a head-mounted display system. Healthcare Technology Letters, 5(5):201–207, 2018.
... In AR-assisted surgery, Fig. 1: The overall setup of NeuroLens marker-based tracking has been employed in the tracking of a surgical robot arm [45] and image registration of anatomical visualization in various types of surgery (e.g., open surgery [1], neurosurgery [14], [21], [48]). Prior work that used fiducial markers reported image registration errors ranging from 2.5mm to 8.5mm [13], [47] with drifts over time [14], [47]; with optical markers, smaller registration error of 1mm to 2mm was reported [11]. Hence, we designed NeuroLens to rely on optical markers for the image registration of a ventricular hologram on a patient's skull. ...
... To improve the freehand EVD catheter placement accuracy, several researchers have developed systems that use AR to render a registered hologram of a patient's ventricles, enabling the surgeon to see the location they are targeting [11], [31], [47], [58]. A system for both cranial biopsy and EVD, reporting a sub-millimeter accuracy level, was proposed by [49]; however, the system did not use a headmounted AR device, and a needle was used instead of the catheter for placement. ...
... We created a 12cm by 12cm square 2D fiducial marker as a localization target. This localization marker is detected by the HoloLens 2 Vuforia marker detection, which reported higher accuracy in registration error when compared to other detection methods (e.g., AR-ToolKit) [11], to obtain both the position and the orientation of the marker. Four optical markers were attached to the corners of the localization marker to be tracked by the OptiTrack system. ...
Article
Full-text available
External ventricular drain (EVD) is a common, yet challenging neurosurgical procedure of placing a catheter into the brain ventricular system that requires prolonged training for surgeons to improve the catheter placement accuracy. In this paper, we introduce NeuroLens, an Augmented Reality (AR) system that provides neurosurgeons with guidance that aides them in completing an EVD catheter placement. NeuroLens builds on prior work in AR-assisted EVD to present a registered hologram of a patient's ventricles to the surgeons, and uniquely incorporates guidance on the EVD catheter's trajectory, angle of insertion, and distance to the target. The guidance is enabled by tracking the EVD catheter. We evaluate NeuroLens via a study with 33 medical students and 9 neurosurgeons, in which we analyzed participants' EVD catheter insertion accuracy and completion time, eye gaze patterns, and qualitative responses. Our study, in which NeuroLens was used to aid students and surgeons in inserting an EVD catheter into a realistic phantom model of a human head, demonstrated the potential of NeuroLens as a tool that will aid and educate novice neurosurgeons. On average, the use of NeuroLens improved the EVD placement accuracy of the year 1 students by 39.4%, of the year 2-4 students by 45.7%, and of the neurosurgeons by 16.7%. Furthermore, students who focused more on NeuroLens-provided contextual guidance achieved better results, and novice surgeons improved more than the expert surgeons with NeuroLens's assistance.
... AR and MR technologies are revolutionizing surgical precision by superimposing real-time anatomical models onto patients, enhancing spatial awareness and reducing reliance on external imaging [38], [11], [35]. These systems enable more precise interventions in various medical fields. ...
... These systems enable more precise interventions in various medical fields. Eom et al. [11] developed an AR system for ventriculostomy, improving catheter placement by projecting 3D ventricular models onto the cranial surface. Ma et al. [35] introduced an AR navigation system that compensates for motion, optimizing surgical workflows. ...
Conference Paper
Full-text available
Chinese acupuncture practitioners primarily depend on muscle memory and tactile feedback to insert needles and accurately target acupuncture points, as the current workflow lacks imaging modalities and visual aids. Consequently, new practitioners often learn through trial and error, requiring years of experience to become proficient and earn the trust of patients. Medical students face similar challenges in mastering this skill. To address these challenges, we developed an innovative system, MRUCT, that integrates ultrasonic computed tomography (UCT) with mixed reality (MR) technology to visualize acupuncture points in real-time. This system offers offline image registration and real-time guidance during needle insertion, enabling them to accurately position needles based on anatomical structures such as bones, muscles, and auto-generated reference points, with the potential for clinical implementation. In this paper, we outline the non-rigid registration methods used to reconstruct anatomical structures from UCT data, as well as the key design considerations of the MR system. We evaluated two different 3D user interface (3DUI) designs and compared the performance of our system to traditional workflows for both new practitioners and medical students. The results highlight the potential of MR to enhance therapeutic medical practices and demonstrate the effectiveness of the system we developed.
... For robotically assisted minimally invasive heart surgery, studies suggest using an endoscopic AR system [14]. ...
... Specifically, to precisely align virtual models with real-world objects [42], and ensure seamless integration between the virtual and physical domains, three distinct approaches to pose computation and registration have been described ( Table 1). The first approach is marker-based [17,22,36,[43][44][45][46], where identifiable fiducial markers are attached to the physical objects to facilitate accurate tracking. These markers serve as reference points, enabling the system to precisely determine the position and orientation of the objects. ...
Article
Full-text available
Root canal therapy (RCT) is a widely performed procedure in dentistry, with over 25 million individuals undergoing it annually. This procedure is carried out to address inflammation or infection within the root canal system of affected teeth. However, accurately aligning CT scan information with the patient's tooth has posed challenges, leading to errors in tool positioning and potential negative outcomes. To overcome these challenges, a mixed reality application is developed using an optical see‐through head‐mounted display (OST‐HMD). The application incorporates visual cues, an augmented mirror, and dynamically updated multi‐view CT slices to address depth perception issues and achieve accurate tooth localization, comprehensive canal exploration, and prevention of perforation during RCT. Through the preliminary experimental assessment, significant improvements in the accuracy of the procedure are observed. Specifically, with the system the accuracy in position was improved from 1.4 to 0.4 mm (more than a 70% gain) using an Optical Tracker (NDI) and from 2.8 to 2.4 mm using an HMD, thereby achieving submillimeter accuracy with NDI. Six participants were enrolled in the user study. The results of the study show an average displacement on the crown plane of 1.27 ± 0.83 cm, an average depth error of 0.90 ± 0.72 cm, and an average angular deviation of 1.83 ± 0.83°. Our error analysis further highlights the impact of HMD spatial localization and head motion on the registration and calibration process. Through seamless integration of CT image information with the patient's tooth, our mixed reality application assists dentists in achieving precise tool placement. This advancement in technology has the potential to elevate the quality of root canal procedures, ensuring better accuracy and enhancing overall treatment outcomes.
Article
Full-text available
Several approaches to supporting workflow execution with augmented reality systems (ARS) have emerged to address the challenge of providing context-sensitive information to users. Although there are some efforts to systematize ARSs, no taxonomy exists that addresses the workflow execution support they provide. Thus, there is a lack of a precise vocabulary and a set of possible descriptive constructs, making it challenging to compare ARSs, identify trends and research gaps within the literature, and guide new research and development efforts. To address this research gap, 272 ARSs were analyzed regarding their support for workflow execution and workflow management, and a novel taxonomy containing 14 dimensions and 97 characteristics was developed. Its perceived usefulness was evaluated via domain expert interviews, and its usefulness was validated by the development of two novel ARSs based on underrepresented patterns in the taxonomy. Additionally, cluster analysis was utilized to define three archetypes of ARSs supporting workflow execution.
Article
Purpose This paper explores the application of Apple Vision Pro in ophthalmic surgery, assessing its potential benefits in providing real-time imaging overlay, surgical guidance, and collaborative opportunities. Materials and methods The device was worn by 10 ophthalmic surgeons during eyelid malposition surgery. All surgeons performed the entire surgery while wearing the visor. At the end of the procedure, all operators had to rate the Apple Vision Pro visor according to 10 specific items and the System Usability Scale (SUS) questionnaire. Results The surgeons used the Apple Vision Pro during the entire procedure, and the results were positive, with high ratings for practicality, freedom of movement, integration into workflow, and learning. All surgeons rated the Apple Vision Pro above 85/100 in the SUS. Conclusion The incorporation of Apple Vision Pro in oculoplastic surgery offers several advantages, including improved visualization, enhanced precision, and streamlined communication among surgical teams. According to our preliminary results, Apple Vision Pro could represent a valuable tool in ophthalmic surgery, with implications for enhancing surgical techniques and advancing XR research in the surgical field.
Preprint
Full-text available
To address the challenges of providing real-time guidance during surgical procedures and fully utilizing preoperative lesion data information intraoperatively, we propose an augmented reality-based, knowledge-guided method for surgical procedures. Our method involves designing a semantic segmentation algorithm for medical images to achieve accurate lesion tissue segmentation. With the segmentation results, we localize the tissues and organs, plan the surgical approaches based on safety constraints, and generate a virtual knowledge model containing semantic information of the lesion and the preoperatively planned approaches through 3D visualization operations. To align the virtual surgical knowledge model with the patient's lesion site in the real surgical environment, we establish a portable high-precision augmented reality display system with external depth cameras. We use joint calibration, point cloud alignment, and wireless communication transmission to improve the augmented reality device's scene perception capability and achieve the precise integration of virtual and real objects in holographic projection. Our proposed method achieves the goal of a knowledge-guided surgical process. This paper presents the personalized 3D representation of medical images and demonstrates the precise application of augmented reality technology in surgery, which provides solutions to the problems of surgical planning and navigation.
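The point cloud alignment step mentioned in this abstract is commonly realized with iterative closest point (ICP): alternately match each point to its nearest neighbor in the target cloud, then solve for the best rigid update. A minimal brute-force sketch on synthetic data follows; it is illustrative only, not the authors' implementation, and a real system would add outlier rejection and an accelerated nearest-neighbor search.

```python
import numpy as np

def best_fit(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Align cloud src to dst with unknown correspondences (brute-force NN)."""
    cur = src.copy()
    for _ in range(iters):
        # Re-estimate correspondences: nearest dst point for each cur point.
        idx = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2).argmin(axis=1)
        R, t = best_fit(cur, dst[idx])
        cur = cur @ R.T + t              # apply the incremental rigid update
    return cur

# Illustrative data: a cloud and a slightly rotated/shifted copy of it.
rng = np.random.default_rng(1)
dst = rng.uniform(-1.0, 1.0, size=(80, 3))
theta = np.deg2rad(5.0)
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
src = dst @ R0.T + np.array([0.03, -0.02, 0.01])
aligned = icp(src, dst)
err = np.linalg.norm(aligned - dst, axis=1).max()   # residual misalignment
```

Because the initial misalignment here is small relative to the point spacing, the nearest-neighbor matches are mostly correct from the first iteration and the alignment converges; ICP in general only finds a local minimum and needs a reasonable initial pose.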
Article
Full-text available
Objective: The aim of this study was to evaluate the accuracy (deviation from the target or intended path) and efficacy (insertion time) of an augmented reality surgical navigation (ARSN) system for insertion of biopsy needles and external ventricular drains (EVDs), two common neurosurgical procedures that require high precision. Methods: The hybrid operating room-based ARSN system, comprising a robotic C-arm with intraoperative cone-beam CT (CBCT) and integrated video tracking of the patient and instruments using nonobtrusive adhesive optical markers, was used. A 3D-printed skull phantom with a realistic gelatinous brain model containing air-filled ventricles and 2-mm spherical biopsy targets was obtained. After initial CBCT acquisition for target registration and planning, ARSN was used for 30 cranial biopsies and 10 EVD insertions. Needle positions were verified by CBCT. Results: The mean accuracy of the biopsy needle insertions (n = 30) was 0.8 mm ± 0.43 mm. The median path length was 39 mm (range 16-104 mm) and did not correlate with accuracy (p = 0.15). The median device insertion time was 149 seconds (range 87-233 seconds). The mean accuracy for the EVD insertions (n = 10) was 2.9 mm ± 0.8 mm at the tip with a 0.7° ± 0.5° angular deviation compared with the planned path, and the median insertion time was 188 seconds (range 135-400 seconds). Conclusions: This study demonstrated that ARSN can be used for navigation of percutaneous cranial biopsies and EVDs with high accuracy and efficacy.
Article
Full-text available
OBJECTIVE Placement of a ventricular drain is one of the most common neurosurgical procedures. However, a higher rate of successful placements with this freehand procedure is desirable. The authors’ objective was to develop a compact navigational augmented reality (AR)–based tool that does not require rigid patient head fixation, to support the surgeon during the operation. METHODS Segmentation and tracking algorithms were developed. A commercially available Microsoft HoloLens AR headset in conjunction with Vuforia marker-based tracking was used to provide guidance for ventriculostomy in a custom-made 3D-printed head model. Eleven surgeons conducted a series of tests to place a total of 110 external ventricular drains under holographic guidance. The HoloLens was the sole active component; no rigid head fixation was necessary. CT was used to obtain puncture results and quantify success rates as well as precision of the suggested setup. RESULTS In the proposed setup, the system worked reliably and performed well. The reported application showed an overall ventriculostomy success rate of 68.2%. The offset from the reference trajectory as displayed in the hologram was 5.2 ± 2.6 mm (mean ± standard deviation). A subgroup conducted a second series of punctures in which results and precision improved significantly. For most participants it was their first encounter with AR headset technology and the overall feedback was positive. CONCLUSIONS To the authors’ knowledge, this is the first report on marker-based, AR-guided ventriculostomy. The results from this first application are encouraging. The authors would expect good acceptance of this compact navigation device in a supposed clinical implementation and assume a steep learning curve in the application of this technique. To achieve this translation, further development of the marker system and implementation of the new hardware generation are planned. Further testing to address visuospatial issues is needed prior to application in humans.
Article
Full-text available
Objectives/Hypothesis Augmented reality (AR) allows for the addition of transparent virtual images and video to one's view of a physical environment. Our objective was to develop a head‐worn, AR system for accurate, intraoperative localization of pathology and normal anatomic landmarks during open head and neck surgery. Study Design Face validity and case study. Methods A protocol was developed for the creation of three‐dimensional (3D) virtual models based on computed tomography scans. Using the HoloLens AR platform, a novel system of registration and tracking was developed. Accuracy was determined in relation to actual physical landmarks. A face validity study was then performed in which otolaryngologists were asked to evaluate the technology and perform a simulated surgical task using AR image guidance. A case study highlighting the potential usefulness of the technology is also presented. Results An AR system was developed for intraoperative 3D visualization and localization. The average error in measurement of accuracy was 2.47 ± 0.46 millimeters (1.99, 3.30). The face validity study supports the potential of this system to improve safety and efficiency in open head and neck surgical procedures. Conclusions An AR system for accurate localization of pathology and normal anatomic landmarks of the head and neck is feasible with current technology. A face validity study reveals the potential value of the system in intraoperative image guidance. This application of AR, among others in the field of otolaryngology–head and neck surgery, promises to improve surgical efficiency and patient safety in the operating room. Level of Evidence: 2b. Laryngoscope, 129:S1–S11, 2019
Article
Full-text available
Endodontic treatment is performed to treat the inflamed or infected root canal system of any involved teeth. It is estimated that 22.3 million endodontic procedures are performed annually in the USA. Preparing a proper access cavity before cleaning/shaping (instrumentation) of the root canal system is among the most important steps to achieve a successful treatment outcome. However, accidents such as perforation, gouging, ledge and canal transportation may occur during the procedure because of an improper or incomplete access cavity design. To reduce or prevent these errors in root canal treatments, this Letter introduces an assistive augmented reality (AR) technology on the head-mounted display (HMD). The proposed system provides audiovisual warning and correction in situ on the optical see-through HMD to assist the dentists to prepare the access cavity. Interaction of the clinician with the system is via voice commands, allowing bi-manual operation. Also, the dentist is able to review tooth radiographs during the procedure without the need to divert attention away from the patient and look at a separate monitor. Experiments are performed to evaluate the accuracy of the measurements. To the best of the authors’ knowledge, this is the first time that an HMD-based AR prototype is introduced for an endodontic procedure. © 2018 Institution of Engineering and Technology. All rights reserved.
Article
Full-text available
In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical for the outcome of the surgery. The authors introduce ARssist, an augmented reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme and (ii) real-time stereo endoscopy that is configurable to suit the FA’s hand–eye coordination when operating based on endoscopy feedback. ARssist has the potential to help the FA perform his/her task more efficiently, and hence improve the outcome of robot-assisted laparoscopic surgeries. © 2018 Institution of Engineering and Technology. All rights reserved.
Article
Full-text available
Fluoroscopic X-ray guidance is a cornerstone for percutaneous orthopaedic surgical procedures. However, two-dimensional observations of the three-dimensional anatomy suffer from the effects of projective simplification. Consequently, many X-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. In this paper, we present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multi-modality marker and simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Then, annotations on the 2D X-ray images can be rendered as virtual objects in 3D providing surgical guidance. We quantitatively evaluate the components of the proposed system, and finally, design a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired X-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects, that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed towards common orthopaedic interventions.
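Co-calibrating a see-through display to a C-arm, as this abstract describes, ultimately amounts to chaining rigid transforms through a shared marker so that an annotation expressed in C-arm coordinates can be rendered in the display frame. A minimal sketch with 4x4 homogeneous matrices follows; the frame names and numeric poses are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical calibration results (illustrative numbers only):
T_hmd_marker = make_T(rot_z(15.0), [0.10, 0.00, 0.40])   # marker pose in display frame
T_carm_marker = make_T(rot_z(-5.0), [0.00, 0.25, 0.90])  # marker pose in C-arm frame

# Chain through the shared marker: x_hmd = T_hmd_marker @ inv(T_carm_marker) @ x_carm
T_hmd_carm = T_hmd_marker @ np.linalg.inv(T_carm_marker)

p_carm = np.array([0.02, 0.30, 0.85, 1.0])  # annotated point, C-arm coords (homogeneous)
p_hmd = T_hmd_carm @ p_carm                 # same point, ready to render in the display
```

The chain is easy to sanity-check: mapping the marker's origin as seen by the C-arm through `T_hmd_carm` must reproduce the marker's position in the display frame, which is exactly the translation of `T_hmd_marker`.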
Conference Paper
Full-text available
Mixed Reality (MR) is of increasing interest within technology-driven modern medicine but is not yet used in everyday practice. This situation is changing rapidly, however, and this paper explores the emergence of MR technology and the importance of its utility within medical applications. A classification of medical MR has been obtained by applying an unbiased text mining method to a database of 1,403 relevant research papers published over the last two decades. The classification results reveal a taxonomy for the development of medical MR research during this period as well as suggesting future trends. We then use the classification to analyse the technology and applications developed in the last five years. Our objective is to aid researchers to focus on the areas where technology advancements in medical MR are most needed, as well as providing medical practitioners with a useful source of reference.
Article
Full-text available
Neuronavigation has become an essential neurosurgical tool in pursuing minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details, overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting. The aim is to review such devices. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure using the search terms “Augmented reality” and “Neurosurgery.” Eligibility assessment was performed independently by two reviewers in an unblinded standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology of treated lesions and lesion locations, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies were included during the period 1996 to September 30, 2015. The AR systems were grouped by the real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), X-ray fluoroscopy (1), and head-mounted display (1). A total of 195 lesions were treated: 75 (38.46%) were neoplastic, 77 (39.48%) neurovascular, and 1 (0.51%) hydrocephalus, and 42 (21.53%) were undetermined. Current literature confirms that AR is a reliable and versatile tool when performing minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed.
Article
OBJECTIVE The objective of this study is to quantify the navigational accuracy of an advanced augmented reality (AR)–based guidance system for neurological surgery, biopsy, and/or other minimally invasive neurological surgical procedures. METHODS Five burr holes were drilled through a plastic cranium, and 5 optical fiducials (AprilTags) printed with CT-visible ink were placed on the frontal, temporal, and parietal bones of a human skull model. Three 0.5-mm-diameter targets were mounted in the interior of the skull on nylon posts near the level of the tentorium cerebelli and the pituitary fossa. The skull was filled with ballistic gelatin to simulate brain tissue. A CT scan was taken and virtual needle tracts were annotated on the preoperative 3D workstation for the combination of 3 targets and 5 access holes (15 target tracts). The resulting annotated study was uploaded to and launched by VisAR software operating on the HoloLens 2 holographic visor by viewing an encrypted, printed QR code assigned to the study by the preoperative workstation. The DICOM images were converted to 3D holograms and registered to the skull by alignment of the holographic fiducials with the AprilTags attached to the skull. Five volunteers, familiar with VisAR, used the software/visor combination to navigate an 18-gauge needle/trocar through the series of burr holes to the target, resulting in 70 data points (15 for 4 users and 10 for 1 user). After each attempt the needle was left in the skull, supported by the ballistic gelatin, and a high-resolution CT was taken. Radial error and angle of error were determined using vector coordinates. Summary statistics were calculated individually and collectively. RESULTS The combined angle of error was 2.30° ± 1.28°. The mean radial error for users was 3.62 ± 1.71 mm. The mean target depth was 85.41 mm.
CONCLUSIONS The mean radial error and angle of error, with the associated variance measures, demonstrate that VisAR navigation may have utility for guiding a small needle to neural lesions or targets within an accuracy of 3.62 mm. These values are sufficiently accurate for the navigation of many neurological procedures such as ventriculostomy.
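The two metrics reported in this study can be computed directly from vector coordinates. A minimal sketch under common definitions follows: radial error as the perpendicular distance from the target to the actual needle axis, and angle of error as the angle between the planned and actual trajectory directions. The abstract does not give formulas, so treat this as one plausible reading, with made-up geometry.

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two direction vectors, in degrees."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def radial_error(entry, tip, target):
    """Perpendicular distance from the target to the needle axis (entry -> tip)."""
    d = (tip - entry) / np.linalg.norm(tip - entry)   # unit vector along the needle
    w = target - entry
    return np.linalg.norm(w - np.dot(w, d) * d)       # remove the along-axis component

# Illustrative geometry in millimeters, not data from the study:
entry  = np.array([0.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 85.0])   # planned target roughly 85 mm deep
tip    = np.array([2.0, 1.0, 84.0])   # simulated final needle tip position

r_err = radial_error(entry, tip, target)          # lateral miss, mm
a_err = angle_deg(target - entry, tip - entry)    # angular deviation, degrees
```

Note that for deep targets a small angular error at the entry point translates into a radial error that grows roughly linearly with depth, which is why both metrics are reported together.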
Article
OBJECTIVE The goal of this study was to explore the feasibility and accuracy of using a wearable mixed-reality holographic computer to guide external ventricular drain (EVD) insertion and thus improve on the accuracy of the classic freehand insertion method for EVD insertion. The authors also sought to provide a clinically applicable workflow demonstration. METHODS Pre- and postoperative CT scanning were performed routinely by the authors for every patient who needed EVD insertion. Hologram-guided EVD placement was prospectively applied in 15 patients between August and November 2017. During surgical planning, model reconstruction and trajectory calculation for each patient were completed using preoperative CT. By wearing a Microsoft HoloLens, the neurosurgeon was able to visualize the preoperative CT-generated holograms of the surgical plan and perform EVD placement by keeping the catheter aligned with the holographic trajectory. Fifteen patients who had undergone classic freehand EVD insertion were retrospectively included as controls. The feasibility and accuracy of the hologram-guided technique were evaluated by comparing the time required, number of passes, and target deviation for hologram-guided EVD placement with those for classic freehand EVD insertion. RESULTS Surgical planning and hologram visualization were performed in all 15 cases in which EVD insertion involved holographic guidance. No adverse events related to the hologram-guided procedures were observed. The mean ± SD additional time before the surgical part of the procedure began was 40.20 ± 10.74 minutes. The average number of passes was 1.07 ± 0.258 in the holographic guidance group, compared with 2.33 ± 0.98 in the control group (p < 0.01). The mean target deviation was 4.34 ± 1.63 mm in the holographic guidance group and 11.26 ± 4.83 mm in the control group (p < 0.01). 
CONCLUSIONS This study demonstrates the use of a head-mounted mixed-reality holographic computer to successfully perform hologram-assisted bedside EVD insertion. A full set of clinically applicable workflow images is presented to show how medical imaging data can be used by the neurosurgeon to visualize patient-specific holograms that can intuitively guide hands-on operation. The authors also provide preliminary confirmation of the feasibility and accuracy of this hologram-guided EVD insertion technique.
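Group comparisons like the ones reported in this study (number of passes and target deviation, p < 0.01) are typically made with a two-sample test. The sketch below computes Welch's t-statistic, which does not assume equal variances, on synthetic deviations shaped like the reported group means and SDs; the draws are purely illustrative, not the study's measurements, and the test choice is an assumption since the abstract does not name one.

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t-statistic and approximate degrees of freedom."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (a.size - 1) + vb ** 2 / (b.size - 1))
    return t, df

# Synthetic target deviations (mm), shaped like the reported means/SDs;
# illustrative only, NOT the study's measurements.
rng = np.random.default_rng(7)
hologram = rng.normal(4.3, 1.6, size=15)   # hologram-guided group
freehand = rng.normal(11.3, 4.8, size=15)  # freehand control group

t, df = welch_t(hologram, freehand)
# With draws like these, |t| typically lands far beyond the ~2.1 critical
# value for df around 17 at alpha = 0.05, consistent with a significant
# difference between the groups.
```

In practice one would report the p-value from the t-distribution with the Welch degrees of freedom (e.g. via `scipy.stats.ttest_ind(..., equal_var=False)`), or use a nonparametric test if normality is doubtful.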