3:54 PM Abstract No. 29
Augmented reality guidance for cerebral angiography
G. Loeb, S. Sadri, A. Grinshpoon, J. Carroll, C. Cooper, C. Elvezio, S. Mutasa, G. Mandigo, S. Lavine, J. Weintraub, A. Einstein, S. Feiner, P. Meyers
Columbia University, New York, NY; Columbia University/New York Presbyterian Hospital, New York, NY
Purpose: Augmented reality (AR) holds great potential for IR by integrating virtual 3D anatomic models into the real world. In this pilot study, we developed an AR guidance system for cerebral angiography, evaluated its impact on radiation, contrast, and fluoroscopy time, and assessed physician response.
Materials: In this prospective study, 9 patients with CT or MR imaging of the aorta underwent diagnostic neuroangiography with AR guidance from June to August 2017. Before each procedure, segmentation software was used to create a 3D model of the patient's aortic arch, including the carotid and vertebral arteries. The model was deployed to HoloLens (Microsoft, Redmond, WA), a stereoscopic optical see-through AR head-worn display. Using the AR user interface we developed, physicians manipulated a virtual 3D model intraoperatively via voice commands, gaze, and gestures while maintaining sterility. In total, 6 physicians completed 14 postoperative questionnaires assessing the system. 18 case-matched retrospective controls were identified by screening for age, aorta imaging, cone-beam CT, indication, physician, and OR.
Results: All 9 patients underwent diagnostic neuroangiography per standard protocol with AR guidance without complication. Mean kerma-area product 3150 μGy·m² (SD 2284), skin-absorbed dose 283 mGy (SD 192), contrast volume 119 mL (SD 35), and fluoroscopy time 10 min (SD 4) were below reference values for diagnostic neuroangiography. There was a non-significant reduction in kerma-area product, skin-absorbed dose, and fluoroscopy time compared to case-matched controls. 100% of questionnaire responses indicated physicians would recommend the AR system and felt it neither interfered with safety nor increased radiation, contrast, or procedure time. 79% indicated it helped them navigate through vasculature. 93% indicated it was useful to see the 3D model in AR.
Conclusions: AR guidance for neuroangiography produced clinical outcomes, fluoroscopy times, and radiation doses comparable to those of conventional neuroangiography in matched controls. Results suggest that this technology is feasible and safe to use intraoperatively, offering an opportunity to enhance navigation through patient anatomy.
4:03 PM Abstract No. 30
Augmented virtual reality assisted treatment
planning for splenic artery aneurysms: a pilot study
Z. Devcic, I. Idakoji, A. Kesselman, R. Shah, M. AbdelRazek, N. Kothary
Stanford University Medical Center, Stanford, CA
Purpose: To evaluate the utility of augmented virtual reality (VR) in preprocedural planning for endovascular repair of splenic artery aneurysms (SAA) as compared to standard volume-rendering (SR).
Materials: Preprocedural computed tomographic angiography (CTA) images of 14 patients with 17 SAA who had undergone endovascular repair were reconstructed using True 3D (EchoPixel, Inc., CA), a VR visualization software system. AquariusNet (TeraRecon, CA) was used for standard volume-rendering image interpretation. Three radiologists independently evaluated the number of inflow and outflow arteries using both VR and SR. Procedural angiographic images served as the gold standard. Improvement in operator confidence of VR over SR was measured on a four-point scale (1 = no change, 4 = significant). Clinical utility was objectively measured by VR's ability to accurately identify all inflow and outflow arteries associated with the SAA, and subjectively by operator confidence.
Results: There were 17 inflow and 22 outflow arteries associated with the SAA. The overall sensitivity, accuracy, and positive predictive value for VR were similar to those of SR (91.3%, 89.7%, 84% and 88.9%, 88.9%, 84.6%, respectively; p = 0.14). However, the ability to view and manipulate images in true three dimensions using VR markedly improved operator confidence, with 93% receiving a score of at least 3 (71% = 3, 21% = 4).
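The per-artery detection figures above follow directly from true-positive, false-positive, and false-negative counts. A minimal sketch of the two formulas (the counts below are hypothetical, chosen only to illustrate the arithmetic, not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity and positive predictive value from per-artery counts.

    tp: arteries correctly identified (true positives)
    fp: arteries reported but absent on angiography (false positives)
    fn: arteries missed (false negatives)
    """
    sensitivity = tp / (tp + fn)   # fraction of real arteries that were found
    ppv = tp / (tp + fp)           # fraction of reported arteries that are real
    return sensitivity, ppv

# Hypothetical counts: 21 found, 2 missed, 4 spurious
sens, ppv = detection_metrics(tp=21, fp=4, fn=2)
print(f"sensitivity {sens:.1%}, PPV {ppv:.1%}")
```

Accuracy additionally requires a true-negative count, which is why it is harder to define for an open-ended artery-counting task than sensitivity or PPV.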
Conclusions: SAA have complex anatomy necessitating meticulous preprocedure planning. VR allows holographic visualization of images as if they were real physical objects, providing information critical for endovascular repair of SAA and thus significantly increasing operator confidence.
4:12 PM Abstract No. 31
Efficacy of the preoperative planning for TEVAR
using the greater curvature measurement with
virtual stentgraft image
S. Iwakoshi, S. Ichihashi, S. Sakaguchi, K. Kichikawa
Nara Medical University, Kashihara, Nara, Japan
Purpose: To assess the accuracy of preoperative planning for
TEVAR using the greater curvature measurement with virtual
stentgraft image.
Materials: From January 2012 to December 2016, patients treated at our institution were retrospectively analyzed. Patients who were treated with more than two devices, were treated for aortic dissection, or did not have proper preoperative and postoperative CT data were excluded. From the preoperative CT data, virtual stentgraft images based on the center lumen line (CL) measurement, the greater curvature (GC) measurement, and the smaller curvature (SC) measurement were created using SYNAPSE VINCENT software. These virtual stentgraft images were superimposed on the postoperative CT to measure the misalignment between the virtual stentgraft images and the actual stentgraft position. A statistical comparison using Wilcoxon's signed rank sum test was performed. In addition, the actual stentgraft lengths were measured based on CL from postoperative CT data and compared to their original lengths.
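The CL, GC, and SC measurements are all arc lengths along sampled 3D curves extracted from CT. As an illustration of the underlying computation (a sketch of the general technique, not the SYNAPSE VINCENT implementation), the length of a sampled curve is the sum of distances between consecutive sample points:

```python
import numpy as np

def polyline_length(points):
    """Arc length of a sampled 3D curve, e.g. a center lumen line or
    greater-curvature line from CT, given as an (N, 3) sequence of points."""
    pts = np.asarray(points, dtype=float)
    segments = np.diff(pts, axis=0)                    # vectors between consecutive samples
    return float(np.linalg.norm(segments, axis=1).sum())

# Toy example: three unit-length segments give total length 3.0
print(polyline_length([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]))
```

Because the greater curvature of an arched aorta is longer than the center lumen line, which is in turn longer than the smaller curvature, the choice of measurement line directly affects predicted device position.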
Results: A total of 35 cases were analyzed. Twenty-six were men. The average age of the patients was 72.4 ± 13.0 years. Aneurysms were located at the descending aorta (n = 11) and the aortic arch (n = 24). The gap between the virtual stentgraft based on SC, CL,
JVIR Scientific Sessions, Sunday
Conference Paper
Many AR and VR task domains involve manipulating virtual objects; for example, to perform 3D geometric transformations. These operations are typically accomplished with tracked hands or hand-held controllers. However, there are some activities in which the user's hands are already busy with another task, requiring the user to temporarily stop what they are doing to perform the second task, while also taking time to disengage and reengage with the original task (e.g., putting down and picking up tools). To avoid the need to overload the user's hands this way in an AR system for guiding a physician performing a surgical procedure, we developed a hands-free approach to performing 3D transformations on patient-specific virtual organ models. Our approach uses small head motions to accomplish first-order and zero-order control, in conjunction with voice commands to establish the type of transformation. To show the effectiveness of this approach for translating, scaling, and rotating 3D virtual models, we conducted a within-subject study comparing the hands-free approach with one based on conventional manual techniques, both running on a Microsoft HoloLens and using the same voice commands to specify transformation type. Independent of any additional time to transition between tasks, users were significantly faster overall using the hands-free approach, significantly faster for hands-free translation and scaling, and faster (although not significantly) for hands-free rotation.
Conference Paper
During a vascular intervention (a type of minimally invasive surgical procedure), physicians maneuver catheters and wires through a patient's blood vessels to reach a desired location in the body. Since the relevant anatomy is typically not directly visible in these procedures, virtual reality and augmented reality systems have been developed to assist in 3D navigation. Because both of a physician's hands may already be occupied, we developed an augmented reality system supporting hands-free interaction techniques that use voice and head tracking to enable the physician to interact with 3D virtual content on a head-worn display while leaving both hands available intraoperatively. We demonstrate how a virtual 3D anatomical model can be rotated and scaled using small head rotations through first-order (rate) control, and can be rigidly coupled to the head for combined translation and rotation through zero-order control. This enables easy manipulation of a model while it stays close to the center of the physician's field of view.
Conference Paper
Vascular interventions are minimally invasive surgical procedures in which a physician navigates a catheter through a patient's vasculature to a desired destination in the patient's body. Since perception of relevant patient anatomy is limited in procedures of this sort, virtual reality and augmented reality systems have been developed to assist in 3D navigation. These systems often require user interaction, yet both of the physician's hands may already be busy performing the procedure. To address this need, we demonstrate hands-free interaction techniques that use voice and head tracking to allow the physician to interact with 3D virtual content on a head-worn display while making both hands available intraoperatively. Our approach supports rotation and scaling of 3D anatomical models that appear to reside in the surrounding environment through small head rotations using first-order control, and rigid body transformation of those models using zero-order control. This allows the physician to easily manipulate a model while it stays close to the center of their field of view.
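The papers above describe two head-based control modes: first-order (rate) control, in which head deflection from a neutral pose sets the model's angular velocity, and zero-order control, in which the model is rigidly coupled to the head pose. A minimal single-axis sketch of the distinction (the dead-zone and gain values are assumptions for illustration, not the published parameters):

```python
DEAD_ZONE_DEG = 2.0   # ignore small head jitter (assumed value)
GAIN = 0.5            # model deg/s per degree of head deflection (assumed value)

def first_order_step(model_yaw, head_yaw, neutral_yaw, dt):
    """Rate control: deflection from the neutral pose drives angular velocity."""
    deflection = head_yaw - neutral_yaw
    if abs(deflection) < DEAD_ZONE_DEG:
        return model_yaw                      # inside the dead zone: model holds still
    return model_yaw + GAIN * deflection * dt  # integrate velocity over the frame

def zero_order_step(head_yaw, neutral_yaw, engaged_model_yaw):
    """Position control: while engaged, the model follows the head one-to-one."""
    return engaged_model_yaw + (head_yaw - neutral_yaw)
```

With first-order control the model keeps rotating for as long as the head stays deflected, so large rotations need only a small, sustained head motion; zero-order coupling instead maps head motion one-to-one, which suits rigidly carrying the model for combined translation and rotation.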