Combining intra-operative ultrasound brain shift correction and augmented
reality visualizations: a pilot study of 8 cases.
Ian J. Gerard (ian.gerard@mail.mcgill.ca, corresponding author)1, Marta Kersten-Oertel2, Simon Drouin1, Jeffery A.
Hall3, Kevin Petrecca3, Dante De Nigris4, Daniel A. Di Giovanni3, Tal Arbel4, D. Louis
Collins1,3,4
1Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, McGill University, 3801
Blvd. Robert-Bourassa, Montreal, QC, H3A 2B4, Canada
2PERFORM Centre, Department of Computer Science and Software Engineering, Concordia University, 7200
Sherbrooke St.W, Montreal, QC, H4B 1R6, Canada
3Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, McGill University,
3801 Blvd. Robert-Bourassa, Montreal, QC, H3A 2B4, Canada
4Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, 3480
Blvd. Robert-Bourassa, Montreal, QC, H3A 2A7, Canada
Abstract
Purpose: We present our work investigating the feasibility of combining intraoperative
ultrasound for brain shift correction and augmented reality visualization for intraoperative
interpretation of patient-specific models in image-guided neurosurgery of brain tumors.
Methods: We combine two imaging technologies for image-guided brain tumor neurosurgery.
Throughout surgical interventions, augmented reality was used to assess different surgical
strategies using three-dimensional patient-specific models of the patient’s cortex, vasculature,
and lesion. Ultrasound imaging was acquired intra-operatively and preoperative images and
models were registered to the intraoperative data. The quality and reliability of the augmented
reality views were evaluated with both qualitative and quantitative metrics.
Results: A pilot study of 8 patients demonstrates the feasibility of combining these two
technologies and their complementary features. In each case, the augmented reality visualizations
enabled the surgeon to accurately visualize the anatomy and pathology of interest for an
extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and
augmented reality were reduced in all cases.
Conclusion: These results demonstrate the potential of combining ultrasound-based registration
with augmented reality into a useful tool that helps neurosurgeons improve intraoperative
patient-specific planning by improving the understanding of complex three-dimensional medical
imaging data and by prolonging the reliable use of image-guided neurosurgery.
Keywords: Image-guided neurosurgery, brain shift, augmented reality, registration, brain tumor
1 Introduction
Each year thousands of Canadians undergo neurosurgery for resections of lesions in close
proximity to areas of the brain that are critical to movement, vision, sensation, or language.
There is strong support in the literature demonstrating significantly increased survival benefit
with complete resection of primary and secondary brain tumors [1], creating competing
constraints that must be balanced during surgery for each patient: achieving maximal resection of
the lesions while causing minimal neurological deficit.
Since the introduction of the first intraoperative frameless stereotactic navigation device by
Roberts et al. in 1986 [2], image-guided neurosurgery (IGNS), or “neuronavigation”, has become
an essential tool for many neurosurgical procedures due to its ability to minimize surgical trauma
by allowing for the precise localization of surgical targets. For many of these interventions,
preoperative planning is done on these IGNS systems that provide the surgeon with tools to
visualize, interpret, and navigate through patient specific volumes of anatomical, vascular and
functional information while investigating their inter-relationships. Over the past 30 years, the
growth of this technology has enabled application to increasingly complicated interventions
including the surgical treatment of malignant tumors, neurovascular disorders, epilepsy and deep
brain stimulation. The integration of preoperative image information into a comprehensive
patient-specific model enables surgeons to preoperatively evaluate the risks involved and define
the most appropriate surgical strategy. Perhaps more importantly, such systems enable surgery of
previously inoperable cases by facilitating safe surgical corridors through IGNS-identified non-
critical areas.
For intraoperative use, IGNS systems must relate the physical location of a patient with the
preoperative models by means of a transformation that defines a patient-to-image
mapping (Figure 1). By tracking the patient and a set of specialized surgical tools, this mapping
allows a surgeon to point to a specific location on the patient and see the corresponding anatomy
in the pre-operative images and the patient specific models. However, throughout the
intervention, hardware movement, an imperfect patient-image mapping, and movement of brain
tissue during surgery invalidate the patient-to-image mapping [3]. These sources of inaccuracy,
collectively described as ‘brain shift’, reduce the effectiveness of using preoperative patient-
specific models intraoperatively. Unsurprisingly, most surgeons use IGNS systems to plan an
approach to a surgical target but understandably no longer rely on the system throughout the
entirety of an operation when accuracy is compromised and medical image interpretation is
encumbered. Recent advances in IGNS technology have resulted in intraoperative imaging and
registration techniques [4, 5] to help update preoperative images and maintain accuracy.
Advances in visualization have introduced augmented reality techniques [6-9] at different time-
points and for different tasks to help improve the understanding and visualization of complex
medical imaging data and models and to help with intraoperative planning. We present a pilot
study of 8 cases combining the use of intra-operative ultrasound (iUS), for brain shift correction,
and intraoperative augmented reality (AR) visualization with traditional IGNS tools to improve
intraoperative accuracy and interpretation of patient-specific neurosurgical models in the context
of IGNS of tumors. While other groups have investigated iUS and AR independently, there are
very few reports [10-14] of using both technologies together to overcome the visualization issues
associated with iUS and the accuracy issues associated with AR. The goal of this pilot study is to
investigate the feasibility of combining iUS-based brain shift correction and augmented reality
visualizations to improve both the accuracy and the interpretation of complex intra-operative data.
Our work aims to address some of the limitations of previous work by being a prospective instead
of a retrospective clinical pilot study [12], by focusing on evaluation in clinical scenarios as
opposed to phantoms or animal cadavers [11], by using high-quality MRI images for
segmentations instead of difficult-to-interpret US images [12, 14], by using patient-specific
data instead of atlas-based data for greater registration accuracy [14], and finally, by using a fast
US-MRI registration that allows for an efficient workflow incorporating AR in the OR that
consumes less time and provides more information than previous reports [12, 13].
Figure 1: The patient’s head is positioned, immobilized and a tracked reference frame is
attached. The patient’s preoperative images and physical space are registered by using 8
corresponding landmarks on the head and face to create a correspondence between the two
spaces.
1.1 iUS in Neurosurgery
Intra-operative imaging has seen a wide range of use in neurosurgery over the last two decades.
Its main benefit is the ability to visualize the up-to-date anatomy of a patient during an
intervention. iUS has been proposed and used as an alternative to intra-operative MRI due to its
ease of use, low-cost and wide-spread availability [5]. iUS is relatively inexpensive and non-
invasive and does not require many changes to the operating room or surgical procedures.
However, its main challenges are associated with relating information to preoperative images,
which are generally of a different modality. The alignment of iUS to MRI images is a
challenging task due to the widely different nature and quality of the two modalities. While voxel
intensity of both modalities is directly dependent on tissue type, US has an additional
dependence on probe orientation and depth that can lead to intensity non-uniformity due to the
presence of acoustic impedance transitions. Preoperative MR images allow for identification of
tissue types, anatomical structures and a variety of pathologies such as cancerous tumors. iUS
images are generally limited to displaying lesion tissue with an associated uncertainty regarding
its boundary, along with a few coarsely depicted structure boundaries. Early reports using iUS in
neurosurgery, such as in Bucholz et al. [15], show success with this technique using brightness
mode (B-mode) information. B-mode US has been used to obtain anatomical information [4, 5,
16] while Doppler US yields flow information for cerebral vasculature [17, 18]. The interested
reader is directed to [3] for a history and overview of iUS in neurosurgery in the context of brain
shift correction.
1.2 Augmented Reality in Neurosurgery
Augmented reality visualizations have become increasingly popular in medical research to help
understand and visualize complex medical imaging data. Augmented reality is defined as “the
merging of virtual objects with the real world (i.e. the surgical field of view)” [19]. The
motivation for these visualizations comes from the desire to merge pre-operative images, models,
and plans with the real physical space of the patient in a comprehensible fashion. These
augmented views have been proposed to better understand the topology and inter-relationships of
structures of interest that are not directly visible in the surgical field of view. AR has been
explored for neurosurgery in the context of skull base surgery [20], trans-sphenoidal neurosurgery
(i.e. for pituitary tumors) [21], microscope-assisted neurosurgery [22], endoscopic neurosurgery
[23, 24], neurovascular surgery [25-27], and primary brain tumor resection planning [28, 29].
This list is far from comprehensive, so the interested reader is referred to Kersten-Oertel et al. [19]
and Meola et al. [13] for detailed reviews of the use of AR in IGNS. In all recently published studies,
AR visualizations are described as enhancing the minimal invasiveness of a procedure
through more tailored, patient-specific approaches. A recent study by Kersten-Oertel et al. [30]
evaluating the benefit of AR for specific neurosurgical tasks demonstrated that a major pitfall of
these types of visualization is the lack of an accurate overlay throughout an intervention, making
them useful only during its early stages. Recent literature has tried to address this issue
through interactive overlay realignment [31] or through manipulation of visualization parameters
[32, 33] with some success. In this work, we aim to address this major issue with iUS imaging.
2 Materials and Methods
2.1 Ethics
The MNI/H Ethics Board approved the study and all patients signed informed consent prior to
data collection.
2.2 System Description
All data was collected and analyzed on a custom-built prototype IGNS system, the Intraoperative
Brain Imaging System (IBIS) [34]. This system has previously been described in [25, 34, 35] for
use with iUS and AR as independent technologies. The Linux workstation is equipped with an
Intel Core i7-3820 @ 3.60 GHz x8 processor with 32 GB RAM, a GeForce GTX 670 graphics
card and Conexant cx23800 video capture card. Tracking is performed using a Polaris N4
infrared optical system (Northern Digital, Waterloo, Canada). The Polaris infrared camera uses
stereo triangulation to locate the passive reflective spheres on both the reference and pointing
tools with an accuracy of 0.5 mm [36]. The US scanner, an HDI 5000 (ATL/Philips, Bothell,
WA, USA) equipped with a 2D P7-4 MHz phased array transducer, enables intraoperative
imaging during the surgical intervention. Video capture of the live surgical scene is achieved
with a Sony HDR XR150 camera. Both the camera and US system transmit images using an S-
video cable to the Linux workstation at 30 frames/second. The camera and US transducer probe
are outfitted with a spatial tracking device with attached passive reflective spheres (Traxtal
Technologies Inc., Toronto, Canada) and are tracked in the surgical environment. Figure 2 shows
the main components of the iUS-AR IGNS system.
2.3 Patient-Specific Neurosurgical Models
The patient-specific neurosurgical models refer to all preoperative data – images, surfaces,
segmented anatomical structures – for an individual patient. All patients involved in this study
followed a basic tumor imaging protocol at the Montreal Neurological Institute and Hospital
(MNI/H) with a gadolinium-enhanced T1-weighted MRI obtained on a 1.5 T MRI scanner
(Ingenia, Philips Medical Systems). All images were processed in a custom image processing
pipeline as follows [37]: First, the MRI is denoised, after estimating the standard deviation of the
MRI Rician noise [38]. Next, intensity non-uniformity correction and normalization is done by
estimating the non-uniformity field [39], followed by histogram matching with a reference image
to normalize the intensities (Figure 3-A). Within this pipeline, the FACE method [40] is used to
obtain a three dimensional model of the patient’s cortex (Figure 3-B). After processing, the
tumor is manually segmented using ITK-Snap [41] and a vessel model is created using a
combination of a semi-automatic intensity thresholding segmentation, also in ITK-Snap, and a
Frangi Vesselness filter [42] (Figure 3-C,D). The processing is done on a local computing cluster
at the MNI and the combined time for the processing pipeline and segmentations is on the order
of 2 hours. A model of the skin surface was also generated using ray-tracing from the processed
images in IBIS using a transfer function to control the transparency of the volume so all
segmented structures can be viewed. The processed images and patient-specific models are then
imported into IBIS (Figure 3-E).
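The IBIS/MNI pipeline itself is built from the cited methods [38-42]; purely for orientation, a comparable chain can be sketched with off-the-shelf tools. In the sketch below the file paths and parameters are placeholders, curvature-flow smoothing stands in for the nonlocal-means denoising of [38], N4 bias correction stands in for the non-uniformity estimation of [39], and a simple threshold on the Frangi response stands in for the interactive ITK-Snap refinement.

```python
import SimpleITK as sitk
import numpy as np
from skimage.filters import frangi

# Placeholder inputs; the actual pipeline uses the methods of [38-42].
t1 = sitk.Cast(sitk.ReadImage("patient_t1_gd.nii.gz"), sitk.sitkFloat32)
reference = sitk.Cast(sitk.ReadImage("reference_t1.nii.gz"), sitk.sitkFloat32)

# 1) Denoising (stand-in for the optimized nonlocal-means filter of [38]).
denoised = sitk.CurvatureFlow(t1, timeStep=0.125, numberOfIterations=5)

# 2) Intensity non-uniformity correction (N4 as a stand-in for [39]).
mask = sitk.OtsuThreshold(denoised, 0, 1, 200)
corrected = sitk.N4BiasFieldCorrection(denoised, mask)

# 3) Intensity normalization by histogram matching with a reference image.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(1024)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
normalized = matcher.Execute(corrected, reference)

# 4) Vessel enhancement: Frangi vesselness [42], thresholded to a rough binary
#    vessel mask (slow on full volumes; the real model is refined in ITK-Snap).
vol = sitk.GetArrayFromImage(normalized)
vesselness = frangi(vol)
vessel_mask = (vesselness > 0.5 * vesselness.max()).astype(np.uint8)

out = sitk.GetImageFromArray(vessel_mask)
out.CopyInformation(normalized)
sitk.WriteImage(out, "vessel_mask.nii.gz")
```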
Figure 2: The different components in an iUS-AR IGNS intervention and their relationship with
the surgical and neuronavigation setup. Once the US and live video images are captured from the
external devices, they are imported into the neuronavigation system and all US and AR
visualizations are displayed on the neuronavigation monitor. (Adapted with permission from
[34])
2.4 Tracked Camera Calibration and Creating the Augmented Reality View
To create augmented reality visualizations from images captured by a tracked camera, prior
calibration of the camera-tracker apparatus must be performed. The intrinsic and extrinsic
calibration parameters are determined simultaneously. We determine the intrinsic calibration of
the camera using a printed checkerboard pattern fixed on a flat surface with a rigidly attached
tracker tool using the method described in [34]. The different components and transformation
matrix relationships are shown in Figure 3-F. Multiple images are taken while displacing the
pattern in the camera's field of view. The intrinsic calibration matrix, K, is obtained through
automatic detection of the checkerboard corners and feeding the coordinates and tracked 3D
position through an implementation of Zhang's method [43]. This also creates a mapping
between the space of the calibration grid and the optical centre of the tracked camera (T_C). The
extrinsic calibration matrix (T_E) is estimated by minimizing the standard deviation of grid points
transformed by the right side of the following equation

T_G = T_R T_E T_C    (1)

where T_R represents the rigid transformation matrix between the tracking reference and the tracked
camera, and T_G is the transformation between the checkerboard tool and its attached tracker. For a
more detailed discussion of this procedure the interested reader is directed to Drouin et al. [44].
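As a rough, hedged illustration of this two-step procedure (and of Eq. (1)), the sketch below estimates the intrinsics with OpenCV's implementation of Zhang's method and then searches for an extrinsic matrix T_E that makes the checkerboard corners, pushed through the tracked transform chain, coincide across frames. It is not the IBIS plugin described in [34, 44]; the input lists and the transform direction conventions are assumptions made only for this sketch.

```python
import numpy as np
import cv2
from scipy.optimize import minimize

def to_matrix(rvec, tvec):
    """4x4 rigid transform from a Rodrigues rotation vector and a translation vector."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

def calibrate_tracked_camera(grid_xyz, corners_per_frame, image_size, T_R_per_frame):
    """Estimate intrinsics K and the extrinsic matrix T_E of Eq. (1).

    grid_xyz         : Nx3 checkerboard corner coordinates in the grid's own frame (mm)
    corners_per_frame: list of detected corner arrays (cv2.findChessboardCorners output)
    T_R_per_frame    : list of 4x4 tracked transforms between the reference and the
                       camera tracker for the same frames (direction is an assumed convention)
    """
    # Intrinsics via Zhang's method; rvecs/tvecs give T_C (grid -> optical centre) per frame.
    objpts = [grid_xyz.astype(np.float32)] * len(corners_per_frame)
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(objpts, corners_per_frame,
                                                   image_size, None, None)
    T_C = [to_matrix(r, t) for r, t in zip(rvecs, tvecs)]

    grid_h = np.c_[grid_xyz, np.ones(len(grid_xyz))].T   # homogeneous 4xN corners

    def spread(params):
        # Candidate T_E; if it is correct, T_R @ T_E @ T_C maps every frame's corners
        # to the same physical points, so their variance across frames should vanish.
        T_E = to_matrix(params[:3], params[3:])
        mapped = np.stack([(Tr @ T_E @ Tc @ grid_h)[:3]
                           for Tr, Tc in zip(T_R_per_frame, T_C)])
        return mapped.var(axis=0).sum()

    res = minimize(spread, np.zeros(6), method="Powell")
    return K, dist, to_matrix(res.x[:3], res.x[3:])
```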
The calibration error is measured using a leave-one-out cross-validation procedure with the
calibration images to obtain the reprojection error. This is an estimate of the reprojection error
that is expected if the patient is perfectly registered with the system in the OR; however, this
error is compounded with other registration errors that can lead to larger
discrepancies. For the cases in this pilot study, the average calibration error was on the order of
0.89 mm (range 0.60 – 1.12 mm). Once the camera has been calibrated and is being tracked, the
AR view is created by merging virtual objects, such as the segmented tumor, segmented blood
vessels, segmented cortex and iUS images, with the live view captured from the video camera.
To create a perception such that the tumor and other virtual objects appear under the visible
surface of the patient, edges are extracted and retained from the live camera view. Furthermore,
the transparency of the live image is selectively modulated such that the image is more
transparent around the tumor and more opaque elsewhere (Figure 4-D). For more details on these
visualization procedures, the reader is directed to [33, 45].
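The compositing rule can be illustrated with a few lines of OpenCV. The following is a simplified stand-in for the IBIS rendering described in [33, 45]; the Gaussian transparency falloff, the Canny thresholds, and the assumption that the renderer supplies a projected tumor mask are choices made only for this sketch.

```python
import numpy as np
import cv2

def compose_ar_view(live_bgr, virtual_bgr, tumor_mask, sigma_px=120.0):
    """Blend a rendered virtual scene under the live camera image.

    live_bgr    : HxWx3 uint8 camera frame
    virtual_bgr : HxWx3 uint8 rendering of the virtual objects (tumor, vessels, ...)
    tumor_mask  : HxW binary mask of the projected tumor (assumed given by the renderer)
    The live image is made more transparent near the tumor and its strong edges are
    kept on top, a simplified version of the modulation described in [33, 45].
    """
    # Distance (in pixels) from the tumor region controls how opaque the live image stays.
    dist = cv2.distanceTransform((tumor_mask == 0).astype(np.uint8), cv2.DIST_L2, 5)
    live_alpha = 1.0 - np.exp(-(dist ** 2) / (2.0 * sigma_px ** 2))   # 0 near tumor, -> 1 far away
    live_alpha = live_alpha[..., None]

    blended = live_alpha * live_bgr + (1.0 - live_alpha) * virtual_bgr

    # Keep strong edges of the live view on top to preserve depth cues.
    edges = cv2.Canny(cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    blended[edges > 0] = live_bgr[edges > 0]
    return blended.astype(np.uint8)
```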
Figure 3: Flowchart showing the preoperative steps for creating a patient specific model and for
iUS probe calibration and camera calibration for augmented reality. A-Preoperative MRI image
after denoising, intensity non-uniformity correction, and intensity normalization. B-The cortical
surface is extracted from the MRI image using the FACE algorithm [40]. C-Vessels are extracted
using an ITK-Snap thresholding segmentation and a Frangi Vesselness filter [42]. D-The tumor is
manually segmented using ITK-Snap. E-All preoperative models are combined into a patient-
specific model that is imported into the IGNS system. F-Calibration is performed by serial
imaging of a checkerboard pattern with an attached tracker in different positions in the tracked
camera's field of view, allowing for simultaneous extraction of the intrinsic calibration matrix (K)
and the extrinsic calibration matrix (T_E). T_R, T_C, and T_E are the different transformation
matrices involved in the calibration [34]. G-Tracked US calibration is performed using an N-wire
calibration phantom and a custom IGNS calibration plugin allowing for aligning of the virtual
N-shaped wires with the intersecting US images [34].
2.5 Tracked US Probe Calibration
When using US guidance for neurosurgery, a correspondence between the physical location of
the images and the physical space of the patient must be established. The accuracy of these
procedures is closely related to that of device tracking, which is on the order of 0.5 - 1.0 mm for
optical tracking systems [36], but is often categorized separately since specific phantoms are
needed to perform the calibration. Among the various calibration techniques, the N-wire
phantoms have been most widely accepted in the literature [46] due to their robustness,
simplicity, and ability to be used by inexperienced users, and it is therefore the technique employed
here. Before each case, the US probe was calibrated using a custom-built N-wire phantom and
calibration plugin following the guidelines described in [46]. The US probe calibration is
performed by rigidly attaching a tracker to the US probe and filling the phantom containing the
N-wires with water. The entire phantom is then registered to an identical virtual model in the
IGNS system using fiducial markers on the phantom followed by imaging the N-wire patterns at
different US positions and probe depth settings (Figure 3-G). The intersection of the N-wire
patterns with the US image slice defines the three-dimensional position of a point in the US
image and three or more patterns together define the calibration transform for the registered
phantom. Within IBIS is a custom manual calibration plugin that allows users to manually
identify the intersection points of the wires within a sequence of US images and the calibration is
automatically recomputed after each interaction [34]. Following the manual calibration, the
calibration is compared with 5 other N-wire images and the accuracy is reported as the root mean
square of the difference between world coordinates and transformed US image coordinates of the
intersection point of the N-wire and the US image plane. The accuracy for each of the cases in
this study was on the order of 1.0 mm, which is consistent with reported and accepted values in
the literature [47].
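The core computation behind such a plugin, a rigid transform that best maps the identified N-wire intersection points onto their known phantom/world positions, together with the RMS accuracy figure quoted above, can be sketched with a closed-form least-squares solution. The interactive refinement and phantom registration steps of the actual plugin [34, 46] are omitted, and the function names are ours.

```python
import numpy as np

def rigid_landmark_transform(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping src -> dst.

    src, dst : Nx3 arrays of corresponding 3D points, e.g. N-wire intersections
    expressed in US-image millimetre coordinates and in phantom/world space.
    Returns a 4x4 homogeneous matrix (Kabsch/Horn closed-form solution).
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst.mean(0) - R @ src.mean(0)
    return T

def calibration_rms(T, src, dst):
    """RMS distance between world points and transformed US-image points (Sec. 2.5)."""
    src_h = np.c_[np.asarray(src, float), np.ones(len(src))]
    residuals = (T @ src_h.T).T[:, :3] - np.asarray(dst, float)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```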
2.6 US-MRI registration
MR – US registration techniques to correct for brain shift have recently been developed, based
on gradient orientation alignment, in order to reduce the effect of the non-homogeneous intensity
response found in iUS images [48]. Once an iUS acquisition has been performed, the collected
slices are reconstructed into a 3D volume, resliced in the axial, coronal, and sagittal views and
overlaid on the existing preoperative images. The current simple volume reconstruction works
with a raster scan strategy: for every voxel within the volume to be reconstructed, all pixels of all
iUS images are evaluated in terms of their distance to that voxel. If a pixel is within a
user-specified distance (e.g., between 1 and 3 mm), the intensity of the voxel is increased by the
intensity of the iUS pixel, modulated by a Gaussian weighting function. The registration
algorithm, based on gradient orientation alignment [48], focuses on maximizing the overlap of
gradients with minimal uncertainty in their orientation estimates (i.e., locations with high gradient
magnitude) within the set of images. This can be described mathematically as:
T^* = \arg\max_T \sum_{x \in \Omega} \cos^2(\theta_x)    (2)

where T* is the transformation being determined, Ω is the overlap domain, and θ_x is the inner
angle between the fixed image gradient, ∇F(x), and the transformed moving image gradient,
∇(M ∘ T)(x):

\theta_x = \angle\big(\nabla F(x), \nabla (M \circ T)(x)\big)    (3)

The registration is characterized by three major components: (1) a local similarity metric based
on gradient orientation alignment (cos² θ_x), (2) a multi-scale selection strategy that identifies
locations of interest with gradient orientations of low uncertainty, and (3) a computationally
efficient technique for computing gradient orientations of the transformed moving images [48].
The registration pipeline consists of two stages. During the initial, pre-processing stage, the
image derivatives are computed and areas of low uncertainty gradient orientations are identified.
The second stage consists of an optimization strategy that maximizes the average value of the
local similarity metric evaluated on the locations of interest using a covariance matrix adaptation
evolution strategy [49]. For an in-depth discussion and more details on this procedure, the
interested reader is directed to [48].
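For illustration only, the similarity metric of Eqs. (2)-(3) and a crude version of the location selection can be written as follows; the actual method [48] uses a multi-scale uncertainty criterion, analytic reorientation of the moving gradients, and a GPU implementation, none of which are reproduced here.

```python
import numpy as np

def select_high_confidence(gradient_volume, keep_fraction=0.02):
    """Pick voxel locations whose gradient magnitude is in the top `keep_fraction`,
    a simple stand-in for the multi-scale low-uncertainty selection of [48].
    gradient_volume : array of shape (..., 3) holding the image gradient at every voxel."""
    mag = np.linalg.norm(gradient_volume, axis=-1)
    threshold = np.quantile(mag, 1.0 - keep_fraction)
    return np.argwhere(mag >= threshold)

def gradient_orientation_similarity(fixed_grad, moving_grad, eps=1e-8):
    """Mean cos^2 of the inner angle between corresponding gradient vectors (Eqs. 2-3).

    fixed_grad, moving_grad : Nx3 gradients sampled at the selected locations, with the
    moving-image gradients already resampled under the candidate transform T.
    The optimizer (CMA-ES in [48]) would maximize this value over T."""
    dot = (fixed_grad * moving_grad).sum(axis=1)
    norms = np.linalg.norm(fixed_grad, axis=1) * np.linalg.norm(moving_grad, axis=1) + eps
    return float(np.mean((dot / norms) ** 2))
```

Because the metric squares the cosine, aligned and anti-aligned gradients score equally, which makes it tolerant of the contrast inversions that can occur between MRI and US.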
This specific framework was chosen for use in the pilot study due to its validation with clinical
US-MRI data [48]. This registration framework has been shown to provide substantially
improved robustness and computational performance in the context of IGNS, motivated by the
fact that gradient orientations are considered to characterize the underlying anatomical
boundaries found in MRI images and are more robust to the effect of non-homogeneous intensity
response found in US images. For this pilot study, only rigid registration transformations were
investigated.
Both the volume reconstruction and registration techniques are incorporated into IBIS using a
graphics processing unit (GPU) implementation that allows for high-speed results (on the order
of seconds) for reconstruction and rigid registration. This process is briefly summarized in
Figure 4-C.
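A slow but readable sketch of the raster-style reconstruction described at the start of this section is given below. The voxel-grid conventions, the weight normalization at the end, and the skipping of empty pixels are simplifications made for this sketch; the actual implementation runs on the GPU.

```python
import numpy as np

def reconstruct_us_volume(slices, pixel_to_world, shape, spacing, origin,
                          max_dist=2.0, sigma=1.0):
    """Accumulate tracked 2D iUS pixels into a 3D volume with Gaussian distance weighting.

    slices         : list of 2D arrays, one per tracked US frame
    pixel_to_world : list of 4x4 transforms taking homogeneous pixel coordinates
                     (column, row, 0, 1) to world/patient space in mm (probe calibration,
                     pixel scaling and tracking are assumed folded into these matrices)
    shape, spacing, origin : geometry of the output voxel grid (voxels, mm/voxel, mm)
    Every pixel adds exp(-d^2 / (2 sigma^2)) times its intensity to each voxel closer
    than max_dist mm; the accumulated intensities are weight-normalised at the end.
    """
    vol = np.zeros(shape, dtype=np.float64)
    wsum = np.zeros(shape, dtype=np.float64)
    spacing = np.asarray(spacing, dtype=float)
    origin = np.asarray(origin, dtype=float)
    reach = int(np.ceil(max_dist / spacing.min()))        # neighbourhood radius in voxels

    for img, T in zip(slices, pixel_to_world):
        rows, cols = np.nonzero(img > 0)                   # ignore empty (zero) pixels
        pix = np.stack([cols, rows, np.zeros_like(cols), np.ones_like(cols)]).astype(float)
        world = (T @ pix)[:3].T                            # pixel centres in world space (mm)
        centre = np.round((world - origin) / spacing).astype(int)
        for p, (ci, cj, ck), val in zip(world, centre, img[rows, cols]):
            for di in range(-reach, reach + 1):
                for dj in range(-reach, reach + 1):
                    for dk in range(-reach, reach + 1):
                        vi, vj, vk = ci + di, cj + dj, ck + dk
                        if not (0 <= vi < shape[0] and 0 <= vj < shape[1] and 0 <= vk < shape[2]):
                            continue
                        d = np.linalg.norm(origin + np.array([vi, vj, vk]) * spacing - p)
                        if d <= max_dist:
                            w = np.exp(-d * d / (2.0 * sigma * sigma))
                            vol[vi, vj, vk] += w * val
                            wsum[vi, vj, vk] += w
    return np.where(wsum > 0, vol / wsum, 0.0)
```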
2.7 Operating Room Procedure
All image processing for each patient-specific model is done prior to the surgical case and
imported into the IBIS IGNS console. Once the patient has been brought into the operating room
and anaesthetized, the patient-to-image registration for IBIS is done simultaneously with the
commercial IGNS system, a Medtronic StealthStation (Dublin, Leinster, Republic of Ireland),
using 8 corresponding anatomical landmark pairs [50] (Figure 4-A). The quality of this
registration is evaluated by the clinical team in the OR. Quantitatively, the fiducial landmark
error must be below a threshold of 5.0 mm, but it is often much lower. Qualitatively, the
neurosurgeon evaluates the appearance of the tracked probe position on the skin of the patient to
ensure an appropriate quality of registration has been achieved. Augmented reality
is then used at different time points during the intervention and accuracy is qualitatively
assessed: on the scalp to verify the skin incision, on the craniotomy (Figure 4-B) to verify
craniotomy extent, and on the cortex throughout resection (Figure 4-D). Once the dura has been
exposed, tracked intraoperative US images are acquired and used to re-register the preoperative
images to the patient. The accuracy of the alignment of the AR view on the cortex is then re-
evaluated based on these updated views using both qualitative and quantitative criteria.
Comments from the surgeon and visual inspection are used as qualitative criteria, while the pixel
misalignment error [51] and target registration error (TRE) of a set of corresponding US-MRI
landmark pairs are used as quantitative criteria. For each patient, a set of 5 corresponding
landmarks were chosen on both preoperative MRI and iUS volumes to calculate the TRE – as the
Euclidean distance between pairs of landmarks – before and after registration. Landmarks were
chosen in areas of hyperechoic-hypoechoic transition (US) near tumor boundaries, ventricles and
sulci, when corresponding well-defined features on the MRI were also identifiable. Pixel
misalignment error, as the name suggests, is calculated as the distance in pixels between where
the augmented virtual model is displayed on the live view and where its true location is in the
live view as determined through identification of a single pair of corresponding landmarks
identified by the surgeon (Figure 4-E). The precision of the pixel misalignment error
measurements depends on the distance between the camera and the patient, so care was taken to
keep the AR camera at the edge of the sterile field so that this distance
remained relatively constant between cases and calibration. An example calculation is shown in
Figure 5. The error is converted to a distance in mm using the pixel dimensions determined
from the camera calibration [51].
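Both quantitative criteria reduce to simple distance computations; a minimal sketch is given below (the function names and the identity default transform are ours, not from the paper).

```python
import numpy as np

def target_registration_error(mri_pts, us_pts, transform=np.eye(4)):
    """Per-landmark Euclidean distance between corresponding MRI and iUS landmarks (mm).

    mri_pts, us_pts : Nx3 arrays of corresponding landmark coordinates
    transform       : 4x4 matrix applied to the iUS landmarks (identity = before registration)
    Table 2 reports the mean and standard deviation of these values per case."""
    us_h = np.c_[np.asarray(us_pts, float), np.ones(len(us_pts))]
    moved = (transform @ us_h.T).T[:, :3]
    return np.linalg.norm(np.asarray(mri_pts, float) - moved, axis=1)

def pixel_misalignment_error(p_live_px, p_virtual_px, mm_per_pixel):
    """2D distance between a landmark picked in the live view and in the augmented overlay,
    converted from pixels to mm using the pixel size from the camera calibration [51]."""
    diff = np.asarray(p_live_px, float) - np.asarray(p_virtual_px, float)
    return float(np.linalg.norm(diff) * mm_per_pixel)
```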
2.8 Study Design
Two neurosurgeons at the Montreal Neurological Institute and Hospital were involved in this
study. Neither surgeon had experience with the AR system before this study, but both have
been involved with work related to development of the custom neuronavigation system as well as
its use and development for intraoperative US brain shift correction. Neither participant was
trained in interpreting AR images before the study, beyond an explanation of what the AR
images represented and how they would be displayed in the OR.
Figure 4: Flowchart of the intraoperative workflow and how surgical tasks are related to IGNS
tasks. A-Patient-to-image registration. After the patient's head is immobilized, a tracking
reference is attached to the clamp and 8 facial landmarks are chosen that correspond to identical
landmarks on the preoperative images to create a mapping between the two spaces. B-
Augmented reality visualization on the skull is being qualitatively assessed by comparing the
tumor contour as defined by the preoperative guidance images and the overlay of the augmented
image. C-A series of US images are acquired once the craniotomy has been performed on the
dura and then reconstructed and registered with the preoperative MRI images using the gradient
orientation alignment algorithm. D-Augmented reality visualization on the cortex showing the
location of the tumor (green) and a vessel of interest (blue). E-The AR accuracy is quantitatively
evaluated by having the surgeon choose an identifiable landmark on the physical patient,
recording the coordinates, and then choosing the corresponding landmark on the augmented
image, recording the coordinates and measuring the two-dimensional distance between the
coordinates.
3 Results
We present the results of our experience in 8 iUS-AR IGNS cases. Relevant patient information
including age, sex and tumor type is summarized in Table 1.
Table 1: Summary of Patient information

Patient   Sex   Age   Tumor Type    Lobe
1         F     56    Meningioma    L - O/P
2         M     49    Glioma        L - F/T
3         F     72    Metastases    L - O/P
4         M     63    Glioma        R - F
5         F     77    Meningioma    R - F
6         M     24    Glioma        L - F
7         F     62    Glioma        L - O/P
8         F     55    Metastases    R - F

F-Frontal, O-Occipital, P-Parietal, T-Temporal, L-Left, R-Right
3.1 Quantitative Results
For all but one case, the iUS-MRI registration error improved to under 3 mm and the pixel
misalignment error was on the order of 1 – 3 mm. The average improvement was 68%. In case 2,
the camera calibration data was corrupted so the pixel misalignment error could not be
calculated. Table 2 summarizes all iUS-MRI registration and pixel misalignment errors. The
second column represents the registration misalignment from the initial patient-to-image
landmark registration. Columns 3 and 4 represent the registration errors between US and MRI
volumes before and after registration respectively. The final two columns pertain to the virtual
model-to-video registration (pixel misalignment error) before and after US-MRI registration.
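For completeness, the sketch below shows one way to derive such per-case numbers from the per-landmark TREs; the paper only states that a group t-test was used, so the paired test here is an assumption.

```python
import numpy as np
from scipy import stats

def registration_improvement(pre_tre, post_tre):
    """Percent improvement of the mean TRE and a paired t-test on per-landmark TREs.

    pre_tre, post_tre : per-landmark TREs (mm) for one case, before and after the
    iUS-MRI registration. The choice of a paired test is an assumption made here."""
    pre = np.asarray(pre_tre, dtype=float)
    post = np.asarray(post_tre, dtype=float)
    improvement = 100.0 * (pre.mean() - post.mean()) / pre.mean()
    _, p_value = stats.ttest_rel(pre, post)
    return improvement, p_value

# Using the mean TREs of case 3 in Table 2 as a scalar illustration:
# 100 * (6.31 - 2.18) / 6.31 is roughly a 65% improvement for that case.
```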
3.2 Qualitative Results
Qualitative comments from the surgeons largely reflected the benefit of not having to look at or
interpret US information while still having an accurately overlaid virtual model that
could be used to verify the surgical plan and to visualize the surgical target. Their main
concerns were the limitation of camera maneuverability due to the size of tracking volume,
difficulty in comparing AR visualization with preoperative navigation images, and a learning
curve associated with the technology. Table 3 summarizes the different tasks where AR was used
throughout these interventions and how surgeons considered it to be useful. Figure 6 is an image
summary of four illustrative cases showing a surgical view, a pre-registration AR view, and a
post-registration AR view, to qualitatively show the improvement of the iUS registration on the AR visualizations.
Table 2: Summary of Registration and Pixel Misalignment Errors

Patient   Patient-to-Image        Pre iUS-MRI      Post iUS-MRI     Pre-reg Pixel    Post-reg Pixel
          Registration (mm)       Registration     Registration     Misalignment     Misalignment
          IBIS      Medtronic     TRE (mm)         TRE (mm)         Error (mm)       Error (mm)
1         3.23      3.07          6.85 ± 3.14      1.88 ± 0.93      N/A*             N/A*
2         2.88      3.22          5.33 ± 1.67      2.97 ± 1.54      5.39             1.19
3         3.96      3.54          6.31 ± 1.51      2.18 ± 1.06      6.46             1.06
4         4.20      3.66          7.22 ± 2.31      2.34 ± 1.27      6.88             1.80
5         2.77      3.12          7.89 ± 1.62      2.77 ± 1.34      7.20             2.35
6         2.33      3.20          3.58 ± 1.82      2.25 ± 1.57      3.57             1.32
7         4.35      2.98          6.61 ± 3.39      3.48 ± 2.88      5.55             3.27
8         3.85      3.15          5.80 ± 2.27      1.56 ± 1.02      4.32             1.22

*For this case the camera calibration data was corrupted and we were unable to extract the
necessary parameters to measure the misalignment error. The pre- to post-registration
improvement of mean TRE was statistically significant in all cases (group t-test, p<0.05).
Figure 5: The surgeon was asked to place the tip of the tracked pointer at the closest edge of the
visible tumor in the surgical field (TL) and to also select the corresponding location on the
preoperative images/virtual model (TV). The AR view was initialized and the pixel misalignment
error was measured as the 2D distance between TL and TV on the camera image, obtained by
multiplying the number of pixels between the two points of interest by the pixel size (as
determined from the camera calibration), both before and after registration.
Table 3: Summary of AR tasks, benefits, and concerns, as per surgeons using the technology

Pre-operative Planning
Use of AR:
• Show location of tumor and other anatomy of interest after patient positioning and registration
Benefits:
• Share surgical plan with other assisting physicians
Concerns:
• Limitation of extent of camera maneuverability due to tracking volume

Craniotomy
Use of AR:
• Visualize tumor location in relation to drawn craniotomy borders below bone
• Assess if there is a loss of accuracy from skin landmark registration
Benefits:
• Minimize craniotomy size. Added comfort in verification that virtual tumor location is within drawn boundaries before removing bone.
Concerns:
• Difficult to verify with navigation images at the same time

Cortex pre-iUS Registration
Use of AR:
• AR in this context is primarily for research purposes and to determine loss of accuracy from the beginning of surgery and the initial skin landmark registration
Benefits:
• Despite being used for research, surgeons still found the visualizations useful to have an understanding of the directions of the deviations from the initial registration
Concerns:
• Difficulty in understanding data the first time during use
• Limitation of extent of camera maneuverability due to tracking volume

Cortex post-iUS Registration
Use of AR:
• Intraoperative planning
• Tumor and vessel identification
• Assessment of AR/IGNS accuracy
Benefits:
• Surgeons found it helpful to compare their physical interpretation of tumor borders with the virtual borders
• Surgeons commented on the benefit of seeing a virtual model of vessels when they were in close proximity to the tumor and deep to the area of resection
• AR helped confirm their surgical plan
Concerns:
• Limitation of extent of camera maneuverability due to tracking volume
Figure 6: Four illustrative examples of the qualitative improvement of AR for tumor
visualization. The left column is the surgical view, the middle column is the initial AR view
before iUS registration, and the right column is the iUS brain shift corrected AR view. A, B, C,
and D are cases 1, 3, 6, and 7 respectively.
4 Discussion and Conclusions
In this study, we successfully combined two important technologies in the context of IGNS of
brain tumors. The combination contributed to a high level of accuracy during AR visualizations
and obviated the need to directly interpret the iUS images throughout the intervention. The fact
that pre-iUS registration error was greater than the registration error following initial patient-to-
image registration highlights the gradual loss of accuracy throughout the intervention [52]. In
each case, we improved the patient-image misalignment by registration with iUS data. This
resulted in several advantages that included more accurate intra-operative navigation and more
reliable AR visualizations as shown qualitatively, and quantitatively with the improved TRE
measurements and pixel misalignment error measurements respectively.
The improved accuracy of the system was evaluated by two metrics: the target registration error
within a series of target points to assess the registration quality, and the pixel misalignment error
for the improved AR quality. A limitation of measuring the accuracy of AR overlays stems from
the lack of a standardized and universal metric with which the error in AR can be quantified. Some
authors use pixel misalignment error [51], while others use pixel reprojection error [26], and
other complex metrics are also described [6, 53]. The pixel misalignment error has the implicit
assumption that the registration with iUS creates a perfectly aligned image. This assumption is
inevitably violated, and thus pixel misalignment error is not a perfect measure of accuracy and is
only an indication of relative error between the two AR views rather than an absolute error for
either view. Despite this limitation, it was deemed here to be the most appropriate for
quantitative evaluation of the AR images. Another consideration regarding the accuracy of the
registration procedure is the effect of heart rate and blood pulsation during iUS acquisition. This
detail was not considered in this pilot project and will be investigated in future work. This work
is intended to serve as a pilot study to assess the feasibility of the combination of US and AR
technologies to improve on each of their shortcomings. For this reason, an US-MRI registration
algorithm that has been reported in the literature to work well with this type of clinical data was
chosen, as well as an AR evaluation metric that was considered the most appropriate in
evaluating the quality of virtual overlay for the data presented in this study. Future work will
require more extensive validation against other US-MR registration frameworks to draw stronger
conclusions about the quality of accuracy improvement, as well as other AR evaluation metrics
to better describe the quality of virtual overlay improvement. Finally, registration errors on the
order of 1.0 mm are ideally desired for neuronavigation-assisted tasks; however, this level of
accuracy is rarely achievable and a registration error on the order of 2.0 – 3.0 mm is sufficient to
perform the intended tasks appropriately for this pilot study. In future work, with the use of non-
linear registration, we hope to further improve the level of registration accuracy to be closer to
ideal conditions.
In this study, AR views were acquired with the use of an external camera to capture images of
the surgical scene and render the AR view on the computer workstation. This strategy was
employed since one of the surgeons involved in the project does not generally use a microscope
while performing tumor resections. Augmenting microscope images is also compatible within
IBIS and may facilitate integration of AR in the operating room for navigation [34]. While the
justification for AR in some of the cases presented here may not be strong due to the tumor's
proximity to the cortex, the potential of AR in more complicated scenarios should not be
underestimated. For smaller tumors located much deeper within the brain or for tumors near
eloquent brain areas, having the ability to see below the surface with accurate visualizations
offered using AR creates the possibility of tailoring resection corridors to minimize the
invasiveness of the surgery. This benefit can only be accomplished if we are able to maintain a
high level of patient-to-image registration accuracy throughout the procedure. Combining iUS
registration and AR with more accurate tumor segmentations, like the process described in [54],
would assist a surgeon in resecting as much tumorous tissue as possible with minimal resection
of healthy tissue without having to rely solely on a mental map of the patient’s anatomy and the
surgeon's ability to discriminate tissue types.
It is clear from the qualitative comments from the surgeons involved in this work that there is a
learning curve associated with AR in the context of IGNS. In the first several cases, the AR was
employed simply as a tool to verify positions of anatomy of interest and to assess the accuracy of
the AR image alignment with the tracked preoperative images. As the surgeons became
comfortable with the system, the length of time AR was used increased and the number of tasks
where it was deemed useful also increased. The surgeons commented on the usefulness of AR to
assess and minimize the extent of the craniotomy, and to assess the location of the anatomy of
interest (i.e. tumors and vessels) once the cortex has been exposed. Additionally, the surgeons
commented on the benefit of using AR to share the surgical plan with assisting residents and
physicians by being able to show the vessels and tumor location before making an incision. The
surgeons also commented on the fact that having a coloured AR image for assessing the anatomy
was more enjoyable than a grey-scale US. The primary concern of the surgeons using the system
was the limitation of camera maneuverability due to the size of tracking volume which also led
to difficulty in comparing AR visualizations with the preoperative navigation images. However,
comfort with the system grew quickly over the first few cases, and its usefulness, the amount of
time it was used, and the amount of information requested all increased. With continued use,
surgeons found the information increasingly useful as they incorporated it into their
intraoperative planning, suggesting that with reliable accuracy and training this technology could
provide a benefit to improve the minimal invasiveness of surgery and to help with patient model
interpretation.
In conclusion, this pilot study highlights the feasibility of combining iUS registration and AR
visualization in the context of IGNS for tumor resections and some of the advantages it can have.
While many authors have investigated these techniques separately for brain tumor neurosurgery,
few have looked at the benefits of combining these two technologies. Our pilot study in 8
surgical cases suggests that the combined use of these technologies has the potential to improve
on traditional IGNS systems. By adding improved visualization of the anatomy and pathology of
interest, while simultaneously correcting for patient-image misalignment, extended reliable use
of IGNS throughout the intervention can be maintained, which will hopefully lead to more efficient
and minimally invasive surgical interventions. In addition, with accurate AR visualizations, the
neurosurgeon is not required to interpret the iUS images, which can be confusing to a non-expert.
With continued development and integration of the two techniques, the proposed iUS-AR system
has potential for improving tasks such as tailoring craniotomies, planning resection corridors and
localizing tumor tissue while simultaneously correcting for brain shift.
5 DISCLOSURES
All authors declare they have no conflicts of interest.
6 ACKNOWLEDGMENTS
This work was funded in part by NSERC (238739), CIHR (MOP-97820), and an NSERC
CHRP (385864-10).
7 BIOGRAPHIES
Ian J. Gerard, M.Sc.
Ian J. Gerard is a Ph.D. Candidate in Biomedical Engineering at McGill University. His current
research involves improving the accuracy of neuronavigation tools for image-guided
neurosurgery of brain tumours with focus on intraoperative imaging for brain shift management
and enhanced visualization techniques for understanding complex medical imaging data.
Marta Kersten-Oertel, Ph.D.
Dr. Marta Kersten-Oertel is an Assistant Professor at Concordia University specializing in
medical image visualization and image-guided neurosurgery. Her current research involves the
development of an augmented reality neuronavigation system and the combination of advanced
visualization techniques with psychophysics in order to improve the understanding of complex
medical imaging data.
Simon Drouin, M.Sc.
Simon Drouin is a Ph.D. Candidate in Biomedical Engineering at McGill University. His current
research involves user interaction in augmented reality with a focus on depth perception, and
interpretation of visual cues in augmented environments.
Jeffery A. Hall, M.D. M.Sc.
Dr. Jeffery A. Hall is a neurosurgeon and Assistant Professor of Neurology and Neurosurgery at
McGill University’s Montreal Neurological Institute and Hospital, specializing in the surgical
treatment of epilepsy and cancer. His current research, in collaboration with other MNI clinician-
scientists includes: developing non-invasive means of delineating epileptic foci, intraoperative
imaging in combination with neuronavigation systems and the application of image-guided
neuronavigation to epilepsy surgery.
Kevin Petrecca, M.D. Ph.D.
Dr. Kevin Petrecca is a neurosurgeon, Assistant Professor of Neurology and Neurosurgery at
McGill University, and head of Neurosurgery at the Montreal Neurological Hospital,
specializing in neurosurgical oncology. His research at the Montreal Neurological Institute and
Hospital Brain Tumour Research Centre focuses on understanding fundamental molecular
mechanisms that regulate cell motility with a focus on malignant glial cell invasion.
Dante De Nigris, Ph.D.
Dr. De Nigris was formerly a Ph.D. student in the department of electrical and computer
engineering at McGill University. His research interests focus on analyzing and developing
techniques for multimodal image registration, specifically on similarity metrics for challenging
multimodal image registration contexts.
Daniel DiGiovanni, M.Sc.
Daniel DiGiovanni is a Ph.D. student in the Integrated Program in Neuroscience at McGill
University. His research interests focus on the analysis of functional MRI data in the context of
brain tumors.
Tal Arbel, Ph.D.
Dr. Arbel is an Associate Professor in the department of electrical and computer engineering, and
a member of the McGill Centre for Intelligent Machines. Her research goals focus on the
development of modern probabilistic techniques in computer vision and their application to
problems in the medical imaging domain.
D. Louis Collins, Ph.D.
Dr. D. Louis Collins is a professor in Neurology and Neurosurgery, Biomedical Engineering and
associate member of the Centre for Intelligent Machines at McGill University. His laboratory
develops and uses computerized image processing techniques such as non-linear image
registration and model-based segmentation to automatically identify structures within the brain.
His other research focuses on applying these techniques to image guided neurosurgery to provide
surgeons with computerized tools to assist in interpreting complex medical imaging data.
8 REFERENCES
1. Sanai, N. and M.S. Berger, Operative techniques for gliomas and the value of extent of resection. Neurotherapeutics, 2009. 6(3): p. 478-86.
2. Roberts, D.W., et al., A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg, 1986. 65(4): p. 545-9.
3. Gerard, I.J., et al., Brain shift in neuronavigation of brain tumors: A review. Med Image Anal, 2016. 35: p. 403-420.
4. Comeau, R.M., et al., Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery. Med Phys, 2000. 27(4): p. 787-800.
5. Mercier, L., et al., Registering pre- and postresection 3-dimensional ultrasound for improved visualization of residual brain tumor. Ultrasound Med Biol, 2013. 39(1): p. 16-29.
6. Azuma, R., et al., Recent advances in augmented reality. IEEE Computer Graphics and Applications, 2001. 21(6): p. 34-47.
7. Liao, H., et al., 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Trans Biomed Eng, 2010. 57(6): p. 1476-86.
8. Tabrizi, L.B. and M. Mahvash, Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. Journal of Neurosurgery, 2015. 123(1): p. 206-211.
9. Liao, H., et al., An integrated diagnosis and therapeutic system using intra-operative 5-aminolevulinic-acid-induced fluorescence guided robotic laser ablation for precision neurosurgery. Med Image Anal, 2012. 16(3): p. 754-66.
10. Gerard, I.J., et al., Improving patient specific neurosurgical models with intraoperative ultrasound and augmented reality visualizations in a neuronavigation environment, in Workshop on Clinical Image-Based Procedures. 2015. Springer.
11. Ma, L., et al., Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study. International Journal of Computer Assisted Radiology and Surgery, 2017.
12. Sato, Y., et al., Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE Transactions on Medical Imaging, 1998. 17(5): p. 681-693.
13. Meola, A., et al., Augmented reality in neurosurgery: a systematic review. Neurosurg Rev, 2016.
14. Xiao, Y., et al., Atlas-Guided Transcranial Doppler Ultrasound Examination with a Neuro-Surgical Navigation System: Case Study, in Clinical Image-Based Procedures. Translational Research in Medical Imaging: 4th International Workshop, CLIP 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Revised Selected Papers, C. Oyarzun Laura, et al., Editors. 2016, Springer International Publishing: Cham. p. 19-27.
15. Bucholz, R.D. and D.J. Greco, Image-guided surgical techniques for infections and trauma of the central nervous system. Neurosurg Clin N Am, 1996. 7(2): p. 187-200.
16. Keles, G.E., K.R. Lamborn, and M.S. Berger, Coregistration accuracy and detection of brain shift using intraoperative sononavigation during resection of hemispheric tumors. Neurosurgery, 2003. 53(3): p. 556-62; discussion 562-4.
17. Reinertsen, I., et al., Intra-operative correction of brain-shift. Acta Neurochir (Wien), 2014. 156(7): p. 1301-10.
18. Reinertsen, I., et al., Clinical validation of vessel-based registration for correction of brain-shift. Med Image Anal, 2007. 11(6): p. 673-84.
19. Kersten-Oertel, M., P. Jannin, and D.L. Collins, DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph, 2012. 18(2): p. 332-52.
20. Cabrilo, I., et al., Augmented reality-assisted skull base surgery. Neurochirurgie, 2014. 60(6): p. 304-6.
21. Kawamata, T., et al., Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery, 2002. 50(6): p. 1393-7.
22. Paul, P., O. Fleig, and P. Jannin, Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging, 2005. 24(11): p. 1500-11.
23. Rosahl, S.K., et al., Virtual reality augmentation in skull base surgery. Skull Base, 2006. 16(2): p. 59-66.
24. Shahidi, R., et al., Implementation, calibration and accuracy testing of an image-enhanced endoscopy system. IEEE Trans Med Imaging, 2002. 21(12): p. 1524-35.
25. Kersten-Oertel, M., et al., Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg, 2015. 10(11): p. 1823-36.
26. Cabrilo, I., P. Bijlenga, and K. Schaller, Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir (Wien), 2014. 156(9): p. 1769-74.
27. Cabrilo, I., P. Bijlenga, and K. Schaller, Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery, 2014. 10 Suppl 2: p. 252-60; discussion 260-1.
28. Low, D., et al., Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas. Br J Neurosurg, 2010. 24(1): p. 69-74.
29. Stadie, A.T., et al., Virtual reality system for planning minimally invasive neurosurgery. Technical note. J Neurosurg, 2008. 108(2): p. 382-94.
30. Kersten-Oertel, M., et al., Augmented Reality for Specific Neurovascular Surgical Tasks, in Augmented Environments for Computer-Assisted Interventions: 10th International Workshop, AE-CAI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015, Proceedings, A.C. Linte, Z. Yaniv, and P. Fallavollita, Editors. 2015, Springer International Publishing: Cham. p. 92-103.
31. Drouin, S., M. Kersten-Oertel, and D. Louis Collins, Interaction-Based Registration Correction for Improved Augmented Reality Overlay in Neurosurgery, in Augmented Environments for Computer-Assisted Interventions: 10th International Workshop, AE-CAI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015, Proceedings, A.C. Linte, Z. Yaniv, and P. Fallavollita, Editors. 2015, Springer International Publishing: Cham. p. 21-29.
32. Kersten-Oertel, M., S.J. Chen, and D.L. Collins, An Evaluation of Depth Enhancing Perceptual Cues for Vascular Volume Visualization in Neurosurgery. IEEE Trans Vis Comput Graph, 2013.
33. Kersten-Oertel, M., et al., Augmented reality visualization for guidance in neurovascular surgery. Stud Health Technol Inform, 2012. 173: p. 225-9.
34. Drouin, S., et al., IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg, 2016.
35. Mercier, L., et al., New prototype neuronavigation system based on preoperative imaging and intraoperative freehand ultrasound: system description and validation. Int J Comput Assist Radiol Surg, 2011. 6(4): p. 507-22.
36. Gerard, I.J. and D.L. Collins, An analysis of tracking error in image-guided neurosurgery. Int J Comput Assist Radiol Surg, 2015.
37. Guizard, N., et al., Robust individual template pipeline for longitudinal MR images, in MICCAI 2012 Workshop on Novel Biomarkers for Alzheimer's Disease and Related Disorders. 2012.
38. Coupe, P., et al., An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Trans Med Imaging, 2008. 27(4): p. 425-41.
39. Sled, J.G., A.P. Zijdenbos, and A.C. Evans, A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans Med Imaging, 1998. 17(1): p. 87-97.
40. Eskildsen, S.F. and L.R. Ostergaard, Active surface approach for extraction of the human cerebral cortex from MRI, in Medical Image Computing and Computer-Assisted Intervention. 2006. Springer Berlin Heidelberg.
41. Yushkevich, P., et al., User-Guided Level Set Segmentation of Anatomical Structures with ITK-SNAP, Insight Journal, Special Issue on ISC/NA-MIC/MICCAI Workshop on Open-Source Software. 2005.
42. Frangi, A.F., et al., Multiscale vessel enhancement filtering, in Medical Image Computing and Computer-Assisted Intervention - MICCAI'98: First International Conference, Cambridge, MA, USA, October 11-13, 1998, Proceedings, W.M. Wells, A. Colchester, and S. Delp, Editors. 1998, Springer Berlin Heidelberg: Berlin, Heidelberg. p. 130-137.
43. Zhang, Z., Camera calibration with one-dimensional objects. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2004. 26(7): p. 892-899.
44. Drouin, S., et al., A realistic test and development environment for mixed reality in neurosurgery, in Augmented Environments for Computer-Assisted Interventions. 2012. Springer Berlin Heidelberg.
45. Kersten-Oertel, M., et al., Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg, 2015.
46. Mercier, L., et al., A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound Med Biol, 2005. 31(4): p. 449-71.
47. Carbajal, G., et al., Improving N-wire phantom-based freehand ultrasound calibration. Int J Comput Assist Radiol Surg, 2013. 8(6): p. 1063-72.
48. De Nigris, D., D.L. Collins, and T. Arbel, Fast rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound for neurosurgery based on high confidence gradient orientations. Int J Comput Assist Radiol Surg, 2013. 8(4): p. 649-61.
49. Hansen, N. and A. Ostermeier, Completely derandomized self-adaptation in evolution strategies. Evol Comput, 2001. 9(2): p. 159-95.
50. Gerard, I.J., et al., New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note. Neurosurgery, 2015. 11 Suppl 3: p. 376-80; discussion 380-1.
51. Caversaccio, M., et al., Augmented reality endoscopic system (ARES): preliminary results. Rhinology, 2008. 46(2): p. 156-8.
52. Nabavi, A., et al., Serial intraoperative magnetic resonance imaging of brain shift. Neurosurgery, 2001. 48(4): p. 787-97; discussion 797-8.
53. Holloway, R.L., Registration error analysis for augmented reality. Presence: Teleoperators and Virtual Environments, 1997. 6(4): p. 413-432.
54. Subbanna, N.K., et al., Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes. Med Image Comput Comput Assist Interv, 2013. 16(Pt 1): p. 751-8.