Combining intra-operative ultrasound brain shift correction and augmented
reality visualizations: a pilot study of 8 cases.
Ian J. Gerard (ian.gerard@mail.mcgill.ca)1, Marta Kersten-Oertel2, Simon Drouin1, Jeffery A.
Hall3, Kevin Petrecca3, Dante De Nigris4, Daniel A. Di Giovanni3, Tal Arbel4, D. Louis
Collins1,3,4
1Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, McGill University, 3801
Blvd. Robert-Bourassa, Montreal, QC, H3A 2B4, Canada
2PERFORM Centre, Department of Computer Science and Software Engineering, Concordia University, 7200
Sherbrooke St.W, Montreal, QC, H4B 1R6, Canada
3Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, McGill University,
3801 Blvd. Robert-Bourassa, Montreal, QC, H3A 2B4, Canada
4Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, 3480
Blvd. Robert-Bourassa, Montreal, QC, H3A 2A7, Canada
Abstract
Purpose: We present our work investigating the feasibility of combining intraoperative
ultrasound for brain shift correction and augmented reality visualization for intraoperative
interpretation of patient-specific models in image-guided neurosurgery of brain tumors.
Methods: We combine two imaging technologies for image-guided brain tumor neurosurgery.
Throughout surgical interventions, augmented reality was used to assess different surgical
strategies using three-dimensional patient-specific models of the patient’s cortex, vasculature,
and lesion. Ultrasound imaging was acquired intra-operatively and preoperative images and
models were registered to the intraoperative data. The quality and reliability of the augmented
reality views were evaluated with both qualitative and quantitative metrics.
Results: A pilot study of 8 patients demonstrates the feasible combination of these two
technologies and their complementary features. In each case, the augmented reality visualizations
enabled the surgeon to accurately visualize the anatomy and pathology of interest for an
extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and
augmented reality were reduced in all cases.
1 Ian J. Gerard is the corresponding author
Conclusion: These results demonstrate the potential of combining ultrasound-based registration
with augmented reality to become a useful tool for neurosurgeons to improve intra-operative
patient-specific planning by improving the understanding of complex three-dimensional medical
imaging data and prolonging the reliable use of image-guided neurosurgery.
Keywords: Image-guided neurosurgery, brain shift, augmented reality, registration, brain tumor
1 Introduction
Each year thousands of Canadians undergo neurosurgery for resections of lesions in close
proximity to areas of the brain that are critical to movement, vision, sensation, or language.
There is strong support in the literature demonstrating significantly increased survival benefit
with complete resection of primary and secondary brain tumors [1], creating competing
constraints that must be balanced during surgery for each patient: achieving maximal resection of
the lesions while causing minimal neurological deficit.
Since the introduction of the first intraoperative frameless stereotactic navigation device by
Roberts et al. in 1986 [2], image-guided neurosurgery (IGNS), or “neuronavigation”, has become
an essential tool for many neurosurgical procedures due to its ability to minimize surgical trauma
by allowing for the precise localization of surgical targets. For many of these interventions,
preoperative planning is done on these IGNS systems that provide the surgeon with tools to
visualize, interpret, and navigate through patient specific volumes of anatomical, vascular and
functional information while investigating their inter-relationships. Over the past 30 years, the
growth of this technology has enabled application to increasingly complicated interventions
including the surgical treatment of malignant tumors, neurovascular disorders, epilepsy and deep
brain stimulation. The integration of preoperative image information into a comprehensive
patient-specific model enables surgeons to preoperatively evaluate the risks involved and define
the most appropriate surgical strategy. Perhaps more importantly, such systems enable surgery of
previously inoperable cases by facilitating safe surgical corridors through IGNS-identified non-
critical areas.
For intraoperative use, IGNS systems must relate the physical location of a patient with the
preoperative models by means of a transformation that relates the two through a patient-to-image
mapping (Figure 1). By tracking the patient and a set of specialized surgical tools, this mapping
allows a surgeon to point to a specific location on the patient and see the corresponding anatomy
in the pre-operative images and the patient specific models. However, throughout the
intervention, hardware movement, an imperfect patient-image mapping, and movement of brain
tissue during surgery invalidate the patient-to-image mapping [3]. These sources of inaccuracy,
collectively described as ‘brain shift’, reduce the effectiveness of using preoperative patient-
specific models intraoperatively. Unsurprisingly, most surgeons use IGNS systems to plan an
approach to a surgical target but understandably no longer rely on the system throughout the
entirety of an operation when accuracy is compromised and medical image interpretation is
encumbered. Recent advances in IGNS technology have resulted in intraoperative imaging and
registration techniques [4, 5] to help update preoperative images and maintain accuracy.
Advances in visualization have introduced augmented reality techniques [6-9] at different time-
points and for different tasks to help improve the understanding and visualization of complex
medical imaging data and models and to help with intraoperative planning. We present a pilot
study of 8 cases combining the use of intra-operative ultrasound (iUS), for brain shift correction,
and intraoperative augmented reality (AR) visualization with traditional IGNS tools to improve
intraoperative accuracy and interpretation of patient-specific neurosurgical models in the context
of IGNS of tumors. While other groups have investigated iUS and AR independently there are
very few reports [10-14] of using both technologies to overcome the visualization issues related
with iUS and the accuracy issues related to AR. The goal of this pilot study is to investigate the
feasibility of combining iUS based brain shift correction and augmented reality visualizations to
improve both the accuracy and interpretation of complex intra-operative data. Our work aims to
improve on some of the limitations of previous work: it is a prospective rather than retrospective
clinical pilot study [12]; it focuses on evaluation in clinical scenarios as opposed to phantoms or
animal cadavers [11]; segmentations are obtained from high-quality MRI images instead of
difficult-to-interpret US images [12, 14]; patient-specific data are used instead of atlas-based data
for greater registration accuracy [14]; and, finally, a fast US-MRI registration allows for an
efficient workflow that incorporates AR in the OR, consuming less time and providing more
information than previous reports [12, 13].
Figure 1: The patient’s head is positioned, immobilized and a tracked reference frame is
attached. The patient’s preoperative images and physical space are registered by using 8
corresponding landmarks on the head and face to create a correspondence between the two
spaces.
1.1 iUS in Neurosurgery
Intra-operative imaging has seen a wide range of use in neurosurgery over the last two decades.
Its main benefit is the ability to visualize the up-to-date anatomy of a patient during an
intervention. iUS has been proposed and used as an alternative to intra-operative MRI due to its
ease of use, low cost, and widespread availability [5]. iUS is relatively inexpensive and non-
invasive and does not require many changes to the operating room or surgical procedures.
However, its main challenges are associated with relating information to preoperative images,
which are generally of a different modality. The alignment of iUS to MRI images is a
challenging task due to the widely different nature and quality of the two modalities. While voxel
intensity of both modalities is directly dependent on tissue type, US has an additional
dependence on probe orientation and depth that can lead to intensity non-uniformity due to the
presence of acoustic impedance transitions. Preoperative MR images allow for identification of
tissue types, anatomical structures and a variety of pathologies such as cancerous tumors. iUS
images are generally limited to displaying lesion tissue with an associated uncertainty regarding
its boundary, along with a few coarsely depicted structure boundaries. Early reports using iUS in
neurosurgery, such as in Bucholz et al. [15], show success with this technique using brightness
mode (B-mode) information. B-mode US has been used to obtain anatomical information [4, 5,
16] while Doppler US yields flow information for cerebral vasculature [17, 18]. The interested
reader is directed to [3] for a history and overview of iUS in neurosurgery in the context of brain
shift correction.
1.2 Augmented Reality in Neurosurgery
Augmented reality visualizations have become increasingly popular in medical research to help
understand and visualize complex medical imaging data. Augmented reality is defined as “the
merging of virtual objects with the real world (i.e. the surgical field of view)” [19]. The
motivation for these visualizations comes from the desire to merge pre-operative images, models,
and plans with the real physical space of the patient in a comprehensible fashion. These
augmented views have been proposed to better understand the topology and inter-relationships of
structures of interest that are not directly visible in the surgical field of view. AR has been
explored for neurosurgery in the context of skull base surgery [20], trans-sphenoidal neurosurgery
(i.e., for pituitary tumors) [21], microscope-assisted neurosurgery [22], endoscopic neurosurgery
[23, 24], neurovascular surgery [25-27], and primary brain tumor resection planning [28, 29].
This list is far from comprehensive, so the interested reader is referred to Kersten-Oertel et al. [19]
and Meola et al. [13] for detailed reviews of the use of AR in IGNS. In all recently published
studies, AR visualizations are described as enhancing the minimal invasiveness of a procedure
through more tailored, patient-specific approaches. A recent study by Kersten-Oertel et al. [30]
evaluating the benefit of AR for specific neurosurgical tasks demonstrated that a major pitfall of
these types of visualization is the lack of an accurate overlay throughout an intervention, making
them useful only during the early parts of an intervention. Recent literature has tried to address this issue
through interactive overlay realignment [31] or through manipulation of visualization parameters
[32, 33] with some success. In this work, we aim to address this major issue with iUS imaging.
2 Materials and Methods
2.1 Ethics
The MNI/H Ethics Board approved the study and all patients signed informed consent prior to
data collection.
2.2 System Description
All data was collected and analyzed on a custom-built prototype IGNS system, the Intraoperative
Brain Imaging System (IBIS) [34]. This system has previously been described in [25, 34, 35] for
use with iUS and AR as independent technologies. The Linux workstation is equipped with an
Intel Core i7-3820 @ 3.60 GHz x8 processor with 32 GB RAM, a GeForce GTX 670 graphics
card and Conexant cx23800 video capture card. Tracking is performed using a Polaris N4
infrared optical system (Northern Digital, Waterloo, Canada). The Polaris infrared camera uses
stereo triangulation to locate the passive reflective spheres on both the reference and pointing
tools with an accuracy of 0.5 mm [36]. The US scanner, an HDI 5000 (ATL/Philips, Bothell,
WA, USA) equipped with a 2D P7-4 MHz phased array transducer, enables intraoperative
imaging during the surgical intervention. Video capture of the live surgical scene is achieved
with a Sony HDR XR150 camera. Both the camera and US system transmit images using an S-
video cable to the Linux workstation at 30 frames/second. The camera and US transducer probe
are outfitted with a spatial tracking device with attached passive reflective spheres (Traxtal
Technologies Inc., Toronto, Canada) and are tracked in the surgical environment. Figure 2 shows
the main components of the iUS-AR IGNS system.
2.3 Patient-Specific Neurosurgical Models
The patient-specific neurosurgical models refer to all preoperative data (images, surfaces, and
segmented anatomical structures) for an individual patient. All patients involved in this study
followed a basic tumor imaging protocol at the Montreal Neurological Institute and Hospital
(MNI/H), with a gadolinium-enhanced T1-weighted MRI obtained on a 1.5 T scanner
(Ingenia, Philips Medical Systems). All images were processed in a custom image processing
pipeline as follows [37]: First, the MRI is denoised, after estimating the standard deviation of the
MRI Rician noise [38]. Next, intensity non-uniformity correction and normalization is done by
estimating the non-uniformity field [39], followed by histogram matching with a reference image
to normalize the intensities (Figure 3-A). Within this pipeline, the FACE method [40] is used to
obtain a three dimensional model of the patient’s cortex (Figure 3-B). After processing, the
tumor is manually segmented using ITK-Snap [41] and a vessel model is created using a
combination of a semi-automatic intensity thresholding segmentation, also in ITK-Snap, and a
Frangi Vesselness filter [42] (Figure 3-C,D). The processing is done on a local computing cluster
at the MNI and the combined time for the processing pipeline and segmentations is on the order
of 2 hours. A model of the skin surface was also generated using ray-tracing from the processed
images in IBIS using a transfer function to control the transparency of the volume so all
segmented structures can be viewed. The processed images and patient-specific models are then
imported into IBIS (Figure 3-E).
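For illustration, a minimal sketch of these preprocessing steps is given below using SimpleITK as a stand-in for the pipeline tools cited in the text; the file names, the curvature-flow denoiser (substituting for the blockwise non-local means filter of [38]) and the N4 correction (substituting for the non-uniformity correction of [39]) are assumptions for this sketch rather than the actual MNI processing pipeline.

```python
# Sketch of the preoperative MRI preprocessing steps, with SimpleITK stand-ins
# for the cited tools; file names are hypothetical.
import SimpleITK as sitk

# Load the gadolinium-enhanced T1-weighted MRI (hypothetical path).
t1 = sitk.ReadImage("patient_t1_gd.nii.gz", sitk.sitkFloat32)

# 1. Denoising (the paper uses an optimized blockwise non-local means filter [38];
#    curvature flow is used here only as a simple placeholder).
denoised = sitk.CurvatureFlow(t1, timeStep=0.125, numberOfIterations=5)

# 2. Intensity non-uniformity correction (the paper estimates the field as in [39];
#    N4 with an Otsu head mask is shown here).
mask = sitk.OtsuThreshold(denoised, 0, 1, 200)
corrected = sitk.N4BiasFieldCorrection(denoised, mask)

# 3. Intensity normalization by histogram matching against a reference image.
reference = sitk.ReadImage("reference_t1.nii.gz", sitk.sitkFloat32)
normalized = sitk.HistogramMatching(corrected, reference,
                                    numberOfHistogramLevels=1024,
                                    numberOfMatchPoints=7)

sitk.WriteImage(normalized, "patient_t1_preprocessed.nii.gz")
```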
Figure 2: The different components in an iUS-AR IGNS intervention and their relationship with
the surgical and neuronavigation setup. Once the US and live video images are captured from the
external devices, they are imported into the neuronavigation system and all US and AR
visualizations are displayed on the neuronavigation monitor. (Adapted with permission from
[34])
2.4 Tracked Camera Calibration and Creating the Augmented Reality View
To create augmented reality visualizations from images captured by a tracked camera, prior
calibration of the camera-tracker apparatus must be performed. The intrinsic and extrinsic
calibration parameters are determined simultaneously. We determine the intrinsic calibration of
the camera using a printed checkerboard pattern fixed on a flat surface with a rigidly attached
tracker tool using the method described in [34]. The different components and transformation
matrix relationships are shown in Figure 3-F. Multiple images are taken while displacing the
pattern in the camera’s field of view. The intrinsic calibration matrix, $K$, is obtained through
automatic detection of the checkerboard corners and feeding the coordinates and tracked 3D
position through an implementation of Zhang’s method [43]. This also creates a mapping
between the space of the calibration grid and the optical centre of the tracked camera ($T_C$). The
extrinsic calibration matrix ($T_E$) is estimated by minimizing the standard deviation of grid points
transformed by the right side of the following equation
$T_G = T_R \, T_E \, T_C^{-1}$  (1)
where $T_R$ represents the rigid transformation matrix between the tracking reference and the tracked
camera, and $T_G$ is the transformation between the checkerboard tool and its attached tracker. For a
more detailed discussion of this procedure the interested reader is directed to Drouin et al. [44].
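As a rough illustration of the procedure just described, the sketch below uses OpenCV for the intrinsic calibration by Zhang's method and a simple standard-deviation cost for the extrinsic estimate of Eq. (1); the checkerboard geometry, the transform conventions and the Powell optimizer are illustrative assumptions and not the IBIS implementation.

```python
# Minimal sketch of tracked-camera calibration: Zhang's method via OpenCV for
# the intrinsics and a standard-deviation cost for the extrinsic estimate.
import cv2
import numpy as np
from scipy.optimize import minimize

PATTERN = (9, 6)      # inner-corner layout of the printed checkerboard (assumed)
SQUARE_MM = 10.0      # checkerboard square size in mm (assumed)

# 3-D grid points in the checkerboard coordinate system.
grid = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM


def calibrate_intrinsics(images):
    """Detect checkerboard corners in each view and estimate K (Zhang [43])."""
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(grid)
            img_pts.append(corners)
    # rvecs/tvecs are the per-view grid-to-camera poses (the inverse of T_C
    # as written in Eq. (1) above).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs


def pose_to_matrix(rvec, tvec):
    """Build a 4x4 rigid transform from a rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(np.asarray(rvec, np.float64))[0]
    T[:3, 3] = np.asarray(tvec, np.float64).ravel()
    return T


def extrinsic_cost(params, grid_to_cam, cam_tracker_to_ref):
    """Spread of the grid points mapped by the right-hand side of Eq. (1).

    grid_to_cam:        per-view 4x4 grid-to-camera poses (from calibrateCamera)
    cam_tracker_to_ref: per-view 4x4 tracked camera poses from the tracker
    """
    T_E = pose_to_matrix(params[:3], params[3:])       # candidate extrinsic
    pts = np.c_[grid, np.ones(len(grid))].T            # homogeneous grid points
    clouds = np.stack([(T_R @ T_E @ T_C @ pts)[:3].T   # one point cloud per view
                       for T_C, T_R in zip(grid_to_cam, cam_tracker_to_ref)])
    return clouds.std(axis=0).sum()                    # std dev across views


def estimate_extrinsic(grid_to_cam, cam_tracker_to_ref):
    res = minimize(extrinsic_cost, np.zeros(6),
                   args=(grid_to_cam, cam_tracker_to_ref), method="Powell")
    return pose_to_matrix(res.x[:3], res.x[3:])
```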
The calibration error is measured using a leave-one-out cross-validation procedure with the
calibration images to obtain the reprojection error. This is an estimate of the reprojection error
that would be expected if the patient were perfectly registered with the system in the OR; in
practice, this error is compounded with other registration errors that can lead to larger
discrepancies. For the cases in this pilot study, the average calibration error was on the order of
0.89 mm (range 0.60 – 1.12 mm). Once the camera has been calibrated and is being tracked, the
AR view is created by merging virtual objects, such as the segmented tumor, segmented blood
vessels, segmented cortex and iUS images, with the live view captured from the video camera.
To create a perception such that the tumor and other virtual objects appear under the visible
surface of the patient, edges are extracted and retained from the live camera view. Furthermore,
the transparency of the live image is selectively modulated such that the image is more
transparent around the tumor and more opaque elsewhere (Figure 4-D). For more details on these
visualization procedures, the reader is directed to [33, 45].
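A simplified sketch of this compositing scheme is shown below; the edge thresholds, the radial transparency profile and the blending weights are assumptions chosen only to illustrate the idea of retaining real edges while making the live view more transparent over the tumor.

```python
# Illustrative AR compositing: keep edges from the live frame and modulate its
# transparency so the virtual model appears to lie below the visible surface.
import cv2
import numpy as np

def compose_ar_view(live_frame, virtual_render, tumor_center_px, radius_px=120):
    """live_frame, virtual_render: HxWx3 uint8 images; tumor_center_px: (x, y)."""
    h, w = live_frame.shape[:2]

    # Edges of the real scene are retained on top of the blend as depth cues.
    edges = cv2.Canny(cv2.cvtColor(live_frame, cv2.COLOR_BGR2GRAY), 50, 150)
    edge_mask = (edges > 0).astype(np.float32)[..., None]

    # Transparency of the live image is modulated: most transparent at the
    # tumor location, more opaque away from it.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - tumor_center_px[0]) ** 2 + (yy - tumor_center_px[1]) ** 2)
    alpha_live = np.clip(dist / radius_px, 0.15, 0.85)[..., None]  # 0 = transparent

    blend = alpha_live * live_frame + (1.0 - alpha_live) * virtual_render
    blend = np.where(edge_mask > 0, live_frame, blend)  # re-draw the real edges
    return blend.astype(np.uint8)
```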
Figure 3: Flowchart showing the preoperative steps for creating a patient specific model and for
iUS probe calibration and camera calibration for augmented reality. A-Preoperative MRI image
after denoising, intensity non-uniformity correction, and intensity normalization. B-The cortical
surface is extracted from the MRI image using the FACE algorithm [40]. C-Vessels are extracted
using an ITK-Snap thresholding segmentation and a Frangi Vesselness filter [42]. D-The tumor is
manually segmented using ITK-Snap. E-All preoperative models are combined into a patient-
specific model that is imported into the IGNS system. F-Calibration is performed by serial
imaging of a checkerboard pattern with an attached tracker in different positions in the tracked
camera’s field of view, allowing for simultaneous extraction of the intrinsic calibration matrix ($K$)
and extrinsic calibration matrix ($T_E$). $T_R$, $T_C$, and $T_G$ are the different transformation matrices used to
determine $T_E$ [34]. G-Tracked US calibration is performed using an N-wire calibration phantom
and a custom IGNS calibration plugin allowing for aligning of the virtual N-shaped wires with
the intersecting US images [34].
2.5 Tracked US Probe Calibration
When using US guidance for neurosurgery, a correspondence between the physical location of
the images and the physical space of the patient must be established. The accuracy of these
procedures is closely related to that of device tracking, which is on the order of 0.5 - 1.0 mm for
optical tracking systems [36], but is often categorized separately since specific phantoms are
needed to perform the calibration. Among the various calibration techniques, N-wire
phantoms have been the most widely accepted in the literature [46] due to their robustness,
simplicity, and ability to be used by inexperienced users, and this is therefore the technique employed
here. Before each case, the US probe was calibrated using a custom-built N-wire phantom and
calibration plugin following the guidelines described in [46]. The US probe calibration is
performed by rigidly attaching a tracker to the US probe and filling the phantom containing the
N-wires with water. The entire phantom is then registered to an identical virtual model in the
IGNS system using fiducial markers on the phantom followed by imaging the N-wire patterns at
different US positions and probe depth settings (Figure 3-G). The intersection of the N-wire
patterns with the US image slice defines the three-dimensional position of a point in the US
image and three or more patterns together define the calibration transform for the registered
phantom. Within IBIS is a custom manual calibration plugin that allows users to manually
identify the intersection points of the wires within a sequence of US images and the calibration is
automatically recomputed after each interaction [34]. Following the manual calibration, the
calibration is compared with 5 other N-wire images and the accuracy is reported as the root mean
square of the difference between world coordinates and transformed US image coordinates of the
intersection point of the N-wire and the US image plane. The accuracy for each of the cases in
this study was on the order of 1.0 mm, which is consistent with reported and accepted values in
the literature [47].
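The accuracy computation described above can be illustrated with the short sketch below; the array shapes and the transform convention are assumptions rather than the IBIS calibration plugin.

```python
# Sketch of the N-wire calibration accuracy check: RMS distance between the
# known world coordinates of the wire intersections and the same points mapped
# from US image coordinates through the candidate calibration.
import numpy as np

def us_calibration_rms(world_pts_mm, us_image_pts_px, pixel_spacing_mm, image_to_world):
    """world_pts_mm:      (N, 3) intersection points in world coordinates
       us_image_pts_px:   (N, 2) the same points identified in the US images
       pixel_spacing_mm:  (sx, sy) US pixel size at the current depth setting
       image_to_world:    4x4 transform (tracking * probe calibration)
    """
    n = len(world_pts_mm)
    # Lift the 2-D US image points into 3-D homogeneous probe coordinates.
    pts = np.zeros((n, 4))
    pts[:, 0] = us_image_pts_px[:, 0] * pixel_spacing_mm[0]
    pts[:, 1] = us_image_pts_px[:, 1] * pixel_spacing_mm[1]
    pts[:, 3] = 1.0
    mapped = (image_to_world @ pts.T).T[:, :3]
    errors = np.linalg.norm(mapped - world_pts_mm, axis=1)
    return np.sqrt(np.mean(errors ** 2))   # RMS error in mm
```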
2.6 US-MRI registration
MR-US registration techniques to correct for brain shift, based on gradient orientation alignment,
have recently been developed to reduce the effect of the non-homogeneous intensity
response found in iUS images [48]. Once an iUS acquisition has been performed, the collected
slices are reconstructed into a 3D volume, resliced in the axial, coronal, and sagittal views and
overlaid on the existing preoperative images. The current simple volume reconstruction works
with a raster scan strategy: for every voxel within the volume to be reconstructed, all pixels of all
iUS images are evaluated in terms of their distance to that voxel. If a pixel is within
a user-specified distance (e.g., between 1-3 mm), the intensity of the voxel is increased by the
intensity of the iUS pixel, modulated by a Gaussian weighting function. The registration
algorithm is based on gradient orientation alignment [48] and focuses on maximizing the alignment of
gradients with minimal uncertainty in their orientation estimates (i.e., locations with high gradient
magnitude) within the set of images. This can be described mathematically as:
$T^{*} = \arg\max_{T} \sum_{x \in \Omega} \cos^{2}(\theta_{x})$  (2)
where $T^{*}$ is the transformation being determined, $\Omega$ is the overlap domain and $\theta_{x}$ is the inner
angle between the fixed image gradient, $\nabla F(x)$, and the transformed moving image gradient
$\nabla (M \circ T)(x)$:
$\theta_{x} = \angle\left(\nabla F(x), \nabla (M \circ T)(x)\right)$  (3)
The registration is characterized by three major components: (1) a local similarity metric based
on gradient orientation alignment ($\cos^{2}(\theta_{x})$), (2) a multi-scale selection strategy that identifies
locations of interest with gradient orientations of low uncertainty, and (3) a computationally
efficient technique for computing gradient orientations of the transformed moving images [48].
The registration pipeline consists of two stages. During the initial, pre-processing stage, the
image derivatives are computed and areas of low uncertainty gradient orientations are identified.
The second stage consists of an optimization strategy that maximizes the average value of the
local similarity metric evaluated on the locations of interest using a covariance matrix adaptation
evolution strategy [49]. For an in-depth discussion and more details on this procedure, the
interested reader is directed to [48].
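A minimal sketch of the local similarity metric of Eqs. (2) and (3) is shown below; the sampling of the high-confidence locations, the interpolation of the moving image, and the CMA-ES optimization loop [48, 49] are omitted.

```python
# Local similarity metric of Eqs. (2)-(3): squared cosine of the angle between
# fixed- and moving-image gradients, averaged over the selected locations.
import numpy as np

def gradient_orientation_similarity(grad_fixed, grad_moving_transformed, eps=1e-8):
    """grad_fixed, grad_moving_transformed: (N, 3) image gradients sampled at
    the N selected locations x in Omega (the moving gradients already mapped
    through the candidate transform T)."""
    dot = np.sum(grad_fixed * grad_moving_transformed, axis=1)
    norms = (np.linalg.norm(grad_fixed, axis=1) *
             np.linalg.norm(grad_moving_transformed, axis=1) + eps)
    cos_theta = dot / norms                  # cos(theta_x), Eq. (3)
    return np.mean(cos_theta ** 2)           # average cos^2(theta_x), Eq. (2)
```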
This specific framework was chosen for use in the pilot study due to its validation with clinical
US-MRI data [48]. This registration framework has been shown to provide substantially
improved robustness and computational performance in the context of IGNS, motivated by the
fact that gradient orientations characterize the underlying anatomical
boundaries found in MRI images and are more robust to the non-homogeneous intensity
response found in US images. For this pilot study, only rigid registration transformations were
investigated.
Both the volume reconstruction and registration techniques are incorporated into IBIS using a
graphics processing unit (GPU) implementation that allows for high-speed results (on the order
of seconds) for reconstruction and rigid registration. This process is briefly summarized in
Figure 4-C.
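For illustration, the sketch below is a simplified CPU version of the reconstruction described above, written as a splatting loop over US pixels that accumulates Gaussian-weighted intensities into the nearest voxel; the voxel grid geometry, pixel spacing and Gaussian width are assumptions, and the IBIS implementation runs on the GPU.

```python
# Simplified CPU sketch of the raster-style iUS volume reconstruction.
import numpy as np

def reconstruct_us_volume(us_slices, slice_to_volume_transforms, volume_shape,
                          us_pixel_mm=(0.3, 0.3), voxel_size_mm=0.5,
                          max_dist_mm=2.0, sigma_mm=1.0):
    """us_slices: list of (H, W) intensity arrays; slice_to_volume_transforms:
    matching 4x4 transforms mapping pixel positions (in mm, in the image plane)
    into the reconstructed volume's mm coordinate system."""
    volume = np.zeros(volume_shape, np.float32)
    for img, T in zip(us_slices, slice_to_volume_transforms):
        h, w = img.shape
        rows, cols = np.mgrid[0:h, 0:w]
        pix = np.stack([cols.ravel() * us_pixel_mm[0],
                        rows.ravel() * us_pixel_mm[1],
                        np.zeros(h * w), np.ones(h * w)])
        pos_mm = (T @ pix)[:3].T                        # pixel positions in volume mm
        idx = np.round(pos_mm / voxel_size_mm).astype(int)   # nearest voxel only
        dist = np.linalg.norm(pos_mm - idx * voxel_size_mm, axis=1)
        keep = ((dist <= max_dist_mm) & np.all(idx >= 0, axis=1)
                & np.all(idx < np.array(volume_shape), axis=1))
        weights = np.exp(-0.5 * (dist[keep] / sigma_mm) ** 2)
        # Voxel intensity is increased by the Gaussian-weighted US intensity.
        np.add.at(volume, tuple(idx[keep].T), weights * img.ravel()[keep])
    return volume
```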
2.7 Operating Room Procedure
All image processing for each patient-specific model is done prior to the surgical case and
imported into the IBIS IGNS console. Once the patient has been brought into the operating room
and anaesthetized, the patient-to-image registration for IBIS is done simultaneously with the
commercial IGNS system, a Medtronic StealthStation (Dublin, Leinster, Republic of Ireland),
using 8 corresponding anatomical landmark pairs [50] (Figure 4-A). The quality of this
registration is evaluated by the clinical team in the OR. Quantitatively, the fiducial landmark
error must be below a threshold of 5.0 mm, but is often much lower. Qualitatively, the
neurosurgeon evaluates the position of the tracked probe on the skin of the patient to
ensure that an appropriate quality of registration has been achieved. Augmented reality
is then used at different time points during the intervention and its accuracy is qualitatively
assessed: on the scalp to verify the skin incision, on the craniotomy (Figure 4-B) to verify the
craniotomy extent, and on the cortex throughout resection (Figure 4-D). Once the dura has been
exposed, tracked intraoperative US images are acquired and used to re-register the preoperative
images to the patient. The accuracy of the alignment of the AR view on the cortex is then re-
evaluated based on these updated views using both qualitative and quantitative criteria.
Comments from the surgeon and visual inspection are used as qualitative criteria, while the pixel
misalignment error [51] and the target registration error (TRE) of a set of corresponding US-MRI
landmark pairs are used as quantitative criteria. For each patient, a set of 5 corresponding
landmarks was chosen on the preoperative MRI and iUS volumes to calculate the TRE, computed as the
Euclidean distance between pairs of landmarks before and after registration. Landmarks were
chosen in areas of hyperechoic-hypoechoic transition (US) near tumor boundaries, ventricles and
sulci, where corresponding well-defined features on the MRI were also identifiable. Pixel
misalignment error, as the name suggests, is calculated as the distance in pixels between where
the augmented virtual model is displayed on the live view and where its true location lies in the
live view, as determined from a single pair of corresponding landmarks identified by the surgeon
(Figure 4-E). The precision of the pixel misalignment error measurements depends on the
distance between the camera and the patient, but care was taken to keep the AR camera at the
edge of the sterile field so that this distance remained relatively constant between cases and
calibration. The pixel distance is converted to a distance in mm based on the parameters
determined through the camera calibration that define the pixel dimensions [51]. An example
calculation is shown in Figure 5.
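The two quantitative criteria can be summarized in a few lines of code; the sketch below assumes the landmark coordinates and the calibrated pixel size are already available.

```python
# Sketch of the two quantitative measures used in the OR workflow above.
import numpy as np

def target_registration_error(mri_landmarks_mm, us_landmarks_mm):
    """Mean and std of the Euclidean distances between (N, 3) corresponding landmarks."""
    d = np.linalg.norm(mri_landmarks_mm - us_landmarks_mm, axis=1)
    return d.mean(), d.std()

def pixel_misalignment_error_mm(landmark_live_px, landmark_virtual_px, pixel_size_mm):
    """2-D distance (in mm) between the landmark chosen on the live view and the
    corresponding landmark on the augmented virtual model."""
    return float(np.linalg.norm(np.asarray(landmark_live_px, float) -
                                np.asarray(landmark_virtual_px, float)) * pixel_size_mm)
```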
2.8 Study Design
Two neurosurgeons at the Montreal Neurological Institute and Hospital were involved in this
study. Neither surgeon had experience with the AR system before this study, but both have
been involved in work related to the development of the custom neuronavigation system as well as
its use and development for intraoperative US brain shift correction. Neither participant was
trained in interpreting AR images before the study, other than an explanation of what the AR
images represented and how they would be displayed in the OR.
Figure 4: Flowchart of the intraoperative workflow and how surgical tasks are related to IGNS
tasks. A-Patient-to-image registration. After the patient’s head is immobilized a tracking
reference is attached to the clamp and 8 facial landmarks are chosen that correspond to identical
landmarks on the preoperative images to create a mapping between the two spaces. B-
Augmented reality visualization on the skull is being qualitatively assessed by comparing the
tumor contour as defined by the preoperative guidance images and the overlay of the augmented
image. C-A series of US images are acquired once the craniotomy has been performed on the
dura and then reconstructed and registered with the preoperative MRI images using the gradient
orientation alignment algorithm. D-Augmented reality visualization on the cortex showing the
location of the tumor (green) and a vessel of interest (blue). E-The AR accuracy is quantitatively
evaluated by having the surgeon choose an identifiable landmark on the physical patient,
recording the coordinates, and then choosing the corresponding landmark on the augmented
image, recording the coordinates and measuring the two-dimensional distance between the
coordinates.
3 Results
We present the results of our experience in 8 iUS-AR IGNS cases. Relevant patient information
including age, sex and tumor type is summarized in Table 1.
Table 1: Summary of Patient Information

Patient   Sex   Age   Tumor Type    Lobe
1         F     56    Meningioma    L - O/P
2         M     49    Glioma        L - F/T
3         F     72    Metastases    L - O/P
4         M     63    Glioma        R - F
5         F     77    Meningioma    R - F
6         M     24    Glioma        L - F
7         F     62    Glioma        L - O/P
8         F     55    Metastases    R - F

F - Frontal, T - Temporal, O - Occipital, P - Parietal, L - Left, R - Right
3.1 Quantitative Results
For all but one case, the iUS-MRI registration error (TRE) improved to under 3 mm, and the pixel
misalignment error was on the order of 1 – 3 mm. The average improvement was 68%. In case 2,
the camera calibration data was corrupted so the pixel misalignment error could not be
calculated. Table 2 summarizes all iUS-MRI registration and pixel misalignment errors. The
second column represents the registration misalignment from the initial patient-to-image
landmark registration. Columns 3 and 4 represent the registration errors between US and MRI
volumes before and after registration respectively. The final two columns pertain to the virtual
model-to-video registration (pixel misalignment error) before and after US-MRI registration.
3.2 Qualitative Results
Qualitative comments from the surgeons largely reflected the benefit of not having to look at or
interpret US information while still having an accurately overlaid virtual model that
could be used to verify the surgical plan and to visualize the surgical target. Their main
concerns were the limitation of camera maneuverability due to the size of tracking volume,
difficulty in comparing AR visualization with preoperative navigation images, and a learning
curve associated with the technology. Table 3 summarizes the different tasks where AR was used
throughout these interventions and how the surgeons considered it to be useful. Figure 6 is an image
summary of four illustrative cases, showing a surgical view, a pre-registration AR view, and a
post-registration AR view to qualitatively illustrate the improvement of the iUS registration on the
AR visualizations.
Table 2: Summary of Registration and Pixel Misalignment Errors

          Patient-to-Image           Pre iUS-MRI       Post iUS-MRI      Pre-reg Pixel     Post-reg Pixel
          Registration (mm)          Registration      Registration      Misalignment      Misalignment
Patient   IBIS        Medtronic      TRE (mm)          TRE (mm)          Error (mm)        Error (mm)
1         3.23        3.07           6.85 ± 3.14       1.88 ± 0.93       N/A*              N/A*
2         2.88        3.22           5.33 ± 1.67       2.97 ± 1.54       5.39              1.19
3         3.96        3.54           6.31 ± 1.51       2.18 ± 1.06       6.46              1.06
4         4.20        3.66           7.22 ± 2.31       2.34 ± 1.27       6.88              1.80
5         2.77        3.12           7.89 ± 1.62       2.77 ± 1.34       7.20              2.35
6         2.33        3.20           3.58 ± 1.82       2.25 ± 1.57       3.57              1.32
7         4.35        2.98           6.61 ± 3.39       3.48 ± 2.88       5.55              3.27
8         3.85        3.15           5.80 ± 2.27       1.56 ± 1.02       4.32              1.22

*For this case the camera calibration data was corrupted and we were unable to extract the
necessary parameters to measure the misalignment error. The pre- to post-registration
improvement of mean TRE was statistically significant in all cases (group t-test, p < 0.05).
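As an illustration, the comparison reported in the footnote can be reproduced from the mean TRE values listed in Table 2 (the paper describes a group t-test; a paired t-test is used in this sketch):

```python
# Pre- vs. post-registration comparison using the mean TRE values of Table 2.
import numpy as np
from scipy import stats

pre_tre = np.array([6.85, 5.33, 6.31, 7.22, 7.89, 3.58, 6.61, 5.80])   # mm
post_tre = np.array([1.88, 2.97, 2.18, 2.34, 2.77, 2.25, 3.48, 1.56])  # mm

t_stat, p_value = stats.ttest_rel(pre_tre, post_tre)   # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```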
Figure 5: The surgeon was asked to place the tip of the tracked pointer at the closest edge of the
visible tumor in the surgical field ($T_L$) and to select the corresponding location on the
preoperative images/virtual model ($T_V$). The AR view was then initialized, and the pixel
misalignment error was measured as the 2D distance between $T_L$ and $T_V$ on the camera image,
obtained by multiplying the number of pixels between the two points of interest by the pixel size
(as determined from the camera calibration), before and after registration.
Table 3: Summary of AR tasks, benefits, and concerns, as reported by the surgeons using the technology

Pre-operative planning
  Use of AR: Show the location of the tumor and other anatomy of interest after patient positioning and registration.
  Benefits: Share the surgical plan with other assisting physicians.
  Concerns: Limited extent of camera maneuverability due to the tracking volume.

Craniotomy
  Use of AR: Visualize the tumor location in relation to the drawn craniotomy borders below the bone; assess whether there is a loss of accuracy from the skin landmark registration.
  Benefits: Minimize craniotomy size; added comfort in verifying that the virtual tumor location is within the drawn boundaries before removing bone.
  Concerns: Difficult to verify with the navigation images at the same time.

Cortex pre-iUS registration
  Use of AR: Primarily for research purposes, and to determine the loss of accuracy since the beginning of surgery and the initial skin landmark registration.
  Benefits: Despite being used for research, surgeons still found the visualizations useful for understanding the direction of the deviations from the initial registration.
  Concerns: Difficulty in understanding the data during first-time use; limited extent of camera maneuverability due to the tracking volume.

Cortex post-iUS registration
  Use of AR: Intraoperative planning; tumor and vessel identification; assessment of AR/IGNS accuracy.
  Benefits: Surgeons found it helpful to compare their physical interpretation of tumor borders with the virtual borders; they commented on the benefit of seeing a virtual model of vessels in close proximity to the tumor and deep to the area of resection; AR helped confirm their surgical plan.
  Concerns: Limited extent of camera maneuverability due to the tracking volume.
Figure 6: Four illustrative examples of the qualitative improvement of AR for tumor
visualization. The left column is the surgical view, the middle column is the initial AR view
before iUS registration and the right column is the iUS brain shift corrected AR view. A, B, C,
and D are cases 1, 3, 6, and 7 respectively.
4 Discussion and Conclusions
In this study, we successfully combined two important technologies in the context of IGNS of
brain tumors. The combination contributed to a high level of accuracy during AR visualizations
and obviated the need to directly interpret the iUS images throughout the intervention. The fact
that pre-iUS registration error was greater than the registration error following initial patient-to-
image registration highlights the gradual loss of accuracy throughout the intervention [52]. In
each case, we improved the patient-image misalignment by registration with iUS data. This
resulted in several advantages, including more accurate intra-operative navigation and more
reliable AR visualizations, shown qualitatively by visual inspection and quantitatively by the
improved TRE and pixel misalignment error measurements, respectively.
The improved accuracy of the system was evaluated by two metrics: the target registration error
over a series of target points, to assess the registration quality, and the pixel misalignment error,
to assess the improvement in AR overlay quality. A limitation of measuring the accuracy of AR overlays stems from
the lack of a standardized and universal metric in which the error in AR can be quantified. Some
authors use pixel misalignment error [51], while others use pixel reprojection error [26], and
other complex metrics are also described [6, 53]. The pixel misalignment error has the implicit
assumption that the registration with iUS creates a perfectly aligned image. This assumption is
inevitably violated, and thus pixel misalignment error is not a perfect measure of accuracy and is
only an indication of relative error between the two AR views rather than an absolute error for
either view. Despite this limitation, it was deemed here to be the most appropriate for
quantitative evaluation of the AR images. Another consideration regarding the accuracy of the
registration procedure is the effect of heart rate and blood pulsation during iUS acquisition. This
detail was not considered in this pilot project and will be investigated in future work. This work
is intended to serve as a pilot study to assess the feasibility of the combination of US and AR
technologies to improve on each of their shortcomings. For this reason, an US-MRI registration
algorithm that has been reported in the literature to work well with this type of clinical data was
chosen, as well as an AR evaluation metric that was considered the most appropriate in
evaluating the quality of virtual overlay for the data presented in this study. Future work will
require more extensive validation against other US-MR registration frameworks to draw stronger
conclusions about the quality of accuracy improvement, as well as other AR evaluation metrics
to better describe the quality of virtual overlay improvement. Finally, registration errors on the
order of 1.0 mm are ideally desired for neuronavigation-assisted tasks; however, this level of
accuracy is rarely achievable, and a registration error on the order of 2.0 – 3.0 mm is sufficient to
perform the intended tasks appropriately for this pilot study. In future work, with the use of non-
linear registration, we hope to further improve the level of registration accuracy to be closer to
ideal conditions.
In this study, AR views were acquired with the use of an external camera to capture images of
the surgical scene and render the AR view on the computer workstation. This strategy was
employed since one of the surgeons involved in the project does not generally use a microscope
while performing tumor resections. Augmenting microscope images is also compatible within
IBIS and may facilitate integration of AR in the operating room for navigation [34]. While the
justification for AR in some of the cases presented here may be limited due to the tumor's
proximity to the cortex, the potential of AR in more complicated scenarios should not be
underestimated. For smaller tumors located much deeper within the brain or for tumors near
eloquent brain areas, having the ability to see below the surface with accurate visualizations
offered using AR creates the possibility of tailoring resection corridors to minimize the
invasiveness of the surgery. This benefit can only be accomplished if we are able to maintain a
high level of patient-to-image registration accuracy throughout the procedure. Combining iUS
registration and AR with more accurate tumor segmentations, like the process described in [54],
would assist a surgeon in resecting as much tumorous tissue as possible with minimal resection
of healthy tissue without having to rely solely on a mental map of the patient’s anatomy and the
surgeon's ability to discriminate tissue types.
It is clear from the qualitative comments from the surgeons involved in this work that there is a
learning curve associated with AR in the context of IGNS. In the first several cases, the AR was
employed simply as a tool to verify positions of anatomy of interest and to assess the accuracy of
the AR image alignment with the tracked preoperative images. As the surgeons became
comfortable with the system, the length of time AR was used increased and the number of tasks
where it was deemed useful also increased. The surgeons commented on the usefulness of AR to
assess and minimize the extent of the craniotomy, and to assess the location of the anatomy of
interest (i.e. tumors and vessels) once the cortex has been exposed. Additionally, the surgeons
commented on the benefit of using AR to share the surgical plan with assisting residents and
physicians by being able to show the vessels and tumor location before making an incision. The
surgeons also commented on the fact that having a coloured AR image for assessing the anatomy
was more enjoyable than a grey-scale US. The primary concern of the surgeons using the system
was the limitation of camera maneuverability due to the size of tracking volume which also led
to difficulty in comparing the AR visualization with the preoperative navigation images. However,
comfort with the system grew quickly over the first few cases, and its perceived usefulness, the
amount of time it was used, and the amount of information requested all increased. With continued use,
the surgeons found the information increasingly useful as they incorporated it into their
intraoperative planning, suggesting that with reliable accuracy and training this technology could
help improve the minimal invasiveness of surgery and help with patient-model
interpretation.
In conclusion, this pilot study highlights the feasibility of combining iUS registration and AR
visualization in the context of IGNS for tumor resections and some of the advantages it can have.
While many authors have investigated these techniques separately for brain tumor neurosurgery,
few have looked at the benefits of combining these two technologies. Our pilot study in 8
surgical cases suggests that the combined use of these technologies has the potential to improve
on traditional IGNS systems. By adding improved visualization of the anatomy and pathology of
interest, while simultaneously correcting for patient-image misalignment, extended reliable use
of IGNS throughout the intervention can be maintained, which will hopefully lead to more efficient
and minimally invasive surgical interventions. In addition, with accurate AR visualizations, the
neurosurgeon is not required to interpret the iUS images which can be confusing to a non-expert.
With continued development and integration of the two techniques, the proposed iUS-AR system
has potential for improving tasks such as tailoring craniotomies, planning resection corridors and
localizing tumor tissue while simultaneously correcting for brain shift.
5 DISCLOSURES
All authors declare that they have no conflicts of interest.
6 ACKNOWLEDGMENTS
This work was funded in part by NSERC (238739), CIHR (MOP-97820), and an NSERC
CHRP (385864-10).
7 BIOGRAPHIES
Ian J. Gerard, M.Sc.
Ian J. Gerard is a Ph.D. Candidate in Biomedical Engineering at McGill University. His current
research involves improving the accuracy of neuronavigation tools for image-guided
neurosurgery of brain tumours with focus on intraoperative imaging for brain shift management
and enhanced visualization techniques for understanding complex medical imaging data.
Marta Kersten-Oertel, Ph.D.
Dr. Marta Kersten-Oertel is an Assistant Professor at Concordia University specializing in
medical image visualization and image-guided neurosurgery. Her current research involves the
development of an augmented reality neuronavigation system and the combination of advanced
visualization techniques with psychophysics in order to improve the understanding of complex
medical imaging data.
Simon Drouin, M.Sc.
Simon Drouin is a Ph.D. Candidate in Biomedical Engineering at McGill University. His current
research involves user interaction in augmented reality with a focus on depth perception, and
interpretation of visual cues in augmented environments.
Jeffery A. Hall, M.D. M.Sc.
Dr. Jeffery A. Hall is a neurosurgeon and Assistant Professor of Neurology and Neurosurgery at
McGill University’s Montreal Neurological Institute and Hospital, specializing in the surgical
treatment of epilepsy and cancer. His current research, in collaboration with other MNI clinician-
scientists includes: developing non-invasive means of delineating epileptic foci, intraoperative
imaging in combination with neuronavigation systems and the application of image-guided
neuronavigation to epilepsy surgery.
Kevin Petrecca, M.D. Ph.D.
Dr. Kevin Petrecca is a neurosurgeon, Assistant Professor of Neurology and Neurosurgery at
McGill University, and head of Neurosurgery at the Montreal Neurological Hospital,
specializing in neurosurgical oncology. His research at the Montreal Neurological Institute and
Hospital Brain Tumour Research Centre focuses on understanding fundamental molecular
mechanisms that regulate cell motility with a focus on malignant glial cell invasion.
Dante De Nigris, Ph.D.
Dr. De Nigris was formerly a Ph.D. student in the department of electrical and computer
engineering at McGill University. His research interests focus on analyzing and developing
techniques for multimodal image registration, specifically on similarity metrics for challenging
multimodal image registration contexts.
Daniel DiGiovanni, M.Sc.
Daniel DiGiovanni is a Ph.D. student in the Integrated Program in Neuroscience at McGill
University. His research interests focus on the analysis of functional MRI data in the context of
brain tumors.
Tal Arbel, Ph.D.
Dr. Arbel is an Associate Professor in the department of electrical and computer engineering, and
a member of the McGill Centre for Intelligent Machines. Her research goals focus on the
development of modern probabilistic techniques in computer vision and their application to
problems in the medical imaging domain.
D. Louis Collins, Ph.D.
Dr. D. Louis Collins is a professor in Neurology and Neurosurgery, Biomedical Engineering and
associate member of the Center for Intelligent Machines at McGill University. His laboratory
develops and uses computerized image processing techniques such as non-linear image
registration and model-based segmentation to automatically identify structures within the brain.
His other research focuses on applying these techniques to image guided neurosurgery to provide
surgeons with computerized tools to assist in interpreting complex medical imaging data.
8 REFERENCES
1. Sanai, N. and M.S. Berger, Operative techniques for gliomas and the value of extent of resection. Neurotherapeutics, 2009. 6(3): p. 478-86.
2. Roberts, D.W., et al., A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg, 1986. 65(4): p. 545-9.
3. Gerard, I.J., et al., Brain shift in neuronavigation of brain tumors: A review. Med Image Anal, 2016. 35: p. 403-420.
4. Comeau, R.M., et al., Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery. Med Phys, 2000. 27(4): p. 787-800.
5. Mercier, L., et al., Registering pre- and postresection 3-dimensional ultrasound for improved visualization of residual brain tumor. Ultrasound Med Biol, 2013. 39(1): p. 16-29.
6. Azuma, R., et al., Recent advances in augmented reality. IEEE Computer Graphics and Applications, 2001. 21(6): p. 34-47.
7. Liao, H., et al., 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Trans Biomed Eng, 2010. 57(6): p. 1476-86.
8. Tabrizi, L.B. and M. Mahvash, Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. Journal of Neurosurgery, 2015. 123(1): p. 206-211.
9. Liao, H., et al., An integrated diagnosis and therapeutic system using intra-operative 5-aminolevulinic-acid-induced fluorescence guided robotic laser ablation for precision neurosurgery. Med Image Anal, 2012. 16(3): p. 754-66.
10. Gerard, I.J., et al., Improving patient specific neurosurgical models with intraoperative ultrasound and augmented reality visualizations in a neuronavigation environment, in Workshop on Clinical Image-Based Procedures. 2015. Springer.
11. Ma, L., et al., Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study. International Journal of Computer Assisted Radiology and Surgery, 2017.
12. Sato, Y., et al., Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE Transactions on Medical Imaging, 1998. 17(5): p. 681-693.
13. Meola, A., et al., Augmented reality in neurosurgery: a systematic review. Neurosurg Rev, 2016.
14. Xiao, Y., et al., Atlas-Guided Transcranial Doppler Ultrasound Examination with a Neuro-Surgical Navigation System: Case Study, in Clinical Image-Based Procedures. Translational Research in Medical Imaging: 5th International Workshop, CLIP 2016, Held in Conjunction with MICCAI 2016, Munich, Germany, October 5, 2016, Revised Selected Papers, C. Oyarzun Laura, et al., Editors. 2016, Springer International Publishing: Cham. p. 19-27.
15. Bucholz, R.D. and D.J. Greco, Image-guided surgical techniques for infections and trauma of the central nervous system. Neurosurg Clin N Am, 1996. 7(2): p. 187-200.
16. Keles, G.E., K.R. Lamborn, and M.S. Berger, Coregistration accuracy and detection of brain shift using intraoperative sononavigation during resection of hemispheric tumors. Neurosurgery, 2003. 53(3): p. 556-62; discussion 562-4.
17. Reinertsen, I., et al., Intra-operative correction of brain-shift. Acta Neurochir (Wien), 2014. 156(7): p. 1301-10.
18. Reinertsen, I., et al., Clinical validation of vessel-based registration for correction of brain-shift. Med Image Anal, 2007. 11(6): p. 673-84.
19. Kersten-Oertel, M., P. Jannin, and D.L. Collins, DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph, 2012. 18(2): p. 332-52.
20. Cabrilo, I., et al., Augmented reality-assisted skull base surgery. Neurochirurgie, 2014. 60(6): p. 304-6.
21. Kawamata, T., et al., Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery, 2002. 50(6): p. 1393-7.
22. Paul, P., O. Fleig, and P. Jannin, Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging, 2005. 24(11): p. 1500-11.
23. Rosahl, S.K., et al., Virtual reality augmentation in skull base surgery. Skull Base, 2006. 16(2): p. 59-66.
24. Shahidi, R., et al., Implementation, calibration and accuracy testing of an image-enhanced endoscopy system. IEEE Trans Med Imaging, 2002. 21(12): p. 1524-35.
25. Kersten-Oertel, M., et al., Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg, 2015. 10(11): p. 1823-36.
26. Cabrilo, I., P. Bijlenga, and K. Schaller, Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir (Wien), 2014. 156(9): p. 1769-74.
27. Cabrilo, I., P. Bijlenga, and K. Schaller, Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery, 2014. 10 Suppl 2: p. 252-60; discussion 260-1.
28. Low, D., et al., Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas. Br J Neurosurg, 2010. 24(1): p. 69-74.
29. Stadie, A.T., et al., Virtual reality system for planning minimally invasive neurosurgery. Technical note. J Neurosurg, 2008. 108(2): p. 382-94.
30. Kersten-Oertel, M., et al., Augmented Reality for Specific Neurovascular Surgical Tasks, in Augmented Environments for Computer-Assisted Interventions: 10th International Workshop, AE-CAI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015. Proceedings, A.C. Linte, Z. Yaniv, and P. Fallavollita, Editors. 2015, Springer International Publishing: Cham. p. 92-103.
31. Drouin, S., M. Kersten-Oertel, and D. Louis Collins, Interaction-Based Registration Correction for Improved Augmented Reality Overlay in Neurosurgery, in Augmented Environments for Computer-Assisted Interventions: 10th International Workshop, AE-CAI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015. Proceedings, A.C. Linte, Z. Yaniv, and P. Fallavollita, Editors. 2015, Springer International Publishing: Cham. p. 21-29.
32. Kersten-Oertel, M., S.J. Chen, and D.L. Collins, An Evaluation of Depth Enhancing Perceptual Cues for Vascular Volume Visualization in Neurosurgery. IEEE Trans Vis Comput Graph, 2013.
33. Kersten-Oertel, M., et al., Augmented reality visualization for guidance in neurovascular surgery. Stud Health Technol Inform, 2012. 173: p. 225-9.
34. Drouin, S., et al., IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg, 2016.
35. Mercier, L., et al., New prototype neuronavigation system based on preoperative imaging and intraoperative freehand ultrasound: system description and validation. Int J Comput Assist Radiol Surg, 2011. 6(4): p. 507-22.
36. Gerard, I.J. and D.L. Collins, An analysis of tracking error in image-guided neurosurgery. Int J Comput Assist Radiol Surg, 2015.
37. Guizard, N., et al., Robust individual template pipeline for longitudinal MR images, in MICCAI 2012 Workshop on Novel Biomarkers for Alzheimer's Disease and Related Disorders. 2012.
38. Coupe, P., et al., An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Trans Med Imaging, 2008. 27(4): p. 425-41.
39. Sled, J.G., A.P. Zijdenbos, and A.C. Evans, A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans Med Imaging, 1998. 17(1): p. 87-97.
40. Eskildsen, S.F. and L.R. Ostergaard, Active surface approach for extraction of the human cerebral cortex from MRI, in Medical Image Computing and Computer-Assisted Intervention. 2006. Springer Berlin Heidelberg.
41. Yushkevich, P., et al., User-Guided Level Set Segmentation of Anatomical Structures with ITK-SNAP. Insight Journal, Special Issue on ISC, 2005. NA-MIC/MICCAI Workshop on Open-Source Software.
42. Frangi, A.F., et al., Multiscale vessel enhancement filtering, in Medical Image Computing and Computer-Assisted Intervention - MICCAI'98: First International Conference, Cambridge, MA, USA, October 11-13, 1998, Proceedings, W.M. Wells, A. Colchester, and S. Delp, Editors. 1998, Springer Berlin Heidelberg: Berlin, Heidelberg. p. 130-137.
43. Zhang, Z., Camera calibration with one-dimensional objects. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2004. 26(7): p. 892-899.
44. Drouin, S., et al., A realistic test and development environment for mixed reality in neurosurgery, in Augmented Environments for Computer-Assisted Interventions. 2012. Springer Berlin Heidelberg.
45. Kersten-Oertel, M., et al., Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg, 2015.
46. Mercier, L., et al., A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound Med Biol, 2005. 31(4): p. 449-71.
47. Carbajal, G., et al., Improving N-wire phantom-based freehand ultrasound calibration. Int J Comput Assist Radiol Surg, 2013. 8(6): p. 1063-72.
48. De Nigris, D., D.L. Collins, and T. Arbel, Fast rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound for neurosurgery based on high confidence gradient orientations. Int J Comput Assist Radiol Surg, 2013. 8(4): p. 649-61.
49. Hansen, N. and A. Ostermeier, Completely derandomized self-adaptation in evolution strategies. Evol Comput, 2001. 9(2): p. 159-95.
50. Gerard, I.J., et al., New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note. Neurosurgery, 2015. 11 Suppl 3: p. 376-80; discussion 380-1.
51. Caversaccio, M., et al., Augmented reality endoscopic system (ARES): preliminary results. Rhinology, 2008. 46(2): p. 156-8.
52. Nabavi, A., et al., Serial intraoperative magnetic resonance imaging of brain shift. Neurosurgery, 2001. 48(4): p. 787-97; discussion 797-8.
53. Holloway, R.L., Registration error analysis for augmented reality. Presence: Teleoperators and Virtual Environments, 1997. 6(4): p. 413-432.
54. Subbanna, N.K., et al., Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes. Med Image Comput Comput Assist Interv, 2013. 16(Pt 1): p. 751-8.