
Head-worn 3D-Visualization of the Invisible for Surgical Intra-Operative Augmented Reality (H3D-VISIOnAiR)

Public deliverable for the ATTRACT Final Conference
Jaap Heukelom,1* Nicole D. Bouvy,2 Lejla Alic,3 Maarten Burie,4 Vincent Graham,1 Rutger M. Schols,5 Fokko P.
Wieringa,6 Gabrielle J.M. Tuijthof,7
1i-Med Technology BV, Oxfordlaan 55 6229 EV Maastricht, The Netherlands; 2 Maastricht University Medical Center dept. General
Surgery, PO Box 5800 6202 AZ Maastricht NL; 3 University of Twente dept. Magnetic Detection and Imaging, PO Box 217 7500 AE
Enschede NL; 4Cin-energy BV, Oxfordlaan 55 6229 EV Maastricht NL; 5 Maastricht University Medical Center dept. Plastic Surgery,
PO Box 5800 6202 AZ Maastricht NL; 6Foundation Imec, High Tech Campus 31 5656 AE Eindhoven NL; 7University Maastricht
dept. IDEE, P.O. Box 616 6200 MD Maastricht NL
*Corresponding author:
During surgery, surgeons must identify vital anatomical structures (nerves and lymph nodes) to prevent damage. Correct identification remains enormously challenging, and surgeons therefore need high-tech intra-operative imaging. Our team developed a demonstrator of H3D-VISIOnAiR that visualizes the invisible by combining a commercial spectral + RGB camera, advanced image analytics and a near-eye display. Tests were performed with a simulated surgical task consisting of positioning beads in a nailbed with very low contrast. The results show proof of concept through real-time high-resolution 2D image acquisition, processing of hyperspectral images and display of an augmented reality overlay of the processed hyperspectral images.
Keywords: near eye display; hyperspectral imaging; automated tissue classification; human tissue; surgery
When cutting away diseased tissue, surgeons must at the same time correctly identify vital anatomical structures such as nerves, lymphatic tissue and blood vessels to prevent accidental damage to these structures. Identifying them remains enormously challenging, especially due to natural differences between individual human bodies. High-tech imaging techniques are a true breakthrough aid for surgeons, complementing their anatomical knowledge with reliable high-resolution visual discrimination of critical anatomical structures.
The targeted breakthrough and disruptive system
offers head-worn augmented reality (AR) for surgical use.
It consists of two multi-spectral cameras (combining
visual range with near infrared visualization), a belt
computer with data processing, and a high-end
stereoscopic near-eye display with wireless connection to
the operating room digital display and archive
infrastructure. The spectral signature of specific pre-
defined tissues will be used to develop machine-learning
models to segment vital anatomical structures. These
models will be used to generate the AR-overlays on top of
the current clinical field of view.
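The segmentation idea described above, training a classifier on labelled per-pixel spectral signatures and applying it to every pixel to produce an overlay mask, can be illustrated as follows. This is a minimal sketch, not the project's actual models: the band count, tissue classes and synthetic spectra are all assumptions invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel reflectance spectra over 16 hypothetical bands.
# In the envisioned system, labelled spectra would come from annotated
# tissue samples; here two classes are fabricated with slightly
# different mean reflectance curves purely for illustration.
N_BANDS = 16
nerve_mean = np.linspace(0.4, 0.7, N_BANDS)  # assumed shape, not measured data
fat_mean = np.full(N_BANDS, 0.5)             # assumed shape, not measured data

def sample_spectra(mean, n):
    """Draw n noisy spectra around a mean reflectance curve."""
    return mean + 0.05 * rng.standard_normal((n, N_BANDS))

X = np.vstack([sample_spectra(nerve_mean, 200), sample_spectra(fat_mean, 200)])
y = np.array([1] * 200 + [0] * 200)  # 1 = nerve, 0 = fat

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify every pixel of an (H, W, bands) spectral cube to obtain a
# binary mask that could drive an AR overlay.
cube = sample_spectra(nerve_mean, 8 * 8).reshape(8, 8, N_BANDS)
mask = clf.predict(cube.reshape(-1, N_BANDS)).reshape(8, 8)
```

In the real system such a mask would be rendered on top of the live RGB view; here it is just a small NumPy array.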
The main result is a demonstrator prototype that
consists of the following hardware: a commercial
spectral+RGB camera in a head mounted display with a
dedicated near infrared LED ring that intermittently
illuminates the scene for improved spectral image
acquisition. The accompanying software enables the entire chain of high-resolution image acquisition, pre-processing for correct RGB display, processing of spectral images with filters, and creation of an AR overlay based on spectral image filtering, displayed on top of the corrected RGB image (Fig. 2). Due to limitations of the commercial camera, we developed a simulated surgical scene with low contrast (a black background and black beads to be placed over black nails). Surgeons executed
this task with and without the spectral image enhancement
to show proof of concept of visualizing the invisible. In parallel, we developed a completely new dissection protocol to recruit porcine samples containing nerve embedded in fatty tissue, and recruited 50 samples for fast training of the advanced machine-learning algorithms.
Originating from the late 1980s, spectral imaging applications are relatively well known in industry. Whereas the human eye senses the colour of visible light via three types of cone cells sensitive to red, green, and blue, spectral imaging adds additional spectral content, which can be extended beyond the visible range (Fig. 1). Various intra-operative optical
imaging techniques have been proposed to identify
critical tissues (1-3) including ultrasonography (1),
optical coherence tomography (4), optoacoustic imaging
(4) and collimated polarized light imaging (5).
Furthermore, the use of optical contrast agents (e.g. saline,
indocyanine green dye, methylene blue or 5-
aminolevulinic acid) in combination with infrared,
fluorescence or near-infrared imaging allows the
identification of various critical tissues and can be
combined with normal imaging for straightforward
interpretation (1, 3, 4). Although (near-infrared)
fluorescence imaging has been most widely implemented,
the injection of exogenous contrast agents can lead to
anaphylactic reactions, and contrast agents only allow
visualization of a subset of critical tissues.
In contrast to (near-infrared) fluorescence imaging,
spectral imaging requires no foreign substances to be
administered. This imaging technique identifies natural
reflectance signatures of tissues based on their chemical
composition (Fig. 1). This is similar to the perception of
colors by our own eyes, but with the added benefit that
many more spectral bands can be used (6). So, one can
literally see beyond human vision by adding useful
information from the abundant spectral bands (Fig. 1).
Positive results have been found for differentiating
between normal and tumor tissue in thyroid and
parathyroid glands (7), and differentiating nerve tissue
from surrounding tissue (8).
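One simple way to exploit such reflectance signatures is spectral angle matching: each pixel's spectrum is compared against stored mean signatures, and the pixel is assigned the tissue whose signature forms the smallest angle. The sketch below uses invented reference vectors, not measured tissue spectra.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    signature; a smaller angle means a closer match. This is the standard
    spectral-angle-mapper similarity measure."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical mean signatures (arbitrary units) for two tissue types.
nerve_ref = np.array([0.30, 0.45, 0.60, 0.72, 0.80])
fat_ref = np.array([0.55, 0.54, 0.56, 0.55, 0.57])

# An unknown pixel spectrum that happens to track the nerve reference.
pixel = np.array([0.32, 0.44, 0.58, 0.70, 0.79])
label = "nerve" if spectral_angle(pixel, nerve_ref) < spectral_angle(pixel, fat_ref) else "fat"
```

Because the angle is scale-invariant, this measure is relatively robust to overall illumination intensity, which matters under varying operating-room lighting.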
Recently, we have shown (4, 9-11) the feasibility of
fibre optic spectral analysis to identify tissue-specific
optical reflectance signatures. We showed the different
spectral signatures between nerve, lymph, muscle, ureter,
fatty, thyroid and parathyroid human tissues (Fig. 1). As
these results were achieved with single-spot
measurements in full contact-mode between the tissue and
the optic fiber, intra-operative application is laborious.
So, our attention was redirected to the feasibility of 3D optical imaging and non-contact spectral imaging (12-14). Further development of this inherently elegant imaging technique has been slow due to the large dimensions and cumbersome handling of spectral camera systems, and the lack of optimal spectral, spatial and temporal resolution for surgery (15).
The H3D-VISIOnAiR project aims to solve these cumbersome limitations by combining miniaturized front-end sensor technology (imec & Ximea multi-spectral cameras) with unsurpassed compact full-HD back-end technology (i-MedTech HD near-eye stereoscopic displays), AND clinical expertise on spectral tissue discrimination (Maastricht UMC) with innovative real-time augmented reality image processing (Fig. 2 Top). The latter generates unique, unprecedented datasets of spectral signatures of critical human tissues that feed dedicated spectral chip design.
The intended H3D-VISIOnAiR product weighs 250 grams and the belt-worn computer 800 grams. The present benchmark device (the Leica ARveo surgical microscope) weighs 350 kg and takes up a considerable amount of valuable space around the patient: a thousand-fold reduction in weight and size for a fraction of the price. Finally, the intended H3D-VISIOnAiR product offers the ergonomic ease-of-use of surgical magnifying glasses, supporting optimal eye-hand coordination and unrestricted freedom of movement for the surgeon (Fig. 3).
Fig. 1. Left: Typical view when performing thyroid surgery. It illustrates the difficulty of identifying critical tissues such as arteries and nerves, because they have similar colours to the surrounding tissue. Right: Mean spectra per tissue type acquired during colorectal surgery. Average tissue spectra for ureter (green), mesenteric adipose tissue (dark blue), artery (red), colon (light blue), muscle (black), and vein (purple) (9). The visible range ends at 780 nm, marked by the red vertical line.
[Fig. 2 flow diagrams: future foreseen H3D-VISIOnAiR process flow (tissue sample annotation → fingerprint definition → trained dataset → real-time fingerprint extraction → AR overlay) and realised demonstrator process flow during ATTRACT (illumination + simulated surgical task → RGB+NIR image acquisition → pre-processing: white-black balance and NIR camera resolution correction → AR overlay filter (edge detection) → RGB image + NIR image combined into normal and AR display: RGB + AR overlay).]
Fig. 2. Top: Scheme depicting the future H3D-VISIOnAiR product performance in the operating room. The blue boxes represent off-line a priori information on spectral fingerprints of tissues that serves as input to the AI algorithms performing real-time fingerprint extraction. Bottom: Scheme depicting the demonstrator prototype in combination with a simulated surgical task and tissue sample recruitment, as achieved within this year's ATTRACT period.
Over the past year, our team worked on developing a demonstrator prototype of the H3D-VISIOnAiR, as elucidated below and in Fig. 2 (Bottom).
4.1 Recruitment of tissue samples
As no intraoperative solution exists, we focused on discriminating nerve in fatty tissue during, for example, thyroid surgery (Fig. 1). Subsequently, we recruited porcine tissue samples from both surgical training courses and the slaughterhouse, as they mimic human tissue, are easily available and allow fast off-line training of the AI algorithms to be developed (Fig. 2 Bottom, blue boxes). A dedicated dissection protocol was developed and applied to recruit 50 porcine samples containing nerve in fatty tissue from the neck area. Identification of the tissues was performed in the gold-standard manner by eyesight, palpation and careful dissection. Annotation was performed with conventional surgical markers, which are also applicable in the operating room for validation.
Currently, we are scientifically validating the spectral
resemblance between porcine and human tissue.
4.2 Experimental set up
To start the development of the H3D-VISIOnAiR, an ex
vivo experimental setup was built with a commercial
spectral (NIR) + RGB 2D camera (SM2X2RGBNIR,
Ximea GmbH, Münster, Germany) and 4 halogen lamps
(mimicking operating room light) to quickly collect
spectral tissue information (Fig. 3 Left). Unfortunately, the spectral + RGB camera offered disappointing performance due to the presence of cross talk, a lack of software processing capabilities and improper white balancing of RGB images.
Fig. 3. Left: Experimental set up with porcine tissue sample
without optimizations. Right: Adjusted experimental set up with
optimal illumination for the simulated surgical task. A:
RGB+NIR camera. B: normal illumination. C: 800 nm NIR
illumination. D: tissue sample. E: Simulated surgical task.
Furthermore, the single bandwidth (around 800 nm) was
not suitable to actually discriminate between fat and nerve
tissue. So, we focused on demonstrating the feasibility of
a lightweight head mounted display using the spectral +
RGB camera; and on developing accompanying software
that offers the entire workflow as presented in Fig. 2
Bottom. The experimental set-up was used to optimize the illumination by adding an 800 nm infrared light source and to develop a simulated surgical task adapted to the capabilities of the camera (Fig. 3 Right).
4.3 Simulated surgical task
From the literature, we selected a commonly used simulated surgical task for skills training, which consisted of picking up beads with tweezers and placing them over a bed of nails in a certain pattern (Fig. 3 and 5). We adapted the task such that it would be difficult to perform with normal eyesight, and would become easy to perform when information from the spectral image was added as an augmented reality overlay. Thereto, a black environment was created with a black nailbed, giving low contrast. Also, two sets of beads were painted: one set with normal black paint and one set with carbon-free black paint. The latter set appears light grey in the spectral image and thereby offers the ingredient to discern the black beads from their black surroundings.
4.4 Demonstrator prototype
With all information from previous steps we built a
demonstrator. For the hardware, we integrated the
commercial spectral+RGB camera in one of our head
mounted displays (i-Med Technology BV) (Fig. 4). To
minimize cross talk, RGB and spectral images were acquired intermittently at double speed to allow real-time image display. Also, a dedicated near-infrared (800 nm) LED ring was built that illuminated the scene in synchronization with the spectral image acquisition. We
developed custom software algorithms in Python, which allowed high-resolution 12-bit image acquisition (RGB and spectral intermittently), proper white balancing of the RGB images, real-time spectral image segmentation and augmented reality display of the spectral information on top of the normal RGB image (Fig. 5).
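The software chain described above (white balancing, spectral segmentation, AR overlay) can be sketched with plain NumPy. This is an illustrative toy, not the demonstrator's code: the gains model, threshold and overlay colour are all assumptions.

```python
import numpy as np

def white_balance(rgb, white_patch):
    """Scale the colour channels so a known white reference becomes
    neutral: a simplified stand-in for the prototype's white-balancing
    step."""
    gains = white_patch.mean() / white_patch.mean(axis=(0, 1))
    return np.clip(rgb * gains, 0.0, 1.0)

def nir_overlay(rgb, nir, threshold=0.5, colour=(0.0, 1.0, 0.0), alpha=0.5):
    """Segment the NIR frame by a simple threshold and alpha-blend the
    resulting mask as a coloured AR overlay onto the RGB frame."""
    mask = nir > threshold
    out = rgb.copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(colour)
    return out, mask

# Tiny synthetic frames: a bright NIR blob (e.g. a carbon-free bead)
# on a dark background, paired with a plain dark RGB frame.
rgb = np.full((8, 8, 3), 0.2)
nir = np.zeros((8, 8))
nir[2:5, 2:5] = 0.9

balanced = white_balance(rgb, white_patch=np.full((4, 4, 3), 0.8))
overlay, mask = nir_overlay(balanced, nir)
```

In the demonstrator this runs per frame, with RGB and NIR captures interleaved; here a single static pair of frames stands in for the live stream.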
Surgeons in our team performed the simulated surgical task with and without the added spectral information (Fig. 5). They confirmed the added value of the additional spectral information. The display was experienced as real-time, and the demonstrator was light enough to wear, although the weight of the camera created an undesired moment. The tests indicate proof of concept of visualizing the invisible, and the feasibility of a head-worn advanced intraoperative imaging system.
Fig. 4. Demonstrator of H3D-VISIOnAiR. Spectral+RGB
camera, head mounted display, dedicated infrared LED ring.
5.1 Technology Scaling
In parallel, the 'Digital Surgical Loupe' (the main product of i-Med Technology) has been developed, which allows real-time 3D full-HD display. The most logical approach is therefore to follow the roadmap of this product, as it has already solved some of the issues that also apply to the H3D-VISIOnAiR product (currently TRL 3). Furthermore, a key step is the actual spectral fingerprint determination, for which we have laid the foundation, but which requires a spectral camera with many spectral bands. With this crucial information, dedicated spectral filters can be developed and the AI algorithms can be trained. Additionally, a camera driver board without cross talk needs to be developed. From then on, the conventional design-for-manufacturing trajectory is followed, maximizing the use of standard components and teaming up with experienced production partners in our network. Validation and CE marking are performed within our network of surgeons, who will also be the launching customers of the H3D-VISIOnAiR product (reaching TRL 7 when all steps are executed).
[Fig. 5 image panels, left to right: simulated surgical task; raw RGB+NIR image; raw RGB image; corrected RGB image; NIR image; AR overlay; corrected RGB + AR overlay image.]
Fig. 5. From left to right: photo of the simulated surgical task taken with a conventional photo camera; combined spectral+RGB image; separation into RGB and spectral images, with the RGB image corrected and the spectral image filtered by conventional colour-based and edge-detection segmentation; combination into an RGB image with augmented reality overlay.
5.2 Project Synergies and Outreach
Our current consortium of i-Med Technology BV, imec, Maastricht UMC+, Twente University, CIN-ergy BV, and Azilpix covers spectral filter design, embedded software development, AI algorithm development, 3D imaging and end-users. We would highly benefit from extending the knowledge and expertise to AR displays, FPGA design, high-end camera chip design as well as accessories. ATTRACT I drew our attention to related projects that seem to have strong commonality or could reinforce each other: 1. FUSCLEAN (Spain), 2. HERALD (Belgium), 3. Mixed Reality For Brain Functional and Structural Navigation During Neuro Surgery (Italy).
Following the dissemination deliverables set in this ATTRACT consortium, we continue to give demonstrations at exhibitions and conferences, to publish new results in engineering and medical peer-reviewed journals, and to give presentations at conferences by the professors in our team.
5.3 Technology application and
demonstration cases
Within ATTRACT II, we would develop the actual H3D-
VISIOnAiR product. This will contribute scientifically
to contactless identification of spectral fingerprints for
all critical tissues (nerve, vessels, lymph nodes, tumours)
embedded in different tissue types (fat, muscle) and
would generate datasets for fast off-line training of AI
algorithms. This will also contribute to industry by enabling the design of dedicated spectral hardware filters (initially for tissues, though the method can be applied to many more substances), dedicated FPGA programming to allow real-time data processing, miniaturized back-end technology, and software building blocks for high-end fast processing of big data.
Finally, society would benefit greatly from the result of
the project, by first and foremost allowing safe surgery
for all, which also radically changes the way residents are
trained. The technology will also be applicable for other
domains such as professional telepresence, remote
observation vessels, airborne drones, telemanipulators in
hazardous environments, agriculture applications by
analysing soil and leaves, food quality and safety,
pharmaceutical monitoring and forensic analysis.
5.4 Technology commercialization
When the product reaches TRL 7 in the third year of ATTRACT II, clinical trials will be performed at the MUMC+ to acquire European MDR Class IIa medical certification. In parallel, MUMC+ has indicated it will buy a minimum of 5 VISIOnAiR systems. It will use these to perform surgical precision procedures, but also to perform studies developing new medical applications (such as oncology). Several interviews with surgeons and hospitals show huge interest in this new head-worn technology, with the claimed safe and high-precision surgery a breakthrough advantage that could radically improve the quality of surgery. Through the network of imec, our partner for the RGB-NIR and hardware filter development, new customers will also be found in medical and other application domains as mentioned in Section 5.3.
In the third year, sales of normal RGB head-mounted systems in Europe are predicted at €10 million. The price of a H3D-VISIOnAiR will be 60% higher than that of a standard system, and it will already generate an additional €1 million in the first year of introduction. By the fifth year, this share will have grown to 20% of the total turnover, at €4 million. The main market will be Europe until FDA certification is obtained in the third year for the VISIOnAiR. At CES (US) in January 2020, clear interest was expressed by US and European investors, provided a higher TRL level can be demonstrated. With ATTRACT II, we can achieve this.
5.5 Envisioned risks
It could be the case that the sensitivity of the camera at the required working distance of 40 cm is not sufficient. This would cause a delay, as a re-design of the camera chip and chosen filters must be done with partner imec. In parallel, more effort should be made to improve the algorithms. Another way to improve sensitivity is to optimize ambient illumination with small, powerful pulsating NIR LEDs of specific wavelengths.
Another risk is that the processing power of the wearable PC is insufficient to calculate the overlay information in near real-time (<32 ms). This could lead to a lower frame rate, from 60 to 30 fps. In the launching-customer phase, this might be acceptable. To mitigate this risk, a straightforward backup is to use a powerful desktop computer with a wired connection. We also develop the AI algorithms such that training can be performed off-line, and increase processing speed by adding an FPGA.
Finally, the development could require more money and time than the estimated €1.5 million. With proof of concept, we are confident that we can attract new investors. We expect this to be successful, as a 3D HD demonstration system will certainly be available.
5.6 Liaison with Student Teams and
Socio-Economic Study
In summer 2019, an MSc team of TU Delft investigated new applications for the H3D-VISIOnAiR, and advised focusing on security in ATTRACT II. More teams will be invited to address other application areas and to investigate common interests with related ATTRACT I projects (Section 5.2).
During the ATTRACT II project, two PhD students will work on AI algorithm development (led by the University of Twente) and tissue fingerprint analysis (led by Maastricht University).
A relevant socioeconomic study is to analyse the overall impact on patients, and also on their families, when unnecessary surgical errors are prevented using H3D-VISIOnAiR. This is closely linked to Health Technology Assessment, which should be performed in any case and is part of the development strategy.
The authors thank J. Dabekaussen, C. Dreissen, S. Gerards, S. Linckens and E. Toffoli from the dept. IDEE, Maastricht University (NL), and students E. de Vries, V. van Dal and B. Vroemen for their contributions to the development of the prototype, the tissue dissection protocol and the experimental set-up, as well as tissue recruitment. This project has received funding from the ATTRACT project funded by the EC under Grant Agreement
[1] de Boer E, Harlaar NJ, Taruttis A, Nagengast WB,
Rosenthal EL, Ntziachristos V, et al. Optical innovations in
surgery. Br J Surg. 2015;102(2):e56-72.
[2] Schols RM, Bouvy ND, van Dam RM, Stassen LP.
Advanced intraoperative imaging methods for laparoscopic
anatomy navigation: an overview. Surg Endosc.
[3] Al-Taher M, Hsien S, Schols RM, Hanegem NV, Bouvy
ND, Dunselman GAJ, et al. Intraoperative enhanced imaging
for detection of endometriosis: A systematic review of the
literature. Eur J Obstet Gynecol Reprod Biol. 2018;224:108-
[4] Schols RM, Bouvy ND, van Dam RM, Masclee AA,
Dejong CH, Stassen LP. Combined vascular and biliary
fluorescence imaging in laparoscopic cholecystectomy. Surg
Endosc. 2013;27(12):4511-7.
[5] Chin K, Engelsman AF, Chin PTK, Meijer SL, Strackee
SD, Oostra RJ, et al. Evaluation of collimated polarized light
imaging for real-time intraoperative selective nerve
identification in the human hand. Biomed Opt Express.
[6] Lu G, Fei B. Medical hyperspectral imaging: a review. J
Biomed Opt. 2014;19(1):10901.
[7] Lu G, Little JV, Wang X, Zhang H, Patel MR, Griffith
CC, et al. Detection of Head and Neck Cancer in Surgical
Specimens Using Quantitative Hyperspectral Imaging. Clin
Cancer Res. 2017;23(18):5426-36.
[8] Stelzle F, Adler W, Zam A, Tangermann-Gerk K, Knipfer
C, Douplik A, et al. In vivo optical tissue differentiation by
diffuse reflectance spectroscopy: preliminary results for
tissue-specific laser surgery. Surg Innov. 2012;19(4):385-93.
[9] Schols RM, Alic L, Beets GL, Breukink SO, Wieringa FP,
Stassen LP. Automated Spectroscopic Tissue Classification in
Colorectal Surgery. Surg Innov. 2015;22(6):557-67.
[10] Schols RM, Alic L, Wieringa FP, Bouvy ND, Stassen LP.
Towards automated spectroscopic tissue classification in
thyroid and parathyroid surgery. Int J Med Robot. 2017;13(1).
[11] Schols RM, ter Laan M, Stassen LP, Bouvy ND,
Amelink A, Wieringa FP, et al. Differentiation between nerve
and adipose tissue using wide-band (350-1,830 nm) in vivo
diffuse reflectance spectroscopy. Lasers Surg Med.
[12] Bauer JR, Beekum Kv, Klaessens J, Noordmans HJ, Boer
C, Hardeberg JY, et al. Towards real-time non contact spatial
resolved oxygenation monitoring using a multi spectral filter
array camera in various light conditions: SPIE; 2018.
[13] den Blanken M, van den Berg S, Liberton M, Grimbergen M, Hofman MBM, Verdaasdonk RM. Quantification of cutaneous allergic reactions using 3D optical imaging: a feasibility study. Skin Research and Technology. 2019.
[14] Klaessens JHGM, Nelisse M, Verdaasdonk RM,
Noordmans HJ. Multimodal tissue perfusion imaging using
multi-spectral and thermographic imaging systems applied on
clinical data: SPIE; 2013.
[15] Schols RM, Dunias P, Wieringa FP, Stassen LP.
Multispectral characterization of tissues encountered during
laparoscopic colorectal surgery. Med Eng Phys.
ResearchGate has not been able to resolve any citations for this publication.
Full-text available
Intraoperative peripheral nerve lesions are common complications due to misidentification and limitations of surgical nerve identification. This study validates a realtime non-invasive intraoperative method of nerve identification. Long working distance collimated polarized light imaging (CPLi) was used to identify peripheral radial nerve branches in a human cadaver hand by their nerve specific anisotropic optical reflection. Seven ex situ and six in situ samples were examined for nerves, resulting after histological validation, in a 100% positive correct score (CPLi) versus 77% (surgeon). Nerves were visible during a clinical in vivo observation using CPLi. Therefore CPLi is a promising technique for intraoperative nerve identification.
Full-text available
In the past decade, there has been a major drive towards clinical translation of optical and, in particular, fluorescence imaging in surgery. In surgical oncology, radical surgery is characterized by the absence of positive resection margins, a critical factor in improving prognosis. Fluorescence imaging provides the surgeon with reliable and real-time intraoperative feedback to identify surgical targets, including positive tumour margins. It also may enable decisions on the possibility of intraoperative adjuvant treatment, such as brachytherapy, chemotherapy or emerging targeted photodynamic therapy (photoimmunotherapy). This article reviews the use of optical imaging for intraoperative guidance and decision-making. Image-guided cancer surgery has the potential to be a powerful tool in guiding future surgical care. Photoimmunotherapy is a theranostic concept (simultaneous diagnosis and treatment) on the verge of clinical translation, and is highlighted as an effective combination of image-guided surgery and intraoperative treatment of residual disease. Multispectral optoacoustic tomography, a technique complementary to optical image-guided surgery, is currently being tested in humans and is anticipated to have great potential for perioperative and postoperative application in surgery. Significant advances have been achieved in real-time optical imaging strategies for intraoperative tumour detection and margin assessment. Optical imaging holds promise in achieving the highest percentage of negative surgical margins and in early detection of micrometastastic disease over the next decade. © 2015 BJS Society Ltd. Published by John Wiley & Sons Ltd.
Full-text available
Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called hypercube, with two spatial dimensions and one spectral dimension. Spatially resolved spectral imaging obtained by HSI provides diagnostic information about the tissue physiology, morphology, and composition. This review paper presents an overview of the literature on medical hyperspectral imaging technology and its applications. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application.
Full-text available
Clinical interventions can cause changes in tissue perfusion, oxygenation or temperature. Real-time imaging of these phenomena could be useful for surgical strategy or understanding of physiological regulation mechanisms. Two noncontact imaging techniques were applied for imaging of large tissue areas: LED based multispectral imaging (MSI, 17 different wavelengths 370 nm-880 nm) and thermal imaging (7.5 to 13.5 μm). Oxygenation concentration changes were calculated using different analyzing methods. The advantages of these methods are presented for stationary and dynamic applications. Concentration calculations of chromophores in tissue require right choices of wavelengths The effects of different wavelength choices for hemoglobin concentration calculations were studied in laboratory conditions and consequently applied in clinical studies. Corrections for interferences during the clinical registrations (ambient light fluctuations, tissue movements) were performed. The wavelength dependency of the algorithms were studied and wavelength sets with the best results will be presented. The multispectral and thermal imaging systems were applied during clinical intervention studies: reperfusion of tissue flap transplantation (ENT), effectiveness of local anesthetic block and during open brain surgery in patients with epileptic seizures. The LED multispectral imaging system successfully imaged the perfusion and oxygenation changes during clinical interventions. The thermal images show local heat distributions over tissue areas as a result of changes in tissue perfusion. Multispectral imaging and thermal imaging provide complementary information and are promising techniques for real-time diagnostics of physiological processes in medicine.
The diagnosis of peritoneal endometriosis during laparoscopy may be difficult due to the polymorphic aspects of the lesions. Enhanced imaging using contrast agents has potential to provide a better identification of peritoneal endometriosis. The aim of this systematic review is to provide an overview of the literature on what is known about the intraoperative laparoscopic visual enhancement of peritoneal endometriosis using contrast agents. A systematic review was done of studies about enhanced imaging during laparoscopy for endometriosis using contrast agents. Clinical studies which contained a description of imaging with a contrast agent and also reported visual findings of endometriosis during laparoscopy, were included. Nine suitable studies were identified. Intraoperative visualization of endometriosis was analyzed with or without histologic confirmation. Four studies evaluated 5-aminolevulinic acid-induced fluorescence (5-ALA), 1 study evaluated indigo carmine, 2 studies evaluated methylene blue (MB), 1 study evaluated indocyanine green (ICG) and 1 study evaluated so-called bloody peritoneal fluid painting. All studies, with a combined total of 171 included patients, showed potential of enhanced visibility of endometriosis using contrast agents. A combined total of 7 complications, all related to the use of 5-ALA, were reported. We conclude that the use of contrast-based enhanced imaging during laparoscopy is promising and that it can provide a better visualization of peritoneal endometriosis. However, based on the limited data no technique of preference can yet be identified.
Purpose: This study intends to investigate the feasibility of using hyperspectral imaging (HSI) to detect and delineate cancers in fresh, surgical specimens of patients with head and neck cancers. Experimental Design: A clinical study was conducted in order to collect and image fresh, surgical specimens from patients (N = 36) with head and neck cancers undergoing surgical resection. A set of machine-learning tools were developed to quantify hyperspectral images of the resected tissue in order to detect and delineate cancerous regions which were validated by histopathological diagnosis. More than two million reflectance spectral signatures were obtained by HSI and analyzed using machine-learning methods. The detection results of HSI were compared with autofluorescence imaging and fluorescence imaging of two vital-dyes of the same specimens. Results: Quantitative HSI differentiated cancerous tissue from normal tissue in ex vivo surgical specimens with a sensitivity and specificity of 91% and 91%, respectively, and which was more accurate than autofluorescence imaging (p<0.05) or fluorescence imaging of 2-NBDG (p<0.05) and proflavine (p<0.05). The proposed quantification tools also generated cancer probability maps with the tumor border demarcated and which could provide real-time guidance for surgeons regarding optimal tumor resection. Conclusions: This study highlights the feasibility of using quantitative HSI as a diagnostic tool to delineate the cancer boundaries in surgical specimens, and which could be translated into the clinic application with the hope of improving clinical outcomes in the future.
BACKGROUND: In (para-)thyroid surgery, iatrogenic parathyroid injury should be prevented. To aid the surgeon's eye, a camera system enabling parathyroid-specific image enhancement would be useful. Hyperspectral camera technology might work, provided that the spectral signature of parathyroid tissue offers enough specific features to be reliably and automatically distinguished from surrounding tissues. As a first step to investigate this, we examined the feasibility of wide-band diffuse reflectance spectroscopy (DRS) for automated spectroscopic tissue classification, using silicon (Si) and indium gallium arsenide (InGaAs) sensors. METHODS: DRS (350-1830 nm) was performed during (para-)thyroid resections. From the acquired spectra, 36 features at predefined wavelengths were extracted. The best features for classifying parathyroid versus adipose or thyroid tissue were assessed by binary logistic regression for the Si- and InGaAs-sensor ranges. Classification performance was evaluated by leave-one-out cross-validation. RESULTS: In 19 patients, 299 spectra were recorded (62 tissue sites: thyroid = 23, parathyroid = 21, adipose = 18). Classification accuracy for parathyroid versus adipose was 79% (Si), 82% (InGaAs) and 97% (Si/InGaAs combined). Parathyroid versus thyroid classification accuracies were 80% (Si), 75% (InGaAs) and 82% (Si/InGaAs combined). CONCLUSIONS: Si and InGaAs sensors are fairly accurate for automated spectroscopic classification of parathyroid, adipose and thyroid tissues. Combining both sensor technologies improves accuracy. Follow-up research aimed towards hyperspectral imaging seems justified.
Background. In colorectal surgery, detecting the ureters and mesenteric arteries is of utmost importance to prevent iatrogenic injury and to facilitate intraoperative decision making. A tool enabling ureter- and artery-specific image enhancement within (and possibly through) surrounding adipose tissue would address this need, especially during laparoscopy. To evaluate the potential of hyperspectral imaging in colorectal surgery, we explored spectral tissue signatures using single-spot diffuse reflectance spectroscopy (DRS). As hyperspectral cameras with silicon (Si) and indium gallium arsenide (InGaAs) sensor chips are becoming available, we investigated spectral distinctive features for both sensor ranges. Methods. In vivo wide-band (wavelength range 350-1830 nm) DRS was performed during open colorectal surgery. From the recorded spectra, 36 features were extracted at predefined wavelengths: 18 gradients and 18 amplitude differences. For classification of ureter and artery, respectively, in relation to surrounding adipose tissue, the best distinctive feature was selected using binary logistic regression for the Si- and InGaAs-sensor spectral ranges separately. Classification performance was evaluated by leave-one-out cross-validation. Results. In 10 consecutive patients, 253 spectra were recorded on 53 tissue sites (including colon, adipose tissue, muscle, artery, vein, ureter). Classification of ureter versus adipose tissue revealed an accuracy of 100% for both the Si and InGaAs ranges. Classification of artery versus surrounding adipose tissue revealed accuracies of 95% (Si) and 89% (InGaAs). Conclusions. Intraoperative DRS showed that Si and InGaAs sensors are equally suited for automated classification of ureter versus surrounding adipose tissue. Si sensors seem better suited for classifying artery versus mesenteric adipose tissue. Progress toward hyperspectral imaging within this field is promising.
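The classification pipeline recurring in these DRS studies, a binary logistic regression on a selected spectral feature scored by leave-one-out cross-validation, can be sketched in a few lines of plain Python. The single-feature gradient-descent fit below is a generic stand-in, and the toy feature values and labels are invented for illustration; they are not the study's ureter/adipose measurements.

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=2000):
    """One-feature binary logistic regression fitted by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Classify as positive when the predicted probability reaches 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

def loo_accuracy(xs, ys):
    """Leave-one-out CV: train on all-but-one spectrum, test the held-out one."""
    hits = 0
    for i in range(len(xs)):
        w, b = fit_logistic(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        hits += predict(w, b, xs[i]) == ys[i]
    return hits / len(xs)

# Toy data: one spectral feature per tissue site; label 1 vs label 0 tissue.
feats = [0.9, 0.8, 0.85, 0.95, 0.1, 0.2, 0.15, 0.05]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(loo_accuracy(feats, labels))  # 1.0 on this cleanly separable toy set
```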
Background: Intraoperative nerve localization is of great importance in surgery. In certain procedures, where nerves visually resemble surrounding adipose tissue, this can be particularly challenging for the human eye. An example of such a delicate procedure is thyroid and parathyroid surgery, where iatrogenic injury of the recurrent laryngeal nerve can result in transient or permanent vocal problems (0.5-2.0% reported incidence). A camera system enabling nerve-specific image enhancement would be useful in preventing such complications. This might be realized with hyperspectral camera technology using silicon (Si) or indium gallium arsenide (InGaAs) sensor chips. Methods: As a first step towards such a camera, we evaluated the performance of diffuse reflectance spectroscopy by analysing spectra collected during 18 thyroid and parathyroid resections. We assessed the contrast information present in the two spectral ranges of Si and InGaAs sensors, respectively. Two hundred fifty-three in vivo wide-band diffuse reflectance spectra (350-1,830 nm range, 1 nm resolution) were acquired on 52 tissue spots, including nerve (n = 22), muscle (n = 12), and adipose tissue (n = 18). We extracted 36 features from these spectroscopic data: 18 gradients and 18 amplitude differences at predefined points in the tissue spectra. The best distinctive feature combinations were established using binary logistic regression. Classification performance was evaluated by leave-one-out (LOO) cross-validation. To generalize nerve recognition applicability, we performed a train-test (TT) validation using the thyroid and parathyroid surgery data for training and carpal tunnel release surgery data (10 nerve spots and 5 adipose spots) for classification. Results: For combinations of two distinctive spectral features, LOO revealed an accuracy of 78% for Si sensors and 95% for InGaAs sensors.
TT validation revealed accuracies of 67% (Si) and 100% (InGaAs), respectively. Conclusions: Using diffuse reflectance spectroscopy, we found that InGaAs sensors are better suited than Si sensors for automated discrimination between nerves and surrounding adipose tissue.
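The two feature types used in these DRS studies, gradients and amplitude differences between predefined wavelengths, are simple to compute once the spectrum is sampled on a known wavelength grid. A minimal sketch, assuming a 1 nm grid starting at 350 nm; the wavelength pair and the synthetic spectrum are illustrative placeholders, not the studies' actual selections.

```python
START_NM = 350  # first sampled wavelength of the reflectance spectrum

def gradient_feature(spectrum, nm_a, nm_b):
    """Slope of the reflectance curve between two predefined wavelengths."""
    i, j = nm_a - START_NM, nm_b - START_NM
    return (spectrum[j] - spectrum[i]) / (nm_b - nm_a)

def amplitude_difference(spectrum, nm_a, nm_b):
    """Reflectance amplitude difference between two predefined wavelengths."""
    i, j = nm_a - START_NM, nm_b - START_NM
    return spectrum[j] - spectrum[i]

# Synthetic linear "spectrum": reflectance rises 0.001 per nm over 350-1830 nm.
spectrum = [0.001 * k for k in range(1481)]
g = gradient_feature(spectrum, 970, 1200)      # slope, approximately 0.001 per nm
d = amplitude_difference(spectrum, 970, 1200)  # amplitude difference, approximately 0.23
```

Per site, a fixed set of such features (18 of each type in the studies above) forms the input vector for the classifier.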
Bile duct injury in patients undergoing laparoscopic cholecystectomy is a rare but serious complication. Concomitant vascular injury worsens the outcome of bile duct injury repair. Near-infrared fluorescence imaging using indocyanine green (ICG) is a promising, innovative, and noninvasive method for the intraoperative identification of biliary and vascular anatomy during cholecystectomy. This study assessed the practical application of combined vascular and biliary fluorescence imaging in laparoscopic gallbladder surgery for early biliary tract delineation and confirmation of the arterial anatomy. Patients undergoing elective laparoscopic cholecystectomy were enrolled in this prospective, single-institution study. To delineate the major bile ducts and arteries, a dedicated laparoscope offering both conventional and fluorescence imaging was used. ICG (2.5 mg) was administered intravenously immediately after induction of anesthesia and, in half of the patients, repeated at establishment of the critical view of safety for concomitant arterial imaging. During dissection of the base of the gallbladder and the cystic duct, the extrahepatic bile ducts were visualized. Intraoperative recognition of the biliary structures was registered at set time points, as was visualization of the cystic artery after repeat ICG administration. Thirty patients were included. Using the ICG filter of the laparoscope, ICG was visible in the liver and bile ducts within 20 minutes after injection and remained visible for up to approximately 2 h. In most cases, the common bile duct (83%) and cystic duct (97%) could be identified significantly earlier than with the conventional camera mode. In 13 of 15 patients (87%), confirmation of the cystic artery was obtained successfully after repeat ICG injection. No intraoperative or postoperative complications occurred as a consequence of ICG use.
Biliary and vascular fluorescence imaging in laparoscopic cholecystectomy is easily applicable in clinical practice, can be helpful for earlier visualization of the biliary tree, and is useful for the confirmation of the arterial anatomy.