A Robotic Hyperspectral Scanning Framework for Endoscopy
F.B. Avila-Rencoret1,2, D.S. Elson1,2, G. Mylonas2
1The Hamlyn Centre for Robotic Surgery, 2Department of Surgery and Cancer
Imperial College London, UK.
fba13@ic.ac.uk
INTRODUCTION
Gastrointestinal (GI) endoscopy is the gold-standard
procedure for detection and treatment of dysplastic
lesions and early stage GI cancers [1]. In spite of its
proven effectiveness, its sensitivity remains suboptimal
due to the subjective nature of the examination, which is
substantially reliant on human-operator skills. For bowel
cancer, colonoscopy can miss up to 22% of dysplastic
lesions, with even higher miss rates for small (<5 mm
diameter) and flat lesions [2]. The proposed system seeks
to improve the sensitivity of GI endoscopy by automated
scanning and real-time classification of wide tissue areas
based on their hyperspectral (HS) features. A “hot-spot”
map is generated to highlight dysplastic or cancerous
lesions for further scrutiny or concurrent resection [3].
The device works as an add-on accessory to any
conventional endoscope and to our knowledge is the first
of its kind.
ENGINEERING SPECIFICATIONS
The device comprises a radial array of optical sensors that
can be partially rotated and translated along the GI tract
while acquiring optical data (Fig. 1, A-C). The optical
sensor array consists of eight single-point diffuse
reflectance spectroscopy (DRS) fibre optic probes,
introduced in a collapsed configuration as an over-tube
add-on accessory to any conventional endoscope.
Deployment of the collapsed sensors is achieved by
externally actuated tendons and their corresponding Bowden force
transmission conduits. Each probe contains an
illumination fibre that emits white light (Fig. 1, A). The
light is diffusely scattered inside the tissue and a parallel
fibre collects and transmits it to a spectrograph (V10,
SPECIM) where it is diffracted into its spectrum and finally
captured by an sCMOS camera (optiMOS, QImaging).
An image processing sub-routine converts each acquired
spectral image into 8 DRS spectra (Fig. 1, A-left).
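A minimal MATLAB sketch of such a conversion is given below; it assumes the eight spectra appear as separate horizontal bands in each camera frame, a linear pixel-to-wavelength mapping based on the calibration factors p1 and p2 (Fig. 1), and pre-recorded white and dark reference spectra. All function and variable names are illustrative rather than those of the actual framework.

```matlab
function [drs, lambda] = frameToDRS(frame, bandRows, p1, p2, whiteRef, darkRef)
% Convert one sCMOS spectral image into 8 calibrated DRS spectra (illustrative sketch).
%   frame    - raw spectral image from the camera (rows x wavelength pixels)
%   bandRows - 8x2 matrix with first/last image row of each probe's band (assumed layout)
%   p1, p2   - pixel-to-wavelength calibration factors (linear mapping assumed)
%   whiteRef - 8xN white-standard reference spectra
%   darkRef  - 8xN dark reference spectra
    nProbes = size(bandRows, 1);          % eight probes in the radial array
    nPix    = size(frame, 2);             % wavelength axis of the camera
    lambda  = p1 * (1:nPix) + p2;         % pixel-to-wavelength conversion (assumed linear)
    drs     = zeros(nProbes, nPix);
    for k = 1:nProbes
        % Sum the image rows belonging to probe k to obtain its raw spectrum
        raw = sum(double(frame(bandRows(k,1):bandRows(k,2), :)), 1);
        % Normalise against the dark-corrected white standard to get diffuse reflectance
        drs(k, :) = (raw - darkRef(k, :)) ./ (whiteRef(k, :) - darkRef(k, :));
    end
end
```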
The device is simultaneously and continuously translated
and rotated to scan and obtain a 2D map of the entire
lumen under interrogation. Rotation is provided by an
external hollow rotary actuator through which the
endoscope is inserted (SMH88, Stogra). Translation
along the lumen using the endoscope as a rail is achieved
by a stepper motor (SM56, Stogra) actuating a linear
stage (404XE, Parker).

Fig. 1: (A) Optical design of an individual DRS sensor: the DRS signal is acquired in each probe by an illumination [I] and a collection [C] fibre (core:cladding=200:20 µm, N.A.=0.22). All terminations of the collection fibres are arranged in a 1x8 array (not shown) in front of a spectrograph attached to an sCMOS camera that captures images containing the spectra of each probe. Pre-defined pixel-to-wavelength calibration factors p1, p2 and white standard references are used to convert spectral images to DRS spectra. Top-left plot: representative 781 DRS spectra (mean ± SD) of simulated adenomas (ade) vs. 3049 DRS spectra of simulated mucosa (bkg). (B) Front view of the device in collapsed and deployed configurations. (C) Current fixed scanning mechanism based on a hollow rotary actuator mounted on top of a linear actuator. The device is axially translated as an over-tube add-on to any conventional endoscope.

Actuation control, data acquisition, HS data processing, and
visualisation are fully integrated into a MATLAB framework. The
position of all probes is derived from a single angular and
linear position reported by the actuators. Each scanning
position is co-registered with its corresponding DRS
spectrum. Grayscale images are reconstructed by
integrating the total intensity of each DRS spectrum. A
user interface allows real-time visualisation of acquired
raw and processed data as 2D and 3D maps, where the
3D map corresponds to the 2D image texture-mapped to
a cylinder, simulating a 3D representation of the colon
(Fig. 2, d-e).
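A simplified MATLAB sketch of this kind of reconstruction (not the actual framework code) is shown below; it assumes that for every sample the probe's angular position (rotary encoder reading plus the probe's fixed angular offset), its axial position from the linear stage, and its DRS spectrum are already co-registered, and that grid spacing, phantom radius, and all variable names are illustrative.

```matlab
% Build the 2D grayscale map from co-registered samples and texture-map it
% onto a cylinder for the 3D view (illustrative sketch).
%   thetaDeg - angular position of each sample, deg (rotary encoder + probe offset)
%   zmm      - axial position of each sample, mm (linear stage encoder)
%   drs      - samples x wavelengths matrix of DRS spectra
angStep = 1;                                  % angular bin size, deg
axStep  = 0.4;                                % axial bin size, mm
col = round(thetaDeg(:) / angStep) + 1;       % image column from angle
row = round(zmm(:) / axStep) + 1;             % image row from axial position
intensity = sum(drs, 2);                      % integrate total intensity of each spectrum
img = accumarray([row, col], intensity, [], @mean, NaN);

% 3D view: wrap the 2D map around a cylinder approximating the lumen
R = 25;                                       % assumed radius, mm (5 cm diameter phantom)
[nr, nc] = size(img);
[T, Z] = meshgrid((0:nc-1) * angStep * pi/180, (0:nr-1) * axStep);
surf(R*cos(T), R*sin(T), Z, img, 'EdgeColor', 'none');
colormap gray; axis equal tight;
```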
EXPERIMENTAL VALIDATION
This abstract focuses on characterising the optical
resolution achievable with the current setup on rigid and
deformable targets simulating the colon. For rigid targets,
we scanned a plastic tube internally covered with a
standard resolution target (1951 USAF). The scanning
sequence comprised a 48º partial rotation (step size=1º)
while axially advancing the device along the tubular
target (step size=0.4 mm). Actuation errors were
measured as |pos_ro − pos_re|, where pos_ro is the
rounded pixel coordinate of the reconstructed image, and
pos_re is the real position reported by the encoders.
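One possible reading of this metric is sketched below in MATLAB: each commanded scan position is rounded onto the reconstruction grid and compared with the corresponding encoder reading, separately for the rotary and linear axes. The pairing of commanded versus reported positions and all variable names are assumptions for illustration.

```matlab
% Actuation error per axis as |pos_ro - pos_re| (illustrative sketch).
%   thetaCmd, thetaEnc - commanded and encoder-reported angular positions, deg
%   zCmd, zEnc         - commanded and encoder-reported axial positions, mm
angStep = 1;                                     % rotation step, deg
axStep  = 0.4;                                   % translation step, mm
posRoAng = round(thetaCmd / angStep) * angStep;  % rounded grid coordinate, deg
posRoAx  = round(zCmd / axStep) * axStep;        % rounded grid coordinate, mm
errAng = abs(posRoAng - thetaEnc);               % angular actuation error
errAx  = abs(posRoAx  - zEnc);                   % linear actuation error
fprintf('Angular error: %.2f deg (SD %.2f)\n', mean(errAng), std(errAng));
fprintf('Axial error:   %.3f mm (SD %.3f)\n', mean(errAx), std(errAx));
```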
As deformable targets, silicone phantoms were
manufactured (Ecoflex 00-30). The phantoms included
simulated flat lesions (cylindrical features Ø = 0.5 to 6.0
mm, height = 0.7 mm) that were pigmented to provide a
clear HS signal (ade, top-left Fig. 1A) over the simulated
mucosa background colour (bkg).
RESULTS
The average angular rotation error of the current device
is 0.02º (SD 0.06º). The linear stage positioning error is
specified as ±0.02 mm. The optical resolution
achievable is 0.5 line pairs per mm (Fig. 2a). This optical
resolution is consistent with the smallest simulated flat
pre-cancerous lesion resolved (0.75 mm in diameter), even
when scanning was performed inside deformable and
stretchable phantoms (Fig. 2, b-e).
DISCUSSION
We report for the first time 2D and 3D reconstruction of
endoscopic HS data acquired via robotic scanning of a
simulated colon by a radial array of contact single-point
DRS probes. Sub-millimetre optical resolution has been
demonstrated, which could allow the identification of flat
pre-cancerous lesions that are currently missed. The size
of the device is compatible with the anatomical
dimensions of the colon, but further miniaturisation is
desirable. Our work towards the clinical applicability of
the device currently concentrates on negotiating the
variable diameter, folds, and flexures of the colon.
Angular actuation along highly tortuous paths via a
torque transmission cable is at the centre of our
investigations, as well as the integration of safety features
like pressure sensing on the probes. Ultimately, classified
HS data could be spatially registered with the video
stream of any conventional endoscope. This device will
pave the way towards the next generation of augmented
reality endoscopy while increasing its sensitivity and
specificity.
This work is supported by the Institutional Strategic
Support Fund: Networks of Excellence Scheme 2015
(GM), UK ERC award 242991 (DSE) and Becas Chile
scholarship from the Chilean Government (FBA).
REFERENCES
[1] Pawa, N., T. Arulampalam, and J.D. Norton, Screening for
colorectal cancer: established and emerging modalities.
Nat Rev Gastroenterol Hepatol, 2011. 8(12): p. 711-22.
[2] van Rijn, J.C., et al., Polyp miss rate determined by tandem
colonoscopy: a systematic review. Am J Gastroenterol,
2006. 101(2): p. 343-50.
[3] Avila-Rencoret, F., et al., Towards a robotic-assisted
cartography of the colon: a proof of concept, in IEEE
ICRA, Seattle, WA, 2015, pp. 1757-1763.
Fig. 2: In vitro optical resolution validation experiment. (a) Cropped sub-section of a scanned paper 1951 USAF target showing
a maximal optical resolution of 0.5 line pairs per mm. (b) External and (c) proximal views of a scan inside a 5 cm diameter
silicone phantom containing patterns of simulated flat pre-cancerous lesions of different diameters. (d) Resulting reconstructed 3D
grayscale image (texture-mapped from 2D), and (e) 2D image "hotspot" map based on grayscale pixel intensities. The smallest
simulated flat pre-cancerous lesions resolved by the device are 0.75 mm in diameter.