G.-Z. Yang et al. (Eds.): MICCAI 2009, Part I, LNCS 5761, pp. 483–490, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Optical Biopsy Mapping for Minimally Invasive Cancer
Screening
Peter Mountney1,2, Stamatia Giannarou2, Daniel Elson2,
and Guang-Zhong Yang1,2
1 Department of Computing
2 Institute of Biomedical Engineering Imperial College, London SW7 2BZ, UK
{peter.mountney,stamatia.giannarou03,ds.elson,
g.z.yang}@imperial.ac.uk
Abstract. The quest for providing tissue characterization and functional map-
ping during minimally invasive surgery (MIS) has motivated the development
of new surgical tools that extend the current functional capabilities of MIS.
Miniaturized optical probes can be inserted into the instrument channel of stan-
dard endoscopes to reveal tissue cellular and subcellular microstructures, allow-
ing excision-free optical biopsy. One of the limitations of such a point based
imaging and tissue characterization technique is the difficulty of tracking
probed sites in vivo. This prohibits large area surveillance and integrated func-
tional mapping. The purpose of this paper is to present an image-based tracking
framework by combining a semi model-based instrument tracking method with
vision-based simultaneous localization and mapping. This allows the mapping
of all spatio-temporally tracked biopsy sites, which can then be re-projected
back onto the endoscopic video to provide a live augmented view in vivo, thus
facilitating re-targeting and serial examination of potential lesions. The pro-
posed method has been validated on phantom data with known ground truth and
the accuracy derived demonstrates the strength and clinical value of the
technique. The method facilitates a move from the current point based optical
biopsy towards large area multi-scale image integration in a routine clinical
environment.
1 Introduction
With recent advances in biophotonics and surgical instrumentation, there is an in-
creasing demand to bring cellular and molecular imaging modalities to an in vivo, in
situ setting to allow for real-time tissue characterization, functional assessment and
intra-operative guidance. Miniaturization of the confocal laser scanning microscope,
for example, has led to imaging probes that can be inserted into the instrument chan-
nel of a standard endoscope to visualize cellular and subcellular microstructures to
provide ‘optical biopsy’ without excision of tissue. Following the application of a
contrast agent, this can allow for the detection of colorectal adenomas, disruption in
the pit pattern of the colon, angiogenesis, and neoplasia in Barrett’s esophagus [1]. It
has also been used without a contrast agent to detect malignant disruption of the bron-
chial basement membrane using elastin autofluorescence [2]. Other techniques that
enable microscopic detection and characterization of tissue include Optical Coherence
Tomography (OCT), two photon excited fluorescence and high magnification endo-
scopy [3]. There have also been successful clinical trials of techniques that acquire
detailed spectroscopic information for cancer detection, for example using the time-
or wavelength-resolved fluorescence or Raman properties.
For in vivo applications, all of these techniques suffer from the limitation of only
providing a small, localized probe region whilst the organs of interest may require a
large surface area to be surveyed. Technically, the main difficulty of tracking the
optical biopsy sites is that these probes leave no marks on the tissue. Furthermore,
the optical biopsy sites move in and out of the view in a standard endoscope image as
the examination progresses and may deform as a result of respiration or tissue-
instrument interaction. Current approaches to long-term tissue-instrument tracking
assume the use of rigid laparoscopes and availability of optical markers [4]. Structure
from motion has been used to reconstruct 3D tissue models, but it suffers from drift
and does not work well when revisiting biopsy sites [5]. For extending the effective
field-of-view of the endoscopic image, image mosaicing [6] and dynamic view ex-
pansion [7] have been used to reconstruct enlarged field-of-views, although these
techniques tend not to explicitly deal with motion parallax.
In practice, optical probes are typically introduced through the instrument channel
while holding the endoscope stationary. Since the probe needs to be placed in contact
with the tissue when the optical biopsy takes place, tracking the tip of the probe en-
ables the localization of the biopsy site. To this end, it is necessary to take into ac-
count scale, rotation and illumination changes when tracking the tool. Current ap-
proaches to needle and surgical instrument tracking may be applicable [8, 9], but a
combined approach by integrating probe tracking with a 3D probabilistic map built in
situ using only white light endoscopic images with no additional fiducials can ensure
robustness and practical clinical use. This work proposes an image-based tracking
system based on SLAM (Simultaneous Localization and Mapping) for optical probes.
This will allow for subsequent localization and contextual analysis of microstructures
or guiding real tissue biopsy. The main contribution of this paper is to combine
SLAM with probe tracking to create a 3D model of the tissue surface and spatio-
temporally tracked optical biopsy sites. These biopsy sites are subsequently re-
projected back onto the image plane to provide a live augmented view in vivo, thus
facilitating re-targeting and serial examination. The proposed method has been
Fig. 1. (a) A typical microconfocal fluorescence image showing the microstructure of a sample,
(b) the relative configuration of a confocal fluorescence probe when inserted through the in-
strument channel of a standard endoscope, and (c) a typical endoscopic white light image of the
bronchus used for navigation
validated on phantom data with known ground truth. The method will facilitate a shift
from the current point based optical biopsy towards multi-scale image integration in a
routine clinical environment.
2 Methods
2.1 Probabilistic Mapping
The first step of the proposed tracking framework is to establish a probabilistic map-
ping of the environment. Previous work on SLAM based approaches has shown the
ability to generate 3D tissue models and recover the relative pose of the endoscope
A long-term map is generated, making the approach more resilient to drift and error accumulation over time and thus well suited to returning to previously targeted areas. In
this work, a vision-based sequential approach has been used. This is based on an Extended Kalman Filter (EKF) framework with a state vector $x$ containing the position $(c_x, c_y, c_z)$, orientation quaternion $(q^1_c, q^2_c, q^3_c, q^4_c)$, translational velocity $(v_x, v_y, v_z)$ and angular velocity $(\omega_x, \omega_y, \omega_z)$ of the endoscope. In addition, the state vector also stores the 3D locations $(y_x, y_y, y_z)$ of salient features in the map. A constant velocity, constant angular velocity motion model is used to predict the endoscope's motion with Gaussian noise. Accompanying the state vector is the covariance matrix, which stores the uncertainty of the endoscope and feature locations in 3D. In this sequential map building approach, new features are added to the map on the fly by feature matching constrained by epipolar geometry to estimate their 3D positions relative to the endoscopic camera.
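The constant velocity, constant angular velocity prediction step described above can be sketched as follows. This is a minimal illustration, assuming a quaternion orientation and simple Euler integration; the function names are ours, not from the paper's implementation:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_omega(omega, dt):
    """Quaternion for a rotation by omega * dt (axis-angle form)."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = np.asarray(omega) / np.linalg.norm(omega)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def predict_camera_state(r, q, v, omega, dt):
    """One EKF prediction step under the constant velocity, constant
    angular velocity motion model: the pose is integrated forward,
    the velocities are carried over (process noise enters via the EKF's Q)."""
    r_new = r + v * dt
    q_new = quat_mul(q, quat_from_omega(omega, dt))
    return r_new, q_new, v, omega
```

Note that the feature locations $(y_x, y_y, y_z)$ are static in the map, so only the camera block of the state vector changes during prediction.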
2.2 Biopsy Site Estimation
The initial position of the biopsy site in the image plane is estimated through probe
tracking. In this work, no marker was attached and no changes were made to the col-
our of the imaging probe. The technique exploits the fact that the camera is relatively
static when the biopsy is taken. The segmentation of the tool is achieved by combin-
ing background subtraction and color segmentation in the HSV space. In this study, a
simple background subtraction technique is used based on inter-frame difference.
Foreground/background models are learnt and updated over time. The background
model is initialized with the first frame of the video sequence. For the extraction of
foreground objects, the current frame is subtracted from the background model and
any significant difference is labeled as foreground. If no foreground object is identi-
fied, the current frame becomes the background model. On the saturation plane, the
shaft of the probe is highlighted in dark grey on a bright background. Therefore, fore-
ground pixels (Fig. 2 (a)) are used as seeds for region growing in the saturation color
plane (Fig. 2 (b)) to segment the probe shaft as shown in Fig. 2(c).
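The two-stage segmentation (inter-frame background subtraction, then seeded region growing on the saturation plane) can be sketched as follows. This is a rough numpy version assuming 8-bit single-channel arrays; the thresholds are illustrative, not the paper's values:

```python
import numpy as np
from collections import deque

def foreground_mask(frame, background, thresh=25):
    """Label pixels that differ significantly from the background model."""
    return np.abs(frame.astype(np.int32) - background.astype(np.int32)) > thresh

def region_grow(saturation, seeds, tol=20):
    """Grow a region from seed pixels in the saturation plane (4-connectivity).
    A neighbour joins if its saturation is within `tol` of the current pixel's."""
    h, w = saturation.shape
    grown = np.zeros((h, w), dtype=bool)
    queue = deque()
    for y, x in np.argwhere(seeds):
        grown[y, x] = True
        queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and abs(int(saturation[ny, nx]) - int(saturation[y, x])) <= tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown
```

In use, the foreground pixels from the background subtraction step seed the region growing, so the two masks together isolate the dark probe shaft on the bright saturation background.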
In order to identify the tip of the probe, the centroid of the shaft is extracted. The tangent lines of the shaft are detected at the global maxima of the Hough transform and the axis of the shaft is computed as the eigenvector corresponding to the smallest eigenvalue of the moment of inertia. The localization of the tip of the probe is performed with respect to a reference point located at the intersection of the shaft and the
Fig. 2. Probe tracking and biopsy site estimation within the image plane; (a) background sub-
traction, (b) color saturation distribution within the image, (c) segmented tool regions, and (d)
the model fitted tool (centroid -red dot, the reference point - green dot, vanishing tangential
lines - cyan, radius at the center of mass and at the reference point – yellow).
distal tip. The 3D position of the tip is estimated using a semi-model based approach
assuming rigidity and incorporating prior knowledge of the width of the probe and the
relationship between the reference point and the tip. The position of the reference
point and the orientation of the shaft are estimated in 3D and the prior model enables
the localization of the tip of the probe $b^c$ relative to the camera.
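The centroid and axis-of-least-inertia computation used for the shaft can be sketched with image moments. This is a minimal numpy version of that one step (the Hough-based tangent detection and the 3D tip model are omitted):

```python
import numpy as np

def shaft_centroid_axis(mask):
    """Centroid and principal axis of a binary shaft mask.
    The axis is the eigenvector of the moment-of-inertia matrix with the
    smallest eigenvalue, i.e. the direction of elongation of the shaft."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    dy, dx = ys - cy, xs - cx
    # 2D inertia matrix about the centroid, in (x, y) order:
    # n^T I n is the moment of inertia about an axis with direction n
    inertia = np.array([[np.sum(dy * dy), -np.sum(dx * dy)],
                        [-np.sum(dx * dy), np.sum(dx * dx)]])
    evals, evecs = np.linalg.eigh(inertia)   # eigenvalues in ascending order
    axis = evecs[:, 0]                       # smallest eigenvalue -> shaft axis
    return (cx, cy), axis
```

For an elongated shaft the moment of inertia is smallest about the long axis, which is why the smallest eigenvalue is the one selected.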
2.3 Global Biopsy Mapping
Following the steps in Section 2.2, the position of the biopsy site $b^c$ is estimated in the camera coordinate system. It is transformed into the world coordinate system using:

$$b^w = C^w b^c + c^w \qquad (1)$$

where $b^w$ is the biopsy site in the world coordinate system, and $C^w$ and $c^w$ are the orientation and position of the camera in the global SLAM coordinate system. Although the
3D position of the biopsy site is now defined, this position is never directly observed
or measured again. There are two reasons for this; the actual site on the tissue is usu-
ally occluded by the probe when the biopsy is taken, and there may not be any salient
features at or around the biopsy site to be tracked. In this case, 2D tracking would fail.
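The camera-to-world transform of Eq. (1) is a straightforward rigid transform. A minimal sketch, assuming the camera orientation is stored as a unit quaternion as in the SLAM state vector:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def biopsy_to_world(b_c, q_w, c_w):
    """Eq. (1): b_w = C_w @ b_c + c_w, with C_w built from the camera quaternion."""
    return quat_to_rot(q_w) @ np.asarray(b_c) + np.asarray(c_w)
```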
However, the strength of the proposed probabilistic map is that the position of the
biopsy site can be updated without directly measuring it. This is made possible by the
co-variance matrix which models the uncertainty of all the biopsy positions. The $i$th biopsy site $b^w_i$ is inserted into the state vector and the co-variance matrix $P$ is updated. The co-variance matrix is updated with the partial derivatives $\partial b_i / \partial x_v$ of the biopsy site with respect to the camera position, as well as the measurement model $\partial b_i / \partial h_i$ and measurement noise $R$, as shown in Eq. (2).
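In an EKF implementation, inserting the new site and updating the covariance with these Jacobians can be sketched as follows. This is a minimal illustration; the variable names and block layout are ours, assuming the camera state occupies the first rows and columns of the covariance:

```python
import numpy as np

def augment_covariance(P, J_xv, J_h, R, n_cam):
    """Append a 3D biopsy site to the covariance matrix.
    P     : (n, n) covariance; the first n_cam rows/cols are the camera state x_v
    J_xv  : (3, n_cam) Jacobian db_i/dx_v of the site w.r.t. the camera state
    J_h   : (3, m) measurement Jacobian db_i/dh_i
    R     : (m, m) measurement noise
    """
    n = P.shape[0]
    G = np.zeros((3, n))
    G[:, :n_cam] = J_xv                 # the new site depends only on the camera state
    P_new = np.zeros((n + 3, n + 3))
    P_new[:n, :n] = P
    cross = G @ P                       # correlation with camera and map features
    P_new[n:, :n] = cross
    P_new[:n, n:] = cross.T
    P_new[n:, n:] = G @ P @ G.T + J_h @ R @ J_h.T
    return P_new
```

The off-diagonal blocks are what correlate the site to the camera and to the existing features, which is exactly what later allows its estimate to improve without being re-measured.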
$$
b^w_i = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad
P = \begin{bmatrix}
P_{xx} & P_{x y_1} & P_{xx}\,\bigl(\tfrac{\partial b_i}{\partial x_v}\bigr)^T \\
P_{y_1 x} & P_{y_1 y_1} & P_{y_1 x}\,\bigl(\tfrac{\partial b_i}{\partial x_v}\bigr)^T \\
\tfrac{\partial b_i}{\partial x_v} P_{xx} & \tfrac{\partial b_i}{\partial x_v} P_{x y_1} & \tfrac{\partial b_i}{\partial x_v} P_{xx} \bigl(\tfrac{\partial b_i}{\partial x_v}\bigr)^T + \tfrac{\partial b_i}{\partial h_i} R\, \bigl(\tfrac{\partial b_i}{\partial h_i}\bigr)^T
\end{bmatrix} \qquad (2)
$$

where $x$ is the position of the endoscope and $y_i$ is the $i$th feature in the map.
The position and uncertainty of the biopsy sites are correlated to the camera position and the rest of the features in the map. Fig. 3 illustrates this sequential map building, demonstrating how the camera, features and the biopsy sites are correlated and
Fig. 3. (a-d) Schematic representation of sequential probabilistic mapping updates. The cam-
era’s position c is shown in red with the uncertainty represented by an ellipse, features y1, y2
and y3 represented in dark gray, the biopsy site b shown in green and the tissue shown in light
gray. (a) c measures y1 with low uncertainty, (b) c is navigated to a new position with growing
uncertainty. Features y2 and y3 are measured and biopsy b is taken. (c) c is navigated close to
y1 and positional uncertainty increases. (d) Feature y1 is measured and the position estimate of c is improved, resulting in an improved estimate of b, as it is correlated to c.
temporally updated. At the time when the biopsy site is observed, the uncertainty of
the camera’s position may be high, as illustrated in Fig. 3 (b) but the relative position
of the biopsy site to surrounding features is well defined. Over time, the camera will
re-measure these surrounding features in the map as in Fig. 3 (d) and the position
estimation of the camera will improve, thus reducing the uncertainty. Therefore, the
position estimation of the biopsy site will also improve as it is correlated to the posi-
tion of the camera and will not drift away in the global map. To facilitate real-time
examination, the biopsy sites $\{b^w_1, \ldots, b^w_i\}$ are visualized in this study by re-projecting
the 3D points into the camera plane based on the intrinsic camera parameters and the
estimated camera position from SLAM. This provides an augmented view of the bi-
opsy sites for the operator.
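The re-projection of the mapped sites into the live view reduces to a pinhole projection. A minimal sketch, following the projection model used in Section 2.4 (the function and parameter names are ours):

```python
import numpy as np

def project_biopsy_sites(sites_w, R_cw, c_w, fkx, fky, x0, y0):
    """Re-project 3D biopsy sites (world frame) onto the image plane.
    R_cw, c_w : camera orientation (camera-to-world) and position from SLAM
    fkx, fky  : focal lengths in pixels; (x0, y0): principal point."""
    pixels = []
    for b_w in sites_w:
        b_c = R_cw.T @ (np.asarray(b_w) - c_w)      # world -> camera frame
        pixels.append((x0 - fkx * b_c[0] / b_c[2],
                       y0 - fky * b_c[1] / b_c[2]))
    return pixels
```

Overlaying the returned pixel coordinates on each frame gives the operator the augmented view of all previously probed sites.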
2.4 Experimental Set-Up
The proposed approach has been validated on a silicon phantom of the airway coated
with acrylic paint to provide realistic texture and internal reflections. Sponge cell
structures were attached to the internal surface to enable optical biopsies to be taken
using a confocal fluorescence endoscope system (Cellvizio, Mauna Kea Technolo-
gies, Paris). Validation was performed by measuring the accuracy of biopsy sites in
the image space as the endoscope navigated through the phantom. The ground truth
data used for comparison was collected using an optical tracking device (Northern
Digital Inc, Ontario, Canada) and an experienced observer. To obtain the ground truth
position of the camera, a rigid stereo laparoscope fitted with four optical markers was
used. The position $c^l$ and orientation $C^l$ of the center of the left camera relative to the optical markers were acquired using standard hand-eye calibration [11]. This enabled the position of the camera to be calculated in the world coordinate system, $c^w$ and $C^w$. To obtain the ground truth of the 3D biopsy site positions, the experienced observer manually identified the sites on the stereo images at the time when the biopsy was taken. By using the camera's intrinsic and extrinsic parameters, the 3D position $b^c$ of the biopsy site was obtained relative to the camera, and its position in the world coordinate system $b^w$ was determined as $b^w = C^w b^c + c^w$. At each subsequent frame, the biopsy site $b^w$ was projected into the ground truth camera position as $x = x_o - fk_x (b^c_x / b^c_z)$ and $y = y_o - fk_y (b^c_y / b^c_z)$, where $fk_x$ and $fk_y$ are the focal lengths and $x_o$ and $y_o$ are the principal point. To validate the proposed probe tracking approach, the probe was mounted in a rigid sheath. This evaluation step was combined with manually defined image coordinates of the probe's location.
3 Results
The proposed algorithm was validated on a two minute long stereo laparoscopic video
sequence consisting of navigation to four different areas, including six biopsies and
re-targeting previously taken biopsies.
Quantitative analysis of the position of the biopsy sites in the image plane is shown
in Table 1. The average visual angle error for the position of the biopsy sites ranges
from 1.18° to 3.86°. Figs. 4 (d-e) show the estimated biopsy site position and ground
truth position of site three over a short sequence before the site goes out of view.
Accuracy of the biopsy position estimation is affected by the proximity of the camera
to the site where close proximity leads to a magnification of the error. Fig. 5 illus-
trates the results of the augmented biopsy sites at different stages of the procedure
where changes in illumination, scale and view point are experienced. Fig. 5 demon-
strates the practical value and clinical relevance of the proposed method; the entire
procedure is represented where six biopsies are taken and added to the global map,
including the associated biopsy images of the sponge cell structures.
Fig. 4. (a-c) Probe tracking: Ground truth (red) and estimated (green) position of probe at (a)
site six and (b) site three. (c) Ground truth (red) and tracked probe position (green) during
navigation between biopsy sites. (d-f) Augmented biopsy site three: (d-e) the X and Y pro-
jected pixel error showing the site being tracked (f) the ground truth projected position (red)
and the estimated position (green) for a short section of the procedure.
Table 1. Average error of biopsy site estimation and probe tracking for phantom experiments
Probe tracking Augmented biopsy sites
Biopsy sites Average visual
angle error
Percent of
FOV
Visual angle
error
Percent of
FOV
1 1.85° 4.89% 2.34° 5.37%
2 1.33° 3.59% 3.06° 7.58%
3 0.87° 2.29% 2.22° 5.59%
4 0.86° 2.23% 1.18° 2.99%
5 0.81° 1.75% 2.06° 4.61%
6 3.22° 8.88% 3.86° 10.09%
Fig. 5. (a-d) Biopsy site position (green spheres). The spheres are 2mm in diameter and appear
in different sizes when they are projected onto the image under perspective projection; (e)
shows the six biopsy sites with corresponding micro-confocal fluorescence endoscope images.
Detailed quantitative analysis of the probe tracking when the biopsies are taken is
shown in Table 1. The tracking errors range from 0.81° to 3.22° of the visual angle
and an example error distribution is illustrated in Fig. 4 (a-b). Quantitative analysis of
the probe tracking on the whole sequence gave an average visual angle error of 2.87°.
The sensitivity and specificity were 0.9706 and 0.9892, respectively. As expected, the
accuracy deteriorates when the probe is introduced and removed from the scene as a
part of the shaft is occluded, or when the probe is very close to the camera.
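The visual-angle and percent-of-FOV error metrics reported in Table 1 can be computed from a pixel error and the focal length in pixels. The paper does not give the exact conversion, so the arctangent formulation below is an assumption:

```python
import numpy as np

def visual_angle_deg(pixel_error, fk):
    """Angle (degrees) subtended at the camera by a pixel error,
    for a focal length fk expressed in pixels."""
    return np.degrees(np.arctan(pixel_error / fk))

def percent_of_fov(angle_deg, fov_deg):
    """Express a visual angle error as a percentage of the field of view."""
    return 100.0 * angle_deg / fov_deg
```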
4 Conclusion
In this paper we have proposed a novel approach for microconfocal optical biopsy
tracking which can be used to augment intra-operative navigation and retargeting of
previously examined tissue regions. The system has been validated with a detailed
phantom experiment and we have demonstrated that this approach can accurately
project the location of biopsy sites, thus enabling its practical clinical use. The pro-
posed method requires no prior information of the tissue geometry and can operate
consistently in a sparse feature environment. The proposed method is robust to small
local deformation and rigid global motion. Modeling large scale nonlinear tissue
deformation, however, is not trivial and will be addressed in future work.
Acknowledgments. We gratefully acknowledge support from the EPSRC and the
Technology Strategy Board grants DT/F003064/1 and DT/E011101/1.
References
1. Meining, A., Bajbouj, M., von Delius, S., Prinz, C.: Confocal Laser Scanning Microscopy
for in vivo Histopathology of the Gastrointestinal Tract. Arab Journal of Gastroenterol-
ogy 8, 1–4 (2007)
2. Thiberville, L., Moreno-Swirc, S., Vercauteren, T., Peltier, E., Cavé, C., Bourg Heckly, G.:
In Vivo Imaging of the Bronchial Wall Microstructure Using Fibered Confocal Fluores-
cence Microscopy. American Journal of Respiratory and Critical Care Medicine 175, 22–
31 (2007)
3. Van Dam, J.: Novel methods of enhanced endoscopic imaging. GUT 52(4) (2003)
4. Wengert, C., Cattin, P.C., Duff, J.M., Székely, G.: Markerless Endoscopic Registration and
Referencing. In: Larsen, R., Nielsen, M., Sporring, J. (eds.) MICCAI 2006. LNCS,
vol. 4190, pp. 816–823. Springer, Heidelberg (2006)
5. Wu, C.-H., Sun, Y.-N., Chen, Y.-C., Chang, C.-C.: Endoscopic Feature Tracking and
Scale-Invariant Estimation of Soft-Tissue Structures. IEICE Transactions on Information
and Systems 2, 351–360 (2008)
6. Atasoy, S., Noonan, D.P., Benhimane, S., Navab, N., Yang, G.-Z.: A global approach for
automatic fibroscopic video mosaicing in minimally invasive diagnosis. In: Metaxas, D.,
Axel, L., Fichtinger, G., Székely, G. (eds.) MICCAI 2008, Part I. LNCS, vol. 5241, pp.
850–857. Springer, Heidelberg (2008)
7. Lerotic, M., Chung, A.J., Clark, J., Valibeik, S., Yang, G.-Z.: Dynamic View Expansion
for Enhanced Navigation in Natural Orifice Transluminal Endoscopic Surgery. In:
Metaxas, D., Axel, L., Fichtinger, G., Székely, G. (eds.) MICCAI 2008, Part II. LNCS,
vol. 5242, pp. 467–475. Springer, Heidelberg (2008)
8. Wengert, C., Bossard, L., Häberling, A., Baur, C., Székely, G., Cattin, P.C.: Endoscopic
Navigation for Minimally Invasive Suturing. In: Ayache, N., Ourselin, S., Maeder, A.
(eds.) MICCAI 2007, Part II. LNCS, vol. 4792, pp. 620–627. Springer, Heidelberg (2007)
9. Krupa, A., Gangloff, J., Doignon, C., de Mathelin, M.F., Morel, G., Leroy, J., Soler, L.,
Marescaux, J.: Autonomous 3-D positioning of surgical instruments in robotized laparo-
scopic surgery using visual servoing. IEEE Transactions on Robotics and Automation 19,
842–853 (2003)
10. Mountney, P., Stoyanov, D., Davison, A.J., Yang, G.-Z.: Simultaneous Stereoscope Local-
ization and Soft-Tissue Mapping for Minimal Invasive Surgery. In: Larsen, R., Nielsen,
M., Sporring, J. (eds.) MICCAI 2006. LNCS, vol. 4190, pp. 347–354. Springer, Heidel-
berg (2007)
11. Tsai, R., Lenz, R.: Real Time Versatile Robotic Hand/Eye Calibration using 3D Machine
Vision. In: Proc. ICRA 1988, pp. 554–561 (1988)
... Since optical biopsy leaves no marks on the tissue, it is necessary to provide gastroenterologists with on-line localization and retargeting of biopsies. Two approaches have been explored in the literature : (i) by region matching from image content [31,32,33] or (ii) using epipolar geometry in endoscopic video [34,35,36,37]. ...
... Allain et al. [36] used intersecting epipolar lines assuming a local affine transformations in single modality images. A visual simultaneous localization and mapping (vSLAM) algorithm was used by [34] with a fluorescence augmented prototype. By tracking the pose of the camera and the location of salient points on the organ wall, a 3D map of the organ being explored is generated. ...
Article
Full-text available
Gastric cancer is the second leading cause of cancer-related deaths worldwide. Early diagnosis significantly increases the chances of survival; therefore, improved assisted exploration and screening techniques are necessary. Previously we made use of an augmented multi-spectral endoscope by inserting an optical probe into the instrumentation channel. However, the limited field of view and the lack of markings left by optical biopsies on the tissue complicate the navigation and revisit of the suspect areas probed in-vivo. In this contribution two innovative tools are introduced to significantly increase the traceability and monitoring of patients in clinical practice: (i) video mosaicing to build a more comprehensive and panoramic view of large gastric areas; (ii) optical biopsy targeting and registration with the endoscopic images. The proposed optical flow-based mosaicing technique selects images that minimize texture discontinuities and is robust despite the lack of texture and illumination variations. The optical biopsy targeting is based on automatic tracking of a free-marker probe in the endoscopic view using deep learning for estimating dynamically its pose during exploration. The accuracy of pose estimation is sufficient to ensure a precise overlapping of the standard white-light color image and the hyperspectral probe image, assuming that the small target area of the organ is almost flat. This allows the mapping of all spatio-temporally tracked biopsy sites onto the panoramic mosaic. Experimental validations are carried out from videos acquired on patients in hospital. The proposed technique is purely software-based and therefore easily integrable into clinical practices. It is also generic and compatible to any imaging modalities connected to a cylindrical fibroscope.
... The main limitation of the clinical use of DRS is that, although DRS can discriminate tissue types, it does so by providing single-point spectral measurements and leaves no marks on the tissue during scanning. 15 In this way, it is not possible to localize the area that has been in contact with the probe when optical biopsy takes place, and thus makes it difficult for the surgeon to determine the resection margin. This is particularly challenging when DRS is used endoscopically or during minimally invasive surgery, where the ergonomics of scanning and viewing the DRS probe site are even more demanding. ...
Article
Significance: Diffuse reflectance spectroscopy (DRS) allows discrimination of tissue type. Its application is limited by the inability to mark the scanned tissue and the lack of real-time measurements. Aim: This study aimed to develop a real-time tracking system to enable localization of a DRS probe to aid the classification of tumor and non-tumor tissue. Approach: A green-colored marker attached to the DRS probe was detected using hue-saturation-value (HSV) segmentation. A live, augmented view of tracked optical biopsy sites was recorded in real time. Supervised classifiers were evaluated in terms of sensitivity, specificity, and overall accuracy. A developed software was used for data collection, processing, and statistical analysis. Results: The measured root mean square error (RMSE) of DRS probe tip tracking was 1.18 ± 0.58 mm and 1.05 ± 0.28 mm for the x and y dimensions, respectively. The diagnostic accuracy of the system to classify tumor and non-tumor tissue in real time was 94% for stomach and 96% for the esophagus. Conclusions: We have successfully developed a real-time tracking and classification system for a DRS probe. When used on stomach and esophageal tissue for tumor detection, the accuracy derived demonstrates the strength and clinical value of the technique to aid margin assessment in cancer resection surgery.
... The advent of these technologies in medical trial research, such as photoacoustic endoscopy (PAE), Raman spectroscopy (RS), two-photon excited fluorescence (TPEF) imaging has opened a new era and created tremendous opportunities for the enhanced identification and biochemical characterization of diseases. These modalities also have the potential to allow non-invasive in vivo "optical biopsy" which differentiates areas of similar clinical characteristics, hence challenging the ex vivo histology which is the only way for definitive cancer diagnosis [17]. In addition, these endoscopic techniques own unprecedented temporal-spatial resolution of imaging with innovative mechanisms such as photoacoustics, optical coherent tomography, and multi-photo effect. ...
Article
Full-text available
Novel endoscopic biophotonic diagnostic technologies have the potential to non-invasively detect the interior of a hollow organ or cavity of the human body with subcellular resolution or to obtain biochemical information about tissue in real time. With the capability to visualize or analyze the diagnostic target in vivo, these techniques gradually developed as potential candidates to challenge histopathology which remains the gold standard for diagnosis. Consequently, many innovative endoscopic diagnostic techniques have succeeded in detection, characterization, and confirmation: the three critical steps for routine endoscopic diagnosis. In this review, we mainly summarize researches on emerging endoscopic optical diagnostic techniques, with emphasis on recent advances. We also introduce the fundamental principles and the development of those techniques and compare their characteristics. Especially, we shed light on the merit of novel endoscopic imaging technologies in medical research. For example, hyperspectral imaging and Raman spectroscopy provide direct molecular information, while optical coherence tomography and multi-photo endomicroscopy offer a more extensive detection range and excellent spatial–temporal resolution. Furthermore, we summarize the unexplored application fields of these endoscopic optical techniques in major hospital departments for biomedical researchers. Finally, we provide a brief overview of the future perspectives, as well as bottlenecks of those endoscopic optical diagnostic technologies. We believe all these efforts will enrich the diagnostic toolbox for endoscopists, enhance diagnostic efficiency, and reduce the rate of missed diagnosis and misdiagnosis.
... Mountney et al. [233] performed a review of various feature descriptors applied to deformable tissue tracking and in [234] proposed an Extended Kalman filter (EKF) framework for simultaneous localization and mapping (SLAM) based method for feature tracking in deformable scene, such as in laparoscopic surgery. This EKF framework was then extended in [235] for maintaining a global map of biopsy sites for endoluminal procedures, intra-operatively. The authors presented an evaluation of the EKF-SLAM on phantom models of stomach and oesophagus. ...
Preprint
This paper attempts to provide the reader with a starting point for studying the application of computer vision and machine learning to gastrointestinal (GI) endoscopy. The surveyed works have been classified into 18 categories. It should be noted that this is a review from the pre-deep-learning era; many deep-learning-based applications are not covered.
... Furthermore, slippage and changes in the probe's direction of motion will also result in a highly deformed mosaic. Moreover, in order to realize the promise of endomicroscopy for providing context-aware large-area tissue characterization, 3D visualization and navigation are also important [24][25][26]. ...
Article
Optical biopsy, such as probe-based endomicroscopy, represents a promising technique that can provide useful intraoperative assessment of cellular imaging instead of conventional physical biopsy and histology. Despite the merits of endomicroscopy, however, it is limited by the high cost of the optical system, the difficulty of flexible access with a commercial probe, restricted large-area surveillance, and tissue deformation. In this paper, we have developed a low-cost endomicroscopy system with a highly flexible fiber bundle coupled with a distal microlens, a mosaicking algorithm, and a robotic scanning device for obtaining large-area in vivo cellular imaging, extending the clinical application of endomicroscopy. We have demonstrated that this system can obtain good-quality images from ex vivo human stomach tissue. We have also shown the potential of the system to provide a much larger field of view for optical biopsy than conventional endomicroscopy. This could greatly improve the prospects for intraoperative in vivo and in situ evaluation of cellular imaging.
Article
Full-text available
Purpose: We present a markerless vision-based method for on-the-fly three-dimensional (3D) pose estimation of a fiberscope instrument to target pathologic areas in the endoscopic view during exploration. Approach: A 2.5-mm-diameter fiberscope is inserted through the endoscope's operating channel and connected to an additional camera to perform complementary observation of a targeted area, acting as a multimodal magnifier. The 3D pose of the fiberscope is estimated frame by frame by maximizing the similarity between its silhouette (automatically detected in the endoscopic view using a deep learning neural network) and a cylindrical shape bound to a kinematic model reduced to three degrees of freedom. An alignment of the cylinder axis, based on Plücker coordinates of the straight edges detected in the image, makes convergence faster and more reliable. Results: The performance has been validated on simulations with a virtual trajectory mimicking endoscopic exploration and on real images of a chessboard pattern acquired with different endoscopic configurations. The experiments demonstrated good accuracy and robustness of the proposed algorithm, with errors of 0.33 ± 0.68 mm in position and 0.32 ± 0.11 deg in axis orientation for the 3D pose estimation, which reveals its superiority over previous approaches. This allows multimodal image registration with sufficient accuracy (<3 pixels). Conclusion: Our pose estimation pipeline was executed on simulations and patterns; the results demonstrate the robustness of our method and the potential of fiber-optic instrument image-based tracking for pose estimation and multimodal registration. It can be fully implemented in software and therefore easily integrated into a routine clinical environment.
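The silhouette-matching idea can be illustrated with a toy example: render a candidate instrument mask for each pose hypothesis and score it against the detected mask with a Dice overlap, keeping the best-scoring hypothesis. The "renderer" below is a deliberately crude stand-in (a vertical band whose length encodes a single insertion-depth parameter), not the cylinder model or optimizer of the paper:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1 = instrument pixels)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def render_silhouette(insertion, shape=(64, 64)):
    """Toy stand-in for rendering the instrument silhouette: a vertical
    band whose length grows with the insertion-depth parameter (pixels)."""
    mask = np.zeros(shape, dtype=bool)
    mask[:min(insertion, shape[0]), 28:36] = True
    return mask

detected = render_silhouette(40)   # stand-in for the CNN-detected mask
# coarse search over the single pose parameter, keeping the best overlap
best = max(range(4, 64, 4), key=lambda d: dice(render_silhouette(d), detected))
# best == 40: the hypothesis matching the detected silhouette wins
```

A real implementation would optimize all three degrees of freedom with a continuous similarity measure rather than a coarse grid, but the scoring structure is the same.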
Article
Minimally invasive surgery, including laparoscopic and thoracoscopic procedures, benefits patients in terms of improved postoperative outcomes and short recovery times. The challenges in hand-eye coordination and manipulation dexterity during these procedures have inspired an enormous wave of development of surgical robotic systems to assist keyhole and endoscopic procedures over the past decades. This paper presents a systematic review of the state-of-the-art systems, picturing a detailed landscape of the system configurations, actuation schemes, and control approaches of existing surgical robotic systems for keyhole and endoscopic procedures. The development challenges and future perspectives are discussed in depth to point out the need for new enabling technologies and to inspire future research.
Chapter
Technological advancement defines the practice of neurosurgery, and as the field has advanced, so have the visualization strategies that are implemented within it. Accurate and realistic visualization and simulation of surgical anatomy for practitioner training, patient education, and operative planning remains critically important. In neurosurgery, a variety of new technologies have been introduced, including three-dimensional, stereoscopic, virtual reality, augmented reality, and mixed reality platforms, all of which will be reviewed in this chapter. A sampling of each of these modalities and their utilization within neurosurgery will be explored, from Surgical Theater® and ImmersiveTouch® to Google Glass®, Oculus Rift®, Microsoft HoloLens®, and much more.
Article
Robotic surgery pushes the frontiers of innovation in healthcare technology towards improved clinical outcomes. We discuss the evolution to five generations of robotic surgical platforms including stereotactic, endoscopic, bioinspired, microbots on the millimetre scale, and the future development of autonomous systems. We examine the challenges, obstacles and limitations of robotic surgery and its future potential including integrated real-time anatomical and immune-histological imaging and data assimilation with improved visualisation, haptic feedback and robot-surgeon interactivity. We consider current evidence, cost-effectiveness and the learning curve in relation to the surgical and anaesthetic journey, and what is required to continue to realise improvements in surgical operative care. The innovative impact of this technology holds the potential to achieve transformative clinical improvements. However, despite over 30 yr of incremental advances it remains formative in its innovative disruption.
Conference Paper
Full-text available
Recent developments in bio-photonics have called for the need of bringing cellular and molecular imaging modalities to an in vivo--in situ setting to allow for real-time tissue characterization and functional assessment. Before such techniques can be used effectively in routine clinical environments, it is necessary to address the visualization requirement for linking point based optical biopsy to large area tissue visualization. This paper presents a novel approach for fibered endoscopic video mosaicing that permits wide region tissue visualization. A feature-based registration method is used to register the frames of the endoscopic video sequence by taking into account the characteristics of fibroscopic imaging such as non-linear lens distortion and high-frequency fiber optic facet pattern. The registration is combined with an efficient optimization scheme in order to align all input frames in a globally consistent way. An evaluation on phantom and ex vivo tissue images allowing free-hand camera motion is presented.
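Pairwise frame registration of this kind is often expressed as a planar homography estimated from matched features. A minimal direct linear transform (DLT) sketch, assuming lens distortion and the fiber facet pattern have already been compensated (the point correspondences are synthetic, not data from the paper):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 point
    pairs via the direct linear transform (no normalization, for brevity)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

# two frames related by a pure 2D translation of (5, -3) pixels
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0), (0.0, 80.0)]
dst = [(x + 5.0, y - 3.0) for x, y in src]
H = homography_dlt(src, dst)       # ~ [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
```

In a full mosaicing pipeline, such pairwise estimates would then be jointly refined (bundle-adjusted) so that all frames align in a globally consistent way, as the abstract describes.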
Conference Paper
Full-text available
Accurate patient registration and referencing is a key element in navigated surgery. Unfortunately all existing methods are either invasive or very time consuming. We propose a fully non-invasive optical approach using a tracked monocular endoscope to reconstruct the surgical scene in 3D using photogrammetric methods. The 3D reconstruction can then be used for matching the pre-operative data to the intra-operative scene. In order to cope with the near real-time requirements for referencing, we use a novel, efficient 3D point management method during 3D model reconstruction. The presented prototype system provides a reconstruction accuracy of 0.1 mm and a tracking accuracy of 0.5 mm on phantom data. The ability to cope with real data is demonstrated by cadaver experiments.
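Photogrammetric reconstruction of this kind reduces, at its core, to triangulating scene points from calibrated views of the tracked endoscope. A minimal linear (DLT) triangulation sketch with synthetic poses and a synthetic point (all values illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # dehomogenize

def project(P, X):
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

# synthetic calibrated views: identity pose and a 1 cm lateral baseline
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.01], [0.0], [0.0]])])

X_true = np.array([0.02, 0.01, 0.30])       # point 30 cm in front
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the DLT recovers the point to numerical precision; with real imagery the camera poses come from the external tracker and the matches from the feature pipeline, so a robust, jointly refined estimate is needed.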
Conference Paper
Full-text available
Minimally Invasive Surgery (MIS) has recognized benefits of reduced patient trauma and recovery time. In practice, MIS procedures present a number of challenges due to the loss of 3D vision and the narrow field-of-view provided by the camera. The restricted vision can make navigation and localization within the human body a challenging task. This paper presents a robust technique for building a repeatable long-term 3D map of the scene whilst recovering the camera movement, based on Simultaneous Localization and Mapping (SLAM). A sequential, vision-only approach is adopted that provides 6-DOF camera movement, exploits the available textured surfaces, and reduces reliance on the strong planar structures required by range finders. The method has been validated with a simulated data set using real MIS textures, as well as with in vivo MIS video sequences. The results indicate the strength of the proposed algorithm under the complex reflectance properties of the scene, and its potential for real-time application and integration with existing MIS hardware.
Article
In this study, we introduce a software pipeline to track feature points across endoscopic video frames. It deals with the low contrast and uneven illumination that commonly afflict endoscopic imaging; in particular, irregular feature trajectories are eliminated to improve quality. The structure of the soft tissue is determined by an iterative factorization method operating on the collection of tracked features, and a shape-updating mechanism is proposed to yield scale-invariant structures. Experimental results show that the tracking method performed well and increased the number of tracked feature trajectories. The real scale and structure of the target scene were successfully estimated, and the recovered structure is more accurate than that of the conventional method.
Conference Paper
Natural Orifice Transluminal Endoscopic Surgery (NOTES) is an emerging surgical technique with increasing global interest. It has recently transcended the boundaries of clinical experiments towards initial clinical evaluation. Although profound benefits to the patient have been demonstrated, NOTES requires highly skilled endoscopists for it to be performed safely and successfully. This predominantly reflects the skill required to navigate a flexible endoscope through a spatially complex environment. This paper presents a method to extend the visual field of the surgeon without compromising the safety of the patient. The proposed dynamic view expansion uses a novel parallax correction scheme to provide enhanced visual cues in the periphery that aid navigation and orientation during NOTES, while leaving the focal view undisturbed. The method was validated using a simulated natural-orifice surgical environment and demonstrated on in vivo porcine data.
Article
Endoscopy has become an essential part of the practice of gastroenterology. Techniques exploiting previously unused properties of light have demonstrated the potential to enhance the ability to make clinical diagnoses without removing tissue as has been standard practice for decades. The term used for many of these techniques is "optical biopsy" and, although not yet widely available, enthusiasm for such techniques has grown as has research in their potential clinical utility.
Article
Fibered confocal fluorescence microscopy (FCFM) is a new technique that produces microscopic imaging of a living tissue through a 1-mm fiberoptic probe that can be introduced into the working channel of the bronchoscope. To analyze the microscopic autofluorescence structure of normal and pathologic bronchial mucosae using FCFM during bronchoscopy. Bronchial FCFM and spectral analyses were performed at 488-nm excitation wavelength on two bronchial specimens ex vivo and in 29 individuals at high risk for lung cancer in vivo. Biopsies of in vivo FCFM-imaged areas were performed using autofluorescence bronchoscopy. Ex vivo and in vivo microscopic and spectral analyses showed that the FCFM signal mainly originates from the elastin component of the basement membrane zone. Five distinct reproducible microscopic patterns were recognized in the normal areas from the trachea down to the more distal respiratory bronchi. In areas of the proximal airways not previously biopsied, one of these patterns was found in 30 of 30 normal epithelia, whereas alterations of the autofluorescence microstructure were observed in 19 of 22 metaplastic or dysplastic samples, five of five carcinomas in situ, and two of two invasive lesions. Disorganization of the fibered network could be found on 9 of 27 preinvasive lesions, compatible with early disruptions of the basement membrane zone. FCFM alterations were also observed in a tracheobronchomegaly syndrome and in a sarcoidosis case. Endoscopic FCFM represents a minimally invasive method to study specific basement membrane alterations associated with premalignant bronchial lesions in vivo. The technique may also be useful to study the bronchial wall remodeling in nonmalignant chronic bronchial diseases.
Conference Paper
Manipulating small objects such as needles, screws or plates inside the human body during minimally invasive surgery can be very difficult for less experienced surgeons, due to the loss of 3D depth perception. This paper presents an approach for tracking a suturing needle using a standard endoscope. The resulting pose information of the needle is then used to generate artificial 3D cues on the 2D screen to optimally support surgeons during tissue suturing. Additionally, if an external tracking device is provided to report the endoscope's position, the suturing needle can be tracked in a hybrid fashion with sub-millimeter accuracy. Finally, a visual navigation aid can be incorporated, if a 3D surface is intraoperatively reconstructed from video or registered from preoperative imaging.
Conference Paper
A technique is described for computing the 3-D position and orientation of a camera relative to the last joint of a robot manipulator in an eye-on-hand configuration. The calibration can be done within a fraction of a millisecond after the robot finishes the movement. The setup is simple (a planar set of calibration points arbitrarily placed on the work table, in addition to the robot and camera) and is the same as that for a common camera calibration. This method is claimed to be faster, simpler, and more accurate than any existing technique for hand/eye calibration. Generic geometric lemmas are presented, leading to the derivation of the final algorithms, which aim at simplicity, efficiency, and accuracy while giving ample geometric and algebraic insight. Besides describing the technique, critical factors influencing the accuracy are analysed, and procedures for improving accuracy are introduced. Test results from both simulations and real experiments on an IBM Cartesian robot are reported.
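Hand-eye calibration of this kind is commonly posed as solving AX = XB for the unknown camera-to-flange transform X. A minimal sketch of the rotational part, using the classical observation that the rotation axes of corresponding motions satisfy a_i = Rx b_i and can therefore be aligned with a Kabsch/SVD step (synthetic data; this follows the general axis-alignment idea rather than Tsai's exact algorithm):

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues formula: rotation matrix about a unit axis."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    S = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * S @ S

def rotation_axis(R):
    """Unit rotation axis of R (assumes 0 < angle < pi)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def hand_eye_rotation(Ras, Rbs):
    """Solve Ra Rx = Rx Rb for Rx: the motion axes satisfy a_i = Rx b_i,
    so Rx is the Kabsch (SVD) alignment of the two axis sets."""
    A = np.array([rotation_axis(R) for R in Ras])   # rows a_i
    B = np.array([rotation_axis(R) for R in Rbs])   # rows b_i
    U, _, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# synthetic ground truth: conjugation guarantees Ra Rx = Rx Rb exactly
Rx = rot([0.0, 0.0, 1.0], 0.5)
Rbs = [rot([1, 0, 0], 0.7), rot([0, 1, 0], 1.1), rot([0, 3, 4], 0.9)]
Ras = [Rx @ Rb @ Rx.T for Rb in Rbs]
Rx_hat = hand_eye_rotation(Ras, Rbs)    # recovers Rx
```

The translational part then follows from stacking the linear constraints (Ra_i − I) t_x = Rx t_b,i − t_a,i and solving in the least-squares sense; at least two motions with non-parallel rotation axes are required for a unique solution.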