Live augmented reality: a new visualization method for laparoscopic surgery using continuous volumetric computed tomography.
ABSTRACT: Current laparoscopic images are rich in surface detail but lack information on deeper structures. This report presents a novel method for highlighting these structures during laparoscopic surgery using continuous multislice computed tomography (CT). The result is a more accurate augmented reality (AR) approach, termed "live AR," which merges three-dimensional (3D) anatomy from live low-dose intraoperative CT with live images from the laparoscope.
A series of swine procedures was conducted in a CT room equipped as a full laparoscopic surgical suite. A 64-slice CT scanner imaged the surgical field approximately once per second. Each procedure began with a contrast-enhanced, diagnostic-quality CT scan (initial CT) of the liver, followed by continuous intraoperative CT and laparoscopic imaging with an optically tracked laparoscope. Intraoperative anatomic changes included user-applied deformations and those caused by breathing. In an intermediate image-processing step, deformable image registration warped the initial CT to align spatially with the low-dose intraoperative CT scans. The registered initial CT was then rendered and merged with laparoscopic images to create live AR.
The described method's superior compensation for soft-tissue deformation yielded more accurate spatial registration between laparoscopic and rendered CT images with live AR than with conventional AR. Moreover, substituting the registered initial CT for the low-dose intraoperative CT enabled continuous visualization of the vasculature and offered the potential of at least an eightfold reduction in intraoperative X-ray dose.
The authors proposed and developed live AR, a new surgical visualization approach that merges rich surface detail from a laparoscope with instantaneous 3D anatomy from continuous CT scanning of the surgical field. Through innovative use of deformable image registration, they also demonstrated the feasibility of continuous visualization of the vasculature and considerable X-ray dose reduction. This study provides motivation for further investigation and development of live AR.
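The core image-processing step described above — warp the initial CT into alignment with the intraoperative scan, then fuse the rendered result with the laparoscopic frame — can be illustrated in miniature. The sketch below is a simplified 2D illustration, not the authors' implementation: `warp_image` applies a precomputed dense displacement field (which a deformable registration algorithm would estimate), and `blend_overlay` stands in for the rendering-and-merge step with simple alpha blending.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, displacement):
    """Warp a 2D image by a dense displacement field.

    displacement has shape (2, H, W): per-pixel (row, col) offsets that
    map each output coordinate back into the input image. A deformable
    registration algorithm would supply this field.
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + displacement[0], cols + displacement[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def blend_overlay(laparoscope_frame, rendered_ct, alpha=0.4):
    """Alpha-blend a rendered CT overlay onto a laparoscopic frame."""
    return (1.0 - alpha) * laparoscope_frame + alpha * rendered_ct

# Sanity check: a zero displacement field leaves the image unchanged.
img = np.arange(16.0).reshape(4, 4)
assert np.allclose(warp_image(img, np.zeros((2, 4, 4))), img)
```

In the paper's pipeline the displacement field is re-estimated against each low-dose intraoperative scan, so the high-quality initial CT tracks the deforming anatomy frame to frame.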
- Available from: harvard.edu
ABSTRACT: As a stand-alone imaging modality, two-dimensional (2D) ultrasound (US) can only guide basic interventional tasks due to the limited spatial orientation information contained in these images. High-resolution real-time three-dimensional (3D) US can potentially overcome this limitation, thereby expanding the applications for US-guided procedures to include intracardiac surgery and fetal surgery, while potentially improving results of solid organ interventions such as image-guided breast, liver or prostate procedures. The following study examines the benefits of real-time 3D US for performing both basic and complex image-guided surgical tasks. Seven surgical trainees performed three tasks in an acoustic testing tank simulating an image-guided surgical environment using 2D US, biplanar 2D US, and 3D US for guidance. Surgeon-controlled US imaging was also tested. The evaluation tasks were (1) bead-in-hole navigation; (2) bead-to-bead navigation; and (3) clip fixation. Performance measures included completion time, tool tip trajectory, and error rates, with endoscope-guided performance serving as a gold-standard reference measure for each subject. Compared to 2D US guidance, completion times decreased significantly with 3D US for both bead-in-hole navigation (50%, p = 0.046) and bead-to-bead navigation (77%, p = 0.009). Furthermore, tool-tip tracking for bead-to-bead navigation demonstrated improved navigational accuracy using 3D US versus 2D US (46%, p = 0.040). Biplanar 2D imaging and surgeon-controlled 2D US did not significantly improve performance as compared to conventional 2D US. In real-time 3D mode, surgeon-controlled imaging and changes in 3D image presentation made by adjusting the perspective of the 3D image did not diminish performance. For clip fixation, completion times proved excessive with 2D US guidance (> 240 s). However, with real-time 3D US imaging, completion times and error rates were comparable to endoscope-guided performance. 
Real-time 3D US can guide basic surgical tasks more efficiently and accurately than 2D US imaging. It can also guide more complex surgical tasks, which may prove useful for procedures where optical imaging is suboptimal, as in fetal surgery or intracardiac interventions. Computer Aided Surgery 01/2003; 8(2):82-90.
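The study above scores performance partly from the tool-tip trajectory recorded during each navigation task. One common way to quantify navigational accuracy from such a trajectory is a path-efficiency ratio; the helper names below are illustrative, not taken from the paper, and the metric is an assumption about how such trajectories might be scored.

```python
import numpy as np

def path_length(positions):
    """Total length of a sampled tool-tip trajectory (N x 3 array of positions)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()

def path_efficiency(positions):
    """Straight-line start-to-end distance divided by actual path length.

    1.0 means the tip moved directly to the target; lower values indicate
    wandering, as might occur under poorer image guidance.
    """
    straight = np.linalg.norm(positions[-1] - positions[0])
    return straight / path_length(positions)

# An L-shaped detour: path length 3.0, straight-line distance sqrt(5).
traj = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [2, 1, 0]])
assert np.isclose(path_length(traj), 3.0)
```

A perfectly straight approach gives an efficiency of 1.0, so comparing mean efficiencies between 2D and 3D guidance groups directly reflects the trajectory improvements the study reports.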
ABSTRACT: Minimally invasive image-guided interventions (IGIs) are time and cost efficient, minimize unintended damage to healthy tissue, and lead to faster patient recovery. With the advent of multislice computed tomography (CT), many IGIs are now being performed under volumetric CT guidance. Registering pre- and intraprocedural images for improved intraprocedural target delineation is a fundamental need in the IGI workflow. Earlier approaches to meet this need primarily employed rigid body approximation, which may not be valid because of nonrigid tissue misalignment between these images. Intensity-based automatic deformable registration is a promising option to correct for this misalignment; however, the long execution times of these algorithms have prevented their use in clinical workflow. This article presents a field-programmable gate array-based architecture for accelerated implementation of mutual information (MI)-based deformable registration. The reported implementation reduces the execution time of MI-based deformable registration from hours to a few minutes. This work also demonstrates successful registration of abdominal intraprocedural noncontrast CT (iCT) images with preprocedural contrast-enhanced CT (preCT) and positron emission tomography (PET) images using the reported solution. The registration accuracy for this application was evaluated using 5 iCT-preCT and 5 iCT-PET image pairs. The registration accuracy of the hardware implementation is comparable with that achieved using a software implementation and is on the order of a few millimeters. This registration accuracy, coupled with the execution speed and compact implementation of the reported solution, makes it suitable for integration in the IGI workflow. IEEE Transactions on Biomedical Circuits and Systems 07/2007.
Conference Proceeding: Augmented Reality Visualization for Laparoscopic Surgery.
ABSTRACT: We present the design and a prototype implementation of a three-dimensional visualization system to assist with laparoscopic surgical procedures. The system uses 3D visualization, depth extraction from laparoscopic images, and six degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon's video-see-through head-mounted display. We also introduce a custom design for this display. A digital light projector, a camera, and a conventional laparoscope create a prototype 3D laparoscope that can extract depth and video imagery. Such a system can restore the physician's natural point of view and head motion parallax that are used to understand the 3D structure during open surgery. These cues are not available in conventional laparoscopic surgery due to the displacement of the laparoscopic camera from the physician's viewpoint. The system can also display multiple laparoscopic range imaging data sets to widen the effective field of view of the device. These data sets can be displayed in true 3D and registered to the exterior anatomy of the patient. Much work remains to realize a clinically useful system, notably in the acquisition speed, reconstruction, and registration of the 3D imagery. Medical Image Computing and Computer-Assisted Intervention - MICCAI'98, First International Conference, Cambridge, MA, USA, October 11-13, 1998, Proceedings; 01/1998
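Merging synthetic imagery with tracked camera views, as both AR systems above do, rests on one geometric primitive: projecting 3D anatomy into the image plane of a tracked, calibrated camera. The sketch below shows the standard pinhole model with an extrinsic pose from 6-DOF tracking; the function name and intrinsic values are illustrative, not from either paper.

```python
import numpy as np

def project_points(points_world, R, t, fx, fy, cx, cy):
    """Project 3D world points into a tracked camera image (pinhole model).

    R (3x3) and t (3,): world-to-camera pose, e.g. from optical tracking
    of the laparoscope. fx, fy, cx, cy: intrinsics from camera calibration.
    Returns an N x 2 array of pixel coordinates.
    """
    pc = points_world @ R.T + t            # world frame -> camera frame
    uv = pc[:, :2] / pc[:, 2:3]            # perspective divide by depth
    return uv * np.array([fx, fy]) + np.array([cx, cy])

# A point on the optical axis projects to the principal point (cx, cy).
pt = np.array([[0.0, 0.0, 2.0]])
px = project_points(pt, np.eye(3), np.zeros(3), 800, 800, 320, 240)
assert np.allclose(px, [[320.0, 240.0]])
```

With this mapping, rendered CT (or range) data can be drawn at the pixel locations where the corresponding anatomy appears in the live laparoscopic frame; registration and tracking errors show up directly as overlay misalignment.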