Project

Surgical Audio Guidance - SURAG

Goal: In minimally invasive procedures, navigation of Medical Interventional Devices (MIDs) is a challenging task. The clinician's experience is an essential prerequisite for completing these procedures without damaging structures or organs. Medical imaging can assist the procedure, but it can also introduce inaccuracies caused by modality-dependent artifacts.

Sensor-based solutions have been proposed to improve accuracy by acquiring additional guidance information from MIDs. These typically require the sensors to be embedded in the MID tip, leading to direct tissue contact, sterilization issues, and added complexity and cost.

Considering these issues, we developed a new concept for acquiring complementary information for guiding MIDs in minimally invasive procedures. The concept is based on acoustic emission (AE): a sensor connected at the proximal end of the tool captures information, non-invasively and outside the body, about the dynamics occurring between the tool tip and the tissue. The audio signal resulting from MID-tissue interactions at the instrument tip naturally propagates through the shaft, so it can be picked up by an audio sensor located at the proximal end of the instrument. This signal can then be processed to extract useful guidance information, which can in turn be mapped into feedback for surgeons during minimally invasive procedures.
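
As an illustration of this processing chain, the following is a minimal sketch of such an AE pipeline in Python. The band limits, envelope smoothing window, and event threshold are illustrative assumptions, not values from the project:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ae_guidance_indicator(audio, fs, band=(1000.0, 8000.0)):
    """Extract a simple tip-tissue interaction indicator from proximally
    recorded audio. Band limits are illustrative assumptions."""
    # Band-pass filter to isolate the AE band from handling/ambient noise.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, audio)
    # Amplitude envelope via the analytic signal.
    envelope = np.abs(hilbert(filtered))
    # Smooth the envelope (50 ms moving average) for a stable indicator.
    win = max(1, int(0.05 * fs))
    return np.convolve(envelope, np.ones(win) / win, mode="same")

# Usage sketch: flag samples where the interaction indicator exceeds a
# (hypothetical) threshold, e.g. to trigger feedback to the surgeon.
fs = 44100
audio = np.random.randn(fs)          # placeholder for a recorded AE signal
indicator = ae_guidance_indicator(audio, fs)
events = indicator > 3.0 * np.median(indicator)
```
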

The main advantage of this novel approach is that no sensor needs to be placed in direct contact with the patient's organs and tissues. The development of an AE guidance solution would allow us to acquire information from conventional MIDs, as a sort of plug-and-play device, without the need to build specialized instruments: the already existing ones can be used. This should result in less complicated solutions than the existing ones, allowing faster clinical approval.

In recent years, we have tested the SurAG approach with several MIDs, such as biopsy needles, guide wires, and laparoscopic and robotic tools, and we have extracted different types of information using advanced signal processing and classification procedures.


Project log

Axel Boese
added a research item
In robot-assisted procedures, the surgeon controls the surgical instruments from a remote console while visually monitoring the procedure through the endoscope. No haptic feedback is available to the surgeon, which impedes the assessment of diseased tissue and the detection of hidden structures beneath the tissue, such as vessels. Only visual cues are available to the surgeon to control the force applied to the tissue by the instruments, which poses a risk of iatrogenic injuries. Additional information on the haptic interactions between the employed instruments and the treated tissue, provided to the surgeon during robotic surgery, could compensate for this deficit. Acoustic emissions (AE) from the instrument/tissue interactions, transmitted by the instrument, are a potential source of this information. AE can be recorded by audio sensors that do not have to be integrated into the instruments but can be modularly attached to the outside of the instrument's shaft or enclosure. The location of the sensor on a robotic system is essential for the applicability of the concept in real situations. While the signal strength of the acoustic emissions decreases with distance from the point of interaction, an installation close to the patient would require sterilization measures. The aim of this work is to investigate whether it is feasible to install the audio sensor in non-sterile areas far away from the patient and still receive useful AE signals. To determine whether signals can be recorded at different potential mounting locations, instrument/tissue interactions with different textures were simulated in an experimental setup. The results showed that meaningful and valuable AE can be recorded in the non-sterile area of a robotic surgical system despite the expected signal losses.
Thomas Sühn
added a research item
Arthroscopic surgery is a technically challenging but common minimally invasive procedure with a long learning curve and a high incidence of iatrogenic damage. These damages can occur due to the lack of feedback and supplementary information regarding tissue-instrument contact during surgery. Deliberately performed interactions can, however, be used to obtain clinically relevant information, e.g. when a surgeon uses tactile feedback to assess the condition of articular cartilage. Yet the perception of such events is highly subjective. We propose a novel proximally attached sensing concept applied to arthroscopic surgery to allow an objective characterization and utilization of interactions. It is based on acoustic emissions that originate from tissue-instrument contact, propagate naturally via the instrument shaft, and can be obtained by a transducer setup outside of the body. The setup was tested for its ability to differentiate various conditions of articular cartilage. A femoral head with varying grades of osteoarthritic cartilage was tapped multiple times ex vivo with a conventional Veress needle with a sound transducer attached at the end outside the patient. A wavelet-based processing of the obtained signals and a subsequent analysis of the distribution of spectral energy showed the potential of tool-tissue interactions to characterize different cartilage conditions. The proposed concept needs further evaluation with a dedicated design of the palpation tool and should be tested in realistic arthroscopic scenarios.
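
As a rough sketch of such a wavelet-based spectral energy analysis, the relative energy per decomposition band can be computed with PyWavelets. The wavelet family ('db4') and the five-level decomposition are assumptions; the abstract does not specify them:

```python
import numpy as np
import pywt

def dwt_energy_signature(signal, wavelet="db4", level=5):
    """Relative spectral energy per DWT band, usable as a signature for
    comparing cartilage or tissue conditions. Wavelet family and level
    count are illustrative assumptions."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()  # one value per band, summing to 1

# Usage sketch: compare two tap recordings via their band-energy profiles.
tap_a = np.random.randn(4096)  # placeholder for a recorded tap
tap_b = np.random.randn(4096)
print(dwt_energy_signature(tap_a), dwt_energy_signature(tap_b))
```
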
Alfredo Illanes
added a research item
Accurate needle placement is highly relevant for the puncture of anatomical structures. The clinician's experience and medical imaging are essential to complete these procedures safely. However, imaging may come with inaccuracies due to image artifacts. Sensor-based solutions have been proposed for acquiring additional guidance information. These sensors typically need to be embedded in the instrument tip, leading to direct tissue contact, sterilization issues, and added device complexity, risk, and cost. Recently, an audio-based technique has been proposed for “listening” to needle tip-tissue interactions via an externally placed sensor. This technique has shown promising results for different applications, but the relation between the interaction event and the generated audio excitation is still not fully understood. This work aims to study this relationship, using a force sensor as a reference, by relating events and dynamical characteristics occurring in the audio signal with those occurring in the force signal. We want to show that the dynamical information that a well-established sensor such as a force sensor can provide could also be extracted from a low-cost and simple sensor such as audio. To this end, the Pearson coefficient was used for signal-to-signal correlation between extracted audio and force indicators. Additionally, an event-to-event correlation between audio and force was performed by computing features from the indicators. Results show high correlation values between audio and force indicators, in the range of 0.53 to 0.72. These promising results demonstrate the usability of audio sensing for tissue-tool interaction and its potential to improve telemanipulated and robotic surgery in the future.
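
The signal-to-signal correlation step could look like the following sketch. The envelope-based indicators and the sampling rates are assumptions; the paper's exact indicators are not reproduced here:

```python
import numpy as np
from scipy.signal import hilbert, resample
from scipy.stats import pearsonr

def indicator(signal):
    """Simple amplitude-envelope indicator; a stand-in for the
    audio/force indicators extracted in the study."""
    return np.abs(hilbert(signal - np.mean(signal)))

# Placeholder recordings of one insertion, sampled at different rates.
audio = np.random.randn(44100)   # audio channel (e.g. 44.1 kHz)
force = np.random.randn(1000)    # force channel (e.g. 1 kHz)

# Resample the audio indicator onto the force time base before correlating.
audio_ind = resample(indicator(audio), len(force))
force_ind = indicator(force)

r, p = pearsonr(audio_ind, force_ind)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```
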
Axel Boese
added a research item
This work aims to demonstrate the feasibility of acquiring haptic information from a da Vinci robotic tool using audio sensing, in accordance with sensor placement requirements in a real clinical scenario. For this, two potential audio sensor locations were studied using an experimental setup that performs, in a repeatable way, interactions of a da Vinci forceps with three different tissues. The obtained audio signals were assessed in terms of their resulting signal-to-noise ratio (SNR) and their capability to distinguish between different tissues. A spectral energy distribution analysis using the Discrete Wavelet Transform was performed to extract signal signatures from the tested tissues. Results show that a high SNR was obtained in most of the audio recordings acquired from both studied positions. Additionally, evident spectral energy-related patterns could be extracted from the audio signals, allowing us to distinguish between the different palpated tissues.
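
A minimal sketch of such an SNR assessment, assuming the recording contains a marked interaction segment and a noise-only segment (the segmentation itself is an assumption):

```python
import numpy as np

def snr_db(recording, signal_slice, noise_slice):
    """SNR in dB between an interaction segment and a noise-only
    segment of the same recording."""
    p_signal = np.mean(recording[signal_slice] ** 2)
    p_noise = np.mean(recording[noise_slice] ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Usage sketch: first second is noise, the palpation occurs afterwards.
fs = 44100
rec = np.random.randn(3 * fs)  # placeholder for a forceps recording
print(snr_db(rec, slice(fs, 3 * fs), slice(0, fs)))
```
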
Alfredo Illanes
added 13 research items
Robotic minimally invasive surgery (RMIS) has played an important role in recent decades. In traditional surgery, surgeons rely on palpation using their hands. During RMIS, however, surgeons use the visual-haptics technique to compensate for the missing sense of touch. Various sensors have been widely used to retrieve this natural sense, but issues such as integration, cost, sterilization, and the small sensing area still prevent such approaches from being applied. A new method based on acoustic emission has recently been proposed for acquiring audio information from tool-tissue interaction during minimally invasive procedures to provide user guidance feedback. In this work the concept was adapted for acquiring audio information from an RMIS grasper, and a first proof of concept is presented. Interactions of the grasper with various artificial and biological texture samples were recorded and analyzed using advanced signal processing, and a clear correlation between audio spectral components and the tested textures was identified. https://doi.org/10.1016/j.compbiomed.2019.103370
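
One simple way to inspect such spectral components is a Welch power spectral density estimate per texture recording. This is only a sketch; the paper's actual analysis chain is more advanced:

```python
import numpy as np
from scipy.signal import welch

fs = 44100
grasp = np.random.randn(2 * fs)  # placeholder for a grasper recording

# PSD estimate; dominant peaks can then be compared across textures.
freqs, psd = welch(grasp, fs=fs, nperseg=4096)
peak_freq = freqs[np.argmax(psd)]
print(f"Dominant spectral component near {peak_freq:.0f} Hz")
```
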
Artery guidewire-induced perforation during coronary interventions is an uncommon but potentially serious complication with significant morbidity and mortality rates. To minimize its impact, it is crucial for the surgeon to detect early that a perforation has occurred. However, this is not always easy, since a perforation is sometimes not characterized by any symptom or sign. In this work a time-varying (TV) characterization of coronary artery perforation is proposed through TV parametric modelling of an audio signal acquired from the distal part of a guidewire. A stethoscope equipped with a microphone connected to a computer was attached to the distal part of a 0.014-inch guidewire using a coupling box, allowing direct contact between the distal part of the guidewire and the stethoscope membrane. Coronary arteries belonging to several pork hearts were perforated using the tip of the guidewire. During the procedure, audio signals and the time of perforation were recorded. An audio database was implemented in order to evaluate the performance of the proposed characterization technique. It included 100 coronary artery perforation audio recordings, each with a duration of 30 seconds, and 200 recordings with different types of induced guidewire audio artifacts. Each audio signal was first decimated and then filtered using a wavelet-based band-pass filter. The resulting signal was modelled using a TV autoregressive (AR) model for estimating a TV power spectral density and TV poles. Finally, different features were computed from the AR spectrum and AR poles, based mainly on spectral energy dispersion and tracking of the pole of maximal energy. Results show that guidewire perforation leaves a characteristic TV trace which can be tracked through the TV poles and spectrum, and that clear differentiating patterns can be extracted, allowing 90% correct perforation classification.
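
A rough sketch of such pole tracking, here approximated with sliding-window Yule-Walker AR estimation (the model order, window length, and hop are assumptions; the paper uses a genuinely time-varying AR formulation):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_yule_walker(x, order):
    """AR coefficients of one window via the Yule-Walker equations."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    return solve_toeplitz(r[:order], r[1:order + 1])

def track_dominant_pole(signal, fs, order=8, win=1024, hop=512):
    """Frequency of the pole closest to the unit circle per window, a
    simple stand-in for tracking the pole of maximal energy."""
    freqs = []
    for start in range(0, len(signal) - win, hop):
        a = ar_yule_walker(signal[start:start + win], order)
        # Poles of the AR transfer function 1 / (1 - a1 z^-1 - ... - ap z^-p).
        poles = np.roots(np.concatenate(([1.0], -a)))
        dominant = poles[np.argmax(np.abs(poles))]
        freqs.append(np.angle(dominant) * fs / (2 * np.pi))
    return np.array(freqs)

# Usage sketch on a placeholder recording.
fs = 8000
audio = np.random.randn(5 * fs)
print(track_dominant_pole(audio, fs)[:5])
```
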
Alfredo Illanes
added a project goal (see the project description above)