Alfredo Illanes*, Anna Schauer, Thomas Sühn, Axel Boese, Roland Croner and
Michael Friebe
Surgical audio information as base for haptic
feedback in robotic-assisted procedures
Abstract: This work aims to demonstrate the feasibility
that haptic information can be acquired from a da Vinci
robotic tool using audio sensing according to sensor
placement requirements in a real clinical scenario. For
that, two potential audio sensor locations were studied
using an experimental setup for performing, in a repeat-
able way, interactions of a da Vinci forceps with three
different tissues. The obtained audio signals were assessed
in terms of their resulting signal-to-noise-ratio (SNR) and
their capability to distinguish between different tissues. A
spectral energy distribution analysis using Discrete
Wavelet Transformation was performed to extract signal
signatures from the tested tissues. Results show that a high
SNR was obtained in most of the audio recordings acquired
from both studied positions. Additionally, evident spectral
energy-related patterns could be extracted from the audio
signals allowing us to distinguish between different
palpated tissues.
Keywords: audio analysis; da Vinci robot; haptic feedback;
minimally invasive surgery; robotic assisted surgery.
Minimally invasive surgical procedures are increasingly performed with the use of robotic systems. Compared to conventional laparoscopy, robotic assistance systems allow increased precision but completely lack haptic sensation because of the indirect interaction with tissue through remotely controlled instruments. This limitation can result in risks of injury to critical structures.
Different approaches have been presented in order to provide surgeons with haptic information. They are mainly based on the direct or indirect measurement of force or pressure. For direct measurements, single sensors [1, 2] or sensor arrays [3, 4] are installed in the instrument components that directly interact with the patient's inner organs. This imposes serious design limitations for fulfilling clinical requirements. Sensor technology can also be integrated into instruments for indirect force measurements, for example, on the shaft of the instrument [5, 6]. However, these measurements can only be used for extracting static one-point information, whereas palpation requires dynamic information acquisition.
A novel method for guiding medical interventional devices using an audio sensor attached to the tool's proximal end has been presented in [7, 8]. Audio has shown promising results for acquiring haptic information non-invasively from medical tools. An audio-based guidance device could be used as a sort of plug-and-play device, without the necessity of rebuilding specialized instruments, since the already existing ones can be used. Moreover, since the sensing device is located at the proximal end of the instruments, no sensor needs to be in direct contact with the patient's organs.
This method has been successfully applied to acquire information from a forceps of a da Vinci surgical robot in three main scenarios: pulsation detection for the identification of vessels, palpation of underlying bony structures, and texture differentiation during palpation of different types of tissue [9, 10]. However, in these studies, the sensor unit was attached to the robotic tool at a location inside the sterile zone, making it complex to fulfill the requirements for real use in a clinical environment. In this work, two potential clinically feasible locations of the audio sensor unit are studied. The main objective is to show that haptic information can be obtained using audio sensing without violating the clinical framework and to demonstrate the feasibility of the concept for real scenarios. For that, an experimental setup for producing interactions between a da Vinci instrument and three
different tissues was implemented. The acquired audio recordings were then analysed in terms of their resulting signal-to-noise ratio (SNR) and their capability to distinguish between different tissues.

*Corresponding author: Alfredo Illanes, Otto-von-Guericke University Magdeburg, Medical Faculty, Magdeburg, Germany
Anna Schauer, Thomas Sühn, Axel Boese and Michael Friebe, Otto-von-Guericke University Magdeburg, Medical Faculty, Magdeburg, Germany
Roland Croner, Clinic for General, Visceral, Vascular and Transplant Surgery, Otto-von-Guericke University, Magdeburg, Germany

Current Directions in Biomedical Engineering 2020; 6(1): 20200036
Open Access. © 2020 Alfredo Illanes et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.
Materials and methods
Due to the requirement to attach the measuring unit in the non-sterile area, audio measurements in the operating room cannot be performed directly on the instrument, as in the setup presented in [10]. During procedures with a da Vinci robot, the robotic arms are wrapped in a sterile drape to avoid contamination of the surgical field by the robot's non-sterile arms. Therefore, to make audio guidance in robotic tools feasible, a suitable position for the placement of the sensor must be identified that conforms to the requirements of non-invasiveness (sensor located at the proximal end of the instrument) and location in the non-sterile zone. As shown in Figure 1, two possible locations are studied in this work: the lateral adapter frame (left of Figure 1) and the lower adapter edge (right of Figure 1), referred to in the following as locations L1 and L2, respectively.
The locations are evaluated in terms of the meaningful audio signal information acquired concerning the instrument tip/tissue interaction process during the palpation of different tissues. For that, the acquired audio signals are evaluated according to three parameters: their signal-to-noise ratio, their capability to show different dynamics when different tissues are palpated, and their dynamical stability.
Experimental setup and data acquisition
An experimental setup was implemented following [10]. The experimental setup was intended to simulate the interaction of a da Vinci Endowrist instrument (Da Vinci Prograsp Forceps, Intuitive Surgical, California, USA) with surfaces of different texture. For this purpose, two synthetic materials, felt and denim fabric, mounted on a board base (structure board), were employed as samples (Figure 2a). Additionally, a porcine liver was used as a third, biological specimen. As a basic framework for supporting the instrument, a stable stand with an attached clamp was used. The clamp could be displaced laterally so that it performed a pivoting movement around the vertical axis (Figure 2b). A sterile da Vinci drape was placed over the basic framework, and the integrated instrument adapter of the drape was fixed into the bracket. For the tip/surface interactions, the specimens were placed under the instrument tip so that the normal force applied to them was determined by the weight of the instrument (Figure 2c). The instrument tip was driven in a horizontal movement along the specimen surface by displacing the bracket at an average velocity of 5 cm/s (indicated by the arrow in Figure 2d). Each unidirectional movement across the surface, referred to as one swipe, was considered one tool/tissue interaction event.
For the audio signal acquisition, a MEMS microphone (Adafruit I2S MEMS microphone SPH0645LM4H-B, Knowles, Illinois, USA) was attached to each of the studied locations (lateral adapter frame and lower adapter edge, as shown in Figure 1) using double-sided adhesive tape.
Figure 1: Potential locations for sensor placement identified on the sterile adapter. Left: lateral adapter frame and microphone position; middle: adapter frontal view; right: lower adapter edge and microphone position.

Figure 2: (a) texture board with different materials; (b) instrument holder framework; (c) lateral view of the experimental setup; (d) top view of the experimental setup.
For each sensor location and tested specimen, 15 tool/tissue interactions were recorded with a sampling frequency of 44,100 Hz (each interaction represents a swipe of the instrument tip over a tissue specimen). Each recording comprises a segment with only background noise, followed by the interaction event and finalized by a new background noise segment. A total of 90 audio recordings were generated and saved into a dataset.
Signal to Noise Ratio computation
The individual SNR per audio recording is calculated to analyse the SNR resulting from the interaction with the different tested tissues. For that, three segments are extracted from each audio recording: two signal segments preceding and following the interaction event and containing only background noise, and one segment extracted at the middle of the interaction event. Then the background noise energies, denoted E_n1 and E_n2, and the event energy E_e are computed. The SNR for each recording i is finally calculated as the ratio between the event energy and the average background noise energy, SNR_i = E_e / ((E_n1 + E_n2)/2).
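The SNR computation described above can be sketched as follows. The segment indices, the use of mean signal power as the energy measure, and the conversion to dB are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def segment_energy(x):
    """Mean power of a signal segment (assumed energy measure)."""
    return float(np.mean(np.square(x, dtype=np.float64)))

def recording_snr(audio, event_start, event_end, noise_len):
    """SNR of one recording: energy of a segment taken from the middle of
    the interaction event, divided by the average of the two background
    noise energies (segments before and after the event).
    Indices are in samples; the result is returned in dB."""
    e_n1 = segment_energy(audio[event_start - noise_len:event_start])
    e_n2 = segment_energy(audio[event_end:event_end + noise_len])
    # central portion of the interaction event (half of its duration)
    mid = (event_start + event_end) // 2
    half = (event_end - event_start) // 4
    e_e = segment_energy(audio[mid - half:mid + half])
    return 10.0 * np.log10(e_e / (0.5 * (e_n1 + e_n2)))
```

Applied to a synthetic recording consisting of quiet noise with a louder event in the middle, this yields a clearly positive SNR in dB.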
Spectral energy distribution analysis for distinguishing
between tissues
When an interventional instrument interacts with a given tissue, the friction between the tip of the instrument and the tissue results in an audio wave presenting dynamics whose time-varying changes can contain information, or patterns, of how the tissue sounds. As an example, Figure 3 displays the time-domain and time-scale representations of three audio signals obtained from one sensor location when swipes were performed over the three different tissues. The time-scale spectrum was obtained with a Continuous Wavelet Transformation (CWT) using a Morse mother wavelet. The x-axis of the CWT spectrum represents time, and the y-axis represents pseudo-frequencies ordered on a logarithmic scale between 0 and f_s/2 = 22,050 Hz, where f_s corresponds to the sampling frequency. It is possible to verify in this figure that even if tissues can show similar audio signal behaviour in the time-domain representation (felt and denim), the time-scale spectra are significantly different. This means that the time-varying characteristics of the audio signal are different for each tissue the instrument interacts with.
The main observed difference lies in the spectral energy distribution, which shows different dominant energy frequencies for each tissue. This is the information that we want to exploit to assess the capability of a sensor location to provide audio signals that can be used to distinguish between palpated tissues. For that, the spectrum is first divided into four bands corresponding roughly to pseudo-frequencies of very low (VLF: 5–35 Hz), low (LF: 35–250 Hz), middle (MF: 250–1500 Hz), and high (HF: 1500–9500 Hz) frequencies. For each band i, the energy E_i and its contribution E_i/E_t to the total spectral energy E_t of the event were then calculated, with i = VLF, LF, MF, HF.
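A minimal sketch of this band-energy analysis is given below. It uses a plain FFT power spectrum rather than the wavelet decomposition employed in the paper, and the band edges are the pseudo-frequency ranges quoted above; the function and variable names are illustrative only:

```python
import numpy as np

# Band edges in Hz, as quoted in the text (assumed, not exact pseudo-frequencies)
BANDS = {"VLF": (5, 35), "LF": (35, 250), "MF": (250, 1500), "HF": (1500, 9500)}

def band_energy_distribution(event, fs=44100):
    """Fraction of the event's spectral energy falling into each band.
    Returns a dict mapping band name i to E_i / E_t."""
    spec = np.abs(np.fft.rfft(event)) ** 2           # power spectrum of the event
    freqs = np.fft.rfftfreq(len(event), d=1.0 / fs)  # bin frequencies in Hz
    energies = {name: float(spec[(freqs >= lo) & (freqs < hi)].sum())
                for name, (lo, hi) in BANDS.items()}
    e_t = sum(energies.values())                     # total energy over the four bands
    return {name: e / e_t for name, e in energies.items()}
```

For a pure 100 Hz tone, essentially all of the energy falls into the LF band, as expected.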
Results

Figure 4 displays the computed SNRs for each of the 90 recordings of the dataset (45 per sensor location and 15 per tissue tested). The audio recordings obtained from the lateral location L1 exhibit high SNRs for nearly all recordings, particularly for the denim tissue. It is important to point out that even if liver tissue is soft
Figure 3: Time-domain and time-scale representations of the audio signal resulting from the interaction of the robotic tool with three different tissues.

Figure 4: Obtained SNR per audio recording for the two studied sensor locations.
compared to the other two tested tissues, the obtained SNRs for this tissue are also high. Location L2 also shows good SNRs for felt and denim tissues, but it exhibits less sensitivity to some of the tests made with liver tissue.
Figures 5 and 6 present the four-band energy distribution analysis for both studied locations. Figure 5 displays, for each tested tissue, two examples of the energy distribution of an interaction event. We can observe that the energy distribution follows an evident pattern for the two interactions with each tissue. Additionally, both locations show energy patterns that do not vary from one recording to the other.
Figure 6 shows the energy distribution over the whole dataset for both locations. This figure serves to analyse the stability of the obtained energy distribution patterns. Each energy distribution set (such as the ones shown in Figure 5) was arranged in a matrix where each row corresponds to the energy distribution of a single recording. Using this visualization, it is possible to make two important verifications. First, the energy distributions are highly stable within a set of interactions belonging to the same tissue. Second, the energy distributions differ according to the tissue. For example, at location L1, the felt tissue presents VLF and LF bands of similar intensity, both very high compared to
Figure 5: Contribution of the separated bands to the total spectral energy for recordings of the different tissues obtained from the two tested locations.
Figure 6: Energy contributions of the whole audio dataset for both studied locations.
MF and HF. More than 90% of the energy is concentrated in the lower frequency bands. For denim, 80% of the energy is concentrated in the VLF band, while for liver, more than 80% of the energy is concentrated in the LF range. A similar analysis can be done for location L2, where clear patterns for distinguishing between tissues can also be observed.
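The stability of the per-tissue patterns and their separation across tissues suggest that even a very simple classifier could exploit them. The following nearest-centroid sketch over four-band energy vectors is a hypothetical illustration of that idea, not a method from the study:

```python
import numpy as np

def tissue_centroids(distributions, labels):
    """Mean four-band energy vector per tissue label (hypothetical helper)."""
    centroids = {}
    for lab in set(labels):
        rows = [d for d, l in zip(distributions, labels) if l == lab]
        centroids[lab] = np.mean(rows, axis=0)
    return centroids

def classify(dist, centroids):
    """Assign the tissue whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lab: np.linalg.norm(dist - centroids[lab]))
```

With well-separated, stable band-energy patterns such as those reported, each recording falls near the centroid of its own tissue and is classified correctly.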
Conclusion

This work shows that audio with a high SNR, containing important dynamic information about tissues, can be obtained from the proximal end of a robotic tool with a clinically realistic sensor location. Two possible sensor placements on the sterile adapter of the robotic instrument have been successfully evaluated, showing evident energy distribution patterns for distinguishing interactions of the instrument tip with different tissues. The next step will be to test this setup with a functional da Vinci robot in order to analyse the robustness of the studied signal patterns with the forceps interacting with hard and soft tissues.
Research funding: The authors state no funding involved.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Competing interests: The authors state no conflict of interest.
References

1. Kim U, Lee D-H, Yoon WJ, Hannaford B, Choi HR. Force sensor integrated surgical forceps for minimally invasive robotic surgery. IEEE Trans Robot 2015;31:1214–24.
2. Hong MB, Jo Y-H. Design and evaluation of 2-dof compliant forceps with force-sensing capability for minimally invasive robot surgery. IEEE Trans Robot 2012;28:932–41.
3. Qasaimeh MA, Sokhanvar S, Dargahi J, Kahrizi M. PVDF-based microfabricated tactile sensor for minimally invasive surgery. J Microelectromech Syst 2008;18:195–207.
4. King C-H, Culjat MO, Franco ML, Bisley JW, Carman GP, Dutson EP, et al. A multielement tactile feedback system for robot-assisted minimally invasive surgery. IEEE Trans Haptics 2008;2.
5. Dalvand MM, Nahavandi S, Fielding M, Mullins J, Najdovski Z, Howe RD. Modular instrument for a haptically-enabled robotic surgical system (HeroSurg). IEEE Access 2018;6.
6. Khadem SM, Behzadipour S, Mirbagheri A, Farahmand F. A modular force-controlled robotic instrument for minimally invasive surgery: efficacy for being used in autonomous grasping against a variable pull force. Int J Med Robot Comput Assist Surg.
7. Illanes A, Boese A, Maldonado I, Pashazadeh A, Schauer A, Navab N, et al. Novel clinical device tracking and tissue event characterization using proximally placed audio signal acquisition and processing. Sci Rep 2018;8:1–11.
8. Illanes A, Sühn T, Esmaeili N, Maldonado I, Schauer A, Chen C-H, et al. Surgical audio guidance SURAG: extracting non-invasively meaningful guidance information during minimally invasive procedures. In: 2019 IEEE 19th international conference on bioinformatics and bioengineering (BIBE). IEEE, Greece; 2019.
9. Chen C-H, Sühn T, Illanes A, Maldonado I, Ahmad H, Wex C, et al. Proximally placed signal acquisition sensoric for robotic tissue tool interactions. Curr Dir Biomed Eng 2018;4.
10. Chen C, Sühn T, Kalmar M, Maldonado I, Wex C, Croner R, et al. Texture differentiation using audio signal analysis with robotic interventional instruments. Comput Biol Med 2019;112.
A multi-element tactile feedback (MTF) system has been developed to translate the force distribution, in magnitude and position, from 3times2 sensor arrays on surgical robotic end-effectors to the fingers via 3times2 balloon tactile displays. High detection accuracies from perceptual tests (> 96%) suggest that MTF may be an effective means to improve robotic control.