Article

Surgical soundtracks: automatic acoustic augmentation of surgical procedures


Abstract

Purpose: Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich in information sources that are all presented through the same channel pose the risk of overstimulation and of crucial information being missed. Augmenting the cognitive field with additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, to avoid alarm fatigue. Methods: In this work, without loss of generality to other procedures, we propose a method for the aural augmentation of medical procedures via automatic modification of musical pieces. Results: Evaluations of this concept, regarding recognizability of the conveyed information along with qualitative aesthetics, show the potential of our method. Conclusion: In this paper, we proposed a novel sonification method for the automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to convey useful information successfully. This work opens a path for advanced sonification techniques in the operating room, in order to complement traditional visual displays and convey information more efficiently.
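The core idea, conveying task state by continuously modifying a playing musical piece rather than emitting discrete alarms, can be illustrated as a parameter mapping. The sketch below is purely hypothetical and not the authors' actual design: a normalized task deviation drives the gain, low-pass cutoff, and detune applied to a melody stem.

```python
def deviation_to_effect(deviation: float) -> dict:
    """Map a normalized task deviation (0 = on target, 1 = far off)
    to parameters of a musical effect chain applied to the playing
    piece. Hypothetical mapping for illustration only."""
    deviation = min(max(deviation, 0.0), 1.0)
    return {
        # melody stem fades out as the deviation grows
        "melody_gain": 1.0 - deviation,
        # low-pass cutoff sweeps from 8 kHz (clean) down to 500 Hz (dull)
        "lowpass_hz": 8000.0 * (500.0 / 8000.0) ** deviation,
        # mild detune (in cents) adds audible tension at larger errors
        "detune_cents": 50.0 * deviation,
    }
```

Because the effect parameters change smoothly with the deviation, the music stays pleasant near the target and only degrades gradually, which is the property the abstract contrasts with alarm-style feedback.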


... In contrast to sonification apps, music making apps have already been recognized as a suitable means for education over a decade ago [9]. Many studies concluded that a musical sonification can be informative, motivating, engaging, easy to learn, and raise situation awareness [10,11,12]. ...
Conference Paper
Full-text available
To date, sonification apps are rare. Music apps, on the other hand, are widely used, and smartphone users like to play with music. In this manuscript, we present Mixing Levels, a spirit-level sonification based on music mixing. Tilting the smartphone adjusts the volumes of 5 musical instruments in a rock music loop; only when the device is perfectly leveled are all instruments in the mix well audible. The app is intended to be both useful and fun. Since the app appears like a music mixing console, people enjoy interacting with Mixing Levels, which makes learning the sonification a playful experience.
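The tilt-to-mix mapping described above can be sketched as follows. The per-stem attenuation rates are invented for illustration and are not taken from the app; the only property preserved is that all stems are fully audible exactly when the device is level.

```python
def stem_volumes(tilt_x_deg: float, tilt_y_deg: float) -> dict:
    """Map device tilt to the volumes of 5 stems in a rock loop.
    At zero tilt every stem is fully audible; tilting attenuates the
    stems at different (invented) rates, so a balanced mix signals
    'level'. Sensitivities are illustrative, not the app's values."""
    mag = (tilt_x_deg ** 2 + tilt_y_deg ** 2) ** 0.5
    sensitivity = {  # attenuation per degree of tilt
        "drums": 0.00, "bass": 0.02, "guitar": 0.04,
        "keys": 0.06, "vocals": 0.08,
    }
    return {stem: max(0.0, 1.0 - s * mag) for stem, s in sensitivity.items()}
```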
... Such basic sonification methods do not extrapolate well to more complex multidimensional scenarios, as they lack consideration of psychoacoustics and sound design in their configuration. To address this problem, sonification methods [57][58][59] have been proposed with more focus on usability and clinical integration, using more flexible and creative sound designs; however, these approaches are unsuitable for presenting precise navigation data. ...
Article
Full-text available
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and their integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a novel sonification solution for alignment tasks in four degrees of freedom based on frequency modulation (FM) synthesis. We compared the resulting accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while decreasing the surgeon's need to focus on visual navigation displays instead of the natural focus on surgical tools and targeted anatomy during task execution.
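FM synthesis, the technique the abstract names, builds a tone whose timbre is controlled by a modulation index and a modulator frequency. The sketch below shows a generic FM oscillator and one hypothetical way to map alignment errors onto its parameters; it does not reproduce the paper's actual four-degree-of-freedom design.

```python
import math

def fm_sample(t: float, carrier_hz: float, mod_hz: float, index: float) -> float:
    """One sample of a frequency-modulated tone:
    sin(2*pi*f_c*t + I * sin(2*pi*f_m*t))."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

def alignment_to_fm(angle_err_deg: float, depth_err_mm: float) -> dict:
    """Hypothetical mapping of two alignment errors onto FM parameters:
    larger errors raise the modulation index (rougher timbre) and shift
    the modulator frequency. Not the paper's actual mapping."""
    return {
        "carrier_hz": 440.0,
        "mod_hz": 220.0 + 20.0 * abs(depth_err_mm),
        "index": 0.5 + 0.3 * abs(angle_err_deg),
    }
```

The appeal of FM for alignment tasks is that the modulation index changes timbre rather than loudness, so error magnitude can be heard without the signal becoming intrusive.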
Article
Full-text available
Transcranial Magnetic Stimulation (TMS) is an effective non-invasive treatment method for major depressive disorder. Accurate placement of an electromagnetic coil on the patient’s head during repetitive TMS is the key for stimulation of the desired brain regions and positive treatment outcome. Neuronavigation systems constitute the state-of-the-art method to accurately stimulate the appropriate brain region. Local separation of navigation information and the patient anatomy in combination with intricate visualisations and cumbersome setup limits the benefits and usability of this method. The present study addresses these problems by proposing an audiovisual Augmented reality (AR) system for coil positioning during TMS. The system sonifies and visualises translational and rotational differences between a target and the current instrument position using a minimalistic graphical user interface and auditory display. Effects of cross-modal integration on usability and targeting precision were shown in an experiment comparing audiovisual AR, audio AR and visual neuronavigation. Our approach revealed significant improvements in task time of all proposed AR conditions over neuronavigation (p < 0.001). Conversely, the neuronavigation system achieved significantly better targeting accuracy (p < 0.001). A purely auditory guidance achieved comparable performance as the audiovisual interface designs.
Article
Full-text available
Data-driven computational approaches have evolved to enable extraction of information from medical images with reliability, accuracy, and speed, which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theaters are extremely complex and typically rely on poorly integrated intraoperative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer-assisted interventions, we highlight the crucial need to take the context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer-assisted intervention (CAI4CAI) arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors, and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision-making ultimately producing more precise and reliable interventions.
Article
Full-text available
The aim of this paper is to propose an interdisciplinary classification of digital audio effects to facilitate communication and collaborations between DSP programmers, sound engineers, composers, performers and musicologists. After reviewing classifications reflecting technological, technical and perceptual points of view, we introduce a transverse classification to link discipline-specific classifications into a single network containing various layers of descriptors, ranging from low-level features to high-level features. Simple tools using the interdisciplinary classification are introduced to facilitate the navigation between effects, underlying techniques, perceptual attributes and semantic descriptors. Finally, concluding remarks on implications for teaching purposes and for the development of audio effects user interfaces based on perceptual features rather than technical parameters are presented.
Article
Full-text available
We discuss an experimental audio feedback system and method for positional guidance in real-time surgical instrument placement tasks. This system is intended for future usability testing in order to ascertain the efficacy of the use of the aural modality for assisting surgical placement tasks in the operating room. The method is based on translating spatial parameters of a surgical instrument or device, such as its position or velocity with respect to some coordinate system, into a set of audio feedback parameters along the coordinates of a generalised audio space. Error signals that correspond to deviations of the actual instrument trajectory from an optimal trajectory are transformed into a set of audio signals that indicate to the user whether correction is necessary. An experimental hardware platform was assembled using commercially available hardware. A system for 3-D modelling, surgical procedure planning, real-time instrument tracking and audio generation was developed. Prototype software algorithms for generating audio feedback as a function of instrument navigation were designed and implemented. The system is sufficient for future usability testing. This technology is still in an early stage of development, with formal usability and performance testing yet to be done. However, informal usability experiments in the course of the basic engineering process indicate the use of audio is a promising alternative to, or redundancy measure in support of visual display technology for intra-operative navigation.
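The described translation of spatial error signals into coordinates of a generalized audio space can be sketched as a simple mapping. The axes and scales below (stereo pan, pitch offset, loudness) are illustrative assumptions, not the authors' design.

```python
def deviation_to_audio(dx_mm: float, dy_mm: float, dz_mm: float):
    """Map instrument-tip deviation from the planned trajectory onto
    three coordinates of a generalized audio space. Hypothetical axes:
    lateral error -> stereo pan, vertical error -> pitch offset,
    overall error magnitude -> loudness of a guidance tone."""
    pan = max(-1.0, min(1.0, dx_mm / 10.0))            # full pan at +/-10 mm
    pitch_semitones = max(-12.0, min(12.0, 1.2 * dy_mm))
    magnitude = (dx_mm ** 2 + dy_mm ** 2 + dz_mm ** 2) ** 0.5
    loudness = min(1.0, magnitude / 20.0)              # saturates at 20 mm
    return pan, pitch_semitones, loudness
```

On the planned trajectory the tone is silent and centered; each audible cue then points in the direction of the required correction, which is the error-signal behavior the abstract describes.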
Article
Full-text available
Background: An alternative mode of interaction with navigation systems for open liver surgery was requested. Surgeons who use such systems are impeded by having to constantly switch between viewing the navigation system screen and the patient during an operation. Methods: To this end, an auditory display system for open liver surgery is introduced with support for guiding the tracked instrument towards and remaining on a predefined resection line. To evaluate the method, a clinically orientated user study with 12 surgeons was conducted. Results: It is shown in qualitative results from the user study that the proposed auditory display is recognized as a useful addition to the current visual mode of interaction. It was revealed in a statistical analysis that participants spent less time looking on the screen (10% vs. 96%). Accuracy for resection guidance was significantly improved when using auditory display as an additional information channel (0.6 vs. 1.4 mm); however, the overall time for the resection task was shorter without auditory display (47 vs. 24 s). Conclusions: By reducing dependence on the visual modality during resection guidance, the auditory display is well suited to become integrated in navigation systems for liver surgery.
Article
Full-text available
Purpose: We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. Methods: Warning navigation modules were developed and integrated into a free open source software platform. To obtain high registration accuracy, we used a high-precision laser-sintered template of the patient's bone surface to register the computed tomography (CT) images. We calculated the closest distance between the drill tip and the surface of the facial nerve during drilling. When the drill tip entered the safe regions, the navigation system provided an auditory and visual signal which differed in each safe region. To evaluate the effectiveness of the system, we performed phantom experiments for maintaining a given safe margin from the facial nerve when drilling bone models, with and without the navigation system. The error of the safe margin was measured on postoperative CT images. In real surgery, we evaluated the feasibility of the system in comparison with conventional facial nerve monitoring. Results: The navigation accuracy was submillimeter for the target registration error. In the phantom study, the task with navigation ([Formula: see text] mm) was more successful with smaller error, than the task without navigation ([Formula: see text] mm, [Formula: see text]). The clinical feasibility of the system was confirmed in three real surgeries. Conclusions: This system could assist surgeons in preserving the facial nerve and potentially contribute to enhanced patient safety in the surgery.
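The closest-distance computation and graded safe-region warnings described above can be sketched as follows. The point-cloud representation of the nerve surface and the threshold values are hypothetical, not the clinical values used in the study.

```python
def closest_distance(tip, nerve_points):
    """Closest Euclidean distance (mm) from the drill tip to a facial
    nerve surface sampled as a point cloud of (x, y, z) tuples."""
    return min(
        ((tip[0] - p[0]) ** 2 + (tip[1] - p[1]) ** 2 + (tip[2] - p[2]) ** 2) ** 0.5
        for p in nerve_points
    )

def warning_level(dist_mm, thresholds=(3.0, 2.0, 1.0)):
    """Graded warning: 0 = safe, rising by one as the drill tip crosses
    each safe-region boundary. Thresholds are illustrative only."""
    return sum(dist_mm < t for t in thresholds)
```

A distinct auditory and visual signal per warning level, as in the paper, would then be keyed off the integer returned by `warning_level`.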
Article
Full-text available
The Mozart Effect is a phenomenon whereby certain pieces of music induce temporary enhancement in “spatial temporal reasoning.” To determine whether the Mozart Effect can improve surgical performance, 55 male volunteers (mean age = 20.6 years, range = 16-27), novice to surgery, were timed as they completed an activity course on a laparoscopic simulator. Subjects were then randomized for exposure to 1 of 2 musical pieces by Mozart (n = 21) and Dream Theater (n = 19), after which they repeated the course. Following a 15-minute exposure to a nonmusical piece, subjects were exposed to one of the pieces and performed the activity course a third time. An additional group (n = 15) that was not co-randomized performed the tasks without any exposure to music. The percent improvements in completion time between 3 successive trials were calculated for each subject and group means compared. In 2 of the tasks, subjects exposed to the Dream Theater piece achieved approximately 30% more improvement (26.7 ± 8.3%) than those exposed to the Mozart piece (20.2 ± 7.8%, P = .021) or to no music (20.4 ± 9.1%, P = .049). Distinct patterns of covariance between baseline performance and subsequent improvement were observed for the different musical conditions and tasks. The data confirm the existence of a Mozart Effect and demonstrate for the first time its practical applicability. Prior exposure to certain pieces may enhance performance in practical skills requiring spatial temporal reasoning.
Article
Full-text available
A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling.
Article
Full-text available
Under conventional "open" surgery, the physician has to take care of the patient, interact with other clinicians and check several monitoring devices. Nowadays, computer-assisted surgery proposes to integrate 3-D cameras in the operating theatre in order to assist the surgeon in performing minimally invasive surgical punctures. The cameras localize the needle and the computer guides the surgeon towards an intracorporeal, clinically defined target. A visualization system (screen) is employed to provide the surgeon with indirect visual spatial information about the intracorporeal position of the needle. The present work proposes to use another sensory modality to guide the surgeon, thus keeping the visual modality fully dedicated to the surgical gesture. For this, the sensory-substitution paradigm using Bach-y-Rita's "Tongue Display Unit" (TDU) is exploited to provide the surgeon with information about the tool position. The TDU device is composed of a 6 x 6 matrix of electrodes transmitting electrotactile information on the tongue surface. The underlying idea consists in transmitting information about the deviation of the needle movement with regard to a preplanned "optimal" trajectory. We present an experiment assessing the guidance effectiveness of an intracorporeal puncture under TDU guidance with respect to the performance evidenced under a usual visual guidance system.
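Projecting the needle deviation onto the 6 x 6 electrode matrix amounts to a clamp-and-quantize step. The sketch below shows one such mapping; the ±20 mm span and the axis orientation are assumptions for illustration.

```python
def electrode_for_deviation(dx_mm: float, dy_mm: float,
                            grid: int = 6, span_mm: float = 20.0):
    """Quantize the needle's deviation in the plane orthogonal to the
    planned trajectory onto a 6 x 6 tongue-electrode grid, returning a
    (row, col) pair. The +/-20 mm span and the axis orientation are
    illustrative assumptions, not the TDU study's calibration."""
    def to_index(v: float) -> int:
        v = max(-span_mm, min(span_mm, v))          # clamp to the span
        return min(grid - 1, int((v + span_mm) / (2 * span_mm) * grid))
    return to_index(dy_mm), to_index(dx_mm)         # (row, col)
```

Activating the returned electrode tells the surgeon, through the tongue, in which direction the needle has drifted from the preplanned trajectory.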
Article
Full-text available
Most conventional computer-aided navigation systems assist the surgeon visually by tracking the position of an ancillary and by superposing this position into the 3D preoperative imaging exam. This paper aims at adding to such navigation systems a device that will guide the surgeon towards the target, following a complex preplanned ancillary trajectory. We propose to use tactile stimuli for such guidance, with the design of a vibrating belt. An experiment using a virtual surgery simulator in the case of skull base surgery is conducted with 9 naïve subjects, assessing the vibrotactile guidance effectiveness for complex trajectories. Comparisons between a visual guidance and a visual+tactile guidance are encouraging, supporting the relevance of such tactile guidance paradigm.
Article
Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information brings distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required for perceiving information, and the avoidance of alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical-modeling sound synthesis that targets focus-demanding tasks requiring extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments are conducted to assess the feasibility of the proposed method and compare it against visual augmented reality in high-precision tasks. The observed quantitative results suggest that sound patches generated by physical modeling achieve the desired goal of improving the user experience and general task performance with minimal training.
Conference Paper
Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich in information sources that are all presented through the same channel pose the risk of overstimulation and of crucial information being missed. Augmenting the cognitive field with additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, to avoid alarm fatigue. In this work, without loss of generality to other procedures, we propose a method for the aural augmentation of ophthalmic procedures via automatic modification of musical pieces. Evaluations of this first proof of concept regarding recognizability of the conveyed information, along with qualitative aesthetics, show the potential of our method.
Article
The poor ergonomics of laparoscopic surgery are a widely recognized source of difficulty for surgeons, leading to sub-optimal surgeon performance and sometimes injury to the patient. The main causes for this are lost and distorted perception of interaction forces and instrument position. The latter, due to losses in visual and kinaesthetic depth perception and modified hand-eye coordination, can prevent precise navigation of instruments towards surgical targets or away from sensitive anatomic structures. This situation prompts us to explore methods for efficiently assisting the surgeon during instrument navigation. Here, we present experiments aimed at providing insights into the effectiveness of haptic (tactile and kinaesthetic), visual and combined feedback in assisting the navigation of a laparoscopic instrument tip towards a surgical target. Subjects placed in front of a laparoscopic trainer were tasked with following various instrument tip trajectories within a target plane while minimizing both deviations and task execution times. Feedback on the level of deviation was provided alternately through visual on-screen cues, tactile cues provided by vibration motors and/or kinaesthetic cues provided by a haptic interface co-manipulating the surgical instrument. Evaluations of these forms of feedback over two series of experiments implicating a total of 35 subjects (34 non-surgeon novices, 1 surgeon intern with experience in laparoscopy) show positive impacts of providing such feedback on precision in instrument navigation. Visual, tactile and combined cues lead to increased precision in navigation (up to 25% increase in time spent on target, and 32% reduction in deviation amplitudes), but usually at the cost of reduced task execution speed. However, the use of kinaesthetic feedback through soft virtual fixtures provided in a co-manipulated robot-assisted surgery set-up both significantly improved precision (32% increase in time spent on target, and 70% reduction in deviation amplitudes) and task execution speed (30% reduction in task completion times). The combination of visual and tactile feedback was shown to be helpful in correcting larger deviations from the target. These preliminary results are promising for implementation of low-cost tactile or combined visual and tactile feedback in applications to conventional laparoscopic instrument navigation, as well as to robot-assisted laparoscopic surgery.
Article
Due mainly to drastically shortened recovery times and lower overall cost, minimally invasive surgery (MIS) is becoming standard for many surgical interventions. However, the associated loss of visual depth perception, difficult hand-eye coordination and distorted haptic sensation tend to complicate the task for the surgeon. In this paper, we explore the potential of simple visual, haptic or combined visual and haptic cues for intuitively assisting surgeons in moving their instrument tip within a predefined 3-D plane. 23 subjects carried out trajectory-following tasks within a plane under provision of 9 different combinations of visual and haptic guidance feedback. The evaluated forms of haptic feedback encompassed both tactile cues and kinaesthetic feedback using soft virtual fixtures. Results show a clear superiority of soft-guidance virtual fixtures over other forms of feedback, leading to performance levels above those obtained in open surgery. However, promising results for the use of cutaneous vibrotactile feedback were also obtained, with potential for integration in MIS tool handles.
Article
Objective: To determine the effects of surgeon-selected and experimenter-selected music on performance and autonomic responses of surgeons during a standard laboratory psychological stressor. Design: Within-subjects laboratory experiment. Setting: Hospital psychophysiology laboratory. Participants: A total of 50 male surgeons aged 31 to 61 years, who reported that they typically listen to music during surgery, volunteered for the study. Main Outcome Measurements: Cardiac responses, hemodynamic measures, electrodermal autonomic responses, task speed, and accuracy. Results: Autonomic reactivity for all physiological measures was significantly less in the surgeon-selected music condition than in the experimenter-selected music condition, which in turn was significantly less than in the no-music control condition. Likewise, speed and accuracy of task performance were significantly better in the surgeon-selected music condition than in the experimenter-selected music condition, which was also significantly better than the no-music control condition. Conclusion: Surgeon-selected music was associated with reduced autonomic reactivity and improved performance of a stressful nonsurgical laboratory task in study participants. (JAMA. 1994;272:882-884)
Article
Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. Randomized-controlled trial plus qualitative analysis. Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery. Laryngoscope, 2013.
Article
The trend in anesthesia care is toward increasing use of technologic non-invasive monitors. The American Society of Anesthesiologists has recently published recommended standards for basic patient monitoring, which include arterial blood pressure, electrocardiography, an oxygen analyzer, and a ventilator disconnection alarm. Monitors that are encouraged, but not mandatory, include pulse oximetry, capnography, and spirometry. Most monitors are fitted with alarm systems, usually with preset and modifiable thresholds, which produce an auditory signal when a high or low limit is passed. This survey evaluates the significance of auditory alarms that sound during routine anesthetic management.
Article
Little information is available about the effect of music on the operating room (OR) staff. The objective of this study was to evaluate the perception of the influence of music on physicians and nurses working in the OR. A questionnaire was designed and 250 copies were distributed to the doctors and nurses working in the OR at three hospitals. One hundred and seventy-one returned the completed questionnaire and were included in this study. 63% of the participants listen to music on a regular basis in the OR. Classical music is the most requested (58%) and most of the responders do not choose the type of music according to the type of the procedure. In our study, the nurses were more likely to listen to music and the willingness is higher among the female responders. The desired volume is lower as age increases and 78.9% of the participants claimed that music in the OR makes them calmer and more efficient. According to our study, music has a positive effect on the staff working in the operating rooms.
Eckel G, Iovino F, Caussé R (1995) Sound synthesis by physical modelling with Modalys. In: Proceedings of the International Symposium on Musical Acoustics, pp 479-482
Bluteau J, Dubois M-D, Coquillart S, Gentaz E, Payan Y (2010) Vibrotactile guidance for trajectory following in computer aided surgery. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 2085-2088. IEEE
Verfaille V, Guastavino C, Traube C (2006) An interdisciplinary approach to audio effect classification. In: Proceedings of the 9th International Conference on Digital Audio Effects