Conference Paper

On-line Recognition of Surgical Activity for Monitoring in the Operating Room

Conference: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008
Source: DBLP

ABSTRACT Surgery rooms are complex environments where many interactions take place between staff members and the electronic and mechanical systems. In spite of their inherent complexity, surgeries of the same kind bear numerous similarities and are usually performed with similar workflows. This makes it possible to design support systems for the Operating Room (OR), whose applications range from simple tasks, such as activating the OR lights or calling the next patient, to more complex ones, such as context-sensitive user interfaces or automatic reporting. An essential feature when designing such systems is the ability to recognize on-line what is happening inside the OR, based on recorded signals. In this paper, we present an approach that uses signals from the OR and Hidden Markov Models to recognize on-line the surgical steps performed by the surgeon during a laparoscopic surgery. We also explain how the system can be deployed in the OR. Experiments are presented using 11 real surgeries performed by different surgeons in several ORs, recorded at our partner hospital. We believe that similar systems will develop quickly in the near future to efficiently support surgeons, trainees and the medical staff in general, as well as to improve administrative tasks such as scheduling within hospitals.
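The on-line recognition scheme described above can be sketched as a normalized HMM forward pass, where hidden states are surgical steps and observations are discretized OR signals. All names and probability tables below are invented for illustration; the paper's actual models are learned from recorded surgeries.

```python
import numpy as np

# Toy on-line recognizer: a discrete HMM whose hidden states are surgical
# steps and whose observations are discretized OR signals (e.g. which
# instrument is currently active). All numbers are illustrative.

STEPS = ["dissection", "clipping", "cutting", "extraction"]

# Mostly left-to-right transitions: a step tends to persist or move forward.
A = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.90, 0.10, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.00, 0.00, 0.00, 1.00]])

# Emission table: P(observed signal symbol | step), with 3 signal symbols.
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.6, 0.2, 0.2]])

pi = np.array([1.0, 0.0, 0.0, 0.0])  # the surgery starts in the first step

def online_filter(observations):
    """Forward algorithm, renormalized at every step so it can run on-line.
    Yields the posterior over surgical steps after each incoming sample."""
    belief = pi * B[:, observations[0]]
    belief /= belief.sum()
    yield belief
    for o in observations[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then weight by emission
        belief /= belief.sum()
        yield belief

# A signal stream drifting from symbol 0 towards symbol 2.
stream = [0, 0, 1, 1, 1, 2, 2, 2]
for t, belief in enumerate(online_filter(stream)):
    print(t, STEPS[int(np.argmax(belief))])
```

Renormalizing at each step keeps the filter numerically stable over arbitrarily long surgeries, which is what makes the forward pass usable on-line rather than only in batch.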

  • ABSTRACT: There is currently great interest in analyzing the workflow of minimally invasive operations performed in a physical or simulation setting, with the aim of extracting important information that can be used for skills improvement, optimization of intraoperative processes, and comparison of different interventional strategies. The first step in achieving this goal is to segment the operation into its key interventional phases, which is currently approached by modeling a multivariate signal that describes the temporal usage of a predefined set of tools. Although this technique has shown promising results, it is challenged by the manual extraction of the tool usage sequence and the inability to simultaneously evaluate the surgeon's skills. In this paper we describe an alternative methodology for surgical phase segmentation and performance analysis based on Gaussian mixture multivariate autoregressive (GMMAR) models of the hand kinematics. Unlike previous work in this area, our technique employs signals from orientation sensors, attached to the endoscopic instruments of a virtual reality simulator, without considering which tools are employed at each time-step of the operation. First, based on pre-segmented hand motion signals, a training set of regression coefficients is created for each surgical phase using multivariate autoregressive (MAR) models. Then, a signal from a new operation is processed with GMMAR, wherein each phase is modeled by a Gaussian component of regression coefficients. These coefficients are compared to those of the training set. The operation is segmented according to the prior probabilities of the surgical phases estimated via GMMAR. The method also allows for the study of motor behavior and hand motion synchronization demonstrated in each phase, a quality that can be incorporated into modern laparoscopic simulators for skills assessment.
    Computer Aided Surgery, 02/2013 · Impact Factor 0.78
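As a rough sketch of the MAR part of this pipeline, the following fits vector-autoregressive coefficients to pre-segmented motion signals per phase and labels a new segment by its nearest coefficient vector. The GMMAR step (Gaussian mixtures over coefficients) is replaced here by a plain nearest-centroid rule, and the "hand motion" signals are synthetic stand-ins for sensor data.

```python
import numpy as np

def fit_mar(x, order=2):
    """Least-squares fit of a MAR(order) model to x of shape (T, channels);
    returns the stacked coefficient matrices as one flat feature vector."""
    T, _ = x.shape
    X = np.hstack([x[order - k : T - k] for k in range(1, order + 1)])
    Y = x[order:]                                   # x[t] ≈ sum_k A_k x[t-k]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef.ravel()

def classify(segment, centroids, order=2):
    """Assign a segment to the phase with the closest MAR coefficients."""
    f = fit_mar(segment, order)
    return min(centroids, key=lambda phase: np.linalg.norm(f - centroids[phase]))

rng = np.random.default_rng(0)

def simulate(a, n=400):
    """Toy 2-channel motion signal with AR(1) dynamics controlled by a."""
    x = np.zeros((n, 2))
    for t in range(1, n):
        x[t] = a * x[t - 1] + 0.1 * rng.standard_normal(2)
    return x

# One "training" segment per phase stands in for the training set.
centroids = {"dissection": fit_mar(simulate(0.9)),
             "clipping": fit_mar(simulate(-0.5))}
print(classify(simulate(0.9), centroids))
```

The key property being exploited is that phases with different motor dynamics produce distinguishable regression coefficients, so classification can proceed without knowing which tools are in use.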
  • ABSTRACT: Introduction: Automatic surgical activity recognition in the operating room (OR) is mandatory to enable assistive surgical systems to manage the information presented to the surgical team. Therefore, the purpose of our study was to develop and evaluate an activity recognition model. Material and methods: The system was conceived as a hierarchical recognition model which separated the recognition task into activity aspects. The concept used radio frequency identification (RFID) for instrument recognition and accelerometers to infer the performed surgical action. Activity recognition was done by combining intermediate results of the aspect recognition. A basic scheme of signal feature generation, clustering and sequence learning was replicated in all recognition subsystems. Hidden Markov models (HMM) were used to generate probability distributions over aspects and activities. Simulated functional endoscopic sinus surgeries (FESS) were used to evaluate the system. Results and discussion: The system was able to detect surgical activities with an accuracy of 95%. Instrument recognition performed best, with 99% accuracy. Action recognition showed a lower accuracy of 81% due to the high variability of surgical motions. All stages of the recognition scheme were evaluated. The model allows distinguishing several surgical activities in an unconstrained surgical environment. Future improvements could push activity recognition even further.
    Minimally Invasive Therapy & Allied Technologies (MITAT): official journal of the Society for Minimally Invasive Therapy, 01/2014 · Impact Factor 1.33
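The combination step of such a hierarchical recognizer can be illustrated with a minimal fusion rule: each aspect subsystem outputs a posterior, and activities are scored by combining them under a conditional-independence (naive Bayes) assumption. The activities, aspect symbols and probability tables below are invented, and the HMM-based subsystems are reduced to fixed posteriors for brevity.

```python
import numpy as np

# Minimal sketch of the combination step in a hierarchical recognizer: two
# aspect subsystems (instrument via RFID, action via accelerometers) each
# output a posterior, and activities are scored by fusing them. All tables
# here are illustrative, not taken from the paper.

ACTIVITIES = ["incision", "suction", "suturing"]

# Rows = activities; columns = aspect symbols.
P_instr = np.array([[0.8, 0.1, 0.1],    # scalpel, suction tube, needle holder
                    [0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8]])
P_action = np.array([[0.7, 0.2, 0.1],   # cut, aspirate, stitch
                     [0.2, 0.7, 0.1],
                     [0.1, 0.1, 0.8]])

def combine(instr_post, action_post, prior=None):
    """Fuse aspect posteriors into a posterior over surgical activities,
    assuming the aspects are conditionally independent given the activity."""
    prior = np.ones(len(ACTIVITIES)) if prior is None else prior
    score = prior * (P_instr @ instr_post) * (P_action @ action_post)
    return score / score.sum()

# The subsystems are fairly sure about "needle holder" plus "stitch":
post = combine(np.array([0.05, 0.05, 0.9]), np.array([0.1, 0.1, 0.8]))
print(ACTIVITIES[int(np.argmax(post))])
```

Factoring recognition into aspects keeps each subsystem's state space small; the fusion step is where the intermediate results are reconciled into one activity estimate.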
  • ABSTRACT: Assessment of surgical skills based on virtual reality (VR) technology has received major attention in recent years, with special focus placed on experience discrimination via hand motion analysis. Although successful, this approach is restricted from extracting additional important information about the trainee's hand kinematics. In this study, we investigate the role of hand motion connectivity in the performance of a laparoscopic cholecystectomy on a VR simulator. Two groups were considered: experienced residents and beginners. The connectivity pattern of each subject was evaluated by analyzing their hand motion signals with multivariate autoregressive (MAR) models. Our analysis included the entire as well as key phases of the operation. The results revealed that experienced residents outperformed beginners in terms of the number, magnitude and covariation of the MAR weights. The magnitude of the coherence spectra between different combinations of hand signals was in favor of the experienced group. Yet, the more challenging (in terms of hand movement activity) an operational phase was, the more connections were generated, with experienced subjects performing more coordinated gestures per phase. The proposed approach provides a suitable basis for hand motion analysis of surgical trainees and could be utilized in future VR simulators for skill assessment.
    Medical & Biological Engineering & Computing, 03/2013 · Impact Factor 1.76
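The coherence analysis mentioned above can be illustrated with SciPy's magnitude-squared coherence estimator. The two "hand" channels here are synthetic (a shared 2 Hz component plus independent noise), and the sampling rate is an assumption, not a real kinematic recording setup.

```python
import numpy as np
from scipy.signal import coherence

# Illustration of coherence between two motion channels: both share a 2 Hz
# "coordinated movement" component plus independent noise, so coherence
# should peak near 2 Hz. Signals and sampling rate are synthetic stand-ins.

fs = 100.0                        # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)      # 60 seconds of signal
rng = np.random.default_rng(1)

shared = np.sin(2 * np.pi * 2.0 * t)                # common 2 Hz component
left = shared + 0.5 * rng.standard_normal(t.size)   # left-hand channel
right = shared + 0.5 * rng.standard_normal(t.size)  # right-hand channel

# Magnitude-squared coherence via Welch's method.
f, Cxy = coherence(left, right, fs=fs, nperseg=512)
print(f"peak coherence {Cxy.max():.2f} at {f[int(np.argmax(Cxy))]:.1f} Hz")
```

High coherence at the shared frequency and low coherence elsewhere is exactly the kind of synchronization signature the study associates with coordinated bimanual gestures.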
