Conference Paper

On-line Recognition of Surgical Activity for Monitoring in the Operating Room

Conference: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008
Source: DBLP


Operating rooms are complex environments where many interactions take place between staff members and electronic and mechanical systems. In spite of this inherent complexity, surgeries of the same kind bear numerous similarities and are usually performed with similar workflows. This makes it possible to design support systems for the Operating Room (OR), with applications ranging from simple tasks, such as activating the OR lights or calling for the next patient, to more complex ones, such as context-sensitive user interfaces or automatic reporting. An essential feature of such systems is the ability to recognize on-line, from recorded signals, what is happening inside the OR. In this paper, we present an approach that uses signals from the OR and Hidden Markov Models to recognize on-line the surgical steps performed by the surgeon during a laparoscopic surgery. We also explain how the system can be deployed in the OR. Experiments are presented on 11 real surgeries performed by different surgeons in several ORs, recorded at our partner hospital. We believe that similar systems will develop quickly in the near future to efficiently support surgeons, trainees and medical staff in general, as well as to improve administrative tasks such as scheduling within hospitals.
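The core technique in the abstract, on-line recognition with a Hidden Markov Model, can be sketched as forward-algorithm filtering: after each new OR signal, the posterior over phases is updated. The phase names, transition matrix, and emission probabilities below are invented for illustration; the paper's actual model is trained on signals recorded during real laparoscopic surgeries.

```python
import numpy as np

# Invented phase set and parameters for illustration only.
phases = ["preparation", "dissection", "clipping", "extraction", "closure"]
n = len(phases)

# Mostly left-to-right transitions: surgical workflows are largely sequential.
A = np.full((n, n), 0.01)
for i in range(n):
    A[i, i] = 0.90
    if i + 1 < n:
        A[i, i + 1] = 0.08
A /= A.sum(axis=1, keepdims=True)

# P(instrument signal is on | phase), one invented binary device signal.
p_on = np.array([0.10, 0.80, 0.90, 0.30, 0.05])

def forward_step(alpha, signal_on):
    """One on-line filtering update of the phase posterior (forward algorithm)."""
    likelihood = p_on if signal_on else 1.0 - p_on
    alpha = likelihood * (alpha @ A)
    return alpha / alpha.sum()

alpha = np.zeros(n)
alpha[0] = 1.0                         # surgery starts in the first phase
for signal_on in [0, 0, 1, 1, 1, 0]:   # toy stream of device readings
    alpha = forward_step(alpha, signal_on)
print(phases[int(np.argmax(alpha))], alpha.round(3))
```

Because each update only needs the previous posterior and the newest signal, the filter runs in real time as the surgery progresses.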



Available from: Tobias Blum
    • "Additional variants of surgical process models based on Markov theory have been developed [19] [20]. In [21], a Hidden Markov Model (HMM) was designed for a laparoscopic cholecystectomy. The model represented a limited number of high-level states that were recognized on-line using endoscopic videos and device signals."
    ABSTRACT: OBJECTIVE: Effective time and resource management in the operating room requires process information concerning the surgical procedure being performed. A major parameter relevant to the intraoperative process is the remaining intervention time. The work presented here describes an approach for the prediction of the remaining intervention time based on surgical low-level tasks. MATERIALS AND METHODS: A surgical process model optimized for time prediction was designed together with a prediction algorithm. The prediction accuracy was evaluated for two different neurosurgical interventions: discectomy and brain tumor resections. A repeated random sub-sampling validation study was conducted based on 20 recorded discectomies and 40 brain tumor resections. RESULTS: The mean absolute error of the remaining intervention time predictions was 13 min 24 s for discectomies and 29 min 20 s for brain tumor removals. The error decreases as the intervention progresses. DISCUSSION: The approach discussed allows for the on-line prediction of the remaining intervention time based on intraoperative information. The method is able to handle demanding and variable surgical procedures, such as brain tumor resections. A randomized study showed that prediction accuracies are reasonable for various clinical applications. CONCLUSION: The predictions can be used by the OR staff, the technical infrastructure of the OR, and centralized management. The predictions also support intervention scheduling and resource management when resources are shared among different operating rooms, thereby reducing resource conflicts. The predictions could also contribute to the improvement of surgical workflow and patient care.
    Full-text · Article · Oct 2012 · Journal of Biomedical Informatics
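The remaining-time prediction described in this abstract can be illustrated with a toy sketch: given the current low-level task, sum the expected time left in that task and the mean durations of the tasks still to come. The task names and durations below are invented; the cited work learns a surgical process model from recorded interventions rather than using fixed means.

```python
# Invented mean task durations in minutes, in canonical order.
mean_duration = {
    "incision": 6.0,
    "dissection": 25.0,
    "resection": 40.0,
    "hemostasis": 12.0,
    "closure": 15.0,
}
task_order = list(mean_duration)

def remaining_time(current_task, elapsed_in_task):
    """Estimate remaining minutes: rest of current task plus all later tasks."""
    i = task_order.index(current_task)
    rest_of_current = max(mean_duration[current_task] - elapsed_in_task, 0.0)
    later = sum(mean_duration[t] for t in task_order[i + 1:])
    return rest_of_current + later

print(remaining_time("resection", 10.0))  # → 57.0
```

This also shows why the abstract reports errors shrinking as the intervention progresses: later predictions sum over fewer uncertain task durations.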
    • "After identifying 4 states of a common surgical procedure, relevant image features were extracted and HMMs were trained to detect OR occupancy. Padoy et al. [4] also used low-level image features through 3D motion flows combined with hierarchical HMMs to recognize online surgical phases. Secondly, the use of endoscope videos in Minimally Invasive Surgery (MIS) has been investigated. "
    ABSTRACT: The need for a better integration of the new generation of computer-assisted surgical systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the operating room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, thereby characterizing each frame of the video. Five image-based classifiers were therefore implemented. A pupil segmentation step was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic time warping and hidden Markov models were tested. This combination brought together the advantages of both methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%.
    Full-text · Article · Dec 2011 · IEEE transactions on bio-medical engineering
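One of the time series techniques named in this abstract, dynamic time warping (DTW), can be sketched in a few lines: a query sequence of per-frame visual cues is compared against labelled reference sequences and assigned the nearest label. The cue vectors and phase names below are invented toy data, not those of the cited cataract study.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)·len(b)) DTW with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Invented reference cue sequences per phase
# (each frame: [pupil_visible, instrument_in_view]).
references = {
    "incision":  [[1, 0], [1, 1], [1, 1]],
    "polishing": [[1, 1], [0, 1], [0, 1], [0, 1]],
}

query = [[1, 0], [1, 1], [1, 1], [1, 1]]
label = min(references, key=lambda p: dtw_distance(query, references[p]))
print(label)  # → incision
```

DTW tolerates the varying execution speeds typical of surgical phases, since frames of the query may align to a stretched or compressed reference.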
    • "Application of these techniques in recognizing human activities in the OR represents a small minority in prior work. Padoy et al. [10] used instrument signals to infer surgical HLTs. The signals were directly processed by the inference engine in the form of a Hidden Markov Model (HMM). "
    ABSTRACT: Recognizing and understanding surgical high-level tasks from sensor readings is important for surgical workflow analysis. Surgical high-level task recognition is also a challenging task in ubiquitous computing because of the inherent uncertainty of sensor data and the complexity of the operating room environment. In this paper, we present a framework for recognizing high-level tasks from low-level noisy sensor data. Specifically, we present a Markov-based approach for inferring high-level tasks from a set of low-level sensor data. We also propose to clean the noisy sensor data using a Bayesian approach. Preliminary results on a noise-free dataset of ten surgical procedures show that it is possible to recognize surgical high-level tasks with detection accuracies up to 90%. Introducing missed and ghost errors to the sensor data results in a significant decrease of the recognition accuracy. This supports our claim to use a cleaning algorithm before the training step. Finally, we highlight exciting research directions in this area.
    Full-text · Article · Jun 2011 · Journal of Biomedical Informatics
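The cleaning step this abstract argues for, handling "missed and ghost errors" before training, can be illustrated as a Bayesian update of the probability that a sensor event really occurred, given the observed reading and known error rates. The prior and the miss/ghost rates below are invented for illustration and would in practice be estimated from annotated data.

```python
# Hypothetical Bayesian cleaning of one noisy binary sensor reading.
def clean_reading(observed_on, prior_on=0.5, p_miss=0.1, p_ghost=0.05):
    """Posterior probability that the sensor event really occurred."""
    if observed_on:
        num = (1 - p_miss) * prior_on           # true event detected
        den = num + p_ghost * (1 - prior_on)    # plus ghost detections
    else:
        num = p_miss * prior_on                 # true event missed
        den = num + (1 - p_ghost) * (1 - prior_on)
    return num / den

# Threshold the posterior to obtain cleaned readings for the inference engine.
cleaned = [clean_reading(r) > 0.5 for r in [1, 0, 1, 1]]
print(cleaned)  # → [True, False, True, True]
```

With low error rates, the cleaned stream mostly follows the observations, but as `p_miss` or `p_ghost` grows, the prior increasingly overrides individual noisy readings, which is why cleaning before the training step matters.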