Conference Paper

On-line Recognition of Surgical Activity for Monitoring in the Operating Room

Conference: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008
Source: DBLP


Surgery rooms are complex environments where many interactions take place between staff members and the electronic and mechanical systems. In spite of their inherent complexity, surgeries of the same kind bear numerous similarities and are usually performed with similar workflows. This makes it possible to design support systems for the Operating Room (OR), with applications ranging from simple tasks, such as activating the OR lights or calling the next patient, to more complex ones, such as context-sensitive user interfaces or automatic reporting. An essential feature when designing such systems is the ability to recognize on-line what is happening inside the OR, based on recorded signals.

In this paper, we present an approach that uses signals from the OR and Hidden Markov Models to recognize on-line the surgical steps performed by the surgeon during a laparoscopic surgery. We also explain how the system can be deployed in the OR. Experiments are presented using 11 real surgeries performed by different surgeons in several ORs, recorded at our partner hospital.

We believe that similar systems will develop quickly in the near future to efficiently support surgeons, trainees and the medical staff in general, as well as to improve administrative tasks such as scheduling within hospitals.
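
The core mechanism described in the abstract — maintaining a belief over surgical phases that is updated on-line as instrument signals arrive — can be illustrated with a standard HMM forward-filtering step. The sketch below is a minimal illustration, assuming a three-phase left-to-right model and two binary instrument signals; the phase names and all probabilities are made-up placeholders, not the trained model from the paper.

```python
import numpy as np

# Hypothetical left-to-right HMM over three surgical phases.
# All names and probabilities are illustrative placeholders,
# not the trained parameters from the paper.
phases = ["preparation", "dissection", "closure"]

# Transition matrix: each phase mostly persists, sometimes advances.
A = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])

# Emission model: P(instrument k is active | phase), assuming two
# binary instrument-usage signals are observed at each time step.
B = np.array([[0.9, 0.1],   # preparation: instrument 0 mostly on
              [0.2, 0.8],   # dissection:  instrument 1 mostly on
              [0.1, 0.1]])  # closure:     both mostly off

def obs_likelihood(signals):
    """P(signals | phase) for binary signals under independent Bernoullis."""
    s = np.asarray(signals)
    return np.prod(B ** s * (1.0 - B) ** (1 - s), axis=1)

def forward_step(belief, signals):
    """One on-line filtering update: predict with A, weight by the
    observation likelihood, then renormalize."""
    updated = (belief @ A) * obs_likelihood(signals)
    return updated / updated.sum()

# A short stream of instrument on/off readings arriving during surgery.
belief = np.array([1.0, 0.0, 0.0])  # the surgery starts in "preparation"
for signals in [(1, 0), (1, 0), (0, 1), (0, 1), (0, 0)]:
    belief = forward_step(belief, signals)
    print(phases[int(belief.argmax())], belief.round(3))
```

Because the update touches only the current belief vector, it runs in constant time per observation, which is what makes the recognition usable on-line rather than only in retrospective analysis.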

Cited by:
    • "After identifying 4 states of a common surgical procedure, relevant image features were extracted and HMMs were trained to detect OR occupancy. Padoy et al. [4] also used low-level image features through 3D motion flows combined with hierarchical HMMs to recognize online surgical phases. Secondly, the use of endoscope videos in Minimally Invasive Surgery (MIS) has been investigated. "
    ABSTRACT: The need for better integration of the new generation of computer-assisted surgical systems has recently been emphasized. One necessity for achieving this objective is to retrieve data from the operating room (OR) with different sensors, and then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consists in defining several visual cues for extracting semantic information, thereby characterizing each frame of the video. Five image-based classifiers were therefore implemented. A pupil segmentation step was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model the time-varying data: dynamic time warping and hidden Markov models were both tested (a minimal dynamic time warping sketch is given after this list). This combination draws on the advantages of each method for a better understanding of the problem. The framework was finally validated through various studies: six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%.
    IEEE Transactions on Biomedical Engineering 12/2011; 59(4):966-76. DOI:10.1109/TBME.2011.2181168
    • "Application of these techniques in recognizing human activities in the OR represents a small minority in prior work. Padoy et al. [10] used instruments signals to infer surgical HLTs. The signals were directly processed by the inference engine in the form of a Hidden Markov Model (HMM). "
    ABSTRACT: Recognizing and understanding surgical high-level tasks from sensor readings is important for surgical workflow analysis. Surgical high-level task recognition is also a challenging problem in ubiquitous computing because of the inherent uncertainty of sensor data and the complexity of the operating room environment. In this paper, we present a framework for recognizing high-level tasks from low-level noisy sensor data. Specifically, we present a Markov-based approach for inferring high-level tasks from a set of low-level sensor data, and we propose to clean the noisy sensor data using a Bayesian approach (a one-step Bayesian cleaning sketch is given after this list). Preliminary results on a noise-free dataset of ten surgical procedures show that it is possible to recognize surgical high-level tasks with detection accuracies of up to 90%. Introducing missed and ghost errors into the sensor data results in a significant decrease in recognition accuracy, which supports our claim that a cleaning algorithm should be used before the training step. Finally, we highlight exciting research directions in this area.
    Journal of Biomedical Informatics 06/2011; 44(3):455-62. DOI:10.1016/j.jbi.2010.01.004
    • "Studies by Padoy et al. [9], [8] propose HMM-based approaches for online recognition of surgical steps. In the first approach, contextual data was extracted by processing images from the laparoscopic camera and manually extracting information about instruments being used from video recordings [9]. In the second approach, image analysis of 3D motion-flows is used for phase recognition. "
    ABSTRACT: In Ubiquitous Computing (Ubicomp) research, substantial work has been directed towards sensor-based detection and recognition of human activity. This research has, however, mainly been focused on activities of daily living of a single person. This paper presents a sensor platform and a machine learning approach to sense and detect phases of a surgical operation. Automatic detection of the progress of work inside an operating room has several important applications, including coordination, patient safety, and context-aware information retrieval. We verify the platform during a surgical simulation. Recognition of the main phases of an operation was done with a high degree of accuracy. Through further analysis, we were able to reveal which sensors provide the most significant input. This can be used in subsequent design of systems for use during real surgeries.
    2011 IEEE International Conference on Pervasive Computing and Communications (PerCom); 04/2011
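
The first citation above pairs per-frame visual cues with time-series classifiers, one of which is dynamic time warping (DTW). As a reference point, here is a minimal, textbook DTW distance between two 1-D cue sequences; it illustrates the general technique under simple assumptions (absolute-difference local cost, made-up example sequences), not the authors' cataract-surgery pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between two
    1-D sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of diagonal match, insertion, and deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Two binary visual-cue traces of different lengths (illustrative only).
query    = [0, 0, 1, 1, 1, 0]
template = [0, 1, 1, 0]
print(dtw_distance(query, template))
```

DTW's appeal in this setting is that two executions of the same phase rarely have the same duration, and the warping path absorbs that temporal variation before sequences are compared.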
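
The second citation proposes cleaning noisy sensor readings with a Bayesian step before training, explicitly mentioning missed and ghost errors. The sketch below shows a single-sensor Bayes update under that error model; the error rates and the helper name clean_reading are hypothetical placeholders, not the authors' implementation.

```python
def clean_reading(prior_on, reading, p_miss=0.1, p_ghost=0.05):
    """Posterior probability that the sensed object is really present,
    given one noisy binary reading with known missed/ghost error rates.
    The rates here are illustrative placeholders."""
    if reading:  # the sensor fired
        like_on, like_off = 1.0 - p_miss, p_ghost
    else:        # the sensor stayed silent
        like_on, like_off = p_miss, 1.0 - p_ghost
    evidence = like_on * prior_on + like_off * (1.0 - prior_on)
    return like_on * prior_on / evidence

# With a 50/50 prior, a positive reading is strong but not certain evidence.
print(clean_reading(0.5, 1))  # ~0.947
print(clean_reading(0.5, 0))  # ~0.095
```

Feeding such posteriors, rather than the raw binary readings, into the task-recognition model is what the excerpt's "cleaning before the training step" argument amounts to.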