Conference Paper

On-line Recognition of Surgical Activity for Monitoring in the Operating Room.

Conference: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008
Source: DBLP

ABSTRACT Operating rooms are complex environments where many interactions take place between staff members and the electronic and mechanical systems. In spite of their inherent complexity, surgeries of the same kind bear numerous similarities and are usually performed with similar workflows. This makes it possible to design support systems for the Operating Room (OR), with applications ranging from simple tasks, such as activating the OR lights or calling the next patient, to more complex ones, such as context-sensitive user interfaces or automatic reporting. An essential feature when designing such systems is the ability to recognize on-line what is happening inside the OR, based on recorded signals. In this paper, we present an approach using signals from the OR and Hidden Markov Models to recognize on-line the surgical steps performed by the surgeon during a laparoscopic surgery. We also explain how the system can be deployed in the OR. Experiments are presented using 11 real surgeries performed by different surgeons in several ORs, recorded at our partner hospital. We believe that similar systems will develop quickly in the near future to efficiently support surgeons, trainees and the medical staff in general, as well as to improve administrative tasks such as scheduling within hospitals.
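The on-line recognition the abstract describes can be sketched as a discrete HMM forward filter: maintain a belief over surgical phases and update it with each incoming OR signal. The phases, signal symbols, and probabilities below are invented for illustration and are not the paper's actual model or data.

```python
# Minimal on-line HMM filter for surgical-phase recognition.
# All names and numbers are hypothetical, chosen only to illustrate
# the predict/update cycle of forward filtering.

PHASES = ["preparation", "dissection", "closure"]
SIGNALS = ["idle", "instrument_in", "suture"]

# Left-to-right transition matrix: phases mostly persist, sometimes advance.
TRANS = [
    [0.90, 0.10, 0.00],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
]
# Emission probabilities P(signal | phase), invented for the example.
EMIT = [
    [0.70, 0.25, 0.05],
    [0.10, 0.80, 0.10],
    [0.10, 0.20, 0.70],
]
PRIOR = [1.0, 0.0, 0.0]  # a surgery starts in "preparation"

def filter_step(belief, signal):
    """One on-line update: predict with TRANS, correct with EMIT, normalize."""
    o = SIGNALS.index(signal)
    predicted = [sum(belief[i] * TRANS[i][j] for i in range(len(PHASES)))
                 for j in range(len(PHASES))]
    updated = [p * EMIT[j][o] for j, p in enumerate(predicted)]
    total = sum(updated)
    return [u / total for u in updated]

belief = PRIOR
for sig in ["idle", "instrument_in", "instrument_in", "suture", "suture"]:
    belief = filter_step(belief, sig)
    print(sig, "->", PHASES[max(range(len(PHASES)), key=lambda j: belief[j])])
```

Because each update only needs the previous belief and the current signal, the filter runs on-line as signals arrive, which is what deployment in the OR requires.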

  • ABSTRACT: OBJECTIVE: Effective time and resource management in the operating room requires process information concerning the surgical procedure being performed. A major parameter relevant to the intraoperative process is the remaining intervention time. The work presented here describes an approach for the prediction of the remaining intervention time based on surgical low-level tasks. MATERIALS AND METHODS: A surgical process model optimized for time prediction was designed together with a prediction algorithm. The prediction accuracy was evaluated for two different neurosurgical interventions: discectomy and brain tumor resections. A repeated random sub-sampling validation study was conducted based on 20 recorded discectomies and 40 brain tumor resections. RESULTS: The mean absolute error of the remaining intervention time predictions was 13 min 24 s for discectomies and 29 min 20 s for brain tumor removals. The error decreases as the intervention progresses. DISCUSSION: The approach discussed allows for the on-line prediction of the remaining intervention time based on intraoperative information. The method is able to handle demanding and variable surgical procedures, such as brain tumor resections. A randomized study showed that prediction accuracies are reasonable for various clinical applications. CONCLUSION: The predictions can be used by the OR staff, the technical infrastructure of the OR, and centralized management. The predictions also support intervention scheduling and resource management when resources are shared among different operating rooms, thereby reducing resource conflicts. The predictions could also contribute to the improvement of surgical workflow and patient care.
    Journal of Biomedical Informatics, 10/2012.
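The prediction idea in the abstract above can be illustrated with a much simpler model than the paper's: estimate the remaining time from the historical mean remaining duration at the task the surgery has reached, and score it with mean absolute error. The durations below are invented; only the evaluation scheme is taken from the abstract.

```python
# Toy remaining-intervention-time predictor (hypothetical data; the
# paper uses a dedicated surgical process model, not this baseline).

# Durations (minutes) of four consecutive low-level tasks in three
# invented training interventions.
TRAINING = [
    [10, 25, 40, 15],
    [12, 30, 35, 20],
    [8,  20, 45, 10],
]

def mean_remaining(task_index):
    """Average time still to go once task `task_index` begins."""
    rems = [sum(case[task_index:]) for case in TRAINING]
    return sum(rems) / len(rems)

def mae(test_case):
    """Mean absolute prediction error over the course of one surgery."""
    errors = []
    for k in range(len(test_case)):
        actual_remaining = sum(test_case[k:])
        errors.append(abs(mean_remaining(k) - actual_remaining))
    return sum(errors) / len(errors)

test = [11, 28, 38, 18]  # an unseen, invented intervention
print("MAE: %.1f min" % mae(test))
```

As in the paper's results, such a predictor naturally becomes more accurate late in the intervention, since less unexplained time remains.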
  • ABSTRACT: We present a system that recognizes human activities during trauma resuscitation, the fast-paced and team-based initial management of injured patients in the emergency department. Most objects used in trauma resuscitation are uniquely associated with tasks. To detect object use, we employed passive radio frequency identification (RFID) for their size and cost advantages. We designed the system setup to ensure the effectiveness of passive tags in such a complex setting, which includes various objects and significant human motion. Through our studies conducted at a Level 1 trauma center, we learned that objects used in trauma resuscitation need to be tagged differently because of their size, shape, and material composition. Based on this insight, we classified the medical items into groups based on usage and other characteristics. Objects in different groups are tagged differently and their data is processed differently. We applied machine-learning algorithms to identify object-state changes and process the RFID data using algorithms specific to object groups. Our results show that RFID has significant potential for automatic detection of object usage in complex and fast-paced settings.
    Proceedings of the 6th International Conference on Body Area Networks; 11/2011
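One simple way to picture object-use detection from passive RFID, far simpler than the per-group learned classifiers the abstract describes, is to flag an object when its tag's read rate deviates strongly from an idle baseline (for example, because a hand occludes the tag while the object is handled). Everything below, including the tag names, rates, and threshold, is invented for illustration.

```python
# Toy object-use detector from passive-RFID read rates (hypothetical
# numbers; the paper's actual pipeline applies machine learning per
# object group rather than a fixed threshold).

BASELINE_READS_PER_SEC = {"scalpel": 4.0, "bag_valve_mask": 6.0}

def in_use(tag, reads_per_sec, drop_ratio=0.5):
    """Flag use when the read rate falls well below the idle baseline,
    e.g. because a hand or the patient's body occludes the tag."""
    return reads_per_sec < drop_ratio * BASELINE_READS_PER_SEC[tag]

print(in_use("scalpel", 1.5))   # strong drop from 4.0 reads/s
print(in_use("scalpel", 3.5))   # near baseline
```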
  • ABSTRACT: The field of surgical workflow fosters the formalization and acquisition of surgical task descriptions from real-time surgical interventions to support clinical and technical analysis. However, uncertainty plays such a large part in surgical procedures that the representation of surgical workflows needs to deal with probability in a direct way. To reduce the uncertainty in surgical task descriptions, we propose two mechanisms to generate robust and discriminative observation sequences from a larger observation set. These observation sequences are used to train a Hidden Markov Model (HMM) with the surgical workflow steps as hidden states and the instruments as the observation set. A demonstration with ROC analysis shows that asynchronous pre-processing of the observation sets leads to better classification performance than synchronous pre-processing. Hence the asynchronous pre-processing mechanism leads to a more robust and discriminative training of the Hidden Markov Model.
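One plausible reading of the synchronous/asynchronous distinction (an assumption on our part, not the paper's definition) is the difference between sampling the instrument set at fixed intervals and emitting an observation only when the instrument set changes. The instrument trace below is invented.

```python
# Two ways to turn binary instrument-usage signals into HMM observation
# sequences (illustrative interpretation; instrument names and the trace
# are hypothetical).

# signals[t] = set of instruments in use at second t.
signals = [
    {"grasper"}, {"grasper"}, {"grasper", "scissors"},
    {"grasper", "scissors"}, {"scissors"}, set(),
]

def synchronous(trace):
    """One observation per time step, repeated states and all."""
    return [frozenset(s) for s in trace]

def asynchronous(trace):
    """Emit an observation only when the instrument set changes,
    which shortens sequences and removes idle repetition."""
    out = []
    for s in trace:
        fs = frozenset(s)
        if not out or out[-1] != fs:
            out.append(fs)
    return out

print(len(synchronous(signals)), len(asynchronous(signals)))
```

Under this reading, the asynchronous sequence is shorter and each symbol marks an actual event, which is one way event-driven pre-processing could yield more discriminative HMM training data.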
