May 2025
International Journal of Computer Assisted Radiology and Surgery
Purpose
We introduce a multi-model, real-time semantic segmentation and tracking approach for Augmented Reality (AR)-guided open liver surgery. Our approach leverages foundation models and scene-aware re-prompting strategies to balance segmentation accuracy and inference time, as required for real-time AR-assisted surgery applications.

Methods
Our approach integrates a domain-specific RGBD model (ESANet), a foundation model for semantic segmentation (SAM), and a semi-supervised video object segmentation model (DeAOT). The models were combined in an auto-promptable pipeline with a scene-aware re-prompting algorithm that adapts to surgical scene changes. We evaluated our approach on intraoperative RGBD videos from 10 open liver surgeries recorded with a head-mounted AR device. Segmentation accuracy (IoU), temporal resolution (FPS), and the impact of re-prompting strategies were analyzed, and comparisons to the individual models were performed.

Results
Our multi-model approach achieved a median IoU of 71% at 13.2 FPS without re-prompting. It surpasses the individual models, yielding higher segmentation accuracy than ESANet and higher temporal resolution than SAM. With scene-aware re-prompting, it matches the performance of DeAOT, reaching an IoU of 74.7% at 11.5 FPS, even when the DeAOT model uses an ideal reference frame.

Conclusion
Our scene-aware re-prompting strategy provides a trade-off between segmentation accuracy and temporal resolution, thus addressing the requirements of real-time AR-guided open liver surgery. The integration of complementary models yields robust and accurate segmentation in complex, real-world surgical settings.
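The abstract does not include code, but the pipeline it describes (a fast RGBD model proposing a coarse mask, a promptable foundation model refining it, and a video tracker propagating it until a scene change triggers re-prompting) can be sketched. The following minimal Python sketch is an illustration only, not the authors' implementation: esanet_segment, sam_segment, DeAOTTracker, and the scene_changed heuristic are hypothetical placeholders for ESANet, SAM, DeAOT, and the paper's (unspecified) scene-aware criterion.

    import numpy as np

    # --- Hypothetical stand-ins for the three models named in the abstract. ---
    # The real ESANet / SAM / DeAOT APIs differ; these placeholders only mimic
    # the role each model plays in the pipeline.

    def esanet_segment(rgb, depth):
        """Fast domain-specific RGBD proposal (placeholder for ESANet)."""
        return depth < np.median(depth)

    def sam_segment(rgb, prompt_points):
        """Promptable refinement (placeholder for SAM): regions around prompts."""
        h, w = rgb.shape[:2]
        mask = np.zeros((h, w), dtype=bool)
        for x, y in prompt_points:
            mask[max(0, y - 20):y + 20, max(0, x - 20):x + 20] = True
        return mask

    class DeAOTTracker:
        """Semi-supervised video object segmentation (placeholder for DeAOT)."""
        def set_reference(self, rgb, mask):
            self.mask = mask
        def track(self, rgb):
            return self.mask  # a real tracker would propagate the mask forward

    def mask_to_prompts(mask, n_points=5, seed=0):
        """Auto-prompting: sample foreground points from the coarse mask."""
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return np.empty((0, 2), dtype=int)
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
        return np.stack([xs[idx], ys[idx]], axis=1)

    def scene_changed(prev_rgb, rgb, threshold=0.15):
        """Crude scene-change proxy (mean intensity difference); the abstract
        does not specify the paper's actual scene-aware criterion."""
        diff = np.abs(rgb.astype(float) - prev_rgb.astype(float)).mean() / 255.0
        return diff > threshold

    def run_pipeline(frames):
        """Yield one mask per (rgb, depth) frame, re-prompting on scene changes."""
        tracker, prev_rgb = DeAOTTracker(), None
        for rgb, depth in frames:
            if prev_rgb is None or scene_changed(prev_rgb, rgb):
                coarse = esanet_segment(rgb, depth)               # fast RGBD proposal
                mask = sam_segment(rgb, mask_to_prompts(coarse))  # SAM refinement
                tracker.set_reference(rgb, mask)                  # re-prompt tracker
            else:
                mask = tracker.track(rgb)                         # cheap tracking path
            prev_rgb = rgb
            yield mask

    # Demo on synthetic frames standing in for the intraoperative RGBD stream.
    rng = np.random.default_rng(0)
    frames = [(rng.integers(0, 256, (480, 640, 3), dtype=np.uint8),
               rng.random((480, 640))) for _ in range(5)]
    for mask in run_pipeline(frames):
        print(mask.shape, round(float(mask.mean()), 3))

The design point the sketch captures is the trade-off reported in the Results: per-frame tracking is cheap (high FPS), while the ESANet-to-SAM re-prompting path is expensive but restores accuracy after the scene changes, so it runs only when needed.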