Conference Paper

X-Ray Device Positioning with Augmented Reality Visual Feedback

Kartikay Tehlan *
Alexander Winkler *
Daniel Roth §
Erlangen-Nürnberg
Nassir Navab
Johns Hopkins University
In minimally invasive surgeries one common way to verify progress
is the use of an intraoperative X-ray device (due to its characteristic
shape called a C-arm). Its control, however, remains challenging
owing to its complex movements. We propose the use of an Augmented
Reality Head-Mounted Display (AR-HMD) to let the surgeon choose
a desired X-ray view intraoperatively, with the corresponding
C-arm configuration provided as visual feedback. The study participants'
feedback, despite being critical of the HMD hardware limitations,
suggests an inclination towards using AR for orthopaedic surgeries
on especially complex or unusual anatomies.
Index Terms: Computing Methodologies—Computer Graphics—Graphics
Systems and Interfaces—Mixed / Augmented Reality; Human-Centered
Computing—Human-Computer Interaction (HCI)—Interaction
Paradigms—Mixed / Augmented Reality
The trial-and-error methodology of positioning a C-arm to obtain
specific intraoperative X-rays for monitoring the progress of
minimally invasive surgeries increases radiation exposure to the
patient and surgical staff, as well as surgery duration. One approach
to simplifying the search for the optimal C-arm pose is to track the
X-ray device and dynamically generate synthetic X-rays, also known as
Digitally Reconstructed Radiographs (DRRs), from its pose [1].
While intuitive, this is time-consuming and cumbersome. Another
project describes a system to re-position a C-arm into a previous
pose by visualizing the recorded C-arm pose [5].
Desired-view controlled positioning [2] handles the complexities
of C-arm positioning to acquire a specific view of the anatomy
by generating DRRs from patient data during pre-surgery planning. We
circumvent the need for a pre-operative planning stage by combining
intraoperative planning with simultaneous visualization.
We present an AR approach to position the C-arm based on a Com-
puted Tomography (CT) scan of the patient undergoing surgery. The
application enables the surgeon to select a simulated X-ray projec-
tion as a view on the anatomy. This view relates to the position of
the C-arm through patient-to-CT registration.
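The paper does not specify how the patient-to-CT registration is computed; one common realisation for marker- or landmark-based setups is least-squares point-based rigid registration (the Kabsch algorithm). The sketch below, including the function name `rigid_register`, is our illustrative assumption, not a detail from the paper:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: find R, t minimising ||dst - (src @ R.T + t)||.

    src, dst: (N, 3) arrays of corresponding landmark positions
    (e.g. markers seen by the HMD vs. the same points in CT space).
    This algorithm choice is an assumption for illustration.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation about z plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
pts = np.random.default_rng(1).random((6, 3))
R, t = rigid_register(pts, pts @ R_true.T + t_true)
```

Once R and t are known, any C-arm pose expressed relative to the CT volume can be mapped into the operating-room frame and vice versa.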
Our implementation targets the Microsoft HoloLens 1, using Unity 3D
with marker tracking by Vuforia and the Microsoft Mixed Reality
Toolkit for gesture interaction.
The positioning of the C-arm based on a DRR view is described
by forward kinematics: in the application the surgeon adjusts the
individual joints of the C-arm, which directly leads to an updated
perspective on the patient anatomy.

*These authors contributed equally to this work.

Figure 1: The AR UI in front of a real C-arm. The surgeon can select
a real-time DRR view on the anatomy by moving the sliders.

The relation between the pose of the C-arm (its position, angulation,
orbital rotation, etc.) and the DRR is calculated using the
Denavit-Hartenberg convention. The DRRs are rendered from the
preoperative patient CT data using volume raycasting implemented in
appropriate shaders. With the DRR now available for a given pose of
the C-arm with respect to the patient volume, the next step is to
develop a control mechanism to manipulate the virtual C-arm.
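The raycasting idea behind DRR generation can be sketched in simplified form. The paper's renderer marches perspective rays from the X-ray source through each detector pixel inside GPU shaders; the CPU stand-in below uses axis-aligned parallel rays with nearest-neighbour sampling (both simplifications, and the function name, are our assumptions):

```python
import numpy as np

def drr_parallel(volume, num_steps=64):
    """Integrate attenuation along parallel rays through a CT volume.

    Each detector pixel (y, x) accumulates volume samples along
    axis 0, approximating the line integral an X-ray would undergo.
    A real DRR shader uses perspective rays and trilinear sampling.
    """
    zs = np.round(np.linspace(0, volume.shape[0] - 1, num_steps)).astype(int)
    return volume[zs, :, :].sum(axis=0)   # one integrated value per pixel

rng = np.random.default_rng(0)
ct = rng.random((32, 16, 16))   # toy attenuation volume, shape (z, y, x)
drr = drr_parallel(ct)          # simulated detector image, shape (16, 16)
```

Because the integral is recomputed per frame, moving a virtual joint immediately yields an updated DRR, which is what gives the surgeon live visual feedback.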
A touch-less User Interface (UI) (Fig. 1) is provided, a viable
method of interaction that preserves the sterility of the surgeon's
hands [3]. An HMD can project the UI, which enables the surgeon to
both view the DRRs and interact with them through hand gestures. The
UI floats above the patient to present the DRR accurately, minimize
obstruction, and remain conveniently visible to the surgeon. Using
pinching slider gestures, the UI allows the surgeon to control the
cranial/caudal angulation, left/right orbital rotation, and
superior/inferior translation of the C-arm with respect to the
patient. Once the surgeon has arrived at the desired view, it can be
saved for later retrieval. The surgeon can switch to a virtual
representation of the C-arm (Fig. 2) to visualize it in the surgical
environment, which can assist in its positioning.
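The slider values map to a C-arm pose through the forward-kinematic chain mentioned above. A minimal Denavit-Hartenberg sketch illustrates the mechanism; the joint layout (one prismatic table-axis translation, two revolute joints) and the numeric DH parameters below are illustrative assumptions, not the real device's calibration:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single Denavit-Hartenberg link transform (standard convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def c_arm_pose(translation, angulation, orbital):
    """Chain the three slider-controlled joints into one 4x4 pose.

    The DH parameters here (including the 0.7 m arm length) are
    made-up values for illustration only.
    """
    T = np.eye(4)
    for theta, d, a, alpha in [
        (0.0, translation, 0.0, 0.0),       # prismatic: superior/inferior translation
        (angulation, 0.0, 0.0, np.pi / 2),  # revolute: cranial/caudal angulation
        (orbital, 0.0, 0.7, 0.0),           # revolute: orbital rotation
    ]:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

pose = c_arm_pose(translation=0.1, angulation=0.0, orbital=0.0)
```

Evaluating this chain on every slider change yields the source pose from which the DRR is rendered, so the view updates live as the surgeon moves a slider.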
The surgeon can check the visualization of the suggested C-arm
pose for feasibility, collision-free placement in the operating room,
and ease of access to the patient. Such subjective surgical environ-
ment criteria naturally cannot be assessed automatically, but must
be verified by a medical expert reviewing the visualization.
A user study was conducted to establish the face validity and
usability of the developed application in contrast to the
trial-and-error C-arm positioning method. It indicates the
acceptance, comfort, and potential of AR frameworks in the surgical
domain.
3.1 Study Procedure
After filling in a pre-study questionnaire (demographic information,
estimates of their media usage, clinical experience, and technology
acceptance [4]), we let the participants experience the system for
several minutes, exploring all functionalities of the application.
After this exposure, we had them fill in questionnaires regarding
usability and task load, and record their experiences.

Figure 2: The selected view corresponds to a C-arm pose in relation
to the patient and environment. We show the virtual C-arm in this
pose, which can guide the surgeon in positioning the real C-arm.
We assessed perceived usability with the System Usability Scale
(SUS), using the common 5-point scale (1 = strongly disagree,
5 = strongly agree). As another measure, we assessed perceived task
load using the NASA Task Load Index (TLX) and report its individual
subscales. Finally, we asked free-text questions about advantages
and disadvantages of the method.
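As a reminder of how the SUS values reported below are derived, the standard scoring maps the ten 1–5 item responses to a 0–100 score; this scorer sketches that published procedure:

```python
def sus_score(responses):
    """Standard SUS scoring for ten items on a 1-5 scale.

    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response);
    the summed contributions are scaled by 2.5 onto 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```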
3.2 Participants
The study included
participants (
Mage =29.0
SDage =2.2
from the medical field. Three of them were assistant physicians
and one of them a final-year medical student. Participants stated to
spend time with digital media (PC, mobile phone, etc.) for about
hours per week (
SD =20.1
). Three participants noted to have
used Mixed Reality systems before. Participants stated to perform on
image guided surgeries per week (
SD =1.9
). Their tech-
nology acceptance facet according to the technology commitment
questionnaire is very high (average 19.5 on a scale of 020.0).
The SUS resulted in an average of M_SUS = 70.625 (SD_SUS = 8.75),
which puts the system into the "good" range. The accumulated
individual subscales of the NASA-TLX are reported in Fig. 3. All the
participants found the HMD to be uncomfortable for extended use in
surgery, and most of them complained that the field of view of the
HoloLens 1 was too limited. The participants found that the
application would be most advantageous for orthopaedic and trauma
surgery.

Figure 3: Individual subscales of the NASA-TLX questionnaire.
The participants valued the AR visualization of the patient and
the 3D display of the C-arm highly. One participant wished for the
inclusion of even more modalities in the application. However, the
inconvenience of wearing a heavy device was stated as a deterrent
to the acceptance of HMDs in surgery. For experienced surgeons
under normal surgical conditions, a few trial-and-error X-ray images
are more appealing than the use of cumbersome devices. They were,
however, inclined towards using AR for complex surgeries, such as
shoulder, spine, and ankle surgery. AR was also reported to be
favourable when dealing with patients with physical deformities
and other factors that make the positioning of a C-arm unintuitive.
One participant noted that the larger field of view of the DRR,
compared to an actual C-arm, helped to make positioning easier.
In this regard, the simulation does not need to be constrained by the
capabilities of the physical X-ray device, as long as it provides a
modality that the users are familiar with. This salient feature of the
application can prove beneficial for training of medical students in
standard views without exposing them to radiation.
One participant raised the concern that the rest of the surgical
team is excluded from the AR application, as they are not able to see
the surgeon's augmented view. Multi-user support would therefore be
a desirable improvement. Another participant suggested natural
control of the C-arm through a virtual grabbing interaction.
Furthermore, two participants requested to see both the DRR and the
C-arm in its corresponding pose simultaneously, as opposed to
individually.
Future improvements of the UI could include real-time X-rays of
the C-arm to show angiographies during surgery, and automation of
C-arm movement according to the chosen position.
In this paper we presented the use of an HMD application as an
intraoperative planning tool for C-arm positioning. In our study we
found that the participants generally liked the idea of visualizing the
3D representation of the patient and the C-arm in a consistent way
and the possibility to receive live updates of the DRR relative to the
C-arm pose. Barring the discomfort of wearing an HMD for long
durations, the inclusion of an HMD into image-guided surgery, with
its touch-less user interface and powerful visualization methods, can
increase the efficiency and quality of medical procedures.
[1] P. Dressel, L. Wang, O. Kutter, J. Traub, S.-M. Heining, and
N. Navab. Intraoperative positioning of mobile C-arms using
artificial fluoroscopy. In Medical Imaging 2010: Visualization,
Image-Guided Procedures, and Modeling, vol. 7625, p. 762506.
International Society for Optics and Photonics, 2010.
[2] P. Fallavollita, A. Winkler, S. Habert, P. Wucherer, P. Stefan,
R. Mansour, R. Ghotbi, and N. Navab. Desired-view controlled
positioning of angiographic C-arms. In P. Golland, N. Hata,
C. Barillot, J. Hornegger, and R. Howe, eds., Medical Image
Computing and Computer-Assisted Intervention – MICCAI 2014,
pp. 659–666. Springer International Publishing, Cham, 2014.
[3] C. Graetzel, T. Fong, S. Grange, and C. Baur. A non-contact
mouse for surgeon-computer interaction. Technology and Health Care,
12(3):245–257, 2004.
[4] F. J. Neyer, J. Felber, and C. Gebhardt. Entwicklung und
Validierung einer Kurzskala zur Erfassung von Technikbereitschaft.
Diagnostica.
[5] M. Unberath, J. Fotouhi, J. Hajek, A. Maier, G. Osgood,
R. Taylor, M. Armand, and N. Navab. Augmented reality-based
feedback for technician-in-the-loop C-arm repositioning. Healthcare
Technology Letters, 5(5):143–147, 2018.