Editorial
Artificial intelligence for image interpretation in ultrasound-guided regional anaesthesia
J. Bowness,1,2 K. El-Boghdadly3,4 and D. Burckett-St Laurent5
1 Clinical Lecturer, Institute of Academic Anaesthesia, University of Dundee, Dundee, UK
2 Honorary Specialty Registrar, Department of Anaesthesia, Ninewells Hospital, Dundee, UK
3 Consultant, Department of Anaesthesia and Peri-operative Medicine, Guy's and St Thomas's NHS Foundation Trust,
London, UK
4 Honorary Senior Lecturer, King's College London, London, UK
5 Consultant, Department of Anaesthesia, Royal Gwent Hospital, Newport, UK
Correspondence to: J. Bowness
Email: james.bowness@nhs.net
Accepted: 1 July 2020
Keywords: anatomy; artificial intelligence; blocks; machine learning; regional anaesthesia; ultrasound
Twitter: @bowness_james, @elboghdadly
Here is my prophecy: In its final development, the telephone will be carried about by the individual, perhaps as we carry a watch today. It probably will require no dial or equivalent, and I think the users will be able to see each other, if they want, as they talk.
Mark R Sullivan (Pacific Telephone and Telegraph Co., 1953)
The initial challenge presented to a practitioner during
ultrasound-guided regional anaesthesia is the
interpretation of sono-anatomy upon placing a probe on
the patient. To date, technological advancements have
focused on methods to enhance needle viewing [1]. Sono-anatomical interpretation remains an under-explored avenue of research to improve the availability, efficacy and safety of regional anaesthetic techniques. We present the case for the use of artificial intelligence (AI) in identifying key anatomical features to facilitate ultrasound-guided regional anaesthesia.
Ultrasound image analysis in ultrasound-guided regional anaesthesia
Ultrasound guidance has been a major advancement in
regional anaesthesia since the turn of the century. It is often
accepted that ultrasound has led to improved outcomes following regional anaesthesia, although it is not clear that it has reduced the incidence of nerve trauma [2].
The American Society of Regional Anesthesia and Pain Medicine and the European Society of Regional Anaesthesia and Pain Therapy joint committee recommendations for education and training in ultrasound-guided regional anaesthesia categorise four activities [3]:
1. Understanding device operations
2. Image optimisation
3. Image interpretation (locating and interpreting anatomy under ultrasound)
4. Visualisation of needle insertion and injection (needle-probe orientation; the maintenance of needle visualisation; and optimal anatomical view whilst moving the needle towards the target object)
Much effort has been directed towards needle
guidance systems and echogenic needles to improve
needle visibility [1]. However, augmenting image
interpretation has received less attention despite a sound
understanding and interpretation of sono-anatomy being
required for the practice of ultrasound-guided regional
anaesthesia [3, 4]. This is particularly pertinent as anatomical knowledge among anaesthetists is known to be imperfect [5]. Human image analysis is similarly fallible [6] and human performance is subject to fatiguability [7].
©2020 The Authors. Anaesthesia published by John Wiley & Sons Ltd on behalf of Association of Anaesthetists. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
Anaesthesia 2020 doi:10.1111/anae.15212
The many difficulties in acquiring and maintaining the
skill sets involved in anatomical recognition and needle
guidance also restrict the number of clinicians confident
and able to perform ultrasound-guided regional
anaesthesia. Currently, the majority of peripheral nerve
blocks are performed by a restricted number of experts [4].
Breaking down these barriers may particularly enhance
uptake by non-expert regional anaesthetists. Ultrasound-
guided regional anaesthesia also has the potential to be
employed more widely, for example, by nurse anaesthetists,
emergency medicine physicians, armed forces/battlefield
medical practitioners and those treating pain in the chronic
pain clinic or palliative care. Widening patient access to
these techniques has potential to directly address several of
the anaesthesia and peri-operative care priorities of the
James Lind Alliance [8].
Artificial intelligence, machine learning and deep learning in anaesthesia
Artificial intelligence is a general term which includes
machine learning and deep learning (Fig. 1). There has
been a recent proliferation of publications relating to the
utility of AI, in particular machine learning, in the peri-
operative setting [7, 9]. Most focus on systems to
assimilate and analyse data input from multiple sources,
to assist in pre-operative assessment and risk stratification,
monitor depth of anaesthesia/sedation, enhance early
detection of unwell patients, or predict intra-operative
adverse events (e.g. hypotension) and postoperative
outcomes (e.g. pain and mortality) (Table 1). However,
implementation of these technologies in clinical practice
is not yet commonplace [9].
Machine learning in ultrasound-guided regional anaesthesia
Published work includes the study of automated nerve and blood vessel identification for ultrasound-guided regional
anaesthesia [17]. Indeed, medical image interpretation is a
particularly popular focus of research in healthcare AI [18].
One such example is the collaboration between researchers
and clinicians at DeepMind (Alphabet Inc, Palo Alto, CA,
USA), Moorfields Eye Hospital and University College
London, who have developed a system which reaches or
exceeds expert performance in analysis of optical
coherence tomography [19]. A similar collaboration has demonstrated equally successful results in the field of breast cancer screening mammography, with an AI system capable of surpassing human experts in breast cancer prediction [20]. It thus follows that image analysis in
ultrasound-guided regional anaesthesia could similarly be
an area in which assistive machine learning technology may
provide patient benet.
Figure 1 A summary of artificial intelligence, machine learning and deep learning.
Given the complexity, diversity and operator dependence (leading to inter- and intra-individual variation) of the ultrasound appearance of anatomical structures, it is difficult to develop nascent AI algorithms to recognise all salient features de novo [18]. Therefore,
automated medical image analysis can be trained to
recognise this wide variety of appearances by learning from
examples, which is the premise of machine learning [18].
Such assistive technology could be used to enhance
interpretation of sono-anatomy by facilitating target
identification (e.g. peripheral nerves and fascial planes), and
the selection of optimal block site through demonstrating
relevant landmarks and guidance structures (e.g. bone and
muscle). The safety profile may be enhanced by
highlighting safety structures (e.g. blood vessels) to
minimise unwanted trauma.
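As an illustration of how such highlighting can work, the sketch below blends class colours onto a grayscale B-mode frame given a per-pixel mask of structure classes. The class indices, colour scheme and function names are illustrative assumptions, not those of AnatomyGuide or any commercial system; it sketches only the overlay step, assuming a trained model has already produced the mask.

```python
import numpy as np

# Hypothetical class indices and display colours (RGB) - illustrative
# choices only, not the scheme of any commercial system.
CLASS_COLOURS = {
    1: (255, 215, 0),   # target (e.g. nerve) - yellow
    2: (255, 0, 0),     # safety structure (e.g. artery) - red
    3: (0, 128, 255),   # landmark (e.g. muscle, bone) - blue
}

def overlay_mask(frame_gray, mask, alpha=0.4):
    """Blend class colours onto a grayscale ultrasound frame.

    frame_gray: (H, W) uint8 B-mode image.
    mask: (H, W) integer array of per-pixel classes (0 = background).
    Returns an (H, W, 3) uint8 RGB image; background pixels unchanged.
    """
    rgb = np.repeat(frame_gray[:, :, None], 3, axis=2).astype(float)
    for cls, colour in CLASS_COLOURS.items():
        sel = mask == cls
        # alpha-blend the class colour over the underlying grayscale
        rgb[sel] = (1 - alpha) * rgb[sel] + alpha * np.array(colour, dtype=float)
    return rgb.astype(np.uint8)
```

Keeping the blend semi-transparent (alpha well below 1) matters clinically: the operator must still see the underlying tissue texture beneath the highlight.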
We postulate that providing a head-up display (a display within the user's existing field of vision) of anatomy in real time, as an adjunct to the conventional narrative and
instructions from an expert, may reduce the cognitive load
for less experienced operators. It may also reduce time
required for image acquisition and analysis and increase
operator confidence. This in turn may improve performance
in needle/probe manipulation by increasing spare cognitive
capacity for these activities. Head-up and instrument-
mounted displays have been proven to be of use in military
aviation and the automotive industry [21]. Furthermore,
computerised systems are not subject to fatigue and can reproducibly perform the desired activity with complete fidelity [7].
AnatomyGuide™ (Intelligent Ultrasound Limited, Cardiff, UK) is a system based on AI technologies. It has been developed with the use of B-mode ultrasound video for specific peripheral nerve block regions. Each video is broken into multiple frames, with each frame receiving a coloured overlay of specific structures identified as either landmarks, safety structures or targets. These labelled
frames are then used to train the machine learning
algorithm, which uses deep learning to develop
associations between the labels and underlying structures.
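The training step described above — labelled frames teaching a model to associate overlay labels with image appearance — can be caricatured with a toy supervised learner. A real system trains a deep network over whole images; the sketch below instead fits a logistic regression from single-pixel intensity to a binary structure label, purely to illustrate "learning associations from labelled examples". The function names, feature choice and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_pixel_classifier(frames, masks, lr=0.1, epochs=200):
    """Toy stand-in for the deep-learning step: learn to map each
    pixel's intensity to a binary label (structure vs background) by
    logistic regression with gradient descent.

    frames: list of (H, W) float arrays scaled to [0, 1].
    masks: list of (H, W) {0, 1} arrays (the expert-drawn overlays).
    Returns the learned weight and bias (w, b).
    """
    x = np.concatenate([f.ravel() for f in frames])
    y = np.concatenate([m.ravel() for m in masks]).astype(float)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # sigmoid prediction
        grad_w = np.mean((p - y) * x)            # log-loss gradient, dL/dw
        grad_b = np.mean(p - y)                  # log-loss gradient, dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(frame, w, b):
    """Per-pixel binary mask for a new frame using the learned model."""
    p = 1.0 / (1.0 + np.exp(-(w * frame + b)))
    return (p > 0.5).astype(int)
```

The point of the caricature is the workflow, not the model: labelled frames in, a function from raw pixels to labels out, with performance governed by the quantity and quality of the labelled data, exactly as the editorial notes.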
Table 1 Potential artificial intelligence applications to anaesthetic practice based on examples of current evidence.

Pre-operative: Risk stratification during pre-operative assessment (to influence anaesthetic technique and for outcome prediction)
- Karpagavalli et al. [10] trained three supervised machine learning systems on pre-operative data (37 features) from 362 patients
- These systems were able to accurately categorise patients into low, medium and high-risk groups (broadly correlating with ASA grade)

Intra-operative: Automated ultrasound spinal landmark identification in neuraxial blockade
- Oh et al. [11] have demonstrated improved spinal ultrasound interpretation and first-pass spinal success using an intelligent image processing system to identify spinal landmarks

Intra-operative: Prediction of post-induction/intra-operative hypotension
- Wijnberge et al. [12] demonstrated the ability to reduce the duration and depth of intra-operative hypotension through the use of a machine learning-derived early warning system

Intra-operative: Prediction of post-intubation hypoxia
- Sippl et al. [13] retrospectively analysed data from 620 cases to develop a machine learning system capable of predicting post-intubation hypoxia to the same level as that observed by medical experts

Intra-operative: Monitoring/control of level of sedation/hypnosis
- Lee et al. [14] present a deep learning model, trained on data sets from 131 patients, to predict bispectral index response during target-controlled infusion of propofol and remifentanil

Postoperative: Prediction of postoperative in-hospital mortality
- Fritz et al. [15] present a deep-learning model based on patient characteristics and peri-operative data to predict 30-day mortality

Postoperative: Prediction of analgesic response
- Misra et al. [16] use machine learning for the automated classification of pain state (high and low) based on EEG data

EEG, electroencephalogram.
In time, the algorithm is able to label raw B-mode ultrasound data in real time on new ultrasound scans of similar regions.
System performance is a function of the quantity and quality
of labelled data presented during training: the training set
used for each block included over 120,000 images to
achieve the current level of performance.
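Real-time labelling also imposes an engineering constraint: inference must keep pace with frame acquisition. One common pattern, sketched below with a stubbed `segment` function standing in for the trained network, is to drop frames whenever the model is still busy, so the overlay stays current rather than lagging behind the probe. The frame rate, timings and function names are illustrative assumptions.

```python
import numpy as np

FRAME_INTERVAL_S = 1 / 30   # assumed B-mode frame rate of 30 fps

def segment(frame):
    """Stand-in for the trained model's per-pixel inference; a real
    system would run a deep network here, not a threshold."""
    return (frame > frame.mean()).astype(int)

def label_stream(frames, infer_time_s, frame_interval_s=FRAME_INTERVAL_S):
    """Label a frame stream in 'real time': if inference is slower than
    acquisition, skip intermediate frames rather than fall behind.

    Returns (indices of labelled frames, their predicted masks).
    """
    labelled, masks = [], []
    next_free = 0.0                        # time the model becomes idle
    for i, frame in enumerate(frames):
        arrival = i * frame_interval_s     # when this frame is acquired
        if arrival >= next_free:           # model idle: process it
            labelled.append(i)
            masks.append(segment(frame))
            next_free = arrival + infer_time_s
    return labelled, masks
```

For example, with 50 ms inference against a 33 ms frame interval, roughly every other frame is labelled; the displayed overlay then updates at about half the acquisition rate, which the operator perceives as slight lag rather than stale anatomy.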
One example of a peripheral nerve block for which a
model has been well developed for AnatomyGuide is the
adductor canal block. The information used to train the algorithm is similar to that given to an inexperienced operator in clinical practice when identifying the relevant anatomy. In this model, the sartorius and adductor longus muscles, as well as the femur, were first identified as
landmarks. The optimal block site is chosen as the region
where the medial borders of these two muscles align. The
femoral artery is labelled as both a landmark and safety
structure. The saphenous nerve is labelled as a target. The
intent is to assist the operator in identifying the nerve and
correct site to target for the block (Fig. 2 and Supporting
Information, Video S1).
Extended uses of machine learning systems in ultrasound-guided regional anaesthesia
Gaining early competencies in ultrasound-guided regional
anaesthesia is particularly challenging. It is difficult to develop and use high-fidelity simulation, and training in the
clinical setting can be inconsistent. Experience is often
gained on an ad hoc basis, with long time intervals between
episodes, and different trainers may have differing
approaches. Assistive machine learning systems may
provide supplementary information to facilitate ultrasound-
guided regional anaesthesia training for inexperienced
operators. Simply highlighting the relevant structures will
aid understanding of their likely position and appearance in
future ultrasound analysis. This may aid in the initial skill
acquisition, and shorten the period required for direct
supervision, supporting the transition to indirectly
supervised/solo practice.
In the era of competency-based training, quantitative
assessment and evaluation of operator expertise is
important but difficult. It is often not practical in the clinical
environment and innovation is required. Methods to aid
assessment include an approach based on proficiency-based progression [22]. By using descriptions of
ultrasound-guided regional anaesthesia performance,
broken down to specific actions, machine learning analysis
of data (e.g. video recording of operator, analysis of
sonographic video or needle tracking technology) can
provide an evaluation of the quality of operator
performance. Assuming a robust and successful evaluation
of such systems, this method may facilitate standardised
Figure 2 Sono-anatomy of the adductor canal block. (a) Illustration showing a cross-section of the mid-thigh. (b) Enlarged illustration of the structures seen on ultrasound during performance of adductor canal block. (c) Ultrasound view during adductor canal block. (d) Ultrasound view labelled by AnatomyGuide.
assessment of operator performance, and reduce
subjectivity in evaluation/assessment [23].
Furthermore, it has been suggested that a move
towards standardising the implementation of regional
anaesthesia may engage a greater body of anaesthetists in
its practice [4]. Computational systems, by their nature,
assess novel data in a consistent manner, thus their use
could act as a conduit to facilitating the recommendation to
standardise ultrasound-guided approaches to peripheral
nerve blocks [4].
Potential limitations of machine learning systems in ultrasound-guided regional anaesthesia
Technological advancement is not without potential pitfalls
and the regulatory landscape for AI applied to medical
imaging is still developing. Few products have obtained regulatory approval to date, particularly those evaluating images in real time. A personal teaching approach should
remain central to training in ultrasound-guided regional
anaesthesia and should not be replaced by technological
supervision. Operators must still learn where to
commence ultrasound scanning, and must assimilate the
nuances of probe pressure, angulation, rotation and tilt to
optimise image acquisition. Integrating AI into image analysis may lead to an uneven progression of training between sono-anatomical recognition and needle-probe co-ordination.
In time, there will need to be evidence that such
systems improve operator performance and patient
outcomes to justify continued development and
implementation in clinical practice. There is potential for
inaccuracies in the labelling of anatomy in such a system;
strict validation and quality control will need to apply,
particularly in the context of atypical or complex clinical
presentation and anatomy. Such reservations are applicable
to all new AI technologies, and previous methodological concerns exist, including poor validation, over-prediction and lack of transparency [24].
Early models will inevitably be improved upon, but even the first systems employed in clinical practice must offer
superior ultrasound image analysis to the non-expert
practitioner. A subsequent, and more stringent, challenge
will be to ensure they augment operators with high-level
expertise, but machine learning systems are not guaranteed
to be superior to human performance [23] and systems
should not be relied upon to replace clinician knowledge.
Conversely, identifying features and associations that are
not regularly viewed by eye might not improve clinical
performance or outcomes.
Artificial intelligence systems for ultrasound may
require the acquisition of new ultrasound machines, or be
retro-fitted to current devices, both of which may
understandably delay uptake and incur cost. Finally,
unpredictable clinical implications will likely emerge; these
should be anticipated and addressed where possible.
Conclusion
Despite early promise, the potential for utilisation of AI in
medical image analysis is yet to be realised, and few
applications are currently employed in medical practice
[25]. In particular, machine learning for ultrasound-guided
regional anaesthesia appears to have received relatively
little attention. Anatomical knowledge and ultrasound
image interpretation are of paramount importance in
ultrasound-guided regional anaesthesia, but the human
performance and teaching of both are known to be fallible.
Robust and reliable AI technologies could support clinicians
to optimise performance, increase uptake and standardise
training in ultrasound-guided regional anaesthesia. Mark R Sullivan realised the potential of the mobile telephone decades before it impacted the public consciousness. Our belief is that AI systems in healthcare will have a similar impact, including in the field of ultrasound-guided regional anaesthesia, offering innovative solutions to change service
provision and workforce education. Anaesthetists should
embrace this opportunity and engage in the development
of these technologies to ensure they are used to enhance
the specialty in a transformative manner.
Acknowledgements
The authors would like to acknowledge the contributions of
Dr F. Zmuda (Fig. 1) and Dr J. Mortimer (Fig. 2) for the
production of illustrations used in this article. JB is a Clinical
Advisor for and receives honoraria from Intelligent
Ultrasound Limited. KE has received research, honoraria and
educational funding from Fisher and Paykel Healthcare Ltd,
GE Healthcare, and Ambu, and is an Editor for Anaesthesia.
DL is a Clinical Advisor for and receives honoraria from
Intelligent Ultrasound Limited and is the Lead Clinician on
AnatomyGuide. No other competing interests declared.
References
1. Scholten HJ, Pourtaherian A, Mihajlovic N, et al. Improving needle tip identification during ultrasound-guided procedures in anaesthetic practice. Anaesthesia 2017; 72: 889–904.
2. Munirama S, McLeod GA. A systematic review and meta-analysis of ultrasound versus nerve stimulation for peripheral nerve location and blockade. Anaesthesia 2015; 70: 1084–91.
3. Sites BD, Chan VW, Neal JM, et al. The American Society of Regional Anesthesia and Pain Medicine and the European Society of Regional Anaesthesia and Pain Therapy Joint Committee recommendations for education and training in ultrasound-guided regional anaesthesia. Regional Anesthesia and Pain Medicine 2009; 34: 40–6.
4. Turbitt LR, Mariano ER, El-Boghdadly K. Future directions in regional anaesthesia: not just for the cognoscenti. Anaesthesia 2020; 75: 293–7.
5. Bowness J, Turnbull K, Taylor A, et al. Identifying variant anatomy during ultrasound-guided regional anaesthesia: opportunities for clinical improvement. British Journal of Anaesthesia 2019; 122: 775–7.
6. Drew T, Vo MLH, Wolfe JM. The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychological Science 2013; 24: 1848–53.
7. Connor CW. Artificial intelligence and machine learning in anesthesiology. Anesthesiology 2019; 131: 1346–59.
8. James Lind Alliance. Anaesthesia and Preoperative Care Top 10. http://www.jla.nihr.ac.uk/priority-setting-partnerships/anaesthesia-and-perioperative-care/top-10-priorities/ (accessed 15/11/2019).
9. Côté CD, Kim PJ. Artificial intelligence in anesthesiology: moving into the future. University of Toronto Medical Journal 2019; 96: 33–6.
10. Karpagavalli S, Jamuna KS, Vijaya MS. Machine learning approach for preoperative anaesthetic risk prediction. International Journal of Recent Trends in Engineering and Technology 2009; 1: 19–22.
11. Oh TT, Ikhsan M, Tan KK, et al. A novel approach to neuraxial anesthesia: application of an automated ultrasound spinal landmark identification. BMC Anesthesiology 2019; 19: 57.
12. Wijnberge M, Geerts BF, Hol L, et al. Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery. Journal of the American Medical Association 2020; 323: 1052–60.
13. Sippl P, Ganslandt T, Prokosch HU, et al. Machine learning models of post-intubation hypoxia during general anesthesia. Studies in Health Technology and Informatics 2017; 243: 212–6.
14. Lee CK, Ryu HG, Chung EJ, et al. Prediction of bispectral index during target-controlled infusion of propofol and remifentanil: a deep learning approach. Anesthesiology 2018; 128: 492–501.
15. Fritz BA, Cui Z, Zhang M, et al. Deep-learning model for predicting 30-day postoperative mortality. British Journal of Anaesthesia 2019; 123: 688–95.
16. Misra G, Wang WE, Archer DB, et al. Automated classification of pain perception using high-fidelity electroencephalographic data. Journal of Neurophysiology 2017; 117: 786–95.
17. Smistad E, Johansen KF, Iversen DH, et al. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. Journal of Medical Imaging 2018; 5: 1.
18. Shen D, Wu G, Zhang D, et al. Machine learning in medical imaging. Computerized Medical Imaging and Graphics 2015; 41: 1–2.
19. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 2018; 24: 1342–50.
20. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature 2020; 577: 89–94.
21. Prabhakar G, Biswas P. Eye gaze controlled projected display in automotive and military aviation environments. Multimodal Technologies and Interaction 2018; 2: 1.
22. Shorten G, Kallidaikurichi Srinivasan K, Reinertsen I. Machine learning and evidence-based training in technical skills. British Journal of Anaesthesia 2018; 121: 521–3.
23. Alexander JC, Joshi GP. Anesthesiology, automation, and artificial intelligence. Proceedings (Baylor University Medical Center) 2018; 31: 117–9.
24. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet 2019; 393: 1577–9.
25. Kelly CJ, Karthikesalingam A, Suleyman M, et al. Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine 2019; 17: 195.
Supporting Information
Additional supporting information may be found online via
the journal website.
Video S1. Sub-sartorial distal femoral triangle (adductor
canal) block: real-time anatomy overlay by AnatomyGuide.
... 11 The implementation of artificial intelligence (AI) in clinical practice continues to grow and regional anesthesia has potential to embrace this technology for patient benefit, such as supporting UGRA training and practice to increase patient access to these techniques, as well as improve patient safety. 12 Broadly speaking, AI includes any technique that enables computers to undertake tasks associated with human intelligence. 12 Common techniques in this field include machine learning (ML) and deep learning (DL). ...
... 12 Broadly speaking, AI includes any technique that enables computers to undertake tasks associated with human intelligence. 12 Common techniques in this field include machine learning (ML) and deep learning (DL). ML uses algorithms, rule-based problem-solving instructions implemented by the computer, 12 to learn: training data are analyzed to identify patterns (statistical correlations) in this information. ...
... 12 Common techniques in this field include machine learning (ML) and deep learning (DL). ML uses algorithms, rule-based problem-solving instructions implemented by the computer, 12 to learn: training data are analyzed to identify patterns (statistical correlations) in this information. If the algorithm is informed of the desired endpoints (eg, by labeling the images during the training stage) and then looks for correlations between the raw input data and the endpoints, this is called supervised ML. ...
Article
Full-text available
Introduction: Ultrasound-guided regional anesthesia (UGRA) involves the acquisition and interpretation of ultrasound images to delineate sonoanatomy. This study explores the utility of a novel artificial intelligence (AI) device designed to assist in this task (ScanNav Anatomy Peripheral Nerve Block; ScanNav), which applies a color overlay on real-time ultrasound to highlight key anatomical structures. Methods: Thirty anesthesiologists, 15 non-experts and 15 experts in UGRA, performed 240 ultrasound scans across nine peripheral nerve block regions. Half were performed with ScanNav. After scanning each block region, participants completed a questionnaire on the utility of the device in relation to training, teaching, and clinical practice in ultrasound scanning for UGRA. Ultrasound and color overlay output were recorded from scans performed with ScanNav. Experts present during the scans (real-time experts) were asked to assess potential for increased risk associated with use of the device (eg, needle trauma to safety structures). This was compared with experts who viewed the AI scans remotely. Results: Non-experts were more likely to provide positive and less likely to provide negative feedback than experts (p=0.001). Positive feedback was provided most frequently by non-experts on the potential role for training (37/60, 61.7%); for experts, it was for its utility in teaching (30/60, 50%). Real-time and remote experts reported a potentially increased risk in 12/254 (4.7%) vs 8/254 (3.1%, p=0.362) scans, respectively. Discussion: ScanNav shows potential to support non-experts in training and clinical practice, and experts in teaching UGRA. Such technology may aid the uptake and generalizability of UGRA. Trial registration number: NCT04918693.
... Ultrasound-guided nerve block is an important technical means of precision anesthesia, but because of its complexity and diversity, it is difficult to guarantee the popularity and blocking effect. Bowness et al. [69] built a clinical support system for nerve block based on AnatomyGuide system by learning from big data, which can effectively assist clinicians to identify nerve block and operate and achieve the role of optimizing anesthesia management. During general anesthesia, a number of studies have shown that closed-loop management based on anesthetic depth can more safely guide clinical anesthetic drugs to achieve accurate anesthesia. ...
Article
Full-text available
The physiological and neuroregulatory mechanism of propofol is largely based on very limited knowledge. It is one of the important puzzling issues in anesthesiology and is of great value in both scientific and clinical fields. It is acknowledged that neural networks which are comprised of a number of neural circuits might be involved in the anesthetic mechanism. However, the mechanism of this hypothesis needs to be further elucidated. With the progress of artificial intelligence, it is more likely to solve this problem through using artificial neural networks to perform temporal waveform data analysis and to construct biophysical computational models. This review focuses on current knowledge regarding the anesthetic mechanism of propofol, an intravenous general anesthetic, by constructing biophysical computational models.
... e development of modern medicine has made emergency medicine an independent medical discipline. Emergency medicine has become one of the important symbols reflecting the level of clinical medical science [8]. It is important to train medical personnel who are suitable for the needs of China's medical service system and have solid basic skills in emergency medicine and first aid. ...
Article
Full-text available
Anesthesiology is a subject with strong practicality and application. Undergraduate anesthesiology teaching needs to strike a balance between theoretical knowledge, clinical skill training, and clinical thinking development. Clinical probation and practice are an important part of undergraduate anesthesia teaching. Traditional clinical teaching uses real patients for demonstration and training, but as patients become more self-protective and less cooperative, there are not enough patients for clinical skill training. Simulation is to teach medical scenes in real life under the control of standardized technical guidelines and parameters. Since then, with the rapid development of computer technology, simulation technology and simulation teaching have been greatly developed and are more and more used in clinical teaching, skill evaluation, and scientific research. This study explores the effective methods of clinical teaching in anesthesiology by comparing the effectiveness of traditional teaching methods and simulation teaching methods in undergraduate clinical teaching. It is difficult to combine theory and practice in first aid, which does not allow them to directly receive and deal with emergency medical treatment and resuscitation. In China’s current medical environment and patients’ high demand for medical services, it is imperative to vigorously carry out simulated medical education. In the eastern part of Inner Mongolia, according to the advantages of teaching hospitals, our hospital took the lead in carrying out the simulation education project, which is still in the exploratory stage and not systematic enough. This study will help us to better carry out simulation teaching and improve the clinical skills of medical students in the future. Methods. 
The student group and class took the advanced simulator training as the experimental group, applied the advanced integrated simulator and other systems of the Norwegian company, referred to the international guidelines for cardiopulmonary resuscitation and cardiovascular first aid in 2005, and practiced in the emergency department during the clinical internship and “emergency clinical simulation training” course. The course includes basic life support, advanced life support, and comprehensive training of CPR (cardiopulmonary resuscitation) and endotracheal intubation. Results. The passing rate of simulated first aid practice was 94.4%; 100% of the students think it is necessary to set up the course, 91% of the students think it is practical, 91% of the students think the course content is reasonable and perfect, and 77%–100% of the students think the course has improved their first aid operation ability, comprehensive application of knowledge, and clinical thinking ability. Conclusion. Carrying out the course of “clinical simulated first aid training” through the advanced simulator system can effectively improve the interns’ clinical first aid operation ability, teamwork ability, and self-confidence, improve the students’ clinical thinking and judgment ability, and improve the service level to patients.
... US-guided punctures (mainly related to biopsy or nerve-blocking or regional anesthesia) require for clinicians expertise and large learning curves being highly operator dependent. The need of reduce this learning curve and improve accuracy and minimize complications derived from these procedures have pushed the development of specific AI applications for detecting and recognizing PNs [74,75]. The application of a fully automatic deep learning-based approach using MR neurography data has been tested obtaining high correlation with manual segmentation and volumetric similarities with the AI algorithm [76]. ...
Article
Full-text available
Purpose To perform a review of the physical basis of DTI and DCE-MRI applied to Peripheral Nerves (PNs) evaluation with the aim of providing readers the main concepts and tools to acquire these types of sequences for PNs assessment. The potential added value of these advanced techniques for pre- and post-surgical PN assessment is also reviewed in diverse clinical scenarios. Finally, a brief introduction to the promising applications of Artificial Intelligence (AI) for PNs evaluation is presented. Methods We review the existing literature and analyze the latest evidence regarding DTI, DCE-MRI and AI for PNs assessment. This review is focused on a practical approach to these advanced sequences, providing tips and tricks for implementing them in real clinical practice, with a focus on imaging postprocessing and their current clinical applicability. A summary of the potential applications of AI algorithms for PNs assessment is also included. Results DTI, successfully used in the central nervous system, can also be applied for PNs assessment. DCE-MRI can help evaluate PN vascularization and the integrity of the blood-nerve barrier beyond the conventional gadolinium-enhanced MRI sequences approach. Both approaches have been tested for PN assessment, including pre- and post-surgical evaluation of PNs and tumoral conditions. AI algorithms may help radiologists with PN detection, segmentation and characterization, with promising initial results. Conclusion DTI and DCE-MRI are feasible tools for the assessment of PN lesions. This manuscript emphasizes the technical adjustments necessary to acquire and post-process these images. AI algorithms can also be considered a promising alternative for PN evaluation, with encouraging early results.
... Learning and practising UGRA presents many challenges, but recent developments in AI could bring wide-ranging benefits. Automated assistive technology could be used to help the identification of key sono-anatomical structures, such as nerves, arteries and muscle (Bowness et al. 2021c). Systems could also aid in identifying the optimal ultrasound view before introduction of the needle (Smistad et al. 2017;Bowness et al. 2021a). ...
Chapter
Ultrasound-guided regional anaesthesia (UGRA) involves the targeted deposition of local anaesthesia to inhibit the function of peripheral nerves. Ultrasound allows the visualisation of nerves and the surrounding structures, to guide needle insertion to a perineural or fascial plane end point for injection. However, it is challenging to develop the necessary skills to acquire and interpret optimal ultrasound images. Sound anatomical knowledge is required and human image analysis is fallible, limited by heuristic behaviours and fatigue, while its subjectivity leads to varied interpretation even amongst experts. Therefore, to maximise the potential benefit of ultrasound guidance, innovation in sono-anatomical identification is required. Artificial intelligence (AI) is rapidly infiltrating many aspects of everyday life. Advances related to medicine have been slower, in part because of the regulatory approval process needing to thoroughly evaluate the risk-benefit ratio of new devices. One area of AI to show significant promise is computer vision (a branch of AI dealing with how computers interpret the visual world), which is particularly relevant to medical image interpretation. AI includes the subfields of machine learning and deep learning, techniques used to interpret or label images. Deep learning systems may hold potential to support ultrasound image interpretation in UGRA but must be trained and validated on data prior to clinical use. Review of the current UGRA literature compares the success and generalisability of deep learning and non-deep learning approaches to image segmentation and explains how computers are able to track structures such as nerves through image frames. We conclude this review with a case study from industry (ScanNav Anatomy Peripheral Nerve Block; Intelligent Ultrasound Limited). This includes a more detailed discussion of the AI approach involved in this system and reviews current evidence of the system performance. The authors discuss how this technology may be best used to assist anaesthetists and what effects this may have on the future of learning and practice of UGRA. Finally, we discuss possible avenues for AI within UGRA and the associated implications.
... These efforts can potentially solve the many puzzles that exist as a mystique in relation to debilitating diseases affecting the human brain (Murray, Unberath, Hager, & Hui, 2020). The application of AI in education-based research within the anatomical sciences holds value in designing customized learning solutions that keep in mind the individualistic needs of students (Bowness, El-Boghdadly, & Burckett-St Laurent, 2021). The application of technology in anatomical research is still in its inception phase and has enormous potential for scaling new heights in terms of outcome and influence on mankind. ...
Article
The present-day scenario regarding epistemological methods in anatomy is in sharp contrast to the situation in the ancient period. This study aimed to explore the evolution of epistemological methodologies in anatomy across the centuries. In ancient times Egyptian embalmers acquired anatomical knowledge from handling human bodies, and likewise anatomical studies in India involved human dissection. Ancient Greeks used methods based on theological principles, animal dissection and human dissection in the practice of anatomy. Human dissection was also practiced in ancient China for gaining anatomical knowledge. Prohibition of human dissection led to the use of animal dissection in ancient Rome, and the trend continued in Europe through the Middle Ages. The epistemological methods used by Muslim scholars during the Middle Ages are not clearly chronicled. Human dissection returned as the primary epistemological method in Renaissance Europe, and empirical methods were reinstated in human dissection during the 16th century. The situation further improved with the introduction of a pragmatic, experiment-based approach during the 17th century and autopsy-based methods during the 18th century. Advances in anatomical knowledge continued with the advent of microscope-based methods and the emergence of anatomical sections in the practice of human dissection in the 19th century. The introduction of human observational studies, medical imaging, and molecular methods presented more options in terms of epistemological methods for investigating the human body during the 20th century. The onset of the 21st century has witnessed the dominance of technology-based methods in anatomy. Limited emphasis on ethics in epistemological methodologies since antiquity is a dark aspect of an otherwise eventful evolutionary journey, but recent developments are in a positive direction.
Article
Introduction Shear wave elastography (SWE) presents nerves in colour, but the dimensions of its colour maps have not been validated against paired B-Mode nerve images. Our primary objective was to define the bias and limits of agreement of SWE with B-Mode nerve diameter. Our secondary objectives were to compare nerve area and shape, and to provide a clinical standard for future application of new colour imaging technologies such as artificial intelligence. Materials and Methods Eleven combined ultrasound-guided regional nerve blocks were conducted using a dual-mode transducer. Two raters outlined nerve margins on 110 paired B-Mode and SWE images every second for 20 s before and during injection. Bias and limits of agreement were plotted on Bland-Altman plots. We hypothesized that the bias of nerve diameter would be <2.5% and that the percent limits of agreement would lie within ±0.67% (2 SD) of the bias. Results There was no difference in the bias (95% confidence interval (CI) limits of agreement) of nerve diameter measurement, 0.01 (−0.14 to 0.16) cm, P = 0.85, equivalent to a 1.4% (−56.6% to 59.5%) difference. The bias and limits of agreement were 0.03 (−0.08 to 0.15) cm², P = 0.54 for cross-sectional nerve area; and 0.02 (−0.03 to 0.07), P = 0.45 for shape. Reliability (ICC) between raters was 0.96 (0.94–0.98) for B-Mode nerve area and 0.91 (0.83–0.95) for SWE nerve area. Conclusions Nerve diameter measurement from B-Mode and SWE images fell within a priori measures of bias and limits of agreement.
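The agreement analysis above follows the standard Bland-Altman approach: the bias is the mean of the pairwise differences, and the 95% limits of agreement sit 1.96 standard deviations either side of it. A minimal sketch of that calculation, using hypothetical paired nerve-diameter readings rather than the study's data:

```python
import statistics

def bland_altman(a, b):
    """Return (bias, (lower, upper)) for paired measurements.

    Bias is the mean of the pairwise differences a - b; the 95%
    limits of agreement are bias +/- 1.96 x SD of those differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired nerve-diameter readings (cm): B-Mode vs SWE
b_mode = [0.51, 0.48, 0.55, 0.47, 0.52]
swe = [0.50, 0.49, 0.53, 0.48, 0.51]
bias, (lower, upper) = bland_altman(b_mode, swe)
```

On the corresponding plot, each pair's difference is plotted against its mean, with horizontal lines drawn at the bias and at the two limits of agreement.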
Article
Ultrasound (US) technology, with major advances and new developments, has become an essential and first-line imaging modality for clinical diagnosis and interventional treatment. US imaging has evolved from one-dimensional and two-dimensional to three-dimensional display, from static to real-time imaging, and from structural to functional imaging. Based on its portability and advanced digital imaging techniques, US was first adopted by emergency medicine in the 1980s and gradually gained popularity among other specialists for clinical diagnosis and interventional treatment. Point-of-Care Ultrasound (POCUS) was then proposed as a new concept and developed for new uses, which greatly extended clinical US applications. Nowadays, artificial intelligence (AI), cloud computing, 5G networks, robotics, and remote technologies are starting to be integrated into US equipment. US systems have gradually evolved into an intelligent terminal platform with powerful imaging and communication tools. In addition, specialized US machines tend to be more suitable and important to meet increasing demands and requirements by various clinical specialties and departments. In this article, we review current US technology, the concept of POCUS and its future trends, as well as related technological developments and clinical applications.
Article
Full-text available
Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful [1]. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives [2]. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening. An artificial intelligence (AI) system performs as well as or better than radiologists at detecting breast cancer from mammograms, and using a combination of AI and human inputs could help to improve screening efficiency.
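Several of the studies summarised here compare models by AUC-ROC. That number has a direct interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counting as half. A minimal sketch of that pairwise definition, using made-up labels and scores rather than any study's data:

```python
def auc_roc(labels, scores):
    """AUC-ROC computed as the probability that a random positive
    case outranks a random negative case, counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Sum one "win" per correctly ranked positive-negative pair
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical screening labels (1 = disease present) and model scores
labels = [1, 1, 0, 0, 0]
scores = [0.90, 0.40, 0.50, 0.30, 0.10]
auc = auc_roc(labels, scores)  # 5 of 6 positive-negative pairs ranked correctly
```

This pairwise form is equivalent to the area under the ROC curve and makes clear why an AUC of 0.5 corresponds to chance-level ranking.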
Article
Full-text available
Background: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice. Main body: Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes. Conclusion: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. 
Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
Article
Full-text available
Background: Postoperative mortality occurs in 1-2% of patients undergoing major inpatient surgery. The currently available prediction tools using summaries of intraoperative data are limited by their inability to reflect shifting risk associated with intraoperative physiological perturbations. We sought to compare similar benchmarks to a deep-learning algorithm predicting postoperative 30-day mortality. Methods: We constructed a multipath convolutional neural network model using patient characteristics, co-morbid conditions, preoperative laboratory values, and intraoperative numerical data from patients undergoing surgery with tracheal intubation at a single medical centre. Data for 60 min prior to a randomly selected time point were utilised. Model performance was compared with a deep neural network, a random forest, a support vector machine, and a logistic regression using predetermined summary statistics of intraoperative data. Results: Of 95 907 patients, 941 (1%) died within 30 days. The multipath convolutional neural network predicted postoperative 30-day mortality with an area under the receiver operating characteristic curve of 0.867 (95% confidence interval [CI]: 0.835-0.899). This was higher than that for the deep neural network (0.825; 95% CI: 0.790-0.860), random forest (0.848; 95% CI: 0.815-0.882), support vector machine (0.836; 95% CI: 0.802-0.870), and logistic regression (0.837; 95% CI: 0.803-0.871). Conclusions: A deep-learning time-series model improves prediction compared with models with simple summaries of intraoperative data. We have created a model that can be used in real time to detect dynamic changes in a patient's risk for postoperative mortality.
Article
Full-text available
Background Neuraxial procedures are commonly performed for therapeutic and diagnostic indications. Currently, they are typically performed via palpation-guided surface landmarks. We devised a novel intelligent image processing system that identifies spinal landmarks using ultrasound images. Our primary aim was to evaluate the first-attempt success rate of spinal anesthesia using landmarks obtained from the automated spinal landmark identification technique. Methods In this prospective cohort study, we recruited 100 patients who required spinal anesthesia for surgical procedures. Video from the ultrasound scan of the L3/4 interspinous space in the longitudinal view and the posterior complex in the transverse view was recorded. Demographic and clinical characteristics were collected and analyzed based on the success rates of spinal insertion. Results The success rate (95% CI) for dural puncture at first attempt was 92.0% (85.0–95.9%). Median time to detection of the posterior complex was 45.0 [IQR: 21.9, 77.3] seconds. Good correlation was observed between the program-recorded depth and the clinician-measured depth to the posterior complex (r = 0.94). Conclusions The high success rate and short time taken to obtain the surface landmark with this novel automated ultrasound-guided technique could help clinicians use ultrasound-guided neuraxial techniques with confidence to identify anatomical landmarks on ultrasound scans. Future research should define its use in more complex patients during the administration of neuraxial blocks. Trial registration This study was retrospectively registered on the clinicaltrials.gov registry (NCT03535155) on 24 May 2018.
Article
Full-text available
The applications of artificial intelligence (AI) and machine learning (ML) have shown promising results in healthcare. However, while many advances have been made to incorporate AI into the field of anesthesiology since it was first used to automate anesthetic delivery, it is still not commonplace. Previous studies have demonstrated that ML algorithms are useful in perioperative management, and the contributions of AI to general anesthesia have yielded advancements in closed-loop systems. Although these tools may ultimately help anesthesiologists guide clinical decision making, it is still unknown how ML-based predictions should be managed in real time. The fields of postoperative pain management and chronic pain have benefited from AI by developing software capable of predicting pain level and analgesia response, allowing for increasingly individualized care. Importantly, data amalgamation and ML techniques may not solely be useful in direct patient care, but will also increase the training power of simulations by providing high-fidelity clinical scenarios and unbiased feedback, thereby improving education in anesthesiology. It is clear that AI will find many applications in anesthesia care, delivering real-time results and patient assessments to enable physicians to focus on higher-order tasks. However, much more work is required to understand exactly the scope that AI will play in anesthesiology.
Article
Importance Intraoperative hypotension is associated with increased morbidity and mortality. A machine learning–derived early warning system to predict hypotension shortly before it occurs has been developed and validated. Objective To test whether the clinical application of the early warning system in combination with a hemodynamic diagnostic guidance and treatment protocol reduces intraoperative hypotension. Design, Setting, and Participants Preliminary unblinded randomized clinical trial performed in a tertiary center in Amsterdam, the Netherlands, among adult patients scheduled for elective noncardiac surgery under general anesthesia and an indication for continuous invasive blood pressure monitoring, who were enrolled between May 2018 and March 2019. Hypotension was defined as a mean arterial pressure (MAP) below 65 mm Hg for at least 1 minute. Interventions Patients were randomly assigned to receive either the early warning system (n = 34) or standard care (n = 34), with a goal MAP of at least 65 mm Hg in both groups. Main Outcomes and Measures The primary outcome was time-weighted average of hypotension during surgery, with a unit of measure of millimeters of mercury. This was calculated as the depth of hypotension below a MAP of 65 mm Hg (in millimeters of mercury) × time spent below a MAP of 65 mm Hg (in minutes) divided by total duration of operation (in minutes). Results Among 68 randomized patients, 60 (88%) completed the trial (median age, 64 [interquartile range {IQR}, 57-70] years; 26 [43%] women). The median length of surgery was 256 minutes (IQR, 213-430 minutes). The median time-weighted average of hypotension was 0.10 mm Hg (IQR, 0.01-0.43 mm Hg) in the intervention group vs 0.44 mm Hg (IQR, 0.23-0.72 mm Hg) in the control group, for a median difference of 0.38 mm Hg (95% CI, 0.14-0.43 mm Hg; P = .001). 
The median time of hypotension per patient was 8.0 minutes (IQR, 1.33-26.00 minutes) in the intervention group vs 32.7 minutes (IQR, 11.5-59.7 minutes) in the control group, for a median difference of 16.7 minutes (95% CI, 7.7-31.0 minutes; P < .001). In the intervention group, 0 serious adverse events resulting in death occurred vs 2 (7%) in the control group. Conclusions and Relevance In this single-center preliminary study of patients undergoing elective noncardiac surgery, the use of a machine learning–derived early warning system compared with standard care resulted in less intraoperative hypotension. Further research with larger study populations in diverse settings is needed to understand the effect on additional patient outcomes and to fully assess safety and generalizability. Trial Registration ClinicalTrials.gov Identifier: NCT03376347
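The trial's primary outcome follows directly from the definition given above: depth below a MAP of 65 mm Hg multiplied by time spent below that threshold, divided by total operation time. A minimal sketch of that calculation, assuming a regularly sampled MAP trace (the readings below are hypothetical, not trial data):

```python
def twa_hypotension(map_trace, interval_min=1.0, threshold=65.0):
    """Time-weighted average of hypotension in mm Hg.

    Accumulates depth below the threshold (mm Hg) x sampling
    interval (min) for each reading, then divides the resulting
    mm Hg-minutes by the total duration of the trace.
    """
    total_min = len(map_trace) * interval_min
    area = sum(max(threshold - m, 0.0) * interval_min for m in map_trace)
    return area / total_min

# Hypothetical 1-minute MAP readings (mm Hg) from a short case:
# two hypotensive minutes at depths of 2 and 5 mm Hg below threshold
trace = [70, 68, 63, 60, 66, 72]
twa = twa_hypotension(trace)  # (2 + 5) mm Hg-min over 6 min
```

A TWA of 0 means the MAP never dropped below 65 mm Hg; larger values reflect deeper or longer hypotensive episodes, which is why it captures severity better than time-below-threshold alone.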
Article
Commercial applications of artificial intelligence and machine learning have made remarkable progress recently, particularly in areas such as image recognition, natural speech processing, language translation, textual analysis, and self-learning. Progress had historically languished in these areas, such that these skills had come to seem ineffably bound to intelligence. However, these commercial advances have performed best at single-task applications in which imperfect outputs and occasional frank errors can be tolerated. The practice of anesthesiology is different. It embodies a requirement for high reliability, and a pressured cycle of interpretation, physical action, and response rather than any single cognitive act. This review covers the basics of what is meant by artificial intelligence and machine learning for the practicing anesthesiologist, describing how decision-making behaviors can emerge from simple equations. Relevant clinical questions are introduced to illustrate how machine learning might help solve them—perhaps bringing anesthesiology into an era of machine-assisted discovery.