Review Article

Artificial Intelligence Application in Bone Fracture Detection

Ahmed AlGhaithi, Sultan Al Maskari
Division of Orthopedics, Department of Surgery, Sultan Qaboos University Hospital, Muscat, Oman

Abstract
The interest of researchers, clinicians, and industry in artificial intelligence (AI) continues to grow, especially with recent deep-learning (DL) advances. Recently published reports have shown the utility of DL for bone fracture diagnosis on radiological examination. It is important for practicing physicians to recognize the current scope of DL, as it may impact clinical practice in the near future. This article gives the practicing clinician an insight into current advances in AI fracture diagnosis by reviewing the literature on this subject. Electronic databases were searched for relevant articles relating to AI applications in bone fracture detection. We included all published work in PubMed, Medline, and cross-references that satisfied the inclusion criteria. The search identified 104 references; of those, 13 articles were eligible for analysis. AI advancements in fracture imaging can be divided into the categories of fracture detection, classification, segmentation, and noninterpretive tasks. Despite the promising work presented in the literature, many challenges stand in the way of clinical translation and widespread use, ranging from proof of safety to clearance by regulatory agencies.

Keywords: Artificial Intelligence, convolutional neural networking, deep learning, fracture imaging, machine learning, musculoskeletal

Address for correspondence: Dr. Ahmed AlGhaithi, P. O. Box 478, P.C. 130, Muscat, Sultanate of Oman. E-mail: a.alghaithi@squ.edu.om

How to cite this article: AlGhaithi A, Al Maskari S. Artificial intelligence application in bone fracture detection. J Musculoskelet Surg Res 2021;5:4-9. DOI: 10.4103/jmsr.jmsr_132_20

Received: 26-11-2020; Revised: 18-01-2021; Accepted: 26-01-2021; Published Online: 20-02-2021

Introduction
Bone fractures are among the most common causes of emergency department visits. Diagnostic errors often occur due to misinterpretation of radiological examinations, which may lead to delayed treatment and poor outcomes.[1] Analyses of the causes of fracture diagnostic inaccuracies have found them to be multifactorial, including physician factors, image quality, insufficient clinical information, fracture type, and polytrauma.[2] Four out of five diagnostic errors in the emergency setting are due to physician factors, yet radiographs are often interpreted by clinicians who lack the required specialized expertise.[3] Even an experienced radiologist is subject to fatigue and error during a long, busy day, increasing the risk of missing a subtle fracture.[4] Thus, a model that assists physicians by presenting a second opinion, highlighting concerning areas on imaging examinations, may produce more efficient interpretation, standardize quality, and decrease errors. With recent advances in deep learning (DL) and computer vision, artificial intelligence (AI) may play a significant role in this field.
AI is a powerful technology that has demonstrated good potential at radiographic image interpretation. While earlier AI systems performed below human level, modern versions can match or even surpass human performance.[5] AI has also shown promising results in complex diagnostics in other medical specialties such as ophthalmology, dermatology, and pathology.[6] The aim of this article is to explore the potential of utilizing AI in fracture diagnosis by reviewing the current literature on this subject.
Technical Aspects
AI, machine learning (ML), DL, and convolutional neural networks (CNNs) are terms that are often used interchangeably [Figure 1]. AI refers to any capability of a machine to perform tasks that mimic human intelligence. ML is a subfield of AI that enables a machine to learn and improve from experience independently of human action. DL is a more specialized subfield of ML that can analyze larger data sets, transforming an algorithm's inputs into outputs using
sophisticated computational models such as deep neural networks. The CNN is an evolving DL technique that can impact key areas of medicine such as medical imaging.[7] A CNN is built of computational units called nodes, which are analogous to biological neurons. Each node takes one or more weighted input connections and performs a mathematical operation, producing an output that can be passed to other connected nodes.

Figure 1: Relationships among artificial intelligence, machine learning, deep learning, and convolutional neural networks.
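To make this concrete, the following minimal sketch (illustrative only, not taken from any study reviewed here) shows how such a network could be assembled in Python with PyTorch for a hypothetical binary fracture/no-fracture classification of a grayscale radiograph; the layer sizes and input resolution are assumptions.

```python
# Minimal illustrative sketch (not from any study reviewed here) of a small
# CNN for binary fracture/no-fracture classification of a grayscale
# radiograph, using PyTorch. Layer sizes and input resolution are assumptions.
import torch
import torch.nn as nn

class FractureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Each convolutional layer applies learned weighted filters (the
        # "weighted input connections" of its nodes), followed by a
        # nonlinearity and downsampling.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A final fully connected node maps the pooled features to one output.
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        x = self.features(x)       # (batch, 64, 1, 1)
        x = x.flatten(1)           # (batch, 64)
        return self.classifier(x)  # raw logit; sigmoid turns it into a probability

model = FractureCNN()
dummy_radiograph = torch.randn(1, 1, 256, 256)  # one 256x256 grayscale image
prob = torch.sigmoid(model(dummy_radiograph))
print(f"Predicted fracture probability: {prob.item():.2f}")
```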
Material and Data Source
A search of online databases (PubMed and MEDLINE) was carried out to find the literature related to AI use in fracture diagnosis. The search was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Keywords included "artificial intelligence," "deep learning," "machine learning," and "fracture." Searches were conducted on April 1, 2020, yielding a total of 104 articles from the two databases, without applying any restriction on language or date of publication [Figure 2]. An independent reviewer screened the articles' titles and abstracts in the first reviewing stage, in addition to the titles and abstracts of cross-references. The following inclusion criteria were used: all levels of evidence and studies on humans. We did not place restrictions on the target population, the outcome of the disease of interest, or the intended context for using the model. We excluded nontraumatic musculoskeletal pathologies and conference abstracts, the latter due to incomplete data presentation.
Results
The search terms described above identified 216 references [Figure 2]. After duplicate removal, 104 titles and abstracts were screened. Of these, 19 full-text articles were assessed independently by both authors for eligibility; finally, 13 studies satisfied all the inclusion and exclusion criteria. A complete list of the included published work is provided in Table 1. The application of AI in fracture imaging can be classified into four major categories: pathology detection (e.g., calcaneus fracture), segmentation (automated segmentation of the region of interest, whereby irrelevant pixels, such as soft tissue, are cropped out so that they do not influence the training process), classification (e.g., calcaneal fracture classification), and noninterpretive tasks (e.g., image-quality improvement from undersampled magnetic resonance imaging or low-dose computed tomography [CT]).[5]

Figure 2: PRISMA flow diagram for study selection. Of 216 records identified, 104 remained after duplicate removal; 85 were excluded on title and abstract screening, 19 full-text articles were assessed for eligibility, 5 were excluded with reasons, and 13 studies were included in the quantitative synthesis.
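As a toy illustration of the segmentation category described above (an assumption for illustration, not code from the reviewed studies), preprocessing might crop a radiograph to a region of interest so that irrelevant pixels do not influence training:

```python
# Toy illustration of segmentation-style preprocessing: crop a radiograph to a
# region of interest so background and soft-tissue pixels are excluded from
# training. The bounding box here would normally come from a segmentation model.
from typing import Tuple
import numpy as np

def crop_to_region_of_interest(image: np.ndarray,
                               box: Tuple[int, int, int, int]) -> np.ndarray:
    """box = (row_min, row_max, col_min, col_max) in pixel coordinates."""
    r0, r1, c0, c1 = box
    return image[r0:r1, c0:c1]

radiograph = np.random.rand(512, 512)                 # stand-in for a grayscale image
bone_crop = crop_to_region_of_interest(radiograph, (300, 480, 100, 300))
print(bone_crop.shape)                                # (180, 200)
```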
Upper Limb Fractures
The rate of missed fractures is similar between the upper and lower extremities. The upper limb fractures most likely to be missed involve the elbow (6%), hand (5.4%), wrist (4.2%), and shoulder (1.9%).[8] Kim and MacKinnon trained a model using 1112 wrist radiographs and then used an additional 100 images (50 fractured and 50 normal) for final testing and analysis. The area under the curve (AUC) was 0.954, with a diagnostic sensitivity of 90% and a specificity of 88%.[9] Lindsey et al. developed another CNN model for detecting wrist fractures using 135,409 radiographs; it improved the sensitivity of clinicians' image reading from 88% unaided to 94% aided, and in doing so the misinterpretation rate improved by 53%.[10] Olczak et al. designed an algorithm for distal radius fractures and tested it on hand and wrist radiographs. They compared the network's performance with that of two experienced orthopedic surgeons and showed a high
Figure 1: Shows the relationships of artificial intelligence, machine
learning, deep learning, and convolutional neural network
ScreeningIncluded Eligibility Identification
Records identified through
database searching
(n = 216 PubMed, Medline
and Cross-references)
Additional records identified
through other sources
(n = 0)
Records after duplicates removed
(n = 104)
Records excluded after
evaluation of titles and
abstracts
(n = 85)
Records screened
(n = 104)
Full-text articles
assessed for eligibility
(n = 19)
Full-text articles
excluded, with reasons
(n = 5)
Studies satisfied all inclusion
and exclusion criteria
(n = 13)
Studies included in
quantitative synthesis
(n = 13)
Figure 2: Preferred reporting items for systematic reviews and
meta‑analyses flow diagram for study selection
[Downloaded free from http://www.journalmsr.com on Tuesday, March 23, 2021, IP: 255.213.147.126]
AlGhaithi and Al Maskari:
Journal of Musculoskeletal Surgery and Research ¦ Volume 5 ¦ Issue 1 ¦ January-March 2021
6
detection rate, with a sensitivity of 90% and a specificity of 88%.[11] They did not specify the type of fractures or the difficulty of fracture detection.
Chung et al. trained a CNN model to detect fractures of the proximal humerus and classify the fracture type (four-part Neer classification) on a dataset of 1891 anteroposterior shoulder radiographs. The model showed a high precision of 96% and a mean AUC of 1.00 compared to specialists, with a sensitivity of 99% and a specificity of 97%. However, the task of classifying the fracture was more challenging; the reported accuracy ranged from 65% to 85%. The model showed superior accuracy compared to general physicians and orthopedic surgeons and almost similar performance to specialized shoulder surgeons.[12] Rayan et al. developed a model with a multiview approach, which mimics a human radiologist reviewing multiple images, for acute pediatric elbow fractures. They used 21,456 radiographic studies containing 58,817 elbow radiographs. The model accuracy was 88%, with a sensitivity of 91% and a specificity of 84%.[13]
Lower Limb Fractures
Hip fractures constitute 20% of patients admitted to orthopedic surgery, while the incidence of occult fractures on radiographs ranges from 4% to 9%.[14]
Table 1: Classification of artificial intelligence applications according to fractured body part

| Reference | Anatomic area | Module purpose | Modality | Compared to human expert performance | Performance (metric) |
|---|---|---|---|---|---|
| Kim et al. 2018 | Wrist | Diagnosis | Radiographs | No | Provided proof of concept for fracture detection on plain radiographs: 0.95 (AUC), 90% sensitivity, 88% specificity |
| Olczak et al. 2017 | Hand/wrist/ankle | Diagnosis | Radiographs | Yes | Sensitivity of 90% and specificity of 88% in detecting fractures on hand/wrist/ankle radiographs; accuracy of 83% versus 82% for radiologists |
| Lindsey et al. 2018 | Wrist | Diagnosis | Radiographs | Yes | Improved clinicians' image-reading sensitivity from 88% unaided to 94% aided |
| Chung et al. 2018 | Proximal humerus | Diagnosis and classification (Neer) | Radiographs | Yes | Diagnosis accuracy of 96%, 99% sensitivity, 97% specificity; classification accuracy 65%-86%, sensitivity 88%-97%, specificity 83%-94% (depending on fracture type) |
| Rayan et al. 2019 | Pediatric elbow fractures | Diagnosis | Radiographs | No | Model accuracy of 88%, with sensitivity of 91% and specificity of 84% |
| Urakawa et al. 2019 | Intertrochanteric hip fractures | Diagnosis | Radiographs | Yes | Convolutional neural network outperformed orthopedic surgeons at detection: accuracies of 96% versus 92%, specificities of 97% versus 97%, sensitivities of 94% versus 88% |
| Cheng et al. 2019 | Hip fracture | Diagnosis | Radiographs | No | Accuracy of 91%, sensitivity of 98%, AUC of 0.98 |
| Adams et al. 2019 | Neck of femur | Diagnosis | Radiographs | Yes | Accuracy of 91%, AUC 0.98; junior physicians' performance increased from 87.6% to 90.5% |
| Balaji et al. 2020 | Femur diaphysis | Diagnosis | Radiographs | No | Accuracy of 90.69%, with 86.66% sensitivity and 92.33% specificity |
| Kitamura et al. 2019 | Ankle | Diagnosis | Radiographs | No | Model with multiple views showed improved fracture-detection accuracy of 81% compared with 76% for a single view |
| Pranata et al. 2019 | Calcaneus | Classification (Sanders) | CT | No | Sanders classification model accuracy of 98% |
| Rahmaniar et al. 2019 | Calcaneus | Classification (Sanders) | CT | Yes | Accuracy of 86%, with computational performance of 133 frames per second |
| Burns et al. 2017 | Spine | Diagnosis | CT | No | Model detects, localizes, and classifies fractures and measures vertebral body bone density on lumbar and thoracic CT images; attained sensitivity of 95.7% |
| Tomita et al. 2018 | Spine | Diagnosis | CT | No | Model detecting osteoporotic vertebral fractures achieved an accuracy of 89.2% |
| Muehlematter et al. 2019 | Spine | | CT | No | Accuracy of classifying unstable/stable vertebrae was low, with AUC of 0.53 |

AUC: Area under the curve, CT: Computed tomography
Urakawa et al. developed a CNN to study intertrochanteric hip fractures using a total of 3346 hip images (1773 fractured and 1573 nonfractured). Their model was compared with the performance of five orthopedic surgeons and showed an accuracy of 96% versus 92%, a specificity of 97% versus 97%, and a sensitivity of 94% versus 88%.[15] Cheng et al. developed a CNN algorithm that was pretrained using 25,505 limb radiographs. The algorithm's accuracy for diagnosing hip fracture was 91%, with a sensitivity of 98%. This performance includes a low false-negative rate of 2%, which makes it a good screening tool.[16] Adams et al. developed a model to detect neck of femur fractures with an accuracy of 91% and an AUC of 0.98.[17] Balaji et al. developed a CNN to diagnose diaphyseal femur fractures. The model was developed using 175 radiographs (100 normal and 75 fractured) and then trained to classify the type of diaphyseal femur fracture, namely transverse, spiral, or comminuted. The highest achieved accuracy was 90.7%, with 86.6% sensitivity and 92.3% specificity.[18]
Missed ankle and foot fractures are common, especially in trauma patients. Some reports estimate that missed diagnoses at the initial contact, for various reasons, may reach up to 44%, of which 66% are due to radiological misdiagnosis.[19] This is why researchers have tried to train models for this purpose. Kitamura et al. developed a CNN from a small number of ankle radiographs (298 normal and 298 fractured ankles). The model was trained to detect ankle fractures, where a fracture was defined as involving the proximal forefoot, midfoot, hindfoot, distal tibia, or distal fibula. The model with multiple views showed improved accuracy in fracture detection, from 76% with a single view to 81%.[20]
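As an illustration of the multiview idea used by Rayan et al. and Kitamura et al. (a simplified assumption, not the authors' published code), per-view predictions from a single classifier can be combined into one study-level decision:

```python
# Simplified multiview sketch (not the authors' code): run one classifier on
# each radiographic view of a study and average the per-view fracture
# probabilities into a single study-level score.
from typing import List
import torch

def study_fracture_probability(model: torch.nn.Module,
                               views: List[torch.Tensor]) -> float:
    """views: list of (1, H, W) tensors, e.g., AP, lateral, and oblique images."""
    model.eval()
    with torch.no_grad():
        probs = [torch.sigmoid(model(v.unsqueeze(0))).item() for v in views]
    # Averaging (or majority voting) lets evidence from any single projection
    # contribute to the decision, mimicking a reader who reviews all views.
    return sum(probs) / len(probs)
```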
Pranata et al. proposed two CNN architectures for the classification of calcaneal fractures on CT images using the Sanders classification system. The proposed algorithm exhibited 98% accuracy, which makes it a viable tool for future use in computer-assisted diagnosis.[21] Rahmaniar and Wang developed a computer-aided method for calcaneal fracture detection on CT. The Sanders system was also used for fracture classification, with calcaneal fragments detected and marked by color segmentation. The achieved accuracy was high (86%), with a computational performance of 133 frames per second.[22]
Spine Fractures
The incidence of misdiagnosed spine fractures varies among studies, ranging from 19.5% to 45%.[23] Burns et al. were able to detect, localize, and classify vertebral fractures, as well as measure the bone density of vertebral bodies, using lumbar and thoracic CT images. The achieved sensitivity was 95.7%, with a false-positive rate of 0.29 per patient for compression fracture detection and localization.[24] Tomita et al. developed a CNN to extract radiological features of osteoporotic vertebral fractures on CT. The model was trained using 1432 CT scans comprising 10,546 sagittal views and achieved an accuracy of 89.2%. The resulting algorithm was then tested on 128 spine CT scans, achieving an accuracy of 90.8%.[25] Muehlematter et al. proposed algorithms to detect vertebrae at risk of fracture using 58 CT scans of patients with fractures acquired due to vertebral insufficiency. One hundred and twenty vertebrae (60 stable and 60 unstable) were included in the study. However, the accuracy of grading vertebrae as unstable or stable was low, with an AUC of 0.5.[26]
Discussion
AI is emerging as an effective tool to address the shortcomings of human error. The current status of the technology can be described by Gartner's hype cycle [Figure 3], which defines how a technology or innovation progresses through its life cycle from concept to widespread adoption.[27] The cycle consists of five phases. The first phase is the "technology trigger," where the technology is only envisioned, followed by the "peak of inflated expectations," where the technology's profile is raised by successful and unsuccessful trials. It is then followed by the "trough of disillusionment," in which defects in the technology cause disappointment in its effectiveness, and by the "slope of enlightenment," as companies begin to test it in their own environments. The final phase is the "plateau of productivity," where the technology is available in the market.[27] AI in medical applications, and fracture detection specifically, is still in the early phases of this cycle and falls at the peak of inflated expectations, as more reports continue to demonstrate the efficiency of AI in detecting fractures.[7] The work published in the field of orthopedic traumatology to date consists of small collective initiatives seeking proof of concept rather than applying the technology.

Figure 3: Gartner's hype cycle provides a graphic illustration of the maturity and deployment of technologies and applications.
The objective of integrating AI into clinical practice is to augment the workflow of the clinical environment rather than to replace the workforce. Thus, with the evolution of new computing platforms and the development of new algorithmic models, the new generation of AI is anticipated to advance the quality of workflow in several ways, namely improving the experience of care and the diagnoses, minimizing errors, improving time management, and reducing costs.[5] One of the greatest challenges that can be improved by AI is accurate
Figure 3: Gartner’s hype cycle provides a graphic illustration of the
maturity and deployment of technologies and applications
[Downloaded free from http://www.journalmsr.com on Tuesday, March 23, 2021, IP: 255.213.147.126]
AlGhaithi and Al Maskari:
Journal of Musculoskeletal Surgery and Research ¦ Volume 5 ¦ Issue 1 ¦ January-March 2021
8
radiological diagnosis, especially in the emergency setting by inexperienced or exhausted clinicians. Therefore, the aid of AI in fracture detection is more important in augmenting workflow than segmentation or classification.[9] For example, AI assistance in diagnosing difficult fractures, such as elbow fractures in children, will have a greater impact on treatment outcomes than classifying the type of fracture.
By integrating AI into the clinical setting, AI is expected to provide clinicians with the better clinical insights needed to reduce errors and improve the quality of interpretation. Another important area where AI can play a major role is official reporting after office hours; AI should support a reporting system for examinations performed in hospitals where the radiologist is not attending in person.[7]
The latest AI tools are expected to demonstrate state-of-the-art results. They should reduce workload and increase daily productivity by replacing the manual retrieval of image data from a database, suggesting comparisons with new images, and supporting audits and clinical studies. Moreover, AI should drive efficient worklist prioritization in the work environment, communicating important image findings and ensuring automatic assignment to the most appropriate available physician.[4]
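As a simple sketch of the worklist-prioritization idea (an assumed design, not a described product), pending studies could be ordered by model-predicted fracture probability so that the most suspicious examinations are read first:

```python
# Assumed illustration of AI-driven worklist prioritization: order pending
# studies so that those with the highest predicted fracture probability are
# reviewed first. Study IDs and probabilities below are made up.
from dataclasses import dataclass
from typing import List

@dataclass
class PendingStudy:
    study_id: str
    fracture_probability: float  # output of a detection model

def prioritize_worklist(studies: List[PendingStudy]) -> List[PendingStudy]:
    # Highest suspicion first, so likely fractures are not left waiting.
    return sorted(studies, key=lambda s: s.fracture_probability, reverse=True)

worklist = prioritize_worklist([
    PendingStudy("wrist-001", 0.12),
    PendingStudy("elbow-002", 0.91),
    PendingStudy("ankle-003", 0.47),
])
print([s.study_id for s in worklist])  # ['elbow-002', 'ankle-003', 'wrist-001']
```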
Limitations and Challenges of Artificial Intelligence in the Clinical Setting
AI remains far from operating independently in a clinical setting. In the face of many successful implementations of AI models, their limitations must be recognized. Although published works may show the feasibility and efficacy of the proposed diagnostic models, they are of an experimental nature and are not incorporated into daily clinical practice. In addition, published works are difficult to reproduce because the training data sets and code are rarely released. Moreover, to be useful, the proposed models need to be integrated within clinical information software as well as Picture Archiving and Communication Systems; to date, very limited data describe this type of integration.[5] Demonstrating the safety of these models to regulatory agencies is another important step for clinical translation and widespread use. However, there is no denying that AI is making rapid progress and great improvements.[4,6]
In general, newer generations of DL, and CNNs in particular, have demonstrated greater accuracy and more rapid development with innovative results than earlier generations. These approaches are now diagnostically accurate and are predicted to outperform human experts in the future, potentially giving patients a more precise diagnosis. To interpret and use artificial intelligence correctly, physicians must have a clear understanding of the tools on which it is built, taking into account the challenges standing in the way of clinical translation and widespread use; these challenges range from proof of safety to clearance from regulatory agencies.
Conclusion
Several AI models have demonstrated performance at the expert level. Comprehensive image interpretation has not yet been achieved, and it is too early to consider AI operating independently in a clinical setting. However, with current technology, AI has the potential to augment the efficiency of clinical workflow.
Ethical approval
The authors confirm that this review has been prepared in accordance with COPE rules and regulations. Given the nature of the review, IRB approval was not required.
Financial support and sponsorship
This study did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Conicts of interest
There are no conicts of interest.
Authors' contributions
AAG contributed to developing the project idea, searching the literature, interpreting the results, and preparing and revising the manuscript; SAM contributed to developing the idea and critically revising the manuscript. All authors have critically reviewed and approved the final draft and are responsible for the content and similarity index of the manuscript.
References
1. Pinto A, Reginelli A, Pinto F, Lo Re G, Midiri F, Muzj C, et al.
Errors in imaging patients in the emergency setting. Br J Radiol
2016;89(1061):20150914.
2. Pinto A, Berritto D, Russo A, Riccitiello F, Caruso M, Belfiore MP, et al. Traumatic fractures in adults: Missed diagnosis on plain radiographs in the Emergency Department. Acta Biomed 2018;89:111-23.
3. Hallas P, Ellingsen T. Errors in fracture diagnoses in the emergency
department–characteristics of patients and diurnal variation. BMC
Emerg Med 2006;6:4.
4. Krupinski EA, Schartz KM, Van Tassell MS, Madsen MT, Caldwell RT, Berbaum KS. Effect of fatigue on reading computed tomography examination of the multiply injured patient. J Med Imaging 2017;4(3).
5. Chea P, Mandell JC. Current applications and future directions
of deep learning in musculoskeletal radiology. Skeletal Radiol
2020;49:183-97.
6. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial
intelligence in medicine. J Family Med Prim Care 2019;8:2328-31.
7. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K,
et al. A guide to deep learning in healthcare. Nat Med 2019;25:24-9.
8. Tyson S, Hatem SF. Easily missed fractures of the upper extremity.
Radiol Clin North Am 2015;53:717-36, viii.
9. Kim DH, MacKinnon T. Artificial intelligence in fracture detection:
Transfer learning from deep convolutional neural networks. Clin Radiol
2018;73:439-45.
10. Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A 2018;115:11591-6.
11. Olczak J, Fahlberg N, Maki A, Razavian AS, Jilert A, Stark A, et al.
Articial intelligence for analyzing orthopedic trauma radiographs. Acta
Orthop 2017;88:581-6.
12. Chung SW, Han SS, Lee JW, Oh KS, Kim NR, Yoon JP, et al. Automated
detection and classification of the proximal humerus fracture by using
deep learning algorithm. Acta Orthop 2018;89:468-73.
13. Rayan JC, Reddy N, Kan JH, Zhang W, Annapragada A. Binomial
classication of pediatric elbow fractures using a deep learning
multiview approach emulating radiologist decision making. Radiol Artif
Intell 2019;1:e180015.
14. Yu JS. Easily missed fractures in the lower extremity. Radiol Clin North
Am 2015;53:737-55, viii.
15. Urakawa T, Tanaka Y, Goto S, Matsuzawa H, Watanabe K, Endo N.
Detecting intertrochanteric hip fractures with orthopedist-level accuracy
using a deep convolutional neural network. Skeletal Radiol 2019;48:239-44.
16. Cheng CT, Ho TY, Lee TY, Chang CC, Chou CC, Chen CC, et al.
Application of a deep learning algorithm for detection and visualization
of hip fractures on plain pelvic radiographs. Eur Radiol 2019;29:5469-77.
17. Adams M, Chen W, Holcdorf D, McCusker MW, Howe PD, Gaillard F.
Computer vs. human: Deep learning versus perceptual training for the
detection of neck of femur fractures. J Med Imaging Radiat Oncol
2019;63:27-32.
18. Balaji GN, Subashini TS, Madhavi P, Bhavani CH, Manikandarajan A.
Computer-Aided Detection and Diagnosis of Diaphyseal Femur
Fracture. In: Satapathy SC, Bhateja V, Mohanty JR, Udgata SK, editors.
Smart Intelligent Computing and Applications. Singapore: Springer;
2020. p. 549-59.
19. Ahrberg AB, Leimcke B, Tiemann AH, Josten C, Fakler JK. Missed foot
fractures in polytrauma patients: A retrospective cohort study. Patient
Saf Surg 2014;8:10.
20. Kitamura G, Chung CY, Moore BE 2nd. Ankle fracture detection
utilizing a convolutional neural network ensemble implemented with
a small sample, de novo training, and multiview incorporation. J Digit
Imaging 2019;32:672-7.
21. Pranata YD, Wang KC, Wang JC, Idram I, Lai JY, Liu JW, et al. Deep
learning and SURF for automated classification and detection of
calcaneus fractures in CT images. Comput Methods Programs Biomed
2019;171:27-37.
22. Rahmaniar W, Wang WJ. Real-time automated segmentation and
classication of calcaneal fractures in CT images. Appl Sci 2019;9:3011.
23. Aso-Escario J, Sebastián C, Aso-Vizán A, Martínez-Quiñones JV,
Consolini F, Arregui R. Delay in diagnosis of thoracolumbar fractures.
Orthop Rev (Pavia) 2019;11:7774.
24. Burns JE, Yao J, Summers RM. Vertebral body compression fractures
and bone density: Automated detection and classification on CT images.
Radiology 2017;284:788-97.
25. Tomita N, Cheung YY, Hassanpour S. Deep neural networks for
automatic detection of osteoporotic vertebral fractures on CT scans.
Comput Biol Med. 2018;98:8-15.
26. Muehlematter UJ, Mannil M, Becker AS, Vokinger KN, Finkenstaedt T,
Osterhoff G, et al. Vertebral body insufficiency fractures: Detection
of vertebrae at risk on standard CT images using texture analysis and
machine learning. Eur Radiol 2019;29:2207-17.
27. Hype Cycle Research Methodology. Gartner. Available from: https://
www.gartner.com/en/research/methodologies/gartner. [Last accessed on
2020 Apr 25].