Journal of Clinical Medicine
Review
Digital Pathology: Advantages, Limitations and Emerging Perspectives
Stephan W. Jahn 1,*, Markus Plass 1 and Farid Moinfar 1,2
1 Diagnostic and Research Institute of Pathology, Medical University of Graz, Neue Stiftingtalstraße 6, 8010 Graz, Austria; markus.plass@medunigraz.at (M.P.); farid.moinfar@pathologieverbund.at (F.M.)
2 Department of Pathology, Ordensklinikum/Hospital of the Sisters of Charity, Seilerstätte 4, 4010 Linz, Austria
* Correspondence: stephan.jahn@medunigraz.at
Received: 8 October 2020; Accepted: 13 November 2020; Published: 18 November 2020
Abstract: Digital pathology is on the verge of becoming a mainstream option for routine diagnostics.
Faster whole slide image scanning has paved the way for this development, but implementation on
a large scale is challenging on technical, logistical, and financial levels. Comparative studies have
published reassuring data on safety and feasibility, but implementation experiences highlight the
need for training and the knowledge of pitfalls. Up to half of the pathologists are reluctant to sign out
reports on only digital slides and are concerned about reporting without the tool that has represented
their profession since its beginning. Guidelines by international pathology organizations aim to
safeguard histology in the digital realm, from image acquisition through workstation setup to long-term image archiving, but must be considered a starting point only. Cost-efficiency analyses and occupational health issues need to be addressed comprehensively. Image analysis is blended
into the traditional work-flow, and the approval of artificial intelligence for routine diagnostics
starts to challenge human evaluation as the gold standard. Here we discuss experiences from past
digital pathology implementations, future possibilities through the addition of artificial intelligence,
technical and occupational health challenges, and possible changes to the pathologist’s profession.
Keywords: digital pathology; machine learning; artificial intelligence; whole slide imaging; occupational health; computer vision syndrome; automation
1. Introduction
Histopathology is a diagnostic discipline founded on the visual interpretation of cellular biology
captured in images. The advent of digitized images to pathology has propelled this traditional
field into what is now described as digital pathology (DP). Digital images and video streams can
be shared in real-time, thus bridging physical distance (telepathology) between local hospitals, colleagues (second opinion), teachers and students, and between home and workplace (home office).
They can be superimposed or linked beyond what physical glass slides would permit to facilitate
spatial correlation across slides and stains. Digital images lend themselves to computational
pathology (CPATH), both for basic measuring and counting and for advanced machine learning
(ML) tasks. Most fascinating of all, images can now be evaluated by ML for features beyond the
assessment of traditional histopathology (artificial intelligence (AI)), such as to directly link images
to clinical data (e.g., prognosis, mutations). New possibilities come with new challenges: extensive
investments in IT-infrastructure and services, best practices for safe implementation, regulatory
requirements, AI as an unaccountable “black-box”, extensive screen time (occupational medicine), questions of cost-efficacy, and transformation of the profession by automation. This review
will systematically elaborate on these topics and introduce applications already implemented or
currently under investigation in DP.
J. Clin. Med. 2020, 9, 3697; doi:10.3390/jcm9113697
2. From Telepathology to Whole Slide Imaging (WSI)
Initially, only single screenshots of histological images captured through a microscope’s optics were digitized, mostly in research settings, for detailed evaluation, documentation, and teaching.
“Telepathology”, a term coined in the 1980s, initially relied on a remotely operated, motorized microscope with a live view of the microscopic slides. The setup was used for selected cases in
frozen section evaluation, consultation practice, and niche applications (e.g., transplant pathology).
Technical advances in scanning speed and decreased costs have made whole slide imaging (WSI)
the standard for future, large-scale, high-throughput DP. Scanning of all relevant tissue is a
prerequisite for a digitized histopathology work-flow to fully replace the optical microscope, i.e.,
for primary histopathological diagnosis. WSI requires dedicated equipment and IT-infrastructure
(scanners/servers/bandwidth/work-stations) to minimize system downtime. WSI enables tissue
work-up, cutting, and staining at the local pathology department, followed by immediate scanning of
slides, which can be later accessed by the reporting pathologists. Pathologists can subsequently view
data either on-site or remotely. Remote viewing access requires sufficient data bandwidth (typically
>10 Mbit/s) and low latency for smooth operation. This can add up to considerable IT-capacity
requirements in large pathology departments. Orders for recuts, step-sections, and additional
histochemical or immunohistochemical stains are electronically requested at the central laboratory and
receive subsequent scanning for further online access by the reporting pathologist. Availability of glass
slides upon request is necessary even in centers with long DP experience [1], as in the region of 5–10% of cases are requested for glass slide microscopy. Furthermore, the assessment of slides for polarization effects (e.g., amyloid, weddellite) cannot be performed on digital slide scans and necessitates evaluation from glass slides.
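The IT-capacity figures behind a fully digital work-flow can be made concrete with back-of-envelope arithmetic. The sketch below is illustrative only: the slide dimensions, 3 bytes per pixel, and the 30:1 compression ratio are assumptions for a typical 40x brightfield scan, not vendor specifications.

```python
# Back-of-envelope sizing for a WSI archive -- all figures are
# illustrative assumptions, not vendor specifications.

def wsi_size_gb(width_px=100_000, height_px=60_000, bytes_per_px=3,
                compression_ratio=30):
    """Approximate on-disk size of one whole slide image in gigabytes."""
    raw_bytes = width_px * height_px * bytes_per_px
    return raw_bytes / compression_ratio / 1e9

def yearly_archive_tb(slides_per_day, working_days=250, size_gb=None):
    """Approximate yearly archive growth in terabytes."""
    if size_gb is None:
        size_gb = wsi_size_gb()
    return slides_per_day * working_days * size_gb / 1000

per_slide = wsi_size_gb()           # ~0.6 GB per slide at 30:1 compression
per_year = yearly_archive_tb(1000)  # a department scanning 1000 slides/day
print(f"{per_slide:.2f} GB per slide, {per_year:.0f} TB per year")
```

Under these assumptions a large department accumulates on the order of 150 TB per year, which illustrates why professionally managed storage (see Section 5) is a central cost driver.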
3. Regulatory Requirements for WSI for Patient Diagnostics in Europe and the US
At present, WSI scanners are cleared to be used in the European Union under directive 98/79/EC of the European Commission for in vitro diagnostics (in vitro diagnostic medical device directive (IVDD)) [2]. DP software, as standalone software, such as WSI viewers or automated image analysis for specific tasks (e.g., immunohistochemical quantification), can also receive the CE mark IVD-MD (medical devices). Essentially, conformity is based on a self-declaration of the manufacturer. Scanners and associated software of numerous manufacturers are currently CE-IVD labeled, such as those of Philips, Roche/Ventana, Leica/Aperio, Hamamatsu, and 3DHISTECH. Under the new in vitro diagnostic medical
diagnostic medical
device regulation (IVDR) of the European Parliament, all in-vitro medical devices, including slide
scanners and digital pathology software, are to apply for CE-marking as of May 2022. The IVDR
will require a performance evaluation with a scientific validity report and an analytical and clinical
performance report. Thus, European approvals in DP will more closely resemble the current Food and Drug Administration (FDA) approvals for the US market [3]. Only two WSI platforms have so far received FDA approval for primary surgical pathology (histopathological diagnosis) in the US. The first approval was granted in 2017 to the Philips IntelliSite Digital Pathology Solution. It is a closed system that comprises a scanner, an image management system, and a display. Approval was based on a non-inferiority study [4] of close to 2000 slides of different histopathological entities. The approval does not extend to frozen sections, cytology, or non-formalin-fixed paraffin-embedded (FFPE) hematopathology specimens. As of September 2020, the second platform to have been granted FDA approval for primary diagnosis is the DP Module by the Swedish company Sectra in conjunction with Leica Biosystems’ Scanner AT2 DX, itself FDA approved in May 2020.
4. Concordance of Digital Pathology (DP) with Glass Slides
Multiple studies have compared glass slides to digital (WSI) slides and evaluated their concordance.
In the aforementioned non-inferiority study used for FDA market authorization [4], an equal discordance rate between glass slides (4.6%) and WSI (4.9%) slides was seen when each slide type was reevaluated 4 weeks after initial diagnosis. The major WSI vs. glass discordance rate was not significant at 0.4% (95% confidence interval (CI) (−0.3–1.01)). In a review of discordant cases, no consistent inaccuracies due to WSI were observed. Other studies have found concordance rates to range from above 90% [5] to over 95% [4,6,7]. In the recent 2020 meta-analysis by Azam et al. [7] comprising 25 publications and 10,410 samples, a 98.3% (95% CI (97.4–98.9)) concordance rate was described. The majority of discordances (57%) were related to the assessment of nuclear atypia, grading of dysplasia and malignancy, followed by challenging diagnoses (26%, e.g., focally invasive/malignant lesion) and identification of small objects (16%, e.g., Helicobacter pylori, mitoses). Generally, the most frequently and consistently mentioned problems in routine DP refer to detecting microorganisms (Helicobacter pylori), as well as recognizing mitoses and nuclear features in dysplasia.
Professional organizations have issued guidelines for DP use, including, but not limited to, technical specifications; among them are the College of American Pathologists (CAP) [8], the Digital Pathology Association [9], the European Union [10], the Canadian Association of Pathologists [11], the Royal College of Pathologists (RCPath) [12,13], and the Bundesverband Deutscher Pathologen (Professional Association of German Pathologists) [14].
5. Critical Quality Parameters in WSI for Diagnostic Imaging
If the manufacturer has not performed external validation, in-house validation should be performed on a sufficient number of representative cases, including evaluating auxiliary techniques (immunohistochemistry, special stains). The College of American Pathologists (CAP) guideline statements [8] recommend a minimum of 60 hematoxylin eosin (HE) slides and at least an extra 20 slides for each auxiliary technique. The time between digital and glass slide comparison should be at least two weeks to guarantee unbiased evaluation (“washout period”) by the same person. A summary of relevant points for DP accreditation according to ISO 15189 is highlighted in [15].
Perceived minimum requirements for dedicated DP hardware differ between individual pathologists, institutions, and the use-case, i.e., whether DP is used for primary diagnosis or only as an adjunct to optical microscopes. Office-grade screens and computer hardware operated by a computer mouse are widely available. However, high-definition monitors dedicated to DP or even special digital controllers akin to the classical stage handles of a microscope are less frequently used but increase ergonomics and can be expected to further the acceptance of DP by pathologists.
Accurate color rendition is essential and necessitates regular calibration of WSI scanners against a standardized reference (established: “IT8.7/1 target”, 28 greys/264 colors). Color deviations are corrected through software profiles (ICC, International Color Consortium, profiles) for each scanner. Monitors are also advised to be color-calibrated, taking into account the need for recalibration due to different lighting conditions, which can be conveniently achieved by self-calibrating monitors. Interestingly, a study [16] found equal diagnostic accuracy but faster diagnoses on color-calibrated than uncalibrated images. In this vein, Norgan et al. [17] demonstrated a high agreement (κ > 0.75) between calibrated and uncalibrated monitors for 90% of pathologists tasked with counting mitoses and searching for Helicobacter pylori. Regarding screen size and resolution, 27-inch WQXGA and 32-inch UHD screens have been recommended for diagnostic DP by Haroske et al. [14]. Higher screen resolution was found to decrease time to diagnosis [18], but conventional optical microscopy was still the fastest, in line with another study [19] reporting increased time to diagnosis through DP. Abel et al. provide a recent overview of the selection of displays for DP [20].
Usually, WSI captures pre-scans of the whole slide at low resolution, and only tissue detected on the pre-scan is then scanned at high magnification. Therefore, tiny tissue particles might elude automatic inclusion for final scanning and would be lost to evaluation. The Professional Association of German Pathologists [14] recommends adjusting settings to include all tissue clusters beyond 3 × 3 cells. It encourages the implementation of an option for the pathologist to compare preview images to final high-resolution scans to avoid missing small particles. The problem potentially extends to slide areas not coverslipped but holding material of possible diagnostic relevance. In their routine implementation, Stathonikos et al. [21] used human control by a technician immediately after scanning to verify complete WSI and to initiate rescans for slides with out-of-focus areas (see below) before delivery to the pathologist.
In conventional microscopy, pathologists regularly evaluate multiple focus planes. WSI scanners
per default acquire only a single focal plane. Some scanners can technically acquire multiple planes,
referred to as “z-stacking”, but acquisition times and data increase proportionally (e.g., fivefold for five
focal planes), and z-stacking has mostly been restricted to research. Hence, for routine diagnostics,
tissue might be out of focus if not in the one scanned plane (e.g., at folds), possibly necessitating rescans.
Evaluation of only a single focus plane can contribute to difficulties in identifying microorganisms (Helicobacter pylori), mitoses, and dysplasia. These are known challenges in DP, and settings in which pathologists commonly use multiple focus levels on optical microscopes. Furthermore, blurred lines on scans
corresponding to borders of digitally merged single images, so-called “stitching artifacts”, can appear
on WSI images and obscure relevant details. Out of focus areas can be addressed by manually or
automatically defining additional focus points. If WSI remains unsatisfactory, the case must be deferred
to glass-slide evaluation.
Storage of diagnostically used WSI data on consumer hard drives is inadequate; storage on professionally maintained and secured servers is mandated. Developed for radiology,
these systems are referred to as PACS (Picture Archiving and Communication System). Stored data
should produce images identical to the time of reporting to guarantee that subsequent, possible
medico-legal evaluations are not biased by reviewing image data with divergent image quality [14].
DP data can be stored in open or proprietary formats. The latter can lead to incompatibility between
scanners or to data archives that have to be converted when scanners with new file formats are procured.
In addition, extramural expert consultation requires the compatibility of file formats. Therefore, the use
of the open DICOM (Digital Imaging and Communications in Medicine) standard is encouraged.
In conclusion, state-of-the-art technical implementation of WSI in routine diagnostics is complex and requires expertise, human resources, and a continuous focus on technical advances. Lastly, pathologists may need external professionals to define specifications for major hardware procurements and to meaningfully compare vendors’ offers in depth [21].
6. The Integrated DP Work-Flow
Once scanned, cases are assigned to a reporting pathologist. DP facilitates dynamic workload
balancing or reallocation of cases in the event of, e.g., sick leave. Regarding digital slide evaluation,
studies on image perception have shown that pathologists first evaluate the digital image by an
initial glance in order to assess the overall image properties, such as spatial distribution, texture,
size, and color, and then quickly focus on regions of interest (e.g., a suspected carcinoma focus).
Randell et al. [18] reported that higher-resolution monitors are advantageous for faster identification of these areas on low-power digital images, which are subsequently evaluated at high power. Tools using AI can highlight such regions of interest [22] for IT-assisted screening. Applications for magnification-sensitive tracking of digital slide movements can help to prevent missing small particles [23]. Tools for
measuring and quantification, launched directly from the viewer, allow for instant documentation of
results. Multiple slides can be displayed side-by-side and interlinked to be moved synchronously or superimposed, facilitating spatial correlation of different stains. Further auxiliary, mostly AI-powered applications have either already been integrated into commercially available platforms or have been published to be suitable for such integration. Use cases include immunohistochemical Ki-67 evaluation to determine a tumor’s proliferation rate [23], biomarkers that are challenging to quantify (e.g., immunohistochemical PD-L1 staining [24,25]), evaluation of residual cancer burden after chemotherapy [26], or cancer detection and classification algorithms (e.g., in prostate cancer [27–29]).
In a further step towards automation, reports may then be composed using AI-based speech-recognition.
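The Ki-67 use case above reduces, at its core, to counting the fraction of tumor nuclei with positive staining. The sketch below is a deliberately simplified illustration, assuming nuclei have already been segmented and scored for mean DAB optical density; the positivity cut-off is a hypothetical value, not a clinically validated one.

```python
import numpy as np

def ki67_index(nuclear_dab_od, positive_threshold=0.2):
    """Fraction of tumor nuclei called Ki-67 positive.

    `nuclear_dab_od` holds one mean DAB optical density per segmented
    nucleus; the cut-off is an illustrative assumption, not a clinical one.
    """
    od = np.asarray(nuclear_dab_od, dtype=float)
    if od.size == 0:
        raise ValueError("no nuclei provided")
    return float((od >= positive_threshold).mean())

# 3 of 8 nuclei above threshold -> proliferation rate 37.5%
example = [0.05, 0.31, 0.08, 0.44, 0.12, 0.02, 0.27, 0.10]
print(f"Ki-67 index: {ki67_index(example):.1%}")
```

The value of automating even this simple ratio lies in reproducibility: the algorithm counts every segmented nucleus, whereas visual estimation is known to vary between observers.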
An early example of a routine AI application embedded in the work-flow after report generation but before actual sign-out is the second-read Galen™ system (Ibex Medical Analytics, Tel Aviv, Israel) on the Philips IntelliSite platform [28] (for details see the chapter on CPATH). The software can be configured to analyze the ready histopathology report as well as images of prostate core needle biopsies in order to automatically alert the pathologist to human vs. AI discrepancies before final sign-out. AI for cancer prescreening has also been proposed to triage cases for reporting and to optimize the workload distribution. In the future, automated prescreening may even replace human evaluation [30] in cases negative for cancer.
Human feedback useful for AI development can also be integrated into the DP work-flow. AI applications have been shown to benefit from human input for increased classification performance and reduced numbers of digital slides required for training, a concept termed “human-in-the-loop” [31]. Lutnick et al. [32] have recently demonstrated the use of a routine WSI viewer (Aperio ImageScope) for a human-in-the-loop interaction with a convolutional neural network (DeepLab v2). A small number of WSIs were initially annotated by a human and subsequently classified by the algorithm. The algorithm was trained through repeated rounds of correction by human feedback and ever more accurate AI classifications. This resulted in a considerable efficiency improvement in training the algorithm, so that only a small number of initially annotated WSIs (<5 annotated images) were needed. The algorithm was then able to identify intact and atrophic renal glomeruli and areas of tubular atrophy in renal biopsies. Thus, human interaction could be incorporated into the training of the AI algorithm through graphic annotations on a routine DP interface. Figure 1 depicts an outline for an integrated DP work-flow.
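The human-in-the-loop cycle described above can be sketched schematically. This is not Lutnick et al.'s DeepLab pipeline: it substitutes a toy nearest-centroid classifier over 2-D "tile features" and a mock oracle function standing in for the annotating pathologist, to show the loop structure only (train on few labels, let the human correct the least confident predictions, retrain).

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    """Stand-in for the annotating pathologist (mock ground truth)."""
    return int(x[0] + x[1] > 1.0)

def fit_centroids(X, y):
    """Per-class mean feature vectors (toy stand-in for CNN training)."""
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(centroids, X):
    """Nearest-centroid labels plus a crude confidence margin."""
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids])
    return d.argmin(axis=0), np.abs(d[0] - d[1])

pool = rng.random((400, 2))  # unlabeled "tile features"
sums = pool.sum(axis=1)
# Seed with two human-annotated tiles, one clear example of each class
labels = {int(sums.argmin()): 0, int(sums.argmax()): 1}

for round_no in range(5):
    idx = np.fromiter(labels, dtype=int)
    centroids = fit_centroids(pool[idx], np.array([labels[i] for i in idx]))
    preds, margin = predict(centroids, pool)
    # Human-in-the-loop step: the "pathologist" corrects the 10 least
    # confident unlabeled tiles, which join the training set next round.
    uncertain = [i for i in np.argsort(margin) if int(i) not in labels][:10]
    for i in uncertain:
        labels[int(i)] = oracle(pool[i])

truth = np.array([oracle(x) for x in pool])
accuracy = (predict(centroids, pool)[0] == truth).mean()
print(f"{len(labels)} annotated tiles, pool accuracy {accuracy:.1%}")
```

The point of the loop is data efficiency: by spending annotation effort only on the tiles the model is least sure about, far fewer labels are needed than with exhaustive annotation.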
Figure 1. Outline for an integrated digital pathology (DP) work-flow.
7. Experience from DP Implementations in Routine Diagnostics Using WSI
Comparatively few reports on digital pathology experiences of fully replacing optical microscopes for routine diagnostic purposes are available. The Eastern Quebec Telepathology Network (EQTN) is one of the most extensive real-life applications of DP, aimed at providing a DP histopathology service to the widely dispersed population in Canada [33–35]. Taking up service in 2011, the network includes 22 hospitals with a catchment area of 1.7 million people. An EQTN evaluation project has published the implementation results [34,36]. The network provided remote intra-operative consultation and histopathological second opinion. It was initiated due to a lack of pathologists in remote rural areas, which endangered state-of-the-art surgical service provision. A 2018 evaluation [33] confirmed the reduction of two-stage surgeries and patient transfers to urban centers.
Service-breaks and diagnostic delays decreased upon the introduction of DP. Telepathology did not help
recruit and retain pathologists in remote areas as pathologists wished to work in a team. In locations
without on-site pathologists, duties, remuneration, and legal responsibilities were transferred from the
pathologists to technical assistants, altering the department’s internal structure. Generally, the EQTN
experiences seem valuable for their long duration and for raising issues beyond the mere technical aspects commonly described in more limited, academic studies.
Experiences from a long-running DP implementation in Sweden since 2006, using WSI at two sites (Kalmar County Hospital and Linköping University Hospital), were reported by Thorstenson et al. [37] in 2013, based on 500,000 slides at the time of publication, with now around 1.8 million slides scanned to date [38]. In a parallel implementation of DP and glass slides, the pathologists were free to choose between digital and glass slides. Overall, 38% of cases were diagnosed digitally, but pathologists reported frequent switching between digital and glass slides, including within the same case, highlighting perceived shortcomings that at the time kept pathologists from confidently signing out reports with DP.
In 2017, Evans et al. [39] published their five-year DP implementation experience on 6700 cases/35,500 slides and reported a >90% digital-only sign-out rate. Conversely, almost 10% of cases required additional glass slide evaluation. Hanna et al. [1] in 2019 also reported on the importance of glass slide availability on request in DP employed for primary diagnoses. The authors published their DP use at the Memorial Sloan Kettering Cancer Center, US, including a DP experience survey for pathologists’ feedback; 48% of respondents would not have felt comfortable with DP sign-out without glass slides available, and 15% felt uncomfortable even with glass slides available. Nevertheless, 91% thought DP reduced turnaround times, helped to decide on repeat ancillary studies (96%), and was useful for prior case review (83%).
A UK teaching hospital has been using DP for the primary diagnosis of all histology slides since September 2018 and published their experience in 2020 [40]. The authors laid out technical validation and suggestions for training DP newcomers. They comprehensively listed possible areas of difficulty and pitfalls to avoid in DP evaluation according to individual fields of histopathology (e.g., gastrointestinal, breast, hepatobiliary, etc.). Their report adds eosinophils, neutrophils, mast cells, amyloid, weddellite, and mucin to the list of difficult-to-evaluate features. Lastly, the authors described difficulties for pathologists in applying the commonly used systematic meandering of glass slides for complete slide evaluation to digital slides. They advise actively training DP newcomers on substitute WSI-optimized techniques to guarantee complete visual evaluation of the digital slide.
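A digital substitute for the glass-slide meander can be made explicit as a traversal order over a tile grid. The sketch below is an illustrative helper (grid size and naming are assumptions), generating the serpentine sweep that guarantees every field of view is visited exactly once.

```python
def meander_order(cols, rows):
    """Serpentine (boustrophedon) visiting order over a tile grid.

    Mirrors the systematic left-right/right-left meander used on glass
    slides so every field of view is visited exactly once.
    """
    order = []
    for row in range(rows):
        xs = range(cols) if row % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((col, row) for col in xs)
    return order

# A 3 x 2 grid is swept left-to-right, then right-to-left on the next row
print(meander_order(3, 2))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

A viewer could step through such an order at a fixed magnification, giving DP newcomers the same completeness guarantee they are used to from manual stage movement.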
Furthermore, a UK pathology laboratory focus group published their views in 2019 [41] on a set of open questions on the imminent transition to DP at their department (Oxford University Hospitals, NHS Trust). They identified benefits (e.g., collaboration, teaching, cost savings) but also barriers to transition (e.g., standardization and validation, training, cost-effectiveness, data storage) and proposed the setting up of multiple pilot centers in the UK. These early adopters should then create national guidelines and provide cost-effectiveness analyses, possibly with the help of a project manager to define appropriate variables and oversee baseline measurements of the cost analyses.
A recent publication [23] on the experiences of the successful DP implementation at the Dutch University Medical Center in Utrecht focuses, among others, on the technical challenges and solutions since the initiation of DP in 2007. The authors describe their experiences and the pitfalls regarding laboratory management system integration and work-flow optimization over two successive scanner generations. The first generation of scanners was in use for seven years, slightly longer than the five-year life span generally assumed for heavily used routine DP scanners [42,43]. Furthermore, data storage considerations for their large (2 petabyte) digital archive and the benefits of the recent integration of image analysis tools are addressed. Apart from DP implementation experiences published in the academic literature, a wealth of information is also available on commercial websites, especially those of manufacturers of DP scanners.
Undeniably, the current COVID-19 pandemic and the associated social distancing restrictions have been an unexpected boost to DP. A UK guidance study published by the RCPath [13] provides recommendations on the use of DP in the context of the COVID-19 pandemic. A recent study [44] has looked at “emergency” use of DP in the US during the COVID-19 pandemic on home-office work-stations. The study was possible due to the temporarily waived regulatory requirements on DP workstations during the pandemic. The study reported nearly complete glass-to-digital concordance from a home-office regarding diagnosis and parameters of clinical relevance. Participants reported working on non-DP-dedicated hardware as feasible, even though 42% of study participants worked on a below 14-inch screen. Available home internet connection speeds were deemed sufficient (range 3–385 Mbit/s), with 13% using a <20 Mbit/s connection. WSI latency was unrelated to computer hardware performance but dependent on internet connection speeds. Ninety percent of pathologists reported being “comfortable” or “very comfortable” (top two items on a five-point Likert scale) with DP and the option of glass slides available on request.
8. Medical Education and the Consultation Setting—Advantages and Challenges
Medical education might be the most widely used application of DP to date, and its use is
relatively straightforward. Dedicated digital microscopes for presentation can visualize the slide
reviewing process by live video streaming to give insight into the histopathological evaluation strategy
for training purposes. Digital images can be included in presentations, annotated, used for exams,
and made available for self- and remote-study [45]. DP in teaching is for illustrative purposes only and free from medico-legal calamities. Therefore, most issues that beset the use of DP for primary histopathological evaluation, such as exact color rendition, resolution, completeness of scans, out-of-focus areas, long-term data storage, etc., are of secondary importance. Consequently, medical teaching is often the area of first contact between pathologists and DP.
Consultation pathology is regularly invoked as an ideal application for DP. Nevertheless, H&E slides
together with FFPE blocks are often sent for second opinion, as cases frequently need to be recut for
additional immunohistochemical stains or molecular analyses. Moreover, consulted experts might
prefer slides to be stained in their laboratory, since familiar, standardized work-up can be of value
in challenging cases. Image quality and proper calibration of digital slides, as well as technical
compatibility between DP systems, are further issues. Given these limitations, some experts in
consultation pathology are currently reluctant to base binding decisions in challenging cases on digital
images only, not least for medico-legal reasons. From a technical perspective, digital consultation
pathology necessitates a vendor-independent platform for reviewing slides of different provenance. A Dutch
initiative has recently reported the setting up of a platform for the exchange of WSI for teleconsultation
and virtual expert panels [46].
9. Computational Pathology (CPATH)
WSI is an ideal launchpad for computational pathology (CPATH). CPATH is a blanket term
for an array of methods that aim to add diagnostic value by IT-assisted analysis of the image data.
It spans a wide range of applications that comprise in order of ascending complexity image-enhancing
features, measuring and quantification, graphical highlighting/pre-selection by heat mapping, and lastly,
fully automated assessments. A recent white paper [9] from the Digital Pathology Association (DPA)
provides an overview of definitions, best practices, and regulatory guidance recommendations.
The vast majority of current applications refer to image analysis to improve the accuracy and
reproducibility of morphological variables conventionally assessed by the pathologist through visual
estimation or manual counting. Such tasks include measuring distance to resection margins, size of
lymph node metastases in breast cancer, or depth of invasion used for staging and prognostication
(e.g., melanoma).
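As an illustration of how such visual-estimation tasks become explicit computations, here is a minimal sketch of a positivity-index calculation of the kind used for proliferation markers; the per-nucleus staining intensities and the threshold are synthetic assumptions, standing in for the output of a real nucleus detector:

```python
import numpy as np

# Minimal sketch: turning manual estimation into an explicit computation.
# Assumes an upstream nucleus detector already produced one mean staining
# intensity per tumour nucleus (values here are synthetic).
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(0.15, 0.05, 800),   # unstained (negative) nuclei
    rng.normal(0.60, 0.10, 200),   # stained (positive) nuclei
])

THRESHOLD = 0.35                   # assumed cut-off separating positive nuclei
positive = intensities > THRESHOLD
index_percent = 100 * positive.mean()
print(f"positivity index: {index_percent:.1f}% of {intensities.size} nuclei")
```

The same counting logic, applied to thousands of nuclei per slide, is what gives image analysis its reproducibility advantage over eyeballed percentages.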
Examples of evaluation by traditional image analysis and now ML in DP include quantification
of mitotic index [47], proliferative index by Ki-67 immunohistochemistry [48], nuclear estrogen- and
progesterone receptor positivity in breast cancer, as well as immunohistochemical HER2 staining [49].
J. Clin. Med. 2020, 9, 3697
These applications have variably been included in DP viewers for easy operation. An application
to estimate residual tumor cell content after chemotherapy has also been developed [26]. A more complex
application for routine diagnostics was made available as CE-IVD-approved software for the
Philips IntelliSite Pathology Solution in 2018. The software quantifies intratumoral and invasive-margin
T-cell infiltration in colon cancer (Immunoscore) [50] in order to predict the risk of recurrence at
five years in stage II and III colon cancer [25]. For a recent overview of deep-learning AI applications
and their performance characteristics, see [51]. As long as CPATH is used to evaluate parameters also
amenable to human evaluation, the pathologist, regardless of the underlying technique employed,
can check results for plausibility, and final accountability rests with the pathologist.
AI clearly creates most of the excitement around DP. The term is commonly known to the broader
public but somewhat vaguely defined at the technical level: it denotes applications that use ML
as a technical basis to execute tasks generally assumed to be reserved for human, intelligent
behavior due to their complexity. “Machine learning” and “AI” are commonly used synonymously,
although ML is a subgroup of AI applications. ML can broadly be divided into supervised and
unsupervised ML. Both forms need large datasets to train the algorithm. With supervised ML,
human-made classifications (e.g., features/histotypes in pathology) are used to train the algorithm
to classify new cases. Unsupervised ML does not need user-specified criteria; instead, the algorithm
establishes groups of cases with similar image properties on its own. These patterns can coincide with
established morphological classifiers but may also be unbeknown to the pathologist. For safe clinical
use, ML training sets ideally have to be large enough to cover the whole spectrum of
diagnostic possibilities. This necessitates extensive datasets, especially in unsupervised ML, where the
nature of the relevant features that need to be covered is not known.
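The supervised/unsupervised distinction can be made concrete in a few lines of toy code. The 2-D "feature vectors", the nearest-centroid classifier, and the hand-rolled k-means loop below are all illustrative stand-ins for real WSI features and production algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D "image feature" vectors for two morphological groups.
group_a = rng.normal([0, 0], 0.5, size=(50, 2))
group_b = rng.normal([4, 4], 0.5, size=(50, 2))
features = np.vstack([group_a, group_b])
labels = np.array([0] * 50 + [1] * 50)          # pathologist-made classifications

# Supervised: learn class centroids from labelled data, classify a new case.
centroids = np.array([features[labels == k].mean(axis=0) for k in (0, 1)])
new_case = np.array([3.8, 4.2])
pred = np.argmin(np.linalg.norm(centroids - new_case, axis=1))
print("supervised prediction:", pred)

# Unsupervised: k-means recovers the same two groups without any labels.
centers = features[[0, -1]].copy()               # one seed point per region, for brevity
for _ in range(10):                              # Lloyd iterations
    assign = np.argmin(np.linalg.norm(features[:, None] - centers, axis=2), axis=1)
    centers = np.array([features[assign == k].mean(axis=0) for k in (0, 1)])
print("unsupervised cluster sizes:", np.bincount(assign))
```

In the supervised branch the human labels define the classes; in the unsupervised branch the groups emerge purely from feature similarity, which is why such clusters may or may not correspond to categories a pathologist would recognize.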
The performance characteristics of ML/AI algorithms have to be rigorously evaluated for their
diagnostic reliability. How the trained algorithm arrives at a conclusion is generally unknown.
This phenomenon is commonly referred to as AI being a “black box”. Efforts are made to render
AI algorithms more transparent and allow for human scrutiny to detect undesirable “behavior” in
AI. Increased transparency [52] through “explainable” AI (XAI) aims to avoid hidden biases in AI
algorithms and tries to convert AI from a “black box” into what has been coined a “glass box” [53,54].
Concepts include visualization of the pixels that impacted decision-making and the presentation of the
training image that most influenced the decision [52]. Conversely, AI has sparked renewed interest
in the pathologists’ evaluation strategies to improve ML [55]. The human-in-the-loop approach to
improve training performance in ML has already been mentioned in conjunction with the description
of the integrated DP work-flow.
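One way to realize the pixel-visualization concept mentioned above without access to a model's internals is occlusion-based saliency: mask one image patch at a time and record how much the classifier's score drops. The `model_score` function below is a hypothetical stand-in for a real classifier:

```python
import numpy as np

# Occlusion-based saliency: a model-agnostic way to visualise which image
# regions drive a classifier's score. `model_score` is a stand-in for a real
# classifier; here it simply responds to mean intensity in the image centre.
def model_score(img: np.ndarray) -> float:
    return float(img[24:40, 24:40].mean())

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                      # the "lesion" the model reacts to

patch = 16
baseline = model_score(img)
heatmap = np.zeros((64 // patch, 64 // patch))
for i in range(heatmap.shape[0]):
    for j in range(heatmap.shape[1]):
        occluded = img.copy()
        occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0  # blank one patch
        heatmap[i, j] = baseline - model_score(occluded)           # score drop

print(np.round(heatmap, 2))   # large drops mark the decision-relevant region
```

Overlaying such a heat map on the WSI gives the pathologist a direct visual check on whether the algorithm attended to diagnostically plausible tissue.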
More than 60 applications in ML/AI have already been approved by the FDA in the US [56] for
radiology, cardiology, and internal medicine/general practice so far, but none for pathology. AI has
recently been employed with great success for prostate cancer detection and classification [27,29,30].
The Galen™ Prostate software (Ibex Medical Analytics, Tel Aviv, Israel) claims to be the first AI-based
pathology system embedded into routine pathology practice [28]. It has been CE-IVD marked in
Europe and commercially launched this year. The algorithm detects prostate cancer in whole slide
images of core needle biopsies and is designed as a second-read system to avoid missed diagnoses.
The prostate cancer is assigned to either a low-grade group comprising Gleason score 6 and atypical
small acinar glands or a high-grade group corresponding to Gleason scores 7–10. The presence and
extent of Gleason pattern 5 are reported, as are the finding of perineural invasion and the percentage of
cancer per biopsy core. In an external validation set, the specificity for cancer detection was 97.3%
(95% CI 94.4–98.7) at a sensitivity of 98.5% (95% CI 94.1–99.6). The software was implemented for
routine practice at a large Israeli pathology department and performed a second-read evaluation on
more than 900 cases before final sign-out. The algorithm led to 560 cancer alerts, of which 9% were
followed up by additional levels or immunohistochemical stains. This additional work-up led to
one case being detected by the algorithm but missed by the pathologist.
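Performance figures of this kind are derived from the validation confusion matrix. The sketch below shows the arithmetic with invented counts (not the study's raw data), using a Wilson score interval as one common way to obtain a 95% CI for a proportion; the study's exact CI method is not stated here:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative confusion-matrix counts (NOT the study's raw data):
tp, fn = 130, 2      # cancer cores: detected / missed
tn, fp = 360, 10     # benign cores: correctly cleared / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity {sensitivity:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {specificity:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```

For a second-read safety net, the asymmetry matters: a high sensitivity keeps missed cancers rare, while the specificity determines how many benign cores generate alerts requiring extra work-up.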
CPATH has also successfully been applied for research purposes to link digital images to clinical
data directly. ML/AI has demonstrated the ability to infer the presence of mutations in tumor tissue
from a tumor’s WSI data, thus extending histomorphological evaluation beyond what is amenable to
human interpretation. ML using so-called “deep learning” algorithms predicted the 10 most commonly
mutated genes in non-small cell lung cancer [57], mutations in melanoma [58], and prostate cancer [59].
Furthermore, WSI images can also be linked to prognostic clinical data by ML/AI, as demonstrated for
the successful prediction of patient outcome in colon cancer from image data [60,61].
An AI-based application meant to aid in the correct classification of neoplasias is Paige.AI. Still in
development, it was granted FDA breakthrough designation in early 2019 [62]. The software
uses AI to match unknown tumor morphologies to images of known diagnoses in its database.
Under a license agreement with the Memorial Sloan Kettering Cancer Center, US, around 1 million
slides are available to the project, with another 4 million slides intended for scanning.
For all their impressive results, ML/AI can be negatively affected by a range of common factors,
including blur, tissue folds, tissue tears, and color variations [63]. Algorithms for color normalization
and color augmentation have been developed. Detection of novel objects can be problematic for AI
algorithms, which may forcibly classify foreign bodies or rare cases not included in the training datasets
into predefined groups. Attempts at solving this problem of “outlier detection” in histopathology are
still at an early stage [64].
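The color-normalization step mentioned above can be sketched as Reinhard-style statistics matching: shift each channel of a source image so its mean and standard deviation match a reference slide. The original method operates in Lab color space; plain RGB-like arrays are used here only to keep the sketch self-contained:

```python
import numpy as np

# Stain colour normalisation, sketched as Reinhard-style statistics matching.
# (The original method works in Lab colour space; per-channel matching on
# plain arrays is used here to keep the sketch self-contained.)
def normalize_stain(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[2]):
        s, r = source[..., c], reference[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(2)
reference = rng.normal(0.6, 0.10, (32, 32, 3))   # "well-stained" reference slide
source = rng.normal(0.4, 0.20, (32, 32, 3))      # faded slide from another lab

matched = normalize_stain(source, reference)
print(matched[..., 0].mean(), matched[..., 0].std())  # close to 0.6 / 0.10
```

Such normalization is applied before inference so that an algorithm trained on one laboratory's staining characteristics does not misinterpret slides from another.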
Numerous so-called “challenges” have demonstrated the advances of ML/AI in the field. Examples
include the Grand Challenge on BreAst Cancer Histology (BACH) [65] and the Cancer Metastases in
Lymph Nodes Challenge [66]. These competitions are hosted in the form of image datasets put out to
the public. Participants are free to download the data and apply their algorithms. A winning algorithm
is determined, ranked according to, e.g., accuracy and computing time required. Challenges have been
announced for DP in, among others, breast, prostate, lung, and neuropathology. It was at an early
mitosis-detection competition in 2012 [67] that an ML-based deep learning algorithm took the lead and
highlighted the superiority of ML over non-ML approaches for advanced CPATH.
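The ranking step of such a challenge can be illustrated in miniature — accuracy first, computing time as the tie-breaker; the submissions below are invented:

```python
# A challenge leaderboard in miniature: submissions ranked by accuracy first,
# with computing time as the tie-breaker (entries are invented for illustration).
submissions = [
    {"team": "deep-cnn",      "accuracy": 0.94, "minutes": 310},
    {"team": "random-forest", "accuracy": 0.88, "minutes": 25},
    {"team": "deep-cnn-v2",   "accuracy": 0.94, "minutes": 190},
]

leaderboard = sorted(submissions, key=lambda s: (-s["accuracy"], s["minutes"]))
for rank, s in enumerate(leaderboard, 1):
    print(rank, s["team"], s["accuracy"], f'{s["minutes"]} min')
```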
10. Cost-Effectiveness Considerations
Efficiency gains are frequently invoked to advocate DP implementation. If DP use is limited
to tasks such as biomarker evaluation/quantification or second reads, total costs will increase, as DP costs
are incurred in addition to glass slide evaluation. Cost-effectiveness can thus only eventually be
achieved in a completely digital pathology work-flow (primary digital diagnosis), which in
turn still necessitates the availability of glass slides on request. To our knowledge, no real-life
cost-efficiency analysis for full DP implementation with sufficient (>5 years) follow-up has been
reported. In 2019, a publication described the DP implementation [1] at the Memorial Sloan Kettering
Cancer Center’s pathology department in the US. This academic center produced >125,000 slides a
month as of 2017. The transition from research to routine WSI was initiated in 2015 and implemented
on a subset of cases. The authors reported gradually increasing WSI up to around 23,000 digital
slides per month in mid-2017. DP implementation led to a 93% reduction of glass slide requests.
Ancillary immunohistochemistry decreased from 52% before DP implementation in 2014 to 21%
in 2017. Yearly immunohistochemistry savings of 114,000 dollars were anticipated, and overall cost
savings through DP were estimated at 267,000 dollars per year. Accounting for clinical WSI setup
and maintenance costs, the break-even point was estimated for 2021, seven years after initial clinical DP
implementation. Apart from immunohistochemistry, cost savings were due to reduced labor costs and
vendor services, including cost savings from decreased glass slide transport.
Another estimate of the cost efficiency of DP for primary diagnosis was published in 2017 [43],
taking the authors’ own UK teaching hospital (80,000 specimens, 45 full-time equivalent consultants,
9 million pounds annual departmental budget) as a model. Of note, the authors assumed that all
efficiency gains could be financially recouped, which realistically will seldom be the case (e.g., due to
reimbursement schemes and limited flexibility of work contracts). According to their estimates,
DP costs would break even after two years at an overall 10% productivity improvement and after one
year at 15%. At only a 5% productivity improvement, DP would amount to a permanent financial loss
compared to glass slide pathology. The cost-efficiency analysis by Ho et al. [68] additionally included
more broadly defined cost savings through DP, from the presumed avoidance of over- and undertreatment
costs as well as from laboratory consolidation/efficiency gains achievable in the distributed
health network of the Pittsburgh Medical Center.
In summary, cost-efficiency analyses to date are few, often based on estimates, and derived from
academic single-center experiences that cannot easily be generalized. Notably, even if costs to
the healthcare system are diminished, these savings generally do not translate into costs saved for the
pathology department.
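The break-even arithmetic behind such estimates can be sketched with a toy model. All monetary figures below except the 9-million-pound budget are invented, chosen so the output mirrors the qualitative pattern reported for the UK estimate (permanent loss at 5%, break-even after two years at 10% and after one year at 15%):

```python
# Toy break-even model (setup and running costs are invented for illustration;
# real analyses depend on local budgets, contracts, and reimbursement).
def breakeven_year(setup_cost, running_cost_per_year, budget, productivity_gain,
                   horizon=10):
    cumulative = -setup_cost
    for year in range(1, horizon + 1):
        cumulative += budget * productivity_gain - running_cost_per_year
        if cumulative >= 0:
            return year
    return None  # never breaks even within the horizon

budget = 9_000_000            # annual departmental budget (pounds), from the text
for gain in (0.05, 0.10, 0.15):
    year = breakeven_year(setup_cost=800_000, running_cost_per_year=450_000,
                          budget=budget, productivity_gain=gain)
    print(f"{gain:.0%} productivity gain -> break-even year: {year}")
```

The model makes the central caveat visible: if recurring DP costs eat the entire productivity gain, the setup cost is never recovered, no matter how long the horizon.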
11. Digital Pathology and Occupational Health—Computer Vision Syndrome (CVS)
Occupational health aspects are rarely mentioned in DP implementation reports, and if so,
they take a low priority. We are not aware of a systematic evaluation of occupational health issues in
DP to date. Conversely, health complaints do not feature prominently in unstructured DP feedback,
but comparatively unspecific requests for optimized display configurations and input devices are
frequent. Whether this pertains to requests for a more time-efficient hardware setup or includes
unspecified health concerns remains unclear. CVS, or digital eye strain, is defined as the combination
of eye and vision problems associated with the use of electronic screens. Between 64% and 90% of
computer users experience visual symptoms such as dry eyes, headaches, eyestrain, burning eyes,
diplopia, and blurred vision when using (computer) screens [69]. Rossignol et al. [70] have reported
increased CVS incidence in individuals looking at screens over four hours a day. CVS can lead to
increased work breaks, and its economic impact is significant [71]. Similar to optical microscopy,
the problem also extends to musculoskeletal injuries, and these have been included as extraocular
manifestations of CVS by some [72]. Musculoskeletal injury from computer use has been estimated to
account for at least half of work-related injuries in the US [73].
Contrary to optical microscopy, which projects its image at infinity, computer screens require
constant near focusing and have been shown to reduce the eye’s blinking frequency. This, in turn, diminishes
the tear film and eventually leads to dry eyes. Concerning digital screens, an uncorrected (especially
astigmatic) refractive error, the angle of view towards the screen, medications, as well as glare, contrast,
and font size are all understood to contribute to CVS.
Experiences from a Dutch DP routine implementation [21] were reported for 23 pathologists, of whom
65% regarded DP to be “ergonomic” or “very ergonomic” (top two items on a four-point Likert
scale), but close to 10% experienced DP as only “slightly ergonomic” (lowest item). In addition,
the study descriptively evaluated musculoskeletal injury through self-reporting, but an evaluation of
ocular discomfort was missing. Head/neck and shoulder complaints were reduced by approximately a
third with DP, whereas lower back and feet complaints were of equal frequency and wrist complaints
slightly elevated, possibly due to extended computer mouse usage in DP. A 10% rate of DP users
discontent with DP ergonomics was also reported in [37], but complaints might be higher in close to
full-time DP reporting, as only a third of pathologists opted to sign out more than half of their cases
digitally. In summary, more work is urgently needed to address CVS and musculoskeletal complaints
in DP before DP is widely introduced for primary histodiagnoses.
12. Digital Pathology and the Pathologist’s Profession
Digitization has reshaped employment and professions and fueled globalization. If DP indeed
becomes transformative, it would be naive not to assume it will reshape the pathologist’s job market.
To what extent AI can replace the human pathologist remains to be seen. A 2016 study by McKinsey [74]
assessed the current automation potential across US professions and found the lowest potential in
professions “managing others”, at 9% of work-time, together with those applying expertise, at 18%,
the latter applicable to pathologists. On the other end of the scale, the authors predicted physical
labor to harbor a 78% automation potential. The study also emphasized that the automation potential
is only one factor influencing human replacement through automation. The other factors were:
costs to automate, relative scarcity and costs of skilled replacement workers, benefits beyond labor
replacement (quality gains), and lastly, regulatory and social-acceptance considerations. Therefore,
technical advances, pathology staff shortages, and quality improvements through DP would have to
be regarded as the main drivers of automation pressure on the profession. That said, professions
applying medical expertise were still rated as low-risk for automation overall.
Will AI make the pathologist’s workday less taxing? In conjunction with AI, DP may assist with
tasks strenuous for human assessment (e.g., quantification). Conversely, AI might more generally be
good at what is comparatively simple for human evaluation, such as detecting metastases in lymph nodes,
and might fail where things become difficult for both humans and AI, i.e., complex and challenging
evaluations where overall judgment is needed. This situation could leave pathologists
with an assortment of difficult tasks while AI performs the simple ones. We expect AI in DP to result
in the pathologist’s future work-day becoming more demanding but also even more interesting.
Even when automation is advanced, the verification of overall plausibility, the quality control,
and the medico-legal responsibility will rest with a human professional, as is already the case in
highly automated, specialized areas of medicine such as laboratory medicine. Inevitably, DP will create
a global market for histopathology services, where language barriers, first of all, delineate competing
labor markets. Salaries for pathologists might come under pressure in high-income markets due to
increased competition with lower-wage countries. Conversely, in areas with sometimes dire pathology
staff shortages, such as parts of Africa and Asia, outsourcing tasks to AI may help to alleviate
those shortages.
13. Important Open Challenges and How They Could Be Addressed
Regarding the problem of non-transparent AI algorithms, explainable AI, a subfield
of AI, is dedicated to exposing complex AI models to humans in a systematic and interpretable
manner and is one of the ways to allow for human scrutiny and, ultimately, the building of trust in this
technique. Consequently, not only classification performance but also explainability will need to be
a goal in AI development. As a near-term strategy, the use of supervised ML has the advantage of
generating AI classifications along established human categories, which in turn increases AI acceptance.
Adequately powered performance studies should help to allay doubts, as consistent results that correlate
with clinical parameters are the ultimate goal.
Occupational health evaluations need to systematically address musculoskeletal as well as ocular
injury. This necessitates studies dedicated to the pathologist’s profession (akin to radiology [75]) and
the routine inclusion of occupational health surveys in DP implementation studies. Appropriate
parameters for evaluation need to be defined (e.g., mean hours of screen time worked per day) in
collaboration with ophthalmologists. Awareness of CVS and possible countermeasures, such as an ergonomic
screen setup, regular breaks, and accurate refractive correction, is necessary. The possible benefits of glasses
prescribed specifically for computer use should be evaluated, especially in full-time DP users.
Cost-effectiveness analyses based on standardized criteria, as suggested in [41], will provide
more robust and comparable data in the future. Results need to be stratified by type of department
(academic/non-academic) and the extent of DP implementation, and ideally compared to other players in
the field working under comparable reimbursement schemes.
14. Conclusions
DP undoubtedly holds great potential for routine histopathology in the near future. DP has
shown exciting results for primary histopathological diagnoses, even more so with AI applications
now becoming embedded into the digital reporting work-flow. Nevertheless, uncertainties as to
the current extent of clinically relevant benefits remain in the face of considerable upfront costs
and hard-to-compare hospital and reimbursement settings. Table 1 gives an overview of the
possible advantages and disadvantages of DP over conventional optical microscopy that we identify.
As much of the current work on DP comes from early adopters, some involved in corresponding
hardware and software development, a certain degree of enthusiastic bias in favor of DP is inevitable.
The current COVID-19 pandemic has clearly given a boost to the field, so more robust real-world data
from larger-scale DP implementations can be expected soon.
Table 1. Advantages and disadvantages in digital pathology.

In-house telepathology
Possible advantages:
- Quick second opinion
- Social distancing (COVID-19 pandemic)
Possible disadvantages:
- Second-opinion overuse (interrupted work-flows)
- Decreased interpersonal (face-to-face) communication

Extramural telepathology
Possible advantages:
- Service for remote/understaffed areas
- Specialization through DP in low-volume labs
- Home-office use
- Healthcare cost reduction through global histopathology market
Possible disadvantages:
- Social isolation in remote telepathology
- Loss of routine on-site expertise through home office
- Wage competition through global histopathology market

Consultation telepathology
Possible advantages:
- Quick access possible
- No physical slide transfer
- Lower threshold for consultation due to shorter turnaround time
Possible disadvantages:
- No tissue block available for additional stains/molecular assays
- Consulted pathologist unaccustomed to work-up (stains/scanner calibration) at the primary center
- Compatibility issues due to diverse proprietary DP formats
- Possible medico-legal implications due to restricted work-up

WSI—general
Possible advantages:
- No physical slide distribution
- No fading of stored slides
- No irretrievable/lost slides
- Shorter sign-out time
- Reduced misidentification of slides due to barcoded slides automatically allocated to the case
- Easy dynamic workload allocation (e.g., management of backlogged work, redistribution in case of sick leave)
Possible disadvantages:
- Time to evaluation-ready slide increased due to additional scan time
- Integration into a laboratory information system (LIS) needed for full efficiency gains; possible costs for LIS update
- Regular calibration required (scanners/displays)
- Small particles omitted by scan; manual checking for rescan
- Artifacts (out-of-focus areas, digital stitching artifacts)
- Increased IT dependence (IT downtime) compared to optical microscopy

WSI—reporting/user experience
Possible advantages:
- Parallel (side-by-side) viewing, digital slide superposition
- Shorter sign-out time
- Quick access to prior slides; less immunohistochemistry
- Facilitates slide presentation at multidisciplinary tumor board
- Easy image sharing in clinical communication
- Computational pathology possible (see below)
- Occupational health: less neck strain, more flexible posture
Possible disadvantages:
- Slower evaluation compared to optical microscopes
- Mostly only a single focus plane in routine DP; difficulties with interpretation
- Some structures harder to recognize on WSI; glass slide needed
- Polarization not possible on DP; glass slide needed
- Extra training for safe practice required (perceived insecurity on digital sign-out) if not DP from career start
- Easy availability of prior digital slides might shift medico-legal onus towards more extensive re-examination; increased workload
- Dual infrastructure generally necessary (glass and digital)
- Occupational health: computer vision syndrome (CVS)

WSI—image analysis, ML/AI
Possible advantages:
- Faster/more efficient and more accurate measurements/quantifications
- Exact quantification of tumor cell content for molecular analyses
- Digital enhancement of image features
- AI as a second-read safety net
- Direct link of morphology to clinical parameters; "novel biomarkers" beyond human recognition
- Inspection/correction of suggestions from AI apps in development on the WSI viewer: "human-in-the-loop" interaction
Possible disadvantages:
- Benefit of more accurate quantification not necessarily clinically relevant
- Applications beyond human evaluation not yet approved/used for clinical management
- AI intransparent ("black box")
- Regulatory oversight challenges with self-modifying (adaptive) AI, as algorithm/performance is not constant over time

WSI—teaching
Possible advantages:
- Digital images for presentation and exams readily available
- Remote teaching and self-study
- Increased student motivation, modern appeal
Possible disadvantages:
- None

Costs and efficiency gains
Possible advantages:
- Work time saved through faster turnaround times
- Decreased auxiliary techniques (less immunohistochemistry)
- Decreased physical slide-transfer costs
Possible disadvantages:
- DP implementation, maintenance, and storage costs add to current fixed costs if productivity gains remain unrealized (fixed work contracts)
- Dual infrastructure costs (workstations and microscopes, if kept)
- Glass and digital storage still generally deemed necessary
- Technical expert knowledge needed for hardware acquisitions

WSI: whole slide imaging, AI: artificial intelligence, ML: machine learning.
Author Contributions: S.W.J.: Conceptualization, writing—original draft preparation. M.P.: Conceptualization, writing—original draft preparation. F.M.: Conceptualization, writing—review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
References
1.
Hanna, M.G.; Reuter, V.E.; Samboy, J.; England, C.; Corsale, L.; Fine, S.W.; Agaram, N.P.; Stamelos, E.; Yagi, Y.;
Hameed, M.; et al. Implementation of digital pathology oers clinical and operational increase in eciency
and cost savings. Arch. Pathol. Lab. Med. 2019,143, 1545–1555. [CrossRef] [PubMed]
2.
Garc
í
a-Rojo, M. International clinical guidelines for the adoption of digital pathology: A review of technical
aspects. Pathobiol. J. Immunopathol. Mol. Cell. Biol. 2016,83, 99–109. [CrossRef] [PubMed]
3.
Abels, E.; Pantanowitz, L. Current state of the regulatory trajectory for whole slide imaging devices in the
USA. J. Pathol. Inform. 2017,8, 23. [CrossRef] [PubMed]
4.
Mukhopadhyay, S.; Feldman, M.D.; Abels, E.; Ashfaq, R.; Beltaifa, S.; Cacciabeve, N.G.; Cathro, H.P.;
Cheng, L.; Cooper, K.; Dickey, G.E.; et al. Whole slide imaging versus microscopy for primary diagnosis in
surgical pathology: A multicenter blinded randomized noninferiority study of 1992 cases (pivotal study).
Am. J. Surg. Pathol. 2018,42, 39–52. [CrossRef] [PubMed]
5.
Goacher, E.; Randell, R.; Williams, B.; Treanor, D. The diagnostic concordance of whole slide imaging and
light microscopy: A systematic review. Arch. Pathol. Lab. Med. 2017,141, 151–161. [CrossRef] [PubMed]
6.
Snead, D.R.; Tsang, Y.W.; Meskiri, A.; Kimani, P.K.; Crossman, R.; Rajpoot, N.M.; Blessing, E.; Chen, K.;
Gopalakrishnan, K.; Matthews, P.; et al. Validation of digital pathology imaging for primary histopathological
diagnosis. Histopathology 2016,68, 1063–1072. [CrossRef]
7.
Azam, A.S.; Miligy, I.M.; Kimani, P.K.; Maqbool, H.; Hewitt, K.; Rajpoot, N.M.; Snead, D.R.J. Diagnostic
concordance and discordance in digital pathology: A systematic review and meta-analysis. J. Clin. Pathol.
2020. [CrossRef]
8.
Pantanowitz, L.; Sinard, J.H.; Henricks, W.H.; Fatheree, L.A.; Carter, A.B.; Contis, L.; Beckwith, B.A.;
Evans, A.J.; Lal, A.; Parwani, A.V. Validating whole slide imaging for diagnostic purposes in pathology:
Guideline from the college of american pathologists pathology and laboratory quality center. Arch. Pathol.
Lab. Med. 2013,137, 1710–1722. [CrossRef]
9.
Abels, E.; Pantanowitz, L.; Aener, F.; Zarella, M.D.; van der Laak, J.; Bui, M.M.; Vemuri, V.N.; Parwani, A.V.;
Gibbs, J.; Agosto-Arroyo, E.; et al. Computational pathology definitions, best practices, and recommendations
for regulatory guidance: A white paper from the digital pathology association. J. Pathol.
2019
,249, 286–294.
[CrossRef]
10.
Garcia-Rojo, M.; De Mena, D.; Muriel-Cueto, P.; Atienza-Cuevas, L.; Dominguez-Gomez, M.; Bueno, G.
New european union regulations related to whole slide image scanners and image analysis software.
J. Pathol. Inform. 2019,10, 2. [CrossRef]
J. Clin. Med. 2020,9, 3697 14 of 17
11.
Bernard, C.; Chandrakanth, S.A.; Cornell, I.S.; Dalton, J.; Evans, A.; Garcia, B.M.; Godin, C.; Godlewski, M.;
Jansen, G.H.; Kabani, A.; et al. Guidelines from the canadian association of pathologists for establishing
a telepathology service for anatomic pathology using whole-slide imaging. J. Pathol. Inform.
2014
,5, 15.
[PubMed]
12.
Royal College of Pathologists. Best Practice Recommendations for Digital Pathology. Available online:
https://www.rcpath.org/uploads/assets/f465d1b3-797b-4297-b7fedc00b4d77e51/Best-practice-recommendations-
for-implementing-digital-pathology.pdf (accessed on 3 October 2020).
13.
Williams, B.J.; Brettle, D.; Aslam, M.; Barrett, P.; Bryson, G.; Cross, S.; Snead, D.; Verrill, C.; Clarke, E.;
Wright, A.; et al. Guidance for remote reporting of digital pathology slides during periods of exceptional
service pressure: An emergency response from the uk royal college of pathologists. J. Pathol. Inform.
2020
,
11, 12. [CrossRef] [PubMed]
14.
Haroske, G.; Zwönitzer, R.; Hufnagl, P. “Digital Pathology in Diagnostics-Reporting on Digital Images”
guideline of the professional association of german pathologists. Der Pathologe
2018
,39, 250–252. [CrossRef]
[PubMed]
15.
Williams, B.J.; Knowles, C.; Treanor, D. Maintaining quality diagnosis with digital pathology: A practical
guide to iso 15189 accreditation. J. Clin. Pathol. 2019,72, 663–668. [CrossRef] [PubMed]
16.
Krupinski, E.A.; Silverstein, L.D.; Hashmi, S.F.; Graham, A.R.; Weinstein, R.S.; Roehrig, H. Observer
performance using virtual pathology slides: Impact of lcd color reproduction accuracy. J. Digit. Imaging
2012
,
25, 738–743. [CrossRef] [PubMed]
17.
Norgan, A.P.; Suman, V.J.; Brown, C.L.; Flotte, T.J.; Mounajjed, T. Comparison of a medical-grade monitor
vs commercial o-the-shelf display for mitotic figure enumeration and small object (Hellicobacter pylori)
detection. Am. J. Clin. Pathol. 2018,149, 181–185. [CrossRef]
18.
Randell, R.; Ambepitiya, T.; Mello-Thoms, C.; Ruddle, R.A.; Brettle, D.; Thomas, R.G.; Treanor, D.
Eect of display resolution on time to diagnosis with virtual pathology slides in a systematic search
task. J. Digit. Imaging 2015,28, 68–76. [CrossRef]
19.
Mills, A.M.; Gradecki, S.E.; Horton, B.J.; Blackwell, R.; Moskaluk, C.A.; Mandell, J.W.; Mills, S.E.; Cathro, H.P.
Diagnostic eciency in digital pathology: A comparison of optical versus digital assessment in 510 surgical
pathology cases. Am. J. Surg. Pathol. 2018,42, 53–59. [CrossRef]
20.
Abel, J.T.; Ouillette, P.; Williams, C.L.; Blau, J.; Cheng, J.; Yao, K.; Lee, W.Y.; Cornish, T.C.; Balis, U.G.J.;
McClintock, D.S. Display characteristics and their impact on digital pathology: A current review of
pathologists’ future “microscope”. J. Pathol. Inform. 2020,11, 23.
21.
Stathonikos, N.; Nguyen, T.Q.; Spoto, C.P.; Verdaasdonk, M.A.M.; van Diest, P.J. Being fully digital:
Perspective of a dutch academic pathology laboratory. Histopathology 2019,75, 621–635. [CrossRef]
22.
Steiner, D.F.; MacDonald, R.; Liu, Y.; Truszkowski, P.; Hipp, J.D.; Gammage, C.; Thng, F.; Peng, L.; Stumpe, M.C.
Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer.
Am. J. Surg. Pathol. 2018,42, 1636–1646. [CrossRef] [PubMed]
23.
Stathonikos, N.; Nguyen, T.Q.; van Diest, P.J. Rocky road to digital diagnostics: Implementation issues and
exhilarating experiences. J. Clin. Pathol. 2020. [CrossRef] [PubMed]
24. HalioDx. Immunoscore® IC in Non-Small Cell Lung Cancer. Available online: https://www.haliodx.com/diagnostic/immunoscorer-ic-in-lung-cancer/ (accessed on 26 October 2020).
25. HalioDx. HalioDx and Philips Team up to Offer Immunoscore® Colon IVD on Philips IntelliSite Pathology Solution. Available online: https://www.haliodx.com/about-us/news/detail/News/haliodx-and-philips-team-up-to-oer-immunoscorer-colon-ivd-on-philips-intellisite-pathology-soluti/ (accessed on 26 October 2020).
26. Akbar, S.; Peikari, M.; Salama, S.; Panah, A.Y.; Nofech-Mozes, S.; Martel, A.L. Automated and manual quantification of tumour cellularity in digital slides for tumour burden assessment. Sci. Rep. 2019, 9, 14099. [CrossRef] [PubMed]
27. Ström, P.; Kartasalo, K.; Olsson, H.; Solorzano, L.; Delahunt, B.; Berney, D.M.; Bostwick, D.G.; Evans, A.J.; Grignon, D.J.; Humphrey, P.A.; et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: A population-based, diagnostic study. Lancet Oncol. 2020, 21, 222–232. [CrossRef]
28. Pantanowitz, L.; Quiroga-Garza, G.M.; Bien, L.; Heled, R.; Laifenfeld, D.; Linhart, C.; Sandbank, J.; Albrecht Shach, A.; Shalev, V.; Vecsler, M.; et al. An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: A blinded clinical validation and deployment study. Lancet Digit. Health 2020, 2, e407–e416. [CrossRef]
J. Clin. Med. 2020, 9, 3697 15 of 17
29. Bulten, W.; Pinckaers, H.; van Boven, H.; Vink, R.; de Bel, T.; van Ginneken, B.; van der Laak, J.; Hulsbergen-van de Kaa, C.; Litjens, G. Automated deep-learning system for Gleason grading of prostate cancer using biopsies: A diagnostic study. Lancet Oncol. 2020, 21, 233–241. [CrossRef]
30. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Werneck Krauss Silva, V.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [CrossRef]
31. Holzinger, A. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Inform. 2016, 3, 119–131. [CrossRef]
32. Lutnick, B.; Ginley, B.; Govind, D.; McGarry, S.D.; LaViolette, P.S.; Yacoub, R.; Jain, S.; Tomaszewski, J.E.; Jen, K.Y.; Sarder, P. An integrated iterative annotation technique for easing neural network training in medical image analysis. Nat. Mach. Intell. 2019, 1, 112–119. [CrossRef]
33. Alami, H.; Fortin, J.P.; Gagnon, M.P.; Pollender, H.; Têtu, B.; Tanguay, F. The challenges of a complex and innovative telehealth project: A qualitative evaluation of the eastern Quebec telepathology network. Int. J. Health Policy Manag. 2018, 7, 421–432. [CrossRef]
34. Têtu, B.; Perron, É.; Louahlia, S.; Paré, G.; Trudel, M.C.; Meyer, J. The eastern Québec telepathology network: A three-year experience of clinical diagnostic services. Diagn. Pathol. 2014, 9 (Suppl. 1), S1.
35. Pare, G.; Meyer, J.; Trudel, M.C.; Tetu, B. Impacts of a large decentralized telepathology network in Canada. Telemed. J. E-Health Off. J. Am. Telemed. Assoc. 2016, 22, 246–250. [CrossRef] [PubMed]
36. Têtu, B.; Boulanger, J.; Houde, C.; Fortin, J.P.; Gagnon, M.P.; Roch, G.; Paré, G.; Trudel, M.C.; Sicotte, C. The eastern Quebec telepathology network: A real collective project. Med. Sci. (M/S) 2012, 28, 993–999.
37. Thorstenson, S.; Molin, J.; Lundström, C. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: Digital pathology experiences 2006–2013. J. Pathol. Inform. 2014, 5, 14. [PubMed]
38. Asa, S.L.; Bodén, A.C.; Treanor, D.; Jarkman, S.; Lundström, C.; Pantanowitz, L. 2020 vision of digital pathology in action. J. Pathol. Inform. 2019, 10, 27.
39. Evans, A.J.; Salama, M.E.; Henricks, W.H.; Pantanowitz, L. Implementation of whole slide imaging for clinical purposes: Issues to consider from the perspective of early adopters. Arch. Pathol. Lab. Med. 2017, 141, 944–959. [CrossRef]
40. Williams, B.J.; Treanor, D. Practical guide to training and validation for primary diagnosis with digital pathology. J. Clin. Pathol. 2020, 73, 418–422. [CrossRef]
41. Turnquist, C.; Roberts-Gant, S.; Hemsworth, H.; White, K.; Browning, L.; Rees, G.; Roskell, D.; Verrill, C. On the edge of a digital pathology transformation: Views from a cellular pathology laboratory focus group. J. Pathol. Inform. 2019, 10, 37. [CrossRef]
42. Leica Biosystems. Top Considerations When Buying a Digital Pathology Scanner. Available online: https://www.leicabiosystems.com/de/resources/top-considerations-when-buying-a-digital-pathology-scanner/ (accessed on 25 October 2020).
43. Griffin, J.; Treanor, D. Digital pathology in clinical use: Where are we now and what is holding us back? Histopathology 2017, 70, 134–145. [CrossRef]
44. Hanna, M.G.; Reuter, V.E.; Ardon, O.; Kim, D.; Sirintrapun, S.J.; Schüffler, P.J.; Busam, K.J.; Sauter, J.L.; Brogi, E.; Tan, L.K.; et al. Validation of a digital pathology system including remote review during the COVID-19 pandemic. Mod. Pathol. 2020, 33, 2115–2127. [CrossRef]
45. Boyce, B.F. Whole slide imaging: Uses and limitations for surgical pathology and teaching. Biotech. Histochem. Off. Publ. Biol. Stain Comm. 2015, 90, 321–330. [CrossRef] [PubMed]
46. van Diest, P.J.; Huisman, A.; van Ekris, J.; Meijer, J.; Willems, S.; Hofhuis, H.; Verbeek, X.; van der Wel, M.; Vos, S.; Leguit, R.; et al. Pathology image exchange: The Dutch digital pathology platform for exchange of whole-slide images for efficient teleconsultation, telerevision, and virtual expert panels. JCO Clin. Cancer Inform. 2019, 3, 1–7. [CrossRef] [PubMed]
47. Al-Janabi, S.; van Slooten, H.J.; Visser, M.; van der Ploeg, T.; van Diest, P.J.; Jiwa, M. Evaluation of mitotic activity index in breast cancer using whole slide digital images. PLoS ONE 2013, 8, e82576. [CrossRef] [PubMed]
48. Saha, M.; Chakraborty, C.; Arun, I.; Ahmed, R.; Chatterjee, S. An advanced deep learning approach for Ki-67 stained hotspot detection and proliferation rate scoring for prognostic evaluation of breast cancer. Sci. Rep. 2017, 7, 3213. [CrossRef] [PubMed]
49. Dennis, J.; Parsa, R.; Chau, D.; Koduru, P.; Peng, Y.; Fang, Y.; Sarode, V.R. Quantification of human epidermal growth factor receptor 2 immunohistochemistry using the Ventana image analysis system: Correlation with gene amplification by fluorescence in situ hybridization: The importance of instrument validation for achieving high (>95%) concordance rate. Am. J. Surg. Pathol. 2015, 39, 624–631.
50. Pagès, F.; Mlecnik, B.; Marliot, F.; Bindea, G.; Ou, F.S.; Bifulco, C.; Lugli, A.; Zlobec, I.; Rau, T.T.; Berger, M.D.; et al. International validation of the consensus Immunoscore for the classification of colon cancer: A prognostic and accuracy study. Lancet 2018, 391, 2128–2139. [CrossRef]
51. Jiang, Y.; Yang, M.; Wang, S.; Li, X.; Sun, Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun. 2020, 40, 154–166. [CrossRef]
52. Samek, W.; Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2660–2673. [CrossRef]
53. Holzinger, A.; Plass, M.; Holzinger, K.; Crisan, G.C.; Pintea, C.-M.; Palade, V. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv 2017, arXiv:1708.01104.
54. Holzinger, A. From machine learning to explainable AI. In Proceedings of the 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), Kosice, Slovakia, 23–25 August 2018; pp. 55–66.
55. Pohn, B.; Kargl, M.; Reihs, R.; Holzinger, A.; Zatloukal, K.; Müller, H. Towards a deeper understanding of how a pathologist makes a diagnosis: Visualization of the diagnostic process in histopathology. In Proceedings of the 2019 IEEE Symposium on Computers and Communications (ISCC), Barcelona, Spain, 29 June–3 July 2019; pp. 1081–1086.
56. Benjamens, S.; Dhunnoo, P.; Meskó, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digit. Med. 2020, 3, 118. [CrossRef]
57. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [CrossRef] [PubMed]
58. Kim, R.H.; Nomikou, S.; Dawood, Z.; Jour, G.; Donnelly, D.; Moran, U.; Weber, J.S.; Razavian, N.; Snuderl, M.; Shapiro, R.; et al. A deep learning approach for rapid mutational screening in melanoma. bioRxiv 2019, 610311. [CrossRef]
59. Schaumberg, A.J.; Rubin, M.A.; Fuchs, T.J. H&E-stained whole slide image deep learning predicts SPOP mutation state in prostate cancer. bioRxiv 2018. [CrossRef]
60. Wulczyn, E.; Steiner, D.F.; Moran, M.; Plass, M.; Reihs, R.; Mueller, H.; Sadhwani, A.; Cai, Y.; Flament, I.; Chen, P.-H.C.; et al. Abstract 2096: A deep learning system to predict disease-specific survival in stage II and stage III colorectal cancer. Cancer Res. 2020, 80, 2096.
61. Skrede, O.J.; De Raedt, S.; Kleppe, A.; Hveem, T.S.; Liestøl, K.; Maddison, J.; Askautrud, H.A.; Pradhan, M.; Nesheim, J.A.; Albregtsen, F.; et al. Deep learning for prediction of colorectal cancer outcome: A discovery and validation study. Lancet 2020, 395, 350–360. [CrossRef]
62. FDAnews. FDA Hands Paige.AI Breakthrough Designation for Cancer Diagnosis Tool. Available online: https://www.fdanews.com/articles/190525-fda-hands-paigeai-breakthrough-designation-for-cancer-diagnosis-tool (accessed on 26 October 2020).
63. Komura, D.; Ishikawa, S. Machine learning methods for histopathological image analysis. Comput. Struct. Biotechnol. J. 2018, 16, 34–42. [CrossRef]
64. Zhang, Y.; Zhang, B.; Coenen, F.; Xiao, J.; Lu, W. One-class kernel subspace ensemble for medical image classification. EURASIP J. Adv. Signal Process. 2014, 2014, 17. [CrossRef]
65. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139. [CrossRef]
66. Ehteshami Bejnordi, B.; Veta, M.; Johannes van Diest, P.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017, 318, 2199–2210. [CrossRef]
67. Cireşan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Mitosis detection in breast cancer histology images with deep neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Nagoya, Japan, 22–26 September 2013; pp. 411–418.
68. Ho, J.; Ahlers, S.M.; Stratman, C.; Aridor, O.; Pantanowitz, L.; Fine, J.L.; Kuzmishin, J.A.; Montalto, M.C.; Parwani, A.V. Can digital pathology result in cost savings? A financial projection for digital pathology implementation at a large integrated health care organization. J. Pathol. Inform. 2014, 5, 33. [CrossRef]
69. Rosenfield, M. Computer vision syndrome: A review of ocular causes and potential treatments. Ophthalmic Physiol. Opt. 2011, 31, 502–515.
70. Rossignol, A.M.; Morse, E.P.; Summers, V.M.; Pagnotto, L.D. Video display terminal use and reported health symptoms among Massachusetts clerical workers. J. Occup. Med. Off. Publ. Ind. Med. Assoc. 1987, 29, 112–118.
71. Daum, K.M.; Clore, K.A.; Simms, S.S.; Vesely, J.W.; Wilczek, D.D.; Spittle, B.M.; Good, G.W. Productivity associated with visual status of computer users. Optometry 2004, 75, 33–47. [CrossRef]
72. Blehm, C.; Vishnu, S.; Khattak, A.; Mitra, S.; Yee, R.W. Computer vision syndrome: A review. Surv. Ophthalmol. 2005, 50, 253–262. [CrossRef]
73. Bohr, P.C. Efficacy of office ergonomics education. J. Occup. Rehabil. 2000, 10, 12. [CrossRef]
74. Chui, M.; Manyika, J.; Miremadi, M. Where machines could replace humans—And where they can’t (yet). McKinsey Q. 2016, 30, 1–9.
75. Vertinsky, T.; Forster, B. Prevalence of eye strain among radiologists: Influence of viewing variables on symptoms. AJR Am. J. Roentgenol. 2005, 184, 681–686. [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
... While offering several benefits, digital pathology is not without challenges [102]. Among the technical ones, method standardization is pivotal from the moment the tissue reaches the histology lab to when a pathologist reads the WSI. ...
... Histopathology involves diagnosing diseases by closely inspecting gigapixel tissue slides of microscopic structures to identify their characteristics [1]. Computational pathology has gained significant momentum, bringing about a transformative change in the field of cancer diagnostics since the digitization of pathology slides [2]. ...
Preprint
Diffusion Generative Models (DGM) have rapidly surfaced as emerging topics in the field of computer vision, garnering significant interest across a wide array of deep learning applications. Despite their high computational demand, these models are extensively utilized for their superior sample quality and robust mode coverage. While research in diffusion generative models is advancing, exploration within the domain of computational pathology and its large-scale datasets has been comparatively gradual. Bridging the gap between the high-quality generation capabilities of Diffusion Generative Models and the intricate nature of pathology data, this paper presents an in-depth comparative analysis of diffusion methods applied to a pathology dataset. Our analysis extends to datasets with varying Fields of View (FOV), revealing that DGMs are highly effective in producing high-quality synthetic data. An ablative study is also conducted, followed by a detailed discussion on the impact of various methods on the synthesized histopathology images. One striking observation from our experiments is how the adjustment of image size during data generation can simulate varying fields of view. These findings underscore the potential of DGMs to enhance the quality and diversity of synthetic pathology data, especially when used with real data, ultimately increasing accuracy of deep learning models in histopathology. Code is available from https://github.com/AtlasAnalyticsLab/Diffusion4Path
... While digital scanning and archiving offer numerous benefits for remote collaboration, accessibility, and preservation, they may not be an affordable or practical option for all laboratories. 13 By providing a thorough re-staining procedure, this method can be especially helpful in educational and research settings, where hands-on experience with microscopes is essential for learning histological techniques, without the need for specialized digital equipment. Currently, protocols that provide instructions for efficiently reutilizing deteriorated H&E-stained slides have not been described. ...
Article
Full-text available
Hematoxylin and eosin (H&E)-stained slides inevitably deteriorate over time, frequently becoming unreadable. Reutilizing these slides can reduce the need for additional serial sections, particularly when the target region is no longer available in the tissue block. This study aims to develop efficient protocols for recycling faded H&E-stained slides, providing benefits for future research on stored samples. Seventy-one faded slides, representing a variety of tissue types and pathologies, were randomly divided into two groups. Slides were de-stained and re-stained using the conventional procedure and a modified Tris and HCl procedure. Three observers independently assessed all slides based on predefined parameters. The stability of the re-stained slides was re-assessed in 6 months. The modified Tris and HCl method yielded significantly higher scores compared to the conventional method for crispness of staining, nuclear staining, cytoplasmic staining, and vibrancy of staining ( p < 0.05), as well as greater durability, as evidenced by minimal score reduction 6 months after staining. Thus, incorporating a Tris and HCl step into the process effectively enhances and restores faded H&E slides, offering a valuable technique for revitalizing histology slides for future research and educational purposes.
... The potential of digitalization and AI in cytopathology is significant, but several challenges need to be addressed to realize these advancements fully [10,11]. Cytopathology, dissimilar to radiology, has experienced a slower integration with the digital health landscape, which can be attributed to several factors [12,13]. One of the key issues is the complexity of cytopathological samples. ...
Article
Full-text available
The integration of artificial intelligence (AI) in cytopathology is an emerging field with transformative potential, aiming to enhance diagnostic precision and operational efficiency. This umbrella review seeks to identify prevailing themes, opportunities, challenges, and recommendations related to AI in cytopathology. Utilizing a standardized checklist and quality control procedures, this review examines recent advancements and future implications of AI technologies in this domain. Twenty-one review studies were selected through a systematic process. AI has demonstrated promise in automating and refining diagnostic processes, potentially reducing errors and improving patient outcomes. However, several critical challenges need to be addressed to realize the benefits of AI fully. This review underscores the necessity for rigorous validation, ongoing empirical data on diagnostic accuracy, standardized protocols, and effective integration with existing clinical workflows. Ethical issues, including data privacy and algorithmic bias, must be managed to ensure responsible AI applications. Additionally, high costs and substantial training requirements present barriers to widespread AI adoption. Future directions highlight the importance of applying successful integration strategies from histopathology and radiology to cytopathology. Continuous research is needed to improve model interpretability, validation, and standardization. Developing effective strategies for incorporating AI into clinical practice and establishing comprehensive ethical and regulatory frameworks will be crucial for overcoming these challenges. In conclusion, while AI holds significant promise for advancing cytopathology, its full potential can only be achieved by addressing challenges related to validation, cost, and ethics. 
This review provides an overview of current advancements, identifies ongoing challenges, and offers a roadmap for the successful integration of AI into diagnostic cytopathology, informed by insights from related fields.
... Digital cytology and cytopathology, together with digital histology and histopathology, are part of the broader field of digital pathology [7,8]. Unlike digital radiology, which advanced more rapidly, digital pathology has experienced a slower adoption rate, primarily due to the delayed integration of the DICOM (Digital Imaging and Communications in Medicine) standard, specifically in the form of DICOM WSI (Whole Slide Imaging) [9,10]. ...
Article
Full-text available
The application of chatbots and Natural Language Processing (NLP) in cytology and cytopathology is an emerging field, which is currently characterized by a limited but growing body of research. Here, a narrative review has been proposed utilizing a standardized checklist and quality control procedure for including scientific papers. This narrative review explores the early developments and potential future impact of these technologies in medical diagnostics. The current literature, comprising 11 studies (after excluding comments, letters, and editorials) suggests that chatbots and NLP offer significant opportunities to enhance diagnostic accuracy, streamline clinical workflows, and improve patient engagement. By automating the extraction and classification of medical information, these technologies can reduce human error and increase precision. They also promise to make patient information more accessible and facilitate complex decision-making processes, thereby fostering greater patient involvement in healthcare. Despite these promising prospects, several challenges need to be addressed for the full potential of these technologies to be realized. These include the need for data standardization, mitigation of biases in Artificial Intelligence (AI) systems, and comprehensive clinical validation. Furthermore, ethical, privacy, and legal considerations must be navigated carefully to ensure responsible AI deployment. Compared to the more established fields of histology, histopathology, and especially radiology, the integration of digital tools in cytology and cytopathology is still in its infancy. Building on the advancements in related fields, especially radiology’s experience with digital integration, where these technologies already offer promising solutions in mentoring, second opinions, and education, we can leverage this knowledge to further develop chatbots and natural language processing in cytology and cytopathology. 
Overall, this review underscores the transformative potential of these technologies while outlining the critical areas for future research and development.
... The advent of digitization in the field of pathology has drastically aided the medical workflow of histological and cellular investigations 1,2 . Pathologists can now handle greater volumes of patient data with higher precision, ease and throughput. ...
Preprint
We developed a rapid scanning optical microscope, termed "BlurryScope", that leverages continuous image acquisition and deep learning to provide a cost-effective and compact solution for automated inspection and analysis of tissue sections. BlurryScope integrates specialized hardware with a neural network-based model to quickly process motion-blurred histological images and perform automated pathology classification. This device offers comparable speed to commercial digital pathology scanners, but at a significantly lower price point and smaller size/weight, making it ideal for fast triaging in small clinics, as well as for resource-limited settings. To demonstrate the proof-of-concept of BlurryScope, we implemented automated classification of human epidermal growth factor receptor 2 (HER2) scores on immunohistochemically (IHC) stained breast tissue sections, achieving concordant results with those obtained from a high-end digital scanning microscope. We evaluated this approach by scanning HER2-stained tissue microarrays (TMAs) at a continuous speed of 5 mm/s, which introduces bidirectional motion blur artifacts. These compromised images were then used to train our network models. Using a test set of 284 unique patient cores, we achieved blind testing accuracies of 79.3% and 89.7% for 4-class (0, 1+, 2+, 3+) and 2-class (0/1+ , 2+/3+) HER2 score classification, respectively. BlurryScope automates the entire workflow, from image scanning to stitching and cropping of regions of interest, as well as HER2 score classification. We believe BlurryScope has the potential to enhance the current pathology infrastructure in resource-scarce environments, save diagnostician time and bolster cancer identification and classification across various clinical environments.
Article
Full-text available
Synthetic data is becoming a valuable tool for computational pathologists, aiding in tasks like data augmentation and addressing data scarcity and privacy. However, its use necessitates careful planning and evaluation to prevent the creation of clinically irrelevant artifacts. This manuscript introduces a comprehensive pipeline for generating and evaluating synthetic pathology data using a diffusion model. The pipeline features a multifaceted evaluation strategy with an integrated explainability procedure, addressing two key aspects of synthetic data use in the medical domain. The evaluation of the generated data employs an ensemble-like approach. The first step includes assessing the similarity between real and synthetic data using established metrics. The second step involves evaluating the usability of the generated images in deep learning models accompanied with explainable AI methods. The final step entails verifying their histopathological realism through questionnaires answered by professional pathologists. We show that each of these evaluation steps are necessary as they provide complementary information on the generated data’s quality. The pipeline is demonstrated on the public GTEx dataset of 650 Whole Slide Images (WSIs), including five different tissues. An equal number of tiles from each tissue are generated and their reliability is assessed using the proposed evaluation pipeline, yielding promising results. In summary, the proposed workflow offers a comprehensive solution for generative AI in digital pathology, potentially aiding the community in their transition towards digitalization and data-driven modeling.
Preprint
Full-text available
A quantitative model to genetically interpret the histology in whole microscopy slide images is desirable to guide downstream immuno-histochemistry, genomics, and precision medicine. We constructed a statistical model that predicts whether or not SPOP is mutated in prostate cancer, given only the digital whole slide after standard hematoxylin and eosin [H&E] staining. Using a TCGA cohort of 177 prostate cancer patients where 20 had mutant SPOP, we trained multiple ensembles of residual networks, accurately distinguishing SPOP mutant from SPOP non-mutant patients (test AUROC=0.74, p=0.0007 Fisher’s Exact Test). We further validated our full metaensemble classifier on an independent test cohort from MSK-IMPACT of 152 patients where 19 had mutant SPOP. Mutants and non-mutants were accurately distinguished despite TCGA slides being frozen sections and MSK-IMPACT slides being formalin-fixed paraffin-embedded sections (AUROC=0.86, p=0.0038). Moreover, we scanned an additional 36 MSK-IMPACT patients having mutant SPOP, trained on this expanded MSK-IMPACT cohort (test AUROC=0.75, p=0.0002), tested on the TCGA cohort (AUROC=0.64, p=0.0306), and again accurately distinguished mutants from non-mutants using the same pipeline. Importantly, our method demonstrates tractable deep learning in this “small data” setting of 20-55 positive examples and quantifies each prediction’s uncertainty with confidence intervals. To our knowledge, this is the first statistical model to predict a genetic mutation in cancer directly from the patient’s digitized H&E-stained whole microscopy slide. Moreover, this is the first time quantitative features learned from patient genetics and histology have been used for content-based image retrieval, finding similar patients for a given patient where the histology appears to share the same genetic driver of disease i.e. 
SPOP mutation (p=0.0241 Kost’s Method), and finding similar patients for a given patient that does not have have that driver mutation (p=0.0170 Kost’s Method). Significance Statement This is the first pipeline predicting gene mutation probability in cancer from digitized H&E-stained microscopy slides. To predict whether or not the speckle-type POZ protein [SPOP] gene is mutated in prostate cancer, the pipeline (i) identifies diagnostically salient slide regions, (ii) identifies the salient region having the dominant tumor, and (iii) trains ensembles of binary classifiers that together predict a confidence interval of mutation probability. Through deep learning on small datasets, this enables automated histologic diagnoses based on probabilities of underlying molecular aberrations and finds histologically similar patients by learned genetic-histologic relationships. Conception, Writing: AJS, TJF. Algorithms, Learning, CBIR: AJS. Analysis: AJS, MAR, TJF. Supervision: MAR, TJF.
Article
Full-text available
Background: Digital pathology (DP) has the potential to fundamentally change the way that histopathology is practised, by streamlining the workflow, increasing efficiency, improving diagnostic accuracy and facilitating the platform for implementation of artificial intelligence-based computer-assisted diagnostics. Although the barriers to wider adoption of DP have been multifactorial, limited evidence of reliability has been a significant contributor. A meta-analysis to demonstrate the combined accuracy and reliability of DP is still lacking in the literature. Objectives: We aimed to review the published literature on the diagnostic use of DP and to synthesise a statistically pooled evidence on safety and reliability of DP for routine diagnosis (primary and secondary) in the context of validation process. Methods: A comprehensive literature search was conducted through PubMed, Medline, EMBASE, Cochrane Library and Google Scholar for studies published between 2013 and August 2019. The search protocol identified all studies comparing DP with light microscopy (LM) reporting for diagnostic purposes, predominantly including H&E-stained slides. Random-effects meta-analysis was used to pool evidence from the studies. Results: Twenty-five studies were deemed eligible to be included in the review which examined a total of 10 410 histology samples (average sample size 176). For overall concordance (clinical concordance), the agreement percentage was 98.3% (95% CI 97.4 to 98.9) across 24 studies. A total of 546 major discordances were reported across 25 studies. Over half (57%) of these were related to assessment of nuclear atypia, grading of dysplasia and malignancy. These were followed by challenging diagnoses (26%) and identification of small objects (16%). Conclusion: The results of this meta-analysis indicate equivalent performance of DP in comparison with LM for routine diagnosis. 
Furthermore, the results provide valuable information concerning the areas of diagnostic discrepancy which may warrant particular attention in the transition to DP.
Article
Full-text available
At the beginning of the artificial intelligence (AI)/machine learning (ML) era, the expectations are high, and experts foresee that AI/ ML shows potential for diagnosing, managing and treating a wide variety of medical conditions. However, the obstacles for implementation of AI/ML in daily clinical practice are numerous, especially regarding the regulation of these technologies. Therefore, we provide an insight into the currently available AI/ML-based medical devices and algorithms that have been approved by the US Food & Drugs Administration (FDA). We aimed to raise awareness of the importance of regulatory bodies, clearly stating whether a medical device is AI/ML based or not. Cross-checking and validating all approvals, we identified 64 AI/ML based, FDA approved medical devices and algorithms. Out of those, only 29 (45%) mentioned any AI/ML-related expressions in the official FDA announcement. The majority (85.9%) was approved by the FDA with a 510(k) clearance, while 8 (12.5%) received de novo pathway clearance and one (1.6%) premarket approval (PMA) clearance. Most of these technologies, notably 30 (46.9%), 16 (25.0%), and 10 (15.6%) were developed for the fields of Radiology, Cardiology and Internal Medicine/General Practice respectively. We have launched the first comprehensive and open access database of strictly AI/ML-based medical technologies that have been approved by the FDA. The database will be constantly updated. npj Digital Medicine (2020) 3:118 ; https://doi.
Article
Full-text available
Digital displays (monitors) are an indispensable component of a pathologist's daily workflow, whether writing reports, viewing whole-slide images, or browsing the Internet. Due to a paucity of literature and experience surrounding display use and standardization in pathology, the Food and Drug Administration (FDA) has currently restricted FDA-cleared whole-slide imaging systems to a specific model of display for each system, which at this time consists of only medical-grade (MG) displays. Further, given that a pathologist's display will essentially become their new surrogate "microscope," it becomes exceedingly important that all pathologists have a basic understanding of fundamental display properties and their functional consequences. This review seeks to: (a) define and summarize the current and emerging display technology, terminology, features, and regulation as they pertain to pathologists, and review the current literature on the impact of different display types (e.g. MG vs. consumer off-the-shelf vs. professional grade) on pathologists' diagnostic performance; and (b) discuss the impact of the recent digital pathology device componentization and the coronavirus disease 2019 public health emergency on the pixel pathway and display use for remote digital pathology. Display technology has changed dramatically over the past 20 years and continues to change at a rapid rate. There is a paucity of published studies to date that investigate how display type affects pathologist performance, and more research is necessary in order to develop standards and minimum specifications for displays in digital pathology. Given the complexity of modern displays, pathologists must become better informed regarding display technology if they wish to have more choice over their future "microscopes."
Article
Full-text available
Background There is high demand to develop computer-assisted diagnostic tools to evaluate prostate core needle biopsies (CNBs), but little clinical validation and a lack of clinical deployment of such tools. We report here on a blinded clinical validation study and deployment of an artificial intelligence (AI)-based algorithm in a pathology laboratory for routine clinical use to aid prostate diagnosis. Methods An AI-based algorithm was developed using haematoxylin and eosin (H&E)-stained slides of prostate CNBs digitised with a Philips scanner, which were divided into training (1 357 480 image patches from 549 H&E-stained slides) and internal test (2501 H&E-stained slides) datasets. The algorithm provided slide-level scores for probability of cancer, Gleason score 7–10 (vs Gleason score 6 or atypical small acinar proliferation [ASAP]), Gleason pattern 5, and perineural invasion and calculation of cancer percentage present in CNB material. The algorithm was subsequently validated on an external dataset of 100 consecutive cases (1627 H&E-stained slides) digitised on an Aperio AT2 scanner. In addition, the AI tool was implemented in a pathology laboratory within routine clinical workflow as a second read system to review all prostate CNBs. Algorithm performance was assessed with area under the receiver operating characteristic curve (AUC), specificity, and sensitivity, as well as Pearson's correlation coefficient (Pearson's r) for cancer percentage. Findings The algorithm achieved an AUC of 0·997 (95% CI 0·995 to 0·998) for cancer detection in the internal test set and 0·991 (0·979 to 1·00) in the external validation set. The AUC for distinguishing between a low-grade (Gleason score 6 or ASAP) and high-grade (Gleason score 7–10) cancer diagnosis was 0·941 (0·905 to 0·977) and the AUC for detecting Gleason pattern 5 was 0·971 (0·943 to 0·998) in the external validation set. 
Cancer percentage calculated by pathologists and the algorithm showed good agreement (r=0·882, 95% CI 0·834 to 0·915; p<0·0001) with a mean bias of −4·14% (−6·36 to −1·91). The algorithm achieved an AUC of 0·957 (0·930 to 0·985) for perineural invasion. In routine practice, the algorithm was used to assess 11 429 H&E-stained slides pertaining to 941 cases leading to 90 Gleason score 7–10 alerts and 560 cancer alerts. 51 (9%) cancer alerts led to additional cuts or stains being ordered, two (4%) of which led to a third opinion request. We report on the first case of missed cancer that was detected by the algorithm. Interpretation This study reports the successful development, external clinical validation, and deployment in clinical practice of an AI-based algorithm to accurately detect, grade, and evaluate clinically relevant findings in digitised slides of prostate CNBs. Funding Ibex Medical Analytics.
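Slide-level metrics of the kind reported above (AUC, sensitivity, specificity) can be illustrated with a minimal sketch. The Mann–Whitney formulation of AUC used here is a standard equivalence to the area under the ROC curve, and the probability scores below are hypothetical, not the study's data:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive (cancer)
    slide outscores a randomly chosen negative one; ties count as half.
    Equivalent to the area under the ROC curve."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def sensitivity_specificity(scores_pos, scores_neg, threshold):
    """Operating-point metrics at a fixed slide-level score threshold."""
    tp = sum(s >= threshold for s in scores_pos)
    tn = sum(s < threshold for s in scores_neg)
    return tp / len(scores_pos), tn / len(scores_neg)

# hypothetical slide-level cancer-probability scores (not study data)
cancer = [0.98, 0.91, 0.87, 0.60]
benign = [0.05, 0.12, 0.33, 0.70]
print(auc_mann_whitney(cancer, benign))              # → 0.9375
print(sensitivity_specificity(cancer, benign, 0.5))  # → (1.0, 0.75)
```

The AUC summarizes ranking quality across all thresholds, while a deployed second-read system such as the one described must additionally commit to an operating threshold that trades sensitivity (missed-cancer alerts) against specificity (false alerts).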
Article
Full-text available
Remote digital pathology allows healthcare systems to maintain pathology operations during public health emergencies. Existing Clinical Laboratory Improvement Amendments (CLIA) regulations require pathologists to electronically verify patient reports from a certified facility. During the pandemic of coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, this requirement potentially exposes pathologists, their colleagues, and household members to the risk of becoming infected. Relaxation of government enforcement of this regulation allows pathologists to review and report pathology specimens from a remote, non-CLIA-certified facility. The availability of digital pathology systems can facilitate remote microscopic diagnosis, although formal comprehensive (case-based) validation of remote digital diagnosis has not been reported. All glass slides representing routine clinical signout workload in surgical pathology subspecialties at Memorial Sloan Kettering Cancer Center were scanned on an Aperio GT450 at ×40 equivalent resolution (0.26 µm/pixel). Twelve pathologists from nine surgical pathology subspecialties remotely reviewed and reported complete pathology cases using a digital pathology system from a non-CLIA-certified facility through a secure connection. Whole slide images were integrated into and launched within the laboratory information system via a custom, vendor-agnostic whole slide image viewer. Remote signouts utilized consumer-grade computers and monitors (monitor size, 13.3–42 in.; resolution, 1280 × 800–3840 × 2160 pixels) connecting to an institutional clinical workstation via secure virtual private network. Pathologists subsequently reviewed all corresponding glass slides using a light microscope within the CLIA-certified department. Intraobserver concordance metrics included reporting elements of top-line diagnosis, margin status, lymphovascular and/or perineural invasion, pathology stage, and ancillary testing.
The median whole slide image file size was 1.3 GB; scan time/slide averaged 90 s; and scanned tissue area averaged 612 mm². Signout sessions included a total of 108 cases, comprised of 254 individual parts and 1196 slides. Major diagnostic equivalency was 100% between digital and glass slide diagnoses; and overall concordance was 98.8% (251/254). This study reports validation of primary diagnostic review and reporting of complete pathology cases from a remote site during a public health emergency. Our experience shows high (100%) intraobserver digital to glass slide major diagnostic concordance when reporting from a remote site. This randomized, prospective study successfully validated remote use of a digital pathology system including operational feasibility supporting remote review and reporting of pathology specimens, and evaluation of remote access performance and usability for remote signout.
Article
Full-text available
Pathology departments must rise to new staffing challenges caused by the coronavirus disease 2019 pandemic and may need to work more flexibly for the foreseeable future. In light of this, many pathologists and departments are considering the merits of remote or home reporting of digital cases. While some individuals have experience of this, little work has been done to determine optimum conditions for home reporting, including technical and training considerations. In this publication, produced in response to the pandemic, we provide information regarding risk assessment of home reporting of digital slides, summarize available information on specifications for home-reporting computing equipment, and share access to a novel point-of-use quality assurance tool for assessing the suitability of home-reporting screens for digital slide diagnosis. We hope this study provides a useful starting point and some practical guidance in a difficult time. This study forms the basis of the guidance issued by the Royal College of Pathologists, available at: https://www.rcpath.org/uploads/assets/626ead77-d7dd-42e1-949988e43dc84c97/RCPath-guidance-for-remote-digital-pathology.pdf.
Article
Full-text available
The development of digital pathology and the progression of state-of-the-art algorithms for computer vision have led to increasing interest in the use of artificial intelligence (AI), especially deep learning (DL)-based AI, in tumor pathology. DL-based algorithms have been developed to conduct all kinds of work involved in tumor pathology, including tumor diagnosis, subtyping, grading, staging, and prognostic prediction, as well as the identification of pathological features, biomarkers and genetic changes. The applications of AI in pathology not only contribute to improved diagnostic accuracy and objectivity but also reduce the workload of pathologists and subsequently enable them to spend additional time on high-level decision-making tasks. In addition, AI can help pathologists meet the requirements of precision oncology. However, there are still some challenges relating to the implementation of AI, including the issues of algorithm validation and interpretability, computing systems, the skeptical attitudes of pathologists, clinicians and patients, as well as regulatory and reimbursement issues. Herein, we present an overview of how AI-based approaches could be integrated into the workflow of pathologists and discuss the challenges and perspectives of the implementation of AI in tumor pathology.
Conference Paper
Accurate prognosis in colorectal cancer can have important implications for clinical management. Here, we develop a deep learning system (DLS) to first identify invasive cancer and then directly predict disease specific survival (DSS) for stage II and stage III colorectal cancer using only digitized histopathology whole-slide images. The DLS was trained using slides from 1173 stage II and 1266 stage III cases (18,304 total slides) and was evaluated on a held-out test set of 601 stage II and 638 stage III cases (9,340 total slides). The area under the receiver operating characteristic curve (AUC) for 5-year DSS prediction was 68.0 for stage II (95% CI 62.2-73.1) and 65.5 for stage III (95% CI 61.1-70.0). For stage II, 5-year DSS was 64% for DLS-predicted high-risk cases versus 89% for DLS-predicted low-risk cases (upper and lower risk quartiles; p<0.001, log rank test). For stage III, 5-year DSS was 35% for DLS-predicted high-risk cases versus 66% for DLS-predicted low-risk cases (upper and lower risk quartiles; p<0.001, log rank test). In a multivariable Cox model, the DLS prediction remained significantly associated with DSS after adjusting for T-category, N-category, age, gender, tumor grade, and lymphovascular invasion (stage II: adjusted hazard ratio 1.55, 95% CI 1.33-1.81, p<0.0001; stage III: adjusted hazard ratio 1.35, 95% CI 1.21-1.51, p<0.0001). Finally, a combined proportional-hazards model using the DLS along with baseline clinicopathologic information provided better risk prediction than the DLS or baseline information alone, increasing 5-year AUC over the baseline-only model by 8.9 points (95% CI 3.9-13.6) and 5.3 points (95% CI 2.3-8.4) for stages II and III, respectively. Taken together, these findings demonstrate that the DLS provides significant prognostic value and risk stratification in both stage II and stage III colorectal cancer, and can be combined with known risk features to further improve prognostic accuracy. 
This represents novel work to train a DLS to directly predict patient outcomes using whole-slide images and weakly supervised learning. The ability to use non-annotated slides as input has important implications for possible clinical applications, and the features learned by the model may also help to identify new prognosis-associated morphologic factors in colorectal cancer. Additional work is ongoing to confirm the utility of these findings, such as validation in additional datasets and interpretability experiments to better understand the features learned by the DLS for these predictions. Citation Format: Ellery Wulczyn, David F. Steiner, Melissa Moran, Markus Plass, Robert Reihs, Heimo Mueller, Apaar Sadhwani, Yuannan Cai, Isabelle Flament, Po-Hsuan Cameron Chen, Yun Liu, Martin C. Stumpe, Zhaoyang Xu, Kurt Zatloukal, Craig H. Mermel. A deep learning system to predict disease-specific survival in stage II and stage III colorectal cancer [abstract]. In: Proceedings of the Annual Meeting of the American Association for Cancer Research 2020; 2020 Apr 27-28 and Jun 22-24. Philadelphia (PA): AACR; Cancer Res 2020;80(16 Suppl):Abstract nr 2096.
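The 5-year disease-specific survival figures quoted above are the kind of quantity produced by Kaplan–Meier estimation within each DLS-predicted risk group. A minimal estimator can be sketched as follows; the follow-up data are entirely illustrative and do not come from the study's cohort:

```python
def kaplan_meier(times, events, horizon):
    """Kaplan-Meier survival estimate at `horizon`.
    `events` flags a death (1) versus censoring (0); deaths are
    processed before censorings at tied times, per the usual convention."""
    survival = 1.0
    at_risk = len(times)
    for t, d in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if t > horizon:
            break
        if d:                                   # death: step the curve down
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1                            # death or censoring leaves the risk set
    return survival

# illustrative follow-up times (years) and event flags for a
# hypothetical DLS-predicted high-risk group
t = [1.0, 2.0, 3.0, 6.0, 7.0]
e = [1, 1, 0, 1, 0]
print(round(kaplan_meier(t, e, 5.0), 2))   # → 0.6
```

Censored patients (event flag 0) contribute person-time while under observation but never step the curve down, which is what distinguishes this estimate from a naive survivor fraction; the group-wise curves are then compared with a log-rank test, as in the abstract.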
Article
Since 2007, we have gradually been building up infrastructure for digital pathology, starting with a whole-slide scanner park used to build a digital archive and to streamline multidisciplinary meetings, student teaching and research, and culminating in a fully digital diagnostic workflow into which we are currently integrating artificial intelligence algorithms. In this paper, we highlight the different steps in this process towards digital diagnostics, which was at times a rocky road with definite implementation issues, but ultimately led to an exciting, more modern and efficient way of practising pathology in which patient safety has clearly improved.