Chapter
Perspective Chapter: Computer
Vision-Based Digital Pathology for
Central Nervous System Tumors –
State-of-the-Art and Current
Advances
Daniel Hieber, Felix Holl, Vera Nickl,
Friederike Liesche-Starnecker and Johannes Schobel
Abstract
Rapid advances in computer vision (CV) and artificial intelligence have opened
new avenues for digital pathology, including the diagnosis and treatment of central
nervous system (CNS) tumors. In addition to reviewing the state-of-the-art in
CV-based digital pathology and highlighting its potential to revolutionize the field,
this chapter also provides a general introduction to digital pathology and Machine
Learning (ML) for neuropathologists. Although currently limited to research, the inte-
gration of CV tools into digital pathology already offers significant advantages, such
as automating tissue analysis and providing quantitative assessments. The transition
from research to clinical application is slowly gaining momentum. To provide neuropa-
thologists with the necessary skills to succeed in digital pathology and ML, the chapter
also discusses how physicians and researchers can create custom models and tools tai-
lored to specific needs using tools such as nnU-Net, deepflash2, and PathML. Emphasis
is placed on the importance of interdisciplinary collaboration and continued research
to fully realize the potential of CV in digital pathology for CNS tumors, to address the
challenges of workforce shortages and increased workloads in neuropathology.
Keywords: digital pathology, neuropathology, computer vision, central nervous
system, machine learning, computational pathology
1. Introduction
Artificial intelligence (AI) and machine learning (ML) are becoming increasingly
popular overall, as well as in the medical domain. With radiology as an early adopter
of ML [1], especially in the form of computer vision (CV), AI already has a significant
impact on medicine [2]. In pathology in general, as well as in neuropathology, AI is,
on the other hand, only beginning its transition from research to diagnostics. Out of
Advanced Concepts and Strategies in Central Nervous System Tumors
the 882 U.S. Food & Drug Administration (FDA) approved medical devices using ML
or AI, only six are for pathological use cases, and 671 are for radiology [3].
However, this transition is now being accelerated with the release of multiple
robust foundation models in 2024 [4–7]. While there is still a long road ahead in
adapting AI to pathology, quite a few applications and techniques are already
available to assist in pathological research and diagnostics. This chapter provides
a broad overview of the available options and how CV can be used in digital
(neuro-)pathology, with a focus on central nervous system (CNS) tumors.
2. An introduction to computer vision in digital pathology
Digital pathology and the use of CV within it are not new developments. While
they have been around for some time, this section introduces a common definition
of digital pathology for this chapter and provides an introduction to CV for medical
readers. The section further highlights some of the key reasons why CV-assisted
digital pathology is needed nowadays.
2.1 What is digital pathology
Digital pathology can generally be divided into two main areas: molecular data and
image data. As with conventional pathology, tissue is the starting point for all further
analysis. With molecular data, the tissue is analyzed regarding its genetic expres-
sion, and the resulting data is then processed using computer programs. As this work
focuses on CV-based digital pathology, the image data is of main interest, with the
first step being the digitization of the data.
Besides digital pathology, the term computational pathology is also in use
nowadays. Computational pathology is a part of digital pathology and describes
the active use of computer algorithms and software to analyze pathological data.
In this chapter, the term digital pathology is used for all matters for simplicity.
2.1.1 Digitizing pathology
With image data, the tissue slides are either stained with hematoxylin and eosin
(HE) or immunohistochemical (IHC) stains, or immunofluorescence (IF) is used.
However, instead of directly analyzing the resulting slides under the microscope, the
data is digitized using so-called Whole-Slide Image (WSI) scanners or specialized IF
scanners. To allow the same level of detail for digital analysis as with conventional
microscopes, these WSIs are scanned at 20-40x magnification. This generally results
in 100,000 x 100,000-pixel images, with even larger dimensions possible depending
on the tissue slide used.
The resulting digital WSIs can be available in many different file formats depend-
ing on the scanner’s manufacturer. Some of these formats are proprietary, meaning
that they can only be used with software by the same vendor and its cooperation part-
ners. However, at the foundation, most of these formats are based on the pyramidal
tiled Tagged Image File (TIF) Format (TIFF). As such, the images can be transformed
to the non-proprietary TIFF and, therefore, be analyzed independently of vendor-
specific software.
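The pyramidal layout mentioned above can be illustrated with a small calculation: each pyramid level typically stores a downsampled copy of the previous one, so a viewer can fetch a coarse level for an overview and the base level for full detail. The following sketch assumes a factor-of-two downsample per level, which is common but not universal; the function name and the cutoff are illustrative.

```python
def pyramid_levels(base_w, base_h, min_side=512):
    """List the (width, height) of each pyramid level, halving the
    resolution per level until a side drops below min_side."""
    levels = []
    w, h = base_w, base_h
    while w >= min_side and h >= min_side:
        levels.append((w, h))
        w, h = w // 2, h // 2
    return levels

# A 100,000 x 100,000 px base image yields 8 levels, the coarsest ~780 px.
for i, (w, h) in enumerate(pyramid_levels(100_000, 100_000)):
    print(f"level {i}: {w} x {h}")
```

This is why even a very large WSI can be browsed fluidly: the viewer rarely touches the full-resolution base level.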
Due to the large resolution and sheer size of the images, the resulting files can
require significant storage space on hard drives. Uncompressed WSIs can take up to
Perspective Chapter: Computer Vision-Based Digital Pathology for Central Nervous System...
DOI: http://dx.doi.org/10.5772/intechopen.1007366
40 GB and more. In general, these images are already provided in a compressed
format. With lossless compression algorithms like JPEG2000, the size can be
significantly reduced without losing information [8].
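A quick back-of-the-envelope calculation puts these storage figures into perspective. The numbers below are illustrative: an 8-bit RGB image is assumed, and the 3:1 compression ratio is an assumption, as real lossless ratios depend on the tissue content; images with more channels or higher bit depth exceed the 40 GB mark.

```python
# Uncompressed size of an RGB WSI at 8 bits per channel.
width = height = 100_000        # pixels per side
bytes_per_pixel = 3             # 8-bit R, G, B

raw_bytes = width * height * bytes_per_pixel
raw_gb = raw_bytes / 1024**3
print(f"{raw_gb:.1f} GB uncompressed")   # ~27.9 GB

# Lossless compression (e.g., JPEG2000) at an assumed 3:1 ratio:
print(f"{raw_gb / 3:.1f} GB compressed")
```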
2.1.2 Advantages of digitized pathology
At the most fundamental level, the digitization of WSIs alone already provides
some major advantages over conventional pathology. The digitized WSIs can be
saved in Picture Archiving and Communication Systems (PACSs), which have
been used in radiology for a long time. These PACSs allow the linkage of hospital
information systems and laboratory information systems, providing all relevant
patient data in a single place without the need to transport paper files or search for
the required information. As the data are available in these systems, independent
of the actual location of the physician, pathologists can access the data remotely,
allowing remote analysis to be conducted during conferences and similar events.
Such remote analysis is especially useful to small clinics with only a few trained
(neuro-)pathologists. Moreover, remote consultation for colleagues becomes an
option without the need to send physical glass slides to other locations (and wait for
them to arrive).
A further advantage is the possibility of granularly annotating the data without
“damaging” it. In analog pathology, physicians often use pen markers to annotate
glass slides. These are quite coarse and hide some information on the slide so that
the tissue below the pen mark is no longer readable without removing the annota-
tion first. Using specialized WSI viewers, such annotations can be created on WSIs
without manipulating the original image and, therefore, without losing information.
Further, the annotations can be generated on a more granular level, providing more
detailed information. Figure 1 shows such annotations in the QuPath software on
an IHC-stained WSI. The yellow annotations were created using the “wand” tool,
which uses AI to assist the physician in drawing the annotation. These annotations
can, for example, be used by junior physicians to better learn the morphological
features in a histological slide, or during diagnostics to document where the senior
pathologist found relevant structures. Additionally, these digital annotations can be
used in the training of ML models, which we will focus on later.
The real advantage of digital pathology, however, comes with the exploration of
ML, especially CV.
Figure 1.
Annotations using different tools in QuPath on an IHC stained image. Blue: square, Red: circle, Purple: line,
Black: brush, Green: polygon, Yellow: wand. ([9], CC BY 4.0 https://creativecommons.org/licenses/by/4.0/).
2.2 What is computer vision and why do we need it in pathology
CV is a wide-ranging area with many disciplines and use cases. A fitting
definition comes from Müller, who defined CV as “[...] an interdisciplinary field that deals
with how to automatically process, analyze, and understand images. The general aim
behind computer vision is to build artificial intelligence systems which can automate
visual tasks with a performance similar to or better than humans” [10]. Goodfellow
et al. provide a broader definition: “Computer
vision is a very broad field encompassing a wide variety of ways of processing images,
and an amazing diversity of applications” [11].
2.2.1 Machine learning-based computer vision
ML-based CV is currently the most relevant in the pathological domain for
tumor analysis. It can be generally separated into three groups: classification,
object detection, and segmentation. The different kinds of ML-based CV are
depicted in Figure 2.
Classification is shown in Figure 2A. With this CV approach, an image as a whole
receives one or multiple labels according to its contents. In this case, the label
“Glioblastoma” was chosen, as the WSI includes the neoplastic tissue of a Glioblastoma
(GBM). Other possible labels could be “HE” or “Healthy Tissue,” as those also describe
the content of this picture. However, such information is also easily deducible from
the picture itself for a pathologist and, therefore, does not provide added value.
Figure 2.
Machine Learning-based Computer Vision. (A) Classification of the whole-slide image as a whole with one or
multiple labels. (B) Object detection within an image, which detects different objects of interest and marks them
with a bounding box. (C) Semantic segmentation gives each pixel in the image a label regarding its content. (D)
Instance segmentation detects objects of interest on a pixel level and differentiates between different objects of
the same type.
Figure 2B depicts object detection. With this technique, objects of interest (in
this case, the three largest tumor sections) are detected within an image and marked
accordingly. Like in the figure, this is normally achieved by drawing a bounding box
around the object, showing in which area of the image the object is located. This is
mostly used not for tumor detection but for more granular tasks like cell detection
and quantification.
Semantic segmentation is the most common use case for CV in pathology
(Figure 2C). With this technique, each pixel in the image is assigned a class. In the
shown use case, only two classes are available: “Tumor” (including neoplastic tissue
and necrosis) as well as “Background” (including empty areas and healthy tissue).
Figure 2D shows a version of segmentation that is less common in pathology:
instance segmentation. Instead of assigning a class to each pixel in the image, we focus
on a single kind of entity, in this case, the GBM. Each pixel that belongs to a GBM is
then marked, and all pixels in a single group are assigned to the same instance of this
entity. In the figure, these separate instances are highlighted with different colors, and
the nine biggest ones are further marked with numbers.
The last type of segmentation is panoptic segmentation (not depicted in the
figure). This is a combination of semantic and instance segmentation. Each pixel in
the image is assigned to a class; however, all pixels are also grouped into instances.
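The step from a semantic mask to instances can be made concrete with a minimal connected-components sketch in plain Python: every 4-connected group of "tumor" pixels receives its own instance id, exactly the grouping that instance segmentation adds on top of the per-pixel classes. The function name and the tiny mask are illustrative.

```python
from collections import deque

def semantic_to_instances(mask):
    """Turn a binary semantic mask (1 = tumor, 0 = background) into an
    instance mask: each 4-connected tumor region gets its own id (1, 2, ...)."""
    h, w = len(mask), len(mask[0])
    instances = [[0] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and instances[y][x] == 0:
                next_id += 1
                queue = deque([(y, x)])
                instances[y][x] = next_id
                while queue:  # flood fill the current region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] == 1 and instances[ny][nx] == 0:
                            instances[ny][nx] = next_id
                            queue.append((ny, nx))
    return instances

# Two separate tumor regions in one semantic mask ...
semantic = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
# ... become two distinct instances (ids 1 and 2).
print(semantic_to_instances(semantic))
```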
To understand how to employ ML-based CV in the pathological domain, it is
important to understand the general concept of ML first (cf. Figure 3). The initial
step for each ML model is the definition of its concrete architecture. In this step,
the internals of a new model are defined, usually by taking an existing architecture
and modifying it slightly for the use case. While this step only takes a couple of lines
of programming code, some of the fundamental decisions are made here. After the
model architecture is defined, the training is conducted. In this step, large amounts
of data are fed into the model, which slowly learns its task by ingesting the data
repeatedly. This step requires significant computational resources for digital
pathology models and can take a long time. Afterward, the trained model is available
and can be used on
new data to conduct the introduced CV tasks. This step is called inference. Therefore,
to apply an ML model to a use case, it is important to not only have the model’s code
but also the actual trained model available.
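The three stages above can be sketched in a few lines of plain Python with a deliberately tiny model, a one-feature logistic classifier trained by gradient descent. The model, data, and learning rate are toy assumptions; real digital pathology models have millions of parameters, but the define/train/infer cycle is the same.

```python
import math

# 1) Model definition: a minimal logistic classifier (one weight, one bias).
def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# 2) Iterative training: repeatedly ingest the data, nudging the parameters.
def train(data, epochs=2000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in data:
            p = predict(w, b, x)
            w -= lr * (p - label) * x   # gradient of the log loss
            b -= lr * (p - label)
    return w, b

# Toy data: feature < 0.5 -> class 0, feature > 0.5 -> class 1.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
w, b = train(data)

# 3) Inference: apply the trained model to unseen data.
print(round(predict(w, b, 0.15)))  # class 0
print(round(predict(w, b, 0.85)))  # class 1
```

Note that inference needs both the code (`predict`) and the learned values (`w`, `b`), which mirrors why a usable model consists of the architecture plus the trained weights.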
A large number of images is required to train new models sufficiently. While
this sounds contradictory in (neuro-)pathology, where only a few WSIs are available
and labeling them takes great effort, WSIs are actually perfectly suited to be large
Figure 3.
Machine Learning on a meta level. Model Definition: adapting a Machine Learning architecture to the specific
use case at hand. Iterative Training: training the model repeatedly on the available data while the model is slowly
adapting to its task. Usable Model: the final model after the training consisting of the model architecture and the
learned information.
amounts of data in themselves. A WSI includes vast amounts of data in a single file,
which is not practically processable by ML models as a whole. Therefore, WSIs
are split into smaller sections, so-called tiles, which only show small parts of
the WSI at once. These are between 224 and 1,024 pixels square. With this tiling
technique, a WSI does not provide a single image; instead, it actually provides around
9,500 images at maximum resolution. If empty areas of the WSI are omitted, this can
be approximated to 4,000 to 6,000 usable tiles per WSI. Following this, a single WSI
can already provide a significant amount of data. However, the labeling of a single
WSI still takes a large amount of time, and a robust model still requires training on
tiles of dozens to hundreds of WSIs.
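The tile counts above follow directly from the image dimensions. A minimal sketch (non-overlapping tiles, dimensions as assumed earlier; real pipelines often use overlapping tiles, which yields even more):

```python
import math

def tiles_per_axis(wsi_px, tile_px):
    """Number of non-overlapping tiles needed to cover one WSI axis."""
    return math.ceil(wsi_px / tile_px)

wsi_side = 100_000   # pixels per side of a typical WSI
tile_side = 1_024    # common tile size (224-1,024 px)

per_axis = tiles_per_axis(wsi_side, tile_side)
total = per_axis ** 2
print(per_axis, total)   # 98 tiles per axis, 9,604 in total
```

At 98 x 98 = 9,604 tiles, this matches the roughly 9,500 images per WSI mentioned above; smaller tile sizes like 224 px raise the count dramatically.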
2.2.2 Advantages of computer vision in pathology
The goal of CV is to “[...] build artificial intelligence systems [...] with a
performance similar to or better than humans” [10]. While the advantages of a system
that performs better than a human are quite self-explanatory, systems with a
performance similar to humans already benefit pathologists as well.
With the worker shortage in pathology – especially in neuropathology – and the
ever-increasing workload, new ways have to be found to improve existing processes
in pathology [12]. While a first analysis of a slide can be done in mere seconds by
a trained pathologist, the fine-grained analysis can take up a lot of time. Moreover,
sometimes a diagnosis is not one hundred percent clear, and a second opinion is
needed. While this can already be accelerated by digital pathology, as mentioned in
Section 2.1.2, such a second opinion can also be provided by a CV tool for every
diagnosis in little time [13, 14].
In general, CV-based digital pathology can be used to automate analysis, for
example, the detection of different types of tissue in WSIs [15], and to speed up
analysis processes, for example, by annotating regions of interest in WSIs [16]. It is
even possible to annotate complete WSIs automatically using the right ML model
in mere minutes, compared to the hours required by manual analysis [17].
Furthermore, CV models provide objective ways to quantify expressions in WSIs,
for example, Gleason score grading [18], allowing a more robust foundation for
tumor grading and treatment.
Nowadays, CV systems are already available for some use cases in pathology, for
example, Paige Prostate [19], and there is a large quantity of promising research
showing potential further use cases. The research in the area of CV-assisted digital
pathology is steadily picking up speed, and – with the latest release of new robust
foundation models for digital pathology – this trend is believed to accelerate even
more [4–7].
3. Foundation models for digital pathology
The researchers at Stanford provide us with a general definition for foundation
models: “A foundation model is any model that is trained on broad data [...] that can
be adapted [...] to a wide range of downstream tasks [...]” [20]. To break this down,
foundation models are models that were already trained on a large amount of data
and can “understand” the data of this domain in a general sense. These ML models
can then be specialized for new tasks in this domain with little data. This removes the
need for a large amount of training data and the large computational requirements
mentioned in Section 2.2.1. Instead of training a new model from the bottom up,
a foundation model can be used as a solid foundation, and only a small amount of
annotated data is required.
For example, training a new strong model capable of segmenting neoplastic
tissue of GBMs in WSIs requires tens of thousands of annotated tiles of training data
(requiring a few hundred annotated WSIs). Using a foundation model trained on
HE data (not even GBM-specific), sufficient results can already be achieved with a
few thousand annotated tiles (which only require a few dozen WSIs). As less data is
used during the training process of a new ML model, the training time is also reduced
significantly.
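The principle behind this data efficiency can be sketched without any ML library: a pretrained backbone is kept frozen and reused as a feature extractor, and only a small head is trained on the few labeled samples. Everything below is a toy stand-in; a real backbone would be a deep network with learned weights, not a hand-written function.

```python
# A frozen, pretrained "backbone" (toy stand-in for a foundation model):
# it maps a raw input to features and is never updated during adaptation.
def backbone(x):
    return [x, 1.0]  # hand-crafted toy "feature vector"

# A small trainable head on top of the frozen features (perceptron rule).
def train_head(data, epochs=200, lr=0.1):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, label in data:
            feats = backbone(x)          # backbone output is reused as-is
            score = sum(wi * fi for wi, fi in zip(w, feats))
            pred = 1 if score > 0 else 0
            for i, fi in enumerate(feats):
                w[i] += lr * (label - pred) * fi   # only the head learns
    return w

def classify(w, x):
    return 1 if sum(wi * fi for wi, fi in zip(w, backbone(x))) > 0 else 0

# Only six labeled samples are needed to adapt the head.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
w = train_head(data)
print(classify(w, 0.15), classify(w, 0.85))
```

Because only the tiny head is optimized, far fewer labeled tiles and far less compute are needed than when training the whole model from scratch.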
3.1 What foundation models are available for digital pathology
In 2024, four new foundation models were released: UNI [4], CONCH [5],
Virchow [6], and Prov-GigaPath [7]. Before 2024, other foundation models had
already been released, including REMEDIS from Google [21], CTransPath [22],
Quilt-Net [23], PLIP [24], and Phikon [25]. However, these older models were
outperformed in most benchmarks by the newer foundation model generation from
2024, with only Phikon being in the top 2 in one of six benchmarks [4, 5].
Out of the four new foundation models, UNI, Virchow, and Prov-GigaPath are
solely vision models. This means that the models can conduct the CV tasks introduced
in Section 2. CONCH, on the other hand, is a vision-language model. As a vision-
language model, CONCH can ingest image and text data and consider both in its
image analysis. Besides general CV tasks such as classifying or segmenting, CONCH
can also generate image captions. Using captioning, the model generates a descrip-
tive caption for the provided image instead of only giving it predefined labels as with
classification. This allows for a more detailed description of its contents compared to
classification.
3.2 How to use foundation models for digital pathology
Foundation models can be used as an advanced starting point for the generation
of one’s own ML model. Sometimes, it is even possible to directly apply foundation
models to a specific task without further training. This process, called zero-shot
classification, is available for both CONCH [5] and Prov-GigaPath [7]. Furthermore,
all foundation models mentioned in this chapter are available publicly as pre-trained
models. This means that the model has already been trained on a large amount of
data, and besides the code to recreate the model, the actual model is also available to
be used.
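The mechanism behind zero-shot classification with a vision-language model can be sketched in plain Python: the image and the candidate text labels are embedded into a shared vector space, and the label whose embedding is most similar to the image embedding is chosen, so no task-specific training is needed. The embeddings below are made-up toy vectors; a real model produces high-dimensional ones.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot(image_emb, label_embs):
    """Pick the text label whose embedding is closest to the image embedding."""
    return max(label_embs, key=lambda lbl: cosine(image_emb, label_embs[lbl]))

# Made-up toy embeddings standing in for model outputs.
label_embs = {
    "glioblastoma":   [0.9, 0.1, 0.2],
    "healthy tissue": [0.1, 0.9, 0.3],
}
image_emb = [0.8, 0.2, 0.1]  # embedding of a hypothetical GBM tile
print(zero_shot(image_emb, label_embs))  # picks "glioblastoma"
```

New classes can be added by simply embedding new label texts, which is what makes the approach "zero-shot."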
However, even using a publicly available foundation model with a zero-shot
approach still requires significant effort, decent IT knowledge, and sufficient
hardware. While physicians can learn the required skills independently, this is
not very practical. Foundation models provide a significant boost to research,
but on their own they do not offer direct applicability for physicians.
4. Computer vision for CNS tumors in digital pathology
The options for diagnostic use by physicians are still very limited nowadays,
and the focus of digital pathology overall, as well as for CNS tumors, is still mostly
research-related. The aim of this section is to introduce applicable CV tools and to
give an overview of some “ready-to-use” CV models for CNS tumors. Further, some
input is provided on how to create one’s own models.
4.1 CV tools for digital pathology for CNS tumors
While no ML-based CV tools are approved for diagnostics in neuropathology,
there is a multitude of viewers and similar applications that can already be used for
digital neuropathology.
In general, these solutions are combined with PACSs, or allow their integration,
and provide a general image viewer with the option to run some simple
analyses (e.g., cell quantification). Regarding open-source/free-to-use software, these
applications include tools such as QuPath [26], ORBIT [27], and OME [28]. Besides
their general use as WSI viewers, these tools also allow the integration of ML
algorithms to analyze the images further. The included ML algorithms, however, are
only usable for research, not diagnostics. Furthermore, there is also highly specialized
open-source research software like CellProfiler [29], which has only one purpose: the
quantification and analysis of cells in pathological images using CV.
There is also a large number of software tools provided by larger companies, such
as Roche’s navify [30] or Philips IntelliSite [31]. While these provide WSI viewers
that can be used for diagnostics and allow the integration of ML-based CV models,
no FDA-approved models are available for the CNS domain. Besides the solutions
of established companies, there are also applications from research institutes, like
the Fraunhofer IIS with the MIKAIA software [32]. In its free version, MIKAIA is
a WSI viewer that allows the conversion of file formats and annotation. There is also
a paid version available for research purposes, including many ML-based analysis
algorithms.
Finally, some companies were founded specifically for digital pathology, and some
just for histopathology. These include companies like KML Vision [33], 3DHISTECH
[34], and Paige [35]. In general, these companies also provide some certified WSI
viewers, as well as additional unapproved algorithms, which can, therefore, only be
used in research.
4.2 CV-models for digital pathology for CNS tumors
Besides using tools for digital pathology to assist in the (mostly research-centered)
analysis of CNS tumors, there is also a large quantity of research on specialized CV
models for CNS tumors.
Many studies have been conducted on AI models for CV in digital pathology for
the CNS, as shown by the literature review by Jensen et al. [36], which highlights
68 studies creating CV models specifically for CNS tumors. While some of the
included studies are only abstracts and not all necessarily provide high-quality
models, this shows a significant effort in this area. A shared characteristic of all
those studies, however, is the fact that they were conducted preclinically and worked
on retrospective data, showing the lack of application in clinical use. Another review,
by Redlich et al., includes 83 studies on the use of CV in digital pathology
specifically for glioma-type tumors [37]. Figure 4 shows the steady increase of
studies from 2015 to 2023 (and the first quarter of 2024).
However, in contrast to the foundation models mentioned in Section 3.1, most
of the studies do not provide the model itself, the code to recreate it, and/or did not
conduct reliable validation of their model’s performance on other datasets. This
makes those models an ill fit for any clinical use case. While studies providing their
model’s code, or even the model itself, can be a great starting point for one’s own
research, it is usually a more valid option to use foundation models as a basis for
research and the more specific studies as inspiration for the overall solution approach.
4.3 How can we create our own models/tools
Figure 5 depicts the simplified process for training an ML model based
on histopathological image data. The process is only symbolized, using a single WSI
and a few tiles; in general, many WSIs should be used to generate robust models.
Figure 5A.1 shows the input WSI for the process, while Figure 5A.2 shows the
corresponding label WSI, marking neoplastic areas in the image. During
preprocessing, areas of the WSI containing tissue are detected (Figure 5B.1), and
the image is separated into tiles of smaller resolution (Figure 5B.2). Further
preprocessing steps should also be conducted on WSI and tile level, for example,
color normalization [38], data augmentation [39], artifact detection [40], and label
cleanup [41]. However, these are omitted for simplicity in this overview. After
preprocessing, tiles are selected for the training process (Figure 5C.1), and their
corresponding label tiles are retrieved from the provided label WSI (Figure 5C.2).
Afterward, the training is conducted on a multitude of tiles, running the CV model
over the data multiple times until the prediction (Figure 5D, right column) fits the
corresponding label (Figure 5D, center column). When the highest possible overlap
is achieved, the model training is terminated, and the resulting CV model can be
used to segment complete WSIs on its own, as shown in Figure 5E. The process is
the same for classification tasks; however, instead of an image label, a “tag” or
number value is supplied as the label, for example, “tumor,” “gbm,” 1, 2, 3.
While creating your own CV models may seem very complex at the start, there are
actually some tools available to assist in their creation. The most difficult step in this
Figure 4.
Number of studies conducted on the application of Computer Vision in the analysis of gliomas, and the
addressed task, until March 18, 2024. ([37], CC BY 4.0 https://creativecommons.org/licenses/by/4.0/).
endeavor is the data collection and preparation. While there are some tools to auto-
mate the required ML model training and creation, data preprocessing, in general,
requires a large amount of manual labor.
An exception to this can be found in the Slideflow project [42]. While this program
requires the installation of the Python programming language and a few different
Python packages, it also provides a graphical user interface that can be used for data
processing, labeling, training ML models, and running the trained models on
new data. However, compared to other solutions presented in the following part
Figure 5.
Process for training a Computer Vision model based on histopathological image data. (A) Providing a
Hematoxylin and Eosin-stained whole-slide image and the corresponding label (in this case neoplastic tissue). (B)
Preprocessing of the whole-slide image, detecting tissue, and tiling. (C) Select suitable tiles for the training process
and return the corresponding label. (D) Training of the Computer Vision model based on many tiles and their
labels (left: input tile, center: label, right: prediction). (E) Resulting segmentation for the complete whole-slide
image after training.
of this section, the project limits its users in their options to fully customize their
approaches. Therefore, Slideflow provides a great first step for training your own CV
models for digital pathology, but it should not be seen as the final solution.
4.3.1 Preparing the data
As a first step, data must be available to train a CV model. An easy way to get
accustomed to ML data preparation and training can be the use of a public dataset
like the Ivy Glioblastoma Atlas (IGA) [43, 44] or The Cancer Genome Atlas (TCGA)
[45]. The IGA provides labeled WSIs of GBMs from a multitude of patients, which
can be used as a starting point for first models. Software tools like PathML [46]
or histolab [47] can provide valuable assistance during the required data preparation.
However, both still require knowledge of the Python programming language.
4.3.2 Getting to machine learning
There are a couple of semi-automated solutions for creating and training a CV
model. For classification tasks, the AUCMEDI framework [48] can be used to
automatically create new CV models within a few lines of Python code. However,
providing the data in the right format is a prerequisite for this. Like AUCMEDI,
MONAI can be used for segmentation tasks, allowing the semi-automatic creation
of new segmentation CV models [49].
Providing even more automation, nnU-Net v2 [50] and deepflash2 [51] allow
the creation of CV models without writing a single line of code. While both tools
are quite accessible to computer scientists, they still require a technical setup, and
nnU-Net does not provide a graphical user interface as of yet [52]. Despite these
limitations, it is possible for neuropathologists to use them effectively, and further
research is ongoing to ease accessibility for medical professionals.
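Even with such "no-code" tools, a dataset description file usually has to be prepared. As a rough sketch, a dataset description for nnU-Net v2 can be generated with a few lines of standard-library Python; the keys follow the dataset format described in the nnU-Net documentation, while the channel name, label scheme, and file ending below are hypothetical placeholders that must be adapted to one's own data.

```python
import json

# Hypothetical GBM segmentation dataset description for nnU-Net v2.
dataset = {
    "channel_names": {"0": "HE"},             # placeholder input channel name
    "labels": {"background": 0, "tumor": 1},  # placeholder label scheme
    "numTraining": 40,                        # number of labeled training cases
    "file_ending": ".png",
}

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)

print(sorted(dataset.keys()))
```

Once the images and this file are placed in the folder layout nnU-Net expects, planning, preprocessing, and training run from the command line without further coding.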
5. Conclusions
In conclusion, the integration of computer vision-based digital pathology repre-
sents a significant advancement in the field of CNS tumor diagnostics and treatment.
This chapter has outlined state-of-the-art technologies and current advances, demon-
strating how digital pathology can revolutionize pathology across various domains,
including the CNS.
Despite the considerable progress, the landscape of ready-to-use solutions remains
sparse, with a predominance of specialized models and applications tailored for
specific research purposes. This gap underscores the importance of collaboration
between physicians and the academic community. Engaging with local IT faculties
and reaching out to researchers developing these models can foster fruitful partner-
ships, accelerating the adoption and customization of digital pathology tools in
clinical settings.
As this field continues to evolve, the potential for computer vision-based applica-
tions in CNS tumor neuropathology is immense. By staying informed and actively
participating in ongoing research and development, physicians can leverage these
cutting-edge technologies to enhance diagnostic accuracy, improve patient outcomes,
and ultimately transform the practice of neuropathology.
Author details
Daniel Hieber1,2,3*, Felix Holl1, Vera Nickl4, Friederike Liesche-Starnecker2
and Johannes Schobel1
1 DigiHealth Institute, Neu-Ulm University of Applied Sciences, Neu-Ulm, Germany
2 Department of Neuropathology, Pathology, Medical Faculty, University of
Augsburg, Augsburg, Germany
3 Institute of Medical Data Science, University Hospital Würzburg, Würzburg,
Germany
4 Department of Neurosurgery, Section Experimental Neurosurgery, University
Hospital Würzburg, Würzburg, Germany
*Address all correspondence to: daniel.hieber@hnu.de
Conflict of interest
The authors declare no conflict of interest. The authors are in no way affiliated
with the products mentioned in this chapter.
Abbreviations
AI artificial intelligence
ML machine learning
CV computer vision
FDA U.S. food & drug administration
CNS central nervous system
HE hematoxylin and eosin
IHC immunohistochemical
IF immunofluorescence
TIF tagged image file
TIFF tagged image file format
PACS picture archive and communication system
GBM glioblastoma
© 2024 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Perspective Chapter: Computer Vision-Based Digital Pathology for Central Nervous System...
DOI: http://dx.doi.org/10.5772/intechopen.1007366
References
[1] Joshi G, Jain A, Araveeti SR, Adhikari S, Garg H, Bhandari M. FDA-approved artificial intelligence and machine learning (AI/ML)-enabled medical devices: An updated landscape. Electronics. 2024;13(3):498. Available from: https://www.mdpi.com/2079-9292/13/3/498
[2] Najjar R. Redefining radiology: A review of artificial intelligence integration in medical imaging. Diagnostics. 2023;13(17):2760. Available from: https://www.mdpi.com/2075-4418/13/17/2760
[3] U.S. Food & Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices; 2024. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[4] Chen RJ, Ding T, Lu MY, Williamson DFK, Jaume G, Song AH, et al. Towards a general-purpose foundation model for computational pathology. Nature Medicine. 2024;30(3):850-862. Available from: https://www.nature.com/articles/s41591-024-02857-3
[5] Lu MY, Chen B, Williamson DFK, Chen RJ, Liang I, Ding T, et al. A visual-language foundation model for computational pathology. Nature Medicine. 2024;30(3):863-874. Available from: https://www.nature.com/articles/s41591-024-02856-4
[6] Vorontsov E, Bozkurt A, Casson A, Shaikovski G, Zelechowski M, Liu S, et al. Virchow: A million-slide digital pathology foundation model. arXiv. 2024. ArXiv:2309.07778 [cs, eess, q-bio]. Available from: http://arxiv.org/abs/2309.07778
[7] Xu H, Usuyama N, Bagga J, Zhang S, Rao R, Naumann T, et al. A whole-slide foundation model for digital pathology from real-world data. Nature. 2024;630(8015):181-188. Available from: https://www.nature.com/articles/s41586-024-07441-w
[8] Taubman DS, Marcellin MW, editors. JPEG2000: Image Compression Fundamentals, Standards, and Practice. Softcover reprint of the hardcover 1st edition 2002, third printing. No. 642 in the Kluwer International Series in Engineering and Computer Science. New York: Springer Science + Business Media, LLC; 2004
[9] QuPath Team. QuPath Documentation - Annotating Images. Available from: https://qupath.readthedocs.io/en/stable/docs/starting/annotating.html
[10] Müller D. Frameworks in Medical Image Analysis with Deep Neural Networks. Augsburg: University Augsburg; 2023. Available from: https://opus.bibliothek.uni-augsburg.de/opus4/104248
[11] Goodfellow I, Bengio Y, Courville A. Deep Learning. Cambridge, Massachusetts: MIT Press; 2016. 775 p. (Adaptive Computation and Machine Learning)
[12] The Royal College of Pathologists.
The Pathology Workforce. Available
from: https://www.rcpath.org/discover-
pathology/public-affairs/the-pathology-
workforce.html
[13] Rakha EA, Toss M, Shiino S, Gamble P, Jaroensri R, Mermel CH, et al. Current and future applications of artificial intelligence in pathology: A clinical perspective. Journal of Clinical Pathology. 2021;74(7):409-414. Available from: https://jcp.bmj.com/lookup/doi/10.1136/jclinpath-2020-206908
[14] McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, et al. Artificial intelligence in digital pathology: A systematic review and meta-analysis of diagnostic test accuracy. npj Digital Medicine. 2024;7(1):114. Available from: https://www.nature.com/articles/s41746-024-01106-8
[15] Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. The Lancet Oncology. 2019;20(5):e253-e261. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1470204519301548
[16] Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, et al. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. Journal of Pathology Informatics. 2023;14:100335. Available from: https://linkinghub.elsevier.com/retrieve/pii/S2153353923001499
[17] Chen C, Lu MY, Williamson DFK, Chen TY, Schaumberg AJ, Mahmood F. Fast and scalable search of whole-slide images via self-supervised deep learning. Nature Biomedical Engineering. 2022;6(12):1420-1434. Available from: https://www.nature.com/articles/s41551-022-00929-8
[18] Müller D, Meyer P, Rentschler L, Manz R, Bäcker J, Cramer S, et al. DeepGleason: A system for automated Gleason grading of prostate cancer using deep neural networks. arXiv. 2024. ArXiv:2403.16678 [cs, eess, q-bio]. Available from: http://arxiv.org/abs/2403.16678
[19] Raciti P, Sue J, Ceballos R, Godrich R, Kunz JD, Kapur S, et al. Novel artificial intelligence system increases the detection of prostate cancer in whole slide images of core needle biopsies. Modern Pathology. 2020;33(10):2058-2066. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0893395222004756
[20] Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, et al. On the opportunities and risks of foundation models. arXiv. 2022. ArXiv:2108.07258 [cs]. Available from: http://arxiv.org/abs/2108.07258
[21] Azizi S, Culp L, Freyberg J, Mustafa B, Baur S, Kornblith S, et al. Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging. Nature Biomedical Engineering. 2023;7(6):756-779. Available from: https://www.nature.com/articles/s41551-023-01049-7
[22] Wang X, Yang S, Zhang J, Wang M, Zhang J, Yang W, et al. Transformer-based unsupervised contrastive learning for histopathological image classification. Medical Image Analysis. 2022;81:102559. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1361841522002043
[23] Ikezogwo WO, Seyfioglu MS, Ghezloo F, Geva D, Mohammed FS, Anand PK, et al. Quilt-1M: One million image-text pairs for histopathology. Advances in Neural Information Processing Systems. 2023;36(DB1):37995-38017
[24] Huang Z, Bianchi F, Yuksekgonul M, Montine TJ, Zou J. A visual-language foundation model for pathology image analysis using medical Twitter. Nature Medicine. 2023;29(9):2307-2316. Available from: https://www.nature.com/articles/s41591-023-02504-3
[25] Filiot A, Ghermi R, Olivier A, Jacob P, Fidon L, Mac Kain A, et al. Scaling self-supervised learning for histopathology with masked image modeling. 2023. Available from: http://medrxiv.org/lookup/doi/10.1101/2023.07.21.23292757
[26] Bankhead P, Loughrey MB, Fernández JA, Dombrowski Y, McArt DG, Dunne PD, et al. QuPath: Open source software for digital pathology image analysis. Scientific Reports. 2017;7(1):16878. Available from: https://www.nature.com/articles/s41598-017-17204-5
[27] Stritt M, Stalder AK, Vezzali E. Orbit image analysis: An open-source whole slide image analysis tool. PLOS Computational Biology. 2020;16(2):e1007313. Available from: https://dx.plos.org/10.1371/journal.pcbi.1007313
[28] Goldberg IG, Allan C, Burel JM, Creager D, Falconi A, Hochheiser H, et al. The open microscopy environment (OME) data model and XML file: Open tools for informatics and quantitative analysis in biological imaging. Genome Biology. 2005;6(5):R47. Available from: http://genomebiology.biomedcentral.com/articles/10.1186/gb-2005-6-5-r47
[29] Lamprecht MR, Sabatini DM, Carpenter AE. CellProfiler™: Free, versatile software for automated biological image analysis. BioTechniques. 2007;42(1):71-75. Available from: https://www.tandfonline.com/doi/full/10.2144/000112257
[30] Thomas M, O'Shea B, Zerbini CEH, von Meyenn M, Heinzmann S, Freund R, et al. Aiming for higher ambition: The Roche approach to cracking the code of cancer. Nature Portfolio. 2021. Available from: https://www.nature.com/articles/d42473-020-00399-z
[31] Voelker R. Digitized surgical slides. Journal of the American Medical Association. 2017;317(19):1942. Available from: http://jama.jamanetwork.com/article.aspx?doi=10.1001/jama.2017.5540
[32] Fraunhofer IIS. MIKAIA [Internet]. Available from: https://www.iis.fraunhofer.de/en/ff/sse/health/medical-image-analysis/mikaia.html [Accessed: August 14, 2024]
[33] KML Vision GmbH. KML Vision.
Available from: https://www.kmlvision.
com
[34] 3DHISTECH Ltd. 3DHISTECH. Available from: https://www.3dhistech.com
[35] Paige.AI, Inc. Paige. Available from: https://paige.ai/
[36] Jensen MP, Qiang Z, Khan DZ, Stoyanov D, Baldeweg SE, Jaunmuktane Z, et al. Artificial intelligence in histopathological image analysis of central nervous system tumours: A systematic review. Neuropathology and Applied Neurobiology. 2024;50(3):e12981. Available from: https://onlinelibrary.wiley.com/doi/10.1111/nan.12981
[37] Redlich JP, Feuerhake F, Weis J, Schaadt NS, Teuber-Hanselmann S, Buck C, et al. Applications of artificial intelligence in the analysis of histopathology images of gliomas: A review. npj Imaging. 2024;2(1):16. Available from: https://www.nature.com/articles/s44303-024-00020-8
[38] Hoque MZ, Keskinarkaus A, Nyberg P, Seppänen T. Stain normalization methods for histopathology image analysis: A comprehensive review and experimental comparison. Information Fusion. 2024;102:101997. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1566253523003135
[39] Faryna K, Van Der Laak J, Litjens G. Automatic data augmentation to improve generalization of deep learning in H&E stained histopathology. Computers in Biology and Medicine. 2024;170:108018. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0010482524001021
[40] Kanwal N, López-Pérez M, Kiraz U, Zuiverloon TCM, Molina R, Engan K. Are you sure it's an artifact? Artifact detection and uncertainty quantification in histological images. Computerized Medical Imaging and Graphics. 2024;112:102321. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0895611123001398
[41] Bernhardt M, Castro DC, Tanno R, Schwaighofer A, Tezcan KC, Monteiro M, et al. Active label cleaning for improved dataset quality under resource constraints. Nature Communications. 2022;13(1):1161. Available from: https://www.nature.com/articles/s41467-022-28818-3
[42] Dolezal JM, Kochanny S, Dyer E, Ramesh S, Srisuwananukorn A, Sacco M, et al. Slideflow: Deep learning for digital histopathology with real-time whole-slide visualization. BMC Bioinformatics. 2024;25(1):134. Available from: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-024-05758-x
[43] Puchalski RB, Shah N, Miller J, Dalley R, Nomura SR, Yoon JG, et al. An anatomic transcriptional atlas of human glioblastoma. Science (New York, N.Y.). 2018;360(6389):660-663
[44] Ivy Glioblastoma Atlas. Available from: https://glioblastoma.alleninstitute.org
[45] The Cancer Genome Atlas. Available from: https://www.cancer.gov/tcga
[46] Rosenthal J, Carelli R, Omar M, Brundage D, Halbert E, Nyman J, et al. Building tools for machine learning and artificial intelligence in cancer research: Best practices and a case study with the PathML toolkit for computational pathology. Molecular Cancer Research. 2022;20(2):202-206. Available from: https://aacrjournals.org/mcr/article/20/2/202/678062/Building-Tools-for-Machine-Learning-and-Artificial
[47] Colling R, Pitman H, Oien K, Rajpoot N, Macklin P, CM-Path AI in Histopathology Working Group, et al. Artificial intelligence in digital pathology: A roadmap to routine use in clinical practice. The Journal of Pathology. 2019;249(2):143-150. Available from: https://pathsocjournals.onlinelibrary.wiley.com/doi/10.1002/path.5310
[48] Mayer S, Müller D, Kramer F. Standardized medical image classification across medical disciplines. arXiv. 2022. ArXiv:2210.11091 [cs, eess]. Available from: http://arxiv.org/abs/2210.11091
[49] Cardoso MJ, Li W, Brown R, Ma N, Kerfoot E, Wang Y, et al. MONAI: An open-source framework for deep learning in healthcare. arXiv. 2022. ArXiv:2211.02701 [cs]. Available from: http://arxiv.org/abs/2211.02701
[50] Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods. 2021;18(2):203-211. Available from: https://www.nature.com/articles/s41592-020-01008-z
[51] Griebel M, Segebarth D, Stein N, Schukraft N, Tovote P, Blum R, et al. Deep learning-enabled segmentation of ambiguous bioimages with deepflash2. Nature Communications. 2023;14(1):1679. Available from: https://www.nature.com/articles/s41467-023-36960-9
[52] Hieber D, Haisch N, Grambow G, Holl F, Liesche-Starnecker F, Pryss R, et al. Comparing nnU-Net and deepflash2 for histopathological tumor segmentation. In: Mantas J, Hasman A, Demiris G, Saranto K, Marschollek M, Arvanitis TN, et al., editors. Studies in Health Technology and Informatics [Internet]. IOS Press; 2024. Available from: https://ebooks.iospress.nl/doi/10.3233/SHTI240487