Lydia Lindner, Birgit Pfarrkirchner, Christina Gsaxner, Dieter Schmalstieg, Jan Egger, "TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches," Proc. SPIE 10579, Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications, 105791C (6 March 2018); doi: 10.1117/12.2315704
TuMore: Generation of Synthetic Brain Tumor MRI Data for
Deep Learning Based Segmentation Approaches
Lydia Lindnera,b, Birgit Pfarrkirchnera,b, Christina Gsaxnera,b, Dieter Schmalstiega and Jan Eggera,b,c
a TU Graz, Institute for Computer Graphics and Vision, Inffeldgasse 16c/II, 8010 Graz, Austria
b Computer Algorithms for Medicine (Cafe) Laboratory, 8010 Graz, Styria, Austria
c BioTechMed-Graz, Krenngasse 37/1, 8010 Graz, Austria
ABSTRACT
Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is
critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most
challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming and neither
accurate nor reliable, there exists a need for objective, robust and fast automated segmentation methods that provide
competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image
segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in
domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method
that makes it possible to create a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic
brain tumors - more specifically, glioblastomas - and the corresponding ground truth, which can subsequently be used to
train deep neural networks.
Keywords: Synthetic Brain Tumor, Glioblastoma, Segmentation, Deep Learning, Data Augmentation, Hybrid MRI.
1. DESCRIPTION OF PURPOSE
In current clinical practice, the interpretation of medical images is mainly done by human experts such as radiologists and
physicians [1]-[7]. MRI (Magnetic Resonance Imaging) images are assessed either based on qualitative criteria, for
example the characteristic occurrence of hyper-intense tissue in contrast-enhanced T1-weighted MRI images, or by relying on
basic quantitative measures such as the largest axial diameter of the tumor [8]. Since manual segmentation of brain tumors is
time-consuming, subjective and prone to error, there exists a need for fast, objective and precise automated segmentation
methods. Therefore, deep learning based approaches are gaining increasing interest in the field of medical image segmentation.
When the training data set is large enough, deep learning approaches can be extremely effective. However, in the medical
field there are usually only limited data samples available, since medical data is heavily protected due to privacy concerns.
Hence, one big challenge of using deep learning approaches for medical image segmentation lies in augmenting the
available data set to build deep models without overfitting the training data. A basic, commonly accepted technique for
augmenting image data is to perform geometric and color augmentations [1], [9]. Instead of using these basic data
augmentation techniques, our approach generates synthetic glioblastomas and inserts them into MRI scans of healthy
subjects. The resulting "hybrid" MRI images can subsequently be used to train a deep neural network for automatic tumor
segmentation.
Glioblastomas (GBMs) are rather large tumors that have thick, irregular-enhancing margins with a central necrotic core
[10], [11]. Since it is not possible for the brain to move outside the surrounding skull and thereby make room for a growing
mass, tumors displace and compress the surrounding brain tissue. This process is called the tumor mass effect [12]. In
clinical practice of oncology and diagnostic medicine, typically only the enhancing tumor and necrotic core are
segmented, since these regions are of primary interest. Segmentations of structures other than the brain tumor are usually not
performed, since the segmentation of edema etc. is extremely challenging and would therefore not represent the truth very well [13].
Hence, the proposed method simulates only the enhancing tumor and the necrotic core of a glioblastoma, together with
the tumor mass effect.
For the implementation of the proposed method, we used MeVisLab, a rapid prototyping and development platform for
medical image processing, visualization, and image interaction. Image processing and interactive image manipulation can
be achieved by building networks constructed from preexisting modules, macro modules created via Python
scripting, and individual modules implemented in C++ [14]-[17].
2. METHODS
The tumor generation algorithm was based on a special type of polyhedron, namely an icosahedron. A regular icosahedron
has 12 polyhedron vertices, 30 polyhedron edges and 20 equivalent equilateral polyhedron faces [18]. In order to obtain
an approximation of a sphere, the icosahedron was recursively refined by dividing each of the existing faces into three
equivalent triangles and repeating the procedure five times [19]-[21]. This results in a polyhedron with 2432 vertices,
which is called the “tumor base”.
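To make the construction concrete, the refinement scheme can be sketched in standalone Python; the golden-ratio vertex construction, the projection of new vertices onto the unit sphere and all helper names are our own illustration, not code from the TuMore implementation:

```python
import numpy as np
from itertools import combinations

PHI = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio

def icosahedron():
    """Vertices (12 x 3) and faces (20 index triples) of a regular icosahedron."""
    v = []
    for a in (-1.0, 1.0):
        for b in (-PHI, PHI):
            v += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
    verts = np.array(v)
    verts /= np.linalg.norm(verts[0])  # place all vertices on the unit sphere
    # A face is a triple of mutually adjacent vertices (pairwise edge distance).
    edge = min(np.linalg.norm(verts[i] - verts[j])
               for i, j in combinations(range(12), 2))
    faces = [f for f in combinations(range(12), 3)
             if all(np.isclose(np.linalg.norm(verts[a] - verts[b]), edge)
                    for a, b in combinations(f, 2))]
    return verts, faces

def refine(verts, faces):
    """Split every face into three triangles by inserting its centroid,
    projected back onto the unit sphere for a better sphere approximation."""
    verts = [tuple(p) for p in verts]
    new_faces = []
    for i, j, k in faces:
        c = np.mean([verts[i], verts[j], verts[k]], axis=0)
        verts.append(tuple(c / np.linalg.norm(c)))
        m = len(verts) - 1
        new_faces += [(i, j, m), (j, k, m), (k, i, m)]
    return np.array(verts), new_faces

verts, faces = icosahedron()
for _ in range(5):        # five refinement passes, as in the paper
    verts, faces = refine(verts, faces)
print(len(verts))         # 2432 vertices: the "tumor base"
```

Each pass adds one vertex per face and triples the number of faces, so five passes yield 12 + 20 + 60 + 180 + 540 + 1620 = 2432 vertices.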
In order to obtain the typical irregular shape of a glioblastoma, a special displacement algorithm was applied to the
vertices of the tumor base. The idea behind this displacement algorithm is to randomly choose one vertex v_rand of the
polyhedron, according to a uniform distribution. Then, the distance between v_rand and the center of the polyhedron is
increased, which results in an elevation of the corresponding vertex. To make the deformation look natural, the remaining
vertices are displaced accordingly as well. In order to determine the strength of displacement for each vertex - which
decreases with distance to v_rand - all vertices are categorized into displacement levels k, according to the minimum
number of edges that are required to reach v_rand (for example, direct neighbor vertices of v_rand are in displacement
level k=1). Finally, the displacement of v_rand and the remaining vertices is achieved by multiplying the x, y and z
coordinates of each vertex with the factor f_k defined by formula (1), where k denotes the displacement level of the
corresponding vertex, d (which is randomly chosen according to a uniform distribution on the interval [1, 2.5]) indicates
the initial strength of displacement and m is a decay factor of 0.97.

f_k = 1 + (d - 1) · m^k   (1)

This process of randomly choosing a vertex, displacing it and subsequently displacing the remaining vertices is repeated
seven times in total.
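A minimal Python sketch of this displacement scheme, assuming the tumor base is centered at the origin and reading formula (1) as above; the adjacency helper and the fixed random seed are illustrative, and the original GenerateTumor module is a MeVisLab module rather than this code:

```python
import random
from collections import deque

def build_adjacency(n_verts, faces):
    """Vertex adjacency of the triangle mesh (needed to count edge hops)."""
    adj = [set() for _ in range(n_verts)]
    for i, j, k in faces:
        adj[i] |= {j, k}
        adj[j] |= {i, k}
        adj[k] |= {i, j}
    return adj

def displacement_levels(adj, v_rand):
    """Breadth-first search: level k of a vertex is the minimum number of
    edges required to reach v_rand (direct neighbors get k=1)."""
    level = {v_rand: 0}
    queue = deque([v_rand])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                level[w] = level[v] + 1
                queue.append(w)
    return level

def displace(verts, adj, m=0.97, repetitions=7, rng=random.Random(42)):
    """Randomly pick a vertex, elevate it, and displace all other vertices
    with a strength that decays with their displacement level."""
    verts = verts.copy()
    for _ in range(repetitions):              # repeated seven times in total
        v_rand = rng.randrange(len(verts))    # uniformly chosen vertex
        d = rng.uniform(1.0, 2.5)             # initial strength of displacement
        for v, k in displacement_levels(adj, v_rand).items():
            f_k = 1.0 + (d - 1.0) * m**k      # factor from formula (1)
            verts[v] *= f_k                   # scales the distance to the center
    return verts

# Example: deform the tumor base produced by the refinement sketch above.
# bumpy = displace(verts, build_adjacency(len(verts), faces))
```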
A simplified tumor mass effect was simulated by deforming the brain MRI image around the artificial glioblastoma
according to a displacement field. For this purpose, the “ImageWarp” module of MeVisLab - which performs backward
deformation - was applied. A simple sphere with variable radius was used to calculate the dense vector field. Since the
force applied to the brain by a tumor mass is an outward radial force that originates from the initial tumor region and
weakens with distance [13], the required displacement field was constructed in the following way: First, the 3D Euclidean
distance transform of the sphere was calculated to obtain an intensity that gradually decreases from the tumor center to
the edges. Then, a gradient filter was applied to the Euclidean distance transform, leading to a dense vector field.
Afterwards, the Euclidean distance transform was normalized and multiplied with a factor that can be varied depending on
the desired strength of deformation. Finally, the result was multiplied with the previously calculated gradient field.
This yields a displacement field in which the strength of the deformation weakens with distance.
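The field construction can be imitated with plain numpy/scipy in place of the MeVisLab modules. Inverting the distance transform so that the intensity peaks at the tumor center, as well as the strength parameter and function names, reflect our reading of the description above, not the actual implementation:

```python
import numpy as np
from scipy import ndimage

def mass_effect_field(shape, center, radius, strength=8.0):
    """Dense displacement field for the simplified tumor mass effect."""
    # Simple sphere with variable radius at the deformation seed point.
    grid = np.indices(shape).astype(float)
    r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center)))
    sphere = r <= radius

    # 3D Euclidean distance transform of the sphere, inverted so that the
    # intensity gradually decreases from the tumor center to the edges.
    dist = ndimage.distance_transform_edt(~sphere)
    intensity = dist.max() - dist

    # Gradient filter on the distance transform -> dense vector field.
    # The vectors point towards the tumor; under a *backward* warp this
    # pushes the surrounding tissue radially outward.
    gradient = np.stack(np.gradient(intensity))

    # Normalize and scale with a user-chosen factor, then multiply with the
    # gradient field: the deformation weakens with distance from the tumor.
    weight = strength * intensity / intensity.max()
    return gradient * weight  # shape (3, Z, Y, X)

def backward_warp(volume, field):
    """Backward deformation (conceptually what the ImageWarp module does):
    every output voxel samples the input volume at its displaced position."""
    coords = np.indices(volume.shape).astype(float) + field
    return ndimage.map_coordinates(volume, coords, order=1)
```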
The network implemented in MeVisLab can be seen in Figure 1. An existing MRI image of a healthy subject is loaded into the
framework and subsequently reformatted to axial view. Additionally, the intensity range of the image is adjusted
according to a reference. The SetSeedPoint module is used to manually set a seed point (by simply clicking on the desired
position in the MRI scan) at which the artificial glioblastoma will be inserted. This is done to ensure that the tumor is
located within the brain region. The individual GenerateTumor module generates artificial glioblastomas using the
previously described algorithm. The synthetic tumor is provided at the output as a visual scene graph (Inventor Scene)
and has to be voxelized and further processed to comply with the basic radiographic features of real glioblastomas as
they are visible in post-contrast T1-weighted (T1Gd) brain MRI scans. This is done with the ProcessTumor module: First,
the tumor is translated according to the selected seed point. Then, a filled voxelization (the fill color is a dark gray,
which represents necrosis) is applied. Afterwards, a border is added to the tumor surface (the border color is a light
gray, which represents the contrast-enhancing surface). The voxelized tumor is slightly blurred to smooth the edges
between the fill color and the border color, and uniform noise is added to achieve a realistic-looking appearance of the
tumor. Furthermore, the ProcessTumor module inserts the synthetic glioblastoma into the existing MRI image and creates
the corresponding ground truth using the IntervalThreshold module of MeVisLab. The internal network of this module can
be seen in Figure 2.
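The appearance processing could look roughly as follows in numpy/scipy. The gray values, rim width and noise amplitude are illustrative assumptions, and the ground truth is kept directly as the voxel mask rather than being recovered with MeVisLab's IntervalThreshold module:

```python
import numpy as np
from scipy import ndimage

def insert_tumor(mri, tumor_mask, necrosis_gray=60, rim_gray=160,
                 rim_width=2, noise_amplitude=10,
                 rng=np.random.default_rng(0)):
    """Mimic the ProcessTumor steps: filled voxelization (dark gray necrosis),
    light gray enhancing border, slight Gaussian blur, uniform noise, and
    insertion into the host MRI. Gray values here are illustrative only."""
    # Enhancing rim: mask voxels removed by a small binary erosion.
    core = ndimage.binary_erosion(tumor_mask, iterations=rim_width)
    tumor = np.zeros_like(mri, dtype=float)
    tumor[core] = necrosis_gray                   # central necrotic core
    tumor[tumor_mask & ~core] = rim_gray          # contrast-enhancing margin

    # Slight blur smooths the edge between fill and border color,
    # then uniform noise gives a more realistic texture.
    tumor = ndimage.gaussian_filter(tumor, sigma=1.0)
    tumor[tumor_mask] += rng.uniform(-noise_amplitude, noise_amplitude,
                                     size=int(tumor_mask.sum()))

    # Insert the synthetic glioblastoma into the healthy scan; the mask
    # itself is the exact, expert-free ground truth.
    hybrid = mri.astype(float).copy()
    hybrid[tumor_mask] = tumor[tumor_mask]
    ground_truth = tumor_mask.astype(np.uint8) * 255
    return hybrid, ground_truth
```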
The DeformationOfBrainMass module is used to simulate the tumor mass effect according to the previously described
algorithm. In order to manually set a seed point (again by clicking on the desired position in the MRI scan) that defines
the center of the brain mass deformation, the SetSeedPointDeform module can be used. The final brain MRI image
(containing the synthetic glioblastoma) and the corresponding ground truth are saved as single slices in TIFF format
using the SaveSlices module, which is an adapted version of the SaveAsSingleSlices module from related work [22]. The
corresponding MeVisLab network and the C++/Python source code are freely available [23]. The MRI data used in this work
was obtained from the Human Connectome Project (HCP) database [24]-[26]. The 1200 Subjects Data Release contains 3T MR
imaging data from 1206 healthy young adult participants (1113 with structural MR scans).
Fig. 1: Network for generating synthetic brain tumor MRI data, implemented in MeVisLab.
Fig. 2: The internal network of the ProcessTumor module.
3. RESULTS
In Figure 3, four examples of brain MRI images with real glioblastomas and a real tumor mass effect (upper row) and
four brain MRI images containing synthetic glioblastomas with a simulated tumor mass effect (lower row) can be seen.
The typical radiographic features of glioblastomas (thick, irregular-enhancing margins with a central necrotic core) are
clearly visible in the synthetic brain MRI data. As can be seen by comparing the real MRI images in the upper row with
the synthetic MRI images in the lower row, the proposed method makes it possible to generate quite realistic-looking
glioblastomas that comply with the basic radiographic features of real glioblastomas, and to simulate a simplified tumor
mass effect.
Fig. 3: Brain MRI images with real glioblastomas and a real tumor mass effect (upper row) and brain MRI images
containing synthetic glioblastomas with a simulated tumor mass effect (lower row).
A brain MRI image containing a synthetic tumor (left) and the corresponding ground truth (right) can be seen in Figure
4. In Figure 5, the impact of the tumor mass effect can be seen by comparing a brain MRI image containing an artificial
glioblastoma without a simulated tumor mass effect (left) and the same MRI image containing the same artificial
glioblastoma with a simulated tumor mass effect (right).
Fig. 4: Brain MRI image containing a synthetic tumor
(left) and the corresponding ground truth (right).
Fig. 5: Brain MRI image without tumor mass effect
(left) and with tumor mass effect (right).
4. CONCLUSIONS
With the proposed method, it is possible to generate realistic-looking glioblastomas, insert them into brain MRI images
of healthy subjects and create the corresponding ground truth in a very precise way. Furthermore, it is possible to
simulate the tumor mass effect. The goal of the proposed method was to empirically generate sufficiently realistic brain
MRI images and the corresponding ground truth that can subsequently be used to train a deep neural network. Accurate
modeling of brain tumor growth at the cell level is beyond the scope of this work. The proposed method is simple, but it
is still able to simulate all crucial characteristics of glioblastomas (the appearance of the enhancing tumor and the
necrotic core) in contrast-enhanced T1-weighted MRI images. In addition, an easy application via MeVisLab is presented
(one-click seed point selection). One advantage of inserting artificial brain tumors into MRI images of healthy subjects
is that the position, shape and size of the tumor are already exactly known. Therefore, a very precise ground truth can
easily be created without any action required from a human expert. Additionally, it is possible to insert several
artificial tumors into the same brain MRI image, thereby maximizing the size of the resulting dataset. Since the brain
MRI images of healthy subjects into which the artificial tumors are inserted are not enhanced with the contrast agent
gadolinium, a further adaptation of the proposed approach could be to simulate the effect of gadolinium in brain MRI
images [27]. Another adaptation could be the simulation of edema or fiber bundles [28], [29]. The proposed approach may
also be applied to other areas of clinical oncology and diagnostic medicine (e.g., the simulation of liver tumors [30]).
ACKNOWLEDGEMENTS
The work received funding from BioTechMed-Graz in Austria (“Hardware accelerated intelligent medical imaging”) and the
6th Call of the Initial Funding Program from the Research & Technology House (F&T-Haus) at Graz University of Technology
(PI: Dr. Dr. habil. Jan Egger). The corresponding source code is freely available at (November 2017):
https://github.com/LLindn/Synthetic-Brain-Tumor_Data-Generation
Data were provided in part by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David
Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH
Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.
REFERENCES
[1] Shen, D. et al. “Deep Learning in Medical Image Analysis,” Annual Review of Biomedical Engineering 19, pp. 221-248 (2017).
[2] Greiner, K. et al. “Segmentation of Aortic Aneurysms in CTA Images with the Statistic Approach of the Active Appearance Models,” Proceedings of Bildverarbeitung für die Medizin (BVM), Berlin, Germany, Springer Press, 51-55 (2008).
[3] Lu, J. et al. “Detection and visualization of endoleaks in CT data for monitoring of thoracic and abdominal aortic aneurysm stents,” Proc. of SPIE Vol. 6918, 69181F-1 (2016).
[4] Zukic, D. et al. “Robust Detection and Segmentation for Diagnosis of Vertebral Diseases using Routine MR Images,” Computer Graphics Forum, Volume 33, Issue 6, Pages 190-204 (2014).
[5] Zukic, D. et al. “Segmentation of Vertebral Bodies in MR Images,” Vision, Modeling, and Visualization (VMV), The Eurographics Association, pp. 135-142 (2012).
[6] Egger, J. et al. “Interactive-cut: Real-time feedback segmentation for translational research,” Computerized Medical Imaging and Graphics 38 (4), 285-295 (2014).
[7] Egger, J. et al. “PCG-Cut: Graph Driven Segmentation of the Prostate Central Gland,” PLOS ONE 8 (10), e76645 (2013).
[8] Menze, B. H. et al. “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS),” IEEE Transactions on Medical Imaging 34(10), pp. 1993-2024 (2015).
[9] Wang, J. et al. “The Effectiveness of Data Augmentation in Image Classification using Deep Learning,” Stanford, pp. 1-8 (2017).
[10] Thurston, M. et al. “Glioblastoma,” Radiopaedia, rID: 4910 (2017).
[11] Egger, J. et al. “GBM Volumetry using the 3D Slicer Medical Image Computing Platform,” Sci. Rep., Nature Publishing Group (NPG), 3:1364 (2013).
[12] Hogea, C. et al. “Modeling glioma growth and mass effect in 3D MR images of the brain,” MICCAI, pp. 642-650 (2007).
[13] Prastawa, M. et al. “Synthetic Ground Truth for Validation of Brain Tumor Segmentation,” MICCAI, pp. 26-33 (2005).
[14] Egger, J. et al. “Integration of the OpenIGTlink network protocol for image guided therapy with the medical platform MeVisLab,” The International Journal of Medical Robotics and Computer Assisted Surgery, 8(3):282-390 (2012).
[15] Egger, J. et al. “HTC Vive MeVisLab integration via OpenVR for medical applications,” PLoS ONE 12(3): e0173972 (2017).
[16] Egger, J. et al. “Integration of the OpenIGTlink network protocol for image guided therapy with the medical platform MeVisLab,” The International Journal of Medical Robotics and Computer Assisted Surgery, 8(3):282-390 (2012).
[17] Kuhnt, D. et al. “Fiber tractography based on diffusion tensor imaging (DTI) compared with High Angular Resolution Diffusion Imaging (HARDI) with compressed sensing (CS) - initial experience and clinical impact,” Neurosurgery, Volume 72, pp. A165-A175 (2013).
[18] Weisstein, E. W. “Icosahedron,” http://mathworld.wolfram.com/Icosahedron.html (2017).
[19] Egger, J., Mostarkic, Z., Grosskopf, S. and Freisleben, B. “A Fast Vessel Centerline Extraction Algorithm for Catheter Simulation,” 20th IEEE International Symposium on Computer-Based Medical Systems, Maribor, Slovenia, pp. 177-182, IEEE Press (2007).
[20] Egger, J. et al. “Simulation of bifurcated stent grafts to treat abdominal aortic aneurysms (AAA),” Proceedings of SPIE Medical Imaging Conference, Vol. 6509, pp. 65091N(1-6), San Diego, USA (2007).
[21] Egger, J. et al. “Preoperative Measurement of Aneurysms and Stenosis and Stent-Simulation for Endovascular Treatment,” IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Washington (D.C.), USA, pp. 392-395, IEEE Press (2007).
[22] Pfarrkirchner, B. et al. “Lower jawbone data generation for deep learning tools under MeVisLab,” SPIE Medical Imaging (2018).
[23] Lindner, L. & Egger, J. “Generation of Training Data for Brain Tumor Segmentation via MeVisLab,” https://github.com/LLindn/Synthetic-Brain-Tumor_Data-Generation (2017).
[24] Moeller, S. et al. “Multiband multislice GE-EPI at 7 tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain fMRI,” Magn. Reson. Med. 63(5):1144-1153 (2010).
[25] Milchenko, M. & Marcus, D. “Obscuring surface anatomy in volumetric imaging data,” Neuroinformatics 11(1):65-75 (2013).
[26] Marcus, D. S. et al. “Informatics and data mining: Tools and strategies for the Human Connectome Project,” Frontiers in Neuroinformatics 5:4 (2011).
[27] Biswas, T. “Simulation of Contrast Enhancement of Various Brain Lesions (Without IV Gadolinium) By Using The Neural Network,” The Internet Journal of Radiology 14(1), pp. 1-12 (2012).
[28] Bauer, M. et al. “A fast and robust graph-based approach for boundary estimation of fiber bundles relying on fractional anisotropy maps,” 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, pp. 4016-4019 (2010).
[29] Bauer, M. et al. “Boundary estimation of fiber bundles derived from diffusion tensor images,” International Journal of Computer Assisted Radiology and Surgery 6 (1), 1-11 (2011).
[30] Hann, A. et al. “Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound,” Scientific Reports 7 (2017).