Poster (PDF available)

TuMore: Generation of Synthetic Brain Tumor MRI Data for Deep Learning Based Segmentation Approaches


Abstract

Accurate segmentation and measurement of brain tumors play an important role in clinical practice and research, as they are critical for treatment planning and for monitoring tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming and neither accurate nor reliable, there is a need for objective, robust and fast automated segmentation methods that provide competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data are available in the majority of cases. For this reason, we propose a method for creating a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors (more specifically, glioblastomas) and the corresponding ground truth, which can subsequently be used to train deep neural networks.
TuMore: Generation of Synthetic Brain Tumor MRI Data for
Deep Learning Based Segmentation Approaches
Lydia Lindner, Birgit Pfarrkirchner, Christina Gsaxner, Dieter Schmalstieg, Jan Egger
Graz University of Technology, Institute for Computer Graphics and Vision, Graz, Austria; BioTechMed-Graz, Graz, Austria
Medical University of Graz, Department of Maxillofacial Surgery, Graz, Austria; Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
REFERENCES
1. Egger, J., Mostarkic, Z., Grosskopf, S. and Freisleben, B. “A Fast Vessel Centerline Extraction Algorithm for Catheter Simulation,” 20th IEEE International Symposium on Computer-Based Medical Systems, Maribor, Slovenia, pp. 177-182, IEEE Press (2007).
2. Egger, J. et al. “GBM Volumetry using the 3D Slicer Medical Image Computing Platform,” Sci. Rep., Nature Publishing Group (NPG), 3:1364 (2013).
3. Egger, J. et al. “Integration of the OpenIGTLink Network Protocol for Image-Guided Therapy with the Medical Platform MeVisLab,” The International Journal of Medical Robotics and Computer Assisted Surgery, 8(3):282-290 (2012).
The work received funding from BioTechMed-Graz in Austria (“Hardware accelerated intelligent medical imaging”) and from the 6th Call of the Initial Funding Program of the Research & Technology House (F&T-Haus) at Graz University of Technology (PI: Dr. Dr. habil. Jan Egger). Data were provided in part by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657), funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University. The corresponding source code is freely available under: https://github.com/LLindn/Synthetic-Brain-Tumor_Data-Generation
METHODS

The software application was developed using the medical imaging and visualization platform MeVisLab, which provides an interface for connecting existing and new proprietary algorithms in a dataflow network. To obtain an approximation of a sphere, an icosahedron was recursively refined [1]-[3] and a special displacement algorithm was applied to the resulting vertices. The idea behind this displacement algorithm is to randomly choose a vertex (V_chosen) according to a uniform distribution. Then, the distance between V_chosen and the center of the polyhedron is increased, which elevates the corresponding vertex. To make the deformation look natural, the remaining vertices are displaced accordingly as well. To determine the strength of displacement for each vertex, all vertices are categorized into displacement levels k according to the minimum number of edges required to reach V_chosen. Finally, the displacement of V_chosen and the remaining vertices is achieved by multiplying the x, y and z coordinates of each vertex by the factor ε(k) (see Formula 1), where k denotes the displacement level, d (randomly chosen according to a uniform distribution on the interval [1, 2.5]) indicates the initial strength of displacement, and m is a decay factor of 0.97. This process is repeated seven times. Then, the synthetic tumor is inserted into a brain MRI image of a healthy subject.
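The vertex displacement described above can be sketched in a few lines of Python/NumPy. Since the exact form of Formula (1) did not survive this text extraction, the sketch assumes ε(k) = 1 + (d - 1) * m^k, which matches the description (the factor equals d at V_chosen and decays toward 1 with increasing level k); the function names and the adjacency-list mesh representation are illustrative and not taken from the original MeVisLab network.

```python
import random
from collections import deque

import numpy as np

def displacement_levels(adjacency, v_chosen):
    """Level k of each vertex = minimum number of edges to reach V_chosen (BFS)."""
    levels = {v_chosen: 0}
    queue = deque([v_chosen])
    while queue:
        v = queue.popleft()
        for neighbor in adjacency[v]:
            if neighbor not in levels:
                levels[neighbor] = levels[v] + 1
                queue.append(neighbor)
    return levels

def displace(vertices, adjacency, m=0.97):
    """One displacement iteration on a refined icosahedron centered at the origin.

    vertices:  (N, 3) NumPy array of vertex coordinates.
    adjacency: list of neighbor-index lists, one per vertex.
    """
    v_chosen = random.randrange(len(vertices))   # vertex chosen uniformly at random
    d = random.uniform(1.0, 2.5)                 # initial strength of displacement
    levels = displacement_levels(adjacency, v_chosen)
    displaced = vertices.copy()
    for v, k in levels.items():
        eps = 1.0 + (d - 1.0) * m ** k           # assumed form of Formula (1)
        displaced[v] = vertices[v] * eps         # scale coordinates away from the center
    return displaced

# Repeating displace() seven times on the refined icosahedron yields the
# irregular synthetic tumor surface described above.
```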
A simplified tumor mass effect was simulated by deforming the brain MRI image around the artificial glioblastoma according to a displacement field. Since the force applied to the brain by a tumor mass is an outward radial force that originates from the initial tumor region and weakens with distance, the required displacement field was constructed as follows: First, the 3D Euclidean distance transform of the sphere was calculated to obtain a gradually decreasing intensity from the tumor center to the edges. Then, a gradient filter was applied to the Euclidean distance transform, yielding a dense vector field. Afterwards, the Euclidean distance transform was normalized and multiplied by a factor that can be varied depending on the desired strength of deformation. Finally, the result was multiplied with the previously calculated gradient field. This yields a displacement field in which the strength of the deformation weakens with distance.
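The displacement-field construction can be sketched with NumPy/SciPy in place of the original MeVisLab modules; the choice of scipy.ndimage.distance_transform_edt and np.gradient, the sign convention and the strength parameter are assumptions for illustration, not the poster's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mass_effect_field(sphere_mask, strength=5.0):
    """Displacement field whose magnitude weakens with distance from the tumor.

    sphere_mask: 3D boolean array marking the synthetic tumor region.
    strength:    factor controlling the desired strength of deformation.
    """
    # 3D Euclidean distance transform of the sphere (largest at the tumor
    # center, decreasing toward the edges; here it is nonzero only inside
    # the mask, which is an assumption about the original module setup).
    dist = distance_transform_edt(sphere_mask).astype(np.float32)

    # Gradient filter on the distance transform -> dense vector field.
    # Negated so the vectors point outward, away from the tumor center
    # (an assumption about the intended sign convention).
    grad = -np.stack(np.gradient(dist), axis=-1)

    # Normalize the distance transform and scale by the deformation strength.
    weight = strength * dist / (dist.max() + 1e-8)

    # Multiply the weights with the gradient field: the deformation
    # weakens with distance from the tumor center.
    return weight[..., None] * grad

# Example: a binary sphere of radius 20 voxels inside a 128^3 volume.
zz, yy, xx = np.mgrid[:128, :128, :128]
mask = (zz - 64) ** 2 + (yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2
field = mass_effect_field(mask)   # shape (128, 128, 128, 3)
```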
RESULTS

Typical radiographic features of glioblastomas:
thick, irregularly enhancing margins
central necrotic core

Figure 1 shows three examples of brain MRI images with real glioblastomas and a real tumor mass effect; the typical radiographic features of glioblastomas are clearly visible. Figure 2 shows three brain MRI images containing synthetic glioblastomas with a simulated tumor mass effect. The typical radiographic features of glioblastomas are also clearly visible in the synthetic brain MRI data. As a comparison of the real MRI images (Figure 1) with the synthetic MRI images (Figure 2) shows, the proposed method generates quite realistic-looking glioblastomas that comply with the basic radiographic features of real glioblastomas.
CONCLUSIONS

1. With the proposed method it is possible to generate realistic-looking glioblastomas, insert them into brain MRI images of healthy subjects and simulate the tumor mass effect;
2. The position, shape and size of the tumor are exactly known in advance. Therefore, a very precise ground truth can easily be created without any action of a human expert;
3. The proposed method is simple, but still able to simulate all crucial characteristics of glioblastomas (appearance of the enhancing tumor and the necrotic core) in T1-weighted contrast-enhanced MRI images;
4. An easy application via MeVisLab is presented.
Further adaptations of the proposed approach could be to simulate the effect of gadolinium on the brain in MRI images and to simulate edema or fiber bundles. The proposed approach may also be applied to other areas of clinical oncology and diagnostic medicine.
Fig. 1: Three examples of brain MRI images (T1-weighted, contrast-enhanced) with real glioblastomas and a tumor mass effect.
Fig. 2: Brain MRI images containing synthetic glioblastomas with a simulated tumor mass effect.
... The semi-automatic data generation approach presented in this contribution is based on our previous work [11]. It was realized using MeVisLab - a powerful modular framework that can be used for image processing research and development with a strong focus on biomedical imaging. ...
Conference Paper
In this work, fully automatic binary segmentation of GBMs (glioblastoma multiforme) in 2D magnetic resonance images is presented using a convolutional neural network trained exclusively on synthetic data. The precise segmentation of brain tumors is one of the most complex and challenging tasks in clinical practice and is usually done manually by radiologists or physicians. However, manual delineations are time-consuming, subjective and in general not reproducible. Hence, more advanced automated segmentation techniques are in great demand. After deep learning methods already successfully demonstrated their practical usefulness in other domains, they are now also attracting increasing interest in the field of medical image processing. Using fully convolutional neural networks for medical image segmentation provides considerable advantages, as it is a reliable, fast and objective technique. In the medical domain, however, only a very limited amount of data is available in the majority of cases, due to privacy issues among other things. Nevertheless, a sufficiently large training data set with ground truth annotations is required to successfully train a deep segmentation network. Therefore, a semi-automatic method for generating synthetic GBM data and the corresponding ground truth was utilized in this work. A U-Net-based segmentation network was then trained solely on this synthetically generated data set. Finally, the segmentation performance of the model was evaluated using real magnetic resonance images of GBMs.
Article
Radiation therapy requires clinical linear accelerators to be mechanically and dosimetrically calibrated to a high standard. One important quality assurance test is the Winston-Lutz test which localises the radiation isocentre of the linac. In the current work we demonstrate a novel method of analysing EPID based Winston-Lutz QA images using a deep learning model trained only on synthetic image data. In addition, we propose a novel method of generating the synthetic WL images and associated ‘ground-truth’ masks using an optical path-tracing engine to ‘fake’ mega-voltage EPID images. The model called DeepWL was trained on 1500 synthetic WL images using data augmentation techniques for 180 epochs. The model was built using Keras with a TensorFlow backend on an Intel Core i5-6500T CPU and trained in approximately 15 h. DeepWL was shown to produce ball bearing and multi-leaf collimator field segmentations with a mean dice coefficient of 0.964 and 0.994 respectively on previously unseen synthetic testing data. When DeepWL was applied to WL data measured on an EPID, the predicted mean displacements were shown to be statistically similar to the Canny Edge detection method. However, the DeepWL predictions for the ball bearing locations were shown to correlate better with manual annotations compared with the Canny edge detection algorithm. DeepWL was demonstrated to analyse Winston-Lutz images with an accuracy suitable for routine linac quality assurance with some statistical evidence that it may outperform Canny Edge detection methods in terms of segmentation robustness and the resultant displacement predictions.
Article
Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. (3D) Slicer - a free platform for biomedical research - provides an alternative to this manual slice-by-slice segmentation process, which is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was on an average 61% of the time required for a pure manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm.
Article
OpenIGTLink is a new, open, simple and extensible network communication protocol for image-guided therapy (IGT). The protocol provides a standardized mechanism to connect hardware and software by the transfer of coordinate transforms, images, and status messages. MeVisLab is a framework for the development of image processing algorithms and visualization and interaction methods, with a focus on medical imaging. The paper describes the integration of the OpenIGTLink network protocol for IGT with the medical prototyping platform MeVisLab. The integration of OpenIGTLink into MeVisLab has been realized by developing a software module using the C++ programming language. The integration was evaluated with tracker clients that are available online. Furthermore, the integration was used to connect MeVisLab to Slicer and a NDI tracking system over the network. The latency time during navigation with a real instrument was measured to show that the integration can be used clinically. Researchers using MeVisLab can interface their software to hardware devices that already support the OpenIGTLink protocol, such as the NDI Aurora magnetic tracking system. In addition, the OpenIGTLink module can also be used to communicate directly with Slicer, a free, open source software package for visualization and image analysis.
Conference Paper
In this paper, we present a fast and robust algorithm for centerline extraction in blood vessels. The algorithm is suitable for catheter simulation in CT data of blood vessels. It creates an initial centerline based on two user-defined points (start- and endpoint). For curved vessel structures, this initial centerline is computed by Dijkstra's shortest path algorithm. For linear vessel structures, the algorithm directly connects the start- and the endpoint to get the initial centerline. Thereafter, this initial path will be aligned in the blood vessel, resulting in the vessels centerline (i.e. an optimal catheter simulation path). The alignment is done by an active contour model combined with polyhedra placed along it. Results of the proposed centerline algorithm are demonstrated for CTA with variations in anatomy and location of pathology.