Conference Paper

Using Synthetic Training Data for Deep Learning-Based GBM Segmentation


Abstract

In this work, fully automatic binary segmentation of glioblastoma multiforme (GBM) in 2D magnetic resonance images is presented using a convolutional neural network trained exclusively on synthetic data. The precise segmentation of brain tumors is one of the most complex and challenging tasks in clinical practice and is usually performed manually by radiologists or physicians. However, manual delineations are time-consuming, subjective, and generally not reproducible, so more advanced automated segmentation techniques are in great demand. Having already demonstrated their practical usefulness in other domains, deep learning methods are now attracting increasing interest in the field of medical image processing. Fully convolutional neural networks offer considerable advantages for medical image segmentation, as they are reliable, fast, and objective. In the medical domain, however, only a very limited amount of data is available in the majority of cases, due to privacy issues among other things, whereas a sufficiently large training data set with ground-truth annotations is required to successfully train a deep segmentation network. Therefore, a semi-automatic method for generating synthetic GBM data and the corresponding ground truth was utilized in this work. A U-Net-based segmentation network was then trained solely on this synthetically generated data set. Finally, the segmentation performance of the model was evaluated on real magnetic resonance images of GBMs.
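The paper itself does not include code, but the training setup the abstract describes (a segmentation network fitted exclusively to synthetic image/ground-truth pairs with an overlap-based loss) can be sketched as follows. Everything below, including the toy tensors and the placeholder model standing in for the U-Net, is hypothetical and only illustrates the workflow.

```python
# Minimal sketch (not the authors' code): training a binary segmentation
# network exclusively on synthetic image/ground-truth pairs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary masks; pred holds probabilities in [0, 1]."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Stand-ins for the synthetically generated GBM slices and tumor masks
# (in the paper these come from the semi-automatic generation method).
images = torch.rand(32, 1, 128, 128)
masks = (torch.rand(32, 1, 128, 128) > 0.9).float()
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

model = nn.Sequential(  # placeholder; the paper uses a U-Net here
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = dice_loss(model(x), y)
        loss.backward()
        opt.step()
```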


... One promising approach is to synthesize scattered spectra datasets using a physical model. For example, a synthetic brain tumor MRI dataset was successfully applied to deep-learning-based glioblastoma multiforme (GBM) segmentation [39]. When a synthetic dataset is used to analyze holographic scattering patterns for nanoparticle characterization [40], the determined accuracy of particle size and refractive index is improved by more than an order of magnitude compared to traditional methods. ...
Article
Full-text available
Polarized light-scattering spectroscopy (PLSS) is a promising noninvasive early cancer detection technique that recovers the nuclear size of epithelial cells from collected single scattered light. Conventionally, the nuclear size is inferred using model-driven least-squares fitting (LSF) methods, which are time-consuming and challenging for real-time cancer diagnosis. Although data-driven deep-learning-based techniques can be employed to speed up the inverse process, labeled data are usually limited and a reliability quantification of the predictions is lacking. Herein, by synthesizing a large quantity of single scattered spectra with physical priors, a Bayesian-deep-learning (BDL)-based PLSS framework is presented for, to our knowledge, the first time, and is expected to provide the model uncertainty of network predictions for reliability quantification. Further, to explore the applicability of BDL uncertainty estimation, the neural networks for early cancer diagnosis are designed as a convolutional neural network (CNN) classification problem and a fully connected neural network (FNN) regression problem, respectively. They are verified using the single scattered spectra of polystyrene microsphere tissue models and human colorectal lesion tissue obtained by our proposed snapshot PLSS system. The results show that both BDL networks obtain nuclear sizes similar to the ground truth, and their inference is far faster than LSF methods. Interestingly, only the CNN classification network can provide effective uncertainty for reliability quantification. The work presents a new paradigm for the real-time diagnosis of early cancer and provides a reference for the design of snapshot PLSS endoscopy.
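As an illustration of the kind of model uncertainty this abstract refers to: Monte Carlo dropout is one common Bayesian-deep-learning approximation, though the paper's exact variational scheme is not reproduced here. The network, input shapes, and sample count below are all assumptions.

```python
# Illustrative sketch only: Monte Carlo dropout as one way to obtain
# predictive uncertainty from a CNN classifier over a 1D spectrum.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    """Tiny 1D CNN over a scattered spectrum; dropout is kept stochastic."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(), nn.Dropout(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, n_classes),
        )
    def forward(self, x):
        return self.net(x)

model = SpectrumCNN()
model.train()  # keep dropout active so repeated forward passes differ

spectrum = torch.rand(1, 1, 256)  # one synthetic single-scattered spectrum
with torch.no_grad():
    samples = torch.stack([model(spectrum).softmax(-1) for _ in range(50)])
mean_pred = samples.mean(0)   # predictive mean over the MC samples
uncertainty = samples.var(0)  # per-class predictive variance as uncertainty
print(mean_pred, uncertainty)
```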
... Here, models are pretrained on artificial data and then refined on a small set of real image-label pairs. In medical image segmentation, this approach has been successfully applied to the segmentation of brain tumors [14], endotracheal tubes [3] and catheters [7] as well as tubular structures, such as the whole mouse brain vasculature [32], the dermal vasculature [6] and neurons [20]. Physics-based augmentations Another strategy to mitigate the impact of sparse data is image augmentation, where additional training samples are generated based on heuristic transformations of existing data [26]. ...
Article
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. In order to reliably characterize the retinal vasculature, there is a need to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based methods for segmentation mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground truth labels; thereby obviating the need for manual annotation of training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe that it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
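A rough sketch of what such physics-based augmentations might look like in practice; the paper's actual artifact models are more elaborate, and the speckle and stripe-dropout parameters below are invented for illustration.

```python
# Hedged sketch: generic acquisition-style corruptions for a synthetic
# angiography image, loosely mimicking speckle noise and motion stripes.
import numpy as np

rng = np.random.default_rng(0)

def augment_octa(img: np.ndarray) -> np.ndarray:
    """Apply simple acquisition-style corruptions to a [0, 1] image."""
    out = img * rng.gamma(shape=4.0, scale=0.25, size=img.shape)  # speckle-like noise
    drop = rng.random(img.shape[0]) < 0.03  # occasional B-scan (row) dropout,
    out[drop, :] *= 0.2                     # mimicking motion/signal-loss stripes
    return np.clip(out, 0.0, 1.0)

synthetic = rng.random((64, 64))  # stand-in for a synthetic vessel image
print(augment_octa(synthetic).shape)
```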
... Medical 3D Viewer of Studierfenster with the classical 2D views in axial, coronal, and sagittal directions (right) and volume rendering (middle) [31,32] ...
Article
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster ( www.studierfenster.at ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing these with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
... In [6], the authors divide image data augmentation into two major categories: basic image manipulations (such as flipping, transposing, and color space manipulations) and deep learning approaches (based, for example, on GANs). For reviews on the deep learning approach in data augmentation, see [9,10]; and, for some recent GAN methods specifically, see [11,12]. The aim of this study is to compare combinations of the best image manipulation methods for generating new samples that the literature has shown work well with deep learners. ...
Article
Full-text available
Convolutional neural networks (CNNs) have gained prominence in the research literature on image classification over the last decade. One shortcoming of CNNs, however, is their lack of generalizability and tendency to overfit when presented with small training sets. Augmentation directly confronts this problem by generating new data points that provide additional information. In this paper, we investigate the performance of more than ten different sets of data augmentation methods, with two novel approaches proposed here: one based on the discrete wavelet transform and the other on the constant-Q Gabor transform. Pretrained ResNet50 networks are fine-tuned on each augmentation method. Combinations of these networks are evaluated and compared across four benchmark data sets of images representing diverse problems and collected by instruments that capture information at different scales: a virus data set, a bark data set, a portrait data set, and a LIGO glitches data set. Experiments demonstrate the superiority of this approach. The best ensemble proposed in this work achieves state-of-the-art (or comparable) performance across all four data sets. This result shows that varying data augmentation is a feasible way of building an ensemble of classifiers for image classification.
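As a hedged sketch of the wavelet-based idea (the paper's exact augmentation recipe is not reproduced here), one generic variant perturbs the detail sub-bands of a 2D discrete wavelet decomposition and reconstructs the image; the wavelet choice and perturbation range below are assumptions.

```python
# Sketch under assumptions: create a new training sample by randomly
# rescaling the DWT detail coefficients of an image and reconstructing.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(42)

def dwt_augment(img: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Perturb the detail sub-bands of a single-level 2D Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    cH, cV, cD = (c * rng.uniform(1 - scale, 1 + scale) for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

image = rng.random((128, 128))
print(dwt_augment(image).shape)  # (128, 128)
```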
... Some studies tried to limit the need for manual processing: Gsaxner et al. [43] replaced manual tumor annotations with the peak regions of PET images from combined PET/CT datasets. Lindner et al. [91] added artificial glioblastomas in brain MRI images to augment regular datasets of healthy patients. Alternatively, Chen et al. [92] exploited the role of GAN and suggested a one-shot adversarial learning approach for the segmentation of maxillofacial bony structures in MRI. ...
... Furthermore, deep learning algorithms are also frequently used in radiotherapeutic research for automated skull stripping, automated segmentation, or delineation of resection cavities for stereotactic radiosurgery [57][58][59][60]. Despite the ubiquity of high-performing models in clinical research, none has been translated ...
Article
Full-text available
Glioblastoma is associated with a poor prognosis. Even though survival statistics are well described at the population level, it remains challenging to predict the prognosis of an individual patient despite the increasing number of prognostic models. The aim of this study is to systematically review the literature on prognostic modeling in glioblastoma patients. A systematic literature search was performed, following the PRISMA guidelines, to identify all relevant studies that developed a prognostic model for predicting overall survival in glioblastoma patients. Participants, type of input, algorithm type, validation, and testing procedures were reviewed per prognostic model. Among 595 citations, 27 studies were included for qualitative review. The included studies developed and evaluated a total of 59 models, of which only seven were externally validated in a different patient cohort. The predictive performance among these studies varied widely according to the AUC (0.58–0.98), accuracy (0.69–0.98), and C-index (0.66–0.70). Three studies deployed their model as an online prediction tool, all of which were based on a statistical algorithm. The increasing performance of survival prediction models will aid personalized clinical decision-making in glioblastoma patients. The scientific realm is gravitating towards the use of machine learning models developed on high-dimensional data, often with promising results. However, none of these models has been implemented into clinical care. To facilitate the clinical implementation of high-performing survival prediction models, future efforts should focus on harmonizing data acquisition methods, improving model interpretability, and externally validating these models in a multicentered, prospective fashion.
... Annotation based on manual labelling is far from being a practical solution. In many computer vision problems, using synthetic images as a data augmentation technique has been demonstrated to be a useful way to cope with the problem of insufficient data [Lee and Moloney 2017] [Lindner et al. 2019]. Apart from the availability of huge amounts of data, the possibility of providing labels at the same time is one of the main advantages of simulation frameworks. ...
Preprint
Full-text available
In this paper, we evaluate a synthetic framework to be used in the field of gaze estimation employing deep learning techniques. The lack of sufficient annotated data can be overcome by using a synthetic evaluation framework, as long as it resembles the behavior of a real scenario. In this work, we use the U2Eyes synthetic environment with the I2Head dataset as a real benchmark for comparison, based on alternative training and testing strategies. The results obtained show comparable average behavior between both frameworks, although significantly more robust and stable performance is obtained with the synthetic images. Additionally, the potential of synthetically pretrained models for user-specific calibration strategies is shown, with outstanding performance.
Article
Full-text available
Analyzing medical data to find abnormalities is a time-consuming and costly task, particularly for rare abnormalities, requiring tremendous effort from medical experts. Therefore, artificial intelligence has become a popular tool for the automatic processing of medical data, acting as a supportive tool for doctors. However, the machine learning models used to build these tools are highly dependent on the data used to train them. Large amounts of data can be difficult to obtain in medicine due to privacy reasons, expensive and time-consuming annotations, and a general lack of data samples for infrequent lesions. In this study, we present a novel synthetic data generation pipeline, called SinGAN-Seg, to produce synthetic medical images with corresponding masks using a single training image. Our method differs from traditional generative adversarial networks (GANs) because our model needs only a single image and the corresponding ground truth to train. We also show that the synthetic data generation pipeline can be used to produce alternative artificial segmentation datasets with corresponding ground truth masks when real datasets cannot be shared. The pipeline is evaluated using qualitative and quantitative comparisons between real and synthetic data, showing that the style transfer technique used in our pipeline significantly improves the quality of the generated data and that our method outperforms other state-of-the-art GANs at preparing synthetic images when the size of the training dataset is limited. By training UNet++ on both real data and synthetic data generated by the SinGAN-Seg pipeline, we show that models trained on synthetic data perform very closely to those trained on real data when both datasets contain a considerable amount of training data. In contrast, synthetic data generated by the SinGAN-Seg pipeline improves the performance of segmentation models when training datasets do not contain a considerable amount of data. All experiments were performed using an open dataset, and the code is publicly available on GitHub.
Chapter
Three-dimensional (3D) printing, or additive manufacturing (AM), enables the fabrication of a wide range of 3D structures and complex geometries from a patient's own 3D model data, which is extracted from medical images such as computed axial tomography (CAT) and magnetic resonance imaging (MRI). The process is based on the principle of successively printing materials that overlap each other layer by layer. It was first introduced in the 1980s and has become one of the most efficient methods for building bionic tissues or organs, transforming manufacturing processes. In 1986, a process known as stereolithography (SLA) was developed by Charles Hull, and subsequent developments such as fused deposition modeling (FDM), powder bed fusion, and inkjet printing came into use later. Nowadays, 3D printing has evolved into a leading manufacturing technique in different industries, including construction, prototyping, and biomechanics. Novel materials and methods are continuously being developed, and the 3D printing industry has exploded due to the increased accessibility and manufacturing speed of 3D printers. A wide range of applications, including dentistry, tissue engineering and regenerative medicine, engineered tissue models, medical devices, anatomical models, and drug formulation, have involved the use of 3D printing techniques.
Poster
Full-text available
An important part of these applications is medical image segmentation, and algorithms are needed to evaluate and compare segmentations. Such algorithms have been used and proven for years in all different fields of image processing. Although image segmentation has grown rapidly in medicine, a major part of the tools and applications have stayed the same for years. Especially in terms of availability, cross-platform support, and usability, there is major room for improvement. This contribution aims to remedy the mentioned problems through the development of a cross-platform web tool for manual image segmentation and calculation of segmentation scores.
Conference Paper
Full-text available
Accurate segmentation and measurement of brain tumors play an important role in clinical practice and research, as they are critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming, and neither accurate nor reliable, there is a need for objective, robust, and fast automated segmentation methods that provide competitive performance. Deep learning based approaches are therefore gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method to create a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - glioblastomas, more specifically - and the corresponding ground truth, which can subsequently be used to train deep neural networks.
Conference Paper
Full-text available
Accurate automatic algorithms for the segmentation of brain tumours have the potential of improving disease diagnosis, treatment planning, as well as enabling large-scale studies of the pathology. In this work we employ DeepMedic [1], a 3D CNN architecture previously presented for lesion segmentation, which we further improve by adding residual connections. We also present a series of experiments on the BRATS 2015 training database for evaluating the robustness of the network when less training data are available or less filters are used, aiming to shed some light on requirements for employing such a system. Our method was further benchmarked on the BRATS 2016 Challenge, where it achieved very good performance despite the simplicity of the pipeline.
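The residual connections mentioned here follow the standard identity-shortcut pattern; the block below is a generic illustration of that technique for a 3D CNN, not the DeepMedic code itself, and the channel counts are arbitrary.

```python
# Minimal sketch of a residual block for a 3D segmentation CNN: the block
# output is the sum of the convolutional body and the unmodified input.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3D convolutions whose output is summed with the block input."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)  # identity shortcut

x = torch.rand(1, 8, 16, 16, 16)  # (batch, channels, D, H, W)
print(ResBlock3D(8)(x).shape)
```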
Article
Full-text available
Brain tumor segmentation is an important task in medical image processing. Early diagnosis of brain tumors plays an important role in improving treatment possibilities and increases the survival rate of patients. Manual segmentation of brain tumors for cancer diagnosis, from the large number of MRI images generated in clinical routine, is a difficult and time-consuming task, so there is a need for automatic brain tumor image segmentation. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. Recently, automatic segmentation using deep learning methods has proved popular, since these methods achieve state-of-the-art results and can address this problem better than other methods. Deep learning methods also enable efficient processing and objective evaluation of the large amounts of MRI-based image data. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. Unlike these, this paper focuses on the recent trend of deep learning methods in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on recent deep learning methods, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods in daily clinical routine are addressed.
Article
Full-text available
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients – manually annotated by up to four raters – and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all subregions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
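The per-voxel majority vote underlying the fusion result reported here can be sketched in a few lines; the hierarchical weighting used in the benchmark is omitted, so this only shows the basic operation.

```python
# Sketch: fuse several candidate binary segmentations by per-voxel
# majority vote (a voxel is foreground if a strict majority says so).
import numpy as np

def majority_vote(segmentations: np.ndarray) -> np.ndarray:
    """segmentations: (n_algorithms, ...) array of binary label maps."""
    votes = segmentations.sum(axis=0)
    return (votes * 2 > segmentations.shape[0]).astype(np.uint8)

rng = np.random.default_rng(1)
segs = (rng.random((5, 64, 64)) > 0.5).astype(np.uint8)  # five algorithms
fused = majority_vote(segs)
print(fused.shape, fused.dtype)
```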
Article
Full-text available
Gliomas are malignant primary brain tumors and yet incurable. Palliation and the maintenance or improvement of the patient's quality of life is therefore of main importance. For that reason, health-related quality of life (HRQoL) has become an important outcome measure in clinical trials, next to traditional outcome measures such as overall and progression-free survivals, and radiological response to treatment. HRQoL is a multidimensional concept covering physical, psychological, and social domains, as well as symptoms induced by the disease and its treatment. HRQoL is assessed by using self-reported, validated questionnaires. Various generic HRQoL questionnaires, which can be supplemented with a brain tumor- specific module, are available. Both the tumor and its treatment can have a negative effect on HRQoL. However, treatment with surgery, radiotherapy, chemotherapy, and supportive treatment may also improve patients' HRQoL, in addition to extending survival. It is expected that the impact of HRQoL measurements in both clinical trials and clinical practice will increase. Hence, it is important that HRQoL data are collected, analyzed, and interpreted correctly. Methodological issues such as selection bias and missing data may hamper the interpretation of HRQoL data and should therefore be accounted. In clinical trials, HRQoL can be used to assess the benefits of a new treatment strategy, which should be weighed carefully against the adverse effects of that treatment. In daily clinical practice, HRQoL assessments of an individual patient can be used to inform physicians about the impact of a specific treatment strategy, and it may facilitate the communication between the physicians and the patients.
Article
Full-text available
Contrast material enhancement for cross-sectional imaging has been used since the mid 1970s for computed tomography and the mid 1980s for magnetic resonance imaging. Knowledge of the patterns and mechanisms of contrast enhancement facilitate radiologic differential diagnosis. Brain and spinal cord enhancement is related to both intravascular and extravascular contrast material. Extraaxial enhancing lesions include primary neoplasms (meningioma), granulomatous disease (sarcoid), and metastases (which often manifest as mass lesions). Linear pachymeningeal (dura-arachnoid) enhancement occurs after surgery and with spontaneous intracranial hypotension. Leptomeningeal (pia-arachnoid) enhancement is present in meningitis and meningoencephalitis. Superficial gyral enhancement is seen after reperfusion in cerebral ischemia, during the healing phase of cerebral infarction, and with encephalitis. Nodular subcortical lesions are typical for hematogenous dissemination and may be neoplastic (metastases) or infectious (septic emboli). Deeper lesions may form rings or affect the ventricular margins. Ring enhancement that is smooth and thin is typical of an organizing abscess, whereas thick irregular rings suggest a necrotic neoplasm. Some low-grade neoplasms are "fluid-secreting," and they may form heterogeneously enhancing lesions with an incomplete ring sign as well as the classic "cyst-with-nodule" morphology. Demyelinating lesions, including both classic multiple sclerosis and tumefactive demyelination, may also create an open ring or incomplete ring sign. Thick and irregular periventricular enhancement is typical for primary central nervous system lymphoma. Thin enhancement of the ventricular margin occurs with infectious ependymitis. Understanding the classic patterns of lesion enhancement--and the radiologic-pathologic mechanisms that produce them--can improve image assessment and differential diagnosis.
Article
Full-text available
Astrocytic tumors are divided into two basic categories: circumscribed (grade I) or diffuse (grades II-IV). All diffuse astrocytomas tend to progress to grade IV astrocytoma, which is synonymous with glioblastoma multiforme (GBM). GBMs are characterized by marked neovascularity, increased mitosis, greater degree of cellularity and nuclear pleomorphism, and microscopic evidence of necrosis. Several genetic abnormalities have been associated with the development of GBM: In some cases, the abnormality is inherited (e.g., Li-Fraumeni syndrome); in others, genetic alteration appears to result from mutation into an oncogene or deterioration of the tumor-suppressor gene p53. A common, distinctive histopathologic feature of GBM is pseudopalisading. The most common imaging appearance of GBM is a large heterogeneous mass in the supratentorial white matter that exerts considerable mass effect. Less frequently, GBM can occur near the dura mater or in the corpus callosum, posterior fossa, and spinal cord. GBM typically contains central areas of necrosis, has thick irregular walls, and is surrounded by extensive, vasogenic edema, but the tumor may also have thin round walls, scant edema, or a cystic appearance with a mural nodule. GBMs most commonly metastasize from their original location by direct extension along white matter tracts; however, cerebrospinal fluid, subependymal, and hematogenous spread also can occur. Given the rapidly growing body of knowledge about GBM, the radiologist's role is more important than ever in accurate and timely diagnosis.
Article
Full-text available
The fourth edition of the World Health Organization (WHO) classification of tumours of the central nervous system, published in 2007, lists several new entities, including angiocentric glioma, papillary glioneuronal tumour, rosette-forming glioneuronal tumour of the fourth ventricle, papillary tumour of the pineal region, pituicytoma and spindle cell oncocytoma of the adenohypophysis. Histological variants were added if there was evidence of a different age distribution, location, genetic profile or clinical behaviour; these included pilomyxoid astrocytoma, anaplastic medulloblastoma and medulloblastoma with extensive nodularity. The WHO grading scheme and the sections on genetic profiles were updated and the rhabdoid tumour predisposition syndrome was added to the list of familial tumour syndromes typically involving the nervous system. As in the previous, 2000 edition of the WHO 'Blue Book', the classification is accompanied by a concise commentary on clinico-pathological characteristics of each tumour type. The 2007 WHO classification is based on the consensus of an international Working Group of 25 pathologists and geneticists, as well as contributions from more than 70 international experts overall, and is presented as the standard for the definition of brain tumours to the clinical oncology and cancer research communities world-wide.
Conference Paper
Image segmentation plays a major role in medical imaging. Especially in radiology, the detection and monitoring of tumors and other diseases can be supported by image segmentation applications. Tools that provide image segmentation and the calculation of segmentation scores are not available at all times for every device, due to the size and scope of the functionalities they offer. These tools need large periodic updates and do not work properly on old or weak systems. However, medical use cases often require fast and accurate results; complex and slow software can lead to additional stress and thus unnecessary errors. The aim of this contribution is the development of a cross-platform tool for medical segmentation use cases, providing a device-independent and always-available option for medical imaging, including manual segmentation and metric calculation. The result is Studierfenster (studierfenster.at), a web tool for manual segmentation and segmentation metric calculation. In this contribution, the focus lies on the segmentation metric calculation part of the tool. It provides functionalities for calculating directed and undirected Hausdorff Distance (HD) and Dice Similarity Coefficient (DSC) scores for two uploaded volumes, filtering for specific values, searching for specific values in the calculated metrics, and exporting filtered metric lists in different file formats.
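For reference, the two metrics the tool computes can be implemented directly from their definitions; this sketch is not the Studierfenster server code, just a plain reimplementation of the DSC and the (un)directed Hausdorff distance on binary masks.

```python
# Reference implementations of Dice Similarity Coefficient and the
# Hausdorff distance between the foreground point sets of two masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Undirected HD = max of the two directed Hausdorff distances."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

m1 = np.zeros((32, 32), bool); m1[8:20, 8:20] = True
m2 = np.zeros((32, 32), bool); m2[10:22, 10:22] = True
print(dice(m1, m2), hausdorff(m1, m2))
```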
Data
Skull-stripped MRI GBM Datasets. Please use the following citations if you use them in your work: L. Lindner, D. Wild, M. Weber, M. Kolodziej, G. von Campe, J. Egger. Skull-stripped MRI GBM Datasets. Figshare, 2018. and J. Egger, et al. GBM Volumetry using the 3D Slicer Medical Image Computing Platform. Sci Rep., 3:1364, 2013. Further information: Ten contrast-enhanced T1-weighted MRI datasets from patients with pathologically confirmed glioblastoma multiforme (GBM) and manual expert segmentations from neurosurgeons. All datasets are skull-stripped and reformatted to have 260 slices in the axial direction. Datasets with a sagittal or coronal scanning direction have been reformatted to an axial direction. Note: Only the enhancing tumor and the necrotic core were segmented, since these are currently the areas of main interest in clinical practice. Other regions, such as the surrounding edema, are typically not delineated in clinical routine, since these annotations often do not accurately reflect the true state. Skull-stripping was performed with BrainSuite.
Chapter
Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.
Article
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Conference Paper
It is widely agreed that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
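The contracting/expanding architecture this abstract describes reduces to the following compact sketch; the depth and channel widths here are illustrative and far smaller than in the actual U-Net.

```python
# Compact U-Net-style sketch: a contracting path, a symmetric expanding
# path, and a skip connection carrying localization detail across.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)  # contracting path
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # expanding path
        self.dec = block(32, 16)                            # 32 = 16 up + 16 skip
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.head(d)

print(TinyUNet()(torch.rand(1, 1, 64, 64)).shape)  # (1, 1, 64, 64)
```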
Article
OpenIGTLink is a new, open, simple and extensible network communication protocol for image-guided therapy (IGT). The protocol provides a standardized mechanism to connect hardware and software by the transfer of coordinate transforms, images, and status messages. MeVisLab is a framework for the development of image processing algorithms and visualization and interaction methods, with a focus on medical imaging. The paper describes the integration of the OpenIGTLink network protocol for IGT with the medical prototyping platform MeVisLab. The integration of OpenIGTLink into MeVisLab has been realized by developing a software module using the C++ programming language. The integration was evaluated with tracker clients that are available online. Furthermore, the integration was used to connect MeVisLab to Slicer and a NDI tracking system over the network. The latency time during navigation with a real instrument was measured to show that the integration can be used clinically. Researchers using MeVisLab can interface their software to hardware devices that already support the OpenIGTLink protocol, such as the NDI Aurora magnetic tracking system. In addition, the OpenIGTLink module can also be used to communicate directly with Slicer, a free, open source software package for visualization and image analysis.
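For context on the protocol side: an OpenIGTLink message begins with a fixed 58-byte big-endian header (version, type name, device name, timestamp, body size, CRC). The sketch below packs such a header in Python; the CRC is stubbed to zero for illustration, whereas a real client must compute the CRC64 the protocol requires, so treat this as an assumption-laden outline rather than a working client.

```python
# Hedged sketch of the OpenIGTLink header layout (58 bytes, big-endian).
import struct

def igtl_header(msg_type: str, device: str, body: bytes) -> bytes:
    return struct.pack(
        ">H12s20sQQQ",  # uint16, char[12], char[20], then three uint64 fields
        1,                                   # protocol version
        msg_type.encode().ljust(12, b"\x00"),
        device.encode().ljust(20, b"\x00"),
        0,                                   # timestamp (0 = unset)
        len(body),                           # body size in bytes
        0,                                   # CRC64 of the body (stubbed here)
    )

body = b""  # placeholder body for illustration only
packet = igtl_header("STATUS", "MeVisLab", body) + body
print(len(packet))  # 58-byte header plus body
# A client would then send `packet` over TCP, e.g. to 3D Slicer's default
# OpenIGTLink port: socket.create_connection(("localhost", 18944)).sendall(packet)
```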
Article
To assess the effectiveness of two automated magnetic resonance imaging (MRI) segmentation methods in determining the gross tumor volume (GTV) of brain tumors for use in radiation therapy treatment planning. Two automated MRI tumor segmentation methods (supervised k-nearest neighbors [kNN] and automatic knowledge-guided [KG]) were evaluated for their potential as "cyber colleagues." This required an initial determination of the accuracy and variability of radiation oncologists engaged in the manual definition of the GTV in MRI registered with computed tomography images for 11 glioma patients. Three sets of contours were defined for each of these patients by three radiation oncologists. These outlines were compared directly to establish inter- and intraoperator variability among the radiation oncologists. A novel, probabilistic measurement of accuracy was introduced to compare the level of agreement among the automated MRI segmentations. The accuracy was determined by comparing the volumes obtained by the automated segmentation methods with the weighted average volumes prepared by the radiation oncologists. Intra- and inter-operator variability in outlining was found to be an average of 20% +/- 15% and 28% +/- 12%, respectively. Lowest intraoperator variability was found for the physician who spent the most time producing the contours. The average accuracy of the kNN segmentation method was 56% +/- 6% for all 11 cases, whereas that of the KG method was 52% +/- 7% for 7 of the 11 cases when compared with the physician contours. For the areas of the contours where the oncologists were in substantial agreement (i.e., the center of the tumor volume), the accuracy of kNN and KG was 75% and 72%, respectively. The automated segmentation methods were found to be least accurate in outlining at the edges of the tumor volume. The kNN method was able to segment all cases, whereas the KG method was limited to enhancing tumors and gliomas with clear enhancing edges and no cystic formation. Both methods undersegment the tumor volume when compared with the radiation oncologists and performed within the variability of the contouring performed by experienced radiation oncologists based on the same data.
Conference Paper
Automated MRI (Magnetic Resonance Imaging) brain tumor segmentation is a difficult task due to the variance and complexity of tumors. In this paper, a tumor segmentation scheme based on statistical structure analysis is presented, which focuses on the structural analysis of both tumorous and normal tissues. First, three kinds of features (intensity-based, symmetry-based, and texture-based) are extracted from structural elements. Then a classification technique using AdaBoost, which learns by selecting the most discriminative features, is proposed to classify the structural elements into normal and abnormal tissues. Experimental results on 140 tumor-containing brain MR images achieve an average accuracy of 96.82% on tumor segmentation.
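A minimal sketch of the classification step (structural elements labeled normal vs. abnormal with AdaBoost); the three feature groups are replaced by random stand-ins here, so this only illustrates the technique, not the paper's actual features.

```python
# Illustrative sketch: AdaBoost classification of per-element feature
# vectors into normal/abnormal tissue, with synthetic data throughout.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.random((500, 12))                   # 12 features per structural element
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # synthetic normal/abnormal labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100)  # boosting favors discriminative features
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```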
D. Shen, G. Wu and H. Suk, "Deep Learning in Medical Image Analysis," Annu. Rev. Biomed. Eng., vol. 19, pp. 221-248, Jun. 2017.

K. Kamnitsas, E. Ferrante, S. Parisot et al., "DeepMedic for Brain Tumor Segmentation," Int. Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pp. 138-149, 2016.

D. Wild, M. Weber and J. Egger, "A Client/Server Based Online Environment for Manual Segmentation of Medical Images," The 23rd Central European Seminar on Computer Graphics, pp. 1-8, Apr. 2019.

H.C. Shin, N.A. Tenenholtz, J.K. Rogers et al., "Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks," arXiv:1807.10225, Jul. 2018.

O. Ronneberger, P. Fischer and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," arXiv:1505.04597, 2015.

F. Milletari, N. Navab and S. Ahmadi, "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation," arXiv:1606.04797, Jun. 2016.