Project log
Data has become the most valuable resource in today's world. With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of data is of great interest. In this context, high-quality training, validation and test datasets are particularly needed. Volumetric data is an especially important resource in medicine, with applications ranging from disease diagnosis to therapy monitoring. When sufficient data is available, models can be trained to support doctors in these tasks. Unfortunately, there are scenarios and applications where large amounts of data are unavailable. In the medical field, for example, rare diseases and privacy issues can lead to restricted data availability; in non-medical fields, the high cost of obtaining a sufficient amount of high-quality data can also be a concern. A possible solution to these problems is the generation of synthetic data for data augmentation, in combination with more traditional augmentation methods. Accordingly, most publications on 3D Generative Adversarial Networks (GANs) fall within the medical domain. Mechanisms for generating realistic synthetic data are a valuable asset for overcoming this challenge, especially in healthcare, where the data must be of high quality, realistic, and free of privacy issues. In this review, we provide a summary of works that generate realistic 3D synthetic data using GANs. We outline GAN-based methods in these areas, their common architectures, and their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges and research opportunities to provide a holistic overview of the current state of GANs in medicine and other fields.
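To make the class of models surveyed here concrete, the following is a minimal sketch of a 3D DCGAN-style generator and discriminator in PyTorch; the 32³ volume size, channel counts and latent dimension are illustrative assumptions and do not correspond to any particular reviewed method.

```python
# Minimal 3D DCGAN-style generator/discriminator sketch (PyTorch).
# Volume size (32^3), channel counts and latent dimension are illustrative assumptions.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4x4 feature volume
            nn.ConvTranspose3d(latent_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, 4, 2, 1),   # 8^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, 2, 1),    # 16^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 1, 4, 2, 1),      # 32^3, single-channel volume
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, latent_dim) -> (batch, 1, 32, 32, 32)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),    # 16^3
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 8^3
            nn.Conv3d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True), # 4^3
            nn.Conv3d(256, 1, 4, 1, 0),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

# Smoke test: generate a small batch of synthetic volumes and score them.
z = torch.randn(2, 128)
fake = Generator3D()(z)           # (2, 1, 32, 32, 32)
logits = Discriminator3D()(fake)  # (2,)
```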
Lars Heiliger
Anjany Sekuboyina
Bjoern H Menze
- [...]
Jens Kleesiek
Healthcare data are inherently multimodal. Almost all data generated and acquired during a patient's life can be hypothesized to contain information relevant to providing optimal personalized healthcare. Data sources such as ECGs, doctor's notes, and histopathological and radiological images all contribute to informing a physician's treatment decision. However, most machine learning methods in healthcare focus on single-modality data. This becomes particularly apparent in radiology, which, due to its information density, accessibility, and computational interpretability, constitutes a central pillar in the healthcare data landscape and has traditionally been one of the key target areas of medically focused machine learning. Computer-assisted diagnostic systems of the future should be capable of simultaneously processing multimodal data, thereby mimicking physicians, who also consider a multitude of resources when treating patients. Against this background, this review offers a comprehensive assessment of multimodal machine learning methods that combine data from radiology and other medical disciplines. It establishes a modality-based taxonomy and discusses common architectures and design principles, evaluation approaches, challenges, and future directions. This work will enable researchers and clinicians to understand the topography of the domain, describe the state of the art, and identify gaps for future research in multimodal medical machine learning.
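As a purely illustrative example of one common design principle, the sketch below shows a simple late-fusion classifier that concatenates an imaging embedding with a clinical-feature embedding; the encoders, feature sizes and class count are assumptions made for demonstration, not an architecture taken from the review.

```python
# Illustrative late-fusion sketch for multimodal healthcare data (PyTorch).
# Encoders, feature sizes and class count are assumptions for demonstration only.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, img_dim=512, text_dim=256, n_classes=2):
        super().__init__()
        # Placeholder unimodal encoders; in practice these would be e.g. a CNN and a text model.
        self.img_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(img_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.LazyLinear(text_dim), nn.ReLU())
        # Fusion: concatenate unimodal embeddings, then classify jointly.
        self.head = nn.Sequential(nn.Linear(img_dim + text_dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, image, clinical_features):
        fused = torch.cat([self.img_encoder(image),
                           self.text_encoder(clinical_features)], dim=1)
        return self.head(fused)

# Example: one batch with a 2D image and a flat clinical feature vector.
model = LateFusionClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 300))
```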
Lydia Lindner
Dominik Narnhofer
Maximilian Weber
- [...]
Jan Egger
In this work, fully automatic binary segmentation of GBMs (glioblastoma multiforme) in 2D magnetic resonance images is presented, using a convolutional neural network trained exclusively on synthetic data. The precise segmentation of brain tumors is one of the most complex and challenging tasks in clinical practice and is usually done manually by radiologists or physicians. However, manual delineations are time-consuming, subjective and in general not reproducible. Hence, more advanced automated segmentation techniques are in great demand. Having already demonstrated their practical usefulness in other domains, deep learning methods are now also attracting increasing interest in medical image processing. Using fully convolutional neural networks for medical image segmentation provides considerable advantages, as it is a reliable, fast and objective technique. In the medical domain, however, only a very limited amount of data is available in the majority of cases, due, among other things, to privacy issues. Nevertheless, a sufficiently large training dataset with ground truth annotations is required to successfully train a deep segmentation network. Therefore, a semi-automatic method for generating synthetic GBM data and the corresponding ground truth was utilized in this work. A U-Net-based segmentation network was then trained solely on this synthetically generated dataset. Finally, the segmentation performance of the model was evaluated using real magnetic resonance images of GBMs.
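Segmentation performance on real MRI data is typically reported with overlap metrics; a minimal, generic Dice score computation on binary masks is sketched below (this is not the evaluation code used in the work itself).

```python
# Generic Dice similarity coefficient for binary segmentation masks (NumPy).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Example on a toy 2D slice.
pred = np.zeros((128, 128), dtype=bool); pred[40:80, 40:80] = True
gt   = np.zeros((128, 128), dtype=bool); gt[45:85, 45:85] = True
print(f"Dice: {dice_score(pred, gt):.3f}")
```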
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Medical image segmentation in particular plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in recent years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training dataset from Positron Emission Tomography/Computed Tomography image data. This is done by thresholding the Positron Emission Tomography data to obtain ground truth labels and by utilizing data augmentation to enlarge the dataset. We discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results suggest that deep neural networks are a promising approach for segmenting the urinary bladder in CT images.
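The ground-truth generation step can be pictured as follows: a minimal sketch, assuming a simple relative threshold on the co-registered PET volume and keeping the largest connected component. The threshold value and helper names are illustrative, not the exact procedure used in the study.

```python
# Sketch: derive a binary ground-truth mask from a co-registered PET volume by
# thresholding, then keep the largest connected component.
# The relative threshold (40% of max uptake) is an illustrative assumption.
import numpy as np
from scipy import ndimage

def pet_to_mask(pet_volume: np.ndarray, rel_threshold: float = 0.4) -> np.ndarray:
    mask = pet_volume > rel_threshold * pet_volume.max()
    labels, n = ndimage.label(mask)              # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)      # largest component only

# pet = load_registered_pet(...)   # hypothetical loader
# bladder_mask = pet_to_mask(pet)  # boolean volume aligned with the CT
```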
Lydia Lindner
Malgorzata Kolodziej
Jan Egger
Skull-stripped Contrast-Enhanced MRI Datasets. Please use the following citations if you use them in your work:
L. Lindner, M. Kolodziej, J. Egger. Skull-stripped Contrast-Enhanced MRI Datasets. Figshare, 2018.
and
J. Egger, et al. GBM Volumetry using the 3D Slicer Medical Image Computing Platform. Sci Rep., 3:1364, 2013.
Further Information:
Ten contrast-enhanced T1-weighted Magnetic Resonance Imaging (MRI) datasets of healthy brains.
All datasets are skull-stripped, and datasets with a sagittal or coronal scanning direction have been reformatted to an axial direction.
Skull-stripping has been performed with BrainSuite.
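The reformatting step is not tied to a specific tool here; as one possible way to bring sagittal or coronal volumes into an axial (RAS) orientation, the following sketch uses nibabel, with placeholder file names.

```python
# Sketch: reorient an MRI volume to a canonical axial (RAS) orientation with nibabel.
# File names are placeholders; the datasets themselves were skull-stripped with BrainSuite.
import nibabel as nib

img = nib.load("subject01_T1_contrast.nii.gz")       # hypothetical input path
canonical = nib.as_closest_canonical(img)            # reorder axes/flips toward RAS+
nib.save(canonical, "subject01_T1_contrast_axial.nii.gz")
print(canonical.shape, nib.aff2axcodes(canonical.affine))  # e.g. ('R', 'A', 'S')
```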
Lydia Lindner
Birgit Pfarrkirchner
Christina Gsaxner
- [...]
Jan Egger
Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and for monitoring tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming and neither accurate nor reliable, there is a need for objective, robust and fast automated segmentation methods that provide competitive performance. Therefore, deep learning-based approaches are gaining interest in the field of medical image segmentation. When the training dataset is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method for creating a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors, more specifically glioblastomas, together with the corresponding ground truth, which can subsequently be used to train deep neural networks.
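The proposed generation pipeline itself is semi-automatic and not reproduced here; purely to illustrate the idea of pairing a synthetic lesion with its ground truth, the toy sketch below blends an ellipsoidal "tumor" into a 2D slice and returns the matching binary mask, with all shapes and intensities invented for demonstration.

```python
# Toy illustration only: blend a synthetic ellipsoidal "tumor" into a 2D slice and
# return the matching ground-truth mask. Shapes/intensities are invented and do NOT
# reproduce the semi-automatic generation method described above.
import numpy as np

def add_synthetic_lesion(slice_2d, center, radii, intensity=0.9, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[: slice_2d.shape[0], : slice_2d.shape[1]]
    mask = (((yy - center[0]) / radii[0]) ** 2 + ((xx - center[1]) / radii[1]) ** 2) <= 1.0
    out = slice_2d.copy()
    out[mask] = intensity + 0.05 * rng.standard_normal(mask.sum())  # textured lesion
    return out, mask  # image with lesion, binary ground truth

# slice_with_tumor, gt = add_synthetic_lesion(np.zeros((256, 256)), (128, 100), (30, 22))
```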
Christina Gsaxner
Birgit Pfarrkirchner
Lydia Lindner
- [...]
Jan Egger
Accurate segmentation of medical images is a key step in medical image processing. As the amount of medical images obtained in diagnostics, clinical studies and treatment planning increases, automatic segmentation algorithms become increasingly important. Therefore, we plan to develop an automatic segmentation approach for the urinary bladder in computed tomography (CT) images using deep learning.
Birgit Pfarrkirchner
Christina Gsaxner
Lydia Lindner
- [...]
Jan Egger
Segmentation is an important branch of medical image processing and the basis for further detailed investigations on computed tomography (CT), magnetic resonance imaging (MRI), X-ray, ultrasound (US) or nuclear images. Through segmentation, an image is divided into various connected areas that correspond to certain tissue types. A common aim is to delineate healthy and pathologic tissue. A frequent example in medicine is the identification of a tumor or pathological lesion and its volume in order to evaluate treatment planning and outcome. In the clinical routine, segmentation is necessary for planning specific treatment tasks, for example in radiation therapy, or for creating three-dimensional (3D) visualizations and models to simulate a surgical procedure. Segmentation can be classified into several families of techniques, such as thresholding, region growing, watershed, edge-based approaches, active contours and model-based algorithms. Recently, deep learning using neural networks has become increasingly important for automatic segmentation applications.
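Two of the classical techniques named above, thresholding and region growing, can be illustrated in a few lines; the sketch below uses arbitrary parameter values and 2D images for simplicity.

```python
# Minimal illustrations of two classical techniques named above:
# global thresholding and intensity-based region growing. Parameters are arbitrary.
import numpy as np
from collections import deque

def threshold_segmentation(image, low, high):
    return (image >= low) & (image <= high)  # boolean mask of in-range pixels

def region_growing(image, seed, tolerance):
    """Grow a region from `seed`, accepting 4-neighbours within `tolerance` of the seed value."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or abs(float(image[y, x]) - seed_val) > tolerance:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask
```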
Birgit Pfarrkirchner
Christina Gsaxner
Lydia Lindner
- [...]
Jan Egger
In this work, the preparation of lower jawbone data for deep learning is proposed. In ten cases, surgeons segmented the lower jawbone in each slice to generate the ground truth. Since the number of available images was deemed insufficient to train a deep neural network, the data was augmented with geometric transformations and added noise. Flipping, rotating and scaling, as well as the addition of various noise types (uniform, Gaussian and salt-and-pepper), were connected within a global macro module in MeVisLab. Our module can prepare data for general deep learning tasks in an automatic and flexible way, and further augmentation methods can easily be incorporated.
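The MeVisLab macro itself is not shown here; as a rough NumPy/SciPy analogue, the sketch below applies the same kinds of transformations (flipping, rotation, scaling, and uniform, Gaussian and salt-and-pepper noise) to a 2D slice.

```python
# Rough NumPy/SciPy analogue of the augmentations combined in the MeVisLab macro:
# flipping, rotating, scaling, and uniform / Gaussian / salt-and-pepper noise.
import numpy as np
from scipy import ndimage

def augment(image, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    out = []
    out.append(np.fliplr(image))                                   # horizontal flip
    out.append(ndimage.rotate(image, angle=15, reshape=False))     # rotation
    out.append(ndimage.zoom(image, zoom=1.1)[: image.shape[0], : image.shape[1]])  # scaling (cropped)
    out.append(image + rng.uniform(-0.05, 0.05, image.shape))      # uniform noise
    out.append(image + rng.normal(0, 0.05, image.shape))           # Gaussian noise
    sp = image.copy()                                              # salt-and-pepper noise
    coords = rng.random(image.shape)
    sp[coords < 0.01] = image.min()
    sp[coords > 0.99] = image.max()
    out.append(sp)
    return out

# augmented = augment(slice_2d)   # six variants per input slice
```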
Christina Gsaxner
Birgit Pfarrkirchner
Lydia Lindner
- [...]
Jan Egger
Accurate segmentation of medical images is a key step in medical image processing. As the amount of medical images obtained in diagnostics, clinical studies and treatment planning increases, automatic segmentation algorithms become increasingly important. Therefore, we plan to develop an automatic segmentation approach for the urinary bladder in computed tomography (CT) images using deep learning. Training such a neural network requires a large amount of labeled training data. However, public datasets of medical images with segmented ground truth are scarce. We overcome this problem by generating binary masks of the 18F-FDG-enhanced urinary bladder from images obtained with a multi-modal scanner delivering registered CT and Positron Emission Tomography (PET) image pairs. Since PET images offer good contrast, a simple thresholding algorithm suffices for segmentation. We apply data augmentation to these datasets to increase the amount of available training data. In this contribution, we present algorithms developed with the medical image processing and visualization platform MeVisLab to achieve our goals. With the proposed methods, accurate segmentation masks of the urinary bladder could be generated, and the given datasets could be enlarged by a factor of up to 2500.
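The enlargement factor arises from combining independent augmentation choices; the numbers below are purely hypothetical and only illustrate how such factors multiply up to 2500, not the actual parameter grid used in the work.

```python
# Purely hypothetical illustration of how combined augmentation options multiply the
# dataset size; the real parameter grid behind the factor of 2500 is not given here.
n_rotations   = 10   # rotation angles (hypothetical)
n_scales      = 5    # scaling factors (hypothetical)
n_flips       = 2    # original + mirrored
n_noise_types = 5    # noise variants (hypothetical)
n_crops       = 5    # translations/crops (hypothetical)

factor = n_rotations * n_scales * n_flips * n_noise_types * n_crops
print(factor)  # 2500 variants per original image under these assumed settings
```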