Conference Paper

Abstract

Image processing for MRI can be performed in both the spatial and the frequency domain. The wavelet transform is one frequency-domain representation that can be used for such operations. In compressed sensing, the discrete wavelet transform (DWT) is used to remove artefacts from the images by thresholding them iteratively in the wavelet domain, which makes the reconstruction time-consuming. In this work, a deep learning approach is proposed to act as the thresholding operator in the wavelet space. The idea is that if a thresholding pattern can be learned from the undersampled data itself, then new, unseen data can be thresholded with this learned operator without any iterations, making the operation faster. The nonlinearity of deep learning can also help to discover additional filtering patterns in the data.
2020-08-12 18:55:42
ESMRMB 2020 - 37th Annual Scientific Meeting
Topic: MRI Development and Validation: Motion and Artefacts
Type: Oral Presentation
Abstract no.: A-1366
Status: submitted
Young Investigator Award: No, I won't apply for the Young Investigator Award
Moderator at ESMRMB 2020 Congress: Yes, I would like to apply to become a moderator
If yes, please indicate: Machine Learning
If yes, please indicate: 1 MRI-DV - Motion and Artefacts
Specify category: Junior
Wavelet filtering of undersampled MRI using trainable wavelets and CNN
S. Chatterjee1, P. Putti2, A. Nürnberger3, O. Speck1
1Otto-von-Guericke University, Department of Biomedical Magnetic Resonance, Magdeburg, Germany; 2Otto-von-Guericke University, Faculty of Computer Science, Magdeburg, Germany; 3Otto-von-Guericke University, Data and Knowledge Engineering Group, Magdeburg, Germany
Introduction
Image processing for MRI can be performed in both the spatial and the frequency domain. The wavelet transform is one frequency-domain representation that can be used for such operations. In compressed sensing[1], the discrete wavelet transform (DWT) is used to remove artefacts from the images by thresholding them iteratively in the wavelet domain, which makes the reconstruction time-consuming. In this work, a deep learning approach is proposed to act as the thresholding operator in the wavelet space. The idea is that if a thresholding pattern can be learned from the undersampled data itself, then new, unseen data can be thresholded with this learned operator without any iterations, making the operation faster. The nonlinearity of deep learning can also help to discover additional filtering patterns in the data.
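For reference, the iterative wavelet-domain thresholding that compressed sensing relies on[1] can be sketched as follows. This is a minimal illustration using pytorch_wavelets[2]: the function name, wavelet, threshold and iteration count are arbitrary example choices, and the k-space data-consistency step of a full compressed-sensing reconstruction is omitted; it is not the authors' implementation.

```python
# Minimal sketch of iterative wavelet-domain soft thresholding, as used in
# compressed sensing[1]. Wavelet, threshold and iteration count are
# illustrative; the data-consistency step of a full reconstruction is omitted.
import torch
from pytorch_wavelets import DWTForward, DWTInverse  # [2]

dwt = DWTForward(J=1, wave='db4', mode='zero')
idwt = DWTInverse(wave='db4', mode='zero')

def soft_threshold(x, lam):
    # Shrink coefficients towards zero; small, noise-like coefficients vanish.
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

def iterative_wavelet_filtering(image, lam=0.05, n_iter=50):
    # image: (N, 1, H, W) artefact-corrupted magnitude image (H, W even)
    x = image.clone()
    for _ in range(n_iter):
        yl, yh = dwt(x)                                # low- and high-pass bands
        yh = [soft_threshold(band, lam) for band in yh]
        x = idwt((yl, yh))                             # back to the image domain
    return x
```

Because every image has to pass through many such iterations at reconstruction time, the proposed approach instead learns the thresholding behaviour during training, so that unseen data needs only a single forward pass.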
Subjects/Methods
Fig1 illustrates the layers of the CNN blocks and Fig2 shows the model architecture. Each of the convolution layers is followed by a PReLU whose number of parameters equals the number of output feature maps of the preceding layer. Undersampled 3D volumes were split into multiple 2D images and the DWT[2] was applied, which resulted in two components: the first with 1 channel and the second with 3 channels, both half the size of the input image. These two components were sent to CNN Block1 and Block2, respectively. The input of each block was added back to its output to create a residual link[3]. Two of these residual blocks were stacked on top of one another, and the outputs of the second set of CNN blocks were then merged to undergo an inverse DWT. The IDWT brought the model output back to the spatial domain, where the loss was calculated. Backpropagation led to an update of the DWT coefficients, which learned a thresholding pattern. In general, the filters (low-pass and high-pass) of the DWT are static and pre-computed; in this implementation, they were made trainable: initialised with the default values[2] and updated during training, making it a trainable wavelet. A simplified code sketch of the architecture is given below the figure captions.
Fig1: CNN blocks with 1- and 3-channel input
Fig2: Model architecture
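The description above can be summarised in the following PyTorch sketch. It is an illustration under several assumptions, not the authors' code: the class names (CNNBlock, TrainableDWT, WaveletFilteringCNN), channel widths and kernel sizes are placeholders for the values shown in Fig1 and Fig2; the trainable wavelet is written here as a single-level transform built from explicitly parameterised low-pass and high-pass filters, initialised with Haar coefficients as a stand-in for the library defaults[2]; and tying the same trainable filters to both the forward and inverse transform is also an assumption of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBlock(nn.Module):
    """Residual CNN block: each convolution is followed by a PReLU whose number
    of parameters equals the number of output feature maps of the preceding
    layer, and the block input is added back to its output[3]. Widths and
    kernel size are illustrative; the actual values are those shown in Fig1."""
    def __init__(self, channels, widths=(64, 64)):
        super().__init__()
        layers, in_ch = [], channels
        for w in widths:
            layers += [nn.Conv2d(in_ch, w, kernel_size=3, padding=1),
                       nn.PReLU(num_parameters=w)]
            in_ch = w
        # Map back to the input channel count so the residual addition works.
        layers += [nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                   nn.PReLU(num_parameters=channels)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)          # residual link[3]

class TrainableDWT(nn.Module):
    """Single-level 2D DWT whose low-pass/high-pass filters are trainable
    parameters, initialised here with Haar coefficients as a stand-in for the
    pytorch_wavelets defaults[2] and updated by backpropagation."""
    def __init__(self):
        super().__init__()
        s = 2 ** -0.5
        self.lo = nn.Parameter(torch.tensor([s, s]))    # trainable low-pass
        self.hi = nn.Parameter(torch.tensor([s, -s]))   # trainable high-pass

    def _kernels(self):
        # Outer products of the 1D filters give the LL, LH, HL, HH 2D kernels.
        ll, lh = torch.outer(self.lo, self.lo), torch.outer(self.hi, self.lo)
        hl, hh = torch.outer(self.lo, self.hi), torch.outer(self.hi, self.hi)
        return torch.stack([ll, lh, hl, hh]).unsqueeze(1)   # (4, 1, 2, 2)

    def forward(self, x):                                   # x: (N, 1, H, W)
        y = F.conv2d(x, self._kernels(), stride=2)          # (N, 4, H/2, W/2)
        return y[:, :1], y[:, 1:]                           # 1-ch and 3-ch parts

    def inverse(self, yl, yh):
        # Adjoint of the analysis step: an exact inverse while the filters stay
        # orthonormal, approximate once they drift during training.
        return F.conv_transpose2d(torch.cat([yl, yh], dim=1),
                                  self._kernels(), stride=2)

class WaveletFilteringCNN(nn.Module):
    """Overall architecture: trainable DWT, two stacked pairs of residual CNN
    blocks (one per wavelet component), then the inverse DWT back to the image
    domain, where the loss is computed."""
    def __init__(self):
        super().__init__()
        self.dwt = TrainableDWT()
        self.stage1 = nn.ModuleList([CNNBlock(1), CNNBlock(3)])
        self.stage2 = nn.ModuleList([CNNBlock(1), CNNBlock(3)])

    def forward(self, x):
        yl, yh = self.dwt(x)                            # 1- and 3-channel parts
        yl, yh = self.stage1[0](yl), self.stage1[1](yh)
        yl, yh = self.stage2[0](yl), self.stage2[1](yh)
        return self.dwt.inverse(yl, yh)
```

A training step would then look like loss = criterion(model(undersampled), fully_sampled) followed by loss.backward() (variable names hypothetical), which updates both the CNN weights and the wavelet filter coefficients, as described above.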
Results/Discussion
This approach uses only a small number of layers and has shown promising results by performing the filtering in the wavelet domain. An example result is shown in Fig3. Compressed sensing approaches work only for 2D variable-density sampling, where the artefacts are noise-like; this approach, however, also performed well for 1D variable-density sampling. Further tests must be performed for different patterns and levels of undersampling.
Furthermore, the same strategy could be applied to remove artefacts from EEG signals and background noise in speech recognition.
Fig3: Left: an example result for qualitative analysis; right: quantitative analysis
References
[1] Lustig et al.: "Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging". Magnetic Resonance in Medicine (2007)
[2] https://github.com/fbcotter/pytorch_wavelets
[3] He et al.: "Deep Residual Learning for Image Recognition". arXiv:1512.03385v1 (2015)
Authors
First author: Soumick Chatterjee
Presented by: Soumick Chatterjee
Submitted by: Soumick Chatterjee