Proceedings of Machine Learning Research – XXXX:1–11, 2019 Full Paper – MIDL 2019
A Hybrid, Dual Domain, Cascade of Convolutional Neural
Networks for Magnetic Resonance Image Reconstruction
Roberto Souza 1,2  roberto.medeirosdeso@ucalgary.ca
R. Marc Lebel 3  marc.lebel@ge.com
Richard Frayne 1,2  rfrayne@ucalgary.ca
1 Department of Radiology and Clinical Neuroscience, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
2 Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, AB, Canada
3 General Electric Healthcare, Calgary, AB, Canada
Editors: Under Review for MIDL 2019
Abstract
Deep-learning-based magnetic resonance (MR) imaging reconstruction techniques have the potential to accelerate MR image acquisition by reconstructing, in real time, clinical-quality images from k-spaces sampled at rates lower than specified by the Nyquist-Shannon sampling theorem, an approach known as compressed sensing. In the past few years, several deep learning network architectures have been proposed for MR compressed sensing reconstruction. After examining the successful elements in these network architectures, we propose a hybrid frequency-/image-domain cascade of convolutional neural networks intercalated with data consistency layers that is trained end-to-end for compressed sensing reconstruction of MR images. We compare our method with five recently published deep-learning-based methods using MR raw data. Our results indicate that our architecture improvements were statistically significant (Wilcoxon signed-rank test, p < 0.05). Visual assessment of the reconstructed images confirms that our method outputs images similar to the fully sampled reconstruction reference.
Keywords: Magnetic resonance imaging, image reconstruction, compressed sensing, deep
learning.
1. Introduction
Magnetic resonance (MR) is a non-ionizing imaging modality that possesses far superior soft-tissue contrast compared to other imaging modalities (Nishimura, 1996). It allows us to investigate both structure and function of the brain and body. The major drawback of MR is its long acquisition times, which can easily exceed 30 minutes per subject scanned (Zbontar et al., 2018). These long acquisition times make MR susceptible to motion artifacts and make it difficult or impossible to image dynamic physiology.
MR data is collected in the Fourier domain, known as k-space, and acquisition times are proportional to k-space sampling rates. Compressed sensing (CS) MR reconstruction is a technique that reconstructs high-quality images from MR data incoherently sampled at rates below those specified by the Nyquist-Shannon sampling theorem (Lustig et al., 2008).
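The zero-filled reconstruction that results from such undersampling can be sketched in a few lines of NumPy (a minimal illustration; the array shapes, sampling rate, and variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fully sampled "ground truth" image and its k-space.
image = rng.standard_normal((256, 256))
k_full = np.fft.fft2(image)

# Binary sampling mask keeping roughly 25% of k-space locations.
mask = rng.random((256, 256)) < 0.25

# Zero-filled reconstruction: unmeasured frequencies are left at zero,
# which produces the aliased image that deep networks take as input.
k_under = k_full * mask
zero_filled = np.fft.ifft2(k_under)
```

In practice the mask is not uniformly random; incoherent variable-density patterns (such as the Gaussian patterns used later in this paper) concentrate samples at low spatial frequencies.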
In recent years, several deep-learning-based MR compressed sensing reconstruction techniques have been proposed (Zhang et al., 2018; Seitzer et al., 2018; Jin et al., 2017; Lee et al., 2017; Quan et al., 2018; Schlemper et al., 2018a; Yang et al., 2018; Zhu et al., 2018; Souza and Frayne, 2018; Eo et al., 2018b,a; Schlemper et al., 2018b; Yu et al., 2017). This rapid growth in the number of publications can be explained by the success of deep learning in many medical imaging problems (Litjens et al., 2017) and by its potential to reconstruct images in real time. Traditional CS methods are iterative and usually not suitable for fast reconstruction.
© 2019 R. Souza, R.M. Lebel & R. Frayne.
In this work, a hybrid frequency-domain/image-domain cascade of convolutional neural networks (CNNs) trained end-to-end for MR CS reconstruction is proposed. We analyze it in a single-coil acquisition setting, since many older scanners still use it (Zbontar et al., 2018), and since it also serves as a proof of concept that can potentially be generalized to more complex scenarios, such as parallel imaging (Deshmane et al., 2012). We compare our method with five recently published deep-learning-based models using MR raw data. We tested our model with acceleration factors of 4× and 5× (corresponding to reductions in data acquisition of 75% and 80%, respectively). Our results indicate that the improvements in our hybrid cascade are statistically significant compared to five other approaches (Yang et al., 2018; Quan et al., 2018; Souza and Frayne, 2018; Schlemper et al., 2018a; Eo et al., 2018a).
2. Brief Literature Review
In the past couple of years, many deep-learning models have been proposed for MR CS reconstruction. Most of them were validated using private datasets and a single-coil acquisition setting. Initial works on MR CS reconstruction proposed to use U-net (Ronneberger et al., 2015) architectures with residual connections (Jin et al., 2017; Lee et al., 2017) to map from zero-filled k-space aliased reconstructions to unaliased reconstructions. Yu et al. (2017) proposed a deep de-aliasing network that incorporated a perceptual and an adversarial component. Their work was further enhanced by Yang et al. (2018), who proposed a deep de-aliasing generative adversarial network (DAGAN) that uses a residual U-net as its generator combined with a loss function that incorporates image-domain, frequency-domain, perceptual and adversarial information. Quan et al. (2018) proposed a generative adversarial network with a cyclic loss (Zhu et al., 2017). The cyclic loss tries to enforce that the mapping between input and output is a bijection, i.e., invertible.
The work of Schlemper et al. (2018a) moved away from U-nets. They proposed and implemented a model that consists of a deep cascade (Deep-Cascade) of CNNs intercalated with data consistency (DC) blocks that replace the network-estimated k-space frequencies with the frequencies obtained in the sampling process. Seitzer et al. (2018) built upon Deep-Cascade by adding a visual refinement network that is trained independently using the result of Deep-Cascade as its input. In their experiments, their results improved in terms of semantic interpretability and mean opinion scores, but Deep-Cascade was still better in terms of peak signal-to-noise ratio (PSNR). Schlemper et al. (2018b) incorporated dilated convolutions and a stochastic component into the Deep-Cascade model. All techniques discussed so far use the aliased zero-filled reconstruction as a starting point. Frequency-domain information is only used either in the network loss function computation (e.g., DAGAN) or in the DC blocks to recover the sampled frequencies.
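The DC operation itself is simple: wherever a frequency was actually measured, overwrite the network's estimate with the measurement. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def data_consistency(k_pred, k_sampled, mask):
    """Replace network-estimated frequencies with acquired ones.

    k_pred    : complex k-space estimated by the network
    k_sampled : acquired (undersampled) k-space, zeros where not measured
    mask      : boolean array, True where a frequency was measured
    """
    return np.where(mask, k_sampled, k_pred)
```

Deep-Cascade also describes a noisy-acquisition variant that blends the predicted and acquired frequencies instead of hard replacement; the hard version above corresponds to the noiseless case.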
Zhu et al. (2018) proposed a unified framework for reconstruction called automated transform by manifold approximation (AUTOMAP). It tries to learn the transform from undersampled k-space to the image domain through fully connected layers followed by convolutional layers in the image domain. The major drawback of their proposal is that its parameter complexity grows quadratically with the number of image pixels (voxels). For 256 × 256 images, the AUTOMAP model has >10^10 trainable parameters. Eo et al. (2018a) proposed KIKI-net, a dual-domain architecture that cascades k-space-domain networks with image-domain networks, interleaved by data consistency layers and the appropriate domain transform. The major advantage of KIKI-net is that it does not try to learn the domain transform. Therefore, it reduces the number of trainable parameters compared to AUTOMAP by a factor of 10,000, while still leveraging information from both the k-space and image domains. In their model, each of the four networks that compose KIKI-net is trained independently. KIKI-net is the deepest model proposed so far for MR CS: it has one hundred convolutional layers, and the reconstruction time for a single 256 × 256 slice is 14 seconds on an NVIDIA GeForce GTX TITAN graphics processing unit (GPU), which is prohibitive for real-time reconstruction.
Souza and Frayne (2018) proposed the W-net model, which consists of a k-space U-net connected to an image-domain U-net through the inverse Fourier transform (FT). The W-net model is trained end-to-end, as opposed to KIKI-net, and it also does not try to learn the domain transform. W-net reconstructions were shown to arguably work better (i.e., fewer processing failures) with the FreeSurfer (Fischl, 2012) post-processing tool.
The work of Eo et al. (2018b) proposes a multi-layer perceptron that estimates a target image from a one-dimensional inverse FT of k-space, followed by a CNN. Their method's parameter complexity grows linearly with the number of image pixels (voxels), as opposed to AUTOMAP's quadratic complexity.
Recently, the fastMRI initiative (Zbontar et al., 2018) made single-coil and multi-coil knee raw MR data publicly available for benchmarking purposes. The Calgary-Campinas initiative (Souza et al., 2017) has also added brain MR raw data to their dataset. Both initiatives aim to provide a standardized comparison method that will help researchers more easily compare and assess potential improvements of new models.
3. Hybrid Cascade Model
Based on recent trends in the field of deep-learning-based MR reconstruction, we developed a model that incorporates elements that have improved MR reconstruction. Our proposal is a hybrid unrolled cascade structure with DC layers between consecutive CNN blocks that is fully trained end-to-end (Figure 1). We opted not to use an adversarial component in our model for two main reasons: 1) the model already outputs realistic images that are hard for a human expert to tell apart from a fully sampled reconstruction (see results); and 2) a discriminator block can always be incorporated after training the reconstruction (i.e., generator) network.
Our hybrid cascade model receives as input the zero-filled reconstruction from undersampled k-space, which is represented as a two-channel image: one channel stores the real part and the other stores the imaginary part of the complex signal.
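This real/imaginary split and its inverse can be written as (our naming):

```python
import numpy as np

def complex_to_channels(x):
    # (H, W) complex -> (H, W, 2) real: channel 0 = real, channel 1 = imaginary
    return np.stack([x.real, x.imag], axis=-1)

def channels_to_complex(x):
    # (H, W, 2) real -> (H, W) complex
    return x[..., 0] + 1j * x[..., 1]
```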
The first CNN block in the cascade, unlike KIKI-net, is an image domain CNN. The
reason for this is that k-space is usually heavily undersampled at higher spatial frequencies.
Figure 1: Architecture of the proposed Hybrid Cascade model. It has four image domain
CNN blocks and two k-space CNN blocks. We start and end the network using
an image domain CNN.
If the cascade started with a k-space CNN block, there would potentially be regions where
the convolutional kernel would have no signal to operate upon. Thus, a deeper network
having a larger receptive field would be needed, which would increase reconstruction times.
By starting with an image-domain CNN block, and because of the global property of the FT, the output of this network has a corresponding k-space that is now complete. This allows the subsequent CNN block, which operates in the k-space domain, to perform better due to the absence of regions without signal (i.e., caused by zero-filling), without the need to make the network deeper. The last CNN block of our architecture is also in the image domain. This
decision was made empirically. Between the initial and final image domain CNN blocks,
we alternate between k-space and image domain CNN blocks. Connecting the CNN blocks,
we have the appropriate domain transform (FT or inverse FT) and the DC operator. It is
important to emphasize that unlike AUTOMAP, we do not learn the FT.
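One pass through the cascade can therefore be sketched as follows, with each CNN block replaced by a placeholder function and the domain transforms and DC written explicitly. This is a structural sketch under our assumptions, not the trained model; in particular, the exact block ordering below (image, k-space, image, k-space, image, image) is one plausible arrangement consistent with the description of four image-domain and two k-space-domain blocks that start and end in the image domain:

```python
import numpy as np

def dc(k_pred, k_sampled, mask):
    # Data consistency: keep acquired frequencies, use predictions elsewhere.
    return np.where(mask, k_sampled, k_pred)

def cascade(x0, k_sampled, mask, blocks):
    """blocks: list of (domain, cnn) pairs, domain in {'image', 'kspace'};
    each cnn maps a complex array to a complex array (placeholders here)."""
    x = x0  # zero-filled image-domain input
    for domain, cnn in blocks:
        if domain == 'image':
            x = cnn(x)
            x = np.fft.ifft2(dc(np.fft.fft2(x), k_sampled, mask))
        else:  # k-space block
            k = cnn(np.fft.fft2(x))
            x = np.fft.ifft2(dc(k, k_sampled, mask))
    return x

# Identity CNNs stand in for the trained blocks in this illustration.
identity = lambda z: z
blocks = [('image', identity), ('kspace', identity), ('image', identity),
          ('kspace', identity), ('image', identity), ('image', identity)]
```

A useful property to verify with this sketch is that, whatever the CNN blocks do, the hard DC step guarantees the acquired frequencies survive to the output.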
Our CNN block architecture is the same whether it operates in the k-space or image domain. It is a residual network with five convolutional layers. The first four layers have 48 convolutional filters with 3 × 3 kernels. The activations are leaky rectified linear units with α = 0.1. The final layer has two convolutional filters with a 3 × 3 kernel size followed by a linear activation. All convolutional layers include bias terms. This architecture was empirically determined.
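A back-of-the-envelope parameter count for one such block, assuming 3 × 3 kernels and biases throughout and the two-channel complex representation at input and output:

```python
def conv_params(c_in, c_out, k=3):
    # weights + biases for one 2-D convolutional layer
    return c_in * c_out * k * k + c_out

block = (conv_params(2, 48)         # layer 1: 2 -> 48 channels
         + 3 * conv_params(48, 48)  # layers 2-4: 48 -> 48
         + conv_params(48, 2))      # layer 5: 48 -> 2, linear activation
```

Under these assumptions each block has 64,130 parameters, so six blocks would give 384,780, close to but not exactly the reported 380,172 total; some architectural detail likely differs slightly from our assumptions.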
Our hybrid cascade architecture (Figure 1) has a total of four image-domain and two k-space-domain CNN blocks. We train it using the mean squared error cost function. It has 380,172 trainable parameters, which is relatively small compared to other deep learning architectures, such as the U-net, which has >20,000,000 parameters. The number of parameters of our hybrid cascade model is of the same order of magnitude as the Deep-Cascade (∼500,000) and KIKI-net (>3.5 million) architectures. The main difference versus Deep-Cascade is our dual-domain component. The distinction from KIKI-net is that our network is trained end-to-end and that the hybrid cascade starts operating in the image domain as opposed to the k-space domain, allowing our network to have fewer layers and consequently to reconstruct images faster.
The depth of our cascade was set experimentally. Our source code will be made publicly available at https://github.com/rmsouza01/CD-Deep-Cascade-MR-Reconstruction. It allows experimenting with other cascade depths, CNN depths, and the domain of each CNN block.
4. Experimental setup
4.1. Dataset
We use the Calgary-Campinas brain MR raw data in this work (https://sites.google.com/view/calgary-campinas-dataset/home). The dataset has 45 volumetric T1-weighted fully sampled k-space datasets acquired on a clinical MR scanner (Discovery MR750; General Electric (GE) Healthcare, Waukesha, WI). The data were acquired with a 12-channel imaging coil and combined to simulate a single-coil acquisition using vendor-supplied tools (Orchestra Toolbox; GE Healthcare). The inverse FT was applied in one dimension, and Gaussian 2-dimensional sampling was performed retrospectively on the other two dimensions. Our training set has 4,254 slices coming from 25 subjects. The validation and test sets have 1,700 slices each, corresponding to the remaining 20 subjects. The training, validation, and test slices come from disjoint sets of subjects.
4.2. Metrics and statistical analysis
The metrics used in this work were normalized root mean squared error (NRMSE), PSNR, and structural similarity (SSIM) (Wang et al., 2004). Low NRMSE and high PSNR and SSIM values represent good reconstructions. The metrics are computed against the fully sampled reconstruction. These metrics were chosen because they are commonly used to assess CS MR reconstruction. We assessed statistical significance using the paired Wilcoxon signed-rank test with an alpha of 0.05.
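NRMSE and PSNR can be computed directly; the sketch below normalizes the RMSE by the reference's dynamic range and uses the reference maximum as the PSNR peak. Conventions for both vary across papers, so treat these particular choices as our assumptions:

```python
import numpy as np

def nrmse(ref, rec):
    # Root mean squared error normalized by the reference dynamic range.
    rmse = np.sqrt(np.mean((ref - rec) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, rec):
    # Peak signal-to-noise ratio in dB, peak taken as the reference maximum.
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)
```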
4.3. Compared Methods
We compared our method, which we will refer to as Hybrid-Cascade, against four previously published deep-learning-based methods that had publicly available source code, and against our own implementation of KIKI-net, which we will refer to as KIKI-net-like. The latter has six CNN blocks alternating between frequency-domain and image-domain CNNs, interleaved by DC blocks. Our KIKI-net-like implementation has the same number of trainable parameters as Hybrid-Cascade. Our goal, when comparing to KIKI-net-like, is to gain empirical evidence that initiating the cascade in the image domain can potentially lead to better reconstructions. The compared methods with public source code were: DAGAN (Yang et al., 2018), RefineGAN (Quan et al., 2018), W-net (Souza and Frayne, 2018) and Deep-Cascade (Schlemper et al., 2018a).
All networks were re-trained from scratch for two different sampling rates: 25% and 20%, corresponding to speed-ups of 4× and 5×, respectively. We used fixed Gaussian sampling patterns throughout training and testing (Figure 2).
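A fixed 2-D Gaussian variable-density mask at an exact target rate can be drawn once and reused across training and testing. The construction below is ours; the paper does not specify its exact pattern parameters, so the Gaussian width and the ranking trick are illustrative assumptions:

```python
import numpy as np

def gaussian_mask(shape, rate, std_frac=0.15, seed=0):
    """Variable-density mask keeping exactly `rate` of the locations,
    preferring low (centred) spatial frequencies via a Gaussian density."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    density = np.exp(-(((y - cy) / (std_frac * ny)) ** 2
                       + ((x - cx) / (std_frac * nx)) ** 2) / 2)
    # Rank locations by density-weighted random scores; keep the top fraction.
    scores = rng.random(shape) * density
    n_keep = int(round(rate * ny * nx))
    thresh = np.partition(scores.ravel(), -n_keep)[-n_keep]
    return scores >= thresh

mask = gaussian_mask((256, 256), rate=0.25)
```

Because the mask is fixed rather than redrawn per example, the networks see the same aliasing pattern at train and test time, matching the experimental setup described above.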
5. Results and Discussion
Hybrid-Cascade was the top-performing method for all metrics and acceleration factors. Although Hybrid-Cascade results were close to Deep-Cascade and KIKI-net-like, the differences were statistically significant for NRMSE and PSNR (p < 0.05). DAGAN and RefineGAN achieved the poorest results. W-net ranked fourth best. Quantitative results are summarized in Table 1.
(a) 25% sampling (b) 20% sampling
Figure 2: Gaussian sampling patterns used in the experiments.
DAGAN and RefineGAN lose relevant brain structural information in their reconstructions. W-net outputs visually pleasing reconstructions, but they lack textural information, which is encoded in the high frequencies. Hybrid-Cascade, Deep-Cascade and KIKI-net-like output very similar reconstructions, but small differences can be noticed, especially in the cerebellum region. Sample reconstructions for each technique are depicted in Figure 3. Starting with an image-domain CNN led to a greater error reduction in the first block of the cascade than starting with a k-space CNN (Figure 4).
It is interesting to note that the top three techniques in our analysis, Hybrid-Cascade, KIKI-net-like and Deep-Cascade, all use unrolled structures combined with DC, while DAGAN, RefineGAN and W-net all use some variation of a U-net architecture within their models. This makes us conjecture that flat, unrolled CNN architectures may be better-suited models for MR CS reconstruction.
Table 1: Summary of the results for the different architectures and acceleration factors. The best results for each metric and acceleration factor are shown in bold.
Acceleration factor Model NRMSE (%) PSNR (dB) SSIM
4×
DAGAN 2.925 ±1.474 31.330 ±3.112 0.84 ±0.11
RefineGAN 1.898 ±1.300 35.436 ±3.705 0.90 ±0.07
W-net 1.364 ±1.011 38.228 ±3.317 0.93 ±0.07
KIKI-net-like 1.178 ±1.022 39.640 ±3.355 0.95 ±0.06
Deep-Cascade 1.198 ±1.057 39.510 ±3.345 0.95 ±0.07
Hybrid-Cascade 1.151 ±1.022 39.871 ±3.380 0.96 ±0.06
5×
DAGAN 3.866 ±1.435 28.691 ±2.658 0.79 ±0.11
RefineGAN 2.273 ±1.401 33.844 ±3.825 0.87 ±0.09
W-net 1.645 ±1.085 36.501 ±3.226 0.92 ±0.09
KIKI-net-like 1.452 ±1.092 37.669 ±3.224 0.94 ±0.08
Deep-Cascade 1.453 ±1.106 37.668 ±3.202 0.94 ±0.08
Hybrid-Cascade 1.423 ±1.099 37.875 ±3.252 0.94 ±0.08
Intermediate outputs of Hybrid-Cascade for a sample subject are depicted in Figure 5. The input zero-filled reconstruction has an NRMSE of 14.76%, which drops to 2.41% after the first CNN block, the largest error drop throughout the cascade. The error keeps decreasing consistently up to the fifth CNN block. Then, the error goes up, but it
(a) DAGAN (b) RefineGAN (c) W-net
(d) KIKI-net-like (e) Deep-Cascade (f) Hybrid-Cascade (g) Reference
Figure 3: Sample reconstructions for all reconstruction techniques assessed, using a speed-up factor of 5×. Visually, Hybrid-Cascade, Deep-Cascade and KIKI-net-like are the most similar to the fully sampled reconstruction reference. W-net also presents a visually pleasing reconstruction, but it lacks textural information, i.e., high-frequency information is attenuated.
Figure 4: (a) Undersampled k-space and its corresponding zero-filled reconstruc-
tion (NRMSE=15.2%). (b) Output of first CNN block of KIKI-net-
like (NRMSE=4.0%). (c) Output of first CNN block of Hybrid-Cascade
(NRMSE=2.5%) and (d) reference fully sampled k-space and its image recon-
struction.
immediately goes back down again in the final CNN block. This finding was consistent across all test slices. Although an odd finding, it is not unexpected: since the network was optimized to minimize the mean squared error of the final CNN block's output, the error across intermediate outputs can oscillate, as happened in this case.
Figure 5: From the top left to the bottom right: input zero-filled reconstruction for a speed-up factor of 5×, output of each CNN block in the Hybrid-Cascade, and reference fully sampled reconstruction. An interesting finding is that the NRMSE increases at the output of the fifth CNN block in the cascade and then decreases again after the sixth block. This finding was consistent across all slices in the test set.
Concerning reconstruction times, we did not perform a systematic assessment. Our Hybrid-Cascade and KIKI-net-like implementations take on average 22 milliseconds to reconstruct a 256 × 256 slice on an NVIDIA GTX 1070 GPU, which is considerably faster than the 14 seconds that the original KIKI-net proposal takes to reconstruct a slice of the same size. W-net, DAGAN and RefineGAN also have reconstruction times on the order of a few milliseconds.
The Hybrid-Cascade model can be applied to multi-coil reconstruction by processing
each coil k-space independently, and then combining the resulting images through a sum
of squares algorithm. This approach would probably not be optimal, as it would disregard
complementary information across the k-spaces from each coil. The extension of DC to
multi-coil data is not straightforward and is still an open research question.
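The sum-of-squares combination mentioned above is essentially a one-liner (sketch, with our naming):

```python
import numpy as np

def sum_of_squares(coil_images):
    # coil_images: (n_coils, H, W) complex images, one per receive coil.
    # Returns a real-valued combined magnitude image.
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```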
6. Conclusions
We proposed a hybrid frequency-domain/image-domain cascade of CNNs for MR CS re-
construction. We compared it with the current state-of-the-art of deep-learning-based re-
constructions using a public dataset. The differences between our model and the compared
ones were statistically significant (p < 0.05).
As future work, we intend to investigate how to adapt our model to parallel imaging combined with CS, using the full spectrum of information across the coils. We would also like to explore how our dual-domain model potentially relates to commonly used parallel imaging methods, such as Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) (Griswold et al., 2002), which operates in the k-space domain, and Sensitivity Encoding for fast MR imaging (SENSE) (Pruessmann et al., 1999), which operates in the image domain.
Acknowledgments
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for operating support, NVIDIA for providing a Titan V GPU, and Amazon Web Services for access to cloud-based GPU services. We would also like to thank Dr. Louis Lauzon for setting up the script to save the raw MR data at the Seaman Family MR Centre. R.S. was supported by an NSERC CREATE I3T Award and currently holds the T. Chen Fong Fellowship in Medical Imaging from the University of Calgary. R.F. holds the Hopewell Professorship of Brain Imaging at the University of Calgary.
References
Anagha Deshmane, Vikas Gulani, Mark A Griswold, and Nicole Seiberlich. Parallel MR imaging. Journal of Magnetic Resonance Imaging, 36(1):55–72, 2012. ISSN 1522-2586.
Taejoon Eo, Yohan Jun, Taeseong Kim, Jinseong Jang, HoJoon Lee, and Dosik Hwang. KIKI-net: cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images. Magnetic Resonance in Medicine, 2018a. ISSN 0740-3194.
Taejoon Eo, Hyungseob Shin, Taeseong Kim, Yohan Jun, and Dosik Hwang. Translation of 1D inverse Fourier transform of k-space to an image based on deep learning for accelerating magnetic resonance imaging. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 241–249. Springer, 2018b.
Bruce Fischl. FreeSurfer. NeuroImage, 62(2):774–781, 2012.
Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine, 47(6):1202–1210, 2002. ISSN 0740-3194.
Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep
convolutional neural network for inverse problems in imaging. IEEE Transactions on
Image Processing, 26(9):4509–4522, 2017. ISSN 1057-7149.
Dongwook Lee, Jaejun Yoo, and Jong Chul Ye. Deep residual learning for compressed
sensing MRI. In IEEE 14th International Symposium on Biomedical Imaging, pages
15–18. IEEE, 2017. ISBN 1509011722.
Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen AWM Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88, 2017.
Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing
MRI. IEEE signal processing magazine, 25(2):72–82, 2008. ISSN 1053-5888.
Dwight G Nishimura. Principles of magnetic resonance imaging. Stanford Univ., 1996.
Klaas P Pruessmann, Markus Weiger, Markus B Scheidegger, and Peter Boesiger. SENSE: sensitivity encoding for fast MRI. Magnetic Resonance in Medicine, 42(5):952–962, 1999. ISSN 1522-2594.
Tran Minh Quan, Thanh Nguyen-Duc, and Won-Ki Jeong. Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Transactions on Medical Imaging, 37(6):1488–1497, 2018. ISSN 0278-0062.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for
biomedical image segmentation. In International Conference on Medical image computing
and computer-assisted intervention, pages 234–241. Springer, 2015.
Jo Schlemper, Jose Caballero, Joseph V Hajnal, Anthony N Price, and Daniel Rueckert. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging, 37(2):491–503, 2018a. ISSN 0278-0062.
Jo Schlemper, Guang Yang, Pedro Ferreira, Andrew C. Scott, Laura-Ann McGill, Zohya Khalique, Margarita Gorodezky, Malte Roehl, Jennifer Keegan, Dudley Pennell, David N. Firmin, and Daniel Rueckert. Stochastic deep compressive sensing for the reconstruction of diffusion tensor cardiac MRI. In MICCAI, 2018b.
Maximilian Seitzer, Guang Yang, Jo Schlemper, Ozan Oktay, Tobias Würfl, Vincent Christlein, Tom Wong, Raad Mohiaddin, David Firmin, and Jennifer Keegan. Adversarial and perceptual refinement for compressed sensing MRI reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 232–240. Springer, 2018.
Roberto Souza and Richard Frayne. A hybrid frequency-domain/image-domain deep net-
work for magnetic resonance image reconstruction. arXiv preprint arXiv:1810.12473,
2018.
Roberto Souza, Oeslle Lucena, Julia Garrafa, David Gobbi, Marina Saluzzi, Simone Appenzeller, Letícia Rittner, Richard Frayne, and Roberto Lotufo. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement. NeuroImage, 2017. ISSN 1053-8119.
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assess-
ment: from error visibility to structural similarity. IEEE transactions on image process-
ing, 13(4):600–612, 2004. ISSN 1057-7149.
Guang Yang, Simiao Yu, Hao Dong, Greg Slabaugh, Pier Luigi Dragotti, Xujiong Ye,
Fangde Liu, Simon Arridge, Jennifer Keegan, and Yike Guo. DAGAN: Deep de-aliasing
generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE
transactions on medical imaging, 37(6):1310–1321, 2018. ISSN 0278-0062.
Simiao Yu, Hao Dong, Guang Yang, Greg Slabaugh, Pier Luigi Dragotti, Xujiong Ye, Fangde Liu, Simon Arridge, Jennifer Keegan, and David Firmin. Deep de-aliasing for fast compressive sensing MRI. arXiv preprint arXiv:1705.07137, 2017.
Jure Zbontar, Florian Knoll, Anuroop Sriram, Matthew J Muckley, Mary Bruno, Aaron Defazio, Marc Parente, Krzysztof J Geras, Joe Katsnelson, Hersh Chandarana, et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839, 2018.
Pengyue Zhang, Fusheng Wang, Wei Xu, and Yu Li. Multi-channel generative adversarial network for parallel magnetic resonance image reconstruction in k-space. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 180–188. Springer, 2018.
Bo Zhu, Jeremiah Z Liu, Stephen F Cauley, Bruce R Rosen, and Matthew S Rosen. Image reconstruction by domain-transform manifold learning. Nature, 555(7697):487, 2018. ISSN 1476-4687.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image
translation using cycle-consistent adversarial networks. arXiv preprint, 2017.