Article

Sparse MRI: The application of compressed sensing for rapid MR imaging

Wiley
Magnetic Resonance in Medicine
Authors: Michael Lustig, David L. Donoho, John M. Pauly

Abstract

The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast-enhanced angiography.
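
The recovery intuition in the abstract (incoherent aliasing behaves like noise; thresholding pulls out the significant coefficients) can be illustrated with a toy example. The following is a minimal, hypothetical Python sketch of iterative soft-thresholding on randomly undersampled Fourier data for a 1D sparse signal; the sizes, iteration count, and threshold are made-up values, and this is not the paper's actual reconstruction code.

```python
# Toy illustration (not the paper's code): recover a sparse 1D signal from
# randomly undersampled Fourier samples by iterative soft-thresholding.
# All sizes and the threshold value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 80                      # length, nonzeros, measurements

x = np.zeros(n)                           # sparse "image" (pixel domain)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

mask = np.zeros(n, dtype=bool)            # random "k-space" sampling locations
mask[rng.choice(n, size=m, replace=False)] = True
y = np.fft.fft(x, norm="ortho")[mask]     # undersampled Fourier data

def soft(z, t):
    """Soft-thresholding: the nonlinear step that suppresses the noise-like
    aliasing interference while keeping significant coefficients."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

xh = np.zeros(n)
for _ in range(300):
    # Data-consistency gradient step on ||F_u x - y||^2, then shrinkage.
    r = np.zeros(n, dtype=complex)
    r[mask] = y - np.fft.fft(xh, norm="ortho")[mask]
    xh = soft(xh + np.fft.ifft(r, norm="ortho").real, 0.01)

print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))
```
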


... While PI can substantially reduce acquisition times, noise amplification, inaccuracies in sensitivity maps, and computational constraints collectively limit achievable acceleration factors. As a result, typical clinical implementations utilize acceleration factors (R) of 2 to 4, beyond which the escalation in noise and artifacts can render images diagnostically suboptimal [14]. ...
... This is particularly advantageous in MRI, where the acquisition of k-space data is inherently time-consuming. CS achieves this by exploiting two main principles: sparsity [17,14] and incoherence [18]. Figure 2 illustrates the CS MRI image reconstruction steps. ...
... Although straightforward, this technique may cause coherent aliasing artifacts that reduce the efficacy of CS [42]. In contrast, random Cartesian sampling randomly places sampled points across the Cartesian grid, resulting in incoherent aliasing, which generally leads to better reconstruction quality [14]. ...
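
The variable-density random Cartesian sampling contrasted in these excerpts can be sketched in a few lines. Below is a hypothetical Python example: phase-encode lines are drawn with a probability that decays away from the k-space center, plus a fully sampled center block. The density law, acceleration factor, and all parameter values are illustrative assumptions, not taken from any of the cited papers.

```python
# Sketch: pseudo-random variable-density selection of Cartesian phase-encodes.
# The density law and every parameter value are illustrative assumptions.
import numpy as np

def variable_density_mask(n_pe=256, accel=4, power=3, n_center=16, seed=0):
    """Boolean mask over phase-encode lines: a fully sampled low-frequency
    center plus randomly chosen outer lines whose sampling probability
    decays towards the k-space edge (incoherent aliasing)."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n_pe) - n_pe // 2) / (n_pe / 2)   # |k| in [0, 1]
    pdf = (1.0 - k) ** power                               # decaying density
    center = np.abs(np.arange(n_pe) - n_pe // 2) < n_center // 2
    pdf[center] = 1.0                                      # dense center block
    pdf *= (n_pe / accel) / pdf.sum()                      # target sample count
    return (rng.random(n_pe) < np.clip(pdf, 0.0, 1.0)) | center

mask = variable_density_mask()
print(f"kept {mask.sum()}/{mask.size} phase-encodes "
      f"(~{mask.size / mask.sum():.1f}x acceleration)")
```
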
Preprint
Full-text available
Magnetic resonance imaging (MRI) is a non-invasive imaging modality and provides comprehensive anatomical and functional insights into the human body. However, its long acquisition times can lead to patient discomfort, motion artifacts, and limited real-time applicability. To address these challenges, strategies such as parallel imaging have been applied, which utilize multiple receiver coils to speed up the data acquisition process. Additionally, compressed sensing (CS) is a method that facilitates image reconstruction from sparse data, significantly reducing image acquisition time by minimizing the amount of data collection needed. Recently, deep learning (DL) has emerged as a powerful tool for improving MRI reconstruction. It has been integrated with parallel imaging and CS principles to achieve faster and more accurate MRI reconstructions. This review comprehensively examines DL-based techniques for MRI reconstruction. We categorize and discuss various DL-based methods, including end-to-end approaches, unrolled optimization, and federated learning, highlighting their potential benefits. Our systematic review highlights significant contributions and underscores the potential of DL in MRI reconstruction. Additionally, we summarize key results and trends in DL-based MRI reconstruction, including quantitative metrics, datasets, acceleration factors, and the progress of and research interest in DL techniques over time. Finally, we discuss potential future directions and the importance of DL-based MRI reconstruction in advancing medical imaging. To facilitate further research in this area, we provide a GitHub repository that includes up-to-date DL-based MRI reconstruction publications and public datasets: https://github.com/mosaf/Awesome-DL-based-CS-MRI.
... In contrast, CS, which has been clinically applied for over a decade, provides a new approach by recovering imaging data from under-sampled k-space through the exploitation of sparsity. 13 Using incoherent sampling, CS acquires a limited number of signals, which are then reconstructed with high probability via advanced algorithms, ultimately yielding higher-quality images through Fourier transform. 13 CS enhances speed while preserving image quality, though its effectiveness depends on specific conditions and the sparsity of the image data. ...
... 13 Using incoherent sampling, CS acquires a limited number of signals, which are then reconstructed with high probability via advanced algorithms, ultimately yielding higher-quality images through the Fourier transform. 13 CS enhances speed while preserving image quality, though its effectiveness depends on specific conditions and the sparsity of the image data. 13,14 In recent years, artificial intelligence (AI) has emerged as a promising tool for further addressing these problems. ...
... 13 CS enhances speed while preserving image quality, though its effectiveness depends on specific conditions and the sparsity of the image data. 13,14 In recent years, artificial intelligence (AI) has emerged as a promising tool for further addressing these problems. The AI solutions can be categorized into two groups: k-space reconstruction and image-based post-processing. ...
Article
Full-text available
Purpose Conventional brain MRI protocols are time-consuming, which can lead to patient discomfort and inefficiency in clinical settings. This study aims to assess the feasibility of using artificial intelligence-assisted compressed sensing (ACS) to reduce brain MRI scan time while maintaining image quality and diagnostic accuracy compared to a conventional imaging protocol. Patients and Methods Seventy patients from the department of neurology underwent brain MRI scans using both conventional and ACS protocols, including axial and sagittal T2-weighted fast spin-echo sequences and T2-fluid attenuated inversion recovery (FLAIR) sequence. Two radiologists independently evaluated image quality based on avoidance of artifacts, boundary sharpness, visibility of lesions, and overall image quality using a 5-point Likert scale. Pathological features, including white matter hyperintensities, lacunar infarcts, and enlarged perivascular spaces, were also assessed. The interchangeability of the two protocols was determined by calculating the 95% confidence interval (CI) for the individual equivalence index. Additionally, Cohen’s weighted kappa statistic was used to assess inter-protocol intra-observer agreement. Results The ACS images demonstrated superior quality across all qualitative features compared to the conventional ones. Both protocols showed no significant difference in detecting pathological conditions. The 95% CI for the individual equivalence index was below 5% for all variables except enlarged perivascular spaces, indicating the interchangeability of the conventional and ACS protocols in most cases. The inter-rater reliability between the two radiologists was strong, with kappa values of 0.78, 0.74, 0.70 and 0.86 for image quality evaluation and 0.74, 0.80 and 0.70 for diagnostic performance, indicating good-to-excellent agreement in their evaluations. Conclusion The ACS technique reduces brain MRI scan time by 29.2% while achieving higher image quality and equivalent diagnostic accuracy compared to the conventional protocol. This suggests that ACS could be potentially adopted for routine clinical use in brain MRI.
... However, the long scanning time in MRI examinations is disadvantageous, as it can cause patient discomfort and motion artifacts. Therefore, numerous methods have been proposed to accelerate MRI [1]-[4]. One stream of research in fast MRI explores the use of compressed sensing theory, referred to as Compressed Sensing MRI (CS-MRI) [1], [4], [5]. ...
... Therefore, numerous methods have been proposed to accelerate MRI [1]-[4]. One stream of research in fast MRI explores the use of compressed sensing theory, referred to as Compressed Sensing MRI (CS-MRI) [1], [4], [5]. In CS-MRI, k-space measurements are acquired partially and randomly, while the low-frequency bands are sampled densely to capture the major energy of the signal. ...
... Lustig et al. proposed the SparseMRI method using both total variation regularization and wavelet coefficient sparsity [1]. A non-local prior was combined with the sparse prior for CS-MRI reconstruction [19]. ...
Preprint
Full-text available
Reconstructing MR images using deep neural networks from undersampled k-space data without fully sampled training references offers significant value in practice; it is a self-supervised regression problem calling for effective prior knowledge and supervision. Siamese architectures are motivated by the notion of "invariance" and show promising results in unsupervised visual representation learning. Building homologous transformed images and avoiding trivial solutions are two major challenges in Siamese-based self-supervised models. In this work, we explore a Siamese architecture for MRI reconstruction in a self-supervised training fashion called SiamRecon. We show that the proposed approach mimics an expectation maximization algorithm. The alternating optimization provides an effective supervision signal and avoids collapse. The proposed SiamRecon achieves state-of-the-art reconstruction accuracy in the field of self-supervised learning on both single-coil brain MRI and multi-coil knee MRI.
... Compressed sensing (CS) [7], [8] has been widely used to enable the reconstruction of MR images from a reduced set of measurements. Traditional MRI follows the Nyquist-Shannon theorem, requiring dense sampling in k-space. ...
... Traditional MRI follows the Nyquist-Shannon theorem, requiring dense sampling in k-space. However, according to CS theory, k-space can be sampled at sub-Nyquist rates and the image reconstructed using prior knowledge of sparsity in some transform domain [8]-[11], provided the sampling operator and sparsity basis are sufficiently incoherent. Some of the widely used undersampling patterns in CS-MRI include variable density [12], Poisson-disc [13], combined variable density and Poisson-disc [14], and equispaced Cartesian with skipped lines [15]. ...
... We use the alternating framework shown in Figure 1 to solve this highly challenging optimization problem. The algorithm starts with variable-density random sampling (VDRS) masks as an initial guess [8] and alternates between updating a reconstructor and sampling masks until we obtain a final set of scan-adaptive masks {M_i} and a reconstruction network f_θ trained on them. For optimizing the scan-adaptive masks, we initially use a greedy [35] and later our proposed ICD-based sampling optimization algorithm. ...
Preprint
Full-text available
Accelerated MRI involves collecting partial k-space measurements to reduce acquisition time, patient discomfort, and motion artifacts, and typically uses regular undersampling patterns or hand-designed schemes. Recent works have studied population-adaptive sampling patterns that are learned from a group of patients (or scans) based on population-specific metrics. However, such a general sampling pattern can be sub-optimal for any specific scan since it may lack scan or slice adaptive details. To overcome this issue, we propose a framework for jointly learning scan-adaptive Cartesian undersampling patterns and a corresponding reconstruction model from a training set. We use an alternating algorithm for learning the sampling patterns and reconstruction model where we use an iterative coordinate descent (ICD) based offline optimization of scan-adaptive k-space sampling patterns for each example in the training set. A nearest neighbor search is then used to select the scan-adaptive sampling pattern at test time from initially acquired low-frequency k-space information. We applied the proposed framework (dubbed SUNO) to the fastMRI multi-coil knee and brain datasets, demonstrating improved performance over currently used undersampling patterns at both 4x and 8x acceleration factors in terms of both visual quality and quantitative metrics. The code for the proposed framework is available at https://github.com/sidgautam95/adaptive-sampling-mri-suno.
... Accelerated MRI has been developed to enable undersampling k-space to reduce scan time while maintaining favorable image quality by combining advanced image reconstruction algorithms. Classic methods such as parallel imaging [1]- [3], compressed sensing [4], and low-rank methods [5]- [12] have demonstrated promises in maintaining desirable image reconstruction quality using undersampled data focusing on specific algorithm design. ...
... Additionally, dDiMo harnesses spatiotemporal (x-t) and self-consistent frequency-temporal (k-t) priors derived from time-resolved data to guide the diffusion process. The framework also incorporates the nonlinear conjugate gradient (CG) algorithm [4] into the reverse diffusion steps to further improve the performance of this diffusion process. The versatility and effectiveness of dDiMo are demonstrated through experiments conducted on both Cartesian-acquired multi-coil cardiac MRI and continuously acquired free-breathing Golden-Angle-Radial multi-coil lung MRI. ...
... The parameters λ_xt and λ_kt control the trade-off between the data fidelity term and the regularization terms in those domains. The nonlinear CG method can be used to optimize ŷ iteratively [4]. Specifically, at iteration k, we can apply the following update: ŷ ...
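
The update equation itself is truncated in this excerpt. For orientation, the generic nonlinear CG iteration used by the referenced Sparse MRI method [4] has the standard form below (our rendering, with a Fletcher-Reeves update; the citing paper's exact equation may differ):

```latex
% Generic nonlinear CG iteration (Fletcher-Reeves form) for a smoothed
% objective f; our rendering, not the citing paper's exact display.
\hat{y}^{(k+1)} = \hat{y}^{(k)} + \alpha_k d^{(k)}, \qquad
d^{(k+1)} = -\nabla f\big(\hat{y}^{(k+1)}\big) + \beta_k d^{(k)}, \qquad
\beta_k = \frac{\|\nabla f(\hat{y}^{(k+1)})\|_2^2}{\|\nabla f(\hat{y}^{(k)})\|_2^2}
```

with $\alpha_k$ chosen by a backtracking line search; to keep $f$ differentiable, the $\ell_1$ term is typically smoothed, e.g., $\|x\|_1 \approx \sum_i \sqrt{x_i^{*} x_i + \mu}$ with a small $\mu$.
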
Preprint
Purpose: To propose a domain-conditioned and temporal-guided diffusion modeling method, termed dynamic Diffusion Modeling (dDiMo), for accelerated dynamic MRI reconstruction, enabling the diffusion process to characterize spatiotemporal information for time-resolved multi-coil Cartesian and non-Cartesian data. Methods: The dDiMo framework integrates temporal information from time-resolved dimensions, allowing for the concurrent capture of intra-frame spatial features and inter-frame temporal dynamics in diffusion modeling. It employs additional spatiotemporal (x-t) and self-consistent frequency-temporal (k-t) priors to guide the diffusion process. This approach ensures precise temporal alignment and enhances the recovery of fine image details. To facilitate a smooth diffusion process, the nonlinear conjugate gradient algorithm is utilized during the reverse diffusion steps. The proposed model was tested on two types of MRI data: Cartesian-acquired multi-coil cardiac MRI and Golden-Angle-Radial-acquired multi-coil free-breathing lung MRI, across various undersampling rates. Results: dDiMo achieved high-quality reconstructions at various acceleration factors, demonstrating improved temporal alignment and structural recovery compared to other competitive reconstruction methods, both qualitatively and quantitatively. This proposed diffusion framework exhibited robust performance in handling both Cartesian and non-Cartesian acquisitions, effectively reconstructing dynamic datasets in cardiac and lung MRI under different imaging conditions. Conclusion: This study introduces a novel diffusion modeling method for dynamic MRI reconstruction.
... $x \in \mathbb{R}^n$ is said to be $s$-sparse if $\|x\|_0 \le s$, where $\|x\|_0$ denotes the number of nonzero entries of $x$. Compressed sensing is ubiquitous in many areas of the physical sciences and engineering, such as magnetic resonance imaging (MRI) [33,34], computed tomography (CT) [45], radar [23,26], statistics [5,14], and others [22,43], to name just a few. We refer readers to a recent book [25] for a comprehensive exposition of the subject. ...
Preprint
Recovery error bounds of tail-minimization and the rate of convergence of an efficient proximal alternating algorithm for sparse signal recovery are considered in this article. Tail-minimization focuses on minimizing the energy in the complement $T^c$ of an estimated support $T$. Under the restricted isometry property (RIP) condition, we prove that tail-$\ell_1$ minimization can exactly recover sparse signals in the noiseless case for a given $T$. In the noisy case, two recovery results for the tail-$\ell_1$ minimization and the tail-lasso models are established. Error bounds are improved over existing results. Additionally, we show that the RIP condition becomes surprisingly relaxed, allowing the RIP constant to approach 1 as the estimate $T$ closely approximates the true support $S$. Finally, an efficient proximal alternating minimization algorithm is introduced for solving the tail-lasso problem using Hadamard product parametrization. The linear rate of convergence is established using the Kurdyka-Łojasiewicz inequality. Numerical results demonstrate that the proposed algorithm significantly improves signal recovery performance compared to state-of-the-art techniques.
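
In the notation of this abstract, the noiseless tail-$\ell_1$ program can be written schematically as follows (our rendering of the idea, not the paper's exact display):

```latex
% Tail-l1: penalize only the entries outside the estimated support T.
\min_{x}\; \| x_{T^c} \|_1 \quad \text{subject to} \quad A x = b ,
```

so that, as $T$ approaches the true support $S$, less of the signal's energy is penalized and the recovery condition relaxes.
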
... Compressed sensing is established as a crucial tool for signal processing [1], with applications in, e.g., direction-of-arrival (DoA) estimation and harmonic retrieval [2], [3], image processing [4], tomography [5], and communications [6]. The key idea is to assume that certain signals are constructed from only a few underlying components and thus can be represented as a sparse vector. ...
... Assume now for any convergent subsequence $\{x^{(\ell)}\}_{\ell\in\mathcal{L}}$ that $z = \lim_{\mathcal{L}\ni\ell\to\infty} x^{(\ell)}$ is not a stationary point of (P). Since $\mathbb{B}x^{(\ell)}$ is continuous in $x^{(\ell)}$ according to (5), and $\gamma^{(\ell)}$ according to (10) is continuous in both $\mathbb{B}x^{(\ell)}$ and $x^{(\ell)}$, we have $\mathbb{B}z = \lim_{\mathcal{L}\ni\ell\to\infty} \mathbb{B}x^{(\ell)}$, $\gamma_z = \lim_{\mathcal{L}\ni\ell\to\infty} \gamma^{(\ell)}$ and consequently $z^{\text{stela}} = z + \gamma_z(\mathbb{B}z - z)$. In addition, there exists another convergent subsequence $\{x^{(\ell-1)}\}_{\ell\in\mathcal{L}'}$ where $\mathcal{L}' \subseteq \mathcal{L}$ and $\lim_{\mathcal{L}'\ni\ell\to\infty} x^{(\ell-1)} = z_{-1}$ and thus $\lim_{\mathcal{L}'\ni\ell\to\infty} v^{(\ell)} = v_z$. ...
Article
Full-text available
We consider the minimization of $\ell_1$-regularized least-squares problems. A recent optimization approach uses successive convex approximations with an exact line search, which is highly competitive, especially in sparse problem instances. This work proposes an acceleration scheme for the successive convex approximation technique with a negligible additional computational cost. We demonstrate this scheme by devising three related accelerated algorithms with provable convergence. The first introduces an additional descent step along the past optimization trajectory in the variable update, that is inspired by Nesterov's accelerated gradient method and uses a closed-form step size. The second performs a simultaneous descent step along both the best response and the past trajectory, thereby finding a two-dimensional step size, also in closed-form. The third algorithm combines the previous two approaches. All algorithms are hyperparameter-free. Empirical results confirm that the acceleration approaches improve the convergence rate compared to benchmark algorithms, and that they retain the benefits of successive convex approximation also in non-sparse instances.
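
For context on what an accelerated proximal scheme for this problem class looks like, here is a generic FISTA-style Python sketch for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$. It uses a standard Lipschitz step size rather than the paper's closed-form line searches, and the problem instance is synthetic; it is a reference point, not the authors' algorithm.

```python
# Generic FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# A textbook reference point, not the paper's successive convex
# approximation scheme or its closed-form step sizes.
import numpy as np

def fista(A, b, lam, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)              # gradient at the extrapolated point
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # step along past trajectory
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 400)) / np.sqrt(100)
x_true = np.zeros(400)
x_true[rng.choice(400, size=10, replace=False)] = 1.0
x_hat = fista(A, A @ x_true, lam=0.02)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```
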
... Moreover, both 3D radial and 3D spiral trajectories perform well in compressed sensing (CS) reconstructions, but significant undersampling artifacts in sodium T2-weighted images cause large noise in the transform domain [80]. A nonlinear CS reconstruction algorithm is now available that exploits the sparsity of ²³Na images in the transform domain to reduce undersampling artifacts and noise in raw image data [82]. The CS reconstructions also improve image sparsity and accuracy with high reconstruction speed. ...
Article
Full-text available
Advanced breast cancer poses a threat to life. It presents as lumps, mammary cell carcinoma, mammary gland nodules, and metastasis with a direct clinical course. Such metastases may pose resistance to chemo/immunotherapy, leaving only the option of careful theranostic multiparametric tumor monitoring, tumor killing, and surgery. The paper focuses on nano-radiotracer-assisted sodium MRI-PET apoptosis evaluation, breast tumor classification based on accumulated intracellular sodium, the use of nanomedicine in theranosis of human breast tumors, deep learning with reduced noise and minimal artifacts by applying compressed sensing reconstruction, a review of dictionary learning as a sparsifying transformation for breast cancer sodium MRI-PET-based classification, clinical trials, and new physician's guidelines. Major therapy options are: local therapy, immune checkpoint inhibitor therapy, and antivascular therapy, with upcoming genomic agents and ablation therapy for metastases, either relapsed or residual disease. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) and a K-singular-value-decomposition (K-SVD) sparsifying-transform algorithm assess high-quality reconstructions of apoptosis-rich areas showing intracellular ²³Na peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) characteristics. A PubMed search on sodium MRI/PET clinical trials, deep learning AI, virtual reality, and bioinspired approaches serves as a navigation guideline. Conclusion: The nano-radiotracer-assisted ²³Na MRI-¹⁸F PET clinical tool is the tool of choice in breast cancer navigation theranosis. The 3D-DLCS deep learning algorithm on ²³Na MRI data classifies and navigates tumors with the least noise and lower artifact levels.
... Sparse modeling has typically been used in reproducing images from observed data. For example, Lustig et al. (2007) reproduced the distribution of brain blood vessels in high resolution from a smaller amount of data by applying sparse modeling. Since active faults are also distributed linearly, like brain blood vessels, and the spatial density of GNSS stations is not sufficient to estimate a strain-rate field in high resolution, sparse modeling would be a powerful tool for estimating a strain-rate field from GNSS data. ...
Article
Full-text available
Many studies have estimated crustal deformation from observed geodetic data. So far, because most studies have applied a smoothness constraint, which includes the assumption of local uniformity of a strain-rate field, localized strain rates near fault zones have tended to be underestimated when we invert spatially sporadic GNSS data. To overcome this difficulty, we introduce sparse modeling into the estimation of a strain-rate field. Specifically, we impose a sparsity constraint as well as the smoothness constraint on strain rates as prior information, which are expressed by the L1-norm and the L2-norm of the second-order derivative of the velocity field, respectively. To investigate the validity and limitation of the proposed method, we conduct synthetic tests, in which we consider an anti-plane strain problem due to a steady slip on a buried strike-slip fault. As a result, we find: (1) regardless of the locking depth of the fault, the proposed method reproduces localized strain rates near the fault with almost equal or better accuracy than the L2 regularization method (i.e., only the smoothness constraint); (2) the advantage of the proposed method over the L2 regularization method is clearer when data coverage is worse (i.e., when fewer observation points are available); and (3) the proposed method can be applied when observation errors are small. Next, we apply the proposed method to the GNSS data across the Arima-Takatsuki fault zone, which is one of the most active strike-slip faults in Japan. As a result, the proposed method estimates about $1.0\times 10^{-8}$/yr faster strain rates near the fault zone than the L2 regularization method, which corresponds to a 20–30% greater strain-rate concentration. The faster strain rates result in the estimation of a shallower locking depth: 11 km by the proposed method, compared to 17 km by the L2 regularization method. The former is closer to the depth of D90, 12–14 km, above which 90% of earthquakes occur.
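
In symbols, the combined prior described above can be written schematically as follows (our notation, not the authors'):

```latex
% Schematic objective: data misfit plus sparsity (L1) and smoothness (L2)
% penalties on the second-order derivative L of the velocity field v.
\hat{v} = \arg\min_{v}\; \| d - G v \|_2^2
        + \lambda_1 \| L v \|_1
        + \lambda_2 \| L v \|_2^2 ,
```

where $d$ collects the GNSS velocity data, $G$ is the observation operator, and $L$ is a second-order spatial derivative, so the $\ell_1$ term promotes sparse, localized strain-rate concentrations while the $\ell_2$ term enforces smoothness.
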
... Concurrently with the development of fast spatio-temporal MRI acquisition, significant progress has been made in utilizing deep learning for image reconstruction for acquisitions with high rates of acceleration [22][23][24][25][26][27]. These methods commonly leverage an unrolled deep learning architecture, where the algorithm alternates between network inference and a data-consistency step akin to compressed-sensing-like iterative methods [28][29][30][31][32]. ...
Article
Full-text available
Object Spatio-temporal MRI methods offer rapid whole-brain multi-parametric mapping, yet they are often hindered by prolonged reconstruction times or prohibitively burdensome hardware requirements. The aim of this project is to reduce reconstruction time using deep learning. Materials and methods This study focuses on accelerating the reconstruction of volumetric multi-axis spiral projection MRF, aiming for whole-brain T1 and T2 mapping, while ensuring a streamlined approach compatible with clinical requirements. To optimize reconstruction time, the traditional method is first revamped with a memory-efficient GPU implementation. Deep Learning Initialized Compressed Sensing (Deli-CS) is then introduced, which initiates iterative reconstruction with a DL-generated seed point, reducing the number of iterations needed for convergence. Results The full reconstruction process for volumetric multi-axis spiral projection MRF is completed in just 20 min compared to over 2 h for the previously published implementation. Comparative analysis demonstrates Deli-CS’s efficiency in expediting iterative reconstruction while maintaining high-quality results. Discussion By offering a rapid warm start to the iterative reconstruction algorithm, this method substantially reduces processing time while preserving reconstruction quality. Its successful implementation paves the way for advanced spatio-temporal MRI techniques, addressing the challenge of extensive reconstruction times and ensuring efficient, high-quality imaging in a streamlined manner.
... [3,4,24,30,36]). Compressive sensing (CS) algorithms that promote sparse solutions in a known sparse domain [6,16,17,34] have become increasingly widespread in providing point estimate image recoveries. More recently, Bayesian inference methods have been developed to also quantify the uncertainty of the estimate. ...
Article
Full-text available
Fourier partial sum approximations yield exponential accuracy for smooth and periodic functions, but produce the infamous Gibbs phenomenon for non-periodic ones. Spectral reprojection resolves the Gibbs phenomenon by projecting the Fourier partial sum onto a Gibbs complementary basis, often prescribed as the Gegenbauer polynomials. Noise in the Fourier data and the Runge phenomenon both degrade the quality of the Gegenbauer reconstruction solution, however. Motivated by its theoretical convergence properties, this paper proposes a new Bayesian framework for spectral reprojection, which allows a greater understanding of the impact of noise on the reprojection method from a statistical point of view. We are also able to improve the robustness with respect to the Gegenbauer polynomials parameters. Finally, the framework provides a mechanism to quantify the uncertainty of the solution estimate.
... 15,16 A distributed spiral-in/spiral-out sampling trajectory was reconstructed separately and then combined to increase readout window duration and improve signal-to-noise ratio (SNR). 17 Advanced image undersampling and reconstruction techniques were also pursued for ASL scans with high reduction factors (R), including controlled aliasing, [18][19][20] variable density sampling, [21][22][23][24] and compressed sensing (CS) 25 -based image reconstruction. 19,20,23,24,[26][27][28] Single-shot acquisition for PCASL was demonstrated in two of these studies: (1) with 3D GRASE combining a time-varied 2D CAIPIRINHA (controlled aliasing in parallel imaging results in higher acceleration) sampling pattern and spatial-temporal regularization for accelerated reconstruction (R = 6), 19 and (2) 3D SOS-FSE using interleaved spirals with pseudo golden-angle rotation across slices and CS reconstruction (R = 13). ...
Article
Full-text available
Purpose The present work aims to evaluate the performance of three‐dimensional (3D) single‐shot stack‐of‐spirals turbo FLASH (SOS‐TFL) acquisition for pseudo‐continuous arterial spin labeling (PCASL) and velocity‐selective ASL (VSASL)–based cerebral blood flow (CBF) mapping, as well as VSASL‐based cerebral blood volume (CBV) mapping. Methods Digital phantom simulations were conducted for both multishot echo planar imaging and spiral trajectories with intershot signal fluctuations. PCASL‐derived CBF (PCASL‐CBF), VSASL‐derived CBF (VSASL‐CBF), and CBV (VSASL‐CBV) were all acquired using 3D multishot gradient and spin‐echo and SOS‐TFL acquisitions following background suppression. Both simulation and in vivo images were compared between multishot and single‐shot compressed sensing–regularized sensitivity encoding (CS‐SENSE) reconstructions. Results Artifacts were observed in both simulated multishot echo planar imaging and spiral readouts, as well as in in vivo multishot ASL perfusion images. A high correlation was found between the levels of signal fluctuations among interleaves and the severity of artifacts in both simulated and in vivo data. Image artifacts were more apparent in the inferior region of the brain, especially in CBF scans. These artifacts were effectively eliminated when single‐shot CS‐SENSE reconstruction was applied to the same data set. Conclusion ASL images obtained from 3D segmented gradient and spin‐echo or SOS‐TFL acquisitions can exhibit artifacts caused by signal fluctuations among different shots, which persist even after the application of background suppression pulses. In contrast, these artifacts were prevented when single‐shot CS‐SENSE reconstruction was applied to the same SOS‐TFL data set.
... This adaptation of a forward-fitting model to the PI-SSFP acquisition, seeking to minimize the spectral L1 norm, has parallels with soft-thresholding and maximum-entropy approaches used in NMR and MRI. [29][30][31] While in the MRI case a sparsifying operation (e.g., a wavelet transformation) is needed to make the sought solution sparse, NMR spectra of the kind considered here do not need this extra step. See Supporting Information for further details. ...
Preprint
NMR acquisitions based on Ernst-angle excitations are widely used in analytical spectroscopy, as for over half a century they have been considered the optimal way to maximize spectral sensitivity without compromising bandwidth or peak resolution. However, if, as often happens in liquid-state NMR, the relaxation times T1 and T2 are long and similar, steady-state free-precession (SSFP) experiments can actually provide higher signal-to-noise ratios per square root of acquisition time (SNRt) than Ernst-angle-based counterparts. Although a strong offset dependence and a requirement for pulsing at repetition times TR << T2, leading to poor spectral resolution, have impeded widespread analytical applications of SSFP, phase-incremented (PI) SSFP schemes could overcome these drawbacks. The present study explores if, when, and how this approach to high-resolution NMR can improve SNRt over the performance afforded by Ernst-angle-based FT acquisitions. It is found that PI-SSFP can indeed often provide a superior SNRt to FT-NMR, but achieving this requires implementing the acquisitions using relatively large flip angles. As also explained, however, this can restrict PI-SSFP's spectral resolution and lead to distorted line shapes. To deal with this problem we introduce here a new outlook on SSFP experiments that can overcome this dichotomy and lead to high spectral resolution even when utilizing the relatively large flip angles that provide optimal sensitivity. This new outlook also leads to a processing pipeline for PI-SSFP acquisitions, which is here introduced and exemplified. The enhanced SNRt that the ensuing method can provide over FT-based NMR counterparts collected under Ernst-angle excitation conditions is examined with a series of 13C and 15N natural-abundance investigations on organic compounds.
... In recent years, researchers have explored various techniques to reconstruct high-quality MR images from undersampled acquisitions, such as parallel imaging and compressed sensing (CS) (Lustig et al. 2007). However, these methods have limitations in efficiently removing artifacts and noise, especially at high acceleration rates. ...
Article
Full-text available
Four-dimensional imaging (4D-imaging) plays a critical role in achieving precise motion management in radiation therapy. However, challenges remain in 4D-imaging such as a long imaging time, suboptimal image quality, and inaccurate motion estimation. With the tremendous success of artificial intelligence (AI) in the image domain, particularly deep learning, there is great potential in overcoming these challenges and improving the accuracy and efficiency of 4D-imaging without the need for hardware modifications. In this review, we provide a comprehensive overview of how these AI-based methods could drive the evolution of 4D-imaging for motion management. We discuss the inherent issues associated with multiple 4D modalities and explore the current research progress of AI in 4D-imaging. Furthermore, we delve into the unresolved challenges and limitations in 4D-imaging and provide insights into the future direction of this field.
... In this study, we demonstrated that existing clinical perfusion MRI images can be effectively used to train a conditional diffusion generative model for super-resolution. We proposed a super-resolution pipeline that utilizes low-resolution myocardial perfusion MRI as the guidance after initial reconstruction by GRAPPA (27), which is also potentially applicable to compressed sensing (28) or unrolled network (29) outputs, offering a complementary approach to existing workflows. When combined with GRAPPA (factor 2-3) in prospective acquisitions, this method offers a nominal 5.7-8.5-fold acceleration, allowing for better slice coverage and improved temporal resolution. ...
Article
Full-text available
Introduction Myocardial perfusion MRI is important for diagnosing coronary artery disease, but current clinical methods face challenges in balancing spatial resolution, temporal resolution, and slice coverage. Achieving broader slice coverage and higher temporal resolution is essential for accurately detecting abnormalities across different slice locations but remains difficult due to constraints in acquisition speed and heart rate variability. While techniques like parallel imaging and compressed sensing have significantly advanced perfusion imaging, they still suffer from noise amplification, residual artifacts, and potential temporal blurring due to the rapid transit of dynamic contrast vs. the temporal constraints of the reconstruction. Methods This study introduces a conditional diffusion-based generative model for myocardial perfusion MRI super-resolution, addressing the trade-offs between spatiotemporal resolution and slice coverage. We adapted Denoising Diffusion Probabilistic Models (DDPM) to enhance low-resolution perfusion images into high-resolution outputs without requiring temporal regularization. The forward diffusion process introduces Gaussian noise incrementally, while the reverse process employs a U-Net architecture to progressively denoise the images, conditioned on the low-resolution input image. Results We trained and validated the model on a retrospective dataset of dynamic contrast-enhanced (DCE) perfusion MRI, consisting of both stress and rest images from 47 patients with heart disease. Our results showed significant image quality improvements, with a 5.1% reduction in nRMSE, a 1.1% increase in PSNR, and a 2.2% boost in SSIM compared to a GAN-based super-resolution method (P < 0.05 for all metrics, paired t-test) in the retrospective study. For the 9 prospective subjects, we achieved a total nominal acceleration of 8.5-fold across 5–6 slices through a combination of low-resolution acquisition and GRAPPA. PerfGen outperformed the GAN-based approach in sharpness (4.36 ± 0.38 vs. 4.89 ± 0.22) and overall image quality (4.14 ± 0.28 vs. 4.89 ± 0.22), as assessed by two experts in a blinded evaluation (P < 0.05) in the prospective study. Discussion This work demonstrates the capability of diffusion-based generative models in generating high-resolution myocardial perfusion MRI from conditional low-resolution images. This approach has shown the potential to accelerate myocardial perfusion MRI while enhancing slice coverage and temporal resolution, offering a promising alternative to existing methods.
... The center of k-space, which determines image contrast, is sampled frequently to allow faster updates. CS technology can recover sparse-representation images from randomly undersampled k-space data using a nonlinear recovery protocol [47]. View-sharing is more general than CS, although it may introduce blurring. ...
Chapter
Full-text available
The advantage of the multi-parametric method for breast cancer is the different contributions of diverse parameters in the magnetic resonance image (MRI). T1-weighted imaging (T1WI) detects the signal intensity differences in tissue according to different longitudinal relaxation times. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) can estimate the vascularity and permeability of the lesion by semiquantitative and quantitative parameters. The ultrafast DCE-MRI presents the new kinetic parameters. Diffusion-weighted imaging (DWI) provides information related to tumor cell density, and advanced diffusion-weighted imaging techniques, such as diffusion kurtosis imaging, intravoxel incoherent motion, and time-dependent diffusion MRI, exhibit new perspectives of microscale tissue assessment. Moreover, T2-weighted imaging is important for the measurement of the water content of the tissue. Magnetic resonance spectroscopy (MRS) can detect choline levels and choline metabolites in the tissue. Magnetic resonance elastography (MRE) can provide quantitative mechanical properties of breast tissue, including stiffness, elasticity, and viscosity, to improve the specificity for breast lesion characterization. In this chapter, we provide a technical and theoretical background for these parameters and reveal the application of multi-parameter imaging in breast cancer.
... Thus, subspace-constrained image reconstruction focuses on estimating the spatial factor U, typically the most time-intensive computational step, especially when combined with nonquadratic regularization terms such as spatial transform sparsity penalties. 28,34 Given such regularization, this step conventionally requires nonlinear iterative algorithms such as the alternating direction method of multipliers (ADMM) or the fast iterative soft-thresholding algorithm (FISTA). Reconstruction time is additionally lengthened when a nonuniform fast Fourier transform is used due to non-Cartesian sampling. ...
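
Schematically, and in our own hedged notation, the spatial-factor estimation referred to in this excerpt solves a problem of the form:

```latex
% Subspace model: the image series X factors as X = U * Phi with a fixed
% temporal basis Phi; only the spatial factor U is estimated.
\hat{U} = \arg\min_{U}\; \big\| E(U\Phi) - d \big\|_2^2 + \lambda \|\Psi U\|_1 ,
```

where $\Phi$ is the fixed temporal subspace basis, $E$ the (possibly non-Cartesian) encoding operator, $d$ the acquired k-space data, and $\Psi$ a sparsifying spatial transform; the nonquadratic $\ell_1$ term is what forces iterative solvers such as ADMM or FISTA.
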
Article
Full-text available
Purpose To develop a deep subspace learning network that can function across different pulse sequences. Methods A contrast‐invariant component‐by‐component (CBC) network structure was developed and compared against a previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using T1, T1‐T2, and T1‐T2‐T2*‐fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched‐sequence experiments (same sequence for training and testing), then examined their cross‐sequence performance and generalizability by unmatched‐sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed‐sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error and Bland–Altman analyses of end‐diastolic maps, both versus iteratively reconstructed references. Results The proposed CBC showed significantly better normalized root mean squared error than MC in both matched‐sequence and unmatched‐sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss; p = 0.006 in T1 and p < 0.001 in T1‐T2 from matched‐sequence testing to unmatched‐sequence testing) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences. The mixed‐sequence CBC network performed similarly to matched‐sequence CBC in T1 (p = 0.178) and T1‐T2 (p = 0.121), where training data were plentiful, and performed better in T1‐T2‐T2*‐FF (p < 0.001), where training data were scarce. Conclusion Contrast‐invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
... Second, a compressed sensing reconstruction was carried out to generate 5D cardiac and respiratory motion-resolved images. This was achieved using the Alternating Direction Method of Multipliers (ADMM) algorithm [39,40]. ...
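
As a generic point of reference for the ADMM step mentioned here, the sketch below applies textbook ADMM to $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$; the operator, data, and parameters are placeholders, and this is not the cited 5D motion-resolved reconstruction code.

```python
# Textbook ADMM (scaled dual form) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Placeholder operator, data, and parameters; not the cited pipeline.
import numpy as np

def admm_l1(A, b, lam, rho=1.0, n_iter=100):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Cache the inverse used by the repeated quadratic x-update.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))                 # data-consistency step
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
        u += x - z                                    # dual variable update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = 1.0
print("recovered support:",
      np.flatnonzero(np.abs(admm_l1(A, A @ x_true, lam=0.02)) > 0.1))
```
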
Preprint
Background: Balanced steady-state free precession (bSSFP) sequences offer high SNR and excellent tissue contrast for whole-heart MRI. However, inadequate fat suppression can introduce artifacts, particularly with non-Cartesian readouts. This study aimed to compare novel rapid water-excitation (WE) pulses for 3D radial whole-heart free-running MRI at 1.5T, specifically Binomial Off-Resonant Rectangular (BORR), Lipid Insensitive Binomial Off-Resonant RF Excitation (LIBRE), and Lipid Insensitive Binomial Off-Resonant (LIBOR) pulses, alongside a Fast Interrupted Steady-State (FISS) and non-fat suppressed free-running bSSFP sequence. Methods: Three free-running MRI protocols (BORR, LIBRE, LIBOR) were optimized for fat suppression at 1.5T using a phantom. These protocols, along with FISS and non-fat suppressed bSSFP, were tested in a phantom and five volunteers, with each acquisition lasting 3 min and 41 s. SNR and CNR(Water-Fat) were measured in phantom data, while SNR and CNR(Blood-Myocardium) were assessed in volunteers using static gridded reconstructions. Motion-resolved reconstructions were used for qualitative assessments. SAR values were measured for each sequence, and statistical differences were analyzed using one-way ANOVA (p < 0.05). The pulse with the best combination of high SNR and low SAR was identified for further applications. Results: In phantom studies, LIBOR had the highest CNR(Water-Fat) (276.8 ± 2.5), followed by LIBRE (268.1 ± 2.6), BORR (249.9 ± 2.2), and FISS (212.7 ± 2.7), though these differences were not statistically significant (p > 0.05). FISS showed the highest SAR (1.41 W/kg), while LIBOR had the lowest (0.23 W/kg). In volunteers, BORR had the highest SNR in the ventricular blood pool (17.0 ± 1.5), and LIBRE had the highest CNR(Blood-Fat) (29.4 ± 9.3). FISS had the highest CNR(Blood-Myocardium) (29.0 ± 8.9), but the differences were not significant (p > 0.05). LIBOR consistently had the lowest SAR (0.26 W/kg). Conclusion: This study compared various fat-signal-suppression approaches in 3D contrast-free whole-heart free-running bSSFP MRI at 1.5T. Although the sequences performed similarly in SNR and CNR, LIBOR offered the lowest SAR, making it a promising candidate for future whole-heart MRI applications, particularly where RF energy deposition is a concern.
... For this reason, it has been proposed to introduce compressed sensing (CS) for reconstructing MR images. CS accomplishes the reconstruction task mainly by exploiting the sparsity of MR images, since most MR images become sparse after being transformed into an appropriate domain, for example via total variation or a wavelet transformation [91,94,[96][97][98][99][100][101][102][103]. Therefore, MR images reconstructed with CS and DLR can exhibit improved image quality not only at conventional but also at thin-section slice thicknesses [98]. ...
Article
Full-text available
By inviting young rising stars in chest radiology in Japan to contribute accounts of what they are currently working on, we would like to show the potential and direction of near-future research trends in the field. I will also provide a reflection on my own research topics. At the end, we discuss how to choose research themes and topics: what to do or not to do? We strongly believe this will stimulate and help investigators in the field.
... 4. To resemble the sampling pattern of an actual phase-contrast acquisition, a pseudo-random Cartesian variable-density phyllotaxis sampling pattern taken from a reference 4D Flow MRI sequence [23] was used to sample data in k-space. 5. Data are then transformed back into image space using a compressed sensing reconstruction [24]. Specifically, this meant solving ...
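
The objective itself is elided in this excerpt; the standard compressed-sensing formulation of the cited reference [24] has the following form (our rendering, not necessarily the citing paper's exact display):

```latex
% Standard CS reconstruction problem in the style of reference [24]:
\hat{x} = \arg\min_{x}\; \| \Psi x \|_1
\quad \text{s.t.} \quad \| \mathcal{F}_u x - y \|_2 < \varepsilon ,
```

where $\mathcal{F}_u$ is the undersampled Fourier operator, $y$ the sampled k-space data, and $\Psi$ a sparsifying transform; in practice the unconstrained Lagrangian form $\min_x \|\mathcal{F}_u x - y\|_2^2 + \lambda\|\Psi x\|_1$ is often solved instead.
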
Preprint
Full-text available
4D Flow Magnetic Resonance Imaging (4D Flow MRI) is a non-invasive technique for volumetric, time-resolved blood flow quantification. However, apparent trade-offs between acquisition time, image noise, and resolution limit clinical applicability. In particular, in regions of highly transient flow, coarse temporal resolution can hinder accurate capture of physiologically relevant flow variations. To overcome these issues, post-processing techniques using deep learning have shown promising results to enhance resolution post-scan using so-called super-resolution networks. However, while super-resolution has been focusing on spatial upsampling, temporal super-resolution remains largely unexplored. The aim of this study was therefore to implement and evaluate a residual network for temporal super-resolution 4D Flow MRI. To achieve this, an existing spatial network (4DFlowNet) was re-designed for temporal upsampling, adapting input dimensions, and optimizing internal layer structures. Training and testing were performed using synthetic 4D Flow MRI data originating from patient-specific in-silico models, as well as using in-vivo datasets. Overall, excellent performance was achieved with input velocities effectively denoised and temporally upsampled, with a mean absolute error (MAE) of 1.0 cm/s in an unseen in-silico setting, outperforming deterministic alternatives (linear interpolation MAE = 2.3 cm/s, sinc interpolation MAE = 2.6 cm/s). Further, the network synthesized high-resolution temporal information from unseen low-resolution in-vivo data, with strong correlation observed at peak flow frames. As such, our results highlight the potential of utilizing data-driven neural networks for temporal super-resolution 4D Flow MRI, enabling high-frame-rate flow quantification without extending acquisition times beyond clinically acceptable limits.
... The individual coil data is then combined in either the image domain [20] or the frequency domain [10]. Another way of tackling this problem is through variational methods or compressed sensing (CS) theory, which leverages sparse representations of the data in a specific domain and incoherent sampling to enable image reconstruction [1,17]. ...
Preprint
Full-text available
Diffusion models have recently shown remarkable results in magnetic resonance imaging reconstruction. However, the employed networks typically are black-box estimators of the (smoothed) prior score with tens of millions of parameters, restricting interpretability and increasing reconstruction time. Furthermore, parallel imaging reconstruction algorithms either rely on off-line coil sensitivity estimation, which is prone to misalignment and restricts sampling trajectories, or perform per-coil reconstruction, making the computational cost proportional to the number of coils. To overcome this, we jointly reconstruct the image and the coil sensitivities using the lightweight, parameter-efficient, and interpretable product of Gaussian mixture diffusion model as an image prior and a classical smoothness prior on the coil sensitivities. The proposed method delivers promising results while allowing for fast inference and demonstrating robustness to contrast out-of-distribution data and sampling trajectories, comparable to classical variational penalties such as total variation. Finally, the probabilistic formulation allows the calculation of the posterior expectation and pixel-wise variance.
... However, these techniques have reached a performance limit, where conventional reconstruction methods struggle to adequately reconstruct the undersampled data. Recognizing the need to transcend the limitations of conventional MRI acquisition and reconstruction techniques, there has been a recent surge in the application and development of deep convolutional neural network (CNN) models [10,11] in the field of MRI. Notably, a commercially available deep learning-based reconstruction (DLR) pipeline, known as AIR Recon DL by GE Healthcare [12,13], has emerged as a significant advancement. ...
Article
Full-text available
Background Conventional hip joint MRI scans necessitate lengthy scan durations, posing challenges for patient comfort and clinical efficiency. Previously, accelerated imaging techniques were constrained by a trade-off between noise and resolution. Leveraging deep learning-based reconstruction (DLR) holds the potential to mitigate scan time without compromising image quality. Methods We enrolled a cohort of sixty patients who underwent DL-MRI, conventional MRI, and No-DL MRI examinations to evaluate image quality. Key metrics considered in the assessment included scan duration, overall image quality, quantitative assessments of Relative Signal-to-Noise Ratio (rSNR), Relative Contrast-to-Noise Ratio (rCNR), and diagnostic efficacy. Two experienced radiologists independently assessed image quality using a 5-point scale (5 indicating the highest quality). To gauge interobserver agreement for the assessed pathologies across image sets, we employed weighted kappa statistics. Additionally, the Wilcoxon signed rank test was employed to compare image quality and quantitative rSNR and rCNR measurements. Results Scan time was significantly reduced with DL-MRI and represented an approximate 66.5% reduction. DL-MRI consistently exhibited superior image quality in both coronal T2WI and axial T2WI when compared to both conventional MRI (p < 0.01) and No-DL-MRI (p < 0.01). Interobserver agreement was robust, with kappa values exceeding 0.735. For rSNR data, coronal fat-saturated (FS) T2WI and axial FS T2WI in DL-MRI consistently outperformed No-DL-MRI, with statistical significance (p < 0.01) observed in all cases. Similarly, rCNR data revealed significant improvements (p < 0.01) in coronal FS T2WI of DL-MRI when compared to No-DL-MRI. Importantly, our findings indicated that DL-MRI demonstrated diagnostic performance comparable to conventional MRI. Conclusion Integrating deep learning-based reconstruction methods into standard clinical workflows has the potential to accelerate image acquisition, enhance image clarity, and increase patient throughput, thereby optimizing diagnostic efficiency. Trial registration Retrospectively registered.
... In undersampled magnetic resonance imaging (MRI) reconstruction, for example, deep learning-based approaches superseded purely hand-crafted priors such as ℓ1-wavelet compressed sensing [36] or total variation (TV) [29] years ago. At first, research mainly focused on discriminative approaches [19,1,41], which are able to generate remarkably good reconstructions. ...
Preprint
Full-text available
Diffusion models have been successfully applied to many inverse problems, including MRI and CT reconstruction. Researchers typically re-purpose models originally designed for unconditional sampling without modifications. Using two different posterior sampling algorithms, we show empirically that such large networks are not necessary. Our smallest model, effectively a ResNet, performs almost as well as an attention U-Net on in-distribution reconstruction, while being significantly more robust towards distribution shifts. Furthermore, we introduce models trained on natural images and demonstrate that they can be used in both MRI and CT reconstruction, outperforming models trained on medical images in out-of-distribution cases. As a result of our findings, we strongly caution against simply re-using very large networks and encourage researchers to adapt the model complexity to the respective task. Moreover, we argue that a key step towards a general diffusion-based prior is training on natural images.
... By formulating reconstruction as a nonlinear inverse problem, model-based reconstruction can estimate physical quantitative maps from undersampled k-space data without intermediate reconstruction or pixelwise fitting. Advanced regularization techniques, such as sparsity constraints [40], further enhance precision in quantitative mapping. Recently, this approach has been extended to reconstruct water, fat, and R2* maps from undersampled 3D multi-echo FLASH for liver imaging [28,41], also enabling additional B0 estimation [30]. ...
Article
Full-text available
Purpose To develop a rapid, high-resolution and distortion-free quantitative R2* mapping technique for fetal brain at 3 T. Methods A 2D multi-echo radial FLASH sequence with blip gradients is adapted for fetal brain data acquisition during maternal free breathing at 3 T. A calibrationless model-based reconstruction with sparsity constraints is developed to jointly estimate water, fat, R2* and B0 field maps directly from the acquired k-space data. Validations have been performed on numerical and NIST phantoms and five fetal subjects ranging from 27 weeks to 36 weeks gestation age. Results Both numerical and experimental phantom studies confirm good accuracy and precision of the proposed method. In fetal studies, both the parallel imaging compressed sensing (PICS) technique with a Graph Cut algorithm and the model-based approach proved effective for parameter quantification, with the latter providing enhanced image details. Compared to commonly used multi-echo EPI approaches, the proposed radial technique shows improved spatial resolution (1.1 × 1.1 × 3 mm³ vs. 2–3 × 2–3 × 3 mm³) and reduced distortion. Quantitative R2* results confirm good agreement between the two acquisition strategies. Additionally, high-resolution, distortion-free R2*-weighted images can be synthesized, offering complementary information to HASTE. Conclusion This work demonstrates the feasibility of radial acquisition for motion-robust quantitative R2* mapping of the fetal brain. This proposed multi-echo radial FLASH, combined with calibrationless model-based reconstruction, achieves accurate, distortion-free fetal brain R2* mapping at a nominal resolution of 1.1 × 1.1 × 3 mm³ within 2 seconds.
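
For reference, reconstructions of this type build on the standard multi-echo chemical-shift signal model; schematically (our rendering, with a single fat peak for brevity, whereas multi-peak fat models are common in practice):

```latex
% Single-peak chemical-shift signal model at echo time t_n:
s(t_n) = \big( W + F\, e^{\,i 2\pi f_{\mathrm{fat}} t_n} \big)\,
         e^{-R_2^{*} t_n}\, e^{\,i 2\pi \Delta f_{B_0} t_n} ,
```

where $s(t_n)$ is the signal at echo time $t_n$, $W$ and $F$ are the water and fat components, $f_{\mathrm{fat}}$ the fat chemical shift, and $\Delta f_{B_0}$ the field-map off-resonance; model-based reconstruction fits $W$, $F$, $R_2^*$, and $\Delta f_{B_0}$ directly to the undersampled k-space data.
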
Article
The purpose of this study was to accelerate MR cholangiopancreatography (MRCP) acquisitions using deep learning‐based (DL) reconstruction at 3 and 0.55 T. A total of 35 healthy volunteers underwent conventional twofold accelerated MRCP scans at field strengths of 3 and 0.55 T. We trained DL reconstructions using two different training strategies, supervised (SV) and self‐supervised (SSV), with retrospectively sixfold undersampled data obtained at 3 T. We then evaluated the DL reconstructions against standard techniques, parallel imaging (PI) and compressed sensing (CS), focusing on peak signal‐to‐noise ratio (PSNR) and structural similarity (SSIM) as metrics. We also tested DL reconstructions with prospectively accelerated acquisitions and evaluated their robustness when changing fields strengths from 3 to 0.55 T. DL reconstructions demonstrated a reduction in average acquisition time from 599/542 to 255/180 s for MRCP at 3 T/0.55 T. In both retrospective and prospective undersampling, PSNR and SSIM of DL reconstructions were higher than those of PI and CS. At the same time, DL reconstructions preserved the image quality of undersampled data, including sharpness and the visibility of hepatobiliary ducts. In addition, both DL approaches produced high‐quality reconstructions at 0.55 T. In summary, DL reconstructions trained for highly accelerated MRCP enabled a reduction in acquisition time by a factor of 2.4/3.0 at 3 T/0.55 T while maintaining the image quality of conventional acquisitions.
Article
Full-text available
Purpose: To investigate the accuracy of proton density fat fraction (PDFF) measurement using chemical shift-encoded MRI (CSE-MRI) with fast imaging techniques in a phantom. Methods: A 1.5T imaging system (Prodiva; Philips Healthcare) and a PDFF phantom (Fat Fraction Phantom Model 300; Calimetrix) were used in this study. Acquisitions without fast imaging techniques (conventional acquisition), with parallel imaging in the phase-encode direction (SENSE acquisition), with compressed sensing (CS-SENSE acquisition), and with parallel imaging in both the phase-encode and slice-encode directions (Dual-SENSE acquisition) were performed. The following acceleration factors were used in the SENSE and CS-SENSE acquisitions: 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, and 8.0. For the Dual-SENSE acquisition, acceleration factors of 1.5, 2.0, 3.0, 4.0, and 5.0 were set identically in each of the two directions. The relationships between reference PDFF values and PDFF measurements obtained using each acquisition were assessed using linear regression analysis and Bland–Altman analysis. Results: According to the linear regression analysis, the slopes and intercepts of the regression lines ranged from 0.87 to 1.02 and from 0.06% to 3.55%, respectively. According to the Bland–Altman analysis, there was a fixed bias between reference PDFF values and PDFF measurements obtained using the SENSE acquisition with reduction factor 8.0 and the Dual-SENSE acquisition with reduction factor 5.0. For the CS-SENSE acquisition with reduction factors from 7.0 to 8.0, the SENSE acquisition with reduction factors from 3.0 to 8.0, and the Dual-SENSE acquisition with reduction factors from 2.0 to 5.0, some vials showed errors of ±1.5% or more between the reference PDFF values and the PDFF measurements in the range of 0% to 50% PDFF. Conclusion: In the CS-SENSE acquisition, the accuracy of PDFF measurement was maintained within 1.5% up to a reduction factor of 6.0. The accuracy of PDFF measurement was maintained within 1.5% up to a reduction factor of 2.0 in the SENSE acquisition and a reduction factor of 1.5 in the Dual-SENSE acquisition.
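For readers unfamiliar with the statistics used, the following minimal sketch reproduces the two analyses (linear regression plus Bland–Altman bias and limits of agreement) on made-up numbers; the values are synthetic, not the study's data.

```python
# Sketch of the reported analysis: regression of measured vs. reference PDFF,
# plus Bland-Altman bias and 95% limits of agreement (synthetic numbers).
import numpy as np
from scipy import stats

reference = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # [%]
measured  = np.array([0.4, 5.2, 10.9, 20.6, 29.1, 40.8, 49.5])   # [%]

slope, intercept, r, _, _ = stats.linregress(reference, measured)
print(f"slope={slope:.2f}, intercept={intercept:.2f}%, r={r:.3f}")

diff = measured - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # half-width of limits of agreement
print(f"bias={bias:.2f}%, 95% LoA=[{bias - loa:.2f}%, {bias + loa:.2f}%]")
```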
Preprint
Full-text available
Designing the physical encoder is crucial for accurate image reconstruction in computational imaging (CI) systems. Currently, these systems are designed via end-to-end (E2E) optimization, where the encoder is modeled as a neural network layer and is jointly optimized with the decoder. However, the performance of E2E optimization is significantly reduced by the physical constraints imposed on the encoder. Also, since E2E optimization learns the parameters of the encoder by backpropagating the reconstruction error, it does not promote optimal intermediate outputs and suffers from vanishing gradients. To address these limitations, we reinterpret the concept of knowledge distillation (KD) for designing a physically constrained CI system by transferring the knowledge of a pretrained, less-constrained CI system. Our approach involves three steps: (1) given the original CI system (student), a teacher system is created by relaxing the constraints on the student's encoder; (2) the teacher is optimized to solve a less-constrained version of the student's problem; (3) the teacher guides the training of the student through two proposed knowledge transfer functions, targeting both the encoder and the decoder feature space. The proposed method can be applied to any imaging modality, since the relaxation scheme and the loss functions can be adapted according to the physical acquisition and the employed decoder. This approach was validated on three representative CI modalities: magnetic resonance, single-pixel, and compressive spectral imaging. Simulations show that a teacher system with an encoder whose structure is similar to that of the student encoder provides effective guidance. Our approach achieves significantly improved reconstruction performance and encoder design, outperforming both E2E optimization and traditional non-data-driven encoder designs.
Article
Existing unfolding-based compressive imaging approaches always suffer from certain issues, including inefficient feature extraction and information loss during iterative reconstruction phases, which become particularly evident at low sampling ratios, i.e., significant detail degradation and distortion in reconstructed images. To mitigate these challenges, we propose USB-Net, a deep unfolding method inspired by the renowned Split Bregman algorithm and a multi-phase feature integration strategy, for compressive imaging reconstruction. Specifically, we use a customized Depthwise Attention Block as a fundamental block, not only for feature extraction but also to address the sparsity-related splitting operator within the Split Bregman method. Based on this, we introduce three Auxiliary Iteration Modules, X(k), D(k), and B(k), to reinforce the effectiveness of Split Bregman's decomposition strategy for problem breakdown and Bregman iterations. Moreover, we introduce two categories of Iterative Fusion Modules to seamlessly harmonize and integrate insights across iterative reconstruction phases, enhancing the utilization of crucial features, such as edge information and textures. In general, USB-Net can fully harness the advantages of the traditional Split Bregman approach, manipulating multi-phase iterative insights to enhance feature extraction, optimize data fidelity, and achieve high-quality image reconstruction. Extensive experiments show that USB-Net significantly outperforms current state-of-the-art methods on image compressive sensing, CS magnetic resonance imaging, and snapshot compressive imaging tasks, demonstrating superior generalizability. Our code is available at USB-Net.
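For reference, the classic Split Bregman iteration that such networks unfold looks as follows for 1D total-variation denoising; this is a minimal textbook sketch with arbitrary parameters, not the proposed network.

```python
# The classic Split Bregman iteration, shown for 1D total-variation denoising:
# split d = Du, alternate a quadratic u-subproblem, a shrinkage d-subproblem,
# and a Bregman variable update. Parameters are illustrative assumptions.
import numpy as np

def shrink(x, t):                                      # soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
N = 200
u_true = np.zeros(N); u_true[60:140] = 1.0             # piecewise-constant signal
f = u_true + 0.1 * rng.standard_normal(N)              # noisy observation

D = np.diff(np.eye(N), axis=0)                         # finite-difference operator
mu, lam = 10.0, 5.0                                    # fidelity / splitting weights
A = mu * np.eye(N) + lam * D.T @ D                     # matrix of the u-subproblem

u = f.copy()
d = np.zeros(N - 1)
b = np.zeros(N - 1)
for _ in range(50):
    u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # quadratic subproblem
    Du = D @ u
    d = shrink(Du + b, 1.0 / lam)                      # sparsity subproblem on d = Du
    b = b + Du - d                                     # Bregman variable update
print("RMSE:", np.sqrt(np.mean((u - u_true) ** 2)))
```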
Article
Full-text available
Purpose To evaluate accelerated T1‐ and T2‐mapping techniques for ultra–low‐field MRI using low‐rank reconstruction methods. Methods Two low‐rank–based algorithms, image‐based locally low‐rank (LLR) and k‐space–based structured low‐rank (SLR), were implemented to accelerate T1 and T2 mapping on a 46 mT Halbach MRI scanner. Data were acquired with 3D turbo spin‐echo sequences using variable‐density Poisson‐disk random sampling patterns. For validation, phantom and in vivo experiments were performed on six healthy volunteers to compare the obtained values with literature values and to study reconstruction performance at different undersampling factors and spatial resolutions. In addition, the reconstruction performance of the LLR and SLR algorithms for T1 mapping was compared using retrospectively undersampled datasets. Total scan times were reduced from 45/38 min (R = 1) to 23/19 min (R = 2) and 11/9 min (R = 4) at a 2.5 × 2.5 × 5 mm³ resolution, and to 18/16 min (R = 4) at a higher in‐plane resolution of 1.5 × 1.5 × 5 mm³, for T1/T2 mapping, respectively. Results Both the LLR and SLR algorithms successfully reconstructed T1 and T2 maps from undersampled data, significantly reducing scan times and eliminating undersampling artifacts. Phantom validation showed that consistent T1 and T2 values were obtained at different undersampling factors up to R = 4. For in vivo experiments, comparable image quality and estimated T1 and T2 values were obtained for fully sampled and undersampled (R = 4) reconstructions, both of which were in line with literature values. Conclusions The use of low‐rank reconstruction allows significant acceleration of T1 and T2 mapping in low‐field MRI while maintaining image quality.
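A minimal sketch of the locally low-rank building block (singular-value soft-thresholding of patchwise Casorati matrices) is shown below; the patch size, threshold, and rank-1 test series are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the locally low-rank (LLR) proximal step: soft-threshold singular
# values of each patch's Casorati matrix. Sizes and threshold are assumptions.
import numpy as np

def llr_prox(series, patch=8, tau=0.5):
    """series: (nx, ny, nt) image series; returns the LLR-regularized series."""
    nx, ny, nt = series.shape
    out = series.copy()
    for i in range(0, nx - patch + 1, patch):
        for j in range(0, ny - patch + 1, patch):
            block = out[i:i+patch, j:j+patch, :].reshape(-1, nt)  # Casorati matrix
            U, s, Vh = np.linalg.svd(block, full_matrices=False)
            s = np.maximum(s - tau, 0.0)                          # singular-value soft-threshold
            out[i:i+patch, j:j+patch, :] = ((U * s) @ Vh).reshape(patch, patch, nt)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
series = img[..., None] * np.exp(-3.0 * t)            # temporally rank-1 series
noisy = series + 0.05 * rng.standard_normal(series.shape)
print("error before:", np.linalg.norm(noisy - series))
print("error after: ", np.linalg.norm(llr_prox(noisy) - series))
```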
Preprint
Motion remains a major challenge in magnetic resonance (MR) imaging, particularly in free-breathing cardiac MR imaging, where data are acquired over multiple heartbeats at varying respiratory phases. We adopt a model-based approach for nonrigid motion correction, addressing two challenges: (a) motion representation and (b) motion estimation. For motion representation, we derive image-space gridding by adapting the nonuniform fast Fourier transform (NUFFT) to represent and compute nonrigid motion, which provides an exact forward-adjoint pair of linear operators. We then introduce nonrigid SENSE operators that incorporate nonrigid motion into the multi-coil MR acquisition model. For motion estimation, we employ both low-resolution 3D image-based navigators (iNAVs) and high-resolution 3D self-navigating image-based navigators (self-iNAVs). During each heartbeat, data are acquired along two types of non-Cartesian trajectories: a subset of a high-resolution trajectory that sparsely covers 3D k-space, followed by a full low-resolution trajectory. We reconstruct 3D iNAVs for each heartbeat using the full low-resolution data, which are then used to estimate bulk motion and identify the respiratory phase of each heartbeat. By combining data from multiple heartbeats within the same respiratory phase, we reconstruct high-resolution 3D self-iNAVs, allowing estimation of nonrigid respiratory motion. For each respiratory phase, we construct the nonrigid SENSE operator, reformulating the nonrigid motion-corrected reconstruction as a standard regularized inverse problem. In a preliminary study, the proposed method enhanced sharpness of the coronary arteries and improved image quality in non-cardiac regions, outperforming translational motion-corrected reconstruction.
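The "exact forward-adjoint pair" property emphasized above can be checked numerically with a dot-product test; the sketch below does this for a simple masked-Fourier toy operator rather than the paper's nonrigid SENSE operator.

```python
# Dot-product (adjoint) test: <y, A x> should equal <A^H y, x> to machine
# precision for an exact forward-adjoint pair. Toy masked-Fourier operator.
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16)
mask = rng.random(shape) < 0.3                        # undersampling pattern

def A(x):                                              # forward: image -> sampled k-space
    return np.fft.fft2(x, norm="ortho")[mask]

def AH(y):                                             # adjoint: zero-fill, inverse FFT
    ksp = np.zeros(shape, dtype=complex)
    ksp[mask] = y
    return np.fft.ifft2(ksp, norm="ortho")

x = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
y = rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum())

lhs = np.vdot(y, A(x))                                 # <y, A x>
rhs = np.vdot(AH(y), x)                                # <A^H y, x>
print("relative adjoint error:", abs(lhs - rhs) / abs(lhs))
```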
Research
Full-text available
27th International Conference on Pattern Recognition (ICPR 2024), Doctoral Consortium Proceedings: https://perso.liris.cnrs.fr/veronique.eglin/DC-ICPR2024/Booklet_DC_ICPR2024.pdf. ICPR (CORE Rank A: https://www.iiti.ac.in/people/~artiwari/cseconflist.html) is the premier conference in pattern recognition. ICPR 2024 will be held in Kolkata, India; this is the first time ICPR is hosted in India, with prior editions held in countries such as Canada, Italy, and France (http://www.wikicfp.com/cfp/program?id=1448).
Article
The Deep Unfolding Network (DUN) has achieved great success in the image Compressed Sensing (CS) field, benefiting from its interpretability and performance. However, existing DUNs suffer from limited information transmission capacity as their structures grow increasingly complex, leading to undesirable results. Besides, current DUNs are mostly built on one specific optimization algorithm, which hampers the development and understanding of DUNs. In this paper, we propose a new unfolding formula combining the Approximate Message Passing algorithm (AMP) and Range-Nullspace Decomposition (RND), which offers new insights for DUN design. To maximize information transmission and utilization, we propose a novel High-Throughput Decomposition-Inspired Deep Unfolding Network (HTDIDUN) based on the new formula. Specifically, we design a powerful Nullspace Information Extractor (NIE) with high-throughput transmission and stacked residual channel attention blocks. By modulating the dimension of the feature space, we provide three implementations from small to large. Extensive experiments on natural and medical images show that our HTDIDUN family members outperform other state-of-the-art methods by a large margin. Our codes and pre-trained models are available on GitHub to facilitate further exploration.
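The range-nullspace idea itself is easy to demonstrate: the range component of the estimate is pinned down by the data, and only the nullspace component remains for a learned module to supply. The toy sketch below uses a dense pseudoinverse as a stand-in for the decomposition a network would use.

```python
# Sketch of range-nullspace decomposition (RND): split any estimate into a
# data-consistent range component A^+ y and a free nullspace component.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 50                                          # underdetermined: n < m
A = rng.standard_normal((n, m))
x_true = np.zeros(m); x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
y = A @ x_true

A_pinv = np.linalg.pinv(A)
x_range = A_pinv @ y                                   # range component (min-norm solution)
P_null = np.eye(m) - A_pinv @ A                        # projector onto null(A)

x_est = rng.standard_normal(m)                         # stand-in for a network output
x_dc = x_range + P_null @ x_est                        # nullspace-corrected estimate
print("data-consistency residual:", np.linalg.norm(A @ x_dc - y))  # ~0 by construction
```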
Article
In the spaces between data-hungry generative models and measurement-rich computational imaging, we can find the field of computational photography. Can cell phone cameras be an accessible and affordable bridge between modern computer vision and traditional inverse imaging problems?
Article
Full-text available
A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time-dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.
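A minimal sketch in this spirit follows: gradient descent on a smoothed TV energy with a quadratic fidelity term, rather than the original constrained gradient-projection scheme; the step size, smoothing, and weights are hand-picked assumptions.

```python
# Minimal TV denoising sketch: descend a smoothed total-variation energy plus
# a quadratic fidelity term (not the original constrained PDE scheme).
import numpy as np

def tv_denoise(f, lam=1.0, eps=0.3, step=0.05, iters=200):
    u = f.copy()
    for _ in range(iters):
        ux = np.gradient(u, axis=0)
        uy = np.gradient(u, axis=1)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)          # smoothed |grad u|
        # div(grad u / |grad u|) is the curvature term mentioned in the abstract
        div = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)
        u = u + step * (div - lam * (u - f))           # descend TV + 0.5*lam*||u-f||^2
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((tv_denoise(noisy) - clean) ** 2)))
```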
Article
Full-text available
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximations of the noisy data that contain only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
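Of the approximation algorithms analyzed, matching pursuit is the simplest to sketch; the toy below greedily selects unit-norm atoms of a random overcomplete dictionary (sizes and noise level are arbitrary assumptions).

```python
# Sketch of matching pursuit on a toy overcomplete dictionary: repeatedly pick
# the atom most correlated with the residual and peel off its contribution.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 256, 4
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)                     # unit-norm atoms

coef = np.zeros(m)
coef[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ coef + 0.01 * rng.standard_normal(n)         # noisy sparse signal

residual, estimate = y.copy(), np.zeros(m)
for _ in range(3 * k):                                 # greedy atom selection
    corr = Phi.T @ residual
    j = np.argmax(np.abs(corr))
    estimate[j] += corr[j]                             # atoms are unit norm
    residual -= corr[j] * Phi[:, j]

print("true support:     ", sorted(np.flatnonzero(coef)))
print("largest estimates:", sorted(np.argsort(-np.abs(estimate))[:k]))
```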
Article
Full-text available
We show an iterative reconstruction framework for diffraction ultrasound tomography. The use of broad-band illumination allows a significant reduction in the number of projections compared to straight-ray tomography. The proposed algorithm makes use of the forward nonuniform fast Fourier transform (NUFFT) for iterative Fourier inversion. Incorporation of total variation regularization allows the reduction of noise and Gibbs phenomena while preserving edges. The complexity of the NUFFT-based reconstruction is comparable to that of the frequency-domain interpolation (gridding) algorithm, whereas the reconstruction accuracy (in the sense of the L2 and L∞ norms) is better.
Article
Full-text available
The dynamic MR imaging of time-varying objects, such as beating hearts or brain hemodynamics, requires a significant reduction of the data acquisition time without sacrificing spatial resolution. The classical approaches to this goal include parallel imaging, temporal filtering, and their combinations. Recently, model-based reconstruction methods called k-t BLAST and k-t SENSE have been proposed which largely overcome the drawbacks of the conventional dynamic imaging methods without a priori knowledge of the spectral support. Another recent approach called k-t SPARSE also does not require exact knowledge of the spectral support. However, unlike k-t BLAST/SENSE, k-t SPARSE employs the so-called compressed sensing (CS) theory rather than using training. The main contribution of this paper is a new theory and algorithm that unifies the above-mentioned approaches while overcoming their drawbacks. Specifically, we show that the celebrated k-t BLAST/SENSE are special cases of our algorithm, which is asymptotically optimal from the CS theory perspective. Experimental results show that the new algorithm can successfully reconstruct a high-resolution cardiac sequence and functional MRI data even from severely limited k-t samples, without incurring the aliasing artifacts often observed in conventional methods.
Article
Full-text available
Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with l_2 error O(N^(1/2 - 1/p)). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing). The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and of information-based complexity. We estimate the Gel'fand n-widths of l_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
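The linear program referred to, Basis Pursuit (min ||x||_1 subject to Ax = y), can be posed directly with an off-the-shelf LP solver by splitting x into positive and negative parts; a small sketch using scipy follows, with arbitrary toy sizes.

```python
# Sketch of Basis Pursuit as a linear program: min ||x||_1 s.t. Ax = y,
# with x = u - v, u, v >= 0, so the objective becomes sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 80, 3                                    # n measurements of an m-vector
A = rng.standard_normal((n, m)) / np.sqrt(n)           # "random" measurement matrix
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = [1.5, -2.0, 1.0]
y = A @ x_true

c = np.ones(2 * m)                                     # minimize sum(u) + sum(v)
A_eq = np.hstack([A, -A])                              # enforce A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```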
Conference Paper
Can we recover a signal f∈R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis Ψ. Then not knowing anything in advance about the signal, f can (very nearly) be recovered from about M log N generic nonadaptive measurements only. The recovery procedure is concrete and consists in solving a simple convex optimization program. In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate that it is empirically possible to recover an object from about 3M-5M projections onto generically chosen vectors with an accuracy which is as good as that obtained by the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.
Article
A variable-density k-space sampling method is proposed to reduce aliasing artifacts in MR images. Because most of the energy of an image is concentrated around the k-space center, aliasing artifacts will contain mostly low-frequency components if the k-space is uniformly undersampled. On the other hand, because the outer k-space region contains little energy, undersampling that region will not contribute severe aliasing artifacts. Therefore, a variable-density trajectory may sufficiently sample the central k-space region to reduce low-frequency aliasing artifacts and may undersample the outer k-space region to reduce scan time and to increase resolution. In this paper, the variable-density sampling method was implemented for both spiral imaging and two-dimensional Fourier transform (2DFT) imaging. Simulations, phantom images and in vivo cardiac images show that this method can significantly reduce the total energy of aliasing artifacts. In general, this method can be applied to all types of k-space sampling trajectories.
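A minimal sketch of such a variable-density scheme for Cartesian phase encodes, with an arbitrary polynomial density and a fully sampled center, might look like this:

```python
# Sketch of pseudo-random variable-density undersampling of phase encodes:
# sampling probability decays with distance from the k-space center. The
# density exponent and center width are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, accel = 256, 4                                      # phase encodes, target R
k = np.abs(np.arange(N) - N // 2) / (N // 2)           # normalized |k| in [0, 1]
pdf = (1.0 - k) ** 3                                   # density falls off at high |k|
pdf *= (N / accel) / pdf.sum()                         # scale to the target rate
mask = rng.random(N) < np.clip(pdf, 0.0, 1.0)
mask[N // 2 - 16 : N // 2 + 16] = True                 # fully sample the center
print(f"achieved acceleration: {N / mask.sum():.2f}")
```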
Article
New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementary to Fourier preparation by linear field gradients. Thus, by using multiple receiver coils in parallel, scan time in Fourier imaging can be considerably reduced. The problem of image reconstruction from sensitivity-encoded data is formulated in a general fashion and solved for arbitrary coil configurations and k-space sampling patterns. Special attention is given to the currently most practical case, namely, sampling a common Cartesian grid with reduced density. For this case the feasibility of the proposed methods was verified both in vitro and in vivo. Scan time was reduced to one-half using a two-coil array in brain imaging. With an array of five coils, double-oblique heart images were obtained in one-third of conventional scan time.
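The Cartesian R = 2 case reduces to solving a tiny least-squares system per aliased pixel; the 1D toy below constructs Gaussian coil sensitivities (an assumption made up for this sketch) and unfolds the aliasing exactly.

```python
# 1D toy of Cartesian SENSE at R = 2: skipping every other phase encode folds
# pixel p onto pixel p + N/2; coil sensitivities make the 2x2 system solvable.
import numpy as np

rng = np.random.default_rng(0)
N, nc, R = 128, 4, 2
x = np.zeros(N); x[40:90] = 1.0 + 0.5 * np.sin(np.arange(50) / 5.0)
centers = np.linspace(0, N, nc)                        # assumed coil positions
sens = np.exp(-0.5 * ((np.arange(N)[None, :] - centers[:, None]) / 40.0) ** 2)

coil_imgs = sens * x                                   # sensitivity-weighted images
aliased = coil_imgs[:, : N // R] + coil_imgs[:, N // R :]   # R = 2 folding

recon = np.zeros(N)
for p in range(N // R):
    S = sens[:, [p, p + N // R]]                       # nc x 2 encoding matrix
    sol = np.linalg.lstsq(S, aliased[:, p], rcond=None)[0]
    recon[[p, p + N // R]] = sol
print("max unfolding error:", np.abs(recon - x).max())
```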
Article
The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l(1) norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
Article
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the l(1) minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
Article
A method for high frame-rate dynamic imaging exploiting the spatio-temporal sparsity of a dynamic MRI sequence of images (scene) is proposed. The k-t space is randomly sampled by random ordering of the phase encodes in time. The dynamic scene is reconstructed by minimizing the L1 norm of a transformed dynamic scene subject to data fidelity constraints. The proposed method requires neither a known structure nor a training set, only that the dynamic scene has a sparse representation. A 7-fold frame-rate acceleration is demonstrated in simulated data and in vivo non-gated Cartesian balanced-SSFP cardiac MRI.
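Generating such a randomly ordered k-t sampling pattern is straightforward; a sketch with illustrative sizes:

```python
# Sketch of k-t random sampling: each time frame acquires a different
# pseudo-random subset of phase encodes, making aliasing incoherent in k-t.
import numpy as np

rng = np.random.default_rng(0)
n_pe, n_frames, per_frame = 128, 32, 16                # ~8-fold frame acceleration
mask = np.zeros((n_pe, n_frames), dtype=bool)
for t in range(n_frames):
    picks = rng.choice(n_pe, per_frame, replace=False) # new random ordering per frame
    mask[picks, t] = True
    mask[n_pe // 2 - 2 : n_pe // 2 + 2, t] = True      # always keep the center
print("mean acceleration:", mask.size / mask.sum())
```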
Article
We propose a fast imaging method based on undersampled k-space spiral sampling and non-linear reconstruction. Our approach is inspired by theoretical results in sparse signal recovery [1,2] showing that compressible signals can be completely recovered from randomly undersampled frequency data. Since random sampling in frequency space is impractical for MRI hardware, we develop a practical strategy allowing 50% undersampling by adapting spiral MR imaging. We introduce randomness by
Article
Finding the sparsest solution to a set of underdetermined linear equations is NP-hard in general. However, recent research has shown that for certain systems of linear equations, the sparsest solution (i.e., the solution with the smallest number of nonzeros) is also the solution with minimal l1 norm, and so can be found by a computationally tractable method. For a given n by m matrix F defining a system y = Fa, with n < m making the system underdetermined, this phenomenon holds whenever there exists a 'sufficiently sparse' solution a0. We quantify the 'sufficient sparsity' condition, defining an equivalence breakdown point (EBP): the degree of sparsity of a required to guarantee equivalence to hold; this threshold depends on the matrix F. In this paper we study the size of the EBP for 'typical' matrices with unit norm columns (the uniform spherical ensemble (USE)); Donoho showed that for such matrices F, the EBP is at least proportional to n. We distinguish three notions of breakdown point (global, local, and individual) and describe a semi-empirical heuristic for predicting the local EBP for this ensemble. Our heuristic identifies a configuration which can cause breakdown and predicts the level of sparsity required to avoid that situation. In experiments, our heuristic provides upper and lower bounds bracketing the EBP for 'typical' matrices in the USE. For instance, for an n × m matrix F(n,m) with m = 2n, our heuristic predicts breakdown of local equivalence when the coefficient vector a has about 30% nonzeros (relative to the reduced dimension n). This figure reliably describes the observed empirical behavior. A rough approximation to the observed breakdown point is provided by the simple formula 0.44 · n / log(2m/n). There are many matrix ensembles of interest outside the USE; our heuristic may be useful in speeding up empirical studies of breakdown point at such ensembles. Rather than solving numerous linear programming problems per (n, m) combination, at least several for each degree of sparsity, the heuristic suggests conducting a few experiments to measure the driving term of the heuristic and derive predictive bounds. We tested the applicability of this heuristic to three special ensembles of matrices, including the partial Hadamard ensemble and the partial Fourier ensemble, and found
Article
A rapid dynamic imaging technique based on polar k-space sampling is presented. A gain in temporal resolution is achieved by angular undersampling. A detailed analysis of the point spread function of angularly undersampled polar imaging reveals a reduced diameter of the corresponding circular field of view. Under the assumption that dynamic changes are restricted to a local circular field of view, angularly undersampled dynamic imaging allows the recording of rapid changes at high temporal and spatial resolution. The theoretical and experimental details of the technique are presented.
Article
We study the notion of compressed sensing (CS) as put forward by Donoho, Candes, Tao and others. The notion proposes that a signal or image, unknown but supposed to be compressible by a known transform (e.g., wavelet or Fourier), can be subjected to fewer measurements than the nominal number of data points, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible l1 norm. We present initial 'proof-of-concept' examples in the favorable case where the vast majority of the transform coefficients are zero. We continue with a series of numerical experiments, for the setting of lp-sparsity, in which the object has all coefficients nonzero, but the coefficients obey an lp bound, for some p ∈ (0,1]. The reconstruction errors obey the inequalities paralleling the theory, seemingly with well-behaved constants. We report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard. We next consider how these ideas can be used to model problems in spectroscopy and image processing, and in synthetic examples we see that the reconstructions from CS are often visually "noisy". To suppress this noise we post-process using translation-invariant denoising, and find the visual appearance considerably improved. We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology. These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.
Article
This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Despite being frequently deployed in many applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [M. Elad, Why simple shrinkage is still relevant for redundant representations? IEEE Trans. Inform. Theory 52 (12) (2006) 5559–5569], a new iterative method called parallel coordinate descent (PCD) was devised. We provide herein a convergence analysis of the PCD algorithm, and also introduce a form of the regularization function, which permits analytical solution to the coordinate optimization. Several other recent works [I. Daubechies, M. Defrise, C. De-Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. LVII (2004) 1413–1457; M.A. Figueiredo, R.D. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process. 12 (8) (2003) 906–916; M.A. Figueiredo, R.D. Nowak, A bound optimization approach to wavelet-based image deconvolution, in: IEEE International Conference on Image Processing, 2005], which considered the deblurring problem in a Bayesian methodology, also obtained element-wise optimization algorithms. We show that the last three methods are essentially equivalent, and the unified method is termed separable surrogate functionals (SSF). We also provide a convergence analysis for SSF. To further accelerate PCD and SSF, we merge them into a recently developed sequential subspace optimization technique (SESOP), with almost no additional complexity. A thorough numerical comparison of the denoising application is presented, using the basis pursuit denoising (BPDN) objective function, which leads all of the above algorithms to an iterated shrinkage format. Both with synthetic data and with real images, the advantage of the combined PCD-SESOP method is demonstrated.
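The "iterated shrinkage format" mentioned above is easiest to see in plain ISTA; the sketch below solves a toy l1-regularized least-squares problem, with the step size set by the spectral norm and an arbitrary regularization weight.

```python
# Sketch of ISTA, the iterated-shrinkage format shared by the compared methods:
# solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 on a toy dense problem.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 100
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m); x_true[[5, 33, 70]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of grad
x = np.zeros(m)
for _ in range(500):
    g = A.T @ (A @ x - y)                              # gradient of the quadratic term
    z = x - g / L                                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage step
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```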
Article
The focal underdetermined system solver (FOCUSS) was originally designed to obtain sparse solutions by successively solving quadratic optimization problems. This article adapts FOCUSS for a projection reconstruction MR imaging problem to obtain high resolution reconstructions from angular under-sampled radial k-space data. We show that FOCUSS is effective for projection reconstruction MRI, since medical images are usually sparse in some sense and the center region of the undersampled radial k-space samples still provides a low resolution, yet meaningful, image essential for the convergence of FOCUSS. The new algorithm is successfully applied for synthetic data as well as in vivo brain imaging obtained by under-sampled radial spin echo sequence.
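A basic FOCUSS-style iteration (reweighted minimum-norm solutions that progressively concentrate onto a sparse support) can be sketched on a dense toy system; the weighting exponent and iteration count are illustrative choices, not the article's adaptation.

```python
# Sketch of a basic FOCUSS-style iteration: alternate a reweighting from the
# current estimate with a weighted minimum-norm solve. Toy dense system.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 60
A = rng.standard_normal((n, m))
x_true = np.zeros(m); x_true[[7, 25, 50]] = [1.0, -0.8, 0.6]
y = A @ x_true

x = np.linalg.pinv(A) @ y                              # start from min-norm solution
for _ in range(15):
    W = np.diag(np.abs(x) ** 0.5)                      # reweighting from current estimate
    x = W @ np.linalg.pinv(A @ W) @ y                  # weighted min-norm update
print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))  # typically the true one
```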
Article
Non-Cartesian MRI Scan-Time Reduction through Sparse Sampling Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal cannot be taken at will, but lie along k-space trajectories determined by the magnetic field gradients. MRI measurements are usually Cartesian, where the trajectories are parallel and equidistant and sampling along the trajectories is also equidistant. This allows fast reconstruction using the inverse fast Fourier transform (IFFT). However, this thesis focuses on non-Cartesian MRI. Typical trajectories in this case are radial and spiral, but there exists a multitude of other possibilities. Chapter 2 gives a basic introduction to MRI relevant for this thesis. Image reconstruction in the non-Cartesian case cannot be accomplished by the IFFT. In certain cases, however, dedicated reconstruction algorithms are available. For example, for radial scanning there exists the Filtered Back Projection algorithm. Another possibility, aiming at maintaining the IFFT for the transformation to the image domain, is the gridding algorithm. This algorithm, which is capable of image reconstruction from a k-space sampled along arbitrary trajectories, is given extensive attention in this thesis. A major complication in non-Cartesian sampling is the compensation for the non-uniformity of the sampling density. In case the trajectories are rather regular, an analytical expression for the density may be derived or Voronoi triangulation can be applied. Another more recent approach is the Pipe-Menon algorithm. However, all these approaches fail in more irregular cases. The above-mentioned image reconstruction algorithms are described in chapter 3. They are based on the inverse Fourier transform and therefore require k-space sampling to obey the Nyquist criterion. These algorithms cannot cope with k-space undersampling. However, in certain cases there may not be enough time to fully sample k-space; or scan-time is deliberately reduced by omitting trajectories, the total scan-time being proportional to the number of trajectories. Under these circumstances we still want to be able to reconstruct an image. Chapter 4 presents two algorithms that are able to cope with undersampled k-space data and still reconstruct artefact-free images. The first, based on work by G.J. Marseille, who worked on Cartesian scans, aims at estimating values for the missing data. In this approach the missing data are estimated iteratively by shuttling back and forth between image and k-space, while smoothing the image with an edge-preserving filter and resetting the measured data to their original values. This algorithm has a major drawback in that it requires density compensation. This means that this algorithm is only applicable if the trajectories are regular. Moreover, the algorithm requires user input on which k-space points are missing. In certain cases, especially when sampling is irregular, this may be impossible or not desirable. Note that this difficulty is absent in undersampled Cartesian scans. The second algorithm, also based on work by G.J. Marseille, directly estimates the image from the available k-space data. It is based on Bayes' theorem, which allows both incorporation of consistency with the measured data, as in maximum likelihood estimation, and prior knowledge about the image. If k-space is undersampled, the k-space data alone give insufficient information to reconstruct satisfactory images.
The prior knowledge gives the required additional information. In the Cartesian case, one dimension is always completely measured. Along this dimension IFFT can already be applied. This effectively reduces the image reconstruction problem to one dimension, meaning that different columns in the image can be treated separately. Consequently, the used prior knowledge, i.e. the Lorentzian edge distribution model, is a function taking only into account edges in one direction. In the non-Cartesian case, no dimension is completely sampled. Therefore the image can not be treated column-wise. Moreover, the prior has to take into account edges in both directions, since there is no preferred direction. In this thesis the prior used in Cartesian work is extended to include edges in more than one direction. In contrast to the first algorithm, the Bayesian approach allows one to obviate density compensation. Therefore, this algorithm can handle any type of sampling, to whatever degree of irregularity of sampling. In addition, since the image is directly estimated there is no need for the user to input which k-space points are missing. Chapter 5 discusses image quality measures. The measures are necessary for evaluation of the developed reconstruction algorithms. The performance of the mentioned reconstruction algorithms, in case of deliberate omission of trajectories to reduce the scan-time, depends on which trajectories are omitted. Chapter 6 is devoted to the matter of how to omit trajectories. Ideally, the raw data satisfy Hermitian symmetry. This property can be exploited when omitting trajectories. This chapter also describes two ways of finding optimal omission of trajectories from a full measurement. Finally, one of these methods is applied to an in vivo spiral scan. Optimal distributions seem to omit trajectories irregularly, without clustering too many omitted trajectories together so as to keep local undersampling to a minimum. Finally, chapter 7 deals with applications. The methods alluded to above are tested on simulations and real-world data. The Bayesian estimator appears particularly suited for pseudo-random sample positions. Frank Wajer, Delft University of Technology
Article
Partial Fourier reconstruction algorithms exploit the redundancy in magnetic resonance data sets so that half of the data is calculated during image reconstruction rather than acquired. The conjugate synthesis, Margosian, homodyne detection, Cuppen and POCS algorithms are evaluated using spatial frequency domain analysis to show their characteristics and where limitations may occur. The phase correction used in partial Fourier reconstruction is equivalent to a convolution in the frequency domain, and the importance of accurately implementing this convolution is demonstrated. New reconstruction approaches, based on passing the partial data through a phase-correcting finite impulse response (FIR) digital filter, are suggested. These FIR and MoFIR algorithms have a speed near that of the Margosian and homodyne detection reconstructions, but with a lower error, close to that of the Cuppen/POCS iterative approaches. Quantitative analysis of the partial Fourier algorithms, tested with three phase estimation techniques, is provided by comparing artificial and clinical data reconstructed using full and partial Fourier techniques.
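The simplest member of this family, conjugate synthesis, is easy to sketch for an ideal real-valued 1D object, where the missing half of k-space follows from Hermitian symmetry; the phase-correction step the paper analyzes is omitted in this toy.

```python
# Sketch of conjugate synthesis: for a real-valued object, the unmeasured half
# of k-space is the conjugate mirror of the measured half (no phase errors here).
import numpy as np

rng = np.random.default_rng(0)
N = 128
img = np.zeros(N); img[40:90] = rng.random(50) + 0.5   # real-valued "image"
ksp = np.fft.fft(img)

half = np.zeros(N, dtype=complex)
half[: N // 2 + 1] = ksp[: N // 2 + 1]                 # keep only k = 0 .. N/2
for k in range(N // 2 + 1, N):                         # synthesize the missing half
    half[k] = np.conj(half[N - k])                     # Hermitian symmetry
recon = np.fft.ifft(half).real
print("max error:", np.abs(recon - img).max())         # exact for this ideal case
```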
Article
An MR angiographic technique, referred to as 3D TRICKS (3D time-resolved imaging of contrast kinetics), has been developed. This technique combines and extends to 3D imaging several previously published elements. These elements include an increased sampling rate for lower spatial frequencies, temporal interpolation of k-space views, and zero-filling in the slice-encoding dimension. When appropriately combined, these elements permit reconstruction of a series of 3D image sets having an effective temporal frame rate of one volume every 2-6 s. Acquiring a temporal series of images offers advantages over current contrast-enhanced 3D MRA techniques in that it (i) increases the likelihood that an arterial-only 3D image set will be obtained, (ii) permits the passage of the contrast agent to be observed, and (iii) allows temporal-processing techniques to be applied to yield additional information or improve image quality.
Article
SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.
Article
In several applications, MRI is used to monitor the time behavior of the signal in an organ of interest; e.g., signal evolution because of physiological motion, activation, or contrast-agent accumulation. Dynamic applications involve acquiring data in a k-t space, which contains both temporal and spatial information. It is shown here that in some dynamic applications, the t axis of k-t space is not densely filled with information. A method is introduced that can transfer information from the k axes to the t axis, allowing a denser, smaller k-t space to be acquired, and leading to significant reductions in the acquisition time of the temporal frames. Results are presented for cardiac-triggered imaging and functional MRI (fMRI), and are compared with data obtained in a conventional way. The temporal resolution was increased by nearly a factor of two in the cardiac-triggered study, and by as much as a factor of eight in the fMRI study. This increase allowed the acquisition of fMRI activation maps, even when the acquisition time for a single full time frame was actually longer than the paradigm cycle period itself. The new method can be used to significantly reduce the acquisition time of the individual temporal frames in certain dynamic studies. This can be used, for example, to increase the temporal or spatial resolution, increase the spatial coverage, decrease the total imaging time, or alter sequence parameters, e.g., repetition time (TR) and echo time (TE), and thereby alter contrast.
Article
Undersampled projection reconstruction (PR) is investigated as an alternative method for MRA (MR angiography). In conventional 3D Fourier transform (FT) MRA, resolution in the phase-encoding direction is proportional to acquisition time. Since the PR resolution in all directions is determined by the readout resolution, independent of the number of projections (Np), high resolution can be generated rapidly. However, artifacts increase for reduced Np. In X-ray CT, undersampling artifacts from bright objects like bone can dominate other tissue. In MRA, where bright, contrast-filled vessels dominate, artifacts are often acceptable and the greater resolution per unit time provided by undersampled PR can be realized. The resolution increase is limited by the SNR reduction associated with reduced voxel size. The hybrid 3D sequence acquires fractional echo projections in the kx-ky plane and phase encodings in kz. PR resolution and artifact characteristics are demonstrated in a phantom and in contrast-enhanced volunteer studies.
Article
Time-resolved contrast-enhanced 3D MR angiography (MRA) methods have gained in popularity but are still limited by the tradeoff between spatial and temporal resolution. A method is presented that greatly reduces this tradeoff by employing undersampled 3D projection reconstruction trajectories. The variable density k-space sampling intrinsic to this sequence is combined with temporal k-space interpolation to provide time frames as short as 4 s. This time resolution reduces the need for exact contrast timing while also providing dynamic information. Spatial resolution is determined primarily by the projection readout resolution and is thus isotropic across the FOV, which is also isotropic. Although undersampling the outer regions of k-space introduces aliased energy into the image, which may compromise resolution, this is not a limiting factor in high-contrast applications such as MRA. Results from phantom and volunteer studies are presented demonstrating isotropic resolution, broad coverage with an isotropic field of view (FOV), minimal projection reconstruction artifacts, and temporal information. In one application, a single breath-hold exam covering the entire pulmonary vasculature generates high-resolution, isotropic imaging volumes depicting the bolus passage.
Article
Dynamic images of natural objects exhibit significant correlations in k-space and time. Thus, it is feasible to acquire only a reduced amount of data and recover the missing portion afterwards. This leads to an improved temporal resolution, or an improved spatial resolution for a given amount of acquisition. Based on this approach, two methods were developed to significantly improve the performance of dynamic imaging, named k-t BLAST (Broad-use Linear Acquisition Speed-up Technique) and k-t SENSE (SENSitivity Encoding) for use with a single or multiple receiver coils, respectively. Signal correlations were learned from a small set of training data and the missing data were recovered using all available information in a consistent and integral manner. The general theory of k-t BLAST and k-t SENSE is applicable to arbitrary k-space trajectories, time-varying coil sensitivities, and under- and overdetermined reconstruction problems. Examples from ungated cardiac imaging demonstrate a 4-fold acceleration (voxel size 2.42 × 2.52 mm², 38.4 fps) with either one or six receiver coils. k-t BLAST and k-t SENSE are applicable to many areas, especially those exhibiting quasiperiodic motion, such as imaging of the heart, the lungs, the abdomen, and the brain under periodic stimulation.
Article
Non-uniform sampling is shown to provide significant time savings in the acquisition of a suite of three-dimensional NMR experiments utilized for obtaining backbone assignments of H, N, C', CA, and CB nuclei in proteins: HNCO, HN(CA)CO, HNCA, HN(CO)CA, HNCACB, and HN(CO)CACB. Non-uniform sampling means that data were collected for only a subset of all incremented evolution periods, according to a user-specified sampling schedule. When the suite of six 3D experiments was acquired in a uniform fashion for an 11 kDa cytoplasmic domain of a membrane protein at 1.5 mM concentration, a total of 146 h was consumed. With non-uniform sampling, the same experiments were acquired in 32 h and, through subsequent maximum entropy reconstruction, yielded spectra of similar quality to those obtained by conventional Fourier transform of the uniformly acquired data. The experimental time saved with this methodology can significantly accelerate protein structure determination by NMR, particularly when combined with the use of automated assignment software, and enable the study of samples with poor stability at room temperature. Since it is also possible to use the time savings to acquire a greater number of scans to increase sensitivity while maintaining high resolution, this methodology will help extend the size limit of proteins accessible to NMR studies, and open the way to studies of samples that suffer from solubility problems.
Article
Recent work in k-t BLAST and undersampled projection angiography has emphasized the value of using training data sets obtained during the acquisition of a series of images. These techniques have used iterative algorithms guided by the training set information to reconstruct time frames sampled at well below the Nyquist limit. We present here a simple non-iterative unfiltered backprojection algorithm that incorporates the idea of a composite image consisting of portions or all of the acquired data to constrain the backprojection process. This significantly reduces streak artifacts and increases the overall SNR, permitting decreased numbers of projections to be used when acquiring each image in the image time series. For undersampled 2D projection imaging applications, such as cine phase contrast (PC) angiography, our results suggest that the angular undersampling factor, relative to Nyquist requirements, can be increased from the present factor of 4 to about 100 while increasing SNR per individual time frame. Results are presented for a contrast-enhanced PR HYPR TRICKS acquisition in a volunteer using an angular undersampling factor of 75 and a TRICKS temporal undersampling factor of 3 for an overall undersampling factor of 225.
Article
Multislice breath-held coronary imaging techniques conventionally lack the coverage of free-breathing 3D acquisitions but use a considerably shorter acquisition window during the cardiac cycle. This produces images with significantly less motion artifact but a lower signal-to-noise ratio (SNR). By using the extra SNR available at 3 T and undersampling k-space without introducing significant aliasing artifacts, we were able to acquire high-resolution fat-suppressed images of the whole heart in 17 heartbeats (a single breath-hold). The basic pulse sequence consists of a spectral-spatial excitation followed by a variable-density spiral readout. This is combined with real-time localization and a real-time prospective shim correction. Images are reconstructed with the use of gridding, and advanced techniques are used to reduce aliasing artifacts.
Article
The reconstruction of artifact-free images from radially encoded MRI acquisitions poses a difficult task for undersampled data sets, that is for a much lower number of spokes in k-space than data samples per spoke. Here, we developed an iterative reconstruction method for undersampled radial MRI which (i) is based on a nonlinear optimization, (ii) allows for the incorporation of prior knowledge with use of penalty functions, and (iii) deals with data from multiple coils. The procedure arises as a two-step mechanism which first estimates the coil profiles and then renders a final image that complies with the actual observations. Prior knowledge is introduced by penalizing edges in coil profiles and by a total variation constraint for the final image. The latter condition leads to an effective suppression of undersampling (streaking) artifacts and further adds a certain degree of denoising. Apart from simulations, experimental results for a radial spin-echo MRI sequence are presented for phantoms and human brain in vivo at 2.9 T using 24, 48, and 96 spokes with 256 data samples. In comparison to conventional reconstructions (regridding) the proposed method yielded visually improved image quality in all cases.
Article
Acquisition-weighting improves the localization of MRI experiments. An approach to acquisition-weighting in a purely phase-encoded experiment is presented that is based on a variation of the sampling density in k-space. In contrast to conventional imaging or to accumulation-weighting, where k-space is sampled with uniform increments, density-weighting varies the distance between neighboring sampling points Δk to approximate a given radial weighting function. A fast, noniterative algorithm has been developed to calculate the sampling matrix in one, two, and three dimensions from a radial weighting function w(k), the desired number of scans NA_tot, and the nominal spatial resolution Δx_nom. Density-weighted phase-encoding combines the improved shape of the spatial response function and the high SNR of acquisition-weighting with an extended field of view. The artifact energy that results from aliasing due to a small field of view is substantially reduced. The properties of density-weighting are compared to uniform and to accumulation-weighted phase-encoding in simulations and experiments. Density-weighted 31P 3D chemical shift imaging of the human heart is shown, which demonstrates the superior performance of density-weighted metabolic imaging.
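The idea of spacing samples according to a weighting function can be sketched with inverse-transform placement on the cumulative integral of w(k); the weighting below is an arbitrary stand-in, not the paper's noniterative algorithm.

```python
# Sketch of density-weighted sample placement: local sample spacing is made
# inversely proportional to a weighting w(k) via its cumulative integral.
import numpy as np

N_samples = 64
k = np.linspace(-1.0, 1.0, 1001)
w = np.hanning(k.size) + 0.05                          # example weighting w(k) > 0

cdf = np.cumsum(w)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])              # normalized cumulative weight
targets = (np.arange(N_samples) + 0.5) / N_samples     # equal-area targets
positions = np.interp(targets, cdf, k)                 # dense where w is large

spacing = np.diff(positions)
print("spacing near center vs. edge:", spacing[N_samples // 2 - 1], spacing[0])
```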
Conference Paper
Compressed sensing or compressive sampling (CS) has been receiving a lot of interest as a promising method for signal recovery and sampling. CS problems can be cast as convex problems and then solved by several standard methods, such as interior-point methods, at least for small- and medium-sized problems. In this paper we describe a specialized interior-point method for solving CS problems that uses a preconditioned conjugate gradient method to compute the search step. The method can efficiently solve large CS problems by exploiting fast algorithms for the signal transforms used. The method is demonstrated on a magnetic resonance imaging (MRI) example.
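The convex formulation itself is compact. A small hypothetical instance using the generic modeling tool cvxpy as a stand-in for the "standard methods" mentioned above (the paper's contribution is the specialized solver that scales this far beyond such toy sizes):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)

# Cast CS recovery as a convex program: min ||x||_1  s.t.  ||Ax - y||_2 <= delta.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)),
           [cp.norm(A @ x - y, 2) <= 0.05]).solve()
print(np.linalg.norm(x.value - x0))       # small reconstruction error
```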
Article
Recent results show that a relatively small number of random projections of a signal can contain most of its salient information. It follows that if a signal is compressible in some orthonormal basis, then a very accurate reconstruction can be obtained from random projections. This "compressive sampling" approach is extended here to show that signals can be accurately recovered from random projections contaminated with noise. A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-to-digital (A/D) conversion, and remote wireless sensing are discussed.
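A generic iterative recovery of this flavor can be sketched with hard thresholding (a stand-in illustration; the paper proposes a different, bound-optimization-based iteration):

```python
import numpy as np

def iht(A, y, k, n_iter=300):
    # Iterative hard thresholding: gradient step on ||Ax - y||^2,
    # then keep only the k largest-magnitude coefficients.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x
```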
Article
The Time-Frequency and Time-Scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the Method of Frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the Best Orthogonal Basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and super-resolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation de-noising, and multi-scale edge de-noising. Basis Pursuit in highly ...
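For a real-valued dictionary, BP can be posed directly as a linear program by splitting the coefficients into nonnegative parts, as sketched below with scipy (illustrative; practical BP solvers exploit fast implicit transforms instead of dense matrices):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    # min ||a||_1 s.t. Phi a = s, rewritten with a = u - v, u, v >= 0:
    # min 1^T (u + v) s.t. Phi (u - v) = s.
    m, n = Phi.shape
    res = linprog(np.ones(2 * n),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=s,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]
```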
Article
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable-knot spline, or variable-bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as we...
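Shrinkage of empirical wavelet coefficients is easy to sketch. The snippet below uses PyWavelets with the universal threshold σ√(2 log n) of the companion VisuShrink procedure as a stand-in (RiskShrink proper uses minimax-optimal thresholds):

```python
import numpy as np
import pywt

def wavelet_shrink(y, wavelet="db4", level=4):
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Robust noise estimate from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))
    # Soft-threshold all detail bands; keep the coarse approximation.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(y)]
```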
Article
Suppose we wish to recover a vector x₀ ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax₀ + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x₀ accurately based on the data y? To recover x₀, we consider the solution x♯ to the ℓ1-regularization problem min ‖x‖₁ subject to ‖Ax − y‖₂ ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x₀ is sufficiently sparse, then the solution is within the noise level: ‖x♯ − x₀‖₂ ≤ C·ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A provided that the number of nonzeros of x₀ is of about the same order as the number of observations. As a second instance, suppose one observes a few Fourier samples of x₀; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)⁶. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
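The stability claim is easy to probe numerically; a toy check using partial DCT rows as a stand-in for the Fourier ensemble (sizes, seed, and noise level are arbitrary, and the convex program is the same one sketched earlier):

```python
import numpy as np
import cvxpy as cp
from scipy.fft import dct

rng = np.random.default_rng(1)
m, n, k = 256, 64, 5                          # signal length, samples, nonzeros
C = dct(np.eye(m), norm="ortho", axis=0)      # orthonormal DCT matrix
A = C[rng.choice(m, n, replace=False), :]     # n random DCT rows
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = 1.0
e = 0.01 * rng.standard_normal(n)
y = A @ x0 + e
eps = np.linalg.norm(e)

x = cp.Variable(m)
cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= eps]).solve()
print(np.linalg.norm(x.value - x0), "vs noise level", eps)  # same order of magnitude
```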
Article
We consider linear inverse problems where the solution is assumed to have a sparse expansion in an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓ^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such ℓ^p-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
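For p = 1 the proposed iteration is precisely a Landweber step followed by soft thresholding; a minimal sketch (assuming a real matrix A scaled so that ‖A‖ < 1, as the convergence result requires):

```python
import numpy as np

def thresholded_landweber(A, y, lam, n_iter=500):
    # Minimizes ||A x - y||^2 / 2 + lam * ||x||_1:
    # Landweber step, then componentwise soft shrinkage.
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x), lam)
    return x
```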
Correction for artifacts in 3D angularly undersampled MR projection reconstruction
  • Fain SB
  • Block W
  • Mistretta CA
Fain SB, Block W, Mistretta CA. Correction for artifacts in 3D angularly undersampled MR projection reconstruction. In: Proceedings of the 9th Annual Meeting of ISMRM, Glasgow, 2001. p. 759.
JPEG 2000: Image compression fundamentals, standards and practice. Kluwer International Series in Engineering and Computer Science
  • Taubman DS
  • Marcellin MW