Conference Paper

K-SVD for HARDI denoising

Lab. of Neuro Imaging, Univ. of California, Los Angeles, CA, USA
DOI: 10.1109/ISBI.2011.5872757 · Conference: 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro
Source: IEEE Xplore


Noise is an important concern in high-angular resolution diffusion imaging studies because it can lead to errors in downstream analyses of white matter structure. To address this issue, we investigate a new approach for denoising diffusion-weighted data sets based on the K-SVD algorithm. We analyze its characteristics using both simulated and biological data and compare its performance with existing methods. Our results show that K-SVD provides robust and effective noise reduction and is practical for use in high-volume applications.
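
The abstract names K-SVD but gives no implementation detail, so the following is a minimal, illustrative sketch of generic K-SVD denoising in NumPy, not the authors' code. It alternates sparse coding by orthogonal matching pursuit (OMP) with rank-1 SVD updates of each dictionary atom, and takes the sparse reconstruction D @ X as the denoised estimate; the atom count, sparsity level, and iteration count are assumptions chosen for readability.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: code x over dictionary D using at most k atoms."""
    residual, idx = x.copy(), []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[idx] = -1.0                                       # do not reselect an atom
        idx.append(int(np.argmax(corr)))                       # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)   # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def ksvd_denoise(Y, n_atoms=64, sparsity=4, n_iter=10, seed=0):
    """Y: (signal_dim, n_signals) noisy signals, with n_signals >= n_atoms.
    Returns the denoised estimate D @ X after K-SVD dictionary learning."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12      # unit-norm atoms
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])  # sparse coding
        for j in range(n_atoms):                               # dictionary update
            users = np.nonzero(X[j])[0]                        # signals using atom j
            if users.size == 0:
                continue
            # Error matrix with atom j's contribution removed, restricted to users.
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j], X[j, users] = U[:, 0], s[0] * Vt[0]       # rank-1 refit
    return D @ X
```

For HARDI data, each column of Y would typically hold one voxel's vector of diffusion-weighted measurements (or a vectorized image patch); that arrangement is an assumption here, not a detail taken from the paper.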

  • ABSTRACT: Diffusion tensor imaging (DTI) is regarded as the best non-invasive imaging modality for providing anatomical information on white-matter fiber bundles. However, the Gaussian noise introduced into diffusion tensor images can seriously affect tensor calculation and fiber tracking, and many denoising methods have been proposed to reduce its effects. In this paper, a shearlet-based denoising strategy is introduced. To evaluate how well the proposed method accounts for the Gaussian noise introduced into the images, the peak signal-to-noise ratio (PSNR), signal-to-mean-squared-error ratio (SMSE), and edge-keeping index (Beta) metrics are adopted. Experimental results on both synthetic and real data indicate the good performance of the proposed filter. (A minimal PSNR sketch appears after this list.)
    2012 11th International Conference on Signal Processing (ICSP 2012); 10/2012
  • ABSTRACT: Diffusion spectrum imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. Acquisition can be accelerated by undersampling in q-space, followed by image reconstruction that exploits prior knowledge about the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA) and performs the reconstruction in the PCA space. The second applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary, which can either be obtained with the K-SVD algorithm or simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, with reconstruction quality comparable to that of the dictionary-based CS algorithm. (A sketch of the regularized-pseudoinverse step appears after this list.)
    IEEE Transactions on Medical Imaging 07/2013; 32(11). DOI:10.1109/TMI.2013.2271707
  • ABSTRACT: Real-world experiments are becoming increasingly complex, requiring techniques capable of tracking this complexity. Signal-based measurements are often used to capture it: a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. Within a signal, the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Because a signal contains multiple information points, it is rich in information but generally complex to apprehend. Multivariate analysis (MA) has profoundly transformed signal analysis by allowing gross simplification of the tangled web of variation; it is also much more robust to the influence of noise than univariate methods of analysis. In recent years there has been growing awareness that the nature of multivariate methods allows their benefits to be exploited for purposes other than data analysis, such as pre-processing of signals to eliminate irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction appropriately can allow high-fidelity denoising (removal of irreproducible non-signal), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals), and the standardisation of signal-amplitude fluctuations. At present the field is relatively small, but the possibilities for much wider application are considerable. Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable where the data are noisy or the variations in the signals are complex. As science probes datasets in less and less tightly controlled situations, the ability to provide high-fidelity corrections in a flexible manner becomes more critical, and multivariate signal processing has the potential to provide many solutions. (A minimal PCA-denoising sketch appears after this list.)
    Oil & Gas Science and Technology 01/2014; 69(2). DOI:10.2516/ogst/2013185
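
The shearlet abstract above evaluates denoising with PSNR, SMSE, and an edge-keeping index. As a small, hedged illustration, here is the PSNR computation only; the exact SMSE and Beta definitions used by those authors are not reproduced, and taking the reference image's maximum as the "peak" is an assumption.

```python
import numpy as np

def psnr(reference, estimate):
    """Peak signal-to-noise ratio in dB between a reference image and an estimate."""
    ref = np.asarray(reference, dtype=float)
    mse = np.mean((ref - np.asarray(estimate, dtype=float)) ** 2)
    peak = float(ref.max())                       # assumed peak; e.g. 255 for 8-bit images
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```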
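
The DSI abstract attributes its speed to analytical solutions; a plausible reading of its second method is the closed form of the Tikhonov-regularized, dictionary-based pseudoinverse sketched below, where y = A @ p is the undersampled q-space measurement of a pdf p represented as p ≈ D @ a. The names A, D, and lam are illustrative, not the paper's notation.

```python
import numpy as np

def tikhonov_dictionary_recon(A, D, lam):
    """Return R such that p_hat = R @ y solves min_a ||y - A D a||^2 + lam ||a||^2,
    followed by p_hat = D a. A: (m, dim) sampling matrix, D: (dim, n_atoms) dictionary."""
    M = A @ D                                   # measurement of each dictionary atom
    G = M.T @ M + lam * np.eye(D.shape[1])      # regularized normal matrix
    return D @ np.linalg.solve(G, M.T)          # D (M^T M + lam I)^{-1} M^T
```

Because R depends only on the sampling pattern and the dictionary, it can be precomputed once and applied per voxel as a single matrix-vector product, which is consistent with the seconds-per-slice reconstruction times the abstract reports.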
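
Finally, the multivariate-analysis abstract describes denoising via multivariate data reduction. A minimal sketch of that idea, assuming PCA truncation with an analyst-chosen number of retained components k, is:

```python
import numpy as np

def pca_denoise(signals, k):
    """signals: (n_samples, signal_length). Rank-k PCA reconstruction:
    keep the k highest-variance components, discard the rest as noise."""
    mean = signals.mean(axis=0)
    U, s, Vt = np.linalg.svd(signals - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]   # truncated reconstruction plus mean
```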