Article

Nearly Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery

Department of Statistics, Columbia University, New York, NY, USA
IEEE Transactions on Information Theory, 08/2011. DOI: 10.1109/TIT.2011.2145670
Source: IEEE Xplore

ABSTRACT: Consider the n-dimensional vector y = Xβ + ε, where β ∈ ℝ^p has only k nonzero entries and ε ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints, corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a nonasymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability decays exponentially in the difference between ||Xβ||₂ and the ℓ₂-norm of the projection of Xβ onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is randomly drawn from a Gaussian ensemble, we calculate a nonasymptotic upper bound on the probability of the maximum-likelihood decoder not declaring (even partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee recovery of the true sparsity pattern almost surely. We find that the required growth rate of the sample size n matches the growth rate in previously established necessary conditions.
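
A minimal Python sketch of the exhaustive maximum-likelihood decoder analyzed above, assuming p and k are small enough for a combinatorial search; the problem sizes and variable names are illustrative, not taken from the paper. Under Gaussian noise, the ML decoder picks the size-k support whose column span captures the most energy of y.

    import itertools
    import numpy as np

    def ml_support(y, X, k):
        """Exhaustive ML estimate of the sparsity pattern: maximize
        ||P_S y||_2 over all supports S of size k, where P_S projects
        onto the span of the columns of X indexed by S."""
        best_S, best_energy = None, -np.inf
        for S in itertools.combinations(range(X.shape[1]), k):
            XS = X[:, list(S)]
            # Projection of y onto the column span of X_S via least squares.
            coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
            energy = np.linalg.norm(XS @ coef) ** 2
            if energy > best_energy:
                best_S, best_energy = set(S), energy
        return best_S

    # Toy instance: Gaussian ensemble X, k = 2 nonzero entries.
    rng = np.random.default_rng(0)
    n, p, k = 32, 12, 2
    X = rng.standard_normal((n, p)) / np.sqrt(n)
    beta = np.zeros(p)
    beta[[3, 7]] = [2.0, -1.5]
    y = X @ beta + 0.1 * rng.standard_normal(n)
    print(ml_support(y, X, k))  # expect {3, 7} at this noise level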

Citing articles

  • ABSTRACT: Recent work in the area of compressed sensing mainly focuses on perfect recovery of the entire support of sparse signals. However, partial support recovery, where part of the signal support is correctly recovered, may be adequate in many practical scenarios. In this study, in the high-dimensional and noisy setting, the authors derive the probability of partial support recovery for the optimal maximum-likelihood (ML) algorithm. When a large part of the support is available, the asymptotic mean-square error (MSE) of the reconstructed signal is also derived. The simulation results characterise the asymptotic performance of the ML algorithm for partial support recovery and show that there exists a signal-to-noise ratio (SNR) threshold beyond which increasing the SNR brings no appreciable MSE gain. (A toy numerical sketch of this setup follows the list below.)
    IET Signal Processing 04/2014; 8(2):188-201. DOI: 10.1049/iet-spr.2011.0205
  • ABSTRACT: Repurposing tools and intuitions from Shannon theory, we present fundamental limits on the reliable classification of linear and affine subspaces from noisy linear features. Recognizing a syntactic equivalence between discrimination among subspaces and communication over vector wireless channels, we propose two Shannon-inspired measures to characterize asymptotic classifier performance. First, we define the classification capacity, which characterizes necessary and sufficient relationships between the signal dimension, the number of features, and the number of classes to be discerned as all three quantities approach infinity. Second, we define the diversity-discrimination tradeoff, which, by analogy with the diversity-multiplexing tradeoff of fading vector channels, characterizes relationships between the number of discernible classes and the misclassification probability as the signal-to-noise ratio approaches infinity. We derive inner and outer bounds on these measures, which are tight in many regimes. We further study the impact of feature design on the error performance. Numerical results, including a face recognition application, validate the results in practice. (A nearest-subspace classification sketch also follows the list.)
    IEEE Transactions on Information Theory 04/2014; 61(4). DOI: 10.1109/ISIT.2014.6875387
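
Following up on the first citing article: a toy Monte Carlo sketch of partial support recovery and reconstruction MSE versus SNR, reusing ml_support from the sketch above. The SNR grid, trial count, and problem sizes are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    # Assumes ml_support from the earlier sketch is in scope.

    def partial_recovery_mse(snr_db, trials=50, n=32, p=12, k=2, seed=1):
        """Average fraction of the true support recovered by the ML
        decoder, and MSE of least-squares reconstruction on the
        estimated support."""
        rng = np.random.default_rng(seed)
        overlap = mse = 0.0
        for _ in range(trials):
            X = rng.standard_normal((n, p)) / np.sqrt(n)
            support = set(rng.choice(p, size=k, replace=False).tolist())
            beta = np.zeros(p)
            beta[sorted(support)] = 1.0
            signal = X @ beta
            # Scale the noise so that ||signal||^2 / E||noise||^2 matches the SNR.
            noise_std = np.linalg.norm(signal) / np.sqrt(n) * 10 ** (-snr_db / 20)
            y = signal + noise_std * rng.standard_normal(n)
            S_hat = ml_support(y, X, k)
            overlap += len(S_hat & support) / k
            # Least-squares reconstruction restricted to the estimated support.
            beta_hat = np.zeros(p)
            cols = sorted(S_hat)
            beta_hat[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            mse += np.mean((beta_hat - beta) ** 2)
        return overlap / trials, mse / trials

    for snr in (0, 10, 20, 30):
        frac, err = partial_recovery_mse(snr)
        print(f"SNR {snr:2d} dB: support overlap {frac:.2f}, MSE {err:.4f}")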
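And for the second citing article: a minimal nearest-subspace classifier sketch, in which signals drawn from one of several low-dimensional subspaces are observed through a noisy linear feature map and classified by the smallest projection residual in feature space. This is only the basic decision rule implied by the setup, not the paper's capacity analysis; all dimensions and names are illustrative assumptions.

    import numpy as np

    def classify(features, A, bases):
        """Pick the subspace whose image under the feature map A best
        explains the observed features (smallest least-squares residual)."""
        residuals = []
        for U in bases:
            AU = A @ U  # feature-space image of the subspace basis
            coef, *_ = np.linalg.lstsq(AU, features, rcond=None)
            residuals.append(np.linalg.norm(features - AU @ coef))
        return int(np.argmin(residuals))

    rng = np.random.default_rng(2)
    dim, n_feat, sub_dim, n_classes = 20, 10, 2, 4
    # Random orthonormal bases for the candidate subspaces.
    bases = [np.linalg.qr(rng.standard_normal((dim, sub_dim)))[0]
             for _ in range(n_classes)]
    A = rng.standard_normal((n_feat, dim)) / np.sqrt(n_feat)  # linear feature map
    x = bases[1] @ rng.standard_normal(sub_dim)  # signal from subspace 1
    features = A @ x + 0.05 * rng.standard_normal(n_feat)
    print(classify(features, A, bases))  # expect 1 at this noise level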
