Article

Nearly Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery

Department of Statistics, Columbia University, New York, NY, USA
IEEE Transactions on Information Theory, 08/2011. DOI: 10.1109/TIT.2011.2145670
Source: IEEE Xplore

ABSTRACT Consider the n-dimensional vector y = Xβ + ε, where β ∈ ℝ^p has only k nonzero entries and ε ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints, corrupted by noise, in which the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a nonasymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability decays exponentially in the difference between ‖Xβ‖₂ and the ℓ₂-norm of the projection of Xβ onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is drawn at random from a Gaussian ensemble, we calculate a nonasymptotic upper bound on the probability that the maximum-likelihood decoder does not declare (at least partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee almost-sure recovery of the true sparsity pattern. The required growth rate of the sample size n matches the growth rate of previously established necessary conditions.
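To make the central quantity concrete, here is a minimal numpy sketch (the dimensions, seed, and variable names are illustrative, not taken from the paper). It draws a Gaussian measurement matrix, forms Xβ for a k-sparse β, and compares ‖Xβ‖₂ with the ℓ₂-norm of the projection of Xβ onto the columns of X indexed by a deliberately wrong support:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 200, 5

# Gaussian measurement matrix and a k-sparse signal
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
true_support = rng.choice(p, size=k, replace=False)
beta[true_support] = 1.0
signal = X @ beta  # noiseless measurements X beta

# A wrong sparsity pattern: k columns disjoint from the true support
wrong_support = np.setdiff1d(np.arange(p), true_support)[:k]
X_wrong = X[:, wrong_support]

# Orthogonal projection of X beta onto the span of the wrong columns
coef, *_ = np.linalg.lstsq(X_wrong, signal, rcond=None)
projection = X_wrong @ coef

gap = np.linalg.norm(signal) - np.linalg.norm(projection)
print(f"||Xb||_2 = {np.linalg.norm(signal):.3f}, "
      f"||proj||_2 = {np.linalg.norm(projection):.3f}, gap = {gap:.3f}")
```

The larger this gap, the faster the bound on misidentifying that particular support decays.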

Related publications
ABSTRACT: This paper considers the problem of sparse signal recovery when the decoder has prior information on the sparsity pattern of the data. The data vector has a randomly generated sparsity pattern in which each entry is nonzero with some known probability. Given knowledge of these probabilities, the decoder attempts to recover the vector based on random noisy projections. Information-theoretic limits on the number of measurements needed to recover the support set perfectly are given, and it is shown that significantly fewer measurements can be used if the prior distribution is sufficiently non-uniform. Furthermore, extensions of Basis Pursuit, LASSO, and Orthogonal Matching Pursuit that exploit the prior information are presented. The improved performance of these methods over their standard counterparts is demonstrated using simulations.
IEEE Transactions on Signal Processing, 01/2013; 61(2):427.
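One plausible way to exploit such priors is a weighted LASSO in which entries with a low prior probability of being nonzero receive a larger ℓ₁ penalty. The sketch below is not the paper's decoder: the weighting w_i = -log(p_i), the problem sizes, and the solver (plain ISTA, i.e. proximal gradient) are all hypothetical choices for illustration.

```python
import numpy as np

def weighted_lasso_ista(X, y, weights, lam=0.1, n_iter=500):
    """ISTA solver for min_b 0.5*||y - X b||^2 + lam * sum_i weights[i]*|b_i|."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - step * (X.T @ (X @ b - y))  # gradient step on the quadratic term
        thresh = step * lam * weights       # per-coordinate soft-threshold level
        b = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return b

rng = np.random.default_rng(1)
n, p = 40, 100
prior = np.full(p, 0.02)  # most entries are rarely active ...
prior[:10] = 0.5          # ... but the first 10 are likely to be nonzero

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[rng.random(p) < prior] = 1.0
y = X @ beta + 0.05 * rng.standard_normal(n)

weights = -np.log(prior)  # hypothetical weighting: penalize low-prior entries more
b_hat = weighted_lasso_ista(X, y, weights, lam=0.02)
print("estimated support:", np.flatnonzero(np.abs(b_hat) > 1e-3))
```

Because the prior only changes the per-coordinate soft-threshold level, any standard LASSO solver can be adapted the same way.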
ABSTRACT: This paper derives fundamental limits on the performance of compressive classification when the source is a mixture of Gaussians. It provides an asymptotic analysis of a Bhattacharyya-based upper bound on the misclassification probability for the optimal maximum-a-posteriori (MAP) classifier; the bound depends on quantities that are dual to the concepts of diversity order and coding gain in multi-antenna communications. The diversity order of the measurement system determines the rate at which the probability of misclassification decays with signal-to-noise ratio (SNR) in the low-noise regime. The counterpart of coding gain is the measurement gain, which determines the power offset of the probability of misclassification in the low-noise regime. These two quantities make it possible to quantify differences in misclassification probability between random measurement and (diversity-order) optimized measurement. Results are presented for two-class classification problems, first with zero-mean Gaussians and then with nonzero-mean Gaussians, and finally for multiple-class Gaussian classification problems. The behavior of the misclassification probability is revealed to be intimately related to certain fundamental geometric quantities determined by the measurement system, the source, and their interplay. Numerical results, representative of compressive classification of a mixture of Gaussians, demonstrate alignment of the actual misclassification probability with the Bhattacharyya-based upper bound. The connection between misclassification performance and the alignment between source and measurement geometry may be used to guide the design of dictionaries for compressive classification.
01/2014.
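For two Gaussian classes, the Bhattacharyya bound on the MAP error has the closed form P_e ≤ √(π₁π₂) · exp(-B), with B the Bhattacharyya distance between the class-conditional densities. The sketch below evaluates that standard bound under a toy random compressive measurement; the matrix, dimensions, priors, and covariances are made up for illustration, and the paper's diversity-order analysis is not reproduced.

```python
import numpy as np

def bhattacharyya_bound(mu1, cov1, mu2, cov2, prior1=0.5):
    """Bhattacharyya upper bound on the two-class MAP misclassification
    probability for Gaussian classes N(mu1, cov1) and N(mu2, cov2)."""
    cov_avg = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    # Bhattacharyya distance between the two Gaussians
    dist = (0.125 * diff @ np.linalg.solve(cov_avg, diff)
            + 0.5 * np.log(np.linalg.det(cov_avg)
                           / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return np.sqrt(prior1 * (1.0 - prior1)) * np.exp(-dist)

# Toy compressive setup: project 10-D Gaussian classes down to 4 dimensions
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 10)) / np.sqrt(4)  # random measurement matrix
mu1, mu2 = np.zeros(10), np.ones(10)
cov1, cov2 = np.eye(10), 2.0 * np.eye(10)

print("bound after projection:",
      bhattacharyya_bound(A @ mu1, A @ cov1 @ A.T, A @ mu2, A @ cov2 @ A.T))
```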
