Nearly Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery

Dept. of Stat., Columbia Univ., New York, NY, USA
IEEE Transactions on Information Theory, 08/2011; 57(7):4672-4679. DOI: 10.1109/TIT.2011.2145670


Consider the n-dimensional vector y = Xβ + ε, where β ∈ ℝ^p has only k nonzero entries and ε ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints, corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a nonasymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability decays exponentially in the difference between ‖Xβ‖₂ and the ℓ₂-norm of the projection of Xβ onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is randomly drawn from a Gaussian ensemble, we calculate a nonasymptotic upper bound on the probability that the maximum-likelihood decoder does not declare (partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee almost-sure recovery of the true sparsity pattern. The required growth rate of the sample size n matches the growth rate of previously established necessary conditions.
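
To make the decoder concrete, the following minimal Python sketch (my illustration, not code from the paper; the dimensions, noise level, and support are assumed values) implements exhaustive maximum-likelihood support recovery: among all supports of size k it selects the one whose column span captures the most energy of y, the same projection quantity that governs the error exponent above.

```python
# A minimal sketch (my illustration, not code from the paper): exhaustive
# maximum-likelihood support recovery for y = X beta + eps with k-sparse beta.
# The dimensions n, p, k, the noise level, and the support are assumptions.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma = 30, 10, 2, 0.1

X = rng.standard_normal((n, p))            # Gaussian measurement ensemble
beta = np.zeros(p)
beta[[3, 7]] = [1.0, -1.5]                 # k nonzero entries at indices 3, 7
y = X @ beta + sigma * rng.standard_normal(n)

def projected_energy(support):
    """Squared l2-norm of y projected onto the span of the columns X_S."""
    Xs = X[:, list(support)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.linalg.norm(Xs @ coef) ** 2)

# The ML decoder picks the size-k support whose column span captures the
# most energy of y (equivalently, minimizes the residual sum of squares).
ml_support = max(combinations(range(p), k), key=projected_energy)
print(ml_support)   # with high probability this prints (3, 7)
```

The search over all p-choose-k supports is combinatorial, which is why this decoder serves as an information-theoretic benchmark rather than a practical algorithm.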

  • "These necessary conditions are tightened in [17], and a comparison between dense and sparse ensembles is performed. In [18], sufficient conditions are derived and shown to be tight in a scaling-law sense by comparison to the necessary conditions of [17]. For information-theoretic limits with respect to other performance metrics, see [13], [19], [20]."
    ABSTRACT: This paper considers the problem of sparse signal recovery when the decoder has prior information on the sparsity pattern of the data. The data vector has a randomly generated sparsity pattern in which each entry is non-zero with a known probability. Given knowledge of these probabilities, the decoder attempts to recover the data vector based on random noisy projections. Information-theoretic limits on the number of measurements needed to recover the support set perfectly are given, and it is shown that significantly fewer measurements can be used if the prior distribution is sufficiently non-uniform. Furthermore, extensions of Basis Pursuit, LASSO, and Orthogonal Matching Pursuit which exploit the prior information are presented. The improved performance of these methods over their standard counterparts is demonstrated using simulations. (A toy weighted-ℓ1 sketch of this idea appears after this list.)
    IEEE Transactions on Signal Processing, 01/2013; 61(2):427. DOI: 10.1109/TSP.2012.2225051
  • ABSTRACT: Consider a Bernoulli-Gaussian complex n-vector whose components are X_i B_i, with B_i ~ Bernoulli(q) and X_i ~ CN(0, σ²), iid across i and mutually independent. This random q-sparse vector is multiplied by a random matrix U, and a randomly chosen subset of the components of the resulting vector, of average size np with p ∈ [0, 1], is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where U is typically the identity or a matrix with iid components, to allow U that satisfies a certain freeness condition, which encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate as n → ∞. (A simulation sketch of this measurement model appears after this list.)
    2011 IEEE International Symposium on Information Theory (ISIT); 09/2011
  • ABSTRACT: In this paper, we introduce a new support recovery algorithm from noisy measurements called Bayesian hypothesis test via belief propagation (BHT-BP). BHT-BP focuses on sparse support recovery rather than sparse signal estimation. The key idea behind BHT-BP is to detect the support set of a sparse vector using a hypothesis test whose posterior densities are obtained with the aid of belief propagation (BP). Since BP provides precise posterior information using the noise statistics, BHT-BP can recover the support robustly in the presence of measurement noise. In addition, BHT-BP has low computational cost compared to other algorithms thanks to its use of BP. We show the support recovery performance of BHT-BP as a function of the parameters (N, M, K, SNR) and compare BHT-BP to OMP and Lasso via numerical results. (A simplified hypothesis-test sketch appears after this list.)
    IEEE Statistical Signal Processing Workshop (SSP) 2012, Ann Arbor, MI; 08/2012
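
For the first citing paper (prior information on the sparsity pattern), the specific extensions of Basis Pursuit, LASSO, and OMP are defined in that paper; as a rough, hypothetical illustration of the underlying idea, the sketch below runs a weighted LASSO via ISTA in which the ℓ1 weight on entry i is w_i = -log q_i, so entries with a high prior probability q_i of being active are penalized less. The weighting rule and the parameter defaults are my assumptions.

```python
# A hypothetical illustration (not the cited paper's decoder): weighted
# LASSO via ISTA, where prior activity probabilities q_i soften the l1
# penalty on entries believed more likely to be nonzero.
import numpy as np

def weighted_lasso_ista(X, y, q, lam=0.1, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam * sum_i w_i*|b_i| by ISTA,
    with the assumed weighting w_i = -log(q_i): q_i close to 1 means
    entry i is believed almost surely active and is barely penalized."""
    w = -np.log(q)                        # per-entry penalty weights
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)             # gradient of the smooth term
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft-threshold
    return b
```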
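For the second citing paper, the measurement model can be simulated directly. The sketch below (parameter values are illustrative assumptions) draws the Bernoulli-Gaussian input, builds a Haar-distributed unitary U from the QR decomposition of a complex Gaussian matrix, and observes a random subset of the output in additive noise; the replica analysis itself is not reproduced.

```python
# A sketch of the measurement model described in the abstract above
# (all parameter values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n, q, p = 200, 0.1, 0.5          # dimension, sparsity rate, sampling rate
sigma2, noise_var = 1.0, 0.01    # signal and noise variances

# Bernoulli-Gaussian input: components X_i * B_i with B_i ~ Bernoulli(q)
# and X_i ~ CN(0, sigma2), iid across i and mutually independent.
B = rng.random(n) < q
Xg = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x = Xg * B

# Haar-distributed unitary U via QR of a complex Gaussian matrix,
# with the standard phase correction so the distribution is truly Haar.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
d = np.diagonal(R)
U = Q * (d / np.abs(d))

# Observe a random subset of the entries of U x (average size n*p)
# in additive complex Gaussian noise.
keep = rng.random(n) < p
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = (U @ x + noise)[keep]
```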
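For the third citing paper, BHT-BP obtains per-entry posteriors by belief propagation, which is beyond a short sketch; as a much-simplified, hypothetical stand-in, the code below applies the same kind of per-entry Bayesian hypothesis test to a crude pseudo-inverse estimate, with the prior activity probability and the variances treated as known inputs.

```python
# A much-simplified stand-in for BHT-BP (hypothetical; the real algorithm
# gets per-entry posteriors from belief propagation, not a pseudo-inverse):
# a per-entry Bayesian hypothesis test for "x_i is nonzero".
import numpy as np

def support_by_hypothesis_test(A, y, prior_q, sig_x2, tau2):
    """Test H1: x_i ~ N(0, sig_x2) against H0: x_i = 0 on the crude
    proxy z = pinv(A) @ y, treating each z_i as x_i plus N(0, tau2)
    noise; prior_q, sig_x2, and tau2 are assumed known inputs."""
    z = np.linalg.pinv(A) @ y
    v0, v1 = tau2, tau2 + sig_x2
    ll0 = -0.5 * z**2 / v0 - 0.5 * np.log(2 * np.pi * v0)   # log-lik under H0
    ll1 = -0.5 * z**2 / v1 - 0.5 * np.log(2 * np.pi * v1)   # log-lik under H1
    # MAP rule: declare i in the support when the posterior odds favor H1.
    return ll1 + np.log(prior_q) > ll0 + np.log(1 - prior_q)
```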