Conference Paper

Fundamental limits of almost lossless analog compression

Yihong Wu, Sergio Verdú
Dept. of Electr. Eng., Princeton Univ., Princeton, NJ, USA
DOI: 10.1109/ISIT.2009.5205734 Conference: 2009 IEEE International Symposium on Information Theory (ISIT 2009)
Source: IEEE Xplore

ABSTRACT In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing can be viewed as a lossless coding strategy for analog sources, realized by multiplication by real-valued matrices. In this paper we study almost lossless analog compression of analog memoryless sources in an information-theoretic framework, in which the compressor is not constrained to linear transformations but must satisfy regularity conditions such as Lipschitz continuity. The fundamental limit is shown to be the information dimension proposed by Rényi in 1959.
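The Rényi information dimension mentioned in the abstract is d(X) = lim_{m→∞} H(⟨X⟩_m)/log m, where ⟨X⟩_m is X quantized to resolution 1/m; for a "sparse" source that is zero with probability 1 − γ and continuous with probability γ, it equals γ. As a hedged illustration (not taken from the paper), the following Python sketch estimates d(X) by Monte Carlo, using the slope of the quantized entropy across two resolutions:

```python
import numpy as np

# Sketch: Monte Carlo estimate of Renyi information dimension for the
# mixture X = 0 with prob 1 - gamma, X ~ N(0,1) with prob gamma.
# The true information dimension of this source is gamma.
rng = np.random.default_rng(0)

def empirical_entropy_bits(samples):
    """Plug-in entropy estimate (in bits) of a discrete sample."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def quantized_entropy(x, log2_m):
    """Entropy of floor(m * x) with m = 2**log2_m."""
    m = 2 ** log2_m
    return empirical_entropy_bits(np.floor(m * x).astype(np.int64))

gamma, n = 0.3, 1_000_000
mask = rng.random(n) < gamma                  # which samples are "active"
x = np.where(mask, rng.standard_normal(n), 0.0)

# H(<X>_m) grows like gamma * log2(m) + const, so the slope of the
# quantized entropy between two resolutions estimates the dimension.
j1, j2 = 8, 12
d_hat = (quantized_entropy(x, j2) - quantized_entropy(x, j1)) / (j2 - j1)
print(f"estimated information dimension: {d_hat:.3f}  (true value {gamma})")
```

The two-resolution slope is used instead of H(⟨X⟩_m)/log m at a single m because the additive constant in H(⟨X⟩_m) makes the single-resolution ratio converge slowly.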

  • ABSTRACT: In this paper, we study the problem of compressive sensing for sparse signal vectors generated over graphs. The signal vectors to be recovered are sparse vectors representing the parameters of the links of the graphs. The collective additive measurements we are allowed to take must follow connected paths over the underlying graphs. For a sufficiently connected graph with n nodes, it is shown that O(k log(n)) path measurements suffice to recover any sparse link vector with no more than k nonzero elements, even though the measurements must follow the graph path constraints.
    01/2010;
  • ABSTRACT: In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing the properties of the edges of a graph. Unlike existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with n nodes, it is shown that, using O(k log(n)) path measurements, we are able to recover any k-sparse link vector (with no more than k nonzero elements), even though the measurements have to follow the graph path constraints. We further show that computationally efficient ℓ1 minimization provides theoretical guarantees for inferring such k-sparse vectors from O(k log(n)) path measurements on the graph.
    INFOCOM, 2011 Proceedings IEEE; 05/2011
  • ABSTRACT: In this paper, we present a new approach to the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressive sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension n). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration ℓ in the asymptotic regime where n → ∞. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied, which necessitates a number of non-trivial modifications to the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of ℓ. Concentration results are devised to demonstrate that the performance of the recovery algorithms, applied to any choice of input signal over any realization of the sensing matrix, closely follows the deterministic results of the analysis. Simulation results are also provided that demonstrate that the proposed asymptotic analysis matches the performance of the recovery algorithms for large but finite values of n. Compared to the existing technique for analyzing NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate.
    Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on; 09/2011
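The graph-based abstracts above both rest on the standard basis-pursuit recovery step: finding a k-sparse vector from m ~ O(k log n) linear measurements by ℓ1 minimization. As a hedged sketch (not the papers' method: it uses a generic Gaussian measurement matrix rather than graph-path-constrained measurements, and SciPy's LP solver), the ℓ1 step itself can be written as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: recover a k-sparse vector from m linear measurements by
# basis pursuit, min ||x||_1 s.t. Ax = b.  A Gaussian A stands in for
# the graph-path measurement matrices of the cited papers.
rng = np.random.default_rng(1)
n, k = 100, 3
m = 40                                   # roughly O(k log n) measurements

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# LP formulation: write x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u) + sum(v) at the optimum, and Au - Av = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With these parameters (k = 3 nonzeros, m = 40 measurements, n = 100), basis pursuit recovers the sparse vector exactly up to solver tolerance with high probability.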
