Conference Paper

# Remote Source Coding Under Gaussian Noise: Dueling Roles of Power and Entropy-Power


## No full-text available

... For arbitrary discrete source distributions and the logarithmic loss distortion measure, Courtade and Weissman completely characterized the CEO rate region in [13]. Moreover, asymptotic analyses for an infinite number of sensors have been performed in [14], [15]. Berger et al. investigated the error-rate performance for a discrete source with the Hamming distance as a distortion measure and exhibited an inevitable loss due to non-cooperating sensors [14]. A scaling law on the sum-rate distortion function for arbitrary distortion measures has been derived in [15]. Despite the large recent progress, rate regions have been characterized only for particular distributions of the relevant signal and specific distortion measures and are generally still unknown. ...
... which has to be maximized. Pursuing the greedy optimization approach, the objective in (15) can be decomposed into M target functions being successively maximized. ...
Article
Full-text available
This paper addresses the optimization of distributed compression in a sensor network. A direct communication among the sensors is not possible, so that noisy measurements of a single relevant signal have to be locally compressed in order to meet the rate constraints of the communication links to a common receiver. This scenario is widely known as the Chief Executive Officer (CEO) problem and represents a long-standing problem in information theory. In recent years significant progress has been achieved, and the rate region has been completely characterized for specific distributions of involved processes and distortion measures. While algorithmic solutions of the CEO problem are principally known, their practical implementation quickly becomes computationally challenging. In this contribution, an efficient greedy algorithm to determine feasible solutions of the CEO problem is derived using the information bottleneck (IB) approach. Following the Wyner-Ziv coding principle, the quantizers are successively designed using already optimized quantizer mappings as side-information. However, processing this side-information in the optimization algorithm becomes a major bottleneck because the memory complexity grows exponentially with the number of sensors. Therefore, a sequential compression scheme leading to a compact representation of the side-information and ensuring moderate memory requirements even for larger networks is introduced. This internal compression is optimized again by means of the IB method. Numerical results demonstrate that the overall loss in terms of relevant mutual information can be made sufficiently small even with a significant compression of the side-information. The performance is compared to separately optimized quantizers and a centralized quantization. Moreover, the influence of the optimization order for asymmetric scenarios is discussed.
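The successive, greedy flavor of the design described in this abstract can be illustrated with a toy quantizer that assigns observations to outputs one at a time, each time keeping the assignment that maximizes the relevant mutual information I(X;U). This is a minimal sketch under illustrative assumptions, not the authors' algorithm: the source distribution, the output cardinality, and the assignment order are all hypothetical choices.

```python
from math import log2

def mutual_info(pxy):
    # I(X;U) from a possibly unnormalized joint distribution {(x, u): p}
    total = sum(pxy.values())
    px, pu = {}, {}
    for (x, u), p in pxy.items():
        px[x] = px.get(x, 0.0) + p / total
        pu[u] = pu.get(u, 0.0) + p / total
    return sum((p / total) * log2((p / total) / (px[x] * pu[u]))
               for (x, u), p in pxy.items() if p > 0)

def greedy_quantizer(pxy, n_levels):
    # Greedily map each observation y to a quantizer output u, each time
    # picking the assignment that maximizes I(X;U) over the observations
    # mapped so far (no backtracking, so the result may be suboptimal).
    mapping = {}
    for y in sorted({y for _, y in pxy}):
        best_u, best_score = 0, float("-inf")
        for u in range(n_levels):
            mapping[y] = u
            pxu = {}
            for (x, yy), p in pxy.items():
                if yy in mapping:
                    k = (x, mapping[yy])
                    pxu[k] = pxu.get(k, 0.0) + p
            score = mutual_info(pxu)
            if score > best_score:
                best_score, best_u = score, u
        mapping[y] = best_u
    return mapping

# Toy setup: binary X, 4-level noisy observation Y (illustrative numbers)
pxy = {(0, y): 0.5 * w for y, w in enumerate([0.4, 0.3, 0.2, 0.1])}
pxy.update({(1, y): 0.5 * w for y, w in enumerate([0.1, 0.2, 0.3, 0.4])})
mapping = greedy_quantizer(pxy, n_levels=2)  # compress Y to 1 bit
print(mapping)
```

In the multi-sensor setting of the paper, a loop of this kind would be run per sensor, with the previously optimized mappings entering as side-information; the sketch shows only the single-sensor greedy step.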
... and (6b), for some joint distribution (7), where $[x]^+ = \max\{0, x\}$ and $U_{\mathcal{A}} \leftrightarrow Y_{\mathcal{A}} \leftrightarrow X \leftrightarrow Y_{\mathcal{A}^c} \leftrightarrow U_{\mathcal{A}^c}$ forms a Markov chain for any $\mathcal{A} \subseteq \mathcal{I}_L$. ...
... and $D_{\mathrm{th}} = H(X, U_{\mathcal{I}_L}) - H(U_{\mathcal{I}_L}) = H(X) + H(U_{\mathcal{I}_L} \mid X) - H(U_{\mathcal{I}_L}) = H(X) + H(N_{\mathcal{I}_L} \oplus Z_{\mathcal{I}_L}) - H(U_{\mathcal{I}_L}) = 1 + \sum_{l=1}^{L} h_b(P_l) - H(U_{\mathcal{I}_L})$. Generally, there is a compound LDGM-LDPC code in the first and the $L$-th link, and there are two compound codes in the $i$-th link for $i \in [2 : L-1]$; thus there are $2L - 2$ compound codes in total in an $L$-link case. ...
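The chain of identities in this excerpt, for a Bernoulli(1/2) source $X$ with $U_l = X \oplus N_l \oplus Z_l$, can be checked numerically by enumerating the joint distribution. A minimal sketch with illustrative (hypothetical) noise parameters $p_l$ and $q_l$:

```python
from itertools import product
from math import log2

def hb(p):
    # binary entropy function h_b(p) in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def conv(p, q):
    # binary convolution: P(N ⊕ Z = 1) for independent Bernoulli noises
    return p * (1 - q) + (1 - p) * q

# Illustrative noise parameters for L = 2 links (assumed, not from the paper)
p = [0.1, 0.2]    # observation noise P(N_l = 1)
q = [0.05, 0.15]  # test-channel noise P(Z_l = 1)
L = len(p)
P = [conv(p[l], q[l]) for l in range(L)]  # effective flip probability P_l

# Joint distribution of (X, U_1, ..., U_L) with X ~ Bernoulli(1/2)
# and U_l = X ⊕ N_l ⊕ Z_l, independent across links given X
joint = {}
for x in (0, 1):
    for u in product((0, 1), repeat=L):
        pr = 0.5
        for l in range(L):
            pr *= P[l] if u[l] ^ x else 1 - P[l]
        joint[(x,) + u] = pr

def entropy(dist):
    return -sum(pr * log2(pr) for pr in dist.values() if pr > 0)

H_XU = entropy(joint)
marg = {}
for key, pr in joint.items():
    marg[key[1:]] = marg.get(key[1:], 0.0) + pr
H_U = entropy(marg)

lhs = H_XU - H_U                                   # H(X, U) - H(U)
rhs = 1 + sum(hb(P[l]) for l in range(L)) - H_U    # 1 + Σ h_b(P_l) - H(U)
print(f"gap = {abs(lhs - rhs):.2e}")
```

The agreement follows because $U_l$ are conditionally independent given $X$, so $H(U_{\mathcal{I}_L} \mid X) = \sum_l h_b(P_l)$ exactly.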
Article
The L-link binary Chief Executive Officer (CEO) problem under logarithmic loss is investigated in this paper. A quantization splitting technique is applied to convert the problem under consideration to a (2L−1)-step successive Wyner-Ziv (WZ) problem, for which a practical coding scheme is proposed. In the proposed scheme, Low-Density Generator-Matrix (LDGM) codes are used for binary quantization while Low-Density Parity-Check (LDPC) codes are used for syndrome generation; the decoder performs successive decoding based on the received syndromes and produces a soft reconstruction of the remote source. The simulation results indicate that the rate-distortion performance of the proposed scheme can approach the theoretical inner bound based on binary-symmetric test-channel models.
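The syndrome mechanism behind this scheme can be illustrated at toy scale. The sketch below substitutes a (7,4) Hamming code for the paper's LDGM/LDPC constructions (an assumption made purely for readability): the encoder transmits only the 3-bit syndrome instead of the 7-bit sequence, and the decoder recovers the sequence from the syndrome plus correlated side information.

```python
from itertools import product

# Parity-check matrix of the (7,4) Hamming code (illustrative stand-in
# for the LDPC syndrome former in the paper's scheme)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(x):
    # s = H x over GF(2)
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decode(s, side_info):
    # Exhaustive coset search: among all 7-bit words with syndrome s,
    # return the one closest in Hamming distance to the side information
    best, best_d = None, 8
    for cand in product((0, 1), repeat=7):
        if syndrome(cand) == s:
            d = sum(a != b for a, b in zip(cand, side_info))
            if d < best_d:
                best, best_d = cand, d
    return best

x = (1, 0, 1, 1, 0, 0, 1)   # source sequence at the sensor
y = (1, 0, 1, 1, 0, 1, 1)   # decoder side information: x with one bit flipped
s = syndrome(x)             # only 3 bits are transmitted instead of 7
print(decode(s, y) == x)
```

Because the Hamming code has minimum distance 3, any other word in the coset of $s$ is at distance at least 2 from the side information, so the single-bit discrepancy is resolved exactly; practical LDGM/LDPC constructions replace the exhaustive search with message-passing decoding.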
... In contrast to the jointly Gaussian CEO problem, non-Gaussian and non-quadratic CEO problems have received less attention due to their limited analytic tractability compared with the Gaussian case. A non-regular source-observation pair, such as a copula model or truncated Gaussian noise, was considered under quadratic distortion [7], and a general continuous source with additive Gaussian noise was considered under quadratic and general distortion measures [8]. Toward generalizing the source-observation pair, it was shown that Gaussian noise is in fact the worst case [9]. ...
Preprint
The CEO problem has received a lot of attention since Berger et al. first investigated it; however, there are limited results on non-Gaussian models with non-quadratic distortion measures. In this work, we extend the CEO problem to two continuous alphabet settings with general $r$th power of difference and logarithmic distortions, and study asymptotics of distortion as the number of agents and sum rate grow without bound. The first setting is a regular source-observation model, such as jointly Gaussian, with difference distortion, and we show that the distortion decays at $R_{\textsf{sum}}^{-r}$ up to a multiplicative constant. We use sample median estimation following the Berger-Tung scheme for achievability and the Shannon lower bound for the converse. The other setting is a non-regular source-observation model, such as copula or uniform additive noise models, with difference distortion, for which estimation-theoretic regularity conditions do not hold. The optimal decay $R_{\textsf{sum}}^{-2r}$ is obtained for the non-regular model by the midrange estimator following the Berger-Tung scheme for achievability and the Chazan-Zakai-Ziv bound for the converse. Lastly, we provide a condition for the regular model under which quadratic and logarithmic distortions are asymptotically equivalent by an entropy-power relation as the number of agents grows. This proof relies on the Bernstein-von Mises theorem.
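The contrast between the two estimators in this abstract is easy to see in simulation: the sample median is a natural fusion rule for regular (e.g. Gaussian) noise, while the midrange exploits the sharp support edges of non-regular (uniform) noise and converges much faster in the number of agents. A minimal sketch with illustrative sample sizes (all parameters below are assumptions for the demo):

```python
import random
import statistics

random.seed(0)

def mse(noise, estimator, L=51, trials=2000):
    # X fixed at 0 without loss of generality; agent l observes X + noise_l.
    # Returns the Monte Carlo mean squared error of the fused estimate.
    errs = []
    for _ in range(trials):
        obs = [noise() for _ in range(L)]
        errs.append(estimator(obs))
    return sum(e * e for e in errs) / trials

# Regular model: Gaussian noise, fused by the sample median (Berger-Tung style)
mse_median = mse(lambda: random.gauss(0, 1), statistics.median)

# Non-regular model: uniform additive noise, fused by the midrange estimator
mse_midrange = mse(lambda: random.uniform(-1, 1),
                   lambda o: (min(o) + max(o)) / 2)

# The median's MSE shrinks like 1/L, while the midrange's shrinks like
# 1/L^2 under uniform noise, so the midrange wins by a wide margin here
print(f"median MSE:   {mse_median:.5f}")
print(f"midrange MSE: {mse_midrange:.5f}")
```

This mirrors the abstract's distinction: the faster decay available in the non-regular model is exactly what the midrange-based achievability scheme captures.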