Article

Identification Over Additive Noise Channels in the Presence of Feedback


Abstract

We analyze deterministic message identification via channels with non-discrete additive white noise and with a noiseless feedback link, under both average power and peak power constraints. The identification task is part of post-Shannon theory. Considering communication systems beyond Shannon's approach can increase the efficiency of information transmission for certain applications. We propose a coding scheme that first generates infinite common randomness between the sender and the receiver. If the channel has a positive message transmission feedback capacity, then for given error thresholds and sufficiently large blocklength this common randomness is used to construct arbitrarily large deterministic identification codes. In particular, the deterministic identification feedback capacity is infinite regardless of the scaling (exponential, doubly exponential, etc.) chosen for the capacity definition. Clearly, if randomized encoding is allowed in addition to the use of feedback, these results continue to hold.
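To make the two-phase idea concrete, here is a minimal toy simulation in Python. All choices here (the 1-bit quantizer, the seeded hash as tag function, the tag length) are illustrative assumptions for this sketch, not the paper's actual construction, which builds the tags from carefully chosen identification codes once common randomness is available.

```python
import numpy as np

rng = np.random.default_rng(0)

def common_randomness(n, sigma=1.0):
    """Phase 1 (sketch): the sender puts a known symbol (here 0) on the channel,
    so the receiver observes pure additive noise. The noiseless feedback link
    hands the same observations back to the sender, so both ends share them."""
    y = sigma * rng.standard_normal(n)   # non-discrete additive white noise
    return (y > 0).astype(np.uint8)      # 1-bit quantizer -> ~n shared bits

def tag(message, shared_bits, tag_len=16):
    """Phase 2 (sketch): both ends hash the message with the shared randomness.
    A seeded PRNG stands in for the tag functions of a real ID code."""
    seed = int.from_bytes(shared_bits.tobytes(), "little") % (2**32)
    h = np.random.default_rng(seed + message)
    return h.integers(0, 2, size=tag_len)

# The sender identifies message i by transmitting tag(i, K) reliably
# (possible because the transmission feedback capacity is positive);
# the receiver, interested in message j, simply compares tags.
K = common_randomness(n=64)
i, j = 5, 5
accept = np.array_equal(tag(i, K), tag(j, K))
print(accept)  # True iff i == j, up to rare tag collisions for i != j
```

Since the noise is non-discrete, ever finer quantizers extract more shared bits from the same block, which is why the number of identifiable messages can be made arbitrarily large under any fixed scaling of the codebook size.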


... The size of this random experiment can be used to compute the growth of the ID rate. This result has been further emphasized in [15,28], where it has been shown that the ID capacity of the Gaussian channel with noiseless feedback is infinite. This is because the authors of [15,28] provided a coding scheme that generates infinite common randomness between the sender and the receiver. ...
... This result has been further emphasized in [15,28], where it has been shown that the ID capacity of the Gaussian channel with noiseless feedback is infinite. This is because the authors of [15,28] provided a coding scheme that generates infinite common randomness between the sender and the receiver. Here, we want to investigate the effect of feedback on the ID capacity of the system model depicted in Figure 2. Theorem 2 characterizes the ID feedback capacity of the state-dependent channel $W_S$ with noiseless feedback. ...
... where (a) follows from the definition of $\beta$ in (28). This completes the proof of Lemma 1. ...
Article
Full-text available
In the identification (ID) scheme proposed by Ahlswede and Dueck, the receiver’s goal is simply to verify whether a specific message of interest was sent. Unlike Shannon’s transmission codes, which aim for message decoding, ID codes for a discrete memoryless channel (DMC) are far more efficient; their size grows doubly exponentially with the blocklength when randomized encoding is used. This indicates that when the receiver’s objective does not require decoding, the ID paradigm is significantly more efficient than traditional Shannon transmission in terms of both energy consumption and hardware complexity. Further benefits of ID schemes can be realized by leveraging additional resources such as feedback. In this work, we address the problem of joint ID and channel state estimation over a DMC with independent and identically distributed (i.i.d.) state sequences. State estimation functions as the sensing mechanism of the model. Specifically, the sender transmits an ID message over the DMC while simultaneously estimating the channel state through strictly causal observations of the channel output. Importantly, the random channel state is unknown to both the sender and the receiver. For this system model, we present a complete characterization of the ID capacity–distortion function.
... It has been shown that infinite CR can be generated from a Gaussian source [25]. The identification with feedback (IDF) problem via single-user Gaussian channels has been explored in [13], [26], [27], demonstrating the achievability of infinite capacity regardless of the chosen rate scaling. ...
... For instance, the deterministic ID rate of Gaussian channels is infinite with $\phi_1$ and zero with $\phi_3$, i.e., $R^{\phi_1}_{\mathrm{dID}}(W_{\sigma^2}) = \frac{\log N}{n} = +\infty$ and $R^{\phi_3}_{\mathrm{dID}}(W_{\sigma^2}) = \frac{\log \log N}{n} = 0$. However, it has been shown that the IDF capacity of single-user continuous channels with additive noise is infinite regardless of the scaling function used [27]. For our IDF problem via the K-GMAC and SD-K-GMAC, we have the following corollary. ...
Preprint
We investigate message identification over a K-sender Gaussian multiple access channel (K-GMAC). Unlike conventional Shannon transmission codes, the size of randomized identification (ID) codes experiences a doubly exponential growth in the code length. Improvements in the ID approach can be attained through additional resources such as quantum entanglement, common randomness (CR), and feedback. It has been demonstrated that an infinite capacity can be attained for a single-user Gaussian channel with noiseless feedback, irrespective of the chosen rate scaling. We establish the capacity region of both the K-sender Gaussian multiple access channel (K-GMAC) and the K-sender state-dependent Gaussian multiple access channel (K-SD-GMAC) when strictly causal noiseless feedback is available.
... In the Gaussian case, we have shown that the codebook size scales as $2^{(n\log n)R}$, by deriving bounds on the DI capacity. DI for Gaussian channels is also studied in [66], [67]. Furthermore, DI for typical MC channel models, such as the DTPC with inter-symbol interference (ISI) and the Binomial channel, is studied in [59], [68], [69], where the correct scale of the size of the codebook is proved to be $2^{(n\log n)R}$. ...
... The corresponding error probabilities of the identification code (U, D) are given by ... Apart from the conventional exponential and double exponential codebook sizes for transmission [49] and RI [32], respectively, different non-standard codebook sizes are observed for other communication tasks, such as covert communication [80], [81] and covert identification [82] for the binary-input DMC (BIDMC), where the codebook sizes scale as $2^{\sqrt{n}R}$ and $2^{2^{\sqrt{n}R}}$, respectively. For the Gaussian DI channel with feedback [66], the codebook size can be arbitrarily large. ...
Article
Various applications of molecular communications (MC) are event-triggered, and, as a consequence, the prevalent Shannon capacity may not be the right measure for performance assessment. Thus, in this paper, we motivate and establish the identification capacity as an alternative metric. In particular, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC), subject to an average and a peak molecule release rate constraint, which serves as a model for MC systems employing molecule counting receivers. It is established that the number of different messages that can be reliably identified for this channel scales as $2^{(n\log n)R}$, where n and R are the codeword length and coding rate, respectively. Lower and upper bounds on the DI capacity of the DTPC are developed. The obtained large capacity of the DI channel sheds light on the performance of natural DI systems such as natural olfaction, which are known for their extremely large chemical discriminatory power in biology. Furthermore, numerical results for the empirical miss-identification and false identification error rates are provided for finite length codes. This allows us to characterize the behaviour of the error rate for increasing codeword lengths, which complements our theoretically-derived scale for asymptotically large codeword lengths.
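For orientation, the super-exponential scale used in this and several other excerpts can be read as a rate convention (a sketch in our own notation, assuming a codebook $\mathcal{M}(n)$ of blocklength n): the rate is $R = \frac{\log |\mathcal{M}(n)|}{n \log n}$, so a positive rate means $|\mathcal{M}(n)| \sim 2^{(n\log n)R}$. This sits strictly between the exponential scale $2^{nR}$ of deterministic identification over DMCs and the doubly exponential scale $2^{2^{nR}}$ of randomized identification.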
... The size of this random experiment can be used to compute the growth of the ID rate. This result has been further emphasized in [17], [33], where it has been shown that the ID capacity of the Gaussian channel with noiseless feedback is infinite. This is because the authors in [17], [33] provided a coding scheme that generates infinite common randomness between the sender and the receiver. ...
... This result has been further emphasized in [17], [33], where it has been shown that the ID capacity of the Gaussian channel with noiseless feedback is infinite. This is because the authors in [17], [33] provided a coding scheme that generates infinite common randomness between the sender and the receiver. We want to investigate the effect of feedback on the ID capacity of our system model depicted in Fig. 2. Theorem 6 characterizes the ID feedback capacity of the state-dependent channel $W_S$ with noiseless feedback. ...
Preprint
In the identification (ID) scheme proposed by Ahlswede and Dueck, the receiver only checks whether a message of special interest to him has been sent or not. In contrast to Shannon transmission codes, the size of ID codes for a Discrete Memoryless Channel (DMC) grows doubly exponentially fast with the blocklength, if randomized encoding is used. This groundbreaking result makes the ID paradigm more efficient than the classical Shannon transmission in terms of necessary energy and hardware components. Further gains can be achieved by taking advantage of additional resources such as feedback. We study the problem of joint ID and channel state estimation over a DMC with independent and identically distributed (i.i.d.) state sequences. The sender simultaneously sends an ID message over the DMC with a random state and estimates the channel state via a strictly causal channel output. The random channel state is available to neither the sender nor the receiver. For the proposed system model, we establish a lower bound on the ID capacity-distortion function.
... For all the continuous alphabet works, i.e., the Gaussian, Poisson (with/without ISI), and Binomial models [13,20,22-24], a new observation regarding the codebook size is obtained, namely, the codebook size scales super-exponentially in the codeword length, i.e., $\sim 2^{(n\log n)R}$, which differs from the standard exponential [12] and double exponential [5] behavior for the DI and RI problems, respectively. In [26], the DI problem with non-discrete additive white noise and noiseless feedback under both average and peak power constraints is analyzed, where the DI capacity is shown to be infinite regardless of the scaling for the codebook size. The problem of joint identification and channel state estimation for a DMC with independent and identically distributed state sequences is studied in [27]. ...
... where the last inequality exploits the definition of the type I/II error probabilities given in (26) and (27). Hence, the assumption is false, and distinct messages $i_1$ and $i_2$ cannot share the same codeword. ...
Preprint
Full-text available
Deterministic K-Identification (DKI) for the binary symmetric channel (BSC) is developed. A full characterization of the DKI capacity for such a channel, with and without the Hamming weight constraint, is established. As a key finding, we show that for deterministic encoding the number of identifiable messages K may grow exponentially with the codeword length n, i.e., $K = 2^{\kappa n}$, where $\kappa$ is the target identification rate. Furthermore, the eligible region for $\kappa$ as a function of the channel statistics, i.e., the crossover probability, is determined.
... Deterministic codes often have the advantage of simpler implementation, simulation [70,71], and explicit construction [72]. The DI problem for Gaussian channels is also studied in [64,73-75]. Further, DI may be preferred over RI in complexity-constrained applications of MC systems, where the generation of random codewords is challenging. ...
... Next, we bound the probability on the right-hand side of (75) as follows ... Returning to the sum of error probabilities in (74), exploiting the bound (76) leads to $P_1(\cdot) + P_2(\cdot) \ge 1 - \ldots$
Preprint
Full-text available
Several applications of molecular communications (MC) feature an alarm-prompt behavior for which the prevalent Shannon capacity may not be the appropriate performance metric. The identification capacity as an alternative measure for such systems has been motivated and established in the literature. In this paper, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC) with inter-symbol interference (ISI), where the transmitter is restricted to an average and a peak molecule release rate constraint. Such a channel serves as a model for diffusive MC systems featuring long channel impulse responses and employing molecule counting receivers. We derive lower and upper bounds on the DI capacity of the DTPC with ISI when the number of ISI channel taps K may grow with the codeword length n (e.g., due to increasing symbol rate). As a key finding, we establish that for deterministic encoding, the codebook size scales as $2^{(n\log n)R}$ assuming that the number of ISI channel taps scales as $K = 2^{\kappa \log n}$, where R is the coding rate and $\kappa$ is the ISI rate. Moreover, we show that optimizing $\kappa$ leads to an effective identification rate [bits/s] that scales linearly with n, which is in contrast to the typical transmission rate [bits/s] that is independent of n.
... The problems of identification via channels and channel resolvability have been studied extensively [4], [10], [13], [21], [23], [28]-[30], [32]-[34], [37], [39], [41]. For the point-to-point channel, the identification capacity is well understood. ...
Preprint
Full-text available
We study the channel resolvability problem, which is used to prove the strong converse for identification via channels. In the literature, channel resolvability has been solved only by random coding. We prove channel resolvability using the multiplicative weight update algorithm. This is the first approach to channel resolvability using non-random coding.
... The identification capacity of a channel, defined as the maximum second-order exponent of the number of identifiable messages, was shown to equal the Shannon capacity of the channel [2], [4]-[8]. Identification has since been extensively studied under various channel models [9]-[17] and under various input constraints [18]-[20]. ...
Preprint
We study message identification over the binary noisy permutation channel. For discrete memoryless channels (DMCs), the number of identifiable messages grows doubly exponentially, and the maximum second-order exponent is the Shannon capacity of the DMC. We consider a binary noisy permutation channel where the transmitted vector is first permuted by a permutation chosen uniformly at random, and then passed through a binary symmetric channel with crossover probability p. In an earlier work, it was shown that $2^{c_n n}$ messages can be identified over the binary (noiseless) permutation channel if $c_n \rightarrow 0$. For the binary noisy permutation channel, we show that message sizes growing as $2^{\epsilon_n \sqrt{n/\log n}}$ are identifiable for any $\epsilon_n \rightarrow 0$. We also prove a strong converse result showing that for any sequence of identification codes with message size $2^{R_n \sqrt{n} \log n}$, where $R_n \rightarrow \infty$, the sum of Type-I and Type-II error probabilities approaches at least 1 as $n \rightarrow \infty$. Our proof of the strong converse uses the idea of channel resolvability. The channel of interest turns out to be the "binary weight-to-weight (BWW) channel", which captures the effect on the Hamming weight of a vector when the vector is passed through a binary symmetric channel. We propose a novel deterministic quantization scheme for quantization of a distribution over $\{0, 1, \ldots, n\}$ by an M-type input distribution when the distortion is measured on the output distribution (over the BWW channel) in total variation distance. This plays a key role in the converse proof.
... Her discussion was rich with detailed theoretical insights and practical implications, which sparked numerous questions and comments from the audience, leading to a vibrant exchange of ideas. Wafa also introduced feedback as another valuable resource for message identification [11]. She discussed how incorporating feedback mechanisms can further enhance the identification process, offering examples and theoretical backing for her points [12,13]. ...
Preprint
The one-day workshop, held prior to the "ZIF Workshop on Information Theory and Related Fields", provided an excellent opportunity for in-depth discussions on several topics within the field of post-Shannon theory. The agenda covered deterministic and randomized identification, focusing on various methods and algorithms for identifying data or signals deterministically and through randomized processes. It explored the theoretical foundations and practical applications of these techniques. The session on resources for increasing identification capacity examined the different resources and strategies that can be utilized to boost the capacity for identifying information. This included discussions on both hardware and software solutions, as well as innovative approaches to resource allocation and optimization. Participants delved into common randomness generation, essential for various cryptographic protocols and communication systems. The session highlighted recent advancements and practical implementations of common randomness in secure communications. The workshop concluded with a detailed look at the development and practical deployment of identification codes. Experts shared insights on code construction techniques, implementation challenges, and real-world applications in various communication systems. We extend our thanks to the esteemed speakers for their valuable contributions: Caspar von Lengerke, Wafa Labidi, Ilya Vorobyev, Johannes Rosenberger, Jonathan Huffmann, and Pau Colomer. Their presentations and insights significantly enriched the workshop. Additionally, we are grateful to all the participants whose active engagement, constructive comments, and stimulating discussions made the event a success. Your involvement was crucial in fostering a collaborative and intellectually vibrant environment.
... The result holds regardless of the selected scaling for the rate. This result was generalized in [31] to general additive noise channels. ...
Preprint
There is a growing interest in models that extend beyond Shannon's classical transmission scheme, renowned for its channel capacity formula C. One such promising direction is message identification via channels, introduced by Ahlswede and Dueck. Unlike in Shannon's classical model, where the receiver aims to determine which message was sent from a set of M messages, message identification focuses solely on discerning whether a specific message m was transmitted. The encoder can operate deterministically or through randomization, with substantial advantages observed particularly in the latter approach. While Shannon's model allows transmission of $M = 2^{nC}$ messages, Ahlswede and Dueck's model facilitates the identification of $M = 2^{2^{nC}}$ messages, exhibiting a double exponential growth in block length. In their seminal paper, Ahlswede and Dueck established the achievability and introduced a "soft" converse bound. Subsequent works have further refined this, culminating in a strong converse bound, applicable under specific conditions. Watanabe's contributions have notably enhanced the applicability of the converse bound. The aim of this survey is multifaceted: to grasp the formalism and proof techniques outlined in the aforementioned works, analyze Watanabe's converse, and trace the evolution from earlier converses to Watanabe's, emphasizing key similarities and differences that underpin the enhancements. Furthermore, we explore the converse proof for message identification with feedback, also pioneered by Ahlswede and Dueck. By elucidating how their approaches were inspired by preceding proofs, we provide a comprehensive overview. This overview paper seeks to offer readers insights into diverse converse techniques for message identification, with a focal point on the seminal works of Hayashi, Watanabe, and, in the context of feedback, Ahlswede and Dueck.
... The presence of external noise significantly diminishes the efficiency of classical methods. The QCC-based methodology meets the demands of contemporary data analysis, as the issues of heavy-tailed behaviour and external noise in data are increasingly highlighted by researchers across various fields, see, e.g., Ross and Jones (2015); Subramanian et al. (2015) and Wiese et al. (2023); Huang et al. (2022); Comte and Lacour (2011). ...
Preprint
Full-text available
It has been recently shown in Jaworski, P., Jelito, D. and Pitera, M. (2024), 'A note on the equivalence between the conditional uncorrelation and the independence of random variables', Electronic Journal of Statistics 18(1), that one can characterise the independence of random variables via the family of conditional correlations on quantile-induced sets. This effectively shows that the localized linear measure of dependence is able to detect any form of nonlinear dependence for appropriately chosen conditioning sets. In this paper, we expand this concept, focusing on the statistical properties of conditional correlation estimators and their potential usage in serial dependence identification. In particular, we show how to estimate conditional correlations in generic and serial dependence setups, discuss key properties of the related estimators, define the conditional equivalent of the autocorrelation function, and provide a series of examples which prove that the proposed framework could be efficiently used in many practical econometric applications.
... Identification over broadcast channels was investigated in [6], [20], while identification in the presence of feedback over multiple-access channels and broadcast channels was studied in [3]. Identification was studied over Gaussian channels in [22], [15]; over additive noise channels under average and peak power constraints in [30]; over compound channels and arbitrarily varying channels in [1]. Deterministic identification over DMCs with and without input constraints was studied in [21]. ...
Preprint
We study message identification over binary uniform permutation channels. For DMCs, the number of identifiable messages grows doubly exponentially. The identification capacity, the maximum second-order exponent, is known to be the same as the Shannon capacity of a DMC. We consider a binary uniform permutation channel where the transmitted vector is permuted by a permutation chosen uniformly at random. Permutation channels support reliable communication of only polynomially many messages. While this implies a zero second-order identification rate, we prove a soft converse result showing that even non-zero first-order identification rates are not achievable with a power-law decay of error probability for identification over binary uniform permutation channels. To prove the converse, we use a sequence of steps to construct a new identification code with a simpler structure and then use a lower bound on the normalized maximum pairwise intersection of a set system on $\{0, \ldots, n\}$. We provide generalizations for arbitrary alphabet size.
... In [46], [47], Gaussian channels with fast and slow fading and subject to an average power constraint are studied and the codebook size is shown to scale as $2^{(n\log n)R}$. DI is also studied in [62] for Gaussian channels in the presence of feedback and in [63] for general continuous-time channels with infinite alphabets. Furthermore, DI for MC channels modelled as the DTPC without ISI and the Binomial channel is studied in [26], [27], [45], [53], where the scale of the size of the codebook is shown to be $2^{(n\log n)R}$. ...
Article
Full-text available
Various applications of molecular communications (MCs) feature an alarm-prompt behavior for which the prevalent Shannon capacity may not be the appropriate performance metric. The identification capacity as an alternative measure for such systems has been motivated and established in the literature. In this paper, we study deterministic K-identification (DKI) for the discrete-time Poisson channel (DTPC) with inter-symbol interference (ISI), where the transmitter is restricted to an average and a peak molecule release rate constraint. Such a channel serves as a model for diffusive MC systems featuring long channel impulse responses and employing molecule-counting receivers. We derive lower and upper bounds on the DKI capacity of the DTPC with ISI when the size of the target message set K and the number of ISI channel taps L may grow with the codeword length n. As a key finding, we establish that for deterministic encoding, assuming that K and L both grow sub-linearly in n, i.e., $K = 2^{\kappa \log n}$ and $L = 2^{l \log n}$ with $\kappa + 4l \in [0,1)$, where $\kappa \in [0,1)$ is the identification target rate and $l \in [0,1/4)$ is the ISI rate, then the number of different messages that can be reliably identified scales super-exponentially in n, i.e., $\sim 2^{(n\log n)R}$, where R is the DKI coding rate. Moreover, since l and $\kappa$ must fulfill $\kappa + 4l \in [0,1)$, we show that optimizing l (or equivalently the symbol rate) leads to an effective identification rate [bits/s] that scales sub-linearly with n. This result is in contrast to the typical transmission rate [bits/s], which is independent of n.
... highlights that (prior to maximizing over i) we first maximize over j and then average the result over t. In this sense, the help, even if provided to both encoder and decoder, cannot be viewed as "common randomness" in the sense of [9-11], where the averaging over the common randomness is performed before taking the maximum. Our criterion is more demanding of the direct part (code construction) and less so of the converse. ...
Article
Full-text available
The gain in the identification capacity afforded by a rate-limited description of the noise sequence corrupting a modulo-additive noise channel is studied. Both the classical Ahlswede–Dueck version and the Ahlswede–Cai–Ning–Zhang version, which does not allow for missed identifications, are studied. Irrespective of whether the description is provided to the receiver, to the transmitter, or to both, the two capacities coincide and both equal the helper-assisted Shannon capacity.
... Over continuous channels, deterministic identification (DI) code sizes scale in the order of $2^{(n\log n)R}$ [7]-[9]. Super-exponential codes can also be achieved for DI if local randomness can be generated using resources such as feedback [10] or sensing [11], making it effectively a randomized ID. Applications are expected mainly in authentication tasks [12]-[15] and event-triggered systems. ...
Preprint
Deterministic identification over K-input multiple-access channels with average input cost constraints is considered. The capacity region for deterministic identification is determined for an average-error criterion, where arbitrarily large codes are achievable. For a maximal-error criterion, upper and lower bounds on the capacity region are derived. The bounds coincide if all average partial point-to-point channels are injective under the input constraint, i.e., all inputs at one terminal are mapped to distinct output distributions when averaged over the inputs at all other terminals. The achievability is proved by treating the MAC as an arbitrarily varying channel with average state constraints. For injective average channels, the capacity region is a hyperrectangle. The modulo-2 and modulo-3 binary adder MACs are presented as examples of channels which are injective under suitable input constraints. The binary multiplier MAC is presented as an example of a non-injective channel, where the achievable identification rate region still includes the Shannon capacity region.
... For the Gaussian DI channel with feedback [47], the codebook size can be arbitrarily large. In [48], the result is generalized to channels with non-discrete additive white noise and positive message transmission feedback capacity. ...
Preprint
Full-text available
Various applications of molecular communications (MC) are event-triggered, and, as a consequence, the prevalent Shannon capacity may not be the right measure for performance assessment. Thus, in this paper, we motivate and establish the identification capacity as an alternative metric. In particular, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC), subject to an average and a peak power constraint, which serves as a model for MC systems employing molecule counting receivers. It is established that the codebook size for this channel scales as $2^{(n\log n)R}$, where n and R are the codeword length and coding rate, respectively. Lower and upper bounds on the DI capacity of the DTPC are developed. The obtained large capacity of the DI channel sheds light on the performance of natural DI systems such as natural olfaction, which are known for their extremely large chemical discriminatory power in biology. Furthermore, numerical simulations for the empirical miss-identification and false identification error rates are provided for finite length codes. This allows us to quantify the scale of error reduction in terms of the codeword length.
Chapter
The one-day workshop, held prior to the “ZIF Workshop on Information Theory and Related Fields”, provided an excellent opportunity for in-depth discussions on several topics within the field of post-Shannon theory. The agenda covered deterministic and randomized identification, focusing on various methods and algorithms for identifying data or signals deterministically and through randomized processes. It explored the theoretical foundations and practical applications of these techniques. The session on resources for increasing identification capacity examined the different resources and strategies that can be utilized to boost the capacity for identifying information. This included discussions on both hardware and software solutions, as well as innovative approaches to resource allocation and optimization. Participants delved into common randomness generation, essential for various cryptographic protocols and communication systems. The session highlighted recent advancements and practical implementations of common randomness in secure communications. The workshop concluded with a detailed look at the development and practical deployment of identification codes. Experts shared insights on code construction techniques, implementation challenges, and real-world applications in various communication systems. We extend our thanks to the esteemed speakers for their valuable contributions: Caspar von Lengerke, Wafa Labidi, Ilya Vorobyev, Johannes Rosenberger, Jonathan Huffmann, and Pau Colomer. Their presentations and insights significantly enriched the workshop. Additionally, we are grateful to all the participants whose active engagement, constructive comments, and stimulating discussions made the event a success. Your involvement was crucial in fostering a collaborative and intellectually vibrant environment.
Chapter
New applications in modern communications demand robust and ultra-reliable low-latency information exchange, such as machine-to-machine and human-to-machine communications. For many of these applications, the identification approach of Ahlswede and Dueck is much more efficient than the classical message transmission scheme proposed by Shannon. Previous studies concentrate mainly on identification over discrete channels. For discrete channels, it was proved that identification is robust under channel uncertainty. Furthermore, optimal identification schemes that are secure and robust against jamming attacks have been considered. However, no results for continuous channels have yet been established. That is why we focus on the continuous case: the Gaussian channel, given its practical relevance. We deal with secure identification over Gaussian channels. Provably secure communication is of high interest for future communication systems. A key technique for implementing secure communication is physical layer security based on information-theoretic security. We model this with the wiretap channel. In particular, we provide a suitable coding scheme for the Gaussian wiretap channel (GWC) and determine the corresponding secure identification capacity. We also consider Multiple-Input Multiple-Output (MIMO) Gaussian channels and provide an efficient signal-processing scheme. This scheme allows a separation of signal processing and Gaussian coding as in the classical case.
Chapter
The model of identification via channels, introduced by Ahlswede and Dueck, has attracted increasing attention in recent years. Unlike in Shannon's classical model, where the receiver aims to determine which message was sent from a set of M messages, message identification focuses solely on discerning whether a specific message m was transmitted. The encoder can operate deterministically or through randomization, with substantial advantages observed particularly in the latter approach. While Shannon's model allows transmission of $M = 2^{nC}$ messages, Ahlswede and Dueck's model facilitates the identification of $M = 2^{2^{nC}}$ messages, exhibiting a double exponential growth in block length. In their seminal paper, Ahlswede and Dueck established the achievability and introduced a "soft" converse bound. Subsequent works have further refined this, culminating in a strong converse bound, applicable under specific conditions. Watanabe's contributions have notably enhanced the applicability of the converse bound. The aim of this survey is multifaceted: to grasp the formalism and proof techniques outlined in the aforementioned works, analyze Watanabe's converse, and trace the evolution from earlier converses to Watanabe's, emphasizing key similarities and differences that underpin the enhancements. Furthermore, we explore the converse proof for message identification with feedback, also pioneered by Ahlswede and Dueck. By elucidating how their approaches were inspired by preceding proofs, we provide a comprehensive overview. This overview paper seeks to offer readers insights into diverse converse techniques for message identification, with a focal point on the seminal works of Hayashi, Watanabe, and, in the context of feedback, Ahlswede and Dueck.
Article
Identification via channels (ID) is a goal-oriented (Post-Shannon) communications paradigm that verifies the matching of message (identity) pairs at source and sink. To date, ID research has focused on the upper bound λ for the probability of a false-positive (FP) identity match, mainly through ID tagging codes that represent the identities through ID codeword sets consisting of position-tag tuples. We broaden the ID research scope by introducing novel ID performance metrics: the expected FP-error probability $P_{\mathrm{fp}}$, which considers distance properties of ID codeword sets in conjunction with the probability for selecting ID pairs; the threshold probabilities $p_\epsilon$ that characterize quantiles of FP-probabilities; and the distance tail uplift ratio DiTUR, giving the fraction of ID pairs whose distance is increased above the minimum distance (which corresponds to λ). We define a No-Code (NC) approach that directly conducts the ID operations with the messages (identities) without any additional coding as a baseline for ID. We investigate a concatenated Reed-Solomon ID code and a Reed-Muller ID code, and find that they do not always yield advantages over using no ID code. We analytically characterize the reduction of error-prone ID pairs through sending multiple tags. Overall, our insights point to investigating the distance distribution of ID codes and incorporating the ID pair distributions of real ID systems in future ID research.
Article
Full-text available
The deterministic identification (DI) capacity is developed in multiple settings of channels with power constraints. A full characterization is established for the DI capacity of the discrete memoryless channel (DMC) with and without input constraints. Originally, Ahlswede and Dueck established the identification capacity with local randomness at the encoder, resulting in a double exponential number of messages in the block length n. In the deterministic setup, the number of messages scales exponentially, as in Shannon's transmission paradigm, but the achievable identification rates are higher. An explicit proof was not provided for the deterministic setting. In this paper, a detailed proof is presented for the DMC. Furthermore, Gaussian channels with fast and slow fading are considered, when channel side information is available at the decoder. A new phenomenon is observed as we establish that the number of messages scales as $2^{n\log(n)R}$ by deriving lower and upper bounds on the DI capacity on this scale. Consequently, the DI capacity of the Gaussian channel is infinite in the exponential scale and zero in the double exponential scale, regardless of the channel noise.
Conference Paper
In this paper, we discuss the potential of integrating molecular communication (MC) systems into future generations of wireless networks. First, we explain the advantages of MC compared to conventional wireless communication using electromagnetic waves at different scales, namely at micro- and macroscale. Then, we identify the main challenges when integrating MC into future generation wireless networks. We highlight that two of the greatest challenges are the interface between the chemical and the cyber (Internet) domain, and ensuring communication security. Finally, we present some future applications, such as smart infrastructure and health monitoring, give a timeline for their realization, and point out some areas of research towards the integration of MC into 6G and beyond.
Article
The initial vision of cellular communications was to deliver ubiquitous voice communications to anyone anywhere. In a simplified view, 1G delivered voice services for business customers, and only 2G for consumers. This also initiated the appetite for cellular data, for which 3G was designed. However, Blackberry delivered business smartphones, and 4G made smartphones a consumer device. The promise of 5G is to start the Tactile Internet, to control real and virtual objects in real time via cellular. However, the hype around 5G is, again, focusing on business customers, in particular in the context of campus networks. Consequently, 6G must provide an infrastructure to enable remote-controlled mobile robotic solutions for everyone: the Personal Tactile Internet. Which role can information and communication theory play in this context, and what are the big challenges ahead?
Article
The problem of identification is considered, in which it is of interest for the receiver to decide only whether a certain message has been sent or not, and the identification-feedback (IDF) capacity of channels with feedback is studied. The IDF capacity is shown to be discontinuous and super-additive for both deterministic and randomized encoding. For the deterministic IDF capacity the phenomenon of super-activation occurs, which is the strongest form of super-additivity. This is the first time that super-activation has been observed for discrete memoryless channels. On the other hand, for the randomized IDF capacity, super-activation is not possible. Finally, the developed theory is studied from an algorithmic point of view using the framework of Turing computability. The problem of computing the IDF capacity on a Turing machine is connected to problems in pure mathematics, and it is shown that if the IDF capacity were Turing computable, it would provide solutions to other problems in mathematics, including Goldbach's conjecture and the Riemann Hypothesis. However, it is shown that the deterministic and randomized IDF capacities are not Banach-Mazur computable. This is the weakest form of computability, implying that the IDF capacity is not computable even for universal Turing machines. On the other hand, the identification capacity without feedback is Turing computable, revealing the impact of feedback: it transforms the identification capacity from being computable to non-computable.
Article
We determine the identification capacity of compound channels in the presence of a wiretapper. It turns out that the secure identification capacity formula fulfills a dichotomy theorem: it is positive and equals the identification capacity of the channel if its message transmission secrecy capacity is positive; otherwise, the secure identification capacity is zero. Thus, we show that in the case where the secure identification capacity is greater than zero, we do not pay a price for secure identification, i.e., the secure identification capacity is equal to the identification capacity. This is in strong contrast to the transmission capacity of the compound wiretap channel. We then use this characterization to investigate the analytic behavior of the secure identification capacity. In particular, it is practically relevant to investigate its continuity behavior as a function of the channels. We completely characterize this continuity behavior. We analyze the (dis-)continuity and (super-)additivity of the capacities. In [12], Alon gave a conjecture about the maximal violation of the additivity of capacity functions in graphs. We show that this maximal violation also holds for the secure identification capacity of compound wiretap channels. This is the first example of a capacity function exhibiting this behavior.
Article
In the Preface to the first edition, originally published in 1980, we mentioned that this book was based on the author's lectures in the Department of Mechanics and Mathematics of the Lomonosov University in Moscow, which were issued, in part, in mimeographed form under the title "Probability, Statistics, and Stochastic Processes, I, II" and published by that University. Our original intention in writing the first edition of this book was to divide the contents into three parts: probability, mathematical statistics, and theory of stochastic processes, which corresponds to an outline of a three-semester course of lectures for university students of mathematics. However, in the course of preparing the book, it turned out to be impossible to realize this intention completely, since a full exposition would have required too much space. In this connection, we stated in the Preface to the first edition that only probability theory and the theory of random processes with discrete time were really adequately presented. Essentially all of the first edition is reproduced in this second edition. Changes and corrections are, as a rule, editorial, taking into account comments made by both Russian and foreign readers of the Russian original and of the English and German translations [S11]. The author is grateful to all of these readers for their attention, advice, and helpful criticisms. In this second English edition, new material also has been added, as follows: in Chapter III, §5, §§7-12; in Chapter IV, §5; in Chapter VII, §§8-10.
Article
Bell System Technical Journal, also pp. 623-656 (October)
Article
Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for $\Pr\{S - ES \ge nt\}$ depend only on the endpoints of the ranges of the summands and the mean, or the mean and the variance, of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables such as U-statistics and the sum of a random sample without replacement from a finite population.
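For reference, the best-known special case of these bounds is Hoeffding's inequality: with $S = X_1 + \cdots + X_n$ and $a_i \le X_i \le b_i$ almost surely,

$$\Pr\{S - ES \ge nt\} \le \exp\left(-\frac{2 n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right), \qquad t > 0.$$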
Chapter
We analyze wire-tap channels with secure feedback from the legitimate receiver. We present a lower bound on the transmission capacity (Theorem 1), which we conjecture to be tight and which is proved to be tight (Corollary 1) for Wyner's original (degraded) wire-tap channel and also for the reversely degraded wire-tap channel, for which the legitimate receiver gets a degraded version relative to the enemy (Corollary 2). Somewhat surprisingly, we completely determine the capacities of secure common randomness (Theorem 2) and secure identification (Theorem 3 and Corollary 3). Unlike for the DMC, these quantities are different here, because identification is linked to non-secure common randomness.
Article
This paper reviews the role of information theory in characterizing the fundamental limits of watermarking systems and in guiding the development of optimal watermark embedding algorithms and optimal attacks. Watermarking can be viewed as a communication problem with side information (in the form of the host signal and/or a cryptographic key) available at the encoder and the decoder. The problem is mathematically defined by distortion constraints, by statistical models for the host signal, and by the information available in the game between the information hider, the attacker, and the decoder. In particular, information theory explains why the performance of watermark decoders that do not have access to the host signal may surprisingly be as good as the performance of decoders that know the host signal. The theory is illustrated with several examples, including an application to image watermarking. Capacity expressions are derived under a parallel-Gaussian model for the host-image source. Sparsity is the single most important property of the source that determines capacity.
Article
Two examples are presented showing that partial feedback can increase the capacity regions of broadcast channels and also the capacity regions even of those two-way channels which give equal outputs on both terminals.
Conference Paper
Watermarking identification codes were introduced by Y. Steinberg and N. Merhav. In their model they assumed that (1) the attacker uses a single channel to attack the watermark, and both the information hider and the decoder know the attack channel; (2) the decoder either knows the covertext completely or knows nothing about it. They then suggested studying more robust models in place of the first assumption and, in place of the second, considering the case where the information hider is allowed to send a secret key to the decoder depending on the covertext. In response to the first suggestion, in this paper we assume that the attacker chooses a channel, unknown to both the information hider and the decoder, from a set of channels or a compound channel to attack the watermark. In response to the second suggestion we present two models. In the first model, the information hider generates side information componentwise as the secret key, according to the output sequence of the covertext. In the second model the only constraint on the key space is an upper bound for its rate. We present lower bounds for the identification capacities in the above models, which include the Steinberg and Merhav results on lower bounds. To obtain our lower bounds we introduce the corresponding models of common randomness. For the models with a single channel, we obtain the capacities of common randomness. For the models with a compound channel, we have lower and upper bounds, and the differences of lower and upper bounds are due to the exchange and different orders of the max-min operations.
Article
The capacity region of the discrete memoryless network is expressed in terms of conditional mutual information and causally conditioned directed information. Codetrees play a central role in the capacity expressions.
Article
We report on ideas, problems and results, which occupied us during the past decade and which seem to extend the frontiers of information theory in several directions. The main contributions concern information transfer by channels. There are also new questions and some answers in new models of source coding. While many of our investigations are in an explorative state, there are also hard cores of mathematical theories. In particular we present a unified theory of information transfer, which naturally incorporates Shannon's theory of information transmission and the theory of identification in the presence of noise as extremal cases. It provides several novel coding theorems. On the source coding side we introduce data compression for identification. Finally we are led beyond information theory to new concepts of solutions for probabilistic algorithms.
Conference Paper
The method of "types" is used to examine a discrete-time channel with additive noise, and the ID-capacity is derived.
Conference Paper
Summary form only given. To use common randomness in coding is a key idea from the theory of identification. Methods and ideas of this theory are shown here to also have an impact on Shannon's theory of transmission. We determine the capacity for a classical arbitrarily varying channel (AVC) (Ahlswede, 1973) with a novel structure of the capacity formula. This channel models a robust search problem in the presence of noise (Ahlswede and Wegener, 1987).
Article
The zero-error capacity $C_0$ of a noisy channel is defined as the least upper bound of rates at which it is possible to transmit information with zero probability of error. Various properties of $C_0$ are studied; upper and lower bounds and methods of evaluation of $C_0$ are given. Inequalities are obtained for the $C_0$ relating to the "sum" and "product" of two given channels. The analogous problem of the zero-error capacity $C_{0F}$ for a channel with a feedback link is considered. It is shown that while the ordinary capacity of a memoryless channel with feedback is equal to that of the same channel without feedback, the zero-error capacity may be greater. A solution is given to the problem of evaluating $C_{0F}$.
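As a pointer, Shannon's closed-form answer for the feedback case is usually stated as follows (a sketch in our notation, with channel law $p(y|x)$): $C_{0F} = 0$ if every pair of input letters can produce a common output, and otherwise

$$C_{0F} = \max_{P} \left( -\log \max_{y} \sum_{x:\, p(y|x) > 0} P(x) \right),$$

which can strictly exceed $C_0$.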
Article
The capacity of a single-input single-output discrete memoryless channel is not increased by the use of a noiseless feedback link. It is shown, by example, that this is not the case for a multiple-access discrete memoryless channel. That is, it is shown that the capacity region for such a channel is enlarged if a noiseless feedback link is utilized.
Article
Watermarking codes are analyzed from an information-theoretic viewpoint as identification codes with side information that is available at the transmitter only or at both ends. While the information hider embeds a secret message (watermark) in a covertext message (typically, text, image, sound, or video stream) within a certain distortion level, the attacker, modeled here as a memoryless channel, processes the resulting watermarked message (within limited additional distortion) in an attempt to invalidate the watermark. In most applications of watermarking codes, the decoder need not carry out full decoding, as in ordinary coded communication systems, but only needs to test whether a watermark exists at all and, if so, whether it matches a particular hypothesized pattern. This fact motivates us to view the watermarking problem as an identification problem, where the original covertext source serves as side information. In most applications, this side information is available to the encoder only, but sometimes it can be available to the decoder as well. For the case where the side information is available at both encoder and decoder, we derive a formula for the identification capacity and also provide a characterization of achievable error exponents. For the case where side information is available at the encoder only, we derive upper and lower bounds on the identification capacity. All characterizations are obtained as single-letter expressions.
Article
In the theory of identification via noisy channels, randomization in the encoding has a dramatic effect on the optimal code size, namely, it grows doubly exponentially in the blocklength, whereas in the theory of transmission it has the familiar exponential growth. We consider now, instead of the discrete memoryless channel (DMC), more robust channels such as the familiar compound (CC) and arbitrarily varying channels (AVC). They can be viewed as models for jamming situations. We make the pessimistic assumption that the jammer knows the input sequence before he acts. This forces communicators to use the maximal error concept and also makes randomization in the encoding superfluous. Now, for a DMC W, by a simple observation made by Ahlswede and Dueck (1989), in the absence of randomization the identification capacity, say $C_{NRI}(W)$, equals the logarithm of the number of different row-vectors in W. We generalize this to compound channels. A formidable problem arises if the DMC W is replaced by the AVC $\mathcal{W}$. In fact, for 0-1-matrices only in $\mathcal{W}$ we are, exactly as for transmission, led to the equivalent zero-error capacity of Shannon. But for general $\mathcal{W}$ the identification capacity $C_{NRI}(\mathcal{W})$ is quite different from the transmission capacity $C(\mathcal{W})$. An observation is that the separation codes of Ahlswede (1989) are also relevant here. We present a lower bound on $C_{NRI}(\mathcal{W})$. It implies, for instance, for $\mathcal{W} = \left\{ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} \delta & 1-\delta \\ 1 & 0 \end{pmatrix} \right\}$, $\delta \in (0, \tfrac{1}{2})$, that $C_{NRI}(\mathcal{W}) = 1$, which is obviously tight. It exceeds $C(\mathcal{W})$, which is known to exceed $1 - h(\delta)$, where h is the binary entropy function. We observe that a separation code with worst-case average list size $\bar{L}$ (which we call an NRA code) can be partitioned into $\bar{L} 2^{n\epsilon}$ transmission codes. This gives a non-single-letter characterization of the capacity of AVCs with maximal probability of error in terms of the capacity of codes with list decoding. We also prove that randomization in the decoding does not increase $C_I(W)$ and $C_{NRI}(W)$. Finally, we draw attention to related work on source coding.
Article
For the case of complete feedback, a fairly unified theory of identification is presented. Its guiding principle is the discovery that communicators (sender and receiver) must set up a common random experiment with maximal entropy and use it as randomization for a suitable identification technique. It is shown how this can be done in a constructive manner. The proof of optimality (weak converse) is based on a novel entropy bound, which can be viewed as a substitute for Fano's lemma in the present context. The single-letter characterization of (second-order) capacity regions now rests on an entropy characterization problem, which often can be solved. This is done for the multiple-access channel with deterministic encoding strategies, and for the broadcast channel with randomized encoding strategies
Article
The authors' main finding is that any object among doubly exponentially many objects can be identified in blocklength n with arbitrarily small error probability via a discrete memoryless channel (DMC), if randomization can be used for the encoding procedure. A novel doubly exponential coding theorem is presented which determines the optimal R, that is, the identification capacity of the DMC as a function of its transmission probability matrix. This identification capacity is a well-known quantity, namely, Shannon's transmission capacity for the DMC.
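In the usual second-order convention, the coding theorem states that the maximal number $N(n, \lambda)$ of identifiable messages at error level $\lambda$ satisfies $\frac{\log \log N(n, \lambda)}{n} \to C(W)$, where $C(W)$ is the Shannon transmission capacity of the DMC W; Ahlswede and Dueck proved achievability together with a "soft" converse, the strong converse being completed in later work.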
Article
A study is made of the identification problem in the presence of a noiseless feedback channel, and the second-order capacity $C_f$ (resp. $C_F$) for deterministic (resp. randomized) encoding strategies is determined. Several important phenomena are encountered: (1) although feedback does not increase the transmission capacity of a discrete memoryless channel (DMC), it does increase the (second-order) identification capacity; (2) noise increases $C_f$; (3) the structure of the new capacity formulas is simpler than C.E. Shannon's (1948) familiar formula. This has the effect that proofs of converses become easier than in the authors' previous work.
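For context, the two capacity formulas referred to here are usually stated as follows (a sketch, assuming the DMC W has positive transmission capacity): $C_f(W) = \max_{x} H(W(\cdot|x))$ for deterministic encoding and $C_F(W) = \max_{P} H(PW)$ for randomized encoding, where $PW$ denotes the output distribution induced by the input distribution P. The first formula makes phenomenon (2) plain: channel noise increases the output entropy $H(W(\cdot|x))$ of a fixed input letter and thus the common randomness that feedback can establish.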
The tactile internet
  • G Fettweis