Conference Paper

Identification over the Gaussian Channel in the Presence of Feedback

... This holds for both rate definitions, \frac{1}{n}\log M (as defined by Shannon for classical transmission) and \frac{1}{n}\log\log M (as defined by Ahlswede and Dueck for ID over DMCs). Interestingly, the authors of [15] showed that the ID capacity with noiseless feedback remains infinite regardless of the scaling used for the rate, e.g., double exponential, triple exponential, etc. In addition, the resource CR allows for a considerable increase in the ID capacity of channels [6,16,17]. ...
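For concreteness, the two rate conventions contrasted above can be written side by side (our notation, summarizing the cited definitions rather than quoting them):

\[
R_{\mathrm{transmission}} = \frac{1}{n}\log M, \qquad R_{\mathrm{ID}} = \frac{1}{n}\log\log M,
\]

so a code with M = 2^{2^{nR}} identities has ID rate R on the double-exponential scale, while with noiseless feedback over the Gaussian channel the capacity is infinite under either convention.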
... In this case, the transmitter wishes to simultaneously send a message to the receiver and sense its channel state through a strictly causal feedback link. Motivated by the drastic effects of feedback on the ID capacity [15], this work investigates joint ID and sensing. To the best of our knowledge, the problem of joint ID and sensing has not been treated in the literature yet. ...
... The size of this random experiment can be used to compute the growth of the ID rate. This result has been further emphasized in [15,28], where it has been shown that the ID capacity of the Gaussian channel with noiseless feedback is infinite. This is because the authors of [15,28] provided a coding scheme that generates infinite common randomness between the sender and the receiver. ...
Article
Full-text available
In the identification (ID) scheme proposed by Ahlswede and Dueck, the receiver’s goal is simply to verify whether a specific message of interest was sent. Unlike Shannon’s transmission codes, which aim for message decoding, ID codes for a discrete memoryless channel (DMC) are far more efficient; their size grows doubly exponentially with the blocklength when randomized encoding is used. This indicates that when the receiver’s objective does not require decoding, the ID paradigm is significantly more efficient than traditional Shannon transmission in terms of both energy consumption and hardware complexity. Further benefits of ID schemes can be realized by leveraging additional resources such as feedback. In this work, we address the problem of joint ID and channel state estimation over a DMC with independent and identically distributed (i.i.d.) state sequences. State estimation functions as the sensing mechanism of the model. Specifically, the sender transmits an ID message over the DMC while simultaneously estimating the channel state through strictly causal observations of the channel output. Importantly, the random channel state is unknown to both the sender and the receiver. For this system model, we present a complete characterization of the ID capacity–distortion function.
... It has been shown that infinite CR can be generated from a Gaussian source [25]. The identification with feedback (IDF) problem via single-user Gaussian channels has been explored in [13], [26], [27], demonstrating the achievability of infinite capacity regardless of the chosen rate scaling. ...
... In this section, we recall the previous results of [26] on IDF via the single-user Gaussian channel. Consider the IDF problem via a single-user discrete-time Gaussian channel W_{\sigma^2} as depicted in Fig. 1. ...
... Theorem 3 [26]. Let P > 0. Then for every R > 0 there exists a blocklength n_0 such that for all n \geq n_0 there is a deterministic IDF code for W_{\sigma^2} of blocklength n with N = 2^{2^{nR}} identities and with \lambda \in (0, \tfrac{1}{2}), i.e., C(W_{\sigma^2}, P) = +\infty. ...
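The mechanism behind Theorem 3, as described in the excerpts above, can be paraphrased in one step (our summary; Y_1 denotes the first channel output): the sender transmits a symbol x_1 and, through the noiseless feedback link, also learns the receiver's observation

\[
Y_1 = x_1 + Z_1, \qquad Z_1 \sim \mathcal{N}(0, \sigma^2).
\]

Sender and receiver then share a continuous random variable, which can be discretized arbitrarily finely, yielding an unbounded amount of common randomness; standard ID constructions turn this common randomness into a number of identities that exceeds any iterated-exponential scale.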
Preprint
We investigate message identification over a K-sender Gaussian multiple access channel (K-GMAC). Unlike the size of conventional Shannon transmission codes, the size of randomized identification (ID) codes grows doubly exponentially in the code length. Improvements in the ID approach can be attained through additional resources such as quantum entanglement, common randomness (CR), and feedback. It has been demonstrated that an infinite capacity can be attained for a single-user Gaussian channel with noiseless feedback, irrespective of the chosen rate scaling. We establish the capacity region of both the K-GMAC and the K-sender state-dependent Gaussian multiple access channel (K-SD-GMAC) when strictly causal noiseless feedback is available.
... In contrast to the classical Shannon message transmission, feedback can increase the ID capacity of a DMC [16]. Furthermore, it has been shown in [17] that the ID capacity of Gaussian channels with noiseless feedback is infinite. This holds for both rate definitions, \frac{1}{n}\log M (as defined by Shannon for classical transmission) and \frac{1}{n}\log\log M (as defined by Ahlswede and Dueck for ID over DMCs). ...
... This holds for both rate definitions, \frac{1}{n}\log M (as defined by Shannon for classical transmission) and \frac{1}{n}\log\log M (as defined by Ahlswede and Dueck for ID over DMCs). Interestingly, the authors in [17] showed that the ID capacity with noiseless feedback remains infinite regardless of the scaling used for the rate, e.g., double exponential, triple exponential, etc. Besides, the resource CR allows a considerable increase in the ID capacity of channels [18], [19], [20], [21], [22]. The aforementioned communication scenarios emphasize that the ID capacity behaves completely differently from Shannon's capacity. ...
... For instance, fundamental limits of joint sensing and communication for a point-to-point channel have been studied in [29], where the transmitter wishes to simultaneously send a message to the receiver and sense its channel state via a strictly causal feedback link. Motivated by the drastic effects of feedback on the ID capacity [17], this work investigates joint ID and sensing. To the best of our knowledge, the problem of joint ID and sensing has not been treated in the literature yet. ...
Preprint
In the identification (ID) scheme proposed by Ahlswede and Dueck, the receiver only checks whether a message of special interest to him has been sent or not. In contrast to Shannon transmission codes, the size of ID codes for a Discrete Memoryless Channel (DMC) grows doubly exponentially fast with the blocklength, if randomized encoding is used. This groundbreaking result makes the ID paradigm more efficient than the classical Shannon transmission in terms of necessary energy and hardware components. Further gains can be achieved by taking advantage of additional resources such as feedback. We study the problem of joint ID and channel state estimation over a DMC with independent and identically distributed (i.i.d.) state sequences. The sender simultaneously sends an ID message over the DMC with a random state and estimates the channel state via a strictly causal channel output. The random channel state is available to neither the sender nor the receiver. For the proposed system model, we establish a lower bound on the ID capacity-distortion function.
... In such a situation, no communication over the channel is required. The major motivation of the work in [26] was the drastic effect that the common randomness generated from the perfect feedback in the model treated in [27] produces on the identification capacity. The identification capacity of Gaussian channels with noiseless feedback was established in [27]; it is infinite regardless of the scaling. ...
... The major motivation of the work in [26] was the drastic effect that the common randomness generated from the perfect feedback in the model treated in [27] produces on the identification capacity. The identification capacity of Gaussian channels with noiseless feedback was established in [27]; it is infinite regardless of the scaling. The authors in [27] proposed a coding strategy that achieves an infinite identification capacity, in which an infinitely large amount of CR between the sender and the receiver is generated using noiseless feedback. ...
... The identification capacity of Gaussian channels with noiseless feedback was established in [27]; it is infinite regardless of the scaling. The authors in [27] proposed a coding strategy that achieves an infinite identification capacity, in which an infinitely large amount of CR between the sender and the receiver is generated using noiseless feedback. ...
Preprint
We study a standard two-source model for common randomness (CR) generation in which Alice and Bob generate a common random variable with high probability of agreement by observing independent and identically distributed (i.i.d.) samples of correlated sources on countably infinite alphabets. The two parties are additionally allowed to communicate as little as possible over a noisy memoryless channel. In our work, we give a single-letter formula for the CR capacity for the proposed model and provide a rigorous proof of it. This is a challenging scenario because some of the finite alphabet properties, namely of the entropy can not be extended to the countably infinite case. Notably, it is known that the Shannon entropy is in fact discontinuous at all probability distributions with countably infinite support.
... In such a situation, no communication over the channel is required. We were motivated by the drastic effects on the identification capacity produced by the common randomness generated from the perfect feedback in the model treated in [18]. The authors in [18] proved that the identification capacity of Gaussian channels with noiseless feedback is infinite regardless of the scaling by proposing a coding scheme that generates an infinitely large amount of CR between the sender and the receiver using noiseless feedback. ...
... We were motivated by the drastic effects on the identification capacity produced by the common randomness generated from the perfect feedback in the model treated in [18]. The authors in [18] proved that the identification capacity of Gaussian channels with noiseless feedback is infinite regardless of the scaling by proposing a coding scheme that generates an infinitely large amount of CR between the sender and the receiver using noiseless feedback. ...
... The proof of Lemma 10 is analogous to the proof of [18, Lemma 7]. We then discretize X using the function d as described in [18]. ...
Preprint
Full-text available
We study the problem of common randomness (CR) generation in the basic two-party communication setting in which the sender and the receiver aim to agree on a common random variable with high probability by observing independent and identically distributed (i.i.d.) samples of correlated Gaussian sources and while communicating as little as possible over a noisy memoryless channel. We completely solve the problem by giving a single-letter characterization of the CR capacity for the proposed model and by providing a rigorous proof of it. Interestingly, we prove that the CR capacity is infinite when the Gaussian sources are perfectly correlated.
... Common Randomness (CR) is a resource for future communication systems, e.g., for semantics-based communication systems [3], [4] and for communication systems with stringent security and trustworthiness requirements [5]. In particular, CR improves the scalability of identification over channels (ID) [6]-[8]. CR also improves security [9] for post-quantum cryptography [10], [11], e.g., by supporting information-theoretic security through modular coding schemes [12] and protection against jamming attacks [13]. ...
... - [8], [46], [48]. Further, let c denote the number of bits that we can generate for a given observation. ...
Article
Full-text available
Common Randomness (CR) provides sequences of random variables at two physically separated locations. Ideally, the random variables at the two locations should be identical, i.e., have low probability of discrepancy, and should have high entropy. Previous CR research has focused on CR for physical layer security (mainly physical layer secret key generation), where a low CR generation rate, i.e., a low rate of CR bits per second is sufficient. However, emerging semantic communication paradigms, e.g., identification via channels, require high CR rates. We develop and evaluate a Carrier Frequency Offset (CFO) based methodology for high-rate CR generation from reciprocal observations of a common wireless channel between two distinct wireless terminals. The proposed CFO-CR methodology proceeds in several stages, including channel probing, random parameter extraction, noise reduction, quantization, information reconciliation, and randomization. Our evaluations with single-carrier software-defined radios, for which we make measurement traces publicly available, indicate that high-rate CR generation should observe (probe) the CFO and employ a Savitzky-Golay low-pass filter with a low cut-off frequency for noise reduction in conjunction with multi-bit quantization, Gray code encoding, and a shuffling based randomization. We provide insights into the tradeoffs between the reconciliation cost for correcting bit discrepancies and the CR generation parameters. Our proposed CFO-CR methodology can generate 2048 bits of CR at a comparatively low reconciliation cost of 72 bytes while only observing the lowest possible 256 channel observations and passing all common randomness tests. For generating 2048 bits of CR, other state-of-the-art approaches either require more channel observations (≥ 2048) or incur a higher reconciliation cost (≥ 450 bytes).
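As a rough illustration of the quantization stages named in this abstract (channel probing and reconciliation are omitted), the following Python sketch applies Savitzky-Golay smoothing, multi-bit uniform quantization, and Gray-code mapping to a vector of CFO observations. It is a minimal sketch under assumed parameter values (window length, bits per sample), not the authors' implementation.

import numpy as np
from scipy.signal import savgol_filter

def gray_encode(v: int) -> int:
    # Binary-reflected Gray code: adjacent quantizer levels differ in one bit,
    # so small disagreements between the two terminals flip few bits.
    return v ^ (v >> 1)

def cfo_to_bits(cfo, bits_per_sample=4, window=31, polyorder=3):
    # Noise reduction: Savitzky-Golay low-pass filtering of the CFO trace.
    smoothed = savgol_filter(cfo, window, polyorder)
    # Multi-bit uniform quantization over the observed range.
    levels = 2 ** bits_per_sample
    lo, hi = smoothed.min(), smoothed.max()
    idx = np.clip(((smoothed - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
    # Gray-encode each quantizer index and concatenate the bits.
    return "".join(format(gray_encode(int(i)), f"0{bits_per_sample}b") for i in idx)

# Example: 256 synthetic channel observations, the minimal probing count
# reported in the abstract (the sinusoid-plus-noise trace is made up).
rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 8, 256)) + 0.05 * rng.standard_normal(256)
print(cfo_to_bits(obs)[:32])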
... In [23], the Gaussian channel with feedback is considered. For a positive noise variance, a coding scheme is proposed that generates infinite common randomness between the sender and the receiver. ...
... The main difference compared to transmission codes is that the disjointness condition for decoding sets is replaced by the weaker property (23). Instead of a single receiver interested in a specific message, one can imagine a scenario where all decoders are in the same location. ...
Preprint
There is a growing interest in models that extend beyond Shannon's classical transmission scheme, renowned for its channel capacity formula C. One such promising direction is message identification via channels, introduced by Ahlswede and Dueck. Unlike in Shannon's classical model, where the receiver aims to determine which message was sent from a set of M messages, message identification focuses solely on discerning whether a specific message m was transmitted. The encoder can operate deterministically or through randomization, with substantial advantages observed particularly in the latter approach. While Shannon's model allows transmission of M = 2^{nC} messages, Ahlswede and Dueck's model facilitates the identification of M = 2^{2^{nC}} messages, exhibiting a double exponential growth in block length. In their seminal paper, Ahlswede and Dueck established the achievability and introduced a "soft" converse bound. Subsequent works have further refined this, culminating in a strong converse bound, applicable under specific conditions. Watanabe's contributions have notably enhanced the applicability of the converse bound. The aim of this survey is multifaceted: to grasp the formalism and proof techniques outlined in the aforementioned works, analyze Watanabe's converse, trace the evolution from earlier converses to Watanabe's, emphasizing key similarities and differences that underpin the enhancements. Furthermore, we explore the converse proof for message identification with feedback, also pioneered by Ahlswede and Dueck. By elucidating how their approaches were inspired by preceding proofs, we provide a comprehensive overview. This overview paper seeks to offer readers insights into diverse converse techniques for message identification, with a focal point on the seminal works of Hayashi, Watanabe, and, in the context of feedback, Ahlswede and Dueck.
... Identification over broadcast channels was investigated in [6], [20], while identification in the presence of feedback over multiple-access channels and broadcast channels was studied in [3]. Identification was studied over Gaussian channels in [22], [15]; over additive noise channels under average and peak power constraints in [30]; over compound channels and arbitrarily varying channels in [1]. Deterministic identification over DMCs with and without input constraints was studied in [21]. ...
... Using (15), we can write, for k ∈ A_i, ...
Preprint
We study message identification over the binary uniform permutation channels. For DMCs, the number of identifiable messages grows doubly exponentially. Identification capacity, the maximum second-order exponent, is known to be the same as the Shannon capacity of a DMC. We consider a binary uniform permutation channel where the transmitted vector is permuted by a permutation chosen uniformly at random. Permutation channels support reliable communication of only polynomially many messages. While this implies a zero second-order identification rate, we prove a soft converse result showing that even non-zero first-order identification rates are not achievable with a power-law decay of error probability for identification over binary uniform permutation channels. To prove the converse, we use a sequence of steps to construct a new identification code with a simpler structure and then use a lower bound on the normalized maximum pairwise intersection of a set system on {0, . . . , n}. We provide generalizations for arbitrary alphabet size.
... Deterministic codes often have the advantage of simpler implementation, simulation [70,71], and explicit construction [72]. The DI problem for Gaussian channels is also studied in [64], [73]-[75]. Further, DI may be preferred over RI in complexity-constrained applications of MC systems, where the generation of random codewords is challenging. ...
... where the latter inequality follows from the condition on y, cf. (73). For the marked step, we used Bernoulli's inequality, after which Equation (78) can be rewritten accordingly. ...
Preprint
Full-text available
Several applications of molecular communications (MC) feature an alarm-prompt behavior for which the prevalent Shannon capacity may not be the appropriate performance metric. The identification capacity as an alternative measure for such systems has been motivated and established in the literature. In this paper, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC) with inter-symbol interference (ISI), where the transmitter is restricted to an average and a peak molecule release rate constraint. Such a channel serves as a model for diffusive MC systems featuring long channel impulse responses and employing molecule counting receivers. We derive lower and upper bounds on the DI capacity of the DTPC with ISI when the number of ISI channel taps K may grow with the codeword length n (e.g., due to increasing symbol rate). As a key finding, we establish that for deterministic encoding, the codebook size scales as 2^{(n\log n)R}, assuming that the number of ISI channel taps scales as K = 2^{\kappa \log n}, where R is the coding rate and \kappa is the ISI rate. Moreover, we show that optimizing \kappa leads to an effective identification rate [bits/s] that scales linearly with n, which is in contrast to the typical transmission rate [bits/s] that is independent of n.
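To make the stated scaling concrete (our restatement, with logarithms to base 2): the assumed number of ISI taps satisfies

\[
K = 2^{\kappa \log n} = n^{\kappa},
\]

i.e., it grows polynomially in the codeword length, while the codebook size 2^{(n\log n)R} corresponds to \log_2 N = R\,n\log n identity bits, a factor of \log n above the exponential scale of transmission codes.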
... In the Gaussian case, we have shown that the codebook size scales as 2^{(n\log n)R}, by deriving bounds on the DI capacity. Furthermore, DI for Gaussian channels is also studied in [39], [47]-[49]. ...
... In other words, the codebook size increases as A increases; however, since A appears in the exponent of the exponential and double-exponential codebook sizes for transmission [28] and RI [18], respectively, different non-standard codebook sizes are observed for other communication tasks, such as covert communication [69] or covert identification [70] for the binary-input DMC (BIDMC), where the codebook size scales as 2^{\sqrt{n}R} and 2^{2^{\sqrt{n}R}}, respectively. For the Gaussian DI channel with feedback [47], the codebook size can be arbitrarily large. In [48] the result is generalized to channels with non-discrete additive white noise and positive message transmission feedback capacity. ...
Preprint
Full-text available
Various applications of molecular communications (MC) are event-triggered, and, as a consequence, the prevalent Shannon capacity may not be the right measure for performance assessment. Thus, in this paper, we motivate and establish the identification capacity as an alternative metric. In particular, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC), subject to an average and a peak power constraint, which serves as a model for MC systems employing molecule counting receivers. It is established that the codebook size for this channel scales as 2^{(n\log n)R}, where n and R are the codeword length and coding rate, respectively. Lower and upper bounds on the DI capacity of the DTPC are developed. The obtained large capacity of the DI channel sheds light on the performance of natural DI systems such as natural olfaction, which are known for their extremely large chemical discriminatory power in biology. Furthermore, numerical simulations for the empirical missed-identification and false-identification error rates are provided for finite-length codes. This allows us to quantify the scale of error reduction in terms of the codeword length.
... 16], [53,Thm. 5.6], [54,Thm. 17]). ...
Preprint
The problem of identification over a discrete memoryless wiretap channel is examined under the criterion of semantic effective secrecy. This secrecy criterion guarantees both the requirement of semantic secrecy and of stealthy communication. Additionally, we introduce the related problem of combining approximation-of-output statistics and transmission. We derive a capacity theorem for approximation-of-output statistics transmission codes. For a general model, we present lower and upper bounds on the capacity, showing that these bounds are tight for more capable wiretap channels. We also provide illustrative examples for more capable wiretap channels, along with examples of wiretap channel classes where a gap exists between the lower and upper bounds.
... In Figure 16.9 we illustrate the idea behind CR. In general, if the sender and receiver have access to a random experiment, they can gather and store resources in the form of correlated random variables [8,32], and the CR capacity is determined by the individual channels between the common random experiment and the sender and receiver. In Figure 16.9, the CR resource is illustrated with buckets of water that are generated by observing a random experiment and can be stored at both ends. ...
Chapter
Since the breakthrough of Shannon's seminal paper, researchers and engineers have worked on codes and techniques that approach the fundamental limits of message transmission. Given the capacity C of a channel and the block length n of the codewords, the maximum number of possible messages that can be transmitted is 2^{nC}. In this work, we advocate a paradigm change towards Post-Shannon communication that allows the encoding of up to 2^{2^{nC}} messages: a double exponential behavior! This paradigm shift is the study of the transmission of the Gestalt information instead of message-only transmission and involves a shift from the question of what message the sender has transmitted to whether it has transmitted at all, and with the purpose of achieving which goal. Entire careers were built designing methods and codes on top of previous works, bringing only marginal gains in approaching the fundamental limit of Shannon's message transmission. This paradigm change can bring not only marginal but also exponential gains in the efficiency of communication. Within Post-Shannon techniques, we will explore identification codes, the exploitation of resources that are considered useless in the current paradigm, such as noiseless feedback and common randomness, and the exploitation of multi-channel descriptor information.
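As a back-of-the-envelope illustration of this gap (the numbers are ours, not from the chapter): for nC = 20, classical transmission distinguishes 2^{20} \approx 10^6 messages, whereas identification allows 2^{2^{20}} = 2^{1048576} \approx 10^{315653} messages, since 1048576 \cdot \log_{10} 2 \approx 315653.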
Chapter
New applications in modern communications are demanding robust and ultra-reliable low-latency information exchange such as machine-to-machine and human-to-machine communications. For many of these applications, the identification approach of Ahlswede and Dueck is much more efficient than the classical message transmission scheme proposed by Shannon. Previous studies concentrate mainly on identification over discrete channels. For discrete channels, it was proved that identification is robust under channel uncertainty. Furthermore, optimal identification schemes that are secure and robust against jamming attacks have been considered. However, no results for continuous channels have yet been established. That is why we focus on the continuous case: the Gaussian channel for its known practical relevance. We deal with secure identification over Gaussian channels. Provable secure communication is of high interest for future communication systems. A key technique for implementing secure communication is the physical layer security based on information-theoretic security. We model this with the wiretap channel. In particular, we provide a suitable coding scheme for the Gaussian wiretap channel (GWC) and determine the corresponding secure identification capacity. We also consider Multiple-Input Multiple-Output (MIMO) Gaussian channels and provide an efficient signal-processing scheme. This scheme allows a separation of signal processing and Gaussian coding as in the classical case.
Chapter
The model of identification via channels, introduced by Ahlswede and Dueck, has attracted increasing attention in recent years. Unlike in Shannon's classical model, where the receiver aims to determine which message was sent from a set of M messages, message identification focuses solely on discerning whether a specific message m was transmitted. The encoder can operate deterministically or through randomization, with substantial advantages observed particularly in the latter approach. While Shannon's model allows transmission of M = 2^{nC} messages, Ahlswede and Dueck's model facilitates the identification of M = 2^{2^{nC}} messages, exhibiting a double exponential growth in block length. In their seminal paper, Ahlswede and Dueck established the achievability and introduced a "soft" converse bound. Subsequent works have further refined this, culminating in a strong converse bound, applicable under specific conditions. Watanabe's contributions have notably enhanced the applicability of the converse bound. The aim of this survey is multifaceted: to grasp the formalism and proof techniques outlined in the aforementioned works, analyze Watanabe's converse, trace the evolution from earlier converses to Watanabe's, emphasizing key similarities and differences that underpin the enhancements. Furthermore, we explore the converse proof for message identification with feedback, also pioneered by Ahlswede and Dueck. By elucidating how their approaches were inspired by preceding proofs, we provide a comprehensive overview. This overview paper seeks to offer readers insights into diverse converse techniques for message identification, with a focal point on the seminal works of Hayashi, Watanabe, and, in the context of feedback, Ahlswede and Dueck.
Preprint
We study message identification over the binary noisy permutation channel. For discrete memoryless channels (DMCs), the number of identifiable messages grows doubly exponentially, and the maximum second-order exponent is the Shannon capacity of the DMC. We consider a binary noisy permutation channel where the transmitted vector is first permuted by a permutation chosen uniformly at random, and then passed through a binary symmetric channel with crossover probability p. In an earlier work, it was shown that 2^{c_n n} messages can be identified over the binary (noiseless) permutation channel if c_n \rightarrow 0. For the binary noisy permutation channel, we show that message sizes growing as 2^{\epsilon_n \sqrt{n/\log n}} are identifiable for any \epsilon_n \rightarrow 0. We also prove a strong converse result showing that for any sequence of identification codes with message size 2^{R_n \sqrt{n}\log n}, where R_n \rightarrow \infty, the sum of Type-I and Type-II error probabilities approaches at least 1 as n \rightarrow \infty. Our proof of the strong converse uses the idea of channel resolvability. The channel of interest turns out to be the "binary weight-to-weight (BWW) channel", which captures the effect on the Hamming weight of a vector when the vector is passed through a binary symmetric channel. We propose a novel deterministic quantization scheme for quantization of a distribution over \{0, 1, \ldots, n\} by an M-type input distribution when the distortion is measured on the output distribution (over the BWW channel) in total variation distance. This plays a key role in the converse proof.
Article
Full-text available
A minimax converse for the identification via channels is derived. By this converse, a general formula for the identification capacity, which coincides with the transmission capacity, is proved without the assumption of the strong converse property. Furthermore, the optimal second-order coding rate of the identification via channels is characterized when the type I error probability is non-vanishing and the type II error probability is vanishing. Our converse is built upon the so-called partial channel resolvability approach; however, the minimax argument enables us to circumvent a flaw reported in the literature.
Article
Full-text available
The deterministic identification (DI) capacity is developed in multiple settings of channels with power constraints. A full characterization is established for the DI capacity of the discrete memoryless channel (DMC) with and without input constraints. Originally, Ahlswede and Dueck established the identification capacity with local randomness at the encoder, resulting in a double exponential number of messages in the block length n. In the deterministic setup, the number of messages scales exponentially, as in Shannon's transmission paradigm, but the achievable identification rates are higher. An explicit proof was not provided for the deterministic setting. In this paper, a detailed proof is presented for the DMC. Furthermore, Gaussian channels with fast and slow fading are considered, when channel side information is available at the decoder. A new phenomenon is observed as we establish that the number of messages scales as 2^{n\log(n)R} by deriving lower and upper bounds on the DI capacity on this scale. Consequently, the DI capacity of the Gaussian channel is infinite in the exponential scale and zero in the double exponential scale, regardless of the channel noise.
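The position of this new scale between the two classical ones can be read off directly (our restatement): for any fixed R > 0,

\[
2^{nR} \;\ll\; 2^{(n\log n)R} \;\ll\; 2^{2^{nR}} \qquad (n \to \infty),
\]

so a positive rate on the n log n scale corresponds to an infinite rate when measured on the exponential scale and to a zero rate on the double-exponential scale, exactly as stated above.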
Chapter
Machine-to-machine and human-to-machine communications are essential aspects incorporated in the framework of fifth generation wireless connectivity.
Conference Paper
In this paper, we discuss the potential of integrating molecular communication (MC) systems into future generations of wireless networks. First, we explain the advantages of MC compared to conventional wireless communication using electromagnetic waves at different scales, namely at micro- and macroscale. Then, we identify the main challenges when integrating MC into future generation wireless networks. We highlight that two of the greatest challenges are the interface between the chemical and the cyber (Internet) domain, and ensuring communication security. Finally, we present some future applications, such as smart infrastructure and health monitoring, give a timeline for their realization, and point out some areas of research towards the integration of MC into 6G and beyond.
Article
The problem of identification is considered, in which it is of interest for the receiver to decide only whether a certain message has been sent or not, and the identification-feedback (IDF) capacity of channels with feedback is studied. The IDF capacity is shown to be discontinuous and super-additive for both deterministic and randomized encoding. For the deterministic IDF capacity, the phenomenon of super-activation occurs, which is the strongest form of super-additivity. This is the first time that super-activation has been observed for discrete memoryless channels. On the other hand, for the randomized IDF capacity, super-activation is not possible. Finally, the developed theory is studied from an algorithmic point of view using the framework of Turing computability. The problem of computing the IDF capacity on a Turing machine is connected to problems in pure mathematics, and it is shown that if the IDF capacity were Turing computable, it would provide solutions to other problems in mathematics, including Goldbach's conjecture and the Riemann Hypothesis. However, it is shown that the deterministic and randomized IDF capacities are not Banach-Mazur computable. This is the weakest form of computability, implying that the IDF capacity is not computable even for universal Turing machines. On the other hand, the identification capacity without feedback is Turing computable, revealing the impact of the feedback: it transforms the identification capacity from being computable to non-computable.
Article
We determine the identification capacity of compound channels in the presence of a wiretapper. It turns out that the secure identification capacity formula fulfills a dichotomy theorem: it is positive and equals the identification capacity of the channel if its message transmission secrecy capacity is positive; otherwise, the secure identification capacity is zero. Thus, we show that when the secure identification capacity is greater than zero, we do not pay a price for secure identification, i.e., the secure identification capacity is equal to the identification capacity. This is in strong contrast to the transmission capacity of the compound wiretap channel. We then use this characterization to investigate the analytic behavior of the secure identification capacity. In particular, it is practically relevant to investigate its continuity behavior as a function of the channels. We completely characterize this continuity behavior. We analyze the (dis-)continuity and (super-)additivity of the capacities. In [12], Alon gave a conjecture about maximal violation of the additivity of capacity functions in graphs. We show that this maximal violation also holds for the secure identification capacity of compound wiretap channels. This is the first example of a capacity function exhibiting this behavior.
Article
The paper is concerned with the estimation of the probability that the empirical distribution of n independent, identically distributed random vectors is contained in a given set of distributions. Sections 1–3 are a survey of some of the literature on the subject. In section 4 the special case of multinomial distributions is considered and certain results on the precise order of magnitude of the probabilities in question are obtained.
Article
Bell System Technical Journal, also pp. 623-656 (October)
Chapter
We analyze wiretap channels with secure feedback from the legitimate receiver. We present a lower bound on the transmission capacity (Theorem 1), which we conjecture to be tight and which is proved to be tight (Corollary 1) for Wyner's original (degraded) wiretap channel and also for the reversely degraded wiretap channel, for which the legitimate receiver gets a degraded version of the enemy's observation (Corollary 2). Somewhat surprisingly, we completely determine the capacities of secure common randomness (Theorem 2) and secure identification (Theorem 3 and Corollary 3). Unlike for the DMC, these quantities are different here, because identification is linked to non-secure common randomness.
Article
This paper reviews the role of information theory in characterizing the fundamental limits of watermarking systems and in guiding the development of optimal watermark embedding algorithms and optimal attacks. Watermarking can be viewed as a communication problem with side information (in the form of the host signal and/or a cryptographic key) available at the encoder and the decoder. The problem is mathematically defined by distortion constraints, by statistical models for the host signal, and by the information available in the game between the information hider, the attacker, and the decoder. In particular, information theory explains why the performance of watermark decoders that do not have access to the host signal may surprisingly be as good as the performance of decoders that know the host signal. The theory is illustrated with several examples, including an application to image watermarking. Capacity expressions are derived under a parallel-Gaussian model for the host-image source. Sparsity is the single most important property of the source that determines capacity.
Conference Paper
Watermarking identification codes were introduced by Y. Steinberg and N. Merhav. In their model, they assumed that (1) the attacker uses a single channel to attack the watermark, and both the information hider and the decoder know the attack channel; (2) the decoder either completely knows the covertext or knows nothing about it. Instead of the first assumption, they suggested studying more robust models, and instead of the second assumption, they suggested considering the case where the information hider is allowed to send a secret key to the decoder according to the covertext. In response to the first suggestion, in this paper we assume that the attacker chooses an unknown (to both the information hider and the decoder) channel from a set of channels, or a compound channel, to attack the watermark. In response to the second suggestion, we present two models. In the first model, according to the output sequence of the covertext, the information hider generates side information componentwise as the secret key. In the second model, the only constraint on the key space is an upper bound on its rate. We present lower bounds for the identification capacities in the above models, which include Steinberg and Merhav's results on lower bounds. To obtain our lower bounds, we introduce the corresponding models of common randomness. For the models with a single channel, we obtain the capacities of common randomness. For the models with a compound channel, we have lower and upper bounds, and the differences between the lower and upper bounds are due to the exchange and different orders of the max-min operations.
Article
The authors' main finding is that any object among doubly exponentially many objects can be identified in blocklength n with arbitrarily small error probability via a discrete memoryless channel (DMC), if randomization can be used for the encoding procedure. A novel doubly exponential coding theorem is presented which determines the optimal R, that is, the identification capacity of the DMC as a function of its transmission probability matrix. This identification capacity is a well-known quantity, namely, Shannon's transmission capacity for the DMC.
Conference Paper
The method of “types” is used to examine a discrete-time channel with additive noise, and the ID-capacity is derived.
Article
Watermarking codes are analyzed from an information-theoretic viewpoint as identification codes with side information that is available at the transmitter only or at both ends. While the information hider embeds a secret message (watermark) in a covertext message (typically a text, image, sound, or video stream) within a certain distortion level, the attacker, modeled here as a memoryless channel, processes the resulting watermarked message (within limited additional distortion) in an attempt to invalidate the watermark. In most applications of watermarking codes, the decoder need not carry out full decoding, as in ordinary coded communication systems, but only test whether a watermark exists at all and, if so, whether it matches a particular hypothesized pattern. This fact motivates us to view the watermarking problem as an identification problem, where the original covertext source serves as side information. In most applications, this side information is available to the encoder only, but sometimes it can be available to the decoder as well. For the case where the side information is available at both encoder and decoder, we derive a formula for the identification capacity and also provide a characterization of achievable error exponents. For the case where side information is available at the encoder only, we derive upper and lower bounds on the identification capacity. All characterizations are obtained as single-letter expressions.
Article
In the theory of identification via noisy channels, randomization in the encoding has a dramatic effect on the optimal code size, namely, it grows double-exponentially in the blocklength, whereas in the theory of transmission it has the familiar exponential growth. We consider now, instead of the discrete memoryless channel (DMC), more robust channels such as the familiar compound (CC) and arbitrarily varying channels (AVC). They can be viewed as models for jamming situations. We make the pessimistic assumption that the jammer knows the input sequence before he acts. This forces communicators to use the maximal error concept and also makes randomization in the encoding superfluous. Now, for a DMC W, by a simple observation made by Ahlswede and Dueck (1989), in the absence of randomization the identification capacity, say C_{NRI}(W), equals the logarithm of the number of different row vectors of W. We generalize this to compound channels. A formidable problem arises if the DMC W is replaced by the AVC \mathcal{W}. In fact, for 0-1-matrices only in \mathcal{W} we are, exactly as for transmission, led to the equivalent zero-error capacity of Shannon. But for general \mathcal{W} the identification capacity C_{NRI}(\mathcal{W}) is quite different from the transmission capacity C(\mathcal{W}). An observation is that the separation codes of Ahlswede (1989) are also relevant here. We present a lower bound on C_{NRI}(\mathcal{W}). It implies, for instance, for \mathcal{W} = \{\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} \delta & 1-\delta \\ 1 & 0 \end{pmatrix}\}, \delta \in (0, \frac{1}{2}), that C_{NRI}(\mathcal{W}) = 1, which is obviously tight. It exceeds C(\mathcal{W}), which is known to exceed 1 - h(\delta), where h is the binary entropy function. We observe that a separation code with worst-case average list size \bar{L} (which we call an NRA code) can be partitioned into \bar{L} 2^{n\epsilon} transmission codes. This gives a non-single-letter characterization of the capacity of AVCs with maximal probability of error in terms of the capacity of codes with list decoding. We also prove that randomization in the decoding does not increase C_I(W) and C_{NRI}(W). Finally, we draw attention to related work on source coding.
Article
A study is made of the identification problem in the presence of a noiseless feedback channel, and the second-order capacity C_f (resp. C_F) for deterministic (resp. randomized) encoding strategies is determined. Several important phenomena are encountered: (1) although feedback does not increase the transmission capacity of a discrete memoryless channel (DMC), it does increase the (second-order) identification capacity; (2) noise increases C_f; (3) the structure of the new capacity formulas is simpler than C.E. Shannon's (1948) familiar formula. This has the effect that proofs of converses become easier than in the authors' previous work.