
Deterministic Identification Over Fading Channels

Mohammad J. Salariseddigh, Uzi Pereg, Holger Boche, and Christian Deppe
Institute for Communications Engineering, Technical University of Munich
Chair of Theoretical Information Technology, Technical University of Munich
Abstract—Deterministic identification (DI) is addressed for Gaussian channels with fast and slow fading, where channel side information is available at the decoder. In particular, it is established that the number of messages scales as 2^(n log(n) R), where n is the block length and R is the coding rate. Lower and upper bounds on the DI capacity are developed in this scale for fast and slow fading. Consequently, the DI capacity is infinite in the exponential scale and zero in the double-exponential scale, regardless of the channel noise.
Index Terms—Fading channels, identification without randomization, deterministic codes, super-exponential growth, channel side information.
I. INTRODUCTION
Modern communications require the transfer of enormous amounts of data in wireless systems, for cellular communication, sensor networks, smart appliances, the Internet of Things, etc. Wireless communication is often modelled by fading channels with AWGN [1], [2]. In the fast fading regime, the transmission spans a large number of coherence time intervals [2]; hence the signal attenuation is characterized by a stochastic process, i.e., a sequence of random parameters.
In some applications, the receiver may acquire channel side
information (CSI) by instantaneous estimation of the channel
parameters [3]. On the other hand, in the slow fading regime,
the latency is short compared to the coherence time [2], and
the behaviour is that of a compound channel.
In the fundamental point-to-point communication paradigm, a sender conveys a message through a noisy channel in such a manner that the receiver will retrieve the original message.
Ahlswede and Dueck [4] introduced a scenario of a different
nature where the decoder only performs identification and
determines whether a particular message was sent or not [4],
[5]. Applications include vehicle-to-X communications [6],
digital watermarking [7], molecular communications [8], [9]
and other event-triggered systems. In vehicle-to-X communi-
cations, a vehicle that collects sensor data may ask whether
a certain alert message, concerning the future movement of
an adjacent vehicle, was transmitted or not [10]. In molecular
communications (MC) [11], [12], information is transmitted
via chemical signals or molecules. In various environments,
e.g. inside the human body, conventional wireless commu-
nication with electromagnetic (EM) waves is not feasible or
could be detrimental. The research on micro-scale MC for
medical applications, such as intra-body networks, is still in
its early stages and faces many challenges. MC is a promising
contender for future applications such as 7G+.
The original identification setting by Ahlswede and Dueck
[4] requires randomized encoding, i.e., a randomized source
available to the sender. It is known that this resource cannot
increase the transmission capacity of discrete memoryless
channels [13]. A remarkable result of identification theory is
that given local randomness at the encoder, reliable identification can be attained such that the code size, i.e., the number of messages, grows doubly exponentially in the block length n, i.e., as 2^(2^(nR)) [4]. This differs sharply from the traditional transmission setting, where the code size scales only exponentially, i.e., as 2^(nR). Yet, the implementation of such a coding scale is
challenging, as it requires the encoder to process a bit string
of exponential length. The construction of identification codes
is considered in [5], [14], [15]. Identification for Gaussian
channels is considered in [16].
In the deterministic setup, given a discrete memoryless
channel (DMC), the number of messages grows exponentially
with the blocklength [4], [17]–[19], as in the traditional setting
of transmission. Nevertheless, the achievable identification
rates are higher than those of transmission. In addition, deterministic codes often have the advantage of simpler implementation and explicit construction. In particular, JáJá [18] showed
that the deterministic identification (DI) capacity of a binary
symmetric channel is 1 bit per channel use, as one can exhaust
the entire input space and assign (almost) all binary n-tuples
as codewords. The DI capacity in the literature is also referred
to as the non-randomized identification (NRI) capacity [17] or
the dID capacity [20]. Ahlswede et al. [4], [17] stated that the DI capacity of a DMC with a stochastic matrix W is given by the logarithm of the number of distinct row vectors of W. The DI ε-capacity of the Gaussian channel was determined by Burnashev [20].
In a recent work by the authors [21], we addressed deterministic identification for the DMC subject to an input constraint, and we have also shown that the DI capacity of the standard Gaussian channel, without fading, is infinite in the exponential scale. Our previous results [21] reveal a gap of knowledge in the following sense. For a finite blocklength n, the number of codewords must be finite. Thereby, the meaning of the infinite capacity result is that the number of messages scales super-exponentially. The question remains what the true order of the code size is. In mathematical terms, what is the scale L for which the DI capacity is positive yet finite? Here, we will answer this question.
In this paper, we consider deterministic identification for
Gaussian channels with fast fading and slow fading, where
channel side information (CSI) is available at the decoder.
2020 IEEE Information Theory Workshop (ITW) | 978-1-7281-5962-1/20/$31.00 ©2021 IEEE | DOI: 10.1109/ITW46852.2021.9457587

We show that for Gaussian channels, the number of messages scales as 2^(n log(n) R), and develop lower and upper bounds on
the DI capacity in this scale. As a consequence, we deduce
that the DI capacity of a Gaussian Channel with fast fading
is infinite in the exponential scale and zero in the double-
exponential scale, regardless of the channel noise. For slow
fading, the DI capacity in the exponential scale is infinite,
unless the fading gain can be zero or arbitrarily close to zero
(with positive probability), in which case the DI capacity is
zero. As opposed to RI coding, a double-exponential number of messages cannot be achieved with deterministic codes.
The results have the following geometric interpretation.
While identification allows overlap between decoding regions
[13], [16], overlap at the encoder is not allowed for determinis-
tic codes. We observe that when two messages are represented
by codewords that are close to one another, then identification
fails. Thus, deterministic coding imposes the restriction that
the codewords need to be distanced from each other. Based
on fundamental properties in lattice and group theory [22], the optimal packing of non-overlapping spheres of a fixed small radius contains an exponential number of spheres, and by decreasing the radius of the codeword spheres, the exponential rate can be made arbitrarily large. However, in the derivation of our lower bound, we show achievability of rates in the 2^(n log(n))-scale by using spheres of radius √n · n^(−1/4), which results in 2^((1/4) n log(n)) codewords. The full version of this paper with proofs can be found in [23].
II. DEFINITIONS AND RELATED WORK
In this section, we introduce the channel models and coding
definitions. We use the following notation: a vector is denoted by x = (x_1, x_2, . . . , x_n), and its ℓ2-norm by ‖x‖. The element-wise product is denoted by x ∘ y = (x_t y_t)_{t=1}^n. We denote the hyper-sphere of radius r centered at x_0 by S_{x_0}(n, r) = {x ∈ R^n : ‖x − x_0‖ ≤ r}, and the set of consecutive integers from 1 to M by [[M]]. The closure of a set A is denoted by cl(A).
A. Fading Channels
Consider a Gaussian channel,

Y = G ∘ x + Z    (1)

where G is a random sequence of fading coefficients and Z is an additive white Gaussian noise process, i.i.d. ~ N(0, σ_Z²) (see Figure 1). For fast fading, G is a sequence of i.i.d. continuous random variables ~ f_G, whereas for slow fading, the fading sequence remains constant throughout the transmission, i.e., G_t = G ~ f_G. It is assumed that the noise sequence Z and the sequence of fading coefficients G are statistically independent, and that the values of the fading coefficients belong to a bounded set G, either countable or uncountable. The transmission power is limited by

‖x‖² ≤ nA.    (2)
We consider codes with different size orders. For instance, when we discuss the exponential scale, we refer to a code size that scales as L(n, R) = 2^(nR). On the other hand, in the double-exponential scale, the code size is L(n, R) = 2^(2^(nR)). We say that a scale L_1 dominates another scale L_2 if lim_{n→∞} L_2(n, b)/L_1(n, a) = 0 for all a, b > 0.

Fig. 1. Deterministic identification over fading channels. For fast fading, G = (G_t)_{t=1}^n is a sequence of i.i.d. fading coefficients ~ f_G. For slow fading, the fading sequence remains constant throughout the transmission block, i.e., G_t = G.
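To make the scale comparison concrete, the following sketch (ours, not from the paper; the helper `log2_code_size` and the rate values are illustrative assumptions) compares the three code-size orders through their base-2 logarithms, which is how the dominance condition is most easily checked numerically.

```python
import math

# Compare the three code-size scales via their base-2 logarithms
# (working with log2 L avoids astronomically large numbers).
def log2_code_size(scale: str, n: int, rate: float) -> float:
    if scale == "exponential":          # L(n, R) = 2^(nR)
        return n * rate
    if scale == "superexponential":     # L(n, R) = 2^(n log2(n) R)
        return n * math.log2(n) * rate
    if scale == "double-exponential":   # L(n, R) = 2^(2^(nR))
        return 2.0 ** (n * rate)
    raise ValueError(scale)

# L1 dominates L2 iff L2(n, b) / L1(n, a) -> 0, i.e. the log2 gap -> -infinity.
for n in (64, 1024, 16384):
    gap = log2_code_size("exponential", n, 1.0) - log2_code_size("superexponential", n, 0.25)
    print(n, gap)  # ever more negative: the n log n scale dominates the exponential scale
```

The widening log-gap shows that 2^(n log(n) R) dominates 2^(nR) for any fixed positive rates, while the double-exponential scale dominates both.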
Definition 1. An (L(n, R), n) DI code for a Gaussian channel G_fast with CSI at the decoder, assuming L(n, R) is an integer, is defined as a system (U, D) which consists of a codebook U = {u_i}_{i ∈ [[L(n,R)]]}, U ⊂ R^n, such that ‖u_i‖² ≤ nA for all i ∈ [[L(n, R)]], and a collection of decoding regions D = {D_{i,g}}_{i ∈ [[L(n,R)]], g ∈ G^n} with ∪_{i=1}^{L(n,R)} D_{i,g} ⊂ R^n. Given a message i ∈ [[L(n, R)]], the encoder transmits u_i. The decoder's aim is to answer the following question: Was a desired message j sent or not? There are two types of errors that may occur: rejecting the true message, or accepting a false message. These are referred to as type I and type II errors, respectively. The error probabilities are given by

P_{e,1}(i) = 1 − ∫_{G^n} f_G(g) ∫_{D_{i,g}} f_Z(y − g ∘ u_i) dy dg    (3)

P_{e,2}(i, j) = ∫_{G^n} f_G(g) ∫_{D_{j,g}} f_Z(y − g ∘ u_i) dy dg.    (4)

An (L(n, R), n, λ_1, λ_2) DI code further satisfies P_{e,1}(i) ≤ λ_1 and P_{e,2}(i, j) ≤ λ_2 for all i, j ∈ [[L(n, R)]], i ≠ j. A rate R is called achievable if for every λ_1, λ_2 > 0 and sufficiently large n, there exists an (L(n, R), n, λ_1, λ_2) DI code. The operational DI capacity in the L-scale is defined as the supremum of achievable rates, and will be denoted by C_DI(G_fast, L).
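The error probabilities (3) and (4) can be estimated by Monte Carlo simulation for a toy code. The following sketch is ours: all parameters (block length, fading law, threshold δ) are illustrative assumptions, and the threshold decoder anticipates the distance decoder used later in the achievability proof.

```python
import random, math

# Hypothetical toy setup: n-dimensional fast-fading Gaussian channel, two codewords,
# and a distance decoder D_{j,g} = {y : ||y - g o u_j||^2 <= n (sigma2 + delta)}.
random.seed(0)
n, sigma2, delta = 200, 1.0, 0.3
u = [[1.0] * n, [-1.0] * n]          # two well-separated codewords, power nA with A = 1

def channel(x):
    g = [random.uniform(0.5, 1.5) for _ in range(n)]  # fading gains; 0 not in cl(G)
    return [g[t] * x[t] + random.gauss(0.0, math.sqrt(sigma2)) for t in range(n)], g

def accepts(y, g, j):
    return sum((y[t] - g[t] * u[j][t]) ** 2 for t in range(n)) <= n * (sigma2 + delta)

trials = 2000
p_e1 = sum(not accepts(*channel(u[0]), 0) for _ in range(trials)) / trials  # reject true message
p_e2 = sum(accepts(*channel(u[0]), 1) for _ in range(trials)) / trials      # accept false message
print(p_e1, p_e2)  # both small, since the codewords are far apart
```

Both empirical errors are small here precisely because the codewords are distant; Section IV makes this distance requirement quantitative.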
Coding for slow fading is defined in a similar manner. However, the errors are defined with a supremum over the values of the fading coefficient G ∈ G, namely,

P_{e,1}(i) = sup_{g ∈ G} [ 1 − ∫_{D_{i,g}} ∏_{t=1}^n f_Z(y_t − g u_{i,t}) dy ]    (5)

P_{e,2}(i, j) = sup_{g ∈ G} [ ∫_{D_{j,g}} ∏_{t=1}^n f_Z(y_t − g u_{i,t}) dy ].    (6)

The DI capacity of the Gaussian channel with slow fading is denoted by C_DI(G_slow, L).
As mentioned earlier, in the original identification setting, a
randomized source is available to the sender, and the encoder
may depend on the output of this source. A randomized-encoder identification (RI) code is defined similarly, where the encoder is allowed to select a codeword U_i at random according to some conditional input distribution Q(x^n | i). The RI capacity in the L-scale is then denoted by C_RI(·, L).
Remark 1. We establish the following property [23, see Lem. 3]. Suppose that L_1 is a scale that dominates another scale L_2. We note that, in general, if the capacity in the L_2-scale is finite, then it is zero in the L_1-scale. Conversely, if the capacity in the L_1-scale is positive, then the capacity in the L_2-scale is +∞.
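Remark 1 can be illustrated numerically. Suppose a channel admits on the order of 2^((1/4) n log2(n)) messages; the sketch below (our own illustration, assuming base-2 logarithms) computes the rates this code size induces in the three scales:

```python
import math

# If the number of messages is L(n) = 2^((1/4) n log2 n) (positive, finite rate in the
# n log n scale), the induced rate in the exponential scale diverges, while the rate
# in the double-exponential scale vanishes.
def rates(n):
    log2_L = 0.25 * n * math.log2(n)
    exp_rate = log2_L / n                        # log2 L / n          -> infinity
    superexp_rate = log2_L / (n * math.log2(n))  # log2 L / (n log2 n) = 1/4
    doubleexp_rate = math.log2(log2_L) / n       # log2 log2 L / n     -> 0
    return exp_rate, superexp_rate, doubleexp_rate

for n in (2 ** 8, 2 ** 12, 2 ** 16):
    print(n, rates(n))
```

Only the intermediate n log n scale yields a rate that is both positive and finite, which is exactly the behaviour the remark formalizes.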
Remark 2.Molecular communication (MC) technology has
recently advanced significantly [11]. The goal is to construct
complex networks, such as IoT, using MC. Nanotechnol-
ogy enables the development of nanothings, devices in the
nano-scale range, and Internet of NanoThings (IoNT), which
will be the basis for various future healthcare and military
applications [24]. Conventional electronic circuits and EM-
based communication could be harmful for some application
environments, such as inside the human body. Nanothings
can be biological cells, or bio-nanothings, that are created by
means of synthetic biology and nanotechnology techniques.
Similar to artificial nanothings, bio-nanothings have control
(cell nucleus), power (mitochondrion), communication (signal
pathways), and sensing/actuation (flagella, pili or cilia) units.
For inter-cellular communication, MC is especially well suited, due to the natural exchange of information. The communication task of identification is of significant interest for these applications. However, it is unclear whether RI codes can be incorporated there, as it is unclear how powerful random number generators could be implemented in synthetic materials on these small scales. Furthermore, for bio-nanothings, it is uncertain whether the natural biological processes can be controlled or reinforced by local randomness. Therefore, for the design of a synthetic IoNT, or for the analysis and utilization of the IoBNT, deterministic identification is applicable.
B. Related Work
We briefly review known results for the standard Gaussian
channel. We begin with the RI capacity, i.e., when the encoder uses a stochastic mapping. Let G denote the standard Gaussian channel, Y_t = g X_t + Z_t, where the gain g > 0 is a deterministic constant which is known to the encoder and the decoder. As mentioned, using RI codes, it is possible to identify a double-exponential number of messages in the block length n. Despite the significant difference between the definitions in the identification and transmission settings, it was shown that the value of the RI capacity in the double-exponential scale equals the Shannon capacity of transmission.

Theorem 1 (see [4], [16]). The RI capacity in the double-exponential scale of the standard Gaussian channel satisfies

C_RI(G, L) = (1/2) log(1 + g²A²/σ²),  for L(n, R) = 2^(2^(nR))    (7)

C_RI(G, L) = ∞,  for L(n, R) = 2^(nR).    (8)
In a recent paper by the authors [21], the deterministic case was considered in the exponential scale.

Theorem 2 (see [21]). The DI capacity in the exponential scale of the standard Gaussian channel is infinite, i.e.,

C_DI(G, L) = ∞,  for L(n, R) = 2^(nR).    (9)

Our results in [21] reveal a gap of knowledge in the following sense. For a finite blocklength n, the number of codewords must be finite. Thereby, Theorem 2 implies that the code size scales super-exponentially. The question remains what the order of the code size is. In mathematical terms, what is the scale L for which the DI capacity is positive yet finite? In the next section, we provide an answer to this question.
III. MAIN RESULTS
A. Main Result - Gaussian Channel with Fast Fading
Our DI capacity theorem for the Gaussian channel with fast
fading is stated below.
Theorem 3. Assume that the fading coefficients are positive, i.e., 0 ∉ cl(G). The DI capacity of the Gaussian channel G_fast with fast fading in the 2^(n log(n))-scale is given by

1/4 ≤ C_DI(G_fast, L) ≤ 1,  for L(n, R) = 2^(n log(n) R).    (10)

Hence, the DI capacity is infinite in the exponential scale and zero in the double-exponential scale, i.e.,

C_DI(G_fast, L) = ∞  for L(n, R) = 2^(nR),
C_DI(G_fast, L) = 0  for L(n, R) = 2^(2^(nR)).    (11)

The proofs for the lower and upper bounds are given in Section IV-A and Section IV-B, respectively. The second part of Theorem 3 is a direct consequence of Remark 1.
Next, we consider the Gaussian channel G_slow with slow fading.

Theorem 4. The DI capacity of the Gaussian channel G_slow with slow fading in the 2^(n log(n))-scale is bounded by

1/4 ≤ C_DI(G_slow, L) ≤ 1  if 0 ∉ cl(G),
C_DI(G_slow, L) = 0  if 0 ∈ cl(G),    (12)

for L(n, R) = 2^(n log(n) R). Hence,

C_DI(G_slow, L) = 0  if 0 ∈ cl(G), and ∞ if 0 ∉ cl(G),  for L(n, R) = 2^(nR),
C_DI(G_slow, L) = 0,  for L(n, R) = 2^(2^(nR)).    (13)

The proof of Theorem 4 is given in [23]. The proof is based on a similar technique as for fast fading.
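The role of the fading gain in Theorem 4 can be seen in a small experiment. The sketch below is a toy illustration of ours (all parameters assumed, using a distance decoder of the kind employed in Section IV): it estimates the type II error of a two-codeword code for a fixed slow-fading gain g. As g approaches 0, the decoder can no longer separate the codewords, which is why 0 ∈ cl(G) forces zero capacity.

```python
import random, math

# Toy slow-fading channel: the whole block sees one scalar gain g. For g near 0,
# the received signals for two distinct codewords become indistinguishable, and the
# type II (false acceptance) error tends to the noise-only acceptance probability.
random.seed(2)
n, sigma2, delta, trials = 200, 1.0, 0.3, 2000
u = [[1.0] * n, [-1.0] * n]

def type2_error(g: float) -> float:
    # send u[0], test the decoder for u[1] with threshold n * (sigma2 + delta)
    hits = 0
    for _ in range(trials):
        y = [g * u[0][t] + random.gauss(0.0, math.sqrt(sigma2)) for t in range(n)]
        dist = sum((y[t] - g * u[1][t]) ** 2 for t in range(n))
        hits += dist <= n * (sigma2 + delta)
    return hits / trials

for g in (1.0, 0.3, 0.01):
    print(g, type2_error(g))  # grows toward 1 as the slow-fading gain approaches 0
```

Under the supremum in (6), a single bad gain value is enough, so allowing gains arbitrarily close to zero destroys identifiability.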
IV. PROOF OF THEOREM 3
A. Lower Bound
We show that the DI capacity is bounded by C_DI(G_fast, L) ≥ 1/4 for L(n, R) = 2^(n log(n) R). Achievability is established using a dense packing arrangement and a simple distance-decoder. A DI code for the Gaussian channel G_fast with fast fading is constructed as follows. Consider the normalized input-output relation,

Ȳ = G ∘ x̄ + Z̄    (14)

where the noise sequence Z̄ is i.i.d. ~ N(0, σ_Z²/n), and an input power constraint

‖x̄‖ ≤ √A    (15)

with x̄ = x/√n, Z̄ = Z/√n, and Ȳ = Y/√n. Assuming 0 ∉ cl(G), there exists a positive number γ such that |G_t| > γ for all t with probability 1.
Codebook construction: We use a packing arrangement of non-overlapping hyper-spheres of radius √ε_n that cover a hyper-sphere of radius (√A − √ε_n), with

ε_n = A / n^((1−b)/2)    (16)

where b > 0 is arbitrarily small. The small spheres are not necessarily entirely contained within the bigger sphere. The packing density is defined as the fraction of the big sphere's volume that is covered by the small spheres. Based on the Minkowski-Hlawka theorem [22] in lattice theory, there exists an arrangement ∪_{i=1}^{2^(n log(n) R)} S_{u_i}(n, √ε_n) inside S_0(n, √A − √ε_n) with a density of at least 2^(−n). Specifically, consider a saturated packing arrangement in R^n with spheres of radius √ε_n, i.e., such that no sphere can be added without overlap. Then, for such an arrangement, there cannot be a point in the big sphere with a distance of more than 2√ε_n from all sphere centers; otherwise, a new sphere could be added. As a consequence, if we double the radius of each sphere, the 2√ε_n-radius spheres cover the whole sphere of radius (√A − √ε_n). Doubling the radius multiplies the volume by 2^n. This, in turn, implies that the original √ε_n-radius packing arrangement has a density of at least 2^(−n). We assign a codeword to the center u_i of each small sphere. Since the small spheres have the same volume, the total number of spheres, i.e., the codebook size, is roughly 2^(n log(n) R) ≈ 2^(−n) · (A/ε_n)^(n/2). To be precise,

R ≥ (1/(2 log(n))) log(A/ε_n) − 2/log(n) = (1−b)/4 − 2/log(n)

which tends to 1/4 when n → ∞ and b → 0.
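The rate computation above can be checked numerically. The following sketch is ours (base-2 logarithms assumed for illustration); it evaluates the achievable-rate lower bound R ≥ (1−b)/4 − 2/log2(n) directly from the sphere-radius choice (16):

```python
import math

# With sphere radius sqrt(eps_n) and eps_n = A / n^((1-b)/2), the packing yields
#   R >= (1 / (2 log2 n)) * log2(A / eps_n) - 2 / log2 n = (1 - b)/4 - 2/log2 n,
# which approaches 1/4 as n -> infinity and b -> 0.
def achievable_rate(n: int, b: float, A: float = 1.0) -> float:
    eps_n = A / n ** ((1.0 - b) / 2.0)
    return math.log2(A / eps_n) / (2.0 * math.log2(n)) - 2.0 / math.log2(n)

for n in (2 ** 10, 2 ** 20, 2 ** 40):
    print(n, achievable_rate(n, b=0.01))  # climbs toward 1/4
```

The 2/log2(n) correction vanishes slowly, which is why the bound only approaches 1/4 in the limit.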
Encoding: Given a message i ∈ [[L(n, R)]], send x̄ = ū_i.

Decoding: Let δ_n = γ²ε_n/3. To identify whether a message j ∈ [[L(n, R)]] was sent, given the sequence g, the decoder checks whether the channel output ȳ belongs to

D_{j,g} = {ȳ ∈ R^n : ‖ȳ − g ∘ ū_j‖ ≤ √(σ_Z² + δ_n)}.    (17)

Error analysis: Consider the type I error, i.e., when the transmitter sends ū_i, yet Ȳ ∉ D_{i,G}. For every i ∈ [[L(n, R)]], the type I error probability is bounded by

P_{e,1}(i) = Pr( ‖Ȳ − G ∘ ū_i‖² > σ_Z² + δ_n | x̄ = ū_i ) = Pr( ‖Z̄‖² > σ_Z² + δ_n ) ≤ 3σ_Z⁴ / (δ_n² n)    (18)

by Chebyshev's inequality. This tends to zero as n → ∞ since δ_n ∝ n^(−(1−b)/2).
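The concentration behind the type I bound can be checked by Monte Carlo simulation. The sketch below is ours, with illustrative parameters; for clarity it uses a fixed threshold δ rather than the shrinking δ_n of the proof.

```python
import random, math

# With normalized noise Zbar_t ~ N(0, sigma2/n), ||Zbar||^2 concentrates at sigma2,
# and Chebyshev gives Pr(||Zbar||^2 > sigma2 + delta) <= 3 sigma2^2 / (delta^2 n).
# A fixed delta is used here; in the proof, delta_n itself shrinks with n.
random.seed(1)
sigma2, delta, trials = 1.0, 0.2, 1000

def tail_probability(n: int) -> float:
    hits = 0
    for _ in range(trials):
        sq = sum(random.gauss(0.0, math.sqrt(sigma2 / n)) ** 2 for _ in range(n))
        hits += sq > sigma2 + delta
    return hits / trials

for n in (100, 400, 1600):
    emp = tail_probability(n)
    bound = 3.0 * sigma2 ** 2 / (delta ** 2 * n)
    print(n, emp, bound)  # the empirical tail stays below the Chebyshev bound
```

With the shrinking δ_n of the proof, the bound still vanishes, but only at rate n^(−b), which is why b must be kept strictly positive.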
Next, we address the type II error, i.e., when Ȳ ∈ D_{j,G} while the transmitter sent ū_i. Then, for every i, j ∈ [[L(n, R)]], where i ≠ j, the type II error probability is given by

P_{e,2}(i, j) = Pr( ‖G ∘ (ū_i − ū_j) + Z̄‖² ≤ σ_Z² + δ_n ).    (19)

Observe that the square norm can be expressed as

‖G ∘ (ū_i − ū_j) + Z̄‖² = ‖G ∘ (ū_i − ū_j)‖² + ‖Z̄‖² + 2 ∑_{t=1}^n G_t (ū_{i,t} − ū_{j,t}) Z̄_t.    (20)

Furthermore, by Chebyshev's inequality, the probability of the event { ∑_{t=1}^n G_t (ū_{i,t} − ū_{j,t}) Z̄_t < −δ_n/2 } is bounded by 4σ_Z²(σ_G² + µ_G²)·4A / (δ_n² n) ≤ η_1. Therefore, for sufficiently large n,

P_{e,2}(i, j) ≤ Pr( ‖G ∘ (ū_i − ū_j)‖² + ‖Z̄‖² ≤ σ_Z² + 2δ_n ) + η_1.

Since each codeword is surrounded by a sphere of radius √ε_n, we have ‖G ∘ (ū_i − ū_j)‖² ≥ γ²ε_n. Thus,

P_{e,2}(i, j) ≤ Pr( ‖Z̄‖² ≤ σ_Z² + 2δ_n − γ²ε_n ) + η_1 = Pr( ‖Z̄‖² − σ_Z² ≤ −δ_n ) + η_1 ≤ 2η_1    (21)

since 2δ_n − γ²ε_n = −δ_n. The proof follows by taking the limits n → ∞, b → 0.
B. Upper Bound (Converse Proof)

We show that the capacity is bounded by C_DI(G_fast, L) ≤ 1. Suppose that R is an achievable rate.

Lemma 5. For sufficiently large n, every pair of codewords is distanced by at least √ε′_n, i.e., ‖u_{i_1} − u_{i_2}‖ ≥ √ε′_n, where

ε′_n = A / n²    (22)

for all i_1, i_2 ∈ [[L(n, R)]] such that i_1 ≠ i_2.
Proof. Let κ, η > 0 be arbitrarily small. Assume to the contrary that there exist two messages i_1 and i_2, where i_1 ≠ i_2, such that ‖u_{i_1} − u_{i_2}‖ < √ε′_n = √A/n. Then, there exists b > 0 such that ‖u_{i_1} − u_{i_2}‖ ≤ α_n, where α_n = √A / n^((1+2b)/2). Observe that E{‖G ∘ (u_{i_1} − u_{i_2})‖²} = E{G²}·‖u_{i_1} − u_{i_2}‖², and consider the subspace

A_{i_1,i_2} = {g ∈ G^n : ‖g ∘ (u_{i_1} − u_{i_2})‖ > δ′_n}    (23)

where δ′_n = √A / n^((1+b)/2). By Markov's inequality,

Pr(G ∈ A_{i_1,i_2}) ≤ E{G²} α_n² / δ′_n² = E{G²} / n^b ≤ κ.    (24)

Therefore,

1 − P_{e,1}(i_1) = ∫_{G^n} f_G(g) ∫_{D_{i_1,g}} f_Z(y − g ∘ u_{i_1}) dy dg ≤ ∫_{A^c_{i_1,i_2}} f_G(g) ∫_{D_{i_1,g}} f_Z(y − g ∘ u_{i_1}) dy dg + κ.    (25)

Next, we bound the inner integral by

∫_{D_{i_1,g} ∩ B_{i_2}} f_Z(y − g ∘ u_{i_1}) dy + ∫_{B^c_{i_2}} f_Z(y − g ∘ u_{i_1}) dy    (26)

with B_{i_2} = {y : ‖y − g ∘ u_{i_2}‖ ≤ √(n(σ² + ζ))}. Consider the second term in (26). By the triangle inequality,

‖y − g ∘ u_{i_1}‖ ≥ ‖y − g ∘ u_{i_2}‖ − ‖g ∘ (u_{i_2} − u_{i_1})‖ ≥ √(n(σ² + ζ)) − δ′_n ≥ √(n(σ² + η))    (27)

for g ∈ A^c_{i_1,i_2}, large n, and η < ζ/2, following the definitions of A_{i_1,i_2} and B_{i_2}. We deduce that the second term in (26) is bounded by

∫_{{y : ‖y − g ∘ u_{i_1}‖ > √(n(σ² + η))}} f_Z(y − g ∘ u_{i_1}) dy = Pr( ‖Z‖² > n(σ² + η) ) ≤ 3σ⁴ / (η² n) ≤ κ    (28)

by Chebyshev's inequality. Moving to the first integral, by the triangle inequality, ‖y − g ∘ u_{i_1}‖ ≤ ‖y − g ∘ u_{i_2}‖ + ‖g ∘ (u_{i_1} − u_{i_2})‖. Taking the square of both sides yields

‖y − g ∘ u_{i_1}‖² ≤ ‖y − g ∘ u_{i_2}‖² + δ′_n² + 2√(A(σ² + ζ)) / n^(b/2)

by the definitions of A_{i_1,i_2} and B_{i_2}. Thus, for sufficiently large n,

f_Z(y − g ∘ u_{i_1}) ≥ f_Z(y − g ∘ u_{i_2}) − κ f_Z(y − g ∘ u_{i_1})

which, in turn, implies

P_{e,1}(i_1) + P_{e,2}(i_2, i_1) ≥ 1 − 2κ − κ ∫_{A^c_{i_1,i_2}} f_G(g) ∫_{D_{i_1,g} ∩ B_{i_2}} f_Z(y − g ∘ u_{i_1}) dy dg ≥ 1 − 3κ.    (29)

Hence, the assumption is false.
By Lemma 5, we can define an arrangement of non-overlapping spheres S_{u_i}(n, √ε′_n) of radius √ε′_n centered at the codewords u_i. Since the codewords all belong to a sphere S_0(n, √(nA)) of radius √(nA) centered at the origin, it follows that the number of packed spheres, i.e., the number of codewords 2^(n log(n) R), is bounded by

2^(n log(n) R) ≤ ( (√A + √ε′_n) / √ε′_n )^n.

Thus,

R ≤ (1/log(n)) log( (√A + √ε′_n) / √ε′_n ) = log(n + 1) / log(n)    (30)

which tends to 1 as n → ∞. This completes the proof of Theorem 3. Further details are given in [23].
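The converse count can likewise be evaluated numerically. The sketch below is ours (base-2 logarithms assumed); it computes the upper bound R ≤ log2(n+1)/log2(n) implied by (30) for the minimum-distance choice ε′_n = A/n²:

```python
import math

# With minimum codeword distance sqrt(eps'_n) and eps'_n = A / n^2, the sphere count
# gives R <= (1 / log2 n) * log2((sqrt(A) + sqrt(eps'_n)) / sqrt(eps'_n))
#          = log2(n + 1) / log2(n), which tends to 1 as n -> infinity.
def converse_rate(n: int, A: float = 1.0) -> float:
    eps = A / n ** 2
    return math.log2((math.sqrt(A) + math.sqrt(eps)) / math.sqrt(eps)) / math.log2(n)

for n in (2 ** 4, 2 ** 10, 2 ** 20):
    print(n, converse_rate(n))  # decreases toward 1
```

Note that the bound is independent of the power A, since the ratio (√A + √ε′_n)/√ε′_n collapses to n + 1.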
ACKNOWLEDGMENTS

We gratefully thank Andreas Winter, Ning Cai, and Robert Schober for useful discussions. Mohammad J. Salariseddigh, Uzi Pereg, and Christian Deppe were supported by grant 16KIS1005 (LNT, NEWCOM). Holger Boche was supported by grant 16KIS1003K (LTI, NEWCOM) and by the BMBF within the national initiative for Molecular Communications (MAMOKO) under grant 16KIS0914.
REFERENCES

[1] M. Li, H. Yin, Y. Huang, and Y. Wang, "Impact of correlated fading channels on cognitive relay networks with generalized relay selection," IEEE Access, vol. 6, pp. 6040–6047, 2018.
[2] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[3] A. J. Goldsmith and P. P. Varaiya, "Capacity of fading channels with channel side information," IEEE Trans. Inf. Theory, vol. 43, no. 6, pp. 1986–1992, 1997.
[4] R. Ahlswede and G. Dueck, "Identification via channels," IEEE Trans. Inf. Theory, vol. 35, no. 1, pp. 15–29, 1989.
[5] S. Derebeyoğlu, C. Deppe, and R. Ferrara, "Performance analysis of identification codes," Entropy, vol. 22, no. 10, p. 1067, 2020.
[6] K. Guan, B. Ai, M. Liso Nicolás, R. Geise, A. Möller, Z. Zhong, and T. Kürner, "On the influence of scattering from traffic signs in vehicle-to-X communications," IEEE Trans. Veh. Technol., vol. 65, no. 8, pp. 5835–5849, 2016.
[7] Y. Steinberg and N. Merhav, "Identification in the presence of side information with application to watermarking," IEEE Trans. Inf. Theory, vol. 47, no. 4, pp. 1410–1422, 2001.
[8] S. Bush, J. Paluh, G. Piro, V. S. Rao, R. Prasad, and A. Eckford, "Defining communication at the bottom," IEEE Trans. Mol. Biol. Multi-Scale Commun., vol. 1, pp. 90–96, 2015.
[9] W. Haselmayr, A. Springer, G. Fischer, C. Alexiou, H. Boche, P. Höher, F. Dressler, and R. Schober, "Integration of molecular communications into future generation wireless networks," in 6G Wireless Summit, Levi, Lapland, Finland, Mar. 2019.
[10] H. Boche and C. Deppe, "Robust and secure identification," in Proc. IEEE Int. Symp. Inf. Theory, 2017, pp. 1539–1543.
[11] T. Nakano, M. J. Moore, F. Wei, A. V. Vasilakos, and J. Shuai, "Molecular communication and networking: Opportunities and challenges," IEEE Trans. Nanobiosci., vol. 11, no. 2, pp. 135–148, 2012.
[12] N. Farsad, H. B. Yilmaz, A. Eckford, C. Chae, and W. Guo, "A comprehensive survey of recent advancements in molecular communication," IEEE Commun. Surveys Tuts., vol. 18, no. 3, pp. 1887–1919, 2016.
[13] A. Ahlswede, I. Althöfer, C. Deppe, and U. Tamm (Eds.), Identification and Other Probabilistic Models, Rudolf Ahlswede's Lectures on Information Theory 6, 1st ed., ser. Found. Signal Process., Commun. Netw. Springer Verlag, 2020, vol. 15, to appear.
[14] S. Verdú and V. K. Wei, "Explicit construction of optimal constant-weight codes for identification via channels," IEEE Trans. Inf. Theory, vol. 39, no. 1, pp. 30–36, 1993.
[15] J. Bringer, H. Chabanne, G. Cohen, and B. Kindarji, "Identification codes in cryptographic protocols," in IEEE Inf. Theory Workshop, 2010, pp. 1–5.
[16] W. Labidi, C. Deppe, and H. Boche, "Secure identification for Gaussian channels," in IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2020, pp. 2872–2876.
[17] R. Ahlswede and N. Cai, "Identification without randomization," IEEE Trans. Inf. Theory, vol. 45, no. 7, pp. 2636–2642, 1999.
[18] J. JáJá, "Identification is easier than decoding," in Ann. Symp. Found. Comput. Sci. (SFCS), 1985, pp. 43–50.
[19] M. V. Burnashev, "On the method of types and approximation of output measures for channels with finite alphabets," Probl. Inf. Transm., vol. 36, no. 3, pp. 195–212, 2000.
[20] M. V. Burnashev, "On identification capacity of infinite alphabets or continuous-time channels," IEEE Trans. Inf. Theory, vol. 46, no. 7, pp. 2407–2414, 2000.
[21] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, "Deterministic identification over channels with power constraints," submitted to IEEE Int. Conf. Commun. (ICC), 2020. [Online]. Available: https://arxiv.org/pdf/2010.04239.pdf
[22] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups. Springer Science & Business Media, 2013, vol. 290.
[23] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, "Deterministic identification over fading channels," arXiv preprint arXiv:2010.10010, 2020. [Online]. Available: https://arxiv.org/pdf/2010.10010.pdf
[24] F. Dressler and S. Fischer, "Connecting in-body nano communication with body area networks: Challenges and opportunities of the internet of nano things," Nano Commun. Netw., vol. 6, pp. 29–38, 2015.
... More interestingly, and directly motivating our present work, in certain channels with continuous input alphabets, it was found that deterministic identification codes are governed by a slightly superexponential scaling in block length. Concretely, via fast and slow fading Gaussian channels [9], [13] and over Poisson channels [14] optimal DI codes grow as N (n, λ 1 , λ 1 ) ∼ 2 Rn log n . We are thus motivated to define the slightly superexponential capacity aṡ C DI (W ) := inf λ1,λ2>0 lim inf n→∞ 1 n log n log N DI (n, λ 1 , λ 2 ). ...
... We are thus motivated to define the slightly superexponential capacity aṡ C DI (W ) := inf λ1,λ2>0 lim inf n→∞ 1 n log n log N DI (n, λ 1 , λ 2 ). (7) Indeed, the superexponential DI capacity has been bounded for fast-and slow-fading Gaussian channels G [9], [13] (with a recently improved upper bound in [15]) and for the Poisson channel P [14], [16] as follows: ...
... To realise the above argument, it is enough to fix a line segment in the input parameter space that satisfies the power constraints in its entirety. By reduction to the input-restricted Bernoulli channel we thus reproduce the prior achievability resultsĊ DI (G) ≥ 1 4 [9], [13] andĊ DI (P ) ≥ 1 4 [14], [16]. ...
Article
Full-text available
Following initial work by JaJa, Ahlswede and Cai, and inspired by a recent renewed surge in interest in deterministic identification (DI) via noisy channels, we consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Such a channel is essentially given by its output distributions as a subset in the probability simplex. Our main findings are that the maximum length of messages thus identifiable scales superlinearly as Rn log n with the block length n, and that the optimal rate R is bounded in terms of the covering (aka Minkowski, or Kolmogorov, or entropy) dimension d of a certain algebraic transformation of the output set: 1/4 d ≤ R ≤ 1/2 d . Remarkably, both the lower and upper Minkowski dimensions play a role in this result. Along the way, we present a Hypothesis Testing Lemma showing that it is sufficient to ensure pairwise reliable distinguishability of the output distributions to construct a DI code. Although we do not know the exact capacity formula, we can conclude that the DI capacity exhibits superactivation: there exist channels whose capacities individually are zero, but whose product has positive capacity. We also generalise these results to classical-quantum channels with finite-dimensional output quantum system, in particular to quantum channels on finite-dimensional quantum systems under the constraint that the identification code can only use tensor product inputs.
... It enables the use of randomized ID codes and allows the ID capacity of DMCs to grow doubly exponentially with code length [19]. Many studies have explored the ID over continuous channels [22], [23], [24]. It has been shown that infinite CR can be generated from Gaussian source [25]. ...
... In the case of continuous channels, the deterministic ID rate is defined w.r.t. ϕ 2 [40], [24], [41]. A "slower" scaling leads to an infinite rate, while a "faster" scaling results in a zero rate. ...
Preprint
We investigate message identification over a K-sender Gaussian multiple access channel (K-GMAC). Unlike conventional Shannon transmission codes, the size of randomized identification (ID) codes experiences a doubly exponential growth in the code length. Improvements in the ID approach can be attained through additional resources such as quantum entanglement, common randomness (CR), and feedback. It has been demonstrated that an infinite capacity can be attained for a single-user Gaussian channel with noiseless feedback, irrespective of the chosen rate scaling. We establish the capacity region of both the K-sender Gaussian multiple access channel (K-GMAC) and the K-sender state-dependent Gaussian multiple access channel (K-SD-GMAC) when strictly causal noiseless feedback is available.
... In some communication settings, the coding scale is larger for continuous-variable channels. For example, in deterministic identification, the code size is super-exponential and scales as 2 n log nR for Gaussian channels [44] and Poisson channels [45]. On the other hand, deterministic identification is limited to an exponential scale for finite-dimensional channels [46]. ...
Article
Full-text available
We consider entanglement-assisted communication over the qubit depolarizing channel under the security requirement of covert communication, where the transmission itself must be concealed from detection by an adversary. Previous work showed that O (√ n ) information bits can be reliably and covertly transmitted in n channel uses without entanglement assistance. However, Gagatsos et al. (2020) showed that entanglement assistance can increase this scaling to O (√ n log n ) for continuous-variable bosonic channels. Here, we present a finite-dimensional parallel, and show that O (√ n log n ) covert bits can be transmitted reliably over n uses of a qubit depolarizing channel. The coding scheme employs “weakly” entangled states such that the squared amplitude scales as O (1/√ n ).
... Also, surprisingly, certain channels with continuous input alphabets have DI codes governed by a slightly superlinear scaling in the block length: log N ∼ Rn log n. This was first observed for Gaussian channels [7], Gaussian channels with both fast and slow fading [11], [12], and Poisson channels [13], [14]. This behaviour differs from that of randomized identification, where the scaling remains the same as for DMCs [15], [16]. ...
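To make the gap between these coding scales concrete, the following sketch compares the logarithm of the code size N under the three regimes discussed above: exponential (Shannon transmission, and DI over DMCs), the slightly superlinear DI scale 2^{n log(n) R} for Gaussian and Poisson channels, and the double-exponential scale of randomized identification. The function name and chosen parameters are illustrative, not taken from any of the cited papers.

```python
import math

def log2_num_messages(n, R, scale):
    """Return log2 of the code size N for block length n and rate R
    under one of the three scalings discussed above (illustrative only)."""
    if scale == "transmission":      # N = 2^{nR}: Shannon transmission, DI over DMCs
        return n * R
    if scale == "deterministic-id":  # N = 2^{n log(n) R}: DI over Gaussian/Poisson channels
        return n * math.log2(n) * R
    if scale == "randomized-id":     # N = 2^{2^{nR}}: randomized identification
        return 2 ** (n * R)
    raise ValueError(scale)

# Even at a modest block length the scales separate dramatically:
n, R = 64, 0.5
for scale in ("transmission", "deterministic-id", "randomized-id"):
    print(scale, log2_num_messages(n, R, scale))
```

At n = 64 and R = 0.5 the exponents are 32, 192, and 2^32 bits respectively, which is why the DI capacity of the Gaussian channel is infinite in the exponential scale yet zero in the double-exponential scale.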
Preprint
We investigate deterministic identification over arbitrary memoryless channels under the constraint that the error probabilities of first and second kind are exponentially small in the block length n, controlled by reliability exponents E_1, E_2 ≥ 0. In contrast to the regime of slowly vanishing errors, where the identifiable message length scales as Θ(n log n), here we find that for positive exponents linear scaling is restored, now with a rate that is a function of the reliability exponents. We give upper and lower bounds on the ensuing rate-reliability function in terms of (the logarithm of) the packing and covering numbers of the channel output set, which for small error exponents E_1, E_2 > 0 can be expanded in leading order as the product of the Minkowski dimension of a certain parametrisation of the channel output set and log min{E_1, E_2}. These bounds allow us to recover the previously observed slightly superlinear identification rates, and offer a different perspective for understanding them in more traditional information-theoretic terms. We further illustrate our results with a discussion of the case of dimension zero, and extend them to classical-quantum channels and quantum channels with tensor-product input restriction.
Chapter
Motivated by deterministic identification via classical channels, where the encoder is not allowed to use randomization, we revisit the problem of identification via quantum channels but now with the additional restriction that the message encoding must use pure quantum states, rather than general mixed states. Together with the previously considered distinction between simultaneous and general decoders, this suggests a two-dimensional spectrum of different identification capacities, whose behaviour could a priori be very different. We demonstrate two new results as our main findings: first, we show that all four combinations (pure/mixed encoder, simultaneous/general decoder) have a double-exponentially growing code size, and that indeed the corresponding identification capacities are lower bounded by the classical transmission capacity for a general quantum channel, which is given by the Holevo-Schumacher-Westmoreland Theorem. Secondly, we show that the simultaneous identification capacity of a quantum channel equals the simultaneous identification capacity with pure state encodings, thus leaving three linearly ordered identification capacities. By considering some simple examples, we finally show that these three are all different: general identification capacity can be larger than pure-state-encoded identification capacity, which in turn can be larger than pure-state-encoded simultaneous identification capacity.
Preprint
We study message identification over the binary noisy permutation channel. For discrete memoryless channels (DMCs), the number of identifiable messages grows doubly exponentially, and the maximum second-order exponent is the Shannon capacity of the DMC. We consider a binary noisy permutation channel where the transmitted vector is first permuted by a permutation chosen uniformly at random, and then passed through a binary symmetric channel with crossover probability p. In an earlier work, it was shown that 2^{c_n n} messages can be identified over the binary (noiseless) permutation channel if c_n → 0. For the binary noisy permutation channel, we show that message sizes growing as 2^{ε_n √(n/log n)} are identifiable for any ε_n → 0. We also prove a strong converse result showing that for any sequence of identification codes with message size 2^{R_n √n log n}, where R_n → ∞, the sum of Type-I and Type-II error probabilities approaches at least 1 as n → ∞. Our proof of the strong converse uses the idea of channel resolvability. The channel of interest turns out to be the "binary weight-to-weight (BWW) channel", which captures the effect on the Hamming weight of a vector when the vector is passed through a binary symmetric channel. We propose a novel deterministic quantization scheme for the quantization of a distribution over {0, 1, …, n} by an M-type input distribution, where the distortion is measured on the output distribution (over the BWW channel) in total variation distance. This plays a key role in the converse proof.
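Since the random permutation destroys all positional information, only the Hamming weight of the transmitted vector survives, which is what the abstract's "binary weight-to-weight (BWW) channel" captures. A minimal simulation sketch of that reduction (a toy illustration, not the paper's construction): each of the w ones flips with probability p, and each of the n − w zeros flips with probability p.

```python
import random

def bww_channel(n, w, p, rng=random):
    """Binary weight-to-weight channel: sample the Hamming weight of the output
    when a weight-w binary vector of length n passes through a BSC(p).
    Ones flip to zeros w.p. p; zeros flip to ones w.p. p."""
    ones_flipped = sum(rng.random() < p for _ in range(w))
    zeros_flipped = sum(rng.random() < p for _ in range(n - w))
    return w - ones_flipped + zeros_flipped
```

For p = 0 the weight is unchanged, and for p = 1 every bit flips, so a weight-w input deterministically becomes weight n − w.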
Article
The deterministic identification (DI) capacity is developed in multiple settings of channels with power constraints. A full characterization is established for the DI capacity of the discrete memoryless channel (DMC) with and without input constraints. Originally, Ahlswede and Dueck established the identification capacity with local randomness at the encoder, resulting in a double exponential number of messages in the block length n. In the deterministic setup, the number of messages scales exponentially, as in Shannon's transmission paradigm, but the achievable identification rates are higher. An explicit proof was not provided for the deterministic setting. In this paper, a detailed proof is presented for the DMC. Furthermore, Gaussian channels with fast and slow fading are considered, when channel side information is available at the decoder. A new phenomenon is observed as we establish that the number of messages scales as 2^{n log(n) R} by deriving lower and upper bounds on the DI capacity on this scale. Consequently, the DI capacity of the Gaussian channel is infinite in the exponential scale and zero in the double exponential scale, regardless of the channel noise.
Article
In this paper, we analyze the construction of identification codes. Identification codes are based on the question: “Is the message I have just received the one I am interested in?”, as opposed to Shannon’s transmission, where the receiver is interested in not only one, but any, message. The advantage of identification is that it allows rates growing double exponentially in the blocklength at the cost of not being able to decode every message, which might be beneficial in certain applications. We focus on a special identification code construction based on two concatenated Reed-Solomon codes and have a closer look at its implementation, analyzing the trade-offs of identification with respect to transmission and the trade-offs introduced by the computational cost of identification codes.
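The core idea behind Reed-Solomon-based identification can be sketched in a few lines: a message is mapped to the coefficients of a polynomial over a finite field, the sender transmits a random evaluation position together with the polynomial's value there, and a receiver interested in one particular message simply checks consistency. The toy below is a single-layer, noiseless sketch of that general idea (field size Q and message length K are illustrative assumptions); the paper's actual construction concatenates two Reed-Solomon codes and operates over a noisy channel.

```python
import random

Q = 101          # prime field size (toy choice); there are Q**K messages
K = 4            # a message is K coefficients of a degree-(K-1) polynomial

def tag(message, i, q=Q):
    """Evaluate the message polynomial at position i over GF(q)."""
    return sum(c * pow(i, d, q) for d, c in enumerate(message)) % q

def send(message, rng=random):
    """Sender: pick a random evaluation position, transmit (position, tag)."""
    i = rng.randrange(Q)
    return i, tag(message, i)

def identify(candidate, received):
    """Receiver: is the received (position, tag) pair consistent with the one
    message we care about? Two distinct degree-(K-1) polynomials agree on at
    most K-1 points, so a wrong message passes with probability <= (K-1)/Q."""
    i, t = received
    return tag(candidate, i) == t
```

The double-exponential gain of randomized identification comes from iterating this tagging idea; even this toy version shows the characteristic trade-off, since a false positive occurs exactly when the random position lands on one of the at most K − 1 agreement points.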
Article
In this paper, we investigate the secrecy performance of dual-hop decode-and-forward cognitive relay networks taking into account the channel correlation between the main and wiretapping channels. For the enhancement of the secrecy performance, a generalized relay selection scheme is utilized, where the k-th strongest relay node is selected based on the main channel. In order to analyze the impact of key parameters on the secrecy performance, we first derive the exact closed-form expression for the secrecy outage probability (SOP) of the considered networks. Moreover, to extract deep insights, the asymptotic approximation for the SOP in the high main-to-eavesdropper ratio (MER) regime is also provided. Our theoretical results as well as simulations demonstrate that: 1) the channel correlation does not affect the achievable secrecy diversity order, but has a positive impact on the secrecy coding gain in the high MER regime; 2) the secrecy diversity order is decided by the generalized selection coefficient and the number of relays that can successfully decode the information transmitted from the secondary transmitter; and 3) the total number of relays cannot influence the secrecy diversity order directly, but has a significant impact on the secrecy coding gain.
Article
Nanoscale communication is expected to offer unprecedented benefits. However, lack of a precise definition and general framework for nanoscale communication has resulted in limited impact and dissipated effort. The IEEE P1906.1/Draft 1.0 Recommended Practice for Nanoscale and Molecular Communication Framework provides the precise, common definition of nanoscale communication and a standard, general framework. The definition of nanoscale communication must carefully depict the field so that it captures the unique aspects of small-scale physics with respect to communication. Both the definition and framework must be broad enough to cover the scope of cross-disciplinary technologies that may be utilized while simultaneously be precise enough to allow for interoperable and reusable components. The implication for the field is significant. The IEEE P1906.1/Draft 1.0 Recommended Practice for Nanoscale and Molecular Communication Framework will enable diverse disciplines to have a common language and reference for making lasting contributions. A lasting impact will be possible because others can now build upon these results through rational design and synthesis.
Conference Paper
In this paper, we discuss the potential of integrating molecular communication (MC) systems into future generations of wireless networks. First, we explain the advantages of MC compared to conventional wireless communication using electromagnetic waves at different scales, namely at micro- and macroscale. Then, we identify the main challenges when integrating MC into future-generation wireless networks. We highlight that two of the greatest challenges are the interface between the chemical and the cyber (Internet) domain, and ensuring communication security. Finally, we present some future applications, such as smart infrastructure and health monitoring, give a timeline for their realization, and point out some areas of research towards the integration of MC into 6G and beyond.
Book
The sixth volume of Rudolf Ahlswede's lectures on Information Theory is focused on Identification Theory. In contrast to Shannon's classical coding scheme for the transmission of a message over a noisy channel, in the theory of identification the decoder is not really interested in what the received message is, but only in deciding whether a message, which is of special interest to him, has been sent or not. There are also algorithmic problems where it is not necessary to calculate the solution, but only to check whether a certain given answer is correct. Depending on the problem, this answer might be much easier to give than finding the solution. "Easier" in this context means using fewer resources like channel usage, computing time or storage space. Ahlswede and Dueck's main result was that, in contrast to transmission problems, where the possible code sizes grow exponentially fast with block length, the size of identification codes will grow doubly exponentially fast. The theory of identification has now developed into a sophisticated mathematical discipline with many branches and facets, forming part of the Post Shannon theory in which Ahlswede was one of the leading experts. New discoveries in this theory are motivated both by concrete engineering problems and by explorations of the inherent properties of the mathematical structures. Rudolf Ahlswede wrote: It seems that the whole body of present day Information Theory will undergo serious revisions and some dramatic expansions. In this book we will open several directions of future research and start the mathematical description of communication models in great generality. For some specific problems we provide solutions or ideas for their solutions. The lectures presented in this work, which consists of 10 volumes, are suitable for graduate students in Mathematics, and also for those working in Theoretical Computer Science, Physics, and Electrical Engineering with a background in basic Mathematics.
The lectures can be used as the basis for courses or to supplement courses in many ways. Ph.D. students will also find research problems, often with conjectures, that offer potential subjects for a thesis. More advanced researchers may find questions which form the basis of entire research programs. The book also contains an afterword by Gunter Dueck.