Deterministic Identification Over Fading Channels
Mohammad J. Salariseddigh∗, Uzi Pereg∗, Holger Boche†, and Christian Deppe∗
∗Institute for Communications Engineering, Technical University of Munich
†Chair of Theoretical Information Technology, Technical University of Munich
Abstract—Deterministic identification (DI) is addressed for
Gaussian channels with fast and slow fading, where channel
side information is available at the decoder. In particular, it
is established that the number of messages scales as $2^{n\log(n)R}$,
where $n$ is the block length and $R$ is the coding rate. Lower and
upper bounds on the DI capacity are developed in this scale for
fast and slow fading. Consequently, the DI capacity is infinite in
the exponential scale and zero in the double-exponential scale,
regardless of the channel noise.
Index Terms—Fading channels, identification without randomization, deterministic codes, super-exponential growth, channel side information.
I. INTRODUCTION
Modern communications require the transfer of enormous
amounts of data in wireless systems, for cellular communication, sensor networks, smart appliances, the Internet of Things,
etc. Wireless communication is often modelled by fading
channels with AWGN [1], [2]. In the fast fading regime, the
transmission spans over a large number of coherence time
intervals [2], hence the signal attenuation is characterized
by a stochastic process or a sequence of random parameters.
In some applications, the receiver may acquire channel side
information (CSI) by instantaneous estimation of the channel
parameters [3]. On the other hand, in the slow fading regime,
the latency is short compared to the coherence time [2], and
the behaviour is that of a compound channel.
In the fundamental point-to-point communication paradigm,
a sender conveys a message through a noisy channel in such
a manner that the receiver will retrieve the original message.
Ahlswede and Dueck [4] introduced a scenario of a different
nature where the decoder only performs identification and
determines whether a particular message was sent or not [4],
[5]. Applications include vehicle-to-X communications [6],
digital watermarking [7], molecular communications [8], [9]
and other event-triggered systems. In vehicle-to-X communi-
cations, a vehicle that collects sensor data may ask whether
a certain alert message, concerning the future movement of
an adjacent vehicle, was transmitted or not [10]. In molecular
communications (MC) [11], [12], information is transmitted
via chemical signals or molecules. In various environments,
e.g. inside the human body, conventional wireless commu-
nication with electromagnetic (EM) waves is not feasible or
could be detrimental. The research on micro-scale MC for
medical applications, such as intra-body networks, is still in
its early stages and faces many challenges. MC is a promising
contender for future applications such as 7G+.
The original identification setting by Ahlswede and Dueck
[4] requires randomized encoding, i.e., a randomized source
available to the sender. It is known that this resource cannot
increase the transmission capacity of discrete memoryless
channels [13]. A remarkable result of identification theory is
that given local randomness at the encoder, reliable identifica-
tion can be attained such that the code size, i.e., the number of
messages, grows double exponentially in the block length n,
i.e., $\sim 2^{2^{nR}}$ [4]. This differs sharply from the traditional transmission setting, where the code size scales only exponentially,
i.e., $\sim 2^{nR}$. Yet, the implementation of such a coding scale is
challenging, as it requires the encoder to process a bit string
of exponential length. The construction of identification codes
is considered in [5], [14], [15]. Identification for Gaussian
channels is considered in [16].
In the deterministic setup, given a discrete memoryless
channel (DMC), the number of messages grows exponentially
with the blocklength [4], [17]–[19], as in the traditional setting
of transmission. Nevertheless, the achievable identification
rates are higher than those of transmission. In addition, deterministic codes often have the advantage of simpler implementation and explicit construction. In particular, JáJá [18] showed
that the deterministic identification (DI) capacity of a binary
symmetric channel is 1 bit per channel use, as one can exhaust
the entire input space and assign (almost) all binary n-tuples
as codewords. The DI capacity in the literature is also referred
to as the non-randomized identification (NRI) capacity [17] or
the dID capacity [20]. Ahlswede et al. [4], [17] stated that the
DI capacity of a discrete memoryless channel (DMC) with a
stochastic matrix $W$ is given by the logarithm of the number
of distinct row vectors of $W$ [4], [17]. The DI $\varepsilon$-capacity of
the Gaussian channel was determined by Burnashev [20].
In a recent work by the authors [21], we addressed determin-
istic identification for the DMC subject to an input constraint
and have also shown that the DI capacity of the standard
Gaussian channel, without fading, is infinite in the exponential
scale. Our previous results [21] reveal a gap of knowledge in
the following sense. For a finite blocklength n, the number of
codewords must be finite. Thereby, the meaning of the infinite
capacity result is that the number of messages scales super-exponentially. The question remains: what is the true order of
the code size? In mathematical terms, what is the scale $L$ for
which the DI capacity is positive yet finite? Here, we will
answer this question.
In this paper, we consider deterministic identification for
Gaussian channels with fast fading and slow fading, where
channel side information (CSI) is available at the decoder.
We show that for Gaussian channels, the number of messages
scales as $2^{n\log(n)R}$, and develop lower and upper bounds on
2020 IEEE Information Theory Workshop (ITW) | 978-1-7281-5962-1/20/$31.00 ©2021 IEEE | DOI: 10.1109/ITW46852.2021.9457587
the DI capacity in this scale. As a consequence, we deduce
that the DI capacity of a Gaussian Channel with fast fading
is infinite in the exponential scale and zero in the double-
exponential scale, regardless of the channel noise. For slow
fading, the DI capacity in the exponential scale is infinite,
unless the fading gain can be zero or arbitrarily close to zero
(with positive probability), in which case the DI capacity is
zero. As opposed to RI coding, a double-exponential number
of messages cannot be achieved with deterministic codes.
The results have the following geometric interpretation.
While identification allows overlap between decoding regions
[13], [16], overlap at the encoder is not allowed for determinis-
tic codes. We observe that when two messages are represented
by codewords that are close to one another, then identification
fails. Thus, deterministic coding imposes the restriction that
the codewords need to be distanced from each other. Based
on fundamental properties in lattice and group theory [22],
the optimal packing of non-overlapping spheres of radius $\sqrt{n\varepsilon}$
contains an exponential number of spheres, and by decreasing
the radius of the codeword spheres, the exponential rate can
be made arbitrarily large. However, in the derivation of our
lower bound, we show achievability of rates in the $2^{n\log(n)}$-scale by using spheres of radius $\sqrt{n\varepsilon_n} \sim n^{1/4}$, which results
in $\sim 2^{\frac{1}{4}n\log(n)}$ codewords. The full version of this paper with
proofs can be found in [23].
II. DEFINITIONS AND RELATED WORK
In this section, we introduce the channel models and coding
definitions. We use the following notation: a vector is denoted
by $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, and its $\ell_2$-norm by $\|\mathbf{x}\|$. The element-wise product is denoted by $\mathbf{x} \circ \mathbf{y} = (x_t y_t)_{t=1}^{n}$. We denote
the hyper-sphere of radius $r$ by $\mathcal{S}_{\mathbf{x}_0}(n, r) = \{\mathbf{x} \in \mathbb{R}^n : \|\mathbf{x} - \mathbf{x}_0\| \leq r\}$, and the set of consecutive integers from $1$
to $M$ by $[\![M]\!]$. The closure of a set $\mathcal{A}$ is denoted by $\mathrm{cl}(\mathcal{A})$.
A. Fading Channels
Consider a Gaussian channel,
$$\mathbf{Y} = \mathbf{G} \circ \mathbf{x} + \mathbf{Z} \tag{1}$$
where $\mathbf{G}$ is a random sequence of fading coefficients and $\mathbf{Z}$
is an additive white Gaussian noise process, i.i.d. $\sim \mathcal{N}(0, \sigma_Z^2)$ (see
Figure 1). For fast fading, $\mathbf{G}$ is a sequence of i.i.d. continuous
random variables $\sim f_G$, whereas for slow fading, the fading
sequence remains constant throughout the transmission, i.e.,
$G_t = G \sim f_G$. It is assumed that the noise sequence $\mathbf{Z}$
and the sequence of fading coefficients $\mathbf{G}$ are statistically
independent, and that the values of the fading coefficients
belong to a bounded set $\mathcal{G}$, either countable or uncountable.
The transmission power is limited to
$$\|\mathbf{x}\|^2 \leq nA. \tag{2}$$
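As an illustrative simulation sketch (not part of the paper; the Rayleigh fading law and all parameter values below are assumptions chosen for demonstration), the fast-fading model (1) under the power constraint (2) can be written as:

```python
import numpy as np

def fading_channel(x, sigma_z=1.0, rng=None):
    """Simulate one block of the fast-fading channel Y = G o x + Z.

    G: i.i.d. fading coefficients (Rayleigh here, an illustrative choice),
    Z: i.i.d. additive white Gaussian noise ~ N(0, sigma_z^2).
    The decoder also receives the realization g as channel side information.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    g = rng.rayleigh(scale=1.0, size=n)   # fading sequence G
    z = rng.normal(0.0, sigma_z, size=n)  # AWGN sequence Z
    return g * x + z, g

# A codeword respecting the power constraint ||x||^2 <= n*A
n, A = 1000, 4.0
x = np.full(n, np.sqrt(A))               # ||x||^2 = n*A exactly
assert np.linalg.norm(x) ** 2 <= n * A + 1e-9
y, g = fading_channel(x, rng=0)
```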
We consider codes with different size orders. For instance,
when we discuss the exponential scale, we refer to a code
size that scales as $L(n, R) = 2^{nR}$. On the other hand, in
the double-exponential scale, the code size is $L(n, R) = 2^{2^{nR}}$. We say that a scale $L_1$ dominates another scale $L_2$ if
$\lim_{n \to \infty} \frac{L_2(n, b)}{L_1(n, a)} = 0$ for all $a, b > 0$.
Fig. 1. Deterministic identification over fading channels. For fast fading, $\mathbf{G} = (G_t)_{t=1}^{\infty}$ is a sequence of i.i.d. fading coefficients $\sim f_G$. For slow fading,
the fading sequence remains constant throughout the transmission block, i.e.,
$G_t = G$.
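As a quick numerical illustration of this dominance notion (a sketch, not part of the paper), one can check in the logarithmic domain that the $2^{n\log n}$-scale dominates the exponential scale:

```python
import math

def log2_ratio(n, a=1.0, b=1.0):
    """log2 of L2(n,b)/L1(n,a) with L1 = 2^{n*log2(n)*a} and L2 = 2^{n*b}.

    The log-ratio is n*b - n*log2(n)*a -> -infinity as n grows,
    i.e. the ratio L2/L1 tends to 0, so L1 dominates L2.
    """
    return n * b - n * math.log2(n) * a

# The log-ratio decreases without bound, hence L2/L1 -> 0.
vals = [log2_ratio(n) for n in (10, 100, 1000)]
assert vals[0] > vals[1] > vals[2]
assert vals[2] < 0
```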
Definition 1. An $(L(n, R), n)$ DI code for a Gaussian channel $\mathcal{G}_{\mathrm{fast}}$ with CSI at the decoder, assuming $L(n, R)$ is an
integer, is defined as a system $(\mathcal{U}, \mathcal{D})$ which consists of a
codebook $\mathcal{U} = \{\mathbf{u}_i\}_{i \in [\![L(n,R)]\!]}$, $\mathcal{U} \subset \mathbb{R}^n$, such that $\|\mathbf{u}_i\|^2 \leq nA$ for all $i \in [\![L(n, R)]\!]$, and a collection of decoding regions $\mathcal{D} = \{\mathcal{D}_{i,\mathbf{g}}\}_{i \in [\![L(n,R)]\!],\, \mathbf{g} \in \mathcal{G}^n}$ with $\bigcup_{i=1}^{L(n,R)} \mathcal{D}_{i,\mathbf{g}} \subset \mathbb{R}^n$.
Given a message $i \in [\![L(n, R)]\!]$, the encoder transmits $\mathbf{u}_i$.
The decoder's aim is to answer the following question: was a
desired message $j$ sent or not? There are two types of errors
that may occur: rejecting the true message, or accepting
a false message. These are referred to as type I and type II
errors, respectively. The error probabilities are given by
$$P_{e,1}(i) = 1 - \int_{\mathcal{G}^n} f_{\mathbf{G}}(\mathbf{g}) \int_{\mathcal{D}_{i,\mathbf{g}}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_i)\, d\mathbf{y}\, d\mathbf{g} \tag{3}$$
$$P_{e,2}(i, j) = \int_{\mathcal{G}^n} f_{\mathbf{G}}(\mathbf{g}) \int_{\mathcal{D}_{j,\mathbf{g}}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_i)\, d\mathbf{y}\, d\mathbf{g}. \tag{4}$$
An $(L(n, R), n, \lambda_1, \lambda_2)$ DI code further satisfies $P_{e,1}(i) \leq \lambda_1$
and $P_{e,2}(i, j) \leq \lambda_2$ for all $i, j \in [\![L(n, R)]\!]$, $i \neq j$. A rate $R$ is
called achievable if for every $\lambda_1, \lambda_2 > 0$ and sufficiently large
$n$, there exists an $(L(n, R), n, \lambda_1, \lambda_2)$ DI code. The operational
DI capacity in the $L$-scale is defined as the supremum of
achievable rates, and will be denoted by $C_{DI}(\mathcal{G}_{\mathrm{fast}}, L)$.
Coding for slow fading is defined in a similar manner.
However, the errors are defined with a supremum over the
values of the fading coefficient $G \in \mathcal{G}$, namely,
$$P_{e,1}(i) = \sup_{g \in \mathcal{G}} \left[ 1 - \int_{\mathcal{D}_{i,g}} \prod_{t=1}^{n} f_Z(y_t - g u_{i,t})\, d\mathbf{y} \right] \tag{5}$$
$$P_{e,2}(i, j) = \sup_{g \in \mathcal{G}} \left[ \int_{\mathcal{D}_{j,g}} \prod_{t=1}^{n} f_Z(y_t - g u_{i,t})\, d\mathbf{y} \right]. \tag{6}$$
The DI capacity of the Gaussian channel with slow fading is
denoted by $C_{DI}(\mathcal{G}_{\mathrm{slow}}, L)$.
As mentioned earlier, in the original identification setting, a
randomized source is available to the sender, and the encoder
may depend on the output of this source. A randomized-
encoder identification (RI) code is defined similarly, where
the encoder is allowed to select a codeword $\mathbf{U}_i$ at random
according to some conditional input distribution $Q(x^n|i)$. The
RI capacity in the $L$-scale is then denoted by $C_{RI}(\cdot, L)$.
Remark 1. We establish the following property [23, see
Lem. 3]. Suppose that $L_1$ is a scale that dominates another
scale $L_2$. In general, if the capacity in the $L_2$-scale is finite,
then it is zero in the $L_1$-scale. Conversely, if the capacity in
the $L_1$-scale is positive, then the capacity in the $L_2$-scale is $+\infty$.
Remark 2.Molecular communication (MC) technology has
recently advanced significantly [11]. The goal is to construct
complex networks, such as IoT, using MC. Nanotechnol-
ogy enables the development of nanothings, devices in the
nano-scale range, and Internet of NanoThings (IoNT), which
will be the basis for various future healthcare and military
applications [24]. Conventional electronic circuits and EM-
based communication could be harmful for some application
environments, such as inside the human body. Nanothings
can be biological cells, or bio-nanothings, that are created by
means of synthetic biology and nanotechnology techniques.
Similar to artificial nanothings, bio-nanothings have control
(cell nucleus), power (mitochondrion), communication (signal
pathways), and sensing/actuation (flagella, pili or cilia) units.
For inter-cellular communication, MC is especially well
suited, due to the natural exchange of information. The communication task of identification is of significant interest for
these applications. However, it is unclear whether RI codes can
be incorporated there, as it is unclear how powerful random
number generators could be developed from synthetic materials
at these small scales. Furthermore, for Bio-NanoThings, it
is uncertain whether the natural biological processes can be
controlled or reinforced by local randomness. Therefore, for
the design of synthetic IoNT, or for the analysis and utilization
of IoBNT, deterministic identification is applicable.
B. Related Work
We briefly review known results for the standard Gaussian
channel. We begin with the RI capacity, i.e., when the encoder
uses a stochastic mapping. Let $\mathcal{G}$ denote the standard Gaussian
channel, $Y_t = g X_t + Z_t$, where the gain $g > 0$ is a
deterministic constant which is known to the encoder and
the decoder. As mentioned, using RI codes, it is possible
to identify a double-exponential number of messages in the
block length n. Despite the significant difference between the
definitions in the identification and transmission settings, it
was shown that the value of the RI capacity in the double-
exponential scale equals the Shannon capacity of transmission.
Theorem 1 (see [4], [16]). The RI capacity in the double-exponential scale of the standard Gaussian channel satisfies
$$C_{RI}(\mathcal{G}, L) = \frac{1}{2}\log\left(1 + \frac{g^2 A^2}{\sigma^2}\right), \quad \text{for } L(n, R) = 2^{2^{nR}} \tag{7}$$
$$C_{RI}(\mathcal{G}, L) = \infty, \quad \text{for } L(n, R) = 2^{nR}. \tag{8}$$
In a recent paper by the authors [21], the deterministic case
was considered in the exponential scale.
Theorem 2 (see [21]). The DI capacity in the exponential scale
of the standard Gaussian channel is infinite, i.e.,
$$C_{DI}(\mathcal{G}, L) = \infty, \quad \text{for } L(n, R) = 2^{nR}. \tag{9}$$
Our results in [21] reveal a gap of knowledge in the
following sense. For a finite blocklength n, the number of
codewords must be finite. Thereby, Theorem 2 implies that
the code size scales super-exponentially. The question remains:
what is the order of the code size? In mathematical terms, what
is the scale $L$ for which the DI capacity is positive yet finite?
In the next section, we provide an answer to this question.
III. MAIN RESULTS
A. Main Result - Gaussian Channel with Fast Fading
Our DI capacity theorem for the Gaussian channel with fast
fading is stated below.
Theorem 3. Assume that the fading coefficients are positive,
i.e., $0 \notin \mathrm{cl}(\mathcal{G})$. The DI capacity of the Gaussian channel $\mathcal{G}_{\mathrm{fast}}$
with fast fading in the $2^{n\log(n)}$-scale is given by
$$\frac{1}{4} \leq C_{DI}(\mathcal{G}_{\mathrm{fast}}, L) \leq 1, \quad \text{for } L(n, R) = 2^{n\log(n)R}. \tag{10}$$
Hence, the DI capacity is infinite in the exponential scale and
zero in the double-exponential scale, i.e.,
$$C_{DI}(\mathcal{G}_{\mathrm{fast}}, L) = \begin{cases} \infty & \text{for } L(n, R) = 2^{nR} \\ 0 & \text{for } L(n, R) = 2^{2^{nR}}. \end{cases} \tag{11}$$
The proofs for the lower and upper bounds are given in
Section IV-A and Section IV-B, respectively. The second part
of Theorem 3 is a direct consequence of Remark 1.
Next, we consider the Gaussian channel Gslow with slow
fading.
Theorem 4. The DI capacity of the Gaussian channel $\mathcal{G}_{\mathrm{slow}}$
with slow fading in the $2^{n\log(n)}$-scale is bounded by
$$\frac{1}{4} \leq C_{DI}(\mathcal{G}_{\mathrm{slow}}, L) \leq 1 \quad \text{if } 0 \notin \mathrm{cl}(\mathcal{G})$$
$$C_{DI}(\mathcal{G}_{\mathrm{slow}}, L) = 0 \quad \text{if } 0 \in \mathrm{cl}(\mathcal{G}) \tag{12}$$
for $L(n, R) = 2^{n\log(n)R}$. Hence,
$$C_{DI}(\mathcal{G}_{\mathrm{slow}}, L) = \begin{cases} 0 & \text{if } 0 \in \mathrm{cl}(\mathcal{G}) \\ \infty & \text{if } 0 \notin \mathrm{cl}(\mathcal{G}), \end{cases} \quad \text{for } L(n, R) = 2^{nR}$$
$$C_{DI}(\mathcal{G}_{\mathrm{slow}}, L) = 0, \quad \text{for } L(n, R) = 2^{2^{nR}}. \tag{13}$$
The proof of Theorem 4 is given in [23]. It is based
on a technique similar to the one used for fast fading.
IV. PROOF OF THEOREM 3
A. Lower Bound
We show that the DI capacity is bounded by
$C_{DI}(\mathcal{G}_{\mathrm{fast}}, L) \geq \frac{1}{4}$ for $L(n, R) = 2^{n\log(n)R}$. Achievability
is established using a dense packing arrangement and a
simple distance-decoder. A DI code for the Gaussian channel
$\mathcal{G}_{\mathrm{fast}}$ with fast fading is constructed as follows. Consider the
normalized input-output relation,
$$\bar{\mathbf{Y}} = \mathbf{G} \circ \bar{\mathbf{x}} + \bar{\mathbf{Z}} \tag{14}$$
2020 IEEE Information Theory Workshop (ITW)
Authorized licensed use limited to: Technische Universitaet Muenchen. Downloaded on February 12,2023 at 21:26:14 UTC from IEEE Xplore. Restrictions apply.
where the noise sequence $\bar{\mathbf{Z}}$ is i.i.d. $\sim \mathcal{N}\left(0, \frac{\sigma^2}{n}\right)$, and an
input power constraint
$$\|\bar{\mathbf{x}}\| \leq \sqrt{A} \tag{15}$$
with $\bar{\mathbf{x}} = \frac{1}{\sqrt{n}}\mathbf{x}$, $\bar{\mathbf{Z}} = \frac{1}{\sqrt{n}}\mathbf{Z}$, and $\bar{\mathbf{Y}} = \frac{1}{\sqrt{n}}\mathbf{Y}$. Assuming $0 \notin \mathrm{cl}(\mathcal{G})$, there exists a positive number $\gamma$ such that $|G_t| > \gamma$ for
all $t$ with probability $1$.
Codebook construction: We use a packing arrangement of
non-overlapping hyper-spheres of radius $\sqrt{\varepsilon_n}$ that cover a
hyper-sphere of radius $(\sqrt{A} - \sqrt{\varepsilon_n})$, with
$$\varepsilon_n = \frac{A}{n^{\frac{1}{2}(1-b)}} \tag{16}$$
where $b > 0$ is arbitrarily small. The small spheres are
not necessarily entirely contained within the bigger sphere.
The packing density is defined as the fraction of the big
sphere volume that is covered by the small spheres. Based
on the Minkowski-Hlawka theorem [22] in lattice theory,
there exists an arrangement $\bigcup_{i=1}^{2^{n\log(n)R}} \mathcal{S}_{\mathbf{u}_i}(n, \sqrt{\varepsilon_n})$ inside
$\mathcal{S}_0(n, \sqrt{A} - \sqrt{\varepsilon_n})$ with a density of at least $2^{-n}$. Specifically,
consider a saturated packing arrangement in $\mathbb{R}^n$ with spheres
of radius $\sqrt{\varepsilon_n}$, i.e., such that no sphere can be added without
overlap. Then, for such an arrangement, there cannot be a point
in the big sphere with a distance of more than $2\sqrt{\varepsilon_n}$ from
all sphere centers; otherwise, a new sphere could be added.
As a consequence, if we double the radius of each sphere,
the $2\sqrt{\varepsilon_n}$-radius spheres cover the whole sphere of radius
$(\sqrt{A} - \sqrt{\varepsilon_n})$. Doubling the radius multiplies the volume by
$2^n$. This, in turn, implies that the original $\sqrt{\varepsilon_n}$-radius packing
arrangement has a density of at least $2^{-n}$. We assign a codeword
to the center $\mathbf{u}_i$ of each small sphere. Since the small spheres
have the same volume, the total number of spheres, i.e., the
codebook size, is roughly $2^{n\log(n)R} \approx 2^{-n} \cdot \left(\frac{A}{\varepsilon_n}\right)^{\frac{n}{2}}$. To be
precise,
$$R \geq \frac{1}{2\log(n)}\log\left(\frac{A}{\varepsilon_n}\right) - \frac{2}{\log(n)} = \frac{1}{4}(1-b) - \frac{2}{\log(n)}$$
which tends to $\frac{1}{4}$ as $n \to \infty$ and $b \to 0$.
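As a sanity check on this rate bound (an illustrative sketch, not part of the paper; base-2 logarithms are assumed since code sizes are powers of two, and $A$ cancels out), one can evaluate the lower bound numerically:

```python
import math

def achievable_rate(n, b):
    """Lower bound R >= (1/4)(1-b) - 2/log2(n) obtained from the
    sphere-packing construction with eps_n = A / n^{(1-b)/2}."""
    return 0.25 * (1 - b) - 2 / math.log2(n)

# The bound increases with n and approaches 1/4 as n grows and b shrinks.
assert achievable_rate(10**9, 0.001) > achievable_rate(10**6, 0.01)
assert abs(achievable_rate(10**30, 1e-6) - 0.25) < 0.03
```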
Encoding: Given a message $i \in [\![L(n, R)]\!]$, send $\bar{\mathbf{x}} = \bar{\mathbf{u}}_i$.
Decoding: Let $\delta_n = \frac{\gamma^2 \varepsilon_n}{3}$. To identify whether a message
$j \in [\![L(n, R)]\!]$ was sent, given the sequence $\mathbf{g}$, the decoder
checks whether the channel output $\bar{\mathbf{y}}$ belongs to
$$\mathcal{D}_{j,\mathbf{g}} = \left\{ \bar{\mathbf{y}} \in \mathbb{R}^n : \|\bar{\mathbf{y}} - \mathbf{g} \circ \bar{\mathbf{u}}_j\| \leq \sqrt{\sigma^2 + \delta_n} \right\}. \tag{17}$$
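The distance decoder admits a direct implementation; the following sketch (illustrative, with arbitrary toy parameters) mirrors the membership test in (17):

```python
import numpy as np

def identify(y_bar, u_bar_j, g, sigma2, delta_n):
    """Distance decoder for message j with CSI g, as in (17):
    accept iff ||y_bar - g o u_bar_j|| <= sqrt(sigma2 + delta_n)."""
    return np.linalg.norm(y_bar - g * u_bar_j) <= np.sqrt(sigma2 + delta_n)

# Toy check: the noiseless output of u_j itself is always accepted.
rng = np.random.default_rng(0)
g = rng.rayleigh(size=8)
u_j = rng.normal(size=8) / np.sqrt(8)
assert identify(g * u_j, u_j, g, sigma2=1.0, delta_n=0.1)
```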
Error Analysis: Consider the type I error, i.e., when the
transmitter sends $\bar{\mathbf{u}}_i$, yet $\bar{\mathbf{Y}} \notin \mathcal{D}_{i,\mathbf{G}}$. For every $i \in [\![L(n, R)]\!]$,
the type I error probability is bounded by
$$P_{e,1}(i) = \Pr\left( \left\| \bar{\mathbf{Y}} - \mathbf{G} \circ \bar{\mathbf{u}}_i \right\|^2 > \sigma_Z^2 + \delta_n \,\middle|\, \bar{\mathbf{x}} = \bar{\mathbf{u}}_i \right) = \Pr\left( \left\| \bar{\mathbf{Z}} \right\|^2 > \sigma_Z^2 + \delta_n \right) \leq \frac{3\sigma_Z^4}{n\delta_n^2} \tag{18}$$
by Chebyshev's inequality. This tends to zero as $n \to \infty$ since
$\delta_n \propto n^{-\frac{1}{2}(1-b)}$.
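The Chebyshev bound in (18) can be checked by a small Monte Carlo experiment (an illustrative sketch with arbitrary parameter values, not part of the paper):

```python
import numpy as np

def type_i_excess_prob(n, sigma2, delta, trials=20000, seed=0):
    """Estimate Pr(||Z_bar||^2 > sigma2 + delta) for Z_bar with n i.i.d.
    N(0, sigma2/n) entries, and return it together with the
    Chebyshev-style bound 3*sigma2^2 / (n * delta^2)."""
    rng = np.random.default_rng(seed)
    z_bar = rng.normal(0.0, np.sqrt(sigma2 / n), size=(trials, n))
    empirical = np.mean(np.sum(z_bar**2, axis=1) > sigma2 + delta)
    bound = 3 * sigma2**2 / (n * delta**2)
    return empirical, bound

# Illustrative parameters (not from the paper): n = 400, sigma2 = 1, delta = 0.2.
emp, bound = type_i_excess_prob(n=400, sigma2=1.0, delta=0.2)
assert emp <= bound  # the concentration bound holds (and is loose here)
```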
Next, we address the type II error, i.e., when $\bar{\mathbf{Y}} \in \mathcal{D}_{j,\mathbf{G}}$
while the transmitter sent $\bar{\mathbf{u}}_i$. Then, for every $i, j \in [\![L(n, R)]\!]$,
where $i \neq j$, the type II error probability is given by
$$P_{e,2}(i, j) = \Pr\left( \left\| \mathbf{G} \circ (\bar{\mathbf{u}}_i - \bar{\mathbf{u}}_j) + \bar{\mathbf{Z}} \right\|^2 \leq \sigma_Z^2 + \delta_n \right). \tag{19}$$
Observe that the square norm can be expressed as
$$\left\| \mathbf{G} \circ (\bar{\mathbf{u}}_i - \bar{\mathbf{u}}_j) + \bar{\mathbf{Z}} \right\|^2 = \left\| \mathbf{G} \circ (\bar{\mathbf{u}}_i - \bar{\mathbf{u}}_j) \right\|^2 + \left\| \bar{\mathbf{Z}} \right\|^2 + 2\sum_{t=1}^{n} G_t(\bar{u}_{i,t} - \bar{u}_{j,t})\bar{Z}_t. \tag{20}$$
Furthermore, by Chebyshev's inequality, the probability of
the event $\left\{ \left| \sum_{t=1}^{n} G_t(\bar{u}_{i,t} - \bar{u}_{j,t})\bar{Z}_t \right| > \frac{\delta_n}{2} \right\}$ is bounded by
$\frac{4\sigma_Z^2(\sigma_G^2 + \mu_G^2) \cdot 4A}{n\delta_n^2}$. Therefore, for sufficiently large $n$,
$$P_{e,2}(i, j) \leq \Pr\left( \left\| \mathbf{G} \circ (\bar{\mathbf{u}}_i - \bar{\mathbf{u}}_j) \right\|^2 + \left\| \bar{\mathbf{Z}} \right\|^2 \leq \sigma_Z^2 + 2\delta_n \right) + \eta_1.$$
Since each codeword is surrounded by a sphere of radius
$\sqrt{\varepsilon_n}$, we have $\left\| \mathbf{G} \circ (\bar{\mathbf{u}}_i - \bar{\mathbf{u}}_j) \right\|^2 \geq \gamma^2 \varepsilon_n$. Thus,
$$P_{e,2}(i, j) \leq \Pr\left( \left\| \bar{\mathbf{Z}} \right\|^2 \leq \sigma_Z^2 + 2\delta_n - \gamma^2 \varepsilon_n \right) + \eta_1 = \Pr\left( \left\| \bar{\mathbf{Z}} \right\|^2 \leq \sigma_Z^2 - \delta_n \right) + \eta_1 \leq 2\eta_1 \tag{21}$$
as $2\delta_n - \gamma^2 \varepsilon_n = -\delta_n$. The proof follows by taking the limits
$n \to \infty$, $b \to 0$.
B. Upper Bound (Converse Proof)
We show that the capacity is bounded by $C_{DI}(\mathcal{G}_{\mathrm{fast}}, L) \leq 1$.
Suppose that $R$ is an achievable rate.
Lemma 5. For sufficiently large $n$, every pair of codewords
is separated by at least $\sqrt{n\varepsilon_n'}$, i.e., $\|\mathbf{u}_{i_1} - \mathbf{u}_{i_2}\| \geq \sqrt{n\varepsilon_n'}$,
where
$$\varepsilon_n' = \frac{A}{n^2} \tag{22}$$
for all $i_1, i_2 \in [\![L(n, R)]\!]$ such that $i_1 \neq i_2$.
Proof. Let $\kappa, \eta > 0$ be arbitrarily small. Assume to the
contrary that there exist two messages $i_1$ and $i_2$, where $i_1 \neq i_2$,
such that $\|\mathbf{u}_{i_1} - \mathbf{u}_{i_2}\| < \sqrt{n\varepsilon_n'} = \sqrt{\frac{A}{n}}$. Then, there exists
$b > 0$ such that $\|\mathbf{u}_{i_1} - \mathbf{u}_{i_2}\| \leq \alpha_n$, where $\alpha_n \equiv \frac{\sqrt{A}}{n^{\frac{1}{2}(1+2b)}}$.
Observe that $\mathbb{E}\{\|\mathbf{G} \circ (\mathbf{u}_{i_1} - \mathbf{u}_{i_2})\|^2\} = \mathbb{E}\{G^2\}\|\mathbf{u}_{i_1} - \mathbf{u}_{i_2}\|^2$,
and consider the subspace
$$\mathcal{A}_{i_1,i_2} = \{ \mathbf{g} \in \mathcal{G}^n : \|\mathbf{g} \circ (\mathbf{u}_{i_1} - \mathbf{u}_{i_2})\| > \delta_n' \} \tag{23}$$
where $\delta_n' \equiv \frac{\sqrt{A}}{n^{\frac{1}{2}(1+b)}}$. By Markov's inequality,
$$\Pr(\mathbf{G} \in \mathcal{A}_{i_1,i_2}) \leq \frac{\mathbb{E}\{G^2\}\alpha_n^2}{\delta_n'^2} = \frac{\mathbb{E}\{G^2\}}{n^b} \leq \kappa. \tag{24}$$
Therefore,
$$1 - P_{e,1}(i_1) = \int_{\mathcal{G}^n} f_{\mathbf{G}}(\mathbf{g}) \int_{\mathcal{D}_{i_1,\mathbf{g}}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y}\, d\mathbf{g} \leq \int_{\mathcal{A}_{i_1,i_2}^c} f_{\mathbf{G}}(\mathbf{g}) \int_{\mathcal{D}_{i_1,\mathbf{g}}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y}\, d\mathbf{g} + \kappa. \tag{25}$$
Next, we bound the inner integral by
$$\int_{\mathcal{D}_{i_1,\mathbf{g}} \cap \mathcal{B}_{i_2}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y} + \int_{\mathcal{B}_{i_2}^c} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y} \tag{26}$$
with $\mathcal{B}_{i_2} = \{ \mathbf{y} : \|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_2}\| \leq \sqrt{n(\sigma^2 + \zeta)} \}$. Consider
the second term in (26). By the triangle inequality,
$$\|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1}\| \geq \|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_2}\| - \|\mathbf{g} \circ (\mathbf{u}_{i_2} - \mathbf{u}_{i_1})\| \geq \sqrt{n(\sigma^2 + \zeta)} - \delta_n' \geq \sqrt{n(\sigma^2 + \eta)} \tag{27}$$
for all $\mathbf{g} \in \mathcal{A}_{i_1,i_2}^c$, large $n$, and $\eta < \frac{\zeta}{2}$, following the definitions
of $\mathcal{A}_{i_1,i_2}$ and $\mathcal{B}_{i_2}$. We deduce that the second term in (26) is
bounded by
$$\int_{\{\mathbf{y} : \|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1}\| > \sqrt{n(\sigma^2 + \eta)}\}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y} = \Pr\left(\|\mathbf{Z}\|^2 - n\sigma^2 > n\eta\right) \leq \frac{3\sigma^4}{n\eta^2} \leq \kappa \tag{28}$$
by Chebyshev's inequality. Moving to the first integral, by
the triangle inequality, $\|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1}\| \leq \|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_2}\| + \|\mathbf{g} \circ (\mathbf{u}_{i_1} - \mathbf{u}_{i_2})\|$. Taking the square of both sides yields
$$\|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1}\|^2 \leq \|\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_2}\|^2 + \delta_n'^2 + \frac{2\sqrt{A(\sigma^2 + \zeta)}}{n^{b/2}}$$
by the definitions of $\mathcal{A}_{i_1,i_2}$ and $\mathcal{B}_{i_2}$. Thus, for sufficiently large
$n$,
$$f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1}) - f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_2}) \leq \kappa f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})$$
which, in turn, implies
$$P_{e,1}(i_1) + P_{e,2}(i_2, i_1) \geq 1 - 2\kappa - \kappa \int_{\mathcal{A}_{i_1,i_2}^c} f_{\mathbf{G}}(\mathbf{g}) \int_{\mathcal{D}_{i_1,\mathbf{g}} \cap \mathcal{B}_{i_2}} f_{\mathbf{Z}}(\mathbf{y} - \mathbf{g} \circ \mathbf{u}_{i_1})\, d\mathbf{y}\, d\mathbf{g} \geq 1 - 3\kappa. \tag{29}$$
Hence, the assumption is false.
By Lemma 5, we can define an arrangement of non-overlapping spheres $\mathcal{S}_{\mathbf{u}_i}(n, \sqrt{n\varepsilon_n'})$ of radius $\sqrt{n\varepsilon_n'}$ centered
at the codewords $\mathbf{u}_i$. Since the codewords all belong to a
sphere $\mathcal{S}_0(n, \sqrt{nA})$ of radius $\sqrt{nA}$ centered at the origin,
it follows that the number of packed spheres, i.e., the number
of codewords $2^{n\log(n)R}$, is bounded by
$$2^{n\log(n)R} \leq \left( \frac{\sqrt{A} + \sqrt{\varepsilon_n'}}{\sqrt{\varepsilon_n'}} \right)^n.$$
Thus,
$$R \leq \frac{1}{\log(n)} \log\left( \frac{\sqrt{A} + \sqrt{\varepsilon_n'}}{\sqrt{\varepsilon_n'}} \right) = \frac{\log(n + 1)}{\log(n)} \tag{30}$$
which tends to $1$ as $n \to \infty$. This completes the proof of
Theorem 3. Further details are given in [23].
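The limiting behavior of the converse bound (30) can be verified numerically (an illustrative sketch, not part of the paper; base-2 logarithms are assumed, and the value of $A$ cancels out):

```python
import math

def converse_rate_bound(n, A=1.0):
    """Converse bound R <= log2((sqrt(A)+sqrt(eps))/sqrt(eps)) / log2(n)
    with eps = A/n^2, as in (30); the ratio inside the log equals n+1,
    so the bound is log2(n+1)/log2(n), which tends to 1."""
    eps = A / n**2
    return math.log2((math.sqrt(A) + math.sqrt(eps)) / math.sqrt(eps)) / math.log2(n)

# The bound stays above 1 and decreases toward 1 as n grows.
assert abs(converse_rate_bound(100) - math.log2(101) / math.log2(100)) < 1e-12
assert converse_rate_bound(10**6) < converse_rate_bound(100)
assert converse_rate_bound(10**6) > 1.0
```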
ACKNOWLEDGMENTS
We gratefully thank Andreas Winter, Ning Cai, and Robert
Schober for useful discussions. Mohammad J. Salariseddigh,
Uzi Pereg, and Christian Deppe were supported by
16KIS1005 (LNT, NEWCOM). Holger Boche was supported
by 16KIS1003K (LTI, NEWCOM) and the BMBF
within the national initiative for Molecular Communications
(MAMOKO) under grant 16KIS0914.
REFERENCES
[1] M. Li, H. Yin, Y. Huang, and Y. Wang, “Impact of correlated fading
channels on cognitive relay networks with generalized relay selection,”
IEEE Access, vol. 6, pp. 6040–6047, 2018.
[2] D. Tse and P. Viswanath, Fundamentals of wireless communication.
Cambridge university press, 2005.
[3] A. J. Goldsmith and P. P. Varaiya, “Capacity of fading channels with
channel side information,” IEEE Trans. Inf. Theory, vol. 43, no. 6, pp.
1986–1992, 1997.
[4] R. Ahlswede and G. Dueck, “Identification via channels,” IEEE Trans.
Inf. Theory, vol. 35, no. 1, pp. 15–29, 1989.
[5] S. Derebeyoğlu, C. Deppe, and R. Ferrara, “Performance analysis of
identification codes,” Entropy, vol. 22, no. 10, p. 1067, 2020.
[6] K. Guan, B. Ai, M. Liso Nicolás, R. Geise, A. Möller, Z. Zhong, and
T. Körner, “On the influence of scattering from traffic signs in vehicle-to-x communications,” IEEE Trans. Vehicl. Tech., vol. 65, no. 8, pp.
5835–5849, 2016.
[7] Y. Steinberg and N. Merhav, “Identification in the presence of side
information with application to watermarking,” IEEE Trans. Inf. Theory,
vol. 47, no. 4, pp. 1410–1422, 2001.
[8] S. Bush, J. Paluh, G. Piro, V. S. Rao, R. Prasad, and A. Eckford,
“Defining communication at the bottom,” IEEE Trans. Mol. Biol. Multi-
Scale Commun., vol. 1, pp. 90–96, 2015.
[9] W. Haselmayr, A. Springer, G. Fischer, C. Alexiou, H. Boche, P. Höher,
F. Dressler, and R. Schober, “Integration of molecular communications
into future generation wireless networks,” in 6G Wireless Summit, Levi
Lapland, Finland, Mar 2019.
[10] H. Boche and C. Deppe, “Robust and secure identification,” in Proc.
IEEE Int. Symp. Inf. Th., 2017, pp. 1539–1543.
[11] T. Nakano, M. J. Moore, F. Wei, A. V. Vasilakos, and J. Shuai, “Molecu-
lar communication and networking: Opportunities and challenges,” IEEE
Trans. Nanobiosci., vol. 11, no. 2, pp. 135–148, 2012.
[12] N. Farsad, H. B. Yilmaz, A. Eckford, C. Chae, and W. Guo, “A com-
prehensive survey of recent advancements in molecular communication,”
IEEE Commun. Surveys Tuts., vol. 18, no. 3, pp. 1887–1919, 2016.
[13] A. Ahlswede, I. Althöfer, C. Deppe, and U. Tamm (Eds.), Identification
and Other Probabilistic Models, Rudolf Ahlswede's Lectures on Information Theory 6, 1st ed., ser. Found. Signal Process., Commun. Netw.
Springer Verlag, 2020, vol. 15, to appear.
[14] S. Verdú and V. K. Wei, “Explicit construction of optimal constant-weight codes for identification via channels,” IEEE Trans. Inf. Theory,
vol. 39, no. 1, pp. 30–36, 1993.
[15] J. Bringer, H. Chabanne, G. Cohen, and B. Kindarji, “Identification
codes in cryptographic protocols,” in IEEE Inf. Th. Workshop, 2010,
pp. 1–5.
[16] W. Labidi, C. Deppe, and H. Boche, “Secure identification for gaussian
channels,” in IEEE Int. Conf. Acoust. Speech Sig. Proc. (ICASSP), 2020,
pp. 2872–2876.
[17] R. Ahlswede and N. Cai, “Identification without randomization,”
IEEE Trans. Inf. Theory, vol. 45, no. 7, pp. 2636–2642, 1999.
[18] J. JáJá, “Identification is easier than decoding,” in Ann. Symp. Found.
Comp. Scien. (SFCS), 1985, pp. 43–50.
[19] M. V. Burnashev, “On the method of types and approximation of output
measures for channels with finite alphabets,” Prob. Inf. Trans., vol. 36,
no. 3, pp. 195–212, 2000.
[20] M. V. Burnashev, “On identification capacity of infinite alphabets or
continuous-time channels,” IEEE Trans. Inf. Theory, vol. 46, no. 7, pp.
2407–2414, 2000.
[21] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, “Deterministic
identification over channels with power constraints,” submitted
to IEEE Int’l Conf. Commun. (ICC), 2020. [Online]. Available:
https://arxiv.org/pdf/2010.04239.pdf
[22] J. H. Conway and N. J. A. Sloane, Sphere packings, lattices and groups.
Springer Science & Business Media, 2013, vol. 290.
[23] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, “Deterministic
identification over fading channels,” arXiv preprint arXiv:2010.10010,
2020. [Online]. Available: https://arxiv.org/pdf/2010.10010.pdf
[24] F. Dressler and S. Fischer, “Connecting in-body nano communication
with body area networks: Challenges and opportunities of the internet
of nano things,” Nano Commun. Networks, vol. 6, pp. 29–38, 2015.