Deterministic K-Identification for MC Poisson
Channel With Inter-symbol Interference
Mohammad Javad Salariseddigh∗†, Vahid Jamali, Uzi Pereg§, Holger Boche†∥,
Christian Deppe†¶, and Robert Schober∗∗
Technical University of Munich (TUM); BMBF Research Hub 6G-life, TUM;
Technical University of Darmstadt; §Technion - Israel Institute of Technology;
Institute for Communications Technology, Technical University of Braunschweig;
Chair of Theoretical Information Technology, TUM; ∗∗Friedrich-Alexander-University Erlangen-Nürnberg
(Invited Paper)
Corresponding Author: Mohammad Javad Salariseddigh (E-mail: mjss@tum.de).
E-mails: {mjss,boche}@tum.de, vahid.jamali@tu-darmstadt.de,
uzipereg@technion.ac.il, christian.deppe@tu-braunschweig.de, robert.schober@fau.de
This paper was presented in part at the IEEE International Conference on Communications (ICC 2023) [1].
ABSTRACT Various applications of molecular communications (MCs) feature an alarm-prompt behavior
for which the prevalent Shannon capacity may not be the appropriate performance metric. The identification
capacity as an alternative measure for such systems has been motivated and established in the literature.
In this paper, we study deterministic K-identification (DKI) for the discrete-time Poisson channel (DTPC)
with inter-symbol interference (ISI), where the transmitter is restricted to an average and a peak molecule
release rate constraint. Such a channel serves as a model for diffusive MC systems featuring long channel
impulse responses and employing molecule-counting receivers. We derive lower and upper bounds on the
DKI capacity of the DTPC with ISI when the size of the target message set K and the number of ISI
channel taps L may grow with the codeword length n. As a key finding, we establish that for deterministic
encoding, assuming that K and L both grow sub-linearly in n, i.e., K = 2^{κ log n} and L = 2^{l log n} with
κ + 4l ∈ [0,1), where κ ∈ [0,1) is the identification target rate and l ∈ [0,1/4) is the ISI rate, the
number of different messages that can be reliably identified scales super-exponentially in n, i.e.,
∼ 2^{(n log n)R}, where R is the DKI coding rate. Moreover, since l and κ must fulfill κ + 4l ∈ [0,1), we
show that optimizing l (or equivalently the symbol rate) leads to an effective identification rate [bits/s]
that scales sub-linearly with n. This result is in contrast to the typical transmission rate [bits/s], which is
independent of n.
INDEX TERMS Channel capacity, deterministic identification, inter-symbol interference, molecular com-
munication, and Poisson channel.
I. Introduction
Molecular communication (MC) is a new communica-
tion concept where messages are embedded in the prop-
erties of molecules [2], [3]. In contrast to conventional
electromagnetic-based (EM) communication systems, which
embed information into the properties of EM waves such as
their amplitude, frequency, and phase, MC systems embed
information into the properties of molecules such as their
concentration [4], type [5], time of release [6], and spatial
release pattern [7]. Synthetic biology [8] is a rapidly growing field closely related
to MC, as it provides tools for
realizing the hardware components needed for MC systems.
In [9], [10], the realization of MC systems using synthetic
biology techniques is discussed and biological components
are investigated which can potentially serve as the main
building blocks of synthetic MC systems, i.e., as transmitter,
receiver, and signaling particles. This promising vision for
realizing synthetic MC systems has motivated the research
community to establish theoretical frameworks for their
modeling, design, and analysis. Examples include channel
modeling [11], modulation and detection design [12], and
information-theoretical performance characterization [13].
Advanced synthetic MC systems are expected to facilitate
the realization of the Internet of Bio-Nano Things [14], [15]
capable of performing sophisticated tasks such as sensing,
computing, and networking inside the human body.
A. Poisson Channel With Inter-symbol Interference
In the context of MC systems, information can be en-
coded in the concentration (rate) of molecules released
by the transmitter and be decoded based on the number
of molecules observed at the receiver. Assuming that the
release, propagation, and reception of different molecules
are independent of each other, the number of molecules
observed by molecule-counting receivers in MC systems
is characterized by the Binomial distribution. However, the
Binomial channel model and in particular its probability and
cumulative distribution functions make theoretical analysis
cumbersome. Fortunately, in most MC applications, the
number of released molecules is quite large, which allows
approximating the Binomial channel model by the Gaussian
or Poisson channel models [16]. Specifically, when the
number of molecules emitted by the transmitter, N, is large
but the probability of successful observation of one molecule
at the receiver, p, is small (such that Np remains bounded), the
Poisson distribution can be shown to result as the limiting
case of the Binomial distribution¹ [11, Sec. IV].
¹In the literature, this result is also known as the Poisson limit theorem or the law of rare events [17].
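For intuition, the following small numerical sketch compares the Binomial and Poisson laws; the values of N and p are arbitrary illustrative assumptions, not parameters from the paper or its references.

```python
import numpy as np
from scipy import stats

# Hypothetical example: N released molecules, each observed independently
# at the receiver with probability p (illustrative values only).
N, p = 10_000, 0.002
binom = stats.binom(N, p)       # exact molecule-count distribution
pois = stats.poisson(N * p)     # Poisson limit with mean N*p = 20

# Total-variation distance between the two distributions over a safe support.
support = np.arange(0, 200)
tv = 0.5 * np.sum(np.abs(binom.pmf(support) - pois.pmf(support)))
print(f"total variation distance ~ {tv:.4f}")  # small => Poisson is a good model
```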
Diffusive MC channels are inherently dispersive since
molecules do not fade away quickly but remain in the channel
for a long time. This leads to a long tail of the channel impulse
response (CIR) and causes inter-symbol interference (ISI)
[13]. The number of relevant channel memory taps, L,
depends on the relative length of the CIR and the symbol
duration and hence is a function of the symbol rate. Moti-
vated by the above discussions, we focus on investigating the
fundamental performance limits of the discrete-time Poisson
channel (DTPC) with ISI in this paper.
B. Information Theoretical Analysis of MC Systems
Despite the recent theoretical and technological advance-
ments in the field of MC, the information-theoretical per-
formance limits of DTPC MC systems with and without
ISI are still not fully understood [13]. In fact, finding an
analytic expression for the transmission rate (TR) capacity
of the DTPC with ISI under an average power constraint
is still an open problem [13], [18], [19]. Nevertheless, for
characterizing the TR capacity for the DTPC, a number
of approaches have been explored and several bounds and
asymptotic results for the DTPC with ISI have been estab-
lished. For instance, analytical lower and upper bounds on
the TR capacity of the DTPC with input constraints and
ISI are provided in [20]. Bounds on the TR capacity of the
DTPC with ISI are developed in [21], [22]. The design of
optimal codes for the DTPC with ISI is studied in [23], [24].
In [25], the impact of ISI on the transmission performance
over a diffusive MC channel is investigated. Nonetheless, the
DTPC with ISI has been mostly studied for the TR problem
in the existing literature. On the other hand, in [26], [27],
deterministic identification (DI) for the DTPC without ISI is
studied, where bounds on the DI capacity are established.
To the best of the authors' knowledge, the fundamental
performance limits of the DI problem for the DTPC with ISI
have not yet been investigated in the literature, except in the
conference version of this paper [1].
C. Applications of the K-Identification Problem for MC
Scenarios
Numerous envisioned applications of MCs under the um-
brella of future generation (XG) communication networks
[28], [29] give rise to event-triggered communication scenar-
ios2, where TR capacity may not be the appropriate perfor-
mance metric. In particular, in event-detection, object-finding
or alarm-prompt scenarios, where the receiver has to decide
about the occurrence of a specific event or the presence of
an object with a reliable Yes / No answer, the so-called K-
identification capacity is the relevant performance measure
[31]. More specifically, in the K-identification problem, it is
assumed that the receiver is interested in a subset of size
K of the message set, \mathcal{M} = {1, ..., M}, referred to as the
target message set. Since \mathcal{M} has cardinality M, there are
in total \binom{M}{K} possible target message sets, i.e., subsets of size
K. For each inclusion test, the receiver chooses an arbitrary
message from the message set and checks whether or not
it belongs to a given target message set. The error criteria
imposed on the corresponding K-identification codes dictate
that such an inclusion test must be reliable no matter which
specific target message set is considered.
Concrete examples of the K-identification problem in the
context of MC can be found in communication scenarios
featuring event/object recognition tasks. In particular, for
targeted drug delivery [2], [32]–[35] where, e.g., a nano-
device’s objective may be to identify whether or not a
specific biomarker present around a target tissue belongs
to a certain category of cancers; and in health monitoring [36],
[37], where, e.g., one may be interested in finding to which
group/set of diseases a target bacterium belongs. Moreover,
K-identification problems may find applications in natural
MC systems. For example, in natural olfactory MC systems
[38]–[40], where the communication goal may involve testing the
inclusion of a specific type of secreted odor/pheromone
in a target group of K odors corresponding to a specific
identification task, e.g., foraging or mating.
2Such communication systems are also known as post-Shannon communi-
cation systems in the literature [29]. A detailed discussion of the potential
of MC and post-Shannon communication for the sixth generation (6G) of
communication systems can be found in [30].
TABLE 1: Mathematical notations used throughout this paper.

Symbols : Description
X, Y, Z, ... (blackboard bold letters) : Alphabet sets
X^c : Complement of the set X
x, y, z, ... (lower case letters) : Constants and values of random variables (RVs)
X, Y, Z, ... (upper case letters) : RVs
x, y, ... (lower case bold symbols) : Row vectors
1_n : All-ones row vector of size n
[[M]] ≜ {1, 2, ..., M} : Set of consecutive natural numbers from 1 through M
N_0 ≜ {0, 1, 2, ...} : Set of whole numbers
R_+ : Set of non-negative real numbers
x! ≜ x × (x−1) × ··· × 1 : Factorial of the non-negative integer x
Γ(x) = (x−1)! ≜ (x−1) × (x−2) × ··· × 1 : Gamma function of the non-negative integer x
f(n) = o(g(n)) (small O notation) : f(n) is dominated by g(n) asymptotically, i.e., lim_{n→∞} f(n)/g(n) = 0
f(n) = O(g(n)) (big O notation) : |f(n)| is bounded above by g(n) (up to a constant factor) asymptotically, i.e., limsup_{n→∞} |f(n)|/g(n) < ∞
E[X] : Statistical expectation of the RV X
Cov(X, Y) = E[(X − E[X])(Y − E[Y])] : Covariance of two real-valued RVs X and Y with finite second moments
‖x‖_1, ‖x‖, ‖x‖_∞ : 1-norm, 2-norm, ∞-norm
S_{x_0}(n, r) = {x ∈ R_+^n : ‖x − x_0‖ ≤ r} : n-dimensional hyper sphere of radius r centered at x_0 with respect to the 2-norm
Q_0(n, A) = {x ∈ R_+^n : 0 ≤ x_t ≤ A, ∀t ∈ [[n]]} : n-dimensional cube with center (A/2, ..., A/2) and a corner at the origin, i.e., 0 = (0, ..., 0), whose edges have length A
H(z) ≜ −z log(z) − (1−z) log(1−z) : Binary entropy function for z ∈ [0, 1]
C_TR : Message transmission capacity of a channel
P : Poisson channel with ISI
Besides, in the context of molecular modeling [41], a
computational representation of an MC unit, called a digital
twin [42], may be required. In order to manage complex tasks
(e.g., prediction of future behavior) and perform reliable
computational functions (e.g., real-time simulation) on the
digital twin, it has to continually remain consistent with its
real counterpart [42]. Such a virtual copy of a target MC unit
allows experts to accomplish and evaluate their subsequent
computational tasks in a more reliable manner. Therefore, it
is crucial for the digital twin to verify/identify whether or
not it is consistent with the real MC unit. Examples include
the creation of a functioning human brain at the molecular
level [43] and real-time calibration between an operating
nano-scale communication system and its digital twin [44].
D. Contributions
In this paper, we study the problem of deterministic K-
identification (DKI) over the DTPC with ISI under average
and peak molecule release rate constraints which account
for the restricted molecule production / release rates by the
transmitter. In particular, this work makes the following
contributions:
Generalized DKI and ISI model: In this paper, we
study the DTPC, where the ISI memory length, L, and
the size of the identification set, K, may scale with the
codeword length, n. As special cases, this model includes
the ISI-free channel (L = 1), the ISI channel with constant
L, DI (K = 1), and DKI with constant K. For a given
MC channel, scaling L implies a higher symbol rate.
Therefore, the proposed generalized model allows us to
investigate whether large codeword lengths enable reliable
identification even if the symbol rate is increased (or,
similarly, K is increased). To the best of the authors'
knowledge, such a generalized DKI and ISI model has
not been studied in the literature, yet.
Codebook scale: We establish that the codebook size for
K-identification for the DTPC with ISI under deterministic
encoding scales in n similarly to that of the memoryless DTPC
[27], [45], namely super-exponentially in the codeword
length, i.e., ∼ 2^{(n log n)R}, where R is the DKI coding rate,
even when the size of the target message set K and the
number of ISI channel taps L both grow sub-linearly in
n, i.e., K = 2^{κ log n} and L = 2^{l log n}, respectively, where
κ + 4l ∈ [0,1) has to hold, and κ ∈ [0,1) is called
the identification target rate and l ∈ [0,1/4) the ISI
rate. This result reveals that the set of target messages for
identification and the ISI memory can indeed scale with
n without affecting the scale of the codebook, confirming
the result for the standard identification problem for the
memoryless DTPC (i.e., K = L = 1) [45].
Capacity bounds: We derive DKI capacity bounds for
constant K ≥ 1 and growing K = 2^{κ log n} for the
dispersive DTPC with constant L ≥ 1 and growing ISI,
L = 2^{l log n}, respectively. We show that for constant K
and L, the proposed lower and upper bounds on R are
independent of K and L, whereas for a growing target
message set or a growing number of ISI taps, they are
functions of the target identification rate κ and the ISI rate l,
respectively. Moreover, we show that optimizing the ISI
rate l (or equivalently the symbol rate) leads to an effective
identification rate [bits/s] that scales sub-linearly (but faster than logarithmically) with n.
This result is in contrast to the typical transmission rate
in [bits/s], which is independent of n.
Technical novelty in the capacity proof: To obtain the
proposed lower bound on the DKI capacity, we analyze the
input space imposed by the input constraints and exploit
it for an appropriate sphere packing (non-overlapping
spheres with identical radius), namely, we consider the
packing of hyper spheres inside a larger hyper cube, where the sphere
radius grows with the codeword length n, the target identification
rate κ, and the ISI rate l, i.e., ∼ n^{(1+κ+4l)/4}. Unlike
the existing constructions for Gaussian channels [46], [47],
where the radius of the spheres vanishes for asymptotic codeword
length n, i.e., n → ∞, here, the radius of the hyper
spheres tends to infinity with polynomial growth, i.e.,
∼ n^{(1+κ+4l)/4}. This packing incorporates the impact of
the size of the target message set and of the ISI as functions
of κ and l, respectively. For the derivation of the upper bound
on the DKI capacity, we assume that an arbitrary sequence
of codes with vanishing error probabilities is given. Then,
for such a sequence of codes, we prove that a certain
minimum distance between the codewords is guaranteed.
Unlike for the previous constructions for Gaussian [46], [47]
and memoryless channels, here this distance converges to
zero more rapidly for asymptotic codeword length n;
it depends on the target identification rate and the ISI
rate and decreases as K and L grow, respectively.
E. Organization
The remainder of this paper is structured as follows. Sec-
tion II reviews previous results for the DI, RI, and DKI prob-
lems and includes background information. In Section III, the
system model is presented and the required preliminaries
regarding DKI codes are established. Section IV provides
the main contributions and results on the K-identification
capacity of the DTPC with ISI. Finally, Section V concludes
the paper with a summary and directions for future research.
The notations adopted throughout this paper are summa-
rized in Table 1. Moreover, all logarithms are to base two.
II. Background on Identification Problem
In this section, we establish the required background for our
work and introduce the identification problem. Furthermore,
we review relevant previous results on the randomized-
encoder identification (RI), DI, and DKI capacities for dif-
ferent channels.
A. Identification Problem
In Shannon’s communication paradigm [48], a sender, Alice,
encodes her message in a manner that will allow the receiver,
Bob, to reliably recover the message. In other words, the
receiver’s task is to determine which message was sent.
In contrast, in the identification setting, the coding scheme
is designed to accomplish a different objective [31]. The
decoder’s main task is to determine whether a particular
message was sent or not, while the transmitter does not know
which message the decoder is interested in.
Randomized identification: Ahlswede and Dueck [31]
introduced an RI scheme, in which the codewords are
tailored according to their corresponding random source
(distribution). Note that such an approach cannot increase the
TR capacity for Shannon’s message transmission task [49].
On the other hand, Ahlswede and Dueck [31] established
that given local randomness at the encoder, reliable identifi-
cation is accomplished with a codebook size that is double-
exponential in the codeword length n, i.e., ∼ 2^{2^{nR}} [31],
where R is the coding rate. This behavior differs radically
from the conventional message transmission setting, where
the codebook size grows only exponentially with the code-
word length, i.e., ∼ 2^{nR}. Therefore, RI yields an exponential
gain in the codebook size compared to the transmission
problem. The construction of RI codes has been considered
in previous works [50], [51]. For example, in [50], a binary
code is constructed based on a three-layer concatenated
constant-weight code.
Deterministic identification: The realization of RI
codes can be challenging in practice since they require the
implementation of probability distribution functions. There-
fore, from a practical point of view, it is of interest to
consider the case where the codewords are not selected
based on distributions but rather by means of a deterministic
mapping from the message set to the input space. This is
known as DI in which the encoder is a deterministic function.
K-Identification framework: For the standard DI or RI
problems [31], [47], the receiver is interested in identifying a
single message, that is, it selects an arbitrary message called
target message and using a decision rule (decoder) decides
whether or not this target message is identical to the sent
message. For the K-identification problem [52], the receiver
selects a subset of K messages from the message set, called the
target message set (denoted by \mathcal{K}), and in contrast to the
standard DI or RI problems, it decides whether or not the
sent message belongs to this target message set. We note
that such a target message set can in general be any arbitrary
subset of the message set of size K, where the total possible
number of such subsets is \binom{M}{K}. The K-identification scenario
may be interpreted as a generalization of the standard DI
or RI problems in the sense that the target message at the
receiver is substituted with a set of K target messages, where
K ≥ 1. That is, DKI for the special case K = 1
corresponds to the standard DI problem considered in [45],
[53].
FIGURE 1: End-to-end transmission chain for DKI communication in a generic MC system modelled as a Poisson channel with ISI. The transmitter maps
message i onto a codeword c_i = (c_{i,t})_{t=1}^{n}. The receiver selects an arbitrary target message set \mathcal{K} = {j_1, ..., j_K} and, given the channel output vector
Y = (Y_t)_{t=1}^{n}, asks whether or not the sent message i belongs to the target message set \mathcal{K}.
B. Related Work on DI Capacity
In the deterministic coding setup for identification, for
discrete memoryless channels (DMCs), the codebook size
grows only exponentially in the codeword length, similar
to the conventional transmission problem [31], [54]–[58].
However, the achievable identification rates are significantly
higher compared to the transmission rates [47], [55]. Com-
pared to RI codes, DI codes often have the advantage
of simpler implementation and simulation [59], [60] and
explicit construction [61]. In [47], [55], DI for DMCs
with an average power constraint is considered and a full
characterization of the DI capacity is provided. In [46], [47],
Gaussian channels with fast and slow fading and subject to
an average power constraint are studied and the codebook
size is shown to scale as ∼ 2^{(n log n)R}. DI is also studied
in [62] for Gaussian channels in the presence of feedback
and in [63] for general continuous-time channels with infinite
alphabets. Furthermore, DI for MC channels modelled as
DTPC without ISI and the Binomial channel is studied in
[26], [27], [45], [53], where the scale of the size of the
codebook is shown to be ∼ 2^{(n log n)R}.
C. Related Work on DKI Capacity
Randomized K-identification for the DMC is studied
in [52], where, assuming K = 2^{κn}, the set of all
achievable pairs of the identification coding rate R
and the target identification rate κ is shown to contain
{(R, κ) : R, κ ≥ 0; R + 2κ ≤ C_TR}. Assuming K =
2^{κ log n}, DKI for slow fading channels, denoted by G_slow,
subject to an average power constraint and a codebook size
of super-exponential scale, i.e., ∼ 2^{(n log n)R}, is studied in
[64], [65], and the following bounds on the DKI capacity are
derived: (1 − κ)/4 ≤ C_DKI(G_slow, M, K) ≤ 1 + κ, for 0 ≤
κ < 1. Also, a full characterization of the DKI capacity for
the binary symmetric channel subject to a Hamming weight
constraint is established in [66]. On the other hand, to the
best of the authors' knowledge, the DKI capacity of the
DTPC with ISI (which is relevant for MC systems) has not
yet been studied in the literature, and hence it is the main
focus of this paper.
III. System Model and K-Identification Coding
In this section, we present the adopted system model and
establish some preliminaries regarding DKI coding.
A. System Model
In this paper, we consider a K-identification-focused com-
munication setup, where the decoder aims to accomplish
the following task: Determining whether or not a specific
received message belongs to a target set of messages of
size K; see Figure 1. To achieve this objective, a coded
communication between the transmitter and the receiver over
nuses of the MC channel is established by modulating
the molecule concentration. We assume that the transmitter
releases molecules with rate x_t (molecules/second) during
T_R seconds at the beginning of each symbol interval of
length T_S seconds [13]. These molecules propagate
through the channel via diffusion and/or advection, and may
even experience degradation in the channel via enzymatic
reactions [11]. The ISI of the channel is modelled by a length-L
sequence of probability values, i.e., p = (p_0, p_1, ..., p_{L−1}),
where the value p_l ∈ (0,1] denotes the probability that a given
molecule released by the transmitter in the t-th channel use
is observed at the receiver during time slot t + l. Further, let
ρ ≜ (ρ_0, ..., ρ_{L−1}), where ρ_l ≜ p_l T_R.
We assume a counting-type receiver3. Examples of such
receivers include the transparent receiver, which counts the
molecules that are at a given time within its sensing volume
[67], the fully absorbing receiver, which absorbs and counts
the molecules hitting its surface within a given time interval
[68], and the reactive receiver which counts the molecules
bound to the ligand proteins on its surface at a given time
[69]. The value of p_l depends on parameters such as the
diffusion coefficient of the molecules, D, the propagation
environment (e.g., diffusion, advection, and reaction processes),
the distance between transmitter and receiver, d, and
the type of reception mechanism (e.g., transparent, absorbing,
or reactive receiver); see [11, Sec. III] for the characterization
of p_l for various setups. For instance, assuming
instantaneous release (i.e., T_R → 0) of molecules by a point-source
transmitter, molecule propagation via diffusion in an
unbounded three-dimensional environment, and a
uniform concentration approximation of molecules within
the reception volume of a transparent receiver, p_l can be
obtained as [11]

p_l = V_rx / (4πDτ_l)^{3/2} · e^{−d²/(4Dτ_l)},    (1)

where V_rx is the reception volume size and τ_l ≜ l T_S + τ̄, with
l ∈ {0, ..., L−1}, denotes the sampling time at the receiver,
where τ̄ is a constant time offset between the release time by
the transmitter and the sampling time at the receiver within each
symbol duration.
³We note that there are different types of receivers in MC systems, including timing receivers, counting receivers, concentration-based receivers, and receivers using secondary/indirect signals; see [3], [11] for a comprehensive review. We adopt counting receivers in this paper since, on the one hand, they are not as complex as timing receivers and, on the other hand, they are more accurate than concentration-based receivers or receivers that employ secondary/indirect signals.
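For concreteness, the short sketch below evaluates (1); the diffusion coefficient, distance, reception volume, and timing values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def cir_tap_probability(l, T_S, tau_bar, V_rx, D, d):
    """p_l from (1): transparent receiver, 3-D unbounded diffusion,
    instantaneous point release; tau_l = l*T_S + tau_bar is the sampling time."""
    tau_l = l * T_S + tau_bar
    return V_rx / (4 * np.pi * D * tau_l) ** 1.5 * np.exp(-d**2 / (4 * D * tau_l))

# Assumed, illustrative values: D in m^2/s, d in m, V_rx in m^3, times in s.
D, d = 5e-10, 1e-6
V_rx = 4 / 3 * np.pi * (0.2e-6) ** 3
T_S, tau_bar, L = 5e-3, 1e-3, 8
p = np.array([cir_tap_probability(l, T_S, tau_bar, V_rx, D, d) for l in range(L)])
print(p)  # a slowly decaying CIR tail, i.e., inter-symbol interference
```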
Assuming that the release, propagation, and reception
of individual molecules are statistically identical but independent
of each other, the received signal follows Poisson
statistics when the number of released molecules is large, i.e.,
x_t T_R ≫ 1 [11, Sec. IV]. We assume that X ∈ R_{≥0} and
Y ∈ N_0 denote RVs modeling the rate of molecule release
by the transmitter and the number of molecules observed at
the receiver, respectively. The channel output Y is related to
the channel input X according to

Y_t = Pois(X_t^ρ + λ),    (2)

where

X_t^ρ ≜ Σ_{l=0}^{L−1} ρ_l X_{t−l}    (3)

is the mean number of observed molecules at the receiver
after the release of molecules at time t. The constant λ ∈
R_{>0} is the mean number of observed interfering molecules
originating from external noise sources which employ the
same type of molecule as the considered MC system. Let

\vec{x}_t ≝ (x_{t−L+1}, ..., x_t)

be the vector of the L most recently released symbols.
Considering the Poisson distribution provided in (2), the
letter-wise conditional distribution of the output of the DTPC
with ISI, P, is given by

V(Y_t | \vec{x}_t) = e^{−(X_t^ρ + λ)} (X_t^ρ + λ)^{Y_t} / Y_t!.    (4)
Standard transmission schemes employ strings of letters
(symbols) of length n, referred to as codewords, that is,
the encoding scheme uses the channel in nconsecutive
symbol intervals to transmit one message. Since the channel
is dispersive, each output symbol is influenced by the L
most recent input symbols. As a consequence, the receiver
observes a string of length n̄ = n + L − 1, referred to as the
output vector (received signal). Since the ISI of the channel,
characterized by p = (p_0, p_1, ..., p_{L−1}), has length L, we
assume that different channel uses, given any L previous
input symbols, are statistically independent. Therefore, for n
channel uses, the transition probability distribution is given
by

V^{n̄}(y | x) = ∏_{t=1}^{n̄} V(Y_t | \vec{x}_t) = ∏_{t=1}^{n̄} e^{−(X_t^ρ + λ)} (X_t^ρ + λ)^{Y_t} / Y_t!,    (5)

where x = (x_1, ..., x_n) and y = (y_1, ..., y_{n̄}) denote the
transmitted codeword and the received signal, respectively.
We assume that x_t = 0 for t > n or t ≤ 0. We impose
peak and average molecule release rate constraints on the
codewords as follows:

0 ≤ x_t ≤ P_max and (1/n) Σ_{t=1}^{n} x_t ≤ P_avg,    (6)

respectively, ∀t ∈ [[n]], where P_max > 0 and P_avg > 0 constrain
the rate of molecule release per channel use and over
the entire n channel uses in each codeword, respectively.
Imposing such constraints on the rate of the released molecules
is motivated by the fact that the molecule reservoir contains only
a finite and limited number of signalling molecules, and the
constraints guarantee that, for a large number of channel uses,
the number of stored molecules suffices.
Unlike the classical average power constraint imposed on the
input of the Gaussian channel, which is a non-linear function
of the symbols signifying the signal (symbol) energy, here,
for the DTPC with ISI, the average constraint is a linear
function of the symbols, signifying the number of released
molecules normalized by the codeword length [13].
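To make the channel law (2)-(6) concrete, the following sketch simulates one transmission over the DTPC with ISI. The codeword length, CIR vector ρ, noise mean λ, and release-rate limits are assumed, illustrative values and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not taken from the paper).
n, P_max, P_avg, lam = 20, 100.0, 40.0, 2.0
rho = np.array([0.8, 0.4, 0.2, 0.1])   # ISI taps rho_l = p_l * T_R, here L = 4

# A codeword respecting the peak and average release-rate constraints in (6).
x = rng.uniform(0.0, P_max, size=n)
x *= min(1.0, P_avg / x.mean())        # enforce (1/n) * sum_t x_t <= P_avg

# Channel law (2)-(5): Y_t ~ Poisson(sum_l rho_l * x_{t-l} + lambda), t = 1..n+L-1,
# with x_t = 0 outside the codeword; np.convolve implements the ISI sum in (3).
mean = np.convolve(x, rho) + lam       # length n + L - 1, i.e., the output length n_bar
y = rng.poisson(mean)
print(y.shape, y[:10])
```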
B. DKI Coding For The Poisson Channel With ISI
The definition of a DKI code for the Poisson channel with
ISI, P, is given below.

Definition 1 (ISI-Poisson DKI Code). An (n, M(n, R), K(n, κ), L(n, l), e_1, e_2) DKI code for a Poisson channel
with ISI, P, under average and peak molecule release rate
constraints of P_ave > 0 and P_max > 0, respectively, and
for integers M(n, R), K(n, κ), and L(n, l), where n, R, κ,
and l are the codeword length, the DKI coding rate, the
target identification rate, and the ISI rate, respectively, is
defined as a system (C, T), which consists of a codebook
C = {c_i} ⊂ R_+^n, with i ∈ [[M]], such that

0 ≤ c_{i,t} ≤ P_max and (1/n) Σ_{t=1}^{n} c_{i,t} ≤ P_avg,    (7)

∀i ∈ [[M]], ∀t ∈ [[n]], and a decoder T_\mathcal{K} ⊂ N_0^{n̄}, where \mathcal{K}
is an arbitrary subset of size K, that is, \mathcal{K} ∈ {\mathcal{G} ⊂ [[M]] : |\mathcal{G}| = K}. Given a message i ∈ [[M]], the encoder sends
c_i, and the decoder's task is to perform a binary hypothesis
test: Was a target message j ∈ \mathcal{K} sent or not? There exist
two types of errors that may happen⁴ (see Figure 2):
Type I error: Rejection of the correct message, i ∈ \mathcal{K}.
Type II error: Acceptance of a wrong message, i ∉ \mathcal{K}.
The associated error probabilities of the DKI code read

P_{e,1}(i, \mathcal{K}) = Pr(Y ∈ T_\mathcal{K}^c | x = c_i) = 1 − Σ_{y ∈ T_\mathcal{K}} V^{n̄}(y | c_i),  ∀ i ∈ \mathcal{K},    (8)

P_{e,2}(i, \mathcal{K}) = Pr(Y ∈ T_\mathcal{K} | x = c_i) = Σ_{y ∈ T_\mathcal{K}} V^{n̄}(y | c_i),  ∀ i ∉ \mathcal{K},    (9)

and e_1, e_2 > 0 fulfill the bounds P_{e,1}(i, \mathcal{K}) ≤ e_1, ∀ i ∈ \mathcal{K},
and P_{e,2}(i, \mathcal{K}) ≤ e_2, ∀ i ∉ \mathcal{K}.
⁴The error requirement as imposed by the DKI code definition applies to all possible choices of the set \mathcal{K}, i.e., \binom{M}{K} cases; see [70, p. 140] for further details on K-identification codes.
FIGURE 2: Depiction of a DKI setting with K = 3 and target message set \mathcal{K} = {2, 3, 5} colored in blue. In the correct identification event, the channel
output is detected in the union of the individual decoders T_j, where j belongs to the target message set \mathcal{K}. A type I error event occurs if the channel
output is observed in the complement of the union of the individual decoders to which the index of the sent codeword belongs. A type II error event occurs
if the channel output is detected in the union of the individual decoders T_j with j ∈ \mathcal{K}, but the index of the sent codeword does not belong to \mathcal{K}.
Note that correct K-identification implies that neither type-
I nor type-II errors occur. In this paper, we are interested in
the asymptotic case where arbitrarily small error probabilities
are achievable for sufficiently large codeword length n.
Definition 2 (DKI Coding / Target Identification / ISI Rates).
The size of the codebook M(n, R), the size of the target
message set K(n, κ), and the number of ISI taps L(n, l)
are sequences of monotonically non-decreasing functions of the
codeword length n, with R, κ, and l denoting the DKI coding
rate, target identification rate, and ISI rate, respectively.
In particular, we consider the following functions in this
paper:

M(n, R) = 2^{(n log n)R},  K(n, κ) = 2^{κ log n},  L(n, l) = 2^{l log n}.
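To illustrate these scales numerically (a small sketch; the specific n, R, κ, and l values below are arbitrary admissible choices, not from the paper), note that K = 2^{κ log n} = n^κ and L = 2^{l log n} = n^l grow only polynomially in n, whereas log M = (n log n)R grows super-linearly.

```python
import math

n, R, kappa, l = 10_000, 0.2, 0.3, 0.1   # arbitrary illustrative values, kappa + 4l < 1
log2_M = n * math.log2(n) * R            # codebook size is 2^{(n log n) R}
K = 2 ** (kappa * math.log2(n))          # = n^kappa, sub-linear in n
L = 2 ** (l * math.log2(n))              # = n^l, sub-linear in n
print(f"log2 M ~ {log2_M:.0f} bits, K ~ {K:.1f}, L ~ {L:.1f}")
```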
Definition 3 (Achievable Rate Region). The triple of rates
(R, κ, l) is called achievable if, for every e_1, e_2 > 0 and
sufficiently large n, there exists an (n, M(n, R), K(n, κ),
L(n, l), e_1, e_2)-ISI-Poisson DKI code. Then, the set of all
achievable rate triples (R, κ, l) is referred to as the achievable
rate region for P.

Definition 4 (Capacity Region / Capacity). The operational
DKI capacity region of the ISI-Poisson channel, P, is defined
as the closure of the set of all achievable rate triples (R, κ, l). The
supremum of the identification coding rate R is called the
identification capacity and is denoted by C_DKI(P, M, K, L).
IV. DKI Capacity of The Poisson Channel With ISI
In this section, we first present our main results, i.e., lower
and upper bounds on the achievable DKI rates for P.
Subsequently, we provide detailed proofs of these bounds.
A. Main Results
The DKI capacity theorem for Pis stated below.
Theorem 1. Consider the DTPC with ISI, P, and assume that
both the target message set and the number of ISI channel
taps grow sub-linearly with the codeword length, i.e.,

K(n, κ) = 2^{κ log n} and L(n, l) = 2^{l log n},

respectively, where κ ∈ [0,1), l ∈ [0,1/4), and κ + 4l ∈
[0,1). Then, the DKI capacity of P subject to average
and peak molecule release rate constraints of the form
n^{−1} Σ_{t=1}^{n} c_{i,t} ≤ P_ave and 0 ≤ c_{i,t} ≤ P_max, respectively,
with i ∈ [[M]], and a codebook of super-exponential scale,
i.e., M(n, R) = 2^{(n log n)R}, is bounded by

(1 − (κ + 4l))/4 ≤ C_DKI(P, M, K, L) ≤ 3/2 + κ + l.
Proof:
The proof of Theorem 1 consists of two parts, namely the
achievability and the converse proofs, which are provided in
Sections IV-B and IV-C, respectively.
In the following, we highlight some insights obtained from
Theorem 1 and its proof.
Rate region: Theorem 1 unveils the feasible region for
three different rates, namely, the DKI achievable rate R, the
ISI rate l, and the target identification rate κ. The geometric
structure for all possible triples (κ, l, R) obtained from Theorem 1
is shown in Figure 3. A tetrahedron characterizes the
feasible triples (κ, l, R) for which a communication
system can accomplish the task of K-identification for a
DTPC with L ISI taps at a DKI achievable rate of at least
R, where K = 2^{κ log n} and L = 2^{l log n}.
The cross section of the tetrahedron with the plane R = 0 determines
the feasible region for the rate pairs (κ, l); see Figure 4.
This region can also be derived by the following argument:
Since the target identification rate κ, the ISI rate l, and the
lower bound on the DKI capacity given in Theorem 1 are
non-negative rate values, we obtain 0 ≤ κ < 1, 0 ≤ l < 1/4,
and 0 ≤ κ + 4l ≤ 1. The first two constraints, involving
only κ and l, respectively, yield a rectangle having two of
its corners at the origin (0,0) and at (1, 1/4), and the third
joint constraint on κ and l, i.e., 0 ≤ κ + 4l ≤ 1, excludes
those rate pairs (κ, l) from this rectangle for which
the corresponding lower bound on the DKI capacity would
be strictly negative. In addition, we note that more
sophisticated coding schemes may result in an achievable
region for rate pairs (κ, l) beyond the blue line in Figure 4.
FIGURE 3: Illustration of the achievable rate region for the rate triples (κ, l, R) for the DTPC with ISI, P. The DKI capacity region of P includes the entire
depicted convex tetrahedron with vertices at (κ = 0, l = 0, R = 0), (κ = 1, 0, 0), (0, l = 1/4, 0), and (0, 0, R = 1/4). The plane formed by the three
extreme points marked in blue is characterized by κ + 4l + 4R − 1 = 0, which is derived by considering the equality case of the lower bound on the
DKI capacity in Theorem 1. The subspace inscribed by this plane and three other constraints, namely, 0 ≤ κ < 1, 0 ≤ l < 1/4, and R ≥ 0, defines the
entire rate region.
Adopted decoder: Before going through the details of
the achievability proof, we provide some insight into the
proposed decoder. In particular, in the proposed achievable
scheme, we adopt a distance decoder that decides in favour
of a candidate codeword based on the distance between the
received vector and the expected value of the received vector if
this candidate codeword had really been sent by the transmitter.
More specifically, upon observing an output sequence y at
the receiver, the decoder declares that message j was sent if
the following condition is met:

‖y − E(Y | c_j)‖² − ‖y‖_1 ≤ n̄ δ_n,    (10)

where δ_n is referred to as the decoding threshold and c_j =
(c_{j,1}, ..., c_{j,n}) is the codeword associated with message j.
Unlike the distance decoder used for Gaussian channels [46],
which includes only the distance term ‖y − E(Y | c_j)‖, the
proposed decoder in (10) requires the subtraction of an
additional correction term ‖y‖_1. This correction term stems
from the fact that the noise in the DTPC with ISI is signal
(input codeword) dependent [11]. Therefore, the variance of
‖y − E(Y | c_j)‖ depends on the adopted codeword c_j, which
implies that, unlike for the Gaussian channel, here the radius
of the decoding region is not constant across the codewords.
To account for this fact, we include the correction term ‖y‖_1.
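A minimal sketch of this decision rule is given below. It assumes E(Y | c) = conv(c, ρ) + λ·1, as implied by (2)-(3), and writes the threshold as n̄·δ_n following the reconstruction of (10) above; the parameter values a user would pass in are illustrative assumptions.

```python
import numpy as np

def convoluted_codeword(c, rho):
    """Convoluted codeword c^rho (length n + L - 1), cf. (3)."""
    return np.convolve(c, rho)

def in_decoding_region(y, c, rho, lam, delta_n):
    """Distance-decoder test (10): ||y - E[Y|c]||^2 - ||y||_1 <= n_bar * delta_n."""
    mean = convoluted_codeword(c, rho) + lam   # E[Y | c] = c^rho + lambda * 1
    n_bar = len(y)
    return np.sum((y - mean) ** 2) - np.sum(np.abs(y)) <= n_bar * delta_n

def k_identify(y, codebook, target_set, rho, lam, delta_n):
    """Answer 'Yes' if y falls in the decoding region of any target codeword."""
    return any(in_decoding_region(y, codebook[j], rho, lam, delta_n) for j in target_set)
```

The union over the K individual decoding regions in the last function mirrors the correct-identification event depicted in Figure 2.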
Corollary 1 (DI Capacity of the ISI-free DTPC). The
lower and upper bounds on the DKI capacity of the DTPC
with ISI, P, converge for l, κ → 0 to their maximum
possible and minimum possible values,
i.e., 1/4 and 3/2, respectively. Specifically, for L = K =
1, i.e., l = κ = 0, Theorem 1 recovers the results for the
memoryless standard DI problem studied in [26], [45]:

1/4 ≤ C_DKI(P, M, K = 1, L = 1) ≤ 3/2.    (11)

Proof:
The proof follows directly by substituting the extreme values
of l and κ in the capacity results of Theorem 1.
Remark 1.The lower bound on the DKI capacity in Theo-
rem 1 suggests that by considering a dispersive communica-
tion system or allowing the receiver to identify its favourite
message among a larger set of target messages, a penalty
on the value of the lower bound is incurred. However, an
increase in the exponent l for the number of ISI channel
taps has a four times larger impact on the proposed lower
bound. Another observation is that for a communication
setting, where a fixed given lower bound on the identification
performance in terms of the maximum achievable rate is
required, there is a trade-off between the target identification
rate and the ISI rate.
FIGURE 4: Illustration of the achievable rate region for rate pairs (κ, l) (target identification rate κ on the horizontal axis, ISI rate l on the vertical axis) for the DTPC with ISI, P, which is the set of all points inscribed by the blue
line κ + 4l = 1 (obtained by equating the lower bound given in Theorem 1 to zero) as well as the horizontal and vertical axes. The extreme points (0, 0.25) and
(1, 0) correspond to DI with the maximum possible number of ISI taps [1] and DKI with the maximum size of the target message set, respectively.
Walking along the blue line towards either of the extreme points exemplifies the trade-off between the target identification rate and the ISI rate. The origin
(0, 0) corresponds to the standard identification scheme (i.e., DI for the DTPC without ISI), where the set of target messages has only one element and
the channel is memoryless [26], [27].
Corollary 2 (Effective Identification Rate). Let us assume
that the physical length of the CIR interval is fixed and
given by T_cir. Further, assume that the L ISI taps span the
CIR interval T_cir. Then, the following relation between the
symbol duration, T_S, and the number of ISI taps, L, holds:

T_S = T_cir / L = T_cir 2^{−l log n},    (12)

for some l ∈ [0,1/4) with κ + 4l ∈ [0,1). Now, let the
effective identification rate, R̄_eff, be defined as

R̄_eff ≝ log M(n, R) / (n T_S)    (13)

(in bits/s). Then, the effective identification rate subject
to average and peak molecule release rate constraints is
bounded by

(1 − (κ+4l)) n^l log n / (4 T_cir) ≤ R̄_eff ≤ (3 + 2(κ+l)) n^l log n / (2 T_cir).    (14)

Proof:
The proof follows directly by substituting the capacity results
of Theorem 1 into the definition of the effective rate and
performing some mathematical simplifications.
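For completeness, a sketch of these simplifications, using (12), (13), and M(n, R) = 2^{(n log n)R}, is:

```latex
\bar{R}_{\mathrm{eff}}
  = \frac{\log M(n,R)}{n\,T_S}
  = \frac{(n\log n)\,R}{n\,T_{\mathrm{cir}}\,2^{-l\log n}}
  = \frac{R\,n^{l}\log n}{T_{\mathrm{cir}}},
\qquad
\frac{1-(\kappa+4l)}{4}\le R \le \frac{3}{2}+\kappa+l
\;\Longrightarrow\;
\frac{\bigl(1-(\kappa+4l)\bigr)n^{l}\log n}{4\,T_{\mathrm{cir}}}
\le \bar{R}_{\mathrm{eff}} \le
\frac{\bigl(3+2(\kappa+l)\bigr)n^{l}\log n}{2\,T_{\mathrm{cir}}}.
```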
Remark 2. Theorem 1 assumes that the number of ISI taps
L(n, l) scales sub-linearly in the codeword length n, i.e.,
∼ 2^{l log n}. More specifically, the L used in Theorem 1 may
comprise the following three different cases:
1) ISI-free, L = 1: This case corresponds to an ISI-free
setup, which is valid when the symbol duration is large
(T_S ≥ T_cir), and implies L = 1 and l = 0. Thereby, R̄_eff
scales logarithmically with the codeword length n. This
is in contrast to the transmission setting, where R̄_eff is
independent of n (e.g., the well-known Shannon formula
for the Gaussian channel). This result is known in the
identification literature [31], [45].
2) Constant L > 1: When T_S is constant and T_S < T_cir,
we have a constant L > 1, which implies l → 0 as n → ∞.
Surprisingly, our capacity result in Theorem 1 reveals
that the bounds for the DTPC with memory are in fact
identical to those for the memoryless DTPC given in [45].
3) Growing L: Our capacity result shows that reliable
identification is possible even when L scales with the
codeword length as ∼ 2^{l log n}. Moreover, the impact of the ISI
rate l is reflected in the capacity lower and upper bounds
in Theorem 1, where the bounds respectively decrease and
increase in l. While the upper bound on R̄_eff increases in l,
too, the lower bound in (14) suggests a trade-off in terms
of l, which is investigated in Corollary 3.
Corollary 3 (Optimum ISI Rate). The lower bound given
in Corollary 2 is maximized for the following ISI rate:

l_max(n) = (1/4)(1 − κ − 4/ln n),    (15)

where n ∈ N. Moreover, the ISI rate l_max
provided in (15) yields the following lower bound on the
effective identification rate, R̄_eff(n):

R̄_eff(n) ≥ (log e / (e T_cir)) · n^{(1−κ)/4}.    (16)

Thereby, the normalized effective identification rate is lower
bounded as follows:

liminf_{n→∞} R̄_eff(n) / n^{(1−κ)/4} ≥ log e / (e T_cir).    (17)
FIGURE 5: Illustration of the lower bound on the effective identification rate R̄_eff provided in (14) for target identification rates κ = 0, 0.1, 0.2 and
codeword length n = 10^4. The ISI rate l that yields the maximum value of each curve is marked by a yellow star and coincides with the optimal
l_max provided in (15).
Proof:
The proof follows from differentiating the lower bound in
Corollary 2 with respect to l and equating the result to zero.
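A sketch of this differentiation step (using n^l = e^{l ln n}, so that d n^l / dl = n^l ln n) reads:

```latex
\frac{\mathrm{d}}{\mathrm{d}l}\Bigl[(1-\kappa-4l)\,n^{l}\Bigr]
  = n^{l}\bigl[(1-\kappa-4l)\ln n - 4\bigr] = 0
\;\Longrightarrow\;
l_{\max}(n)=\frac{1}{4}\Bigl(1-\kappa-\frac{4}{\ln n}\Bigr),
\qquad
1-\kappa-4l_{\max}=\frac{4}{\ln n},
\quad
n^{l_{\max}}=\frac{n^{(1-\kappa)/4}}{e}.
```

Substituting these expressions into the lower bound of (14) and using log n / ln n = log e yields (16).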
The effective identification rate R̄_eff [bits/s] in (13) is
the product of two terms, namely the identification rate per
symbol log M(n, R)/n [bits/symbol] (which decreases with
l for the lower bound provided in Theorem 1) and the symbol
rate 1/T_S [symbol/s] (which increases with l). The above
corollary reveals that, in order to maximize R̄_eff, it is optimal
to set the trade-off for l such that the identification rate, i.e.,

log M(n, R) / n = (1 − (κ + 4 l_max)) log n / 4 = log e,    (18)

becomes independent of n, but the symbol rate scales polynomially
with fractional exponent in n, i.e.,

1/T_S = n^{(1−κ)/4} / T_cir = 2^{O(log n)}.    (19)

As a result, in contrast to the typical transmission setting,
where the effective rate is independent of n, here, the
effective identification rate R̄_eff for the optimal l grows
sub-linearly in n. Moreover, the sub-linear increase of the
effective rate in n is faster compared to the typical scenario,
where T_S (and hence L) is fixed and l = 0, and the effective
rate, i.e., ((1 − κ) log n)/4, increases logarithmically in n.
Fig. 5 shows the lower bound on the effective identification
rate R̄_eff in (14) for target identification rates κ = 0, 0.1, 0.2
and codeword length n = 10^4. Note that n should be
large since our capacity results are valid asymptotically. As
expected, each curve in Fig. 5 has a unique maximum at an
ISI rate l that coincides with l_max in (15).
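The behavior in Fig. 5 is easy to reproduce numerically; the sketch below evaluates the lower bound in (14) over l and compares its maximizer with (15). Here T_cir = 1 is an arbitrary assumed constant.

```python
import numpy as np

def reff_lower_bound(l, n, kappa, T_cir=1.0):
    """Lower bound in (14): (1 - (kappa + 4l)) * n**l * log2(n) / (4 * T_cir)."""
    return (1 - (kappa + 4 * l)) * n**l * np.log2(n) / (4 * T_cir)

n, kappa = 10**4, 0.1
ls = np.linspace(0.0, (1 - kappa) / 4, 10_001)     # admissible ISI rates
l_star_numeric = ls[np.argmax(reff_lower_bound(ls, n, kappa))]
l_star_formula = 0.25 * (1 - kappa - 4 / np.log(n))  # l_max from (15)
print(l_star_numeric, l_star_formula)              # the two maximizers agree
```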
In addition, based on Theorem 1, we can distinguish the
following three cases in terms of K:
DI, K = 1: This case accounts for a standard identification
setup (κ = 0), i.e., the degenerate case where the
target message set has only one element, namely, \mathcal{K} = {i},
with i ∈ [[M]] and |\mathcal{K}| = K = 1. Therefore, the identification
setup in the deterministic [47] and randomized
regimes [31] can be regarded as a special case of the K-identification
considered in this paper.
Constant K > 1: A constant K > 1 implies κ → 0 as
n → ∞. Our DKI capacity result in Theorem 1 reveals
that the bounds on the DKI achievable rate are identical
to those for K = 1.
Growing K: The DKI capacity bounds in Theorem 1
suggest that reliable identification is possible even when
K scales with the codeword length as 2^{κ log n}, for some
κ ∈ [0,1) with κ + 4l ∈ [0,1).
In the following, we provide the proof of Theorem 1,
namely the achievability proof in Section IV-B and the
converse proof in Section IV-C.
B. Lower Bound (Achievability Proof)
The achievability proof consists of the following two steps.
Step 1: We propose a codebook construction and derive
an analytical lower bound on the corresponding codebook
size using inequalities for the sphere packing density.
Step 2: We prove that this codebook leads to an achiev-
able rate by proposing a decoder and showing that the
corresponding type I and type II error probabilities vanish
as n → ∞.
A DKI code for the DTPC, P, is constructed as follows.
Input constraint adaptation: We restrict ourselves to
codewords that meet the condition 0 ≤ c_{i,t} ≤ P_ave, ∀i ∈
[[M]], ∀t ∈ [[n]], which ensures that both constraints in (7)
are met; consider the cases P_ave > P_max and P_ave ≤ P_max:
1) P_ave > P_max: In this case, the condition 0 ≤ c_{i,t} ≤
P_max, ∀i ∈ [[M]], ∀t ∈ [[n]], yields n^{−1} Σ_{t=1}^{n} c_{i,t} ≤ P_ave.
That is, the average constraint trivially holds, and we
exclude this scenario from the analysis.
2) P_ave ≤ P_max: Then, the condition 0 ≤ c_{i,t} ≤ P_ave, ∀i ∈
[[M]], ∀t ∈ [[n]], implies both 0 ≤ c_{i,t} ≤ P_max and
n^{−1} Σ_{t=1}^{n} c_{i,t} ≤ P_ave.
Thus, for the construction of the codebook in the next steps,
we only require that 0 ≤ c_{i,t} ≤ P_ave, ∀i ∈ [[M]], ∀t ∈ [[n]].
Convoluted codebook construction: In the following,
instead of directly constructing the original codebook C =
{c_i} ⊂ R_+^n, with i ∈ [[M]], we present a construction
of a codebook called the convoluted codebook and show that
the original codebook can be uniquely reconstructed from a
convoluted codebook. In particular, the convoluted codebook
is denoted by C^ρ = {c_i^ρ} ⊂ R_+^n, with i ∈ [[M]], where
each c_i^ρ ≜ (c_{i,1}^ρ, ..., c_{i,n}^ρ) is referred to as a convoluted
codeword whose symbols are formed as a linear combination
(convolution) of the L most recent symbols of codeword
c_i ≜ (c_{i,1}, ..., c_{i,n}) and the CIR vector ρ, i.e.,

c_{i,t}^ρ ≜ Σ_{l=0}^{L−1} ρ_l c_{i,t−l}.    (20)

Observe that the convoluted symbol c_{i,t}^ρ represents the expected
value of the signal observed at the receiver after
the release of c_{i,t} molecules by the transmitter. The proposed
convoluted codebook construction is motivated by the
structure of the ISI channel and the choice of the distance
decoder given in (10). More specifically, the term E(Y | c_j)
for j ∈ [[M]] given in (10) is the center of the distance
decoder and includes the convoluted codeword, i.e., c_j^ρ.
In order to use the convoluted codebook, we have to
show that the original codewords c_i can be uniquely derived
from the convoluted codewords c_i^ρ, i.e., there is a one-to-one
mapping between the convoluted and the original codebooks.
To show this, let us first define the sets of feasible original
and convoluted codewords, respectively, as

C_0 = Q_0(n, P_ave) ≜ {c_i ∈ R^n : 0 ≤ c_{i,t} ≤ P_ave, ∀i ∈ [[M]], ∀t ∈ [[n]]},    (21)

C_0^ρ ≜ {c_i^ρ ∈ R^n : c_{i,t}^ρ ≜ Σ_{l=0}^{L−1} ρ_l c_{i,t−l}, c_i ∈ C_0, ∀i ∈ [[M]]}.    (22)
Unfortunately, unlike the feasible set of the original codewords
C_0, the feasible set of the convoluted codewords
C_0^ρ lacks the simple structure and geometry needed for the
volume calculation and rate analysis. To cope with
this issue, we target a subset of C_0^ρ that enjoys a suitable
structure with well-known geometry and an analytic volume
formula, namely the following hyper cube:

Q_0(n, P̄_ave) = {c_i^ρ : 0 ≤ c_{i,t}^ρ ≤ P̄_ave, ∀i ∈ [[M]], ∀t ∈ [[n]]},    (23)

where

P̄_ave ≜ min_{i ∈ [[M]] : c_i^ρ ∈ C_1 ∩ C_2^c}  min_{t ∈ [[n]] : t−L+1 ≤ t̄ ≤ t} c_{i,t}^ρ,    (24)

where t̄ is a specific symbol index for which the corresponding
input symbol yields a non-zero number of released
molecules from the transmitter, i.e., T_R c_{i,t̄} ≥ 1. Moreover,
the sets C_1 and C_2 are given by

C_1 = Q_0(n, P'_ave) ≜ {c_i^ρ ∈ R^n : 0 ≤ c_{i,t}^ρ ≤ P'_ave, ∀i ∈ [[M]], ∀t ∈ [[n]]},
C_2 = {c_i^ρ ∈ R^n : c_{i,t} ≥ 0, ∀i ∈ [[M]], ∀t ∈ [[n]]},    (25)

where P'_ave ≜ ρ_0 P_ave.
Next, we have to show that the volume of Q_0(n, P̄_ave)
is non-zero (i.e., P̄_ave is bounded away from zero) and that
Q_0(n, P̄_ave) ⊂ C_0^ρ. The former follows from the fact that
P̄_ave tends to zero only if all symbols of at least one of
the original codewords are arbitrarily close to zero. Such a
single all-zero codeword can be excluded without affecting
the rate analysis. To prove Q_0(n, P̄_ave) ⊂ C_0^ρ, we show that
the original codeword c_i obtained from c_i^ρ ∈ Q_0(n, P̄_ave)
belongs to C_0, namely, the extracted original symbols must
meet 0 ≤ c_{i,t} ≤ P_ave. We first show that c_{i,t} ≥ 0 holds via
contradiction. In other words, we assume c_i^ρ ∈ Q_0(n, P̄_ave)
but the corresponding original codeword meets c_i ∈ C_2^c.
This already contradicts the fact that P̄_ave > 0, see (24). To
show c_{i,t} ≤ P_ave, we use the following chain of inequalities,
assuming c_i^ρ ∈ Q_0(n, P̄_ave):

ρ_0 c_{i,1} ≤ P̄_ave ≤ P'_ave
ρ_0 c_{i,2} + ρ_1 c_{i,1} ≤ P̄_ave ≤ P'_ave
ρ_0 c_{i,3} + ρ_1 c_{i,2} + ρ_2 c_{i,1} ≤ P̄_ave ≤ P'_ave
...
ρ_0 c_{i,n} + ρ_1 c_{i,n−1} + ... + ρ_{L−1} c_{i,n−L+1} ≤ P̄_ave ≤ P'_ave,    (26)

where P̄_ave ≤ P'_ave holds since Q_0(n, P̄_ave) ⊂ C_1, see (24).
The above inequalities can be rewritten as follows:

c_{i,1} ≤ P'_ave / ρ_0 = P_ave
c_{i,2} ≤ (P'_ave − ρ_1 c_{i,1}) / ρ_0 ≤ P'_ave / ρ_0 = P_ave
...
c_{i,n} ≤ (P'_ave − Σ_{l=1}^{L−1} ρ_l c_{i,n−l}) / ρ_0 ≤ P'_ave / ρ_0 = P_ave,    (27)

where we used the fact that c_{i,t} ≥ 0. Hence, the condition
c_{i,t} ≤ P_ave holds for the extracted original codewords.
In summary, we showed that for convoluted codewords
c_i^ρ ∈ Q_0(n, P̄_ave), there is a unique feasible original codeword
c_i ∈ Q_0(n, P_ave). Therefore, the rate analysis of the
convoluted codebook is also valid for the original codebook.
Calculation of the codebook size/rate: We use a packing
arrangement of non-overlapping hyper spheres of radius
r_0 = √(n ε_n) in a hyper cube with edge length P̄_ave, where

ε_n = 3a / (4 n^{(1/2)(1−(b+κ+4l))}),    (28)

and a > 0 is a non-vanishing fixed constant, 0 < b < 1 is
an arbitrarily small constant, and 0 ≤ κ + 4l < 1.
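As a numerical sanity check on the growth of the packing radius (the constants a and b below are arbitrary admissible choices, not values from the paper), r_0 = √(n ε_n) with ε_n from (28) indeed grows polynomially, roughly like n^{(1+κ+4l)/4}:

```python
import numpy as np

def sphere_radius(n, kappa, l, a=1.0, b=0.01):
    """r_0 = sqrt(n * eps_n) with eps_n from (28)."""
    eps_n = 3 * a / (4 * n ** (0.5 * (1 - (b + kappa + 4 * l))))
    return np.sqrt(n * eps_n)

kappa, l = 0.2, 0.05   # kappa + 4l = 0.4 < 1
for n in (10**3, 10**4, 10**5):
    print(n, sphere_radius(n, kappa, l), n ** ((1 + kappa + 4 * l) / 4))
```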
Let S denote a sphere packing, i.e., an arrangement of
M non-overlapping spheres S_{c_i^ρ}(n, r_0), i ∈ [[M]], that are
packed inside the larger cube Q_0(n, P̄_ave) with edge length
P̄_ave; see Figure 6. As opposed to standard sphere packing
coding techniques [71], the spheres are not necessarily entirely
contained within the cube. That is, we only require that
the centers of the spheres are inside Q_0(n, P̄_ave), the spheres
are disjoint from each other, and they have a non-empty
intersection with Q_0(n, P̄_ave).
FIGURE 6: Illustration of a saturated sphere packing inside a cube, where
small spheres of radius r_0 = √(n ε_n) cover a larger cube. Dark gray colored
spheres are not entirely contained within the larger cube, and yet they
contribute to the packing arrangement. As we assign a codeword to each
sphere center, the 1-norm and arithmetic mean of a codeword are bounded
by P̄_ave as required.
The packing density ∆_n(S)
is defined as the ratio of the saturated packing volume to the
cube volume Vol(Q_0(n, P̄_ave)), i.e.,

∆_n(S) ≜ Vol(∪_{i=1}^{M} S_{c_i^ρ}(n, r_0)) / Vol(Q_0(n, P̄_ave)).    (29)

A sphere packing S is called saturated if no spheres can be
added to the arrangement without overlap. In particular, we
use a packing argument that has a similar flavor as that for
the Minkowski–Hlawka theorem for saturated packings [71].
Specifically, consider the saturated packing arrangement of

∪_{i=1}^{M(n,R)} S_{c_i^ρ}(n, √(n ε_n))    (30)

spheres with radius r_0 = √(n ε_n) embedded within the cube
Q_0(n, P̄_ave). Then, for such an arrangement, we have the
following lower [72, Lem. 2.1] and upper [71, Eq. 45] bounds
on the packing density:

2^{−n} ≤ ∆_n(S) ≤ 2^{−0.599 n}.    (31)

In particular, in our subsequent analysis, we employ the
lower bound given in (31), which can be proved as follows:
For the saturated packing arrangement given in (30), there
cannot be a point in the larger cube Q_0(n, P̄_ave) with a
distance of more than 2r_0 from all sphere centers. Otherwise,
a new sphere could be added, which would contradict the assumption
that the union of the M(n, R) spheres with radius √(n ε_n) is
saturated. Now, if we double the radius of each sphere, the
spheres with radius 2r_0 cover thoroughly the entire volume
of Q_0(n, P̄_ave), that is, each point inside the hyper cube
Q_0(n, P̄_ave) belongs to at least one of the small spheres. In
general, the volume of a hyper sphere of radius r is given
by [71, Eq. (16)]

Vol(S_x(n, r)) = π^{n/2} / Γ(n/2 + 1) · r^n.    (32)

Hence, if the radius of the small spheres is doubled, the
volume of ∪_{i=1}^{M(n,R)} S_{c_i^ρ}(n, √(n ε_n)) is increased by 2^n. Since
the spheres with radius 2r_0 cover Q_0(n, P̄_ave), it follows
that the original r_0-radius packing⁵ has a density of at least
2^{−n}. We assign a convoluted codeword to the center c_i^ρ of
each small hyper sphere. The convoluted codewords satisfy
the input constraint 0 ≤ c_{i,t}^ρ ≤ P̄_ave, ∀t ∈ [[n]], ∀i ∈ [[M]],
which is equivalent to

∥c_i^ρ∥_∞ ≤ P̄_ave.    (33)

Since the volume of each sphere is equal to Vol(S_{c_1^ρ}(n, r_0))
and the centers of all spheres lie inside the cube, the total
number of spheres is bounded from below by

M = Vol(∪_{i=1}^{M} S_{c_i^ρ}(n, r_0)) / Vol(S_{c_1^ρ}(n, r_0)) = ∆_n(S) · Vol(Q_0(n, P̄_ave)) / Vol(S_{c_1^ρ}(n, r_0)) ≥ 2^{−n} · P̄_ave^n / Vol(S_{c_1^ρ}(n, r_0)),    (34)

where the inequality holds by (31). The bound in (34) can
be written as follows:

log M ≥ log( P̄_ave^n / Vol(S_{c_1^ρ}(n, r_0)) ) − n ≥ n log( P̄_ave / (√π r_0) ) + log Γ(n/2 + 1) − n,    (35)

where the last inequality exploits (32). The above bound can
be further simplified as follows:

log M ≥ n log( P̄_ave / (√π r_0) ) + log ⌊n/2⌋! − n,    (36)

where the inequality exploits the following relation:

Γ(n/2 + 1) =(a) (n/2) Γ(n/2) ≥(b) ⌊n/2⌋ Γ(⌊n/2⌋) =(c) ⌊n/2⌋!.    (37)

In the above equation, (a) holds by the recurrence relation
of the Gamma function [73] for real n/2, (b) follows from
n/2 ≥ ⌊n/2⌋ and the monotonicity of the Gamma function [73]
for ⌊n/2⌋ ≥ 1.46, i.e., n ≥ 4, and (c) holds since, for positive
integer ⌊n/2⌋, we have Γ(⌊n/2⌋) = (⌊n/2⌋ − 1)!, cf.
[73]. Next, we proceed to simplify the factorial term given
in (36). To this end, we exploit Stirling's approximation,
i.e., log n! = n log n − n log e + o(n) [74, p. 52], with the
substitution of n = ⌊n/2⌋, where ⌊n/2⌋ ∈ Z. Thereby, we
obtain
log Mnlog ¯
Pave nlog r0+n/2log n/2
n/2log e+on/2n, (38)
Therefore, for r0=n=an1+b+κ+4l
4,where a
3a/4,we have
log M
(a)
nlog ¯
Pave
πa1 + b+κ+ 4l
4nlog an
5We note that the proposed proof of the lower bound in (31) is non-
constructive in the sense that, while the existence of the respective satu-
rated packing is proved, no systematic construction method is provided.
12
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
+ (n/21) log n/21n/2log e+on/21n
(b)
nlog ¯
Pave
πa1 + b+κ+ 4l
4nlog an
+1
2nlog n2nlog nn
2log e+on/2
=1(b+κ+ 4l)
4nlog n
+nlog ¯
Pave/πae+O(n),(39)
where (a)follows from n
2>n
21and (b)holds since
log(t1) log t1for t2and n
2n
2for integer n.
Observe that the dominant term in (39) is of order nlog n.
Hence, to obtain a finite value for the lower bound on the
rate, R, (39) reveals that the scaling law of Mis 2(nlog n)R.
Therefore, we obtain
R1
nlog n1(b+κ+ 4l)
4nlog n
+n log ¯
Pave
πae!+O(n),(40)
which tends to (1 (κ+ 4l))/4when n and b0.
Encoder: Given message i[[M]],transmit x=ci.
Proposed decoder: In order to analyze the error perfor-
mance of the proposed codebook, we need to adopt a decoder
which is introduced next. Before we proceed, for the sake of
a concise analysis, we introduce the following conventions.
Let:
Yt(i)Pois(cρ
i,t +λ)denote the channel output at time
tgiven that x=ci.
The output vector is defined as the vector of symbols, i.e.,
Y(i)=(Y1(i), . . . , Y ¯n(i)).
¯yt(i)yt(i)(cρ
i,t +λ),where yt(i)is a realization of
Yt(i).
Furthermore, let
δn4ϵn/3=4a/3n1
2(1(b+κ+4l)),(41)
where 0<b<1is an arbitrarily small constant and 0
κ+ 4l < 1with κand lbeing the identification target rate
and the ISI rate, respectively. To identify whether a message
j[[M]] was sent, the decoder checks whether the channel
output ybelongs to the following decoding set,
T
j=nyN¯n
0:T(y,cj)δno,(42)
where
T(y;cj) = 1
¯n
¯n
X
t=1 ytcρ
j,t +λ2yt,(43)
is referred to as the decoding metric evaluated for observa-
tion vector yand codeword cj.Finally, let e1, e2>0and
ζ0, ζ
0, ζ1, ζ
1>0be arbitrarily small constants.
Error analysis: In the following, we exploit Cheby-
shev’s inequality in order to establish upper bounds for the
type I and type II error probabilities.
Type I error analysis: Consider the type I errors, i.e.,
the transmitter sends ci,yet Y/
T
i.For every i[[M]],
the type I error probability is bounded as
Pe,1(i,
K
) = Pr Y(i)Tc
K
= Pr Y(i)[
j
K
T
jc
(a)
= Pr Y(i)\
j
K
T
c
j
(b)
Pr Y(i)
T
c
i
= Pr T(Y(i),cj)> δn,(44)
where (a)holds by De Morgans law for a finite number
of union of sets [75], i.e., (Sj
K
T
j)c=Tj
K
T
c
jand (b)
follows since Tj
K
T
c
j
T
j.
In order to bound Pe,1(i,
K
)in (44), we apply Chebyshev’s
inequality, namely
Pr T(Y(i),ci)ET(Y(i),ci)> δn
Var T(Y(i),ci)
δ2
n
.(45)
First, we calculate the expectation of the decoding metric as
follows
ET(Y(i),ci)
(a)
=1
¯n
¯n
X
t=1
EYt(i)cρ
i,t +λ2EYt(i)
(b)
=1
¯n
¯n
X
t=1
Var Yt(i)cρ
i,t +λ
(c)
=1
¯n
¯n
X
t=1 cρ
i,t +λcρ
i,t +λ= 0,(46)
where (a)follows from the linearity of expectation, (b)holds
since E[(Yt(i)E[Yt(i)])2] = Var[Yt(i)] and E[Yt(i)] =
cρ
i,t +λ, and (c)follows since Var[Yt(i)] = E[Yt(i)] = cρ
i,t +
λ. Second, in order to compute the upper bound in (45), we
proceed to compute the variance of the decoding metric. Let
us define
ψVar
¯n
X
t=1
VarY2
t(i)Yt(i).(47)
Since, conditioned on ci,the channel outputs conditioned on
the Lmost recent input symbols are uncorrelated, we obtain
Var T(Y(i); ci)=ψVar
¯n2.(48)
Next, we proceed to establish an upper bound ψUB
Var for ψVar.
To this end, let us define
ψVar VarY2
t(i)Yt(i)
(a)
=Var hY2
t(i)2cρ
i,t +λ+ 1Yt(i)i
(b)
=VarY2
t(i)+2cρ
i,t +λ+ 12Var Yt(i)
13
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
M. J. Salariseddigh et al.: Deterministic K-Identification For MC Poisson Channel With ISI Memory
4cρ
i,t +λ+ 2Cov hY2
t(i), Yt(i)i,(49)
where (a)holds since ¯
Yt(i)Yt(i)(cρ
i,t +λ)and the
decomposition in (b)follows from the following identity for
constants aand b:
VaraX bY =a2Var[X] + b2Var[Y]2ab Cov[X, Y ].
(50)
Next, let us define
ψCov 4¯
LP avg +λ+ 2qexp(8)¯
LP avg +λ
=O(L3/2),(51)
with ¯
LLTR.Now, we proceed to establish an upper bound
on (49) as follows
ψVar
(a)
EhY4
t(i)i+2cρ
i,t +λ+ 12cρ
i,t +λ
+4cρ
i,t +λ+ 2qEY4
t(i)Var Yt(i)
(b)
¯
LP avg +λ4exp(8)+2¯
LP avg +λ+ 12+ψCov,
(52)
where (a)follows from the triangle inequality, i.e., αβ
|αβ|≤|α|+|β|for real aand b, Var[Y2
t(i)]
E[Y4
t(i)],Var[Yt(i)] = cρ
i,t +λ, and Cov[X, Y ]
pVar[X]·Var[Y]for RVs with finite variances, (b)follows
from ci,t Pavg,i[[M]],t[[n]],for a Poisson
RV Yt(i)Poisλ,an upper bound on the non-centered
moments:
E[Yk
t(i)] Ek[Yt(i)] ·exp(k2/2E[Yt(i)]),(53)
(see [76, Th. 1]), and (51). Thereby, exploiting (45)–(49)
and (52), we can establish the following upper bound on the
type I error probability given in (44):
Pe,1(i,
K
)
= Pr T(Y(i),cj)> δn
(a)
¯
LP avg +λ4exp(8)+2¯
LP avg +λ+12+ψCov
2
n
(b)
=9¯
LP avg +λ4exp(8)+2¯
LP avg +λ+12+ψCov
16a2nb+κ+4l
=O(L4)
nb+κ+4l=O(1)
nb+κ
e1,(54)
for sufficiently large nand arbitrarily small e1,where (a)
follows from (45), (48) and (52), and (b)follows from (41).
Type II error analysis: Next, we consider type II er-
rors, i.e., when Y(i)T
K
while the transmitter sent ci
with i /
K
.Then, for each of the M
Kpossible cases of
K
,
where i /
K
,the type II error probability is bounded as
Pe,2(i,
K
) = Pr Y(i)
T
K
= Pr Y(i)[
j
K
T
j
= Pr [
j
K
n|T(Y(i), cj)| δno
|
K
|
X
j=1
Pr T(Y(i); cj)δn
|
K
| · max
1jKPr T(Y(i); cj)δn,
(55)
where T(Y(i); cj)is a random variable modeling the de-
coding metric in (43), i.e.,
T(Y(i); cj) = 1
¯n
¯n
X
t=1 Yt(i)cρ
j,t +λ2Yt(i).(56)
Next, we establish an upper bound on the RHS of (55), while
we assume that jcan be an arbitrary value from set [[K]].
Further, let
˜
jarg max
1jK
Pr T(Y(i); cj)δn.(57)
We note that if our analysis gives an upper bound on
Pr(|T(Y(i); cj)| δn)for arbitrary j[[K]],then
the same upper bound is valid for Pr(|T(Y(i); c˜
j)|
δn).That is, we immediately obtain an upper bound for
max
1jKPr(|T(Y(i); cj)| δn)in (55).
Observe that (56) for j=˜
jcan be rewritten as follows
T(Y(i); c˜
j)
=1
¯n
¯n
X
t=1 Yt(i)cρ
i,t +λ+cρ
i,t cρ
˜
j,t2
| {z }
ϕi,˜
j,t
Yt(i).
(58)
Observe that ϕi,˜
j,t in (58) can be expressed as
ϕi,˜
j,t =¯
Yt(i)2+ψ2
i,˜
j,t + 2 ¯
Yt(i)ψi,˜
j,t,(59)
where
¯
Yt(i) = Yt(i)cρ
i,t +λand ψi,˜
j,t =cρ
i,t cρ
j,t.
(60)
Then, define the following events
Ei,˜
j=n
¯n
X
t=1 ¯
Yt(i) + ψi,˜
j,t2Yt(i)¯no,
E
i,˜
j=n¯n
X
t=1 ¯
Yt(i) + ψi,˜
j, t2Yt(i)¯no,
E′′
i,˜
j=n
¯n
X
t=1
¯
Yt(i)ψi,˜
j,t>¯n/2o,
E′′′
i,˜
j=n¯n
X
t=1
¯
Yt(i)2+ψ2
i,˜
j,t Yt(i)no.(61)
Hence,
Pe,2(i,
K
)K·Pr Ei,˜
j
14
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
=K·Pr
¯n
X
t=1 ¯
Yt(i) + ψi,˜
j,t2Yt(i)¯n
(a)
K·Pr ¯n
X
t=1 ¯
Yt(i) + ψi,˜
j,t2Yt(i)¯n
=K·Pr E
i,˜
j,(62)
where (a)holds since αβ |αβ|for real α, β. Now, we
apply the law of total probability to event E
i,˜
jwith respect to
the pair of (E′′
i,˜
j,E′′c
i,˜
j),and obtain the following upper bound
on the type II error probability,
Pe,2(i,
K
)K·Pr E
i,˜
j
=K·hPr E
i,˜
j E′′
i,˜
j+ Pr E
i,˜
j E′′c
i,˜
ji
(a)
K·hPr E′′
i,˜
j+ Pr E
i,˜
j E′′c
i,˜
ji
(b)
=K·hPr E′′
i,˜
j+ Pr E′′′
i,˜
ji,(63)
where (a)follows from E
i,˜
jE′′
i,˜
j E′′
i,˜
jand (b)holds since
the event E
i,˜
j E′′c
i,˜
jyields event E′′′
i,˜
j,with the following
argument. Observe that,
Pr E
i,˜
j E′′c
i,˜
j(a)
Pr ¯n
X
t=1
¯
Yt(i)2+ψ2
i,˜
j,t Yt(i)n
= Pr E′′′
i,˜
j,(64)
where (a)holds since given the complementary event E′′c
i,˜
j,
we obtain
¯n/2
¯n
X
t=1
¯
Yt(i)ψi,˜
j,t ¯n/2,
which implies that 2P¯n
t=1 ¯
Yt(i)ψi,˜
j,t ¯n.That is, event
E
i,˜
j E′′c
i,˜
jyields the event
¯n
X
t=1
¯
Yt(i)2+ψ2
i,˜
j,t Yt(i)n.
Now, we establish an upper bound on Pr(E′′
i,˜
j)by exploiting
Chebyshev’s inequality:
Pr(E′′
i,˜
j) = Pr
¯n
X
t=1
¯
Yt(i)ψi,˜
j,t>¯n/2
Var hP¯n
t=1 ¯
Yt(i)ψi,˜
j,ti
n)2
=P¯n
t=1 Var h¯
Yt(i)ψi,˜
j,ti
n)2,(65)
where the last equality holds since the variance of the sum
of uncorrelated RVs is the sum of the respective variances.
Thereby,
Pr(E′′
i,˜
j)P¯n
t=1 ψ2
i,˜
j,tVar ¯
Yt(i)
n)2
=P¯n
t=1 cρ
i,t cρ
˜
j,t2Var ¯
Yt(i)
n)2
(a)
P¯n
t=1 cρ
i,t +cρ
˜
j,t2Var ¯
Yt(i)
n)2
(b)
=P¯n
t=1 cρ
i,t +cρ
˜
j,t2(cρ
i,t +λ)
n)2
(c)
cρ
i+cρ
˜
j2(¯
LP ave +λ)
n)2,(66)
where (a)exploits the triangle inequality, i.e., cρ
i,t cρ
˜
j,t
cρ
i,t +cρ
˜
j,t,(b)follows since Var[¯
Yt(i)] = cρ
i,t +λ, t[[¯n]],
and (c)follows since cρ
i,t ¯
LP ave +λ. Now, observe that
cρ
i+cρ
˜
j
2(a)
cρ
i
+
cρ
˜
j
2
(b)
n
cρ
i
+n
cρ
˜
j
2
(c)
n¯
LP avg +n¯
LP avg2
= 4¯
L2nP 2
avg,(67)
where (a)holds by the triangle inequality, (b)follows since
∥·∥ n∥·∥,and (c)is valid by the definition of cρ
i,i.e.,
cρ
i=PL1
l=0 ρlci,tl,and (33). Hence,
Pr(E′′
i,˜
j)cρ
i+cρ
˜
j2(¯
LP ave +λ)
n)2
4¯
L2P2
avg(¯
LP ave +λ)
2
n
=9¯
L3P2
avg(Pavg +λ)
4a2nb+κ+4l
=O(L3)
nb+κ+4l
ζ0.(68)
We now proceed with bounding Pr E′′′
i,˜
jas follows. Based
on the convoluted codebook construction, each convoluted
codeword is surrounded by a sphere of radius n,that is
cρ
icρ
˜
j
24n= n,(69)
where the last equality exploits (41). Thus, we can establish
the following upper bound for event E′′′
i,˜
j:
Pr(E′′′
i,˜
j)
= Pr ¯n
X
t=1
¯
Yt(i)2+ψ2
i,˜
j,t Yt(i)n
= Pr ¯n
X
t=1
¯
Yt(i)2Yt(i)nψ2
i,˜
j,t
(a)
Pr ¯n
X
t=1
¯
Yt(i)2Yt(i)n3¯n
(b)
VarhP¯n
t=1 ¯
Yt(i)2Yt(i)i
¯n2δ2
n
(c)
=Var T(Y(i),ci)
δ2
n
15
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
M. J. Salariseddigh et al.: Deterministic K-Identification For MC Poisson Channel With ISI Memory
(d)
=9¯
LP avg +λ4exp(8)+2¯
LP avg +λ+12+ψCov
16a2nb+κ+4l
ζ1,(70)
where (a)follows from (69), (b)holds from applying Cheby-
shev’s inequality, (c)follows from similar arguments as
provided for the type I error probability, i.e., the calculations
provided in (47) and (48), (d)holds by (52).
To sum up, recalling (68), we obtain
0=9K¯
L3P2
avg(Pavg +λ)
4a2nb+κ+4l
(a)
=O(L3)
nb+4l
(b)
=O(1)
nb+l
ζ
0,
(71)
where (a)exploits K=nκand (b)holds as L=nl.On the
other hand, recalling (70), we obtain
1=
9nκ¯
LP avg +λ4exp(8)+2¯
LP avg +λ+12+ψCov
16a2nb+κ+4l
(a)
=O(L4)
nb+4l
(b)
=O(1)
nb
ζ
1,(72)
where (a)exploits K=nκand (b)holds as L=nl.
Therefore, recalling (63) and (68), and (70) we obtain
Pe,2(i,
K
)K·hPr E′′
i,˜
j+ Pr E′′′
i,˜
ji
K·[ζ0+ζ1]
=ζ
0+ζ
1
e2,(73)
hence, Pe,2(i,
K
)e2holds for sufficiently large nand
arbitrarily small e2>0.
We have thus shown that for every e1, e2>0and
sufficiently large n, there exists an (n, M(n, R), K (n, κ),
L(n, l), e1, e2)-ISI-Poisson DKI code.
Remark 3.In the error analysis, we established upper bounds
on the type I (cf. (54)) and type II error probabilities (cf. (71)
and (72)). These results reveal that the fastest scales for the
size of the target message set K(n, κ)and the number of
ISI taps L(n, l)which ensure the vanishing of the type I
and type II error probabilities as n ,are allowed to be
defined as follows:
K(n, κ)=2 =nκand L(n, l)=2nl =nl.
C. Upper Bound (Converse Proof)
Before we start with the converse proof, for the sake of
a concise presentation of the analysis, we introduce the
following notations. Let:
Ix
tλ+PL1
l=1 ρlxtl.
di,t =ρ0ci,t +Ici
t,t[[n]].
The converse proof consists of the following two main steps.
Step 1: First, we show in Lemma 1 that for any achiev-
able DKI rate (for which the type I and type II error
probabilities vanish as n ), the distance between any
selected entry of one codeword and any entry of another
codeword is at least larger than a threshold.
Step 2: Employing Lemma 1, we then derive an upper
bound on the codebook size of DKI codes.
We start with the following lemma on the ratio of di2,t/di1,t
for two distinct messages i1and i2,with i1, i2[[M]].
Lemma 1 ( Shifted Symbol Distance ).Suppose that R > 0
is an achievable DKI rate for the DTPC with ISI, P.Consider
a sequence of (n, M (n, R), K(n, κ), L(n, l), e(n)
1, e(n)
2)-ISI-
Poisson codes (C(n),T(n)),where
K(n, κ)=2κlog n, L(n, l) = 2llog n,
with κ, l [0,1) such that e(n)
1and e(n)
2tend to zero as n
.Then, given a sufficiently large n, the codebook C(n)
satisfies the following property. For every pair of codewords,
ci1and ci2,there exists at least one letter t[[n]] such that
1di2,t
di1,t > θn,(74)
for all i1, i2[[M]],such that i1=i2,with
θnPmax
KLn1+b=Pmax
n1+b+l+κ,(75)
where b > 0is an arbitrarily small constant.
Proof:
The method of proof is by contradiction, namely, we assume
that the condition given in (74) is violated and then we
show that this leads to a contradiction, namely the sum of
the type I and type II error probabilities converges to one,
i.e., limn→∞ Pe,1(i1,
K
) + Pe,2(i2,
K
)= 1 for some
K
[[M]],where i1
K
and i2/
K
.
Let e1, e2>0and η0, η1, η2, δ > 0be arbitrarily
small constants. Assume to the contrary that there exist
two messages i1and i2,where i1=i2,meeting the error
constraints in (8) and (9), such that t[[n]],we have
1di2,t
di1,t θn.(76)
In order to show contradiction, we bound the sum of the
two error probabilities, Pe,1(i1,
K
)+Pe,2(i2,
K
),from below.
Then, observe that
Pe,1(i1,
K
) + Pe,2(i2,
K
)
=h1X
yT
K
V¯nyci1i+X
yT
K
V¯nyci2.(77)
To bound the error, let us define
F
i1=nyT
K
:¯n1
¯n
X
t=1
YtIci1
tρ0Pmax +δo,
(78)
16
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
where T
K
N¯n
0is the decoding set adopted6for the set of
target messages
K
.
Now, consider the sum inside the bracket in (77),
X
yT
K
V¯nyci1
=X
yT
K
F
i1
V¯nyci1+X
yT
K
F
c
i1
V¯nyci1,(79)
where the equality follows from applying the law of total
probability on T
K
with respect to (
F
i1,
F
c
i1).
Now, we proceed to establish an upper bound on the RHS
sum in (79) as follows
X
yT
K
F
c
i1
V¯nyci1= Pr T
K
F
c
i1
Pr ¯n1
¯n
X
t=1
Yt(i1)Ici1
t> ρ0Pmax +δ.(80)
Next, we apply Chebyshev’s inequality to the probability
term in (80) and obtain
X
yT
K
F
c
i1
V¯nyci1
(a)
Pr ¯n1
¯n
X
t=1
Yt(i1)¯n1
¯n
X
t=1
E[Yt(i1)] > ρ0Pmax +δ
(b)
Var h¯n1P¯n
t=1 Yt(i1)i
(ρ0Pmax +δ)2
(c)
=¯n2P¯n
t=1 ρ0ci1,t +Ici1
t
(ρ0Pmax +δ)2
(d)
TRPmax +λ+ (L1)TRPmax
2
LTRPmax +λ
2=O(L)
2
(e)
=O(1)
n1lδ2
η0,(81)
for sufficiently large n, where (a)holds since E[Yt(i1)] =
Ici1
t,for inequality (b),we exploited Chebyshev’s inequality,
and for equality (c),we used the fact that Var[Yt(i1)] = E
[Yt(i1)] = ρ0ci1,t +Ici1
t,t[[n]].Inequality (d)employs
ci1,t Pmax,i1[[M]],t[[n]], ρ0TR, n ¯nand
(e)exploits L=nl.Thereby, recalling (79) and (81), we
obtain
X
yT
K
V¯nyci1
X
yT
K
F
i1
V¯nyci1+X
yT
K
F
c
i1
V¯nyci1
6We note that in the achievability proof given in Section IV-B we impose
a specific structure on the decoding set T
K
,namely, we defined T
K
to
be the union of the individual decoding set corresponding to messages
that belong to set
K
,i.e., T
K
=Si1
K
T
i1.In contrast, in the converse
proof, we do not impose any structure on T
K
and treat the decoding set
T
K
as a general choice T
K
N¯n
0.
X
yT
K
F
i1
V¯nyci1+η0.(82)
Next, recalling the sum of error probabilities in (77),
where i1
K
and i2/
K
,we obtain
Pe,1(i1,
K
) + Pe,2(i2,
K
)
=h1X
yT
K
V¯nyci1i+X
yT
K
V¯nyci2
(a)
1η0X
F
i1
V¯nyci1+X
T
K
V¯nyci2
(b)
1η0X
S
i1
K
F
i1
V¯nyci1+X
S
i1
K
F
i1
V¯nyci2
1η0X
S
i1
K
F
i1hV¯nyci1V¯nyci2i,(83)
where (a)holds by (82) and (b)follows since
F
i1
Si1
K
F
i1T
K
.Now, let us focus on the summand in
the square brackets in (83). Employing (5), we have
V¯nyci1V¯nyci2
=V¯nyci1·h1V¯nyci2/ V ¯nyci1i
=V¯nyci1·h1
¯n
Y
t=1
e(di2,tdi1,t )di2,t
di1,t Yti
=V¯nyci1·h1
¯n
Y
t=1
eθndi1,t (1 θn)Yti,(84)
where for the last inequality, we exploited
di2,t di1,t di2,t di1,tθndi1,t (85)
and
1di2,t
di1,t 1di2,t
di1,t θn,(86)
which holds by (76). Now, we bound the product term inside
the bracket in (84) for space ySi1
K
F
i1as follows:
¯n
Y
t=1
eθndi1,t (1 θn)Yt=eθnP¯n
t=1 di1,t ·(1 θn)P¯n
t=1 Yt
(a)
e¯nρ0Pmax+ ¯n1P¯n
t=1 I
ci1
t
·(1 θn)¯nρ0Pmax+ ¯n1P¯n
t=1 I
ci1
t+δ
=e¯nδ·e¯nρ0Pmax+ ¯n1P¯n
t=1 I
ci1
t+δ
·(1 θn)¯nρ0Pmax+ ¯n1P¯n
t=1 I
ci1
t+δ
(b)
e¯nδ·e¯nρ0Pmax+ ¯n1P¯n
t=1 I
ci1
t+δ
·(1 ¯n)ρ0Pmax n1P¯n
t=1 I
ci1
t+δ
(c)
=e¯nδ·fn)enδ·f(¯n)(d)
> fn)
17
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
M. J. Salariseddigh et al.: Deterministic K-Identification For MC Poisson Channel With ISI Memory
(e)
13ρ0Pmax +
¯n
X
t=1
Ici1
t+δ¯n
(f)
13TRPmax +λ+ (L1)TRPmax +δPmax
nb+l+κ·¯n
n
= 1 O(L)
nb+l+κ·1 + O(L)
n
= 1 O(1)
nb+κO(L2)
n1+b+l+κ
(g)
= 1 O(1)
nb+κ+O(1)
n1+b+κl
(h)
= 1 η1,(87)
for sufficiently large n. We used the following facts for the
above inequalities:
Inequality (a)follows since
di1,t ρ0Pmax +Ici1
t,t[[n]],(88)
and
¯n
X
t=1
Yt¯nρ0Pmax + ¯n1
¯n
X
t=1
Ici1
t+δ,(89)
where the latter inequality follows from ySi1
K
F
i1,
cf. (78).
For (b),we used Bernoulli’s inequality [77, Ch. 3]:
(1 x)r1rx , x > 1,r > 0.(90)
For (c),we used the following definition:
f(x) = ecx(1 x)c,(91)
with c=ρ0Pmax + ¯n1P¯n
t=1 Ici1
t+δ.
For (d), we used the fact that
enδ=ePmaxδ/nb+l+κ>1.(92)
For (e),we used the Taylor expansion
fn)=12c¯n+O((¯n)2)(93)
to obtain the upper bound fn)13c¯nfor
sufficiently small values of ¯n,i.e.,
¯n=Pmax
n1+b+l+κ·(n+L1) = Pmax
nb+l+κ·¯n
n
=Pmax
nb+l+κ·1 + O(L)
n=Pmax
nb+l+κ+O(1)
nb+l+κ.
(94)
Inequality (f)exploits (75).
Equality (g)employs L=nl,with l[0,1).
Finally, (h)follows from
O(1)
nb+κ+O(1)
n1+b+κl
η1.
Thereby, (84) can then be written as follows
V¯nyci1V¯nyci2
V¯nyci1·h1eθnP¯n
t=1 di1,t ·(1 θn)P¯n
t=1 Yti
η1·V¯nyci1.(95)
Next, recalling the definition of an ISI-Poisson DKI code
given in (1), we focus on the underlying assumptions
stated in Lemma 1 on the properties of a given sequence
of ISI-Poisson DKI codes (C(n),T(n)).Such a code se-
quence has five parameters (n, M(n, R), K(n, κ), L(n, l),
e(n)
1, e(n)
2),and endows the following property:
For each general choice (arrangement) of the target mes-
sage set
K
[[M]] of size K, the upper bound on the
type I and type II error probabilities, i.e., e(n)
1and e(n)
2,
respectively, tends to zero as ntends to infinity. That is,
lim
n→∞ Pe,1(i1,
K
) + Pe,2(i2,
K
)= 0,
K
[[M]].(96)
Next, let
K
(i1, i2)denote a specific class of the target
message sets
K
,where i1
K
and i2/
K
,i.e.,
K
(i1, i2)
K
[[M]]; |
K
|=K;i1
K
, i2/
K
.(97)
Observe that the above set cannot be empty (i.e., |
K
(i1, i2)|
1), that is, there exists at least one arrangement
K
belonging to
K
(i1, i2),where i1
K
, i2/
K
.This holds
true since according to Lemma 1 the two messages i1and
i2are distinct, i.e., i1=i2.Thereby, for every set
K
K
(i1, i2),we have the following upper bounds on the type
I and type II error probabilities
Pe,1(i1,
K
) = V¯n(Tc
K
|xn=ci1)e(n)
1,
Pe,2(i2,
K
) = V¯n(T
K
|xn=ci2)e(n)
2.(98)
Hence,
e(n)
1+e(n)
2Pe,1(i1,
K
) + Pe,2(i2,
K
)
(a)
1η0X
S
i1
K
F
i1hV¯nyci1V¯nyci2i
(b)
1η0η1X
S
i1
K
F
i1
V¯nyci1
(c)
1η0η1X
i1
K
X
F
i1
V¯nyci1
(d)
1η0η1· |
K
|
(e)
1η0KO(1)
nb+κ
(e)
= 1 η0η2,(99)
where (a)follows from (83), and (b)holds by (95), (c)ex-
ploits the union bound, (d)follows since P
F
i1V¯nyci1
= Pr(
F
i1)1,(e)holds since |
K
|=K, and (f)follows
from O(nb)η2.
Therefore, e(n)
1+e(n)
21η0η2which is a contradiction
to (96). In other words, Lemma 1 states that every given
sequence of ISI-Poisson DKI codes (C(n),T(n))with the pa-
rameters (n, M (n, R), K(n, κ)=2κlog n, L(n, l)=2llog n,
e(n)
1, e(n)
2)endows the following property: For an arbitrary
(general) choice of
K
of size K(n, κ),the upper bounds on
the type I and type II error probabilities vanish, i.e., e(n)
1and
18
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
e(n)
2tend to zero as n .However, we show that there
exist some particular choices for
K
denoted by
K
(i1, i2)
whose elements satisfy the following property: The sum of
the corresponding upper bounds on the type I and type II
errors is lower bounded by one, i.e., e(n)
1and e(n)
2do not
vanish. This is clearly a contradiction and implies that the
inequality given in (76) does not hold. This completes the
proof of Lemma 1.
Next, we use Lemma 1 to prove the upper bound on the
DKI capacity. Observe that since
di,t =ρ0ci,t +Ici
t> λ, (100)
Lemma 1 implies
ρ0ci1,t ci2,t=di1,t di2,t
(a)
> θndi1,t
(b)
> λθn,
(101)
where (a)follows from (74) and (b)holds by (100). Now,
since ci1ci2 ci1,t ci2,t,we deduce that the
distance between every pair of codewords satisfies
ci1ci2> λθn0.(102)
Thus, we can define an arrangement of non-overlapping
spheres Sci(n, λθn/2ρ0),i.e., spheres of radius r0=λθn/
2ρ0that are centered at the codewords ci.Since all code-
words belong to a hyper cube
Q
0(n, P max )with edge length
Pmax,it follows that the number of packed small spheres,
i.e., the number of codewords M, is bounded by
M=
Vol SM
i=1 Sci(n, r0)
Vol(Sc1(n, r0)) =n(S)·Vol
Q
0(n, P max )
Vol(Sc1(n, r0))
20.599n·Pn
max
Vol(Sc1(n, r0)),(103)
where the last inequality follows from (31). Thereby,
log Mlog Pn
max
Vol Sc1(n, r0)!0.599n
=nlog(Pmax)log Vol Sc1(n, r0)0.599n
(a)
=nlog Pmax nlog r0nlog π+ log Γn/2+1
(104)
where (a)exploits (32). Next, we proceed to establish an
upper bound on the last term in (104). Observe that
Γn/2+1(a)
= (n/2)Γn/2
(b)
<n/2+ 1Γn/2+ 1
(c)
=n/2+ 1!,(105)
where (a)holds by the recurrence relation of the Gamma
function [73] for real n/2,(b)follows since n/2<n/2+1
for real n/2,and (c)holds since for positive integer n/2,
we have Γn/2+ 1= (n/2)!,cf. [73]. Next, we
proceed to simplify the factorial term given in (105). To
this end, we exploit Stirling’s approximation, i.e., log n! =
nlog nnlog e+o(n)[74, p. 52] with the substitution of
n=n/2+ 1,where n/2Z.Thereby, we obtain
log Γn/2+1<
n/2+1log n/2+1n/2+1log e+on/2
(a)
n/2+1log n/2+1n/2log e+on/2,
(106)
where (a)follows from n
2n
2and n
2>n
21,for
integer n. Therefore, merging (104)–(106), we obtain
log M
nlog Pmax nlog r0nlog π
+n/2+1log n/2+1n/2log e+on/2
=nlog Pmax nlog(λP max/(2ρ0)) +(1 + b+l+κ)nlog n
nlog π+n/2+1log n/2+1
n/2log e+on/2,(107)
where for the equality we used
r0=λθn
2ρ0
=λP max
2ρ0n1+b+l+κ.(108)
The dominant term in (104) is again of order nlog n. Hence,
to ensure a finite value for the upper bound of the rate, R,
(104) induces the scaling law of Mto be 2(nlog n)R.By
setting M(n, R)=2(nlog n)R, we obtain
R1
nlog n1
2+ (1 + b+κ+l)nlog n
n1
2+ log(λπe/(2ρ0))+o(n),(109)
which tends to 3
2+l+κas n and b0.This
completes the proof of Theorem 1.
V. Summary and Future Directions
In this paper, we studied the deterministic K-identification
problem for MC channels. In particular, we considered MC
systems with molecule counting receivers, modeled by the
DTPC with ISI. For this setting, we derived lower and upper
bounds on the DKI capacity subject to average and peak
molecule release rate constraints for a codebook size of
M(n, R)=2(nlog n)R=nnR .Our results revealed that the
super-exponential scale of nnR is the appropriate scale for
the DKI capacity of the DTPC with ISI. This was proved by
finding a suitable sphere packing arrangement embedded in a
hyper cube. In particular, in the rate analysis, we established
a lower bound for the logarithm of the number of codewords,
whose fastest growing term has order nlog n;cf. (39)
and (40). This observation dictates, that in order to obtain a
finite and positive value for the DKI capacity, the codebook
size should scale as M(n, R) = 2(nlog n)R.We note that
this scale is distinctly different from the ordinary scales in
transmission and RI settings, where the codebook size grows
exponentially and double exponentially, respectively.
The results presented in this paper can be extended in
several directions, some of which are listed in the following
as potential topics for future research:
19
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
M. J. Salariseddigh et al.: Deterministic K-Identification For MC Poisson Channel With ISI Memory
Continuous alphabet conjecture: Our observations for
the codebook size of the DTPC with ISI, DTPC without
ISI [26], [45], Binomial channel [53], [78], and Gaussian
channel with fading [46] lead us to conjecture that the
codebook size for any continuous alphabet channel is a
super-exponential function, i.e., 2(nlog n)R.However, a
formal proof of this conjecture remains unknown.
Multiuser and multi-antenna systems: This study has
focused on a point-to-point system and may be extended
to multi-user scenarios (e.g., broadcast and multiple ac-
cess channels) or multiple-input multiple-output channels
which are relevant in complex MC nano-networks.
Finite codeword length coding: The identification ca-
pacity results in this paper reveal the performance limits
of DTPC with ISI for the asymptotic regime when the
length of codewords can be arbitrarily large. However, the
codeword length is finite in practice, particularly for MC
applications, where large encoding/decoding delays cannot
be afforded. Therefore, the study of the non-asymptotic
DKI capacity of the DTPC is an important direction for
future work.
Explicit code construction: Our main focus in this pa-
per was the establishment of fundamental performance
limits for the DKI problem for the DTPC with ISI, where
an explicit code construction was not considered. In fact,
the proposed achievable scheme proves the existence of a
code without providing a systematic approach to construct
it. Hence, interesting directions for future research include
the explicit construction of identification codes and the
development of low-complexity encoding and decoding
schemes for practical applications. The efficiency of these
designs can be evaluated against the performance bounds
derived in this paper.
Acknowledgements
Salariseddigh was supported by the German Research Foun-
dation (DFG) under grant DE 1915/2-1 and 6G-life project
under grant 16KISK002. Jamali was supported in part by the
DFG under grant JA 3104/1-1 and in part by the LOEWE
initiative (Hesse, Germany) within the emergenCITY center.
Pereg was supported by Israel Science Foundation (ISF) un-
der grant 939/23 and 2691/23, Israel VATAT Junior Faculty
Program for Quantum Science and Technology under grant
86636903, Chaya Career Advancement Chair under grant
8776026, the German-Israeli Project Cooperation (DIP) un-
der grant 2032991, and the Helen Diller Quantum Center at
the Technion. Pereg and Deppe were supported by the Ger-
man Federal Ministry of Education and Research (BMBF)
under Grants 16KIS1005, 16KISQ028 and 6G-life project
under grant 16KISK002. Boche was supported by the BMBF
under grant 16KIS1003K, the national initiative for “Molecu-
lar Communications" (MAMOKO) under grant 16KIS0914,
and 6G-life project under grant 16KISK002. Schober was
supported by MAMOKO under grant 16KIS0913.
REFERENCES
[1] M. J. Salariseddigh, V. Jamali, U. Pereg, H. Boche, C. Deppe,
and R. Schober, “Deterministic Identification For MC ISI-Poisson
Channel,” in Proc. IEEE Intl. Conf. Commun., 2023, pp. 6108–6113.
[2] T. Nakano, A. W. Eckford, and T. Haraguchi, Molecular Communica-
tion. Cambridge University Press, 2013.
[3] N. Farsad, H. B. Yilmaz, A. Eckford, C.-B. Chae, and W. Guo,
“A Comprehensive Survey of Recent Advancements in Molecular
Communication,” IEEE Commun. Surveys Tuts., vol. 18, no. 3, pp.
1887–1919, 2016.
[4] V. Jamali, A. Ahmadzadeh, C. Jardin, H. Sticht, and R. Schober,
“Channel Estimation For Diffusive Molecular Communications, IEEE
Trans. Commun., vol. 64, no. 10, pp. 4238–4252, 2016.
[5] Y. Tang, F. Ji, M. Wen, Q. Wang, and L.-L. Yang, “Enhanced Molec-
ular Type Permutation Shift Keying For Molecular Communication,
IEEE Wireless Commun. Lett., vol. 10, no. 12, pp. 2722–2726, 2021.
[6] Q. Li, “A Novel Time-Based Modulation Scheme in Time-
Asynchronous Channels For Molecular Communications,” IEEE
Trans. Nanobiosci., vol. 19, no. 1, pp. 59–67, 2019.
[7] Y. Huang, M. Wen, L.-L. Yang, C.-B. Chae, and F. Ji, “Spatial
Modulation For Molecular Communication,” IEEE Trans. Nanobiosci.,
vol. 18, no. 3, pp. 381–395, 2019.
[8] M. Magarini and P. Stano, “Synthetic Cells Engaged in Molec-
ular Communication: An Opportunity For Modelling Shannon-and
Semantic-Information in The Chemical Domain,” Front. Comms. Net,
vol. 2, p. 724597, 2021.
[9] C. A. Söldner, E. Socher, V. Jamali, W. Wicke, A. Ahmadzadeh, H.-G.
Breitinger, A. Burkovski, K. Castiglione, R. Schober, and H. Sticht,
“A Survey of Biological Building Blocks For Synthetic Molecular
Communication Systems,” IEEE Commun. Surveys Tuts., vol. 22,
no. 4, pp. 2765–2800, 2020.
[10] D. Bi, A. Almpanis, A. Noel, Y. Deng, and R. Schober, “A Survey
of Molecular Communication in Cell Biology: Establishing a New
hierarchy for Interdisciplinary Applications,” IEEE Commun. Surveys
Tuts., vol. 23, no. 3, pp. 1494–1545, 2021.
[11] V. Jamali, A. Ahmadzadeh, W. Wicke, A. Noel, and R. Schober,
“Channel Modeling For Diffusive Molecular Communication - A
Tutorial Review,” Proc. IEEE, vol. 107, no. 7, pp. 1256–1301, 2019.
[12] M. Kuscu, E. Dinc, B. A. Bilgin, H. Ramezani, and O. B. Akan,
“Transmitter and Receiver Architectures For Molecular Communica-
tions: A Survey on Physical Design With Modulation, Coding, and
Detection Techniques, Proc. IEEE, vol. 107, no. 7, pp. 1302–1341,
2019.
[13] A. Gohari, M. Mirmohseni, and M. Nasiri-Kenari, “Information The-
ory of Molecular Communication: Directions and Challenges,” IEEE
Trans. Mol. Biol. Multi-Scale Commun., vol. 2, no. 2, pp. 120–142,
2016.
[14] I. F. Akyildiz, M. Pierobon, S. Balasubramaniam, and Y. Koucheryavy,
“The Internet of Bio-Nano Things,” IEEE Commun. Mag., vol. 53, pp.
32–40, 2015.
[15] C. Lee, B.-H. Koo, C.-B. Chae, and R. Schober, “The Internet of Bio-
Nano Things in Blood Vessels: System Design and Prototypes,” J.
Commun. Netw., vol. 25, no. 2, pp. 222–231, 2023.
[16] H. B. Yilmaz and C.-B. Chae, “Arrival Modelling For Molecular
Communication via Diffusion, Electron. Lett., vol. 50, no. 23, pp.
1667–1669, 2014.
[17] A. Papoulis and S. U. Pillai, Probability, Random Variables, and
Stochastic Processes. Boston, MA, McGraw-Hill, 2002.
[18] J. Cao, S. Hranilovic, and J. Chen, “Capacity-Achieving Distributions
For The Discrete-Time Poisson Channel—Part I: General Properties
and Numerical Techniques,” IEEE Trans. Commun., vol. 62, no. 1, pp.
194–202, 2013.
[19] ——, “Capacity-Achieving Distributions For The Discrete-Time Pois-
son Channel—Part II: Binary Inputs,” IEEE Trans. Commun., vol. 62,
no. 1, pp. 203–213, 2013.
[20] F. Ratti, F. Vakilipoor, H. Awan, and M. Magarini, “Bounds on The
Constrained Capacity For The Diffusive Poisson Molecular Channel
With Memory, IEEE Trans. Mol. Biol. Multi-Scale Commun., vol. 7,
no. 2, pp. 100–105, 2021.
[21] G. Aminian, H. Arjmandi, A. Gohari, M. N. Kenari, and U. Mitra,
“Capacity of LTI-Poisson Channel For Diffusion Based Molecular
Communication,” in Proc. IEEE Int. Conf. Commun., 2015, pp. 1060–
1065.
20
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
[22] G. Aminian, H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and U. Mitra,
“Capacity of Diffusion-Based Molecular Communication Networks
Over LTI-Poisson Channels,” IEEE Trans. Mol. Biol. Multi-Scale
Commun., vol. 1, no. 2, pp. 188–201, 2015.
[23] N. Ahmadypour and A. Gohari, “Transmission of a Bit Over a Discrete
Poisson Channel With Memory, IEEE Trans. Inf. Theory, vol. 67,
no. 7, pp. 4710–4727, 2021.
[24] A. O. Kislal, H. B. Yilmaz, A. E. Pusane, and T. Tugcu, “ISI-Aware
Channel Code Design For Molecular Communication via Diffusion,”
IEEE Trans. Nanobiosci., vol. 18, no. 2, pp. 205–213, 2019.
[25] R. Mosayebi, H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and U. Mitra,
“Receivers For Diffusion-Based Molecular Communication: Exploit-
ing Memory and Sampling Rate,” IEEE J. Selec. Areas Commun.,
vol. 32, no. 12, pp. 2368–2380, 2014.
[26] M. J. Salariseddigh, U. Pereg, H. Boche, C. Deppe, and R. Schober,
“Deterministic Identification Over Poisson Channels,” in Proc. IEEE
Glob. Commun. Conf. (GC), 2021, pp. 1–6.
[27] M. J. Salariseddigh, V. Jamali, U. Pereg, H. Boche, C. Deppe, and
R. Schober, “Deterministic Identification For Molecular Communica-
tions Over The Poisson Channel,” IEEE Trans. Mol. Biol. Multi-Scale
Commun., pp. 1–1, 2023.
[28] W. Haselmayr, A. Springer, G. Fischer, C. Alexiou, H. Boche, P. A.
Hoeher, F. Dressler, and R. Schober, “Integration of Molecular Com-
munications Into Future Generation Wireless Networks, in Proc. 6G
Wireless Summit., Finland, 2019.
[29] J. A. Cabrera, H. Boche, C. Deppe, R. F. Schaefer, C. Scheunert, and
F. H. Fitzek, “6G and The Post-Shannon Theory,” in Shaping Future
6G Networks: Needs, Impacts and Technologies, N. O. Frederiksen
and H. Gulliksen, Eds. Hoboken, NJ, United States: Wiley-Blackwell,
2021.
[30] P. Schwenteck, G. T. Nguyen, H. Boche, W. Kellerer, and F. H. P.
Fitzek, “6G Perspective of Mobile Network Operators, Manufacturers,
and Verticals,” IEEE Netw. Lett., vol. 5, no. 3, pp. 169–172, 2023.
[31] R. Ahlswede and G. Dueck, “Identification via Channels,” IEEE Trans.
Inf. Theory, vol. 35, no. 1, pp. 15–29, 1989.
[32] R. H. Muller and C. M. Keck, “Challenges and Solutions For The
Delivery of Biotech Drugs–A Review of Drug Nanocrystal Technology
and Lipid Nanoparticles,” J. Biotech., vol. 113, no. 1-3, pp. 151–170,
2004.
[33] R. Jain, “Transport of Molecules, Particles, and Cells in Solid Tumors,”
Annu. Biomed. Eng. Rev., vol. 1, no. 1, pp. 241–263, 1999.
[34] S. Wilhelm, A. J. Tavares, Q. Dai, S. Ohta, J. Audet, H. F. Dvorak,
and W. C. Chan, Analysis of Nanoparticle Delivery to Tumours, Nat.
Rev. Mater., vol. 1, no. 5, pp. 1–12, 2016.
[35] S. K. Hobbs, W. L. Monsky, F. Yuan, W. G. Roberts, L. Griffith,
V. P. Torchilin, and R. K. Jain, “Regulation of Transport Pathways in
Tumor Vessels: Role of Tumor Type and Microenvironment, Proc.
Natl. Acad. Sci., vol. 95, no. 8, pp. 4607–4612, 1998.
[36] T. Nakano, T. Suda, Y. Okaie, M. J. Moore, and A. Vasilakos, “Molec-
ular Communication Among Biological Nanomachines: A Layered
Architecture and Research Issues,” IEEE Trans. Nanobiosci., vol. 13,
no. 3, pp. 169–197, 2014.
[37] S. Ghavami, “Anomaly Detection in Molecular Communications With
Applications to Health Monitoring Networks,” IEEE Trans. Mol. Biol.
Multi-Scale Commun., vol. 6, no. 1, pp. 50–59, 2020.
[38] L. B. Buck, “Unraveling The Sense of Smell (Nobel Lect.),” Angew.
Chem. Int. Ed., vol. 44, no. 38, pp. 6128–6140, 2005.
[39] A. Buettner, Springer Handbook of Odor. Cham, Switzerland:
Springer, 2017.
[40] V. Jamali, H. M. Loos, A. Buettner, R. Schober, and H. Vincent Poor,
“Olfaction-Inspired MCs: Molecule Mixture Shift Keying and Cross-
Reactive Receptor Arrays, IEEE Trans. Commun., vol. 71, no. 4, pp.
1894–1911, 2023.
[41] M. Baaden, “Deep Inside Molecules - Digital Twins at The
Nanoscale,” Virtual Real. Intell. Hardw., vol. 4, no. 4, pp.
324–341, 2022. [Online]. Available: https://www.sciencedirect.com/
science/article/pii/S2096579622000171
[42] B. R. Barricelli, E. Casiraghi, and D. Fogli, “A Survey on Digital Twin:
Definitions, Characteristics, Applications, and Design Implications,”
IEEE Access, vol. 7, pp. 167 653–167 671, 2019.
[43] H. Markram, “The Blue Brain Project,” Nat. Rev. Neurosci., vol. 7,
no. 2, pp. 153–160, 2006.
[44] J. Chen, C. Yi, S. D. Okegbile, J. Cai, Xuemin, and Shen,
“Networking Architecture and Key Supporting Technologies For
Human Digital Twin in Personalized Healthcare: A Comprehensive
Survey,” arXiv:2301.03930, 2023. [Online]. Available: http://arxiv.
org/abs/2301.03930.pdf
[45] M. J. Salariseddigh, U. Pereg, H. Boche, C. Deppe, V. Jamali,
and R. Schober, “Deterministic Identification For Molecular
Communications Over The Poisson Channel,” arXiv:2203.02784,
2022. [Online]. Available: https://arxiv.org/abs/2203.02784.pdf
[46] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, “Deterministic
Identification Over Fading Channels,” in Proc. IEEE Inf. Theory Wksp.
(ITW), 2021, pp. 1–5.
[47] ——, “Deterministic Identification Over Channels With Power Con-
straints,” IEEE Trans. Inf. Theory, vol. 68, no. 1, pp. 1–24, 2022.
[48] C. E. Shannon, “A Mathematical Theory of Communication,” Bell Sys.
Tech. J., vol. 27, no. 3, pp. 379–423, 1948.
[49] R. Ahlswede, “Elimination of Correlation in Random Codes For
Arbitrarily Varying Channels,” Zs. Wahrscheinlichkeitstheorie Verw.
Geb., vol. 44, no. 2, pp. 159–175, 1978.
[50] S. Verdu and V. Wei, “Explicit Construction of Optimal Constant-
Weight Codes For Identification via Channels,” IEEE Trans. Inf.
Theory, vol. 39, no. 1, pp. 30–36, 1993.
[51] O. Günlü, J. Kliewer, R. F. Schaefer, and V. Sidorenko, “Code
Constructions and Bounds For Identification via Channels,” IEEE
Trans. Commun., vol. 70, no. 3, pp. 1486–1496, 2021.
[52] R. Ahlswede, “General Theory of Information Transfer: Updated,”
Discrete Appl. Math., vol. 156, no. 9, pp. 1348–1388, 2008.
[53] M. J. Salariseddigh, V. Jamali, H. Boche, C. Deppe, and R. Schober,
“Deterministic Identification For MC Binomial Channel,” in Proc.
IEEE Int. Symp. Inf. Theory (ISIT), 2023, pp. 448–453.
[54] R. Ahlswede and N. Cai, “Identification Without Randomization,”
IEEE Trans. Inf. Theory, vol. 45, no. 7, pp. 2636–2642, 1999.
[55] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, “Deterministic
Identification Over Channels With Power Constraints, in Proc. IEEE
Int. Conf. Commun., 2021, pp. 1–6.
[56] ——, “Deterministic Identification Over Channels With Power
Constraints,” arXiv:2010.04239, 2020. [Online]. Available: http:
//arxiv.org/abs/2010.04239.pdf
[57] J. JáJá, “Identification is Easier Than Decoding,” in Proc. Ann. Symp.
Found. Comp. Scien., 1985, pp. 43–50.
[58] M. V. Burnashev, “On The Method of Types and Approximation of
Output Measures For Channels With Finite Alphabets,” Prob. Inf.
Trans., vol. 36, no. 3, pp. 195–212, 2000.
[59] Z. Brakerski, Y. T. Kalai, and R. R. Saxena, “Deterministic and
Efficient Interactive Coding From Hard-to-Decode Tree Codes, in
Proc. IEEE Ann. Symp. Found. Comp. Scien., 2020, pp. 446–457.
[60] R. L. Bocchino, V. Adve, S. Adve, and M. Snir, “Parallel Programming
Must be Deterministic by Default,” Usenix HotPar, vol. 6, no. 10.5555,
pp. 1855 591–1 855 595, 2009.
[61] E. Arıkan, “Channel Polarization: A Method For Constructing
Capacity-Achieving Codes For Symmetric Binary-Input Memoryless
Channels,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073,
2009.
[62] M. Wiese, W. Labidi, C. Deppe, and H. Boche, “Identification Over
Additive Noise Channels in The Presence of Feedback, IEEE Trans.
Inf. Theory, vol. 69, no. 11, pp. 6811–6821, 2023.
[63] M. V. Burnashev, “On Identification Capacity of Infinite Alphabets or
Continuous-Time Channels,” IEEE Trans. Inf. Theory, vol. 46, no. 7,
pp. 2407–2414, 2000.
[64] M. Spahovic, M. J. Salariseddigh, and C. Deppe, “Deterministic K-
Identification For Slow Fading Channels, in Proc. IEEE Inf. Theory
Wksp. (ITW), 2023, pp. 353–358.
[65] M. J. Salariseddigh, M. Spahovic, and C. Deppe, “Deterministic
K-Identification For Slow Fading Channel, arXiv:2212.02732, 2022.
[Online]. Available: https://arxiv.org/abs/2212.02732.pdf
[66] O. Dabbabi, M. J. Salariseddigh, C. Deppe, and H. Boche,
“Deterministic K-Identification For Binary Symmetric Channel,”
arXiv:2305.04260, Accepted for Publication at IEEE Glob. Commun.
Conf., 2023. [Online]. Available: http://arxiv.org/abs/2305.04260.pdf
[67] H. Unterweger, J. Kirchner, W. Wicke, A. Ahmadzadeh, D. Ahmed,
V. Jamali, C. Alexiou, G. Fischer, and R. Schober, “Experimental
Molecular Communication Testbed Based on Magnetic Nanoparticles
in Duct Flow, in Proc. IEEE Int. Works. Sig. Process. Advances
Wireless Commun., 2018, pp. 1–5.
21
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
M. J. Salariseddigh et al.: Deterministic K-Identification For MC Poisson Channel With ISI Memory
[68] H. B. Yilmaz, A. C. Heren, T. Tugcu, and C.-B. Chae, “Three-
Dimensional Channel Characteristics For Molecular Communications
With an Absorbing Receiver, IEEE Commun. Lett., vol. 18, no. 6, pp.
929–932, 2014.
[69] A. Ahmadzadeh, H. Arjmandi, A. Burkovski, and R. Schober, “Com-
prehensive Reactive Receiver Modeling For Diffusive Molecular Com-
munication Systems: Reversible Binding, Molecule Degradation, and
Finite Number of Receptors,” IEEE Trans. Nanobiosci., vol. 15, no. 7,
pp. 713–727, 2016.
[70] R. Ahlswede, A. Ahlswede, I. Althöfer, C. Deppe, and U. Tamm,
Identification and Other Probabilistic Models. Cham, Switzerland:
Springer, 2021.
[71] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and
Groups. New York, NY, USA: Springer, 2013.
[72] H. Cohn, “Order and Disorder in Energy Minimization,” in Proc. Int.
Congr. Mathn. World Sci., 2010, pp. 2416–2443.
[73] R. Beals and R. Wong, Special Functions: A Graduate Text. Cam-
bridge University Press, 2010, vol. 126.
[74] W. Feller, An Introduction to Probability Theory and Its Applications.
John Wiley & Sons, 1966.
[75] I. M. Copi, C. Cohen, and K. McMahon, Introduction to Logic.
Routledge, New York, NY, USA, 2016.
[76] T. D. Ahle, “Sharp and Simple Bounds For The Raw Moments of The
Binomial and Poisson Distributions,” Stat. Probab. Lett., vol. 182, p.
109306, 2022.
[77] D. Mitrinovic, J. Pecaric, and A. Fink, Classical and New Inequalities
in Analysis. Dordrecht, The Netherlands: Springer, 2013, vol. 61.
[78] M. J. Salariseddigh, V. Jamali, H. Boche, C. Deppe, and
R. Schober, “Deterministic Identification For MC Binomial Channel,
arXiv:2304.12493, 2023. [Online]. Available: http://arxiv.org/abs/
2304.12493.pdf
22
This article has been accepted for publication in IEEE Open Journal of the Communications Society. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/OJCOMS.2024.3359186
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
... This observation can be interpreted as follows: The channel noise can be exploited as an additional inherent source embedded in the communication setting for performing the K-identification task with a larger value of K. This observation is in contrast to previous results for DKI over the slow fading channel [51], or the DI for Gaussian and Poisson channels [32,48,52], where capacity bounds were shown to be independent of the input constraints or the channel parameters. We demonstrate that the suggested upper and lower bounds on attainable rates (R, κ) are independent of K for constant K, whereas they are functions of the goal identification rate κ for increasing goal message sets. ...
... While the radius of the small balls in the DI problem for the Gaussian channel with slow and fast fading [32], tends to zero as n → ∞, here, the radius similar to the DKI problem for the slow fading channel [51] grows in the codeword length n for asymptotic n. In general, the derivation of lower bound for the BSC is more complicated compared to that for the Gaussian [32] and Poisson channels with/out memory [48,52], and entails exploiting of new analysis and inequalities. Here, the error analysis in the achievability proof requires dealing with several combinatorial arguments and using of bounds on the tail for the cumulative distribution function (CDF) of the Binomial distribution. ...
... Here, the error analysis in the achievability proof requires dealing with several combinatorial arguments and using of bounds on the tail for the cumulative distribution function (CDF) of the Binomial distribution. The DKI problem was recently investigated in [52] for a DTPC with ISI where the size of the ISI taps is assumed to scale as L(n, l) = 2 l log n . In contrast to the findings in [52], where the attainable rate region of triple rates (κ, l, R) for the Poisson channel with memory was derived, here, we study the DKI problem for a memoryless BSC, i.e., L = 1, and the attainable rate region of pair rates (κ, R) is established. ...
Article
Full-text available
Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not authorized to be the central performance measure. Instead, the identification capacity for such systems is considered to be an alternative metric, and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC) with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the aim of studying basic information theoretical properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, assuming that K grows exponentially in n, i.e., K=2nκ, where κ is the identification goal rate, then the number of messages that can be accurately identified grows exponentially in n, i.e., 2nR, where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflects impact of the input constraint (Hamming weight) and the channel statistics, i.e., the cross-over probability.
Article
Full-text available
Digital twin (DT), referring to a promising technique to digitally and accurately represent actual physical entities, has attracted explosive interests from both academia and industry. One typical advantage of DT is that it can be used to not only virtually replicate a system’s detailed operations but also analyze the current condition, predict the future behavior, and refine the control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embody non-living entities, e.g., robots and vehicles. When adopted in human-centric systems, a novel concept, called human digital twin (HDT) has thus been proposed. Particularly, HDT allows in silico representation of individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolutions. These prompt the expected application of HDT in personalized healthcare (PH), which can facilitate the remote monitoring, diagnosis, prescription, surgery and rehabilitation, and hence significantly alleviate the heavy burden on the traditional health- care system. However, despite the large potential, HDT faces substantial research challenges in different aspects, and becomes an increasingly popular topic recently. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and the conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including data acquisition layer, data communication layer, computation layer, data management layer and data analysis and decision making layer. Besides reviewing the key technologies for implementing such networking architecture in detail, we conclude this survey by presenting future research directions of HDT.
Article
Full-text available
In this paper, we investigate the Internet of bionano things (IoBNT) which pertains to networks formed by molecular communications. By providing a means of communication through the ubiquitously connected blood vessels (arteries, veins, and capillaries), molecular communication-based IoBNT enables a host of new eHealth applications. For example, an organ monitoring sensor can transfer internal body signals through the IoBNT to health-monitoring applications. We empirically show that blood vessel channels introduce a new set of challenges in the design of molecular communication systems in comparison to free-space channels. Then, we propose cylindrical duct channel models and discuss the corresponding system designs conforming to the channel characteristics. Furthermore, based on prototype implementations, we confirm that molecular communication techniques can be utilized for composing the IoBNT. We believe that the promising results presented in this work, together with the rich research challenges that lie ahead, are strong indicators that IoBNT with molecular communications can drive novel applications for emerging eHealth systems.
Article
Full-text available
The first release of 5G technology is being rolled out worldwide. In parallel, 3GPP is constantly adding new features to upcoming releases covering well-known use cases. This raises the questions i.) when will 6G be introduced?, ii.) how can 6G be motivated for the stakeholders, and iii.) what are the 6G use cases? In this work, we present the perspective of these stakeholders, namely the network operators, manufacturers, and verticals, identifying potential 5G shortcomings and the remaining 6G solution space. We will highlight the Metaverse as the enabler for 6G addressing omnipresent daily challenges and the upcoming energy problem.
Conference Paper
In this paper, we discuss the potential of integrating molecular communication (MC) systems into future generations of wireless networks. First, we explain the advantages of MC compared to conventional wireless communication using electromagnetic waves at different scales, namely at micro-and macroscale. Then, we identify the main challenges when integrating MC into future generation wireless networks. We highlight that two of the greatest challenges are the interface between the chemical and the cyber (Internet) domain, and ensuring communication security. Finally, we present some future applications, such as smart infrastructure and health monitoring, give a timeline for their realization, and point out some areas of research towards the integration of MC into 6G and beyond
Article
Various applications of molecular communications (MC) are event-triggered, and, as a consequence, the prevalent Shannon capacity may not be the right measure for performance assessment. Thus, in this paper, we motivate and establish the identification capacity as an alternative metric. In particular, we study deterministic identification (DI) for the discrete-time Poisson channel (DTPC), subject to an average and a peak molecule release rate constraint, which serves as a model for MC systems employing molecule counting receivers. It is established that the number of different messages that can be reliably identified for this channel scales as 2(nlogn)R, where n and R are the codeword length and coding rate, respectively. Lower and upper bounds on the DI capacity of the DTPC are developed. The obtained large capacity of the DI channel sheds light on the performance of natural DI systems such as natural olfaction, which are known for their extremely large chemical discriminatory power in biology. Furthermore, numerical results for the empirical miss-identification and false identification error rates are provided for finite length codes. This allows us to characterize the behaviour of the error rate for increasing codeword lengths, which complements our theoretically-derived scale for asymptotically large codeword lengths.
Article
In this paper, we propose a novel concept for engineered molecular communication (MC) systems inspired by animal olfaction. We focus on a multi-user scenario where several transmitters wish to communicate with a central receiver. We assume that each transmitter employs a unique mixture of different types of signaling molecules to represent its message and the receiver is equipped with an array comprising R different types of receptors in order to detect the emitted molecule mixtures. The design of an MC system based on orthogonal molecule-receptor pairs implies that the hardware complexity of the receiver linearly scales with the number of signaling molecule types Q (i.e., R = Q ). Natural olfaction systems avoid such high complexity by employing arrays of cross-reactive receptors, where each type of molecule activates multiple types of receptors and each type of receptor is predominantly activated by multiple types of molecules albeit with different activation strengths. For instance, the human olfactory system is believed to discriminate several thousands of chemicals using only a few hundred receptor types, i.e., Q - R . Motivated by this observation, we first develop an end-to-end MC channel model that accounts for the key properties of olfaction. Subsequently, we present the proposed transmitter and receiver designs. In particular, given a set of signaling molecules, we develop algorithms that allocate molecules to different transmitters and optimize the mixture alphabet for communication. Moreover, we formulate the molecule mixture recovery as a convex compressive sensing problem which can be efficiently solved via available numerical solvers. Finally, we present a comprehensive set of simulation results to evaluate the performance of the proposed MC designs revealing interesting insights regarding the design parameters. For instance, we show that mixtures comprising few types of molecules are best suited for communication since they can be more reliably detected by the cross-reactive array than one type of molecule or mixtures of many molecule types.