arXiv:0711.3152v2 [cs.IT] 14 Apr 2008
Multipath Channels of Bounded Capacity
Tobias Koch and Amos Lapidoth
ETH Zurich, Switzerland
Email: {tkoch, lapidoth}@isi.ee.ethz.ch
Abstract—The capacity of discrete-time, non-coherent, multipath fading channels is considered. It is shown that if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the signal-to-noise ratio.
I. INTRODUCTION
This paper studies non-coherent multipath (frequency-selective) fading channels. Such channels have been investigated extensively in the wideband regime, where the signal-to-noise ratio (SNR) is typically small, and it was shown that in the limit as the available bandwidth tends to infinity the capacity of the fading channel is the same as the capacity of the additive white Gaussian noise (AWGN) channel of equal received power, see [1].¹
When the SNR is large we encounter a different situation.
Indeed, it has been shown in [5] for non-coherent frequency-
flat fading channels that if the fading process is regular in
the sense that the present fading cannot be predicted perfectly
from its past, then at high SNR capacity only increases double-
logarithmically in the SNR. This is in stark contrast to the
logarithmic growth of the AWGN capacity. See [6], [7], [8],
and [9] for extensions to multi-antenna systems, and see [10]
and [11] for extensions to non-regular fading, i.e., when the
present fading can be predicted perfectly from its past. Thus,
communicating over non-coherent flat-fading channels at high
SNR is power inefficient.
In this paper, we show that communicating over non-
coherent multipath fading channels at high SNR is not merely
power inefficient, but even worse: if the delay spread is large
in the sense that the variances of the path gains do not decay
faster than geometrically, then capacity is bounded in the SNR.
For such channels, capacity does not tend to infinity as the
SNR tends to infinity. To state this result precisely we begin
with a mathematical description of the channel model.
A. Channel Model
Let $\mathbb{C}$ and $\mathbb{Z}^+$ denote the set of complex numbers and the set of positive integers, respectively. We consider a discrete-time multipath fading channel whose channel output $Y_k \in \mathbb{C}$ at time $k \in \mathbb{Z}^+$, corresponding to the channel inputs $(x_1, x_2, \ldots, x_k) \in \mathbb{C}^k$, is given by

$$Y_k = \sum_{\ell=1}^{k} H_k^{(k-\ell)} x_\ell + Z_k. \tag{1}$$

¹However, in contrast to the infinite-bandwidth capacity of the AWGN channel, where the conditions on the capacity-achieving input distribution are not so stringent, the infinite-bandwidth capacity of non-coherent fading channels can only be achieved by signaling schemes which are "peaky"; see also [2], [3], [4] and references therein.
Here, $H_k^{(\ell)}$ denotes the time-$k$ gain of the $\ell$-th path, and $\{Z_k\}$ is a sequence of independent and identically distributed (IID), zero-mean, variance-$\sigma^2$, circularly-symmetric, complex Gaussian random variables. We assume that for each path $\ell \in \mathbb{Z}_0^+$ (with $\mathbb{Z}_0^+$ denoting the set of non-negative integers) the stochastic process $\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\}$ is a zero-mean stationary process. We denote its variance and its differential entropy rate by

$$\alpha_\ell \triangleq \mathsf{E}\Big[\big|H_k^{(\ell)}\big|^2\Big], \quad \ell \in \mathbb{Z}_0^+$$

and

$$h_\ell \triangleq \lim_{n\to\infty} \frac{1}{n}\, h\big(H_1^{(\ell)}, H_2^{(\ell)}, \ldots, H_n^{(\ell)}\big), \quad \ell \in \mathbb{Z}_0^+,$$

respectively. We further assume that

$$\sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell < \infty \quad \text{and} \quad \inf_{\ell \in \mathcal{L}} h_\ell > -\infty, \tag{2}$$

where the set $\mathcal{L}$ is defined as $\mathcal{L} \triangleq \{\ell \in \mathbb{Z}_0^+ : \alpha_\ell > 0\}$. We finally assume that the processes $\{H_k^{(0)},\, k \in \mathbb{Z}^+\}$, $\{H_k^{(1)},\, k \in \mathbb{Z}^+\}, \ldots$ are independent ("uncorrelated scattering"), that they are jointly independent of $\{Z_k\}$, and that the joint law of $\{Z_k\}$, $\{H_k^{(0)},\, k \in \mathbb{Z}^+\}$, $\{H_k^{(1)},\, k \in \mathbb{Z}^+\}, \ldots$ does not depend on the input sequence $\{x_k\}$. We consider a non-coherent channel model where neither the transmitter nor the receiver is cognizant of the realization of $\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\}$, $\ell \in \mathbb{Z}_0^+$, but both are aware of their statistics. We do not assume that the path gains are Gaussian.
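The channel model (1) is easy to exercise numerically. The following sketch is illustrative only: it takes the path-gain processes to be IID circularly-symmetric Gaussian across time with geometric variances $\alpha_\ell = \rho^\ell$, which is one special case of the model (the paper assumes neither Gaussianity nor temporal independence); all names and parameters are hypothetical.

```python
import numpy as np

def simulate_channel(x, rho=0.5, sigma2=1.0, seed=None):
    """One realization of the multipath channel (1).

    Illustrative special case: path gains IID circularly-symmetric
    complex Gaussian across time, with variances alpha_l = rho**l.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    y = np.zeros(n, dtype=complex)
    for k in range(1, n + 1):            # time index k = 1..n
        acc = 0j
        for ell in range(1, k + 1):      # Y_k depends on x_1, ..., x_k
            alpha = rho ** (k - ell)     # variance of the (k-ell)-th path
            h = np.sqrt(alpha / 2) * (rng.standard_normal()
                                      + 1j * rng.standard_normal())
            acc += h * x[ell - 1]
        z = np.sqrt(sigma2 / 2) * (rng.standard_normal()
                                   + 1j * rng.standard_normal())
        y[k - 1] = acc + z               # additive complex Gaussian noise
    return y

x = np.ones(16, dtype=complex)
y = simulate_channel(x, rho=0.5, sigma2=1.0, seed=0)
```

Note how the number of terms influencing $Y_k$ grows with $k$, which is exactly why the channel is generally not stationary.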
B. Channel Capacity
Let $A_m^n$ denote the sequence $A_m, A_{m+1}, \ldots, A_n$. We define the capacity as

$$C(\mathrm{SNR}) \triangleq \liminf_{n\to\infty} \frac{1}{n} \sup I\big(X_1^n; Y_1^n\big), \tag{3}$$

where the maximization is over all joint distributions on $X_1, X_2, \ldots, X_n$ satisfying the power constraint

$$\frac{1}{n}\sum_{k=1}^{n} \mathsf{E}\big[|X_k|^2\big] \le \mathcal{P}, \tag{4}$$

and where SNR is defined as

$$\mathrm{SNR} \triangleq \frac{\mathcal{P}}{\sigma^2}. \tag{5}$$
By Fano's inequality, no rate above $C(\mathrm{SNR})$ is achievable.² (See [13] for a definition of an achievable rate.) Notice that the above channel (1) is generally not stationary,³ since the number of terms (paths) influencing $Y_k$ depends on $k$. It is therefore prima facie not clear whether the liminf on the RHS of (3) is a limit.
C. Main Result
Theorem 1: Consider the above channel model. Then

$$\liminf_{\ell\to\infty} \frac{\alpha_{\ell+1}}{\alpha_\ell} > 0 \;\Longrightarrow\; \sup_{\mathrm{SNR}>0} C(\mathrm{SNR}) < \infty, \tag{6}$$

where we define, for any $a > 0$, $a/0 \triangleq \infty$, and $0/0 \triangleq 0$.

For example, when $\{\alpha_\ell\}$ is a geometric sequence, i.e., $\alpha_\ell = \rho^\ell$, $\ell \in \mathbb{Z}_0^+$ for some $0 < \rho < 1$, then capacity is bounded.

Theorem 1 is proved in Section II, where it is even shown that (6) would continue to hold if we replaced the liminf in (3) by a limsup. Section III briefly addresses multipath channels of unbounded capacity.
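The condition on the LHS of (6) can be probed numerically for candidate variance sequences. A minimal sketch, with illustrative sequences only: for geometric decay the ratio $\alpha_{\ell+1}/\alpha_\ell$ stays at $\rho > 0$ (capacity bounded by Theorem 1), whereas for $\alpha_\ell = e^{-\ell^2}$ the ratio tends to zero and the condition fails.

```python
import math

def gain_ratios(alpha, n=20):
    """Ratios alpha_{l+1}/alpha_l for l = 0..n-1, the quantity in (6)."""
    return [alpha(l + 1) / alpha(l) for l in range(n)]

# Geometric decay alpha_l = rho**l: ratio is constantly rho > 0,
# so the LHS of (6) holds and Theorem 1 gives bounded capacity.
geometric = gain_ratios(lambda l: 0.5 ** l)

# Faster-than-geometric decay alpha_l = exp(-l**2): the ratio
# exp(-2l - 1) tends to 0, so the condition of Theorem 1 fails.
super_geometric = gain_ratios(lambda l: math.exp(-l ** 2))
```

The second sequence illustrates that Theorem 1 is silent about channels whose variances decay faster than geometrically; these are the subject of Section III.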
II. PROOF OF THEOREM 1
The proof follows along the same lines as the proof of [14, Thm. 1i)].

We first note that it follows from the left-hand side (LHS) of (6) that we can find an $\ell_0 \in \mathbb{Z}_0^+$ and a $0 < \rho < 1$ so that $\alpha_{\ell_0} > 0$ and

$$\frac{\alpha_{\ell+1}}{\alpha_\ell} \ge \rho, \quad \ell = \ell_0, \ell_0+1, \ldots. \tag{7}$$
We continue with the chain rule for mutual information:

$$\frac{1}{n} I(X_1^n; Y_1^n) = \frac{1}{n}\sum_{k=1}^{\ell_0} I\big(X_1^n; Y_k \mid Y_1^{k-1}\big) + \frac{1}{n}\sum_{k=\ell_0+1}^{n} I\big(X_1^n; Y_k \mid Y_1^{k-1}\big). \tag{8}$$
Each term in the first sum on the right-hand side (RHS) of (8) is upper bounded by⁴

$$
I\big(X_1^n; Y_k \mid Y_1^{k-1}\big)
\le h(Y_k) - h\big(Y_k \mid Y_1^{k-1}, X_1^n, H_k^{(0)}, H_k^{(1)}, \ldots, H_k^{(k-1)}\big)
\le \log\!\left(\pi e \left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell}\,\mathsf{E}\big[|X_\ell|^2\big]\right)\right) - \log\big(\pi e \sigma^2\big)
\le \log\!\left(1 + \sup_{\ell\in\mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\right), \tag{9}
$$

where the first inequality follows because conditioning cannot increase entropy; the second inequality follows from the entropy-maximizing property of Gaussian random variables [13, Thm. 9.6.5]; and the last inequality follows by upper bounding $\alpha_\ell \le \sup_{\ell'\in\mathbb{Z}_0^+} \alpha_{\ell'}$, $\ell = 0, 1, \ldots, k-1$, and from the power constraint (4).

²See [12] for conditions that guarantee that $C(\mathrm{SNR})$ is achievable.

³By a stationary channel we mean a channel where for any stationary input $\{X_k\}$ the pair $\{(X_k, Y_k)\}$ is jointly stationary.

⁴Throughout this paper, $\log(\cdot)$ denotes the natural logarithm function.
For $k = \ell_0+1, \ell_0+2, \ldots, n$, we upper bound $I\big(X_1^n; Y_k \mid Y_1^{k-1}\big)$ using the general upper bound for mutual information [5, Thm. 5.1]

$$I(X; Y) \le \int D\big(W(\cdot\,|\,x) \,\big\|\, R(\cdot)\big)\, \mathrm{d}Q(x), \tag{10}$$

where $D(\cdot\|\cdot)$ denotes relative entropy, $W(\cdot\,|\,\cdot)$ is the channel law, $Q(\cdot)$ denotes the distribution on the channel input $X$, and $R(\cdot)$ is any distribution on the output alphabet.⁵ Thus, any choice of output distribution $R(\cdot)$ yields an upper bound on the mutual information.
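The duality bound (10) is easy to exercise in the simplest discrete setting. The sketch below (a binary symmetric channel, not a channel from this paper; all parameters are illustrative) evaluates the RHS of (10) for two choices of output law $R$: a mismatched choice gives a strict upper bound on $I(X;Y)$, while choosing $R$ equal to the true output distribution recovers the mutual information exactly.

```python
import math

def mutual_information_bsc(p, q):
    """I(X;Y) in nats for a BSC with crossover p and input P(X=1)=q."""
    py1 = q * (1 - p) + (1 - q) * p
    hy = -py1 * math.log(py1) - (1 - py1) * math.log(1 - py1)
    hyx = -p * math.log(p) - (1 - p) * math.log(1 - p)
    return hy - hyx

def duality_bound_bsc(p, q, r1):
    """RHS of (10): average over Q of D(W(.|x) || R), with R(Y=1)=r1."""
    bound = 0.0
    for x, qx in ((0, 1 - q), (1, q)):
        for y in (0, 1):
            w = p if y != x else 1 - p
            r = r1 if y == 1 else 1 - r1
            bound += qx * w * math.log(w / r)
    return bound

p, q = 0.1, 0.5
i_xy = mutual_information_bsc(p, q)
loose = duality_bound_bsc(p, q, r1=0.3)  # mismatched R: strict upper bound
tight = duality_bound_bsc(p, q, r1=0.5)  # R = true output law: equality
```

The slack of the bound equals $D(P_Y \| R)$, which is why a judicious choice of $R$, as in (11) below in the paper's continuous setting, is the crux of the proof technique.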
For any given $Y_1^{k-1} = y_1^{k-1}$, we choose the output distribution $R(\cdot)$ to be of density

$$\frac{\sqrt{\beta}}{\pi^2\, |y_k| \left(1 + \beta |y_k|^2\right)}, \quad y_k \in \mathbb{C}, \tag{11}$$

with $\beta = 1/\big(\tilde{\beta}\, |y_{k-\ell_0}|^2\big)$ and⁶

$$\tilde{\beta} = \min\left\{ \frac{\rho^{\ell_0+1}\, \alpha_{\ell_0}}{\max_{0 \le \ell \le \ell_0} \alpha_\ell},\; \rho^{\ell_0+1} \right\}. \tag{12}$$

With this choice,

$$0 < \tilde{\beta} < 1 \quad \text{and} \quad \tilde{\beta}\alpha_\ell \le \alpha_{\ell+\ell_0}, \quad \ell \in \mathbb{Z}_0^+. \tag{13}$$
Using (11) in (10), and averaging over $Y_1^{k-1}$, we obtain

$$
I\big(X_1^n; Y_k \mid Y_1^{k-1}\big)
\le \frac{1}{2}\mathsf{E}\big[\log |Y_k|^2\big]
+ \frac{1}{2}\mathsf{E}\Big[\log\big(\tilde{\beta}\, |Y_{k-\ell_0}|^2\big)\Big]
+ \mathsf{E}\Big[\log\big(\tilde{\beta}\, |Y_{k-\ell_0}|^2 + |Y_k|^2\big)\Big]
- h\big(Y_k \mid X_1^n, Y_1^{k-1}\big)
- \mathsf{E}\big[\log |Y_{k-\ell_0}|^2\big]
+ \log\frac{\pi^2}{\tilde{\beta}}. \tag{14}
$$
We bound the terms in (14) separately. We begin with

$$
\mathsf{E}\big[\log |Y_k|^2\big]
= \mathsf{E}\Big[\mathsf{E}\big[\log |Y_k|^2 \mid X_1^k\big]\Big]
\le \mathsf{E}\Big[\log \mathsf{E}\big[|Y_k|^2 \mid X_1^k\big]\Big]
= \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right], \tag{15}
$$

where the inequality follows from Jensen's inequality. Likewise, we use Jensen's inequality and (13) to upper bound

$$
\mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2\big)\Big]
\le \mathsf{E}\left[\log\left(\tilde{\beta}\sigma^2 + \tilde{\beta}\sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right]
\le \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2\right)\right] \tag{16}
$$

and

$$
\mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2 + |Y_k|^2\big)\Big]
\le \mathsf{E}\left[\log\left(2\sigma^2 + 2\sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2 + \sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right]
\le \log 2 + \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right], \tag{17}
$$

where the second inequality follows because $\sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2 \le 2\sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2$.

⁵For channels with finite input and output alphabets this inequality follows by Topsøe's identity [15]; see also [16, Thm. 3.4].

⁶When $y_{k-\ell_0} = 0$, the density of the Cauchy distribution (11) is undefined. However, this event is of zero probability and has therefore no impact on the mutual information $I\big(X_1^n; Y_k \mid Y_1^{k-1}\big)$.
Next, we derive a lower bound on $h\big(Y_k \mid X_1^n, Y_1^{k-1}\big)$. Let $H_{k'} \triangleq \big(H_{k'}^{(0)}, H_{k'}^{(1)}, \ldots, H_{k'}^{(k'-1)}\big)$, $k' = 1, 2, \ldots, k-1$. We have

$$
h\big(Y_k \mid X_1^n, Y_1^{k-1}\big)
\ge h\big(Y_k \mid X_1^n, Y_1^{k-1}, H_1^{k-1}\big)
= h\big(Y_k \mid X_1^n, H_1^{k-1}\big), \tag{18}
$$

where the inequality follows because conditioning cannot increase entropy, and where the equality follows because, conditional on $\big(X_1^n, H_1^{k-1}\big)$, $Y_k$ is independent of $Y_1^{k-1}$. Let $\mathcal{S}_k$ be defined as

$$\mathcal{S}_k \triangleq \big\{\ell = 1, 2, \ldots, k : \min\{|x_\ell|^2, \alpha_{k-\ell}\} > 0\big\}. \tag{19}$$
Using the entropy power inequality [13, Thm. 16.6.3], and using that the processes $\{H_k^{(0)},\, k \in \mathbb{Z}^+\}$, $\{H_k^{(1)},\, k \in \mathbb{Z}^+\}, \ldots$ are independent and jointly independent of $X_1^n$, it can be shown that for any given $X_1^n = x_1^n$

$$
h\!\left(\sum_{\ell=1}^{k} H_k^{(k-\ell)} x_\ell + Z_k \,\middle|\, X_1^n = x_1^n, H_1^{k-1}\right)
\ge \log\!\left(\sum_{\ell\in\mathcal{S}_k} e^{\,h\left(H_k^{(k-\ell)} X_\ell \,\middle|\, X_\ell = x_\ell,\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\right)} + e^{\,h(Z_k)}\right). \tag{20}
$$
We lower bound the differential entropies on the RHS of (20) as follows. The differential entropies in the sum are lower bounded by

$$
h\Big(H_k^{(k-\ell)} X_\ell \,\Big|\, X_\ell = x_\ell,\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\Big)
= \log\big(\alpha_{k-\ell} |x_\ell|^2\big) + h\Big(H_k^{(k-\ell)} \,\Big|\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\Big) - \log \alpha_{k-\ell}
\ge \log\big(\alpha_{k-\ell} |x_\ell|^2\big) + \inf_{\ell'\in\mathcal{L}} \big(h_{\ell'} - \log \alpha_{\ell'}\big), \quad \ell \in \mathcal{S}_k, \tag{21}
$$

where the equality follows from the behavior of differential entropy under scaling; and where the inequality follows by the stationarity of the process $\{H_k^{(k-\ell)},\, k \in \mathbb{Z}^+\}$, which implies that the differential entropy $h\big(H_k^{(k-\ell)} \,\big|\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\big)$ cannot be smaller than the differential entropy rate $h_{k-\ell}$ [13, Thms. 4.2.1 & 4.2.2], and by lower bounding $\big(h_{k-\ell} - \log \alpha_{k-\ell}\big)$ by $\inf_{\ell'\in\mathcal{L}} \big(h_{\ell'} - \log \alpha_{\ell'}\big)$ (which holds for each $\ell \in \mathcal{S}_k$ because $\ell \in \mathcal{S}_k$ implies $k-\ell \in \mathcal{L}$). The last differential entropy on the RHS of (20) is lower bounded by

$$
h(Z_k) \ge \inf_{\ell\in\mathcal{L}} \big(h_\ell - \log \alpha_\ell\big) + \log \sigma^2, \tag{22}
$$

which follows by noting that

$$
\inf_{\ell\in\mathcal{L}} \big(h_\ell - \log \alpha_\ell\big) \le \log(\pi e). \tag{23}
$$
Applying (21) & (22) to (20), and averaging over $X_1^n$, then yields

$$
h\big(Y_k \mid X_1^n, Y_1^{k-1}\big)
\ge \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right] + \inf_{\ell\in\mathcal{L}} \big(h_\ell - \log \alpha_\ell\big). \tag{24}
$$
We continue with the analysis of (14) by lower bounding $\mathsf{E}\big[\log |Y_{k-\ell_0}|^2\big]$. To this end, we write the expectation as

$$
\mathsf{E}\left[\mathsf{E}\left[\log \left|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}\right|^2 \,\middle|\, X_1^{k-\ell_0}\right]\right]
$$

and lower bound the conditional expectation by

$$
\mathsf{E}\left[\log \left|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}\right|^2 \,\middle|\, X_1^{k-\ell_0} = x_1^{k-\ell_0}\right]
= \log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\right)
+ 2\,\mathsf{E}\left[\log \left|\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right|\right]
\ge \log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\right)
+ \log \delta^2 - 2\epsilon(\delta, \eta)
- \frac{2}{\eta}\, h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right) \tag{25}
$$

for some $0 < \delta \le 1$ and $0 < \eta < 1$, where

$$
h^-(X) \triangleq \int_{\{x\in\mathbb{C} : f_X(x) > 1\}} f_X(x) \log f_X(x)\, \mathrm{d}x, \tag{26}
$$

and where $\epsilon(\delta, \eta) > 0$ tends to zero as $\delta \downarrow 0$.
(We write $x_\ell$ in lower case to indicate that expectation and entropy are conditional on $X_1^{k-\ell_0} = x_1^{k-\ell_0}$.) Here, the inequality follows by writing the expectation in the form $\mathsf{E}\big[\log |A|^{-1} \cdot \mathrm{I}\{|A| > \delta\}\big] + \mathsf{E}\big[\log |A|^{-1} \cdot \mathrm{I}\{|A| \le \delta\}\big]$ (where $\mathrm{I}\{\cdot\}$ denotes the indicator function), and by then upper bounding the first expectation by $\log(1/\delta)$ and the second expectation using [5, Lemma 6.7]. We continue by upper bounding

$$
h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right)
= h^+\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right)
- h\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right)
\le \frac{2}{e} + \log(\pi e) + \log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\right)
- h\!\left(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}\right), \tag{27}
$$

where $h^+(X)$ is defined as $h^+(X) \triangleq h(X) + h^-(X)$. Here, we applied [5, Lemma 6.4] to upper bound

$$
h^+\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2}}\right) \le \frac{2}{e} + \log(\pi e). \tag{28}
$$
Averaging (27) over $X_1^{k-\ell_0}$ yields

$$
h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2}} \,\middle|\, X_1^{k-\ell_0}\right)
\le \frac{2}{e} + \log(\pi e) + \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right]
- h\!\left(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0} \,\middle|\, X_1^{k-\ell_0}\right)
\le \frac{2}{e} + \log(\pi e) - \inf_{\ell\in\mathcal{L}} \big(h_\ell - \log \alpha_\ell\big), \tag{29}
$$

where the second inequality follows by conditioning the differential entropy additionally on $Y_1^{k-\ell_0-1}$, and by then using the lower bound (24). A lower bound on $\mathsf{E}\big[\log |Y_{k-\ell_0}|^2\big]$ now follows by averaging (25) over $X_1^{k-\ell_0}$, and by applying (29):

$$
\mathsf{E}\big[\log |Y_{k-\ell_0}|^2\big]
\ge \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right]
+ \log \delta^2 - 2\epsilon(\delta, \eta) - \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right)
+ \inf_{\ell\in\mathcal{L}} \frac{2}{\eta}\big(h_\ell - \log \alpha_\ell\big). \tag{30}
$$
Returning to the analysis of (14), we obtain from (30), (24), (17), (16), and (15)

$$
I\big(X_1^n; Y_k \mid Y_1^{k-1}\big)
\le \frac{1}{2}\mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right]
+ \frac{1}{2}\mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2\right)\right]
+ \log 2 + \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right]
- \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right]
- \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big)
- \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right]
- \log \delta^2 + 2\epsilon(\delta,\eta) + \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right)
- \inf_{\ell\in\mathcal{L}} \frac{2}{\eta}\big(h_\ell - \log\alpha_\ell\big)
+ \log\frac{\pi^2}{\tilde{\beta}}
\le \Delta + \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right]
- \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right], \tag{31}
$$

with

$$
\Delta \triangleq -\inf_{\ell\in\mathcal{L}}\left(1 + \frac{2}{\eta}\right)\big(h_\ell - \log\alpha_\ell\big)
+ \log\frac{2\pi^2}{\tilde{\beta}\delta^2} + 2\epsilon(\delta,\eta) + \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right). \tag{32}
$$

The second inequality in (31) follows because $\sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2 \le \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2$.
In order to show that the capacity is bounded in the SNR, we apply (31) and (9) to (8), and then use that for any sequences $\{a_k\}$ and $\{b_k\}$

$$
\sum_{k=\ell_0+1}^{n} (a_k - b_k)
= \sum_{k=n-\ell_0+1}^{n} \big(a_k - b_{k-n+2\ell_0}\big)
+ \sum_{k=\ell_0+1}^{n-\ell_0} \big(a_k - b_{k+\ell_0}\big). \tag{33}
$$
Defining

$$
a_k \triangleq \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\right)\right] \tag{34}
$$

and

$$
b_k \triangleq \mathsf{E}\left[\log\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\right)\right], \tag{35}
$$
we have for the first sum on the RHS of (33)

$$
\sum_{k=n-\ell_0+1}^{n} \big(a_k - b_{k-n+2\ell_0}\big)
= \sum_{k=n-\ell_0+1}^{n} \mathsf{E}\left[\log\left(\frac{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k-n+\ell_0} \alpha_{k-n+\ell_0-\ell} |X_\ell|^2}\right)\right]
\le \ell_0 \log\left(1 + \sup_{\ell\in\mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\right), \tag{36}
$$

which follows by lower bounding the denominator by $\sigma^2$, and by then using Jensen's inequality along with the last inequality in (9). For the second sum on the RHS of (33) we have

$$
\sum_{k=\ell_0+1}^{n-\ell_0} \big(a_k - b_{k+\ell_0}\big)
= \sum_{k=\ell_0+1}^{n-\ell_0} \mathsf{E}\left[\log\left(\frac{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}\right)\right] = 0. \tag{37}
$$
Thus, applying (31)–(37) and (9) to (8), we obtain

$$
\frac{1}{n} I\big(X_1^n; Y_1^n\big)
\le \frac{\ell_0}{n} \log\left(1 + \sup_{\ell\in\mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\right)
+ \frac{\ell_0}{n} \log\left(1 + \sup_{\ell\in\mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\right)
+ \frac{n-\ell_0}{n}\,\Delta, \tag{38}
$$

which tends to $\Delta < \infty$ as $n$ tends to infinity. This proves Theorem 1.
III. MULTIPATH CHANNELS OF UNBOUNDED CAPACITY
We have seen in Theorem 1 that if the variances of the path gains $\{\alpha_\ell\}$ do not decay faster than geometrically, then capacity is bounded in the SNR. In this section, we demonstrate that this need not be the case when the variances of the path gains decay faster than geometrically. The following theorem presents a sufficient condition for the capacity $C(\mathrm{SNR})$ to be unbounded in the SNR.

Theorem 2: Consider the above channel model. Then

$$
\lim_{\ell\to\infty} \frac{1}{\ell} \log\log\frac{1}{\alpha_\ell} = \infty
\;\Longrightarrow\; \sup_{\mathrm{SNR}>0} C(\mathrm{SNR}) = \infty. \tag{39}
$$
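The conditions of the two theorems can be contrasted on example decay laws. The sketch below (illustrative sequences only, computed in the log domain to avoid underflow) evaluates the ratio from (6) and the quantity from (39): geometric decay satisfies Theorem 1; $\alpha_\ell = e^{-e^{\ell^2}}$ satisfies Theorem 2; and $\alpha_\ell = e^{-e^{\ell}}$ (double-exponential decay) satisfies neither condition, so neither theorem classifies it.

```python
import math

def t1_ratio(log_alpha, l):
    """alpha_{l+1}/alpha_l, the quantity in (6), from log-variances."""
    return math.exp(log_alpha(l + 1) - log_alpha(l))

def t2_quantity(log_alpha, l):
    """(1/l) * log log (1/alpha_l), the quantity in (39)."""
    return math.log(-log_alpha(l)) / l

geometric = lambda l: l * math.log(0.5)  # alpha_l = 0.5**l
double_exp = lambda l: -math.exp(l)      # alpha_l = exp(-e**l)
faster = lambda l: -math.exp(l ** 2)     # alpha_l = exp(-e**(l**2))

# Theorem 1 applies: the ratio stays at 0.5 > 0 (bounded capacity).
r_geo = [t1_ratio(geometric, l) for l in range(1, 26)]

# Theorem 2 needs (1/l) loglog(1/alpha_l) -> infinity:
q_dexp = [t2_quantity(double_exp, l) for l in range(1, 26)]  # stays at 1
q_faster = [t2_quantity(faster, l) for l in range(1, 26)]    # grows like l
```

That the double-exponential case escapes both theorems reflects a genuine gap: neither (6) nor (39) characterizes variances decaying faster than exponentially but slower than double-exponentially.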
Proof: Omitted.

Note: We do not claim that $C(\mathrm{SNR})$ is achievable. However, it can be shown that when, for example, the processes $\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\}$, $\ell \in \mathbb{Z}_0^+$ are IID Gaussian, then the maximum achievable rate is unbounded in the SNR, i.e., any rate is achievable for sufficiently large SNR.

Certainly, the condition on the LHS of (39) is satisfied when the channel has finite memory in the sense that for some finite $L \in \mathbb{Z}_0^+$

$$\alpha_\ell = 0, \quad \ell = L+1, L+2, \ldots.$$
In this case, (1) becomes

$$
Y_k = \begin{cases}
\displaystyle\sum_{\ell=0}^{k-1} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = 1, 2, \ldots, L \\[1ex]
\displaystyle\sum_{\ell=0}^{L} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = L+1, L+2, \ldots.
\end{cases} \tag{40}
$$

This channel (40) was studied for general (but finite) $L$ in [17], where it was shown that its capacity satisfies

$$
\lim_{\mathrm{SNR}\to\infty} \frac{C(\mathrm{SNR})}{\log\log \mathrm{SNR}} = 1. \tag{41}
$$

Thus, for finite $L$, the capacity pre-loglog (41) is not affected by the multipath behavior. This is perhaps surprising, as Theorem 1 implies that if $L = \infty$, and if the variances of the path gains do not decay faster than geometrically, then the pre-loglog is zero.
ACKNOWLEDGMENT
Discussions with Helmut Bölcskei and Giuseppe Durisi are gratefully acknowledged.
REFERENCES
[1] R. G. Gallager, Information Theory and Reliable Communication. John Wiley & Sons, 1968.
[2] V. Sethuraman, L. Wang, B. Hajek, and A. Lapidoth, “Low SNR capacity of fading channels — MIMO and delay spread,” in Proc. IEEE Int. Symposium on Inf. Theory, Nice, France, June 24–29, 2007.
[3] M. Médard and R. G. Gallager, “Bandwidth scaling for fading multipath channels,” IEEE Trans. Inform. Theory, vol. 48, no. 4, pp. 840–852, Apr. 2002.
[4] İ. E. Telatar and D. N. C. Tse, “Capacity and mutual information of wideband multipath fading channels,” IEEE Trans. Inform. Theory, vol. 46, no. 4, pp. 1384–1400, July 2000.
[5] A. Lapidoth and S. M. Moser, “Capacity bounds via duality with applications to multiple-antenna systems on flat fading channels,” IEEE Trans. Inform. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
[6] T. Koch and A. Lapidoth, “Degrees of freedom in non-coherent stationary MIMO fading channels,” in Proc. Winter School Cod. and Inform. Theory, Bratislava, Slovakia, Feb. 20–25, 2005.
[7] ——, “The fading number and degrees of freedom in non-coherent MIMO fading channels: a peace pipe,” in Proc. IEEE Int. Symposium on Inf. Theory, Adelaide, Australia, Sept. 4–9, 2005.
[8] A. Lapidoth and S. M. Moser, “The fading number of single-input multiple-output fading channels with memory,” IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 437–453, Feb. 2006.
[9] S. M. Moser, “The fading number of multiple-input multiple-output fading channels with memory,” in Proc. IEEE Int. Symposium on Inf. Theory, Nice, France, June 24–29, 2007.
[10] A. Lapidoth, “On the asymptotic capacity of stationary Gaussian fading channels,” IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 437–446, Feb. 2005.
[11] T. Koch and A. Lapidoth, “Gaussian fading is the worst fading,” in Proc. IEEE Int. Symposium on Inf. Theory, Seattle, Washington, USA, July 9–14, 2006.
[12] S. Verdú and T. S. Han, “A general formula for channel capacity,” IEEE Trans. Inform. Theory, vol. 40, no. 4, pp. 1147–1157, July 1994.
[13] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 1991.
[14] T. Koch, A. Lapidoth, and P. P. Sotiriadis, “A hot channel,” in Proc. Inform. Theory Workshop (ITW), Lake Tahoe, CA, USA, Sept. 2–6, 2007.
[15] F. Topsøe, “An information theoretical identity and a problem involving capacity,” Studia Sci. Math. Hungar., vol. 2, pp. 291–292, 1967.
[16] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, 1981.
[17] T. Koch and A. Lapidoth, “On multipath fading channels at high SNR,” 2008, subm. to IEEE Int. Symposium on Inf. Theory, Toronto, Canada. [Online]. Available: http://arxiv.org/abs/0801.0672
... It has been shown in [1] for noncoherent frequency-flat fading channels that if the fading process is of finite entropy rate, then at high SNR capacity grows double-logarithmically with the SNR. 1 For noncoherent multipath fading channels, it has been recently demonstrated that if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the SNR [3]. For such channels, capacity does not tend to infinity as the SNR tends to infinity. ...
... Here H (ℓ) k denotes the time-k gain of the ℓ-th path; {Z k } is a sequence of independent and identically distributed (IID), zero-mean, variance-σ 2 , circularly-symmetric, complex Gaussian random variables; and L ∈ Z + 0 (where Z + 0 denotes the set of nonnegative integers) denotes the number of paths that influence Y k . For L = 0, the channel (1) reduces to the flat-fading channel that was studied in [1]; and for L = ∞, Equation (1) describes the multipath fading channel that was studied in [3]. In this paper we shall focus on the case where the number of paths is finite, i.e., where L < ∞. ...
... For flat-fading channels (i.e., when L = 0) we have Λ = 1 [1]. For multipath fading channels with an infinite number of paths (i.e., when L = ∞), it has been shown in [3] that when the sequence {α ℓ } decays not faster than geometrically, then capacity is bounded in the SNR and hence Λ = 0. One might therefore expect that the pre-loglog decays with L. It turns out, however, that this is not the case. ...
Article
Full-text available
This paper studies the capacity of discrete-time multipath fading channels. It is assumed that the number of paths is finite, i.e., that the channel output is influenced by the present and by the L previous channel inputs. A noncoherent channel model is considered where neither transmitter nor receiver are cognizant of the fading's realization, but both are aware of its statistic. The focus is on capacity at high signal-to-noise ratios (SNR). In particular, the capacity pre-loglog - defined as the limiting ratio of the capacity to loglog SNR as SNR tends to infinity - is studied. It is shown that, irrespective of the number paths L, the capacity pre-loglog is 1.
... Recently, it has been demonstrated that communicating over noncoherent multipath fading channels at high SNR is not merely power inefficient, but may be even worse: if the delay spread is large in the sense that the variances of the path gains decay exponentially or slower, then capacity is bounded in the SNR; see [3,Thm. 1]. ...
... In contrast, if the variances of the path gains decay faster than double-exponentially, then capacity is unbounded in the SNR; see [3,Thm. 2]. ...
... The above results demonstrate that whether the capacity of a multipath channel is unbounded in the SNR depends critically on the decay rate of the variances of the path gains. However, [ [3,Thm. 2] fail to characterize the capacity of channels for which the variances of the path gains decay faster than exponentially but slower than double-exponentially. ...
Article
Full-text available
The capacity of discrete-time, noncoherent, multipath fading channels is considered. It is shown that if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the transmit power.
... In the approximating discrete-time discrete-frequency inputoutput relation (11), ISI and ICI are neglected [see (12)]. But the high-SNR behavior of a fading channel is heavily influenced by ISI and ICI, as recently shown in [71]. ...
... Here, (a) follows from Property 5, (b) from the WSSUS property, and (c) from Property 3. We finally substitute (71) in (70) and obtain ...
Article
Full-text available
We derive bounds on the noncoherent capacity of wide-sense stationary uncorrelated scattering (WSSUS) channels that are selective both in time and frequency, and are underspread, i.e., the product of the channel's delay spread and Doppler spread is small. The underspread assumption is satisfied by virtually all wireless communication channels. For input signals that are peak constrained in time and frequency, we obtain upper and lower bounds on capacity that are explicit in the channel's scattering function, are accurate for a large range of bandwidth, and allow to coarsely identify the capacity-optimal bandwidth as a function of the peak power and the channel's scattering function. We also obtain a closed-form expression for the first-order Taylor series expansion of capacity in the infinite-bandwidth limit, and show that our bounds are tight in the wideband regime. For input signals that are peak constrained in time only (and, hence, allowed to be peaky in frequency), we provide upper and lower bounds on the infinite-bandwidth capacity. Our lower bound is closely related to a result by Viterbi (1967). We find cases where the bounds coincide and, hence, the infinite-bandwidth capacity is characterized exactly. The analysis in this paper is based on a discrete-time discrete-frequency approximation of WSSUS time- and frequency-selective channels. This discretization takes the underspread property of the channel explicitly into account.
Article
Full-text available
Discrete-time Rayleigh-fading single-input single-output (SISO) and multiple-input multiple-output (MIMO) channels are considered, with no channel state information at the transmitter or the receiver. The fading is assumed to be stationary and correlated in time, but independent from antenna to antenna. Peak-power and average-power constraints are imposed on the transmit antennas. For MIMO channels, these constraints are either imposed on the sum over antennas, or on each individual antenna. For SISO channels and MIMO channels with sum power constraints, the asymptotic capacity as the peak signal-to-noise ratio (SNR) goes to zero is identified; for MIMO channels with individual power constraints, this asymptotic capacity is obtained for a class of channels called transmit separable channels. The results for MIMO channels with individual power constraints are carried over to SISO channels with delay spread (i.e., frequency-selective fading).
Conference Paper
The capacity of discrete-time, noncoherent, multipath fading channels is considered. It is shown that if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the transmit power.
Book
In mobilen Kommunikationssystemen wird das Sendesignal über einen zeitveränderlichen Funkkanal übertragen. Der Funkkanal kann hierbei als ein stochastischer bandbegrenzter Prozess angesehen werden, dessen Realisierung dem Empfänger typischerweise nicht bekannt ist. Um dennoch eine kohärente Signaldetektion im Empfänger zu ermöglichen, wird der Kanalprozess oft auf der Basis von bekannten Sendesymbolen, sogenannten Pilotsymbolen, die periodisch in die Sendesequenz eingefügt sind, geschätzt. Die so erreichbare Datenrate hängt von der Dynamik des Kanalprozesses ab. Für dieses konventionelle Verfahren einer kohärenten Detektion (Synchronized Detection) auf der Basis einer ausschließlich auf Pilotsymbolen basierenden Kanalschätzung existieren Schranken für die erreichbare Datenrate. Andererseits sind in den letzten Jahren iterative Empfängerstrukturen in den Fokus der Forschung gelangt, bei denen die Kanalschätzung durch Zuverlässigkeitsinformationen über die Datensymbole, die vom Kanaldecoder geliefert werden, verbessert wird (codeunterstützte Kanalschätzung). Für diese Systeme sind die Schranken über die erreichbaren Datenraten bei einer ausschließlich pilotbasierten Kanalschätzung nicht gültig. Die Untersuchung des möglichen Effizienzgewinns durch codeunterstützte Kanalschätzung wirft die Frage nach den damit erreichbaren Datenraten und somit auch nach der Kapazität eines stationären Fadingkanals auf. Obwohl diese Klasse von Kanälen typisch für eine Vielzahl an realistischen Systemen ist, ist schon für den einfachen Fall eines frequenzflachen Rayleigh-Kanals die genaue Kapazität und die kapazitätserreichende Eingangsverteilung unbekannt. Es gibt Schranken für die Kapazität, die meist jedoch nur in bestimmten SNR-Bereichen dicht sind oder auf einer Beschränkung der Momentanleistung basieren. 
Vor diesem Hintergrund werden in der vorliegenden Dissertation verschiedene informationstheoretische Fragestellungen bezüglich der erreichbaren Datenraten nichtkohärenter stationärer Rayleigh-Fadingkanäle behandelt. Zunächst werden Schranken für die erreichbare Datenrate mit unabhängigen zirkulärsymmetrischen gaußverteilten Eingangssymbolen, die bei perfekter Kanalkenntnis am Empfänger kapazitätserreichend sind, hergeleitet. Diese Schranken sind dicht in dem Sinne, dass ihre Differenz für alle SNRs beschränkt ist. Für asymptotisch kleine Kanaldynamik konvergiert die untere Schranke gegen die kohärente Kanalkapazität. Diese Schranken werden auf MIMO-Kanäle und auf frequenzselektive Kanäle erweitert. Der Vergleich dieser Schranken mit der erreichbaren Datenrate bei kohärenter Detektion und einer ausschließlich pilotbasierten Kanalschätzung gibt einen Hinweis auf die Performance von konventionellen Empfängern. Jedoch werden in Systemen mit codeunterstützter Kanalschätzung nach wie vor periodische Pilotsymbole genutzt. Deshalb wird in einem weiteren Teil der Arbeit die erreichbare Datenrate mit periodischen Pilotsymbolen und Empfängern basierend auf kohärenter Detektion und einer iterativen codeunterstützten Kanalschätzung untersucht und für eine spezielle Empfängerstruktur eine Näherung für eine obere Schranke für die erreichbare Datenrate hergeleitet. Der Vergleich der Näherung für die obere Schranke der erreichbaren Datenrate bei iterativer codeunterstützter Kanalschätzung mit der erreichbaren Datenrate bei kohärenten Detektion und einer ausschließlich auf Pilotsymbolen basierenden Kanalschätzung gibt eine approximative obere Grenze für den maximal möglichen Gewinn durch die iterative Empfängerstruktur. In diesem Zusammenhang wird auch gezeigt, welcher Teil der Transinformation zwischen Sender und Empfänger im Fall einer kohärenten Detektion mit einer ausschließlich auf Pilotsymbolen basierenden Kanalschätzung verworfen wird. 
Aus informationstheoretischer Sicht stellt sich die Frage, ob die oft verwendeten periodischen Pilotsequenzen optimal sind. Um dieser Frage nachzugehen, wird die Transinformation zwischen Sender und Empfänger für gegebene diskrete Signalsequenzen untersucht. Es wird eine implizite Lösung für die optimale Eingangsverteilung auf der Basis der Kullback-Leibler Divergenz angegeben. Darauf basierend wird gezeigt, dass periodische Pilotsymbole im Allgemeinen nicht kapazitätserreichend sind. Sie ermöglichen aber für praktische Systeme Empfänger mit niedriger Komplexität. In typical mobile communication systems transmission takes place over a time-varying fading channel. The stochastic channel fading process can assumed to be bandlimited and its realization is usually unknown to the receiver. To allow for a coherent signal detection, the channel fading process is often estimated based on pilot symbols which are periodically inserted into the transmit symbols sequence. The achievable data rate with this approach depends on the dynamics of the channel fading process. For this conventional approach, i.e., performing channel estimation solely based on pilot symbols and using it for coherent detection (synchronized detection) in a second step, bounds on the achievable data rate are known. However, in recent years receiver structures got into the focus of research, where the channel estimation is iteratively enhanced based on the reliability information on data symbols (code-aided channel estimation). For this kind of systems, the bounds on the achievable data rate with synchronized detection based on a solely pilot based channel estimation are no longer valid. The study of the possible performance gain when using such receivers with synchronized detection and a code-aided channel estimation in comparison to synchronized detection in combination with a solely pilot based channel estimation poses also the question on the capacity of stationary fading channels. 
Although such channels are typical for many practical mobile communication systems, already for the simple case of a Rayleigh flat-fading channel the capacity and the capacity-achieving input distribution are unknown. There exist bounds on the capacity, however, most of them are tight only in a limited SNR regime or rely on a peak power constraint. Thinking of this, in the present thesis various aspects regarding the capacity/achievable data rate of stationary Rayleigh fading channels are treated. First, bounds on the achievable data rate with i.i.d. zero-mean proper Gaussian input symbols, which are capacity-achieving in the coherent case, i.e., in case of perfect channel knowledge at the receiver, are derived. These bounds are tight in the sense that the difference between the upper and the lower bound is bounded for all SNRs. The lower bound converges to the coherent capacity for asymptotically small channel dynamics. Furthermore, these bounds are extended to the case of multiple-input multiple-output (MIMO) channels and to the case of frequency selective channels. The comparison of these bounds on the achievable rate with i.i.d. zero-mean proper Gaussian input symbols to the achievable rate while using receivers with synchronized detection based on a solely pilot based channel estimation already gives an indication on the performance of such conventional receiver structures. However, for systems with receivers based on iterative code-aided channel estimation periodic pilot symbols are still used. Therefore, in a further part of the present work the achievable rate with receivers based on synchronized detection and a code-aided channel estimation is studied. For a specific type of such a receiver an approximate upper bound on the achievable rate is derived. 
The comparison of this approximate upper bound with the data rate achievable by receivers using synchronized detection based on a solely pilot-based channel estimation gives an approximate upper bound on the possible gain of this kind of code-aided channel estimation over the conventional receiver with a solely pilot-based channel estimation. In this context, it is also shown which part of the mutual information between transmitter and receiver is discarded by the conventional receiver with synchronized detection based on a solely pilot-based channel estimation. Concerning the typically applied periodic pilot symbols, the question arises whether they are optimal from an information-theoretic perspective. To address this question, the mutual information between transmitter and receiver is studied for a given discrete signaling set. The optimal input distribution, i.e., the one that maximizes the mutual information when restricting to the given signaling set, is characterized implicitly in terms of the Kullback-Leibler distance. Based on this characterization, it is shown that periodic pilot symbols are not capacity-achieving in general. For practical systems, however, they allow for receivers with small computational complexity.
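The implicit Kullback-Leibler characterization mentioned in this abstract can be illustrated by the classical Kuhn-Tucker condition for a memoryless channel with transition law $W$ and input alphabet $\mathcal{X}$; this is a standard textbook result and only a sketch of the idea, not the thesis's exact formulation (the channels treated there have memory):

```latex
% Kuhn--Tucker condition: an input distribution P^* on the signaling set
% \mathcal{X} maximizes I(P;W) if and only if, with output distribution
% (P^*W)(y) = \sum_{x \in \mathcal{X}} P^*(x) W(y \mid x),
\begin{align*}
  D\bigl(W(\cdot \mid x) \,\big\|\, P^*W\bigr) &= C
    && \text{for all } x \in \mathcal{X} \text{ with } P^*(x) > 0, \\
  D\bigl(W(\cdot \mid x) \,\big\|\, P^*W\bigr) &\le C
    && \text{for all } x \in \mathcal{X} \text{ with } P^*(x) = 0,
\end{align*}
% where C = I(P^*;W) and D(\cdot\|\cdot) is the Kullback--Leibler distance.
```

In words: every signal point used with positive probability must contribute the same Kullback-Leibler distance to the induced output distribution, which is why a rigid, deterministic pilot pattern generally fails to meet the condition.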
Article
This paper studies the capacity of discrete-time multipath fading channels. It is assumed that the number of paths is finite, i.e., that the channel output is influenced by the present and by the L previous channel inputs. A noncoherent channel model is considered where neither transmitter nor receiver is cognizant of the fading's realization, but both are aware of its statistics. The focus is on capacity at high signal-to-noise ratio (SNR). In particular, the capacity pre-loglog, defined as the limiting ratio of the capacity to log log SNR as SNR tends to infinity, is studied. It is shown that, irrespective of the number of paths L, the capacity pre-loglog is 1.
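The pre-loglog described in this abstract can be written compactly; the symbol $\Lambda$ is introduced here for illustration and is not the paper's notation:

```latex
% Capacity pre-loglog: the limiting ratio of capacity to log log SNR.
\Lambda \triangleq \lim_{\mathsf{SNR} \to \infty}
  \frac{C(\mathsf{SNR})}{\log \log \mathsf{SNR}}
```

The result stated above is that $\Lambda = 1$ for every finite number of paths L.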
Article
The capacity of peak-power limited, single-antenna, noncoherent, flat-fading channels with memory is considered. The emphasis is on the capacity pre-log, i.e., on the limiting ratio of channel capacity to the logarithm of the signal-to-noise ratio (SNR), as the SNR tends to infinity. It is shown that, among all stationary and ergodic fading processes of a given spectral distribution function and whose law has no mass point at zero, the Gaussian process gives rise to the smallest pre-log. The assumption that the law of the fading process has no mass point at zero is essential in the sense that there exist stationary and ergodic fading processes whose law has a mass point at zero and that give rise to a smaller pre-log than the Gaussian process of equal spectral distribution function. An extension of our results to multiple-input single-output fading channels with memory is also presented.
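The pre-log studied in this abstract is the analogous high-SNR limit with a single logarithm; the symbol $\Pi$ is introduced here for illustration and is not the paper's notation:

```latex
% Capacity pre-log: the limiting ratio of capacity to log SNR.
\Pi \triangleq \lim_{\mathsf{SNR} \to \infty}
  \frac{C(\mathsf{SNR})}{\log \mathsf{SNR}}
```

A positive pre-log means capacity grows logarithmically in the SNR, as for the AWGN channel; the result above identifies the Gaussian fading process as the worst case for this growth rate among the processes considered.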
Conference Paper
The capacity of peak-power limited, single-antenna, non-coherent, flat-fading channels with memory is considered. The emphasis is on the capacity pre-log, i.e., on the limiting ratio of channel capacity to the logarithm of the signal-to-noise ratio (SNR), as the SNR tends to infinity. It is shown that, among all stationary and ergodic fading processes of a given spectral distribution function whose law has no mass point at zero, the Gaussian process gives rise to the smallest pre-log.
Article
Csiszár and Körner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information-theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self-contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics. © Akadémiai Kiadó, Budapest 1981 and Cambridge University Press 2011.
Chapter
Information theory answers two fundamental questions in communication theory: what is the ultimate data compression (answer: the entropy H), and what is the ultimate transmission rate of communication (answer: the channel capacity C). For this reason some consider information theory to be a subset of communication theory. We will argue that it is much more. Indeed, it has fundamental contributions to make in statistical physics (thermodynamics), computer science (Kolmogorov complexity or algorithmic complexity), statistical inference (Occam's Razor: “The simplest explanation is best”) and to probability and statistics (error rates for optimal hypothesis testing and estimation). The relationship of information theory to other fields is discussed. Information theory intersects physics (statistical mechanics), mathematics (probability theory), electrical engineering (communication theory) and computer science (algorithmic complexity). We describe these areas of intersection in detail.