arXiv:0711.3152v2 [cs.IT] 14 Apr 2008
Multipath Channels of Bounded Capacity
Tobias Koch Amos Lapidoth
ETH Zurich, Switzerland
Email: {tkoch, lapidoth}@isi.ee.ethz.ch
Abstract—The capacity of discrete-time, non-coherent, multipath fading channels is considered. It is shown that if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the signal-to-noise ratio.
I. INTRODUCTION
This paper studies non-coherent multipath (frequency-selective) fading channels. Such channels have been investigated extensively in the wideband regime where the signal-to-noise ratio (SNR) is typically small, and it was shown that in the limit as the available bandwidth tends to infinity the capacity of the fading channel is the same as the capacity of the additive white Gaussian noise (AWGN) channel of equal received power, see [1].¹

¹However, in contrast to the infinite bandwidth capacity of the AWGN channel, where the conditions on the capacity-achieving input distribution are not so stringent, the infinite bandwidth capacity of non-coherent fading channels can only be achieved by signaling schemes which are “peaky”; see also [2], [3], [4] and references therein.
When the SNR is large we encounter a different situation.
Indeed, it has been shown in [5] for non-coherent frequency-
flat fading channels that if the fading process is regular in
the sense that the present fading cannot be predicted perfectly
from its past, then at high SNR capacity only increases double-
logarithmically in the SNR. This is in stark contrast to the
logarithmic growth of the AWGN capacity. See [6], [7], [8],
and [9] for extensions to multi-antenna systems, and see [10]
and [11] for extensions to non-regular fading, i.e., when the
present fading can be predicted perfectly from its past. Thus,
communicating over non-coherent flat-fading channels at high
SNR is power inefficient.
In this paper, we show that communicating over non-
coherent multipath fading channels at high SNR is not merely
power inefficient, but even worse: if the delay spread is large
in the sense that the variances of the path gains do not decay
faster than geometrically, then capacity is bounded in the SNR.
For such channels, capacity does not tend to infinity as the
SNR tends to infinity. To state this result precisely we begin
with a mathematical description of the channel model.
A. Channel Model
Let $\mathbb{C}$ and $\mathbb{Z}^+$ denote the set of complex numbers and the set of positive integers, respectively. We consider a discrete-time multipath fading channel whose channel output $Y_k \in \mathbb{C}$ at time $k \in \mathbb{Z}^+$, corresponding to the channel inputs $(x_1, x_2, \ldots, x_k) \in \mathbb{C}^k$, is given by
$$Y_k = \sum_{\ell=1}^{k} H_k^{(k-\ell)} x_\ell + Z_k. \qquad (1)$$
Here, $H_k^{(\ell)}$ denotes the time-$k$ gain of the $\ell$-th path, and $\{Z_k\}$ is a sequence of independent and identically distributed (IID), zero-mean, variance-$\sigma^2$, circularly-symmetric, complex Gaussian random variables. We assume that for each path $\ell \in \mathbb{Z}_0^+$ (with $\mathbb{Z}_0^+$ denoting the set of non-negative integers) the stochastic process $\{H_k^{(\ell)},\ k \in \mathbb{Z}^+\}$ is a zero-mean stationary process. We denote its variance and its differential entropy rate by
$$\alpha_\ell \triangleq \mathsf{E}\Big[\big|H_k^{(\ell)}\big|^2\Big], \quad \ell \in \mathbb{Z}_0^+$$
and
$$h_\ell \triangleq \lim_{n\to\infty} \frac{1}{n}\, h\big(H_1^{(\ell)}, H_2^{(\ell)}, \ldots, H_n^{(\ell)}\big), \quad \ell \in \mathbb{Z}_0^+,$$
respectively. We further assume that
$$\sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell < \infty \quad \text{and} \quad \inf_{\ell \in \mathcal{L}} h_\ell > -\infty, \qquad (2)$$
where the set $\mathcal{L}$ is defined as $\mathcal{L} \triangleq \{\ell \in \mathbb{Z}_0^+ : \alpha_\ell > 0\}$. We finally assume that the processes $\{H_k^{(0)},\ k \in \mathbb{Z}^+\}$, $\{H_k^{(1)},\ k \in \mathbb{Z}^+\}, \ldots$ are independent (“uncorrelated scattering”), that they are jointly independent of $\{Z_k\}$, and that the joint law of $\{Z_k\}$, $\{H_k^{(0)},\ k \in \mathbb{Z}^+\}$, $\{H_k^{(1)},\ k \in \mathbb{Z}^+\}, \ldots$ does not depend on the input sequence $\{x_k\}$. We consider a non-coherent channel model where neither the transmitter nor the receiver is cognizant of the realization of $\{H_k^{(\ell)},\ k \in \mathbb{Z}^+\}$, $\ell \in \mathbb{Z}_0^+$, but both are aware of their statistics. We do not assume that the path gains are Gaussian.
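To make the channel model concrete, the following minimal sketch simulates the output (1). It is an illustration only and rests on assumptions that the paper does not make: the path-gain processes are taken to be IID circularly-symmetric Gaussian in $k$, and the variances decay geometrically, $\alpha_\ell = \rho^\ell$. The function and parameter names (`channel_output`, `rho`, `sigma2`) are ours.

```python
# Minimal simulation sketch of the channel (1).  Illustrative assumptions only:
# path gains are IID circularly-symmetric Gaussian across time, and their
# variances decay geometrically, alpha_l = rho**l (the paper assumes neither).
import numpy as np

rng = np.random.default_rng(0)

def crandn(*shape):
    """Circularly-symmetric complex Gaussian samples with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def channel_output(x, rho=0.5, sigma2=1.0):
    """Return Y_1, ..., Y_n according to (1) for the input vector x = (x_1, ..., x_n)."""
    n = len(x)
    alpha = rho ** np.arange(n)                    # alpha_l = rho**l, l = 0, ..., n-1
    y = np.zeros(n, dtype=complex)
    for k in range(1, n + 1):
        # Y_k = sum_{l=1}^{k} H_k^{(k-l)} x_l + Z_k; the gain of x_l has variance alpha_{k-l}
        h = np.sqrt(alpha[k - 1::-1]) * crandn(k)  # gains H_k^{(k-1)}, ..., H_k^{(0)}
        z = np.sqrt(sigma2) * crandn()
        y[k - 1] = np.dot(h, x[:k]) + z
    return y

x = crandn(16) * np.sqrt(10.0)                     # inputs of average power 10
print(channel_output(x)[:4])
```

Note how the number of paths contributing to $Y_k$ grows with $k$; this is the reason, noted below, why the channel (1) is generally not stationary.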
B. Channel Capacity
Let $A_m^n$ denote the sequence $A_m, A_{m+1}, \ldots, A_n$. We define the capacity as
$$C(\mathsf{SNR}) \triangleq \liminf_{n\to\infty} \frac{1}{n} \sup I\big(X_1^n; Y_1^n\big), \qquad (3)$$
where the maximization is over all joint distributions on $X_1, X_2, \ldots, X_n$ satisfying the power constraint
$$\frac{1}{n}\sum_{k=1}^{n} \mathsf{E}\big[|X_k|^2\big] \le \mathcal{P}, \qquad (4)$$
and where $\mathsf{SNR}$ is defined as
$$\mathsf{SNR} \triangleq \frac{\mathcal{P}}{\sigma^2}. \qquad (5)$$
By Fano’s inequality, no rate above $C(\mathsf{SNR})$ is achievable.² (See [13] for a definition of an achievable rate.)

²See [12] for conditions that guarantee that $C(\mathsf{SNR})$ is achievable.

Notice that the above channel (1) is generally not stationary³ since the number of terms (paths) influencing $Y_k$ depends on $k$. It is therefore prima facie not clear whether the liminf on the RHS of (3) is a limit.

³By a stationary channel we mean a channel where for any stationary input $\{X_k\}$ the pair $\{(X_k, Y_k)\}$ is jointly stationary.
C. Main Result
Theorem 1: Consider the above channel model. Then
$$\liminf_{\ell\to\infty} \frac{\alpha_{\ell+1}}{\alpha_\ell} > 0 \;\Longrightarrow\; \sup_{\mathsf{SNR}>0} C(\mathsf{SNR}) < \infty, \qquad (6)$$
where we define, for any $a > 0$, $a/0 \triangleq \infty$ and $0/0 \triangleq 0$.

For example, when $\{\alpha_\ell\}$ is a geometric sequence, i.e., $\alpha_\ell = \rho^\ell$, $\ell \in \mathbb{Z}_0^+$, for some $0 < \rho < 1$, then capacity is bounded.
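Note that the condition on the LHS of (6) indeed captures decay that is no faster than geometric: for $\alpha_\ell = \rho^{\ell^2}$ with $0 < \rho < 1$, for instance, the ratio $\alpha_{\ell+1}/\alpha_\ell = \rho^{2\ell+1}$ tends to zero, so the LHS of (6) fails and Theorem 1 makes no claim about such channels.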
Theorem 1 is proved in Section II, where it is even shown that (6) would continue to hold if we replaced the liminf in (3) by a limsup. Section III briefly addresses multipath channels of unbounded capacity.
II. PROOF OF THEOREM 1
The proof follows along the same lines as the proof of [14,
Thm. 1i)].
We first note that it follows from the left-hand side (LHS) of (6) that we can find an $\ell_0 \in \mathbb{Z}_0^+$ and a $0 < \rho < 1$ so that
$$\alpha_{\ell_0} > 0 \quad \text{and} \quad \frac{\alpha_{\ell+1}}{\alpha_\ell} \ge \rho, \quad \ell = \ell_0, \ell_0+1, \ldots \qquad (7)$$
We continue with the chain rule for mutual information
$$\frac{1}{n}\, I(X_1^n; Y_1^n) = \frac{1}{n}\sum_{k=1}^{\ell_0} I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big) + \frac{1}{n}\sum_{k=\ell_0+1}^{n} I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big). \qquad (8)$$
Each term in the first sum on the right-hand side (RHS) of (8) is upper bounded by⁴
$$\begin{aligned}
I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big) &\le h(Y_k) - h\big(Y_k \,\big|\, Y_1^{k-1}, X_1^n, H_k^{(0)}, H_k^{(1)}, \ldots, H_k^{(k-1)}\big) \\
&\le \log\!\left(\pi e\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,\mathsf{E}\big[|X_\ell|^2\big]\right)\right) - \log\big(\pi e\,\sigma^2\big) \\
&\le \log\!\left(1 + \sup_{\ell\in\mathbb{Z}_0^+}\alpha_\ell \cdot n \cdot \mathsf{SNR}\right),
\end{aligned} \qquad (9)$$
where the first inequality follows because conditioning cannot increase entropy; the second inequality follows from the entropy-maximizing property of Gaussian random variables [13, Thm. 9.6.5]; and the last inequality follows by upper bounding $\alpha_\ell \le \sup_{\ell\in\mathbb{Z}_0^+}\alpha_\ell$, $\ell = 0, 1, \ldots, k-1$, and from the power constraint (4).

⁴Throughout this paper, $\log(\cdot)$ denotes the natural logarithm function.
For $k = \ell_0+1, \ell_0+2, \ldots, n$, we upper bound $I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big)$ using the general upper bound for mutual information [5, Thm. 5.1]
$$I(X; Y) \le \int D\big(W(\cdot|x) \,\big\|\, R(\cdot)\big)\, \mathrm{d}Q(x), \qquad (10)$$
where $D(\cdot\|\cdot)$ denotes relative entropy, $W(\cdot|\cdot)$ is the channel law, $Q(\cdot)$ denotes the distribution on the channel input $X$, and $R(\cdot)$ is any distribution on the output alphabet.⁵ Thus, any choice of output distribution $R(\cdot)$ yields an upper bound on the mutual information.

⁵For channels with finite input and output alphabets this inequality follows by Topsøe’s identity [15]; see also [16, Thm. 3.4].
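As a toy numerical illustration of (10) (ours, not part of the paper's argument), consider a binary-input, binary-output memoryless channel with made-up transition probabilities: for every auxiliary output distribution $R(\cdot)$ the averaged relative entropy upper-bounds $I(X;Y)$, with equality when $R(\cdot)$ is the output distribution actually induced by $Q(\cdot)$ and the channel law.

```python
# Toy check of the duality bound (10) for a discrete memoryless channel.
# All numbers below are made up for illustration.
import numpy as np

W = np.array([[0.9, 0.1],      # channel law W(y|x); row x, column y
              [0.2, 0.8]])
Q = np.array([0.5, 0.5])       # input distribution Q(x)
PY = Q @ W                     # output distribution induced by Q and W

def kl(p, q):
    """Relative entropy D(p||q) in nats (the paper uses natural logarithms)."""
    return float(np.sum(p * np.log(p / q)))

def mutual_information(Q, W):
    return sum(Q[x] * kl(W[x], Q @ W) for x in range(len(Q)))

def duality_bound(Q, W, R):
    """Right-hand side of (10): the average of D(W(.|x) || R) under Q."""
    return sum(Q[x] * kl(W[x], R) for x in range(len(Q)))

R_mismatched = np.array([0.3, 0.7])
print(mutual_information(Q, W))            # I(X;Y)
print(duality_bound(Q, W, R_mismatched))   # an upper bound on I(X;Y)
print(duality_bound(Q, W, PY))             # tight: equals I(X;Y)
```

The proof below exploits exactly this freedom by choosing the Cauchy-type density (11) as the output distribution.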
For any given $Y_1^{k-1} = y_1^{k-1}$, we choose the output distribution $R(\cdot)$ to be of density
$$\frac{\sqrt{\beta}}{\pi^2}\, \frac{1}{|y_k|\,\big(1 + \beta |y_k|^2\big)}, \quad y_k \in \mathbb{C}, \qquad (11)$$
with $\beta = 1/\big(\tilde{\beta}\, |y_{k-\ell_0}|^2\big)$ and⁶
$$\tilde{\beta} = \min\!\left\{\frac{\rho^{\ell_0-1}\, \alpha_{\ell_0}}{\max_{0\le \ell' \le \ell_0} \alpha_{\ell'}},\; \alpha_{\ell_0},\; \rho^{\ell_0}\right\}. \qquad (12)$$
With this choice
$$0 < \tilde{\beta} < 1 \quad \text{and} \quad \tilde{\beta}\, \alpha_\ell \le \alpha_{\ell+\ell_0}, \quad \ell \in \mathbb{Z}_0^+. \qquad (13)$$

⁶When $y_{k-\ell_0} = 0$, then the density of the Cauchy distribution (11) is undefined. However, this event is of zero probability and has therefore no impact on the mutual information $I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big)$.
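As a quick sanity check (ours, not part of the argument), (11) is indeed a probability density on $\mathbb{C}$ for every $\beta > 0$: in polar coordinates the factor $|y_k|$ in the denominator cancels against the area element, and the remaining radial integral is elementary. Numerically:

```python
# Sanity check (not from the paper): the output density (11) integrates to one
# over the complex plane for any beta > 0.
import numpy as np
from scipy.integrate import quad

def total_mass(beta):
    # With y = r*exp(1j*phi), the area element is r dr dphi, so the radial
    # integrand is sqrt(beta)/pi**2 / (1 + beta*r**2); the angular integral
    # contributes a factor of 2*pi.
    radial, _ = quad(lambda r: np.sqrt(beta) / np.pi**2 / (1.0 + beta * r**2), 0.0, np.inf)
    return 2.0 * np.pi * radial

for beta in (0.1, 1.0, 10.0):
    print(f"beta = {beta}: integral = {total_mass(beta):.6f}")   # close to 1.0 in each case
```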
Using (11) in (10), and averaging over $Y_1^{k-1}$, we obtain
$$\begin{aligned}
I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big) &\le \frac{1}{2}\,\mathsf{E}\big[\log|Y_k|^2\big] + \frac{1}{2}\,\mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2\big)\Big] + \mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2 + |Y_k|^2\big)\Big] \\
&\quad - h\big(Y_k \,\big|\, X_1^n, Y_1^{k-1}\big) - \mathsf{E}\big[\log|Y_{k-\ell_0}|^2\big] + \log\frac{\pi^2}{\tilde{\beta}}.
\end{aligned} \qquad (14)$$
We bound the terms in (14) separately. We begin with
$$\mathsf{E}\big[\log|Y_k|^2\big] = \mathsf{E}\Big[\mathsf{E}\big[\log|Y_k|^2 \,\big|\, X_1^k\big]\Big] \le \mathsf{E}\Big[\log \mathsf{E}\big[|Y_k|^2 \,\big|\, X_1^k\big]\Big] = \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right], \qquad (15)$$
where the inequality follows from Jensen’s inequality. Likewise, we use Jensen’s inequality and (13) to upper bound
$$\mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2\big)\Big] \le \mathsf{E}\!\left[\log\!\left(\tilde{\beta}\sigma^2 + \tilde{\beta}\sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right] \le \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] \qquad (16)$$
and
$$\begin{aligned}
\mathsf{E}\Big[\log\big(\tilde{\beta}\,|Y_{k-\ell_0}|^2 + |Y_k|^2\big)\Big] &\le \mathsf{E}\!\left[\log\!\left(2\sigma^2 + 2\sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell}\,|X_\ell|^2 + \sum_{\ell=k-\ell_0+1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] \\
&\le \log 2 + \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right],
\end{aligned} \qquad (17)$$
where the second inequality follows because $\sum_{\ell=k-\ell_0+1}^{k}\alpha_{k-\ell}\,|X_\ell|^2 \le 2\sum_{\ell=k-\ell_0+1}^{k}\alpha_{k-\ell}\,|X_\ell|^2$.
Next, we derive a lower bound on $h\big(Y_k \,\big|\, X_1^n, Y_1^{k-1}\big)$. Let $H_{k'} = \big(H_{k'}^{(0)}, H_{k'}^{(1)}, \ldots, H_{k'}^{(k-1)}\big)$, $k' = 1, 2, \ldots, k-1$. We have
$$h\big(Y_k \,\big|\, X_1^n, Y_1^{k-1}\big) \ge h\big(Y_k \,\big|\, X_1^n, Y_1^{k-1}, H_1^{k-1}\big) = h\big(Y_k \,\big|\, X_1^n, H_1^{k-1}\big), \qquad (18)$$
where the inequality follows because conditioning cannot increase entropy, and where the equality follows because, conditional on $\big(X_1^n, H_1^{k-1}\big)$, $Y_k$ is independent of $Y_1^{k-1}$. Let $\mathcal{S}_k$ be defined as
$$\mathcal{S}_k \triangleq \big\{\ell = 1, 2, \ldots, k : \min\{|x_\ell|^2, \alpha_{k-\ell}\} > 0\big\}. \qquad (19)$$
Using the entropy power inequality [13, Thm. 16.6.3], and using that the processes $\{H_k^{(0)},\ k\in\mathbb{Z}^+\}$, $\{H_k^{(1)},\ k\in\mathbb{Z}^+\}, \ldots$ are independent and jointly independent of $X_1^n$, it can be shown that for any given $X_1^n = x_1^n$
$$h\!\left(\sum_{\ell=1}^{k} H_k^{(k-\ell)} X_\ell + Z_k \,\middle|\, X_1^n = x_1^n, H_1^{k-1}\right) \ge \log\!\left(\sum_{\ell\in\mathcal{S}_k} e^{\,h\left(H_k^{(k-\ell)} X_\ell \,\middle|\, X_\ell = x_\ell,\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\right)} + e^{\,h(Z_k)}\right). \qquad (20)$$
We lower bound the differential entropies on the RHS of (20) as follows. The differential entropies in the sum are lower bounded by
$$\begin{aligned}
h\Big(H_k^{(k-\ell)} X_\ell \,\Big|\, X_\ell = x_\ell,\, \big\{H_{k'}^{(k-\ell)}\big\}_{k'=1}^{k-1}\Big) &= \log\big(\alpha_{k-\ell}\,|x_\ell|^2\big) + h\Big(H_k^{(k-\ell)} \,\Big|\, \big\{H_{k'}^{(k-\ell)}\big\}_{k'=1}^{k-1}\Big) - \log\alpha_{k-\ell} \\
&\ge \log\big(\alpha_{k-\ell}\,|x_\ell|^2\big) + \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big), \quad \ell \in \mathcal{S}_k,
\end{aligned} \qquad (21)$$
where the equality follows from the behavior of differential entropy under scaling; and where the inequality follows by the stationarity of the process $\{H_k^{(k-\ell)},\ k \in \mathbb{Z}^+\}$, which implies that the differential entropy $h\big(H_k^{(k-\ell)} \,\big|\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\big)$ cannot be smaller than the differential entropy rate $h_{k-\ell}$ [13, Thms. 4.2.1 & 4.2.2], and by lower bounding $(h_{k-\ell} - \log\alpha_{k-\ell})$ by $\inf_{\ell\in\mathcal{L}}(h_\ell - \log\alpha_\ell)$ (which holds for each $\ell \in \mathcal{S}_k$ because $\mathcal{S}_k \subseteq \mathcal{L}$). The last differential entropy on the RHS of (20) is lower bounded by
$$h(Z_k) \ge \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big) + \log\sigma^2, \qquad (22)$$
which follows by noting that
$$\inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big) \le \log(\pi e). \qquad (23)$$
Applying (21) and (22) to (20), and averaging over $X_1^n$, then yields
$$h\big(Y_k \,\big|\, X_1^n, Y_1^{k-1}\big) \ge \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] + \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big). \qquad (24)$$
We continue with the analysis of (14) by lower bounding $\mathsf{E}\big[\log|Y_{k-\ell_0}|^2\big]$. To this end, we write the expectation as
$$\mathsf{E}\!\left[\mathsf{E}\!\left[\log\left|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}\right|^2 \,\middle|\, X_1^{k-\ell_0}\right]\right]$$
and lower bound the conditional expectation by
$$\begin{aligned}
&\mathsf{E}\!\left[\log\left|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}\right|^2 \,\middle|\, X_1^{k-\ell_0} = x_1^{k-\ell_0}\right] \\
&\quad = \log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2\right) - 2\,\mathsf{E}\!\left[\log\left|\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right|^{-1}\right] \\
&\quad \ge \log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2\right) + \log\delta^2 - 2\,\epsilon(\delta,\eta) - \frac{2}{\eta}\, h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right)
\end{aligned} \qquad (25)$$
for some $0 < \delta \le 1$ and $0 < \eta < 1$, where
$$h^-(X) \triangleq \int_{\{x\in\mathbb{C}:\, f_X(x) > 1\}} f_X(x)\, \log f_X(x)\, \mathrm{d}x, \qquad (26)$$
and where $\epsilon(\delta,\eta) > 0$ tends to zero as $\delta \downarrow 0$.
(We write $x_\ell$ in lower case to indicate that expectation and entropy are conditional on $X_1^{k-\ell_0} = x_1^{k-\ell_0}$.) Here, the inequality follows by writing the expectation in the form $\mathsf{E}\big[\log|A|^{-1}\cdot \mathrm{I}\{|A|>\delta\}\big] + \mathsf{E}\big[\log|A|^{-1}\cdot \mathrm{I}\{|A|\le\delta\}\big]$, where $A$ denotes the normalized sum appearing inside $h^-(\cdot)$ in (25) and $\mathrm{I}\{\cdot\}$ denotes the indicator function, and by then upper bounding the first expectation by $-\log\delta$ and the second expectation using [5, Lemma 6.7]. We continue by upper bounding
$$\begin{aligned}
h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right) &= h^+\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right) - h\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right) \\
&\le \frac{2}{e} + \log(\pi e) + \log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2\right) - h\!\left(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}\right),
\end{aligned} \qquad (27)$$
where $h^+(X)$ is defined as $h^+(X) \triangleq h(X) + h^-(X)$. Here, we applied [5, Lemma 6.4] to upper bound
$$h^+\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|x_\ell|^2}}\right) \le \frac{2}{e} + \log(\pi e). \qquad (28)$$
Averaging (27) over $X_1^{k-\ell_0}$ yields
$$\begin{aligned}
h^-\!\left(\frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2}} \,\middle|\, X_1^{k-\ell_0}\right) &\le \frac{2}{e} + \log(\pi e) + \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right] \\
&\qquad - h\!\left(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0} \,\middle|\, X_1^{k-\ell_0}\right) \\
&\le \frac{2}{e} + \log(\pi e) - \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big),
\end{aligned} \qquad (29)$$
where the second inequality follows by additionally conditioning the differential entropy on $Y_1^{k-\ell_0-1}$, and by then using the lower bound (24). A lower bound on $\mathsf{E}\big[\log|Y_{k-\ell_0}|^2\big]$ now follows by averaging (25) over $X_1^{k-\ell_0}$ and by applying (29)
$$\begin{aligned}
\mathsf{E}\big[\log|Y_{k-\ell_0}|^2\big] &\ge \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right] + \log\delta^2 - 2\,\epsilon(\delta,\eta) - \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right) \\
&\quad + \inf_{\ell\in\mathcal{L}}\frac{2}{\eta}\big(h_\ell - \log\alpha_\ell\big).
\end{aligned} \qquad (30)$$
Returning to the analysis of (14), we obtain from (30), (24), (17), (16), and (15)
$$\begin{aligned}
I\big(X_1^n; Y_k \,\big|\, Y_1^{k-1}\big) &\le \frac{1}{2}\,\mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] + \frac{1}{2}\,\mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] + \log 2 \\
&\quad + \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] - \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] - \inf_{\ell\in\mathcal{L}}\big(h_\ell - \log\alpha_\ell\big) \\
&\quad - \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right] - \log\delta^2 + 2\,\epsilon(\delta,\eta) + \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right) - \inf_{\ell\in\mathcal{L}}\frac{2}{\eta}\big(h_\ell - \log\alpha_\ell\big) + \log\frac{\pi^2}{\tilde{\beta}} \\
&\le K + \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] - \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right],
\end{aligned} \qquad (31)$$
with
$$K \triangleq -\inf_{\ell\in\mathcal{L}}\left(1 + \frac{2}{\eta}\right)\big(h_\ell - \log\alpha_\ell\big) + \log\frac{2\pi^2}{\tilde{\beta}\,\delta^2} + 2\,\epsilon(\delta,\eta) + \frac{2}{\eta}\left(\frac{2}{e} + \log(\pi e)\right). \qquad (32)$$
The second inequality in (31) follows because $\sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell}\,|X_\ell|^2 \le \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2$.
In order to show that the capacity is bounded in the SNR, we apply (31) and (9) to (8), and then use that for any sequences $\{a_k\}$ and $\{b_k\}$
$$\sum_{k=\ell_0+1}^{n}(a_k - b_k) = \sum_{k=n-\ell_0+1}^{n}\big(a_k - b_{k-n+2\ell_0}\big) + \sum_{k=\ell_0+1}^{n-\ell_0}\big(a_k - b_{k+\ell_0}\big). \qquad (33)$$
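Identity (33) is a pure re-indexing of the sum. A small numerical check (ours, with arbitrary made-up sequences) confirms it:

```python
# Numerical check of the index rearrangement (33) for arbitrary sequences.
import numpy as np

rng = np.random.default_rng(1)
n, l0 = 20, 3
a = rng.standard_normal(n + 1)    # entries a[1..n] are used; a[0] is a dummy
b = rng.standard_normal(n + 1)

lhs = sum(a[k] - b[k] for k in range(l0 + 1, n + 1))
rhs = (sum(a[k] - b[k - n + 2 * l0] for k in range(n - l0 + 1, n + 1))
       + sum(a[k] - b[k + l0] for k in range(l0 + 1, n - l0 + 1)))
print(np.isclose(lhs, rhs))       # True
```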
Defining
$$a_k \triangleq \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2\right)\right] \qquad (34)$$
and
$$b_k \triangleq \mathsf{E}\!\left[\log\!\left(\sigma^2 + \sum_{\ell=1}^{k-\ell_0}\alpha_{k-\ell_0-\ell}\,|X_\ell|^2\right)\right] \qquad (35)$$
we have for the first sum on the RHS of (33)
$$\begin{aligned}
\sum_{k=n-\ell_0+1}^{n}\big(a_k - b_{k-n+2\ell_0}\big) &= \sum_{k=n-\ell_0+1}^{n}\mathsf{E}\!\left[\log\!\left(\frac{\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k-n+\ell_0}\alpha_{k-n+\ell_0-\ell}\,|X_\ell|^2}\right)\right] \\
&\le \ell_0 \log\!\left(1 + \sup_{\ell\in\mathbb{Z}_0^+}\alpha_\ell \cdot n \cdot \mathsf{SNR}\right),
\end{aligned} \qquad (36)$$
which follows by lower bounding the denominator by $\sigma^2$, and by then using Jensen’s inequality along with the last inequality in (9).
For the second sum on the RHS of (33) we have
$$\sum_{k=\ell_0+1}^{n-\ell_0}\big(a_k - b_{k+\ell_0}\big) = \sum_{k=\ell_0+1}^{n-\ell_0}\mathsf{E}\!\left[\log\!\left(\frac{\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k}\alpha_{k-\ell}\,|X_\ell|^2}\right)\right] = 0. \qquad (37)$$
Thus, applying (31)–(37) and (9) to (8), we obtain
$$\frac{1}{n}\, I(X_1^n; Y_1^n) \le \frac{\ell_0}{n}\log\!\left(\sigma^2 + \sup_{\ell\in\mathbb{Z}_0^+}\alpha_\ell \cdot n \cdot \mathsf{SNR}\right) + \frac{\ell_0}{n}\log\!\left(\sigma^2 + \sup_{\ell\in\mathbb{Z}_0^+}\alpha_\ell \cdot n \cdot \mathsf{SNR}\right) + \frac{n - 2\ell_0}{n}\, K, \qquad (38)$$
which tends to $K < \infty$ as $n$ tends to infinity. This proves Theorem 1.
III. MULTIPATH CHANNELS OF UNBOUNDED CAPACITY
We have seen in Theorem 1 that if the variances of the path gains $\{\alpha_\ell\}$ do not decay faster than geometrically, then capacity is bounded in the SNR. In this section, we demonstrate that this need not be the case when the variances of the path gains decay faster than geometrically. The following theorem presents a sufficient condition for the capacity $C(\mathsf{SNR})$ to be unbounded in the SNR.
Theorem 2: Consider the above channel model. Then
$$\lim_{\ell\to\infty}\frac{1}{\ell}\log\log\frac{1}{\alpha_\ell} = \infty \;\Longrightarrow\; \sup_{\mathsf{SNR}>0} C(\mathsf{SNR}) = \infty. \qquad (39)$$
Proof: Omitted.
Note: We do not claim that $C(\mathsf{SNR})$ is achievable. However, it can be shown that when, for example, the processes $\{H_k^{(\ell)},\ k\in\mathbb{Z}^+\}$, $\ell\in\mathbb{Z}_0^+$, are IID Gaussian, then the maximum achievable rate is unbounded in the SNR, i.e., any rate is achievable for sufficiently large SNR.
Certainly, the condition on the LHS of (39) is satisfied when the channel has finite memory in the sense that for some finite $L \in \mathbb{Z}_0^+$
$$\alpha_\ell = 0, \quad \ell = L+1, L+2, \ldots$$
In this case, (1) becomes
$$Y_k = \begin{cases} \displaystyle\sum_{\ell=0}^{k-1} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = 1, 2, \ldots, L \\[2ex] \displaystyle\sum_{\ell=0}^{L} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = L+1, L+2, \ldots \end{cases} \qquad (40)$$
This channel (40) was studied for general (but finite) $L$ in [17], where it was shown that its capacity satisfies
$$\lim_{\mathsf{SNR}\to\infty}\frac{C(\mathsf{SNR})}{\log\log\mathsf{SNR}} = 1. \qquad (41)$$
Thus, for finite $L$, the capacity pre-loglog (41) is not affected by the multipath behavior. This is perhaps surprising, as Theorem 1 implies that if $L = \infty$, and if the variances of the path gains do not decay faster than geometrically, then the pre-loglog is zero.
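For instance, the LHS of (39) is satisfied by the infinite-memory example $\alpha_\ell = e^{-e^{\ell^2}}$, for which $\frac{1}{\ell}\log\log\frac{1}{\alpha_\ell} = \ell \to \infty$, whereas geometric decay $\alpha_\ell = \rho^\ell$ gives $\frac{1}{\ell}\log\log\frac{1}{\alpha_\ell} \to 0$ and, by Theorem 1, bounded capacity.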
ACKNOWLEDGMENT
Discussions with Helmut Bölcskei and Giuseppe Durisi are
gratefully acknowledged.
REFERENCES
[1] R. G. Gallager, Information Theory and Reliable Communication. John
Wiley & Sons, 1968.
[2] V. Sethuraman, L. Wang, B. Hajek, and A. Lapidoth, “Low SNR capacity
of fading channels - MIMO and delay spread,” in Proc. IEEE Int.
Symposium on Inf. Theory, Nice, France, June 24–29, 2007.
[3] M. Médard and R. G. Gallager, “Bandwidth scaling for fading multipath
channels,” IEEE Trans. Inform. Theory, vol. 48, no. 4, pp. 840–852, Apr.
2002.
[4] İ. E. Telatar and D. N. C. Tse, “Capacity and mutual information of wideband multipath fading channels,” IEEE Trans. Inform. Theory, vol. 46, no. 4, pp. 1384–1400, July 2000.
[5] A. Lapidoth and S. M. Moser, “Capacity bounds via duality with
applications to multiple-antenna systems on flat fading channels,” IEEE
Trans. Inform. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
[6] T. Koch and A. Lapidoth, “Degrees of freedom in non-coherent station-
ary MIMO fading channels,” in Proc. Winter School Cod. and Inform.
Theory, Bratislava, Slovakia, Feb. 20–25, 2005.
[7] T. Koch and A. Lapidoth, “The fading number and degrees of freedom in non-coherent MIMO fading channels: a peace pipe,” in Proc. IEEE Int. Symposium on Inf. Theory, Adelaide, Australia, Sept. 4–9, 2005.
[8] A. Lapidoth and S. M. Moser, “The fading number of single-input
multiple-output fading channels with memory,” IEEE Trans. Inform.
Theory, vol. 52, no. 2, pp. 437–453, Feb. 2006.
[9] S. M. Moser, “The fading number of multiple-input multiple-output
fading channels with memory,” in Proc. IEEE Int. Symposium on Inf.
Theory, Nice, France, June 24–29, 2007.
[10] A. Lapidoth, “On the asymptotic capacity of stationary Gaussian fading
channels,” IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 437–446,
Feb. 2005.
[11] T. Koch and A. Lapidoth, “Gaussian fading is the worst fading,” in Proc.
IEEE Int. Symposium on Inf. Theory, Seattle, Washington, USA, July
9–14, 2006.
[12] S. Verdú and T. S. Han, “A general formula for channel capacity,” IEEE
Trans. Inform. Theory, vol. 40, no. 4, pp. 1147–1157, July 1994.
[13] T. M. Cover and J. A. Thomas, Elements of Information Theory. John
Wiley & Sons, 1991.
[14] T. Koch, A. Lapidoth, and P. P. Sotiriadis, “A hot channel,” in Proc.
Inform. Theory Workshop (ITW), Lake Tahoe, CA, USA, Sept. 2–6 2007.
[15] F. Topsøe, “An information theoretical identity and a problem involving
capacity,” Studia Sci. Math. Hungar., vol. 2, pp. 291–292, 1967.
[16] I. Csiszár and J. Körner, Information Theory: Coding Theorems for
Discrete Memoryless Systems. Academic Press, 1981.
[17] T. Koch and A. Lapidoth, “On multipath fading channels at high SNR,”
2008, subm. to IEEE Int. Symposium on Inf. Theory, Toronto, Canada.
[Online]. Available: http://arxiv.org/abs/0801.0672