
Characterization of Mutual Information of Spatially Correlated MIMO Channels with Keyhole

Andreas Müller and Joachim Speidel
Institute of Telecommunications, University of Stuttgart, Germany
E-mail: {andreas.mueller, joachim.speidel}@inue.uni-stuttgart.de

Abstract—We characterize the statistical properties of the mutual information between the transmitter and the receiver of a multiple-input multiple-output (MIMO) communication system in case of spatially correlated Rayleigh-fading keyhole channels with arbitrarily correlated input signals and potentially colored additive Gaussian noise. The probability density function (pdf) as well as the cumulative distribution function (cdf) of the mutual information are derived in a concise mathematical form, and we determine exact analytical closed-form expressions for its mean value as well as the corresponding high-SNR asymptotics. Numerical results illustrate the impact of several different parameters on the mutual information statistics and are shown to be in perfect agreement with results obtained from Monte-Carlo simulations, thus verifying the accuracy of our theoretical analysis.

I. INTRODUCTION

Multiple-input multiple-output (MIMO) systems are known to offer a wide variety of benefits compared to conventional single-input single-output (SISO) systems, such as the potential to realize considerably higher data rates or to significantly improve the reliability of a wireless link. In his seminal paper, Telatar showed that in case of independent and identically distributed Rayleigh fading, the ergodic capacity of a MIMO channel grows linearly with the minimum of the numbers of transmit and receive antennas [1]. Later, his results were extended by Shin and Lee [2] as well as Kiessling and Speidel [3], [4] to scenarios with spatial correlation at the transmitting and/or the receiving antenna array, in which case the capacity generally might be drastically reduced. However, spatial correlation is not the only detrimental factor that may degrade the capacity of a MIMO channel in practice. In fact, it has been predicted theoretically in [5] and [6] and verified by several different measurement campaigns (see for example [7]) that there might be scenarios in which the MIMO channel matrix has only low rank, even though the transmit and receive signals are only weakly correlated or even totally uncorrelated. This phenomenon is usually referred to as the keyhole effect and might occur due to various different propagation effects, such as diffraction or waveguiding [6].

The capacity of uncorrelated keyhole channels with additive white Gaussian noise (AWGN) has already been investigated in [2], whereas a lower bound on the ergodic capacity of correlated keyhole channels has been presented in [8]. However, in both cases the authors confined themselves to the ergodic channel capacity only, while a comprehensive analysis of the statistical properties of the mutual information has, to the best of our knowledge, not been addressed in the literature before. In this paper, we perform such an analysis by deriving concise mathematical closed-form expressions for the probability density function (pdf) and cumulative distribution function (cdf) of the mutual information of arbitrarily correlated Rayleigh-fading MIMO channels with keyhole, considering the most general case with arbitrary input covariance matrices and possibly colored additive Gaussian noise. Furthermore, we calculate the corresponding mean value, based on which the exact ergodic channel capacity can easily be determined.

The remainder of this paper is structured as follows: In Section II, we introduce our system and channel model. The actual statistical analysis of the mutual information is done in Section III, followed by some numerical results in Section IV. Finally, some concluding remarks are given in Section V.

Notation: Vectors and matrices are denoted by bold lower and upper case letters, respectively. $\mathcal{CN}_m(\mathbf{x}, \mathbf{R}_{xx})$ represents an $m$-variate complex Gaussian random vector with mean $\mathbf{x}$ and covariance matrix $\mathbf{R}_{xx}$, $\mathbf{I}_n$ is the identity matrix of size $n \times n$, the superscript $(\cdot)^H$ stands for the conjugate transpose of a matrix or vector, and $\mathrm{E}[\cdot]$ represents the expectation operator. $\mathbf{A}^{1/2}$ and $\mathbf{A}^{H/2}$ denote matrix roots of a matrix $\mathbf{A}$ such that $\mathbf{A}^{1/2}\mathbf{A}^{H/2} = \mathbf{A}$, $X \sim$ means "random variable $X$ is distributed as", while $X \cong Y$ means "random variable $X$ is statistically equivalent to random variable $Y$". Finally, $\mathrm{tr}(\cdot)$ and $\det(\cdot)$ denote the trace and determinant of a matrix, respectively.

II. SYSTEM AND CHANNEL MODEL

We consider a frequency-flat Rayleigh-fading MIMO channel with $N_{\mathrm{TX}}$ transmit antennas and $N_{\mathrm{RX}}$ receive antennas. In the discrete-time equivalent baseband domain, the system can be described during one channel use by

$$\mathbf{r} = \sqrt{\frac{\bar\gamma}{N_{\mathrm{TX}}}}\,\mathbf{H}\mathbf{s} + \mathbf{n}, \qquad (1)$$

where $\mathbf{H} \in \mathbb{C}^{N_{\mathrm{RX}} \times N_{\mathrm{TX}}}$ denotes the channel matrix, $\mathbf{s} \in \mathbb{C}^{N_{\mathrm{TX}}}$ the transmit signal, $\mathbf{r} \in \mathbb{C}^{N_{\mathrm{RX}}}$ the received signal, $\mathbf{n} \in \mathbb{C}^{N_{\mathrm{RX}}}$ an additive Gaussian noise vector, and $\bar\gamma$ the average signal-to-noise ratio (SNR) per receive antenna. The transmit signals $\mathbf{s}$ are assumed to have zero mean and arbitrary autocorrelation matrix $\mathbf{R}_{ss} = \mathrm{E}[\mathbf{s}\mathbf{s}^H]$, where we request without loss of generality that this matrix is normalized such that $\mathrm{tr}(\mathbf{R}_{ss}) = N_{\mathrm{TX}}$. Furthermore, we assume that the additive noise has zero mean as well and that its autocorrelation matrix is given by $\mathbf{R}_{nn} = \mathrm{E}[\mathbf{n}\mathbf{n}^H]$, which we request to have full rank and to be normalized such that $\mathrm{tr}(\mathbf{R}_{nn}) = N_{\mathrm{RX}}$. Please note that the

1-4244-0353-7/07/$25.00 ©2007 IEEE

This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the ICC 2007 proceedings.


requested normalizations of $\mathbf{R}_{ss}$ and $\mathbf{R}_{nn}$ are always feasible by properly adjusting the average SNR $\bar\gamma$, which we define as $\bar\gamma = P_T/N_0$, where $P_T$ and $N_0$ denote the total transmit power and the average noise power per receive antenna, respectively.

Furthermore, we assume a perfect keyhole channel, i.e., the only way for the radio waves to propagate from the transmitter to the receiver is to pass through a keyhole (e.g., a hallway acting as a single-mode waveguide), which ideally re-radiates all the captured energy. In this case, the channel can be considered as a concatenation of a multiple-input single-output (MISO) channel from the individual transmit antennas to the keyhole and a (statistically independent) single-input multiple-output (SIMO) channel from the keyhole to the various receive antennas [5], i.e., the channel matrix $\mathbf{H}$ can be modeled as

$$\mathbf{H} = \mathbf{x}\mathbf{y}^H, \qquad (2)$$

where the vectors $\mathbf{x} \in \mathbb{C}^{N_{\mathrm{RX}}}$ and $\mathbf{y} \in \mathbb{C}^{N_{\mathrm{TX}}}$ denote the aforementioned SIMO and MISO channels, respectively. Considering a spatially correlated Rayleigh-fading scenario, we have $\mathbf{x} \sim \mathcal{CN}_{N_{\mathrm{RX}}}(\mathbf{0}, \mathbf{R}_{\mathrm{RX}})$ and $\mathbf{y} \sim \mathcal{CN}_{N_{\mathrm{TX}}}(\mathbf{0}, \mathbf{R}_{\mathrm{TX}})$, where $\mathbf{R}_{\mathrm{TX}}$ and $\mathbf{R}_{\mathrm{RX}}$ denote the correlation matrices at the transmitter and the receiver, respectively. In the following, the channel is always assumed to be perfectly known by the receiver, while it is unknown to the transmitter.
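The rank-one model (2) with correlated Gaussian factors is straightforward to simulate. The following Python sketch (an illustration with example correlation matrices, not taken from the paper) draws one realization of $\mathbf{H} = \mathbf{x}\mathbf{y}^H$ by coloring i.i.d. complex Gaussian vectors:

```python
import numpy as np

def keyhole_channel(R_rx, R_tx, rng):
    """Draw one realization H = x y^H of the keyhole channel (2),
    with x ~ CN(0, R_rx) and y ~ CN(0, R_tx)."""
    n_rx, n_tx = R_rx.shape[0], R_tx.shape[0]
    # Circularly symmetric complex Gaussian vectors with identity covariance
    x = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)
    y = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
    # Color them with matrix roots of the correlation matrices (here via Cholesky)
    x = np.linalg.cholesky(R_rx) @ x
    y = np.linalg.cholesky(R_tx) @ y
    return np.outer(x, y.conj())              # rank-one N_RX x N_TX matrix

rng = np.random.default_rng(0)
R_rx = np.eye(2)                              # example: uncorrelated receive side
R_tx = np.array([[1.0, 0.7], [0.7, 1.0]])     # example: correlated transmit side
H = keyhole_channel(R_rx, R_tx, rng)
print(np.linalg.matrix_rank(H))               # keyhole: always rank one
```

The rank-one structure is what distinguishes the keyhole channel from a rich-scattering MIMO channel of the same antenna configuration.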

III. MUTUAL INFORMATION ANALYSIS

A. Probability Density Function

From [1], [4], it is well known that for a fixed realization of the channel matrix $\mathbf{H}$, the mutual information (in bits per channel use) between the input signal vector $\mathbf{s}$ and the output signal vector $\mathbf{r}$ of the considered MIMO system is given by

$$I(\mathbf{s},\mathbf{r}) = \log_2 \det\left(\mathbf{I}_{N_{\mathrm{TX}}} + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathbf{R}_{ss}\mathbf{H}^H\mathbf{R}_{nn}^{-1}\mathbf{H}\right). \qquad (3)$$

Exploiting the special structure of $\mathbf{H}$ in case of keyhole channels as introduced in (2), we can reformulate (3) as

$$I(\mathbf{s},\mathbf{r}) = \log_2 \det\left(\mathbf{I}_{N_{\mathrm{TX}}} + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathbf{R}_{ss}\mathbf{y}\mathbf{x}^H\mathbf{R}_{nn}^{-1}\mathbf{x}\mathbf{y}^H\right). \qquad (4)$$

Since $\mathbf{x}$ and $\mathbf{y}$ are complex Gaussian random vectors with zero mean and covariance matrices $\mathbf{R}_{\mathrm{RX}}$ and $\mathbf{R}_{\mathrm{TX}}$, respectively, it can easily be shown that

$$I(\mathbf{s},\mathbf{r}) \cong \log_2 \det\left(\mathbf{I}_{N_{\mathrm{TX}}} + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\tilde{\mathbf{y}}\tilde{\mathbf{x}}^H\tilde{\mathbf{x}}\tilde{\mathbf{y}}^H\right), \qquad (5)$$

with random vectors $\tilde{\mathbf{x}} \sim \mathcal{CN}_{N_{\mathrm{RX}}}(\mathbf{0}, \mathbf{R}_{nn}^{-H/2}\mathbf{R}_{\mathrm{RX}}\mathbf{R}_{nn}^{-1/2})$ and $\tilde{\mathbf{y}} \sim \mathcal{CN}_{N_{\mathrm{TX}}}(\mathbf{0}, \mathbf{R}_{ss}^{H/2}\mathbf{R}_{\mathrm{TX}}\mathbf{R}_{ss}^{1/2})$. Making use of the determinant identity $\det(\mathbf{I} + \mathbf{A}\mathbf{B}) = \det(\mathbf{I} + \mathbf{B}\mathbf{A})$ for arbitrary matrices $\mathbf{A}$ and $\mathbf{B}$ such that $\mathbf{A}\mathbf{B}$ is square [9], (5) can be written as

$$I(\mathbf{s},\mathbf{r}) \cong \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2\right), \qquad (6)$$

where $\|\cdot\|^2$ denotes the squared Euclidean norm of a vector. Hence, the distribution of $I(\mathbf{s},\mathbf{r})$ is obviously directly connected to the distribution of the random variable $\Phi = \|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2$, which is provided by the following lemma:
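The step from the determinant form to the scalar log rests on the identity $\det(\mathbf{I}+\mathbf{AB}) = \det(\mathbf{I}+\mathbf{BA})$, which collapses the rank-one product to a scalar. A quick numerical check of this collapse for one channel realization (Python sketch; dimensions, SNR, and the identity covariance matrices are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, snr = 3, 4, 10.0

# Example covariance matrices (white signals and noise for simplicity)
R_ss = np.eye(n_tx)
R_nn = np.eye(n_rx)

x = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)
y = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
H = np.outer(x, y.conj())                     # keyhole channel (2)

c = snr / n_tx
# Determinant form as in (3)
A = np.eye(n_tx) + c * R_ss @ H.conj().T @ np.linalg.inv(R_nn) @ H
I_det = np.log2(np.linalg.det(A).real)
# Scalar form: det(I + AB) = det(I + BA) collapses the rank-one product
alpha = (x.conj() @ np.linalg.inv(R_nn) @ x).real
beta = (y.conj() @ R_ss @ y).real
I_scalar = np.log2(1.0 + c * alpha * beta)
print(abs(I_det - I_scalar))                  # identical up to rounding
```

For white signals and noise the whitened vectors coincide with $\mathbf{x}$ and $\mathbf{y}$, so the two forms agree per realization, not only in distribution.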

Lemma 1: The probability density function of the random variable $\Phi = \|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2$ in (6) is given by

$$p_\Phi(\phi) = \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{2\,\xi_{i,j}\,\zeta_{k,l}}{\Gamma(j)\Gamma(l)\,(\lambda_i\sigma_k)^{\frac{j+l}{2}}}\; \phi^{\frac{j+l}{2}-1}\, K_{j-l}\!\left(2\sqrt{\frac{\phi}{\lambda_i\sigma_k}}\right), \quad \phi \ge 0, \qquad (7)$$

where the coefficients $\lambda_i$ denote the $M$ distinct $m_i$-fold non-zero eigenvalues of the matrix $\mathbf{R}_{nn}^{-1}\mathbf{R}_{\mathrm{RX}}$ and the coefficients $\sigma_k$ are the $N$ distinct $n_k$-fold non-zero eigenvalues of the matrix $\mathbf{R}_{ss}\mathbf{R}_{\mathrm{TX}}$. Furthermore, we have

$$\xi_{i,j} = \frac{(-\lambda_i)^{j-m_i}}{(m_i-j)!}\,\frac{\partial^{m_i-j}}{\partial s^{m_i-j}} \prod_{\substack{v=1 \\ v \ne i}}^{M} \frac{1}{(1-s\lambda_v)^{m_v}}\Bigg|_{s=\frac{1}{\lambda_i}} \qquad (8)$$

$$\zeta_{k,l} = \frac{(-\sigma_k)^{l-n_k}}{(n_k-l)!}\,\frac{\partial^{n_k-l}}{\partial s^{n_k-l}} \prod_{\substack{v=1 \\ v \ne k}}^{N} \frac{1}{(1-s\sigma_v)^{n_v}}\Bigg|_{s=\frac{1}{\sigma_k}} \qquad (9)$$

and $\Gamma(\cdot)$ as well as $K_\nu(\cdot)$ denote the gamma function and the $\nu$-th order modified Bessel function of the second kind, respectively [10].

Proof: For proving Lemma 1, we first of all consider the distribution of $X = \|\tilde{\mathbf{x}}\|^2$. Pursuing a similar approach as presented in [11], it can easily be shown that the moment-generating function (mgf) of $X$ is given by

$$M_X(s) = \frac{1}{\det\left(\mathbf{I}_{N_{\mathrm{RX}}} - s\,\mathbf{R}_{nn}^{-H/2}\mathbf{R}_{\mathrm{RX}}\mathbf{R}_{nn}^{-1/2}\right)}. \qquad (10)$$

In order to simplify this expression, we note that the determinant of a matrix is always given by the product of its eigenvalues and that the non-zero eigenvalues of $\mathbf{A}\mathbf{B}$ and their multiplicities are identical to those of $\mathbf{B}\mathbf{A}$ (provided that $\mathbf{A}$ and $\mathbf{B}$ have appropriate dimensions such that $\mathbf{A}\mathbf{B}$ is square) [9]. Assuming without loss of generality that the matrix $\mathbf{R}_{nn}^{-1}\mathbf{R}_{\mathrm{RX}}$ has $M \le N_{\mathrm{RX}}$ distinct non-zero eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_M$ with multiplicities $m_1, m_2, \ldots, m_M$, it is then straightforward to rewrite the mgf of $X$ according to (10) as

$$M_X(s) = \prod_{i=1}^{M} \frac{1}{(1-s\lambda_i)^{m_i}}. \qquad (11)$$

Expanding this term into partial fractions in order to get rid of the product, we obtain

$$M_X(s) = \sum_{i=1}^{M}\sum_{j=1}^{m_i} \frac{\xi_{i,j}}{(1-s\lambda_i)^j}, \qquad (12)$$

where the expansion coefficients $\xi_{i,j}$ can be calculated analytically by means of (8). Based on this result, the desired pdf of $X$ can then be obtained by performing the inverse Laplace transform of $M_X(-s)$, yielding

$$p_X(x) = \sum_{i=1}^{M}\sum_{j=1}^{m_i} \frac{\xi_{i,j}\,x^{j-1}}{\Gamma(j)\,\lambda_i^j}\,e^{-\frac{x}{\lambda_i}}, \quad x \ge 0. \qquad (13)$$
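For the common case of all eigenvalues being distinct (every $m_i = 1$), the derivative in (8) disappears and $\xi_{i,1}$ is simply the product $\prod_{v \ne i}(1-\lambda_v/\lambda_i)^{-1}$. A small Python sketch (the eigenvalues are arbitrary example values) computing these weights and checking the partial-fraction expansion (12) against the product form (11):

```python
import numpy as np

lams = np.array([0.5, 1.0, 2.5])   # example distinct eigenvalues (all m_i = 1)

# (8) with m_i = 1: no derivative needed, evaluate the product at s = 1/lam_i
xi = np.array([np.prod([1.0 / (1.0 - lv / li) for lv in lams if lv != li])
               for li in lams])

# Partial-fraction form (12) must reproduce the product form (11) of the mgf
s = -0.3
mgf_product = np.prod(1.0 / (1.0 - s * lams))
mgf_partial = np.sum(xi / (1.0 - s * lams))
print(mgf_product, mgf_partial)    # identical up to rounding

# The pdf (13) is then a mixture of exponentials; its weights sum to M_X(0) = 1
print(xi.sum())
```

Note that the weights $\xi_{i,j}$ can be negative, so (13) is a signed mixture rather than a probabilistic one; only the overall expression is a valid pdf.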


Likewise, the pdf of $Y = \|\tilde{\mathbf{y}}\|^2$ can be shown to be given as

$$p_Y(y) = \sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\zeta_{k,l}\,y^{l-1}}{\Gamma(l)\,\sigma_k^l}\,e^{-\frac{y}{\sigma_k}}, \quad y \ge 0, \qquad (14)$$

where we assumed without loss of generality that the matrix $\mathbf{R}_{ss}\mathbf{R}_{\mathrm{TX}}$ has $N$ distinct non-zero eigenvalues $\sigma_1, \sigma_2, \ldots, \sigma_N$ with multiplicities $n_1, n_2, \ldots, n_N$ and where the coefficients $\zeta_{k,l}$ can be calculated based on (9). Exploiting that $X$ and $Y$ are statistically independent of each other, the desired pdf of $\Phi = \|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2 = XY$ can then be determined as

$$p_\Phi(\phi) = \int_0^\infty p_X(x)\,p_Y\!\left(\frac{\phi}{x}\right)\frac{1}{|x|}\,dx \qquad (15)$$
$$= \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\xi_{i,j}\,\zeta_{k,l}\,\phi^{l-1}}{\Gamma(j)\Gamma(l)\,\lambda_i^j\sigma_k^l} \int_0^\infty x^{j-l-1}\, e^{-\left(\frac{x}{\lambda_i} + \frac{\phi}{x\sigma_k}\right)}\, dx, \quad \phi \ge 0. \qquad (16)$$

Making use of [10] eq. (3.471,9), this integral can be solved in closed form, yielding the result provided in (7), which eventually concludes the proof.

Based on this result, we can now easily determine the desired pdf of $I(\mathbf{s},\mathbf{r})$, which is stated by the following theorem:

Theorem 1: The probability density function of the mutual information $I(\mathbf{s},\mathbf{r})$ between the input signal vector $\mathbf{s}$ and the output signal vector $\mathbf{r}$ is given by

$$p_I(R) = \frac{2^{R+1}\ln(2)}{2^R-1} \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\xi_{i,j}\,\zeta_{k,l}}{\Gamma(j)\Gamma(l)} \left(\frac{\left(2^R-1\right)N_{\mathrm{TX}}}{\lambda_i\sigma_k\,\bar\gamma}\right)^{\frac{j+l}{2}} K_{j-l}\!\left(2\sqrt{\frac{\left(2^R-1\right)N_{\mathrm{TX}}}{\lambda_i\sigma_k\,\bar\gamma}}\right). \qquad (17)$$

Proof: Performing the simple transformation of random variables $I = \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\Phi\right)$, we obtain

$$p_I(R) = \ln(2)\; 2^R\,\frac{N_{\mathrm{TX}}}{\bar\gamma}\; p_\Phi\!\left(\frac{\left(2^R-1\right)N_{\mathrm{TX}}}{\bar\gamma}\right), \qquad (18)$$

which exactly corresponds to (17) after substituting the pdf of $\Phi$ with the expression given in Lemma 1.

B. Cumulative Distribution Function

The cdf of the mutual information generally corresponds

to the probability that a certain information rate R cannot be

supported by an instantaneous realization of the channel matrix

H. Often, this probability is also referred to as the information

outage probability of the channel and our main result in this

regard is stated by the following theorem:

Theorem 2: The cumulative distribution function of the mutual information $I(\mathbf{s},\mathbf{r})$ is given by

$$F_I(R) = 1 - \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k}\sum_{v=0}^{l-1} \frac{2\,\xi_{i,j}\,\zeta_{k,l}}{\Gamma(j)\Gamma(v+1)} \left(\frac{\Lambda(R)}{\lambda_i\sigma_k}\right)^{\frac{j+v}{2}} K_{j-v}\!\left(2\sqrt{\frac{\Lambda(R)}{\lambda_i\sigma_k}}\right), \qquad (19)$$

where we have introduced for brevity the short-hand notation

$$\Lambda(R) = \frac{\left(2^R-1\right)N_{\mathrm{TX}}}{\bar\gamma}, \qquad (20)$$

and with $K_\nu(\cdot)$ as the $\nu$-th order modified Bessel function of the second kind again.

Proof: Generally, the cdf $F_I(R)$ of $I(\mathbf{s},\mathbf{r})$ is defined as $F_I(R) = \mathrm{Prob}[I(\mathbf{s},\mathbf{r}) \le R]$. It can easily be shown that this is equivalent to $F_I(R) = \mathrm{Prob}[\Phi \le \Lambda(R)]$, with $\Lambda(R)$ according to (20). Based on (16), we consequently obtain

$$F_I(R) = \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\xi_{i,j}\,\zeta_{k,l}}{\Gamma(j)\Gamma(l)\,\lambda_i^j\sigma_k^l}\cdot\mathcal{I}, \qquad (21)$$

with the short-hand notation

$$\mathcal{I} = \int_0^{\Lambda(R)}\!\!\int_0^\infty \phi^{l-1}\, x^{j-l-1}\, e^{-\left(\frac{x}{\lambda_i} + \frac{\phi}{x\sigma_k}\right)}\, dx\,d\phi, \qquad (22)$$

which has been introduced for brevity. Changing the order of integration and capitalizing on [10] eq. (3.381,1), we get

$$\mathcal{I} = \int_0^\infty \sigma_k^l\, x^{j-1}\, e^{-\frac{x}{\lambda_i}}\,\gamma\!\left(l, \frac{\Lambda(R)}{x\sigma_k}\right) dx, \qquad (23)$$

with $\gamma(a,x) = \int_0^x t^{a-1}e^{-t}\,dt$ as the lower incomplete gamma function [10]. Making use of the well-known relationship that $\gamma(n,x)$ is given for positive integers $n$ as

$$\gamma(n,x) = \Gamma(n)\left[1 - e^{-x}\sum_{k=0}^{n-1}\frac{x^k}{\Gamma(k+1)}\right] \qquad (24)$$

and exploiting the integral relationships provided in [10] eqs. (3.381,4) and (3.471,9), (23) can be solved in closed form, whereby we finally obtain the result given in Theorem 2.
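As a quick sanity check on Theorem 2, consider the simplest case $N_{\mathrm{TX}} = N_{\mathrm{RX}} = 1$ with white signals and noise: then $M = N = 1$, $m_1 = n_1 = 1$, $\lambda_1 = \sigma_1 = 1$, $\xi_{1,1} = \zeta_{1,1} = 1$, and (19) collapses to $F_I(R) = 1 - 2\sqrt{\Lambda}\,K_1(2\sqrt{\Lambda})$, the cdf of a product of two unit-mean exponentials. The Python sketch below (seed, SNR, rate, and sample size are arbitrary choices) compares this against a Monte-Carlo estimate:

```python
import numpy as np
from scipy.special import kv   # modified Bessel function K_nu of the second kind

def outage_1x1(rate, snr):
    """Theorem 2 specialized to N_TX = N_RX = 1, white signals and noise:
    F_I(R) = 1 - 2*sqrt(Lam)*K_1(2*sqrt(Lam)),  Lam = (2^R - 1)/snr."""
    lam = (2.0 ** rate - 1.0) / snr
    return 1.0 - 2.0 * np.sqrt(lam) * kv(1, 2.0 * np.sqrt(lam))

rng = np.random.default_rng(2)
snr, rate = 10.0, 2.0
# Phi = |x|^2 * |y|^2 is a product of two unit-mean exponentials here
phi = rng.exponential(size=200_000) * rng.exponential(size=200_000)
mi = np.log2(1.0 + snr * phi)

fi = outage_1x1(rate, snr)
mc = np.mean(mi < rate)
print(fi, mc)                  # should nearly coincide
```

The closed form avoids the Monte-Carlo noise floor entirely, which matters when estimating very small outage probabilities.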

C. Mean Mutual Information

A central information-theoretic measure of great impor-

tance, particularly for characterizing ergodic fading channels,

is the mean mutual information, which corresponds to the

maximum data rate at which (at least in theory) error-free

transmission is possible. Maximizing the mean mutual infor-

mation over the set of all possible input covariance matrices

Rss then yields the ergodic capacity of the channel. The

problem of determining the ergodic capacity of keyhole chan-

nels has already been addressed in [2], but for uncorrelated

channels and white noise only. In the following, we extend

and generalize these results by deriving an exact analytical

closed-form expression for the mean mutual information of the

considered spatially correlated keyhole channels with possibly

colored noise and arbitrary input covariance matrices. Based

on this result, the ergodic capacity with uninformed transmitter

can then easily be obtained by simply setting Rss= INTX.

Theorem 3: The mean mutual information between the input signal vector $\mathbf{s}$ and the output signal vector $\mathbf{r}$ of the considered MIMO channel is given by

$$\mathrm{E}[I(\mathbf{s},\mathbf{r})] = \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\xi_{i,j}\,\zeta_{k,l}}{\ln(2)\,\Gamma(j)\Gamma(l)}\; G_{4,2}^{1,4}\!\left(\frac{\bar\gamma\,\lambda_i\sigma_k}{N_{\mathrm{TX}}}\,\middle|\, \begin{matrix} 1-j,\;1-l,\;1,\;1 \\ 1,\;0 \end{matrix}\right), \qquad (25)$$


where $G_{p,q}^{m,n}[\cdot|\cdot]$ denotes the Meijer G-function [10].

Proof: Generally, $\mathrm{E}[I(\mathbf{s},\mathbf{r})]$ can be calculated as

$$\mathrm{E}[I(\mathbf{s},\mathbf{r})] = \int_0^\infty \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\phi\right) p_\Phi(\phi)\,d\phi. \qquad (26)$$

Replacing $p_\Phi(\phi)$ with the expression provided in (7), we get

$$\mathrm{E}[I(\mathbf{s},\mathbf{r})] = \sum_{i=1}^{M}\sum_{j=1}^{m_i}\sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{2\,\xi_{i,j}\,\zeta_{k,l}}{\Gamma(j)\Gamma(l)\ln(2)\,(\lambda_i\sigma_k)^{\frac{j+l}{2}}} \int_0^\infty \ln\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\phi\right) \phi^{\frac{j+l}{2}-1}\, K_{j-l}\!\left(2\sqrt{\frac{\phi}{\lambda_i\sigma_k}}\right) d\phi. \qquad (27)$$

Due to the relatively complex structure of the integrand, the integral in (27) can, to the best of our knowledge, not be solved in closed form using standard integration techniques or standard tables of integrals. For that reason, we pursue an approach based upon Meijer G-functions in the following [10], which has already been used in [2]. These functions are of very general nature and contain basically all known elementary functions as special cases. Since they are readily available in common mathematical software packages, a numerical evaluation of these Meijer G-functions is immediately feasible. From [12] eq. (8.4.6,5), we specifically know the identity

$$\ln(1+x) = G_{2,2}^{1,2}\!\left(x\,\middle|\,\begin{matrix} 1,\;1 \\ 1,\;0 \end{matrix}\right). \qquad (28)$$

Replacing the logarithm in (27) by this equivalent expression, the integral can be written in a form which finally can be solved in closed form using the result provided in [10] eq. (7.821,3). This way, we obtain the expression given in (25), which eventually concludes the proof.

Even though the exact analytical expression for the mean mutual information according to (25) might be easily evaluated numerically, it is not very intuitive and does not directly reveal its dependence on the spatial correlation properties of the channel and the input signal and noise covariance matrices, respectively. For that purpose, we determine a simple upper bound as well as the corresponding high signal-to-noise ratio (SNR) asymptotics in the following, which can be expressed by means of elementary functions only.

Theorem 4: A simple upper bound $I_{\mathrm{bound}}$ on the mean mutual information $I(\mathbf{s},\mathbf{r})$ is given by

$$I_{\mathrm{bound}} = \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathrm{tr}\!\left(\mathbf{R}_{nn}^{-1}\mathbf{R}_{\mathrm{RX}}\right)\mathrm{tr}\!\left(\mathbf{R}_{ss}\mathbf{R}_{\mathrm{TX}}\right)\right). \qquad (29)$$

Proof: Exploiting the concavity of the log-function and applying Jensen's inequality to (26), we obtain

$$\mathrm{E}[I(\mathbf{s},\mathbf{r})] \le I_{\mathrm{bound}} = \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathrm{E}[\Phi]\right). \qquad (30)$$

Since $\Phi = \|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2$, where $\|\tilde{\mathbf{x}}\|^2$ and $\|\tilde{\mathbf{y}}\|^2$ are statistically independent of each other, we can say that

$$I_{\mathrm{bound}} = \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathrm{E}\!\left[\|\tilde{\mathbf{x}}\|^2\right]\mathrm{E}\!\left[\|\tilde{\mathbf{y}}\|^2\right]\right). \qquad (31)$$

Exploiting the equivalence $\mathrm{E}[\|\mathbf{a}\|^2] = \mathrm{tr}\!\left(\mathrm{E}[\mathbf{a}\mathbf{a}^H]\right)$ and the known statistical properties of $\tilde{\mathbf{x}}$ and $\tilde{\mathbf{y}}$ then yields

$$I_{\mathrm{bound}} = \log_2\left(1 + \frac{\bar\gamma}{N_{\mathrm{TX}}}\,\mathrm{tr}\!\left(\mathbf{R}_{nn}^{-H/2}\mathbf{R}_{\mathrm{RX}}\mathbf{R}_{nn}^{-1/2}\right)\mathrm{tr}\!\left(\mathbf{R}_{ss}^{H/2}\mathbf{R}_{\mathrm{TX}}\mathbf{R}_{ss}^{1/2}\right)\right). \qquad (32)$$

Based on this expression, the final result given in (29) can now easily be obtained by making use of the fact that $\mathrm{tr}(\mathbf{A}\mathbf{B}) = \mathrm{tr}(\mathbf{B}\mathbf{A})$ for matrices $\mathbf{A}$ and $\mathbf{B}$ such that $\mathbf{A}\mathbf{B}$ is square.

Remark 1: For isotropic input signals and additive white Gaussian noise with equal noise power $N_0$ on all receiver branches, i.e., for $\mathbf{R}_{ss} = \mathbf{I}_{N_{\mathrm{TX}}}$ as well as $\mathbf{R}_{nn} = \mathbf{I}_{N_{\mathrm{RX}}}$, the upper bound according to (29) simplifies to

$$I_{\mathrm{bound}} = \log_2\left(1 + N_{\mathrm{RX}}\,\bar\gamma\right). \qquad (33)$$

This means that in this frequently considered case the mean mutual information of a MIMO keyhole channel is upper bounded by the mean mutual information of an additive white Gaussian noise SIMO channel with $N_{\mathrm{RX}}$ receive antennas.
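Remark 1 is easy to check by simulation: for i.i.d. channels, $\|\tilde{\mathbf{x}}\|^2$ and $\|\tilde{\mathbf{y}}\|^2$ are Gamma-distributed sums of unit exponentials, and the sample mean of the mutual information in (6) must stay below the bound (33). A Python sketch (antenna numbers, SNR, and sample size are example choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, snr = 2, 3, 8.0

# Isotropic inputs, white noise: Phi = ||x||^2 * ||y||^2 with iid CN(0,1) entries,
# so the squared norms are sums of unit-mean exponentials (Gamma distributed)
x2 = rng.exponential(size=(500_000, n_rx)).sum(axis=1)
y2 = rng.exponential(size=(500_000, n_tx)).sum(axis=1)
mean_mi = np.mean(np.log2(1.0 + snr / n_tx * x2 * y2))

bound = np.log2(1.0 + n_rx * snr)   # (33)
print(mean_mi, bound)               # Jensen: mean_mi <= bound
```

The gap between the two numbers is exactly the Jensen penalty, which shrinks as the fluctuations of $\Phi$ around its mean become relatively smaller, i.e., with growing antenna numbers.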

For the asymptotics of the mean mutual information in the high SNR regime, we can formulate the following theorem:

Theorem 5: The high SNR asymptotics $I_{\mathrm{high}}$ of the mean mutual information $I(\mathbf{s},\mathbf{r})$ are given by

$$I_{\mathrm{high}} = \log_2\left(\frac{\bar\gamma}{N_{\mathrm{TX}}}\right) + \Xi_{\mathrm{TX}} + \Xi_{\mathrm{RX}}, \qquad (34)$$

with

$$\Xi_{\mathrm{TX}} = \sum_{k=1}^{N}\sum_{l=1}^{n_k} \frac{\zeta_{k,l}}{\ln 2}\left[\psi(l) + \ln\sigma_k\right] \qquad (35)$$

$$\Xi_{\mathrm{RX}} = \sum_{i=1}^{M}\sum_{j=1}^{m_i} \frac{\xi_{i,j}}{\ln 2}\left[\psi(j) + \ln\lambda_i\right], \qquad (36)$$

where $\psi(\cdot)$ denotes Euler's psi-function, which is given for positive integers $n$ as $\psi(n) = -\gamma_E + \sum_{k=1}^{n-1}\frac{1}{k}$, with $\gamma_E$ as the Euler-Mascheroni constant [10].

Proof: It can easily be seen that in the high SNR regime the mean mutual information can be reasonably approximated by

$$I_{\mathrm{high}} = \mathrm{E}\left[\log_2\left(\frac{\bar\gamma}{N_{\mathrm{TX}}}\,\Phi\right)\right], \qquad (37)$$

where the expectation has to be taken with respect to the distribution of $\Phi$. Noting again that $\Phi = \|\tilde{\mathbf{x}}\|^2\|\tilde{\mathbf{y}}\|^2$, (37) can be transformed to (34), where $\Xi_{\mathrm{TX}} = \mathrm{E}[\log_2(\|\tilde{\mathbf{y}}\|^2)]$ and $\Xi_{\mathrm{RX}} = \mathrm{E}[\log_2(\|\tilde{\mathbf{x}}\|^2)]$. These expected values can be calculated in closed form based on (13) and (14) by making use of [10] eq. (4.352,1), whereby we finally get the expressions given in (35) and (36), respectively. Please note that the asymptotic tightness of (34) can easily be checked by showing that $\lim_{\bar\gamma\to\infty}\left|\mathrm{E}[I(\mathbf{s},\mathbf{r})] - I_{\mathrm{high}}\right| = 0$, which, however, is not explicitly shown here due to space constraints.

At this point, it is also interesting to note that the high SNR asymptotics always represent a lower bound on the exact ergodic capacity since $\log_2(x) \le \log_2(1+x)\;\forall\, x > 0$. Furthermore, it can easily be seen based on (34) that keyhole channels
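For the i.i.d. case ($\mathbf{R}_{ss} = \mathbf{I}$, $\mathbf{R}_{nn} = \mathbf{I}$, no spatial correlation), all $\sigma_k$ and $\lambda_i$ equal one and (35), (36) reduce to $\Xi_{\mathrm{TX}} = \psi(N_{\mathrm{TX}})/\ln 2$ and $\Xi_{\mathrm{RX}} = \psi(N_{\mathrm{RX}})/\ln 2$. The following Python sketch (illustrative parameters, arbitrary seed) checks the resulting $I_{\mathrm{high}}$ against a Monte-Carlo estimate of the mean mutual information at high SNR:

```python
import numpy as np
from scipy.special import digamma   # Euler's psi-function

rng = np.random.default_rng(4)
n_tx, n_rx, snr = 2, 2, 1000.0      # high SNR (30 dB)

# (34) for isotropic inputs and white noise
i_high = np.log2(snr / n_tx) + (digamma(n_tx) + digamma(n_rx)) / np.log(2)

# Monte-Carlo estimate: ||x||^2 ~ Gamma(n_rx), ||y||^2 ~ Gamma(n_tx)
x2 = rng.exponential(size=(400_000, n_rx)).sum(axis=1)
y2 = rng.exponential(size=(400_000, n_tx)).sum(axis=1)
mean_mi = np.mean(np.log2(1.0 + snr / n_tx * x2 * y2))
print(mean_mi, i_high)              # nearly equal at high SNR
```

The closeness of the two values at 30 dB illustrates the asymptotic tightness claimed in the proof; at low SNR the approximation (37) would of course break down.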


provide no spatial multiplexing gain since $\lim_{\bar\gamma\to\infty} \frac{I_{\mathrm{high}}}{\log_2\bar\gamma} = 1$, independent of the actual antenna configuration. In fact, this is quite obvious since keyhole channels provide only a single degree of freedom [2]. In addition, it becomes apparent from (34) that spatial correlation affects only the values of $\Xi_{\mathrm{TX}}$ and $\Xi_{\mathrm{RX}}$ in the high SNR regime and hence leads to a constant offset of the mean mutual information. In this regard, we can formulate the following corollary:

Corollary 1: Assuming isotropic inputs, i.e., $\mathbf{R}_{ss} = \mathbf{I}_{N_{\mathrm{TX}}}$, the difference between the mean mutual information in case of uncorrelated and fully correlated transmit signals is given in the high SNR regime by

$$\Delta_{\mathrm{corr,TX}} = \frac{1}{\ln 2}\left[\sum_{k=1}^{N_{\mathrm{TX}}-1}\frac{1}{k} - \ln N_{\mathrm{TX}}\right]. \qquad (38)$$

Proof: In case of uncorrelated transmit signals, all eigenvalues $\sigma_k$ of $\mathbf{R}_{\mathrm{TX}}$ are equal to one, i.e., we have $N = 1$, $n_1 = N_{\mathrm{TX}}$, and $\sigma_1 = 1$. Furthermore, it can easily be shown that

$$\zeta_{1,l} = \begin{cases} 1 & \text{for } l = N_{\mathrm{TX}} \\ 0 & \text{otherwise.} \end{cases} \qquad (39)$$

Hence, $\Xi_{\mathrm{TX}}$ is given by $\Xi_{\mathrm{TX,a}} = \frac{1}{\ln 2}\left[-\gamma_E + \sum_{k=1}^{N_{\mathrm{TX}}-1}\frac{1}{k}\right]$ in this case. With full spatial correlation at the transmitter side, we have only one non-zero eigenvalue with multiplicity one, which is equal to the number of transmit antennas, i.e., $N = 1$, $n_1 = 1$, $\sigma_1 = N_{\mathrm{TX}}$, and $\zeta_{1,1} = 1$. Hence, we obtain $\Xi_{\mathrm{TX,b}} = \frac{1}{\ln 2}\left[-\gamma_E + \ln N_{\mathrm{TX}}\right]$. The result provided by (38) then simply corresponds to the difference between $\Xi_{\mathrm{TX,a}}$ and $\Xi_{\mathrm{TX,b}}$.

Similar considerations can be made for spatial correlation at the receiver side if we assume AWGN, i.e., $\mathbf{R}_{nn} = \mathbf{I}_{N_{\mathrm{RX}}}$, where the corresponding expression for $\Delta_{\mathrm{corr,RX}}$ can be obtained from (38) by simply replacing $N_{\mathrm{TX}}$ with $N_{\mathrm{RX}}$. Clearly, (38) reveals that the impact of spatial correlation on the capacity in case of keyhole channels is generally relatively small. For $N_{\mathrm{TX}} = 2$, for example, the difference between the uncorrelated and the fully correlated case at the transmitter side is just about 0.4427 bits per channel use, which is far less than it would be in case of non-keyhole MIMO channels [4].

A self-evident conjecture in this regard is that the offset between the fully correlated and the uncorrelated case increases if more antennas are used, since in absence of spatial correlation the diversity order increases in that case, whereas in case of fully correlated channels the diversity order always equals one. This is manifested by the following corollary:

Corollary 2: $\Delta_{\mathrm{corr,TX}}$ according to (38) is a strictly increasing function of the number of transmit antennas, and the limiting value for $N_{\mathrm{TX}} \to \infty$ is given by

$$\lim_{N_{\mathrm{TX}}\to\infty} \Delta_{\mathrm{corr,TX}} = \frac{\gamma_E}{\ln 2}. \qquad (40)$$

Proof: The strictly increasing nature of $\Delta_{\mathrm{corr,TX}}$ can easily be shown by means of complete induction, which is not explicitly presented here due to space constraints. For obtaining the limiting value in (40), we make use of [10] eq. (8.367,2), which directly leads to the given result.

Please note that exactly the same considerations can be done for the receiver side again, i.e., similarly to (40) we can say that $\lim_{N_{\mathrm{RX}}\to\infty} \Delta_{\mathrm{corr,RX}} = \frac{\gamma_E}{\ln 2}$.
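The offset (38) and the limit (40) are elementary to evaluate. A short Python sketch reproducing the $N_{\mathrm{TX}} = 2$ value of about 0.4427 bits per channel use quoted above:

```python
import numpy as np

def delta_corr(n):
    """Eq. (38): high-SNR offset of the mean mutual information between
    uncorrelated and fully correlated transmit signals, n antennas."""
    harmonic = sum(1.0 / k for k in range(1, n))   # sum_{k=1}^{n-1} 1/k
    return (harmonic - np.log(n)) / np.log(2)

print(round(delta_corr(2), 4))        # 0.4427 bits per channel use
# Corollary 2: the offset grows toward gamma_E / ln 2 (about 0.8327)
print(delta_corr(10), delta_corr(1000))
```

Since the harmonic number minus $\ln n$ converges to the Euler-Mascheroni constant, the offset saturates below one bit regardless of the antenna count, underlining how mild the correlation penalty is for keyhole channels.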

IV. NUMERICAL RESULTS

In the following, we restrict ourselves for simplicity to considering isotropic input signals and AWGN only, i.e., we assume that $\mathbf{R}_{ss} = \mathbf{I}_{N_{\mathrm{TX}}}$ and $\mathbf{R}_{nn} = \mathbf{I}_{N_{\mathrm{RX}}}$. Furthermore, for investigating the impact of spatial correlation, we assume an exponential correlation model, i.e., the entry in the $m$-th row and $n$-th column of the transmit correlation matrix $\mathbf{R}_{\mathrm{TX}}$ is given by $[\mathbf{R}_{\mathrm{TX}}]_{m,n} = \rho_{\mathrm{TX}}^{|m-n|}$ ($0 \le \rho_{\mathrm{TX}} \le 1$) and the corresponding entry of $\mathbf{R}_{\mathrm{RX}}$ by $[\mathbf{R}_{\mathrm{RX}}]_{m,n} = \rho_{\mathrm{RX}}^{|m-n|}$ ($0 \le \rho_{\mathrm{RX}} \le 1$), where $\rho_{\mathrm{TX}}$ and $\rho_{\mathrm{RX}}$ denote two correlation coefficients that can be used for adjusting the degree of correlation.
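The exponential correlation model above is simple to construct in code. A Python sketch (size and $\rho$ are example values) building such a matrix and confirming that it is a valid, positive definite correlation matrix for $0 \le \rho < 1$:

```python
import numpy as np

def exp_corr(n, rho):
    """Exponential correlation model: [R]_{m,n} = rho^{|m-n|}."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R_tx = exp_corr(4, 0.7)
print(R_tx[0])                                   # [1, 0.7, 0.49, 0.343]
print(np.all(np.linalg.eigvalsh(R_tx) > 0))      # positive definite
```

Positive definiteness matters in practice: it guarantees that the Cholesky factor used to color the channel vectors exists for any $0 \le \rho < 1$.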

Fig. 1 shows the mean mutual information versus the average SNR for various antenna configurations with no correlation at the receiver side and only moderate correlation at the transmitter side ($\rho_{\mathrm{TX}} = 0.7$). Aside from our analytically calculated values, also the corresponding high SNR asymptotics, the upper bound, as well as results obtained from Monte-Carlo simulations are shown. Obviously, there is a perfect match between calculated and simulated values, which verifies the accuracy of our theoretical analysis. Furthermore, it can be seen that our high SNR asymptotics are quite tight even for rather moderate average SNRs and that the accuracy of the upper bound increases with increasing numbers of antennas.

Some examples for the shapes of the pdfs of the mutual information are depicted in Fig. 2. As before, we compare our analytical results with simulated values, and it turns out that there is again a perfect agreement between both of them.

Fig. 3 illustrates the impact of spatial correlation at the transmitter side on the cdf of the mutual information. As can be seen, with increasing spatial correlation, i.e., with increasing values of $\rho_{\mathrm{TX}}$, the probability that the channel supports only rather low rates is increasing, but at the same time also the probability that it supports relatively high rates is increasing.

[Fig. 1. Mean mutual information versus average SNR for several different antenna configurations with $\rho_{\mathrm{TX}} = 0.7$ and $\rho_{\mathrm{RX}} = 0$. Curves: Theory, Simulation, Asymptotics, Upper bound; configurations: 2×2, 4×4, 6×6.]
