entropy
Article
A Low-Complexity and Asymptotically Optimal
Coding Strategy for Gaussian Vector Sources
Marta Zárraga-Rodríguez , Jesús Gutiérrez-Gutiérrez * and Xabier Insausti
Tecnun, University of Navarra, Paseo de Manuel Lardizábal 13, 20018 San Sebastián, Spain;
mzarraga@tecnun.es (M.Z.-R.); xinsausti@tecnun.es (X.I.)
*Correspondence: jgutierrez@tecnun.es; Tel.: +34-943-219-877
Received: 4 August 2019; Accepted: 28 September 2019; Published: 2 October 2019


Abstract:
In this paper, we present a low-complexity coding strategy to encode (compress)
finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a
Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of the coding strategy
tends to the lowest possible rate. Besides being a low-complexity strategy, it does not require the
knowledge of the correlation matrix of such data blocks. We also show that this coding strategy
is appropriate to encode the most relevant Gaussian vector sources, namely, wide sense stationary
(WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.
Keywords:
source coding; rate distortion function (RDF); Gaussian vector; asymptotically wide sense
stationary (AWSS) vector source; block discrete Fourier transform (DFT)
1. Introduction
The rate distortion function (RDF) of a source provides the minimum rate at which data can be
encoded in order to be able to recover them with a mean squared error (MSE) per dimension not larger
than a given distortion.
In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian $N$-dimensional vector sources. Moreover, we show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of our coding strategy tends to the RDF of the source. The definition of AWSS vector process can be found in ([1] (Definition 7.1)). This definition was first introduced for the scalar case $N=1$ (see ([2] (Section 6)) or [3]), and it is based on the Gray concept of asymptotically equivalent sequences of matrices [4].
A low-complexity coding strategy can be found in [5] for finite-length data blocks of Gaussian wide sense stationary (WSS) sources and in [6] for finite-length data blocks of Gaussian AWSS autoregressive (AR) sources. Both precedents deal with scalar processes. The low-complexity coding strategy presented in this paper generalizes the aforementioned strategies to Gaussian AWSS vector sources.
Our coding strategy is based on the block discrete Fourier transform (DFT), and therefore, it turns out to be a low-complexity coding strategy when the fast Fourier transform (FFT) algorithm is used. Specifically, the computational complexity of our coding strategy is $\mathcal{O}(nN\log n)$, where $n$ is the length of the data blocks. Besides being a low-complexity strategy, it does not require the knowledge of the correlation matrix of such data blocks.
We show that this coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, WSS, moving average (MA), autoregressive (AR), and ARMA vector sources. Observe that our coding strategy is then appropriate to encode Gaussian vector sources found in the literature, such as the corrupted WSS vector sources considered in [7,8] for the quadratic Gaussian CEO problem.
The paper is organized as follows. In Section 2, we obtain several new mathematical results on
the block DFT, and we present an upper bound for the RDF of a complex Gaussian vector. In Section 3,
using the results given in Section 2, we present a new coding strategy based on the block DFT to encode
finite-length data blocks of Gaussian vector sources. In Section 4, we show that for large enough data
blocks of a Gaussian AWSS vector source, the rate of our coding strategy tends to the RDF of the source.
In Section 5, we show that our coding strategy is appropriate to encode WSS, MA, AR, and ARMA
vector sources. In Section 6, conclusions and numerical examples are presented.
2. Preliminaries
2.1. Notation
In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ are the set of positive integers, the set of integers, the set of real numbers, and the set of complex numbers, respectively. The symbol $\top$ denotes transpose and the symbol $*$ denotes conjugate transpose. $\|\cdot\|_2$ and $\|\cdot\|_F$ are the spectral and the Frobenius norm, respectively. $\lceil x\rceil$ denotes the smallest integer higher than or equal to $x$. $E$ stands for expectation, $\otimes$ is the Kronecker product, and $\lambda_j(A)$, $j\in\{1,\dots,n\}$, denote the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order. $\mathbb{R}^{n\times 1}$ is the set of real $n$-dimensional (column) vectors, $\mathbb{C}^{m\times n}$ denotes the set of $m\times n$ complex matrices, $0_{m\times n}$ is the $m\times n$ zero matrix, $I_n$ denotes the $n\times n$ identity matrix, and $V_n$ is the $n\times n$ Fourier unitary matrix, i.e.,
$$[V_n]_{j,k}=\frac{1}{\sqrt{n}}e^{-\frac{2\pi(j-1)(k-1)}{n}i},\qquad j,k\in\{1,\dots,n\},$$
where $i$ is the imaginary unit.
If $A_j\in\mathbb{C}^{N\times N}$ for all $j\in\{1,\dots,n\}$, then $\mathrm{diag}_{1\leq j\leq n}(A_j)$ denotes the $n\times n$ block diagonal matrix with $N\times N$ blocks given by $\mathrm{diag}_{1\leq j\leq n}(A_j)=(A_j\delta_{j,k})_{j,k=1}^{n}$, where $\delta$ is the Kronecker delta.
$\mathrm{Re}$ and $\mathrm{Im}$ denote the real part and the imaginary part of a complex number, respectively. If $A\in\mathbb{C}^{m\times n}$, then $\mathrm{Re}(A)$ and $\mathrm{Im}(A)$ are the $m\times n$ real matrices given by $[\mathrm{Re}(A)]_{j,k}=\mathrm{Re}([A]_{j,k})$ and $[\mathrm{Im}(A)]_{j,k}=\mathrm{Im}([A]_{j,k})$ with $j\in\{1,\dots,m\}$ and $k\in\{1,\dots,n\}$, respectively.
If $z\in\mathbb{C}^{N\times 1}$, then $\widehat{z}$ denotes the real $2N$-dimensional vector given by
$$\widehat{z}=\begin{pmatrix}\mathrm{Re}(z)\\ \mathrm{Im}(z)\end{pmatrix}.$$
If $z_k\in\mathbb{C}^{N\times 1}$ for all $k\in\{1,\dots,n\}$, then $z_{n:1}$ is the $nN$-dimensional vector given by
$$z_{n:1}=\begin{pmatrix}z_n\\ z_{n-1}\\ \vdots\\ z_1\end{pmatrix}.$$
Finally, if $z_k$ is a (complex) random $N$-dimensional vector for all $k\in\mathbb{N}$, $\{z_k\}$ denotes the corresponding (complex) random $N$-dimensional vector process.
2.2. New Mathematical Results on the Block DFT
We first give a simple result on the block DFT of real vectors.
Lemma 1. Let $n,N\in\mathbb{N}$. Consider $x_k\in\mathbb{C}^{N\times 1}$ for all $k\in\{1,\dots,n\}$. Suppose that $y_{n:1}$ is the block DFT of $x_{n:1}$, i.e.,
$$y_{n:1}=(V_n\otimes I_N)^{*}x_{n:1}=(V_n^{*}\otimes I_N)x_{n:1}.\qquad(1)$$
Then the two following assertions are equivalent:
1. $x_{n:1}\in\mathbb{R}^{nN\times 1}$.
2. $y_k=\overline{y_{n-k}}$ for all $k\in\{1,\dots,n-1\}$ and $y_n\in\mathbb{R}^{N\times 1}$.
Proof. See Appendix A.
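As a quick illustration (our own sketch, not part of the original paper; it assumes Python with NumPy, and the variable names are ours), the equivalence stated in Lemma 1 can be checked numerically for a small block length by building $V_n$ explicitly:

import numpy as np

n, N = 6, 2
rng = np.random.default_rng(0)

# Fourier unitary matrix V_n of Section 2.1: [V_n]_{j,k} = exp(-2*pi*(j-1)*(k-1)*i/n)/sqrt(n)
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Vn = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)
assert np.allclose(Vn @ Vn.conj().T, np.eye(n))      # V_n is unitary

x_blocks = rng.standard_normal((n, N))               # real data block; row 0 is x_n, ..., row n-1 is x_1
y_vec = np.kron(Vn.conj().T, np.eye(N)) @ x_blocks.reshape(-1)   # Equation (1): y_{n:1} = (V_n kron I_N)^* x_{n:1}
y_blocks = y_vec.reshape(n, N)                       # row 0 is y_n, ..., row n-1 is y_1

assert np.allclose(y_blocks[0].imag, 0)              # y_n is real (assertion 2 of Lemma 1)
for m in range(1, n):
    assert np.allclose(y_blocks[n - m], y_blocks[m].conj())   # y_k = conj(y_{n-k}) for k = 1, ..., n-1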
We now give three new mathematical results on the block DFT of random vectors that are used in
Section 3.
Theorem 1. Consider $n,N\in\mathbb{N}$. Let $x_k$ be a random $N$-dimensional vector for all $k\in\{1,\dots,n\}$. Suppose that $y_{n:1}$ is given by Equation (1). If $k\in\{1,\dots,n\}$, then
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{*}))\leq\lambda_{N}(E(x_k x_k^{*}))\leq\lambda_{1}(E(x_k x_k^{*}))\leq\lambda_{1}(E(x_{n:1}x_{n:1}^{*}))\qquad(2)$$
and
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{*}))\leq\lambda_{N}(E(y_k y_k^{*}))\leq\lambda_{1}(E(y_k y_k^{*}))\leq\lambda_{1}(E(x_{n:1}x_{n:1}^{*})).\qquad(3)$$
Proof. See Appendix B.
Theorem 2. Let $x_{n:1}$ and $y_{n:1}$ be as in Theorem 1. Suppose that $x_{n:1}$ is real. If $k\in\{1,\dots,n-1\}\setminus\{\frac{n}{2}\}$, then
$$\frac{\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))}{2}\leq\lambda_{2N}(E(\widehat{y}_k\widehat{y}_k^{\top}))\leq\lambda_{1}(E(\widehat{y}_k\widehat{y}_k^{\top}))\leq\frac{\lambda_{1}(E(x_{n:1}x_{n:1}^{\top}))}{2}.$$
Proof. See Appendix C.
Lemma 2. Let $x_{n:1}$ and $y_{n:1}$ be as in Theorem 1. If $k\in\{1,\dots,n\}$, then
1. $E(y_k y_k^{*})=[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{*})(V_n\otimes I_N)]_{n-k+1,n-k+1}$.
2. $E(y_k y_k^{\top})=[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n^{*}\otimes I_N)^{\top}]_{n-k+1,n-k+1}$.
3. $E(\widehat{y}_k\widehat{y}_k^{\top})=\dfrac{1}{2}\begin{pmatrix}\mathrm{Re}(E(y_k y_k^{*}))+\mathrm{Re}(E(y_k y_k^{\top})) & \mathrm{Im}(E(y_k y_k^{\top}))-\mathrm{Im}(E(y_k y_k^{*}))\\ \mathrm{Im}(E(y_k y_k^{*}))+\mathrm{Im}(E(y_k y_k^{\top})) & \mathrm{Re}(E(y_k y_k^{*}))-\mathrm{Re}(E(y_k y_k^{\top}))\end{pmatrix}$.
Proof. See Appendix D.
2.3. Upper Bound for the RDF of a Complex Gaussian Vector
In [9], Kolmogorov gave a formula for the RDF of a real zero-mean Gaussian $N$-dimensional vector $x$ with positive definite correlation matrix $E(xx^{\top})$, namely,
$$R_x(D)=\frac{1}{N}\sum_{k=1}^{N}\max\left(0,\frac{1}{2}\ln\frac{\lambda_k(E(xx^{\top}))}{\theta}\right),\qquad D\in\left(0,\frac{\mathrm{tr}(E(xx^{\top}))}{N}\right],\qquad(4)$$
where $\mathrm{tr}$ denotes trace and $\theta$ is a real number satisfying
$$D=\frac{1}{N}\sum_{k=1}^{N}\min\left\{\theta,\lambda_k(E(xx^{\top}))\right\}.$$
If $D\in\left(0,\lambda_N(E(xx^{\top}))\right]$, an optimal coding strategy to achieve $R_x(D)$ is to encode $[z]_{1,1},\dots,[z]_{N,1}$ separately, where $z=U^{\top}x$ with $U$ being a real orthogonal eigenvector matrix of $E(xx^{\top})$ (see ([6] (Corollary 1))). Observe that in order to obtain $U$, we need to know the correlation matrix $E(xx^{\top})$. This coding strategy also requires an optimal coding method for real Gaussian random variables.
Moreover, as $0<D\leq\lambda_N(E(xx^{\top}))\leq\frac{1}{N}\sum_{k=1}^{N}\lambda_k(E(xx^{\top}))=\frac{\mathrm{tr}(E(xx^{\top}))}{N}$, if $D\in\left(0,\lambda_N(E(xx^{\top}))\right]$, then from Equation (4) we obtain
$$R_x(D)=\frac{1}{N}\sum_{k=1}^{N}\frac{1}{2}\ln\frac{\lambda_k(E(xx^{\top}))}{D}=\frac{1}{2N}\ln\frac{\prod_{k=1}^{N}\lambda_k(E(xx^{\top}))}{D^{N}}=\frac{1}{2N}\ln\frac{\det(E(xx^{\top}))}{D^{N}}.\qquad(5)$$
We recall that $R_x(D)$ can be thought of as the minimum rate (measured in nats) at which $x$ can be encoded (compressed) in order to be able to recover it with an MSE per dimension not larger than $D$, that is,
$$\frac{E\left(\|x-\widetilde{x}\|_2^2\right)}{N}\leq D,$$
where $\widetilde{x}$ denotes the estimation of $x$.
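For concreteness, Equation (4) can be evaluated by reverse water-filling on the eigenvalues of $E(xx^{\top})$. The following sketch (ours, in Python with NumPy; the function name and the example matrix are illustrative assumptions, not taken from the paper) also checks it against the closed form (5) when $D\leq\lambda_N(E(xx^{\top}))$:

import numpy as np

def rdf_gaussian_vector(C, D):
    # Kolmogorov RDF of Equation (4) (in nats) for a real zero-mean Gaussian vector
    # with positive definite correlation matrix C = E(x x^T).
    lam = np.linalg.eigvalsh(C)
    lo, hi = 0.0, lam.max()
    for _ in range(200):                              # bisection on the water level theta
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).mean() < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()

C = np.array([[4.0, 1.0], [1.0, 2.0]])                # an illustrative positive definite correlation matrix
D = 0.5                                               # here D <= lambda_N(C), so Equation (5) applies
closed_form = np.log(np.linalg.det(C) / D ** C.shape[0]) / (2 * C.shape[0])
assert abs(rdf_gaussian_vector(C, D) - closed_form) < 1e-6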
The following result gives an upper bound for the RDF of a complex zero-mean Gaussian $N$-dimensional vector (i.e., a real zero-mean Gaussian $2N$-dimensional vector).
Lemma 3. Consider $N\in\mathbb{N}$. Let $z$ be a complex zero-mean Gaussian $N$-dimensional vector. If $E(\widehat{z}\widehat{z}^{\top})$ is a positive definite matrix, then
$$R_{\widehat{z}}(D)\leq\frac{1}{2N}\ln\frac{\det(E(zz^{*}))}{(2D)^{N}},\qquad D\in\left(0,\lambda_{2N}(E(\widehat{z}\widehat{z}^{\top}))\right].\qquad(6)$$
Proof. We divide the proof into three steps:
Step 1: We prove that $E(zz^{*})$ is a positive definite matrix. We have
$$E(\widehat{z}\widehat{z}^{\top})=\begin{pmatrix}E(\mathrm{Re}(z)(\mathrm{Re}(z))^{\top}) & E(\mathrm{Re}(z)(\mathrm{Im}(z))^{\top})\\ E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top}) & E(\mathrm{Im}(z)(\mathrm{Im}(z))^{\top})\end{pmatrix}$$
and
$$E(zz^{*})=E\left((\mathrm{Re}(z)+i\,\mathrm{Im}(z))((\mathrm{Re}(z))^{\top}-i(\mathrm{Im}(z))^{\top})\right)=E(\mathrm{Re}(z)(\mathrm{Re}(z))^{\top})+E(\mathrm{Im}(z)(\mathrm{Im}(z))^{\top})+i\,E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top})-i\,E(\mathrm{Re}(z)(\mathrm{Im}(z))^{\top}).$$
Consider $u\in\mathbb{C}^{N\times 1}$, and suppose that $u^{*}E(zz^{*})u=0$. We only need to show that $u=0_{N\times 1}$. As $E(\widehat{z}\widehat{z}^{\top})$ is a positive definite matrix and
$$\begin{pmatrix}u\\ -iu\end{pmatrix}^{*}E(\widehat{z}\widehat{z}^{\top})\begin{pmatrix}u\\ -iu\end{pmatrix}=\begin{pmatrix}u\\ -iu\end{pmatrix}^{*}\begin{pmatrix}E(\mathrm{Re}(z)(\mathrm{Re}(z))^{\top})u-i\,E(\mathrm{Re}(z)(\mathrm{Im}(z))^{\top})u\\ E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top})u-i\,E(\mathrm{Im}(z)(\mathrm{Im}(z))^{\top})u\end{pmatrix}$$
$$=u^{*}E(\mathrm{Re}(z)(\mathrm{Re}(z))^{\top})u-i\,u^{*}E(\mathrm{Re}(z)(\mathrm{Im}(z))^{\top})u+i\,u^{*}E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top})u+u^{*}E(\mathrm{Im}(z)(\mathrm{Im}(z))^{\top})u=u^{*}E(zz^{*})u=0,$$
we obtain $\begin{pmatrix}u\\ -iu\end{pmatrix}=0_{2N\times 1}$, or equivalently, $u=0_{N\times 1}$.
Step 2: We show that $\det(E(\widehat{z}\widehat{z}^{\top}))\leq\frac{(\det(E(zz^{*})))^{2}}{2^{2N}}$. We have $E(zz^{*})=\Lambda_c+i\Lambda_s$, where
$$\Lambda_c=E(\mathrm{Re}(z)(\mathrm{Re}(z))^{\top})+E(\mathrm{Im}(z)(\mathrm{Im}(z))^{\top})\quad\text{and}\quad\Lambda_s=E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top})-\left(E(\mathrm{Im}(z)(\mathrm{Re}(z))^{\top})\right)^{\top}.$$
Applying ([10] (Corollary 1)), we obtain
$$\det(E(\widehat{z}\widehat{z}^{\top}))\leq\det(\Lambda_c+\Lambda_s\Lambda_c^{-1}\Lambda_s)\frac{\det(\Lambda_c)}{2^{2N}}=\det(I_N+\Lambda_s\Lambda_c^{-1}\Lambda_s\Lambda_c^{-1})\frac{(\det(\Lambda_c))^{2}}{2^{2N}}=\det(I_N+i\Lambda_s\Lambda_c^{-1})\det(I_N-i\Lambda_s\Lambda_c^{-1})\frac{(\det(\Lambda_c))^{2}}{2^{2N}}$$
$$=\frac{\det(\Lambda_c+i\Lambda_s)\det(\Lambda_c-i\Lambda_s)}{2^{2N}}=\frac{\det(E(zz^{*}))\det(\overline{E(zz^{*})})}{2^{2N}}=\frac{\det(E(zz^{*}))\,\overline{\det(E(zz^{*}))}}{2^{2N}}=\frac{(\det(E(zz^{*})))^{2}}{2^{2N}}.$$
Step 3: We now prove Equation (6). From Equation (5), we conclude that
$$R_{\widehat{z}}(D)=\frac{1}{4N}\ln\frac{\det(E(\widehat{z}\widehat{z}^{\top}))}{D^{2N}}\leq\frac{1}{4N}\ln\frac{(\det(E(zz^{*})))^{2}}{(2D)^{2N}}=\frac{1}{2N}\ln\frac{\det(E(zz^{*}))}{(2D)^{N}}.$$
3. Low-Complexity Coding Strategy for Gaussian Vector Sources
In this section (see Theorem 3), we present our coding strategy for Gaussian vector sources.
To encode a finite-length data block $x_{n:1}$ of a Gaussian $N$-dimensional vector source $\{x_k\}$, we compute the block DFT of $x_{n:1}$ (namely, $y_{n:1}$) and we encode $y_{\lceil\frac{n}{2}\rceil},\dots,y_n$ separately with $\frac{E(\|y_k-\widetilde{y}_k\|_2^2)}{N}\leq D$ for all $k\in\{\lceil\frac{n}{2}\rceil,\dots,n\}$ (see Figure 1).
[Figure 1 is a block diagram: $x_{n:1}$ is transformed by $V_n^{*}\otimes I_N$ into $y_{n:1}$; the blocks $y_{\lceil\frac{n}{2}\rceil},\dots,y_n$ are fed to Encoder$_{\lceil\frac{n}{2}\rceil},\dots,$ Encoder$_n$ and the corresponding Decoders; the reconstructed $\widetilde{y}_{n:1}$ is transformed by $V_n\otimes I_N$ into $\widetilde{x}_{n:1}$.]
Figure 1. Proposed coding strategy for Gaussian vector sources. In this figure, Encoder$_k$ (Decoder$_k$) denotes the optimal encoder (decoder) for the Gaussian $N$-dimensional vector $y_k$ with $k\in\{\lceil\frac{n}{2}\rceil,\dots,n\}$.
We denote by $\widetilde{R}_{x_{n:1}}(D)$ the rate of our strategy. Theorem 3 also provides an upper bound of $\widetilde{R}_{x_{n:1}}(D)$. This upper bound is used in Section 4 to prove that our coding strategy is asymptotically optimal whenever the Gaussian vector source is AWSS.
In Theorem 3, $C(A_n)$ denotes the matrix $(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)A_n(V_n\otimes I_N)]_{k,k})\,(V_n^{*}\otimes I_N)$, where $A_n\in\mathbb{C}^{nN\times nN}$.
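A direct (unoptimized) construction of $C(A_n)$ is given below only as our own illustrative sketch in Python with NumPy (the function name is ours, not the paper's):

import numpy as np

def C_operator(A, n, N):
    # C(A) = (V_n kron I_N) diag_{1<=k<=n}([(V_n kron I_N)^* A (V_n kron I_N)]_{k,k}) (V_n kron I_N)^*
    jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Vn = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)
    W = np.kron(Vn, np.eye(N))                        # V_n kron I_N
    B = W.conj().T @ A @ W
    Bd = np.zeros_like(B)
    for k in range(n):                                # keep only the n diagonal N x N blocks
        sl = slice(k * N, (k + 1) * N)
        Bd[sl, sl] = B[sl, sl]
    return W @ Bd @ W.conj().T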
Theorem 3. Consider $n,N\in\mathbb{N}$. Let $x_k$ be a random $N$-dimensional vector for all $k\in\{1,\dots,n\}$. Suppose that $x_{n:1}$ is a real zero-mean Gaussian vector with a positive definite correlation matrix (or equivalently, $\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))>0$). Let $y_{n:1}$ be the random vector given by Equation (1). If $D\in\left(0,\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$, then
$$R_{x_{n:1}}(D)\leq\widetilde{R}_{x_{n:1}}(D)\leq\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{D^{nN}},\qquad(7)$$
where
$$\widetilde{R}_{x_{n:1}}(D)=\begin{cases}\dfrac{R_{y_{n/2}}(D)+2\sum_{k=n/2+1}^{n-1}R_{\widehat{y}_k}(D/2)+R_{y_n}(D)}{n} & \text{if $n$ is even},\\[2mm]\dfrac{2\sum_{k=(n+1)/2}^{n-1}R_{\widehat{y}_k}(D/2)+R_{y_n}(D)}{n} & \text{if $n$ is odd}.\end{cases}$$
Moreover,
$$0\leq\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{D^{nN}}-R_{x_{n:1}}(D)\leq\frac{1}{2}\ln\left(1+\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{nN}\,\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))}\right).\qquad(8)$$
Proof. We divide the proof into three steps:
Step 1: We show that $R_{x_{n:1}}(D)\leq\widetilde{R}_{x_{n:1}}(D)$. From Lemma 1, $y_k=\overline{y_{n-k}}$ for all $k\in\{1,\dots,\lceil\frac{n}{2}\rceil-1\}$, and $y_k\in\mathbb{R}^{N\times 1}$ with $k\in\{\frac{n}{2},n\}\cap\mathbb{N}$. We encode $y_{\lceil\frac{n}{2}\rceil},\dots,y_n$ separately (i.e., if $n$ is even, we encode $y_{n/2},\widehat{y}_{n/2+1},\dots,\widehat{y}_{n-1},y_n$ separately, and if $n$ is odd, we encode $\widehat{y}_{(n+1)/2},\dots,\widehat{y}_{n-1},y_n$ separately) with
$$\frac{E(\|\widehat{y}_k-\widetilde{\widehat{y}}_k\|_2^2)}{2N}\leq\frac{D}{2},\qquad k\in\left\{\left\lceil\frac{n}{2}\right\rceil,\dots,n-1\right\}\setminus\left\{\frac{n}{2}\right\},$$
and
$$\frac{E(\|y_k-\widetilde{y}_k\|_2^2)}{N}\leq D,\qquad k\in\left\{\frac{n}{2},n\right\}\cap\mathbb{N}.$$
Let $\widetilde{x}_{n:1}=(V_n\otimes I_N)\widetilde{y}_{n:1}$ with
$$\widetilde{y}_{n:1}=\begin{pmatrix}\widetilde{y}_n\\ \vdots\\ \widetilde{y}_1\end{pmatrix},$$
where $\widehat{\widetilde{y}_k}=\widetilde{\widehat{y}}_k$ for all $k\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}$, and $\widetilde{y}_k=\overline{\widetilde{y}_{n-k}}$ for all $k\in\{1,\dots,\lceil\frac{n}{2}\rceil-1\}$. Applying Lemma 1 yields $\widetilde{x}_{n:1}\in\mathbb{R}^{nN\times 1}$. As $(V_n\otimes I_N)$ is unitary and $\|\cdot\|_2$ is unitarily invariant, we have
$$\frac{E(\|x_{n:1}-\widetilde{x}_{n:1}\|_2^2)}{nN}=\frac{E(\|(V_n^{*}\otimes I_N)x_{n:1}-(V_n^{*}\otimes I_N)\widetilde{x}_{n:1}\|_2^2)}{nN}=\frac{E(\|y_{n:1}-\widetilde{y}_{n:1}\|_2^2)}{nN}=\frac{1}{nN}\sum_{k=1}^{n}E(\|y_k-\widetilde{y}_k\|_2^2)$$
$$=\frac{1}{nN}\left(2\sum_{k_1\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}}E(\|y_{k_1}-\widetilde{y}_{k_1}\|_2^2)+\sum_{k_2\in\{\frac{n}{2},n\}\cap\mathbb{N}}E(\|y_{k_2}-\widetilde{y}_{k_2}\|_2^2)\right)$$
$$=\frac{1}{nN}\left(2\sum_{k_1\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}}E(\|\widehat{y}_{k_1}-\widetilde{\widehat{y}}_{k_1}\|_2^2)+\sum_{k_2\in\{\frac{n}{2},n\}\cap\mathbb{N}}E(\|y_{k_2}-\widetilde{y}_{k_2}\|_2^2)\right)$$
$$\leq\begin{cases}\frac{1}{nN}\left(2\left(\frac{n}{2}-1\right)ND+2ND\right)=D & \text{if $n$ is even},\\ \frac{1}{nN}\left(2\left(n-\frac{n+1}{2}\right)ND+ND\right)=D & \text{if $n$ is odd}.\end{cases}$$
Consequently,
$$R_{x_{n:1}}(D)\leq\begin{cases}\dfrac{NR_{y_{n/2}}(D)+2N\sum_{k=n/2+1}^{n-1}R_{\widehat{y}_k}(D/2)+NR_{y_n}(D)}{nN} & \text{if $n$ is even},\\[2mm]\dfrac{2N\sum_{k=(n+1)/2}^{n-1}R_{\widehat{y}_k}(D/2)+NR_{y_n}(D)}{nN} & \text{if $n$ is odd},\end{cases}$$
which equals $\widetilde{R}_{x_{n:1}}(D)$.
Step 2: We prove that $\widetilde{R}_{x_{n:1}}(D)\leq\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{D^{nN}}$. From Equations (3) and (5), we obtain
$$R_{y_k}(D)=\frac{1}{2N}\ln\frac{\det(E(y_k y_k^{\top}))}{D^{N}},\qquad k\in\left\{\frac{n}{2},n\right\}\cap\mathbb{N},\qquad(9)$$
and applying Theorem 2 and Equation (5) yields
$$R_{\widehat{y}_k}\left(\frac{D}{2}\right)=\frac{1}{4N}\ln\frac{\det(E(\widehat{y}_k\widehat{y}_k^{\top}))}{\left(\frac{D}{2}\right)^{2N}},\qquad k\in\{1,\dots,n-1\}\setminus\left\{\frac{n}{2}\right\}.\qquad(10)$$
From Lemma 3, we have
$$\widetilde{R}_{x_{n:1}}(D)\leq\frac{1}{n}\left(2\sum_{k_1\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}}\frac{1}{2N}\ln\frac{\det(E(y_{k_1}y_{k_1}^{*}))}{D^{N}}+\sum_{k_2\in\{\frac{n}{2},n\}\cap\mathbb{N}}\frac{1}{2N}\ln\frac{\det(E(y_{k_2}y_{k_2}^{*}))}{D^{N}}\right)$$
$$=\frac{1}{2nN}\left(\sum_{k_1\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}}\left(\ln\frac{\det(E(y_{k_1}y_{k_1}^{*}))}{D^{N}}+\ln\frac{\det(\overline{E(y_{k_1}y_{k_1}^{*})})}{D^{N}}\right)+\sum_{k_2\in\{\frac{n}{2},n\}\cap\mathbb{N}}\ln\frac{\det(E(y_{k_2}y_{k_2}^{*}))}{D^{N}}\right)$$
$$=\frac{1}{2nN}\left(\sum_{k_1\in\{\lceil\frac{n}{2}\rceil,\dots,n-1\}\setminus\{\frac{n}{2}\}}\left(\ln\frac{\det(E(y_{k_1}y_{k_1}^{*}))}{D^{N}}+\ln\frac{\det(E(y_{n-k_1}y_{n-k_1}^{*}))}{D^{N}}\right)+\sum_{k_2\in\{\frac{n}{2},n\}\cap\mathbb{N}}\ln\frac{\det(E(y_{k_2}y_{k_2}^{*}))}{D^{N}}\right)$$
$$=\frac{1}{2nN}\sum_{k=1}^{n}\ln\frac{\det(E(y_k y_k^{*}))}{D^{N}}=\frac{1}{2nN}\ln\frac{\prod_{k=1}^{n}\det(E(y_k y_k^{*}))}{D^{nN}}.$$
As
$$\begin{aligned}
\{\lambda_j(E(y_k y_k^{*})):j\in\{1,\dots,N\},\,k\in\{1,\dots,n\}\}&=\{\lambda_j([E(y_{n:1}y_{n:1}^{*})]_{k,k}):j\in\{1,\dots,N\},\,k\in\{1,\dots,n\}\}\\
&=\{\lambda_j([(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k}):j\in\{1,\dots,N\},\,k\in\{1,\dots,n\}\}\\
&=\{\lambda_j(\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k})):j\in\{1,\dots,nN\}\}\\
&=\{\lambda_j((V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k})(V_n\otimes I_N)^{-1}):j\in\{1,\dots,nN\}\}\\
&=\{\lambda_j(C(E(x_{n:1}x_{n:1}^{\top}))):j\in\{1,\dots,nN\}\},\end{aligned}\qquad(11)$$
we obtain
$$\prod_{k=1}^{n}\det(E(y_k y_k^{*}))=\prod_{k=1}^{n}\prod_{j=1}^{N}\lambda_j(E(y_k y_k^{*}))=\prod_{j=1}^{nN}\lambda_j(C(E(x_{n:1}x_{n:1}^{\top})))=\det(C(E(x_{n:1}x_{n:1}^{\top}))).$$
Step 3: We show Equation (8). As $E(x_{n:1}x_{n:1}^{\top})$ is a positive definite matrix (or equivalently, $E(x_{n:1}x_{n:1}^{\top})$ is Hermitian and $\lambda_j(E(x_{n:1}x_{n:1}^{\top}))>0$ for all $j\in\{1,\dots,nN\}$), $(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)$ is Hermitian. Hence, $[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k}$ is Hermitian for all $k\in\{1,\dots,n\}$, and therefore, $\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k})$ is also Hermitian. Consequently, $(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n\otimes I_N)]_{k,k})(V_n^{*}\otimes I_N)$ is Hermitian, and applying Equations (3) and (11), we have that $C(E(x_{n:1}x_{n:1}^{\top}))$ is a positive definite matrix.
Let $E(x_{n:1}x_{n:1}^{\top})=U\,\mathrm{diag}_{1\leq j\leq nN}(\lambda_j(E(x_{n:1}x_{n:1}^{\top})))U^{-1}$ be an eigenvalue decomposition (EVD) of $E(x_{n:1}x_{n:1}^{\top})$, where $U$ is unitary. Thus,
$$\sqrt{E(x_{n:1}x_{n:1}^{\top})}=U\,\mathrm{diag}_{1\leq j\leq nN}\left(\sqrt{\lambda_j(E(x_{n:1}x_{n:1}^{\top}))}\right)U^{*}\quad\text{and}\quad\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}=U\,\mathrm{diag}_{1\leq j\leq nN}\left(\frac{1}{\sqrt{\lambda_j(E(x_{n:1}x_{n:1}^{\top}))}}\right)U^{*}.$$
Since $\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}$ is Hermitian and $C(E(x_{n:1}x_{n:1}^{\top}))$ is a positive definite matrix, $\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}$ is also a positive definite matrix.
From Equation (5), we have
$$R_{x_{n:1}}(D)=\frac{1}{2nN}\ln\frac{\det(E(x_{n:1}x_{n:1}^{\top}))}{D^{nN}},\qquad(12)$$
and applying the arithmetic mean-geometric mean inequality yields
$$\begin{aligned}
0&\leq\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{D^{nN}}-R_{x_{n:1}}(D)=\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{\det(E(x_{n:1}x_{n:1}^{\top}))}=\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{\det\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)\det\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)}\\
&=\frac{1}{2nN}\ln\left(\det\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\det\left(C(E(x_{n:1}x_{n:1}^{\top}))\right)\det\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\right)\\
&=\frac{1}{2nN}\ln\det\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\\
&=\frac{1}{2nN}\ln\prod_{j=1}^{nN}\lambda_j\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\\
&\leq\frac{1}{2nN}\ln\left(\frac{1}{nN}\sum_{j=1}^{nN}\lambda_j\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\right)^{nN}\\
&=\frac{1}{2}\ln\left(\frac{1}{nN}\,\mathrm{tr}\left(\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\right)\\
&=\frac{1}{2}\ln\left(\frac{1}{nN}\,\mathrm{tr}\left(C(E(x_{n:1}x_{n:1}^{\top}))\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\left(\sqrt{E(x_{n:1}x_{n:1}^{\top})}\right)^{-1}\right)\right)=\frac{1}{2}\ln\left(\frac{1}{nN}\,\mathrm{tr}\left(C(E(x_{n:1}x_{n:1}^{\top}))\left(E(x_{n:1}x_{n:1}^{\top})\right)^{-1}\right)\right)\\
&\leq\frac{1}{2}\ln\left(\frac{\sqrt{nN}}{nN}\left\|C(E(x_{n:1}x_{n:1}^{\top}))\left(E(x_{n:1}x_{n:1}^{\top})\right)^{-1}\right\|_F\right)=\frac{1}{2}\ln\left(\frac{1}{\sqrt{nN}}\left\|\left(C(E(x_{n:1}x_{n:1}^{\top}))-E(x_{n:1}x_{n:1}^{\top})\right)\left(E(x_{n:1}x_{n:1}^{\top})\right)^{-1}+I_{nN}\right\|_F\right)\\
&\leq\frac{1}{2}\ln\left(\frac{1}{\sqrt{nN}}\left(\left\|\left(C(E(x_{n:1}x_{n:1}^{\top}))-E(x_{n:1}x_{n:1}^{\top})\right)\left(E(x_{n:1}x_{n:1}^{\top})\right)^{-1}\right\|_F+\sqrt{nN}\right)\right)\\
&\leq\frac{1}{2}\ln\left(\frac{1}{\sqrt{nN}}\left(\left\|C(E(x_{n:1}x_{n:1}^{\top}))-E(x_{n:1}x_{n:1}^{\top})\right\|_F\left\|\left(E(x_{n:1}x_{n:1}^{\top})\right)^{-1}\right\|_2+\sqrt{nN}\right)\right)\\
&=\frac{1}{2}\ln\left(1+\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{nN}\,\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))}\right).\end{aligned}$$
In Equation (12), $R_{x_{n:1}}(D)$ is written in terms of $E(x_{n:1}x_{n:1}^{\top})$. $\widetilde{R}_{x_{n:1}}(D)$ can be written in terms of $E(x_{n:1}x_{n:1}^{\top})$ and $V_n$ by using Lemma 2 and Equations (9) and (10).
As our coding strategy requires the computation of the block DFT, its computational complexity is $\mathcal{O}(nN\log n)$ whenever the FFT algorithm is used. We recall that the computational complexity of the optimal coding strategy for $x_{n:1}$ is $\mathcal{O}(n^2N^2)$, since it requires the computation of $U_n^{\top}x_{n:1}$, where $U_n$ is a real orthogonal eigenvector matrix of $E(x_{n:1}x_{n:1}^{\top})$. Observe that such eigenvector matrix $U_n$ also needs to be computed, which further increases the complexity. Hence, the main advantage of our coding strategy is that it notably reduces the computational complexity of coding $x_{n:1}$. Moreover, our coding strategy does not require the knowledge of $E(x_{n:1}x_{n:1}^{\top})$. It only requires the knowledge of $E(\widehat{y}_k\widehat{y}_k^{\top})$, with $k\in\{\lceil\frac{n}{2}\rceil,\dots,n\}$.
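A possible FFT-based implementation of the block DFT in Equation (1) and of its inverse is sketched below (our own code in Python with NumPy; the identity $V_n^{*}c=\sqrt{n}\,\mathrm{ifft}(c)$ relies on NumPy's FFT convention and is an assumption of this sketch, not something stated in the paper):

import numpy as np

def block_dft(x_blocks):
    # y_{n:1} = (V_n kron I_N)^* x_{n:1}, one length-n FFT per vector component: O(n N log n).
    # x_blocks has shape (n, N); row 0 is x_n, row 1 is x_{n-1}, ..., row n-1 is x_1.
    n = x_blocks.shape[0]
    return np.sqrt(n) * np.fft.ifft(x_blocks, axis=0)

def block_idft(y_blocks):
    # x_{n:1} = (V_n kron I_N) y_{n:1}
    n = y_blocks.shape[0]
    return np.fft.fft(y_blocks, axis=0) / np.sqrt(n)

x = np.random.default_rng(2).standard_normal((512, 3))
assert np.allclose(block_idft(block_dft(x)), x)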
It should be mentioned that Equation (7) provides two upper bounds for the RDF of finite-length data blocks of a real zero-mean Gaussian $N$-dimensional vector source $\{x_k\}$. The greatest upper bound in Equation (7) was given in [11] for the case in which the random vector source $\{x_k\}$ is WSS, and therefore, the correlation matrix of the Gaussian vector, $E(x_{n:1}x_{n:1}^{\top})$, is a block Toeplitz matrix. Such upper bound was first presented by Pearl in [12] for the case in which the source is WSS and $N=1$. However, neither [11] nor [12] propose a coding strategy for $\{x_k\}$.
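The outer inequalities in Equations (7) and (8) are straightforward to check numerically. The sketch below (ours; Python with NumPy, reusing the C_operator function from the sketch in Section 3) verifies them for a randomly generated positive definite correlation matrix:

import numpy as np

n, N, D = 8, 2, 0.01
rng = np.random.default_rng(1)
M = rng.standard_normal((n * N, n * N))
R = M @ M.T + n * N * np.eye(n * N)                  # a positive definite correlation matrix E(x_{n:1} x_{n:1}^T)
lam_min = np.linalg.eigvalsh(R).min()
assert 0 < D <= lam_min

CR = C_operator(R, n, N)                             # C(E(x_{n:1} x_{n:1}^T)), see the sketch in Section 3
rate  = (np.linalg.slogdet(R)[1] - n * N * np.log(D)) / (2 * n * N)      # Equation (12)
bound = (np.linalg.slogdet(CR)[1] - n * N * np.log(D)) / (2 * n * N)     # rightmost bound in Equation (7)
gap   = 0.5 * np.log(1 + np.linalg.norm(R - CR, "fro") / (np.sqrt(n * N) * lam_min))  # right-hand side of (8)
assert rate <= bound + 1e-9 and bound - rate <= gap + 1e-9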
4. Optimality of the Proposed Coding Strategy for Gaussian AWSS Vector Sources
In this section (see Theorem 4), we show that our coding strategy is asymptotically optimal, i.e., we show that for large enough data blocks of a Gaussian AWSS vector source $\{x_k\}$, the rate of our coding strategy, presented in Section 3, tends to the RDF of the source.
We begin by introducing some notation. If $X:\mathbb{R}\to\mathbb{C}^{N\times N}$ is a continuous and $2\pi$-periodic matrix-valued function of a real variable, we denote by $T_n(X)$ the $n\times n$ block Toeplitz matrix with $N\times N$ blocks given by
$$T_n(X)=(X_{j-k})_{j,k=1}^{n},$$
where $\{X_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $X$:
$$X_k=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-k\omega i}X(\omega)\,d\omega,\qquad k\in\mathbb{Z}.$$
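For reference, $T_n(X)$ can be assembled directly from the Fourier coefficients; the following sketch is our own (Python with NumPy), and the coefficient values in the usage lines are hypothetical:

import numpy as np

def block_toeplitz(X_coeff, n, N):
    # T_n(X) = (X_{j-k})_{j,k=1}^n; X_coeff(m) must return the N x N Fourier coefficient X_m.
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            T[j * N:(j + 1) * N, k * N:(k + 1) * N] = X_coeff(j - k)
    return T

coeffs = {0: np.eye(2), 1: 0.5 * np.eye(2), -1: 0.5 * np.eye(2)}        # hypothetical coefficients
T4 = block_toeplitz(lambda m: coeffs.get(m, np.zeros((2, 2))), n=4, N=2)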
If $A_n$ and $B_n$ are $nN\times nN$ matrices for all $n\in\mathbb{N}$, we write $\{A_n\}\sim\{B_n\}$ when the sequences $\{A_n\}$ and $\{B_n\}$ are asymptotically equivalent (see ([13] (p. 5673))), that is, $\{\|A_n\|_2\}$ and $\{\|B_n\|_2\}$ are bounded and
$$\lim_{n\to\infty}\frac{\|A_n-B_n\|_F}{\sqrt{n}}=0.$$
The original definition of asymptotically equivalent sequences of matrices was given by Gray (see ([2] (Section 2.3)) or [4]) for $N=1$.
We now review the definition of the AWSS vector process given in ([1] (Definition 7.1)). This definition was first introduced for the scalar case $N=1$ (see ([2] (Section 6)) or [3]).
Definition 1. Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_k\}$ is said to be AWSS with asymptotic power spectral density (APSD) $X$ if it has constant mean (i.e., $E(x_{k_1})=E(x_{k_2})$ for all $k_1,k_2\in\mathbb{N}$) and $\{E(x_{n:1}x_{n:1}^{*})\}\sim\{T_n(X)\}$.
We recall that the RDF of $\{x_k\}$ is defined as $\lim_{n\to\infty}R_{x_{n:1}}(D)$.
Theorem 4. Let $\{x_k\}$ be a real zero-mean Gaussian AWSS $N$-dimensional vector process with APSD $X$. Suppose that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))>0$. If $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$, then
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.\qquad(13)$$
Proof. We divide the proof into two steps:
Step 1: We show that $\lim_{n\to\infty}R_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega$. From Equation (12), ([1] (Theorem 6.6)), and ([14] (Proposition 2)), we obtain
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\frac{1}{2nN}\ln\frac{\prod_{k=1}^{nN}\lambda_k(E(x_{n:1}x_{n:1}^{\top}))}{D^{nN}}=\lim_{n\to\infty}\frac{1}{2nN}\sum_{k=1}^{nN}\ln\frac{\lambda_k(E(x_{n:1}x_{n:1}^{\top}))}{D}$$
$$=\frac{1}{4\pi}\int_{0}^{2\pi}\frac{1}{N}\sum_{k=1}^{N}\ln\frac{\lambda_k(X(\omega))}{D}\,d\omega=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.$$
Step 2: We prove that $\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)$. Applying Equations (7) and (8), we obtain
$$0\leq\widetilde{R}_{x_{n:1}}(D)-R_{x_{n:1}}(D)\leq\frac{1}{2nN}\ln\frac{\det(C(E(x_{n:1}x_{n:1}^{\top})))}{D^{nN}}-R_{x_{n:1}}(D)\leq\frac{1}{2}\ln\left(1+\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{nN}\,\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))}\right)$$
$$\leq\frac{1}{2}\ln\left(1+\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{nN}\,\inf_{m\in\mathbb{N}}\lambda_{mN}(E(x_{m:1}x_{m:1}^{\top}))}\right)\qquad\forall n\in\mathbb{N}.\qquad(14)$$
To finish the proof, we only need to show that
$$\lim_{n\to\infty}\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{n}}=0.\qquad(15)$$
Let $C_n(X)$ be the $n\times n$ block circulant matrix with $N\times N$ blocks defined in ([13] (p. 5674)), i.e.,
$$C_n(X)=(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}\left(X\left(\frac{2\pi(k-1)}{n}\right)\right)(V_n^{*}\otimes I_N),\qquad n\in\mathbb{N}.$$
Observe that
$$C(C_n(X))=(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}([(V_n^{*}\otimes I_N)C_n(X)(V_n\otimes I_N)]_{k,k})(V_n^{*}\otimes I_N)=(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}\left(\left[\mathrm{diag}_{1\leq j\leq n}\left(X\left(\frac{2\pi(j-1)}{n}\right)\right)\right]_{k,k}\right)(V_n^{*}\otimes I_N)$$
$$=(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}\left(X\left(\frac{2\pi(k-1)}{n}\right)\right)(V_n^{*}\otimes I_N)=C_n(X),\qquad n\in\mathbb{N}.$$
Consequently, as the Frobenius norm is unitarily invariant, we have
$$\|C_n(X)-C(E(x_{n:1}x_{n:1}^{\top}))\|_F=\|C(C_n(X))-C(E(x_{n:1}x_{n:1}^{\top}))\|_F=\left\|(V_n\otimes I_N)\,\mathrm{diag}_{1\leq k\leq n}\left(\left[(V_n^{*}\otimes I_N)\left(C_n(X)-E(x_{n:1}x_{n:1}^{\top})\right)(V_n\otimes I_N)\right]_{k,k}\right)(V_n^{*}\otimes I_N)\right\|_F$$
$$=\left\|\mathrm{diag}_{1\leq k\leq n}\left(\left[(V_n^{*}\otimes I_N)\left(C_n(X)-E(x_{n:1}x_{n:1}^{\top})\right)(V_n\otimes I_N)\right]_{k,k}\right)\right\|_F\leq\left\|(V_n^{*}\otimes I_N)\left(C_n(X)-E(x_{n:1}x_{n:1}^{\top})\right)(V_n\otimes I_N)\right\|_F=\|C_n(X)-E(x_{n:1}x_{n:1}^{\top})\|_F,\qquad n\in\mathbb{N}.$$
Therefore,
$$0\leq\frac{\|E(x_{n:1}x_{n:1}^{\top})-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{n}}\leq\frac{\|E(x_{n:1}x_{n:1}^{\top})-C_n(X)\|_F}{\sqrt{n}}+\frac{\|C_n(X)-C(E(x_{n:1}x_{n:1}^{\top}))\|_F}{\sqrt{n}}$$
$$\leq 2\,\frac{\|E(x_{n:1}x_{n:1}^{\top})-C_n(X)\|_F}{\sqrt{n}}\leq 2\left(\frac{\|E(x_{n:1}x_{n:1}^{\top})-T_n(X)\|_F}{\sqrt{n}}+\frac{\|T_n(X)-C_n(X)\|_F}{\sqrt{n}}\right),\qquad n\in\mathbb{N}.\qquad(16)$$
Since $\{E(x_{n:1}x_{n:1}^{\top})\}\sim\{T_n(X)\}$, Equation (16) and ([1] (Lemma 6.1)) yield Equation (15).
Observe that the integral formula in Equation (13) provides the value of the RDF of the Gaussian AWSS vector source whenever $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$. An integral formula of such an RDF for any $D>0$ can be found in ([15] (Theorem 1)). It should be mentioned that ([15] (Theorem 1)) generalized the integral formulas previously given in the literature for the RDF of certain Gaussian AWSS sources, namely, WSS scalar sources [9], AR AWSS scalar sources [16], and AR AWSS vector sources of finite order [17].
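The integral in Equation (13) is easy to evaluate numerically once the APSD is known. The sketch below is our own (Python with NumPy), and the PSD used in the usage lines is a hypothetical example, not one of the processes studied in Section 6:

import numpy as np

def awss_rdf_limit(X, D, N, num_points=4096):
    # (1/(4*pi*N)) * integral_0^{2pi} ln(det(X(w)) / D^N) dw, via a Riemann sum (Equation (13)).
    w = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    vals = [np.linalg.slogdet(X(wi))[1] - N * np.log(D) for wi in w]
    return np.mean(vals) / (2.0 * N)

X = lambda w: np.diag([2.0 + np.cos(w), 2.0 - np.cos(w)])               # hypothetical 2 x 2 PSD
rate_limit = awss_rdf_limit(X, D=0.001, N=2)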
5. Relevant AWSS Vector Sources
WSS, MA, AR, and ARMA vector processes are frequently used to model multivariate time series (see, e.g., [18]) that arise in any domain that involves temporal measurements. In this section, we show that our coding strategy is appropriate to encode the aforementioned vector sources whenever they are Gaussian and AWSS.
It should be mentioned that Gaussian AWSS MA vector (VMA) processes, Gaussian AWSS AR vector (VAR) processes, and Gaussian AWSS ARMA vector (VARMA) processes are frequently called Gaussian stationary VMA processes, Gaussian stationary VAR processes, and Gaussian stationary VARMA processes, respectively (see, e.g., [18]). However, they are asymptotically stationary but not stationary, because their corresponding correlation matrices are not block Toeplitz.
5.1. WSS Vector Sources
In this subsection (see Theorem 5), we give conditions under which our coding strategy is
asymptotically optimal for WSS vector sources.
We first recall the well-known concept of WSS vector process.
Definition 2. Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_k\}$ is said to be WSS (or weakly stationary) with PSD $X$ if it has constant mean and $\{E(x_{n:1}x_{n:1}^{*})\}=\{T_n(X)\}$.
Theorem 5. Let $\{x_k\}$ be a real zero-mean Gaussian WSS $N$-dimensional vector process with PSD $X$. Suppose that $\min_{\omega\in[0,2\pi]}\lambda_N(X(\omega))>0$ (or equivalently, $\det(X(\omega))\neq 0$ for all $\omega\in\mathbb{R}$). If $D\in\left(0,\min_{\omega\in[0,2\pi]}\lambda_N(X(\omega))\right]$, then
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\frac{1}{4\pi N}\int_{0}^{2\pi}\ln\frac{\det(X(\omega))}{D^{N}}\,d\omega.$$
Proof. Applying ([1] (Lemma 3.3)) and ([1] (Theorem 4.3)) yields $\{E(x_{n:1}x_{n:1}^{\top})\}=\{T_n(X)\}\sim\{T_n(X)\}$. Theorem 5 now follows from ([14] (Proposition 3)) and Theorem 4.
Theorem 5 was presented in [5] for the case $N=1$ (i.e., just for WSS sources but not for vector WSS sources).
5.2. VMA Sources
In this subsection (see Theorem 6), we give conditions under which our coding strategy is
asymptotically optimal for VMA sources.
We start by reviewing the concept of VMA process.
Definition 3. A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be MA if
$$x_k=w_k+\sum_{j=1}^{k-1}G_j w_{k-j},\qquad k\in\mathbb{N},$$
where $G_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^{\top})=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exists $q\in\mathbb{N}$ such that $G_j=0_{N\times N}$ for all $j>q$, then $\{x_k\}$ is called a VMA(q) process.
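A data block of a VMA(q) process as in Definition 3 can be generated directly from the defining recursion. The sketch below is ours (Python with NumPy), and the coefficient matrices in the usage lines are hypothetical:

import numpy as np

def simulate_vma(G, Lam, n, rng=None):
    # One data block x_1, ..., x_n of the VMA(q) process of Definition 3.
    # G = [G_1, ..., G_q] (real N x N matrices), Lam = E(w_k w_k^T) (positive definite).
    rng = np.random.default_rng() if rng is None else rng
    N = Lam.shape[0]
    w = rng.standard_normal((n, N)) @ np.linalg.cholesky(Lam).T         # i.i.d. w_k with covariance Lam
    x = w.copy()
    for k in range(n):                                                  # x_{k+1} = w_{k+1} + sum_j G_j w_{k+1-j}
        for j, Gj in enumerate(G, start=1):
            if j <= k:
                x[k] += Gj @ w[k - j]
    return x

x_block = simulate_vma([np.array([[0.5, 0.2], [0.1, 0.4]])],            # hypothetical G_1
                       np.array([[1.0, 0.3], [0.3, 1.0]]), n=100)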
Theorem 6. Let $\{x_k\}$ be as in Definition 3. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{T_n(G)\}$ is stable (that is, $\{\|(T_n(G))^{-1}\|_2\}$ is bounded). If $\{x_k\}$ is Gaussian and $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$, then
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(17)$$
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^{\top}))=(\det(\Lambda))^{n}$ for all $n\in\mathbb{N}$. From ([15] (Equation (A3))), we have that $E(x_{n:1}x_{n:1}^{\top})=T_n(G)T_n(\Lambda)(T_n(G))^{*}$. Consequently,
$$\det(E(x_{n:1}x_{n:1}^{\top}))=\det(T_n(G))\det(T_n(\Lambda))\det((T_n(G))^{*})=|\det(T_n(G))|^{2}(\det(\Lambda))^{n}=(\det(\Lambda))^{n},\qquad n\in\mathbb{N}.$$
Step 2: We prove the first equality in Equation (17). Applying ([15] (Theorem 2)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))>0$. We have
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))=\frac{1}{\lambda_{1}((E(x_{n:1}x_{n:1}^{\top}))^{-1})}=\frac{1}{\|(E(x_{n:1}x_{n:1}^{\top}))^{-1}\|_2}=\frac{1}{\|(T_n(G)T_n(\Lambda)(T_n(G))^{*})^{-1}\|_2}=\frac{1}{\|((T_n(G))^{*})^{-1}T_n(\Lambda^{-1})(T_n(G))^{-1}\|_2}$$
$$\geq\frac{1}{\|((T_n(G))^{*})^{-1}\|_2\,\|T_n(\Lambda^{-1})\|_2\,\|(T_n(G))^{-1}\|_2}=\frac{1}{\|(T_n(G))^{-1}\|_2^{2}\,\lambda_1(\Lambda^{-1})}=\frac{\lambda_N(\Lambda)}{\|(T_n(G))^{-1}\|_2^{2}}\geq\frac{\lambda_N(\Lambda)}{\left(\sup_{m\in\mathbb{N}}\|(T_m(G))^{-1}\|_2\right)^{2}}>0,\qquad n\in\mathbb{N}.$$
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. Applying Equation (12) yields
$$R_{x_{n:1}}(D)=\frac{1}{2nN}\ln\frac{(\det(\Lambda))^{n}}{D^{nN}}=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}},\qquad n\in\mathbb{N}.$$
5.3. VAR AWSS Sources
In this subsection (see Theorem 7), we give conditions under which our coding strategy is
asymptotically optimal for VAR sources.
We first recall the concept of VAR process.
Definition 4. A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be AR if
$$x_k=w_k-\sum_{j=1}^{k-1}F_j x_{k-j},\qquad k\in\mathbb{N},$$
where $F_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^{\top})=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exists $p\in\mathbb{N}$ such that $F_j=0_{N\times N}$ for all $j>p$, then $\{x_k\}$ is called a VAR(p) process.
Theorem 7. Let $\{x_k\}$ be as in Definition 4. Assume that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0=I_N$ and $F_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $F:\mathbb{R}\to\mathbb{C}^{N\times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{T_n(F)\}$ is stable and $\det(F(\omega))\neq 0$ for all $\omega\in\mathbb{R}$. If $\{x_k\}$ is Gaussian and $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$, then
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(18)$$
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^{\top}))=(\det(\Lambda))^{n}$ for all $n\in\mathbb{N}$. From ([19] (Equation (19))), we have that $E(x_{n:1}x_{n:1}^{\top})=(T_n(F))^{-1}T_n(\Lambda)((T_n(F))^{*})^{-1}$. Consequently,
$$\det(E(x_{n:1}x_{n:1}^{\top}))=\frac{\det(T_n(\Lambda))}{\det(T_n(F))\det((T_n(F))^{*})}=\frac{(\det(\Lambda))^{n}}{|\det(T_n(F))|^{2}}=(\det(\Lambda))^{n},\qquad n\in\mathbb{N}.$$
Step 2: We prove the first equality in Equation (18). Applying ([15] (Theorem 3)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))>0$. Applying ([1] (Theorem 4.3)) yields
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))=\frac{1}{\|(E(x_{n:1}x_{n:1}^{\top}))^{-1}\|_2}=\frac{1}{\|((T_n(F))^{-1}T_n(\Lambda)((T_n(F))^{*})^{-1})^{-1}\|_2}\geq\frac{\lambda_N(\Lambda)}{\|T_n(F)\|_2^{2}}\geq\frac{\lambda_N(\Lambda)}{\left(\sup_{m\in\mathbb{N}}\|T_m(F)\|_2\right)^{2}}>0,\qquad n\in\mathbb{N}.$$
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. This can be directly obtained from Equation (12).
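The determinant identity used in Step 1 above is easy to verify numerically for a VAR(1) process, for which $T_n(F)$ is block lower bidiagonal. The sketch below is ours (Python with NumPy), and the matrices are hypothetical:

import numpy as np

def Tn_var1(F1, n, N):
    # T_n(F) for a VAR(1): identity blocks on the diagonal and F_1 on the first block subdiagonal.
    T = np.zeros((n * N, n * N))
    for j in range(n):
        T[j * N:(j + 1) * N, j * N:(j + 1) * N] = np.eye(N)
        if j > 0:
            T[j * N:(j + 1) * N, (j - 1) * N:j * N] = F1
    return T

F1 = np.array([[0.5, 0.1], [0.2, 0.3]])                                 # hypothetical VAR(1) coefficient
Lam = np.array([[1.0, 0.3], [0.3, 2.0]])
n, N = 20, 2
Tf = Tn_var1(F1, n, N)
Rn = np.linalg.inv(Tf) @ np.kron(np.eye(n), Lam) @ np.linalg.inv(Tf).T  # E(x_{n:1} x_{n:1}^T) as in Step 1
assert np.isclose(np.linalg.slogdet(Rn)[1], n * np.linalg.slogdet(Lam)[1])   # det = det(Lambda)^n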
Theorem 7 was presented in [6] for the case of $N=1$ (i.e., just for AR sources but not for VAR sources).
5.4. VARMA AWSS Sources
In this subsection (see Theorem 8), we give conditions under which our coding strategy is
asymptotically optimal for VARMA sources.
We start by reviewing the concept of VARMA process.
Definition 5. A real zero-mean random $N$-dimensional vector process $\{x_k\}$ is said to be ARMA if
$$x_k=w_k+\sum_{j=1}^{k-1}G_j w_{k-j}-\sum_{j=1}^{k-1}F_j x_{k-j},\qquad k\in\mathbb{N},$$
where $G_j$ and $F_j$, $j\in\mathbb{N}$, are real $N\times N$ matrices, $\{w_k\}$ is a real zero-mean random $N$-dimensional vector process, and $E(w_{k_1}w_{k_2}^{\top})=\delta_{k_1,k_2}\Lambda$ for all $k_1,k_2\in\mathbb{N}$ with $\Lambda$ being a real $N\times N$ positive definite matrix. If there exist $p,q\in\mathbb{N}$ such that $F_j=0_{N\times N}$ for all $j>p$ and $G_j=0_{N\times N}$ for all $j>q$, then $\{x_k\}$ is called a VARMA(p,q) process (or a VARMA process of (finite) order (p,q)).
Theorem 8. Let $\{x_k\}$ be as in Definition 5. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0=I_N$ and $G_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $G:\mathbb{R}\to\mathbb{C}^{N\times N}$ which is continuous and $2\pi$-periodic. Suppose that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0=I_N$ and $F_{-k}=0_{N\times N}$ for all $k\in\mathbb{N}$, is the sequence of Fourier coefficients of a function $F:\mathbb{R}\to\mathbb{C}^{N\times N}$ which is continuous and $2\pi$-periodic. Assume that $\{T_n(G)\}$ and $\{T_n(F)\}$ are stable, and $\det(F(\omega))\neq 0$ for all $\omega\in\mathbb{R}$. If $\{x_k\}$ is Gaussian and $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\right]$, then
$$\lim_{n\to\infty}R_{x_{n:1}}(D)=\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}.\qquad(19)$$
Moreover, $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$.
Proof. We divide the proof into three steps:
Step 1: We show that $\det(E(x_{n:1}x_{n:1}^{\top}))=(\det(\Lambda))^{n}$ for all $n\in\mathbb{N}$. From ([15] (Appendix D)) and ([1] (Lemma 4.2)), we have that $E(x_{n:1}x_{n:1}^{\top})=(T_n(F))^{-1}T_n(G)T_n(\Lambda)(T_n(G))^{*}((T_n(F))^{*})^{-1}$. Consequently,
$$\det(E(x_{n:1}x_{n:1}^{\top}))=\frac{|\det(T_n(G))|^{2}(\det(\Lambda))^{n}}{|\det(T_n(F))|^{2}}=(\det(\Lambda))^{n},\qquad n\in\mathbb{N}.$$
Step 2: We prove the first equality in Equation (19). Applying ([15] (Theorem 3)), we obtain that $\{x_k\}$ is AWSS. From Theorem 4, we only need to show that $\inf_{n\in\mathbb{N}}\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))>0$. Applying ([1] (Theorem 4.3)) yields
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))=\frac{1}{\|(E(x_{n:1}x_{n:1}^{\top}))^{-1}\|_2}=\frac{1}{\|((T_n(F))^{-1}T_n(G)T_n(\Lambda)(T_n(G))^{*}((T_n(F))^{*})^{-1})^{-1}\|_2}\geq\frac{\lambda_N(\Lambda)}{\|T_n(F)\|_2^{2}\,\|(T_n(G))^{-1}\|_2^{2}}\geq\frac{\lambda_N(\Lambda)}{\left(\sup_{m\in\mathbb{N}}\|T_m(F)\|_2\right)^{2}\left(\sup_{m\in\mathbb{N}}\|(T_m(G))^{-1}\|_2\right)^{2}}>0,\qquad n\in\mathbb{N}.$$
Step 3: We show that $R_{x_{n:1}}(D)=\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$ for all $n\in\mathbb{N}$. This can be directly obtained from Equation (12).
6. Numerical Examples
We first consider four AWSS vector processes, namely, we consider the zero-mean WSS vector process in ([20] (Section 4)), the VMA(1) process in ([18] (Example 2.1)), the VAR(1) process in ([18] (Example 2.3)), and the VARMA(1,1) process in ([18] (Example 3.2)). In ([20] (Section 4)), $N=2$ and the Fourier coefficients of its PSD $X$ are
$$X_0=\begin{pmatrix}2.0002 & 0.7058\\ 0.7058 & 2.0000\end{pmatrix},\quad X_1=X_{-1}^{\top}=\begin{pmatrix}0.3542 & 0.1016\\ 0.1839 & 0.2524\end{pmatrix},\quad X_2=X_{-2}^{\top}=\begin{pmatrix}0.0923 & 0.0153\\ 0.1490 & 0.0696\end{pmatrix},$$
$$X_3=X_{-3}^{\top}=\begin{pmatrix}0.1443 & 0.0904\\ 0.0602 & 0.0704\end{pmatrix},\quad X_4=X_{-4}^{\top}=\begin{pmatrix}0.0516 & 0.0603\\ 0 & 0\end{pmatrix},$$
and $X_j=0_{2\times 2}$ with $|j|>4$. In ([18] (Example 2.1)), $N=2$, $G_1$ is given by
$$\begin{pmatrix}0.8 & 0.7\\ 0.4 & 0.6\end{pmatrix},\qquad(20)$$
$G_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and
$$\Lambda=\begin{pmatrix}4 & 1\\ 1 & 2\end{pmatrix}.\qquad(21)$$
In ([18] (Example 2.3)), $N=2$, $F_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and $F_1$ and $\Lambda$ are given by Equations (20) and (21), respectively. In ([18] (Example 3.2)), $N=2$,
$$G_1=\begin{pmatrix}0.6 & 0.3\\ 0.3 & 0.6\end{pmatrix},\quad F_1=\begin{pmatrix}1.2 & 0.5\\ 0.6 & 0.3\end{pmatrix},\quad \Lambda=\begin{pmatrix}1 & 0.5\\ 0.5 & 1.25\end{pmatrix},$$
$G_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$, and $F_j=0_{2\times 2}$ for all $j\in\mathbb{N}\setminus\{1\}$.
Figures 2–5 show $R_{x_{n:1}}(D)$ and $\widetilde{R}_{x_{n:1}}(D)$ with $n\leq 100$ and $D=0.001$ for the four vector processes considered, by assuming that they are Gaussian. The figures bear evidence of the fact that the rate of our coding strategy tends to the RDF of the source.
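By Theorems 6–8, the limiting rate of the VMA(1), VAR(1), and VARMA(1,1) examples is $\frac{1}{2N}\ln\frac{\det(\Lambda)}{D^{N}}$. A quick check of those limits for $D=0.001$ (our own sketch in Python with NumPy, using the $\Lambda$ matrices listed above) gives values consistent with the levels seen in Figures 3–5:

import numpy as np

D, N = 0.001, 2
Lam_vma_var = np.array([[4.0, 1.0], [1.0, 2.0]])     # Lambda of Equation (21), used in the VMA(1) and VAR(1) examples
Lam_varma = np.array([[1.0, 0.5], [0.5, 1.25]])      # Lambda of the VARMA(1,1) example
for Lam in (Lam_vma_var, Lam_varma):
    print(np.log(np.linalg.det(Lam) / D ** N) / (2 * N))
# approximately 3.94 nats for the VMA(1)/VAR(1) examples and 3.45 nats for the VARMA(1,1) example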
Figure 2. Considered rates for the wide sense stationary (WSS) vector process in ([20] (Section 4)).
Figure 3. Considered rates for the VMA(1) process in ([18] (Example 2.1)).
Figure 4. Considered rates for the VAR(1) process in ([18] (Example 2.3)).
Figure 5. Considered rates for the VARMA(1,1) process in ([18] (Example 3.2)).
We finish with a numerical example to explore how our method performs in the presence of a perturbation. Specifically, we consider a perturbed version of the WSS vector process in ([20] (Section 4)). The correlation matrices of the perturbed process are
$$T_n(X)+\begin{pmatrix}0_{2n-2\times 2n-2} & 0_{2n-2\times 2}\\ 0_{2\times 2n-2} & I_2\end{pmatrix},\qquad n\in\mathbb{N}.$$
Figure 6. Considered rates for the perturbed WSS vector process with D = 0.001.
7. Conclusions
The computational complexity of coding finite-length data blocks of Gaussian $N$-dimensional vector sources can be reduced by using the low-complexity coding strategy presented here instead of the optimal coding strategy. Specifically, the computational complexity is reduced from $\mathcal{O}(n^2N^2)$ to $\mathcal{O}(nN\log n)$, where $n$ is the length of the data blocks. Moreover, our coding strategy is asymptotically optimal (i.e., the rate of our coding strategy tends to the RDF of the source) whenever the Gaussian vector source is AWSS and the considered data blocks are large enough. Besides being a low-complexity strategy, it does not require the knowledge of the correlation matrix of such data blocks. Furthermore, our coding strategy is appropriate to encode the most relevant Gaussian vector sources, namely, WSS, MA, AR, and ARMA vector sources.
Author Contributions: Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. All authors proved the main results and wrote the paper. All authors have read and approved the final manuscript.
Funding:
This work was supported in part by the Spanish Ministry of Economy and Competitiveness through
the CARMEN project (TEC2016-75067-C4-3-R).
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A. Proof of Lemma 1
Proof. (1)$\Rightarrow$(2) We have
$$y_k=[y_{n:1}]_{n-k+1,1}=\sum_{j=1}^{n}[V_n^{*}\otimes I_N]_{n-k+1,j}[x_{n:1}]_{j,1}=\sum_{j=1}^{n}[V_n^{*}]_{n-k+1,j}I_N[x_{n:1}]_{j,1}=\sum_{j=1}^{n}\overline{[V_n]_{j,n-k+1}}\,[x_{n:1}]_{j,1}$$
$$=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}e^{\frac{2\pi(j-1)(n-k)}{n}i}[x_{n:1}]_{j,1}=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}e^{2\pi(j-1)i}e^{-\frac{2\pi(j-1)k}{n}i}[x_{n:1}]_{j,1}=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}e^{-\frac{2\pi(j-1)k}{n}i}[x_{n:1}]_{j,1}$$
$$=\overline{\frac{1}{\sqrt{n}}\sum_{j=1}^{n}e^{\frac{2\pi(j-1)k}{n}i}[x_{n:1}]_{j,1}}=\overline{y_{n-k}}$$
for all $k\in\{1,\dots,n-1\}$, and $y_n=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}[x_{n:1}]_{j,1}\in\mathbb{R}^{N\times 1}$.
(2)$\Rightarrow$(1) Since $V_n\otimes I_N$ is a unitary matrix and
$$[V_n]_{k,n-j+1}=\frac{1}{\sqrt{n}}e^{-\frac{2\pi(k-1)(n-j)}{n}i}=\frac{1}{\sqrt{n}}e^{-2\pi(k-1)i}e^{\frac{2\pi(k-1)j}{n}i}=\overline{[V_n]_{k,j+1}}$$
for all $k\in\{1,\dots,n\}$ and $j\in\{1,\dots,n-1\}$, we conclude that
$$x_k=[x_{n:1}]_{n-k+1,1}=[(V_n\otimes I_N)y_{n:1}]_{n-k+1,1}=\sum_{j=1}^{n}[V_n]_{n-k+1,j}[y_{n:1}]_{j,1}=\sum_{j=1}^{n}[V_n]_{n-k+1,j}\,y_{n-j+1}=[V_n]_{n-k+1,1}\,y_n+\sum_{h=1}^{n-1}[V_n]_{n-k+1,n-h+1}\,y_h$$
$$=\frac{1}{\sqrt{n}}y_n+\sum_{h=1}^{\lceil\frac{n}{2}\rceil-1}\left([V_n]_{n-k+1,n-h+1}\,y_h+\overline{[V_n]_{n-k+1,n-h+1}\,y_h}\right)+\frac{1+(-1)^{n}}{2}[V_n]_{n-k+1,\lceil\frac{n}{2}\rceil+1}\,y_{\lceil\frac{n}{2}\rceil}$$
$$=\frac{1}{\sqrt{n}}y_n+\sum_{h=1}^{\lceil\frac{n}{2}\rceil-1}\left([V_n]_{n-k+1,n-h+1}\,y_h+\overline{[V_n]_{n-k+1,n-h+1}\,y_h}\right)+\frac{1+(-1)^{n}}{2}\cdot\frac{1}{\sqrt{n}}e^{-\pi(n-k)i}\,y_{\lceil\frac{n}{2}\rceil}$$
$$=\frac{1}{\sqrt{n}}y_n+2\sum_{h=1}^{\lceil\frac{n}{2}\rceil-1}\mathrm{Re}\left([V_n]_{n-k+1,n-h+1}\,y_h\right)+\frac{1+(-1)^{n}}{2}\cdot\frac{(-1)^{n-k}}{\sqrt{n}}\,y_{\lceil\frac{n}{2}\rceil}\in\mathbb{R}^{N\times 1}$$
for all $k\in\{1,\dots,n\}$.
Appendix B. Proof of Theorem 1
Proof. Fix $k\in\{1,\dots,n\}$. Let $E(x_{n:1}x_{n:1}^{*})=U\,\mathrm{diag}_{1\leq j\leq nN}(\lambda_j(E(x_{n:1}x_{n:1}^{*})))U^{-1}$ and $E(x_k x_k^{*})=W\,\mathrm{diag}_{1\leq j\leq N}(\lambda_j(E(x_k x_k^{*})))W^{-1}$ be an eigenvalue decomposition (EVD) of $E(x_{n:1}x_{n:1}^{*})$ and $E(x_k x_k^{*})$, respectively. We can assume that the eigenvector matrices $U$ and $W$ are unitary. We have
$$\begin{aligned}
\lambda_j(E(x_k x_k^{*}))&=[W^{*}E(x_k x_k^{*})W]_{j,j}=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}[E(x_k x_k^{*})]_{h,l}[W]_{l,j}=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}[E(x_{n:1}x_{n:1}^{*})]_{(n-k)N+h,(n-k)N+l}[W]_{l,j}\\
&=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}\left[U\,\mathrm{diag}_{1\leq p\leq nN}(\lambda_p(E(x_{n:1}x_{n:1}^{*})))U^{*}\right]_{(n-k)N+h,(n-k)N+l}[W]_{l,j}\\
&=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}\left(\sum_{p=1}^{nN}[U]_{(n-k)N+h,p}\,\lambda_p(E(x_{n:1}x_{n:1}^{*}))\,[U^{*}]_{p,(n-k)N+l}\right)[W]_{l,j}\\
&=\sum_{p=1}^{nN}\lambda_p(E(x_{n:1}x_{n:1}^{*}))\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\sum_{l=1}^{N}\overline{[U]_{(n-k)N+l,p}}[W]_{l,j}=\sum_{p=1}^{nN}\lambda_p(E(x_{n:1}x_{n:1}^{*}))\left|\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\right|^{2},\end{aligned}$$
and consequently,
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{*}))\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\right|^{2}\leq\lambda_j(E(x_k x_k^{*}))\leq\lambda_{1}(E(x_{n:1}x_{n:1}^{*}))\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\right|^{2}$$
for all $j\in\{1,\dots,N\}$. Therefore, since
$$\begin{aligned}
\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\right|^{2}&=\sum_{p=1}^{nN}\sum_{h=1}^{N}\overline{[W]_{h,j}}[U]_{(n-k)N+h,p}\sum_{l=1}^{N}\overline{[U]_{(n-k)N+l,p}}[W]_{l,j}\\
&=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}\sum_{p=1}^{nN}[U]_{(n-k)N+h,p}[U^{*}]_{p,(n-k)N+l}[W]_{l,j}=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}[UU^{*}]_{(n-k)N+h,(n-k)N+l}[W]_{l,j}\\
&=\sum_{h=1}^{N}[W^{*}]_{j,h}\sum_{l=1}^{N}[I_{nN}]_{(n-k)N+h,(n-k)N+l}[W]_{l,j}=\sum_{h=1}^{N}[W^{*}]_{j,h}[W]_{h,j}=[W^{*}W]_{j,j}=[I_N]_{j,j}=1,\end{aligned}$$
Equation (2) holds.
We now prove Equation (3). Let $E(y_k y_k^{*})=M\,\mathrm{diag}_{1\leq j\leq N}(\lambda_j(E(y_k y_k^{*})))M^{-1}$ be an EVD of $E(y_k y_k^{*})$, where $M$ is unitary. We have
$$\begin{aligned}
\lambda_j(E(y_k y_k^{*}))&=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}[E(y_{n:1}y_{n:1}^{*})]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}\left[E\left((V_n^{*}\otimes I_N)x_{n:1}x_{n:1}^{*}(V_n\otimes I_N)\right)\right]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}\\
&=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}\left[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{*})(V_n\otimes I_N)\right]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}\\
&=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}\left[(V_n^{*}\otimes I_N)U\,\mathrm{diag}_{1\leq p\leq nN}(\lambda_p(E(x_{n:1}x_{n:1}^{*})))\left((V_n^{*}\otimes I_N)U\right)^{*}\right]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}\\
&=\sum_{p=1}^{nN}\lambda_p(E(x_{n:1}x_{n:1}^{*}))\left|\sum_{h=1}^{N}\overline{[M]_{h,j}}\left[(V_n^{*}\otimes I_N)U\right]_{(n-k)N+h,p}\right|^{2},\end{aligned}$$
and thus,
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{*}))\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[M]_{h,j}}\left[(V_n^{*}\otimes I_N)U\right]_{(n-k)N+h,p}\right|^{2}\leq\lambda_j(E(y_k y_k^{*}))\leq\lambda_{1}(E(x_{n:1}x_{n:1}^{*}))\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[M]_{h,j}}\left[(V_n^{*}\otimes I_N)U\right]_{(n-k)N+h,p}\right|^{2}$$
for all $j\in\{1,\dots,N\}$. Hence, as
$$\begin{aligned}
\sum_{p=1}^{nN}\left|\sum_{h=1}^{N}\overline{[M]_{h,j}}\left[(V_n^{*}\otimes I_N)U\right]_{(n-k)N+h,p}\right|^{2}&=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}\left[(V_n^{*}\otimes I_N)U\left((V_n^{*}\otimes I_N)U\right)^{*}\right]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}\\
&=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}\left[(V_n^{*}\otimes I_N)I_{nN}(V_n\otimes I_N)\right]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}=\sum_{h=1}^{N}[M^{*}]_{j,h}\sum_{l=1}^{N}[I_{nN}]_{(n-k)N+h,(n-k)N+l}[M]_{l,j}=1,\end{aligned}$$
Equation (3) holds.
Appendix C. Proof of Theorem 2
Proof. Fix $k\in\{1,\dots,n-1\}\setminus\{\frac{n}{2}\}$. Since
$$y_k=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}e^{-\frac{2\pi(j-1)k}{n}i}[x_{n:1}]_{j,1}=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\left(\cos\frac{2\pi(1-j)k}{n}+i\,\sin\frac{2\pi(1-j)k}{n}\right)x_{n-j+1},$$
we obtain
$$E(\widehat{y}_k\widehat{y}_k^{\top})=E\left(\begin{pmatrix}\mathrm{Re}(y_k)\\ \mathrm{Im}(y_k)\end{pmatrix}\left((\mathrm{Re}(y_k))^{\top}\;(\mathrm{Im}(y_k))^{\top}\right)\right)=\begin{pmatrix}E(\mathrm{Re}(y_k)(\mathrm{Re}(y_k))^{\top}) & E(\mathrm{Re}(y_k)(\mathrm{Im}(y_k))^{\top})\\ E(\mathrm{Im}(y_k)(\mathrm{Re}(y_k))^{\top}) & E(\mathrm{Im}(y_k)(\mathrm{Im}(y_k))^{\top})\end{pmatrix}$$
$$=\frac{1}{n}\sum_{j_1,j_2=1}^{n}\begin{pmatrix}\cos\frac{2\pi(1-j_1)k}{n}\cos\frac{2\pi(1-j_2)k}{n}\,E(x_{n-j_1+1}x_{n-j_2+1}^{\top}) & \cos\frac{2\pi(1-j_1)k}{n}\sin\frac{2\pi(1-j_2)k}{n}\,E(x_{n-j_1+1}x_{n-j_2+1}^{\top})\\ \sin\frac{2\pi(1-j_1)k}{n}\cos\frac{2\pi(1-j_2)k}{n}\,E(x_{n-j_1+1}x_{n-j_2+1}^{\top}) & \sin\frac{2\pi(1-j_1)k}{n}\sin\frac{2\pi(1-j_2)k}{n}\,E(x_{n-j_1+1}x_{n-j_2+1}^{\top})\end{pmatrix}$$
$$=\frac{1}{n}\sum_{j_1,j_2=1}^{n}A_{j_1}^{\top}E(x_{n-j_1+1}x_{n-j_2+1}^{\top})A_{j_2},$$
where $A_j=\left(\cos\frac{2\pi(1-j)k}{n}\,I_N\;\;\sin\frac{2\pi(1-j)k}{n}\,I_N\right)$ with $j\in\{1,\dots,n\}$. Fix $r\in\{1,\dots,2N\}$, and consider a real eigenvector $v$ corresponding to $\lambda_r(E(\widehat{y}_k\widehat{y}_k^{\top}))$ with $v^{\top}v=1$. Let $E(x_{n:1}x_{n:1}^{\top})=U\,\mathrm{diag}_{1\leq j\leq nN}(\lambda_j(E(x_{n:1}x_{n:1}^{\top})))U^{-1}$ be an EVD of $E(x_{n:1}x_{n:1}^{\top})$, where $U$ is real and orthogonal. Then
$$\begin{aligned}
\lambda_r(E(\widehat{y}_k\widehat{y}_k^{\top}))&=\lambda_r(E(\widehat{y}_k\widehat{y}_k^{\top}))\,v^{\top}v=v^{\top}\lambda_r(E(\widehat{y}_k\widehat{y}_k^{\top}))v=v^{\top}E(\widehat{y}_k\widehat{y}_k^{\top})v\\
&=\frac{1}{n}\sum_{j_1,j_2=1}^{n}v^{\top}A_{j_1}^{\top}E(x_{n-j_1+1}x_{n-j_2+1}^{\top})A_{j_2}v=\frac{1}{n}\sum_{j_1,j_2=1}^{n}v^{\top}A_{j_1}^{\top}\left[E(x_{n:1}x_{n:1}^{\top})\right]_{j_1,j_2}A_{j_2}v=\frac{1}{n}\sum_{j_1,j_2=1}^{n}v^{\top}A_{j_1}^{\top}e_{j_1}^{\top}E(x_{n:1}x_{n:1}^{\top})e_{j_2}A_{j_2}v\\
&=\frac{1}{n}\sum_{j_1,j_2=1}^{n}v^{\top}A_{j_1}^{\top}e_{j_1}^{\top}U\,\mathrm{diag}_{1\leq p\leq nN}(\lambda_p(E(x_{n:1}x_{n:1}^{\top})))U^{\top}e_{j_2}A_{j_2}v=\frac{1}{n}\sum_{j_1=1}^{n}v^{\top}A_{j_1}^{\top}e_{j_1}^{\top}U\,\mathrm{diag}_{1\leq p\leq nN}(\lambda_p(E(x_{n:1}x_{n:1}^{\top})))\sum_{j_2=1}^{n}U^{\top}e_{j_2}A_{j_2}v\\
&=\frac{1}{n}\left[B^{\top}\mathrm{diag}_{1\leq p\leq nN}(\lambda_p(E(x_{n:1}x_{n:1}^{\top})))B\right]_{1,1}=\frac{1}{n}\sum_{p=1}^{nN}\left[B^{\top}\right]_{1,p}\lambda_p(E(x_{n:1}x_{n:1}^{\top}))[B]_{p,1}=\frac{1}{n}\sum_{p=1}^{nN}\lambda_p(E(x_{n:1}x_{n:1}^{\top}))[B]_{p,1}^{2},\end{aligned}$$
where $e_l\in\mathbb{C}^{nN\times N}$ with $[e_l]_{j,1}=\delta_{j,l}I_N$ for all $j,l\in\{1,\dots,n\}$ and $B=\sum_{j=1}^{n}U^{\top}e_j A_j v$. Consequently,
$$\lambda_{nN}(E(x_{n:1}x_{n:1}^{\top}))\,\frac{1}{n}\sum_{p=1}^{nN}[B]_{p,1}^{2}\leq\lambda_r(E(\widehat{y}_k\widehat{y}_k^{\top}))\leq\lambda_{1}(E(x_{n:1}x_{n:1}^{\top}))\,\frac{1}{n}\sum_{p=1}^{nN}[B]_{p,1}^{2}.$$
Therefore, to finish the proof we only need to show that $\frac{1}{n}\sum_{p=1}^{nN}[B]_{p,1}^{2}=\frac{1}{2}$. Applying ([5] (Equations (14) and (15))) yields
$$\begin{aligned}
\frac{1}{n}\sum_{p=1}^{nN}[B]_{p,1}^{2}&=\frac{1}{n}\sum_{p=1}^{nN}\left[B^{\top}\right]_{1,p}[B]_{p,1}=\frac{1}{n}B^{\top}B=\frac{1}{n}\left(\sum_{j_1=1}^{n}U^{\top}e_{j_1}A_{j_1}v\right)^{\top}\left(\sum_{j_2=1}^{n}U^{\top}e_{j_2}A_{j_2}v\right)=\frac{1}{n}\sum_{j_1,j_2=1}^{n}v^{\top}A_{j_1}^{\top}e_{j_1}^{\top}e_{j_2}A_{j_2}v\\
&=\frac{1}{n}\sum_{j=1}^{n}v^{\top}A_{j}^{\top}A_{j}v=\frac{1}{n}\sum_{j=1}^{n}(A_j v)^{\top}(A_j v)=\frac{1}{n}\sum_{j=1}^{n}\sum_{s=1}^{N}[A_j v]_{s,1}^{2}=\frac{1}{n}\sum_{j=1}^{n}\sum_{s=1}^{N}\left(\cos\frac{2\pi(1-j)k}{n}[v]_{s,1}+\sin\frac{2\pi(1-j)k}{n}[v]_{N+s,1}\right)^{2}\\
&=\frac{1}{n}\sum_{s=1}^{N}\sum_{j=1}^{n}\left(\left(\cos\frac{2\pi(1-j)k}{n}\right)^{2}[v]_{s,1}^{2}+\left(\sin\frac{2\pi(1-j)k}{n}\right)^{2}[v]_{N+s,1}^{2}+2\cos\frac{2\pi(1-j)k}{n}\sin\frac{2\pi(1-j)k}{n}[v]_{s,1}[v]_{N+s,1}\right)\\
&=\sum_{s=1}^{N}\left([v]_{s,1}^{2}\,\frac{1}{n}\sum_{j=1}^{n}\left(\cos\frac{2\pi(1-j)k}{n}\right)^{2}+[v]_{N+s,1}^{2}\,\frac{1}{n}\sum_{j=1}^{n}\left(\sin\frac{2\pi(1-j)k}{n}\right)^{2}+[v]_{s,1}[v]_{N+s,1}\,\frac{1}{n}\sum_{j=1}^{n}2\sin\frac{2\pi(1-j)k}{n}\cos\frac{2\pi(1-j)k}{n}\right)\\
&=\sum_{s=1}^{N}\left([v]_{s,1}^{2}\,\frac{1}{n}\sum_{j=1}^{n}\left(1-\left(\sin\frac{2\pi(1-j)k}{n}\right)^{2}\right)+\frac{[v]_{N+s,1}^{2}}{2}+[v]_{s,1}[v]_{N+s,1}\,\frac{1}{n}\sum_{j=1}^{n}\sin\frac{4\pi(1-j)k}{n}\right)\\
&=\sum_{s=1}^{N}\left([v]_{s,1}^{2}\left(1-\frac{1}{n}\sum_{j=1}^{n}\left(\sin\frac{2\pi(1-j)k}{n}\right)^{2}\right)+\frac{[v]_{N+s,1}^{2}}{2}-[v]_{s,1}[v]_{N+s,1}\,\frac{1}{n}\sum_{j=1}^{n}\sin\frac{4\pi(j-1)k}{n}\right)\\
&=\sum_{s=1}^{N}\left(\frac{[v]_{s,1}^{2}}{2}+\frac{[v]_{N+s,1}^{2}}{2}-[v]_{s,1}[v]_{N+s,1}\,\frac{1}{n}\sum_{j=1}^{n}\mathrm{Im}\left(e^{\frac{4\pi(j-1)k}{n}i}\right)\right)=\sum_{s=1}^{N}\left(\frac{[v]_{s,1}^{2}}{2}+\frac{[v]_{N+s,1}^{2}}{2}-[v]_{s,1}[v]_{N+s,1}\,\frac{1}{n}\mathrm{Im}\left(\sum_{j=1}^{n}e^{\frac{4\pi(j-1)k}{n}i}\right)\right)\\
&=\sum_{s=1}^{N}\left(\frac{[v]_{s,1}^{2}}{2}+\frac{[v]_{N+s,1}^{2}}{2}\right)=\frac{1}{2}\sum_{h=1}^{2N}[v]_{h,1}^{2}=\frac{1}{2}v^{\top}v=\frac{1}{2}.\end{aligned}$$
Appendix D. Proof of Lemma 2
Proof. (1) $E(y_k y_k^{*})=[E(y_{n:1}y_{n:1}^{*})]_{n-k+1,n-k+1}=[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{*})(V_n\otimes I_N)]_{n-k+1,n-k+1}$.
(2) $E(y_k y_k^{\top})=[E(y_{n:1}y_{n:1}^{\top})]_{n-k+1,n-k+1}=[(V_n^{*}\otimes I_N)E(x_{n:1}x_{n:1}^{\top})(V_n^{*}\otimes I_N)^{\top}]_{n-k+1,n-k+1}$.
(3) We have
$$E(y_k y_k^{*})=E\left((\mathrm{Re}(y_k)+i\,\mathrm{Im}(y_k))((\mathrm{Re}(y_k))^{\top}-i(\mathrm{Im}(y_k))^{\top})\right)=E(\mathrm{Re}(y_k)(\mathrm{Re}(y_k))^{\top})+E(\mathrm{Im}(y_k)(\mathrm{Im}(y_k))^{\top})+i\left(E(\mathrm{Im}(y_k)(\mathrm{Re}(y_k))^{\top})-E(\mathrm{Re}(y_k)(\mathrm{Im}(y_k))^{\top})\right)\qquad(A1)$$
and
$$E(y_k y_k^{\top})=E\left((\mathrm{Re}(y_k)+i\,\mathrm{Im}(y_k))((\mathrm{Re}(y_k))^{\top}+i(\mathrm{Im}(y_k))^{\top})\right)=E(\mathrm{Re}(y_k)(\mathrm{Re}(y_k))^{\top})-E(\mathrm{Im}(y_k)(\mathrm{Im}(y_k))^{\top})+i\left(E(\mathrm{Im}(y_k)(\mathrm{Re}(y_k))^{\top})+E(\mathrm{Re}(y_k)(\mathrm{Im}(y_k))^{\top})\right).\qquad(A2)$$
As
$$E(\widehat{y}_k\widehat{y}_k^{\top})=\begin{pmatrix}E(\mathrm{Re}(y_k)(\mathrm{Re}(y_k))^{\top}) & E(\mathrm{Re}(y_k)(\mathrm{Im}(y_k))^{\top})\\ E(\mathrm{Im}(y_k)(\mathrm{Re}(y_k))^{\top}) & E(\mathrm{Im}(y_k)(\mathrm{Im}(y_k))^{\top})\end{pmatrix},$$
assertion (3) follows directly from Equations (A1) and (A2).
References
1. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory 2011, 8, 179–257.
2. Gray, R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory 2006, 2, 155–239.
3. Ephraim, Y.; Lev-Ari, H.; Gray, R.M. Asymptotic minimum discrimination information measure for asymptotically weakly stationary processes. IEEE Trans. Inf. Theory 1988, 34, 1033–1040.
4. Gray, R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans. Inf. Theory 1972, IT-18, 725–730.
5. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X. Upper bounds for the rate distortion function of finite-length data blocks of Gaussian WSS sources. Entropy 2017, 19, 554.
6. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Villar-Rosety, F.M.; Insausti, X. Rate-distortion function upper bounds for Gaussian vectors and their applications in coding AR sources. Entropy 2018, 20, 399.
7. Viswanathan, H.; Berger, T. The quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 1997, 43, 1549–1559.
8. Torezzan, C.; Panek, L.; Firer, M. A low complexity coding and decoding strategy for the quadratic Gaussian CEO problem. J. Frankl. Inst. 2016, 353, 643–656.
9. Kolmogorov, A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory 1956, IT-2, 102–108.
10. Neeser, F.D.; Massey, J.L. Proper complex random processes with applications to information theory. IEEE Trans. Inf. Theory 1993, 39, 1293–1302.
11. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X.; Hogstad, B.O. On the complexity reduction of coding WSS vector processes by using a sequence of block circulant matrices. Entropy 2017, 19, 95.
12. Pearl, J. On coding and filtering stationary signals by discrete Fourier transforms. IEEE Trans. Inf. Theory 1973, 19, 229–232.
13. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and Hermitian block Toeplitz matrices with continuous symbols: Applications to MIMO systems. IEEE Trans. Inf. Theory 2008, 54, 5671–5680.
14. Gutiérrez-Gutiérrez, J. A modified version of the Pisarenko method to estimate the power spectral density of any asymptotically wide sense stationary vector process. Appl. Math. Comput. 2019, 362, 124526.
15. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Crespo, P.M.; Insausti, X. Rate distortion function of Gaussian asymptotically WSS vector processes. Entropy 2018, 20, 719.
16. Gray, R.M. Information rates of autoregressive processes. IEEE Trans. Inf. Theory 1970, IT-16, 412–421.
17. Toms, W.; Berger, T. Information rates of stochastically driven dynamic systems. IEEE Trans. Inf. Theory 1971, 17, 113–114.
18. Reinsel, G.C. Elements of Multivariate Time Series Analysis; Springer: Berlin/Heidelberg, Germany, 1993.
19. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory 2011, 57, 5444–5454.
20. Gutiérrez-Gutiérrez, J.; Iglesias, I.; Podhorski, A. Geometric MMSE for one-sided and two-sided vector linear predictors: From the finite-length case to the infinite-length case. Signal Process. 2011, 91, 2237–2245.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).