IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 59, NO. 12, DECEMBER 2011
Spherically Invariant Vector Random
Fields in Space and Time
Juan Du and Chunsheng Ma
Abstract—This paper is concerned with spherically invariant
or elliptically contoured vector random fields in space and/or
time, which are formulated as scale mixtures of vector Gaussian
random fields. While a spherically invariant vector random field
may or may not have second-order moments, a spherically in-
variant second-order vector random field is determined by its
mean and covariance matrix functions, just like the Gaussian
one. This paper explores basic properties of spherically invariant second-order vector random fields, and proposes an efficient approach to developing covariance matrix functions for such vector random fields.
Index Terms—Covariance matrix function, cross covariance,
direct covariance, elliptically contoured random field, Gaussian
random field, spherically invariant stochastic process, variogram.
I. INTRODUCTION

STOCHASTIC processes and random fields are often used in science and engineering to model temporal, spatial, or spatio-temporal uncertainty in various geophysical, informational, and environmental systems, to describe observed variabilities, to analyze experimental or observational data, and to predict future or neighborhood values. An example in engineering is the study of the simultaneous behavior over time or space of current and voltage, or of pressure, temperature, and volume. An example in atmospheric science is the description of climate change and the prediction of future weather, based on hourly or daily weather data recorded at various physical stations with measurements such as temperature, wind speed, wind direction, precipitation, and so on, which may be modelled using (deterministic or stochastic) partial differential equations or random fields in space and/or time. The Gaussian process or field, whether univariate or multivariate, is one of the most popularly developed and widely used stochastic processes or random fields, mostly due to the relative ease with which the Gaussian case can be studied analytically. However, in many cases of interest Gaussian models become inadequate, as both experimental evidence and theoretical considerations suggest that non-Gaussian models are needed.
Manuscript received December 25, 2010; revised May 25, 2011; accepted
August 15, 2011. Date of publication August 30, 2011; date of current version
November 16, 2011. The associate editor coordinating the review of this man-
uscript and approving it for publication was Prof. David Love. This work was
supported in part by the U.S. Department of Energy by Grant DE-SC0005359,
by the Kansas NSF EPSCoR by Grant EPS0903806, and by a Kansas Tech-
nology Enterprise Corporation grant.
J. Du is with the Department of Statistics, Kansas State University, Man-
hattan, KS 66506 USA (e-mail: email@example.com).
C. Ma is with the Department of Mathematics and Statistics, Wichita State
University, Wichita, KS 67260 USA (e-mail: firstname.lastname@example.org).
Digital Object Identifier 10.1109/TSP.2011.2166391
Indeed, the non-Gaussianity cannot be neglected, and there are often specific reasons for assuming particular non-Gaussian finite-dimensional distributions. For example, "one of the main problems in statistical analysis of multi-component images and multidimensional signals is the choice of relevant statistical parametric laws," and the electromagnetic environment encountered by receiver systems is often non-Gaussian in nature. Another practical need is to deal with multivariate data measurements through statistical modeling and stochastic simulation, which calls for the development of vector random fields
with various properties for both theoretical study and practical
use. In addition, a complex stochastic process, whose real and imaginary parts require a specified cross relationship, may be treated as a real bivariate stochastic process.
These practical or theoretical demands motivate us to investi-
gate a particular class of real vector (multivariate, or multiple)
random fields in this paper, called spherically invariant vector
random fields or elliptically contoured vector random fields.
In the univariate case, the spherically invariant or compound Gaussian stochastic process on the real line was introduced by Vershik, who investigated a class of stochastic processes sharing some characteristic properties of Gaussian processes; its properties were studied subsequently, and elliptically contoured random fields in space and time have been investigated recently. The theory and application of spherically invariant random processes have been extensively reviewed, and applications of spherically invariant or compound Gaussian stochastic processes to the modeling of signals, interference, or noise may also be found in the literature. For properties and applications of multivariate spherically invariant or elliptically contoured distributions, see the references, among others.
Throughout, we consider an m-variate random field {Z(t), t ∈ D}, or a family of random vectors Z(t) = (Z_1(t), ..., Z_m(t))' on the same probability space, where the index domain D may be a spatial, temporal, or spatio-temporal set and ' denotes the transpose of a vector or matrix. This vector random field is called a second-order random field if all of its components have second-order moments. Under such assumptions, its mean (or expectation) function E Z(t), t ∈ D, is well-defined. So is its covariance matrix (function)

C(t1, t2) = E[(Z(t1) − E Z(t1))(Z(t2) − E Z(t2))'],  t1, t2 ∈ D.
1053-587X/$26.00 © 2011 IEEE
The diagonal entries C_ii(t1, t2), i = 1, ..., m, are called the direct covariance functions, and the off-diagonal entries C_ij(t1, t2), i ≠ j, are called the cross covariance functions. It is easy to see that the covariance matrix function C(t1, t2) possesses the following properties:
(i) C(t1, t2) = C'(t2, t1) for all t1, t2 ∈ D;
(ii) C(t, t) is positive definite for each t ∈ D;
(iii) the inequality

sum_{i=1}^{n} sum_{j=1}^{n} a_i' C(t_i, t_j) a_j ≥ 0    (1)

holds for every natural number n, any t_1, ..., t_n ∈ D, and any a_1, ..., a_n ∈ R^m.
Conversely, given an m × m matrix function with these properties, one can always find an m-variate Gaussian random field with mean zero and with the given matrix function as its covariance matrix function. Nevertheless, for a given matrix function C(t1, t2), it is often hard to verify inequality (1) directly. Alternatively, when C(t1, t2) depends only on the lag t1 − t2 and the Fourier transforms of its entries exist, a necessary and sufficient condition for C(t1 − t2) to be the covariance matrix function of a stationary vector Gaussian random field is that the Fourier transform matrix is positive definite. This result is named the Cramér-Kolmogorov theorem.
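As a numerical aside, inequality (1) can be spot-checked for a candidate matrix function by assembling the nm × nm block matrix (C(t_i, t_j)) over a finite set of points and inspecting its smallest eigenvalue. A minimal Python sketch, using a hypothetical separable model C(t1, t2) = exp(−|t1 − t2|) A with A a fixed positive definite 2×2 matrix (an illustration, not a model from this paper):

```python
import numpy as np

def block_cov(C, ts):
    # Assemble the nm x nm block matrix [C(t_i, t_j)]_{i,j} over the points ts.
    return np.block([[C(ti, tj) for tj in ts] for ti in ts])

# Hypothetical separable bivariate model: C(t1, t2) = exp(-|t1 - t2|) * A.
A = np.array([[1.0, 0.6],
              [0.6, 1.0]])           # fixed positive definite matrix
C = lambda t1, t2: np.exp(-abs(t1 - t2)) * A

ts = np.linspace(0.0, 5.0, 8)        # n = 8 time points, m = 2 components
K = block_cov(C, ts)
eig_min = np.linalg.eigvalsh(K).min()
print(K.shape, eig_min >= -1e-10)    # (16, 16) True: inequality (1) holds here
```

The check is only over finitely many points, of course; the theorems below give conditions under which (1) holds everywhere.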
Stationarity or homogeneity is typically a simplifying assumption in practice. Let the index domain D be a group. A vector random field {Z(t), t ∈ D} is said to be (weakly, or second-order) stationary or homogeneous if its mean function is a constant vector and its covariance matrix function C(t1, t2) depends only on the lag t1 − t2. In such a case, we write C(t1 − t2) instead of C(t1, t2) and call it a stationary covariance matrix function.
This paper is organized as follows. In Section II we define a spherically invariant vector random field as a scale mixture of vector Gaussian random fields and explore its basic properties. Section III provides an efficient method for constructing covariance matrix functions. The method contains three ingredients: a conditionally negative definite matrix, a completely monotone function, and a univariate variogram; it is comparable to that in the univariate case, where only the last two ingredients are involved. Examples in Section III illustrate how naturally a conditionally negative definite matrix gets involved in the vector case. Section IV employs this method to derive several covariance structures, whose general forms are nonstationary but contain the stationary case as a special case. The proofs of Theorems 2-5 are given in Section V. Finally, Section VI concludes the paper with some remarks.
II. SPHERICALLY INVARIANT VECTOR RANDOM FIELDS
This section presents the definition of the spherically in-
variant vector random field and its basic properties. Our
definition of a spherically invariant vector random field is
simply a scale mixture of a vector Gaussian random field, or the
product of a vector Gaussian random field and a nonnegative
scalar random variable, like the univariate case , . A
more general format is given in Section VI.
We call an m-variate random field {Z(t), t ∈ D} a spherically invariant or elliptically contoured random field if it takes the form

Z(t) = V G(t) + μ(t),  t ∈ D,    (2)

where {G(t), t ∈ D} is an m-variate Gaussian random field with mean zero and covariance matrix function C(t1, t2), V is a nonnegative random variable independent of {G(t), t ∈ D}, and μ(t) is a deterministic vector-valued function.
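For intuition, a realization of such a scale mixture can be simulated directly; a minimal univariate sketch in Python (the exponential covariance, zero mean, and Gamma mixing variable are illustrative choices, not prescribed by the paper):

```python
import numpy as np

def sample_spherically_invariant(ts, mu, cov, sample_v, rng):
    # One realization of Z(t) = V * G(t) + mu(t): G is a zero-mean Gaussian
    # process with covariance function cov, and V >= 0 is drawn independently.
    K = np.array([[cov(a, b) for b in ts] for a in ts])
    G = rng.multivariate_normal(np.zeros(len(ts)), K)
    V = sample_v(rng)
    return V * G + np.array([mu(t) for t in ts])

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 50)
z = sample_spherically_invariant(
    ts,
    mu=lambda t: 0.0,                              # zero mean function
    cov=lambda a, b: np.exp(-3.0 * abs(a - b)),    # exponential covariance
    sample_v=lambda rng: rng.gamma(2.0, 0.5),      # nonnegative mixing variable
    rng=rng,
)
print(z.shape)  # (50,)
```

Conditional on V = v, the path is Gaussian with covariance scaled by v^2; the mixture over V produces the non-Gaussian finite-dimensional distributions described next.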
The finite-dimensional distribution laws of an m-variate spherically invariant random field {Z(t), t ∈ D} defined by (2) can be better described through its finite-dimensional characteristic functions. In fact, for every natural number n and any t_1, ..., t_n ∈ D, the characteristic function of (Z'(t_1), ..., Z'(t_n))' is

E exp( i sum_{k=1}^n θ_k' Z(t_k) )
  = exp( i sum_{k=1}^n θ_k' μ(t_k) ) ∫_0^∞ exp( −(v²/2) sum_{j=1}^n sum_{k=1}^n θ_j' C(t_j, t_k) θ_k ) dF(v),

where i is the imaginary unit and F is the distribution function of V, noticing that, conditional on V = v, sum_{k=1}^n θ_k' Z(t_k) is a normal random variable with mean sum_{k=1}^n θ_k' μ(t_k) and variance v² sum_{j=1}^n sum_{k=1}^n θ_j' C(t_j, t_k) θ_k. The joint density of (Z'(t_1), ..., Z'(t_n))', although it is less visible than the characteristic function, exists if, for example, V is a Bernoulli-type random variable taking two positive values. Alternatively, one may derive the characteristic function of the random matrix (Z(t_1), ..., Z(t_n)). Whenever it exists, a representation theorem for the explicit form of the joint density may be derived, just like Theorem 2.2 of the corresponding univariate treatment.
Clearly, the existence of the moments of (2) depends heavily on that of the mixing variable V. If V possesses a second-order moment, then the vector random field (2) has second-order moments: μ(t) is actually its mean function, and its direct and cross covariances are those of G(t) scaled by E V². Hence, once a mixing distribution is prespecified, the finite-dimensional distributions of a second-order vector random field (2) are completely characterized by its mean and covariance matrix functions. On the other hand, given an m × m matrix function that satisfies inequality (1), by choosing a nondegenerate mixing variable V in (2) with unit second moment, one obtains an m-variate spherically invariant random field with the given matrix function as its covariance matrix function. As a result, the mean vector and covariance matrix are dominant in the construction of second-order spherically invariant vector random fields, just like the Gaussian one.
Next we give an example of (2) by selecting a particular mixing variable V, and derive the corresponding finite-dimensional characteristic functions. Other examples for the univariate case may be found in the literature, and may be extended to the multivariate case similarly.
1) Example 1: In (2), take V to be the reciprocal of the square root of a Gamma random variable whose shape parameter is a constant greater than 1. The resulting finite-dimensional characteristic functions involve the modified Bessel function of the second kind, or Macdonald's function, of an order determined by the shape parameter. In the particular case where that order equals a natural number plus a half, the characteristic function of the random vector takes a closed form, and we obtain a vector Student's t random field, assuming that the relevant moments exist.
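Example 1's Gamma mixing can be illustrated numerically: taking V = sqrt(ν/W) with W a chi-square (Gamma) variable with ν degrees of freedom makes V·G multivariate Student's t with ν degrees of freedom, whose covariance is ν/(ν − 2) times that of G when ν > 2. A hedged Python check (this parameterization is an assumption for illustration, not the exact one in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
nu = 5.0                                    # degrees of freedom, nu > 2
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])              # covariance matrix of G

n = 200_000
G = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
W = rng.chisquare(nu, size=n)               # Gamma mixing: W ~ chi^2_nu
Z = np.sqrt(nu / W)[:, None] * G            # Z = V * G with V = sqrt(nu / W)

# Multivariate Student's t: second moments exist only for nu > 2, and
# cov(Z) = nu / (nu - 2) * Sigma (heavier tails than the Gaussian G).
print(np.allclose(np.cov(Z.T), nu / (nu - 2) * Sigma, atol=0.1))  # True
```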
It is known that if a spherically invariant random process is stationary and ergodic, then it is a Gaussian random process. Such a property holds for spherically invariant vector random fields as well. A nonergodic spherically invariant random field may indeed be useful for the modeling of certain physical phenomena, as discussed in the literature.
In what follows, A ∘ B denotes the Hadamard product of two matrices A and B of the same size, which is the entrywise product of A and B, and 1 denotes the matrix of the same size with all entries equal to 1. Next we present a couple of important lemmas, which are of interest in their own right and will be used to prove Theorems 2–5 in Section V, and whose proofs are omitted since they are similar to those in the univariate case.
Lemma 1: If C_1(t1, t2) and C_2(t1, t2) are m × m covariance matrix functions, and A_1 and A_2 are positive definite m × m matrices, then there is an m-variate spherically invariant random field with covariance matrix function A_1 ∘ C_1(t1, t2) + A_2 ∘ C_2(t1, t2). In the particular case where A_1 = a_1 1 and A_2 = a_2 1, with a_1 and a_2 nonnegative constants and all entries of the matrix 1 equal to 1, we obtain that a_1 C_1(t1, t2) + a_2 C_2(t1, t2) is also a covariance matrix function. In other words, the set of covariance matrix functions is a convex set.
Lemma 2: If C(t1, t2; u) is the covariance matrix function of an m-variate spherically invariant random field for every u, and g(u) is a nonnegative function, then there is an m-variate spherically invariant random field with direct and cross covariances ∫ C_ij(t1, t2; u) g(u) du, assuming that the above integrals exist.
As we have seen in the last section, the mean and covariance matrix functions are dominant in the construction of second-order spherically invariant vector random fields. Next we concentrate on constructing matrix functions that satisfy inequality (1), which can then serve as covariance matrix functions of zero-mean spherically invariant vector random fields. The basic approach we employ in this paper to formulate covariance matrix functions for spherically invariant vector random fields is based on three ingredients: a conditionally negative definite matrix, a completely monotone function on [0, ∞), and a univariate variogram. These concepts are briefly reviewed in this section. A univariate variogram can be defined through conditionally negative definite matrices. In particular, this section gives several examples to illustrate how this kind of matrix is involved in the development of covariance matrix functions of spherically invariant random fields.
Let n be an integer greater than 1. An n × n symmetric matrix B = (b_ij) is said to be conditionally negative definite if

sum_{i=1}^n sum_{j=1}^n a_i a_j b_ij ≤ 0    (3)

holds for any real numbers a_1, ..., a_n with sum_{i=1}^n a_i = 0. As a simple example, a symmetric matrix with identical entries is conditionally negative definite. Clearly, for a 2 × 2 symmetric matrix, (3) holds if and only if 2 b_12 ≥ b_11 + b_22. In general, (3) implies b_ij ≥ (b_ii + b_jj)/2 for i ≠ j. Consequently, all entries of a conditionally negative definite matrix are nonnegative once its diagonal entries are nonnegative. The following theorem, which is known, is kept here for our later reference.
Theorem 1: For an n × n symmetric matrix B = (b_ij), the following statements are equivalent:
(i) B is conditionally negative definite;
(ii) there exist real numbers c_1, ..., c_n such that the n × n matrix with entries c_i + c_j − b_ij is positive definite;
(iii) for any nonnegative number s, the n × n matrix with entries e^{−s b_ij} is positive definite.
A nonnegative function γ(x) with γ(0) = 0 is called a univariate variogram on D if inequality (3) holds with b_ij = γ(x_i − x_j) for every integer n greater than 1, any x_1, ..., x_n ∈ D, and any real a_1, ..., a_n with sum_{i=1}^n a_i = 0. This is equivalent to saying that the n × n matrix with entries γ(x_i − x_j) is conditionally negative definite whenever n is an integer greater than 1 and x_1, ..., x_n ∈ D. Every univariate random field with second-order increments possesses a variogram. Examples of univariate variograms may be found in the literature.
Now let us take a look at three covariance matrix structures using the Cramér-Kolmogorov theorem, and see how the conditionally negative definite matrix gets involved in such cases. This will be helpful for understanding the settings in the next section.
1) Example 2: Let b_11, b_12, and b_22 be positive constants. We investigate the constraints on these constants under which the matrix function (4) is the covariance matrix function of a stationary bivariate spherically invariant random field. Notice that the Fourier transform matrix of (4) is positively proportional to a matrix determined by the b_ij. According to the Cramér-Kolmogorov theorem, (4) is a covariance matrix function if and only if that matrix is positive definite for every frequency, which is equivalent to the matrix B = (b_ij) being conditionally negative definite. Moreover, one may verify that the analogous m × m matrix function is a covariance matrix function if and only if the symmetric matrix B is conditionally negative definite. This example is essentially a particular case of Corollary 2.2 below.
2) Example 3: Let B = (b_ij) be an m × m symmetric matrix with positive entries. Consider a matrix function of the form (5), whose Fourier transform matrix is positively proportional to a matrix determined by B. By the Cramér-Kolmogorov theorem, (5) is a covariance matrix function if and only if that matrix is positive definite for every frequency, and, by Theorem 1 with an arbitrary nonnegative constant, this is equivalent to B being a conditionally negative definite matrix.
3) Example 4: For an m × m symmetric matrix B = (b_ij) with positive entries, consider a matrix function of the form (6). Since the Fourier transform matrix of (6) is positively proportional to a matrix determined by B, we obtain from the Cramér-Kolmogorov theorem that (6) is a covariance matrix function if and only if that matrix is positive definite, and then from Theorem 1 that the matrix B is conditionally negative definite.
The next example presents a way to generate a vector random field from a univariate random field, with a conditionally negative definite matrix involved.
4) Example 5: Suppose that {Z(x), x ∈ D} is a univariate second-order random field. From it we formulate an m-variate random field by setting Z_k(x) = Z(x + u_k), k = 1, ..., m, where u_1, ..., u_m are prespecified vectors in D. Evidently, this vector random field has second-order moments, with direct and cross covariances cov(Z(x_1 + u_j), Z(x_2 + u_k)). It is easy to verify that the matrix with entries ||u_j − u_k|| is conditionally negative definite. Such a matrix is also called a Euclidean distance matrix.
After seeing the role played by the conditionally negative definite matrix above, we now recall the definition of the completely monotone function. A function g(x), x ≥ 0, is completely monotone on [0, ∞) if it has derivatives of all orders on (0, ∞) and the derivatives alternate in sign, i.e., (−1)^n g^(n)(x) ≥ 0 for every natural number n and every x > 0. Bernstein's theorem asserts that g(x) is completely monotone on [0, ∞) if and only if it is the Laplace transform of a nonnegative finite measure on [0, ∞). Obviously, the Laplace transform, whenever it exists, of a nonnegative random variable is a completely monotone function on [0, ∞).
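A discrete analogue of the sign-alternating derivatives is that the forward differences of a completely monotone function alternate in sign: (−1)^n Δ_h^n g(x) ≥ 0 for every n and every h > 0. A quick Python check with g(x) = 1/(1 + x), which is the Laplace transform of the density e^{−s} and hence completely monotone (an illustrative choice):

```python
from math import comb

def alt_diff_signs(g, x, h, nmax):
    # (-1)^n * Delta_h^n g(x) for n = 1, ..., nmax, where Delta_h is the
    # forward difference; completely monotone g makes each value >= 0.
    out = []
    for n in range(1, nmax + 1):
        d = sum((-1) ** (n - k) * comb(n, k) * g(x + k * h) for k in range(n + 1))
        out.append((-1) ** n * d)
    return out

vals = alt_diff_signs(lambda x: 1.0 / (1.0 + x), x=0.5, h=0.2, nmax=8)
print(all(v >= 0 for v in vals))  # True
```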
IV. SOME COVARIANCE MATRIX STRUCTURES
In this section we formulate some covariance matrix functions for second-order spherically invariant vector random fields, using the three ingredients reviewed in the last section: a conditionally negative definite matrix with nonnegative diagonal entries, a completely monotone function on [0, ∞), and a univariate variogram.
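Although the exact displays (7)–(13) are not reproduced here, the flavor of such constructions can be checked numerically: apply a completely monotone g to a univariate variogram plus a conditionally negative definite matrix with nonnegative diagonal, c_jk(x1, x2) = g(γ(x1 − x2) + b_jk), and verify inequality (1) on a grid. A hedged Python sketch (this particular combination is an illustration in the spirit of the theorems below, not a formula quoted from the paper):

```python
import numpy as np

g = lambda x: np.exp(-x)                 # completely monotone on [0, inf)
gamma = lambda h: np.abs(h)              # a univariate variogram
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # conditionally negative definite,
                                         # nonnegative diagonal entries

ts = np.linspace(0.0, 4.0, 6)
n, m = len(ts), B.shape[0]

# Candidate direct/cross covariances c_jk(x1, x2) = g(gamma(x1 - x2) + b_jk),
# stacked into the nm x nm matrix of inequality (1).
K = np.empty((n * m, n * m))
for i, t1 in enumerate(ts):
    for k, t2 in enumerate(ts):
        K[i * m:(i + 1) * m, k * m:(k + 1) * m] = g(gamma(t1 - t2) + B)
lam_min = np.linalg.eigvalsh(K).min()
print(lam_min >= -1e-10)  # True: a valid covariance matrix at these points
```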
Theorem 2: Assume that g(x) is a completely monotone function on [0, ∞), that θ_1 and θ_2 are positive constants, and that B = (b_ij) is an m × m conditionally negative definite matrix with nonnegative diagonal entries. If γ(x) is a univariate variogram in D, then there is an m-variate spherically invariant random field with direct and cross covariances (7).
Clearly, a spherically invariant vector random field with direct/cross covariances (7) is generally not stationary, unless γ is intrinsically stationary, in the sense that it is a function of the lag x_1 − x_2 only; if, moreover, γ is isotropic, namely, it is a function of ||x_1 − x_2|| only, the resulting vector random field is isotropic. Many particular cases of (7) can be obtained once g, γ, and B are specified. For example, a particular choice of g in (7) yields the following corollary.
Corollary 2.1: The resulting functions are the direct and cross covariances of an m-variate spherically invariant random field.
Note that θ_1 and θ_2 have to be positive in (7); otherwise, the functions may not be defined when the arguments of g tend to infinity. Letting θ_1 and θ_2 tend to infinity in (7) leads to the following corollary, from which Example 2 follows by a particular choice of g.
Corollary 2.2: For a positive constant θ, the limiting functions are the direct and cross covariances of an m-variate spherically invariant random field in D.
From Corollary 2.2 it is now clear that the covariance matrix function with entries (7) is essentially the difference of two covariance matrix functions, whose entries may change sign in certain cases.
The model (9) in the next theorem may be interpreted as the negative of the partial derivative of (8) with respect to θ, and its validity follows from Theorem 2.
Theorem 3: If γ(x) is a univariate variogram in D, g(x) is a completely monotone function with a finite derivative at the right-hand side of the origin, and B = (b_ij) is an m × m conditionally negative definite matrix with nonnegative diagonal entries, then there is an m-variate spherically invariant random field in D with direct and cross covariances (9).
Similar to Theorem 2, a spherically invariant vector random field with direct/cross covariances (9) is generally not stationary unless γ is intrinsically stationary, and is isotropic if γ is isotropic. In contrast to Theorem 2, the existence of a finite right-hand derivative of g at the origin is obviously required in Theorem 3 since, otherwise, (9) would not be well defined. This excludes those completely monotone functions whose derivative is unbounded at the origin.
Another particular case of (7) is obtained when we choose g(x) = e^{−x}, which is a completely monotone function on [0, ∞). This is also a special case of the following theorem, where the index set is not necessarily identical to that of Theorem 2.
Theorem 4: Assume that θ_1 and θ_2 are positive constants. If γ(x) is a univariate variogram on D and B = (b_ij) is an m × m conditionally negative definite matrix with nonnegative diagonal entries, then there are two m-variate spherically invariant random fields on D:
(i) one with the direct and cross covariances (10), and
(ii) the other with the direct and cross covariances (11).
Corollary 4.1: If θ is a positive constant, then a particular choice in (10) yields functions that are the direct and cross covariances of an m-variate spherically invariant random field on D.
Clearly, there are two symmetric matrices involved in Theorem 4: the conditionally negative definite matrix B and the matrix with entries e^{−b_ij}. It would be of interest to see whether the latter could be replaced by another symmetric matrix. The following corollary is obtained by letting the two matrices be linked through a single constant.
Corollary 4.2: Let θ be a positive constant.
(i) If c is a nonnegative constant, then the functions (12) are the direct and cross covariances of an m-variate spherically invariant random field on D.
(ii) If (12) are direct and cross covariance functions on D for all positive θ, then c must be a nonnegative constant.
Unlike (7), (10) and (11) are always nonnegative. The ingredient γ is replaced in the structure (13) below by a matrix function, which might be viewed as a variogram matrix function in some particular cases.
Theorem 5: Assume that g(x) is a completely monotone function, and that θ_1 and θ_2 are positive constants subject to a determinant condition. If Γ(x_1, x_2) is an m × m matrix function and B = (b_ij) is an m × m conditionally negative definite matrix with nonnegative diagonal entries, then there is an m-variate spherically invariant random field with direct and cross covariances (13).
In particular, Corollary 2.1 is obtained when one chooses Γ appropriately. As another choice, in (13) one may take Γ(x_1, x_2) = γ(x_1 − x_2) I_m, where I_m is the m × m identity matrix.
V. PROOFS OF THEOREMS 2–5
Proof of Theorem 2: By Bernstein's theorem, the completely monotone function g(x) is the Laplace transform of a bounded, nondecreasing function F(s) on [0, ∞); that is,

g(x) = ∫_0^∞ e^{−xs} dF(s),  x ≥ 0.    (14)

With this representation, the functions in (7) can be written as a mixture over s of simpler functions. Thus it suffices to show that, for every fixed s ≥ 0, these simpler functions are direct/cross covariances in D; they then are so as a mixture with respect to the nonnegative measure induced by F, by Lemma 2, once it is confirmed that the associated matrix is a covariance matrix. In fact, by Theorem 1, the matrix with entries e^{−s b_ij} is a positive definite matrix, and, by Schoenberg's theorem, e^{−s γ(x_1 − x_2)} is a univariate covariance function; thus their product, with the remaining factor being the matrix with all entries equal to 1, is an m × m covariance matrix function. Finally, by Lemma 1, the resulting matrix function is a covariance matrix function.
Proof of Theorem 3: By introducing an auxiliary variable ε on the interval (0, 1), we consider perturbed functions which, by Theorem 2, are direct/cross covariances in D, and so are their differences scaled by ε. Letting ε approach 0 yields new functions, which are, by L'Hospital's rule, direct/cross covariance functions in D and thus coincide with (9).
Proof of Theorem 4:
(i) For every fixed s ≥ 0, it follows from Schoenberg's theorem that e^{−s γ(x_1 − x_2)} is a univariate covariance function in D, and thus its product with the matrix with all entries equal to 1 is a covariance matrix function. By Theorem 1, the matrix with entries e^{−s b_ij} is a positive definite matrix, and by Lemma 1, the matrix function that is the Hadamard product of the two, the one with entries e^{−s(γ(x_1 − x_2) + b_ij)}, is a covariance matrix function. Finally, the required result follows since (10) can be rewritten in terms of such factors.
(ii) This follows directly from Part (i) and a decomposition of (11) into terms of the form treated in Part (i).
Proof of Theorem 5: Using the representation (14) for the completely monotone function g(x), we rewrite (13) as a mixture; using the formula (15) and substituting, we rewrite the integrand as a product of simpler terms, which, as we are going to show, are direct/cross covariances in D for every fixed constant. To apply Lemma 2, we just need to verify that the associated matrix is a covariance matrix. Since the matrix with entries e^{−s b_ij} is a positive definite matrix due to Theorem 1, by Lemma 1 it suffices to verify that the remaining scalar factor is a univariate covariance function. In fact, this function is positive definite in D, since the defining inequality holds for every natural number n, any x_1, ..., x_n ∈ D, and any real a_1, ..., a_n.
VI. CONCLUDING REMARKS
A form that is more general than our formulation (2) would be

Z(t) = V ∘ G(t) + μ(t),  t ∈ D,    (16)

where {G(t), t ∈ D} is an m-variate Gaussian random field with mean zero and covariance matrix function C(t1, t2), V is an m-variate nonnegative random vector that is independent of {G(t), t ∈ D}, and μ(t) is a deterministic vector-valued function. This structure would allow for different components having different distributions. If V possesses second-order moments, then (16) is a second-order random field, with mean μ(t) and direct/cross covariances E(V_i V_j) C_ij(t1, t2). As a result, in order that (16) be allowable for any correlation structure, it is necessary that the ratios among the components of V equal positive constants; in other words, V reduces to v_0 V_0 almost surely, where v_0 is a positive constant vector and V_0 is a random variable, and (16) reduces to (2). An important feature of a spherically invariant vector random field is that it is of second order whenever its mixing variable has a second-order moment. Another important feature is that it does not have a restriction or a tight connection between its mean and covariance functions, unlike the log-Gaussian or related non-Gaussian cases, so that the spherically invariant random field may be relatively more flexible for applications. The latter feature of being analytically easy to manipulate would make second-order spherically invariant random fields work effectively for studying various correlation effects in science and engineering.
One of our main models, (7), gives a much richer class of covariance matrix models in terms of a wide selection of the conditionally negative definite matrix B, the completely monotone function g, and the univariate variogram γ, which one can specialize to meet practical needs such as the data structure, desired properties, or convenience. In some ways, the univariate variogram can be replaced by a matrix function, as in Theorem 5. The direct covariances in Theorems 2, 3, and 4 belong to the same univariate class as those in the earlier univariate work, but the present work is not a simple extension of it, since that work deals with the univariate case only. For the vector case, the difficulty often arises because a framework is needed for specifying not only the properties of each component, such as the direct covariances, but also the possible cross interactions among the components, such as the cross covariances. Examples in Section III illustrate how naturally a conditionally negative definite matrix gets involved in our construction for the vector case, a symmetric m × m matrix having at most m(m + 1)/2 free parameters. The two other parameters, θ_1 and θ_2, are also involved in (7), (9)–(11), and (13). One reason for including these two parameters is to have more flexible models; one may ask how the direct and cross covariances (7) are affected by θ_1 and θ_2. Corollary 2.2 may serve as an answer to this question, where letting them tend to infinity yields positive direct and cross covariances, while Theorem 3 describes the limiting status when θ_1 and θ_2 are close to each other. Alternatively, one may make a comparison between (7) and the same model but with different parameters, in a way similar to the univariate case.
ACKNOWLEDGMENT
The authors would like to thank three anonymous reviewers for their helpful comments and suggestions, which helped us to improve the presentation of the paper.
REFERENCES
 A. Abdi and S. Nader-Esfahani, "Expected number of maximum in the envelope of a spherically invariant random process," IEEE Trans. Inf. Theory, vol. 49, pp. 1369–1375, 2003.
 M. T. Alodat and M. Y. Al-Rawwash, “Skew-Gaussian random field,”
J. Comput. Applied Math., vol. 232, pp. 496–504, 2009.
 R. B. Bapat and T. E. S. Raghavan, Nonnegative Matrices and Applications. Cambridge, U.K.: Cambridge Univ. Press, 1997.
 T. J. Barnard and F. Khan, “Statistical normalization of spherically
invariant non-Gaussian clutter,” IEEE J. Ocean. Eng., vol. 29, pp.
 I. F. Blake and J. B. Thomas, “On a class of processes arising in linear
estimation theory,” IEEE Trans. Inf. Theory, vol. IT-14, pp. 12–16,
invariant speech-model signals,” Signal Process., pp. 119–141, 1987.
 N. J.-B. Brunel, J. Lapuyade-Lahorgue, and W. Pieczynski,“Modeling
and unsupervised classification of multivariate hidden Markov chains
 S. Buzzi, E. Conte, A. D. Maio, and M. Lops, “Optimum diversity
detection over fading dispersive channels with non-Gaussian noise,”
IEEE Trans. Signal Process., vol. 49, pp. 767–776, 2001.
 P. B. Chapple, D. C. Bertilone, R. S. Caprari, and G. N. Newsam,
“Stochastic model-based processing for detection of small targets in
non-Gaussian natural imagery,” IEEE Trans. Image Process., vol. 10,
pp. 554–564, 2001.
 E. Conte, M. Di Bisceglie, M. Longo, and M. Lops, “Canonical detec-
tion in spherically invariant noise,” IEEE Trans. Commun., vol. 43, pp.
 E. Conte, G. Galati, and M. Longo, "Exogenous modelling of non-Gaussian clutter," J. Inst. Elec. Radio Eng., vol. 57, pp. 151–155, 1987.
 E. Conte and M. Longo, "Characterisation of radar clutter as a spherically invariant random process," Inst. Elect. Eng. Proc. Pt. F, vol.
134, pp. 191–197, 1987.
 H. Cramér, “On the theory of stationary random processes,” Ann.
Math., vol. 41, pp. 215–230, 1940.
 H. Cramér and M. R. Leadbetter, Stationary and Related Stochastic
Processes: Sample Function Properties and Their Application. New York: Wiley, 1967.
 N. Cressie, Statistics for Spatial Data. New York: Wiley, 1993.
 S. Das, R. Ghanem, and S. Finette, “Polynomial chaos representation
of spatio-temporal random fields from experimental measurements,” J.
Computat. Phys., vol. 228, pp. 8726–8751, 2009.
 G. Deodatis and R. C. Micaletti, “Simulation of highly skewed
non-Gaussian stochastic processes,” J. Eng. Mechan., vol. 127, pp.
 C. T. J. Dodson and J. Scharcanski, “Information geometric similarity
measurement for near-random stochastic processes,” IEEE Trans. Syst.
Man Cybern. A, Syst. Humans, vol. 33, pp. 435–440, 2003.
Distributions. London, U.K.: Chapman and Hall, 1990.
 F. J. Ferrante, S. R. Arwade, and L. L. Graham-Brady, “A translation
Mech., vol. 20, pp. 215–228, 2005.
 J. C. S. S. Filho and M. D. Yacoub, “Coloring non-Gaussian se-
quences,” IEEE Trans. Signal Process., vol. 56, pp. 5817–5822, 2008.
 A. Germani, C. Manes, and P. Palumbo, “Polynomial filtering for sto-
chastic non-Gaussian descriptor systems,” IEEE Trans. Circuits Syst.
I, Fundam. Theory Appl., vol. 51, pp. 1561–1576, 2004.
 I. I. Gikhman and A. V. Skorokhod, Introduction to the Theory of
Random Processes. Philadelphia, PA: W. B. Saunders, 1969.
 J. Goldman, "Detection in the presence of spherically symmetric random vectors," IEEE Trans. Inf. Theory, vol. IT-22, pp. 52–59, 1976.
 I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and
Products, 7th ed. Boston, MA: Academic, 2007.
 R. M. Gray and L. D. Davisson, An Introduction to Statistical Signal
Processing. Cambridge, U.K.: Cambridge Univ. Press, 2004.
 M. Grigoriu, Applied Non-Gaussian Processes: Examples, Theory,
Simulation, Linear Random Vibration, and MATLAB Solutions. Englewood Cliffs, NJ: Prentice-Hall, 1995.
 A. F. Gualtierotti, “Some remarks on spherically invariant distribu-
tions,” J. Multivariate Anal., vol. 4, pp. 347–349, 1974.
 J. L. Harvill, “Spatio-temporal processes,” WIREs Computat. Statist.,
vol. 2, pp. 375–382, 2010.
 D. Huang and H. Leung, “Maximum likelihood state estimation of
semi-Markovian switching system in non-Gaussian measurement
noise," IEEE Trans. Aerosp. Electron. Syst., vol. 46, pp. 133–146, 2010.
 D. R. Jensen and R. V. Foutz, “The structure and analysis of spher-
ical time-dependent processes,” SIAM J. Appl. Math., vol. 49, pp.
 H. Kim and B. K. Mallick, “Analyzing spatial data using skew-
Gaussian processes,” in Spatial Cluster Modelling, A. B. Lawson and
D. G. T. Denison, Eds. London, U.K.: Chapman and Hall/CRC,
 J. F. C. Kingman, “On random sequences with spherical symmetry,”
Biometrika, pp. 492–494, 1972.
 V. Y. Kontorovich and V. Z. Lyandres, “Stochastic differential equa-
tions: An approach to the generation of continuous non-Gaussian
processes,” IEEE Trans. Signal Process., vol. 43, pp. 2372–2385,
 N. D. Lagaros, G. Stefanou, and M. Papadrakakis, “An enhanced
hybrid method for the simulation of highly skewed non-Gaussian
stochastic fields,” Comput. Methods Appl. Mech. Eng., vol. 194, pp.
 C. Ma, “Semiparametric spatio-temporal covariance models with the
autoregressive temporal margin,” Ann. Inst. Statist. Math., vol. 57, pp.
 C. Ma, “Spatio-temporal variograms and covariance models,” Adv.
Appl. Prob., vol. 37, pp. 706–725, 2005.
 C. Ma, “Construction of non-Gaussian random fields with any given
correlation structure,” J. Statist. Plan. Infer., vol. 139, pp. 780–787,
 C. Ma, “?
random fields in space and time,” IEEE Trans. Signal
Process., vol. 58, pp. 378–383, 2010.
 C. Ma, “Elliptically contoured random fields in space and time,” J.
Phys. A: Math. Theor., vol. 43, no. 165209, p. 14, 2010.
 C. Ma, “Vector random fields with second-order moments or second-
order increments," Stoch. Anal. Appl., vol. 29, pp. 197–215, 2011.
 G. Matheron, "The internal consistency of models in geostatistics," in Geostatistics, M. Armstrong, Ed. Amsterdam, The Netherlands: Kluwer Academic, 1989, vol. 1, pp. 21–38.
IEEE Trans. Inf. Theory, vol. 14, pp. 110–120, 1968.
 A. Nasri, A. Nezampour, and R. Schober, "Adaptive Lp-norm diver-
sity combining in non-Gaussian noise and interference,” IEEE Trans.
Wireless Commun., vol. 8, pp. 4230–4240, 2009.
 J. M. Nichols, C. C. Olson, J. V. Michalowicz, and F. Bucholtz, “A
simple algorithm for generating spectrally colored, non-Gaussian sig-
nals,” Prob. Eng. Mech., vol. 25, pp. 315–322, 2010.
 F. Pascal, Y. Chitour, J.-P. Ovarlez, P. Forster, and P. Larzabal,
“Covariance structure maximum-likelihood estimates in compound
Gaussian noise: Existence and algorithm analysis,” IEEE Trans. Signal
Process., vol. 56, pp. 34–47, 2008.
 B. Picinbono, “Spherically invariant and compound Gaussian sto-
chastic processes,” IEEE Trans. Inf. Theory, vol. 16, pp. 77–79, 1970.
 S. L. Primak and V. Z. Lyandres, “On the generation of the baseband
vol. 46, pp. 1229–1237, 1998.
 M. Rangaswamy, “Multichannel detectionfor correlatednon-Gaussian
random processes based on innovations," IEEE Trans. Signal Process.,
vol. 43, pp. 1915–1922, 1995.
 M. Rangaswamy, “Statistical analysis of the nonhomogeneity detector
for non-Gaussian interference backgrounds," IEEE Trans. Signal
Process., vol. 53, pp. 2101–2111, 2005.
 M. Rangaswamy, D. Weiner, and A. Ozturk, “Non-Gaussian random
vector identification using spherically invariant random processes,”
IEEE Trans. Aerosp. Elect. Syst., vol. 29, pp. 111–124, 1993.
 A. M. Sabatini, “A statistical mechanical analysis of postural sway
using non-Gaussian farima stochastic models,” IEEE Trans. Biomed.
Eng., vol. 47, pp. 1219–1227, 2000.
 O. Schabenberger and C. A. Gotway, Statistical Methods for Spatial
Data Analysis. Boca Raton, FL: Chapman and Hall/CRC, 2005.
 R. L. Schilling, R. Song, and Z. Vondracek, Bernstein Functions:
Theory and Applications. Berlin, Germany: De Gruyter, 2010.
 C. Turchetti, P. Crippa, M. Pirani, and G. Biagetti, “Representation of
nonlinear random transformations by non-Gaussian stochastic neural
networks," IEEE Trans. Neural Netw., vol. 19, pp. 1033–1060, 2008.
 G. Vasile, J.-P. Ovarlez, F. Pascal, and C. Tison, "Coherency matrix estimation of heterogeneous clutter in high-resolution polarimetric SAR
images,” IEEE Trans. Geosci. Remote Sens., vol. 48, pp. 1809–1826,
 A. M. Vershik, “Some characteristic properties of stochastic Gaussian
processes,” Theory Prob. Appl., vol. 9, pp. 353–356, 1964.
 G. N. Watson, A Treatise on the Theory of Bessel Functions, 2nd ed.
London, U.K.: Cambridge Univ. Press, 1958.
 G. L. Wise and N. C. Gallagher, “On spherically invariant random pro-
cesses,” IEEE Trans. Inf. Theory, vol. IT-24, pp. 118–120, 1978.
 K. Yao, “A representation theorem and its applications to spherically-
invariant random processes,” IEEE Trans. Inf. Theory, vol. IT-19, pp.
 K. Yao, “Spherically invariant random processes: Theory and applica-
tions,” in Communications, Information and Network Security, V. K.
Bhargava, H. V. Poor, V. Tarokh, and S. Yoon, Eds.
Academic, 2003, pp. 315–332.
Juan Du received the Ph.D. degree in statistics from
Michigan State University, East Lansing, in 2009.
Since then, she has been with Kansas State University as an Assistant Professor of statistics. Her primary research interests lie in spatial and spatio-temporal statistics, vector random fields, and asymptotic theory with applications in environmental sciences, engineering, and agriculture.
Chunsheng Ma received the Ph.D. degree from the
University of Sydney, Australia, in 1997.
After two years with the University of British
Columbia as a Postdoctoral Fellow, he joined Wi-
chita State University in 1999, where he is currently
a professor. He was a University Fellow during
the 2006–2007 academic year at the Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC, and was a Guest Professor during
the 2009–2010 academic year at Wuhan University,
China. Currently, he is an Adjunct Professor at the
Wuhan University of Technology, China. His research areas include statistics
and probability with applications in science and engineering. His current
research interests are in vector random fields and spatio-temporal statistics.