SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING

© 2015 SAMPLING PUBLISHING

Vol. 14, No. 2, 2015, pp. 153–169

ISSN: 1530-6429

Multidimensional Signal Recovery in Discrete Evolution Systems via Spatiotemporal Trade Off

Roza Aceska

Department of Mathematical Sciences, Ball State University

Muncie, IN, USA

e-mail address: raceska@bsu.edu

Armenak Petrosyan

Department of Mathematics, Vanderbilt University

Nashville, TN, 37240, USA

e-mail address: armenak.petrosyan@vanderbilt.edu

Sui Tang

Department of Mathematics, Vanderbilt University

Nashville, TN, 37240, USA

e-mail address: sui.tang@vanderbilt.edu

Abstract. The problem of recovering an evolving signal from a set of samples taken at different time instances has been well studied for one-variable signals modeled by ℓ2(Z_d) and ℓ2(Z). However, most observed time-variant signals in applications are described by at least two spatial variables. In this paper, we study spatiotemporal sampling patterns that recover initial signals modeled by ℓ2(Z_{d1} × Z_{d2}) and ℓ2(Z × Z), evolving in a discrete evolution system, and provide specific reconstruction results.

Key words and phrases: Distributed sampling, reconstruction, frames.

2010 AMS Mathematics Subject Classification: 94A20, 94A12, 42C15, 15A29.

1. Introduction

The ongoing development [1, 2, 4, 8, 9, 19] in sampling theory suggests combining coarse spatial samples of a signal's initial state with its later-time samples. In these cases, the time dependency among samples permits a reduction in the number of expensive sensors used, by increasing their usage frequency. The reconstruction of the initial distribution of a signal is achieved by exploiting the evolutionary nature of that signal under certain constraints, which is not fully considered in classical sampling problems [3, 5, 6, 10, 12, 14, 16, 17, 18].

The so-called dynamical sampling problem (motivated by [13, 15]) has been well studied in the one-variable setting [1, 2, 4, 8, 9], but there have been no results in the multivariable setting. In industrial applications (sampling of air pollution, wireless networks), the observed time-variant signals are described by at least two variables. In this paper, we formulate the problem of spatiotemporal sampling for two-variable data and provide specific reconstruction results.

1.1. Stating the Dynamical Sampling Problem. In real-life situations, physical systems evolve over time under the influence of a family of operators {A_t}_{t≥0}. Let f_0 be the initial state defined on a domain D. The dynamical sampling problem asks when we can recover the initial state f_0 from the spatiotemporal sampling data {S_{X_{t_i}} A_{t_i} f_0 : i = 0, ..., N−1}, where S_{X_{t_i}} is a subsampling operator defined by a coarse sampling set X_i ⊂ D at time instances t_i, i = 0, ..., N−1. In other words, we would like to compensate for the lack of sufficient samples of the initial state by adding coarse samples of the evolved states {A_{t_i} f_0 = f_{t_i}, i = 0, ..., N−1}. In this way, we can use fewer sampling devices, saving budget while losing no information.

The dynamical sampling problem is solved when conditions on the sampling sets and time instances t_i are found such that recovery of the signal is possible, preferably in a stable way. That is, if one (or both) of the following properties is satisfied:

ISP Invertibility sampling property. The operators A_{t_0}, ..., A_{t_{N−1}}, the sampling sets X_{t_0}, ..., X_{t_{N−1}} and the number of repeated sampling procedures satisfy this condition within a class of signals if any signal h in that class is uniquely determined by its sample data set.

SSP Stability sampling property. The operators, the sampling sets and the number of repeated samplings satisfy this condition in a fixed class of signals if, for any two signals h, h_1 in that class, the two norms

$$\|h - h_1\|_2^2 \quad \text{and} \quad \sum_{i=0}^{N-1} \|S_{X_{t_i}} A_{t_i}(h - h_1)\|_{\ell^2}^2$$

are equivalent.

SSP is clearly a stronger property and implies ISP. In [1, 2, 8, 9] the authors have studied the dynamical sampling problem for the discrete spatially invariant evolution system, in which the initial state f is defined on the domain D = Z_d or Z, under certain constraints. At time instance t = n ∈ N, the initial state f is altered by n-fold convolution with a filter a, so that A^n(f) = a ∗ a ∗ ... ∗ a ∗ f = a^n ∗ f. At each time instance t = n, the altered state A^n(f) is under-sampled at a uniform subsampling rate m. The invertibility and stability questions have been fully answered under these specific constraints. Namely, for a uniform discrete sampling grid X = mD ⊂ D, specific conditions on a and N are stated so that a function f can be recovered from the samples

{f(X), (a ∗ f)(X), ..., (a^{N−1} ∗ f)(X)}, for X ⊂ D. (1.1)
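As a small illustration (not taken from [8]), the one-variable sample set (1.1) can be generated with circular convolutions computed via the FFT; the signal, filter, dimension d and rate m below are arbitrary choices:

```python
import numpy as np

def dynamical_samples_1d(f, a, m, N):
    """Collect the spatiotemporal samples {(a^{*n} * f)(X)} of (1.1)
    on the coarse grid X = m Z_d, for n = 0, ..., N-1 (circular convolution)."""
    a_hat, f_hat = np.fft.fft(a), np.fft.fft(f)
    samples = []
    for n in range(N):
        fn = np.fft.ifft(a_hat**n * f_hat)   # A^n f = a * ... * a * f
        samples.append(fn[::m])              # keep every m-th entry
    return np.array(samples)

# toy example: d = 15, subsampling rate m = 3, N = m time levels
rng = np.random.default_rng(0)
d, m = 15, 3
f = rng.standard_normal(d)
a = rng.standard_normal(d)
Y = dynamical_samples_1d(f, a, m, N=m)
print(Y.shape)   # (3, 5): N rows of d/m coarse samples each
```

Each row of Y holds one temporal snapshot on the coarse grid; row 0 is just f(X) itself.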

The multidimensional dynamical sampling problem we consider in this paper has similarities with problems considered by other authors. For example, in [11], the authors work in a multivariate shift-invariant space (MSIS) setting and study linear systems {L_j : j = 1, ..., s} such that one can recover any f in the MSIS by uniformly downsampling the functions {L_j f : j = 1, ..., s}, i.e., taking the generalized samples {(L_j f)(Mα)}_{α∈Z^d, j=1,...,s}. In dynamical sampling, there is only one convolution operator A, and it is applied iteratively to the function f. This iterative structure is important for our analysis of the kernel of the arising matrix; using that special structure, we are able to add extra samples outside of the initial uniform sampling grid and get full recovery of the signal.

In addition, certain singularity problems, which can occur due to specific properties of a when sampling on a uniform grid X, have been successfully overcome in the cited papers by adding additional samples. Since most real-life phenomena are described by functions of multiple variables, we find it important to extend the dynamical sampling concept to the two-variable setting, i.e., D = Z_{d1} × Z_{d2} and Z × Z. As we will see later, the two-variable problem is more complicated in structure, and we find it more subtle to overcome the singularity problems. Studying the stated problem in the 3-variable (and higher) setting would require coping techniques similar to the ones we use in this paper to expand the domain from one to two dimensions.

2. Dynamical sampling on Z_{d1} × Z_{d2}

For a positive integer d, Z_d denotes the finite group of integers modulo d. In the finite discrete setting, we work on the domain D = Z_{d1} × Z_{d2}, d_1, d_2 ∈ N^+. Let the operator A act on the signal of interest f ∈ ℓ2(D) as a convolution with some a ∈ ℓ1(D), given by

$$Af(k, l) = a * f(k, l) = \sum_{(s,p)\in D} a(s, p)\, f(k - s,\, l - p), \quad \text{for all } (k, l)\in D. \tag{2.1}$$

Note that A is a bounded linear operator that maps ℓ2(D) to itself. The initial signal f evolves in time under the repeated effect of A, such that at time instance t = n the evolved signal is f_n = A^n f = a ∗ a ∗ ··· ∗ a ∗ f (and f = f_0 = A^0 f).

We assume that d_1 and d_2 are odd numbers such that d_i = J_i m_i for integers m_i ≥ 1, J_i ≥ 1, i = 1, 2. We set the sampling sensors on a uniform coarse grid X = m_1 Z_{d1} × m_2 Z_{d2} to sample the initial state f and its temporally evolved states Af, A²f, ..., A^{N−1}f. Note that, given such a coarse sampling grid, each individual measurement is insufficient for recovery of the sampled state.

Let S_X = S_{m1,m2} denote the assigned subsampling operator related to the sampling grid. Specifically,

$$(S_X f)(k, l) = \begin{cases} f(k, l) & \text{if } (k, l) \in X \\ 0 & \text{otherwise.} \end{cases} \tag{2.2}$$

For some N ≥ 2, our objective is to reconstruct f from the combined coarse sample set

{y_j = S_X(A^j f)}, j = 0, 1, ..., N−1. (2.3)

We denote by F the 2-dimensional discrete Fourier transform (2D DFT) and use the notation x̂ = F(x). After applying F to (2.3), due to the two-dimensional Poisson summation formula, we obtain

$$\hat{y}_n(i, j) = \frac{1}{m_1 m_2} \sum_{k=0}^{m_1-1} \sum_{l=0}^{m_2-1} \hat{a}^n(i + kJ_1,\, j + lJ_2)\, \hat{f}(i + kJ_1,\, j + lJ_2) \tag{2.4}$$

for (i, j) ∈ I = {0, ..., J_1 − 1} × {0, ..., J_2 − 1} and n = 0, 1, ..., N−1.
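The aliasing identity (2.4) can be checked numerically; the following is a minimal sketch, where the dimensions d1 = 9, d2 = 15, m1 = m2 = 3, the random filter, and the tested frequency (i, j) are arbitrary illustrative choices rather than anything prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2 = d1 // m1, d2 // m2           # J1 = 3, J2 = 5

f = rng.standard_normal((d1, d2))
a = rng.standard_normal((d1, d2))
a_hat, f_hat = np.fft.fft2(a), np.fft.fft2(f)

n = 2                                  # time level
fn = np.fft.ifft2(a_hat**n * f_hat)    # A^n f via circular convolution
yn = np.zeros((d1, d2), dtype=complex)
yn[::m1, ::m2] = fn[::m1, ::m2]        # subsample on X = m1 Z_{d1} x m2 Z_{d2}, zero elsewhere
yn_hat = np.fft.fft2(yn)

# right-hand side of (2.4) at a frequency (i, j) in I
i, j = 1, 4
rhs = sum(a_hat[(i + k*J1) % d1, (j + l*J2) % d2]**n *
          f_hat[(i + k*J1) % d1, (j + l*J2) % d2]
          for k in range(m1) for l in range(m2)) / (m1 * m2)
print(np.allclose(yn_hat[i, j], rhs))  # True
```

Note that the zero-filled subsampled grid is used here, exactly as in the definition (2.2) of S_X, so its DFT is J_1-periodic in the first index and J_2-periodic in the second.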

Let ȳ(i, j) = (ŷ_0(i, j), ŷ_1(i, j), ..., ŷ_{N−1}(i, j))^T for (i, j) ∈ I, and let

$$\bar{f}(i, j) = \Big( \hat{f}(i, j),\ \ldots,\ \hat{f}(i + (m_1{-}1)J_1,\, j),\ \hat{f}(i,\, j + J_2),\ \ldots,\ \hat{f}(i + (m_1{-}1)J_1,\, j + J_2),\ \ldots,\ \hat{f}(i,\, j + (m_2{-}1)J_2),\ \ldots,\ \hat{f}(i + (m_1{-}1)J_1,\, j + (m_2{-}1)J_2) \Big)^T,$$

i.e., the column vector with entries f̂(i + kJ_1, j + lJ_2), where k = 0, ..., m_1 − 1 runs within each block and l = 0, ..., m_2 − 1 indexes the blocks.

We use the N × m_1 block matrices

$$A_{l,m_1 m_2}(i, j) = \begin{pmatrix} 1 & 1 & \ldots & 1 \\ \hat{a}(i,\, j + lJ_2) & \hat{a}(i + J_1,\, j + lJ_2) & \ldots & \hat{a}(i + (m_1{-}1)J_1,\, j + lJ_2) \\ \vdots & \vdots & \ddots & \vdots \\ \hat{a}^{N-1}(i,\, j + lJ_2) & \hat{a}^{N-1}(i + J_1,\, j + lJ_2) & \ldots & \hat{a}^{N-1}(i + (m_1{-}1)J_1,\, j + lJ_2) \end{pmatrix},$$

where l = 0, 1, ..., m_2 − 1, to define the N × m_1 m_2 matrix

$$A_{m_1,m_2}(i, j) = \big[A_{0,m_1 m_2}(i, j)\;\; A_{1,m_1 m_2}(i, j)\;\; \ldots\;\; A_{m_2-1,m_1 m_2}(i, j)\big] \tag{2.5}$$

for all (i, j) ∈ I. Equations (2.4) have the form of vector inner products, so we restate them in matrix product form:

$$\bar{y}(i, j) = \frac{1}{m_1 m_2}\, A_{m_1,m_2}(i, j)\, \bar{f}(i, j). \tag{2.6}$$

By equation (2.6), we need N ≥ m_1 m_2 to be able to recover the signal f. Note that for N = m_1 m_2, matrix (2.5) is square; we denote this special square matrix by 𝒜_{m1,m2}(i, j) and obtain the following reconstruction result:

Proposition 1. For N = m_1 m_2, the SSP is satisfied if and only if

$$\det \mathcal{A}_{m_1,m_2}(i, j) \neq 0 \quad \text{for all } (i, j) \in I. \tag{2.7}$$

In the finite-dimensional case, unique reconstruction is equivalent to stable reconstruction, so SSP and ISP coincide. When (2.7) holds true, the signal is recovered from the system of equations

$$\bar{f}(i, j) = m_1 m_2\, \mathcal{A}_{m_1,m_2}^{-1}(i, j)\, \bar{y}(i, j), \quad (i, j) \in I.$$
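Under condition (2.7), this recovery procedure can be sketched end to end. In the sketch below, the filter is specified directly through unit-modulus DFT values â, which is one arbitrary way of making all Vandermonde blocks generically invertible (it is an illustrative choice, not a construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2 = d1 // m1, d2 // m2
N = m1 * m2                                    # square case of Proposition 1

# filter chosen via its DFT: unit-modulus, generically distinct values,
# so every Vandermonde block is invertible and (2.7) holds
a_hat = np.exp(2j * np.pi * rng.random((d1, d2)))
f = rng.standard_normal((d1, d2))
f_hat = np.fft.fft2(f)

# spatiotemporal samples y_n = S_X A^n f, kept as zero-filled grids
y_hat = []
for n in range(N):
    fn = np.fft.ifft2(a_hat**n * f_hat)
    yn = np.zeros((d1, d2), dtype=complex)
    yn[::m1, ::m2] = fn[::m1, ::m2]
    y_hat.append(np.fft.fft2(yn))

# invert the m1 m2 x m1 m2 system (2.6) at each (i, j) in I
f_hat_rec = np.zeros((d1, d2), dtype=complex)
for i in range(J1):
    for j in range(J2):
        idx = [((i + k*J1) % d1, (j + l*J2) % d2)
               for l in range(m2) for k in range(m1)]      # column order of (2.5)
        nodes = np.array([a_hat[s, p] for (s, p) in idx])
        A = np.vander(nodes, N, increasing=True).T         # rows: powers 0 .. N-1
        ybar = np.array([y_hat[n][i, j] for n in range(N)])
        fbar = m1 * m2 * np.linalg.solve(A, ybar)
        for c, (s, p) in enumerate(idx):
            f_hat_rec[s, p] = fbar[c]

f_rec = np.fft.ifft2(f_hat_rec)
print(np.abs(f_rec - f).max())   # small, up to the conditioning of the blocks
```

The accuracy of the recovered signal depends on how well separated the aliased values of â are within each block, which is exactly the quantitative content of condition (2.7).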


As expected, Proposition 1 reduces to the respective result in [8] when d = d_1 and d_2 = 1, or d = d_2 and d_1 = 1.

2.1. Extra samples for stable spatiotemporal sampling. Proposition 1 gives a complete characterization of stable recovery from the dynamical samples (2.3). In practice, however, we may not have an ideal filter a such that (2.7) holds true. For instance, consider a kernel a with a so-called quadrantal symmetry, i.e., let

â(s, p) = â(d_1 − s, p) = â(s, d_2 − p) = â(d_1 − s, d_2 − p)

for all (s, p) ∈ D. Since (2.5) is a Vandermonde matrix, it is singular if and only if some of its columns coincide. In this case, it is easy to see that 𝒜_{m1,m2}(0, 0) is singular, which prevents stable reconstruction.

Motivated by the above example, we propose a way of taking extra samples to overcome the lack of reconstruction uniqueness whenever singularities of matrix (2.5) occur. Let

$$A = \begin{pmatrix} \mathcal{A}_{m_1,m_2}(0,0) & 0 & \ldots & 0 \\ 0 & \mathcal{A}_{m_1,m_2}(1,0) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \mathcal{A}_{m_1,m_2}(J_1{-}1,\, J_2{-}1) \end{pmatrix}$$

and let f̄ and ȳ be the corresponding concatenations

$$\bar{f} = \big( \bar{f}(0,0)^T,\ \bar{f}(1,0)^T,\ \ldots,\ \bar{f}(J_1{-}1,0)^T,\ \bar{f}(0,1)^T,\ \ldots,\ \bar{f}(J_1{-}1,1)^T,\ \ldots,\ \bar{f}(0,J_2{-}1)^T,\ \ldots,\ \bar{f}(J_1{-}1,J_2{-}1)^T \big)^T,$$

$$\bar{y} = \big( \bar{y}(0,0)^T,\ \bar{y}(1,0)^T,\ \ldots,\ \bar{y}(J_1{-}1,0)^T,\ \bar{y}(0,1)^T,\ \ldots,\ \bar{y}(J_1{-}1,1)^T,\ \ldots,\ \bar{y}(0,J_2{-}1)^T,\ \ldots,\ \bar{y}(J_1{-}1,J_2{-}1)^T \big)^T.$$

Then

$$A \bar{f} = \bar{y} \tag{2.8}$$

and

$$\ker(A) = \bigoplus_{(i,j)\in I} \ker\big[\mathcal{A}_{m_1,m_2}(i, j)\big]. \tag{2.9}$$

The kernel of each 𝒜_{m1,m2}(i, j) can be viewed as generated by linearly independent vectors v̂_j ∈ ℓ2(D) such that each v̂_j has exactly two nonzero coordinates, one of which equals 1 and the other −1. Let us assume that the nullity of matrix 𝒜_{m1,m2}(i, j) equals w_{i,j} at each (i, j) ∈ I. Then there are n = Σ_{i,j} w_{i,j} such linearly independent vectors v̂_j ∈ ℓ2(D). Let {v_j : j = 1, ..., n} be their image under the 2D inverse DFT. Note that {v_j : j = 1, ..., n} ⊂ ℓ2(D) is also linearly independent.

Let Ω ⊂ D \ X be the additional sampling set; that is to say, we take extra spatial samples of the initial state f at the locations specified by Ω. By S_Ω we denote the related sampling operator, and R_Ω is the |Ω| × n matrix with rows [v_1(k, l), ..., v_n(k, l)], (k, l) ∈ Ω. With these notations, the following result holds true:

Theorem 2.1. The reconstruction of f ∈ ℓ2(D) from its spatiotemporal samples

{S_Ω f, S_X f, S_X Af, ..., S_X A^{m_1 m_2 − 1} f} (2.10)

is possible in a stable manner (SSP is satisfied) if and only if rank(R_Ω) = n. In particular, if SSP holds true, then we must have |Ω| ≥ n.

Proof. Let W = span{v_j : j = 1, ..., n}. It suffices to show that

ker(S_Ω) ∩ W = {0} if and only if rank(R_Ω) = n.

Suppose w is in ker(S_Ω) ∩ W. There must exist coefficients c_1, c_2, ..., c_n so that w = Σ_{j=1}^{n} c_j v_j and S_Ω w = 0. The last statement is equivalent to

[v_1(k, l), v_2(k, l), ..., v_n(k, l)] [c_1, c_2, ..., c_n]^T = 0

for each (k, l) ∈ Ω. Equivalently, we have R_Ω c = 0. Hence, c = 0 if and only if rank(R_Ω) = n. □

Since the d_1 d_2 × n matrix R = [v_1(k, l), ..., v_n(k, l)]_{(k,l)∈D} has column rank n for any kernel a, there exists a minimal choice of Ω, namely |Ω| = n, such that the square matrix R_Ω is invertible. It is hard to give a formula specifying the extra sampling set for every kernel a ∈ ℓ2(D). On the other hand, compared to the 1-variable case [8], it is more challenging to specify the rank of R_Ω analytically, since the entries of R_Ω will, in general, involve products of sinusoids mixed with exponentials.

In [8], the authors studied a typical low-pass filter with symmetric properties and gave a choice of a minimal extra sampling set Ω, since symmetry reflects the fact that there is often no preferential direction for physical kernels, and monotonicity is a reflection of energy dissipation. Similarly, we consider a kernel a with a so-called strict quadrantal symmetry: for a fixed (k, l) ∈ D, â(s, p) = â(k, l) if and only if

(s, p) ∈ {(k, l), (d_1 − k, l), (k, d_2 − l), (d_1 − k, d_2 − l)}. (2.11)

Since 𝒜_{m1,m2}(i, j) is a Vandermonde matrix, it is singular if and only if some of its columns coincide. We can compute the singularity of each 𝒜_{m1,m2}(i, j) by making use of its special structure.

Lemma 2.2. If the filter a satisfies the symmetry assumptions (2.11), then

$$\dim(\ker(A)) = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4}.$$
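This count can be verified numerically for a concrete symmetric filter. The cosine-based â below is one hypothetical kernel satisfying (2.11) (assuming no accidental coincidences among the cosine sums); the dimensions are arbitrary illustrative choices:

```python
import numpy as np

d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2 = d1 // m1, d2 // m2
N = m1 * m2

# a filter DFT with strict quadrantal symmetry (2.11):
# a_hat(s, p) depends only on (cos(2*pi*s/d1), cos(2*pi*p/d2))
s = np.arange(d1)[:, None]
p = np.arange(d2)[None, :]
a_hat = np.cos(2*np.pi*s/d1) + 2*np.cos(2*np.pi*p/d2)

# sum the nullities of all blocks A(i, j), cf. (2.9)
total_nullity = 0
for i in range(J1):
    for j in range(J2):
        nodes = np.array([a_hat[(i + k*J1) % d1, (j + l*J2) % d2]
                          for l in range(m2) for k in range(m1)])
        A = np.vander(nodes, N, increasing=True).T   # Vandermonde block (2.5)
        total_nullity += N - np.linalg.matrix_rank(A)

predicted = d1*(m2-1)//2 + d2*(m1-1)//2 - (m1-1)*(m2-1)//4
print(total_nullity, predicted)   # 23 23
```

Because each block is Vandermonde, its nullity equals the number of repeated nodes, so the numerical rank computation is just counting the coincidences forced by the symmetry.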


Clearly, we need an extra sampling set Ω ⊂ D with size dim(ker(A)). Based on Theorem 2.1, we provide a minimal Ω:

Theorem 2.3. Assume that the kernel a satisfies the strict quadrantal symmetry assumptions (2.11) and let

$$\Omega = \Big\{(k, l) : k = 1, \ldots, \tfrac{m_1-1}{2},\; l \in \mathbb{Z}_{d_2}\Big\} \cup \Big\{(k, l) : k \in \mathbb{Z}_{d_1},\; l = 1, \ldots, \tfrac{m_2-1}{2}\Big\}.$$

Then any f ∈ ℓ2(D) is recovered in a stable way from the expanded set of samples

{S_Ω f, S_X f, S_X Af, ..., S_X A^{m_1 m_2 − 1} f}. (2.12)

Remark 2.4. Note that in this case

$$|\Omega| = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4},$$

so by Theorem 2.1 and Lemma 2.2 we cannot do better in terms of its cardinality.

Proof. Set

$$n = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4}.$$

Recall that the kernels of the singular blocks 𝒜_{m1,m2}(i, j) are generated by vectors {v̂_k : k = 1, ..., n}, such that each v̂_k has exactly two nonzero components, 1 and −1 (corresponding to each pair of identical columns). Then the formula of the 2D inverse DFT gives

$$v_j(k, l) = \sum_{s=0}^{d_1-1} \sum_{p=0}^{d_2-1} \hat{v}_j(s, p)\, e^{\frac{2\pi i s k}{d_1}}\, e^{\frac{2\pi i p l}{d_2}}, \quad (k, l) \in \mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2}. \tag{2.13}$$

We define a row vector $F_1(k) = \big(1,\, e^{\frac{2\pi i k}{d_1}},\, \ldots,\, e^{\frac{2\pi i (d_1-1)k}{d_1}}\big)$ for all k ∈ Z_{d_1}. For each l = 0, 1, ..., d_2 − 1, we define a row vector F̄_2(l) of length d_2 − (m_2−1)/2, derived from the vector

$$\big(1,\, e^{\frac{2\pi i l}{d_2}},\, \ldots,\, e^{\frac{2\pi i (d_2-1)l}{d_2}}\big)$$

after deleting the entries that correspond to {sJ_2 + 1 : 1 ≤ s ≤ (m_2−1)/2}, i.e., we omit the entries $e^{\frac{2\pi i s J_2 l}{d_2}}$ for 1 ≤ s ≤ (m_2−1)/2. We reorder the vectors v_j so that [v_1(k, l), ..., v_n(k, l)] equals

$$2i\,\Big[\sin\big(\tfrac{2\pi\cdot 1\cdot l}{m_2}\big)F_1(k),\ \ldots,\ \sin\big(\tfrac{2\pi(m_2-1)l}{2m_2}\big)F_1(k),\ \sin\big(\tfrac{2\pi\cdot 1\cdot k}{m_1}\big)\bar{F}_2(l),\ \ldots,\ \sin\big(\tfrac{2\pi(m_1-1)k}{2m_1}\big)\bar{F}_2(l)\Big]$$

for every (k, l) ∈ Ω. By Theorem 2.1, the proof is complete if we show that these n = |Ω| row vectors of size n are linearly independent.

We define a row vector R(k, l) corresponding to (k, l) ∈ Ω, given by

$$2i\,\Big[\sin\big(\tfrac{2\pi\cdot 1\cdot l}{m_2}\big)F_1(k),\ \ldots,\ \sin\big(\tfrac{2\pi(m_2-1)l}{2m_2}\big)F_1(k),\ \sin\big(\tfrac{2\pi\cdot 1\cdot k}{m_1}\big)\bar{F}_2(l),\ \ldots,\ \sin\big(\tfrac{2\pi(m_1-1)k}{2m_1}\big)\bar{F}_2(l)\Big].$$


Suppose that for some coefficients {c(k, l) : (k, l) ∈ Ω}, it holds that

$$\sum_{(k,l)\in\Omega} c(k, l)\, R(k, l) = 0.$$

We need to show that all c(k, l) = 0. Note that, for a fixed k, the vector R(k, l) is compartmentalized into two components, consisting of (m_2−1)/2 scaled copies of F_1(k) and (m_1−1)/2 scaled copies of F̄_2(l). By construction, {F_1(k) : k ∈ Z_{d_1}} are linearly independent row vectors, so the coefficients multiplying each F_1(k) in the first component must vanish. Related to the first component, for every fixed k ∈ Z_{d_1} such that (k, l) ∈ Ω for some l, the following m_2 equations hold true:

$$\sum_{l:\,(k,l)\in\Omega} c(k, l)\, \sin\Big(\frac{2\pi s l}{m_2}\Big) = 0 \quad \text{for } s = 0, 1, \ldots, m_2 - 1. \tag{2.14}$$

Case I: if k ≥ (m_1+1)/2 or k = 0, then (k, l) ∈ Ω if and only if l = 1, ..., (m_2−1)/2. We restate the system of equations (2.14) in matrix form:

$$\begin{pmatrix} \sin(\frac{2\pi}{m_2}) & \sin(\frac{4\pi}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)}{m_2}) \\ \sin(\frac{4\pi}{m_2}) & \sin(\frac{8\pi}{m_2}) & \ldots & \sin(\frac{2\pi(m_2-1)}{m_2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_2-1)}{m_2}) & \sin(\frac{2\pi(m_2-1)}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)(m_2-1)}{2m_2}) \end{pmatrix} \begin{pmatrix} c(k, 1) \\ c(k, 2) \\ \vdots \\ c(k, \frac{m_2-1}{2}) \end{pmatrix} = 0.$$

The matrix on the left-hand side is invertible, since

{sin(2πx), sin(4πx), ..., sin((m_2 − 1)πx)}

is a Chebyshev system on [0, 1] (see [7]); hence we have c(k, l) = 0 for l = 1, ..., (m_2−1)/2.
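The invertibility of this sine matrix is also easy to confirm numerically for small odd m2 (a quick sanity check, not part of the proof; the tested values of m2 are arbitrary):

```python
import numpy as np

ok = {}
for m2 in (3, 5, 7, 9, 15):
    h = (m2 - 1) // 2
    s = np.arange(1, h + 1)
    M = np.sin(2*np.pi*np.outer(s, s)/m2)   # entries sin(2*pi*s*l/m2), s, l = 1..h
    ok[m2] = np.linalg.matrix_rank(M) == h
print(ok)   # every value True
```

In fact, one can check directly that M·M^T = (m2/4)·I for this index range, so M is a scaled orthogonal matrix, which gives invertibility without invoking the Chebyshev-system argument.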

Case II: if 1 ≤ k ≤ (m_1−1)/2, then (k, l) ∈ Ω if and only if l = 0, ..., d_2 − 1. Then (2.14) is equivalent to the system of equations

$$\sum_{l=0}^{d_2-1} c(k, l)\, \sin\Big(\frac{2\pi s l}{m_2}\Big) = 0 \quad \text{for } s = 1, 2, \ldots, (m_2-1)/2. \tag{2.15}$$

Related to the second component, and combined with the fact that c(k, l) = 0 if k is in Case I, for all s = 1, 2, ..., (m_1−1)/2 we have

$$\sum_{l=0}^{d_2-1} \left( \sum_{k=1}^{\frac{m_1-1}{2}} c(k, l)\, \sin\Big(\frac{2\pi s k}{m_1}\Big) \right) \bar{F}_2(l) = 0. \tag{2.16}$$

Let $\bar{F}_2 = [\bar{F}_2(0)^T, \ldots, \bar{F}_2(d_2-1)^T]$, where $\bar{F}_2(l)^T$ denotes the transpose of each row vector $\bar{F}_2(l)$; $\bar{F}_2$ is a $(d_2 - \frac{m_2-1}{2}) \times d_2$ matrix. Using matrix notation, the first equation in (2.16) can be restated as a product, namely

$$\bar{F}_2 \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, 0) \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, 1) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, d_2-1) \end{pmatrix} = 0.$$

As an easy consequence of equation (2.15), for each 1 ≤ j ≤ (m_2−1)/2 it holds that

$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big) \left( \sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big)\, c(k, l) \right) = 0, \tag{2.17}$$

which is equivalent to

$$\sum_{k=1}^{\frac{m_1-1}{2}} \sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big) \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k, l) = 0,$$

i.e.,

$$\sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big) \sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k, l) = 0. \tag{2.18}$$

We define a $\frac{m_2-1}{2} \times d_2$ matrix E as follows:

$$E = \begin{pmatrix} \sin(\frac{2\pi\cdot 0}{m_2}) & \sin(\frac{2\pi\cdot 1}{m_2}) & \ldots & \sin(\frac{2\pi(d_2-1)}{m_2}) \\ \sin(\frac{4\pi\cdot 0}{m_2}) & \sin(\frac{4\pi\cdot 1}{m_2}) & \ldots & \sin(\frac{4\pi(d_2-1)}{m_2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_2-1)\cdot 0}{m_2}) & \sin(\frac{\pi(m_2-1)\cdot 1}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)(d_2-1)}{m_2}) \end{pmatrix}.$$

Due to (2.18), we have

$$E \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, 0) \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, 1) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, d_2-1) \end{pmatrix} = 0. \tag{2.19}$$


Let $F_2 = \begin{pmatrix} E \\ \bar{F}_2 \end{pmatrix}$ be the $d_2 \times d_2$ matrix obtained by stacking E on top of $\bar{F}_2$. Then

$$F_2 \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, 0) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin(\frac{2\pi k}{m_1})\, c(k, d_2-1) \end{pmatrix} = 0.$$

Note that the d_2 × d_2 matrix F_2 is invertible, since it is the image of a series of elementary matrices (one row minus another row) acting on the d_2 × d_2 DFT matrix. Hence we have

$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k, l) = 0 \quad \text{for } l = 0, 1, \ldots, d_2 - 1. \tag{2.20}$$

After analyzing the rest of the equations in (2.16), we obtain:

$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi j k}{m_1}\Big)\, c(k, s) = 0 \quad \text{for } j = 2, \ldots, \tfrac{m_1-1}{2},\; s = 0, 1, \ldots, d_2 - 1.$$

In a similar manner, for each l = 0, ..., d_2 − 1 we obtain the matrix equation

$$\begin{pmatrix} \sin(\frac{2\pi}{m_1}) & \sin(\frac{4\pi}{m_1}) & \ldots & \sin(\frac{\pi(m_1-1)}{m_1}) \\ \sin(\frac{4\pi}{m_1}) & \sin(\frac{8\pi}{m_1}) & \ldots & \sin(\frac{2\pi(m_1-1)}{m_1}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_1-1)}{m_1}) & \sin(\frac{2\pi(m_1-1)}{m_1}) & \ldots & \sin(\frac{\pi(m_1-1)(m_1-1)}{2m_1}) \end{pmatrix} \begin{pmatrix} c(1, l) \\ c(2, l) \\ \vdots \\ c(\frac{m_1-1}{2}, l) \end{pmatrix} = 0.$$

As the matrix on the left-hand side is invertible, we must have c(k, l) = 0 for k = 1, ..., (m_1−1)/2.

We have demonstrated that c(k, l) = 0 for all (k, l) ∈ Ω. Therefore the n row vectors {R(k, l)}_{(k,l)∈Ω} are linearly independent, i.e., stability of the signal recovery is achieved. □


3. Dynamical sampling in ℓ2(Z × Z)

In this section, we aim to generalize our results to signals of infinite length. Somewhat surprisingly, there is not much difference between the techniques used in the two settings, and we feel that we can gloss over a few details in this second part without overburdening the reader.

Let D = Z × Z. We study a signal of interest f ∈ ℓ2(D) that evolves over time under the influence of an evolution operator A. The operator A is described by a convolution with a ∈ ℓ1(D), namely

$$Af(p, q) = a * f(p, q) = \sum_{k\in\mathbb{Z}} \sum_{l\in\mathbb{Z}} a(k, l)\, f(p - k,\, q - l) \quad \text{at all } (p, q) \in D.$$

Clearly, A is a bounded linear operator mapping ℓ2(D) to itself. Given integers m_1, m_2 ≥ 1, we assume m_1 and m_2 are odd numbers. We introduce a coarse sampling grid X = m_1 Z × m_2 Z and make use of a uniform sampling operator S_X, defined by (S_X f)(k, l) = f(m_1 k, m_2 l) for (k, l) ∈ D. The goal is to reconstruct f from the set of coarse samples

$$y_0 = S_X f,\quad y_1 = S_X A f,\quad \ldots,\quad y_{N-1} = S_X A^{N-1} f. \tag{3.1}$$

Similar to the work done in Section 2, we study this problem in the Fourier domain. Due to Poisson's summation formula, we have the lemma below.

Lemma 3.1. The Fourier transform of each y_l in (3.1) at (ξ, ω) ∈ T × T is

$$\hat{y}_l(\xi, \omega) = \frac{1}{m_1 m_2} \sum_{j=0}^{m_2-1} \sum_{i=0}^{m_1-1} \hat{a}^l\Big(\frac{\xi + i}{m_1},\, \frac{\omega + j}{m_2}\Big)\, \hat{f}\Big(\frac{\xi + i}{m_1},\, \frac{\omega + j}{m_2}\Big). \tag{3.2}$$

Expression (3.2) allows for a matrix representation of the dynamical sampling problem in the case of uniform subsampling. For j = 0, 1, ..., m_2 − 1, we define N × m_1 matrices

$$A_{j,m_1,m_2}(\xi, \omega) = \left( \hat{a}^k\Big(\frac{\xi + l}{m_1},\, \frac{\omega + j}{m_2}\Big) \right)_{k,l},$$

where k = 0, 1, ..., N − 1 and l = 0, 1, ..., m_1 − 1, and denote by A_{m1,m2}(ξ, ω) the block matrix

$$\big[A_{0,m_1,m_2}(\xi, \omega)\;\; A_{1,m_1,m_2}(\xi, \omega)\;\; \ldots\;\; A_{m_2-1,m_1,m_2}(\xi, \omega)\big]. \tag{3.3}$$

Let ȳ(ξ, ω) = (ŷ_0(ξ, ω), ŷ_1(ξ, ω), ..., ŷ_{N−1}(ξ, ω))^T and


$$\bar{f}(\xi, \omega) = \Big( \hat{f}\big(\tfrac{\xi}{m_1}, \tfrac{\omega}{m_2}\big),\ \hat{f}\big(\tfrac{\xi+1}{m_1}, \tfrac{\omega}{m_2}\big),\ \ldots,\ \hat{f}\big(\tfrac{\xi+m_1-1}{m_1}, \tfrac{\omega}{m_2}\big),\ \hat{f}\big(\tfrac{\xi}{m_1}, \tfrac{\omega+1}{m_2}\big),\ \ldots,\ \hat{f}\big(\tfrac{\xi+m_1-1}{m_1}, \tfrac{\omega+1}{m_2}\big),\ \ldots,\ \hat{f}\big(\tfrac{\xi}{m_1}, \tfrac{\omega+m_2-1}{m_2}\big),\ \ldots,\ \hat{f}\big(\tfrac{\xi+m_1-1}{m_1}, \tfrac{\omega+m_2-1}{m_2}\big) \Big)^T. \tag{3.4}$$

Due to (3.2), it holds that

$$\bar{y}(\xi, \omega) = \frac{1}{m_1 m_2}\, A_{m_1,m_2}(\xi, \omega)\, \bar{f}(\xi, \omega). \tag{3.5}$$

Proposition 2. ISP is satisfied if and only if A_{m1,m2}(ξ, ω), as defined in (3.3), has full column rank m_1 m_2 at a.e. (ξ, ω) ∈ T × T, where T = [0, 1) under addition modulo 1. SSP is satisfied if and only if A_{m1,m2}(ξ, ω) has full rank for all (ξ, ω) ∈ T × T.

By Proposition 2, we conclude that N ≥ m_1 m_2. In particular, if N = m_1 m_2, then A_{m1,m2}(ξ, ω) is a square matrix; we denote this square matrix by 𝒜_{m1,m2}(ξ, ω).

Corollary 1. When N = m_1 m_2, the invertibility sampling property is equivalent to the condition

det 𝒜_{m1,m2}(ξ, ω) ≠ 0 for a.e. (ξ, ω) ∈ T × T.

Since 𝒜_{m1,m2}(ξ, ω) has continuous entries, the stable sampling property is equivalent to

det 𝒜_{m1,m2}(ξ, ω) ≠ 0 for all (ξ, ω) ∈ T × T.

From here on we assume N = m_1 m_2. By its structure, 𝒜_{m1,m2}(ξ, ω) is a Vandermonde matrix; thus it is singular at (ξ, ω) ∈ T × T if and only if some of its columns coincide. In case 𝒜_{m1,m2}(ξ, ω) is singular, no matter how many times we resample the evolved states A^n f, n > N − 1, on the grid Ω_o = m_1 Z × m_2 Z, the additional data will not add anything new in terms of recovery and stability. In such a case, we need to consider adding extra sampling locations to overcome the singularities of 𝒜_{m1,m2}(ξ, ω).


3.1. Additional sampling locations. If 𝒜_{m1,m2}(ξ, ω) is singular at some (ξ, ω), then by Corollary 1 the recovery of f ∈ ℓ2(Z²) is not stable. To remove the singularities and achieve stable recovery, some extra sampling locations need to be added. The additional sampling locations depend on the positions of the singularities of 𝒜_{m1,m2}(ξ, ω) that we want to remove. We propose a quasi-uniform way of constructing the extra sampling locations and give a characterization specifying when the singularity will be removed. We then use this method to remove the singularities of a strictly quadrantally symmetric convolution operator.

Let the additional sampling set be given by

Ω = {X + (c_1, c_2) | (c_1, c_2) ∈ W ⊂ Z_{m1} × Z_{m2}}. (3.6)

Let T_{c1,c2} denote the translation operator on ℓ2(Z²), so that T_{c1,c2} f(k, l) = f(k + c_1, l + c_2) for all (k, l) ∈ Z². We employ a shifted sampling operator S_X T_{c1,c2} to take extra samples at the initial time instance; this means that our subsampling grid is shifted from X = m_1 Z × m_2 Z to (c_1, c_2) + X, and the extra samples are given as

$$h^{c_1,c_2}_{m_1,m_2} = S_{m_1,m_2} T_{c_1,c_2} f, \quad (c_1, c_2) \in W. \tag{3.7}$$

Set

$$u_{c_1,c_2}(s, p) = e^{\frac{2\pi i c_1 s}{m_1}}\, e^{\frac{2\pi i c_2 p}{m_2}}, \quad (s, p) \in \mathbb{Z}_{m_1} \times \mathbb{Z}_{m_2}.$$

By taking the Fourier transform of the samples on the additional sampling set Ω, we obtain

$$\hat{h}^{c_1,c_2}_{m_1,m_2}(\xi, \omega) = \frac{e^{2\pi i\big(\frac{c_1\xi}{m_1} + \frac{c_2\omega}{m_2}\big)}}{m_1 m_2} \sum_{s=0}^{m_1-1} \sum_{p=0}^{m_2-1} u_{c_1,c_2}(s, p)\, \hat{f}\Big(\frac{\xi + s}{m_1},\, \frac{\omega + p}{m_2}\Big). \tag{3.8}$$

For each (c_1, c_2) ∈ W, we define a row vector

$$u_{c_1,c_2} = \big(u_{c_1,c_2}(s, p)\big)_{(s,p)\in \mathbb{Z}_{m_1}\times\mathbb{Z}_{m_2}}$$

with terms arranged in the same order as the terms in the vector f̄(ξ, ω) in (3.4). We organize the vectors u_{c1,c2} in a matrix Ū = (u_{c1,c2})_{(c1,c2)∈W} and extend the data vector ȳ(ξ, ω) in (3.5) into a longer vector Y(ξ, ω) by adding the entries

$$\Big\{ e^{-\frac{2\pi i c_1\xi}{m_1}}\, e^{-\frac{2\pi i c_2\omega}{m_2}}\, \big(S_{m_1,m_2} T_{c_1,c_2} f\big)^{\wedge}(\xi, \omega) \Big\}_{(c_1,c_2)\in W}.$$

Then (3.2) and (3.8) can be combined into the matrix equation

$$Y(\xi, \omega) = \frac{1}{m_1 m_2} \begin{pmatrix} \bar{U} \\ \mathcal{A}_{m_1,m_2}(\xi, \omega) \end{pmatrix} \bar{f}(\xi, \omega). \tag{3.9}$$


Proposition 3. If a left inverse of

$$\begin{pmatrix} \bar{U} \\ \mathcal{A}_{m_1,m_2}(\xi, \omega) \end{pmatrix}$$

exists for every (ξ, ω) ∈ T², then the vector f can be uniquely and stably recovered from the combined samples (3.1) and (3.6) via (3.9).

If the following property holds true:

$$\ker(\bar{U}) \cap \ker(\mathcal{A}_{m_1,m_2}(\xi, \omega)) = 0 \tag{3.10}$$

for every (ξ, ω) in T², we say that W removes the singularities of 𝒜_{m1,m2}(ξ, ω); in such a case, the assumption in Proposition 3 is satisfied.

Corollary 2. If W removes the singularities of 𝒜_{m1,m2}(ξ, ω), then

$$|W| \geq \dim\big(\ker(\mathcal{A}_{m_1,m_2}(\xi, \omega))\big)$$

for every (ξ, ω).

3.2. Strictly quadrantally symmetric convolution operators. We consider a filter a such that â has the strict quadrantal symmetry property, i.e., â(ξ_1, ω_1) = â(ξ_2, ω_2) for (ξ_1, ω_1), (ξ_2, ω_2) ∈ T × T = T² if and only if one of the following conditions is satisfied:

1. ξ_1 = ξ_2, ω_1 + ω_2 = 1;
2. ξ_1 + ξ_2 = 1, ω_1 = ω_2;
3. ξ_1 + ξ_2 = 1, ω_1 + ω_2 = 1.

The following result is a direct consequence of the symmetry assumptions listed in conditions 1–3.

Proposition 4. If â(ξ, ω) has the strict quadrantal symmetry property, then det 𝒜_{m1,m2}(ξ, ω) = 0 when ξ = 0 or ω = 0. Moreover, the kernel of each 𝒜_{m1,m2}(ξ, ω) is a subspace of the kernel of one of the following four matrices:

$$\mathcal{A}_{m_1,m_2}(0, 0),\quad \mathcal{A}_{m_1,m_2}\big(\tfrac{1}{2}, 0\big),\quad \mathcal{A}_{m_1,m_2}\big(0, \tfrac{1}{2}\big),\quad \mathcal{A}_{m_1,m_2}\big(\tfrac{1}{2}, \tfrac{1}{2}\big).$$

From Proposition 4, for a strictly quadrantally symmetric kernel we need to consider only the points (ξ, ω) ∈ {(0, 0), (0, 1/2), (1/2, 0), (1/2, 1/2)} and construct the set W such that it removes the singularities of the above four matrices.

Proposition 5. If â has the strict quadrantal symmetry property, then

$$\dim\big(\ker(\mathcal{A}_{m_1,m_2}(\xi, \omega))\big) = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2}\cdot\frac{m_1+1}{2}$$

for every (ξ, ω) ∈ {(0, 0), (0, 1/2), (1/2, 0), (1/2, 1/2)}.

Proof. We discuss here in depth only the case ξ = ω = 1/2; the proof in the other three cases is analogous. Because 𝒜_{m1,m2}(1/2, 1/2) is a Vandermonde matrix, its rank is equal to the number of its distinct columns. It is easy to show that

$$\hat{a}\Big(\frac{\frac{1}{2} + s}{m_1},\, \frac{\frac{1}{2} + p}{m_2}\Big) = \hat{a}\Big(\frac{\frac{1}{2} + k}{m_1},\, \frac{\frac{1}{2} + l}{m_2}\Big)$$

is satisfied if and only if one of the following holds true:

(1) s = k, p + l = m_2 − 1;
(2) p = l, s + k = m_1 − 1;
(3) s + k = m_1 − 1, p + l = m_2 − 1;

using which we can easily compute that

$$\dim\big(\ker(\mathcal{A}_{m_1,m_2}(\tfrac{1}{2}, \tfrac{1}{2}))\big) = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2}\cdot\frac{m_1+1}{2} = n. \qquad \Box$$

Let

W = W_1 ∪ W_2, (3.11)

where

$$W_1 = \Big\{1, \ldots, \frac{m_1-1}{2}\Big\} \times \{0, \ldots, m_2-1\},\qquad W_2 = \{0, \ldots, m_1-1\} \times \Big\{1, \ldots, \frac{m_2-1}{2}\Big\}.$$

Remark 3.2. When W is defined as in (3.11), we have

$$|W| = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2}\cdot\frac{m_1+1}{2};$$

by Corollary 2, W has the minimal possible size.
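The cardinality claim in Remark 3.2 is a short inclusion-exclusion computation over W_1 and W_2, which can be checked directly (the values m1 = 5, m2 = 7 below are arbitrary odd choices):

```python
# enumerate the extra-shift set W of (3.11) and check Remark 3.2's count
m1, m2 = 5, 7
W1 = {(c1, c2) for c1 in range(1, (m1 - 1)//2 + 1) for c2 in range(m2)}
W2 = {(c1, c2) for c1 in range(m1) for c2 in range(1, (m2 - 1)//2 + 1)}
W = W1 | W2
# |W| = |W1| + |W2| - |W1 ∩ W2| collapses to the closed form of Remark 3.2
predicted = (m1 - 1)*m2//2 + ((m2 - 1)//2)*((m1 + 1)//2)
print(len(W), predicted)   # 23 23
```

The overlap W_1 ∩ W_2 has (m1−1)/2 · (m2−1)/2 elements, which is what turns the naive sum of the two block sizes into the closed form above.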

Theorem 3.3. Let a ∈ ℓ1(D) be the filter such that the evolution operator is given by Ax = a ∗ x. Suppose â satisfies the strict quadrantal symmetry property defined at the beginning of Subsection 3.2. Let Ω be as in (3.6), with W specified in (3.11). Then any f ∈ ℓ2(D) can be recovered in a stable way from the expanded set of samples

{S_Ω f, S_X f, ..., S_X A^{m_1 m_2 − 1} f}. (3.12)

Proof. It suffices to show that for every (ξ, ω) ∈ T × T, it holds that

$$\ker(\bar{U}) \cap \ker(\mathcal{A}_{m_1,m_2}(\xi, \omega)) = 0. \tag{3.13}$$

By Proposition 4, we only need to study the kernels of the four matrices

$$\mathcal{A}_{m_1,m_2}(0, 0),\quad \mathcal{A}_{m_1,m_2}\big(\tfrac{1}{2}, 0\big),\quad \mathcal{A}_{m_1,m_2}\big(0, \tfrac{1}{2}\big),\quad \mathcal{A}_{m_1,m_2}\big(\tfrac{1}{2}, \tfrac{1}{2}\big). \tag{3.14}$$

We discuss here in depth the case ξ = ω = 1/2. The kernel Z := ker(𝒜_{m1,m2}(1/2, 1/2)) is a subspace of C^{m_1 m_2}. By Proposition 5, the dimension of Z is n. Taking advantage of the fact that 𝒜_{m1,m2}(1/2, 1/2) is a Vandermonde matrix, we can choose a basis {v_j : j = 1, ..., n} for Z such that each v_j has only two nonzero entries, 1 and −1. Let v ∈ ker(Ū) ∩ Z; then there exists c = (c(i))_{i=1,...,n} such that v = Σ_{i=1}^{n} c(i) v_i. Define an n × n matrix R whose row corresponding to a fixed (c_1, c_2) ∈ W is

$$\Big[ \big(e^{\frac{2\pi i(m_1-1)c_1}{m_1}} - e^{\frac{2\pi i\cdot 0\cdot c_1}{m_1}}\big)F_2(c_2),\ \ldots,\ \big(e^{\frac{2\pi i(m_1+1)c_1}{2m_1}} - e^{\frac{2\pi i(m_1-3)c_1}{2m_1}}\big)F_2(c_2),\ \big(e^{\frac{2\pi i(m_2-1)c_2}{m_2}} - e^{\frac{2\pi i\cdot 0\cdot c_2}{m_2}}\big)\bar{F}_1(c_1),\ \ldots,\ \big(e^{\frac{2\pi i(m_2+1)c_2}{2m_2}} - e^{\frac{2\pi i(m_2-3)c_2}{2m_2}}\big)\bar{F}_1(c_1) \Big].$$

Then Ū v = 0, which is equivalent to Rc = 0. Using the same strategy as in the proof of Theorem 2.3, it can be demonstrated that these n row vectors of R are linearly independent. With slight adaptations of the strategy used so far, we can come to the same conclusion for the other three matrices in (3.14). As a consequence of Proposition 3, stability is achieved. □

4. Conclusion

In this paper, we studied the spatiotemporal trade-off in two-variable discrete spatially invariant evolution systems driven by a single convolution filter, in both the finite and the infinite case. We characterized the spectral properties of the filters that allow recovery of the initial state from uniformly under-sampled future states, and described a way to add extra spatial sampling locations that stably recover the signal when the filters fail these constraints. Compared to the one-variable case, the singularity problems caused by the structure of the filters are more complicated and harder to solve. We gave explicit constructions of extra spatial sampling locations that resolve the singularity issues caused by strictly quadrantally symmetric filters. Our results can be adapted to the general multivariable case, and different kinds of symmetry assumptions can be imposed on the filters. The problem of finding the right additional spatiotemporal sampling locations for other types of filters remains open and requires further study.

ACKNOWLEDGEMENT

We would like to thank Akram Aldroubi for his helpful discussions and comments. The research of Armenak Petrosyan and Sui Tang is partially supported by NSF Grant DMS-1322099.

References

[1] R. Aceska, A. Aldroubi, J. Davis and A. Petrosyan, Dynamical Sampling in Shift-Invariant

Spaces, AMS Contemporary Mathematics (CONM) book series, 2013.

[2] R. Aceska and S. Tang, Dynamical Sampling in Hybrid Shift Invariant Spaces, AMS

Contemporary Mathematics (CONM) book series, 2014.

[3] B. Adcock and A. Hansen, A generalized sampling theorem for stable reconstructions in

arbitrary bases, J. Fourier Anal. Appl.,18(4), 685–716, 2012.

[4] A. Aldroubi, U. Molter, C. Cabrelli and S. Tang, Dynamical Sampling. ArXiv 1409.8333.


[5] A. Aldroubi and M. Unser, Sampling Procedures in Function Spaces and Asymptotic

equivalence with Shannon’s sampling theory, Numer. Func. Anal. and Opt.,15, 1–21,

1994.

[6] A. Aldroubi and K. Gr¨ochenig, Nonuniform sampling and reconstruction in shift-invariant

spaces, SIAM Rev.,43, 585–620, 2001.

[7] Andrei Osipov, Vladimir Rokhlin and Hong Xiao, Prolate Spheroidal Wave Functions of

Order Zero: Mathematical Tools for Bandlimited Approximation, Springer Science and

Business Media, 2013.

[8] A. Aldroubi, J. Davis and I. Krishtal, Dynamical Sampling: Time Space Trade-oﬀ, Appl.

Comput. Harmon. Anal.,34(3), 495–503, 2013.

[9] A. Aldroubi, J. Davis and I. Krishtal, Exact Reconstruction of Signals in Evolutionary

Systems Via Spatiotemporal Trade-oﬀ, J. Fourier Anal. Appl.,21(1), 11–31, 2015.

[10] N. Atreas, Perturbed sampling formulas and local reconstruction in shift invariant spaces,

J. Math. Anal. Appl.,377, 841–852, 2011.

[11] A. G. Garc´ıa and G. P´erez-Villal´on, Multivariate generalized sampling in shift-invariant

spaces and its approximation properties, J. Math. Anal. Appl.,355, 397–413, 2009.

[12] P. Jorgensen and Feng Tian, Discrete reproducing kernel Hilbert spaces: Sampling and

distribution of Dirac-masses. ArXiv:1501.02310.

[13] Y. Lu and M. Vetterli, Spatial super-resolution of a diﬀusion ﬁeld by temporal oversam-

pling in sensor networks, Proc. IEEE Int. Conf. Acoust., Speech and Signal Process. 2009

(ICASSP 2009), 2249 –2252, 2009.

[14] Z. Nashed and Q. Sun, Sampling and reconstruction of signals in a reproducing kernel

subspace of Lp(Rd), J. Funct. Anal.,258, 2422–2452, 2010.

[15] J. Ranieri, A. Chebira, Y. M. Lu and M. Vetterli, Sampling and reconstructing diﬀusion

ﬁelds with localized sources, Proc. IEEE Int. Conf. Acoust., Speech and Signal Process.

2011 (ICASSP 2011), 4016–4019, 2011.

[16] W. Sun, Sampling theorems for multivariate shift invariant subspaces, Sampl. Theory

Signal Image Process.,4, 73–98, 2005.

[17] Q. Sun, Local reconstruction for sampling in shift-invariant spaces, Adv. Comput. Math.,

32, 335–352, 2010.

[18] Q. Sun, Nonuniform average sampling and reconstruction of signals with ﬁnite rate of

innovation, SIAM J. Math. Anal.,38(5), 1389–1422, 2006.

[19] S. Tang, A Generalized Prony Method for Filter Recovery in Evolutionary System via

Spatiotemporal Trade Oﬀ, Preprint. ArXiv:1502.0274.