# Multidimensional signal recovery in discrete evolution systems via spatiotemporal trade off

SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING
Vol. 14, No. 2, 2015, pp. 153–169
© 2015 SAMPLING PUBLISHING
ISSN: 1530-6429
Multidimensional Signal Recovery in Discrete Evolution Systems via Spatiotemporal Trade Off
Roza Aceska
Department of Mathematical Sciences, Ball State University
Muncie, IN, USA
Armenak Petrosyan
Department of Mathematics, Vanderbilt University
Nashville, TN, 37240, USA
Sui Tang
Department of Mathematics, Vanderbilt University
Nashville, TN, 37240, USA
Abstract. The problem of recovering an evolving signal from a set of samples taken at different time instances has been well-studied for one-variable signals modeled by $\ell^2(\mathbb{Z}_d)$ and $\ell^2(\mathbb{Z})$. However, most observed time-variant signals in applications are described by at least two spatial variables. In this paper, we study the spatiotemporal sampling pattern to recover the initial signals modeled by $\ell^2(\mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2})$ and $\ell^2(\mathbb{Z} \times \mathbb{Z})$, which are evolving in a discrete evolution system, and provide specific reconstruction results.

Key words and phrases: Distributed sampling, reconstruction, frames.

2010 AMS Mathematics Subject Classification: 94A20, 94A12, 42C15, 15A29.
1. Introduction
The ongoing development [1, 2, 4, 8, 9, 19] in sampling theory suggests combining coarse spatial samples of a signal's initial state with its later-time samples. In these cases, the time-dependency among samples permits a reduction in the number of expensive sensors used, at the cost of increasing their usage frequency. The reconstruction of the initial distribution of a signal is achieved by exploiting the evolutionary nature of that signal under certain constraints, which is not fully considered in classical sampling problems [3, 5, 6, 10, 12, 14, 16, 17, 18].
The so-called dynamical sampling problem (motivated by [13, 15]) has been well-studied in the one-variable setting [1, 2, 4, 8, 9], but there have been no results in the multivariable setting. In industrial applications (sampling of air pollution, wireless networks), the observed time-variant signals are described by at least two variables. In this paper, we formulate the problem of spatiotemporal sampling for two-variable data and provide specific reconstruction results.
1.1. Stating the Dynamical Sampling Problem. In real-life situations, physical systems evolve over time under the influence of a family of operators $\{A_t\}_{t \ge 0}$. Let $f_0$ be the initial state defined on a domain $D$. The dynamical sampling problem asks when we can recover the initial state $f_0$ from the spatiotemporal sampling data $\{S_{X_{t_i}} A_{t_i} f_0 : i = 0, \ldots, N-1\}$, where $S_{X_{t_i}}$ is a subsampling operator defined by a coarse sampling set $X_i \subseteq D$ at time instances $t_i$, $i = 0, \ldots, N-1$. In other words, we would like to compensate for the lack of sufficient samples of the initial state by adding coarse samples of the evolved states $\{A_{t_i} f_0 = f_{t_i},\ i = 0, \ldots, N-1\}$. In this way, we can use fewer sampling devices, saving budget while losing no information.
The dynamical sampling problem is solved when conditions on the sampling sets and time instances $t_i$ are found such that recovery of the signal is possible, preferably in a stable way. That is, if one (or both) of the following properties is satisfied:

ISP (Invertibility sampling property). The operators $A_{t_0}, \ldots, A_{t_{N-1}}$, the sampling sets $X_{t_0}, \ldots, X_{t_{N-1}}$ and the number of repeated sampling procedures satisfy this condition within a class of signals if any signal $h$ in that class is uniquely determined by its sample data set.

SSP (Stability sampling property). The operators, the sampling sets and the number of repeated samplings satisfy this condition in a fixed class of signals if, for any two signals $h, h_1$ in that class, the two norms $\|h - h_1\|_2^2$ and $\sum_{i=0}^{N-1} \|S_{X_{t_i}} A_{t_i}(h - h_1)\|_2^2$ are equivalent.
SSP is clearly a stronger property and implies ISP. In [1, 2, 8, 9] the authors have studied the dynamical sampling problem for the discrete spatially invariant evolution system, in which the initial state $f$ is defined on the domain $D = \mathbb{Z}_d$ or $\mathbb{Z}$ under certain constraints. At time instance $t = n \le N$, the initial state $f$ is altered by convolution with a filter $a$ applied $n$ times, to be $A^n(f) = a * a * \cdots * a * f = a^n * f$. At each time instance $t = n$, the altered state $A^n(f)$ is under-sampled at a uniform subsampling rate $m$. The invertibility and stability questions have been fully answered under these specific constraints. Namely, for a uniform discrete sampling grid $X = mD \subseteq D$, specific conditions on $a$ and $N$ are stated so that a function $f$ can be recovered from the samples
$$\{f(X),\ a * f(X),\ \cdots,\ (a^{N-1} * f)(X)\}, \quad \text{for } X \subseteq D. \tag{1.1}$$
The multidimensional dynamical sampling problem we consider in this paper has similarities with problems considered by other authors. For example, in [11], the authors work in a multivariable shift-invariant space (MSIS) setting and study linear systems $\{L_j : j = 1, \cdots, s\}$ such that one can recover any $f$ in the MSIS by uniformly downsampling the functions $\{(L_j f) : j = 1, \cdots, s\}$, i.e., by taking the generalized samples $\{(L_j f)(M\alpha)\}_{\alpha \in \mathbb{Z}^d,\, j = 1, \cdots, s}$. In dynamical sampling, there is only one convolution operator $A$, and it is applied iteratively to the function $f$. This iterative structure is important for our analysis of the kernel of the arising matrix, and using that special structure we are able to add extra samples outside of the initial uniform sampling grid and get full recovery of the signal.
In addition, certain singularity problems, which can occur due to specific properties of $a$ when sampling on a uniform grid $X$, have been successfully overcome in the cited papers by adding extra samples. Since most real-life phenomena are described by functions of multiple variables, we find it important to extend the dynamical sampling concept to the two-variable setting, i.e., $D = \mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2}$ and $\mathbb{Z} \times \mathbb{Z}$. As we will see later, the two-variable problem is more complicated in structure, and we find the singularity problems more subtle to overcome. Studying the stated problem in the three- (and higher-) variable setting would require coping techniques similar to the ones we use in this paper to extend the domain from one to two dimensions.
2. Dynamical sampling on $\mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2}$
For a positive integer $d$, $\mathbb{Z}_d$ denotes the finite group of integers modulo $d$. In the finite discrete setting, we work on the domain $D = \mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2}$, $d_1, d_2 \in \mathbb{N}^+$. Let the operator $A$ act on the signal of interest $f \in \ell^2(D)$ as a convolution with some $a \in \ell^1(D)$, given by
$$Af(k,l) = a * f(k,l) = \sum_{(s,p) \in D} a(s,p)\, f(k-s,\ l-p), \quad \text{for all } (k,l) \in D. \tag{2.1}$$
Note that $A$ is a bounded linear operator that maps $\ell^2(D)$ to itself. The initial signal $f$ evolves in time under the repeated effect of $A$, such that at time instance $t = n$ the evolved signal is $f_n = A^n f = a * a * \cdots * a * f$ (and $f = f_0 = A^0 f$).
We assume that $d_1$ and $d_2$ are odd numbers such that $d_i = J_i m_i$ for integers $m_i \ge 1$, $J_i \ge 1$, $i = 1, 2$. We set the sampling sensors on a uniform coarse grid $X = m_1\mathbb{Z}_{d_1} \times m_2\mathbb{Z}_{d_2}$ to sample the initial state $f$ and its temporally evolved states $Af, A^2f, \ldots, A^{N-1}f$. Note that, given such a coarse sampling grid, each individual measurement is insufficient for recovery of the sampled state.

Let $S_X = S_{m_1,m_2}$ denote the assigned subsampling operator related to the sampling grid. Specifically,
$$(S_X f)(k,l) = \begin{cases} f(k,l) & \text{if } (k,l) \in X \\ 0 & \text{otherwise.} \end{cases} \tag{2.2}$$
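To make the setup concrete, here is a minimal numpy sketch of the evolution operator (2.1) and the subsampling operator (2.2). The domain sizes, the random kernel and the seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

d1, d2 = 9, 15          # odd domain sizes, D = Z_{d1} x Z_{d2}
m1, m2 = 3, 3           # subsampling rates, so J1 = 3 and J2 = 5
rng = np.random.default_rng(0)

a = rng.standard_normal((d1, d2))   # filter a
f = rng.standard_normal((d1, d2))   # initial state f in l^2(D)

def convolve(a, f):
    """Circular convolution (2.1) on Z_{d1} x Z_{d2}, computed via the 2-D DFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(f)))

def subsample(g, m1, m2):
    """S_X: keep the values on X = m1 Z_{d1} x m2 Z_{d2}, zero elsewhere."""
    out = np.zeros_like(g)
    out[::m1, ::m2] = g[::m1, ::m2]
    return out

Af = convolve(a, f)                              # evolved state A f
y0, y1 = subsample(f, m1, m2), subsample(Af, m1, m2)
print(np.count_nonzero(y0), "of", d1 * d2, "entries kept per time instance")
```

With $m_1 = m_2 = 3$ only $J_1 J_2 = 15$ of the $135$ entries survive each subsampling, which is why a single time instance cannot determine $f$.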
For some $N \ge 2$, our objective is to reconstruct $f$ from the combined coarse sample set
$$\{y_j = S_X(A^j f)\}, \quad j = 0, 1, \ldots, N-1. \tag{2.3}$$
We denote by $\mathcal{F}$ the 2-dimensional discrete Fourier transform (2-D DFT) and use the notation $\hat{x} = \mathcal{F}(x)$. After applying $\mathcal{F}$ to (2.3), due to the two-dimensional Poisson summation formula, we obtain
$$\hat{y}_n(i,j) = \frac{1}{m_1 m_2} \sum_{k=0}^{m_1-1} \sum_{l=0}^{m_2-1} \hat{a}^n(i + kJ_1,\ j + lJ_2)\, \hat{f}(i + kJ_1,\ j + lJ_2) \tag{2.4}$$
for $(i,j) \in \mathcal{I} = \{0, \cdots, J_1-1\} \times \{0, \cdots, J_2-1\}$ and $n = 0, 1, \ldots, N-1$.
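As a sanity check, the aliasing relation (2.4) can be verified numerically. The sketch below uses numpy's FFT (forward transform with the $e^{-2\pi i}$ convention); the sizes, the random kernel and the probed frequency $(i,j)$ are illustrative assumptions.

```python
import numpy as np

d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2 = d1 // m1, d2 // m2
rng = np.random.default_rng(1)
a = rng.standard_normal((d1, d2))
f = rng.standard_normal((d1, d2))
a_hat, f_hat = np.fft.fft2(a), np.fft.fft2(f)

def subsample(g):
    out = np.zeros_like(g)
    out[::m1, ::m2] = g[::m1, ::m2]
    return out

n = 2
Anf = np.real(np.fft.ifft2(a_hat**n * f_hat))    # A^n f via the DFT
y_hat = np.fft.fft2(subsample(Anf))              # DFT of the subsampled state

# right-hand side of (2.4): average over the m1*m2 aliases of a^n * f
i, j = 1, 2                                      # some point of the index set I
ks, ls = np.arange(m1) * J1, np.arange(m2) * J2
rhs = sum(a_hat[(i + k) % d1, (j + l) % d2]**n * f_hat[(i + k) % d1, (j + l) % d2]
          for k in ks for l in ls) / (m1 * m2)
print(np.allclose(y_hat[i, j], rhs))             # the aliasing identity
```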
Let $\bar{y}(i,j) = (\hat{y}_0(i,j)\ \ \hat{y}_1(i,j)\ \ldots\ \hat{y}_{N-1}(i,j))^T$ for $(i,j) \in \mathcal{I}$, and let
$$\bar{f}(i,j) = \Big(\hat{f}(i + kJ_1,\ j + lJ_2)\Big)^T_{\,k = 0, \ldots, m_1-1;\ l = 0, \ldots, m_2-1},$$
a column vector of length $m_1 m_2$ in which the index $k$ runs fastest: the first $m_1$ entries are $\hat{f}(i,j), \hat{f}(i+J_1,\, j), \ldots, \hat{f}(i+(m_1-1)J_1,\, j)$, the next $m_1$ entries carry $j + J_2$ in the second slot, and so on up to $j + (m_2-1)J_2$.
We use the block matrices
$$A_{l,m_1 m_2}(i,j) = \begin{pmatrix} 1 & 1 & \ldots & 1 \\ \hat{a}(i,\ j+lJ_2) & \hat{a}(i+J_1,\ j+lJ_2) & \ldots & \hat{a}(i+(m_1-1)J_1,\ j+lJ_2) \\ \vdots & \vdots & \ddots & \vdots \\ \hat{a}^{N-1}(i,\ j+lJ_2) & \hat{a}^{N-1}(i+J_1,\ j+lJ_2) & \ldots & \hat{a}^{N-1}(i+(m_1-1)J_1,\ j+lJ_2) \end{pmatrix},$$
where $l = 0, 1, \ldots, m_2-1$, to define the $N \times m_1 m_2$ matrix
$$\mathcal{A}_{m_1,m_2}(i,j) = [A_{0,m_1 m_2}(i,j)\ \ A_{1,m_1 m_2}(i,j)\ \ldots\ A_{m_2-1,m_1 m_2}(i,j)] \tag{2.5}$$
for all $(i,j) \in \mathcal{I}$. Equations (2.4) have the form of vector inner products, so we restate them in matrix product form:
$$\bar{y}(i,j) = \frac{1}{m_1 m_2}\, \mathcal{A}_{m_1,m_2}(i,j)\, \bar{f}(i,j). \tag{2.6}$$
By equation (2.6), we need $N \ge m_1 m_2$ to be able to recover the signal $f$. Note that for $N = m_1 m_2$, matrix (2.5) is square; we denote this special square matrix by $\mathbf{A}_{m_1,m_2}(i,j)$ and obtain the following reconstruction result:
Proposition 1. For $N = m_1 m_2$, the SSP is satisfied if and only if
$$\det \mathbf{A}_{m_1,m_2}(i,j) \ne 0 \quad \text{for all } (i,j) \in \mathcal{I}. \tag{2.7}$$

In the finite-dimensional case, unique reconstruction is equivalent to stable reconstruction, so SSP and ISP coincide. When (2.7) holds true, the signal is recovered from the system of equations
$$\bar{f}(i,j) = m_1 m_2\, \mathbf{A}_{m_1,m_2}^{-1}(i,j)\, \bar{y}(i,j), \quad (i,j) \in \mathcal{I}.$$
As expected, Proposition 1 reduces to the respective result in [8] when $d = d_1$ and $d_2 = 1$, or $d = d_2$ and $d_1 = 1$.
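The recovery pipeline of Proposition 1 can be sketched as follows. To keep the Vandermonde blocks well conditioned, this sketch specifies the kernel through its DFT as distinct roots of unity; that choice (and the sizes) is an illustrative assumption, and any kernel satisfying (2.7) would do.

```python
import numpy as np

d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2, N = d1 // m1, d2 // m2, m1 * m2
rng = np.random.default_rng(2)

# complex kernel defined through its DFT: distinct nodes, so (2.7) holds
t = np.arange(d1 * d2).reshape(d1, d2)
a_hat = np.exp(2j * np.pi * t / (d1 * d2))
f = rng.standard_normal((d1, d2))
f_hat = np.fft.fft2(f)

# spatiotemporal samples y_n = S_X A^n f for n = 0..N-1, and their DFTs
y_hats = []
for n in range(N):
    Anf = np.fft.ifft2(a_hat**n * f_hat)
    yn = np.zeros((d1, d2), dtype=complex)
    yn[::m1, ::m2] = Anf[::m1, ::m2]
    y_hats.append(np.fft.fft2(yn))

f_hat_rec = np.zeros((d1, d2), dtype=complex)
for i in range(J1):
    for j in range(J2):
        # alias frequencies, ordered with k fastest as in the vector f-bar
        freqs = [((i + k*J1) % d1, (j + l*J2) % d2)
                 for l in range(m2) for k in range(m1)]
        A = np.array([[a_hat[s, p]**n for (s, p) in freqs] for n in range(N)])
        ybar = np.array([y_hats[n][i, j] for n in range(N)])
        f_hat_rec[tuple(np.array(freqs).T)] = m1 * m2 * np.linalg.solve(A, ybar)

print(np.allclose(f_hat_rec, f_hat))    # recovery of the initial state
```

Each of the $J_1 J_2$ linear systems is only $m_1 m_2 \times m_1 m_2$, so a direct solver costs $O(J_1 J_2 (m_1 m_2)^3)$ overall.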
2.1. Extra samples for stable spatiotemporal sampling. Proposition 1 gives a complete characterization of stable recovery from the dynamical samples (2.3). In practice, however, we may not have an ideal filter $a$ such that (2.7) holds true. For instance, consider a kernel $a$ with a so-called quadrantal symmetry, i.e., let
$$\hat{a}(s,p) = \hat{a}(d_1 - s,\ p) = \hat{a}(s,\ d_2 - p) = \hat{a}(d_1 - s,\ d_2 - p)$$
for all $(s,p) \in D$. Since (2.5) is a Vandermonde matrix, it is singular if and only if some of its columns coincide. In this case, it is easy to see that $\mathbf{A}_{m_1,m_2}(0,0)$ is singular, which prevents stable reconstruction.
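This failure mode is easy to reproduce numerically: symmetrizing a random kernel over the quadrantal group forces pairs of columns of the block at $(i,j) = (0,0)$ to coincide. The sketch below (sizes and seed are illustrative assumptions) detects the singularity through the smallest singular value.

```python
import numpy as np

d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2, N = d1 // m1, d2 // m2, m1 * m2
rng = np.random.default_rng(3)

def quad_sym(g):
    """Average g over the quadrantal symmetry group s -> d1-s, p -> d2-p (mod d)."""
    gs = np.roll(g[::-1, :], 1, axis=0)            # s -> d1 - s
    gp = np.roll(g[:, ::-1], 1, axis=1)            # p -> d2 - p
    gsp = np.roll(np.roll(g[::-1, ::-1], 1, axis=0), 1, axis=1)
    return (g + gs + gp + gsp) / 4

def block(a_hat, i, j):
    """The N x (m1*m2) matrix of (2.5) at (i, j)."""
    freqs = [((i + k*J1) % d1, (j + l*J2) % d2)
             for l in range(m2) for k in range(m1)]
    return np.array([[a_hat[s, p]**n for (s, p) in freqs] for n in range(N)])

a = quad_sym(rng.standard_normal((d1, d2)))        # quadrantally symmetric kernel
a_hat = np.fft.fft2(a)
a_hat /= np.abs(a_hat).max()                       # normalize for conditioning

s = np.linalg.svd(block(a_hat, 0, 0), compute_uv=False)
print("numerically singular:", s[-1] / s[0] < 1e-8)
```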
Motivated by the above example, we propose a way of taking extra samples to overcome the lack of reconstruction uniqueness whenever singularities of matrix (2.5) occur. Let
$$\mathcal{A} = \begin{pmatrix} \mathbf{A}_{m_1,m_2}(0,0) & 0 & \ldots & 0 \\ 0 & \mathbf{A}_{m_1,m_2}(1,0) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \mathbf{A}_{m_1,m_2}(J_1-1,\ J_2-1) \end{pmatrix}$$
and let $\bar{f}$ and $\bar{y}$ denote the vectors obtained by stacking the blocks $\bar{f}(i,j)$ and $\bar{y}(i,j)$, respectively, over all $(i,j) \in \mathcal{I}$, in the order matching the diagonal blocks of $\mathcal{A}$. Then
$$\mathcal{A}\, \bar{f} = \bar{y} \tag{2.8}$$
and
$$\ker(\mathcal{A}) = \bigoplus_{(i,j) \in \mathcal{I}} \ker[\mathbf{A}_{m_1,m_2}(i,j)]. \tag{2.9}$$
The kernel of each $\mathbf{A}_{m_1,m_2}(i,j)$ can be viewed as generated by linearly independent vectors $\hat{v}_j \in \ell^2(D)$ such that each $\hat{v}_j$ has exactly two nonzero coordinates, one of which is equal to $1$ and the other is $-1$. Let us assume that the nullity of matrix $\mathbf{A}_{m_1,m_2}(i,j)$ equals $w_{i,j}$ at each $(i,j) \in \mathcal{I}$. Then there are $n = \sum_{i,j} w_{i,j}$ such linearly independent vectors $\hat{v}_j \in \ell^2(D)$. Let $\{v_j : j = 1, \cdots, n\}$ be their image under the 2-D inverse DFT. Note that $\{v_j : j = 1, \cdots, n\} \subset \ell^2(D)$ is also linearly independent.
Let $\Omega \subseteq D \setminus X$ be the additional sampling set; that is to say, we take extra spatial samples of the initial state $f$ at the locations specified by $\Omega$. By $S_\Omega$ we denote the related sampling operator, and $R$ is an $|\Omega| \times n$ matrix with rows corresponding to $[v_1(k,l), \cdots, v_n(k,l)]_{\{(k,l) \in \Omega\}}$. With these notations, the following result holds true:

Theorem 2.1. The reconstruction of $f \in \ell^2(D)$ from its spatiotemporal samples
$$\{S_\Omega f,\ S_X f,\ S_X A f,\ \cdots,\ S_X A^{m_1 m_2 - 1} f\} \tag{2.10}$$
is possible in a stable manner (SSP is satisfied) if and only if $\mathrm{rank}(R) = n$. In particular, if SSP holds true, then we must have $|\Omega| \ge n$.
Proof. Let $W = \mathrm{span}\{v_j : j = 1, \cdots, n\}$. It suffices to show that $\ker(S_\Omega) \cap W = \{0\}$ if and only if $\mathrm{rank}(R) = n$. Suppose $w$ is in $\ker(S_\Omega) \cap W$. There must exist coefficients $c_1, c_2, \ldots, c_n$ so that $w = \sum_{j=1}^n c_j v_j$ and $S_\Omega w = 0$. The last statement is equivalent to
$$[v_1(k,l),\ v_2(k,l),\ \cdots,\ v_n(k,l)]\ [c_1\ c_2\ \ldots\ c_n]^T = 0$$
for each $(k,l) \in \Omega$. Equivalently, we have $Rc = 0$. Hence, $c = 0$ for every such $w$ if and only if $\mathrm{rank}(R) = n$. $\Box$
Since the $d_1 d_2 \times n$ matrix $R = [v_1(k,l), \cdots, v_n(k,l)]_{\{(k,l) \in D\}}$ has column rank $n$, for any kernel $a$ there exists a minimal choice of $\Omega$, namely $|\Omega| = n$, such that the square matrix $R$ is invertible. It is hard to give a formula specifying the extra sampling set for every kernel $a \in \ell^2(D)$. Moreover, compared to the one-variable case [8], it is more challenging to specify the rank of $R$ analytically, since in general the entries of $R$ involve products of sinusoids mixed with exponentials.
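The rank condition of Theorem 2.1 can still be tested numerically: collect one $1/{-1}$ kernel vector per alias frequency that coincides with an earlier one, map these vectors to the spatial domain by the inverse DFT, and evaluate them on a candidate $\Omega$. The sketch below does this for a random strictly quadrantally symmetric kernel, using the candidate set that Theorem 2.3 below proposes; sizes and seed are illustrative assumptions.

```python
import numpy as np

d1, d2, m1, m2 = 9, 15, 3, 3
J1, J2, N = d1 // m1, d2 // m2, m1 * m2
rng = np.random.default_rng(4)

def quad_sym(g):
    gs = np.roll(g[::-1, :], 1, axis=0)
    gp = np.roll(g[:, ::-1], 1, axis=1)
    gsp = np.roll(np.roll(g[::-1, ::-1], 1, axis=0), 1, axis=1)
    return (g + gs + gp + gsp) / 4

a = quad_sym(rng.standard_normal((d1, d2)))
a_hat = np.fft.fft2(a)

# kernel vectors v-hat: one per column that coincides with an earlier column
vs = []
for i in range(J1):
    for j in range(J2):
        freqs = [((i + k*J1) % d1, (j + l*J2) % d2)
                 for l in range(m2) for k in range(m1)]
        for q in range(len(freqs)):
            for r in range(q):
                if np.isclose(a_hat[freqs[q]], a_hat[freqs[r]]):
                    vh = np.zeros((d1, d2), dtype=complex)
                    vh[freqs[r]], vh[freqs[q]] = 1, -1
                    vs.append(np.fft.ifft2(vh))   # spatial-domain kernel vector
                    break
n = len(vs)

# candidate extra sampling set Omega from Theorem 2.3
Omega = [(k, l) for k in range(1, (m1 - 1)//2 + 1) for l in range(d2)] \
      + [(k, l) for k in range(d1) for l in range(1, (m2 - 1)//2 + 1)]
Omega = sorted(set(Omega))
R = np.array([[v[k, l] for v in vs] for (k, l) in Omega])
print(len(Omega) == n and np.linalg.matrix_rank(R) == n)
```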
In [8], the authors studied a typical low-pass filter with symmetric properties and gave a choice of a minimal extra sampling set $\Omega$; symmetry reflects the fact that there is often no preferential direction for physical kernels, while monotonicity reflects energy dissipation. Similarly, we consider a kernel $a$ with a so-called strict quadrantal symmetry: for a fixed $(k,l) \in D$, $\hat{a}(s,p) = \hat{a}(k,l)$ if and only if
$$(s,p) \in \{(k,l),\ (d_1-k,\ l),\ (k,\ d_2-l),\ (d_1-k,\ d_2-l)\}. \tag{2.11}$$
Since $\mathbf{A}_{m_1,m_2}(i,j)$ is a Vandermonde matrix, it has a singularity if and only if some of its columns coincide. We can compute the singularity of each $\mathbf{A}_{m_1,m_2}(i,j)$ by making use of this special structure.

Lemma 2.2. If the filter $a$ satisfies the symmetry assumptions (2.11), then
$$\dim(\ker(\mathcal{A})) = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4}.$$
Clearly, we need an extra sampling set $\Omega \subseteq D$ of size at least $\dim(\ker(\mathcal{A}))$. Based on Theorem 2.1, we provide a minimal $\Omega$:

Theorem 2.3. Assume that the kernel $a$ satisfies the strict quadrantal symmetry assumptions (2.11) and let
$$\Omega = \Big\{(k,l) : k = 1, \cdots, \tfrac{m_1-1}{2},\ l \in \mathbb{Z}_{d_2}\Big\} \cup \Big\{(k,l) : k \in \mathbb{Z}_{d_1},\ l = 1, \cdots, \tfrac{m_2-1}{2}\Big\}.$$
Then, any $f \in \ell^2(D)$ is recovered in a stable way from the expanded set of samples
$$\{S_\Omega f,\ S_X f,\ S_X A f,\ \cdots,\ S_X A^{m_1 m_2 - 1} f\}. \tag{2.12}$$

Remark 2.4. Note that in this case
$$|\Omega| = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4},$$
so by Theorem 2.1 and Lemma 2.2 we cannot do better in terms of its cardinality.

Proof. Set
$$n = \frac{d_1(m_2-1)}{2} + \frac{d_2(m_1-1)}{2} - \frac{(m_1-1)(m_2-1)}{4}.$$
Recall that the kernels of the singular blocks $\mathbf{A}_{m_1,m_2}(i,j)$ are generated by vectors $\{\hat{v}_k : k = 1, \cdots, n\}$, such that each $\hat{v}_k$ has exactly two non-zero components, $1$ and $-1$ (corresponding to a pair of identical columns). Then the formula of the 2-D inverse DFT gives
$$v_j(k,l) = \sum_{s=0}^{d_1-1} \sum_{p=0}^{d_2-1} \hat{v}_j(s,p)\, e^{\frac{2\pi i s k}{d_1}} e^{\frac{2\pi i p l}{d_2}}, \quad (k,l) \in \mathbb{Z}_{d_1} \times \mathbb{Z}_{d_2}. \tag{2.13}$$
We define a row vector $F_1(k) = \big(1,\ e^{\frac{2\pi i k}{d_1}}, \cdots, e^{\frac{2\pi i (d_1-1)k}{d_1}}\big)$ for all $k \in \mathbb{Z}_{d_1}$. For each $l = 0, 1, \cdots, d_2-1$, we define a row vector $\bar{F}_2(l)$ of length $d_2 - \frac{m_2-1}{2}$, derived from the vector
$$\big[1,\ e^{\frac{2\pi i l}{d_2}}, \cdots, e^{\frac{2\pi i (d_2-1)l}{d_2}}\big]$$
by deleting the entries that correspond to $\{sJ_2 + 1 : 1 \le s \le \frac{m_2-1}{2}\}$, i.e., we omit the entries $e^{\frac{2\pi i s J_2 l}{d_2}}$ for $1 \le s \le \frac{m_2-1}{2}$. We reorder the vectors $v_j$ so that the row vector $R(k,l) = [v_1(k,l), \cdots, v_n(k,l)]$ corresponding to $(k,l) \in \Omega$ equals
$$2i\Big(\sin\big(\tfrac{2\pi l}{m_2}\big)F_1(k), \cdots, \sin\big(\tfrac{\pi(m_2-1)l}{m_2}\big)F_1(k),\ \sin\big(\tfrac{2\pi k}{m_1}\big)\bar{F}_2(l), \cdots, \sin\big(\tfrac{\pi(m_1-1)k}{m_1}\big)\bar{F}_2(l)\Big).$$
By Theorem 2.1, the proof is complete if we show that these $n = |\Omega|$ row vectors of size $n$ are linearly independent.
Suppose that for some coefficients $\{c(k,l) : (k,l) \in \Omega\}$ it holds that
$$\sum_{(k,l) \in \Omega} c(k,l)\, R(k,l) = 0.$$
We need to show that all $c(k,l) = 0$. Note that, for a fixed $k$, the vector $R(k,l)$ is compartmentalized into two components, consisting of $\frac{m_2-1}{2}$ and $\frac{m_1-1}{2}$ blocks respectively. By construction, $\{F_1(k)\ |\ k \in \mathbb{Z}_{d_1}\}$ are linearly independent row vectors, so the coefficients attached to each $F_1(k)$ in the first component must vanish. Related to the first component, for every fixed $k \in \mathbb{Z}_{d_1}$ such that $(k,l) \in \Omega$ for some $l$, the following $m_2$ equations hold true:
$$\sum_{l:\,(k,l) \in \Omega} c(k,l)\, \sin\Big(\frac{2\pi s l}{m_2}\Big) = 0 \quad \text{for } s = 0, 1, \ldots, m_2-1. \tag{2.14}$$
Case I: if $k \ge \frac{m_1+1}{2}$ or $k = 0$, then $(k,l) \in \Omega$ if and only if $l = 1, \cdots, \frac{m_2-1}{2}$. We restate the system of equations (2.14) in matrix form:
$$\begin{pmatrix} \sin(\frac{2\pi}{m_2}) & \sin(\frac{4\pi}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)}{m_2}) \\ \sin(\frac{4\pi}{m_2}) & \sin(\frac{8\pi}{m_2}) & \ldots & \sin(\frac{2\pi(m_2-1)}{m_2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_2-1)}{m_2}) & \sin(\frac{2\pi(m_2-1)}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)^2}{2m_2}) \end{pmatrix} \begin{pmatrix} c(k,1) \\ c(k,2) \\ \vdots \\ c(k, \frac{m_2-1}{2}) \end{pmatrix} = 0.$$
The matrix on the left-hand side is invertible, since
$$\{\sin(2\pi x),\ \sin(4\pi x),\ \ldots,\ \sin((m_2-1)\pi x)\}$$
is a Chebyshev system on $[0,1]$ (see [7]); hence we have $c(k,l) = 0$ for $l = 1, \cdots, \frac{m_2-1}{2}$.
Case II: if $1 \le k \le \frac{m_1-1}{2}$, then $(k,l) \in \Omega$ if and only if $l = 0, \cdots, d_2-1$. Then (2.14) is equivalent to the system of equations
$$\sum_{l=0}^{d_2-1} c(k,l)\, \sin\Big(\frac{2\pi s l}{m_2}\Big) = 0 \quad \text{for } s = 1, 2, \ldots, \frac{m_2-1}{2}. \tag{2.15}$$
Related to the second component of length $\frac{m_1-1}{2}$, and combined with the fact that $c(k,l) = 0$ whenever $k$ is in Case I, for all $s = 1, 2, \ldots, \frac{m_1-1}{2}$ we have
$$\sum_{l=0}^{d_2-1} \Bigg( \sum_{k=1}^{\frac{m_1-1}{2}} c(k,l)\, \sin\Big(\frac{2\pi s k}{m_1}\Big) \Bigg) \bar{F}_2(l) = 0. \tag{2.16}$$
Let $\bar{F}_2 = [\bar{F}_2(0)^T, \cdots, \bar{F}_2(d_2-1)^T]$, where $\bar{F}_2(l)^T$ denotes the transpose of each row vector $\bar{F}_2(l)$; $\bar{F}_2$ is a $\big(d_2 - \frac{m_2-1}{2}\big) \times d_2$ matrix. Using matrix notation, the first equation in (2.16) (the one with $s = 1$) can be restated as a product, namely
$$\bar{F}_2 \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,0) \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,1) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,d_2-1) \end{pmatrix} = 0.$$
As an easy consequence of equation (2.15), for each $1 \le j \le \frac{m_2-1}{2}$ it holds that
$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big) \Bigg( \sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big)\, c(k,l) \Bigg) = 0, \tag{2.17}$$
which is equivalent to
$$\sum_{k=1}^{\frac{m_1-1}{2}} \sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big) \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k,l) = 0,$$
i.e.,
$$\sum_{l=0}^{d_2-1} \sin\Big(\frac{2\pi l j}{m_2}\Big) \sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k,l) = 0. \tag{2.18}$$
We define a $\frac{m_2-1}{2} \times d_2$ matrix $E$ as follows:
$$E = \begin{pmatrix} \sin(\frac{2\pi \cdot 0}{m_2}) & \sin(\frac{2\pi \cdot 1}{m_2}) & \ldots & \sin(\frac{2\pi(d_2-1)}{m_2}) \\ \sin(\frac{4\pi \cdot 0}{m_2}) & \sin(\frac{4\pi \cdot 1}{m_2}) & \ldots & \sin(\frac{4\pi(d_2-1)}{m_2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_2-1) \cdot 0}{m_2}) & \sin(\frac{\pi(m_2-1) \cdot 1}{m_2}) & \ldots & \sin(\frac{\pi(m_2-1)(d_2-1)}{m_2}) \end{pmatrix}.$$
Due to (2.18), we have
$$E \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,0) \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,1) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,d_2-1) \end{pmatrix} = 0. \tag{2.19}$$
Let $F_2 = \begin{pmatrix} E \\ \bar{F}_2 \end{pmatrix}$. Then
$$F_2 \cdot \begin{pmatrix} \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,0) \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,1) \\ \vdots \\ \sum_{k=1}^{\frac{m_1-1}{2}} \sin\big(\frac{2\pi k}{m_1}\big)\, c(k,d_2-1) \end{pmatrix} = 0.$$
Note that the $d_2 \times d_2$ matrix $F_2$ is invertible, since it is the image of a series of elementary matrices acting on the $d_2 \times d_2$ DFT matrix (one row minus another row). Hence we have
$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi k}{m_1}\Big)\, c(k,l) = 0 \quad \text{for all } l = 0, 1, \ldots, d_2-1. \tag{2.20}$$
After analyzing the rest of the equations in (2.16), we obtain
$$\sum_{k=1}^{\frac{m_1-1}{2}} \sin\Big(\frac{2\pi j k}{m_1}\Big)\, c(k,s) = 0 \quad \text{for } j = 2, \cdots, \frac{m_1-1}{2},\ s = 0, 1, \ldots, d_2-1.$$
In a similar manner, for each $l = 0, \cdots, d_2-1$ we obtain the matrix equation
$$\begin{pmatrix} \sin(\frac{2\pi}{m_1}) & \sin(\frac{4\pi}{m_1}) & \ldots & \sin(\frac{\pi(m_1-1)}{m_1}) \\ \sin(\frac{4\pi}{m_1}) & \sin(\frac{8\pi}{m_1}) & \ldots & \sin(\frac{2\pi(m_1-1)}{m_1}) \\ \vdots & \vdots & \ddots & \vdots \\ \sin(\frac{\pi(m_1-1)}{m_1}) & \sin(\frac{2\pi(m_1-1)}{m_1}) & \ldots & \sin(\frac{\pi(m_1-1)^2}{2m_1}) \end{pmatrix} \begin{pmatrix} c(1,l) \\ c(2,l) \\ \vdots \\ c(\frac{m_1-1}{2},\ l) \end{pmatrix} = 0.$$
As the matrix on the left-hand side is invertible, we must have $c(k,l) = 0$ for $k = 1, \cdots, \frac{m_1-1}{2}$.

We have demonstrated that $c(k,l) = 0$ for all $(k,l) \in \Omega$. Therefore the $n$ row vectors $\{R(k,l)\}_{(k,l) \in \Omega}$ are linearly independent, i.e., stability of the signal recovery is achieved. $\Box$
3. Dynamical sampling in $\ell^2(\mathbb{Z} \times \mathbb{Z})$

In this section, we aim to generalize our results to signals of infinite length. Somewhat surprisingly, there is not much difference between the techniques used in the two settings, so we gloss over a few details in the second case.
Let $D = \mathbb{Z} \times \mathbb{Z}$. We study a signal of interest $f \in \ell^2(D)$ that evolves over time under the influence of an evolution operator $A$. The operator $A$ is described by a convolution with $a \in \ell^1(D)$, namely
$$Af(p,q) = a * f(p,q) = \sum_{k \in \mathbb{Z}} \sum_{l \in \mathbb{Z}} a(k,l)\, f(p-k,\ q-l) \quad \text{at all } (p,q) \in D.$$
Clearly, $A$ is a bounded linear operator mapping $\ell^2(D)$ to itself. Given integers $m_1, m_2 \ge 1$, we assume $m_1$ and $m_2$ are odd numbers. We introduce a coarse sampling grid $X = m_1\mathbb{Z} \times m_2\mathbb{Z}$ and make use of a uniform sampling operator $S_X$, defined by $(S_X f)(k,l) = f(m_1 k,\ m_2 l)$ for $(k,l) \in D$. The goal is to reconstruct $f$ from the set of coarse samples
$$y_0 = S_X f,\quad y_1 = S_X A f,\quad \ldots,\quad y_{N-1} = S_X A^{N-1} f. \tag{3.1}$$
Similar to the work done in Section 2, we study this problem in the Fourier domain. Due to Poisson's summation formula, we have the lemma below.

Lemma 3.1. The Fourier transform of each $y_l$ in (3.1) at $(\xi, \omega) \in \mathbb{T} \times \mathbb{T}$ is
$$\hat{y}_l(\xi, \omega) = \frac{1}{m_1 m_2} \sum_{j=0}^{m_2-1} \sum_{i=0}^{m_1-1} \hat{a}^l\Big(\frac{\xi+i}{m_1},\ \frac{\omega+j}{m_2}\Big)\, \hat{f}\Big(\frac{\xi+i}{m_1},\ \frac{\omega+j}{m_2}\Big). \tag{3.2}$$
Expression (3.2) allows for a matrix representation of the dynamical sampling problem in the case of uniform subsampling. For $j = 0, 1, \cdots, m_2-1$, we define the $N \times m_1$ matrices
$$A_{j,m_1,m_2}(\xi, \omega) = \Big( \hat{a}^k\Big(\frac{\xi+l}{m_1},\ \frac{\omega+j}{m_2}\Big) \Big)_{k,l},$$
where $k = 0, 1, \cdots, N-1$ and $l = 0, 1, \cdots, m_1-1$, and denote by $\mathcal{A}_{m_1,m_2}(\xi, \omega)$ the block matrix
$$[A_{0,m_1,m_2}(\xi, \omega)\ \ A_{1,m_1,m_2}(\xi, \omega)\ \ldots\ A_{m_2-1,m_1,m_2}(\xi, \omega)]. \tag{3.3}$$
Let $\bar{y}(\xi, \omega) = (\hat{y}_0(\xi, \omega)\ \ \hat{y}_1(\xi, \omega)\ \ldots\ \hat{y}_{N-1}(\xi, \omega))^T$ and
$$\bar{f}(\xi, \omega) = \Big( \hat{f}\Big(\frac{\xi+l}{m_1},\ \frac{\omega+j}{m_2}\Big) \Big)^T_{\,l = 0, \ldots, m_1-1;\ j = 0, \ldots, m_2-1}, \tag{3.4}$$
a column vector of length $m_1 m_2$ in which the index $l$ runs fastest: the first $m_1$ entries correspond to $j = 0$, the next $m_1$ entries to $j = 1$, and so on.
Due to (3.2), it holds that
$$\bar{y}(\xi, \omega) = \frac{1}{m_1 m_2}\, \mathcal{A}_{m_1,m_2}(\xi, \omega)\, \bar{f}(\xi, \omega). \tag{3.5}$$
Proposition 2. ISP is satisfied if and only if $\mathcal{A}_{m_1,m_2}(\xi, \omega)$ as defined in (3.3) has full column rank $m_1 m_2$ at a.e. $(\xi, \omega) \in \mathbb{T} \times \mathbb{T}$, where $\mathbb{T} = [0,1)$ under addition modulo 1. SSP is satisfied if and only if $\mathcal{A}_{m_1,m_2}(\xi, \omega)$ has full rank for all $(\xi, \omega) \in \mathbb{T} \times \mathbb{T}$.

By Proposition 2, we conclude that $N \ge m_1 m_2$. In particular, if $N = m_1 m_2$, then $\mathcal{A}_{m_1,m_2}(\xi, \omega)$ is a square matrix; we denote this square matrix by $\mathbf{A}_{m_1,m_2}(\xi, \omega)$.
Corollary 1. When $N = m_1 m_2$, the invertibility sampling property is equivalent to the condition
$$\det \mathbf{A}_{m_1,m_2}(\xi, \omega) \ne 0 \quad \text{for a.e. } (\xi, \omega) \in \mathbb{T} \times \mathbb{T}.$$
Since $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ has continuous entries, the stable sampling property is equivalent to
$$\det \mathbf{A}_{m_1,m_2}(\xi, \omega) \ne 0 \quad \text{for all } (\xi, \omega) \in \mathbb{T} \times \mathbb{T}.$$
From here on we assume $N = m_1 m_2$. By its structure, $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ is a Vandermonde matrix; thus it is singular at $(\xi, \omega) \in \mathbb{T} \times \mathbb{T}$ if and only if some of its columns coincide. In case $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ is singular, no matter how many times we resample the evolved states $A^n f$, $n > N-1$, on the grid $\Omega_o = m_1\mathbb{Z} \times m_2\mathbb{Z}$, the additional data will not add anything new in terms of recovery and stability. In such a case, we need to consider adding extra sampling locations to overcome the singularities of $\mathbf{A}_{m_1,m_2}(\xi, \omega)$.
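The reason resampling in time cannot help is the Vandermonde structure: every node is a root of the single degree-$N$ polynomial $\prod_q (z - z_q)$, so every row of powers $n \ge N$ lies in the span of the first $N$ rows. A small numerical illustration, with nodes chosen on the unit circle and one coincidence forced by hand (both illustrative assumptions):

```python
import numpy as np

N = 9
rng = np.random.default_rng(5)
nodes = np.exp(2j * np.pi * rng.random(N))   # nodes z_q on the unit circle
nodes[1] = nodes[0]                          # force a coincidence: singular matrix

V = np.vander(nodes, N, increasing=True).T            # rows n = 0, ..., N-1
extra = np.array([nodes**n for n in range(N, 3 * N)]) # "resampled" rows n >= N
stacked = np.vstack([V, extra])
print(np.linalg.matrix_rank(V) == np.linalg.matrix_rank(stacked))
```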
3.1. Additional sampling locations. If $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ is singular at some $(\xi, \omega)$, then by Corollary 1 the recovery of $f \in \ell^2(\mathbb{Z}^2)$ is not stable. To remove the singularities and achieve stable recovery, some extra sampling locations need to be added. The additional sampling locations depend on the positions of the singularities of $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ that we want to remove. We propose a quasi-uniform way of constructing the extra sampling locations and give a characterization specifying when the singularity will be removed. Then, we use this method to remove the singularity of a strictly quadrantally symmetric convolution operator.

Let the additional sampling set be given by
$$\Omega = \{X + (c_1, c_2)\ |\ (c_1, c_2) \in W \subseteq \mathbb{Z}_{m_1} \times \mathbb{Z}_{m_2}\}. \tag{3.6}$$
Let $T_{c_1,c_2}$ denote the translation operator on $\ell^2(\mathbb{Z}^2)$, so that $T_{c_1,c_2} f(k,l) = f(k+c_1,\ l+c_2)$ for all $(k,l) \in \mathbb{Z}^2$. We employ a shifted sampling operator $S_X T_{c_1,c_2}$ to take extra samples at the initial time instance; this means that our subsampling grid is shifted from $X = m_1\mathbb{Z} \times m_2\mathbb{Z}$ to $(c_1, c_2) + X$, and the extra samples are given as
$$h^{c_1,c_2}_{m_1,m_2} = S_{m_1,m_2} T_{c_1,c_2} f, \quad (c_1, c_2) \in W. \tag{3.7}$$
Set
$$u_{c_1,c_2}(s,p) = e^{\frac{2\pi i c_1 s}{m_1}}\, e^{\frac{2\pi i c_2 p}{m_2}}, \quad \text{for } (s,p) \in \mathbb{Z}_{m_1} \times \mathbb{Z}_{m_2}.$$
By taking the Fourier transform of the samples on the additional sampling set $\Omega$, we obtain
$$\hat{h}^{c_1,c_2}_{m_1,m_2}(\xi, \omega) = \frac{e^{2\pi i\big(\frac{c_1 \xi}{m_1} + \frac{c_2 \omega}{m_2}\big)}}{m_1 m_2} \sum_{s=0}^{m_1-1} \sum_{p=0}^{m_2-1} u_{c_1,c_2}(s,p)\, \hat{f}\Big(\frac{\xi+s}{m_1},\ \frac{\omega+p}{m_2}\Big). \tag{3.8}$$
For each $(c_1, c_2) \in W$, we define a row vector
$$u_{c_1,c_2} = \{u_{c_1,c_2}(s,p)\}_{(s,p)}$$
with terms arranged in the same order as the terms in the vector $\bar{f}(\xi, \omega)$ in (3.4). We organize the vectors $u_{c_1,c_2}$ in a matrix $\bar{U} = (u_{c_1,c_2})_{(c_1,c_2) \in W}$ and extend the data vector $\bar{y}(\xi, \omega)$ in (3.5) into a big vector $Y(\xi, \omega)$ by adding the entries
$$\Big\{e^{-2\pi i \frac{c_1 \xi}{m_1}}\, e^{-2\pi i \frac{c_2 \omega}{m_2}}\, \widehat{(S_{m_1,m_2} T_{c_1,c_2} f)}(\xi, \omega)\Big\}_{(c_1,c_2) \in W}.$$
Then (3.2) and (3.8) can be combined into the following matrix equation:
$$Y(\xi, \omega) = \frac{1}{m_1 m_2} \begin{pmatrix} \bar{U} \\ \mathbf{A}_{m_1,m_2}(\xi, \omega) \end{pmatrix} \bar{f}(\xi, \omega). \tag{3.9}$$
Proposition 3. If a left inverse for
$$\begin{pmatrix} \bar{U} \\ \mathbf{A}_{m_1,m_2}(\xi, \omega) \end{pmatrix}$$
exists for every $(\xi, \omega) \in \mathbb{T}^2$, then the vector $f$ can be uniquely and stably recovered from the combined samples (3.1) and (3.7) via (3.9).

If the following property holds true:
$$\ker(\bar{U}) \cap \ker(\mathbf{A}_{m_1,m_2}(\xi, \omega)) = 0 \tag{3.10}$$
for every $(\xi, \omega)$ in $\mathbb{T}^2$, we say that $W$ removes the singularities of $\mathbf{A}_{m_1,m_2}(\xi, \omega)$; in such a case, the assumption in Proposition 3 is satisfied.

Corollary 2. If $W$ removes the singularities of $\mathbf{A}_{m_1,m_2}(\xi, \omega)$, then
$$|W| \ge \dim(\ker(\mathbf{A}_{m_1,m_2}(\xi, \omega)))$$
for every $(\xi, \omega)$.
3.2. Strictly quadrantally symmetric convolution operator. We consider a filter $a$ such that $\hat{a}$ has the strict quadrantal symmetry property, i.e., $\hat{a}(\xi_1, \omega_1) = \hat{a}(\xi_2, \omega_2)$ for $(\xi_1, \omega_1), (\xi_2, \omega_2) \in \mathbb{T} \times \mathbb{T} = \mathbb{T}^2$ if and only if one of the following conditions is satisfied:
1. $\xi_1 = \xi_2$, $\omega_1 + \omega_2 = 1$;
2. $\xi_1 + \xi_2 = 1$, $\omega_1 = \omega_2$;
3. $\xi_1 + \xi_2 = 1$, $\omega_1 + \omega_2 = 1$.
The following result is a direct consequence of the symmetry assumptions listed in conditions 1–3.

Proposition 4. If $\hat{a}(\xi, \omega)$ has the strict quadrantal symmetry property, then $\det \mathbf{A}_{m_1,m_2}(\xi, \omega) = 0$ when $\xi = 0$ or $\omega = 0$. Moreover, the kernel of each $\mathbf{A}_{m_1,m_2}(\xi, \omega)$ is a subspace of the kernel of one of the following four matrices:
$$\mathbf{A}_{m_1,m_2}(0,0),\quad \mathbf{A}_{m_1,m_2}\big(\tfrac{1}{2},\, 0\big),\quad \mathbf{A}_{m_1,m_2}\big(0,\, \tfrac{1}{2}\big),\quad \mathbf{A}_{m_1,m_2}\big(\tfrac{1}{2},\, \tfrac{1}{2}\big).$$
From Proposition 4, for a strictly quadrantally symmetric kernel we need to consider only the points $(\xi, \omega) \in \big\{(0,0),\ \big(0, \tfrac12\big),\ \big(\tfrac12, 0\big),\ \big(\tfrac12, \tfrac12\big)\big\}$ and construct the set $W$ such that it removes the singularities of the above four matrices.

Proposition 5. If $\hat{a}$ has the strict quadrantal symmetry property, then
$$\dim\big(\ker\big(\mathbf{A}_{m_1,m_2}(\xi, \omega)\big)\big) = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2} \cdot \frac{m_1+1}{2}$$
for every $(\xi, \omega) \in \big\{(0,0),\ \big(0, \tfrac12\big),\ \big(\tfrac12, 0\big),\ \big(\tfrac12, \tfrac12\big)\big\}$.
Proof. We discuss in depth only the case $\xi = \omega = \tfrac12$; the proofs in the other three cases are analogous. Because $\mathbf{A}_{m_1,m_2}\big(\tfrac12, \tfrac12\big)$ is a Vandermonde matrix, its rank is equal to the number of its distinct columns. It is easy to show that
$$\hat{a}\Big(\frac{\tfrac12+s}{m_1},\ \frac{\tfrac12+p}{m_2}\Big) = \hat{a}\Big(\frac{\tfrac12+k}{m_1},\ \frac{\tfrac12+l}{m_2}\Big)$$
is satisfied if and only if one of the following holds true:
(1) $s = k$, $p + l = m_2 - 1$;
(2) $p = l$, $s + k = m_1 - 1$;
(3) $s + k = m_1 - 1$, $p + l = m_2 - 1$;
using which we can easily compute that
$$\dim\Big(\ker\Big(\mathbf{A}_{m_1,m_2}\big(\tfrac12, \tfrac12\big)\Big)\Big) = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2} \cdot \frac{m_1+1}{2} = n. \qquad \Box$$
Let
$$W = W_1 \cup W_2, \tag{3.11}$$
where
$$W_1 = \Big\{1, \cdots, \tfrac{m_1-1}{2}\Big\} \times \{0, \cdots, m_2-1\}, \qquad W_2 = \{0, \cdots, m_1-1\} \times \Big\{1, \cdots, \tfrac{m_2-1}{2}\Big\}.$$

Remark 3.2. When $W$ is defined as in (3.11), we have
$$|W| = \frac{(m_1-1)m_2}{2} + \frac{m_2-1}{2} \cdot \frac{m_1+1}{2};$$
by Corollary 2, $W$ has the minimal possible size.
Theorem 3.3. Let $a \in \ell^1(D)$ be the filter such that the evolution operator is given by $Ax = a * x$. Suppose $\hat{a}$ satisfies the strict quadrantal symmetry property defined at the beginning of Subsection 3.2. Let $\Omega$ be as in (3.6), with $W$ specified in (3.11). Then, any $f \in \ell^2(D)$ can be recovered in a stable way from the expanded set of samples
$$\{S_\Omega f,\ S_X f,\ \cdots,\ S_X A^{m_1 m_2 - 1} f\}. \tag{3.12}$$

Proof. It suffices to show that for every $(\xi, \omega) \in \mathbb{T} \times \mathbb{T}$ it holds that
$$\ker(\bar{U}) \cap \ker(\mathbf{A}_{m_1,m_2}(\xi, \omega)) = 0. \tag{3.13}$$
By Proposition 4, we only need to study the kernels of the four matrices
$$\mathbf{A}_{m_1,m_2}(0,0),\quad \mathbf{A}_{m_1,m_2}\big(\tfrac12,\, 0\big),\quad \mathbf{A}_{m_1,m_2}\big(0,\, \tfrac12\big),\quad \mathbf{A}_{m_1,m_2}\big(\tfrac12,\, \tfrac12\big). \tag{3.14}$$
We discuss in depth the case $\xi = \omega = \tfrac12$. $Z := \ker\big(\mathbf{A}_{m_1,m_2}\big(\tfrac12, \tfrac12\big)\big)$ is a subspace of $\mathbb{C}^{m_1 m_2}$. By Proposition 5, the dimension of $Z$ is $n$. Taking advantage of the fact that $\mathbf{A}_{m_1,m_2}\big(\tfrac12, \tfrac12\big)$ is a Vandermonde matrix, we can choose a basis $\{v_j : j = 1, \cdots, n\}$ for $Z$ such that each $v_j$ has only two nonzero entries, $1$ and $-1$. Let $v \in \ker(\bar{U}) \cap Z$; then there exists $c = (c(i))_{i=1,\cdots,n}$ such that $v = \sum_{i=1}^n c(i)\, v_i$. Define an $n \times n$ matrix $R$ whose row corresponding to a fixed $(c_1, c_2) \in W$ is
$$\Big[\big(e^{\frac{2\pi i (m_1-1)c_1}{m_1}} - e^{\frac{2\pi i \cdot 0 \cdot c_1}{m_1}}\big)F_2(c_2), \cdots, \big(e^{\frac{2\pi i (m_1+1)c_1}{2m_1}} - e^{\frac{2\pi i (m_1-3)c_1}{2m_1}}\big)F_2(c_2),$$
$$\big(e^{\frac{2\pi i (m_2-1)c_2}{m_2}} - e^{\frac{2\pi i \cdot 0 \cdot c_2}{m_2}}\big)\bar{F}_1(c_1), \cdots, \big(e^{\frac{2\pi i (m_2+1)c_2}{2m_2}} - e^{\frac{2\pi i (m_2-3)c_2}{2m_2}}\big)\bar{F}_1(c_1)\Big].$$
Then $\bar{U} v = 0$, which is equivalent to $Rc = 0$. By the same strategy as in the proof of Theorem 2.3, it can be demonstrated that these $n$ row vectors of $R$ are linearly independent. With slight adaptations of the strategy used so far, we come to the same conclusion for the other three matrices in (3.14). As a consequence of Proposition 3, stability is achieved. $\Box$
4. Conclusion

In this paper, we study the spatiotemporal trade-off in two-variable discrete spatially invariant evolution systems driven by a single convolution filter, in both the finite and the infinite case. We characterize the spectral properties of the filters that determine when the initial state can be recovered from uniformly undersampled future states, and we provide a way to add extra spatial sampling locations so that the signal is recovered stably when the filters fail these constraints. Compared to the one-variable case, the singularity problems caused by the structure of the filters are more complicated and harder to solve. We give explicit constructions of extra spatial sampling locations that resolve the singularity issue caused by strictly quadrantally symmetric filters. Our results can be adapted to the general multivariable case, and different kinds of symmetry assumptions can be imposed on the filters. The problem of finding the right additional spatiotemporal sampling locations for other types of filters remains open and requires further study.
ACKNOWLEDGEMENT

We would like to thank Akram Aldroubi for his helpful discussions and comments. The research of Armenak Petrosyan and Sui Tang is partially supported by NSF Grant DMS-1322099.
References
[1] R. Aceska, A. Aldroubi, J. Davis and A. Petrosyan, Dynamical Sampling in Shift-Invariant
Spaces, AMS Contemporary Mathematics (CONM) book series, 2013.
[2] R. Aceska and S. Tang, Dynamical Sampling in Hybrid Shift Invariant Spaces, AMS
Contemporary Mathematics (CONM) book series, 2014.
[3] B. Adcock and A. Hansen, A generalized sampling theorem for stable reconstructions in
arbitrary bases, J. Fourier Anal. Appl.,18(4), 685–716, 2012.
[4] A. Aldroubi, U. Molter, C. Cabrelli and S. Tang, Dynamical Sampling. ArXiv 1409.8333.
[5] A. Aldroubi and M. Unser, Sampling Procedures in Function Spaces and Asymptotic
equivalence with Shannon’s sampling theory, Numer. Func. Anal. and Opt.,15, 1–21,
1994.
[6] A. Aldroubi and K. Gröchenig, Nonuniform sampling and reconstruction in shift-invariant spaces, SIAM Rev., 43, 585–620, 2001.
[7] A. Osipov, V. Rokhlin and H. Xiao, Prolate Spheroidal Wave Functions of Order Zero: Mathematical Tools for Bandlimited Approximation, Springer, 2013.
[8] A. Aldroubi, J. Davis and I. Krishtal, Dynamical Sampling: Time Space Trade-oﬀ, Appl.
Comput. Harmon. Anal.,34(3), 495–503, 2013.
[9] A. Aldroubi, J. Davis and I. Krishtal, Exact Reconstruction of Signals in Evolutionary
Systems Via Spatiotemporal Trade-oﬀ, J. Fourier Anal. Appl.,21(1), 11–31, 2015.
[10] N. Atreas, Perturbed sampling formulas and local reconstruction in shift invariant spaces,
J. Math. Anal. Appl.,377, 841–852, 2011.
[11] A. G. García and G. Pérez-Villalón, Multivariate generalized sampling in shift-invariant spaces and its approximation properties, J. Math. Anal. Appl., 355, 397–413, 2009.
[12] P. Jorgensen and Feng Tian, Discrete reproducing kernel Hilbert spaces: Sampling and
distribution of Dirac-masses. ArXiv:1501.02310.
[13] Y. Lu and M. Vetterli, Spatial super-resolution of a diﬀusion ﬁeld by temporal oversam-
pling in sensor networks, Proc. IEEE Int. Conf. Acoust., Speech and Signal Process. 2009
(ICASSP 2009), 2249 –2252, 2009.
[14] Z. Nashed and Q. Sun, Sampling and reconstruction of signals in a reproducing kernel
subspace of Lp(Rd), J. Funct. Anal.,258, 2422–2452, 2010.
[15] J. Ranieri, A. Chebira, Y. M. Lu and M. Vetterli, Sampling and reconstructing diﬀusion
ﬁelds with localized sources, Proc. IEEE Int. Conf. Acoust., Speech and Signal Process.
2011 (ICASSP 2011), 4016–4019, 2011.
[16] W. Sun, Sampling theorems for multivariate shift invariant subspaces, Sampl. Theory
Signal Image Process.,4, 73–98, 2005.
[17] Q. Sun, Local reconstruction for sampling in shift-invariant spaces, Adv. Comput. Math.,
32, 335–352, 2010.
[18] Q. Sun, Nonuniform average sampling and reconstruction of signals with ﬁnite rate of
innovation, SIAM J. Math. Anal.,38(5), 1389–1422, 2006.
[19] S. Tang, A Generalized Prony Method for Filter Recovery in Evolutionary System via