
Proceedings of the ASME 2018 Dynamic Systems and Control Conference
DSCC2018
September 30 - October 3, 2018, Atlanta, Georgia, USA

DSCC2018-9088

MODEL-BASED SPARSE INFORMATION RECOVERY BY A COLLABORATIVE SENSOR MANAGEMENT

Hui Xiao

Dept. of Mechanical Eng.

University of Connecticut

Storrs, Connecticut, 06269

Email: hui.xiao@uconn.edu

Yaakov Bar-Shalom

Dept. of Electrical and Computer Eng.

University of Connecticut

Storrs, Connecticut, 06269

Email: yaakov.bar-shalom@uconn.edu

Xu Chen

Dept. of Mechanical Eng.

University of Connecticut

Storrs, Connecticut, 06269

Email: xchen@uconn.edu

ABSTRACT

This paper considers the real-time recovery of a fast time series (e.g., updated every T seconds) by using sparsely sampled measurements from two sensors whose sampling intervals are much larger than T (e.g., MT and NT, where M and N are integers). Specifically, when the fast signal is an autoregressive process, we propose an online information recovery algorithm that fully reconstructs the dense underlying temporal dynamics, by systematically modulating the sensor speeds MT and NT, and by exploiting a model-based fusion of the sparsely collected data. We provide the collaborative sensing design, parametric analysis, and optimization of the algorithm. Application to a closed-loop disturbance rejection problem reveals the feasibility of annihilating fast disturbance signals with the slow and not fully aligned sensor pair in real time, and in particular, of rejecting narrow-band disturbances whose frequencies are much higher than the Nyquist frequencies of the sensors.

1 INTRODUCTION

Fast feedback response is key for safe and high-performance operation of a control system. Whether the application is to monitor thermal conditions in a nuclear power plant, to track ground and aerial targets for defense purposes, or to maintain material temperature when additively manufacturing personalized prosthetic implants for patients, we build mathematical models, collect measurements, and analyze performance by assuming or desiring fast sampled measurements (e.g., 20 times the desired closed-loop bandwidth in a servo problem [5, 11]). However, many sensors update at intrinsically limited speeds. For instance, the update rate of a radar scanner is constrained by the rotation rate of the antenna; for imaging-based automation (using, e.g., sonar or infrared vision), complex elaborations must be performed to extract information from the raw image frames. In the presence of fast dynamics and disturbances that occur between the slow sampling instants, the resulting lack of sight constrains the overall situational awareness of the system, and can lead to unsafe operation in a wide range of engineering applications that would benefit from fast real-time closed-loop operation. In pursuit of resolving this significant barrier, this paper aims to provide a new information feedback mechanism for systematic fast control under slow information feedback.

From a signal processing viewpoint, a few strategies exist to generate dense signals from limited sensor measurements. Under the first and perhaps most commonly adopted strategy, practitioners rely on simple techniques such as linear interpolation. A second and mathematically more elegant strategy interprets sampling as a projection operator, one that computes a band-limited approximation of the input signal. The aim here is to approximate the original signal instead of insisting on a perfect reconstruction [7, 13]. Both the first and the second strategies focus on regular, uniform sampling. A third and more recent strategy involves irregular data collection. Perhaps the fastest growing in this category is compressed, or compressive, sensing (CS) [1, 2, 4], which advocates randomized sampling and L1 optimization to approximate the original signal in a transformed domain, one that allows a sparse, compressive representation of the data.

From the viewpoint of control design, real-time closed-loop functionality and causality are key factors when manipulating a temporal signal flow. This implies the obstacle that the full sequence of measured data will not be available when recovering a particular element in the middle of the experiment. A natural question is then: what methodology can be used for desparsifying slowly measured data online, with assurance of causality and real-time computation? Aligned with the first two strategies of information processing discussed above, advanced digital-to-analog conversion (DAC) and filtering have been proposed for real-time control considering inter-sample behaviors. However, the reconstruction is an approximate one, and connecting feedback with multi-sensory sparse data collection has not yet been addressed. Within the third strategy of sparse signal processing, CS has been proposed as a good fit for networked feedback control when the remotely transmitted measurement data is large and compressible [8-10]. In these studies, CS is used to store and process imaging information; the focus is not on sparsity in time but in the pixel space of the images.

Building upon the above knowledge and moving beyond existing architectures of constrained real-time functionality, this paper proposes an online, computation-friendly algorithm to recover a discrete signal d[n] from sparsely sampled measurements. Specifically, we consider the case where d[n] is an autoregressive process and the sparsely sampled measurements come from two sensors S1 and S2 with slow sampling periods MT and NT, where M and N are distinct integers greater than one. By collecting parallel sets of very slowly measured samples from the signal flow, we show that the dense and fast intersample information can be fully recovered in real time, by a unique modeling of the signal-sensor pair. This signal-reconstruction method, which builds the correlation formula between the missing signal data and the collaborative measurements, is made possible by elaborately designing and re-parameterizing the internal signal model of d[n]. As a result, we are able not only to facilitate real-time sparse information processing, but also to seamlessly integrate the magnified sensing with closed-loop model-based control to achieve agile feedback response to structured disturbances and high-level control inputs.

Notations: LCM(M, N) denotes the least common multiple of M and N. If a uniformly sampled sequence d[n] has sampling period T, then t{d[n]} = nT is the timestamp (the time when a data point is measured) of signal d[n]. The ceiling function ⌈x⌉ maps a real number x to the smallest following integer.

2 MECHANISMS OF THE PROPOSED COLLABORATIVE SENSING

Let the discrete measurements from S1 and S2 be denoted as dM[n] and dN[n], respectively. The following simple and direct connections hold:

dM[n] ↔ d[Mn]    (1)
dN[n] ↔ d[Nn]    (2)

Here, the sign "↔" represents that two samples are equal and aligned in time (i.e., two samples have the same timestamps).¹

In order to better describe the collaborative sampling process, we divide d[n] into a list of sequences {d{i}}, i = 1, 2, 3, ..., where d{i} is called the i-th batch in d[n]. Each batch contains L consecutive data points of d[n], that is,

d{i}[k] ↔ d[iL + k],  k = 1, 2, ..., L    (3)

where d{i}[k] denotes the k-th data point in the i-th batch.

FIGURE 1. CONNECTIONS BETWEEN dM[n], dN[n] AND d[n] WHEN M = 2, N = 3 AND L = 6.

As a first result, when the batch size L is properly set, it can be shown that if the k-th data point in a batch is equal and aligned to a data point in dM[n] (or dN[n]), then the k-th data point in the next batch will be equal and aligned to another data point in dM[n] (or dN[n]):

Lemma 1. Let the batch size L = LCM(M, N). If d{i}[k] ↔ dX[n], then d{i+1}[k] ↔ dX[n + k1], where k1 = L/X and X denotes M or N.

¹We use this notation rather than "=" because data points having an identical value could have distinct timestamps. For example, a periodic signal satisfies x(n) = x(n+T), but x(n) and x(n+T) are not aligned in time.

Proof. If d{i}[k] ↔ dM[n], then combining Eqs. (1) and (3), one gets d[iL + k] ↔ d{i}[k] ↔ dM[n] ↔ d[Mn], or equivalently, their timestamps are equal: t{d[iL + k]}/T = iL + k = Mn = t{d[Mn]}/T. Now for the timestamp of d{i+1}[k] it holds that

t{d{i+1}[k]}/T = (i+1)L + k = Mn + L = M(n + L/M) = t{dM[n + L/M]}/T    (4)

where L/M is an integer. Thus we have d{i+1}[k] ↔ dM[n + L/M]. Analogously, d{i+1}[k] ↔ dN[n + L/N] if d{i}[k] ↔ dN[n].

Lemma 1 suggests that the connections between dM[n], dN[n] and d[n] repeat over batches (see Fig. 1) if the chosen batch size is L = LCM(M, N). This property of repeated connections makes it possible to design a procedure that recovers one batch of signal points, and then to apply the procedure repetitively to recover the other batches. With this in mind, we design our recovery algorithm under the following batch configurations.

Definition 1. The batch d{i}[k] used in this paper (see Fig. 1) is defined based on the following rules:

1. The first data points in d[n], dM[n] and dN[n] are aligned in time, i.e., d[0] ↔ dM[0] ↔ dN[0].
2. The batch size L = LCM(M, N).
3. The last data point in a batch is aligned to both dM[n] and dN[n], i.e., d{i}[L] ↔ dM[n1] ↔ dN[n2].

With the definition above, a signal batch d{i} has the following properties:

1. There are L/M data points in a batch that are aligned to dM[n], with index k ∈ KM = {M, 2M, 3M, ..., L}.
2. There are L/N data points in a batch that are aligned to dN[n], with index k ∈ KN = {N, 2N, 3N, ..., L}.
3. There are L − L/M − L/N + 1 data points in a batch that are aligned to neither dM[n] nor dN[n]. This index set is denoted as KU = {k ∈ Z+ | k < L, mod(k, M) ≠ 0, mod(k, N) ≠ 0}.

The above definition of data sets will be used in the following information recovery algorithm design.
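As a minimal sketch of these index sets (the function and variable names below are ours, not the paper's), the following Python snippet computes KM, KN, and KU for a given sensor pair:

```python
import math

def batch_index_sets(M, N):
    """Partition the indices 1..L of a batch (L = LCM(M, N)) into:
    K_M: indices aligned with the sensor of period M*T,
    K_N: indices aligned with the sensor of period N*T,
    K_U: indices aligned with neither sensor (these must be recovered)."""
    L = math.lcm(M, N)
    K_M = {k for k in range(1, L + 1) if k % M == 0}
    K_N = {k for k in range(1, L + 1) if k % N == 0}
    K_U = {k for k in range(1, L) if k % M != 0 and k % N != 0}
    return K_M, K_N, K_U
```

For M = 3 and N = 2 (the pair used in Example 1 below), the unmeasured set is KU = {1, 5}, matching the counting rule L − L/M − L/N + 1 = 6 − 2 − 3 + 1 = 2.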

3 MODEL-BASED INFORMATION RECOVERY WITH COLLABORATIVE SENSING

Intuitively, if the time index of the fast underlying signal d{i}[k] is aligned to any of the sensor measurements, i.e., k ∈ KM or KN, a direct measurement is available and no data recovery is needed. However, if k ∈ KU, d{i}[k] is lost in the sampling process. The following theorem shows that if d[n] satisfies an internal signal model, the lost information can be recovered by combining historical measurements from S1 and S2.

Theorem 1. Let dM[n], dN[n], d[n], and d{i}[k] be defined as in the previous section. If there exists a polynomial A(z^-1) = 1 + ∑_{i=1}^{m} a_i z^-i (a_m ≠ 0) such that A(z^-1) d[n] = 0 at steady state (z^-1 is the one-step delay operator such that z^-1 d[n] = d[n−1]), then the k-th data point in the i-th batch can be recovered by

d{i}[k] = ∑_{i=0}^{t1} w_{k,i} dM[n1 − i] + ∑_{j=0}^{t2} v_{k,j} dN[n2 − j]    (5)

where t1 and t2 are finite integers, and n1 and n2 denote indices of dM and dN such that dM[n1] ↔ dN[n2] ↔ d{i−1}[L] (such a relationship is ensured by the third rule of Definition 1). The unknown parameters w_{k,i} and v_{k,j} come from the solution of the following system of linear equations

Mk [ f_{k,1}, ..., f_{k,l}, w_{k,0}, ..., w_{k,t1}, v_{k,0}, ..., v_{k,t2} ]^T = [ −a1, −a2, ..., −a_m, 0, 0, ..., 0 ]^T    (6)

Here, l = max{t1·M, t2·N} + k − m; Mk is a matrix of dimension (l+m) × (l+t1+t2+2), and is defined as

Mk = [ M̃k  e_k  e_{k+M}  ...  e_{k+t1·M}  e_k  e_{k+N}  ...  e_{k+t2·N} ]    (7)

where M̃k is the (l+m) × l lower-triangular Toeplitz matrix whose j-th column carries the coefficients [1, a1, ..., a_m]^T starting at row j:

M̃k =
[ 1    0    ···  0   ]
[ a1   1         ⋮   ]
[ ⋮    a1   ⋱    0   ]
[ a_m  ⋮    ⋱    1   ]
[ 0    a_m       a1  ]
[ ⋮         ⋱    ⋮   ]
[ 0    ···  0    a_m ]    (8)

and e_i is the elemental column vector whose entries are all zeros except for the i-th entry, which equals 1.

Proof. To see (5), we first construct

F_k(z^-1) A(z^-1) + z^-k W_k(z^-M) + z^-k V_k(z^-N) = 1    (9)

where

F_k(z^-1) = 1 + f_{k,1} z^-1 + ··· + f_{k,l} z^-l    (10)
W_k(z^-M) = w_{k,0} + w_{k,1} z^-M + ··· + w_{k,t1} z^-t1·M    (11)
V_k(z^-N) = v_{k,0} + v_{k,1} z^-N + ··· + v_{k,t2} z^-t2·N    (12)

Multiplying both sides of (9) by d[n] and dropping the vanishing term F_k(z^-1) A(z^-1) d[n], we have

d[n] = z^-k W_k(z^-M) d[n] + z^-k V_k(z^-N) d[n]    (13)

namely,

d[n] = ∑_{i=0}^{t1} w_{k,i} d[n−k−iM] + ∑_{j=0}^{t2} v_{k,j} d[n−k−jN]    (14)

Let d[n] be the k-th data point of the i-th batch, i.e., d[n] ↔ d{i}[k]; then based on the batch definition (Eq. (3)), we have d[n−k] ↔ d[iL] ↔ d{i−1}[L]. Recall that the indices n1 and n2 are chosen such that dM[n1] ↔ dN[n2] ↔ d{i−1}[L]. Thus we get d[n−k] ↔ dM[n1] ↔ dN[n2], or (n−k)T = n1·MT = n2·NT based on their timestamp equivalence. Now the timestamps of the summation terms in (14) are

t{d[n−k−iM]} = (n−k−iM)T = (n1−i)MT = t{dM[n1−i]}    (15)
t{d[n−k−jN]} = (n−k−jN)T = (n2−j)NT = t{dN[n2−j]}    (16)

Thus we get

d[n−k−iM] ↔ dM[n1−i]    (17)
d[n−k−jN] ↔ dN[n2−j]    (18)

In other words, (5) will be satisfied as long as (14), or its equivalent form (9), is satisfied.

Now consider solving (9). Expanding the equation and collecting the coefficients of z^-i (i = 1, 2, ..., l+m), one gets (l+m) linear equations in the (l+t1+t2+2) unknowns, which can be written in matrix form as (6).

Example 1. Consider an illustrative example with M = 3, N = 2 and A(z^-1) = 1 + a1 z^-1 + a2 z^-2. Based on Definition 1, the batch size is chosen as L = LCM(3, 2) = 6, and then KU = {1, 5}. In the recovery process, data points with index k ∈ KU in batches of d[n] will be recovered from Eq. (5). Here we choose t1 = t2 = 1 (there is more discussion on choosing t1 and t2 in the following section), and the recovery equations become

d{i}[k] = w_{k,0} d3[n1] + w_{k,1} d3[n1−1] + v_{k,0} d2[n2] + v_{k,1} d2[n2−1],  k = 1, 5    (19)

Following the procedure in Theorem 1, the parameters w_{1,0}, w_{1,1}, v_{1,0}, v_{1,1} are obtained from the solution of M1 [f_{1,1}, f_{1,2}, w_{1,0}, w_{1,1}, v_{1,0}, v_{1,1}]^T = [−a1, −a2, 0, 0]^T, where

M1 =
[ 1   0   1  0  1  0 ]
[ a1  1   0  0  0  0 ]
[ a2  a1  0  0  0  1 ]
[ 0   a2  0  1  0  0 ]    (20)

and the parameters w_{5,0}, w_{5,1}, v_{5,0}, v_{5,1} are from the solution of M5 [f_{5,1}, ..., f_{5,6}, w_{5,0}, w_{5,1}, v_{5,0}, v_{5,1}]^T = [−a1, −a2, 0, 0, 0, 0, 0, 0]^T, where

M5 =
[ 1   0   0   0   0   0   0  0  0  0 ]
[ a1  1   0   0   0   0   0  0  0  0 ]
[ a2  a1  1   0   0   0   0  0  0  0 ]
[ 0   a2  a1  1   0   0   0  0  0  0 ]
[ 0   0   a2  a1  1   0   1  0  1  0 ]
[ 0   0   0   a2  a1  1   0  0  0  0 ]
[ 0   0   0   0   a2  a1  0  0  0  1 ]
[ 0   0   0   0   0   a2  0  1  0  0 ]    (21)
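As an illustrative sketch (not the authors' implementation; function names and the test signal are ours), the construction of Mk in Eqs. (6)-(8) and the minimum-norm pseudoinverse solution can be coded directly:

```python
import numpy as np

def recovery_weights(a, k, M, N, t1, t2):
    """Solve the linear system (6) for the weights w_{k,i}, v_{k,j} of Eq. (5).

    a : [a1, ..., am], coefficients of A(z^-1) = 1 + a1 z^-1 + ... + am z^-m
    k : index of the unmeasured point inside a batch (k in K_U)
    Returns (w, v) with len(w) = t1 + 1 and len(v) = t2 + 1.
    """
    a = np.asarray(a, dtype=float)
    m = len(a)
    l = max(t1 * M, t2 * N) + k - m           # degree of F_k(z^-1)
    rows = l + m
    # Lower-triangular Toeplitz block (8): column j holds [1, a1, ..., am]
    Mt = np.zeros((rows, l))
    for j in range(l):
        Mt[j:j + m + 1, j] = np.r_[1.0, a]
    # Elemental columns e_{k+iM} and e_{k+jN} of Eq. (7) (1-indexed)
    def e(i):
        col = np.zeros(rows)
        col[i - 1] = 1.0
        return col
    cols = [e(k + i * M) for i in range(t1 + 1)] + \
           [e(k + j * N) for j in range(t2 + 1)]
    Mk = np.column_stack([Mt] + cols)
    rhs = np.zeros(rows)
    rhs[:m] = -a                               # right-hand side of (6)
    sol = np.linalg.pinv(Mk) @ rhs             # minimum-norm solution
    return sol[l:l + t1 + 1], sol[l + t1 + 1:]
```

As a sanity check, one can take d[n] = cos(ωn + φ), which satisfies (1 − 2cos(ω) z^-1 + z^-2) d[n] = 0, and confirm that the recovery identity d[n] = ∑ w_i d[n−k−iM] + ∑ v_j d[n−k−jN] holds to machine precision for k = 1 and k = 5 in Example 1.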

4 DISCUSSION

4.1 Choosing t1 and t2

In Theorem 1, (t1+1) data points from dM[n] and (t2+1) data points from dN[n] are used in the recovery equation (5). In fact, the number of data points used in the recovery process is flexible, as we discuss next.

Corollary 1. A necessary condition for the system of equations (6) to have a solution is

t1 + t2 ≥ m + nd − 2    (22)

where

nd = min{ ⌈(t1+1)/(L/M)⌉, ⌈(t2+1)/(L/N)⌉ }    (23)

Proof. Recall that a solvable system of linear equations must not be overdetermined, so an obvious necessary condition for (6) to have solutions is

l + t1 + t2 + 2 ≥ l + m    (24)

In addition, when iM = jN holds for some i ∈ [0, t1] and j ∈ [0, t2], the corresponding columns e_{k+iM} and e_{k+jN} in the matrix Mk are identical, yielding redundant variable pairs in (6) (say there are nd of them). The number of independent variables then becomes l + t1 + t2 + 2 − nd, and the necessary condition (24) reduces to (22).

To define nd more quantitatively, we recall that a signal batch can provide at most L/M measurements from sensor S1; hence the number of prior batches containing measurements from S1 that are used in the recovery process is

nd,M = ⌈(t1+1)/(L/M)⌉    (25)

Similarly, for sensor S2,

nd,N = ⌈(t2+1)/(L/N)⌉    (26)

It can be seen from Definition 1 that the condition iM = jN holds only once in a single batch; hence the number of redundant variable pairs nd is the number of prior batches where measurements from both sensors are involved in the recovery process, which is the minimum of nd,M and nd,N.

Note that although the dimension of Mk varies when recovering different data points in a batch, the necessary condition (22) only needs to be checked once, because k is not involved in (22). This can be understood by realizing that the recovered data points in a batch are calculated using the same set of prior sensor measurements.
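A quick sketch of this check in Python (the function name is ours, not the paper's):

```python
import math

def satisfies_necessary_condition(m, M, N, t1, t2):
    """Check the necessary condition (22)-(23) for Eq. (6) to be solvable:
    t1 + t2 >= m + nd - 2, with nd the number of redundant column pairs."""
    L = math.lcm(M, N)
    nd = min(math.ceil((t1 + 1) / (L // M)),
             math.ceil((t2 + 1) / (L // N)))
    return t1 + t2 >= m + nd - 2
```

Both Example 1 (m = 2, M = 3, N = 2, t1 = t2 = 1) and the Section 5 configuration (m = 6, M = 4, N = 3, t1 = t2 = 3) pass this check.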

4.2 Solution to the System of Linear Equations, and a Method to Reduce Computation

If t1 and t2 are chosen based on (22), solutions are guaranteed to exist for the system of equations defined in Eq. (6). One particular solution is given by

[fk; qk] = Mk† [a; 0]    (27)

where Mk† is the Moore-Penrose inverse, or pseudoinverse, of Mk, fk = [f_{k,1}, ..., f_{k,l}]^T, qk = [w_{k,0}, ..., w_{k,t1}, v_{k,0}, ..., v_{k,t2}]^T, and a = −[a1, ..., am]^T. The solution (27) has the minimum Euclidean norm among all possible solutions. However, computing the pseudoinverse is time-consuming for a large Mk (whose dimension grows quickly as k increases; see Fig. 2). We next discuss a reduced-order procedure to solve (6) that drastically reduces the computational load for real-time applications.

The system of linear equations (6) can be rewritten in the following form, where Mk is segmented into four smaller matrices with the dimensions indicated below:

[ A_{m×l}  D_{m×(t1+t2+2)} ] [ fk ]   [ a ]
[ B_{l×l}  C_{l×(t1+t2+2)} ] [ qk ] = [ 0 ]    (28)

Then the system solution is given by

qk = (D − A B^-1 C)† a    (29)

To see this, unfold the matrix equation (28) as

A fk + D qk = a    (30)
B fk + C qk = 0    (31)

Notice that B is an invertible upper triangular matrix. Thus fk can be solved from (31) as

fk = −B^-1 C qk    (32)

Inserting (32) into (30) yields (29).
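A sketch of the reduced-order solve (again with our naming; the partition follows (28), operating on a given Mk):

```python
import numpy as np

def reduced_order_solve(Mk, a_vec, m):
    """Solve Mk [fk; qk] = [a; 0] for (fk, qk) via the partition (28)-(29).

    Mk    : the (l+m) x (l + t1 + t2 + 2) matrix of Eq. (6)
    a_vec : the vector a = -[a1, ..., am] of Eq. (27)
    m     : order of A(z^-1); the top m rows of Mk form [A D]
    """
    l = Mk.shape[0] - m
    A, D = Mk[:m, :l], Mk[:m, l:]
    B, C = Mk[m:, :l], Mk[m:, l:]
    # B is upper triangular with a_m on the diagonal; a dedicated
    # back-substitution routine [12] could replace this generic solve.
    BinvC = np.linalg.solve(B, C)
    qk = np.linalg.pinv(D - A @ BinvC) @ a_vec   # Eq. (29)
    fk = -BinvC @ qk                             # Eq. (32)
    return fk, qk
```

Only an m × (t1+t2+2) pseudoinverse is needed here, instead of one of the full (l+m) × (l+t1+t2+2) matrix, which is what keeps the cost nearly flat in k (Fig. 2).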

Instead of directly computing the pseudoinverse of the large matrix Mk, the reduced-order method shrinks the matrix by l rows and l columns before taking the pseudoinverse, and efficient algorithms exist for inverting the upper triangular matrix B [12]. This allows a significant reduction of the computational cost in configurations with large parameters (M, N, t1, t2). Figure 2 shows how the computing cost changes as k increases when computing the prediction parameters in a batch.² The test results show that the proposed method reduces the computation cost to a significantly lower level under different configurations; furthermore, the cost remains largely invariant as k increases.

²The tests were run on the same computer in MATLAB 2017b; the functions were called multiple times and the average computation time was recorded.

FIGURE 2. THE TIME FOR COMPUTING THE SYSTEM SOLUTION USING THE DIRECT METHOD AND THE REDUCED-ORDER METHOD (computation time in seconds versus index k in a batch).

5 EXAMPLE APPLICATION: BEYOND-NYQUIST DISTURBANCE REJECTION

An immediate consequence of slowly sampled data in a feedback system is that the controlled process will not be able to reject fast disturbances, or more specifically, signals beyond the Nyquist frequency. Our preliminary studies [14, 15] have reported, by simulation and experimentation, that a well-designed classic high-gain controller can amplify instead of attenuate the actual disturbance when its main spectral components are near or beyond the Nyquist frequency of the sensor. However, with the proposed model-based information recovery technique, rejecting beyond-Nyquist disturbances using classic high-gain control becomes possible, as we shall see from the example below.

Consider the case where a micro servo motor is controlled by a discrete PID controller in a feedback loop. The continuous-time transfer function of the motor, from the input applied voltage to the output speed (rad/sec), is Pc(s) = 360000/(s² + 660s + 36000). The controller has the transfer function

C(z) = kP + kI·Ts · 1/(z−1) + (kD/Ts) · (z−1)/z    (33)

with kP = 0.9628, kI·Ts = 0.0640, kD/Ts = 1.55, and sampling time Ts = 0.9 msec. The closed loop achieves a rise time of 0.0036 sec and a settling time of 0.018 sec. Suppose the plant is subject to narrow-band disturbances at the high frequencies 516 Hz, 783 Hz and 1150 Hz, which are close to or beyond the Nyquist frequency (i.e., 555 Hz) of the sensor. Many control algorithms exist for rejecting such narrow-band disturbances. For example, [3, 16] provide narrow-band disturbance observers that achieve infinite control gain at selected disturbance frequency ranges. The application here constructs a multirate control system combining the narrow-band disturbance observer and our proposed algorithm of information recovery with collaborative sensors.
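As a minimal sketch (our own realization and naming, reading (33) as C(z) = kP + kI·Ts/(z−1) + (kD/Ts)(z−1)/z, i.e., an accumulator of past errors for the integral term and a backward difference for the derivative term), the controller can be realized as a difference equation:

```python
class DiscretePID:
    """Discrete PID read from Eq. (33): C(z) = kP + kI*Ts/(z-1) + (kD/Ts)*(z-1)/z.

    1/(z-1) = z^-1/(1 - z^-1) accumulates the *previous* errors;
    (z-1)/z = 1 - z^-1 is a backward difference.
    """
    def __init__(self, kP, kI_Ts, kD_over_Ts):
        self.kP, self.kI_Ts, self.kD_over_Ts = kP, kI_Ts, kD_over_Ts
        self.err_sum = 0.0   # running sum of past errors
        self.prev_e = 0.0

    def step(self, e):
        u = (self.kP * e
             + self.kI_Ts * self.err_sum
             + self.kD_over_Ts * (e - self.prev_e))
        self.err_sum += e    # the next step sees the sum including e
        self.prev_e = e
        return u

# Parameter values from the paper's example
pid = DiscretePID(kP=0.9628, kI_Ts=0.0640, kD_over_Ts=1.55)
```

At the first nonzero error sample the output is kP + kD/Ts times the error (no accumulated integral yet), and at steady error the derivative term vanishes while the integral term grows, as expected of a PID law.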

Figure 3 shows the proposed control scheme for beyond-Nyquist disturbance compensation with the collaborative sensing mechanism.

FIGURE 3. MULTI-RATE CONTROL SCHEME FOR DISTURBANCE REJECTION USING NARROW-BAND DISTURBANCE OBSERVER AND COLLABORATIVE SENSING.

Besides the original sensor sampled at 0.9 msec, a second slow sensor with sampling time 1.2 msec is added to form the collaborative sensor pair: M = 4, N = 3 and T = 0.3 msec. In Fig. 3, discrete signals with sampling times MT, NT and T are denoted by sparse dashed lines, dense dashed lines and dotted lines, respectively. Continuous signals are denoted by solid lines. Components of the disturbance rejection mechanism include P̂d(z), the identified discrete model of the continuous plant Pc(s), and Q(z), the disturbance compensating filter. With the disturbance frequency information known, one can design Q(z) with the procedure provided in [16]. Our model-based recovery technique (i.e., the MR block in Fig. 3) is applied to recover a fast disturbance estimate d̂[n] from the slowly sampled disturbance estimates d̂M[n] and d̂N[n].

In the recovery process, there are L = LCM(4, 3) = 12 points in a signal batch; points with index k = 1, 2, 5, 7, 10, 11 are recovered by (5). Based on the internal signal model [6] of a narrow-band signal d[n] with n frequency components f_i, i = 1, ..., n, we have A(z^-1) d[n] = 0 at steady state, where

A(z^-1) = ∏_{i=1}^{n} (1 − 2cos(2π f_i T) z^-1 + z^-2)    (34)

Substituting the parameter values yields the model of the disturbance estimate d̂[n]:

A(z^-1) = 1 − 0.1882 z^-1 + 1.7362 z^-2 − 0.1386 z^-3 + 1.7362 z^-4 − 0.1882 z^-5 + z^-6    (35)

Then by Theorem 1, one can get the parameters w_{k,i} and v_{k,j} in the recovery equation (5) from the solution of the system of linear equations (6) for each k. Here we chose t1 = t2 = 3, which satisfies the necessary condition (22).
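The internal-model polynomial (34) can be sketched in Python as follows (our naming; we verify only the annihilation property, since the printed coefficients depend on the exact frequency values used):

```python
import numpy as np

def internal_model(freqs_hz, T):
    """Coefficients [1, a1, ..., am] of A(z^-1) in Eq. (34): the product of
    (1 - 2*cos(2*pi*f*T) z^-1 + z^-2) over all disturbance frequencies."""
    a = np.array([1.0])
    for f in freqs_hz:
        quad = np.array([1.0, -2.0 * np.cos(2.0 * np.pi * f * T), 1.0])
        a = np.convolve(a, quad)   # polynomial multiplication
    return a

# The three narrow-band components of the Section 5 example, at T = 0.3 msec
a = internal_model([516.0, 783.0, 1150.0], T=0.3e-3)
```

By construction, filtering any sum of sinusoids at these frequencies through A(z^-1) yields zero at steady state, which is exactly the annihilation property that Theorem 1 exploits; the coefficients are also palindromic, as for any product of terms of the form 1 − c z^-1 + z^-2.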

To show the effectiveness of the disturbance compensation loop, we applied zero reference input to the system and turned on the compensation loop at t = 0.3 sec. Figure 5 shows the system output yd[n] sampled at 0.3 msec. The results show that the beyond-Nyquist disturbances were fully rejected at a sampling rate three times faster than the maximum sampling speed of the sensors. Figure 4 shows the recovered fast-sampled disturbance d̂[n] (black solid line) as well as the slow disturbance measurements d̂M[n] (blue dashed line with asterisk marks) and d̂N[n] (red dash-dotted line with circle marks). The sparsely sampled disturbances were accurately recovered based on their frequency information using our proposed algorithm.

FIGURE 4. SIGNAL RECOVERY RESULTS.

FIGURE 5. SYSTEM OUTPUT SAMPLED AT 3.33 KHZ (disturbance compensation loop turned on partway through the run; before that, the disturbance passes through due to the slow measuring speeds of the sensors).

6 CONCLUSION

In this paper, the problem of reconstructing a fast discrete signal d[n] from the sparsely sampled collaborative sensor measurements dM[n] and dN[n] is addressed. Based on a collaborative sensing design and model-based filtering using sensors of different sampling speeds, the proposed online algorithm can recover the dense information that is not measured by the slow sensors. The algorithm was implemented and validated in a disturbance compensation architecture to enable full rejection of beyond-Nyquist disturbances.

References

[1] Richard G. Baraniuk. Compressive sensing [lecture notes]. IEEE Signal Processing Magazine, 24(4):118-121, 2007.
[2] Emmanuel J. Candès and Michael B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, 2008.
[3] Xu Chen and Masayoshi Tomizuka. A minimum parameter adaptive approach for rejecting multiple narrow-band disturbances with application to hard disk drives. IEEE Transactions on Control Systems Technology, 20(2):408-415, March 2012.
[4] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
[5] Gene F. Franklin, J. David Powell, and Michael L. Workman. Digital Control of Dynamic Systems, volume 3. Addison-Wesley, Menlo Park, CA, 1998.
[6] Carlos E. Garcia and Manfred Morari. Internal model control. A unifying review and some new results. Industrial & Engineering Chemistry Process Design and Development, 21(2):308-323, April 1982.
[7] Abdul J. Jerri. The Shannon sampling theorem - its various extensions and applications: A tutorial review. Proceedings of the IEEE, 65(11):1565-1596, 1977.
[8] Yasamin Mostofi. Compressive cooperative sensing and mapping in mobile networks. IEEE Transactions on Mobile Computing, 10(12):1769-1784, 2011.
[9] Masaaki Nagahara, Takahiro Matsuda, and Kazunori Hayashi. Compressive sampling for remote control systems. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 95(4):713-722, 2012.
[10] Masaaki Nagahara and Daniel E. Quevedo. Sparse representations for packetized predictive networked control. IFAC Proceedings Volumes, 44(1):84-89, 2011.
[11] C. E. Shannon. Communication in the presence of noise. Proceedings of the IRE, 37(1):10-21, 1949.
[12] Lloyd N. Trefethen and David Bau III. Numerical Linear Algebra, volume 50. SIAM, 1997.
[13] Michael Unser. Sampling - 50 years after Shannon. Proceedings of the IEEE, 88(4):569-587, 2000.
[14] Dan Wang and Xu Chen. A spectral analysis and its implications of feedback regulation beyond Nyquist frequency. IEEE Transactions on Mechatronics, 2018. In production.
[15] Dan Wang, Masayoshi Tomizuka, and Xu Chen. Spectral distribution and implications of feedback regulation beyond Nyquist frequency. In International Symposium on Flexible Automation (ISFA), pages 23-30. IEEE, 2016.
[16] Hui Xiao and Xu Chen. Multi-band beyond-Nyquist disturbance rejection on a galvanometer scanner system. In Proceedings of the IEEE International Conference on Advanced Intelligent Mechatronics, Munich, Germany, July 3-7, 2017, pages 1700-1705.