
A new approach to controller design in presence of constraints

Maria Prandini and Marco C. Campi

Abstract— In this paper, we present a new approach to

control design in the presence of constraints. This approach relies

on the reformulation of the controller design problem as a

semi-inﬁnite convex optimization program, and on the solution

of this program by the scenario optimization technology.

The approach is illustrated through a simple example of

disturbance rejection subject to input saturation constraints.

I. INTRODUCTION

In this paper, we propose a new approach to address robust

control design in the presence of constraints in a systematic and

optimal way.

For ease of explanation, we illustrate this new approach

through a simple example where, given a linear system

affected by a disturbance belonging to some class, the goal

is to design a feedback controller that attenuates the effect

of the disturbance on the system output, while avoiding

saturation of the control action due to actuator limitations.

The proposed control design method relies on the re-

formulation of the problem as a robust convex optimiza-

tion program by adopting an appropriate parametrization

of the controller. A robust convex optimization problem is

expressed in mathematical terms as

min_{θ∈ℜ^n} g(θ)   subject to:      (1)

f_δ(θ) ≤ 0,  ∀δ ∈ ∆,

where δ is the uncertain parameter, and g(θ) and f_δ(θ) are

convex functions in the n-dimensional optimization variable

θ for every δ within the uncertainty set ∆. Convexity is

appealing since ‘convex’ - as opposed to ‘non-convex’ -

means ‘solvable’ in many cases, [1], [2]. In our context,

the uncertain parameter δ represents a realization of the

disturbance affecting the system, hence ∆ contains an inﬁnite

number of instances. It is well known that semi-inﬁnite opti-

mization problems, that is problems with a ﬁnite number n of

optimization variables and an inﬁnite number of constraints,

are difficult to solve and have even been proven NP-hard in some cases, [3], [4], [5], [6].

In [7], [8], an innovative technology called ‘scenario

approach’ has been introduced to deal with semi-inﬁnite

convex programming at a very general level. The main thrust

This work was supported by MIUR (Ministero dell’Istruzione, dell’Università e della Ricerca) under the project Identification and adaptive control of industrial systems.

M. Prandini is with Dipartimento di Elettronica e Informazione, Po-

litecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy

prandini@elet.polimi.it

M.C. Campi is with Dipartimento di Elettronica per l’Automazione, Università degli Studi di Brescia, via Branze 38, 25123 Brescia, Italy

marco.campi@ing.unibs.it

of this technology is that solvability can be obtained through

random sampling of constraints provided that a probabilis-

tic relaxation of the worst-case robust paradigm of (1) is

accepted. Here, we propose to use the scenario technology

for determining a solution to control design problems that

would otherwise be hard to solve because of the presence

of constraints and of uncertain signals/disturbances affecting

the system. No extensions of the scenario approach itself are

developed. Randomized algorithms for system analysis and

control design have recently become a topic of great interest

for the control community (see [9] for a comprehensive

survey on the subject). Our contribution consists in the

introduction of a novel randomized algorithm for robust

control design in the presence of constraints, which is based on

the scenario approach.

In our control set-up where the uncertain parameter δ

represents the disturbance realization, the implementation of the scenario optimization requires randomly extracting a certain number of disturbance realizations and simulating the system behavior with the extracted realizations as input. This

justiﬁes the terminology we adopt to describe the proposed

approach to control design as a ‘simulation-based method’.

The problem of disturbance rejection has been addressed

in the literature based on the dynamic programming approach

[10], [11], [12], the l_1-optimal control theory [13], [14], the use of an upper bound on the l_1-norm (the star-norm)

[15], and, more recently, through the invariant ellipsoids

technique [16]. In all these approaches, the disturbance is

only assumed to be bounded. Further possible knowledge on

the disturbance signal (such as, for example, its correlation

in time and main frequency components) is not exploited

in the design process, which may lead to sub-optimal and

conservative solutions for the problem at hand. Also, in the

approaches based on dynamic programming and l_1-optimal control, the order of the controller cannot be fixed a priori

and the complexity of the resulting ‘optimal’ compensator

may be high.

Other methodologies for solving quite general control

design problems for linear systems affected by uncertain

signals/disturbances and subject to constraints are present

in the literature of receding horizon and model predictive

control, [17], [18], [19], [20]. Differently from what we

propose here, no structure is imposed on the feedback controller in these papers and design is carried out by directly

optimizing over the control input samples in a time horizon

of interest. The resulting feedback controller has the drawback of being difficult to implement, but it secures high

performance under certain hypotheses. Moreover, applicabil-

ity of standard methods in receding horizon model predictive

Proceedings of the

46th IEEE Conference on Decision and Control

New Orleans, LA, USA, Dec. 12-14, 2007

WeA17.1

1-4244-1498-9/07/$25.00 ©2007 IEEE. 530

control requires that uncertainty is quite structured (typically,

the uncertain signals/disturbances are characterized through

some polytopic or ellipsoidal bound on their instantaneous

value), a limitation which is largely overcome by the ap-

proach proposed here.

The rest of the paper is organized as follows. In Section

II, we precisely describe the control problem addressed and

its reformulation as a semi-inﬁnite convex optimization pro-

gram. The application of the scenario technology to solve this

optimization program is then explained in Section III, and a

numerical example is provided in Section IV to illustrate the

effectiveness of the resulting randomized method for control

design. Some concluding remarks are drawn in Section V.

II. CONTROL PROBLEM FORMULATION

We consider a discrete time linear system with scalar input

and scalar output, u(t) and y(t), governed by the following

equation:

y(t) = G(z)u(t) + d(t), (2)

where G(z) is a stable transfer function and d(t) is an

additive disturbance.

Our objective is to determine a feedback control law

u(t) = C(z)y(t) (3)

(see Figure 1) such that the disturbance d(t) is optimally

attenuated for every realization of d(t) in some set of

possible realizations D, and such that the control input keeps

within certain saturation limits. For example, D can be the set

of step functions with speciﬁed maximum amplitude or the

set of sinusoids with frequency in a certain range. A precise

formalization of the optimization problem is next given.

Fig. 1. The feedback disturbance compensation scheme.

Consider the finite-horizon 2-norm ∑_{t=1}^{M} y(t)² of the

closed-loop system output. This norm quantiﬁes the effect of

the disturbance d(t). For simplicity, we here consider (2) and

(3) initially at rest, namely G(z)u(t) represents an inﬁnite

backwards expansion ∑_{j=1}^{∞} g_j u(t − j) where u(t − j) = 0 for t − j ≤ 0, and similarly for C(z)y(t).

The goal is to minimize the worst-case disturbance effect

max_{d(t)∈D} ∑_{t=1}^{M} y(t)²,      (4)

while maintaining the control input u(t) within a saturation limit u_bound:

max_{1≤t≤M} |u(t)| ≤ u_bound,  ∀d(t) ∈ D.      (5)

Controller C(z) is expressed in terms of an Internal Model

Control (IMC) parametrization, [21]:

C(z) = Q(z) / (1 + Q(z)G(z)),      (6)

where G(z) is the system transfer function and Q(z) is a

free-to-choose transfer function (see Figure 2).

Fig. 2. The IMC parameterization of the controller.

The expression of C(z) in (6) is totally generic, in that, given a C(z), a Q(z) can always be found that generates that C(z) through expression (6). The advantage of (6) is that the set of all controllers that stabilize G(z) in closed loop is simply

obtained from (6) by letting Q(z) vary over the set of all

stable transfer functions (see [21] for more details).

With (6) in place, the control input u(t) and the controlled

output y(t) are given by:

u(t) = [C(z)/(1 − C(z)G(z))] d(t) = Q(z)d(t)      (7)

y(t) = G(z)u(t) + d(t) = [G(z)Q(z) + 1]d(t). (8)
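The identity u(t) = Q(z)d(t) in (7) can be checked numerically by simulating the IMC loop of Figure 2 step by step. The Python sketch below (an illustration, not the authors' code) uses the first-order plant G(z) = 0.2/(z − 0.8) from the example of Section IV and an arbitrary illustrative Q(z) = −0.5 + 0.3z^{−1}:

```python
import random

def simulate_imc(d, q=(-0.5, 0.3)):
    """Simulate the IMC loop of Figure 2 for the plant G(z) = 0.2/(z - 0.8)
    and Q(z) = q0 + q1*z^-1; returns the sequences u(t), y(t)."""
    q0, q1 = q
    x = 0.0      # plant state: x(t) = (G(z)u)(t), system initially at rest
    xhat = 0.0   # internal-model state, same dynamics and initial condition
    e_prev = 0.0
    u_prev = 0.0
    u, y = [], []
    for dt in d:
        x = 0.8 * x + 0.2 * u_prev          # plant: y(t) = G(z)u(t) + d(t)
        y_t = x + dt
        xhat = 0.8 * xhat + 0.2 * u_prev    # model output G(z)u(t)
        e = y_t - xhat                      # mismatch: equals d(t) exactly
        u_t = q0 * e + q1 * e_prev          # control action u(t) = Q(z)e(t)
        u.append(u_t)
        y.append(y_t)
        e_prev, u_prev = e, u_t
    return u, y

random.seed(0)
d = [random.uniform(-1, 1) for _ in range(100)]
u, y = simulate_imc(d)
for t in range(100):                        # check (7): u(t) = Q(z)d(t)
    q_d = -0.5 * d[t] + 0.3 * (d[t - 1] if t > 0 else 0.0)
    assert abs(u[t] - q_d) < 1e-12
```

The signal fed to Q(z) reproduces d(t) exactly because the internal model matches the plant, and this is what collapses the feedback loop to u(t) = Q(z)d(t).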

The distinctive feature of these expressions is that u(t)

and y(t) are afﬁne in Q(z). Consequently, if Q(z) is selected

from a family of stable transfer functions linearly parameterized in γ := [γ_0 γ_1 . . . γ_k]^T ∈ ℜ^{k+1}, i.e.

Q(z) = γ_0 β_0(z) + γ_1 β_1(z) + γ_2 β_2(z) + · · · + γ_k β_k(z),      (9)

where the β_i(z)’s are pre-specified stable transfer functions, then

the cost (4) and the constraints (5) are convex in γ.

A common choice for the β_i(z)’s is to set them equal to pure ‘delays’: β_i(z) = z^{−i}, leading to

Q(z) = γ_0 + γ_1 z^{−1} + γ_2 z^{−2} + · · · + γ_k z^{−k}.

Another possibility is to let the β_i(z)’s be Laguerre polynomials,

[22], [23].

The control design problem can now be precisely formu-

lated as follows:

min_{(γ,h)∈ℜ^{k+2}} h   subject to:      (10)

∑_{t=1}^{M} y(t)² ≤ h,  ∀d(t) ∈ D,      (11)

max_{1≤t≤M} |u(t)| ≤ u_bound,  ∀d(t) ∈ D.      (12)

Due to (11), h represents an upper bound on the output 2-norm ∑_{t=1}^{M} y(t)² for any realization of d(t). Such an upper


bound is minimized in (10) under the additional constraint

(12) that u(t) does not exceed the saturation limits.

We now rewrite problem (10)–(12) in a more explicit form.

By (7) and (8) and the parametrization of Q(z) in (9),

the input and the output of the controlled system can be

expressed as

u(t) = (γ_0 β_0(z) + . . . + γ_k β_k(z)) d(t)      (13)

y(t) = G(z)(γ_0 β_0(z) + . . . + γ_k β_k(z)) d(t) + d(t).      (14)

Let us deﬁne the following vectors containing ﬁltered

versions of the disturbance d(t):

φ(t) := [β_0(z)d(t)  β_1(z)d(t)  . . .  β_k(z)d(t)]^T  and  ψ(t) := [G(z)β_0(z)d(t)  G(z)β_1(z)d(t)  . . .  G(z)β_k(z)d(t)]^T.      (15)

Then, (13) and (14) can be re-written as

u(t) = φ(t)^T γ,   y(t) = ψ(t)^T γ + d(t),

and ∑_{t=1}^{M} y(t)² = γ^T Aγ + Bγ + C, where

A = ∑_{t=1}^{M} ψ(t)ψ(t)^T,   B = 2 ∑_{t=1}^{M} d(t)ψ(t)^T,   C = ∑_{t=1}^{M} d(t)²      (16)

are quantities that depend on d(t) only.
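These quantities can be computed by plain filtering of the sampled disturbance. The sketch below (an illustration, not the authors' code) assumes the delay basis β_i(z) = z^{−i} and the plant G(z) = 0.2/(z − 0.8) used in Section IV; the identity ∑_{t=1}^{M} y(t)² = γ^T Aγ + Bγ + C can then be verified numerically for any γ:

```python
def filter_G(x):
    """Apply G(z) = 0.2/(z - 0.8): w(t) = 0.8*w(t-1) + 0.2*x(t-1), at rest."""
    w, xp, out = 0.0, 0.0, []
    for xt in x:
        w = 0.8 * w + 0.2 * xp
        out.append(w)
        xp = xt
    return out

def scenario_data(d, k):
    """Build phi(t), psi(t) of (15) for the delay basis beta_i(z) = z^-i,
    and the quantities A, B, C of (16), for one realization d."""
    M = len(d)
    # beta_i(z)d(t) = d(t-i), taken as zero before the initial time
    delayed = [[d[t - i] if t - i >= 0 else 0.0 for t in range(M)]
               for i in range(k + 1)]
    g_delayed = [filter_G(col) for col in delayed]     # G(z)beta_i(z)d(t)
    phi = [[delayed[i][t] for i in range(k + 1)] for t in range(M)]
    psi = [[g_delayed[i][t] for i in range(k + 1)] for t in range(M)]
    A = [[sum(psi[t][i] * psi[t][j] for t in range(M)) for j in range(k + 1)]
         for i in range(k + 1)]
    B = [2 * sum(d[t] * psi[t][i] for t in range(M)) for i in range(k + 1)]
    C = sum(v * v for v in d)
    return phi, psi, A, B, C

d = [0.5] * 10
phi, psi, A, B, C = scenario_data(d, 1)
assert abs(C - 10 * 0.25) < 1e-12
```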

With these definitions in place, (10)–(12) can be rewritten as

min_{(γ,h)∈ℜ^{k+2}} h   subject to:      (17)

γ^T Aγ + Bγ + C ≤ h,  ∀d(t) ∈ D,

−u_bound ≤ φ(t)^T γ ≤ u_bound,  ∀t ∈ {1, 2, . . . , M},  ∀d(t) ∈ D.

Compared with the general form (1), the optimization

variable θ is here (γ, h) and has size n = k + 2, and

the uncertain parameter δ is the disturbance realization d(t)

taking value in the set ∆ = D. Note that, given d(t),

quantities A, B, C, and φ(t) are ﬁxed so that the ﬁrst

constraint in (17) is quadratic, while the others are linear.

Typically, the set D of disturbance realizations has inﬁnite

cardinality. Hence, problem (17) is a semi-inﬁnite convex

optimization problem.

III. RANDOMIZED SOLUTION THROUGH THE SCENARIO

TECHNOLOGY

As already pointed out in the introduction, semi-inﬁnite

convex optimization problems like (17) are difﬁcult to solve.

The idea of the scenario approach is that solvability can

be recovered if some relaxation in the concept of solution

is accepted. In the context of our control design problem,

this means requiring that the constraints in (17) are satisﬁed

for all disturbance realizations but a small fraction of them

(chance-constrained approach).
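The flavor of this relaxation can be conveyed by a toy one-dimensional problem (hypothetical, not the control problem of this paper): minimize h subject to δ ≤ h for every δ ∈ [0, 1]. The robust solution is h = 1; the scenario solution keeps only N sampled constraints and violates just the small tail of realizations above the largest sample:

```python
import random

random.seed(1)
N = 200
deltas = [random.random() for _ in range(N)]   # scenarios delta_i ~ uniform[0, 1]

# scenario program: min h subject to delta_i <= h only for the N samples
h_star = max(deltas)

# empirical violation probability over fresh, unseen realizations
fresh = [random.random() for _ in range(100_000)]
violation = sum(1 for delta in fresh if delta > h_star) / len(fresh)

# the worst-case robust solution is h = 1; the scenario solution is
# slightly smaller and is violated only with small probability
assert h_star < 1.0
assert violation < 0.05
```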

The scenario approach goes as follows. Since we are

unable to deal with the wealth of constraints in (17), we

concentrate attention on just a few of them and extract at

random N disturbance realizations d(t) according to some

probability distribution P introduced over D. This proba-

bility distribution should reﬂect the likelihood with which

the disturbance realizations occur or the relative importance

that is attributed to different disturbance realizations. If no

hint is available on which realization is more likely to occur

and none of them is more critical than the others, then the

uniform distribution can be adopted. A discussion on the

use of the uniform distribution in randomized methods can

be found in [24].

Only the extracted instances (‘scenarios’) are considered

in the scenario optimization:

SCENARIO OPTIMIZATION

extract N independent identically distributed realizations d(t)_1, d(t)_2, . . . , d(t)_N from D according to P. Then, solve the scenario convex program (SCP_N):

min_{(γ,h)∈ℜ^{k+2}} h   subject to:      (18)

γ^T A_i γ + B_i γ + C_i ≤ h,  i = 1, . . . , N,

−u_bound ≤ φ(t)_i^T γ ≤ u_bound,  ∀t ∈ {1, 2, . . . , M},  i = 1, . . . , N,

where A_i, B_i, C_i, and φ(t)_i are as in (16) and (15) for d(t) = d(t)_i.

Letting (γ*_N, h*_N) be the solution to SCP_N, γ*_N returns the designed controller parameter, whereas h*_N quantifies the performance of the designed compensator over the extracted disturbance realizations d(t)_1, d(t)_2, . . . , d(t)_N.

The implementation of the scenario optimization requires that one picks N realizations of the disturbance and computes A_i, B_i, C_i, and φ(t)_i for the extracted realizations. Since these quantities are artificially generated (that is, they are not actual measurements coming from the system, but, instead, they are computer-generated), the proposed control design methodology can as well be seen as a simulation-based approach.
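As an illustration of the procedure (not the authors' implementation), for k = 1 the decision variable γ is two-dimensional, so SCP_N can even be sketched by brute-force grid search; a real implementation would hand (18) to a convex programming solver. The scenarios below are hypothetical constant disturbances, simpler than the piecewise constant class of Section IV:

```python
import random

def filt_G(x):
    # G(z) = 0.2/(z - 0.8) as a recursion, system initially at rest
    w, xp, out = 0.0, 0.0, []
    for xt in x:
        w = 0.8 * w + 0.2 * xp
        out.append(w)
        xp = xt
    return out

def cost_and_feasible(gamma, d, u_bound):
    # u(t) = phi(t)^T gamma, y(t) = psi(t)^T gamma + d(t) for Q(z) = g0 + g1*z^-1
    g0, g1 = gamma
    d1 = [0.0] + list(d[:-1])                     # z^-1 d(t)
    u = [g0 * a + g1 * b for a, b in zip(d, d1)]
    y = [w + a for w, a in zip(filt_G(u), d)]
    return sum(v * v for v in y), all(abs(v) <= u_bound for v in u)

def solve_scp(scenarios, u_bound, grid):
    # brute-force stand-in for SCP_N: for each gamma on the grid, h is the
    # largest scenario cost; keep the feasible gamma with the smallest h
    best_h, best_gamma = float('inf'), None
    for g0 in grid:
        for g1 in grid:
            h, ok = 0.0, True
            for d in scenarios:
                c, feas = cost_and_feasible((g0, g1), d, u_bound)
                ok = ok and feas
                h = max(h, c)
            if ok and h < best_h:
                best_h, best_gamma = h, (g0, g1)
    return best_h, best_gamma

random.seed(2)
# toy scenarios (illustrative only): constant disturbances of random level
scenarios = [[random.uniform(-1, 1)] * 30 for _ in range(10)]
grid = [i / 4 for i in range(-24, 25)]            # gamma components in [-6, 6]
h_star, gamma_star = solve_scp(scenarios, u_bound=10, grid=grid)
```

By construction the returned pair satisfies all sampled constraints and improves on the open-loop choice γ = (0, 0), which the grid contains.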

SCP_N is a standard convex optimization problem with a finite number of constraints, and therefore easily solvable.

On the other hand, it is natural to ask: what kind of solution does SCP_N provide? Specifically, what can we

claim regarding the behavior of the designed control system

for all other disturbance realizations, those we have not taken

into consideration while solving the control design problem?

Answering this question is necessary to provide performance

guarantees.

The above question is of the ‘generalization’ type in a

learning-theoretic sense: we want to know how the solution

(γ*_N, h*_N) generalizes in constraint satisfaction, from seen

disturbance realizations to unseen ones. Certainly, any gener-

alization result calls for some structure as no generalization

is possible if no structure linking what has been seen to


what has not been seen is present. The formidable fact in

the context of convex optimization is that the solution of

SCP_N always generalizes well, with no extra assumptions.

We have the following theorem (see Corollary 1 in [8]).

Theorem 1: Select a ‘violation parameter’ ε ∈ (0, 1) and a ‘confidence parameter’ β ∈ (0, 1). Let n = k + 2.

If

N = ⌈(2/ε) ln(1/β) + 2n + (2n/ε) ln(2/ε)⌉      (19)

(⌈·⌉ denotes the smallest integer greater than or equal to the argument), then, with probability no smaller than 1 − β, the solution (γ*_N, h*_N) to (18) satisfies all constraints of problem (17) with the exception of those corresponding to a set of disturbance realizations whose probability is at most ε. □

Let us read through the statement of this theorem in some

detail. If we neglect the part associated with β, then, the

result simply says that, by sampling a number of disturbance

realizations as given by (19), the solution (γ*_N, h*_N) to (18) violates the constraints corresponding to other realizations with a probability that does not exceed a user-chosen level ε. This corresponds to saying that, for other, unseen, d(t)’s, constraints (11) and (12) are violated with a probability of at most ε. From (11) we therefore see that the found h*_N provides an upper bound for the output 2-norm ∑_{t=1}^{M} y(t)² valid for any realization of the disturbance with the exclusion of at most an ε-probability set, while (12) guarantees that, with the same probability, the saturation limits are not exceeded.

As for the probability 1 − β, one should note that (γ*_N, h*_N) is a random quantity because it depends on the randomly extracted disturbance realizations. It may happen that the extracted realizations are not representative enough (one can even stumble on an extraction as bad as selecting N times the same realization!). In this case no generalization can be expected, and the portion of unseen realizations violated by (γ*_N, h*_N) is larger than ε. The parameter β controls the probability of extracting ‘bad’ realizations, and the final result that (γ*_N, h*_N) violates at most an ε-fraction of realizations holds with probability 1 − β.

In theory, β plays an important role and selecting β = 0

yields N = ∞. For any practical purpose, however, β has

very marginal importance since it appears in (19) under a logarithm: we can select β to be a number as small as 10^{−10} or even 10^{−20}, in practice zero, and still N does not grow significantly.
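The bound (19) is immediate to evaluate numerically; a minimal sketch, using the values ε = 0.05, β = 10^{−10}, and n = k + 2 = 3 adopted in the example of Section IV:

```python
from math import ceil, log

def scenario_sample_size(eps, beta, n):
    """Evaluate the sample-size bound (19)."""
    return ceil((2 / eps) * log(1 / beta) + 2 * n + (2 * n / eps) * log(2 / eps))

# values used in Section IV: eps = 0.05, beta = 1e-10, n = k + 2 = 3
assert scenario_sample_size(0.05, 1e-10, 3) == 1370
# beta enters only through a logarithm: shrinking it by ten more orders
# of magnitude keeps N in the same ballpark
assert scenario_sample_size(0.05, 1e-20, 3) == 2291
```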

It is worth mentioning that improved bounds on the sample

complexity N have been developed very recently in [25] and

[26]. In particular, the bound derived in [26] is exact for the

class of the so-called fully-supported problems.

IV. NUMERICAL EXAMPLE

A simple example illustrates the controller design proce-

dure.

With reference to (2), let

G(z) = 0.2/(z − 0.8),

and let the additive output disturbance be a piecewise constant signal that varies from time to time, at a low rate, by an amount bounded by some given constant. Specifically, let the set of admissible realizations D consist of piecewise constant signals changing at most once over any time interval of length 50, and taking value in [−1, 1].

As for the IMC parametrization Q(z) in (9), we choose

k = 1 and Q(z) = γ_0 + γ_1 z^{−1}.

A control design problem (10)–(12) is considered with

M = 300, and for two different values of the saturation

limit u_bound: 10 and 1. Probability P is implicitly assigned by the recursive equation

d(t + 1) = (1 − µ(t))d(t) + µ(t)v(t + 1),

initialized with d(1) = v(1), where µ(t) is a {0, 1}-valued

process (µ(t) = 1 at times where a jump occurs), and v(t) is

a sequence of i.i.d. random variables uniformly distributed

in [−1, 1] (v(t) is the new d(t) value). µ(t) is generated

according to

µ(t) = α(t) ∏_{k=1}^{50} (1 − µ(t − k)),

initialized with µ(0) = µ(−1) = · · · = µ(−49) = 0,

where α(t) is a sequence of i.i.d. {0, 1}-valued random

variables taking value 1 with probability 0.01. An admissible

realization of d(t) in D is reported in Figure 3.
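A sketch of this sampling mechanism, written with an explicit dwell-time counter instead of the product over past µ(t) values (the two formulations are equivalent, and all numerical choices follow the text above):

```python
import random

def sample_disturbance(M=300, dwell=50, p_jump=0.01, rng=random):
    """One realization of d(t), t = 1..M: piecewise constant, with
    consecutive level changes at least `dwell` steps apart and levels
    drawn uniformly from [-1, 1]."""
    d = [rng.uniform(-1, 1)]   # d(1) = v(1)
    since_jump = dwell         # mu(0) = mu(-1) = ... = 0: a jump may occur at once
    for _ in range(M - 1):
        # mu(t) = alpha(t) * prod_{k=1..dwell}(1 - mu(t-k)): alpha(t) = 1
        # with probability p_jump, and the product vetoes a jump whenever
        # one occurred within the last `dwell` steps
        if since_jump >= dwell and rng.random() < p_jump:
            d.append(rng.uniform(-1, 1))   # v(t+1): the new level
            since_jump = 0
        else:
            d.append(d[-1])
        since_jump += 1
    return d

random.seed(3)
d = sample_disturbance()
assert len(d) == 300 and all(-1 <= v <= 1 for v in d)
jumps = [t for t in range(1, len(d)) if d[t] != d[t - 1]]
assert all(b - a >= 50 for a, b in zip(jumps, jumps[1:]))
```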


Fig. 3. A disturbance realization.

In the scenario approach we let ε = 5 · 10^{−2} and β = 10^{−10}. Correspondingly, N given by (19) is N = 1370.

From Theorem 1, with probability no smaller than 1 − 10^{−10}, the obtained controller achieves the minimum of ∑_{t=1}^{M} y(t)² over all disturbance realizations, except a fraction of them of size smaller than or equal to 5%. At the same time, the control input u(t) is guaranteed not to exceed the saturation limit u_bound except for the same fraction of disturbance realizations.

A. Simulation results

For u_bound = 10, we obtained Q(z) = −4.993 + 4.024z^{−1} and, correspondingly, the transfer function F(z) = 1 + Q(z)G(z) between d(t) and y(t) (closed-loop sensitivity function) was

F(z) = 1 + (−4.993 + 4.024z^{−1}) · 0.2/(z − 0.8) ≃ 1 − z^{−1}.

The pole-zero plot of F(z) is in Figure 4.


Fig. 4. Pole-zero plot of F(z) when u_bound = 10. The poles are plotted as x’s and the zeros are plotted as o’s.

Since y(t) = F(z)d(t) ≃ d(t) − d(t − 1), when d(t) has a step variation, y(t) changes by the same amount and, when the disturbance becomes constant again, y(t) is immediately brought back to zero and maintained equal to zero until the next step variation in d(t) (see Figure 5). The finding that F(z) is approximately a FIR (Finite Impulse Response) filter of order 1 with zero DC-gain is not surprising considering that d(t) varies at a low rate.
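This approximation can be checked by computing the first impulse-response coefficients of F(z) = 1 + Q(z)G(z) with the found Q(z) and comparing them with those of 1 − z^{−1}; a small sketch (illustrative only, not the authors' code):

```python
def impulse_F(q0, q1, n=10):
    """First n impulse-response coefficients of F(z) = 1 + Q(z)G(z),
    with G(z) = 0.2/(z - 0.8) and Q(z) = q0 + q1*z^-1."""
    # impulse response of G: g_j = 0.2 * 0.8**(j-1) for j >= 1, g_0 = 0
    g = [0.0] + [0.2 * 0.8 ** (j - 1) for j in range(1, n)]
    q = [q0, q1] + [0.0] * (n - 2)
    # (QG)_t = sum_i q_i g_{t-i}; F = 1 + QG adds a unit impulse at t = 0
    h = [sum(q[i] * g[t - i] for i in range(t + 1)) for t in range(n)]
    h[0] += 1.0
    return h

h = impulse_F(-4.993, 4.024)
# F(z) ~ 1 - z^-1 means h ~ [1, -1, 0, 0, ...]
assert abs(h[0] - 1.0) < 1e-12
assert abs(h[1] + 1.0) < 0.01
assert all(abs(v) < 0.01 for v in h[2:])
```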


Fig. 5. Disturbance realization and corresponding output of the controlled system for u_bound = 10.

In the controller design just described, the limit u_bound = 10 played no role, in that the constraints −u_bound ≤ φ(t)_i^T γ ≤ u_bound in problem (18) were not active at the found solution. As u_bound is decreased, the saturation limits become more stringent and affect the solution.

For u_bound = 1, the following scenario solution was found: Q(z) = −0.991 + 0.011z^{−1}, which corresponds to the sensitivity function

F(z) = 1 + (−0.991 + 0.011z^{−1}) · 0.2/(z − 0.8) ≃ (z − 0.996)/(z − 0.8).

The pole-zero plot of F(z) is in Figure 6, while Figure 7

represents y(t) obtained through this new controller for the

same disturbance realization as in Figure 5. Note that the

time required to bring y(t) back to zero after a disturbance

jump is now longer than 1 time unit, owing to saturation

constraints on u(t).


Fig. 6. Pole-zero plot of F(z) when u_bound = 1. The poles are plotted as x’s and the zeros are plotted as o’s.


Fig. 7. Disturbance realization and corresponding output of the controlled system for u_bound = 1.

The optimal control cost value h*_N is h*_N = 9.4564 for u_bound = 10 and h*_N = 27.4912 for u_bound = 1. As expected, the control cost increases as u_bound becomes more stringent.

The numerical example of this section is just one instance

of application of the scenario approach to controller selec-

tion. The introduced methodology is of general applicability

to diverse situations with constraints of different types, presence of reference signals, etc.


V. CONCLUSIONS

In this paper, we considered an optimal disturbance re-

jection problem with limitations on the control action and

showed how it can be effectively addressed by means of

the so-called scenario technology. This approach basically

consists of the following main steps:

- reformulation of the problem as a robust (usually with

inﬁnite constraints) convex optimization problem;

- randomization over the constraints and resolution (by means of standard numerical methods) of the resulting finite optimization problem;

- evaluation of the constraint satisfaction level of the

obtained solution through Theorem 1.

Extensions to tracking of some class of reference signals, and

to control problems where the initial condition is uncertain

or the output of the system is subject to some constraint are

quite straightforward.

The applicability of the scenario methodology is not

limited to optimal control problems with constraints and,

indeed, this same methodology has been applied to a number

of different endeavors in systems and control, [27], [28], [29],

[30].

REFERENCES

[1] G.C. Goodwin, M.M. Seron, J.A. De Doná, Constrained Control and

Estimation: an Optimisation Approach, Springer-Verlag, New York,

2005.

[2] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix

Inequalities in System and Control Theory, SIAM Studies in Applied

Mathematics, SIAM, Philadelphia, 1994.

[3] A. Ben-Tal and A. Nemirovski, Robust convex optimization, Mathematics of Operations Research, vol. 23(4), pp. 769-805, 1998.

[4] V.D. Blondel and J.N. Tsitsiklis, A survey of computational complexity

results in systems and control, Automatica, vol. 36, pp. 1249-1274,

2000.

[5] R.P. Braatz, P.M. Young, J.C. Doyle, M. Morari, Computational

complexity of µ calculation, IEEE Trans. Autom. Control, vol. 39(5),

pp. 1000-1002, 1994.

[6] A. Nemirovski, Several NP-hard problems arising in robust stability

analysis, SIAM J. Matrix Anal. Appl., vol. 6, pp. 99-105, 1993.

[7] G. Calaﬁore and M.C. Campi, Uncertain convex programs: random-

ized solutions and conﬁdence levels, Math. Program., Ser. A vol. 102,

pp. 25-46, 2005.

[8] G. Calaﬁore and M.C. Campi, The scenario approach to robust

control design, IEEE Trans. on Automatic Control, vol. 51(5), pp.

742–753, 2006.

[9] G. Calaﬁore, F. Dabbene, R. Tempo, A survey of randomized algo-

rithms for control synthesis and performance veriﬁcation, Journal of

Complexity, vol. 23(3), pp. 301–316, 2007.

[10] D.P. Bertsekas and I.B. Rhodes, On the Minimax Reachability of

Target Sets and Target Tubes, Automatica, vol. 7, pp. 233–241, 1971.

[11] J. Glover and F. Schweppe, Control of linear dynamic systems with

set constrained disturbances, IEEE Trans. on Automatic Control, vol.

16, pp. 411–423, 1971.

[12] N. Elia and M.A. Dahleh, Minimization of the worst case peak-to-peak

gain via dynamic programming: state feedback case, IEEE Trans. on

Automatic Control, vol. 45, pp. 687–701, 2000.

[13] M. Vidyasagar, Optimal rejection of persistent bounded disturbances,

IEEE Trans. on Automatic Control, vol. 31, pp. 517–535, 1986.

[14] M.A. Dahleh and J.B. Pearson, l_1-Optimal feedback controllers for

MIMO discrete-time systems, IEEE Trans. on Automatic Control, vol.

32, pp. 314–322, 1987.

[15] J. Abedor, K. Nagpal, K. Poola, A linear matrix inequality approach to

peak-to-peak gain minimization, Int. J. Robust and Nonlinear Control,

vol. 6, pp. 899–927, 1996.

[16] B.T. Polyak, A.V. Nazin, M.V. Topunov, S.A. Nazin, ”Rejection of

bounded disturbances via invariant ellipsoids technique”, in Proceedings of the 45th IEEE Conference on Decision and Control, San

Diego, USA, 2006.

[17] J.M. Maciejowski, Predictive Control with Constraints, Prentice-Hall,

Pearson Education Limited, Harlow, UK, 2002.

[18] D.Q. Mayne, J.B. Rawlings, C.V. Rao, P. O. M. Scokaert, Constrained

model predictive control: Stability and optimality, Automatica, vol.

36, pp. 789-814, 2000.

[19] P.O.M. Scokaert and D.Q. Mayne, Min-max feedback model predictive

control for constrained linear systems, IEEE Trans. Automat. Contr.,

vol. 43, pp. 1136-1142, 1998.

[20] A. Bemporad, F. Borrelli, M. Morari, Min-Max Control of Con-

strained Uncertain Discrete-Time Linear Systems, IEEE Trans. on

Automatic Control, vol. 48(9), pp. 1600-1606, 2003.

[21] M. Morari and E. Zaﬁriou, Robust process control. Prentice Hall,

Englewood Cliffs, New Jersey, 1989.

[22] B. Wahlberg, System identiﬁcation using Laguerre models, IEEE

Trans. on Automatic Control, vol. 36, pp. 551-562, 1991.

[23] B. Wahlberg and E. Hannan, Parametric signal modelling using Laguerre filters, The Annals of Applied Probability, vol. 3, pp. 467-

496, 1993.

[24] B.R. Barmish and C.M. Lagoa, The uniform distribution: A rigorous

justiﬁcation for its use in robustness analysis, Mathematics of Control,

Signals and Systems, vol. 10, pp. 203-222, 1997.

[25] T. Alamo, R. Tempo, E.F. Camacho, ”The scenario approach for

robust control design: improved sample size bounds”, in Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans,

USA, 2007.

[26] M.C. Campi and S. Garatti, The exact feasibility of randomized solutions of robust convex programs. Available on-line at http://www.optimization-online.org/DB_HTML/2007/07/1727.html, 2007.

[27] G. Calaﬁore and M.C. Campi, ”Robust convex programs: randomized

solutions and application in control”, in Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, 2003.

[28] G. Calaﬁore and M.C. Campi, ”A new bound on the generalization

rate of sampled convex programs”, in Proceedings of the 43rd

IEEE Conference on Decision and Control, Atlantis, Paradise Island,

Bahamas, 2004.

[29] G. Calaﬁore and M.C. Campi, A learning theory approach to the

construction of predictor models. Discrete and Continuous Dynamical

Systems, supplement volume, pp. 156-166, 2003.

[30] G. Calaﬁore, M.C. Campi, S. Garatti, ”Identiﬁcation of reliable

predictor models for unknown systems: a data-consistency approach

based on learning theory”, in Proceedings of the 16th

IFAC World

Congress, Prague, Czech Republic, 2005.
