Computationally Efficient Predictive Robot Control
Vincent Duchaine, Samuel Bouchard, and Clément M. Gosselin
Abstract—Conventional linear controllers (PID) are not really suitable
for the control of robot manipulators due to the highly nonlinear behav-
ior of the latter. Over the last decades, several control methods have been
proposed to circumvent this limitation. This paper presents an approach to
the control of manipulators using a computationally efficient model-based
predictive control scheme. First, a general predictive control law is derived
for position tracking and velocity control, taking into account the dynamic
model of the robot, the prediction and control horizons, and also the con-
straints. However, the main contribution of this paper is the derivation of
an analytical expression for the optimal control to be applied that does
not involve a numerical procedure, as opposed to most predictive control
schemes. In the last part of the paper, the effectiveness of the approach for
the control of a nonlinear plant is illustrated using a direct-drive pendu-
lum, and then, the approach is validated and compared to a PID controller
using an experimental implementation on a 6-DOF cable-driven parallel
manipulator.
Index Terms—Cable-driven mechanism, nonlinear control, parallel
mechanism, position control, predictive control, robot manipulators, ve-
locity control.
I. INTRODUCTION
The majority of existing industrial manipulators are controlled using
proportional derivative (PD) controllers. This type of basically linear
control does not represent an optimal solution for the control of robots,
which exhibit highly nonlinear kinematics and dynamics. In fact, in or-
der to accommodate configurations where the gravity and inertia terms
reach their minimum amplitude, the gain associated with the derivative
feedback (D) has to be set to a relatively large value, thereby leading to a
Manuscript received October 13, 2006; revised May 7, 2007. This work was
supported in part by the Natural Sciences and Engineering Research Council of
Canada (NSERC) and in part by the Canada Research Chair Program. Recom-
mended by Technical Editor Z. Lin.
The authors are with the Department of Mechanical Engineering, Université
Laval, Quebec City, QC G1K 7P4, Canada (e-mail: vincent.duchaine.1@
ulaval.ca; samuel.bouchard.1@ulaval.ca; gosselin@gmc.ulaval.ca).
Digital Object Identifier 10.1109/TMECH.2007.905722
generally overdamped behavior that limits the performance. Neverthe-
less, in most current robotic applications, PD controllers are functional
and sufficient due to the high reduction ratio of the transmissions used.
However, this assumption is no longer valid for manipulators with low
transmission ratios or those intended to perform high accelerations,
such as parallel robots.
Therefore, several other approaches have been proposed by re-
searchers. These alternative control schemes can be classified into two
main categories [1], namely: 1) dynamic control, which is based on a
rigid-body dynamic model of the robot and 2) adaptive control, which
is based on an online adjustment of the dynamics of the system or its
controller. Examples of dynamic control schemes (computed torque)
are given for instance in [2]–[4], while examples of adaptive control
methods are provided in [5]–[7]. Although these control schemes rep-
resent significant improvements over classical PD controllers, many
of them are not suitable for an industrial context due to their lack of
robustness with respect to model uncertainties (e.g., variable payload)
or due to their computational complexity.
Over the last few decades, a new class of control approach based on
the so-called Model Predictive Control (MPC) algorithm was proposed.
Arising from the work of Kalman [8], [9] in the 1960s, predictive
control can be said to provide the possibility of controlling a system
using a proactive rather than reactive scheme. Common applications of
this approach are in slow processes such as the petroleum and chemical
industries or more recently in aeronautics and aerospace control [10].
A little more than a decade ago, it was also proposed to apply predictive
control to nonlinear robotic systems [11], [12]. However, in the latter
references, only a restricted form of predictive control was presented
and the implementation issues—including the computational burden—
were not addressed. Later, predictive control was applied to a broader
variety of robotic systems such as a 2-DOF serial manipulator [13],
robots with flexible joints [14], or electrical motor drives [15]. More
recently, a simplified approach using a limited Taylor expansion was
presented in [16] and [17]. Due to their relatively low computation time,
the latter approaches open the way to real implementations. Finally, Poignet and
coworkers [18]–[20] experimentally demonstrated predictive control
on a 4-DOF parallel mechanism using a linear model in the optimization
combined with a feedback linearization.
This paper presents a new simplified approach to predictive control
applied to general robotic manipulators that directly includes the com-
plete dynamic model in the cost function. The proposed approach is
first derived for velocity control schemes. A general solution is pre-
sented, and then, a simplified approach in which no online optimization
is required is developed. Then, the approach is derived for the position
tracking problem (position control) using similar assumptions. Finally,
an experimental validation of the proposed predictive approach is pre-
sented to illustrate its performance. Experimental results obtained from
the implementation on a 1-DOF pendulum and on a 6-DOF cable-driven
parallel mechanism are included.
II. VELOCITY CONTROL
Velocity control is rarely implemented in conventional industrial
manipulators since the majority of the tasks to be performed by robots
require precise position tracking. However, over the last few years,
several researchers have developed a new generation of robots that
are capable of working in collaboration with humans [21]–[23]. For
this type of task, velocity control seems more appropriate [24] due
to the fact that the robot is not constrained to given positions, but
rather has to follow the movement of the human collaborator. The
predictive control approach presented in this paper can be useful in this
context.
Fig. 1. MPC applied to manipulator.
A. General Formulation
The proposed approach is based on a conventional model-based pre-
dictive control strategy. In this scheme, in order to predict the correct
control input to be applied to the system, it is required to minimize
a quadratic cost function over a prediction horizon. The cost function
is composed of two parts, namely, a quadratic function of the deter-
ministic and stochastic components of the process and a quadratic
function of the constraints. The function to be minimized can be
written as
J = \sum_{n=1}^{H_p} (y_{k+n} - r_{k+n})^T Q (y_{k+n} - r_{k+n}) + \sum_{m=1}^{H_c} \psi^T \lambda \psi    (1)
where
    k           the current time step;
    H_p         the prediction horizon;
    H_c         the control horizon;
    Q           a weighting factor;
    \lambda     a weighting factor;
    r_{k+n}     the reference input;
    y_{k+n}     the output of the system;
    u_{k+m}     the control input signal;
    \psi        a constraint function.
In this paper, a constraint on the variation of the control input signal
over a prediction horizon will be used
\psi = (u_{k+m} - u_{k+m-1})    (2)
which amounts to minimizing the variation between two consecutive in-
puts in order to obtain a smoother response. Fig. 1 provides a schematic
representation of the proposed scheme, where d_k represents the error
between the output of the system and the output of the model.
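To make the structure of (1) and (2) concrete, the following minimal sketch (not from the paper) evaluates the cost for a generic candidate input sequence; diagonal weighting matrices Q and λ, pre-computed output predictions, and the function name are assumptions made here for illustration only.

```python
# Minimal sketch (not from the paper): evaluating the cost (1) with the
# input-variation constraint (2) for generic predicted outputs.
import numpy as np

def mpc_cost(y_pred, r_pred, u_seq, u_prev, Q, lam):
    """y_pred, r_pred: (Hp, ny) predicted outputs/references over the horizon.
    u_seq: (Hc, nu) candidate control inputs; u_prev: last applied input."""
    Hp, Hc = y_pred.shape[0], u_seq.shape[0]
    J = 0.0
    for n in range(Hp):                      # tracking term of (1)
        e = y_pred[n] - r_pred[n]
        J += e @ Q @ e
    u_full = np.vstack([u_prev, u_seq])
    for m in range(1, Hc + 1):               # constraint term (2)
        psi = u_full[m] - u_full[m - 1]
        J += psi @ lam @ psi
    return J
```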
For velocity control, the reference input is usually relatively con-
stant, especially considering the high servo rates used. Therefore, it is
reasonable to assume that the reference velocity remains constant over
the prediction horizon. With this assumption, the stochastic predictor
of the reference velocity becomes
\tilde{r}(k+1) = r(k)
\tilde{r}(k+2) = r(k)
        \vdots
\tilde{r}(k+H_c) = r(k)    (3)

where \tilde{r}(j) stands for the predicted value of r at time step j.
The error d is obtained by computing the difference between the
system’s output and the model’s output. Taking into account this dif-
ference in the cost function will help to increase the robustness of
the control to model mismatch. The error can be decomposed into two
parts. The first one is the error associated directly with model uncer-
tainties. Often, this component will produce an offset proportional to
the mismatch. The error may also include zero-mean white noise
caused by the noise of the encoder or some random perturbation that
cannot be included in the deterministic model. Since the error term is
partially composed of zero-mean white noise, it is difficult to define
a good stochastic predictor of the future values. However, in the case
considered here, a future error equal to the present one will be simply
assumed. This can be expressed as
\tilde{d}(k+1) = d(k)
\tilde{d}(k+2) = d(k)
        \vdots
\tilde{d}(k+H_c) = d(k)    (4)

where \tilde{d}(j) is the predicted value of d at time step j.
B. Application To Manipulators
The dynamic model of a robot manipulator can be expressed as
\tau = M(\theta)\ddot{\theta} + h(\theta, \dot{\theta}) + V\dot{\theta} + g(\theta)    (5)

where
    \tau                       the vector of actuator torques;
    M(\theta)                  the generalized inertia matrix;
    \ddot{\theta}              the vector of actuator accelerations;
    h(\theta, \dot{\theta})    the vector of centrifugal and Coriolis forces;
    V\dot{\theta}              the vector of viscous friction torques;
    g(\theta)                  the vector of gravity torques.
The acceleration resulting from a torque applied on the system can
be found by inverting (5), which leads to
\ddot{\theta} = -M(\theta)^{-1} \big[ h(\theta, \dot{\theta}) + V\dot{\theta} + g(\theta) - \tau \big]    (6)

where \theta and \dot{\theta} are the positions and velocities measured by the en-
coders. Assuming that the acceleration is constant over one time period,
the previous expression can be substituted into the equations associ-
ated with the motion of a body undergoing constant acceleration, which
leads to
\dot{\theta}_{k+1} = \dot{\theta}_k - M(\theta)^{-1} \big[ h(\theta, \dot{\theta}) + V\dot{\theta} + g(\theta) - \tau \big] T_s    (7)

where T_s is the sampling period. Since robots usually run on a discrete
controller with a very small sampling period, assuming a constant
acceleration over a sample period is a reasonable approximation that
will not induce significant errors.
Since the system input is a current and not a torque, the afore-
mentioned equations must be modified. Neglecting the dynamic time
response of the actuators—which is justified by the high bandwidth
of the latter—one can define a matrix K_\tau relating the current and the
torque at the actuators. Thus, let us suppose first that the relation
between the torque and the current for an actuator can be approximated
as a simple gain K_m. Since actuators are generally coupled with a
transmission, the generalized torque constant will be

K_\tau = K_m    (8)
where K_m is a diagonal matrix whose ith diagonal entry is the product
of the gear ratio and the torque-to-current gain of the ith actuator.
Substituting (8) into (7) leads to
\dot{\theta}_{k+1} = \dot{\theta}_k - M(\theta)^{-1} \big[ h(\theta, \dot{\theta}) + V\dot{\theta} + g(\theta) - K_\tau u \big] T_s    (9)

where u represents the vector of control input (current in the actuators).
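As an illustration of the discrete model (9), the sketch below (not the authors' code) propagates the joint velocity over one sampling period; M, h, V, g, and K_tau are placeholders to be supplied by the robot's dynamic model.

```python
# Illustrative one-step velocity prediction following (9).
import numpy as np

def predict_velocity(theta_dot, M, h, V, g, K_tau, u, Ts):
    """Predict the joint velocity one sampling period ahead, eq. (9)."""
    torque_deficit = h + V @ theta_dot + g - K_tau @ u
    # np.linalg.solve is used instead of explicitly forming M^{-1}
    return theta_dot - np.linalg.solve(M, torque_deficit) * Ts
```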
Equation (9) represents the behavior of the robot over a sampling pe-
riod. However, in predictive control, this behavior must be determined
over a number of sampling periods given by the horizon of predic-
tion. Since the dynamic model of the manipulator is nonlinear, it is not
straightforward to compute the necessary recurrence over this horizon,
especially considering the limited computational time available. This
is the main reason why predictive control is still not commonly used
for manipulator control.
Instead of computing exactly the nonlinear evolution of the manip-
ulator dynamics, it can be more efficient to make some assumptions
that will simplify the calculations. For the accelerations normally en-
countered in most manipulator applications, the gravitational term is
the one that has the most impact on the dynamic model. The evolution
of this term over time is a function of the position. The position is
obtained by integrating the velocity over time. Even a large variation
of velocity will not lead to a significant change of position since it is
integrated over a very short period of time. From this point of view, the
high sampling rate that is typically used in robot controllers allows us
to assume that the nonlinear terms of the dynamic model are constant
over a prediction horizon. Obviously, this assumption will induce some
error, but this error can easily be managed by the error term included
in the minimization.
It is known from the literature that, for an infinite prediction horizon
and for a stabilizable process, as long as the objective function weight-
ing matrices are positive definite, predictive control will always stabi-
lize the system [25]. However, the simplifications that have been made
earlier on the representation of the system prevent us from concluding
on stability since the errors in the model will increase nonlinearly with
an increasing prediction horizon. It is not trivial to determine the dura-
tion of the prediction horizon that will ensure the stability of the control
method. The latter will depend on the dynamic model, the geometric
parameters, and also on the conditioning of the manipulator at a given
pose.
From the earlier derivations, combining the deterministic and
stochastic components and the constraint on the input variable leads to
the general cost function to be optimized as a function of the prediction
and control horizons. This function can be divided into two sums in or-
der to treat the prediction horizon and the control horizon separately.
One has
J = \sum_{n=1}^{H_c-1} \left[ A(n)^T Q A(n) + \Delta u_n^T \lambda \Delta u_n \right]
    + \sum_{n=H_c}^{H_p} \left[ B(n)^T Q B(n) + \Delta u_{H_c}^T \lambda \Delta u_{H_c} \right]    (10)

with

A(n) = \dot{\theta} + n M^{-1} \big( K_\tau u_n - h_N(\theta, \dot{\theta}) \big) T_s - (r - d)    (11)

B(n) = \dot{\theta} + n M^{-1} \big( K_\tau u_{H_c} - h_N(\theta, \dot{\theta}) \big) T_s - (r - d)    (12)

being the integration form for \dot{\theta}_{k+n} of the linear equation (9), and where
    h_N(\theta, \dot{\theta})   the gravity, friction, centripetal, and Coriolis term;
    Q                           diagonal matrix of weighting factors;
    \lambda                     diagonal matrix of weighting factors;
    u                           the vector of input variables (current);
    \Delta u_j = u_j - u_{j-1}.
An explicit solution to the minimization of J can be found for
given values of H_p and H_c. However, it is more difficult to find a
general solution that would be a function of H_p and H_c. Nevertheless,
a minimum of J can easily be found numerically. From (10), it is clear
that J is a quadratic function of u. Moreover, because of its physical
meaning, the minimum of J is reached when the derivative of J with
respect to u is equal to zero. The problem can, thus, be reduced to
finding the root of the following equation:
\frac{\partial J}{\partial u} = \sum_{n=1}^{H_c-1} \Big( \big[ C(n) \big( D(n) - (r - d) \big) T_s \big] + 2 \lambda \Delta u_n \Big)
    + \sum_{n=H_c}^{H_p} \Big( \big[ C(n) \big( E(n) - (r - d) \big) T_s \big] + 2 \lambda \Delta u_{H_c} \Big) = 0    (13)

with

C(n) = 2 n Q K_m    (14)

D(n) = \dot{\theta} + n M^{-1} \big( K_\tau u_n - h_N(\theta, \dot{\theta}) \big) T_s    (15)

E(n) = \dot{\theta} + n M^{-1} \big( K_\tau u_{H_c} - h_N(\theta, \dot{\theta}) \big) T_s.    (16)
An exact and unique solution to this equation exists since it is linear.
However, the computation of the solution involves the resolution of a
system of linear equations whose size increases linearly with the control
horizon. Another drawback of this approach is that the generalized
inertia matrix must be inverted, which can be time-consuming. The
next section will present strategies to avoid these drawbacks.
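For reference, the following sketch illustrates the kind of online numerical procedure the general formulation implies, minimizing the cost (10)-(12) over the stacked inputs u_1, ..., u_{H_c}. It is only an illustration of the computational burden that the next section avoids; the function name and the use of SciPy's generic BFGS optimizer (rather than a dedicated solver of the underlying linear system) are choices made here, not part of the paper.

```python
# Sketch (not from the paper) of the general, non-analytical solution:
# numerically minimizing (10)-(12) over the stacked inputs u_1..u_Hc.
import numpy as np
from scipy.optimize import minimize

def optimal_inputs(theta_dot, Minv, h_N, K_tau, r, d, u_prev,
                   Ts, Hp, Hc, Q, lam):
    nu = u_prev.size

    def cost(u_flat):
        u = u_flat.reshape(Hc, nu)
        u_full = np.vstack([u_prev, u])
        J = 0.0
        for n in range(1, Hp + 1):
            m = min(n, Hc)
            un = u[m - 1]                    # u_n for n <= Hc, frozen at u_Hc after
            A = theta_dot + n * Minv @ (K_tau @ un - h_N) * Ts - (r - d)
            du = u_full[m] - u_full[m - 1]   # constraint (2) on the input variation
            J += A @ Q @ A + du @ lam @ du
        return J

    res = minimize(cost, np.tile(u_prev, Hc), method="BFGS")
    return res.x.reshape(Hc, nu)
```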
C. Analytical Solution of the Minimization Problem
The previous section provided a general formulation of the MPC
applied to robot manipulators with an arbitrary number of DOFs and
arbitrary chosen prediction and control horizons. However, in this sec-
tion, only the prediction horizon will be considered. This simplification
of the general approach of the predictive control will make it possible
to find an exact expression of the optimal control input signal for any
prediction horizon, thereby reducing dramatically the computing time.
Many predictive schemes presented in the literature [11], [16], [17]
consider only the prediction horizon and disregard the control horizon,
which greatly simplifies the formulation. Also, the constraint imposed
on the input variable can be eliminated. At high servo rates, neglecting
this constraint does not have a major impact since the input signal
does not usually vary much from one period to another. Thus, the
aggressiveness of the control (Δu) that will result from the elimination
of the constraint function can easily be compensated for by the use of a
longer prediction horizon. The aforementioned simplifications lead to
a new cost function given by
J = \sum_{n=1}^{H_p} F(n)^T F(n)    (17)

where

F(n) = \dot{\theta} + n M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big) T_s - (r - d).    (18)
Computing the derivative of (17) with respect to u and setting it to zero,
a general expression of the optimal control input signal as a function
of the prediction horizon is obtained, namely
u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta}) - \frac{3 M (\dot{\theta} - r + d)}{(1 + 2 H_p) T_s} \right].    (19)
The algebraic manipulations that lead to (19) from (17) are summarized
in Appendix A.
It is noted that this solution does not require the computation of the
inverse of the generalized inertia matrix. Moreover, since the solution
is analytical, an online numerical optimization is no longer required.
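A direct implementation of the closed-form law (19) reduces to a few matrix-vector products, as the sketch below suggests (notation follows the paper; the function itself is illustrative only, and solving with the diagonal K_tau stands in for its inverse).

```python
# Illustrative implementation of the closed-form velocity control law (19).
import numpy as np

def predictive_velocity_control(theta_dot, r, d, M, h_N, K_tau, Ts, Hp):
    """Optimal current vector u according to (19)."""
    correction = 3.0 * M @ (theta_dot - r + d) / ((1.0 + 2.0 * Hp) * Ts)
    return np.linalg.solve(K_tau, h_N - correction)
```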
III. POSITION CONTROL
A. General Formulation
The position tracking control scheme follows a formulation sim-
ilar to the one that was presented before for velocity control. The main
differences are the stochastic predictor of the future reference position
and the deterministic model of the manipulator that must now predict
the future positions instead of velocities.
In the velocity control scheme, it was assumed that the reference
input was constant over the prediction horizon. This assumption was
justified by the high servo rate and by the fact that the velocity does not
usually vary drastically over a sampling period even in fast trajectories.
However, this assumption cannot be used for position tracking. In
particular, in the context of human–robot cooperation, no trajectory is
established a priori, and the future reference input must be predicted
from current positions and velocities. A simple approximation that can
be made is to use the time derivative of the reference to linearly predict
its future. This can be written as
\tilde{r}(k) = r(k)
\tilde{r}(k+1) = r(k) + \Delta r
        \vdots
\tilde{r}(k+H_c) = r(k) + H_c \Delta r    (20)

where \Delta r is given by

\Delta r = r(k) - r(k-1).    (21)
Since the error term d(k) is again partially composed of zero-mean
white noise, one will consider the future of this error equal to the
present. Therefore, (4) is also used here.
B. Application To Manipulators
As shown in the previous section, the velocity can be predicted using
(7). Integrating the latter equation once more with respect to time—
and assuming constant acceleration—the prediction on the position is
obtained as
\theta_{k+1} = \theta_k + \dot{\theta}_k T_s - \tfrac{1}{2} M(\theta)^{-1} \big[ h(\theta, \dot{\theta}) + V\dot{\theta} + g(\theta) - \tau \big] T_s^2.    (22)
Including the deterministic model and the stochastic part inside the
function to be minimized, the general function of predictive control for
the manipulator is obtained
J = \sum_{n=1}^{H_c-1} \left[ G(n)^T Q G(n) + \Delta u_n^T \lambda \Delta u_n \right]
    + \sum_{n=H_c}^{H_p} \left[ L(n)^T Q L(n) + \Delta u_{H_c}^T \lambda \Delta u_{H_c} \right]    (23)

with

G(n) = N(n) + \frac{n^2}{2} M^{-1} \big( K_\tau u_n - h_N(\theta, \dot{\theta}) \big) T_s^2    (24)

L(n) = N(n) + \frac{n^2}{2} M^{-1} \big( K_\tau u_{H_c} - h_N(\theta, \dot{\theta}) \big) T_s^2    (25)

N(n) = \theta + n \dot{\theta} T_s - (\tilde{r} - d).    (26)
Taking the derivative of this function with respect to u and setting it to
zero leads to a linear equation that will give the minimum of the cost
function:
\frac{\partial J}{\partial u} = \sum_{n=1}^{H_c-1} \left( n^2 Q M^{-1} K_\tau G(n) T_s^2 + 2 \lambda \Delta u_n \right)
    + \sum_{n=H_c}^{H_p} \left( n^2 Q M^{-1} K_\tau L(n) T_s^2 + 2 \lambda \Delta u_{H_c} \right) = 0.    (27)
C. Exact Solution To the Minimization
Since the aforementioned result requires the use of a numerical pro-
cedure and also the inversion of the inertia matrix, the same assumptions
that were made for simplifying the cost function for velocity control
will be used again here. These assumptions lead to a simplified predic-
tive control law that makes it possible to find a direct solution to the minimization
without using a numerical procedure. This function can be written as
J = \sum_{n=1}^{H_p} s(n)^T s(n)    (28)

where

s(n) = \theta + n \dot{\theta} T_s + \frac{n^2}{2} M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big) T_s^2 - (\tilde{r} - d).    (29)
Setting the derivative of this function with respect to u equal to zero and
after some manipulations summarized in Appendix B, the following
optimal solution is obtained:

u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta}) - \frac{2 M \left( P_2 \dot{\theta} T_s + \theta - r + P_3 \Delta r + d \right)}{P_1 T_s^2} \right]    (30)

with

P_1 = \frac{3 H_p^2 + 3 H_p - 1}{5}    (31)

P_2 = \frac{3 H_p (H_p + 1)}{2 (2 H_p + 1)}    (32)

P_3 = \frac{-3 H_p^2 + H_p + 2}{4 H_p + 2}    (33)

where H_p is the horizon of prediction.
It is again pointed out that the direct solution of the minimization
given by (30) does not require the computation of the inverse of the
inertia matrix.
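The position-tracking law (30)-(33) admits a similarly compact implementation; the sketch below is illustrative only, and the sign of the leading term of P_3 follows the reconstruction of the derivation in Appendix B.

```python
# Illustrative implementation of the closed-form position-tracking law (30)
# with the horizon polynomials (31)-(33); delta_r is the reference increment (21).
import numpy as np

def predictive_position_control(theta, theta_dot, r, delta_r, d,
                                M, h_N, K_tau, Ts, Hp):
    P1 = (3 * Hp**2 + 3 * Hp - 1) / 5.0                      # (31)
    P2 = 3 * Hp * (Hp + 1) / (2.0 * (2 * Hp + 1))            # (32)
    P3 = (-3 * Hp**2 + Hp + 2) / (4.0 * Hp + 2)              # (33)
    err = P2 * theta_dot * Ts + theta - r + P3 * delta_r + d
    return np.linalg.solve(K_tau, h_N - 2.0 * M @ err / (P1 * Ts**2))
```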
IV. EXPERIMENTAL VALIDATION
A. Goal of the Experiment
The predictive control algorithm presented in this paper aims at
providing a more accurate control of robots. The first goal of the exper-
iment is, thus, to compare the performance of the predictive controller
to the performance of a PID controller on an actual robot. The second
objective is to verify that the simplifying assumptions that were made
in this paper hold in practice.

Fig. 2. Speed response of the direct-drive pendulum for PID and MPC control.

The argument in favor of the predictive
controller is that it should lead to better performance than does a PID
control scheme since it takes into account the dynamics of the robot
and its future behavior while requiring almost the same computation
time. In order to illustrate this phenomenon, the control algorithms
were first used to actuate a simple 1-DOF pendulum. Then, the posi-
tion and velocity control were implemented on a 6-DOF cable-driven
parallel mechanism. The controllers were implemented on a real-time
QNX computer with a servo rate of 500 Hz—a typical servo rate for
robotics applications. The PID controllers were tuned experimentally
by minimizing the square norm of the error of the motors summed over
the entire trajectories.
B. Illustration With the 1-DOF Pendulum
A simple pendulum attached to a direct-drive motor was controlled
using a PID scheme and the predictive controller. This system, which
represents one of the worst candidates for PID controllers, has been
used to demonstrate that our assumption on the dynamic model
does not affect the capability of the proposed predictive controller to
stabilize nonlinear systems. The use of a direct-drive motor maximizes
the impact of the nonlinear terms of the dynamic model, making the
system difficult to control by a conventional regulator.
Also, the simplicity of the system helps to obtain accurate estima-
tions of the parameters of the dynamic model that allow testing the ideal
case. Despite the fact that its inertia remains constant over time, under
constant angular velocity, the gravitational torque is the dominating
term in the dynamic model. This setup also makes it possible to test
the velocity control at high speed without having to consider angular
limitations.
Fig. 2 provides the response of the system (angular velocity) to a
given sequence of input reference velocities for the different controllers.
The predictive control has been implemented according to (19), and
an experimentally determined prediction horizon of four was used for
the tests. It can be easily seen that PID control is inappropriate for this
nonlinear dynamic mechanism. The sinusoidal error corresponds to the
gravitational torque that varies with the rotation of the pendulum. The
predictive control follows the reference input more adequately as it
anticipates the variation in this term.
C. 6-DOF Cable-Driven Parallel Robot
A 6-DOF cable-driven robot with an architecture similar to the one
presented in [26] is used in the experiment. It is shown in Fig. 3 where
the frame is a cube with 2 m edges. The end-effector is suspended by
six cables. The cables are wound on pulleys actuated by motors fixed
at the top of the frame.
Fig. 3. Cable robot used in the experiment.

1) Kinematic Modeling: For a given end-effector pose x, the nec-
essary cable lengths ρ can be calculated using the inverse kinematic
problem (IKP). The length of cable i can be calculated by
\rho_i^2 = v_i^T v_i    (34)

where

v_i = a_i - b_i.    (35)

In (35), b_i and a_i are, respectively, the position of the attachment point
of cable i on the frame and on the end-effector, expressed in the global
coordinate frame. Thus, vector a_i can be expressed as

a_i = c + Q a_i'    (36)

with a_i' being the attachment point of cable i on the end-effector,
expressed in the reference frame of the end-effector, and Q being the
rotation matrix expressing the orientation of the end-effector in the fixed
reference frame. Vector c is defined as the position of the reference point
on the end-effector in the fixed frame.

Considering a fixed pulley radius r, the lengths of the cables can be
related to the angular positions \theta of the actuators

\rho = r\theta.    (37)
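A possible implementation of the IKP of (34)-(37) is sketched below; the row-stacked anchor-point arrays and the function name are assumptions made here for illustration.

```python
# Sketch of the inverse kinematic problem (34)-(37) for the cable robot:
# cable lengths and winch angles from an end-effector pose.
import numpy as np

def cable_ikp(c, Qrot, a_prime, b, pulley_radius):
    """c: (3,) position, Qrot: (3,3) rotation, a_prime, b: (6,3) anchor points."""
    a = c + a_prime @ Qrot.T           # (36): attachment points in the fixed frame
    v = a - b                          # (35)
    rho = np.linalg.norm(v, axis=1)    # (34): cable lengths
    theta = rho / pulley_radius        # (37): corresponding winch angles
    return rho, theta
```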
Substituting ρ in (34) and differentiating with respect to time, one
obtains the velocity equation
A\dot{x} = B\dot{\theta}    (38)

where \dot{x} is the twist of the end-effector

\dot{x} = [\dot{c}^T \;\; w^T]^T    (39)

\theta = [\theta_1 \;\cdots\; \theta_6]^T    (40)

B = \mathrm{diag}[r^2\theta_1, \ldots, r^2\theta_6]    (41)

A = \begin{bmatrix} c_1^T \\ \vdots \\ c_6^T \end{bmatrix}    (42)

vector c_i being

c_i = \begin{bmatrix} (a_i - b_i) \\ (Q a_i') \times (a_i - b_i) \end{bmatrix}    (43)

and where w is the angular velocity of the end-effector.
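The matrices of the velocity equation (38)-(43) can be assembled as in the following sketch (illustrative only; a and b are the fixed-frame attachment points and a_rotated stands for the vectors Q a_i').

```python
# Sketch of the velocity-equation matrices of (38)-(43).
import numpy as np

def velocity_matrices(a, b, a_rotated, theta, pulley_radius):
    """a, b, a_rotated: (6,3) arrays; theta: (6,) winch angles."""
    v = a - b                                      # cable vectors
    A = np.hstack([v, np.cross(a_rotated, v)])     # rows c_i^T of (42)-(43)
    B = np.diag(pulley_radius**2 * theta)          # (41)
    return A, B
```

The Jacobian of (45) then follows, for example, as J = np.linalg.solve(A, B).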
2) Dynamic Modeling: In this paper, the cables are considered
straight, massless, and infinitely stiff. The assumption of straight cables
is justified since the robot is small and the mass of the end-effector is
much larger than the mass of the cables, which induces no sag. The
measurements are made for chosen trajectories for which the mecha-
nism has positive tensions in the cables at all times. The inertia of the
wires is negligible compared to the combined inertia of the pulleys and
end-effector. Although it is of research interest, the elasticity of the
cables is not considered in the dynamic model for this research. The
elastic behavior is not exhibited strongly because of the high stiffness
of the cables under relatively low accelerations (maximum 9.81 m/s²)
of a 600 g end-effector. The balance between the dynamic properties
of the different parts of the robot makes these assumptions acceptable.
Equation (38) can be rearranged as
\dot{x} = J\dot{\theta}    (44)

where

J = A^{-1}B.    (45)

From (44) and using the principle of virtual work, the following dy-
namic equation can be obtained:

\tau = I_p\ddot{\theta} + K_\nu\dot{\theta} + J^T M_e (\ddot{x} + w_g)    (46)
where I_p is the inertia matrix of the pulleys and motors combined

I_p = \mathrm{diag}[I_{p1}, \ldots, I_{p6}]    (47)

K_\nu is the matrix of the viscous friction at the actuators

K_\nu = \mathrm{diag}[k_{\nu 1}, \ldots, k_{\nu 6}]    (48)

and M_e is the inertia matrix of the end-effector

M_e = \begin{bmatrix} \mathrm{diag}[m_e \; m_e \; m_e] & 0_{3\times 3} \\ 0_{3\times 3} & I_e \end{bmatrix}    (49)

m_e being the mass of the end-effector and I_e its inertia matrix given
by the computer-aided design (CAD) model. Vector w_g is the wrench
applied by gravity to the end-effector.
By differentiating (44) with respect to time, one obtains
\ddot{x} = \dot{J}\dot{\theta} + J\ddot{\theta}.    (50)

Substituting this expression for \ddot{x} in (46), the dynamics can be expressed
with respect to the joint variables \theta:

\tau = [I_p + J^T M_e J]\,\ddot{\theta} + J^T M_e \dot{J}\dot{\theta} + K_\nu\dot{\theta} + J^T M_e w_g.    (51)

Equation (51) has the same form as (5) where

M(\theta) = I_p + J^T M_e J    (52)

h(\theta, \dot{\theta}) = J^T M_e \dot{J}\dot{\theta}    (53)

V = K_\nu    (54)

g(\theta) = J^T M_e w_g.    (55)
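Mapped into the generic model (5), the cable-robot terms (52)-(55) can be assembled as in the following sketch (illustrative only; J and its time derivative are assumed to be available from (45)).

```python
# Sketch mapping the cable-robot terms (52)-(55) into the generic model (5).
import numpy as np

def cable_robot_dynamics(J, Jdot, theta_dot, I_p, K_nu, M_e, w_g):
    M = I_p + J.T @ M_e @ J              # (52) generalized inertia matrix
    h = J.T @ M_e @ Jdot @ theta_dot     # (53) Coriolis/centrifugal term
    V = K_nu                             # (54) viscous friction matrix
    g = J.T @ M_e @ w_g                  # (55) gravity term
    return M, h, V, g
```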
3) Trajectories: The trajectories are defined in the Cartesian space.
For the experiment, the selected trajectory is a displacement of 0.95
m along the vertical axis performed in 1 s. The displacement follows
a fifth-order polynomial with respect to time, with zero speed and
acceleration at the beginning and at the end. This smooth displacement
is chosen in order to avoid inducing vibrations in the robot since the
elasticity of the cables is not taken into account in the dynamic model.
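For reference, a rest-to-rest displacement with zero velocity and acceleration at both ends corresponds to the standard quintic profile; the sketch below (an illustration, not the authors' trajectory generator) reproduces the 0.95 m, 1 s motion described above.

```python
# Sketch of the fifth-order polynomial displacement used for the test trajectory.
import numpy as np

def quintic_profile(t, T=1.0, displacement=0.95):
    """Rest-to-rest quintic: zero velocity and acceleration at t = 0 and t = T."""
    s = np.clip(t / T, 0.0, 1.0)
    return displacement * (10 * s**3 - 15 * s**4 + 6 * s**5)
```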
Fig. 4. Velocity control—error between the output and the reference input of
the six motors for the PID (top left) and the predictive control (top right). The
corresponding control input signals are shown at the bottom.
As mentioned earlier, it was verified prior to the experiment that this
trajectory does not require compression in any of the cables.
The cable robot is controlled using the joint coordinates θ, i.e., the
angular positions and velocities of the pulleys. The Cartesian trajectories
are, thus, converted to the joint space using the IKP and the velocity
equations. A numerical solution to the direct kinematic problem is
also implemented to determine the pose from the information of the
encoders. This estimated pose is used to calculate the terms that depend
on x.
D. Experimental Results for the 6-DOF Robot
1) Velocity Control: Fig. 4 provides the error between the response
of the system (joint velocities) and the time derivative of the joint
trajectory described before for the two different controllers. The cor-
responding control input signals are also shown in this figure. The
predictive control algorithm was implemented according to (19), and
an experimentally determined horizon of prediction H_p = 11 was used.
It can be observed that the magnitude of the error is smaller with
the proposed predictive control than with the PID. The control input
signals also appear to be smoother with the proposed approach than
with the conventional linear controller. The PID suffers from the use
of the second derivative of the encoder signal for the derivative gain (D), which
reduces the stability of the control. The predictive control, according
to (19), requires only the encoder signal and its first derivative.
2) Position Control: A predictive position controller was imple-
mented according to (30). An experimentally determined horizon of
prediction H_p = 14 was used.
Fig. 5 illustrates the capability of the proposed scheme to perform
position tracking compared to a PID. The error over the trajectory
for the six motors and the control input signals are presented in the
same figure. It can be seen from these figures that the magnitude of
the error is in the same range for the two control methods. The main
difference occurs at the end of the trajectory where the PID leads to
a small overshoot and takes some time to stabilize. Indeed, that is
where the predictive control exhibits a clear advantage of performance
over the PID. One can also note that during the trajectory, the PID
error appears to have a more random distribution and variation than
the errors obtained with the predictive control.

Fig. 5. Position control—error between the output and the reference input of
the six motors for the PID (top left) and the predictive control (top right). The
corresponding control input signals are shown at the bottom.

Fig. 6. Angular position trajectory of a motor using the PID.

In fact, for the latter,
the errors follow exactly the velocity profile of the trajectory probably
as a consequence of an inaccurate estimation of the friction parameter
(K_ν). The PID is tuned to follow the trajectory as closely as possible.
Even if the magnitude of the acceleration is the same at the end of
the trajectory as at the beginning, the velocity is different. At the
beginning, the velocity is small. The errors that feed the PID build
up fast enough in a short amount of time to provide a good trajectory
tracking. At the end of the trajectory, the velocity is higher, causing this
time—with the integrator effect—an overshoot at the end. This type of
behavior is common for a PID, and is illustrated with experimental data
in Fig. 6. If it is tuned to track closely a trajectory, there is an overshoot
at the end. If it is tuned in such a way that there is no overshoot, the
tracking is not as good. The predictive controller does not suffer from
this problem. It is possible to have a controller that tracks closely a
fast trajectory, without overshooting at the end. The reason is that the
controller anticipates the required torques, the future reference input,
and takes into account the dynamics and the actual conditions of the
system. Experimentally, another advantage of the predictive controller
is that it appears to be much easier to tune than the PID. With a good
dynamic model, only the prediction horizon has to be adjusted in order
to obtain a controller that is more or less aggressive.

TABLE I. COMPUTATION TIME REQUIRED FOR EACH CONTROLLER
3) Computation Time: The computation time was also determined
for each controller during position tracking. The results are given in
Table I. The PID controller requires a longer calculation time at each
step. The integrator in the RT-Lab/QNX computer is the longest term
to calculate in the PID. Actually, if it is removed to obtain a PD, the
computation time drops to 209 µs. Each controller requires computation
times of the same order of magnitude, which means that it is fair to
compare them with the same servo rate.
V. CONCLUSION
This paper presented a simplified approach to predictive control
adapted to robot manipulators. Control schemes were derived for ve-
locity control as well as position tracking, leading to general predictive
equations that do not require online optimization. Several justified sim-
plifications were made on the deterministic part of the typical predictive
control in order to obtain a compromise between the accuracy of the
model and the computation time. These simplifications can be seen as
a means of combining the advantages of predictive control with the
simplicity of implementation of a computed torque method and the fast
computing time of a PID.
Despite all these simplifications, experimental results on a 6-DOF
cable-driven parallel manipulator demonstrated the effectiveness of the
method in terms of performance. The method using the exact solution
of the optimal control appears to alleviate two of the main drawbacks
of predictive control for manipulators, namely, the complexity of the
implementation and the computational burden.
Further investigations should focus on the stability analysis using
Lyapunov functions and also on the demonstration of the robustness of
the proposed control law.
APPENDIX A

EQUATION (19) OBTAINED FROM THE DERIVATIVE OF (17)
J = \sum_{n=1}^{H_p} F(n)^T F(n)    (56)
where
F(n) = \dot{\theta} + n M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big) T_s - (r - d).    (57)
A minimum is obtained when
\frac{\partial J}{\partial u} = \sum_{n=1}^{H_p} \left( \frac{\partial F(n)}{\partial u} \right)^T F(n) = 0.    (58)
Using (57) then leads to
\sum_{n=1}^{H_p} \left( T_s M^{-1} K_\tau \right)^T
    \left[ n^2 T_s M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big) + n (\dot{\theta} - r + d) \right] = 0.    (59)
Since the matrix (T_s M^{-1} K_\tau) is constant, the optimal control signal as a
function of the prediction horizon is obtained by finding the root of the
vector part of the equation, i.e.,
\sum_{n=1}^{H_p} n^2 T_s M^{-1} K_\tau u
    = \sum_{n=1}^{H_p} \left[ n^2 T_s M^{-1} h_N(\theta, \dot{\theta}) - n (\dot{\theta} - r + d) \right]    (60)
\sum_{n=1}^{H_p} n^2\, u = K_\tau^{-1} \sum_{n=1}^{H_p} \left[ n^2 h_N(\theta, \dot{\theta}) - n\, \frac{M (\dot{\theta} - r + d)}{T_s} \right]    (61)
u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta}) - \frac{\sum_{n=1}^{H_p} n}{\sum_{n=1}^{H_p} n^2}\, \frac{M (\dot{\theta} - r + d)}{T_s} \right].    (62)
Recalling that
\sum_{n=1}^{H_p} n = \frac{H_p (H_p + 1)}{2}    (63)

\sum_{n=1}^{H_p} n^2 = \frac{H_p (H_p + 1)(2 H_p + 1)}{6}    (64)

leads to

\frac{\sum_{n=1}^{H_p} n}{\sum_{n=1}^{H_p} n^2} = \frac{3}{2 H_p + 1}.    (65)
Substituting in (62), one finally has
u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta}) - \frac{3 M (\dot{\theta} - r + d)}{(1 + 2 H_p) T_s} \right].    (66)
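As a sanity check on the reconstruction above, the following short script (not from the paper) compares the closed form (19)/(66) against a brute-force least-squares minimization of (17) on arbitrary random data.

```python
# Numeric sanity check: closed form (19)/(66) vs. least-squares minimization of (17).
import numpy as np

rng = np.random.default_rng(0)
N, Hp, Ts = 3, 7, 0.002
A = rng.standard_normal((N, N))
M = A @ A.T + N * np.eye(N)                         # symmetric positive-definite inertia
K = np.diag(rng.uniform(0.5, 2.0, N))               # torque-to-current gains
h_N, th_dot, r, d = (rng.standard_normal(N) for _ in range(4))

# Closed form (19)/(66)
u_closed = np.linalg.solve(K, h_N - 3 * M @ (th_dot - r + d) / ((1 + 2 * Hp) * Ts))

# Brute force: stack the residuals F(n) of (17)-(18) and solve the
# resulting linear least-squares problem in u
Minv = np.linalg.inv(M)
rows = np.vstack([n * Ts * Minv @ K for n in range(1, Hp + 1)])
rhs = np.concatenate([-(th_dot - n * Ts * Minv @ h_N - (r - d))
                      for n in range(1, Hp + 1)])
u_lsq, *_ = np.linalg.lstsq(rows, rhs, rcond=None)

print(np.allclose(u_closed, u_lsq))   # expected: True
```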
APPENDIX B

EQUATION (30) OBTAINED FROM THE DERIVATIVE OF (28)
J = \sum_{n=1}^{H_p} s(n)^T s(n)    (67)
where
s(n) = \theta + n \dot{\theta} T_s + \frac{n^2}{2} M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big) T_s^2 - (\tilde{r} - d).    (68)
A minimum is reached when
\frac{\partial J}{\partial u} = \sum_{n=1}^{H_p} \left( \frac{\partial s(n)}{\partial u} \right)^T s(n) = 0.    (69)
Substituting (68) into (69) leads to
\sum_{n=1}^{H_p} \left( T_s^2 M^{-1} K_\tau \right)^T
    \left[ \frac{n^4}{4} T_s^2 M^{-1} \big( K_\tau u - h_N(\theta, \dot{\theta}) \big)
    + \frac{n^3}{2} (\dot{\theta} T_s - \Delta r) + \frac{n^2}{2} (\theta - r + \Delta r + d) \right] = 0    (70)
where \tilde{r} has been replaced according to (20) by

\tilde{r} = r + (n - 1) \Delta r.    (71)
Since the matrix (T_s^2 M^{-1} K_\tau) is constant, the optimal control signal as a
function of the prediction horizon is obtained by finding the root of the
vector part of the equation, i.e.,
\sum_{n=1}^{H_p} \frac{n^4}{4} T_s^2 M^{-1} K_\tau u
    = \sum_{n=1}^{H_p} \left[ \frac{n^4}{4} T_s^2 M^{-1} h_N(\theta, \dot{\theta})
    - \frac{n^3}{2} (\dot{\theta} T_s - \Delta r) - \frac{n^2}{2} (\theta - r + \Delta r + d) \right]    (72)
u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta})
    - \frac{2 M}{T_s^2} \left( \frac{\sum_{n=1}^{H_p} n^3}{\sum_{n=1}^{H_p} n^4} (\dot{\theta} T_s - \Delta r)
    + \frac{\sum_{n=1}^{H_p} n^2}{\sum_{n=1}^{H_p} n^4} (\theta - r + \Delta r + d) \right) \right].    (73)
Recalling that
\sum_{n=1}^{H_p} n^2 = \frac{H_p (H_p + 1)(2 H_p + 1)}{6}    (74)

\sum_{n=1}^{H_p} n^3 = \left[ \frac{H_p (H_p + 1)}{2} \right]^2    (75)

\sum_{n=1}^{H_p} n^4 = \frac{H_p (H_p + 1)(2 H_p + 1)(3 H_p^2 + 3 H_p - 1)}{30}    (76)
we can define P_1 as

P_1 = \frac{\sum_{n=1}^{H_p} n^4}{\sum_{n=1}^{H_p} n^2} = \frac{3 H_p^2 + 3 H_p - 1}{5}.    (77)
Similarly, we have
\frac{\sum_{n=1}^{H_p} n^3}{\sum_{n=1}^{H_p} n^4} = \frac{15 H_p (H_p + 1)}{2 (2 H_p + 1)(3 H_p^2 + 3 H_p - 1)} = \frac{P_2}{P_1}    (78)
where P_2 is given by

P_2 = \frac{3 H_p (H_p + 1)}{2 (2 H_p + 1)}.    (79)
Collecting the \Delta r terms leads to

\frac{1}{P_1} - \frac{P_2}{P_1} = \frac{P_3}{P_1}    (80)
where
P_3 = 1 - \frac{3 H_p (H_p + 1)}{2 (2 H_p + 1)} = \frac{-3 H_p^2 + H_p + 2}{4 H_p + 2}.    (81)
Finally, substituting these three polynomial expressions into (73) leads
to
u = K_\tau^{-1} \left[ h_N(\theta, \dot{\theta}) - \frac{2 M \left( P_2 \dot{\theta} T_s + \theta - r + P_3 \Delta r + d \right)}{P_1 T_s^2} \right].    (82)
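A similar check (again not from the paper) can be run for the position-tracking law: with the substitution (71) for the predicted reference and the polynomials P_1, P_2, P_3 as reconstructed above, the closed form (30)/(82) should coincide with a brute-force least-squares minimization of (28)-(29) on arbitrary data.

```python
# Numeric sanity check: closed form (30)/(82) vs. least-squares minimization of (28)-(29).
import numpy as np

rng = np.random.default_rng(1)
N, Hp, Ts = 3, 9, 0.002
A = rng.standard_normal((N, N))
M = A @ A.T + N * np.eye(N)
K = np.diag(rng.uniform(0.5, 2.0, N))
h_N, th, th_dot, r, dr, d = (rng.standard_normal(N) for _ in range(6))

P1 = (3 * Hp**2 + 3 * Hp - 1) / 5
P2 = 3 * Hp * (Hp + 1) / (2 * (2 * Hp + 1))
P3 = (-3 * Hp**2 + Hp + 2) / (4 * Hp + 2)
u_closed = np.linalg.solve(
    K, h_N - 2 * M @ (P2 * th_dot * Ts + th - r + P3 * dr + d) / (P1 * Ts**2))

# Brute force: linear least squares on the residuals s(n) of (29),
# with r_tilde = r + (n - 1) * dr as in (71)
Minv = np.linalg.inv(M)
rows, rhs = [], []
for n in range(1, Hp + 1):
    rows.append(0.5 * n**2 * Ts**2 * Minv @ K)
    const = (th + n * th_dot * Ts - 0.5 * n**2 * Ts**2 * Minv @ h_N
             - (r + (n - 1) * dr - d))
    rhs.append(-const)
u_lsq, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)

print(np.allclose(u_closed, u_lsq))   # expected: True
```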
REFERENCES

[1] C. Samson, “Commande non-linéaire robuste des robots manipulateurs,”
INRIA, Rapport de recherche, vol. 1, no. 182, pp. 1–53, 1983.
[2] M. Uebel, I. Minis, and K. Cleary, “Improved computed torque control
for industrial robots,” in Proc. 1992 Int. Conf. Robot. Autom., vol. 1,
pp. 528–533
[3] R. J. Anderson, “Passive computed torque algorithms for robots,” in Proc.
28th Conf. Decis. Control, 1989, vol. 2, pp. 1638–1644.
[4] Y. Bestaoui and D. Benmerzouk, “A sensitivity analysis of the computed
torque technique,” in Proc. Amer. Control Conf., 1995, vol. 6, pp. 4458–
4459.
[5] J.-J. E. Slotine, “The robust control of robot manipulators,” Int. J. Robot.
Res., vol. 4, no. 2, pp. 49–64, 1985.
[6] J.-J. E. Slotine and W. Li, “Adaptive manipulator control: A case study,” in
Proc. 1987 IEEE Int. Conf. Robot. Autom., vol. 4, pp. 1392–1400.
[7] J.-J. E. Slotine and W. Li, “Adaptive strategies in constrained manipula-
tion,” in Proc. 1987 IEEE Int. Conf. Robot. Autom., vol. 4, pp. 595–601.
[8] R. Kalman, “Contributions to the theory of optimal control,” Bull. Soc.
Math. Mex., vol. 5, pp. 102–119, 1960.
[9] R. Kalman, “A new approach to linear filtering and prediction problems,”
Trans. ASME, J. Basic Eng., vol. 82, pp. 35–45, 1960.
[10] J. Shi, A. G. Kelkar, and D. Soloway, “Stable reconfigurable generalized
predictive control with application to flight control,” Trans. ASME, J.
Dyn. Syst., Meas. Control, vol. 128, no. 6, pp. 371–378, 2006.
[11] F. Berlin and P. M. Frank, “Robust predictive robot control,” in Proc. 5th
Int. Conf. Adv. Robot., 1991, vol. 2, pp. 1493–1496.
[12] J. M. Compas, P. Decarreau, G. Lanquetin, J. Estival, and J. Richalet,
“Industrial application of predictive functional control to rolling mill, fast
robot, river dam,” in Proc. 3rd IEEE Conf. Control Appl., 1994, vol. 3,
pp. 1643–1655.
[13] Z. Zhang and W. Wang, “Predictive function control of a two link robot ma-
nipulator,” in Proc. Int. Conf. Mechatron. Autom., 2005, vol. 4, pp. 2004–
2009.
[14] D. von Wissel, R. Nikoukhah, and S. L. Campbell, “On a new predictive
control strategy: Application to a flexible-joint robot,” in Proc. 33rd IEEE
Conf. Decis. Control, 1994, vol. 3, no. 14, pp. 3025–3026.
[15] R. Kennel, A. Linder, and M. Linke, “Generalized predictive control
(GPC)—Ready for use in drive application?,” in Proc. 32nd IEEE Power
Electron. Spec. Conf., 2001, pp. 1839–1844.
[16] R. Hedjar, R. Toumi, P. Boucher, and D. Dumur, “Finite horizon nonlin-
ear predictive control by the Taylor approximation: Application to robot
tracking trajectory,” Int. J. Appl. Math. Sci., vol. 15, no. 4, pp. 527–540,
2005.
[17] R. Hedjar and P. Boucher, “Nonlinear receding-horizon control of rigid
link robot manipulators,” Int. J. Adv. Robot. Syst., vol. 2, no. 1, pp. 15–24,
2005.
[18] F. Lydoire and P. Poignet, “Non linear model predictive control via interval
analysis,” in Proc. 44th IEEE Conf. Decis. Control, 2005, pp. 3771–3776.
[19] P. Poignet and M. Gautier, “Nonlinear model predictive control of a
robot manipulator,” in Proc. 6th Int. Workshop Adv. Motion Control 2000,
pp. 401–406.
[20] A. Vivas, P. Poignet, and F. Pierrot, “Predictive functional control for
a parallel robot,” in Proc. Int. Conf. Intell. Robots Syst., 2003, vol. 3,
pp. 2785–2790.
[21] R. Berbardt, D. Lu, and Y. Dong, “A novel cobot and control,” in Proc.
5th World Congr. Intell. Control, 2004, vol. 5, pp. 4635–4639.
[22] M. Peshkin, J. Colgate, W. Wannasuphoprasit, C. Moore, and R. Gillespie,
“Cobot architecture,” IEEE Trans. Robot. Autom., vol. 17, no. 4, pp. 377–
390, Aug. 2001.
[23] O. M. Al-Jarrah and Y. F. Zheng, “Arm-manipulator coordination for load
sharing using compliant control,” in Proc. IEEE 1996 Int. Conf. Robot.
Autom., vol. 2, pp. 1000–1005.
[24] V. Duchaine and C. Gosselin, “General model of human–robot cooperation
using a novel velocity based variable impedance control,” in Proc. IEEE
World Haptics, 2007, pp. 446–451.
[25] S. Qin and T. Badgwell, “An overview of industrial model predictive
control technology,” in Proc. Chem. Process Control V, 1997, pp. 232–
256.
[26] S. Bouchard and C. M. Gosselin, “Workspace optimization of a very
large cable-driven parallel mechanism for a radiotelescope application,”
presented at the ASME Int. Des. Eng. Tech. Conf., Mech. Robot. Conf.,
Las Vegas, NV, 2007.