A Methodology for Fault Detection, Isolation, and Identification for
Nonlinear Processes with Parametric Uncertainties
Srinivasan Rajaraman a,b, Juergen Hahn b,*, M. Sam Mannan a,b
a Mary Kay O'Connor Process Safety Center, Department of Chemical Engineering, Texas A&M University, College Station, Texas 77843-3122, USA
b Department of Chemical Engineering, Texas A&M University, College Station, Texas 77843-3122, USA
Abstract
This paper presents a novel methodology for systematically designing a fault detection, isolation,
and identification algorithm for nonlinear systems with known model structure but uncertainty in
the parameters. The proposed fault diagnosis methodology does not require historical operational
data and/or a priori fault information in order to achieve accurate fault identification. This is
achieved by a two-step procedure consisting of a nonlinear observer, which includes a parameter
estimator and a fault isolation and identification filter. Parameter estimation within the observer
is performed by using the unknown parameters as augmented states of the system and robustness
is ensured by application of a variation of Kharitonov’s theorem to the observer design. The filter
design for fault reconstruction is based upon a linearization, which has to be repeatedly
computed at each step where a fault is to be identified. However, this repeated linearization does
not pose a severe drawback since linearization of a model can be automated and is
computationally not very demanding for models used for fault detection. It is not possible to
simultaneously perform parameter estimation and fault reconstruction since faults and the
parametric uncertainty influence one another. Therefore, these two tasks are performed at
different time scales, where the fault identification takes place at a higher frequency than the
parameter estimation. It is shown that the fault can be reconstructed under some realistic
assumptions and the performance of the proposed methodology is evaluated on a simulated
chemical process exhibiting nonlinear dynamic behavior.
Keywords: Fault detection; Fault isolation; Fault identification; State and parameter estimation;
Hurwitz stability; Kharitonov's theorem
1 Introduction
Early and accurate fault detection and diagnosis is an essential component of operating
modern chemical plants in order to reduce downtime, increase safety and product quality,
∗ Corresponding author. Phone: +1 (979) 845 3568
Fax: +1 (979) 845 6446
E-mail: hahn@tamu.edu
minimize impact on the environment, and reduce manufacturing costs1,2. As the level of
instrumentation in chemical plants increases, it is essential to be able to monitor the variables and
interpret their variations. While some of these variations are due to changing operating
conditions, others can be directly linked to faults. Extracting essential information about the state
of a system and processing the data for detecting, isolating, and identifying abnormal readings
are important tasks of a fault diagnosis system3, where the individual goals are defined as:
• Fault detection: a Boolean decision about the existence of faults in a system.
• Fault isolation: determination of the location of a fault, e.g., which sensor or actuator is
not operating within normal limits.
• Fault identification: estimation of the size and type of a fault.
There exist numerous techniques for performing fault diagnosis4. The majority of these
approaches are based upon data from past operations in which statistical measures are used to
compare the current operating data to earlier conditions of the process where the state of the
process was known. While these techniques are often easy to implement, they do have the
drawback that it is not possible to perform fault identification, that a large amount of past data is
required, that the method may not be able to detect a fault if operating conditions have changed
significantly, or that processes exhibiting highly nonlinear behavior may be difficult to diagnose5.
In order to address these points, this paper presents an approach for fault diagnosis based upon
nonlinear first-principles models, which can include parametric uncertainty. Incorporating
fundamental models into the procedure allows for accurate diagnosis even if operating conditions
have changed, while the online estimation of model parameters takes care of plant-model
mismatch. The parameter estimation is performed using an augmented nonlinear observer, where
a concept from Kharitonov’s theory6 about stability under the influence of parametric uncertainty
is utilized in order to ensure a certain level of robustness for the designed observer. The fault
diagnosis itself uses the computation of residuals (i.e., the mismatch between the measured
output and estimated output using the model) for fault detection3 and appropriate filters are
designed to achieve fault isolation and identification as well. Since it is not possible to
simultaneously perform parameter estimation and fault detection, due to the interactions of these
two tasks, an approach where these computations are taking place at different time scales is
implemented. It is shown that fault detection, isolation, and identification for nonlinear systems
containing uncertain parameters can be performed under realistic assumptions with the presented
approach.
2 Previous work
Extensive research on fault detection has been undertaken over the last few decades5,7,8.
The majority of methods are based on statistical techniques9,10, however, a significant body of
literature also exists for fault detection and identification of measurement bias based upon first
principles models5,11.
Work on model-based fault detection has included unknown input observers (UIO)13,
which are based upon the idea that the state estimation error can be decoupled from unknown
inputs3 acting as disturbances and thereby decoupling residuals from uncertainties. This concept
was generalized in subsequent work14 for detecting and isolating both sensor and actuator faults
by considering the case when unknown inputs also appear in the output equation. A different
approach, but using a similar concept of decoupling the disturbances from the residuals for fault
detection, made use of the eigenstructure assignment of the observer14,15. In this technique, the
observer is designed to de-couple the residual from unknown inputs rather than from the state-
estimation error. The above two approaches work satisfactorily for LTI systems, however, they
can result in poor performance for nonlinear systems because these systems are not always affine
in the unknown inputs. To cope with nonlinearities of the process, it has been proposed to
develop nonlinear unknown input observers16 for residual generation, which requires a suitable
nonlinear state transformation. However, the conditions under which these transformations exist
are very restrictive in nature. Moreover, it is assumed that the unknown disturbances affect the
system in a piecewise linear affine fashion. The restrictive conditions required to develop UIOs
and attain eigenstructure assignment have led to frequency domain and optimization methods for
robust fault diagnosis3. The main emphasis is on the formulation of robust fault detection and
isolation (FDI) problems using frequency-domain performance criteria. Limitations of using an
optimization method for robust fault diagnosis are due to the assumption that no modeling errors
are present or that the modeling errors can be viewed as disturbances17,18.
A major advantage of observer-based fault diagnosis techniques as compared to data-
driven techniques is that the residual’s sensitivity to faults of a specific frequency range can be
tailored. Uncertainties in the model can then be taken into account by considering them to be
slowly time-varying faults. However, this involves the risk that fault signals of low frequency
may not be detected because the enhancement of robustness is associated with an accompanying
decrease of the ability to detect slowly time-varying fault signals19,20. To overcome this difficulty,
it was proposed to use adaptive observers21,22 where certain effects of nonlinearities and model
uncertainties may be handled as unknown parameters that can be decoupled from the residuals. The
formulation of the above problem is based on the assumption that the slowly time-varying
unknown parameters appear as an affine unknown input to the system. However, most chemical
processes are nonlinear and exhibit an exponential dependence on unknown parameters in the
process model, e.g., the activation energy. Moreover, since industrial processes operate in
closed-loop with appropriate output feedback to attain certain performance objectives, it is
important to not just detect and isolate instrument faults but to reconstruct them at the same time
in order to implement fault tolerant control23.
3 Preliminaries
In section 3.1, observer-based fault diagnosis for LTI systems is reviewed. Required
background information about the concept of stability of an interval family of polynomials is
presented in section 3.2. This serves as a foundation for the presentation of the new technique in
section 4.
3.1 Fault diagnosis for LTI systems
Consider a linear time-invariant system with no input
\dot{x} = A x
y = C x + f_s     (1)
where x ∈ R^n is a vector of state variables and y ∈ R^m is a vector of output variables, n is the
number of states, and m refers to the number of output variables. A and C are matrices of
appropriate dimensions and f_s is the sensor fault of unknown nature with the same dimensions as
the output. Assuming the above system is observable, a Luenberger observer for the system can
be designed:
\dot{\hat{x}} = A\hat{x} + L(y − \hat{y})
\hat{y} = C\hat{x}     (2)
where L is chosen to make the closed-loop observer stable and to achieve desired observer
dynamics. Further, define a residual3
r(t) = \int_0^t Q(t−τ)\,(y(τ) − \hat{y}(τ))\, dτ     (3)
which represents the difference between the actual output and the observer output, passed through
a filter Q(t). Taking a Laplace transform of equations (1)-(3) results in
r(s) = Q(s)\,[I − C(sI − (A − LC))^{-1}L]\, f_s(s)     (4)
where Q(t) is chosen such that Q(s) is an RH∞-matrix24. It can be shown that
1) r(t) = 0 if f_s(t) = 0,
2) r(t) ≠ 0 if f_s(t) ≠ 0,
indicating that the value of r(t) predicts the existence of a fault in the system24.
Figure 1. Schematic of Dedicated Observer Scheme (DOS) for a system with 2 measurements
In addition, if one uses the dedicated observer scheme as shown for a system with two
outputs in Figure 1, then the fault detection system can also determine the location of the fault:
3) r_i(t) = 0 if f_{s,i}(t) = 0, i = 1, 2, 3, ..., m,
4) r_i(t) ≠ 0 if f_{s,i}(t) ≠ 0, i = 1, 2, 3, ..., m,
where i represents the i-th measurement. A fault detection system that satisfies all of the above
conditions is called a fault detection and isolation filter (FDIF). A fault detection and isolation
filter becomes a fault identification filter (FIDF) if additionally the following condition is
satisfied25:
5) \lim_{t→∞} (r_i(t) − f_{s,i}(t)) = 0, i = 1, 2, 3, ..., m.
In order to meet the above conditions, the following restrictions on the choice of Q(s) are
imposed:
(a) Q(s) ≠ 0 for all s ∈ C,
(b) Q(s) = [I − C(sI − (A − LC))^{-1}L]^{-1} = C(sI − A)^{-1}L + I.
Linear, observer-based fault detection, isolation, and identification schemes work well if
an accurate model exists for the process over the whole operating region and if appropriate
choices are made for L and Q.
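As a simple numerical illustration of this linear scheme, the following Python sketch simulates equations (1)-(3) for a made-up two-state, single-output system; the system matrices, observer gain, and fault size are arbitrary illustrative choices and are not taken from the case study in section 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-state, 1-output system; matrices and gain are example values only.
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0],
              [1.0]])                          # chosen so that A - L C is Hurwitz

f_s = lambda t: 1.5 if t >= 5.0 else 0.0       # step sensor fault of size 1.5

def rhs(t, w):
    x, xh, xi = w[0:2], w[2:4], w[4:6]
    y  = (C @ x).item() + f_s(t)               # faulty measurement, eq. (1)
    yh = (C @ xh).item()
    dx  = A @ x                                # plant
    dxh = A @ xh + (L * (y - yh)).ravel()      # Luenberger observer, eq. (2)
    dxi = A @ xi + (L * (y - yh)).ravel()      # realization of Q(s) = C(sI-A)^{-1}L + I (A is stable here)
    return np.concatenate([dx, dxh, dxi])

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, -1.0, 0.0, 0.0, 0.0, 0.0], max_step=0.01)
x, xh, xi = sol.y[0:2, -1], sol.y[2:4, -1], sol.y[4:6, -1]
y_minus_yh = (C @ x).item() + f_s(20.0) - (C @ xh).item()
r = (C @ xi).item() + y_minus_yh               # residual r = C xi + (y - yhat)
print("residual at t = 20 (should approach the fault size 1.5):", r)
```

Because Q(s) = C(sI − A)^{-1}L + I, the residual converges to the injected fault once the transients have decayed, which is the identification property 5) above.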
3.2 Robust stability of an interval polynomial family
Consider a set δ(s) of real polynomials of degree n of the form
δ(s) = δ_0 + δ_1 s + δ_2 s^2 + δ_3 s^3 + δ_4 s^4 + ⋯ + δ_n s^n
where the coefficients lie within given ranges:
δ_0 ∈ [δ_0^-, δ_0^+],  δ_1 ∈ [δ_1^-, δ_1^+],  ...,  δ_n ∈ [δ_n^-, δ_n^+].
Denote
δ := [δ_0, δ_1, ..., δ_n]
and define a polynomial δ(s) by its coefficient vector δ. Furthermore, define the hyperrectangle
of coefficients
Ω := {δ ∈ R^{n+1} : δ_i^- ≤ δ_i ≤ δ_i^+, i = 0, 1, 2, ..., n}.
It is assumed that the degree remains invariant over the family, so that 0 ∉ [δ_n^-, δ_n^+]. A set of
polynomials with the above properties is called an interval polynomial family. Kharitonov's theorem
provides a necessary and sufficient condition for the Hurwitz stability of all members contained
in this family.
Theorem 1 (Kharitonov’s Theorem)
Every polynomial in the family δ(s) is Hurwitz if and only if the following four extreme
polynomials are Hurwitz6:
K_1(s) = δ_0^- + δ_1^- s + δ_2^+ s^2 + δ_3^+ s^3 + δ_4^- s^4 + δ_5^- s^5 + δ_6^+ s^6 + ⋯
K_2(s) = δ_0^+ + δ_1^+ s + δ_2^- s^2 + δ_3^- s^3 + δ_4^+ s^4 + δ_5^+ s^5 + δ_6^- s^6 + ⋯
K_3(s) = δ_0^+ + δ_1^- s + δ_2^- s^2 + δ_3^+ s^3 + δ_4^+ s^4 + δ_5^- s^5 + δ_6^- s^6 + ⋯
K_4(s) = δ_0^- + δ_1^+ s + δ_2^+ s^2 + δ_3^- s^3 + δ_4^- s^4 + δ_5^+ s^5 + δ_6^+ s^6 + ⋯     (5)
While this theorem has been used extensively in parametric approaches to robust control,
it will be utilized for developing observers that handle parametric uncertainties and nonlinearities
in the model.
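Theorem 1 reduces the stability check of the infinite family to four extreme polynomials, which is straightforward to automate. The following sketch applies this test to the coefficient intervals obtained later in equation (30) of the case study; the helper functions themselves are illustrative and not part of the original formulation.

```python
import numpy as np

def kharitonov_polynomials(lo, hi):
    """Four extreme polynomials of delta(s) = delta_0 + delta_1 s + ... + delta_n s^n,
    with delta_i in [lo[i], hi[i]]; coefficients are given in ascending powers of s."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    patterns = [(0, 0, 1, 1),   # K1: -, -, +, +, repeating
                (1, 1, 0, 0),   # K2: +, +, -, -
                (1, 0, 0, 1),   # K3: +, -, -, +
                (0, 1, 1, 0)]   # K4: -, +, +, -
    return [np.array([hi[i] if p[i % 4] else lo[i] for i in range(lo.size)])
            for p in patterns]

def is_hurwitz(coeffs_ascending):
    """True if all roots lie strictly in the open left half-plane."""
    roots = np.roots(coeffs_ascending[::-1])      # np.roots expects descending powers
    return bool(np.all(roots.real < 0.0))

# Coefficient intervals of the third-order family in equation (30), section 5.1
# (the monic leading coefficient of s^3 is fixed at 1):
lo = [2143.0, 1648.0, 79.0, 1.0]
hi = [11840.0, 9090.0, 289.0, 1.0]

print("entire interval family Hurwitz:",
      all(is_hurwitz(k) for k in kharitonov_polynomials(lo, hi)))
```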
4 Robust fault detection, isolation, and identification
4.1 Problem formulation
Consider a nonlinear system with possibly multiple outputs of the following form
\dot{x} = f(x, θ)
y = h(x, θ) + f_s     (6)
where x ∈ R^n is a vector of state variables and y ∈ R^m is a vector of output variables. It is
assumed that f(x, θ) is a smooth analytic vector field in R^n and h(x, θ) is a smooth analytic
vector field in R^m. Let θ ∈ R^k be a parameter vector assumed to be constant with time but a
priori uncertain, and f_s is a sensor fault of unknown nature with the same dimensions as the
output. The goal of this paper is to estimate the state vector with limited information about the
parameters describing the process model and under the influence of output disturbances
such that \lim_{t→∞}(x − \hat{x}) = 0, where \hat{x} is the estimate of the state vector x, and to design a set of filters
Q(t) so that the residuals, given by the expression
r(t) = \int_0^t Q(t−τ)\,(y(τ) − \hat{y}(τ))\, dτ,
have all the five properties discussed in section 3.1.
One of the main challenges in this research is that both faults and plant-model mismatch
will have an effect on the fault identification. In order to perform accurate state and parameter
estimation, it is desired to have reliable measurements, while at the same time an accurate model
of the process is required to identify the fault. This will be taken into account by performing the
parameter estimation and the fault detection at different time scales. Each time the parameters
are estimated, it is assumed that the fault is not changing at that instance, while the values of the
parameters are not adjusted during each individual fault detection. A variety of different
techniques exist for designing nonlinear closed-loop observers26-30. However, since the class of
problems under investigation includes parametric uncertainty it would be natural to address these
issues through a parametric approach instead of the often used extended Kalman filter or
extended Luenberger observer. The procedure for designing nonlinear observers under the
influence of parametric uncertainty is outlined in the next subsections, which is followed by a
description of the fault detection, isolation, and identification algorithm.
4.2 Estimator design – a parametric approach
A nonlinear system of the form given by equation (6) can be rewritten by viewing the parameters
as augmented states of the system
\dot{x} = f(x, θ)
\dot{θ} = 0
y = h(x, θ) + f_s     (7)
and with the change of notation
\bar{x} = [x; θ],   \bar{f}(\bar{x}) = [f(x, θ); 0]     (8)
this results in the following system
\dot{\bar{x}} = \bar{f}(\bar{x})
y = h(\bar{x}) + f_s     (9)
For the state and parameter estimation step it is assumed that the sensor faults are known, since
they are identified at certain “sampling times” and the assumption is made that they remain
constant over the time interval between two “sampling points”. Furthermore, assume that each
component θi of the parameter vector
θ := [θ_0, θ_1, ..., θ_{k−1}]     (10)
can vary independently of the other components and that each θ_i lies within an interval whose
upper and lower bounds are known:
Π := {θ : θ_i^- ≤ θ_i ≤ θ_i^+, i = 0, 1, 2, ..., k−1}     (11)
Also, let θ = θ_ss ∈ Π be a vector of constant, a priori unknown parameters and (x_ss, θ_ss) be an
equilibrium point of equation (7). The augmented system needs to be observable in order to
design an observer which can also estimate the values of the parameters. A sufficient condition
for local observability of a nonlinear system is that the observability matrix
W_o(\bar{x}) = [ ∂h(\bar{x})/∂\bar{x} ;  ∂(L_{\bar{f}} h(\bar{x}))/∂\bar{x} ;  ⋯ ;  ∂(L_{\bar{f}}^{n+k−1} h(\bar{x}))/∂\bar{x} ]     (12)
has rank n+k at (x, θ) = (x_ss, θ_ss)31. Since the equilibrium points of the system depend upon the
values of the parameters, which are not known a priori, the rank of W_o(\bar{x}) has to be
checked for all θ_ss ∈ Π and the resulting equilibrium points (x_ss, θ_ss).
In order to proceed it is assumed that the augmented system is observable over the entire
hyperrectangle-like set Π and at the equilibrium points corresponding to these parameter values. It
is then possible to design an observer for the augmented system:
[\dot{\tilde{x}}; \dot{\tilde{θ}}] = [f(\tilde{x}, \tilde{θ}); 0] + \bar{L}(\tilde{x}, \tilde{θ})(y − \hat{y})
\hat{y} = h(\tilde{x}, \tilde{θ}) + f_s     (13)
where \tilde{x} is the estimate of x, \tilde{θ} is the estimate of θ, and \bar{L}(\tilde{x}, \tilde{θ}) is a suitably chosen
nonlinear observer gain. Also, note that the observer makes use of the assumption that the
measurement fault is known from an earlier identification of the fault. When the observer is
computed for the first time, it has no knowledge about possible sensor faults and assumes that
no sensor fault was initially present.
4.2.1 Determining the family of polynomials for observer design
In this section, the result about Hurwitz stability of an interval family of polynomials from
section 3.2 is utilized to determine a methodology for computing the gain \bar{L}(\tilde{x}, \tilde{θ}) of the
nonlinear observer given by equation (13). Consider the linearized model of the augmented
process model around an equilibrium point (x_ss, θ_ss):
\dot{\bar{x}} = \bar{A}(x_ss, θ_ss)\,\bar{x}
y = \bar{C}(x_ss, θ_ss)\,\bar{x} + f_s     (14)
where \bar{A}(x_ss, θ_ss) is the Jacobian of \bar{f}(\bar{x}) at the point (x_ss, θ_ss) and
\bar{C}(x_ss, θ_ss) = [ ∂h(x, θ)/∂x |_{(x_ss, θ_ss)}   ∂h(x, θ)/∂θ |_{(x_ss, θ_ss)} ].
The characteristic polynomial of the system, which determines its stability, is given by
δ(s) = det(sI − \bar{A}(x_ss, θ_ss)) = δ_0(x_ss, θ_ss) + δ_1(x_ss, θ_ss)s + δ_2(x_ss, θ_ss)s^2 + ⋯ + s^n.     (15)
It can be seen that the coefficients of the characteristic polynomial are nonlinear functions of the
parameter vector θ_ss and of x_ss. Assuming that f(x_ss, θ_ss) satisfies the conditions of the implicit
function theorem, i.e., ∂f(x, θ)/∂x |_{(x_ss, θ_ss)} ≠ 0, then x_ss can be solved for a given θ_ss, i.e.,
x_ss = φ(θ_ss), φ: R^k → R^n. The characteristic polynomial in s given by equation (15) can then be
rewritten as
δ(s) = δ_0(φ(θ_ss), θ_ss) + δ_1(φ(θ_ss), θ_ss)s + δ_2(φ(θ_ss), θ_ss)s^2 + ⋯ + s^n
     = δ_0(θ_ss) + δ_1(θ_ss)s + δ_2(θ_ss)s^2 + ⋯ + s^n     (16)
where δ_i(φ(θ_ss), θ_ss) = δ_i(θ_ss), i = 0, 1, 2, 3, ..., n−1. While it is generally not possible to derive
an analytic expression for the coefficients δ_0, δ_1, ..., δ_{n−1} as functions of θ_ss, x_ss can be
evaluated by numerically solving the equation f(x_ss, θ_ss) = 0 for θ_ss ∈ Π. Since f(x, θ) is
assumed to be a smooth vector function, the coefficients of the characteristic polynomial are
continuous functions of (x_ss, θ_ss). Therefore, by discretizing the set Π and evaluating the
maximum and minimum values of each coefficient δ_i(θ_ss) over all the points in the set Π, the
hyperrectangle of coefficients Ω as described in section 3.2 can be obtained. In the case of a multi-
dimensional parameter θ, discretizing the set Π can be computationally expensive; however,
advanced NLP algorithms exist that facilitate the calculation of the required bounds on the
coefficients. For the case where θ is a scalar, the range within which the coefficients vary can be
determined by plotting δ_i(θ_ss) against θ_ss for all i = 0, 1, 2, 3, ..., n−1. Figure 2 shows a typical plot
of a coefficient versus the one-dimensional parameter θ.
Figure 2: Sample plot of a coefficient δ_i(θ_ss) as a function of a one-dimensional real parameter; the bounds δ_i^min(θ_ss) and δ_i^max(θ_ss) are attained over the interval [θ_ss^min, θ_ss^max]
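The discretization-based bounding of the coefficients described above can be sketched as follows; the two-state model, the parameter interval, and the grid resolution are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import fsolve

# Made-up 2-state model with one uncertain parameter theta (not the CSTR of section 5).
def f(x, theta):
    x1, x2 = x
    return np.array([-2.0 * x1 + theta * x2,
                     -theta * x1 - 3.0 * x2 + 1.0])

def jacobian(x, theta, eps=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = eps
        J[:, j] = (f(x + dx, theta) - f(x - dx, theta)) / (2 * eps)
    return J

theta_grid = np.linspace(0.5, 1.5, 101)          # discretized parameter set Pi
coeff_min = np.full(2, np.inf)                   # bounds for delta_0, delta_1 of
coeff_max = np.full(2, -np.inf)                  # det(sI - A) = s^2 + delta_1 s + delta_0
for theta in theta_grid:
    x_ss = fsolve(lambda x: f(x, theta), x0=np.zeros(2))    # equilibrium for this theta
    A = jacobian(x_ss, theta)
    delta_1 = -np.trace(A)
    delta_0 = np.linalg.det(A)
    coeff_min = np.minimum(coeff_min, [delta_0, delta_1])
    coeff_max = np.maximum(coeff_max, [delta_0, delta_1])

print("delta_0 in", (coeff_min[0], coeff_max[0]))
print("delta_1 in", (coeff_min[1], coeff_max[1]))
```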
To enforce that the estimation error decays asymptotically for the linearized system, the observer
gains are chosen to satisfy the following condition:
λ(\bar{A}(x_ss, θ_ss) − \bar{L}(x_ss, θ_ss)\bar{C}(x_ss, θ_ss)) ∈ C^-, ∀ θ_ss ∈ Π     (17)
where λ(·) refers to the eigenvalues of the matrix. The following section focuses on computing
appropriate gains \bar{L} by making use of Kharitonov's theorem.
4.2.2 Observer gain computation
Since it is assumed that the augmented system given by equation (7) is observable over the entire
hyperrectangle-like set Π and at the equilibrium points corresponding to these parameter values, it
is possible to find an invertible transformation \bar{T}(x_ss, θ_ss) for all θ_ss ∈ Π such that the LTI system given by
equation (14) can be transformed into an observer canonical form (Appendix A), considering one
output at a time, as
\dot{z} = \bar{\bar{A}}(x_ss, θ_ss) z
y = \bar{\bar{C}} z     (18)
where z = \bar{T}(x_ss, θ_ss)\bar{x},
\bar{\bar{A}}(x_ss, θ_ss) = \bar{T}(x_ss, θ_ss)\bar{A}(x_ss, θ_ss)\bar{T}^{-1}(x_ss, θ_ss),
\bar{\bar{C}} = \bar{C}(x_ss, θ_ss)\bar{T}^{-1}(x_ss, θ_ss), and
\bar{\bar{A}}(x_ss, θ_ss) =
[ −δ_{n−1}(x_ss, θ_ss)   1   0   ⋯   0   0
  −δ_{n−2}(x_ss, θ_ss)   0   1   ⋯   0   0
        ⋮                             ⋮
  −δ_1(x_ss, θ_ss)       0   0   ⋯   0   1
  −δ_0(x_ss, θ_ss)       0   0   ⋯   0   0 ],
\bar{\bar{C}} = [1  0  0  ⋯  0  0]     (19)
The characteristic polynomial of \bar{\bar{A}}(x_ss, θ_ss) takes the following form:
δ(s) = δ_0(x_ss, θ_ss) + δ_1(x_ss, θ_ss)s + δ_2(x_ss, θ_ss)s^2 + ⋯ + δ_{n−1}(x_ss, θ_ss)s^{n−1} + s^n.
Since the coefficients of the above characteristic polynomial in s are continuous and nonlinear
functions of (x_ss, θ_ss), the hyperrectangle within which these coefficients can vary independently
from one another can be evaluated with the method discussed in section 4.2.1. The following
analysis provides a method to compute a constant gain vector l for the case of a single-output
system such that
λ(\bar{\bar{A}}(x_ss, θ_ss) − l\,\bar{\bar{C}}) ∈ C^-, ∀ θ_ss ∈ Π.
Consider the set of n nominal parameters {δ_0^0, δ_1^0, δ_2^0, ..., δ_{n−1}^0} together with a set of a priori
uncertainty ranges Δδ_0, Δδ_1, ..., Δδ_{n−1}, given by Δδ_i = δ_i^+ − δ_i^-, i = 0, 1, ..., n−1.
Furthermore, consider the family δ(s) of polynomials
δ(s) = δ_0 + δ_1 s + δ_2 s^2 + δ_3 s^3 + ⋯ + δ_{n−1}s^{n−1} + s^n
where the coefficients of the polynomial can vary independently from one another and lie within
the given ranges
δ_i^0 − Δδ_i/2 ≤ δ_i ≤ δ_i^0 + Δδ_i/2,  i = 0, 1, 2, ..., n−1.
Further, let there be n free parameters l = (l_0, l_1, ..., l_{n−1}) to transform the family δ(s) into
the family described by
γ(s) = (δ_0 + l_0) + (δ_1 + l_1)s + (δ_2 + l_2)s^2 + ⋯ + (δ_{n−1} + l_{n−1})s^{n−1} + s^n.
The above problem arises when it is desired to suitably place the closed-loop observer poles for
a single-output system where the system matrices \bar{\bar{A}}, \bar{\bar{C}} are in observable canonical form and the
coefficients of the characteristic polynomial of \bar{\bar{A}} are subject to bounded perturbations. The
following theorem guarantees that there exists a free parameter vector l such that the pair \bar{\bar{A}}, \bar{\bar{C}}
can always be stabilized:
Theorem 2 For any set of nominal parameters {δ_0^0, δ_1^0, δ_2^0, ..., δ_{n−1}^0}, and for any set of positive
numbers Δδ_0, Δδ_1, ..., Δδ_{n−1}, it is possible to find a vector l such that the entire family γ(s) is
stable6.
By Theorem 2 there always exists an l such that λ(\bar{\bar{A}}(x_ss, θ_ss) − l\,\bar{\bar{C}}) ∈ C^-, ∀ θ_ss ∈ Π, which can
be systematically computed (Appendix B). The result is an observer given by equation (13),
which locally estimates the states and parameters of the system given by equation (6), with the
observer gain
\bar{L}(\tilde{x}, \tilde{θ}) = \bar{T}^{-1}(\tilde{x}, \tilde{θ})\, l.
The proposed approach is considerably less computationally
demanding than alternative state and parameter estimation techniques such as extended
Luenberger observers33. Additionally, the presented method yields an analytic expression for the
observer gains irrespective of the dimension of the system, but it guarantees convergence of the
error dynamics only locally around the operating point.
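A sketch of the gain computation for a third-order example is given below. Instead of evaluating the stability hypersphere radius of Appendix B, the sketch simply moves the nominal target polynomial (s + α)^3 further into the left half-plane until the shifted family passes the Kharitonov test of section 3.2; the nominal coefficients and uncertainty widths are illustrative values, not taken from the paper.

```python
import numpy as np

def is_hurwitz(c_asc):
    """True if the polynomial (coefficients in ascending powers of s) is Hurwitz."""
    return bool(np.all(np.roots(c_asc[::-1]).real < 0.0))

def family_hurwitz(lo, hi):
    """Kharitonov test (Theorem 1) for the interval family with coefficient bounds lo, hi."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0)]
    return all(is_hurwitz(np.array([hi[i] if p[i % 4] else lo[i] for i in range(len(lo))]))
               for p in patterns)

# Nominal coefficients delta_i^0 (ascending, monic) and uncertainty widths Delta_i:
delta0 = np.array([5000.0, 4000.0, 150.0, 1.0])
width  = np.array([2000.0, 1500.0,  60.0, 0.0])

# Push the target observer polynomial (s + alpha)^3 further left until the shifted
# family gamma(s) tolerates the given coefficient uncertainty.
for alpha in [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0]:
    target = np.polynomial.polynomial.polyfromroots([-alpha] * 3)   # ascending, monic
    lo = target.copy(); lo[:3] -= width[:3] / 2.0
    hi = target.copy(); hi[:3] += width[:3] / 2.0
    if family_hurwitz(lo, hi):
        l = target[:3] - delta0[:3]        # gamma_i^0 = delta_i^0 + l_i
        print("alpha =", alpha, "  gain vector l =", l)
        break
```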
4.3 Fault detection
The purpose of fault detection is to determine whether a fault has occurred in the system. It can
be seen that \lim_{t→∞}(x − \hat{x}) ≠ 0 in the presence of sensor faults. In order to extract the information
about faults from the system, a residual is defined as
r(t) = \int_0^t Q(t−τ)\,(y(τ) − \hat{y}(τ))\, dτ,
where Q(t) is any stable filter. It can be verified that
• \lim_{t→∞} r(t) = 0 if f_s(t) = 0,
• \lim_{t→∞} r(t) ≠ 0 if f_s(t) ≠ 0.
Additional restrictions on the class of stable filters Q(t) will be imposed in the following
sections in order to satisfy the desired objectives.
4.4 Fault isolation
Fault isolation is synonymous with determining the location of a fault, and its computation
imposes additional restrictions on the choice of the filter Q(t). In order to perform fault isolation,
the augmented system given by equation (7) is assumed to be separately locally observable
through each of the outputs y for all θ_ss ∈ Π. It should be noted that this requirement is mandatory
for the existence of a fault isolation filter25 and hence does not pose a stringent condition for
using the presented approach.
To achieve fault detection as well as isolation, the proposed approach uses a series of
dedicated nonlinear observers as shown in Figure 1. In this method, as many residuals are
generated as there are measurable outputs. It can be verified that
• \lim_{t→∞} r_i(t) = 0 if f_{s,i}(t) = 0,
• \lim_{t→∞} r_i(t) ≠ 0 if f_{s,i}(t) ≠ 0, i = 1, 2, 3, ..., m,
for an appropriately chosen filter Q(t).
4.5 Fault identification
In order to estimate the shape and size of the fault, the residuals have to meet the following
objective:
\lim_{t→∞}(r_i(t) − f_{s,i}(t)) = 0,  i = 1, 2, ..., m.
Since a dedicated nonlinear observer scheme is utilized in the proposed approach, it remains to
choose a suitable stable filter Q(t) to meet all the conditions for fault detection, isolation, and
identification. It was shown in section 3.1 that an appropriate choice of Q(t) for a linear time-
invariant system described by equation (1) is given by
Q(s) = C(sI − A)^{-1}L + I
where Q(s) is the Laplace transform of the filter Q(t). Similarly, for the nonlinear system given
by equation (9) a linear filter
Q(s) = \bar{C}(x_ss, θ_ss)(sI − \bar{A}(x_ss, θ_ss))^{-1}\bar{L}(x_ss, θ_ss) + I
is locally applicable. Since the equilibrium point (x_ss, θ_ss) is a priori unknown, the fault
identification filter is modified:
Q(s) = \bar{C}(x, θ)(sI − \bar{A}(x, θ))^{-1}\bar{L}(x, θ) + I
where Q(s) is the Laplace transform of the filter at any point (x, θ) in the state space. However,
since at least as many eigenvalues of \bar{A}(x, θ) are identically zero as there are parameters of
the original system, the above Q(t) is not stable. To overcome the problem of choosing a stable
filter for fault reconstruction, a lower-dimensional observer, which does not perform the parameter
estimation but only estimates the states, needs to be considered:
\dot{\hat{x}} = f(\hat{x}, \tilde{θ}) + L(\hat{x}, \tilde{θ})(y − \hat{y})
\hat{y} = h(\hat{x}, \tilde{θ}) + f_s     (20)
where \hat{x}(t) is the estimate of x(t) and L(\hat{x}, \tilde{θ}) is chosen such that
λ(A(x_ss, θ_ss) − L(x_ss, θ_ss)C(x_ss, θ_ss)) ∈ C^-, ∀ θ_ss ∈ Π     (21)
where A(x_ss, θ_ss) is the Jacobian of f(x, θ) at the point (x_ss, θ_ss) and
C(x_ss, θ_ss) = ∂h(x, θ)/∂x |_{(x_ss, θ_ss)}.
Lemma 1. The nonlinear system described by equation (20), in conjunction with the observer of
the augmented system (13), is a locally asymptotic observer for the system given by equation (6) if
f_s is known.
Proof: Since \bar{L}(x_ss, θ_ss) is chosen such that the condition in equation (17) is met,
\lim_{t→∞}(θ − \tilde{θ}) = 0. Linearizing the system given by equation (20) around the equilibrium point (x_ss, θ_ss) gives
\dot{\hat{x}} = A(x_ss, θ_ss)\hat{x} + L(x_ss, θ_ss)(y − \hat{y})
\hat{y} = C(x_ss, θ_ss)\hat{x} + f_s     (22)
Similarly, linearizing the system given by equation (6) around the equilibrium point
(x_ss, θ_ss) results in
\dot{x} = A(x_ss, θ_ss)x
y = C(x_ss, θ_ss)x + f_s     (23)
The error of the state estimates, e = x − \hat{x}, is then given by the following equation:
\dot{e} = (A(x_ss, θ_ss) − L(x_ss, θ_ss)C(x_ss, θ_ss))e     (24)
Since L(x_ss, θ_ss) is chosen to satisfy the condition in equation (21), the estimation error in
equation (24) converges asymptotically to zero.
Note that the gains for the observers given by equation (20) can be computed using the
technique presented in section 4.2.2. Similar observability conditions as in section 4.2 can be
derived for the existence of gains that guarantee stability of the closed-loop observers in the
neighborhood of the operating point.
For practical purposes, the original system given by equation (6) in the absence of faults is
considered locally stable around the operating point as the parameters vary in the hyperrectangle
defined by equation (11). In other words, it is assumed that the Jacobian A(x_ss, θ_ss) is Hurwitz
stable for all θ_ss ∈ Π.
Using the above assumption, a stable linear fault identification filter Q(t) such that the residual
r(t) = \int_0^t Q(t−τ)\,(y(τ) − \hat{y}(τ))\, dτ
has the property \lim_{t→∞} r(t) = f_s has the following state-space representation:
\dot{ξ} = A(\hat{x}, \tilde{θ})ξ + L(\hat{x}, \tilde{θ})(y − \hat{y})
r = C(\hat{x}, \tilde{θ})ξ + (y − \hat{y})     (25)
where \hat{y} and \hat{x} are the output and state estimates obtained via the observer given by equation (20)
and ξ ∈ R^n is a state with initial condition ξ(0) = 0.
Putting all these pieces together, the fault detection, isolation, and identification filter
consists of the observers given by equations (13) and (20) and is computed in parallel with
equation (25) in order to generate residuals. The filter is recomputed at each time step by
linearizing the model at the current estimate of the location of the augmented system in state space.
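The following sketch illustrates this re-linearization for a scalar toy model (ẋ = −θx^3 + 1, y = x + f_s), with the Jacobian recomputed at the current estimate in every integration step; the model, the fixed parameter estimate, and the simple scalar pole-placement gain used here are assumptions for illustration and do not correspond to the Kharitonov-based design of section 4.2.2.

```python
import numpy as np

# Residual generator (25) with the linearization recomputed at every time step.
theta = 2.0                                   # parameter estimate, held fixed here
f_true = lambda x: -theta * x**3 + 1.0
h = lambda x: x
f_s = lambda t: 0.5 if t >= 5.0 else 0.0      # sensor fault to be reconstructed

dt, T_end = 1e-3, 20.0
x, xhat, xi = 0.9, 0.5, 0.0                   # plant state, observer state, filter state
l_pole = 5.0                                  # desired observer decay rate (assumption)
for k in range(int(T_end / dt)):
    t = k * dt
    y, yhat = h(x) + f_s(t), h(xhat)
    A = -3.0 * theta * xhat**2                # Jacobian of f at the current estimate
    C = 1.0
    L = A + l_pole                            # places A - L*C at -l_pole (scalar case)
    # plant, observer (20), and filter (25), integrated with explicit Euler
    x    += dt * f_true(x)
    xhat += dt * (f_true(xhat) + L * (y - yhat))
    xi   += dt * (A * xi + L * (y - yhat))
    r = C * xi + (y - yhat)
print("residual at t = 20 (fault was 0.5):", r)
```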
In the presence of unknown sensor faults, the estimate \tilde{θ} for some θ_ss ∈ Π may diverge
from the actual value, and therefore the stability of the overall fault diagnosis system cannot be
guaranteed. To overcome this problem, parameter estimation and fault reconstruction are
performed at different time scales, and it is assumed that the algorithm is initialized when no
sensor fault occurs, up to a time t_o such that, for some ε > 0, ‖y − \hat{y}‖_2 ≤ ε for all t ≥ t_o. The
sensor fault is of the following form:
f_s(t) = f(t)\,S(t − t_o),   S(t − t_o) = 1 for t ≥ t_o and 0 for t < t_o.
The above assumption ensures that the parameter estimate from equation (17) converges to its
actual value with a desired accuracy
‖θ_ss − \tilde{θ}‖_2 ≤ η,  η(ε) > 0     (26)
before the onset of faults in the original process. Additionally, the parameters are adapted
periodically by the augmented observer (13) in order to take process drifts into account.
In summary, the presented fault diagnosis system performs parameter estimation and
fault reconstruction at different time scales, where the fault identification takes place at a higher
frequency than the parameter estimation. The values of the parameters are assumed to stay constant
during the fault identification, while the faults are assumed constant during parameter estimation.
Figure 3 illustrates this two-time-scale behavior, where stages 2 and 3 are repeated alternately
throughout the operation and the time between the start of each stage is decided by the nature of
the process. In general, however, the parameter estimation is only performed sporadically and
requires only short periods of time, so that the fault can be identified for the vast majority of the
time.
Figure 3: Schematic of fault identification for systems with time-varying parameters. Stage 1 (short time period): no fault present, parameter estimation. Stage 2 (long time period): no parameter updating, fault identification assuming knowledge of the parameters. Stage 3 (short time period): fault assumed constant at the value from the previous identification, parameter estimation. Stages 2 and 3 alternate from the start of operation onward.
5 Case study
5.1 Fault diagnosis of a reactor with uncertain parameter
To illustrate the main aspects of the investigated observer-based fault diagnosis scheme, a non-
isothermal CSTR is considered with coolant jacket dynamics, where the following exothermic
irreversible reaction between sodium thiosulfate and hydrogen peroxide is taking place34.
2 Na2S2O3 + 4 H2O2 → Na2S3O6 + Na2SO4 + 4 H2O     (27)
F = 120 L/min                     c_p = 4.2 J/(g K)
C_Ain = 1 mol/L                   F_w = 30 L/min
V = 100 L                         UA = 20000 J/(s K)
k_o = 4.11E+13 L/(mol min)        V_w = 10 L
E = 76534.704 J/mol               ρ_w = 1000 g/L
T_in = 275 K                      c_pw = 4200 J/(kg K)
(−ΔH_R) = 596619 J/mol            T_jin = 250 K
ρ = 1000 g/L
Table 1: Values of process parameters
The capital letters A, B, C, D, and E are used to denote the chemical compounds Na2S2O3, H2O2,
Na2S3O6, Na2SO4, and H2O, respectively. The reaction kinetic law is reported in the literature to be34
−r_A = k(T) C_A C_B = (k_o + Δk_o) exp( −(E + ΔE)/(RT) ) C_A C_B
where Δk_o and ΔE represent parametric uncertainties in the model. A mole balance for species
A and energy balances for the reactor and the cooling jacket result in the following nonlinear
process model:
dC_A/dt = (F/V)(C_Ain − C_A) − 2 k(T) C_A^2
dT/dt = (F/V)(T_in − T) + 2 ((−ΔH_R) + Δ(ΔH))/(ρ c_p) k(T) C_A^2 − (UA + ΔUA)/(V ρ c_p) (T − T_j)
dT_j/dt = (F_w/V_w)(T_jin − T_j) + (UA + ΔUA)/(V_w ρ_w c_pw) (T − T_j)     (28)
where F is the feed flow rate, V is the volume of the reactor, C_Ain is the inlet feed concentration,
T_in the inlet feed temperature, V_w is the volume of the cooling jacket, T_jin is the inlet coolant
temperature, F_w is the inlet coolant flow rate, c_p is the heat capacity of the reacting mixture, c_pw
is the heat capacity of the coolant, ρ is the density of the reacting mixture, U is the overall heat
transfer coefficient, and A is the area over which heat is transferred. The process parameter
values are listed in Table 1.
Here, Δk_o, ΔE, Δ(ΔH), and ΔUA represent uncertainty in the pre-exponential factor, the
activation energy, the heat of reaction, and the overall heat transfer rate, respectively. When Δk_o,
ΔE, Δ(ΔH), and ΔUA are all chosen equal to zero, the nominal nonlinear model exhibits
multiple steady states, of which the upper steady state, i.e.,
(C_Ass = 0.0192076 mol/L;  T_ss = 384.005 K;  T_jss = 371.272 K),
is stable and chosen as the point of operation. Since the activation energy appears exponentially
in the state space description of the process, the effect of its uncertainty on the behavior of the
system is significantly higher than for the other parameters listed above. This observation has also
been confirmed in simulations.
In order to validate the performance of the presented approach, it is in a first step
compared to the results obtained from a fault detection scheme based upon a Luenberger observer
for the process under consideration. For now, the process parameters are assumed to be known
and given by the values in Table 1. The system matrices obtained by linearizing the process model
(28) around the chosen steady state are
A = [ −123.7499724   −0.07347363019    0
       17408.48619     6.379943743     2.857142857
       0              28.57142857    −31.57142857 ],
C_1 = [0  1  0],  C_2 = [0  0  1]     (29)
with λ(A) = {−112.94, −1.37, −34.63}. For performing fault isolation and identification it is
required to design observers for each of the two measurements, as shown in Figure 1, and the
eigenvalues of the closed-loop observers are placed at {−6.85, −6.86, −6.87}. The observer gain
calculated for a measurement of the reactor temperature is
L_1 = [−53.912;  1.55×10^3;  5.79×10^5]
and the gain corresponding to the coolant temperature is found to be
L_2 = [1.7×10^2;  2.7×10^4;  1.55×10^3]
Both the reactor temperature and the coolant temperature sensor are subject to an additive fault
signal and normally distributed random noise; the shape and size of the faults are shown in Figure 4.
Residuals generated by the technique based upon a Luenberger observer with uncertainty
in the initial conditions are shown in Figure 5. Comparing Figure 4 and Figure 5, it is concluded
that the Luenberger observer-based fault diagnosis scheme is able to isolate and identify the
approximate nature of the fault in each sensor. Similar simulations have been carried out where
the process model includes uncertainties (Δk_o = 5% k_o, ΔE = 6% E, Δ(ΔH) = 5% ΔH, and
ΔUA = 5% UA). Figure 6 shows the residual generated for the fault signal shown in Figure 4 for
one specific case of parametric
residual generated for the fault signal shown in Figure 4 for one specific case of parametric
uncertainty. From Figure 6 it is evident that while the shape of the fault is reproduced almost
perfectly, the bias in the residuals results from modeling uncertainties and can be misinterpreted
as a response to a step fault in the sensor. To illustrate this point, simulations of the fault
diagnosis scheme based upon the Luenberger observer are performed for a sufficiently large
number of scenarios (10,000) which include a random occurrence of faults in either or both the
sensors as well as randomly chosen parametric uncertainty within the given intervals in order to
determine the overall percentage of successfully identifying one or all the scenarios. The
scenarios denoted by "00 ", "01", "10", and "11" in the Tables 2-3 stand for no faults in both
sensors, no fault in reaction temperature sensor and fault in coolant temperature sensor, fault in
reaction temperature sensor and no fault in coolant temperature sensor, and faults in both sensors,
respectively. Step faults starting at time t = 0 and of magnitude 5 K were added to the sensors.
Various thresholds are selected to determine whether or not a fault occurred in the sensors and
the fault isolation scheme (based upon a Luenberger observer) is tested in Monte Carlo
simulations where the parametric uncertainty is chosen at random within the given intervals. As
an example of this scheme, the scenario identifies the condition where no faults occur in both
sensors for a chosen threshold
α
if the following condition is satisfied:
a) If time average of ()
c
rt
α
<, where ( )
c
rtdenotes the coolant temperature residual, and
b) If time average of ()
T
rt
α
<, where ( )
T
rtdenotes the reactor temperature residual.
Figure 4: Reactor and coolant temperature fault signals (temperature in K versus time in min)
Figure 5: Reactor and coolant temperature residuals through Luenberger observer scheme (no model uncertainty)
Figure 6: Reactor and coolant temperature residuals through Luenberger observer scheme (with model uncertainty)
Scenario   Threshold 1   Threshold 2   Threshold 3   Threshold 4
00             3.92          9.51         35.80         67.52
01            24.01          8.82         44.98         16.80
10            55.38         55.81         42.20         39.04
11            76.25         58.18         44.36         16.32
Table 2: Monte Carlo simulation (Luenberger observer with model uncertainty)
The criteria used for the other (“01”, “10”, “11”) scenarios are chosen accordingly. Table
2 summarizes the results of how efficiently the fault isolation scheme was able to predict the
correct fault locations for random uncertainties in all the parameters within the range described
above. These results show that the parametric uncertainty can have a strong effect on robustness
properties of a fault diagnosis scheme and hence requires techniques that can cope with model
uncertainty. Because of these limitations, the nonlinear fault detection scheme presented in this
work is applied to the same scenario. Since the effect of uncertainty in the process parameters
other than the activation energy has been determined to be of lesser importance for fault isolation,
only uncertainty in the activation energy, Π := {E : 0.94 E_ss ≤ E ≤ 1.06 E_ss}, with
E_ss = 76534.704 J/mol, is considered. However, while the design is solely performed based upon
uncertainty in this one parameter, the evaluation of the fault diagnosis scheme will consider
uncertainty in all of the parameters to compare it to the Luenberger observer scheme. The
interval polynomial computed by a step-by-step procedure as discussed in section 4.2.2 is as
follows:
a) The Jacobian of the nonlinear dynamic model is symbolically evaluated around an
equilibrium point (x_ss, θ_ss) as a function of (x_ss, θ_ss):
A(x_ss, θ_ss) = [ a_11  a_12  a_13
                  a_21  a_22  a_23
                  a_31  a_32  a_33 ]
where the entries of the matrix A(x_ss, θ_ss) are nonlinear functions of (x_ss, θ_ss).
b) The characteristic polynomial of the system is computed as shown in equation (15):
δ(s) = s^3 − (a_11 + a_22 + a_33)s^2 + (a_11 a_22 + a_11 a_33 + a_22 a_33 − a_12 a_21 − a_13 a_31 − a_23 a_32)s
       − (a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_11 a_23 a_32 − a_12 a_21 a_33 − a_13 a_22 a_31)
The coefficients of the characteristic polynomial are continuous nonlinear functions of (x_ss, θ_ss).
c) x_ss is eliminated from the coefficients by using the equation f(x_ss, θ_ss) = 0 for computation
of the upper and lower bounds of the coefficients of the above characteristic polynomial.
However, since an analytic expression of the coefficients as a nonlinear function of only θ_ss can
usually not be derived, x_ss is evaluated numerically by solving the equation f(x_ss, θ_ss) = 0
for θ_ss. By varying the uncertain parameter vector in the set Π, the maximum and minimum
values of the coefficients of the characteristic polynomial are computed over the set Π.
This procedure is used to evaluate the interval family of polynomials given by equation
(30) for the nonlinear system described by equation (6). Figure 7 shows the plots of the
coefficients of the characteristic polynomial as the activation energy E varies in the
set Π := {E : 0.94 E_ss ≤ E ≤ 1.06 E_ss}, E_ss = 76534.704 J/mol. The interval polynomial family
thus computed takes the following form:
δ(s) = δ_0 + δ_1 s + δ_2 s^2 + s^3     (30)
where δ_0 ∈ [2143, 11840], δ_1 ∈ [1648, 9090], δ_2 ∈ [79, 289].
Figure 7: Plots of the coefficients δ_0, δ_1, and δ_2 of the characteristic polynomial as a function of the activation energy (J/mol)
It can be verified by Theorem 1 that the interval polynomial family given by (30) is
Hurwitz stable, thereby verifying that the nonlinear system given by equation (28) is locally
stable around the operating points as E varies in the set Π = {E : 0.94 E_ss ≤ E ≤ 1.06 E_ss},
E_ss = 76534.704 J/mol.
The detailed derivation of the observer gain computation is not presented here due to
space constraints, but the procedure has been provided in section 4.2.2. The observer gain
computed for the simultaneous state and parameter estimator from the reactor temperature is
\bar{L}(\tilde{x}, \tilde{θ}) = \bar{T}^{-1}(\tilde{x}, \tilde{θ}) [−5929;  −12970;  −11347.5;  −6113]
where \bar{T}^{-1}(\tilde{x}, \tilde{θ}) is the locally invertible transformation as described in section 4.2.2. Similarly, the
observer gains for the state estimators of the form of equation (20), to be used for fault isolation, are
computed to be
L_1(\hat{x}, \tilde{θ}) = T_1^{-1}(\hat{x}, \tilde{θ}) [−5929;  4143;  −878.5]
L_2(\hat{x}, \tilde{θ}) = T_2^{-1}(\hat{x}, \tilde{θ}) [−5929;  4143;  −878.5]
Using the presented technique and applying it to a system with uncertainty in all of the
model parameters, it is found that the estimate of the activation energy converges to its true value
after 7 min in the absence of sensor faults. The condition that there is no initial sensor fault is a
reasonable assumption since one would like to have a certain level of confidence in the
measurements before a fault diagnosis procedure is invoked. Figure 8 shows the fault signal fs(t)
that is affecting the sensors. The corresponding coolant and reactor temperature residuals
generated by the Kharitonov theorem-based fault identification techniques are presented in
Figure 9. It is apparent that the residuals converge to the values of the faults even when
uncertainty exists in the model parameters. Additionally, the location, shape, and magnitude of
the faults are correctly reconstructed and sensor noise is filtered.
Since the performed simulation has only used uncertainty in the activation energy, Monte
Carlo simulations have a 100% success rate for the scenarios considered in Table 2. However,
since this is not a very realistic assumption and in order to compare the presented fault detection
scheme to the Luenberger observer-based one, Monte Carlo simulations are performed taking
uncertainty in all the parameters
(5%, 6%,
oo
kkEE∆= ∆= ( ) 5% , and 5% )HHUAUA∆∆ = ∆ ∆ = into account. The results are
summarized in Table 3 and clearly show that the fault detection, isolation, and identification
scheme performs very well even under the influence of uncertainty in all the model parameters.
It should also be noted that the assumption that only the activation energy has a major impact on
the fault diagnosis was a good one, since the fault identification was only designed for
uncertainty in this parameter; nevertheless, reliable fault diagnosis is possible even under the
influence of uncertainty in several other parameters. Additionally, it can be concluded that it is
an important task to choose an appropriate threshold for determining a fault.
Scenario   Threshold 1   Threshold 2   Threshold 3   Threshold 4
00            100           100           100           100
01            100           100           100            96.45
10             89.9189      100           100           100
11            100           100           100           100
Table 3: Monte Carlo simulation (presented approach with model uncertainty)
Figure 8: Reactor and coolant temperature fault signal
Figure 9: Reactor and coolant temperature residual signal through presented scheme (with model uncertainty)
5.2 Fault diagnosis of a reactor with uncertain and time-varying parameters
In this section, the performance of the proposed fault diagnosis scheme is evaluated for the non-
isothermal CSTR problem as introduced in section 5.1 but with model parameters varying with
time. Since the activation energy affects the behavior of the system significantly more strongly than
any other parameter, it is assumed that only the activation energy varies with time, possibly due to
catalyst deactivation or coking. Figure 10 shows the plot of the activation energy and its estimate
over the simulated time span and Figure 11 presents the fault signal fs(t) that is affecting the
sensors. The corresponding coolant and reactor temperature residuals generated by the
Kharitonov theorem-based fault identification technique are shown in Figure 12. The time period
during which the parameter is identified within acceptable limits ranges from t=0 to 10 min.
These times were determined by comparing the measured output and the predicted output. The
first long time period during which fault detection and identification is invoked ranges from 10
min to 200 min. The parameter is adapted from 200 min to 210 min. This is followed by another
fault detection period ranging from t=210 to 400 min. It can be concluded from Figure 12 that
the fault identification scheme is effective even in the presence of time-varying uncertain
parameters. It should be noted that the system would not work as well if the parameters are not
periodically re-identified, as can be seen from Figure 12 during the time period just before 200
min.
Figure 10: Activation energy change with time (actual and estimated values, normalized by 76534.704 J/mol)
Figure 11: Reactor and coolant temperature fault signal
Figure 12: Reactor and coolant temperature residual signal through presented scheme (with time-varying parametric uncertainty); short parameter-estimation periods alternate with long fault-identification periods
6 Conclusions
A new observer-based fault diagnosis scheme for nonlinear dynamic systems with
parametric uncertainty was presented. This approach is centered around two main components:
the design of a nonlinear observer, which includes uncertain parameters as augmented states, and
the choice of an appropriate fault isolation and identification filter for reconstructing the location
and nature of the fault. The observer design was performed based upon Kharitonov’s theorem
but takes into account the effect that changes in the parameters have on the steady state of the
system. This resulted in a nonlinear, augmented observer, which has the property that it is locally
stable for parametric uncertainty within a specified range. The fault isolation and identification
filter was designed based upon a linearization of the nonlinear model at each time step.
Repeatedly computing linearization of the model does not pose a problem in practice since it is
computationally inexpensive.
Since it is not possible to simultaneously perform parameter estimation and fault
detection, these two tasks were implemented at different time scales. The parameters were
estimated at periodic intervals where the fault was either assumed to be zero or known and
constant, whereas the fault detection scheme was invoked at all times with the exception of the
short periods used for parameter estimation.
The performance of the proposed fault diagnosis method was evaluated using a numerical
example of an exothermic CSTR and by performing Monte Carlo simulations on a bounded set
of parametric uncertainties for a series of faults in both of the available measurements. The faults
were reconstructed correctly even in the presence of severe uncertainties in the model parameters
and measurement noise.
Notation
A, C   System matrices
\bar{A}, \bar{C}   System matrices for the augmented system
\bar{\bar{A}}, \bar{\bar{C}}   Canonically transformed matrices
f_s   Vector of faults
f(x), h(x)   Vector fields in the state space description of a continuous-time nonlinear system
l   Constant observer gain vector
L   Constant observer gain matrix
\bar{L}(\tilde{x}, \tilde{θ})   Nonlinear observer gain for the augmented system
L(\hat{x}, \tilde{θ})   Nonlinear observer gain for the original nonlinear system
L_f h   Lie derivative of h(x) with respect to f(x)
Q(t)   Fault reconstruction filter
r(t)   Difference between the actual and estimated output
t   Time
\bar{T}, T   Invertible transformation matrices for the augmented and original system, respectively
W_o(x)   Observability matrix
x   Vector of state variables
\hat{x}   Estimate of the state variables of the original nonlinear system
\tilde{x}   Estimate of the state variables of the augmented system
\bar{x}   Augmented state variables
y   Vector of output variables
\hat{y}   Estimate of output variables
z   Transformed state vector
Greek letters
δ(s)   Open-loop interval polynomial family
γ(s)   Closed-loop interval polynomial family
Ω   Hyperrectangle of coefficients of an interval polynomial family
θ   Uncertain parameter vector
\tilde{θ}   Estimate of the parameter vector
θ_ss   Nominal parameter value
Π   Hyperrectangle within which the uncertain parameter varies
φ(x)   Nonlinear map
λ(A)   Eigenvalues of the matrix A
ξ   Vector of state variables of the fault identification filter
α, ε, η   Positive scalars
S(·)   Unit step function
Other symbols
‖·‖_2   Euclidean norm
R^n   n-dimensional Euclidean space
C   Complex plane
C^-   Left half complex plane
Appendix A. Observer form state transformation32
The following state-space representation of an LTI system is given:
\dot{x} = Ax
y = Cx
where x ∈ R^n and y ∈ R^1. The characteristic polynomial of the matrix A is
δ(s) = δ_0 + δ_1 s + δ_2 s^2 + ⋯ + δ_{n−1}s^{n−1} + s^n.
The aim is to find a coordinate transformation matrix T which transforms the aforementioned
LTI system into the following one:
\dot{z} = \bar{\bar{A}} z
y = \bar{\bar{C}} z,   z = Tx
where
\bar{\bar{A}} = [ −δ_{n−1}   1   0   ⋯   0   0
                 −δ_{n−2}   0   1   ⋯   0   0
                     ⋮                     ⋮
                 −δ_1       0   0   ⋯   0   1
                 −δ_0       0   0   ⋯   0   0 ],
\bar{\bar{C}} = [1  0  0  ⋯  0  0].
The transformation matrix that transforms the original system into this observer canonical form
is constructed as follows:
1) Let the transformation matrix T be represented by its row vectors
T = [t_1; t_2; t_3; ⋯; t_{n−1}; t_n]
where \bar{\bar{A}} = TAT^{-1} and \bar{\bar{C}} = CT^{-1}.
2) The first row of the matrix T is obtained from the relation \bar{\bar{C}}T = C, i.e., t_1 = C.
3) The remaining rows of the matrix T are then computed by the following recursive relation:
t_2 = t_1 A + δ_{n−1} t_1
t_3 = t_2 A + δ_{n−2} t_1
⋮
t_n = t_{n−1} A + δ_1 t_1
It can be shown that the invertibility of the transformation matrix T is guaranteed if the matrix
pair {A, C} is observable.
Appendix B. Observer gain computation6
Consider a polynomial
δ(s) = δ_0 + δ_1 s + δ_2 s^2 + δ_3 s^3 + ⋯ + δ_{n−1}s^{n−1} + s^n
whose coefficients can vary independently within a given uncertainty range:
δ_i^0 − Δδ_i/2 ≤ δ_i ≤ δ_i^0 + Δδ_i/2,   Δδ_i = δ_i^+ − δ_i^-,   i = 0, 1, ..., n−1.
The aim is to find a constant vector l = (l_0, l_1, ..., l_{n−1}) that transforms the interval polynomial
family δ(s) into another interval polynomial family described by
γ(s) = (δ_0 + l_0) + (δ_1 + l_1)s + (δ_2 + l_2)s^2 + ⋯ + (δ_{n−1} + l_{n−1})s^{n−1} + s^n
such that the entire family γ(s) remains Hurwitz.
1) Consider any stable polynomial R(s). Let ρ(R(s)) be the radius of the largest stability
hypersphere6 around R(s). It can be shown that, for any positive real number α,
ρ(αR(s)) = αρ(R(s))6.
2) Thus it is possible to find a polynomial αR(s) such that
ρ(αR(s)) > ( Σ_{i=0}^{n−1} (Δδ_i)^2 / 4 )^{1/2}.
3) Denoting αR(s) = r_0 + r_1 s + r_2 s^2 + ⋯ + r_{n−1}s^{n−1} + s^n, the constant vector l is calculated as follows:
l := { l_i = r_i − δ_i^0, i = 0, 1, 2, ..., n−1 }.
It can be seen from the above calculations that, for a given interval family δ(s) with the associated
uncertainty ranges, there is an infinite number of possibilities for the constant gain vector l that
transform the given interval family δ(s) into γ(s) such that γ(s) is Hurwitz.
Acknowledgements
The authors would like to thank Professor Shankar Bhattacharyya for his comments in
preparation of this manuscript.
References
[1] Doyle, F.J. Nonlinear inferential control for process applications. Journal of Process
Control, 1998, 8, 339.
[2] Soroush, M. State and parameter estimations and their applications in process control.
Computers and Chemical Engineering, 1998, 23, 229.
[3] Chen, J.; and Patton, R. Robust Model based fault diagnosis for dynamic systems. Kluwer
Academic Publishers, 1999.
[4] Garcia, E.A.; and Frank, P.M. Deterministic nonlinear-observer based approaches to fault
diagnosis: A survey, Control Engineering Practice, 1997, 5, 663.
[5] Frank, P.M.; and Ding, X. Survey of robust residual generation and evaluation methods
in observer-based fault detection systems. Journal of Process Control, 1997, 7, 403.
[6] Bhattacharyya, S.P.; Chappellat, H.; and Keel, L.H. Robust Control: The Parametric
Approach. Prentice Hall PTR, Upper Saddle River, NJ, 1995.
[7] Venkatasubramanian, V.; Rengaswamy, R.; Kavuri, S.N.; and Kewen, Yin. A review of
process fault detection and diagnosis: Part III: Process history based methods. Computers
and Chemical Engineering, 2003, 27, 327.
[8] Massoumia, M.; Verghese, G.C.; and Willsky AS. Failure detection and identification.
IEEE Transactions on Automatic Control, 1989, 34, 316.
[9] Kruger, U.; Chen, Q.; McFarlane, R.C.; and Sandoz D.J. Extended PLS approach for
Enhanced Condition Monitoring for Industrial Processes. AIChE Journal, 2001, 47(9)
2076.
[10] Qin, S.J. Statistical process monitoring: basics and beyond. Journal of Chemometrics,
2003, 17(8-9), 480.
[11] Soderstrom, T.A; Himmelblau, D.M.; and Edgar, T.F. The Extension of a Mixed-
Integer Optimization-based Approach to Simultaneous Data Reconciliation and Bias
Identification. FOCAPO 2003, Boca Raton, FL, January, 2003.
[12] Wattanabe, K.; and Himmelblau, D.M. Instrument fault detection in system with
uncertainties. International Journal of System Science, 1982, 13(2), 137.
[13] Wunnenberg, J.; and Frank, P.M. Sensor fault detection via robust observers, in
Tzafestas, S.G.; Singh, M.G.; and Schmidt, G. (eds), 147-160. System Fault Diagnostics,
Reliability & Related Knowledge-Based Approaches. D. Riedel Press, Dordrecht, 1987.
[14] Xiong, Y.; and Saif, M. Robust fault detection and isolation via a diagnostic observer.
International Journal of Robust Nonlinear Control, 2000, 10, 1175.
[15] Patton, R.J.; and Kangethe, S.M. Robust fault diagnosis using eigen-structure
assignment of observers, chapter 4, 99-154. Fault Diagnosis in Dynamic Systems, Theory
and Application. Prentice Hall, 1989.
[16] Seliger, R.; and Frank, P.M. Robust fault detection and isolation in nonlinear dynamical
systems using nonlinear unknown input observers. In Preprints of the IFAC/IMACS
Symposium SAFEPROCESS’ 91, 1991, 1, 313, Baden-Baden.
[17] Ding, X.; and Frank, P.M. Frequency domain approach and threshold selector for
robust model-based fault detection and isolation. In Preprints of IFAC/IMACS Symp.
SAFEPROCESS’91, 1991, Baden-Baden.
[18] Frank, P.M.; and Ding, X. Frequency domain approach to optimally robust residual
generation and evaluation for model-based fault diagnosis. Automatica, 1994, 30(4), 789.
[19] Marquez, H.J.; and Diduch, C.P. Sensitivity robustness in failure detection: A
frequency domain approach. In Proceedings 29th IEEE CDC, Honolulu, USA, 1990.
[20] Basseville, M. Detecting changes in signals and systems – a survey. Automatica, 1988,
3, 309.
[21] Ding, X.; and Frank, P.M. On-line fault detection in uncertain systems using adaptive
observers. European Journal of Diagnosis and Safety in Automation, 1993, 3, 9.
[22] Frank, P.M.; Ding, X.; and Guo, L. An adaptive observer based fault detection system
for uncertain nonlinear systems. In Proceedings of 12th IFAC World Congress, 1993.
[23] Tortora, G. Fault–tolerant control and intelligent instrumentation. IEEE Computing &
Control Journal, 2002, 13, 259.
[24] Francis, B.A. A course in H∞ control theory. Springer Verlag, Berlin-New York, 1987.
[25] Ding, X.; and Frank, P.M. Fault detection via factorization approach. Systems &
Control Letters, 1990, 14, 431.
[26] Bestle, D.; and Zeitz, M. Canonical form observer design for nonlinear time-varying
system. International Journal of Control, 1988, 47(6), 1823.
[27] Othman, S.; Gauthier, J.P.; and Hammouri, H. A simple observer for nonlinear systems:
Applications to bioreactors. IEEE Trans. Automatic Control, 1992, AC-7, 875.
[28] Bastin, G.; and Gevers, M.R. Stable adaptive observers for non-linear time-varying
systems. IEEE Trans. Automatic Control, 1988, 7, 650.
[29] Krener, A.J.; and Isidori, A. Linearization by output injection and nonlinear observers.
Systems & Control Letters, 1998, 34, 241.
[30] Kazantzis, N.; and Kravaris, C. Nonlinear observer design using Lyapunov’s auxiliary
theorem. Systems & Control Letters, 1988, 34, 241.
[31] Hermann, R.; and Krener, A.J. Nonlinear controllability and observability. IEEE Trans.
Autom. Control, 1977, AC-22, 728.
[32] Fairman, F.W. Linear Control Theory. John Wiley and Sons, New York, 1998.
[33] Zeitz, M. The extended Luenberger observer for nonlinear systems. Systems & Control
Letters, 1987, 9, 149.
[34] Vejtasa, S.A.; and Schmitz, R.A. An experimental study of steady-state multiplicity and
stability in an adiabatic stirred reactor. AIChE Journal, 1970, 3,410.
[35] Fogler, H.S. Elements of Chemical Reaction Engineering; Prentice Hall, Englewoods
Cliffs, NJ, 1992.