Given a control system, the first and most important question about its various properties is whether it is stable, because an unstable control system is typically useless and potentially dangerous. Qualitatively, a system is described as stable if starting the system somewhere near its desired operating point implies that it will remain near that point ever after. Every control system, whether linear or nonlinear, involves a stability problem which should be carefully studied.
The most useful and general approach for studying the stability of nonlinear control systems is the theory introduced in the late 19th century by the Russian mathematician Alexandr Mikhailovich Lyapunov. Lyapunov's work, The General Problem of Motion Stability, published in 1892, includes two methods for stability analysis: the so-called linearization method and the direct method. The linearization method draws conclusions about a nonlinear system's local stability around an equilibrium point from the stability properties of its linear approximation. The direct method is not restricted to local motion; it determines the stability properties of a nonlinear system by constructing a scalar energy-like function for the system and examining the function's time variation.
A few simplifying notations are defined at this point. Let B_R denote the spherical region (or ball) defined by ||x|| < R in state space, and S_R the sphere itself, defined by ||x|| = R.

Definition: The equilibrium point x = 0 is said to be stable if

∀R > 0, ∃r > 0 : ||x(0)|| < r ⇒ ||x(t)|| < R, ∀t ≥ 0

or, equivalently,

∀R > 0, ∃r > 0 : x(0) ∈ B_r ⇒ x(t) ∈ B_R, ∀t ≥ 0.
Definition: A function V : D → R is said to be positive semidefinite in D if

(i) 0 ∈ D and V(0) = 0, and
(ii) V(x) ≥ 0, ∀x in D − {0}.

V : D → R is said to be positive definite in D if condition (ii) is replaced by (ii'):

(ii') V(x) > 0, ∀x in D − {0}.

Finally, V : D → R is said to be negative definite (semidefinite) in D if −V is positive definite (semidefinite).

We will often abuse the notation slightly and write V > 0, V ≥ 0, and V < 0 in D to indicate that V is positive definite, positive semidefinite, and negative definite in D, respectively.
Example: The simplest and perhaps most important class of positive definite functions is the quadratic form

V(x) : R^n → R, V(x) = x^T Q x, Q ∈ R^{n×n}, Q = Q^T.

For instance, with a, b > 0,

V1(x) : R^2 → R, V1(x) = a x1^2 + b x2^2 = [x1, x2] [a 0; 0 b] [x1; x2] > 0

is positive definite, while with a > 0,

V2(x) : R^2 → R, V2(x) = a x1^2 = [x1, x2] [a 0; 0 0] [x1; x2] ≥ 0

is only positive semidefinite. V2(·) is not positive definite since, for any x2 ≠ 0, any x of the form x* = [0, x2]^T ≠ 0; however, V2(x*) = 0.
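As a quick sanity check, the two quadratic forms above can be evaluated numerically. The following is a minimal sketch in plain Python; the values a = 2, b = 3 are illustrative assumptions, not taken from the text:

```python
# Sketch: evaluating the quadratic forms V1 and V2 from the example.
# a = 2, b = 3 are illustrative positive constants (assumption).

def quad_form(Q, x):
    """Compute x^T Q x for a matrix Q given as lists of lists."""
    n = len(x)
    return sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))

a, b = 2.0, 3.0
Q1 = [[a, 0.0], [0.0, b]]    # V1: positive definite (a, b > 0)
Q2 = [[a, 0.0], [0.0, 0.0]]  # V2: only positive semidefinite

x_star = [0.0, 1.0]           # x* = [0, x2]^T != 0
print(quad_form(Q1, x_star))  # 3.0 > 0
print(quad_form(Q2, x_star))  # 0.0 -> V2 vanishes at a nonzero point
```

The nonzero vector x* shows exactly why V2 fails positive definiteness: it is annihilated by the zero row of Q2.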
Positive definite functions (PDFs) constitute the basic building block of Lyapunov theory. PDFs can be seen as an abstraction of the total energy stored in a system, as we will see. All of Lyapunov's stability theorems focus on the study of the time derivative of a positive definite function along the trajectories of the system. In other words, given an autonomous system of the form dx/dt = f(x), we will first construct a positive definite function V(x) and study dV(x)/dt, given by

dV/dt = (∂V/∂x)(dx/dt) = (∂V/∂x) f(x) = [∂V/∂x1, ∂V/∂x2, ..., ∂V/∂xn] [f1(x); f2(x); ...; fn(x)].
The following definition introduces a useful and very common way of representing
this derivative.
Definition: Let V : D → R and f : D → R^n. The Lie derivative of V along f, denoted by L_f V, is defined by

L_f V(x) = (∂V/∂x) f(x).

Thus, according to this definition, we have that

dV/dt = (∂V/∂x) f(x) = L_f V(x).
Example: Consider the system

dx1/dt = a x1
dx2/dt = b x2 cos x1

and define V(x) = x1^2 + x2^2. Thus, we have

dV/dt = L_f V(x) = [2x1, 2x2] [a x1; b x2 cos x1] = 2a x1^2 + 2b x2^2 cos x1.

It is clear from this example that dV(x)/dt depends on the system's equation f(x) and thus it will be different for different systems.
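The Lie derivative above can be cross-checked numerically: a finite-difference gradient of V dotted with f should agree with the closed-form expression. This is a sketch; the values a = -1, b = -0.5 and the test point are illustrative assumptions:

```python
import math

# Sketch: checking L_f V = (dV/dx) f(x) numerically for
# V(x) = x1^2 + x2^2 and f(x) = (a*x1, b*x2*cos(x1)).
# a, b and the test point are illustrative assumptions.

a, b = -1.0, -0.5

def f(x1, x2):
    return (a * x1, b * x2 * math.cos(x1))

def V(x1, x2):
    return x1**2 + x2**2

def lie_derivative(x1, x2, h=1e-6):
    """Central-difference gradient of V dotted with f(x)."""
    dV1 = (V(x1 + h, x2) - V(x1 - h, x2)) / (2 * h)
    dV2 = (V(x1, x2 + h) - V(x1, x2 - h)) / (2 * h)
    f1, f2 = f(x1, x2)
    return dV1 * f1 + dV2 * f2

def closed_form(x1, x2):
    # 2 a x1^2 + 2 b x2^2 cos(x1), as derived in the text
    return 2 * a * x1**2 + 2 * b * x2**2 * math.cos(x1)

x1, x2 = 0.7, -1.2
print(abs(lie_derivative(x1, x2) - closed_form(x1, x2)) < 1e-5)  # True
```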
Stability Theorems:
Theorem 1. (Lyapunov Stability Theorem) Let x = 0 be an equilibrium point of dx/dt = f(x), f : D → R^n, and let V : D → R be a continuously differentiable function such that

(i) V(0) = 0,
(ii) V(x) > 0 in D − {0},
(iii) dV/dt ≤ 0 in D − {0}.

Then x = 0 is stable. In other words, the theorem implies that a sufficient condition for the stability of the equilibrium point x = 0 is that there exists a continuously differentiable positive definite function V(x) such that dV(x)/dt is negative semidefinite in a neighborhood of x = 0.
Theorem 2. (Asymptotic Stability Theorem) Under the conditions of Theorem 1, if V(·) is such that

(i) V(0) = 0,
(ii) V(x) > 0 in D − {0},
(iii) dV/dt < 0 in D − {0},

then x = 0 is asymptotically stable. In other words, the theorem says that asymptotic stability is achieved if the conditions of Theorem 1 are strengthened by requiring dV(x)/dt to be negative definite, rather than negative semidefinite.
Marquez, HJ, Nonlinear Control Systems
Examples:
Pendulum without friction

[Figure: simple pendulum of length l and mass m, with angle θ measured from the vertical.]

The equation of motion of the system is

m l θ'' + m g sin θ = 0
θ'' + (g/l) sin θ = 0.

Choosing state variables x1 = θ, x2 = θ', we have

dx1/dt = x2
dx2/dt = −(g/l) sin x1.

The total energy is E = K + P, i.e.,

E = (1/2) m (l θ')^2 + m g h,

where

h = l(1 − cos θ) = l(1 − cos x1).

Thus

E = (1/2) m l^2 x2^2 + m g l (1 − cos x1).
We now define V(x) = E and investigate whether V(·) and its derivative dV(·)/dt satisfy the conditions of Theorems 1 and/or 2. Clearly V(0) = 0; thus the defining property (i) is satisfied in both theorems. With respect to (ii), we see that because of the periodicity of cos(x1), we have that V(x) = 0 whenever x = (x1, x2)^T = (2kπ, 0)^T, k = ±1, ±2, .... Thus V(·) is not positive definite. This situation, however, can be easily remedied by restricting the domain of x1 to the interval (−2π, 2π); i.e., we take V : D → R, with D = (−2π, 2π) × R.
There remains to evaluate the derivative of V(·) along the trajectories of the system. We have

dV/dt = (∂V/∂x) f(x) = [∂V/∂x1, ∂V/∂x2] [f1(x); f2(x)]
      = [m g l sin x1, m l^2 x2] [x2; −(g/l) sin x1]
      = m g l x2 sin x1 − m g l x2 sin x1 = 0.

Thus dV/dt = 0 along the trajectories, and by Theorem 1 the origin is a stable equilibrium point.

[Figure: plot of the potential-energy term 1 − cos x1 versus x1.]
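Since dV/dt = 0, the energy should stay constant along any simulated trajectory of the frictionless pendulum. The following sketch integrates the system with a classical fourth-order Runge-Kutta step; the parameter values and initial condition are illustrative assumptions:

```python
import math

# Sketch: simulating dx1/dt = x2, dx2/dt = -(g/l) sin x1 and checking
# that V(x) = E is conserved (dV/dt = 0 along trajectories).
# m, l, g and the initial condition are illustrative assumptions.

m, l, g = 1.0, 1.0, 9.81

def f(x):
    x1, x2 = x
    return (x2, -(g / l) * math.sin(x1))

def V(x):
    x1, x2 = x
    return 0.5 * m * l**2 * x2**2 + m * g * l * (1.0 - math.cos(x1))

def rk4_step(x, dt):
    def add(u, v, s):  # componentwise u + s*v
        return tuple(ui + s * vi for ui, vi in zip(u, v))
    k1 = f(x)
    k2 = f(add(x, k1, dt / 2))
    k3 = f(add(x, k2, dt / 2))
    k4 = f(add(x, k3, dt))
    return tuple(x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

x = (0.5, 0.0)         # start near the equilibrium
E0 = V(x)
for _ in range(2000):  # 2 seconds with dt = 1e-3
    x = rk4_step(x, 1e-3)
print(abs(V(x) - E0) < 1e-6)  # True: energy is (numerically) conserved
```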
Pendulum with viscous friction

[Figure: the same pendulum of length l and mass m, now with viscous friction.]

Consider now the same pendulum, adding viscous friction with coefficient c. The equation of motion becomes

m l θ'' + c l θ' + m g sin θ = 0
θ'' + (c/m) θ' + (g/l) sin θ = 0.

With the same state variables,

dx1/dt = x2
dx2/dt = −(g/l) sin x1 − (c/m) x2.
Again x = 0 is an equilibrium point. The energy is the same as in the previous example:

E = V(x) = (1/2) m l^2 x2^2 + m g l (1 − cos x1) > 0 in D − {0}.

The derivative along the trajectories is

dV/dt = (∂V/∂x) f(x) = [∂V/∂x1, ∂V/∂x2] [f1(x); f2(x)]
      = [m g l sin x1, m l^2 x2] [x2; −(g/l) sin x1 − (c/m) x2]
      = −c l^2 x2^2.

Thus dV(x)/dt is negative semidefinite. It is not negative definite since dV(x)/dt = 0 for x2 = 0, regardless of the value of x1 (thus dV(x)/dt = 0 along the x1 axis).
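The identity dV/dt = −c l^2 x2^2 for the damped pendulum can be verified by dotting the gradient of V with f directly. A minimal sketch; the parameter values and test point are illustrative assumptions:

```python
import math

# Sketch: pendulum with viscous friction,
#   dx1/dt = x2,  dx2/dt = -(g/l) sin x1 - (c/m) x2.
# Along trajectories, grad V . f should equal -c l^2 x2^2.
# m, l, g, c and the test point are illustrative assumptions.

m, l, g, c = 1.0, 1.0, 9.81, 0.4

def f(x):
    x1, x2 = x
    return (x2, -(g / l) * math.sin(x1) - (c / m) * x2)

def Vdot(x):
    """grad V . f, with grad V = (m g l sin x1, m l^2 x2)."""
    x1, x2 = x
    dV = (m * g * l * math.sin(x1), m * l**2 * x2)
    f1, f2 = f(x)
    return dV[0] * f1 + dV[1] * f2

x = (1.0, 0.5)
# agrees with the closed form -c l^2 x2^2 (the sin terms cancel exactly):
print(abs(Vdot(x) - (-c * l**2 * x[1]**2)) < 1e-12)  # True
```

Note that the conservative terms cancel exactly, leaving only the dissipative term, which is why the friction coefficient alone sets the sign of dV/dt.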
Example: Consider the system

dx1/dt = x1 (x1^2 + x2^2 − 2) − x2
dx2/dt = x1 + x2 (x1^2 + x2^2 − 2).

To study the equilibrium point at the origin we define

V(x) = (1/2)(x1^2 + x2^2).
dV/dt = (∂V/∂x) f(x) = [∂V/∂x1, ∂V/∂x2] [f1(x); f2(x)] = [x1, x2] [f1(x); f2(x)]
      = x1^2 (x1^2 + x2^2 − 2) − x1 x2 + x1 x2 + x2^2 (x1^2 + x2^2 − 2)
      = (x1^2 + x2^2)(x1^2 + x2^2 − 2).

Thus dV/dt < 0 whenever 0 < x1^2 + x2^2 < 2, and the origin is (locally) asymptotically stable.
Theorem 3. (Global Asymptotic Stability Theorem) The origin x = 0 is globally asymptotically stable (stable in the large) if the following conditions are satisfied:

(i) V(0) = 0,
(ii) V(x) > 0, ∀x ≠ 0,
(iii) V(x) is radially unbounded; i.e., V(x) → ∞ as ||x|| → ∞,
(iv) dV/dt < 0, ∀x ≠ 0.
Example:

Consider the system

dx1/dt = x2 − x1 (x1^2 + x2^2)
dx2/dt = −x1 − x2 (x1^2 + x2^2)

and define V(x) = x1^2 + x2^2. Then

dV/dt = (∂V/∂x) f(x) = [∂V/∂x1, ∂V/∂x2] [f1(x); f2(x)] = [2x1, 2x2] [f1(x); f2(x)]
      = 2x1 x2 − 2x1^2 (x1^2 + x2^2) − 2x1 x2 − 2x2^2 (x1^2 + x2^2)
      = −2 (x1^2 + x2^2)^2.

Thus dV/dt < 0 for all x ≠ 0, and since V(x) = x1^2 + x2^2 is radially unbounded, the origin is globally asymptotically stable.

[Figure: surface plot of V(x) = x1^2 + x2^2 over the (x1, x2) plane.]
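The algebra above can be checked numerically at a few points: dV/dt computed as grad V . f should match −2(x1^2 + x2^2)^2 everywhere, including far from the origin (consistent with global asymptotic stability). The test points below are arbitrary illustrative choices:

```python
# Sketch: numerical check that for
#   dx1/dt = x2 - x1 (x1^2 + x2^2),  dx2/dt = -x1 - x2 (x1^2 + x2^2)
# and V(x) = x1^2 + x2^2, grad V . f equals -2 (x1^2 + x2^2)^2,
# which is negative for every x != 0.  Test points are arbitrary.

def f(x1, x2):
    r2 = x1**2 + x2**2
    return (x2 - x1 * r2, -x1 - x2 * r2)

def Vdot(x1, x2):
    f1, f2 = f(x1, x2)
    return 2 * x1 * f1 + 2 * x2 * f2   # grad V = (2 x1, 2 x2)

for (x1, x2) in [(0.3, -0.7), (2.0, 1.5), (-4.0, 0.1)]:
    expected = -2 * (x1**2 + x2**2)**2
    print(abs(Vdot(x1, x2) - expected) < 1e-9)  # True each time
```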
The Variable Gradient Method: Assume that the gradient of an (as yet unknown) Lyapunov function candidate is given; i.e.,

∇V(x) = g(x),

so that

dV/dt = ∇V(x) f(x) = g(x)^T f(x).

The power of this method relies on the following fact. Given that ∇V(x) = g(x), we have that

g(x) dx = ∇V(x) dx = dV(x)

and therefore

V(x_b) − V(x_a) = ∫ from x_a to x_b of ∇V(x) dx = ∫ from x_a to x_b of g(x) dx;

that is, the difference V(x_b) − V(x_a) depends on the initial and final states x_a and x_b and not on the particular path followed when going from x_a to x_b. This property is often used to obtain V by integrating ∇V(x) along the coordinate axes:

V(x) = ∫ from 0 to x1 of g1(s1, 0, ..., 0) ds1 + ∫ from 0 to x2 of g2(x1, s2, 0, ..., 0) ds2 + ··· + ∫ from 0 to xn of gn(x1, x2, ..., sn) dsn.
The free parameters in the function g(x) are constrained to satisfy certain symmetry conditions, which are satisfied by all gradients of a scalar function.
Theorem: A function g(x) is the gradient of a scalar function V(x) if and only if the matrix

∂g/∂x =
| ∂g1/∂x1  ∂g1/∂x2  ···  ∂g1/∂xn |
| ∂g2/∂x1  ∂g2/∂x2  ···  ∂g2/∂xn |
|   ···      ···     ···    ···   |
| ∂gn/∂x1  ∂gn/∂x2  ···  ∂gn/∂xn |

is symmetric.
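This symmetry test can be sketched numerically by forming a finite-difference Jacobian of a candidate g and checking it for symmetry. The two candidate fields below (g_grad, a true gradient of V = x1^2 x2 + x2^2, and g_not, which is not a gradient) are illustrative assumptions, not taken from the text:

```python
# Sketch: checking dg_i/dx_j = dg_j/dx_i by central differences.
# g_grad is the gradient of V = x1^2 x2 + x2^2; g_not is not a gradient.
# Both candidates and the test point are illustrative assumptions.

def jacobian(g, x, h=1e-6):
    """Finite-difference Jacobian of a vector field g at point x."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        gp, gm = g(xp), g(xm)
        for i in range(n):
            J[i][j] = (gp[i] - gm[i]) / (2 * h)
    return J

def is_symmetric(J, tol=1e-4):
    n = len(J)
    return all(abs(J[i][j] - J[j][i]) < tol
               for i in range(n) for j in range(n))

g_grad = lambda x: [2 * x[0] * x[1], x[0]**2 + 2 * x[1]]  # a true gradient
g_not  = lambda x: [x[1], -x[0]]                          # not a gradient

x0 = [0.8, -0.3]
print(is_symmetric(jacobian(g_grad, x0)))  # True
print(is_symmetric(jacobian(g_not, x0)))   # False
```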
Example:

Consider the following system:

dx1/dt = −a x1
dx2/dt = b x2 + x1 x2^2.

Clearly the origin is an equilibrium point. To study the stability of the equilibrium point, we proceed to find a Lyapunov function as follows.

Step 1: Assume that ∇V(x) = g(x) has the form

g1(x) = h11 x1 + h12 x2
g2(x) = h21 x1 + h22 x2,

where the hij may depend on x.

Step 2: Impose the symmetry conditions

∂²V/(∂xi ∂xj) = ∂²V/(∂xj ∂xi), or, equivalently, ∂gi/∂xj = ∂gj/∂xi.
Computing the partial derivatives, we obtain

∂g1/∂x2 = h12 + x2 ∂h12/∂x2 + x1 ∂h11/∂x2
∂g2/∂x1 = h21 + x1 ∂h21/∂x1 + x2 ∂h22/∂x1.

To simplify the solution, we attempt to solve the problem assuming that the hij's are constant. If this is the case, then

∂g1/∂x2 = h12, ∂g2/∂x1 = h21,

and the symmetry condition requires that

h12 = h21 = k
g(x) = [h11 x1 + k x2, k x1 + h22 x2].

To simplify further, take k = 0 and write h1 = h11, h2 = h22, so that

g(x) = (g1, g2) = (h1 x1, h2 x2).
Step 3: Find dV/dt:

dV/dt = ∇V(x) f(x) = g(x)^T f(x)
      = [h1 x1, h2 x2] [−a x1; b x2 + x1 x2^2]
      = −a h1 x1^2 + h2 (b + x1 x2) x2^2.
Step 4: Find V by integration:

V(x) = ∫ from 0 to x1 of g1(s1, 0) ds1 + ∫ from 0 to x2 of g2(x1, s2) ds2
     = ∫ from 0 to x1 of h1 s1 ds1 + ∫ from 0 to x2 of h2 s2 ds2
     = (1/2) h1 x1^2 + (1/2) h2 x2^2.
Step 5: Verify that V > 0 and dV/dt < 0. We have that

V(x) = (1/2) h1 x1^2 + (1/2) h2 x2^2
dV/dt = −a h1 x1^2 + h2 (b + x1 x2) x2^2.

Choosing h1 = h2 = 1 gives

V(x) = (1/2)(x1^2 + x2^2)
dV/dt = −a x1^2 + (b + x1 x2) x2^2.

Assume now that a > 0 and b < 0. In this case

dV/dt = −a x1^2 + (b + x1 x2) x2^2 < 0

in a neighborhood of the origin, since b + x1 x2 < 0 whenever |x1 x2| < |b|. We conclude that, under these conditions, the origin is (locally) asymptotically stable.
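The variable-gradient result can be spot-checked numerically: with h1 = h2 = 1, grad V . f should match −a x1^2 + (b + x1 x2) x2^2 and be negative at nonzero points near the origin. The values a = 1, b = −1 and the test points are illustrative assumptions:

```python
# Sketch: checking the variable-gradient example
#   dx1/dt = -a x1,  dx2/dt = b x2 + x1 x2^2
# with V(x) = (1/2)(x1^2 + x2^2) (h1 = h2 = 1), a > 0, b < 0.
# a = 1, b = -1 and the test points are illustrative assumptions.

a, b = 1.0, -1.0

def f(x1, x2):
    return (-a * x1, b * x2 + x1 * x2**2)

def Vdot(x1, x2):
    f1, f2 = f(x1, x2)
    return x1 * f1 + x2 * f2           # grad V = (x1, x2)

def closed_form(x1, x2):
    return -a * x1**2 + (b + x1 * x2) * x2**2

# the two expressions agree, and dV/dt < 0 at nonzero points near 0:
for (x1, x2) in [(0.2, 0.1), (-0.3, 0.4), (0.5, -0.5)]:
    print(abs(Vdot(x1, x2) - closed_form(x1, x2)) < 1e-12, Vdot(x1, x2) < 0)
```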