
LYAPUNOV STABILITY THEORY:

Given a control system, the first and most important question about its various
properties is whether it is stable, because an unstable control system is typically
useless and potentially dangerous. Qualitatively, a system is described as stable
if starting the system somewhere near its desired operating point implies that it
will stay around the point ever after. Every control system, whether linear or
nonlinear, involves a stability problem which should be carefully studied.

The most useful and general approach for studying the stability of nonlinear
control systems is the theory introduced in the late 19th century by the Russian
mathematician Alexandr Mikhailovich Lyapunov. Lyapunov's work, The General
Problem of Motion Stability, includes two methods for stability analysis (the so-called linearization method and direct method) and was published in 1892. The
linearization method draws conclusions about a nonlinear system's local stability
around an equilibrium point from the stability properties of its linear
approximation. The direct method is not restricted to local motion, and determines
the stability properties of a nonlinear system by constructing a scalar energy-like
function for the system and examining the function's time variation.

Slotine and Li, Applied Nonlinear Control

Today, Lyapunov's linearization method has come to represent the theoretical
justification of linear control, while Lyapunov's direct method has become the most
important tool for nonlinear system analysis and design. Together, the linearization
method and the direct method constitute the so-called Lyapunov stability theory.

A few simplifying notations are defined at this point. Let B_R denote the spherical
region (or ball) defined by ||x|| < R in state-space, and S_R the sphere itself, defined
by ||x|| = R.

STABILITY AND INSTABILITY:


Definition 1. The equilibrium state x = 0 is said to be stable if, for any R > 0, there
exists r > 0, such that if ||x(0)|| < r, then ||x(t)|| < R for all t ≥ 0. Otherwise, the
equilibrium point is unstable.

Using ∀ to mean "for any", ∃ for "there exists", ∈ for "in the set", and ⟹ for
"implies that", the definition can be written as

∀R > 0, ∃r > 0, ||x(0)|| < r ⟹ ∀t ≥ 0, ||x(t)|| < R

or equivalently

∀R > 0, ∃r > 0, x(0) ∈ B_r ⟹ ∀t ≥ 0, x(t) ∈ B_R

Figure 1. Concepts of stability. Curve 1: asymptotically stable; curve 2: marginally stable; curve 3: unstable.


POSITIVE DEFINITE FUNCTIONS: The core of Lyapunov stability theory is
the analysis and construction of a class of functions, to be defined next, and of their
derivatives along the trajectories of the system under study. We start with positive
definite functions. In the following definition, D represents an open and connected
subset of R^n.
Definition: A function V: D → R is said to be positive semi definite in D if it satisfies
the following conditions:

(i) 0 ∈ D and V(0) = 0.
(ii) V(x) ≥ 0, ∀x in D - {0}.

V: D → R is said to be positive definite in D if condition (ii) is replaced by (ii'):

(ii') V(x) > 0 in D - {0}.

Finally, V: D → R is said to be negative definite (semi definite) in D if -V is
positive definite (semi definite).

We will often abuse the notation slightly and write V > 0, V ≥ 0, and V < 0 in D
to indicate that V is positive definite, positive semi definite, and negative definite in
D, respectively.
Example: The simplest and perhaps most important class of positive definite
functions is defined as follows:

V(x): R^n → R, V(x) = x^T Q x, with Q ∈ R^{n×n}, Q = Q^T

In this case, V(.) defines a quadratic form. Since, by assumption, Q is symmetric
(i.e., Q = Q^T), its eigenvalues λi, i = 1, ..., n, are all real. Thus we have that

V(.) positive definite      ⟺  x^T Q x > 0, ∀x ≠ 0  ⟺  λi > 0, i = 1, ..., n
V(.) positive semi definite ⟺  x^T Q x ≥ 0, ∀x ≠ 0  ⟺  λi ≥ 0, i = 1, ..., n
V(.) negative definite      ⟺  x^T Q x < 0, ∀x ≠ 0  ⟺  λi < 0, i = 1, ..., n
V(.) negative semi definite ⟺  x^T Q x ≤ 0, ∀x ≠ 0  ⟺  λi ≤ 0, i = 1, ..., n
Thus, for example:

V1(x): R² → R, V1(x) = a x1² + b x2² = x^T Q1 x, with Q1 = [a 0; 0 b]. For a, b > 0,
V1(x) > 0 for all x ≠ 0, so V1 is positive definite.

V2(x): R² → R, V2(x) = a x1² = x^T Q2 x, with Q2 = [a 0; 0 0]. For a > 0, V2(x) ≥ 0,
so V2 is positive semi definite.

V2(.) is not positive definite since for any x2 ≠ 0, the vector x* = [0, x2]^T ≠ 0 and yet
V2(x*) = 0.
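For a symmetric Q, these sign conditions can be checked directly from the eigenvalues, as in the criteria above. A minimal numerical sketch in Python (the values a = 2 and b = 3 are arbitrary illustrative choices):

```python
# Classify the two quadratic forms above by the eigenvalues of their Q matrices.
import numpy as np

a, b = 2.0, 3.0
Q1 = np.array([[a, 0.0], [0.0, b]])    # V1(x) = a*x1**2 + b*x2**2
Q2 = np.array([[a, 0.0], [0.0, 0.0]])  # V2(x) = a*x1**2

for name, Q in [("Q1", Q1), ("Q2", Q2)]:
    eig = np.linalg.eigvalsh(Q)        # eigenvalues of a symmetric matrix are real
    if np.all(eig > 0):
        verdict = "positive definite"
    elif np.all(eig >= 0):
        verdict = "positive semi definite (not definite)"
    else:
        verdict = "indefinite or negative"
    print(name, eig, verdict)          # Q1 -> positive definite, Q2 -> semi definite
```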

Positive definite functions (PDFs) constitute the basic building block of
Lyapunov theory. PDFs can be seen as an abstraction of the total energy
stored in a system, as we will see. All of Lyapunov's stability theorems focus
on the study of the time derivative of a positive definite function along the
trajectories of the system. In other words, given an autonomous system of the
form dx/dt = f(x), we will first construct a positive definite function V(x) and study
dV(x)/dt, given by

dV(x)/dt = (∂V/∂x)(dx/dt) = (∂V/∂x) f(x) = (∂V/∂x1) f1(x) + (∂V/∂x2) f2(x) + ... + (∂V/∂xn) fn(x)
The following definition introduces a useful and very common way of representing
this derivative.

Definition: Let V: D → R and f: D → R^n. The Lie derivative of V along f, denoted by
LfV, is defined by

LfV(x) = (∂V/∂x) f(x)

Thus, according to this definition, we have that

dV(x)/dt = (∂V/∂x)(dx/dt) = (∂V/∂x) f(x) = LfV(x).
x
Example: Consider the system

dx1/dt = -a x1
dx2/dt = -b x2 + cos x1

and define

V(x) = x1² + x2²

Thus, we have

dV(x)/dt = LfV(x) = 2 x1 (-a x1) + 2 x2 (-b x2 + cos x1) = -2a x1² - 2b x2² + 2 x2 cos x1

It is clear from this example that dV(x)/dt depends on the system's equation f(x),
and thus it will be different for different systems.
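The same computation can be reproduced symbolically; the following sketch uses sympy with the system as reconstructed above (the signs of a and b are an assumption, not taken from the source):

```python
# Symbolic Lie derivative of V along f for the example above.
import sympy as sp

x1, x2, a, b = sp.symbols('x1 x2 a b', real=True)

f = sp.Matrix([-a*x1, -b*x2 + sp.cos(x1)])            # system dynamics (signs assumed)
V = x1**2 + x2**2                                      # candidate function

gradV = sp.Matrix([[sp.diff(V, x1), sp.diff(V, x2)]])  # row vector dV/dx = [2*x1, 2*x2]
LfV = sp.expand((gradV * f)[0])                        # Lie derivative L_f V = dV/dt

print(LfV)   # -2*a*x1**2 - 2*b*x2**2 + 2*x2*cos(x1)
```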
Stability Theorems:
Theorem 1. (Lyapunov Stability Theorem) Let x = 0 be an equilibrium point of
dx/dt = f(x), f: D → R^n, and let V: D → R be a continuously differentiable function such
that

(i) V(0) = 0,
(ii) V(x) > 0 in D - {0},
(iii) dV(x)/dt ≤ 0 in D - {0}.

Then x = 0 is stable.

In other words, the theorem implies that a sufficient condition for the stability of
the equilibrium point x = 0 is that there exists a continuously differentiable, positive
definite function V(x) such that dV(x)/dt is negative semi definite in a
neighborhood of x = 0.
Theorem 2. (Asymptotic Stability Theorem) Under the conditions of Theorem 1,
if V(.) is such that

(i) V(0) = 0,
(ii) V(x) > 0 in D - {0},
(iii) dV(x)/dt < 0 in D - {0},

then x = 0 is asymptotically stable.

In other words, the theorem says that asymptotic stability is achieved if the
conditions of Theorem 1 are strengthened by requiring dV(x)/dt to be
negative definite, rather than semi definite.
Marquez, HJ, Nonlinear Control Systems
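Before attempting a formal proof, the hypotheses of Theorems 1 and 2 can be screened numerically by sampling points near the origin and testing the signs of V and dV/dt. This is only a heuristic sanity check, not a proof; the helper name and the test system dx/dt = -x below are illustrative choices:

```python
# Heuristic check of V > 0 and dV/dt <= 0 (or < 0) on random samples in a ball.
import numpy as np

def check_lyapunov(V, Vdot, n=2, radius=1.0, samples=10000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-radius, radius, size=(samples, n))
    x = x[np.linalg.norm(x, axis=1) > 1e-8]        # exclude the origin itself
    v = np.array([V(p) for p in x])
    vdot = np.array([Vdot(p) for p in x])
    # (V positive, dV/dt <= 0 as in Theorem 1, dV/dt < 0 as in Theorem 2)
    return bool(np.all(v > 0)), bool(np.all(vdot <= 0)), bool(np.all(vdot < 0))

V = lambda x: x[0]**2 + x[1]**2
Vdot = lambda x: -2.0 * (x[0]**2 + x[1]**2)        # dV/dt for dx/dt = -x
print(check_lyapunov(V, Vdot))                     # (True, True, True)
```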

Examples:
Pendulum without friction
The equation of motion of the system is

m l d²θ/dt² + m g sin θ = 0
d²θ/dt² + (g/l) sin θ = 0

Choosing the state variables x1 = θ and x2 = dθ/dt, we have

dx1/dt = x2
dx2/dt = -(g/l) sin x1

To study the stability of the equilibrium at the origin, we need to propose a
Lyapunov function candidate (LFC) V(x) and show that it satisfies the properties of
one of the stability theorems seen so far. In general, choosing this function is
rather difficult; however, in this case we proceed inspired by our understanding
of the physical system. Namely, we compute the total energy of the pendulum
(which is a positive function), and use this quantity as our Lyapunov function
candidate. We have

E = K + P = (1/2) m l² (dθ/dt)² + m g h

where dθ/dt = x2 and

h = l (1 - cos θ) = l (1 - cos x1)

Thus

E = (1/2) m l² x2² + m g l (1 - cos x1)


We now define V(x) = E and investigate whether V(.) and its derivative dV(.)/dt
satisfy the conditions of Theorem 1 and/or Theorem 2. Clearly V(0) = 0; thus property
(i) is satisfied in both theorems. With respect to (ii), we see that because
of the periodicity of cos(x1), we have that V(x) = 0 whenever x = (x1, x2)^T = (2kπ, 0)^T,
k = 1, 2, .... Thus V(.) is not positive definite. This situation, however, can be easily
remedied by restricting the domain of x1 to the interval (-2π, 2π); i.e., we take
V: D → R, with D = (-2π, 2π) × R.
It remains to evaluate the derivative of V(.) along the trajectories of the system. We
have

dV(x)/dt = ∇V · f(x) = (∂V/∂x1) f1(x) + (∂V/∂x2) f2(x)
         = (m g l sin x1) x2 + (m l² x2)(-(g/l) sin x1)
         = m g l x2 sin x1 - m g l x2 sin x1 = 0

[Figure: plot of 1 - cos x1 versus x1.]

Thus dV(x)/dt=0 and the origin is stable by theorem 1.

This result is consistent with our physical observations. Indeed, a simple
pendulum without friction is a conservative system: the sum
of the kinetic and potential energy remains constant. The pendulum will
continue to oscillate without changing the amplitude of the oscillations and
thus constitutes a stable system.
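This conservation can also be observed numerically: integrating the frictionless pendulum and evaluating V along the solution gives an essentially constant value. A minimal sketch (the parameter values and initial condition are illustrative choices):

```python
# Energy V(x) stays (numerically) constant along trajectories of the frictionless pendulum.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 1.0, 9.81

def pendulum(t, x):
    return [x[1], -(g / l) * np.sin(x[0])]

def V(x1, x2):
    return 0.5 * m * l**2 * x2**2 + m * g * l * (1.0 - np.cos(x1))

sol = solve_ivp(pendulum, (0.0, 10.0), [0.5, 0.0],
                max_step=0.01, rtol=1e-10, atol=1e-12)
energy = V(sol.y[0], sol.y[1])
print(energy.max() - energy.min())   # close to zero (only integration error)
```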
Example:
Pendulum with friction.
Adding viscous friction to the previous example, the equation of motion becomes

m l d²θ/dt² + c l dθ/dt + m g sin θ = 0
d²θ/dt² + (c/m) dθ/dt + (g/l) sin θ = 0

With the same state variables x1 = θ and x2 = dθ/dt,

dx1/dt = x2
dx2/dt = -(g/l) sin x1 - (c/m) x2

Again x = 0 is an equilibrium point. The energy is the same as in the previous example:

E = V(x) = (1/2) m l² x2² + m g l (1 - cos x1) > 0 in D - {0}

dV(x)/dt = ∇V · f(x) = (∂V/∂x1) f1(x) + (∂V/∂x2) f2(x)
         = (m g l sin x1) x2 + (m l² x2)(-(g/l) sin x1 - (c/m) x2)
         = -c l² x2²

Thus dV(x)/dt is negative semi definite. It is not negative definite since dV(x)/dt = 0
for x2 = 0, regardless of the value of x1 (thus dV(x)/dt = 0 along the x1 axis).

According to this analysis, we conclude that the origin is stable by Theorem 1,
but we cannot conclude asymptotic stability, as suggested by our intuitive analysis,
since we were not able to establish the conditions of Theorem 2; namely,
dV(x)/dt is not negative definite in a neighborhood of x = 0. The result is indeed
disappointing, since we know that a pendulum with friction has an asymptotically
stable equilibrium point at the origin.
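Numerically, the same energy function is nonincreasing along trajectories of the pendulum with friction, consistent with dV(x)/dt = -c l² x2² ≤ 0. A minimal sketch (parameter values are illustrative choices):

```python
# Energy V(x) is nonincreasing for the pendulum with viscous friction.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g, c = 1.0, 1.0, 9.81, 0.5

def pendulum_friction(t, x):
    return [x[1], -(g / l) * np.sin(x[0]) - (c / m) * x[1]]

def V(x1, x2):
    return 0.5 * m * l**2 * x2**2 + m * g * l * (1.0 - np.cos(x1))

sol = solve_ivp(pendulum_friction, (0.0, 20.0), [0.5, 0.0],
                max_step=0.01, rtol=1e-10, atol=1e-12)
energy = V(sol.y[0], sol.y[1])
print(bool(np.all(np.diff(energy) <= 1e-8)), energy[0], energy[-1])  # True, and the energy decays
```

The simulation shows the energy decaying toward zero even though dV/dt vanishes on the whole x1 axis, which is exactly the gap between what Theorem 1 delivers and what we observe physically.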
Example:
Consider the following system

dx1/dt = x1 (x1² + x2² - 2) + x2
dx2/dt = -x1 + x2 (x1² + x2² - 2)
To study the equilibrium point at the origin we define

V(x) = (1/2)(x1² + x2²)

dV(x)/dt = ∇V · f(x) = (∂V/∂x1) f1(x) + (∂V/∂x2) f2(x)
         = x1 [x1 (x1² + x2² - 2) + x2] + x2 [-x1 + x2 (x1² + x2² - 2)]
         = x1² (x1² + x2² - 2) + x1 x2 - x1 x2 + x2² (x1² + x2² - 2)
         = (x1² + x2²)(x1² + x2² - 2)
Thus, V(x) > 0 and dV(x)/dt < 0, provided that

0 < x1² + x2² < 2,

and it follows that the origin is an asymptotically stable equilibrium point.
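A quick numerical check of this sign condition on a grid over the disk (excluding the origin) can be sketched as follows; the grid resolution is an arbitrary choice:

```python
# Verify numerically that dV/dt = (x1**2 + x2**2) * (x1**2 + x2**2 - 2) < 0
# at grid points with 0 < x1**2 + x2**2 < 2.
import numpy as np

xx, yy = np.meshgrid(np.linspace(-1.4, 1.4, 201), np.linspace(-1.4, 1.4, 201))
r2 = xx**2 + yy**2
inside = (r2 < 2.0) & (r2 > 1e-6)        # points of D - {0}
vdot = r2 * (r2 - 2.0)
print(bool(np.all(vdot[inside] < 0)))    # True
```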

Asymptotic Stability in the Large:


When the equilibrium is asymptotically stable, it is often important to know under
what conditions an initial state will converge to the equilibrium point. In the best
possible case, any initial state will converge to the equilibrium point. An
equilibrium point that has this property is said to be globally asymptotically
stable, or asymptotically stable in the large.
Definition: A function V(x) is said to be radially unbounded if

V(x) → ∞ as ||x|| → ∞.

The origin x = 0 is globally asymptotically stable (stable in the large) if the
following conditions are satisfied:

(i) V(0) = 0,
(ii) V(x) > 0 ∀x ≠ 0,
(iii) V(x) is radially unbounded,
(iv) dV(x)/dt < 0 ∀x ≠ 0.

Example:
Consider the following system

dx1/dt = x2 - x1 (x1² + x2²)
dx2/dt = -x1 - x2 (x1² + x2²)

and define

V(x) = x1² + x2²

dV(x)/dt = ∇V · f(x) = (∂V/∂x1) f1(x) + (∂V/∂x2) f2(x)
         = 2 x1 [x2 - x1 (x1² + x2²)] + 2 x2 [-x1 - x2 (x1² + x2²)]
         = -2 (x1² + x2²)²

Thus, V(x) > 0 and dV(x)/dt < 0 for all x ≠ 0. Moreover, since V(.) is radially
unbounded, it follows that the origin is globally asymptotically stable.

[Figure: surface plot of V(x) = x1² + x2² over the (x1, x2) plane.]
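The global nature of the result can be illustrated numerically: trajectories started far from the origin still approach it, as expected from the radial unboundedness of V. A minimal sketch (initial conditions and final time are illustrative choices):

```python
# Trajectories of the example system approach the origin from arbitrary initial states.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    r2 = x[0]**2 + x[1]**2
    return [x[1] - x[0]*r2, -x[0] - x[1]*r2]

for x0 in ([5.0, -5.0], [0.1, 0.1], [-8.0, 2.0]):
    sol = solve_ivp(f, (0.0, 50.0), list(x0), max_step=0.05)
    V0 = x0[0]**2 + x0[1]**2
    Vf = sol.y[0, -1]**2 + sol.y[1, -1]**2
    print(x0, V0, Vf)   # V(x(t)) shrinks toward 0 in every case
```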

Construction of Lyapunov Functions:


The main shortcoming of the Lyapunov theory is the difficulty associated with the
construction of a suitable Lyapunov function. The variable gradient method is used
for this purpose. This method is applicable to autonomous systems and often, but
not always, leads to a desired Lyapunov function for a given system.
The Variable Gradient: The essence of this method is to assume that the
gradient of the (unknown) Lyapunov function V(.) is known up to some adjustable
parameters, and then to find V(.) itself by integrating the assumed gradient. In
other words, we start out by assuming that

∇V(x) = g(x)

where g(x) has a prescribed form. Example function:

g(x) = (g1, g2) = (h11 x1 + h12 x2, h21 x1 + h22 x2)

It then follows that

dV(x)/dt = ∇V(x) · f(x) = g(x) · f(x)

The power of this method relies on the following fact. Given that

∇V(x) = g(x)

we have that

g(x) · dx = ∇V(x) · dx = dV(x)

and therefore

V(xb) - V(xa) = ∫_{xa}^{xb} ∇V(x) · dx = ∫_{xa}^{xb} g(x) · dx

that is, the difference V(xb) - V(xa) depends on the initial and final states xa and xb
and not on the particular path followed when going from xa to xb. This property is
often used to obtain V by integrating ∇V(x) along the coordinate axes:

V(x) = ∫_0^x g(x) · dx
     = ∫_0^{x1} g1(s1, 0, ..., 0) ds1 + ∫_0^{x2} g2(x1, s2, 0, ..., 0) ds2 + ... + ∫_0^{xn} gn(x1, x2, ..., sn) dsn

The free parameters in the function g(x) are constrained to satisfy certain
symmetry conditions, satisfied by all gradients of a scalar function.
Theorem: A function g(x) is the gradient of a scalar function V(x) if and only if
the matrix

∂g/∂x = [ ∂g1/∂x1   ∂g1/∂x2   ...   ∂g1/∂xn
          ∂g2/∂x1   ∂g2/∂x2   ...   ∂g2/∂xn
            ...
          ∂gn/∂x1   ∂gn/∂x2   ...   ∂gn/∂xn ]

is symmetric.
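This symmetry test is easy to automate symbolically. The sketch below checks the Jacobian of the parametrized gradient used in the next example, and of a field that is not a gradient; the parameters h11, h22, k are treated as constants:

```python
# Check the symmetry condition (Jacobian of g symmetric) with sympy.
import sympy as sp

x1, x2, h11, h22, k = sp.symbols('x1 x2 h11 h22 k', real=True)
x = sp.Matrix([x1, x2])

g = sp.Matrix([h11*x1 + k*x2, k*x1 + h22*x2])
print(g.jacobian(x))                       # Matrix([[h11, k], [k, h22]])
print(g.jacobian(x).is_symmetric())        # True  -> g is the gradient of some V

g_bad = sp.Matrix([x2, -x1])
print(g_bad.jacobian(x).is_symmetric())    # False -> not a gradient
```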

Example:
Consider the following system:

dx1/dt = -a x1
dx2/dt = b x2 + x1 x2²
Clearly the origin is an equilibrium point. To study the stability of the equilibrium
point, we proceed to find a Lyapunov function as follows,
Step 1: Assume that ∇V(x) = g(x) has the form

g(x) = (h11 x1 + h12 x2, h21 x1 + h22 x2)

Step 2: Impose the symmetry conditions

∂gi/∂xj = ∂gj/∂xi   or, equivalently,   ∂²V/(∂xi ∂xj) = ∂²V/(∂xj ∂xi)

In our case we have

∂g1/∂x2 = h12 + x2 (∂h12/∂x2) + x1 (∂h11/∂x2)
∂g2/∂x1 = h21 + x1 (∂h21/∂x1) + x2 (∂h22/∂x1)

To simplify the solution, we attempt to solve the problem assuming that the hij's
are constant. If this is the case, then

∂h11/∂x2 = ∂h12/∂x2 = ∂h21/∂x1 = ∂h22/∂x1 = 0

and we have that

∂g1/∂x2 = ∂g2/∂x1  ⟹  h12 = h21 = k

g(x) = (h11 x1 + k x2, k x1 + h22 x2)

In particular, choosing k = 0, we have

g(x) = (g1, g2) = (h11 x1, h22 x2)
Step 3: Find dV/dt:

dV(x)/dt = ∇V · f(x) = g(x) · f(x)
         = (h11 x1)(-a x1) + (h22 x2)(b x2 + x1 x2²)
         = -a h11 x1² + h22 (b + x1 x2) x2²

Step 4: Find V from ∇V(x) = g(x) by integration.

Integrating along the axes, we have that

V(x) = ∫_0^{x1} g1(s1, 0) ds1 + ∫_0^{x2} g2(x1, s2) ds2
     = ∫_0^{x1} h11 s1 ds1 + ∫_0^{x2} h22 s2 ds2
     = (1/2) h11 x1² + (1/2) h22 x2²
Step 5: Verify that V > 0 and dV/dt < 0. We have that

V(x) = (1/2) h11 x1² + (1/2) h22 x2²
dV(x)/dt = -a h11 x1² + h22 (b + x1 x2) x2²

V(x) > 0 if and only if h11 > 0 and h22 > 0. Assume then that h11 = h22 = 1. Then

V(x) = (1/2)(x1² + x2²)
dV(x)/dt = -a x1² + (b + x1 x2) x2²

Assume now that a > 0 and b < 0. In this case

dV(x)/dt = -a x1² + (b + x1 x2) x2² < 0 for all x ≠ 0 in a sufficiently small
neighborhood of the origin (where |x1 x2| < |b|).
And we conclude that, under these conditions, the origin is (locally)
asymptotically stable.
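The whole construction can be double-checked symbolically; the sketch below uses the system with the signs as reconstructed above (a > 0, b < 0), which is an editorial assumption:

```python
# Verify the variable gradient construction: grad V == g and dV/dt = -a*x1**2 + (b + x1*x2)*x2**2.
import sympy as sp

x1, x2, a, b = sp.symbols('x1 x2 a b', real=True)
x = sp.Matrix([x1, x2])

f = sp.Matrix([-a*x1, b*x2 + x1*x2**2])      # system dynamics (signs assumed)
g = sp.Matrix([x1, x2])                      # chosen gradient: h11 = h22 = 1, k = 0
V = sp.Rational(1, 2) * (x1**2 + x2**2)      # V obtained by integrating g along the axes

print(sp.Matrix([V]).jacobian(x).T == g)     # True: g is indeed the gradient of V
Vdot = sp.expand((g.T * f)[0])
print(Vdot)                                  # -a*x1**2 + b*x2**2 + x1*x2**3
print(sp.factor(Vdot + a*x1**2))             # x2**2*(b + x1*x2)
```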

