This lecture presents some basic definitions and simple examples of nonlinear dynamical systems modeling.
1.1 Behavioral Models.
The most general (though rarely the most convenient) way to define a system is by using a behavioral input/output model.
1.1.1 What is a signal?
A signal is a function of time z : R+ → Rk (with the dimension k fixed).
1.1.2 What is a system?
Systems are objects producing signals (called output signals), usually depending on other signals (inputs) and some other parameters (initial conditions). In most applications, mathematical models of systems are defined (usually implicitly) by behavior sets. For an autonomous system (i.e. for a system with no inputs), a behavior set is just a set B = {z} consisting of some signals z : R+ → Rk (k must be the same for all signals from B). For a system with input v and output w, the behavior set consists of all possible input/output pairs z = (v(·), w(·)). There is no real difference between the two definitions, since the pair of signals z = (v(·), w(·)) can be interpreted as a single vector signal z(t) = [v(t); w(t)] containing both input and output stacked one over the other.
Note that in this definition a fixed input v(·) may occur in many or in no pairs (v, w) ∈ B, which means that the behavior set does not necessarily define the system output as a function of an arbitrary system input. Typically, in addition to knowing the input, one has to have some other information (initial conditions and/or uncertain parameters) to determine the output in a unique way.
Example 1.2 The familiar ideal integrator system (the one with the transfer function G(s) = 1/s) can be defined by its behavioral set of all input/output scalar signal pairs (v, w) satisfying

w(t2) − w(t1) = ∫_{t1}^{t2} v(τ)dτ  ∀ t1, t2 ∈ [0, ∞).

In this example, to determine the output uniquely it is sufficient to know v and w(0).
In Example 1.2 the system is characterised by an integral equation. There is a variety of other ways to define the same system (by specifying a transfer function, by writing a differential equation, etc.)
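As an aside, membership of a candidate pair (v, w) in the integrator behavior set can be checked numerically. The following Python sketch (an illustration added here; the sample times, the quadrature step, and the tolerance are ad hoc choices) tests the integral identity w(t2) − w(t1) = ∫ v(τ)dτ on a grid:

```python
import math

def in_integrator_behavior(v, w, times, tol=1e-6):
    """Approximately check that the pair (v, w) satisfies
    w(t2) - w(t1) = integral of v over [t1, t2] at the given sample times."""
    for i in range(len(times) - 1):
        t1, t2 = times[i], times[i + 1]
        # midpoint rule on a fine subgrid approximates the integral of v
        n = 1000
        h = (t2 - t1) / n
        integral = sum(v(t1 + (j + 0.5) * h) for j in range(n)) * h
        if abs((w(t2) - w(t1)) - integral) > tol:
            return False
    return True

# (v, w) = (cos, sin + 5) is in the behavior set: d/dt sin(t) = cos(t),
# and the constant 5 plays the role of the initial condition w(0)
times = [0.0, 0.5, 1.0, 2.0]
print(in_integrator_behavior(math.cos, lambda t: math.sin(t) + 5.0, times))  # True
print(in_integrator_behavior(math.cos, lambda t: 2 * math.sin(t), times))    # False
```

Note that a fixed input v occurs here in infinitely many pairs (v, w), one for each value of w(0), as discussed above.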
1.1.3
A system is called linear if its behavior set satisfies linear superposition laws, i.e. when for every z1, z2 ∈ B and c ∈ R we have z1 + z2 ∈ B and cz1 ∈ B.
Excluding some absurd examples², linear systems are those defined by equations which are linear with respect to v and w. In particular, the ideal integrator system from Example 1.2 is linear.
A nonlinear system is simply a system which is not linear.
1.2 System State.
It is important to realize that system state can be defined for an arbitrary behavioral model B = {z(·)}.
1.2.1
System state at a given time instance t0 is supposed to contain all information relating past (t < t0) and future (t > t0) behavior. This leads us to the following definitions.

Definition Let B be a behavior set. Signals z1, z2 ∈ B are said to commute at time t0 if the signals

z12(t) = { z1(t) for t ≤ t0; z2(t) for t > t0 }

and

z21(t) = { z2(t) for t ≤ t0; z1(t) for t > t0 }

are also elements of B.
² Such as the (linear) system defined by the nonlinear equation (v(t) − w(t))² = 0 for all t.
w(t1) = w(t2) = 1 and w(t) = 0 for all t ∈ (t1, t2) ∩ Z, there are exactly two integers t in the interval (t1, t2) such that v(t) = 1.

In other words, the system counts the 1s in the input and, every time the count reaches three, resets its counter to zero and outputs 1 (otherwise producing 0s).
It is easy to see that two input/output pairs z1 = (v1, w1) and z2 = (v2, w2) commute at a (discrete) time t0 if and only if N(t0, z1) = N(t0, z2), where N(t0, z) for z = (v, w) ∈ B is the number of 1s in v(t) for t ∈ (t0, t1) ∩ Z, where t1 means the next (after t0) integer time t when w(t) = 1. Hence the state of the system can be defined by a function

x : R+ × B → {0, 1, 2},  x(t, z) = N(t, z).

In this example, knowing a system state allows one to write down state space equations for the system:

x(t + 1) = f(x(t), v(t)),  w(t) = g(x(t), v(t)),  (1.1)

where

f(x, v) = (x + v) mod 3,

and g(x, v) = 1 if and only if x = 2 and v = 1.
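The state space equations (1.1) are straightforward to simulate. A minimal Python sketch (added for illustration; the input sequence is an arbitrary choice):

```python
def f(x, v):
    # state update: count the 1s modulo 3
    return (x + v) % 3

def g(x, v):
    # output 1 exactly when the count is about to reach three
    return 1 if (x == 2 and v == 1) else 0

def run(v_seq, x0=0):
    """Simulate (1.1) from initial state x0 for a given input sequence."""
    x, w_seq = x0, []
    for v in v_seq:
        w_seq.append(g(x, v))
        x = f(x, v)
    return w_seq

v = [1, 0, 1, 1, 1, 1, 1, 0, 1]
print(run(v))  # [0, 0, 0, 1, 0, 0, 1, 0, 0]: a 1 is emitted at every third 1
```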
2.1
x(t2) − x(t1) = ∫_{t1}^{t2} a(x(t), t)dt  ∀ t1, t2 ∈ T.  (2.2)
The variable t is usually referred to as the time.
Note the use of an integral form in the formal definition (2.2): it assumes that the function t ↦ a(x(t), t) is integrable on T, but does not require x = x(t) to be differentiable at any particular point, which turns out to be convenient for working with discontinuous input signals, such as steps, rectangular impulses, etc.
Example 2.1 Let sgn denote the sign function sgn : R → {−1, 0, 1} defined by

sgn(y) = { 1, y > 0; 0, y = 0; −1, y < 0 }.
The notation

ẋ(t) = −sgn(x(t)),  (2.3)

which can be thought of as representing the action of an on/off negative feedback (or describing the behavior of velocity subject to dry friction), refers to a differential equation defined as above with n = 1, Z = R × R (since sgn(x) is defined for all real x, and no restrictions on x or the time variable are explicitly imposed in (2.3)), and a(x, t) = −sgn(x).
It can be verified² that all solutions of (2.3) have the form

x(t) = max{c − t, 0} or x(t) = min{t − c, 0},

where c is an arbitrary real constant. These solutions are not differentiable at the critical stopping moment t = c.
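A quick numerical check (added illustration; the constant c = 1, the grid size, and the tolerance are arbitrary) confirms that x(t) = max{c − t, 0} satisfies the integral form of (2.3) even across the nonsmooth point t = c:

```python
def sgn(y):
    return (y > 0) - (y < 0)

def x_sol(t, c):
    # candidate solution of dx/dt = -sgn(x) with x(0) = c > 0
    return max(c - t, 0.0)

def integral_check(c, t1, t2, n=20000):
    """Return the defect |x(t2) - x(t1) - integral of -sgn(x(t)) dt|,
    with the integral approximated by the midpoint rule."""
    h = (t2 - t1) / n
    integral = sum(-sgn(x_sol(t1 + (k + 0.5) * h, c)) for k in range(n)) * h
    return abs((x_sol(t2, c) - x_sol(t1, c)) - integral)

# the interval [0, 3] straddles the stopping moment t = c = 1
print(integral_check(1.0, 0.0, 3.0) < 1e-3)  # True
```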
2.1.2
Ordinary differential equations can be used in many ways for modeling dynamical systems. The notion of a standard ODE system model describes the most straightforward way of doing this.
Definition A standard ODE model B = ODE(f, g) of a system with input v = v(t) ∈ V ⊂ Rm and output w(t) ∈ W ⊂ Rk is defined by a subset X ⊂ Rn, two functions f : X × V × R+ → Rn and g : X × V × R+ → W, and a subset X0 ⊂ X, so that the behavior set B of the system consists of all pairs (v, w) of signals such that v(t) ∈ V for all t, and there exists a solution x : R+ → X of the differential equation

ẋ(t) = f(x(t), v(t), t)  (2.4)

with x(0) ∈ X0, for which

w(t) = g(x(t), v(t), t) for all t.  (2.5)
A special case of this definition, when the input v is not present, defines an autonomous system.
² Do it as an exercise!
2.1.3
As mentioned before, not all ODE models are adequate for design and analysis purposes. The notion of well-posedness introduces some typical constraints aimed at ensuring their applicability.
Definition A standard ODE model ODE(f, g) is called well posed if for every signal v(t) ∈ V and for every solution x1 : [0, t1] → X of (2.4) with x1(0) ∈ X0 there exists a solution x : R+ → X of (2.4) such that x(t) = x1(t) for all t ∈ [0, t1].
The ODE from Example 2.1 can be used to define a standard autonomous ODE system model

ẋ(t) = −sgn(x(t)),  w(t) = x(t),

where V = X = X0 = R, f(x, v, t) = −sgn(x) and g(x, v, t) = x. It can be verified that this autonomous system is well-posed. However, introducing an input into the model destroys well-posedness, as shown in the following example.
Example 2.2 Consider the standard ODE model

ẋ(t) = −sgn(x(t)) + v(t),  w(t) = x(t),  (2.6)
2.2

ẋ(t) = a(x(t), t),  (2.7)

(2.8)
2.2.2 Maximal solutions

If x1 : [t0, t1] → Rn and x2 : [t1, t2] → Rn are both solutions of (2.7), and x1(t1) = x2(t1), then the function x : [t0, t2] → Rn, defined by

x(t) = { x1(t), t ∈ [t0, t1]; x2(t), t ∈ [t1, t2] }

(i.e. the result of concatenating x1 and x2) is also a solution of (2.7). This means that some solutions of (2.7) can be extended to a larger time interval.
A solution x : T → Rn of (2.7) is called maximal if there exists no other solution x̂ : T̂ → Rn for which T is a proper subset of T̂ and x̂(t) = x(t) for all t ∈ T.
The ODEs describing system dynamics are frequently discontinuous with respect to the time variable. Indeed, the standard ODE system model includes

ẋ(t) = f(x(t), v(t), t),

where v = v(t) is an input, and the ODE becomes discontinuous with respect to t whenever v is a rectangular impulse, etc. As long as the time instances at which a(x, t) is discontinuous form a fixed finite set t1 < t2 < ... < tn, Theorem 2.1 can be applied separately to the time intervals [t_{k−1}, tk]. However, when the location of discontinuities depends on x, or when they cannot be counted in an increasing order, a stronger result is needed. It turns out that the dependence on time need only be integrable, as long as the dependence on x is continuous.
Theorem 2.3 Assume that for some r > 0:

(a) the set

Dr(x0, t0) = {(x, t) ∈ Rn × R : |x − x0| ≤ r, t ∈ [t0, t0 + r]}

is a subset of Z;

(b) the function t ↦ a(x(t), t) is integrable on [t0, t0 + r] for every continuous function x : [t0, t0 + r] → Rn satisfying |x(t) − x0| ≤ r for all t ∈ [t0, t0 + r];

(c) for every ε > 0 there exists δ > 0 such that

∫_{t0}^{t0+r} |a(x1(t), t) − a(x2(t), t)|dt < ε

holds for every pair of continuous functions x1, x2 : [t0, t0 + r] → Rn, with values satisfying |xi(t) − x0| ≤ r, such that max_t |x1(t) − x2(t)| < δ.
On the contrary, the differential equation

ẋ(t) = { x(t)/t, t > 0; 0, t = 0 },  x(0) = x0

does not have a solution on [0, ∞) for any x0 ≠ 0. Indeed, if x : [0, t1] → R is a solution for some t1 > 0 then

d/dt (x(t)/t) = 0

for all t ≠ 0. Hence x(t) = ct for some constant c, and x(0) = 0.
2.2.4 Differential inclusions

Let X be a subset of Rn, and let φ : X → 2^{Rn} be a function which maps every point of X to a subset of Rn. Such a function defines a differential inclusion

ẋ(t) ∈ φ(x(t)).  (2.9)
for some integrable function u : T → Rn satisfying the inclusion u(t) ∈ φ(x(t)) for all t ∈ T. It turns out that differential inclusions are a convenient, though not always adequate, way of re-defining a discontinuous ODE to guarantee existence of solutions.

It turns out that the differential inclusion (2.9), subject to a fixed initial condition x(t0) = x0, has a solution on a sufficiently small interval T = [t0, t1] whenever the set-valued function φ is compact convex set-valued and semicontinuous with respect to its argument (plus, as usual, x0 must be an interior point of X).
Theorem 2.4 Assume that for some r > 0:

(a) the set

Br(x0) = {x ∈ Rn : |x − x0| ≤ r}

is a subset of X;
where c is a fixed constant, can be re-defined as a continuous differential inclusion (2.9) by introducing

φ(y) = { {c − 1}, y > 0; [c − 1, c + 1], y = 0; {c + 1}, y < 0 }.
The newly obtained differential inclusion has the existence of solutions property, and appears to be compatible with the dry friction interpretation of the sign nonlinearity. In particular, with the initial condition x(0) = 0, the equation has solutions for every value of c ∈ R. If c ∈ [−1, 1], the unique maximal solution is x(t) ≡ 0, which corresponds to the friction force adapting itself to equalize the external force, as long as it is not too large.

The differential inclusion model is not as compatible with the on/off controller interpretation of the sign nonlinearity. In this case, due to the unmodeled feedback loop delays, one expects some chattering solutions oscillating rapidly around the point x0 = 0. It is possible to say that, in this particular case, the solutions of (2.9) describe the limit behavior of the closed loop solutions as the loop delay approaches zero.
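The chattering caused by a small loop delay can be seen in a crude simulation of ẋ = c − sgn(x). In the Python sketch below (added for illustration; the step size, horizon, and the values of c and x(0) are arbitrary choices), the forward Euler step plays the role of the delay, and the trajectory ends up oscillating in a band of width comparable to the step around the sticking solution x(t) ≡ 0:

```python
def sgn(y):
    return (y > 0) - (y < 0)

def euler(c, x0, h=1e-3, T=2.0):
    """Forward Euler for dx/dt = c - sgn(x); the discretization mimics a
    small feedback delay, so trajectories chatter around x = 0."""
    x, t = x0, 0.0
    while t < T:
        x += h * (c - sgn(x))
        t += h
    return x

# with |c| <= 1 the inclusion's solution sticks at 0; the Euler trajectory
# stays within a couple of steps of it after the initial transient
print(abs(euler(c=0.5, x0=0.3)) <= 2e-3)  # True
```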
3.1
Uniqueness Of Solutions
In this section our main objective is to establish sufficient conditions under which solutions of an ODE with given initial conditions are unique.
3.1.1 A counterexample
(3.1)
3.1.2
The key issue for uniqueness of solutions turns out to be the maximal slope of a = a(x̄): to guarantee uniqueness on a time interval T = [t0, tf], it is sufficient to require the existence of a constant M such that

|a(x̄1) − a(x̄2)| ≤ M|x̄1 − x̄2|

for all x̄1, x̄2 from a neighborhood of a solution x : [t0, tf] → Rn of (3.1). The proof of both existence and uniqueness is so simple in this case that we will formulate the statement for a much more general class of integral equations.
Theorem 3.1 Let X be a subset of Rn containing a ball

Br(x̄0) = {x̄ ∈ Rn : |x̄ − x̄0| ≤ r}

of radius r > 0, and let t1 > t0 be real numbers. Assume that the function a : X × [t0, t1] × [t0, t1] → Rn is such that there exist constants M, K satisfying

|a(x̄1, τ, t) − a(x̄2, τ, t)| ≤ K|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Br(x̄0), τ, t ∈ [t0, t1],  (3.2)

and

|a(x̄, τ, t)| ≤ M  ∀ x̄ ∈ Br(x̄0), τ, t ∈ [t0, t1].  (3.3)

Then, for a sufficiently small tf > t0, there exists a unique function x : [t0, tf] → X satisfying

x(t) = x̄0 + ∫_{t0}^{t} a(x(τ), τ, t)dτ  ∀ t ∈ [t0, tf].  (3.4)
A proof of the theorem is given in the next section. When a does not depend on the third argument, we have the standard ODE case

ẋ(t) = a(x(t), t).
In general, Theorem 3.1 covers a variety of nonlinear systems with an infinite dimensional state space, such as feedback interconnections of convolution operators and memoryless nonlinear transformations. For example, to prove well-posedness of a feedback system in which the forward loop is an LTI system with input v, output w, and transfer function

G(s) = (e^{−s} − 1)/s,

and the feedback loop is defined by v(t) = sin(w(t)), one can apply Theorem 3.1 with

a(x̄, τ, t) = { sin(x̄) + h(t), t − 1 ≤ τ ≤ t; h(t), otherwise },

where h = h(t) is a given continuous function depending on the initial conditions.
3.1.3
First we prove existence. Choose tf > t0 such that tf − t0 ≤ r/M and tf − t0 ≤ 1/(2K). Define functions xk : [t0, tf] → X by

x_0(t) ≡ x̄0,  x_{k+1}(t) = x̄0 + ∫_{t0}^{t} a(xk(τ), τ, t)dτ.

Then

|x_{k+1}(t) − xk(t)| ≤ ∫_{t0}^{t} |a(xk(τ), τ, t) − a(x_{k−1}(τ), τ, t)|dτ,

and hence

max_{t∈[t0,tf]} |x_{k+1}(t) − xk(t)| ≤ (1/2) max_{t∈[t0,tf]} |xk(t) − x_{k−1}(t)|.

Hence xk(t) converges exponentially to a limit x(t) which, due to continuity of a with respect to the first argument, is the desired solution of (3.4).
Now let us prove uniqueness. Note that, due to tf − t0 ≤ r/M, all solutions of (3.4) must satisfy x(t) ∈ Br(x̄0) for t ∈ [t0, tf]. If xa and xb are two such solutions then

|xa(t) − xb(t)| ≤ ∫_{t0}^{t} |a(xa(τ), τ, t) − a(xb(τ), τ, t)|dτ ≤ ∫_{t0}^{t} K|xa(τ) − xb(τ)|dτ,

and hence

max_{t∈[t0,tf]} |xa(t) − xb(t)| ≤ (1/2) max_{t∈[t0,tf]} |xa(t) − xb(t)|,

which implies xa = xb.
The proof is complete now. Note that the same proof applies when (3.2), (3.3) are replaced by the weaker conditions

|a(x̄1, τ, t) − a(x̄2, τ, t)| ≤ K(τ)|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Br(x̄0), τ, t ∈ [t0, t1],

and

|a(x̄, τ, t)| ≤ m(τ)  ∀ x̄ ∈ Br(x̄0), τ, t ∈ [t0, t1],

where the functions K(·) and m(·) are integrable over [t0, t1].
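The contraction argument in the proof is constructive: the Picard iterates xk can be computed explicitly. The following Python sketch (added illustration; the test equation ẋ = −x, the grid, and the iteration count are arbitrary choices, and the integral is approximated by the trapezoid rule) shows the iterates converging to the true solution e^{−t}:

```python
import math

def picard(a, x0, t_grid, iterations=30):
    """Picard iteration x_{k+1}(t) = x0 + integral of a(x_k(s), s) ds,
    evaluated on a grid with the trapezoid rule."""
    xs = [x0 for _ in t_grid]
    for _ in range(iterations):
        new = [x0]
        acc = 0.0
        for i in range(1, len(t_grid)):
            h = t_grid[i] - t_grid[i - 1]
            acc += 0.5 * h * (a(xs[i - 1], t_grid[i - 1]) + a(xs[i], t_grid[i]))
            new.append(x0 + acc)
        xs = new
    return xs

# interval length 0.4 <= 1/(2K) with K = 1, as required in the proof
t_grid = [i * 0.4 / 400 for i in range(401)]
xs = picard(lambda x, t: -x, 1.0, t_grid)
print(abs(xs[-1] - math.exp(-0.4)) < 1e-4)  # True
```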
3.2
In this section our main objective is to establish sufficient conditions under which solutions of an ODE depend continuously on initial conditions and other parameters.
Consider the parameterized integral equation

x(t, q) = x0(q) + ∫_{t0}^{t} a(x(τ, q), τ, t, q)dτ,  t ∈ [t0, t1],  (3.5)

where q ∈ R is a parameter. For every fixed value of q, integral equation (3.5) has the form of (3.4).
Theorem 3.2 Let x0 : [t0, tf] → Rn be a solution of (3.5) with q = q0. For some d > 0 let

Xd = {x̄ ∈ Rn : ∃ t ∈ [t0, tf] : |x̄ − x0(t)| < d}

be the d-neighborhood of the solution. Assume that

(a) there exists K ∈ R such that

|a(x̄1, τ, t, q) − a(x̄2, τ, t, q)| ≤ K|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Xd, τ, t ∈ [t0, tf], q ∈ (q0 − d, q0 + d);  (3.6)

(b) there exists M ∈ R such that

|a(x̄, τ, t, q)| ≤ M  ∀ x̄ ∈ Xd, τ, t ∈ [t0, tf], q ∈ (q0 − d, q0 + d);  (3.7)
3.3
This section contains some examples showing how the general continuous dependence
of solutions on parameters allows one to derive qualitative statements about nonlinear
systems.
3.3.1 Differential flow
(3.8)

(3.9)

on every bounded subset of Rn. According to Theorem 3.1, this implies existence and uniqueness of a maximal solution x : (t−, t+) → Rn of (3.8) subject to given initial conditions x(t0) = x0 (by this definition, t− < t0 < t+, and it is possible that t− = −∞ and/or t+ = +∞). To specify the dependence of this solution on the initial conditions, we will write x(t) = x(t, t0, x0). Due to the time-invariance of (3.8), this notation can be further simplified to x(t) = x(t − t0, x0), where x(t, x̄) means the value x(t) of the solution of (3.8) with initial condition x(0) = x̄. Remember that this definition makes sense only when uniqueness of solutions is guaranteed, and that x(t, x̄) may be undefined when |t| is large, in which case we will write x(t, x̄) = ∅.
According to Theorem 3.2, x : Ω → Rn is a continuous function defined on an open subset Ω ⊂ R × Rn. With x̄ considered a parameter, t ↦ x(t, x̄) defines a family of smooth curves in Rn. When t is fixed, x̄ ↦ x(t, x̄) defines a continuous map from an open subset of Rn into Rn, and x(t1, x(t2, x̄)) = x(t1 + t2, x̄) whenever x(t2, x̄) ≠ ∅. The function x : Ω → Rn is sometimes called the differential flow defined by (3.8).
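When uniqueness holds, the flow property x(t1 + t2, x̄) = x(t1, x(t2, x̄)) can be verified numerically. The Python sketch below (added for illustration; the vector field ẋ = −x³ and the Runge-Kutta parameters are arbitrary choices) compares one long integration against a composition of two shorter ones:

```python
def rk4_flow(a, x0, t, n=2000):
    """Classical Runge-Kutta approximation of the flow map x0 -> x(t, x0)
    for the autonomous scalar ODE dx/dt = a(x)."""
    h = t / n
    x = x0
    for _ in range(n):
        k1 = a(x)
        k2 = a(x + 0.5 * h * k1)
        k3 = a(x + 0.5 * h * k2)
        k4 = a(x + h * k3)
        x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

a = lambda x: -x ** 3          # smooth vector field, Lipschitz on bounded sets
x0 = 1.0
one_shot = rk4_flow(a, x0, 1.5)                     # x(1.5, x0)
composed = rk4_flow(a, rk4_flow(a, x0, 0.9), 0.6)   # x(0.6, x(0.9, x0))
print(abs(one_shot - composed) < 1e-8)  # True
```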
3.3.2
Definition An equilibrium x̄0 of (3.8) is called asymptotically stable if the following two conditions are satisfied:

(a) there exists d > 0 such that x(t, x̄) → x̄0 as t → ∞ for all x̄ satisfying |x̄0 − x̄| < d;

(b) for every ε > 0 there exists δ > 0 such that |x(t, x̄) − x̄0| < ε whenever t ≥ 0 and |x̄ − x̄0| < δ.
The proof of the theorem follows easily from the continuity of x(, ).
3.3.3
This lecture presents several techniques of qualitative systems analysis based on what is frequently called topological arguments, i.e. on arguments relying on the continuity of the functions involved.
4.1
This section covers results which do not rely specifically on the shape of the state space, and thus remain valid for very general classes of systems. We will start by proving generalizations of theorems from the previous lecture to the case of discrete-time autonomous systems.
4.1.1
x(t + 1) = f(x(t))  (4.1)

ẋ(t) = a(x(t))  (4.2)

Assume that the solutions of (4.2)
with x(0) = x̄ exist and are unique on the time interval t ∈ [0, 1] for all x̄ ∈ Rn. Then the discrete time system (4.1) with f(x̄) = x(1, x̄) describes the evolution of continuous time system (4.2) at discrete time samples. In particular, if a is continuous then so is f.
Let us call a point x̄0 in the closure of X locally attractive for system (4.1) if there exists d > 0 such that x(t) → x̄0 as t → ∞ for every x = x(t) satisfying (4.1) with |x(0) − x̄0| < d. Note that locally attractive points are not necessarily equilibria, and, even if they are, they are not necessarily asymptotically stable equilibria.

For x̄0 ∈ Rn the set A = A(x̄0) of all initial conditions x̄ ∈ X in (4.1) which define a solution x(t) converging to x̄0 as t → ∞ is called the attractor of x̄0.
Theorem 4.1 If f is continuous and x̄0 is locally attractive for (4.1) then the attractor A = A(x̄0) is a (relatively) open subset of X, and its boundary d(A) (in X) is f-invariant, i.e. f(x̄) ∈ d(A) whenever x̄ ∈ d(A).
Remember that a subset Y ⊂ X ⊂ Rn is called relatively open in X if for every y ∈ Y there exists r > 0 such that all x ∈ X satisfying |x − y| < r belong to Y. The boundary of a subset Y ⊂ X ⊂ Rn in X is the set of all x ∈ X such that for every r > 0 there exist y ∈ Y and z ∈ X \ Y such that |y − x| < r and |z − x| < r. For example, the half-open interval Y = (0, 1] is a relatively closed subset of X = (0, 2), and its boundary in X consists of a single point x = 1.
Example 4.1 Assume system (4.1), defined on X = Rn by a continuous function f : Rn → Rn, is such that all solutions with |x(0)| < 1 converge to zero as t → ∞, and all solutions with |x(0)| > 100 converge to infinity as t → ∞. Then, according to Theorem 4.1, the boundary of the attractor A = A(0) is a non-empty f-invariant set. By the assumptions, 1 ≤ |x̄| ≤ 100 for all x̄ ∈ d(A(0)). Hence we can conclude that there exist solutions of (4.1) which satisfy the constraints 1 ≤ |x(t)| ≤ 100 for all t.
Example 4.2 For system (4.1), defined on X = Rn by a continuous function f : Rn → Rn, it is possible for every trajectory to converge to one of two equilibria. However, it is not possible for both equilibria to be locally attractive. Otherwise, according to Theorem 4.1, Rn would be represented as a union of two disjoint open sets, which contradicts the connectedness of Rn.
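A one-dimensional illustration of Theorem 4.1 (added here; the map f(x) = x³ is an arbitrary choice): the fixed point 0 is locally attractive with attractor A(0) = (−1, 1), and the boundary {−1, 1} is f-invariant, carrying solutions that stay bounded away from both zero and infinity:

```python
def f(x):
    # continuous map with a locally attractive fixed point at 0
    return x ** 3

def converges_to_zero(x0, steps=200):
    """Iterate x(t+1) = f(x(t)) and test convergence to 0."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return abs(x) < 1e-9

print(converges_to_zero(0.999))           # True: 0.999 is inside A(0) = (-1, 1)
print(converges_to_zero(1.0))             # False: 1 is a boundary point of A(0)
print(f(1.0) == 1.0 and f(-1.0) == -1.0)  # True: the boundary is f-invariant
```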
4.1.2
According to the definition of local attractiveness, there exists d > 0 such that x(t) → x̄0 as t → ∞ for every x = x(t) satisfying (4.1) with |x(0) − x̄0| < d. Take an arbitrary x̄1 ∈ A(x̄0). Let x1 = x1(t) be the solution of (4.1) with x(0) = x̄1. Then x1(t) → x̄0 as t → ∞, and hence |x1(t1) − x̄0| < d/2 for a sufficiently large t1. Since f is continuous, x(t) is a continuous function of x(0) for every fixed t ∈ {0, 1, 2, ...}. Hence there exists δ > 0 such that |x(t1) − x1(t1)| < d/2 whenever |x(0) − x̄1| < δ. Since this implies |x(t1) − x̄0| < d,
we have x̄ ∈ A(x̄0) for every x̄ ∈ X such that |x̄ − x̄1| < δ, which proves that A = A(x̄0) is open.
To show that d(A) is f-invariant, note first that A is itself f-invariant. Now take an arbitrary x̄ ∈ d(A). By the definition of the boundary, there exists a sequence x̄k ∈ A converging to x̄. Hence, by the continuity of f, the sequence f(x̄k) converges to f(x̄). If f(x̄) ∉ A, this implies f(x̄) ∈ d(A). Let us show that the opposite is impossible. Indeed, if f(x̄) ∈ A then, since A is proven open, there exists ε > 0 such that z ∈ A for every z ∈ X such that |z − f(x̄)| < ε. Since f is continuous, there exists δ > 0 such that |f(y) − f(x̄)| < ε whenever y ∈ X is such that |y − x̄| < δ. Hence f(y) ∈ A whenever |y − x̄| < δ. Since, by the definition of the attractor, f(y) ∈ A implies y ∈ A, we get y ∈ A whenever |y − x̄| < δ, which contradicts the assumption that x̄ ∈ d(A).
4.1.3
For a given solution x = x(t) of (4.2), the set lim(x) ⊂ Rn of all possible limits x(tk) → x̄ as k → ∞, where {tk} converges to infinity, is called the limit set of x.
Theorem 4.2 Assume that a : Rn → Rn is a locally Lipschitz function. If x : [0, ∞) → Rn is a solution of (4.2) then the set lim(x) of its limit points is a closed subset of Rn, and every solution of (4.2) with initial conditions in lim(x) lies completely in lim(x).
Proof First, if t_{k,q} → ∞ and x(t_{k,q}, x(0)) → x̄q as k → ∞ for every q, and x̄q → x̄ as q → ∞, then one can select q = q(k) such that t_{k,q(k)} → ∞ and x(t_{k,q(k)}, x(0)) → x̄ as k → ∞. This proves the closedness (continuity of solutions was not used yet).

Second, by assumption

x̄0 = lim_{k→∞} x(tk, x(0)).
In general, limit sets of ODE solutions can be very complicated. However, in the case when n = 2, a relatively simple classification exists.

Theorem 4.3 Assume that a : R2 → R2 is a locally Lipschitz function. Let x0 : [0, ∞) → R2 be a solution of (4.2). Then one of the following is true:

(a) |x0(t)| → ∞ as t → ∞;

(b) there exist T > 0 and a non-constant solution xp : (−∞, +∞) → R2 such that xp(t + T) = xp(t) for all t, and the set of limit points of x0 is the trajectory (the range) of xp;
(c) the limit set is a union of trajectories of maximal solutions x : (t1, t2) → R2 of (4.2), each of which has a limit (possibly infinite) as t → t1 or t → t2.

The proof of Theorem 4.3 is based on more specific topological arguments, to be discussed in the next section.
4.2
The notion of index of a continuous function is a remarkably powerful tool for proving
existence of mathematical objects with certain properties, and, as such, is very useful in
qualitative system analysis.
4.2.1
For n = 1, 2, ... let

S^n = {x ∈ Rn+1 : |x| = 1}

denote the unit sphere in Rn+1. Note the use of n, not n + 1, in the S-notation: it indicates that locally the sphere in Rn+1 looks like Rn. There exists a way to define the index ind(F) of every continuous map F : S^n → S^n in such a way that the following conditions are satisfied:

(a) if H : S^n × [0, 1] → S^n is continuous then

ind(H(·, 0)) = ind(H(·, 1))

(such a map H is called a homotopy between H(·, 0) and H(·, 1));
(b) if the map F̄ : Rn+1 → Rn+1 defined by

F̄(z) = |z|F(z/|z|)

is continuously differentiable in a neighborhood of S^n then

ind(F) = ∫_{x∈S^n} det(Jx(F̄))dm(x),
4.2.2
One of the classical mathematical results that follow from the very existence of the index function is the famous Brouwer fixed point theorem, which states that for every continuous function G : B^n → B^n, where

B^n = {x ∈ Rn : |x| ≤ 1},

the equation G(x) = x has at least one solution.
The statement is obvious (though still very useful) when n = 1. Let us prove it for n > 1, starting by assuming the contrary. Then the map G̃ : B^n → S^{n−1} which maps x ∈ B^n to the point of S^{n−1} which is the (unique) intersection with S^{n−1} of the open ray starting from G(x) and passing through x, is well defined and continuous. Then H : S^{n−1} × [0, 1] → S^{n−1} defined by

H(x, t) = G̃(tx)

is a homotopy between the identity map H(·, 1) and the constant map H(·, 0). Due to the existence of the index function, such a homotopy does not exist, which proves the theorem.
4.2.3
(4.3)
with initial conditions x(0) ∈ B^n remain in B^n for all times. Then (4.3) has a T-periodic solution x = x(t) = x(t + T) for all t ∈ R.

Indeed, the map x̄ ↦ x(T, 0, x̄) is a continuous function G : B^n → B^n. A solution of x̄ = G(x̄) defines the initial conditions for the periodic trajectory.
5.1
There are a number of slightly different ways of defining what constitutes a Lyapunov function for a given system. Depending on the strength of the assumptions, a variety of conclusions about a system's behavior can be drawn.
5.1.1
In general, Lyapunov functions are real-valued functions of the system's state which are monotonically non-increasing on every signal from the system's behavior set. More generally, storage functions are real-valued functions of the system's state for which explicit upper bounds on increments are available.
Let B = {z} be a behavior set of a system (i.e. elements of B are vector signals, which represent all possible outputs for autonomous systems, and all possible input/output pairs for systems with an input). Remember that by a state of a system we mean a function x : B × [0, ∞) → X such that two signals z1, z2 ∈ B define the same state of B at time t whenever x(z1(·), t) = x(z2(·), t) (see Lecture 1 notes for details and examples). Here X is a set which can be called the state space of B. Note that, given the behavior set B, the state space X is not uniquely defined.
Definition A real-valued function V : X → R defined on the state space X of a system with behavior set B and state x : B × [0, ∞) → X is called a Lyapunov function if t ↦ V(t) = V(x(t)) = V(x(z(·), t)) is a non-increasing function of time for every z ∈ B.
According to this definition, Lyapunov functions provide limited but very explicit information about system behavior. For example, if X = Rn and V(x(t)) = |x(t)|2 is a Lyapunov function then we know that the system state x(t) remains bounded for all times, though we may have no idea of what the exact value of x(t) is.
For conservative systems in physics, the total energy is always a Lyapunov function.
Even for non-conservative systems, it is frequently important to look for energy-like expressions as Lyapunov function candidates.
One can say that Lyapunov functions have an explicit upper bound (zero) imposed on their increments along system trajectories:

V(x(z(·), t1)) − V(x(z(·), t0)) ≤ 0  ∀ t1 ≥ t0 ≥ 0, z ∈ B.

A useful generalization of this is given by storage functions.
Definition Let B be a set of n-dimensional vector signals z : [0, ∞) → Rn. Let σ : Rn → R be a given function such that σ(z(t)) is locally integrable for all z(·) ∈ B. A real-valued function V : X → R defined on the state space X of a system with behavior set B and state x : B × [0, ∞) → X is called a storage function with supply rate σ if

V(x(z(·), t1)) − V(x(z(·), t0)) ≤ ∫_{t0}^{t1} σ(z(t))dt  ∀ t1 ≥ t0 ≥ 0, z ∈ B.  (5.1)
For example, defining the power of a signal w(·) by

‖w(·)‖p = lim sup_{T→∞} (1/T) ∫_0^T |w(τ)|2 dτ,

one can use storage functions to show that in certain systems the power of the output will never exceed the power of the input.
Example 5.1 Let the behavior set B = {(i(t), v(t))} describe the (dynamical) voltage-current relation of a passive single port electronic circuit. Then the total energy E = E(t) accumulated in the circuit can serve as a storage function with supply rate σ(z) = σ([i; v]) = iv, the electrical power delivered to the circuit.
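For a concrete instance (an added illustration; the parallel RC circuit, its parameter values, and the driving current are arbitrary choices), one can verify the storage inequality (5.1) numerically: the increase of the stored energy never exceeds the integral of the supplied electrical power:

```python
import math

# parallel RC one-port driven by a current source: C v' = i - v/R.
# Stored energy E = C v^2 / 2 should satisfy the storage inequality
# E(t1) - E(t0) <= integral of v(t) i(t) dt  (supply rate = delivered power)
C, R = 1.0, 2.0
h, n = 1e-4, 50000
i = lambda t: math.sin(3 * t)

v, supplied = 0.0, 0.0
E0 = 0.5 * C * v ** 2
for k in range(n):
    t = k * h
    supplied += h * v * i(t)       # accumulate the supply integral
    v += h * (i(t) - v / R) / C    # forward Euler step for the circuit ODE
E1 = 0.5 * C * v ** 2

print(E1 - E0 <= supplied + 1e-9)  # True: energy increase never exceeds supply
```

The slack in the inequality is exactly the energy dissipated in the resistor.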
5.1.2
It is important to have tools for verifying that a given function of a system's state is monotonically non-increasing along system trajectories, without explicitly calculating solutions of the system equations. For systems defined by ODE models, this can usually be done.
Consider an autonomous system defined by the ODE model

ẋ(t) = a(x(t)),  (5.2)

(5.3)
family T = {T} of open disjoint intervals T ⊂ [0, 1] of total length 1. Indeed, for a fixed Cantor function k define

V(x̄) = floor(x̄) + k(x̄ − floor(x̄)),

where floor(x̄) denotes the largest integer not larger than x̄. Let a(x̄) = 1 on every interval (m + t1, m + t2), where m is an integer and (t1, t2) ∈ T, and a(x̄) = 0 otherwise. Then x(t) ≡ t is a solution of ODE (5.2), but V(x(t)) is strictly monotonically increasing, despite the fact that t ↦ V(x̄ + ta(x̄)) is constant in a neighborhood of t = 0 for every x̄ ∈ R.
However, if V and all solutions of (5.2) are smooth enough, condition (5.4) is sufficient for V to be a Lyapunov function.

Theorem 5.1 If X is an open set in Rn, V : X → R is locally Lipschitz, a : X → Rn is continuous, and condition (5.4) is satisfied then V(x(t)) is monotonically non-increasing for all solutions x : [t0, t1] → X of (5.2).
Proof We will use the following statement: if h : [t0, t1] → R is continuous and satisfies

lim_{d→0, d>0} sup_{δ∈(0,d)} [h(t + δ) − h(t)]/δ ≤ 0  ∀ t ∈ [t0, t1),  (5.5)

then h is monotonically non-increasing. Indeed, for every r > 0 let hr(t) = h(t) − rt. If hr is monotonically non-increasing for all r > 0 then so is h. Otherwise, assume that hr(t3) > hr(t2) for some t0 ≤ t2 < t3 ≤ t1 and r > 0. Let t4 be the maximal solution of the equation hr(t) = hr(t2) with t ∈ [t2, t3]. Then hr(t) > hr(t4) for all t ∈ (t4, t3], and hence (5.5) is violated at t = t4.

Now let M be the Lipschitz constant for V in a neighborhood of the trajectory of x. Since a is continuous,

lim_{δ→0, δ>0} | [x(t + δ) − x(t)]/δ − a(x(t)) | = 0  ∀ t.

Hence

[V(x(t + δ)) − V(x(t))]/δ = [V(x(t) + δa(x(t))) − V(x(t))]/δ + [V(x(t + δ)) − V(x(t) + δa(x(t)))]/δ
≤ [V(x(t) + δa(x(t))) − V(x(t))]/δ + M | [x(t + δ) − x(t)]/δ − a(x(t)) |.
A time-varying ODE model

ẋ1(t) = a1(x1(t), t)  (5.6)

(5.7)
for every pair of integrable functions x : [t0, t1] → X, u : [t0, t1] → U such that the composition t ↦ f(x(t), u(t)) satisfies the identity

x(t) = x(t0) + ∫_{t0}^{t} f(x(τ), u(τ))dτ.

If

(5.8)

is satisfied then V(x(t)) is a storage function with supply rate σ for system (5.7).
The proof of the theorem follows the lines of the proof of Theorem 5.1. Further generalizations to discontinuous functions f, etc., are possible.
6.1 Stability of an equilibrium
(6.1)
Recall that a function f : Y → R, Y ⊂ Rn, is called lower semicontinuous if

lim_{r→0, r>0} inf{f(x̄) : x̄ ∈ Y, |x̄ − x̄∗| < r} ≥ f(x̄∗)  ∀ x̄∗ ∈ Y.
Theorem 6.1 x̄0 ∈ X is a locally stable equilibrium of (6.1) if and only if there exist c > 0 and a lower semicontinuous function V : Bc(x̄0) → R, defined on

Bc(x̄0) = {x̄ : |x̄ − x̄0| < c}

and continuous at x̄0, such that V(x(t)) is monotonically non-increasing along the solutions of (6.1), and

V(x̄0) < V(x̄)  ∀ x̄ ∈ Bc(x̄0) \ {x̄0}.
Proof To prove that (ii) implies (i), define

V̄(r) = inf{V(x̄) − V(x̄0) : |x̄ − x̄0| = r}

for r ∈ (0, c). Since V is assumed lower semicontinuous, the infimum is actually a minimum, and hence is strictly positive for all r ∈ (0, c). On the other hand, since V is continuous at x̄0, V̄(r) converges to zero as r → 0. Hence, for a given ε > 0, one can find δ > 0 such that

V̄(min{ε, c/2}) > V(x̄) − V(x̄0)  ∀ x̄ : |x̄ − x̄0| < δ.

Hence a solution x = x(t) of (6.1) with an initial condition such that |x(0) − x̄0| < δ (and hence V(x(0)) − V(x̄0) < V̄(min{ε, c/2})) cannot cross the sphere |x̄ − x̄0| = min{ε, c/2}.
To prove that (i) implies (ii), define

V(x̄) = sup{|x(t) − x̄0| : t ≥ 0, x(0) = x̄, x(·) satisfies (6.1)}.  (6.2)

Since, by assumption, solutions starting close enough to x̄0 never leave a given disc centered at x̄0, V is well defined in a neighborhood X0 of x̄0. Then, by its very definition, V(x(t)) is not increasing for every solution of (6.1) starting in X0. Since V is a supremum, it is lower semicontinuous (actually, here we use the fact, not mentioned before, that if xk = xk(t) are solutions of (6.1) such that xk(t0) → x0 and xk(t1) → x1 then there exists a solution of (6.1) with x(t0) = x0 and x(t1) = x1). Moreover, V is continuous at x̄0, because of the stability of the equilibrium x̄0.
One can ask whether the existence of a Lyapunov function from a better class (say, continuous functions) is possible. The answer, in general, is negative, as demonstrated by (6.1) with

a(x̄) = { exp(−1/x̄2) sgn(x̄) sin2(1/x̄), x̄ ≠ 0; 0, x̄ = 0 }.

Then a is differentiable an arbitrary number of times and the equilibrium x̄0 = 0 of (6.1) is locally stable. However, every continuous function V : R → R which does not increase along system trajectories will achieve a maximum at x̄0 = 0.
For the case of a linear system, however, local stability of the equilibrium x̄0 = 0 implies the existence of a Lyapunov function which is a positive definite quadratic form.
Theorem 6.2 If a : Rn → Rn is defined by

a(x̄) = Ax̄,

where A is a given n-by-n matrix, then the equilibrium x̄0 = 0 of (6.1) is locally stable if and only if there exists a matrix Q = Q′ > 0 such that V(x(t)) = x(t)′Qx(t) is monotonically non-increasing along the solutions of (6.1).
The proof of this theorem, which can be based on considering a Jordan form of A, is
usually a part of a standard linear systems class.
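A quadratic Lyapunov function for a linear system can be computed explicitly by solving the Lyapunov equation A′Q + QA = −I (this strict form corresponds to asymptotic stability, as in Theorem 6.5 below). The NumPy sketch below (added for illustration; the particular Hurwitz matrix A is an arbitrary example) solves the equation by vectorization:

```python
import numpy as np

# an example Hurwitz matrix (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
I = np.eye(n)

# solve A'Q + QA = -I via row-major vectorization:
# vec(A'Q) = kron(A', I) vec(Q), vec(QA) = kron(I, A') vec(Q)
M = np.kron(A.T, I) + np.kron(I, A.T)
Q = np.linalg.solve(M, (-I).flatten()).reshape(n, n)

print(np.allclose(A.T @ Q + Q @ A, -I))               # Lyapunov equation holds
print(bool(np.all(np.linalg.eigvalsh((Q + Q.T) / 2) > 0)))  # Q is positive definite
```

Along any trajectory of ẋ = Ax, d/dt x′Qx = x′(A′Q + QA)x = −|x|², so V(x) = x′Qx is strictly decreasing away from the origin.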
6.1.2
∇V(x̄)a(x̄) < 0  ∀ x̄ ∈ Bδ(x̄0) \ {x̄0}.
Proof Define V by

V(x(0)) = ∫_0^∞ φ(|x(t)|2)dt,

where φ : [0, ∞) → [0, ∞) is positive for positive arguments and continuously differentiable. If V is correctly defined and differentiable, differentiation of V(x(t)) with respect to t at t = 0 yields

∇V(x(0))a(x(0)) = −φ(|x(0)|2),

which proves the theorem. To make the integral convergent and continuously differentiable, it is sufficient to make φ(y) converge to zero quickly enough as y → 0.
For the case of a linear system, a classical Lyapunov theorem shows that local asymptotic stability of the equilibrium x̄0 = 0 implies the existence of a strict Lyapunov function which is a positive definite quadratic form.
Theorem 6.5 If a : Rⁿ → Rⁿ is defined by

   a(x) = Ax,

where A is a given n-by-n matrix, then the equilibrium x0 = 0 of (6.1) is locally asymptotically
stable if and only if there exists a matrix Q = Q' > 0 such that, for V(x) = x'Qx,

   ∇V(x)·Ax = −|x|².
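As a quick numerical illustration of Theorem 6.5 (a sketch under the assumption that numpy and scipy are available; the matrix A below is an arbitrary illustrative Hurwitz matrix, not from the text), the condition ∇V(x)Ax = −|x|² amounts to the Lyapunov equation A'Q + QA = −I, which can be solved and verified directly:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sample Hurwitz matrix (all eigenvalues in the open left half plane).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# Theorem 6.5: V(x) = x'Qx with grad V(x) . Ax = -|x|^2 means
# x'(QA + A'Q)x = -x'x, i.e. Q solves the Lyapunov equation A'Q + QA = -I.
Q = solve_continuous_lyapunov(A.T, -np.eye(2))

# Q should be symmetric positive definite ...
assert np.allclose(Q, Q.T)
assert np.all(np.linalg.eigvalsh(Q) > 0)

# ... and V decreases at rate |x|^2 along solutions: check the identity
# 2 x'Q(Ax) = -|x|^2 on random vectors.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert np.isclose(2 * x @ Q @ (A @ x), -(x @ x))
```

The identity holds exactly because 2x'QAx = x'(QA + A'Q)x for any symmetric Q.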
6.1.3
Here we consider the case when a : Rⁿ → Rⁿ is defined for all vectors. An equilibrium
x0 of (6.1) is called globally asymptotically stable if it is locally stable and every solution
of (6.1) converges to x0 as t → ∞.
Theorem 6.6 If a function V : Rⁿ → R has a unique minimum at x0, is strictly mono-
tonically decreasing along every trajectory of (6.1) except x(t) ≡ x0, and has bounded level
sets, then x0 is a globally asymptotically stable equilibrium of (6.1).
The proof of the theorem follows the lines of the proof of Theorem 6.4. Note that the
assumption that the level sets of V are bounded is critically important: without it, some
solutions of (6.1) may converge to infinity instead of x0.
7.1
The set of all real-valued functions of the system state which do not increase along system
trajectories is a convex cone, i.e. it is closed under addition and under multiplication by a
positive constant. This serves as a basis for a general procedure of searching for Lyapunov
functions or storage functions.
7.1.1
Consider a discrete-time system with state x(t) ∈ X, input w(t) ∈ W, and output y(t) ∈ Y,
defined by

   x(t + 1) = f(x(t), w(t)),  y(t) = g(x(t), w(t)).                          (7.1)

A function V : X → R is called a storage function with supply rate σ : Y × W → R if

   V(x(t + 1)) − V(x(t)) ≤ σ(y(t), w(t))                                     (7.2)

along all trajectories, which is equivalent to the pointwise condition

   V(f(x, w)) − V(x) ≤ σ(g(x, w), w)  ∀ x ∈ X, w ∈ W.                        (7.3)

In particular, when σ ≤ 0, this yields the definition of a Lyapunov function.
Finding, for a given supply rate, a valid storage function (or at least proving that one
exists) is a major challenge in constructive analysis of nonlinear systems. The most com-
mon approach is based on considering a linearly parameterized subset of storage function
candidates 𝒱 defined by

   𝒱 = { V(x) = Σ_{q=1}^N τq·Vq(x) },                                       (7.4)

where {Vq} is a fixed set of basis functions, and the τq are parameters to be determined. Here
every element of 𝒱 is considered as a storage function candidate, and one wants to set up
an efficient search for the values of τq which yield a function V satisfying (7.3).
Example 7.1 Consider the finite state automaton defined by equations (7.1) with value
sets

   X = {1, 2, 3},  W = {0, 1},  Y = {0, 1},

and with dynamics defined by

   f(1, 1) = 2,  f(2, 1) = 3,  f(3, 1) = 1,  f(1, 0) = 1,  f(2, 0) = 2,  f(3, 0) = 2,

   g(1, 1) = 1,  g(x, w) = 0  ∀ (x, w) ≠ (1, 1).
In order to show that the number of 1s in the output is never much larger than one third
of the number of 1s in the input, one can try to find a storage function V with supply
rate

   σ(y, w) = w − 3y.

Taking the three basis functions V1, V2, V3 defined by

   Vk(x) = { 1,  x = k,
           { 0,  x ≠ k,
the conditions imposed on τ1, τ2, τ3 can be written as the set of six affine inequalities (7.3),
two of which (those with (x, w) = (1, 0) and (x, w) = (2, 0)) will be satisfied automatically,
while the other four are

   τ2 − τ3 ≤ 0    at (x, w) = (3, 0),
   τ2 − τ1 ≤ −2   at (x, w) = (1, 1),
   τ3 − τ2 ≤ 1    at (x, w) = (2, 1),
   τ1 − τ3 ≤ 1    at (x, w) = (3, 1).
Solutions of this linear program are given by

   τ1 = c,  τ2 = c − 2,  τ3 = c − 1,

where c ∈ R is arbitrary. It is customary to normalize storage and Lyapunov functions
so that their minimum equals zero, which yields c = 2 and

   τ1 = 2,  τ2 = 0,  τ3 = 1.
Now, summing the inequalities (7.2) from t = 0 to t = T − 1 yields

   3·Σ_{t=0}^{T−1} y(t) ≤ Σ_{t=0}^{T−1} w(t) + V(x(0)) − V(x(T)),

which implies the desired relation between the numbers of 1s in the input and in the
output, since V(x(0)) − V(x(T)) cannot be larger than 2.
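A sketch verifying Example 7.1 by brute force, using the storage values τ = (2, 0, 1) found above (only assumption: the trajectory may start at any state; state 1 is used below):

```python
import itertools

# Dynamics and output of the automaton from Example 7.1.
f = {(1, 1): 2, (2, 1): 3, (3, 1): 1, (1, 0): 1, (2, 0): 2, (3, 0): 2}
g = {(x, w): (1 if (x, w) == (1, 1) else 0) for x in (1, 2, 3) for w in (0, 1)}

# Storage function values found by the linear program (tau_1, tau_2, tau_3).
V = {1: 2, 2: 0, 3: 1}

# Pointwise dissipation: V(f(x,w)) - V(x) <= sigma(g(x,w), w) = w - 3y.
for (x, w), xn in f.items():
    assert V[xn] - V[x] <= w - 3 * g[(x, w)]

# Summing along any trajectory gives 3 * (#1s in output) <= (#1s in input) + 2.
for inputs in itertools.product((0, 1), repeat=10):
    x, ones_in, ones_out = 1, 0, 0
    for w in inputs:
        ones_in += w
        ones_out += g[(x, w)]
        x = f[(x, w)]
    assert 3 * ones_out <= ones_in + 2
```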
7.1.2
The possibility of reducing the search for a valid storage function to convex optimization,
as demonstrated by the example above, is a general trend. One general situation in which
an efficient search for a storage function can be performed is when a cheap procedure for
checking condition (7.3) (an oracle) is available.
Assume that for every given element V ∈ 𝒱 it is possible to find out whether condition
(7.3) is satisfied, and, in the case when the answer is negative, to produce a pair of vectors
x ∈ X, w ∈ W for which the inequality in (7.3) does not hold. Select a sufficiently large
set T0 (a polytope or an ellipsoid) in the space of the parameter vector τ = (τq)_{q=1}^N (this
set will limit the search for a valid storage function). Let τ* be the center of T0. Define
V by this τ*, and apply the verification oracle to it. If V is a valid storage function,
the search ends successfully. Otherwise, the invalidity certificate
(x, w) produced by the oracle yields a hyperplane separating τ* from the (unknown) set
of τ defining valid storage functions, thus cutting a substantial portion from the search
set T0 and reducing it to a smaller set T1. Now re-define τ* as the center of T1 and repeat
the process, constructing a sequence of monotonically decreasing search sets Tk, until
either a valid storage function is found or Tk shrinks to nothing.
With an appropriate selection of a class of search sets Tk (ellipsoids or polytopes
are most frequently used) and with an adequate definition of a center (the so-called
analytic center is used for polytopes), the volume of Tk can be made exponentially
decreasing, which guarantees fast convergence of the search algorithm.
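The scheme can be sketched with the classical central-cut ellipsoid method, using the inequalities of Example 7.1 as the oracle. Note one assumption introduced for the sketch: the constraints are relaxed by a small slack, because the exact feasible set of Example 7.1 is degenerate (a line, with zero volume), and the ellipsoid would otherwise shrink forever:

```python
import numpy as np

# Affine cuts g.tau <= b from Example 7.1, relaxed by slack 0.25 so that
# the feasible set has nonempty interior (an assumption made for this sketch).
slack = 0.25
cuts = [(np.array([-1.0, 1.0, -1.0]), 0.0 + slack),   # tau2 - tau3 <= 0
        (np.array([-1.0, 1.0, 0.0]), -2.0 + slack),   # tau2 - tau1 <= -2
        (np.array([0.0, -1.0, 1.0]), 1.0 + slack),    # tau3 - tau2 <= 1
        (np.array([1.0, 0.0, -1.0]), 1.0 + slack)]    # tau1 - tau3 <= 1

def oracle(tau):
    """Return a violated cut (g, b), or None if tau is feasible."""
    for gvec, b in cuts:
        if gvec @ tau > b:
            return gvec, b
    return None

# Central-cut ellipsoid method: E = {tau : (tau-c)' P^{-1} (tau-c) <= 1}.
n = 3
c = np.zeros(n)
P = 100.0 * np.eye(n)          # start from a ball of radius 10
for _ in range(1000):
    cut = oracle(c)
    if cut is None:
        break                  # center is a valid parameter vector
    gvec, _ = cut
    gn = P @ gvec / np.sqrt(gvec @ P @ gvec)
    c = c - gn / (n + 1)
    P = (n**2 / (n**2 - 1.0)) * (P - 2.0 / (n + 1) * np.outer(gn, gn))

assert oracle(c) is None       # a feasible tau was found
```

The volume of the ellipsoid shrinks by a fixed factor per cut, which is the exponential decrease mentioned above.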
7.1.3
Completion of squares
The success of the search procedure described in the previous section depends heavily
on the choice of the basis functions Vk. A major difficulty to overcome is the verification of
(7.3) for a given V. It turns out that the only known large linear space of functionals
F : Rⁿ → R which admits an efficient check of non-negativity of its elements is the set of
quadratic forms

   F(x) = [x; 1]'·Q·[x; 1],  (Q = Q').

In particular, for the LTI system ẋ = Ax + Bw with a quadratic supply rate
σ(x, w) = [x; w]'·Σ·[x; w], the dissipation inequality for V(x) = x'Px takes the form

   [x; w]'·[PA + A'P, PB; B'P, 0]·[x; w] ≤ σ(x, w)  ∀ x, w.                  (7.5)
Since this inequality is linear with respect to its parameters P and Σ, it can be solved
relatively efficiently even when additional linear constraints are imposed on P and Σ.
Note that a quadratic functional is non-negative if and only if it can be represented as
a sum of squares of linear functionals. The idea of checking non-negativity of a functional
by trying to represent it as a sum of squares of functions from a given linear set can be
used in searching for storage functions of general nonlinear systems as well. Indeed, let
Φ : Rⁿ × Rᵐ → R^M and V̄ : Rⁿ → R^N be arbitrary vector-valued functions. For every
h ∈ R^N and every matrix S = S' ≥ 0, condition (7.3) with

   V(x) = h'·V̄(x)

is implied by the identity

   V(f(x, w)) − V(x) + Φ(x, w)'·S·Φ(x, w) = σ(x, w)  ∀ x ∈ X, w ∈ W,         (7.6)
7.2
As described in the previous section, one can search for storage functions by considering
linearly parameterized sets of storage function candidates. It turns out that storage
functions derived for subsystems of a given system can serve as convenient building blocks
(i.e. as the components Vq of V). Indeed, assume that Vq = Vq(x(t)) are storage functions
with supply rates σq = σq(z(t)). Typically, z(t) includes x(t) as its component, and has
some additional elements, such as inputs, outputs, and other nonlinear combinations of
system states and inputs. If the objective is to find a storage function V with a given
supply rate σ, one can search for V in the form

   V(x(t)) = Σ_{q=1}^N τq·Vq(x(t)),  τq ≥ 0,                                 (7.7)

where the τq are the search parameters. Note that in this case it is known a priori that every
V in (7.7) is a storage function with supply rate

   σ_τ(z(t)) = Σ_{q=1}^N τq·σq(z(t)).                                        (7.8)
Hence it is sufficient to find τq ≥ 0 such that

   Σ_{q=1}^N τq·σq(z) ≤ σ(z)  ∀ z.                                           (7.9)
When σ, σq are generic functions, even this simplified task can be difficult. However, in
the important special case when σ and the σq are quadratic functionals, the search for τq in
(7.9) becomes a semidefinite program.
In this section, the use of storage functions with quadratic supply rates is discussed.
7.2.1
A quadratic form V(x) = x'Px is a storage function for the LTI system

   ẋ = Ax + Bw                                                               (7.10)

with a quadratic supply rate σ(x, w) = [x; w]'·Σ·[x; w] if and only if P satisfies (7.5).
Theorem 7.1 Assume that the pair (A, B) is controllable. A symmetric matrix P = P'
satisfying (7.5) exists if and only if

   [x; w]'·Σ·[x; w] ≥ 0  whenever  jωx = Ax + Bw for some ω ∈ R.             (7.11)

Moreover, if there exists a matrix K such that A + BK is a Hurwitz matrix, and

   [I; K]'·Σ·[I; K] ≤ 0,

then all such matrices P = P' are positive semidefinite.
Example 7.2 Let G(s) = C(sI − A)⁻¹B + D be a stable transfer function (i.e. the matrix
A is a Hurwitz matrix) with a controllable pair (A, B). Then |G(jω)| ≤ 1 for all ω ∈ R
if and only if there exists P = P' ≥ 0 such that

   2x'P(Ax + Bw) ≤ |w|² − |Cx + Dw|²  ∀ x ∈ Rⁿ, w ∈ Rᵐ.

This can be proven by applying Theorem 7.1 with

   σ(x, w) = |w|² − |Cx + Dw|²

and K = 0.
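Example 7.2 can be spot-checked on a concrete scalar case; the transfer function G(s) = 1/(s+1) (A = −1, B = C = 1, D = 0) and the value P = 1 are illustrative assumptions, not taken from the text:

```python
import numpy as np

# G(s) = 1/(s+1): A = -1, B = 1, C = 1, D = 0, so |G(jw)| <= 1 for all w.
A, B, C, D = -1.0, 1.0, 1.0, 0.0
P = 1.0   # completing the square shows P = 1 works here:
          # (w^2 - x^2) - 2*x*(-x + w) = (w - x)^2 >= 0

# Check 2 x P (A x + B w) <= |w|^2 - |C x + D w|^2 on a grid.
xs = np.linspace(-5, 5, 101)
ws = np.linspace(-5, 5, 101)
X, W = np.meshgrid(xs, ws)
lhs = 2 * X * P * (A * X + B * W)
rhs = W**2 - (C * X + D * W)**2
assert np.all(lhs <= rhs + 1e-9)

# Frequency-domain counterpart: |G(jw)| <= 1 on a sampled frequency grid.
w = np.linspace(-100, 100, 2001)
assert np.all(np.abs(1.0 / (1j * w + 1.0)) <= 1.0 + 1e-12)
```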
7.2.2
Whenever two components v = v(t) and w = w(t) of the system trajectory z = z(t)
are related in such a way that the pair (v(t), w(t)) lies in the cone between the two lines
w = k1·v and w = k2·v, V ≡ 0 is a storage function for

   σ(z(t)) = (w(t) − k1·v(t))·(k2·v(t) − w(t)).

For example, if w(t) = v(t)³ then σ(z(t)) = v(t)w(t). If w(t) = sin(t)·sin(v(t)) then
σ(z(t)) = |v(t)|² − |w(t)|².
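Both sector claims can be verified numerically; a minimal sketch (the sampling grids are arbitrary):

```python
import numpy as np

v = np.linspace(-10, 10, 2001)
t = np.linspace(0, 20, 2001)

# w = v^3 lies in the cone between w = 0*v and arbitrarily steep lines w = k*v,
# which in the limit gives the supply rate sigma = v*w >= 0.
w = v**3
assert np.all(v * w >= 0)

# w = sin(t) * sin(v) satisfies |w| <= |sin(v)| <= |v|,
# which gives the supply rate sigma = |v|^2 - |w|^2 >= 0.
w = np.sin(t) * np.sin(v)
assert np.all(v**2 - w**2 >= -1e-12)
```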
7.2.3
Whenever two components v = v(t) and w = w(t) of the system trajectory z = z(t) are
related by w(t) = φ(v(t)), where φ : R → R is an integrable function, and v(t) is a
component of the system state, V(x(t)) = Φ(v(t)) is a storage function with supply rate

   σ(z(t)) = v̇(t)·w(t),

where

   Φ(y) = ∫₀^y φ(τ) dτ.
7.3
A number of important results in nonlinear system analysis rely on storage functions for
which no explicit formula is known. It is frequently sufficient to provide a lower bound for
the storage function (for example, to know that it takes only non-negative values), and
to have an analytical expression for the supply rate function σ.
In order to work with such implicit storage functions, it is helpful to have theorems
which guarantee existence of non-negative storage functions for a given supply rate. In this
regard, Theorem 7.1 can be considered as an example of such a result, stating existence of
a storage function for a linear time-invariant system as an implication of a frequency-
dependent matrix inequality. In this section we present a number of such statements
which can be applied to nonlinear systems.
7.3.1
Define

   V(z0, t0) = − inf I(z, t0, t),  where  I(z, t0, t1) = ∫_{t0}^{t1} σ(z(t)) dt,   (7.12)

and where the infimum is taken over all t ≥ t0 and over all z ∈ B defining the same state
as z0 at time t0.
Proof Implication (b)⇒(a) follows directly from the definition of a storage function,
which requires

   V(z0, t1) − V(z0, t0) ≤ I(z, t0, t1)                                      (7.13)

for t1 ≥ t0, z0 ∈ B. Combining this with V ≥ 0 yields

   I(z, t0, t1) ≥ −V(z, t0) = −V(z0, t0)

for all z, z0 defining the same state of B at time t0.
Now let us assume that (a) is valid. Then a finite infimum in (7.12) exists (as an
infimum over a non-empty set bounded from below) and is not positive (since I(z0, t0, t0) =
0). Hence V is correctly defined and non-negative. To finish the proof, let us show that
(7.13) holds. Indeed, if z1 defines the same state as z0 at time t1, then

   z01(t) = { z0(t),  t ≤ t1,
            { z1(t),  t > t1
defines the same state as z0 at time t0 < t1 (explain why). Hence the infimum of I(z, t0, t) in
the definition of V is not larger than the infimum of the integrals of all such z01, over intervals
of length not smaller than t1 − t0. These integrals can in turn be decomposed into two
integrals

   I(z01, t0, t) = I(z0, t0, t1) + I(z1, t1, t),

which yields the desired inequality.
7.3.2
Consider now the ODE model

   ẋ(t) = f(x(t), w(t)),  w(t) ∈ W,                                          (7.14)

where the infimum in (7.12) is taken over all solutions of (7.14) with a fixed x(t0) = x0
which can be extended to the time interval [0, ∞).
In the case when X = Rⁿ and f : Rⁿ × Rᵐ → Rⁿ is such that existence and uniqueness
of solutions x : [0, ∞) → Rⁿ is guaranteed for all locally integrable inputs w : [0, ∞) →
W and all initial conditions x(t0) = x0 ∈ Rⁿ, the infimum in (7.12) (and hence the
corresponding storage function) does not depend on time. If, in addition, f is continuous
and V is continuously differentiable, the well-known dynamic programming condition

   lim_{ε→0, ε>0}  inf_{w ∈ W, x ∈ Bε(x0)} { σ(x, w) − ∇V(x)·f(x, w) }
      ≤ 0 ≤ inf_{w ∈ W} { σ(x0, w) − ∇V(x0)·f(x0, w) }   ∀ x0 ∈ Rⁿ          (7.15)

will be satisfied. However, using (7.15) requires a lot of caution in most cases, since, even
for very smooth f, σ, the resulting storage function V does not have to be differentiable.
7.3.3
A non-trivial and powerful case of an implicitly defined storage function with a quadratic
supply rate was introduced in the late 1960s by G. Zames and P. Falb.
Theorem 7.3 Let A, B, C be matrices such that A is a Hurwitz matrix, and

   ∫₀^∞ |Ce^{At}B| dt < 1.

Let φ : R → R be a function satisfying

   0 ≤ w̄·φ(w̄) ≤ |w̄|²  ∀ w̄ ∈ R.

Then for all δ < 1 the system

   ẋ(t) = Ax(t) + Bw(t)

has a non-negative storage function with supply rate

   σ+(x, w) = (w − φ(w))·(δw − Cx),

and the system

   ẋ(t) = Ax(t) + B(w(t) − φ(w(t)))

has a non-negative storage function with supply rate

   σ−(x, w) = (w − φ(w) − Cx)·w.
The proof of Theorem 7.3 begins with establishing that, for every function h : R → R
with L1 norm not exceeding 1, and for every square integrable function w : R → R, the
integral

   ∫ (w(t) − φ(w(t)))·y(t) dt,
Theorem 7.4 Assume that the matrices Ap, Bp, Cp are such that Ap is a Hurwitz matrix,
and there exists δ > 0 such that

   Re (1 − G(jω))·(1 − H(jω)) ≥ δ  ∀ ω ∈ R,

where H is the Fourier transform of a function with L1 norm not exceeding 1, and

   G(s) = Cp(sI − Ap)⁻¹Bp.

Then the system

   ẋ(t) = Ap·x(t) + Bp·φ(Cp·x(t) + v(t))

has finite L2 gain, in the sense that there exists γ > 0 such that

   ∫₀^∞ |x(t)|² dt ≤ γ·( |x(0)|² + ∫₀^∞ |v(t)|² dt ).
7.4
Consider the following system of differential equations with an uncertain constant delay
parameter τ:

   ẋ1(t) = −x1(t)³ − x2(t − τ)³,                                             (7.16)
   ẋ2(t) = x1(t) − x2(t).                                                     (7.17)
Analysis of this system is easy when τ = 0, and becomes more difficult when τ is an
arbitrary constant in the interval [0, τ0]. The system is not exponentially stable for any
value of τ. Our objective is to show that, despite the absence of exponential stability, the
method of storage functions with quadratic supply rates works.
The case τ = 0
For τ = 0, we begin with describing (7.16),(7.17) by the behavior set

   Z = { z = [x1; x2; w1; w2] },

where

   w1 = x1³,  w2 = x2³,  ẋ1 = −w1 − w2,  ẋ2 = x1 − x2.
Quadratic supply rates for Z which follow from the linear equations of Z are given by

   σLTI(z) = 2·[x1; x2]'·P·[−w1 − w2; x1 − x2],

where P = P' is an arbitrary symmetric 2-by-2 matrix, defining the storage function

   VLTI(z(·), t) = x(t)'·P·x(t).
Among the non-trivial quadratic supply rates valid for Z, the simplest are defined by

   σNL(z) = d1·x1w1 + d2·x2w2 + q1·w1(−w1 − w2) + q2·w2(x1 − x2),

with the storage function

   VNL(z(·), t) = 0.25·(q1·x1(t)⁴ + q2·x2(t)⁴),

where dk ≥ 0. It turns out (and is easy to verify) that the only convex combinations of
these supply rates which yield σ ≤ 0 are the ones that make σ = σLTI + σNL = 0, for
example

   P = [0.5, 0; 0, 0],   d1 = d2 = q2 = 1,   q1 = 0.
The absence of strictly negative definite supply rates corresponds to the fact that the
system is not exponentially stable. Nevertheless, a Lyapunov function candidate can be
constructed from the given solution:

   V(x) = x'Px + 0.25·(q1·x1⁴ + q2·x2⁴) = 0.5·x1² + 0.25·x2⁴.

This Lyapunov function can be used along the standard lines to prove global asymptotic
stability of the equilibrium x = 0 in system (7.16),(7.17).
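The decrease of this Lyapunov function for τ = 0 can be verified by simulation; a sketch assuming scipy is available (the initial condition and horizon are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

# System (7.16)-(7.17) with tau = 0.
def rhs(t, x):
    x1, x2 = x
    return [-x1**3 - x2**3, x1 - x2]

def V(x):
    x1, x2 = x
    return 0.5 * x1**2 + 0.25 * x2**4

sol = solve_ivp(rhs, (0.0, 20.0), [1.5, -1.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
ts = np.linspace(0.0, 20.0, 400)
vals = np.array([V(sol.sol(t)) for t in ts])

# V is non-increasing along the trajectory (dV/dt = -x1^4 - x2^4 <= 0),
# and the state approaches the equilibrium 0.
assert np.all(np.diff(vals) <= 1e-9)
assert np.linalg.norm(sol.sol(20.0)) < 0.5
```

The decay is only polynomial (the dissipation rate x1⁴ + x2⁴ is quartic near the origin), which is consistent with the absence of exponential stability noted above.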
7.4.1
Now consider the case when τ ∈ [0, 0.2] is an uncertain parameter. To show that the
delayed system (7.16),(7.17) remains stable when τ ≤ 0.2, (7.16),(7.17) can be represented
by a more elaborate behavior set Z = {z(·)} with

   z = [x1; x2; w1; w2; w3; w4; w5; w6] ∈ R⁸,

satisfying the LTI relations

   ẋ1 = −w1 − w2 + w3,   ẋ2 = x1 − x2

and the nonlinear/infinite dimensional relations

   w1 = x1³,  w2 = x2³,  w3 = x2³ − (x2 + w4)³,
   w4(t) = x2(t − τ) − x2(t),  w5 = w4³,  w6 = (x1 − x2)³.
Some additional supply rates/storage functions are needed to bound the new variables.
These will be selected from the perspective of a small gain argument. Note that the
perturbation w4 can easily be bounded in terms of ẋ2 = x1 − x2. In fact, the LTI system
with transfer function (e^{−τs} − 1)/s has a small gain (in almost any sense) when τ is
small. Hence a small gain argument would be applicable provided that the gain from w4
to ẋ2 could be bounded as well.
It turns out that the L2-induced gain from w4 to ẋ2 is unbounded. Instead, we can
use L4 norms. Indeed, the last two components w5, w6 of w were introduced in order
to handle L4 norms within the framework of quadratic supply rates. More specifically, in
addition to the usual supply rate

   σLTI(z) = 2·[x1; x2]'·P·[−w1 − w2 + w3; x1 − x2],

the set Z has supply rates

   σ(z) = d1·x1w1 + d2·x2w2 + q1·w1(−w1 − w2 + w3) + q2·w2(x1 − x2)
        + d3·[0.99(x1w1 + x2w2) − x1w3 + 2.5⁴·w4w5 − 0.5⁴·(x1 − x2)w6]
        + q3·[0.2⁴·(x1 − x2)w6 − w4w5],
with di ≥ 0. Here the supply rates with coefficients d1, d2, q1, q2 are the same as before.
The term with d3, based on a zero storage function, follows from the inequality

   0.99(x1⁴ + x2⁴) − x1·(x2³ − (x2 + w4)³) + (5w4/2)⁴ − ((x1 − x2)/2)⁴ ≥ 0

(which is satisfied for all real numbers x1, x2, w4, and can be checked numerically).
The term with q3 follows from a gain bound on the transfer function Gτ(s) = (e^{−τs} − 1)/s
from x1 − x2 to w4. It is easy to verify that the L1 norm of its impulse response
equals τ, and hence the L4-induced gain of the causal LTI system with transfer function
Gτ will not exceed τ. Consider the function

   Vd(v(·), T) = − inf ∫_T^∞ ( 0.2⁴·|v1(t)|⁴ − | ∫_{t−τ}^t v1(r) dr |⁴ ) dt,   (7.18)

where the infimum is taken over all functions v1 which are square integrable on (0, ∞)
and such that v1(t) = v(t) for t ≤ T. Because the L4 gain of Gτ with τ ∈ [0, 0.2]
does not exceed 0.2, the infimum in (7.18) is bounded. Since we can always use v1(t) = 0
for t > T, the infimum is non-positive, and hence Vd is non-negative. The IQC defined
by the q3 term holds with V = q3·Vd(x1 − x2, t).
Let

   σ0(z) = 0.01(x1w1 + x2w2) = 0.01(x1⁴ + x2⁴),

which reflects our intention to show that x1, x2 will be integrable with fourth power over
(0, ∞). Using

   P = [0.5, 0; 0, 0],   d1 = d2 = 0.01,  d3 = q2 = 1,  q1 = 0,  q3 = 2.5⁴
yields a Lyapunov function

   V(xe(t)) = 0.5·x1(t)² + 0.25·x2(t)⁴ + 2.5⁴·Vd(x1 − x2, t),

where xe is the total state of the system (in this case, xe(T) = [x(T); vT(·)], where
vT(·) ∈ L2(−τ, 0) denotes the signal v(t) = x1(T + t) − x2(T + t) restricted to the
interval t ∈ (−τ, 0)). It follows that

   dV(xe(t))/dt ≤ −0.01(x1(t)⁴ + x2(t)⁴).

On the other hand, we saw previously that V(xe(t)) ≥ 0 is bounded from below. There-
fore x1(·), x2(·) ∈ L4 (the fourth powers of x1, x2 are integrable over (0, ∞)) as long as the
initial conditions are bounded. Thus, the equilibrium x = 0 in system (7.16),(7.17) is
stable for 0 ≤ τ ≤ 0.2.
8.1
This lecture studies local stability of an equilibrium x0 for the ODE model

   ẋ(t) = a(x(t))                                                            (8.1)

and for its discrete-time counterpart

   x(t + 1) = a(x(t))                                                        (8.2)

in a neighborhood of the equilibrium x0.
In the statements below, it is assumed that a : X → Rⁿ is a continuous function
defined on an open subset X ⊆ Rⁿ. It is further assumed that x0 ∈ X, and there exists
an n-by-n matrix A such that

   |a(x0 + δ) − a(x0) − Aδ| / |δ| → 0  as  |δ| → 0.                          (8.3)
Example 8.1 The function a : R² → R², defined for x ≠ 0 by

   a([x1; x2]) = [ (x1²x2² − (x1² − x2²)²)·x1 / (x1² + x2²)² ;
                  −(x1²x2² + (x1² − x2²)²)·x2 / (x1² + x2²)² ]

and by a(0) = 0, is differentiable with respect to x1 and x2 at every point
x ∈ R², and its Jacobian a'(0) = A equals minus the identity matrix. However, condition
(8.3) is not satisfied (note that a'(x) is not continuous at x = 0).
8.1.1
Let us call an equilibrium x0 of (8.1) exponentially stable if there exist positive real num-
bers δ, r, C such that every solution x : [0, T] → X with |x(0) − x0| < δ satisfies

   |x(t) − x0| ≤ C·e^{−rt}·|x(0) − x0|  ∀ t ≥ 0.
The following theorem can be attributed directly to Lyapunov.
Theorem 8.1 Assume that a(x0) = 0 and condition (8.3) is satisfied. Then
(a) if A = a'(x0) is a Hurwitz matrix (i.e. if all eigenvalues of A have negative real
part) then x0 is a (locally) exponentially stable equilibrium of (8.1);
(b) if A = a'(x0) has an eigenvalue with a non-negative real part then x0 is not an
exponentially stable equilibrium of (8.1);
(c) if A = a'(x0) has an eigenvalue with a positive real part then x0 is not a stable
equilibrium of (8.1).
Note that Theorem 8.1 does not cover all possible cases: if A is not a Hurwitz matrix
and does not have eigenvalues with positive real part, then the statement says very little,
and for a good reason: the equilibrium may turn out to be asymptotically stable or
unstable. Note also that the equilibrium x = 0 from Example 8.1 (where a is differentiable
but does not satisfy (8.3)) is not stable, despite the fact that A = −I has all its eigenvalues
at −1.
Example 8.2 The equilibrium x = 0 of the ODE

   ẋ(t) = α·x(t) + β·x(t)³

is asymptotically stable when α < 0 (this is due to Theorem 8.1), but also when α = 0 and
β < 0. The equilibrium is not stable when α > 0 (due to Theorem 8.1), but also when
α = 0 and β > 0. In addition, the equilibrium is stable but not asymptotically stable
when α = β = 0.
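The different regimes of this scalar example can be observed by simulation; a sketch (in the code, a and c stand for the linear and cubic coefficients, and the horizons and thresholds are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sim(a, c, x0, T):
    """Final value of x' = a*x + c*x^3 started at x0."""
    sol = solve_ivp(lambda t, x: a * x + c * x**3, (0.0, T), [x0],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# a < 0: exponentially stable (Theorem 8.1(a)).
assert abs(sim(-1.0, 1.0, 0.1, 20.0)) < 1e-6

# a = 0, c < 0: asymptotically stable, but only with slow (algebraic) decay:
# x(t) = x0 / sqrt(1 + 2*x0^2*t).
assert abs(sim(0.0, -1.0, 0.5, 200.0)) < 0.1

# a = 0, c > 0: unstable -- the solution moves away from 0 (and escapes in
# finite time, so the horizon is kept below the blow-up time).
assert abs(sim(0.0, 1.0, 0.1, 45.0)) > 0.2
```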
8.1.2
The proof of (a) can be viewed as an exercise in the storage function construction outlined
in the previous lecture. Indeed, assuming for simplicity that x0 = 0, (8.1) can be re-
written as

   ẋ(t) = Ax(t) + w(t),  w(t) = a(x(t)) − Ax(t).

Here the linear part has standard storage functions

   VLTI(x) = x'Px,  P = P',

with supply rates

   σLTI(x, w) = 2x'P(Ax + w).

In addition, due to (8.3), for every ε > 0 there exists δ > 0 such that the nonlinear
component w(t) satisfies the sector constraint

   σNL(x(t), w(t)) = ε²|x(t)|² − |w(t)|² ≥ 0

as long as |x(t)| < δ. Since A is a Hurwitz matrix, P = P' can be chosen positive definite
and such that

   PA + A'P = −I.
Then

   σ(x, w) = σLTI(x, w) + τ·σNL(x, w) = (τε² − 1)|x|² + 2x'Pw − τ|w|²
           ≤ (τε² − 1)|x|² + 2‖P‖·|x|·|w| − τ|w|²,

where ‖P‖ is the largest singular value of P, is a supply rate for the storage function
V = VLTI for every constant τ ≥ 0. When τ = 4‖P‖² and ε = 0.25/‖P‖, we have

   σ(x, w) ≤ −0.5|x|²,

which proves that, for |x(t)| < δ,

   V̇(x(t)) ≤ −0.5|x(t)|² ≤ −(1/(2‖P‖))·V(x(t)).

Hence

   V(x(t)) ≤ e^{−dt}·V(x(0))  ∀ t ≥ 0,

where d = 1/(2‖P‖), as long as |x(t)| < δ. Since

   ‖P⁻¹‖⁻¹·|x(t)|² ≤ V(x(t)) ≤ ‖P‖·|x(t)|²,

this implies (a).
The proofs of (b) and (c) are more involved, based on showing that solutions which
start at x0 + v, where v is an eigenvector of A corresponding to an eigenvalue with a non-
negative (respectively, strictly positive) real part, cannot converge to x0 quickly enough
(respectively, cannot avoid diverging from x0).
To prove (b), take a real number d ∈ (0, r/2) such that no two eigenvalues of A sum
up to −2d. Then let P = P' be the unique solution of the Lyapunov equation

   P(A + dI) + (A + dI)'P = −I.
Note that P is non-singular: otherwise, if Pv = 0 for some v ≠ 0, it follows that

   −|v|² = v'(P(A + dI) + (A + dI)'P)v = (Pv)'(A + dI)v + ((A + dI)v)'(Pv) = 0.

In addition, P = P' is not positive semidefinite: since, by assumption, A + dI has an
eigenvector u ≠ 0 which corresponds to an eigenvalue λ with a positive real part, we have

   −|u|² = 2Re(λ)·u'Pu,

hence u'Pu < 0.
Let ε > 0 be small enough so that

   2|x'Pw| ≤ 0.5|x|²  for  |w| ≤ ε|x|.
The results for the discrete time case are similar to Theorem 8.1, with the real parts of
the eigenvalues being replaced by the difference between their absolute values and 1.
Let us call an equilibrium x0 of (8.2) exponentially stable if there exist positive real
numbers δ, r, C such that every solution x : [0, T] → X with |x(0) − x0| < δ satisfies

   |x(t) − x0| ≤ C·e^{−rt}·|x(0) − x0|  ∀ t = 0, 1, 2, . . . .
Theorem 8.2 Assume that a(x0) = 0 and condition (8.3) is satisfied. Then
(a) if A = a'(x0) is a Schur matrix (i.e. if all eigenvalues of A have absolute value less
than one) then x0 is a (locally) exponentially stable equilibrium of (8.2);
(b) if A = a'(x0) has an eigenvalue with absolute value not smaller than 1 then x0 is not an
exponentially stable equilibrium of (8.2);
(c) if A = a'(x0) has an eigenvalue with absolute value strictly larger than 1 then x0 is
not a stable equilibrium of (8.2).
8.2
In this subsection we assume for simplicity that x0 = 0 is the studied equilibrium of (8.1),
i.e. a(0) = 0. Assume also that a is k times continuously differentiable in a neighborhood
of x0 = 0, where k ≥ 1, and that A = a'(0) has no eigenvalues with positive real part, but
has eigenvalues on the imaginary axis, as well as in the open left half plane Re(s) < 0.
Then a linear change of coordinates brings A into the block-diagonal form

   A = [Ac, 0; 0, As],

where As is a Hurwitz matrix, and all eigenvalues of Ac have zero real part.
Theorem 8.3 Let a : Rⁿ → Rⁿ be k ≥ 2 times continuously differentiable in a neighbor-
hood of x0 = 0. Assume that a(0) = 0 and

   a'(0) = A = [Ac, 0; 0, As],

where As is a Hurwitz p-by-p matrix, and all eigenvalues of the q-by-q matrix Ac have
zero real part. Then
(a) there exist ε > 0 and a function h : R^q → R^p, k − 1 times continuously differ-
entiable in a neighborhood of the origin, such that h(0) = 0, h'(0) = 0, and every
solution x(t) = [xc(t); xs(t)] of (8.1) with xs(0) = h(xc(0)) and with |xc(0)| < ε
satisfies xs(t) = h(xc(t)) for as long as |xc(t)| < ε;
(b) for every function h from (a), the equilibrium x0 = 0 of (8.1) is locally stable
(asymptotically stable) [unstable] if and only if the equilibrium xc = 0 of the ODE

   ẋc(t) = ac([xc(t); h(xc(t))])                                             (8.4)

is locally stable (asymptotically stable) [unstable], where ac denotes the vector of the
first q components of a. The set

   Mc = { x = [xc; h(xc)] : |xc| < ε },
where ε > 0 is small enough, is called the central manifold of (8.1). Theorem 8.3, frequently
called the center manifold theorem, allows one to reduce the dimension of the system
to be analyzed from n to q, as long as the function h defining the central manifold can be
calculated exactly, or to a sufficient degree of accuracy to judge local stability of (8.4).
Example 8.3 This example is taken from Sastry, p. 312. Consider the system

   ẋ1(t) = −x1(t) + k·x2(t)²,
   ẋ2(t) = x1(t)·x2(t),

where k is a real parameter. In this case n = 2, p = q = 1, Ac = 0, As = −1, and the
smoothness degree can be taken arbitrarily large. According to Theorem 8.3, there exists a
many times differentiable function h : R → R such that x1 = h(x2) defines an invariant
manifold of the ODE (at least in a neighborhood of the origin). Hence

   k·y² = h(y) + h'(y)·h(y)·y

for all sufficiently small y. For the 4th order Taylor series expansion
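A sketch of the expansion check: matching powers of y in k·y² = h + h'hy gives h(y) = k·y² − 2k²·y⁴ + O(y⁶) (this coefficient computation is not shown in the text and should be treated as an assumption), which can be verified numerically:

```python
k = 3.0   # the parameter from Example 8.3 (any fixed value works)

# Candidate 4th-order center manifold expansion and its derivative.
def h(y):
    return k * y**2 - 2 * k**2 * y**4

def hp(y):
    return 2 * k * y - 8 * k**2 * y**3

# The invariance residual  k*y^2 - h(y) - h'(y)*h(y)*y  should be O(y^6);
# its leading term works out to 12*k^3*y^6.
def residual(y):
    return k * y**2 - h(y) - hp(y) * h(y) * y

r1, r2 = residual(1e-2), residual(1e-3)
assert abs(r1 / (1e-2)**6 - 12 * k**3) < 1.0
assert abs(r2) < abs(r1) * 1e-5   # drops by ~10^6 when y drops by 10
```

On the manifold the reduced dynamics are ẋ2 = h(x2)·x2 ≈ k·x2³, so by Example 8.2 the origin is asymptotically stable for k < 0 and unstable for k > 0.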
9.1
Consider an ODE model depending on a parameter μ ∈ R^k:

   ẋ(t) = a(x(t), t, μ),  x(t0) = x̄0(μ),                                    (9.1)

where μ is a parameter. When a and x̄0 are differentiable with respect to μ, the solution
x(t) = x(t, μ) is differentiable with respect to μ as well. Moreover, the derivative of x(t, μ)
with respect to μ can be found by solving a linear ODE with time-varying coefficients.
Theorem 9.1 Let a : Rⁿ × R × R^k → Rⁿ be a continuous function, μ0 ∈ R^k. Let
x0 : [t0, t1] → Rⁿ be a solution of (9.1) with μ = μ0. Assume that a is continuously
differentiable with respect to its first and third arguments on an open set X such that
(x0(t), t, μ0) ∈ X for all t ∈ [t0, t1]. Then for all μ in a neighborhood of μ0 the ODE in (9.1)
has a unique solution x(t) = x(t, μ). This solution is a continuously differentiable function
of μ, and its derivative with respect to μ at μ = μ0 equals Δ(t), where Δ : [t0, t1] → R^{n×k}
is the n-by-k matrix-valued solution of the ODE

   Δ̇(t) = A(t)Δ(t) + B(t),  Δ(t0) = Δ0,                                     (9.2)

where A(t) and B(t) denote the derivatives of a(x̄, t, μ) with respect to x̄ and μ, evaluated
at (x0(t), t, μ0), and Δ0 is the derivative of x̄0(μ) at μ = μ0.
Proof Existence and uniqueness of x(t, μ) and Δ(t) follow from Theorem 3.1. Hence, in
order to prove differentiability and the formula for the derivative, it is sufficient to show
that there exist a function C : R+ → R+ such that C(r)/r → 0 as r → 0 and ε0 > 0 such
that

   |x(t, μ) − Δ(t)(μ − μ0) − x0(t)| ≤ C(|μ − μ0|)

whenever |μ − μ0| ≤ ε0. Indeed, due to the continuous differentiability of a, there exist C1, δ0
such that

   |a(x̄, t, μ) − a(x0(t), t, μ0) − A(t)(x̄ − x0(t)) − B(t)(μ − μ0)| ≤ C1(|x̄ − x0(t)| + |μ − μ0|)

and

   |x̄0(μ) − x̄0(μ0) − Δ0(μ − μ0)| ≤ C1(|μ − μ0|)

whenever

   |x̄ − x0(t)| + |μ − μ0| ≤ δ0,  t ∈ [t0, t1].

Hence, for

   δ(t) = x(t, μ) − x0(t) − Δ(t)(μ − μ0)

we have

   Δ̇(t) = sin(t),  Δ(0) = 0,

i.e. Δ(t) = 1 − cos(t). Hence

   y(t) = t + μ(1 − cos(t)) + O(μ²)

for small μ.
9.2
Consider the time-varying ODE model

   ẋ(t) = f(x(t), t)                                                         (9.3)

with a solution x0 satisfying the periodicity conditions

   f(x̄, t + τ) = f(x̄, t),  x0(t + τ) = x0(t)  ∀ t ∈ R, x̄ ∈ Rⁿ.             (9.4)

Note that while the first equation in (9.4) means that f is periodic in t with period τ,
it is possible that ẋ0 ≡ 0, in which case the second equation in (9.4) does not bring any
additional information.
To derive a stability criterion for periodic solutions x0 : R → Rⁿ of (9.3), assume
continuous differentiability of the function f = f(x̄, t) with respect to the first argument x̄
for |x̄ − x0(t)| ≤ ε, where ε > 0 is small, and differentiate the solution as a function of
the initial condition x(0) ≈ x0(0).
Theorem 9.2 Let f : Rⁿ × R → Rⁿ be a continuous function satisfying (9.4). Let
x0 : R → Rⁿ be a τ-periodic solution of (9.3). Assume that there exists ε > 0 such
that f is continuously differentiable with respect to its first argument for |x̄ − x0(t)| < ε
and t ∈ R. For

   A(t) = (df/dx̄)|_{x̄ = x0(t)},                                             (9.9)

define Φ : [0, τ] → R^{n×n} to be the n-by-n matrix solution of the linear ODE

   Φ̇(t) = A(t)Φ(t),  Φ(0) = I.                                              (9.10)
Then
(a) x0(·) is exponentially stable if Φ(τ) is a Schur matrix (i.e. if all eigenvalues of Φ(τ)
have absolute value less than one);
(b) x0(·) is not exponentially stable if Φ(τ) has an eigenvalue with absolute value greater
than or equal to 1;
(c) x0(·) is not stable if Φ(τ) has an eigenvalue with absolute value greater than 1.
The matrix-valued function Φ = Φ(t) is called the evolution matrix of the linear system
(9.10). The proof of Theorem 9.2 follows the same path as the proof of the similar theorem
for stability of equilibria, using time-varying quadratic Lyapunov functions.
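The evolution matrix Φ(τ) of (9.10) can be computed by numerical integration; a sketch assuming scipy is available (the periodic matrix A(t) below is an illustrative choice, and the constant-matrix case is included as a sanity check against the matrix exponential):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

tau = 2 * np.pi   # period

def A(t):
    # A sample tau-periodic matrix: a small periodic perturbation of a
    # triangular Hurwitz matrix (illustrative choice).
    return np.array([[-1.0, np.sin(t)],
                     [0.0, -0.5 + 0.1 * np.cos(t)]])

# Integrate Phi' = A(t) Phi, Phi(0) = I, over one period.
def rhs(t, phi):
    return (A(t) @ phi.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, tau), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi = sol.y[:, -1].reshape(2, 2)

# Floquet multipliers: all inside the unit circle here, so by Theorem 9.2(a)
# the corresponding periodic solution would be exponentially stable.
assert np.all(np.abs(np.linalg.eigvals(Phi)) < 1)

# Sanity check: for constant A the evolution matrix is the matrix exponential.
A0 = np.array([[0.0, 1.0], [-1.0, -0.2]])
sol0 = solve_ivp(lambda t, p: (A0 @ p.reshape(2, 2)).ravel(),
                 (0.0, tau), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
assert np.allclose(sol0.y[:, -1].reshape(2, 2), expm(tau * A0), atol=1e-6)
```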
9.2.2
Consider now the autonomous ODE model

   ẋ(t) = a(x(t)).                                                           (9.11)

A non-constant periodic solution x0 = x0(t) of (9.11) is called a stable limit cycle if

(a) for every ε > 0 there exists δ > 0 such that dist(x(t), x0(·)) < ε for all t ≥ 0 and all
solutions x = x(t) of (9.11) such that dist(x(0), x0(·)) < δ, where

   dist(x̄, x0(·)) = min_{t ∈ R} |x̄ − x0(t)|;

(b) there exists δ > 0 such that dist(x(t), x0(·)) → 0 as t → ∞ for every solution of
(9.11) such that dist(x(0), x0(·)) < δ.
Note that a non-constant periodic solution x0 = x0(t) of a time-invariant ODE
is never asymptotically stable because, as ε → 0, the initial conditions of the solution
xε(t) = x0(t + ε) approach the initial conditions of x0(·), but the difference xε(t) − x0(t)
does not converge to 0 as t → ∞ unless xε ≡ x0. Therefore, the notion of a stable limit
cycle is a relaxed version of asymptotic stability of a solution.
Theorem 9.3 Let a : Rⁿ → Rⁿ be a continuous function. Let x0 : R → Rⁿ
be a non-constant τ-periodic solution of (9.11). Assume that there exists ε > 0 such
that a is continuously differentiable on the set

   X = { x̄ ∈ Rⁿ : |x̄ − x0(t)| < ε for some t ∈ R }.

Let Φ : [0, τ] → R^{n×n} be defined by (9.9),(9.10). Then
(a) if all eigenvalues of Φ(τ) except one have absolute value less than 1, x0(·) is a stable
limit cycle;
(b) if one eigenvalue of Φ(τ) has absolute value greater than 1, x0(·) is not a stable
limit cycle.
10.1
Consider the singularly perturbed system

   ẋ(t) = f(x(t), y(t), t),  ε·ẏ(t) = g(x(t), y(t), t),                      (10.1)

where ε ∈ [0, ε0] is a small positive parameter. When ε > 0, (10.1) is an ODE model.
For ε = 0, (10.1) is a combination of algebraic and differential equations. Models such
as (10.1), where y represents a set of less relevant, fast changing parameters, are fre-
quently studied in physics and mechanics. One can say that singular perturbation is the
classical approach to dealing with uncertainty, complexity, and nonlinearity.
10.1.1
A typical question asked about the singularly perturbed system (10.1) is whether its
solutions with ε > 0 converge to the solutions of (10.1) with ε = 0 as ε → 0. A suffi-
cient condition for such convergence is that the Jacobian of g with respect to its second
argument should be a Hurwitz matrix in the region of interest.
Theorem 10.1 Let x0 : [t0, t1] → Rⁿ and y0 : [t0, t1] → Rᵐ be continuous functions
satisfying the equations

   ẋ0(t) = f(x0(t), y0(t), t),  0 = g(x0(t), y0(t), t),

where f : Rⁿ × Rᵐ × R → Rⁿ and g : Rⁿ × Rᵐ × R → Rᵐ are continuous functions.
Assume that f, g are continuously differentiable with respect to their first two arguments
in a neighborhood of the trajectory x0(t), y0(t), and that the derivative

   A(t) = g₂(x0(t), y0(t), t)

(the Jacobian of g with respect to its second argument) is a Hurwitz matrix for all
t ∈ [t0, t1]. Then for every t2 ∈ (t0, t1) there exist d > 0 and C > 0 such that the
inequalities |x0(t) − x(t)| ≤ Cε for all t ∈ [t0, t1] and |y0(t) − y(t)| ≤ Cε for all t ∈ [t2, t1]
hold for all solutions of (10.1) with |x(t0) − x0(t0)| ≤ dε, |y(t0) − y0(t0)| ≤ d, and
ε ∈ (0, d).
The theorem was originally proven by A. Tikhonov in the 1930s. It expresses a simple
principle, which suggests that, for small ε > 0, x = x(t) can be considered constant
when predicting the behavior of y. From this viewpoint, for a given t ∈ (t0, t1), one can
expect that

   y(t + εs) ≈ y1(s),

where y1 : [0, ∞) → Rᵐ is the solution of the fast motion ODE

   ẏ1(s) = g(x0(t), y1(s), t),  y1(0) = y(t).

Since y0(t) is an equilibrium of this ODE, the standard linearization around the
equilibrium yields

   δ̇(s) ≈ A(t)δ(s),

where δ(s) = y1(s) − y0(t), and one can expect that y1(s) → y0(t) exponentially as s → ∞
whenever A(t) is a Hurwitz matrix and |y(t) − y0(t)| is small enough. Hence, when ε > 0
is small enough, one can expect that y(t) ≈ y0(t).
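This two-time-scale behavior can be observed on a model problem; a sketch (the system ẋ = −y, εẏ = x − y is an illustrative choice, not from the text; here g₂ = −1 is Hurwitz as the theorem requires, and the reduced slow dynamics are ẋ = −x):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model problem:  x' = -y,  eps*y' = x - y.  Setting eps = 0 gives y0 = x,
# so the reduced slow dynamics are x' = -x, i.e. x0(t) = x(0) * exp(-t).
def rhs(eps):
    return lambda t, z: [-z[1], (z[0] - z[1]) / eps]

eps = 1e-3
sol = solve_ivp(rhs(eps), (0.0, 5.0), [1.0, 0.0],   # y starts off the manifold
                rtol=1e-9, atol=1e-12, method="LSODA", dense_output=True)

ts = np.linspace(0.5, 5.0, 50)        # after the fast boundary layer
x = np.array([sol.sol(t)[0] for t in ts])
y = np.array([sol.sol(t)[1] for t in ts])

# x tracks the reduced solution and y tracks y0 = x, both to O(eps).
assert np.max(np.abs(x - np.exp(-ts))) < 50 * eps
assert np.max(np.abs(y - x)) < 50 * eps
```

The initial mismatch y(0) ≠ x(0) decays like e^{−s} in the fast time s = t/ε, which is why the comparison starts at t = 0.5, well past the boundary layer.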
10.1.2
First, let us show that the interval [t0, t1] can be subdivided into subintervals Δk =
[τk−1, τk], where k ∈ {1, 2, . . . , N} and t0 = τ0 < τ1 < · · · < τN = t1, in such a way that
for every k there exists a symmetric matrix Pk = Pk' > 0 for which

   Pk·A(t) + A(t)'·Pk < −I  ∀ t ∈ [τk−1, τk].

Indeed, since A(t) is a Hurwitz matrix for every t ∈ [t0, t1], there exists P(t) = P(t)' > 0
such that

   P(t)A(t) + A(t)'P(t) < −I.

Since A depends continuously on t, there exists an open interval Δ(t) such that t ∈ Δ(t)
and

   P(t)A(θ) + A(θ)'P(t) < −I  ∀ θ ∈ Δ(t).

Now the open intervals Δ(t) with t ∈ [t0, t1] cover the whole closed bounded interval
[t0, t1], and taking a finite number of tk, k = 1, . . . , m such that [t0, t1] is completely
covered by the Δ(tk) yields the desired subdivision of [t0, t1].
Second, note that, due to the continuous differentiability of f, g, there exist C, r > 0 such that

|f(x0(t) + δx, y0(t) + δy, t) − f(x0(t), y0(t), t)| ≤ C(|δx| + |δy|),
|g(x0(t) + δx, y0(t) + δy, t) − g(x0(t), y0(t), t)| ≤ C(|δx| + |δy|)

whenever |δx| + |δy| ≤ r. Introduce the norms

|ȳ|k = (ȳ′Pk ȳ)^{1/2}.
Then, for

δx(t) = x(t) − x0(t), δy(t) = y(t) − y0(t),

we have

d|δx|/dt ≤ C1(|δx| + |δy|k),
ε d|δy|k/dt ≤ −q|δy|k + C1|δx| + C1ε
(10.2)

as long as δx, δy are sufficiently small, where C1, q are positive constants which do not depend on k. Combining these two derivative bounds yields

d/dt (|δx| + (C1/q)|δy|k) ≤ C2|δx| + C2ε

for some constant C2 independent of k. Hence

|δx(τk−1 + τ)| ≤ e^{C3τ}(|δx(τk−1)| + (C1/q)|δy(τk−1)|k) + C3ε

for τ ∈ [0, τk − τk−1]. With the aid of this bound for the growth of |δx|, inequality (10.2) yields a bound for |δy|k:

|δy(τk−1 + τ)|k ≤ exp(−qτ/ε)|δy(τk−1)|k + C4(|δx(τk−1)| + (C1/q)|δy(τk−1)|k) + C4ε,

which in turn yields the result of Theorem 10.1.
10.2
Averaging
Consider the system

ẋ(t) = εf(x(t), t, ε), (10.3)

where f is T-periodic with respect to its second argument and f(0, t, ε) ≡ 0, and define the averaged function

f̄(x̄, ε) = (1/T) ∫₀ᵀ f(x̄, t, ε) dt.

Theorem 10.2 If df̄/dx̄ at x̄ = 0, ε = 0 is a Hurwitz matrix then, for sufficiently small ε > 0, the equilibrium x ≡ 0 of system (10.3) is exponentially stable.
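As a quick numerical illustration (an example system of my choosing, not from the notes), one can integrate the variational equation Φ̇ = εA(t)Φ over one period for a time-varying linear system whose averaged matrix is Hurwitz, and check that the resulting monodromy matrix is Schur:

```python
import numpy as np

# Illustrative check of the averaging criterion (example system, not from the notes):
#   x'(t) = eps * A(t) x(t),  A(t) = [[-1, 2*cos(t)], [2*sin(t), -1]],  T = 2*pi.
# The averaged matrix is -I (Hurwitz), so for small eps the monodromy
# matrix Phi(T, eps) should have spectral radius < 1.
eps, T, n = 0.05, 2 * np.pi, 20000
A = lambda t: np.array([[-1.0, 2 * np.cos(t)], [2 * np.sin(t), -1.0]])
Phi, dt = np.eye(2), T / n
for k in range(n):
    Phi = Phi + dt * eps * A(k * dt) @ Phi  # Euler step for Phi' = eps*A(t)*Phi
rho = max(abs(np.linalg.eigvals(Phi)))
print(rho < 1.0)  # True for sufficiently small eps
```

Here rho is approximately exp(−2πε) ≈ 0.73, matching the expansion Φ(T, ε) = I + εΔ(T) + o(ε) used in the proof below.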
Though the parameter dependence in Theorem 10.2 is continuous, the question asked is about the behavior at t = ∞, which makes the system behavior for ε = 0 an invalid indicator of what will occur for ε > 0 sufficiently small. (Indeed, for ε = 0 the equilibrium x ≡ 0 is not asymptotically stable.)
To prove Theorem 10.2, consider the function S : Rn × R → Rn which maps x(0), ε to x(T) = S(x(0), ε), where x(·) is a solution of (10.3). It is sufficient to show that the derivative (Jacobian) S′(x̄, ε) of S with respect to its first argument, evaluated at x̄ = 0 and ε > 0 sufficiently small, is a Schur matrix. Note first that, according to the rules of differentiating with respect to initial conditions, S′(0, ε) = Φ(T, ε), where

dΦ(t, ε)/dt = ε (df/dx)(0, t, ε) Φ(t, ε), Φ(0, ε) = I.
Consider Φ̃(t, ε) defined by

dΦ̃(t, ε)/dt = ε (df/dx)(0, t, 0) Φ̃(t, ε), Φ̃(0, ε) = I.

Let Δ(t) be the derivative of Φ̃(t, ε) with respect to ε at ε = 0. According to the rule for differentiating solutions of ODE with respect to parameters,

Δ(t) = ∫₀ᵗ (df/dx)(0, t1, 0) dt1.

Hence

Δ(T) = T df̄/dx̄ |x̄=0, ε=0

is by assumption a Hurwitz matrix. On the other hand,

Φ(T, ε) − Φ̃(T, ε) = o(ε).

Combining this with

Φ̃(T, ε) = I + εΔ(T) + o(ε)

yields

Φ(T, ε) = I + εΔ(T) + o(ε).

Since Δ(T) is a Hurwitz matrix, this implies that all eigenvalues of Φ(T, ε) have absolute value strictly less than one for all sufficiently small ε > 0.
by A. Megretski
11.1
Weighted volume
For x̄ = (x̄1, …, x̄n) ∈ Rn and r > 0, the weighted volume of the cube Q(x̄, r) with respect to a weight ρ is

Vρ(Q(x̄, r)) = ∫_{x̄1−r}^{x̄1+r} ∫_{x̄2−r}^{x̄2+r} … ∫_{x̄n−1−r}^{x̄n−1+r} ∫_{x̄n−r}^{x̄n+r} ρ(x1, x2, …, xn) dxn dxn−1 … dx2 dx1.
Without going into the fine details of measure theory, let us say that the weighted volume of a subset X ⊂ U with respect to ρ is well defined and equals M if there exists M > 0 such that for every ε > 0 there exist (countable) families of cubes {Q1k} and {Q2k} (all contained in U) such that X is contained in the union of the Q1k, the union of all Q2k is contained in the union of X and the Q1k, the cubes Q2k have empty pairwise intersections, and

Σk Vρ(Q1k) → M, Σk Vρ(Q2k) → M

as ε → 0. A common alternative notation for Vρ(X) is

Vρ(X) = ∫_{x∈X} ρ(x) dx.
The rules for variable change in integration allow one to trace the change of weighted
volume under a smooth transformation.
Theorem 11.1 Let U be an open subset of Rn. Let F : U → U be an injective Lipschitz function which is differentiable on an open subset U0 of U such that the complement of U0 in U has zero Lebesgue volume. Let ρ : U → R be a given measurable function which is bounded on every compact subset of U. Then, if the ρ-weighted volume is defined for a subset X ⊂ U, the ρ-weighted volume is also defined for F(X), the ρF-weighted volume is defined for X, where

ρF(x̄) = ρ(F(x̄)) |det(dF/dx(x̄))| if dF/dx is defined at x̄, and ρF(x̄) = 0 otherwise,

and

Vρ(F(X)) = VρF(X).
Note that the formula is not always valid for non-injective functions (because of possible folding). It is also useful to remember that the image of a line segment (zero Lebesgue volume when n > 1) under a continuous map can cover a cube (positive Lebesgue volume).
11.1.3
Let us consider the case when the map F = St is defined by a smooth differential flow. Remember that, for a differentiable function g : Rn → Rn, div(g) is the trace of the Jacobian of g.
Theorem 11.2 Let U be an open subset of Rn. Let f : U → Rn and ρ : U → R be continuously differentiable functions. For T > 0 let UT be the set of vectors x̄ ∈ U such that the ODE

ẋ(t) = f(x(t))

has a solution x : [0, T] → U such that x(0) = x̄. Let ST : UT → U be the map defined by ST(x(0)) = x(T). Then, if X is contained in a compact subset of UT and has a ρ-weighted volume, the map t ↦ Vρ(St(X)) is well defined and differentiable, and its derivative at t = 0 is given by

dVρ(St(X))/dt = V_{div(ρf)}(X).
Proof According to Theorem 11.1,

Vρ(St(X)) = ∫_X ρ(St(x̄)) |det(dSt(x̄)/dx̄)| dx̄.

Note that

dSt(x̄)/dt |t=0 = f(x̄),

and that Φ(t, x̄) = dSt(x̄)/dx̄ satisfies

dΦ(t, x̄)/dt = (df/dx)(St(x̄)) Φ(t, x̄), Φ(0, x̄) = I.

Hence

d/dt |det(dSt(x̄)/dx̄)| |t=0 = d/dt det(Φ(t, x̄)) |t=0 = trace( dΦ(t, x̄)/dt |t=0 ) = div(f)(x̄),

where the identity

d/dτ det(A(τ)) = det(A(τ)) trace( A(τ)^{-1} dA(τ)/dτ )

was used. Finally, at t = 0,

d/dt [ρ(St(x̄)) |det(dSt(x̄)/dx̄)|] = (dρ/dx)(x̄) f(x̄) + ρ(x̄) div(f)(x̄) = div(ρf)(x̄).
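As a sanity check of this derivative formula in the simplest setting (my own illustrative values): for a linear field ẋ = Ax with ρ ≡ 1, the flow is St = e^{At}, so V(St(X)) = |det(e^{At})| V(X) = e^{t·trace(A)} V(X), whose derivative at t = 0 is trace(A)·V(X), the integral of div(f) = trace(A) over X.

```python
import numpy as np

# Liouville-type formula, linear case (illustrative): for x' = A x with rho = 1,
#   V(S_t(X)) = |det(e^{At})| * V(X) = exp(t * trace(A)) * V(X),
# so the relative volume growth rate at t = 0 equals trace(A) = div(f).
A = np.array([[0.2, -1.0], [0.7, -0.5]])
t, n = 1e-4, 10000
# crude matrix exponential via the Euler product e^{At} ~ (I + A t/n)^n
E = np.linalg.matrix_power(np.eye(2) + A * (t / n), n)
growth_rate = (np.abs(np.linalg.det(E)) - 1.0) / t
print(growth_rate)  # close to trace(A) = -0.3
```

The printed rate matches trace(A) up to O(t), independently of the off-diagonal entries of A, as the determinant identity in the proof predicts.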
11.2
Results from the previous section allow one to establish invariance (monotonicity) of weighted volumes of sets evolving according to dynamical system equations. This section discusses applications of such invariance in stability analysis.
11.2.1
(11.1)
for every solution x = x(·) of (11.1) such that x(0) belongs to the ball

B0 = {x̄ : |x̄ − x0(0)| ≤ ε}.

Let v(t) = Vρ(St(B0)), where St is the system flow. By assumption, v is monotonically non-decreasing and v(0) > 0, while v(t) → 0 as t → ∞. The contradiction proves the theorem.
11.2.2
Let us call a set X ⊂ Rn strictly invariant for system (11.1) if every maximal solution x = x(t) of (11.1) with x(0) ∈ X is defined for all t ∈ R and stays in X for all t ∈ R. Obviously, if X is a strictly invariant set then, for every weight ρ, Vρ(St(X)) does not change as t changes. Therefore, if one can find a ρ for which div(ρf) > 0 almost everywhere, the strict invariance of X should imply that X is a set of zero Lebesgue volume, i.e. the following theorem is true.
Theorem 11.4 Let U be an open subset of Rn. Let f : U → Rn and ρ : U → R be continuously differentiable functions. Assume that div(ρf) > 0 for almost all points of U. Then, if X is a bounded closed subset of U which is strictly invariant for system (11.1), the Lebesgue volume of X equals zero.
As a special case, when n = 2 and ρ ≡ 1, we get the Bendixson theorem, which claims that if, in a simply connected region U, div(f) > 0 almost everywhere, then there exist no non-equilibrium periodic trajectories of (11.1) in U. Indeed, a non-equilibrium periodic trajectory on a plane bounds a strictly invariant set.
11.2.3
So far, we considered weights which were bounded in the regions of interest. A recent observation by A. Rantzer shows that, when studying asymptotic stability of an equilibrium, it is most beneficial to consider weights which are singular at the equilibrium.
In particular, he has proven the following stability criterion.
Theorem 11.5 Let f : Rn → Rn and ρ : Rn \ {0} → R be continuously differentiable functions such that f(0) = 0, ρ(x)f(x)/|x| is integrable over the set {x : |x| ≥ 1}, and div(ρf) > 0 for almost all x ∈ Rn. If either ρ ≥ 0 or x = 0 is a locally stable equilibrium of (11.1), then for almost all initial states x(0) the corresponding solution x = x(t) of (11.1) converges to zero as t → ∞.
To prove the statement for the case when x = 0 is a stable equilibrium, for every r > 0 consider the set Xr of initial conditions x(0) for which

sup_{t∈[T,∞)} |x(t)| > r ∀T > 0.

The set Xr is strictly invariant with respect to the flow of (11.1), and has a well defined ρ-weighted volume. Hence, by the strict ρ-weighted volume monotonicity, the Lebesgue measure of Xr equals zero. Since this is true for all r > 0, almost every solution of (11.1) converges to the origin.
Example 11.1 (Rantzer) The system

ẋ1 = −2x1 + x1² − x2²,
ẋ2 = −6x2 + 2x1x2

satisfies the conditions of Theorem 11.5 with ρ(x) = |x|^{−4}.
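A direct calculation for Rantzer's example f(x) = (−2x1 + x1² − x2², −6x2 + 2x1x2) with ρ(x) = |x|^{−4} gives (by hand, worked out here as a check rather than quoted from the notes) div(ρf) = 16x2²/(x1² + x2²)³, which is positive everywhere off the line x2 = 0. The sketch below verifies this numerically:

```python
import random

# Numerical spot-check (illustrative) of div(rho*f) for Rantzer's example:
#   f(x) = (-2*x1 + x1**2 - x2**2, -6*x2 + 2*x1*x2),  rho(x) = |x|**-4.
# Hand computation gives div(rho*f) = 16*x2**2 / (x1**2 + x2**2)**3 > 0 a.e.
def rho_f(x1, x2, i):
    rho = (x1 * x1 + x2 * x2) ** -2
    f = (-2 * x1 + x1 * x1 - x2 * x2, -6 * x2 + 2 * x1 * x2)
    return rho * f[i]

def div_rho_f(x1, x2, h=1e-6):
    d1 = (rho_f(x1 + h, x2, 0) - rho_f(x1 - h, x2, 0)) / (2 * h)
    d2 = (rho_f(x1, x2 + h, 1) - rho_f(x1, x2 - h, 1)) / (2 * h)
    return d1 + d2

random.seed(0)
for _ in range(100):
    x1, x2 = random.uniform(-2, 2), random.uniform(0.1, 2)
    expected = 16 * x2 * x2 / (x1 * x1 + x2 * x2) ** 3
    assert abs(div_rho_f(x1, x2) - expected) < 1e-3 * max(1.0, expected)
print("div(rho*f) matches 16*x2**2/|x|**6 at all sample points")
```

Note the singularity of ρ at the origin, which is exactly the point of Rantzer's observation.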
(12.1)
12.1
A(t) = (da/dx)(x0(t), u0(t)), B(t) = (da/du)(x0(t), u0(t))
(12.3)
is a linear integral dependence, the function ū can be chosen to belong to any subclass which is dense in L1(0, T). For example, ū(t) can be selected from the class of polynomials, the class of piecewise constant functions, etc.
Note that controllability over an interval Δ implies controllability over every interval Δ+ ⊇ Δ, but in general does not imply controllability over the intervals contained in Δ. Also, controllability of system (12.2) in which A(t) ≡ A0 and B(t) ≡ B0 are constant is equivalent to controllability of the pair (A0, B0).
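For the constant-coefficient case, controllability of the pair can be checked via the standard Kalman rank criterion; a minimal sketch (double-integrator values chosen for illustration):

```python
import numpy as np

# Kalman rank check (illustrative): constant (A0, B0) is controllable iff
#   rank [B0, A0 B0, ..., A0^{n-1} B0] = n.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B0 = np.array([[0.0], [1.0]])
ctrb = np.hstack([np.linalg.matrix_power(A0, k) @ B0 for k in range(2)])
rank = int(np.linalg.matrix_rank(ctrb))
print(rank)  # 2, so the pair (A0, B0) is controllable
```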
12.1.2
there exist functions x : [0, T] → Rn, u : [0, T] → Rm satisfying the ODE in (12.2) and the conditions

x(0) = x0, x(T) = xT, |x(t) − x0(t)| < ε, |u(t) − u0(t)| < ε ∀t ∈ [0, T].
Σ_{k=1} wk uk(t),
is well defined and continuously differentiable when ε > 0 is sufficiently small. The derivative of S with respect to w at w = v̄ = 0 is the identity. Hence, by the implicit mapping theorem, the equation S(w, v̄) = x̄ has a solution w close to zero whenever |v̄| and |x̄ − x0(T)| are small enough.
12.2
In this section we consider ODE models in which the right side is linear with respect to the control variable, i.e. when (12.1) has the special form

ẋ(t) = g(x(t))u(t) = Σ_{k=1}^m gk(x(t))uk(t), x(0) = x0. (12.4)
Let us say that system (12.4) is locally controllable at a point x̄0 ∈ X0 if for every ε > 0, T > 0, and x̄ ∈ X0 such that |x̄ − x̄0| < ε there exists a bounded measurable function u : [0, T] → Rm defining a solution of (12.4) with x(0) = x̄0 such that x(T) = x̄ and |x(t) − x̄0| < ε ∀t ∈ [0, T].
The local controllability conditions to be presented in this section are based on the notion of a Lie bracket. Let us write h3 = [h1, h2] (which reads as "h3 is the Lie bracket of h1 and h2") when hk : X0 → Rn are continuous functions defined on an open subset X0 of Rn, functions h1, h2 are continuously differentiable on X0, and

h3(x̄) = h1′(x̄)h2(x̄) − h2′(x̄)h1(x̄)

for all x̄ ∈ X0, where hk′(x̄) denotes the Jacobian of hk at x̄.
The reasoning behind the denition, as well as a more detailed study of the properties
of Lie brackets, will be postponed until the proof of the controllability results of this
subsection.
Let us call a set of functions hk : X0 → Rn (k = 1, …, q) complete at a point x̄ ∈ X0 if either the vectors hi(x̄) with i = 1, …, q span the whole Rn, or there exist functions hk : X0 → Rn (k = q + 1, …, N) such that for every k > q we have hk = [hi, hs] for some i, s < k, and the vectors hi(x̄) with i = 1, …, N span the whole Rn.
Theorem 12.3 If C∞ functions gk : X0 → Rn form a complete set at x̄0 ∈ X0 then system (12.4) is locally controllable at x̄0.
Theorem 12.3 provides a sufficient criterion of local controllability in terms of the span of all vector fields which can be generated by applying the Lie bracket operation repeatedly to the gk. This condition is not necessary, as can be seen from the following example: the second order system

ẋ1 = u1,
ẋ2 = φ(x1)u2,

where the function φ : R → R is infinitely many times continuously differentiable and such that

φ(0) = 0, φ(y) > 0 for y ≠ 0, φ^{(k)}(0) = 0 ∀k,

is locally controllable at every point x̄0 ∈ R2 despite the fact that the corresponding set of vector fields

g1(x) = [1; 0], g2(x) = [0; φ(x1)]

is not complete at x̄ = 0. On the other hand, the example of the system

ẋ = xu,

which is not locally controllable at x = 0, but is defined by a (single element) set of vector fields which is complete at every point except x = 0, shows that there is little room for relaxing the sufficient conditions of Theorem 12.3.
12.2.2
Let S denote the set of all continuous functions s : Ωs → X0, where Ωs is an open subset of R × X0 containing {0} × X0 (Ωs is allowed to depend on s). Let Sk ∈ S be the elements of S defined by

Sk(τ, x̄) = x(τ), where ẋ(t) = gk(x(t)), x(0) = x̄.
Let Sg be the subset of S which consists of all functions which can be obtained by the recursion

s_{k+1}(x̄, τ) = S_{σ(k)}(φk(τ), sk(x̄, τ)), s0(x̄, τ) = x̄,

where σ(k) ∈ {1, 2, …, m} and φk : R → R are continuous functions such that φk(0) = 0.
One can view elements of Sg as admissible state transitions in system (12.4) with piecewise constant control depending on a parameter τ in such a way that τ = 0 corresponds to the identity transition. Note that for every s ∈ Sg there exists an inverse s− ∈ Sg such that

s(s−(x̄, τ), τ) = x̄ ∀(x̄, τ) ∈ Ωs−,

defined by applying the inverses S_{σ(k)}(−φk(τ), ·) of the basic transformations S_{σ(k)}(φk(τ), ·) in the reverse order.
Let us call a C∞ function h : X0 → Rn implementable in control system (12.4) if for every integer k > 0 there exists a function s ∈ Sg which is k times continuously differentiable in the region τ ≥ 0 and in the region τ ≤ 0, such that

s(x̄, τ) = x̄ + τh(x̄) + o(τ). (12.5)

Note that the differential flow S^h of the ODE ẋ = h(x) satisfies

S^h(x̄, t) = x̄ + th(x̄) + (t²/2) h′(x̄)h(x̄) + O(t³)

as t → 0. This is not necessarily true for a general transition s from the definition of an implementable vector field h. However, the next lemma shows that s can always be chosen to match the first k Taylor coefficients of S^h.
Lemma 12.1 If h is implementable then for every integer k > 0 there exists a k times continuously differentiable function s ∈ Sg such that

s(x̄, τ) = S^h(x̄, τ) + O(τ^k). (12.6)
satisfies

s_{a,b}(x̄, τ) = S^h(x̄, (2a − b)τ) + (2a^k − b^k)τ^k w(x̄) + O(τ^{k+1}).

Choosing a, b so that

2a − b = 1, 2a^k = b^k

yields (12.6) with k increased by 1.

After (12.6) is established for τ ≥ 0, s can be defined for negative arguments by

s(x̄, τ) = s−(x̄, −τ), τ ≤ 0,

which makes it k − 1 times continuously differentiable.
The next lemma is a key result explaining the importance of Lie brackets in controllability analysis.

Lemma 12.2 If vector fields h1, h2 are implementable then so is their Lie bracket h = [h2, h1].
Proof By Lemma 12.1, there exist 2k + 2 times continuously differentiable (for τ ≥ 0) functions s1, s2 ∈ Sg such that

si(x̄, τ) = x̄ + τhi(x̄) + (τ²/2) hi′(x̄)hi(x̄) + o(τ²).

Define

s_{i+1}(x̄, τ) = si(si(x̄, τ/√2), τ/√2).

By induction,

s_{i+2}(x̄, τ) = x̄ + Σ_{q=1} τ^{2q} η_{iq}(x̄) + o(τ^{2i}),

i.e. the transformation from si to s_{i+1} removes the smallest odd power of τ in the Taylor expansion for si. Hence

s(x̄, τ) = s_{2k+2}(x̄, √τ), τ ≥ 0,

defines a k times continuously differentiable function for sufficiently small τ ≥ 0, and

s(x̄, τ) = x̄ + τh(x̄) + o(τ)

for τ ≥ 0, τ → 0.
12.2.3
Frobenius Theorem
In the example

n = 2, m = 1, x̄0 = 0 ∈ R², g([x1; x2]) = [x2; x1],

the distribution defined by g is smooth and involutive (because [g, g] = 0 for every vector field g), but not regular at x̄0. Consequently, the conclusion of Theorem 12.4 does not hold at x̄0 = 0, but is nevertheless valid in a neighborhood of every other point.
The locality of complete integrability is also essential for the theorem. For example, the vector field

g([x1; x2; x3]) = [−x2 + x1(1 − x1² − x2²)²; x1 + x2(1 − x1² − x2²)²; x3]

defines a smooth regular involutive distribution on the whole R³. However, the distribution is not completely integrable over R³, while it is still completely integrable in a neighborhood of every point.
12.2.4
The implication (a)⇒(b) follows straightforwardly from the reachability properties of Lie brackets. Let us prove the implication (b)⇒(a).
Let Sk^t denote the differential flow map associated with gk, i.e. Sk^t(x̄) = x(t), where x = x(t) is the solution of

ẋ(t) = gk(x(t)), x(0) = x̄.

Let Δ(x̄) denote the span of g1(x̄), …, gm(x̄). The following statement, which relies on both regularity and involutivity of the family {gk}_{k=1}^m, states that the Jacobian Dk^t(x̄) of Sk^t at x̄ maps Δ(x̄) onto Δ(Sk^t(x̄)). This is a generalization of the (obvious) fact that, for a single vector field g : Rn → Rn, moving the initial condition x(0) of a solution x = x(t) of dx/dt = g(x) by εg(x(0)) results in x(t) shifted by εg(x(t)) + o(ε).
Lemma 12.3 Under the assumptions of Theorem 12.4,

Dk^t(x̄)Δ(x̄) = Δ(Sk^t(x̄)).
Proof According to the rules for differentiation with respect to initial conditions, for a fixed x̄, Dk(t) = Dk^t(x̄) satisfies the ODE

dDk(t)/dt = gk′(x(t))Dk(t), Dk(0) = I,

where x(t) = Sk^t(x̄), and gk′(x̄) denotes the Jacobian of gk at x̄. Hence

D̃k(t) = Dk(t)g(x̄), where g(x̄) = [g1(x̄) g2(x̄) … gm(x̄)],

satisfies

dD̃k(t)/dt = gk′(x(t))D̃k(t), D̃k(0) = g(x̄). (12.7)
Note that (12.7) is an ODE with a unique solution. Hence, it is sufficient to show that (12.7) has a solution of the form

D̃k(t) = g(x(t))Ψ(t), (12.8)

for some m-by-m matrix function Ψ = Ψ(t). Substituting (12.8) into (12.7) produces a term A(t)Ψ(t), where A(t) is the n-by-m matrix with columns g_{ki}(x(t)), g_{ki} = [gk, gi]. By involutivity and regularity, A(t) = g(x(t))a(t) for some continuous m-by-m matrix valued function a = a(t). Thus, the equation for Ψ(t) becomes

Ψ̇(t) = a(t)Ψ(t), Ψ(0) = I,

hence existence of Ψ(t) such that D̃k(t) = g(x(t))Ψ(t) is guaranteed.
Let g_{m+1}, …, gn be C∞ smooth functions gi : Rn → Rn such that the vectors g1(x̄0), …, gn(x̄0) form a basis in Rn. (For example, the functions gi with i > m can be chosen constant.) Consider the map

F(z) = S1^{z1}(S2^{z2}(… (Sn^{zn}(x̄0)) …)),

defined and k times continuously differentiable for z = [z1; …; zn] in a neighborhood of zero in Rn. Since the Jacobian F′(0) of F at zero, given by

F′(0) = [g1(x̄0) g2(x̄0) … gn(x̄0)],

is not singular, by the implicit mapping theorem there exists a k times continuously differentiable function

z = H(x) = [hn(x); hn−1(x); …; h1(x)]

defined in a neighborhood of x̄0, such that F(H(x)) ≡ x.
Let us show that the functions hi = hi(x) satisfy the requirements of Theorem 12.4. Indeed, differentiating the identity F(H(x)) ≡ x yields

F′(H(x))H′(x) = I. (12.9)
13.1
In this section, we give a motivating example and state the technical objectives of the theory of feedback linearization.
13.1.1
A typical model of a mechanical system has the form

q̈(t) = M(q(t))^{-1}u(t) − F(q(t), q̇(t)), (13.1)

where q(t) ∈ Rk is the position vector, u(t) is the vector of actuation forces and torques, F : Rk × Rk → Rk is a given vector-valued function, and M : Rk → R^{k×k} is a given function taking positive definite symmetric matrix values (the inertia matrix). When u = u(t) is fixed (for example, when u(t) = u0 cos(t) is a harmonic excitation), analysis of (13.1) is usually an extremely difficult task. However, when u(t) is an unrestricted control effort to be chosen, a simple change of control variable

u(t) = M(q(t))(v(t) + F(q(t), q̇(t))) (13.2)

transforms (13.1) into the double integrator model

q̈(t) = v(t). (13.3)
The transformation from (13.1) to (13.3) is a typical example of feedback linearization, which uses a strong control authority to simplify system equations. For example, when (13.1) is an underactuated model, i.e. when u(t) is restricted to a given subspace in Rk, the transformation in (13.2) is not valid. Similarly, if u(t) must satisfy an a-priori bound, conversion from v to u according to (13.2) is not always possible.

In addition, feedback linearization relies on access to accurate information, in the current example precise knowledge of the functions M, F and precise measurement of the coordinates q(t) and velocities q̇(t).
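A minimal simulation sketch of this change of control variable, for a pendulum with the hypothetical normalization M ≡ 1 and F(q, q̇) = sin(q) (illustrative values, not from the notes), so that (13.1) reads q̈ = u − sin(q) and (13.2) becomes u = v + sin(q), yielding q̈ = v:

```python
import math

# Computed-torque sketch for a normalized pendulum (illustrative values):
#   plant:    q'' = u - sin(q)          (M = 1, F(q, q') = sin(q))
#   feedback: u = v + sin(q)            (change of control variable, (13.2)-style)
# The linearized loop q'' = v with v = -2*q' - q is exponentially stable.
q, qd, dt = 1.0, 0.0, 1e-3
for _ in range(20000):  # 20 seconds of Euler integration
    v = -2.0 * qd - q
    u = v + math.sin(q)        # cancel the nonlinearity
    qdd = u - math.sin(q)      # original nonlinear dynamics
    q, qd = q + dt * qd, qd + dt * qdd
print(abs(q) + abs(qd) < 1e-3)  # True: the state has converged to the origin
```

If the cancellation term sin(q) is wrong (imprecise knowledge of F), the loop is no longer exactly linear, which is the robustness caveat raised above.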
Consider the system

ẋ(t) = f(x(t)) + g(x(t))u(t), (13.4)
y(t) = h(x(t)), (13.5)

where x(t) ∈ X0 is the state vector ranging over a given open subset X0 of Rn, u(t) ∈ Rm is the control vector, y(t) ∈ Rm is the output vector, f : X0 → Rn, h : X0 → Rm, and g : X0 → R^{n×m} are given smooth functions. Note that in this setup y(t) has the same dimension as u(t).
The simplification is to be achieved by finding a feedback transformation

v(t) = α(x(t)) + β(x(t))u(t), (13.6)

and a state transformation

z(t) = [zl(t); z0(t)] = Φ(x(t)), (13.7)

bringing the system equations to the form

żl(t) = Azl(t) + Bv(t), y(t) = Czl(t), (13.8)
ż0(t) = a0(zl(t), z0(t)), (13.9)

where A, B, C are constant matrices of dimensions k-by-k, k-by-m, and m-by-k respectively, such that the pair (A, B) is controllable and the pair (C, A) is observable, and a0 : Rk × R^{n−k} → R^{n−k} is a continuously differentiable function.
More precisely, it is required that for every solution x : [t0, t1] → X0, u : [t0, t1] → Rm, y : [t0, t1] → Rm of (13.4), (13.5), equalities (13.8), (13.9) must be satisfied for z(t), v(t) defined by (13.6) and (13.7).

As long as accurate measurements of the full state x(t) of the original system are available, X0 = Rn, and the behavior of y(t) and u(t) is the only issue of interest, output feedback linearization reduces the control problem to a linear one. However, in addition to sensor limitations, X0 is rarely the whole Rn, and the state x(t) is typically required to remain bounded (or even to converge to a desired steady state value). Thus, it is frequently impossible to ignore equation (13.9), which is usually referred to as the zero dynamics of (13.4), (13.5). In the best scenario (the so-called minimum phase systems), the response of (13.9) to all expected initial conditions and reference signals y(t) can be proven to be bounded, generating a response x(t) confined to X0. In general, the region X0 on which feedback linearization is possible does not cover all states of interest, the zero dynamics is not as stable as desired, and hence the benefits of output feedback linearization are limited.
13.1.3
Formally, full state feedback linearization applies to nonlinear ODE control system models of the form (13.4), without a need for a particular output y(t) to be specified. As in the previous subsection, the simplification is to be achieved by finding a feedback transformation (13.6) and a state transformation

z(t) = Φ(x(t)) (13.10)

bringing the system equations to the controllable LTI form

ż(t) = Az(t) + Bv(t). (13.11)
13.2
This section contains basic results on feedback linearization of single-input systems (the
case when m = 1 in (13.4)).
13.2.1
System (13.4), (13.5) is said to have relative degree q on X0 if the functions hi defined by h1 = h, h_{i+1}(x̄) = hi′(x̄)f(x̄) satisfy hi′(x̄)g(x̄) = 0 for all x̄ ∈ X0 when i < q, and hq′(x̄)g(x̄) ≠ 0 for all x̄ ∈ X0 (i = 1, …, q).
By applying the definition to the LTI case f(x) = Ax, g(x) = B, h(x) = Cx, one can see that an LTI system with a non-zero transfer function always has a relative degree, which equals the difference between the degrees of the denominator and the numerator of its transfer function.
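This can be checked numerically: for an LTI realization, the relative degree is the smallest q with CA^{q−1}B ≠ 0. A small sketch, using an illustrative realization of G(s) = 1/(s³ + 2s² + s) (denominator degree 3, numerator degree 0, so q should be 3):

```python
import numpy as np

# Relative degree of an LTI system (illustrative): the smallest q such that
# C A^{q-1} B != 0.  Companion-form realization of G(s) = 1/(s^3 + 2 s^2 + s).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, -2.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

q = next(k for k in range(1, 4)
         if abs((C @ np.linalg.matrix_power(A, k - 1) @ B).item()) > 1e-12)
print(q)  # 3
```

The Markov-parameter test CB = CAB = 0, CA²B ≠ 0 is exactly the LTI specialization of the definition above.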
It turns out that systems with well dened relative degree are exactly those for which
input/output feedback linearization is possible.
Theorem 13.1 Assuming that h, f, g are continuously dierentiable n + 1 times, the
following conditions are equivalent:
(a) system (13.4),(13.5) has relative degree q;
(b) system (13.4),(13.5) is input/output feedback linearizable.
Moreover, if condition (a) is satisfied then

(i) the gradients hk′(x̄) with k = 1, …, q are linearly independent for every x̄ ∈ X0 (which, in particular, implies that q ≤ n);

(ii) the vectors gk(x̄) defined by

g1 = g, g_{k+1} = [f, gk] (k = 1, …, q − 1)

satisfy

hi′(x̄)gj(x̄) = h′_{i+j−1}(x̄)g(x̄) ∀x̄ ∈ X0

for i + j ≤ q + 1;

(iii) the linearizing state transformation component zl can be chosen as

zl = Φl(x̄) = [h1(x̄); h2(x̄); …; hq(x̄)].
Note that, unlike the Frobenius theorem, Theorem 13.1 is not local: it provides feedback linearization on every open set X0 on which the relative degree is well defined. Also, in the case of linear models, where f(x) = Ax and g(x) = B, it is always possible to get the zero dynamics depending on y only, i.e. to ensure that

a0(zl, z0) = ā0(Czl, z0).
This, however, is not always possible in the nonlinear case. For example, for the system

d/dt [x1; x2; x3] = [x2; u; x1 + x2²], y = x1,

there exists no function p : X0 → R defined on a non-empty open subset X0 of R3 such that

p′(x)f(x) = b(x1, p(x)), p′(x)g(x) = 0, p′(x) ≠ 0 ∀x ∈ X0.

Indeed, otherwise the system with the new output ynew = p(x) would have relative degree 3, which by Theorem 13.1 implies that p′g1 = p′g2 = 0, and hence by the Frobenius theorem the vector fields

g1(x) = [0; 1; 0], g2(x) = [1; 0; 2x2]

would define an involutive distribution, which they do not.
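The non-involutivity claim is easy to check numerically with finite-difference Jacobians, using the bracket convention of these notes ([h1, h2] = h1′h2 − h2′h1); a small illustrative sketch:

```python
import numpy as np

# Finite-difference check (illustrative) that g1(x) = [0, 1, 0] and
# g2(x) = [1, 0, 2*x2] have a Lie bracket outside span{g1, g2}:
# the distribution they define is therefore not involutive.
# Bracket convention of the notes: [h1, h2](x) = h1'(x) h2(x) - h2'(x) h1(x).
def jac(h, x, eps=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (h(x + e) - h(x - e)) / (2 * eps)
    return J

g1 = lambda x: np.array([0.0, 1.0, 0.0])
g2 = lambda x: np.array([1.0, 0.0, 2.0 * x[1]])
x = np.array([0.3, -0.7, 1.1])
bracket = jac(g1, x) @ g2(x) - jac(g2, x) @ g1(x)
print(np.round(bracket, 6))  # approximately [0, 0, -2]
```

Any combination a·g1 + b·g2 has the form [b, a, 2x2·b], which can match [0, 0, −2] only with a = b = 0; hence the bracket leaves the span, confirming the argument above.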
13.2.2
It follows from Theorem 13.1 that system (13.4), (13.5) which has maximal possible
relative degree n is full state feedback linearizable. The theorem also states that, given
smooth functions f, g, existence of h dening a system with relative degree n implies linear
independence of vectors g1 (
x), . . . , n (
x) for all x X0 , and involutivity of the regular
distribution dened by vector elds g1 , . . . , gn1 . The converse is also true, which allows
one to state the following theorem.
Theorem 13.2 Let f : X0 Rn and g : X0 Rn be n + 1 times continuously
dierentiable functions dened on an open subset X0 of Rn . Let gk with k = 1, . . . , n be
dened as in Theorem 13.1.
(a) If system (13.4) is full state feedback linearizable on X0 then vectors g1 (
x), . . . , n (
x)
n
form a basis in R for all x X0 , and the distribution dened by vector elds
g1 , . . . , gn1 in involutive on X0 .
(b) If for some x̄0 ∈ X0 the vectors g1(x̄0), …, gn(x̄0) form a basis in Rn, and the distribution defined by the vector fields g1, …, g_{n−1} is involutive in a neighborhood of x̄0, then there exists an open subset X̃0 of X0 such that x̄0 ∈ X̃0 and system (13.4) is full state feedback linearizable on X̃0.
Problem Set 1
Problem 1.1
Behavior set B of an autonomous system with a scalar binary DT output consists of all DT signals w = w(t) ∈ {0, 1} which change value at most once for 0 ≤ t < ∞.

(a) Give an example of two signals w1, w2 ∈ B which coincide at t = 3, but do not define the same state of B at t = 3.

(b) Give an example of two different signals w1, w2 ∈ B which define the same state of B at t = 4.
(c) Find a time-invariant discrete-time finite state-space difference inclusion model for B, i.e. find a finite set X and functions g : X → {0, 1}, f : X → S(X), where S(X) denotes the set of all non-empty subsets of X, such that a sequence w(0), w(1), w(2), … can be obtained by sampling a signal w ∈ B if and only if there exists a sequence x(0), x(1), x(2), … of elements from X such that

x(t + 1) ∈ f(x(t)) and w(t) = g(x(t)) for t = 0, 1, 2, ….

(Figuring out which pairs of signals define the same state of B at a given time is one possible way to arrive at a solution.)
Problem 1.2
Consider the differential equation

ÿ(t) + sgn(ẏ(t) + y(t)) = 0.

(a) Write down an equivalent ODE ẋ(t) = a(x(t)) with state x(t) ∈ R2.

(b) Find all vectors x̄0 ∈ R2 for which the ODE from (a) does not have a solution x : [t0, t1] → R2 (with t1 > t0) satisfying the initial condition x(t0) = x̄0.

(c) Define a semicontinuous convex set-valued function Δ : R2 → 2^{R²} such that a(x̄) ∈ Δ(x̄) for all x̄. Make sure the sets Δ(x̄) are the smallest possible subject to these constraints.

(d) Find explicitly all solutions of the differential inclusion ẋ(t) ∈ Δ(x(t)) satisfying initial conditions x(0) = x̄0, where x̄0 are the vectors found in (b). Such solutions are called sliding modes.

(e) Repeat (c) for a : R2 → R2 defined by

a([x1; x2]) = [sgn(x1); sgn(x2)].
Problem 1.3
For the statements below, state whether they are true or false. For true statements, give a brief proof (you can refer to lecture notes or books). For false statements, give a counterexample.

(a) All maximal solutions of the ODE ẋ(t) = exp(x(t)²) are defined on the whole time axis {t} = R.

(b) All solutions x : R → R of the ODE

ẋ(t) = x(t)/t for t ≠ 0, ẋ(t) = 0 for t = 0,

are such that x(−t) = −x(t) for all t ∈ R.

(c) If the constant signal w(t) ≡ 1 belongs to a system behavior set B, but the constant signal w(t) ≡ −1 does not, then the system is not linear.
Problem Set 2
Problem 2.1
Consider the feedback system with external input r = r(t), a causal linear time invariant forward loop system G with input u = u(t), output v = v(t), and impulse response g(t) = 0.1δ(t) + (t + a)^{−1/2}e^{−t}, where a ≥ 0 is a parameter, and a memoryless nonlinear feedback loop u(t) = r(t) + φ(v(t)), where φ(y) = sin(y). It is customary to require well-posedness of such models, i.e. existence and uniqueness of solutions on the time interval t ∈ [0, ∞) for every bounded input signal r = r(t).

(a) Show how Theorem 3.1 from the lecture notes can be used to prove well-posedness in the case when a > 0. Hint: it may be a good idea to begin with getting rid of the algebraic part of the system equations by introducing a new signal e(t) = v(t) − 0.1φ(v(t)) − 0.1r(t).

(b) Propose a generalization of Theorem 3.1 which can be applied when a = 0 as well. (You are not required to write down the proof of your generalization, but make every effort to ensure the statement is correct.)
Problem 2.2
Read the section of the Lecture 4 handouts on limit sets of trajectories of ODEs (it was not covered in the classroom).

(a) Give an example of a continuously differentiable function a : R2 → R2, and a solution of the ODE

ẋ(t) = a(x(t)), (2.1)

for which the limit set consists of a single trajectory of a non-periodic and non-equilibrium solution of (2.1).

(b) Give an example of a continuously differentiable function a : Rn → Rn, and a bounded solution of ODE (2.1), for which the limit set contains no equilibria and no trajectories of periodic solutions. Hint: it is possible to do this with a 4th order linear time-invariant system with purely imaginary poles.

(c) Use Theorem 4.3 from the lecture notes to derive the Poincare-Bendixson theorem: if a set X ⊂ R2 is compact (i.e. closed and bounded), positively invariant for system (2.1) (i.e. x(t, x̄) ∈ X for all t ≥ 0 and x̄ ∈ X), and contains no equilibria, then the limit set of every solution starting in X is a closed orbit (i.e. the trajectory of a periodic solution). Assume that a : R2 → R2 is continuously differentiable.
Problem 2.3
Use index theory to prove the following statements.

(a) If n > 1 is even and F : S^n → S^n is continuous then there exists x ∈ S^n such that either x = F(x) or x = −F(x).

(b) The equations for the harmonically forced nonlinear oscillator

ÿ(t) + ẏ(t) + (1 + y(t)²)y(t) = 100 cos(t)

have at least one 2π-periodic solution. Hint: show first that, for

V(t) = ẏ(t)² + y(t)² + y(t)ẏ(t) + 0.5y(t)⁴,

the inequality

V̇(t) ≤ −c1V(t) + c2

holds for some constants c1, c2 > 0.
Problem Set 3
Problem 3.1
Find out which of the functions V : R2 → R,

(a) V(x1, x2) = x1² + x2²;

(b) V(x1, x2) = |x1| + |x2|;

(c) V(x1, x2) = max{|x1|, |x2|};

are valid Lyapunov functions for the systems

(1) ẋ1 = −x1 + (x1 + x2)³, ẋ2 = −x2 − (x1 + x2)³;

(2) …;

(3) ẋ1 = x2 − |x1|, ẋ2 = −x1 − |x2|.
Problem 3.2
Show that the following statement is not true. Formulate and prove a correct version: if V : Rn → R is a continuously differentiable function and a : Rn → Rn is a continuous function such that

V′(x̄)a(x̄) ≤ 0 ∀x̄ : V(x̄) = 1, (3.1)

then V(x(t)) ≤ 1 for every solution x : [0, ∞) → Rn of

ẋ(t) = a(x(t)) (3.2)

with V(x(0)) ≤ 1.
Problem 3.3
The optimal minimal-time controller for the double integrator system with bounded control

ẋ1(t) = x2(t), ẋ2(t) = u(t), |u(t)| ≤ 1,

has the form

u(t) = −sgn(x1(t) + 0.5x2(t)²sgn(x2(t))).

(Do you know why?)

(a) Find a Lyapunov function V : R2 → R for the closed loop system, such that V(x(t)) is strictly decreasing along all solutions of the system equations except the equilibrium solution x(t) ≡ 0.

(b) Find out whether the equilibrium remains asymptotically stable when the same controller is used for the perturbed system

ẋ1(t) = x2(t), ẋ2(t) = εx1(t) + u(t), |u(t)| ≤ 1,

where ε > 0 is small.
Problem Set 4
Problem 4.1
Find a function V : R3 → R+ which has a unique minimum at x̄ = 0, and is strictly monotonically decreasing along all non-equilibrium trajectories of the system

ẋ1(t) = −x1(t) + x2(t)²,
ẋ2(t) = −x2(t)³ + x3(t)⁴,
ẋ3(t) = −x3(t)⁵.
Problem 4.2
A system Δ takes arbitrary continuous input signals v : [0, ∞) → R and produces continuous outputs w : [0, ∞) → R in such a way that the series connection of Δ and the LTI system with transfer function G0(s) = 1/(s + 1), described by equations

ẋ0(t) = −x0(t) + w(t), w(·) = Δ(v(·)),

has a non-negative storage function with supply rate

σ0(x̄0, v̄, w̄) = (w̄ − 0.9x̄0)(v̄ − w̄).

(a) Find at least one nonlinear system Δ which fits the description.

(b) Derive constraints to be imposed on the values G(jω) of a transfer function G(s) = C(sI − A)^{−1}B with a Hurwitz matrix A, which guarantee that x(t) → 0 as t → ∞ for every solution of

ẋ(t) = Ax(t) + Bw(t), v(t) = Cx(t), w(·) = Δ(v(·)).

Make sure that your conditions are satisfied for at least one non-zero transfer function G = G(s).
Problem 4.3
For the pendulum equation

ÿ(t) + ẏ(t) + sin(y(t)) = 0,

find a single continuously differentiable Lyapunov function V = V(y, ẏ) that yields the maximal region of attraction of the equilibrium y = ẏ = 0. (In other words, the level set

Ω = {x̄ ∈ R2 : V(x̄) < 1}

should be a union of disjoint open sets, one of which is the attractor of the zero equilibrium, and V(y(t), ẏ(t)) should have negative derivative at all points of Ω except the origin.)
Problem Set 5
Problem 5.1
y(t) ≡ a is an equilibrium solution of the differential equation

y⁽³⁾(t) + ÿ(t) + ẏ(t) + 2 sin(y(t)) = 2 sin(a),

where a ∈ R and y⁽³⁾ denotes the third derivative of y. For which values of a ∈ R is this equilibrium locally exponentially stable?
Problem 5.2
In order to solve a quadratic matrix equation X² + AX + B = 0, where A, B are given n-by-n matrices and X is an n-by-n matrix to be found, it is proposed to use the iterative scheme

X_{k+1} = Xk² + AXk + Xk + B.

Assume that a matrix X̄ satisfies X̄² + AX̄ + B = 0. What should be required of the eigenvalues of X̄ and A + X̄ in order to guarantee that Xk → X̄ exponentially as k → ∞ when X0 − X̄ is small enough? You are allowed to use the fact that the matrix equation

ay + yb = 0,

where a, b, y are n-by-n matrices, has a non-zero solution y if and only if det(sI − a) = 0 = det(sI + b) for some s ∈ C.
Problem 5.3
Use the Center manifold theory to prove local asymptotic stability of the equilibrium at the origin of the Lorenz system

ẋ = -x + yz,
ẏ = -σy + σz,
ż = -xy + y - z,

where σ > 0 is a parameter.
Problem 5.5
Find all values of parameter a ∈ R such that every solution x : [0, ∞) → R² of the ODE

ẋ(t) = ε [ cos(2t)  a ; cos⁴(t)  sin⁴(t) ] x(t)

converges to zero as t → ∞ when ε > 0 is a sufficiently small constant.
Problem Set 6
Problem 6.1
For the following statement, verify whether it is true or false (give a proof if true, give a
counterexample if false):
Assume that
(a) n, m are positive integers;
(b) f : Rⁿ × Rᵐ → Rⁿ and g : Rⁿ × Rᵐ → Rᵐ are continuously differentiable functions;
(c) the ODE

ẏ(t) = g(x̄, y(t))

has a globally asymptotically stable equilibrium for every x̄ ∈ Rⁿ;
(d) functions x0 : [0, 1] → Rⁿ and y0 : [0, 1] → Rᵐ are continuously differentiable and satisfy

ẋ0(t) = f(x0(t), y0(t)),  g(x0(t), y0(t)) = 0  for all t ∈ [0, 1].

Then there exists ε0 > 0 such that for every ε ∈ (0, ε0) the differential equation

ẋ(t) = f(x(t), y(t)),  εẏ(t) = g(x(t), y(t)),

has a solution xε : [0, 1] → Rⁿ, yε : [0, 1] → Rᵐ such that xε(0) = x0(0), yε(0) = y0(0), and xε(t) converges to x0(t) as ε → 0 for all t ∈ [0, 1].
Problem 6.2
Find all values of parameter k ∈ R for which solutions of the ODE system

ẋ1 = -x1 + k(x1³ - 3x1x2²),
ẋ2 = -x2 + k(3x1²x2 - x2³),

with almost all initial conditions converge to zero as t → ∞. Hint: use weighted volume monotonicity with density function

ρ(x) = |x|^(-α).

Also, for those who remember complex analysis, it is useful to pay attention to the fact that

(x1³ - 3x1x2²) + j(3x1²x2 - x2³) = (x1 + jx2)³.
Problem 6.3
Equations for steering a two-wheeled vehicle (with one wheel used for steering and the other wheel fixed) on a planar surface are given by

ẋ1 = cos(x3)u1,
ẋ2 = sin(x3)u1,
ẋ3 = x4u1,
ẋ4 = u2,

where x1, x2 are the Cartesian coordinates of the fixed wheel, x3 is the angle between the vehicle's axis and the x1 axis, x4 is a parameter characterizing the steering angle, and u1, u2 are functions of time with values restricted to the interval [-1, 1].
(a) Find all states reachable (time finite but not fixed) from a given initial state x0 ∈ R⁴ by using appropriate control.
(b) Design and test in computer simulation an algorithm for moving in finite time the state from a given initial position x0 ∈ R⁴ to an arbitrary reachable state x1 ∈ R⁴.
Problem Set 7
Problem 7.1
A stable linear system with a relay feedback excitation is modeled by
ẋ(t) = Ax(t) + B sgn(Cx(t)),  (7.1)

where A is a Hurwitz matrix, B is a column matrix, C is a row matrix, and sgn(y) denotes the sign nonlinearity

sgn(y) = 1 for y > 0,  0 for y = 0,  -1 for y < 0.

For T > 0, a 2T-periodic solution x = x(t) of (7.1) is called a regular unimodal limit cycle if Cx(t) = -Cx(t + T) > 0 for all t ∈ (0, T), and CAx(0) > |CB|.
(a) Derive a necessary and sufficient condition of exponential local stability of the regular unimodal limit cycle (assuming it exists and A, B, C, T are given).
(b) Use the result from (a) to find an example of system (7.1) with a Hurwitz matrix A and an unstable regular unimodal limit cycle.
Problem 7.2
A linear system controlled by modulation of its coefficients is modeled by

ẋ(t) = (A + Bu(t))x(t),  (7.2)

where A, B are fixed n-by-n matrices, and u(t) ∈ R is a scalar control.
(a) Is it possible for the system to be controllable over the set of all non-zero vectors x ∈ Rⁿ, x ≠ 0, when n ≥ 3? In other words, is it possible to find matrices A, B with n > 2 such that for every non-zero x0, x1 there exist T > 0 and a bounded function u : [0, T] → R such that the solution of (7.2) with x(0) = x0 satisfies x(T) = x1?
=
=
=
=
x2 + x23 ,
(1 2x3 )u + a sin(x1 ) x2 + x3 x23 ,
u,
x1 ,
Problem Set 8
Problem 8.1
Autonomous system equations have the form

ÿ(t) = [ẏ(t); y(t)]' Q [ẏ(t); y(t)],  (8.1)

where y is the scalar output, and Q = Q' is a given symmetric 2-by-2 matrix with real coefficients.

(a) Find all Q for which there exist a C∞ bijection Φ : R² → R², matrices A, C, and a C∞ function Ψ : R → R² such that z = Φ(y, ẏ) satisfies the ODE

ż(t) = Az(t) + Ψ(y(t)),  y(t) = Cz(t).
Problem 8.2
A linear control system

ẋ1(t) = x2(t) + w1(t),
ẋ2(t) = x1(t) - x2(t) + u + w2(t)

Is the set of matrices P = P' > 0 for which V(x̄) = x̄'Px̄ is a valid control Lyapunov function for a given ODE model

ẋ(t) = F(x(t), u(t)),

in the sense that

inf_{u∈R} 2x̄'P F(x̄, u) ≤ -|x̄|²  for all x̄ ∈ Rⁿ,

linearly connected for all continuously differentiable functions F : Rⁿ × R → Rⁿ? (Remember that a set Π of matrices is called linearly connected if for every two matrices P0, P1 ∈ Π there exists a continuous function p : [0, 1] → Π such that p(0) = P0 and p(1) = P1. In particular, the empty set is linearly connected.)
To answer this and the following questions, let us begin with formulating necessary and sufficient conditions for two signals z1, z2 ∈ B to commute and to define same state. Let

N+(w, t) = 0 if w(t) = lim_{τ→∞} w(τ),  1 otherwise,

be the number of discontinuities of w(τ) between τ = t and τ = ∞. Similarly, let

N-(w, t) = 0 if w(0) = w[t],  1 otherwise,

be the number of discontinuities of w(τ) between τ = 0 and τ = t.
Lemma 1.1 Signals z1, z2 ∈ B commute at time t ∈ [0, ∞) if and only if z1(t) = z2(t) and

N-(z1, t) + N+(z2, t) + |z2(t) - z1[t]| ≤ 1  (1.1)

and

N-(z2, t) + N+(z1, t) + |z1(t) - z2[t]| ≤ 1.  (1.2)

Proof. First note that the hybrid signal z12, obtained by gluing the past of z1 (before time t) to the future of z2 (from t to ∞), is a discrete time signal if and only if z1(t) = z2(t). Moreover, since the discontinuities of z12 result from three causes: discontinuities of z1(τ) before τ = t, discontinuities of z2 between τ = t and τ = ∞, and the inequality between z1[t] and z2(t), condition (1.1) is necessary and sufficient for z12 ∈ B (subject to z1(t) = z2(t)). Similarly, considering the discontinuities of the other hybrid signal, obtained by gluing the past of z2 to the future of z1, yields (1.2).
It follows immediately from Lemma 1.1 that signals z1, z2 ∈ B define same state of B at time t ∈ [0, ∞) if and only if

N-(z1, t) = N-(z2, t), D(z1, t) = D(z2, t), N+(z1, t) = N+(z2, t), z1(t) = z2(t),  (1.3)

where for w ∈ B

D(w, t) = 0 for t < k,  1 for t ≥ k.

(Figuring out which pairs of signals define same state of B at a given time is one possible way to arrive at a solution.)

Condition (1.3) naturally calls for X to be the set of all possible combinations

x(t) = [N-(w, t); N+(w, t); D(w, t); w(t)].

Note that not more than one of the first three components can be non-zero at a given time instance, and hence the total number of possible values of x(t) is eight, which further reduces to four at t = 0, since

N-(w, 0) = D(w, 0) = 0 for all w ∈ B.
The dynamics of x(t) is given by
f ([0; 0; 0; 0]) = {[0; 0; 0; 0]},
f ([0; 0; 0; 1]) = {[0; 0; 0; 1]},
f ([1; 0; 0; 0]) = {[1; 0; 0; 0]},
f ([1; 0; 0; 1]) = {[1; 0; 0; 1]},
f ([0; 1; 0; 0]) = {[1; 0; 0; 0]},
f ([0; 1; 0; 1]) = {[1; 0; 0; 1]},
f ([0; 0; 1; 0]) = {[0; 0; 1; 0], [0; 1; 0; 1]},
f ([0; 0; 1; 1]) = {[0; 0; 1; 1], [0; 1; 0; 0]},
while g(x(t)) is simply the last bit of x(t).
This model is not the minimal state space model of B. Note that last two bits of
x(t + 1), as well as w(t), depend only on the last two bits of x(t). Hence a model of
B with a two-bit state space X = {0, 1} {0, 1} can be given by
f ([0; 0]) = {[0; 0]}, f ([0; 1]) = {[0; 1]}, f ([1; 0]) = {[0; 1], [1; 0]}, f ([1; 1]) = {[0; 0], [1; 1]},
and
g ([x1 ; x2 ]) = x2 .
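The reduced model can be exercised directly. The sketch below (Python; the transition table is transcribed from the two-bit model above) checks one admissible run of the nondeterministic automaton:

```python
# Simulation sketch of the reduced two-bit state machine derived above:
# nondeterministic transition map f and output g([x1, x2]) = x2.
f = {
    (0, 0): [(0, 0)],
    (0, 1): [(0, 1)],
    (1, 0): [(0, 1), (1, 0)],
    (1, 1): [(0, 0), (1, 1)],
}

def g(state):
    return state[1]

# One admissible run: stay in (1, 0) for a step, then switch to (0, 1).
run = [(1, 0), (1, 0), (0, 1), (0, 1)]
for prev, nxt in zip(run, run[1:]):
    assert nxt in f[prev]           # every step is an allowed transition
print([g(s) for s in run])          # output signal w with a single switch
```

The printed output signal has exactly one value change, consistent with the behavior set the automaton is meant to generate.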
Problem 1.2
Consider differential equation

ÿ(t) + sgn(ẏ(t) + y(t)) = 0.

(a) Write down an equivalent ODE ẋ(t) = a(x(t)) for the state vector x(t) = [y(t); ẏ(t)].

ẋ1 = x2,
ẋ2 = -sgn(x1 + x2).

(b) Find all vectors x0 ∈ R² for which the ODE from (a) does not have a solution x : [t0, t1] → R² (with t1 > t0) satisfying initial condition x(t0) = x0.

Solutions (forward in time) do not exist for

x0 ∈ X0 = { [x01; x02] ∈ R² : x01 + x02 = 0, x01 ∈ [-1, 1], x01 ≠ 0 }.
To show this, note first that, for

x01 + x02 ≥ 0,  x02 > 1,

a solution is given by

x(t) = [ x01 + x02·t - t²/2 ; x02 - t ],  t ∈ [0, 2(x02 - 1)].

Similarly, for

x01 + x02 ≤ 0,  x02 < -1,

a solution is given by

x(t) = [ x01 + x02·t + t²/2 ; x02 + t ],  t ∈ [0, -2(x02 + 1)].
the assumption that t0 is an argument of a minimum. Hence x1(t) + x2(t) ≥ 0 for all t ∈ [0, δ]. Moreover, since x1 is an integral of x2 > 0, x1(t) is strictly monotonically increasing on [0, δ], and hence x1(t) > -1 for all t ∈ (0, δ].

Let t0 be the argument of maximum of x1(t) + x2(t) on [0, δ]. If x1(t0) + x2(t0) > 0 then x1(t) + x2(t) > 0 in a neighborhood of t0. Combined with x1(t) > -1, this yields

d(t) = x2(t) - sgn(x1(t) + x2(t)) < -x1(t) - sgn(x1(t) + x2(t)) < 1 - 1 = 0.

Since x1 + x2 is an integral of d, this contradicts the assumption that t0 is an argument of a maximum. Hence x1(t) + x2(t) = 0 for t ∈ [0, δ], which implies that x2(t) is a constant. Hence x1(t) is a constant as well, which contradicts the strict monotonicity of x1(t).
(c) Define a semicontinuous convex set-valued function Δ : R² → 2^(R²) such that a(x̄) ∈ Δ(x̄) for all x̄. Make sure the sets Δ(x̄) are the smallest possible subject to these constraints.

Δ0([x1; x2]) = { [x2; -t] : t ∈ φ(x1 + x2) },

where

φ(y) = {1} for y > 0,  [-1, 1] for y = 0,  {-1} for y < 0.

On the other hand, it is easy to check that the compact convex set-valued function Δ0 is semicontinuous. Hence Δ = Δ0.

(d) Find explicitly all solutions of the differential inclusion ẋ(t) ∈ Δ(x(t)) satisfying initial conditions x(0) = x0, where x0 are the vectors found in (b). Such solutions are called sliding modes.

The proof in (b) can be repeated to show that all such solutions will stay on the hyperplane x1(t) + x2(t) = 0. Hence

x1(t) = x1(0)e^(-t),  x2(t) = x2(0)e^(-t).
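The sliding-mode prediction can be checked numerically. The sketch below (Python; the steep saturation replacing sgn and the step sizes are assumptions made for the simulation) integrates the relay system from a point on the sliding set:

```python
import numpy as np

# Regularized relay: a steep saturation in place of sgn, a standard way to
# approximate the sliding mode of x1' = x2, x2' = -sgn(x1 + x2).
def step(x, dt, eps=1e-4):
    s = x[0] + x[1]
    u = np.clip(s / eps, -1.0, 1.0)   # approximates sgn(x1 + x2)
    return x + dt * np.array([x[1], -u])

x = np.array([-0.5, 0.5])             # on the sliding set x1 + x2 = 0
dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    x = step(x, dt)

# Sliding-mode prediction: x1(t) = x1(0) e^{-t}, x2(t) = x2(0) e^{-t}
print(x, [-0.5 * np.exp(-T), 0.5 * np.exp(-T)])
```

The simulated state stays close to the hyperplane x1 + x2 = 0 and decays at the predicted exponential rate.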
(e) Repeat (c) for a : R² → R² defined by

a([x1; x2]) = [sgn(x1); sgn(x2)].

Δ([x1; x2]) = { [c1; c2] : c1 ∈ φ(x1), c2 ∈ φ(x2) }.
Problem 1.3
For the statements below, state whether they are true or false. For true statements, give a brief proof (can refer to lecture notes or books). For false statements, give a counterexample.

(a) All maximal solutions of ODE ẋ(t) = exp(-x(t)²) are defined on the whole time axis {t} = R.

This statement is true. Indeed, a maximal solution x = x(t) is defined on an interval with a finite bound t* only when |x(t)| → ∞ as t → t*. However, x(t) is an integral of a function not exceeding 1 in absolute value. Hence |x(t) - x(t0)| ≤ |t - t0| for all t, and therefore |x(t)| cannot approach infinity on a finite time interval.
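The bound |x(t) - x(t0)| ≤ |t - t0| is easy to confirm numerically. A minimal sketch (Python, RK4 integration; the horizon T = 10 and step size are arbitrary choices):

```python
import math

# Sanity check for dx/dt = exp(-x^2): the right-hand side is bounded by 1,
# so |x(t) - x(0)| <= t and no finite-time escape can occur.
def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    return x + dt*(k1 + 2*k2 + 2*k3 + k4)/6

f = lambda x: math.exp(-x*x)
x, t, dt, T = 0.0, 0.0, 1e-3, 10.0
while t < T - 1e-12:
    x = rk4(f, x, dt)
    t += dt
print(x)   # finite, and within T of the initial value
```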
(b) All solutions x : R → R of the ODE

ẋ(t) = x(t)/t for t ≠ 0,  ẋ(t) = 0 for t = 0,

are such that x(-t) = -x(t) for all t ∈ R.

This statement is false. Indeed, for arbitrary c1, c2 ∈ R,

x(t) = c1·t for t ≤ 0,  x(t) = c2·t for t > 0,

is a solution of the ODE, which can be verified by checking that

x(t2) - x(t1) = ∫_{t1}^{t2} (x(t)/t) dt  for all t1, t2.
(c) If constant signal w(t) ≡ 1 belongs to a system behavior set B, but constant signal w(t) ≡ -1 does not, then the system is not linear.

This statement is true. Indeed, if B is linear then cw ∈ B for all c ∈ R, w ∈ B. With c = -1 this means that, for a linear system, w ∈ B if and only if -w ∈ B.
on the time interval t ∈ [0, ∞) for every bounded input signal r = r(t).

(a) Show how Theorem 3.1 from the lecture notes can be used to prove well-posedness in the case when ā > 0.

where

h(t) = (t + ā)^(-1/2) e^(-t) for t ≥ 0,  h(t) = 0 otherwise,

and φ : R → R is the function which maps z ∈ R into the solution q of

q - 0.1ψ(q) = z.

Since ψ is continuously differentiable, and its derivative ranges in [-1, 1], φ is continuously differentiable as well, and its derivative ranges between 1/1.1 and 1/0.9.
For every constant T ∈ [0, ∞), the equation for y(t) with t ≥ T can be re-written as

y(t) = y(T) + ∫_T^t a_T(y(τ), τ, t) dτ,

where a_T denotes the resulting integrand. When parameter ā takes a positive value, function a = a_T satisfies the conditions of Theorem 3.1 with X = Rⁿ, x0 = y(T), r = 1, and t0 = T, with K = K(ā) being a function of ā, and

M = M_T = M0(ā)(1 + max_{t∈[0,T]} |y(t)|).
Starting with T = T(0) = 0, for k = 0, 1, 2, . . . define T(k + 1) as the T+ calculated for T = T(k). To finish the proof of well-posedness, we have to show that T(k) → ∞ as k → ∞. Indeed, since

M_{T(k)}(T(k + 1) - T(k)) = M_{T(k)} min{1/M_{T(k)}, 1/(2K)} ≤ 1,

M_{T(k)} grows not faster than linearly with k. Hence T(k + 1) - T(k) decreases not faster than c/k, and therefore T(k) → ∞ as k → ∞.
(b) Propose a generalization of Theorem 3.1 which can be applied when ā = 0 as well.

An appropriate generalization, relying on integral time-varying bounds for a and its increments, rather than their maximal values, is suggested at the end of the proof of Theorem 3.1 in the lecture notes.
Problem 2.2
Read the section of Lecture 4 handouts on limit sets of trajectories of
ODE (it was not covered in the classroom).
(a) Give an example of a continuously differentiable function a : R² → R², and a solution of ODE

ẋ(t) = a(x(t)),  (2.1)

for which the limit set consists of a single trajectory of a non-periodic and non-equilibrium solution of (2.1).

The limit trajectory should be that of a maximal solution x : (t1, t2) → R² such that |x(t)| → ∞ as t → t1 or t → t2.
To construct a system with such a limit trajectory, start with a planar ODE for which every solution, except the equilibrium solution at the origin, converges to a periodic solution whose trajectory is the unit circle. Considering R² as the set of all complex numbers, one such ODE can be written as

ż(t) = z(t)(1 + j - |z(t)|)|z(t)|.

Now apply the change of variables z = 1 + 1/w, which moves the point z = 1 to w = ∞ (and also moves z = ∞ to w = 0). For the resulting system

ẇ(t) = -w(t)(1 + w(t))(1 + j - |(1 + w(t))/w(t)|)|(1 + w(t))/w(t)|,  (2.2)

every solution w(·) with w(0) ≠ 0 will have the straight line passing through the points w = -1/2 and w = 1/(j - 1) (the trajectory of the solution w0(t) = 1/(e^(jt) - 1), defined for t ∈ (0, 2π)) as its limit set. However, the right side of (2.2) is not a continuously differentiable function of w: there is a discontinuity at w = 0. To fix this problem, multiply the right side by the real number |w(t)|⁴, which yields

a(w) = -w(1 + w)((1 + j)|w|² - |(1 + w)w|)|(1 + w)w|.

For the resulting system, every trajectory except the equilibrium at w = 0 has the same limit set as defined before.
(b) Give an example of a continuously differentiable function a : Rⁿ → Rⁿ, and a bounded solution of ODE (2.1), for which the limit set contains no equilibria and no trajectories of periodic solutions.

It is possible to do this with a 4th order linear time-invariant system with purely imaginary poles:

ẋ1(t) = x2(t),
ẋ2(t) = -x1(t),
ẋ3(t) = πx4(t),
ẋ4(t) = -πx3(t).

The solution

x(t) = [sin(t); cos(t); sin(πt); cos(πt)]

has the limit set

Ω = { [sin(t1); cos(t1); sin(t2); cos(t2)] : t1, t2 ∈ R }.

Indeed, since π is not a rational number, every real number can be approximated arbitrarily well by 2π²k - 2πq, where k, q are arbitrarily large positive integers. Hence the difference between πt1 + 2π²k and t2 + 2πq can be made arbitrarily small for every given pair t1, t2 ∈ R. For t = t1 + 2πk this implies that

sin(t) = sin(t1), cos(t) = cos(t1), sin(πt) ≈ sin(t2 + 2πq) = sin(t2), cos(πt) ≈ cos(t2).

Every solution with x(0) ∈ Ω has the form

x(t) = [sin(t + t1); cos(t + t1); sin(πt + t2); cos(πt + t2)],

and hence Ω contains no equilibria and no trajectories of periodic solutions.
(b) The equations for the harmonically forced nonlinear oscillator

ÿ(t) + ẏ(t) + (1 + y(t)²)y(t) = 100 cos(t)

have at least one 2π-periodic solution. Hint: Show first that, for

V(t) = ẏ(t)² + y(t)² + y(t)ẏ(t) + 0.5y(t)⁴,

the inequality

V̇(t) ≤ -c1V(t) + c2

holds for suitable constants c1, c2 > 0. Indeed, with w = 100 cos(t),

V̇ = -ẏ² - yẏ - y² - y⁴ + 2(ẏ + y/2)w
  ≤ -0.5V - 0.5(ẏ + y/2)² + 2(ẏ + y/2)w - (3/8)y²
  = -0.5V + 2w² - 0.5(ẏ + y/2 - 2w)² - (3/8)y²
  ≤ -0.5V + 20000.
Since V̇ ≤ -0.5V + 20000, we get V(t) ≤ e^(-t/2)V(0) + 40000, and a direct estimate shows that the ball {x̄ = [y; ẏ] : |x̄| ≤ 300} is mapped into itself by the time-T flow map as soon as T/2 ≥ log(2) + 4 log(30). Since

(log(2) + 4 log(30))/π ≈ 4.55,

the smallest suitable multiple of the forcing period 2π is T = 5 · 2π = 10π: if |x(0)| ≤ 300 then |x(T)| ≤ 300. The map x(0) → x(T) is continuous and maps the ball of radius 300 into itself, so by Brouwer's fixed point theorem it has a fixed point x(0) = x(T), and the corresponding solution

x = [y(0); ẏ(0)],  |x| ≤ 300,

will be periodic with period T = 10π.
differentiable, with dV/dx = [sgn(x1); sgn(x2)] being the derivative. Hence ∇V(x)f(x) = x1x2 - x1x2 = 0 at every such point, which proves that V(x(t)) is non-increasing (and non-decreasing as well) along all non-equilibrium trajectories.

Below we list the reasons why no other pair yields a valid Lyapunov function. Of course, there are many other ways to show that.

For system (1) at x = (2, 0), we have ẋ1 > 0, ẋ2 < 0, hence both |x1| and |x2| are increasing along system trajectories in a neighborhood of x = (2, 0). Since all Lyapunov function candidates (a)-(c) increase when both |x1| and |x2| increase, (a)-(c) are not valid Lyapunov functions for system (1).

For system (2) at x = (0.5, -0.5), we have ẋ1 > 0, ẋ2 < 0, hence both |x1| and |x2| increase along system trajectories in a neighborhood of x = (0.5, -0.5).

For system (3) at x = (2, 1), we have ẋ = (2, 2), hence both x1² + x2² and max(x1, x2) are increasing along system trajectories in a neighborhood of x = (2, 1).
Problem 3.2
Show that the following statement is not true. Formulate and prove a correct version: if V : Rⁿ → R is a continuously differentiable functional and a : Rⁿ → Rⁿ is a continuous function such that

∇V(x̄)a(x̄) ≤ 0  for all x̄ : V(x̄) = 1,  (3.1)

then V(x(t)) ≤ 1 for every solution x : [0, ∞) → Rⁿ of

ẋ(t) = a(x(t))  (3.2)

with V(x(0)) ≤ 1.

There are two important reasons why the statement is not true: first, ∇V(x̄) should be non-zero for all x̄ such that V(x̄) = 1; second, the solution of ẋ = a(x) with initial condition x(0) = x̄0 such that V(x̄0) = 1 should be unique. Simple counterexamples based on these considerations are given by

V(x) = x² + 1,  a(x̄) = 1,  x(t) = t,

and

V(x) = x + 1,  a(x̄) = 1.5x̄^(1/3),  x(t) = t^(1.5).
One correct way to fix the problem is by requiring a strict inequality in (3.1). Here is a less obvious correction.

Theorem 3.1 Let V : Rⁿ → R be a continuously differentiable functional such that ∇V(x̄) ≠ 0 for all x̄ satisfying V(x̄) = 1, and let a : Rⁿ → Rⁿ be a locally Lipschitz function such that condition (3.1) holds. Then V(x(t)) ≤ 1 for every solution x : [t0, t1) → Rⁿ of (3.2) with V(x(0)) ≤ 1.
Proof. It is sufficient to prove that for every x̄0 ∈ Rⁿ satisfying the condition V(x̄0) = 1 there exists d > 0 such that V(x(t)) ≤ 1 for 0 ≤ t ≤ d for the solution x(t) of (3.2) with x(0) = x̄0. Indeed, for ε ∈ (0, 1) define xε as a solution of equation

ẋε(t) = -ε∇V(xε(t))' + a(xε(t)),  xε(0) = x̄0.  (3.3)

By the existence theorem, solutions xε are defined on a non-empty interval t ∈ [0, d] which does not depend on ε. Note that

dV(xε(t))/dt = ∇V(xε(t))(-ε∇V(xε(t))' + a(xε(t))) ≤ -ε|∇V(xε(t))|² < 0

whenever V(xε(t)) = 1, and hence the same inequality holds whenever xε(t) is close enough to the set {x̄ : V(x̄) = 1}. Hence V(xε(t)) ≤ 1 for t ∈ [0, d] for all ε. Now, continuous dependence on parameters implies that xε(t) converges for all t ∈ [0, d] to x(t). Hence

V(x(t)) = lim_{ε→0} V(xε(t)) ≤ 1.
Problem 3.3
The optimal minimal-time controller for the double integrator system with bounded control

ẋ1(t) = x2(t),  ẋ2(t) = u(t),  |u(t)| ≤ 1,

has the form

u(t) = -sgn(x1(t) + 0.5x2(t)² sgn(x2(t))).

(a) Find a Lyapunov function V : R² → R for the closed loop system, such that V(x(t)) is strictly decreasing along all solutions of system equations except the equilibrium solution x(t) ≡ 0.

The original problem set contained a typo: a "-" sign in the expression for u(t) was missing. For completeness, a solution which applies to this case is supplied in the next section.

A hint was given in the problem formulation, stressing that u is a minimal time control. What is important here is that it takes only finite time for a system solution to reach the origin. Therefore, the amount of time it takes for the system to reach the origin can be used as a Lyapunov function. Let us verify this by inspection. System equations are Lipschitz continuous outside the curve

Γ = {x = [x1; x2] : x1 = -0.5x2|x2|}.
Solving them explicitly (outside Γ) yields

x(t) = [c1 + c2·t - 0.5t²; c2 - t]  for x(t) ∈ Γ+ = {x = [x1; x2] : x1 > -0.5x2|x2|},
x(t) = [c1 + c2·t + 0.5t²; c2 + t]  for x(t) ∈ Γ- = {x = [x1; x2] : x1 < -0.5x2|x2|}.

In addition, no solution with initial condition x(0) = [-0.5r²; r] or x(0) = [0.5r²; -r], where r > 0, exists, unless the sgn(·) function is understood as the set-valued sign

sgn(y) = {1} for y > 0,  [-1, 1] for y = 0,  {-1} for y < 0.

The time needed to reach the origin equals

V(x) = x2 + 2√(x2²/2 + x1)   for x1 + x2|x2|/2 ≥ 0,
V(x) = -x2 + 2√(x2²/2 - x1)  for x1 + x2|x2|/2 ≤ 0.
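The time-to-origin interpretation of V can be tested in simulation. The sketch below (Python; the regularization of sgn near the switching curve and the stopping radius are implementation assumptions) runs the closed loop from x(0) = [1; 0], for which V = 2:

```python
import math

# Bang-bang minimum-time control of the double integrator:
# u = -sgn(x1 + 0.5*x2*|x2|), with u = -sgn(x2) on the switching curve
# to emulate the set-valued sign (sliding toward the origin).
def u_opt(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)
    if abs(s) < 1e-9:
        return -1.0 if x2 > 0 else (1.0 if x2 < 0 else 0.0)
    return -math.copysign(1.0, s)

def time_to_origin(x1, x2):          # the Lyapunov function V from the text
    if x1 + 0.5 * x2 * abs(x2) >= 0:
        return x2 + 2.0 * math.sqrt(0.5 * x2 * x2 + x1)
    return -x2 + 2.0 * math.sqrt(0.5 * x2 * x2 - x1)

x1, x2, dt, t = 1.0, 0.0, 1e-4, 0.0
V0 = time_to_origin(x1, x2)          # predicted arrival time (= 2 here)
while math.hypot(x1, x2) > 1e-2 and t < 10.0:
    u = u_opt(x1, x2)
    x1, x2, t = x1 + dt * x2, x2 + dt * u, t + dt
print(V0, t)   # the observed arrival time should be close to V0
```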
(b) Find out whether the equilibrium remains asymptotically stable when the same controller is used for the perturbed system

ẋ1(t) = x2(t),  ẋ2(t) = -εx1(t) + u(t),  |u(t)| ≤ 1,

where ε > 0 is small.

The Lyapunov function V(x) designed for the case ε = 0 is not monotonically non-increasing along trajectories of the perturbed system (ε > 0). Indeed, when

x1 = -0.5r² + r⁸,  x2 = r > 0,

we have

dV(x(t))/dt = -1 - εx1(1 + x2/√(0.5x2² + x1)),

which is positive when r > 0 is small enough.
However, the stability can be established for the case ε > 0 using an alternative Lyapunov function. One such function is

V1(x) = 2ε²x2² + (1 + ε⁴|x1|)²    for |x1| ≥ x2²/2,
V1(x) = 2ε²x2² + (1 + ε⁴x2²/2)²  for |x1| ≤ x2²/2.

By considering the two regions |x1| ≥ x2²/2 and |x1| ≤ x2²/2 separately, it is easy to see that dV1(x(t))/dt ≤ 0, and dV1(x(t))/dt = 0 only for x = 0.
For completeness, here is the analysis of the system as stated in the original (typo) version:

ẋ1(t) = x2(t),  ẋ2(t) = u(t),  |u(t)| ≤ 1,

with

u(t) = sgn(x1(t) + 0.5x2(t)² sgn(x2(t))).

(a) Find a Lyapunov function V : R² → R for the closed loop system, such that V(x(t)) is strictly decreasing along all solutions of system equations except the equilibrium solution x(t) ≡ 0.
The system is unstable (all solutions except x(t) ≡ 0 converge to infinity). However, this does not affect the existence of strictly decreasing Lyapunov functions. For example,

V([x1; x2]) = -x2  if x1 + 0.5x2|x2| > 0, or x1 + 0.5x2|x2| = 0 and x2 ≥ 0;
V([x1; x2]) = x2   if x1 + 0.5x2|x2| < 0, or x1 + 0.5x2|x2| = 0 and x2 ≤ 0.

To show that V is valid, note that the trajectories of this system are given by

x(t) = [c1 + c2·t + 0.5t²; c2 + t]

when x1 + 0.5x2|x2| > 0, or x1 + 0.5x2|x2| = 0 and x2 ≥ 0, and by

x(t) = [c1 + c2·t - 0.5t²; c2 - t]

when x1 + 0.5x2|x2| < 0, or x1 + 0.5x2|x2| = 0 and x2 ≤ 0.
(b) Find out whether the equilibrium remains asymptotically stable when the same controller is used for the perturbed system

ẋ1(t) = x2(t),  ẋ2(t) = -εx1(t) + u(t),  |u(t)| ≤ 1,

where ε > 0 is small.

As can be expected, the equilibrium of the perturbed system is unstable just as the equilibrium of the unperturbed one is. To show this, note that for

x ∈ K = {[x1; x2] : x1 ∈ (0, 1/(2ε)), x2 ≥ 0}

we have ẋ1 ≥ 0 and ẋ2 ≥ 0.5. Hence, a solution x = x(t) such that x(0) ∈ K cannot satisfy the inequality |x(t)| < 1/(2ε) for all t ≥ 0.
where w2 = x3^(8/3), so that w2² = x3^(16/3), we have

V̇3 = -4x3^(16/3) = -4w2².

Now, for

V = c1V1 + c2V2 + c3V3

we have
Problem 4.2
System S takes arbitrary continuous input signals v : [0, ∞) → R and produces continuous outputs w : [0, ∞) → R in such a way that the series connection of S and the LTI system with transfer function G0(s) = 1/(s + 1), described by equations

ẋ0(t) = -x0(t) + w(t),  w(·) = S(v(·)),

has a non-negative storage function with supply rate

σ0(x̄0, v̄, w̄) = (w̄ - 0.9x̄0)(v̄ - w̄).

(a) Find at least one nonlinear system which fits the description.

The ideal saturation nonlinearity

sat(y) = y for |y| ≤ 1,  sat(y) = y/|y| for |y| ≥ 1,

is one such system.
(b) Derive constraints to be imposed on the values G(jω) of a transfer function

G(s) = C(sI - A)^(-1)B

with a Hurwitz matrix A, which guarantee that x(t) → 0 as t → ∞ for every solution of

ẋ(t) = Ax(t) + Bw(t),  v(t) = Cx(t),  w(·) = S(v(·)).

Make sure that your conditions are satisfied at least for one non-zero transfer function G = G(s).

Let us prove that the condition

Re[(1 - G(jω))(0.1 - jω)/(1 - jω)] > 0  for all ω ∈ R  (4.1)

is sufficient. Since the left side of (4.1) is continuous in ω and has a positive limit as ω → ∞, it has a positive lower bound, and hence there exists ε > 0 such that

Re[(1 - G(jω))(0.1 - jω)/(1 - jω)] > ε(1 + |(jωI - A)^(-1)B|²)  for all ω ∈ R.

Therefore, the frequency inequality conditions of the KYP Lemma are satisfied for the existence of a matrix P = P' such that

2[x; x0]'P[Ax + Bw; -x0 + w] ≤ (Cx - w)(w - 0.9x0) - ε(|x|² + |w|²)  for all w, x0 ∈ R, x ∈ Rⁿ.
To show that P is positive definite, substitute w = 0.9x0 into the last inequality, which yields

2[x; x0]'P[Ax + 0.9Bx0; -0.1x0] ≤ -ε(|x|² + |0.9x0|²)  for all x0 ∈ R, x ∈ Rⁿ,

which is equivalent to the Lyapunov inequality

PĀ + Ā'P ≤ -Q,

where

Ā = [A  0.9B; 0  -0.1],  Q = ε[I  0; 0  0.81].

Since Ā is a Hurwitz matrix and Q > 0, P is positive definite. Hence

V([x; x0]) = [x; x0]'P[x; x0]
is a non-negative storage function for the closed loop system, with supply rate

σ = -ε(|x|² + |w|²).

Hence w is square integrable over the interval [0, ∞). Since

ẋ = Ax + Bw,

and A is a Hurwitz matrix, this implies that x(t) → 0 as t → ∞.
Since

Re[(0.1 - jω)/(1 - jω)] ≥ 0.1  for all ω ∈ R,

condition (4.1) is satisfied for all G with sufficiently small H-Infinity norm (maximal absolute value of the frequency response).
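Condition (4.1) is easy to spot-check numerically. A minimal sketch (Python/NumPy; the sample transfer function G(s) = 0.05/(s + 1), chosen for its small H-Infinity norm, is an assumption for illustration, not taken from the problem):

```python
import numpy as np

# Sample condition (4.1) on a frequency grid for G(s) = 0.05/(s + 1).
w = np.linspace(-100.0, 100.0, 20001)
s = 1j * w
G = 0.05 / (s + 1.0)
lhs = np.real((1.0 - G) * (0.1 - s) / (1.0 - s))
print(lhs.min())   # positive at all sampled frequencies
```

Since Re[(0.1 - jω)/(1 - jω)] ≥ 0.1 and |G(jω)| ≤ 0.05 here, the sampled minimum stays bounded away from zero, as the text predicts.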
Problem 4.3
For the pendulum equation

ÿ(t) + ẏ(t) + sin(y(t)) = 0,

find a single continuously differentiable Lyapunov function V = V(y, ẏ) that yields the maximal region of attraction of the equilibrium y = ẏ = 0. (In other words, the level set

{x̄ ∈ R² : V(x̄) < 1}

should be a union of disjoint open sets, one of which is the attractor Ω of the zero equilibrium, and V(y(t), ẏ(t)) should have negative derivative at all points of Ω except the origin.)
Note that the problem can be interpreted as follows: given the initial angular position and angular velocity of a pendulum, find the number of complete rotations it will have before settling at an equilibrium position. An exact analytical answer can be obtained by stating that the maximal region of attraction is the area bounded by the four separatrix solutions of the system equation, converging to the two unstable equilibria (0, ±π). However, this exact answer (which cannot be expressed in elementary functions) will be of no use in the case when the pendulum model is slightly modified (a different friction model, flexibility of the pendulum taken into account, etc.) On the other hand, one can expect that an estimate obtained by using a Lyapunov function will be more robust with respect to various perturbations of the model.

An obvious Lyapunov function is given by the system energy (potential plus kinetic)

V0(y, ẏ) = 0.5ẏ² - cos(y),  dV0/dt = -ẏ(t)².
To estimate the region of attraction of the equilibrium at the origin, using this Lyapunov function, one can find a constant c such that the level set

L(V0, c) = {(y0, y1) : V0(y0, y1) < c}

does not contain a path connecting the origin with any other equilibrium of the system. It is easy to see that taking c = 1 does the job, and yields the region of attraction Ω0 given by

Ω0 = {(y, ẏ) : 0.5ẏ² - cos(y) < 1, -π < y < π}.

This appears to be a very poor estimate, taking into account what we know about the true maximal region of attraction.
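The energy estimate can be exercised in simulation. The sketch below (Python, RK4; the initial state and horizon are arbitrary choices inside Ω0) integrates the damped pendulum and watches V0 decrease toward its minimum value -1 at the origin:

```python
import math

# Simulate y'' + y' + sin(y) = 0 from a state inside Omega_0
# (V0 < 1 and |y| < pi) and track V0(y, y') = 0.5*y'^2 - cos(y).
def deriv(y, v):
    return v, -v - math.sin(y)

def rk4_step(y, v, dt):
    k1 = deriv(y, v)
    k2 = deriv(y + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
    k3 = deriv(y + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
    k4 = deriv(y + 0.5*dt*k3[0], v + 0.5*dt*k3[1])
    return (y + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            v + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

y, v = 2.0, 0.5                      # V0 ~ 0.54 < 1, inside Omega_0
V_init = 0.5*v*v - math.cos(y)
for _ in range(40000):               # integrate to t = 40
    y, v = rk4_step(y, v, 1e-3)
print(V_init, 0.5*v*v - math.cos(y))  # energy decreases toward -1
```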
To get a better Lyapunov function, one can try to construct it in such a way that the level sets are polytopes centered at the origin. Remember that a function V is a Lyapunov function if and only if the system trajectories never leave any of its level sets. Since the boundary of a polytope in R² consists of segments, it is especially easy to check this condition for Lyapunov function candidates with polytopic level sets.

One of the simplest examples of a Lyapunov function constructed this way is given by

V1(y, ẏ) = |y| + |ẏ|      for yẏ ≥ 0,
V1(y, ẏ) = max{|y|, |ẏ|}  for yẏ ≤ 0.

It is easy to check that V1 is a Lyapunov function for the pendulum system in the area

Ω1 = {(y, ẏ) : V1(y, ẏ) < π},

which is also the resulting estimate of the region of attraction. The previous estimate Ω0 is contained in Ω1. Even better estimates can be obtained by using other Lyapunov functions with polytopic level sets.
Problem 5.2
In order to solve a quadratic matrix equation X² + AX + B = 0, it was proposed to use the iterative scheme X_{k+1} = X_k² + AX_k + X_k + B, and the question was what should be required of the eigenvalues of X̄ and A + X̄ in order to guarantee that X_k → X̄ exponentially as k → ∞ when X_0 - X̄ is small enough. You are allowed to use the fact that the matrix equation

ay + yb = 0,

where a, b, y are n-by-n matrices, has a non-zero solution y if and only if det(sI - a) = 0 = det(sI + b) for some s ∈ C.
The task at hand is to verify whether the equilibrium X = X̄ of the map

X → F(X) = X² + AX + X + B

is locally exponentially stable. Since

F(X̄ + Δ) = (X̄ + Δ)² + A(X̄ + Δ) + X̄ + Δ + B = F(X̄) + (X̄ + A)Δ + Δ(X̄ + I) + Δ²,

the differential dF of F at X̄ is the linear transformation on the set of n-by-n matrices defined by

dF(Δ) = (X̄ + A)Δ + Δ(X̄ + I).

According to the standard theorem on analysis via linearization, the equilibrium is locally exponentially stable if and only if dF has no eigenvalues z with |z| ≥ 1. Equivalently, the equation dF(Δ) = zΔ should have no non-zero solutions Δ for all |z| ≥ 1. According to the criterion mentioned in the problem formulation, this is true if and only if the sum of an eigenvalue of X̄ + A and an eigenvalue of X̄ + I - zI is not zero for |z| ≥ 1. Equivalently, all pairwise sums of eigenvalues of X̄ + A with eigenvalues of X̄ should lie within the open disc of radius one centered at -1.
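The eigenvalue condition can be illustrated in the scalar case n = 1. The sketch below (Python; the values A = 0, B = -0.25 are a sample choice, not taken from the problem) places the pairwise sum (X̄ + A) + X̄ = -1 at the center of the disc, so the iteration should converge rapidly:

```python
# Scalar illustration of X_{k+1} = X_k^2 + A X_k + X_k + B with n = 1.
A, B = 0.0, -0.25
Xbar = -0.5                     # root of X^2 + A*X + B = 0
assert Xbar**2 + A * Xbar + B == 0.0

X = -0.4                        # X_0 close to Xbar
for _ in range(6):
    X = X**2 + A * X + X + B
print(X)                        # approaches -0.5
```

Here the linearization of the map at X̄ has slope 2X̄ + A + 1 = 0, so the convergence is in fact quadratic.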
Problem 5.3
Use the Center manifold theory to prove local asymptotic stability of the equilibrium at the origin of the Lorenz system

ẋ = -x + yz,
ẏ = -σy + σz,
ż = -xy + y - z.

The linearization of the system at the origin is

ẋ = -x,
ẏ = -σy + σz,
ż = y - z.
To diagonalize the linearized system, introduce the new state variables

x1 = x,  x2 = y - z,  x3 = y + σz.

This transforms the original nonlinear equations into

ẋ1 = -x1 + (x3 + σx2)(x3 - x2)/(σ + 1)²,
ẋ2 = -(σ + 1)x2 + (x3 + σx2)x1/(σ + 1),
ẋ3 = -σ(x3 + σx2)x1/(σ + 1).

According to the basic theorem, the center manifold of this system is defined by x1 = h1(x3) and x2 = h2(x3), where h1, h2 are 100 times continuously differentiable and satisfy

h1(x3) = h11x3² + o(x3²),  h1'(x3) = 2h11x3 + o(x3),  h2(x3) = o(x3²),

together with

h1'(x3) · (-σ(x3 + σh2(x3))h1(x3)/(σ + 1)) = -h1(x3) + (x3 + σh2(x3))(x3 - h2(x3))/(σ + 1)².

Comparing the second order terms on both sides yields

h11 = 1/(σ + 1)².

Hence the dynamics on the center manifold is

ẋ3 = -σx3³/(σ + 1)³ + o(x3³),

which means that the center manifold system dynamics is asymptotically stable. Hence the equilibrium at the origin is locally asymptotically stable.
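A quick numerical check of this conclusion (Python, forward Euler; σ = 10 is an assumed sample value, and the slow cubic decay along the center manifold means convergence is gradual):

```python
# Simulate x' = -x + y z, y' = -sigma*y + sigma*z, z' = -x y + y - z
# from a small initial state and confirm that the norm decreases.
sigma = 10.0

def f(x, y, z):
    return -x + y*z, -sigma*y + sigma*z, -x*y + y - z

x, y, z = 0.1, 0.1, 0.1
r0 = (x*x + y*y + z*z) ** 0.5
dt = 1e-3
for _ in range(200000):          # integrate to t = 200
    dx, dy, dz = f(x, y, z)
    x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
r = (x*x + y*y + z*z) ** 0.5
print(r0, r)                     # r < r0: the state has contracted
```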
Problem 5.4
Check local asymptotic stability of the periodic trajectory y(t) = sin(t) of system

ÿ(t) + ẏ(t) + y(t)³ = -sin(t) + cos(t) + sin³(t).

The linearized system equations for small δ(t) = y(t) - sin(t) are given by

δ̈(t) + δ̇(t) + 3 sin²(t)δ(t) = 0,

or, equivalently,

ẋ(t) = A(t)x(t),  A(t) = [ 0  1 ; -3 sin²(t)  -1 ],

where

x(t) = [δ(t); δ̇(t)].
The evolution matrix of the linear system over its period π, calculated numerically using the MATLAB code

n=1000;  % number of discretization steps
M=eye(2);
T=pi;
for k=1:n,
  M=expm([0 1;-3*sin(k*T/n)^2 -1]*(T/n))*M;
end

is given by

M ≈ [ 0.2995  0.2362 ; -0.0986  0.0665 ]

and has eigenvalues well within the unit circle. Hence, the periodic solution is locally exponentially stable.
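The computation can be cross-checked independently (Python/NumPy sketch; RK4 integration of the fundamental matrix replaces MATLAB's expm, and Liouville's formula gives an exact reference: since trace A(t) ≡ -1, det M = e^(-π)):

```python
import numpy as np

# Monodromy matrix of x' = A(t) x, A(t) = [[0, 1], [-3 sin^2 t, -1]],
# over one period T = pi, via RK4 integration of the fundamental matrix.
def A(t):
    return np.array([[0.0, 1.0], [-3.0 * np.sin(t)**2, -1.0]])

T, n = np.pi, 20000
dt = T / n
M = np.eye(2)
t = 0.0
for _ in range(n):
    k1 = A(t) @ M
    k2 = A(t + dt/2) @ (M + dt/2 * k1)
    k3 = A(t + dt/2) @ (M + dt/2 * k2)
    k4 = A(t + dt) @ (M + dt * k3)
    M = M + dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    t += dt

rho = max(abs(np.linalg.eigvals(M)))
print(M, rho, np.linalg.det(M), np.exp(-np.pi))
```

A spectral radius below one confirms local exponential stability, and det M agreeing with e^(-π) validates the integration.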
Problem 5.5
Find all values of parameter a ∈ R such that every solution x : [0, ∞) → R² of the ODE

ẋ(t) = ε [ cos(2t)  a ; cos⁴(t)  sin⁴(t) ] x(t)

converges to zero as t → ∞ when ε > 0 is a sufficiently small constant.

Since the integral of the trace of

A(t) = [ cos(2t)  a ; cos⁴(t)  sin⁴(t) ]

over its period is positive, there are no a ∈ R, ε > 0 for which all solutions converge to zero as t → +∞.
A more interesting case takes place when A(t) is replaced with

A1(t) = [ cos(2t)  a ; cos⁴(t)  -sin⁴(t) ].

Then the average of A1(t) over the period equals

Ā = [ 0  a ; 3/8  -3/8 ].

When a < 0, this is a Hurwitz matrix, which, according to the averaging theorem, guarantees asymptotic stability for sufficiently small ε > 0. When a = 0, the original equations yield

ẋ1(t) = ε cos(2t)x1(t),

and hence x1(t) does not converge to zero as t → ∞ when x1(0) ≠ 0. When a > 0, Ā has an eigenvalue with positive real part. Repeating the arguments from the proof of Theorem 10.2 shows that the evolution matrix of the system will have eigenvalues outside of the closed unit disc for all sufficiently small ε > 0.
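The averaged matrix is easy to recompute numerically. A minimal sketch (Python/NumPy; a = -1 is a sample value in the stable range):

```python
import numpy as np

# Average of A1(t) = [[cos 2t, a], [cos^4 t, -sin^4 t]] over one period,
# approximated by a Riemann sum on a uniform grid.
t = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)

def average(a):
    return np.array([[np.mean(np.cos(2*t)), a],
                     [np.mean(np.cos(t)**4), -np.mean(np.sin(t)**4)]])

Abar = average(-1.0)
print(Abar)                              # approx [[0, -1], [3/8, -3/8]]
print(np.linalg.eigvals(Abar))           # real parts negative for a = -1
```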
The statement is false. To see this, consider the case when n = m = 1,

f(x, y) = 3y²,  g(x, y) = x - y³,  x0(t) = t³,  y0(t) = t.

Then conditions (a)-(d) are satisfied. However, every solution xε, yε of the singularly perturbed equation with initial conditions xε(0) = yε(0) = 0 will be identically equal to zero. Hence, the convergence conclusion does not hold.
Problem 6.2
Find all values of parameter k ∈ R for which solutions of the ODE system

ẋ1 = -x1 + k(x1³ - 3x1x2²),
ẋ2 = -x2 + k(3x1²x2 - x2³),

with almost all initial conditions converge to zero as t → ∞. Hint: use weighted volume monotonicity with density function

ρ(x) = |x|^(-α).

Also, for those who remember complex analysis, it is useful to pay attention to the fact that

(x1³ - 3x1x2²) + j(3x1²x2 - x2³) = (x1 + jx2)³.
Using Theorem 11.5 with ρ(x) = |x|^(-6) yields convergence of almost every solution to zero for all k ∈ R. Indeed, for f denoting the right side of the system, a direct computation gives

∇·(ρf)(x) = 4|x|^(-6) > 0 for all x ≠ 0,

independently of k. In addition, |f(x)| grows not faster than |x|³ as |x| → ∞, and hence |f(x)|ρ(x) is integrable over the set |x| ≥ 1. Since ρ is positive, the assumptions of Theorem 11.5 are satisfied.
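The divergence computation can be verified symbolically. A sketch using SymPy (the library choice is an assumption; any CAS would do):

```python
import sympy as sp

# Symbolic check of div(rho*f) for rho(x) = |x|^{-6}, where f is the
# right-hand side of the system above; the k-dependent terms cancel.
x1, x2, k = sp.symbols('x1 x2 k', real=True)
f1 = -x1 + k*(x1**3 - 3*x1*x2**2)
f2 = -x2 + k*(3*x1**2*x2 - x2**3)
rho = (x1**2 + x2**2)**-3
div = sp.simplify(sp.diff(rho*f1, x1) + sp.diff(rho*f2, x2))
print(div)   # equals 4/(x1^2 + x2^2)^3, independent of k
```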
Problem 6.3
Equations for steering a two-wheeled vehicle (with one wheel used for steering and the other wheel fixed) on a planar surface are given by

ẋ1 = cos(x3)u1,
ẋ2 = sin(x3)u1,
ẋ3 = x4u1,
ẋ4 = u2,

where x1, x2 are the Cartesian coordinates of the fixed wheel, x3 is the angle between the vehicle's axis and the x1 axis, x4 is a parameter characterizing the steering angle, and u1, u2 are functions of time with values restricted to the interval [-1, 1].

(a) Find all states reachable (time finite but not fixed) from a given initial state x0 ∈ R⁴ by using appropriate control.
This is a driftless system of the form ẋ = g1(x)u1 + g2(x)u2. Let

x = [x1; x2; x3; x4] ∈ R⁴,  g1(x) = [cos(x3); sin(x3); x4; 0],  g2(x) = [0; 0; 0; 1],

and define the Lie brackets

g3(x) = [g1, g2](x) = [0; 0; -1; 0],  g4(x) = [g1, g3](x) = [-sin(x3); cos(x3); 0; 0].

Since vectors g1(x), g2(x), g3(x), g4(x) form a basis in R⁴ for all x ∈ R⁴, every state is reachable from every other state in arbitrary (positive) time, provided that sufficiently large piecewise constant control values can be used. Since in our case the control values have limited range, every state is reachable from every other state in sufficient time.
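The bracket computation can be verified symbolically. A SymPy sketch (the bracket convention [f, g] = Dg·f - Df·g is assumed; the opposite sign convention only flips the brackets, not the spanning conclusion):

```python
import sympy as sp

# Lie brackets for the steering system: g1, g2 plus two brackets span R^4,
# which is the controllability rank condition for the driftless system.
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
X = sp.Matrix([x1, x2, x3, x4])

def bracket(f, g):
    # [f, g] = Dg*f - Df*g
    return g.jacobian(X) * f - f.jacobian(X) * g

g1 = sp.Matrix([sp.cos(x3), sp.sin(x3), x4, 0])
g2 = sp.Matrix([0, 0, 0, 1])
g3 = bracket(g1, g2)
g4 = bracket(g1, g3)
G = sp.Matrix.hstack(g1, g2, g3, g4)
print(g3.T, g4.T, sp.simplify(G.det()))   # det = 1 for every x
```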
(b) Design and test in computer simulation an algorithm for moving in finite time the state from a given initial position x0 ∈ R⁴ to an arbitrary reachable state x1 ∈ R⁴.
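The Lie-bracket mechanism behind part (a) can be seen in simulation. The following minimal sketch (not the full point-to-point algorithm; durations and step sizes are assumptions) applies a four-phase "square" input maneuver and shows that the steering angle x4 returns to its start while the heading x3 picks up a net change of order h²:

```python
import math

def step(x, u1, u2, dt):
    """One explicit Euler step of the steering model of Problem 6.3."""
    x1, x2, x3, x4 = x
    return (x1 + dt * math.cos(x3) * u1,
            x2 + dt * math.sin(x3) * u1,
            x3 + dt * x4 * u1,
            x4 + dt * u2)

def maneuver(x, h, dt=1e-3):
    """Square maneuver (u1,u2) = (1,0), (0,1), (-1,0), (0,-1), each of duration h."""
    for u1, u2 in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        for _ in range(int(round(h / dt))):
            x = step(x, u1, u2, dt)
    return x

h = 0.1
x = maneuver((0.0, 0.0, 0.0, 0.0), h)
# x4 returns to 0, while the heading x3 changes by about -h*h,
# i.e. net motion in the bracket direction
print(x[2], x[3])
```

Chaining such maneuvers (for the second-order bracket direction g4 as well) is one way to implement the point-to-point algorithm requested in (b).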
(7.1)
where sgn denotes the sign function:
  sgn(y) = 1 for y > 0, 0 for y = 0, -1 for y < 0.
For T > 0, a 2T-periodic solution x = x(t) of (7.1) is called a regular unimodal limit cycle if Cx(t) = -Cx(t + T) > 0 for all t ∈ (0, T), and CAx(0) > |CB|.
Let F : R × Y → Rⁿ be defined by
  F(t, x̄) = e^{At}(x̄ + A⁻¹B) - A⁻¹B.
By definition, F(τ, x̄) is the value at t = τ of the solution z = z(t) of the ODE dz/dt = Az + B with z(0) = x̄. Since CF(t, x0) > 0 for t ∈ (0, T) and
  dCF/dt (0, x̄) = C(Ax̄ + B) ≈ C(Ax0 + B) > 0
whenever x̄ ∈ Y is sufficiently close to x0, we conclude that CF(t, x̄) > 0 for all sufficiently small t > 0. On the other hand,
  dCF/dt (T, x0) = C(Ax(T) + B) = -CAx0 + CB < 0.
Hence, by the implicit mapping theorem, for x̄ ∈ Y sufficiently close to x0 the equation CF(t, x̄) = 0 has a unique solution t = h(x̄) in a neighborhood of t = T.
Problem 7.2
A linear system controlled by modulation of its coefficients is modeled by
  ẋ(t) = (A + Bu(t))x(t),  (7.2)
where A, B are fixed n-by-n matrices, and u(t) ∈ R is a scalar control.
(a) Is it possible for the system to be controllable over the set of all non-zero vectors x̄ ∈ Rⁿ, x̄ ≠ 0, when n ≥ 3? In other words, is it possible to find matrices A, B with n > 2 such that for every non-zero x̄0, x̄1 there exist T > 0 and a bounded function u : [0, T] → R such that the solution of (7.2) with x(0) = x̄0 satisfies x(T) = x̄1?
The answer to this question is positive (examples exist for all n > 1). One such example is given by
  A = 0.5(Δ + Γ), B = I + 0.5(Δ - Γ),
where
  Δ = [0 0 0; 0 0 -1; 0 1 0], Γ = [0 -1 0; 1 0 0; 0 0 0].
To show that the resulting system (7.2) is controllable over the set of non-zero states, note first that the auxiliary driftless system with three scalar controls
  ẋ = Δx u1 + Γx u2 + x u3
satisfies the conditions of complete controllability for all x ≠ 0. Indeed, the Lie bracket g = [g1, g2] of the linear vector fields gk(x) = A_k x is given by g(x) = -Āx, where Ā = [A1, A2] = A1A2 - A2A1 is the commutator of the matrices A1 and A2. Hence for g1(x) = Δx, g2(x) = Γx, and g3 = [g1, g2] we have g3(x) = Θx, where
  Θ = [0 0 1; 0 0 0; -1 0 0].
Since the matrix
  [Δx Γx Θx x] = [0 -x2 x3 x1; -x3 x1 0 x2; x2 0 -x1 x3]
has rank 3 for every x ≠ 0, the conditions of complete controllability are satisfied.
Since the auxiliary system is fully controllable for x ≠ 0, it is also fully controllable using piecewise constant controls along the vector fields Δx, Γx, x. Note that the flow along Δx is given by S_t(x) = e^{Δt}x. Since e^{2πΔ} = I, negative-time flows along Δx can be implemented using positive-time flows. The same conclusion is also true for Γ. Since the flows along (A + B)x = Δx + x and (A - B)x = Γx - x differ from the flows along Δx and Γx only in a scaling of the trajectory, we conclude that for every non-zero x̄1, x̄2 ∈ R³ there exists a (piecewise constant) control u which moves x̄1 to λx̄2 for some λ > 0. Therefore, for every non-zero x̄1, x̄2 ∈ R³ there exists a (piecewise constant) control u which moves x̄1 first to a positive multiple of x̄₊, then to a positive multiple of x̄₋, and then to λx̄2 for some λ > 0, where
  x̄₊ = [1; 0; 0], x̄₋ = [0; 0; 1].
Note that the line {cx̄₊ : c ∈ R} is invariant for the flow defined by the vector field Δx + x (since Δx̄₊ = 0), and this flow moves points of the line monotonically away from the origin. Similarly, the line {cx̄₋ : c ∈ R} is invariant for the flow defined by the vector field Γx - x, and this flow moves points of the line monotonically towards the origin. Hence, there also exists a piecewise constant control u which moves x̄1 first to a multiple of x̄₊, then scales it along {cx̄₊} by an arbitrary factor c₊ ≥ 1, then moves it to a multiple of x̄₋, scales it along {cx̄₋} by an arbitrary factor c₋ ≤ 1, and finally moves it to c₋c₊λx̄2. Selecting c₊, c₋ in such a way that c₋c₊λ = 1 yields a trajectory from x̄1 to x̄2.
While the theoretical derivation above is easy to generalize to higher dimensions, there exists a rather simple explicit algorithm for moving from a given vector x̄1 ≠ 0 to a given vector x̄2 ≠ 0 using not more than five switches of the piecewise constant control value u(t) ∈ {-1, 1}.
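Two facts used above can be verified directly; this sketch checks them for the matrices Δ, Γ, Θ as reconstructed here (the matrices themselves are an assumed reconstruction of the garbled source):

```python
import math

# Assumed reconstruction of the rotation generators used in Problem 7.2
Delta = ((0, 0, 0), (0, 0, -1), (0, 1, 0))
Gamma = ((0, -1, 0), (1, 0, 0), (0, 0, 0))
Theta = ((0, 0, 1), (0, 0, 0), (-1, 0, 0))

def commutator(A, B):
    """[A, B] = AB - BA for 3x3 matrices."""
    return [[sum(A[i][k]*B[k][j] - B[i][k]*A[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

# Gamma*Delta - Delta*Gamma = Theta: the bracket of the two control
# fields produces the third rotation direction.
print(commutator(Gamma, Delta))

def mul(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def flow(M, v, t, n=10000):
    """RK4 integration of xdot = M x over time t."""
    dt = t / n
    for _ in range(n):
        k1 = mul(M, v)
        k2 = mul(M, [v[i] + 0.5*dt*k1[i] for i in range(3)])
        k3 = mul(M, [v[i] + 0.5*dt*k2[i] for i in range(3)])
        k4 = mul(M, [v[i] + dt*k3[i] for i in range(3)])
        v = [v[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(3)]
    return v

# exp(2*pi*Delta) = I: the time-2*pi flow along Delta*x is the identity,
# so negative-time rotations can be produced with positive-time flows.
v0 = [0.3, -1.2, 0.8]
v1 = flow(Delta, v0, 2*math.pi)
print(max(abs(v1[i] - v0[i]) for i in range(3)))
```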
(b) Is it possible for the system to be full state feedback linearizable in a neighborhood of some point x0 ∈ Rⁿ for some n > 2?
The answer to this question is positive (examples exist for all n > 1). To find an example, search for a linear output y = Cx of relative degree n, which requires CBx ≡ 0 and CABx ≡ 0 for n = 3. Take
  C = [1 0 0], B = [0 0 0; 0 0 0; 1 0 0], A = [0 1 0; 0 0 1; 0 0 0], x0 = [1; 0; 0].
Then, with f(x) = Ax and g(x) = Bx, the output y = Cx has relative degree 3 in a neighborhood of x0, since CA²Bx0 = 1 ≠ 0.
Problem 7.3
A nonlinear ODE control model with control input u and controlled output y is defined by equations
  ẋ1 = x2 + x3²,
  ẋ2 = (1 - 2x3)u + a sin(x1) - x2 + x3 - x3²,
  ẋ3 = u,
  y = x1.
(a) In the new coordinates
  z1 = x1, z2 = x2 + x3², z3 = x2 + x3² - x3,
and with the new control
  v = u + a sin(x1) - x2 + x3 - x3²,
the system equations become
  ż1 = z2,
  ż2 = v,
  ż3 = a sin(z1) - z3.
(b) Design a (dynamical) feedback controller with inputs x(t), r(t), where r = r(t) is the reference input, such that for every bounded r = r(t) the system state x(t) stays bounded as t → ∞, and y(t) → r(t) as t → ∞ whenever r = r(t) is constant.
One such controller is given by
  u = -kp(x1 - r) - kd(x2 + x3²) - a sin(x1) + x2 - x3 + x3²,
where kp and kd are arbitrary positive constants, which is equivalent to
  v = -kp(z1 - r) - kd z2.
Since the corresponding equations for z1, z2 are those of a stable LTI system, z1, z2 remain bounded whenever r is bounded, and z1 → r when r is constant. Since the right-hand side of dz3/dt + z3 = a sin(z1) is also bounded, z3 remains bounded as well. Since the transformation from z back to x, given by
  x1 = z1, x2 = z2 - (z2 - z3)², x3 = z2 - z3,
is continuous, x is also bounded whenever r is bounded.
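The state-feedback design of part (b) can be tested numerically. A minimal sketch (Euler integration; the values a = 1, kp = 2, kd = 3, r = 0.7 are assumptions, not from the original):

```python
import math

def simulate(a=1.0, r=0.7, kp=2.0, kd=3.0, dt=1e-3, T=30.0, x=(0.0, 0.0, 0.0)):
    """Euler simulation of Problem 7.3 with the state feedback of part (b)."""
    x1, x2, x3 = x
    for _ in range(int(T / dt)):
        # u = -kp(x1-r) - kd(x2+x3^2) - a sin(x1) + x2 - x3 + x3^2
        u = -kp*(x1 - r) - kd*(x2 + x3**2) - a*math.sin(x1) + x2 - x3 + x3**2
        dx1 = x2 + x3**2
        dx2 = (1 - 2*x3)*u + a*math.sin(x1) - x2 + x3 - x3**2
        dx3 = u
        x1, x2, x3 = x1 + dt*dx1, x2 + dt*dx2, x3 + dt*dx3
    return x1, x2, x3

x1, x2, x3 = simulate()
print(x1)  # approaches the constant reference r = 0.7
```

In z coordinates the closed loop is exactly the stable linear system ż1 = z2, ż2 = -kp(z1 - r) - kd z2, which is why the output settles at the constant reference.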
(c) Find all values of a ∈ R for which the open loop system is full state feedback linearizable.
It is convenient to check the full state feedback linearizability conditions in terms of the z state variables, for which
  f(z) = [z2; 0; a sin(z1) - z3], g = [0; 1; 0],
and hence
  [f, g] = [-1; 0; 0], [f, [f, g]] = [0; 0; a cos(z1)].
This means that the system is locally full state feedback linearizable (to a controllable system) whenever a cos(z1) ≠ 0. For a = 0 the system is an uncontrollable LTI system. For a ≠ 0 and cos(z1) ≠ 0 the new coordinates
  p1 = z3, p2 = a sin(z1) - z3, p3 = a cos(z1)z2 - a sin(z1) + z3
and the new control variable
  w = a cos(z1)v - a sin(z1)z2² - a cos(z1)z2 + a sin(z1) - z3
linearize the system equations completely.
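The chain ṗ1 = p2, ṗ2 = p3, ṗ3 = w can be verified numerically; a sketch using directional finite differences (the sample value a = 1.3 and the test points are assumptions):

```python
import math

a = 1.3  # sample nonzero parameter value (assumption for the check)

def pcoords(z):
    """Candidate linearizing coordinates from Problem 7.3(c)."""
    z1, z2, z3 = z
    return (z3,
            a*math.sin(z1) - z3,
            a*math.cos(z1)*z2 - a*math.sin(z1) + z3)

def check(z, v, h=1e-6):
    """Max deviation between dp/dt along the flow and the chain (p2, p3, w)."""
    z1, z2, z3 = z
    zdot = (z2, v, a*math.sin(z1) - z3)          # dynamics in z coordinates
    zp = [z[i] + h*zdot[i] for i in range(3)]
    zm = [z[i] - h*zdot[i] for i in range(3)]
    pdot = [(pcoords(zp)[i] - pcoords(zm)[i])/(2*h) for i in range(3)]
    w = (a*math.cos(z1)*v - a*math.sin(z1)*z2**2
         - a*math.cos(z1)*z2 + a*math.sin(z1) - z3)
    expected = (pcoords(z)[1], pcoords(z)[2], w)
    return max(abs(pdot[i] - expected[i]) for i in range(3))

print(check((0.4, -0.7, 0.2), 0.9))  # ~0 up to finite-difference error
```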
(d) Try to design a dynamical feedback controller with inputs y(t), r(t) which achieves the objectives from (b). Test your design by a computer simulation.
Since all nonlinear elements of the z equations are functions of the observable variable y = z1, it is easy to construct a stable observer for the system:
  dẑ1/dt = ẑ2 + k1(y - ẑ1),
  dẑ2/dt = u + a sin(y) - ẑ3 + k2(y - ẑ1),
  dẑ3/dt = a sin(y) - ẑ3,
where k1, k2 are arbitrary positive coefficients. With this observer, the control action can be defined by
  u = -kp(ẑ1 - r) - kd ẑ2 - a sin(ẑ1) + ẑ3.
Take-Home Test 1
For each problem, give an answer and provide supporting arguments, not to exceed one page per problem. Return your test paper by 11.05 am on Friday October 17, in the classroom. Remember that collaboration is not allowed on test assignments.
Problem T1.1
Find all values of a ∈ R for which the function V : R² → R, defined by
  V([x1; x2]) = max{|x1|, |x2|},
is monotonically non-increasing along solutions of the ODE
Problem T1.2
Find all values of r ∈ R for which the differential inclusion
  ẋ(t) ∈ Φ(x(t)), x(0) = x̄0,
where Φ : R² → 2^{R²} is defined by
  Φ(x̄) = {f(x̄/|x̄|)} for x̄ ≠ 0,
  Φ(0) = {f(y) : y = [y1; y2] ∈ R², |y1| + |y2| ≤ r},
has a solution x : [0, ∞) → R² for every continuous function f : R² → R² and for every initial condition x̄0 ∈ R².
Problem T1.3
Find all values q, r ∈ R for which x̄0 = 0 is not a (locally) stable equilibrium of the ODE
  ẋ(t) = Ax(t) + B(Cx(t))^{1/3}
for every set of matrices A, B, C of dimensions n-by-n, n-by-1, and 1-by-n respectively, such that A is a Hurwitz matrix and
  Re[(1 + jqω)G(jω)] > r  for all ω ∈ R
for
  G(s) = C(sI - A)⁻¹B.
Problem T1.1
Find all values of a ∈ R for which the function
  V([x1; x2]) = max{|x1|, |x2|}
is monotonically non-increasing along solutions of the ODE. Note that
  (1/2) d(x1²)/dt = -x1² + x1 sin(x2) ≤ -|x1|(|x1| - |x2|)
when t > 0 is small enough, which proves that V is not monotonically decreasing.
Problem T1.2
Find all values of r ∈ R for which the differential inclusion
  ẋ(t) ∈ Φ(x(t)), x(0) = x̄0,
where Φ : R² → 2^{R²} is defined by
  Φ(x̄) = {f(x̄/|x̄|)} for x̄ ≠ 0,
  Φ(0) = {f(y) : y = [y1; y2] ∈ R², |y1| + |y2| ≤ r},
has a solution x : [0, ∞) → R² for every continuous f and every initial condition.
Proof. First, let us show that existence of solutions is not guaranteed when r < √2. Let ε > 0 be such that √2ε < √2 - r, and define
  f([x1; x2]) = [0.5√2(1 - ε) - x1; -0.5√2(1 - ε) - x2].
Let us show that, for this f, the differential inclusion ẋ(t) ∈ Φ(x(t)) with x(0) = 0 has no solution. Note that f(x̄) = c - x̄, where the point c = 0.5√2(1 - ε)[1; -1] satisfies |c| = 1 - ε < 1 and |c1| + |c2| = √2(1 - ε) > r; in particular, f(y) ≠ 0 on the set |y1| + |y2| ≤ r, so 0 ∉ Φ(0) and x(t) ≡ 0 is not a solution.
To prove existence of solutions for r ≥ √2, one is tempted to use the existence theorem relying on convexity and semicontinuity of Φ(·). However, these assumptions are not necessarily satisfied in this case, since the set Φ(0) does not have to be convex. Instead, note that, by the continuity of f, existence of a solution x : [t0, t0 + |x̄0|/M) → R² with x(t0) = x̄0 is guaranteed for all x̄0 ≠ 0, where M is an upper bound for |f|. Hence, it is sufficient to show that a solution x0 : [0, ∞) → R² with x0(0) = 0 exists.
To do this, consider two separate cases: 0 ∈ Φ(0) and 0 ∉ Φ(0). If 0 ∈ Φ(0) then x(t) ≡ 0 is the desired solution of the differential inclusion. Let us show that 0 ∉ Φ(0) implies existence of a solution q ∈ (0, ∞), |u| = 1 of the equation f(u) = qu. Indeed, if 0 ∉ Φ(0) and r ≥ √2 then 0 ≠ f(λu) for all λ ∈ [0, 1], |u| = 1, and hence
  (λ, u) ↦ f(λu)/|f(λu)|
is a homotopy between the vector fields f1 : u ↦ f(u)/|f(u)| and f0 : u ↦ f(0)/|f(0)|. Since the index of the constant map f0 is zero, the index of f1 is zero as well. However, assuming that f(u) ≠ qu for all q ∈ (0, ∞), |u| = 1 yields a homotopy
  (λ, u) ↦ (λu - (1 - λ)f(u))/|λu - (1 - λ)f(u)|
between -f1 (which also has index zero) and the identity map, which is impossible, since the identity map has index 1. Hence f(u) = qu for some q > 0, |u| = 1, which yields x0(t) = qtu as a valid solution x0 : [0, ∞) → R² of the differential inclusion.
Problem T1.3
Find all values q, r ∈ R for which x̄0 = 0 is not a (locally) stable equilibrium of the ODE
  ẋ(t) = Ax(t) + B(Cx(t))^{1/3}  (1.1)
for every set of matrices A, B, C of dimensions n-by-n, n-by-1, and 1-by-n respectively, such that A is a Hurwitz matrix and
  Re[(1 + jqω)G(jω)] > r  for all ω ∈ R  (1.2)
for
  G(s) = C(sI - A)⁻¹B.
Answer: r ≥ 0, q ∈ R arbitrary (note, however, that for r ≥ 0 (1.2) implies q ≥ 0).
Proof. If r < 0, take A = -1, B = 0, C = 1 to get an example of A, B, C satisfying the conditions and such that x̄0 = 0 is a (globally asymptotically) stable equilibrium of (1.1).
Now consider the case r ≥ 0. Then, informally speaking, the frequency domain condition means some sort of passivity of G, while (1.1) describes a positive feedback interconnection of G with the nonlinearity y ↦ w = y^{1/3}, which can be characterized as having an arbitrarily large positive gain for y ≈ 0. Hence one expects instability of the zero equilibrium of (1.1).
To show local instability, let us prove existence of a Lyapunov function V = V(x) for which 0 is not a local minimum, and
  (d/dt)V(x(t)) < 0 whenever |Cx(t)| ∈ (0, ε0)
for some ε0 > 0. Note that this will imply instability of the equilibrium x̄0 = 0, since every solution with V(x(0)) < V(0) and |x(0)| < ε0/(2|C|) will eventually cross the sphere |x| = ε0/(2|C|) (otherwise |Cx(t)| ≤ ε0/2 for all t ≥ 0, hence V(x(t)) is monotonically non-increasing, and all limit points x̄ of x(·) satisfy Cx̄ = 0; therefore every solution x̄(t) of (1.1) beginning at such a limit point satisfies Cx̄(t) ≡ 0 and hence converges to the origin, which contradicts V(x(0)) < V(0)).
By introducing w(t) = (Cx(t))^{1/3}, the system equations can be re-written in the form
  ẋ(t) = Ax(t) + Bw(t).
Consider first the (simpler) case when r > 0 (and hence q > 0). Then one can use the inequality
  w(t)Cx(t) ≤ r|w(t)|²,
valid for sufficiently small |Cx(t)|. Condition (1.2) together with the KYP Lemma yields existence of a matrix P = P' such that
  wCx̄ + qwC(Ax̄ + Bw) - r|w|² ≥ 2x̄'P(Ax̄ + Bw)  for all x̄ ∈ Rⁿ, w ∈ R.
Substituting w = (Cx)^{1/3}, we get
  (d/dt)[x'Px - 0.75q|Cx|^{4/3}] ≤ |Cx|^{4/3} - r|Cx|^{2/3},
which is exactly what is needed, because y^{4/3} - ry^{2/3} < 0 for y ∈ (0, r^{3/2}), and for every x̄0 ∈ Rⁿ such that Cx̄0 ≠ 0 the expression
  V(x̄) = x̄'Px̄ - 0.75q|Cx̄|^{4/3}
is negative when x̄ = ρx̄0 and ρ > 0 is small enough.
To prove the answer in the general case, note that the inequality
  w(t)Cx(t) ≥ R|Cx(t)|²
is satisfied whenever |Cx(t)| ∈ (0, ε) with ε > 0 and R = ε^{-2/3}, i.e. R can be made arbitrarily large by selecting an appropriate ε > 0. Therefore the derivative bound for V(x(t)) = x(t)'Px(t) will hold if
  2x̄'P(Ax̄ + Bw) ≤ R|Cx̄|² - wCx̄  for all x̄ ∈ Rⁿ, w ∈ R.  (1.3)
(Indeed, substituting w = KCx̄ into (1.3) shows that V is monotonically non-increasing along solutions of ẋ = (A + BKC)x whenever K ≤ R.) Therefore, P = P' cannot be positive semidefinite if A + BKC has eigenvalues with a positive real part.
We will rely on the following statement from linear system theory: if H(s) is a stable proper rational transfer function which is positive real (i.e. Re(H(jω)) > 0 for all ω ∈ R) then Re H(s) > 0 whenever Re(s) > 0, and the relative degree of H is not larger than one.
Consider H(s) = (1 + qs)G(s). By assumption, H is positive real and proper. Hence q ≥ 0 (otherwise H(-1/q) = 0). If the relative degree of H is zero then q > 0, and hence sG(s) converges to a non-zero limit H∞ as s → ∞. Since (1 + qσ)G(σ) > 0 for σ > 0, it follows that H∞ > 0, and hence
  Re[1/G(jω)] = Re[jω/(jωG(jω))]
is bounded as ω → ∞. If the relative degree of H is one then sH(s) converges to a positive limit as s → ∞, and hence
  Re[1/G(jω)] = Re[(jω)²/((jω)²G(jω))]
is bounded from above as ω → ∞. Finally, since G(σ) > 0 and G(σ) → 0 as σ → +∞, it follows that the equation 1 = KG(σ) has a positive solution σ for all sufficiently large K > 0. Hence the matrix A + BKC has a positive eigenvalue for all sufficiently large K > 0.
Take-Home Test 2
For each problem, give an answer and provide supporting arguments, not to exceed two pages per problem. Return your test paper by 11.05 am on Wednesday November 19, in the classroom. Remember that collaboration is not allowed on test assignments.
Problem T2.1
System of ODE equations
  ẋ(t) = Ax(t) + Bφ(Cx(t) + cos(t)),  (1.1)
Problem T2.2
Function g1 : R³ → R³ is defined by
  g1([x1; x2; x3]) = [1; x1; 0].
(a) Find a continuously differentiable function g2 : R³ → R³ such that the driftless system
  ẋ(t) = g1(x(t))u1(t) + g2(x(t))u2(t)  (1.2)
is completely controllable on R³.
(b) Find continuously differentiable functions g2 : R³ → R³ and h : R³ → R such that ∇h(x̄) ≠ 0 for all x̄ ∈ R³ and h(x(t)) is constant on all solutions of (1.2). (Note: function g2 in (b) does not have to be (and cannot be) the same as g2 in (a).)
(c) Find a continuously differentiable function g2 : R³ → R³ such that the driftless system (1.2) is not completely controllable on R³, but, on the other hand, there exists no continuously differentiable function h : R³ → R such that ∇h(x̄) ≠ 0 for all x̄ ∈ R³ and h(x(t)) is constant on all solutions of (1.2).
Problem T2.3
An ODE control system model is given by equations
  ẋ1(t) = x2(t)² + u(t),
  ẋ2(t) = x3(t)² + u(t),  (1.3)
  ẋ3(t) = p(x1(t)) + u(t).
(a) Find all polynomials p : R → R such that system (1.3) is full state feedback linearizable in a neighborhood of x = 0.
(b) For each polynomial p found in (a), design a feedback law
  u(t) = K(x1(t), x2(t), x3(t)) = Kp(x1(t), x2(t), x3(t))
which makes the origin a locally asymptotically stable equilibrium of (1.3).
(c) Find a C∞ function p : R → R for which system (1.3) is globally full state feedback linearizable, or prove that such p(·) does not exist.
and hence
  det M(T) = exp(∫_0^T trace(A + Bh(t)C) dt).
Since
  trace(A + Bh(t)C) = trace(A) + CBh(t) = trace(A),
we get det(M(T)) > 1 whenever trace(A) > 0. Hence trace(A) ≤ 0 is a necessary condition for local asymptotic stability of x0(·).
Since system (1.1) with k = q = 1, φ(y) ≡ y,
  A = [a 0; 0 a], B = [1; 0], C = [0 1]
has a periodic stable steady state solution, consider the extended autonomous system
  ż1(t) = -z2(t),
  ż2(t) = z1(t),  (1.2)
  ż3(t) = Az3(t) + Bφ(Cz3(t) + z1(t)/√(z1(t)² + z2(t)²)),
defined for z1² + z2² ≠ 0. If (1.1) has an asymptotically stable periodic solution x0 = x0(t) then, for ε > 0 small enough, solutions of (1.2) with
  |[z1(0); z2(0)] - z̄| ≤ ε, z̄ ≠ 0, |z3(0) - x0(0)| ≤ ε
satisfy
  lim_{t→∞} |z3(t) - x0(t + τ)| = 0
for some τ.
Problem T2.2
Function g1 : R³ → R³ is defined by
  g1([x1; x2; x3]) = [1; x1; 0].
(a) Find a continuously differentiable function g2 : R³ → R³ such that the driftless system
  ẋ(t) = g1(x(t))u1(t) + g2(x(t))u2(t)  (1.3)
is completely controllable on R³.
For
  g2(x) = [1; 0; 1] = const,
we have
  g3 = [g1, g2] = [0; -1; 0].
Since g1(x), g2, g3 form a basis in R³ for all x, the resulting system (1.3) is completely controllable on R³.
(b) Find continuously differentiable functions g2 : R³ → R³ and h : R³ → R such that ∇h(x̄) ≠ 0 for all x̄ ∈ R³ and h(x(t)) is constant on all solutions of (1.3). (Note: function g2 in (b) does not have to be (and cannot be) the same as g2 in (a).)
For example,
  g2(x) = [1; 1; 0] = const, h([x1; x2; x3]) = x3.
(c) For
  g2(x) = [0; x1; x3],
we have
  g3 = [g2, g1] = [0; -1; 0]
for all x, and the vectors g1(x), g2(x), g3 span R³ if and only if x3 ≠ 0. The plane {x : x3 = 0} is invariant for (1.3), so the system is not completely controllable. On the other hand, any h with h(x(t)) constant on all solutions must be constant on each of the open sets x3 > 0 and x3 < 0, and hence, by continuity, constant on R³, which is incompatible with ∇h(x̄) ≠ 0 for all x̄.
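The bracket and rank computations above can be checked numerically; a sketch using directional finite differences (evaluation point chosen arbitrarily):

```python
def dirder(F, x, v, h=1e-6):
    """Directional derivative DF(x) v by central differences."""
    xp = [x[i] + h*v[i] for i in range(3)]
    xm = [x[i] - h*v[i] for i in range(3)]
    Fp, Fm = F(xp), F(xm)
    return [(Fp[i] - Fm[i])/(2*h) for i in range(3)]

def bracket(f, g, x):
    """Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x)."""
    a = dirder(g, x, f(x))
    b = dirder(f, x, g(x))
    return [a[i] - b[i] for i in range(3)]

def det3(c1, c2, c3):
    cx = (c2[1]*c3[2] - c2[2]*c3[1],
          c2[2]*c3[0] - c2[0]*c3[2],
          c2[0]*c3[1] - c2[1]*c3[0])
    return sum(c1[i]*cx[i] for i in range(3))

g1 = lambda x: [1.0, x[0], 0.0]

# part (a): g2 = [1; 0; 1] gives bracket [0; -1; 0] and a basis everywhere
g2a = lambda x: [1.0, 0.0, 1.0]
x = [0.7, -1.3, 0.4]
b = bracket(g1, g2a, x)
print(b, det3(g1(x), g2a(x), b))   # bracket ~ [0, -1, 0], det = 1

# part (c): g2 = [0; x1; x3] loses rank exactly on the plane x3 = 0
g2c = lambda x: [0.0, x[0], x[2]]
bc = bracket(g2c, g1, x)
print(det3(g1(x), g2c(x), bc))     # equals x3 (here 0.4)
```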
Problem T2.3
An ODE control system model is given by equations
  ẋ1(t) = x2(t)² + u(t),
  ẋ2(t) = x3(t)² + u(t),  (1.4)
  ẋ3(t) = p(x1(t)) + u(t).
(a) Find all polynomials p : R → R such that system (1.4) is full state feedback linearizable in a neighborhood of x = 0.
Write (1.4) as ẋ = f(x) + g(x)u,  (1.5)  where
  f([x1; x2; x3]) = [x2²; x3²; p(x1)], g = [1; 1; 1].
Define g1 = g, g2 = [f, g1], g3 = [f, g2], g21 = [g1, g2], i.e.
  g2(x) = -[2x2; 2x3; p'(x1)],
  g3(x) = [4x2x3 - 2x3²; 2x3p'(x1) - 2p(x1); 2x2p'(x1) - p''(x1)x2²],
  g21(x) = -[2; 2; p''(x1)].
For local full state feedback linearizability at x = 0 it is necessary and sufficient for g1, g2(0), g3(0) to be linearly independent (which means p(0)p'(0) ≠ 0) and for g21(x) to be a linear combination of g1(x) and g2(x) for all x in a neighborhood of x = 0 (which means p''(x1) ≡ 2). Hence
  p(x1) = x1² + p1x1 + p0, p0p1 ≠ 0,
is necessary and sufficient for local full state feedback linearizability at x = 0.
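The bracket formulas can be verified numerically for a sample admissible p (the coefficients p0 = 0.7, p1 = -1.3 and the test point are assumptions):

```python
p0, p1 = 0.7, -1.3   # sample coefficients with p0*p1 != 0 (assumption)

f  = lambda x: [x[1]**2, x[2]**2, x[0]**2 + p1*x[0] + p0]
g1 = lambda x: [1.0, 1.0, 1.0]

def dirder(F, x, v, h=1e-5):
    xp = [x[i] + h*v[i] for i in range(3)]
    xm = [x[i] - h*v[i] for i in range(3)]
    Fp, Fm = F(xp), F(xm)
    return [(Fp[i] - Fm[i])/(2*h) for i in range(3)]

def bracket(fa, fb, x):
    da = dirder(fb, x, fa(x))
    db = dirder(fa, x, fb(x))
    return [da[i] - db[i] for i in range(3)]

# g2 = [f, g1] should equal -[2x2; 2x3; p'(x1)]
x = [0.4, -0.9, 1.2]
g2 = bracket(f, g1, x)
print(g2)   # ~ [1.8, -2.4, 0.5] since p'(0.4) = 2*0.4 - 1.3 = -0.5

# involutivity: [g1, g2] = -[2; 2; p''(x1)] = -2*g1(x), since p'' = 2
g2fun = lambda y: bracket(f, g1, y)
g21 = bracket(g1, g2fun, x)
print(g21)  # ~ [-2, -2, -2]
```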
(b) For each polynomial p found in (a), design a feedback law
  u(t) = K(x1(t), x2(t), x3(t)) = Kp(x1(t), x2(t), x3(t))
which makes the origin a locally asymptotically stable equilibrium of (1.4).
Since p(0) ≠ 0, x = 0 is not an equilibrium of (1.4) for any u, and hence cannot be made into a locally asymptotically stable equilibrium of (1.4). However, the origin z = 0 (i.e. with respect to the new coordinates z = Φ(x)) of the feedback linearized system can be made locally asymptotically stable, as long as 0 ∈ Φ(Ω), where Ω is the domain of Φ. Actually, this does not require any knowledge of the coordinate transform Φ, and can be done under an assumption substantially weaker than full state feedback linearizability!
Let the feedback linearized system be
  ż(t) = Az(t) + Bv(t),  (1.6)
where z = Φ(x) and v = α(x) + β(x)u, so that
  f(x) = [Φ'(x)]⁻¹[AΦ(x) + Bα(x)], g(x) = [Φ'(x)]⁻¹Bβ(x).
If x̄ satisfies Φ(x̄) = 0 then x̄ is a conditional equilibrium of (1.5), in the sense that
  f(x̄) + g(x̄)ū = 0
for ū = -α(x̄)/β(x̄). Moreover, every such conditional equilibrium has a controllable linearization, in the sense that the pair (f'(x̄) + g'(x̄)ū, g(x̄)) is controllable as well, because
  f'(x̄) + g'(x̄)ū = S⁻¹(AS - BF), g(x̄) = S⁻¹Bβ(x̄)
for
  S = Φ'(x̄), F = -[α'(x̄) + ūβ'(x̄)].
Hence there exists a row vector K such that
  f'(x̄) + g'(x̄)ū + g(x̄)K
is a Hurwitz matrix, and the feedback u(t) = ū + K(x(t) - x̄) makes x̄ locally asymptotically stable. Indeed, by assumption x̄ is an equilibrium of
  ẋ(t) = fK(x(t)) = f(x(t)) + g(x(t))(ū + K(x(t) - x̄)),
and
  fK'(x̄) = f'(x̄) + g'(x̄)ū + g(x̄)K.
In our case, a conditional equilibrium
  x̄ = [x̄1; x̄2; x̄3]
is defined by the equations
  x̄2² = x̄3² = p(x̄1) = -ū.
Then
  f'(x̄) + g'(x̄)ū = [0 2x̄2 0; 0 0 2x̄3; p'(x̄1) 0 0], g(x̄) = [1; 1; 1],
and the stabilizing feedback law takes the form
  u(t) = -x̄2² + k1(x1(t) - x̄1) + k2(x2(t) - x̄2) + k3(x3(t) - x̄3),
where the coefficients k1, k2, k3 are chosen in such a way that
  [0 2x̄2 0; 0 0 2x̄3; p'(x̄1) 0 0] + [1; 1; 1][k1 k2 k3]
is a Hurwitz matrix.
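A numerical instance of this design (all concrete values are assumptions, not from the original): take p(x1) = x1² + x1 + 1, so that the conditional equilibrium x̄ = (0, 1, -1), ū = -1 is admissible; the gains below place the eigenvalues of the closed-loop Jacobian at -1, -2, -3, and the nonlinear system converges locally to x̄:

```python
# Assumed instance: p(x1) = x1^2 + x1 + 1 (p0 = p1 = 1),
# conditional equilibrium xbar = (0, 1, -1), ubar = -p(0) = -1.
p0, p1 = 1.0, 1.0
p = lambda s: s*s + p1*s + p0
xbar, ubar = (0.0, 1.0, -1.0), -1.0

# Gains placing the eigenvalues of J + g*k at -1, -2, -3,
# where J = f'(xbar) + g'(xbar)*ubar and g = [1; 1; 1].
k = (-25.0/11.0, 10.0/11.0, -51.0/11.0)

def simulate(x, dt=1e-3, T=20.0):
    """Euler simulation of (1.4) under u = ubar + k.(x - xbar)."""
    x1, x2, x3 = x
    for _ in range(int(T / dt)):
        u = ubar + k[0]*(x1 - xbar[0]) + k[1]*(x2 - xbar[1]) + k[2]*(x3 - xbar[2])
        dx1 = x2**2 + u
        dx2 = x3**2 + u
        dx3 = p(x1) + u
        x1, x2, x3 = x1 + dt*dx1, x2 + dt*dx2, x3 + dt*dx3
    return x1, x2, x3

x = simulate((0.02, 1.02, -0.98))   # small perturbation of xbar
print(max(abs(x[i] - xbar[i]) for i in range(3)))  # decays towards 0
```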
(c) Find a C∞ function p : R → R for which system (1.4) is globally full state feedback linearizable, or prove that such p(·) does not exist.
Such p(·) does not exist. Indeed, otherwise the vectors
  [1; 1; 1] and [2x2; 2x3; p'(x1)]
would have to be linearly independent for all real x1, x2, x3, which is impossible for
  x2 = x3 = 0.5p'(x1).