
Nonlinear Control and Servo Systems

Lecture 1
Nonlinear Phenomena and Stability Theory

Nonlinear phenomena
  finite escape time
  peaking

Stability theory
  Lyapunov theory revisited
  exponential stability
  quadratic stability
  time-varying systems
  invariant sets
  center manifold theorem

Existence problems of solutions


Finite Escape Time

Example: The differential equation

  dx/dt = x^2,   x(0) = x_0

has the solution

  x(t) = x_0 / (1 - x_0 t),   0 <= t < 1/x_0

Finite escape time:

  t_f = 1/x_0

[Figure: the solution x(t) growing without bound as t approaches the finite escape time t_f.]
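As a quick numerical illustration (a sketch; the step size and the choice x_0 = 1 are this example's, not from the slides), forward-Euler integration of dx/dt = x^2 tracks the closed-form solution and blows up near t_f = 1/x_0:

```python
def x_exact(t, x0=1.0):
    # closed-form solution of dx/dt = x^2: valid only for t < 1/x0
    return x0 / (1.0 - x0 * t)

def euler(x0=1.0, dt=1e-4, t_end=0.99):
    # forward-Euler integration of dx/dt = x^2 up to just before t_f = 1/x0
    x, t = x0, 0.0
    while t < t_end:
        x += dt * x * x
        t += dt
    return x

# near the escape time t_f = 1 the solution has grown by two orders of magnitude
assert abs(x_exact(0.99) - 100.0) < 1e-6
assert euler() > 50.0  # Euler underestimates the blow-up but still grows fast
```

No linear system can do this: for dx/dt = ax the solution exists for all t, which is why finite escape time is listed as a genuinely nonlinear phenomenon.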

The peaking phenomenon

Example: Controlled linear system with a right-half-plane zero. Feedback can change the location of the poles but not the location of the zero (unstable pole-zero cancellation is not allowed).

  G_cl(s) = omega_o^2 (1 - s) / (s^2 + 2 omega_o s + omega_o^2)    (1)

The peaking phenomenon cont.

A step response reveals a transient which grows in amplitude for faster closed-loop poles s = -omega_o; see the figure on the next slide.

[Figure: Step responses for the system in Eq. (1), omega_o = 1, 2, and 5. Faster poles give shorter settling times, but the transients grow significantly in amplitude, so-called peaking.]

The peaking phenomenon cont.


Note! Linear case: Performance may be severely deteriorated by peaking, but stability is still guaranteed. Nonlinear case: Instability and even finite escape time solutions may occur.

The peaking phenomenon cont.


We will come back to the peaking phenomenon for
  cascaded systems [Kokotovic & Sussman 91]
  observers [observer backstepping]

What bandwidth constraints does a non-minimum phase zero impose for linear systems? See e.g. [Freudenberg and Looze, 1985; Åström, 1997; Goodwin and Seron, 1997]

Alexandr Mihailovich Lyapunov (1857-1918)

Master thesis: "On the stability of ellipsoidal forms of equilibrium of rotating fluids", St. Petersburg University, 1884.
Doctoral thesis: "The general problem of the stability of motion", 1892.

Lyapunov formalized the idea: If the total energy is dissipated, the system must be stable.

Main benefit: By looking at an energy-like function (a so-called Lyapunov function), we might conclude that a system is stable or asymptotically stable without solving the nonlinear differential equation.

This trades the difficulty of solving the differential equation for the difficulty of finding a Lyapunov function.

How to find a Lyapunov function? Many cases are covered in [Rouche et al., 1977].

Stability Definitions

An equilibrium point x = 0 of dx/dt = f(x) is

  locally stable, if for every R > 0 there exists r > 0, such that
    ||x(0)|| < r  =>  ||x(t)|| < R,   t >= 0

  locally asymptotically stable, if locally stable and
    ||x(0)|| < r  =>  lim_{t->inf} x(t) = 0

  globally asymptotically stable, if asymptotically stable for all x(0) in R^n.

Lyapunov Theorem for Local Stability

Theorem: Let dx/dt = f(x), f(0) = 0, and 0 in Omega, a subset of R^n. Assume that V : Omega -> R is a C^1 function. If

  V(0) = 0
  V(x) > 0 for all x in Omega, x != 0
  dV(x)/dt <= 0 along all trajectories in Omega

then x = 0 is locally stable. Furthermore, if also

  dV(x)/dt < 0 for all x in Omega, x != 0

then x = 0 is locally asymptotically stable.

Proof: Read the proof in [Khalil] or [Slotine].


Lyapunov Functions (≈ Energy Functions)

A Lyapunov function fulfills V(x_0) = 0, V(x) > 0 for x != x_0, and

  dV(x)/dt = (dV/dx) dx/dt = (dV/dx) f(x) <= 0

[Figure: level curves V = constant in the (x_1, x_2)-plane.]

Lyapunov Theorem for Global Stability

Theorem: Let dx/dt = f(x) and f(0) = 0. Assume that V : R^n -> R is a C^1 function. If

  V(0) = 0
  V(x) > 0 for all x != 0
  dV(x)/dt < 0 for all x != 0
  V(x) -> infinity as ||x|| -> infinity   (radially unbounded)

then x = 0 is globally asymptotically stable.

Note! There can then be only one equilibrium.

Radial Unboundedness is Necessary

If the condition V(x) -> infinity as ||x|| -> infinity is not fulfilled, then global stability cannot be guaranteed.

Example: Assume V(x) = x_1^2/(1 + x_1^2) + x_2^2 is a Lyapunov function for a system. One can then have ||x|| -> infinity even though dV(x)/dt < 0, since V is bounded along the x_1-axis.

Example: saturated control

Exercise - 5 min

Find a bounded control signal u = sat(v), which globally stabilizes the system

  dx_1/dt = -x_1 + x_2
  dx_2/dt = u                        (2)
  u = sat(v(x_1, x_2))

What is the problem with using the standard candidate

  V_1 = x_1^2/2 + x_2^2/2 ?

Hint: Use the Lyapunov function candidate

  V_2 = log(1 + x_1^2) + alpha x_2^2

for some appropriate value of alpha.

[Figure: contour plot V_2(x) = C in the (x_1, x_2)-plane.]



Lyapunov Function for Linear System

Theorem: The eigenvalues lambda_i of A satisfy Re lambda_i < 0 if and only if: for every positive definite Q = Q^T there exists a positive definite P = P^T such that

  P A + A^T P = -Q

Proof of (existence of Q, P) => Re lambda_i(A) < 0: Consider dx/dt = A x and the Lyapunov function candidate V(x) = x^T P x. Then

  dV/dt = (dx/dt)^T P x + x^T P (dx/dt) = x^T (P A + A^T P) x = -x^T Q x < 0,

so dx/dt = A x is asymptotically stable.

Proof of Re lambda_i(A) < 0 => (existence of Q, P): Choose

  P = integral_0^inf e^{A^T t} Q e^{A t} dt

Linear Systems cont.

Discrete time linear system:

  x(k+1) = Phi x(k)

The following statements are equivalent:
  x = 0 is asymptotically stable
  |lambda_i| < 1 for all eigenvalues lambda_i of Phi
  Given any Q = Q^T > 0 there exists P = P^T > 0, which is the unique solution of the discrete Lyapunov equation

    Phi^T P Phi - P = -Q
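For a 2x2 system, the continuous-time Lyapunov equation P A + A^T P = -Q is just three linear equations in (p_11, p_12, p_22), so the stability test can be sketched in a few lines of plain Python (the helper name lyap_2x2 and the use of Cramer's rule are this sketch's choices, not from the slides):

```python
def lyap_2x2(A, Q):
    """Solve P*A + A^T*P = -Q for symmetric 2x2 P, given as lists of lists."""
    (a11, a12), (a21, a22) = A
    (q11, q12), (_, q22) = Q
    # Writing P*A + A^T*P = -Q componentwise gives a 3x3 linear system:
    #   (1,1): 2*a11*p11 + 2*a21*p12              = -q11
    #   (1,2): a12*p11 + (a11+a22)*p12 + a21*p22  = -q12
    #   (2,2): 2*a12*p12 + 2*a22*p22              = -q22
    M = [[2 * a11, 2 * a21, 0],
         [a12, a11 + a22, a21],
         [0, 2 * a12, 2 * a22]]
    b = [-q11, -q12, -q22]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    sol = []
    for j in range(3):  # Cramer's rule, column j replaced by b
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = b[i]
        sol.append(det3(Mj) / d)
    p11, p12, p22 = sol
    return [[p11, p12], [p12, p22]]

# A1 from the Matlab session later in these notes; Q = I
A1 = [[-5, -4], [-1, -2]]
P = lyap_2x2(A1, [[1, 0], [0, 1]])
# P positive definite  =>  A1 is Hurwitz
assert P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] ** 2 > 0
```

The same three-equation pattern generalizes to n(n+1)/2 unknowns for n x n systems, which is what `lyap`-style solvers automate.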

Exponential Stability

The equilibrium point x = 0 of the system dx/dt = f(x) is said to be exponentially stable if there exist c, k, lambda > 0 such that for every t >= t_0 >= 0 and ||x(t_0)|| <= c one has

  ||x(t)|| <= k ||x(t_0)|| e^{-lambda (t - t_0)}

It is globally exponentially stable if the condition holds for arbitrary initial states.

For linear systems, asymptotic stability implies global exponential stability.

Comparison functions - class K

The following two function classes are often used as lower or upper bounds on growth conditions for Lyapunov function candidates and their derivatives.

DEFINITION 1 - CLASS K FUNCTIONS [KHALIL, 1996]
A continuous function alpha : [0, a) -> R+ is said to belong to class K if it is strictly increasing and alpha(0) = 0. It is said to belong to class K_inf if a = infinity and lim_{r->inf} alpha(r) = infinity.

A common choice is alpha_i(||x||) = k_i ||x||^c, with k_i, c > 0.


Comparison functions - class KL

DEFINITION 2 - CLASS KL FUNCTIONS [KHALIL, 1996]
A continuous function beta : [0, a) x R+ -> R+ is said to belong to class KL if for each fixed s the mapping beta(r, s) is a class K function with respect to r, and for each fixed r the mapping beta(r, s) is decreasing with respect to s with lim_{s->inf} beta(r, s) = 0. The function beta(.,.) is said to belong to class KL_inf if, in addition, for each fixed s, beta(r, s) belongs to class K_inf with respect to r.

Lyapunov Theorem for Exponential Stability

Let V : R^n -> R be a continuously differentiable function and let k_i > 0, c > 0 be numbers such that

  k_1 ||x||^c <= V(x) <= k_2 ||x||^c
  (dV/dx) f(t, x) <= -k_3 ||x||^c

for t >= 0, ||x|| <= r. Then x = 0 is exponentially stable.

For exponential stability, beta(||x||, t) = .... (fill in)

If r is arbitrary, then x = 0 is globally exponentially stable.

Proof

  dV/dt = (dV/dx) f(t, x) <= -k_3 ||x||^c <= -(k_3/k_2) V

  V(x(t)) <= V(x_0) e^{-(k_3/k_2)(t - t_0)} <= k_2 ||x_0||^c e^{-(k_3/k_2)(t - t_0)}

  ||x(t)|| <= (V/k_1)^{1/c} <= (k_2/k_1)^{1/c} ||x_0|| e^{-(k_3/k_2)(t - t_0)/c}

Quadratic Stability

Suppose there exists a P > 0 such that

  (A + B Delta_i C)^T P + P (A + B Delta_i C) < 0   for all i

Then the system

  dx/dt = [A + B Delta(x, t) C] x

is globally exponentially stable for all functions Delta satisfying

  Delta(x, t) in conv{Delta_1, ..., Delta_m}   for all x and t


Aircraft Example

[Block diagram: aircraft control loop with gains K_1 and K_2, error signals e_1 and e_2, and a limited (saturated) actuator.]

Piecewise linear system

Consider the nonlinear differential equation

  dx/dt = A_1 x   if x_1 < 0
  dx/dt = A_2 x   if x_1 >= 0

with x = (x_1, x_2). If the inequalities

  A_1^T P + P A_1 < 0
  A_2^T P + P A_2 < 0
  P > 0

can be solved simultaneously for the matrix P, then stability is proved by the Lyapunov function x^T P x.   (Branicky, 1993)

Matlab Session

Copy /home/kursolin/matlab/lmiinit.m to the current directory or download and install the IQCbeta toolbox from http://www.control.lth.se/cykao.

>> lmiinit
>> A1=[-5 -4;-1 -2];
>> A2=[-2 -1; 2 -2];
>> p=symmetric(2);
>> p>0;
>> A1'*p+p*A1<0;
>> A2'*p+p*A2<0;
>> lmi_mincx_tbx
>> P=value(p)

P =
    0.0749   -0.0257
   -0.0257    0.1580
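The returned P can be checked by hand: a symmetric 2x2 matrix S is negative definite iff S_11 < 0 and det S > 0. A plain-Python sketch of that check on the toolbox output above:

```python
def lyap_form(A, P):
    # returns S = A^T P + P A for 2x2 lists-of-lists
    n = 2
    AtP = [[sum(A[k][i] * P[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    PA = [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return [[AtP[i][j] + PA[i][j] for j in range(n)] for i in range(n)]

def is_negative_definite(S):
    # 2x2 symmetric case: S < 0 iff S[0][0] < 0 and det(S) > 0
    return S[0][0] < 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

P = [[0.0749, -0.0257], [-0.0257, 0.1580]]   # value printed by the LMI solver
A1 = [[-5, -4], [-1, -2]]
A2 = [[-2, -1], [2, -2]]
# both Lyapunov inequalities hold, so x^T P x is a common Lyapunov function
assert is_negative_definite(lyap_form(A1, P))
assert is_negative_definite(lyap_form(A2, P))
```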

Trajectory Stability Theorem

Let f be differentiable along the trajectory x*(t) of the system

  dx/dt = f(x, t)

Then, under some regularity conditions on x*(t), exponential stability of the linear system

  dz/dt = A(t) z,   A(t) = (df/dx)(x*(t), t)

implies that ||x(t) - x*(t)|| decays exponentially for all x in a neighborhood of x*.

Time-varying systems

Note that solutions of autonomous systems only depend on (t - t_0), while solutions of non-autonomous systems may depend on t_0 and t independently.

A second order autonomous system can never have nonsimply intersecting trajectories (a limit cycle can never be a "figure eight").

Stability definitions for time-varying systems

An equilibrium point x = 0 of dx/dt = f(x, t) is

  locally stable at t_0, if for every R > 0 there exists r = r(R, t_0) > 0, such that
    ||x(t_0)|| < r  =>  ||x(t)|| < R,   t >= t_0

  locally asymptotically stable at time t_0, if locally stable and
    ||x(t_0)|| < r(t_0)  =>  lim_{t->inf} x(t) = 0

  globally asymptotically stable, if asymptotically stable for all x(t_0) in R^n.

A system is said to be uniformly stable if r can be chosen independently of t_0, i.e., r = r(R).

Example of non-uniform convergence [Slotine, p. 105 / Khalil, p. 134]: Consider

  dx/dt = -x/(1 + t)

which has the solution

  x(t) = x(t_0) (1 + t_0)/(1 + t),   so   |x(t)| <= |x(t_0)|,   t >= t_0

The solution x(t) -> 0, but we cannot get a decay rate estimate independent of t_0.

Time-varying Lyapunov Functions

Let V : R^{n+1} -> R be a continuously differentiable function and let k_i > 0, c > 0 be numbers such that

  k_1 ||x||^c <= V(t, x) <= k_2 ||x||^c
  (dV/dt)(t, x) + (dV/dx)(t, x) f(t, x) <= -k_3 ||x||^c

for t >= 0, ||x|| <= r. Then x = 0 is exponentially stable.

If r is arbitrary, then x = 0 is globally exponentially stable.
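The non-uniformity in the example dx/dt = -x/(1+t) is easy to see numerically: from the closed-form solution x(t) = x(t_0)(1 + t_0)/(1 + t), the time needed to halve |x| is 1 + t_0, which grows without bound in t_0 (a small pure-Python sketch; the halving-time measure is this sketch's choice):

```python
def halving_time(t0):
    # solution of dx/dt = -x/(1+t):  x(t) = x(t0) * (1 + t0) / (1 + t)
    # solve (1 + t0)/(1 + t) = 1/2 for t and return the elapsed time t - t0
    t = 1.0 + 2.0 * t0
    return t - t0

# the later we start, the longer the same relative decay takes:
assert halving_time(0.0) == 1.0
assert halving_time(9.0) == 10.0
# so no estimate k*exp(-lambda*(t - t0)) can hold uniformly in t0
```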



Time-varying Linear Systems

The following conditions are equivalent:

  The system dx/dt = A(t) x(t) is exponentially stable.

  There exists a bounded symmetric matrix function P(t) > 0 such that

    -I >= dP/dt(t) + A(t)^T P(t) + P(t) A(t)

Proof

Given the second condition, let V(x, t) = x^T P(t) x. Then

  dV/dt = x^T (dP/dt + A^T P + P A) x <= -||x||^2

so exponential stability follows from the Lyapunov theorem.

Conversely, given exponential stability, let Phi(t, s) be the transition matrix for the system. Then the matrix

  P(t) = integral_t^inf Phi(s, t)^T Phi(s, t) ds

is well-defined and satisfies

  -I = dP/dt(t) + A(t)^T P(t) + P(t) A(t)

for all t.

Lyapunov's first theorem revisited

Suppose the time-varying system

  dx/dt = f(x, t)

has an equilibrium x = 0, where d^2 f/dx^2 is continuous and uniformly bounded as a function of t. Then the equilibrium is exponentially stable provided that this is true for the linearization dx/dt = A(t) x(t), where

  A(t) = (df/dx)(0, t)

Proof

The system can be written

  dx/dt = f(x, t) = A(t) x(t) + o(x, t)

where ||o(x, t)||/||x|| -> 0 uniformly as x -> 0. Choose P(t) > 0 with

  dP/dt + A(t)^T P(t) + P(t) A(t) <= -I

and let V(x) = x^T P(t) x. Then

  dV/dt = x^T (dP/dt + A^T P + P A) x + 2 x^T P(t) o(x, t) < -||x||^2 / 2

in a neighborhood of x = 0. Hence Lyapunov's theorem proves exponential stability.



Proof of Trajectory Stability Theorem

Let z(t) = x(t) - x*(t). Then z = 0 is an equilibrium of the system

  dz/dt = f(z + x*, t) - f(x*, t)

The desired implication follows by the time-varying version of Lyapunov's first theorem.

Lyapunov's Linearization Method revisited

Recall from Lecture 2 (undergraduate course):

Theorem: Consider

  dx/dt = f(x)

Assume that x = 0 is an equilibrium point and that

  dx/dt = A x + g(x)

is a linearization.

(1) If Re lambda_i(A) < 0 for all i, then x = 0 is locally asymptotically stable.
(2) If there exists i such that Re lambda_i(A) > 0, then x = 0 is unstable.

Proof of (1) in Lyapunov's Linearization Method

Lyapunov function candidate V(x) = x^T P x, with P A + A^T P = -Q. Then V(0) = 0, V(x) > 0 for x != 0, and

  dV/dt = x^T P f(x) + f(x)^T P x
        = x^T P [A x + g(x)] + [x^T A^T + g(x)^T] P x
        = x^T (P A + A^T P) x + 2 x^T P g(x)
        = -x^T Q x + 2 x^T P g(x)

Now x^T Q x >= lambda_min(Q) ||x||^2, and for every gamma > 0 there exists r > 0 such that

  ||g(x)|| < gamma ||x||,   ||x|| < r

Thus, choosing gamma sufficiently small gives

  dV/dt <= -[lambda_min(Q) - 2 gamma lambda_max(P)] ||x||^2 < 0

First glimpse of the Center Manifold Theorem

What can we do if the linearization A = df/dx has zeros on the imaginary axis? Assume

  dz_1/dt = A_0 z_1 + f_0(z_1, z_2)
  dz_2/dt = A_s z_2 + f_s(z_1, z_2)

  A_s : asymptotically stable
  A_0 : eigenvalues on the imaginary axis
  f_0 and f_s : second and higher order terms

Center Manifold Theorem: Assume z = 0 is an equilibrium point. For every k >= 2 there exists a C^k mapping phi such that phi(0) = 0 and dphi(0) = 0 and the surface

  z_2 = phi(z_1)

is invariant under the dynamics above.

Proof idea: Construct a contraction with the center manifold as fixed point.

Usage

1) Determine z_2 = phi(z_1), at least approximately.
2) The local stability for the entire system can be proved to be the same as for the dynamics restricted to a center manifold:

  dz_1/dt = A_0 z_1 + f_0(z_1, phi(z_1))

An instability result - Chetaev's Theorem

Idea: show that solutions arbitrarily close to the origin have to leave a neighborhood of the origin.

Let f(0) = 0 and let V : D -> R be a continuously differentiable function on a neighborhood D of x = 0, such that V(0) = 0. Suppose that the set

  U = { x in D : ||x|| < r, V(x) > 0 }

is nonempty for every r > 0. If dV/dt > 0 in U, then x = 0 is unstable.


Invariant Sets

Definition: A set M is called invariant if, for the system dx/dt = f(x), x(0) in M implies that x(t) in M for all t >= 0.

Invariant Set Theorem

Theorem: Let Omega, a subset of R^n, be a bounded and closed set that is invariant with respect to

  dx/dt = f(x).

Let V : R^n -> R be a radially unbounded C^1 function such that dV(x)/dt <= 0 for x in Omega. Let E be the set of points in Omega where dV(x)/dt = 0. If M is the largest invariant set in E, then every solution with x(0) in Omega approaches M as t -> infinity. (Proof on p. 73.)

Åström, K. J. (1997): "Limitations on control system performance." In Proceedings of the European Control Conference (ECC'97), vol. 1. Brussels, Belgium. TU-E-E4.

Freudenberg, J. and D. Looze (1985): "Right half plane poles and zeros and design tradeoffs in feedback systems." IEEE Transactions on Automatic Control, 30, pp. 555-565.

Goodwin, G. and M. Seron (1997): "Fundamental design tradeoffs in filtering, prediction, and control." IEEE Transactions on Automatic Control, 42:9, pp. 1240-1251.

Khalil, H. (1996): Nonlinear Systems, 2nd edition. Prentice Hall.

Rouche, N., P. Habets, and M. Laloy (1977): Stability Theory by Liapunov's Direct Method. Springer-Verlag, New York, Berlin.


Nonlinear Control and Servo Systems

Lecture 2
Lyapunov theory cont'd
Storage function and dissipation
Absolute stability
The Kalman-Yakubovich-Popov lemma

Invariant Sets (recap)

Definition: A set M is called invariant if, for the system dx/dt = f(x), x(0) in M implies that x(t) in M for all t >= 0.

Invariant Set Theorem (recap)

Theorem: Let Omega, a subset of R^n, be a bounded and closed set that is invariant with respect to dx/dt = f(x). Let V : R^n -> R be a radially unbounded C^1 function such that dV(x)/dt <= 0 for x in Omega. Let E be the set of points in Omega where dV(x)/dt = 0. If M is the largest invariant set in E, then every solution with x(0) in Omega approaches M as t -> infinity. (See proof in textbook.)

Invariant sets - nonautonomous systems

Problems with invariant sets for nonautonomous systems:

  dV/dt = (dV/dt)(t, x) + (dV/dx) f(t, x)

depends on both t and x.

Barbalat's Lemma - nonautonomous systems

Let phi : R -> R be a uniformly continuous function on [0, infinity). Suppose that

  lim_{t->inf} integral_0^t phi(tau) dtau

exists and is finite. Then

  phi(t) -> 0 as t -> infinity

Common tool in adaptive control.

Nonautonomous systems - cont'd

[Khalil, Theorem 4.4] Assume there exists V(t, x) such that

  W_1(x) <= V(t, x) <= W_2(x)

with W_1 positive definite and W_2 decrescent, and

  dV/dt = (dV/dt)(t, x) + (dV/dx) f(t, x) <= -W_3(x)

where W_3 is a continuous positive semi-definite function. Then solutions to dx/dt = f(t, x) starting in x(t_0) in { x in B_r ... } are bounded and satisfy

  W_3(x(t)) -> 0 as t -> infinity

See Example 4.23 in Khalil (2nd ed).

An instability result - Chetaev's Theorem (recap)

Let f(0) = 0 and let V : D -> R be a continuously differentiable function on a neighborhood D of x = 0, such that V(0) = 0. Suppose that the set

  U = { x in D : ||x|| < r, V(x) > 0 }

is nonempty for every r > 0. If dV/dt > 0 in U, then x = 0 is unstable.

[Figure: the region U near the origin where V > 0 and dV/dt > 0.]

Dissipativity

Consider a nonlinear system

  dx/dt = f(x(t), u(t), t)
  y(t) = h(x(t), u(t), t),   t >= 0

and a locally integrable function

  r(t) = r(u(t), y(t), t).

The system is said to be dissipative with respect to the supply rate r if there exists a storage function S(t, x) such that for all t_0, t_1 and inputs u on [t_0, t_1]

  S(t_0, x(t_0)) + integral_{t_0}^{t_1} r(t) dt >= S(t_1, x(t_1)) >= 0

Example - Capacitor

A capacitor

  i = C du/dt

is dissipative with respect to the supply rate r(t) = i(t) u(t). A storage function is

  S(u) = C u^2 / 2

In fact,

  C u(t_0)^2 / 2 + integral_{t_0}^{t_1} i(t) u(t) dt = C u(t_1)^2 / 2

Example - Inductor

An inductor

  u = L di/dt

is dissipative with respect to the supply rate r(t) = i(t) u(t). A storage function is

  S(i) = L i^2 / 2

In fact,

  L i(t_0)^2 / 2 + integral_{t_0}^{t_1} i(t) u(t) dt = L i(t_1)^2 / 2
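The capacitor identity above can be checked numerically: with u(t) = sin t and i = C du/dt = C cos t, the trapezoid-rule integral of the supply rate i*u over [t_0, t_1] should match the change in stored energy C u^2/2 (a sketch; the waveform and step size are this example's choices):

```python
import math

def energy_balance(C=2.0, t0=0.0, t1=1.5, n=200000):
    # u(t) = sin t  =>  i(t) = C*cos t; integrate the supply rate i*u by trapezoid rule
    h = (t1 - t0) / n
    supply = 0.0
    for k in range(n):
        a, b = t0 + k * h, t0 + (k + 1) * h
        fa = C * math.cos(a) * math.sin(a)
        fb = C * math.cos(b) * math.sin(b)
        supply += 0.5 * (fa + fb) * h
    energy_change = 0.5 * C * (math.sin(t1) ** 2 - math.sin(t0) ** 2)
    return supply, energy_change

supply, delta_S = energy_balance()
# the dissipation inequality holds with equality for an ideal (lossless) capacitor
assert abs(supply - delta_S) < 1e-6
```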

Memoryless Nonlinearity

The memoryless nonlinearity w = phi(v, t) with sector condition

  alpha <= phi(v, t)/v <= beta,   t >= 0, v != 0

is dissipative with respect to the supply rate

  r(t) = -[w(t) - alpha v(t)][w(t) - beta v(t)]

with storage function S(t, x) = 0.

Linear System Dissipativity

The linear system

  dx/dt = A x(t) + B u(t),   t >= 0

is dissipative with respect to the quadratic supply rate

  r(t) = [x; u]^T M [x; u]

and storage function x^T P x if and only if

  -M + [ A^T P + P A   P B ;  B^T P   0 ] <= 0

Storage function as Lyapunov function

For a system without input, suppose that

  r(y) <= -k ||x||^c

for some k > 0. Then the dissipation inequality implies

  S(t_0, x(t_0)) - integral_{t_0}^{t_1} k ||x(t)||^c dt >= S(t_1, x(t_1))

which is an integrated form of the Lyapunov inequality

  d/dt S(t, x(t)) <= -k ||x||^c

Interconnection of dissipative systems

If the two systems

  dx_1/dt = f_1(x_1, u_1)
  dx_2/dt = f_2(x_2, u_2)

are dissipative with supply rates r_1(u_1, x_1) and r_2(u_2, x_2) and storage functions S_1(x_1), S_2(x_2), then their interconnection

  dx_1/dt = f_1(x_1, h_2(x_2))
  dx_2/dt = f_2(x_2, h_1(x_1))

is dissipative with respect to every supply rate of the form

  tau_1 r_1(h_2(x_2), x_1) + tau_2 r_2(h_1(x_1), x_2),   tau_1, tau_2 >= 0

The corresponding storage function is

  tau_1 S_1(x_1) + tau_2 S_2(x_2)

Nonlinear Control Theory 2003

Lecture 2B (updated)

Absolute Stability
Kalman-Yakubovich-Popov Lemma
Circle Criterion
Popov Criterion
pp. 237-268 + extra material on the K-Y-P Lemma

Global Sector Condition

Let psi(t, y) in R be piecewise continuous in t in [0, infinity) and locally Lipschitz in y in R. Assume that psi satisfies the global sector condition

  alpha <= psi(t, y)/y <= beta,   t >= 0, y != 0    (1)

Absolute Stability

The system

  dx/dt = A x + B u,   y = C x,   u = -psi(t, y)    (2)

with sector condition (1) is called absolutely stable if the origin is globally uniformly asymptotically stable for any nonlinearity psi satisfying (1).

The Circle Criterion

The system (2) with sector condition (1) is absolutely stable if the origin is asymptotically stable for psi(t, y) = alpha y and the Nyquist plot of

  C(j omega I - A)^{-1} B + D

does not intersect the closed disc with diameter [-1/alpha, -1/beta].
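The disc condition is easy to test on a frequency grid. The plant G(s) = 1/(s+1)^2 and the sector [alpha, beta] = [1, 2] below are illustrative choices for this sketch, not taken from the slides; the test checks that every Nyquist point keeps a positive distance to the closed disc with diameter [-1/alpha, -1/beta]:

```python
# sketch: circle-criterion disc test for G(s) = 1/(s+1)^2, sector [1, 2]
alpha, beta = 1.0, 2.0
center = (-1.0 / alpha + -1.0 / beta) / 2.0   # -0.75
radius = (1.0 / alpha - 1.0 / beta) / 2.0     # 0.25

def G(s):
    return 1.0 / ((s + 1.0) ** 2)

# logarithmic frequency grid from 1e-3 to 1e3 rad/s
ws = [10.0 ** (-3 + 6 * k / 2000) for k in range(2001)]
min_dist = min(abs(G(1j * w) - center) - radius for w in ws)

# the Nyquist curve stays clear of the disc, so the criterion applies
assert min_dist > 0.0
```

For this plant Re G(j omega) never drops below -1/8, so the curve cannot reach the disc, whose rightmost point is at -1/2.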

Loop Transformation

[Block diagram: the loop with G(s) and the sector nonlinearity is transformed by inserting a constant gain K in parallel feedback and feedforward paths, shifting the sector.]

Common choices: K = alpha or K = (alpha + beta)/2.

Special Case: Positivity

Let M(j omega) = C(j omega I - A)^{-1} B + D, where A is Hurwitz. The system

  dx/dt = A x + B u
  y = C x + D u
  u = -psi(t, y),   t >= 0

with sector condition

  psi(t, y)/y >= 0,   t >= 0, y != 0

is absolutely stable if

  M(j omega) + M(j omega)* > 0,   omega in [0, infinity)

Note: For SISO systems this means that the Nyquist curve lies strictly in the right half plane.
Proof
Set V ( x) = x T Px,
V = 2x T P x P = PT > 0 x P 0 x

The Kalman-Yakubovich-Popov Lemma


Exists in numerous versions Idea: Frequency dependence is replaced by matrix equations/inequalities or vice versa + 2 y

= 2x T P [ A = 2 [ xT

B]

2x T P [ A
A B

B] x

C D 0 I By the Kalman-Yakubovich-Popov Lemma, the inequality M ( j ) + M ( j ) > 0 guarantees that P can be chosen to make the upper bound for V strictly negative for all ( x, ) = (0, 0). Stability by Lyapunovs theorem.

The K-Y-P Lemma, version I

Let M(j omega) = C(j omega I - A)^{-1} B + D, where A is Hurwitz. Then the following statements are equivalent.

(i) M(j omega) + M(j omega)* > 0 for all omega in [0, infinity]

(ii) There exists P = P^T > 0 such that

  [ A^T P + P A   P B ;  B^T P   0 ] - [ 0   C^T ;  C   D + D^T ] < 0

Compare Khalil (5.10)-(5.12): M is strictly positive real if and only if there exist P, W, L, epsilon > 0 such that

  P A + A^T P = -L^T L - epsilon P
  P B = C^T - L^T W
  W^T W = D + D^T

Mini-version a la [Slotine & Li]: Let

  dx/dt = A x + b u,   A Hurwitz (i.e., Re lambda_i(A) < 0),   y = c x

The following statements are equivalent:

  Re{ c(j omega I - A)^{-1} b } > 0,   omega in [0, infinity)

  There exist P = P^T > 0 and Q = Q^T > 0 such that

    A^T P + P A = -Q
    P b = c^T

The K-Y-P Lemma, version II

Let Phi(j omega) = (j omega I - A)^{-1} B. The following two statements are equivalent.

(i) [ Phi(j omega) ; I ]* M [ Phi(j omega) ; I ] <= 0 for all omega in R with det(j omega I - A) != 0.

(ii) There exists a nonzero pair (p, P), with p >= 0 and P = P^T, such that

  p M + [ A^T P + P A   P B ;  B^T P   0 ] <= 0

The corresponding equivalence for strict inequalities holds with p = 1.

Some Notation Helps

Introduce

  M = [ A  B ],   Mb = [ I  0 ],
  N = [ C  D ],   Nb = [ 0  I ].

Lemma 1: Given y, z in C^n, there exists an omega in [0, infinity] such that y = j omega z, if and only if

  y z* + z y* = 0.

Proof: Necessity is obvious. For sufficiency, assume that y z* + z y* = 0. Then

  |v*(y + z)|^2 - |v*(y - z)|^2 = 2 v*(y z* + z y*) v = 0

for every v. Hence y = lambda z for some lambda in C union {infinity}. The equality y z* + z y* = 0 gives that lambda is purely imaginary.

Then

  y = [ C(j omega I - A)^{-1} B + D ] u

if and only if

  [ y ; u ] = [ N ; Nb ] w

for some w in C^{n+m} satisfying M w = j omega Mb w.

Proof of the K-Y-P Lemma

See handout (Rantzer). (i) and (ii) can be connected by the following sequence of equivalent statements:

(a) w*(Nb* N + N* Nb) w < 0 for all w != 0 satisfying M w = j omega Mb w with omega in R

(b) Theta and P_ray do not intersect, where

  Theta = { ( w*(Nb* N + N* Nb) w,  M w w* Mb* + Mb w w* M* ) : |w| = 1 },
  P_ray = { (r, 0) : r > 0 }

(c) (conv Theta) and P_ray do not intersect.

(d) There exists a hyperplane separating Theta from P_ray, i.e. a P = P* such that for all w != 0

  0 > w*( Nb* N + N* Nb + M* P Mb + Mb* P M ) w

The Popov Criterion

Let psi(y) in R be locally Lipschitz in y in R, time-invariant, and satisfying the sector condition

  0 <= psi(v)/v <= k

Let G(i omega) = C(i omega I - A)^{-1} B with A Hurwitz. If there exists eta in R such that

  Re [(1 + i eta omega) G(i omega)] > -1/k    (3)

then the differential equation

  dx/dt = A x(t) - B psi(C x(t))

is exponentially stable.

[Figure: Popov plot, omega * Im G(i omega) versus Re G(i omega); the plot must lie to the right of a line through -1/k with slope 1/eta.]

Popov proof I

Set

  V(x) = x^T P x + 2 eta integral_0^{Cx} psi(sigma) d sigma

where P is an n x n positive definite matrix. Then

  dV/dt = 2 (x^T P + eta psi C) dx/dt
        = 2 (x^T P + eta psi C) [A x - B psi]

By the K-Y-P Lemma there is a P that makes the upper bound for dV/dt strictly negative for all (x, psi) != (0, 0).

Popov proof II

For eta >= 0, V > 0 is obvious for x != 0. Stability in the linear case gives dV/dt <= 0 along trajectories, so V must be positive also for eta < 0.

Stability for the nonlinear loop follows from Lyapunov's theorem.
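A quick way to check condition (3) for a concrete loop is to evaluate it on a frequency grid. The plant G(s) = 1/((s+1)(s+2)) and the multiplier eta = 1 below are this sketch's choices, not from the slides; with them Re[(1 + i eta omega)G(i omega)] stays positive, so (3) holds for every sector gain k > 0:

```python
# sketch: verify the Popov inequality Re[(1 + i*eta*w) G(iw)] > -1/k on a grid
eta = 1.0

def G(s):
    return 1.0 / ((s + 1.0) * (s + 2.0))

ws = [10.0 ** (-3 + 6 * j / 2000) for j in range(2001)]
popov = [((1.0 + 1j * eta * w) * G(1j * w)).real for w in ws]

# the Popov plot stays strictly to the right of Re = 0,
# so Re[...] > -1/k for any k > 0
assert min(popov) > 0.0
```

For this plant one can verify by hand that Re[(1 + i omega)G(i omega)] = (2 + 2 omega^2)/|(2 - omega^2) + 3 i omega|^2 > 0, which the grid check confirms.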

The Kalman-Yakubovich-Popov lemma, version III

Given A in R^{n x n}, B in R^{n x m}, M = M^T in R^{(n+m) x (n+m)}, with i omega I - A nonsingular for omega in R and (A, B) controllable, the following two statements are equivalent.

(i) [ (i omega I - A)^{-1} B ; I ]* M [ (i omega I - A)^{-1} B ; I ] <= 0   for all omega in [0, infinity)

(ii) There exists a matrix P in R^{n x n} such that P = P^T and

  M + [ A^T P + P A   P B ;  B^T P   0 ] <= 0

Proof techniques

(ii) => (i): simple — multiply from the right by [ (i omega I - A)^{-1} B ; I ] and from the left by its conjugate transpose.

(i) => (ii): difficult. Approaches:
  Spectral factorization (Anderson)
  Linear quadratic optimization (Yakubovich)
  Find (1, P) as a separating hyperplane between the set

    { ( [x; u]* M [x; u],  x(Ax + Bu)* + (Ax + Bu)x* ) : (x, u) in C^{n+m} }

  and { (r, 0) : r > 0 }.

The Center Manifold Theorem

[Khalil, ch. 8] What can we do if the linearization A = df/dx has zeros on the imaginary axis?

Center Manifold Theory

Assume that a system (possibly via a state space transformation x -> [y^T, z^T]^T) can be written as

  dy/dt = A_0 y + f_0(y, z)
  dz/dt = A_s z + f_s(y, z)

  A_s : asymptotically stable
  A_0 : eigenvalues on the imaginary axis
  f_0 and f_s : second and higher order terms

Note: It is the y-dynamics which relate to the zero linearization, NOT the z-dynamics, according to the notation in Khalil.

Center Manifold Theorem

Assume [y^T, z^T]^T = 0 is an equilibrium point. For every k >= 2 there exist a delta_k > 0 and a C^k mapping h such that h(0) = 0 and h'(0) = 0 and the surface

  z = h(y),   ||y|| <= delta_k

is invariant under the dynamics above.

Proof Outline

For any continuously differentiable function h_k, globally bounded together with its first partial derivative and with h_k(0) = 0, h_k'(0) = 0, let h_{k+1} be defined by the equations

  dy/dt = A_0 y + f_0(y, h_k(y))
  dz/dt = A_s z + f_s(y, h_k(y))
  h_{k+1}(y) = z

Under suitable assumptions, it can be verified that this defines h_{k+1} uniquely. Furthermore, the sequence {h_i} is contractive in the norm sup_y |h_i(y)| and the limit h satisfies the conditions for a center manifold.

Usage

1) Determine z = h(y), at least approximately. (E.g., do a series expansion and identify coefficients.)
2) The local stability for the entire system can be proved to be the same as for the dynamics restricted to a center manifold:

  dy/dt = A_0 y + f_0(y, h(y))

Usage cont'd

In the case of using a series expansion h(y) = c_2 y^2 + c_3 y^3 + ..., you need to continue (with respect to the order of the terms) until you have been able to determine the local behavior (low order terms dominate locally).

Identify the coefficients from the boundary condition [Khalil (8.8), (8.11)]:

  h'(y)[A_0 y + f_0(y, h(y))] - A_s h(y) - f_s(y, h(y)) = 0

Example

  dy/dt = z
  dz/dt = -z + a y^2 + b y z

Here A_0 = 0 and A_s = -1. Substituting z = h(y) into the boundary condition gives

  h'(y) h(y) + h(y) - a y^2 - b y h(y) = 0

hence

  h(y) = a y^2 + O(y^3)

Substituting into the dynamics we get

  dy/dt = a y^2 + O(y^3)

so x = (0, 0) is unstable for a != 0.

Nonuniqueness

The center manifold need not be unique.

Example:

  dy/dt = -y^3
  dz/dt = -z

z = h(y) gives

  h'(y)(-y^3) = dz/dt = -h(y)

which has the solutions

  h(y) = C e^{-1/(2 y^2)}

for all constants C.
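The leading-order approximation h(y) = a y^2 can be sanity-checked numerically: plugging it into the boundary condition h'(y) h(y) + h(y) - a y^2 - b y h(y) = 0 must leave a residual of order y^3 (a small sketch; the coefficients a = 1, b = 1 are arbitrary test values):

```python
# residual of the center-manifold boundary condition for h(y) = a*y^2
a, b = 1.0, 1.0

def residual(y):
    h = a * y * y          # candidate manifold, h(y) = a*y^2
    dh = 2.0 * a * y       # h'(y)
    return dh * h + h - a * y * y - b * y * h

# residual(y) = (2*a**2 - a*b) * y**3, so residual/y^2 -> 0 as y -> 0
for y in (0.1, 0.01, 0.001):
    assert abs(residual(y)) <= abs(2 * a * a - a * b) * abs(y) ** 3 + 1e-12
```

This is exactly the "continue until the order is determined" rule above: the O(y^3) residual cannot change the sign of the a y^2 term that decides instability.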

Nonlinear Control - Lecture 4

Robustness analysis and quadratic inequalities

  mu-analysis
  Multipliers
  S-procedure
  Integral Quadratic Constraints

Department of Automatic Control
Lund Institute of Technology

Material

  Lecture notes
  A. Megretski and A. Rantzer, "System Analysis via Integral Quadratic Constraints," IEEE Transactions on Automatic Control, 42:6, 1997
  U. Jönsson, Lecture Notes on Integral Quadratic Constraints
  User's guide to the toolbox, Matlab

Preview Example

A linear system of equations

  x = y
  y = 1.1 - 0.1 x

has the solution x = y = 1.

Equations with uncertainty:

  (x - y)^2 < eps_1 x^2
  (y + 0.1 x - 1.1)^2 < eps_2
  (x - 1)^2 + (y - 1)^2 < eps_3

Given eps_1 and eps_2, how do we find a valid eps_3?

Example

Parametric Uncertainty in Linear Systems

Let D, a subset of R^{n x n}, contain zero. The system

  dx/dt = (A + B delta C) x

is then exponentially stable for all delta in D if and only if

  A is stable
  det( I - delta C(i omega I - A)^{-1} B ) != 0   for omega in R, delta in D

Question: For what values of delta is the system stable?

Note: There may be large differences depending on whether we consider complex or real uncertainties delta.

Use quadratic inequalities at each frequency!

[Block diagram: the uncertainty delta in feedback with C(sI - A)^{-1} B, with injection r.]

  v = C(i omega I - A)^{-1} B w
  w = delta v + r

Structured Singular Values

Given M in C^{n x n} and a perturbation set

  D = { diag[ delta_1 I_{r_1}, ..., delta_m I_{r_m}, Delta_1, ..., Delta_p ] : delta_k in R, Delta_l in C^{m_l x m_l} }

the structured singular value mu_D(M) is defined by

  mu_D(M) = sup{ sigma_max(Delta)^{-1} : Delta in D, det(I - M Delta) = 0 }

For example, if

  Delta = [ delta_1   0 ;  0   delta_2 ],   delta_k in [-1, 1]

then a bound of the form ||w||^2 < gamma^2 ||r||^2 can be obtained using

  ||w_1||^2 < ||v_1||^2
  ||w_2||^2 < ||v_2||^2

  [ v_1 ; v_2 ] = C(i omega I - A)^{-1} B [ w_1 ; w_2 ]

This verifies that det( I - delta C(i omega I - A)^{-1} B ) != 0.

See Matlab's mu-toolbox.


Reformulated Definition

The following two conditions are equivalent:

(i) 0 != det[ I - Delta M(i omega) ] for all Delta in D and omega in R
(ii) mu_D(M(i omega)) < 1 for omega in R

Bounds on mu

If D consists of full complex matrices, then mu_D(M) = sigma_max(M), where sigma_max(M) is the largest singular value of M, i.e. the square root of the largest eigenvalue of the matrix M*M.

If D consists of perturbations of the form Delta = delta I with delta in [-1, 1], then mu_D(M) is equal to the magnitude rho_R(M) of the largest real eigenvalue of M (the real spectral radius). In general

  rho_R(M) <= mu_D(M) <= sigma_max(M)

Computation of mu

Define

  U_D = { U in D : U*U = I }
  D_D = { D = D* in C^{n x n} : D Delta = Delta D for all Delta in D }
  G_D = { G = G* in C^{n x n} : G Delta = Delta* G for all Delta in D }

Then

  sup_{U in U_D} rho_R(U M)  <=  mu_D(M)  <=  inf_{D in D_D, G in G_D} beta(D, G)  <=  inf_{D in D_D} sigma_max(D M D^{-1})

where

  beta(D, G) = inf{ beta > 0 : M* D* D M + j(G M - M* G) <= beta^2 D* D }

Numerical algorithms are available (e.g. in Matlab).

S-procedure for quadratic inequalities

The inequality

  [x; 1]^T M_0 [x; 1] >= 0

follows from the inequalities

  [x; 1]^T M_1 [x; 1] >= 0,   [x; 1]^T M_2 [x; 1] >= 0

if there exist tau_1, tau_2 >= 0 such that

  M_0 - tau_1 M_1 - tau_2 M_2 >= 0


S-procedure in general

The inequality

  sigma_0(h) >= 0

follows from the inequalities

  sigma_1(h) >= 0, ..., sigma_n(h) >= 0

if there exist tau_1, ..., tau_n >= 0 such that

  sigma_0(h) - sum_k tau_k sigma_k(h) >= 0   for all h

S-procedure losslessness by Megretski/Treil

Let sigma_0, sigma_1, ..., sigma_n be time-invariant quadratic forms on L_2^m. Suppose that there exists f* such that

  sigma_1(f*) > 0, ..., sigma_n(f*) > 0

Then the following statements are equivalent:

  sigma_0(f) >= 0 for all f such that sigma_1(f) >= 0, ..., sigma_n(f) >= 0

  There exist tau_1, ..., tau_n >= 0 such that

    sigma_0(f) - sum_k tau_k sigma_k(f) >= 0   for all f

Integral Quadratic Constraint

The causal bounded operator Delta on L_2^m is said to satisfy the IQC defined by the matrix function Pi(i omega) if

  integral_{-inf}^{inf} [ vhat(i omega) ; (Delta v)hat(i omega) ]* Pi(i omega) [ vhat(i omega) ; (Delta v)hat(i omega) ] d omega >= 0

for all v in L_2.

Example - Gain and Passivity

Suppose the gain of Delta is at most one. Then

  0 <= integral_0^inf ( |v|^2 - |Delta v|^2 ) dt

which corresponds to the IQC defined by

  Pi = [ I   0 ;  0   -I ]

Suppose instead that Delta is passive. Then

  0 <= integral_0^inf v(t)(Delta v)(t) dt

which corresponds to the IQC defined by

  Pi = [ 0   I ;  I   0 ]

Exercise

Show that a nonlinearity phi satisfying the sector condition

  alpha y^2 <= y phi(t, y) <= beta y^2

satisfies the IQC defined by

  Pi(j omega) = [ -2 alpha beta   alpha + beta ;  alpha + beta   -2 ]

IQCs for Coulomb Friction

The friction force

  f(t) = 1             if v(t) > 0
  f(t) in [-1, 1]      if v(t) = 0
  f(t) = -1            if v(t) < 0

i.e. (Delta v)(t) = sgn(v(t)), satisfies the Zames/Falb property

  0 <= integral_0^inf v(t) [ f(t) + (h * f)(t) ] dt,   integral_0^inf |h(t)| dt <= 1

which corresponds to the IQC defined by

  Pi(i omega) = [ 0   1 + H(i omega)* ;  1 + H(i omega)   0 ]

Note: Satisfying a quadratic inequality for every frequency implies satisfying the integral quadratic inequality.

Examples of IQCs (structure / Pi(i omega) / condition):

  Delta passive:            Pi = [ 0  I ;  I  0 ]
  gain of Delta at most 1:  Pi = [ I  0 ;  0  -I ]
  delta in [-1, 1]:         Pi = [ X(i omega)  Y(i omega) ;  Y(i omega)*  -X(i omega) ],   X = X* >= 0,  Y = -Y*
  delta(t) in [-1, 1]:      as above with X, Y constant matrices
  (Delta v)(t) = sgn(v(t)): Pi = [ 0  1 + H(i omega)* ;  1 + H(i omega)  0 ],   ||h||_{L1} <= 1

Well-posed Interconnection

The feedback interconnection

  v = G w + f
  w = Delta(v) + e

is said to be well-posed if the map (v, w) -> (e, f) has a causal inverse. It is called BIBO stable if the inverse is also bounded.
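For the sector IQC in the exercise above, the frequency-independent multiplier reduces to a pointwise algebraic identity: [v; w]^T Pi [v; w] = 2(beta v - w)(w - alpha v), which is nonnegative exactly when w lies in the sector. A small sketch checking this on random samples (the sector [alpha, beta] = [0.5, 3] is an arbitrary test choice):

```python
import random

alpha, beta = 0.5, 3.0
PI = [[-2 * alpha * beta, alpha + beta],
      [alpha + beta, -2.0]]

def quad_form(v, w):
    # [v, w]^T * PI * [v, w]
    return PI[0][0] * v * v + (PI[0][1] + PI[1][0]) * v * w + PI[1][1] * w * w

random.seed(0)
for _ in range(1000):
    v = random.uniform(-5.0, 5.0)
    s = random.uniform(alpha, beta)   # slope inside the sector
    w = s * v
    q = quad_form(v, w)
    # identity: q = 2*(beta*v - w)*(w - alpha*v) >= 0 inside the sector
    assert q >= -1e-9
    assert abs(q - 2.0 * (beta * v - w) * (w - alpha * v)) < 1e-9
```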

IQC Stability Theorem

Let G(s) be stable and proper and let Delta be causal. For all tau in [0, 1], suppose the loop is well-posed and tau Delta satisfies the IQC defined by Pi(i omega). If

  [ G(i omega) ; I ]* Pi(i omega) [ G(i omega) ; I ] < 0   for omega in [0, infinity]

then the feedback system is BIBO stable.

Relation to Passivity and Gain Theorems

A stability theorem based on gain is recovered with

  Pi = [ I   0 ;  0   -I ]

A passivity based stability theorem is recovered with

  Pi = [ 0   I ;  I   0 ]

Special Case Analysis

Note that Delta = diag{delta_1, ..., delta_m}, with |delta_k| <= 1, satisfies the IQC defined by

  Pi(i omega) = [ X(i omega)   0 ;  0   -X(i omega) ]

where X(i omega) = diag{x_1(i omega), ..., x_m(i omega)} > 0. Feedback loop stability follows if there exists X(i omega) > 0 with

  G(i omega)* X(i omega) G(i omega) < X(i omega)

or equivalently, with D(i omega)* D(i omega) = X(i omega),

  sup_omega sigma_max( D(i omega) G(i omega) D(i omega)^{-1} ) < 1

Combination of Uncertain and Nonlinear Blocks

The operator Delta(v_1, v_2) = (delta v_1, phi(v_2)), where

  delta in [-1, 1],   alpha <= phi(v_2)/v_2 <= beta

satisfies all IQCs defined by matrix functions of the form

  Pi(i omega) = [ X(i omega)   Y(i omega)     0               0            ;
                  Y(i omega)*  -X(i omega)    0               0            ;
                  0            0              -2 alpha beta   alpha + beta ;
                  0            0              alpha + beta    -2           ]

where X(i omega) = X(i omega)* >= 0 and Y(i omega) = -Y(i omega)*.

Proof idea of IQC Theorem

Combination of the IQC for Delta with the inequality for G gives existence of c_0 > 0 such that

  ||v|| <= c_0 ||v - tau G Delta(v)||,   v in L_2, tau in [0, 1]

If (I - tau_0 G Delta) has a bounded inverse for some tau_0 in [0, 1], then the above inequality gives boundedness of (I - tau G Delta)^{-1} for all tau with

  |tau - tau_0| c_0 ||G|| < 1

Hence, boundedness for tau = 0 gives boundedness for tau < (c_0 ||G||)^{-1}. This, in turn, gives boundedness for tau < 2(c_0 ||G||)^{-1} and so on. Finally the whole interval [0, 1] is covered.

A toolbox for IQC analysis

Copy /home/kursolin/matlab/lmiinit.m to the current directory or download and install the IQCbeta toolbox from http://www.control.lth.se/cykao.

[Figure: feedback loop with G(s) = 10s^2/(s^3 + 2s^2 + 2s + 1), input e(t), output y(t), and a monotonic nonlinearity.]

>> abst_init_iqc;
>> G = tf([10 0 0],[1 2 2 1]);
>> e = signal
>> w = signal
>> y = -G*(e+w)
>> w==iqc_monotonic(y)
>> iqc_gain_tbx(e,y)

A simulation model / An analysis model defined graphically

[Simulink diagrams: a simulation model with step input, gains, a saturation, transfer functions and integrators; and an analysis model where the saturation is replaced by IQC blocks such as "monotonic with restricted rate", "performance", and "uncertain delay" Exp(-ds)-1.]

The text version (i.e., NOT the gui) is strongly recommended by the IQCbeta author(s) at the present version!

>> iqc_gui('fricSYSTEM')
extracting information from fricSYSTEM ...
scalar inputs: 5   states: 10   simple q-forms: 7
LMI #1 size = 1 states: 0
LMI #2 size = 1 states: 0
LMI #3 size = 1 states: 0
LMI #4 size = 1 states: 0
LMI #5 size = 1 states: 0
Solving with 62 decision variables ...
ans = 4.7139

A library of analysis objects

[Figure: the IQCbeta block library, including blocks such as performance, Popov IQC, polytope, encapsulated odd deadzone, white noise performance, sector + Popov, polytope with restricted rate, diagonal structure, norm bounded, unknown constant, monotonic with restricted rate, sector, satint, LTI unmodeled dynamics, |D(t)| < k TV scalar, STV scalar, window, odd slope nonlinearity, uncertain delay Exp(-ds)-1, harmonic, ZeroPole, StateSpace, Transfer Fcn, Gain, Sum, Step source, and Mux/Demux.]

Bounds on Auto Correlation / Dominant Harmonics

(Figures: a signal u generated by a system, analyzed for auto correlation and for dominant harmonics.)

The auto correlation bound: for small ε > 0, the constraint

∫ u(t) u(t - T) dt ≥ (1 - ε) ∫ u(t) u(t) dt

corresponds to the multiplier

Π(iω) = 2ε - (2 - e^{iωT} - e^{-iωT})

The dominant-harmonics bound: the constraint

∫_0^∞ |û(iω)|² dω ≤ (1 + ε) ∫_a^b |û(iω)|² dω

means that the energy of u is concentrated to the interval [a, b].

Subharmonic Oscillations in Position Control / No Subharmonics in Velocity Control!

(Block diagrams and plots: position control of a double integrator with friction f through a PID controller, with reference r = 1 + 0.1 sin t, exhibits a subharmonic oscillation in the position response; velocity control of the same plant through a PI controller shows no subharmonics.)

Incremental Gain and Passivity

A causal nonlinear operator φ on L2^m is said to have incremental gain less than γ if

||φ(v1) - φ(v2)|| ≤ γ ||v1 - v2||,   v1, v2 ∈ L2

It is called incrementally passive if

∫_0^T [φ(v1) - φ(v2)][v1 - v2] dt ≥ 0,   T > 0, v1, v2 ∈ L2

Incremental Stability

The feedback interconnection

v = G w + f
w = φ(v) + e

is called incrementally stable if there is a constant C such that any two solutions (e1, f1, v1, w1), (e2, f2, v2, w2) satisfy

||v1 - v2|| + ||w1 - w2|| ≤ C ||e1 - e2|| + C ||f1 - f2||

Incremental Stability Excludes Subharmonics

(Figure: Nyquist-type plot of G(iω) for the velocity loop with PI control, G(s) = s/(s² + K s + Ki); incremental stability of the loop rules out subharmonic oscillations.)

Summary — Robustness analysis

- μ-analysis
- S-procedure
- Integral Quadratic Constraints

References:
A. Megretski and A. Rantzer, "System Analysis via Integral Quadratic Constraints," IEEE Transactions on Automatic Control, 42:6, 1997.
L. Qiu, B. Bernhardsson, A. Rantzer, E. J. Davison, and P. M. Young, "A Formula for Computation of the Real Stability Radius," Automatica, vol. 31(6), pp. 879-890, 1995.
U. Jönsson, Lecture Notes on Integral Quadratic Constraints.
Users' guide to the IQC toolbox, Matlab.

Lecture 5. Synthesis, Nonlinear design

- Introduction
- Control Lyapunov functions
- Backstepping

Why nonlinear design methods?

- Linear design degraded by nonlinearities (e.g. saturations)
- Linearization not controllable (e.g. pocket parking)
- Long state transitions (e.g. satellite orbits)

Exact (feedback) Linearization

Idea: Transform the nonlinear system into a linear system by means of feedback and/or a change of variables. After this, a stabilizing state feedback is designed.

Simple example:

ẍ = l sin(x) + cos(x) u

Put

u = (1/cos(x)) (-l sin(x) + v)

which gives

ẍ = v

Design linear controller v = -l1 x - l2 ẋ, etc.

State transformation

More difficult example, needing a state transformation:

ẋ1 = a sin(x2)
ẋ2 = -x1² + u

Cannot cancel a sin(x2). Introduce

z1 = x1
z2 = a sin(x2)

so that

ż1 = z2
ż2 = a cos(x2) (-x1² + u)

Then feedback linearization is possible by

u = x1² + v/(a cos(x2))

Exact Linearization

- Often useful in simple cases
- Important intuition may be lost
- Related to Lie brackets and flatness

From analysis to synthesis

Lyapunov criterion: search for (V, u) such that

(∂V/∂x) [f + g u] < 0

IQC criterion: search for Q(s) and τ1, ..., τm such that

[ (T1 + T2 Q T3)(iω) ; I ]* ( Σk τk Πk(iω) ) [ (T1 + T2 Q T3)(iω) ; I ] < 0   for ω ∈ [0, ∞]

In both cases, the problem is non-convex and hard. Heuristic idea: iterate between the arguments.

Convexity for state feedback

Problem: Suppose |φ(v)/v| ≤ 1. Given the system

ẋ = f_u(x) := Ax + E φ(Fx) + Bu

find u = -Lx and V(x) = xᵀPx such that

(∂V/∂x) f_u(x) < 0

Solution: Solve for P, L

(A + EF - BL)ᵀ P + P (A + EF - BL) < 0
(A - EF - BL)ᵀ P + P (A - EF - BL) < 0

or, equivalently, convex in (Q, K) = (P^{-1}, L P^{-1}):

(AQ + EFQ - BK)ᵀ + (AQ + EFQ - BK) < 0
(AQ - EFQ - BK)ᵀ + (AQ - EFQ - BK) < 0

Control Lyapunov Function (CLF)

A positive definite, radially unbounded C¹ function V is called a CLF for the system ẋ = f(x, u) if for each x ≠ 0 there exists u such that

(∂V/∂x) f(x, u) < 0

(Notation: L_f V(x) < 0)

When f(x, u) = f(x) + g(x) u, V is a CLF if and only if

L_f V(x) < 0 for all x ≠ 0 such that L_g V(x) = 0

Example

Check if V(x, y) = [x² + (y + x²)²]/2 is a CLF for the system

ẋ = x y
ẏ = y + u

L_f V(x, y) = x² y + (y + x²)(y + 2x² y)
L_g V(x, y) = y + x²

L_g V(x, y) = 0  ⟹  y = -x²  ⟹  L_f V(x, y) = -x⁴ < 0   if (x, y) ≠ 0

so V is a CLF.

Sontag's formula

If V is a CLF for the system ẋ = f(x) + g(x) u, then a continuous asymptotically stabilizing feedback is defined by

u(x) := 0                                                       if L_g V(x) = 0
u(x) := -( L_f V + sqrt( (L_f V)² + (L_g V)⁴ ) ) / L_g V        if L_g V(x) ≠ 0
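Sontag's formula can be checked numerically on this example (a sketch; the grid and tolerance are arbitrary choices). Along the closed loop, V̇ = L_f V + L_g V · u = -sqrt((L_f V)² + (L_g V)⁴) when L_g V ≠ 0, so V̇ is strictly negative away from the origin:

```python
import math

def LfV(x, y):
    # L_f V for f = (x*y, y) and V = [x^2 + (y + x^2)^2]/2
    return x*x*y + (y + x*x)*(y + 2*x*x*y)

def LgV(x, y):
    # L_g V for g = (0, 1)
    return y + x*x

def u_sontag(x, y):
    # Sontag's universal formula (scalar-input case)
    a, b = LfV(x, y), LgV(x, y)
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a*a + b**4)) / b

worst = -float("inf")
for x in [-1.0, -0.5, 0.5, 1.0]:
    for y in [-1.0, -0.5, 0.5, 1.0]:
        vdot = LfV(x, y) + LgV(x, y) * u_sontag(x, y)
        worst = max(worst, vdot)
print(worst)  # strictly negative on the whole grid
```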

Backstepping idea

Problem: Given a CLF for the system

ẋ = f(x, u)

find one for the extended system

ẋ = f(x, y)
ẏ = h(x, y) + u

Idea: Use y to control the first system. Use u for the second. Note the potential for recursivity.

Backstepping

Let V_x be a CLF for the system ẋ = f(x) + g(x)u with corresponding asymptotically stabilizing control law u = φ(x). Then V(x, y) = V_x(x) + [y - φ(x)]²/2 is a CLF for the system

ẋ = f(x) + g(x) y
ẏ = h(x, y) + u

with corresponding control law

u = (∂φ/∂x)[f(x) + g(x) y] - (∂V_x/∂x) g(x) - h(x, y) - [y - φ(x)]

Proof.

V̇ = (∂V_x/∂x)(f + g y) + (y - φ)[-(∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) + (y - φ)[(∂V_x/∂x) g - (∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) - (y - φ)² < 0

Backstepping Example

For the system

ẋ = x² + y
ẏ = u

we can choose V_x(x) = x² and φ(x) = -x² - x to get the control law

u = φ'(x)(x² + y) - h(x, y) - [y - φ(x)]
  = -(2x + 1)(x² + y) - x² - x - y

with Lyapunov function

V(x, y) = V_x(x) + [y - φ(x)]²/2 = x² + (y + x² + x)²/2

(With z = y + x² + x one checks V̇ = -2x² + 2xz - z² = -x² - (x - z)² < 0 for (x, z) ≠ 0.)
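A quick simulation (a sketch using forward-Euler integration; step size and horizon are arbitrary choices) confirms that this control law drives the state to the origin:

```python
def u_backstep(x, y):
    # backstepping control law for xdot = x^2 + y, ydot = u
    return -(2*x + 1)*(x*x + y) - x*x - x - y

x, y, dt = 1.0, -1.0, 1e-3
for _ in range(10000):          # simulate 10 time units
    dx = x*x + y
    dy = u_backstep(x, y)
    x, y = x + dt*dx, y + dt*dy
print(x, y)  # both close to 0
```

In the error coordinate z = y + x² + x the closed loop is simply ż = -z, ẋ = -x + z, which explains the exponential convergence seen in the simulation.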

Nonlinear Control — Intro to geometric control theory

- Lie brackets and nonlinear controllability
- The parking problem

Reading: Khalil, Ch 13.1-2 (Intro to Feedback linearization); Slotine and Li, pp. 229-236.

Controllability

Linear case:

ẋ = Ax + Bu
0 → x(T),  x(0) → x(T),  x(0) → 0,   T either fixed or free

All controllability definitions coincide.

Rank condition: the system is controllable iff

W_n = [ B  AB  ...  A^{n-1}B ]

has full rank.

Is there a corresponding result for nonlinear systems?
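The linear rank condition is easy to evaluate numerically. A minimal sketch (the double-integrator example and the plain Gaussian-elimination rank routine are illustrative choices, not part of the lecture):

```python
def mat_mul(A, B):
    # plain matrix product for small dense matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, tol=1e-9):
    # numerical rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        if r >= rows:
            break
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Double integrator: xdot1 = x2, xdot2 = u
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
AB = mat_mul(A, B)
Wn = [[B[i][0], AB[i][0]] for i in range(2)]   # W2 = [B  AB]
print(rank(Wn))  # 2, i.e. full rank: controllable
```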

Lie Brackets

The Lie bracket between f(x) and g(x) is defined by

[f, g] = (∂g/∂x) f - (∂f/∂x) g

Example:

f = [ cos x2 ; x1 ],   g = [ x1 ; 1 ]

[f, g] = [ 1  0 ; 0  0 ] [ cos x2 ; x1 ] - [ 0  -sin x2 ; 1  0 ] [ x1 ; 1 ]
       = [ cos x2 + sin x2 ; -x1 ]

Why interesting? For the system

ẋ = g1(x) u1 + g2(x) u2

the motion

(u1, u2) = (1, 0),   t ∈ [0, ε]
(u1, u2) = (0, 1),   t ∈ [ε, 2ε]
(u1, u2) = (-1, 0),  t ∈ [2ε, 3ε]
(u1, u2) = (0, -1),  t ∈ [3ε, 4ε]

gives the motion x(4ε) = x(0) + ε² [g1, g2] + O(ε³). (Repeating this commutator motion n times with ε = 1/n and letting n → ∞ generates the flow along [g1, g2].)

The system is controllable if the Lie bracket tree has full rank (controllable = the states you can reach from x = 0 at fixed time T contain a ball around x = 0).
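The second-order commutator motion can be checked numerically. The sketch below uses the unicycle fields g1 = (cos θ, sin θ, 0), g2 = (0, 0, 1) (an assumed illustrative choice, treated again later), for which [g1, g2] = (sin θ, -cos θ, 0), and compares the net displacement of the four-phase motion with ε²[g1, g2](x(0)):

```python
import math

def g1(x):  # drive along the heading x[2]
    return [math.cos(x[2]), math.sin(x[2]), 0.0]

def g2(x):  # turn
    return [0.0, 0.0, 1.0]

def flow(x, field, sign, tau, dt=1e-4):
    # forward-Euler integration of dx/dt = sign*field(x) for time tau
    for _ in range(int(round(tau / dt))):
        f = field(x)
        x = [xi + sign * dt * fi for xi, fi in zip(x, f)]
    return x

eps = 0.05
x = [0.0, 0.0, 0.0]
x = flow(x, g1, +1, eps)   # (u1,u2) = ( 1, 0)
x = flow(x, g2, +1, eps)   # (u1,u2) = ( 0, 1)
x = flow(x, g1, -1, eps)   # (u1,u2) = (-1, 0)
x = flow(x, g2, -1, eps)   # (u1,u2) = ( 0,-1)

bracket = [math.sin(0.0), -math.cos(0.0), 0.0]   # [g1,g2] at x(0)
predicted = [eps**2 * b for b in bracket]
print(x, predicted)  # net motion ~ (0, -eps^2, 0), agreeing to O(eps^3)
```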

The Lie Bracket Tree

g1,  g2
[g1, g2]
[g1, [g1, g2]],  [g2, [g1, g2]]
[g1, [g1, [g1, g2]]],  [g2, [g1, [g1, g2]]],  [g1, [g2, [g1, g2]]],  [g2, [g2, [g1, g2]]]
...

Parking Your Car Using Lie-Brackets

State: position (x, y), heading θ, steering angle φ.

d/dt [ x ; y ; θ ; φ ] = [ cos(θ + φ) ; sin(θ + φ) ; sin(φ) ; 0 ] u1 + [ 0 ; 0 ; 0 ; 1 ] u2 =: g1 u1 + g2 u2

(u1 = drive, u2 = steer)

Parking the Car

Can the car be moved sideways, i.e. in the (-sin(θ), cos(θ), 0, 0)ᵀ direction?

[g1, g2] = -∂g1/∂φ = [ sin(θ + φ) ; -cos(θ + φ) ; -cos(φ) ; 0 ] =: g3 = wriggle

Once More

[g3, g1] = [ sin(θ) ; -cos(θ) ; 0 ; 0 ] = sideways

The motion [g3, g1] takes the car sideways, in the direction (sin(θ), -cos(θ)).

The Parking Theorem

You can get out of any parking lot that is bigger than your car. Use the following control sequence: Wriggle, Drive, -Wriggle (this requires a cool head), -Drive (repeat).

Another example — The unicycle

State: position (x1, x2), heading x3.

ẋ = [ cos(x3) ; sin(x3) ; 0 ] u1 + [ 0 ; 0 ; 1 ] u2 =: g1 u1 + g2 u2

[g1, g2] = [ sin(x3) ; -cos(x3) ; 0 ]

g1, g2 and [g1, g2] together have full rank: controllable.

More Information

More theory about Lie brackets:
- Nijmeijer, van der Schaft, Nonlinear Dynamical Control Systems, Springer Verlag.
- Isidori, Nonlinear Control Systems, Springer Verlag.

More activities on passivity

or: Recapitulation of some stuff from Spring 2003

- Interconnected systems — peaking
- Passivity and stability
- Relative degree and zero dynamics
- Exact linearization

Reading: Nonlinear Systems, Khalil; Constructive Nonlinear Control, Sepulchre et al.

Consider the system

ẋ1 = x2
ẋ2 = u
y = c1 x1 + c2 x2
ż = -z + z² y

We can solve for z(t):

z(t) = e^{-t} z(0) / ( 1 - z(0) ∫_0^t e^{-τ} y(τ) dτ )

When can we guarantee that

1 - z(0) ∫_0^t e^{-τ} y(τ) dτ ≠ 0 ?
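The closed-form expression for z(t) can be verified against direct numerical integration (a sketch; the test output y(t) = sin t and z(0) = 0.5 are arbitrary choices for which the denominator stays away from zero):

```python
import math

z0, dt, T = 0.5, 1e-4, 2.0

# direct Euler integration of zdot = -z + z^2 * y
z, t, integral = z0, 0.0, 0.0
for _ in range(int(T/dt)):
    y = math.sin(t)
    integral += math.exp(-t)*y*dt    # running value of the integral of e^{-tau} y(tau)
    z = z + dt*(-z + z*z*y)
    t += dt

# closed-form solution
z_formula = math.exp(-T)*z0 / (1.0 - z0*integral)
print(z, z_formula)  # the two agree to integration accuracy
```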

The peaking phenomenon

Example: Controlled linear system with right-half-plane zero. Feedback can change the location of the poles but not the location of the zero (unstable pole-zero cancellation not allowed).

Gcl(s) = ωo² (-s + 1) / (s² + 2 ωo s + ωo²)   (1)

A step response will reveal a transient which grows in amplitude for faster closed-loop poles s = -ωo.

(Figure: step responses for the system in Eq. (1), ωo = 1, 2, and 5. Faster poles give shorter settling times, but the transients grow significantly in amplitude, so-called peaking.)

Dissipativity

Consider a nonlinear system

ẋ(t) = f(x(t), u(t), t)
y(t) = h(x(t), u(t), t)

The system is dissipative if there exist a supply rate r(t) = r(u(t), y(t), t) and a storage function S(x) ≥ 0 such that

S(x(T)) - S(x(0)) ≤ ∫_0^T r(u(t), y(t)) dt   for all x ∈ X

or, in differential form, Ṡ(x) ≤ r(u, y): the increase rate of the storage is not larger than the supplied power.

If the supply rate is r(u(t), y(t)) = uᵀy, then the system is passive. Any storage increase in a passive system is due to external sources! There is a connection between passivity and Lyapunov stability (Warning: passivity is an in/out relationship, so something more is needed).

Local passivity

Show that

ẋ = (x³ - kx) + u
y = x

is passive in the interval X = [-√k, √k]. Use S(x) = x²/2 as storage function:

Ṡ = x²(x² - k) + u x ≤ u y   whenever x² ≤ k

Example: Mass-spring-damper system

Position x, velocity v (constants k, d, m > 0):

ẋ = v
v̇ = -(k/m) x - (d/m) v + (1/m) u

Energy: E = m v²/2 + k x²/2

Ė = u v - d v² ≤ u v

so the mapping input force u → velocity v is passive.

The mapping input force u → position x is NOT passive:

G_xu(s) = 1 / (m s² + d s + k)

has relative degree 2 > 1.
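The storage inequality E(T) - E(0) ≤ ∫_0^T u v dt for the force-to-velocity map can be checked in simulation (a sketch; m = d = k = 1 and the test input u(t) = sin 2t are arbitrary choices):

```python
import math

m = d = k = 1.0
x, v, dt, T = 0.0, 0.0, 1e-4, 10.0
E0 = 0.5*m*v*v + 0.5*k*x*x
supplied = 0.0                       # running integral of u*v
t = 0.0
for _ in range(int(T/dt)):
    u = math.sin(2*t)
    supplied += u*v*dt
    dx = v
    dv = (-k*x - d*v + u)/m
    x, v = x + dt*dx, v + dt*dv
    t += dt
E = 0.5*m*v*v + 0.5*k*x*x
print(E - E0, supplied)  # stored energy stays below supplied energy
```

The slack between the two numbers is exactly the energy ∫ d v² dt burnt in the damper.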

Feedback interconnections of passive systems

Assume that Σ1 and Σ2 are passive; then the well-posed feedback interconnections in the figure below are also passive from r to y.

(Figure: feedback interconnection of Σ1 and Σ2, input r, output y.)

This is a "small phase" property for feedback connections.

Excess and shortage of passivity

A system is said to be

- Output Feedback Passive (OFP(k)) if it is dissipative with respect to r(u, y) = uᵀy - k yᵀy for some k
- Input Feedback Passive (IFP(k)) if it is dissipative with respect to r(u, y) = uᵀy - k uᵀu for some k

Excess or shortage of passivity can be quantified by the notation OFP(k) and IFP(k).

Excess of passivity vs shortage of passivity

IFP(1):
ẋ = u,  y = x + u
(take S = x²/2: Ṡ = xu = (y - u)u = uy - u²)

OFP(-0.3):
ẋ = 0.3x + u,  y = x
(take S = x²/2: Ṡ = uy + 0.3 y²)

Note! u = -k y where k = 0.3 is exactly the amount of feedback required to make the (right) system passive. Feedback connection may even out excess and shortage of passivity...

Sector condition φ ∈ [α, β]:

α u² ≤ u φ(u) ≤ β u²,   0 ≤ α ≤ β

For the static nonlinearity y = φ(u), take S = 0: then

u y - α u² ≥ 0   and   u y - (1/β) y² ≥ 0

so the sector nonlinearity y = φ(u) is both IFP(α) and OFP(1/β).

Special Case: Positivity

Let M(jω) = C(jωI - A)^{-1}B + D, where A is Hurwitz. The system

ẋ = Ax + Bu
y = Cx + Du
u = -ψ(t, y)

with sector condition

ψ(t, y)/y ≥ 0,   t ≥ 0, y ≠ 0

is absolutely stable if

M(jω) + M(jω)* > 0,   ω ∈ [0, ∞)

Note: For SISO systems this means that the Nyquist curve lies strictly in the right half plane.

Zero-state detectability (ZSD)

Remark: a storage function may be positive semi-definite:

ẋ1 = x1
ẋ2 = u
y = x2

The system above is passive with storage function S = x2²/2 (although x1 is unstable)!! Introduce zero-state detectability (compare linear detectability) to exclude these cases and relate passivity to stability.

Zero-state detectability (ZSD) cont.

DEFINITION 1 — ZERO-STATE DETECTABILITY
The system ẋ = f(x, u), y = h(x, u) is said to be zero-state detectable if, for any initial condition x(0) and zero input u = [u1 ... up]ᵀ ≡ 0, the condition of identically zero output y = [h1(x) ... hp(x)] ≡ 0, t ≥ 0, implies that the state converges to zero, lim_{t→∞} x(t) = 0.

Passivity and stability

Dissipativity and zero-state detectability of a system imply Lyapunov stability. For a system H which is passive and ZSD: if the output y = h(x) (i.e., no direct throughput), then the feedback u = -y achieves asymptotic stability of x = 0.

Relative degree

A system's relative degree: how many times you need to take the derivative of the output signal before the input shows up. For a nonlinear system with relative degree d,

ẋ = f(x) + g(x) u
y = h(x)   (2)

we have

ẏ = (∂h/∂x) ẋ = (∂h/∂x) f(x) + (∂h/∂x) g(x) u = L_f h(x) + L_g h(x) u
    (where L_g h(x) = 0 if d > 1)
...
y^(k) = L_f^k h(x),   if k < d
...
y^(d) = L_f^d h(x) + L_g L_f^{d-1} h(x) u   (3)

Using the same kind of coordinate transformations as for the feedback-linearizable systems above, we can introduce new state-space variables ξ, where the first d coordinates are chosen as

ξ1 = h(x)
ξ2 = L_f h(x)
...
ξd = L_f^{d-1} h(x)   (4)

Under some conditions on involutivity, the Frobenius theorem guarantees the existence of another (n - d) functions to provide a local state transformation of full rank. Such a coordinate change transforms the system to the normal form

ξ̇1 = ξ2
...
ξ̇_{d-1} = ξd
ξ̇d = L_f^d h(ξ, z) + L_g L_f^{d-1} h(ξ, z) u
y = ξ1
ż = ψ(ξ, z)   (5)

where ż = ψ(ξ, z) represents the zero dynamics of order n - d [Byrnes & Isidori, 1991].

EXAMPLE 1 — ZERO DYNAMICS FOR LINEAR SYSTEMS
Consider the linear system

y = (s - 1) / (s² + 2s + 1) u   (6)

with the following state-space description

ẋ1 = -2x1 + x2 + u
ẋ2 = -x1 - u   (7)
y = x1

We have relative degree = 1. Find the zero dynamics by assigning y ≡ 0:

x1 ≡ 0  ⟹  x2 + u = 0  ⟹  u = -x2
ẋ2 = -u = x2   (8)

The remaining dynamics is an unstable system corresponding to the zero s = 1 in the transfer function (6).
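The unstable zero dynamics can be exhibited numerically (a sketch; starting on the constraint x1 = 0 and applying the output-zeroing input u = -x2 keeps y ≡ 0 while x2 grows like eᵗ):

```python
import math

x1, x2, dt, T = 0.0, 1.0, 1e-4, 1.0
for _ in range(int(T/dt)):
    u = -x2                  # output-zeroing input (keeps x1 = 0)
    dx1 = -2*x1 + x2 + u
    dx2 = -x1 - u
    x1, x2 = x1 + dt*dx1, x2 + dt*dx2
print(x1, x2)  # x1 stays at 0, x2 has grown by a factor e over one time unit
```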

Exact (feedback) Linearization

Idea: Transform the nonlinear system into a linear system by means of feedback and/or a change of variables. After this, a stabilizing state feedback is designed.

For general nonlinear systems feedback linearization comprises

- state transformation
- inversion of nonlinearities
- linear feedback

(Block diagram: reference r and linear feedback produce v; the inner loop inverts the nonlinearity, u = α^{-1}(·), with state transformation x = T(z): inner feedback linearization and outer linear feedback control.)

Simple example:

ẍ = l sin(x) + cos(x) u

Put

u = (1/cos(x)) (-l sin(x) + v)

which gives ẍ = v. Design linear controller v = -l1 x - l2 ẋ, etc.

State transformation

More difficult example, where we need a state transformation:

ẋ1 = a sin(x2)
ẋ2 = -x1² + u

Cannot cancel a sin(x2). Introduce

z1 = x1
z2 = a sin(x2)

so that

ż1 = z2
ż2 = a cos(x2) (-x1² + u)

Then feedback linearization is possible by

u = x1² + v/(a cos(x2))

Feedback linearization (nonlinear version of pole-zero cancellation)

Feedback linearization can be interpreted as a nonlinear version of pole-zero cancellation, which cannot be used if the zero dynamics are unstable, i.e., for nonminimum-phase systems.

When to cancel nonlinearities?

ẋ1 = -x1³ + u1
ẋ2 = x2³ + u2   (9)

Cancellation may be nonrobust and/or not necessary (in the first system -x1³ already provides useful damping). However, note the difference between tracking and regulation!! Will see later how optimality criteria give hints.

Matching uncertainties

ẋ1 = x2
...
ẋ_{d-1} = xd
ẋd = L_f^d h(x, z) + L_g L_f^{d-1} h(x, z) u
y = x1
ż = ψ(x, z)   (10)

Integrator chain and nonlinearities (+ zero dynamics). Note that uncertainties due to parameters etc. are collected in L_f^d h(x, z) + L_g L_f^{d-1} h(x, z) u.

Achieving passivity by feedback ("feedback passivation"). Need to have

- relative degree one
- weakly minimum phase

NOTE! (Nonlinear) relative degree and zero dynamics are invariant under feedback! Two major challenges:

- avoid non-robust cancellations
- make it constructive by finding matching input-output pairs

Exact Linearization

- Often useful in simple cases
- Important intuition may be lost
- Related to Lie brackets and flatness

Control Lyapunov Function (CLF)

A positive definite, radially unbounded C¹ function V is called a CLF for the system ẋ = f(x, u) if for each x ≠ 0 there exists u such that

(∂V/∂x) f(x, u) < 0

(Notation: L_f V(x) < 0)

When f(x, u) = f(x) + g(x) u, V is a CLF if and only if

L_f V(x) < 0 for all x ≠ 0 such that L_g V(x) = 0

Example

Check if V(x, y) = [x² + (y + x²)²]/2 is a CLF for the system

ẋ = x y
ẏ = y + u

L_f V(x, y) = x² y + (y + x²)(y + 2x² y)
L_g V(x, y) = y + x²

L_g V(x, y) = 0  ⟹  y = -x²  ⟹  L_f V(x, y) = -x⁴ < 0   if (x, y) ≠ 0

Sontag's formula

If V is a CLF for the system ẋ = f(x) + g(x) u, then a continuous asymptotically stabilizing feedback is defined by

u(x) := 0   if L_g V(x) = 0

u(x) := -[ L_f V + sqrt( (L_f V)² + ((L_g V)(L_g V)ᵀ)² ) ] / ( (L_g V)(L_g V)ᵀ ) · (L_g V)ᵀ   if L_g V(x) ≠ 0

Note: a factor L_g V ≠ 0 can be cancelled if it is scalar.

Backstepping idea

Problem: Given a CLF for the system

ẋ = f(x, u)

find one for the extended system

ẋ = f(x, y)
ẏ = h(x, y) + u

Idea: Use y to control the first system. Use u for the second. Note the potential for recursivity.

Backstepping

Let V_x be a CLF for the system ẋ = f(x) + g(x)u with corresponding asymptotically stabilizing control law u = φ(x). Then V(x, y) = V_x(x) + [y - φ(x)]²/2 is a CLF for the system

ẋ = f(x) + g(x) y
ẏ = h(x, y) + u

with corresponding control law

u = (∂φ/∂x)[f(x) + g(x) y] - (∂V_x/∂x) g(x) - h(x, y) - [y - φ(x)]

Proof.

V̇ = (∂V_x/∂x)(f + g y) + (y - φ)[-(∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) + (y - φ)[(∂V_x/∂x) g - (∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) - (y - φ)² < 0

Backstepping Example

For the system

ẋ = x² + y
ẏ = u

we can choose V_x(x) = x² and φ(x) = -x² - x to get the control law

u = φ'(x)(x² + y) - h(x, y) - [y - φ(x)]
  = -(2x + 1)(x² + y) - x² - x - y

with Lyapunov function

V(x, y) = V_x(x) + [y - φ(x)]²/2 = x² + (y + x² + x)²/2

Nonlinear design methods

- Lyapunov redesign
- Nonlinear damping
- Backstepping
  - CLF
  - passivity
  - robust/adaptive

Reading: Ch 14, Nonlinear Systems, Khalil; "The Joy of Feedback", P. V. Kokotovic.

Motivation: Feedback Linearization

One of the drawbacks with feedback linearization is that exact cancellation of nonlinear terms may not be possible due to, e.g., parameter uncertainties. A suggested solution:

- stabilization via feedback linearization around a nominal model
- consider known bounds on the uncertainties to provide an additional term for stabilization ("Lyapunov redesign")

Lyapunov Redesign

Consider the nominal system

ẋ = f(x, t) + G(x, t) u   (1)

with the known control law

u = φ(x, t)

so that the system is uniformly asymptotically stable. Assume that a Lyapunov function V(x, t) is known s.t.

α1(||x||) ≤ V(x, t) ≤ α2(||x||)
∂V/∂t + (∂V/∂x)[f(t, x) + G φ] ≤ -α3(||x||)

Lyapunov Redesign cont.

Perturbed system:

ẋ = f(x, t) + G(x, t)[u + δ]   (2)

with disturbance δ = δ(t, x, u). Assume the disturbance satisfies the bound

||δ(t, x, φ + v)|| ≤ ρ(x, t) + κ0 ||v||,   0 ≤ κ0 < 1

If we know ρ and κ0, how do we design the additional control v such that u = φ(x, t) + v stabilizes (2)? The matching condition: the perturbation enters at the same place as the control signal u.

Apply u = φ(x, t) + v:

ẋ = f(x, t) + G(x, t) φ + G(x, t)[v + δ(t, x, φ + v)]

Lyapunov Redesign cont.

V̇ = ∂V/∂t + (∂V/∂x)[f(t, x) + G φ] + (∂V/∂x) G [v + δ]
  ≤ -α3(||x||) + (∂V/∂x) G [v + δ]

Introduce wᵀ = (∂V/∂x) G:

V̇ ≤ -α3(||x||) + wᵀ v + wᵀ δ

Choose v such that wᵀ v + wᵀ δ ≤ 0. Two alternatives are presented in Khalil (2-norm / ∞-norm):

Alternative 1: If the bound holds in the 2-norm,

||δ(t, x, φ + v)|| ≤ ρ(x, t) + κ0 ||v||_2,   0 ≤ κ0 < 1

take

v = -η(t, x) w/||w||_2,   where η ≥ ρ/(1 - κ0)

Alternative 2: If the bound holds in the ∞-norm, take

v = -η(t, x) sgn(w),   where η ≥ ρ/(1 - κ0)

There is a restriction on κ0 < 1 but not on the growth of ρ. Alt 1 and Alt 2 coincide for single-input systems.

Note: v appears at the same place as δ due to the matching condition.

Example: Matched uncertainty

ẋ = u + φ(x) δ(t)

Note: the control laws are discontinuous functions of x (risk of chattering).
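Alternative 2 can be illustrated on a scalar sketch (assumed setup, not from the lecture: nominal law φ(x) = -x with V = x²/2, so w = x; matched disturbance δ(t) = 0.5 sin 5t, hence ρ = 0.5, κ0 = 0; gain η = 0.6 ≥ ρ/(1 - κ0)):

```python
import math

x, t, dt = 1.0, 0.0, 1e-3
eta = 0.6
for _ in range(int(10.0/dt)):
    delta = 0.5*math.sin(5*t)            # matched disturbance, |delta| <= rho = 0.5
    w = x                                # w = dV/dx * G with V = x^2/2, G = 1
    v = -eta*math.copysign(1.0, w) if w != 0.0 else 0.0
    u = -x + v                           # u = phi(x) + v
    x = x + dt*(u + delta)
    t += dt
print(x)  # driven to a small chattering band around 0
```

The sign function makes the control discontinuous, and in the simulation x indeed ends up chattering in a band whose width scales with the integration step — the numerical counterpart of the chattering remark above.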

Example cont.

Example: Exponentially decaying disturbance δ(t) = δ(0) e^{-kt}, φ(x) = x², linear feedback u = -cx:

ẋ = -cx + δ(0) e^{-kt} x²

Similar to the peaking problem from last lecture: finite escape of the solution to infinity if δ(0) x(0) > c + k. We want to guarantee that x(t) stays bounded for all initial values x(0) and all bounded disturbances δ(t). How to proceed?

Nonlinear damping

Modify the control law in the previous example as:

u = -cx - s(x) x

where the term -s(x) x will be denoted nonlinear damping.

Use the Lyapunov function candidate V = x²/2:

V̇ = x u + x φ(x) δ = -c x² - x² s(x) + x φ(x) δ

Choose s(x) = κ φ²(x) to complete the squares:

V̇ = -c x² - κ x² φ²(x) + x φ(x) δ
  = -c x² - κ [ x φ(x) - δ/(2κ) ]² + δ²/(4κ)
  ≤ -c x² + δ²/(4κ)

Note! V̇ is negative whenever |x(t)| > |δ|/(2 sqrt(κ c)). Can show that x(t) converges to the set

R = { x : |x(t)| ≤ ||δ||_∞ / (2 sqrt(κ c)) }

i.e., x(t) stays bounded for all bounded disturbances.

Remark: The nonlinear damping -κ x φ²(x) renders the system input-to-state stable (ISS) with respect to the disturbance δ.
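The contrast between the linear and the damped control law can be seen numerically (a sketch; the parameters c = k = κ = 1, δ(0) = 4, x(0) = 1 are chosen so that δ(0) x(0) > c + k and the linear law escapes in finite time):

```python
import math

def simulate(control, x0=1.0, dt=1e-4, T=10.0, blowup=1e6):
    # forward-Euler integration of xdot = control(x) + phi(x)*delta(t), phi(x) = x^2
    x, t = x0, 0.0
    peak = abs(x)
    for _ in range(int(T/dt)):
        delta = 4.0*math.exp(-t)              # decaying disturbance
        x = x + dt*(control(x) + x*x*delta)
        t += dt
        peak = max(peak, abs(x))
        if abs(x) > blowup:
            return None, peak                 # finite escape detected
    return x, peak

x_lin, _ = simulate(lambda x: -x)              # u = -c x
x_damp, peak = simulate(lambda x: -x - x**5)   # u = -c x - kappa*phi(x)^2*x
print(x_lin, x_damp, peak)  # escape for the linear law, bounded with damping
```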

Young's inequality

Let p > 1, q > 1 s.t. (p - 1)(q - 1) = 1. Then for all ε > 0 and all (x, y) ∈ R²

x y ≤ (ε^p / p) |x|^p + 1/(q ε^q) |y|^q

Standard case (p = q = 2, ε²/2 = κ):

x y ≤ κ x² + y²/(4κ)

Our example:

x φ(x) δ(t) ≤ κ x² φ²(x) + δ²(t)/(4κ)

Backstepping idea

Problem: Given a CLF for the system

ẋ = f(x, u)

find one for the extended system

ẋ = f(x, y)
ẏ = h(x, y) + u

Idea: Use y to control the first system. Use u for the second. Note: potential for recursivity.

Backstepping

Let V_x be a CLF for the system ẋ = f(x) + g(x)u with corresponding asymptotically stabilizing control law u = φ(x). Then V(x, y) = V_x(x) + [y - φ(x)]²/2 is a CLF for the system

ẋ = f(x) + g(x) y
ẏ = h(x, y) + u

with corresponding control law

u = (∂φ/∂x)[f(x) + g(x) y] - (∂V_x/∂x) g(x) - h(x, y) - [y - φ(x)]

Proof.

V̇ = (∂V_x/∂x)(f + g y) + (y - φ)[-(∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) + (y - φ)[(∂V_x/∂x) g - (∂φ/∂x)(f + g y) + h + u]
  = (∂V_x/∂x)(f + g φ) - (y - φ)² < 0

Backstepping Example

For the system

ẋ = x² + y
ẏ = u

we can choose V_x(x) = x² and φ(x) = -x² - x to get the control law

u = φ'(x)(x² + y) - h(x, y) - [y - φ(x)]
  = -(2x + 1)(x² + y) - x² - x - y

with Lyapunov function

V(x, y) = V_x(x) + [y - φ(x)]²/2 = x² + (y + x² + x)²/2

Example again (step by step)

ẋ1 = x1² + x2
ẋ2 = u(x)   (3)

Find u(x) which stabilizes (3). Idea: try first to stabilize the x1-system with x2, and then stabilize the whole system with u.

We know that if x2 = -x1² - x1 then x1 → 0 asymptotically (exponentially) as t → ∞. We can't expect to realize x2 = φ1(x1) exactly, but we can always try to make the error tend to 0. Introduce the error states

z1 = x1
z2 = x2 - φ1(x1),   where φ1(x1) = -x1² - x1   (4)

Then

ż1 = ẋ1 = x1² + x2 = z1² + z2 + φ1(z1) = -z1 + z2
ż2 = ẋ2 - φ̇1 = u(x) - (∂φ1/∂x1) ż1 = u(x) + (2z1 + 1)(-z1 + z2)

Start with a Lyapunov function for the first subsystem (z1-dynamics):

V1 = z1²/2 ≥ 0
V̇1 = z1 ż1 = -z1² + z1 z2

Note: If z2 = 0 we would achieve V̇1 = -z1² ≤ 0 with φ1(x1).

Now look at the augmented Lyapunov fcn for the error system:

V2 = V1 + z2²/2
V̇2 = V̇1 + z2 ż2 = -z1² + z2 [ z1 + u + (2z1 + 1)(-z1 + z2) ]

so if

u = -(2z1 + 1)(-z1 + z2) - z1 - z2

then V̇2 = -z1² - z2², and (z1, z2) → 0 asymptotically (exponentially), hence (x1, x2) → 0 asymptotically.

As z1 = x1 and z2 = x2 - φ1 = x2 + x1² + x1, we can express u as a (nonlinear) state feedback function of x1 and x2.

Backward propagation of desired control signal

(Figure: if we could use x2 as control signal, we would like to assign it to φ(x1) to stabilize the x1-dynamics. Move the control backwards through the integrator: z2 = x2 - φ, and the derivative dφ/dt appears in the control. Note the change of coordinates!)

Adaptive Backstepping

System:

ẋ1 = x2 + θ φ(x1)
ẋ2 = x3
ẋ3 = u(t)   (5)

where φ is a known function of x1 and θ is an unknown parameter. Introduce new (error) coordinates

z1(t) = x1(t)
z2(t) = x2(t) - α1(z1, θ̂)   (6)

where α1 is used as a control to stabilize the z1-system w.r.t. a certain Lyapunov function.

(Back-)Step 1:

Lyapunov function: V1 = z1²/2 + θ̃²/(2γ), where θ̃ = θ - θ̂ is the parameter error.

ż1 = z2 + α1(z1, θ̂) + θ φ(z1)
V̇1 = z1 ż1 - θ̃ (dθ̂/dt)/γ = z1 [z2 + α1 + θ̂ φ] + θ̃ [z1 φ - (dθ̂/dt)/γ]

Choose α1 = -z1 - θ̂ φ and the tuning function τ1 = z1 φ:

V̇1 = -z1² + z1 z2 + θ̃ [τ1 - (dθ̂/dt)/γ]

Note: If we used dθ̂/dt = γ τ1 as update law and if z2 = 0, then V̇1 = -z1² ≤ 0.

Step 2: Introduce z3 = x3 - α2(z1, z2, θ̂) and use α2 as a control to stabilize the (z1, z2)-system.

Augmented Lyapunov function: V2 = V1 + z2²/2

ż2 = ẋ2 - α̇1 = z3 + α2 - (∂α1/∂z1)(x2 + θ φ) - (∂α1/∂θ̂)(dθ̂/dt)

V̇2 = V̇1 + z2 ż2
   = -z1² + z2 [ z1 + z3 + α2 - (∂α1/∂z1)(x2 + θ̂ φ) - (∂α1/∂θ̂)(dθ̂/dt) ] + θ̃ [τ2 - (dθ̂/dt)/γ]

with the tuning function τ2 = τ1 - (∂α1/∂z1) z2 φ. Choose

α2 = -z2 - z1 + (∂α1/∂z1)(x2 + θ̂ φ) + (∂α1/∂θ̂) γ τ2

which gives

V̇2 = -z1² - z2² + z2 z3 + θ̃ [τ2 - (dθ̂/dt)/γ] + z2 (∂α1/∂θ̂)(γ τ2 - dθ̂/dt)

Note: If z3 = 0 and we used dθ̂/dt = γ τ2 as update law, we would get V̇2 = -z1² - z2² ≤ 0.

Step 3:

ż3 = ẋ3 - α̇2 = u - (∂α2/∂z1)(x2 + θ φ) - (∂α2/∂z2) ẋ2-terms - (∂α2/∂θ̂)(dθ̂/dt)

Augmented Lyapunov function: V3 = V2 + z3²/2 = zᵀz/2 + θ̃²/(2γ)

We now want to choose u = u(z1, z2, z3, θ̂) and the final update law dθ̂/dt = γ τ3, with tuning function τ3 = τ2 - (∂α2/∂z1) z3 φ, such that all θ̃-terms cancel and the remaining cross terms are absorbed:

V̇3 = -z1² - z2² - z3²

puh...

Crucial: the mismatch term (∂α1/∂θ̂)(γ τ2 - γ τ3) is a known function of the states, so it can be compensated in u.

We are almost there. Closed-loop system:

ż = [ -1  1  0 ; -1  -1  1+σ ; 0  -(1+σ)  -1 ] z + [ φ ; -(∂α1/∂z1) φ ; -(∂α2/∂z1) φ ] θ̃,
dθ̂/dt = γ τ3

where the system matrix is the sum of the diagonal part -I and a skew-symmetric part, and σ denotes the known coupling generated by the θ̂-dependence of α1.

Finally: V̇3 = -zᵀz ≤ 0 gives GS of z = 0, θ̂ converges, and x → 0 (by LaSalle's theorem).

Observer backstepping

Observer backstepping is based on the following steps:

1. A (nonlinear) observer is designed which provides (exponentially) convergent estimates.
2. Backstepping is applied to a system where the states have been replaced by their estimates.

The observation errors are regarded as (bounded) disturbances and handled by nonlinear damping.

Backstepping applies to systems in strict-feedback form:

ẋ1 = f1(x1) + x2
ẋ2 = f2(x1, x2) + x3
...
ẋn = fn(x1, x2, ..., x_{n-1}, xn) + u

Compare with strict-feedforward systems:

ẋ1 = x2 + f1(x2, x3, ..., xn, u)
ẋ2 = x3 + f2(x3, ..., xn, u)
...
ẋ_{n-1} = xn + f_{n-1}(xn, u)
ẋn = u

Nonlinear Control Lecture 8

Optimal and inverse(!) optimal design; saturated control and feedforwarding.

Outline

- HJB
- Inverse optimal control
- Stabilization with saturations
- Integrator forwarding
- Relations between the concepts
- Conclusions

Optimality

Two main alternatives:

- Pontryagin's Maximum Principle (necessary conditions)
- Hamilton-Jacobi-Bellman (dynamic programming) (sufficient conditions)

Consider the system

ẋ = f(x) + g(x) u

Find u = u*(x) such that

(i) u* achieves asymptotic stability of the origin x = 0
(ii) u* minimizes the cost functional

V = ∫_0^∞ ( l(x) + uᵀ R(x) u ) dt   (1)

where l(x) ≥ 0 and R(x) > 0 ∀x. For a given feedback u(x) the value of V depends on the initial state x(0): V(x(0)), or simply V(x).

Theorem (Optimality and Stability)
Suppose there exists a C¹ function V(x) ≥ 0 which satisfies the Hamilton-Jacobi-Bellman equation

l(x) + L_f V(x) - (1/4) L_g V(x) R^{-1}(x) (L_g V(x))ᵀ = 0,   V(0) = 0   (2)

such that the feedback control

u*(x) = -(1/2) R^{-1} (L_g V(x))ᵀ

achieves asymptotic stability of the origin x = 0. Then u*(x) is the optimal stabilizing control which minimizes the cost (1).

Example: Linear system

ẋ = Ax + Bu

Cost function:

V = ∫_0^∞ ( xᵀCᵀCx + uᵀRu ) dt,   R > 0

Riccati equation:

PA + AᵀP - PBR^{-1}BᵀP + CᵀC = 0   (3)

If (A, B) is controllable and (A, C) observable, then (3) has a unique solution P = Pᵀ > 0 such that the optimal cost is V = xᵀPx and u*(x) = -R^{-1}BᵀPx is the optimal stabilizing control.

5-min exercise: Consider the system and the cost functional

ẋ = x² + u,   V = ∫_0^∞ ( x² + u² ) dt

What is the optimal stabilizing control?


HJB:

x² + (∂V/∂x) x² - (1/4)(∂V/∂x)² = 0,   V(0) = 0

Solving the quadratic for ∂V/∂x:

∂V/∂x = 2x² + 2x sqrt(x² + 1)

V(x) = (2/3) x³ + (2/3)(x² + 1)^{3/2} + C,   C = -2/3 so that V(0) = 0   (4)

u*(x) = -(1/2) ∂V/∂x = -x² - x sqrt(x² + 1)

Remark: We have chosen the positive solution in (4) so that V(x) ≥ 0.

Remark: If (A, B) is stabilizable and (A, C) detectable, then P is positive semi-definite.

Example (non-detectability in cost)

System:

ẋ = x + u

Cost functional:

V = ∫_0^∞ u² dt   (5)

Riccati eq:

2P - P² = 0,   P = 0 or P = 2

Corresponding HJB:

x (∂V/∂x) - (1/4)(∂V/∂x)² = 0,   V(0) = 0
⟹ V = 0 or V = 2x²
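The HJB solution of the 5-min exercise can be checked numerically (a sketch; the sample points, tolerance and horizon are arbitrary choices): the value function satisfies the HJB equation identically, and the resulting feedback stabilizes ẋ = x² + u.

```python
import math

def Vp(x):
    # dV/dx for V(x) = (2/3)x^3 + (2/3)(x^2+1)^{3/2} - 2/3
    return 2*x*x + 2*x*math.sqrt(x*x + 1)

def hjb_residual(x):
    # x^2 + V'(x) x^2 - (1/4) V'(x)^2, should vanish identically
    return x*x + Vp(x)*x*x - 0.25*Vp(x)**2

residual = max(abs(hjb_residual(x)) for x in [-2.0, -0.5, 0.3, 1.7])

# closed loop xdot = x^2 + u*(x) with u*(x) = -(1/2) V'(x)
x, dt = 2.0, 1e-3
for _ in range(5000):            # 5 time units of forward-Euler
    u = -0.5*Vp(x)
    x = x + dt*(x*x + u)
print(residual, x)  # residual ~ 0, state decays toward 0
```

Note that the optimal law u* = -x² - x sqrt(x² + 1) does cancel the drift x², but adds the robust damping term -x sqrt(x² + 1) on top of it.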

Inverse optimality

A stabilizing control law u(x) solves an inverse optimal problem for the system

ẋ = f(x) + g(x) u

if it can be written as

u(x) = -k(x)/2 = -(1/2) R^{-1}(x) (L_g V(x))ᵀ,   R(x) > 0

where V(x) ≥ 0 and

V̇ = L_f V + L_g V u = L_f V - (1/2)(L_g V) k(x) ≤ 0

Then V(x) is the solution of the HJB equation

l(x) + L_f V - (1/4)(L_g V) R^{-1} (L_g V)ᵀ = 0

for some state cost l(x) ≥ 0.

The underlying idea of formulating an inverse optimal problem is to get some help to avoid non-robust cancellations and to gain some stability margins.

Example: Non-robust cancellation

Consider the system

ẋ = x² + u

and the control law

u_n = -x² - x  ⟹  ẋ = -x

However, if there is some small perturbation in the gain, u = (1 + ε) u_n, we get

ẋ = -(1 + ε) x - ε x²

This system may have finite escape time solutions. How does u* from the previous example behave?

Damping Control / Jurdjevic-Quinn

Consider the system

ẋ = f(x) + g(x) u

Assume that the drift part of the system is stable, i.e.,

ẋ = f(x),   f(0) = 0

and that we know a function V(x) such that L_f V ≤ 0 for all x. How to make it asymptotically stable (robustly)?

To add more damping to the system, to render it asymptotically stable, the following suggestion was made by Jurdjevic-Quinn (1978):

V̇ = L_f V + L_g V u ≤ L_g V u

Choose

u = -(L_g V)ᵀ

It also solves the global optimization problem for the cost functional

∫_0^∞ ( l(x) + uᵀu ) dt

for the state cost function

l(x) = -2 L_f V + (L_g V)(L_g V)ᵀ ≥ 0

(with optimal value 2V(x)).

Connection to passivity: the system

ẋ = f(x) + g(x) u
y = (L_g V)ᵀ(x)

is passive with V(x) as storage function if L_f V ≤ 0, as

V̇ = L_f V + L_g V u ≤ yᵀu

The feedback law u = -y guarantees GAS if the system is ZSD (zero-state detectable). Note: this may be a conservative choice, as it does not fully exploit the possibility to choose V(x) for the whole system (only for ẋ = f(x)).

Systems with saturations of control signal

Problem: the system runs in open loop when in saturation.

- Anti-windup designs from FRT075
- Consider Lyapunov function candidates of type V = log(1 + x²) (see Lecture 1)
- Saturated controls [Sussmann, Yang and Sontag]
- Cascaded saturations [Teel et al.]

Feedforward systems
Particular form of cascaded systems 1991 A. Teel ... Sussman, Sontag, Yang ... Saberi, Lin 1996 Mazenc, Praly 1996 Sepulchre, Jankovic, Kokotovic

Strict-feedforward systems
x1 = x2 + f 1 ( x2 , x3 , . . . , xn , u) x2 = x3 + f 2 ( x3 , . . . , xn , u)

. . .

xn1 = xn + f n1 ( xn , u) xn = u

1/s

1/s

1/s

f n1

f n2

f1

Strict-feedforward systems are, in general, not feedback linearizable! Compare with e.g. Strict-feedback systems
x1 = x2 + f 1 ( x1 ) x2 = x3 + f 2 ( x1 , x2 )

(i.e neither exact linearization nor backstepping is applicable for stabilization) Restriction: Does not cover systems of the type
... ...
xk = x2 + ... k

. . .

xn = xn + f n( x1 , x2 , . . . xn1 ) + u

i.e. dont have to worry about


nite escape-time

Sussmann and Yang (1991): there does not exist any (simple) saturated linear feedback law which globally stabilizes an integrator chain of order ≥ 3.

[Block diagram: integrator chain 1/s–1/s–1/s with nested saturations σ and gains l1, l2, l3]

Definition: σ is a linear saturation for (L, M) if σ is continuous and nondecreasing, and
- σ(s) = s when |s| ≤ L
- |σ(s)| ≤ M for all s ∈ R

Teel's idea: use nested saturations

u = −σn( hn(x) + σ(n−1)( h(n−1)(x) + ... + σ1(h1(x)) ... ))

Theorem (Teel): For an integrator chain of any order and for any set {(Li, Mi)}ᵢ₌₁ⁿ where Li ≤ Mi and Mi < ½ L(i+1), there exist linear functions {hi} such that for all linear saturations {σi} the bounded control

u = −σn( hn(x) + σ(n−1)( h(n−1)(x) + ... + σ1(h1(x)) ... ))

results in global asymptotic stability of the closed-loop system.
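A simulation sketch of the nested-saturation control for n = 3, written directly in the transformed coordinates y1' = y2 + y3 + u, y2' = y3 + u, y3' = u used in the proof sketch. The saturation levels below are illustrative choices satisfying Mi < ½L(i+1) (with Li = Mi), not values from the slides.

```python
# Nested-saturation control of a triple integrator, in the transformed
# coordinates y1' = y2 + y3 + u, y2' = y3 + u, y3' = u.
def sat(s, m):
    """Linear saturation with L = M = m."""
    return max(-m, min(m, s))

def simulate(T=60.0, dt=1e-3):
    y1, y2, y3 = 5.0, 5.0, 5.0
    M1, M2, M3 = 1.0, 2.5, 6.0      # illustrative: Mi < L(i+1)/2
    for _ in range(int(T / dt)):
        u = -sat(y3 + sat(y2 + sat(y1, M1), M2), M3)
        y1 += dt * (y2 + y3 + u)
        y2 += dt * (y3 + u)
        y3 += dt * u
    return y1, y2, y3

print(simulate())  # all three states converge toward 0
```

During the transient y1 first grows (the "huge overshoots" remarked on below the proof), then each saturation enters its linear region in turn and the decay becomes exponential.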

Sketch of proof (n = 3, Li = Mi): Consider a state transformation y = Tx which transforms the integrator chain into y' = Ay + Bu where

A = [0 1 1; 0 0 1; 0 0 0],   B = [1; 1; 1]

The control law

u = −σ3( y3 + σ2( y2 + σ1(y1) ) )

gives the closed-loop system

y1' = y2 + y3 − σ3( y3 + σ2( y2 + σ1(y1) ) )
y2' = y3 − σ3( y3 + σ2( y2 + σ1(y1) ) )
y3' = −σ3( y3 + σ2( y2 + σ1(y1) ) )

How does y3 evolve? Let V3 = y3². Then

V3' = −2 y3 σ3( y3 + σ2( y2 + σ1(y1) ) )

Since |σ2(·)| ≤ M2 < ½ L3, we have V3' < 0 for all |y3| > ½ L3, so |y3| will decrease. In finite time |y3| < ½ L3, and σ3 will then operate in its linear region. (Note: no finite escape time for the other states.) Then

y2' = y3 − ( y3 + σ2( y2 + σ1(y1) ) ) = −σ2( y2 + σ1(y1) )

The same kind of argument shows that after a finite time the closed loop will look like

y1' = −y1
y2' = −y1 − y2
y3' = −y1 − y2 − y3

i.e. after a finite time the dynamics are exponentially stable.

Remark: Although we have found a globally stabilizing, bounded control law u, the internal states may have huge overshoots!

Integrator forwarding

Strict-feedforward systems:

x1' = x2 + f1(x2, x3, ..., xn, u)
...
x(n−1)' = xn + f(n−1)(xn, u)
xn' = u

Due to the lack of feedback connections, solutions always exist and are of the form

xn(t) = xn(0) + ∫₀ᵗ u(s) ds
x(n−1)(t) = x(n−1)(0) + ∫₀ᵗ ( xn(s) + f(n−1)(xn(s), u(s)) ) ds
...

The recursive design:

1. Begin by stabilizing the last subsystem xn' = un. Use e.g. Vn = xn² and un = −xn.

2. Augment the control law u(n−1)(x(n−1), xn) = un(xn) + v(n−1) such that u(n−1) stabilizes the cascade

x(n−1)' = xn + f(n−1)(xn, u)
xn' = u(n−1)

...

k. Augment the control law uk(xk, X(k+1)) = u(k+1)(X(k+1)) + vk such that uk stabilizes the cascade

X(k+1)' = F(k+1)(..., uk)
xk' = x(k+1) + fk(...)

How is the cascade (in step k) stabilized? We have a cascade of one GAS/LES system and an ISS system with a linear growth condition. There exists a Lyapunov function for the (sub-)system:

Vk = V(k+1) + ½ xk² + ∫₀^∞ xk(s) fk( X(k+1)(s) ) ds

It can be shown that uk = −L_g Vk gives Vk' < 0, and finally u1 minimizes a cost functional of the form

J = ∫₀^∞ ( l(x) + u² ) ds

The cross-term (the integral in Vk) can only be evaluated exactly for very simple systems. In other cases it has to be evaluated numerically or approximated by e.g. a Taylor series.

Connection to Teel's results: To avoid computation of the integrals we can use nested low-gain (saturated) control. This has also been shown to give GAS/LES for the integrator chain, but only LAS/LES for the general strict-feedforward system. (Compare with the high-gain designs in backstepping.)

Recall: a feedback passivation design can be used for a system if
1. a relative degree condition is satisfied, and
2. the system is weakly minimum phase.

Backstepping is a recursive way of finding a relative degree one output. Integrator forwarding allows us to stabilize weakly non-minimum-phase systems.

Conclusions
- Global/semiglobal stabilization of strict-feedforward systems (no exact linearization possible)
- Tracking results reported
- Relaxes the weakly-minimum-phase condition
- Integrator forwarding: approximations are necessary to simplify the controller

Nonlinear Control Theory
Lecture 9

Today: Two Time-Scales
- Periodic Perturbations
- Averaging
- Singular Perturbations

Khalil: Chapters 10.3–10.6 and 11 (Chapters 9 and 10 in earlier editions)

Averaging:

x' = ε f(t, x, ε)

The state x moves slowly compared to the time variations of f.

Singular perturbations:

x' = f(t, x, z, ε)
ε z' = g(t, x, z, ε)

The state x moves slowly compared to z.

Example: Vibrating Pendulum I

[Figure: pendulum of length l, mass m, with suspension point vibrating vertically as a sin ωt; bob position x = l sin θ, y = l cos θ − a sin ωt]

Newton's law in the tangential direction (incl. viscous friction in the joint):

m( l θ'' − a ω² sin ωt sin θ ) = −m g sin θ − k( l θ' + a ω cos ωt sin θ )

Let ε = a/l, τ = ωt, Ω = ω₀ l/(a ω) with ω₀² = g/l, and β = k l/(m a ω). With the change of variables

x1 = θ,   x2 = (1/ε) dθ/dτ + cos τ sin θ

we get

f1(τ, x) = x2 − cos τ sin x1
f2(τ, x) = −β x2 − Ω² sin x1 + x2 cos τ cos x1 − cos²τ sin x1 cos x1

and the state equation is given by

dx/dτ = ε f(τ, x)

Averaging Assumptions

Consider the system

x' = ε f(t, x, ε),   x(0) = x0

where f and its derivatives up to second order are continuous and bounded. Let xav be defined by the equations

xav' = ε f_av(xav),   xav(0) = x0
f_av(x) = lim_{T→∞} (1/T) ∫₀ᵀ f(τ, x, 0) dτ

Example: Vibrating Pendulum II

The averaged system

xav' = ε f_av(xav),   f_av(x) = ( x2 ;  −β x2 − Ω² sin x1 − ¼ sin 2x1 )

has, at the upright equilibrium (x1, x2) = (π, 0), the Jacobian

∂f_av/∂x (π, 0) = [ 0  1 ;  Ω² − ½  −β ]

which is Hurwitz for 0 < Ω < 1/√2, β > 0. Can this be used for rigorous conclusions?
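A quick numerical check of the Hurwitz claim for the averaged pendulum Jacobian (its eigenvalues solve λ² + βλ − (Ω² − ½) = 0); a sketch using only the standard library:

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def jacobian_eigs(Omega, beta):
    # Jacobian of the averaged pendulum dynamics at (x1, x2) = (pi, 0)
    return eig2(0.0, 1.0, Omega**2 - 0.5, -beta)

print(jacobian_eigs(0.5, 0.1))  # Omega < 1/sqrt(2): both real parts < 0
print(jacobian_eigs(1.0, 0.1))  # Omega > 1/sqrt(2): one unstable eigenvalue
```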

Periodic Averaging Theorem

Let f be periodic in t with period T, and let x = 0 be an exponentially stable equilibrium of xav' = ε f_av(xav). If |x0| is sufficiently small, then

x(t, ε) = xav(t, ε) + O(ε)   for all t ∈ [0, ∞)

Furthermore, for sufficiently small ε > 0, the equation x' = ε f(t, x, ε) has a unique exponentially stable periodic solution of period T in an O(ε) neighborhood of x = 0.

General Averaging Theorem

Under certain conditions on the convergence of

f_av(x) = lim_{T→∞} (1/T) ∫₀ᵀ f(τ, x, 0) dτ

there exists a C > 0 such that for sufficiently small ε > 0

| x(t, ε) − xav(t, ε) | < C ε

for all t ∈ [0, 1/ε].

Example: Vibrating Pendulum III

The Jacobian of the averaged system is Hurwitz for 0 < Ω < 1/√2, β > 0. For a/l sufficiently small and

ω > √2 ω₀ l / a

the unstable pendulum equilibrium (θ, θ') = (π, 0) is therefore stabilized by the vibrations.

Periodic Perturbation Theorem

Consider

x' = f(x) + ε g(t, x, ε)

where f, g, ∂f/∂x and ∂g/∂x are continuous and bounded. Let g be periodic in t with period T, and let x = 0 be an exponentially stable equilibrium point for ε = 0. Then, for sufficiently small ε > 0, there is a unique periodic solution

x̄(t, ε) = O(ε)

which is exponentially stable.

Proof ideas of the Periodic Perturbation Theorem

Let φ(t, x0, ε) be the solution of

x' = f(x) + ε g(t, x, ε),   x(0) = x0

Exponential stability of x = 0 for ε = 0, plus bounds on the magnitude of g, shows existence of a bounded solution x̄ for small ε > 0. The implicit function theorem shows solvability of

x̄ = φ(T, x̄, ε)

for small ε. This gives periodicity of x̄. Put z = x − x̄. Exponential stability of x = 0 for ε = 0 gives exponential stability of z = 0 for small ε > 0.

Proof idea of the Averaging Theorem

For small ε > 0 define u and y by

u(t, x) = ∫₀ᵗ [ f(τ, x, 0) − f_av(x) ] dτ
x = y + ε u(t, y)

Then

x' = [ I + ε ∂u/∂y ] y' + ε (∂u/∂t)(t, y) = ε f(t, y + εu, ε)

which can be rewritten as

y' = ε f_av(y) + ε² p(t, y, ε)

With s = εt,

dy/ds = f_av(y) + ε q(s/ε, y, ε)

which has a unique and exponentially stable periodic solution for small ε. This gives the desired result.

Application: Second Order Oscillators

For the second order system

y'' + ω² y = ε g(y, y')      (1)

introduce polar-type coordinates

y = r sin φ,   y'/ω = r cos φ

This gives

r' = (ε/ω) g(r sin φ, ω r cos φ) cos φ
φ' = ω − (ε/(ω r)) g(r sin φ, ω r cos φ) sin φ

so, with f(φ, r, ε) defined accordingly, (1) is equivalent to

dr/dφ = ε f(φ, r, ε),   f(φ, r, 0) = (1/ω²) g(r sin φ, ω r cos φ) cos φ

and the periodic averaging theorem may be applied, with

f_av(r) = (1/2π) ∫₀^{2π} f(φ, r, 0) dφ = (1/(2π ω²)) ∫₀^{2π} g(r sin φ, ω r cos φ) cos φ dφ

Illustration: Van der Pol Oscillator I

[Figure: circuit with capacitor C and inductor L in parallel with a resistive element; voltage V, currents iC, iL, i]

The circuit equations are

iC + iL + i = 0,   C dV/dt = iC,   L diL/dt = V,   i = h(V)

which give

CL d²V/dt² + L h'(V) dV/dt + V = 0

For an ordinary resistance we get a damped oscillation. The negative-resistance characteristic

i = h(V) = −( V − V³/3 )

gives the van der Pol equation.
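As a numerical sanity check of the averaging machinery (a sketch, not from the slides): simulating the normalized van der Pol equation y'' + y = ε y'(1 − y²) from a small initial condition, the amplitude r = √(y² + y'²) should settle near the averaged prediction r = 2.

```python
# Van der Pol oscillator y'' + y = eps*y'*(1 - y^2), simulated with RK4.
# Averaging predicts a stable limit cycle of radius approximately 2.
def vdp_amplitude(eps=0.1, T=200.0, dt=0.01):
    def f(y, v):
        return v, -y + eps * v * (1.0 - y * y)
    y, v = 0.3, 0.0                      # start well inside the limit cycle
    for _ in range(int(T / dt)):
        k1 = f(y, v)
        k2 = f(y + dt/2 * k1[0], v + dt/2 * k1[1])
        k3 = f(y + dt/2 * k2[0], v + dt/2 * k2[1])
        k4 = f(y + dt * k3[0], v + dt * k3[1])
        y += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return (y * y + v * v) ** 0.5

print(vdp_amplitude())  # close to 2
```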

Example: Van der Pol Oscillator II

The vacuum tube circuit equation (a.k.a. the van der Pol equation)

y'' + y = ε y'(1 − y²)

gives

f_av(r) = (1/2π) ∫₀^{2π} r cos φ (1 − r² sin²φ) cos φ dφ = r/2 − r³/8

The averaged system

dr/dφ = ε ( r/2 − r³/8 )

has equilibria r = 0 and r = 2, with

d f_av/dr |_{r=2} = −1 < 0

so small ε > 0 gives a stable limit cycle, close to circular with radius r = 2.

Singular Perturbations

Consider equations of the form

x' = f(t, x, z, ε),   x(0) = x0
ε z' = g(t, x, z, ε),   z(0) = z0

For small ε > 0, the first equation describes the slow dynamics, while the second equation defines the fast dynamics. The main idea will be to approximate x with the solution of the reduced problem

x' = f(t, x, h(t, x), 0),   x(0) = x0

where h(t, x) is defined by the equation

0 = g(t, x, h(t, x), 0)
Example: DC Motor I
R i L

Linear Singular Perturbation Theorem


Let the matrix A22 have nonzero eigenvalues 1 , . . . , m and let 1 , . . . , n be the eigenvalues of A0 = A11 A12 A1 A21 . 22 Then, > 0 0 > 0 such that the eigenvalues 1 , . . . , n+m of the matrix
A11 A21 / A12 A22 / i = 1, . . . , n

u EM K = k

satisfy the bounds


i i i n i
< , < ,
i = n + 1, . . . , n + m

d J dt di L dt

= =

ki

k Ri + u

for 0 < < 0 .

With x = , z = i and = Lk2 / J R2 we get


x z

= z = x z + u

Proof
A22 is invertible, so it follows from the implicit function theorem that for sufciently small the Riccati equation
A11 P + A12 P A21 P P A22
=
0

Example: DC Motor II
In the example
x z

= z = x z + u
A12 A22 0 1

has a unique solution P =

A12 A1 22

+ O ( ).

we have
A11 A21

The desired result now follows from the similarity transformation


I 0

= =
Lk2 J R2

1 1 1

P
I I 0

A11 A21 / I

A12 A22 / A11 A21 / 0

I 0

P
I

1 A11 A12 A22 A21

so stability of the DC motor model for small

= =

A22 / + A21 P

A12 + A11 P

A0 + O ( )

is veried.

A22 / + O (1)

See Khalil for example where reduced system is stable but fast dynamics unstable.
5
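A numerical illustration of the eigenvalue bounds for the DC-motor matrices above, with the illustrative choice ε = 0.01 (standard-library 2×2 eigenvalue computation):

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial,
    sorted by magnitude (slow eigenvalue first)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted([(tr + disc) / 2, (tr - disc) / 2], key=abs)

eps = 0.01
# DC motor: A11 = 0, A12 = 1, A21 = -1, A22 = -1, so A0 = -1.
slow, fast = eig2(0.0, 1.0, -1.0 / eps, -1.0 / eps)
print(slow)  # close to alpha_1 = A0 = -1
print(fast)  # close to beta_1/eps = -1/eps
```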

The Boundary-Layer System

For fixed (t, x), the boundary-layer system (in the fast time scale τ = t/ε)

dy/dτ = g(t, x, y + h(t, x), 0),   y(0) = z0 − h(0, x0)

describes the fast dynamics, disregarding variations in the slow variables t, x.

Tikhonov's Theorem

Consider a singular perturbation problem with f, g, h, ∂g/∂x ∈ C¹. Assume that the reduced problem has a unique bounded solution x̄ on [0, T] and that the equilibrium y = 0 of the boundary-layer problem is exponentially stable, uniformly in (t, x). Then

x(t, ε) = x̄(t) + O(ε)
z(t, ε) = h(t, x̄(t)) + ȳ(t/ε) + O(ε)

uniformly for t ∈ [0, T], where ȳ solves the boundary-layer system.
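A sketch illustrating Tikhonov's approximation on the scaled DC-motor model x' = z, εz' = −x − z + u, with a unit step u = 1 as an assumed test input: the reduced problem is x̄' = −x̄ + u (from z = u − x̄), and the error in x should be O(ε).

```python
# Compare the full singularly perturbed DC-motor model with its reduced
# model for a unit step input; the error in the slow state is O(eps).
def step_response_error(eps=0.01, T=2.0, dt=1e-4):
    u = 1.0
    x, z = 0.0, 0.0           # full model states
    xr = 0.0                  # reduced model state
    err = 0.0
    for _ in range(int(T / dt)):
        x += dt * z
        z += dt * (-x - z + u) / eps
        xr += dt * (-xr + u)  # reduced: 0 = -x - z + u  =>  z = u - x
        err = max(err, abs(x - xr))
    return err

print(step_response_error())  # small, on the order of eps
```

Note the step size dt must be well below ε for the explicit Euler steps of the fast equation to remain stable.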

Example: High Gain Feedback

[Block diagram: plant x_p' = A x_p + B u_p, y = C x_p, with inner feedback gain k2 and a high-gain integrator k1/s driven by u − u_p − k2 C x_p]

Closed-loop system (with ε = 1/k1):

x_p' = A x_p + B u_p
ε u_p' = u − u_p − k2 C x_p

Reduced model (ε → 0, i.e. u_p = u − k2 C x_p):

x_p' = (A − B k2 C) x_p + B u

Proof ideas of Tikhonov's Theorem

Replace f and g with F and G that are identical for |x| < r, but better behaved for large x.
- For small ε, the G-equation is close to the G(·, ·, ·, 0)-equation.
- A y-bound for the G(·, ·, ·, 0)-equation gives a y-bound for the G-equation, which in turn gives x, y-bounds for the F, G-equations.
- For small ε > 0, the x, y-solutions of the F, G-equations satisfy |x| < r. Hence, they also solve the f, g-equations.

The Slow Manifold

For small ε > 0, the system

x' = f(x, z)
ε z' = g(x, z)

has an invariant manifold

z = H(x, ε)

It can often be computed approximately by a Taylor expansion

H(x, ε) = H0(x) + ε H1(x) + ε² H2(x) + ...

where H0 satisfies

0 = g(x, H0(x))

Example:

x' = x + z
ε z' = arctan(1 − z − x)

[Phase plots for ε = 0.001 and ε = 0.1; the slow manifold is z = 1 − x]

The Fast Manifold

All solutions of

x' = f(x, z)
ε z' = g(x, z)

approach the slow manifold (here z = h(x) = 1 − x) along a fast manifold approximately satisfying

x ≈ constant
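A short simulation sketch of the example above: starting off the manifold, z snaps onto the slow manifold z = 1 − x almost instantly, while x barely moves during the fast transient.

```python
import math

# x' = x + z,  eps*z' = atan(1 - z - x): the fast variable z converges
# quickly to the slow manifold z = 1 - x while x is nearly frozen.
def slow_manifold_gap(eps=1e-3, T=0.2, dt=1e-5):
    x, z = 0.0, -2.0                     # start far from the manifold
    for _ in range(int(T / dt)):
        x += dt * (x + z)
        z += dt * math.atan(1.0 - z - x) / eps
    return x, z, abs(z - (1.0 - x))      # gap to the slow manifold

x, z, gap = slow_manifold_gap()
print(gap)  # close to 0
```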

Example: Van der Pol Oscillator III

Consider the van der Pol equation in the form

ε d²v/ds² − (1 − v²) dv/ds + v = 0,   0 < ε ≪ 1

With the change of variables

z = v,   x = v − v³/3 − ε dv/ds

we get the system

x' = z
ε z' = −x + z − z³/3

with slow manifold

0 = −x + z − z³/3

Illustration: Van der Pol III

[Phase plots of x' = z, z' = (1/ε)(−x + z − z³/3) for ε = 0.001 and ε = 0.1]

The red dotted curve is the slow manifold x = z − z³/3. All solutions approach this along a fast manifold approximately satisfying

x ≈ constant
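A simulation sketch of the resulting relaxation oscillation (parameters are illustrative): the trajectory hugs the slow manifold x = z − z³/3 except during the fast jumps at the fold points z = ±1.

```python
# Van der Pol in Lienard coordinates: x' = z, eps*z' = -x + z - z^3/3.
# For small eps the solution is a relaxation oscillation: slow drift
# along the manifold x = z - z^3/3, fast jumps at the folds z = +/-1.
def relaxation_oscillation(eps=0.01, T=20.0, dt=2e-5):
    x, z = 0.0, 2.0
    z_min, z_max, sign_changes = z, z, 0
    for _ in range(int(T / dt)):
        x += dt * z
        z_new = z + dt * (-x + z - z**3 / 3.0) / eps
        if z_new * z < 0:
            sign_changes += 1            # z crossed zero (a fast jump)
        z = z_new
        z_min, z_max = min(z_min, z), max(z_max, z)
    return z_min, z_max, sign_changes

print(relaxation_oscillation())  # z oscillates roughly between -2 and 2
```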
