Stability theory
Lyapunov theory revisited: exponential stability, quadratic stability, time-varying systems, invariant sets, the center manifold theorem
  dx/dt = x^2,  x(0) = x0
[Figure: solutions x(t) of dx/dt = x^2 for different x0, plotted against time t]
A step response will reveal a transient which grows in amplitude for faster closed-loop poles s = -ω0; see the figure on the next slide.
Step responses for the system in Eq. (1) for ω0 = 1, 2, and 5. Faster poles give shorter settling times, but the transients grow significantly in amplitude, so-called peaking.
What bandwidth constraints does a non-minimum-phase zero impose for linear systems? See e.g. [Freudenberg and Looze, 1985; Åström, 1997; Goodwin and Seron, 1997].
Lyapunov formalized the idea: if the total energy is dissipated, the system must be stable. Main benefit: by looking at an energy-like function (a so-called Lyapunov function), we might conclude that a system is stable or asymptotically stable without solving the nonlinear differential equation.
Trades the difficulty of solving the differential equation for the difficulty of finding a Lyapunov function. Lyapunov's master's thesis: On the stability of ellipsoidal forms of equilibrium of rotating fluids, St. Petersburg University, 1884. Doctoral thesis: The general problem of the stability of motion, 1892.
Stability Definitions
An equilibrium point x = 0 of dx/dt = f(x) is
- locally stable, if for every R > 0 there exists r > 0 such that
    ||x(0)|| < r  implies  ||x(t)|| < R,  t >= 0
- locally asymptotically stable, if locally stable and in addition
    ||x(0)|| < r  implies  lim_{t->inf} x(t) = 0
Lyapunov Functions (≈ Energy Functions)
[Figure: a Lyapunov function V plotted over the (x1, x2)-plane, with level curves V = constant]
Contour plot V(x) = C.
Find a bounded control signal u = sat(v) which globally stabilizes the system
  dx1/dt = x1 x2
  dx2/dt = u        (2)
  u = sat(v(x1, x2))
What is the problem with using the standard candidate V1 = x1^2/2 + x2^2/2?
[Figure: phase portrait in the (x1, x2)-plane]
The following statements are equivalent for dx/dt = Ax:
- x = 0 is asymptotically stable
- Re λi(A) < 0 for all eigenvalues λi of A
- given any Q = Q^T > 0 there exists P = P^T > 0, which is the unique solution of the Lyapunov equation
    A^T P + P A = -Q
Proof of "Re λi(A) < 0 implies existence of P for every Q": choose
    P = ∫_0^inf e^{A^T t} Q e^{A t} dt
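The Lyapunov equation can be solved numerically by vectorization, since vec(A^T P + P A) = (I ⊗ A^T + A^T ⊗ I) vec(P). A minimal sketch (the Hurwitz matrix A below is an assumed example, not from the slides):

```python
import numpy as np

# Solve A'P + PA = -Q by vectorization and check P > 0.
# A is an assumed example (eigenvalues -1 and -2, hence Hurwitz).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

n = A.shape[0]
I = np.eye(n)
# vec(A'P) = (I kron A') vec(P),  vec(PA) = (A' kron I) vec(P)
K = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")

residual = A.T @ P + P @ A + Q             # should be ~ 0
eigP = np.linalg.eigvalsh((P + P.T) / 2)   # should be > 0 since A is Hurwitz
print(np.max(np.abs(residual)), eigP.min())
```

Positive definiteness of P confirms the theorem's third statement for this A.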
Exponential Stability
The equilibrium point x = 0 of the system dx/dt = f(x) is said to be exponentially stable if there exist c, k, λ > 0 such that for every t >= t0 >= 0 and ||x(t0)|| <= c one has
  ||x(t)|| <= k ||x(t0)|| e^{-λ(t - t0)}
It is globally exponentially stable if the condition holds for arbitrary initial states.
Theorem: Suppose there exist V and constants k1, k2, k3, a > 0 such that
  k1 ||x||^a <= V(x) <= k2 ||x||^a
  dV/dt = ∂V/∂t + (∂V/∂x) f(t, x) <= -k3 ||x||^a
for t >= 0, ||x|| <= r. Then x = 0 is exponentially stable. For exponential stability, β(||x||, t) = .... (fill in). If r is arbitrary, then x = 0 is globally exponentially stable.
Proof
  dV/dt <= -k3 ||x||^a <= -(k3/k2) V  implies  V(t) <= V(t0) e^{-(k3/k2)(t - t0)}
so
  ||x(t)|| <= (k2/k1)^{1/a} ||x(t0)|| e^{-(k3/(a k2))(t - t0)}
Quadratic Stability
Suppose there exists a P > 0 such that
  0 > (A + B Δi C)^T P + P (A + B Δi C)  for all i
Aircraft Example
[Block diagram: reference r and measurements -nz, q fed through gains K1 and K2, with error signals e1, e2 and saturations, into the aircraft dynamics]
Switched system (Branicky, 1993):
  dx/dt = A1 x  if x1 < 0
  dx/dt = A2 x  if x1 >= 0
If the Lyapunov inequalities A_i^T P + P A_i < 0 can be solved simultaneously for the matrix P, then stability is proved by the Lyapunov function x^T P x.
Matlab Session
Copy /home/kursolin/matlab/lmiinit.m to the current directory or download and install the IQCbeta toolbox from http://www.control.lth.se/cykao.
>> lmiinit
>> A1=[-5 -4;-1 -2];
>> A2=[-2 -1; 2 -2];
>> p=symmetric(2);
>> p>0;
>> A1'*p+p*A1<0;
>> A2'*p+p*A2<0;
>> lmi_mincx_tbx
>> P=value(p)
P =
    0.0749   -0.0257
   -0.0257    0.1580
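The returned P can be sanity-checked directly against the two Lyapunov inequalities (a quick numerical check, here sketched in Python with the values printed above):

```python
import numpy as np

# Verify that the P found by the LMI solver satisfies P > 0 and
# A_i' P + P A_i < 0 for both switched-system matrices.
A1 = np.array([[-5.0, -4.0], [-1.0, -2.0]])
A2 = np.array([[-2.0, -1.0], [2.0, -2.0]])
P = np.array([[0.0749, -0.0257], [-0.0257, 0.1580]])

assert np.all(np.linalg.eigvalsh(P) > 0)          # P > 0
for A in (A1, A2):
    L = A.T @ P + P @ A
    assert np.all(np.linalg.eigvalsh(L) < 0)      # A'P + PA < 0
print("P is a common Lyapunov matrix for A1 and A2")
```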
implies that x(t) - x*(t) decays exponentially for all x in a neighborhood of x*.
Time-varying systems
Note that solutions of autonomous systems depend only on (t - t0), while solutions of non-autonomous systems may depend on t0 and t independently.
An equilibrium x = 0 is
- locally stable at t0, if for every R > 0 there exists r = r(R, t0) > 0 such that
    ||x(t0)|| < r  implies  ||x(t)|| < R,  t >= t0
- locally asymptotically stable at t0, if locally stable and lim_{t->inf} x(t) = 0.
A second-order autonomous system can never have non-simply intersecting trajectories (a limit cycle can never be a figure eight). A system is said to be uniformly stable if r can be chosen independently of t0, i.e., r = r(R).
Example of non-uniform convergence [Slotine, p. 105 / Khalil, p. 134]: consider
  dx/dt = -x/(1 + t),  t >= t0
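This example has the closed-form solution x(t) = x(t0)(1 + t0)/(1 + t), which tends to 0 but ever more slowly for larger t0. A quick numerical check of the closed form (t0 = 0 and x0 = 1 are assumed values):

```python
# Euler simulation of dx/dt = -x/(1+t) compared with the closed-form
# solution x(t) = x(t0)(1 + t0)/(1 + t); t0 = 0, x0 = 1 assumed.
t0, x0, T, dt = 0.0, 1.0, 9.0, 1e-4
t, x = t0, x0
while t < T:
    x += dt * (-x / (1.0 + t))
    t += dt
exact = x0 * (1.0 + t0) / (1.0 + T)   # = 0.1 for these values
print(x, exact)
```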
Proof
Given the second condition, let V(x, t) = x^T P(t) x. Then
  dV/dt = ∂V/∂t + (∂V/∂x) A x = x^T (dP/dt + A^T P + P A) x <= -x^T x
so exponential stability follows from the Lyapunov theorem. Conversely, given exponential stability, let Φ(t, s) be the transition matrix for the system. Then the matrix
  P(t) = ∫_t^inf Φ(s, t)^T Φ(s, t) ds
is well-defined and satisfies
  -I = dP/dt + A(t)^T P(t) + P(t) A(t)
for all t.
Theorem: Suppose
  dx/dt = f(x, t) = A(t) x(t) + o(x, t)
has an equilibrium x = 0, where ∂²f/∂x² is continuous and uniformly bounded as a function of t. Then the equilibrium is exponentially stable provided that this is true for the linearization dx/dt = A(t) x(t), where
  A(t) = (∂f/∂x)(0, t)
Proof idea: let z(t) = x(t) - x*(t). Then z = 0 is an equilibrium and the system
  dz/dt = f(z + x*) - f(x*)
has the required form; the desired implication follows by the time-varying version of Lyapunov's first theorem.
  dV/dt = x^T P [A x + g(x)] + [x^T A^T + g(x)^T] P x
        = x^T (P A + A^T P) x + 2 x^T P g(x)
        = -x^T Q x + 2 x^T P g(x)
where -x^T Q x <= -λmin(Q) ||x||^2 and, for every ε > 0, there exists r > 0 such that
  ||g(x)|| < ε ||x||  for  ||x|| < r
so dV/dt < 0 near the origin.
Lyapunov's first (indirect) method: if dx/dt = A x is the linearization, then
(1) if Re λi(A) < 0 for all i, then x = 0 is locally asymptotically stable;
(2) if there exists i such that Re λi(A) > 0, then x = 0 is unstable.
Center Manifold Theorem
Assume z = 0 is an equilibrium point. For every k >= 2 there exists a C^k mapping π such that π(0) = 0 and dπ(0) = 0, and the surface z2 = π(z1) is invariant (a center manifold).
Usage
1) Determine z2 = π(z1), at least approximately.
2) The local stability for the entire system can be proved to be the same as for the dynamics restricted to a center manifold:
  dz1/dt = A0 z1 + f0(z1, π(z1))
Invariant Sets
Definition: A set M is called invariant if, for the system dx/dt = f(x), x(0) ∈ M implies that x(t) ∈ M for all t >= 0.
Theorem: Let V : R^n -> R be a radially unbounded C^1 function such that dV/dt <= 0 for x ∈ Ω. Let E be the set of points in Ω where dV/dt = 0. If M is the largest invariant set in E, then every solution with x(0) ∈ Ω approaches M as t -> inf (proof on p. 73).
Åström, K. J. (1997): Limitations on control system performance. In Proceedings of the European Control Conference (ECC'97), vol. 1. Brussels, Belgium. TU-E-E4.
Freudenberg, J. and D. Looze (1985): Right half plane poles and zeros and design tradeoffs in feedback systems. IEEE Transactions on Automatic Control, 30, pp. 555-565.
Goodwin, G. and M. Seron (1997): Fundamental design tradeoffs in filtering, prediction, and control. IEEE Transactions on Automatic Control, 42:9, pp. 1240-1251.
Khalil, H. (1996): Nonlinear Systems, 2nd edition. Prentice Hall.
Rouche, N., P. Habets, and M. Laloy (1977): Stability Theory by Liapunov's Direct Method. Springer-Verlag, New York, Berlin.
[Figure: a trajectory from x(0) approaching the invariant set M]
Theorem: Suppose
  dV/dt = ∂V/∂t + (∂V/∂x) f(t, x) <= -W3(x)
where W3 is a continuous positive semi-definite function. Then solutions of dx/dt = f(t, x) starting in x(t0) ∈ {x ∈ Br : ...} are bounded and satisfy
  W3(x(t)) -> 0 as t -> inf
(Barbalat: if lim_{t->inf} ∫_0^t φ(τ) dτ exists and φ is uniformly continuous, then φ(t) -> 0 as t -> inf.) If in addition W3 is positive definite, then x -> 0.
Dissipativity
Consider a nonlinear system
  dx/dt = f(x(t), u(t), t),  y(t) = h(x(t), u(t), t)
The system is said to be dissipative with respect to the supply rate r if there exists a storage function S(t, x) >= 0 such that for all t0 <= t1 and inputs u on [t0, t1]
  S(t1, x(t1)) <= S(t0, x(t0)) + ∫_{t0}^{t1} r(u(t), y(t)) dt
Example: Capacitor
A capacitor
  i = C du/dt
is dissipative with respect to the supply rate r(t) = i(t) u(t). A storage function is
  S(u) = C u^2 / 2
In fact
  C u(t0)^2 / 2 + ∫_{t0}^{t1} i(t) u(t) dt = C u(t1)^2 / 2
Example: Inductor
An inductor
  u = L di/dt
is dissipative with respect to the supply rate r(t) = i(t) u(t). A storage function is
  S(i) = L i^2 / 2
In fact
  L i(t0)^2 / 2 + ∫_{t0}^{t1} i(t) u(t) dt = L i(t1)^2 / 2
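The capacitor energy balance can be checked numerically; below, the voltage trajectory u(t) = sin t and C = 2 are assumed example values:

```python
import math

# Numerical check of the capacitor identity
#   C u(t0)^2/2 + integral of i*u dt = C u(t1)^2/2,  with i = C du/dt.
C, t0, t1, N = 2.0, 0.0, 1.0, 100000
u = lambda t: math.sin(t)
i = lambda t: C * math.cos(t)          # i = C du/dt for this u

dt = (t1 - t0) / N
supplied = sum(i(t0 + (k + 0.5) * dt) * u(t0 + (k + 0.5) * dt)
               for k in range(N)) * dt          # midpoint rule
stored = C * u(t1) ** 2 / 2 - C * u(t0) ** 2 / 2
print(supplied, stored)   # the two numbers agree
```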
Memoryless Nonlinearity
The memoryless nonlinearity w = ψ(v, t) with sector condition
  α <= ψ(v, t)/v <= β,  t >= 0, v ≠ 0
Interconnections: if two dissipative subsystems with supply rates r1, r2 and storage functions S1, S2 are interconnected, then for α1, α2 >= 0 the interconnection is dissipative with storage function α1 S1(x1) + α2 S2(x2) and the corresponding supply rate
  α1 r1(h2(x2), x1) + α2 r2(h1(x1), x2)
Absolute Stability
Kalman-Yakubovich-Popov Lemma, Circle Criterion, Popov Criterion [Khalil pp. 237-268 + extra material on the K-Y-P Lemma]
Let ψ(t, y) ∈ R be piecewise continuous in t ∈ [0, inf) and locally Lipschitz in y ∈ R. Assume that ψ satisfies the global sector condition
  α <= ψ(t, y)/y <= β,  t >= 0, y ≠ 0    (1)
[Block diagram: linear system (A, B, C) in feedback with the nonlinearity ψ; the sector is bounded by the lines of slope α and β, giving the points -1/α and -1/β on the real axis]
The feedback system
  dx/dt = A x - B ψ(t, y),  y = C x    (2)
with sector condition (1) is called absolutely stable if the origin is globally uniformly asymptotically stable for any nonlinearity satisfying (1).
Circle criterion: The system (2) with sector condition (1) is absolutely stable if the origin is asymptotically stable for ψ(t, y) = α y and the Nyquist plot of C(jωI - A)^{-1} B + D does not intersect the closed disc with diameter [-1/α, -1/β].
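The disc condition is easy to check on a frequency grid. A minimal sketch with G(s) = 1/(s + 1) and sector [α, β] = [0.5, 1] as assumed example values (the disc with diameter [-2, -1] has center -1.5 and radius 0.5):

```python
import numpy as np

# Check that the Nyquist plot of G(jw) = 1/(jw + 1) stays outside the
# closed disc with diameter [-1/alpha, -1/beta]; all values assumed.
alpha, beta = 0.5, 1.0
center = -(1 / alpha + 1 / beta) / 2      # -1.5
radius = (1 / alpha - 1 / beta) / 2       # 0.5

w = np.concatenate([[0.0], np.logspace(-3, 4, 4000)])
G = 1.0 / (1j * w + 1.0)
dist = np.abs(G - center)
print(dist.min())   # larger than the radius: the disc condition holds
```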
Loop Transformation
[Block diagrams: the loop with G(s) and nonlinearity ψ is transformed into an equivalent loop with a transformed linear part and a nonlinearity in the sector [0, inf)]
After the transformation the sector condition reads
  ψ(t, y)/y >= 0,  t >= 0, y ≠ 0
and absolute stability follows if
  M(jω) + M(jω)* > 0,  ω ∈ [0, inf)
Common choices: K = α or K = (α + β)/2.
Note: For SISO systems this means that the Nyquist curve lies strictly in the right half plane.
Proof
Set V(x) = x^T P x, P = P^T > 0. Then
  dV/dt = 2 x^T P dx/dt = 2 x^T P (A x + B ψ)
By the Kalman-Yakubovich-Popov Lemma, the inequality M(jω) + M(jω)* > 0 guarantees that P can be chosen to make the upper bound for dV/dt strictly negative for all (x, ψ) ≠ (0, 0). Stability follows by Lyapunov's theorem.
Mini-version à la [Slotine & Li]: dx/dt = A x + b u with A Hurwitz (i.e., Re{λi(A)} < 0) and y = c x. The following statements are equivalent:
- Re{c(jωI - A)^{-1} b} > 0 for all ω ∈ [0, inf)
- there exists P = P^T > 0 in R^{n×n} such that A^T P + P A < 0 and P b = c^T
Lemma 1 (K-Y-P): For M(s) = C(sI - A)^{-1} B + D, with sI - A nonsingular for some s ∈ C, the frequency-domain condition
(i) M(jω) + M(jω)* > 0 for all ω
is equivalent to
(ii) the existence of a symmetric matrix P satisfying the corresponding linear matrix inequality built from [A B; C D] and [P 0; 0 I].
Proof sketch: One shows that y = λ z for some λ ∈ C ∪ {inf}; the equality y*z + z*y = 0 gives that λ is purely imaginary, so only the imaginary axis needs to be checked. (i) and (ii) can be connected by the following sequence of equivalent statements:
(a) a quadratic form in w is negative for all w ≠ 0 satisfying the frequency constraint M w = jω M w, ω ∈ R;
(b) P ∩ Θ = ∅, where Θ is the set of quadratic-form values over |w| = 1 and P = {(r, 0) : r > 0};
(c) (conv Θ) ∩ P = ∅,
so there is a hyperplane in R^n separating Θ from P, and its normal provides the matrix P in (ii).
Time-invariant Nonlinearity
[Figure: Popov plot, ω Im G(iω) versus Re G(iω)]
Suppose that ψ : R -> R is Lipschitz and 0 <= ψ(v)/v <= k. Let G(iω) = C(iωI - A)^{-1} B with A Hurwitz. If there exists η ∈ R such that
  Re[(1 + iωη) G(iω)] > -1/k    (3)
then the differential equation
  dx/dt = A x(t) - B ψ(C x(t))
is exponentially stable (the Popov criterion).
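The Popov inequality can be checked on a frequency grid. For instance (an assumed example, not from the slides), G(s) = 1/(s^2 + s + 1) satisfies it with η = 1 for every k > 0, since Re[(1 + iω)G(iω)] = 1/((1 - ω^2)^2 + ω^2) > 0:

```python
import numpy as np

# Check Re[(1 + i*w*eta) G(iw)] > -1/k on a grid for
# G(s) = 1/(s^2 + s + 1), eta = 1, k = 10 (assumed example values).
eta, k = 1.0, 10.0
w = np.linspace(0.0, 100.0, 200001)
G = 1.0 / ((1j * w) ** 2 + 1j * w + 1.0)
popov = np.real((1.0 + 1j * w * eta) * G)
print(popov.min())   # stays above -1/k (in fact above 0)
```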
Popov proof I
Set
  V(x) = x^T P x + 2η ∫_0^{Cx} ψ(σ) dσ
where P is an n×n positive definite matrix. For η >= 0, V > 0 is obvious for x ≠ 0; stability for linear ψ gives V >= 0, so V must be positive also for η < 0.
Popov proof II
  dV/dt = 2 (x^T P + η ψ C) dx/dt = 2 (x^T P + η ψ C) (A x - B ψ)
By the K-Y-P Lemma there is a P that makes the upper bound for dV/dt strictly negative for all (x, ψ) ≠ (0, 0).
Proof techniques
(ii) implies (i): simple, evaluate the matrix inequality along [(iωI - A)^{-1} B; I] for ω ∈ [0, inf).
For (i) implies (ii):
- spectral factorization (Anderson)
- linear quadratic optimization (Yakubovich)
- find (1, P) as a separating hyperplane between the half-line {(r, 0) : r > 0} and the set of values of the quadratic form built from x*(A x + B u) + (A x + B u)*x over (x, u) ∈ C^{n+m}
Why is it enough to check the imaginary axis?
Note: It is the y-dynamics which relate to the zero linearization, NOT the z-dynamics, in the notation of Khalil.
Proof Outline
For any continuously differentiable function h_k, globally bounded together with its first partial derivative and with h_k(0) = 0, h_k'(0) = 0, let h_{k+1} be defined by the equations
  dy/dt = A0 y + f0(y, h_k(y))
  dz/dt = A z + f(y, h_k(y))
  h_{k+1}(y) = z
Under suitable assumptions, it can be verified that this defines h_{k+1} uniquely. Furthermore, the sequence {h_i} is contractive in the norm sup_y |h_i(y)| and the limit h satisfies the conditions for a center manifold.
Usage
1) Determine z = h(y), at least approximately (e.g., do a series expansion and identify coefficients).
2) The local stability for the entire system can be proved to be the same as for the dynamics restricted to a center manifold:
  dy/dt = A0 y + f0(y, h(y))
When using a series expansion h(y) = c2 y^2 + c3 y^3 + ..., you need to continue (with respect to the order of the terms) until the local behavior has been determined (low-order terms dominate locally). Identify the coefficients from the boundary condition [Khalil (8.8), (8.11)]
  (∂h/∂y)[A0 y + f0(y, h(y))] - A h(y) - f(y, h(y)) = 0
Example
  dy/dt = y z
  dz/dt = -z + a y^2 + b y z
Nonuniqueness
The center manifold need not be unique:
  dy/dt = -y^3
  dz/dt = -z
z = h(y) gives h'(y)(-y^3) = -h(y), so
  h(y) = C e^{-1/(2 y^2)}
is a center manifold for every constant C.
Material
- Lecture notes
- A. Megretski and A. Rantzer, "System Analysis via Integral Quadratic Constraints", IEEE Transactions on Automatic Control, 42:6, 1997
- U. Jönsson, Lecture Notes on Integral Quadratic Constraints
- User's guide to the Matlab toolbox
Preview Example
A linear system of equations with an uncertain coefficient.
Example
  [w1; w2] = C(iωI - A)^{-1} B [v1; v2]
Question: For what values of δ is the system C(sI - A)^{-1} B in feedback with the uncertainty δ stable? Note: there may be large differences if we consider complex or real uncertainties δ. For example, with the structure
  D = diag(δ1, δ2),  k ∈ [-1, 1]
Reformulated Definition
The following two conditions are equivalent:
(i) 0 ≠ det[I - Δ M(iω)] for all Δ ∈ D and ω ∈ R
(ii) μ_D(M(iω)) < 1 for ω ∈ R
Bounds on μ
If D consists of full complex matrices, then μ_D(M) = σ̄(M), where σ̄(M) is the largest singular value of M, i.e. the square root of the largest eigenvalue of the matrix M*M.
If D consists of perturbations of the form δI with δ ∈ [-1, 1], then μ_D(M) is equal to the magnitude ρ_R(M) of the largest real eigenvalue of M (the real spectral radius). In general
  ρ_R(M) <= μ_D(M) <= σ̄(M)
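The ordering between the two bounds can be illustrated numerically (M below is an assumed example matrix):

```python
import numpy as np

# Illustrate rho_R(M) <= sigma_max(M) for an example matrix M.
M = np.array([[0.0, 2.0], [0.5, 0.0]])

sigma_max = np.linalg.svd(M, compute_uv=False).max()
eigs = np.linalg.eigvals(M)
real_eigs = eigs[np.abs(eigs.imag) < 1e-12].real
rho_real = np.abs(real_eigs).max() if real_eigs.size else 0.0

print(rho_real, sigma_max)   # the real spectral radius is the smaller bound
```

For this M the eigenvalues are ±1 while the largest singular value is 2, so any μ_D(M) is squeezed between 1 and 2.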
Computation of μ
Define
  U_D = { U ∈ D : U*U = I }
  D_D = { D = D* ∈ C^{n×n} : DΔ = ΔD for all Δ ∈ D }
  G_D = { G = G* ∈ C^{n×n} : GΔ = ΔG for all Δ ∈ D }
Then
  sup_{U ∈ U_D} ρ_R(U M) <= μ_D(M) <= inf_{D ∈ D_D} σ̄(D M D^{-1})
and a sharper upper bound α(D, G) is obtained by optimizing jointly over D ∈ D_D and G ∈ G_D.
S-procedure
Suppose the quadratic constraints
  [x; y]^T M1 [x; y] >= 0  and  [x; y]^T M2 [x; y] >= 0
hold. If there exist τ1, τ2 >= 0 such that
  M0 + τ1 M1 + τ2 M2 <= 0
then [x; y]^T M0 [x; y] <= 0 whenever the constraints hold.
S-procedure in general
Let σ0, σ1, ..., σm be quadratic functions, and suppose there exists some f with σ1(f) > 0, ..., σm(f) > 0. Then the following statements are equivalent for m = 1, and the second implies the first in general:
- σ0(f) <= 0 for all f such that σ1(f) >= 0, ..., σm(f) >= 0
- there exist τk >= 0 such that σ0(f) + Σ_k τk σk(f) <= 0 for all f
IQC definition
The causal bounded operator Δ on L2^m is said to satisfy the IQC defined by the matrix function Π(iω) if
  ∫ [v̂(iω); (Δv)^(iω)]* Π(iω) [v̂(iω); (Δv)^(iω)] dω >= 0
for all v ∈ L2.
Examples:
- Gain bound ||Δ|| <= 1: by Parseval,
    ∫_0^inf (|v|^2 - |Δv|^2) dt = (1/2π) ∫ [v̂; (Δv)^]* [I 0; 0 -I] [v̂; (Δv)^] dω >= 0
  so Π(iω) = [I 0; 0 -I].
- Passivity: ∫_0^inf v(t)(Δv)(t) dt >= 0 corresponds to Π(iω) = [0 I; I 0].
Exercise
Show that a nonlinearity satisfying the sector condition
  α y^2 <= ψ(t, y) y <= β y^2
satisfies the IQC defined by
  Π(jω) = [ -2αβ   α+β ]
          [  α+β   -2  ]
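The pointwise inequality behind this IQC is (ψ - αy)(βy - ψ) >= 0, which is exactly half the quadratic form above. A quick numerical check (α = 0.2 and β = 2 are assumed values):

```python
import numpy as np

# For every y and every phi = lam*y with lam in [alpha, beta], the form
# [y, phi] Pi [y, phi]^T = 2 (phi - alpha*y)(beta*y - phi) is >= 0.
alpha, beta = 0.2, 2.0
Pi = np.array([[-2 * alpha * beta, alpha + beta],
               [alpha + beta, -2.0]])

worst = min(
    np.array([y, lam * y]) @ Pi @ np.array([y, lam * y])
    for y in np.linspace(-3, 3, 61)
    for lam in np.linspace(alpha, beta, 41)
)
print(worst)   # >= 0 (up to rounding)
```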
Zames/Falb's property
For a monotone nonlinearity v -> f one has
  ∫_0^inf v(t)[f(t) + (h * f)(t)] dt >= 0  when  ∫_0^inf |h(t)| dt <= 1
which in the frequency domain is the IQC on (v, f) with multiplier
  Π(iω) = [ 0           1 + H(iω)* ]
          [ 1 + H(iω)   0          ]
Note: satisfying a quadratic inequality for every frequency implies satisfying the integral quadratic inequality.
IQC examples (structure of Δ versus multiplier Π(iω)):
- Δ passive:  Π = [0 I; I 0]
- ||Δ(iω)|| <= 1:  Π = [X(iω) 0; 0 -X(iω)],  X = X* >= 0
- δ(t) ∈ [-1, 1] (time-varying real scalar):  Π = [X Y; Y* -X],  X = X* >= 0, Y = -Y*
- Δ(v)(t) = sgn(v(t)):  Π = [0 1+H(iω)*; 1+H(iω) 0],  h ∈ L1
Well-posed Interconnection
  v = G w + f
  w = Δ(v) + e
The interconnection is said to be well-posed if the map (v, w) -> (e, f) has a causal inverse. It is called BIBO stable if the inverse is also bounded.
IQC Stability Theorem
Let G(s) be stable and proper and let Δ be causal. For all τ ∈ [0, 1], suppose the loop is well-posed and τΔ satisfies the IQC defined by Π(iω). If
  [G(iω); I]* Π(iω) [G(iω); I] < 0  for ω ∈ [0, inf]
then the feedback loop is stable.
- A stability theorem based on gain is recovered with Π = [I 0; 0 -I].
- A passivity-based stability theorem is recovered with Π = [0 I; I 0].
With a structured multiplier X(iω) = diag{x1(iω), ..., xm(iω)} > 0, feedback loop stability follows if there exists X(iω) > 0 with
  G(iω)* X(iω) G(iω) < X(iω),  ω ∈ [0, inf]
or equivalently, with D(iω)* D(iω) = X(iω),
  sup_ω σ̄( D(iω) G(iω) D(iω)^{-1} ) < 1
Proof idea: the IQC gives a uniform bound ||v|| <= c0 ||G|| ||...|| for v ∈ L2, τ ∈ [0, 1]. If (I - τ G Δ)^{-1} is bounded for some τ ∈ [0, 1], then the above inequality gives boundedness of (I - τ' G Δ)^{-1} for all τ' with |τ' - τ| c0 ||G|| < 1. Hence boundedness for τ = 0 gives boundedness for τ < (c0 ||G||)^{-1}; this, in turn, gives boundedness for τ < 2(c0 ||G||)^{-1}, and so on, until the whole interval [0, 1] is covered.
[Figure: simulated signals e(t) and y(t)]
A simulation model
[Simulink diagram: a Step input through gains (Gain = 10, Gain1 = 1, Gain2, K) and Sum blocks into the plant, with Mux blocks collecting the performance signals]
The text version (i.e., NOT the GUI) is strongly recommended by the IQCbeta author(s) in the present version!
Typical solver printout:
  scalar inputs: 5   states: 10   simple q-forms: 7
  LMI #1 size = 1  states: 0
  LMI #2 size = 1  states: 0
  LMI #3 size = 1  states: 0
  LMI #4 size = 1  states: 0
  LMI #5 size = 1  states: 0
Available IQC blocks include: white noise performance, sector+popov, polytope with restricted rate, diagonal structure, norm bounded, unknown constant, monotonic with restricted rate, sector, satint, LTI unmodeled, |D(t)| < k TV scalar, encapsulated deadzone, harmonic.
Dominant Harmonics
[Block diagram: the signal u feeding two copies of the system]
The time-domain constraint
  ∫_0^inf u(t) u(t - T) dt >= ... ∫_0^inf u(t) u(t) dt
expressed via Parseval in terms of (1 + ε) ∫ |û(iω)|^2 dω, corresponds to the multiplier
  π(iω) = 2 - e^{iωT} - e^{-iωT}
Example: r = 1 + 0.1 sin t
[Simulink diagram: PID and PI controllers with integrators 1/s]
[Figure: reference and output responses]
Incremental Stability
Suppose the nonlinearity satisfies, for some constant k,
  |Δ(v1) - Δ(v2)| <= k |v1 - v2|,  for all T > 0, v1, v2 ∈ L2
The feedback loop is called incrementally stable if there is a constant C such that any two solutions (e1, f1, v1, w1), (e2, f2, v2, w2) satisfy
  ||v1 - v2|| + ||w1 - w2|| <= C ||e1 - e2|| + C ||f1 - f2||
[Figure: Nyquist curves F(iω) and G(iω) for the PI-controlled loop]
References:
- A. Megretski and A. Rantzer, "System Analysis via Integral Quadratic Constraints", IEEE Transactions on Automatic Control, 42:6, 1997.
- L. Qiu, B. Bernhardsson, A. Rantzer, E. J. Davison, and P. M. Young, "A Formula for Computation of the Real Stability Radius", Automatica, vol. 31(6), pp. 879-890, 1995.
- U. Jönsson, Lecture Notes on Integral Quadratic Constraints.
- User's guide to the Matlab toolbox.
G(s) = s / (s^2 + K s + Ki)
State transformation
More difficult example, needing a state transformation:
  dx1/dt = a sin(x2)
  dx2/dt = -x1^2 + u
Here u does not enter the first equation, so first transform the state: put z1 = x1, z2 = a sin(x2), so that
  dz1/dt = z2
  dz2/dt = (-x1^2 + u) a cos(x2)
Then u = x1^2 + v/(a cos x2) gives dz2/dt = v; design a linear controller v = -l1 z1 - l2 z2, etc.
Exact Linearization
- Often useful in simple cases
- Important intuition may be lost
- Related to Lie brackets and flatness
Optimization over multipliers: find Q such that
  [T1 + T2 Q T3](iω) < 0,  for ω ∈ [0, inf]
In both cases the problem is non-convex and hard. Heuristic idea: iterate between the arguments.
Control Lyapunov functions: V is a CLF if for every x ≠ 0 there is a u such that
  (∂V/∂x) f_u(x) < 0    (Notation: L_{f_u} V(x) < 0)
Example
Check if V(x, y) = [x^2 + (y + x^2)^2] / 2 is a CLF for the system
  dx/dt = x y
  dy/dt = -y + u
Sontag's formula
If V is a CLF for the system dx/dt = f(x) + g(x) u, then a continuous asymptotically stabilizing feedback is defined by
  u(x) := 0  if L_g V(x) = 0
  u(x) := -( L_f V + sqrt( (L_f V)^2 + (L_g V)^4 ) ) / L_g V  if L_g V(x) ≠ 0
For the example:
  L_f V(x, y) = x^2 y + (y + x^2)(-y + 2 x^2 y)
  L_g V(x, y) = y + x^2
  L_g V(x, y) = 0  implies  y = -x^2  implies  L_f V(x, y) = -x^4 < 0  if (x, y) ≠ 0
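A sketch of Sontag's formula applied to this example, with a simple Euler simulation (the step size and initial state are assumed values):

```python
import math

# Sontag's formula for dx/dt = x*y, dy/dt = -y + u with
# V = (x^2 + (y + x^2)^2)/2; Euler simulation from (1, 1).
def control(x, y):
    LfV = x**2 * y + (y + x**2) * (-y + 2 * x**2 * y)
    LgV = y + x**2
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

x, y, dt = 1.0, 1.0, 1e-3
V0 = (x**2 + (y + x**2) ** 2) / 2
for _ in range(20000):                  # simulate 20 time units
    u = control(x, y)
    x, y = x + dt * x * y, y + dt * (-y + u)
V = (x**2 + (y + x**2) ** 2) / 2
print(V0, V)   # V decreases along the closed-loop trajectory
```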
Backstepping idea
Problem: the control u enters only the lower subsystem.
Backstepping: Let V_x be a CLF for the system dx/dt = f(x) + g(x) u with corresponding asymptotically stabilizing control law u = φ(x). Then V(x, y) = V_x(x) + [y - φ(x)]^2 / 2 is a CLF for the system
  dx/dt = f(x) + g(x) y
  dy/dt = h(x, y) + u
Idea: choose
  u = -h(x, y) + (∂φ/∂x)[f(x) + g(x) y] - (∂V_x/∂x) g(x) - [y - φ(x)]
Proof.
  dV/dt = (∂V_x/∂x)(f + g y) + (y - φ)[h + u - (∂φ/∂x)(f + g y)]
        = (∂V_x/∂x)(f + g φ) - [y - φ]^2 < 0
Backstepping Example
For the system
  dx/dt = x^2 + y
  dy/dt = u
with V_x = x^2/2 and φ(x) = -x^2 - x, one gets
  u = -(2x + 1)(x^2 + y) - x^2 - x - y
  V(x, y) = x^2/2 + (y + x^2 + x)^2 / 2
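The example controller can be sanity-checked in simulation (Euler integration; step size and initial state are assumed values):

```python
# Euler simulation of the backstepping example
#   dx/dt = x^2 + y, dy/dt = u,  u = -(2x+1)(x^2+y) - x^2 - x - y.
x, y, dt = 1.0, 1.0, 1e-3
for _ in range(15000):                  # 15 time units
    u = -(2 * x + 1) * (x**2 + y) - x**2 - x - y
    x, y = x + dt * (x**2 + y), y + dt * u
print(x, y)   # both approach 0
```

Along the closed loop, V = x^2/2 + (y + x^2 + x)^2/2 decays at least exponentially, so the state is tiny after 15 time units.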
Nonlinear Control
Linear case: controllability. The system
  dx/dt = A x + B u
is controllable if any initial state x(0) can be transferred to any final state x(T) in finite time (in particular x(0) -> 0 and 0 -> x(T)).
Reading: Khalil, Ch. 13.1-13.2 (intro to feedback linearization); Slotine and Li, pp. 229-236.
Lie Brackets
The Lie bracket between f(x) and g(x) is defined by
  [f, g] = (∂g/∂x) f - (∂f/∂x) g
Example: with f = (-x1, -1) and g = (cos x2, x1),
  ∂g/∂x = [0  -sin x2; 1  0],  ∂f/∂x = [-1  0; 0  0]
  [f, g] = (∂g/∂x) f - (∂f/∂x) g = (cos x2 + sin x2, -x1)
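The bracket formula is easy to check numerically with finite differences. The vector fields f and g below follow the example above, whose signs are partly reconstructed (so treat them as assumptions):

```python
import math

# Numerical Lie bracket [f,g] = (Dg) f - (Df) g via central differences.
def f(x):
    return [-x[0], -1.0]

def g(x):
    return [math.cos(x[1]), x[0]]

def jacobian(fun, x, h=1e-6):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = fun(xp), fun(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def bracket(x):
    Df, Dg, fx, gx = jacobian(f, x), jacobian(g, x), f(x), g(x)
    return [sum(Dg[i][j] * fx[j] - Df[i][j] * gx[j] for j in range(len(x)))
            for i in range(len(x))]

x = [0.7, 0.3]
b = bracket(x)
print(b)  # approx [cos(0.3) + sin(0.3), -0.7]
```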
Why interesting?
For a driftless system
  dx/dt = g1(x) u1 + g2(x) u2
the system is controllable if the Lie bracket tree has full rank (controllable = the states you can reach from x = 0 at fixed time T contain a ball around x = 0).
The Lie bracket tree:
  g1, g2
  [g1, g2]
  [g1, [g1, g2]], [g2, [g1, g2]]
  [g1, [g1, [g1, g2]]], [g2, [g1, [g1, g2]]], [g1, [g2, [g1, g2]]], [g2, [g2, [g1, g2]]]
  ...
[Figure: trajectory in the (x, y)-plane generated by switching between the inputs (1, 0) and (0, 1) on intervals such as t ∈ [ε, 2ε]]
Once More
The parking example: states (x1, x2) position and x3 heading angle,
  g1 = ( cos(x3), sin(x3), 0 )   (drive)
  g2 = ( 0, 0, 1 )               (turn)
  [g1, g2] = ( sin(x3), -cos(x3), 0 )
The bracket motion moves the car sideways.
More Information
More theory about Lie brackets:
- Nijmeijer, van der Schaft, Nonlinear Dynamical Control Systems, Springer Verlag.
- Isidori, Nonlinear Control Systems, Springer Verlag.
Interconnected systems: peaking; passivity and stability; relative degree and zero dynamics; exact linearization.
We can solve for z(t):
  z(t) = e^{-t} z(0) + ∫_0^t e^{-(t-τ)} y(τ) dτ
What happens for large z(0), and what if y(τ) = 0?
  G(s) = ω0^2 (s + 1) / (s^2 + 2 ω0 s + ω0^2)    (1)
[Figure: step response for ω0 = 1]
A step response will reveal a transient which grows in amplitude for faster closed-loop poles s = -ω0; see the figure on the next slide.
Step responses for the system in Eq. (1) for ω0 = 1, 2, and 5. Faster poles give shorter settling times, but the transients grow significantly in amplitude, so-called peaking.
Dissipativity
Consider a nonlinear system
  dx/dt = f(x(t), u(t), t)
  y(t) = h(x(t), u(t), t)
The system is dissipative if there exist a supply rate r(t) = r(u(t), y(t), t) and a storage function S(x) >= 0 such that for all x ∈ X and all T >= 0
  S(x(T)) - S(x(0)) <= ∫_0^T r(u, y) dt
Any storage increase in a passive system is due to external sources! There is a connection between passivity and Lyapunov stability (warning: passivity is an in/out relationship, so something more is needed). Use S(x) = x^2/2 as storage function.
Local passivity
Show that
  dx/dt = -(x^3 - k x) + u,  y = x
is locally passive: with S = x^2/2,
  dS/dt = -x^2 (x^2 - k) + u x <= u y  for  x^2 >= k
Mechanical example:
  G_xu(s) = 1 / (m s^2 + d s + k)
Show that the mapping input force u -> position x is NOT passive, while input force u -> velocity v is passive:
  dE/dt = u v - d v^2 <= u v
Assume that H1 and H2 are passive. Then the well-posed feedback interconnections in the figure below are also passive from r to y.
[Figure: parallel and feedback interconnections of H1 and H2]
A system is Input Feedback Passive (IFP) if it is dissipative with respect to r(u, y) = u^T y - k u^T u for some k.
Sector condition [α, β]:
  α u^2 <= u ψ(u) <= β u^2,  β >= α >= 0
Examples:
IFP(1):   dx/dt = u,  y = x + u
OFP(-0.3):  dx/dt = 0.3 x + u,  y = x
Take S = 0. Note! u = -k y where k = 0.3 is exactly the amount of feedback required to make the (right) system passive. Feedback connection may even out excess and shortage of passivity.
The loop with a nonlinearity ψ(t, y) satisfying
  ψ(t, y)/y >= 0,  t >= 0, y ≠ 0
is absolutely stable if
  M(jω) + M(jω)* > 0,  ω ∈ [0, inf)
Introduce zero-state detectability (compare linear detectability) to exclude degenerate cases and relate passivity to stability.
Note: For SISO systems this means that the Nyquist curve lies strictly in the right half plane.
Relative degree
A system's relative degree: how many times you need to differentiate the output signal before the input shows up.
For the system dx/dt = f(x) + g(x) u, y = h(x),    (2)
  dy/dt = (∂h/∂x) dx/dt = L_f h(x) + L_g h(x) u
where L_g h(x) = (∂h/∂x) g(x) = 0 if the relative degree d > 1. Repeating the differentiation,
  y^(k) = L_f^k h(x)  if k < d    (3)
  y^(d) = L_f^d h(x) + L_g L_f^{d-1} h(x) u
Using the same kind of coordinate transformations as for the feedback-linearizable systems above, we can introduce new state-space variables ξ, where the first d coordinates are chosen as
  ξ1 = h(x),  ξ2 = L_f h(x),  ...,  ξd = L_f^{d-1} h(x)    (4)
Under some conditions on involutivity, the Frobenius theorem guarantees the existence of another (n - d) functions to provide a local state transformation of full rank. Such a coordinate change transforms the system to the normal form
  dξ1/dt = ξ2
  ...
  dξ_{d-1}/dt = ξd
  dξd/dt = L_f^d h(ξ, z) + L_g L_f^{d-1} h(ξ, z) u
  y = ξ1
  dz/dt = Ψ(ξ, z)    (5)
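For a linear SISO system y = C x, dx/dt = A x + B u, the same definition gives y^(k) = C A^k x + C A^{k-1} B u, so the relative degree is the smallest d with C A^{d-1} B ≠ 0. A small sketch (A, B, C below are assumed example values):

```python
import numpy as np

# Relative degree of a linear SISO system: smallest d with C A^(d-1) B != 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def relative_degree(A, B, C, tol=1e-12):
    Ak = np.eye(A.shape[0])
    for d in range(1, A.shape[0] + 1):
        if abs((C @ Ak @ B).item()) > tol:
            return d
        Ak = Ak @ A
    return None   # no finite relative degree up to n

print(relative_degree(A, B, C))   # 2, since CB = 0 but CAB != 0
```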
Example
Consider a system with transfer function
  G(s) = (s - 1) / (s^2 + 2s + 1)    (6)
Forcing the output to zero (y ≡ 0) leaves the remaining dynamics
  dx2/dt = u = x2
an unstable system, corresponding to the zero s = 1 in the transfer function (6).
For general nonlinear systems, feedback linearization comprises
- state transformation
- inversion of nonlinearities
- linear feedback
[Block diagram: v -> u = ψ^{-1}(·) -> plant, with the state transformation x = T(z)]
Simple example
  d^2x/dt^2 = -(g/l) sin(x) + cos(x) u
Put
  u = ( v + (g/l) sin(x) ) / cos(x)
which gives
  d^2x/dt^2 = v
State transformation
More difficult example, where we need a state transformation:
  dx1/dt = a sin(x2)
  dx2/dt = -x1^2 + u
Feedback linearization is a nonlinear version of pole-zero cancellation; it cannot be used if the zero dynamics are unstable, i.e., for non-minimum-phase systems. With z1 = x1, z2 = a sin(x2):
  dz1/dt = z2
  dz2/dt = (-x1^2 + u) a cos(x2)
Matching uncertainties
  dx1/dt = x2    (9)
  ...
  dx_{n-1}/dt = xn
  dxn/dt = L_f^d h(x, z) + L_g L_f^{d-1} h(x, z) u
  y = x1
  dz/dt = Ψ(x, z)    (10)
An integrator chain plus nonlinearities (+ zero dynamics). Note that uncertainties due to parameters etc. are collected in L_f^d h(x, z) + L_g L_f^{d-1} h(x, z) u. Cancelling them exactly is non-robust and/or not necessary. However, note the difference between tracking and regulation!
We will see later how optimality criteria give hints. Achieving passivity by feedback ("feedback passivation") requires relative degree one and weak minimum phase.
Exact Linearization
- Often useful in simple cases
- Important intuition may be lost
- Related to Lie brackets and flatness
NOTE! (Nonlinear) relative degree and zero dynamics are invariant under feedback! Two major challenges: avoid non-robust cancellations, and make the design constructive by finding matching input-output pairs.
Example
Check if V(x, y) = [x^2 + (y + x^2)^2] / 2 is a CLF for the system
  dx/dt = x y
  dy/dt = -y + u
  L_f V(x, y) = x^2 y + (y + x^2)(-y + 2 x^2 y)
  L_g V(x, y) = y + x^2
  L_g V = 0  implies  y = -x^2  implies  L_f V(x, y) = -x^4 < 0  if (x, y) ≠ 0    (Notation: L_f V(x) < 0)
Sontag's formula
If V is a CLF for the system dx/dt = f(x) + g(x) u, then a continuous asymptotically stabilizing feedback is defined by
  u(x) := 0  if L_g V(x) = 0
  u(x) := -( L_f V + sqrt( (L_f V)^2 + ((L_g V)(L_g V)^T)^2 ) ) / ( (L_g V)(L_g V)^T ) (L_g V)^T  if L_g V(x) ≠ 0
Idea
Backstepping: Let V_x be a CLF for the system dx/dt = f(x) + g(x) u with corresponding asymptotically stabilizing control law u = φ(x). Then V(x, y) = V_x(x) + [y - φ(x)]^2/2 is a CLF for the system
  dx/dt = f(x) + g(x) y
  dy/dt = h(x, y) + u
Backstepping Example
For the system
  dx/dt = x^2 + y
  dy/dt = u
one gets
  u = -(2x + 1)(x^2 + y) - x^2 - x - y
  V(x, y) = V_x(x) + [y - φ(x)]^2/2
Proof.
  dV/dt = (∂V_x/∂x)(f + g y) + (y - φ)[h + u - (∂φ/∂x)(f + g y)]
        = (∂V_x/∂x)(f + g φ) + (y - φ)[(∂V_x/∂x) g + h + u - (∂φ/∂x)(f + g y)]
        = (∂V_x/∂x)(f + g φ) - (y - φ)^2 < 0
with the choice of u above.
Lyapunov Redesign
Consider the nominal system
  dx/dt = f(x, t) + G(x, t) u    (1)
with a feedback u = φ(x, t) such that the system is uniformly asymptotically stable. Assume that a Lyapunov function V(x, t) is known s.t.
  α1(||x||) <= V(x, t) <= α2(||x||)
  ∂V/∂t + (∂V/∂x)[f(t, x) + G φ] <= -α3(||x||)
Apply u = φ(x, t) + v to the perturbed system:
  dx/dt = f(x, t) + G(x, t) φ + G(x, t)[v + δ(t, x, φ + v)]    (2)
with the uncertainty bound
  ||δ(t, x, φ + v)|| <= ρ(x, t) + κ0 ||v||,  0 <= κ0 < 1
If we know ρ and κ0, how do we design the additional control v such that u = φ(x, t) + v stabilizes (2)? The matching condition: the perturbation enters at the same place as the control signal u. Then
  dV/dt = ∂V/∂t + (∂V/∂x)[f(t, x) + G φ] + (∂V/∂x) G [v + δ]
       <= -α3(||x||) + w^T v + w^T δ
where w^T = (∂V/∂x) G.
Alternative 1: if ||δ|| <= ρ + κ0 ||v||_2 in the 2-norm, take
  v = -η(t, x) w / ||w||_2,  where η >= ρ/(1 - κ0)
Alternative 2: if the bound holds in the max-norm, take
  v = -η(t, x) sgn(w),  where η >= ρ/(1 - κ0)
Note: v appears at the same place as δ due to the matching condition. The restriction is κ0 < 1, but there is no restriction on the growth of ρ. Alternatives 1 and 2 coincide for single-input systems.
Example:
  dx/dt = u + ψ(x) δ(t)
Example cont.
Exponentially decaying disturbance δ(t) = δ(0) e^{-kt}, linear feedback u = -c x, ψ(x) = x^2:
  dx/dt = -c x + δ(0) e^{-kt} x^2
Similar to the peaking problem of the last lecture: finite escape of the solution to infinity if δ(0) x(0) > c + k. We want to guarantee that x(t) stays bounded for all initial values x(0) and all bounded disturbances δ(t).
Nonlinear damping
Modify the control law in the previous example to
  u = -c x - s(x) x
Then
  dV/dt = -c x^2 - x^2 s(x) + x ψ(x) δ
Choose s(x) = ε ψ^2(x):
  dV/dt <= -c x^2 + δ^2/(4ε)
Remark: the nonlinear damping -x ε ψ^2(x) renders the system Input-to-State Stable (ISS) with respect to the disturbance δ.
Young's inequality
Let p > 1, q > 1 s.t. (p - 1)(q - 1) = 1; then for all ε > 0 and all (x, y) ∈ R^2
  x y <= (ε^p / p) |x|^p + (1 / (q ε^q)) |y|^q
Standard case (p = q = 2, with ε^2/2 -> ε):
  x y <= ε x^2 + y^2/(4ε)
Our example:
  x ψ(x) δ(t) <= x^2 ψ^2(x) ε + δ^2(t)/(4ε)
Backstepping idea
Given a CLF for the system
  dx/dt = f(x, u)
Problem: the control u does not enter the first equation. Idea: use y to control the first system, and use u for the second.
Let V x be a CLF for the system x = f ( x) + ( x)u with corresponding asymptotically stabilizing control law u = ( x). Then V ( x, y) = V x( x) + [ y ( x)]2 /2 is a CLF for the system
x = f ( x) + ( x) y y = h( x , y ) + u
Backstepping Example
For the system
x = x2 + y y=u
[ f ( x ) + ( x ) u]
Vx ( x ) h( x , y ) + ( x ) y x
= (2x + 1)( x2 + y) x2 x y
Proof.
V = ( Vx / x)( f + u) + ( y )[( / x) f + h + u]
= x 2 + ( y + x 2 + x ) 2 /2
Backstepping (step by step)
We can't expect to realize x2 = φ1(x1) exactly, but we can always try to make the error tend to 0. Consider
  dx1/dt = x1^2 + x2    (3)
  dx2/dt = u    (4)
Find u(x) which stabilizes (3). Idea: first stabilize the x1-system with x2, then stabilize the whole system with u.
We know that if x2 = -x1^2 - x1, then x1 -> 0 asymptotically (exponentially) as t -> inf. So let
  φ1(x1) = -x1^2 - x1
and change coordinates:
  z1 = x1,  z2 = x2 - φ1(x1)
Then
  dz1/dt = z1^2 + x2 = z1^2 + z2 + φ1(z1) = -z1 + z2
  dz2/dt = dx2/dt - (dφ1/dz1)(dz1/dt) = u + (2 z1 + 1)(-z1 + z2)
Now look at the augmented Lyapunov function for the error system:
  V1 = z1^2/2,  dV1/dt = z1 dz1/dt = -z1^2 + z1 z2
  V2 = V1 + z2^2/2,  dV2/dt = -z1^2 + z1 z2 + z2 dz2/dt
                            = -z1^2 + z2 ( z1 + u + (2 z1 + 1)(-z1 + z2) )
Choose u = -z2 - z1 - (2 z1 + 1)(-z1 + z2):
  dV2/dt = -z1^2 - z2^2
If we could use x2 as a control signal, we would like to assign it to φ1(x1) to stabilize the x1-dynamics. Move the control backwards through the integrator: z2 = x2 - φ1(x1). Note the change of coordinates!
[Block diagram: the chain u -> ∫ -> x2 -> f -> ∫ -> x1, before and after moving φ1 back through the integrator]
Adaptive Backstepping
System:
  dx1/dt = x2 + θ φ(x1)
  dx2/dt = x3
  dx3/dt = u(t)    (5)
where φ is a known function of x1 and θ is an unknown parameter. Introduce new (error) coordinates
  z1(t) = x1(t)
  z2(t) = x2(t) - α1(z1, θ̂)    (6)
(Back-) Step 1: with V1 = z1^2/2 + (θ - θ̂)^2/(2γ) and dz1/dt = z2 + α1 + θ φ, choose α1 = -z1 - θ̂ φ:
  dV1/dt = -z1^2 + z1 z2 + (θ - θ̂)( φ z1 - (dθ̂/dt)/γ )
Note: if we used dθ̂/dt = γ φ z1 = τ1 as update law and if z2 = 0, then dV1/dt = -z1^2 <= 0.
Step 2: Introduce z3 = x3 - α2(z1, z2, θ̂) and use α2 as the control to stabilize the (z1, z2)-system.
Step 2:
  dz2/dt = x3 - dα1/dt = z3 + α2 - (∂α1/∂z1)(x2 + θ φ) - (∂α1/∂θ̂)(dθ̂/dt)
Choose α2 = -z1 - z2 + (∂α1/∂z1)(x2 + θ̂ φ) + (∂α1/∂θ̂) τ2 with a refined tuning function τ2; then
  dV2/dt = -z1^2 - z2^2 + z2 z3 + (θ - θ̂)( τ2 - dθ̂/dt )/γ
Step 3: z3 = x3 - α2 gives
  dz3/dt = u - dα2/dt
We now want to choose u = u(z1, z2, z3, θ̂) such that the whole system is stabilized with respect to
  V3 = V2 + z3^2/2
Crucial: the mismatch terms (∂αi/∂θ̂)(τ - dθ̂/dt) are known, since they do not contain the unknown θ, so they can be cancelled by u. Taking the final update law dθ̂/dt = τ3 and choosing u to cancel all known cross terms and add -z3 damping (puh...) yields
  dV3/dt = -z1^2 - z2^2 - z3^2
Closed-loop system:
  dz/dt = (-I + S(z, θ̂)) z + terms proportional to (θ - θ̂),  with S skew-symmetric
so dV3/dt = -z^T z, giving global stability of z = 0, θ̂ = θ and x -> 0 (by LaSalle's theorem).
Observer backstepping
Observer backstepping is based on the following steps:
1. A (nonlinear) observer is designed which provides (exponentially) convergent estimates.
2. Backstepping is applied to a system where the states have been replaced by their estimates.
3. The observation errors are regarded as (bounded) disturbances and handled by nonlinear damping.
Backstepping handles strict-feedback systems, whose last equation reads
  dxn/dt = f_n(x1, x2, ..., xn) + u
Forwarding, treated next, handles chains ending with
  dx_{n-1}/dt = xn + f_{n-1}(xn, u)
  dxn/dt = u
Outline
- HJB and inverse optimal control
- Stabilization with saturations
- Integrator forwarding
- Relations between the concepts
- Conclusions
Optimality
Two main alternatives:
- Pontryagin's Maximum Principle (necessary conditions)
- Hamilton-Jacobi-Bellman / dynamic programming (sufficient conditions)
where l ( x) 0 and R( x) 0 x. For a given feedback u( x) the value of V depends on the initial state x(0): V ( x(0)) or simply V ( x).
Theorem (Optimality and Stability) Suppose there exist a C 1 -function V ( x) 0 which satises the Hamilton-Jacobi-Bellman equation
1 l ( x) + L f V ( x) L V ( x) R1 ( L V ( x)) T = 0 4 V (0) = 0
x = Ax + Bu
(2)
( x T C T Cx + uT Ru)dt,
R>0
Riccati-equation
PA + APT PB R1 B T P + C T C = 0
(3)
achieves asymptotic stability of the origin x = 0. Then u( x) is the optimal stabilizing control which minimizes the cost (1). 5-min exercise: Consider the system and the cost functional
V = x = x2 + u
If (A,B) controllable and (A,C) observable, then (3) has a unique solution P = P T > 0 such that the optimal cost is V = x T Px and u ( x) = R1 B T Px is the optimal stabilizing control
( x2 + u2 )dt
HJB:

x² + Vₓ x² − (1/4) Vₓ² = 0,  V(0) = 0

Solving the quadratic for Vₓ (choosing the root that gives a stabilizing control):

Vₓ = 2x² + 2x√(x² + 1)

V(x) = (2/3) x³ + (2/3) (x² + 1)^(3/2) + C  (4)

where V(0) = 0 gives C = −2/3. The optimal control is

u(x) = −Vₓ/2 = −x² − x√(x² + 1)

Remark: If (A, B) is stabilizable and (A, C) is detectable, then P is positive semi-definite.

Example (non-detectability in cost): System
ẋ = x + u

Cost functional

V = ∫₀^∞ u² dt  (5)

Riccati equation:

2P − P² = 0  ⇒  P = 0 or P = 2

Corresponding HJB:

Vₓ x − (1/4) Vₓ² = 0,  V(0) = 0  ⇒  V = 0 or V = 2x²

Since the cost (5) does not penalize the state, the Riccati/HJB equations have multiple solutions; only P = 2 (V = 2x², u = −2x) gives a stabilizing control.
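The 5-min exercise invites a quick numerical sanity check: Vₓ = 2x² + 2x√(x² + 1) should satisfy the HJB identity exactly, and the resulting closed loop should be stable (grid and tolerances below are ad hoc):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
Vx = 2*x**2 + 2*x*np.sqrt(x**2 + 1)

# HJB residual x^2 + Vx*x^2 - Vx^2/4 should vanish identically
residual = x**2 + Vx*x**2 - Vx**2/4
print(np.max(np.abs(residual)))

# Closed loop xdot = x^2 - Vx/2 = -x*sqrt(x^2+1): toward the origin on both sides
xdot = x**2 - Vx/2
print(np.all(xdot[x > 0] < 0) and np.all(xdot[x < 0] > 0))
```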
Inverse optimality

A stabilizing control law u(x) solves an inverse optimal problem for the system

ẋ = f(x) + g(x)u

if it can be written as

u(x) = −k(x)/2 = −(1/2) R⁻¹(x) (L_g V(x))ᵀ,  R(x) > 0

where V(x) ≥ 0 and

V̇ = L_f V + L_g V u = L_f V − (1/2) L_g V k(x) ≤ 0

(the last expression plays the role of −l(x) in the corresponding cost).

The underlying idea of formulating an inverse optimal problem is to get some help to avoid non-robust cancellations and to gain some stability margins.

Example (non-robust cancellation): Consider the system

ẋ = x² + u

This system may have finite-escape-time solutions. How does u from the previous example behave?
Consider ẋ = f(x) + g(x)u and assume that we know a function V(x) such that L_f V ≤ 0 for all x. How can the system be made asymptotically stable (robustly)?

To add more damping to the system, to render it asymptotically stable, the following suggestion was made by Jurdjevic–Quinn (1978): since

V̇ = L_f V + L_g V u

choose

u = −(L_g V)ᵀ  ⇒  V̇ = L_f V − (L_g V)(L_g V)ᵀ ≤ 0

This choice also solves a global optimization problem for a cost functional of the form

V(x) = ∫₀^∞ ( l(x) + uᵀu ) dt,  l(x) ≥ 0

With y = (L_g V)ᵀ, the feedback law u = −y guarantees GAS if the system is ZSD (zero-state detectable). Note: this may be a conservative choice, as it does not fully exploit the possibility to choose V(x) for the whole system (not only for ẋ = f(x)).

Stabilization with saturations: saturated controls [Sussmann, Yang and Sontag]; cascaded saturations [Teel et al.]
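A minimal simulation sketch of the damping idea, on an assumed example (an undamped oscillator with V = (x1² + x2²)/2, so L_f V = 0 and L_g V = x2):

```python
import numpy as np

# x1' = x2, x2' = -x1 + u;  V = (x1^2 + x2^2)/2 gives L_f V = 0, L_g V = x2,
# so the damping control is u = -(L_g V)^T = -x2.
x = np.array([2.0, 0.0])
dt = 1e-3
for _ in range(30_000):                 # 30 time units, forward Euler
    u = -x[1]
    x = x + dt * np.array([x[1], -x[0] + u])
print(np.linalg.norm(x))
```

The closed loop is ẍ + ẋ + x = 0, so the state decays like e^(−t/2); the system is ZSD (u ≡ 0 and y = x2 ≡ 0 force x1 ≡ 0), hence GAS.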
Feedforward systems
Particular form of cascaded systems. Key contributions: A. Teel (1991); Sussmann, Sontag, Yang; Saberi, Lin; Mazenc, Praly (1996); Sepulchre, Jankovic, Kokotovic (1996).
Strict-feedforward systems
ẋ1 = x2 + f1(x2, x3, …, xn, u)
ẋ2 = x3 + f2(x3, …, xn, u)
⋮
ẋn−1 = xn + fn−1(xn, u)
ẋn = u
[Block diagram: a chain of integrators 1/s, with feedforward couplings fn−1, fn−2, …, f1 feeding the upstream states.]
Strict-feedforward systems are, in general, not feedback linearizable (i.e., neither exact linearization nor backstepping is applicable for stabilization)! Compare with, e.g., strict-feedback systems

ẋ1 = x2 + f1(x1)
ẋ2 = x3 + f2(x1, x2)
⋮
ẋn = fn(x1, x2, …, xn−1) + u

Restriction: the strict-feedforward form does not cover systems of the type

ẋk = xk+1 + xk² + ⋯
Sussmann and Yang (1991): There does not exist any (simple) saturated linear feedback law which globally stabilizes an integrator chain of order three or higher.
[Block diagram: triple integrator chain 1/s–1/s–1/s with nested saturated feedback paths through gains l1, l2, l3.]

A function σ: R → R is called a linear saturation for (L, M) if σ(s) = s for |s| ≤ L and |σ(s)| ≤ M for all s ∈ R.

Theorem (Teel): For an integrator chain of any order n and any set {(L_i, M_i)}, i = 1, …, n, where L_i ≤ M_i and M_i < (1/2) L_{i+1}, and for any linear saturations {σ_i}, there exist linear functions {h_i} such that the bounded control

u = −σn( hn(x) + σn−1( hn−1(x) + ⋯ + σ1(h1(x)) … ) )

globally asymptotically stabilizes the chain.
Sketch of proof (n = 3, L_i = M_i): Consider a state transformation y = Tx which transforms the integrator chain into ẏ = Ay + Bu, where

A = [ 0 1 1
      0 0 1
      0 0 0 ],   B = [ 1
                       1
                       1 ]

With

u = −σ3( y3 + σ2( y2 + σ1(y1) ) )

we get ẏ3 = u, so |y3| will decrease. In finite time |y3| < (1/2)L3, and σ3 will from then on operate in its linear region. (Note: no finite escape for the other states.)
Once σ3 is in its linear region,

ẏ2 = y3 − σ3( y3 + σ2( y2 + σ1(y1) ) ) = −σ2( y2 + σ1(y1) )

The same kind of argument shows that after finite time the closed loop will look like

ẏ1 = −y1
ẏ2 = −y1 − y2
ẏ3 = −y1 − y2 − y3

i.e., after a finite time the dynamics are exponentially stable. Remark: Although we have found a globally stabilizing, bounded control law u, the internal states may have huge overshoots!

Integrator forwarding

Consider again strict-feedforward systems

ẋ1 = x2 + f1(x2, x3, …, xn, u)
⋮
ẋn−1 = xn + fn−1(xn, u)
ẋn = u
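A simulation sketch of the nested-saturation law for the triple integrator; the transformation T and the saturation levels below are assumed choices consistent with the theorem's conditions:

```python
import numpy as np

# Nested saturated control of the triple integrator x1'=x2, x2'=x3, x3'=u.
# y = T x brings the chain to ydot = A y + B u with A = [[0,1,1],[0,0,1],[0,0,0]],
# B = [1,1,1]^T. Levels M1=0.15, M2=0.4, M3=1.0 satisfy Mi < L_{i+1}/2 (Li = Mi).
sat = lambda s, M: np.clip(s, -M, M)
T = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def control(x):
    y1, y2, y3 = T @ x
    return -sat(y3 + sat(y2 + sat(y1, 0.15), 0.4), 1.0)

x = np.array([1.0, 0.0, 0.0])
dt = 0.002
for _ in range(50_000):                       # 100 time units, forward Euler
    u = control(x)
    x = x + dt * np.array([x[1], x[2], u])
print(np.linalg.norm(x))
```

The control magnitude never exceeds M3 = 1, yet the state converges; note the slow final phase, limited by the innermost saturation level.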
Due to the lack of feedback connections, solutions always exist and are of the form

xn(t) = xn(0) + ∫₀ᵗ u(s) ds

xn−1(t) = xn−1(0) + ∫₀ᵗ ( xn(s) + fn−1(xn(s), u(s)) ) ds

⋮
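This bottom-up quadrature structure can be illustrated numerically; the two-state system below is an assumed toy example in strict-feedforward form:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, solve_ivp

# Assumed toy strict-feedforward system: x1' = x2 + x2^2, x2' = u
t = np.linspace(0.0, 5.0, 5001)
u = np.sin(t)                      # any open-loop input signal

# Integrate the chain bottom-up by quadrature
x2 = 1.0 + cumulative_trapezoid(u, t, initial=0.0)
x1 = 0.5 + cumulative_trapezoid(x2 + x2**2, t, initial=0.0)

# Cross-check against a general-purpose ODE solver
sol = solve_ivp(lambda s, x: [x[1] + x[1]**2, np.sin(s)],
                (0.0, 5.0), [0.5, 1.0], t_eval=t, rtol=1e-9, atol=1e-9)
print(np.max(np.abs(sol.y[0] - x1)))
```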
2. Augment the control law, un−1(xn−1, xn) = un(xn) + vn−1, such that un−1 stabilizes the cascade

ẋn−1 = xn + fn−1(xn, u)
ẋn = un−1

⋮

k. Augment the control law, uk(xk, Xk+1) = uk+1(Xk+1) + vk, such that uk stabilizes the cascade

Ẋk+1 = Fk+1(…, uk)
ẋk = xk+1 + fk(…)

How is the cascade (in step k) stabilized? We have a cascade of one GAS/LES system and one ISS system with a linear growth condition. The Lyapunov function for the cascade is built by adding a cross-term of the form

∫₀^∞ xk(s) fk(Xk+1(s)) ds

evaluated along solutions, and the resulting control is optimal with respect to a cost of the form ∫ ( l(x) + u² ) ds.
The cross-term can only be evaluated exactly for very simple systems. In other cases it has to be evaluated numerically or approximated by, e.g., a Taylor series.

Connection to Teel's results: To avoid computation of the integrals we can use nested low-gain (saturated) control. This has also been shown to give GAS/LES for the integrator chain, but only LAS/LES for the general strict-feedforward system. (Compare with the high-gain designs in backstepping.)
A feedback passivation design can be used for a system if

1. a relative degree condition is satisfied, and
2. the system is weakly minimum phase.

Conclusions

- Global/semiglobal stabilization of strict-feedforward systems (no exact linearization possible)
- Tracking results reported
- Relaxes the weakly-minimum-phase condition

Backstepping is a recursive way of finding a relative degree one output. Integrator forwarding allows us to stabilize weakly non-minimum-phase systems.
Periodic Perturbations, Averaging, Singular Perturbations. Khalil, Sections 10.3–10.6 and Chapter 11 (Chapters 9–10 in earlier editions).
[Introductory example (equations garbled in extraction): a pendulum whose suspension point vibrates vertically as a sin ωt; with coordinates x = l sin θ, y = −l cos θ + a sin ωt, the equations of motion can be brought to the periodically perturbed form ẋ = ε f(t, x, ε).]
Averaging Assumptions

Consider the system

ẋ = ε f(t, x, ε),  x(0) = x0

where f is T-periodic in t, and f and its derivatives up to second order are continuous and bounded. Let xav be defined by the averaged equations

ẋav = ε fav(xav),  xav(0) = x0

fav(x) = (1/T) ∫₀ᵀ f(τ, x, 0) dτ
(When f is not periodic in t, the average is instead defined as

fav(x) = lim_{T→∞} (1/T) ∫₀ᵀ f(τ, x, 0) dτ

provided the limit exists.)

[Example (vibrating pendulum, equations garbled in extraction): the averaged system contains terms in x2 and sin 2x1, and its Jacobian ∂fav/∂x at the equilibrium is Hurwitz for parameter values between 0 and 1/2. Can this be used for rigorous conclusions?]
Theorem: If x = 0 is an exponentially stable equilibrium of the averaged system then, for sufficiently small ε > 0, the equation ẋ = ε f(t, x, ε) has a unique exponentially stable periodic solution of period T in an O(ε) neighborhood of x = 0.
For the pendulum example, the Jacobian of the averaged system is Hurwitz for parameter values between 0 and 1/2 (symbols lost in extraction). Hence, for a/l sufficiently small and ω > √2 ω₀ l / a, the upper equilibrium of the vibrating pendulum is stable.

More generally, consider

ẋ = f(x) + g(t, x, ε),  x(0) = x0

where f, g, ∂f/∂x and ∂g/∂x are continuous and bounded. Exponential stability of x = 0 for ε = 0, plus bounds on the magnitude of g, shows existence of a bounded solution x̄, O(ε)-close to 0, for small ε > 0.
Proof sketch: Introduce

u(t, y) = ∫₀ᵗ [ f(τ, y, 0) − fav(y) ] dτ

and the near-identity change of variables

x = y + ε u(t, y)

Then

ẋ = ( I + ε ∂u/∂y ) ẏ + ε ∂u/∂t

and equating this with ẋ = ε f(t, y + εu, ε) gives

ẏ = ε fav(y) + ε² p(t, y, ε)

for small ε. With s = εt,

dy/ds = fav(y) + ε q(s/ε, y, ε)

which has a unique exponentially stable periodic solution for small ε. A fixed point x̄ = φ(T, 0, x̄, ε) of the period-T map gives periodicity of x̄. Put z = x − x̄; exponential stability of x = 0 for ε = 0 gives exponential stability of z = 0 for small ε > 0. This gives the desired result.
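The O(ε) closeness asserted by the theorem can be illustrated numerically on an assumed scalar example, ẋ = −ε x sin² t, whose average (since sin² t has mean 1/2) is ẋav = −(ε/2) xav:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, x0, tf = 0.05, 1.0, 100.0

# Full system xdot = -eps*sin(t)^2*x vs. averaged solution x0*exp(-eps*t/2)
sol = solve_ivp(lambda t, x: [-eps*np.sin(t)**2*x[0]], (0.0, tf), [x0],
                rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(0.0, tf, 1001)
x = sol.sol(t)[0]
xav = x0*np.exp(-eps*t/2)
print(np.max(np.abs(x - xav)))     # O(eps) on the 1/eps time scale
```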
Example: negative-resistance oscillator

[Circuit diagram: capacitor C and inductor L in parallel with a resistive element i = h(V).]

Kirchhoff's current law gives

iC + iL + i = 0,  i = h(V)

which leads to

CL d²V/dt² + V + L h′(V) dV/dt = 0  (1)

For an ordinary resistance we will get a damped oscillation. For a negative resistance/admittance chosen as

i = h(V) = ε ( −V + V³/3 )

we get, in rescaled variables, the van der Pol equation

ÿ − ε (1 − y²) ẏ + y = 0

Introduce polar coordinates for the linear oscillator part,

y = r sin φ,  ẏ = r cos φ

and average over the phase:

fav(r) = (1/2π) ∫₀^{2π} f(φ, r, 0) dφ
For the van der Pol term ε ẏ (1 − y²) this gives the averaged amplitude dynamics

ṙ = ε fav(r) + …,  fav(r) = (1/2) r − (1/8) r³

so small ε gives a stable limit cycle, which is close to circular with radius r = 2.

Singular Perturbations

Consider equations of the form

ẋ = f(t, x, z, ε),  x(0) = x0
ε ż = g(t, x, z, ε),  z(0) = z0

For small ε > 0, the first equation describes the slow dynamics, while the second equation defines the fast dynamics. The main idea will be to approximate x with the solution of the reduced problem

ẋ = f(t, x, h(t, x), 0),  x(0) = x0

where z = h(t, x) solves 0 = g(t, x, h(t, x), 0).
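The averaged prediction r → 2 can be checked against a simulation of the full van der Pol equation (simulation horizon and tolerance are ad hoc):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol: y'' - eps*(1 - y^2)*y' + y = 0, small eps.
# Averaging predicts rdot = eps*(r/2 - r^3/8): stable limit cycle at r = 2.
eps = 0.05
f = lambda t, s: [s[1], eps*(1 - s[0]**2)*s[1] - s[0]]
sol = solve_ivp(f, (0.0, 400.0), [0.5, 0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

# After the transient, the oscillation amplitude max|y| should be close to 2
t = np.linspace(300.0, 400.0, 20001)
amplitude = np.max(np.abs(sol.sol(t)[0]))
print(amplitude)
```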
Example: DC Motor I

[Circuit: armature resistance R, inductance L, back-EMF = kω.]

J dω/dt = k i
L di/dt = −k ω − R i + u

In suitably scaled variables this takes the singularly perturbed form

ẋ = z
ε ż = −x − z + u
Proof

Write the linear two-time-scale system as ẋ = A11 x + A12 z, ε ż = A21 x + A22 z. A22 is invertible, so it follows from the implicit function theorem that for sufficiently small ε the Riccati equation

A11 P + A12 − P ( A21 P + A22 ) / ε = 0

has a solution P = ε A12 A22⁻¹ + O(ε²).
With this P, the similarity transformation [ I, P; 0, I ] block-triangularizes the system matrix:

[ A11     A12   ]          [ A0 + O(ε)    0             ]
[ A21/ε   A22/ε ]    →     [ A21/ε        A22/ε + O(1)  ]

where A0 = A11 − A12 A22⁻¹ A21, so the split into slow eigenvalues (close to those of A0) and fast eigenvalues (close to those of A22/ε) is verified.

Example: DC Motor II

In the example

ẋ = z
ε ż = −x − z + u

we have

A11 = 0,  A12 = 1,  A21 = −1,  A22 = −1,  ε = L k² / (J R²)

so A0 = A11 − A12 A22⁻¹ A21 = −1: one eigenvalue close to −1 (slow) and one close to −1/ε (fast).

See Khalil for an example where the reduced system is stable but the fast dynamics are unstable.
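The slow/fast eigenvalue split for the DC-motor model is easy to verify numerically (ε chosen arbitrarily):

```python
import numpy as np

# xdot = z, eps*zdot = -x - z (u = 0), i.e. M = [[0, 1], [-1/eps, -1/eps]].
# Theory: eigenvalues ≈ eig(A0) = -1 (slow) and ≈ A22/eps = -1/eps (fast).
eps = 0.01
M = np.array([[0.0, 1.0], [-1.0/eps, -1.0/eps]])
lam = np.sort(np.linalg.eigvals(M).real)   # both eigenvalues are real here
fast, slow = lam[0], lam[1]
print(slow, fast)                          # near -1 and near -100
```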
Tikhonov's Theorem

Consider a singular perturbation problem with f, g, h, ∂g/∂x ∈ C¹. Assume that the reduced problem has a unique bounded solution x̄ on [0, T] and that the equilibrium y = 0 of the boundary layer problem

dy/dτ = g(t, x, y + h(t, x), 0),  y(0) = z0 − h(0, x0)

is exponentially stable, uniformly in (t, x). Then

x(t, ε) = x̄(t) + O(ε)
z(t, ε) = h(t, x̄(t)) + ŷ(t/ε) + O(ε)

where ŷ is the boundary-layer solution.
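The O(ε) estimate can be illustrated on the DC-motor model from above, whose reduced problem (ε = 0, u = 0) is z = h(x) = −x, ẋ̄ = −x̄:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full system: xdot = z, eps*zdot = -x - z (stiff for small eps)
eps = 1e-3
sol = solve_ivp(lambda t, s: [s[1], (-s[0] - s[1])/eps],
                (0.0, 2.0), [1.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0.0, 2.0, 201)
x = sol.sol(t)[0]
xbar = np.exp(-t)          # reduced-problem solution, xbar(0) = 1
print(np.max(np.abs(x - xbar)))   # should be O(eps)
```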
[Example: feedback amplifier with fast actuator dynamics; block diagram with gain k1/s and feedback k2.]

Plant and fast actuator:

ẋp = A xp + B up
ε u̇p = u − up − k2 C xp

Reduced model (ε = 0, up = u − k2 C xp):

ẋp = ( A − B k2 C ) xp + B u

Proof idea (for Tikhonov's theorem): Replace f and g with F and G that are identical for |x| < r, but nicer for large x. For small ε > 0, the x, y solutions of the F, G equations will satisfy |x| < r; hence, they also solve the f, g equations.
Example: Van der Pol relaxation oscillations

[Phase-plane plots for ε = 0.001 and ε = 0.1.]

For

ẋ = f(x, z)
ε ż = g(x, z)

the slow manifold z = H(x, ε) can be expanded as

H(x, ε) = H0(x) + ε H1(x) + ε² H2(x) + ⋯

where H0 satisfies 0 = g(x, H0).

With a suitable change of variables and time scaling (garbled in extraction), the van der Pol equation takes the Liénard form

ẋ = z
ε ż = −x + z − z³/3

The red dotted curve in the plots is the slow manifold x = z − z³/3. All solutions approach this along a fast manifold approximately satisfying x = constant.
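A stiff simulation shows the relaxation cycle hugging the slow manifold except during the fast jumps (thresholds below are ad hoc):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relaxation oscillation: xdot = z, eps*zdot = -x + z - z^3/3.
# Away from the fold points, trajectories hug the slow manifold x = z - z^3/3.
eps = 1e-3
f = lambda t, s: [s[1], (-s[0] + s[1] - s[1]**3/3)/eps]
sol = solve_ivp(f, (0.0, 5.0), [0.0, 2.0], method="LSODA",
                rtol=1e-9, atol=1e-9, dense_output=True)

x, z = sol.sol(np.linspace(2.0, 5.0, 301))
dist = np.abs(x - (z - z**3/3))     # distance to the slow manifold
print(np.median(dist))              # small: most time is spent near the manifold
```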