
EE539: Nonlinear and Multivariable Systems

Semester 08, 2011/2012

Fuzzy Control
Concept of Fuzzy Logic Control
There are three scenarios in practical control:
1. The system model is available and the uncertain factors are isolated from the plant.
   Model-based control: feedback linearization, adaptive/learning control, sliding mode control, etc.
2. The system model is available and the uncertain factors are isolated from the plant. In addition, extra knowledge (e.g. experts' control knowledge, operators' control skills, etc.) is also available.
   Model-based plus intelligent control: fuzzy/neural control as an add-on to further improve performance.
3. The system model is NOT available, or modeling the plant is too complicated. Linear model-free control (e.g. PID) is inadequate to deal with the system nonlinearities, and real-time control is highly dependent on the operators' experience.
   Intelligent control: e.g. fuzzy/neural control, which can extract and formulate human control rules.
Fuzzy Variables, Membership Functions
A fuzzy variable can be a linguistic one, e.g. large, medium, small;
it can be used to capture human action or judgment,
and it can be quantified using a membership function.

[Figure: membership functions for the linguistic labels small, medium and large; membership value m on the vertical axis (0.0 to 1.0, with 0.5 marked) against the fuzzy variable value on the horizontal axis.]

THE Simplest Fuzzy Logic Controller: A Static Example


Suppose you want a glass of moderately hot water, and you start with a glass of water at room temperature. Then you would follow the two steps below until the water is at the temperature you like:
If the water is too cold (for your liking), add some hot water.
If the water is too hot (for your liking), add some cold water.
A Simple Fuzzy Logic Controller: A Dynamic Example
Suppose you are tracking a trajectory r(t) (e.g. following a motor car) with a system output y(t) (e.g. a video camera).
Let us derive control rules from intuition and/or by inspection. Define the error e = y − r and the change of error Δe = ẏ − ṙ.

A:
Error: −(large)
Change of Error: +(large)
Control Action: (zero) (error is already going towards 0)

B:
Error: +(small)
Change of Error: −(large)
Control Action: +(medium) (to block)

C:
Error: +(medium)
Change of Error: (zero)
Control Action: −(medium)

D:
Error: −(small)
Change of Error: +(medium)
Control Action: −(small)


Fuzzification
A fuzzy coding process: fuzzify a non-fuzzy (crisp) error signal.
Assign appropriate fuzzy labels (usually linguistic) and membership values.
[Figure: triangular membership functions Zero (ZO), Positive Small (PS), Positive Medium (PM) and Positive Large (PL) over the error and change-of-error axes, with breakpoints at 1/3, 2/3 and 1; the values e = 0.43 and Δe = 0.10 are marked.]

According to these membership functions, an error (e) of 0.43 would be
certainly not a zero signal (membership 0.00)
more like a small signal (membership 0.65)
less like a medium signal (membership 0.35)
certainly not a large signal (membership 0.00)

According to these membership functions, a change of error (Δe) of 0.10 would be
more like a zero signal (membership 0.77)
less like a small signal (membership 0.23)
certainly not a medium signal (membership 0.00)
certainly not a large signal (membership 0.00)
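As a concrete illustration of the fuzzification step, the minimal Python sketch below computes triangular memberships for the error and change of error. The breakpoints (peaks at 0, 1/3, 2/3 and 1) are an assumption for illustration, so the printed values only approximate the 0.65/0.35 and 0.77/0.23 figures quoted above, which depend on the exact membership functions used in the lecture figure.

```python
import numpy as np

def tri(x, left, peak, right):
    """Triangular membership with the given left foot, peak and right foot."""
    if left == peak:    # left-shouldered label (e.g. ZO)
        return float(np.clip((right - x) / (right - peak), 0.0, 1.0))
    if peak == right:   # right-shouldered label (e.g. PL)
        return float(np.clip((x - left) / (peak - left), 0.0, 1.0))
    return float(max(min((x - left) / (peak - left), (right - x) / (right - peak)), 0.0))

# Assumed label definitions (peaks at 0, 1/3, 2/3, 1), not taken from the lecture figure.
LABELS = {"ZO": (0.0, 0.0, 1/3), "PS": (0.0, 1/3, 2/3),
          "PM": (1/3, 2/3, 1.0), "PL": (2/3, 1.0, 1.0)}

def fuzzify(x):
    """Return the membership value of a crisp input x in every label."""
    return {name: tri(x, *params) for name, params in LABELS.items()}

print(fuzzify(0.43))   # e = 0.43: mostly PS, partly PM, not ZO or PL
print(fuzzify(0.10))   # de = 0.10: mostly ZO, partly PS
```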
Fuzzy Rule Base
A fuzzy rule base consists of a set of rules which describe the appropriate action (as a fuzzy variable) to be taken based on the (fuzzified) input.
For example:
R1: IF ERROR is +(medium) AND CHANGE OF ERROR is −(medium) THEN CONTROL ACTION is (zero)
OR
R2: IF ERROR is +(small) AND CHANGE OF ERROR is +(small) THEN CONTROL ACTION is −(medium)
OR
...

Control

ERROR
NL

NM

NS

ZO

PS

PM

PL

PL

ZO

NS

NM

NL

NL

NL

NL

PM

PS

ZO

NS

NM

NL

NL

NL

PS

PM

PS

ZO

NS

NM

NL

NL

ZO

PL PM

PS

ZO

NS NM

NL

NS

PL

PL

PM

PS

ZO

NS

NM

NM

PL

PL

PL

PM

PS

ZO

NS

NL

PL

PL

PL

PL

PM

PS

ZO

CHANGE OF ERROR

Action

In general, for an m-input, n-output case, a fuzzy rule will be of the form:
R1: IF x1 is A1 AND x2 is A2 AND ... AND xm is Am
THEN u1 is B1 AND u2 is B2 AND ... AND un is Bn
Fuzzy Inference (Reasoning)
The fuzzy inference (reasoning) engine has two tasks:
Interpret the AND relation in the antecedent (IF part):
The fuzzy inference engine provides a single value (a truth value or activation strength) for the consequent (THEN part).
This is realized by taking the minimum membership value among all fuzzy labels linked via AND in a fuzzy rule (in the max-min method).
Interpret the OR operator among fuzzy rules:
This is realized by taking the maximum membership value among all fuzzy labels linked via OR with respect to all fuzzy rules (in the max-min method).
It is used during defuzzification.
Example: Suppose the error e = 0.43 and the change of error Δe = 0.10. Then there will be only four (4) rules with non-zero activation:
R1: IF ERROR is PS AND CHANGE OF ERROR is PS THEN CONTROL ACTION is NM
R2: IF ERROR is PS AND CHANGE OF ERROR is ZO THEN CONTROL ACTION is NS
R3: IF ERROR is PM AND CHANGE OF ERROR is ZO THEN CONTROL ACTION is NM
R4: IF ERROR is PM AND CHANGE OF ERROR is PS THEN CONTROL ACTION is NL
Let μ_e^PS(x) be the membership value in PS when the error (e) is x. Let the other membership values be defined similarly (e.g. μ_e^PM(x), μ_Δe^ZO(x), μ_Δe^PS(x), etc.). With this notation, we showed earlier that

μ_e^PS(0.43) = 0.65;  μ_e^PM(0.43) = 0.35;  μ_Δe^ZO(0.10) = 0.77;  μ_Δe^PS(0.10) = 0.23.

The AND operation is defined as the minimum membership value (in the max-min method). Therefore, when e = 0.43 and Δe = 0.10,

Rule R1 has strength (or weight) of activating control action NM:
min{ μ_e^PS(0.43), μ_Δe^PS(0.10) } = min{0.65, 0.23} = 0.23.

Rule R2 has strength of activating control action NS:
min{ μ_e^PS(0.43), μ_Δe^ZO(0.10) } = min{0.65, 0.77} = 0.65.

Rule R3 has strength (or weight) of activating control action NM:
min{ μ_e^PM(0.43), μ_Δe^ZO(0.10) } = min{0.35, 0.77} = 0.35.

Rule R4 has strength (or weight) of activating control action NL:
min{ μ_e^PM(0.43), μ_Δe^PS(0.10) } = min{0.35, 0.23} = 0.23.
Defuzzification
Each activated fuzzy rule provides a fuzzy control action with a label (NL, NM, NS, ZO, PS, PM, PL) and an associated value (the reasoning result of the premise or antecedent part).
Defuzzification determines the final control action, which must be crisp (non-fuzzy), by synthesizing all activated rules linked via the OR operation.
The OR operation is interpreted as the maximum membership value (in the max-min method), and each membership function of the control-action fuzzy variable (NL, NM, etc.) is truncated at the corresponding activation strength.
[Figure: control-action membership functions NL, NM and NS over the interval [-1.00, 0.00], truncated at the activation strengths of the four rules: 0.23 (R1 and R4), 0.35 (R3) and 0.65 (R2).]

There are numerous defuzzification methods. The most classical one is the Center of Gravity (COG) method.
In the COG method, the geometric center (centroid) of the truncated and aggregated region (under the clipped membership functions of the activated labels) is taken as the defuzzified control signal.
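A minimal numerical sketch of the whole max-min inference and COG chain is given below. The control-action membership shapes (triangles of half-width 1/3 centred at -1, -2/3, ..., 1) are an assumption for illustration; the activation strengths 0.23, 0.65, 0.35, 0.23 are the ones computed in the example above.

```python
import numpy as np

# Assumed control-action membership functions: triangles of half-width 1/3.
CENTRES = {"NL": -1.0, "NM": -2/3, "NS": -1/3, "ZO": 0.0,
           "PS": 1/3, "PM": 2/3, "PL": 1.0}

def mu(label, u):
    """Triangular membership of a control value u in the given label."""
    return np.clip(1.0 - 3.0 * np.abs(u - CENTRES[label]), 0.0, 1.0)

# Activation strengths from the max-min inference example above
# (R1 -> NM 0.23, R2 -> NS 0.65, R3 -> NM 0.35, R4 -> NL 0.23).
activations = [("NM", 0.23), ("NS", 0.65), ("NM", 0.35), ("NL", 0.23)]

u = np.linspace(-1.0, 1.0, 2001)
aggregate = np.zeros_like(u)
for label, strength in activations:
    # AND (min): truncate each output set at its rule strength.
    clipped = np.minimum(strength, mu(label, u))
    # OR (max): aggregate the truncated sets pointwise.
    aggregate = np.maximum(aggregate, clipped)

# Centre of gravity of the aggregated region gives the crisp control action.
u_crisp = (u * aggregate).sum() / aggregate.sum()
print(round(u_crisp, 3))   # a negative crisp control action between -1 and 0
```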

Advantages of Fuzzy Logic Control


Tremendous degrees of freedom for fine tuning:
The shape of the membership functions can be arbitrary.
The support of each label can be different.
The rule base can be (highly) nonlinear.
The definitions of AND and OR can vary (e.g. in addition to the max-min method, AND can be a product and OR can be min{R1 + R2, 1}, etc.).
The defuzzification methods can be versatile.
The consequent (i.e. THEN) part can even be crisp (e.g. a linear controller).
Different tuning strategies result in controllers with different performance.

EE539: Nonlinear and Multivariable Systems


Semester 08, 2011/2012

Input-Output Feedback Linearization

We consider output tracking control problems
1. when only the output needs to be shaped in a practical tracking task, or
2. when only a limited number of actuators are available for tracking control.
Relative Degree and Internal Dynamics

Example


ẋ1 = sin x2 + (x2 + 1) x3
ẋ2 = x1^5 + x3
ẋ3 = x1^2 + u
y = x1

(i.e. ẋ = f(x) + g(x) u with g(x) = (0, 0, 1)^T and y = (1 0 0) x.)
Objective:
Track the desired trajectory yr (t) while keeping all states bounded.
The output is related to the input u through the states (y = x1 here):
ẏ = ẋ1 = sin x2 + (x2 + 1) x3   (no relation to the input)
ÿ = ẋ2 cos x2 + ẋ2 x3 + (x2 + 1) ẋ3
  = (x1^5 + x3)(x3 + cos x2) + (x2 + 1) x1^2 + (x2 + 1) u
  = a(x) + (x2 + 1) u,   where a(x) = (x1^5 + x3)(x3 + cos x2) + (x2 + 1) x1^2.
Choosing u = (1/(x2 + 1)) (V − a(x)) leads to the canonical form: with z1 = y and z2 = ẏ, we get ż1 = z2; ż2 = V.
V can be chosen as ÿr − k1 e1 − k2 e2 for our tracking task y(t) → yr(t).
If y − yr = e = e1 and ẏ − ẏr = ė = e2, then ė1 = e2; ė2 = V − ÿr.
Our example system has order 3, but the linearized part is only of order 2. Part of the system dynamics (described by one state component) has been rendered unobservable by the input-output linearization. This part is not visible in the input-output relationship.
This system has relative degree r = 2.
In this system the internal dynamics is
ẋ3 = x1^2 + (1/(x2 + 1)) (ÿr − k1 e1 − k2 e2 − a(x)),
and the system states are {y, ẏ, x3}.
If system order = relative degree, then
No internal dynamics
State regulation and output tracking can be easily achieved
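For the third-order example above, the differentiation steps can be checked symbolically. The short sympy sketch below (the symbol names are mine) confirms that u does not appear in ẏ, recovers the coefficient (x2 + 1) of u in ÿ, and extracts a(x).

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([sp.sin(x2) + (x2 + 1) * x3, x1**5 + x3, x1**2])
g = sp.Matrix([0, 0, 1])
h = x1                                   # output y = x1

xdot = f + g * u
ydot = (sp.Matrix([h]).jacobian(x) * xdot)[0]              # first derivative of y
yddot = sp.expand((sp.Matrix([ydot]).jacobian(x) * xdot)[0])  # second derivative of y

print(sp.simplify(ydot.diff(u)))           # 0      -> u does not appear yet
print(sp.simplify(yddot.diff(u)))          # x2 + 1 -> relative degree 2 (where x2 != -1)
print(sp.simplify(yddot - (x2 + 1) * u))   # a(x) = (x1**5 + x3)(x3 + cos x2) + (x2 + 1) x1**2
```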

Zero Dynamics

For a linear system, the stability of the internal dynamics is solely determined by its poles.
The poles of the internal dynamics are the zeros of the open-loop transfer function, i.e. the ones canceled out in the pole-zero cancellation that occurs in input-output control when deriving the closed-loop transfer function.
The concept of a zero is not (directly) applicable to a nonlinear system. However, we can borrow the idea by assuming a zero output maintained by a controller. As a result, the internal dynamics simplify to the zero dynamics:
With y = x1 ≡ 0 we have e, ė ≡ 0 and yr, ẏr ≡ 0, so in
ẋ3 = x1^2 + (1/(x2 + 1)) (ÿr − k1 e1 − k2 e2 − a(x))
the terms x1^2, ÿr, e1 and e2 all vanish, and
a(x) = (x1^5 + x3)(x3 + cos x2) + (x2 + 1) x1^2 = x3 (x3 + cos x2)   (since x1 = 0),
so the zero dynamics are
ẋ3 = − x3 (x3 + cos x2) / (x2 + 1).
Choose the Lyapunov function V = x3^2 / 2. Then
V̇ = − (x3^2 / (x2 + 1)) (x3 + cos x2).
This is not globally bounded; local boundedness depends on the initial values of x2 and x3. For example, if x3 > 0, 1 + x2 < 0 and x3 + cos x2 < 0 (which requires cos x2 < 0), then ẋ3 < 0, so x3 decreases, V̇ < 0, and V̇ will continue to stay negative.
Stability of the nonlinear zero dynamics does not imply overall stability. But if the overall system achieves e, ė → 0 (so that y → yr), then the zero dynamics of the system take over.
NOTE: If the zero dynamics are stable, then the overall system is stable.

Normal Form

Assume the relative degree r < n. Then the nonlinear system can be transformed to a normal form with new states
μ = (μ1, μ2, ..., μr)^T = (y, ẏ, ..., y^(r−1))^T.
The remaining states are ψ = (ψ1, ..., ψ_(n−r))^T.
μ̇ = (μ2, ..., μr, a + b u)^T
ψ̇ = W(μ, ψ)   (internal dynamics)
y = μ1
ψ̇ = W(0, ψ)   (zero dynamics)

Procedure for Linearization

The normal form discloses the underlying structure of a nonlinear system, including its relative degree and the internal dynamics. It gives the simplest form suitable for designing either an output regulator or an output tracking controller.
Consider a nonlinear system ẋ(t) = f(x) + g(x) u(t) with y(t) = h(x).
ẏ = (∇h) ẋ = (∇h)(f + g u) = Lf h + (Lg h) u
If Lg h ≠ 0, then r = 1.
If Lg h = 0, then we can differentiate again:
ÿ = (∇(Lf h))(f + g u) = Lf^2 h + (Lg Lf h) u.
If Lg Lf h ≠ 0, then the relative degree is r = 2; otherwise we keep differentiating. In general, for i = 1, ..., r − 1,
y^(i) = Lf^i h   (since Lg Lf^(i−1) h = 0),
y^(r) = Lf^r h + (Lg Lf^(r−1) h) u.
If u = (V − Lf^r h) / (Lg Lf^(r−1) h), we get y^(r) = V.
For the internal states, ψ̇i = (∂ψi/∂x) ẋ = (∂ψi/∂x)(f + g u); for ψ̇i to be independent of u we need (∂ψi/∂x) g = 0.
The rest of the states can therefore be determined by using (∇ψi) g = 0, i.e. Lg ψi = 0, for i = 1, ..., n − r.

Example:
ẋ1 = −x1 + e^(2 x2) u
ẋ2 = 2 x1 x2 + sin x2 + (1/2) u
ẋ3 = 2 x2
y = x3
i.e. ẋ = f(x) + g(x) u with f(x) = (−x1, 2 x1 x2 + sin x2, 2 x2)^T, g(x) = (e^(2 x2), 1/2, 0)^T and h(x) = x3.

Relative Degree:
Lg h = (∇h) g = (0 0 1)(e^(2 x2), 1/2, 0)^T = 0
Lf h = (∇h) f = (0 0 1)(−x1, 2 x1 x2 + sin x2, 2 x2)^T = 2 x2
Lg Lf h = (∇(Lf h)) g = (0 2 0)(e^(2 x2), 1/2, 0)^T = 1 ≠ 0
Therefore, r = 2; there is one internal state.

μ1 = y = h(x) = x3,
μ2 = ẏ = Lf h(x) = 2 x2.
The internal state ψ1 must satisfy (∇ψ1) g = 0, i.e.
(∂ψ1/∂x1) e^(2 x2) + (1/2)(∂ψ1/∂x2) = 0.
A solution to this is ψ1 = 1 + x1 − e^(2 x2).

State transformation: z = (μ1, μ2, ψ1)^T, with Jacobian
∂z/∂x = [ 0 0 1 ; 0 2 0 ; 1 −2 e^(2 x2) 0 ],
which is nonsingular. The inverse transformation is
x1 = −1 + ψ1 + e^(μ2)
x2 = μ2 / 2
x3 = μ1

Normal Form:
μ̇1 = μ2
μ̇2 = 2 sin(μ2/2) + 2 μ2 (ψ1 − 1 + e^(μ2)) + u = A(μ, ψ) + u
u = −A(μ, ψ) + V gives μ̇2 = V.
ψ̇1 = ẋ1 − 2 e^(2 x2) ẋ2
    = −(ψ1 − 1 + e^(μ2)) + e^(μ2) u − 2 e^(μ2) (u/2 + sin(μ2/2) + μ2 (ψ1 − 1 + e^(μ2)))
    = −2 e^(μ2) sin(μ2/2) + (1 − ψ1 − e^(μ2))(1 + 2 μ2 e^(μ2))

Zero Dynamics:
Setting μ1 = 0 and μ2 = 0 gives ψ̇1 = −ψ1.
The zero dynamics are stable.
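A quick symbolic check of this example (a sketch only; the variable names are mine) verifies that ∇ψ1 · g = 0, so ψ̇1 is indeed independent of u, and that on the zero-output manifold the internal dynamics reduce to ψ̇1 = −ψ1.

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([-x1, 2*x1*x2 + sp.sin(x2), 2*x2])
g = sp.Matrix([sp.exp(2*x2), sp.Rational(1, 2), 0])

psi1 = 1 + x1 - sp.exp(2*x2)                 # internal state found above
grad_psi1 = sp.Matrix([psi1]).jacobian(x)

print(sp.simplify((grad_psi1 * g)[0]))        # 0: psi1-dot does not depend on u

psidot = sp.simplify((grad_psi1 * (f + g*u))[0])
print(psidot.diff(u))                         # 0 again, as expected

# Zero dynamics: y = x3 = 0 and y-dot = 2*x2 = 0, and then psi1 = x1.
p1 = sp.Symbol('psi1')
zero_dyn = psidot.subs({x2: 0, x3: 0}).subs(x1, p1)
print(sp.simplify(zero_dyn))                  # -psi1 -> stable zero dynamics
```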

EE539: Nonlinear and Multivariable Systems


Semester 08, 2011/2012

1 Nonlinear Systems Control


1.1 Feedback Linearization
1.1.1 Introduction

Consider the simple pendulum system with a torque input u applied in the positive direction of the angle θ. The system dynamics are u − mgr sin(θ) = I θ̈. Selecting x1 = θ and x2 = θ̇, we can write down the state space description of the system as:
ẋ1 = x2
ẋ2 = − (mgr/I) sin(x1) + u/I
y = x1
Here I is the inertia of the system. If the mass of the rod to which the mass m is connected is negligible, then I = mr^2.
Observe that the nonlinear term sin(x1) and the control signal u appear in the same line. Therefore, it is possible to directly compensate for the effect of the nonlinearity by appropriately changing the control signal u. If both the control signal and the nonlinearity affect the same state (i.e. appear in the same line) we say that the system is matched. Furthermore, even if the nonlinear term is unknown but matched, it is a matched uncertainty or an input perturbation. This is because we can think of the effect of the (possibly unknown) nonlinearity as a drift or a perturbation of the input, provided the system is matched.
The main problem with controlling the dynamics of a nonlinear system is that its dynamics depend on the states as well as the input. Therefore, it is not possible to do a pole placement design for a nonlinear system as it is. The pole placement problem is a linear problem; hence it requires the eigenvalues of the system to be invariant over the states. However, in a nonlinear system, the eigenvalues (of the linearized system) do change from state to state. One approach is to compensate for the nonlinearity somehow and make the system look linear for the design problem.
In this particular system, since the nonlinearity is known and matched, we can compensate for the nonlinearity and eliminate its effect by properly choosing u. Since the nonlinearity is state dependent, the control signal will have to depend on state feedback. In other words, it is possible to linearize the system through state feedback. Moreover, we can place the poles of the feedback linearized system. Suppose we apply the following control signal:
u(t) = a I x1(t) + b I x2(t) + mgr sin(x1(t)),
where the last term compensates the nonlinearity. Then the closed loop system is
ẋ1 = x2
ẋ2 = a x1 + b x2,
i.e.
(ẋ1; ẋ2) = [ 0 1 ; a b ] (x1; x2) = A x.
Then the characteristic equation of the closed loop system is |sI − A| = s^2 − b s − a = 0. We can
choose a and b to match the desired performance characteristics.
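A small numerical sketch of this design is given below. The parameter values (m = 1 kg, r = 1 m, g = 9.81 m/s^2) and the choice of desired poles at -2, -2 (i.e. a = -4, b = -4) are illustrative assumptions, not values from the notes.

```python
import numpy as np

m, r, grav = 1.0, 1.0, 9.81          # assumed illustrative parameters
I = m * r**2                         # point mass on a massless rod

a, b = -4.0, -4.0                    # s^2 - b s - a = s^2 + 4 s + 4 -> double pole at -2

def control(x1, x2):
    """Feedback-linearizing control: cancels gravity, then places the poles."""
    return a * I * x1 + b * I * x2 + m * grav * r * np.sin(x1)

A_closed = np.array([[0.0, 1.0],
                     [a,   b  ]])
print(np.linalg.eigvals(A_closed))   # both eigenvalues at -2

# Quick forward-Euler simulation of the nonlinear pendulum under this control.
x = np.array([2.0, 0.0])             # x1 = theta (rad), x2 = theta-dot
dt = 1e-3
for _ in range(int(10 / dt)):
    u = control(*x)
    xdot = np.array([x[1], (u - m * grav * r * np.sin(x[0])) / I])
    x = x + dt * xdot
print(x)                             # close to the origin after 10 s
```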

[Figure: single-link flexible joint. The link (angle x1, inertia I) is subject to the gravity torque mgr sin x1 and is connected through a torsional spring of stiffness k to the actuated side (angle x3, inertia J) driven by the input torque u.]

Now consider the single-link flexible joint. The hinge point can be thought of as a torsional spring, with the spring torque proportional to the twist, i.e. T = k(x1 − x3). Then the equations of motion are
I ẍ1 = k(x3 − x1) − mgr sin x1
J ẍ3 = k(x1 − x3) + u.
Here I is the inertia of the link carrying the mass and J is the inertia of the counterbalance (actuated) side to which the input torque u is applied. The angles x1 and x3 correspond to the rotation of the mass and the counterbalance respectively. By setting x2 = ẋ1 and x4 = ẋ3, we can obtain the system dynamics:
ẋ1 = x2
ẋ2 = − (mgr sin(x1))/I − (k/I)(x1 − x3)
ẋ3 = x4
ẋ4 = (k/J)(x1 − x3) + u/J

If x1 were always equal to x3 (a rigid joint) we would have the same system as before. However, this time the control signal does not directly influence the nonlinearity sin(x1); the control signal influences it only indirectly: u → ẋ4 → x4 → ẋ3 → x3 → ẋ2. Therefore, we say that the nonlinearity is unmatched. We need to find a method to stabilize and control such an unmatched system, preferably by converting the system to a matched form.
Controllable (or Controller) Canonical Realization of an LTI system:
If a single-input single-output, linear time-invariant (SISO LTI) system has the generic state space representation
ẋ1 = x2,  ẋ2 = x3,  ...,  ẋ_(n−1) = xn,
ẋn = −α0 x1 − α1 x2 − ... − α_(n−1) xn + u,
y = β0 x1 + β1 x2 + ... + β_(n−1) xn,
i.e. ẋ = A x + B u, y = C x with A in companion form (ones on the superdiagonal and last row (−α0, −α1, ..., −α_(n−1))), B = (0, ..., 0, 1)^T and C = (β0, β1, ..., β_(n−1)), then (up to re-ordering of the rows) it is said to be in the controller canonical form or the controllable canonical form. This system is guaranteed to be controllable (that is, the controllability matrix will have full rank) regardless of the values of the parameters α0, ..., α_(n−1) and β0, ..., β_(n−1). The transfer function of this system is
H(s) = (β_(n−1) s^(n−1) + ... + β1 s + β0) / (s^n + α_(n−1) s^(n−1) + α_(n−2) s^(n−2) + ... + α1 s + α0).

If a SISO LTI system is in the controller canonical form, its characteristic polynomial is s^n + α_(n−1) s^(n−1) + ... + α1 s + α0. If we apply the state feedback control law u = −K x with K = (k0 k1 ... k_(n−1)), the closed loop system ẋ = (A − BK) x keeps the companion structure, with last row
(−(α0 + k0), −(α1 + k1), ..., −(α_(n−2) + k_(n−2)), −(α_(n−1) + k_(n−1))).
Then the closed loop characteristic polynomial is
s^n + (α_(n−1) + k_(n−1)) s^(n−1) + ... + (α1 + k1) s + (α0 + k0).
If the desired characteristic polynomial is s^n + d_(n−1) s^(n−1) + ... + d1 s + d0, it is possible to match all the coefficients and place all the poles arbitrarily. This is the equivalent of "matched" for a SISO LTI system.
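The coefficient-matching argument is easy to check numerically. The sketch below uses a made-up third-order example (open-loop poles at -1, -2, -3, desired poles at -2) purely for illustration; the gains follow k_i = d_i - alpha_i with u = -K x.

```python
import numpy as np

# Illustrative open-loop coefficients: s^3 + 6 s^2 + 11 s + 6 (poles -1, -2, -3).
alpha = np.array([6.0, 11.0, 6.0])            # [alpha0, alpha1, alpha2]
n = alpha.size

A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)                    # shift structure of the companion form
A[-1, :] = -alpha
B = np.zeros((n, 1)); B[-1, 0] = 1.0

# Desired characteristic polynomial (s + 2)^3 = s^3 + 6 s^2 + 12 s + 8.
d = np.array([8.0, 12.0, 6.0])                # [d0, d1, d2]

K = d - alpha                                 # k_i = d_i - alpha_i, with u = -K x
A_closed = A - B @ K.reshape(1, -1)

print(np.round(np.linalg.eigvals(A), 3))        # open-loop poles -1, -2, -3 (any order)
print(np.round(np.linalg.eigvals(A_closed), 3)) # approximately all three poles at -2
```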

1.1.2 Input State Feedback Linearization


Input state feedback linearization provides a powerful approach to simplify nonlinear dynamics,
especially for systems with unmatched structures.
Example: Consider the system ẋ1 = −2 x1 + x2; ẋ2 = u. This is an unmatched linear system (i.e. not in the controller canonical form). However, if we set u = a x1 + b x2 we get the closed loop system
(ẋ1; ẋ2) = [ −2 1 ; a b ] (x1; x2),
which gives the characteristic polynomial (s + 2)(s − b) − a = 0. So, if we set a = 0 and b = −2 we get s = −2, −2.
Example: Consider the system ẋ1 = x2; ẋ2 = u. This is a matched linear system (i.e. in the controller canonical form). If we set u = k1 x1 + k2 x2 we get the closed loop system
(ẋ1; ẋ2) = [ 0 1 ; k1 k2 ] (x1; x2),
which gives the characteristic polynomial s(s − k2) − k1 = 0. So, if we set k1 = −4 and k2 = −4 we get s = −2, −2.
Extension to the tracking problem:
Suppose x1 in the previous example is tracking an external reference xr = xr(t). The error is e1 = x1 − xr; let e2 = x2 − ẋr. Then,
ė1 = ẋ1 − ẋr = x2 − ẋr = e2;   ė2 = ẋ2 − ẍr = u − ẍr.
Therefore, this system (the error dynamics of the tracking problem) is in the controller canonical form with some extra matched terms. If we set u = −4 e1 − 4 e2 + ẍr, we get ė2 = −4 e1 − 4 e2, and similar to above both poles will be at −2 and e1, e2 → 0 as t → ∞.
Non-controller canonical form tracking problem:
ẋ1 = −2 x1 + x2
ẋ2 = u
Again, we are tracking the external reference xr and expect x1 → xr as t → ∞. Define the error variables e1 = x1 − xr, e2 = x2 − ẋr. Then,
ė1 = ẋ1 − ẋr = −2 x1 + x2 − ẋr = −2(e1 + xr) + e2;   ė2 = ẋ2 − ẍr = u − ẍr.
Therefore we obtain the following system, which is not in the controller canonical form:
ė1 = −2 e1 + e2 − 2 xr
ė2 = u − ẍr
The control input u can be used to compensate for ẍr, but the −2 xr term, which sits in the ė1 dynamics, cannot be controlled with u. If we use the control law u = −2 e2 + ẍr, we get
(ė1; ė2) = [ −2 1 ; 0 −2 ] (e1; e2) + (−2; 0) xr.
If xr = 0 then e1, e2 → 0 as t → ∞. But if xr ≠ 0, at steady state
0 = [ −2 1 ; 0 −2 ] (e1(∞); e2(∞)) + (−2; 0) xr  ⇒  e2(∞) = 0,  e1(∞) = −xr.
Therefore, e2(∞) → 0 but e1(∞) does not go to 0 as t → ∞. The system is stable but not asymptotically stable (in the tracking error).
Transforming an Unmatched System to a Matched System
Example: Consider the unmatched system
ẋ1 = a x2 + sin(x1)
ẋ2 = u / J
Clearly, the input does not match the nonlinearity. We can define the state transformation z1 = x1 and z2 = a x2 + sin(x1). Then, x1 = z1 and x2 = (z2 − sin(z1)) / a. Therefore
ż1 = ẋ1 = a x2 + sin(x1) = z2
ż2 = a ẋ2 + ẋ1 cos(x1) = a u / J + ẋ1 cos(x1) = a u / J + z2 cos(z1).
If we use the control law u = (J/a)[V − z2 cos(z1)], we get (ż1; ż2) = (z2; V). The new control input V can be used for pole placement of the state feedback linearized system. We can use this system in the tracking problem with z1 = x1, e1 = z1 − xr and e2 = z2 − ẋr. Then ė1 = ż1 − ẋr = z2 − ẋr = e2 and ė2 = ż2 − ẍr = V − ẍr. If we set V = −4 e1 − 4 e2 + ẍr, we get ė2 = ż2 − ẍr = −4 e1 − 4 e2. The poles will be at −2 and e1, e2 → 0, so z1 → xr. Therefore, the overall control law for the tracking problem is
u = (J/a)[ −4(x1 − xr) − 4(a x2 + sin(x1) − ẋr) + ẍr − (a x2 + sin(x1)) cos(x1) ],
where z1 = x1 and z2 = a x2 + sin(x1).

The strategies discussed up to now were rather ad hoc and can only be applied to second order systems. The problem is: how can they be generalized to higher order and more complex systems?
Mathematical Tools
Directional Derivative (Lie Derivative)
Let h(x) = h(x1, ..., xn) be a scalar function of vectors in R^n. The directional derivative, or Lie derivative ("Lie" is pronounced "Lee", named after the Norwegian mathematician Marius Sophus Lie (1842-1899)), of h in the direction of a unit vector f = (f1, f2, ..., fn)^T, with ||f|| = sqrt(f1^2 + ... + fn^2) = 1 (f being of unit magnitude is not necessary for our application, so without loss of generality and validity we relax that condition when it is applied later), is denoted by Lf h and is given by
Lf h = (∇h) f = ( ∂h/∂x1  ∂h/∂x2  ...  ∂h/∂xn ) (f1, f2, ..., fn)^T.
We have the following notation for higher order directional (Lie) derivatives:
L^0_f h = h
L^1_f h = Lf h = (∇h) f
L^2_f h = Lf (Lf h) = (∇(Lf h)) f
...
L^n_f h = Lf (L^(n−1)_f h) = (∇(L^(n−1)_f h)) f
Lg Lf h = (∇(Lf h)) g
Lie Bracket and the Adjoint Operator
Let f(x) = f(x1, ..., xn) and g(x) = g(x1, ..., xn) be two vector fields on R^n. Then we denote the adjoint action of f on g by adf g. It is the Lie bracket of f and g in the order [f, g]:
adf g = [f, g] = (∇g) f − (∇f) g.
Note that this operation is non-commutative because [f, g] = −[g, f]. Also note that [f, g] results in another vector field on R^n. The higher order Lie brackets are defined recursively as
ad^k_f g = [f, ad^(k−1)_f g].
(This illustrates the need for the two notations!)
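Both operators are straightforward to implement symbolically. The helper functions below are my own sketch (not from the notes), applied to a second-order system of the kind used in the worked example later in these notes.

```python
import sympy as sp

def lie_derivative(h, f, x):
    """L_f h = (grad h) . f for a scalar field h and vector field f over states x."""
    return (sp.Matrix([h]).jacobian(x) * f)[0]

def lie_bracket(f, g, x):
    """ad_f g = [f, g] = (grad g) f - (grad f) g for vector fields f, g over states x."""
    return g.jacobian(x) * f - f.jacobian(x) * g

x1, x2 = sp.symbols('x1 x2')
a, J = sp.symbols('a J', positive=True)
x = sp.Matrix([x1, x2])
f = sp.Matrix([a*x2 + sp.sin(x1), 0])
g = sp.Matrix([0, 1/J])

print(lie_derivative(x1, f, x))           # a*x2 + sin(x1)   (= L_f h with h = x1)
print(sp.simplify(lie_bracket(f, g, x)))  # Matrix([[-a/J], [0]])   (= ad_f g)
```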
Diffeomorphism
A function Φ: R^n → R^n, in a region Ω, is a C^k diffeomorphism if
1. Φ is k times differentiable,
2. its inverse Φ^(-1) exists, and
3. Φ^(-1) is k times differentiable.
In this course, when we simply say that a function is a diffeomorphism, we assume that it is differentiable infinitely many times, or at least differentiable as many times as we would like it to be. In this context, differentiability of Φ = (Φ1(x), ..., Φn(x))^T means that all the partial derivatives of all components of Φ exist and can be arranged in the Jacobian
∂Φ/∂x = [ ∂Φ1/∂x1 ... ∂Φ1/∂xn ; ... ; ∂Φn/∂x1 ... ∂Φn/∂xn ].
We can use a diffeomorphism (this is the main point of defining the term) to transform a nonlinear system to a new set of states (i.e. a change of variables in a nonlinear state space system). Given a state space system ẋ = f(x) + g(x) u, y = h(x), define z = Φ(x). Then, if Φ is a diffeomorphism,
ż = (∂Φ/∂x) ẋ = (∂Φ/∂x)(f(x) + g(x) u) = (∂Φ/∂x)(f(Φ^(-1)(z)) + g(Φ^(-1)(z)) u).
Therefore, the system can be written in terms of z:
ż = f*(z) + g*(z) u
y = h*(z);
f*(z) = (∂Φ/∂x) f(Φ^(-1)(z)),  g*(z) = (∂Φ/∂x) g(Φ^(-1)(z))  and  h*(z) = h(Φ^(-1)(z)).
Involutivity
A linearly independent set of (vector valued) functions F = {f1, ..., fm} (with fi: R^n → R^n for i = 1, ..., m; m is not necessarily finite) is involutive if for every pair of functions fi and fj we get [fi, fj] ∈ span F. In other words, the Lie bracket of any pair of functions in F can be expressed as a linear combination of the functions in F. That is, for every pair of functions fi and fj there are scalar functions αijk: R^n → R such that
[fi, fj](x) = Σ_(k=1)^(m) αijk(x) fk(x).
A set of constant vectors is involutive because
[fi, fj] = (∇fj) fi − (∇fi) fj = 0 − 0 = Σ_k 0 · fk(x).
A set consisting of a single vector field is involutive because
[f, f] = (∇f) f − (∇f) f = 0.

Condition for input state feedback linearization


A time-invariant SISO (nonlinear) system x = f (x) + g(x)u is input state feedback linearizable if and only if there is a region Rn such that
{
}
1. the set of vectors g, adf g, . . . , adn1
g is linearly independent,
f
{
}
2. the set of vectors g, adf g, . . . , adn1
g is involutive,
f
Note that the second condition is always satised by a linear system. The rst condition
reduces to the controllability condition for LTI SISO system x = Ax + Bu. Note that for a
SISO LTI system
adf g = adAx B = (B)(Ax) ((Ax)) B = AB;
| {z }
| {z }
0

ad2f g = adAx (adAx B) = adAx (AB) = ((AB))(Ax) ((Ax))(AB) = A2 B


| {z }
| {z }
0

Therefore, adkf g = (1)k Ak B. Thus,


{

B,

adAx B,

...,

} {
adn1
B
= B,
Ax

AB,

...,

(1)n1 An1 B

Since the sign is immaterial for the linear independence, we see that the set of vectors obtained
implies the controllability condition for a SISO LTI system.
7

Procedure
1. Construct the set of vector fields {g, adf g, ..., ad^(n−1)_f g} and check the linear independence (i.e. controllability) and involutivity conditions.
2. Find the first state transformation z1 = z1(x1, ..., xn) such that
(∇z1) (ad^i_f g) = 0,  for i = 0, 1, ..., n − 2,
(∇z1) (ad^(n−1)_f g) ≠ 0.
Then we can write down the transformed state vector as
z = (z1, z2, ..., zn)^T = (z1, Lf z1, L^2_f z1, ..., L^(n−1)_f z1)^T = (z1, (∇z1) f, (∇z2) f, ..., (∇z_(n−1)) f)^T.
3. The linearizing control input is u = α(x) + β(x) V, where V is called the synthetic input. V makes it possible to apply an external control signal to the feedback linearized system. Here
α(x) = − (L^n_f z1) / (Lg L^(n−1)_f z1);   β(x) = 1 / (Lg L^(n−1)_f z1).

Example
ẋ1 = a x2 + sin(x1)
ẋ2 = u / J
So f = (a x2 + sin(x1), 0)^T and g = (0, 1/J)^T. Then
adf g = (∇g) f − (∇f) g = 0 − [ cos(x1) a ; 0 0 ] (0, 1/J)^T = (−a/J, 0)^T.
Therefore, (g  adf g) = [ 0 −a/J ; 1/J 0 ]. Clearly the columns are linearly independent, and they are involutive since they are constant.
Let z1 = z1(x1, x2) be the transformed variable. Then
(∇z1) (ad^0_f g) = (∇z1) g = ( ∂z1/∂x1  ∂z1/∂x2 )(0, 1/J)^T = (1/J) ∂z1/∂x2, which is set to 0,
and
(∇z1) (adf g) = ( ∂z1/∂x1  ∂z1/∂x2 )(−a/J, 0)^T = −(a/J) ∂z1/∂x1, which must be nonzero.
Therefore, the choice of z1 is constrained but otherwise arbitrary. We can pick the easiest possible choice that matches the requirements,
z1 = x1.
Then we see that
z2 = Lf z1 = (∇z1) f = (1  0)(a x2 + sin(x1), 0)^T = a x2 + sin(x1).

Control law: u = (V − L^2_f z1) / (Lg Lf z1).
Lg Lf z1 = Lg (Lf z1) = Lg z2 = (∇z2) g = (cos(x1)  a)(0, 1/J)^T = a/J
L^2_f z1 = Lf (Lf z1) = Lf z2 = (∇z2) f = (cos(x1)  a)(a x2 + sin(x1), 0)^T = a x2 cos(x1) + cos(x1) sin(x1)
Therefore,
u = [ V − a x2 cos(x1) − cos(x1) sin(x1) ] / (a/J) = (J/a)[ V − (a x2 + sin(x1)) cos(x1) ],
and the system becomes ż1 = z2, ż2 = V, where z1 = x1 and z2 = a x2 + sin(x1).
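As a sanity check of this design, here is a minimal simulation sketch. The plant parameters (a = 1, J = 0.5), the reference xr(t) = sin t, and the outer-loop choice V = -4 e1 - 4 e2 + ẍr (poles at -2, -2, as in the earlier tracking discussion) are illustrative assumptions.

```python
import numpy as np

a, J = 1.0, 0.5                     # assumed plant parameters
dt, T = 1e-3, 10.0

def reference(t):
    return np.sin(t), np.cos(t), -np.sin(t)     # xr, xr-dot, xr-ddot

x1, x2 = 0.5, 0.0                   # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    xr, xr_d, xr_dd = reference(t)
    z1, z2 = x1, a * x2 + np.sin(x1)            # transformed states
    e1, e2 = z1 - xr, z2 - xr_d
    V = -4.0 * e1 - 4.0 * e2 + xr_dd            # pole placement at s = -2, -2
    u = (J / a) * (V - z2 * np.cos(z1))         # linearizing control law

    # Plant: x1-dot = a*x2 + sin(x1), x2-dot = u/J  (forward Euler step)
    x1, x2 = x1 + dt * (a * x2 + np.sin(x1)), x2 + dt * (u / J)

print(abs(x1 - np.sin(T)))          # small tracking error after 10 s
```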

For higher order and more complex systems, the above approach becomes more effective and, in fact, simpler to use.
The involutivity and controllability conditions are easy to formulate and check.
Feedback linearization is different from linear approximation (i.e. linearization of a system about an operating point). Feedback linearization is valid in the entire region in which the state space model is well defined, whereas a linear approximation is only valid in a small neighborhood of the point around which it is taken.
Feedback linearization may lose its effectiveness in the presence of system uncertainties.
The input-state feedback linearization of the single-link flexible joint holds globally (try it!).
The system can be more general than the cascaded structure considered here.

EE539: NONLINEAR AND MULTIVARIABLE SYSTEMS - HOMEWORK #02


NONLINEAR CONTROL
DUE ON WEDNESDAY, MAY 10TH , 2012

(1) A single link manipulator with flexible joints and negligible damping is represented by a fourth order model of the form ẋ = f(x) + g(x)u, where
f(x) = ( x2, −a sin x1 − b(x1 − x3), x4, c(x1 − x3) )^T,   g(x) = ( 0, 0, 0, d )^T,
with a, b, c and d positive constants.
(a) What are the unforced (i.e. with no control input) equilibrium points of the system?
(b) Show that this system is controllable.
(c) Show that this system is involutive.
(d) Find the new state variables z = (z1 z2 z3 z4) such that the system is converted to the full controllable canonical form. Also find the appropriate linearizing control law.
(e) From direct observation, find the state feedback control law for the new (feedback linearized) system such that the closed loop poles are placed at −2.
(f) Find the controllability matrix for the feedback linearized system that you found in part (d).
(g) Using Ackermann's formula, K = (0 ... 0 1) M^(-1) P(A) with M the controllability matrix, find the state feedback control law for the new (feedback linearized) system such that the closed loop poles are placed at −2.
(2) Consider the system of the form ẋ = f(x) + g(x)u, where f(x) = ( a sin x2, −x1^2 )^T and g(x) = ( 0, 1 )^T.
(a) Check the system for controllability and involutivity. (Note that the system is not controllable whenever cos x2 = 0. This means that the system is not globally controllable.)
(b) Find the new state variables z = (z1 z2) such that the system is converted to the full controllable canonical form. Also find the appropriate linearizing control law.
(3) Consider the system of the form ẋ = f(x) + g(x)u, where f(x) = ( x2^n, 0 )^T and g(x) = ( 1, 1 )^T. Suppose the output is given by y = x1. Our task is to regulate y at zero, that is, to have y → 0 as t → ∞.
(a) Obtain the relative degree of the above system using direct differentiation of the output (without using the input-output linearization procedure).
(b) Assign an appropriate control law to u such that the required control task is achieved.
(c) Considering the dynamics of x2 to be the internal dynamics of the system, obtain the zero dynamics and check the conditions (on n) for stability of the zero dynamics using an appropriate Lyapunov function.


(4) Consider the system of the form ẋ = f(x) + g(x)u, where f(x) = ( −x1, x3, x1 x3 )^T and g(x) = ( (2 + x3^2)/(1 + x3^2), 0, 1 )^T. Suppose the output is given by y = x2. Our task is to regulate y at zero, that is, to have y → 0 as t → ∞.
(a) Using the input-output linearization procedure, find the relative degree of the system.
(b) Find the new state variables μ1, μ2 such that a second order canonical form is obtained. Derive the linearizing control law for the system.
(c) Find the internal dynamics of the system. [HINT: The solution of the partial differential equation
((2 + x3^2)/(1 + x3^2)) ∂Φ/∂x1 + ∂Φ/∂x3 = 0 is Φ = −x1 + x3 + tan^(-1) x3.]
(d) Find the Jacobian J of the transformation and check if it is invertible.
(e) Check the stability of the system under output regulation through the zero dynamics.


0
x2
(5) Consider the system of the form x = f (x) + g(x)u, where f (x) = sin x1 + x3 ; g(x) = 0 . Our
1
x1
task is to track xr (t) using x1 (t). Derive the intermediate ctitious control laws and the nal control
law for u according to the back stepping design using Lyapunov method.
(6) Consider the system ẋ1 = x2 + x1 (1 − x1^2 − x2^2) and ẋ2 = −x1 + x2 (1 − x1^2 − x2^2). Using the Lyapunov function V = (x1^2 + x2^2 − 1)^2:
(a) Obtain V̇.
(b) Check its sign definiteness (negative definite, negative semi-definite, positive definite, positive semi-definite, etc.).
(c) Sketch the region Ω_c = {x : V(x) ≤ c}, with 0 < c < 1, on the state space diagram.
(d) Apply the invariant set theorem to the local region Ω_c and find the level set, the system dynamics within the level set, and the invariant set.
(e) Using the invariant set theorem, prove the existence of a limit cycle in Ω_c.
(7) Consider the system ẋ = θ f(x) + u, where θ is an unknown constant. Our task is to track xr(t) using x(t).
(a) Using an appropriate control law and an update law for the parameter to be estimated, derive the state space model for the tracking error e = x − xr and the parameter error θ̃ = θ̂ − θ in terms of e, xr and θ̃.
(b) Using the Lyapunov function V = (e^2 + θ̃^2)/2, determine the sign definiteness of V̇.
(c) Using the invariant set theorem, find the conditions on f(xr) for the system to achieve asymptotic stability in the tracking task.
(d) What happens to the system if the above stability conditions on f(xr) are not met?

(8) Consider the system ẍ + α1(t) |x| (ẋ)^2 + α2(t) x^3 cos(2x) = V, where |α1(t)| ≤ 1 and 1 ≤ α2(t) ≤ 5.
(a) Derive the state space model of the system.
(b) If our task is to track xr(t) using x(t), derive the error dynamics of the system, where e1 = x1 − xr and e2 = ẋ1 − ẋr.
(c) If the extended tracking error is defined as s = c e1 + e2, find the required bounds (as defined in class) on the uncertainty term and on b; that is, find the maximum uncertainty bound and b_min.
(d) What is the control input that achieves sliding mode control for the above system?
(9) If the membership functions for both the fuzzy variables e and ė are as shown in Figure 1:

[Figure 1: membership functions Negative and Positive for both the fuzzy variables e and ė.]


(a) If the fuzzy rule base for the fuzzy controller is as follows:
R1: If e > 0 and ė > 0 then u = 1
R2: If e < 0 and ė > 0 then u = 0
R3: If e < 0 and ė < 0 then u = −1
R4: If e > 0 and ė < 0 then u = 0,
find the value of u for an error of 0.8 and a rate of error of 0.2 using the max-min method.
(b) If the control signal u is a fuzzy variable with the membership functions shown in Figure 2 and the rule base is as follows:
R1: If e > 0 and ė > 0 then u = Positive
R2: If e < 0 and ė > 0 then u = Zero
R3: If e < 0 and ė < 0 then u = Negative
R4: If e > 0 and ė < 0 then u = Zero,
find the value of u for an error of 0.8 and a rate of error of 0.2 using the max-min method. Use the center of gravity method for defuzzification.
[Figure 2: membership functions Negative, Zero and Positive of the control signal u.]
