CONTENTS

Chapter 1: Introduction --------------------------------------------------------- 4
    1.1 Introduction ------------------------------------------------------------ 4
    1.2 Previous Work ----------------------------------------------------------- 5
    1.3 Objectives of the Dissertation ------------------------------------------ 6
    1.4 Outline of the Dissertation --------------------------------------------- 7

Chapter 2: Modeling of the Separately Excited DC Machine ------------------------ 7
    2.1 Newtonian Method of Physical System Modeling ---------------------------- 7
    2.2 Modeling of the DC Machine Using the Newtonian Method of Modeling ------- 8

Chapter 3: Nonlinear Control Design Tools for Stabilization --------------------- 10
    3.1 Feedback Linearization -------------------------------------------------- 10
        3.1.1 Input-State Feedback Linearization -------------------------------- 11
        3.1.2 Input-Output Feedback Linearization ------------------------------- 15
    3.2 Zero Dynamics Stability Analysis ---------------------------------------- 17
    3.3 Definition of Lyapunov Stability ---------------------------------------- 17
    3.4 Control Lyapunov Function ----------------------------------------------- 20
    3.5 Backstepping ------------------------------------------------------------ 24
        3.5.1 Integrator Backstepping ------------------------------------------- 24
        3.5.2 Stabilization of Cascade Systems ---------------------------------- 34
        3.5.3 Block Backstepping with Zero Dynamics Analysis -------------------- 42
        3.5.4 Systematic Design Procedure --------------------------------------- 48
            3.5.4.1 Strict-feedback systems ------------------------------------- 48
            3.5.4.2 Semi-strict feedback forms ---------------------------------- 53
            3.5.4.3 Block-strict-feedback systems ------------------------------- 57
    3.6 Conclusion -------------------------------------------------------------- 60

Chapter 4: Adaptive Backstepping Control of the Separately Excited DC Machine --- 61
    4.1 Implementation of Adaptation Mechanism via Dynamic Feedback ------------- 61
    4.2 Adaptive Integrator Backstepping Control Algorithm ---------------------- 66
    4.3 Adaptive Backstepping Control of the Separately Excited DC Machine ------ 73
        4.3.1 Formation of Adaptive Backstepping Control ------------------------ 73
        4.3.2 Robustification of the Adaptive Backstepping Control Laws --------- 76
    4.4 Simulation Results ------------------------------------------------------ 79
    4.5 Conclusion -------------------------------------------------------------- 79

Chapter 5: Discussion and Conclusions ------------------------------------------- 80
    5.1 Discussion and Conclusions ---------------------------------------------- 80
    5.2 Scope of Future Research ------------------------------------------------ 80

Chapter 6: Bibliography --------------------------------------------------------- 80
















Chapter 1

Introduction
Since the early 1970s, considerable effort has been devoted to the development of state-space models of nonlinear control systems. Although linear control theory has produced a variety of powerful control algorithms and has a long tradition of successful industrial application, in many practical situations it has been found inadequate to address the control problem, for reasons such as increasingly stringent performance requirements and large operating ranges, which invalidate the use of linearized models. Many physical systems have so-called hard nonlinearities, such as Coulomb friction, saturation, hysteresis, backlash, and dead zones. These nonlinearities are non-smooth or discontinuous functions of the state variables, and it is almost impossible to obtain faithful linear approximations of them in real-time operation. If they are not treated in an appropriate manner they may lead to undesirable phenomena in the control system, such as instability and limit cycles. It may therefore be necessary to apply nonlinear control design algorithms to obtain acceptable performance from such systems. The design of nonlinear controllers is not always a tedious job for the designer. For example, in robot control it is often easier to design a stabilizing nonlinear controller than a stabilizing linear controller. Also, with the advances of low-cost microprocessors, it is neither difficult nor costly to implement nonlinear controllers. All these factors have made nonlinear control increasingly popular, and the field has grown quickly during the past twenty years.
As a typical dynamical system, the separately excited DC machine has attracted much attention from control engineers over the last three decades. It is an essential element of various physical systems, and its dynamics depend upon its operating conditions.
Direct current (DC) motors have variable characteristics and are used extensively in variable-speed drives. A DC motor can provide a high starting torque, and speed control over a wide range is also possible. Why do we need a motor speed controller? For example, if a DC motor in a robot is simply driven with constant power, the robot will never be able to maintain a steady speed: it will go slower over carpet, faster over smooth flooring, slower uphill, faster downhill, and so on. It is therefore important to design a controller that regulates the DC motor at the desired speed.
The DC motor plays a significant role in modern industry. There are several types of applications in which the load on the DC motor varies over a speed range; such applications may demand high speed-control accuracy and good dynamic response.
In home appliances, washers, dryers, and compressors are good examples. In automotive applications, fuel-pump control, electronic steering control, engine control, and electric-vehicle control are good examples. In aerospace there are a number of applications, such as centrifuges, pumps, robotic-arm controls, and gyroscope controls.
Considerable research effort has been devoted, from the nonlinear control-theoretic point of view, to the control problem of the separately excited DC machine; mainly the position control problem of the rotor has been addressed by control engineers. In this dissertation we address this control problem with the aid of the backstepping control design methodology.



1.2 Previous Work:
A Brief Survey on Backstepping as a Nonlinear Control Design Tool for Stabilization:

Nonlinear feedback control has been the topic of hundreds of publications, several monographs, and quite a few comprehensive textbooks, such as Khalil (2002), Slotine and Li (1991), and Sastry (1993). The main part of this survey deals with one particular nonlinear control design tool, namely backstepping. Since the early 1980s, differential-geometric concepts have become popular in the literature on nonlinear control systems. In 1989 Isidori [9] presented the concept of feedback linearization, an efficient and powerful control design algorithm for nonlinear systems. Feedback linearization is, however, not a very flexible design approach, and in many cases the feedback control law obtained with it results in the cancellation of useful nonlinearities present in the system. The shortcomings of feedback linearization can be overcome by the use of the backstepping control design algorithm.
The origin of backstepping is not entirely clear, owing to its simultaneous and often implicit appearance in several papers at the end of the 1980s, as in the work of Tsinias [13], Byrnes and Isidori [14], and Sontag and Sussmann [15]. Kokotovic and Sussmann studied the importance of SPR transfer functions in the literature on integrator backstepping. The concepts of passivity were extended to nonlinear design tools by Ortega [16]. However, it is fair to consider the work of Prof. Petar V. Kokotovic and coworkers to be pioneering in the field of backstepping. The 1991 Bode lecture at the IEEE Conference on Decision and Control, published in Kokotovic (1992), was devoted to the evolving subject, and the year after, Kanellakopoulos et al. (1992) presented a mathematical "toolkit" for designing control laws for various nonlinear systems using backstepping. During the following years several textbooks were published; among them, Nonlinear and Adaptive Control Design by Krstic et al. [1] has become an excellent reference for novice as well as experienced researchers in the field of nonlinear control design. Prof. Petar V. Kokotovic surveyed and published a progress report on backstepping and other nonlinear control design tools at the 1999 IFAC World Congress in Beijing [15]. In [17] Kanellakopoulos, Kokotovic, and Morse proposed a nonlinear toolkit for employing backstepping in a systematic way. Kanellakopoulos [10] introduced the idea of using integral control action along with the backstepping control, which made a considerable improvement in the steady-state performance of the controller.





1.3 Objective of this Dissertation:
The primary objective of this dissertation is the position control problem of a separately excited DC motor.

1.4 Outline of the Dissertation:

This dissertation is organized into two parts. Part one deals with generic theory, while part two deals with the specific application. Part one consists of the following chapters:

Chapter 2 deals with the mathematical modeling of the separately excited DC machine.

Chapter 3 presents a detailed discussion of the available nonlinear control design tools and compares their performance by means of simulation.

Chapter 4 presents a detailed discussion of the adaptive integrator backstepping control algorithm and then applies it to the position control problem of the DC motor.

Finally, Chapter 5 concludes the dissertation and discusses future directions of research.



Chapter 2
Mathematical Modeling of the Separately Excited DC Machine


Dynamical systems are described by mathematical models consisting of differential equations whose solutions indicate how the variables of the system change with time. In brief, the mathematical model of a system is a tool we can use to analyze various aspects of the system's performance without performing a real-world experiment. It is therefore desirable to use a mathematical model that produces a faithful representation of the system's behaviour irrespective of its operating conditions. Formulating an exact model of a physical system is, however, a tedious job, and it may happen that the model becomes too complex to deal with. An important aspect of physical system modeling is therefore to maintain an optimal compromise between the accuracy of the model and the cost of computational effort. Mathematical models of mechanical systems can be obtained using the Newtonian method of modeling. In this chapter we introduce the basic concept of Newtonian modeling in section 2.1. In section 2.2 the separately excited DC machine is modeled using the Newtonian method.

2.1 Newtonian Method of Physical System Modeling:
The basic methodology for the formulation of differential equations was first proposed by Sir Isaac Newton (1642-1727). Inspired by the works of Galileo, he discovered that the application of a force changes the velocity of a body. The rate of change of velocity, or acceleration, is proportional to the applied force, and the mass of the body plays the role of the proportionality constant, i.e.

F = ma = m \frac{dv}{dt}

and for a rotational system we can write

T = J\alpha = J \frac{d\omega}{dt}

These two equations give us a way of writing equations for mechanical systems, and since acceleration is the second derivative of position with respect to time, the resulting equation is a differential equation. Thus, following Newton, a simple methodology for understanding the dynamics of any system materialized: we need only take account of the opposing tendencies in a system, since motion or change in any system is the result of these opposing tendencies.

Basically, the motion of any system is a consequence of two kinds of forces: the applied force (denoted by F) and the constraint force (denoted by F^c). We can write Newton's equation for each mass point: the acceleration of the j-th mass point is proportional to the difference between the applied force and the constraint force acting on it,

m_j \ddot{x}_j = F_j - F_j^c        (2.1)

where F_j and F_j^c are the applied force and the constraint force acting on the j-th point mass. This is the basic formalism of the Newtonian modeling method. For rotational motion this equation can be rewritten as

J_j \ddot{\theta}_j = T_j - T_j^c        (2.2)

These two equations are the basis of the Newtonian formalism for mechanical systems.
2.2 Modeling of the DC Machine Using the Newtonian Method of Modeling:

The dynamics of the armature circuit are given by the following differential equation:

L \frac{di}{dt} + Ri = V - E_b

In a practical servo we generally use a high R/L ratio to suppress the electrical transient in the armature circuit; a high R/L ratio also reduces the electrical time constant of the armature circuit. As a result we can approximate the armature circuit as a purely resistive one,

Ri = V - E_b

The dynamics of the mechanical part are given by

J \frac{d\omega}{dt} + B\omega = T, \qquad T = K_t i

where i can be expressed as

i = \frac{V - E_b}{R}

and E_b is the back emf induced in the armature circuit. So, if we neglect the electrical dynamics of the armature circuit and remodel it as a simple resistive circuit, the dynamics of the mechanical system can be expressed by the following differential equation:

J\ddot{\theta} + B\dot{\theta} = u

where u is a generic input proportional to the input voltage V, and B represents all the frictional and resistive torques acting on the rotor; B accounts for all the frictional losses, including Coulomb friction, viscous friction, and static friction.
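As a quick numerical illustration of this reduced-order model, the short script below integrates the simplified dynamics for a constant applied voltage (a minimal sketch: the parameter values and the back-emf constant K_b used here are arbitrary placeholders, not data from this dissertation).

```python
# Minimal sketch: simulate the reduced-order DC-motor model J*theta_dd + B*theta_d = u,
# where u = Kt*(V - Kb*omega)/R after the armature inductance is neglected.
# All numerical values below are illustrative placeholders, not machine data.

J, B = 0.01, 0.1             # rotor inertia [kg m^2], viscous friction [N m s]
Kt, Kb, R = 0.05, 0.05, 1.0  # torque constant, back-emf constant, armature resistance
V = 12.0                     # constant applied armature voltage [V]

dt, T_end = 1e-3, 2.0
theta, omega = 0.0, 0.0
for _ in range(int(T_end / dt)):
    u = Kt * (V - Kb * omega) / R   # torque produced by the resistive armature circuit
    alpha = (u - B * omega) / J     # J * omega_dot = u - B * omega
    omega += alpha * dt             # forward-Euler integration
    theta += omega * dt

print(f"steady-state speed ~ {omega:.2f} rad/s, final position {theta:.2f} rad")
```

The speed printed at the end is simply the balance point between the electrical torque and the friction torque.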













Chapter 3
Nonlinear Control Design Tools for Stabilization



This chapter introduces the feedback linearization and backstepping control design methodologies. Section 3.1 briefly describes the two basic types of feedback linearization. Section 3.2 discusses the zero dynamics of a system. In section 3.3 we review Lyapunov theory in brief, and in section 3.4 the idea of Lyapunov-based control design is introduced. Section 3.5 introduces the backstepping technique. We discuss the different control design methodologies with examples and simulation results, make a comparative study of the available nonlinear control design algorithms, and discuss their different features as well as their conditions of applicability to nonlinear systems. In this chapter we give an extensive discussion of the different backstepping techniques, because the aim of this dissertation is to study the application of backstepping design to the separately excited DC machine.


3.1 Feedback Linearization:

Feedback Linearization is a control system design methodology especially applicable for
nonlinear systems. The key idea of this approach is to algebraically transform a Nonlinear System
dynamics into an equivalent linear system, such that linear control technologies can be applied on the
transformed system. The Feedback Linearization methodology is entirely different from the Jacobian
linearization technique. Feedback linearization technique can be viewed as a methodology to transform a
complex nonlinear dynamics into an equivalent simpler linear model. The Feedback Linearization
technique has two subsections one is Input-State Feedback Linearization and the other is Input-Output
Feedback Linearization.




3.1.1 Input-State Feedback Linearization:

Let us consider a nonlinear single-input system as described in equation (3.1.1) below:

\dot{x} = f(x) + g(x)u        (3.1.1)

Basically, input-state linearization is a two-step control design methodology. First, it finds a state transformation z = z(x) and an input transformation u = u(x, v), by means of which the original nonlinear system can be represented by an equivalent linear system. In the second step, some linear technique is used to design v.
The input-state design methodology is now applied to a second-order system:

\dot{x}_1 = -2x_1 + \sin x_2        (3.1.2.a)
\dot{x}_2 = x_1^4 \cos x_2 + 2u        (3.1.2.b)

A linear control law can stabilize the system locally around its equilibrium point (0, 0), but in a larger region a linear control law is not able to serve the purpose properly. A specific difficulty is the nonlinearity in the first equation, which cannot be directly cancelled by the control input u.

However, if we consider the new set of state variables

z_1 = x_1        (3.1.3.a)
z_2 = \sin x_2        (3.1.3.b)

then the new set of state equations is

\dot{z}_1 = -2z_1 + z_2        (3.1.4.a)
\dot{z}_2 = z_1^4 (1 - z_2^2) + 2u\sqrt{1 - z_2^2}        (3.1.4.b)

The equilibrium point after the state transformation is again the origin of the new coordinate system. Now a control law of the following form can cancel the nonlinear terms present in the state equation:

u = \frac{1}{2\sqrt{1 - z_2^2}}\left( v - z_1^4 (1 - z_2^2) \right)        (3.1.5)

where v is an equivalent input to be designed using a linear control law. This leads to the linear input-state relation

\dot{z}_1 = -2z_1 + z_2        (3.1.6.a)
\dot{z}_2 = v        (3.1.6.b)

Thus, through the state transformation (3.1.3) and the input transformation (3.1.5), the complex nonlinear system (3.1.2) has been transformed into an equivalent linear system (3.1.6), and the problem of stabilizing the original system with the control input u has been transformed into the problem of stabilizing the new dynamics (3.1.6) using the new input v.

Since the new linear system is fully state controllable, we can apply a well-known state feedback control law to the system (3.1.6),

v = -k_1 z_1 - k_2 z_2        (3.1.7)

and it is possible to place the poles of the resulting system anywhere by a proper choice of the feedback gains. For example, v may be chosen as

v = -3z_2        (3.1.8)

resulting in the stable closed-loop dynamics

\dot{z}_1 = -2z_1 + z_2
\dot{z}_2 = -3z_2

whose poles are located at -2 and -3. In terms of the original state, this control law corresponds to the original input

u = \frac{1}{2\cos x_2}\left( -3\sin x_2 - x_1^4 \cos^2 x_2 \right)        (3.1.9)

The original state x is recovered from z as

x_1 = z_1        (3.1.10.a)
x_2 = \sin^{-1} z_2        (3.1.10.b)

Since both z_1 and z_2 converge to zero, the original state vector x also converges to zero.

The closed-loop dynamics under the above control law is represented in block diagram form in Fig. 3.1. This is a two-loop control system, in which the inner loop achieves the linearization of the input-state relation and the outer loop achieves the stabilization of the closed-loop dynamics (Slotine and Li, 1991). This is consistent with equation (3.1.5), where the control input u is seen to be composed of two parts: a nonlinearity-cancellation part and a linear compensation part.

Fig. 3.1: Input-State Linearization (Slotine and Li, 1991)


Definition 3.1.1. A single-input nonlinear system of the form (3.1.1), with f(x) and g(x) being smooth vector fields on R^n, is said to be input-state linearizable if there exist a region \Omega in R^n, a diffeomorphism \phi : \Omega \to R^n, and a nonlinear feedback control law

u = \alpha(x) + \beta(x)v        (3.1.11)

such that the new state variables z = \phi(x) and the new input v satisfy a linear time-invariant relation

\dot{z} = Az + bv        (3.1.12)

where A is an n \times n matrix and b is an n \times 1 vector. The matrix A and the vector b have a special structure, corresponding to the linear companion form. The new state z is called the linearizing state, and the control law (3.1.11) is called the linearizing control law (Slotine and Li, 1991).

Conditions for Input-State Linearization:

Theorem 3.1.1 The nonlinear system (3.1.1), with f(x) and g(x) being smooth vector fields, is input-state linearizable if and only if there exists a region \Omega such that the following conditions hold:
a) the vector fields \{g, \, ad_f g, \, \ldots, \, ad_f^{\,n-1} g\} are linearly independent in \Omega;
b) the set \{g, \, ad_f g, \, \ldots, \, ad_f^{\,n-2} g\} is involutive in \Omega.
(The proof of this theorem can be found in Slotine and Li, 1991.)

Some important conclusions on the input-state linearization control design method:

The input-state linearization process consists of a state transformation and an input transformation, with state feedback used in both. It is therefore a linearization by feedback, which is why the methodology is termed feedback linearization. It is fundamentally different from Jacobian linearization, which linearizes a nonlinear system only in a small region of the state space.

The result, though valid in a large region of the state space, is not global. The control law is ill-defined at the singularity points x_2 = \frac{\pi}{2} + k\pi, k = 0, \pm 1, \pm 2, \ldots. Obviously, when the initial state is at such a singularity point, the controller is not able to bring the system to its desired equilibrium point. We will consider whether backstepping is able to address this problem in a later section.

In order to implement the control law, the new state components (z_1, z_2) must be available. If they are not physically meaningful or cannot be measured directly, the original state x must be measured and used to compute them from (3.1.3).
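A short simulation sketch of this design is given below (a minimal sketch, using the example system in the form written above; the initial condition, the Euler step, and the pole-placement gain are illustrative). Starting away from the singularity, both states converge to the origin.

```python
# Minimal sketch: input-state feedback linearization of the second-order example above,
# x1dot = -2*x1 + sin(x2), x2dot = x1**4*cos(x2) + 2*u, with v = -3*z2 as in (3.1.8).
import numpy as np

x1, x2 = 1.0, 0.5            # initial state, away from the singularity x2 = pi/2 + k*pi
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):
    z1, z2 = x1, np.sin(x2)                                   # state transformation (3.1.3)
    v = -3.0 * z2                                             # outer linear loop (3.1.8)
    u = (v - x1**4 * np.cos(x2)**2) / (2.0 * np.cos(x2))      # input transformation (3.1.9)
    x1 += dt * (-2*x1 + np.sin(x2))
    x2 += dt * (x1**4 * np.cos(x2) + 2*u)

print(f"x1(T) = {x1:.2e}, x2(T) = {x2:.2e}")   # both approach 0
```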

3.1.2 Input-Output Feedback Linearization:

Let us now consider a tracking control problem for the nonlinear system shown in equation (3.1.13) below:

\dot{x} = f(x) + g(x)u        (3.1.13.a)
y = h(x)        (3.1.13.b)

The control objective is to make the output y(t) track a desired trajectory y_d(t) while keeping the whole state bounded. It is assumed that the time derivatives of the reference signal (up to a sufficiently high order) are known and bounded. The main difficulty of this tracking problem is that the input u has no direct relationship with the output y. The idea of generating a relationship between the output y and the input u, so that the control objective can be achieved, constitutes the basis of the input-output linearization methodology for nonlinear control design.
Consider the third-order system

\dot{x}_1 = \cos x_2 + (x_2 + 1)x_3        (3.1.14.a)
\dot{x}_2 = x_1^3 + x_3        (3.1.14.b)
\dot{x}_3 = x_1^2 + u        (3.1.14.c)
y = x_1        (3.1.14.d)

To generate a direct relationship between the input u and the output y, we differentiate y with respect to time twice:

\ddot{y} = (x_2 + 1)u + f_1(x)        (3.1.15)

where f_1(x) is a function of the state vector defined by

f_1(x) = (x_1^3 + x_3)(x_3 - \sin x_2) + (x_2 + 1)x_1^2        (3.1.16)

Clearly, equation (3.1.15) gives an explicit relationship between y and u. If we choose the control input u in the form

u = \frac{1}{x_2 + 1}\left( v - f_1(x) \right)        (3.1.17)

where v is an equivalent input to be determined, then the nonlinearity in equation (3.1.15) is cancelled and we obtain a simple double-integrator relationship between the equivalent input v and the output y,

\ddot{y} = v

We can use a standard linear control algorithm to design a controller for the tracking control of this double-integrator system. If we define the tracking error as e = y - y_d, then we can relate the tracking error e and the new control input v as

v = \ddot{y}_d - k_1\dot{e} - k_2 e        (3.1.18)

where k_1 and k_2 are positive design constants, which yields the following closed-loop error dynamics:

\ddot{e} + k_1\dot{e} + k_2 e = 0        (3.1.19)


which represents exponentially stable error dynamics. Therefore perfect tracking is achieved in all cases except at the singularity point x_2 = -1, where the control law (3.1.17) is not defined.

3.2 Zero Dynamics and its Stability:

For an n-th order system, if during input-output linearization we need to differentiate the output y r times to generate an explicit relationship between the input u and the output y, the system is said to have relative degree r. The system (3.1.14) has relative degree 2. The input-output linearization algorithm accounts for only a part of the closed-loop system; frequently, a part of the system dynamics is rendered unobservable by the input-output linearization. This part of the dynamics is called the internal dynamics of the original system, because it cannot be seen from the external input-output relationship of the system. For the tracking control of the system (3.1.14), the internal dynamics is represented by the equation

\dot{x}_3 = x_1^2 + \frac{1}{x_2 + 1}\left( \ddot{y}_d - k_1\dot{e} - k_2 e - f_1(x) \right)        (3.2.1)

Only if the internal dynamics of the system is stable is it possible to solve the tracking control problem by means of input-output linearization. The effectiveness of input-output linearization, based as it is on a reduced-order model, practically hinges on the stability of the internal dynamics.

Definition 3.2.1 (Zero Dynamics): The zero dynamics is defined as the internal dynamics of the system when the system output is kept at zero by the input.

For a linear system, the poles of the zero dynamics are exactly the zeros of the original system. The zero dynamics is an intrinsic property of a nonlinear system. The stability of the zero dynamics ensures the stability of the internal dynamics of the nonlinear system in a local sense.


3.3 Definition of Lyapunov Stability:

Consider an autonomous nonlinear system governed by the equation

\dot{x} = f(x)        (3.3.1)

where f : D \to R^N is a locally Lipschitz map from a domain D \subset R^N into R^N. If a point \bar{x} \in D satisfies f(\bar{x}) = 0, then \bar{x} is an equilibrium point of the system. Without loss of generality we can consider the equilibrium point to be located at the origin of the state space (if it is located elsewhere, we can shift it to the origin by a change of state-space coordinates).

Definition 3.3.1 The equilibrium point x = 0 of the above system is

stable if, for each R > 0, there is r = r(R) > 0 such that \|x(0)\| < r \Rightarrow \|x(t)\| < R, \forall t \ge 0; otherwise it is unstable;

asymptotically stable if it is stable and r can be chosen such that \|x(0)\| < r \Rightarrow \lim_{t\to\infty} x(t) = 0;

globally asymptotically stable if it is asymptotically stable and r = \infty, i.e. for every x(0) the system trajectory satisfies the condition of asymptotic stability.

These definitions involve the trajectory x(t), the solution of (3.3.1). In general, x(t) cannot be found analytically. Fortunately there are other ways of proving stability. The Russian mathematician and engineer A. M. Lyapunov introduced the idea of condensing the state vector x(t) into a scalar function V(x) measuring how far the system is from the equilibrium. If V(x) decreases over time, then the system must be moving towards the equilibrium. This approach to showing stability is called Lyapunov's direct method (or second method). Lyapunov's original work can be found in Lyapunov (1892).

In the following, some useful concepts are introduced.

Definition 3.3.2 A scalar function V(x) is
- positive definite if V(0) = 0 and V(x) > 0, \forall x \ne 0;
- positive semi-definite if V(0) = 0 and V(x) \ge 0, \forall x;
- negative (semi-)definite if -V(x) is positive (semi-)definite;
- radially unbounded if V(x) \to \infty as \|x\| \to \infty.

We now state the main theorem to be used for proving global asymptotic stability (Khalil, 2002, Theorem 4.2).

Theorem 3.3.1 Consider the autonomous system (3.3.1) and let f(0) = 0. Let V(x) be a positive definite scalar function that is radially unbounded and continuously differentiable. If

\dot{V}(x) = \frac{\partial V}{\partial x}(x) f(x) < 0, \quad \forall x \ne 0

then the equilibrium point x = 0 is a globally asymptotically stable (GAS) equilibrium point.

A positive definite function V : R^n \to R can be termed a generalized energy function, or we may consider V a Lyapunov function candidate, and \dot{V} can be termed the associated generalized dissipation function. If \dot{V}(x) \le 0, \forall x, then the equilibrium x = 0 is a stable equilibrium point. If \dot{V}(x) < 0, \forall x \ne 0, then the equilibrium point is asymptotically stable. The radial unboundedness of V(x) means that all level curves of V(x) are closed; this is necessary to guarantee the global stability of the system. The condition for asymptotic stability can also hold with a negative semi-definite \dot{V}(x), provided \dot{V}(x) does not vanish identically along any system trajectory other than x \equiv 0.

Theorem 3.3.2 Consider the system (3.3.1) and let f(0) = 0. Let V(x) be a positive definite, radially unbounded, continuously differentiable scalar function such that

\dot{V}(x) = \frac{\partial V}{\partial x}(x) f(x) \le 0, \quad \forall x

Let S = \{x : \dot{V}(x) = 0\} and suppose that no solution other than x(t) \equiv 0 can stay forever in S. Then x = 0 is a globally asymptotically stable (GAS) equilibrium.

Note that both of these theorems are non-constructive, in the sense that they give no clue about how to find a function V satisfying the conditions required for global asymptotic stability.
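The small symbolic check below illustrates how Theorem 3.3.1 is applied in practice (an illustrative example, not taken from this dissertation): for the scalar system \dot{x} = -x - x^3 and the candidate V(x) = x^2/2, the derivative along trajectories factors into an expression that is negative for every x \ne 0.

```python
# Illustrative check of Lyapunov's direct method with SymPy (example system not from
# this dissertation): xdot = -x - x**3, candidate V = x**2/2.
import sympy as sp

x = sp.symbols('x', real=True)
f = -x - x**3                 # system dynamics: xdot = f(x)
V = x**2 / 2                  # candidate: positive definite and radially unbounded
Vdot = sp.diff(V, x) * f      # derivative of V along system trajectories

print(sp.simplify(Vdot))      # -x**4 - x**2
print(sp.factor(Vdot))        # -x**2*(x**2 + 1): negative for all x != 0, so x = 0 is GAS
```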

3.4 Control Lyapunov Function:

Let us consider a time-invariant nonlinear system of the form

\dot{x} = f(x, u), \quad f(0, 0) = 0        (3.4.1)

Suppose we want to design a feedback control law u = \alpha(x) such that the origin x = 0 of the closed-loop system is globally asymptotically stable. A Lyapunov function approach is a natural way to do this. Suppose we pick a positive definite, radially unbounded, smooth scalar function V(x) as a Lyapunov function candidate and require its time derivative along f(x, \alpha(x)) to satisfy

\frac{dV}{dt} = \frac{\partial V}{\partial x} f(x, \alpha(x)) < 0, \quad \forall x \ne 0        (3.4.2)

A V(x) for which such an \alpha(x) exists is said to be a control Lyapunov function (CLF) for the above system.



Let us consider the scalar system of equation (3.4.3) below:

\dot{x} = -x^3 + \sin x + u        (3.4.3)

A feedback-linearizing stabilization law would suggest choosing the control input

u = x^3 - \sin x - kx        (3.4.4)

which renders the system globally asymptotically stable. But did we really need to cancel the -x^3 term using the control input? The term -x^3 in the system dynamics helps the stabilization of the system for large values of x, whereas the cancelling term +x^3 in the control law is harmful: it leads to a very large control magnitude.

One can instead choose u = \alpha(x) = -\sin x - kx; using the earlier CLF V(x) = \frac{1}{2}x^2 for the same system results in

\dot{V} = \frac{\partial V}{\partial x} f(x, \alpha(x)) = x(-x^3 - kx) = -x^4 - kx^2        (3.4.5)

which is negative definite. This result has two distinct advantages: the control signal grows only linearly with x, and the Lyapunov function decreases at a faster rate than in the previous case.

Sontag's Formula:

Consider the affine case of system (3.4.1),

\dot{x} = f(x) + g(x)u

and suppose that a CLF V(x) is known for this system. Then a stabilizing control input u can be computed from V using the following expression, which is termed Sontag's formula:

u = \alpha(x) =
  -\dfrac{ \frac{\partial V}{\partial x} f + \sqrt{ \left(\frac{\partial V}{\partial x} f\right)^2 + \left(\frac{\partial V}{\partial x} g\right)^4 } }{ \frac{\partial V}{\partial x} g },  \quad for \ \frac{\partial V}{\partial x} g \ne 0
  0,  \quad for \ \frac{\partial V}{\partial x} g = 0        (3.4.6)

Let us apply Sontag's formula to the scalar system (3.4.3),

\dot{x} = -x^3 + \sin x + u

The formula assumes f(0) = 0, so let u = -\sin x + \alpha(x), where \alpha(x) is to be determined. With this substitution we are left with f(x) = -x^3, g(x) = 1 and V(x) = \frac{1}{2}x^2, so that

\frac{\partial V}{\partial x} f = -x^4        (3.4.7)

\frac{\partial V}{\partial x} g = x        (3.4.8)

Using these two expressions in (3.4.6) we can write

\alpha(x) = x^3 - x\sqrt{x^4 + 1}        (3.4.9)

so the expression for the control input becomes

u = -\sin x + x^3 - x\sqrt{x^4 + 1}        (3.4.10)

A remarkable feature of this control law is that \alpha(x) \to 0 as |x| \to \infty, which implies that for large values of x the control input u reduces to the term -\sin x required to keep the equilibrium at x = 0. The underlying principle is clear: except for the cancellation of \sin x, the control action is essentially inactive for large values of x, because there the internal nonlinearity -x^3 takes over and forces x towards zero. On the other hand, for small values of x, \alpha(x) \approx -x, which behaves like the linear feedback of the previous two examples. This control law is superior because it requires a smaller control effort. Simulation results for the above three control laws are shown in Fig. 3.2.


Fig. 3.2.a


Fig 3.2.b
The simulation results clearly reveal that feedback linearization requires a considerably larger control effort than the other two cases, which is not at all desirable from the designer's perspective and may also cause non-robustness. In the next section we study backstepping in a systematic way.
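The following sketch reproduces the essence of this comparison (a minimal sketch: the plant is the scalar system (3.4.3) as written above; the gain k and the initial condition are illustrative). It integrates the closed loop under the three control laws and reports the final state and the peak control magnitude; the feedback-linearizing law typically shows the largest peak control effort, in line with the discussion above.

```python
# Minimal sketch (illustrative gain and initial condition): compare the three control laws
# of section 3.4 on the scalar plant xdot = -x**3 + sin(x) + u via forward-Euler simulation.
import numpy as np

def simulate(control, x0=2.0, dt=1e-3, T=5.0):
    x, xs, us = x0, [], []
    for _ in range(int(T / dt)):
        u = control(x)
        x += dt * (-x**3 + np.sin(x) + u)
        xs.append(x); us.append(u)
    return np.array(xs), np.array(us)

k = 1.0
fl      = lambda x: x**3 - np.sin(x) - k*x                    # feedback linearization (3.4.4)
partial = lambda x: -np.sin(x) - k*x                          # keep the -x**3 term (3.4.5)
sontag  = lambda x: -np.sin(x) + x**3 - x*np.sqrt(x**4 + 1)   # Sontag's formula (3.4.10)

for name, law in [("feedback linearization", fl), ("no cancellation", partial), ("Sontag", sontag)]:
    xs, us = simulate(law)
    print(f"{name:>22s}: |x(T)| = {abs(xs[-1]):.2e}, peak |u| = {np.abs(us).max():.2f}")
```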

3.5 Backstepping:

3.5.1 Integrator Backstepping

The simplicity of the scalar design motivates us to use the same technique for higher-order nonlinear systems. We start by constructing a CLF for a second-order system, obtained by augmenting the system of equation (3.4.3) with an integrator:

\dot{x} = -x^3 + \sin x + z        (3.5.1.a)
\dot{z} = u        (3.5.1.b)

Let the control design objective be the regulation of x(t), i.e. x(t) \to 0 as t \to \infty, for all x(0) and z(0); of course, z(t) must remain bounded. From (3.5.1.a) it is clear that the equilibrium of the system is located at (0, 0). The design objective is to find a control input u for the system such that this equilibrium becomes GAS. The system is shown in block diagram form in Figure 3.3.

Figure 3.3




To construct a CLF for the system (3.5.1), we can use the CLF of its subsystem shown in the dashed box. If we regard z, instead of u, as the control input, then the subsystem (3.5.1.a) is identical to the scalar system (3.4.3), so we can use the earlier CLF V_x = \frac{1}{2}x^2 together with z = -c_1 x - \sin x. But z is just a state variable, not the actual control input. Nevertheless, the desired value we prescribe for z is

z_{des} = \alpha_s(x) = -c_1 x - \sin x        (3.5.2)

Let e be the deviation of z from its desired value,

e = z - z_{des} = z - \alpha_s(x) = z + c_1 x + \sin x        (3.5.3)

We call z a virtual control and its desired value \alpha_s(x) a stabilizing function. The variable e is termed the corresponding error variable. Rewriting the system (3.5.1) in the (x, e) coordinates, a more convenient state-space representation is obtained, as illustrated in Figure 3.4. Starting from equation (3.5.1) and Figure 3.3, the stabilizing function \alpha_s(x) is added to and subtracted from the \dot{x} equation as shown in Figure 3.4. Then the signal \alpha_s(x) is used as a feedback control inside the dashed box and \alpha_s(x) is backstepped through the integrator as shown in Figure 3.5. In the new state coordinates (x, e) the system is expressed as

\dot{x} = -x^3 + \sin x + z = -x^3 + \sin x + \alpha_s(x) + e = -c_1 x - x^3 + e        (3.5.4.a)

\dot{e} = \dot{z} - \dot{\alpha}_s(x) = u + (c_1 + \cos x)(-c_1 x - x^3 + e)        (3.5.4.b)


Fig. 3.4

One of the key features of backstepping is that we do not require a differentiator to realize the time derivative of \alpha_s(x) in the last equation; since \alpha_s is a known function of x, its derivative can be computed analytically as

\dot{\alpha}_s(x) = \frac{\partial \alpha_s}{\partial x}\dot{x} = -(c_1 + \cos x)(-c_1 x - x^3 + e)        (3.5.5)


Fig. 3.5

Now, to design the control input for the system (3.5.1), we need to construct a CLF V_a for the whole system. Let us try to construct this CLF by augmenting V_x with a quadratic term in the error variable e:

V_a(x, z) = V_x(x) + \frac{1}{2}e^2 = \frac{1}{2}x^2 + \frac{1}{2}(z + c_1 x + \sin x)^2        (3.5.6)
The derivative of V_a along the solutions of (3.5.4) is computed as

\dot{V}_a(x, e) = x\dot{x} + e\dot{e} = x(-c_1 x - x^3 + e) + e\left[u + (c_1 + \cos x)(-c_1 x - x^3 + e)\right]
             = -c_1 x^2 - x^4 + e\left[x + u + (c_1 + \cos x)(-c_1 x - x^3 + e)\right]        (3.5.7)

Since \dot{V}_a is an affine function of u, the design goal is to construct a control input such that the CLF inequality (3.4.2) holds. With this aim, the cross term xe, which is due to the presence of e in equation (3.5.4.a), is grouped together with u. This is possible because u is multiplied by e, owing to the chosen form of the augmented Lyapunov function V_a. This is the second key feature of backstepping. Now we choose the control input u to make \dot{V}_a negative definite in x and e. The easiest solution is to make the bracketed term of the last equation equal to -c_2 e, where c_2 > 0:

u = -c_2 e - x - (c_1 + \cos x)(-c_1 x - x^3 + e)
  = -c_2(z + c_1 x + \sin x) - x - (c_1 + \cos x)(z + \sin x - x^3)        (3.5.8)

With this control input the derivative of the CLF becomes

\dot{V}_a = -c_1 x^2 - x^4 - c_2 e^2 \le -c_1 x^2 - c_2 e^2        (3.5.9)

which proves that in the (x, e) coordinates the equilibrium point (0, 0) is globally asymptotically stable. The equilibrium point (0, 0) of the original system in the (x, z) coordinates is therefore also globally asymptotically stable. The resulting closed-loop system in the (x, e) coordinates is

\begin{bmatrix} \dot{x} \\ \dot{e} \end{bmatrix} =
\begin{bmatrix} -c_1 - x^2 & 1 \\ -1 & -c_2 \end{bmatrix}
\begin{bmatrix} x \\ e \end{bmatrix}        (3.5.10)
In the above equation we represent the nonlinear system in a linear-like form. An important structural property of this system is that its nonlinear system matrix is the sum of a negative-definite diagonal matrix and a skew-symmetric matrix, both functions of x. This is the third and most important aspect of the backstepping design.
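The short sketch below simulates this particular design (a minimal sketch: the gains c_1, c_2, the initial condition, and the Euler step are illustrative).

```python
# Minimal sketch (illustrative gains): integrator backstepping control law (3.5.8)
# applied to xdot = -x**3 + sin(x) + z, zdot = u, as developed in this section.
import numpy as np

c1, c2 = 1.0, 1.0
dt, T = 1e-3, 10.0
x, z = 1.5, -2.0                       # arbitrary initial condition

for _ in range(int(T / dt)):
    e = z + c1*x + np.sin(x)           # error w.r.t. the stabilizing function (3.5.3)
    u = -c2*e - x - (c1 + np.cos(x)) * (-c1*x - x**3 + e)   # control law (3.5.8)
    x += dt * (-x**3 + np.sin(x) + z)
    z += dt * u

print(f"x(T) = {x:.2e}, z(T) = {z:.2e}")   # both should approach 0
```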

We now state an important assumption and theorem of integrator backstepping (mainly adopted from Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos, and P. Kokotovic).

Assumption 3.5.1 Consider the affine system

\dot{x} = f(x) + g(x)u, \quad f(0) = 0        (3.5.11)

where x \in R^n is the state of the system and u \in R is the control input. There exist a continuously differentiable feedback law

u = \alpha_s(x), \quad \alpha_s(0) = 0        (3.5.12)

and a smooth, positive definite, radially unbounded scalar function V : R^n \to R such that

\frac{\partial V}{\partial x}(x)\left[ f(x) + g(x)\alpha_s(x) \right] \le -W(x), \quad \forall x \in R^n        (3.5.13)

where W : R^n \to R is positive semi-definite.

Under this assumption the control input (3.5.12), when applied to the system (3.5.11), guarantees global boundedness of x(t) and, via the LaSalle-Yoshizawa theorem, the regulation of W(x(t)):

\lim_{t \to \infty} W(x(t)) = 0        (3.5.14)


Theorem 3.5.1 Let the system (3.5.11) be augmented by an integrator,

\dot{x} = f(x) + g(x)z        (3.5.15.a)
\dot{z} = u        (3.5.15.b)

and suppose that (3.5.15.a) satisfies Assumption 3.5.1 with z \in R as its control input.

i) If W(x) is positive definite, then

V_a(x, z) = V(x) + \frac{1}{2}\left[z - \alpha_s(x)\right]^2        (3.5.16)

is a CLF for the full system, i.e. there exists a feedback control u = \alpha_a(x, z) which renders the equilibrium x = 0, z = 0 GAS. One such control is

u = -c\left[z - \alpha_s(x)\right] + \frac{\partial \alpha_s}{\partial x}(x)\left[f(x) + g(x)z\right] - \frac{\partial V}{\partial x}(x)g(x), \quad c > 0        (3.5.17)

ii) If W(x) is only positive semi-definite, then there exists a feedback control which ensures \dot{V}_a \le -W_a(x, z) \le 0, with W_a(x, z) > 0 whenever W(x) > 0 or z \ne \alpha_s(x). This implies global boundedness and convergence of [x(t)^T, z(t)]^T to the largest invariant set M_a contained in the set

E_a = \left\{ [x^T, z]^T \in R^{n+1} : W(x) = 0, \; z = \alpha_s(x) \right\}


Proof. Let us introduce the error variable

e = z - \alpha_s(x)        (3.5.18)

Differentiating with respect to time, (3.5.15) can be rewritten as

\dot{x} = f(x) + g(x)\left[\alpha_s(x) + e\right]        (3.5.19.a)

\dot{e} = u - \frac{\partial \alpha_s}{\partial x}(x)\left[f(x) + g(x)(\alpha_s(x) + e)\right]        (3.5.19.b)

Using (3.5.13), the derivative of (3.5.16) along the solutions of (3.5.19) is

\dot{V}_a = \frac{\partial V}{\partial x}\left[f + g(\alpha_s + e)\right] + e\left[u - \frac{\partial \alpha_s}{\partial x}\left(f + g(\alpha_s + e)\right)\right]
        = \frac{\partial V}{\partial x}\left[f + g\alpha_s\right] + e\left[u - \frac{\partial \alpha_s}{\partial x}\left(f + g(\alpha_s + e)\right) + \frac{\partial V}{\partial x}g\right]
        \le -W(x) + e\left[u - \frac{\partial \alpha_s}{\partial x}\left(f + g(\alpha_s + e)\right) + \frac{\partial V}{\partial x}g\right]        (3.5.20)

By the LaSalle-Yoshizawa theorem, any choice of the control input u which yields \dot{V}_a \le -W_a(x, z) \le -W(x), with W_a positive definite in e = z - \alpha_s(x), guarantees global boundedness of x, e, and z = e + \alpha_s(x), and regulation of W(x(t)) and e(t). Furthermore, LaSalle's theorem ensures the convergence of [x(t)^T, e(t)]^T to the largest invariant set contained in the set \left\{ [x^T, e]^T \in R^{n+1} : W(x) = 0, \; e = 0 \right\}. Again, the easiest way to make \dot{V}_a negative definite in e is to choose the control input (3.5.17), which forces the bracketed term in (3.5.20) to equal -ce and yields

\dot{V}_a \le -W(x) - ce^2 \triangleq -W_a(x, z) \le 0        (3.5.21)

Clearly, if W(x) is positive definite, then the LaSalle-Yoshizawa theorem ensures that the equilibrium x = 0, e = 0 is GAS, which in turn implies that V_a(x, z) is a CLF and that x = 0, z = 0 is the GAS equilibrium of (3.5.15).

The choice of control input (3.5.17) is simple, but it does not always generate a desirable control input, because it involves the cancellation of nonlinearities, some of which may be useful. As is clear from equations (3.5.8) and (3.5.9), the requirement that \dot{V}_a in (3.5.20) be made negative by u leaves plenty of freedom in the choice of the control law u = \alpha_a(x, z), as long as

\dot{V}_a \le -W(x) + e\left[\alpha_a(x, z) - \frac{\partial \alpha_s}{\partial x}\left(f + g(\alpha_s + e)\right) + \frac{\partial V}{\partial x}g\right] \triangleq -W_a(x, z) \le 0        (3.5.22)

Example 3.5.1 Let us consider the stabilization problem of the system

\dot{x} = x^2 + xz
\dot{z} = u        (3.5.23)

The linearization of this system at the origin is uncontrollable. Here we demonstrate the flexibility of backstepping design over feedback linearization. Comparing with the system (3.5.15) we see that f(x) = x^2 and g(x) = x. Applying Theorem 3.5.1 with V(x) = \frac{1}{2}x^2 we can choose the following stabilizing function \alpha_s:

\alpha_s(x) = -x - c_1 x^2, \quad e = z - \alpha_s = z + x + c_1 x^2, \quad c_1 > 0        (3.5.24)

so that W(x) in (3.5.13) is positive definite: W(x) = c_1 x^4. The substitution of (3.5.24) into (3.5.23) yields

\dot{x} = -c_1 x^3 + xe
\dot{e} = u + (1 + 2c_1 x)(x^2 + xz)        (3.5.25)

Now the derivative of the augmented Lyapunov function V_a = \frac{1}{2}x^2 + \frac{1}{2}e^2 is

\dot{V}_a = -c_1 x^4 + e\left[u + (1 + 2c_1 x)(x^2 + xz) + x^2\right]        (3.5.26)

The control which makes the derivative of V_a negative definite is given by

u = -c_2 e - (1 + 2c_1 x)(x^2 + xz) - x^2        (3.5.27)

The resulting system in the (x, z) coordinates is

\dot{x} = x^2 + xz        (3.5.28.a)
\dot{z} = -c_2(z + x + c_1 x^2) - (1 + 2c_1 x)(x^2 + xz) - x^2        (3.5.28.b)

and its equilibrium (0, 0) is GAS.

Fig. 3.6 Stabilization of an unstable system with Integrator Backstepping Control Design
It is clear from the above figure that an unstable system whose linearization is uncontrollable can be stabilized using integrator backstepping. We now study the design flexibility offered by integrator backstepping.
A significant design flexibility of backstepping lies in the choice of \alpha_s. For the system (3.5.23), instead of (3.5.24) we can choose

\alpha_s = -x, \quad e = z + x        (3.5.29)

so that W(x) \equiv 0 is only semi-definite, and

V_a = \frac{1}{2}x^2 + \frac{1}{2}e^2        (3.5.30)

The derivative of V_a along the solutions of (3.5.23) is

\dot{V}_a = x(xe) + e(u + xe) = e(x^2 + xe + u)        (3.5.31)

In this case the best we can do is to make the derivative of V_a negative semi-definite. The control

u = -e - xe - x^2        (3.5.32)

results in the closed-loop system

\dot{x} = x^2 + xz        (3.5.33.a)
\dot{z} = -x - z - xz - 2x^2        (3.5.33.b)

and the Lyapunov derivative \dot{V}_a = -e^2. Theorem 3.5.1(ii) then guarantees that (x(t), e(t)) is bounded and converges to the largest invariant set M_a of (3.5.33) contained in the set E_a where e = 0. But on that set, e \equiv 0 implies \dot{e} = -e - x^2 = 0, hence x = 0, so M_a is the origin and we can conclude that the equilibrium is GAS. Comparing the two designs, we see that the second one is simpler and more flexible than the first.
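The sketch below runs both designs of Example 3.5.1 side by side (a minimal sketch; gains, initial conditions, and the Euler step are illustrative). Both drive (x, z) to the origin; the second control law is visibly simpler.

```python
# Minimal sketch (illustrative): the two backstepping designs of Example 3.5.1 for
# xdot = x**2 + x*z, zdot = u, using control (3.5.27) and the simpler control (3.5.32).
import numpy as np

def simulate(control, x0=1.0, z0=0.0, dt=1e-4, T=10.0):
    x, z = x0, z0
    for _ in range(int(T / dt)):
        u = control(x, z)
        x, z = x + dt*(x**2 + x*z), z + dt*u
    return x, z

c1, c2 = 1.0, 1.0

def u_full(x, z):                       # design (3.5.24)-(3.5.27)
    e = z + x + c1*x**2
    return -c2*e - (1 + 2*c1*x)*(x**2 + x*z) - x**2

def u_simple(x, z):                     # design (3.5.29)-(3.5.32)
    e = z + x
    return -e - x*e - x**2

for name, law in [("full design", u_full), ("semidefinite design", u_simple)]:
    x, z = simulate(law)
    print(f"{name:>20s}: x(T) = {x:.2e}, z(T) = {z:.2e}")
```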
We now state an important corollary which allows us to generalize the concept of integrator backstepping to a chain of integrators (Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos, and P. Kokotovic).

Corollary 3.5.1 (Chain of Integrators) Let the system (3.5.11), satisfying Assumption 3.5.1 with \alpha_s(x) = \alpha_0(x), be augmented by a chain of m integrators, so that the control input u is replaced by z_1, the state of the last integrator in the chain:

\dot{x} = f(x) + g(x)z_1
\dot{z}_1 = z_2
\vdots
\dot{z}_{m-1} = z_m
\dot{z}_m = u        (3.5.34)

For this system, repeated application of the result of Theorem 3.5.1, with z_1, \ldots, z_m as virtual controls, results in the augmented Lyapunov function

V_a(x, z_1, \ldots, z_m) = V(x) + \frac{1}{2}\sum_{i=1}^{m}\left[z_i - \alpha_i(x, z_1, \ldots, z_{i-1})\right]^2        (3.5.35)

Any selection of a feedback control law which yields \dot{V}_a \le -W_a(x, z_1, \ldots, z_m) \le 0, with W_a(x, z_1, \ldots, z_m) = 0 iff W(x) = 0 and z_i = \alpha_i(x, z_1, \ldots, z_{i-1}), i = 1, 2, \ldots, m, ensures that the state vector [x(t)^T, z_1(t), \ldots, z_m(t)]^T is globally bounded and converges to the largest invariant set M_a contained in the set

E_a = \left\{ [x^T, z_1, \ldots, z_m]^T \in R^{n+m} : W(x) = 0, \; z_i = \alpha_i(x, z_1, \ldots, z_{i-1}), \; i = 1, \ldots, m \right\}

Furthermore, if W(x) is positive definite, i.e. if x = 0 can be rendered GAS through z_1, then the augmented Lyapunov function (3.5.35) is a CLF for the system (3.5.34), and the equilibrium x = 0, z_1 = \cdots = z_m = 0 can be rendered GAS through u.
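The recursive construction of Corollary 3.5.1 can be carried out symbolically. The sketch below does so for an illustrative plant \dot{x} = x^2 + z_1 augmented by two integrators (this plant, the gains c_0, c_1, c_2, and the quadratic Lyapunov terms are assumptions made for the illustration, not taken from this dissertation) and verifies that the resulting \dot{V}_a equals -c_0 x^2 - c_1 e_1^2 - c_2 e_2^2.

```python
# Minimal sketch (illustrative plant): recursive backstepping through a chain of two
# integrators, xdot = x**2 + z1, z1dot = z2, z2dot = u, derived symbolically with SymPy.
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2', real=True)
c0, c1, c2 = sp.symbols('c0 c1 c2', positive=True)

f = x**2 + z1                      # plant dynamics with z1 entering as the virtual control
alpha0 = -x**2 - c0*x              # step 0: with V0 = x**2/2 this gives xdot = -c0*x + e1
e1 = z1 - alpha0
alpha0_dot = sp.diff(alpha0, x) * f

alpha1 = -x + alpha0_dot - c1*e1   # step 1: virtual control prescribed for z2
e2 = z2 - alpha1
alpha1_dot = sp.diff(alpha1, x)*f + sp.diff(alpha1, z1)*z2

u = -e1 - c2*e2 + alpha1_dot       # step 2: actual control input

# Derivative of V = x**2/2 + e1**2/2 + e2**2/2 along the closed loop:
Vdot = x*f + e1*(z2 - alpha0_dot) + e2*(u - alpha1_dot)
print(sp.simplify(Vdot + c0*x**2 + c1*e1**2 + c2*e2**2))   # -> 0
```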

3.5.2 Stabilization of Cascade Systems

Integrator backstepping (Theorem 3.5.1) provides a way to find a stabilizing control law for a nonlinear system augmented by an integrator. We now consider a more complex situation, in which the nonlinear system is globally stable but the input subsystem is not a single integrator; instead it is an m-dimensional linear subsystem:

\dot{x} = f(x) + g(x)y, \quad f(0) = 0, \quad x \in R^n, \; y \in R        (3.5.36.a)
\dot{z} = Az + Bu, \quad y = Cz, \quad z \in R^m, \; u \in R        (3.5.36.b)

Here we assume that the subsystem (3.5.36.a) has a globally stable equilibrium at x = 0 and that an appropriate Lyapunov function V(x) is known:

\frac{\partial V}{\partial x}(x) f(x) \le -W(x) \le 0        (3.5.37)

The challenge of this control design is to stabilize the linear subsystem without destroying the stability of the nonlinear subsystem and, if possible, to render the equilibrium (0, 0) of (3.5.36) GAS. This problem can be addressed if the input subsystem (3.5.36.b) has the following passivity property (Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos and P.V. Kokotovic).

Assumption 3.5.2 The triple (A, B, C) is feedback positive real (FPR) if there exists a linear feedback transformation u = Kz + v such that the following two conditions hold:

i) A + BK is Hurwitz;
ii) there exist matrices P = P^T > 0 and Q \ge 0 which satisfy

P(A + BK) + (A + BK)^T P = -Q        (3.5.38.a)
PB = C^T        (3.5.38.b)

A sufficient condition for FPR is that there exists a gain row vector K such that A + BK is Hurwitz, the transfer function C(sI - A - BK)^{-1}B is positive real, and the pair (A + BK, C) is observable.

Theorem 3.5.2 Let V(x) be a Lyapunov function for the system (3.5.36.a) satisfying the constraint (3.5.37). If the triple (A, B, C) is FPR, then a Lyapunov function for the cascade system (3.5.36) is

V_a(x, z) = V(x) + \frac{1}{2}z^T P z        (3.5.39)

and the corresponding control law

u = \alpha(x, z) = Kz - \frac{\partial V}{\partial x}(x) g(x)        (3.5.40)

ensures that [x(t)^T, z(t)^T]^T is globally bounded and converges to the largest invariant set M_a contained in the set

E_a = \left\{ [x^T, z^T]^T \in R^{n+m} : W(x) = 0, \; \frac{1}{2}z^T Q z = 0 \right\}

If W(x) is positive definite, i.e. if the nonlinear subsystem (3.5.36.a) with y = 0 has a globally asymptotically stable equilibrium at x = 0, then the equilibrium x = 0, z = 0 is also GAS.

Proof. Write the control (3.5.40) as u = Kz + v with v = -\frac{\partial V}{\partial x}(x) g(x). The derivative of V_a(x, z) is

\dot{V}_a = \frac{\partial V}{\partial x}(x)\left[f(x) + g(x)y\right] + \frac{1}{2}z^T P\left[(A + BK)z + Bv\right] + \frac{1}{2}\left[(A + BK)z + Bv\right]^T P z

Now from (3.5.37) and (3.5.38), and using PB = C^T so that z^T P B v = yv, we can write

\dot{V}_a \le -W(x) + \frac{\partial V}{\partial x}(x) g(x)\, y - \frac{1}{2}z^T Q z + z^T P B v
        = -W(x) + \frac{\partial V}{\partial x}(x) g(x)\, y - \frac{1}{2}z^T Q z + y\left(-\frac{\partial V}{\partial x}(x) g(x)\right)
        = -W(x) - \frac{1}{2}z^T Q z \le 0        (3.5.41)

Since V_a is positive definite, radially unbounded, and has a negative semi-definite derivative, x(t) and z(t) are globally bounded. Furthermore, LaSalle's theorem guarantees the convergence to the largest invariant set M_a contained in E_a.

If W(x) is a positive definite function of x, the global asymptotic stability of the equilibrium x = 0, z = 0 can be established as follows. The positive definiteness of W(x) implies that the set E_a on which \dot{V}_a = 0 is given by E_a = \left\{ (x, z) : x = 0, \; \frac{1}{2}z^T Q z = 0 \right\}. Since the positive definite function V(x) has its minimum at x = 0, the gradient of V(x) vanishes at x = 0, which in turn implies that the control term v vanishes on the set E_a. Hence on the set E_a the state of the subsystem (3.5.36.b) satisfies

\dot{z} = (A + BK)z, \qquad V_a(x, z) \equiv \frac{1}{2}z^T P z        (3.5.42)

But V_a is constant on the set E_a, which implies that \frac{1}{2}z^T P z is also constant on E_a. Now the only solution of \dot{z} = (A + BK)z compatible with the two constraints, namely that A + BK is Hurwitz and that \frac{1}{2}z^T P z is constant, is z \equiv 0. Thus z = 0 on the largest invariant set contained in E_a. So the invariant set M_a is just the equilibrium point x = 0, z = 0, and since the conditions for asymptotic stability are satisfied, it is GAS.
The FPR property is a passivity property. Its nonlinear counterpart will be used for the stabilization of the nonlinear cascade system

\dot{x} = f(x, z) + g(x, z)y, \quad f(0, 0) = 0, \quad x \in R^n, \; y \in R        (3.5.34.a)
\dot{z} = \phi(z) + \gamma(z)u, \quad y = C(z), \quad C(0) = 0, \quad z \in R^m, \; u \in R        (3.5.34.b)

Definition 3.5.1 The system

\dot{z} = \phi(z) + \gamma(z)u, \quad y = C(z), \quad C(0) = 0, \quad z \in R^m, \; u \in R        (3.5.35)

is said to be feedback passive (FP) if there exists a feedback transformation

u = K(z) + r(z)v        (3.5.36)

such that the resulting system \dot{z} = \phi(z) + \gamma(z)K(z) + \gamma(z)r(z)v, y = C(z), is passive with a storage function U(z) which is positive definite and radially unbounded:

\int_0^t y(\tau)v(\tau)\,d\tau \ge U(z(t)) - U(z(0))        (3.5.37)

The system (3.5.35) is said to be feedback strictly passive (FSP) if the feedback transformation (3.5.36) renders it strictly passive:

\int_0^t y(\tau)v(\tau)\,d\tau \ge U(z(t)) - U(z(0)) + \int_0^t \psi(z(\tau))\,d\tau        (3.5.38)

where \psi(\cdot) is a positive definite dissipation rate (mainly adopted from Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos and P.V. Kokotovic).

Theorem 3.5.3 (Stabilization with Passivity) Let V(x) be a radially unbounded Lyapunov function for the system \dot{x} = f(x, z) satisfying

\frac{\partial V}{\partial x} f(x, z) \le -W(x) \le 0, \quad \forall x \in R^n, \; z \in R^m        (3.5.39)

and let (3.5.34.b) be FP as in Definition 3.5.1. Then a Lyapunov function for the cascade system (3.5.34) is

V_a(x, z) = V(x) + U(z)        (3.5.40)

and the corresponding control law

u = \alpha_s(x, z) = K(z) - r(z)\frac{\partial V}{\partial x}(x) g(x, z)        (3.5.41)

ensures that [x(t)^T, z(t)^T]^T is globally bounded and converges to the largest invariant set M_a contained in the set E_a = \left\{ [x^T, z^T]^T \in R^{n+m} : W(x) = 0 \right\}. If (3.5.34.b) is FSP, then (3.5.41) guarantees the convergence to the largest invariant set M_a contained in the set E_a = \left\{ [x^T, z^T]^T \in R^{n+m} : W(x) = 0, \; z = 0 \right\}. Finally, if (3.5.34.b) is FSP and W(x) is positive definite, i.e. if \dot{x} = f(x, z) has a GAS equilibrium at x = 0 uniformly in z, then the equilibrium x = 0, z = 0 of (3.5.34) is also GAS.

Proof. The closed-loop system (3.5.34) with the control input (3.5.41) is

\dot{x} = f(x, z) + g(x, z)y
\dot{z} = \phi(z) + \gamma(z)K(z) + \gamma(z)r(z)v
y = C(z), \qquad v = -\frac{\partial V}{\partial x}(x) g(x, z)        (3.5.42)

We now view the system (3.5.42) as the feedback interconnection of two passive systems S_1 and S_2; the aim of this representation is to apply the theorem on the interconnection of passive systems:

S_1: \quad \dot{x} = f(x, z) + g(x, z)y, \qquad output \; -v = \frac{\partial V}{\partial x}(x) g(x, z)        (3.5.43.a)

S_2: \quad \dot{z} = \phi(z) + \gamma(z)K(z) + \gamma(z)r(z)v, \qquad output \; y = C(z)        (3.5.43.b)

interconnected through v.        (3.5.43.c)

The system S_2 satisfies the passivity condition (3.5.37). To prove that S_1 is passive with storage function V(x), we use (3.5.39):

\dot{V} = \frac{\partial V}{\partial x}(f + gy) \le -W(x) + \frac{\partial V}{\partial x}g\,y = -W(x) + (-v)y        (3.5.44)

Integrating (3.5.44) on [0, t] we obtain

\int_0^t (-v(\tau))\, y(\tau)\, d\tau \ge V(x(t)) - V(x(0)) + \int_0^t W(x(\tau))\, d\tau        (3.5.45)

which shows that S_1 is passive, since W(x) \ge 0. So, by the passivity theorem for feedback interconnections, the system (3.5.42) is passive with the positive definite and radially unbounded storage function V_a(x, z) = V(x) + U(z), and it can be concluded that x = 0, z = 0 is a globally stable equilibrium of (3.5.42). To observe that W(x(t)) \to 0 as t \to \infty, we differentiate (3.5.40) and combine the result with (3.5.42):

\dot{V}_a = \dot{V}(x) + \dot{U}(z) \le \frac{\partial V}{\partial x}(f + gy) + yv
        \le -W(x) + \frac{\partial V}{\partial x}g\,y + yv = -W(x)        (3.5.46)

Then LaSalle's theorem guarantees the convergence to the set M_a. If the system (3.5.34.b) is FSP, we can replace (3.5.37) by (3.5.38), and (3.5.46) becomes

\dot{V}_a \le -W(x) - \psi(z)        (3.5.47)

Since \psi(z) is a positive definite function of z, (3.5.47) guarantees that z(t) converges to zero, i.e. convergence to the set M_a with z = 0. Finally, if W(x) is also positive definite, the equilibrium x = 0, z = 0 is GAS.

Example 3.5.3 Let us consider the cascade system

\dot{x} = -x(1 + e^z) + x^2 z^4        (3.5.48)
\dot{z} = z^3 u        (3.5.49)

The choice of output y = z^4 satisfies all the conditions of Theorem 3.5.3. First, (3.5.49) is FSP: the feedback law

u = -z^2 + v        (3.5.50)

results in \dot{z} = -z^5 + z^3 v, y = z^4, which is strictly passive with the storage function U(z) = \frac{1}{2}z^2, since

\dot{U}(z) = z\dot{z} = -z^6 + z^4 v = -z^6 + yv        (3.5.51)

implies

\int_0^t y(\tau)v(\tau)\,d\tau \ge U(z(t)) - U(z(0)) + \int_0^t z^6(\tau)\,d\tau        (3.5.52)

Furthermore, (3.5.48) can be represented in the form (3.5.34.a) with

f(x, z) = -x(1 + e^z), \quad g(x, z) = x^2, \quad y = z^4        (3.5.53)

and (3.5.39) is satisfied with V(x) = \frac{1}{2}x^2, W(x) = x^2. Applying Theorem 3.5.3, we conclude that the control

u = -z^2 - x^3        (3.5.54)

guarantees GAS of x = 0, z = 0. Indeed, the derivative of the CLF V_a(x, z) = \frac{1}{2}x^2 + \frac{1}{2}z^2 is negative definite:

\dot{V}_a = -(1 + e^z)x^2 + x^3 z^4 + z\,z^3(-z^2 - x^3) = -(1 + e^z)x^2 - z^6        (3.5.55)

The response of the system with this feedback is shown in Fig. 3.7.

Fig. 3.7 Variation of the storage function U with time
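The closed-loop behaviour of Example 3.5.3 can be reproduced with the short sketch below (a minimal sketch; the initial condition and the Euler step are illustrative). For this initial condition the printed storage function U(z) decreases toward zero, consistent with Fig. 3.7.

```python
# Minimal sketch (illustrative initial conditions): closed-loop response of Example 3.5.3,
# xdot = -x*(1 + exp(z)) + x**2 * z**4, zdot = z**3 * u, with u = -z**2 - x**3 from (3.5.54).
import numpy as np

x, z = 1.0, 1.5
dt, T = 1e-3, 20.0
for k in range(int(T / dt)):
    u = -z**2 - x**3
    x += dt * (-x*(1 + np.exp(z)) + x**2 * z**4)
    z += dt * (z**3 * u)
    if k % int(5.0 / dt) == 0:
        print(f"t = {k*dt:5.1f}  x = {x: .3e}  z = {z: .3e}  U = {0.5*z*z:.3e}")
```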


3.5.3 Block Backstepping with Zero Dynamics

Inspired by the results of integrator backstepping, in this section we formulate a similar control design algorithm to stabilize a system augmented by a dynamic block more complicated than a simple integrator. We first reformulate Assumption 3.5.1, which allows us to state the theorems of linear block backstepping and nonlinear block backstepping (Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos and P.V. Kokotovic).

Assumption 3.5.3 Suppose Assumption 3.5.1 is valid with V(x) positive semi-definite, and the closed-loop system (3.5.11) with the control (3.5.12) has the property that x(t) is bounded if V(x(t)) is bounded.

Under this assumption the control input (3.5.12), when applied to the system (3.5.11), guarantees the global boundedness of the state x(t) as well as the regulation of W(x(t)). From (3.5.13) it is easy to conclude that W(x(t)) is integrable on [0, \infty) and converges to zero by Barbalat's lemma. Furthermore, as all solutions x(t) are bounded, LaSalle's theorem allows us to conclude that x(t) converges to the largest invariant set M contained in the set E = \{x \in R^n : W(x) = 0\}.

Corollary 3.5.2 When Assumption 3.5.1 is replaced by Assumption 3.5.3, the boundedness and convergence properties in part (ii) of Theorem 3.5.1 still hold.

Theorem 3.5.4 (Linear Block Backstepping) Consider the cascade system

\dot{x} = f(x) + g(x)y, \quad f(0) = 0, \quad x \in R^n, \; y \in R        (3.5.56.a)
\dot{z} = Az + Bu, \quad y = Cz, \quad z \in R^m, \; u \in R        (3.5.56.b)

where (3.5.56.b) is a minimum-phase system of relative degree one (CB \ne 0). If (3.5.56.a) satisfies Assumption 3.5.3 with y as its input, then it is possible to find a control input which ensures global boundedness and convergence of [x(t)^T, z(t)^T]^T to the largest invariant set M_a contained in the set

E_a = \left\{ [x^T, z^T]^T \in R^{n+m} : W(x) = 0, \; y = \alpha_s(x) \right\}

One choice of such a control is

u = \frac{1}{CB}\left\{ -k\left[y - \alpha_s(x)\right] - CAz + \frac{\partial \alpha_s}{\partial x}(x)\left[f(x) + g(x)y\right] - \frac{\partial V}{\partial x}(x) g(x) \right\}, \quad k > 0        (3.5.57)

Moreover, if V(x) and W(x) are positive definite, then the equilibrium x = 0, z = 0 is GAS.

Proof. A linear SISO system with relative degree one can be represented in the following form:

\dot{y} = CAz + CBu        (3.5.58.a)
\dot{\zeta} = A_0\zeta + B_0 y        (3.5.58.b)

where the eigenvalues of A_0 are the (stable) zeros of the transfer function H(s) = C(sI - A)^{-1}B of the minimum-phase system (3.5.56.b). Using (3.5.58) and the feedback transformation

u = \frac{1}{CB}(v - CAz)        (3.5.59)

we can rewrite (3.5.56) as

\dot{x} = f(x) + g(x)y        (3.5.60.a)
\dot{y} = v        (3.5.60.b)
\dot{\zeta} = A_0\zeta + B_0 y        (3.5.60.c)

For the sake of simplicity we first ignore the zero dynamics (3.5.60.c) and, using Corollary 3.5.2, apply Theorem 3.5.1 to (3.5.60.a) and (3.5.60.b) to achieve global boundedness of x(t) and y(t) and regulation of W(x(t)) and y(t) - \alpha_s(x(t)). From equation (3.5.17) we can construct such a control law; in view of (3.5.59), one choice of the control input u is given by (3.5.57). Considering now (3.5.60.c), we can easily point out that \zeta is bounded, because A_0 is Hurwitz and y is bounded; thus z is bounded. As all the solutions of (3.5.56) are bounded, we can apply LaSalle's theorem with \Omega = R^{n+m} to conclude the convergence to the set M_a.

From Theorem 3.5.1 we conclude that if V(x) and W(x) are positive definite, then the equilibrium x = 0, y = 0 of (3.5.60.a)-(3.5.60.b), which is completely decoupled from (3.5.60.c), is GAS. To prove that the equilibrium point x = 0, z = 0 of the cascade system (3.5.56) is also GAS, we state Theorem 3.5.5.
45


Theorem 3.5.5 Consider the cascade system with ζ ∈ ℝ^p, x ∈ ℝ^n

\dot{\zeta} = A_0\zeta + B_0 y        (3.5.61.a)
\dot{x} = f(x), \quad f(0) = 0        (3.5.61.b)
y = C(x), \quad C(0) = 0        (3.5.61.c)

If x = 0 of (3.5.61.b) is GAS and A_0 is Hurwitz, then the equilibrium point ζ = 0, x = 0 of the cascade system (3.5.61) is GAS.

Proof. As the equilibrium of the subsystem (3.5.61.b) is GAS, there exist class-KL functions β₁ and β₂ such that

|x(t)| \le \beta_1(|x(0)|, t), \qquad |y(t)| \le \beta_2(|x(0)|, t)        (3.5.62)

The solutions of (3.5.61.a), on the other hand, are given by

\zeta(t) = e^{A_0 t}\zeta(0) + \int_0^t e^{A_0(t-\tau)}B_0\, y(\tau)\,d\tau        (3.5.63)

As A_0 is Hurwitz, \|e^{A_0 t}\| \le k_1 e^{-\alpha t} for some k_1, \alpha > 0. Using this with (3.5.62) in (3.5.63), and splitting the integral at τ = t/2, we obtain

|\zeta(t)| \le k_1 e^{-\alpha t}|\zeta(0)| + \int_0^t k_1 e^{-\alpha(t-\tau)}\|B_0\|\,\beta_2(|x(0)|, \tau)\,d\tau
\le k_1 e^{-\alpha t}|\zeta(0)| + k_2\, e^{-\alpha t/2}\sup_{0\le\tau\le t/2}\beta_2(|x(0)|, \tau) + k_2 \sup_{t/2\le\tau\le t}\beta_2(|x(0)|, \tau)
\le k_1 e^{-\alpha t}|\zeta(0)| + k_2\,\beta_2(|x(0)|, 0)\,e^{-\alpha t/2} + k_2\,\beta_2(|x(0)|, t/2)
\le \beta_3\!\left( \left| \begin{pmatrix} \zeta(0) \\ x(0) \end{pmatrix} \right|,\; t \right)        (3.5.64)

where k₂ = k₁‖B₀‖/α and β₃ is a class-KL function. Combining (3.5.62) with (3.5.64) proves that ζ = 0, x = 0 is GAS:

\left| \begin{pmatrix} \zeta(t) \\ x(t) \end{pmatrix} \right| \le \beta_1(|x(0)|, t) + \beta_3\!\left( \left| \begin{pmatrix} \zeta(0) \\ x(0) \end{pmatrix} \right|,\; t \right)        (3.5.65)

If we compare Theorem 3.5.3 and Theorem 3.5.4, we can conclude that instead of assuming global stability of the equilibrium x = 0, y = 0, the theorem of linear block backstepping (Theorem 3.5.4) only assumes the global stabilizability of x = 0 through y.
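As a concrete illustration of the control law (3.5.57), the sketch below uses a hypothetical example: the scalar x-subsystem ẋ = -x + y (so V(x) = ½x², α_s(x) = 0, W(x) = x²) driven by a minimum-phase, relative-degree-one linear block with A = [[0,1],[-1,-1]], B = [0,1]ᵀ, C = [1,1] (CB = 1). All numerical values are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical example for the linear block-backstepping law (3.5.57):
#   x-subsystem: xdot = -x + y      (f = -x, g = 1, V = x**2/2, alpha_s = 0)
#   z-subsystem: zdot = A z + B u,  y = C z   (minimum phase, CB = 1)
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 1.0])
k = 2.0

def control(x, z):
    y = C @ z
    # u = (1/CB) * [ -k*(y - alpha_s) + alpha_s'*(f + g*y) - C*A*z - (dV/dx)*g ]
    return (-k * y - C @ A @ z - x) / (C @ B)

x, z = 1.5, np.array([1.0, -0.5])
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    u = control(x, z)
    x += dt * (-x + C @ z)
    z = z + dt * (A @ z + B * u)
print("final |x|, |z| =", abs(x), np.linalg.norm(z))   # both should be near zero
```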

Theorem 3.5.6 (Nonlinear Block Backstepping) Let us consider the cascade system

\dot{x} = f(x) + g(x)y, \quad f(0) = 0, \quad x \in \mathbb{R}^n, \; y \in \mathbb{R}        (3.5.66.a)
\dot{z} = \varphi(x,z) + \gamma(x,z)u, \quad y = C(z), \quad C(0) = 0, \quad z \in \mathbb{R}^m, \; u \in \mathbb{R}        (3.5.66.b)

Let us assume that (3.5.66.b) has globally defined and constant relative degree one uniformly in x, and that its zero-dynamics subsystem is input-to-state stable (ISS) with respect to x and y as its inputs. If (3.5.66.a) satisfies Assumption 3.5.3 with y as its input, then there exists a feedback control law which guarantees global boundedness and convergence of [x(t), z(t)]ᵀ to the largest invariant set M_a contained in the set E_a = { [x, z]ᵀ ∈ ℝ^{n+m} | W(x) = 0, y = α_s(x) }. One particular choice is

u = \left[\frac{\partial C}{\partial z}\gamma(x,z)\right]^{-1}\left\{ -k[y - \alpha_s(x)] - \frac{\partial C}{\partial z}\varphi(x,z) + \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)y] - \frac{\partial V}{\partial x}g(x) \right\}, \quad k > 0        (3.5.67)

Moreover, if V(x) and W(x) are positive definite, then the equilibrium x = 0, z = 0 is GAS.

Proof. Since the relative degree of the subsystem (3.5.66.b) is globally defined and equal to one uniformly in x, there exists a global diffeomorphism (under some additional conditions of connectedness and completeness) (y, ζ) = (C(z), ψ(x, z)), with (∂ψ/∂z)γ = 0, which transforms (3.5.66.b) into

\dot{y} = \frac{\partial C}{\partial z}\varphi(x,z) + \frac{\partial C}{\partial z}\gamma(x,z)u        (3.5.68.a)
\dot{\zeta} = \frac{\partial \psi}{\partial x}[f(x) + g(x)y] + \frac{\partial \psi}{\partial z}\varphi(x,z) \triangleq \Psi(x, \zeta, y)        (3.5.68.b)

We now consider the cascade system consisting of (3.5.66.a) and (3.5.68.a). If we linearize (3.5.68.a) with the feedback control law

u = \left(\frac{\partial C}{\partial z}\gamma\right)^{-1}\left(v - \frac{\partial C}{\partial z}\varphi\right)        (3.5.69)

it results in \dot{y} = v. Then we can easily apply Theorem 3.5.1, with v as the new control input, to ensure the global boundedness of x and y and the regulation of W(x(t)) and y(t) - α_s(x(t)). As we have assumed that the zero dynamics of the system are ISS, from (3.5.68.b) we conclude that ζ is also bounded, and thus z and u are also bounded. Since all solutions of (3.5.66) are bounded, it is possible to apply LaSalle's theorem with Ω = ℝ^{n+m} to conclude the convergence to the set M_a. Combining equation (3.5.69) with (3.5.17), we see that a particular choice of control is given by (3.5.67).

From the theorem of integrator backstepping we can also conclude that if V(x) and W(x) are positive definite, then the equilibrium x = 0, y = 0 of (3.5.66.a) and (3.5.68.a), which is completely decoupled from (3.5.68.b), is GAS. That in this case the equilibrium x = 0, z = 0 of the cascade system (3.5.66) is also GAS follows from the ISS lemma, noting that the state (x, y) of the GAS system (3.5.66.a), (3.5.68.a) is the input of the ISS system (3.5.68.b).
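A minimal sketch of (3.5.67), assuming a hypothetical plant: x-subsystem ẋ = -x³ + y (V = ½x², α_s = 0, W = x⁴) and z-subsystem ż₁ = -z₁ + x + y (ISS zero dynamics), ż₂ = u, y = z₂. Here (∂C/∂z)γ = 1 and (∂C/∂z)φ = 0, so (3.5.67) reduces to u = -ky - x; all numbers are illustrative.

```python
# Hypothetical example for the nonlinear block-backstepping law (3.5.67):
#   xdot  = -x**3 + y
#   z1dot = -z1 + x + y   (ISS zero dynamics),  z2dot = u,  y = z2
k = 2.0
x, z1, z2 = 1.0, -0.5, 0.8
dt = 1e-3
for _ in range(int(15 / dt)):
    y = z2
    u = -k * y - x                # (3.5.67) for this particular plant
    x  += dt * (-x**3 + y)
    z1 += dt * (-z1 + x + y)
    z2 += dt * u
print(f"|x|={abs(x):.2e}  |z1|={abs(z1):.2e}  |z2|={abs(z2):.2e}")
```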

3.5.4 Systematic Design Procedures:
In this section we discuss how the backstepping control design can be applied to more complex, higher-order systems (Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos and P. V. Kokotovic, Section 2.8).

3.5.4.1 Strict-feedback systems
Nonlinear strict-feedback systems are of the form

\dot{x} = f(x) + g(x)z_1
\dot{z}_1 = f_1(x, z_1) + g_1(x, z_1)z_2
\dot{z}_2 = f_2(x, z_1, z_2) + g_2(x, z_1, z_2)z_3
\vdots
\dot{z}_m = f_m(x, z_1, \ldots, z_m) + g_m(x, z_1, \ldots, z_m)u        (3.5.70)

where x ∈ ℝⁿ and z₁, …, z_m are scalars. The nonlinearities f_i, g_i in the equation for ż_i depend only on the state variables x, z₁, …, z_i, i.e. only on the state variables which are fed back; this is why this form is termed the strict-feedback form.

The x-subsystem satisfies Assumption 3.5.1 with z₁ as its control input. The recursive design starts with the subsystem

\dot{x} = f(x) + g(x)z_1
\dot{z}_1 = f_1(x, z_1) + g_1(x, z_1)z_2        (3.5.71)

If f₁ ≡ 0 and g₁ ≡ 1, the theorem of integrator backstepping would be directly applicable to (3.5.71), treating z₂ as the control. In the presence of f₁(x, z₁) and g₁(x, z₁) we construct an augmented Lyapunov function V₁(x, z₁) for (3.5.71) as follows:

V_1(x, z_1) = V(x) + \frac{1}{2}[z_1 - \alpha_s(x)]^2        (3.5.72)

where α_s(x) is a stabilizing feedback that satisfies (3.5.17) for the x-subsystem. The intermediate control law α_s(x), which ensures the stabilization of the x-subsystem, is termed a stabilizing function. The next step of the design is the construction of another stabilizing function α₁(x, z₁) for z₂, the virtual control in equation (3.5.71); our intention is to ensure that the derivative of V₁ is non-positive when z₂ = α₁:

\dot{V}_1 \le -W(x) + [z_1 - \alpha_s(x)]\left\{ \frac{\partial V}{\partial x}g(x) + f_1(x,z_1) + g_1(x,z_1)z_2 - \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1] \right\}
= -W(x) + [z_1 - \alpha_s(x)]\left\{ \frac{\partial V}{\partial x}g(x) + f_1(x,z_1) + g_1(x,z_1)\alpha_1(x,z_1) - \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1] \right\} + [z_1 - \alpha_s(x)]\,g_1(x,z_1)[z_2 - \alpha_1(x,z_1)]
\triangleq -W_1(x, z_1) + \frac{\partial V_1}{\partial z_1}g_1(x,z_1)[z_2 - \alpha_1(x,z_1)]        (3.5.73)

where W₁(x, z₁) > 0 when W(x) > 0 or z₁ ≠ α_s(x). If g₁(x, z₁) ≠ 0 for all x and z₁, one choice of α₁ is

\alpha_1(x, z_1) = \frac{1}{g_1(x,z_1)}\left\{ -k_1[z_1 - \alpha_s(x)] - \frac{\partial V}{\partial x}g(x) - f_1(x,z_1) + \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1] \right\}        (3.5.74)

with k₁ > 0, which yields W₁(x, z₁) = W(x) + k₁[z₁ - α_s(x)]².
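The stabilizing function (3.5.74) can be generated symbolically. The sketch below does so for a hypothetical strict-feedback pair ẋ = -x + x z₁, ż₁ = x² + z₂, with V(x) = ½x² and α_s(x) = 0 (for which (3.5.17) holds with W(x) = x²); the plant and the gain are assumptions made only for illustration.

```python
import sympy as sp

x, z1, k1 = sp.symbols('x z1 k1', real=True)
# Hypothetical strict-feedback plant:  xdot = -x + x*z1,  z1dot = x**2 + z2
f, g   = -x, x          # x-subsystem
f1, g1 = x**2, 1        # z1-subsystem
V       = x**2 / 2      # clf for the x-subsystem
alpha_s = sp.Integer(0) # stabilizing function for the x-subsystem

# Stabilizing function (3.5.74)
alpha1 = (1/g1) * (-k1*(z1 - alpha_s) - sp.diff(V, x)*g - f1
                   + sp.diff(alpha_s, x)*(f + g*z1))
print(sp.simplify(alpha1))        # -> -k1*z1 - 2*x**2
```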
After we have determined the stabilizing function α₁(x, z₁), the next step of the design is to augment (3.5.71) with the ż₂-equation from (3.5.70). In compact notation we can express the equations in the following form

\dot{X}_1 = F_1(X_1) + G_1(X_1)z_2
\dot{z}_2 = f_2(X_1, z_2) + g_2(X_1, z_2)z_3        (3.5.75)

where f₂(X₁, z₂), g₂(X₁, z₂) stand for f₂(x, z₁, z₂), g₂(x, z₁, z₂) and

X_1 = \begin{bmatrix} x \\ z_1 \end{bmatrix}, \quad F_1(X_1) = \begin{bmatrix} f(x) + g(x)z_1 \\ f_1(x, z_1) \end{bmatrix}, \quad G_1(X_1) = \begin{bmatrix} 0 \\ g_1(x, z_1) \end{bmatrix}        (3.5.76)

The structure of the system (3.5.75) is similar to that of (3.5.71). So, in a similar way, we can construct an augmented Lyapunov function for (3.5.75) analogous to (3.5.72):

V_2(X_1, z_2) = V_1(X_1) + \frac{1}{2}[z_2 - \alpha_1(X_1)]^2 = V(x) + \frac{1}{2}\sum_{i=1}^{2}[z_i - \alpha_{i-1}]^2        (3.5.77)

The virtual control α₂ for z₃ is determined to render

\dot{V}_2 \le -W_2(X_1, z_2) + \frac{\partial V_2}{\partial z_2}g_2(X_1, z_2)[z_3 - \alpha_2(X_1, z_2)]        (3.5.78)

with W₂(X₁, z₂) > 0 when W₁(x, z₁) > 0 or z₂ ≠ α₁(X₁).
It is clear from the previous design steps that this recursive procedure terminates at the m-th step, at which the whole system (3.5.70) is stabilized by the actual control input u. In compact notation (3.5.70) can be rewritten as

\dot{X}_{m-1} = F_{m-1}(X_{m-1}) + G_{m-1}(X_{m-1})z_m
\dot{z}_m = f_m(X_{m-1}, z_m) + g_m(X_{m-1}, z_m)u        (3.5.79)

where

X_m = \begin{bmatrix} X_{m-1} \\ z_m \end{bmatrix}, \quad F_m(X_m) = \begin{bmatrix} F_{m-1}(X_{m-1}) + G_{m-1}(X_{m-1})z_m \\ f_m(X_{m-1}, z_m) \end{bmatrix}, \quad G_m(X_m) = \begin{bmatrix} 0 \\ g_m(X_{m-1}, z_m) \end{bmatrix}        (3.5.80)

Once again the last equation is in the form of (3.5.71) and (3.5.75), and the Lyapunov function for (3.5.79) is

V_m(x, z_1, \ldots, z_m) = V_{m-1}(X_{m-1}) + \frac{1}{2}[z_m - \alpha_{m-1}(X_{m-1})]^2 = V(x) + \frac{1}{2}\sum_{i=1}^{m}[z_i - \alpha_{i-1}]^2        (3.5.81)

It is now clear from (3.5.81) that the stabilizing functions αᵢ(Xᵢ) serve the purpose of intermediate control laws. The function V_m is indeed a Lyapunov function, because the control input u can be designed to render \dot{V}_m \le -W_m \le 0, with W_m > 0 when W_{m-1} > 0 or z_m ≠ α_{m-1}:

\dot{V}_m = \dot{V}_{m-1} + [z_m - \alpha_{m-1}]\left[ f_m + g_m u - \frac{\partial \alpha_{m-1}}{\partial X_{m-1}}\left( F_{m-1} + G_{m-1}z_m \right) \right]
\le -W_{m-1}(X_{m-2}, z_{m-1}) + \frac{\partial V_{m-1}}{\partial z_{m-1}}g_{m-1}[z_m - \alpha_{m-1}] + [z_m - \alpha_{m-1}]\left[ f_m + g_m u - \frac{\partial \alpha_{m-1}}{\partial X_{m-1}}\left( F_{m-1} + G_{m-1}z_m \right) \right]
\le -W_m(X_{m-1}, z_m) \le 0        (3.5.82)

If the nonsingularity condition

g_i(x, z_1, \ldots, z_i) \ne 0, \quad \forall x \in \mathbb{R}^n, \; z \in \mathbb{R}^m, \quad i = 1, \ldots, m        (3.5.83)

is satisfied, then the simplest choice for u is

u = \frac{1}{g_m}\left\{ -k_m[z_m - \alpha_{m-1}] - \frac{\partial V_{m-1}}{\partial z_{m-1}}g_{m-1} - f_m + \frac{\partial \alpha_{m-1}}{\partial X_{m-1}}\left( F_{m-1} + G_{m-1}z_m \right) \right\}        (3.5.84)

with k_m > 0, which yields W_m = W_{m-1} + k_m[z_m - \alpha_{m-1}]^2.
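Putting the whole recursion together, the sketch below designs and simulates the control (3.5.84) for the same hypothetical plant extended by one more equation (m = 2): ẋ = -x + x z₁, ż₁ = x² + z₂, ż₂ = u. The symbolic steps mirror (3.5.74) and (3.5.84); the plant and all numerical values are illustrative assumptions.

```python
import numpy as np
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2', real=True)
k1, k2 = 2.0, 2.0
# Hypothetical strict-feedback plant: xdot=-x+x*z1, z1dot=x**2+z2, z2dot=u
f, g = -x, x
f1, g1, f2, g2 = x**2, 1, 0, 1
V, alpha_s = x**2/2, sp.Integer(0)

alpha1 = (1/g1)*(-k1*(z1 - alpha_s) - sp.diff(V, x)*g - f1
                 + sp.diff(alpha_s, x)*(f + g*z1))               # (3.5.74)
V1 = V + (z1 - alpha_s)**2/2
u = (1/g2)*(-k2*(z2 - alpha1) - sp.diff(V1, z1)*g1 - f2
            + sp.diff(alpha1, x)*(f + g*z1)
            + sp.diff(alpha1, z1)*(f1 + g1*z2))                  # (3.5.84), m = 2
u_fun = sp.lambdify((x, z1, z2), u)

s = np.array([1.0, -1.0, 0.5]); dt = 1e-3
for _ in range(int(15/dt)):
    xx, zz1, zz2 = s
    ds = np.array([-xx + xx*zz1, xx**2 + zz2, u_fun(xx, zz1, zz2)])
    s = s + dt*ds
print("final state:", s)   # should be close to the origin
```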
3.5.4.2 Semi-strict feedback forms
Nonlinear semi-strict-feedback systems are of the form

\dot{x} = f(x) + g(x)z_1
\dot{z}_1 = f_1(x, z_1, z_2)
\dot{z}_2 = f_2(x, z_1, z_2, z_3)
\vdots
\dot{z}_{m-1} = f_{m-1}(x, z_1, \ldots, z_m)
\dot{z}_m = f_m(x, z_1, \ldots, z_m, u)        (3.5.85)

where x ∈ ℝⁿ and z₁, …, z_m are scalars. Compared with the strict-feedback form (3.5.70), semi-strict-feedback systems lack the affine appearance of the variables z_{i+1} to be used as virtual controls, and of the actual control u itself.

The x-subsystem satisfies Assumption 3.5.1 with z₁ as its control input. The recursive design starts with the subsystem

\dot{x} = f(x) + g(x)z_1
\dot{z}_1 = f_1(x, z_1, z_2)        (3.5.86)

treating z₂ as the control variable. In a similar manner we construct an augmented Lyapunov function V₁(x, z₁) for (3.5.86) as follows

V_1(x, z_1) = V(x) + \frac{1}{2}[z_1 - \alpha_s(x)]^2        (3.5.87)

where α_s(x) is a stabilizing feedback that satisfies (3.5.17) for the x-subsystem. The next step of the design is the construction of another stabilizing function α₁(x, z₁) for z₂, the virtual control in equation (3.5.86); our intention is to ensure that the derivative of V₁ is non-positive when z₂ = α₁:
\dot{V}_1 \le -W(x) + [z_1 - \alpha_s(x)]\left\{ \frac{\partial V}{\partial x}g(x) + f_1(x, z_1, z_2) - \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1] \right\}
= -W(x) + [z_1 - \alpha_s(x)]\left\{ \frac{\partial V}{\partial x}g(x) + f_1(x, z_1, \alpha_1(x,z_1)) - \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1] \right\} + [z_1 - \alpha_s(x)]\left[ f_1(x, z_1, z_2) - f_1(x, z_1, \alpha_1(x,z_1)) \right]
\triangleq -W_1(x, z_1) + \frac{\partial V_1}{\partial z_1}\bar{f}_1(x, z_1, z_2, \alpha_1(x,z_1))        (3.5.88)

where W₁(x, z₁) > 0 when W(x) > 0 or z₁ ≠ α_s(x). If the function f₁ is smooth, then f̄₁ is also smooth:

\bar{f}_1(x, z_1, z_2, \alpha_1(x,z_1)) = f_1(x, z_1, z_2) - f_1(x, z_1, \alpha_1(x,z_1))        (3.5.89)

In the earlier design, when g₁ ≠ 0, a simple choice for α₁ was the one that makes W₁(x, z₁) = W(x) + k₁[z₁ - α_s(x)]². If we follow a similar procedure to find a stabilizing function from equation (3.5.88), we have to solve the following equation:

f_1(x, z_1, \alpha_1) = -k_1[z_1 - \alpha_s(x)] - \frac{\partial V}{\partial x}g(x) + \frac{\partial \alpha_s}{\partial x}[f(x) + g(x)z_1]        (3.5.90)

From the implicit function theorem, a sufficient condition for the solvability of the above equation with respect to α₁ is

\frac{\partial f_1}{\partial z_2}(x, z_1, z_2) \ne 0, \quad \forall (x, z_1, z_2) \in \mathbb{R}^{n+2}        (3.5.91)

This is quite conservative and unnecessary for our ability to find an α₁ which satisfies equation (3.5.88). It may happen that there exists a stabilizing function α₁(x, z₁) which serves the purpose of (3.5.88) while violating the constraint of equation (3.5.91). So here we simply assume that an α₁ has been found.
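When the virtual control enters non-affinely, α₁ generally has to be computed numerically from (3.5.90) at each state. A minimal sketch is given below, assuming the hypothetical pair ẋ = -x + z₁ (V = ½x², α_s = 0, g = 1) and ż₁ = f₁ = z₂ + z₂³, which happens to satisfy (3.5.91); the gain is illustrative.

```python
from scipy.optimize import brentq

# Hypothetical semi-strict-feedback pair:
#   xdot  = -x + z1
#   z1dot = f1(x, z1, z2) = z2 + z2**3   (not affine in the virtual control z2)
k1 = 2.0
def f1(z2):
    return z2 + z2**3

def alpha1(x, z1):
    # Solve f1(x, z1, alpha1) = -k1*(z1 - alpha_s) - (dV/dx)*g   (cf. (3.5.90))
    rhs = -k1*z1 - x
    return brentq(lambda z2: f1(z2) - rhs, -1e3, 1e3)

print(alpha1(0.5, 1.0))   # virtual control value that renders V1dot <= -W1
```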
Similar to the last design case, we augment the subsystem (3.5.86) by one more state equation from (3.5.85):

\dot{X}_1 = F_1(X_1, z_2)
\dot{z}_2 = f_2(X_1, z_2, z_3)        (3.5.92)

where f₂(X₁, z₂, z₃) stands for f₂(x, z₁, z₂, z₃) and

X_1 = \begin{bmatrix} x \\ z_1 \end{bmatrix}, \quad F_1(X_1, z_2) = \begin{bmatrix} f(x) + g(x)z_1 \\ f_1(x, z_1, z_2) \end{bmatrix}        (3.5.93)

Here we use z₃ as the virtual control to stabilize (3.5.92) with respect to the Lyapunov function V₂:

V_2(X_1, z_2) = V_1(X_1) + \frac{1}{2}[z_2 - \alpha_1(X_1)]^2 = V(x) + \frac{1}{2}\sum_{i=1}^{2}[z_i - \alpha_{i-1}]^2        (3.5.94)

The goal of this step of the design is to find a stabilizing function α₂ to yield

\dot{V}_2 \le -W_2(X_1, z_2) + \frac{\partial V_2}{\partial z_2}\bar{f}_2(X_1, z_2, z_3, \alpha_2(X_1, z_2))        (3.5.95)

with f̄₂(X₁, z₂, z₃, α₂) = f₂(X₁, z₂, z₃) - f₂(X₁, z₂, α₂(X₁, z₂)), and W₂(X₁, z₂) > 0 when W₁(x, z₁) > 0 or z₂ ≠ α₁(X₁). Once again, if we want W₂(X₁, z₂) = W₁(x, z₁) + k₂[z₂ - α₁(X₁)]², we need

\frac{\partial f_2}{\partial z_3}(X_1, z_2, z_3) \ne 0, \quad \forall X_1 \in \mathbb{R}^{n+1}, \; z_2 \in \mathbb{R}, \; z_3 \in \mathbb{R}        (3.5.96)

In most cases we would avoid this requirement by directly finding an α₂(X₂) that satisfies the inequality constraint (3.5.95).

It is clear from the previous design steps that this recursive procedure terminates at the m-th step, at which the whole system (3.5.85) is stabilized by the actual control input u. Proceeding in a similar manner, in the m-th step we arrive at the actual control input u in

\dot{X}_{m-1} = F_{m-1}(X_{m-1}, z_m)
\dot{z}_m = f_m(X_{m-1}, z_m, u)        (3.5.97)

where

X_{m-1} = \begin{bmatrix} X_{m-2} \\ z_{m-1} \end{bmatrix}, \quad F_{m-1}(X_{m-1}, z_m) = \begin{bmatrix} F_{m-2}(X_{m-2}, z_{m-1}) \\ f_{m-1}(X_{m-2}, z_{m-1}, z_m) \end{bmatrix}        (3.5.98)

For the system (3.5.97) we use the following Lyapunov function:

V_m(x, z_1, \ldots, z_m) = V_{m-1}(X_{m-1}) + \frac{1}{2}[z_m - \alpha_{m-1}(X_{m-1})]^2 = V(x) + \frac{1}{2}\sum_{i=1}^{m}[z_i - \alpha_{i-1}]^2        (3.5.99)

The last step of the design is to find a control law

u = \alpha_m(x, z_1, \ldots, z_m)        (3.5.100)

which ensures \dot{V}_m \le -W_m \le 0, with W_m > 0 when W_{m-1} > 0 or z_m ≠ α_{m-1}:

\dot{V}_m = \dot{V}_{m-1} + [z_m - \alpha_{m-1}]\left[ f_m(X_{m-1}, z_m, u) - \frac{\partial \alpha_{m-1}}{\partial X_{m-1}}F_{m-1}(X_{m-1}, z_m) \right]
\le -W_{m-1}(X_{m-2}, z_{m-1}) + \frac{\partial V_{m-1}}{\partial z_{m-1}}\bar{f}_{m-1} + [z_m - \alpha_{m-1}]\left[ f_m(X_{m-1}, z_m, u) - \frac{\partial \alpha_{m-1}}{\partial X_{m-1}}F_{m-1}(X_{m-1}, z_m) \right]
\le -W_m(X_{m-1}, z_m) \le 0        (3.5.101)

It is possible to find a u = α_m which satisfies the above whenever

\frac{\partial f_m}{\partial u}(X_{m-1}, z_m, u) \ne 0, \quad \forall X_{m-1} \in \mathbb{R}^{n+m-1}, \; z_m \in \mathbb{R}, \; u \in \mathbb{R}        (3.5.102)

However, it may also be possible to find a u = α_m which yields W_m = W_{m-1} + k_m[z_m - α_{m-1}]² without satisfying the constraint of equation (3.5.102).

3.5.4.3 Block-strict-feedback systems

Theorem 3.5.6 can be applied repeatedly to design controllers for nonlinear systems which can be transformed, by means of a coordinate transformation, into the block-strict-feedback form

\dot{x} = f(x) + g(x)y_1
\dot{X}_1 = F_1(x, X_1) + G_1(x, X_1)y_2, \qquad y_1 = C_1(X_1)
\dot{X}_2 = F_2(x, X_1, X_2) + G_2(x, X_1, X_2)y_3, \qquad y_2 = C_2(X_2)
\vdots
\dot{X}_k = F_k(x, X_1, \ldots, X_k) + G_k(x, X_1, \ldots, X_k)y_{k+1}, \qquad y_k = C_k(X_k)
\vdots
\dot{X}_m = F_m(x, X_1, \ldots, X_m) + G_m(x, X_1, \ldots, X_m)u, \qquad y_m = C_m(X_m)        (3.5.103)

where each of the m subsystems with state X_k ∈ ℝ^{n_k}, output y_k ∈ ℝ and input y_{k+1} (with y_{m+1} = u) satisfies the following conditions:
(BSF-1) its relative degree is one uniformly in x, X₁, …, X_{k-1}, and
(BSF-2) its zero-dynamics subsystem is ISS with respect to x, X₁, …, X_{k-1}, y_k as its inputs.

Under conditions (BSF-1) and (BSF-2), the system (3.5.103) can be transformed into a form redolent of the strict-feedback form (3.5.70). In particular, (BSF-1) is equivalent to

\frac{\partial C_k}{\partial X_k}G_k \ne 0, \quad \forall x \in \mathbb{R}^n, \; X_i \in \mathbb{R}^{n_i}, \quad k = 1, \ldots, m        (3.5.104)

This means that for each
k
X subsystem in (3.5.94) there exists a global coordinates transformation
( ) ( ) ( ) ( )
k k k k k k
X X x X C y , , , , ,
1
= , with 0

k
k
k
G
X

, which converts it into the normal form of


equation (3.5.59):
59


( ) ( ) ( ) | |
( ) ( )
1 1 1 1 1
1 1 1
, , , , , , , , , ,
, , , , , ,
+
+
+
+

=
k k k k k k k
k k k k k k
k
k
k
y x y x g x y x f
y X X x G X X x F X
X
C
y


(3.5.105.a)

( ) ( ) ( ) | |
( ) ( )
( )
k k k
k k k
k
k
k k k k k k
k
i i
k
k
y X X x
X X x F X X x
X
y X X x G X X x F X X x
X

, , , , ,
, , , , , ,
, , , , , , , , ,
1 1
1 1
1 1 1 1
1
1

+
+


( )
k k k k
y y y x , , , , , , ,
1 1 1 1
(3.5.105.b)

The set of state equation of (3.5.103) are transformed into the following state equation, when represented
into this new coordinate system




(3.5.106)
If the zero
dynamics
variables
m
, ,
1
were
not present in (3.5.970 would be identical to the strict-feedback form of (3.5.61) with
k
z replaced by
k
y .
Hence after some necessary changes having been made we can apply the design procedure of section
3.5.3.1 to the system of (3.5.97). At first the boundedness of ( ) ( ) ( ) t y t y t x
m
, , , and the regulation of
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( )
( )
m m m m m
m m k m m m m
m m m k m m m m
y y y x
y x
u y y x G y y x F y
y y y x G y y x F y
y y y x G y y x F y
y y x G y x F y
y x g x f x






, , , , , , ,

, ,
, , , , , , , , , ,
, , , , , , , , , ,

, , , , , , , ,
, , , ,
1 1 1 1
1 1 1
1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
3 2 2 1 1 2 2 2 1 1 2 2
2 1 1 1 1 1 1 1
1


=
=
+ =
+ =
+ =
+ =
+ =

60


( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) t t y t t y t x t y t x t y t x W
m m m m s
, , , , , , , ,
1 1 1 1


is proved by means of an
augmented Lyapunov like function
( ) ( ) ( ) | |
2
1
1 1 1 1 1 1
, , , , ,
2
1
, , ,

=

+ =
m
k
m m k k m m
y y x y x V y y x V (3.5.107)
This implies the boundedness of
1
y . Now we use an induction argument 1 , , 2 + = m k with u y
m
=
+1
to
prove the boundedness
m m
y y , , , , ,
1 2
and u (and consequently the boundedness of
m
X X , ,
1
). If
1 1
, ,
k
y y and
2 1
, ,
k
are bounded the ISS stability assumption on the
1 k
subsystem ensures that
1 k
is bounded. This intern implies that ( )
1 1 1 1
, , , , ,
k k k
y y x is bounded which implies that
k
y is
also bounded. This completes the induction argument and shows that u X X
m
, , ,
1
are bounded, since
they can be expressed as smooth functions of
1 1 1 1
, , , , ,
k k
y y x .

3.6 Conclusion:
We have seen that the backstepping control design is more flexible than feedback linearization, and it offers a solution of the design problem for different types of nonlinear systems, which may contain hard nonlinearities (we studied an example of this type of system in Example 3.5.3). Inspired by the design flexibility and performance of the backstepping controller, we will now implement a backstepping control algorithm to design a controller for an unstable mechanical system (such as an inverted pendulum).







Chapter 4
Adaptive Backstepping Control of a Separately Excited DC Machine


The controller design algorithms discussed in Chapter 3 mainly deal with systems which are time invariant in nature; moreover, we assumed a complete ("white-box") model of the system, i.e. that no uncertain terms are present in the dynamics of the system. If we have to deal with parametric uncertainties present in the system dynamics, the controllers of the last chapter may not be able to produce the desired result (in fact, in most cases the formulation of the controller is also very difficult, and unless at least a nominal value of the uncertain parameters is available, controller design is not possible). All the controllers described in the last chapter employ static feedback, whereas the controller designed in this chapter employs a nonlinear integral (dynamic) feedback. The core concept of this design methodology is to estimate the uncertain parameters continuously by means of dynamic feedback, with which the static part of the controller is continuously adapted to the new parameter estimates. This design method employs an adaptation scheme along with the backstepping control design methodology; hence it has got the name Adaptive Backstepping Control.

4.1 Implementation of Adaptation Mechanism via Dynamic Feedback:

To illustrate the difference between static and dynamic feedback we first consider a nonlinear scalar plant given by

\dot{x} = u + \theta\varphi(x)        (4.1.1)

To achieve regulation of x(t), we introduce a dynamic feedback control action to employ an adaptation mechanism in our design. If θ were known, then the control input

u = -c_1 x - \theta\varphi(x)        (4.1.2)

would ensure the negative definiteness of the derivative of the Lyapunov function V_0(x) = \frac{1}{2}x^2. As θ is an unknown parameter we cannot implement the control input (4.1.2). We can solve this problem if we employ the certainty-equivalence form, in which we replace θ by an estimate θ̂:

u = -c_1 x - \hat{\theta}\varphi(x)        (4.1.3)

Substituting (4.1.3) into (4.1.1), we obtain

\dot{x} = -c_1 x + \tilde{\theta}\varphi(x)        (4.1.4)

where θ̃ is the parameter error:

\tilde{\theta} = \theta - \hat{\theta}        (4.1.5)

The derivative of V_0(x) = \frac{1}{2}x^2 becomes

\dot{V}_0 = -c_1 x^2 + \tilde{\theta}\,x\varphi(x)        (4.1.6)

Since the second term is sign indefinite and contains the unknown parameter error θ̃, it is impossible to draw any conclusion about the stability of the system (4.1.1) with the control input (4.1.3). To overcome this problem we design an update law for the unknown parameter. We augment V₀ with a quadratic term in the parameter error:

V_1 = \frac{1}{2}x^2 + \frac{1}{2\gamma}\tilde{\theta}^2        (4.1.7)

where γ > 0 is the adaptation gain. The derivative of this function is

\dot{V}_1 = x\dot{x} + \frac{1}{\gamma}\tilde{\theta}\dot{\tilde{\theta}} = -c_1 x^2 + \tilde{\theta}\left( x\varphi(x) - \frac{1}{\gamma}\dot{\hat{\theta}} \right)        (4.1.8)

The second term of the equation is still sign indefinite and contains θ̃ as a factor. However, the situation is far better than before, because now we have the dynamics of θ̂ at our disposal. With an appropriate choice of the update law we can remove the indefinite term from equation (4.1.8). So we choose the update law

\dot{\hat{\theta}} = \gamma\, x\varphi(x)        (4.1.9)

which yields

\dot{V}_1 = -c_1 x^2 \le 0        (4.1.10)

The resulting adaptive system consists of (4.1.1) with the control (4.1.3) and the update law (4.1.9), and is shown in Fig. 4.1.1.

Fig. 4.1.1 Adaptation as dynamic feedback

Since \dot{V}_1 \le 0, the equilibrium x = 0, θ̃ = 0 of the closed-loop system is globally stable and x(t) → 0. The adaptive nonlinear controller which ensures this guaranteed stable performance is given by equations (4.1.3) and (4.1.9):

u = -c_1 x - \hat{\theta}\varphi(x), \qquad \dot{\hat{\theta}} = \gamma\, x\varphi(x)        (4.1.11)
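A minimal simulation of the adaptive controller (4.1.11) is sketched below; the nonlinearity φ(x) = x² and all numerical values are hypothetical choices made only for illustration.

```python
# Simulation of the adaptive controller (4.1.11) for xdot = u + theta*phi(x).
phi = lambda x: x**2          # hypothetical nonlinearity
theta, c1, gamma = 2.0, 1.0, 5.0

x, theta_hat, dt = 1.0, 0.0, 1e-3
for _ in range(int(20/dt)):
    u = -c1*x - theta_hat*phi(x)          # certainty-equivalence control (4.1.3)
    xdot = u + theta*phi(x)
    th_dot = gamma*x*phi(x)               # update law (4.1.9)
    x, theta_hat = x + dt*xdot, theta_hat + dt*th_dot
print(f"x -> {x:.2e},  theta_hat -> {theta_hat:.3f}")
# x is regulated to zero; theta_hat stays bounded but need not converge to theta.
```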

In this case we were able to design an adaptive controller in a straightforward way. The design is so simple not because we are dealing with a scalar nonlinear system, but because of the matching condition present in the system dynamics: the term containing the unknown parameter in (4.1.1) is in the span of the control, i.e. the uncertain term in the system equation can be directly counteracted by the input u when θ is known. To illustrate the matching condition further, we take the example of a second-order system in which again the uncertain term is matched by the control input u:

\dot{x}_1 = x_2
\dot{x}_2 = u + \theta\varphi(x)        (4.1.12)

If θ were known, it would be possible to design a control input for the system (4.1.12) using the integrator backstepping design. First we treat x₂ as a virtual control and design the stabilizing function

\alpha_1(x_1) = -c_1 x_1        (4.1.13)

and then form the control Lyapunov function

V_c(x) = \frac{1}{2}x_1^2 + \frac{1}{2}\left[ x_2 - \alpha_1(x_1) \right]^2        (4.1.14)

whose derivative is rendered negative definite,

\dot{V}_c = -c_1 x_1^2 - c_2\left[ x_2 - \alpha_1(x_1) \right]^2        (4.1.15)

by the control

u = -c_2\left[ x_2 - \alpha_1(x_1) \right] - x_1 + \frac{\partial \alpha_1}{\partial x_1}x_2 - \theta\varphi(x)        (4.1.16)

Since θ is not known to us, we again replace it with its estimate θ̂ in the last equation to derive the adaptive control law:

u = -c_2\left[ x_2 - \alpha_1(x_1) \right] - x_1 + \frac{\partial \alpha_1}{\partial x_1}x_2 - \hat{\theta}\varphi(x)        (4.1.17)

The above control input results in the following error dynamics (z₁ = x₁, z₂ = x₂ - α₁):

\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix} = \begin{bmatrix} -c_1 & 1 \\ -1 & -c_2 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \varphi(x) \end{bmatrix}\tilde{\theta}        (4.1.18)

Then we augment the control Lyapunov function with a quadratic term in the parameter error θ̃ to obtain the Lyapunov function

V_a(z, \tilde{\theta}) = V_c + \frac{1}{2\gamma}\tilde{\theta}^2 = \frac{1}{2}z_1^2 + \frac{1}{2}z_2^2 + \frac{1}{2\gamma}\tilde{\theta}^2        (4.1.19)

Its derivative is

\dot{V}_a = -c_1 z_1^2 - c_2 z_2^2 + \tilde{\theta}\left( z_2\varphi(x) - \frac{1}{\gamma}\dot{\hat{\theta}} \right)        (4.1.20)

Now we choose an update law which cancels the sign-indefinite term present in equation (4.1.20):

\dot{\hat{\theta}} = \gamma\, z_2\varphi(x)        (4.1.21)

This ensures that the derivative of the Lyapunov function is nonpositive:

\dot{V}_a = -c_1 z_1^2 - c_2 z_2^2 \le 0        (4.1.22)

This implies that the equilibrium point z = 0, θ̃ = 0 of the closed-loop adaptive system consisting of (4.1.18) and (4.1.21), i.e.

\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix} = \begin{bmatrix} -c_1 & 1 \\ -1 & -c_2 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \varphi(x) \end{bmatrix}\tilde{\theta}, \qquad \dot{\tilde{\theta}} = -\gamma\begin{bmatrix} 0 & \varphi(x) \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix}        (4.1.23)

is globally stable, and it also ensures the asymptotic regulation of the state variable x(t), i.e. x(t) → 0.
The resulting controller in closed-loop form is shown in Fig. 4.1.2 below.


Fig 4.1.2 The Closed loop Adaptive system
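The closed loop of Fig. 4.1.2 can be reproduced numerically. The sketch below simulates (4.1.12) with the adaptive control (4.1.17) and update law (4.1.21); the nonlinearity φ(x) = sin x₁ and all numerical values are assumptions made for illustration.

```python
import numpy as np

# Adaptive control of the matched second-order plant (4.1.12):
#   x1dot = x2,  x2dot = u + theta*phi(x)
phi = lambda x1, x2: np.sin(x1)      # hypothetical nonlinearity
theta, c1, c2, gamma = 1.5, 2.0, 2.0, 5.0

x1, x2, th = 1.0, 0.0, 0.0
dt = 1e-4
for _ in range(int(20/dt)):
    z2 = x2 + c1*x1                              # x2 - alpha1, with alpha1 = -c1*x1
    u  = -x1 - c2*z2 - c1*x2 - th*phi(x1, x2)    # adaptive control law (4.1.17)
    x1d, x2d = x2, u + theta*phi(x1, x2)
    thd = gamma*phi(x1, x2)*z2                   # update law (4.1.21)
    x1, x2, th = x1 + dt*x1d, x2 + dt*x2d, th + dt*thd
print(f"x = ({x1:.2e}, {x2:.2e}),  theta_hat = {th:.3f}")
```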

4.2 Adaptive Integrator Backstepping Control Algorithm

Let us consider the system of equation (3.4.1),

\dot{x}_1 = x_2 + \theta\varphi(x_1)        (4.2.1.a)
\dot{x}_2 = u        (4.2.1.b)

If the parameter θ were known, we could apply Theorem 3.5.1 to derive the expression for the stabilizing function for x₂,

\alpha_1(x_1, \theta) = -c_1 x_1 - \theta\varphi(x_1)        (4.2.2)

Let us consider the following control Lyapunov function

V_a(x, \theta) = \frac{1}{2}x_1^2 + \frac{1}{2}\left[ x_2 - \alpha_1(x_1, \theta) \right]^2        (4.2.3)

The derivative of the above Lyapunov function along the solutions of (4.2.1) is given by

\dot{V}_a = x_1\left[ x_2 + \theta\varphi(x_1) \right] + \left[ x_2 - \alpha_1(x_1, \theta) \right]\left( u - \frac{\partial \alpha_1}{\partial x_1}\left[ x_2 + \theta\varphi(x_1) \right] \right)        (4.2.4)

Now we choose a control input which ensures the negative definiteness of \dot{V}_a; this can be achieved by the control input

u = -c_2\left[ x_2 - \alpha_1(x_1, \theta) \right] - x_1 + \frac{\partial \alpha_1}{\partial x_1}\left[ x_2 + \theta\varphi(x_1) \right]        (4.2.5)

which ensures the following form of \dot{V}_a:

\dot{V}_a = -c_1 x_1^2 - c_2\left[ x_2 - \alpha_1(x_1, \theta) \right]^2        (4.2.6)

Since θ is unknown and appears one equation before the control does, it is not possible to apply the design methodology of integrator backstepping directly in this case, because of the dependence of the stabilizing function α₁(x₁, θ) = -c₁x₁ - θφ(x₁) on the unknown parameter. Hence it is impossible to continue the procedure as it stands; however, the ideas of integrator backstepping can still be utilized.

Step 1. If we consider the subsystem (4.2.1.a) with x₂ as control input, an adaptive controller for this subsystem can be designed by using the adaptive control law of Section 4.1 (cf. (4.1.11)) as follows:

\alpha_1(x_1, \hat{\theta}_1) = -c_1 z_1 - \hat{\theta}_1\varphi(x_1)        (4.2.7)
\dot{\hat{\theta}}_1 = \gamma_1 z_1\varphi(x_1)        (4.2.8)

In the above equations we have replaced the parameter θ with the estimate θ̂₁, which is the estimate generated in this step of the design. Later we will see that another estimate of θ is generated in the next step of the design. With (4.2.7) and the new error variable z₂ = x₂ - α₁, the ż₁-equation becomes

\dot{z}_1 = -c_1 z_1 + z_2 + \tilde{\theta}_1\varphi(x_1)        (4.2.9)

The derivative of the following Lyapunov function,

V_1(z_1, \tilde{\theta}_1) = \frac{1}{2}z_1^2 + \frac{1}{2\gamma_1}\tilde{\theta}_1^2        (4.2.10)

along the solutions of (4.2.9) is

\dot{V}_1 = z_1\dot{z}_1 - \frac{1}{\gamma_1}\tilde{\theta}_1\dot{\hat{\theta}}_1 = z_1 z_2 - c_1 z_1^2 + \tilde{\theta}_1\left( z_1\varphi(x_1) - \frac{1}{\gamma_1}\dot{\hat{\theta}}_1 \right) = z_1 z_2 - c_1 z_1^2        (4.2.11)

Step 2. The derivative of z₂ is now expressed as

\dot{z}_2 = \dot{x}_2 - \frac{\partial \alpha_1}{\partial x_1}\dot{x}_1 - \frac{\partial \alpha_1}{\partial \hat{\theta}_1}\dot{\hat{\theta}}_1

Substituting ẋ₁ from equation (4.2.1.a) and the update law from (4.2.8), the derivative of z₂ becomes

\dot{z}_2 = u - \frac{\partial \alpha_1}{\partial x_1}\left[ x_2 + \theta\varphi(x_1) \right] - \frac{\partial \alpha_1}{\partial \hat{\theta}_1}\gamma_1 z_1\varphi(x_1)        (4.2.12)

In this equation the control input u appears and is at our disposal. At this point we need to construct a control Lyapunov function and design the control input u so as to make its derivative non-positive. We construct a Lyapunov function by augmenting V₁ with a quadratic term in z₂,

V_2 = V_1 + \frac{1}{2}z_2^2        (4.2.13)

and the derivative of V₂ along the solutions of (4.2.9) and (4.2.12) is

\dot{V}_2 = \dot{V}_1 + z_2\dot{z}_2 = -c_1 z_1^2 + z_2\left[ z_1 + u - \frac{\partial \alpha_1}{\partial x_1}\left( x_2 + \theta\varphi(x_1) \right) - \frac{\partial \alpha_1}{\partial \hat{\theta}_1}\gamma_1 z_1\varphi(x_1) \right]        (4.2.14)

Now it is possible to cancel the sign-indefinite terms present in (4.2.14) by the control input u. To deal with the term containing the unknown parameter θ, we first try to employ the existing estimate θ̂₁ designed in the first step:

u = -z_1 - c_2 z_2 + \frac{\partial \alpha_1}{\partial x_1}\left( x_2 + \hat{\theta}_1\varphi(x_1) \right) + \frac{\partial \alpha_1}{\partial \hat{\theta}_1}\gamma_1 z_1\varphi(x_1)        (4.2.15)

where c₂ is a positive design constant. The resulting derivative of V₂ is

\dot{V}_2 = -c_1 z_1^2 - c_2 z_2^2 - \frac{\partial \alpha_1}{\partial x_1}\varphi(x_1)\tilde{\theta}_1 z_2        (4.2.16)

But it is clearly revealed from the last equation that we are not able to cancel the term containing θ̃₁. To get rid of this design problem we replace θ̂₁ in the expression of the control input by θ̂₂, a new estimate of θ:

u = -z_1 - c_2 z_2 + \frac{\partial \alpha_1}{\partial x_1}\left( x_2 + \hat{\theta}_2\varphi(x_1) \right) + \frac{\partial \alpha_1}{\partial \hat{\theta}_1}\gamma_1 z_1\varphi(x_1)        (4.2.17)

With this choice of control input u, ż₂ becomes

\dot{z}_2 = -z_1 - c_2 z_2 - \frac{\partial \alpha_1}{\partial x_1}\varphi(x_1)\tilde{\theta}_2        (4.2.18)

To deal with the new estimate θ̂₂ and find a suitable update law for it, we augment the existing Lyapunov function V₂ with a quadratic term in the parameter error θ̃₂ = θ - θ̂₂:

V_f(z_1, z_2, \tilde{\theta}_1, \tilde{\theta}_2) = V_2 + \frac{1}{2\gamma_2}\tilde{\theta}_2^2 = \frac{1}{2}\left( z_1^2 + z_2^2 \right) + \frac{1}{2\gamma_1}\tilde{\theta}_1^2 + \frac{1}{2\gamma_2}\tilde{\theta}_2^2        (4.2.19)

The derivative of V_f is

\dot{V}_f = \dot{V}_1 + z_2\dot{z}_2 - \frac{1}{\gamma_2}\tilde{\theta}_2\dot{\hat{\theta}}_2 = -c_1 z_1^2 - c_2 z_2^2 - \tilde{\theta}_2\left( \frac{\partial \alpha_1}{\partial x_1}\varphi(x_1)z_2 + \frac{1}{\gamma_2}\dot{\hat{\theta}}_2 \right)        (4.2.20)

Now we can eliminate the sign-indefinite term of the last equation by selecting the update law for θ̂₂

\dot{\hat{\theta}}_2 = -\gamma_2\frac{\partial \alpha_1}{\partial x_1}\varphi(x_1)z_2        (4.2.21)

which yields

\dot{V}_f = -c_1 z_1^2 - c_2 z_2^2        (4.2.22)

From the LaSalle-Yoshizawa theorem (Appendix 1), the nonpositive derivative of the Lyapunov function V_f ensures the global uniform boundedness of z₁, z₂, θ̂₁, θ̂₂ and the asymptotic regulation of the error variables z₁ and z₂, i.e. z₁, z₂ → 0. Since z₁ is bounded, the boundedness of x₁ follows immediately. The boundedness of x₂ follows from the boundedness of z₂ and α₁ in (4.2.7) and from the fact that x₂ = z₂ + α₁. From equation (4.2.17) it is clear that the control input u is also bounded. We now discuss a necessary assumption and an important theorem of adaptive backstepping, which allow us to apply the adaptive backstepping control procedure to uncertain nonlinear systems in a systematic manner. (The preceding analysis of the adaptive backstepping control algorithm is mainly adapted from Nonlinear and Adaptive Control Design by M. Krstic, I. Kanellakopoulos and P. V. Kokotovic.)
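The two-estimate design of this section can be checked in simulation. The sketch below implements (4.2.7), (4.2.8), (4.2.17) and (4.2.21) for the system (4.2.1) with the hypothetical choice φ(x₁) = x₁²; the true parameter and all gains are illustrative assumptions.

```python
# Two-estimate adaptive backstepping for x1dot = x2 + theta*phi(x1), x2dot = u.
phi  = lambda x1: x1**2       # hypothetical nonlinearity
dphi = lambda x1: 2*x1
theta, c1, c2, g1, g2 = 1.0, 2.0, 2.0, 2.0, 2.0

x1, x2, th1, th2, dt = 0.5, 0.0, 0.0, 0.0, 1e-4
for _ in range(int(30/dt)):
    z1 = x1
    a1 = -c1*z1 - th1*phi(x1)                        # stabilizing function (4.2.7)
    z2 = x2 - a1
    da1_dx1, da1_dth1 = -c1 - th1*dphi(x1), -phi(x1)
    th1d = g1*z1*phi(x1)                             # update law (4.2.8)
    u = -z1 - c2*z2 + da1_dx1*(x2 + th2*phi(x1)) + da1_dth1*th1d   # control (4.2.17)
    th2d = -g2*da1_dx1*phi(x1)*z2                    # update law (4.2.21)
    x1d, x2d = x2 + theta*phi(x1), u
    x1, x2 = x1 + dt*x1d, x2 + dt*x2d
    th1, th2 = th1 + dt*th1d, th2 + dt*th2d
print(f"x=({x1:.2e},{x2:.2e})  th1={th1:.3f}  th2={th2:.3f}")
```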

Assumption 4.2.1 Consider the system

\dot{x} = f(x) + F(x)\theta + g(x)u        (4.2.23)

where x ∈ ℝⁿ is the state, θ ∈ ℝᵖ is a vector of constant unknown parameters and u ∈ ℝ is the control input. There exists an adaptive controller

u = \alpha(x, \hat{\theta}), \qquad \dot{\hat{\theta}} = \Gamma\tau(x, \hat{\theta})        (4.2.24)

with parameter estimate θ̂ ∈ ℝᵖ, and a smooth function V(x, θ̂): ℝⁿ × ℝᵖ → ℝ₊ which is positive definite and radially unbounded in the variables (x, θ̂), such that for all (x, θ̂) ∈ ℝⁿ⁺ᵖ:

\frac{\partial V}{\partial x}(x, \hat{\theta})\left[ f(x) + F(x)\theta + g(x)\alpha(x, \hat{\theta}) \right] + \frac{\partial V}{\partial \hat{\theta}}(x, \hat{\theta})\,\Gamma\tau(x, \hat{\theta}) \le -W(x, \hat{\theta}) \le 0        (4.2.25)

where W: ℝⁿ⁺ᵖ → ℝ is positive semidefinite.

Under this assumption, the controller (4.2.24), applied to the system (4.2.23), guarantees global boundedness of x(t), θ̂(t) and, by the LaSalle-Yoshizawa theorem, asymptotic regulation of W(x(t), θ̂(t)). Adaptive backstepping allows us to achieve the same properties for the augmented system.

Theorem 4.2.1 (Adaptive Backstepping) Let the system (4.2.23) be augmented by an integrator,

\dot{x} = f(x) + F(x)\theta + g(x)z        (4.2.26.a)
\dot{z} = u        (4.2.26.b)

and consider for this system the dynamic feedback controller

u = -c\left[ z - \alpha(x, \hat{\theta}) \right] + \frac{\partial \alpha}{\partial x}\left[ f(x) + F(x)\hat{\vartheta} + g(x)z \right] + \frac{\partial \alpha}{\partial \hat{\theta}}\Gamma\tau(x, \hat{\theta}) - \frac{\partial V}{\partial x}g(x), \qquad c > 0        (4.2.27)

\dot{\hat{\theta}} = \Gamma\tau(x, \hat{\theta})        (4.2.28)

\dot{\hat{\vartheta}} = -\Gamma\left[ \frac{\partial \alpha}{\partial x}F(x) \right]^{\mathrm T}\left[ z - \alpha(x, \hat{\theta}) \right]        (4.2.29)

where ϑ̂ is a new estimate of θ and Γ = Γᵀ > 0 is the adaptation gain matrix. Under Assumption 4.2.1, this controller guarantees global boundedness of x(t), z(t), θ̂(t), ϑ̂(t) and asymptotic regulation of W(x(t), θ̂(t)) and z(t) - α(x(t), θ̂(t)). These properties can be established with the following augmented Lyapunov function:

V_a(x, z, \hat{\theta}, \hat{\vartheta}) = V(x, \hat{\theta}) + \frac{1}{2}\left[ z - \alpha(x, \hat{\theta}) \right]^2 + \frac{1}{2}(\theta - \hat{\vartheta})^{\mathrm T}\Gamma^{-1}(\theta - \hat{\vartheta})        (4.2.30)

Proof. With the error variable e = z - α(x, θ̂), (4.2.26) is rewritten as

\dot{x} = f(x) + F(x)\theta + g(x)\left[ e + \alpha(x, \hat{\theta}) \right]        (4.2.31.a)
\dot{e} = u - \frac{\partial \alpha}{\partial x}\left[ f(x) + F(x)\theta + g(x)\left( e + \alpha(x, \hat{\theta}) \right) \right] - \frac{\partial \alpha}{\partial \hat{\theta}}\Gamma\tau(x, \hat{\theta})        (4.2.31.b)

In the last equation the derivative of θ̂ was replaced by the update law (4.2.28). We now introduce the new parameter estimate ϑ̂ for the parameter θ. To find an update law for this new parameter estimate we use the augmented Lyapunov function

V_a(x, z, \hat{\theta}, \hat{\vartheta}) = V(x, \hat{\theta}) + \frac{1}{2}e^2 + \frac{1}{2}(\theta - \hat{\vartheta})^{\mathrm T}\Gamma^{-1}(\theta - \hat{\vartheta})        (4.2.32)

Using (4.2.25), we can establish that the derivative of V_a satisfies

\dot{V}_a = \frac{\partial V}{\partial x}\left[ f + F\theta + g\alpha + ge \right] + \frac{\partial V}{\partial \hat{\theta}}\Gamma\tau + e\left[ u - \frac{\partial \alpha}{\partial x}(f + F\theta + gz) - \frac{\partial \alpha}{\partial \hat{\theta}}\Gamma\tau \right] - (\theta - \hat{\vartheta})^{\mathrm T}\Gamma^{-1}\dot{\hat{\vartheta}}
\le -W(x, \hat{\theta}) + e\left[ \frac{\partial V}{\partial x}g + u - \frac{\partial \alpha}{\partial x}(f + F\hat{\vartheta} + gz) - \frac{\partial \alpha}{\partial \hat{\theta}}\Gamma\tau \right] - (\theta - \hat{\vartheta})^{\mathrm T}\left[ \Gamma^{-1}\dot{\hat{\vartheta}} + \left( \frac{\partial \alpha}{\partial x}F \right)^{\mathrm T}e \right]        (4.2.33)

Judiciously, we can formulate the update law for ϑ̂ to cancel the (θ - ϑ̂) term in the last equation, which results in the update law

\dot{\hat{\vartheta}} = -\Gamma\left( \frac{\partial \alpha}{\partial x}F \right)^{\mathrm T}e        (4.2.34)

and the control u is chosen to make the bracketed term associated with e equal to -ce, i.e.

u = -ce - \frac{\partial V}{\partial x}g + \frac{\partial \alpha}{\partial x}\left[ f + F\hat{\vartheta} + gz \right] + \frac{\partial \alpha}{\partial \hat{\theta}}\Gamma\tau        (4.2.35)

This results in the desired nonpositive derivative of V_a:

\dot{V}_a \le -W(x, \hat{\theta}) - ce^2 \le 0        (4.2.36)

From (4.2.30) and (4.2.36) we conclude that V(x, θ̂), ϑ̂ and e are bounded. From Assumption 4.2.1 we can then conclude that x(t) and θ̂(t) are also bounded. So z = e + α and u are also bounded. As all the signals mentioned above are bounded functions of time, the asymptotic regulation of W(x, θ̂) and e(t) can be established by using the LaSalle-Yoshizawa theorem.

4.3 Adaptive Backstepping Control of the Separately Excited DC Machine:
4.3.1 Formation of the Adaptive Backstepping Control Laws
From the Lagrangian modelling of the separately excited dc machine, after some mathematical manipulation, the equation of motion can be written as

J\ddot{\theta} + B\dot{\theta} = u(t)        (4.3.1)

where θ is the rotor angle, J the moment of inertia, B the friction coefficient and u(t) the control torque. Equation (4.3.1) is written in state-space form by assigning z₁ = θ and z₂ = θ̇:

\dot{z}_1 = z_2        (4.3.5.a)
J\dot{z}_2 = u - Bz_2        (4.3.5.b)

For the sake of simplicity and ease of computation we introduce a normalized unknown function h, which lumps the friction and load terms; with the introduction of this function, equation (4.3.5.b) becomes (in modified form)

J\dot{z}_2 = u + h        (4.3.9)

We start the design by considering z₂ as the (virtual) control variable. The first error variable e₁ is defined as

e_1 = \theta_{ref} - \theta        (4.3.10)

where θ_ref is the reference angle, a continuous and piecewise differentiable function of time (for the separately excited dc machine regulation problem it is equal to zero). Using the following Lyapunov function (a smooth, radially unbounded quadratic function of e₁),

V_1 = \frac{1}{2}e_1^2        (4.3.11)

and in order to make the derivative of V₁ negative definite, we choose the following virtual control law for the angular velocity:

z_{2,ref} = c_1 e_1 + \dot{\theta}_{ref}        (4.3.12)

Now we define the second error variable e₂ as

e_2 = z_{2,ref} - z_2        (4.3.13)

so that ė₁ = θ̇_ref - z₂ = e₂ - c₁e₁. The derivative of e₂ is computed as follows:

\dot{e}_2 = \dot{z}_{2,ref} - \dot{z}_2 = c_1\dot{e}_1 + \ddot{\theta}_{ref} - \frac{u + h}{J} = -c_1^2 e_1 + c_1 e_2 + \ddot{\theta}_{ref} - \frac{u + h}{J}        (4.3.14)

To analyse the stability of the error dynamics, we form the following Lyapunov function by augmenting V₁ with a quadratic term in e₂:

V_2 = \frac{1}{2}e_1^2 + \frac{1}{2}e_2^2        (4.3.15)

Its derivative is

\dot{V}_2 = e_1\dot{e}_1 + e_2\dot{e}_2 = -c_1 e_1^2 + e_2\left[ e_1 - c_1^2 e_1 + c_1 e_2 + \ddot{\theta}_{ref} - \frac{u + h}{J} \right]

To ensure a nonpositive derivative of V₂, we choose the following control input:

u = \hat{J}\left[ (1 - c_1^2)e_1 + (c_1 + c_2)e_2 + \ddot{\theta}_{ref} \right] - \hat{h}        (4.3.16)

In equation (4.3.16) the estimates Ĵ and ĥ stand for the unknown quantities J and h; if Ĵ = J and ĥ = h, the control (4.3.16) yields \dot{V}_2 = -c_1 e_1^2 - c_2 e_2^2. Although the expression of the control input has been determined, the design is still incomplete because the parameter adaptation laws have not been determined yet. For this purpose the parameter errors are defined as

\tilde{J} = J - \hat{J}        (4.3.17.a)
\tilde{h} = h - \hat{h}        (4.3.17.b)

Substituting the control input (4.3.16) into (4.3.14) results in the following error dynamics:

\dot{e}_2 = -e_1 - c_2 e_2 + \frac{\tilde{J}}{J}\left[ (1 - c_1^2)e_1 + (c_1 + c_2)e_2 + \ddot{\theta}_{ref} \right] - \frac{1}{J}\tilde{h}        (4.3.18)

To find the parameter update laws for Ĵ and ĥ, we augment the Lyapunov function V₂ with quadratic terms in the parameter errors:

V_a = \frac{1}{2}e_1^2 + \frac{1}{2}e_2^2 + \frac{1}{2\gamma_1 J}\tilde{J}^2 + \frac{1}{2\gamma_2 J}\tilde{h}^2        (4.3.19)

The time derivative of this Lyapunov function is calculated as

\dot{V}_a = e_1\dot{e}_1 + e_2\dot{e}_2 - \frac{1}{\gamma_1 J}\tilde{J}\dot{\hat{J}} - \frac{1}{\gamma_2 J}\tilde{h}\dot{\hat{h}}
= -c_1 e_1^2 - c_2 e_2^2 + \frac{\tilde{J}}{J}\left\{ \left[ (1 - c_1^2)e_1 + (c_1 + c_2)e_2 + \ddot{\theta}_{ref} \right]e_2 - \frac{1}{\gamma_1}\dot{\hat{J}} \right\} - \frac{\tilde{h}}{J}\left\{ e_2 + \frac{1}{\gamma_2}\dot{\hat{h}} \right\}

From the above equation the following parameter update laws are formed to cancel the bracketed sign-indefinite terms:

\frac{d\hat{J}}{dt} = \gamma_1\left[ (1 - c_1^2)e_1 + (c_1 + c_2)e_2 + \ddot{\theta}_{ref} \right]e_2        (4.3.20.a)
\frac{d\hat{h}}{dt} = -\gamma_2 e_2        (4.3.20.b)

These adaptation laws make the derivative of the Lyapunov function a negative definite function of the error variables, which shows the stability of the error dynamics of the system:

\dot{V}_a = -c_1 e_1^2 - c_2 e_2^2        (4.3.21)
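A minimal simulation of the control law (4.3.16) with the update laws (4.3.20.a)-(4.3.20.b) is sketched below, assuming for illustration that the plant is J θ̈ = u + h with constant unknown J and h and that the regulation objective is θ_ref = 0; all numerical values are hypothetical.

```python
# Sketch of the adaptive backstepping controller of Section 4.3.1 for a
# hypothetical plant J*theta_ddot = u + h with constant unknown J and h.
J_true, h_true = 0.05, -0.3
c1, c2, g1, g2 = 2.0, 2.0, 0.5, 0.5

z1, z2 = 0.5, 0.0          # initial rotor angle (rad) and speed
J_hat, h_hat = 0.02, 0.0
dt = 1e-4
for _ in range(int(20/dt)):
    e1 = -z1                             # e1 = theta_ref - theta, theta_ref = 0
    e2 = -c1*z1 - z2                     # e2 = z2_ref - z2,  z2_ref = c1*e1
    beta = (1 - c1**2)*e1 + (c1 + c2)*e2 # bracketed term in (4.3.16)
    u = J_hat*beta - h_hat               # control law (4.3.16)
    J_hat += dt * g1*beta*e2             # update law (4.3.20.a)
    h_hat += dt * (-g2*e2)               # update law (4.3.20.b)
    z2 += dt * (u + h_true)/J_true
    z1 += dt * z2
print(f"theta={z1:.2e}  J_hat={J_hat:.3f}  h_hat={h_hat:.3f}")
# theta is regulated to zero; the estimates stay bounded but need not converge.
```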


4.3.2 Robustification of the Adaptive Backstepping Control Laws:
Some of the basic assumptions made during the formulation of the adaptive backstepping control law are that the model of the plant is free from unmodelled dynamics, parameter drift, noise, etc. In actual systems these assumptions are violated. In a practical system the stability can be destroyed by bounded disturbances; also, for a high rate of adaptation the system may become unstable. So, in order to use a high parameter-adaptation gain without affecting the stability of the overall system, a modification of the above control law is necessary. Here, in this dissertation, continuous switching functions are proposed. The modified adaptation laws are given below:

\frac{d\hat{J}}{dt} = \gamma_1\left[ (1 - c_1^2)e_1 + (c_1 + c_2)e_2 + \ddot{\theta}_{ref} \right]e_2 - \gamma_1\sigma_{gs}\hat{J}        (4.3.22.a)
\frac{d\hat{h}}{dt} = -\gamma_2 e_2 - \gamma_2\sigma_{hs}\hat{h}        (4.3.22.b)

where σ_gs and σ_hs are the continuous switching functions, represented as

\sigma_{gs} = \begin{cases} 0 & \text{if } |\hat{J}| \le g_0 \\ \sigma_{g0}\left( \dfrac{|\hat{J}|}{g_0} - 1 \right) & \text{if } g_0 < |\hat{J}| \le 2g_0 \\ \sigma_{g0} & \text{if } |\hat{J}| > 2g_0 \end{cases}        (4.3.23.a)

\sigma_{hs} = \begin{cases} 0 & \text{if } |\hat{h}| \le h_0 \\ \sigma_{h0}\left( \dfrac{|\hat{h}|}{h_0} - 1 \right) & \text{if } h_0 < |\hat{h}| \le 2h_0 \\ \sigma_{h0} & \text{if } |\hat{h}| > 2h_0 \end{cases}        (4.3.23.b)

In the equation sets (4.3.23.a) and (4.3.23.b), σ_g0, σ_h0, g₀ and h₀ are design constants. The magnitudes of these constants are chosen at design time by trial and error.
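The continuous switching function of (4.3.23) can be implemented as a simple scalar function; the sketch below shows one possible implementation and how it enters the robustified update (4.3.22.a). Names and values are illustrative.

```python
# Continuous switching function used in (4.3.23): zero inside the nominal range,
# the full sigma0 outside twice that range, and linear in between.
def sigma_switch(est, bound, sigma0):
    a = abs(est)
    if a <= bound:
        return 0.0
    if a <= 2.0 * bound:
        return sigma0 * (a / bound - 1.0)
    return sigma0

# Robustified update, e.g. for J_hat (cf. (4.3.22.a)):
#   J_hat_dot = g1*beta*e2 - g1*sigma_switch(J_hat, g0, sigma_g0)*J_hat
print([sigma_switch(v, 1.0, 0.5) for v in (0.5, 1.5, 3.0)])   # -> [0.0, 0.25, 0.5]
```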

4.4 Simulation Results:
The model of the separately excited dc machine and the model of the controller were simulated in MATLAB Simulink. The simulation was executed with the model parameters M = 1.25, m = 0.1, l = 0.1, g = 0.1, and the controller parameters were selected as c₁ = 6, c₂ = 6, γ₁ = 9, γ₂ = 9, g₀ = 300, h₀ = 10 and σ_g0 = σ_h0 = 1. In Fig. 4.4.1 the structure of the controller is sketched. The initial rotor angle of the separately excited dc machine was selected as 1 radian (≈ 57°). Over this wide range of angle the controller converges the angle of the rotor to zero. In Fig. 4.4.2 the variation of the angle as a function of time is plotted.

Fig. 4.4.1 Controller structure

Fig. 4.4.2 Rotor angle in radians

To examine the stability of the controller against several types of disturbances, a band-limited white noise signal was applied as a disturbance at the input, and a sampled Gaussian noise signal was also added directly to the angle of the rotor. In Fig. 4.4.3 the rotor angle as a function of time is plotted, in Fig. 4.4.4 the disturbance signal added to the angular position is shown, Fig. 4.4.6 shows the variation of the parameter estimate ĝ and Fig. 4.4.7 shows the variation of the parameter estimate ĥ. It is clear from these results that the controller is able to maintain the angle of the rotor at its desired position in spite of the noise signals.

Fig. 4.4.4 Disturbance signal added to the angular position

Fig. 4.4.6 Estimation of the parameter g

Fig. 4.4.7 Estimation of the parameter h with time
4.5 Conclusion:
We have applied adaptive backstepping control to solve the nonlinear control problem of a separately excited dc machine. The control algorithm provides a stable control system in the presence of unknown parameters of the machine. The control algorithm also uses continuous switching functions to prevent the controller from becoming unstable due to a high rate of change of the parameter estimates during adaptation. The performance of the controller was tested against various types of noise/disturbance signals. The simulation results clearly reveal that the performance of the controller is very good in the presence of noise signals and also for a large initial deviation of the rotor from its desired position. The robust adaptive backstepping control method is capable of quickly achieving the control objectives and has an excellent stabilizing ability to maintain the angular position of the rotor in spite of bounded disturbances applied to the rotor.
Chapter 5
Discussion and Conclusions

5.1 Discussion and Conclusions:
In this dissertation a robust adaptive backstepping control action has been proposed to address the control problem of the separately excited dc machine. This is achieved by selecting the controller structure and the parameter gains properly. A proper controller structure attenuates the effect of model uncertainties arising from parametric uncertainties and uncertain nonlinearities. Thus, transient performance and steady-state tracking accuracy can be ensured in general. Proper selection of the controller parameters reduces the impact of model uncertainties and, as a result, enhances the steady-state performance of the system. Thus, asymptotic tracking (or zero final tracking error) can be achieved without using high-gain feedback in the presence of parametric uncertainties. The design is conceptually simple and is attractive in applications because of its high performance.


5.2 Scope of Future Research:

A robust adaptive block backstepping control algorithm can be employed to control the motion of the dc drive system. In the case of an interconnected drive mechanism, block backstepping can be employed to address the stabilization problem of the interconnected nonlinear system. After successful implementation of a block backstepping controller for an interconnected drive system, the adaptation scheme can be employed via tuning functions. Finally, the concept of robust adaptive backstepping can be introduced by incorporating the continuous switching action in the tuning functions.

6. Bibliography

[1] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design, New York: Wiley Interscience, 1995.

[2] H. K. Khalil, Nonlinear Systems, Prentice Hall, 1996.

[3] J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, 1991.

[4] J. Zhou and C. Wen, Adaptive Backstepping Control of Uncertain Systems, Springer-Verlag, Berlin Heidelberg, 2008.

[5] P. A. Ioannou and J. Sun, Robust Adaptive Control, Prentice Hall, Inc., 1996.

[6] O. Härkegård, Backstepping and Control Allocation with Applications to Flight Control, PhD dissertation, Department of Electrical Engineering, Linköping University, Sweden, 2003.

[7] C. Chen, Backstepping Control Design and Its Applications to Vehicle Lateral Control in Automated Highway Systems, PhD dissertation, Department of Mechanical Engineering, University of California at Berkeley, 1996.

[8] B. Yao, Adaptive Robust Control of Nonlinear Systems with Application to Control of Mechanical Systems, PhD dissertation, Department of Mechanical Engineering, University of California at Berkeley, 1996.

[9] A. Isidori, Nonlinear Control Systems, Second Edition, Berlin: Springer-Verlag, 1989.

[10] I. Kanellakopoulos and P. T. Krein, "Integral-action nonlinear control of induction motors," Proceedings of the 12th IFAC World Congress, pp. 251-254, Sydney, Australia, July 1993.

[11] Y.-C. Fu and J.-S. Lin, "Nonlinear backstepping control design of the Furuta pendulum," Proceedings of the 2005 IEEE Conference on Control Applications, Toronto, Canada, pp. 96-101, August 28-31, 2005.

[12] J. Tsinias, "Sufficient Lyapunov-like conditions for stabilization," Mathematics of Control, Signals and Systems, vol. 2, pp. 343-357, 1989.

[13] C. I. Byrnes, A. Isidori, and J. C. Willems, "Passivity, feedback equivalence, and the global stabilization of minimum phase nonlinear systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1228-1240, 1991.

[14] E. D. Sontag and H. J. Sussmann, "Further comments on the stabilizability of the angular velocity of a rigid body," Systems and Control Letters, vol. 12, pp. 437-442, 1988.

[15] R. Ortega, "Passivity properties for the stabilization of cascaded nonlinear systems," Automatica, vol. 27, pp. 423-424, 1991.

[16] A. Ohsumi and T. Izumikawa, "Nonlinear control of swing-up and stabilization of an inverted pendulum," Proceedings of the 34th IEEE Conference on Decision and Control, vol. 4, pp. 3873-3880, 1995.

[17] K. J. Åström and K. Furuta, "Swinging up a pendulum by energy control," Preprints of the 13th IFAC World Congress, pp. 37-42, 1996.

[18] K. Furuta, "Control of pendulum: From super mechano-system to human adaptive mechatronics," Proceedings of the 42nd IEEE Conference on Decision and Control, pp. 1498-1507, 2003.

[19] D. Angeli, "Almost global stabilization of the inverted pendulum via continuous state feedback," Automatica, vol. 37, no. 7, pp. 1103-1108, 2001.

[20] K. J. Åström and K. Furuta, "Swinging up a pendulum by energy control," Automatica, vol. 36, no. 2, pp. 287-295, 2000.

[21] C. C. Chung and J. Hauser, "Nonlinear control of a swinging pendulum," Automatica, vol. 36, pp. 287-295, 2000.

[22] A. L. Fradkov, "Swinging control of nonlinear oscillations," International Journal of Control, vol. 64, no. 6, pp. 1189-1202, 1996.

[23] R. Lozano, I. Fantoni, and D. J. Block, "Stabilization of the inverted pendulum around its homoclinic orbit," Systems & Control Letters, vol. 40, no. 5, pp. 197-204, 2000.

[24] A. S. Shiriaev, A. Pogromsky, H. Ludvigsen, and O. Egeland, "On global properties of passivity-based control of an inverted pendulum," International Journal of Robust and Nonlinear Control, vol. 10, no. 4, pp. 283-300, 2000.

[25] J. Aracil, F. Gordillo, and K. J. Åström, "A family of pumping-damping smooth strategies for swinging up a pendulum," Third IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control, Nagoya, Japan, 2006.

[26] R. Lozano and D. Dimogianopoulos, "Stabilization of a chain of integrators with nonlinear perturbations: Application to the inverted pendulum," Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, vol. 5, pp. 5191-5196, 2003.

[27] F. Gordillo and J. Aracil, "A new controller for the inverted pendulum on a cart," International Journal of Robust and Nonlinear Control, vol. 18, pp. 1607-1621, 2008.

[28] S. J. Huang and C. L. Huang, "Control of an inverted pendulum using grey prediction model," IEEE Transactions on Industry Applications, vol. 36, no. 2, pp. 452-458, 2000.

[29] F. Mazenc and L. Praly, "Adding integrations, saturated controls, and stabilization for feedforward systems," IEEE Transactions on Automatic Control, pp. 1559-1578, 1996.

[30] R. Olfati-Saber, "Fixed point controllers and stabilization of the cart-pole system and the rotating pendulum," Proceedings of the 38th IEEE Conference on Decision and Control, vol. 2, pp. 1174-1181, 1999.

[31] S. Mori, H. Nishihara, and K. Furuta, "Control of unstable mechanical system; control of pendulum," International Journal of Control, vol. 23, pp. 673-692, 1976.

[32] Q. Wei et al., "Nonlinear controller for an inverted pendulum having restricted travel," Automatica, vol. 31, no. 6, pp. 841-850, 1995.

[33] A. Ebrahim and G. V. Murphy, "Adaptive backstepping controller design of an inverted pendulum," Proceedings of the Thirty-Seventh Symposium on System Theory, pp. 172-174, 2005.

[34] A. Benaskeur and A. Desbiens, "Application of adaptive backstepping to the stabilization of the inverted pendulum," Proceedings of the 1998 IEEE Canadian Conference on Electrical and Computer Engineering, vol. 1, pp. 113-116, May 24-28, 1998.

[35] M. T. Alrifai, J. H. Chow, and D. A. Torrey, "A backstepping nonlinear control approach to switched reluctance motors," Proceedings of the 37th IEEE Conference on Decision and Control, pp. 4652-4657, Dec. 1998.

[36] H.-H. Lee, "A new approach for the anti-swing control of overhead cranes with high speed load hoisting," International Journal of Control, vol. 76, no. 15, pp. 1493-1499, 2003.

[37] H.-H. Lee, Y. Liang, and D. Segura, "A sliding-mode antiswing trajectory control for overhead cranes with high-speed load hoisting," ASME Journal of Dynamic Systems, Measurement, and Control, vol. 126, pp. 842-845, 2006.

[38] H.-H. Lee, "Modeling and control of a three-dimensional overhead crane," ASME Journal of Dynamic Systems, Measurement, and Control, vol. 120, pp. 471-476, 1998.

[39] B. Kiss, J. Levine, and P. Mullhaupt, "A simple output feedback PD controller for nonlinear cranes," Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, pp. 5097-5101, 2000.

[40] Y. Yang, E. Zergeroglu, W. Dixon, and D. Dawson, "Nonlinear coupling control laws for an overhead crane system," Proceedings of the 2001 IEEE Conference on Control Applications, Mexico City, Mexico, pp. 639-644, 2001.

[41] M. Fliess, J. Lévine, and P. Rouchon, "A simplified approach of crane control via a generalized state space model," Proceedings of the 30th IEEE Conference on Decision and Control, Brighton, pp. 736-741, 1991.

[42] J. Collado, R. Lozano, and I. Fantoni, "Control of convey-crane based on passivity," Proceedings of the American Control Conference, Chicago, Illinois, pp. 1260-1264, June 2000.

You might also like