
Lecture notes on Controllability

By

N. Sukavanam
Department of Mathematics
IIT Roorkee

Controllability of finite dimensional control systems

Dynamical System
A system is a collection of objects that interact and produce various outputs in response to different inputs.
If a system changes with respect to time, it is termed a dynamical system.
Examples include electro-mechanical machines such as motor cars, aircraft, or spaceships; biological systems such as the human body; the economic structures of countries or regions; and population growth in a region.

A mathematical model of a dynamical system is represented by

    dx_i/dt = f_i(t, x_1, x_2, ..., x_n),  i = 1, 2, ..., n        (1)

where f_i : R^(n+1) → R, i = 1, 2, ..., n, are some nonlinear functions, and x_i, i = 1, 2, ..., n, are called the state variables of the dynamical system.

Example: An equation of the form

    dx/dt = x,  x(0) = x_0,  0 ≤ t ≤ T

is a linear dynamical system. The solution is given by

    x(t) = exp(t) x_0
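As a quick numerical sketch (not part of the notes themselves, with assumed values x_0 = 1 and T = 1), one can integrate this equation by forward Euler and compare against the closed-form solution:

```python
import math

# Sketch: integrate dx/dt = x with forward Euler and compare against
# the closed-form solution x(t) = exp(t) * x0 (assumed x0 = 1, T = 1).
x0, T, n = 1.0, 1.0, 100_000
dt = T / n
x = x0
for _ in range(n):
    x += dt * x            # Euler step for dx/dt = x
exact = math.exp(T) * x0   # closed-form solution
print(abs(x - exact))      # small discretization error
```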

Example: The system described by

    dx_1/dt = x_1 − 2 x_1 x_2
    dx_2/dt = −2 x_2 + x_1 x_2

is a nonlinear dynamical system. The above nonlinear system represents a mathematical model of prey-predator populations known as the Volterra-Lotka model.
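A minimal simulation sketch of the Volterra-Lotka model above, assuming the standard predator-prey signs (the prey x_1 grows on its own and is reduced by encounters; the predator x_2 decays on its own and grows on encounters):

```python
import numpy as np

# Sketch: RK4 integration of the Volterra-Lotka system above,
# starting from the assumed initial populations (1, 1).
def f(x):
    x1, x2 = x
    return np.array([x1 - 2.0 * x1 * x2,      # dx1/dt
                     -2.0 * x2 + x1 * x2])    # dx2/dt

def rk4_step(x, dt):
    k1 = f(x); k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0])
for _ in range(1000):           # integrate to t = 1 with dt = 0.001
    x = rk4_step(x, 0.001)
print(x)                        # populations remain positive
```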

Consider the equation

    x' = f(t, x),  t > 0;  x(0) = x_0        (2)

where f is assumed to be integrable with respect to time.

Theorem 1: Assume that the nonlinear function f is Lipschitz continuous in the 2nd variable and continuous in the 1st variable. Then equation (2) has a unique solution.

Control system
If a dynamical system is controlled by suitable inputs to obtain a desired output, then it is called a control system:

    dx_i/dt = f_i(t, x_1, x_2, ..., x_n, u_1, u_2, ..., u_n),  i = 1, 2, ..., n

Here x_i, i = 1, 2, ..., n, represent the state of the system and u_i, i = 1, 2, ..., n, are the control variables.

Example: Consider the control system

    dx/dt = x + u;  x(0) = x_0,  0 ≤ t ≤ T.

The solution is

    x(t) = exp(t) x_0 + ∫_0^t exp(t − s) u(s) ds

Now the control function u(t) (input) can be chosen suitably so that the solution (output) has a desired final position at t = T. If the desired value is x(T) = x_1, then u(t) can be chosen as

    u(t) = (exp(−(T − t)) / T) [x_1 − exp(T) x_0],

since substituting this u into the solution gives x(T) = exp(T) x_0 + (x_1 − exp(T) x_0) = x_1.
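A sketch check of the control above (with assumed values x_0 = 1, x_1 = 3, T = 2): integrate dx/dt = x + u numerically and verify the state reaches x_1 at t = T.

```python
import math

# Sketch: steer dx/dt = x + u from x(0) = x0 to x(T) = x1 using
# u(t) = exp(-(T - t)) * (x1 - exp(T) * x0) / T, integrated by RK4.
x0, x1, T = 1.0, 3.0, 2.0
u = lambda t: math.exp(-(T - t)) * (x1 - math.exp(T) * x0) / T
g = lambda t, x: x + u(t)       # right-hand side of dx/dt = x + u

n = 20_000
dt = T / n
x, t = x0, 0.0
for _ in range(n):
    k1 = g(t, x); k2 = g(t + dt / 2, x + dt / 2 * k1)
    k3 = g(t + dt / 2, x + dt / 2 * k2); k4 = g(t + dt, x + dt * k3)
    x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
print(x)   # close to the target x1 = 3.0
```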

Example: Suppose a particle is to be moved along a straight line, and let its distance from an initial point O be s(t) at time t. For simplicity assume that the particle is controlled only by a force u(t) per unit mass, and that the only factors of interest are the particle's position x_1(t) = s(t) and velocity x_2(t) = s'(t). Then the equations which describe the state of the object at time t are

    ẋ_1 = x_2
    ẋ_2 = u

or in matrix notation

    ẋ = Ax + Bu

where

    x = [x_1],   A = [0  1],   B = [0]
        [x_2]        [0  0]        [1].

In practice there will be a limit on the values of u for obvious reasons, and it may also be necessary to impose restrictions on the magnitudes of the velocity and acceleration. It may then be required to start from rest at O and reach some fixed point in the least possible time, or perhaps with minimum consumption of fuel. The mathematical problems are firstly to determine whether such objectives are achievable with the selected control variable, and if so, to find appropriate expressions for x_1 and x_2 and/or u as functions of time.
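The matrix form above can be exercised numerically. The sketch below (an illustration with an assumed constant unit force u = 1 and a simple Euler integrator) recovers the familiar kinematics s(t) = t²/2, v(t) = t from rest:

```python
import numpy as np

# Sketch: the particle model in state-space form, driven by u = 1 from
# rest at the origin; at T = 1 the state is close to (0.5, 1.0).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

x = np.zeros(2)          # start at rest at the origin
T, n = 1.0, 10_000
dt = T / n
for _ in range(n):
    x = x + dt * (A @ x + B[:, 0] * 1.0)   # Euler step with u = 1
print(x)   # approximately [0.5, 1.0]
```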

Solution of linear systems

Consider the system

    ẋ = Ax,  with  x(t_0) = x_0.        (3)

The solution is given by

    x(t) = exp(A(t − t_0)) x_0

We define the exponential matrix by

    exp(At) = I + tA + t^2 A^2 / 2! + t^3 A^3 / 3! + ...

The series on the right converges for all finite t and all n × n matrices A having finite elements. It is clear that exp(0) = I and d/dt exp(At) = A exp(At), so that it represents the solution of (3).
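The exponential series can be sketched directly in code. For the nilpotent matrix A of the particle example, A² = 0, so the series terminates and exp(At) = I + tA exactly:

```python
import numpy as np

# Sketch: truncated exponential series exp(At) = I + tA + t^2 A^2/2! + ...
def expm_series(A, t, terms=20):
    n = A.shape[0]
    out = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (t * A) / k     # accumulates t^k A^k / k!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # A^2 = 0, so exp(At) = I + tA
t = 2.0
print(expm_series(A, t))                 # equals [[1, 2], [0, 1]]
```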

State Transition Matrix

The solution of (3) subject to the above initial condition is often written in the form

    x(t) = Φ(t, t_0) x_0

where Φ(t, t_0) = exp(A(t − t_0)) is called the state transition matrix, since it relates the state at any initial time t_0 to the state at any other time t. The state transition matrix has the following properties:

(1) dΦ(t, t_0)/dt = A Φ(t, t_0)
(2) Φ(t, t) = I
(3) Φ(t_0, t) = Φ(t, t_0)^(−1)
(4) Φ(t, t_0) = Φ(t, t_1) Φ(t_1, t_0)
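The transition-matrix properties can be checked numerically for a constant A, where Φ(t, t_0) = exp(A(t − t_0)); the sketch below uses an assumed rotation-type matrix and a truncated exponential series:

```python
import numpy as np

# Sketch: numerical check of the transition-matrix properties for
# a constant A, with Phi(t, t0) = exp(A (t - t0)) via truncated series.
def expm_series(A, terms=40):
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # assumed example matrix
Phi = lambda t, s: expm_series(A * (t - s))

t0, t1, t = 0.0, 0.7, 1.5
print(np.allclose(Phi(t, t), np.eye(2)))                  # property (2)
print(np.allclose(Phi(t0, t), np.linalg.inv(Phi(t, t0))))  # property (3)
print(np.allclose(Phi(t, t0), Phi(t, t1) @ Phi(t1, t0)))   # property (4)
```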

Time varying systems

    ẋ = A(t) x(t),  x(t_0) = x_0        (4)

It is then easy to verify by direct differentiation that

    x(t) = Φ(t, t_0) x_0,  where  Φ(t, t_0) = X(t) X^(−1)(t_0)        (5)

is the solution of (4) with initial condition x(t_0) = x_0, and X(t) is the unique n × n matrix satisfying

    dX/dt = A(t) X(t),  X(0) = I.        (6)

In the time varying case we can no longer define the matrix exponential of A, but there is a result corresponding to the fact that exp(At) is nonsingular when A is constant.

Theorem 2: The matrix X(t) is nonsingular.

Proof: Define a matrix Y(t) as the solution of

    dY/dt = −Y A(t),  Y(0) = I

Such a matrix exists and is unique. Now

    d/dt (YX) = Ẏ X + Y Ẋ = −YAX + YAX = 0

So Y(t) X(t) is equal to a constant matrix, which must be the unit matrix because of the condition at t = 0. Hence X(t) is nonsingular and its inverse is in fact Y(t).

It is most interesting that although in general it is not possible to obtain an analytic expression for Φ(t, t_0), this matrix possesses precisely the same properties as those for the constant case. A further correspondence with the time invariant case is the following generalization of the constant matrix case.
Peano-Baker series: It can be seen that the RHS of (5) is equal to the power series known as the Peano-Baker series given below:

    Φ(t, t_0) = I + ∫_{t_0}^{t} A(s) ds + ... + ∫_{t_0}^{t} ∫_{t_0}^{t_1} ... ∫_{t_0}^{t_{n−1}} A(t_1) A(t_2) ... A(t_n) dt_n ... dt_1 + ...
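A scalar sketch of the Peano-Baker series: when A(t) = a(t) is scalar (so everything commutes), the n-th nested integral collapses to q^n / n! with q = ∫ a(s) ds from t_0 to t, and the series sums to exp(q) = Φ(t, t_0). The example below assumes a(t) = t on [0, 1]:

```python
import math
import numpy as np

# Sketch: partial sums of the (scalar) Peano-Baker series versus exp(q),
# with the assumed coefficient a(t) = t on [0, 1], so q = 1/2.
a = lambda s: s
t0, t = 0.0, 1.0
s = np.linspace(t0, t, 10_001)
q = float(np.sum(0.5 * (a(s[:-1]) + a(s[1:])) * np.diff(s)))  # trapezoid rule

partial = sum(q**k / math.factorial(k) for k in range(10))    # first 10 terms
print(partial, math.exp(q))   # both approximately 1.6487
```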

Finite dimensional linear control systems

Consider the linear control system defined as

    ẋ = Ax + Bu,  x(0) = x_0        (7)

where x(t) ∈ R^n and u(t) ∈ R^m for each t ∈ [0, T], A is an n × n matrix and B is an n × m matrix.

The solution of the system is written as

    x(t) = exp(At) x_0 + ∫_0^t exp(A(t − s)) B u(s) ds

Controllability: The system is said to be controllable in the time interval [0, T] if for any x_0 and x_1 in R^n one can find a control vector function u(t) such that the solution of (7) corresponding to u(t) satisfies x(0) = x_0 and x(T) = x_1.

Kalman Condition (1963)

1. The system (7) is controllable iff the matrix

       [B  AB  A^2 B ... A^(n−1) B]

   has rank n.

2. The system (7) is controllable iff the controllability Grammian matrix

       U = ∫_0^T Φ(t, s) B B^T Φ^T(t, s) ds

   is nonsingular.

Example: Consider the control system

    dx/dt = x + u;  x(t_0) = x_0,  t_0 ≤ t ≤ T

The solution is

    x(t) = exp(t − t_0) x_0 + ∫_{t_0}^{t} exp(t − s) u(s) ds

Now the control function u(t) (input) can be chosen suitably so that the solution (output) has a desired final position at t = T. If the desired value is x(T) = x_1, then u(t) can be chosen as

    u(t) = (exp(−(T − t)) / (T − t_0)) [x_1 − exp(T − t_0) x_0],

since substituting this u into the solution gives x(T) = x_1.
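The Kalman rank condition is easy to evaluate numerically; the sketch below applies it to the particle example (n = 2, with the A and B given in its matrix form):

```python
import numpy as np

# Sketch: Kalman rank test for the particle example.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(ctrb)
print(rank)   # 2 = n, so the system is controllable
```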

Theorem 3: The constant system

    ẋ = Ax + Bu        (8)

is completely controllable if and only if the n × nm controllability matrix

    U = [B, AB, A^2 B, ..., A^(n−1) B]        (9)

has rank n.

Proof: Necessity: We suppose that (8) is completely controllable and wish to prove rank U = n. This is done by assuming rank U < n, which leads to a contradiction. For then there will exist a constant row vector q ≠ 0 such that

    qB = 0,  qAB = 0, ...,  qA^(n−1)B = 0        (10)

The solution of (8) subject to x(0) = x_0 is

    x(t) = exp(At) x_0 + ∫_0^t exp(A(t − τ)) B u(τ) dτ

Put t = t_1 and x(t_1) = 0; we get

    0 = exp(At_1) x_0 + ∫_0^{t_1} exp(A(t_1 − τ)) B u(τ) dτ

Since exp(At_1) is nonsingular,

    x_0 = −∫_0^{t_1} exp(−Aτ) B u(τ) dτ

Since exp(−Aτ) can be expressed as some polynomial r(A) in A having degree at most (n − 1), we get

    x_0 = −∫_0^{t_1} (r_0 I + r_1 A + ... + r_{n−1} A^(n−1)) B u(τ) dτ

Multiplying on the left by q and using (10), we get q x_0 = 0. Since (8) is completely controllable this must hold for any vector x_0, which implies q = 0, contradicting the assumption that q ≠ 0. Hence rank U = n.
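The step expressing exp(−Aτ) as a polynomial of degree at most n − 1 rests on the Cayley-Hamilton theorem. A quick numerical sketch of that theorem for an assumed 2 × 2 matrix:

```python
import numpy as np

# Sketch: Cayley-Hamilton for a 2x2 matrix says
# A^2 - tr(A) A + det(A) I = 0, so A^2 (and hence every higher power,
# and exp(-A*tau)) is a polynomial in I and A.
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # assumed example matrix
tr, det = np.trace(A), np.linalg.det(A)

print(np.allclose(A @ A, tr * A - det * np.eye(2)))   # True
```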

Sufficiency: We now assume rank U = n. Again, the solution of (8) subject to x(0) = x_0 is

    x(t) = exp(At) x_0 + ∫_0^t exp(A(t − τ)) B u(τ) dτ

Put t = t_1:

    exp(−At_1) x(t_1) = x_0 + ∫_0^{t_1} exp(−Aτ) B u(τ) dτ

or

    exp(−At_1) x(t_1) = x_0 + ∫_0^{t_1} r(A) B u(τ) dτ        (11)

where the coefficients of r will be functions of τ. Carrying out the integration in (11) will produce

    exp(−At_1) x(t_1) = x_0 + s_0 B + s_1 AB + ... + s_{n−1} A^(n−1) B        (12)

Since rank U = n, it follows that for any given x_0 it will be possible to choose the s_i (and hence, by implication, u(τ)) so that the right hand side of (12) is identically zero, giving x(t_1) = 0. This establishes that (8) is completely controllable, since the conditions of the definition are satisfied.


Theorem 3 gives a criterion for determining whether a constant linear system is completely controllable, but gives no help in determining a control vector which will carry out a required alteration of states. We now give an explicit expression for such a vector for both constant and time varying systems.

Theorem 4: The system S1:

    ẋ(t) = A(t) x(t) + B(t) u(t)        (13)

is completely controllable if and only if the n × n symmetric controllability matrix

    U(t_0, t_1) = ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) dτ        (14)

where Φ(t, t_0) = X(t) X^(−1)(t_0), is nonsingular. In this case the control

    u(t) = −B^T(t) Φ^T(t_0, t) U^(−1)(t_0, t_1) [x_0 − Φ(t_0, t_1) x_f]        (15)

defined on t_0 ≤ t ≤ t_1, transfers x(t_0) = x_0 to x(t_1) = x_f.

Proof of Sufficiency: If U(t_0, t_1) is assumed nonsingular then the control defined by (15) exists. It is then straightforward to show that S1 is completely controllable. The solution of (13) with initial condition x(t_0) = x_0 is

    x(t) = Φ(t, t_0) [x_0 + ∫_{t_0}^{t} Φ(t_0, τ) B(τ) u(τ) dτ]

Substituting (15) for u(τ) in the above equation, we get at t = t_1

    x(t_1) = Φ(t_1, t_0) Φ(t_0, t_1) x_f = x_f

Hence S1 is completely controllable.
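The Gramian-based control (15) can be computed numerically. The sketch below applies it to the particle example on [0, 1], with assumed endpoints: from rest at the origin to x_f = (1, 0). For this constant A, Φ(t, s) = exp(A(t − s)) = [[1, t − s], [0, 1]].

```python
import numpy as np

# Sketch: build the Gramian (14), form the control (15), and verify the
# transfer by simulating the controlled particle with RK4.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
t0, t1 = 0.0, 1.0
x0 = np.array([0.0, 0.0])
xf = np.array([1.0, 0.0])

Phi = lambda t, s: np.array([[1.0, t - s], [0.0, 1.0]])  # exp(A (t - s))

def integrand(tau):                      # Phi(t0,tau) B B^T Phi(t0,tau)^T
    M = Phi(t0, tau) @ B
    return M @ M.T

# Controllability matrix (14) by the trapezoid rule.
taus = np.linspace(t0, t1, 2001)
G = sum(0.5 * (integrand(p) + integrand(q)) * (q - p)
        for p, q in zip(taus[:-1], taus[1:]))

c = np.linalg.solve(G, x0 - Phi(t0, t1) @ xf)
u = lambda t: (-(B.T @ Phi(t0, t).T @ c)).item()   # control (15)

# Simulate dx/dt = A x + B u(t) with RK4 and check the transfer.
f = lambda t, x: A @ x + B[:, 0] * u(t)
n = 2_000
dt = (t1 - t0) / n
x, t = x0.copy(), t0
for _ in range(n):
    k1 = f(t, x); k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2); k4 = f(t + dt, x + dt * k3)
    x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
print(x)   # approximately [1.0, 0.0]
```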

Necessity: We need to show that if S1 is completely controllable then U(t_0, t_1) is nonsingular. First notice that if α is an arbitrary constant column vector, then from (14), since U is symmetric, we can construct the quadratic form

    α^T U(t_0, t_1) α = ∫_{t_0}^{t_1} θ^T(τ, t_0) θ(τ, t_0) dτ = ∫_{t_0}^{t_1} ||θ||_e^2 dτ        (16)

where θ(τ, t_0) = B^T(τ) Φ^T(t_0, τ) α, so that U(t_0, t_1) is positive semi-definite. Suppose there exists some α ≠ 0 such that α^T U(t_0, t_1) α = 0. Eqn. (16) then implies

    ∫_{t_0}^{t_1} ||θ||_e^2 dτ = 0

which implies that θ(τ, t_0) = 0 for t_0 ≤ τ ≤ t_1.

However, by assumption S1 is completely controllable, so there exists a control v(t), say, making x(t_1) = 0 if x(t_0) = α. Hence, from the solution of (13),

    0 = x(t_1) = Φ(t_1, t_0) [α + ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) v(τ) dτ]

It implies that

    α = −∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) v(τ) dτ

Therefore

    ||α||_e^2 = α^T α = −∫_{t_0}^{t_1} v^T(τ) B^T(τ) Φ^T(t_0, τ) α dτ = −∫_{t_0}^{t_1} v^T(τ) θ(τ, t_0) dτ = 0

which contradicts the assumption that α ≠ 0. Hence U(t_0, t_1) is positive definite and is therefore nonsingular.

The control function (15), which transfers the system from (x_0, t_0) to (x_f, t_1), requires calculation of the transition matrix and of the controllability matrix (14). This is not too difficult for constant linear systems. Of course there will in general be many other suitable control vectors which achieve the same result, but the expression (15) has an additional interesting property:

Theorem 5: If û(t) is any other control taking (x_0, t_0) to (x_f, t_1), then

    ∫_{t_0}^{t_1} ||û(τ)||_e^2 dτ > ∫_{t_0}^{t_1} ||u(τ)||_e^2 dτ

where u(τ) is given by (15), provided û ≠ u.

Proof: Since both u and û satisfy (13), we obtain after subtraction

    0 = ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) [u(τ) − û(τ)] dτ

Multiplication of this equation on the left by [x_0 − Φ(t_0, t_1) x_f]^T U^(−1)(t_0, t_1) and use of (15) gives

    ∫_{t_0}^{t_1} u^T(τ) [u(τ) − û(τ)] dτ = 0        (17)

Therefore

    ∫_{t_0}^{t_1} (û − u)^T (û − u) dτ = ∫_{t_0}^{t_1} (||û||_e^2 + ||u||_e^2 − 2 u^T û) dτ = ∫_{t_0}^{t_1} (||û||_e^2 − ||u||_e^2) dτ

using (17). Hence

    ∫_{t_0}^{t_1} ||û||_e^2 dτ = ∫_{t_0}^{t_1} ||u||_e^2 dτ + ∫_{t_0}^{t_1} ||û − u||_e^2 dτ > ∫_{t_0}^{t_1} ||u||_e^2 dτ

provided û ≠ u, as required.

This result can be interpreted as showing that the control (15) is optimal in the sense that it minimizes the integral

    ∫_{t_0}^{t_1} ||u(τ)||_e^2 dτ = ∫_{t_0}^{t_1} (u_1^2 + u_2^2 + ... + u_m^2) dτ

over the set of all controls which transfer (x_0, t_0) to (x_f, t_1), and this integral can be thought of as a measure of the control energy involved.
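The minimum-energy property can be illustrated numerically for the particle example on [0, 1] (transfer from rest at the origin to position 1, velocity 0). For that case the control given by (15) works out to u*(t) = 6 − 12t; the sketch below compares its energy against an alternative control built by adding a perturbation whose moments against 1 and t vanish, so it performs the same transfer but at higher energy:

```python
import numpy as np

# Sketch: energy of the Gramian control u_star versus an alternative
# control u_alt achieving the same transfer. The perturbation w satisfies
# ∫w dt = ∫t·w dt = 0 on [0,1], so it does not change the endpoint.
u_star = lambda t: 6.0 - 12.0 * t
w = lambda t: 6.0 * t**2 - 6.0 * t + 1.0
u_alt = lambda t: u_star(t) + 5.0 * w(t)

t = np.linspace(0.0, 1.0, 100_001)
dt = np.diff(t)
energy = lambda u: float(np.sum(0.5 * (u(t[:-1])**2 + u(t[1:])**2) * dt))
print(energy(u_star), energy(u_alt))   # about 12.0 and 17.0
```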


If a system is not completely controllable, it would be misleading to call it uncontrollable, since the implication of our definition is that for a non-completely controllable system there are merely certain final states which cannot be achieved by any choice of control. We can however modify the argument used in Theorem 5 to show how the attainable final states can be reached when S1 is not completely controllable.

Theorem 6: If, for a given x_f, there exists a constant column vector β such that

    U(t_0, t_1) β = x_0 − Φ(t_0, t_1) x_f        (18)

then the control

    u(t) = −B^T(t) Φ^T(t_0, t) β

transfers the system (13) from x(t_0) = x_0 to x(t_1) = x_f.

Proof: The solution of the controlled system with x(t_0) = x_0 is

    x(t) = Φ(t, t_0) [x_0 + ∫_{t_0}^{t} Φ(t_0, τ) B(τ) u(τ) dτ]

Put u(t) = −B^T(t) Φ^T(t_0, t) β:

    x(t) = Φ(t, t_0) [x_0 − ∫_{t_0}^{t} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) β dτ]

Put t = t_1:

    x(t_1) = Φ(t_1, t_0) [x_0 − ∫_{t_0}^{t_1} Φ(t_0, τ) B(τ) B^T(τ) Φ^T(t_0, τ) β dτ]

Using (14),

    x(t_1) = Φ(t_1, t_0) [x_0 − U(t_0, t_1) β]

Using (18), we get

    x(t_1) = Φ(t_1, t_0) Φ(t_0, t_1) x_f = x_f,

as required.

If the system S1 is completely controllable, U is nonsingular and the expression for u(t) in Theorem 6 reduces to (15). It can also be shown that the converse of this theorem is true, namely that only states x_f for which (18) holds can be reached.

Theorem 7: A given state x_0 = x(0) can be transferred to another state x_f provided both x_0 and x_f lie in the subspace spanned by the columns of

    U = [B, AB, A^2 B, ..., A^(n−1) B]
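A sketch of a non-completely-controllable system (with an assumed diagonal A and a B that never influences the second state): the controllability matrix is rank-deficient, and only targets in its column space (here, the x_1-axis) can be reached from the origin.

```python
import numpy as np

# Sketch: u enters only the first state and A is diagonal, so the
# second state evolves free of the control; rank [B, AB] = 1 < n = 2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])

ctrb = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(ctrb)
print(rank)   # 1 < n = 2: not completely controllable
```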
