
304-501 LINEAR SYSTEMS

Lecture 14: Linear Dynamical Systems


4.3 Linear Dynamical Systems

In this section, we discuss linear time-varying systems represented by the block diagram shown below.

[Block diagram: the input u(t) is scaled by B(t) and added to A(t)x(t) to form ẋ(t); ẋ(t) passes through the integrator ∫(·)dσ to produce the state x(t); the output y(t) is formed by a summing junction as C(t)x(t) plus the feedthrough term D(t)u(t).]

That is,

$\dot{x}(t) = A(t)x(t) + B(t)u(t)$
$y(t) = C(t)x(t) + D(t)u(t)$    (4.14)

where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $y(t) \in \mathbb{R}^p$, and $A$, $B$, $C$, $D$ have piecewise continuous entries.

4.3.1 Homogeneous Differential Equation

Consider the homogeneous differential equation:

$\dot{x}(t) = A(t)x(t), \quad x(t_0) = x_0.$    (4.15)

4.3.1.1 Time-Varying Case

Proposition: The set of all solutions of $\dot{x}(t) = A(t)x(t)$ forms an $n$-dimensional vector space over $\mathbb{R}$.

Proof: First, we show that the set of solutions forms a linear space over $\mathbb{R}$. Let $x_1(\cdot)$, $x_2(\cdot)$ be two distinct solutions of the homogeneous equation (with possibly distinct initial states). Then,


$\frac{d}{dt}\left[\alpha_1 x_1(t) + \alpha_2 x_2(t)\right] = \alpha_1 \frac{d}{dt}x_1(t) + \alpha_2 \frac{d}{dt}x_2(t) = \alpha_1 A(t)x_1(t) + \alpha_2 A(t)x_2(t) = A(t)\left[\alpha_1 x_1(t) + \alpha_2 x_2(t)\right], \quad \forall \alpha_1, \alpha_2 \in \mathbb{R}$

so any linear combination of solutions is again a solution.

Next, we show that the solution space has dimension $n$. Let $x_i(\cdot)$ be the solutions of $\dot{x}(t) = A(t)x(t)$ with $x_i(t_0) = e_i$, $i = 1, \ldots, n$ (the canonical unit vectors in $\mathbb{R}^n$). We shall show that these solutions are linearly independent and that every solution can be expressed as a linear combination of $\{x_i(\cdot)\}_{i=1}^n$.

We prove linear independence by contradiction. Suppose $\{x_i(\cdot)\}_{i=1}^n$ are linearly dependent. Then,

$\sum_{i=1}^{n} \alpha_i x_i(t) = 0$, for some $\alpha_i \in \mathbb{R}$ not all zero, $\forall t \in T$.

At $t = t_0$:

$\sum_{i=1}^{n} \alpha_i x_i(t_0) = 0 = \sum_{i=1}^{n} \alpha_i e_i$

which implies that $\{e_i\}_{i=1}^n$ are linearly dependent, clearly a contradiction. Hence $\{x_i(\cdot)\}_{i=1}^n$ are linearly independent. Now let $x(\cdot)$ be any solution of the homogeneous differential equation, with $x(t_0) = e$. Then $e \in \mathbb{R}^n$ can be written as a linear combination of the basis vectors $\{e_i\}_{i=1}^n$:

$e = \sum_{i=1}^{n} \alpha_i e_i, \quad \alpha_i \in \mathbb{R}.$

By linearity, $\sum_{i=1}^{n} \alpha_i x_i(t)$ is a solution of the homogeneous equation, with initial state $e = \sum_{i=1}^{n} \alpha_i x_i(t_0)$. By uniqueness of the solution, we conclude that $x(\cdot) = \sum_{i=1}^{n} \alpha_i x_i(\cdot)$.

Definition: Fundamental Matrix
A fundamental set of solutions of $\dot{x}(t) = A(t)x(t)$ is any set of solutions $\{x_i(\cdot)\}_{i=1}^n$ such that, for some $t \in T$, $\{x_i(t)\}_{i=1}^n$ forms a basis of $\mathbb{R}^n$.


An $n \times n$ matrix function of $t$, $\Psi(\cdot)$, is said to be a fundamental matrix for $\dot{x}(t) = A(t)x(t)$ if the $n$ columns of $\Psi(\cdot)$ consist of $n$ linearly independent solutions of $\dot{x}(t) = A(t)x(t)$, i.e.,

$\dot{\psi}_1(t) = A(t)\psi_1(t), \;\ldots,\; \dot{\psi}_n(t) = A(t)\psi_n(t)$

where $\{\psi_i(t_0)\}_{i=1}^n$ forms a basis for $(\mathbb{R}^n, \mathbb{R})$, and $\Psi(t) = [\psi_1(t) \;\cdots\; \psi_n(t)]$.

Example: Consider the system

$\dot{x}(t) = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix} x(t)$

That is, $\dot{x}_1(t) = 0$, $\dot{x}_2(t) = t\,x_1(t)$. The solution is:

$x_1(t) = x_1(t_0)$, and $x_2(t) = \frac{1}{2}t^2 x_1(t_0) - \frac{1}{2}t_0^2 x_1(t_0) + x_2(t_0)$.

Let $t_0 = 0$ and

$\psi_1(0) = \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

Then

$\psi_1(t) = \begin{bmatrix} 1 \\ 1 + \frac{1}{2}t^2 \end{bmatrix}$.

Let

$\psi_2(0) = \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.

Then

$\psi_2(t) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Thus a fundamental matrix for the system is given by:

$\Psi(t) = \begin{bmatrix} 1 & 0 \\ 1 + \frac{1}{2}t^2 & 1 \end{bmatrix}$
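As a quick numerical sanity check of this example (a sketch of my own, not part of the notes; scipy and the tolerances below are arbitrary choices), one can integrate $\dot{x}(t) = A(t)x(t)$ from the two initial states $\psi_1(0)$ and $\psi_2(0)$ and compare the resulting columns with the closed-form $\Psi(t)$:

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # x' = A(t) x with A(t) = [[0, 0], [t, 0]]
    return [0.0, t * x[0]]

t_eval = np.linspace(0.0, 2.0, 5)
columns = []
for x0 in ([1.0, 1.0], [0.0, 1.0]):          # psi_1(0) and psi_2(0)
    sol = solve_ivp(f, (0.0, 2.0), x0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    columns.append(sol.y)

for k, tk in enumerate(t_eval):
    Psi_numeric = np.column_stack([columns[0][:, k], columns[1][:, k]])
    Psi_exact = np.array([[1.0, 0.0], [1.0 + 0.5 * tk**2, 1.0]])
    assert np.allclose(Psi_numeric, Psi_exact, atol=1e-6)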
Proposition: An $n \times n$ matrix function $\Psi(\cdot)$ is a fundamental matrix for $\dot{x}(t) = A(t)x(t)$ iff it satisfies

$\dot{\Psi}(t) = A(t)\Psi(t), \quad \Psi(t_0) = H$ nonsingular, for some $t_0 \in T$.


Proof: Follows from the previous proposition.

Proposition: If $u \in \mathbb{R}^n$ is any constant vector, then $x(t) = \Psi(t)u$ is a trajectory.


Proof: $\dot{x}(t) = \dot{\Psi}(t)u = A(t)\Psi(t)u = A(t)x(t)$.

Proposition: $N\{\Psi(t)\}$ is invariant $\forall t \in T$.

Proof: Suppose $u \in N\{\Psi(t_1)\}$ for some $t_1 \in T$. We show that $u \in N\{\Psi(t)\}$, $\forall t \in T$. Let $x(t) = \Psi(t)u$. Since $x(\cdot)$ is a trajectory and $x(t_1) = 0$, while the null trajectory $x(t) = 0$, $\forall t \in T$ also passes through the point $x(t_1) = 0$, by uniqueness $x(t) = 0$, $\forall t \in T$ is the only possible trajectory. Therefore,

$x(t) = \Psi(t)u = 0, \quad \forall t \in T$
$\Rightarrow u \in N\{\Psi(t)\}, \quad \forall t \in T.$
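The following small numerical illustration (my own sketch, with an arbitrarily chosen $A(t)$) makes the same point for a matrix solution $M(t)$ of $\dot{M}(t) = A(t)M(t)$ whose columns are not independent: a null vector of $M(0)$ remains a null vector of $M(t)$ for all $t$.

import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # arbitrary illustrative A(t)
    return np.array([[0.0, 1.0], [-1.0, 0.5 * t]])

M0 = np.array([[1.0, 2.0], [0.5, 1.0]])        # rank 1; N{M(0)} is spanned by u = (2, -1)
u = np.array([2.0, -1.0])
assert np.allclose(M0 @ u, 0.0)

def matrix_ode(t, m):
    # flatten the 2x2 matrix equation dM/dt = A(t) M into a 4-vector ODE
    return (A(t) @ m.reshape(2, 2)).ravel()

for t1 in (0.5, 1.0, 2.0):
    M_t1 = solve_ivp(matrix_ode, (0.0, t1), M0.ravel(),
                     rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)
    assert np.allclose(M_t1 @ u, 0.0, atol=1e-6)   # u stays in N{M(t)}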

Note that for every initial condition, there exists exactly one state trajectory. Consider the fundamental matrix $\Psi(t) = [\psi_1(t) \;\cdots\; \psi_n(t)]$.

Theorem: The vectors $\{\psi_i(t)\}_{i=1}^n$ form a basis of $\mathbb{R}^n$ at some $t_0 \in T$ iff $\{\psi_i(t)\}_{i=1}^n$ form a basis of $\mathbb{R}^n$ at all $t \in T$.

Proof: $\{\psi_i(t)\}_{i=1}^n$ forms a basis at some $t_0 \in T$ $\Leftrightarrow$ $N\{\Psi(t_0)\} = \{0\}$ $\Leftrightarrow$ $N\{\Psi(t)\} = \{0\}$ for all $t \in T$ (by the above proposition) $\Leftrightarrow$ $\{\psi_i(t)\}_{i=1}^n$ forms a basis for all $t \in T$.

Corollary:
(i) $\Psi^{-1}(t)$ exists $\forall t \in T$;
(ii) $N\{\Psi(t)\} = \{0\}$, $\forall t \in T$ $\Leftrightarrow$ $\Psi^{-1}(t)$ exists, $\forall t \in T$.


Definition: State Transition Matrix
The state transition matrix $\Phi(t, t_0)$ associated with the system $\dot{x}(t) = A(t)x(t)$ is the matrix-valued function of $t$, $t_0$ which:
(1) solves the matrix differential equation $\frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0)$, $\forall t \in T$, $t_0 \in T$,
(2) satisfies $\Phi(t_0, t_0) = I$, $\forall t_0 \in T$.

Proposition: Let $\Psi(\cdot)$ be a fundamental matrix satisfying $\Psi(t_0) = I$. Then $\Phi(t, t_0) = \Psi(t)$.
Proof: Follows directly from the above definition.

Proposition: Let $\Psi(\cdot)$ be any fundamental matrix of $\dot{x}(t) = A(t)x(t)$. Then $\Phi(t, t_0) = \Psi(t)\Psi^{-1}(t_0)$, $\forall t, t_0 \in T$.
Proof: We have $\Phi(t_0, t_0) = \Psi(t_0)\Psi^{-1}(t_0) = I$, $\forall t_0 \in T$. Moreover,

$\frac{\partial}{\partial t}\Phi(t, t_0) = \dot{\Psi}(t)\Psi^{-1}(t_0) = A(t)\Psi(t)\Psi^{-1}(t_0) = A(t)\Phi(t, t_0).$

Proposition: The solution of $\dot{x}(t) = A(t)x(t)$, $x(t_0) = x_0$ is given by $x(t) = \Phi(t, t_0)x_0$, $\forall t \in T$.
Proof: The initial state is $x(t_0) = \Phi(t_0, t_0)x_0 = x_0$. Next, we check that $x(t)$ satisfies the differential equation: $\dot{x}(t) = \frac{\partial}{\partial t}\Phi(t, t_0)x_0 = A(t)\Phi(t, t_0)x_0 = A(t)x(t)$, $\forall t \in T$.

Note: The function $s(t; t_0, x_0) := \Phi(t, t_0)x_0$ is called the state transition function.
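As an illustration (a sketch of my own, not from the notes; the $A(t)$ below is chosen arbitrarily and scipy does the integration), $\Phi(t, t_0)$ can be computed column by column by integrating $\dot{x}(t) = A(t)x(t)$ from the canonical initial states $e_i$ at $t_0$; the solution $x(t) = \Phi(t, t_0)x_0$ and the composition property $\Phi(t_2, t_0) = \Phi(t_2, t_1)\Phi(t_1, t_0)$ (property (2) below) can then be checked numerically:

import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # arbitrary illustrative time-varying A(t)
    return np.array([[0.0, 1.0], [-2.0, -0.3 * np.cos(t)]])

def transition_matrix(t, t0, n=2):
    # column i of Phi(t, t0) is the solution at time t starting from e_i at time t0
    cols = []
    for i in range(n):
        sol = solve_ivp(lambda s, x: A(s) @ x, (t0, t), np.eye(n)[:, i],
                        rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

t0, t1, t2 = 0.0, 0.7, 1.5
x0 = np.array([1.0, -1.0])

x_t2 = transition_matrix(t2, t0) @ x0          # solution of x' = A(t)x, x(t0) = x0, at t = t2

# composition property: Phi(t2, t0) = Phi(t2, t1) Phi(t1, t0)
assert np.allclose(transition_matrix(t2, t0),
                   transition_matrix(t2, t1) @ transition_matrix(t1, t0), atol=1e-6)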


Properties of the state transition matrix:
(1) $\Phi(t, t) = I$, $\forall t \in T$,
(2) $\Phi(t, t_0) = \Phi(t, t_1)\Phi(t_1, t_0)$,

[Figure: the composition property illustrated along a trajectory in the $(x_1, x_2)$ plane versus $t$ — the state moves from $x(t_0) = \Phi(t_0, t_0)x_0 = x_0$ to $x(t_1) = \Phi(t_1, t_0)x_0$, and then on to $x(t) = \Phi(t, t_1)x(t_1) = \Phi(t, t_1)\Phi(t_1, t_0)x_0$.]
(3) $\Phi^{-1}(t, t_0) = \left[\Psi(t)\Psi^{-1}(t_0)\right]^{-1} = \Psi(t_0)\Psi^{-1}(t) = \Phi(t_0, t)$,

(4) $\frac{d}{dt}\Phi(t_0, t) = -\Phi(t_0, t)A(t)$,

(5) If $\Phi(t, t_0)$ is the state transition matrix of $\dot{x}(t) = A(t)x(t)$, then $\Phi^{T}(t_0, t)$ is the state transition matrix of the system $\dot{z}(t) = -A^{T}(t)z(t)$,

(6) $\det \Phi(t, t_0) = e^{\int_{t_0}^{t} \mathrm{Tr}\{A(\sigma)\}\, d\sigma}$, where $\mathrm{Tr}\{A\}$ denotes the trace of the matrix $A$; in particular, $\det \Phi(t, t_0)$ is nonzero whenever $\int_{t_0}^{t} \mathrm{Tr}\{A(\sigma)\}\, d\sigma$ is finite (a numerical check appears in the sketch after property (7)),

(7) The Peano-Baker series for the solution (transition matrix) of

$\frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0), \quad \Phi(t_0, t_0) = I$

is given by:

$\Phi(t, t_0) = I + \int_{t_0}^{t} A(\sigma_1)\, d\sigma_1 + \int_{t_0}^{t} A(\sigma_1)\int_{t_0}^{\sigma_1} A(\sigma_2)\, d\sigma_2\, d\sigma_1 + \int_{t_0}^{t} A(\sigma_1)\int_{t_0}^{\sigma_1} A(\sigma_2)\int_{t_0}^{\sigma_2} A(\sigma_3)\, d\sigma_3\, d\sigma_2\, d\sigma_1 + \cdots$
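The sketch below (my own, not from the notes; $A(t)$ is chosen arbitrarily) evaluates a truncated Peano-Baker series by nested trapezoidal quadrature. This particular $A(t)$ commutes with its own integral, so the result can be compared with the closed form $e^{\int_0^t A(\sigma)\,d\sigma}$ discussed next; since $\mathrm{Tr}\,A(t) = 0$ here, property (6) predicts $\det \Phi(t, 0) = 1$, which is also checked.

import numpy as np

def A(t):
    # arbitrary illustrative A(t); it commutes with its own integral
    return np.array([[0.0, t], [-t, 0.0]])

def peano_baker(t0, t, terms=6, steps=400):
    s = np.linspace(t0, t, steps + 1)
    h = (t - t0) / steps
    prev = np.array([np.eye(2) for _ in s])    # M_0(s_j) = I
    total = np.eye(2)                          # running sum I + M_1(t) + M_2(t) + ...
    for _ in range(terms):
        cur = np.zeros_like(prev)
        for j in range(1, len(s)):
            # M_k(s_j) = integral from t0 to s_j of A(sigma) M_{k-1}(sigma) d sigma (trapezoid rule)
            cur[j] = cur[j - 1] + 0.5 * h * (A(s[j - 1]) @ prev[j - 1] + A(s[j]) @ prev[j])
        total += cur[-1]
        prev = cur
    return total

t = 1.0
Phi_t = peano_baker(0.0, t)

# here int_0^t A(sigma) d sigma = [[0, theta], [-theta, 0]] with theta = t^2/2, so Phi(t, 0) is a rotation
theta = 0.5 * t**2
expected = np.array([[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]])
assert np.allclose(Phi_t, expected, atol=1e-4)

# property (6): Tr A(t) = 0 here, so det Phi(t, 0) = exp(0) = 1
assert np.isclose(np.linalg.det(Phi_t), 1.0, atol=1e-4)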


Note that if $A(t)$ and $\int_{t_0}^{t} A(\sigma)\, d\sigma$ commute, i.e., if

$A(t)\int_{t_0}^{t} A(\sigma)\, d\sigma = \left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right) A(t),$

then each iterated integral in the Peano-Baker series collapses into a power of $\int_{t_0}^{t} A(\sigma)\, d\sigma$. For example, commutativity gives $\frac{d}{d\sigma_1}\left(\int_{t_0}^{\sigma_1} A(\sigma)\, d\sigma\right)^2 = 2A(\sigma_1)\int_{t_0}^{\sigma_1} A(\sigma)\, d\sigma$, so the double-integral term equals $\frac{1}{2}\left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right)^2$, and in general the $k$-fold term equals $\frac{1}{k!}\left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right)^k$. Hence

$\Phi(t, t_0) = I + \int_{t_0}^{t} A(\sigma)\, d\sigma + \frac{1}{2!}\left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right)^2 + \frac{1}{3!}\left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right)^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!}\left(\int_{t_0}^{t} A(\sigma)\, d\sigma\right)^k = e^{\int_{t_0}^{t} A(\sigma)\, d\sigma}$

and we can check that this transition matrix satisfies the differential equation

$\frac{d}{dt}\, e^{\int_{t_0}^{t} A(\sigma)\, d\sigma} = e^{\int_{t_0}^{t} A(\sigma)\, d\sigma} A(t) = A(t)\, e^{\int_{t_0}^{t} A(\sigma)\, d\sigma}.$

The commutativity property holds, for example, if $A(t)$ is a diagonal matrix or a constant matrix. Note that in general,

$e^{A} e^{B} \neq e^{A+B}, \quad A, B \in \mathbb{R}^{n \times n},$

unless the matrices commute, i.e., $AB = BA$. (This can be shown by multiplying the series expansions of $e^{A}$ and $e^{B}$.)
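A quick numerical illustration of this fact (my own sketch, using scipy.linalg.expm; the matrices below are chosen arbitrarily):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(expm(A) @ expm(B), expm(A + B))    # here AB != BA

C = np.diag([1.0, 2.0])
D = np.diag([3.0, -1.0])                                   # diagonal matrices commute
assert np.allclose(expm(C) @ expm(D), expm(C + D))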
4.3.1.2 Time-Invariant Case
Consider the time-invariant differential equation:

$\dot{x}(t) = Ax(t), \quad x(t_0) = x_0.$    (4.16)

In this case,

$\Phi(t, t_0) = \Phi(t - t_0, 0) =: \Phi(t - t_0)$

and

$\dot{\Phi}(t) = A\Phi(t), \quad \Phi(0) = I, \quad \forall t \in T.$
Using the unilateral Laplace transform, we obtain:


$s\hat{\Phi}(s) - \Phi(0) = A\hat{\Phi}(s) \;\Rightarrow\; \hat{\Phi}(s) = (sI - A)^{-1}\Phi(0) = (sI - A)^{-1}$, with ROC: $\mathrm{Re}\{s\} > \max_{i=1,\ldots,n} \mathrm{Re}[\lambda_i(A)]$.

Thus, one can obtain the state transition matrix of an LTI state-space system by taking the inverse Laplace transform (entry by entry) of $(sI - A)^{-1}$.
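As an illustration of this recipe (my own sketch, using sympy for the symbolic work; the constant $A$ below is chosen arbitrarily), the resolvent $(sI - A)^{-1}$ is inverted entry by entry and the result is compared numerically with $e^{At}$:

import numpy as np
import sympy as sp
from scipy.linalg import expm

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])              # eigenvalues -1 and -2

resolvent = (s * sp.eye(2) - A).inv()
Phi = resolvent.applyfunc(lambda entry: sp.inverse_laplace_transform(entry, s, t))
# Phi is built from the modes exp(-t) and exp(-2*t) and should equal exp(A*t)

Phi_num = np.array(Phi.subs(t, 0.7).evalf().tolist(), dtype=float)
assert np.allclose(Phi_num, expm(0.7 * np.array([[0.0, 1.0], [-2.0, -3.0]])), atol=1e-8)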

Properties of the Matrix Exponential
(1) If $A \in \mathbb{R}^{n \times n}$, then the Peano-Baker series is

$\Phi(t, t_0) = I + A(t - t_0) + \frac{1}{2!}A^2(t - t_0)^2 + \cdots + \frac{1}{k!}A^k(t - t_0)^k + \cdots$

and the series converges uniformly and absolutely to $\Phi(t, t_0) = e^{A(t - t_0)}$ on every finite interval.

(2) $\frac{d}{dt} e^{At} = e^{At}A = Ae^{At}$,

(3) The solution of $\dot{x}(t) = Ax(t)$, $x(t_0) = x_0$ is $x(t) = e^{A(t - t_0)}x_0$. We have

$\Phi(t, t_0) = \Psi(t)\Psi^{-1}(t_0) = e^{At}e^{-At_0} = e^{A(t - t_0)}.$
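For completeness, a small numerical sketch (my own; $A$ and $x_0$ are chosen arbitrarily) that evaluates $x(t) = e^{A(t - t_0)}x_0$ with scipy.linalg.expm and cross-checks it against direct integration of the differential equation:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t0, t1 = 0.0, 2.0
x0 = np.array([1.0, 0.5])

x_expm = expm(A * (t1 - t0)) @ x0
x_ode = solve_ivp(lambda t, x: A @ x, (t0, t1), x0, rtol=1e-10, atol=1e-12).y[:, -1]
assert np.allclose(x_expm, x_ode, atol=1e-7)

# property (2): A commutes with its own exponential, so d/dt e^{At} = A e^{At} = e^{At} A
assert np.allclose(A @ expm(A * t1), expm(A * t1) @ A)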

