
Chapter 6: Solutions to State Equations

We know how to solve scalar linear differential equations, but what about the state-space equations?

$\dot{x} = Ax + Bu, \qquad x(t_0) = x_0$

$y = Cx + Du$

Actually, we need only consider $\dot{x} = Ax + Bu$, because finding $y$ will then be a simple matter of matrix multiplication.
Brogan starts out with the scalar case, but we'll go directly to the vector equations. Recall the technique of the integrating factor in the solution of linear differential equations:

$\dot{x} - Ax = Bu$

Multiplying this equation by $e^{-At}$ will result in the left-hand side becoming a "perfect" differential:
$e^{-At}\left[\dot{x} - Ax = Bu\right]$

$e^{-At}\dot{x} - e^{-At}Ax = e^{-At}Bu$

$\dfrac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}Bu(t)$

Now multiply both sides by $dt$ and integrate over a dummy variable $\tau$ from $t_0$ to $t$:
$e^{-At}x(t) - e^{-At_0}x(t_0) = \int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau$

Move the initial condition term to the RHS and multiply through by $e^{At}$:


$x(t) = e^{At}e^{-At_0}x(t_0) + e^{At}\int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau$

$\phantom{x(t)} = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau$

Note that if the matrix B were a function of time, this would simply become

$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}B(\tau)u(\tau)\,d\tau$

(but if A were a function of time, we run into bigger problems).

If we wanted to compute $y(t)$, we would simply get:

$y(t) = Ce^{A(t-t_0)}x(t_0) + C\int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t)$

Again, C and D could be functions of time without complicating matters too much. If A is time-varying, however, we must be more careful in choosing a proper integrating factor; the matrix exponential will no longer work. Once again the matrix exponential $e^{At}$ proves to be central. We'll summarize the several ways to compute it shortly.
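Before moving on, here is a minimal numerical sketch (not from the notes; the system matrices and the unit-step input are just assumed for illustration) of the solution formula $x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau$, evaluated with scipy's matrix exponential and simple trapezoidal quadrature:

```python
# Hedged sketch: evaluates the zero-input + zero-state response numerically
# for an assumed 2-state example; not part of the original notes.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([[1.0], [0.0]])
u = lambda t: np.array([[1.0]])            # assumed unit-step input

def x_of_t(t, t0=0.0, n=400):
    taus = np.linspace(t0, t, n)
    # integrand e^{A(t-tau)} B u(tau) on a grid, then trapezoidal quadrature
    vals = np.array([(expm(A * (t - tau)) @ B @ u(tau)).ravel() for tau in taus])
    forced = trapezoid(vals, taus, axis=0).reshape(-1, 1)
    return expm(A * (t - t0)) @ x0 + forced

t = 2.0
x_t = x_of_t(t)
y_t = C @ x_t + D @ u(t)
print(x_t.ravel(), y_t.ravel())
```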

System modes and modal decompositions:


This is a very powerful representation of a system's
solutions, used widely in large-scale systems and
infinite-dimensional systems, which are often
represented by partial differential equations rather than
ordinary differential equations. It underscores the
importance of a basis of the state-space.
Let $\{\xi_i\}$ be the set of n linearly independent eigenvectors (including, if necessary, generalized eigenvectors) corresponding to the eigenvalues $\lambda_i$ of the constant matrix A. Because this set forms a basis of the state-space, we can write
$x(t) = \sum_{i=1}^{n} q_i(t)\,\xi_i$

for some time-varying scalar coefficients $q_i(t)$, $i = 1, \ldots, n$, where the $\xi_i$ denote the eigenvectors.

We can easily do the same for the term $B(t)u(t)$:

$B(t)u(t) = \sum_{i=1}^{n} \beta_i(t)\,\xi_i$

where the $\beta_i(t)$ are scalar coefficients and the $\xi_i$ are the eigenvectors.

Substituting these expansions into the state equation $\dot{x} = Ax + Bu$ gives

$\sum_{i=1}^{n} \dot{q}_i(t)\,\xi_i = \sum_{i=1}^{n} q_i(t)\,A\xi_i + \sum_{i=1}^{n} \beta_i(t)\,\xi_i$

We have implicitly assumed in this step that we have n


linearly independent eigenvectors. If this is not the
case, relatively minor complications arise.
Re-arranging,

$\sum_{i=1}^{n} \big(\dot{q}_i(t) - \lambda_i q_i(t) - \beta_i(t)\big)\,\xi_i = 0$

where we have used the eigenvalue/eigenvector relation $A\xi_i = \lambda_i \xi_i$, $i = 1, \ldots, n$. Because the $\xi_i$ are linearly independent, these coefficients must all be zero, so

$\dot{q}_i(t) = \lambda_i q_i(t) + \beta_i(t), \qquad i = 1, \ldots, n$

This is a set of n decoupled equations (if we had used any generalized eigenvectors, some would still be coupled, but only to one other equation).
The terms $q_i(t)\,\xi_i$ are called system modes, and they are equivalent to the "new" state variables that we obtained in the earlier example where we "diagonalized" the system using the modal matrix. Recall that if M is the modal matrix, we can define new variables
$x = Mq$

such that

$\dot{q} = M^{-1}AMq + M^{-1}Bu$

$y = CMq + Du$

where $M^{-1}AM = J$ is the Jordan form of the A-matrix (diagonal if there are n eigenvectors).
Because these equations are decoupled, the solutions to the state equations are particularly simple. We can find the solutions $q(t)$ and then change them back to the original variables $x(t)$ by undoing the transformation afterward. That is, with $J = M^{-1}AM$,

$q(t) = e^{J(t-t_0)}q(t_0) + \int_{t_0}^{t} e^{J(t-\tau)}M^{-1}B(\tau)u(\tau)\,d\tau, \qquad q(t_0) = M^{-1}x(t_0)$

after which

$x(t) = Mq(t)$

In diagonal form, the computation of $e^{Jt}$ is particularly easy:

$e^{Jt} = \begin{bmatrix} e^{\lambda_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{\lambda_n t} \end{bmatrix}$

and

$e^{At} = M e^{Jt} M^{-1}$
Whenever two matrices A and J are similar, we can compute our function of J and perform the reverse similarity transform afterward. That is,

if $J = M^{-1}AM$, then $f(A) = M f(J) M^{-1}$ and $f(J) = M^{-1} f(A) M$.

Note that two matrices $A$ and $\hat{A}$ are similar if $\hat{A} = M^{-1}AM$ for some nonsingular $M$.
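As a quick illustration (not in the original notes), here is a small sketch of $f(A) = M f(J) M^{-1}$ with $f = \exp$, using an assumed 2x2 example that happens to have distinct real eigenvalues, so J is diagonal:

```python
# Hedged sketch: e^{At} via the modal decomposition, checked against expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [2.0, -3.0]])     # assumed example; eigenvalues -1 and -4
t = 0.5

lam, M = np.linalg.eig(A)                    # columns of M are the eigenvectors
eJt = np.diag(np.exp(lam * t))               # e^{Jt} is diagonal here
eAt_modal = M @ eJt @ np.linalg.inv(M)

print(np.allclose(eAt_modal, expm(A * t)))   # True: both give e^{At}
```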

Often in large-scale or infinite-dimensional systems, some modes are negligible and are discarded after modal expansion, thus reducing the size of the system. For example, when a beam vibrates, we have an infinite number of terms in a series expansion of its displacement function, but only the first few (2-5) may dominate.

Phase Portraits
Consider the homogeneous system

$\dot{x} = Ax$
A phase portrait is a graphical depiction of the
solutions to this equation, starting from a variety
of initial conditions. By sketching a few such
solutions (trajectories), the general behavior of
a system can be easily understood.
Phase portraits can be constructed qualitatively,
from knowledge of the eigenvalues and
eigenvectors, and are often used for nonlinear
system analysis as well.
Some examples:

Stable node:

$A = \begin{bmatrix} -1 & 0 \\ 0 & -4 \end{bmatrix}$, eigenvalues $\lambda = \{-1, -4\}$, eigenvectors $M = [e_1 \;\; e_2] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$.

[Phase-portrait sketch omitted.] Every trajectory starting from $x(0) \neq 0$ decays to the origin. There are two invariant subspaces: you can ride the $e_1$ or $e_2$ line to the origin.

Stable node:

$A = \begin{bmatrix} -2 & 1 \\ 2 & -3 \end{bmatrix}$, eigenvalues $\lambda = \{-1, -4\}$, eigenvectors $M = [e_1 \;\; e_2] = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}$.

[Phase-portrait sketch omitted.] Again two invariant subspaces, now along the (non-orthogonal) eigenvector directions.

Saddle point:

$A = \begin{bmatrix} -2 & 0 \\ 0 & 1 \end{bmatrix}$, eigenvalues $\lambda = \{-2, 1\}$, eigenvectors $M = [e_1 \;\; e_2] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$.

[Phase-portrait sketch omitted.] Two invariant subspaces: the eigenvector direction with the stable eigenvalue ($-2$) and the one with the unstable eigenvalue ($1$).

Saddle point:

$A = \begin{bmatrix} -3 & 2 \\ -2 & 2 \end{bmatrix}$, eigenvalues $\lambda = \{-2, 1\}$, eigenvectors $M = [e_1 \;\; e_2] = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$.

[Phase-portrait sketch omitted.] The $e_1$ line is the stable invariant subspace; the $e_2$ line is the unstable invariant subspace.

Stable node (repeated eigenvalue):

$A = \begin{bmatrix} -1 & 1 \\ -1 & -3 \end{bmatrix}$, eigenvalues $\lambda = \{-2, -2\}$, only one eigenvector: $e_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$.

[Phase-portrait sketch omitted.] Only one invariant subspace.

Stable node (the Jordan form of the previous example's A):

$A = \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}$, eigenvalues $\lambda = \{-2, -2\}$, only one eigenvector: $e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.

[Phase-portrait sketch omitted.] One invariant subspace; the Jordan-block coupling "rotates" the space as trajectories approach the origin.

Stable focus:

$A = \begin{bmatrix} -4 & 3 \\ -6 & 2 \end{bmatrix}$, eigenvalues $\lambda = \{-1 + j3,\ -1 - j3\}$, eigenvectors $M = \begin{bmatrix} 1 + j1 & 1 - j1 \\ j2 & -j2 \end{bmatrix}$.

[Phase-portrait sketch omitted.] With complex eigenvectors, the geometric interpretation of an invariant subspace dissolves. The inward spiral indicates stability (negative real part); the spiraling itself denotes oscillation, coming from the imaginary part.

Center:

$A = \begin{bmatrix} 1 & -2 \\ 5 & -1 \end{bmatrix}$, eigenvalues $\lambda = \{j3,\ -j3\}$, eigenvectors $M = \begin{bmatrix} -3 + j1 & -3 - j1 \\ j5 & -j5 \end{bmatrix}$.

[Phase-portrait sketch omitted.] Purely imaginary eigenvalues give closed orbits: oscillations for all time.
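For the curious, a rough sketch (not from the notes) of how such portraits can be generated numerically: integrate $\dot{x} = Ax$ from a ring of initial conditions and plot the trajectories. The matrix below is the stable-node example $A = \mathrm{diag}(-1, -4)$; any of the other example matrices could be substituted.

```python
# Hedged sketch: numerically generated phase portrait for a stable node.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.0], [0.0, -4.0]])
t_span = (0.0, 5.0)
t_eval = np.linspace(*t_span, 200)

for angle in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    x0 = [np.cos(angle), np.sin(angle)]            # initial conditions on a circle
    sol = solve_ivp(lambda t, x: A @ x, t_span, x0, t_eval=t_eval)
    plt.plot(sol.y[0], sol.y[1])

plt.xlabel("x1"); plt.ylabel("x2"); plt.title("Stable node")
plt.show()
```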

Time-varying case:

Things get hairy.

To simplify some computations, consider the simpler homogeneous system:

$\dot{x}(t) = A(t)\,x(t)$

(For uniqueness, we ask that the elements of A(t) be continuous functions of time.) Remember that the matrix exponential is no longer an integrating factor, so we must look for a different one. It is known that the set of solutions of an nth-order linear homogeneous differential equation (or a system of n first-order equations) forms an n-dimensional vector space.

A basis of n such solutions can be chosen in a number of


different ways, such as choosing a basis of n linearly
independent initial condition vectors and using the
resulting solutions. To make things easy, choose:

$x_1(t_0) = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad x_2(t_0) = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad x_n(t_0) = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$

When we stack the resulting solutions together side-by-side,


we get the fundamental solution matrix:
$X(t) = \begin{bmatrix} x_1(t) & x_2(t) & \cdots & x_n(t) \end{bmatrix}$

Obviously,

$\dot{X}(t) = A(t)\,X(t)$ (matrix system)

since each column satisfies $\dot{x} = A(t)x$ (vector system). An expansion of the solution of the state vector $x(t)$ into this basis will be

$x(t) = X(t)\,x(t_0)$

So if we know the solution of the system for n linearly independent initial conditions, we know it for any initial condition by computing $X(t)$.
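A minimal sketch (not from the notes; the time-varying $A(t)$ below is just an assumed example) of building the fundamental solution matrix numerically: integrate $\dot{X} = A(t)X$ column by column from the identity initial conditions.

```python
# Hedged sketch: fundamental solution matrix X(t) for an assumed A(t).
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # illustrative time-varying system matrix (assumption)
    return np.array([[0.0, 1.0], [-2.0 - np.sin(t), -1.0]])

def X_of(t, t0=0.0):
    cols = []
    for e in np.eye(2):                    # x_i(t0) = i-th unit vector
        sol = solve_ivp(lambda s, x: A(s) @ x, (t0, t), e, rtol=1e-9, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)           # X(t) = [x_1(t) ... x_n(t)]

X = X_of(3.0)
x0 = np.array([1.0, -1.0])
print(X @ x0)                              # x(3) = X(3) x(0) for this x0
```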

Now we notice the matrix identity

$\dfrac{dX^{-1}(t)}{dt} = -X^{-1}(t)\,\dfrac{dX(t)}{dt}\,X^{-1}(t)$

Substituting $\dot{X} = A(t)X(t)$ gives

$\dfrac{dX^{-1}(t)}{dt} = -X^{-1}(t)\,A(t)\,X(t)\,X^{-1}(t) = -X^{-1}(t)\,A(t)$

which is a new identity.
So $X^{-1}(t)$ qualifies as a valid integrating factor for the state equations:

$\dot{x}(t) = A(t)x(t) + B(t)u(t)$

$X^{-1}(t)\big[\dot{x}(t) - A(t)x(t) = B(t)u(t)\big]$  (premultiply both sides by $X^{-1}$)

$X^{-1}(t)\dot{x}(t) - X^{-1}(t)A(t)x(t) = X^{-1}(t)B(t)u(t)$  (rearrange)

$X^{-1}(t)\dot{x}(t) + \dfrac{dX^{-1}(t)}{dt}\,x(t) = X^{-1}(t)B(t)u(t)$  (substitute the new identity)

Solving:

$X^{-1}(t)\dot{x}(t) + \dfrac{dX^{-1}(t)}{dt}\,x(t) = X^{-1}(t)B(t)u(t)$  (same equation)

$\dfrac{d}{dt}\big[X^{-1}(t)\,x(t)\big] = X^{-1}(t)B(t)u(t)$  (product rule)

$X^{-1}(t)x(t) - X^{-1}(t_0)x(t_0) = \int_{t_0}^{t} X^{-1}(\tau)B(\tau)u(\tau)\,d\tau$  (integrate)

or, premultiplying by $X(t)$ and simplifying,

$x(t) = X(t)X^{-1}(t_0)x(t_0) + \int_{t_0}^{t} X(t)X^{-1}(\tau)B(\tau)u(\tau)\,d\tau$

This would be great if we knew $X(t)$ for all time, but unfortunately it is difficult to compute: $X(t)$ is the solution of the matrix equation $\dot{X} = A(t)X$.

State Transition Matrix: Define the state transition matrix as

$\Phi(t, \tau) = X(t)\,X^{-1}(\tau)$

This is an $n \times n$ linear transformation from the state-space into itself. For homogeneous systems ($u = 0$), it relates the state vectors at any two times:

$x(t) = \Phi(t, \tau)\,x(\tau)$

(Verify this using $x(t) = X(t)x(t_0)$ from the previous page: since $X(t_0) = I$, $x(t) = X(t)X^{-1}(t_0)x(t_0) = \Phi(t, t_0)x(t_0)$.)

By differentiating it, one can show that

$\dfrac{d\Phi(t,\tau)}{dt} = \dfrac{d\big[X(t)X^{-1}(\tau)\big]}{dt} = \dfrac{dX(t)}{dt}X^{-1}(\tau) = A(t)X(t)X^{-1}(\tau)$

so

$\dfrac{d\Phi(t,\tau)}{dt} = A(t)\,\Phi(t,\tau)$

i.e., the state transition matrix satisfies the original system $\dot{X} = A(t)X$.

(Chen uses this last line as the definition of $\Phi(t,\tau)$ in his book.) Using our definition

$\Phi(t,\tau) = X(t)X^{-1}(\tau)$

it should be obvious that

$\Phi(t_2, t_0) = \Phi(t_2, t_1)\,\Phi(t_1, t_0) = X(t_2)X^{-1}(t_1)\,X(t_1)X^{-1}(t_0)$

and

$\Phi^{-1}(t, t_0) = \big[X(t)X^{-1}(t_0)\big]^{-1} = X(t_0)X^{-1}(t) = \Phi(t_0, t)$
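These two properties are easy to check numerically. Below is a small sketch (not from the notes) that does so for the same illustrative time-varying $A(t)$ assumed in the earlier fundamental-matrix example:

```python
# Hedged sketch: check Phi(t2,t0) = Phi(t2,t1) Phi(t1,t0) and Phi^{-1}(t,t0) = Phi(t0,t).
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[0.0, 1.0], [-2.0 - np.sin(t), -1.0]])   # assumed example

def X_of(t):
    # fundamental matrix with X(0) = I, integrated column by column
    cols = [solve_ivp(lambda s, x: A(s) @ x, (0.0, t), e,
                      rtol=1e-10, atol=1e-12).y[:, -1] for e in np.eye(2)]
    return np.column_stack(cols)

def Phi(t, tau):
    return X_of(t) @ np.linalg.inv(X_of(tau))

t0, t1, t2 = 0.5, 1.0, 2.5
print(np.allclose(Phi(t2, t0), Phi(t2, t1) @ Phi(t1, t0)))   # composition property
print(np.allclose(np.linalg.inv(Phi(t2, t0)), Phi(t0, t2)))  # inverse property
```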

If our system is time-invariant ($\dot{X} = AX$, so $X(t) = e^{At}$), then it is easy to verify by substitution into the definition that

$\Phi(t, \tau) = e^{A(t-\tau)}$

When this is the case, we can compute $\Phi(t,\tau) = e^{A(t-\tau)}$ in many ways:

1. Because

$X(s) = (sI - A)^{-1}BU(s) + (sI - A)^{-1}x(t_0)$

and

$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau$

we can compare terms and get:

$e^{A(t-t_0)} = \mathcal{L}^{-1}\big[(sI - A)^{-1}\big]\Big|_{t \to t - t_0}$

Note that $\Phi(t,\tau)$ depends only on the difference $t - \tau$, i.e., $\Phi(t,\tau) = \Phi(t-\tau, 0)$, whenever A is a constant matrix (a property of the exponential function).
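A symbolic sketch (not from the notes; the 2x2 matrix is an assumed example) of this Laplace-transform method, computing the resolvent $(sI - A)^{-1}$ and inverting it entry by entry with sympy:

```python
# Hedged sketch: e^{At} = L^{-1}[(sI - A)^{-1}] via sympy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])                    # assumed example; eigenvalues -1, -2

resolvent = (s * sp.eye(2) - A).inv()                # (sI - A)^{-1}
eAt = resolvent.applyfunc(lambda f: sp.inverse_laplace_transform(f, s, t))
print(sp.simplify(eAt))                              # Heaviside(t) factors may appear (t > 0)
```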

2. Use the Cayley-Hamilton theorem to express $e^{A(t-\tau)}$ as

$\Phi(t,\tau) = e^{A(t-\tau)} = \alpha_0 I + \alpha_1 A + \cdots + \alpha_{n-1}A^{n-1}$

and find the coefficients from the system of equations obtained by substituting the eigenvalues of A into the scalar polynomial:

$e^{\lambda_i (t-\tau)} = \alpha_0 + \alpha_1 \lambda_i + \cdots + \alpha_{n-1}\lambda_i^{n-1}, \qquad \lambda_i = i\text{th eigenvalue}$
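A numeric sketch (assumed 2x2 example, not from the notes) of this method: solve the small Vandermonde system for $\alpha_0, \alpha_1$ at a fixed time, form $e^{At} = \alpha_0 I + \alpha_1 A$, and compare with scipy's expm.

```python
# Hedged sketch: Cayley-Hamilton computation of e^{At} for distinct eigenvalues.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # assumed example; eigenvalues -1, -2
t = 0.7

lam = np.linalg.eigvals(A)
V = np.vander(lam, increasing=True)           # rows [1, lambda_i]
alpha = np.linalg.solve(V, np.exp(lam * t))   # e^{lam_i t} = alpha_0 + alpha_1 lam_i
eAt_ch = alpha[0] * np.eye(2) + alpha[1] * A

print(np.allclose(eAt_ch, expm(A * t)))       # True
```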

3. First simplify the system by putting it in diagonal form (or Jordan form). Then

$\Phi(t,\tau) = M e^{J(t-\tau)} M^{-1}$

4. "Sylvester's Expansion" (explained in Brogan).

5. Taylor series expansion:

$\Phi(t,\tau) = I + A(t-\tau) + \dfrac{1}{2!}A^2(t-\tau)^2 + \cdots$

However, when A = A(t), none of the choices are good:

1. Computer simulation of $\dot{\Phi}(t,\tau) = A(t)\,\Phi(t,\tau)$ with $\Phi(\tau,\tau) = I$.

2. Define $B(t,\tau) = \int_{\tau}^{t} A(\sigma)\,d\sigma$. Then, if $AB = BA$ (i.e., A(t) commutes with its integral),

$\Phi(t,\tau) = e^{B(t,\tau)}$

3. Integral expansions (Peano-Baker series):

$\Phi(t, t_0) = I_n + \int_{t_0}^{t} A(\sigma_0)\,d\sigma_0 + \int_{t_0}^{t} A(\sigma_0)\int_{t_0}^{\sigma_0} A(\sigma_1)\,d\sigma_1\,d\sigma_0 + \cdots$

4. Approximate with discrete-time systems.
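A sketch (assumed example, not from the notes) comparing option 1 (simulate $\dot{\Phi} = A(t)\Phi$) with option 2 ($\Phi = e^{B(t,\tau)}$) for a case where the commutativity condition holds: $A(t) = a(t)\,A_0$, a scalar function times a constant matrix, so $A(t)$ commutes with its integral.

```python
# Hedged sketch: commuting time-varying case, two ways.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

A0 = np.array([[0.0, 1.0], [-1.0, -0.5]])      # assumed constant part
a = lambda t: 1.0 + 0.5 * np.cos(t)            # assumed scalar time variation
A = lambda t: a(t) * A0

tau, t = 0.0, 2.0

# Option 1: integrate the matrix ODE with Phi(tau,tau) = I (flattened to a vector)
def rhs(s, phi_flat):
    return (A(s) @ phi_flat.reshape(2, 2)).ravel()
Phi_sim = solve_ivp(rhs, (tau, t), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)

# Option 2: B(t,tau) = (integral of a) * A0, then Phi = expm(B)
Phi_exp = expm(quad(a, tau, t)[0] * A0)

print(np.allclose(Phi_sim, Phi_exp))           # True for this commuting special case
```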

An Introduction to Discrete-Time Systems:


Consider the continuous-time linear system

$\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad x(t_0) = x_0$

and suppose it is sampled every T seconds to give a discrete-time system. Assume that this sampling rate is much faster than the rate at which u(t) changes; we therefore consider u(t) to be constant over any individual sampling period $T = t_{k+1} - t_k$, i.e., $u(t) \approx u(t_k)$ for $t_k \le t < t_{k+1}$.
Consider time $t_k$ to be an initial condition and use the state-transition matrix to find the state vector at time $t_{k+1}$:

$x(t_{k+1}) = \Phi(t_{k+1}, t_k)\,x(t_k) + \left[\int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)\,B(\tau)\,d\tau\right] u(t_k)$

This can be further simplified if the system has a constant A-matrix, so that $\Phi(t,\tau) = e^{A(t-\tau)}$:

$x(t_{k+1}) = e^{A(t_{k+1}-t_k)}x(t_k) + \left[\int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-\tau)}B(\tau)\,d\tau\right] u(t_k)$

$x(t_{k+1}) = \underbrace{e^{AT}}_{A_d}\,x(t_k) + \underbrace{\left[\int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-\tau)}B(\tau)\,d\tau\right]}_{B_d}\,u(t_k)$
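A sketch (not from the notes; the system matrices are assumed) of computing this zero-order-hold discretization numerically, using the well-known augmented-matrix exponential trick and checking against scipy's cont2discrete:

```python
# Hedged sketch: A_d = e^{AT}, B_d = \int_0^T e^{A sigma} B dsigma for constant A, B.
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # assumed example
B = np.array([[0.0], [1.0]])
T = 0.1

# expm of the augmented matrix [[A, B],[0, 0]] * T gives [[Ad, Bd],[0, I]]
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A, B
E = expm(M * T)
Ad, Bd = E[:n, :n], E[:n, n:]

Ad2, Bd2, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method='zoh')
print(np.allclose(Ad, Ad2), np.allclose(Bd, Bd2))   # True True
```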

This is the discrete-time approximation to the continuous-time system. If we want the state-transition matrix for a discrete-time system, we can use induction. (We will give it a new name, $\Phi(k, j)$.)


Recall the recursions we obtained in the example that introduced the concept of controllability of a discrete-time system:

$x(k+1) = A\,x(k) + B\,u(k)$

$x(1) = A\,x(0) + B\,u(0)$

$x(2) = A^2 x(0) + AB\,u(0) + B\,u(1)$

$x(3) = A^3 x(0) + A^2 B\,u(0) + AB\,u(1) + B\,u(2)$

$\vdots$

$x(k) = A^k x(0) + A^{k-1}B\,u(0) + A^{k-2}B\,u(1) + \cdots + B\,u(k-1)$

or

$x(k) = A^k x(0) + \sum_{j=0}^{k-1} A^{k-j-1}B\,u(j)$

or, changing the index,

$x(k) = A^k x(0) + \sum_{j=1}^{k} A^{k-j}B\,u(j-1)$

Leading to:

$\Phi(k, j) = A^{k-j}$
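A quick numerical check (assumed example, not from the notes) that the closed-form convolution sum matches the step-by-step recursion:

```python
# Hedged sketch: x(k) = A^k x(0) + sum_j A^{k-j-1} B u(j) vs. the recursion.
import numpy as np

Ad = np.array([[0.9, 0.1], [0.0, 0.8]])        # assumed discrete-time example
Bd = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [-1.0]])
u = [np.array([[np.sin(0.3 * j)]]) for j in range(10)]   # arbitrary input sequence

# step-by-step recursion x(k+1) = Ad x(k) + Bd u(k)
x = x0
for j in range(10):
    x = Ad @ x + Bd @ u[j]

# closed-form sum
k = 10
x_sum = np.linalg.matrix_power(Ad, k) @ x0 + sum(
    np.linalg.matrix_power(Ad, k - j - 1) @ Bd @ u[j] for j in range(k))

print(np.allclose(x, x_sum))   # True
```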

It may be apparent that, in state-variable form, discrete-time systems are considerably easier to analyze than continuous-time systems.

If $A = A_d(k)$, that is, a discrete-time, time-varying system, then

$\Phi(k, j) = \prod_{i=j}^{k-1} A_d(i)$

Computation of eigenvalues, eigenvectors, and


canonical forms for discrete-time systems is
exactly the same as for continuous-time systems.
The interpretation of eigenvalues in the context of
stability properties will be different, but modal
decompositions and diagonalization procedures
are exactly the same:

If M is the modal matrix, we will get a diagonalized (or


perhaps Jordan) form:

$q(k+1) = M^{-1}AM\,q(k) + M^{-1}B\,u(k)$

$y(k) = CM\,q(k) + D\,u(k)$

and

$x(k) = M\,q(k)$