
RHODES UNIVERSITY

DEPARTMENT OF MATHEMATICS (Pure & Applied)


EXAMINATION : JUNE 2004

MATHEMATICS III/APPLIED MATHEMATICS III


PAPER 3

Examiner : Dr C.C. Remsing


External Examiner : Dr F.A.M. Frescura

AM3.2 (CONTROL and OPTIMIZATION)

Duration : 3 hours
Marks available : 120
Full marks : 100

NB : All questions may be attempted. All steps must be clearly motivated. Marks
will be deducted if this is not done.

Question 1. Let $A \in \mathbb{R}^{n \times n}$.


(a) Define the terms eigenvalue and eigenvector (of $A$), and then investigate
the relationship between the eigenvalues of (the power) $A^k$ and those of
$A$. Make a clear statement and then prove it.
(b) Let $\lambda_1, \lambda_2, \dots, \lambda_r$ be $r$ ($\leq n$) distinct eigenvalues of $A$ with correspond-
ing eigenvectors $w_1, w_2, \dots, w_r$. Prove that the vectors $w_1, w_2, \dots, w_r$
are linearly independent.
(c) The matrix $A$ is said to be nilpotent if some power $A^k$ is the zero matrix.
Show that $A$ is nilpotent if and only if all its eigenvalues are zero.
(d) Define the exponential of $A$ and then calculate $\exp(A)$ for
$$A = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}.$$

[18]
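As a numerical cross-check for part (d), not part of the paper itself: this $A$ satisfies $A^3 = 0$, so the exponential series terminates after the quadratic term, $\exp(A) = I + A + \tfrac{1}{2}A^2$. A short sketch using numpy/scipy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# A is nilpotent (A^3 = 0), so the matrix exponential series terminates:
# exp(A) = I + A + A^2 / 2.
series = np.eye(3) + A + (A @ A) / 2.0

print(series)        # truncated series
print(expm(A))       # scipy's general-purpose result agrees
```

The two printed matrices coincide, which confirms the hand calculation for this strictly upper-triangular example.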

Page 1 of 4


Question 2. Consider the (initialized) time-varying control system
$$\dot{x} = A(t)x + B(t)u, \qquad x(0) = x^0 \in \mathbb{R}^m.$$
(It is assumed that the matrix-valued mappings $A : [0, \infty) \to \mathbb{R}^{m \times m}$ and $B :
[0, \infty) \to \mathbb{R}^{m \times \ell}$ are both continuous.)
(a) State the Existence and Uniqueness Theorem for the uncontrolled case
(i.e. for the underlying dynamical system) and then sketch the proof.
(Emphasize the main steps, make clear statements, give some details.)
(b) Explain what is meant by the state transition matrix $\Phi(t, 0)$ and then
derive (an explicit expression for) the solution curve.
(c) Find the solution of
$$\dot{x} = \begin{pmatrix} 1 & 4 \\ 1 & 1 \end{pmatrix} x + \begin{pmatrix} 1 \\ 1 \end{pmatrix} u, \qquad x(0) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
when $u(t) = e^{2t}$, $t \geq 0$.
[22]
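A numerical sanity check for part (c), assuming the data as reconstructed ($A = \begin{psmallmatrix}1 & 4\\ 1 & 1\end{psmallmatrix}$, $B = (1,1)^T$, $x(0) = (1,2)^T$, $u(t) = e^{2t}$): direct integration of the ODE should agree with the variation-of-constants formula $x(t) = e^{At}x(0) + \int_0^t e^{A(t-s)}Bu(s)\,ds$ from part (b). A sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad_vec
from scipy.linalg import expm

A = np.array([[1.0, 4.0], [1.0, 1.0]])
B = np.array([1.0, 1.0])
x0 = np.array([1.0, 2.0])
u = lambda t: np.exp(2.0 * t)

# Direct numerical integration of x' = A x + B u(t).
sol = solve_ivp(lambda t, x: A @ x + B * u(t), (0.0, 1.0), x0,
                rtol=1e-10, atol=1e-12)
x_num = sol.y[:, -1]

# Variation of constants: x(t) = e^{At} x0 + int_0^t e^{A(t-s)} B u(s) ds.
t = 1.0
integral, _ = quad_vec(lambda s: expm(A * (t - s)) @ B * u(s), 0.0, t)
x_voc = expm(A * t) @ x0 + integral

print(x_num, x_voc)   # the two computations agree
```

Agreement of the two values gives confidence in a hand-derived closed form (here the eigenvalues of $A$ are $3$ and $-1$, so the homogeneous part involves $e^{3t}$ and $e^{-t}$).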

Question 3. Consider a time-invariant control system (with outputs)
$$\dot{x} = Ax + Bu, \qquad y = Cx.$$
(a) Define the terms complete controllability and complete observability (of
such a system).
(b) Give a necessary and sufficient condition for complete controllability and
then prove EITHER the necessity OR the sufficiency of the criterion.
(c) Explain what is meant by the dual of the given control system, and then
state (but DO NOT prove) the Duality Theorem.
(d) Consider the control system
$$\dot{x} = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} x + bu, \qquad y = cx.$$
Find $b \in \mathbb{R}^{2 \times 1}$ and $c \in \mathbb{R}^{1 \times 2}$ such that the system is
i. not completely controllable;
ii. completely observable.
When $c = \begin{pmatrix} 1 & 1 \end{pmatrix}$, determine (the initial state) $x(0)$ if $y(t) = e^t - 2e^{3t}$.
[24]
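For part (d), the Kalman rank criteria give a quick machine check of candidate answers. One possible choice (an illustration, not the only valid answer) is $b = (1, 0)^T$, which makes the controllability matrix $[b \;\; Ab]$ rank deficient, while $c = (1 \;\; 1)$ gives a full-rank observability matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])

# Candidate b: the controllability matrix [b, Ab] should be rank deficient.
b = np.array([[1.0], [0.0]])
C_mat = np.hstack([b, A @ b])            # equals [[1, 1], [0, 0]]
print(np.linalg.matrix_rank(C_mat))      # 1  -> not completely controllable

# Candidate c: the observability matrix [c; cA] should have full rank.
c = np.array([[1.0, 1.0]])
O_mat = np.vstack([c, c @ A])            # equals [[1, 1], [1, 5]]
print(np.linalg.matrix_rank(O_mat))      # 2  -> completely observable
```

Since $A$ is upper triangular with eigenvalues $1$ and $3$, any output $y(t) = c\,e^{At}x(0)$ is a combination of $e^t$ and $e^{3t}$, consistent with the given $y(t)$.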

Question 4. Let $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{m \times \ell}$. Consider the linear control system
$$\dot{x} = Ax + Bu.$$

(a) State the Spectrum Assignment Theorem and then prove it (ONLY for
the case $\ell = 1$).
(b) For the control system
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = x_3 + u, \qquad \dot{x}_3 = -x_1 - x_2 - 2x_3$$
find a linear feedback control which makes all the eigenvalues (of the
system) equal to $-1$.

[16]
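For part (b), taking the system as $\dot{x}_1 = x_2$, $\dot{x}_2 = x_3 + u$, $\dot{x}_3 = -x_1 - x_2 - 2x_3$ (signs assumed from the garbled source), a candidate feedback $u = Kx$ can be verified by checking the closed-loop characteristic polynomial against $(\lambda + 1)^3 = \lambda^3 + 3\lambda^2 + 3\lambda + 1$. Matching coefficients by hand gives $K = (0, -1, 0)$, i.e. $u = -x_2$; a sketch of the check:

```python
import numpy as np

# x1' = x2,  x2' = x3 + u,  x3' = -x1 - x2 - 2 x3   (assumed signs)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -1.0, -2.0]])
B = np.array([[0.0], [1.0], [0.0]])

# Candidate feedback u = K x, found by matching the closed-loop
# characteristic polynomial to (s + 1)^3 = s^3 + 3 s^2 + 3 s + 1.
K = np.array([[0.0, -1.0, 0.0]])          # i.e. u = -x2

A_cl = A + B @ K
print(np.poly(A_cl))                      # coefficients of det(sI - A_cl)
print(np.linalg.eigvals(A_cl))            # all eigenvalues near -1
```

The printed coefficients are (up to rounding) $1, 3, 3, 1$, confirming the triple eigenvalue at $-1$ under the assumed sign reconstruction.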

Question 5. Let $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{m \times \ell}$, and $C \in \mathbb{R}^{n \times m}$. Consider the linear control system (with outputs)
$$\dot{x} = Ax + Bu, \qquad u \in U \subseteq \mathbb{R}^\ell$$
$$y = Cx.$$

(a) Explain what is meant by saying that A is a stability matrix. Prove that
if A is a stability matrix and (the control set) U is bounded, then the
given system is not completely controllable.
(b) Define the term b.i.b.o. stability, and then show that if $A$ is a stability
matrix, then the given system is b.i.b.o. stable. (State clearly any facts,
assumed to be known, that are used in the proof.)
(c) Define the term Lyapunov function (for a general nonlinear system) and
then use a quadratic Lyapunov function $V(x) = x^T P x$ to investigate the
stability of the system described by the equation
$$\ddot{z} + \epsilon (z^2 - 1)\dot{z} + z = 0, \qquad \epsilon < 0.$$

[20]
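For part (c), with $x_1 = z$, $x_2 = \dot{z}$ the equation becomes $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1 - \epsilon(x_1^2 - 1)x_2$. Taking $P = \tfrac{1}{2}I$ gives $V(x) = \tfrac{1}{2}(x_1^2 + x_2^2)$ and $\dot{V} = -\epsilon(x_1^2 - 1)x_2^2$, which is $\leq 0$ for $|x_1| < 1$ when $\epsilon < 0$, suggesting local asymptotic stability of the origin. A simulation sketch ($\epsilon = -1$ and the initial point are my own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = -1.0                                 # any eps < 0 will do

def f(t, x):
    # x1 = z, x2 = z'; second-order equation written as a first-order system
    return [x[1], -x[0] - eps * (x[0]**2 - 1.0) * x[1]]

V = lambda x: 0.5 * (x[0]**2 + x[1]**2)    # quadratic Lyapunov function, P = I/2

x0 = [0.5, 0.5]                            # starts inside the region |x1| < 1
sol = solve_ivp(f, (0.0, 30.0), x0, rtol=1e-9, atol=1e-12)

print(V(x0), V(sol.y[:, -1]))              # V decreases along the trajectory
```

The trajectory stays in the sublevel set of $V$ where $\dot{V} \leq 0$ and spirals into the origin, consistent with the Lyapunov argument.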


Question 6. Consider a cost functional
$$J = \varphi(x(t_1), t_1) + \int_{t_0}^{t_1} L(t, x, u)\, dt$$
subject to
$$\dot{x} = F(t, x, u) \quad \text{and} \quad x(t_0) = x^0.$$

(a) Explain what is meant by an extremal (for $J$). Introduce a covector of
Lagrange multipliers ($p$) and the associated Hamiltonian function ($H$),
and hence derive necessary conditions for $u^*$ to be an extremal for $J$
(in terms of $p$ and $H$).
(b) Find the (feedback) control $u^*$ which minimizes the (quadratic) cost
functional
$$J = \frac{1}{2} x^T(t_1) M x(t_1) + \frac{1}{2} \int_0^{t_1} \left( x^T Q(t) x + u^T R(t) u \right) dt$$
subject to
$$\dot{x} = A(t)x + B(t)u(t) \quad \text{and} \quad x(0) = x^0.$$
(It is assumed that $R(t)$ is positive definite, and that $M$ and $Q(t)$ are
positive semi-definite symmetric matrices for $t \geq 0$.)

[20]
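Part (b) leads to a Riccati equation and the feedback $u^* = -R^{-1}B^T P(t)x$. For the time-invariant, infinite-horizon analogue (a simplification of the finite-horizon problem posed in the question), scipy solves the algebraic Riccati equation directly; the double-integrator plant below is my own illustrative choice:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator as an illustrative time-invariant plant.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state weight, positive semi-definite
R = np.array([[1.0]])                      # control weight, positive definite

# Algebraic Riccati equation: A^T P + P A - P B R^{-1} B^T P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # optimal gain, u* = -K x

print(K)                                   # here K = [1, sqrt(3)]
print(np.linalg.eigvals(A - B @ K))        # closed loop is stable
```

For this example the gain works out in closed form to $K = (1, \sqrt{3})$, and the closed-loop eigenvalues satisfy $\lambda^2 + \sqrt{3}\lambda + 1 = 0$, so both have negative real part.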

END OF THE EXAMINATION PAPER
