
Lecture Note on Robust Control

Jyh-Ching Juang
Department of Electrical Engineering
National Cheng Kung University
Tainan, TAIWAN
juang@mail.ncku.edu.tw
Motivation of Robust Control
For a physical process, uncertainties may appear as
Parameter variation
Unmodeled or neglected dynamics
Change of operating condition
Environmental factors
External disturbances and noises
Others

Robust control, in short, aims to ensure the performance and stability of feedback systems in the presence of uncertainties.

Objectives of the course
Recognize and respect the effects of uncertainties in dynamic systems
Understand how to analyze the performance and robustness of dynamic systems in the presence of uncertainties
Learn to use computer-aided tools for control system analysis and design
Design robust controllers to accommodate uncertainties
Apply robust control techniques to engineering problems
Course Requirements
Prerequisite:
Control engineering
Linear system theory
Multivariable systems
Working knowledge of Matlab

References:
Kemin Zhou with John Doyle, Essentials of Robust Control, Prentice Hall, 1998.
G. E. Dullerud and F. Paganini, A Course in Robust Control Theory, Springer-Verlag, 2000.
Doyle, Francis, and Tannenbaum, Feedback Control Theory, Maxwell Macmillan, 1992.
Boyd, El Ghaoui, Feron, and Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, 1994.
Zhou, Doyle, and Glover, Robust and Optimal Control, Prentice Hall, 1996.
Maciejowski, Multivariable Feedback Design, Addison-Wesley, 1989.
Skogestad and Postlethwaite, Multivariable Feedback Control, John Wiley & Sons, 1996.
Dahleh and Diaz-Bobillo, Control of Uncertain Systems, Prentice Hall, 1995.
Green and Limebeer, Linear Robust Control, Prentice Hall, 1995.
Many others

Grading:
50% : Homework
50% : Term project: paper study and design report
Course Outline
1. Introduction
2. Preliminaries
(a) Linear algebra and matrix theory
(b) Function space and signal
(c) Linear system theory
(d) Measure of systems (different norms)
(e) Equations for analysis and design
3. Internal Stability
(a) Feedback structure and well-posedness
(b) Coprime factorization
(c) Linear fractional map
(d) Stability and stabilizing controllers
4. Performance and Robustness
(a) Feedback design tradeoffs
(b) Bode's relation
(c) Uncertainty model
(d) Small gain theorem
(e) Robust stability
(f) Robust performance
(g) Structured singular values
5. Controller Design
(a) H_2 state feedback
(b) H_∞ state feedback
(c) Observer-based controller
(d) Output injection and separation principle
(e) Output feedback H_2 controller design
(f) Loop transfer recovery
(g) Output feedback H_∞ controller design
(h) μ synthesis
6. Miscellaneous Topics
Review of Linear Algebra and Matrix Theory
Linear Subspaces
Let R and C be the real and complex scalar fields, respectively.

The set of all linear combinations of x_1, x_2, ..., x_k ∈ F^n is a subspace called the span of x_1, x_2, ..., x_k, denoted by
span{x_1, x_2, ..., x_k} = { x | x = α_1 x_1 + α_2 x_2 + ... + α_k x_k, α_i ∈ F }

A set of vectors x_1, x_2, ..., x_k is said to be linearly dependent over F if there exist α_1, α_2, ..., α_k ∈ F, not all zero, such that α_1 x_1 + α_2 x_2 + ... + α_k x_k = 0; otherwise they are linearly independent.

Let S be a subspace of a vector space V. A set of vectors {x_1, x_2, ..., x_k} ⊆ S is said to be a basis for S if x_1, x_2, ..., x_k are linearly independent and S = span{x_1, x_2, ..., x_k}.

The dimension of a vector subspace equals the number of basis vectors.

A set of vectors x_1, x_2, ..., x_k in V is mutually orthogonal if x_i^* x_j = 0 for all i ≠ j, where x_i^* is the complex conjugate transpose of x_i.

A collection of subspaces S_1, S_2, ..., S_k of V is mutually orthogonal if x^* y = 0 whenever x ∈ S_i and y ∈ S_j for i ≠ j.

The vectors are orthonormal if x_i^* x_j = δ_ij, the Kronecker delta (δ_ij = 1 when i = j and 0 otherwise).

Let S be a subspace of V. The set of all vectors in V that are orthogonal to every vector in S is the orthogonal complement of S and is denoted by
S^⊥ = { y ∈ V : y^* x = 0 for all x ∈ S }
Each vector x in V can be expressed uniquely in the form x = x_S + x_S⊥ with x_S ∈ S and x_S⊥ ∈ S^⊥.

A set of vectors {u_1, u_2, ..., u_k} is said to be an orthonormal basis for a k-dimensional subspace S if the vectors form a basis and are orthonormal. Suppose that the dimension of V is n; it is then possible to find an orthonormal set {u_{k+1}, ..., u_n} such that
S^⊥ = span{u_{k+1}, ..., u_n}

Let M ∈ R^{m×n} be a linear transformation from R^n to R^m. M can be viewed as an m×n matrix.

The kernel or null space of a linear transformation M is defined as
ker M = N(M) = { x ∈ R^n : Mx = 0 }

The image or range of M is
img M = { y ∈ R^m : y = Mx for some x ∈ R^n }

The rank of a matrix M is defined as the dimension of img M.

An m×n matrix M is said to have full row rank if m ≤ n and rank(M) = m. It has full column rank if m ≥ n and rank(M) = n.

Let M be an m×n real, full-rank matrix with m > n. The orthogonal complement of M is a matrix M_⊥ of dimension m×(m−n) such that [M  M_⊥] is a square, nonsingular matrix with the property M^T M_⊥ = 0.

The following properties hold: (ker M)^⊥ = img M^T and (img M)^⊥ = ker M^T.

Homework: Given M = [1 4 2; 0 1 1]. Find ker M, img M, ker M^T, and img M^T.
Eigenvalues and Eigenvectors
A scalar λ is an eigenvalue of the square matrix M ∈ C^{n×n} if
det(λI − M) = 0
There are n eigenvalues, which are denoted by λ_i(M), i = 1, 2, ..., n.

The spectrum of M, denoted spec(M), is the collection of all eigenvalues of M, i.e., spec(M) = {λ_1, λ_2, ..., λ_n}.

The spectral radius is defined as
ρ(M) = max_i |λ_i(M)|
where λ_i is an eigenvalue of M.

If M is a Hermitian matrix, i.e., M = M^*, then all eigenvalues of M are real. In this case, λ_max(M) is used to represent the maximal eigenvalue of M and λ_min(M) is used to represent the minimal eigenvalue of M.

If λ_i ∈ spec(M), then any nonzero vector x_i ∈ C^n that satisfies
M x_i = λ_i x_i
is a (right) eigenvector of M. Likewise, any nonzero vector w_i ∈ C^n that satisfies
w_i^* M = λ_i w_i^*
is a left eigenvector of M.

If M is Hermitian, then there exists a unitary matrix U, i.e., U^* U = I, and a real diagonal Λ such that
M = U Λ U^*
In this case, the columns of U are the (right) eigenvectors of M.

Homework: Given M = [1 4 2; 4 0 1; 2 1 1]. What is ρ(M)? Find U and Λ.
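A quick numerical illustration of these definitions, assuming Python with NumPy rather than Matlab. The matrix below is the homework matrix as printed (which happens to be symmetric, so eigh applies); it is used here only as a test case, not as the homework solution.

```python
import numpy as np

# Symmetric (hence Hermitian) example matrix, taken from the homework above.
M = np.array([[1.0, 4.0, 2.0],
              [4.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])

eigvals = np.linalg.eigvals(M)        # the eigenvalues lambda_i(M)
rho = np.max(np.abs(eigvals))         # spectral radius rho(M)

# For a Hermitian matrix, eigh returns real eigenvalues and a unitary U with M = U diag(lam) U*
lam, U = np.linalg.eigh(M)
print("spectrum:", eigvals)
print("spectral radius:", rho)
print("U unitary?", np.allclose(U.T @ U, np.eye(3)))
print("M = U Lam U*?", np.allclose(U @ np.diag(lam) @ U.T, M))
```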
Matrix Inversion and Pseudo-Inverse
For a square matrix M, its inverse, denoted by M^{-1}, is the matrix such that M M^{-1} = M^{-1} M = I.

Let M be a square matrix partitioned as follows:
M = [M_11  M_12; M_21  M_22]
Suppose that M_11 is nonsingular; then M can be decomposed as
M = [I  0; M_21 M_11^{-1}  I] [M_11  0; 0  S_11] [I  M_11^{-1} M_12; 0  I]
where S_11 = M_22 − M_21 M_11^{-1} M_12 is the Schur complement of M_11 in M.

If M (as partitioned above) is nonsingular, then
M^{-1} = [M_11^{-1} + M_11^{-1} M_12 S_11^{-1} M_21 M_11^{-1}   −M_11^{-1} M_12 S_11^{-1};
          −S_11^{-1} M_21 M_11^{-1}   S_11^{-1}]

If both M_11 and M_22 are invertible,
(M_11 − M_12 M_22^{-1} M_21)^{-1} = M_11^{-1} + M_11^{-1} M_12 (M_22 − M_21 M_11^{-1} M_12)^{-1} M_21 M_11^{-1}

The pseudo-inverse (also called Moore-Penrose inverse) of a matrix M is the matrix M^+ that satisfies the following conditions:
1. M M^+ M = M
2. M^+ M M^+ = M^+
3. (M M^+)^* = M M^+
4. (M^+ M)^* = M^+ M

Consider the linear matrix equation with unknown X,
A X B = C
The equation is solvable if and only if
A A^+ C B^+ B = C
All solutions can be characterized as
X = A^+ C B^+ + Y − A^+ A Y B B^+
for some Y.

In the case of no solutions, the best approximation is
X_appr = A^+ C B^+

Homework: Given M = [M_11  M_12; M_21  M_22] = [1 4 2; 4 0 1; 3 1 1]. Use the formula above to determine M^{-1}.
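A minimal numerical sketch of the block-inversion formula and the four Moore-Penrose conditions, assuming NumPy. Partitioning the homework matrix with M_11 as its leading 2×2 block is an assumption made for illustration.

```python
import numpy as np

# Homework-style partitioned matrix; M11 is the leading 2x2 block (an assumed partitioning).
M = np.array([[1.0, 4.0, 2.0],
              [4.0, 0.0, 1.0],
              [3.0, 1.0, 1.0]])
M11, M12 = M[:2, :2], M[:2, 2:]
M21, M22 = M[2:, :2], M[2:, 2:]

S11 = M22 - M21 @ np.linalg.inv(M11) @ M12          # Schur complement of M11 in M
M11i, S11i = np.linalg.inv(M11), np.linalg.inv(S11)
Minv = np.block([[M11i + M11i @ M12 @ S11i @ M21 @ M11i, -M11i @ M12 @ S11i],
                 [-S11i @ M21 @ M11i,                     S11i]])
print("block formula agrees with inv(M)?", np.allclose(Minv, np.linalg.inv(M)))

# Moore-Penrose pseudo-inverse and the four defining conditions
A = np.array([[1.0, 4.0, 2.0],
              [0.0, 1.0, 1.0]])
Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap @ A, A), np.allclose(Ap @ A @ Ap, Ap),
      np.allclose((A @ Ap).T, A @ Ap), np.allclose((Ap @ A).T, Ap @ A))
```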
Invariant Subspaces
Let M be a square matrix. A subspace S ⊆ C^n is invariant for the transformation M, or M-invariant, if Mx ∈ S for every x ∈ S.

S is invariant for M means that the image of S under M is contained in S: MS ⊆ S.

Examples of M-invariant subspaces: any eigenspace of M, ker M, and img M.

If S is a nontrivial subspace and is M-invariant, then there exist x ∈ S and λ such that Mx = λx.

Let λ_1, ..., λ_k be eigenvalues of M (not necessarily distinct), and let x_i be the corresponding (generalized) eigenvectors. Then S = span{x_1, ..., x_k} is an M-invariant subspace provided that all the lower-rank generalized eigenvectors are included.

More specifically, let λ_1 = λ_2 = ... = λ_l be eigenvalues of M, and let x_1, x_2, ..., x_l be the corresponding eigenvector and the generalized eigenvectors obtained through the following equations:
(M − λ_1 I) x_1 = 0
(M − λ_1 I) x_2 = x_1
  ...
(M − λ_1 I) x_l = x_{l−1}
Then a subspace S with x_j ∈ S for some j ≤ l is an M-invariant subspace if and only if all lower-rank eigenvectors and generalized eigenvectors of x_j are in S, i.e., x_i ∈ S for i ≤ j.

An M-invariant subspace S ⊆ C^n is called a stable invariant subspace if all the eigenvalues of M constrained to S have negative real parts.

Example:
M [x_1  x_2  x_3  x_4] = [x_1  x_2  x_3  x_4] [λ_1 1 0 0; 0 λ_1 0 0; 0 0 λ_3 0; 0 0 0 λ_4]
with Re λ_1 < 0 (the real part of λ_1 is less than zero), Re λ_3 < 0, and Re λ_4 > 0.
Which of the following are M-invariant subspaces? Which are stable invariant subspaces?
span{x_1}
span{x_2}
span{x_3}
span{x_4}
span{x_1, x_2}
span{x_1, x_2, x_3}
span{x_1, x_2, x_4}
span{x_2, x_3}
Vector Norm and Matrix Norm
Consider a linear space V over the field F. A real-valued function defined on all elements x from V is called a norm, written ‖·‖, if it satisfies the following axioms:
1. (Nonnegativity) ‖x‖ ≥ 0 for all x ∈ V, and ‖x‖ = 0 if and only if x = 0.
2. (Homogeneity) ‖αx‖ = |α| ‖x‖ for all x ∈ V and α ∈ F.
3. (Triangle inequality) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x and y in V.

A linear space together with a norm defined on it becomes a normed linear space.

Vector norm: Let x = [x_1; x_2; ...; x_n] be a vector in C^n. The following are norms on C^n:
1. Vector p-norm (for 1 ≤ p < ∞): ‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p}
2. Vector 1-norm: ‖x‖_1 = Σ_{i=1}^n |x_i|
3. Vector 2-norm or Euclidean norm: ‖x‖_2 = (x^* x)^{1/2} = ( Σ_{i=1}^n |x_i|^2 )^{1/2}
4. Vector ∞-norm: ‖x‖_∞ = max_{1≤i≤n} |x_i|

If ‖·‖_p is a vector norm on C^n and M ∈ C^{n×n}, the matrix norm induced by the vector p-norm ‖·‖_p is
‖M‖_p = sup_{x∈C^n, x≠0} ‖Mx‖_p / ‖x‖_p

Matrix norm: Let M = [m_ij] be a matrix in C^{m×n}.
1. Matrix 1-norm (column sum): ‖M‖_1 = max_j Σ_{i=1}^m |m_ij|
2. Matrix 2-norm: ‖M‖ = ‖M‖_2 = ( λ_max(M^* M) )^{1/2}
3. Matrix ∞-norm (row sum): ‖M‖_∞ = max_i Σ_{j=1}^n |m_ij|
4. Frobenius norm: ‖M‖_F = ( Σ_{i=1}^m Σ_{j=1}^n |m_ij|^2 )^{1/2} = ( trace(M^* M) )^{1/2}

Homework: Let x = [1; 3; 2] and M = [4 0 2; 4 0 1; 3 1 1]. Determine ‖x‖_1, ‖x‖_2, ‖x‖_∞, ‖M‖_1, ‖M‖_2, ‖M‖_∞, and ‖M‖_F.
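The homework quantities can be checked numerically. The sketch below assumes NumPy, whose np.linalg.norm implements exactly the vector and induced matrix norms defined above; the cross-check of the 2-norm uses the definition via λ_max(M^*M).

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0])            # homework vector, entries as printed
M = np.array([[4.0, 0.0, 2.0],
              [4.0, 0.0, 1.0],
              [3.0, 1.0, 1.0]])

print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))
print(np.linalg.norm(M, 1))              # max column sum
print(np.linalg.norm(M, 2))              # max singular value
print(np.linalg.norm(M, np.inf))         # max row sum
print(np.linalg.norm(M, 'fro'))          # Frobenius norm

# Induced 2-norm cross-check from the definition sqrt(lambda_max(M* M))
print(np.sqrt(np.max(np.linalg.eigvalsh(M.T @ M))))
```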
Singular Value Decomposition
The singular values of a matrix M are defined as
σ_i(M) = ( λ_i(M^* M) )^{1/2}
Here, the σ_i(M) are real and nonnegative.

The maximal singular value, σ_max, can be shown to be
σ_max(M) = ‖M‖ = max_{x≠0} ‖Mx‖_2 / ‖x‖_2

When M is invertible, the maximal singular value of M^{-1} is related to the minimal singular value of M, σ_min(M), by
σ_max(M^{-1}) = 1 / σ_min(M)

The rank of M is the same as the number of nonzero singular values of M.

The matrix M and its complex conjugate transpose M^* have the same singular values, i.e., σ_i(M) = σ_i(M^*).

Let M ∈ C^{m×n}. There exist unitary matrices U = [u_1 u_2 ... u_m] ∈ C^{m×m} and V = [v_1 v_2 ... v_n] ∈ C^{n×n} such that
M = U Σ V^*
where
Σ = [Σ_1  0; 0  0]  with  Σ_1 = diag(σ_1, σ_2, ..., σ_r)
and σ_1 ≥ σ_2 ≥ ... ≥ σ_r ≥ 0, where r = min{m, n}.

Singular value decomposition: the matrix admits the dyadic expansion
M = Σ_{i=1}^r σ_i u_i v_i^* = [u_1 u_2 ... u_r] diag(σ_1, σ_2, ..., σ_r) [v_1 v_2 ... v_r]^*

ker M = span{v_{r+1}, ..., v_n} and (ker M)^⊥ = span{v_1, ..., v_r}.
img M = span{u_1, ..., u_r} and (img M)^⊥ = span{u_{r+1}, ..., u_m}.

The Frobenius norm of M equals ‖M‖_F = ( σ_1^2 + σ_2^2 + ... + σ_r^2 )^{1/2}.

The condition number of a matrix M is defined as κ(M) = σ_max(M) σ_max(M^{-1}), provided that M^{-1} exists.
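A short NumPy sketch illustrating the SVD facts above, applied to the 2×3 matrix used in the earlier homework (an arbitrary choice for illustration).

```python
import numpy as np

M = np.array([[1.0, 4.0, 2.0],
              [0.0, 1.0, 1.0]])           # the 2x3 matrix from the earlier homework

U, s, Vh = np.linalg.svd(M)               # M = U Sigma V*
r = int(np.sum(s > 1e-12))                 # numerical rank = number of nonzero sigma_i
print("singular values:", s, "rank:", r)

# Dyadic expansion M = sum_i sigma_i u_i v_i*
M_rec = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(r))
print("dyadic expansion reproduces M?", np.allclose(M_rec, M))

# ker M = span{v_{r+1},...,v_n}; the Frobenius norm equals sqrt(sum of sigma_i^2)
null_basis = Vh[r:, :].T
print("M times null-space basis ~ 0?", np.allclose(M @ null_basis, 0))
print("||M||_F check:", np.isclose(np.linalg.norm(M, 'fro'), np.sqrt(np.sum(s**2))))
```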
Semidenite Matrices
A square Hermitian matrix M = M^* is positive definite (semidefinite), denoted by M > 0 (M ≥ 0), if x^* M x > 0 (≥ 0) for all x ≠ 0.

All eigenvalues of a positive definite matrix are positive.

Let M = [M_11  M_12; M_21  M_22] be a Hermitian matrix. Then M > 0 if and only if M_11 > 0 and S_11 = M_22 − M_21 M_11^{-1} M_12 > 0.

Let M ≥ 0. There exist a unitary matrix U and a nonnegative diagonal Λ such that M = U Λ U^*.

The square root of a positive semidefinite matrix M, denoted by M^{1/2}, satisfies M^{1/2} M^{1/2} = M and M^{1/2} ≥ 0. It can be shown that M^{1/2} = U Λ^{1/2} U^*.

For a positive definite matrix M, its inverse M^{-1} exists and is positive definite.

For two positive definite matrices M_1 and M_2, we have αM_1 + βM_2 > 0 when α > 0 and β ≥ 0.

Homework: Let M = [5 3; 3 2] and assume the 2×1 vector x has a 2-norm that is less than or equal to 1. In the 2-dimensional plane, draw the possible values of Mx.

Homework: Let M = [5 3 1; 3 2 2; 1 2 x]. Find the minimal value of x such that M is positive definite.
Matrix Calculus
Let x = [x_i] be an n-vector and f(x) be a scalar function on x. The first-derivative gradient vector is defined as
∂f(x)/∂x = [g_i] = [∂f(x)/∂x_1; ∂f(x)/∂x_2; ...; ∂f(x)/∂x_n]
and the second-derivative Hessian matrix is
∂²f(x)/∂x² = [H_ij],  H_ij = ∂²f(x)/(∂x_i ∂x_j),  i, j = 1, ..., n

Let X = [X_ij] be an m×n matrix and f(X) be a scalar function on X. The derivative of f(X) with respect to X is defined as
∂f(X)/∂X = [ ∂f(X)/∂X_ij ]

Formulas for derivatives (assuming that x and y are vectors and A, B, and X are real matrices):
∂(y^T A x)/∂x = A^T y
∂(x^T A x)/∂x = (A + A^T) x
∂(trace X)/∂X = I
∂(trace AXB)/∂X = A^T B^T
∂(trace A X^T B)/∂X = B A
∂(trace X^T A X)/∂X = (A + A^T) X
∂(trace AXBX)/∂X = A^T X^T B^T + B^T X^T A^T
∂(log det X)/∂X = (X^T)^{-1}
∂(det X)/∂X = (det X)(X^T)^{-1}
Functional Space and Signal
Function Spaces
Let V be a vector space over C and ‖·‖ be a norm defined on V. Then V is a normed space.

The space L_p(−∞, ∞) (for 1 ≤ p < ∞) consists of all Lebesgue measurable functions w(t) defined on the interval (−∞, ∞) such that
‖w‖_p := ( ∫_{−∞}^{∞} |w(t)|^p dt )^{1/p} < ∞

The space L_∞(−∞, ∞) consists of all Lebesgue measurable functions w(t) with a bounded norm
‖w‖_∞ := ess sup_{t∈R} |w(t)|

Let V be a vector space over C. An inner product on V is a complex-valued function ⟨·, ·⟩ : V × V → C such that for any u, v, w ∈ V and α, β ∈ C:
1. ⟨u, αv + βw⟩ = α⟨u, v⟩ + β⟨u, w⟩
2. ⟨u, v⟩ = ⟨v, u⟩^* (complex conjugate)
3. ⟨u, u⟩ > 0 if u ≠ 0

A vector space V with an inner product is an inner product space.

A Hilbert space is a complete inner product space with the norm induced by its inner product.

The space L_2 = L_2(−∞, ∞) is a Hilbert space of matrix-valued functions on R, with inner product
⟨u, v⟩ := ∫_{−∞}^{∞} trace( u^*(t) v(t) ) dt

The space L_{2+} = L_2[0, ∞) is the subspace of L_2(−∞, ∞) with functions zero for t < 0.
The space L_{2−} = L_2(−∞, 0] is the subspace of L_2(−∞, ∞) with functions zero for t > 0.

Let H be a Hilbert space and U ⊆ H a subset. Then the orthogonal complement of U, denoted by U^⊥, is defined as
U^⊥ = { u ∈ H : ⟨u, v⟩ = 0 for all v ∈ U }
The orthogonal complement is a Hilbert space.

Let S = L_{2+} ⊆ L_2; then S^⊥ = L_{2−}.

Let U and W be subspaces of a vector space V. V is said to be the direct sum of U and W, written V = U ⊕ W, if U ∩ W = {0} and every element v ∈ V can be expressed as v = u + w with u ∈ U and w ∈ W. If V is an inner product space and U and W are orthogonal, then V is said to be the orthogonal direct sum of U and W.

The space L_2 is the orthogonal direct sum of L_{2+} and L_{2−}.
Power and Spectral Signals
Let w(t) be a function of time. The autocorrelation matrix is
R_ww(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} w(t+τ) w^*(t) dt
if the limit exists and is finite for all τ.

The Fourier transform of the autocorrelation matrix is the spectral density, denoted S_ww(jω):
S_ww(jω) := ∫_{−∞}^{∞} R_ww(τ) e^{−jωτ} dτ

The autocorrelation can be obtained from the spectral density by performing an inverse Fourier transform:
R_ww(τ) = (1/2π) ∫_{−∞}^{∞} S_ww(jω) e^{jωτ} dω

A signal is a power signal if the autocorrelation matrix R_ww(τ) exists for all τ and the power spectral density function S_ww(jω) exists.

The power of w(t) is defined as
‖w‖_rms := ( lim_{T→∞} (1/2T) ∫_{−T}^{T} ‖w(t)‖² dt )^{1/2} = ( trace[R_ww(0)] )^{1/2}

In terms of the spectral density function,
‖w‖²_rms = (1/2π) ∫_{−∞}^{∞} trace[S_ww(jω)] dω

Note that ‖·‖_rms is not a norm, since a finite-duration signal has a zero rms value.
Signal Quantization
For a signal w(t) mapping from [0, ∞) to R (or R^m), its measures can be defined as follows.

1. ∞-norm (peak value)
‖w‖_∞ = sup_{t≥0} |w(t)|
or
‖w‖_∞ = sup_{t≥0} max_{1≤i≤m} |w_i(t)| = max_{1≤i≤m} ‖w_i‖_∞

2. 2-norm (total energy)
‖w‖_2 = ( ∫_0^∞ w²(t) dt )^{1/2}
or
‖w‖_2 = ( ∫_0^∞ Σ_{i=1}^m w_i²(t) dt )^{1/2} = ( Σ_{i=1}^m ‖w_i‖_2² )^{1/2} = ( ∫_0^∞ w^T(t) w(t) dt )^{1/2}

3. 1-norm (resource consumption)
‖w‖_1 = ∫_0^∞ |w(t)| dt
or
‖w‖_1 = ∫_0^∞ Σ_{i=1}^m |w_i(t)| dt = Σ_{i=1}^m ‖w_i‖_1

4. p-norm
‖w‖_p = ( ∫_0^∞ |w(t)|^p dt )^{1/p}

5. rms (root mean square) value (average power)
‖w‖_rms = ( lim_{T→∞} (1/T) ∫_0^T w²(t) dt )^{1/2}
or
‖w‖_rms = ( lim_{T→∞} (1/T) ∫_0^T w^T(t) w(t) dt )^{1/2}

If ‖w‖_2 < ∞, then ‖w‖_rms = 0.
If w is a power signal and ‖w‖_∞ < ∞, then ‖w‖_rms ≤ ‖w‖_∞.
If w is in L_1 and in L_∞, then ‖w‖_2² ≤ ‖w‖_1 ‖w‖_∞.
Review of Linear System Theory
Linear Systems
A finite-dimensional linear time-invariant dynamical system can be described by the following equations:
ẋ = Ax + Bw,  x(0) = x_o
z = Cx + Dw
where x(t) ∈ R^n is the state, w(t) ∈ R^m is the input, and z(t) ∈ R^p is the output.

The transfer function matrix from w to z is defined through
Z(s) = M(s) W(s)
where Z(s) and W(s) are the Laplace transforms of z(t) and w(t), respectively. It is known that
M(s) = D + C(sI − A)^{-1} B
For simplicity, the shorthand notation
M(s) ≐ [A  B; C  D]
is used to denote this state-space realization.

The state response for the given initial condition x_o and input w(t) is
x(t) = e^{At} x_o + ∫_0^t e^{A(t−τ)} B w(τ) dτ
and the output response is
z(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B w(τ) dτ + D w(t)

Assume that the initial condition is zero, D = 0, and the input is an impulse; then the output admits
m(t) = impulse response of M(s) = C e^{At} B

With respect to a power-signal input, the power spectral density of the output is related to the power spectral density of the input by
S_zz(jω) = M(jω) S_ww(jω) M^*(jω)

The system is stable (A is Hurwitz) if all the eigenvalues of the matrix A are in the open left half plane.
Controllability and Observability
The pair (A, B) is controllable if and only if
1. For any initial state x_o, t > 0, and final state x_f, there exists a piecewise continuous input w(·) such that x(t) = x_f.
2. The controllability matrix [B  AB  A²B  ...  A^{n−1}B] has full row rank, i.e., ⟨A | img B⟩ := Σ_{i=1}^n img(A^{i−1}B) = R^n.
3. The matrix
W_c(t) = ∫_0^t e^{Aτ} B B^T e^{A^T τ} dτ
is positive definite for any t > 0.
4. PBH test: the matrix [A − λI  B] has full row rank for all λ in C.
5. Let λ and x be any eigenvalue and any corresponding left eigenvector of A, i.e., x^* A = λ x^*; then x^* B ≠ 0.
6. The eigenvalues of A + BF can be freely assigned by a suitable choice of F.

The pair (A, B) is stabilizable if and only if
1. The matrix [A − λI  B] has full row rank for all λ in C with Re λ ≥ 0.
2. For all λ and x such that x^* A = λ x^* and Re λ ≥ 0, we have x^* B ≠ 0.
3. There exists a matrix F such that A + BF is Hurwitz.

The pair (C, A) is observable if and only if
1. For any t > 0, the initial state x_o can be determined from the time history of the input w(t) and the output z(t) in the interval [0, t].
2. The observability matrix [C; CA; CA²; ...; CA^{n−1}] has full column rank, i.e., ∩_{i=1}^n ker(C A^{i−1}) = {0}.
3. The matrix
W_o(t) = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ
is positive definite for any t > 0.
4. The matrix [A − λI; C] has full column rank for all λ in C.
5. The eigenvalues of A + HC can be freely assigned by a suitable choice of H.
6. For all λ and x ≠ 0 such that Ax = λx, we have Cx ≠ 0.
7. The pair (A^T, C^T) is controllable.

The pair (C, A) is detectable if and only if
1. The matrix [A − λI; C] has full column rank for all λ with Re λ ≥ 0.
2. There exists a matrix H such that A + HC is Hurwitz.
3. For all λ and x such that Ax = λx and Re λ ≥ 0, we have Cx ≠ 0.

Kalman canonical decomposition: through a nonsingular coordinate transformation T, the realization admits
[T A T^{-1}  TB; C T^{-1}  D] =
[A_co      0        A_13      0        B_co;
 A_21      A_co̅     A_23      A_24     B_co̅;
 0         0        A_c̅o      0        0;
 0         0        A_43      A_c̅o̅     0;
 C_co      0        C_c̅o      0        D]
In this case, the transfer function
M(s) = D + C(sI − A)^{-1} B = D + C_co (sI − A_co)^{-1} B_co

The state-space realization (A, B, C, D) is minimal if and only if (A, B) is controllable and (C, A) is observable.
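A minimal sketch of the rank-based controllability tests, assuming NumPy; the (A, B) pair below is a hypothetical example and is not taken from the notes.

```python
import numpy as np

# Hypothetical (A, B) pair used only to illustrate the rank tests.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Controllability matrix [B  AB  ...  A^{n-1}B] must have full row rank n
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print("rank of controllability matrix:", np.linalg.matrix_rank(ctrb))

# PBH test: [A - lambda*I, B] has full row rank at every eigenvalue of A
for lam in np.linalg.eigvals(A):
    pbh = np.hstack([A - lam * np.eye(n), B])
    print(lam, np.linalg.matrix_rank(pbh) == n)
```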
State Space Algebra
Let
M_1(s):  ẋ_1 = A_1 x_1 + B_1 w_1,  z_1 = C_1 x_1 + D_1 w_1
M_2(s):  ẋ_2 = A_2 x_2 + B_2 w_2,  z_2 = C_2 x_2 + D_2 w_2
In terms of the compact representation, M_1(s) ≐ [A_1  B_1; C_1  D_1] and M_2(s) ≐ [A_2  B_2; C_2  D_2].

Parallel connection of M_1(s) and M_2(s).
(Figure: the common input w drives both M_1(s) and M_2(s); their outputs z_1 and z_2 are summed to give z.)
The new state is x = [x_1; x_2], input w = w_1 = w_2, and output z = z_1 + z_2. Thus
ẋ = [A_1  0; 0  A_2] x + [B_1; B_2] w  and  z = [C_1  C_2] x + (D_1 + D_2) w
or
M_1(s) + M_2(s) ≐ [A_1  0  B_1; 0  A_2  B_2; C_1  C_2  D_1 + D_2]

Series connection of two systems (w_2 → M_2(s) → z_2 = w_1 → M_1(s) → z_1).
The connection gives
M_1(s) M_2(s) ≐ [A_1  B_1; C_1  D_1][A_2  B_2; C_2  D_2]
= [A_1  B_1 C_2  B_1 D_2; 0  A_2  B_2; C_1  D_1 C_2  D_1 D_2]
= [A_2  0  B_2; B_1 C_2  A_1  B_1 D_2; D_1 C_2  C_1  D_1 D_2]

Inverse system:
M_1^{-1}(s) ≐ [A_1 − B_1 D_1^{-1} C_1   B_1 D_1^{-1};  −D_1^{-1} C_1   D_1^{-1}]
provided that D_1 is invertible.
It can be verified that M_1(s) M_1^{-1}(s) = I:
w_1 = D_1^{-1} z_1 − D_1^{-1} C_1 x_1
ẋ_1 = A_1 x_1 + B_1 (D_1^{-1} z_1 − D_1^{-1} C_1 x_1) = (A_1 − B_1 D_1^{-1} C_1) x_1 + B_1 D_1^{-1} z_1

Transpose or dual system: M_1^T(s) ≐ [A_1^T  C_1^T; B_1^T  D_1^T].

Conjugate system: M_1~(s) = M_1^T(−s) ≐ [−A_1^T  −C_1^T; B_1^T  D_1^T]. Thus M_1~(jω) = M_1^*(jω).

Feedback connection of M_1(s) and M_2(s).
(Figure: w and the feedback signal z_2 are summed to form w_1, the input of M_1(s); the output z = z_1 drives M_2(s), whose output is z_2.)
The connection gives z = z_1 = w_2 and w_1 = w + z_2. Thus,
z = C_1 x_1 + D_1 (w + z_2)
z_2 = C_2 x_2 + D_2 z
z = (I − D_1 D_2)^{-1} (C_1 x_1 + D_1 C_2 x_2 + D_1 w)
z_2 = (I − D_2 D_1)^{-1} (D_2 C_1 x_1 + C_2 x_2 + D_2 D_1 w)
ẋ_1 = A_1 x_1 + B_1 w + B_1 (I − D_2 D_1)^{-1} (D_2 C_1 x_1 + C_2 x_2 + D_2 D_1 w)
ẋ_2 = A_2 x_2 + B_2 (I − D_1 D_2)^{-1} (C_1 x_1 + D_1 C_2 x_2 + D_1 w)

(I − M_1(s) M_2(s))^{-1} M_1(s) ≐
[A_1 + B_1(I−D_2D_1)^{-1}D_2C_1   B_1(I−D_2D_1)^{-1}C_2              B_1 + B_1(I−D_2D_1)^{-1}D_2D_1;
 B_2(I−D_1D_2)^{-1}C_1            A_2 + B_2(I−D_1D_2)^{-1}D_1C_2     B_2(I−D_1D_2)^{-1}D_1;
 (I−D_1D_2)^{-1}C_1               (I−D_1D_2)^{-1}D_1C_2              (I−D_1D_2)^{-1}D_1]
System Poles and Zeros
Let M(s) = D + C(sI − A)^{-1}B ≐ [A  B; C  D]. The eigenvalues of A are the poles of M(s).

The system matrix of M(s) is defined as
Q(s) = [A − sI  B; C  D]
which is a polynomial matrix.

The normal rank of Q(s) is the maximal possible rank of Q(s) for at least one s ∈ C.

A complex number λ_o ∈ C is called an invariant zero of the system realization if it satisfies
rank [A − λ_o I  B; C  D] < normal rank [A − sI  B; C  D]

The invariant zeros are not changed by constant state feedback, constant output injection, or similarity transformation.

Suppose that [A − sI  B; C  D] has full column normal rank. Then λ_o ∈ C is an invariant zero if and only if there exist 0 ≠ x ∈ C^n and w ∈ C^m such that
[A − λ_o I  B; C  D][x; w] = 0
Moreover, if w = 0, then λ_o is also a nonobservable mode.

When the system is square, i.e., has an equal number of inputs and outputs, the invariant zeros can be computed by solving a generalized eigenvalue problem
[A  B; C  D][x; w] = λ [I  0; 0  0][x; w]
for some generalized eigenvalue λ and generalized eigenvector [x; w].

Suppose that [A − sI  B; C  D] has full row normal rank. Then λ_o ∈ C is an invariant zero if and only if there exist 0 ≠ y ∈ C^n and v ∈ C^p such that
[y^*  v^*][A − λ_o I  B; C  D] = 0
Moreover, if v = 0, then λ_o is also a noncontrollable mode.

The system M(s) has full column normal rank if and only if [A − sI  B; C  D] has full column normal rank.

Note that
[A − sI  B; C  D] = [I  0; C(A − sI)^{-1}  I][A − sI  B; 0  M(s)]
and
normal rank [A − sI  B; C  D] = n + normal rank M(s)

Let M(s) be a p×m transfer matrix and let (A, B, C, D) be a minimal realization. If λ_o is a zero of M(s) that is distinct from the poles, then there exist an input and an initial state such that the output of the system z(t) is zero for all t.

Let x_o and w_o be such that
[A − λ_o I  B; C  D][x_o; w_o] = 0
i.e.,
A x_o + B w_o = λ_o x_o  and  C x_o + D w_o = 0
Consider the input w(t) = w_o e^{λ_o t} and initial state x(0) = x_o. The output is
z(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B w(τ) dτ + D w(t)
     = C e^{At} x_o + C e^{At} ∫_0^t e^{(λ_o I − A)τ} (λ_o I − A) x_o dτ + D w_o e^{λ_o t}
     = C e^{At} x_o + C e^{At} [ e^{(λ_o I − A)τ} ]_0^t x_o + D w_o e^{λ_o t}
     = C e^{λ_o t} x_o + D w_o e^{λ_o t} = (C x_o + D w_o) e^{λ_o t} = 0
Measure of System and Fundamental Equations
H_2 Space

Let S be an open set in C and let f(s) be a complex-valued function defined on S. Then f(s) is said to be analytic at a point z_o in S if it is differentiable at z_o and also at each point in some neighborhood of z_o.

If f(s) is analytic at z_o, then f has continuous derivatives of all orders at z_o.

A function f(s) is said to be analytic in S if it has a derivative, or is analytic, at each point of S.

A matrix-valued function is analytic in S if every element of the matrix is analytic in S.

All real rational stable transfer function matrices are analytic in the right-half plane.

Maximum modulus theorem: if f(s) is defined and continuous on a closed bounded set S and analytic on the interior of S, then |f(s)| cannot attain its maximum in the interior of S unless f(s) is a constant.

|f(s)| can only achieve its maximum on the boundary of S, i.e.,
max_{s∈S} |f(s)| = max_{s∈∂S} |f(s)|
where ∂S denotes the boundary of S.

The space L_2 = L_2(jR) is a Hilbert space of matrix-valued functions on jR and consists of all complex matrix functions M such that ∫_{−∞}^{∞} trace[M^*(jω)M(jω)] dω < ∞. The inner product for this Hilbert space is defined as
⟨M, N⟩ := (1/2π) ∫_{−∞}^{∞} trace[M^*(jω)N(jω)] dω
The induced norm is given by
‖M‖_2 := ⟨M, M⟩^{1/2}

All real rational strictly proper transfer function matrices with no poles on the imaginary axis form a subspace of L_2(jR), which is denoted by RL_2.

The space H_2 is a subspace of L_2(jR) with matrix functions M(s) analytic in Re s > 0. The corresponding norm is defined as
‖M‖_2² := sup_{σ>0} { (1/2π) ∫_{−∞}^{∞} trace[M^*(σ + jω)M(σ + jω)] dω }
It can be shown that
‖M‖_2² = (1/2π) ∫_{−∞}^{∞} trace[M^*(jω)M(jω)] dω

The real rational subspace of H_2, which consists of all strictly proper and real rational stable transfer function matrices, is denoted by RH_2.

The space H_2^⊥ is the orthogonal complement of H_2 in L_2.

If M is a strictly proper, stable, real rational transfer function matrix, then M ∈ H_2 and M~ ∈ H_2^⊥.

Parseval's relation: there is an isometric isomorphism between the L_2 spaces in the time domain and the L_2 spaces in the frequency domain:
L_2(−∞, ∞) ≅ L_2(jR)
L_2[0, ∞) ≅ H_2
L_2(−∞, 0] ≅ H_2^⊥

If m(t) ∈ L_2(−∞, ∞) and its bilateral Laplace transform is M(s) ∈ L_2(jR), then
‖M‖_2 = ‖m‖_2

Define an orthogonal projection P_+ : L_2(−∞, ∞) → L_2[0, ∞) such that for any function m(t) ∈ L_2(−∞, ∞),
P_+ m(t) = m(t) for t ≥ 0, and 0 otherwise.
On the other hand, the operator P_− from L_2(−∞, ∞) to L_2(−∞, 0] is defined as
P_− m(t) = 0 for t > 0, and m(t) for t ≤ 0.

(Diagram: the time-domain spaces L_2(−∞, 0], L_2(−∞, ∞), and L_2[0, ∞) correspond under the Laplace transform and its inverse to H_2^⊥, L_2(jR), and H_2, respectively; the projections P_+ and P_− act consistently in both domains.)
H_∞ Space

The space L_∞(jR) is a Banach space of matrix-valued functions that are essentially bounded on jR, with norm
‖M‖_∞ := ess sup_{ω∈R} σ_max[M(jω)]

The rational subspace of L_∞, denoted by RL_∞, consists of all proper and real rational transfer function matrices with no poles on the imaginary axis.

The space H_∞ is a subspace of L_∞ with functions that are analytic and bounded in the open right-half plane. The H_∞ norm is defined as
‖M‖_∞ := sup_{Re s>0} σ_max[M(s)] = sup_{ω∈R} σ_max[M(jω)]

The real rational subspace of H_∞ is denoted by RH_∞, which consists of all proper and real rational stable transfer function matrices.

The space H_∞^− is a subspace of L_∞ with functions that are analytic and bounded in the open left-half plane. The H_∞^− norm is defined as
‖M‖_∞ := sup_{Re s<0} σ_max[M(s)] = sup_{ω∈R} σ_max[M(jω)]

The real rational subspace of H_∞^− is denoted by RH_∞^−, which consists of all proper and real rational antistable transfer function matrices.

Homework: Consider the transfer function M(s) = 2/(s² + s + 2).
1. Plot |M(jω)| as a function of ω and determine ‖M‖_∞.
2. Determine ‖M‖_2.
Lyapunov Equation
Given A ∈ R^{n×n} and Q = Q^T ∈ R^{n×n}, the equation in X ∈ R^{n×n}
A^T X + X A + Q = 0
is called a Lyapunov equation.

Define the map
Φ : R^{n×n} → R^{n×n},  Φ(X) = A^T X + X A
Then the Lyapunov equation has a solution X if and only if Q ∈ img Φ.
The solution is unique if and only if Φ is injective.

The Lyapunov equation has a unique solution if and only if A has no two eigenvalues that sum to zero.

Assume that A is stable; then
1. X = ∫_0^∞ e^{A^T t} Q e^{At} dt.
2. X > 0 if Q > 0, and X ≥ 0 if Q ≥ 0.
3. If Q ≥ 0, then (Q, A) is observable if and only if X > 0.

Suppose A, Q, and X satisfy the Lyapunov equation; then
1. Re λ_i(A) ≤ 0 if X > 0 and Q ≥ 0.
2. A is stable if X > 0 and Q > 0.
3. A is stable if (Q, A) is detectable, Q ≥ 0, and X ≥ 0.
Proof (of 3):
(a) Let there be x ≠ 0 such that Ax = λx with Re λ ≥ 0.
(b) Form x^*(A^T X + X A + Q)x = 0, which can be reduced to 2 Re λ · x^* X x + x^* Q x = 0.
(c) Thus x^* Q x = 0.
(d) This leads to Qx = 0.
(e) This implies [A − λI; Q] x = 0, which contradicts the detectability assumption.

Homework: Let A = [2 1 1; 0 2 5; 1 4 4] and Q = [1 0 0; 0 1 0; 0 0 1]. Determine X such that A^T X + X A + Q = 0.
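A small SciPy sketch for solving a Lyapunov equation numerically. The matrix A below is a hypothetical stable matrix (not the homework matrix), and the mapping of A^T X + XA + Q = 0 onto scipy.linalg.solve_continuous_lyapunov, which solves a X + X a^H = q, is verified through the residual.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable A (an assumption, not the homework matrix); Q = I.
A = np.array([[-2.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [-1.0, 0.0, -3.0]])
Q = np.eye(3)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so A^T X + X A + Q = 0 corresponds to a = A^T and q = -Q.
X = solve_continuous_lyapunov(A.T, -Q)
print("residual:", np.max(np.abs(A.T @ X + X @ A + Q)))   # should be ~0
print("X > 0?", np.all(np.linalg.eigvalsh(X) > 0))        # expected, since A stable and Q > 0
```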
Controllability and Observability Gramians
(Figure: system M(s) with input w, output z, and initial state x_o.)

Consider the stable transfer function M(s) with z = M(s)w. Assume the following minimal realization:
M(s) = C(sI − A)^{-1} B
That is,
ẋ = Ax + Bw
z = Cx

Suppose that there is no input from 0 to ∞; then the output energy generated by the initial state can be computed as follows:
( ∫_0^∞ z^T(t) z(t) dt )^{1/2} = ( x_o^T [ ∫_0^∞ e^{A^T t} C^T C e^{At} dt ] x_o )^{1/2} = ( x_o^T W_o x_o )^{1/2}
where W_o = ∫_0^∞ e^{A^T t} C^T C e^{At} dt is the observability gramian of the system M(s).

Indeed, W_o is the positive definite solution of the Lyapunov equation
A^T W_o + W_o A + C^T C = 0
When A is stable, the system is observable if and only if W_o > 0.

On the other hand, the input from −∞ to 0 that drives the state to x_o satisfies
x_o = ∫_{−∞}^0 e^{−At} B w(t) dt
If we minimize the input energy subject to the reachability condition, the optimal input can be found to be
w(t) = B^T e^{−A^T t} W_c^{-1} x_o
where
W_c = ∫_{−∞}^0 e^{−At} B B^T e^{−A^T t} dt = ∫_0^∞ e^{At} B B^T e^{A^T t} dt

The matrix W_c is the controllability gramian of the system M(s). It satisfies the Lyapunov equation
A W_c + W_c A^T + B B^T = 0
When A is stable, the system is controllable if and only if W_c is positive definite.

Note that the input energy needed is
( ∫_{−∞}^0 w^T(t) w(t) dt )^{1/2} = ( x_o^T W_c^{-1} x_o )^{1/2}

In summary, the observability gramian W_o determines the total energy in the system output starting from a given initial state (with no input), and the controllability gramian W_c determines which points in the state space can be reached using an input with total energy one.

Both W_o and W_c depend on the realization. Through a similarity transformation, the realization (TAT^{-1}, TB, CT^{-1}) gives the gramians as T W_c T^T and T^{-T} W_o T^{-1}, respectively.

The eigenvalues of W_c W_o are invariant under similarity transformation and are thus a system property.
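The two gramian Lyapunov equations can be solved the same way as above. The (A, B, C) triple below is a hypothetical stable, minimal realization used only for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable SISO realization (A, B, C), for illustration only.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
print("Wc residual:", np.max(np.abs(A @ Wc + Wc @ A.T + B @ B.T)))
print("Wo residual:", np.max(np.abs(A.T @ Wo + Wo @ A + C.T @ C)))
print("controllable and observable?",
      np.all(np.linalg.eigvalsh(Wc) > 0), np.all(np.linalg.eigvalsh(Wo) > 0))
```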
Balanced Realization and Hankel Singular Values
Let W_c and W_o be the controllability gramian and observability gramian of the system (A, B, C), respectively, i.e.,
A W_c + W_c A^T + B B^T = 0
and
A^T W_o + W_o A + C^T C = 0

The Hankel singular values are defined as the square roots of the eigenvalues of W_c W_o, which are independent of the particular realization.

Let T be the matrix such that
T W_c W_o T^{-1} = Σ² = diag(σ_1², σ_2², ..., σ_n²)
The Hankel singular values of the system are σ_1, σ_2, ..., σ_n (in descending order).

The matrix T diagonalizes the controllability and observability gramians. Indeed, the new realization (TAT^{-1}, TB, CT^{-1}) admits
T W_c T^T = T^{-T} W_o T^{-1} = diag(σ_1, σ_2, ..., σ_n)

A realization (A, B, C) is balanced if its controllability and observability gramians are the same.

The maximal gain from the past input to the future output is defined as the Hankel norm.
(Figure: a state trajectory x(t) that reaches x_o at t = 0; the input acts on (−∞, 0) and the output is observed on (0, ∞).)
‖M(s)‖_H = sup_{u∈L_2(−∞,0)} ‖z‖_{L_2(0,∞)} / ‖u‖_{L_2(−∞,0)}
         = sup_{x_o} ( x_o^T W_o x_o )^{1/2} / ( x_o^T W_c^{-1} x_o )^{1/2}
         = λ_max^{1/2}(W_c W_o)
         = σ_1

The Hankel norm is thus the maximal Hankel singular value.
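A sketch of the Hankel singular values and one standard square-root balancing construction, assuming SciPy and the same hypothetical realization as above. The Cholesky/SVD-based transformation used here is an assumption; it is consistent with, but not identical in form to, the diagonalization of W_cW_o described above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Same hypothetical stable realization as in the gramian sketch.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc*Wo
hsv = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])
print("Hankel singular values:", hsv, " Hankel norm:", hsv[0])

# Square-root balancing: T Wc T^T = T^{-T} Wo T^{-1} = diag(sigma_i)
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)
T = np.diag(s**-0.5) @ U.T @ Lo.T
Tinv = Lc @ Vt.T @ np.diag(s**-0.5)
print("balanced gramians diagonal?",
      np.allclose(T @ Wc @ T.T, np.diag(s)),
      np.allclose(Tinv.T @ Wo @ Tinv, np.diag(s)))
```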
Quantication of Systems
The norm of a system is typically defined as the induced norm between its output and input.
(Figure: system M(s) mapping input w to output z.)

Two classes of approaches:
1. Size of the output due to a particular signal or a class of signals
2. Relative size of the output and the input

System 2-norm (SISO case).
Let m(t) be the impulse response of the system M(s). Its 2-norm is defined as
‖M‖_2 = ( ∫_{−∞}^{∞} |m(t)|² dt )^{1/2}
The system 2-norm can be interpreted as the 2-norm of the output due to an impulse input.
By Parseval's theorem,
‖M‖_2 = ( (1/2π) ∫_{−∞}^{∞} |M(jω)|² dω )^{1/2}
If the input signal w and output signal z are stochastic in nature, let S_zz(ω) and S_ww(ω) be the spectral densities of the output and input, respectively. Then
S_zz(ω) = S_ww(ω) |M(jω)|²
Note that
‖z‖_rms = ( (1/2π) ∫_{−∞}^{∞} S_zz(ω) dω )^{1/2} = ( (1/2π) ∫_{−∞}^{∞} |M(jω)|² S_ww(ω) dω )^{1/2}
The system 2-norm can then be interpreted as the rms value of the output subject to unity-spectral-density white noise.

System 2-norm (MIMO case). The system 2-norm is defined as
‖M‖_2 = ( (1/2π) ∫_{−∞}^{∞} trace[M(jω)M^*(jω)] dω )^{1/2}
      = ( trace ∫_0^∞ m(t) m^T(t) dt )^{1/2}
      = ( (1/2π) ∫_{−∞}^{∞} Σ_i σ_i(M(jω))² dω )^{1/2}
where m(t) is the impulse response.
Let e_i be the i-th standard basis vector of R^m. Apply the impulsive input δ(t)e_i to the system to obtain the impulse response z_i(t) = m(t)e_i. Then
‖M‖_2² = Σ_{i=1}^m ‖z_i‖_2²
The system 2-norm can be interpreted as
1. The rms response due to white noise: ‖M‖_2 = ‖z‖_rms with w = white noise
2. The 2-norm of the response due to an impulse input: ‖M‖_2 = ‖z‖_2 with w = impulse

System ∞-norm: the system ∞-norm can be regarded as the peak value in the Bode magnitude (singular value) plot.
‖M‖_∞ = sup_{Re s>0} σ_max(M(s))
      = sup_ω σ_max(M(jω))   (when M(s) is stable)
      = sup_{‖w‖_rms≠0} ‖Mw‖_rms / ‖w‖_rms
      = sup_{‖w‖_2≠0} ‖Mw‖_2 / ‖w‖_2
      = sup_{‖w‖_2=1} ‖Mw‖_2

System 1-norm: peak-to-peak ratio
‖M‖_1 = sup_w ‖Mw‖_∞ / ‖w‖_∞
State-Space Computation of 2-Norm
Consider the state-space realization of a stable transfer function M(s):
ẋ = Ax + Bw
z = Cx + Dw
In 2-norm computation, A is assumed stable and D is zero.

Recall that the impulse response of M(s), m(t), is
m(t) = C e^{At} B

The 2-norm, according to the definition, satisfies
‖M‖_2² = trace ∫_0^∞ m^T(t) m(t) dt
       = trace [ B^T ( ∫_0^∞ e^{A^T t} C^T C e^{At} dt ) B ]
       = trace B^T W_o B

Thus, the 2-norm of M(s) can be computed by solving a Lyapunov equation for W_o,
A^T W_o + W_o A + C^T C = 0
and taking the square root of the trace of B^T W_o B:
‖M‖_2 = ( trace B^T W_o B )^{1/2}

Similarly, let W_c be the controllability gramian,
A W_c + W_c A^T + B B^T = 0
then
‖M‖_2 = ( trace C W_c C^T )^{1/2}
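A minimal sketch of the gramian-based 2-norm formulas for a hypothetical stable, strictly proper system, with a crude frequency-domain cross-check of the definition (grid and truncation of the integral are arbitrary assumptions).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable, strictly proper system (A, B, C), D = 0; M(s) = 1/((s+1)(s+2)).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)     # A Wc + Wc A^T + B B^T = 0
print(np.sqrt(np.trace(B.T @ Wo @ B)), np.sqrt(np.trace(C @ Wc @ C.T)))  # both give ||M||_2

# Crude cross-check against (1/2pi) * integral of |M(jw)|^2 over a truncated grid
w = np.linspace(-200.0, 200.0, 40001)
Mjw = np.array([(C @ np.linalg.solve(1j * wi * np.eye(2) - A, B))[0, 0] for wi in w])
dw = w[1] - w[0]
print(np.sqrt(np.sum(np.abs(Mjw)**2) * dw / (2.0 * np.pi)))
```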
H_∞ Norm Computation

The H_∞ norm of M(s) can be computed through a search of the maximal singular value of M(jω) over all ω:
‖M(s)‖_∞ = ess sup_ω σ_max(M(jω))

Let M(s) ≐ [A  B; C  D]. Then
‖M(s)‖_∞ < γ
if and only if σ_max(D) < γ and H has no eigenvalues on the imaginary axis, where
H = [A  0; −C^T C  −A^T] + [B; −C^T D] (γ²I − D^T D)^{-1} [D^T C  B^T]

Computation of the H_∞ norm requires iterations on γ.

‖M(s)‖_∞ < γ
if and only if
Φ(s) = γ²I − M~(s)M(s) > 0
if and only if Φ(jω) is nonsingular for all ω, if and only if Φ^{-1}(s) has no imaginary-axis pole. But
Φ^{-1}(s) ≐ [ H    [B(γ²I − D^T D)^{-1}; −C^T D(γ²I − D^T D)^{-1}];
             [(γ²I − D^T D)^{-1} D^T C   (γ²I − D^T D)^{-1} B^T]    (γ²I − D^T D)^{-1} ]

Homework: For the system M(s) ≐ [2 1 1 1; 1 1 1 2; 4 2 1 0; 1 0 1 0], determine
1. The controllability and observability gramians of the system.
2. The Hankel singular values.
3. The balanced realization of the system.
4. The H_2 and H_∞ norms of M(s).
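A sketch of the γ-iteration based on the Hamiltonian test quoted above, specialized to D = 0 so that the formula simplifies to H = [A, BB^T/γ²; −C^TC, −A^T]. The bisection bracket and tolerances are arbitrary assumptions, and the test system is a realization of the earlier homework transfer function M(s) = 2/(s² + s + 2).

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """Bisection on gamma: ||M||_inf < gamma iff H(gamma) has no imaginary-axis eigenvalues (D = 0)."""
    lo, hi = 0.0, 1e6                          # crude initial bracket (assumption)
    while hi - lo > tol:
        g = 0.5 * (lo + hi)
        H = np.block([[A, (B @ B.T) / g**2],
                      [-C.T @ C, -A.T]])
        eigs = np.linalg.eigvals(H)
        if np.min(np.abs(eigs.real)) < 1e-8:   # an eigenvalue is (numerically) on the axis
            lo = g                             # gamma is too small
        else:
            hi = g
    return hi

# Companion-form realization of M(s) = 2/(s^2 + s + 2), the homework transfer function.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])
print(hinf_norm(A, B, C))    # approximately the peak of |M(jw)| over all w
```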
Linear Matrix Inequality
Linear matrix inequality (LMI):
F(x) = F_0 + Σ_{i=1}^m x_i F_i < 0
where x ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n}, i = 0, 1, ..., m, are given.

The inequality "< 0" must be interpreted as negative definite.

The LMI is a convex constraint on x, i.e., the set {x | F(x) < 0} is convex. That is, if x_1 and x_2 satisfy F(x_1) < 0 and F(x_2) < 0, then F(αx_1 + βx_2) < 0 for positive scalars α and β such that α + β = 1.

Multiple LMIs can be rewritten as a single LMI by concatenation:
F_1(x) < 0 and F_2(x) < 0   ⟺   [F_1(x)  0; 0  F_2(x)] < 0

Schur complement technique: assume that Q(x) = Q^T(x), R(x) = R^T(x), and S(x) depend affinely on x; then
[Q(x)  S(x); S^T(x)  R(x)] < 0   ⟺   R(x) < 0 and Q(x) − S(x)R^{-1}(x)S^T(x) < 0
That is, the nonlinear inequality Q(x) − S(x)R^{-1}(x)S^T(x) < 0 can be represented as an LMI.

Let Z(x) be a matrix depending on x. The constraint ‖Z(x)‖ < 1 is equivalent to I > Z(x)Z^T(x) and can be represented as the LMI
[I  Z(x); Z^T(x)  I] > 0

Let c(x) be a vector and P(x) be a symmetric matrix; the constraints P(x) > 0 and c^T(x)P^{-1}(x)c(x) < 1 can be represented as the LMI
[P(x)  c(x); c^T(x)  1] > 0

The constraint
trace S^T(x)P^{-1}(x)S(x) < 1,  P(x) > 0
where P(x) = P^T(x) and S(x) depend affinely on x, can be restated as
trace Q < 1,  S^T(x)P^{-1}(x)S(x) < Q,  P(x) > 0
and hence
trace Q < 1,  [Q  S^T(x); S(x)  P(x)] > 0

Orthogonal complement of a matrix. Let P ∈ R^{n×m} be of rank m < n; the orthogonal complement of P is a matrix P_⊥ ∈ R^{n×(n−m)} such that
P^T P_⊥ = 0
and [P  P_⊥] is invertible.

Finsler's lemma. Let P ∈ R^{n×m} and R = R^T ∈ R^{n×n} where rank(P) = m < n. Suppose P_⊥ is the orthogonal complement of P; then
σ P P^T + R < 0
for some real scalar σ if and only if
P_⊥^T R P_⊥ < 0
To see the above, note that
σ P P^T + R < 0
⟺ [P  P_⊥]^T (σ P P^T + R) [P  P_⊥] < 0
⟺ [σ P^T P P^T P + P^T R P    P^T R P_⊥;  P_⊥^T R P    P_⊥^T R P_⊥] < 0
⟺ P_⊥^T R P_⊥ < 0 and σ(P^T P P^T P) + P^T R P − P^T R P_⊥ (P_⊥^T R P_⊥)^{-1} P_⊥^T R P < 0
⟺ P_⊥^T R P_⊥ < 0
(the remaining scalar condition can always be satisfied by choosing σ sufficiently negative, since P^T P P^T P > 0).

Generalized projection lemma. Given P ∈ R^{n×m} (rank m < n), Q ∈ R^{n×l} (rank l < n), and R = R^T ∈ R^{n×n}, there exists K ∈ R^{m×l} such that
R + P K Q^T + Q K^T P^T < 0
if and only if
P_⊥^T R P_⊥ < 0 and Q_⊥^T R Q_⊥ < 0
where P_⊥ and Q_⊥ are the orthogonal complements of P and Q, respectively.

Homework: Let P = [2 1; 1 1; 3 0], Q = [1 4; 2 3; 3 2], and R = [4 1 2; 1 8 3; 2 3 2]. Find a matrix K such that R + P K Q^T + Q K^T P^T < 0.
Stability and Norm Computation: LMI Approach
Lyapunov stability. The system
ẋ = Ax
is stable if and only if all the eigenvalues of A are in the left half plane, if and only if there exists an
X > 0
such that
A X + X A^T < 0

H_∞ norm computation. The H_∞ norm of the system [A  B; C  D] is less than γ if and only if γ²I > D^T D and there exists an X > 0 such that
A^T X + X A + C^T C + (X B + C^T D)(γ²I − D^T D)^{-1}(B^T X + D^T C) = 0
if and only if γ²I > D^T D and there exists an X > 0 such that
A^T X + X A + C^T C + (X B + C^T D)(γ²I − D^T D)^{-1}(B^T X + D^T C) < 0
if and only if there exists an X > 0 such that
[A^T X + X A + C^T C    X B + C^T D;  B^T X + D^T C    D^T D − γ²I] < 0
if and only if there exists an X > 0 such that
[A^T X + X A    X B    C^T;  B^T X    −γ²I    D^T;  C    D    −I] < 0

H_2 norm computation. The H_2 norm of the system M(s) ≐ [A  B; C  0] can be determined from the gramian. Indeed, we have ‖M(s)‖_2² = trace B^T W_o B where W_o satisfies
A^T W_o + W_o A + C^T C = 0
Thus, to find the 2-norm, one can solve for positive definite X and Y such that
A^T X + X A + C^T C < 0
and
[Y    B^T X;  X B    X] > 0
The 2-norm is then determined from trace Y.
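A feasibility-problem sketch of the Lyapunov stability LMI, assuming the cvxpy package with an SDP-capable solver is installed; strict inequalities are approximated with a small ε, and the test matrix A is an assumption.

```python
import numpy as np
import cvxpy as cp

# Find X > 0 with A X + X A^T < 0 (feasible iff A is stable).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # hypothetical stable test matrix
n = A.shape[0]

X = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [X >> eps * np.eye(n),
               A @ X + X @ A.T << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                    # 'optimal' means the LMI is feasible
print(np.linalg.eigvalsh(X.value))    # eigenvalues of the Lyapunov certificate X
```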
Stability
Well-Posedness
Well-posedness means:
Given any initial state and input signals, the state and output are uniquely determined.
All transfer functions between different nodes exist and are proper.

In order to investigate the normal behavior of the system and to avoid possible pathologies, we need this well-posedness condition.

Consider the feedback system in which both P_yu and K are linear, time-invariant, real rational, and proper.
(Figure: feedback loop of K(s) and P_yu(s); the external signal v_1 is injected at the plant input node and v_2 at the plant output node, producing the internal signals u and y.)

The transfer function from the external input [v_1; v_2] to [u; y] admits
[I  −K; −P_yu  I][u; y] = [v_1; v_2]
or
(I − K P_yu) u = v_1 + K v_2
(I − P_yu K) y = P_yu v_1 + v_2
Thus,
[u; y] = [I  −K; −P_yu  I]^{-1} [v_1; v_2] = [(I − K P_yu)^{-1}  0; 0  (I − P_yu K)^{-1}] [I  K; P_yu  I] [v_1; v_2]

The feedback system is well-posed if and only if the inverse of (I − P_yu(s)K(s)) exists and is proper.

Note that
(I − K(s)P_yu(s))^{-1} = I + K(s)(I − P_yu(s)K(s))^{-1} P_yu(s)

The feedback system is well-posed if and only if (I − P_yu(∞)K(∞)) is invertible or, equivalently,
[I  −K(∞); −P_yu(∞)  I]
is invertible.

In particular, the feedback system is well-posed if either K(∞) or P_yu(∞) is zero.
Internal Stability
Concepts of stability:
(i) Equilibrium state: an equilibrium state is asymptotically stable (in the sense of Lyapunov) if the state trajectory returns to the original equilibrium point under perturbations.
(ii) Input-output behavior: a system is stable if a bounded input results in a bounded output.

(Figure: the same feedback loop of K(s) and P_yu(s), with injected signals v_1, v_2 and internal signals u, y.)

The system is internally stable if the output [u; y] is bounded under bounded excitation [v_1; v_2].

Since
[u; y] = [I  −K; −P_yu  I]^{-1} [v_1; v_2] = [(I − K P_yu)^{-1}   (I − K P_yu)^{-1} K;  (I − P_yu K)^{-1} P_yu   (I − P_yu K)^{-1}] [v_1; v_2]
the system is internally stable if all four transfer function matrices are stable rational transfer function matrices, i.e.,
[(I − K P_yu)^{-1}   (I − K P_yu)^{-1} K;  (I − P_yu K)^{-1} P_yu   (I − P_yu K)^{-1}] ∈ RH_∞

Internal stability guarantees that all signals in a system are bounded provided that the injected signals are bounded.

Internal stability cannot be concluded even if three of the four transfer function matrices are stable.
For example,
P_yu = (s − 1)/(s + 1)  and  K = −1/(s − 1)
Then
[u; y] = [ (s+1)/(s+2)   −(s+1)/((s−1)(s+2));  (s−1)/(s+2)   (s+1)/(s+2) ] [v_1; v_2]
The system is not internally stable.

Suppose that P_yu ∈ RH_∞; then the feedback system is internally stable if and only if it is well-posed and K(I − P_yu K)^{-1} ∈ RH_∞.

Similarly, suppose that K ∈ RH_∞; then the feedback system is internally stable if and only if it is well-posed and (I − P_yu K)^{-1} P_yu ∈ RH_∞.

Suppose that both P_yu and K are in RH_∞. Then the system is internally stable if and only if (I − P_yu K)^{-1} ∈ RH_∞ or, equivalently, det(I − P_yu K) has no zero in the closed right-half plane.

Suppose that there is no unstable pole/zero cancellation in the product P_yu(s)K(s); then the system is internally stable if and only if (I − P_yu K)^{-1} is in RH_∞.

As another example, consider
P_yu(s) = [1/(s−1)   0;  0   1/(s+1)]
and
K(s) = [−(s−1)/(s+1)   1;  0   −1]
Then
P_yu K = [−1/(s+1)   1/(s−1);  0   −1/(s+1)]
and
(I − P_yu K)^{-1} = [ (s+1)/(s+2)   (s+1)²/((s+2)²(s−1));  0   (s+1)/(s+2) ]
Note that, however,
det(I − P_yu K) = (s + 2)²/(s + 1)²
That is, det(I − P_yu K) having no zeros in the closed right-half plane does not necessarily imply internal stability.

Homework: A system is given by P_yu(s) = 4(s − 1)/(s² − 4). Find a controller K(s) such that the system is internally stable.
Concepts of Coprime Factorization
Two elements are coprime if their greatest common divisor (g.c.d.) is a unit.
For example, 8 and 11 are coprime in the set of integers because their g.c.d. is 1.
If a and b are coprime, then there exist (coprime) integers x and y such that
a x + b y = 1.
In the previous example, 8·7 + 11·(−5) = 1.
A rational number is the ratio of two integers, and indeed any rational number can be canonically written as a ratio of two coprime integers.

Two polynomials are coprime if they do not share common zeros.
Two polynomials n(s) and d(s) are coprime if and only if there exist polynomials u(s) and v(s) such that
n(s)u(s) + d(s)v(s) = 1
For example, for the coprime polynomials n(s) = 2(s − 1) and d(s) = s² − s + 1, there exist u(s) = −(1/2)s and v(s) = 1 such that n(s)u(s) + d(s)v(s) = 1.

Euclid's algorithm can be used to find the greatest common divisor and hence to check coprimeness.

A rational transfer function can also be written as a ratio of two stable transfer functions:
2(s − 1)/(s² − s + 1) = [ 2(s − 1)/(s² + s + 1) ] [ (s² − s + 1)/(s² + s + 1) ]^{-1}

Benefits:
Stability can be treated in an algebraic manner
State-space computations can be carried out
Coprime Factorization
Let N(s) and D(s) be in RH_∞. N(s) and D(s) are right coprime over RH_∞ if there exist U(s) and V(s), both in RH_∞, such that
U(s)N(s) + V(s)D(s) = I
They are left coprime if there exist U(s) and V(s) in RH_∞ such that
N(s)U(s) + D(s)V(s) = I
The above equations are called Bezout identities.

When N(s) and D(s) are right coprime, the concatenated matrix [N(s); D(s)] has a left inverse, namely [U(s)  V(s)], in RH_∞. Likewise, N(s) and D(s) are left coprime if and only if [N(s)  D(s)] has a right inverse in RH_∞.

G(s) is said to admit a right coprime factorization if G(s) = N_r(s)D_r^{-1}(s) for some right coprime N_r(s) and D_r(s).
G(s) = D_l^{-1}(s)N_l(s) is a left coprime factorization if N_l(s) and D_l(s) are left coprime.

Coprime factorization theorem. For every real rational transfer function matrix G(s), there exist right coprime factors N_r(s), D_r(s) and left coprime factors N_l(s), D_l(s) such that
G(s) = N_r(s)D_r^{-1}(s) = D_l^{-1}(s)N_l(s)

Proof (by construction):
Assume that G(s) admits the (minimal) state-space realization
G(s) = D + C(sI − A)^{-1}B ≐ [A  B; C  D]
Let F and H be stabilizing state feedback and output injection gains, respectively. That is, both A + BF and A + HC are stable.
Then the coprime factors admit the following state-space realizations:
[N_r(s); D_r(s)] ≐ [A + BF  B; C + DF  D; F  I]
[N_l(s)  D_l(s)] ≐ [A + HC  B + HD  H; C  D  I]

Checks:
N_r(s), N_l(s), D_r(s), and D_l(s) are in RH_∞. This is true because A + BF and A + HC are stable matrices.

G(s) = N_r(s)D_r^{-1}(s) = D_l^{-1}(s)N_l(s):
N_r(s)D_r^{-1}(s) ≐ [A + BF  B; C + DF  D][A + BF  B; F  I]^{-1}
= [A + BF  B; C + DF  D][A  B; −F  I]
= [A + BF  −BF  B; 0  A  B; C + DF  −DF  D]
= [A + BF  0  0; 0  A  B; C + DF  C  D]    (similarity transformation)
= [A  B; C  D] = G(s)                       (removal of uncontrollable modes)

The coprimeness: [N_r(s); D_r(s)] admits a left inverse in RH_∞. Let
[U_r(s)  V_r(s)] ≐ [A + HC  H  −(B + HD); F  0  I]
Then
[U_r(s)  V_r(s)][N_r(s); D_r(s)] ≐ [A + HC  H  −(B + HD); F  0  I] ∘ [A + BF  B; C + DF  D; F  I]
= [A + HC  HC − BF  −B; 0  A + BF  B; F  F  I]
= [A + HC  0  0; 0  A + BF  B; F  0  I]
= I

Let G = N_r D_r^{-1} where N_r and D_r are right coprime; then G is stable if and only if D_r^{-1} is stable.

If G is stable, then we can take N_r = N_l = G, D_r = I, D_l = I, U_r = 0, U_l = 0, V_r = I, and V_l = I.
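A numerical sketch of the constructive proof above, assuming SciPy's pole-placement routine; the unstable plant (A, B, C, D), the placed pole locations, and the test point s_0 are all arbitrary assumptions. It builds F and H, forms the right coprime factors from the realizations above, and checks G = N_r D_r^{-1} at s_0.

```python
import numpy as np
from scipy.signal import place_poles

def tf_eval(A, B, C, D, s):
    """Evaluate M(s) = D + C (sI - A)^{-1} B at a complex point s."""
    return D + C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# Hypothetical unstable plant G(s) with minimal realization (A, B, C, D).
A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# State feedback F and output injection H such that A+BF and A+HC are stable.
F = -place_poles(A, B, [-1.0, -2.0]).gain_matrix
H = -place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T

# Right coprime factors from the construction: [Nr; Dr] ~ (A+BF, B, [C+DF; F], [D; I])
s0 = 1.0 + 2.0j                                   # arbitrary test point
Nr = tf_eval(A + B @ F, B, C + D @ F, D, s0)
Dr = tf_eval(A + B @ F, B, F, np.eye(1), s0)
G = tf_eval(A, B, C, D, s0)
print("G = Nr Dr^{-1} at s0?", np.allclose(G, Nr @ np.linalg.inv(Dr)))
```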
Double Coprime Factorization
A transfer function matrix P_yu(s) is said to have a double coprime factorization if there exist a right coprime factorization P_yu(s) = N_r(s)D_r^{-1}(s) and a left coprime factorization P_yu(s) = D_l^{-1}(s)N_l(s) such that
[V_r(s)  U_r(s); N_l(s)  D_l(s)] [D_r(s)  U_l(s); N_r(s)  V_l(s)] = [I  0; 0  I]
Here, N_r, D_r, N_l, D_l, U_r, U_l, V_r, and V_l are all in RH_∞.

The double coprime equations:
U_r N_r + V_r D_r = I
D_l V_l + N_l U_l = I
D_l N_r + N_l D_r = 0
U_r V_l + V_r U_l = 0

Suppose that P_yu(s) is a proper real rational transfer function matrix with the stabilizable and detectable realization
P_yu(s) ≐ [A  B_0; C_0  D_00]
and let F and H be such that A + B_0 F and A + H C_0 are both stable. Then a particular state-space realization of the double coprime factors is
[D_r(s)  U_l(s); N_r(s)  V_l(s)] ≐ [A + B_0 F   B_0   H;  F   I   0;  (C_0 + D_00 F)   D_00   I]
[V_r(s)  U_r(s); N_l(s)  D_l(s)] ≐ [A + H C_0   (B_0 + H D_00)   H;  F   I   0;  C_0   D_00   I]
Stability and Coprime Factorization
(Figure: feedback loop of K(s) and P_yu(s) with injected signals v_1, v_2 and internal signals u, y.)

Internal stability requires that
[I  −K; −P_yu  I]^{-1} = [(I − K P_yu)^{-1}   (I − K P_yu)^{-1} K;  (I − P_yu K)^{-1} P_yu   (I − P_yu K)^{-1}] ∈ RH_∞

A matrix transfer function E(s) ∈ RH_∞ is called unimodular if its inverse exists and E^{-1}(s) ∈ RH_∞.

Assume that P_yu(s) admits the following right and left coprime factorizations:
P_yu(s) = N_r(s)D_r^{-1}(s) = D_l^{-1}(s)N_l(s)
and assume that K(s) admits the following coprime factorizations:
K(s) = V_r^{-1}(s)U_r(s) = U_l(s)V_l^{-1}(s)
Then the following statements are equivalent:
1. K(s) stabilizes P_yu(s).
2. U_r(s)N_r(s) + V_r(s)D_r(s) is unimodular.
3. D_l(s)V_l(s) + N_l(s)U_l(s) is unimodular.
4. [D_r(s)  U_l(s); N_r(s)  V_l(s)] is unimodular.
5. [V_r(s)  U_r(s); N_l(s)  D_l(s)] is unimodular.

Proof of 1 and 2.
Assume P_yu = N_r D_r^{-1} and K = V_r^{-1} U_r. Let E = U_r(s)N_r(s) + V_r(s)D_r(s). Then
(I − K P_yu)^{-1} = (I + V_r^{-1} U_r N_r D_r^{-1})^{-1} = D_r (U_r N_r + V_r D_r)^{-1} V_r = D_r E^{-1} V_r
(I − K P_yu)^{-1} K = −D_r E^{-1} U_r
(I − P_yu K)^{-1} P_yu = N_r E^{-1} V_r
(I − P_yu K)^{-1} = I + P_yu (I − K P_yu)^{-1} K = I − N_r E^{-1} U_r
Clearly, when E is unimodular, the system is internally stable.
On the other hand, internal stability implies that
[N_r; D_r] E^{-1} [U_r  V_r] ∈ RH_∞
Since N_r and D_r are right coprime, there exist Û_r and V̂_r in RH_∞ such that [Û_r  V̂_r][N_r; D_r] = I. Likewise, there exist N̂_r and D̂_r such that [U_r  V_r][N̂_r; D̂_r] = I.
Hence,
[Û_r  V̂_r][N_r; D_r] E^{-1} [U_r  V_r][N̂_r; D̂_r] = E^{-1} ∈ RH_∞
That is, E is invertible in RH_∞.

Proof of 1 and 4.
Note that
[D_r(s)  U_l(s); N_r(s)  V_l(s)] = [I  U_l V_l^{-1}; N_r D_r^{-1}  I][D_r  0; 0  V_l] = [I  K; P_yu  I][D_r  0; 0  V_l]
The term [I  K; P_yu  I] is invertible provided that the system is well-posed.
Thus,
[I  K; P_yu  I]^{-1} = [D_r  0; 0  V_l]  ·  [D_r(s)  U_l(s); N_r(s)  V_l(s)]^{-1}
Call the two factors X and Y^{-1}, respectively. X and Y are right coprime in the sense that there exist
W = [V_r  U_r; N_l  D_l]  and  Z = [0  U_r; N_l  0]
both in RH_∞ such that
W X + Z Y = [V_r  U_r; N_l  D_l][D_r  0; 0  V_l] + [0  U_r; N_l  0][D_r  U_l; N_r  V_l] = [I  0; 0  I]
Thus, [I  K(s); P_yu(s)  I]^{-1} is stable if and only if [D_r(s)  U_l(s); N_r(s)  V_l(s)]^{-1} is stable, if and only if [D_r(s)  U_l(s); N_r(s)  V_l(s)] is unimodular in RH_∞.
Stabilizing Controllers
Let P_yu(s) = N_r(s)D_r^{-1}(s) = D_l^{-1}(s)N_l(s) be right and left coprime factorizations and assume that the double coprime equation is satisfied:
[V_r(s)  U_r(s); N_l(s)  D_l(s)] [D_r(s)  U_l(s); N_r(s)  V_l(s)] = [I  0; 0  I]
for some U_l, V_l, U_r, and V_r. Then a stabilizing controller is
K(s) = V_r^{-1}(s)U_r(s) = U_l(s)V_l^{-1}(s)

Given right coprime N_r and D_r, let U_r and V_r be a solution to the Bezout equation
U_r N_r + V_r D_r = I
Let Û_r and V̂_r be another solution such that
Û_r N_r + V̂_r D_r = I
Then
(Û_r − U_r)N_r + (V̂_r − V_r)D_r = 0
Recall that
D_l N_r + N_l D_r = 0
Thus,
Û_r = U_r + Q D_l  and  V̂_r = V_r − Q N_l
for some Q ∈ RH_∞.

Any stabilizing controller K(s) of P_yu(s) can be represented as
K(s) = [V_r(s) − Q(s)N_l(s)]^{-1} [U_r(s) + Q(s)D_l(s)]
     = [U_l(s) + D_r(s)Q(s)] [V_l(s) − N_r(s)Q(s)]^{-1}
for some Q(s) ∈ RH_∞. Here U_l(s), V_l(s), U_r(s), and V_r(s) are in RH_∞ and satisfy the Bezout equation
[V_r(s)  U_r(s); N_l(s)  D_l(s)] [D_r(s)  U_l(s); N_r(s)  V_l(s)] = [I  0; 0  I]

Given a real rational proper P_yu(s), the coprime factors can be found and all the stabilizing controllers are then parametrized.
Framework of Feedback System
Standard Form for Feedback Design
Canonical form for feedback system design and analysis problems.
(Figure: generalized plant P(s) with inputs w, u and outputs z, y; the controller K(s) maps y to u.)

In the figure, w stands for the exogenous input signal, which could be the command signal, the disturbance, or the sensor noise depending on the application. u is the control signal generated by the controller to satisfy certain design requirements. y is the measured output, the signal used for controller synthesis. Finally, z is the output signal to be controlled, which represents the response of the system, the tracking error, or the actuation signal.

[z; y] = [P_zw(s)  P_zu(s); P_yw(s)  P_yu(s)] [w; u]

The matrix transfer function from [w; u] to [z; y], P(s) = [P_zw(s)  P_zu(s); P_yw(s)  P_yu(s)], is derived based on the plant dynamics, the design requirements, the interconnection, and the weightings.

u = K(s) y

The objective of the regulator design problem is to synthesize the controller K(s) such that the design requirements are satisfied. Design requirements may include internal stability, achievement of certain input-output relationships, and robustness, among others.

Redraw the previous figure as follows.
(Figure: block diagram showing P_zw(s), P_zu(s), P_yw(s), P_yu(s), and K(s), with fictitious signals v_1 and v_2 injected at the controller-output and plant-output nodes.)

The transfer functions P_ij(s) are assumed to be linear, time-invariant, causal, and proper.

The signals v_1 and v_2 are fictitious, introduced so that all input nodes are excited by some signals.

The transfer function matrix from [w; v_1; v_2] to [z; u; y] admits
z = P_zw w + P_zu u
u = v_1 + K y
y = v_2 + P_yw w + P_yu u
which give
[I  −P_zu  0; 0  I  −K; 0  −P_yu  I][z; u; y] = [P_zw  0  0; 0  I  0; P_yw  0  I][w; v_1; v_2]
or
[z; u; y] = [I  −P_zu  0; 0  I  −K; 0  −P_yu  I]^{-1} [P_zw  0  0; 0  I  0; P_yw  0  I][w; v_1; v_2]
          = [P_zw + P_zu(I − K P_yu)^{-1} K P_yw    P_zu(I − K P_yu)^{-1}    P_zu(I − K P_yu)^{-1} K;
             (I − K P_yu)^{-1} K P_yw              (I − K P_yu)^{-1}        (I − K P_yu)^{-1} K;
             (I − P_yu K)^{-1} P_yw                (I − P_yu K)^{-1} P_yu   (I − P_yu K)^{-1}] [w; v_1; v_2]

The system is well-posed if and only if (I − P_yu(∞)K(∞))^{-1} exists.
Closed-Loop Map
The system is governed by
z = P_zw w + P_zu u
y = P_yw w + P_yu u
and
u = K y

Linear fractional map. When the fictitious signals are zero, the transfer function from the input w to the output z is a linear fractional map. Let P = [P_zw  P_zu; P_yw  P_yu]; the linear fractional transformation is
T_l(P, K) = P_zw + P_zu K (I − P_yu K)^{-1} P_yw = P_zw + P_zu (I − K P_yu)^{-1} K P_yw

Scattering matrix representation. Let S be the matrix that governs the map from [y; u] to [w; z], that is,
[w; z] = S [y; u] = [S_wy  S_wu; S_zy  S_zu][y; u]
Then,
w = −P_yw^{-1} P_yu u + P_yw^{-1} y
and
z = (P_zu − P_zw P_yw^{-1} P_yu) u + P_zw P_yw^{-1} y
Thus,
S = [S_wy  S_wu; S_zy  S_zu] = [P_yw^{-1}   −P_yw^{-1} P_yu;  P_zw P_yw^{-1}   P_zu − P_zw P_yw^{-1} P_yu]
It can be shown that
w = S_wy y + S_wu u = (S_wy + S_wu K) y
and
z = S_zy y + S_zu u = (S_zy + S_zu K) y
Thus,
z = (S_zy + S_zu K)(S_wy + S_wu K)^{-1} w
This is a scattering matrix representation of the closed-loop map.

State-space representation. Assume that the transfer function matrix P(s) admits the state-space realization
ẋ = Ax + B_1 w + B_0 u
z = C_1 x + D_11 w + D_10 u
y = C_0 x + D_01 w + D_00 u
or
P(s) ≐ [A  B_1  B_0; C_1  D_11  D_10; C_0  D_01  D_00]
Let K(s) admit the state-space realization
K(s) ≐ [A_k  B_k; C_k  D_k]
or
ẋ_k = A_k x_k + B_k y
u = C_k x_k + D_k y
Then
u = C_k x_k + D_k [C_0 x + D_01 w + D_00 u]
or
u = (I − D_k D_00)^{-1} [D_k C_0 x + C_k x_k + D_k D_01 w]
Thus,
y = C_0 x + D_01 w + D_00 (I − D_k D_00)^{-1} [D_k C_0 x + C_k x_k + D_k D_01 w]

The closed-loop map in state-space representation is
T_l(P(s), K(s)) ≐
[A + B_0 D_k D_00k C_0      B_0 D_k00 C_k                     B_1 + B_0 D_k D_00k D_01;
 B_k D_00k C_0              A_k + B_k D_00k D_00 C_k          B_k D_00k D_01;
 C_1 + D_10 D_k D_00k C_0   D_10 D_k00 C_k                    D_11 + D_10 D_k D_00k D_01]
where D_00k = (I − D_00 D_k)^{-1} and D_k00 = (I − D_k D_00)^{-1}.

Affine parametrization: let T_l(P(s), K(s)) ≐ [A_cl  B_cl; C_cl  D_cl]. Then
A_cl = [A  0; 0  0] + [B_0  0; 0  I] K_para [C_0  0; 0  I]
B_cl = [B_1; 0] + [B_0  0; 0  I] K_para [D_01; 0]
C_cl = [C_1  0] + [D_10  0] K_para [C_0  0; 0  I]
D_cl = D_11 + [D_10  0] K_para [D_01; 0]
where
K_para = [D_k D_00k   D_k00 C_k;  B_k D_00k   A_k + B_k D_00k D_00 C_k]
Some Examples
Disturbance rejection: design a controller K(s) in the figure so that the transmission from the disturbance d to the output y is minimized (or zero), where G(s) stands for the plant transfer function.
(Figure: loop K(s) → G(s); the disturbance d is added at the plant output to form y, which is fed back to K(s).)
The exogenous signal is w = d, the control signal is u, the measurement is y, and the regulated variable is z = y. Thus
[z; y] = [I  G; I  G][w; u]
The augmented plant is P(s) = [I  G; I  G].
The transfer function from w = d to z = y is
y = d + G K y = (I − G K)^{-1} d = (I + G K (I − G K)^{-1}) d

Command following: it is desirable that y_1 follows r closely, i.e., that the transmission from r to y_1 − r is as small as possible.
(Figure: r is processed by K_1(s); the result is summed with the feedback path K_2(s) acting on y_1 and drives G(s), whose output is y_1.)
The exogenous signal is w = r, the control signal is u, the regulated variable is z = y_1 − r, and the measurement is y = [r; y_1].
The augmented plant is constructed as
[z; y] = [−I  G; I  0; 0  G][w; u]
The controller is
u = K_1 r + K_2 y_1 = [K_1  K_2] y

Weighted controller design: a feedback system with the following block diagram.
(Figure: plant blocks G_1(s) and G_2(s), controller K(s), weighting filters W_1(s) and W_2(s), disturbance d entering between G_1 and G_2, sensor noise n at the measurement, and weighted outputs v and u_f.)
With w = [d; n] and z = [v; u_f], the system is in the standard form with
P(s) = [W_2 G_1   0   W_2 G_1;  0   0   W_1;  G_2 G_1   G_2   G_2 G_1]

Parametric uncertainty: consider a linear system G(s) that is subject to state-space uncertainty such that G(s) = C_0(sI − A − A_l Δ A_r)^{-1} B_0 for some uncertainty Δ. It is desired to find a controller K(s) that robustly stabilizes the system.
(Figure: state-space loop with the integrator 1/s, the blocks A, A_l, Δ, A_r, input matrix B_0, output matrix C_0, and the controller K(s) closing the loop from y to u.)
System description:
ẋ = Ax + B_0 u + A_l w
y = C_0 x
z = A_r x
u = K y
The augmented plant is P(s) ≐ [A  A_l  B_0; A_r  0  0; C_0  0  0].
Equivalent feedback configuration:
(Figure: the uncertainty Δ wrapped around the augmented plant P(s), with K(s) closing the lower loop.)
The uncertainty block Δ is pulled out.
The augmented plant P(s) is known, the uncertainty block Δ is unknown except for certain information, and the controller is to be determined.

Mixed sensitivity problem: suppose that the system is perturbed as (I + Δ)G, where Δ stands for the perturbation, which is modelled as a stable, norm-bounded transfer function. The objective is to achieve robust stability and disturbance rejection at the same time.
(Figure: loop of K(s) and G(s); the perturbation Δ maps z_1 to w_1, which re-enters the loop, and the disturbance d is added at the plant output.)
Auxiliary variables are introduced to pull out the uncertainty.
The exogenous input is w = [w_1; d], the control signal is u, the measurement is y, and the regulated variable is z = [z_1; y].
[z; y] = P(s)[w; u] = [0  0  I; G  I  G; G  I  G][w; u]
Thus, the feedback design becomes the standard configuration.
(Figure: augmented plant P(s) with Δ closed around the (w_1, z_1) channels and K(s) closed around the (y, u) channels.)

General control analysis/synthesis problem.
(Figure: the general interconnection of Δ(s), P(s), and K(s), together with its two special cases below.)
Controller synthesis: given the augmented plant P(s), find the controller K(s) such that the map from w to z, T_zw(s), satisfies certain feedback properties.
Control analysis: given a stable M(s), characterize the uncertainty Δ(s) so that the stability and the performance as described by T_zw(s) are maintained.
Stabilization of the Linear Fractional Map
J.C. Juang
K(s)
P(s)
- -
-

w z
u y
Assume that the augmented plant admits the stabilizable and detectable realiza-
tion
P(s) =
_
P
zw
(s) P
zu
(s)
P
yw
(s) P
yu
(s)
_
s
=
_
_
A B
1
B
0
C
1
D
11
D
10
C
0
D
01
D
00
_
_
K(s) stabilizes P(s) if and only if K(s) stabilizes P
yu
(s).
Check the stability of
_
_
P
zw
+P
zu
(I KP
yu
)
1
KP
yw
P
zu
(I KP
yu
)
1
P
zu
(I KP
yu
)
1
K
(I KP
yu
)
1
KP
yw
(I KP
yu
)
1
(I KP
yu
)
1
K
(I P
yu
K)
1
P
yw
(I P
yu
K)
1
P
yu
(I P
yu
K)
1
_
_
Recall that
P
yu
(s) = N
r
(s)D
1
r
(s) = D
1
l
(s)N
l
(s)
and
K(s) = [V
r
(s) Q(s)N
l
(s)]
1
[U
r
(s) +Q(s)D
l
(s)]
= [U
l
(s) +D
r
(s)Q(s)][V
l
(s) N
r
(s)Q(s)]
1
where
_
V
r
(s) U
r
(s)
N
l
(s) D
l
(s)
_ _
D
r
(s) U
l
(s)
N
r
(s) V
l
(s)
_
=
_
I 0
0 I
_
and
_
D
r
(s) U
l
(s)
N
r
(s) V
l
(s)
_
s
=
_
_
A +B
0
F B
0
H
F I 0
(C
0
+D
00
F) D
00
I
_
_
_
V
r
(s) U
r
(s)
N
l
(s) D
l
(s)
_
s
=
_
_
A +HC
0
B
0
+HD
00
H
F I 0
C
0
D
00
I
_
_
62
Note also that P
zu
(s) admits the factorization
P
zu
(s)
s
=
_
A B
0
C
1
D
10
_
=
_
A +B
0
F B
0
C
1
+D
10
F D
10
__
A +B
0
F B
0
F I
_
1
= T
10
D
1
r
and P
yw
(s) admits the factorization
P
yw
(s)
s
=
_
A +HC
0
H
C
0
I
_
1
_
A +HC
0
B
1
+HD
01
C
0
D
01
_
= D
1
l
T
01
Hence,
(I KP
yu
)
1
= D
r
(V
r
QN
l
)
K(I P
yu
K)
1
= D
r
(U
r
+QD
l
)
(I P
yu
K)
1
P
yu
= (V
l
N
r
Q)N
l
(I P
yu
K)
1
= (V
l
N
r
Q)D
l
P
zu
(I KP
yu
)
1
K = T
10
(V
r
QN
l
)
P
zu
(I KP
yu
)
1
= T
10
(U
r
+QD
l
)
(I P
yu
K)
1
P
yw
= (V
l
N
r
Q)T
01
(I KP
yu
)
1
KP
yw
= (U
l
+D
r
Q)T
01
Finally,
P
zw
+P
zu
K(I P
yu
K)
1
P
yw
= P
zw
T
10
QT
01
T
10
U
r
P
yw
=
_
A B
1
C
1
D
11
_

_
A +B
0
F B
0
C
1
+D
10
F D
10
__
A +HC
0
H
F 0
__
A B
1
C
0
D
01
_
T
10
QT
01
=
_
_
_
_
A +B
0
F B
0
F 0 0
0 A +HC
0
HC
0
HD
01
0 0 A B
1
(C
1
+D
10
F) D
10
F C
1
D
11
_
_
_
_
T
10
QT
01
=
_
_
_
_
A +B
0
F B
0
F 0 B
1
0 A +HC
0
HC
0
B
1
+HD
01
0 0 A B
1
(C
1
+D
10
F) D
10
F 0 D
11
_
_
_
_
T
10
QT
01
=
_
_
A +B
0
F B
0
F B
1
0 A +HC
0
(B
1
+HD
01
)
C
1
+D
10
F D
10
F D
11
_
_
T
10
QT
01
The set of all closed-loop map from w to z achievable by an internally stabilizing
proper controller is
T
zw
(s) = T
11
(s) T
10
(s)Q(s)T
01
(s)
where Q(s) 1H

and I D
00
Q() is invertible.
63
All Stabilizing Controllers
J.C. Juang
All stabilizing controllers can be represented in terms of a linear fractional map
K(s) = [V
r
(s) Q(s)N
l
(s)]
1
[U
r
(s) +Q(s)D
l
(s)]
= [U
l
(s) +D
r
(s)Q(s)][V
l
(s) N
r
(s)Q(s)]
1
and thus
K(s) = T
l
(J(s), Q(s))
for Q(s) in 1H

and
J(s) =
_
U
l
(s)V
1
l
(s) V
1
r
(s)
V
1
l
(s) V
1
l
(s)N
r
(s)
_
or
_
V
1
r
(s)U
r
(s) V
1
r
(s)
V
1
l
(s) N
l
(s)V
1
r
(s)
_
To see this,
K(s)
= (U
l
+D
r
Q)(V
l
N
r
Q)
1
= U
l
(V
l
N
r
Q)
1
D
r
Q(V
l
N
r
Q)
1
= U
l
V
1
l
U
l
V
1
l
N
r
Q(V
l
N
r
Q)
1
D
r
Q(V
l
N
r
Q)
1
= U
l
V
1
l
(D
r
+U
l
V
1
l
N
r
)Q(V
l
N
r
Q)
1
= U
l
V
1
l
V
1
r
Q(I V
1
l
N
r
Q)
1
V
1
l
= T
l
(J(s), Q(s))
The matrix J(s) in the linear fractional controller parametrization is
J(s)
s
=
_
_
A +B
0
F +HC
0
+HD
00
F H B
0
+HD
00
F 0 I
C
0
+D
00
F I D
00
_
_
64
Block Diagram Interpretation
J.C. Juang
Controller design problem
K(s)
P(s)
- -
-

w z
u y
After controller parametrization
Q(s)
J(s)
P(s)
- -
-

w z
u y
The closed-loop map after stabilization
Q(s)
T(s)
- -
-

w z
or
-
T
01
(s)
-
Q(s)
-
T
10
(s)
-b

-
z w
-
T
11
(s)
?
65
LMI-Based Stabilization Techniques
J.C. Juang
Recall the state space representation of P(s) as
x = Ax +B
1
w +B
0
u
z = C
1
x +D
11
w +D
10
u
y = C
0
x +D
01
w +D
00
u
and assume that D
00
= 0.
For state feedback control,
u = Fx
The closed-loop system matrix becomes
A
cl
= A +B
0
F
The system is thus stable provided that there exists a positive denite matrix X
such that
(A +B
0
F)X +X(A +B
0
F)
T
< 0
Let Z = FX, this is equivalent to nding matrices Z and X > 0 such that
_
A B
0

_
X
Z
_
+
_
X Z
T

_
A
T
B
T
0
_
< 0
Recall that generalized projection lemma, let B

0
be the orthogonal complement
of B
0
, then the system is stabilized via state feedback provided that there exists
X > 0 such that
B
T
0
(AX +XA
T
)B

0
< 0
In the general case, the closed-loop system matrix is linearly parametrized in terms
of unknown K
para
A
cl
=
_
A 0
0 0
_
. .
Ax
+
_
B
0
0
0 I
_
. .
Bx
K
para
_
C
0
0
0 I
_
. .
Cx
Thus, the system is stabilized if there exists a matrix X > 0 such that
B
T
x
(A
x
X +XA
T
x
)B

x
< 0
and
C

x
(X
1
A
x
+A
T
x
X
1
)C
T
x
< 0
66
Feedback Control and Performance Limitations
67
Review of Feedback Theory
J.C. Juang
Consider the feedback system in the gure
6
d -
r

e u
?
d
-
K(s)
-
G(s)
-d -
y
where
r(t): command or reference signal
y(t): output of the system
e(t): tracking error
u(t): control signal
d(t): disturbance
G(s): plant, the physical process to be controlled
K(s): compensator to be designed
Various design issues arise
Stability must be guaranteed: The closed-loop system is
Y (s) =
G(s)K(s)
1 +G(s)K(s)
R(s)
The system is stable when the poles of the closed-loop transfer function
GK
1+GK
or zeros of 1 +G(s)K(s) are in the left half plane.
The function G(s)K(s) = L(s) is the loop transfer function and 1 + G(s)K(s) =
1 +L(s) is the return dierence function.
Sensitivity is required to be small: The output response to the additive dis-
turbance is
Y (s)
D(s)
=
1
1 +G(s)K(s)
It is desired that
Y (s)
D(s)
be small without sacricing stability. The sensitivity
function dened as
S(s) =
1
1 +G(s)K(s)
must be small.
68
The sensitivity function also governs the response from the command r to the
tracking error e. Thus for good tracking performance, i.e., small tracking error,
S must be small as well.
Let L
o
(s) be the nominal loop transfer function, then the map from the com-
mand r to the output y is characterized by the transfer function T
o
(s) where
T
o
(s) =
L
o
(s)
1 +L
o
(s)
Suppose that due to neglected dynamics, perturbations, and so forth, the
actual loop transfer function becomes L(s), the resulting closed-loop transfer
function becomes
T(s) =
L(s)
1 +L(s)
The sensitivity function S(s) of the closed-loop map T(s) with respect to the
variation of the loop transfer function L is
S(s) =
TTo
T
LLo
L
=
1
1 +L
o
(s)
Thus, S(s) must be maintained small in order to reduce the eect of plant
uncertainty on the closed-loop map.
But, how small is small? Recall that in the open-loop situation, the transfer
function from d to y is 1, thus, it is generally required that [S(s)[ to be smaller
than 1 at all ss.
Let T C be an open set and let f(s) be a complex valued function dened
on T. Then f(s) is said to be analytic at a point z
o
in T if its is dierentiable
at z
o
and also at each point in some neighborhood of z
o
. In other words, the
function has a power series representation around z
o
.
A function f(s) is said to be analytic in T if it is analytic at each point of T.
The maximum modulus theorem states that if a complex valued function f
of complex variable is analytic inside and on the boundary of some domain
T, then the maximum modulus (magnitude) of the function f occurs on the
boundary of the domain T.
In other words, since S(s) is required to be stable, i.e., it is analytic in the closed
right half plane (including at innity), the maximum of [S(s)[ is achieved on
the imaginary axis. It is thus equivalent to evaluating [S(j)[.
6
a -
r
-
e u
?
d
-
K(s)
-
G(s)
-a -
y
c
G(s)
-a -
?
d
y
o
69
Let y
o
(t) and y
c
(t) be the disturbance responses in the open and closed-loop sit-
uations, respectively. To have improvement in disturbance response, a sensible
criterion is to require
_

0
y
2
c
(t)dt <
_

0
y
2
o
(t)dt
Using Parsevals theorem,
_

[S(j)[
2
[d(j)[
2
d <
_

[d(j)[
2
d
The inequality is veried for all square integrable disturbances if and only if
[S(j)[ < 1 for all .
Fundamental Limitation: Assume that G(s) is not minimum phase, that is,
G(z) = 0 for some z RHP (right half plane). From the denition on sensitivity
function, S(z) =
1
1+G(z)K(z)
= 1. According to the maximum modulus theorem,
there exists a frequency such that
[S(j)[ > [S(z)[ = 1
This implies that the desired sensitivity condition [S(j)[ < 1 for all cannot
be achieved.
Moreover, Bodes result states that when G(s) is stable, minimum phase, and
lim
s
sL(s) = 0. i.e., the loop transfer function L(s) decays faster than 20
dB/dec. Then,
_

0
ln [S(j)[ d = 0
In other words, the area of sensitivity reduction is the same as the area of
sensitivity amplication.
-
0 dB
ln |S(j)|
Robust Stability: The Nyquist stability criterion states that
Number of clockwise circle around 1
= Number of zeros of 1 +L in RHP Number of poles of 1 +L in RHP
70
-
6
u
Re L(j)
Im L(j)
1
Note that the number of poles of 1 + L in RHP is the same as the number of
unstable open loop poles.
Assume that both G(s) and K(s) are stable, then closed-loop stability means
that the Nyquist plot is not allowed to circle around 1.
The critical issue for the encirclement is the phase 180
o
with unity gain.
Let L be a perturbation that creates marginally instability, i.e.,
L
o
(j) + L = 1
Then, to have robust stability, L must be large. This is to require that
L
L
o
=
1 +L
o
L
o
be large. In other words,
T(s) =
L
o
(s)
1 +L
o
(s)
must be small in magnitude at the critical frequencies. The function T(s) is
the complementary sensitivity function.
Another Limitation: The sensitivity and complementary sensitivity functions
sum to one.
S(s) +T(s) = 1 , s
S(s) and T(s) cannot be made small at the same time. There is a conict
between achieving good sensitivity and good stability margin.
In summary, feedback can be used to
Shape system response
Attenuate disturbance
Accommodate uncertainty
Adapt toward variation
and yet it is subject to fundamental limitations on the achievable performance of
a feedback system. In particular, the tradeos between sensitivity function and
complementary sensitivity function must be accounted for. A small sensitivity
function implies that
71
Small sensitivity toward variations of GK
Good disturbance rejection
Small tracking error
On the other hand, a small complementary sensitivity function means
Large stability margin
Good noise rejection
Robust against variation of sensor dynamics
72
Multivariable Feedback System
J.C. Juang
Consider the feedback design problem in the gure
r
- f -
K(s)
-
u
f -
u
p
d
1
G(s)
- f -
?
y
p
y
o
d
2
f
n
6
? ?

The input loop transfer function (matrix) or the transfer function at the input
node L
in
(s) is dened as
L
in
(s) = K(s)G(s)
Likewise, the output loop transfer function is
L
out
(s) = G(s)K(s)
The input sensitivity function is dened as the transfer function matrix from the
input disturbance d
1
to the plant input signal u
p
S
in
(s) = [I +L
in
(s)]
1
= [I +K(s)G(s)]
1
and
u
p
= S
in
d
1
The output sensitivity function is dened as the transfer function matrix from the
output disturbance d
2
to the output signal y
o
S
out
(s) = [I +L
out
(s)]
1
= [I +G(s)K(s)]
1
and
y
o
= S
out
d
2
The matrix I +L
in
(s) is called input return dierence matrix and I +L
out
(s) is the
output return dierence matrix.
The input and output complementary sensitivity functions are dened, respec-
tively, as
T
in
(s) = I S
in
(s) = L
in
(s)[I +L
in
(s)]
1
= K(s)G(s)[I +K(s)G(s)]
1
and
T
out
(s) = I S
out
(s) = L
out
(s)[I +L
out
(s)]
1
= G(s)K(s)[I +G(s)K(s)]
1
73
The closed-loop system satises the following input-output equations:
y
o
= T
out
(r n) +S
out
Gd
1
+S
out
d
2
r y
o
= S
out
(r d
2
) +T
out
n S
out
Gd
1
u = KS
out
(r n) KS
out
d
2
T
in
d
1
u
p
= KS
out
(r n) KS
out
d
2
+S
in
d
1
Design specications such as disturbance rejection ratio and robust stability are
cast in terms of S
out
, S
in
, KS
out
, T
out
, T
in
, and so on.
These specications put constraints on the maximal singular values of S
in
(j),
S
out
(j), T
in
(j), T
out
(j), and so on.
Note that

max
[S
in
(j)] =
1

min
[I +K(j)G(j)]
and

min
[K(j)G(j)] 1
min
[I +K(j)G(j)]
min
[K(j)G(j)] + 1
Thus,

max
[S
in
(j)] 1
min
[K(j)G(j)] 1
Similarly,

max
[S
out
(j)] 1
min
[G(j)K(j)] 1
On the other hand, the constraints on
max
(T
in
) or
max
(T
out
) admit

max
[T
in
(j)] 1
max
[K(j)G(j)] 1
and

max
[T
out
(j)] 1
max
[G(j)K(j)] 1
Thus, the design requirements on S and T are translated into that on the loop
transfer function L.
In general, at low frequencies, the sensitivity function is made small, i.e., the
loop transfer function is high. in the high-frqeuency region, the complementary
sensitivity function is made small and the loop transfer function rolls o.
-
6
log 0 dB
@
@
@
P
P
P
P
P
P
P
P
P

i
(L)

max
[L(j)]

min
[L(j)]
Loop shaping: shapes the singular values of the loop transfer function so that
design objectives can be achieved.
74
Signicance of Frequency-Dependent Weightings
J.C. Juang
Weighting: adjusts the importance of dierent components in an optimization
problem.
Frequency-dependent weighting: selects a weighting function that is a function of
frequency to reect the design requirement and reduce design conservativeness.
For example, a specication on the sensitivity function [S(j)[:
_
[S(j)[
o
[S(j)[ >
o
can be converted into
[W(j)S(j)[ 1
with
[W(j)[ =
_

1

o

1
>
o
Design problem and weightings
-
W
r
(s)
- f -
6
W
e
(s)
6
e
e
K(s)
-
G(s)
- f -
W
y
(s)
-
6
6
W
u
(s)
6
u
u ?
W
d
(s)
?

d
d
y y r r
The transfer function from the input vector
_
r
d
_
to the output vector
_
_
e
y
u
_
_
is
_
_
e
y
u
_
_
=
_
_
I G(I +KG)
1
K I +G(I +KG)
1
K
G(I +KG)
1
K I G(I +KG)
1
K
(I +KG)
1
K (I +KG)
1
K
_
_
_
r
d
_
The weightings
1. W
r
: models the command shaping
2. W
e
: reects the tracking error
3. W
u
: characterizes the control and actuator activity
4. W
d
: governs the disturbance characteristics
75
5. W
y
: represents the output performance
After weighting,
_
_
e
y
u
_
_
=
_
_
W
e
0 0
0 W
y
0
0 0 W
u
_
_
_
_
I G(I +KG)
1
K I +G(I +KG)
1
K
G(I +KG)
1
K I G(I +KG)
1
K
(I +KG)
1
K (I +KG)
1
K
_
_
_
W
r
0
0 W
e
_ _
r

d
_
In terms of standard design formulation, the linear fractional map T
l
(P, K) = P
zw
+
P
yw
(I KP
yu
)
1
KP
yw
that characterizes this design problem is
P
zw
=
_
_
W
e
0 0
0 W
y
0
0 0 W
u
_
_
_
_
I I
0 I
0 0
_
_
_
W
r
0
0 W
e
_
P
zu
=
_
_
W
e
0 0
0 W
y
0
0 0 W
u
_
_
_
_
G
G
I
_
_
P
yw
=
_
I I

_
W
r
0
0 W
e
_
P
yu
= G
Control design may then be regarded as a process of choosing a controller K(s)
such that certain weighted signals are made small in some sense.
76
Bodes Gain/Phase Relation and Integral
J.C. Juang
Bodes gain-phase integral relation: the phase of a stable and minimum phase
transfer function is determined uniquely by the magnitude of the transfer function.
6
d -
r

e
-
L(s)
-
y
Let L(s) be a stable and minimum phase transfer function, then the phase of the
loop transfer function at
o
L(j
o
) =
1

d ln [L[
d
ln coth
[[
2
d
where = ln(/
o
).
The function
ln coth
[[
2
= ln [
+
o

o
[
serves as a weighting function on the the slope of the loop transfer function
d ln |L(j)|
d
.
The values of
d ln |L(j)|
d
are more heavily weighted near =
o
.
As a result, the steeper the graph of [L[ near the frequency
o
, the smaller the
value of L.
Suppose that ln [L[ has a constant slope, e.g.,
d ln |L|
d
= c, i.e., 20c dB/dec, then
L(j
o
) =
c

ln coth
[[
2
d =
c
2
If for between 0.1 and 10.0, the slope of [L[ is c, then L(j
o
) < c 82.7

.
Let
c
be the gain crossover frequency, i.e., [L(j
c
)[ = 1, then the phase margin is
known to be +(j
c
). Roughly speaking, if the slope is faster than 40 dB/dec
or c > 2, then the system becomes unstable. Hence, the roll o can not be too
fast.
The return dierence
[1 +L(j
c
)[ = 2[ sin
+L(j
c
)
2
[
Hence, if the phase margin is too small, the sensitivity becomes large.
77
For a nonminimum phase system, the constraint becomes more strigent. Indeed,
consider
L(s) =
s +z
s +z
L
mp
(s)
where L
mp
(s) is stable and minimum phase and z > 0. Then
L(j
o
) = L
mp
(j
o
) +
j
o
+z
j
o
+z
Since
jo+z
jo+z
< 0, an extra phase is introduced. In particular, when
o
= z,

jo+z
jo+z
= 90

and when
o
= z/2,
jo+z
jo+z
= 53.13

A rule of thumb is to have the crossover frquency

c
< z/2
Let L(s) be the open-loop transfer function with at least two more poles than zeros
and let p
i
s be the open right-half plane poles of L(s). Then, the Bodes sensitivity
integral states that
_

0
ln [S(j)[ d =
m

i=1
1p
i
0
If the sensitivity is kept low in certain frequency range, there will exist some
frequency range in which [S(j)[ is large. This is the so-called water bed eect.
Suppose that L(s) has a single real right-half plane zero z, then
_

0
ln [S(j)[
2z
z
2
+
2
d = ln
m

i=1
[
p
i
+z
p
i
z
[
The function
2z
z
2
+
2
acts as a weighting on the sensitivity function and further limits
the design tradeos.
78
Analyticity Constraints
J.C. Juang
For a closed-loop stable system, its sensitivity and complementary sensitivity func-
tions satisfy the analyticity constraints as follows.
Let p
i
s and z
j
s be respectively open right-half plane poles and zeros of the loop
transfer function L(s), then
S(p
i
) = 0 and T(p
i
) = 1 i = 1, 2, . . . , m
and
S(z
j
) = 1 and T(z
j
) = 0 j = 1, 2, . . . , k
that is, p
i
s are right-half plane zeros of S(s) and z
j
s are right-half plane zeros of
T(s).
These interpolation constriants once satised imply internal stability. They also
impose design limitations on the achievable performance.
Let
B
p
(s) =
m

i=1
s p
i
s +p
i
and B
z
(s) =
k

i=1
s z
j
s +z
j
Then [B
p
(j)[ = 1 and [B
z
(j)[ = 1 for all and
B
1
p
(s)S(s) 1H

and B
1
z
(s)T(s) 1H

By the maximum modulus theorem,


|S(s)|

= |B
1
p
(s)S(s)|

[B
1
p
(z)S(z)[ = [B
1
p
(z)[ =
m

i=1
[
z +p
i
z p
i
[ > 1
for any z that is a right-half plane zero of L(s).
Likewise, for any right-half plane pole p of L(s),
|T(s)|

= |B
1
z
(s)T(s)|

[B
1
z
(p)[ =
k

j=1
[
p +z
j
p z
j
[ > 1
In particular, if L(s) is stable and has a right-half plane zero z, then
|S(s)|

[S(z)[ = 1
Let W
e
(s) be a weight on the sensitivity function, then
|W
e
(s)S(s)|

[W
e
(z)[
m

i=1
[
z +p
i
z p
i
[
Thus, in a sense
[S(j)[ [W
e
(j)[ [W
e
(z)[
m

i=1
[
z +p
i
z p
i
[
79
Roughly, the bandwidth of the closed-loop system must be smaller than the right-
half plane zero and larger than the right-half plane pole.
Compare the two loop transfer functions L
o
(s) =
1
s
and L(s) =
1
s
zs
z+s
. for some
positive z. Both loops have the same magnitude plot with crossover frequency

o
= 1. Yet,
L(j) = L
o
(j) + 2(z j)
Note that at = z,
L(j) = L
o
(j)

2
The extra phase may destabilize the system.
The sensitivity function with respect to L(s) is
S(s) = (1 +L(s))
1
=
s(s +z)
s
2
+ (z 1)s +z
That is, L(s) would lead to an unstable closed-loop system when z 1.
On the other hand, consider L
o
(s) and L(s) = L
o
(s)
s+p
sp
for some positive p. In this
case,
L(j) = L
o
(j) + 2(p +j)
The compelmentary sensitivity function is
T(s) =
L(s)
1 +L(s)
=
s +p
s
2
+ (1 p)s +p
In this case, the system becomes unstable when p 1.
80
Robustness Analysis
81
Robustness Analysis Motivation
J.C. Juang
Let G
o
(s) =
s1
s(s2)
and K(s) be a stabilizing controller.
Suppose that the open loop plant is perturbed as G(s) =
s1+
2
s(s2
1
)
for some
1
and

2
.
6
c -

-
K(s)
-c -
1
s
-
1
s
-
1
-c -

1
2
6
c
6
?
-

2
6
w
1
z
1
z
2
w
2
G(s)
G
o
(s)
The problem of robust stability is to assess how much tolerance on the perturba-
tions
1
and
2
the system can endure before going unstable.
The robustness analysis problem can be formulated as the following standard form
where the perturbation is cast into an uncertainty block and the known dynamic
is represented in another block.
_
1
s2
1
1+KGo

1
s2
K
1+KGo
1
s(s2)
1
1+KGo

1
s(s2)
K
1+KGo
_
_

1
0
0
2
_
-
-
w
1
w
2

z
2
z
1
82
Robust Stability and Performance
J.C. Juang
General (standard) form of robust stability analysis
M(s)

- -
w z
M(s) is a stable, linear time-invariant system and represents the perturbation.
Problem of robust stability analysis: under what condition is the closed-loop re-
mains stable in the presence of ?
The answer depends, to a great extent, on what we know about .
Norm bounded or subspace constrained.
Linear or nonlinear; Time-invariant or time-varying; Real or complex; Stable
or unstable; Time domain or frequency domain.
Single or multiple; Structured or unstructured; Independent or dependent;
One-at-a-time or simultaneous.
Bounded gain or bounded phase.
Nominal stability: if M(s) is internally stable.
Robust stability: if the system is stable for all permissible s.
Nominal performance: if the performance as characterized by the map from w to
z satises the requirement when = 0.
Robust performance: if the performance satises the requirement for all permis-
sible s.
83
Plant Uncertainty and Robustness
J.C. Juang
Consider the system in the gure where L
o
(s) is the nominal loop transfer func-
tion and (s) is a multiplicative perturbation. The actual loop transfer function
becomes L(s) = (I + (s))L
o
(s).
6
e -

-
L
o
(s)
-
I + (s)
-
L(s)
The system can be redrawn as the standard form
L
o
(I +L
o
)
1

w z
If the nominal system is closed-loop stable, (s) is open-loop stable, and

max
((j))
max
[L
o
(j)(I +L
o
(j))
1
] < 1
for all , then the perturbed system is closed-loop stable.
Too see this, recall that the characteristic equation is det[I + (I + (s))L
o
(s)] = 0.
Since (I +L
o
) is invertible,
det[I+(I+(s))L
o
(s)] = det[(I+L
o
(s))+(s)L
o
(s)] = det[I+L
o
(s)]det[I+(s)L
o
(s)[I+L
o
(s)]
1
]
In order for det[I + (s)L
o
(s)[I + L
o
(s)]
1
] ,= 0, it is necessary and sucient that
I + (s)L
o
(s)[I +L
o
(s)]
1
is nonsingular. In other words,

max
[L
o
(s)(I +L
o
(s))
1
]
max
((s)) < 1
will ensure the nonsingularity of I + (I + (s))L
o
(s) and hence the stability of the
closed-loop system.
84
Singular Value Plot
J.C. Juang
In view of the inequality for closed-loop stability

max
(M(j))
max
((j)) < 1
it is important to investigate the singular values of M (as well as ) along the
j-axis. This leads to the singular value plot.
For linear system, one can test stability frequency by frequency. Thus, if the
perturbation is bounded by a known frequency function, e.g.,
max
((j)), then
the system is robustly stable if

max
(M(j))
max
((j)) < 1
for all . The curve
max
(M(j)) versus the frequency is the (maximum) singular
value plot.
-

6

max
(M(j))
The singular value plot is an important tool is representing robustness.
Plant uncertainties are specied in terms of singular values
Robustness is evaluated using singular values
Moreover, singular value plot can also be used to assess performance if the perfor-
mance design requirement is specied in terms of singular value type constraints.
When the system is single-input single-output, the singular value plot reduces to
Bode magnitude plot.
Which singular value plot to evaluate?
The answer depends on the source of uncertainty and the design specication.
85
Singular Value Plots and Uncertainties
J.C. Juang
Complementary sensitivity function at the output node.
6
b -

K
-
G
-b
-

?
-

Robustness Test

max
(GK(I +GK)
1
)
max
() < 1
Uncertainty:
Output (sensor) error
Neglected HF dynamics
Changing number of RHP zeros
Multiplicative uncertainty: G

=
(I + )G
Performance:
Sensor noise attenuation
Output response to output com-
mand
Control function at the input node.
6
b -

K
-
G
-b
-

?
-

Robustness Test

max
(K(I +GK)
1
)
max
() < 1
Uncertainty:
Additive plant error
Uncertain RHP zeros
Additive uncertainty: G

= G+
Performance:
Input response to output com-
mand
86
Complementary sensitivity function at the input node.
6
b -

K
G
- -b
-

?
-

Robustness Test

max
(KG(I +KG)
1
)
max
() < 1
Uncertainty:
Input (actuator) error
Neglected HF dynamics
Changing number of RHP zeros
Multiplicative uncertainty: G

=
G(I + )
Performance:
Input response to input com-
mand
Sensitivity function at the output node.
6
b -

K
-
G
-b
?

Robustness Test

max
((I +GK)
1
)
max
() < 1
Uncertainty:
LF plant parameter errors
Changing number of RHP poles
G

= (I + )
1
G
Performance:
Output sensitivity
Output error to output com-
mands and disturbances
87
Sensitivity function at the input node.
6
b -

K
G
- -b -
?

Robustness Test

max
((I +KG)
1
)
max
() < 1
Uncertainty:
LF plant parameter errors
Changing number of RHP poles
Fractional uncertainty: G

=
G(I + )
1
Performance:
Input sensitivity
Input errors to input commands
and disturbances
Control function at the output node.
6
b -

K
-b -
G
?

Robust Stability Test

max
((I +GK)
1
G)
max
() < 1
Uncertainty:
LF plant parameter errors
Uncertain RHP poles
Feedback-type uncertainty: G

=
(G
1
+ )
1
Performance:
Output errors to input com-
mands and disturbances
88
Robustness Margin
J.C. Juang
Gain margin problem
Real perturbation k
g
Test: [1 +k
g
L(j)[ = 0
At the phase crossover frequency
p
, the imaginary part of L(j
p
) is zero, i.e.,
L(j
p
) = L
r
; thus, k
g
= L(j
p
)
1
= 1/[L
r
[.
Phase margin problem
Unity gain, complex perturbation e
j
Test: [1 +e
j
L(j)[ = 0
At the gain crossover frequency
g
, [L(j
g
)[ = 1, i.e., L(j
g
) = e
jL(jg)
; thus,
= +L(j
g
).
Robustness margin problem
Complex perturbation k
r
Test: [1 +k
r
L(j)[ = 0
At the worst-case frequency
o
, L(j
o
) = L
r
+jL
i
; thus, [k
r
[ = (
_
L
2
r
+L
2
i
)
1
.
-
6
Re
Im
q
1
L(j)
-
6
Re
Im
q
1
L(j)
-
6
Re
Im
q
1
L(j)
89
Small Gain Theorem
J.C. Juang
M(s)

Suppose that M(s) 1H

, then the system is well-posed and internally stable for


all (s) 1H

if and only if
|M(s)|

|(s)|

< 1
Critical stability occurs when
det(I M(j)) = 0
for a frequency and an uncertainty .
Since M(s) is stable, the condition for robust stability can be stated as
det(I M(j)) ,= 0
for all and . This is guaranteed if

max
(M(j)(j)) < 1
Note that
sup

max
(M) sup

[
max
(M)
max
()]
sup

max
(M) sup

max
()
|M|

||

On the other hand, we can construct the worst-case perturbation as follows. As-
sume that
M
= |M|

is achieved at the frequency


o
and let M(j
o
) = U
M
V

(singular value decomposition). Then, by assigning =


1
M
V U

, one has
|M(s)|

|(s)|

=
M
1

M
= 1
and
det(I M(j
o
)) = det(I
1
M
U
M
V

V U

) = 0
90
The small gain theorem is conservative.
However, when the perturbation is unstructured, complex, and independent, the
small gain theorem gives a necessary and sucient condition for robust stability.
Assume that A is stable and the perturbed system has a state transition matrix
A +BEC for some known B and C (structure) and unknown E.
The perturbed system can be arranged as a feedback system
B (sI A)
1
C
E
- - -

Thus, a sucient condition for robust stability is that, according to the small gain
theorem,

max
(E) <
1
|C(sI A)
1
B|

91
Robustness Analysis (Lyapunov Approach)
J.C. Juang
The matrix A + E is stable if and only there exists a P = P
T
> 0 and Q = Q
T
> 0
such that the Lyapunov equation is satised:
(A +E)
T
P +P(A +E) +Q = 0
The Lyapunov approach for robustness analysis can be summarized as
1. The nominal system is
x = Ax
with A 1
nn
and the perturbed system is
x = (A +E)x
for some E.
2. Assume that A is stable, that is, there exists a P = P
T
> 0 and a Q = Q
T
> 0
such that
A
T
P +PA +Q = 0
3. Verify to what extent E can be varied such that x
T
Px remains a Lyapunov
function for the perturbed system A+E, i.e., (A+E)
T
P +P(A+E) is negative
semidenite.
It amounts to checking
(A +E)
T
P +P(A +E) = E
T
P +PE Q < 0
The system remains stable provided that

max
(E) <

min
(Q)
2
max
(P)
The inequality chain
Q
min
(Q)I > 2
max
(P)
max
(E)I
max
(E
T
P +PE)I E
T
P +PE
Factorizing E = [e
ij
] as E = U = [u
ij
] where = max
i,j
[e
ij
[ and [u
ij
[ < 1. Then

max
(E) =
max
(U) < n. A sucient condition for robust stability is
<

min
(Q)
2 n
max
(P)
The system is robustly stable provided that

max
(E
T
P +PE) <
min
(Q)
or

max
(U
T
[P[ +[P[U) <
min
(Q)
where [P[ is constructed from P by taking the absolute value componentwise.
92
Recall that, after a similarity transformation of T, the target Lyapunov equation
becomes
(T
1
AT +T
1
ET)
T
(T
T
PT) + (T
T
PT)(T
1
AT +T
1
ET) + (T
T
QT) = 0
Thus, a sucient condition for robust stability is

max
(E) <

min
(T
T
QT)
2
max
(T)
max
(T
1
)
max
(T
T
PT)
If the perturbation matrix E can be decomposed as
E =
m

i=1
d
i
E
i
for some known E
i
s (structure of E) and unknown d
i
s. Then robust stability if

max
(
m

i=1
d
i
(E
T
i
P +PE
i
)) <
min
(Q)
Robust stability if
max
i
[d
i
[ <

min
(Q)

max
(

m
i=1
[E
T
i
P +PE
i
[)
One can factorize
m

i=1
d
i
(E
T
i
P +PE
i
) =
_
E
T
1
P +PE
1
E
T
2
P +PE
2

_
_
d
1
I
d
2
I
.
.
.
_
_
Also,

max
(
_
_
d
1
I
d
2
I
.
.
.
_
_
) =

_
m

i=1
d
2
i
Thus, robust stability if

_
m

i=1
d
2
i
<

min
(Q)

max
(
_
E
T
1
P +PE
1
E
T
2
P +PE
2

)
Robust analysis based on the Lyapunov equation is, in general, simple to compute
the bound on the perturbation. The drawback is that the bound may be too
conservative.
Optimization with respect to Q and T may reduce conservativeness.
93
Structure of and Robustness Bounds
J.C. Juang
The robustness bound depends greatly on the structure of the uncertainty
M(s)

The condition is to have


det(I M) ,= 0
Fix a frequency, M is a complex matrix; M C
mm
.
1. When is totally unstructured, the measure is

max
(M)
that is, the system is stable for all s such that
max
(M) || < 1.
2. When = cI with c C, the measure is
(M)
3. When = rI with r R, the measure is
max

i
(M) real
[
i
(M)[
4. When = rI with r R and r > 0, the measure is
max

i
(M) real;
i
(M)>0

i
(M)
94
Structured Singular Value
J.C. Juang
Structured uncertainty:
Number of blocks
Type of each block
Dimensions
Let the uncertainty structure be characterized as
= diag
_

1
I
r
1

2
I
r
2

s
I
r
2

s+1

s+2

s+f

with
i
C for 1 i s (s scalar blocks) and
s+j
C
m
j
m
j
for 1 j f (f
full blocks). Then, the structured singular value ( value) of M C
mm
with
m =

s
i=1
r
i
+

f
j=1
m
j
is dened as

(M) =
_
0, (I M) is nonsingular
1
min{max():,det(IM)=0}
, otherwise
Properties
1.

(M) = [[

(M)
2.

(M) is not a norm since it violates the triangle inequality


3. Let B

= :
max
() 1, then

(M) = max
B

(M)
4. If = I : C, i.e., s = 1 and f = 0, then

(M) = (M)
5. If = C
mm
, i.e., s = 0 and f = 1, then

(M) =
max
(M)
6. In general
(M)

(M)
max
(M)
7. Dene the two sets
Q = Q : Q

Q = I
m

T = diag
_
D
1
D
s
d
s+1
I
m
1
s
s+f
I
m
f

, D
i
C
r
i
r
i
,
D
i
= D

i
> 0, d
s+j
R, d
s+j
> 0
Then, for any , Q Q, and D T,
Q

Q , Q , Q , and D
1
2
= D
1
2
95
8. For all Q Q and D T,

(M) =

(MQ) =

(QM)

(D
1
2
MD

1
2
)
9.
max
QQ
(QM) max
B

(M) =

(M) inf
DD

max
(D
1
2
MD

1
2
)
10.
max
(D
1
2
MD

1
2
) is convex in ln(D
1
2
).
11. Linear matrix inequality

max
(D
1
2
MD

1
2
) <

max
(D

1
2
M

D
1
2
D
1
2
MD

1
2
) <
2
D

1
2
M

D
1
2
D
1
2
MD

1
2

2
I < 0
M

DM
2
D < 0
12. Computation (lower bound)
max
QQ
(QM) =

(M)

M
Q
-

-
Nonconvex, local maximum
Power method
Projection method
13. Computation (Upper bound)

(M) inf
DD

max
(D
1
2
MD

1
2
)
D
1
2
D

1
2
D

1
2 M
D
1
2
- -

-
Convex optimization
Frobenius
Perron eigenvalue
Linear matrix inequality
The bound is tight if 2s +f 3.
96
Main Loop Theorem
J.C. Juang
For all
1

1
with
max
(
1
)
1
, the perturbed closed-loop system is well-posed
and stable if and only if
sup
(s)0

1
(M
22
) <

1
M =
_
M
11
M
12
M
21
M
22
_

-
- -
For all
1

1
with
max
(
1
)
1
, the perturbed closed-loop system is well-posed,
stable, and
sup
(s)0

2
T
l
(M,
1
) <
if and only if
sup
(s)0

(M) <
=
_

1
0
0
2
_
M =
_
M
11
M
12
M
21
M
22
_

-
-

97
Kharitonov Approach
J.C. Juang
Stability can be checked by the roots of characteristic equation det(I M) = 0.
Let the characteristic polynomial be
p(s, q) =
n

i=0
q
i
s
i
= q
n
s
n
+q
n1
s
(n1)
+ +q
1
s +q
0
where q
i
s are in the interval
q Q or q

i
q
i
q
+
i
i
The polynomial is called an interval polynomial p(s, q) =

n
i=0
[q

i
, q
+
i
] s
i
.
Kharitonov Theorem: The interval polynomial is robustly stable, i.e., all the roots
are in the LHP for all q
i
s in the interval, if and only if its four invariant degree
Kharitonov polynomials
k
1
(s) = q

0
+q

1
s +q
+
s
s
2
+q
+
3
s
3
+q

4
s
4
+q

5
s
5
+q
+
6
s
6
+
k
2
(s) = q
+
0
+q
+
1
s +q

s
s
2
+q

3
s
3
+q
+
4
s
4
+q
+
5
s
5
+q

6
s
6
+
k
3
(s) = q
+
0
+q

1
s +q

s
s
2
+q
+
3
s
3
+q
+
4
s
4
+q

5
s
5
+q

6
s
6
+
k
4
(s) = q

0
+q
+
1
s +q
+
s
s
2
+q

3
s
3
+q

4
s
4
+q
+
5
s
5
+q
+
6
s
6
+
are stable.
Proof: Fix a frequency
o
and consider the range p(j
o
, Q) = p(j
o
, q) : q Q.
p(j
o
, Q) = q
0
+q
1
(j
o
) +q
2
(j
o
)
2
+q
3
(j
o
)
3
+q
4
(j
o
)
4
+
= q
0
q
2

2
o
+q
4

4
o
q
6

6
o
+ +j(q
1

o
q
3

3
o
+q
5

5
o
)
Note that (Re and Im stand for real
part and imaginary parts, respec-
tively)
min
qQ
Re p(j
o
, q) = Re k
1
(j
o
)
max
qQ
Re p(j
o
, q) = Re k
2
(j
o
)
min
qQ
Imp(j
o
, q) =
_
Imk
3
(j
o
),
o
0
Imk
4
(j
o
),
o
< 0
max
qQ
Imp(j
o
, q) =
_
Imk
4
(j
o
),
o
0
Imk
3
(j
o
),
o
< 0
-
6
Re
Im
k
1
k
3
k
4
k
2

o
0
Thus, the range of p(j
o
, q) in the complex is bounded by four vertices k
1
(j
o
),
k
2
(j
o
), k
3
(j
o
), and k
4
(j
o
). When this is true for every frequency, the interval
polynomial is stable.
98
Controller Design
99
State Feedback Design
J.C. Juang
Assume that the state is accessible for feedback.
In terms of the standard design formulation:
x = Ax +B
0
u +B
1
w
y = x
z = C
1
x +D
10
u +D
11
w
Typically, the feedback control is assumed to be a state feedback
u = Fx
for some feedback gain F.
The closed-loop map becomes
x = (A +B
0
F)x +B
1
w
z = (C
1
+D
10
F)x +D
11
w
Issues in the feedback design
1. Stability
2. Robustness
3. Performance
4. Boundedness of z
5. Boundedness of the map from w to z
Stability The closed-loop (A+B
0
F) is stable if there exists a positive denite matrix
X such that
X(A +B
0
F) + (A +B
0
F)
T
X < 0
Performance The H
2
-norm of the map from w to z is the square root of traceB
T
1
XB
1
where X is the observability gramian
X(A +B
0
F) + (A +B
0
F)
T
X + (C
1
+D
10
F)
T
(C
1
+D
10
F) = 0
Robustness The H

-norm of the map from w to z is bounded above by if there


exists a positive denite matrix X such that
X(A +B
0
F) + (A +B
0
F)
T
X + (C
1
+D
10
F)
T
(C
1
+D
10
F)
+[XB
1
+ (C
1
+D
10
F)
T
D
11
](
2
I D
T
11
D
11
)
1
[B
T
1
X +D
T
11
(C
1
+D
10
F)] = 0
100
Quadratic Stabilization
J.C. Juang
Key Concept: Construct a quadratic Lyapunov function to govern the stability
(and robustness) of the closed-loop system.
Assume that the Lyapunov function is
V (x) = x
T
Px
for some positive denite matrix P.
The time derivative of V (x) is

V (x) = x
T
P(A +B
0
F)x +x
T
(A +B
0
F)
T
Px
= x
T
[PA +A
T
P +PB
0
F +F
T
B
T
0
P]x
A necessary and sucient condition for stability is that
PA +A
T
P +PB
0
F +F
T
B
T
0
P < 0
It suces to have
PA +A
T
P +PB
0
F +F
T
B
T
0
P +F
T
RF < 0
for some positive denite R.
The latter can be manipulated to become
(F +R
1
B
T
0
P)
T
R(F +R
1
B
T
0
P) +PA +A
T
P PB
0
R
1
B
T
0
P < 0
By assigning
F = R
1
B
T
0
P
with P satisfying
P > 0 and PA +A
T
P PB
0
R
1
B
T
0
P < 0
a stabilizing state feedback gain is constructed.
Equivalently (by treating P = X
1
), a stabilizing state feedback gain can be de-
signed by solving an X such that
X > 0
and
AX +XA
T
B
0
R
1
B
T
0
< 0
This is the linear matrix inequality (LMI) approach.
101
Matched and Unmatched Uncertainties
J.C. Juang
Consider the system
x = (A +
A
)x + (B
0
+
B
)u
for some uncertainties
A
and
B
with

A
= B
0
E
A
and
B
= B
0
E
B
for some unknown E
A
and E
B
.
Uncertainties of the above form, i.e., the uncertainties are in the range space of
B
0
, are said to satisfy the matching condition.
Assume that
1. The matrix A is asymptotically stable
2. The uncertainty E
a
is bounded: |E
a
| e
A
3. The uncertainty E
b
is bounded: |E
b
| e
B
< 1
Again, consider the Lyapunov function
V (x) = x
T
Px
and the Lyapunov equation
A
T
P +PA +Q = 0
for some positive denite Q and positive denite P.
The state feedback gain F is assumed to be
u = Fx = r
1
B
T
0
Px
Here, r is a positive scalar to be determined.
The derivative of V (x) is

V (x) = x
T
Px +x
T
P x
= x
T
Qx r
1
x
T
PB
0
(2I +E
B
+E
T
B
)B
T
0
Px +x
T
(PB
0
E
A
+E
T
A
B
T
0
P)x
Note that
Q
min
(Q)I
PB
0
(2I +E
B
+E
T
B
)B
T
0
P 2(1 e
B
)PB
0
B
T
0
P
and
PB
0
E
a
+E
T
a
B
T
0
P PB
0
B
T
0
P +E
T
a
E
a
PB
0
B
T
0
P +e
2
A
I
Thus, the derivative of the Lyapunov function

V (x)
min
(Q)x
T
x 2(1 e
B
)r
1
x
T
PB
0
B
T
0
Px +x
T
PB
0
B
T
0
Px +e
2
A
x
T
x
= x
T
[e
2
A

min
(Q)]x +x
T
PB
0
[1 2r
1
(1 e
B
)]B
T
0
Px
102
By selecting Q and r such that

min
(Q) e
2
A
and r 2(1 e
B
)
the system is stable for all permissible
A
and
B
.
It is always possible to stabilize bounded, matched uncertainties provided that the
state is accessible.
Consider the system
x = (A +
A
)x +B
0
u
where

A
=

i
E
i
F
i
where both E
i
and F
i
are known (structure of the uncertainty) and
i
is unknown.
A bound on
i
, however, is available: [
i
[
i
.
The uncertainty is unmatched in the sense that the uncertainty is not necessarily
in the range space of B
0
.
A robustly stabilizing state feedback gain F can be constructed as
F = R
1
B
T
0
P
where P is the positive denite solution to
A
T
P +PA +Q2PB
0
R
1
B
T
0
P +P(

i

i
E
i
E
T
i
)P +

i

i
F
T
i
F
i
= 0
for some positive denite Q and R.
To verify the above claim, check the Lyapunov function
V (x) = x
T
Px
and its time derivative

V (x) =
x
T
Qx

i

i
x
T
(E
T
i
P F
i
)
T
(E
T
i
P F
i
)x
103
Algebraic Riccati Equation
J.C. Juang
Let A, Q, and R be real nn matrices with Q and R symmetric. Dene the 2n2n
matrix
H :=
_
A R
Q A
T
_
A matrix of this form is called a Hamiltonian matrix.
The Hamiltonian matrix is related to the following algebraic Riccati equation
A
T
X +XA +XRX +Q = 0
The spectrum of H, spec(H), is symmetric with respect to the imaginary axis. To
prove this, introduce the 2n 2n matrix
J :=
_
0 I
I 0
_
having the property J
2
= I. Then,
J
1
HJ = JHJ = H
T
so H and H
T
are similar. Thus,
spec(H) spec(H
T
) spec(H)
Assume H has no eigenvalues on the imaginary axis. Then it must have n eigen-
values in 1s < 0 and n in 1s > 0. Thus the two spectral subspaces,

(H) and

+
(H), have dimension n. We will be interested in the stable subspace

(H) and
its associated properties. Partitioning the subspace,

(H) = img
_
X
1
X
2
_
where X
1
and X
2
are in R
nn
. If X
1
is nonsingular, i.e., if the two subspaces

(H), img
_
0
I
_
are complementary, we can dene
X = X
2
X
1
1
to get

(H) = img
_
X
1
X
2
_
= img
_
I
X
_
X
1
= img
_
I
X
_
Notice that X is uniquely determined by H, we can thus have
X = Ric(H)
104
Ric is a function R
2n2n
R
nn
which maps H to X where

(H) = img
_
I
X
_
The domain of Ric, denoted by domRic, consists of Hamiltonian matrices H with
two properties:
1. H has no eigenvalues on the imaginary axis,
2.

(H) and img


_
0
I
_
are complementary.
Suppose H domRic and X = Ric(H). Then
1. X is symmetric
2. X satises the equation
A
T
X +XA +XRX Q = 0
3. A +RX is stable
Proof of 1.
Let X
1
and X
2
be as above, i.e.,

(H) = img
_
X
1
X
2
_
. It is claimed that
X
T
1
X
2
is symmetric
To prove this, note that there exists a stable matrix H

in R
nn
such that
H
_
X
1
X
2
_
=
_
X
1
X
2
_
H

Premultiply this equation by


_
X
1
X
2
_
T
J
to get
_
X
1
X
2
_
T
JH
_
X
1
X
2
_
=
_
X
1
X
2
_
T
J
_
X
1
X
2
_
H

Now, JH is symmetric; hence so is the left hand side of the above equation. Hence
so is the right hand side:
(X
T
1
X
2
+X
T
2
X
1
)H

= H
T

(X
T
1
X
2
+X
T
2
X
1
)
T
= H
T

(X
T
1
X
2
+X
T
2
X
1
)
But this is a Lyapunov equation. Since H

is stable, the unique solution is


X
T
1
X
2
+X
T
2
X
1
= 0
We have
XX
1
= X
2
Premultiply this by X
T
1
, we have X
T
1
XX
1
is symmetric. Hence, X is symmetric
since X
1
is nonsingular.
105
Bounded Real Lemma
J.C. Juang
Suppose that M(s) = D+C(sIA)
1
B with A asymptotically stable. Then |M(s)|

<
if and only if
1. |D| <
2. There exists X = X
T
satisfying the Riccati equation
A
T
X +XA +C
T
C + (XB +C
T
D)(
2
I D
T
D)
1
(B
T
X +D
T
C) = 0
such that A +BR
1
(D
T
C +B
T
X) is asymptotically stable.
Furthermore, when such an X exists, X 0.
Proof
1. It amounts to performing a spectral decomposition, i.e., nding a stable N(s)
such that

2
I M

(s)M(s) = N

(s)N(s)
2. Recall that

2
I M

(s)M(s)
s
=
_
_
A 0 B
C
T
C A
T
C
T
D
D
T
C B
T

2
I D
T
D
_
_
Through a similarity transformation of
_
I 0
X I
_
s
=
_
_
A 0 B
C
T
C XA A
T
X A
T
XB +C
T
D
D
T
C +B
T
X B
T

2
I D
T
D
_
_
3. Let N(s)
s
=
_
A B
C
x
D
x
_
. Then
N

(s)N(s)
s
=
_
A
T
C
T
x
B D
T
x
__
A B
C
x
D
x
_
s
=
_
_
A 0 B
C
T
x
C
x
A
T
C
T
x
D
x
D
T
x
C
x
B
T
D
T
x
D
x
_
_
4. Comparing the coecients
D
T
x
D
x
=
2
I D
T
D
D
T
x
C
x
= B
T
X D
T
C
C
T
x
C
x
= C
T
C XA A
T
X
5. This leads to the Riccati equation.
A dual version is to nd a positive semidenite matrix Y such that
AY +Y A
T
+BB
T
+ (Y C
T
+BD
T
)(
2
I DD
T
)
1
(CY +DB
T
) = 0
106
Riccati Equation in Control Design
J.C. Juang
Theorem: Suppose H =
_
A BB
T
C
T
C A
T
_
with (A, B) stabilizable and (C, A) de-
tectable. Then H domRic and Ric(H) 0. If (C, A) is observable, Ric(H) > 0.
The corresponding algebraic Riccati equation is
A
T
X +XA XBB
T
X +C
T
C = 0
Proof:
1. H is in the domain of Riccati.
(a) H has no eigenvalues on the imaginary axis.
Let j and
_
x
z
_
be an eigenpair. Then
(A j)x = BB
T
z and (A j)

z = C
T
Cx
or z, (A j)x) = |B
T
z|
2
and x, (A j)

z) = |Cx|
2
. Thus, |Cx|
2
=
z, (A j)x) = |B
T
z|
2
. This implies that B
T
z = 0 and Cx = 0.
The assumptions on stabilizability and detectability are violated since
z

_
A jI B

= 0 and
_
A jI
C
_
x = 0. Hence, there are no eigenval-
ues on the j axis.
(b) The two subspaces

(H) and img


_
0
I
_
are complementary. Recall that

(H) = img
_
X
1
X
2
_
and H
_
X
1
X
2
_
=
_
X
1
X
2
_
H

. The two subspaces are com-


plementary is equivalent to that X
1
is nonsingular, i.e., kerX
1
= 0.
The subspace kerX
1
is H

-invariant. To see this, note that


AX
1
BB
T
X
2
= X
1
H

Let x kerX
1
, then
x

2
(AX
1
BB
T
X
2
)x = x

2
X
1
H

x
or (by recalling that X

1
X
2
is symmetric) x

2
BB
T
X
2
x = x

1
X
2
H

x, or
x

2
BB
T
X
2
x = 0, or B
T
X
2
x = 0. This implies that
X
1
H

x = 0
That is, H

x is in the kernel of X
1
.
The fact that X
1
is nonsingular is proved by contradiction. Suppose
kerX
1
,= 0, then H

[
kerX
1
has an eigenvalue and eigenvector x such that
H

x = x 1 < 0, 0 ,= x kerX
1
107
Note that C
T
CX
1
A
T
X
2
= X
2
H

. Postmultiplying by x gives
(A
T
+)X
2
x = 0
Since B
T
X
2
x = 0, we have
x

2
_
A +

B

= 0
Stabilizability implies that X
2
x = 0. But if X
1
x = 0 and X
2
x = 0, then
x = 0, a contradiction.
2. To prove that X 0. Set X = Ric(H). Then
A
T
X +XA XBB
T
X +C
T
C = 0
or
(A BB
T
X)
T
X +X(A BB
T
X) + (XBB
T
X +C
T
C) = 0
Note that A BB
T
X is stable, we have
X =
_

0
e
(ABB
T
X)
T
t
(XBB
T
X +C
T
C)e
(ABB
T
X)t
dt
Since XBB
T
X +C
T
C is positive semidenite, so is X.
3. The statement that X > 0 when (C, A) is observable is proved by contradiction.
Suppose that there exists a stable unobservable mode with 1 < 0 such
that
_
A I
C
_
x = 0 for a nonzero vector x.
From the Riccati equation
2(1) x

Xx x

XBB
T
Xx = 0
Thus, x

Xx = 0. That is X is singular.
Conversely, if there exists a nonzero vector x kerX, then the Riccati
equation implies that Cx = 0 and XAx = 0.
The space kerX is A-invariant. There then exists a such that x = Ax =
(A BB
T
X)x and Cx = 0 with 1 < 0. That is, there exists a stable,
unobservable mode.
108
Linear Quadratic Regulator
J.C. Juang
Consider the linear system
x = Ax +B
0
u , x(0) = x
o
and the quadratic cost functional to be minimized
_

0
x
T
Qx +u
T
Ru dt
Theorem: Assume that
1. The pair (A, B
0
) is stabilizable.
2. The matrix Q is symmetric and positive semidenite.
3. The pair (C, A) is detectable where C satises Q = C
T
C, i.e., C is a Cholesky
factor of Q.
4. The matrix R is symmetric and positive denite.
Then, the optimal, stabilizing control exists and is of the state feedback form
u = Fx
where the state feedback gain F is
F = R
1
B
T
0
X
with X satisfying the algebraic Riccati equation
X = Ric
_
A B
0
R
1
B
T
0
C
T
C A
T
_
or
A
T
X +XA +QXB
0
R
1
B
T
0
X = 0
Q governs the rate of convergence of the state and R controls the extent of control
activity.
Stabilizable = the integral is nite; Observable = guarantee x(t) approaches
zero as t goes to innity; R positive denite = no impulsive controls.
The assumptions imply that
_
A B
0
R
1
B
T
0
C
T
C A
T
_
domRic and X 0.
Dene
z =
_
C
0
_
x +
_
0
R
1
2
_
u = C
1
x +D
10
u
Then, the cost function becomes
_

0
x
T
Qx +u
T
Ru dt =
_

0
z
T
z dt = |z|
2
2
109
Approach 1
Dene the Hamiltonian
H(x, u, p) =
1
2
(x
T
Qx +u
T
Ru) +p
T
(Ax +B
0
u)
where p is the costate.
Optimality implies that
H
u
= 0 = Ru +B
T
0
p
H
p
= x = Ax +B
0
u
H
x
= p = Qx +A
T
p
Thus, the optimal control
u = R
1
B
T
0
p
and
_
x
p
_
=
_
A B
0
R
1
B
T
0
Q A
T
_ _
x
p
_
Assume that the state and costate are related
p = Xx
for some X. Then,
u = R
1
B
T
0
Xx
x = (A B
0
R
1
B
T
0
X)x
X x = (QA
T
X)x
The last equation then leads to the algebraic Riccati equation
A
T
X +XA +QXB
0
R
1
B
T
0
X = 0
Approach 2
The closed-loop system is
x = (A +B
0
F)x , x(0) = x
o
z = (C
1
+D
10
F)x
which can also be rewritten as
x = (A +B
0
F)x +x
o
(t) , x(0

) = 0
z = (C
1
+D
10
F)x
Let X be the observability gramian of the closed-loop system, that is,
(A +B
0
F)
T
X +X(A +B
0
F) + (C
1
+D
10
F)
T
(C
1
+D
10
F) = 0
110
Consider the Lyapunov function V (x) = x
T
Xx and its derivative

V (x) = x
T
Xx +x
T
X x
= (Ax +B
0
u)
T
Xx +x
T
X(Ax +B
0
u)
= [(A +B
0
F)x +B
0
(u Fx)]
T
Xx +x
T
X[(A +B
0
F)x +B
0
(u Fx)]
= x
T
[(A +B
0
F)X +X(A +B
0
F)]x + 2B
0
(u Fx), Xx)
= x
T
(C
1
+D
10
F)
T
(C
1
+D
10
F)x + 2u Fx, B
T
Xx)
= |C
1
x +D
10
Fx|
2
2u Fx, RFx)
= |C
1
x|
2
Fx, RFx) 2u, RFx) + 2Fx, RFx)
= [C
1
x|
2
+Fx, RFx) 2u, RFx) +u, Ru) u, Ru)
= |C
1
x|
2
|D
10
u|
2
+|R
1
2
(u Fx)|
2
= |z|
2
+|R
1
2
(u Fx)|
2
Integrating from 0 to gives
|z|
2
2
= x
T
o
Xx
o
+|R
1
2
(u Fx)|
2
2
To minimize |z|
2
2
the optimal control u = Fx is unique and the resulting cost
is
|z|
2
2
= x
T
o
Xx
o
Note that this selection also leads to stability in the sense of Lyapunov.
The design equation (Riccati equation) is obtained from the analysis equation
(observability gramian equation).
111
Feedback Properties of Optimal State Feedback
J.C. Juang
State equation
x = Ax +B
0
u
State feedback
F = R
1
B
T
0
X
where X is the solution of
XA +A
T
X XB
0
R
1
B
T
0
X +Q = 0
with R positive denite and Q positive semidenite.
The algebraic Riccati equation can be rewritten as
XA +A
T
X F
T
RF +Q = 0
or
X(sI A) + (sI A)
T
X +F
T
RF = Q
Hence,
B
T
0
(sI A)
T
XB
0
+B
T
0
X(sI A)
1
B
0
+B
T
0
(sI A)
T
F
T
RF(sI A)
1
B
0
= B
T
0
(sI A)
T
Q(sI A)
1
B
0
or
B
T
0
(sI A)
T
F
T
R RF(sI A)
1
B
0
+B
T
0
(sI A)
T
F
T
RF(sI A)
1
B
0
= B
T
0
(sI A)
T
Q(sI A)
1
B
0
Completing the square gives the factorization equation
[I B
T
0
(sI A)
T
F
T
]R[I F(sI A)
1
B
0
] = R +B
T
0
(sI A)
T
Q(sI A)
1
B
0
When the matrix R is selected to be R = rI, then the Kalman inequality is obtained
[I B
T
0
(sI A)
T
F
T
] [I F(sI A)
1
B
0
] I
The state feedback block diagram
-
F = R
1
B
T
0
X
-
(sI A)
1
B
0
-
u x
The function I F(sI A)
1
B
0
is the return dierence at the control input node.
112
The sensitivity function at the control input node is
S(s) = (I F(sI A)
1
B)
1
The sensitivity function has a norm that is less than 1 according to the Kalman
inequality. That is, the sensitivity is improved.
For SISO systems, this inequality implies that the Nyquist plot of F(jI A)
1
B
0
never enters a unit disk centered at the critical point 1 + j0. It can then be
established that
The gain margin is and gain reduction is 0.5
The phase margin is at least 60
o
.
-
6
&%
'$
q
1
R
I
Another inequality is obtained by taking the inverse of I F(sI A)
1
B
0
and gives
[I +B
T
0
(sI A
T
F
T
B
T
0
)
1
F
T
][I +F(sI A B
0
F)
1
B
0
] I
Essentially, this gives another robustness for the linear quadratic optimal con-
troller. The system is robust with respect to all satisfying ||

< 1.
-
F = R
1
B
T
0
X
-
(sI A)
1
B
0
- d -
u x
-

?
For MIMO systems, the inequality implies that

min
(I F(jI A)
1
B
0
) 1
and the same conclusions are reached for each loop.
113
Asymptotic Linear Quadratic Design
J.C. Juang
Problem: given
x = Ax +B
0
u , x(0) = x
o
Find a state feedback gain F such that
|z|
2
2
=
_

0

2
x
T
Qx +u
T
Ru dt
where z =
_
Q
1
2
0
_
x +
_
0
R
1
2
_
u, is minimized.
Question: How to select Q and R?
Design Riccati equation
XA +A
T
X +
2
QXB
0
R
1
B
T
0
X = 0
Optimal control
u = Fx = R
1
B
T
0
Xx
Factorization (Kalman) equation
[I B
T
0
(sI A
T
)
1
F
T
]R[I F(sI A)
1
B
0
] = R +
2
B
T
0
(sI A
T
)
1
Q(sI A)
1
B
0
The open loop characteristic polynomial p
o
(s) is
p
o
(s) = det(sI A)
The closed-loop characteristic polynomial p
c
(s) is
p
c
(s) = det(sI A B
0
F)
An equation involving the Hamiltonian matrix
_
I 0
X I
_ _
sI A B
0
R
1
B
T
0

2
Q sI +A
T
_ _
I 0
X I
_
=
_
sI A B
0
F B
0
R
1
B
T
0
0 sI +A
T
+F
T
B
T
0
_
Thus,
(1)
n
p
c
(s)p
c
(s) = det
__
sI A B
0
R
1
B
T
0

2
Q sI +A
T
__
Let Q = C
T
1
C
1
and R = I, then
det
__
sI A B
0
R
1
B
T
0

2
Q sI +A
T
__
= det(sI A) det[sI +A
T

2
Q(sI A)
1
B
0
RB
T
0
]
= det(sI A) det(sI +A
T
) det[I
2
(sI +A
T
)
1
C
T
1
C
1
(sI A)
1
B
0
RB
T
0
]
= (1)
n
p
o
(s) p
o
(s) det[I +
2
M
T
(s)M(s)]
where M(s) = C
1
(sI A)
1
B
0
.
114
Clearly, when 0,
p
c
(s)p
c
(s) p
o
(s)p
o
(s)
Thus, as 0, the zeros of p
c
(s) approaches the stable zeros of p
o
(s) or the
reection (through the j axis) of the unstable zeros of p
o
(s). In other words, as
0, the closed-loop eigenvalues approach the stable open loop eigenvalues and
the reections of the unstable open loop eigenvalues.
In the SISO case, let
M(s) = C
1
(sI A)
1
B
0
=
m(s)
p
o
(s)
with the degree of m(s), l, strictly less than that of p
o
(s), n. Then,
p
c
(s)p
c
(s) = p
o
(s)p
o
(s) +
2
m(s)m(s)
As in the SISO case, 2l zeros of p
c
(s)p
c
(s) approaches the zeros of m(s)m(s)
and the rest, 2(n l), zeros tend to the nonzero roots of
(1)
n
s
2n
+ (1)
l

2
s
2l
= 0
The latter is the Butterworth pattern.
In the MIMO case, as ,
1. From the Riccati equation,
2
Q XB
0
R
1
B
T
0
X 0, that is,
2
C
T
1
C
1
F
T
RF.
Thus,

1
R
1
2
F UC
1
for some orthogonal U.
2. From the Kalman identity, [IB
T
0
(sIA
T
)
1
F
T
]R[IF(sIA)
1
B
0
]
2
B
T
0
(sI
A
T
)
1
C
T
1
C
1
(sI A)
1
B
0
. Thus,

i
R
1
2
[I F(jI A)
1
B
0
]R

1
2

i
C
1
(jI A)
1
B
0
R

1
2

3. At high frequency, F(jI A)


1
B
0

FB
0
j
.
115
H
2
State Feedback Design
J.C. Juang
Given the system
x = Ax +B
0
u +B
1
w
z = C
1
x +D
10
u +D
11
w
with the following assumptions
The pair (A, B
0
) is controllable = the existence of a stabilizing control.
The matrix D
10
is of full column rank = devoid of impulsive control.
The matrix
_
A jI B
0
C
1
D
10
_
is of full column rank for all s = existence of
optimal solution.
Working assumptions:
1. D
T
10
D
10
= I. By scaling: u = (D
T
10
D
10
)

1
2
u

.
2. D
T
10
C
1
= 0. By precompensation: u = u

(D
T
10
D
10
)
1
D
T
10
C
1
x
The objective is to nd a state feedback such that the H
2
-norm of the closed-loop
system from w to z is minimized.
Let F be the state feedback gain, u = Fx, the closed-loop system becomes
x = (A +B
0
F)x +B
1
w
z = (C
1
+D
10
F)x +D
11
w
For H
2
-norm to be bounded, D
11
is zero.
The H
2
-norm from w to z can be computed as
|T
zw
|
2
2
= trace B
T
1
XB
1
where X is the observability gramian of (A +B
0
F, C
1
+D
10
F), i.e.,
(A +B
0
F)
T
X +X(A +B
0
F) + (C
1
+D
10
F)
T
(C
1
+D
10
F) = 0
which can be written as
(A +B
0
F)
T
X +X(A +B
0
F) +C
T
1
C
1
+F
T
F = 0
Rearranging the observability gramian equation gives
(F
T
+XB
0
)(F +B
T
0
X) +A
T
X +XA +C
T
1
C
1
XB
0
B
T
0
X = 0
Monotonic property of the Riccati solution: Let X
1
and X
2
be the stabilizing
solution to
A
T
X
1
+X
1
A +C
T
1
C
1
X
1
B
0
B
T
0
X
1
= 0
A
T
X
2
+X
2
A +C
T
1
C
1
X
2
B
0
B
T
0
X
2
+M = 0
116
with M = M
T
0. Then,
(A B
0
B
T
0
X
1
)
T
X
1
+X
1
(A B
0
B
T
0
X
2
) +C
T
1
C
1
+X
1
B
0
B
T
0
X
2
= 0
(A B
0
B
T
0
X
1
)
T
X
2
+X
2
(A B
0
B
T
0
X
2
) +C
T
1
C
1
+X
1
B
0
B
T
0
X
2
+M = 0
Hence,
(A B
0
B
T
0
X
1
)
T
(X
1
X
2
) + (X
1
X
2
)(A B
0
B
T
0
X
2
) M = 0
Since both A B
0
B
T
0
X
1
and A B
0
B
T
0
X
2
are stable and M is positive semidenite,
X
1
X
2
.
For all X that satises the observability gramian equation, the one that minimizes
trace B
T
1
XB
1
(= trace XB
1
B
T
1
) is obtained by setting
F = B
T
0
X
and having X solving
A
T
X +XA +C
T
1
C
1
XB
0
B
T
0
X = 0
117
LMI Approach for H
2
Control
J.C. Juang
H
2
State Feedback Design. Given
x = Ax +B
0
u +B
1
w
z = C
1
x +D
10
u
The H
2
control is to nd a state feedback gain F such that the H
2
-norm of the
closed-loop system is minimized. The closed-loop system is
x = (A +B
0
F)x +B
1
w
z = (C
1
+D
10
F)x
The 2-norm can be computed as
inf trace B
T
1
XB
1
such that
(A +B
0
F)
T
X +X(A +B
0
F) + (C
1
+D
10
F)
T
(C
1
+D
10
F) < 0
which is equivalent to (Letting Y = X
1
)
inf trace B
T
1
Y
1
B
1
such that
Y (A +B
0
F)
T
+ (A +B
0
F)Y +Y (C
1
+D
10
F)
T
(C
1
+D
10
F)Y < 0
In other words, this is to minimize trace B
T
1
Y
1
B
1
subject to
Y > 0,
_
Y (A +B
0
F)
T
+ (A +B
0
F)Y Y (C
1
+D
10
F)
T
(C
1
+D
10
F)Y I
_
< 0
The latter is
_
Y A +A
T
Y Y C
T
1
C
1
Y I
_
+
_
B
0
D
10
_
FY
_
I 0

+
_
I
0
_
Y F
T
_
B
T
0
D
T
10

< 0
The condition reduces to
_
I
0
_
T
_
Y A
T
+AY Y C
T
1
C
1
Y I
_ _
I
0
_

< 0
and
_
B
0
D
10
_
T
_
Y A
T
+AY Y C
T
1
C
1
Y I
_ _
B
0
D
10
_

< 0
Note that
_
I
0
_

is
_
0
I
_
. Thus, the rst condition reduces to I < 0. Let
_
M
1
N
1
_
be a basis of the null space of
_
B
T
0
D
T
10

, then the second condition becomes


_
M
T
1
N
T
1

_
Y A
T
+AY Y C
T
1
C
1
Y I
_ _
M
1
N
1
_
< 0
Designating Q as a matrix that satises Q > B
T
1
Y
1
B
1
, the H
2
state feedback design
can be summarized as the nding of Y and Q such that
118
1. trace Q is minimized (or less than a prescribed bound).
2. Y > 0
3.
_
Q B
T
1
B
1
Y
_
> 0
4.
_
M
T
1
N
T
1

_
Y A
T
+AY Y C
T
1
C
1
Y I
_ _
M
1
N
1
_
< 0
119
H

State Feedback Design


J.C. Juang
Given the system
x = Ax +B
0
u +B
1
w
z = C
1
x +D
10
u +D
11
w
with the following assumptions
The pair (A, B
0
) is controllable
The matrix D
10
is of full rank
The matrix
_
A jI B
0
C
1
D
10
_
is of full rank
The matrix D
11
= 0
Working assumptions:
1. D
T
10
D
10
= I. By scaling: u = (D
T
10
D
10
)

1
2
u

.
2. D
T
10
C
1
= 0. By precompensation: u = u

(D
T
10
D
10
)
1
D
T
10
C
1
x
The objective is to nd a state feedback such that the H

-norm of the closed-loop


system from w to z is minimized.
Typically, the objective is restated as the nding of F such that |T
zw
|

is less than
.
Theorem: There exists a state feedback gain F such that the closed-loop map
under the state feedback satises the H

-norm constraint, i.e., A + B


0
F is stable
and the H

-norm of T
zw
(s) is less than a scalar if and only if there exists a positive
denite, stabilizing solution X to the algebraic Riccati equation
XA +A
T
X +C
T
1
C
1
+
2
XB
1
B
T
1
X XB
0
B
T
0
X = 0
The resulting state feedback gain and worst-case disturbance gain are, respectively,
u

= Fx = B
T
0
Xx
w

= Gx =
2
B
T
1
Xx
where w

and u

are interpreted as the worst-case exogenous signal and optimal


control signal, respectively.
Approach 1: It is required that
|z|
2
2
<
2
|w|
2
2
or
_

0
z
T
z
2
w
T
wdt < 0
120
Dene the objective function as
J =
_

0
z
T
z
2
w
T
wdt
The Hamiltonian becomes
H =
1
2
(z
T
z
2
w
T
w) +p
T
(Ax +B
0
u +B
1
w)
For optimality,
H
p
= x = Ax +B
0
u +B
1
w
H
x
= p = C
T
1
C
1
x +A
T
p
H
u
= 0 = u +B
T
0
p
H
w
= 0 =
2
w +B
T
1
p
Letting p = Xx then leads to F = B
T
0
X and G =
2
B
T
1
X. The matrix X is the
positive denite, stabilizing solution to the algebraic Riccati equation as stated.
Approach 2: For the closed-loop system (C
1
+D
10
F)(sI AB
0
F)
1
B
1
to be stable
and bounded by , the bounded real lemma entails that there exists a positive
denite matrix X such that
X(A +B
0
F) + (A +B
0
F)
T
X + (C
1
+D
10
F)
T
(C
1
+D
10
F) +
2
XB
1
B
T
1
X = 0
The bounded real Riccati equation can be rewritten as
(F
T
+XB
0
)(F +B
T
0
X) +XA +A
T
X +C
T
1
C
1
XB
0
B
T
0
X +
2
XB
1
B
T
1
X = 0
Fix , it can be shown, in view of the monotonic property of the Riccati solution,
that the optimal F is required to be F = B
T
0
X and X solves the Riccati equation
as stated.
Property: Let x
o
be the initial state of x. Then,
|z|
2
2

2
|w|
2
2
x
T
o
Xx
o
= |v|
2
2

2
|r|
2
2
where
r = w w

and v = u u

for all bounded (in the /


2
norm sense) w and u.
The property is verified by noting that
d/dt (x^T X x) = ẋ^T X x + x^T X ẋ
= (A x + B_0 u + B_1 w)^T X x + x^T X (A x + B_0 u + B_1 w)
= 2 γ^2 w^T w* - 2 u^T u* - x^T (C_1^T C_1 + γ^{-2} X B_1 B_1^T X - X B_0 B_0^T X) x
= 2 γ^2 w^T w* - 2 u^T u* - z^T z + u^T u - γ^2 w*^T w* + u*^T u*
= -z^T z + γ^2 w^T w - γ^2 (w - w*)^T (w - w*) + (u - u*)^T (u - u*)
121
Integrating from 0 to ∞ and applying the terminal condition gives the relationship.
Note that in the H_2 state feedback design,
||z||_2^2 - x_o^T X x_o = ||u - u*||_2^2
where u* = F x, with F being the H_2 optimal state feedback gain and X the solution of the H_2 Riccati equation. In the more general case,
||z||_2^2 - trace B_1^T X B_1 = ||u - u*||_2^2
Approach 3: Given
ẋ = A x + B_0 u + B_1 w
z = C_1 x + D_10 u + D_11 w
The closed-loop system is
ẋ = (A + B_0 F) x + B_1 w
z = (C_1 + D_10 F) x + D_11 w
To have the H_∞ norm bounded above by γ, there must exist a Y > 0 such that
[ Y(A + B_0 F)^T + (A + B_0 F)Y    B_1     Y(C_1 + D_10 F)^T ]
[ B_1^T                            -γI     D_11^T            ]
[ (C_1 + D_10 F)Y                  D_11    -γI               ]   < 0
In other words,
[ Y A^T + A Y    B_1     Y C_1^T ]   [ B_0  ]                     [ I ]
[ B_1^T          -γI     D_11^T  ] + [ 0    ] F Y [ I  0  0 ]   + [ 0 ] Y F^T [ B_0^T  0  D_10^T ]   < 0
[ C_1 Y          D_11    -γI     ]   [ D_10 ]                     [ 0 ]
Thus, the condition reduces to
[ I ]⊥^T  [ Y A^T + A Y    B_1     Y C_1^T ]  [ I ]⊥
[ 0 ]     [ B_1^T          -γI     D_11^T  ]  [ 0 ]     < 0
[ 0 ]     [ C_1 Y          D_11    -γI     ]  [ 0 ]
and
[ B_0  ]⊥^T  [ Y A^T + A Y    B_1     Y C_1^T ]  [ B_0  ]⊥
[ 0    ]     [ B_1^T          -γI     D_11^T  ]  [ 0    ]    < 0
[ D_10 ]     [ C_1 Y          D_11    -γI     ]  [ D_10 ]
Note that [ I; 0; 0 ]⊥ is [ 0  0; I  0; 0  I ]. Thus, the first condition reduces to
[ -γI     D_11^T ]
[ D_11    -γI    ]   < 0
122
Let [ M_1; N_1 ] be a basis of the null space of [ B_0^T  D_10^T ]; then
( [ B_0; 0; D_10 ]⊥ )^T = [ 0      I    0     ]
                          [ M_1^T  0    N_1^T ]
Hence, the second condition becomes
[ 0      I    0     ]  [ Y A^T + A Y    B_1     Y C_1^T ]  [ 0    M_1 ]
[ M_1^T  0    N_1^T ]  [ B_1^T          -γI     D_11^T  ]  [ I    0   ]    < 0
                       [ C_1 Y          D_11    -γI     ]  [ 0    N_1 ]
123
Output Injection
J.C. Juang
Given the description
ẋ = A x + B_1 w
y = C_0 x + D_01 w
how can the output y be injected into the state equation so that certain desired properties are obtained?
Let H be the output injection gain; the resulting closed-loop system is
ẋ = A x + B_1 w + H y = (A + H C_0) x + (B_1 + H D_01) w
The output injection is dual to the state feedback.
[Block diagram: State Feedback Design (plant with B_0, integrator 1/s, A, C_1, D_10 and feedback gain F).]
[Block diagram: Output Injection Design (plant with B_1, integrator 1/s, A, C_0, D_01 and injection gain H).]
Separation principle.
The state feedback gain F and output injection gain H can be separately designed.
When both A + B_0 F and A + H C_0 are stable, the closed-loop system is stable.
When both F and H are designed to, respectively, optimize 2-norm performance indices, the resulting closed-loop system is optimal in the 2-norm sense.
124
Observer Design
J.C. Juang
Objective: given the system description and the measurements, how can the state be reconstructed?
System description
ẋ = A x + B_0 u ,   x(0) = x_o
y = C_0 x
Let x̂ be the estimate of x. The linear observer is assumed to be of the form
x̂˙ = A x̂ + B_0 u + H (ŷ - y) ,   x̂(0) = x̂_o
ŷ = C_0 x̂
Define the state estimation error e as
e = x - x̂
The error dynamics follow
ė = (A + H C_0) e ,   e(0) = x_o - x̂_o
Clearly, when H is selected such that A + H C_0 is stable, the error approaches zero, i.e., the state estimate x̂ approaches the actual state x as t → ∞.
The state estimation gain H can be designed to achieve pole placement, and so forth.
Observer-based controller. By combining a state feedback design and an observer, an observer-based controller is constructed.
Given the system
ẋ = A x + B_0 u
y = C_0 x
Let F and H be, respectively, the state feedback and observer gains. The observer-based controller is realized as
u = F x̂
with x̂ satisfying
x̂˙ = A x̂ + B_0 u + H (C_0 x̂ - y)
That is, the controller u = K(s) y is
K(s) = -F (sI - A - B_0 F - H C_0)^{-1} H
If both A + B_0 F and A + H C_0 are designed to be stable, the resulting closed-loop system is stable as well. Moreover, the closed-loop poles are the union of the eigenvalues of A + B_0 F and of A + H C_0. Indeed,
[ ẋ ]   [ A + B_0 F    -B_0 F    ] [ x ]
[ ė ] = [ 0            A + H C_0 ] [ e ]
In other words, the closed-loop eigenvalues can be separately designed.
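The separation of the closed-loop spectrum can be checked numerically. The following sketch (assuming A, B0, C0 and the gains F, H are given numpy arrays) is illustrative only.

import numpy as np

def closed_loop_eigs(A, B0, C0, F, H):
    """Eigenvalues of the (x, e) closed-loop matrix, with e = x - xhat."""
    n = A.shape[0]
    Acl = np.block([[A + B0 @ F, -B0 @ F],
                    [np.zeros((n, n)), A + H @ C0]])
    return np.sort_complex(np.linalg.eigvals(Acl))

# These coincide (up to ordering) with the separately designed spectra:
# np.linalg.eigvals(A + B0 @ F) and np.linalg.eigvals(A + H @ C0).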
125
Optimal H_2 Observer
J.C. Juang
The system is described as
ẋ = A x + B_1 w
y = C_0 x + D_01 w
The objective is to find an output injection (observer) gain H such that
||x - x̂||_2^2
is minimized.
The error e = x - x̂ is known to satisfy
ė = (A + H C_0) e + (B_1 + H D_01) w
The two-norm of x - x̂ can thus be computed by finding the controllability gramian
Y (A + H C_0)^T + (A + H C_0) Y + (B_1 + H D_01)(B_1 + H D_01)^T = 0
Again, with the working assumptions that (C_0, A) is detectable, D_01 D_01^T = I, and D_01 B_1^T = 0, the optimal output injection gain is
H = -Y C_0^T
The design equation for Y is
Y A^T + A Y + B_1 B_1^T - Y C_0^T C_0 Y = 0
Combining the observer design and the state feedback design leads to the H_2 (output feedback) controller design.
Returning to the control design problem, the cost is
||z||_2^2 = trace B_1^T X B_1 + ||u - u*||_2^2
Note that u - u* = F(x̂ - x). Thus, in the observer design for H_2 optimality, the cost should be
||F(x - x̂)||_2^2
This gives the same design equation for Y, and the resulting (overall) cost is
||z||_2^2 = trace B_1^T X B_1 + trace F Y F^T
In summary, given the plant
ẋ = A x + B_0 u + B_1 w
y = C_0 x + D_01 w
z = C_1 x + D_10 u
with the following assumptions
126
1. (A, B_0) is stabilizable
2. (C_0, A) is detectable
3. D_10^T D_10 = I
4. D_01 D_01^T = I
5. D_10^T C_1 = 0
6. D_01 B_1^T = 0
7. [ A - jωI    B_0  ]
   [ C_1        D_10 ]   is full rank for all ω
8. [ A - jωI    B_1  ]
   [ C_0        D_01 ]   is full rank for all ω
The optimal H_2 controller is
K(s) = -F (sI - A - B_0 F - H C_0)^{-1} H
with
F = -B_0^T X   and   H = -Y C_0^T
where X and Y are, respectively, the positive definite solutions to
A^T X + X A + C_1^T C_1 - X B_0 B_0^T X = 0
and
A Y + Y A^T + B_1 B_1^T - Y C_0^T C_0 Y = 0
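A minimal sketch (not part of the notes) of this H_2 output-feedback design using SciPy's Riccati solver; the plant matrices are assumed to be numpy arrays satisfying the normalization assumptions 3 to 6.

import numpy as np
from scipy.linalg import solve_continuous_are

def h2_output_feedback(A, B0, B1, C0, C1):
    m, p = B0.shape[1], C0.shape[0]
    # A'X + XA + C1'C1 - X B0 B0' X = 0
    X = solve_continuous_are(A, B0, C1.T @ C1, np.eye(m))
    # A Y + Y A' + B1 B1' - Y C0' C0 Y = 0   (dual equation)
    Y = solve_continuous_are(A.T, C0.T, B1 @ B1.T, np.eye(p))
    F = -B0.T @ X
    H = -Y @ C0.T
    # observer-based realization: xhat' = (A + B0 F + H C0) xhat - H y,  u = F xhat
    AK, BK, CK = A + B0 @ F + H @ C0, -H, F
    return AK, BK, CK, X, Y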
127
Inner Functions
J.C. Juang
Unitary Functions: A square function E is unitary provided that
E~(jω) E(jω) = E(jω) E~(jω) = I
Properties of Unitary Functions
For a unitary E, ||E||_∞ = 1.
Let E_1 and E_2 be unitary; then for any H, ||E_1 H E_2||_∞ = ||H||_∞.
The Nyquist plot of a unitary function lies on the unit circle.
The Bode gain plot of a unitary function is flat.
Inner Functions: A matrix E(s) ∈ H_∞ of dimension p × m (p ≥ m) is inner if E~(s) E(s) = I for all s. A matrix E(s) ∈ H_∞ of dimension p × m (p ≤ m) is coinner if E(s) E~(s) = I for all s.
Characterization of an Inner Function. Let E(s) = [ A  B; C  D ] (minimal realization). Then E(s) is inner if and only if there exists an X = X^T > 0 such that
A^T X + X A + C^T C = 0
B^T X + D^T C = 0
D^T D = I
To verify this, note that
E~(s) E(s) = [ -A^T    -C^T ] [ A    B ]
             [ B^T      D^T ] [ C    D ]
           = [ -A^T    -C^T C          -C^T D ]
             [ 0        A               B     ]
             [ B^T      D^T C           D^T D ]
           = [ -A^T     0              -(X B + C^T D) ]
             [ 0        A               B             ]
             [ B^T      B^T X + D^T C   D^T D         ]
where the last step uses a state transformation together with A^T X + X A + C^T C = 0. When, in addition, B^T X + D^T C = 0 and D^T D = I, the remaining coupling terms vanish and E~(s) E(s) = I.
Characterization of a Coinner Function. For a coinner function E(s) = [ A  B; C  D ], there exists a matrix Y = Y^T > 0 such that
A Y + Y A^T + B B^T = 0
B D^T + Y C^T = 0
D D^T = I
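A small numerical check of the inner-function conditions above can be written as follows; this is a sketch assuming a stable, minimal realization (A, B, C, D) given as numpy arrays.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_inner(A, B, C, D, tol=1e-8):
    # Solve A'X + XA + C'C = 0 (the observability gramian of the realization).
    X = solve_continuous_lyapunov(A.T, -C.T @ C)
    pos_def = np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0)    # X > 0
    cond_b = np.linalg.norm(B.T @ X + D.T @ C) < tol           # B'X + D'C = 0
    cond_d = np.linalg.norm(D.T @ D - np.eye(D.shape[1])) < tol  # D'D = I
    return pos_def and cond_b and cond_d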
Back to the H_2 design problem. If both F and H are, respectively, the optimal (in the H_2 sense) state feedback and output injection gains, then
[ A + B_0 F       B_0  ]
[ C_1 + D_10 F    D_10 ]
128
is inner and
[ A + H C_0    B_1 + H D_01 ]
[ C_0          D_01         ]
is coinner. To see this, recall that
0 = (A + B_0 F)^T X + X (A + B_0 F) + (C_1 + D_10 F)^T (C_1 + D_10 F)
0 = B_0^T X + D_10^T (C_1 + D_10 F)
I = D_10^T D_10
0 = (A + H C_0) Y + Y (A + H C_0)^T + (B_1 + H D_01)(B_1 + H D_01)^T
0 = (B_1 + H D_01) D_01^T + Y C_0^T
I = D_01 D_01^T
129
H_2 Optimal Control
J.C. Juang
The open-loop system P(s) is
ẋ = A x + B_0 u + B_1 w
y = C_0 x + D_01 w
z = C_1 x + D_10 u
The eight assumptions hold.
The controller is observer-based:
x̂˙ = (A + B_0 F + H C_0) x̂ - H y
u = F x̂
The error state is e = x - x̂.
The closed-loop system is
[ ẋ ]   [ A + B_0 F    -B_0 F    ] [ x ]   [ B_1          ]
[ ė ] = [ 0            A + H C_0 ] [ e ] + [ B_1 + H D_01 ] w
z = [ C_1 + D_10 F    -D_10 F ] [ x ]
                                [ e ]
The resulting closed-loop system is the map
w → T_zw(s) → z
which can be redrawn as an interconnection of the block
[ A + H C_0    B_1 + H D_01 ]
[ F            0            ]
(driven by w, producing the signal F e) and the block
[ A + B_0 F       B_1    B_0  ]
[ C_1 + D_10 F    0      D_10 ]
(driven by w and F e, producing z).
130
Equivalently, z is the difference of two branches: w passed through
[ A + B_0 F       B_1 ]
[ C_1 + D_10 F    0   ]
and the signal F e passed through
[ A + B_0 F       B_0  ]
[ C_1 + D_10 F    D_10 ]
Let
G_c(s) = [ A + B_0 F       I ]
         [ C_1 + D_10 F    0 ]
G_f(s) = [ A + H C_0    B_1 + H D_01 ]
         [ I            0            ]
U(s)   = [ A + B_0 F       B_0  ]
         [ C_1 + D_10 F    D_10 ]
Then, the transfer function from w to z is
T_zw(s) = G_c(s) B_1 - U(s) F G_f(s)
It can be shown that G_c(s) B_1 and U(s) F G_f(s) are orthogonal. This amounts to verifying that U~(s) G_c(s) is antistable (since F G_f(s) is stable).
U~(s) G_c(s) = [ A + B_0 F       B_0  ]~  [ A + B_0 F       I ]
               [ C_1 + D_10 F    D_10 ]   [ C_1 + D_10 F    0 ]
= [ -(A + B_0 F)^T    -(C_1 + D_10 F)^T ] [ A + B_0 F       I ]
  [ B_0^T              D_10^T           ] [ C_1 + D_10 F    0 ]
= [ -(A + B_0 F)^T    -(C_1 + D_10 F)^T (C_1 + D_10 F)    0 ]
  [ 0                  A + B_0 F                          I ]
  [ B_0^T              F                                  0 ]
= [ -(A + B_0 F)^T     0             -X ]
  [ 0                  A + B_0 F      I ]
  [ B_0^T              0              0 ]
= [ -(A + B_0 F)^T    -X ]
  [ B_0^T              0 ]
which is antistable, i.e., U~(s) G_c(s) ∈ H_2^⊥.
Thus,
||T_zw(s)||_2^2 = ||G_c(s) B_1||_2^2 + ||U(s) F G_f(s)||_2^2
The transfer function matrix U(s) is inner.
Hence,
||T_zw(s)||_2^2 = ||G_c(s) B_1||_2^2 + ||F G_f(s)||_2^2
Both F and H are designed optimally.
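The decomposition can be verified numerically: compute ||T_zw||_2^2 from the closed-loop realization and compare it with trace(B_1^T X B_1) + trace(F Y F^T). This is a sketch only; A, B0, B1, C0, C1, D01, D10, the H_2-optimal gains F, H, and the Riccati solutions X, Y are assumed available as numpy arrays.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(Acl, Bcl, Ccl):
    # ||G||_2^2 = trace(Bcl' Qo Bcl), with Acl' Qo + Qo Acl + Ccl' Ccl = 0
    Qo = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)
    return float(np.trace(Bcl.T @ Qo @ Bcl))

n = A.shape[0]
Acl = np.block([[A + B0 @ F, -B0 @ F],
                [np.zeros((n, n)), A + H @ C0]])
Bcl = np.vstack([B1, B1 + H @ D01])
Ccl = np.hstack([C1 + D10 @ F, -D10 @ F])
lhs = h2_norm_sq(Acl, Bcl, Ccl)
rhs = np.trace(B1.T @ X @ B1) + np.trace(F @ Y @ F.T)
# lhs and rhs should agree to numerical precision for the H2-optimal F and H.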
131
[Block diagram (slide 131): signal-flow realization of T_zw(s), with w entering through B_1, an inner loop through the output-injection gain H, the signal F e formed by the state-feedback gain F, and the branches G_c(s)B_1 and U(s)F G_f(s) combined at a summing junction to give z.]
132
H_2 Control of the Linear Fractional Map
J.C. Juang
The design problem:
[Block diagram: standard feedback interconnection of the plant P(s) and the controller K(s), with exogenous input w, regulated output z, control u, and measurement y.]
The H_2 control problem is to find a proper, real-rational controller K which stabilizes P internally and minimizes the H_2-norm of the transfer function matrix T_zw from w to z.
Recall that in the linear fractional map, the class of all stabilizing controllers can be parametrized as
K(s) = T_l(J(s), Q(s))
where Q(s) is any stable transfer function matrix and (when D_00 = 0)
J(s) = [ A + B_0 F + H C_0    -H    B_0 ]
       [ F                     0     I  ]
       [ -C_0                  I     0  ]
[Block diagram: the controller K(s) realized as the interconnection of J(s) and Q(s), connected to the plant P(s), with signals w, z, u, y.]
133
The resulting closed-loop map T_zw(s) is
T_zw = P_zw + P_zu K (I - P_yu K)^{-1} P_yw = T_11 - T_10 Q T_01
where
T_11(s) = [ A + B_0 F       -B_0 F       B_1          ]
          [ 0               A + H C_0    B_1 + H D_01 ]
          [ C_1 + D_10 F    -D_10 F      D_11         ]
T_10(s) = [ A + B_0 F       B_0  ]
          [ C_1 + D_10 F    D_10 ]
T_01(s) = [ A + H C_0    B_1 + H D_01 ]
          [ C_0           D_01        ]
[Block diagram: T_zw(s) as T_11(s) in parallel with the cascade T_01(s) → Q(s) → T_10(s), combined at a difference junction.]
When both F and H are optimal in the H_2-norm sense, it can be verified that (1) T_10(s) is inner, (2) T_01(s) is coinner, and (3) T_10~ T_11 T_01~ is antistable. Thus,
||T_zw(s)||_2^2 = ||T_11(s)||_2^2 + ||Q(s)||_2^2
or
||T_zw(s)||_2^2 = trace B_1^T X B_1 + trace F Y F^T + ||Q(s)||_2^2
Given γ as the specification on the H_2 norm of T_zw(s), the class of all stabilizing controllers that meet the specification can be parametrized as
K(s) = T_l(J(s), Q(s))
where J(s) is as above with the optimal F and H substituted in, and Q(s) is any stable transfer function matrix satisfying the constraint
||Q(s)||_2^2 ≤ γ^2 - trace B_1^T X B_1 - trace F Y F^T
134
H_∞ Control
J.C. Juang
The open-loop system P(s) is
ẋ = A x + B_0 u + B_1 w
y = C_0 x + D_01 w
z = C_1 x + D_10 u
Assume that the eight assumptions hold.
Let F be the H_∞ state feedback gain,
F = -B_0^T X
with X satisfying
A^T X + X A + C_1^T C_1 - X B_0 B_0^T X + γ^{-2} X B_1 B_1^T X = 0
Recall also (assuming x_o = 0) that
||z||_2^2 - γ^2 ||w||_2^2 = ||v||_2^2 - γ^2 ||r||_2^2
where
r = w - w* = w - G x = w - γ^{-2} B_1^T X x
and
v = u - u* = u - F x = u + B_0^T X x
The design can then be reformulated as
ẋ = (A + B_1 G) x + B_0 u + B_1 r
y = (C_0 + D_01 G) x + D_01 r
v = -F x + u
In this problem, u is immaterial; thus consider
ẋ = (A + B_1 G) x + B_1 r
y = (C_0 + D_01 G) x + D_01 r
v = -F x
The objective of the H_∞ output injection design is to find an H such that
||v||_2^2 - γ^2 ||r||_2^2
is less than zero.
Let H be the output injection gain; the closed-loop system becomes
ẋ = (A + B_1 G + H (C_0 + D_01 G)) x + (B_1 + H D_01) r
v = -F x
135
For the map from r to v to be bounded above by γ, the bounded real lemma entails that there must exist a positive definite Z such that
(A + B_1 G + H (C_0 + D_01 G)) Z + Z (A + B_1 G + H (C_0 + D_01 G))^T + (B_1 + H D_01)(B_1 + H D_01)^T + γ^{-2} Z F^T F Z = 0
Again, the output injection gain is derived as
H = -Z (C_0 + D_01 G)^T
and the algebraic Riccati equation becomes
(A + B_1 G) Z + Z (A + B_1 G)^T + B_1 B_1^T - Z (C_0 + D_01 G)^T (C_0 + D_01 G) Z + γ^{-2} Z F^T F Z = 0
The output injection gain ensures that the map from r to v is stable and bounded above by γ.
To see that this is indeed the H_∞ controller, note that the controller is
x̂˙ = [A + B_0 F + B_1 G + H (C_0 + D_01 G)] x̂ - H y
u = F x̂
Defining the error state as e = x - x̂, the closed-loop system can then be represented as
[ ẋ ]   [ A + B_0 F            -B_0 F                        ] [ x ]   [ B_1          ]
[ ė ] = [ -(B_1 + H D_01) G    A + H C_0 + (B_1 + H D_01) G  ] [ e ] + [ B_1 + H D_01 ] w
                           (= A_cl)                                        (= B_cl)
z = [ C_1 + D_10 F    -D_10 F ] [ x ]
           (= C_cl)             [ e ]
For T_zw(s) to be stable and bounded above by γ, it is equivalent to finding a positive definite matrix 𝒜 such that
𝒜 A_cl + A_cl^T 𝒜 + C_cl^T C_cl + γ^{-2} 𝒜 B_cl B_cl^T 𝒜 = 0
It is not difficult to verify that
𝒜 = [ X    0      ]
    [ 0    Z^{-1} ]
is a solution to the above equation.
136
H_∞ Controller Synthesis
J.C. Juang
Assume that the augmented plant is
[ A      B_1     B_0  ]
[ C_1    D_11    D_10 ]
[ C_0    D_01    D_00 ]
with the following assumptions:
1. p_1 ≥ m_0 and m_1 ≥ p_0.
2. D_10 is of full column rank (rank m_0) and D_01 is of full row rank (rank p_0).
3. (A, B_0) is stabilizable and (C_0, A) is detectable.
4. [ A - jωI    B_1  ]
   [ C_0        D_01 ]   is of rank n + p_0 for all ω.
5. [ A - jωI    B_0  ]
   [ C_1        D_10 ]   is of rank n + m_0 for all ω.
6. D_00 = 0
7. D_10 = [ I ]
          [ 0 ]
8. D_01 = [ I    0 ]
9. D_10^T C_1 = 0
10. D_01 B_1^T = 0
11. D_11 = 0
Let
X = Ric [ A              γ^{-2} B_1 B_1^T - B_0 B_0^T ]   > 0
        [ -C_1^T C_1     -A^T                         ]
and
Y = Ric [ A^T            γ^{-2} C_1^T C_1 - C_0^T C_0 ]   > 0
        [ -B_1 B_1^T     -A                           ]
Furthermore,
ρ(X Y) < γ^2
Then, an H_∞ controller is
K(s) = [ A + B_0 F + Z H C_0 + γ^{-2} B_1 B_1^T X    -Z H ]
       [ F                                            0   ]
where
F = -B_0^T X ,   H = -Y C_0^T ,   Z = (I - γ^{-2} Y X)^{-1}
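Given X and Y computed from the two Hamiltonians above (e.g. by a stable-invariant-subspace method as sketched earlier) and a level gamma with ρ(XY) < γ^2, the central controller can be assembled as in the following illustrative sketch (not part of the notes); A, B0, B1, C0 are assumed given numpy arrays.

import numpy as np

def hinf_central_controller(A, B0, B1, C0, X, Y, gamma):
    n = A.shape[0]
    F = -B0.T @ X
    H = -Y @ C0.T
    Z = np.linalg.inv(np.eye(n) - (Y @ X) / gamma**2)
    AK = A + B0 @ F + Z @ H @ C0 + (B1 @ B1.T @ X) / gamma**2
    BK = -Z @ H
    CK = F
    return AK, BK, CK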
137
All H_∞ Controllers of the Linear Fractional Map
J.C. Juang
The design problem:
[Block diagram: standard feedback interconnection of the plant P(s) and the controller K(s), with exogenous input w, regulated output z, control u, and measurement y.]
All admissible closed-loop maps are
T_zw = T_l(P, K) = P_zw + P_zu K (I - P_yu K)^{-1} P_yw = T_l(T, Q) = T_11 - T_10 Q T_01
where
T_11(s) = [ A + B_0 F       -B_0 F       B_1          ]
          [ 0               A + H C_0    B_1 + H D_01 ]
          [ C_1 + D_10 F    -D_10 F      D_11         ]
T_10(s) = [ A + B_0 F       B_0  ]
          [ C_1 + D_10 F    D_10 ]
T_01(s) = [ A + H C_0    B_1 + H D_01 ]
          [ C_0           D_01        ]
[Block diagram: T_zw(s) as T_11(s) in parallel with the cascade T_01(s) → Q(s) → T_10(s), with intermediate signals r (output of T_01) and v (input to T_10), combined at a difference junction.]
Suppose T~ T ≤ I. Then, if Q is bounded above by 1, T_l(T, Q) is bounded above by 1. To see this, let
[ z ]       [ w ]
[ r ]  = T  [ v ]
Then
z^T z + r^T r = [ w^T  v^T ] T^T T [ w ]   ≤   [ w^T  v^T ] [ w ]
                                   [ v ]                    [ v ]
Thus,
z^T z - w^T w ≤ v^T v - r^T r
138
Similarly, when T(s) is bounded above by γ, then T_l(T, Q) is bounded by γ if Q(s) is bounded above by 1/γ.
Check that T(s) is bounded by γ:
T(s) = [ A + B_0 F       -B_0 F       B_1             B_0  ]
       [ 0               A + H C_0    B_1 + H D_01    0    ]
       [ C_1 + D_10 F    -D_10 F      D_11            D_10 ]
       [ 0               C_0          D_01            0    ]
All H_∞ controllers of the linear fractional map can be parametrized as
K(s) = T_l(J(s), Q(s))
where Q(s) is required to be stable and bounded by 1/γ.
139
LMI-Based Method
J.C. Juang
Given the augmented plant
[ A      B_1     B_0  ]
[ C_1    D_11    D_10 ]
[ C_0    D_01    D_00 ]
there exists an H_∞ controller such that ||T_zw||_∞ < γ when the following LMIs are solvable in terms of X = X^T > 0 and Y = Y^T > 0:
[ N_b    0 ]^T  [ A Y + Y A^T    Y C_1^T    B_1  ]  [ N_b    0 ]
[ 0      I ]    [ C_1 Y          -γI        D_11 ]  [ 0      I ]    < 0
                [ B_1^T          D_11^T     -γI  ]
[ N_c    0 ]^T  [ A^T X + X A    X B_1      C_1^T  ]  [ N_c    0 ]
[ 0      I ]    [ B_1^T X        -γI        D_11^T ]  [ 0      I ]    < 0
                [ C_1            D_11       -γI    ]
[ Y    I ]
[ I    X ]   ≥ 0
where N_b and N_c are, respectively, bases of the null spaces of [ B_0^T   D_10^T ] and [ C_0   D_01 ].
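A feasibility sketch (not from the notes) of the two projected LMIs and the coupling condition, again with CVXPY; the plant matrices and gamma are assumed given, and Nb, Nc are computed as null-space bases. Illustrative only.

import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

n, p1, m1 = A.shape[0], C1.shape[0], B1.shape[1]
Nb = null_space(np.hstack([B0.T, D10.T]))   # basis of null([B0' D10'])
Nc = null_space(np.hstack([C0, D01]))       # basis of null([C0  D01])

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n), symmetric=True)

Py = cp.bmat([[A @ Y + Y @ A.T, Y @ C1.T, B1],
              [C1 @ Y, -gamma * np.eye(p1), D11],
              [B1.T, D11.T, -gamma * np.eye(m1)]])
Px = cp.bmat([[A.T @ X + X @ A, X @ B1, C1.T],
              [B1.T @ X, -gamma * np.eye(m1), D11.T],
              [C1, D11, -gamma * np.eye(p1)]])

Tb = np.block([[Nb, np.zeros((n + p1, m1))],
               [np.zeros((m1, Nb.shape[1])), np.eye(m1)]])
Tc = np.block([[Nc, np.zeros((n + m1, p1))],
               [np.zeros((p1, Nc.shape[1])), np.eye(p1)]])

eps = 1e-7
constraints = [Tb.T @ Py @ Tb << -eps * np.eye(Tb.shape[1]),
               Tc.T @ Px @ Tc << -eps * np.eye(Tc.shape[1]),
               cp.bmat([[Y, np.eye(n)], [np.eye(n), X]]) >> 0]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
# A feasible prob.status indicates existence of an H-infinity controller at this gamma.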
140
Method of μ Synthesis
J.C. Juang
The design problem:
[Block diagram: the uncertainty Δ closed around the scaled interconnection, with scalings D and D^{-1} inserted between the plant P(s) and the controller K(s), exogenous input w, output z, control u, and measurement y.]
The problem is to find a scaling transfer function D(s) and a controller K(s) such that
||D T_l(P, K) D^{-1}||_∞
is minimized (or less than a given bound).
Procedure of μ synthesis
1. For a given D(s), find a controller K(s) that minimizes the H_∞ norm.
2. Given the controller K(s), find an invertible, rational, stable scaling transfer function D(s).
This generally requires a search for D(jω) at different ω, followed by an interpolation.
141