Dr. S. Janardhanan
Contents
2 Large Scale System Model Order Reduction and Control - Modal Analysis Approach 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Davison Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Reduced Order Model Using Davison Technique . . . . . . . . . . . . 8
2.2.2 Alternative Method to Obtain Reduced Order Model through Davison Technique . . . 11
2.2.3 Improved Davison Technique . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4 Suboptimal Control Using Davison Model . . . . . . . . . . . . . . . 14
2.2.5 Control Law Reduction Approach Using Davison Model . . . . . . . . 14
2.3 Chidambara Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.1 Reduced Order Model Using Chidambara Technique . . . . . . . . . . 15
2.3.2 Suboptimal Control Using Chidambara Model . . . . . . . . . . . . . 16
2.3.3 Control Law Reduction Approach Using Chidambara Model . . . . . 17
2.4 Marshall Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Reduced Order Model by Marshall Technique . . . . . . . . . . . . . 17
2.5 Choice of Reduced Model Order . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.1 Model Order Selection Criterion by Mahapatra . . . . . . . . . . . . 18
2.5.2 Another Criterion for Order Selection and Mode Selection . . . . . . 20
5 Large Scale System Model and Controller Order Reduction - Norm Based Methods 67
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.1.1 Norms of Vectors and Matrices . . . . . . . . . . . . . . . . . . . . . 67
5.1.2 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . 68
5.1.3 Grammian Matrices and Hankel Singular Values . . . . . . . . . . . . 70
5.1.4 Matrix Inversion Formulae . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Model Reduction by Balanced Truncation . . . . . . . . . . . . . . . . . . . 73
5.2.1 Balanced Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2.2 Balanced Truncation . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.2.3 Steady State Matching . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2.4 Reduction of Unstable Systems by Balanced Truncation . . . . . . . . 76
5.2.5 Properties of Truncated Systems . . . . . . . . . . . . . . . . . . . . 76
5.2.6 Frequency-Weighted Balanced Model Reduction . . . . . . . . . . . . 81
5.3 Model Reduction by Impulse/Step Error Minimization . . . . . . . . . . . . 82
5.3.1 Impulse Error Minimization . . . . . . . . . . . . . . . . . . . . . . . 83
5.3.2 Step Error Minimization . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4 Optimal Model Order Reduction Using Wilson's Technique . . . . . . . . . . 90
5.4.1 Impulse Error / White Noise Error Minimization . . . . . . . . . . . 90
3. Control procedures such as series compensation, pole placement, optimal control, etc.

The underlying assumption for all such control and system procedures has been centrality, i.e., that all the calculations based upon system information, and the information itself, are localized at a given center, very often a geographical position.
A notable characteristic of most large scale systems is that centrality fails to hold, due either to the lack of centralized computing capability or to the lack of centralized information. Needless to say, many real problems are large scale by nature and not by choice.
2 Large Scale Systems
An important point regarding large scale systems is that their hierarchical (multilevel) and decentralized structures describe systems dealing with society, business, management, the economy, the environment, energy, data networks, power networks, space structures, transportation, aerospace, water resources, ecology and flexible manufacturing networks, to name a few. These systems are often separated geographically, and their treatment requires consideration not only of economic costs, as is common in centralized systems, but also of such important issues as reliability of communication links, value of information, etc. It is for their decentralized and hierarchical control properties and potential applications that many researchers throughout the world have devoted a great deal of effort to large scale systems in recent years.
Fig. (??) shows a two-controller decentralized system. The basic characteristic of any decentralized system is that the transfer of information from one group of sensors or actuators to the others is quite restricted. For example, in the system of Fig. (??), only the output y1 and the external input u1 are used to find the control v1; likewise, the control v2 is obtained using only the output y2 and the external input u2. The determination of the control signals v1 and v2 based on the output signals y1 and y2, respectively, is nothing but two independent output feedback problems, which can be used for stabilization or pole placement purposes. It is therefore clear that the decentralized control scheme is of feedback form, indicating that this method is very useful for large scale linear systems. This is a clear distinction from the hierarchical control scheme, which was mainly intended to be an open loop structure.
In the previous part, the concept of a large scale system and the two basic hierarchical and decentralized control structures were briefly introduced. Although there is no universal definition of a large scale system, it is commonly accepted that such systems possess the following characteristics:
1. Large scale systems are often controlled by more than one controller or decision maker
involving decentralized computations.
2. The controllers have different but correlated information available to them, possibly
at different times.
3. Large scale systems can also be controlled by local controllers at one level whose control actions are coordinated at another level in a hierarchical (multilevel) structure.
An attempt is made here to consider primarily the modeling and control of large scale systems. Most of the discussion is focused on large scale linear, continuous-time, stationary and deterministic systems. In modeling such a system, the following steps are generally involved.
1. The purpose of the model must be clearly defined; no single model can be appropriate for all purposes.
2. The system's boundary, separating the system from the outside world, must be defined.
3. A structural relationship among the different system components which would best represent desired or observed effects must be defined.
4. Based on the physical structure of the model, a set of system variables of interest must be defined. If a quantity of significance cannot be so labelled, step (3) must be modified accordingly.
6. After the mathematical description of each system component is complete, the components are related through a set of physical laws of conservation (or continuity) and compatibility, such as Newton's, Kirchhoff's or D'Alembert's.
8. The last step in successful modeling is the analysis of the model and its comparison with real situations.
Chapter 2
Large Scale System Model Order Reduction and Control - Modal Analysis Approach
2.1 Introduction
It is usually possible to describe the dynamics of physical systems by a number of simultaneous linear differential equations with constant coefficients,

\dot{x} = A x + B u \quad (2.1)
But for many processes (like chemical plants and nuclear reactors), the order of the matrix A may be quite large. It would be difficult to work with these complex systems in their original form. In such cases, it is common to study the process by approximating it with a simpler model. For instance, the response of an airplane is quite commonly approximated by a second order transfer function. These mathematical models correspond to approximating a system by its dominant poles and zeros in the complex plane. They generally require empirical determination of the system parameters. Many different methods have been developed to accomplish this purpose by estimating the dominant part of the large system and finding a simpler (or reduced order) system representation whose behavior is akin to that of the original system.
Z = P^{-1} X \quad (2.3)

where P = [v_1 \; v_2 \; \cdots \; v_n] is the modal matrix whose columns v_i = [x_{1,i} \; x_{2,i} \; \cdots \; x_{n,i}]^T are the eigenvectors of A. In the transformed coordinates,

\dot{Z} = A_z Z + B_z u

where

A_z = P^{-1} A P, \qquad B_z = P^{-1} B

and A_z is either in the diagonal or the Jordan canonical form. Truncation of non-dominant eigenvalues is simpler in this case.

In this case, the state response of the system for an input u can be shown to be

X(t) = P \int_0^t e^{A_z (t - \tau)} P^{-1} B u(\tau) \, d\tau \quad (2.4)

with P^{-1} = [\sigma_{ij}], \; i = 1..n, \; j = 1..n.
The reduced order model matrices are

A^* = A_0 + A_1 P_1 P_0^{-1} \quad (2.6)

B^* = P_0 [P^{-1} B]_l \quad (2.7)

where [P^{-1} B]_l denotes the first l rows of P^{-1} B, and the reduced state vector consists of the l retained states x_{c1}, x_{c2}, \ldots, x_{cl}:

Y = [x_{c1} \; x_{c2} \; \cdots \; x_{cl}]^T \quad (2.8)

Here A_0 is the l \times l submatrix of A formed from the rows and columns of the retained states,

A_0 = \begin{bmatrix} a_{c1,c1} & a_{c1,c2} & \cdots & a_{c1,cl} \\ a_{c2,c1} & a_{c2,c2} & \cdots & a_{c2,cl} \\ \vdots & & & \vdots \\ a_{cl,c1} & a_{cl,c2} & \cdots & a_{cl,cl} \end{bmatrix} \quad (2.9)

A_1 is the l \times (n-l) matrix formed from the rows of the retained states and the columns of the discarded states,

A_1 = [\, a_{ci,j} \,], \quad i = 1, \ldots, l; \; j \in \{1, \ldots, n\} \setminus \{c_1, \ldots, c_l\} \quad (2.10)

P_0 is the l \times l submatrix of the modal matrix P formed from the rows of the retained states and the first l (dominant) columns,

P_0 = \begin{bmatrix} x_{c1,1} & x_{c1,2} & \cdots & x_{c1,l} \\ x_{c2,1} & x_{c2,2} & \cdots & x_{c2,l} \\ \vdots & & & \vdots \\ x_{cl,1} & x_{cl,2} & \cdots & x_{cl,l} \end{bmatrix} \quad (2.11)

and P_1 is the (n-l) \times l submatrix of P formed from the rows of the discarded states and the first l columns,

P_1 = [\, x_{i,j} \,], \quad i \in \{1, \ldots, n\} \setminus \{c_1, \ldots, c_l\}; \; j = 1, \ldots, l \quad (2.12)
Derivation:
The principle involved in reducing the matrix is to neglect the higher-order time constants of the system. Retaining only the first l terms of Eqn. (2.5) in the response of Eqn. (2.4), and taking into account only the l retained states (i.e., equating the other states to zero), the following is obtained:

Y = \sum_{i=1}^{l} \alpha_i \begin{bmatrix} x_{c1,i} \\ x_{c2,i} \\ \vdots \\ x_{cl,i} \end{bmatrix} \quad (2.14)

where, for a unit step input,

\alpha_i = \frac{e^{\lambda_i t} - 1}{\lambda_i} \left( \sigma_{i1} b_1 + \sigma_{i2} b_2 + \cdots + \sigma_{in} b_n \right)

and therefore

\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_l \end{bmatrix} = P_0^{-1} Y \quad (2.15)

Then the discarded states are recovered as

\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{c1-1} \\ x_{c1+1} \\ \vdots \\ x_{cl-1} \\ x_{cl+1} \\ \vdots \\ x_n \end{bmatrix} = P_1 \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_l \end{bmatrix} \quad (2.16)

= P_1 P_0^{-1} Y \quad (2.17)
Considering the equations for x_{c1}, x_{c2}, \ldots, x_{cl} alone from Eqn. (2.1), the following equation can be obtained:

\dot{Y} = A_0 Y + A_1 X_d + P_0 P_l B_z u \quad (2.18)

where X_d = [x_1 \; \cdots \; x_{c1-1} \; x_{c1+1} \; \cdots \; x_{cl-1} \; x_{cl+1} \; \cdots \; x_n]^T is the vector of discarded states and

P_l = [\, I_l \;\; 0 \,]

Substituting Eqn. (2.17) for X_d in Eqn. (2.18) yields the reduced order model with the matrices of Eqns. (2.6) and (2.7).
and the state and input matrices are also partitioned in appropriate fashion, then the system can be represented as

\begin{bmatrix} \dot{X}_1 \\ \dot{X}_2 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u \quad (2.21)

X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} \quad (2.22)

where Z_1 and Z_2 are the states of the decoupled system representation. Thus,

\Lambda = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} = P^{-1} A P \quad (2.26)

\Gamma = \begin{bmatrix} \Gamma_1 \\ \Gamma_2 \end{bmatrix} = P^{-1} B \quad (2.27)

The modes in Z_2 are non-dominant and therefore can be ignored (according to Davison [2]). Thus, setting Z_2 to zero and substituting in Eqns. (2.23 and 2.24), the following are obtained:

X_1 = P_{11} Z_1 \quad (2.28)

X_2 = P_{21} Z_1 \quad (2.29)

X_2 = P_{21} P_{11}^{-1} X_1 \quad (2.30)

Eqn. (2.32) gives the reduced order model for the system computed through this alternative method. Both models, Eqn. (2.2) and Eqn. (2.32), represent the same system dynamics.
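The alternative route (truncating the non-dominant modal states and eliminating X2 via Eqn. (2.30)) can be sketched numerically. Everything below — the system matrix, input vector and the retained order l = 2 — is an assumed illustration, not taken from the text; the point is that the reduced matrix A11 + A12 P21 P11^{-1} has exactly the retained dominant eigenvalues.

```python
import numpy as np

# Hypothetical 4th-order stable system built from a known modal decomposition,
# so the dominant eigenvalues are unambiguous: {-1, -2} slow, {-10, -20} fast.
V = np.array([[1.0, 1, 1, 1],
              [1, 2, 3, 4],
              [1, 4, 9, 16],
              [1, 8, 27, 64]])
A = V @ np.diag([-1.0, -2.0, -10.0, -20.0]) @ np.linalg.inv(V)
B = np.array([[1.0], [0.5], [0.0], [1.0]])

l = 2                                       # order of the reduced model
w, P = np.linalg.eig(A)
order = np.argsort(w.real)[::-1]            # dominant (slowest) modes first
P = P[:, order]

# Partition as in Eqns. (2.21)-(2.22) and eliminate X2 via Eqn. (2.30):
A11, A12 = A[:l, :l], A[:l, l:]
P11, P21 = P[:l, :l], P[l:, :l]
Ar = A11 + A12 @ P21 @ np.linalg.inv(P11)   # reduced state matrix
Br = B[:l]                                  # reduced input matrix

print(np.sort(np.linalg.eigvals(Ar).real))  # the retained eigenvalues
```

Since A11 P11 + A12 P21 = P11 Λ1 (block (1,1) of AP = PΛ), the reduced matrix equals P11 Λ1 P11^{-1}, which is why the dominant eigenvalues are preserved exactly.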
\dot{Y} = D A^* D^{-1} Y + D B^* u \quad (2.33)

where

D = \mathrm{diag}(d_1, d_2, \ldots, d_l) \quad (2.34)

d_j = \frac{[A^{-1} B]_j}{[A^{*-1} B^*]_j}, \quad \text{if } [A^{*-1} B^*]_j \ne 0, \quad j = 1, 2, \ldots, l
\qquad d_j = 1, \quad \text{if } [A^{*-1} B^*]_j = 0 \quad (2.35)

where [A^{*-1} B^*]_j is the j-th element of the l-vector A^{*-1} B^*, and [A^{-1} B]_j is the element of the n-vector A^{-1} B which corresponds to the j-th state retained in the simplified system. The new simplified system is equivalent to the following system:

\dot{X}^* = A^* X^* + B^* u \quad (2.36)

Y = D X^* \quad (2.37)

and so it can be seen that the response of the new system will have correct steady-state values for a step-function input (provided that [A^{*-1} B^*]_j \ne 0), and will still maintain satisfactory dynamic behavior. It should be noted that if [A^{*-1} B^*]_j = 0 for some j, then the steady-state value of the variable y_j may be in error. Variables to be retained in the reduced order model should, therefore, always be chosen so that [A^{*-1} B^*]_j \ne 0.
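The effect of the scaling matrix D can be checked numerically. The sketch below uses an assumed system (not from the text): for a unit step input, the corrected model's steady state reproduces the full model's steady state in the retained variables.

```python
import numpy as np

# Hypothetical stable 4th-order system (assumed for illustration).
V = np.array([[1.0, 1, 1, 1], [1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64]])
A = V @ np.diag([-1.0, -2.0, -10.0, -20.0]) @ np.linalg.inv(V)
B = np.array([[1.0], [0.5], [0.0], [1.0]])

l = 2
w, P = np.linalg.eig(A)
P = P[:, np.argsort(w.real)[::-1]]          # dominant eigenvalues first

# Davison reduced model: A* = A11 + A12 P21 P11^-1, B* = B1
Ar = A[:l, :l] + A[:l, l:] @ P[l:, :l] @ np.linalg.inv(P[:l, :l])
Br = B[:l]

# Steady states for a unit step: full model -A^-1 B, reduced model -Ar^-1 Br
xss = -np.linalg.solve(A, B)                # n-vector
yss = -np.linalg.solve(Ar, Br)              # l-vector, generally in error

# Improved Davison scaling (Eqns. 2.34-2.35): d_j matches the steady states.
d = (xss[:l] / yss).ravel()
D = np.diag(d)

# Corrected model (2.33) has steady state -D Ar^-1 Br for a unit step:
yss_corr = D @ (-np.linalg.solve(Ar, Br))
print(np.allclose(yss_corr, xss[:l]))       # True: steady states now match
```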
For the case of multi-input systems, the corresponding model would be

\dot{X}^*_i = A^* X^*_i + B^*_i u_i, \quad i = 1, 2, \ldots, r

Y = \sum_{i=1}^{r} D_i X^*_i \quad (2.38)

where the input matrices are partitioned column-wise as

B^* = [B^*_1 \; B^*_2 \; \cdots \; B^*_r] \quad (2.39)

B = [B_1 \; B_2 \; \cdots \; B_r] \quad (2.40)

and D_i, i = 1, 2, \ldots, r, is determined from Eqns. (2.34 - 2.35), using B_i in place of B and B^*_i in place of B^*.
Thus,

J \approx J_M = \int_0^\infty \left( X_1^T Q_M X_1 + u^T R u \right) dt \quad (2.44)

Q_M = Q_{11} + 2 Q_{12} P_{21} P_{11}^{-1} + \left( P_{21} P_{11}^{-1} \right)^T Q_{22} \, P_{21} P_{11}^{-1} \quad (2.45)

If the reduced order model is written as

\dot{X}_1 = F X_1 + G u \quad (2.46)

then the suboptimal controller is

u = -R^{-1} G^T \Pi X_1 \quad (2.47)

where \Pi is the solution of the Riccati equation

\Pi F + F^T \Pi - \Pi G R^{-1} G^T \Pi + Q_M = 0 \quad (2.48)

For the control law reduction approach, the optimal control for the full system,

u = -K x \quad (2.49)

is partitioned as

u = -[K_1 \; K_2] \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \quad (2.50)
\dot{Z}_2 = \Lambda_2 Z_2 + \Gamma_2 u \quad (2.52)

Neglecting the dynamics of these non-dominant modes,

-\Lambda_2 Z_2(s) = \Gamma_2 U(s)

Z_2 = -\Lambda_2^{-1} \Gamma_2 u \quad (2.54)

X_2 = P_{21} Z_1 - P_{22} \Lambda_2^{-1} \Gamma_2 u \quad (2.55)

X_2 = P_{21} P_{11}^{-1} X_1 + \left( P_{21} P_{11}^{-1} P_{12} - P_{22} \right) \Lambda_2^{-1} \Gamma_2 u \quad (2.56)

= L X_1 + H u \quad (2.57)

Substituting Eqn. (2.57) in Eqn. (2.31), the reduced order model of the large scale system is obtained by the Chidambara technique as

\dot{X}_1 = \left( A_{11} + A_{12} P_{21} P_{11}^{-1} \right) X_1 + \left[ A_{12} \left( P_{21} P_{11}^{-1} P_{12} - P_{22} \right) \Lambda_2^{-1} \Gamma_2 + B_1 \right] u \quad (2.58)

= F X_1 + G u \quad (2.59)
X^T Q X = X_1^T Q_{11} X_1 + 2 X_1^T Q_{12} (L X_1 + H u) + \left( X_1^T L^T + u^T H^T \right) Q_{22} (L X_1 + H u)

J_M = \int_0^\infty \left[ X_1^T \left( Q_{11} + 2 Q_{12} L + L^T Q_{22} L \right) X_1 + 2 X_1^T \left( Q_{12} + L^T Q_{22} \right) H u + u^T \left( R + H^T Q_{22} H \right) u \right] dt \quad (2.60)

Defining

Q_1 = Q_{11} + 2 Q_{12} L + L^T Q_{22} L, \qquad S_1 = \left( Q_{12} + L^T Q_{22} \right) H, \qquad R_1 = R + H^T Q_{22} H

and

\bar{u} = u + R_1^{-1} S_1^T X_1 \quad (2.64)

the simplified model represented in Eqn. (2.59) is equivalent to

\dot{X}_1 = \left( F - G R_1^{-1} S_1^T \right) X_1 + G \bar{u} \quad (2.65)

and the performance criterion in Eqn. (2.60) is equivalent to

J_M = \int_0^\infty \left[ X_1^T \left( Q_1 - S_1 R_1^{-1} S_1^T \right) X_1 + \bar{u}^T R_1 \bar{u} \right] dt \quad (2.66)

If the matrices R_1 and Q_M = Q_1 - S_1 R_1^{-1} S_1^T are positive definite and positive semi-definite, respectively, then an optimal solution of the problem represented by Eqns. (2.65 and 2.66) is given as

\bar{u} = -R_1^{-1} G^T \bar{\Pi} X_1 \quad (2.67)

where \bar{\Pi} is the solution of the Riccati equation

\bar{\Pi} \left( F - G R_1^{-1} S_1^T \right) + \left( F - G R_1^{-1} S_1^T \right)^T \bar{\Pi} - \bar{\Pi} G R_1^{-1} G^T \bar{\Pi} + Q_1 - S_1 R_1^{-1} S_1^T = 0 \quad (2.68)
Thus,

u = \bar{u} - R_1^{-1} S_1^T X_1 = -R_1^{-1} \left( G^T \bar{\Pi} + S_1^T \right) X_1 \quad (2.69)

This optimal controller for the simplified system can also serve as a suboptimal controller for the original system.
u = -K X = -[K_1 \; K_2] \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = -(K_1 X_1 + K_2 X_2)

Using the relation between X_2 and X_1 described in Eqn. (2.57), the control law can be represented as

u = -(K_1 + K_2 L) X_1 - K_2 H u \quad (2.70)

(I + K_2 H) u = -(K_1 + K_2 L) X_1

If (I + K_2 H) is invertible, then the control law can be represented for the reduced order system as

u = -(I + K_2 H)^{-1} (K_1 + K_2 L) X_1
Observing Eqn. (2.25), it can be seen that the submatrix \Lambda_1 is associated with the larger time constants of the system, whereas the response of any element in Z_2 settles very fast. Thus, it may be approximated as an instantaneous step change. This is the essence of the technique.

Mathematically, this approximation is equivalent to putting

\dot{Z}_2 = 0 \quad (2.73)

Substituting Eqn. (2.73) and P^{-1} = Q in Eqn. (2.25),

\dot{Z}_1 = \Lambda_1 Z_1 + \Gamma_1 u \quad (2.74)

0 = \Lambda_2 Z_2 + \Gamma_2 u

Z_2 = Q_{21} X_1 + Q_{22} X_2 = -\Lambda_2^{-1} \Gamma_2 u

X_2 = -Q_{22}^{-1} Q_{21} X_1 - Q_{22}^{-1} \Lambda_2^{-1} \Gamma_2 u \quad (2.75)

Substituting Eqn. (2.75) in Eqn. (2.31) and using the relationships between P_{ij} and Q_{ij}, one obtains the reduced order model by the Marshall technique as

\dot{X}_1 = P_{11} \Lambda_1 P_{11}^{-1} X_1 + \left( B_1 - A_{12} Q_{22}^{-1} \Lambda_2^{-1} \Gamma_2 \right) u \quad (2.76)
For a unit step input, the exact state response can be written as

X(t) = P_1 \left( e^{\Lambda_1 t} - I \right) \Lambda_1^{-1} \Gamma_1 + P_2 \left( e^{\Lambda_2 t} - I \right) \Lambda_2^{-1} \Gamma_2 \quad (2.78)

The approximate solution provided by the reduced order model is

\hat{X}(t) = P_1 \left( e^{\Lambda_1 t} - I \right) \Lambda_1^{-1} \Gamma_1

Hence, the error involved in ignoring the higher order modes \lambda_{l+1}, \ldots, \lambda_n in the state equation Eqn. (2.78) is given by

E(t) = P_2 \left( e^{\Lambda_2 t} - I \right) \Lambda_2^{-1} \Gamma_2 \quad (2.79)

\| E(t) \| \le \| P_2 \| \cdot \| e^{\Lambda_2 t} - I \| \cdot \| \Lambda_2^{-1} \| \cdot \| \Gamma_2 \| \quad (2.80)

Since the neglected eigenvalues are stable,

\| e^{\Lambda_2 t} - I \| < \sqrt{n - l}; \qquad \| \Lambda_2^{-1} \| = \left[ \sum_{i=l+1}^{n} \frac{1}{\lambda_i^2} \right]^{1/2} \le \frac{\sqrt{n-l}}{|\lambda_{l+1}|} \quad (2.81)

\| P_2 \| < \| P \|, \qquad \| \Gamma_2 \| < \| \Gamma \| = \| P^{-1} B \| \quad (2.82)

Defining

\| P \| \cdot \| \Gamma \| = K \quad (2.84)

Therefore,

\| E(t) \| \le K \, \frac{n - l}{|\lambda_{l+1}|} \quad (2.85)

Eqn. (2.85) shows that, for a given system, the error in the states due to neglecting the higher modes depends upon (n-l)/|\lambda_{l+1}|. Therefore, the error can be made small when

\frac{n - l}{|\lambda_{l+1}|} \to 0
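The bound of Eqn. (2.85) can be spot-checked on a small example. Everything below (the system, input and partition) is an assumed illustration; Frobenius norms are used throughout.

```python
import numpy as np

# Hypothetical 4th-order stable system with well-separated modes:
# dominant eigenvalues {-1, -2}; neglected eigenvalues {-10, -20}.
V = np.array([[1.0, 1, 1, 1], [1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64]])
A = V @ np.diag([-1.0, -2.0, -10.0, -20.0]) @ np.linalg.inv(V)
B = np.array([[1.0], [0.5], [0.2], [1.0]])

n, l = 4, 2
w, P = np.linalg.eig(A)
order = np.argsort(w.real)[::-1]            # dominant modes first
w, P = w[order].real, P[:, order].real
Gam = np.linalg.solve(P, B)                 # Gamma = P^-1 B

P2, Gam2 = P[:, l:], Gam[l:]
lam2 = w[l:]

fro = np.linalg.norm                        # Frobenius norm by default
K = fro(P) * fro(Gam)                       # Eqn. (2.84)
bound = K * (n - l) / abs(w[l])             # Eqn. (2.85)

# Actual truncation error E(t) of Eqn. (2.79) for a unit step input:
errs = []
for t in np.linspace(0.01, 5.0, 200):
    E = P2 @ np.diag((np.exp(lam2 * t) - 1.0) / lam2) @ Gam2
    errs.append(fro(E))
print(max(errs) <= bound)                   # the bound holds: True
```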
Mahapatra extended his approach to Marshall's model [10] and obtained the relationship

\| E(t) \| \le K \, \frac{n - l + 1}{|\lambda_{l+1}|} \quad (2.86)

Using the above equations, a suitable value of l can be selected such that the error is within tolerable limits. This criterion, however, has certain drawbacks. Writing the bound as

\| E(t) \| \le K U_l

where

U_l = \frac{n - l}{|\lambda_{l+1}|}, \qquad V_l = \frac{U_{l-1}}{U_l}

the upper bound on the error \| E(t) \| is proportional to U_l; hence U_l gives an idea about the actual error, while V_l represents a measure of the improvement achieved by increasing the model order from l-1 to l.

However, all these criteria are based on the assumption that the eigenvalues retained are dominant.
\dot{X} = A X + B u \quad (2.87)

y = C X

In this case, the scalar y is the output used for evaluation, and one is interested in constructing an l-th order system whose output is close to that of the full system for a given input.

Let \lambda_1, \ldots, \lambda_n be distinct and negative real eigenvalues, of which \lambda_1, \ldots, \lambda_l are retained in the reduced model. For a unit step input,

y = C M Z = \sum_{i=1}^{n} \frac{h_i f_i}{\lambda_i} \left( e^{\lambda_i t} - 1 \right) \quad (2.88)

where
M is the modal matrix,
Z is the transformed state vector (X = M Z),
h_i are the elements of the row vector h = C M,
f_i are the elements of the column vector f = M^{-1} B.
The output of Marshall's reduced order model can be obtained from

Z_1 = \left( e^{\Lambda_1 t} - I \right) \Lambda_1^{-1} F_1, \qquad Z_2 = -\Lambda_2^{-1} F_2

as

\hat{y} = \sum_{i=1}^{l} \frac{h_i f_i}{\lambda_i} \left( e^{\lambda_i t} - 1 \right) - \sum_{i=l+1}^{n} \frac{h_i f_i}{\lambda_i} \quad (2.89)

The output error is e(t) = y - \hat{y} = \sum_{i=l+1}^{n} (h_i f_i / \lambda_i) e^{\lambda_i t}, and the integral square error (ISE) is

E = \int_0^\infty e^2 \, dt = -\sum_{i=l+1}^{n} \sum_{j=l+1}^{n} \frac{h_i h_j f_i f_j}{\lambda_i \lambda_j \left( \lambda_i + \lambda_j \right)} \quad (2.90)

For a given l-th order reduced model, the procedure is to select a combination of l eigenvalues and determine the ISE. This is repeated for all possible combinations, and the one which gives the least ISE is selected.
Consider the third order system

\dot{X} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -10 & -17 & -8 \end{bmatrix} X + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u, \qquad y = [1 \; 0 \; 0] X

The modal matrix and related quantities are

M = \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -5 \\ 1 & 4 & 25 \end{bmatrix}, \qquad M^{-1} = \frac{1}{12} \begin{bmatrix} 30 & 21 & 3 \\ -20 & -24 & -4 \\ 2 & 3 & 1 \end{bmatrix}

f = M^{-1} B = \frac{1}{12} \begin{bmatrix} 3 \\ -4 \\ 1 \end{bmatrix}, \qquad h = C M = [1 \; 1 \; 1]

\lambda_1 = -1, \quad \lambda_2 = -2, \quad \lambda_3 = -5

For a second order model (l = 2), the ISE for each combination of retained eigenvalues is:

Retained eigenvalues | ISE (x 10^-3)
{-1, -2}             | 0.0278
{-1, -5}             | 6.94
{-2, -5}             | 31.3

The combination {-1, -2} gives the least ISE and is therefore selected.
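The ISE table can be reproduced directly from Eqn. (2.90); a minimal sketch (with the stable sign conventions restored as above):

```python
import itertools
import numpy as np

# Third order example: eigenvalues {-1, -2, -5}, y = x1, companion-form input.
lam = np.array([-1.0, -2.0, -5.0])
M = np.vander(lam, 3, increasing=True).T   # columns [1, lambda_i, lambda_i^2]
B = np.array([0.0, 0.0, 1.0])
C = np.array([1.0, 0.0, 0.0])

f = np.linalg.solve(M, B)                  # f = M^-1 B = [3, -4, 1]/12
h = C @ M                                  # h = [1, 1, 1]

def ise(discarded):
    """Integral square error of Eqn. (2.90) over the discarded modes."""
    s = 0.0
    for i in discarded:
        for j in discarded:
            s += h[i] * h[j] * f[i] * f[j] / (lam[i] * lam[j] * (lam[i] + lam[j]))
    return -s

for keep in itertools.combinations(range(3), 2):
    drop = tuple(k for k in range(3) if k not in keep)
    print("retain", lam[list(keep)], "ISE = %.4g" % ise(drop))
```

Retaining {-1, -2} gives the smallest ISE, matching the table.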
\dot{X} = A X + B u \quad (3.1)

y = C X

where
X \in R^n is the state vector,
u \in R^m is the control input vector,
y \in R^p is the output vector.

The matrices A, B and C are constant with appropriate dimensions, and the triplet (A, B, C) is completely controllable and observable. We wish to replace the large model description by a satisfactory aggregated model given by

\dot{Z} = F Z + G u \quad (3.2)

w = H Z

where
Z \in R^r is the aggregated state vector,
w \in R^p is the aggregated output vector.

The aggregated model description is considered satisfactory if, for a given class of inputs \{u\}, the aggregated outputs w are good approximations of the original outputs y of the large model. Intuitively, the aggregated model has an order r such that m \le r \ll n.
The link between the linear dynamic models in Eqn. (3.1) and Eqn. (3.2) can be established by a linear transformation of the form

Z = L X \quad (3.3)

where L is an r \times n constant aggregation matrix of rank r. Using Eqn. (3.3), the equivalence between the original and aggregated models is achieved provided that the conditions

F L = L A \quad (3.4)

G = L B \quad (3.5)

Z(0) = L X(0) \quad (3.6)

are satisfied. Since the r \times n aggregation matrix L is assumed to be of full rank, it possesses a pseudo-inverse, and therefore a least squares solution for F is

F = L A L^T \left( L L^T \right)^{-1} \quad (3.7)

It is emphasized that the aggregated system matrix F obtained as above is an approximate solution of Eqn. (3.4) and depends on the aggregation matrix L.
\lambda(F) \subset \lambda(A)

In particular, the matrix F retains some of the characteristic values of A. To see this, let \{\lambda_1, \ldots, \lambda_n\} be the n distinct eigenvalues of A, and let \{u_1, \ldots, u_n\} be the associated eigenvectors. Then it follows that

\lambda_i u_i = A u_i

\lambda_i L u_i = L A u_i = F L u_i \quad \text{(using Eqn. (3.4))}

\lambda_i (L u_i) = F (L u_i)

indicating that if L u_i \ne 0, then the vector L u_i is an eigenvector of F with the same eigenvalue \lambda_i. Now, a necessary and sufficient condition for Eqns. (3.4 - 3.6) to have a unique solution for F is given below.
A necessary and sufficient condition for Eqns. (3.4 - 3.6) to have a unique solution is that

\mathcal{R}(A^T L^T) \subseteq \mathcal{R}(L^T)

or, equivalently,

\eta(L) = \eta(L A)

where \mathcal{R}(\cdot) and \eta(\cdot) denote the range and null space of a matrix, respectively.

To see this, define

(L A)^T \triangleq [\rho_1 | \rho_2 | \cdots | \rho_r], \quad \rho_i \in R^n

F^T \triangleq [f_1 | f_2 | \cdots | f_r], \quad f_i \in R^r

so that Eqn. (3.4) becomes

L^T f_i = \rho_i \quad (3.8)

It is well known that Eqn. (3.8) has a unique solution for f_i if and only if

\rho_i \in \mathcal{R}(L^T), \qquad \mathrm{Rank}(L^T) = r

By virtue of the rank assumption on L, Eqn. (3.4) will therefore have a unique solution for F if and only if

\mathcal{R}(A^T L^T) \subseteq \mathcal{R}(L^T)

or equivalently

\eta(L) = \eta(L A)
Another property of the aggregated system matrix F is that any polynomial in A, p(A), has p(F) as its aggregation:

L \, p(A) = p(F) \, L

In particular, if p(A) = [s I_n - A]^{-1}, then p(F) = [s I_r - F]^{-1} is its aggregation, where I_r is the r-th order identity matrix. Next, let us derive the transfer function G(s) between u(s) and Z(s):

G(s) = [s I_r - F]^{-1} L B

That is, this transfer function matrix can be realized either by Eqn. (3.1) together with Z = L X, or by Eqn. (3.2). Since Z has a lower dimension than X, the state space description defined by Eqn. (3.1) and Z = L X is nonminimal. This situation occurs if and only if there are (n - r) pole-zero cancellations. In effect, the class of aggregation matrices is restricted to those creating zeros in the input-output relationship Z(s)/u(s) that cancel (n - r) poles of that relationship. These cancelled poles are precisely the eigenvalues of A that are not retained in F.
Suppose the aggregation matrix is written row-wise as

L = [L_1 \; L_2 \; \cdots \; L_r]^T \quad (3.10)

Then the i-th element of Z is given by Z_i = L_i^T X, i.e., Z_i is a weighted sum of some components of X. Given the freedom to select the elements of L such that L has at most one nonzero entry in each column, the n components of X can be grouped into at most r separate clusters. In this way, the vectors \{L_i, 1 \le i \le r\} are mutually orthogonal, which implies that L has full rank. This procedure constitutes a method of determining the aggregation matrix. Despite its simplicity, it is rather arbitrary and essentially depends on the physical set-up of the model. We note that this procedure amounts to projecting the state vector X onto an r-dimensional subspace.
An alternative method to compute the matrix L can be developed by considering the controllability matrices of the systems defined in Eqns. (3.1 and 3.2). Define

W_A \triangleq [B \;\; A B \;\; \cdots \;\; A^{n-1} B] \quad (3.11)

W_F \triangleq [G \;\; F G \;\; \cdots \;\; F^{n-1} G] \quad (3.12)

Using F L = L A and G = L B repeatedly (L A^k B = F^k L B = F^k G),

L W_A = W_F \quad (3.13)

Thus, using the pseudo-inverse, L can be obtained as

L = W_F W_A^{+} = W_F W_A^T \left( W_A W_A^T \right)^{-1} \quad (3.14)
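A quick numerical sketch of Eqns. (3.3)-(3.7), with the system and the choice of L assumed for illustration: if the rows of L are chosen as left eigenvectors of A, the least squares F of Eqn. (3.7) satisfies FL = LA exactly and retains the corresponding eigenvalues.

```python
import numpy as np

# Hypothetical 4th-order system with eigenvalues {-1, -2, -10, -20}.
V = np.array([[1.0, 1, 1, 1], [1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64]])
A = V @ np.diag([-1.0, -2.0, -10.0, -20.0]) @ np.linalg.inv(V)
B = np.array([[1.0], [0], [0], [1]])

# Aggregation matrix: rows = left eigenvectors of the two dominant modes
# (the rows of V^-1 are left eigenvectors of A = V D V^-1).
L = np.linalg.inv(V)[:2, :]

F = L @ A @ L.T @ np.linalg.inv(L @ L.T)   # Eqn. (3.7)
G = L @ B                                  # Eqn. (3.5)

print(np.allclose(F @ L, L @ A))           # exact aggregation: True
print(np.sort(np.linalg.eigvals(F).real))  # retained eigenvalues
```

For an arbitrary L, Eqn. (3.7) only gives a least squares fit; the exactness here follows from the row space of L being an invariant subspace of A^T.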
Example 2 (From [12]) In order to illustrate the aggregation procedure, let us consider the following system:

\dot{X} = \begin{bmatrix} -1 & 0 & 0.01 & 0.05 & 0.25 \\ 0 & -4 & 0 & 0.45 & 0.1 \\ -0.088 & 0.2 & -10 & 0 & 0.22 \\ 1 & 0 & 0.075 & -4 & 0.05 \\ 0.11 & 0.2 & 0.999 & 0.44 & -3 \end{bmatrix} X + \begin{bmatrix} 1 & 0.5 \\ 0 & 1 \\ 0.5 & 0.9 \\ 2 & 0.75 \\ 1 & 1 \end{bmatrix} u

SOLUTION: The system eigenvalues are \{-10.03, -0.952, -0.2996, -4.073, -3.95\}. The aggregated states are chosen as the average of the first and fourth states, the second state, and the average of the third and fifth states. This gives L of the form

L = \begin{bmatrix} 0.5 & 0 & 0 & 0.5 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 \end{bmatrix}

and, from Eqns. (3.7) and (3.5),

F = \begin{bmatrix} -1.975 & 0 & 0.1925 \\ 0.45 & -4 & 0.1 \\ 0.231 & 0.2 & -5.9 \end{bmatrix}, \qquad G = \begin{bmatrix} 1.5 & 0.625 \\ 0 & 1 \\ 0.75 & 0.95 \end{bmatrix}

Using X(0) = [0.5 \; 0 \; -0.25 \; 0 \; 0.5]^T and C = [0.5 \; 1 \; 0.5 \; 0.5 \; 0.5], Z(0) is obtained as Z(0) = L X(0) = [0.25 \; 0 \; 0.125]^T.

The comparative responses of the original and aggregated models are found to match well. (See [12], pp. 115-117)
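The aggregated matrices of Example 2 can be reproduced numerically. Note that the extraction of this text dropped minus signs, so the sign pattern of A below is a reconstruction (diagonal entries negative, a31 negative) chosen to be consistent with the printed F; treat it as an assumption of this sketch.

```python
import numpy as np

# Example 2 system with reconstructed signs (an assumption of this sketch).
A = np.array([[-1.0,   0.0,  0.01,  0.05, 0.25],
              [ 0.0,  -4.0,  0.0,   0.45, 0.1 ],
              [-0.088, 0.2, -10.0,  0.0,  0.22],
              [ 1.0,   0.0,  0.075, -4.0, 0.05],
              [ 0.11,  0.2,  0.999, 0.44, -3.0]])
B = np.array([[1.0, 0.5], [0, 1], [0.5, 0.9], [2, 0.75], [1, 1]])
L = np.array([[0.5, 0, 0,   0.5, 0],
              [0.0, 1, 0,   0.0, 0],
              [0.0, 0, 0.5, 0.0, 0.5]])

F = L @ A @ L.T @ np.linalg.inv(L @ L.T)   # Eqn. (3.7)
G = L @ B                                  # Eqn. (3.5)
X0 = np.array([0.5, 0.0, -0.25, 0.0, 0.5])

print(np.round(F, 4))                      # matches the printed F
print(np.round(G, 4))                      # matches the printed G
print(L @ X0)                              # Z(0) = [0.25, 0, 0.125]
```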
Consider again

\dot{X} = A X + B u

\dot{Z} = F Z + G u

where Z is an r-vector and r < n. Let the simplified state vector Z and the original state vector X be related via an aggregation matrix C, whereby

Z = C X

Let

X = M Y

where M is the modal matrix of A, with its columns arranged from left to right in the order of increasing magnitudes of the corresponding eigenvalues (dominant modes first). Then we get

\dot{Y} = \Lambda Y + \Gamma u

where

\Lambda = M^{-1} A M; \qquad \Gamma = M^{-1} B \quad (3.16)

Assume that the first r dominant eigenvalues are to be retained in the simplified model, and let

W = T Y, \qquad T = [I_r : 0]_{r \times n} \quad (3.17)

\dot{W} = T \Lambda T^T W + T \Gamma u

In order to convert the modal representation of the simplified model into a general form, we utilize a reduced dimensional variation of X = M Y, namely

Z = M_0 W \quad (3.18)

The transformation matrix M_0 is obtained in the following manner. Let the first r columns of the matrix M be represented by

\begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1r} \\ u_{21} & u_{22} & \cdots & u_{2r} \\ \vdots & & & \vdots \\ u_{n1} & u_{n2} & \cdots & u_{nr} \end{bmatrix}

If, on physical grounds, certain specific state variables are to be retained in the simplified model, then the matrix M_0 is written directly from the above matrix. If, for example, the state variables x_1, x_4 and x_{n-1} are to be retained in a third order model, then

M_0 = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ u_{41} & u_{42} & u_{43} \\ u_{(n-1)1} & u_{(n-1)2} & u_{(n-1)3} \end{bmatrix}

Consequently,

\dot{Z} = M_0 T \Lambda T^T M_0^{-1} Z + M_0 T M^{-1} B u \quad (3.19)

Z = M_0 W = M_0 T Y = M_0 T M^{-1} X \quad (3.20)

Thus the aggregation matrix is given by

C = M_0 T M^{-1} \quad (3.21)
SOLUTION: The eigenvalues of A are \lambda(A) = \{0.5, 1, 0.333\}, which indicates that the system is unstable. The modal matrix M can be found to be

M = \begin{bmatrix} 0.5774 & 0.7071 & 0 \\ 0.5774 & 0 & 0 \\ 0.5774 & -0.7071 & 1 \end{bmatrix}

Retaining the states x_1 and x_2 in a second order model,

M_0 = \begin{bmatrix} 0.5774 & 0.7071 \\ 0.5774 & 0 \end{bmatrix}, \qquad M_0^{-1} = \begin{bmatrix} 0 & 1.7321 \\ 1.4142 & -1.4142 \end{bmatrix}

T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
If a control law

u = -K Z

designed for the aggregated model is applied to the higher order plant, the resultant feedback system is given by

u = -K Z = -K C X

\dot{X} = A X + B u = (A - B K C) X

In modal coordinates, the corresponding closed loop is

\dot{W} = \Lambda W + \Gamma u = (\Lambda - \Gamma K C M) W

and the eigenvalues of (A - B K C) and (\Lambda - \Gamma K C M) are identical. Using C M = M_0 T M^{-1} M = M_0 T and partitioning \Gamma K M_0 \triangleq \begin{bmatrix} D_1 \\ D_2 \end{bmatrix}, consider the matrix

\Lambda - \Gamma K C M = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} - \Gamma K M_0 [I_r \; 0]
= \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} - \begin{bmatrix} D_1 & 0 \\ D_2 & 0 \end{bmatrix}
= \begin{bmatrix} \Lambda_1 - D_1 & 0 \\ -D_2 & \Lambda_2 \end{bmatrix}

Being block triangular, its eigenvalues are those of (\Lambda_1 - D_1) together with the unretained open loop eigenvalues \Lambda_2. For the aggregated closed loop, using F = M_0 \Lambda_1 M_0^{-1} and G = M_0 T M^{-1} B = M_0 T \Gamma,

F - G K = M_0 \Lambda_1 M_0^{-1} - M_0 T \Gamma K M_0 M_0^{-1}
= M_0 \Lambda_1 M_0^{-1} - M_0 [I_r \; 0] \begin{bmatrix} D_1 \\ D_2 \end{bmatrix} M_0^{-1}
= M_0 \Lambda_1 M_0^{-1} - M_0 D_1 M_0^{-1}
= M_0 (\Lambda_1 - D_1) M_0^{-1}

Hence the eigenvalues of the aggregated closed loop (F - G K) are exactly the eigenvalues of (\Lambda_1 - D_1), i.e., a subset of the closed loop eigenvalues of the full system (A - B K C).
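This eigenvalue relationship can be spot-checked numerically; everything below (the system, the retained states and the gain K) is an assumed illustration.

```python
import numpy as np

# Assumed 4th-order system with eigenvalues {-1, -2, -10, -20}.
M = np.array([[1.0, 1, 1, 1], [1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64]])
lam = np.array([-1.0, -2.0, -10.0, -20.0])
A = M @ np.diag(lam) @ np.linalg.inv(M)     # modal matrix M, dominant first
B = np.array([[1.0], [0.0], [0.5], [1.0]])

r = 2
T = np.hstack([np.eye(r), np.zeros((r, 4 - r))])
M0 = M[:r, :r]                              # retain states x1 and x2
C = M0 @ T @ np.linalg.inv(M)               # aggregation matrix, Eqn. (3.21)
F = M0 @ np.diag(lam[:r]) @ np.linalg.inv(M0)
G = C @ B                                   # G = C B

K = np.array([[0.3, -0.2]])                 # arbitrary gain for u = -K Z

eig_full = np.linalg.eigvals(A - B @ K @ C)
eig_red = np.linalg.eigvals(F - G @ K)
ok = all(np.min(np.abs(eig_full - mu)) < 1e-6 for mu in eig_red)
print(ok)   # every reduced closed-loop eigenvalue appears in the full set
```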
Consider a single-input single-output system in companion form,

\dot{x} = A x + B u \quad (3.22)

y = C_n x

A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_{11} & -a_{12} & -a_{13} & \cdots & -a_{1n} \end{bmatrix}

B = [0 \; 0 \; \cdots \; 1]^T, \qquad C_n = [a_{21} \; a_{22} \; \cdots \; a_{2n}]

Chen and Shieh have shown that the above system can be transformed to an aggregated form using a transformation matrix P corresponding to its continued fraction expansion, i.e.,

\dot{q} = H q + K u \quad (3.23)

v = C_q q \quad (3.24)

q = P x \quad (3.25)

The first two rows of P are extracted from the n-th row of A and the elements of the output vector C_n; the remaining rows are calculated from the familiar Routh-Hurwitz iterative formula

a_{ij} = \frac{a_{i-1,1} \, a_{i-2,j+1} - a_{i-1,j+1} \, a_{i-2,1}}{a_{i-1,1}}

In the new coordinates,

\dot{q} = P A P^{-1} q + P B u \quad (3.27)

which indicates that the matrices H and K are

H = P A P^{-1}, \qquad K = P B

The continued fraction expansion simplification of the system from n-th to l-th order corresponds to retaining the first l variables of q. Calling the first l elements of q z, it is clear that

z = R q \quad (3.28)

R = [I_l : 0]
Now,
\dot{z} = R \dot{q} = R (H q + K u) = R H q + R K u

which, when compared with the desired aggregated state equation

\dot{z} = F z + G u = F R q + G u

leads to

F R = R H = R P A P^{-1}

G = R K = R P B

Using Eqns. (3.25 and 3.28),

z = R q = R P x = C x

where C is the l \times n aggregation matrix. A relation for the aggregated system output matrix C_l in the corresponding output equation (w = C_l z) can be obtained by equating y and w, which yields

C_n = C_l C

Now, using the pseudo-inverses of R and C, the aggregated system matrices are

F = R H R^{+} = R P A P^{-1} R^T \left( R R^T \right)^{-1} \quad (3.29)

G = R P B \quad (3.30)

C_l = C_n C^{+} = C_n C^T \left( C C^T \right)^{-1} \quad (3.31)

with the aggregation matrix expressed by

C = R P \quad (3.32)

The proposed method is very convenient for computational purposes, and its effectiveness is examined by an example.
Example 4 Consider a fourth order system in companion form:

\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -120 & -180 & -102 & -18 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u

y = [120 \; 90 \; 24.8 \; 1.4] x

It is desired to find a second order aggregated model.

SOLUTION: The Routh-type iteration gives

P = \begin{bmatrix} 90 & 77.2 & 16.6 & 1 \\ 0 & 95.76 & 17.07 & 1 \\ 0 & 0 & 13.2 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}

F = \begin{bmatrix} -1.34 & 0.1351 \\ -1.34 & -0.8048 \end{bmatrix}, \qquad G = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

C_l = [13.35 \; 1.274], \qquad C = R P = \begin{bmatrix} 90 & 77.2 & 16.6 & 1 \\ 0 & 95.76 & 17.07 & 1 \end{bmatrix}

The comparison between the unit step responses of the original and aggregated systems in Fig. (3.1) shows a remarkable degree of coincidence.
G(s) = \int_0^\infty g(t) e^{-st} \, dt = \int_0^\infty g(t) \left[ 1 - \frac{st}{1!} + \frac{(st)^2}{2!} - \cdots \right] dt

= \int_0^\infty g(t) \, dt - s \int_0^\infty t \, g(t) \, dt + \frac{s^2}{2!} \int_0^\infty t^2 g(t) \, dt - \cdots

= c_0 + c_1 s + c_2 s^2 + \cdots \quad (4.2)

where

c_i = \frac{(-1)^i}{i!} \int_0^\infty t^i g(t) \, dt = \frac{(-1)^i}{i!} M_i \quad (4.3)

where M_i is the i-th time-moment of the impulse response g(t). Direct division of Eqn. (4.1) yields
In Eqn. (4.4), a_{21} is the zeroth-order coefficient of the numerator in Eqn. (4.1), and the remaining coefficients are obtained by equating coefficients of like powers of s in

\left( c_0 + c_1 s + c_2 s^2 + \cdots \right) \left( 1 + a_{12} s + a_{13} s^2 + \cdots + a_{1,n+1} s^n \right) = a_{21} + a_{22} s + \cdots + a_{2,m+1} s^m

which can be arranged in the matrix form

\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_m \\ c_{m+1} \\ \vdots \\ c_{m+n} \end{bmatrix} = - \begin{bmatrix} 0 & 0 & \cdots & 0 \\ c_0 & 0 & \cdots & 0 \\ c_1 & c_0 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ c_{m-1} & c_{m-2} & \cdots & c_{m-n} \\ c_m & c_{m-1} & \cdots & c_{m-n+1} \\ \vdots & & & \vdots \\ c_{m+n-1} & c_{m+n-2} & \cdots & c_m \end{bmatrix} \begin{bmatrix} a_{12} \\ a_{13} \\ a_{14} \\ \vdots \\ a_{1,n+1} \end{bmatrix} + \begin{bmatrix} a_{21} \\ a_{22} \\ a_{23} \\ \vdots \\ a_{2,m+1} \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad (4.6)

where coefficients c_k with negative index are taken as zero.
Partitioning this relation as

\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = - \begin{bmatrix} C_{11} \\ C_{21} \end{bmatrix} A_1 + \begin{bmatrix} A_2 \\ 0 \end{bmatrix} \quad (4.7)

where C_{11} and C_{21} are the (m+1) \times n and n \times n blocks of the coefficient matrix, and C_i, A_i, i = 1, 2, are vectors of dimension (m+1) and n defined by Eqn. (4.6), and solving for A_1 and A_2, one gets

A_1 = -C_{21}^{-1} C_2

A_2 = C_1 - C_{11} C_{21}^{-1} C_2 = C_1 + C_{11} A_1 \quad (4.8)

The submatrix C_{21} is generally nonsingular; its singularity means that the given set of moments can be matched by a simpler model. The following example explains the moment matching method.
A_1 = -C_{21}^{-1} C_2 = - \begin{bmatrix} c_1 & c_0 \\ c_2 & c_1 \end{bmatrix}^{-1} \begin{bmatrix} c_2 \\ c_3 \end{bmatrix} = - \begin{bmatrix} -2.55 & 2 \\ 2.83 & -2.55 \end{bmatrix}^{-1} \begin{bmatrix} 2.83 \\ -2.9705 \end{bmatrix} = \begin{bmatrix} 1.5144 \\ 0.5171 \end{bmatrix}

A_2 = C_1 + C_{11} A_1 = \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ c_0 & 0 \end{bmatrix} A_1 = \begin{bmatrix} 2 \\ -2.55 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 2 & 0 \end{bmatrix} \begin{bmatrix} 1.5144 \\ 0.5171 \end{bmatrix} = \begin{bmatrix} 2 \\ 0.48 \end{bmatrix}

Therefore, the second order reduced model is

R_2(s) = \frac{a_{21} + a_{22} s}{1 + a_{12} s + a_{13} s^2} = \frac{2 + 0.48 s}{1 + 1.5144 s + 0.5171 s^2}

which has a pair of stable poles at s_{1,2} = -1.006, -1.923, indicating that a stable second order reduced model results.
This result is not always obtained, however; moment matching is known to yield unstable reduced order models for stable full models, and vice versa.
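The numerical steps of the example can be reproduced from Eqn. (4.8); a small sketch using the moment coefficients above (c0 = 2, c1 = -2.55, c2 = 2.83, c3 = -2.9705, with the alternating signs restored):

```python
import numpy as np

# Power-series (moment) coefficients of the full model, G(s) = c0 + c1 s + ...
c = [2.0, -2.55, 2.83, -2.9705]

# Second-order moment-matching model via Eqn. (4.8):
C21 = np.array([[c[1], c[0]],
                [c[2], c[1]]])
C2 = np.array([c[2], c[3]])
A1 = -np.linalg.solve(C21, C2)            # denominator coeffs [a12, a13]

C11 = np.array([[0.0, 0.0],
                [c[0], 0.0]])
A2 = np.array([c[0], c[1]]) + C11 @ A1    # numerator coeffs [a21, a22]

print("R2(s) = (%.4g + %.4g s) / (1 + %.4g s + %.4g s^2)"
      % (A2[0], A2[1], A1[0], A1[1]))
print(np.roots([A1[1], A1[0], 1.0]))      # poles of the reduced model
```

The computed coefficients agree with the text's values to the rounding used there, and both poles come out in the open left half plane.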
Consider a power series

f(s) = c_0 + c_1 s + c_2 s^2 + \cdots \quad (4.9)

and a rational function U_m(s)/V_n(s), where U_m(s) and V_n(s) are m-th and n-th order polynomials in s respectively, and m \le n. The rational function U_m(s)/V_n(s) is said to be a Pade approximant of f(s) if and only if the first (m + n) terms of the power series expansions of f(s) and U_m(s)/V_n(s) are identical.

For the function f(s) in Eqn. (4.9) to be approximated, let the following Pade approximant be defined:

R(s) = \frac{a_0 + a_1 s + \cdots + a_{n-1} s^{n-1}}{b_0 + b_1 s + \cdots + b_{n-1} s^{n-1} + b_n s^n}

Cross-multiplying and equating coefficients of like powers of s gives
a_0 = b_0 c_0 \quad (4.11)
a_1 = b_0 c_1 + b_1 c_0
a_2 = b_0 c_2 + b_1 c_1 + b_2 c_0
\vdots
a_{n-1} = b_0 c_{n-1} + b_1 c_{n-2} + \cdots + b_{n-1} c_0

0 = b_0 c_n + b_1 c_{n-1} + \cdots + b_n c_0 \quad (4.12)
\vdots
0 = b_0 c_{2n-1} + b_1 c_{2n-2} + \cdots + b_{n-1} c_n + b_n c_{n-1}
Once the coefficients c_i, i = 0, 1, 2, \ldots, are found using Eqn. (4.5) (with c_j = (-1)^j a_{j+2,1}) for the full model

G(s) = \frac{d_0 + d_1 s + \cdots + d_{m-1} s^{m-1}}{e_0 + e_1 s + \cdots + e_{m-1} s^{m-1} + e_m s^m} \quad (4.13)

Eqns. (4.11 and 4.12) can be written in vector form as
\begin{bmatrix} c_n & c_{n-1} & \cdots & c_1 \\ c_{n+1} & c_n & \cdots & c_2 \\ c_{n+2} & c_{n+1} & \cdots & c_3 \\ \vdots & & & \vdots \\ c_{2n-1} & c_{2n-2} & \cdots & c_n \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{n-1} \end{bmatrix} = - \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_{n-1} \end{bmatrix} \quad (4.14)

\begin{bmatrix} c_0 & 0 & \cdots & 0 \\ c_1 & c_0 & \cdots & 0 \\ c_2 & c_1 & \ddots & 0 \\ \vdots & & \ddots & \vdots \\ c_{n-1} & c_{n-2} & \cdots & c_0 \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{n-1} \end{bmatrix} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{bmatrix} \quad (4.15)

It must be noted that in the above reformulation of Eqns. (4.11 and 4.12), b_n = 1.
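Eqns. (4.14)-(4.15) can be tried on a small assumed example. If f(s) is itself rational of the chosen order, the equations recover it exactly; here f(s) = (1 + 2s)/(1 + 3s + s^2), whose series coefficients are c = (1, -1, 2, -5).

```python
import numpy as np

# Series coefficients of f(s) = (1 + 2s)/(1 + 3s + s^2), obtained from
# (c0 + c1 s + ...)(1 + 3s + s^2) = 1 + 2s.
c = [1.0, -1.0, 2.0, -5.0]
n = 2

# Eqn. (4.14): solve for the denominator coefficients b0..b_{n-1} (b_n = 1).
Cmat = np.array([[c[2], c[1]],
                 [c[3], c[2]]])
b = np.linalg.solve(Cmat, -np.array(c[:n]))

# Eqn. (4.15): numerator coefficients from the lower-triangular system.
Tlow = np.array([[c[0], 0.0],
                 [c[1], c[0]]])
a = Tlow @ b

print("denominator:", b)   # [1, 3]  -> 1 + 3s + s^2
print("numerator:  ", a)   # [1, 2]  -> 1 + 2s
```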
Using the coefficients c_i and forming the matrices in Eqns. (4.14 and 4.15), the results obtained for second-, third- and fourth-order reduced models are given below.

k-th Order Reduced Model R_k(s) | Poles
R_2(s) = (0.19543 + 0.2238 s) / (-0.3257 - 0.2947 s + s^2) | 0.7368077, -0.442077
R_3(s) = (0.16 - 0.389 s + 0.095 s^2) / (-0.266 - 1.194 s - 0.98 s^2 + s^3) | 1.75, -0.385 \pm j 0.064
R_4(s) = (0.066 - 0.108 s - 0.0986 s^2 + 0.265 s^3) / (-0.11 - 0.4 s - 0.45 s^2 + 0.32 s^3 + s^4) | 0.88, -0.355 \pm j 0.358, -0.5

It can be seen that all the reduced order models are unstable (whereas the full model is a stable one). In order to obtain a stable reduced order model, preassign the first pole at s = s_1 = -3, corresponding to one of the full model's original poles. Under these conditions, since s_1 is a pole of the reduced system, the last equation is replaced by

0 = b_0 + b_1 s_1 + b_2 s_1^2 + \cdots + b_n s_1^n

Using this new equation and solving for the a_i and b_i, the fourth order reduced model is obtained. Since there is a pole-zero cancellation (10^{-6} \approx 0) in the resulting transfer function, a third order reduced model results.
Remark 1 Assume that a reduced order model R(s) of order k is required which retains preassigned poles at s = s_1, \ldots, s_k. The Pade approximant then uses the concept of Pade approximation about more than one point, and the last equations are replaced by conditions of the form

0 = b_0 + b_1 s_1 + b_2 s_1^2 + \cdots + b_k s_1^k

so that

R(s) = \frac{a_0 + a_1 s + a_2 s^2 + \cdots + a_{k-1} s^{k-1}}{(s - s_1)(s - s_2) \cdots (s - s_k)} \quad (4.16)

where the b_i (i = 0, 1, \ldots, k-1) may be computed in terms of s_1, \ldots, s_k. Then, if R(s) is to approximate G(s) in the Pade sense about s = 0, the a_i may be determined using Eqn. (4.15).
Writing the reciprocal of the denominator of G(s) as a continued fraction in Routh array form,

Q(s) = \cfrac{1}{1 + \cfrac{e_0 + e_2 s^2 + \cdots}{e_1 s + e_3 s^3 + \cdots}}
= \cfrac{1}{1 + \cfrac{1}{\cfrac{\alpha_1}{s} + \cfrac{1}{1 + \cfrac{1}{\cfrac{\alpha_2}{s} + \cdots + \cfrac{1}{\cfrac{\alpha_n}{s}}}}}}

The \alpha_i are termed the alpha parameters of the full model. The denominator of the reduced order transfer function is obtained by retaining the first k alpha parameters; it is given by the denominator of the truncated continued fraction

Q_k(s) = \cfrac{1}{1 + \cfrac{1}{\cfrac{\alpha_1}{s} + \cfrac{1}{1 + \cfrac{1}{\cfrac{\alpha_2}{s} + \cdots + \cfrac{1}{\cfrac{\alpha_k}{s}}}}}}

Having obtained the denominator of R(s), the b_i are known, and hence the a_i may be obtained by solving the first k equations of (4.11).
For matrix transfer functions, let

R(s) = \left[ L_0 + L_1 s + I s^2 \right]^{-1} \left[ M_0 + M_1 s \right] \quad (4.18)

Equating this with the matrix power series R(s) = C_0 + C_1 s + C_2 s^2 + \cdots of Eqn. (4.17), i.e.,

\left[ L_0 + L_1 s + I s^2 \right] \left[ C_0 + C_1 s + C_2 s^2 + \cdots \right] = M_0 + M_1 s \quad (4.19)

and comparing coefficients of like powers of s, one obtains

C_0 = L_0^{-1} M_0, \quad \text{i.e.,} \quad M_0 = L_0 C_0 \quad (4.20)

C_1 = L_0^{-1} M_1 - L_0^{-1} L_1 L_0^{-1} M_0, \quad \text{i.e.,} \quad M_1 = L_0 C_1 + L_1 C_0 \quad (4.21)

0 = L_0 C_2 + L_1 C_1 + C_0 \quad (4.22)

0 = L_0 C_3 + L_1 C_2 + C_1 \quad (4.23)

Eqns. (4.22) and (4.23) can be combined as

[L_0 \; L_1] \begin{bmatrix} C_2 & C_3 \\ C_1 & C_2 \end{bmatrix} = -[C_0 \; C_1], \qquad \text{i.e.,} \qquad [L_0 \; L_1] \, D = -[C_0 \; C_1] \quad (4.24)

For unique solutions of the above Pade equations to exist, D must be nonsingular. Once L_0 and L_1 are solved for, M_0 and M_1 can be obtained using Eqns. (4.20 and 4.21).
$$R(s) = \left[L_0 + L_1 s + L_2 s^2 + \cdots + L_{r-1}s^{r-1} + Is^r\right]^{-1}\left[M_0 + M_1 s + \cdots + M_{r-1}s^{r-1}\right] \tag{4.25}$$
If the above equation is expanded up to the first $2r$ terms and compared with Eqn. (4.17), we obtain the matrix equations
$$\begin{bmatrix} L_0 & L_1 & \cdots & L_{r-1} \end{bmatrix}
\begin{bmatrix}
C_r & C_{r+1} & \cdots & C_{2r-1} \\
C_{r-1} & C_r & \cdots & C_{2r-2} \\
\vdots & \vdots & & \vdots \\
C_1 & C_2 & \cdots & C_r
\end{bmatrix} = -\begin{bmatrix} C_0 & C_1 & \cdots & C_{r-1} \end{bmatrix} \tag{4.26}$$
$$\begin{bmatrix} L_0 & L_1 & \cdots & L_{r-1} \end{bmatrix}
\begin{bmatrix}
C_0 & C_1 & \cdots & C_{r-1} \\
0 & C_0 & \cdots & C_{r-2} \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & C_0
\end{bmatrix} = \begin{bmatrix} M_0 & M_1 & \cdots & M_{r-1} \end{bmatrix} \tag{4.27}$$
$$G(s) = \frac{D_0 + D_1 s + D_2 s^2 + \cdots + D_{n-1}s^{n-1}}{c_0 + c_1 s + c_2 s^2 + \cdots + c_{n-1}s^{n-1} + s^n} = C_0 + C_1 s + C_2 s^2 + \cdots \tag{4.28}$$
$$R(s) = \frac{A_0 + A_1 s + A_2 s^2 + \cdots + A_{r-1}s^{r-1}}{b_0 + b_1 s + b_2 s^2 + \cdots + b_{r-1}s^{r-1} + s^r} \tag{4.29}$$
The denominator of $R(s)$ is fixed either by the Routh method or by dominant mode retention. The numerator of $R(s)$ is then obtained using the following $r$ matrix equations:
$$A_0 = b_0 C_0$$
$$A_1 = b_1 C_0 + b_0 C_1$$
$$\vdots \tag{4.30}$$
$$A_{r-1} = b_0 C_{r-1} + b_1 C_{r-2} + \cdots + b_{r-1}C_0$$
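The numerator-matching step of Eqn. (4.30) is straightforward to implement. The following Python sketch (my own illustration, for the scalar case) first computes the power-series coefficients $C_i$ of $G(s)$ about $s = 0$ by long division, then forms the numerator coefficients for a fixed reduced denominator:

```python
import numpy as np

def series_coeffs(num, den, m):
    """First m coefficients of the expansion of num(s)/den(s) about s = 0.
    num, den: ascending coefficient lists with den[0] != 0."""
    num = np.pad(np.asarray(num, dtype=float), (0, m))
    C = np.zeros(m)
    for i in range(m):
        acc = num[i] - sum(den[j] * C[i - j] for j in range(1, min(i, len(den) - 1) + 1))
        C[i] = acc / den[0]
    return C

def pade_numerator(C, b):
    """Eqn (4.30): numerator coefficients A_i = sum_j b_{i-j} C_j for a fixed
    reduced denominator b0 + b1 s + ... + s^r (ascending coefficients b)."""
    r = len(b) - 1
    return np.array([sum(b[i - j] * C[j] for j in range(i + 1)) for i in range(r)])
```

As a sanity check, choosing the reduced denominator equal to the full denominator reproduces the full numerator exactly, since the construction matches the leading $r$ series coefficients.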
The methods presented above are based on the assumption that the high-order system is asymptotically stable. For non-asymptotically stable systems the methods may fail, since some of the alpha parameters of $G(s)$ will be negative. Non-asymptotic stability is due to the presence of poles with zero real parts. It is obviously important that the reduced model be non-asymptotically stable if the higher order system is non-asymptotically stable. Hence, the poles with zero real parts are retained in the reduced order model. Thus, let $G(s)$, with $n$ poles, $m$ zeros and $p$ non-asymptotic poles, be of the form
$$G(s) = \frac{1}{e_p(s)}\,\frac{U_m(s)}{V_{n-p}(s)} = \frac{1}{e_p(s)}\left[P_{m-n+p}(s) + \frac{U_{n-p-1}(s)}{V_{n-p}(s)}\right] = \frac{P_{m-n+p}(s)}{e_p(s)} + \frac{1}{e_p(s)}\,\frac{U_{n-p-1}(s)}{V_{n-p}(s)} \tag{4.31}$$
where
$e_p(s)$ = $p$th order polynomial in $s$ containing the $p$ non-asymptotic poles,
$U_m(s)$ = $m$th order numerator of the full model,
$V_{n-p}(s)$ = denominator of the full model after exclusion of the non-asymptotic poles,
$P_{m-n+p}(s)$ = quotient of the polynomial division $U_m(s)/V_{n-p}(s)$,
$U_{n-p-1}(s)$ = remainder of the polynomial division $U_m(s)/V_{n-p}(s)$.
Now, assigning
$$G_1(s) = \frac{U_{n-p-1}(s)}{V_{n-p}(s)} \tag{4.32}$$
and obtaining a reduced order model $R_1(s)$ of order $r_1 < n - p$ by any of the previous methods, the reduced order model of the original system can be obtained as
$$R(s) = \frac{P_{m-n+p}(s)}{e_p(s)} + \frac{1}{e_p(s)}R_1(s) \tag{4.33}$$
This reduced model is of order $r = p + r_1$, which reiterates the condition that the reduced order model must be of an order greater than the number of non-asymptotic poles in the system. If this condition fails to hold, the system is not reducible by this approach.
$$\dot{Z} = FZ + Gu \tag{4.34}$$
$$y = LZ$$
where
$$F = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -b_0 & -b_1 & -b_2 & \cdots & -b_{r-1} \end{bmatrix}, \qquad G = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{4.35}$$
$$L = \begin{bmatrix} a_0 & a_1 & a_2 & \cdots & a_{r-1} \end{bmatrix}$$
Further, let $c_0, c_1, \ldots, c_{2r-1}$ be the first $2r$ time moments of the system and let $a_i$, $b_i$ be the unknown parameters of the model. Equating the first $2r$ time moments of the model with those of the system, the following equations are obtained:
$$c_0 = -LF^{-1}G$$
$$c_1 = -LF^{-2}G$$
$$\vdots \tag{4.36}$$
$$c_{2r-1} = -LF^{-2r}G$$
On simplification, the equations in Eqn. (4.36) take the form of Eqns. (4.11 and 4.12).
The parameters of the reduced model can be evaluated by a solution of the Pade equations.
Let the controllable canonical form of the 8th order model be given by
$$\dot{Z} = FZ + GU \tag{4.37}$$
$$Y = LZ$$
where $Z \in \mathbb{R}^8$, $U \in \mathbb{R}^2$, $Y \in \mathbb{R}^2$ and
$$F = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \\ -B_0 & -B_1 & -B_2 & -B_3 \end{bmatrix}, \qquad G = \begin{bmatrix} 0 \\ 0 \\ 0 \\ I \end{bmatrix} \tag{4.38}$$
$$L = \begin{bmatrix} A_0 & A_1 & A_2 & A_3 \end{bmatrix}$$
Let us assume $F$ is nonsingular. The $B_i$ and $A_i$ are the unknown matrices, each of dimension $(2 \times 2)$; $0$ and $I$ are the null and identity matrices, each of dimension $(2 \times 2)$. Equating the first eight time moment matrices of the system and the model, the following equations are obtained:
$$C_0 = -LF^{-1}G$$
$$C_1 = -LF^{-2}G$$
$$\vdots \tag{4.39}$$
$$C_7 = -LF^{-8}G$$
The first of the above equations can be written as
$$C_0 = -\begin{bmatrix} A_0 & A_1 & A_2 & A_3 \end{bmatrix}
\begin{bmatrix}
-B_0^{-1}B_1 & -B_0^{-1}B_2 & -B_0^{-1}B_3 & -B_0^{-1} \\
I & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
0 & 0 & I & 0
\end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 0 \\ I \end{bmatrix} = A_0 B_0^{-1}$$
$$\Longrightarrow\quad A_0 = C_0 B_0$$
As $F$ is nonsingular, $B_0$ is nonsingular and $B_0^{-1}$ exists. For the second equation of the set, note that
$$F^{-2}G = F^{-1}\left(F^{-1}G\right) = F^{-1}\begin{bmatrix} -B_0^{-1} \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} B_0^{-1}B_1B_0^{-1} \\ -B_0^{-1} \\ 0 \\ 0 \end{bmatrix}$$
so that
$$C_1 = -LF^{-2}G = -A_0 B_0^{-1}B_1 B_0^{-1} + A_1 B_0^{-1}$$
Post-multiplying by $B_0$ and using $C_0 = A_0 B_0^{-1}$,
$$A_1 = C_0 B_1 + C_1 B_0$$
Similarly, the other six equations are obtained with reasonable effort to yield the following
set of Pade equations.
A0 = C0 B0
A1 = C0 B1 + C1 B0
A2 = C0 B2 + C1 B1 + C2 B0
A3 = C0 B3 + C1 B2 + C2 B1 + C3 B0 (4.40)
and
0 = C0 + C1 B3 + C2 B2 + C3 B1 + C4 B0
0 = C1 + C2 B3 + C3 B2 + C4 B1 + C5 B0
0 = C2 + C3 B3 + C4 B2 + C5 B1 + C6 B0
0 = C3 + C4 B3 + C5 B2 + C6 B1 + C7 B0 (4.41)
$$F = \begin{bmatrix}
0 & I & 0 & \cdots & 0 \\
0 & 0 & I & \cdots & 0 \\
0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & I \\
-B_0 & -B_1 & -B_2 & \cdots & -B_{k-1}
\end{bmatrix}, \qquad
G = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ I \end{bmatrix} \tag{4.42}$$
$$L = \begin{bmatrix} A_0 & A_1 & A_2 & \cdots & A_{k-1} \end{bmatrix}, \qquad k = r/m \tag{4.43}$$
The $B_i$ and $A_i$ are the unknown matrices of dimension $(m \times m)$ and $(p \times m)$, respectively; $0$ and $I$ are the null and identity matrices, each of dimension $(m \times m)$. It is assumed that both $r/m$ and $r/p$ are integers. If the first $(r/m) + (r/p)$ time moment matrices of the reduced model are equated with those of the system, the following equations are obtained:
$$C_0 = -LF^{-1}G$$
$$C_1 = -LF^{-2}G$$
$$\vdots \tag{4.44}$$
$$C_{(r/m)+(r/p)-1} = -LF^{-((r/m)+(r/p))}G$$
where $C_0, C_1, \ldots, C_{(r/m)+(r/p)-1}$ are the first $(r/m) + (r/p)$ time moment matrices of the $n$th order system.
Then by using the results in Eqns. (4.40 and 4.41) and by employing the principle of
induction, Eqn. (4.44) can be simplified to the form
A0 = C 0 B0
A1 = C 0 B1 + C 1 B0
A2 = C 0 B2 + C 1 B1 + C 2 B0
A3 = C 0 B3 + C 1 B2 + C 2 B1 + C 3 B0
..
.
A(r/m)1 = C0 B(r/m)1 + C1 B(r/m)2 (4.45)
+C2 B(r/m)3 + + C(r/m)1 B0
0 = C0 + C1 B(r/m)1 + C2 B(r/m)2
+C3 B(r/m)3 + + C(r/m) B0
0 = C1 + C2 B(r/m)1 + C3 B(r/m)2
+C4 B(r/m)3 + + C(r/m)+1 B0
..
.
0 = C(r/p)1 + C(r/p) B(r/m)1 + C(r/p)+1 B(r/m)2
+C(r/p)+2 B(r/m)3 + + C(r/m)+(r/p)1 B0 (4.46)
The first set of equations contains r/m linear matrix algebraic equations ( or pr linear
simultaneous equations). The second set contains r/p linear matrix algebraic equations
$$\dot{X} = AX + BU \tag{4.48}$$
$$Y = EX$$
where $X \in \mathbb{R}^n$, $U \in \mathbb{R}^m$, $Y \in \mathbb{R}^p$. Also, let the system be completely controllable and observable with controllability index $n$. A modal transformation $X = MW$ yields
$$\begin{bmatrix} \dot{W}_1 \\ \dot{W}_2 \end{bmatrix} = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix}\begin{bmatrix} W_1 \\ W_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}U \tag{4.49}$$
$$Y = \begin{bmatrix} E_1 & E_2 \end{bmatrix}\begin{bmatrix} W_1 \\ W_2 \end{bmatrix} \tag{4.50}$$
where $W = \begin{bmatrix} W_1^T & W_2^T \end{bmatrix}^T$. Let $\Lambda_1$ contain the $r$ dominant eigenvalues of the system. The state equation of the $r$th order model can be obtained by a truncation of the representation in Eqn. (4.49), i.e., $W_1 = \begin{bmatrix} I_r & 0 \end{bmatrix}W$, so that
$$\dot{W}_1 = \Lambda_1 W_1 + B_1 U \tag{4.51}$$
As the system is completely controllable, the state equation of the model in Eqn. (4.51)
can be transformed into controllable canonical form and can be represented by Eqn. (4.37)
with F, G given in Eqn. (4.42). The output matrix of the reduced model is assumed to be
unknown and is represented by L in Eqn. (4.43). In this situation, Bi matrices are known
while Ai matrices contain unknown elements. These unknowns can be obtained from the
solution of Eqn. (4.45). Thus, it can be seen that, for a multivariable system in the time
domain, a stable Pade approximant matches r/m time moment matrices with that of its
system.
An exact aggregation matrix also exists for the modal-Pade procedure. The derivation of
this is given in the following.
The transformation matrix $M$ transforms the system in Eqn. (4.48) into the modal form in Eqn. (4.49), so
$$W = M^{-1}X$$
Let the transformation matrix $T$ transform the representation into the controllable canonical form in Eqn. (4.42), so
$$Z = TW_1$$
Now
$$Z = TW_1 = T\begin{bmatrix} I_r & 0 \end{bmatrix}W = T\begin{bmatrix} I_r & 0 \end{bmatrix}M^{-1}X \triangleq CX \tag{4.52}$$
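The modal truncation step and the exact aggregation property $FC = CA$ can be demonstrated numerically. The following Python sketch is my own illustration on a hypothetical third-order system (the matrix values are assumptions, not from the text), taking $T = I_r$ for simplicity:

```python
import numpy as np

# Hypothetical 3rd-order system; retain the r = 2 dominant (slowest) modes.
A = np.array([[-1.0, 0.0, 0.0],
              [1.0, -2.0, 0.0],
              [0.0, 1.0, -10.0]])
lam, M = np.linalg.eig(A)
order = np.argsort(np.abs(lam.real))   # dominant = smallest |Re(lambda)|
lam, M = lam[order], M[:, order]

r = 2
C = np.linalg.inv(M)[:r, :]            # aggregation matrix C = [I_r 0] M^{-1}
F = np.diag(lam[:r])                   # reduced state matrix Lambda_1
# Exact aggregation holds for modal truncation: F C = C A
```

Because the rows of $M^{-1}$ are left eigenvectors of $A$, the aggregation condition $FC = CA$ is satisfied exactly, which is the defining property of an exact aggregated model.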
$$\frac{U_n(s)}{V_n(s)} = \frac{b_1 + b_2 s + b_3 s^2 + \cdots + b_n s^{n-1}}{a_0 + a_1 s + a_2 s^2 + \cdots + a_{n-1}s^{n-1} + s^n} \tag{4.53}$$
It has been shown by Hutton [23] and Friedland [22] that $\hat{G}(s) = \frac{1}{s}G\!\left(\frac{1}{s}\right)$ can be represented in a canonical fashion as
$$f_k(s) = \cfrac{1}{\alpha_k s + \cfrac{1}{\alpha_{k+1}s + \cfrac{1}{\ddots\ \alpha_{n-1}s + \cfrac{1}{\alpha_n s}}}} \tag{4.55}$$
$$f_1(s) = \frac{1}{1 + \alpha_1 s} \tag{4.56}$$
Eqns. (4.54)-(4.56) are called the alpha-beta expansion of $G(s)$. The $k$th order model is obtained by using the following algorithm, in which the $\alpha_i$ and $\beta_i$ are computed through the alpha and beta Routh tables (refer [13, 22]).

Step 3: For a $k$th order reduced model, use the recursive formulae in Eqn. (4.64) to find $\hat{R}_k(s) = \hat{P}_k(s)/\hat{Q}_k(s)$.

Step 4: Reverse the coefficients of $\hat{P}_k(s)$ and $\hat{Q}_k(s)$ to find $R_k(s) = P_k(s)/Q_k(s)$.
The Routh-Hurwitz method can be used to obtain reduced order models for stable full
systems. By this method, the less dominant poles of the full model are retained.
where the $\delta_i$, $i = 1, 2, \ldots, n$, are constants and the $W_i(s)$, $i = 1, 2, \ldots, n$, are defined by the continued fraction expansions
$$W_i(s) = \cfrac{1}{\cfrac{\gamma_i}{s} + \cfrac{1}{\cfrac{\gamma_{i+1}}{s} + \cfrac{1}{\ddots\ \cfrac{\gamma_{n-1}}{s} + \cfrac{1}{\cfrac{\gamma_n}{s}}}}} \tag{4.58}$$
$$W_1(s) = \cfrac{1}{\cfrac{\gamma_1}{s} + W_2(s)}$$
The values of the $\gamma$ and $\delta$ parameters can be obtained using the gamma and delta Routh tables (or inverse Routh tables). The $n$ parameters $\gamma_k$, $k = 1, 2, \ldots, n$, of this expression can be found in the following fashion. The $\delta_i$ parameters can be similarly obtained using the coefficients of the numerator, $b_j$, $j = 1, 2, \ldots, n$. The recursive formula to compute the entries of the gamma and delta tables can be obtained from
4.3 Routh Approximation Techniques 55
$$a_0^{i+1} = a_2^{i-1} - \gamma_i a_2^i$$
$$a_2^{i+1} = a_4^{i-1} - \gamma_i a_4^i$$
$$\vdots \tag{4.59}$$
$$a_{n-i-2}^{i+1} = a_{n-i}^{i-1} - \gamma_i a_{n-i}^i, \qquad i = 1, 2, \ldots, n-1$$
$$a_{n-i-1}^{i+1} = a_{n-i+1}^{i-1}$$
The $k$th Routh reduced model $R_k(s)$ for the full model $G(s)$, using the alpha-beta expansion, is found by truncating the expansion in Eqn. (4.54) and rearranging the retained terms as a rational transfer function. Truncating the continued fraction in Eqn. (4.55) after the $k$th term and denoting the result by $g_{j,k}(s)$, the reduced model transfer function $R_k(s)$ is similar in form to Eqn. (4.54):
$$R_k(s) = \sum_{i=1}^{k}\beta_i\prod_{j=1}^{i}g_{j,k}(s) \tag{4.62}$$
$$g_{j,k}(s) = \cfrac{1}{\cfrac{\alpha_j}{s} + \cfrac{1}{\cfrac{\alpha_{j+1}}{s} + \cfrac{1}{\ddots\ \cfrac{\alpha_{k-1}}{s} + \cfrac{1}{\cfrac{\alpha_k}{s}}}}} \tag{4.63}$$
Denote the numerator and denominator of $R_k(s)$ by $P_k(s)$ and $Q_k(s)$, respectively, defined recursively for $k = 1, 2, \ldots$ with the initial conditions
$$P_{-1}(s) = P_0(s) = 0$$
$$Q_{-1}(s) = 1/s, \qquad Q_0(s) = 1$$
$$\tilde{G}(s) = G(s + a)$$
where the real, positive parameter $a$ is chosen so that $a > \mathrm{Re}\{\lambda_m\}$, with $\lambda_m$ the pole having the largest positive real part. The next step is to find $\tilde{R}_k(s)$ as usual and finally to shift the imaginary axis back to its original position, providing
$$R_k(s) = \tilde{R}_k(s - a)$$
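The frequency shift $G(s) \to G(s+a)$ amounts to a binomial re-expansion of the numerator and denominator polynomials. A minimal Python sketch of the idea (my own illustration; coefficient orders and the example polynomial are assumptions):

```python
import numpy as np
from math import comb

def shift_poly(c, a):
    """Ascending coefficients of p(s + a), given ascending coefficients of p(s)."""
    out = np.zeros(len(c))
    for k, ck in enumerate(c):              # expand ck * (s + a)^k binomially
        for j in range(k + 1):
            out[j] += ck * comb(k, j) * a ** (k - j)
    return out

# Unstable denominator s^2 + s - 2 = (s + 2)(s - 1); shifting by a = 2 > 1
# moves every root left by 2, giving a stable polynomial to reduce.
shifted = shift_poly([-2.0, 1.0, 1.0], 2.0)   # coefficients of p(s + 2)
```

After reduction of the shifted (stable) model, applying the same routine with $-a$ to the reduced numerator and denominator realizes the back-shift $R_k(s) = \tilde{R}_k(s - a)$.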
$$\dot{x} = Ax + bu \tag{4.65}$$
$$y = Cx$$
It has been shown in [26] that the linear relation
$$v = Px \tag{4.66}$$
exists which transforms Eqn. (4.65) to its expansion in the phase canonical form
$$\dot{v} = Rv + mu \tag{4.67}$$
$$y = Ev$$
where
$$R = \begin{bmatrix}
-\alpha_1 & 0 & -\alpha_3 & 0 & \cdots & -\alpha_n \\
0 & 0 & -\alpha_3 & 0 & \cdots & -\alpha_n \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & 0 & \cdots & -\alpha_n \\
0 & 0 & 0 & 0 & -\alpha_5 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_n
\end{bmatrix}, \qquad
m = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \quad \text{if } n \text{ is odd} \tag{4.68}$$
$$R = \begin{bmatrix}
0 & -\alpha_2 & 0 & -\alpha_4 & \cdots & -\alpha_n \\
-\alpha_1 & -\alpha_2 & 0 & -\alpha_4 & \cdots & -\alpha_n \\
0 & 0 & 0 & -\alpha_4 & \cdots & -\alpha_n \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_n \\
\vdots & \vdots & \vdots & \vdots & & \vdots \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_n
\end{bmatrix}, \qquad
m = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \quad \text{if } n \text{ is even}$$
with, in both cases,
$$E = \begin{bmatrix} \beta_1 & \beta_2 & \beta_3 & \cdots & \beta_n \end{bmatrix} \tag{4.69}$$
It is obvious from Eqns. (4.65)-(4.67) that
$$R = PAP^{-1}, \qquad m = Pb, \qquad E = CP^{-1} \tag{4.70}$$
In terms of the direct Routh approximation method (DRAM) [24], the linear transfor-
mation P is given as
$$P = \begin{bmatrix}
a_0^1 & 0 & a_2^1 & 0 & \cdots & a_{n-2}^1 & 0 & 1 \\
0 & a_0^2 & 0 & a_2^2 & \cdots & & a_{n-2}^2 & 0 \\
0 & 0 & a_0^3 & 0 & \cdots & & & 1 \\
\vdots & & & \ddots & & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{bmatrix} \quad \text{if } n \text{ is odd} \tag{4.71}$$
$$P = \begin{bmatrix}
a_0^1 & 0 & a_2^1 & 0 & \cdots & a_{n-2}^1 & 0 \\
0 & a_0^2 & 0 & a_2^2 & \cdots & 0 & 1 \\
0 & 0 & a_0^3 & 0 & \cdots & & 0 \\
0 & 0 & 0 & a_0^4 & \ddots & & \vdots \\
\vdots & & & & \ddots & a_0^{n-1} & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix} \quad \text{if } n \text{ is even}$$
where $a_j^i$ denotes the entries of the Routh table, as in Eqn. (4.59).
Since the matrix $P$ is upper-triangular and sparse, the computation of its inverse is comparatively easy. The derivation of the $r$th order model which approximates the $n$th order model in Eqns. (4.68)-(4.69) is carried out by simply retaining the leading blocks of order $r$ of the matrices $R$, $m$, $E$ and discarding the remaining $(n - r)$ blocks, to yield
$$\dot{z} = Fz + gu \tag{4.72}$$
$$y = Hz$$
where
$$F = \begin{bmatrix}
-\alpha_1 & 0 & -\alpha_3 & 0 & \cdots & -\alpha_r \\
0 & 0 & -\alpha_3 & 0 & \cdots & -\alpha_r \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & 0 & \cdots & -\alpha_r \\
0 & 0 & 0 & 0 & -\alpha_5 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_r
\end{bmatrix}, \qquad
g = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \quad \text{if } r \text{ is odd} \tag{4.73}$$
$$F = \begin{bmatrix}
0 & -\alpha_2 & 0 & -\alpha_4 & \cdots & -\alpha_r \\
-\alpha_1 & -\alpha_2 & 0 & -\alpha_4 & \cdots & -\alpha_r \\
0 & 0 & 0 & -\alpha_4 & \cdots & -\alpha_r \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_r \\
\vdots & \vdots & \vdots & \vdots & & \vdots \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 & \cdots & -\alpha_r
\end{bmatrix}, \qquad
g = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \quad \text{if } r \text{ is even}$$
4.3 Routh Approximation Techniques 59
In both cases,
$$H = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_r \end{bmatrix} \tag{4.74}$$
Define
$$C_r = \begin{bmatrix} I_r & 0_{r \times (n-r)} \end{bmatrix} \tag{4.75}$$
Then it is readily verified that the system in Eqn. (4.72) is an aggregated model of the system in Eqn. (4.65), provided that
$$z = C_r v, \qquad F = C_r R C_r^T, \qquad g = C_r m, \qquad H = E C_r^T \tag{4.76}$$
Furthermore, from Eqns. (4.66) and (4.76), the transformation matrix $L$ transforming the vector $x$ into $z$ is given by
$$L = C_r P \tag{4.77}$$
In summary, the dynamic model in Eqn. (4.72) is an approximant of the original model
in Eqn. (4.65) using L as an aggregation matrix.
Remark 2 It is to be noted that though the Routh technique assures simultaneous derivation
of reduced models of all orders, and retains the stability property during reduction, the
reduced order models do not retain the eigenvalue subset of the original system, nor are the
aggregated models of Routh approximation exact. The second and third equalities of Eqn.
(4.76) are not preserved during the Routh-based aggregation.
The impulse response energies of successive Routh approximants are related as
$$\|R_{k+1}\| = \sum_{i=1}^{k+1}\frac{\beta_i^2}{2\alpha_i} = \|R_k\| + \frac{\beta_{k+1}^2}{2\alpha_{k+1}} \tag{4.79}$$
and, generally, as $k \to n$ the eigenvalues of the approximant tend towards the actual eigenvalues of the full model $(k \to n \Rightarrow \lambda\{R_k\} \to \lambda\{G\})$.
Thus, as the model order increases, the closeness of fit also increases. But, using this
logic would indicate that the full model is the ideal reduced model ( which, though true,
loses the purpose of model order reduction). Hence, the maximum change in impulse energy
with increase of order of model by one is calculated.
$$\Delta_k = \|R_k\| - \|R_{k-1}\| = \frac{\beta_k^2}{2\alpha_k}, \qquad k = 1, 2, \ldots, n \tag{4.80}$$
Now, the optimal reduced order model is calculated as
Remark 3: The first $k$ terms of the power series expansions of $G(s)$ and $R_k(s)$ coincide, that is,
$$R_k(s) - G(s) = \sum_{j=1}^{\infty}\epsilon_j s^{k+j}$$
Evaluation of the above equation in the light of the concept of Pade-type approximants readily shows that the Routh approximant is only a partial Pade approximant and matches only the initial $k$ time moments.
$$G(s) = \cfrac{1}{h_1 s + \cfrac{1}{h_2 + \cfrac{1}{h_3 s + \ddots}}} \tag{4.83}$$
corresponds to a combination of multiple feedback loops comprising differentiator blocks and feed-forward paths having proportional blocks. The MIMO first Cauer form is thus
$$G(s) = \left[H_1 s + \left[H_2 + \left[H_3 s + [\cdots]^{-1}\right]^{-1}\right]^{-1}\right]^{-1} \tag{4.84}$$
$$G(s) = \cfrac{1}{k_1 + \cfrac{1}{\cfrac{k_2}{s} + \cfrac{1}{k_3 + \ddots}}} \tag{4.85}$$
can be represented by a combination of multiple feedback loops having proportional blocks and multiple feed-forward paths including integral blocks. In the case of MIMO systems, the second Cauer form is
$$G(s) = \left[K_1 + \left[K_2\frac{1}{s} + \left[K_3 + \left[K_4\frac{1}{s} + [\cdots]^{-1}\right]^{-1}\right]^{-1}\right]^{-1}\right]^{-1} \tag{4.86}$$
s s
$$G(s) = \cfrac{1}{d_1 + f_1 s + \cfrac{1}{\cfrac{d_2}{s} + f_2 + \cfrac{1}{d_3 + f_3 s + \cfrac{1}{\cfrac{d_4}{s} + f_4 + \ddots}}}} \tag{4.87}$$
which corresponds to a combination of multiple feedback loops with blocks each con-
taining proportional plus differential and feed-forward paths comprising proportional plus
integral blocks. The third Cauer form can be represented for MIMO systems as
" 1 1
1 #1
1 1
G(s) = D1 + F1 s + D2 + F2 + D3 + F3 s + D4 + F4 + [ ]1 (4.88)
s s
The quotients $k_i$ in Eqn. (4.85) may be obtained by arranging the polynomials in ascending order and then performing long synthetic division. The generalized Routh algorithm [28] can then be used to compute the quotients $d_i$ and $f_i$ in Eqn. (4.87).
It should be noted that the quotients in the MIMO case, $H_i$, $K_i$, $D_i$ and $F_i$, are constant $m \times m$ quotient matrices. Observe that any algorithm that computes the quotients $D_i$ and $F_i$ is capable of computing the $H_i$ quotients (by setting all the $D_i$ equal to zero) and of computing the $K_i$ quotients (by suppressing the quotients $F_i$ throughout the implementation).
Next, we present an algorithmic procedure to compute the quotients Di and Fi of Eqn.
(4.88) for transfer function matrices.
$$G(s) = \left[A_{2,n}s^{n-1} + A_{2,n-1}s^{n-2} + \cdots + A_{22}s + A_{21}\right]\left[A_{1,n+1}s^n + A_{1,n}s^{n-1} + \cdots + A_{12}s + A_{11}\right]^{-1} \tag{4.89}$$
where the $A_{2j}$, $j = 1, \ldots, n$, are constant $m \times m$ matrices and $A_{1i} = a_i I_m$, $i = 1, \ldots, n+1$, where each $a_i$ is a coefficient of
$$\Delta(s) = \sum_{i=1}^{n+1} a_i s^{i-1}$$
Assuming the indicated inverses exist, the matrix Routh array is completed as follows:
$$A_{11} \quad A_{12} \quad \cdots \quad A_{1,n} \quad A_{1,n+1}$$
$$D_1 = A_{11}A_{21}^{-1}, \qquad F_1 = A_{1,n+1}A_{2,n}^{-1}$$
$$A_{21} \quad A_{22} \quad \cdots \quad A_{2,n}$$
$$D_2 = A_{21}A_{31}^{-1}, \qquad F_2 = A_{2,n}A_{3,n-1}^{-1}$$
$$A_{31} \quad A_{32} \quad \cdots \quad A_{3,n-1}$$
$$D_3 = A_{31}A_{41}^{-1}, \qquad F_3 = A_{3,n-1}A_{4,n-2}^{-1}$$
$$A_{41} \quad A_{42} \quad \cdots \quad A_{4,n-2}$$
$$\vdots$$
$$A_{n-1,1} \quad A_{n-1,2} \quad A_{n-1,3}$$
$$D_{n-1} = A_{n-1,1}A_{n,1}^{-1}, \qquad F_{n-1} = A_{n-1,3}A_{n,2}^{-1}$$
$$A_{n,1} \quad A_{n,2}$$
$$D_n = A_{n,1}A_{n+1,1}^{-1}, \qquad F_n = A_{n,2}A_{n+1,1}^{-1}$$
$$A_{n+1,1} \tag{4.93}$$
It should be noted that the above procedure is directly amenable to machine computation.
To illustrate the calculations, let us consider a simple example:

Example 7
$$G_1(s) = \frac{1}{s^2 + 2}\begin{bmatrix} 2s - 3 & 2 \\ s - 2 & s - 2 \end{bmatrix}$$
with
$$A_{21} = \begin{bmatrix} -3 & 2 \\ -2 & -2 \end{bmatrix}, \qquad A_{22} = \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix}$$
$$A_{11} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \qquad A_{12} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad A_{13} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
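The first step of the matrix Routh array for this example can be checked numerically. The Python sketch below uses the matrices of Example 7 as I read them from the source (the signs in $A_{21}$ are my reading of a partly garbled display, so treat them as an assumption), with $n = 2$ so that $F_1 = A_{13}A_{22}^{-1}$:

```python
import numpy as np

A11 = np.array([[2.0, 0.0], [0.0, 2.0]])
A13 = np.eye(2)
A21 = np.array([[-3.0, 2.0], [-2.0, -2.0]])   # assumed signs, see text
A22 = np.array([[2.0, 0.0], [1.0, 1.0]])

D1 = A11 @ np.linalg.inv(A21)   # first quotient matrix D1 = A11 * A21^{-1}
F1 = A13 @ np.linalg.inv(A22)   # first quotient matrix F1 = A13 * A22^{-1}
```

The subsequent rows $A_{3j}$ of the array would follow from the recursion referenced in the text, which is not reproduced here.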
$$G_r(s) = \left[K_1 + \left[K_2\frac{1}{s} + \left[K_3 + \cdots + \left[K_{2r}\frac{1}{s}\right]^{-1}\right]^{-1}\right]^{-1}\right]^{-1} \tag{4.94}$$
$$A = -\begin{bmatrix}
K_1K_2 & K_1K_4 & K_1K_6 & \cdots & K_1K_{2n} \\
K_1K_2 & (K_1 + K_3)K_4 & (K_1 + K_3)K_6 & \cdots & (K_1 + K_3)K_{2n} \\
K_1K_2 & (K_1 + K_3)K_4 & (K_1 + K_3 + K_5)K_6 & \cdots & (K_1 + K_3 + K_5)K_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
K_1K_2 & (K_1 + K_3)K_4 & (K_1 + K_3 + K_5)K_6 & \cdots & (K_1 + K_3 + \cdots + K_{2n-1})K_{2n}
\end{bmatrix} \tag{4.95}$$
$$B = \begin{bmatrix} I_m \\ I_m \\ \vdots \\ I_m \end{bmatrix}, \qquad C = \begin{bmatrix} K_2 & K_4 & \cdots & K_{2n} \end{bmatrix}$$
Note that the order of the system matrix is nm nm. A simplified model can be
obtained by partitioning. For example, the 2mth order model can be derived from the upper
left corners if Eqn. (4.95) is partitioned as shown.
The above formulation can also be obtained via aggregation for a SISO system in the following manner. For a SISO system represented in the phase canonical form
$$\dot{x} = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_{11} & -a_{12} & -a_{13} & \cdots & -a_{1,n}
\end{bmatrix}x + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}u$$
$$y = \begin{bmatrix} a_{21} & a_{22} & a_{23} & \cdots & a_{2,n} \end{bmatrix}x$$
the aggregation matrix $L = \begin{bmatrix} I_{2m} & 0 \end{bmatrix}P$, which results in the reduced state space model described in Eqn. (4.95), has
$$P = \begin{bmatrix}
a_{31} & a_{32} & a_{33} & \cdots & a_{3,n} \\
0 & a_{51} & a_{52} & \cdots & a_{5,n-1} \\
0 & 0 & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & a_{2n+1,1} \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix}$$
It should be noted that here too the aggregation is not exact, as the eigenspectrum retained in the reduced order model is not a subset of the eigenspectrum of the original full model.
The most important properties of continued fraction expansion are
2. It contains most of the essential characteristics of the original model in the first few
terms.
Since the denominator coefficients of the simplified model depend on both the numerator
and denominator coefficients of the original model, stability of the simplified model cannot
be guaranteed, even if the original model is stable.
Next let us look at the derivation of simplified transfer function models that give good approximations to both the initial transient and the steady state responses. The third Cauer form, or any equivalent expansion, provides the basis of such models, due to the fact that it considers Maclaurin expansions about both the origin and infinity. Thus, a simplified model matrix $G_r(s)$ could be obtained by truncating the continued fraction expansion in Eqn. (4.88) after $r$ terms. This means that $r$ quotient matrices $D_i$ and $F_i$ have to be determined using the matrix Routh array algorithm. A quite similar expansion of the third Cauer form is given by
$$G(s) = \left[D_1 + s\left[D_2 + \left[D_3 + s\left[D_4 + \cdots\right]^{-1}\right]^{-1}\right]^{-1}\right]^{-1} \tag{4.96}$$
which considers the expansion of $G(s)$ into a matrix Cauer-type continued fraction about $s = 0$ and $s = \infty$ alternately. Note that there are $2n$ constant $(m \times m)$ matrices $D_i$. Thus,
this method puts equal emphasis on its approximations to the initial transients and the steady
state responses. In principle, there is no restriction to the relative number of truncated terms
about each side of the series expansion. For example, increasing the number of terms in the
series expansion about s = 0 will have the effect of yielding more accurate approximations
to the steady state response, and vice versa. This class of simplified models is often termed biased simplified models.
In conclusion, it is interesting to note that the three matrix Cauer forms [29] could be
reduced to the form in Eqn. (4.89).
Remark 4: In the case of the second Cauer form, the first $2r$ terms of the power series expansion about $s = 0$ (or the first $2r$ time moments) of $G_r(s)$ agree with those of $G(s)$. In the case of the first Cauer form, the coefficients of the expansions about $s = \infty$ (the Markov parameters) agree with one another. The third Cauer form is able to match both sets of parameters.
Chapter 5
5.1 Introduction
5.1.1 Norms of Vectors and Matrices
The norm is a measure of the magnitude of a vector or matrix. It is a measure that has the following important properties:
$$\|a + b\| \le \|a\| + \|b\|$$
$$\|\alpha a\| = |\alpha|\,\|a\|$$
$$\|(\alpha + \beta)a\| \le \|\alpha a\| + \|\beta a\|$$
$$\|0\| = 0$$
$$\|x\| = 0 \iff x = 0$$
Vector Norms
The $p$-norm: For an $n$-dimensional vector $x$, the $p$-norm is defined as
$$\|x\|_p = \left(\sum_{i=1}^{n}|x_i|^p\right)^{1/p} \tag{5.1}$$
The term "norm" of a vector is usually used synonymously with the 2-norm of the vector. This norm also gives a measure of the length of the vector in $\mathbb{R}^n$.

The $\infty$-norm (infinity norm): The infinity norm, denoted $\|x\|_\infty$, is simply the maximum absolute value among the components of the vector, i.e., its largest excursion in any coordinate direction.
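These vector norms are one-liners to compute. A brief Python illustration (my own, with an arbitrary example vector), cross-checked against NumPy's built-in norm:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

def p_norm(x, p):
    """Eqn (5.1): the p-norm (sum of |x_i|^p)^(1/p)."""
    return (np.abs(x) ** p).sum() ** (1.0 / p)

one_norm = p_norm(x, 1)        # 7.0
two_norm = p_norm(x, 2)        # 5.0 (Euclidean length)
inf_norm = np.abs(x).max()     # 4.0 (limit of p_norm as p -> infinity)
```

Note how the 2-norm of $(3, -4, 0)$ is the Euclidean length $5$, while the infinity norm picks out the largest component magnitude $4$.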
Matrix Norms

In the case of a square matrix, a norm may be defined as
$$\|A\| = \max_{x \ne 0}\frac{x^T A x}{x^T x} \tag{5.4}$$
This operation cannot be performed in the case of non-square matrices. Hence, a more general definition of the matrix norm can be given as follows:
$$\|A\| = \sqrt{\lambda_{\max}\left(A^T A\right)} = \sqrt{\lambda_{\max}\left(A A^T\right)} \tag{5.5}$$
Any matrix $A$ admits a singular value decomposition
$$A = U\Sigma V^T \tag{5.6}$$
where $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots)$ with $\sigma_i = \lambda_i^{1/2}$, the $\lambda_i$ being the eigenvalues of $AA^T$.
5.1 Introductions 69
If $r = \min(m, n)$, then the first $r$ columns of $V$ are the right singular vectors and the first $r$ columns of $U$ are the left singular vectors, satisfying
$$Av_i = \sigma_i u_i$$
In the general case of $N$ dimensions, the length (or norm) of a vector $x$ is defined by
$$\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_N^2} = \sqrt{x^T x}$$
An alternative norm for $A$ is the Frobenius norm, which is the Euclidean norm of the vector constructed by stacking the columns of $A$ into one $mn$-vector:
$$\|A\|_F = \left(\sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^2\right)^{1/2}$$
In terms of the singular values,
$$\|A\| = \sigma_1, \qquad \|A\|_F = \left(\sum_{i=1}^{\min(m,n)}\sigma_i^2\right)^{1/2}$$
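The singular-value characterizations of the spectral and Frobenius norms are easy to confirm numerically. A short Python sketch (my own, with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending

spectral = s[0]                          # ||A|| = sigma_1
frobenius = np.sqrt((s ** 2).sum())      # ||A||_F = sqrt(sum of sigma_i^2)
```

Both values agree with NumPy's direct norm computations, and the squared singular values equal the eigenvalues of $A^T A$, as stated above.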
$$\dot{x} = Ax + Bu$$
$$y = Cx$$
$$AW_r + W_rA^T = -BB^T \tag{5.7}$$
$$A^TW_o + W_oA = -C^TC \tag{5.8}$$
Eqns. (5.7) and (5.8) are called the Lyapunov equations of the system. If the eigenvalues of $A$ are assumed to lie strictly in the left half-plane, then we can define the controllability (reachability) Grammian and the observability Grammian, respectively, as
$$W_r = \int_0^\infty \exp(At)\,BB^T\exp(A^Tt)\,dt \tag{5.9}$$
$$W_o = \int_0^\infty \exp(A^Tt)\,C^TC\exp(At)\,dt \tag{5.10}$$
Proposition 1: For two algebraically equivalent systems with state vectors $x$ and $\bar{x} = Tx$, the following relations exist:
$$\bar{W}_r = TW_rT^T \tag{5.11}$$
$$\bar{W}_o = \left(T^T\right)^{-1}W_oT^{-1} \tag{5.12}$$
Proof: Consider the matrix product $\bar{W}_r\bar{W}_o$. Taking into account Eqns. (5.11)-(5.12), we have
$$\bar{W}_r\bar{W}_o = TW_rT^T\left(T^T\right)^{-1}W_oT^{-1} = TW_rW_oT^{-1}$$
so the eigenvalues of $W_rW_o$ are invariant under a change of state coordinates.
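This invariance is what makes the Hankel singular values well defined. The Python sketch below (my own illustration; the system and transformation matrices are arbitrary assumptions) computes the Grammians from Eqns. (5.7)-(5.8) with SciPy's Lyapunov solver, applies a similarity transformation, and compares the eigenvalues of $W_rW_o$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

def grammians(A, B, C):
    Wr = solve_continuous_lyapunov(A, -B @ B.T)     # Eqn (5.7)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # Eqn (5.8)
    return Wr, Wo

Wr, Wo = grammians(A, B, C)

# Similarity transformation x_bar = T x
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
Wrb, Wob = grammians(T @ A @ Ti, T @ B, C @ Ti)

hsv = np.sqrt(np.sort(np.linalg.eigvals(Wr @ Wo).real))
hsvb = np.sqrt(np.sort(np.linalg.eigvals(Wrb @ Wob).real))
```

The individual Grammians change (per Eqns. (5.11)-(5.12)) but the Hankel singular values do not.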
Definition 3: If a system is asymptotically stable, then the Hankel singular values of $M(s)$ are defined as
$$\sigma_i(M(s)) = \left(\lambda_i(W_rW_o)\right)^{1/2}, \qquad i = 1, 2, \ldots, n$$
$$H_k = CA^{k-1}B$$
The Markov parameters are also given as the values of the system impulse response and its derivatives computed at $t = 0$.

Definition 5: The Hankel matrix is defined as the doubly infinite matrix whose $(i,j)$th block is $H_{i+j-1}$. The matrix $H$ can be expressed as
$$H = M_oM_c$$
with
$$M_o = \begin{bmatrix} C^T & A^TC^T & \cdots & \left(A^T\right)^kC^T \end{bmatrix}^T, \qquad M_c = \begin{bmatrix} B & AB & \cdots & A^kB \end{bmatrix}$$
where $A_{11}$ and $A_{22}$ are square matrices. Suppose $A_{11}$ is nonsingular; then $A$ has the following decomposition:
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ A_{21}A_{11}^{-1} & I \end{bmatrix}\begin{bmatrix} A_{11} & 0 \\ 0 & \Delta \end{bmatrix}\begin{bmatrix} I & A_{11}^{-1}A_{12} \\ 0 & I \end{bmatrix}$$
with $\Delta := A_{22} - A_{21}A_{11}^{-1}A_{12}$, and $A$ is nonsingular if and only if $\Delta$ is nonsingular. Dually, if $A_{22}$ is nonsingular, then
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & A_{12}A_{22}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} \bar{\Delta} & 0 \\ 0 & A_{22} \end{bmatrix}\begin{bmatrix} I & 0 \\ A_{22}^{-1}A_{21} & I \end{bmatrix}$$
with $\bar{\Delta} := A_{11} - A_{12}A_{22}^{-1}A_{21}$. The above matrix inversion formulae are particularly simple if $A$ is block triangular:
$$\begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ -A_{22}^{-1}A_{21}A_{11}^{-1} & A_{22}^{-1} \end{bmatrix}$$
$$\begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1}A_{12}A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{bmatrix}$$
The following identity is also very useful. Suppose $A_{11}$ and $A_{22}$ are both nonsingular matrices; then
$$\left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)^{-1} = A_{11}^{-1} + A_{11}^{-1}A_{12}\left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right)^{-1}A_{21}A_{11}^{-1}$$
$$A\Sigma + \Sigma A^T + BB^T = 0 \tag{5.13}$$
$$A^T\Sigma + \Sigma A + C^TC = 0 \tag{5.14}$$
Hence $\Sigma = \mathrm{diag}\left[\sigma_i\right]$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$. The $\sigma_i$ are termed the Hankel singular values of the original system.
Let us consider how a balanced realization may be obtained [33]. Let $P$ and $Q$ be the controllability and observability Grammians associated with an arbitrary minimal realization $\{A, B, C\}$ of a stable transfer function. Since $P$ and $Q$ are symmetric, there exist orthogonal transformations $U_c$ and $U_o$ such that
$$P = U_cS_cU_c^T \tag{5.15}$$
$$Q = U_oS_oU_o^T \tag{5.16}$$
where $S_c$, $S_o$ are diagonal matrices. The matrix
$$H = S_o^{1/2}U_o^TU_cS_c^{1/2} \tag{5.17}$$
has the singular value decomposition
$$H = U_HS_HV_H^T \tag{5.18}$$
Using these matrices, the balancing transformation is given by
$$T = U_oS_o^{-1/2}U_HS_H^{1/2} \tag{5.19}$$
The balanced realization is
$$\begin{bmatrix} \bar{A} & \bar{B} \\ \bar{C} & \bar{D} \end{bmatrix} = \begin{bmatrix} T^{-1}AT & T^{-1}B \\ CT & D \end{bmatrix} \tag{5.20}$$
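The balancing construction of Eqns. (5.15)-(5.20) translates line by line into code. The Python sketch below is my own illustration on an assumed minimal second-order system; it builds $T$ and verifies that the transformed Grammians are equal and diagonal:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Grammian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Grammian

Sc, Uc = np.linalg.eigh(P)                     # Eqn (5.15)
So, Uo = np.linalg.eigh(Q)                     # Eqn (5.16)

H = np.diag(np.sqrt(So)) @ Uo.T @ Uc @ np.diag(np.sqrt(Sc))   # Eqn (5.17)
UH, SH, VHt = svd(H)                                          # Eqn (5.18)

T = Uo @ np.diag(1.0 / np.sqrt(So)) @ UH @ np.diag(np.sqrt(SH))  # Eqn (5.19)

Pb = np.linalg.inv(T) @ P @ np.linalg.inv(T).T   # balanced Grammians
Qb = T.T @ Q @ T
```

Both transformed Grammians come out equal to $\mathrm{diag}(S_H)$, i.e., the Hankel singular values, confirming that $T$ balances the realization.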
$$\bar{A}\Sigma + \Sigma\bar{A}^* + \bar{B}\bar{B}^* = 0 \tag{5.21}$$
$$\bar{A}^*\Sigma + \Sigma\bar{A} + \bar{C}^*\bar{C} = 0 \tag{5.22}$$
Eqns. (5.21) and (5.22) can then be written in terms of their partitioned matrices. By virtue of the method adopted to construct $\Sigma$, the most energetic modes of the system are in $\Sigma_1$ and the less energetic ones are in $\Sigma_2$. Thus, the system with $\Sigma_1$ as its balanced Grammian is a good approximation of the original system. The procedure to obtain a reduced order model is thus:

Choose an appropriate order $r$ for the reduced order model and partition the system matrices accordingly.

Obtain the reduced order model as
$$G_r = \begin{bmatrix} A_{11} & B_1 \\ C_1 & D \end{bmatrix}$$
5.2 Model Reduction by Balanced Truncation 75
The algorithm discussed above has a basic disadvantage: though the responses of the reduced system get closer and closer to those of the original system, there is no certainty that they would match at steady state. This is because the upper bound on the approximation error depends on the ignored Hankel singular values:
$$\left\|C\left(j\omega I - A\right)^{-1}B - C_1\left(j\omega I - A_{11}\right)^{-1}B_1\right\|_\infty \le 2\left(\sigma_{r+1} + \cdots + \sigma_n\right) = 2\,\mathrm{Tr}\left(\Sigma_2\right)$$
This steady state error occurs because the ignored states, even if they do not contribute much to the dynamics of the system, do contribute to its steady state. Hence, they should be taken into account in deriving a reduced order model without steady state error. The steady state error can be eliminated by modifying the reduced order model using the concept of singular perturbations.
$$\begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}u \tag{5.29}$$
$$y = \begin{bmatrix} C_1 & C_2 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} + Du$$
Assuming that the states $z$ are fast and stable, they settle quickly, and hence it can safely be assumed that $\dot{z} = 0$. Hence,
$$z = -A_{22}^{-1}A_{21}x - A_{22}^{-1}B_2u \tag{5.32}$$
Now, using Eqn. (5.32) in Eqn. (5.29), the modified reduced order system is
$$G_r = \begin{bmatrix} A_{11} - A_{12}A_{22}^{-1}A_{21} & B_1 - A_{12}A_{22}^{-1}B_2 \\ C_1 - C_2A_{22}^{-1}A_{21} & D - C_2A_{22}^{-1}B_2 \end{bmatrix} \tag{5.33}$$
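The residualization of Eqn. (5.33) preserves the DC gain exactly, which is precisely how it removes the steady state error. A small Python sketch of this (my own illustration; the partitioned matrices are assumed example values with a fast, stable $A_{22}$):

```python
import numpy as np

# Partitioned system (5.29); z is the fast state to be residualized.
A11 = np.array([[-1.0]]); A12 = np.array([[0.5]])
A21 = np.array([[0.2]]);  A22 = np.array([[-20.0]])
B1 = np.array([[1.0]]);   B2 = np.array([[1.0]])
C1 = np.array([[1.0]]);   C2 = np.array([[0.3]])
D = np.array([[0.0]])

iA22 = np.linalg.inv(A22)
Ar = A11 - A12 @ iA22 @ A21      # Eqn (5.33)
Br = B1 - A12 @ iA22 @ B2
Cr = C1 - C2 @ iA22 @ A21
Dr = D - C2 @ iA22 @ B2

# DC gain C(-A)^{-1}B + D of the full model vs the reduced model
A = np.block([[A11, A12], [A21, A22]])
B = np.vstack([B1, B2]); C = np.hstack([C1, C2])
dc_full = C @ np.linalg.inv(-A) @ B + D
dc_red = Cr @ np.linalg.inv(-Ar) @ Br + Dr
```

The equality of the two DC gains follows from the block inversion formulae of Section 5.1.4, so it holds for any partition with $A_{22}$ invertible, not just this example.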
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} A_- & A_c \\ 0 & A_+ \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_- \\ B_+ \end{bmatrix}U \tag{5.34}$$
$$y = \begin{bmatrix} C_- & C_+ \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + DU$$
where $A_-$ has all its eigenvalues stable and $A_+$ has all its eigenvalues unstable. This system is then converted into two systems $S_1$ and $S_2$. Now, using the methods described above, a reduced order model can be obtained for the stable system $S_1$. Let the reduced system obtained from $S_1$ be represented as
$$S_1': \quad \dot{z} = A_rz + \begin{bmatrix} B_r^U & B_r^{x_2} \end{bmatrix}\begin{bmatrix} U \\ x_2 \end{bmatrix} \tag{5.37}$$
$$y = C_rz + \begin{bmatrix} D_r^U & D_r^{x_2} \end{bmatrix}\begin{bmatrix} U \\ x_2 \end{bmatrix}$$
The reduced order model for the original system can then be formulated as
$$\begin{bmatrix} \dot{z} \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} A_r & B_r^{x_2} \\ 0 & A_+ \end{bmatrix}\begin{bmatrix} z \\ x_2 \end{bmatrix} + \begin{bmatrix} B_r^U \\ B_+ \end{bmatrix}U \tag{5.38}$$
$$y = \begin{bmatrix} C_r & D_r^{x_2} \end{bmatrix}\begin{bmatrix} z \\ x_2 \end{bmatrix} + D_r^UU$$
$$A^*X + XA + Q = 0 \tag{5.39}$$
then, for an eigenpair $Av = \lambda v$, $v \ne 0$,
$$2\,\mathrm{Re}(\lambda)\left(v^*Xv\right) + v^*Qv = 0$$
Now, if $X > 0$ then $v^*Xv > 0$, and it is clear that $\mathrm{Re}(\lambda) \le 0$ if $Q \ge 0$ and $\mathrm{Re}(\lambda) < 0$ if $Q > 0$. Hence, (1) and (2) hold. To see (3), we assume $\mathrm{Re}(\lambda) \ge 0$. Then we must have $v^*Qv = 0$, i.e., $Qv = 0$. This implies that $\lambda$ is an unstable and unobservable mode, which contradicts the assumption that $(Q, A)$ is detectable.
$$AX + XB = C \tag{5.40}$$
where $A \in \mathbb{F}^{n \times n}$, $B \in \mathbb{F}^{m \times m}$ and $C \in \mathbb{F}^{n \times m}$ are given matrices. There exists a unique solution $X \in \mathbb{F}^{n \times m}$ if and only if $\lambda_i(A) + \lambda_j(B) \ne 0$ for all $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.

Proof: Eqn. (5.40) can be written as a linear matrix equation by using the Kronecker product:
$$\left(B^T \oplus A\right)\mathrm{vec}(X) = \mathrm{vec}(C)$$
This equation has a unique solution if and only if the Kronecker sum $B^T \oplus A$ is nonsingular. Since the eigenvalues of $B^T \oplus A$ have the form $\lambda_i(A) + \lambda_j\left(B^T\right) = \lambda_i(A) + \lambda_j(B)$, the conclusion follows.
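The Kronecker-product formulation in the proof is also a direct (if dense) way to solve small Sylvester equations. A Python sketch of this construction (my own illustration, with column-stacking `vec` and an assumed example satisfying the uniqueness condition):

```python
import numpy as np

def sylvester(A, B, C):
    """Solve A X + X B = C via the Kronecker-sum system
    (kron(I, A) + kron(B.T, I)) vec(X) = vec(C), column-stacking vec."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order="F"))
    return x.reshape(n, m, order="F")

# Eigenvalues of A are 2 and 3, that of B is 1: no pair sums to zero,
# so the solution is unique.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[1.0]])
C = np.array([[1.0], [2.0]])
X = sylvester(A, B, C)
```

For large problems one would use a dedicated solver (e.g. a Bartels-Stewart type routine) rather than forming the $nm \times nm$ Kronecker matrix, but the construction above mirrors the proof exactly.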
Theorem 1: Assume that $\Sigma_1$ and $\Sigma_2$ have no diagonal entries in common. Then both subsystems $(A_{ii}, B_i, C_i)$, $i = 1, 2$, are asymptotically stable.

Proof: It is sufficient to show that $A_{11}$ is asymptotically stable; the proof for the stability of $A_{22}$ is similar. Since $\Sigma$ is a balanced Grammian, by the properties of the SVD, $\Sigma_1$ can be assumed to be positive definite without loss of generality. Then it is obvious that $\mathrm{Re}\,\lambda_i(A_{11}) \le 0$ by Lemma 1. Assume that $A_{11}$ is not asymptotically stable; then there exists an eigenvalue at $j\omega$ for some $\omega$. Let $V$ be a basis matrix for $\ker\left(A_{11} - j\omega I\right)$. Then
$$\left(A_{11} - j\omega I\right)V = 0 \tag{5.41}$$
which gives
$$V^*\left(A_{11}^* + j\omega I\right) = 0$$
The partitioned Grammian equations yield
$$\left(A_{11} - j\omega I\right)\Sigma_1 + \Sigma_1\left(A_{11}^* + j\omega I\right) + B_1B_1^* = 0 \tag{5.42}$$
$$\Sigma_1\left(A_{11} - j\omega I\right) + \left(A_{11}^* + j\omega I\right)\Sigma_1 + C_1^*C_1 = 0 \tag{5.43}$$
Multiplication of Eqn. (5.43) from the right by $V$ and from the left by $V^*$ gives $V^*C_1^*C_1V = 0$, which is equivalent to
$$C_1V = 0$$
Multiplication of Eqn. (5.43) from the right by $V$ now gives
$$\left(A_{11}^* + j\omega I\right)\Sigma_1V = 0$$
Analogously, first multiply Eqn. (5.42) from the right by $\Sigma_1V$ and from the left by $V^*\Sigma_1$ to obtain
$$B_1^*\Sigma_1V = 0$$
Then multiply Eqn. (5.42) from the right by $\Sigma_1V$ to get
$$\left(A_{11} - j\omega I\right)\Sigma_1^2V = 0$$
It follows that the columns of $\Sigma_1^2V$ are in $\ker\left(A_{11} - j\omega I\right)$. Therefore, there exists a matrix $\bar{\Sigma}_1$ such that
$$\Sigma_1^2V = V\bar{\Sigma}_1^2$$
Since $\bar{\Sigma}_1^2$ is the restriction of $\Sigma_1^2$ to the space spanned by $V$, it is possible to choose $V$ such that $\bar{\Sigma}_1^2$ is diagonal. It is then also possible to choose $\bar{\Sigma}_1$ diagonal and such that the diagonal entries of $\bar{\Sigma}_1$ are a subset of the diagonal entries of $\Sigma_1$. Multiply Eqn. (5.25) from the right by $\Sigma_1V$ and Eqn. (5.26) by $V$ to get
$$A_{21}\Sigma_1^2V + \Sigma_2A_{12}^*\Sigma_1V = 0$$
$$\Sigma_2A_{21}V + A_{12}^*\Sigma_1V = 0$$
which gives
$$A_{21}V\bar{\Sigma}_1^2 = \Sigma_2^2A_{21}V$$
This is a Sylvester equation (refer [34]) in $A_{21}V$. Because $\bar{\Sigma}_1^2$ and $\Sigma_2^2$ have no diagonal entries in common, it follows from Lemma 2 that
$$A_{21}V = 0 \tag{5.44}$$
is the unique solution. Now Eqns. (5.44) and (5.41) imply that
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} V \\ 0 \end{bmatrix} = j\omega\begin{bmatrix} V \\ 0 \end{bmatrix}$$
which means that the $A$-matrix of the original system has an eigenvalue at $j\omega$. This contradicts the fact that the original system is asymptotically stable. Therefore, $A_{11}$ must be asymptotically stable.
$$\left\|G(s) - G_{N-1}(s)\right\|_\infty = 2\sigma_N$$
Proof: The stability of $G_r$ follows from Theorem 1. We shall now give a direct proof of the error bound for the case $s_i = 1$ for all $i$ (each Hankel singular value of multiplicity one); hence we assume $s_i = 1$ and $N = n$. An alternative proof will be given later where the singular values $\sigma_i$ are not assumed to be distinct.
Let
$$\Phi(s) := \left(sI - A_{11}\right)^{-1}$$
$$\tilde{\Delta}(s) := sI - A_{22} - A_{21}\Phi(s)A_{12}$$
$$\tilde{B}(s) := A_{21}\Phi(s)B_1 + B_2$$
$$\tilde{C}(s) := C_1\Phi(s)A_{12} + C_2$$
Then, using the partitioned matrix results of Section 5.1.4,
$$\bar{\sigma}\left[G(j\omega) - G_r(j\omega)\right] = \lambda_{\max}^{1/2}\left[\tilde{\Delta}^{-1}(j\omega)\tilde{B}(j\omega)\tilde{B}^*(j\omega)\tilde{\Delta}^{-*}(j\omega)\tilde{C}^*(j\omega)\tilde{C}(j\omega)\right] \tag{5.45}$$
Expressions for $\tilde{B}(j\omega)\tilde{B}^*(j\omega)$ and $\tilde{C}^*(j\omega)\tilde{C}(j\omega)$ are obtained by using the partitioned form of the internally balanced Grammian equations, Eqns. (5.23)-(5.28). With $\Sigma_2 = \sigma$, the discarded Hankel singular value, these give
$$\tilde{B}(j\omega)\tilde{B}^*(j\omega) = \sigma\left[\tilde{\Delta}(j\omega) + \tilde{\Delta}^*(j\omega)\right] = \tilde{C}^*(j\omega)\tilde{C}(j\omega)$$
Substituting these expressions in Eqn. (5.45) yields
$$\bar{\sigma}\left[G(j\omega) - G_r(j\omega)\right] = \sigma\,\lambda_{\max}^{1/2}\left[\left(I + \tilde{\Delta}^{-1}(j\omega)\tilde{\Delta}^*(j\omega)\right)\left(I + \tilde{\Delta}^{-*}(j\omega)\tilde{\Delta}(j\omega)\right)\right] \le 2\sigma$$
Let $E_k(s) = G_{k+1}(s) - G_k(s)$ for $k = r, r+1, \ldots, N-1$, and let $G_N(s) = G(s)$. Since each $G_k(s)$ is a reduced order model obtained from the internally balanced realization of $G_{k+1}(s)$, the one-step reduction bound of Eqn. (5.47) holds for each $E_k(s)$. Noting that
$$G(s) - G_r(s) = \sum_{k=r}^{N-1}E_k(s)$$
the overall bound follows. In the frequency-weighted generalization, the reduced model $G_r$ is instead chosen such that
$$\left\|W_o\left(G - G_r\right)W_i\right\|_\infty$$
is made as small as possible. Assume that $G$, $W_i$ and $W_o$ have the following state space realizations:
$$G = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix}, \qquad W_i = \begin{bmatrix} A_i & B_i \\ C_i & D_i \end{bmatrix}, \qquad W_o = \begin{bmatrix} A_o & B_o \\ C_o & D_o \end{bmatrix}$$
with $A \in \mathbb{R}^{n \times n}$. Note that there is no loss of generality in assuming $D = G(\infty) = 0$, since otherwise it can be eliminated by replacing $G_r$ with $D + G_r$.
Now the state space realization for the weighted transfer matrix is given by
$$W_oGW_i = \begin{bmatrix}
A & 0 & BC_i & BD_i \\
B_oC & A_o & 0 & 0 \\
0 & 0 & A_i & B_i \\
\hline
D_oC & C_o & 0 & 0
\end{bmatrix} =: \begin{bmatrix} \bar{A} & \bar{B} \\ \bar{C} & 0 \end{bmatrix}$$
$$\bar{A}\bar{P} + \bar{P}\bar{A}^* + \bar{B}\bar{B}^* = 0 \tag{5.48}$$
$$\bar{Q}\bar{A} + \bar{A}^*\bar{Q} + \bar{C}^*\bar{C} = 0 \tag{5.49}$$
Then the input weighted Grammian $P$ and the output weighted Grammian $Q$ are defined by
$$P := \begin{bmatrix} I_n & 0 \end{bmatrix}\bar{P}\begin{bmatrix} I_n \\ 0 \end{bmatrix}, \qquad Q := \begin{bmatrix} I_n & 0 \end{bmatrix}\bar{Q}\begin{bmatrix} I_n \\ 0 \end{bmatrix}$$
It can be easily shown that P and Q satisfy the following lower order equations:

[A BCi; 0 Ai] [P P12; P12* P22] + [P P12; P12* P22] [A BCi; 0 Ai]* + [BDi; Bi] [BDi; Bi]* = 0

[Q Q12; Q12* Q22] [A 0; BoC Ao] + [A 0; BoC Ao]* [Q Q12; Q12* Q22] + [C*Do*; Co*] [DoC  Co] = 0

In the case Wi = I, P can be obtained from

P A* + A P + B B* = 0      (5.50)

while in the case Wo = I, Q can be obtained from

Q A + A* Q + C* C = 0      (5.51)
Now let T be a nonsingular matrix such that

T P T* = (T*)^{-1} Q T^{-1} = Σ = diag(Σ1, Σ2)
Remark 5 Unfortunately, there is in general no known a priori error bound for the approximation error, and the reduced model Gr is not guaranteed to be stable either.
The denominator of the reduced model is fixed beforehand, e.g. by dominant pole retention or by the Routh approximation. The numerator of the reduced order model is then obtained by minimizing the step/impulse response error along with a steady state constraint. The step/impulse error is expressed directly in terms of the model parameters by evaluating certain integrals, which require the inversion of matrices of order r and (n + r). The impulse/step response error minimization problem is thereby converted into a problem of solving simultaneous linear equations, which are solved to obtain the model parameters uniquely.

Consider the nth order system
H(s) = (a0 + a1 s + ⋯ + a_{n−1} s^{n−1}) / (b0 + b1 s + ⋯ + b_{n−1} s^{n−1} + b_n s^n)      (5.52)

and the rth order (r < n) reduced order model of the system (5.52), with unknown coefficients, given by

Hr(s) = (c0 + c1 s + ⋯ + c_{r−1} s^{r−1}) / (d0 + d1 s + ⋯ + d_{r−1} s^{r−1} + d_r s^r)      (5.53)
The denominator coefficients of Hr (s) are obtained by dominant pole retention or Routh
approximation methods. The numerator is obtained by minimizing the impulse response
error while satisfying the steady state constraint.
To match the steady state values of the system and the model,

c0 / d0 = a0 / b0
Let h(t) and hr(t) be the impulse responses of the system and the reduced model; the impulse response error is then defined as

e ≜ ‖h(t) − hr(t)‖₂² = ∫₀^∞ [h(t) − hr(t)]² dt
  = ∫₀^∞ h²(t) dt + ∫₀^∞ hr²(t) dt − 2 ∫₀^∞ h(t) hr(t) dt      (5.54)
or

e = (1/2πj) ∫_{−j∞}^{j∞} H(s)H(−s) ds + (1/2πj) ∫_{−j∞}^{j∞} Hr(s)Hr(−s) ds − (2/2πj) ∫_{−j∞}^{j∞} H(s)Hr(−s) ds      (5.55)

The integrals in Eqn. (5.55) can be evaluated by a process given in [36] in terms of the coefficients ai, bi of H(s) and ci, di of Hr(s). A table providing the values of the definite integrals in terms of the coefficients has also been provided in [37]. Only the coefficients ci, i = 1, 2, …, r − 1 are unknown at this stage. The integral (1/2πj) ∫_{−j∞}^{j∞} H(s)Hr(−s) ds can also be expressed in terms of the ci by extending the approach in [36]; this is discussed in [38].
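As a numerical cross-check of Eqn. (5.54), the error integral can also be evaluated in the time domain. The sketch below (an assumed example with numpy/scipy, not the tabular method of [36]–[38]) takes H(s) = 2/(s² + 3s + 2), retains the dominant pole s = −1 for the reduced denominator, and applies the steady-state constraint c0/d0 = a0/b0, giving Hr(s) = 1/(s + 1).

```python
import numpy as np
from scipy import signal

# H(s) = 2/(s^2+3s+2); dominant-pole denominator s+1; steady state
# constraint c0/d0 = a0/b0 = 2/2 = 1 gives Hr(s) = 1/(s+1).
t = np.linspace(0.0, 20.0, 20001)
_, h  = signal.impulse(([2.0], [1.0, 3.0, 2.0]), T=t)
_, hr = signal.impulse(([1.0], [1.0, 1.0]), T=t)

# trapezoidal evaluation of e = integral (h - hr)^2 dt, Eqn. (5.54)
d = (h - hr) ** 2
e = float(np.sum((d[1:] + d[:-1]) * np.diff(t)) / 2.0)
# analytically h - hr = e^{-t} - 2e^{-2t}, so e = 1/2 - 4/3 + 1 = 1/6
```

Here the closed-form value 1/6 follows from the partial-fraction impulse responses, so the numerical integral can be verified directly.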
Thus,

e = K + A_{r−1} − [E_{n−1} + D_{r−1}]

where the components A_{r−1}, E_{n−1}, D_{r−1} are computed by solving the following equations.
The value of A_{r−1} can be found from the solution of

B A = C      (5.56)

where

B = [ d0    0     0    ⋯   0
      d2   −d1    d0   ⋯   0
      d4   −d3    d2   ⋯   0
      ⋮                ⋱   ⋮
      0     0     0    ⋯   (−1)^{r−1} d_{r−1} ]

i.e. B_{ij} = (−1)^j d_{2i−j} (i, j = 0, 1, …, r − 1, with d_k = 0 for k < 0 or k > r), and

A = [A0, A1, …, A_{r−2}, A_{r−1}]ᵀ,   C = [C0, C2, …, C_{2r−4}, C_{2r−2}]ᵀ
where

2C_m = Σ_{k=0}^{m} (−1)^k c_k c_{m−k},            0 ≤ m ≤ r − 1
     = Σ_{k=m−r+1}^{r−1} (−1)^k c_k c_{m−k},      r ≤ m ≤ 2r − 2
Here, the vectors A and C are unknown. By inverting B, an expression for A_{r−1} can be developed, i.e.,

A_{r−1} = [Last Row of B^{-1}] C      (5.57)
Similarly, the third term in the expression for e, (E_{n−1} + D_{r−1}), can be obtained by solving

N [E0, E1, …, E_{n−1}, D0, …, D_{r−1}]ᵀ = [P0, P1, …, P_{n+r−1}]ᵀ      (5.58)

where N is the (n + r) × (n + r) matrix

N = [ d0    0    ⋯   0            b0   0    ⋯   0
     −d1    d0   ⋯   0            b1   b0   ⋯   0
      d2   −d1   ⋯   0            b2   b1   ⋯   0
      ⋮     ⋮    ⋱   ⋮            ⋮    ⋮    ⋱   ⋮
      0     0    ⋯   (−1)^r d_r   0    0    ⋯   b_n ]

whose first n columns are successive downward shifts of (d0, −d1, d2, …, (−1)^r d_r)ᵀ and whose last r columns are successive downward shifts of (b0, b1, …, b_n)ᵀ.
5.3 Model Reduction by Impulse/Step Error Minimization
where

P_m = Σ_{k=0}^{m} (−1)^{m−k} a_k c_{m−k},            0 ≤ m ≤ r − 1
    = Σ_{k=m−r+1}^{m} (−1)^{m−k} a_k c_{m−k},        r ≤ m ≤ n − 1
    = Σ_{k=m−r+1}^{n−1} (−1)^{m−k} a_k c_{m−k},      n ≤ m ≤ n + r − 1
Combining the terms,

e = K + [Last Row of B^{-1}] C − [(nth row of N^{-1}) + ((n + r)th row of N^{-1})] P      (5.60)
The minimization of e with respect to the ci yields (r − 1) linear equations, which can be solved to obtain a unique solution; thus ci, i = 1, 2, …, r − 1 can be obtained. Differentiating to find the minimizers,

∂e/∂ci = 0,  i = 1, 2, …, r − 1

i.e.

∂A_{r−1}/∂ci − ∂[E_{n−1} + D_{r−1}]/∂ci = 0
Now,

A_{r−1} = [β0 β1 ⋯ β_{r−1}] [C0, C2, …, C_{2r−2}]ᵀ      (5.61)

where [β0 β1 ⋯ β_{r−1}] is the last row of B^{-1} and the entries C_{2i} follow from their definition above (C0 = 0.5c0², C2 = c0c2 − 0.5c1², …, C_{2r−2} = ±0.5c_{r−1}²); symbolically,

A_{r−1} = β q      (5.62)
The last entry of the column vector in Eqn. (5.61) involves c_{r−1}² when r is odd. Differentiating, we get

∂A_{r−1}/∂cj = Σ_{i=0}^{(r−1)/2} β_{j+i} c_{2i}       for even j      (5.63)
             = Σ_{i=1}^{(r−1)/2} β_{j+i} c_{2i−1}     for odd j

The entries qk of the corresponding coefficient vector have the following structure:

1. For r even:
   (a) for i even, the terms from q0 to q_{i−2} are zero, the terms from q_i to q_{r+i−2} are cj, j = 0, 2, …, r − 2, and the terms from q_{r+i} to q_{2r−2} are zero;
   (b) for i odd, the terms from q0 to q_{i−1} are zero, the terms from q_{i+1} to q_{r+i−1} are −cj, j = 1, 3, …, r − 1, and the terms from q_{r+i+1} to q_{2r−2} are zero.

2. For r odd:
   (a) for i even, the terms from q0 to q_{i−2} are zero, the terms from q_i to q_{r+i−3} are cj, j = 0, 2, …, r − 1, and the terms from q_{r+i−1} to q_{2r−2} are zero;
   (b) for i odd, the terms from q0 to q_{i−1} are zero, the terms from q_{i+1} to q_{r+i−2} are −cj, j = 1, 3, …, r − 2, and the terms from q_{r+i} to q_{2r−2} are zero.
Again,

[E_{n−1} + D_{r−1}] = [α0 α1 α2 ⋯ α_{n+r−1}] [P0, P1, P2, …, P_{n+r−1}]ᵀ

where

αi = pi + ri,  i = 0, 1, …, n + r − 1      (5.64)

and pi and ri are the elements of the nth and the (n + r)th rows of N^{-1}, respectively. Thus,

[E_{n−1} + D_{r−1}] = [α0 α1 ⋯ α_{n+r−1}] [a0c0, a1c0 − a0c1, a2c0 − a1c1 + a0c2, …, ±(a_{n−1}c_{r−2} − a_{n−2}c_{r−1}), ±a_{n−1}c_{r−1}, 0]ᵀ      (5.65)
0
Thus,

∂[E_{n−1} + D_{r−1}]/∂cj = (−1)^j Σ_{i=0}^{n−1} α_{i+j} a_i,  j = 1, …, r − 1      (5.66)
or symbolically

M C = Γ      (5.69)

where M is a known ((r − 1) × (r − 1)) matrix and Γ is a known ((r − 1) × 1) vector. Thus, C can be uniquely obtained from the above set of equations; for the existence of a unique solution, the matrix M must be non-singular.
For step response error minimization, the error is defined as

e = ‖C(t) − Cr(t)‖²
  = ‖g(t) − gr(t)‖²      (5.70)

where g(t) = C(∞) − C(t) and gr(t) = Cr(∞) − Cr(t) are the transient parts of the step responses. To match the steady state values of the system and the model,

Cr(∞) = c0/d0 = C(∞) = a0/b0

and the transient part of the model response corresponds to

Gr(s) = (c̄0 + c̄1 s + c̄2 s² + ⋯ + c̄_{r−1} s^{r−1}) / (d0 + d1 s + d2 s² + ⋯ + dr s^r)

where

ā_i = (a0/b0) b_{i+1} − a_{i+1},  i = 0, 1, …, n − 2,   ā_{n−1} = (a0/b0) b_n      (5.71)

and

c̄_i = (c0/d0) d_{i+1} − c_{i+1},  i = 0, 1, …, r − 2,   c̄_{r−1} = (c0/d0) d_r      (5.72)
Then, the integrals can be evaluated as in the impulse error case, giving

e = K̄ + Ā_{r−1} − [Ē_{n−1} + D̄_{r−1}]

Here c̄_{r−1} is completely known from Eqn. (5.72). Thus, the minimization of e with respect to c̄_i, i = 0, 1, …, r − 2, via

∂e/∂c̄i = 0,  i = 0, 1, 2, …, r − 2

yields (r − 1) linear equations, which can be obtained with reasonable effort by applying a procedure similar to that for the impulse error minimization. These equations can be represented in matrix form,
where the ((r − 1) × (r − 1)) coefficient matrix is formed from the entries βi (the last row of B^{-1}) together with the αi, the unknown vector is [c̄0, c̄1, …, c̄_{r−2}]ᵀ, and the right-hand side collects the known quantities Σ_{i=0}^{n−1} α_{j+i} āi together with the contributions of the known coefficient c̄_{r−1}. The detailed entry pattern differs slightly between r even, Eqn. (5.73), and r odd, Eqn. (5.74).
Symbolically,

R C̄ = T      (5.75)

where R is a known ((r − 1) × (r − 1)) matrix and T is a known ((r − 1) × 1) vector. Thus C̄ can be uniquely found. To recover the model numerator coefficients ci, i = 1, …, r − 1, the following matrix equation can be used:
[c1, c2, c3, …, c_{r−2}, c_{r−1}]ᵀ = (c0/d0) [d1, d2, d3, …, d_{r−2}, d_{r−1}]ᵀ − [c̄0, c̄1, c̄2, …, c̄_{r−3}, c̄_{r−2}]ᵀ      (5.76)

Thus the reduced order model can be obtained.
5.4 Optimal Model Order Reduction Using Wilson's Technique

Consider the system

ẋ = Ax + Bu      (5.77)
y = Hx

where x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ. The problem is to find an rth order reduced state-space representation, with p ≤ r < n, of the form

ẋr = Ar xr + Br u      (5.78)
yr = Hr xr

To synthesize the reduced order model given by Eqn. (5.78), a functional of the reduction error e(t) = y(t) − yr(t) will be minimized, given as

J = ∫₀^∞ eᵀ(t) e(t) dt
The cost can be evaluated through the Lyapunov equations

Fᵀ P + P F + M = 0      (5.82)
F R + R Fᵀ + S = 0      (5.83)

where

F = [A 0; 0 Ar],   M = [HᵀH  −HᵀHr; −HrᵀH  HrᵀHr],   S = Z(0)Zᵀ(0) = [B N Bᵀ  B N Brᵀ; Br N Bᵀ  Br N Brᵀ],   J = Tr(P S)

and N is an (m × m) positive definite, symmetric noise intensity matrix. It is a well known fact that R is the solution of the linear matrix equation (5.83).
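The Lyapunov-based cost evaluation can be sketched numerically. The example below (an illustration with numpy/scipy; the 2nd order model, its 1-state modal truncation, and N = I are assumptions) checks the identity J = Tr(PS) = Tr(MR).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov as lyap

# full model (illustrative) and a 1-state modal truncation
A  = np.array([[-1.0, 0.0], [0.0, -5.0]]); B = np.array([[1.0], [1.0]])
H  = np.array([[1.0, 1.0]])
Ar = np.array([[-1.0]]); Br = np.array([[1.0]]); Hr = np.array([[1.0]])

F = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), Ar]])
Hbar = np.hstack([H, -Hr])            # e = y - yr = Hbar [x; xr]
M = Hbar.T @ Hbar
Bbar = np.vstack([B, Br])
S = Bbar @ Bbar.T                     # N = I (assumed)

P = lyap(F.T, -M)                     # F^T P + P F + M = 0, Eqn. (5.82)
R = lyap(F, -S)                       # F R + R F^T + S = 0, Eqn. (5.83)
J1 = np.trace(P @ S)
J2 = np.trace(M @ R)
# here the error impulse response is e(t) = e^{-5t}, so J = 1/10
```

For this system the truncation cancels the slow mode exactly, leaving e(t) = e^{−5t} and hence J = 0.1, which both trace formulas reproduce.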
Now, consider the problem of finding the derivative of the cost function with respect to a parameter θ appearing in the elements of F and M:

∂J/∂θ = Tr[(∂P/∂θ) S] + Tr[P (∂S/∂θ)]      (5.84)

(Evaluating J with a white noise input gives, equivalently, J = Tr(MR) = Tr(RM).) Substituting for S from Eqn. (5.83),

∂J/∂θ = −Tr[(∂P/∂θ) F R] − Tr[(∂P/∂θ) R Fᵀ] + Tr[P (∂S/∂θ)]
      = −2 Tr[(∂P/∂θ) F R] + Tr[P (∂S/∂θ)]      (5.85)

Differentiating Eqn. (5.82) with respect to θ,

(∂P/∂θ) F + P (∂F/∂θ) + (∂F/∂θ)ᵀ P + Fᵀ (∂P/∂θ) + ∂M/∂θ = 0      (5.86)

Postmultiplying by R and taking the trace,

0 = Tr[(∂P/∂θ) F R] + Tr[P (∂F/∂θ) R] + Tr[Fᵀ (∂P/∂θ) R] + Tr[(∂F/∂θ)ᵀ P R] + Tr[(∂M/∂θ) R]

so that

2 Tr[(∂P/∂θ) F R] = −2 Tr[P (∂F/∂θ) R] − Tr[(∂M/∂θ) R]      (5.87)

which, combined with Eqn. (5.85), gives

∂J/∂θ = 2 Tr[P (∂F/∂θ) R] + Tr[(∂M/∂θ) R] + Tr[P (∂S/∂θ)]      (5.88)
It has been shown, with considerable elaboration, in Wilson's paper that using Eqn. (5.88) for the derivative of the cost function with respect to the parameters in F, M and S, the following conditions on Ar, Br and Hr can be found:

1. Equating ∂J/∂br = 0, where br is an element of Br:

Br = −P22^{-1} P12ᵀ B      (5.89)
2. Equating ∂J/∂hr = 0, where hr is an element of Hr:

Hr = H R12 R22^{-1}      (5.90)

3. Equating ∂J/∂ar = 0, where ar is an element of Ar:

Ar = −P22^{-1} P12ᵀ A R12 R22^{-1}      (5.91)

Defining Σ1 = −P22^{-1} P12ᵀ and Σ2 = R12 R22^{-1}, we get the reduced model

ẋr = Σ1 A Σ2 xr + Σ1 B u      (5.92)
yr = H Σ2 xr
The solution equations, Eqns. (5.89)–(5.91), contain four unknown matrices. The following relation is available to complete the four relations for the four unknown matrices:

R12ᵀ P12 + R22 P22 = 0

which implies Σ1 Σ2 = I. Eqns. (5.93) and (5.94) are Sylvester equations in R12 and R22; solving them leads to the unique solution of Hr using Eqn. (5.90). A more detailed discussion is provided in [42].
Chapter 6

6.1 Introduction

The design of control systems by pole assignment has two main aspects: the choice of pole locations to meet the design specifications, and the design of the controller to achieve these desired pole locations. In dealing with pole assignment it should be remembered that it deals basically with the design of the system transient response, a very specific quantity. Since this is but one aspect of system design, pole assignment is in some ways a restricted, specialized design technique in the case of SISO systems. On the other hand, in multivariable system design it is a very direct way of bringing out and exploiting the extra degrees of freedom inherent in multi-input systems, and it is likely that a complete solution to the pole assignment problem will provide a vehicle for a much broader design than purely transient response.
Consider the SISO transfer function

G(s) = k (s − z1)(s − z2) ⋯ (s − zm) / [(s − p1)(s − p2) ⋯ (s − pn)],  n ≥ m
     = k1/(s − p1) + k2/(s − p2) + ⋯ + kn/(s − pn)      (6.1)

It is uniquely defined by its poles s = p1, p2, …, pn, its zeros s = z1, z2, …, zm, and the gain multiplier k. The unit impulse response

g(t) = k1 e^{p1 t} + k2 e^{p2 t} + ⋯ + kn e^{pn t}

is closely linked to the pole-zero configuration. It is this link which is the main attraction of the technique: it allows the time response to be interpreted from the pole-zero locations and, conversely, it indicates where the poles and zeros should be assigned to achieve a desired time response and other system specifications.
Single pole

G(s) = 1/(s + a),  g(t) = e^{−at}

As the pole is moved to the left, the response speeds up; the time constant is 1/a, and the response is substantially complete in five time constants.
Two poles

In the normalized form

G(s) = ωn² / (s² + 2ζωn s + ωn²),  0 < ζ ≤ 1

G(s) has a complex-conjugate pair of poles lying on a semicircle of radius ωn, with real parts −ζωn and imaginary parts ±ωn√(1 − ζ²) = ±ωd, the damped oscillation frequency of the response. The time to the first peak of the step response is π/ωd. Thus, moving out along a radius from the origin speeds up the response with its form unchanged, while moving away from the negative real axis reduces the damping. For ζ = 1 there is a double pole at s = −ωn and the response is critically damped. For ζ > 1 there are two real poles, one moving toward s = −∞ and the other toward s = 0 as ζ is increased.
An additional real pole

G(s) = ωn² / [(s² + 2ζωn s + ωn²)(s + a)]

When the pole at s = −a is well to the left, its residue is small and its contribution to the response is over in a short time, so its effect is negligible. As a decreases, the effect is stabilizing; the response becomes aperiodic when a is small compared with ωn, and increasingly sluggish.
An additional zero

G(s) = ωn² (1 + s/a) / (s² + 2ζωn s + ωn²)

The addition of a zero adds the derivative of the original system response scaled by 1/a, having a destabilizing effect as a is reduced.
Dipoles

Poles and zeros close together are called dipoles. They effectively cancel: the pole has negligible residue. In the same way, a cluster of P poles and Z zeros can be replaced by (P − Z) poles at the centre of gravity of the cluster.

6.3 Pole Assignment in Single Input Systems

Consider the single-input system

ẋ = Ax + bu      (6.2)
y = Cx
1. Transform the system representation (A, b) into its controllable canonical form (Ac, bc) using a transformation, say T z = x. The transformation matrix is T = P M (from [44]), where

P = [b  Ab  ⋯  A^{n−1}b]

and

M = [ a1       a2   ⋯  a_{n−1}  1
      a2       a3   ⋯  1        0
      ⋮        ⋮        ⋮       ⋮
      a_{n−1}  1    ⋯  0        0
      1        0    ⋯  0        0 ]
2. Compute the open loop characteristic polynomial and the desired closed loop characteristic polynomial of the system, Eqns. (6.5) and (6.6) respectively.

3. Equating the coefficients of the powers of s in Eqns. (6.5) and (6.6), the values of fi, i = 1, 2, …, n, and hence F, can be found.
4. The state feedback gain F′, that gives the desired closed loop characteristic equation when applied to the (A, b) system representation, can be calculated as

F′ = F T^{-1}      (6.7)
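The four steps above can be sketched as follows. This is an illustrative numpy implementation (the helper name `place_siso`, the example system, and the convention u = −F′x are assumptions, not from the text).

```python
import numpy as np

def place_siso(A, b, desired_poles):
    # Steps 1-4: build T = P M, match coefficients, return F' = F T^{-1}
    n = A.shape[0]
    P = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    a = np.poly(A)[::-1][:-1]                  # open loop a0 .. a_{n-1}
    M = np.zeros((n, n))
    for i in range(n):
        M[i, n - i - 1] = 1.0                  # ones on the anti-diagonal
        for j in range(n - i - 1):
            M[i, j] = a[i + j + 1]
    T = P @ M
    alpha = np.poly(desired_poles)[::-1][:-1]  # desired a0 .. a_{n-1}
    F = (alpha - a).reshape(1, -1)             # gain in canonical coordinates
    return F @ np.linalg.inv(T)                # F' = F T^{-1}, with u = -F'x

# illustrative example
A = np.array([[0.0, 1.0], [-2.0, -3.0]]); b = np.array([[0.0], [1.0]])
Fp = place_siso(A, b, [-4.0, -5.0])
cl = np.sort(np.linalg.eigvals(A - b @ Fp).real)
```

For this example the open-loop polynomial is s² + 3s + 2, so the canonical transformation happens to be the identity and the closed loop poles land at −4 and −5.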
Using the relationship between u and x from Eqn. (6.2), and incorporating it into Eqn. (6.9) with a constant (but unassigned) co-state λ,

u = arg min_u ∫₀^∞ [xᵀQx + uᵀRu + λᵀ(ẋ − Ax − Bu)] dt      (6.10)

Applying the Euler-Lagrange conditions to L = min ∫ H(p, ṗ) dt at the optimum (p*, ṗ*),

∂H/∂p |_{(p*,ṗ*)} = (d/dt) ∂H/∂ṗ |_{(p*,ṗ*)}

we obtain

2Qx − Aᵀλ − λ̇ = 0
2Ru − Bᵀλ = 0

which, with λ = −2Px, leads to

u = −R^{-1} Bᵀ P x      (6.11)

where P is the solution of the algebraic Riccati equation

Aᵀ P + P A − P B R^{-1} Bᵀ P + Q = 0      (6.12)
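Eqns. (6.11)–(6.12) map directly onto scipy's Riccati solver. The sketch below (the double-integrator plant and unit weights are assumed examples) solves (6.12) and verifies the residual and closed-loop stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]]); B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)       # Eqn. (6.12)
K = np.linalg.solve(R, B.T @ P)            # u = -Kx = -R^{-1} B^T P x, Eqn. (6.11)
Acl = A - B @ K

# residual of A^T P + P A - P B R^{-1} B^T P + Q
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The residual should vanish to solver precision, and A − BK must be Hurwitz.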
For a prescribed degree of stability α, the optimal law for the shifted variables x̃ = e^{αt}x, ũ = e^{αt}u is

ũ = −R^{-1} Bᵀ P′ x̃

i.e.

u = −R^{-1} Bᵀ P′ x      (6.18)

Thus, the poles of the closed loop system can be ensured to lie to the left of the line Re(s) = −α by finding the optimal controller for the auxiliary system

x̃̇ = (A + αI) x̃ + B ũ
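The α-shift trick can be checked in a few lines; the system, α = 2, and the weights below are assumed for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

alpha = 2.0                       # prescribed degree of stability (assumed)
A = np.array([[0.0, 1.0], [0.0, -1.0]]); B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]])

# solve the ordinary LQR problem for the auxiliary system (A + alpha*I, B)
P = solve_continuous_are(A + alpha * np.eye(2), B, Q, R)
K = np.linalg.solve(R, B.T @ P)
poles = np.linalg.eigvals(A - B @ K)   # applied to the ORIGINAL plant
```

Since A + αI − BK is Hurwitz, every closed-loop pole of A − BK lies strictly left of Re(s) = −α.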
Consider the transfer function form of the state space representation (6.2); the closed loop characteristic function under the feedback gain K is

T(s) = 1 + Kᵀ G(s)

It can be verified that the above transformations translate the cone bounded by the lines

tan^{-1}{Re(s)/Im(s)} = ±θ

in the s plane to the imaginary axis in the complex z plane. Thus, checking the Hurwitz stability of H(z) ensures that the closed loop poles of the system lie within the stable cone with an included angle of 2θ.
For the example, let the desired cone angle be θ = 60°.

SOLUTION: Let

h(s) = s² + c1 s + c0
H(z) = z⁴ + a3 z³ + a2 z² + a1 z + a0
a0 = c0²      (6.22)
a1 = 2 c0 c1 cos(π/2 − θ) = √3 c0 c1      (6.23)
a2 = c1² + 2 c0 cos(π − 2θ) = c1² + c0      (6.24)
a3 = 2 c1 cos(π/2 − θ) = √3 c1      (6.25)
0 < a1 a2 a3 − a1² − a0 a3²      (6.26)

The last inequality, obtained from the Hurwitz determinant being positive, can be represented in terms of the ci as

c1² c0 (c1² − c0) ≥ 0

Choosing c1² = 3/2 and c0 = 3/2 satisfies this condition and makes H(z) Hurwitz. Now,

h(s) = s² + 1.225 s + 1.5
For R = 1 and Q = d dᵀ = [d1²  d1d2; d1d2  d2²], we get the equations

2 q22 = 3/2
1 + q11 = 9/4

K is then the stabilizing state feedback gain that provides closed loop poles with damping ratio no less than cos(60°) = 0.5.
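The chosen h(s) can be checked directly: its roots should lie on the 60° cone, i.e. have damping ratio cos 60° = 0.5 (a small numpy check, not part of the text's derivation).

```python
import numpy as np

c1, c0 = np.sqrt(1.5), 1.5            # c1^2 = 3/2, c0 = 3/2
roots = np.roots([1.0, c1, c0])       # h(s) = s^2 + c1 s + c0
# damping ratio of the complex pair: zeta = -Re(s)/|s|
zeta = -roots[0].real / abs(roots[0])
```

Both roots have magnitude √1.5 and real part −√1.5/2, so ζ = 0.5 exactly, placing the poles on the boundary of the 60° cone.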
If the system had as many independent outputs as states, the output feedback would be a state feedback in disguise, and thus of no real significance. Since in most cases the number of outputs is less than the system order, static output feedback would not be a viable option for single input systems. Hence the concept of dynamic output feedback comes into the picture.
1. Specifying the desired zeros, poles and numerator constant of the desired closed-loop function C(s)/R(s).

2. Solving for the required cascade compensator transfer function Gc(s) for the plant Gp(s) using the formula

Gc(s) = C(s) / ([R(s) − C(s)] Gp(s))      (6.27)

This method is able to match both the zeros and the poles of the system using a dynamic compensator. However, the order of the compensator would be large: if the system has n poles and m zeros, the compensator would be of order (m + n). Moreover, since the emphasis here is on pole placement, we look instead into a dynamic output feedback method that places the 2n poles of the closed loop system. The procedure is as follows.
Let the plant have transfer function Gp(s) = Np(s)/Dp(s) and the dynamic compensator Gc(s) = Nc(s)/Dc(s). The closed loop transfer function then would be

G(s) = Gp(s) / (1 + Gp(s)Gc(s)) = Np(s)Dc(s) / (Np(s)Nc(s) + Dp(s)Dc(s)) = C(s)/R(s)

For the compensator to match the 2n poles of the closed loop system, the denominator of the closed loop system should match R(s). Matching the coefficients of the two polynomials,
one would get 2n linear equations in 2n unknowns, which can be solved to obtain the numerator and denominator coefficients of the compensator.

If the transfer functions are of the form

Gp(s) = (Σ_{i=0}^{n−1} ai s^i) / (s^n + Σ_{i=0}^{n−1} bi s^i)

Gc(s) = (Σ_{i=0}^{n−1} ci s^i) / (s^n + Σ_{i=0}^{n−1} di s^i)

R(s) = s^{2n} + Σ_{i=0}^{2n−1} gi s^i
i=0
the coefficient-matching equations can be written as

Me X = g − b      (6.29)

where Me is the 2n × 2n matrix

Me = [ b0        0        ⋯  0    a0        0        ⋯  0
       b1        b0       ⋯  0    a1        a0       ⋯  0
       ⋮         ⋮        ⋱  ⋮    ⋮         ⋮        ⋱  ⋮
       b_{n−1}   b_{n−2}  ⋯  b0   a_{n−1}   a_{n−2}  ⋯  a0
       1         b_{n−1}  ⋯  b1   0         a_{n−1}  ⋯  a1
       0         1        ⋯  b2   0         0        ⋯  a2
       ⋮         ⋮        ⋱  ⋮    ⋮         ⋮        ⋱  ⋮
       0         0        ⋯  1    0         0        ⋯  0 ]      (6.28)

X = [d0, d1, …, d_{n−1}, c0, c1, …, c_{n−1}]ᵀ, and g − b = [g0, g1, …, g_{n−1}, gn − b0, g_{n+1} − b1, …, g_{2n−1} − b_{n−1}]ᵀ.
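The coefficient matching can be sketched as a small linear solve; the plant Gp(s) = 1/(s² + 1) and the target R(s) = (s + 1)⁴ below are assumed examples, and the matrix is built generically from convolution (polynomial multiplication) columns rather than by the explicit pattern of Eqn. (6.28).

```python
import numpy as np

def shift(p, k, size):
    out = np.zeros(size); out[k:k + len(p)] = p; return out

def conv_cols(p, ncols, size):
    # columns are successive degree shifts of p (ascending coefficients)
    return np.column_stack([shift(p, j, size) for j in range(ncols)])

# plant Gp = 1/(s^2 + 1): a = [a0, a1] = [1, 0], b = [b0, b1] = [1, 0], n = 2
n = 2
a = np.array([1.0, 0.0]); b = np.array([1.0, 0.0])
dp = np.concatenate([b, [1.0]])               # Dp ascending, monic
Rdes = np.array([1.0, 4.0, 6.0, 4.0, 1.0])    # (s + 1)^4, ascending
size = 2 * n + 1

# Dp*Dc + Np*Nc = R, with Dc = s^n + sum d_i s^i, Nc = sum c_i s^i
Me  = np.hstack([conv_cols(dp, n, size), conv_cols(a, n, size)])
rhs = Rdes - shift(dp, n, size)               # move the known s^n * Dp part
x, *_ = np.linalg.lstsq(Me, rhs, rcond=None)
d, c = x[:n], x[n:]                           # Dc = s^2 + 4s + 5, Nc = -4
```

Hand-matching coefficients confirms the unique solution d = (5, 4), c = (−4, 0), i.e. Gc(s) = −4/(s² + 4s + 5).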
6.4 Pole Assignment and Placement in Multi-Input Systems

Consider the multi-input controllable system

ẋ = Ax + Bu
with B assumed to be of full rank m ≤ n. The assumption, which can be removed later on, implies that all m available inputs are mutually independent, which is usually the case in practice. We now define C as the (n × n̄) matrix obtained by selecting, from left to right, as many (n̄) linearly independent columns of the controllability matrix D = [B  AB  ⋯  A^{n−1}B] as possible. Since the system is assumed to be controllable, it follows that D has full rank n and hence that n̄ = n. Therefore, C has full rank n and |C| ≠ 0. We now construct the nonsingular (n × n) matrix L by simply reordering the n columns of C, beginning with a power ordering of those first d1 columns of C which involve b1, the first column of B, then employing those d2 columns of C which involve b2, and so forth. In particular,

L = [b1  Ab1  ⋯  A^{d1−1}b1  b2  Ab2  ⋯  A^{d2−1}b2  ⋯  A^{dm−1}bm]      (6.30)
Define the integers

σk = Σ_{i=1}^{k} di,  k = 1, 2, …, m      (6.31)
The transformation Q, which transforms the system to its multivariable controllable canonical form, can then be computed in the following manner. Let qk denote the σk-th row of L^{-1}; then

Q = [ q1
      q1 A
      ⋮
      q1 A^{d1−1}
      q2
      q2 A
      ⋮
      q2 A^{d2−1}
      ⋮
      qm A^{dm−1} ]      (6.32)
This gives the transformed system

x̄̇ = Ā x̄ + B̄ u

with

Ā = Q A Q^{-1} = [Āij],  i, j = 1, 2, …, m      (6.33)

B̄ = Q B      (6.34)
where the diagonal blocks Āii are each upper right identity companion matrices of dimension di, while the off-diagonal blocks Āij, i ≠ j, are each identically zero except for their respective final rows. We therefore note that all information regarding the equivalent state matrix Ā can be derived from knowledge of the m ordered controllability indices di and the m ordered σi rows of Ā. The same can also be said about B̄, since only these same ordered σi rows of B̄ are nonzero. This particular structured form of the controllable pair (Ā, B̄) plays an important role in controller design.
Controllability Indices

The m integers di are defined as the controllability indices of the system with respect to the input vectors bi respectively. The controllability index of the system is denoted by μ = max(di), i = 1, 2, …, m. The controllability indices are a measure of the extent of controllability of the system with each input.
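The index computation above can be sketched as a greedy left-to-right rank selection over the columns of [B  AB  ⋯  A^{n−1}B]; the helper name and the example pair below are illustrative assumptions.

```python
import numpy as np

def controllability_indices(A, B, tol=1e-9):
    n, m = B.shape
    counts = [0] * m                 # d_i for each input column b_i
    basis = np.zeros((n, 0))
    for k in range(n):               # scan B, AB, A^2 B, ... left to right
        AkB = np.linalg.matrix_power(A, k) @ B
        for i in range(m):
            cand = np.hstack([basis, AkB[:, [i]]])
            if np.linalg.matrix_rank(cand, tol=tol) > basis.shape[1]:
                basis = cand         # column is independent: keep it
                counts[i] += 1
    return counts

# illustrative chain-of-integrators pair
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
d = controllability_indices(A, B)
```

For this pair the scan keeps b1, b2 and A·b2 (A·b1 duplicates b2), so d = [1, 2] and Σdi = n = 3.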
The same construction applied to the dual system

ẋ = Aᵀ x + Cᵀ u,  y = Bᵀ x      (6.35)

yields the observability structure: the controllability indices of the dual system are equal to the observability indices of the original system.
1. Transform the multi-input system (A, B, C) into its controllable canonical form using a transformation z = Qx. Let the controllable canonical form be (Ā, B̄, C̄).

2. Select an (n × n) matrix Ad which has the same characteristic equation as the desired characteristic equation.

3. Find the state feedback for the controllable canonical structure as

K̄ = (B̄ᵀ B̄)^{-1} B̄ᵀ (Ad − Ā)      (6.36)

4. The required state feedback for the original system can be found as

K = K̄ Q      (6.37)

It can be seen that there would be a different K for each choice of Ad. Thus, there is a flexibility of choice of the state feedback gain. This choice, however, does not exist in the case of single input systems, as the above procedure results in the same K for all choices of Ad with the same characteristic equation. This extra flexibility can be used to design an optimal state feedback that also realizes the desired closed loop poles.
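Multi-input placement with this extra freedom is also available off the shelf. The sketch below uses scipy's `place_poles` (a standard robust-placement routine, not the canonical-form construction of Eqns. (6.36)–(6.37)); the system and desired poles are assumed for illustration.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 3.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
desired = np.array([-1.0, -2.0, -3.0])

res = place_poles(A, B, desired)     # exploits the multi-input freedom internally
K = res.gain_matrix                  # convention: u = -Kx
cl = np.sort(np.linalg.eigvals(A - B @ K).real)
```

`place_poles` uses the spare degrees of freedom to improve the conditioning of the closed-loop eigenvector matrix, one concrete use of the flexibility mentioned above.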
Remark 6 A direct method of pole placement for multi-input systems has been discussed in [55]. The method converts the multi-input case into a related single input problem and solves it using the methods of SISO state feedback computation. A different approach for pole assignment of multi-input systems has been discussed in [56]; it presents an efficient method for pole assignment. However, neither method is suitable for optimal controller deduction.
Consider the system

ẋ = Ax + Bu      (6.38)
y = Dx

where x, the state, is an n-vector; u, the control, is an r-vector; and y, the output, is an m-vector. Let us assume a linear feedback control law of the form

u = Gx      (6.39)

The feedback control matrix G may be derived through two distinct approaches. One is to choose G in order to minimize a quadratic performance index of the form

J = (1/2) ∫₀^∞ (xᵀQx + uᵀRu) dt      (6.40)

subject to the closed loop dynamics ẋ = (A + BG)x. The corresponding co-state equation is

ṗ = −Qx − Aᵀp      (6.41)

The optimal control is given by the linear control law

u = −R^{-1}Bᵀp = −R^{-1}BᵀPx = Gx      (6.42)

where P is the solution of the degenerate Riccati equation

Ṗ = −PA − AᵀP + PBR^{-1}BᵀP − Q = 0      (6.43)
Combining Eqns. (6.38), (6.41) and (6.42) gives the canonical system

[ẋ; ṗ] = [A  −BR^{-1}Bᵀ; −Q  −Aᵀ] [x; p] = F [x; p]      (6.44)
This system has n eigenvalues with negative real parts and n with positive real parts, and the eigenvalues are located symmetrically about the imaginary axis. The eigenvalues of the optimal feedback system ẋ = (A + BG)x are identical to those eigenvalues of F with negative real parts. Therefore, it is possible to study the eigenvalues of F instead of those of (A + BG). This has the great advantage that the eigenvalue dependence upon Q and R may be studied without solving the Riccati equation.

The particular problem to be considered here is to determine a weighting matrix Q that gives the feedback system a set of prescribed eigenvalues. The method to be presented is based on the decoupled system.
Systems with Real, Distinct Eigenvalues. Using the modal matrix M, the system in Eqn. (6.38) can be diagonalized into

ż = Λz + Γu      (6.45)

with

Λ = M^{-1}AM,  Γ = M^{-1}B,  x = Mz      (6.46)

We may express the performance criterion (6.40) in terms of the new state z as

J = (1/2) ∫₀^∞ (zᵀMᵀQMz + uᵀRu) dt = (1/2) ∫₀^∞ (zᵀQ̄z + uᵀRu) dt,   Q̄ = MᵀQM      (6.47)
The co-state of the system (6.45) with the performance criterion (6.47) is defined by

ṗ = −Q̄z − Λp      (6.48)

The optimal control law is then given by

u = −R^{-1}BᵀM^{-T}p      (6.49)

Combining (6.45), (6.48) and (6.49) yields the canonical system

[ż; ṗ] = [Λ  −H; −Q̄  −Λ] [z; p] = F̄ [z; p]      (6.50)

where

H = M^{-1}BR^{-1}BᵀM^{-T}      (6.51)
sI F = 0 (6.52)
h i s
I 0 H
sI F = (6.53)
Q (sI ) I 0 sI + Q (sI )1 H
we get:
1
sI F = |sI | sI + Q (sI ) H (6.54)
Suppose now the weighting matrix Q̄ has only one non-zero element, namely qjj. This means that only the mode zj is being considered in the performance criterion. The second determinant on the right-hand side of Eqn. (6.54) then becomes

|sI + Λ − Q̄(sI − Λ)^{-1}H| =
| s + λ1              ⋯   0                                  ⋯   0
  ⋮                   ⋱   ⋮                                      ⋮
  −qjj hj1/(s − λj)   ⋯   s + λj − qjj hjj/(s − λj)          ⋯   −qjj hjn/(s − λj)
  ⋮                       ⋮                                  ⋱   ⋮
  0                   ⋯   0                                  ⋯   s + λn |      (6.55)
By combining Eqns. (6.54) and (6.55), the characteristic equation (6.52) may be written

|sI − F̄| = [Π_{i=1}^{n}(s − λi)] · [s + λj − qjj hjj/(s − λj)] · Π_{i=1, i≠j}^{n}(s + λi)
         = [(s + λj)(s − λj) − qjj hjj] · Π_{i=1, i≠j}^{n} [(s + λi)(s − λi)] = 0      (6.56)

If the eigenvalue λj is to be moved to the desired location sj, the shifted factor requires

sj² = λj² + qjj hjj      (6.57)

i.e.

qjj = (sj² − λj²) / hjj      (6.58)
The only element of the H-matrix H = M^{-1}BR^{-1}BᵀM^{-T} which is needed is thus hjj. With Q̄ known, we can obtain the optimal feedback gain by solving the Riccati equation

PΛ + ΛP − P M^{-1}BR^{-1}BᵀM^{-T} P + Q̄ = 0      (6.59)

u = −R^{-1}BᵀM^{-T} P M^{-1} x      (6.60)
In the optimal feedback system ẋ = (A + BG)x, we have now shifted one eigenvalue of the open loop system to the specified position. We may now start with the new system (A1 = A + BG) and shift the next eigenvalue. The result is a recursive procedure that is easy to implement. The sequence in which the eigenvalues are shifted may be arbitrary, but a different sequence will give a different Q matrix.

Before we summarize the procedure, we mention that the system we start with may already be an optimal system, where the feedback gain G0 is the result of an optimization using a weighting matrix Q0. If some of the eigenvalues in this optimal system are located too close to the imaginary axis, we may use the present method to shift them to more desirable locations.
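A single shift can be verified numerically with Eqn. (6.58). In the sketch below (an assumed example: the system is already modal, so M = I), the mode at −1 is moved to −3 while the mode at −2 stays fixed.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative system, already in modal form: Lambda = diag(-1, -2)
A = np.diag([-1.0, -2.0]); B = np.array([[1.0], [1.0]]); R = np.array([[1.0]])
M = np.eye(2)                                   # modal matrix (identity here)
Mi = np.linalg.inv(M)
H = Mi @ B @ np.linalg.solve(R, B.T) @ Mi.T     # H = M^{-1} B R^{-1} B^T M^{-T}

s1, lam1 = -3.0, -1.0
q11 = (s1**2 - lam1**2) / H[0, 0]               # Eqn. (6.58): q11 = 8
Qbar = np.diag([q11, 0.0])
Q = Mi.T @ Qbar @ Mi                            # back to original coordinates

P = solve_continuous_are(A, B, Q, R)
G = -np.linalg.solve(R, B.T @ P)                # u = Gx
poles = np.sort(np.linalg.eigvals(A + B @ G).real)
```

The Hamiltonian factorization (6.56) predicts closed-loop eigenvalues {−3, −2}: only the weighted mode moves.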
The Algorithm. Let us assume that we shall shift k eigenvalues, where k ≤ n. We get the following recursive procedure.

1. Initialize: Q = Q0, G = G0, i = 0.
2. Ai = A + BG
3. Compute Λi, Mi, and Hi = Mi^{-1}BR^{-1}BᵀMi^{-T}.
4. i = i + 1
5. The eigenvalue λj is to be shifted to sj; compute

(qjj)i = (sj² − λj²) / (hjj)_{i−1}

6. With Q̄i = diag(0, …, (qjj)i, …, 0), compute the optimal feedback gain Ḡi.
7. Gi = Ḡi M_{i−1}^{-1},  G = G + Gi
8. Qi = M_{i−1}^{-T} Q̄i M_{i−1}^{-1},  Q = Q + Qi
9. If the number of eigenvalues shifted is less than k, then change j and go back to step 2.
Solution:

M0 = [1 0; 1 1],  M0^{-1} = [1 0; −1 1],  H0 = [1 1; 1 1.2]

We have now shifted one eigenvalue. We now proceed to shift the next one:

A1 = A + BG = [−8 0; 1 −1]

M1 = [7 0; −1 1],  M1^{-1} = (1/7)[1 0; 1 7],  H1 = (1/49)[54 1; 1 5]
(q22)2 = (s2² − λ2²)/(h22)1 = 109

Q̄2 = [0 0; 0 109],  Ḡ2 = [0 2.6; 0 3.63]

Q2 = M1^{-T} Q̄2 M1^{-1} = [2.2  15.6; 15.6  109]

G2 = Ḡ2 M1^{-1} = [0.37  2.6; 0.52  3.63]

But, as mentioned above, there exist a number of Q matrices that will give the prescribed eigenvalues. Different Q matrices may be obtained by changing the order in which we shift the eigenvalues, and also by changing the numbering. If in this example we first shift λ2 to s2, and then λ1 to s1, we get

Q = [22.86  48.6; 48.6  306],  G = [3.62  6.2; 1.24  6.38]
Systems with Complex Eigenvalues. Consider a system with two complex eigenvalues:

Λ = [ α + jβ   0        0    ⋯   0
      0        α − jβ   0    ⋯   0
      0        0        λ3   ⋯   0
      ⋮        ⋮             ⋱   ⋮
      0        0        0    ⋯   λn ]      (6.62)

The eigenvector matrix M will also be complex. It is often advantageous to work with real transformation matrices, so instead of using M directly, we introduce an auxiliary transformation

Λ0 = L^{-1} Λ L      (6.63)

where

L = [ 1/2    1/(2j)   0   ⋯   0
      1/2   −1/(2j)   0   ⋯   0
      0      0        1   ⋯   0
      ⋮      ⋮            ⋱   ⋮
      0      0        0   ⋯   1 ]      (6.64)
and

Λ0 = [ α    β   0    ⋯   0
      −β    α   0    ⋯   0
       0    0   λ3   ⋯   0
       ⋮    ⋮        ⋱   ⋮
       0    0   0    ⋯   λn ]      (6.65)
The overall transformation becomes

Λ0 = L^{-1} M^{-1} A M L = T^{-1} A T      (6.66)

where the transformation matrix T = ML is now real.

We now consider shifting the two complex eigenvalues. We choose a weighting matrix

Q̄ = diag(q11, q22, 0, …, 0)      (6.67)

where q11 = q22.
Thus, we can arrive at the characteristic equation for that part of the canonical system F̄ that concerns the two eigenvalues:

s⁴ − [2(α² − β²) − q11(h11 + h22)] s² + q11²(h11h22 − h12²) + q11(h11 + h22)(α² + β²) + (α² + β²)² = 0      (6.68)
The H-matrix defined in Eqn. (6.50) becomes in this case

H = T^{-1}BR^{-1}BᵀT^{-T}      (6.69)
The two complex eigenvalues are now shifted either to two new complex positions or to positions on the real axis. In this case, we do not have complete freedom of choice. Suppose we choose to shift to a complex pair, say

s1 = γ + jδ,  s2 = γ − jδ

With these two eigenvalues, the characteristic equation corresponding to Eqn. (6.68) becomes

s⁴ − 2(γ² − δ²) s² + (γ² + δ²)² = 0      (6.70)

Equating the coefficients of the s² terms in Eqn. (6.68) and Eqn. (6.70) yields

q11 = [2(α² − β²) − 2(γ² − δ²)] / (h11 + h22)      (6.71)

As mentioned above, γ and δ cannot both be chosen arbitrarily; they must satisfy a given constraint. Equating the s⁰ coefficients in Eqn. (6.68) and Eqn. (6.70):

(γ² + δ²)² = (α² + β²)² + q11(h11 + h22)(α² + β²) + q11²(h11h22 − h12²)      (6.72)
The Algorithm. The recursive procedure for shifting complex eigenvalues is:

1. Initialize: Q = Q0, G = G0, i = 0.
2. Ai = A + BG
3. Compute Λi, Mi, Li, Ti and Hi = Ti^{-1}BR^{-1}BᵀTi^{-T}.
4. i = i + 1
5. The two complex conjugate eigenvalues λj and λj+1 are to be shifted to sj and sj+1. If sj and sj+1 are complex conjugate, use Eqns. (6.71) and (6.72) to determine (qjj)i. If sj and sj+1 are real, use Eqns. (6.74) and (6.75) to determine (qjj)i.
6. With

Q̄i = diag(0, …, (qjj)i, (q_{j+1,j+1})i, …, 0)

where (qjj)i = (q_{j+1,j+1})i, compute the optimal feedback gain Ḡi.
7. Ḡi = −R^{-1}BᵀT_{i−1}^{-T}P,  Gi = Ḡi T_{i−1}^{-1},  G = G + Gi
8. Qi = T_{i−1}^{-T} Q̄i T_{i−1}^{-1},  Q = Q + Qi
9. If there are more complex eigenvalues to be shifted, then change j to a relevant value and go back to step 2.
J = U^{-1} A U      (6.76)

This system now has distinct eigenvalues, and we may now use the procedure given above.
Consider static output feedback

u = Ky      (6.77)

Now, the closed loop system would be of the form

ẋ = (A + BKC)x

For a single input, n-state, p-output system, the existence of a K that delivers the desired closed loop poles is not always guaranteed if p < n (n simultaneous equations in p unknowns have to be satisfied to match the characteristic equations). However, when n = p, there is the possibility of a unique solution. There would be no chance of multiple solutions, as in that case the outputs would be linearly dependent.
1. Let K = [k1  k2  ⋯  kn].

2. Compare the coefficients of the characteristic equation of (A + BKC) with the desired characteristic equation, leading to n simultaneous linear equations in the ki.

3. Solve for the ki to obtain the static output feedback gain K.
For the general case, the problem is to find an output feedback gain Ky such that the eigenvalues of the closed-loop system

ẋ = (A + BKyC)x      (6.78)

conform to specified eigenvalues. The procedure to find such a Ky (if it exists) has been given in [57, 58].

The procedure involves finding an m × n state feedback matrix Kx such that the eigenvalues of the closed-loop system

ẋ = (A + BKx)x      (6.79)

conform to the desired eigenvalues. This can easily be accomplished using the procedures described in the previous section. The output feedback problem can therefore be viewed as that of determining a matrix Ky such that

Ky C = Kx      (6.81)

where B and C are given, and Kx is any one member of the set of feedback matrices which achieve the desired pole placement using state feedback.
Using the notation given by Pringle and Rayner [59], a q × p matrix C^{g1} is said to be a g1-inverse of the p × q matrix C if

C C^{g1} C = C      (6.82)

Eqn. (6.81) is consistent if and only if

Kx C^{g1} C = Kx      (6.83)

If the consistency condition given in Eqn. (6.83) is satisfied, then the general solution for Ky is given by

Ky = Kx C^{g1} + Z (Il − C C^{g1})      (6.84)

where Z is an arbitrary m × l matrix. Since Z is arbitrary, setting it to be the null matrix gives Ky as

Ky = Kx C^{g1}      (6.85)

Thus, the following theorem can be stated:

Theorem 3 A necessary and sufficient condition for all poles of a system described by Eqn. (6.2) to be arbitrarily assigned using constant output feedback is that at least one of the set of state feedback matrices Kx which achieve the same pole placement, and one of the g1-inverses of C, satisfy the consistency relationship Kx C^{g1} C = Kx.
Procedure for finding C^{g1}: Given a p × q matrix C with rank r ≤ min{p, q} and p > q, it is shown by Pringle and Rayner [59] that the matrix C may be augmented on the right by a unit matrix of order p,

C̄ = [C  Ip]      (6.86)

and the augmented matrix C̄ reduced using elementary row operations to the form

C̄ = [ Ir  C̄12  C̄13
      0   0    C̄23 ]      (6.87)
  = [E  F]      (6.88)
where C2 and I2 are the l × (n − l) and n × (n − l) matrices formed from the remaining (n − l) columns of C and I respectively. Then, Eqn. (6.89) can be written as two equations
Ky C1 = Kx I1 (6.90)
and
Ky C2 = Kx I2 (6.91)
Eqn. (6.90) can be solved for Ky as
Ky = Kx I1 C1⁻¹ (6.92)
On substituting this solution into Eqn. (6.91), we obtain the consistency condition for Eqn. (6.81) as
Kx I1 C1⁻¹ C2 = Kx I2
or
Kx (I2 − I1 C1⁻¹ C2) = 0 (6.93)
Eqn. (6.93) gives the necessary and sufficient condition on the matrix C and Kx for the existence of a matrix Ky satisfying Eqn. (6.81). If this condition is satisfied, Eqn. (6.92) can be used to calculate the required output feedback matrix Ky.
Step I (Initialization):
2. If i = l = n, go to Step II-9; else determine an orthogonal matrix Pi such that
a_i^T P_i = ‖a_i‖₂ e_n^T
where a_i^T = [0, 0, …, a_{n,n−1}, a_{n,n} − λ_i] is the last row of A_i − λ_i I_n, ‖a_i‖₂ = (a_i^T a_i)^{1/2}, and e_n is a vector of length n defined as [0, 0, …, 0, 1]^T. Let
A_{i+1} = P_{i,n−i−1}^T ⋯ P_{i,1}^T A_i P_{i,1} ⋯ P_{i,n−i−1}
and let
b_{i+1} = P_{i,n−i−1}^T ⋯ P_{i,1}^T b_i
C_{i+1} = C_i P_{i,1} ⋯ P_{i,n−i−1}
5. Find an orthogonal matrix T_i such that C_{i+1} = T_i C_i is in lower row echelon form. If the ith column of C_i is a zero vector (λ_i is a transmission zero of the system), go to Step II-8; else, continue.
9. If the last column vector of C_n is a zero vector (λ_n is a transmission zero of the system), then STOP; else, determine a feedback vector k_n^T such that the (n, n)th element of A_n − b_n k_n^T C_n is equal to λ_n. The vector k_n^T will have only the first element k_1 nonzero, and T_n will be an n × n identity matrix.
k_1 = (a_{n,n} − λ_n) / (b_{n,n} c_{1,n}) (6.94)
where a_{n,n} denotes the (n, n)th element of A_n, b_{n,n} denotes the nth element of b_n, and c_{1,n} denotes the (1, n)th element of C_n.
and let
b_{i+2} = P_{i,3(n−i−1)}^T ⋯ P_{i,1}^T b_i
C_{i+2} = C_i P_{i,1} ⋯ P_{i,3(n−i−1)}
4. Find an orthogonal matrix T_i such that C_{i+2} = T_i C_i is in lower row echelon form. If the ith and (i + 1)th columns of C_i are zero vectors (λ_i and λ̄_i are complex-conjugate transmission zeros of the system), go to Step III-7; else, continue.
5. Determine a feedback vector k_i^T such that the (i + 2, i)th and (i + 2, i + 1)th elements of A_i − b_{i+2} k_i^T C_{i+2} are eliminated. The vector k_i^T ∈ R^p is given by
k_i^T = [ 0, …, 0, k_{p_i}, k_{p_{i+1}}, 0, …, 0 ]
where
k_{p_i} = a_{i+2,i} / (b_{i+2} c_{p_i,i})
k_{p_{i+1}} = a_{i+2,i+1} / (b_{i+2} c_{p_{i+1},i+1})
Comment: The feedback described above results in a 2 × 2 matrix in the ith and (i + 1)th rows and columns of the closed-loop matrix A_i − b_{i+2} k_i^T C_{i+2} with eigenvalues (λ_i, λ̄_i).
8. If the last two columns of C_{n−1} are zero vectors (λ_{n−1} and λ̄_{n−1} are transmission zeros of the system), then STOP; else, determine a feedback vector k_{n−1}^T such that the 2 × 2 matrix in the last two rows and columns of A_{n−1} − b_{n−1} k_{n−1}^T C_{n−1} has the desired complex-conjugate pair of eigenvalues at λ_{n−1} and λ̄_{n−1}.
Comment: The vector k_{n−1}^T will be a vector of length n with only the last two elements nonzero. The last two rows and columns of A_{n−1} − b_{n−1} k_{n−1}^T C_{n−1} are given by
[ a_{n−1,n−1}   a_{n−1,n} ]   [ b_{n−1} ]              [ 0         c_{1,n} ]
[ a_{n,n−1}     a_{n,n}   ] − [ 0       ] [k_1  k_2]  [ c_{2,n−1}  c_{2,n} ] (6.95)
with
k_1 = (1 / (a_{n,n−1} b_{n−1} c_{1,n})) [ λ_{n−1} λ̄_{n−1} + a_{n−1,n} a_{n,n−1} + a²_{n,n} − a_{n,n} (λ_{n−1} + λ̄_{n−1}) ]
      − (c_{2,n} / (b_{n−1} c_{1,n} c_{2,n−1})) [ a_{n−1,n−1} + a_{n,n} − λ_{n−1} − λ̄_{n−1} ] (6.96)
and
k_2 = (1 / (b_{n−1} c_{2,n−1})) ( a_{n−1,n−1} + a_{n,n} − λ_{n−1} − λ̄_{n−1} ) (6.97)
The effect of applying the feedback k_{n−1}^T is to change the first row of the 2 × 2 matrix above so that, by appropriate choice of the two nonzero elements of the feedback vector, we can ensure that the 2 × 2 matrix in (6.95) has eigenvalues at λ_{n−1} and λ̄_{n−1}.
9. Set k^T = k^T + k_{n−1}^T and STOP.
3. Reduce (A, b, C) to its UHF and apply Algorithm {EVA-1} to get the system (A_{p−1}, B_{p−1}, C_{p−1}) and the output feedback matrix K1 = d1 k1^T, where k1^T is the output feedback vector required to assign the desired (p − 1) eigenvalues for the single-input system (A, b, C).
1. Form the dual system, i.e., set F = (A_{p−1})^T, G = (C_{p−1})^T, H = (B_{p−1})^T, and partition F, G, H as follows:
F = [ F11  0 ; F21  F22 ],  G = [ G1 ; G2 ],  H = [ H1  H2 ]
Block-Diagonalization
In this section, we consider the problem of block-diagonalizing large, time-invariant continuous-time systems whose eigenspectra (the sets of eigenvalues) are formed by clusters of large eigenvalues and clusters of small eigenvalues.
Consider an n-dimensional system
ẋ = Ax + Bu (6.98)
The two-time-scale property of the system can be checked by evaluating the norm condition
‖A22⁻¹‖ < (1/3) (‖A0‖ + ‖A12‖ · ‖L0‖)⁻¹ (6.100)
where A0 and L0 are the matrices introduced above.
However, Eqn. (6.100) gives only a sufficiency condition; it is not necessary for a two-time-scale system to satisfy Eqn. (6.100). Once the system is confirmed to have the two-time-scale property, the first of the two linear transformations is performed as follows. The first stage is to apply the change of variables
[ x1 ; z2 ] = T1 [ x1 ; x2 ] (6.102)
T1 = [ I1  0 ; L  I2 ]
Thus,
[ ẋ1 ; ż2 ] = [ I1  0 ; L  I2 ] [ A11  A12 ; A21  A22 ] [ I1  0 ; −L  I2 ] [ x1 ; z2 ] + [ I1  0 ; L  I2 ] [ B1 ; B2 ] u
            = [ A11 − A12 L              A12        ] [ x1 ]   [ B1        ]
              [ L A11 + A21 − L A12 L − A22 L   A22 + L A12 ] [ z2 ] + [ L B1 + B2 ] u
            = [ Fs  A12 ; 0  Ff ] [ x1 ; z2 ] + [ B1 ; G2 ] u (6.104)
The numerical value of L is computed using the iterative algorithm
L_{k+1} = A22⁻¹ (L_k A11 + A21 − L_k A12 L_k) (6.105)
The second transformation matrix M satisfies
0 = A12 + M Ff − Fs M
  = A12 + M (A22 + L A12) − (A11 − A12 L) M
which can likewise be solved by the iteration
M_{k+1} = (A11 M_k − A12 L M_k − M_k L A12 − A12) A22⁻¹
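The fixed-point iterations for L and M can be sketched numerically. The matrices below are illustrative (A22 has a fast eigenvalue so both iterations contract); the update used for M is a reconstructed variant, dividing by Ff rather than A22, and is checked against the defining decoupling equations rather than against a particular iteration formula.

```python
import numpy as np

# Illustrative two-time-scale system: A22 carries the fast dynamics.
A11 = np.array([[0.0, 1.0], [-1.0, -2.0]])
A12 = np.array([[0.5], [0.2]])
A21 = np.array([[0.1, 0.3]])
A22 = np.array([[-20.0]])
A22inv = np.linalg.inv(A22)

# Iteration (6.105): L_{k+1} = A22^{-1}(L_k A11 + A21 - L_k A12 L_k)
L = np.zeros((1, 2))
for _ in range(60):
    L = A22inv @ (L @ A11 + A21 - L @ A12 @ L)
# L must satisfy the decoupling condition A21 + L A11 - A22 L - L A12 L = 0
assert np.allclose(A21 + L @ A11 - A22 @ L - L @ A12 @ L, 0, atol=1e-10)

Fs = A11 - A12 @ L            # slow block
Ff = A22 + L @ A12            # fast block

# Companion iteration for M from 0 = A12 + M Ff - Fs M (reconstructed form):
M = np.zeros((2, 1))
for _ in range(60):
    M = (Fs @ M - A12) @ np.linalg.inv(Ff)
assert np.allclose(A12 + M @ Ff - Fs @ M, 0, atol=1e-10)
```

Convergence relies on ‖A22⁻¹‖ being small relative to the slow blocks, i.e. on the two-time-scale property.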
Applying this transformation to the system in Eqn. (6.110), we obtain the transformed system
[ ż1 ; ġ2 ] = [ Fs + G1 K1                         0  ] [ z1 ]   [ G1        ]
              [ N (Fs + G1 K1) − Ff N + G2 K1     Ff ] [ g2 ] + [ G2 + N G1 ] u2
Now, the value of N is obtained so that the term N (Fs + G1 K1) − Ff N + G2 K1 becomes zero; the system is then block-diagonal. This is done by using an iterative algorithm.
The control input u = u1 + u2 can be expressed in terms of the original state vector x in the following manner:
u = u1 + u2
  = [ K1  0 ] [ z1 ; z2 ] + [ 0  K2 ] [ z1 ; g2 ]
  = [ K1  0 ] T2 T1 [ x1 ; x2 ] + [ 0  K2 ] T3 T2 T1 [ x1 ; x2 ]
  = [ K1 + K2 N + ((K1 + K2 N) M + K2) L    (K1 + K2 N) M + K2 ] [ x1 ; x2 ] (6.114)
Chapter 7
7.1 Introduction
The problem of simultaneous stabilization has received considerable attention: given a family of plants in state space representation (Φi, Γi), i = 1, …, M, find a linear state feedback gain F such that (Φi + Γi F) is stable for i = 1, …, M, or determine that no such F exists. However, this method is of use only when the whole state information is available.
One way of approaching this problem with incomplete state information is to use observer
based control laws, i.e. dynamic compensators. The problem here is that the state feedback
and state estimation cannot be separated in face of the uncertainty represented by a family
of systems. Assuming that a simultaneously stabilizing F has been found, it is possible to
search for a simultaneously stabilizing full order observer gain, but this search is dependent
on the F previously obtained. If no stabilizing observer for this state feedback exists, nothing
can be said because there may exist stabilizing observers for different feedback gains.
With the fast output sampling approach proposed by Werner and Furuta in [64], it is generically possible to simultaneously realize a given state feedback gain for a family of linear, observable models. For a fast output sampling gain L to realize the effect of a state feedback gain F, one finds L such that (Φi + Γi L C) is stable for i = 1, …, M. If there exists such a set of gains F, there should also exist a common L for the given family of plants. One of the problems with this approach is that large feedback gains tend to render the system very noise-sensitive. The design problem can be posed as a multiobjective optimization problem in an LMI formulation [?] [66].
The fast output sampling controller obtained by the above methods requires only constant gains and hence is easier to implement online. This approach can also be used for tracking purposes, as described in [?] and [?].
ẋ = Ax + Bu
y = Cx (7.1)
Here (A, B) and (A, C) are assumed to be controllable and observable respectively. Choose an effective sampling time τ during which the control signal u is held constant. The sampling interval is divided into N subintervals of length Δ = τ/N, and the output measurements are taken at the time instants t = lΔ, where l = 0, 1, …. The control signal u(t), which is applied during the interval kτ < t ≤ (k + 1)τ, is then constructed as a linear combination of the last N output observations. Here N ≥ ν, the observability index.
Definition 6 (Observability Index) Given an observable pair (A, C) ∈ R^{n×n} × R^{q×n} with rank(C) = q, the observability index νi of the system with respect to a particular row ci of C is the minimum value of i such that the row ci A^i is dependent on the rows before it in the series
{ c1, c2, …, cq, c1 A, c2 A, …, cq A, …, c1 A^i, …, cq A^i, … }
The observability index of the entire system is defined as ν = max(νi).
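The overall observability index defined above can be computed as the smallest number of block rows of the observability matrix needed to reach full rank. A minimal sketch (illustrative system data):

```python
import numpy as np

# Compute the overall observability index nu: the smallest i for which
# the stacked matrix [C; CA; ...; C A^{i-1}] reaches rank n.
def observability_index(A, C):
    n = A.shape[0]
    blocks, M = [], C.copy()
    for i in range(1, n + 1):
        blocks.append(M)
        if np.linalg.matrix_rank(np.vstack(blocks)) == n:
            return i
        M = M @ A
    return None  # pair not observable

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
C = np.array([[1.0, 0.0, 0.0]])   # single output: nu = n = 3
assert observability_index(A, C) == 3

C2 = np.eye(3)                     # full state measurement: nu = 1
assert observability_index(A, C2) == 1
```

The condition N ≥ ν above guarantees that N stacked fast-rate measurements determine the state.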
The following sampled-data control is applied to the system:
u(t) = [ L0  L1  L2  ⋯  L_{N−1} ] [ y(kτ − τ) ; y(kτ − τ + Δ) ; ⋮ ; y(kτ − Δ) ] = L yk (7.2)
for kτ < t ≤ (k + 1)τ, where the matrix blocks Lj represent output feedback gains and the notation L and yk has been introduced for convenience. Note that 1/τ is the rate at which the loop is closed, whereas output samples are taken at the N-times faster rate 1/Δ.
Let (Φτ, Γτ, C) denote the system (A, B, C) sampled at the rate 1/τ, i.e. Φτ = e^{Aτ}, Γτ = ∫₀^τ e^{As} ds B, and let (Φ, Γ, C) be the same system sampled at the rate 1/Δ. Consider the discrete-time system having, at t = kτ, the input uk = u(kτ), state xk = x(kτ) and output yk. Then we have
x_{k+1} = Φτ xk + Γτ uk
y_{k+1} = C0 xk + D0 uk (7.3)
where
C0 = [ C ; CΦ ; ⋮ ; CΦ^{N−1} ],   D0 = [ 0 ; CΓ ; ⋮ ; C Σ_{j=0}^{N−2} Φ^j Γ ] (7.4)
7.3 Closed Loop Stability 129
Next assume that a state feedback gain F has been designed such that (Φτ + Γτ F) has no eigenvalues at the origin. For this state feedback one can define the fictitious measurement matrix
C̃ = (C0 + D0 F)(Φτ + Γτ F)⁻¹ (7.5)
so that yk = C̃ xk.
Let L = [ L0  L1  ⋯  L_{N−1} ]; then the feedback law in Eqn. (7.2) can be interpreted as the static output feedback
uk = L yk
For L to realize the effect of F in the system described by Eqns. (7.3) and (7.4) with the measurement matrix C̃, it must satisfy
L C̃ = F (7.6)
Let Δu_k = u_k − F x_k and Δ = L D0 − F Γτ. Thus the eigenvalues of the closed-loop system under the fast output sampling control law in Eqn. (7.2) are those of Φτ + Γτ F together with those of L D0 − F Γτ.
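The realization of a state feedback F by a fast output sampling gain L can be sketched numerically. All plant data, rates, and the gain F below are illustrative assumptions; the fictitious measurement matrix is formed as C̃ = (C0 + D0 F)(Φτ + Γτ F)⁻¹ and L is obtained by least squares.

```python
import numpy as np

def expm(M, terms=40):
    # Truncated Taylor series; adequate for the small, well-scaled matrices here.
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
tau, N = 0.5, 4                      # N >= observability index (here nu = 2)

def zoh(A, B, dt):
    Phi = expm(A * dt)
    Gam = np.linalg.solve(A, Phi - np.eye(2)) @ B   # valid since A is invertible
    return Phi, Gam

Phi_d, Gam_d = zoh(A, B, tau / N)    # fast-rate model (Phi, Gamma)
Phi_t, Gam_t = zoh(A, B, tau)        # slow-rate model (Phi_tau, Gamma_tau)

# Lifted matrices C0, D0 of Eqn. (7.4)
rows_C, rows_D = [], []
P, G = np.eye(2), np.zeros((2, 1))
for _ in range(N):
    rows_C.append(C @ P)
    rows_D.append(C @ G)
    G = Phi_d @ G + Gam_d
    P = Phi_d @ P
C0, D0 = np.vstack(rows_C), np.vstack(rows_D)

F = np.array([[1.0, -1.0]])          # an assumed state feedback gain
M = Phi_t + Gam_t @ F                # must be nonsingular
Ctil = (C0 + D0 @ F) @ np.linalg.inv(M)   # fictitious measurement matrix

L = F @ np.linalg.pinv(Ctil)         # least-squares solution of L Ctil = F
assert np.allclose(L @ Ctil, F, atol=1e-8)
assert np.allclose(np.sort_complex(np.linalg.eigvals(Phi_t + Gam_t @ L @ Ctil)),
                   np.sort_complex(np.linalg.eigvals(Phi_t + Gam_t @ F)))
```

Because C̃ has full column rank here, L C̃ = F is solved exactly and the closed-loop spectrum matches the state-feedback design.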
The above design was carried out for a single model, but it can also be used to find a fast output sampling gain which simultaneously assigns prescribed closed-loop poles for multiple models.
with n states, of which n1 eigenvalues are close to unity (slow modes) and n2 eigenvalues are around the origin (fast modes). We assume the system is asymptotically stable, completely controllable and completely observable. Note that re-indexing of the states using a permutation matrix whose columns are elementary vectors, and re-scaling the resultant model using an appropriate diagonal matrix, is necessary to isolate the fast and slow states.
[ x1(k + 1) ; x2(k + 1) ] = [ A11  A12 ; A21  A22 ] [ x1(k) ; x2(k) ] + [ B1 ; B2 ] u(k) (7.9)
y(k) = [ C1  C2 ] [ x1(k) ; x2(k) ]
From this representation a transformation matrix T [69] can be derived which decouples the fast and slow states using an explicitly invertible linear transformation,
T = [ I1 − M L   −M ; Q   I2 ] (7.10)
where Ii is an ni × ni identity matrix for i = 1, 2, M is n1 × n2 and Q is n2 × n1, with Q and M satisfying the decoupling relations.
[ xs(k + 1) ; xf(k + 1) ] = [ As  0 ; 0  Af ] [ xs(k) ; xf(k) ] + [ Bs ; Bf ] u(k)
y(k) = [ Cs  Cf ] [ xs(k) ; xf(k) ]
where
[ As  0 ; 0  Af ] = T A T⁻¹,   [ Bs ; Bf ] = T B (7.13)
[ Cs  Cf ] = C T⁻¹
Derivation of Conditioned State Feedback
To derive a well-conditioned F for the system, a stabilizing state feedback is derived separately for the slow and fast subsystems, so that the systems (As + Bs Fs) and (Af + Bf Ff) are both stable. Since the fast dynamics are stable by assumption, Ff can be taken to be zero.
The conditioned state feedback to be applied to the original system in Eqn. (7.8) is derived as
F = [ Fs  Ff ] T (7.14)
C = U Σ V^T (7.15)
The matrix of singular values Σ is then analyzed. Singular values very close to zero are discarded.
If σ_min is the minimum singular value that is considered significant and σ1 ≥ σ2 ≥ ⋯ ≥ σq ≥ σ_min > σ_{q+1} ≥ ⋯ ≥ σr, then the measurement matrix is approximated as
C_A = U_A Σ_A V_A^T (7.16)
where
Σ_A = diag(σ1, σ2, …, σq)
U_A = [ U1  U2  ⋯  Uq ]
V_A = [ V1  V2  ⋯  Vq ] (7.17)
With the new measurement matrix calculated from Eqn. (7.16), the FOS controller gain is calculated using Eqn. (7.6).
C̃ = [ C̃1  C̃2  ⋯  C̃M ] (7.20)
F = [ F1  F2  ⋯  FM ]
7.5 An LMI Formulation of the design problem 133
‖L‖ < γ1
‖Δ(L)‖ < γ2,   Δ(L) = L D0 − F Γτ
‖L C̃ − F‖ < γ3
Here three objectives have been expressed by upper bounds on matrix norms, and each bound should be as small as possible. If γ3 = 0 then L is an exact solution. Using the Schur complement, it is straightforward to bring these conditions into the form of LMIs (Linear Matrix Inequalities) [70]:
[ −γ1² I    L   ; L^T          −I ] < 0 (7.21)
[ −γ2² I    Δ(L) ; Δ(L)^T      −I ] < 0 (7.22)
[ −γ3² I    L C̃ − F ; (L C̃ − F)^T   −I ] < 0 (7.23)
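The Schur-complement fact underlying these LMIs can be checked numerically (this is a verification of the norm-bound equivalence on a sample matrix, not an LMI solver; the matrix is illustrative):

```python
import numpy as np

# Schur complement: ||M||_2 < gamma  <=>  [[-gamma^2 I, M], [M^T, -I]] < 0.
def lmi_block(M, gamma):
    p, q = M.shape
    top = np.hstack([-gamma**2 * np.eye(p), M])
    bot = np.hstack([M.T, -np.eye(q)])
    return np.vstack([top, bot])

M = np.array([[1.0, 2.0], [0.0, 1.0]])
norm = np.linalg.norm(M, 2)              # spectral norm of M

for gamma in (norm * 1.1, norm * 0.9):
    neg_def = bool(np.all(np.linalg.eigvalsh(lmi_block(M, gamma)) < 0))
    assert neg_def == (norm < gamma)     # the LMI holds exactly when ||M|| < gamma
```

An actual design would treat L as a decision variable and minimize the γi with a semidefinite programming solver.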
Here, C0^T C0 is an n × n matrix of rank n and hence invertible. Therefore, the state vector can be determined as
x(k) = (C0^T C0)⁻¹ C0^T (y_{k+1} − D0 u(k)) (7.25)
Also,
x(k + 1) = Φτ x(k) + Γτ u(k)
         = Φτ (C0^T C0)⁻¹ C0^T (y_{k+1} − D0 u(k)) + Γτ u(k)
Thus, shifting the time index by one,
x(k) = Φτ (C0^T C0)⁻¹ C0^T y_k + ( Γτ − Φτ (C0^T C0)⁻¹ C0^T D0 ) u(k − 1) (7.26)
Now, if the state feedback control input is designed as u(k) = F x(k), it can be converted into an output feedback based control by simply substituting for x(k) from Eqn. (7.26) to obtain
u(k) = F Φτ (C0^T C0)⁻¹ C0^T y_k + F ( Γτ − Φτ (C0^T C0)⁻¹ C0^T D0 ) u(k − 1) (7.27)
The advantage of the improved version of the fast output sampling controller (or multirate output feedback based controller) proposed in [72, 73] is that Eqn. (7.26) can be used to realize any state based controller, not just the static gain state feedback u(k) = F x(k).
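The exact state reconstruction from one interval of fast-rate outputs can be verified by simulation. The plant and rates below are illustrative assumptions; the code builds C0 and D0, simulates one slow interval, and recovers the state via the least-squares formula of Eqn. (7.25), then propagates it one step as in Eqn. (7.26).

```python
import numpy as np

def expm(M, terms=40):
    # Truncated Taylor series matrix exponential (small matrices only).
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
tau, N = 0.5, 4
Phi_d = expm(A * tau / N)
Gam_d = np.linalg.solve(A, Phi_d - np.eye(2)) @ B

rows_C, rows_D = [], []
P, G = np.eye(2), np.zeros((2, 1))
for _ in range(N):
    rows_C.append(C @ P)
    rows_D.append(C @ G)
    G = Phi_d @ G + Gam_d
    P = Phi_d @ P
C0, D0 = np.vstack(rows_C), np.vstack(rows_D)
Phi_t, Gam_t = P, G          # after N steps: Phi_tau and Gamma_tau

# Simulate one slow interval from an arbitrary state with constant input u.
xk = np.array([[0.7], [-1.3]])
u = np.array([[0.4]])
ys, x = [], xk.copy()
for _ in range(N):
    ys.append(C @ x)
    x = Phi_d @ x + Gam_d @ u
yk1 = np.vstack(ys)          # the stacked vector y_{k+1} of Eqn. (7.3)

# Eqn. (7.25): recover x(k) from the fast-rate outputs and the held input.
x_hat = np.linalg.solve(C0.T @ C0, C0.T @ (yk1 - D0 @ u))
assert np.allclose(x_hat, xk)

# One-step propagation, as used in Eqn. (7.26):
x_next = Phi_t @ x_hat + Gam_t @ u
assert np.allclose(x_next, x)    # matches the simulated x((k+1)tau)
```

Since the reconstruction is exact, any state-based control law can be driven from it, as the text notes.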
Chapter 8
8.1 Review
The problem of pole assignment by piecewise constant output feedback was studied by Chammas and Leondes [71] for linear time-invariant systems with infrequent observation. They showed that, by use of a periodically time-varying, piecewise constant output feedback gain, the poles of the discrete-time control system could be assigned arbitrarily (within the natural restriction that they be located symmetrically with respect to the real axis) [64, 74].
x(k + 1) = Φ x(k) + Γ u(k),
u(t) = K_l y(kτ),
kτ + lΔ ≤ t < kτ + (l + 1)Δ,   K_{l+N} = K_l (8.2)
for l = 0, 1, …, N − 1.
Note that a sequence of N gain matrices {K0, K1, …, K_{N−1}}, when substituted in Eqn. (8.2), generates a time-varying, piecewise constant output feedback gain K(t) for 0 ≤ t ≤ τ.
Consider the following system, obtained by sampling the system in Eqn. (8.1) at the sampling interval Δ = τ/N and denoted by (Φ, Γ, C):
x(kΔ + Δ) = Φ x(kΔ) + Γ u(kΔ).
Proof: Define
K = [ K0 ; K1 ; ⋮ ; K_{N−1} ],
ũ(kτ) = [ u(kτ) ; u(kτ + Δ) ; ⋮ ; u(kτ + (N − 1)Δ) ] = K y(kτ),
x(kτ + τ) = Φ^N x(kτ) + Γ̃ ũ,
y(kΔ) = C x(kΔ),
where
Γ̃ = [ Φ^{N−1} Γ, …, Φ Γ, Γ ].
Applying the periodic output feedback in Eqn. (8.3), i.e. substituting K y(kτ) for ũ(kτ), the closed-loop system becomes
8.3 Multimodel Synthesis 137
x(kτ + τ) = ( Φ^N + Γ̃ K C ) x(kτ). (8.4)
The problem has now taken the form of a static output feedback problem. Eqn. (8.4) suggests that an output injection matrix G be found such that
ρ( Φ^N + G C ) < 1, (8.5)
where ρ(·) denotes the spectral radius. By observability, one can choose an output injection gain G to achieve any desired self-conjugate set of eigenvalues for the closed-loop matrix Φ^N + GC, and since Γ̃ has rank n it follows that one can find a periodic output feedback gain which realizes the output injection gain G by solving
Γ̃ K = G (8.6)
for K.
The controller obtained from the above equation will give the desired behavior, but it might require excessive control action. To reduce this effect, we relax the condition that K exactly satisfy the above linear equation and include a constraint on the gain K. Thus we arrive at the inequalities
‖Γ̃ K − G‖ < ρ1,   ‖K‖ < ρ2
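The realization step — design an output injection G, then solve Γ̃ K = G for the stacked periodic gains — can be sketched numerically. The plant, rates, and the deadbeat pole choice below are illustrative assumptions; G is obtained by Ackermann's formula for the single-output case.

```python
import numpy as np

def expm(M, terms=40):
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
tau, N = 0.5, 4
Phi = expm(A * tau / N)                    # fast-rate state matrix
Gam = np.linalg.solve(A, Phi - np.eye(2)) @ B

PhiN = np.linalg.matrix_power(Phi, N)
Gtil = np.hstack([np.linalg.matrix_power(Phi, N - 1 - j) @ Gam
                  for j in range(N)])      # Gtil = [Phi^{N-1}Gam, ..., Gam]

# Deadbeat output injection via Ackermann's formula (single output, n = 2):
# place both closed-loop eigenvalues of PhiN + G C at the origin.
O = np.vstack([C, C @ PhiN])
G = -np.linalg.matrix_power(PhiN, 2) @ np.linalg.inv(O) @ np.array([[0.0], [1.0]])
assert np.max(np.abs(np.linalg.eigvals(PhiN + G @ C))) < 1e-5

# Eqn. (8.6): solve Gtil K = G for the stacked periodic gains K0..K_{N-1}.
K = np.linalg.pinv(Gtil) @ G               # minimum-norm exact solution
assert np.allclose(Gtil @ K, G, atol=1e-8)
assert np.max(np.abs(np.linalg.eigvals(PhiN + Gtil @ K @ C))) < 1e-5
```

The pseudoinverse gives the minimum-norm K among all exact solutions, which is exactly the spirit of the relaxed norm-constrained formulation above.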
y = Ci x,  i = 1, 2, …, M
By sampling at the rate 1/Δ we get a family of discrete systems {(Φi, Γi, Ci)}. Assume that the pairs (Φi^N, Ci) are observable. Then we can find output injection gains Gi such that Φi^N + Gi Ci has the required set of poles. Now consider the augmented system defined below:
Φ̃ = [ Φ1^N  0  ⋯  0 ; 0  Φ2^N  ⋯  0 ; ⋮ ; 0  ⋯  0  Φ_M^N ],   Γ̃v = [ Γ̃1 ; Γ̃2 ; ⋮ ; Γ̃M ],   G̃ = [ G1 ; G2 ; ⋮ ; G_M ]
A single periodic output feedback gain K realizing all M output injection gains must then satisfy Γ̃i K = Gi for every i, i.e. Γ̃v K = G̃, together with the norm constraint
‖K‖ < ρ1
A + B = [a⁻ + b⁻, a⁺ + b⁺]
A − B = [a⁻ − b⁺, a⁺ − b⁻]
A / B = A · (1/B), where 1/B = { 1/b | b ∈ B }, provided 0 ∉ B.
The concept of interval analysis and interval arithmetic is useful in the study of stability and robust control of systems with parametric uncertainty within known bounds. The uncertain parameter is replaced by an interval entity and the robust stability of the system is analyzed by various methods.
P(s) = p0 + p1 s + ⋯ + pn s^n
P(s) is said to be Hurwitz if and only if all its roots lie in the open left half of the complex plane. For a Hurwitz polynomial with real coefficients we have the following two elementary properties:
1. If a polynomial P(s) is Hurwitz, then all its coefficients are nonzero and have the same sign, either all positive or all negative.
2. If a polynomial P(s) is Hurwitz and of degree n, then arg(P(jω)) is a continuous and strictly increasing function of ω on (−∞, ∞). Moreover, the net increase in phase from −∞ to ∞ is nπ.
P^{even}(s) := p0 + p2 s² + ⋯
P^{odd}(s) := p1 s + p3 s³ + ⋯ (9.2)
P^e(ω) = P^{even}(jω) = p0 − p2 ω² + ⋯
P^o(ω) = P^{odd}(jω) / (jω) = p1 − p3 ω² + ⋯ (9.3)
For n = 2m even:
P^e(ω) = p0 − p2 ω² + ⋯ + (−1)^m p_{2m} ω^{2m}
P^o(ω) = p1 − p3 ω² + ⋯ + (−1)^{m−1} p_{2m−1} ω^{2m−2}
2. All the roots of P^e(ω) and P^o(ω) are real, and the m positive roots of P^e(ω) together with the m − 1 positive roots of P^o(ω) interlace in the following manner:
0 < ωe,1 < ωo,1 < ωe,2 < ⋯ < ωo,m−1 < ωe,m
For n = 2m + 1 odd:
P^e(ω) = p0 − p2 ω² + ⋯ + (−1)^m p_{2m} ω^{2m}
P^o(ω) = p1 − p3 ω² + ⋯ + (−1)^m p_{2m+1} ω^{2m}
and the definition of the interlacing property for this case is naturally modified to:
2. All the roots of P^e(ω) and P^o(ω) are real, and the m positive roots of P^e(ω) together with the m positive roots of P^o(ω) interlace in the following manner:
0 < ωe,1 < ωo,1 < ωe,2 < ⋯ < ωe,m < ωo,m
Theorem 4 (Hermite-Biehler Theorem) A real polynomial P(s) is Hurwitz if and only if it satisfies the interlacing property.
Proof: The proof of this theorem is rather involved and has been discussed in detail
in [80].
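The interlacing property of Theorem 4 can be checked numerically for a concrete Hurwitz polynomial; the example polynomial below is an illustration chosen for this sketch.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Check interlacing for P(s) = (s+1)(s+2)(s+3)(s+4)
#                           = 24 + 50s + 35s^2 + 10s^3 + s^4 (Hurwitz).
p = [24.0, 50.0, 35.0, 10.0, 1.0]                 # p0 .. p4

Pe = Polynomial([p[0], 0.0, -p[2], 0.0, p[4]])    # P^e(w) = p0 - p2 w^2 + p4 w^4
Po = Polynomial([p[1], 0.0, -p[3]])               # P^o(w) = p1 - p3 w^2

def pos_real_roots(poly):
    return sorted(r.real for r in poly.roots()
                  if abs(r.imag) < 1e-9 and r.real > 0)

we, wo = pos_real_roots(Pe), pos_real_roots(Po)
assert len(we) == 2 and len(wo) == 1              # m = 2 and m - 1 = 1 roots
assert 0 < we[0] < wo[0] < we[1]                  # interlacing holds => Hurwitz
```

Repeating the check with, say, a sign-indefinite coefficient set breaks either the realness of the roots or the interlacing order.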
Lemma 4 Let
P1(s) = P1^{even}(s) + P^{odd}(s)  and  P2(s) = P2^{even}(s) + P^{odd}(s)
denote two stable polynomials of the same degree with the same odd part P^{odd}(s) and differing even parts P1^{even}(s) and P2^{even}(s) satisfying
P1^e(ω) ≤ P^e(ω) ≤ P2^e(ω)  for all ω ∈ [0, ∞). (9.6)
Then P(s) = P^{even}(s) + P^{odd}(s) is also stable.
Proof: Since P1(s) and P2(s) are stable, P1^e(ω) and P2^e(ω) both satisfy the interlacing property with P^o(ω). In particular, P1^e(ω) and P2^e(ω) are not only of the same degree, but the sign of their highest order coefficient is also the same, since it is in fact the same as that of the highest coefficient of P^o(ω). Given this, it is easy to see that P^e(ω) cannot satisfy Eqn. (9.6) unless it also has this same degree and the same sign for its highest coefficient. Then, the condition in Eqn. (9.6) forces the roots of P^e(ω) to interlace with those of P^o(ω). Therefore, P^{even}(s) + P^{odd}(s) is stable.
The dual of this lemma (Lemma 5), with the roles of the even and odd parts interchanged, can be stated analogously.
Kharitonov's theorem provides a surprisingly simple necessary and sufficient condition for the Hurwitz stability of the entire family.
Proof: The proof given allows for the interpretation of Kharitonov's theorem as a generalization of the interlacing property of Hurwitz polynomials.
Let us introduce the hyper-rectangle or box B of coefficients of the perturbed polynomials
B = { q | q ∈ R^{n+1}, q_i⁻ ≤ q_i ≤ q_i⁺, i = 0, 1, …, n } (9.9)
The four Kharitonov polynomials are built from two different even parts K_max^{even}(s) and K_min^{even}(s) and two different odd parts K_max^{odd}(s) and K_min^{odd}(s), defined below:
9.1 Concepts Related to Uncertain Systems 143
K_max^{even}(s) = q0⁺ + q2⁻ s² + q4⁺ s⁴ + q6⁻ s⁶ + ⋯
K_min^{even}(s) = q0⁻ + q2⁺ s² + q4⁻ s⁴ + q6⁺ s⁶ + ⋯ (9.10)
and
K_max^{odd}(s) = q1⁺ s + q3⁻ s³ + q5⁺ s⁵ + q7⁻ s⁷ + ⋯
K_min^{odd}(s) = q1⁻ s + q3⁺ s³ + q5⁻ s⁵ + q7⁺ s⁷ + ⋯ (9.11)
The motivation for the subscripts max and min is as follows. Let q(s) be an arbitrary polynomial with its coefficients lying in the box B, and let q^{even}(s) be its even part. Then
K_max^e(ω) = q0⁺ − q2⁻ ω² + q4⁺ ω⁴ − q6⁻ ω⁶ + ⋯
q^e(ω) = q0 − q2 ω² + q4 ω⁴ − q6 ω⁶ + ⋯ (9.12)
K_min^e(ω) = q0⁻ − q2⁺ ω² + q4⁻ ω⁴ − q6⁺ ω⁶ + ⋯
so that
K_max^e(ω) − q^e(ω) = (q0⁺ − q0) + (q2 − q2⁻) ω² + (q4⁺ − q4) ω⁴ + ⋯
and
q^e(ω) − K_min^e(ω) = (q0 − q0⁻) + (q2⁺ − q2) ω² + (q4 − q4⁻) ω⁴ + ⋯
Therefore,
K_min^e(ω) ≤ q^e(ω) ≤ K_max^e(ω),  ∀ω ∈ [0, ∞) (9.13)
Similarly, if q^{odd}(s) denotes the odd part of q(s), it can be verified that
K_min^o(ω) ≤ q^o(ω) ≤ K_max^o(ω),  ∀ω ∈ [0, ∞) (9.14)
To proceed, note that the Kharitonov polynomials in Eqn. (9.8) can be rewritten as
K¹(s) = K_min^{even}(s) + K_min^{odd}(s) (9.15)
K²(s) = K_min^{even}(s) + K_max^{odd}(s) (9.16)
K³(s) = K_max^{even}(s) + K_min^{odd}(s)
K⁴(s) = K_max^{even}(s) + K_max^{odd}(s)
If all the polynomials with coefficients in the box B are stable, it is clear that the Kharitonov polynomials in Eqn. (9.8) must also be stable, since their coefficients lie in B. For the converse, assume that the Kharitonov polynomials are stable, and let q(s) = q^{even}(s) + q^{odd}(s) be an arbitrary polynomial with coefficients in the box B, with even part q^{even}(s) and odd part q^{odd}(s).
Since K¹(s) and K²(s) are stable and Eqn. (9.14) holds, we conclude from Lemma 5, applied to K¹(s) and K²(s) in Eqn. (9.15), that
K_min^{even}(s) + q^{odd}(s) is stable.
Similarly, from Lemma 5 applied to K³(s) and K⁴(s) in Eqn. (9.15), we conclude that
K_max^{even}(s) + q^{odd}(s) is stable.
Now, since Eqn. (9.13) holds, we can apply Lemma 4 to the two stable polynomials K_max^{even}(s) + q^{odd}(s) and K_min^{even}(s) + q^{odd}(s), and we conclude that q(s) is stable. This completes the proof.
q(s) = (α0 + jβ0) + (α1 + jβ1) s + ⋯ + (αn + jβn) s^n (9.17)
where
αi ∈ [αi⁻, αi⁺],  βi ∈ [βi⁻, βi⁺],  i = 0, 1, …, n (9.18)
Kharitonov extended his result for real polynomials to the above complex interval family by introducing two sets of complex polynomials as follows:
K1+(s) := (α0⁻ + jβ0⁻) + (α1⁻ + jβ1⁺) s + (α2⁺ + jβ2⁺) s² + (α3⁺ + jβ3⁻) s³ (9.19)
          + (α4⁻ + jβ4⁻) s⁴ + (α5⁻ + jβ5⁺) s⁵ + ⋯
K2+(s) := (α0⁻ + jβ0⁺) + (α1⁺ + jβ1⁺) s + (α2⁺ + jβ2⁻) s² + (α3⁻ + jβ3⁻) s³ (9.20)
          + (α4⁻ + jβ4⁺) s⁴ + (α5⁺ + jβ5⁺) s⁵ + ⋯
K3+(s) := (α0⁺ + jβ0⁻) + (α1⁻ + jβ1⁻) s + (α2⁻ + jβ2⁺) s² + (α3⁺ + jβ3⁺) s³ (9.21)
          + (α4⁺ + jβ4⁻) s⁴ + (α5⁻ + jβ5⁻) s⁵ + ⋯
K4+(s) := (α0⁺ + jβ0⁺) + (α1⁺ + jβ1⁻) s + (α2⁻ + jβ2⁻) s² + (α3⁻ + jβ3⁺) s³ (9.22)
          + (α4⁺ + jβ4⁺) s⁴ + (α5⁺ + jβ5⁻) s⁵ + ⋯
and
K1−(s) := (α0⁻ + jβ0⁻) + (α1⁺ + jβ1⁻) s + (α2⁺ + jβ2⁺) s² + (α3⁻ + jβ3⁺) s³ (9.23)
          + (α4⁻ + jβ4⁻) s⁴ + (α5⁺ + jβ5⁻) s⁵ + ⋯
K2−(s) := (α0⁻ + jβ0⁺) + (α1⁻ + jβ1⁻) s + (α2⁺ + jβ2⁻) s² + (α3⁺ + jβ3⁺) s³ (9.24)
          + (α4⁻ + jβ4⁺) s⁴ + (α5⁻ + jβ5⁻) s⁵ + ⋯
K3−(s) := (α0⁺ + jβ0⁻) + (α1⁺ + jβ1⁺) s + (α2⁻ + jβ2⁺) s² + (α3⁻ + jβ3⁻) s³ (9.25)
          + (α4⁺ + jβ4⁻) s⁴ + (α5⁺ + jβ5⁺) s⁵ + ⋯
K4−(s) := (α0⁺ + jβ0⁺) + (α1⁻ + jβ1⁺) s + (α2⁻ + jβ2⁻) s² + (α3⁺ + jβ3⁻) s³ (9.26)
          + (α4⁺ + jβ4⁺) s⁴ + (α5⁻ + jβ5⁺) s⁵ + ⋯
Theorem 6 (Complex Variable Kharitonov's Theorem) The family of polynomials F is Hurwitz if and only if the eight Kharitonov polynomials K1+(s), K2+(s), K3+(s), K4+(s), K1−(s), K2−(s), K3−(s), K4−(s) are all Hurwitz.
Proof: The proof of this theorem has been discussed in [80].
Zi = { z ∈ C : |z − aii| ≤ ri } (9.28)
Theorem 7 (Gershgorin Theorem) Let A be as above and let λ be an eigenvalue of A. Then λ belongs to one of the circles Zi. Moreover, if m of the circles form a connected set S, disjoint from the remaining n − m circles, then S contains exactly m of the eigenvalues of A, counted according to their multiplicity as roots of the characteristic polynomial of A.
Proof [82]: Let λ be an eigenvalue of A and x a corresponding eigenvector. Let k be such that |xk| = max_j |xj| > 0. From the kth row of Ax = λx,
Σ_{j=1}^{n} a_{kj} x_j = λ x_k
(λ − a_{kk}) x_k = Σ_{j=1, j≠k}^{n} a_{kj} x_j
|λ − a_{kk}| |x_k| ≤ Σ_{j=1, j≠k}^{n} |a_{kj}| |x_j| ≤ r_k |x_k|
Remark 7 Since A and A^T have the same eigenvalues and characteristic polynomial, these results are also valid if summation over the column rather than the row is used in defining the radii in Eqn. (9.27).
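Theorem 7 is straightforward to verify numerically; the matrix below is an illustrative assumption.

```python
import numpy as np

# Every eigenvalue of A lies in some Gershgorin disc
# Z_i = { z : |z - a_ii| <= r_i },  r_i = sum_{j != i} |a_ij|.
A = np.array([[4.0, 1.0, 0.5],
              [0.2, -3.0, 0.1],
              [0.3, 0.4, 1.0]])
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

for lam in np.linalg.eigvals(A):
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```

For strongly diagonally dominant matrices the discs are small and nearly centered on the diagonal entries, which is why Gershgorin bounds are useful for interval matrices.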
F = N(s) / D(s) (9.30)
where
D(s) = s^n + Σ_{i=0}^{n−1} [ai] s^i,  [ai] = [ai⁻, ai⁺]
N(s) = Σ_{i=0}^{n−1} [bi] s^i,  [bi] = [bi⁻, bi⁺]
The system in Eqn. (9.30) can be represented in controllable canonical form as
Ẋ = A X + B u,
y = C X (9.31)
where
A = [ 0    1    0   ⋯  0
      0    0    1   ⋯  0
      ⋮               ⋱
      0    0    0   ⋯  1
      −a0  −a1  ⋯  −a_{n−1} ],   B = [ 0 ; 0 ; ⋮ ; 0 ; 1 ],
C = [ b0  b1  ⋯  b_{n−1} ]. (9.32)
A state feedback vector F = [ f0  f1  ⋯  f_{n−1} ] is said to stabilize the above interval system if the input u = −F X makes the characteristic polynomial of the closed-loop system,
δ(s) = [δ0] + [δ1] s + ⋯ + [δ_{n−1}] s^{n−1} + s^n,  [δi] = [ai] + fi,  i = 0, 1, …, n − 1,
a Hurwitz invariant polynomial; i.e., all the roots of the uncertain polynomial are in the strict left half of the complex plane.
Proof: The four Kharitonov plants have their characteristic polynomials as
K¹(s) = a0⁻ + a1⁻ s + a2⁺ s² + a3⁺ s³ + a4⁻ s⁴ + ⋯ + s^n
K²(s) = a0⁻ + a1⁺ s + a2⁺ s² + a3⁻ s³ + a4⁻ s⁴ + ⋯ + s^n (9.33)
K³(s) = a0⁺ + a1⁻ s + a2⁻ s² + a3⁺ s³ + a4⁺ s⁴ + ⋯ + s^n
K⁴(s) = a0⁺ + a1⁺ s + a2⁻ s² + a3⁻ s³ + a4⁺ s⁴ + ⋯ + s^n
If these plants are represented in phase variable canonical form as shown in Eqn. (9.32), a simultaneously stabilizing state feedback gain F can be obtained. This state feedback, when applied to the interval plant, would change its characteristic polynomial from D(s) to D̃(s), where
[di] = [ai] + fi,  i = 0, 1, …, n − 1.
This results in the new interval polynomial D̃(s), with its four Kharitonov polynomials as
K¹(s) = d0⁻ + d1⁻ s + d2⁺ s² + d3⁺ s³ + d4⁻ s⁴ + ⋯ + s^n
K²(s) = d0⁻ + d1⁺ s + d2⁺ s² + d3⁻ s³ + d4⁻ s⁴ + ⋯ + s^n (9.34)
K³(s) = d0⁺ + d1⁻ s + d2⁻ s² + d3⁺ s³ + d4⁺ s⁴ + ⋯ + s^n
K⁴(s) = d0⁺ + d1⁺ s + d2⁻ s² + d3⁻ s³ + d4⁺ s⁴ + ⋯ + s^n
all of which are Hurwitz by design. Since the four Kharitonov polynomials of the closed-loop interval polynomial are stable, Kharitonov's theorem implies that the entire family of polynomials is stable. This in turn implies that the state feedback F, designed to stabilize the four Kharitonov plants, stabilizes the entire family of plants.
δ(s) = δ0 + δ1 s + δ2 s² + ⋯ + δ_{n−1} s^{n−1} + s^n
where
δi ∈ [ ai⁰ − Δai/2, ai⁰ + Δai/2 ],  i = 0, 1, …, n − 1
Suppose now that one can use a vector of n free parameters K = (k0, k1, k2, …, k_{n−1}) to transform the family F0 into a family Fk described by
Lemma 7 Let n be a positive integer and let P(s) be a stable polynomial of degree n − 1 with all coefficients positive. Consider first the case n = 4r, in which the positive roots of P^e(ω) and P^o(ω) interlace as
0 < ωe,1 < ωo,1 < ωe,2 < ωo,2 < ⋯ < ωe,2r−1 < ωo,2r−1
so that
P^e(ωo,1) < 0,  P^e(ωo,2) > 0,  P^e(ωo,3) < 0,  …,  P^e(ωo,2r−2) > 0,  P^e(ωo,2r−1) < 0 (9.35)
Let us denote
δ = min_{j odd} { −P^e(ωo,j) / (ωo,j)^{4r} } (9.36)
By (9.35), we know that δ is positive. We can now prove the following: for 0 < p_{4r} < δ, the polynomial Q(s) = P(s) + p_{4r} s^{4r} is stable. (9.37)
We are going to show that Q^o(ω) and Q^e(ω) satisfy the Hermite-Biehler theorem provided that p_{4r} remains within the bounds defined in (9.37).
First, Q^o(ω) = P^o(ω), so its roots are known. Next, Q^e(0) = p0 > 0, and
Q^e(ωo,1) = P^e(ωo,1) + p_{4r} (ωo,1)^{4r} < P^e(ωo,1) + δ (ωo,1)^{4r} ≤ P^e(ωo,1) − P^e(ωo,1) = 0
But, by (9.35), we know that P^e(ωo,2) > 0, and therefore we also have
Q^e(ωo,2) > 0
Pursuing the same reasoning, we can prove in exactly the same way that the following inequalities hold:
…,  Q^e(ωo,2r−1) < 0,  Q^e(+∞) > 0
Therefore Q^e(ω) has a final positive root ω′e,2r which satisfies
ωo,2r−1 < ω′e,2r (9.41)
From (9.40) and (9.41) we conclude that Q^o(ω) and Q^e(ω) satisfy the Hermite-Biehler theorem, and therefore Q(s) is stable.
To complete the proof of this lemma, notice that Q(s) is obviously unstable if p_{4r} < 0, since we assumed all pi to be positive. Moreover, it can be shown using (9.36) that for p_{4r} = δ, the polynomial P(s) + δ s^{4r} has a pure imaginary root and is therefore unstable. Now, it is impossible for P(s) + p_{4r} s^{4r} to be stable for some p_{4r} > δ, because otherwise we could use Kharitonov's theorem and say
P(s) + (δ/2) s^{4r} and P(s) + p_{4r} s^{4r} both stable ⟹ P(s) + δ s^{4r} stable,
which would be a contradiction. This completes the proof of the theorem when n = 4r.
For the sake of completeness, let us make precise that in general we have:
if n = 4r:      δ = min_{j odd} { −P^e(ωo,j) / (ωo,j)^{4r} }
if n = 4r + 1:  δ = min_{j even} { −P^o(ωo,j) / (ωo,j)^{4r+1} }
9.2 Bhattacharyyas Method 151
if n = 4r + 2:  δ = min_{j even} { −P^e(ωo,j) / (ωo,j)^{4r+2} }
if n = 4r + 3:  δ = min_{j odd} { −P^o(ωo,j) / (ωo,j)^{4r+3} }
The details of the proof for the other cases are omitted.
We can now enunciate the following theorem to answer the question raised at the begin-
ning of this section.
Theorem 8 For any set of nominal parameters {a0 , a1 , a2 , , an1 } , and for any set of
positive numbers a0 , a1 , , an1 , it is possible to find a vector K such that the entire
family Fk is stable.
(R(.)) = (R(.))
we conclude from Eqn. (9.42) that the following four Kharitonov polynomials of degree n − 1 are stable:
P¹(s) = (p0 − Δa0/4) + (p1 − Δa1/4) s + (p2 + Δa2/4) s² + ⋯
P²(s) = (p0 − Δa0/4) + (p1 + Δa1/4) s + (p2 + Δa2/4) s² + ⋯ (9.43)
P³(s) = (p0 + Δa0/4) + (p1 − Δa1/4) s + (p2 − Δa2/4) s² + ⋯
P⁴(s) = (p0 + Δa0/4) + (p1 + Δa1/4) s + (p2 − Δa2/4) s² + ⋯
Step 2: Now, applying Lemma 7, we know that we can find four positive numbers δ1, δ2, δ3, δ4 such that the polynomials
P^j(s) + δj s^n (9.44)
are all stable. If λ can be chosen to be equal to 1 (that is, if all four δj are greater than 1), then we choose λ = 1; otherwise we multiply everything by 1/λ, which is greater than 1, and we know from (9.44) that the four polynomials
K^j(s) = (1/λ) P^j(s) + s^n
are stable. But the four polynomials K^j(s) are nothing but the four Kharitonov polynomials associated with the family of polynomials
δ(s) = δ0 + δ1 s + ⋯ + δ_{n−1} s^{n−1} + s^n
where
δi ∈ [ (1/λ)(pi − Δai/2), (1/λ)(pi + Δai/2) ],  i = 0, 1, 2, …, n − 1
and the feedback parameters are chosen so that
ki + ai⁰ = (1/λ) pi,  i = 0, 1, …, n − 1
The vector K so obtained is the stabilizing state feedback for the interval plant representation (A, b) in phase variable canonical form: the state feedback K renders the closed-loop system
ẋ = Ax + bu,
u = −Kx
stable.
Remark 8 It is clear that in Step 1 one can determine the largest box around R(.) with sides proportional to Δai. The dimensions of such a box are also enlarged by a factor 1/λ when R(.) is replaced by (1/λ)R(.). This change does not affect the steps that follow.
a0 = 3, a1 = 5, a2 = 2, a3 = 1, a4 = 7, a5 = 5
are stable
Step 3: We just have to take
K = (k0, k1, k2, k3, k4, k5) = (p0, p1, p2, p3, p4, p5) − (a0⁰, a1⁰, a2⁰, a3⁰, a4⁰, a5⁰)
  = (5, 29, 58, 63, 28, 7)
Thus, the state space representation
ẋ = [ 0  1  0  0  0  0
      0  0  1  0  0  0
      0  0  0  1  0  0
      0  0  0  0  1  0
      0  0  0  0  0  1
      [−2.5, −0.5]  [−3.5, −1.5]  [−3, −1]  [2.5, 3.5]  [−5.5, −1.5]  [1.5, 3.5] ] x + [ 0 ; 0 ; 0 ; 0 ; 0 ; 1 ] u
u = − [ 5  29  58  63  28  7 ] x
is stable.
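The overall recipe — shift each interval coefficient by a feedback gain, then run the four-polynomial Kharitonov test on the closed loop — can be automated. The intervals and gain below are fresh illustrative numbers, not the data of the worked example.

```python
import numpy as np

# Companion-form interval plant with char. polynomial s^3 + a2 s^2 + a1 s + a0,
# a_i in [lo_i, hi_i]; the feedback u = -Kx shifts each coefficient to a_i + k_i.
lo = np.array([-1.0, -1.0, -1.0])
hi = np.array([1.0, 1.0, 1.0])
K = np.array([10.0, 17.0, 8.0])          # assumed stabilizing gain

dlo, dhi = lo + K, hi + K                # closed-loop coefficient intervals

def kharitonov(lo, hi):
    pats = ["--++", "-++-", "+--+", "++--"]
    return [np.array([hi[i] if p[i % 4] == "+" else lo[i]
                      for i in range(len(lo))]) for p in pats]

def is_hurwitz(coeffs_ascending_monic):
    c = np.concatenate([coeffs_ascending_monic, [1.0]])  # append leading coeff
    return bool(np.all(np.roots(c[::-1]).real < 0))

# All four closed-loop Kharitonov polynomials Hurwitz => whole family Hurwitz.
assert all(is_hurwitz(k) for k in kharitonov(dlo, dhi))
```

For a cubic d0 + d1 s + d2 s² + s³ the Hurwitz test reduces to positivity plus d2 d1 > d0, which the shifted intervals satisfy at every vertex here.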
where a_j^i, j = 0, 1, 2, …, n − 1, i = 1, 2, 3, 4, are one or the other of the extremal points of the intervals [a_j⁻, a_j⁺], depending on the value of i.
9.3 Jayakumars Method 155
2. Let the simultaneously stabilizing state feedback be K = [ k0  k1  ⋯  k_{n−1} ]. Then applying this feedback to the Kharitonov plants transforms their characteristic equations to
K^{i′}(s) = (a_0^i + k0) + (a_1^i + k1) s + ⋯ + (a_{n−1}^i + k_{n−1}) s^{n−1} + s^n,  i = 1, 2, 3, 4
3. Now prepare the Routh table of each of the four polynomials. The first-column entries of the table will be nonlinear expressions in the ki. Setting all the first-column elements of the Routh tables of the four Kharitonov polynomials to be positive, one obtains a set of nonlinear inequalities in the ki.
4. These inequalities are then solved using nonlinear programming, with the aim of minimizing ‖K‖₂.
Example 11 The problem is to compute a simultaneously stabilizing state feedback for the system
ẋ = [ 0  1  0 ; 0  0  1 ; [−3, −2]  [−2, −1]  [−2, −1] ] x + [ 0 ; 0 ; 1 ] u
SOLUTION: The characteristic polynomial of the interval system is
δ(s) = s³ + [1, 2] s² + [1, 2] s + [2, 3]
whose Kharitonov polynomials are
K¹(s) = 2 + s + 2s² + s³
K²(s) = 2 + 2s + 2s² + s³
K³(s) = 3 + s + s² + s³
K⁴(s) = 3 + 2s + s² + s³
The Kharitonov polynomials of the closed-loop system would be
K^{1′}(s) = (2 + k0) + (1 + k1) s + (2 + k2) s² + s³
K^{2′}(s) = (2 + k0) + (2 + k1) s + (2 + k2) s² + s³
K^{3′}(s) = (3 + k0) + (1 + k1) s + (1 + k2) s² + s³
K^{4′}(s) = (3 + k0) + (2 + k1) s + (1 + k2) s² + s³
This gives rise to eight nonlinear inequalities which, when solved, give the value of K as
K = [ 0.6  1  2.2 ]
The interval plant family is stable for u = −Kx.
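The gain of Example 11 can be checked directly, assuming the sign convention in which the closed-loop coefficients are a_i + k_i (as in the primed polynomials above):

```python
import numpy as np

# Verify Example 11: K = [0.6, 1, 2.2] stabilizes all four closed-loop
# Kharitonov polynomials (d_i = a_i + k_i), hence the whole interval family.
k = np.array([0.6, 1.0, 2.2])
plants = [(2, 1, 2), (2, 2, 2), (3, 1, 1), (3, 2, 1)]   # open-loop (a0, a1, a2)
for a in plants:
    c = np.array(a, dtype=float) + k        # closed-loop (d0, d1, d2)
    poly = np.concatenate([c, [1.0]])       # ascending powers, monic s^3
    assert np.all(np.roots(poly[::-1]).real < 0)
```

For each vertex this amounts to the cubic Hurwitz condition d2 d1 > d0 with positive coefficients, which all four shifted polynomials satisfy.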
2. Find a state feedback gain K such that the real parts of the eigenvalues of (Ai − bK) are less than zero for all i (where i = 1, 2, 3, 4).
Proof:
2 ⟹ 1: Suppose the second statement is true. Then there exist positive definite matrices P and Q such that the following equation is true:
(Ai − bK)^T P + P (Ai − bK) = −Q (9.45)
It may be difficult to construct such a P and Q, but such a P and Q exist satisfying Eqn. (9.45), and this holds for all i from 1 to 4.
1 ⟹ 2: Suppose the first statement is true, i.e. all four extremal systems are stable. By Kharitonov's theorem, it is clear that the second statement is true.
Proposition 4 Let Ri(s) = Pi(s) + λQ(s) be an nth degree polynomial. For a given Pi(s), it is possible to select λQ(s) such that Ri(s) is stable.
9.4 Smagina & Brewer's Method
Proof: Assume i = 2, and also assume P_2(s) has all its roots inside a circle centred at the origin with radius r_max. Transform P_2(s) into R_2(s) = P_2(s) + εQ(s). A bound on ε can be derived as follows.

Add the same εQ(s) to the other polynomials P_1(s), P_3(s) and P_4(s), so that R_i(s) = R_2(s) + q_i(s) with q_i(s) = P_i(s) - P_2(s). If c_max < ε r_max - r_max, then it is impossible for any of these q_i(s) to shift the roots of R_2(s) + q_i(s) into the right half of the complex plane.

From this it is clear that it is always possible to choose ε such that R_i(s) is stable for all i = 1, 2, 3, 4, by using the relation

       ε > c_max / r_max + 1        (9.46)

The above value of ε may be high, but it renders the interval polynomial family P(s) stable once εQ(s) is added to it.
The existence of a stabilizing closed-loop state feedback gain K is equivalent to the existence of such a Q(s). Thus the existence of a simultaneously stabilizing K for given A_1, A_2, A_3 and A_4 is proved.
       u = Kx        (9.48)

The problem is to find a real r × n matrix K satisfying the inclusions.
Controllability Criterion 1 (Sufficient) The pair ([A], [b]) is controllable for any A ∈ [A], b ∈ [b] if the n × n interval controllability matrix

       [Y] = ([b], [A]·[b], ..., [A]^{n-1}·[b])        (9.51)

satisfies

       0 ∉ det [Y]        (9.52)
If the pair ([A], [b]) does not satisfy the above criterion, then there is no guarantee that a simultaneously stabilizing state feedback exists.
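Criterion (9.52) requires genuine interval arithmetic for det [Y]; a cheaper necessary-style probe is to evaluate det(Y) at the vertex realizations of [A] and [b] and verify that no determinant vanishes or changes sign. The sketch below does this for a hypothetical companion-form interval system (three uncertain entries, eight vertices); passing the probe does not by itself certify 0 ∉ det [Y].

```python
import itertools
import numpy as np

def ctrb(A, b):
    """Controllability matrix (b, Ab, ..., A^{n-1} b)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])

# Hypothetical interval system: companion form with an uncertain last row.
ROW_INTERVALS = [(-3.0, -2.0), (-2.0, -1.0), (-2.0, 1.0)]
b = np.array([[0.0], [0.0], [1.0]])

dets = []
for corner in itertools.product(*ROW_INTERVALS):
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  list(corner)])
    dets.append(np.linalg.det(ctrb(A, b)))

# All vertex determinants must be bounded away from zero and share one sign.
vertex_ok = (min(abs(d) for d in dets) > 1e-9
             and len({np.sign(d) for d in dets}) == 1)
```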
Introduce the n × n interval matrix

       [P] = [Y] · | [α_1]  [α_2]  ...  [α_{n-1}]  1 |
                   | [α_2]  [α_3]  ...      1      0 |        (9.53)
                   |   :      :             :      : |
                   |   1      0    ...      0      0 |

and an interval row vector [f].
Theorem 9 If the pair ([A], [b]) is controllable and the widths of the polynomial coefficients satisfy the inequalities w([d_i]) > w([α_i]), i = 0, 1, ..., n-1, then an n-element row vector k of a stabilizing state feedback can be calculated from the interval inclusion

       k · [P] ⊆ [f]        (9.57)
Proof: If the pair ([A], [b]) is controllable, then all pairs (A, b) with A ∈ [A], b ∈ [b] are controllable. Then a feedback row vector k can be calculated from the following matrix equation:

       k · (b, Ab, ..., A^{n-1}b) · | α_1  α_2  ...  α_{n-1}  1 |
                                    | α_2  α_3  ...    1      0 |  + (α_0, α_1, ..., α_{n-1}) = (d_0, d_1, ..., d_{n-1})        (9.58)
                                    |  :    :          :      : |
                                    |  1    0   ...    0      0 |

where α_0, α_1, ..., α_{n-1} are the coefficients of the characteristic polynomial of A, and d_0, d_1, ..., d_{n-1} are the coefficients of the assigned asymptotically stable polynomial D(s).
The characteristic polynomial coefficients α_i(A), i = 0, 1, ..., n-1, can be considered as multilinear functions of the elements of the matrix A. Denote the row vector on the left-hand side of Eqn. (9.58) by f(k, A, b); its coordinates are rational functions of the elements of k, A, b. Selecting an interval extension F(k, [A], [b]) of the function f(k, A, b), we can represent the left-hand side of Eqn. (9.58) in interval form.

The known interval terms are then subtracted from both sides of relation (9.60), leaving the unknown member k·[P] on the left-hand side. Since the regular interval subtraction of an interval number [a] from itself does not result in zero ([a] - [a] ≠ 0), we cannot use the usual interval arithmetic operations to solve Eqn. (9.60) for k·[P]. The desired zero result can be achieved if we apply the nonstandard interval subtraction defined in Eqn. (9.55). Application of this operation to Eqn. (9.60) results in the formula in Eqn. (9.57).
The inequalities w[d_i] > w[α_i], i = 0, 1, ..., n-1, follow from Eqn. (9.54) and the definition of the regular interval width. Thus the theorem is proved.
       k = M[f] · P^{-1}        (9.61)

if the pair ([A], [b]) is controllable and, for the assigned asymptotically stable interval polynomial in Eqn. (9.50), the inclusion

       M[f] · P^{-1} · [P] ⊆ [f]        (9.62)

takes place. In Eqns. (9.61) and (9.62), P ∈ [P] is a real nonsingular matrix and M[·] denotes the real matrix of interval-element midpoints.
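At the midpoint matrices, Eqn. (9.61) reduces to an ordinary pole-placement computation: the real version of Eqn. (9.58) gives k = (d - α)(Y·H)^{-1}, a Bass-Gura-type formula. A sketch, under the sign convention u = -kx so that the closed loop is A - bk (the numerical data is illustrative, not from the text):

```python
import numpy as np

def midpoint_gain(A, b, d):
    """Real-matrix version of Eqn. (9.58): k . Y . H + alpha = d, hence
    k = (d - alpha) (Y H)^{-1}.  A is n x n, b is n x 1, and d holds the
    ascending coefficients (d0, ..., d_{n-1}) of the target monic D(s).
    Sign convention: u = -k x, closed loop A - b k."""
    n = A.shape[0]
    alpha = np.poly(A)[1:][::-1]      # ascending characteristic coefficients
    Y = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    H = np.zeros((n, n))              # Hankel matrix of Eqn. (9.53)
    for i in range(n):
        for j in range(n):
            s = i + j + 1
            if s <= n - 1:
                H[i, j] = alpha[s]
            elif s == n:
                H[i, j] = 1.0
    return (np.asarray(d, dtype=float) - alpha) @ np.linalg.inv(Y @ H)

# Illustrative data: place the poles of A - b k at {-1, -2},
# i.e. D(s) = s^2 + 3s + 2, so d = (2, 3).
A = np.array([[1.0, 1.0], [0.0, 2.0]])
b = np.array([[0.0], [1.0]])
k = midpoint_gain(A, b, d=(2.0, 3.0))
```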
Corollary 1 Controllability of the pair ([A], [b]) is a necessary condition for the existence of the modal P-regulator u = kx.
Remark 9 If a regular interval number [a] is not a degenerate (point) interval, then it has a positive width w[a] = w[a^-, a^+] = a^+ - a^- > 0. The elements of the interval vector [f] in Eqn. (9.54) must therefore have positive widths, and so must satisfy the inequalities w[d_i] > w[α_i], i = 0, 1, 2, ..., n-1. Thus it is recommended to select an asymptotically stable interval polynomial [D(s)] whose interval coefficients [d_i], i = 0, 1, 2, ..., n-1, are wide enough to guarantee the required inequalities.
Theorem 11 If the pair ([A], [b]) is controllable according to Controllability Criterion 1 and P = M[P], then the inclusion (9.57) has a solution of the form

       k = M[f] · (M[P])^{-1}        (9.63)

provided that the widths w[d_j], j = 0, 1, 2, ..., n-1 of the coefficients of the interval polynomial [D(s)] satisfy the inequalities

       Σ_{i=1}^{n} |k_i| w[p_ij] + w[α_j] < w[d_j]        (9.64)

where k_i are the elements of the row vector k and w[p_ij] are the widths of the elements of the matrix [P] in Eqn. (9.53).
Remark 10 For some control systems the calculated k may fail to satisfy some of the inequalities in (9.64). In order to comply with the restrictions in (9.64), we can try to relax (widen) the intervals [d_j] without losing the stability of the interval polynomial [D(s)] and then recalculate k.

Therefore, a stabilizing state-feedback control u = kx exists if the pair ([A], [b]) is controllable and the coefficients of some stable interval polynomial [D(s)] satisfy the inequalities in (9.64).
Case r ≥ 2: For multivariable interval control systems we use the concept of interval matrix rank. For an l × p interval matrix [C], consider all minors det [C]_{m×m} of order m ≤ min(l, p). These minors are interval numbers. A minor is referred to as singular if its interval contains zero.

Definition 9 An interval matrix [C] has rank [C] equal to the maximal order of its nonsingular minors.
Based on the above definition, for an n × n matrix [A] and an n × r matrix [B] we introduce

Controllability Criterion 2 The pair ([A], [B]) is controllable for any A ∈ [A], B ∈ [B] if and only if the interval controllability matrix

       [Y] = ([B], [A]·[B], ..., [A]^{n-1}·[B])        (9.65)

satisfies

       rank [Y] = n        (9.66)
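As with Criterion 1, a finite surrogate for (9.66) is to check the rank of the controllability matrix at the vertex realizations of [A] and [B]. This samples only the corners of the interval family, so it is an illustrative probe rather than a proof of interval rank; the system data below is hypothetical.

```python
import itertools
import numpy as np

def ctrb_mimo(A, B):
    """Controllability matrix (B, AB, ..., A^{n-1} B), size n x (n r)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

# Hypothetical 3rd-order system with two inputs and two uncertain entries.
A_template = np.array([[0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [0.0, 0.0, 0.0]])
UNCERTAIN = {(2, 0): (-2.0, -1.0), (2, 2): (-3.0, -2.0)}  # (row, col): interval
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

ranks = set()
for corner in itertools.product(*UNCERTAIN.values()):
    A = A_template.copy()
    for idx, val in zip(UNCERTAIN.keys(), corner):
        A[idx] = val
    ranks.add(int(np.linalg.matrix_rank(ctrb_mimo(A, B))))

# Full rank n at every sampled vertex is consistent with rank [Y] = n.
```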
Step 1: For the pair ([A], [B]), analyse the controllability criterion in Eqn. (9.66) to determine whether the problem has a solution. Note that for a pair ([A], [b]) the conditions (9.51) and (9.52) can be checked instead.
Step 3: Select interval coefficients [d_i] of the polynomial [D(s)] such that w[d_i] > w[α_i], i = 0, 1, ..., n-1, and [D(s)] is asymptotically stable. If for some i both conditions cannot be satisfied, then the problem has no solution.
Step 4: If r = 1, go to Step 5; otherwise choose a real r-vector q such that the pair ([A], [B]·q) is controllable.
Example 12 Let us consider a stabilization problem for the helicopter longitudinal-motion speed model with n = 3, r = 2 and interval matrices [A] and [B]

       [A] = | [a_11]  [a_12]  9.8 |        [B] = | [b_11]    0    |
             | [a_21]  [a_22]   0  |              |    0    [b_22] |        (9.67)
             |    0       1     0  |              |    0       0   |
where the numerical intervals for the entries [a_ij] and [b_ii] are specified.
SOLUTION: It can be shown that the pair ([A], [B]) is controllable and, for the selected vector q = (0.8, 1.2)^T, the pair ([A], [B]·q) is controllable. Calculation of the characteristic polynomial of [A] yields
The gain matrix K of the regulator then has the form

       K = q k = | 0.0634  0.8228  0.9977 |
                 | 0.0958  1.3392  1.4963 |

Thus, solved.
9.5 State Feedback for Uncertain Systems Based on Gerschgorin Theorem
Using the above lemma, an algorithm can be derived for stabilizing state feedback design for parametrically uncertain systems in state space. Assume that the system is
For each i = 1, ..., n the row and column Gershgorin conditions are

       sup([a_ii]) + sup( Σ_{j=1}^{r} [b_ij]·[k_ji] ) + Σ_{j=1, j≠i}^{n} | [a_ij] + Σ_{p=1}^{r} [b_ip]·[k_pj] | < 0        (9.70)

       sup([a_ii]) + sup( Σ_{j=1}^{r} [b_ij]·[k_ji] ) + Σ_{j=1, j≠i}^{n} | [a_ji] + Σ_{p=1}^{r} [b_jp]·[k_pi] | < 0        (9.71)
where sup([a]) = a^+ is the supremum of an interval number and |[a]| = max(|a^-|, |a^+|) is its absolute value. The inequalities (9.70) and (9.71) produce 2n linear inequalities in the k_ij which, when solved under the minimizing objective min(||K||), give the stabilizing state feedback for the interval uncertain system with its sensitivity function minimized.
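The row conditions (9.70) can be evaluated directly with elementary interval arithmetic; the column conditions (9.71) are symmetric. In the sketch below, intervals are represented as (lo, hi) tuples (a representation of our choosing), and the test data is illustrative: a True result certifies that every Gershgorin row disc of every realization of [A] + [B]K lies in the open left half plane, which is sufficient but generally conservative.

```python
def i_mul(a, k):
    """Interval [a] times a real scalar k."""
    lo, hi = a
    return (min(lo * k, hi * k), max(lo * k, hi * k))

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_sup(a):
    return a[1]                       # supremum a^+

def i_abs(a):
    return max(abs(a[0]), abs(a[1]))  # |[a]| = max(|a^-|, |a^+|)

def gershgorin_row_stable(Aint, Bint, K):
    """Check the row conditions of Eqn. (9.70) for the closed loop
    [A] + [B] K, where Aint[i][j] and Bint[i][p] are (lo, hi) pairs
    and K is a real r x n gain given as nested lists.  Note that
    sup([x] + [y]) = sup([x]) + sup([y]), so the two sup terms of
    Eqn. (9.70) are accumulated into one diagonal interval."""
    n = len(Aint)
    r = len(Bint[0])
    for i in range(n):
        diag = Aint[i][i]
        for p in range(r):
            diag = i_add(diag, i_mul(Bint[i][p], K[p][i]))
        radius = 0.0
        for j in range(n):
            if j == i:
                continue
            off = Aint[i][j]
            for p in range(r):
                off = i_add(off, i_mul(Bint[i][p], K[p][j]))
            radius += i_abs(off)
        if i_sup(diag) + radius >= 0.0:
            return False
    return True
```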
SOLUTION: Let the stabilizing state feedback gain be K = [k_1  k_2  k_3]. Applying the conditions in (9.70) and (9.71) gives the six inequalities

       0.5 + sup([-1.2, -1.1]·k_1) + |[-1.2, -1.1]·k_2| + |[-1.2, -1.1]·k_3| < 0
       k_2 + |k_1| + |k_3| < 0
       0.25 + k_3 + |k_1| + |k_2| < 0
       0.5 + sup([-1.2, -1.1]·k_1) + |2k_1| < 0
       k_2 + |[-1.2, -1.1]·k_2| + |k_2| < 0
       0.25 + k_3 + |[-1.2, -1.1]·k_3| + |k_3| < 0

Thus, solved.
Bibliography
[1] G. B. Dantzig and P. Wolfe, The decomposition algorithm for linear programs, Econometrica, Vol. 29, pp. 767-778, 1961.
[2] E. J. Davison, A method for simplifying linear dynamic systems, IEEE Trans. Auto.
Contr., AC-11, pp. 93-101, Jan. 1966
[6] E. J. Davison, A new method for simplifying large linear dynamic systems, IEEE
Trans. Auto. Contr. (Correspondence), AC-13, pp. 214-215, Apr. 1968
[7] S. Vittal Rao, S. S. Lamba, Suboptimal Control of Linear Systems via Simplified
Models of Chidambara, Proceedings of IEE, Vol. 121, No. 8,pp. 879-881 Aug. 1974
[8] Marshall, S. A., An approximate method for reducing the order of a linear system,
Control, pp.642-643, 1966.
[10] G. B. Mahapatra, A Further Note on Selecting a Low-Order System Using the Dominant Eigenvalue Concept, IEEE Trans. Auto. Contr., AC-24, No. 1, pp. 135-136, Feb. 1979
[11] Aoki M., Control of Large Scale Dynamic Systems by Aggregation, IEEE Trans.
Auto. Contr., AC-13,pp. 246-253,Jun. 1968
[13] M. Jamshidi, Large-Scale Systems: Modeling and Control, North-Holland, New York, 1983.
[15] M. J. Bosley, F. P. Lees, A survey of simple transfer function derivations from high-order state-variable models, Automatica, 8, pp. 765-775, 1974
[16] Padé, H., Sur la représentation approchée d'une fonction par des fractions rationnelles, Annales Scientifiques de l'École Normale Supérieure, Série 3 (Suppl.) 9, pp. 1-93, 1892
[17] Y. Shamash, Model reduction using the Routh stability criterion and the Padé approximation technique, Int. J. Control, Vol. 21, No. 3, pp. 475-484, 1975.
[20] Y. Shamash, Linear system reduction using Padé approximation to allow retention of dominant modes, Int. J. Contr., Vol. 21, No. 2, pp. 257-272, 1975
[21] A. S. Rao, Routh approximant state space reduced order models for systems with uncontrollable modes, IEEE Trans. Auto. Contr., Vol. AC-26, pp. 1286-1288, 1981
[22] M. F. Hutton, B. Friedland, Routh approximations for reducing order of linear, time-invariant systems, IEEE Trans. Auto. Control, AC-20, pp. 329-337, 1975
[23] M. F. Hutton, Routh Approximation Method for High-Order Linear Systems, Ph.D
Dissertation, Polytechnic Institute of New York, 1974
[25] Chyi Hwang, Kuan-Yue Wang, Optimal Routh approximations for continuous-time
systems, Int. J. Systems Sci., Vol. 15, No. 3, pp. 249-259, 1984
[28] L. S. Shieh, M. J. Goldman, Continued Fraction Expansion and Inversion of the Cauer
Third Form, IEEE Trans. Circuit Syst., Vol. CAS-21, pp. 341-345, 1974
[30] L. Fortuna, G. Nunnari, A. Gallo, Model Order Reduction Techniques with Applications in Electrical Engineering, Springer-Verlag, London, 1992
[33] Goro Obinata, Brian D. O. Anderson, Model Reduction for Control System Design,
Springer-Verlag, London, 2001
[34] Kemin Zhou, John C. Doyle, Keith Glover, Robust and Optimal Control , Prentice
Hall, Upper Saddle River, New Jersey, 1996
[35] C. S. Hsu, D. Hou, Reducing Unstable Linear Control Systems via Real Schur Transformation, Elect. Letters, Vol. 27, No. 11, pp. 984-986, May 1991.
[38] Homipal Singh, Model Reduction Techniques for Multivariable Systems, Systems and
Control Engg., IIT Bombay, 1989
[40] D. A. Wilson, Optimum Solution of Model-Reduction Problem, Proc. IEE, Vol. 117,
No. 6, pp. 1161-1165, Jun. 1970
[41] D. A. Wilson, Model Reduction for Multivariable Systems, Int. J. Control, Vol. 20,
No. 1, pp. 57-64, 1974
[42] Basanta Kumar Dash, Computer Aided Analysis of Model Reduction Techniques Based
on Error Minimization, Systems and Control Engg., IIT Bombay, 1990
[43] N. Munro, Modern Approaches to Control System Design, Control Engineering Series,
9, IEE, London, 1979
[44] Web Based Reference. See : scilib.ucsd.edu/pcchau/CENG120/txtbk/chap4canon.pdf
[45] Brian D. O. Anderson, John B. Moore, Optimal Control - Linear Quadratic Methods,
Prentice-Hall of India, New Delhi, 1989
[46] Brian D. O. Anderson, John B. Moore, Linear system optimisation with prescribed
degree of stability, Proc. IEE, Vol. 116, Issue 12, pp. 2083-2087, 1969
[48] R. E. Kalman, When is Linear Control Optimal ?, Trans. ASME, Ser. D, Vol. 86,
pp. 51-60, 1964
[49] John J. D'Azzo, Constantine H. Houpis, Linear Control System Analysis and Design - Conventional and Modern, McGraw-Hill Inc., New York, 1975
[51] D. G. Luenberger, Canonical forms for Linear Multivariable Systems, IEEE Trans.
Auto. Contr., Vol. AC-12, pp. 290-292, 1967
[53] M. Gopal, Modern Control System Theory, 2nd Edition, John Wiley, 1993.
[54] O. A. Solheim, Optimal Control Systems with Prescribed Eigenvalues, Int. J. Control,
Vol. 15, Issue. 1, pp. 143-160, 1972
[55] F. Fallside, H. Seraji, Direct design procedure for multivariable feedback systems,
Proc. IEE, Vol. 118, Issue. 6, pp. 797-801, 1971
[56] M. Valasek, N. Olgac, Efficient eigenvalue assignment for general linear MIMO systems, Automatica, Vol. 31, pp. 1605-1617, 1995
[57] N. Munro, A. Vardulakis, Pole-shifting using Output Feedback, Int. J. Control, Vol.
18, No. 6, pp. 1267-1273, 1973
[58] R. V. Patel, Comment on Pole-shifting using Output Feedback, Int. J. Control, Vol.
20, No. 1, pp. 171-172, 1974
[59] R. M. Pringle, A. A. Rayner, Generalized Inverse Matrices with Applications to Statistics, Griffin, London, 1971
[60] H. Seraji, On Pole-Shifting using Output Feedback, Int. J. Control, Vol. 20, No. 5,
pp. 721-726, 1974
[62] Pradeep Misra, R. V. Patel, Numerical Algorithms for Eigenvalue Assignment by
Constant and Dynamic Output Feedback, IEEE Trans. Auto. Contr., AC-34, No. 6,
pp. 579-588, June 1989
[65] Jeremy G. VanAntwerp, Richard D. Braatz, A tutorial on linear and bilinear matrix
inequalities , Journal of Process Control ,Vol. 10, Issue 4, pp. 363-385, Aug. 2000
[66] Hirokazu Anai, Solving LMI and BMI Problems by Quantifier Elimination, Electronic Proceedings of IMACS ACA'98 - the 4th International IMACS Conference on Applications of Computer Algebra. See also http://www.math.unm.edu/ACA/1998/sessions/control/anai/anai.html
[67] H. Werner, Generalized Sampled-Data Hold Functions for Robust Multivariable Tracking and Disturbance Rejection, Proc. 36th IEEE Conference on Decision and Control, San Diego, USA, 1997, pp. 2055-2060.
[68] H. Werner, Robust Tracking and Disturbance Rejection - Non Dynamic Multirate Output Feedback and Observer Based Control, 4th European Control Conference, Brussels, Belgium, 1997.
[70] H. Werner, Multimodel Robust Control by Fast Output Sampling- An LMI Approach,
Automatica, Issue 12,Vol. 34,Dec. 1998, pp 1625-1630
[75] P. Gahinet, A. Nemirovski, A. J. Laub, M. Chilali, LMI Control Toolbox for Use with MATLAB, The MathWorks Inc., Natick, MA, 1995
[76] T. Sunaga, Theory of an interval algebra and its application to numerical analysis,
RAAG Memoirs, 2 , pp. 29-46, 1958.
[77] R.E. Moore, Interval Arithmetic and Automatic Error Analysis in Digital Computing,
Thesis, Stanford University, October 1962.
[78] R.E. Moore, Interval Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1966.
[79] Götz Alefeld, Günter Mayer, Interval analysis: theory and applications, Journal of Computational and Applied Mathematics, 121, pp. 421-464, 2000
[80] Hervé Chapellat, S. P. Bhattacharyya, Lecture Notes on Robust Stability and Control of Interval Dynamic Systems, Dept. of Electrical Engg., Texas A&M Univ.
[85] Ye. Smagina, Irina Brewer, Using Interval Arithmetic for Robust State Feedback
Design, Systems & Control Letters, Vol. 46, pp. 187-194, 2002
Appendix A
Obtain a third order model that is composed of the average of consecutive modes of
the original system.
7. Use aggregation by continued fraction to find a second-order approximation for the system

       X' = | -2   0   0   0 |       | 1 |
            |  1  -4   3   0 |  X +  | 1 | u
            |  0   3  -4   0 |       | 1 |
            |  0   0   1  -6 |       | 1 |
7. Obtain the second-order reduced model using parameters for the system

       G(s) = (s^2 + 4s + 3) / (s^3 + 5s^2 + 2s - 8)
8. Find the first-order model for the second-order system using the canonical form

       x' = |  0   1 | x + | 0 | u
            | -2  -3 |     | 1 |

       y = [1  0] x
9. Find the second-order reduced model, using Routh approximants, for the system

       X' = |    0     1     0    0 |       | 0 |
            |    0     0     1    0 |  X +  | 0 | u
            |    0     0     0    1 |       | 0 |
            | -120  -180  -102  -18 |       | 1 |

       y = [1200  900  248  14] X
10. Illustrate a technique to determine the optimal order for a reduced-order model through Routh approximants for the above system.
11. Find the second-order approximation for the following system using the second Cauer form

       G(s) = (1441.53s^3 + 78319s^2 + 525286.125s + 607693.25) /
              (s^7 + 112.04s^6 + 3755.92s^5 + 39736.73s^4 + 363650.56s^3 + 759894.19s^2 + 683656.25s + 617497.375)
3. Design a static output feedback controller for the system

       X' = | 0  1  0 |       | 0 |
            | 0  0  1 |  X +  | 0 | u
            | 0  0  3 |       | 1 |

       y = | 0  1  0 | X
           | 1  1  0 |
2. What are the conditions for the existence of a periodic output feedback gain?
3. How do you calculate the output injection gain for the system
4. Derive the expression for the lifted output in terms of state and input in the context
of fast output sampling feedback technique.
5. Derive the expression for the dynamics of the error input in the context of fast output
sampling technique.
6. For a three-model case, with each model controllable, observable and of 10th order, what are the conditions for the existence of a robust periodic output feedback gain?
7. Compute the fast output sampling feedback control input for the system

       X(k+1) = | 0  1 | X(k) + | 0 | u(k)
                | 4  4 |        | 1 |

       y(k) = [1  0] X(k)

   Find the fast output sampling gain L such that the closed-loop eigenvalues are placed exactly at λ = {0.6, 0.7}. The output sampling interval is Δ = τ/2.
9. Assuming that the state feedback F is designed so that the closed-loop matrix (Φ + ΓF) is nonsingular, derive the formula for the fast output sampling controller gain for a system
2. Using the above result, find the state feedback gain F that would assure closed-loop stability of the system

       X' = |    0        1        0        0     |       | 0 |
            |    0        0        1        0     |  X +  | 0 | u
            |    0        0        0        1     |       | 0 |
            | [-1, b0]  [-4, -3]  [b2, 6]  [-5, -4] |     | 1 |