
OSE801 Engineering System Identification

Lecture 11: System Realization


1 Introduction

System realization is defined in Lecture 1 as the determination of the model
characteristics (Ā, B̄, C) in the discrete state space equation from the measured
excitation (input) and measured response (output) data. There have been two
approaches: frequency-domain and time-domain system realizations. This chapter
will focus on the latter, often called time-domain parameter determination. As
alluded to in the previous chapters, the basis of system realization is modern
state space systems theory, which was originally developed for building models
of complex systems as well as for automatic control.

Two fundamental aspects of systems theory that we will employ for system
realization are the so-called observability (measurability) and controllability
(excitability). Observability is the ability of the measured output to capture
the response characteristics that evolve from an initial state x(0).
Controllability pertains to the extent to which a train of excitations u(t) can
excite the system response components x(t). Because of their relevance to
system realization, we review them first below before moving on to the
description of system realization.

2 Observability and Controllability Operators

Let us recall the discrete state space model (6.36):

x(k + 1) = A x(k) + B u(k)
y(k) = C x(k) + D u(k)                                          (1)

where we have dropped the bar ( ¯ ) symbol on A and B for notational simplicity.
First, if there were no excitation except the initial condition x(0) ≠ 0, the
corresponding output y at the successive discrete time steps t = 0, ∆t, 2∆t,
..., (p − 1)∆t can be expressed as

\[
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{p-1} \end{bmatrix}
= V_p\, x_0,
\qquad
V_p = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-1} \end{bmatrix} \quad (mp \times n)
\qquad (2)
\]

where matrix Vp (mp × n) is called an observability matrix. It will be shown
that, together with the controllability matrix we are about to derive, the ob-
servability matrix plays a crucial role in two ways: it facilitates the formulation
of the minimal realization and at the same time becomes part of the factored
Hankel matrix needed for realization computations.

Now, if we apply a train of excitations {u(0), u(1), u(2), ..., u(s − 1)}, the
internal state vector x_s can be written as

\[
x_s = A^s x_0 + W_s
\begin{bmatrix} u_{s-1} \\ u_{s-2} \\ u_{s-3} \\ \vdots \\ u_0 \end{bmatrix},
\qquad
W_s = \begin{bmatrix} B & AB & A^2B & \cdots & A^{s-1}B \end{bmatrix} \quad (n \times rs)
\qquad (3)
\]

where Ws is called a controllability matrix.
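
As a concrete illustration of (2) and (3), both matrices can be stacked directly from a given triplet; the following is a minimal numpy sketch (the function arguments A, B, C stand for any compatible state-space matrices, not a specific test article):

import numpy as np

def observability_matrix(A, C, p):
    # V_p = [C; CA; CA^2; ...; CA^(p-1)], of size (m*p x n), cf. eq. (2)
    blocks = [C]
    for _ in range(p - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def controllability_matrix(A, B, s):
    # W_s = [B, AB, A^2 B, ..., A^(s-1) B], of size (n x r*s), cf. eq. (3)
    blocks = [B]
    for _ in range(s - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)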

In order to appreciate the roles of the observability and controllability
matrices, let us ask two questions:

(1) Under what condition can the initial state x0 ≠ 0 be uniquely reconstructed
    from a set of measured outputs ỹpT = (y0T, y1T, y2T, ..., y(p−1)T), if there
    were no excitation?
(2) If the system was initially at rest, namely x0 = 0, what is the condition
    that ensures the initial zero state arrives at the desired state xs when a
    train of inputs ũsT = (u(s−1)T, u(s−2)T, u(s−3)T, ..., u0T) is applied to the
    system?

In addressing the first question, we note that, if the dimension of x0 is
(n × 1), then the row vectors of the observability matrix Vp must span the whole
n-dimensional space in order for x0 to be uniquely reconstructed from the output
ỹp.

With regard to the second question, suppose that the rank of A is n. This means
that xs will consist of the distinct n response components if and only if the
column vectors of the controllability matrix Ws span the same n-dimensional
space. It should be noted that, while the initial state x0 can be uniquely
reconstructed provided the row vectors of Vp span the n-dimensional space, there
exists in general no unique set of input vectors that satisfies the
controllability requirement. To see this, we observe with x0 = 0:

xs = Ws ũs ,    (n × 1) = (n × rs)(rs × 1)                      (4)

Since in general rs > n, ũs is given by

ũs = Ws+ xs + Nw α (5)

where Nw is a null space basis of Ws and α is an arbitrary vector. Therefore,
ũs cannot be uniquely determined.

On the other hand, as long as the row rank of Vp is n, one obtains uniquely

x0 = [VpT Vp ]−1 VpT ỹp (6)
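
A small sketch of (6) (assuming the observability_matrix helper from the earlier sketch and an already known pair A, C): the least-squares solver returns the same x0 as the normal-equation form.

import numpy as np

def reconstruct_initial_state(A, C, outputs):
    # outputs: [y_0, y_1, ..., y_(p-1)] measured with zero excitation.
    # Requires rank(V_p) = n, i.e. the row vectors of V_p span the n-space.
    p = len(outputs)
    Vp = observability_matrix(A, C, p)
    y_tilde = np.concatenate([np.atleast_1d(y) for y in outputs])
    x0, *_ = np.linalg.lstsq(Vp, y_tilde, rcond=None)   # same as (Vp^T Vp)^-1 Vp^T y
    return x0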

To gain further insight into the properties of the controllability matrix Ws,
consider a single-point excitation so that B becomes a column vector, say b.
With Ψ−1 A Ψ = Λ (n × n), the controllability matrix becomes:

\[
W_s = \begin{bmatrix} B & AB & \cdots & A^{s-1}B \end{bmatrix}
    = \begin{bmatrix} \Psi\Psi^{-1}B & A\Psi\Psi^{-1}B & \cdots & A^{s-1}\Psi\Psi^{-1}B \end{bmatrix}
    = \Psi \begin{bmatrix} \Psi^{-1}B & \Lambda\Psi^{-1}B & \cdots & \Lambda^{s-1}\Psi^{-1}B \end{bmatrix}
\]
\[
= \Psi
\begin{bmatrix}
 b_1 & \lambda_1 b_1 & \cdots & \lambda_1^{s-1} b_1 \\
 b_2 & \lambda_2 b_2 & \cdots & \lambda_2^{s-1} b_2 \\
 \vdots & \vdots & & \vdots \\
 b_i & \lambda_i b_i & \cdots & \lambda_i^{s-1} b_i \\
 \vdots & \vdots & & \vdots \\
 b_n & \lambda_n b_n & \cdots & \lambda_n^{s-1} b_n
\end{bmatrix},
\qquad
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{bmatrix} = \Psi^{-1} B
\qquad (7)
\]

Physically, bi = 0 corresponds to the nodal line of the i-th vibrational mode.
Hence, if the control is applied at one of the vibrational nodal lines, that
mode is not controllable, and it introduces a rank-one loss in the
controllability matrix. Careful planning in the selection of excitation points
is mandatory in order to excite the desired set of modes of interest. Once
again, as stated previously, the ramifications of the observability and
controllability matrices in system realization are both their theoretical basis
for providing a minimal-order realization and their computational utility.
These will be discussed below.
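
The rank-one loss caused by an actuator placed on a nodal line is easy to demonstrate numerically; a small sketch with an assumed two-mode diagonal system (the numerical values are placeholders):

import numpy as np

# Two-mode diagonal (modal) system; zeroing an entry of Psi^-1 b mimics
# placing the actuator on that mode's nodal line, cf. eq. (7).
Lam = np.diag([0.9, 0.7])
b_both = np.array([[1.0], [0.5]])   # excites both modes
b_node = np.array([[1.0], [0.0]])   # second mode sits on a nodal line

def ctrb(A, B, s):
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(s)])

print(np.linalg.matrix_rank(ctrb(Lam, b_both, 4)))   # 2: fully controllable
print(np.linalg.matrix_rank(ctrb(Lam, b_node, 4)))   # 1: rank-one loss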

3 The Hankel Matrix

The basic building block for a system realization is the Hankel matrix Hps(0)
defined as

\[
H_{ps}(0) = V_p\, W_s =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-1} \end{bmatrix}
\begin{bmatrix} B & AB & A^2B & \cdots & A^{s-1}B \end{bmatrix}
\qquad (8)
\]
\[
(mp \times rs) = (mp \times n)\,(n \times rs)
\]

To see that the above Hankel matrix is the fundamental building block, we recall
that, given the measured input and output, one can obtain the FRFs or IRFs. A
key step is thus to recognize that Hps(0) can be constructed in terms of the
Markov parameters {Y(1), Y(2), ..., Y(N)}. To this end, we expand the Hankel
matrix Hps(0) to read:

 
\[
H_{ps}(0) =
\begin{bmatrix}
 CB & CAB & CA^2B & \cdots & CA^{s-1}B \\
 CAB & CA^2B & CA^3B & \cdots & CA^{s}B \\
 CA^2B & CA^3B & CA^4B & \cdots & CA^{s+1}B \\
 \vdots & \vdots & \vdots & & \vdots \\
 CA^{p-1}B & CA^{p}B & CA^{p+1}B & \cdots & CA^{s+p-2}B
\end{bmatrix}
\quad \text{(Mathematical Expression)}
\]
\[
=
\begin{bmatrix}
 Y(1) & Y(2) & Y(3) & \cdots & Y(s) \\
 Y(2) & Y(3) & Y(4) & \cdots & Y(s+1) \\
 Y(3) & Y(4) & Y(5) & \cdots & Y(s+2) \\
 \vdots & \vdots & \vdots & & \vdots \\
 Y(p) & Y(p+1) & Y(p+2) & \cdots & Y(p+s-1)
\end{bmatrix}
\quad \text{(Measured Data)}
\qquad (9)
\]

Note that the first expression in the above equation consists of the system pa-
rameters (A, B , C ) that are to be determined, whereas the second expression
consists of the Markov parameters that can be extracted from the measured
data.
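
The assembly of (9), and of its shifted generalization introduced next, is purely mechanical once the Markov parameters are available; a numpy sketch (each Y[i] is assumed to be an (m x r) array standing for Y(i)):

import numpy as np

def block_hankel(Y, p, s, k=0):
    # H_ps(k): block (i, j) equals Y(k + i + j + 1); needs len(Y) > k + p + s - 1.
    rows = [np.hstack(Y[k + i + 1 : k + i + 1 + s]) for i in range(p)]
    return np.vstack(rows)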

A generalization of the basic Hankel matrix Hps(0) that is also needed in system
realization can be expressed as

\[
H_{ps}(k) = V_p\, A^k\, W_s =
\begin{bmatrix}
 Y(k+1) & Y(k+2) & Y(k+3) & \cdots & Y(k+s) \\
 Y(k+2) & Y(k+3) & Y(k+4) & \cdots & Y(k+s+1) \\
 Y(k+3) & Y(k+4) & Y(k+5) & \cdots & Y(k+s+2) \\
 \vdots & \vdots & \vdots & & \vdots \\
 Y(k+p) & Y(k+p+1) & Y(k+p+2) & \cdots & Y(k+p+s-1)
\end{bmatrix}
\qquad (10)
\]

which shows that a Hankel matrix made of the Markov parameters can be
factored in terms of the observability matrix Vp , the system dynamics op-
erator A and the controllability matrix Ws . This intrinsic factored form of
Hankel matrices is exploited in the development of computational algorithms
for system realization.

4 Minimum Realization

Suppose you are given a set of test data and asked to perform system
realization. Questions that come to every analyst's mind would be: how do I
determine the order of the realization model? What modes should be included in
the model? And how good would the realization model be? This section addresses
the first of these questions.

During the early days of modal (not model!) testing, the lack of a criterion for
determining the model order and of a modal accuracy criterion often led to
realization models of different orders and varying modal components from the
same test data, depending upon the skills and tools available to each analyst.
For complex problems, therefore, a series of model syntheses was required before
a consensus could be reached.

The problem of minimal realization was perhaps first solved by Ho and Kalman in
1965, who presented what is now known as the minimal realization theorem for
noise-free data, which states: there exists a minimum size of the system
operator A for a given pair of input and output. Thus, a realization whose size
is less than the minimal realization order is deficient. On the other hand, a
realization whose size is greater than the minimum realization order will
contain a set of dependent (superfluous) internal state vector components.
To see this, we recall the Laplace-transformed transfer function:

H (s) = C (sI − A)−1 B (11)

Given the transfer function, a minimal realization (A, B, C) consists of the
smallest-size A matrix. This is possible only if the ranks of the observability
and controllability matrices, Vp (mp × n) and Ws (n × rs), are both n.

Theorem of Ho and Kalman: A realization {A (n × n), B (n × r), C (m × n)} is
minimal if and only if it is observable and controllable. Hence, a
non-observable or non-controllable realization cannot be minimal.

Suppose a realization {Ā (n̄ × n̄), B̄ (n̄ × r), C̄ (m × n̄)} satisfies the same
transfer function H(s), but subject to

n̄ < n                                                          (12)

This means that {A (n × n), B (n × r), C (m × n)} cannot be a minimal
realization.

First, if the rank of W̄s < n, then one can perform a similarity transformation
T such that

\[
\hat{A} = T^{-1}\,\bar{A}\,T =
\begin{bmatrix}
\hat{A}_c\,(\hat{n}_c \times \hat{n}_c) & \hat{A}_{12}\,(\hat{n}_c \times (n-\hat{n}_c)) \\
0\,((n-\hat{n}_c) \times \hat{n}_c) & \hat{A}_{22}\,((n-\hat{n}_c) \times (n-\hat{n}_c))
\end{bmatrix},
\]
\[
\hat{B} = \begin{bmatrix} \hat{B}_c\,(\hat{n}_c \times r) \\ 0\,((n-\hat{n}_c) \times r) \end{bmatrix},
\qquad
\hat{C}^T = \begin{bmatrix} \hat{C}_c^T \\ \hat{C}_2^T \end{bmatrix}
\qquad (13)
\]

Second, if the rank of V̄p < n, then employing a similar step to that used in
deriving the above non-controllable case, we obtain

\[
\hat{A} =
\begin{bmatrix}
\hat{A}_o\,(\hat{n}_o \times \hat{n}_o) & 0\,(\hat{n}_o \times (n-\hat{n}_o)) \\
\hat{A}_{21}\,((n-\hat{n}_o) \times \hat{n}_o) & \hat{A}_{22}\,((n-\hat{n}_o) \times (n-\hat{n}_o))
\end{bmatrix},
\qquad
\hat{B} = \begin{bmatrix} \hat{B}_o \\ \hat{B}_2 \end{bmatrix},
\qquad
\hat{C}^T = \begin{bmatrix} \hat{C}_o^T \\ 0 \end{bmatrix}
\qquad (14)
\]

However, since the two realizations come from the same transfer function H(s),
one must have

H(s) = Ĉc (sÎc − Âc )−1 B̂c = Ĉo (sÎo − Âo )−1 B̂o = C (sI − A)−1 B (15)

It is noted that a non-controllable (but observable) system (Ā, B̄, C̄) becomes
both controllable and observable for a reduced-size system n̄c ≤ n. Likewise, a
non-observable system becomes observable and controllable for a reduced-size
system n̄o ≤ n. Therefore, if (Ā, B̄, C̄) is controllable and observable, one
must have

n̄c = n̄o = n                                                    (16)

This proves the Theorem of Minimum Realization.

An important practical consequence of the minimum realization theorem is that,
if the data are free of noise, the smallest-size A matrices of all the possible
realizations must be of the same order, and they are related by similarity
transformations.

A remaining question: what if the rank of the observability matrix is different
(smaller or larger) from that of the controllability matrix? It is precisely for
this reason that (13) and (14) are derived. For example, if n̂c < n̂o, then one
must employ the reduction given by (13). When n̂o < n̂c, one must employ (14).

Thus we have demonstrated that the system-theory-based realization offers a
fundamental basis for a minimal realization, at least for noise-free data.

5 An Example: Kalman’s problem

Let us consider a single input/single output sequence given by

y = {1, 1, 1, 2, 1, 3, 2, . . .}, u = {1, 0, 0, 0, 0, 0, 0, . . .} (17)

and find a minimal realization.

Step 1: Realization Since the input is a unit impulse, the output becomes
the Markov parameters. Hence, we can construct a series of Hankel matrices
as follows:

\[
H_{22}(1) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad
H_{33}(0) = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 2 \\ 1 & 2 & 1 \end{bmatrix}, \qquad
H_{44}(0) = \begin{bmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 2 & 1 \\ 1 & 2 & 1 & 3 \\ 2 & 1 & 3 & 2 \end{bmatrix}
\qquad (18)
\]

where H22 is taken from {1, 1, 2, 1, . . .} instead of from the beginning of the
series. Since |H22| ≠ 0, |H33| ≠ 0 and |H44| = 0, we conclude that the rank of
the system is 3. In other words, the ranks of the observability and
controllability matrices are three.
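
This rank test is straightforward to reproduce (a numpy sketch; note that the example's Hankel matrices are built starting from Y(0), the first sample of the impulse response):

import numpy as np

# With a unit impulse input, the output sequence IS the Markov parameter sequence.
Y = [1., 1., 1., 2., 1., 3., 2.]

H33 = np.array([Y[i:i + 3] for i in range(3)])   # rows start at Y(0), Y(1), Y(2)
H44 = np.array([Y[i:i + 4] for i in range(4)])

print(np.linalg.matrix_rank(H33), np.linalg.matrix_rank(H44))   # 3 3 -> minimal order is 3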

A singular value decomposition of H33(0) yields:

\[
H_{33}(0) = P\,S\,Q^T =
\begin{bmatrix} 0.4597 & 0.0000 & -0.8881 \\ 0.6280 & -0.7071 & 0.3251 \\ 0.6280 & 0.7071 & 0.3251 \end{bmatrix}
\cdot
\begin{bmatrix} 3.7321 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.2679 \end{bmatrix}
\cdot
\begin{bmatrix} 0.4597 & 0.6280 & 0.6280 \\ 0.0 & 0.7071 & -0.7071 \\ -0.8881 & 0.3251 & 0.3251 \end{bmatrix}
\qquad (19)
\]

Now, if H33(0) = V3 W3, then

\[
H_{33}(1) = \begin{bmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix}
\qquad (20)
\]

must possess the form H33(1) = V3 A W3 from (10).

Since V3 = P S^{1/2} and W3 = S^{1/2} Q^T, A can be determined from

\[
A_1 = V_3^{-1}\, H_{33}(1)\, W_3^{-1} = S^{-1/2}\, P^T H_{33}(1)\, Q\, S^{-1/2} =
\begin{bmatrix} 1.2603 & -0.3981 & -0.2041 \\ 0.3981 & -1.5 & -0.7691 \\ -0.2041 & 0.7692 & -0.7605 \end{bmatrix}
\qquad (21)
\]
(21)

and finally B and C are obtained from:

\[
B_1 = S^{1/2}\, Q^T \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.8881 \\ 0.0 \\ -0.4597 \end{bmatrix}
\qquad (22)
\]
\[
C_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} P\, S^{1/2} = \begin{bmatrix} 0.8881 & 0.0 & -0.4597 \end{bmatrix}
\qquad (23)
\]
 
A check with the above realization shows that, with x0T = (0, 0, 0), it
reconstructs the output.
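
The whole of Step 1 can be reproduced in a few lines of numpy (a sketch; the SVD routine may return singular vectors with opposite signs to those in (19), so A1, B1, C1 can differ by a sign pattern, but the reconstructed Markov parameters and the eigenvalues are unaffected):

import numpy as np

H0 = np.array([[1., 1., 1.], [1., 1., 2.], [1., 2., 1.]])   # H33(0)
H1 = np.array([[1., 1., 2.], [1., 2., 1.], [2., 1., 3.]])   # H33(1)

P, s, Qt = np.linalg.svd(H0)
S_half = np.diag(np.sqrt(s))
V3 = P @ S_half              # observability factor
W3 = S_half @ Qt             # controllability factor

A1 = np.linalg.inv(V3) @ H1 @ np.linalg.inv(W3)   # eq. (21)
B1 = W3[:, :1]                                    # first column of W3, eq. (22)
C1 = V3[:1, :]                                    # first row of V3, eq. (23)

# The Markov parameters C1 A1^k B1 reproduce the measured sequence 1, 1, 1, 2, 1, 3, 2.
Ak = np.eye(3)
markov = []
for _ in range(7):
    markov.append((C1 @ Ak @ B1).item())
    Ak = Ak @ A1
print(np.round(markov, 6))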

Step 2: Uniqueness of Realization The question is whether the preceding
realization is the only possible one. There is a second way of realizing the
above sequence. Instead of the singular value decomposition employed above, one
can invoke the well-known LU-decomposition, as H33(0) is symmetric:

\[
H_{33}(0) = L\,U =
\begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}
\cdot
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad (24)
\]

so that S = I.

Following a parallel derivation, we obtain

\[
A_2 = \begin{bmatrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & -1 \end{bmatrix},
\qquad
B_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\qquad
C_2 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}
\qquad (25)
\]

One can verify that this second realization, although the sequences of x are
completely different, gives the same output y.

Step 3: Eigenvalues and Similarity Transformation The eigenvalues of the two
A matrices, A1 and A2, are the same, as they should be:

Λ = diag(1.2056, −1.1028 + 0.6655j, −1.1028 − 0.6655j)          (26)

Of course, their eigenvector bases are different; they are related by a
similarity transformation φ1 = Tφ2, so that the two realizations are related
according to

(A1 , B1 , C1 ) = (T−1 A2 T, T−1 B2 , C2 T) (27)
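
The shared spectrum of (26) is again easy to check numerically (a sketch recomputing A1 as in Step 1 and comparing it with A2):

import numpy as np

H0 = np.array([[1., 1., 1.], [1., 1., 2.], [1., 2., 1.]])
H1 = np.array([[1., 1., 2.], [1., 2., 1.], [2., 1., 3.]])
P, s, Qt = np.linalg.svd(H0)
Sh = np.diag(np.sqrt(s))
A1 = np.linalg.inv(P @ Sh) @ H1 @ np.linalg.inv(Sh @ Qt)

A2 = np.array([[1., 1., 0.], [0., -1., 1.], [1., 0., -1.]])

print(np.sort_complex(np.linalg.eigvals(A1)))   # ~ 1.2056, -1.1028 +/- 0.6655j
print(np.sort_complex(np.linalg.eigvals(A2)))   # the same spectrum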

6 Eigensystem Realization Algorithms

The minimum realization theory of Ho and Kalman is valid for noise-free data. In
practice there are two major factors that influence realization computations:
incompleteness of the measured data and data contaminated with noise. The
incompleteness of measured data implies that the collected data are either
insufficient or may not satisfy the periodicity requirement (7.9) as well as the
asymptotic properties of (7.10). One possible way to deal with incomplete and
noisy measured data is to perform overdetermined matrix computations, which
necessarily involve singular value decompositions. In the structural engineering
community it was Juang and Pappa who in 1984 extended the Ho-Kalman algorithm to
make it suitable for handling structural test data.

A key idea used in the Juang-Pappa eigensystem realization algorithm (ERA) is to
utilize the properties of the singular-value-decomposed form of the
overdetermined Hankel matrix. A singular value decomposition of a rectangular
matrix Hps(0) (mp × rs) can be expressed as

\[
H_{ps}(0) = P_p\, S_{ps}\, Q_s^T,
\qquad
S_{ps} = \begin{bmatrix} S_{nn} & 0_{nz} \\ 0_{zn} & S_{zz} \end{bmatrix}
\qquad (28)
\]

where S_zz contains the singular values that are close to zero, which can be
ignored unless some of them represent the physical rigid-body modes of the
structure.

It should be mentioned that the task of truncating the zero eigenvalues is not
as straightforward as it seems. This is because the eigenvalues change
continuously in most large-scale test data, and the lowest structural eigenvalue
and the zero eigenvalues get blurred, especially for very flexible or very large
structures. Nevertheless, it is this truncation concept that constitutes a
central aspect of Juang and Pappa's ERA.
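
In practice the truncation order is often chosen by inspecting the normalized singular value decay; a simple threshold-based sketch (the tolerance tol is an assumed, problem-dependent choice, and rigid-body or very low-frequency modes may require keeping singular values that such a rule would discard):

import numpy as np

def choose_order(H0, tol=1e-8):
    # Keep the singular values larger than tol times the largest one.
    svals = np.linalg.svd(H0, compute_uv=False)
    n = int(np.sum(svals > tol * svals[0]))
    return n, svals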

Once the eigenvalue truncation is decided, we partition Pp and QsT as follows:

\[
P_p = \begin{bmatrix} P_{pn}(1{:}mp,\ 1{:}n) & P_{pz}(1{:}mp,\ 1{:}z) \end{bmatrix},
\qquad
Q_s^T = \begin{bmatrix} Q_{sn}^T(1{:}n,\ 1{:}rs) \\ Q_{sz}^T(1{:}z,\ 1{:}rs) \end{bmatrix}
\qquad (29)
\]

Hence, the truncated form of the Hankel matrix now becomes

\[
H_{ps}(0) \approx P_{pn}\, S_{nn}\, Q_{sn}^T = V_p\, W_s
\qquad (30)
\]

from which the observability and controllability matrices are obtained as
follows:

\[
V_p = P_{pn}\, S_{nn}^{1/2},
\qquad
W_s = S_{nn}^{1/2}\, Q_{sn}^T
\qquad (31)
\]

6.1 Computation of A

Now, in order to compute the system operator A, we recall the Hankel matrix
in the form of
Hps (1) = Vp A Ws (32)

The second computational step is to form the generalized inverses of the
observability and controllability matrices:

\[
V_p^{+} = S_{nn}^{-1/2}\, P_{pn}^T,
\qquad
W_s^{+} = Q_{sn}\, S_{nn}^{-1/2}
\qquad (33)
\]

with the identities

PTpn Ppn = QTsn Qsn = I (1 : n, 1 : n) (34)

Notice that the generalized inverses of the observability and controllability
matrices, Vp+ and Ws+, satisfy

Vp+ Vp Vp+ = Vp+ , Vp Vp+ Vp = Vp


(35)
Ws+ Ws Ws+ = Ws+ , Ws Ws+ Ws = Ws

Thus, A can be obtained by the following computations:

\[
V_p^{+}\, H_{ps}(1)\, W_s^{+}
= S_{nn}^{-1/2} P_{pn}^T \bigl(P_{pn} S_{nn}^{1/2}\bigr)\, A\, \bigl(S_{nn}^{1/2} Q_{sn}^T\bigr) Q_{sn} S_{nn}^{-1/2}
= A
\quad\Longrightarrow\quad
A = V_p^{+}\, H_{ps}(1)\, W_s^{+}
\qquad (36)
\]

6.2 Computations of B and C

To extract B and C, we notice that the observability and controllability
matrices are given by

\[
V_p = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-1} \end{bmatrix} (1{:}mp,\ 1{:}n)
\;\approx\; P_{pn}(1{:}mp,\ 1{:}n)\, S_{nn}^{1/2}
\]
\[
W_s = \begin{bmatrix} B & AB & A^2B & \cdots & A^{s-1}B \end{bmatrix} (1{:}n,\ 1{:}rs)
\;\approx\; S_{nn}^{1/2}\, Q_{sn}^T(1{:}n,\ 1{:}rs)
\qquad (37)
\]

Therefore, C can be obtained by extracting the first (1 : m, 1 : n) block
entries from Vp. Similarly, B can be obtained by extracting the first
(1 : n, 1 : r) entries from Ws:

\[
B = S_{nn}^{1/2}\, Q_{sn}^T(1{:}n,\ 1{:}r),
\qquad
C = P_{pn}(1{:}m,\ 1{:}n)\, S_{nn}^{1/2}
\qquad (38)
\]
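
Collecting (28) through (38), the basic ERA step can be sketched compactly as follows (a sketch: the Markov parameters Y are assumed to be measured (m x r) arrays with Y(1) = Y[1] and so on, and the truncation order n is assumed to be chosen from the singular value decay as discussed above):

import numpy as np

def era(Y, p, s, n):
    # Basic ERA sketch; requires len(Y) >= p + s + 1 Markov parameters.
    def hankel(k):
        # H_ps(k) of eq. (10): block (i, j) is Y(k + i + j + 1)
        return np.vstack([np.hstack(Y[k + i + 1: k + i + 1 + s]) for i in range(p)])

    H0, H1 = hankel(0), hankel(1)

    # Truncated singular value decomposition, eqs. (28)-(29).
    P, svals, Qt = np.linalg.svd(H0, full_matrices=False)
    Pn, Qtn = P[:, :n], Qt[:n, :]
    Sh = np.diag(np.sqrt(svals[:n]))
    Shi = np.diag(1.0 / np.sqrt(svals[:n]))

    m, r = Y[1].shape
    A = Shi @ Pn.T @ H1 @ Qtn.T @ Shi    # eq. (36)
    B = (Sh @ Qtn)[:, :r]                # eq. (38)
    C = (Pn @ Sh)[:m, :]                 # eq. (38)
    return A, B, C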

So far we have described the basic form of the eigensystem realization
algorithm. Experience with the basic ERA indicated that it often has to deal
with nonsquare Hankel matrices, which can encounter numerical robustness
difficulties. In addition, the accuracy of the mode shapes that are contained in
C or B leaves much to be desired. One improvement over the basic ERA is to work
with a symmetrized form of Hankel matrices. This is discussed below.

6.3 Realization Based on Product of Hankel Matrices

Instead of performing system realizations with nonsquare Hankel matrices, one
can work with symmetric overdetermined matrices. The two symmetrized
overdetermined matrices most often used can be constructed as

\[
R_{HH^T} = H(0)\, H^T(0) = (P_p S_{nn} Q_s^T)(Q_s S_{nn} P_p^T) = P_p S_{nn}^2 P_p^T
\]
\[
R_{H^T H} = H^T(0)\, H(0) = (Q_s S_{nn} P_p^T)(P_p S_{nn} Q_s^T) = Q_s S_{nn}^2 Q_s^T
\qquad (39)
\]

where the singular value decomposition of Hps(0) and the orthogonality
identities of (34) have been used.

The above two matrices, RHH T and RH T H, can be expressed in terms of the
observability and controllability matrices (31) as

\[
R_{HH^T} = V_p\, S_{nn}\, V_p^T, \qquad (mp \times mp) = (mp \times n)(n \times n)(n \times mp)
\qquad (40)
\]
\[
R_{H^T H} = W_s^T\, S_{nn}\, W_s, \qquad (rs \times rs) = (rs \times n)(n \times n)(n \times rs)
\qquad (41)
\]

Realizations based on the preceding symmetrized matrices are known by various
names: ERA with data correlation (ERA/DC) proposed by Juang and co-workers, the
Principal Hankel Component Algorithm by Kung et al., the Q-Markov Algorithm by
Skelton and his co-workers, among others. A major advantage of using the
symmetrized matrices, in addition to their computational advantages in carrying
out singular value decompositions, is their inherent noise-filtering
characteristic. This can be seen by examining the structure of the matrix,
e.g., RHH T :

\[
R_{HH^T} =
\begin{bmatrix}
 Y(1) & Y(2) & Y(3) & \cdots & Y(s) \\
 Y(2) & Y(3) & Y(4) & \cdots & Y(s+1) \\
 Y(3) & Y(4) & Y(5) & \cdots & Y(s+2) \\
 \vdots & \vdots & \vdots & & \vdots \\
 Y(p) & Y(p+1) & Y(p+2) & \cdots & Y(p+s-1)
\end{bmatrix}
\times
\begin{bmatrix}
 Y^T(1) & Y^T(2) & Y^T(3) & \cdots & Y^T(p) \\
 Y^T(2) & Y^T(3) & Y^T(4) & \cdots & Y^T(p+1) \\
 Y^T(3) & Y^T(4) & Y^T(5) & \cdots & Y^T(p+2) \\
 \vdots & \vdots & \vdots & & \vdots \\
 Y^T(s) & Y^T(s+1) & Y^T(s+2) & \cdots & Y^T(p+s-1)
\end{bmatrix}
\qquad (42)
\]

It is seen from the above expression that RHH T consists of correlation func-
tions of the Markov parameters. Therefore, if the noises present in the Markov
parameters are uncorrelated, RHH T should filter out the noises much more
effectively than performing realizations with H(0) alone. Specifically, since
RHH T works on the Markov parameter product form Y YT , one may conjec-
ture it would filter the output noises more effectively than the input noises.
On the other hand, since RH T H involves the form YT Y, it would filter the
input noises more effectively than the output noises.

Apart from the possible filtering role, experience indicates that either form
would capture the eigenvalues comparably well, including realizations that use
H(0) alone as discussed in the previous section. Thus, as an accurate
determination of the system mode shapes is more critical and the mode shapes are
typically extracted from C, one should evaluate which matrix would yield a more
accurate C. However, as the number of sensors m is typically much larger than
the number of actuators r, RHH T -based realizations would be computationally
more intensive than those based on RH T H. We now present two realizations
following closely the so-called FastERA implementation proposed by Peterson.

6.4 Realizations based on RHH T

A singular value decomposition of RHH T = H(0) HT (0) yields

RHH T = Vp Snn VpT (43)


It should be cautioned that one should not take the singular value decomposition
of Hps(0) obtained earlier and literally identify it with the expressions of
(39) and (40); the decomposition should be carried out on RHH T itself.

Noting that the computed Vp is related to the analytical expression of the
observability matrix by

\[
V_p = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-1} \end{bmatrix} (1{:}mp,\ 1{:}n)
\qquad (44)
\]
we obtain two shifted operators from Vp:

\[
V_{p-1}^{0} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-2} \end{bmatrix} \;\bigl((p-1)m \times n\bigr),
\qquad
V_{p-1}^{1} = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{p-1} \end{bmatrix} \;\bigl((p-1)m \times n\bigr)
\qquad (45)
\]

Note that V^0_{p−1} and V^1_{p−1} are related via

\[
V_{p-1}^{0}\, A = V_{p-1}^{1}
\qquad (46)
\]

from which one obtains A as

\[
A = \bigl[V_{p-1}^{0}\bigr]^{+}\, V_{p-1}^{1}
\qquad (47)
\]

To obtain B, we utilize the following identity:

\[
V_{p-1}^{0}\, B =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{p-2} \end{bmatrix} B
=
\begin{bmatrix} Y(1) \\ Y(2) \\ Y(3) \\ \vdots \\ Y(p-1) \end{bmatrix}
\qquad (48)
\]

so that B is given by

\[
B = \bigl[V_{p-1}^{0}\bigr]^{+}
\begin{bmatrix} Y(1) \\ Y(2) \\ Y(3) \\ \vdots \\ Y(p-1) \end{bmatrix}
\qquad (49)
\]

Finally, C is obtained from

\[
C = V_{p-1}^{0}(1{:}m,\ 1{:}n)
\qquad (50)
\]

as in the case of the ERA derivation.
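
A sketch of this RHH T route, following (43) through (50) (same data conventions as the basic ERA sketch above; the truncation order n is assumed given):

import numpy as np

def era_rhht(Y, p, s, n):
    # Y: Markov parameters, Y[k] an (m x r) array; requires len(Y) >= p + s.
    m, r = Y[1].shape
    H0 = np.vstack([np.hstack(Y[i + 1: i + 1 + s]) for i in range(p)])

    # Eq. (43): SVD of the symmetrized matrix gives the observability factor
    # Vp = Ppn Snn^(1/2); note that R_HHT has singular values Snn^2.
    U, svals, _ = np.linalg.svd(H0 @ H0.T)
    Vp = U[:, :n] @ np.diag(svals[:n] ** 0.25)

    V0 = Vp[:-m, :]                      # [C; CA; ...; CA^(p-2)]   (eq. 45)
    V1 = Vp[m:, :]                       # [CA; CA^2; ...; CA^(p-1)] (eq. 45)

    A = np.linalg.pinv(V0) @ V1          # eq. (47)
    B = np.linalg.pinv(V0) @ np.vstack(Y[1:p])   # eqs. (48)-(49)
    C = V0[:m, :]                        # eq. (50)
    return A, B, C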

6.5 Realizations based on RH T H

For this case, we obtain the following shifted operators from the
controllability matrix Ws:

\[
W_{s-1}^{0} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{s-2}B \end{bmatrix} (1{:}n,\ 1{:}(s-1)r),
\qquad
W_{s-1}^{1} = \begin{bmatrix} AB & A^2B & A^3B & \cdots & A^{s-1}B \end{bmatrix} (1{:}n,\ 1{:}(s-1)r)
\qquad (51)
\]

Notice that W^0_{s−1} and W^1_{s−1} are related by

\[
A\, W_{s-1}^{0} = W_{s-1}^{1}
\qquad (52)
\]

from which one can obtain

\[
A = W_{s-1}^{1}\, \bigl[W_{s-1}^{0}\bigr]^{+}
\qquad (53)
\]

The B matrix is just the first r columns of W^0_{s−1}:

\[
B = W_{s-1}^{0}(1{:}n,\ 1{:}r)
\qquad (54)
\]

Finally, C can be extracted from the following identity:

\[
C\, W_{s-1}^{0} = \begin{bmatrix} Y(1) & Y(2) & Y(3) & \cdots & Y(s-1) \end{bmatrix}
\]
\[
C = \begin{bmatrix} Y(1) & Y(2) & Y(3) & \cdots & Y(s-1) \end{bmatrix} \bigl[W_{s-1}^{0}\bigr]^{+} \;(1{:}m,\ 1{:}n)
\qquad (55)
\]

From the computational accuracy viewpoint, we conjecture that (49) would offer a
more accurate solution for B, whereas the above equation (55) would offer a
better solution for C. Confirming this conjecture may serve as one of the term
projects.
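
A sketch of this dual RH T H route, following (51) through (55) (same assumptions as the RHH T sketch above):

import numpy as np

def era_rhth(Y, p, s, n):
    # Y: Markov parameters, Y[k] an (m x r) array; requires len(Y) >= p + s.
    m, r = Y[1].shape
    H0 = np.vstack([np.hstack(Y[i + 1: i + 1 + s]) for i in range(p)])

    # Controllability factor Ws = Snn^(1/2) Qsn^T from the SVD of R_HTH.
    U, svals, _ = np.linalg.svd(H0.T @ H0)
    Ws = np.diag(svals[:n] ** 0.25) @ U[:, :n].T

    W0 = Ws[:, :-r]                      # [B, AB, ..., A^(s-2)B]    (eq. 51)
    W1 = Ws[:, r:]                       # [AB, A^2B, ..., A^(s-1)B] (eq. 51)

    A = W1 @ np.linalg.pinv(W0)          # eq. (53)
    B = W0[:, :r]                        # eq. (54)
    C = np.hstack(Y[1:s]) @ np.linalg.pinv(W0)   # eq. (55)
    return A, B, C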

7 Continuous Model from Discrete Realization Model

Realizations (Ā, B̄, C) obtained in the preceding two sections are valid for the
discrete state space model (1). While they are adequate for discrete event
dynamics, it will prove more convenient to deduce the continuous state space
model. The second-order structural dynamics models can then be obtained from the
continuous state space model. In this section we focus on the derivation of a
modal-form, continuous state space equation. We defer the transformation of the
modal-form state space equation into the second-order structural equations to
the next chapter.

First, we transform the discrete system matrix Ā into its continuous counterpart
A as follows. Using the relation

\[
\Psi^{-1} \bar{A}\, \Psi = e^{\Lambda \Delta t}
\;\Longleftrightarrow\;
\Psi^{-1} A\, \Psi = \Lambda
\qquad (56)
\]

we obtain the continuous system eigenvalues from

\[
\Lambda = \frac{\ln\bigl(e^{\Lambda \Delta t}\bigr)}{\Delta t}
= \mathrm{diag}\{\sigma_i \pm j\omega_i,\; i = 1, \ldots, n\}
\qquad (57)
\]
where σi and ωi are the real and imaginary parts of the continuous state-space
system (CSS) characteristic roots. It should be noted that, although the real
parts represent the structural damping, the imaginary parts are not the same as
the natural frequencies, as will become clear in the next chapter.

Second, we recall the discrete operator B̄ (6.37) for the zero-order-hold case,
given by

\[
\bar{B} = (\bar{A} - I)\, A^{-1} B
\qquad (58)
\]

from which we have

\[
\Psi^{-1} \bar{B} = \bigl(e^{\Lambda \Delta t} - I\bigr)\, \Lambda^{-1}\, \Psi^{-1} B
\qquad (59)
\]

Therefore, the modal excitation matrix Bψ can be written as

\[
B_\psi = \Psi^{-1} B = \Lambda\, \bigl(e^{\Lambda \Delta t} - I\bigr)^{-1}\, \Psi^{-1} \bar{B}
\qquad (60)
\]
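
A sketch of these first two steps, converting an identified discrete pair (Ā, B̄) with known sampling interval dt into Λ and Bψ (it assumes distinct eigenvalues and an invertible eigenvector matrix Ψ, and follows (57) and (60)):

import numpy as np

def discrete_to_continuous_modal(Abar, Bbar, dt):
    mu, Psi = np.linalg.eig(Abar)          # Psi^-1 Abar Psi = diag(mu) = exp(Lambda*dt)
    mu = mu.astype(complex)
    lam = np.log(mu) / dt                  # continuous eigenvalues, eq. (57)
    Bpsi = np.diag(lam / (mu - 1.0)) @ np.linalg.solve(Psi, Bbar)   # eq. (60)
    return lam, Psi, Bpsi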

Third, in order to obtain the modal measurement matrix Cψ, we recall the output
matrix relation (6.21c):

\[
C = \begin{bmatrix} S_d & 0 \end{bmatrix}
  + \begin{bmatrix} S_v & 0 \end{bmatrix} A
  + \begin{bmatrix} S_a & 0 \end{bmatrix} A^2
\qquad (61)
\]

In practice, the output data from each sensor type are treated separately. This
means that the modal output matrix becomes different according to the sensor
type:

\[
\begin{aligned}
C_\psi &= S_d\, \Psi && \text{for displacement sensing} \\
       &= S_v\, \Psi \Lambda^{-1} && \text{for velocity sensing} \\
       &= S_a\, \Psi \Lambda^{-2} && \text{for acceleration sensing} \\
       &= \begin{bmatrix} \cdots & \Re(C_{\psi i}) \pm \Im(C_{\psi i}) & \cdots \end{bmatrix}, && i = 1, \ldots, n
\end{aligned}
\qquad (62)
\]

Finally, we obtain the continuous, modal state-space realization as

ż(t) = Λ z(t) + Bψ u(t)
y(t) = Cψ z(t)                                                  (63)

Observe that the internal variable z(t) is associated with an arbitrary basis
vector. On the other hand, the input u(t) and output y(t) are the same ones
measured from the experiments. In other words, for all feasible internal vector
representations, the input/output transfer function is unique, which will be
utilized for extracting the structural physical parameters in the subsequent
lecture.
