
STATISTICAL ERROR ON DAMPING RATIOS IN

DETERMINISTIC SUBSPACE IDENTIFICATION

Michel Raffy and Camille Gontier


Laboratoire de Mécanique et de Rhéologie
École d'Ingénieurs du Val de Loire
Rue de la Chocolaterie
41000 Blois, France

ABSTRACT
Today, in the field of structural modal analysis, frequency or time domain methods are available. They allow engineers to determine the modal characteristics of a mechanical structure in testing conditions, i.e. when the excitation can be measured. Once the modal parameters are determined, the question of data reliability arises, especially concerning damping ratios, which are often identified with an unsatisfactory accuracy.
From a deterministic subspace identification algorithm in state space such as N4SID (Numerical algorithm for Subspace State Space System Identification), the present paper develops a new formulation which allows the evaluation of the statistical error on the identified damping ratios. After developing the basic formula, the method is validated and discussed by means of, first, the example of a spring-damper-mass system and, second, an experimental structure representing a two-floor building.
NOMENCLATURE
$M, C, K$ mass, damping and stiffness matrices.
$f, F$ respectively force and force vector.
$A_c, B_c$ continuous time matrices.
$A, B, C, D$ discrete time matrices in displacement form state equations.
$\bar A, \bar B, \bar C, \bar D$ discrete time matrices in acceleration form state equations.
$x, \dot x, X, \dot X$ displacement/velocity, velocity/acceleration and associated state vectors.
$y, \dot y, u$ observed displacements, velocities and excitations in state equations.
$l, m$ number of outputs and inputs.
$w, v, \dot w, \dot v$ process and measurement noises in displacement and acceleration form state equations.
$i, n, r$ identification order, system order, truncation rank of identification.
$j$ sample size.
$\Gamma_i, H_i, \Delta_i, G_i, \Sigma_i$ observability, excitation and stochastic matrices.
$U, S, V$ full SVD matrices; the subscript takes the values $s$ and $n$ for the signal and noise parts.
$\Psi$ eigenvector matrix of $A_c$ and $\bar A$.
$\lambda_k, \mu_k$ eigenvalues of the matrices $A_c$ and $\bar A$.
$\bar A_d, A_{cd}$ diagonal realisation matrices of $\bar A$ and $A_c$.
$\Gamma_{id}$ diagonal realisation matrix of $\Gamma_i$.
$(\cdot)_k$ $k$th row of the matrix.
$(\cdot)^k$ $k$th column of the matrix.
$T$ similarity transformation matrix.
$T$ sampling period.
$\tilde{(\cdot)}$ error on the variable.
$\hat{(\cdot)}$ estimate of the variable.
$(\cdot)^*$ complex conjugate transpose of the variable.
$(\cdot)^T$ transpose of the variable.
$(\cdot)^c$ conjugate of the variable.
$(\cdot)^\dagger$ pseudo inverse of the matrix.
$E$ expectation operator, with $\bar E$ equal to $\frac{1}{j}E$.
$\delta_{pq}$ Kronecker delta operator.
$\Pi_B$ orthogonal projection operator: $\Pi_B = B^T\big(BB^T\big)^\dagger B$.
$\Pi_B^\perp$ orthogonal complement projection operator: $\Pi_B^\perp = I - \Pi_B$.
$A/_B\,C$ oblique projection operator: $A/_B\,C = \big(A\,\Pi_B^\perp\big)\big(C\,\Pi_B^\perp\big)^\dagger C$.
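The projection operators used throughout the paper (orthogonal projection, orthogonal complement, oblique projection) can be sketched numerically. This is a minimal NumPy illustration, not part of the paper's original tooling; the helper names are ours.

```python
import numpy as np

def pi(B):
    # Orthogonal projection onto the row space of B: Pi_B = B^T (B B^T)^+ B
    return B.T @ np.linalg.pinv(B @ B.T) @ B

def pi_perp(B):
    # Orthogonal complement projector: Pi_B^perp = I - Pi_B
    return np.eye(B.shape[1]) - pi(B)

def oblique(A, B, C):
    # Oblique projection of the row space of A along B onto C:
    # A /_B C = (A Pi_B^perp) (C Pi_B^perp)^+ C
    return A @ pi_perp(B) @ np.linalg.pinv(C @ pi_perp(B)) @ C
```

Two defining properties are easy to check on toy matrices: a row space component along $B$ is annihilated ($B/_B\,C = 0$), while the component along $C$ is recovered exactly.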

Generic procedure (N): from any discrete vector signal $s(t)$, with size $i$, sequence length $j$ and index $k$ given, an extended signal vector $s_k$, also labelled $s_{k,i}$, is defined as
$s_k = s_{k,i} = \big[s(k)^T \;\; s(k+1)^T \;\; \cdots \;\; s(k+i-1)^T\big]^T$ (N1)
from which a Hankel signal matrix $S$ is derived as follows:
$S = S_{k,i,j} = \big[s_k \;\; s_{k+1} \;\; \cdots \;\; s_{k+j-1}\big]$ (N2)
Two indexes $\alpha$, $\beta$ being given, a past signal matrix $S_p$ and a future signal matrix $S_f$ are defined from (N1) and (N2) by
$S_p = S_{k,\alpha,j} = \big[s_{k,\alpha} \;\; s_{k+1,\alpha} \;\; \cdots \;\; s_{k+j-1,\alpha}\big]$ (N3)
$S_f = S_{k+\alpha,\beta,j} = \big[s_{k+\alpha,\beta} \;\; s_{k+\alpha+1,\beta} \;\; \cdots \;\; s_{k+\alpha+j-1,\beta}\big]$ (N4)
Furthermore, a past signal$+$ matrix $S_{p+}$ and a future signal$-$ matrix $S_{f-}$ are defined:
$S_{p+} = S_{k,\,\alpha+1,\,j}$ (N5)
$S_{f-} = S_{k+\alpha+1,\,\beta-1,\,j}$ (N6)

Specific procedure (P): from the state vector signal $x(t)$, with sequence length $j$ and index $k$ given, a state vector signal $x_k$ is defined as
$x_k = x_{k,j} = \big[x(k) \;\; x(k+1) \;\; \cdots \;\; x(k+j-1)\big]$ (P1)
A past state sequence $X_p$, a future state sequence $X_f$ and a shifted future state sequence $X_{f+}$ are defined from (P1) by
$X_p = x_{k,\,j}$ (P2)
$X_f = x_{k+\alpha,\,j}$ (P3)
$X_{f+} = x_{k+\alpha+1,\,j}$ (P4)
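The generic procedure (N) amounts to building block-Hankel matrices and splitting them into past and future parts. The following sketch assumes the signal is stored as an array of samples (one row per time step); the helper names are ours, not the paper's.

```python
import numpy as np

def extended_vector(s, k, i):
    # (N1): stack i successive samples s(k) ... s(k+i-1) into one column
    return np.concatenate([s[k + q] for q in range(i)])

def hankel(s, k, i, j):
    # (N2): Hankel matrix S_{k,i,j} whose columns are s_k, s_{k+1}, ..., s_{k+j-1}
    return np.column_stack([extended_vector(s, k + q, i) for q in range(j)])

def past_future(s, k, alpha, beta, j):
    # (N3)-(N6): past/future split for given indexes alpha and beta
    Sp = hankel(s, k, alpha, j)                       # (N3)
    Sf = hankel(s, k + alpha, beta, j)                # (N4)
    Sp_plus = hankel(s, k, alpha + 1, j)              # (N5)
    Sf_minus = hankel(s, k + alpha + 1, beta - 1, j)  # (N6)
    return Sp, Sf, Sp_plus, Sf_minus
```

For a scalar signal, `hankel(s, 0, 2, 3)` simply shifts the sequence by one sample from one row to the next, which is the structure exploited by the subspace algorithms below.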

INTRODUCTION

Among the numerous methods available today in the field of modal analysis, time domain methods [1] are becoming more and more important as an alternative to the classical frequency domain ones [2]. Among them, subspace methods are especially attractive since they require no prior manipulation of the data. A comprehensive survey on these methods can be found in the literature [3, 4]. These methods are currently applied in three typical situations: i) deterministic identification, ii) pure stochastic identification, iii) mixed deterministic-stochastic identification. The present paper is developed within the framework of the third situation, but obviously could be degenerated to the second one.
In such situations, the identification process is of a stochastic nature, so the question of the statistical validity of the identified parameters arises, whatever the identification algorithm may be. Although modal frequencies are generally identified with accuracy, this is seldom true for the damping ratios. For these reasons, providing a confidence range associated with any estimated parameter would be the only way to ensure valuable information.
The issue of the statistical analysis of the parameters identified by subspace methods was tackled by several authors in the last decade. From the early 90s, a series of papers by Viberg [4, 5, 6], Jansson [7], Bauer [8, 9, 10], Deistler [11], Peternell [12] and Knudsen [13] thoroughly investigated the consistency conditions of the subspace methods, providing at the same time the basic elements for a statistical analysis of the parameters.
In the present paper, a method proposed by Viberg [6] is adapted to the identification of the damping ratios, rewritten in the frameworks of the N4SID algorithm [3] and the IVM algorithm [4], then applied to several cases in simulation, and finally to an experimental case.

STATE EQUATIONS OF STRUCTURES

The dynamic equation of a structure, under the linear time-invariant hypothesis, is usually expressed in the form
$M\ddot x + C\dot x + Kx = f$ (2.1)
where the matrices $M$, $C$, $K$ stand respectively for the mass, damping and stiffness matrices, and $x$ and $f$ for the displacement and external force vectors.
The following vectors are usually defined:
$X = \begin{bmatrix} x^T & \dot x^T \end{bmatrix}^T$ and $F = \begin{bmatrix} f^T & 0 \end{bmatrix}^T$ (2.2 a, b)
standing for the state variable and associated force vectors.
While writing
$A_c = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix}$ (2.3 a)
$B_c = \begin{bmatrix} M^{-1} & 0 \\ -M^{-1}CM^{-1} & M^{-1} \end{bmatrix}$ (2.3 b)
the above notations allow the classical differential form
$\dot X = A_c X + B_c F$ (2.4)
Generally, only a small number of the structure's degrees of freedom are observed, so that a vector $Y$ of observations must be defined:
$Y = C_c X$ (2.5)
$C_c$ being the observation matrix.
The continuous dynamic equation so defined is then discretized using a classical procedure. The resulting discrete system of state equations becomes
$x(k+1) = A\,x(k) + B\,u(k) + w(k)$
$y(k) = C\,x(k) + D\,u(k) + v(k)$ (2.6)
In the present paper, we will consider the following formulation in terms of accelerations [14]:
$\dot x(k+1) = \bar A\,\dot x(k) + \bar B\,\dot u(k) + \dot w(k)$
$\dot y(k) = \bar C\,\dot x(k) + \bar D\,\dot u(k) + \dot v(k)$ (2.7)
with $\bar A = A = e^{T A_c}$, $\bar B = A B_c$, $\bar C = C = C_c$, $\bar D = D$ and $\dot u(k) = u(k+1) - u(k)$.
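The building blocks (2.3 a) and the discretization $A = e^{T A_c}$ can be illustrated numerically. This is a sketch under stated assumptions: the system values are hypothetical (a single degree of freedom), and the matrix exponential is computed by eigendecomposition, which assumes $A_c$ is diagonalizable.

```python
import numpy as np

def continuous_state_matrix(M, C, K):
    # (2.3 a): A_c = [[0, I], [-M^-1 K, -M^-1 C]]
    n = M.shape[0]
    Mi = np.linalg.inv(M)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-Mi @ K,          -Mi @ C]])

def expm_diag(A):
    # Matrix exponential via eigendecomposition (assumes A diagonalizable)
    lam, Psi = np.linalg.eig(A)
    return (Psi @ np.diag(np.exp(lam)) @ np.linalg.inv(Psi)).real

# Hypothetical 1-DOF system: m = 2 kg, c = 1 N/(m/s), k = 180 kN/m
M = np.array([[2.0]]); C = np.array([[1.0]]); K = np.array([[180e3]])
Ac = continuous_state_matrix(M, C, K)
T = 1e-3                    # sampling period (s)
A = expm_diag(T * Ac)       # discrete-time matrix A = e^{T A_c}
# Discrete eigenvalues mu_k = e^{T lambda_k} recover the continuous poles
mu = np.linalg.eigvals(A)
lam = np.log(mu) / T
```

Here the continuous poles satisfy $|\lambda| = \sqrt{k/m} = 300$ rad/s and $\zeta = c/(2\sqrt{km}) = 1/1200$, which the round trip through the discrete matrix reproduces.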

Some general assumptions are used:
a) The process and measurement noises are assumed to be stationary, zero-mean, ergodic, white Gaussian random processes with covariance
$E\left[\begin{bmatrix} w_p \\ v_p \end{bmatrix}\begin{bmatrix} w_q^T & v_q^T \end{bmatrix}\right] = \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}\delta_{pq}$ (2.8)
b) The input sequence $u(k)$ is assumed to be an arbitrary quasi-stationary deterministic sequence uncorrelated with the process and measurement noises [1]. Furthermore, the input sequence $u(k)$ is persistently exciting of order $2i$ [1, 3].
c) The system is assumed to be observable, which implies that the extended observability matrix $\Gamma_i$ has full rank, equal to the system order $n$.

N4SID: MODEL AND FORMULATION

From the system of equations (2.7), it is easy to prove the following relations, through a recursive process and using the procedure (N1):
$\dot y_k = \Gamma_i\,\dot x_k + H_i\,\dot u_k + G_i\,\dot w_k + \dot v_k$ (3.1)
$\dot x(k+i) = \bar A^i\,\dot x(k) + \Delta_i\,\dot u_k + \Sigma_i\,\dot w_k$ (3.2)
$\Gamma_i$ is called the extended observability matrix; among $H_i$, $\Delta_i$, $G_i$ and $\Sigma_i$, the first two matrices are attached to the excitation vector and the last two to the stochastic effects. The matrices have the following structure:
$\Gamma_i = \begin{bmatrix} \bar C \\ \bar C\bar A \\ \vdots \\ \bar C\bar A^{i-1} \end{bmatrix}$, $H_i = \begin{bmatrix} \bar D & 0 & \cdots & 0 \\ \bar C\bar B & \bar D & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ \bar C\bar A^{i-2}\bar B & \cdots & \bar C\bar B & \bar D \end{bmatrix}$ (3.3 a, b)
$G_i = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \bar C & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ \bar C\bar A^{i-2} & \cdots & \bar C & 0 \end{bmatrix}$ (3.3 c)
$\Delta_i = \big[\bar A^{i-1}\bar B \;\; \cdots \;\; \bar A\bar B \;\; \bar B\big]$ (3.3 d)
$\Sigma_i = \big[\bar A^{i-1} \;\; \cdots \;\; \bar A \;\; I\big]$ (3.3 e)
Using the generic procedures (N1, N3, N4), a past observation matrix $\dot Y_p$ and a future observation matrix $\dot Y_f$ are defined. Similarly, the matrices $\dot U_p$, $\dot U_f$, $\dot W_p$, $\dot W_f$, $\dot V_p$, $\dot V_f$ are constructed. From the equations (3.1), (3.2) and using the specific procedures (P2, P3), the following relations are obtained:
$\dot Y_p = \Gamma_\alpha\,\dot X_p + H_\alpha\,\dot U_p + G_\alpha\,\dot W_p + \dot V_p$ (3.4 a)
$\dot Y_f = \Gamma_\beta\,\dot X_f + H_\beta\,\dot U_f + G_\beta\,\dot W_f + \dot V_f$ (3.4 b)
$\dot X_f = \bar A^\alpha\,\dot X_p + \Delta_\alpha\,\dot U_p + \Sigma_\alpha\,\dot W_p$ (3.4 c)
As part of this formulation in terms of accelerations, the instrumental variable, by analogy with the one generally taken up in the literature [3, 15, 16], is
$\dot P_p = \begin{bmatrix} \dot U_p \\ \dot Y_p \end{bmatrix}$ (3.5)
The N4SID algorithm based on the state sequences uses the oblique projection of the row space of the matrix $\dot Y_f$ onto the row space of the instrumental variable matrix $\dot P_p$, parallel to the row space of the matrix $\dot U_f$:
$O = \dot Y_f\,/_{\dot U_f}\,\dot P_p$ (3.6)
This quantity $O$ is equal to the product of the observability matrix by the future states $\dot X_f$, plus the oblique projection of the noise term $G_\beta\dot W_f + \dot V_f$ [3, 14]:
$O = \Gamma_\beta\,\dot X_f + \big(G_\beta\dot W_f + \dot V_f\big)\,/_{\dot U_f}\,\dot P_p$ (3.7)
When the sequence length $j$ tends to infinity, the oblique projection of the noise vanishes:
$O \approx \Gamma_\beta\,\dot X_f$ (3.8)
After inspecting the singular values of $S$, the matrices of the SVD of $O$ are partitioned:
$O = USV^T = \big[U_s \;\; U_n\big]\begin{bmatrix} S_s & 0 \\ 0 & S_n \end{bmatrix}\begin{bmatrix} V_s^T \\ V_n^T \end{bmatrix}$ (3.9)
with $S_s$ corresponding to the cutting rank $r$, i.e. to the most significant singular values of $S$.
The estimates of $\Gamma_\beta$ and $\dot X_f$ are arbitrarily defined by
$\hat\Gamma_\beta = U_s$ (3.10)
$\hat{\dot X}_f = S_s V_s^T$ (3.11)
and are obtained to within a similarity transformation.
The estimates of the matrices $\bar A$ and $\bar C$ can be extracted by using the shift property of $\hat\Gamma_\beta$.
After that, the following oblique projection must be defined to obtain the estimate of the state vector $\dot X_{f+}$:
$O' = \dot Y_{f-}\,/_{\dot U_{f-}}\,\dot P_{p+}$ (3.12)
As before, $O'$ tends to $\Gamma_{\beta-1}\,\dot X_{f+}$ when $j$ tends to infinity, so that the estimate of the shifted state sequence is
$\hat{\dot X}_{f+} = \hat\Gamma_{\beta-1}^\dagger\,O'$ (3.13)
Then the estimates of the matrices $\bar B$ and $\bar D$ are obtained by solving the following equation in the least squares sense:
$\begin{bmatrix} \hat{\dot X}_{f+} \\ \dot Y_{f|f} \end{bmatrix} = \begin{bmatrix} \bar A & \bar B \\ \bar C & \bar D \end{bmatrix}\begin{bmatrix} \hat{\dot X}_f \\ \dot U_{f|f} \end{bmatrix}$ (3.14)
where $\dot Y_{f|f}$ and $\dot U_{f|f}$ denote the single block-row Hankel matrices of the observations and inputs at the first future instant.
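The chain (3.6)-(3.11), an oblique projection followed by a truncated SVD, can be sketched as follows. This is a minimal NumPy illustration on placeholder matrices, not the authors' implementation; in the noise-free case the projection reproduces any $\dot Y_f$ lying in the row space of $\dot P_p$ exactly.

```python
import numpy as np

def oblique(A, B, C):
    # A /_B C = (A Pi_B^perp)(C Pi_B^perp)^+ C, with Pi_B^perp = I - B^T(BB^T)^+B
    P = np.eye(B.shape[1]) - B.T @ np.linalg.pinv(B @ B.T) @ B
    return A @ P @ np.linalg.pinv(C @ P) @ C

def n4sid_projection(Yf, Uf, Pp, r):
    # (3.6): O = Yf /_Uf Pp, then (3.9)-(3.11): truncated SVD of rank r
    O = oblique(Yf, Uf, Pp)
    U, S, Vt = np.linalg.svd(O, full_matrices=False)
    Gamma_hat = U[:, :r]                  # (3.10) estimated observability matrix
    Xf_hat = np.diag(S[:r]) @ Vt[:r, :]   # (3.11) estimated state sequence
    return Gamma_hat, Xf_hat, S
```

Inspecting the returned singular values `S` is exactly the step used in the paper to choose the cutting rank $r$.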

POLE ESTIMATE

Using the shift-structure property of $\hat\Gamma_\beta$, the estimate of the system matrix $\bar A$ can be obtained. Two selection matrices must be introduced [6]:
$J_1 = \big[ I_{(\beta-1)l} \;\; 0_{(\beta-1)l \times l} \big]$ (4.1)
$J_2 = \big[ 0_{(\beta-1)l \times l} \;\; I_{(\beta-1)l} \big]$ (4.2)
The error in the estimate of $\bar A$ may be written
$\tilde A = \hat A - \bar A = (J_1\hat\Gamma_\beta)^\dagger J_2\hat\Gamma_\beta - \bar A$ (4.3)
Considering that, at first-order approximation,
$(J_1\hat\Gamma_\beta)^\dagger \approx (J_1\Gamma_\beta)^\dagger\big(I - (J_1\tilde\Gamma_\beta)(J_1\Gamma_\beta)^\dagger\big)$ (4.4)
equation (4.3) becomes
$\tilde A \approx (J_1\Gamma_\beta)^\dagger\big(J_2\tilde\Gamma_\beta - J_1\tilde\Gamma_\beta\,\bar A\big)$ (4.5)
Assuming that the matrix $\bar A$ has distinct eigenvalues $\mu_k$, it can be diagonalized:
$\bar A = \Psi\,\bar A_d\,\Psi^{-1}$ (4.6)
So, at first-order approximation, the errors on the eigenvalues of $\bar A$ can be written
$\tilde\mu_k \approx (\Psi^{-1})_k\,\tilde A\,\Psi^k$ (4.7)
Inserting (4.5) into (4.7) gives
$\tilde\mu_k \approx (\Psi^{-1})_k\,(J_1\Gamma_\beta)^\dagger\big(J_2\tilde\Gamma_\beta - J_1\tilde\Gamma_\beta\,\bar A\big)\,\Psi^k$ (4.8)
As
$\bar A\,\Psi^k = \mu_k\,\Psi^k$ (4.9)
then
$\tilde\mu_k \approx (\Psi^{-1})_k\,(J_1\Gamma_\beta)^\dagger\,(J_2 - \mu_k J_1)\,\tilde\Gamma_\beta\,\Psi^k$ (4.10)
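The shift-structure estimate underlying (4.1)-(4.3) can be sketched numerically: drop the last block row of $\Gamma$, drop the first one, and solve $(J_1\Gamma)\,A = J_2\Gamma$ in the least squares sense. The $(A, C)$ pair below is a hypothetical example used only to check that the known poles are recovered.

```python
import numpy as np

def shift_poles(Gamma, l):
    # (4.1)-(4.2): J1 drops the last block row of Gamma, J2 drops the first one
    G1 = Gamma[:-l, :]   # J1 Gamma
    G2 = Gamma[l:, :]    # J2 Gamma
    # Shift property: (J1 Gamma) A = J2 Gamma  =>  A_hat = (J1 Gamma)^+ (J2 Gamma)
    A_hat = np.linalg.pinv(G1) @ G2
    return np.linalg.eigvals(A_hat)

# Hypothetical check: build Gamma from a known (A, C) pair
A = np.array([[0.9, 0.2], [-0.2, 0.9]])
C = np.array([[1.0, 0.0]])
beta, l = 5, 1
Gamma = np.vstack([C @ np.linalg.matrix_power(A, q) for q in range(beta)])
mu = shift_poles(Gamma, l)
```

In the noise-free case the eigenvalues of `A_hat` coincide with those of `A`; with a noisy $\hat\Gamma_\beta$, the difference is exactly the error $\tilde\mu_k$ analysed in this section.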

Lemma [17]: let two matrices $A$ be $m \times r$ and $B$ be $r \times n$, both of rank $r$; then
$(AB)^\dagger = B^\dagger A^\dagger$ (4.11)
Considering, on the one hand, the extended observability matrix in the diagonal realization,
$\Gamma_{\beta d} = \Gamma_\beta\,\Psi$ (4.12)
and, on the other hand, the lemma (4.11),
$(J_1\Gamma_\beta)^\dagger = (J_1\Gamma_{\beta d}\Psi^{-1})^\dagger = \Psi\,(J_1\Gamma_{\beta d})^\dagger$ (4.13)
Using (3.10) and (4.12), the following relation can be written:
$\Psi = U_s^T\,\Gamma_{\beta d}$ (4.14)
The error on the pole $\mu_k$ can be obtained by inserting (3.10), (4.13) and (4.14) into (4.10):
$\tilde\mu_k = f_k\,\tilde U_s\,(U_s^T\Gamma_{\beta d})^k$ (4.15)
with
$f_k = \big((J_1\Gamma_{\beta d})^\dagger\big)_k\,(J_2 - \mu_k J_1)$ (4.16)

Let us define the matrix
$G = \frac{1}{j}\,O$ (4.17)
The SVD of $G$ can be expressed as
$G = U_s S_{js} V_s^T + U_n S_{jn} V_n^T$ (4.18)
with
$S_{js} = \frac{1}{j}\,S_s$ and $S_{jn} = \frac{1}{j}\,S_n$ (4.19 a, b)
From equations (3.6) and (4.18), multiplying both sides by $V_s$, the following equation is obtained:
$\frac{1}{j}\,\dot Y_f\,\Pi^\perp_{\dot U_f}\,\dot P_p^T\,W_c V_s = U_s S_{js}$ (4.20 a)
with
$W_c = \big(\dot P_p\,\Pi^\perp_{\dot U_f}\,\dot P_p^T\big)^{-1}\,\dot P_p$ (4.20 b)
Theorem: let the deterministic row vectors $\eta$ and $\xi$ satisfy
$\eta\,\Gamma_{\beta d} = \xi\,\Gamma_{\beta d} = 0$ (4.21)
Pre-multiplying equation (4.20 a) by $\eta$ and post-multiplying it by $S_{js}^{-1}$ gives
$\eta\,U_s = \frac{1}{j}\,\eta\,\dot N_f\,\Pi^\perp_{\dot U_f}\,\dot P_p^T\,W_c V_s S_{js}^{-1}$ (4.22 a)
with
$\dot N_f = G_\beta\,\dot W_f + \dot V_f$ (4.22 b)
Using the definition of the projector $\Pi^\perp_{\dot U_f}$, equation (4.22 a) can be written
$\sqrt{j}\,\eta\,U_s = \frac{1}{\sqrt{j}}\,\eta\,\dot N_f\,\big(\dot P_p^T - \dot U_f^T R_{ff}^{-1} R_{fp}\big)\,W_c V_s S_{js}^{-1}$ (4.25 a)
with
$R_{ff} = \frac{1}{j}\,\dot U_f\,\dot U_f^T$ (4.25 b)
$R_{fp} = \frac{1}{j}\,\dot U_f\,\dot P_p^T$ (4.25 c)
From assumptions a) and b), the central limit theorem [9] shows that the elements
$\frac{1}{\sqrt{j}}\,\dot N_f\,\dot P_p^T$ and $\frac{1}{\sqrt{j}}\,\dot N_f\,\dot U_f^T$ (4.26 a, b)
are $O_p(1)$ and have a limiting zero-mean Gaussian distribution. Hence the asymptotic distribution of $\sqrt{j}\,\eta\,U_s$ is the same as that of the quantity
$\sqrt{j}\,\eta\,U_s = \frac{1}{\sqrt{j}}\,\eta\,\dot N_f\,\dot Z^T H$ (4.27 a)
with
$\dot Z = \begin{bmatrix} \dot U_f \\ \dot P_p \end{bmatrix} = \big[\dot z_k\;\; \dot z_{k+1}\;\; \cdots\;\; \dot z_{k+j-1}\big]$ (4.27 b)
using the procedure (N2), and
$H = \begin{bmatrix} -R_{ff}^{-1} R_{fp} \\ I \end{bmatrix} W_c V_s S_{js}^{-1}$ (4.27 c)
Replacing $\eta$ by $f_k$ and $\xi$ by $f_l$ in equation (4.27 a), and post-multiplying both sides by $(U_s^T\Gamma_{\beta d})^k$ and $(U_s^T\Gamma_{\beta d})^l$ respectively, yields
$\sqrt{j}\,f_k U_s\,(U_s^T\Gamma_{\beta d})^k = \frac{1}{\sqrt{j}}\,f_k\,\dot N_f\,\dot Z^T b_k$ (4.28 a)
$\sqrt{j}\,f_l U_s\,(U_s^T\Gamma_{\beta d})^l = \frac{1}{\sqrt{j}}\,f_l\,\dot N_f\,\dot Z^T b_l$ (4.28 b)
with
$b_h = H\,(U_s^T\Gamma_{\beta d})^h, \quad h \in \{k, l\}$ (4.28 c)
It is easy to prove that $f_k$ is orthogonal to $\Gamma_{\beta d}$ and $U_s$. M. Viberg et al. [5, 6, 16] showed that $\sqrt{j}\,f_k W_r^{1/2} U_s$ (in the N4SID case, $W_r = I$) has a limiting zero-mean Gaussian distribution with covariances
$\lim_{j\to\infty} j\,E\big[(f_k W_r^{1/2} U_s)^T (f_l W_r^{1/2} U_s)\big] = \sum_{|\tau|<\infty} H^T R_{\dot Z\dot Z}(\tau)\,H \;\, f_l\,R_{\dot N_f\dot N_f}(\tau)\,f_k^T$ (4.29 a)
$\lim_{j\to\infty} j\,E\big[(f_k W_r^{1/2} U_s)^T (f_l W_r^{1/2} U_s)^c\big] = \sum_{|\tau|<\infty} H^T R_{\dot Z\dot Z}(\tau)\,H \;\, f_l^c\,R_{\dot N_f\dot N_f}(\tau)\,f_k^T$ (4.29 b)
$\lim_{j\to\infty} j\,E\big[(f_k W_r^{1/2} U_s)^* (f_l W_r^{1/2} U_s)^c\big] = \sum_{|\tau|<\infty} H^T R_{\dot Z\dot Z}(\tau)\,H \;\, f_l^c\,R_{\dot N_f\dot N_f}(\tau)\,f_k^*$ (4.29 c)
In the left-hand side of equations (4.28 a, b), the product $f_k U_s (U_s^T\Gamma_{\beta d})^k$ is equal to $\tilde\mu_k$ (4.15). Inserting (4.29 a, b, c) into equation (4.15), the asymptotic normality distributions of the poles $\mu_k$, for $k \in \{1, \ldots, n\}$, are obtained:
$\lim_{j\to\infty} j\,E[\tilde\mu_k\tilde\mu_k] = \sum_{|\tau|<\infty} b_k^T R_{\dot Z\dot Z}(\tau)\,b_k \;\, f_k\,R_{\dot N_f\dot N_f}(\tau)\,f_k^T$ (4.30 a)
$\lim_{j\to\infty} j\,E[\tilde\mu_k\tilde\mu_k^c] = \sum_{|\tau|<\infty} b_k^T R_{\dot Z\dot Z}(\tau)\,b_k^c \;\, f_k^c\,R_{\dot N_f\dot N_f}(\tau)\,f_k^T$ (4.30 b)
$\lim_{j\to\infty} j\,E[\tilde\mu_k^c\tilde\mu_k^c] = \sum_{|\tau|<\infty} b_k^* R_{\dot Z\dot Z}(\tau)\,b_k^c \;\, f_k^c\,R_{\dot N_f\dot N_f}(\tau)\,f_k^*$ (4.30 c)
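The first-order eigenvalue error formula (4.7) can be checked numerically: perturb a matrix by a small $\tilde A$ and compare the exact eigenvalue shift with $(\Psi^{-1})_k\,\tilde A\,\Psi^k$. The matrices below are hypothetical and serve only to illustrate the approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.8, 0.3], [0.1, 0.6]])       # hypothetical system matrix
lam, Psi = np.linalg.eig(A)                  # A = Psi A_d Psi^-1   (4.6)
Psi_inv = np.linalg.inv(Psi)

A_tilde = 1e-6 * rng.standard_normal((2, 2)) # small error on A
mu_exact = np.linalg.eigvals(A + A_tilde)

# (4.7): first-order eigenvalue errors mu_tilde_k = (Psi^-1)_k A_tilde Psi^k
mu_first_order = lam + np.array(
    [Psi_inv[k, :] @ A_tilde @ Psi[:, k] for k in range(2)])
```

For a perturbation of size $10^{-6}$, the residual between the exact and first-order eigenvalues is of second order, i.e. several orders of magnitude smaller than the shift itself.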

STATISTICAL ERROR ON DAMPING RATIOS

Let $\tilde\lambda_k$ be the error on the pole $\lambda_k$. The sum of $\tilde\lambda_k$ and its conjugate gives
$\tilde\lambda_k + \tilde\lambda_k^c = -2\,\big(\hat\zeta_k\hat\omega_k - \zeta_k\omega_k\big)$ (5.1)
By first-order approximation, the estimate $\hat\omega_k$ being very close to the real value, the following relation can be written:
$\tilde\lambda_k + \tilde\lambda_k^c \approx -2\,\omega_k\big(\hat\zeta_k - \zeta_k\big) = -2\,\omega_k\tilde\zeta_k$ (5.2)
The following variable is defined:
$\tilde\xi_k = \tfrac{1}{2}\big(\tilde\lambda_k + \tilde\lambda_k^c\big)$ (5.3)
The covariance of $\tilde\xi_k$ gives
$E(\tilde\xi_k\tilde\xi_k) = \tfrac{1}{4}\,E\big[\tilde\lambda_k\tilde\lambda_k + 2\,\tilde\lambda_k\tilde\lambda_k^c + \tilde\lambda_k^c\tilde\lambda_k^c\big] = E\big[(\hat\omega_k\tilde\zeta_k)^2\big]$ (5.4)
Since the estimate of $\omega_k$ can be considered exact, equation (5.4) can be written
$E(\tilde\xi_k\tilde\xi_k) = \omega_k^2\,E\big[\tilde\zeta_k^2\big]$ (5.5)
Between the discrete and continuous time matrices, the following equation exists:
$\bar A_d = e^{T A_{cd}}$ (5.6)
which gives
$\mu_k = e^{T\lambda_k}$ (5.7)
After differentiation of (5.7),
$\tilde\mu_k = T\,\tilde\lambda_k\,e^{T\lambda_k}$ (5.8)
which can be expressed as
$\tilde\lambda_k = \frac{1}{T}\,\frac{\tilde\mu_k}{\mu_k}$ (5.9)
Inserting (5.9) and its conjugate into (5.4), and using (5.5), yields
$E\big[\tilde\zeta_k^2\big] = \frac{1}{4\,T^2\omega_k^2}\left[\frac{E[\tilde\mu_k\tilde\mu_k]}{\mu_k^2} + 2\,\frac{E[\tilde\mu_k\tilde\mu_k^c]}{\mu_k\mu_k^c} + \frac{E[\tilde\mu_k^c\tilde\mu_k^c]}{(\mu_k^c)^2}\right]$ (5.10)
This equation gives the relation between the covariance of $\tilde\zeta_k$ and the covariances of the pole $\mu_k$ and its conjugate.

EXPERIMENTAL VALIDATION

The above theory was tested, on the one hand, on a simulated spring-damper-mass system and, on the other hand, on an experimental structure representing a two-floor building.

6.1 Simulated spring-damper-mass system
This simulated system has five degrees of freedom; it is made up of five masses (2, 2.5, 2, 1.8 and 4 kg), six springs (180, 100, 200, 100, 85 and 120 kN/m) and five dampers (1, 2.5, 0.5, 3 and 0.1 N/(m/s)).
Using the state equations in terms of accelerations, the system was simulated in the following conditions. Two random excitation forces are applied to nodes two and four, and two acceleration signals are collected. A white noise signal is added to these output signals; this noise represents a noise-to-signal ratio of around 0.5 percent.

Figure 1: Schema of the simulated system

Two hundred and fifty simulated data files, each 4,000 points long, have been analysed to obtain the distribution diagrams presented.
Figures 2 and 3 show the distributions of the frequency and of the damping ratio of the second mode, obtained after the identification process with the identification order $i$ equal to 11 and the truncation rank $r$ equal to 10. The distributions have a Gaussian look, centred around a central value. These central values are respectively 33.2768 Hz and $7.358 \times 10^{-4}$.

Figure 2: Distribution of the frequency

Figure 3: Distribution of the damping ratio

Figure 4 shows the distribution of the statistical error estimate on the damping ratio. Figure 5 shows the distribution of the ratio between the statistical error estimate and the real error on the damping ratio: approximately 95% of the results are bounded by the values -2 and +2, which complies with a Gaussian distribution. For the other modes, similar diagrams are obtained.
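The pole-to-modal-parameter conversion used in this validation, $\mu \to \lambda = \ln\mu / T$ and then $\omega = |\lambda|$, $\zeta = -\mathrm{Re}\,\lambda/|\lambda|$, can be sketched as follows. The pole values are hypothetical, merely of the order of magnitude reported for the second mode.

```python
import numpy as np

def modal_from_discrete_pole(mu, T):
    # (5.7): mu_k = e^{T lambda_k}  =>  lambda_k = ln(mu_k) / T
    lam = np.log(mu) / T
    omega = np.abs(lam)            # natural pulsation (rad/s)
    f = omega / (2.0 * np.pi)      # natural frequency (Hz)
    zeta = -lam.real / omega       # damping ratio
    return f, zeta

# Hypothetical mode: f = 33.2768 Hz, zeta = 7.358e-4, sampled at 1 kHz
T = 1e-3
omega = 2.0 * np.pi * 33.2768
lam = -7.358e-4 * omega + 1j * omega * np.sqrt(1.0 - 7.358e-4**2)
mu = np.exp(T * lam)
f_id, zeta_id = modal_from_discrete_pole(mu, T)
```

Note that the principal branch of the complex logarithm is valid here because $T\,\mathrm{Im}\,\lambda$ stays well below $\pi$; aliased modes would require more care.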

Figure 4: Distribution of the statistical error estimate on damping ratio

Figure 5: Distribution of the ratio statistical error estimate/real error

6.2 Experimental structure
The experimental structure roughly represents the steel framework of a two-floor building (figure 6). Observed in its axial direction, this structure holds four main modes, associated with the following mode shapes: i) in-phase floor translation, ii) opposite-phase floor translation, iii) in-phase floor rotation, iv) opposite-phase floor rotation. Some local modes exist in the structure, such as the floor plate modes and the column beam modes, but they are only weakly excited.
At a top node, the experimental framework model was submitted to a pointwise random excitation by means of a 10-newton exciter, and was observed at two other nodes by means of two accelerometers.
A frequency domain analysis was carefully performed with the help of the Brüel & Kjær Pulse system, which provides very accurate results, at least for the main modes of the experimental structure. The damping coefficients were obtained by zooming around each modal frequency, and by averaging 400 sample sequences of 4,096 points on one channel with 75% overlap.
The damping ratios of the four main modes obtained with the Pulse system are presented in the third column of table 1; the obtained frequencies are shown in the second column.

N   F (Hz)    ζ (%)    r=10     r=26     r=42     r=60
1   35.008    0.44     0.601    0.416    0.447    0.444
2   60.484    0.086    0.0932   0.0948   0.0948   0.0948
3   127.22    0.071    0.0812   0.0769   0.0766   0.0766
4   207.32    0.146    0.1417   0.1415   0.1427   0.1424

Table 1: Experimental and identified modes

In this practical validation, 250 experimental records of 4,096 points per channel have been analysed, but two main problems were encountered.
The first one concerns the stability of the main modes while recording the data files: the frequency and damping ratio values of these modes, obtained after the identification process, evolved between the first and the last record. On figures 7 and 8, the frequency and damping ratio evolutions of the first mode are presented according to the analysed file number.

Figure 6: Experimental structure

Figure 7: First mode frequency evolution

Figure 8: First mode damping ratio evolution

Because of this variation, neither the Pulse damping ratio values nor the experimental means resulting from the identification process could be used as a reference for a statistical validation. To solve the problem, a local mean using 20 values around the observed value was used to calculate the real error estimates. On figure 9, the first mode distribution of the ratio between the statistical error estimate and the estimated real error is presented. This distribution still complies with Gaussian specifications.
Figure 9: First mode - distribution of the ratio statistical error estimate/estimated real error

On figure 10, the same parameter is presented for the third mode. The distribution keeps a Gaussian aspect, but the limits for the 95% confidence interval are reduced.

Figure 10: Third mode - distribution of the ratio statistical error estimate/estimated real error

The second problem comes from the identification order $i$ of the structure and the cutting rank $r$. In a real or experimental structure, the exact number of modes is unknown, and the classical difficulty lies in the choice of these parameters. In this study case, an identification order of 60 and cutting ranks ranging from 10 to 60 have been chosen. In table 1, the damping ratio median values for the four main modes are presented when $r$ equals 10, 26, 42 and 60. These median values agree with the Pulse ones, except for the second mode.

CONCLUSION

The study above shows that, in a simulated structure case with white noise, the statistical error estimate analysis on damping ratios provides reliable results. In this case, there is no problem for the identification process, because the number of modes is known. Moreover, there are no weakly excited modes, because the number of excitation and output points is sufficient.
In a classical subspace identification procedure, the quality of the identification depends on parameters such as the number of exciters $m$, the number of output sensors $l$, the identification order $i$, the cutting rank $r$ and the sample size $j$.
For practical reasons, the number of simultaneous exciters generally reduces to one, sometimes two for large structures. The question then arises of the best choice for the other parameters. Clearly this choice depends on the experimental case, for instance on the measurement noises, which can be colored, on the modal densities, and on the possible presence of small non-linearities. Practical experimentation provides an empirical optimal choice for these parameters, though no real theory is able to confirm the latter today.
In this practical case, the number of exciters and the number of output sensors are respectively equal to 1 and 2. It is obvious that some modes are weakly excited, and that some modes provide the output sensors with a weak signal because of the sensor distance. As the exact number of present modes is not known, the identification order is arbitrarily fixed to 60 for computation-time reasons. This choice gives better results than lower values, but the statistical error estimate remains overvalued for some modes: for the third one, for instance (figure 10).
For all identification order values $i$ and cutting rank values $r$ tested, the statistical error estimate never underestimates the real error. In this case, the statistical error estimate may be used to evaluate the errors made on the damping ratios during the identification process.

REFERENCES

[1] Ljung, L., System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.
[2] Ewins, D., Modal Testing: Theory and Practice, Research Studies Press Ltd, Somerset, England, 1995.
[3] Van Overschee, P., Subspace Identification: Theory, Implementation, Application, PhD Thesis, Katholieke Universiteit Leuven, Belgium, February 1995.
[4] Viberg, M., Subspace-based methods for the identification of linear time-invariant systems, Automatica, Vol. 31, No. 12, pp. 1835-1851, 1995.
[5] Viberg, M., Ottersten, B., Wahlberg, B., Ljung, L., A statistical perspective on state-space modeling using subspace methods, Proceedings 30th IEEE Conference on Decision and Control, pp. 1337-1342, Brighton, England, 1991.
[6] Viberg, M., Wahlberg, B., Ottersten, B., Analysis of state space system identification methods based on instrumental variables and subspace fitting, Automatica, Vol. 33, No. 9, pp. 1603-1616, 1997.
[7] Jansson, M., Wahlberg, B., A linear regression approach to state-space subspace system identification, Signal Processing, Vol. 52, pp. 103-129, 1996.
[8] Bauer, D., Deistler, M., Scherrer, W., The analysis of the asymptotic variance of subspace algorithms, IFAC System Identification, pp. 1037-1041, Fukuoka, Japan, 1997.
[9] Bauer, D., Deistler, M., Scherrer, W., Consistency and asymptotic normality of some subspace algorithms for systems without observed inputs, Automatica, Vol. 35, pp. 1243-1254, 1999.
[10] Bauer, D., Jansson, M., Analysis of the asymptotic properties of the MOESP type of subspace algorithms, Automatica, Vol. 36, pp. 497-509, 2000.
[11] Deistler, M., Peternell, K., Scherrer, W., Consistency and relative efficiency of subspace methods, Automatica, Vol. 31, No. 12, pp. 1865-1875, 1995.
[12] Peternell, K., Scherrer, W., Deistler, M., Statistical analysis of novel subspace identification methods, Signal Processing, Vol. 52, pp. 161-177, 1996.
[13] Knudsen, T., Consistency analysis of subspace identification methods based on a linear regression approach, Automatica, Vol. 37, pp. 81-89, 2001.
[14] Raffy, M., Gontier, C., A subspace method of modal analysis using acceleration signals, Proceedings IMAC-XX, pp. 824-830, Los Angeles, U.S.A., 2001.
[15] Ottersten, B., Viberg, M., A subspace-based instrumental variable method for state-space system identification, 10th IFAC Symposium on System Identification, Copenhagen, Denmark, 1994.
[16] Viberg, M., Ottersten, B., Wahlberg, B., Ljung, L., Performance of subspace-based system identification methods, Proceedings IFAC '93, Vol. 7, pp. 369-372, Sydney, Australia, 1993.
[17] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, 1972.
