AIAA JOURNAL, Vol. 35, No. 8, August 1997

Partial Derivatives of Repeated Eigenvalues and Their Eigenvectors

Uwe Prells and Michael I. Friswell
University of Wales, Swansea SA2 8PP, Wales, United Kingdom
The analysis of inverse problems in linear modeling often requires the sensitivities of the eigenvalues and eigenvectors. The calculation of these sensitivities is mathematically related to the corresponding partial derivatives, which do not exist for every parameterization. Inasmuch as eigenvalues and eigenvectors are coupled by the constitutional equation of the general eigenvalue problem, their derivatives are coupled, too. Conditions on the parameterization are derived and formulated as theorems, which ensure the existence of the partial derivatives of the eigenvalues and eigenvectors with respect to these parameters. The application of the theorems is demonstrated by examples.

I. Introduction

MANY engineering optimization problems, for instance, optimal design or model updating, lead to a sensitivity analysis of the eigenvalue problem. In the case of distinct eigenvalues, the partial derivatives of the eigenvalues and eigenvectors with respect to a prechosen parameterization can be calculated. The situation is more complicated in the case of multiple eigenvalues because the associated subbasis of eigenvectors is only defined up to an arbitrary orthonormal matrix. This case has been the subject of many studies in recent years (see, for instance, Refs. 1-13). As demonstrated by Haug and Rousselet (Ref. 8), the (Fréchet) derivatives of the multiple eigenvalues do not exist, in general, for any parameterization in the case of more than one parameter. They suggest the use of the directional (Gâteaux) derivatives, which exist for some directions. The question of which parameterization is permissible to ensure the existence of the partial derivatives of the eigenvalues and eigenvectors has not been discussed. A remark on that issue is the purpose of this paper. In Sec. II the problem of calculating the partial derivatives of repeated eigenvalues and of the corresponding eigenvectors is recalled. Conditions for the existence of the partial derivatives of the eigenvalues and eigenvectors are investigated in Sec. III, accompanied by simple two-dimensional examples. To demonstrate the application of the theorems derived, in Sec. IV an example of a three-dimensional elastomechanical model is presented.
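The distinction between directional and partial derivatives at a repeated eigenvalue can be made concrete numerically. The following Python sketch is an illustration only (the two parameter matrices are an arbitrary choice, not taken from this paper): it evaluates one-sided directional derivatives of a double eigenvalue. They exist for every direction d, but their value equals the norm of d and is therefore not linear in d, so no gradient (Fréchet derivative) exists at the repeated point.

```python
import numpy as np

# Hypothetical two-parameter family (not from the paper):
# B(q) = I + q1*B1 + q2*B2 has the double eigenvalue 1 at q = 0.
# Along q = t*d the eigenvalues are 1 +/- t*||d||, so the one-sided
# derivative of the larger eigenvalue equals ||d||: it exists for every
# direction d but is not linear in d, hence no partial (Frechet)
# derivative exists at q = 0.
B1 = np.array([[1.0, 0.0], [0.0, -1.0]])
B2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def lam_max(q1, q2):
    # largest eigenvalue of the symmetric matrix B(q)
    return np.linalg.eigvalsh(np.eye(2) + q1 * B1 + q2 * B2)[-1]

def directional_derivative(d, t=1e-7):
    # one-sided difference quotient along direction d at q = 0
    d = np.asarray(d, dtype=float)
    return (lam_max(t * d[0], t * d[1]) - 1.0) / t

for d in [(1, 0), (-1, 0), (0, 1), (3, 4)]:
    print(d, directional_derivative(d))  # equals the Euclidean norm of d
```

Note that the derivative along (1, 0) and along (-1, 0) are both +1; a genuine gradient g would have to satisfy g·d for both, which is impossible.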

Received Dec. 5, 1996; revision received April 21, 1997; accepted for publication April 22, 1997. Copyright © 1997 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. Uwe Prells: Senior Research Assistant, Department of Mechanical Engineering, University College of Swansea, Singleton Park; e-mail: u.prells@swansea.ac.uk. Michael I. Friswell: Senior Lecturer, Department of Mechanical Engineering, University College of Swansea, Singleton Park; e-mail: m.i.friswell@swansea.ac.uk.

II. Recalling the Problem

Consider the general eigenvalue problem

$$ A X \Lambda = B X \quad (1) $$

with the normalization of the eigenvectors

$$ X^\top A X = I_N \quad (2) $$

Both equations are equivalent to

$$ A = (X X^\top)^{-1} \quad (3) $$

$$ B = (X \Lambda^{-1} X^\top)^{-1} \quad (4) $$

with the real-valued, symmetric, and positive definite $N \times N$ matrices $A$ and $B$ and the diagonal matrix $\Lambda$ of the eigenvalues $\lambda_i$, $i = 1, \ldots, N$. If $A = A(q)$ and $B = B(q)$ are given functions of the parameter vector $q \in \mathbb{R}^m$ such that for each $q \in \mathbb{R}^m$ the decompositions (3) and (4) of $A(q)$ and $B(q)$ exist, then, of course, the eigenvectors and eigenvalues will depend on $q$. If at the parameter vector $q^o$ the first $n \le N$ eigenvalues are equal, i.e.,

$$ \Lambda(q^o) = \begin{bmatrix} \lambda I_n & 0 \\ 0 & \Lambda_2 \end{bmatrix} \quad (5) $$

then the primary partition $X_1 \in \mathbb{R}^{N \times n}$ of the matrix of eigenvectors $X(q^o) = [X_1, X_2]$ is defined only up to postmultiplication by an $n \times n$ orthogonal matrix $H$. Indeed, from Eqs. (3) and (4), substituting $X_1 \to X_1 H$ gives

$$ X(q^o) X(q^o)^\top = X(q^o) \begin{bmatrix} H & 0 \\ 0 & I_{N-n} \end{bmatrix} \begin{bmatrix} H^\top & 0 \\ 0 & I_{N-n} \end{bmatrix} X(q^o)^\top \quad (6) $$

$$ X(q^o) \Lambda^{-1}(q^o) X(q^o)^\top = X(q^o) \begin{bmatrix} H & 0 \\ 0 & I_{N-n} \end{bmatrix} \begin{bmatrix} \lambda^{-1} I_n & 0 \\ 0 & \Lambda_2^{-1} \end{bmatrix} \begin{bmatrix} H^\top & 0 \\ 0 & I_{N-n} \end{bmatrix} X(q^o)^\top \quad (7) $$

For some parameterizations this redundancy can be used to ensure the existence of the partial derivatives of the eigenvalues at $q^o$. Partial differentiation of Eqs. (3) and (4) with respect to the $r$th component $q_r$ of $q$ leads to

$$ \mathcal{A}_r := X^\top A_{,r} X = -(Z_r + Z_r^\top) \quad (8) $$

$$ \mathcal{B}_r := X^\top B_{,r} X = \Lambda_{,r} - (\Lambda Z_r + Z_r^\top \Lambda) \quad (9) $$

where the subscript $,r$ denotes the partial derivative with respect to $q_r$ and the matrix $Z_r$ is defined by

$$ X_{,r}(q) = X(q) Z_r(q), \qquad \forall\, r = 1, \ldots, m \quad (10) $$

Note that Eqs. (8) and (9) imply that, for all $r = 1, \ldots, m$,

$$ \mathcal{A}_r \ \text{symmetric} \quad (11) $$

$$ \mathcal{B}_r \ \text{symmetric} \quad (12) $$

Using Eq. (8) to eliminate $Z_r^\top$ in Eq. (9) leads to

$$ [Z_r; \Lambda] + \Lambda_{,r} = \mathcal{B}_r - \mathcal{A}_r \Lambda =: \mathcal{C}_r \quad (13) $$

where the commutator product is defined by

$$ [A; B] := AB - BA \quad (14) $$

Considering only the diagonal of Eq. (8) leads to

$$ (Z_r)_{\mathrm{diag}} = -\tfrac{1}{2} (\mathcal{A}_r)_{\mathrm{diag}} \quad (15) $$

Inserting this result into the diagonal of Eq. (9), $\Lambda_{,r}$ can be calculated, yielding

$$ \Lambda_{,r} = (\mathcal{B}_r)_{\mathrm{diag}} - (\mathcal{A}_r \Lambda)_{\mathrm{diag}} = (\mathcal{C}_r)_{\mathrm{diag}} \quad (16) $$
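The eigenvalue-derivative formula of Eq. (16) is easy to check numerically away from a repeated point. The following Python sketch is not part of the paper; the matrices are arbitrary choices made here for illustration. It solves the generalized eigenvalue problem of Eqs. (1) and (2), forms the diagonal of C_r = B_r - A_r*Lambda, and compares it with a central-difference approximation of the eigenvalues.

```python
import numpy as np

def sym(M):
    return 0.5 * (M + M.T)

def gen_eig(A, B):
    """Solve B X = A X Lam with the normalization X^T A X = I of Eq. (2)."""
    L = np.linalg.cholesky(A)
    Li = np.linalg.inv(L)
    lam, V = np.linalg.eigh(Li @ B @ Li.T)  # reduced standard symmetric problem
    return lam, Li.T @ V                    # X = L^{-T} V

# a one-parameter family with distinct eigenvalues at q = 0 (arbitrary data)
rng = np.random.default_rng(1)
D = np.diag([1.0, 2.0, 3.0, 4.0])
B0 = np.diag([1.0, 3.0, 7.0, 10.0])
A1 = sym(rng.standard_normal((4, 4)))
B1 = sym(rng.standard_normal((4, 4)))
A = lambda q: D + q * A1
B = lambda q: B0 + q * B1

lam, X = gen_eig(A(0.0), B(0.0))
Ar = X.T @ A1 @ X                  # Eq. (8)
Br = X.T @ B1 @ X                  # Eq. (9)
Cr = Br - Ar @ np.diag(lam)        # Eq. (13)
lam_dot = np.diag(Cr)              # Eq. (16): eigenvalue derivatives

h = 1e-6
fd = (gen_eig(A(h), B(h))[0] - gen_eig(A(-h), B(-h))[0]) / (2 * h)
print(np.max(np.abs(lam_dot - fd)))  # tiny: formula and finite difference agree
```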


This can be done for all $r = 1, \ldots, m$ and also at $q = q^o$. The problem is that, at $q = q^o$, Eq. (13) requires the first $n \times n$ diagonal block $\mathcal{C}_r^{11}$ of

$$ \mathcal{C}_r(q^o) = \begin{bmatrix} \mathcal{C}_r^{11} & \mathcal{C}_r^{12} \\ \mathcal{C}_r^{21} & \mathcal{C}_r^{22} \end{bmatrix} \quad (17) $$

to be diagonal for all $r = 1, \ldots, m$, i.e.,

$$ \mathcal{C}_r^{11} := X_1^\top [B_{,r}(q^o) - \lambda A_{,r}(q^o)] X_1 = \text{diagonal}, \qquad \forall\, r = 1, \ldots, m \quad (18) $$

Only if Eq. (18) holds can the associated part of $\Lambda_{,r}$ be understood as the corresponding partial derivative. Note that, at $q = q^o$, except for the first $n \times n$ diagonal block $Z_r^{11}$ of $Z_r$, the first partial derivative of the eigenvectors can be calculated from Eq. (13). Partitioning $Z_r(q^o)$ according to the partitions of $\Lambda(q^o)$ in Eq. (5) and of $\mathcal{C}_r(q^o)$ in Eq. (17),

$$ Z_r(q^o) = \begin{bmatrix} Z_r^{11} & Z_r^{12} \\ Z_r^{21} & Z_r^{22} \end{bmatrix} \quad (19) $$

and using Eq. (13) leads to

$$ Z_r^{12} = \mathcal{C}_r^{12} (\Lambda_2 - \lambda I_{N-n})^{-1} \quad (20) $$

$$ Z_r^{21} = -(\Lambda_2 - \lambda I_{N-n})^{-1} \mathcal{C}_r^{21} \quad (21) $$

and, for $N - n > 1$,

$$ (Z_r^{22})_{ij} = \frac{(\mathcal{C}_r^{22})_{ij}}{c_j - c_i}, \qquad i \ne j, \quad i, j \in \{1, \ldots, N-n\} \quad (22) $$

where $c_i = (\Lambda_2)_{ii}$. The expression in Eq. (22) is equivalent to the coefficients of the standard series approach for nonrepeated eigenvalues. Note that the diagonals of $Z_r^{11}$ and of $Z_r^{22}$ are given by Eq. (15). To calculate the off-diagonal part $(Z_r^{11})_{\mathrm{off}}$ of $Z_r^{11}$, the second partial derivatives of the eigenvalue problem are needed. Here the subscript off denotes a matrix with a zero diagonal. Differentiating Eqs. (8) and (9) with respect to $q_s$ yields

$$ \mathcal{A}_{rs} := X^\top A_{,r,s} X = -(Z_{r,s} + Z_{r,s}^\top) - Z_s^\top \mathcal{A}_r - \mathcal{A}_r Z_s \quad (23) $$

$$ \mathcal{B}_{rs} := X^\top B_{,r,s} X = \Lambda_{,r,s} - (\Lambda_{,s} Z_r + Z_r^\top \Lambda_{,s}) - (\Lambda Z_{r,s} + Z_{r,s}^\top \Lambda) - Z_s^\top \mathcal{B}_r - \mathcal{B}_r Z_s \quad (24) $$

where, according to Eq. (10),

$$ X_{,r,s} = X (Z_s Z_r + Z_{r,s}) \quad (25) $$

Again using Eq. (8) to eliminate $Z_r^\top$ and $Z_s^\top$ leads to

$$ \mathcal{A}_{rs} = -(Z_{r,s} + Z_{r,s}^\top) + [Z_s; \mathcal{A}_r] + \mathcal{A}_s \mathcal{A}_r \quad (26) $$

$$ \mathcal{B}_{rs} = -(\Lambda Z_{r,s} + Z_{r,s}^\top \Lambda) + \Lambda_{,r,s} + \mathcal{A}_r \Lambda_{,s} + [Z_r; \Lambda_{,s}] + [Z_s; \mathcal{B}_r] + \mathcal{A}_s \mathcal{B}_r \quad (27) $$

Inserting $Z_{r,s}^\top$ from Eq. (26) into Eq. (27) and using Eq. (13) finally yields

$$ Q_{rs} := \mathcal{B}_{rs} - \mathcal{A}_{rs} \Lambda - \mathcal{A}_s \mathcal{C}_r - \mathcal{A}_r \mathcal{C}_s = [Z_{r,s}; \Lambda] + [Z_r; \Lambda_{,s}] + [Z_s; \mathcal{C}_r] + \Lambda_{,r,s} \quad (28) $$

On the right-hand side of this equation the diagonal of the first commutator vanishes at $q = q^o$, and the diagonal of the second commutator is zero because $\Lambda_{,s}$ is diagonal. Only the off-diagonal elements of $Z_r^{11}$, $r = 1, \ldots, m$, are unknown. But this block does not occur in the diagonal of the last commutator on the right-hand side of Eq. (28). That can be verified by recalling that, with reference to Eq. (18), the first diagonal block of $\mathcal{C}_r$ is diagonal, i.e.,

$$ \mathcal{C}_r^{11} = \Lambda_{,r}^1 \quad (29) $$

with the first partition $\Lambda_{,r}^1$ of the eigenvalue derivatives given by

$$ \Lambda_{,r} = \begin{bmatrix} \Lambda_{,r}^1 & 0 \\ 0 & \Lambda_{,r}^2 \end{bmatrix} \quad (30) $$

Only the off-diagonal blocks of the last commutator depend on $Z_s^{11}$. Thus, the second partial derivatives of the eigenvalues can be calculated from the diagonal of Eq. (28), yielding

$$ \Lambda_{,r,s} = (Q_{rs})_{\mathrm{diag}} - \begin{bmatrix} (Z_s^{12} \mathcal{C}_r^{21} - \mathcal{C}_r^{12} Z_s^{21})_{\mathrm{diag}} & 0 \\ 0 & (Z_s^{21} \mathcal{C}_r^{12} - \mathcal{C}_r^{21} Z_s^{12} + [Z_s^{22}; \mathcal{C}_r^{22}])_{\mathrm{diag}} \end{bmatrix} \quad (31) $$

It remains to derive conditions that enable the calculation of the unknown off-diagonal of $Z_r^{11}$. Because the first partition of Eq. (28) does not depend on $Z_{r,s}$, it can be used to calculate $(Z_r^{11})_{\mathrm{off}}$, i.e.,

$$ (Q_{rs}^{11})_{\mathrm{off}} = [(Z_r^{11})_{\mathrm{off}}; \Lambda_{,s}^1] + [(Z_s^{11})_{\mathrm{off}}; \Lambda_{,r}^1] + (Z_s^{12} \mathcal{C}_r^{21} - \mathcal{C}_r^{12} Z_s^{21})_{\mathrm{off}} \quad (32) $$

For distinct eigenvalue derivatives, Eq. (32) allows the calculation of $(Z_r^{11})_{\mathrm{off}}$ using, for instance, the diagonal $r = s$, which yields, with reference to Eq. (21), for $i \ne k$

$$ (Z_r^{11})_{ik} = \frac{1}{2(\lambda_{k,r} - \lambda_{i,r})} \big[ Q_{rr}^{11} - 2 \mathcal{C}_r^{12} (\Lambda_2 - \lambda I_{N-n})^{-1} \mathcal{C}_r^{21} \big]_{ik} \quad (33) $$

Of course, the consistency of Eq. (32) with this result has to be checked for the remaining equations resulting from $r \ne s$ because the coupling of the sensitivities of the eigenvalues and eigenvectors means that their existence is coupled, too. In the following section, conditions for the existence of the first partial derivatives will be inferred and their continuity investigated.

III. Existence of Eigenvalue and Eigenvector Sensitivities

A. Two Theorems on the Existence of the Partial Derivatives of Repeated Eigenvalues

As already pointed out, the eigenvectors related to multiple eigenvalues are, in general, not unique. In some cases it is possible to choose the orthonormal matrix $H$ in such a way that it diagonalizes the matrices $\mathcal{C}_r^{11}$ for all $r = 1, \ldots, m$. Thus, the existence of the partial derivatives of the eigenvalues depends on the choice of the parameterization.

Theorem 1: A necessary and sufficient condition for the existence of the partial derivatives of $\Lambda$ at $q^o$ is that

$$ [\mathcal{C}_r^{11}; \mathcal{C}_s^{11}] = 0, \qquad \forall\, r, s = 1, \ldots, m \quad (34) $$

From basic algebra (for further reading see, for instance, Refs. 14-16) it is clear that for two matrices having the same orthonormal eigenvectors the commutator vanishes. On the other hand, if the commutator of two matrices vanishes, they have the same diagonalizing matrix. If both matrices are symmetric [see Eqs. (11) and (12)], the diagonalizing matrix is orthonormal. The fact that there exist at most $n$ linearly independent commuting $n \times n$ matrices of this kind leads to the following theorem.

Theorem 2: Denoting by $\operatorname{cs}(\mathcal{C}_r^{11})$ the $n^2$-dimensional vector containing the sequence of the column vectors of $\mathcal{C}_r^{11}$, a necessary condition for the existence of the first partial derivatives of the eigenvalues at $q = q^o$ is

$$ \operatorname{rank}([\operatorname{cs}(\mathcal{C}_1^{11}), \ldots, \operatorname{cs}(\mathcal{C}_m^{11})]) \le n \quad (35) $$

The conditions of Theorems 1 and 2 enable a given parameterization to be tested. If either theorem is violated, further computations
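Theorems 1 and 2 translate directly into numerical tests. The following Python sketch is an illustration added here, not code from the paper: it checks the commutator condition of Eq. (34) and the rank condition of Eq. (35) for a given list of blocks C_r^11. The first pair of matrices is a hypothetical commuting (diagonal) pair; the second pair is the one arising from the linear parameterization of Eq. (41), for which Theorem 2 holds but Theorem 1 fails.

```python
import numpy as np

def commutator(A, B):
    return A @ B - B @ A

def theorem1_holds(C_list, tol=1e-10):
    """Eq. (34): all pairwise commutators of the C_r^11 blocks vanish."""
    return all(np.abs(commutator(Ci, Cj)).max() < tol
               for idx, Ci in enumerate(C_list) for Cj in C_list[idx + 1:])

def theorem2_holds(C_list):
    """Eq. (35): rank of the stacked column-string vectors is at most n."""
    n = C_list[0].shape[0]
    M = np.column_stack([C.flatten(order="F") for C in C_list])  # cs(.)
    return np.linalg.matrix_rank(M) <= n

# hypothetical commuting pair (diagonal matrices always commute)
diag_pair = [np.diag([2.0, 1.0]), np.diag([0.0, 3.0])]
# the pair from the linear parameterization of Eq. (41)
mixed_pair = [np.array([[2.0, 0.0], [0.0, 1.0]]),
              np.array([[0.0, 1.0], [1.0, 0.0]])]

print(theorem1_holds(diag_pair), theorem2_holds(diag_pair))
print(theorem1_holds(mixed_pair), theorem2_holds(mixed_pair))
```

For `mixed_pair` the rank test passes while the commutator test fails, which is exactly the situation discussed around Eqs. (42) and (43).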


are in vain because the derivatives will not exist at $q = q^o$. Moreover, if the parameterization is permissible in the specified sense, at $q = q^o$ the eigenvalues and eigenvectors and their first partial derivatives are continuous if $A(q)$ and $B(q)$ and their first derivatives are continuous. As a consequence of the continuity of the first partial derivatives, the order of differentiation within the second partial derivatives is arbitrary. Interchanging $r \leftrightarrow s$, for example, in Eq. (28) and using $\mathcal{B}_{rs} = \mathcal{B}_{sr}$ as well as $\mathcal{A}_{rs} = \mathcal{A}_{sr}$ lead to

$$ 0 = Q_{rs} - Q_{sr} = [Z_{r,s} - Z_{s,r}; \Lambda] + [Z_r; \Lambda_{,s}] - [Z_s; \Lambda_{,r}] + [Z_s; \mathcal{C}_r] - [Z_r; \mathcal{C}_s] + \Lambda_{,r,s} - \Lambda_{,s,r} = [Z_{r,s} - Z_{s,r}; \Lambda] + \Lambda_{,r,s} - \Lambda_{,s,r} + [Z_s; [Z_r; \Lambda]] - [Z_r; [Z_s; \Lambda]] \quad (36) $$

where for $\mathcal{C}_r$ and $\mathcal{C}_s$ the corresponding left-hand sides of Eq. (13) have been inserted. Interchanging the arguments in the last commutator of Eq. (36) and using the Bianchi identity

$$ [A; [B; C]] + [B; [C; A]] + [C; [A; B]] = 0 \quad (37) $$

to rewrite the last two commutators in Eq. (36) yields

$$ [Z_{r,s} - Z_{s,r} - [Z_r; Z_s]; \Lambda] + \Lambda_{,r,s} - \Lambda_{,s,r} = 0 \quad (38) $$

In general, this equation holds if

$$ \Lambda_{,r,s} = \Lambda_{,s,r} \quad (39) $$

$$ Z_{r,s} - Z_{s,r} = [Z_r; Z_s] \quad (40) $$

The first equation expresses the arbitrariness of the order of differentiation of the eigenvalues and is equivalent to the continuity of their first partial derivatives. The second equation is, with reference to Eq. (25), equivalent to the continuity of the first partial derivatives of the eigenvectors. Before corresponding conditions for the existence of the partial derivatives of the eigenvectors are derived, the following academic example of Seyranian et al. (Ref. 13; see also Ref. 7, p. 158) shows that the partial derivatives of the eigenvalues do not exist for every parameterization. For the example with a linear parameterization, let $A = I_2$, and for $q = (q_1, q_2)^\top$ let

$$ B(q) := I_2 + \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} q_1 + \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} q_2 \quad (41) $$

which leads to the repeated eigenvalue $\lambda_1 = \lambda_2 = 1$ at $q = 0$. Theorem 2 does hold in this case because

$$ \operatorname{rank}([\operatorname{cs}(B_1), \operatorname{cs}(B_2)]) = \operatorname{rank} \begin{bmatrix} 2 & 0 \\ 0 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} = 2 = n \quad (42) $$

Checking Theorem 1 by using $\mathcal{C}_r = B_{,r}$ leads to

$$ [B_1; B_2] = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \ne 0 \quad (43) $$

Thus, the given parameterization (41) is not permissible. Indeed, the calculation of the eigenvalues leads to

$$ \lambda_\pm(q) = 1 + \tfrac{3}{2} q_1 \pm w(q) \quad (44) $$

where

$$ w(q) := \sqrt{(q_1/2)^2 + q_2^2} \quad (45) $$

The first partial derivatives are

$$ \frac{\partial \lambda_\pm}{\partial q} = \begin{pmatrix} 3/2 \\ 0 \end{pmatrix} \pm \frac{1}{4w} \begin{pmatrix} q_1 \\ 4 q_2 \end{pmatrix} \quad (46) $$

and the second partial derivatives turn out to be

$$ \frac{\partial^2 \lambda_\pm}{\partial q\, \partial q^\top} = \pm \frac{1}{4 w^3} \begin{bmatrix} q_2^2 & -q_1 q_2 \\ -q_1 q_2 & q_1^2 \end{bmatrix} \quad (47) $$

Obviously, the limits of the derivatives as $q \to 0$ do not exist.

B. Two Theorems on the Existence of the Partial Derivatives of Eigenvectors

Although it is popular (see, for instance, Ref. 5 or 9) to use only the diagonal $r = s$ of Eq. (32) to calculate $(Z_r^{11})_{\mathrm{off}}$, it may be that this solution violates the remaining equations resulting from $r \ne s$. Thus, it remains to investigate Eq. (32) and to assure its consistency. It is sufficient to show that the system of equations

$$ (G_{rs})_{\mathrm{off}} = [(Z_r^{11})_{\mathrm{off}}; \Lambda_{,s}^1] + [(Z_s^{11})_{\mathrm{off}}; \Lambda_{,r}^1], \qquad \forall\, r, s = 1, \ldots, m \quad (48) $$

has unique solutions $(Z_r^{11})_{\mathrm{off}}$, $r = 1, \ldots, m$, for given diagonal matrices $\Lambda_{,r}^1$ having distinct elements, i.e., $\lambda_{i,r} \ne \lambda_{k,r}$ for $i \ne k$, $i, k \in \{1, \ldots, n\}$, for each $r = 1, \ldots, m$. The matrix $G_{rs}$ in Eq. (48) represents all known terms of Eq. (32), i.e.,

$$ (G_{rs})_{\mathrm{off}} := (Q_{rs}^{11})_{\mathrm{off}} - (Z_s^{12} \mathcal{C}_r^{21} - \mathcal{C}_r^{12} Z_s^{21})_{\mathrm{off}} \quad (49) $$

where $Z_s^{12}$ and $Z_s^{21}$ are known from Eqs. (20) and (21). Because the diagonal part of Eq. (48) is identically zero, it represents a coupled system of $n(n-1)$ equations. Writing one equation for the element $g_{ikrs}$ of $G_{rs}$ in row $i$ and column $k$, where $i \ne k$ for $i, k \in \{1, \ldots, n\}$, and denoting the corresponding element of $Z_r^{11}$ by $z_{ikr}$, Eq. (48) reads

$$ g_{ikrs} = (\lambda_{k,s} - \lambda_{i,s}) z_{ikr} + (\lambda_{k,r} - \lambda_{i,r}) z_{iks} \quad (50) $$

Expanding this equation to the $m^2$ equations resulting from $r, s = 1, \ldots, m$ leads to

$$ \underbrace{\begin{bmatrix} g_{ik11} & \cdots & g_{ik1m} \\ \vdots & \ddots & \vdots \\ g_{ikm1} & \cdots & g_{ikmm} \end{bmatrix}}_{=:\, G_{ik}} = z_{ik} h_{ik}^\top + h_{ik} z_{ik}^\top \quad (51) $$

where $h_{ik} := (\lambda_{k,1} - \lambda_{i,1}, \ldots, \lambda_{k,m} - \lambda_{i,m})^\top$ and $z_{ik} := (z_{ik1}, \ldots, z_{ikm})^\top$. Because the right-hand side of this equation is symmetric, the left-hand side has to be symmetric, too. Moreover, the ranks of both sides have to be the same, which is equivalent to the condition

$$ \operatorname{rank}(G_{ik}) = \operatorname{rank}(z_{ik} h_{ik}^\top + h_{ik} z_{ik}^\top) \le 2 \quad (52) $$

Of course, Eq. (51) has a unique solution only if

$$ N_{ik} G_{ik} N_{ik} = 0 \quad (53) $$

where the symmetric matrix $N_{ik}$ is the orthogonal projector (see, for instance, Ref. 17) onto the $(m-1)$-dimensional orthogonal complement of the one-dimensional subspace spanned by $h_{ik}$, i.e.,

$$ N_{ik} := I_m - P_{ik} \quad (54) $$

where

$$ P_{ik} := \frac{h_{ik} h_{ik}^\top}{\| h_{ik} \|^2} \quad (55) $$
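The non-permissible linear parameterization of Eq. (41) can be verified numerically. The following Python sketch is an illustration added here (not code from the paper): it reproduces the closed-form eigenvalues of Eqs. (44) and (45) and shows that the derivative of the larger eigenvalue with respect to q1 approaches different values along q1 > 0 and q1 < 0, so its limit at q = 0 does not exist.

```python
import numpy as np

# the two parameter matrices of Eq. (41)
B1 = np.array([[2.0, 0.0], [0.0, 1.0]])
B2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def eigs(q1, q2):
    # eigenvalues of B(q) = I + q1*B1 + q2*B2, ascending
    return np.linalg.eigvalsh(np.eye(2) + q1 * B1 + q2 * B2)

def closed_form(q1, q2):
    w = np.hypot(q1 / 2.0, q2)                       # Eq. (45)
    return 1.0 + 1.5 * q1 - w, 1.0 + 1.5 * q1 + w    # Eq. (44)

# the closed form matches the numerical eigenvalues
assert np.allclose(eigs(0.3, -0.2), closed_form(0.3, -0.2))

# dlam_plus/dq1 = 3/2 + q1/(4w) from Eq. (46):
# it tends to 2 along q1 > 0 but to 1 along q1 < 0
def dlam_dq1(q1, q2, h=1e-8):
    return (eigs(q1 + h, q2)[1] - eigs(q1 - h, q2)[1]) / (2 * h)

print(dlam_dq1(1e-4, 0.0), dlam_dq1(-1e-4, 0.0))  # close to 2.0 and 1.0
```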



m

is the orthogonal projector into the subspace spanned by h i k 2 To see this, one can multiply Eq. (51) by h i k , yielding
Gi k h i k
> 2 = z i k k h i k k + hi k z i k hi k > 2 = ( k h i k k I m + h i k h i k ) zi k

(56)

For k h i k k 6= 0, the matrix on the right-hand side of Eq. (56) is nonsingular. The unique solution of Eq. (56) is given by zi k

Obviously, the matrices Gi k are symmetric for all i 6= k 2 f 1, . . . , n g . Thus, the rst of the necessary conditions of Theorem 3 does hold. Although Eq. (65) is related to Eq. (53) via h i k = fk f i , neither the rank condition nor the suf cient condition formulated in Theorem 3 does necessarily hold for any parameterization. Even in the case of a linear parameterization of the matrix B , Eq. (65) becomes
Gi k

> Ck DCi

Ci D Ck

>

(66)

= k h k ik = k h k ik
1

( (

k hi k k

Im Pi k

1 2

hi k h i k

>

Im

1 2

Gi k h i k

Gi k h i k

(57)

Of course, in general this solution is not a solution of Eq. (51). Inserting z i k from Eq. (57) into Eq. (51) yields
Gi k

which may be of rank m > 2 for some i 6= k , i, k 2 f 1, . . . , n g . The suf cient condition formulated in Theorem 3 should be checked numerically rather than analytically. The analytical expression of how the projector N i k acts on Gi k is rather dif cult. A better insight can be achieved by translating the effect of N i k on G r s rather than on Gi k . Of course, in the light of Eq. (53) the effect of N i k on Gi k is related to the effect of the projector Pi k : 0

= ( Im

1 2

Pi k ) Gi k Pi k

+ Pi k Gi k ( I m
Pi k Gi k Pi k

1 2

Pi k ) (58)

= N i k Gi k N i k =

Gi k

Gi k Pi k

Pi k Gi k

+ Pi k Gi k Pi k

(67)

Gi k Pi k

+ Pi k Gi k
Pi k Gi k

Considering only the element in row r and in column s of Eq. (67) and expanding this expression for all i 6= k 2 f 1, . . . , n g leads to 0 = (G r s ) off H ( [ Ss ; K
1 ,r

The latter equation is equivalent to 0

] + [Sr ; K

1 ,s

]+

[[T ; K

1 ,r

]; K

1 ,s

])

Gi k

Gi k Pi k

+ Pi k Gi k Pi k = N i k Gi k N i k

(59)

(68) where denotes the Hadamard product, which is the componentwise product of two matrices having the same size, 17 and Sr : =

Summarizing what has been said in this section leads to the following. Theorem 3: Necessary conditions for the existence of a unique solution of Eq. (51) are
Gi k

[G sr ; K
s= 1

1 ,s

(69)

Gi k

>

(60) (61)

T :=

rank( Gi k ) 2

S
r

[K
s

1 ,s

; Ss ]

(70)

=1

Moreover, Eq. (32) is consistent with the result given in Eq. (33) if and only if N i k Gi k N i k

( H ) i k :=

=0

8 i 6= k ,

i, k 2

f 1, . . . , n g

(62)

[S

(k
=1

k,r

i, r )

(71)

where Gi k and N i k are de ned in Eqs. (5155). Of course, this theorem leads to restrictions on the given param eterization. Inserting the expressions given in Eqs. (20) and (21) into Eq. (49), and using Eq. (28) with partitions for Ar s , Br s , Ar , and for Br corresponding to that of Z r and Cr de ned in Eqs. (19) and (17), respectively, yields Grs

This leads to the formulation of the following theorem. Theorem 4: Equation (32) is consistent with the result given in Eq. (33) if and only if 1 1 1 1 ( G r s ) off = H ( [ Ss ; K , r ] + [Sr ; K , s ] + H [[T ; K , r ]; K , s ]) (72)
8 r, s

11 Br s

11 `Ar s

As

11

1 ,r

Ar

11

1 ,s

Cs

21 >

21 D Cr

Cr

21 >

D Cs

21

(63) where D := ( C `I N n ) . Considering the element in row i and column k of Eq. (63) leads to gi kr s
1

= b i kr s

`a i kr s

ai ks k

k,r

a i kr k

k,s

> c i s D ckr

ci r D ck s (64)

>

= 1, . . . , m , where Sr , T , and H are de ned in Eqs. (6971). Of course, Theorem 3 is equivalent to Theorem 4 but for a numerical check either one of them may be used. Before a threedimensional example is presented (in the next section), the following academical exam ple of a nonlinear parameterization shows that, though the partial derivatives of the eigenvalues exist, the partial derivatives of the eigenvectors do not exist in general. For the example with a nonlinear parameterization, let A = I 2 and 2 de ne B(q) with q 2 by
q2 (73) + 0 2 q 1 + q2 1 which leads at q = 0 to eigenvalues k 1 = k 2 = 1. A brief calculation shows
C1 C2

where bi kr s , a i kr s , and a i kr denoting the elements in the i th row and 11 11 11 in the k th column of the matrices Br s , Ar s , and Ar , respectively, and c ir is the i th column vector of Cr21 . W riting Eq. (64) for all rows r = 1, . . . , m and for all columns s = 1, . . . , m yields

B(q) := I2

[ ] [
,1

q2

Gi k

,
k

bi k 11

bi k 1m . . . b i km m

. . . bi k m 1

..

& ( = : Bi k

*
*

ai k 11 . . . ai k m 1

ai k 1m . . . ai k m m

..

= =

B1 B2

= B,1 = K = B,2 = K

& ( = : Ai k

*
Ci DCk
>

,2

[ ] = [ ]
= 0 2
0 0 0 1

(74)

(75)

k,1

. . .

, & ( * = : fk
k,m

(ai k 1 , . . . , a i k m )

& ( > = : ai k

> a i k fk

, & ( *
c >km Ck
>


c>k 1 . . .

Thus, due to Theorems 1 and 2, the partial derivatives of the eigenvalues exist. To check Theorem 3 or 4, one has to calculate the following quantities:
B11 B12

DCi

= B, 1, 1 = 0 = Q 11 = B , 1, 2 = 0 = Q 12 = Q 21

(76) (77) (78)

B21

(65)

B22

= B, 2, 2 = Q 22 =

[ ]
2 0
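The quantities of this nonlinear example can be checked by finite differences. The following Python sketch is an added illustration (not code from the paper): it confirms the first and second derivatives of Eqs. (74), (75), and (78) and evaluates the consistency residual N12 G12 N12 that appears in Eqs. (79)-(83); its nonzero value signals that the eigenvector derivatives do not exist.

```python
import numpy as np

def B(q1, q2):  # the nonlinear parameterization of Eq. (73)
    return (np.eye(2) + q1 * np.array([[2.0, 0.0], [0.0, 0.0]])
            + q2 * np.array([[0.0, q2], [q2, 1.0]]))

h = 1e-6
# first derivatives at q = 0 are diagonal -> eigenvalue derivatives exist
B1 = (B(h, 0) - B(-h, 0)) / (2 * h)        # -> diag(2, 0), Eq. (74)
B2 = (B(0, h) - B(0, -h)) / (2 * h)        # -> diag(0, 1), Eq. (75)

# second derivative B_,2,2 = [[0, 2], [2, 0]] = Q_22, Eq. (78)
B22 = (B(0, h) - 2 * B(0, 0) + B(0, -h)) / h**2

# consistency check of Theorem 3:
G12 = np.array([[0.0, 0.0], [0.0, 2.0]])   # (G12)_rs = (B_rs)_12
h12 = np.array([-2.0, 1.0])                # gaps of the eigenvalue derivatives
N12 = np.eye(2) - np.outer(h12, h12) / (h12 @ h12)   # Eqs. (54), (55)
residual = N12 @ G12 @ N12                 # nonzero -> no eigenvector derivatives
print(np.round(B1), np.round(B2), residual, sep="\n")
```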

A brief calculation shows that $G_{ik} = 0$ for all but one pair of indices $(i, k) = (1, 2)$, yielding

$$ G_{12} = \begin{bmatrix} 0 & 0 \\ 0 & 2 \end{bmatrix} \quad (79) $$

On the other hand,

$$ h_{12} = \begin{pmatrix} \lambda_{2,1} - \lambda_{1,1} \\ \lambda_{2,2} - \lambda_{1,2} \end{pmatrix} = \begin{pmatrix} -2 \\ 1 \end{pmatrix} \quad (80) $$

and, therefore,

$$ N_{12} = I_2 - \frac{h_{12} h_{12}^\top}{\| h_{12} \|^2} = \frac{1}{5} \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \quad (81) $$

Finally, to check Theorem 3, one has to calculate

$$ N_{12} G_{12} N_{12} = \frac{2}{25} \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \quad (82) $$

$$ = \frac{8}{25} \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \ne 0 \quad (83) $$

Thus, the partial derivatives of the eigenvectors do not exist. A direct calculation using only the diagonal [see Eq. (33)] without checking the consistency would lead to the incorrect results

$$ Z_1 = 0 \quad (84) $$

$$ Z_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \quad (85) $$

In the next section, a three-dimensional example is investigated concerning permissible linear parameterizations.

IV. Example

To demonstrate the application of the theorems, the spring-mass model depicted in Fig. 1, which corresponds to the example presented by Friswell (Ref. 5), is used. The mass matrix

$$ A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (86) $$

is assumed to be constant, and the stiffness matrix is parameterized by

$$ B(q) = \underbrace{e_1 e_1^\top}_{=:\, B_1} q_1 + \underbrace{8 e_2 e_2^\top}_{=:\, B_2} q_2 + \underbrace{e_3 e_3^\top}_{=:\, B_3} q_3 + \underbrace{2 (e_1 - e_2)(e_1 - e_2)^\top}_{=:\, B_4} q_4 + \underbrace{2 (e_2 - e_3)(e_2 - e_3)^\top}_{=:\, B_5} q_5 + \underbrace{(e_1 - e_3)(e_1 - e_3)^\top}_{=:\, B_6} q_6 \quad (87) $$

which corresponds (see Fig. 1) to

$$ (k_1, k_2, k_3, k_4, k_5, k_6) = (q_1, 8 q_2, q_3, 2 q_4, 2 q_5, q_6) \quad (88) $$

Fig. 1 Simple discrete three-degree-of-freedom model.

At $q^o = (0, 1, 0, 1, 1, 1)^\top$, a repeated eigenvalue $\lambda_1 = \lambda_2 = 4$ occurs. The associated eigenvectors are

$$ x_1 = a (1, 0, -1)^\top, \qquad x_2 = b (1, -1, 1)^\top \quad (89) $$

The remaining eigenvalue is $\lambda_3 = 1$ with the eigenvector

$$ x_3 = c (2, 1, 2)^\top \quad (90) $$

The normalization constants are $a := 1/\sqrt{2}$, $b := 1/\sqrt{6}$, and $c := 1/\sqrt{12}$. Because $A$ does not depend on the parameters, $\mathcal{C}_r = \mathcal{B}_r$, and because all submatrices $B_r$ are symmetric dyads generated by a single vector, each $\mathcal{C}_r$ also is symmetric and generated by one vector only. The generating vectors are listed in Table 1. For example,

$$ \mathcal{C}_1 = (a, b, 2c)^\top (a, b, 2c) \quad (91) $$

Table 1  Generators of the symmetric dyads of C_r

        C1      C2      C3      C4      C5      C6
        a       0       -a      2a      2a      2a
        b       -8b     b       4b      -4b     0
        2c      8c      2c      2c      -2c     0

To check Theorems 1 and 2, the matrices $\mathcal{C}_r^{11} \in \mathbb{R}^{2 \times 2}$ have to be calculated for all $r = 1, \ldots, 6$. They are generated by the two-dimensional vectors containing the first two components of the generators listed in Table 1; for instance,

$$ \mathcal{C}_1^{11} = (a, b)^\top (a, b) \quad (92) $$

Theorem 2 yields

$$ \operatorname{rank} \begin{bmatrix} a^2 & 0 & a^2 & 2a^2 & 2a^2 & 4a^2 \\ ab & 0 & -ab & 4ab & -4ab & 0 \\ ab & 0 & -ab & 4ab & -4ab & 0 \\ b^2 & 8b^2 & b^2 & 8b^2 & 8b^2 & 0 \end{bmatrix} = 3 > 2 = n \quad (93) $$

Thus, the partial derivatives of the eigenvalues will not exist for the complete parameterization as defined by Eq. (87). To answer the question of a permissible parameterization, Theorem 1 has to be checked. This requires the calculation of 15 commutators. The resulting matrices are skew symmetric. The corresponding upper off-diagonal elements are listed in Table 2 for all $r < s$. Of the six matrices, only the two corresponding to the parameters $q_2$ and $q_6$ commute. Thus, an orthogonal matrix $H \in \mathbb{R}^{2 \times 2}$ will only exist for the parameterization

$$ B(q_1, q_2) = 2 \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} + 8 e_2 e_2^\top q_1 + (e_1 - e_3)(e_1 - e_3)^\top q_2 \quad (94) $$

Table 2  Upper off-diagonal elements [C_r^11; C_s^11]_12 / (ab) of the skew-symmetric commutators for tuples (r, s) with r < s

        (r, s)    value            (r, s)    value
        (1, 2)    32/3             (2, 6)    0
        (1, 3)    -2/3             (3, 4)    2
        (1, 4)    10/3             (3, 5)    -10/3
        (1, 5)    -2               (3, 6)    2
        (1, 6)    -2               (4, 5)    32/3
        (2, 3)    32/3             (4, 6)    -16
        (2, 4)    -256/3           (5, 6)    16
        (2, 5)    256/3

where the parameters have been changed according to $(q_2, q_6) \to (q_1, q_2)$. Using this new parameterization to calculate the first derivatives of the eigenvalues from Eq. (13) at $q_1 = q_2 = 1$, it turned out
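The setup of this example can be reproduced with a few lines of code. The following Python sketch is an added illustration (not code from the paper): it assembles A and B(q°) from Eqs. (86) and (87), confirms the repeated eigenvalue 4 of Eqs. (89) and (90), and finds that only the pair (r, s) = (2, 6) of the blocks C_r^11 commutes, in agreement with Table 2.

```python
import numpy as np

e = np.eye(3)
A = np.diag([1.0, 4.0, 1.0])                                 # Eq. (86)
Bsub = [np.outer(e[0], e[0]), 8 * np.outer(e[1], e[1]),
        np.outer(e[2], e[2]),
        2 * np.outer(e[0] - e[1], e[0] - e[1]),
        2 * np.outer(e[1] - e[2], e[1] - e[2]),
        np.outer(e[0] - e[2], e[0] - e[2])]                  # Eq. (87)
q0 = np.array([0.0, 1.0, 0.0, 1.0, 1.0, 1.0])
B0 = sum(q * Br for q, Br in zip(q0, Bsub))

# generalized eigenproblem B x = lam A x with X^T A X = I, via Cholesky
L = np.linalg.cholesky(A)
Li = np.linalg.inv(L)
lam, V = np.linalg.eigh(Li @ B0 @ Li.T)
X = Li.T @ V
print(lam)          # 1 and the repeated pair 4, 4

# C_r^11 blocks in the basis of the repeated-pair eigenvectors;
# the commutator test of Theorem 1 is invariant under the free rotation H
Xr = X[:, 1:]       # eigenvectors of the repeated eigenvalue 4
C11 = [Xr.T @ Br @ Xr for Br in Bsub]
commuting = [(r + 1, s + 1)
             for r in range(6) for s in range(r + 1, 6)
             if np.abs(C11[r] @ C11[s] - C11[s] @ C11[r]).max() < 1e-9]
print(commuting)    # only (2, 6), as in Table 2
```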


that $\mathcal{C}_r^{11}$ is already diagonal, i.e., $H = I_2$. The eigenvalue derivatives are

$$ \frac{\partial \Lambda}{\partial q_1} = \frac{2}{3} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (95) $$

$$ \frac{\partial \Lambda}{\partial q_2} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (96) $$

Because the inertia matrix $A$ does not depend on the parameters, it follows from Eq. (8) that $Z_r$ is skew symmetric and, thus, $(Z_r)_{ii} = 0$ for $r = 1, 2$ and for $i = 1, \ldots, N$. Equations (20) and (21) lead to

$$ (Z_r)_{13} = 0, \qquad \forall\, r = 1, 2 \quad (97) $$

$$ (Z_1)_{23} = \tfrac{8}{3} bc \quad (98) $$

$$ (Z_2)_{23} = 0 \quad (99) $$

To determine the off-diagonal part $(Z_r)_{12}$, Eq. (32) can be used. For the example considered here, $Q_{rs}$ and $\mathcal{A}_r$ are zero, and Eq. (32) is consistent with the result

$$ (Z_r)_{12} = 0, \qquad \forall\, r = 1, 2 \quad (100) $$

The result $X_{,2} = X Z_2 = 0$ means that, to a first-order approximation, changes in the stiffness $k_6$ do not affect the eigenvectors. Using Eq. (31) to calculate the second derivatives of the eigenvalues, this particular example leads to

$$ (\Lambda_{,r,s})_{ii} = ([\mathcal{C}_r; Z_s])_{ii} \quad (101) $$

which yields

$$ \frac{\partial^2 \Lambda}{\partial q_1^2} = \frac{16}{27} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \quad (102) $$

All other second derivatives turned out to be zero. To compare these results with those presented by Friswell (Ref. 5), the derivatives of the eigenvalues and eigenvectors with respect to the parameter $h$ will be calculated, where

$$ q_1 = (3h - 1)/2 \quad (103) $$

$$ q_2 = h \quad (104) $$

At $q_1 = q_2 = 1$, $h = 1$, the operators of partial differentiation are related via

$$ \frac{\partial}{\partial h} = \frac{3}{2} \frac{\partial}{\partial q_1} + \frac{\partial}{\partial q_2} \quad (105) $$

$$ \frac{\partial^2}{\partial h^2} = \frac{9}{4} \frac{\partial^2}{\partial q_1^2} + 3 \frac{\partial^2}{\partial q_1 \partial q_2} + \frac{\partial^2}{\partial q_2^2} \quad (106) $$

Using the first operator equation together with Eqs. (95) and (96) leads to

$$ \Lambda_{,h} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (107) $$

From Eq. (102) the second operator equation yields

$$ \Lambda_{,h,h} = \frac{4}{3} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \quad (108) $$

To calculate the derivatives of the eigenvectors with respect to $h$ at $h = 1$, the operator in Eq. (105) leads to

$$ X_{,h} = \tfrac{3}{2} X Z_1 = \begin{bmatrix} 0 & -2b/3 & 2c/3 \\ 0 & -b/3 & -2c/3 \\ 0 & -2b/3 & 2c/3 \end{bmatrix} \quad (109) $$

The results given by Friswell (Ref. 5) match those in Eqs. (107-109).

V. Conclusions

The existence of the derivatives of eigenvalues has been investigated. In the case of multiple eigenvalues, the partial derivatives do not exist for every parameterization. Two conditions have been deduced that enable a given parameterization to be tested to determine its permissibility. For any continuous permissible parameterization, the partial derivatives of the eigenvalues and eigenvectors are continuous, too. The application of the theorems presented has been demonstrated by examples. In preparation is a method to calculate the partial derivatives of repeated eigenvalues independent of the existence of eigenvectors.

Acknowledgment

M. I. Friswell gratefully acknowledges the support of the Engineering and Physical Science Research Council through the award of an Advanced Fellowship.

References

1. Chen, T.-Y., "Design Sensitivity Analysis for Repeated Eigenvalues in Structural Design," AIAA Journal, Vol. 31, No. 12, 1993, pp. 2347-2350.
2. Choi, K. K., and Haug, E. J., "Optimization of Structures with Repeated Eigenvalues," Optimization of Distributed Parameter Structures, edited by E. J. Haug and J. Cea, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1981, pp. 219-277.
3. Choi, K. K., and Haug, E. J., "A Numerical Method for Optimizing Structures with Repeated Eigenvalues," Optimization of Distributed Parameter Structures, edited by E. J. Haug and J. Cea, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1981, pp. 534-551.
4. Choi, K. K., and Haug, E. J., "Repeated Eigenvalues in Mechanical Optimization Problems," Problems of Elastic Stability and Vibrations, edited by V. Komkov, Vol. 4, Contemporary Mathematics, American Mathematical Society, 1981, pp. 61-86.
5. Friswell, M. I., "The Derivatives of Repeated Eigenvalues and Their Associated Eigenvectors," Journal of Vibration and Acoustics, Vol. 118, July 1996, pp. 390-397.
6. Haug, E. J., and Choi, K. K., "Systematic Occurrence of Repeated Eigenvalues in Structural Optimization," Journal of Optimization Theory and Applications, Vol. 38, No. 2, 1982, pp. 251-274.
7. Haug, E. J., Choi, K. K., and Komkov, V., Design Sensitivity Analysis of Structural Systems, Academic Press, New York, 1986.
8. Haug, E. J., and Rousselet, B., "Design Sensitivity Analysis in Structural Mechanics, II: Eigenvalue Variations," Journal of Structural Mechanics, Vol. 8, No. 2, 1980, pp. 161-186.
9. Lallement, G., and Kozanek, J., "Parametric Correction of Self-Adjoint Finite Element Models in the Presence of Multiple Eigenvalues," Inverse Problems in Engineering, Vol. 1, 1995, pp. 107-131.
10. Masur, E. F., and Mróz, Z., "Non-Stationary Optimality Conditions in Structural Design," International Journal of Solids and Structures, Vol. 15, No. 6, 1979, pp. 503-512.
11. Masur, E. F., and Mróz, Z., "Singular Solutions in Structural Optimization Problems," Variational Methods in the Mechanics of Solids, edited by S. Nemat-Nasser, Pergamon, New York, 1980, pp. 337-343.
12. Mottershead, J. E., and Friswell, M. I., "Model Updating in Structural Dynamics: A Survey," Journal of Sound and Vibration, Vol. 167, No. 2, 1993, pp. 347-375.
13. Seyranian, A. P., Lund, E., and Olhoff, N., "Multiple Eigenvalues in Structural Optimization Problems," Structural Optimization, Vol. 8, Dec. 1994, pp. 207-227.
14. Curtis, M. L., Matrix Groups, Springer-Verlag, New York, 1979.
15. Humphreys, J. E., Introduction to Lie Algebras and Representation Theory, Springer-Verlag, New York, 1972.
16. Jacobson, N., Basic Algebra, W. H. Freeman, San Francisco, CA, 1974.
17. Rao, C. R., and Mitra, S. K., Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1974.

A. D. Belegundu, Associate Editor