See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/280947894

2 authors, including: Andrew Whittaker, University at Buffalo, The State University of New York
Abstract

Monte Carlo analysis underpins some modern seismic performance and risk assessment methodologies. Such analysis requires thousands to millions of simulations to generate stable distributions of loss. The generation of large numbers of simulations by nonlinear response-history analysis is impractical because of a lack of appropriate ground motion recordings and computational expense. An algorithm
96 Volume 45 Number 1 March 2015 The Bridge and Structural Engineer
unchanged by these other sources of uncertainty. The technical bases for the chol and cholcov routines are presented in the Matlab documentation [7] and are not repeated here. The chol algorithm is stable only if the associated matrix (the correlation matrix [RYY] or the related covariance matrix [Σ] = [DY][RYY][DY]) is not rank deficient, whereas cholcov is stable for rank-deficient matrices. Appendix A provides a proof that the covariance matrix will be rank deficient when n ≥ m.

Figure 1 illustrates the Yang et al. process for generating statistically consistent demand vectors. The objective is to generate a matrix of demand vectors [Z] with the same statistical characteristics as [Y] but which is much larger, to enable Monte Carlo simulation. This process uses a set of vectors, one for each demand parameter, of uncorrelated standard normal random variables (zero mean and unit standard deviation), [U], together with a linear transformation and a linear translation:

[Z]^T = [A][U]^T + [B] = [DY][LY][U]^T + [MY]   (1)

The goal of this paper is to present an alternate algorithm to the Yang et al. process, one that can better recover the underlying statistics of [Z] with fewer simulations. The alternative algorithm is based on Spectral Value Decomposition (SVD) and can also be used with the cholcov routine.
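As a sketch of Eq (1), the following NumPy fragment (illustrative only; the paper's own implementation uses Matlab's chol, and all variable names here are ours) builds [MY], [DY], [RYY] and a Cholesky factor [LY] from a small synthetic [Y], then generates a large set of statistically consistent vectors [Z]:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic [Y]: m ground-motion realizations of n demand parameters (ln space)
m, n = 11, 3
Y = rng.normal(size=(m, n)) @ np.array([[1.0, 0.5, 0.2],
                                        [0.0, 1.0, 0.4],
                                        [0.0, 0.0, 1.0]])

MY = Y.mean(axis=0)                     # [MY]: vector of means
DY = np.diag(Y.std(axis=0, ddof=1))    # [DY]: diagonal matrix of standard deviations
RYY = np.corrcoef(Y, rowvar=False)     # [RYY]: correlation matrix
LY = np.linalg.cholesky(RYY)           # [LY]: Cholesky factor (Matlab's chol)

# Eq (1): [Z]^T = [A][U]^T + [B] = [DY][LY][U]^T + [MY]
M = 100_000                            # number of Monte Carlo simulations
U = rng.normal(size=(M, n))            # uncorrelated standard normal vectors [U]
Z = U @ (DY @ LY).T + MY               # M x n matrix of simulated demand vectors

# the simulated correlation matrix approaches [RYY] as M grows
print(np.abs(np.corrcoef(Z, rowvar=False) - RYY).max() < 0.02)   # True
```

Exponentiating Z returns the simulated vectors to the demand space.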
1000s of simulations. Huang et al. [1] drew a similar conclusion for 3 demand parameters and 11 ground motions. Subsequent studies for building structures have shown that stable distributions of loss can be achieved with fewer than 1000 simulations.

Zareian [8] observed that the chol algorithm was unstable if the number of demand parameters is equal to or greater than the number of ground motions, namely, n ≥ m. He proposed replacing the Matlab chol subroutine with the cholcov subroutine.
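The instability is easy to reproduce: with n ≥ m the sample correlation matrix is rank deficient, so it has no Cholesky factorization, while a spectral (eigenvalue) factorization still exists. A minimal NumPy sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 5, 8                         # fewer ground motions (m) than demand parameters (n)
Y = rng.normal(size=(m, n))
R = np.corrcoef(Y, rowvar=False)    # n x n sample correlation matrix

# after mean removal the data have rank at most m - 1, so R is rank deficient
print(np.linalg.matrix_rank(R))     # 4, i.e. less than n = 8

# the smallest eigenvalues are zero to round-off, so R is not positive
# definite and a chol-style factorization is unreliable
lam, A = np.linalg.eigh(R)
print(lam.min() < 1e-10)            # True

# a spectral factorization R = [A][lam][A]^T always exists for symmetric R;
# clipping round-off negatives yields a usable factor [L] = [A][lam]^(1/2)
L = A @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))
print(np.allclose(L @ L.T, R))      # True
```

Matlab's cholcov handles the semi-definite case in a similar spirit, falling back on an eigendecomposition when the matrix is not positive definite.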
The cholcov subroutine is stable for both m > n and n ≥ m.

The procedure differs from the Yang et al. algorithm in two ways: 1) the factorization of the correlation matrix is based on SVD, which is more robust than the Cholesky decomposition, and 2) a procedure is provided to accelerate the convergence of the means, variances and correlation coefficients to the target values, to enable the use of a smaller number of simulations. (Whether the target values are accurate is another matter, which is beyond the scope of this study.) The two developments, SVD and acceleration, are independent: the use of SVD is an alternative to cholcov, and the procedure to accelerate convergence could also be used with cholcov.

The matrix [X] is transformed into the [Y] space as shown in Figure 1. It is transformed again into another parameter space, [G], such that the mean, variance and correlation matrix are preserved. This is achieved by generating the required set of independent random variables of the same distribution with zero mean and unit variance, followed by a linear transformation.
The steps in the proposed algorithm are:

1. Read [X]; select OPT = 1 for no acceleration (see above) and 2 for acceleration
2. Compute [Y] = ln [X] and the mean μY
3. Compute [G] using Eq (3)
4. Generate an M × n matrix [U] using a standard normal distribution, where M and n are the number of simulations and the number of demand parameters, respectively
5. Compute the correlation matrix and the square root of the variance matrix of the column vectors in [G] and denote them as R(G) and [σG], respectively
6. Using spectral value decomposition, R(G) = [AGcor][λGcor][AGcor]^T
7. If OPT = 1, then [G̃] = [U][λGcor]^(1/2)[AGcor]^T[σG]
   If OPT = 2, then:
   i) subtract the sample mean from each column of [U] and calculate the covariance matrix Σ(Ũ)
   ii) using spectral value decomposition, Σ(Ũ) = [AcovU][λcovU][AcovU]^T
   iii) calculate [G̃] = [U][AcovU][λcovU]^(−1/2)[λGcor]^(1/2)[AGcor]^T[σG]
8. Compute [Ỹ] by adding back the mean: Ỹi = G̃i + μYi{1}
9. Generate demand vectors by taking the exponential of [Ỹ]

where μYi is the mean of the ith column of [Y] and {1} is a vector of ones.

Mathematical Basis

A mathematical proof for the SVD algorithm is presented in what follows. The uniqueness of the solution is not guaranteed because the computation involves the generation of random numbers and, even though spectral values are unique, by definition, the associated left and right basis vectors are not. However, uniqueness is not an issue for loss computations or risk assessment.

Proof

Let [Y] = [Y1 Y2 … Yn], an m × n matrix of realizations of an n-dimensional random variable, such that

Ỹ = [Ỹ1 Ỹ2 … Ỹn]^T = [Y]^T   (2)

Subtracting the column means,

Gi = Yi − μYi{1}   (3)

where μYi is the mean of the ith column of [Y] and {1} is an m × 1 vector of ones. Matrix G = [G]^T represents the random variable Y with a zero mean. The statistical properties of G, for example the covariance matrix Σ(G) and the correlation matrix R(G), are

Σ(G) = (1/(m − 1))[G]^T[G] = (1/(m − 1)) G G^T   (4)

R(G) = [σG]^(−1) Σ(G) [σG]^(−1)   (5)

where [σG²] = [σG][σG] is a diagonal matrix of variances.
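As an aside, the nine-step listing above translates directly into code. The sketch below is an illustrative NumPy rendering (the authors work in Matlab; the function and variable names are ours). With OPT = 2, the simulated ln-demands reproduce the target means, variances and correlation matrix essentially to machine precision:

```python
import numpy as np

def simulate_demands(X, M, opt=2, rng=None):
    """Generate M demand vectors from the m x n data matrix [X] (steps 1-9)."""
    rng = np.random.default_rng() if rng is None else rng
    Y = np.log(X)                                  # step 2: [Y] = ln[X]
    mu_Y = Y.mean(axis=0)
    G = Y - mu_Y                                   # step 3: Eq (3)
    sigma_G = np.diag(G.std(axis=0, ddof=1))       # [sigma_G]
    R_G = np.corrcoef(G, rowvar=False)             # step 5: R(G)
    lam_Gcor, A_Gcor = np.linalg.eigh(R_G)         # step 6: spectral decomposition
    U = rng.normal(size=(M, Y.shape[1]))           # step 4: M x n standard normals
    if opt == 1:                                   # step 7, no acceleration
        Gt = U @ np.diag(np.sqrt(lam_Gcor)) @ A_Gcor.T @ sigma_G
    else:                                          # step 7, OPT = 2: acceleration
        Uc = U - U.mean(axis=0)                    # i) remove sample means
        lam_covU, A_covU = np.linalg.eigh(np.cov(Uc, rowvar=False))   # ii)
        Gt = (Uc @ A_covU @ np.diag(lam_covU ** -0.5)                 # iii)
                 @ np.diag(np.sqrt(lam_Gcor)) @ A_Gcor.T @ sigma_G)
    Yt = Gt + mu_Y                                 # step 8: add back the means
    return np.exp(Yt)                              # step 9: exponentiate

# demo: 11 "ground motions", 3 demand parameters, 200 simulations
rng = np.random.default_rng(3)
X = np.exp(rng.normal(size=(11, 3)) @ np.array([[1.0, 0.4, 0.2],
                                                [0.0, 1.0, 0.3],
                                                [0.0, 0.0, 1.0]]))
Xs = simulate_demands(X, M=200, opt=2, rng=np.random.default_rng(4))
print(np.allclose(np.corrcoef(np.log(Xs), rowvar=False),
                  np.corrcoef(np.log(X), rowvar=False)))    # True
```

With opt=1 the statistics are recovered only as M grows large; with opt=2 the whitening of [U] makes the recovery exact, which is the distinction between the two cases analyzed in the proof.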
Let G̃ = [G̃]^T be the simulated random variable of M realizations with a) target means, and b) a target covariance matrix (or, alternately, target variances and a target correlation matrix). These target statistics are those of G. Here [G̃] is an M × n matrix and the mean of each vector is zero. This step is performed by generating the required set of independent random variables Ũ of the same distribution with zero mean and unit variance, followed by a linear transformation of the form

G̃ = [T][J]Ũ   (6)
Here [T] and [J] are n × n matrices representing the linear transformation from the independent random variables to the correlated random variables. Using the property of linear transformation, the covariance matrix is given by

Σ(G̃) = (1/(M − 1))[G̃]^T[G̃] = [TJ] Σ(Ũ) [TJ]^T   (7)

Accordingly, the correlation matrix is

R(G̃) = ([σG]^(−1)[TJ]) Σ(Ũ) ([σG]^(−1)[TJ])^T   (8)

where [σG²] = [σG][σG] is a diagonal matrix of variances. We consider now two cases, without and with acceleration, to fully recover the underlying statistics.
13 Proof
lspectral value 19 1 We now two cases, without T
and with acceleration, to fully r
ii) using spectral value decomposition, 13 ii) using Σ U We[ consider
= A covU ][ 19
consider λ 18 1 now ][
decomposition,
A Case two
statistics.
Case
Case
] 1: cases,
1: Largenumber
1:
Largenumber Largenumber
Σ We U
without =
consider [ A andof ][
of λ
simulations,no
with
now
of simulations, two][
simulations,no A
acceleration,
cases, ] accelerationacceleration
to
without fully and recover
with the underl
acceleration
without andnowith acceleration
18 We
18 cov now
19 statistics.
U cov two
statistics.
U 18
cases, without and cov U with cov U acceleration,
cov U to fully recover the underly
19 19
18 statistics.
We consider now two cases, acceleration, to fully re
Letcases,
[Y] = [Y1 Y2and ..Ywith m x n matrix 19 statistics.
nsider now two without n], a acceleration, −1 2 19
to
19 represents
fully recover
statistics.
1statistics.
the the underlying 19 statistics. 7
( )( )
iii) calculateof⎡⎢G ⎤ = ⎡U ⎤ [ A ][14 ⎡ ⎤ T ⎡ ⎤ −1 2 12 T
⎣⎢ G⎦⎥ ][ 2 covU2 For ][ λcovFor ] this [case, Gcor ] l [ AUlis approximately ] is[isσapproximately
]
U 19 A statistics.
2
14 realizations
⎣ an⎥⎦ n ⎢⎣dimensional ⎥⎦ covU λcovUrandom ]iii) [calculate ] [ AGcor
λGcorvariable ⎣⎢ ⎦⎥ ] such
G =[σ this λcase,
Σ UΣ Gcor G approximately an identity7an identity matrix of order n . Ac
For this case, an identity 777 7 matrix of order n . Accor
U
s.
that matrix of order. Accordingly,
7 Eq (8) leads7to
7 7
15 8. Compute ⎡⎢Y ⎤⎥ by adding back 15 the 8.mean:Compute Y i = ⎡⎢⎣G Yi⎤⎥ +byμYadding
⎦ {1} . back the mean: Y i = Gi + μYi {1} .
⎣ ⎦ 7
( )( )
i
⎡ T T T⎤ T l
Yl = ⎢Yl1 Yl2 . Yln ⎥ = [Y ] (2)
T
1 lR G =][[LL]T][ L ]
T
4 Using Eq and
(4) and Eq (5),
theofthe correlation matrix of the original data set m
18 SVD A
A mathematical proof for the 18 mathematical
algorithm proof for
is presented theUsing
in4 whatSVD Eq (4)
algorithm
follows. isEqpresented
(5),
The uniqueness correlation
in matrix
thewhat follows. of the
The original
uniqueness data set may
of the
19 98 solution
Volume 45 guaranteed
is not Number 1because
March 2015
19 solution
the is not guaranteed
computation because
involves the ofTThe
lthe computation
generation Bridge
random and
involves
numbers theStructural Engineer
generation of random numbers
20 and, even though spectral values and,unique,
20 are even though
T
spectral the
by definition,
R G
values
lR = G[ K=][ K
[ K] ][ K ]
are unique,
associated left and by definition,
right basis the associated left and right basis
( )( )
5
5 ⎛ ⎛⎜1 1 −1 ⎞ −1 ⎞
⎟⎟ ]or ⎟⎟⎟issue
21 vectors is
vectors are not. However,21 uniqueness are not
not.anHowever,
issue for ][=loss
K⎜⎜] = ⎜computations
[ Kuniqueness ⎜
⎜⎝ m⎝ −m
[ is ][[σG
G ][]σ Gan
not
⎟⎠ ⎠
risk for loss computations or risk
1 −1
G
⎠
4 4 Using UsingUsingEq Eq
Eq(4) (4)
(4)and andEq
and EqEq (5),
(5), (5), thethe
the correlation
correlation correlation matrix
matrix matrix
of ofthe theof originalthe14
original 15 data dataNext
Next
possible set
setmay we
wemay when
select
select
be 1
beexpressed [[J]
1.
expressed J ] such Read
such asasthat ,
⎛
⎝()
[that] ⎜⎜[ J ] Σ U [ J ] ⎟ for
X select l
OPT=
T⎞
0⎟
⎠
becomes
becomesno acceleration an identity
an (seematriabov
rrelation 4 original
Using of theEq data
(4)
(4)setand
set Eq may (5), be the expressed
the correlation
as matrix as of of the the original data identity set
set may
matrix
be
be2 expressed
of order. as This is possible when
matrix ofmatrix Using
4 original
the Eqoriginal
data and maydata Eqbe set
(5), may
expressed be expressed
correlation as matrix original 15
data possible maywhen expressed 2. Compute as [Y ] = ln [ X ] and mean μY
RR G lG l=
(( )) TT
=[[KK]] [[KK]] −1 2
16 [ J ] = [ λcovU ] [ AcovU ] (16)
T
5 5 R
[
RG
KK]]=
[
G
=⎛⎜⎜⎜⎜
(( ))
ll = [ K ]TTT [ K ]
=⎛⎛ [ K11] [ K ]
[ [GG][][σσGG]] ⎞⎟⎟⎟⎟⎟⎟
−−11⎞ ⎞
(10)17 16 [
R G
Accordingly,
J ](( ))
l
=l [= λ
R G =cov[ LU ][ L ] covU
[ L ][−3 L1]2T 3. Compute
]
T
[
EqA (15) ]
T
is
(10)
(10)
reduced
[G ] using Eq (3) T
() to
l
R G = [ L ][ L ]
5 (10) 1 (10)
⎜⎜⎛⎝⎜⎝⎜ mm 1 Accordingly, −1 Eq 4. (15)Generate is reduced M × n matrix ⎡⎢U ⎤⎥ using a standard norma
to
( )(( ))
5 11− −11[G ][σ ]−−−111⎟⎟⎞⎠⎠ (10) (10) a 1
[[KK ]]= ⎡σ ⎤ −4
() ()()
[ L ] [ T ] ⎣−1 ⎦
= ⎜⎝ m
()
l −1 [G ][σTG ] ⎠⎟l R G = T[lL ][ L ] ⎟ l l=l
( )
T
GG
R[ RL ]G G= = =⎣⎡σ[[GLL⎦⎤][][LL[]T] ] 1 TT
[ L ] = ⎡σ ⎤ [T ]
⎜⎝R G m −1 = [ L ][ L ] R G ⎠ = [ L ][ L ] R G = [ ][ ]
L L
T
17 1 Accordingly, ⎣ ⎦ 5 Eq [ X (15) is reduced to
] (17) ⎣ G ⎦
1 TT 1. Read , select OPT= 0 for no acceleration (see of above)
demandand 1 for
G
1 1 number of simulations and number (17) parame
(17)
6 6 Since
1
Since RR G lG
(( ))
l isisisa1 a−symmetric a symmetric 1
( )( ) ) matrix and can be [[ K][Since
][LL[[]]K= ]] the
( )
(17) (17)
(( ))( ) (
Since 1symmetric
⎤ T[T ]L = [⎡Lσ] =⎤ −[1L⎡⎣σT] = matrix
matrix and −1
and
⎤ [T ] −1 cancan bebe expressed
expressed asas K K , , it
⎡σ it is is
⎡ ⎤⎤simulated − possible
−11
possible
[[TT]] 5.[Y Compute to to show
show that
that
Since [RL ]G ll ⎡⎣σ
=
is Ga⎦ symmetric [ ]], ititis⎣ismatrix [ Gand ] ⎦ to ⎡σ ⎤ [T ]
can
2 TT2
[ ] T [ K ] ,2.it
= ⎣⎣σis GG⎦⎦ 6
Compute ]random
= ln [ X ]variables the mean
and correlation have
μY the andsame
the the square correlation root ofmatrix the vaa
atrix and 6 be
can expressedexpressed as
as [ K ] [ K possible ⎦
G possible ⎣ can be expressed as [ K ] [ K ] , it is possible to
to
show⎦ be
show expressed
that that as
all 2 K Since the possible
simulated torandom show
show that 2 Since
variables have thesimulated
same correlation random variable matrix
G
as [R K ]G[ Kis
TT
can be expressed 6 Since ] , ait symmetric
is possible matrix
to show and that that
7 7 all
the allthe theeigenvalues
eigenvalues eigenvalues ofofof are
lGl are
RRnon-negative.
G (( )) arenon-negative.non-negative. By spectral ByByspectralspectral value value
value data set,the
Since
3 decomposition
decomposition
data 3.set,
andsimulated
Compute
and
7 equating[G[ L
equating [ G ] using [
] ]and
random
L] from
from
Eq
denote Eq
(3)
Eq3 (17)
(17)
variables them
data ( )
andasEq
and set, haveR (11),
Eqand
l the
G
(11),
it may
and [σ be
same
it mayG [be
equating
]L, ]respectively.
shown that
from Eq
shown that (17
non-negative.
ative. By spectral
7 By
7
all
all 2 the
spectral
the Since eigenvalues
eigenvalues
value decomposition
decomposition
the 2 simulated
value of
decomposition
of
2
R
Since
R
l
G
G (( ))
Since
l random
2 are
the
are
the
non-negative.
Since
simulated
non-negative.
simulated
variablesthe random By
have
simulated By
random
spectral
therandom
variables
spectral
variables
same value
value
3 3
2 correlation
2 variables
have
1 1. Read [ X ] , select OPT= 0 for 1no
have
Since the
Since
decomposition
correlation
the
decomposition same
thethe
have matrix same
simulated
simulated matrix
the
correlation
correlation
assame that random
random
as ofthat
correlation
matrix
matrix
the variablesvariables
oforiginal
as the matrix
that
as
original
of
that
have
have as
the
of the
the
that
the
data
original9
same
same
of
original
set
the correlation
correlation
and
original matrix a
matrix
3 data TT[ L] set, and TTEqequating [EqL] from Eq 3
3 (17)4 equating
[data
data T and] = 4. Eq
set,⎡σand
set, (11),
Generate
and⎤ [that
and [L]
A itfrom
8 equating
equating ][ 6.
may
λ a M Using
be
] [[
Eq ×
L
L
2
] ]
acceleration
nfrom
shown fromspectral
(17)
matrix that
Eq
Eq and ⎡value
U ⎤⎥Eq
(17)
(17)
(see above) and 1 for acceleration
usingand
and decomposition,
(11), Eq
Eq a standard it may
(11),
(11), ( )
itit may
may Rbe
be
normal be
l = [ A ][λ
G distribution
shown
1 2 shown that
that G
data
(( )) set, and equating from (17) and [
(11), ] it may
4 be shown ⎢ Gcor
3
RR GG =l l =[[AAGcor ][][ λ λ
3 ][data
][ AA set,
] 3
] = and
= [ [
L data
L ][equating
][ L L ] set,
] and [ L ]
equating
from Eq L(17) from and
4
EqEq [ T
(17)
](11),
= ⎣ ⎡ σ [
Eq
Git⎦⎤ may be shown that
A
Gcor (11),
][ λ
Gcor it may
]
1 2 be shown 4
⎣ that
[⎦T ] = ⎡σ ⎤ [ 9A ][ λ ]
[Y ] =
shown ⎣ that G ⎦[ X ] and mean μ ⎣ ⎦
Substituting Eq (12) into Eq (6), taking the transpose, and noting that [G^l]^T = [Ḡ] and [σ_G]^T = [σ_G], it may be shown that

[Ḡ] = [Ū][λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]   (13)

Once [Ḡ] is computed, [Ȳ] may be computed by adding back the mean:

Ȳ_i = Ḡ_i + μ_Yi{1}

where {1} is an M×1 vector of ones.
This form is identical to the cholcov algorithm if the Cholesky decomposition R_G^l = [L][L]^T is performed, where [L] in the Cholesky decomposition is equivalent to [A_Gcor][λ_Gcor]^(1/2) in the spectral value decomposition. This procedure preserves the mean, variance and correlation matrix regardless of the simulation size if M > n. This completes the simulation of the multivariate normal variable Y^l with the specified mean, variance and correlation matrix. The user can choose to accelerate the recovery of the underlying statistics.

Case 2: Small number of simulations, with acceleration
In this case Σ_U will be considerably different from the identity matrix of order n. First, we subtract the sample mean from [Ū] and compute Σ_U. Assuming the number of simulations to be greater than the number of demand parameters, that is, M > n, spectral value decomposition gives

Σ_U = [A_covU][λ_covU][A_covU]^T   (14)

Factorization of the form of Eq (14) is possible because M > n guarantees that all the eigenvalues of Σ_U are positive. Instead of performing the linear transformation of Eq (13) on the simulated standard normal variables [Ū], the transformation is performed on other simulated random variables [V̄]. Here [V̄] is linearly related to [Ū] (after subtracting the sample mean) such that V̄ = [J]Ū, where [J] is computed from the spectral value decomposition of Σ_U, giving

[Ḡ] = [Ū][A_covU][λ_covU]^(-1/2)[λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]

This procedure preserves the mean, variance and correlation matrix regardless of the simulation size.
The steps of the SVD algorithm are as follows:

1. Read [X]; select OPT = 0 for no acceleration (see above) and 1 for acceleration.
2. Compute [Y] = ln([X]) and the mean μ_Y.
3. Compute [G] by subtracting the mean μ_Y from each column of [Y].
4. Generate an M×n matrix [Ū] using a standard normal distribution, where M and n are the number of simulations and the number of demand parameters, respectively.
5. Compute the correlation matrix R_G^l of the columns of [G] and the square root of the variance matrix, [σ_G].
6. Using spectral value decomposition, compute R_G^l = [A_Gcor][λ_Gcor][A_Gcor]^T.
7. If OPT = 0, compute [Ḡ] = [Ū][λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]. If OPT = 1: i) subtract the sample mean from each column of [Ū] and calculate Σ_U; ii) calculate [A_covU] and [λ_covU] using spectral value decomposition, Σ_U = [A_covU][λ_covU][A_covU]^T; iii) compute [Ḡ] = [Ū][A_covU][λ_covU]^(-1/2)[λ_Gcor]^(1/2)[A_Gcor]^T[σ_G].
8. Compute [Ȳ] by adding back the mean: Ȳ_i = Ḡ_i + μ_Yi{1}.
9. Generate demand vectors by taking the exponential of [Ȳ].

Irrespective of the number of simulations, the SVD algorithm returns results similar to the cholcov routine if the acceleration option is not selected. For a small number of simulations, the use of the acceleration option may appear to violate some basic rules of statistics, because a unique solution is not guaranteed: the computation involves the generation of random numbers and, even though spectral values are unique by definition, the associated vectors are not. However, uniqueness is not an issue for loss assessment. The procedure differs from the Yang et al. algorithm in two ways. First, it uses spectral
value decomposition of the covariance/correlation matrix, which is more robust than the conventional Cholesky-type decomposition. Second, the procedure introduces an (optional) acceleration technique that speeds the convergence of the statistics of the simulated demand parameters to the target statistics. Without the acceleration technique, the proposed procedure is mathematically identical to the conventional cholcov algorithm, provided the Cholesky decomposition exists; the use of spectral value decomposition is therefore a viable alternative to the cholcov routine. With the acceleration technique, the target statistics are recovered by the proposed procedure regardless of the simulation size. The proposed acceleration technique can also be used with the cholcov routine. The fundamental contributions of this paper are therefore the acceleration technique and the demonstration of SVD as an alternative to the Cholesky decomposition.

5. Yang, T. Y., Moehle, J., Stojadinovic, B. and Der Kiureghian, A. (2009). "Performance evaluation of structural systems: theory and implementation." Journal of Structural Engineering, ASCE, 135(10): 1146-1154.

6. Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem, Oxford Science Publications.

7. Mathworks, Ed. (2008). MATLAB 8: The Language of Technical Computing, Natick, MA.

8. Zareian, F. (2010). "Personal communication."

Table 1: Demand matrix, X, from response-history analysis

     δ1 (%)  δ2 (%)  δ3 (%)  a1 (g)  a2 (g)  a3 (g)  a4 (g)
G1   1.26    1.45    1.71    0.54    0.87    0.88    0.65
G2   1.41    2.05    2.43    0.55    0.87    0.77    0.78
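The nine steps above (the paper's MATLAB implementation is given in Appendix B) can be collected into one routine. The NumPy sketch below is illustrative only: the function name, the sizes, and the synthetic stand-in for a Table 1-style demand matrix [X] are assumptions, not the paper's code.

```python
import numpy as np

def simulate_demands(X, Msize=100000, accelerate=True, rng=None):
    """SVD-based simulation of demand vectors (steps 1-9); illustrative sketch.

    X: m ground motions x n demand parameters, strictly positive entries.
    """
    rng = np.random.default_rng() if rng is None else rng
    Y = np.log(X)                                  # step 2: work in log space
    mu = Y.mean(axis=0)
    G = Y - mu                                     # step 3: remove the means
    n = X.shape[1]
    U = rng.standard_normal((Msize, n))            # step 4: standard normal samples
    RG = np.corrcoef(G, rowvar=False)              # step 5: correlation matrix ...
    sigma_G = np.diag(G.std(axis=0, ddof=1))       # ... and standard deviations
    lam_g, A_g = np.linalg.eigh(RG)                # step 6: spectral decomposition
    sqrt_lam_g = np.diag(np.sqrt(np.clip(lam_g, 0.0, None)))
    if accelerate:                                 # step 7, OPT = 1: whiten U first
        U = U - U.mean(axis=0)
        lam_u, A_u = np.linalg.eigh(np.cov(U, rowvar=False))
        U = U @ A_u @ np.diag(1.0 / np.sqrt(lam_u))
    G_bar = U @ sqrt_lam_g @ A_g.T @ sigma_G       # step 7: impose target statistics
    Y_bar = G_bar + mu                             # step 8: add the means back
    return np.exp(Y_bar)                           # step 9: exponentiate

# Illustrative use with a synthetic 11 x 4 demand matrix
rng = np.random.default_rng(3)
X = np.exp(0.3 * rng.standard_normal((11, 4)))
W = simulate_demands(X, Msize=5000, accelerate=True, rng=rng)
assert np.allclose(np.corrcoef(np.log(W), rowvar=False),
                   np.corrcoef(np.log(X), rowvar=False))
assert np.allclose(np.log(W).mean(axis=0), np.log(X).mean(axis=0))
```

With accelerate=True the sample statistics of the simulated log-demands match the targets exactly, mirroring the OPT=1 branch of the MATLAB listing.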
0.375   -0.353  0.136   1.000   0.839   0.731   0.881
-0.022  -0.646  -0.094  0.839   1.000   0.934   0.863
-0.193  -0.723  -0.066  0.731   0.934   1.000   0.820
0.145   -0.376  0.220   0.881   0.863   0.820   1.000

Table 5: cholcov algorithm using 11 ground motions, 100 simulations

1.0010  1.0000  1.0099  1.3168  1.3732  0.8603  1.0090
1.0183  1.0099  1.0000  1.1286  1.1479  0.8145  1.0000
1.4873  1.3168  1.1286  1.0000  1.0002  0.8754  1.1338
1.5998  1.3732  1.1479  1.0002  1.0000  0.8808  1.1539
0.8824  0.8603  0.8145  0.8754  0.8808  1.0000  0.8166
1.0170  1.0090  1.0000  1.1338  1.1539  0.8166  1.0000
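The departures from unity in tables such as Table 5 are sampling error from the finite simulation count. The NumPy sketch below (illustrative names, not the paper's code) reproduces the effect of the acceleration option: whitening the sampled standard normal matrix by its own covariance, as in steps 7.1 to 7.3 of Appendix B, recovers the target correlation matrix exactly even for a small M.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target statistics taken from a synthetic "recorded" demand set
G = rng.standard_normal((30, 4))
RG = np.corrcoef(G, rowvar=False)
lam_g, A_g = np.linalg.eigh(RG)
sqrt_lam_g = np.diag(np.sqrt(lam_g))

M = 200                                   # deliberately small simulation count
U = rng.standard_normal((M, 4))

# Without acceleration (OPT=0): sampling error survives in the result
G0 = U @ sqrt_lam_g @ A_g.T
err0 = np.max(np.abs(np.corrcoef(G0, rowvar=False) - RG))

# With acceleration (OPT=1): first whiten U by its own sample covariance
Uw = U - U.mean(axis=0)
lam_u, A_u = np.linalg.eigh(np.cov(Uw, rowvar=False))
Uw = Uw @ A_u @ np.diag(1.0 / np.sqrt(lam_u))
G1 = Uw @ sqrt_lam_g @ A_g.T
err1 = np.max(np.abs(np.corrcoef(G1, rowvar=False) - RG))

assert err1 < 1e-10 < err0    # exact recovery with acceleration, not without
```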
Ratio of correlation coefficients

1.0000  1.0013  1.0089  0.8330  0.7768  1.1076  1.0086
1.0013  1.0000  1.0030  0.8629  0.8277  1.1572  1.0028
1.0089  1.0030  1.0000  0.9152  0.8981  1.3930  1.0000
0.8330  0.8629  0.9152  1.0000  0.9996  0.9343  0.9132
0.7768  0.8277  0.8981  0.9996  1.0000  0.9512  0.8955
1.1076  1.1572  1.3930  0.9343  0.9512  1.0000  1.3752
1.0086  1.0028  1.0000  0.9132  0.8955  1.3752  1.0000

Table 11: SVD algorithm, with acceleration

Demand parameters
    δ1      δ2      δ3      a1      a2      a3      a4

Ratio of means
1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000

Ratio of correlation coefficients
1  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
2  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
3  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
4  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
5  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
6  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
7  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000

APPENDIX A: Rank Deficient Covariance Matrices

We provide the mathematical basis for the statement that if n ≥ m (n and m are the number of demand parameters and ground motions, respectively) the covariance matrix is rank deficient. We first present an identity of the form [9] (block-matrix notation, rows separated by semicolons):

[I, 0; -B, μI] [μI, A; B, μI] = [μI, A; 0, μ^2 I - BA]   (A-1)

[μI, -A; 0, I] [μI, A; B, μI] = [μ^2 I - AB, 0; B, μI]   (A-2)

in which [A] is of size n×m, [B] is of size m×n, and the identity matrices are sized accordingly. If the second matrix on the left side of Eq (A-1) [and Eq (A-2)] is denoted Q, then from Eq (A-1):

μ^m det(Q) = μ^n det(μ^2 I - BA)   (A-3)

and similarly from Eq (A-2):

μ^n det(Q) = μ^m det(μ^2 I - AB)   (A-4)

The matrix products [A][B] and [B][A] will therefore have the same eigenvalues. Moreover, if [A] is of size n×m and [B] is of size m×n, then [A][B] and [B][A] will have the same eigenvalues except that the product of higher order will have an additional |m-n| zero eigenvalues.

Consider a specific case, namely [A] = [B]^T. The matrix product [B]^T[B] can be considered as the covariance matrix to within a scale factor and will have (n-m) zero eigenvalues if n > m. Moreover, when n = m the determinant of the covariance matrix will be zero and one of the eigenvalues will be zero, because the rows of the mean-subtracted demand matrix are linearly dependent. Accordingly, the covariance matrix is also rank deficient if n = m. Combining this property with the mathematical proof presented above, the covariance matrix will be rank deficient when n ≥ m.

APPENDIX B: MATLAB Coding of the SVD Algorithm

clear all
close all
%OPT=0;% no acceleration
OPT=1;% with acceleration
%Develop underlying statistics of the response history analysis
%step-1
X=load('DP.txt');
[mrow,ncol]=size(X);
Msize=1000000;% simulation size
tic;
%step-2
Y=log(X);
mu=(mean(Y));
RY=corrcoef(Y);
%step-3
G=Y-ones(mrow,ncol)*diag(mu);
%step-4
colU=ncol;
U_star=randn(Msize,colU);
%step-5
RG=corrcoef(G);
Var_G=var(G);
Sigma_G=diag(sqrt(Var_G));
%step-6
[A_Gcor,LAM_Gcor,B_Gcor]=svd(RG);
sqrt_LAM_Gcor=zeros(ncol,ncol);
for j=1:ncol
    sqrt_LAM_Gcor(j,j)=sqrt(LAM_Gcor(j,j));
end
%step-7
if OPT==0
    G_BAR=U_star*sqrt_LAM_Gcor*A_Gcor'*Sigma_G;
else
    if OPT==1
        %step-7.1
        mu_U_star=mean(U_star);
        U_star=U_star-ones(Msize,colU)*diag(mu_U_star);
        %step-7.2
        covU=cov(U_star);
        [A_covU,LAM_covU,B_covU]=svd(covU);
        inv_sqrt_LAM_covU=zeros(ncol,ncol);
        for j=1:ncol
            inv_sqrt_LAM_covU(j,j)=1/sqrt(LAM_covU(j,j));
        end
        %step-7.3
        G_BAR=U_star*A_covU*inv_sqrt_LAM_covU*sqrt_LAM_Gcor*A_Gcor'*Sigma_G;
    end
end
%step-8
Y_BAR=G_BAR+ones(Msize,ncol)*diag(mu);
%step-9
W=exp(Y_BAR);
Time_Elapsed=toc;
save correlated_DP_svd_eff.txt W -ascii -double -tabs;
%Check results
mu_Y_BAR=mean(Y_BAR);
RY_BAR=corrcoef(Y_BAR);
G_8=mu_Y_BAR./mu % for Table G-8
G_9=RY_BAR./RY %for Table G-9
G_10=var(Y_BAR)./var(Y)
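The conclusions of Appendix A can be confirmed numerically. The sketch below (NumPy, illustrative; not part of the paper) checks that [A][B] and [B][A] share their nonzero eigenvalues, and that a sample covariance matrix built from m ground motions and n ≥ m demand parameters is rank deficient.

```python
import numpy as np

rng = np.random.default_rng(4)

# [A][B] (5x5) and [B][A] (3x3) share their nonzero eigenvalues; the
# higher-order product carries |m - n| = 2 additional zero eigenvalues.
A = rng.standard_normal((5, 3))   # n x m with n = 5, m = 3
B = rng.standard_normal((3, 5))
for k in (1, 2, 3):               # equal traces of powers => same nonzero spectrum
    assert np.isclose(np.trace(np.linalg.matrix_power(A @ B, k)),
                      np.trace(np.linalg.matrix_power(B @ A, k)))
assert np.linalg.matrix_rank(A @ B) == 3   # two of the five eigenvalues are zero

# A sample covariance matrix of n demand parameters estimated from m ground
# motions is rank deficient whenever n >= m (its rank is at most m - 1).
for m, n in [(3, 5), (4, 4)]:
    S = np.cov(rng.standard_normal((m, n)), rowvar=False)
    assert np.linalg.matrix_rank(S) == m - 1 < n
```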