Efficient Generation of Statistically Consistent Demand Vectors for Seismic Performance Assessment

Dhiman BASU
Assistant Professor
Department of Civil Engineering, IIT Gandhinagar, Ahmedabad
dbasu@iitgn.ac.in

Andrew WHITTAKER
Professor and Chair
Department of Civil, Structural and Environmental Engineering, State University of New York at Buffalo

Dr. Dhiman Basu is a faculty member in the discipline of civil engineering at the Indian Institute of Technology Gandhinagar. His research interests include rotational seismology, ground motion characterization, supplemental damping, irregular buildings and confined masonry systems. Dr. Basu received a BE (Civil Engineering) from Jadavpur University (1995), an MTech (Structural Engineering) from IIT Kanpur (1998) and a PhD (Structural Engineering) from SUNY Buffalo (2012). He also held a faculty position at G.B. Pant University (1999-2002) and a research scientist position at the Structural Engineering Research Centre, Chennai (2002-2008).

Andrew Whittaker is Professor and Chair in the Department of Civil, Structural and Environmental Engineering at the University at Buffalo and serves as the Director of MCEER. He is a registered civil and structural engineer in the State of California. Whittaker served as the Vice-President and President of the Consortium of Universities for Research in Earthquake Engineering (www.curee.org) from 2003 to 2011 and on the Board of Directors of the Earthquake Engineering Research Institute (www.eeri.org) and the World Seismic Safety Initiative from 2008 to 2010. Currently, he is a member of the Advisory Board for the Southern California Earthquake Center. Whittaker made significant contributions to the first generation of tools for performance-based earthquake engineering (FEMA 273/274, 1992-1997) and led the structural engineering team that developed the second generation of these tools (FEMA P58, 2000-2013). Whittaker serves on a number of national committees including ASCE 4, ASCE 7, ASCE 43 and ACI 349. His research interests are broad and include earthquake and blast engineering of buildings, long-span bridges and nuclear structures. The US National Science Foundation, US Department of Energy, US Nuclear Regulatory Commission, US Federal Highway Administration and Canadian Nuclear Safety Commission fund his research. He consults to federal agencies, regulators, consultancies, contractors and utilities in the United States, Canada, the United Kingdom, Europe and Asia.

Abstract

Monte Carlo analysis underpins some modern seismic performance and risk assessment methodologies. Such analysis requires thousands to millions of simulations to generate stable distributions of loss. The generation of large numbers of simulations by nonlinear response-history analysis is impractical because of a lack of appropriate ground motion recordings and computational expense. An algorithm based on spectral value decomposition is proposed to transform the results of a small number of response-history analyses to a much larger space to enable Monte Carlo simulation. The algorithm includes an option to accelerate the recovery of the statistics underlying the results of the response-history analysis.

The Bridge and Structural Engineer, Volume 45, Number 1, March 2015

1. Seismic Performance and Risk Assessment

Tools for seismic performance assessment of buildings and mission-critical infrastructure are used to predict distributions of loss and mean annual frequencies of specific losses and unacceptable performance. Loss for buildings is measured in many different ways, including repair cost (a direct loss), business interruption (an indirect loss), casualties and deaths. Unacceptable performance for a safety-related nuclear structure is generally measured by core melt or a large dose of radiation release.

Seismic performance (risk) assessment tools have been available in the nuclear industry for more than 30 years. Traditional risk assessment for nuclear structures has involved integrating a plant-level seismic fragility curve over a seismic hazard curve to compute the mean annual frequency of unacceptable performance. Huang et al. [1, 2] summarize these procedures.

Seismic performance assessment of buildings is in its infancy. The first generation of tools for seismic performance assessment of buildings focused on engineering parameters and did not seek to express performance in terms of loss. These tools were developed in the mid 1990s and have not substantially changed since. The latest versions of these tools are presented in ASCE 41 [3]. Second generation tools for performance-based earthquake engineering of buildings have been under development since the late 1990s, with work initially spearheaded by the Pacific Earthquake Engineering Research Center (PEER). A companion effort, funded by the Federal Emergency Management Agency (FEMA) and managed by the Applied Technology Council (ATC), which sought to operationalize the research products from PEER and other researchers in the field, has developed Guidelines for the seismic performance assessment of buildings [4]. The Guidelines provide a framework for seismic performance assessment that involves the following five steps: 1) define a model of the building, including all components that can be damaged, 2) develop fragility curves for all components that can be damaged and for which there are consequences that will contribute to loss, 3) generate ground motion records for each intensity of earthquake shaking considered, 4) perform response-history analysis of the model to estimate distributions of earthquake demand on each component and 5) perform Monte Carlo analysis to transform earthquake demand to loss through the use of fragility and consequence functions.

A key to this framework is the development of statistically consistent demand vectors for the Monte Carlo simulations because response-history analysis is performed for best-estimate models using a limited number of sets of ground motion records: 7 to 11 as noted in the Guidelines. The process used to generate statistically consistent vectors is described below. It is needed because hundreds to thousands of simulations are required to generate stable loss curves. The direct calculation of thousands to millions of demand vectors by nonlinear response-history analysis is not practical because of a lack of suitable ground motion records for such analysis and the computational expense of running a very large number of nonlinear analyses. Yang et al. [5] are credited with the development of the process included in the Guidelines.

1.1 Procedures to generate demand vectors for Monte Carlo Analysis

A limited number, m, of response-history analyses are performed for ground motions scaled to a specified intensity. For each analysis, the peak absolute value of each demand parameter (e.g., third story drift, fourth floor acceleration) is assembled into a row vector with n entries, where n is the number of demand parameters. The m row vectors are catenated to form an m x n matrix, [X], where each column presents m values of one demand parameter. The entries in [X] are assumed to be jointly lognormal, and so the natural logs of the entries, matrix [Y], are jointly normal. The entries in [Y] can be characterized by a vector of means, {mY}, an n x n correlation coefficient matrix, [RYY], and a diagonal matrix of standard deviations, [DY]. This diagonal matrix of standard deviations does not include variability due to modeling uncertainty and ground motion uncertainty; both were added to the diagonal entries of [DY] in the Guidelines using the square-root-sum-of-the-squares. The Guidelines have assumed the means and correlation coefficients are
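To make the assembly of these statistics concrete, the sketch below computes {mY}, [DY] and [RYY] from a small matrix [X] of peak demands. It is a minimal pure-Python illustration with hypothetical numbers (m = 4 analyses, n = 2 demand parameters), not data from the Guidelines.

```python
import math

# Hypothetical peak demands from m = 4 response-history analyses for
# n = 2 demand parameters (e.g., a story drift ratio and a floor
# acceleration in g); the numbers are illustrative only.
X = [[0.010, 0.35],
     [0.014, 0.52],
     [0.008, 0.30],
     [0.012, 0.41]]
m, n = len(X), len(X[0])

# Entries of [X] are assumed jointly lognormal, so [Y] = ln[X] is
# jointly normal.
Y = [[math.log(x) for x in row] for row in X]

# Vector of means {mY} and diagonal entries of [DY] (sample standard
# deviations of the columns of [Y], with m - 1 in the denominator).
mY = [sum(Y[i][j] for i in range(m)) / m for j in range(n)]
DY = [math.sqrt(sum((Y[i][j] - mY[j]) ** 2 for i in range(m)) / (m - 1))
      for j in range(n)]

# Correlation coefficient matrix [RYY].
RYY = [[sum((Y[i][j] - mY[j]) * (Y[i][k] - mY[k]) for i in range(m))
        / ((m - 1) * DY[j] * DY[k])
        for k in range(n)] for j in range(n)]
```

For demand parameters that rise and fall together across ground motions, as in the data above, the off-diagonal entry of [RYY] is close to one, which is why the correlation structure must be carried into the Monte Carlo simulations rather than sampling each parameter independently.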
unchanged by these other sources of uncertainty.

Figure 1 illustrates the Yang et al. process for generating statistically consistent demand vectors. The objective is to generate a matrix of demand vectors [Z] with the same statistical characteristics as [Y] but which is much larger to enable Monte Carlo simulation. This process uses a set of vectors, one for each demand parameter, of uncorrelated standard normal random variables (zero mean and unit standard deviation), [U], and a linear transformation and a linear translation:

[Z]^T = [A][U]^T + [B] = [DY][LY][U]^T + [MY]   (1)

[Fig. 1: Generation of vectors of correlated demand parameters [5]]

Matrices [A] and [B] are derived in Yang et al. [5] and are not repeated here. The matrix [LY] is the Cholesky decomposition [6] of [RYY] and all other terms are defined above.

The original implementation of this algorithm used the chol subroutine in Matlab [7] for performing the decomposition. Early studies using the algorithm were reported in the 50% draft Guidelines using an example that involved 7 demand parameters and 11 ground motions, namely, n = 7 and m = 11. Results reported in Appendix G of that document showed the algorithm yielded robust results if the simulation space was large, with best results, as measured by accurate recovery of statistical properties, being achieved for 1000s of simulations. Huang et al. [1] drew a similar conclusion for 3 demand parameters and 11 ground motions. Subsequent studies for building structures have shown that stable distributions of loss can be achieved with fewer than 1000 simulations.

Zareian [8] observed that the chol algorithm was unstable if the number of demand parameters is equal to or greater than the number of ground motions, namely, n >= m. He proposed replacing the Matlab chol subroutine with the cholcov subroutine, which is stable for both m > n and n >= m. The technical bases for the chol and cholcov routines are presented in the Matlab documentation [7] and are not repeated here. The chol algorithm is stable only if the associated matrix (correlation matrix [RYY] or the related covariance matrix [Σ] = [DY][RYY][DY]) is not rank deficient, whereas cholcov is stable for rank deficient matrices. Appendix A provides a proof that the covariance matrix will be rank deficient when n >= m.

The goal of this paper is to present an alternate algorithm to the Yang et al. process, which can better recover the underlying statistics of [Z] with fewer simulations. The alternative algorithm is developed based on Spectral Value Decomposition (SVD) and can also be used with cholcov.

2. SVD Algorithm and its Mathematical Basis

Algorithm

The following nine steps describe the SVD algorithm for possible coding by others. A Matlab code is provided in Appendix B.

1. Read [X]; select OPT = 0 for no acceleration (see above) and 1 for acceleration.
2. Compute [Y] = ln[X] and the mean μY.
3. Compute [G] using Eq (3).
4. Generate an M x n matrix [Û] using a standard normal distribution, where M and n are the number of simulations and the number of demand parameters, respectively.
5. Compute the correlation matrix and the square root of the variance matrix of the column vectors of [G] and denote them as R(Ĝ) and [σG], respectively.
6. Using spectral value decomposition, R(Ĝ) = [AGcor][λGcor][AGcor]^T.
7. If OPT = 0, then [Ğ] = [Û][λGcor]^(1/2)[AGcor]^T[σG].
   If OPT = 1, then:
   i) subtract the sample mean from each column of [Û] and calculate the covariance matrix Σ(Û),
   ii) using spectral value decomposition, Σ(Û) = [AcovU][λcovU][AcovU]^T,
   iii) calculate [Ğ] = [Û][AcovU][λcovU]^(-1/2)[λGcor]^(1/2)[AGcor]^T[σG].
8. Compute [Ŷ] by adding back the mean: Ŷi = Ği + μYi{1}.
9. Generate demand vectors by taking the exponential of [Ŷ].
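As a sketch of the steps above, the example below runs the unaccelerated branch (OPT = 0) in pure Python for n = 2 demand parameters, where the spectral decomposition of the correlation matrix [[1, r], [r, 1]] has a closed form: eigenvalues 1 + r and 1 - r with orthonormal eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2). The target statistics (r, [σG], μY) are hypothetical stand-ins for the quantities computed in steps 2 to 5, not values from the paper.

```python
import math
import random

random.seed(20150301)

# Hypothetical target statistics, standing in for steps 2-5:
r = 0.6                   # off-diagonal term of the correlation matrix R(G)
sigma_G = [0.30, 0.25]    # [sigma_G]: square roots of the column variances
mu_Y = [-4.5, -1.0]       # column means of [Y]

# Step 6 in closed form for n = 2: R(G) = [[1, r], [r, 1]] has eigenvalues
# 1 + r and 1 - r with eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
lam = [1.0 + r, 1.0 - r]
s = 1.0 / math.sqrt(2.0)
A = [[s, s], [s, -s]]     # columns of A are the eigenvectors [A_Gcor]

M = 20000                 # number of simulations
demands = []
for _ in range(M):
    # Step 4: one row of [U], uncorrelated standard normal variables.
    U = [random.gauss(0.0, 1.0) for _ in range(2)]
    # Step 7 (no acceleration): G_row = U * lam^(1/2) * A^T * sigma_G.
    W = [U[j] * math.sqrt(lam[j]) for j in range(2)]
    G_row = [sum(W[j] * A[k][j] for j in range(2)) * sigma_G[k]
             for k in range(2)]
    # Steps 8-9: add back the mean, then exponentiate to recover demands.
    demands.append([math.exp(G_row[k] + mu_Y[k]) for k in range(2)])
```

With the means, standard deviations and correlation coefficient of the log demands recovered to within sampling error, the exponentiated vectors in `demands` are statistically consistent with the assumed targets; no Cholesky factorization is required, so the construction does not fail when the target matrix is rank deficient.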

  16  9.9. Generate


16  Generatedemand
demandvectors
vectors by
bytaking
takingthe exponentialofof⎢⎣Y⎡⎢Y⎥⎦ ⎤⎥. .
theexponential ⎡ ⎤
⎡ ⎤ ⎣ ⎦
16  16  9. demand
9. Generate Generate demand
vectors by vectors by exponential
taking the of ⎡⎢Y ⎤⎥ . of ⎢⎣Y ⎥⎦ .
taking the exponential
⎣ ⎦
17  Mathematical
17  MathematicalBasis
Basis
17  17  Mathematical
Mathematical Basis Basis
⎡U ⎤ and calculate the covariance
() () [G[G ]G= [.G.1GG 2] .such ] such that n
1 2
mean from each column l lofDefine 17  matrix Define . G nthat
12  Σ 10 U 17  Σ U If ⎣⎢ ⎦⎥ [Gthen
OPT=2, ] = [G1 G 2 . . G17 n ] 17  suchDefine Define
that [ G] = [G[G ] =1 G 21 . . 2G n ] such that
17  Define [G ] = [G1 G 2 . . G n ] such that
12  n

18 ⎡ ⎤ Gi = Yi − μYi {1}


13  ii)13 using
18 
11  i) subtract the sample mean from
ii) spectral
18 
using
Gi = Yi − μYi {1}
G i = Yi −value
value
spectral μYi {1}decomposition,
decomposition,
l
Σ18 U18 =
each
GΣi[ =AG U
18  column
lYi Ui=
cov −
= () ()
][Yλμ[icovA
G
− iμ
{
U
=AYof
1 ][}
Yi covU i][ cov Y { i1−
λ
cov
UT and calculate the covariance matrix
}U ⎣⎢Uμ] Y][i⎦⎥ A{1cov} U ]T
(3)
(3) (3)

ecomposition, Σ12 
14  iii)14 calculate
l
U = [⎡Acov
iii) calculate

()
ΣU U
G ⎤⎥ =
l
][⎡Uλ⎡cov
⎢ G
⎤[ A
⎥ ⎤= ()
U ][ AcovU ]
A
T

⎡U][⎤λ[ cov ]
−1 2
][ λ [
12
λGcor]−]1 2 [[λA thGcor19 ]]
T
12
[[σAG where ] ]
T
[
1  19 Giwhere
σ μ ] is
=where
the
Yi − μYμi Y{is
mean i
1}isthe
of
themean
th the i column of [Y ] l
mean
th
ofof the i th column columnofof[Y[Y]
and {G1vector
}]T 1isvector
] and
m×1of
and{1}{1}
vector
is m×1 vector o
ofMatrix
ones. l =Matrix
lG=T[G
19 ⎣ ⎦ where ⎣ ⎣⎢ ⎦ ⎦⎥μYi ⎣⎢ is ⎦⎥ thecovmean
cov U U
U of
cov U the
19  19  i
where
Gcor where
column μ Gcorμ
of
is [
the Y is ] the
Gand
mean Y i mean {
is of1 } m the of
is x m× the
i1 th
vector1
column i vector column of of of [
ones.Y of
ones. ] [ Y
and
l = [G ] l ]Matrix
Matrix {
and 1 } { 1
is G} m× is
= [ m×
1 represents
of ones. ones.
the
Matrix G G[ ]
isTthe mean of the i th column of [Y ] andYi {1}i is m×1 vector of ones.
T
1 2 μYi
Y
19  where Matrix G
covU ][15 
−1 2
λcovU ] 8. [8. Compute
13 ] [ A
λGcor
15 Compute 8. 20 Compute
⎤] by
⎡Yusing
ii)Gcor [σ Gadding
⎢⎣ ⎥⎦ represents ⎡]Y ⎤ bythe
spectral
⎢⎣ ⎥⎦ variable
by value
back
adding thedecomposition,
adding
random backmean:
l with
Y20 
theback
variable
20 
i =lGithe
mean:
represents
20 
Y represents
with
l
Σ U mean:
Y+i μ=Yai G {zero
the
=
1}+
represents [.AμcovU{1][}λthe
i randomthe ()
20 
Yi random
mean.
random
.variable
represents
covUrandom
Thevariable
][ AcovUlvariable
statistical Y with
T the random variable Y with a zero mean. The statistical prope
]variable
Yl properties
with
la,zero
Yla zero withaalzero
with
mean. ofmean. GThe
zero
, forThe mean. mean.
example,
statistical
TheThe
statistical statisticalstatistical
properties
properties
properties
of G lof, for of G
l , example,
G for
l , fo
exam
20  represents the random . Y a zero mean. The statistical properties
21  the covariance properties
( )
ofof
l matrix
G for
for ( ) example,
example, l
Σ G and the correlation the covariance ( )
matrix
matrix R G are l
( )( ) ( )
{1} .⎡G ⎤ =by⎡U l are
ng back 16  the9.mean: Generate
14  Y i 21 
=iii)
16  9. Generate demand
21  9. the covariance
Gicalculate
demand
Generate matrix
+the μYcovariance
i vectors

demand ⎣⎢ Σ⎦⎥ vectors


⎤[ A Σ
taking
matrix
l ⎣⎢ and
G ⎦⎥ by
vectors
cov ][
the
( )
21 
thetaking
U λ
G cov and
21 
U
−1 2 21  1 2the
lexponential
]
the
correlation
[ λ
the
the
covariance
the exponential
by taking
of( )
]
covariance
correlation
matrix⎣⎢ R⎦⎥the
Gcor
⎡YA⎤covariance
[ .
matrix
Gcor
G
]
T
lof are [ ⎡
( ) ( ( )(() ))
σ
matrix
matrix
⎢⎣Y ⎥⎦ .
G⎤ Σ ] Glmatrix
RΣ l
and
G Gl are
and
ΣtheG
and
the the
correlation
and
correlation correlation the correlation
matrix matrix
matrix R l
G
matrix
R G
arel are areR G
1  Σ lG l = 1 [G ]T [G ] = 1 GG l lT
11 1 1 l l T TT
tors by taking the15 exponential exponential
8. Compute of of
⎡ Y ⎤ ⎡. ⎤
⎣⎢ ⎦⎥ ⎣⎢Y ⎦⎥ .by adding back the mean: Y i = Gi 1  + μ Σ1 G
1 1 1  Yi ΣT{ΣΣ}G
(((( )))) lG l
.lG= = 111 [[G m − 1
[[[G G ]
T T
]T]T]T[G
T [G [[[G G ] ]=]== 111 GG
]
m − 1 lGG llll
lGG T
(4)
( ) == G ==1mm GG T
17  Mathematical Basis Basis 1 Σ l m 1 m − − 1 1 ] ] − − 1 1 l l 1 l lT
(( ))
17  Mathematical G = G G GG
1 1  ΣΣ G l l
G= =
m
1
− 1 

1
[
T

(( ))
Σ ( )
[GG]] [[GG]l]== 1 m1−GG
G =
⎡Y ⎤ l. m −1 −1 l m −−11 m
m 1 m

− −
1
1l l
1
GG l
Σ1 GT =
[
l
G
Tl
] [ G ] =
m m
m − −
1[− 1 1
G1] l[G
T

GG l ]
T =
6  m −1
GG 6 

((((( )))))
1 T Mathematical
16  1 9.l lGenerate T Basisdemand vectors by taking the exponential m −12  of R
6 ⎢ ⎥G = [m σ − ] 1 Σ G [ σ ]m − 1
6
18 [G ] A[G ] = A mathematical
= GG proof for the SVD algorithm is presented  in what follows.    
⎣ l⎦ The (4) uniqueness 6  
m −1
mathematical
18  m −1   proof for the SVD algorithm   6 is presented 2 2 in what
2 1 RRRG (((( ))))
R G lG lGlfollows.
== [σσ[σ[σG ]G−]−]−1The
[
G−1−1 l

]
1Σ ΣG
1ΣΣ G lG l[σ[of
luniqueness [ [ σ σ G ]G−−
G−

]
the
]
1−1 
]−1 11 of the (5)
(( )()( )) ( (() ))
A mathematical proof for the SVD   algorithm is 2  l = = G l σ
1 [σrandom G] Σ [[σσ G ]−]−11numbers
(( )) lR [G lG
=
 
( )
G G
r the 19  solution isisnot
19  17 solution guaranteed
is not guaranteedbecause the The
because computation
the computationR l = [involves
involves
lG σ the−] 2  − 1 generation
Σ l G
the σgeneration −of
]
− 1 G
R ofG numbers
random
=
G
Σ Gl [σ ]−1
The uniqueness of[ the GG] 2  [G ] ⎡ [σ2 ⎤ ] Σ G
SVD algorithm presented
Mathematical in what Basis follows. 2 
uniqueness
2  R G of = the σ Σ G σ
l GG= 2  − 1 l
presented in what follows. R G = [σ ][σ [σ]Gis ] a diagonalG matrix of variances.
G

=d[σbecause ]
−1
20  Σ Gl 20 
and, [σ ( )
even
]
−1
and, though
even spectral
though values
spectral are
values unique, are by
unique, definition,by definition, the 3 associated where
the ⎡(5)
associated ⎢⎣σleft ⎤ ⎥⎦ andleft right G basis
2G
[σ[σ[σG ][G][σ][σσG ]G]]isisisright
and basis matrix
G
G the solution
computation
G is involves
not guaranteed the generation because of random
the computation numbers 3 3  where
3 3  where where⎢⎣⎡σ
where σ⎡⎣⎢⎡σσ
⎡σ 2 =
2
⎣⎢ GG2G2⎥⎦⎤⎦⎥G⎤⎦⎥=
⎤=
⎦⎥= [ σ[σGG][][σσGG] ]isis
a adiagonal
aaadiagonal diagonal
diagonal matrixof
matrix
matrix ofofofvariances.
variances.
variances.
21  vectors are
18 vectorsA not.
mathematical However, uniqueness
proof for the SVD is not
algorithm an issue
is ⎡ presented
⎤ for 3  where
loss where
in what ⎢

computations ⎢ follows. ⎤
⎥ = The ⎡ or 2 ⎤risk
uniquenessis diagonal
diagonal of the matrix
matrix ofvariances.
of variances.
variances.
al values are unique, 21  involves bythe aregeneration
definition, not. However,
of randomleft
the associated uniqueness
numbers
and
3  right is
whereand,notbasis ⎡σ σ ⎤ = [σ ][σσGG]] is ⎡a3 diagonal
an
even 2
issue for is loss
⎣ a ⎦
diagonal computations
wherematrix matrix
⎢⎣σ ]G ⎥⎦is=of of or
[σdiagonal risk
G ][σ G ] is a diagonal matrix of variances.
variances.
⎣⎢ ⎢⎣ GG⎦⎥ ⎥⎦=3 [σ GG][where
3  where 2 G G G
variances.
l ⎢σ G2⎡ ⎤⎥ = T
[ σ ][ σ a matrix of variances.
4  twoLet Gtwo ⎣=1)⎢ways: G⎦ T⎤⎥ T be G the Gsimulated random variable of M realizations with a)
σ G2 ⎤⎥⎦ =
ever, [σ G ][assessment.The
22  σ22 
uniqueness though solution
spectral
19 assessment.The
G ] is aisdiagonal
not procedure
anmatrix is
issue not
values ofdiffers
procedureguaranteed
are
variances.
for from
loss unique,
differs thefrom
because Yang
computations bythe the etYang
al.
definition, oralgorithm
computation etrisk al.the algorithm in
involves
4 4  Let Let Let
ways:
in
G
lthe
l lG l= generation
=
⎡G⎣⎡ ⎤ the
G T⎦T⎤be be
factorization
be the1) the
thethe
of random
simulated factorization
simulated simulated
numbers
random
random randomvariable
variableof variable
ofM Mrealizations of
realizations with
witha) a)ta
4  4  Let Let G G l = = ⎡

⎣ G ⎡ G ⎢
⎣ ⎥
⎦⎤ ⎤ ⎥
⎦T
be be thethe simulated
simulated random
random variable
variable ofof MM realizations
realizations with
with a)a) ta
23  of23 associated
the correlation
20 of and, lefteven andmatrix right
though basis
is based
spectral vectors on
values are
SVD, are not. which However,
unique, l isby more
definition,
T
4  robust
Let the
G = than ⎢
⎣ ⎢⎡
associated
⎣ G ⎥
⎦ ⎥
⎦⎤ the
be the
l Cholesky
left simulated and
T right random basis variable of M realizations with a)
e differs from the Yangthe et al. correlation
algorithm matrix in two ways: is based 1) the on Let SVD, lG = which⎡G⎤ ⎤ 5  T
beisthe more
targetcovariance
realizations robust
4 ⎡⎣⎢ ⎤⎦⎥Trandom than
with matrix a)the
⎡G ⎤target Cholesky
(or, alternately,
of means targetvariances
and b)with avariable
target andofmeans,
ameans,
targetcorr
4 4  factorization
Let G = ⎡⎢G ⎢ ⎥ be the simulated
l
simulated Letrandom G = variable
variable
⎢ ⎥ be the
of M realizations
M realizations
simulated random with a) target
a) target M and
realizati
and
uniqueness vectors is not an issue for loss computations ⎣ ⎣ or ⎦⎥ ⎦5 4  targetcovariance
Let
5 5  targetcovariance
Gfor
targetcovariance = ⎢Gloss ⎥ be computations matrix
the matrix ⎣ (or,
simulated ⎦ (or,alternately, random
alternately, targetvariances
variable of M realizations
targetvariances andandaaaatargetcorre
with a) t
targetcorr
on21 SVD, whichare is not. However, uniqueness is not 5 an issue
targetcovariance
and b) a⎣ ⎦ matrix
covariance matrix (or,(or, (or, l
alternately, or risk
alternately, target targetvariances
variances andand targetcorre
T
=rix⎡⎢Gis ⎤ bebased
the simulated random variable more of M robust realizations than thewithCholesky a) target means, targetcovariance matrix
matrix (or, alternately,
alternately, ⎡G ⎤targetvariances
targetvariances and
and a targetcorre
targetcorr
⎥ 5 
6  targetstatistics are those of G . Here is a M × n matrix and the mean
⎣ ⎦ risk assessment. The procedure differs5 5 5from thethe
targetcovariance
  targetcovariance Yang6 6 algorithm matrix (or,
matrix targetin5 correlation
(or, alternately, alternately,
targetcovariance targetvariances
l l. matrix
targetvariances ⎣⎢⎡ target
⎡ (or, ⎤and ⎦⎥⎤isalternately,
and a targetcorrelation
targetcorrelation targetvariances matrix). and To
22  assessment.The procedure differs from Yang
5  et al. 5 

targetstatistics
atargetcovariance
targetstatistics
targetstatistics
targetstatistics two ways: are
are
are are those
matrix those
those
those 1)matrix). theof(or,
ofof
G
offactorization
GlG . .
Here Here⎣⎢⎡G
.These
lGalternately,
Here
Here G⎡ G⎢ G


⎦ ⎤ ⎥isisisaaaaaMMM M×
statistics
targetvariances × × ×n nmatrix
n n matrix
matrix
matrix are
andand and
and
and a matrix).
thethemean
the
the mean
targetcorre
mean
mean
Th
o
et al. 6  l ⎢ ⎢⎡ ⎣ ⎥ ⎥⎤ ⎦
ovariance matrix     algorithm
(or, alternately, in two ways: 1)
targetvariances andthe factorization
a targetcorrelation
targetstatistics ofmatrix). 6 
arethose
targetstatistics
those These ofof lGl ..targetstatistics
Here are
Here than
those
⎡G ⎡G⎤ ⎤ is aare of G
matrix
M .
× Here
n and
matrix
⎣ G ⎦
⎣ ⎢⎣ ⎦ ⎥⎦ lthe is
and
a M
mean
the
× n
⎡mean
of matrix
⎤ iseach of
and
each
the
vector
mean
is
23  of the correlation matrix is based6  targetstatistics
6  on SVD, which are those
Thisis morestep of 6  G is .
robust Here
performed is
by thea M
generating × those
Cholesky n matrix ofthe G and .
requiredHere
the G
mean
set of a
of M
each
independent× n matrix
vector is
rando anzzo

the correlation matrix is based on SVD, which is7 6 7  This
7  targetstatistics
step isis performed are those ⎣⎢ ⎣⎢ ⎦⎥ by ⎦⎥ of G l . Here ⎡G ⎤ is a M ×⎣⎢ n matrix
generating the⎢⎣ the ⎥⎦required ⎦⎥
setsetofofindependent and the mean random
l ⎡ ⎤
atistics are those of G . Here ⎢G ⎥ is a M × n matrix and the mean of each7 vector vector
ThisThisThis
is step zero. step is
stepisisperformed zero. performed
performedby This step
by
bygenerating is
generating
generatingthe performed therequired by
required
requiredset generating independent
setofofindependent
independentrandom rando
rando
more robust than ⎣ ⎦ the Cholesky decomposition and 2) 7  7  This
of the step is performed bywith generating the and required set of independent l l a li⎡
rando
7 7  This This step step isis performed 8  the
performed bysame
required generating distribution
This set step the of is independent
required
performed zero setmean of random
independent
byindependent
generating unit variancevariables
therandom required followedset of by
variables U=
indep
U =
a alinl⎢⎣U
by 7 
generating the required set of random variables
5   8 7 8  of This of the the stepsame same is distribution
performed
distribution bywithwithgeneratingzerozero mean
mean the and required
and unit
unit variance
set
variance of followed
independent
followed byby
rando
ep is performed by generatinga procedure is provided to accelerate
the required set of independent random variables the convergence 8  8  9  U

of l
of the
the=the
of
the ⎡Usame
form same
same T
⎤ of the distribution
distribution
distribution same distribution with
with
with zero mean
zero
zero mean
mean with and and
andzero
unit
unit variance
variance
unit mean variance and followed
followed
followed by alin byby aa li
li
  of the same distribution ⎢⎣ no ⎥⎦ 8 acceleration
with zero mean and unit variance followed by a linear transformatio
1  of1.theRead means, [ X ] , variances
select OPT=and 0 forcorrelation 1. 8 
Read

no acceleration [
ofX ]
the
coefficients
, selectsame
(see above) OPT=9 
distribution the 0 theforform form with zero of the mean same (see and distribution
unit
above) varianceand with
1 for zero
followed mean
acceleration by a andlinearunit variance
transformatio foll
1  9  and
9  1 for acceleration
9  unit variance followed by a linear transformation of
9 8  the the
of theform
form same distribution with zero mean and unit variance followed by a li
ame distribution the form
to with
the zerotarget mean values and unit to variance
enable the followeduse9 9  of by the
the aaform
linear
form
smaller transformation l of 9  the form
2. Compute [Y] = ln[X] and the means μ_Yi.

3. Compute [G] using Eq (3).

4. Generate an M × n matrix [Ũ] using a standard normal distribution, where M and n are the number of simulations and the number of demand parameters, respectively.

5. Compute the correlation matrix and the square root of the variance matrix of the column vectors in [G], and denote them as R(G) and [σ_G], respectively.

6. Using spectral value decomposition, R(G) = [A_Gcor][λ_Gcor][A_Gcor]^T.
7. If OPT = 0, then [G̃] = [Ũ][λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]. If OPT = 1, then:
   i) subtract the sample mean from each column of [Ũ] and calculate the covariance matrix Σ(Ũ);
   ii) using spectral value decomposition, Σ(Ũ) = [A_covU][λ_covU][A_covU]^T;
   iii) calculate [G̃] = [Ũ][A_covU][λ_covU]^(−1/2)[λ_Gcor]^(1/2)[A_Gcor]^T[σ_G].
8. Compute [Ỹ] by adding back the mean: Ỹ_i = G̃_i + μ_Yi{1}.

9. Generate demand vectors by taking the exponential of [Ỹ].
Mathematical Basis

A mathematical proof for the SVD algorithm is presented in what follows. The uniqueness of the solution is not guaranteed because the computation involves the generation of random numbers and because, even though spectral values are unique by definition, the associated left and right basis vectors are not. However, uniqueness is not an issue for loss computations or risk assessment. The procedure differs from the Yang et al. algorithm in two ways: 1) the correlation matrix is factored by spectral value decomposition rather than Cholesky decomposition, and 2) an option is provided to accelerate the recovery of the underlying statistics.

The matrix [X] is transformed into the [Y] space as shown in Figure 1. It is transformed again into another parameter space, [G], such that all demand vectors have a mean of zero. Matrix [G] is then expanded into a larger dimensional space such that the mean, variance and correlation matrix are preserved. This is achieved by generating the required set of independent random variables of the same distribution with zero mean and unit variance, followed by a linear transformation.

Proof

Let [Y] = [Y_1 Y_2 ... Y_n] be an m × n matrix that represents realizations of an n-dimensional random variable such that

    Ỹ = [Ỹ_1 Ỹ_2 ... Ỹ_n]^T = [Y]^T                                    (2)

Define [G] = [G_1 G_2 ... G_n] such that

    G_i = Y_i − μ_Yi{1}                                                 (3)

98    Volume 45  Number 1  March 2015    The Bridge and Structural Engineer

The demand vectors in [G] are expanded by generating independent random variables of the same distribution with zero mean and unit variance, followed by a linear transformation of the form

    G̃ = [T][J]Ũ                                                        (6)

Here [T] and [J] are n × n matrices representing the linear transformation from the independent random variables to the correlated random variables. Using the property of the linear transformation, the covariance matrix is given by

    Σ(G̃) = (1/(M−1))[G̃]^T[G̃] = [TJ] Σ(Ũ) [TJ]^T                        (7)

Accordingly, the correlation matrix is

    R(G̃) = ([σ_G]^(−1)[TJ]) Σ(Ũ) ([σ_G]^(−1)[TJ])^T                    (8)

where [σ_G²] = [σ_G][σ_G] is a diagonal matrix of variances. We consider now two cases, without and with acceleration, to fully recover the underlying statistics.
Case 1: Large number of simulations, no acceleration

For this case, Σ(Ũ) is approximately an identity matrix of order n. Accordingly, Eq (8) leads to

    R(G̃) = [L][L]^T,  where [L] = [σ_G]^(−1)[TJ]                       (9)

Using Eq (4) and Eq (5), the correlation matrix of the original data set may be expressed as

    R(G) = [K]^T[K],  where [K] = (1/√(m−1))[G][σ_G]^(−1)              (10)

Since R(G) is a symmetric matrix and can be expressed as [K]^T[K], it is possible to show that all the eigenvalues of R(G) are non-negative. By spectral value decomposition,

    R(G) = [A_Gcor][λ_Gcor][A_Gcor]^T = [L][L]^T                       (11)

Since the simulated random variables have the same correlation matrix as that of the original data set, equating [L] from Eq (9) and Eq (11), it may be shown that

    [L] = [A_Gcor][λ_Gcor]^(1/2),  [TJ] = [σ_G][A_Gcor][λ_Gcor]^(1/2)  (12)

Substituting Eq (12) into Eq (6), taking the transpose, and noting that G̃ = [G̃]^T and [σ_G] = [σ_G]^T, it may be shown that

    [G̃] = [Ũ][λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]                             (13)

This form is identical to the cholcov algorithm if the Cholesky decomposition is performed, where [L] in the Cholesky decomposition is equivalent to [λ_Gcor]^(1/2)[A_Gcor]^T in the spectral value decomposition.
Case 2: Small number of simulations, with acceleration

In this case Σ(Ũ) will be considerably different from the identity matrix of order n. First, we subtract the sample mean from Ũ and compute Σ(Ũ). Assuming the number of simulations to be greater than the number of demand parameters, that is, M > n, the spectral value decomposition is

    Σ(Ũ) = [A_covU][λ_covU][A_covU]^T                                  (14)

and Eq (8) may be rewritten as

    R(G̃) = [σ_G]^(−1)[T] ([J] Σ(Ũ) [J]^T) ([σ_G]^(−1)[T])^T           (15)

Next we select [J] such that [J]Σ(Ũ)[J]^T becomes an identity matrix of order n. This is possible when

    [J] = [λ_covU]^(−1/2)[A_covU]^T                                    (16)

Accordingly, Eq (15) is reduced to

    R(G̃) = [L][L]^T,  where [L] = [σ_G]^(−1)[T]                        (17)
Since the simulated random variables have the same correlation matrix as that of the original data set, equating [L] from Eq (17) and Eq (11), it may be shown that

    [T] = [σ_G][A_Gcor][λ_Gcor]^(1/2)                                  (18)

Substituting Eq (18) and Eq (16) into Eq (6), taking the transpose, and noting that G̃ = [G̃]^T and [σ_G] = [σ_G]^T, it may be shown that

    [G̃] = [Ũ][A_covU][λ_covU]^(−1/2)[λ_Gcor]^(1/2)[A_Gcor]^T[σ_G]     (19)

This procedure preserves the mean, variance and correlation matrix regardless of the simulation size if M > n. Once [G̃] is computed, [Ỹ] may be computed by adding back the mean as

    Ỹ_i = G̃_i + μ_Yi{1}                                               (20)

where {1} is an M × 1 vector of ones. This completes the simulation of the multivariate normal variable Ỹ with the specified mean, variance and correlation matrix. The user can choose to accelerate the recovery of the underlying statistics. Irrespective of the number of simulations, the SVD algorithm returns results similar to the cholcov routine if the acceleration option is not selected. For a small number of simulations, the use of the acceleration option may appear to violate some basic rules of statistics because, instead of performing a linear transformation on simulated standard normal
17 
that than the number of demand parameters, 17  that numberMathematical
is, 19  M >20  ntransformation
, spectral and, even
Basis valueis though
decomposition
  performed spectral on other valuessimulated are unique, random by definition,
variables V
lthe asso
. Here
l other
()
l on simulated standard normal variables l ,, aaislinear
be greaterof
nd parameters, than
that is,
18 demand
the M number> n , spectral of demand 18 6  value parameters, decomposition Mtransformation
>ofsamplethat
n,performingis, mean M > nfrom , transformation
spectral value decomposition lperformed l normal transformation llinear
performed on
issueV normal
that parameters, that is, spectral onvalue onU
7  transformation is U on other simulated random variables .for
Here
instead of performing a linear 19  instead on ofnormal simulated
performing . standard
aa linear
linear 19  variables
8  shows instead of18  performing instead 18 asubtract
oflinear instead
performing the a linear asimulated
transformationlinear 18 
18  instead
Utransformation
and
21  standardoncomputevectors
simulated
of performingonΣare Uvariables
not.
simulated
standard Assuming However,
normal standard atransformation
, transformation
the
linear
variables number
uniqueness
normal l ofon
Uvariables, a simulation
simulated
simulated
is
linear U , aanto
not standard
standard
linear norma lo
showsT
that decomposition shows that 18  A mathematical transformation proof for the is
SVD performed
algorithm is on
presented other in simulated
what follows. The unique
[ AcovU ]
()
8  l (14) l l ll in two
T
transformation is performedrandom on other simulated assessment.The
l l variables procedure V ldiffers from
Vsimulated l the 10 Yang   to etrelated al. algorithm
= [ AcovU ][λcovU ][19  ] be greater V .random . Here is linearly related toto
22  l to V
ns,with acceleration
9  Σ
19 U transformation 19  transformation isAcov performed
7  U19  onisother
transformation thansimulated
performed the is number
performed
on other
19 
19 
ofsimulated
demandonvariables transformation
transformation
random
notparameters,
isother simulated
random variables
Here variables
is
is
that
random performed
V performed
isis, lvariables
linearly
VM ..the Here
>
Here
(14) on
on
related
ncomputation other
other is
Vis .linearly
, Vspectral Here simulated
linearly
value V related random
random
related
isdecomposition
linearly variables
variables to Vof .. Here
Here

l
Σ U = [ AcovUl][λcov () ][ covU ]
A
T
(14)
l sample mean) such that (after
19  solution
23  
guaranteed
l of subtracting lthe correlation U
because
l
(after
l [ J ]the
(14)
matrix is
subtracting
sample the
mean) basedsuch
sample
involves
on that
mean)
10   the generation
SVD,where
such that V
l
which [ is] more ro
= J
l
U
random
where [
()

l
[ A(14) 1  ][ AU (after T U subtracting
8  shows the that V  = [ Jsuch ]U where [ J ]Ulisare computed [ J ] isfrom   the from
Σ of U = covU ][ λ covU ] because M 1  (14) and, even though spectral that Vvalues (14) unique, by definition, the
the associated left and
9 form U (after subtracting
20  the sample mean) = where computed
Eq is covUpossible > n guarantees that all the is
rably different 10  Factorization
from the identity
Factorization ofofthe theform
matrix form
of order of of Eqn Eq .(14)
First, (14) iswepossible is possible 10because
  Mcomputed >10n  guarantees
102   from
10   that
spectral
thevalue spectral the value decomposition
all decomposition () of Σ U10
l10   of
(after subtracting the sample

) 10  because Factorization   M >of 2  n guarantees


spectral the value
  form
 
ofthat
decomposition
()
2  lEq all spectral
(14) theisof of
value Σ()
eigenvalues
possible ()
21 l vectors are   l
U
decomposition T of
(after
because subtracting
of Σ U
not. (after
(after
M > n guarantees that all the
However,
the subtracting
sample
subtracting
uniqueness
mean) the the
sampleper sample
Eq
is notmean)
mean) (16). per
an issue
The Eq per
(16). Eq
The
for
5   (16).loss computation
are non-negative.
(14)Factorization
0 and is11 possible
compute
eigenvalues
Σ
l because
U of
Accordingly,
()
.are
of
thenon-negative.
Assuming form MΣ > U
lthe
()
ofn guarantees
the Eq
number
9 correlation
are (14)
Σ
non-negative.
Accordingly,
of
U  =matrix
issimulation
that
[ Aall
possible ][λbecause
covUAccordingly,
the
theto
the
cov22 
simulated
U ][ Acovassessment.The
correlation
UM ]the> correlation
n guarantees
The  procedure matrix
transformation that 3  all ofdiffersthe
the simulated
transformation from
[J] isthe ] Yang
[ J such is such that () ( )
et
that al.ΣalgorithmV = Σ U ∞in two
l m (14) andwhere
and ways:Um 1) the
∞ is the
fa
11  eigenvalues of Σ U are non-negative.
l
() Accordingly,
() ( ) () ( )
V[ Jgiven
l ] is of the
m correlation l
V = Σ Um
Σ where
m
matrix is
andthe of the simulated
m
simulated
U ∞ is the standard normal variables
()
transformation23 
J 3 is such = Σsuch that correlation where simulated standard normal
by[[from of] Accordingly,
matrix the matrix issimulated based on SVD,normal which is more robust than the
Σ of 3 l the simulated random variables is Uby
iven by [from Eq(8)] that Σ the
transformation and where of U is the standard

ative.eigenvalues
1  Accordingly,
12  random of the Ucorrelation
variables are non-negative.
is given matrix theEq(8)] simulated correlation ∞ matrix the∞4 simulated variables for an infinite number of realizations (or simulations).T
and parameters,[from that is, M > n , spectral value
10  Factorization of the form of Eq fordecomposition M > n guarantees
variables for an infinite number of an
(14) isinfinite possible number because of realizations (orwhich simulations).
that all the
12  randomEq(8)] variables is given by [from 4  Eq(8)] realizations 5  acceleration (or simulations).The option violates degreefundamental to the of statistics can be m
rules
variables for an infinite number of realizations (or simulations).The degree to which the
( ) ( ( ) )( The degree to which the acceleration option violates

)( )
Eq(8)]
2  l
random T ⎞ variables
⎡ ⎤
− 1
is
T
given by [from Eq(8)] l 5
() (
J ] Σ U [ J ] ⎟⎟ ⎣σ G ⎦ l [T ] ⎡ ⎤ −1  
)
⎛ 11  l eigenvalues T⎞
5  acceleration−1 of Σ U T option violates (15)
are non-negative. fundamental rules
Accordingly, of statistics can
the correlation be l measured by
matrixto of testing the of the
simulated
[ ] ⎜⎝[ ] option [ ] ⎟⎠violates ⎟ ⎡ ⎤
⎣ G ⎦ −1 [ fundamental ] T (15)
13  ⎠ R G = 5 ⎣σ G ⎦acceleration T ⎜ J Σ U J σ T   rules of statistics fundamental can berules
6  distribution
measured of
of statistics V
(15) with
by testingcan respect of be the
the measured standard normal by distribution. Suc
13  ( ) () R G
l ⎡ ⎤
= ⎛⎣σ G ⎦ [T ] ⎜⎜[ JT] Σ
−1 ⎛
) l
⎞⎟ ⎡ U random ( ) () T⎞
6  ⎟
[−J1 ] ⎟⎠ T⎣σvariables ⎡ ⎤
distribution
) of V
l
[T ] is given by [from Eq(8)]
with respect to the standard normal
7  the original datadistribution. (15)Such testing
set to be known, assuming M > n . does not require

−1
σ G ⎤⎦ R[TG ( )) (
lT −1
] ⎛= ⎡⎣σ Gl⎤⎦ [T T]6 ⎞⎜⎜⎝[ J ]distribution
l
Σ U [⎝J ] 12  () ⎟⎠ ⎣of (
σ GVl⎤⎦ 7  [T ] the
with
G⎦
respect
original
(14)
(15) todata theset standard
to be known, normal assuming distribution. M > n . Such (15)testing does not require
14  ()
uch that ⎜⎜[ J ] Σ U [ J ] ⎟⎟ becomes an identity⎛ matrixl of order
⎝ The Next Bridge we7 ⎠select theand J ]Structural
[original suchdata thatset ()
⎜⎜⎝⎛8 l[Engineer
Jto] ΣbeUknown,
T⎞
[ J−1] ⎟⎟⎠ assuming
n . This is
becomes M an>identity n. matrix of order
8  3.0 Evaluation of results generated by the cholcov and SVD algorith
Volume n . This 45 isNumber 1 March 2015  99
(14) isT14  possibleNext we
because select M [ J > ] n
such 13  thatR ⎜G
guarantees () ( ) [ J =
] Σ
3.0⎡l
that Uσ
Evaluation
all [ ⎤ J ]the
[
T
T⎞

⎟ ]) ⎛ of
becomes
results
⎜⎜[ J ] Σ U an () (
l generated
[ J ]
T ⎞
⎟ ⎡σ ⎤ matrix
identity

by−1 the T
[ T )]
cholcov
of
and SVD algorithms
order n . This isof the cholcov and SVD algorithms (15) is evaluated usi
() ()
l ⎝⎜ T ⎞⎟ ⎣ ⎦ ⎠ ⎝ ⎠ ⎣ ⎦
⎞⎟ ⎛ G G 9  The performance
l
Σ4  U Next [ J ]15 ⎟⎠we select
becomes
possible [an J8 ] identity
when such 3.0 that matrix
Evaluation ⎜⎝[ J ] ΣofUof
⎜ order [9 results
J ] ⎟The n .becomes
⎠ generated
This
performance is an by identity
of the thecholcov matrix
cholcov and
ofSVD
and order
SVD algorithms
nalgorithms
10  sample
. This is
is evaluated three-story usingbuilding results ofwith analysis seven of ademand parameters (3 s
gative. Accordingly,
15  possible thewhen correlation matrix of the simulated
10  sample three-story building with seven demand parameters (3 story drifts and 4 floor
()
T
⎛ T⎞
11  accelerations). The base line study involved a maximum of 11 ground m
5  possible when (16) lis evaluated
16  [ J ] = [ λ 9  ] The
−1 2
[ A performance ] 14  Next
T of the we cholcov select [ Jand ] such SVDthat algorithms⎜⎜[ J ] Σ U [ J ] ⎟⎟ becomes usingan resultsidentity (16) of analysismatrix of of order a n . This is
testing
(after subtracting the of the distribution
sample of Vl =with
mean) such that [ J ]Ul respect
where [to
J ] the namely,
is computed from without
the and with acceleration is next
standard normal distribution. Such testing does not considered for illustration. Note that proposed
ectral value decomposition ()
require the original
l
of Σ U data (after
setsubtracting
to be known, the sample
assuming.mean) peralgorithm
Eq (16). The without acceleration is same as the Yang
et al. process, if the Cholesky decomposition exists
nsformation [ J3.
] is such
Evaluation() ( )
l
that Σ V =of Σ U
m
results

m
generated
and where U ∞ is the bysimulated
the standard normal its modification using cholcov. Even
and otherwise,
cholcov and SVD algorithms though the power of proposed algorithm lies in
riables for an infinite number of realizations (or simulations).The degree its to which the
acceleration component, similitude of without
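The steps above can be sketched numerically. The following is a minimal NumPy transcription of the procedure with the acceleration option (the paper's implementation is the MATLAB listing in Appendix B; the synthetic demand matrix, sizes and variable names below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demand matrix X (11 ground motions x 4 demand parameters);
# the paper's actual data are given in Table 1.
X = rng.lognormal(mean=0.5, sigma=0.3, size=(11, 4))

Y = np.log(X)                                  # demands assumed jointly lognormal
mu = Y.mean(axis=0)                            # target means in the Y space
sig = Y.std(axis=0, ddof=1)                    # target standard deviations
RG = np.corrcoef(Y, rowvar=False)              # target correlation matrix

A_G, lam_G, _ = np.linalg.svd(RG)              # spectral value decomposition of R_G

M = 200                                        # number of simulations (M > n)
U = rng.standard_normal((M, 4))                # simulated standard normal variables

# Acceleration option (Eqs 14 and 16): subtract the sample mean and select [J]
# so that the sample covariance of V = [J]U becomes an identity matrix.
U = U - U.mean(axis=0)
A_U, lam_U, _ = np.linalg.svd(np.cov(U, rowvar=False))
V = U @ A_U @ np.diag(1.0 / np.sqrt(lam_U))

# Impose the target correlation, standard deviations and means, then exponentiate.
G = V @ np.diag(np.sqrt(lam_G)) @ A_G.T @ np.diag(sig)
W = np.exp(G + mu)
```

Because the sample covariance of V is exactly an identity matrix, the Y-space means and correlation coefficients of the generated vectors match the targets to machine precision, regardless of M.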
3.0 Evaluation of results generated by the cholcov and SVD algorithms

The performance of the cholcov and SVD algorithms is evaluated using results of analysis of a sample three-story building with seven demand parameters (3 story drifts and 4 floor accelerations). The base line study involved a maximum of 11 ground motions.

Table 1 presents benchmark data that are used later for the purpose of comparing results. The matrix of data is denoted as X. The demand parameters are assumed to be jointly lognormal. The mean and variance for each demand parameter from matrix X are presented in Table 2. The mean and variance for each demand parameter computed for matrix [Y] = ln[X] are presented in Table 3. The correlation matrix, [R_YY], computed for [Y] is presented in Table 4.

Table 5 presents the results of computations using cholcov and 100 simulations. The table presents the ratio of cholcov to benchmark values of means (in the Y space) and correlation coefficients (in the Y space). The correlation coefficients are seen to vary significantly from the target values because of the small number of simulations. Minimum and maximum entries are noted as 0.25 at (5, 3) and 5.38 at (5, 1), respectively, against the target of 1.0. Note that the associated benchmark correlation coefficients are also small (close to zero) and hence contribute to the observed high ratios. However, the ratio noted for (4, 1) is 0.74 and the correlation coefficient is 0.375, which describes the limitation of the existing procedure with smaller simulation sizes. This conclusion is verified by analysis using 1,000,000 simulations, and the results presented in Table 6 show that the statistical parameters converge to the target values as the number of simulations is increased, which is an expected result. Three ground motions, G1 to G3, are next considered and the results are presented in Table 7 for 100 simulations. This is a case with a rank-deficient covariance matrix, and the results in Table 7 indicate that the cholcov routine works for this case. Note that the ratio of the correlation coefficient to its benchmark value (0.375) at (4, 1) is 1.49.

The proposed SVD algorithm, which has two components, namely, without and with acceleration, is next considered for illustration. Note that the proposed algorithm without acceleration is the same as the Yang et al. process if the Cholesky decomposition exists, and otherwise is its modification using cholcov. Even though the power of the proposed algorithm lies in its acceleration component, the similitude of the without-acceleration part to cholcov is illustrated first. Tables 8 and 9 present the results for 100 and 1,000,000 simulations, respectively, with 11 ground motions. These results should be compared with the results in Tables 5 and 6, respectively. For example, the entries at (5, 3) and (5, 1) in Table 8 are 0.07 (0.25 with cholcov) and 2.16 (5.38 with cholcov), respectively. The accuracy of the SVD results is comparable to that achieved with the cholcov algorithm. Table 10 presents the results for 3 ground motions (G1 to G3) with 100 simulations. Comparison of these data with Table 7 indicates that the SVD algorithm also works for a rank-deficient covariance matrix. Therefore, the proposed method without acceleration leads to results comparable to those from the existing method (modified by cholcov).

The SVD algorithm can also be used for a negative definite covariance matrix by assigning zero to those spectral values for which the left and right spectral shapes differ only by sign. The cholcov algorithm will not work in such a case, as it requires the covariance matrix to be positive semi-definite. The present paper, however, does not explore this distinction.

We now consider the results of analysis using the acceleration component of the proposed SVD algorithm. We repeated the analysis above for a number of combinations of ground motions with 100, 10,000 and 1,000,000 simulations. Table 11 presents the results. For every analysis, the means and correlation coefficients are equal to the target values, regardless of the number of simulations: the target statistics are completely recovered regardless of the number of simulations. This acceleration technique is the fundamental contribution of the present paper.

4. Conclusions

A procedure to generate statistically consistent demand vectors for seismic performance (risk) assessment is developed. The proposed procedure has two distinct features. First, it utilizes the spectral
value decomposition of the covariance/correlation matrix, which is more robust than the conventional Cholesky-type decomposition. Second, the procedure introduces an (optional) acceleration technique in the convergence of the statistics of the simulated demand parameters to the target statistics. Without the acceleration technique, the proposed procedure is mathematically identical to the conventional cholcov algorithm, provided the Cholesky decomposition exists; the use of spectral value decomposition is therefore a viable alternative to the cholcov routine. With the acceleration technique, the target statistics are recovered in the proposed procedure regardless of the simulation size. The proposed acceleration technique can also be used with the cholcov routine. Therefore, the fundamental contribution of this paper is the acceleration technique; SVD is shown as an alternative to the Cholesky decomposition.

Acknowledgments

The study described in this paper was partially funded by the Applied Technology Council in support of the ATC-58 project on seismic performance assessment of buildings. This support is gratefully acknowledged. The study benefited greatly from comments received from Professor Jack Baker of Stanford University and Professor Farzin Zareian of UC Irvine.

References

1. Huang, Y.-N., Whittaker, A. S. and Luco, N. (2008). "Performance assessment of conventional and base-isolated nuclear power plants for earthquake and blast loadings." Report MCEER-08-0019, Multidisciplinary Center for Earthquake Engineering Research, SUNY, Buffalo, NY.
2. Huang, Y.-N., Whittaker, A. S. and Luco, N. (2011). "A seismic risk assessment procedure for nuclear power plants, (I) methodology." Nuclear Engineering and Design, 241: 3996-4003.
3. ASCE (2006). "Seismic rehabilitation of existing buildings." Standard ASCE/SEI 41-06, American Society of Civil Engineers, Reston, VA.
4. ATC (2012). "Guidelines for the seismic performance assessment of buildings." Report No. ATC-58-1, Applied Technology Council, Redwood City, CA.
5. Yang, T. Y., Moehle, J., Stojadinovic, B. and Der Kiureghian, A. (2009). "Performance evaluation of structural systems: theory and implementation." Journal of Structural Engineering, ASCE, 135 (10): 1146-1154.
6. Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem. Oxford Science Publications.
7. Mathworks, Ed. (2008). MATLAB 8—The Language of Technical Computing. Natick, MA.
8. Zareian, F. (2010). Personal communication.

Table 1: Demand matrix, X, from response-history analysis

      δ1 (%)  δ2 (%)  δ3 (%)  a1 (g)  a2 (g)  a3 (g)  a4 (g)
G1    1.26    1.45    1.71    0.54    0.87    0.88    0.65
G2    1.41    2.05    2.43    0.55    0.87    0.77    0.78
G3    1.37    1.96    2.63    0.75    1.04    0.89    0.81
G4    0.97    1.87    2.74    0.55    0.92    1.12    0.75
G5    0.94    1.8     2.02    0.40    0.77    0.74    0.64
G6    1.73    2.55    2.46    0.45    0.57    0.45    0.59
G7    1.05    2.15    2.26    0.38    0.59    0.49    0.52
G8    1.40    1.67    2.1     0.73    1.50    1.34    0.83
G9    1.59    1.76    2.01    0.59    0.94    0.81    0.72
G10   0.83    1.68    2.25    0.53    1.00    0.9     0.74
G11   0.96    1.83    2.25    0.49    0.90    0.81    0.64

Table 2: Demand matrix statistics, [X]

      δ1 (%)  δ2 (%)  δ3 (%)  a1 (g)  a2 (g)  a3 (g)  a4 (g)
μ     1.2282  1.8882  2.2600  0.5418  0.9064  0.8364  0.6973
σ²    0.0878  0.0849  0.0885  0.0139  0.0615  0.0627  0.0094

Table 3: Demand matrix statistics, [Y] = ln[X]

      δ1      δ2      δ3      a1      a2      a3      a4
μ     0.179   0.625   0.807   -0.634  -0.131  -0.222  -0.370
σ²    0.059   0.022   0.018   0.046   0.070   0.0993  0.021

Table 4: Benchmark correlation matrix, [R_YY]

 1.000   0.339  -0.019   0.375  -0.022  -0.193   0.145
 0.339   1.000   0.656  -0.353  -0.646  -0.723  -0.376
-0.019   0.656   1.000   0.136  -0.094  -0.066   0.220
 0.375  -0.353   0.136   1.000   0.839   0.731   0.881
-0.022  -0.646  -0.094   0.839   1.000   0.934   0.863
-0.193  -0.723  -0.066   0.731   0.934   1.000   0.820
 0.145  -0.376   0.220   0.881   0.863   0.820   1.000
Table 5: cholcov algorithm using 11 ground motions, 100 simulations

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
1.0347 1.0054 1.0010 1.0032 1.0215 1.0410 1.0004

Ratio of correlation coefficients
1.0000 1.2520 0.6032 0.7389 5.3804 1.4156 0.4426
1.2520 1.0000 0.9645 0.8050 0.9612 1.0051 0.9401
0.6032 0.9645 1.0000 1.7816 0.2477 0.7994 1.2546
0.7389 0.8050 1.7816 1.0000 1.0010 0.9940 1.0031
5.3804 0.9612 0.2477 1.0010 1.0000 1.0129 0.9842
1.4156 1.0051 0.7994 0.9940 1.0129 1.0000 0.9888
0.4426 0.9401 1.2546 1.0031 0.9842 0.9888 1.0000

Table 6: cholcov algorithm using 11 ground motions, 1,000,000 simulations

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
1.0003 1.0000 0.9999 1.0000 1.0003 1.0003 1.0001

Ratio of correlation coefficients
1.0000 1.0008 1.0599 0.9985 0.9982 1.0028 0.9979
1.0008 1.0000 0.9990 1.0001 0.9997 0.9992 0.9997
1.0599 0.9990 1.0000 0.9997 0.9941 0.9776 1.0013
0.9985 1.0001 0.9997 1.0000 1.0003 1.0001 1.0000
0.9982 0.9997 0.9941 1.0003 1.0000 0.9999 1.0002
1.0028 0.9992 0.9776 1.0001 0.9999 1.0000 1.0002
0.9979 0.9997 1.0013 1.0000 1.0002 1.0002 1.0000

Table 7: cholcov algorithm using 3 ground motions, 100 simulations

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
0.9982 0.9901 0.9885 1.0086 1.0161 0.9986 1.0081

Ratio of correlation coefficients
1.0000 1.0010 1.0183 1.4873 1.5998 0.8824 1.0170
1.0010 1.0000 1.0099 1.3168 1.3732 0.8603 1.0090
1.0183 1.0099 1.0000 1.1286 1.1479 0.8145 1.0000
1.4873 1.3168 1.1286 1.0000 1.0002 0.8754 1.1338
1.5998 1.3732 1.1479 1.0002 1.0000 0.8808 1.1539
0.8824 0.8603 0.8145 0.8754 0.8808 1.0000 0.8166
1.0170 1.0090 1.0000 1.1338 1.1539 0.8166 1.0000

Table 8: SVD algorithm using 11 ground motions, 100 simulations, no acceleration

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
0.9910 0.9897 0.9759 1.0313 1.0558 1.0311 1.0452

Ratio of correlation coefficients
1.0000 0.9031 1.3382 1.0446 2.1557 0.8650 1.1866
0.9031 1.0000 1.0596 0.8851 0.8649 0.9325 0.7957
1.3382 1.0596 1.0000 1.0812 0.0676 0.6748 1.1370
1.0446 0.8851 1.0812 1.0000 0.9865 0.9892 0.9771
2.1557 0.8649 0.0676 0.9865 1.0000 0.9857 0.9805
0.8650 0.9325 0.6748 0.9892 0.9857 1.0000 0.9916
1.1866 0.7957 1.1370 0.9771 0.9805 0.9916 1.0000

Table 9: SVD algorithm using 11 ground motions, 1,000,000 simulations, no acceleration

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
1.0010 0.9999 0.9996 1.0003 1.0017 1.0016 1.0006

Ratio of correlation coefficients
1.0000 1.0006 1.0533 0.9964 1.0520 1.0049 0.9849
1.0006 1.0000 0.9985 1.0002 1.0000 1.0001 1.0016
1.0533 0.9985 1.0000 1.0078 0.9893 0.9849 1.0032
0.9964 1.0002 1.0078 1.0000 0.9999 1.0000 0.9996
1.0520 1.0000 0.9893 0.9999 1.0000 0.9999 1.0002
1.0049 1.0001 0.9849 1.0000 0.9999 1.0000 1.0004
0.9849 1.0016 1.0032 0.9996 1.0002 1.0004 1.0000

Table 10: SVD algorithm using 3 ground motions, 100 simulations, no acceleration

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
1.0198 1.0319 1.0261 0.9889 0.9674 1.0318 0.9638
Ratio of correlation coefficients
1.0000 1.0013 1.0089 0.8330 0.7768 1.1076 1.0086
1.0013 1.0000 1.0030 0.8629 0.8277 1.1572 1.0028
1.0089 1.0030 1.0000 0.9152 0.8981 1.3930 1.0000
0.8330 0.8629 0.9152 1.0000 0.9996 0.9343 0.9132
0.7768 0.8277 0.8981 0.9996 1.0000 0.9512 0.8955
1.1076 1.1572 1.3930 0.9343 0.9512 1.0000 1.3752
1.0086 1.0028 1.0000 0.9132 0.8955 1.3752 1.0000

Table 11: SVD algorithm, with acceleration

Demand parameters: δ1 δ2 δ3 a1 a2 a3 a4

Ratio of means
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000

Ratio of correlation coefficients
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
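The rank-deficient case exercised in Tables 7 and 10 (3 ground motions, 7 demand parameters) can be reproduced numerically. The sketch below uses synthetic data (not the paper's) to show that the correlation matrix is rank deficient, and that the SVD-based square root still reproduces it, whereas a plain Cholesky factorization is generally unavailable for such a matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Y-space demands for 3 ground motions and 7 demand parameters (m < n),
# mimicking the rank-deficient case of Tables 7 and 10.
Y = rng.standard_normal((3, 7))
R = np.corrcoef(Y, rowvar=False)   # correlation matrix; rank <= m - 1 = 2

rank_R = np.linalg.matrix_rank(R)  # rank deficient: 2, not 7

# SVD-based square root: R = L L^T holds even though R is singular, because
# the zero spectral values simply drop out of the product.
A, lam, _ = np.linalg.svd(R)
L = A @ np.diag(np.sqrt(lam))
print(rank_R, np.allclose(L @ L.T, R))
```

This is the property that lets the SVD algorithm (and the modified cholcov routine) handle demand matrices with fewer ground motions than demand parameters.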
APPENDIX A: Rank Deficient Covariance Matrices

We provide the mathematical basis for the statement that if n ≥ m (n and m are the number of demand parameters and ground motions, respectively) the covariance matrix is rank deficient. We first present an identity of the form [6]:

    [ I    o  ] [ μI   A  ]   [ μI        A     ]
    [ -B   μI ] [ B    μI ] = [ o     μ²I - BA  ]                        (A-1)

    [ μI  -A ] [ μI   A  ]   [ μ²I - AB    o   ]
    [ o    I ] [ B    μI ] = [ B           μI  ]                         (A-2)

If the second matrix on the left side of Eq (A-1) [and Eq (A-2)] is denoted Q, then from Eq (A-1):

    μ^m det(Q) = μ^n det(μ²I - BA)                                       (A-3)

and similarly from Eq (A-2):

    μ^n det(Q) = μ^m det(μ²I - AB)                                       (A-4)

The matrix products [A][B] and [B][A] will therefore have the same non-zero eigenvalues. Moreover, if [A] is of size n×m and [B] is of size m×n, then [A][B] and [B][A] will have the same eigenvalues, except that the product of higher order will have an additional |m − n| zero eigenvalues.

Consider a specific case, namely [A] = [B]^T. The matrix product [B]^T[B] can be considered as the covariance matrix with some scale factor, and it will have (n − m) zero eigenvalues if n > m. Moreover, when n = m, the determinant of the covariance matrix will be zero and one of the eigenvalues will be zero. Accordingly, the covariance matrix is also rank deficient if n = m. Combining this property with the mathematical proof presented above, the covariance matrix will be rank deficient when n ≥ m.

APPENDIX B: MATLAB Coding of the SVD Algorithm

clear all
close all
%OPT=0;% no acceleration
OPT=1;% with acceleration
%Develop underlying statistics of the response history analysis
%step-1
X=load('DP.txt');
[mrow,ncol]=size(X);
Msize=1000000;% simulation size
tic;
%step-2
Y=log(X);
mu=(mean(Y));
RY=corrcoef(Y);
%step-3
G=Y-ones(mrow,ncol)*diag(mu);
%step-4
colU=ncol;
U_star=randn(Msize,colU);
%step-5
RG=corrcoef(G);
Var_G=var(G);
Sigma_G=diag(sqrt(Var_G));
%step-6
[A_Gcor,LAM_Gcor,B_Gcor]=svd(RG);
sqrt_LAM_Gcor=zeros(ncol,ncol);
for j=1:ncol
    sqrt_LAM_Gcor(j,j)=sqrt(LAM_Gcor(j,j));
end
%step-7
if OPT==0
    G_BAR=U_star*sqrt_LAM_Gcor*A_Gcor'*Sigma_G;
else
    if OPT==1
        %step-7.1
        mu_U_star=mean(U_star);
        U_star=U_star-ones(Msize,colU)*diag(mu_U_star);
        %step-7.2
        covU=cov(U_star);
        [A_covU,LAM_covU,B_covU]=svd(covU);
        inv_sqrt_LAM_covU=zeros(ncol,ncol);
        for j=1:ncol
            inv_sqrt_LAM_covU(j,j)=1/sqrt(LAM_covU(j,j));
        end
        %step-7.3
        G_BAR=U_star*A_covU*inv_sqrt_LAM_covU*sqrt_LAM_Gcor*A_Gcor'*Sigma_G;
    end
end
%step-8
Y_BAR=G_BAR+ones(Msize,ncol)*diag(mu);
%step-9
W=exp(Y_BAR);
Time_Elapsed=toc;
save correlated_DP_svd_eff.txt W -ascii -double -tabs;
%Check results
mu_Y_BAR=mean(Y_BAR);
RY_BAR=corrcoef(Y_BAR);
G_8=mu_Y_BAR./mu % for Table G-8
G_9=RY_BAR./RY %for Table G-9
G_10=var(Y_BAR)./var(Y)
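Appendix A's rank-deficiency argument can be checked numerically. The following NumPy fragment (an illustrative sketch, not part of the paper) verifies that, after subtracting the sample mean, the covariance-type product [B]^T[B] is rank deficient when n ≥ m, and that [B][B]^T and [B]^T[B] share their non-zero eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 7                                   # ground motions, demand parameters (n >= m)

B = rng.standard_normal((m, n))
B = B - B.mean(axis=0)                        # subtract the sample mean: columns now sum to zero
C = B.T @ B                                   # covariance matrix up to a scale factor (n x n)

ev_big = np.sort(np.linalg.eigvalsh(C))       # n eigenvalues of [B]^T[B], ascending
ev_small = np.sort(np.linalg.eigvalsh(B @ B.T))  # m eigenvalues of [B][B]^T

# Centering removes one more dimension, so the rank is at most m - 1.
print(np.sum(ev_big > 1e-10))
# The non-zero spectra of the two products coincide.
print(np.allclose(ev_big[-m:], ev_small))
```

This mirrors the conclusion of Appendix A: with n demand parameters and m ground motions, n ≥ m leaves the covariance matrix singular, which is why the factorization must tolerate zero spectral values.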