Structural Safety 26 (2004) 421–441



Seismic fragility analysis of 3D structures


M.I.J. Schotanus a, P. Franchin b,*, A. Lupoi b, P.E. Pinto b

a European School for Advanced Studies in Reduction of Seismic Risk (ROSE School), Via Ferrata 17, 27100 Pavia, Italy
b Department of Structural & Geotechnical Engineering, University of Rome "La Sapienza", Via Gramsci 53, 00197 Rome, Italy

* Corresponding author. Tel.: +39-06-49919167; fax: +39-06-3221449. E-mail address: paolo.franchin@uniroma1.it (P. Franchin).

Received 12 September 2003; received in revised form 8 March 2004; accepted 9 March 2004
doi:10.1016/j.strusafe.2004.03.001

Abstract

A statistical approach for time-variant system reliability problems is developed and investigated. The
basic proposal is to use a response surface, characterised by a statistical model of the mixed type, to
represent the capacity in an analytical limit-state function. The fragility function of the system is then
calculated by FORM analysis, with the constructed empirical limit-state function as input. The method-
ology is applied in the assessment of a 3D RC frame structure. The application allows a number of detailed
implementation issues to be clarified, and confirms the robustness and versatility of the method.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Response surface; Reinforced concrete; Multiple failure modes; Recorded accelerograms; Experimental
plan; Mixed statistical model; FORM

1. Introduction

In the computation of seismic risk, the approach most commonly followed involves a sepa-
ration between hazard and fragility: efficient procedures for the construction of these latter
functions are still an open research subject. When dealing with RC structures, the main obstacle is
the type of non-linearity by which the response is characterised: degrading, non-holonomic, and
describable only numerically, for which valid analytical solutions, even approximate ones, are not
available. The resulting algorithmic definition of the response that enters the limit-state functions (LSFs) used

in reliability analysis is far from being computationally effective. Availability of an analytical
approximation would greatly simplify the estimation of the fragility.
The approach adopted in this study is an extension of the response surface (RS) method, which
has found applications for time-invariant problems [1], but has been less frequently used for
seismic problems. The main purpose of this work is to investigate further the response surface
approach proposed first in [2], since it possesses most of the requisites associated with an af-
fordable and comprehensive procedure and seems, therefore, potentially powerful.
The methodology is explored here for the risk assessment of a 3D RC frame structure. Par-
ticular attention is devoted to practical issues that arise in its application.

2. Theory

2.1. Statement of the seismic reliability problem

Let x be a k × 1 random vector, completely characterised by its joint probability density
function $f(\mathbf{x})$, whose components can be of load or resistance type. In the space of x a LSF is
defined, usually indirectly in terms of the response of the structure $r(\mathbf{x})$, as a scalar function
$g[r(\mathbf{x}), \mathbf{x}]$ that takes on positive values as long as the structure is safe in the corresponding failure
mode, and negative values when it has failed; the boundary $\{\mathbf{x} \mid g[r(\mathbf{x}), \mathbf{x}] = 0\}$ between the two
conditions is called the limit-state surface.

For seismic problems, the response and possibly the LSF itself need to be described as functions
of time t. This requires that the LSF be re-written as time-dependent: $g[r(\mathbf{x}, t), \mathbf{x}, t]$. The failure
probability, the complement of the reliability, is therefore also time-dependent and can be expressed as:

$$\Pr\Big\{ \min_{t \in [0,T]} g[r(\mathbf{x}, t), \mathbf{x}, t] < 0 \Big\} \qquad (1)$$

where the structure is assumed to be safe at t = 0 and T is the time interval of interest (the duration
of the seismic event). This quantity is also known as the first-excursion probability.

Using (1), the fragility is defined as:

$$\Pr\Big\{ \min_{t \in [0,T]} g[r(\mathbf{x}, t), \mathbf{x}, t] < 0 \,\Big|\, IM \Big\} \qquad (2)$$

i.e. as the failure probability conditional on an intensity measure (IM) characterising the earthquake,
consistent with the expression for the hazard. The spectral acceleration close to the fundamental
period of the structure, $S_a(T)$, is currently widely used for this purpose.
As anticipated, when the response of a complex structure enters the inelastic range, its calcu-
lation, necessary for the evaluation of the LSF, generally requires a demanding numerical (finite
element) time-history analysis, corresponding to a realisation of the random vector x. In order to
solve (2) it is then often necessary to resort to the computationally expensive class of simulation
methods. However, reducing the effort associated with these methods is mandatory if fragility
analysis of realistically complex systems is to be undertaken.
A factor that has a large impact on the required number of analyses is the choice of the
form of the limit-state function. Most often the LSF is formulated in the capacity-minus-demand
format, capacity and demand being dimensionally homogeneous quantities such as bending moments,
displacements, etc., depending on the considered failure mode. This is written as:

$$g(\mathbf{x}) = C(\mathbf{x}) - D(\mathbf{x}) \qquad (3)$$


where $C(\mathbf{x})$ and $D(\mathbf{x})$ are the capacity and demand terms, respectively. The demand $D(\mathbf{x})$ usually
coincides with the response $r(\mathbf{x})$. When the problem is time-dependent, the format above
is generally preserved by using as demand the maximum of the response of interest over the time
interval T or, when the capacity varies with time too, by using the minimum of the difference
$C(\mathbf{x}, t) - D(\mathbf{x}, t)$.
This format requires repetition of the simulation at each intensity level at which the fragility
function (2) is desired. The alternative choice made here, following Veneziano [2], to avoid this
need is to express the capacity in terms of the spectral acceleration that leads to incipient failure:

$$g(\mathbf{x}) = S_{af}(\mathbf{x}) - S_a \qquad (4)$$

In the above equation the demand spectral acceleration $S_a$ is a parameter. Once $S_{af}(\mathbf{x})$ has been
established by some form of simulation, one can build the entire fragility by solving repeatedly the
analytical problem above, changing the value of $S_a$.
Within this framework, it is possible to employ the response surface technique as a means to
obtain an analytical approximation for the capacity term $S_{af}(\mathbf{x})$ in (4), with a limited number of
simulations.
In this work the constructed response surface is used as input for FORM analysis. Alterna-
tively, Monte Carlo simulation (MCS) possibly enhanced by Importance Sampling might have
been used.
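To make the computational advantage of format (4) concrete, the following is a minimal sketch in Python; the closed-form capacity function is an invented stand-in for the algorithmic $S_{af}(\mathbf{x})$, which in the paper comes from non-linear time-history analyses:

```python
import numpy as np

rng = np.random.default_rng(1)

def saf(x):
    # Hypothetical stand-in for the capacity S_af(x); the real one is
    # obtained by searching over scaled time-history analyses.
    return 0.4 * np.exp(0.3 * x[:, 0] + 0.2 * x[:, 1])

n = 10_000
x = rng.standard_normal((n, 2))          # realisations of the basic variables
capacity = saf(x)                        # one simulated capacity per realisation

sa_grid = np.linspace(0.05, 1.0, 20)     # demand levels S_a [g]
fragility = [(capacity < sa).mean() for sa in sa_grid]
# The same capacities serve every S_a level; with format (3) the whole
# simulation would have to be repeated at each intensity.
```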

2.2. Response surface

Now consider again the k × 1 vector x of controllable basic variables. Let y be a measurable
random variable whose mean value is believed to depend, in an unknown fashion, on the
quantities collected in x. Let $\mu_y(\mathbf{x})$ be a postulated model of this dependence, usually chosen to be
linear or quadratic in x, with p unknown model parameters whose values have to be estimated on
the basis of assigned values of x and corresponding values of y observed from simulations. Assume
further that y can be expressed in the additive form:
$$y(\mathbf{x}) = \mu_y(\mathbf{x}) + \varepsilon \qquad (5)$$
where ε is a zero-mean random deviation, called the error term, that accounts for the inherent
random variability of y around its true mean, as well as for the inadequacy of the mean model
$\mu_y(\mathbf{x})$ to represent the relation between the true mean and x, due in part to the limited statistical
information from which its parameters are estimated. If it is assumed that the error term is
Gaussian with uniform variance $\sigma_\varepsilon^2$, that the simulations from which this variance is estimated are
carried out in homogeneous conditions, and that their results are independent, i.e. that
$\varepsilon \sim N(0, \mathbf{I}\sigma_\varepsilon^2)$, the model in (5) becomes a Normal model.
Establishing an empirical/analytical model of the dependence of the response y on x such as
that represented by (5) is the aim of the response surface technique, and comprises three stages:
the choice of the model ly ðxÞ; the collection of data, in the form of ðy; xÞ pairs, from carefully
planned simulations (using notions from the branch of statistics called experimental design); the
estimation of model parameters (using the techniques of statistical inference). Each of these issues
is addressed in a subsequent section.

2.3. An adequate statistical model

It can be intuitively anticipated that the number n of experiments (a term consistent with the
terminology of RS, here referring to the simulations) needed to establish a response model such as
that in (5) will increase with the number k of variables in x. For reasons that will become clear
later in Section 2.4, the number of variables in the classical model rarely exceeds k = 6.
On the other hand, an adequate mathematical description of an earthquake, even in simplified
form, requires the introduction of a very large number of random variables (e.g. random am-
plitudes and phases in a spectral representation, random pulses in a time-domain representation),
certainly not compatible with the above-mentioned practical limit for a classical response surface,
i.e. it is not feasible to explicitly account for the effect on y of the very large number of variables
that describe an earthquake. This consideration suggests that a response model different from the
classical one should be selected for dealing with the seismic case.
One alternative is to account for the random variables describing the earthquake in a global
way. To this end, the vector x can be partitioned into two sub-vectors $\mathbf{x}_1$ and $\mathbf{x}_2$, collecting,
respectively, the few variables whose effect is modelled explicitly and the large number of remaining
variables whose effect is modelled globally. The $\mathbf{x}_2$ sub-vector therefore represents the uncertainty
related to the earthquake. To model its effect, the concept of random factors [3] is resorted to. These
are physical factors that randomly affect the response, assumed here to do so in an additive manner.
The corresponding mixed-type response model¹ is:
$$y(\mathbf{x}_1, \mathbf{x}_2) = \mathbf{z}(\mathbf{x}_1)\boldsymbol{\beta} + \delta + \varepsilon \qquad (6)$$
where $\mathbf{z}(\mathbf{x}_1)\boldsymbol{\beta}$ is called the fixed-effect part of the model, while δ is the random effect, i.e. the effect
of the random factor $\mathbf{x}_2$, which accounts for the variability in the response due to the seismic
action.
The fixed-effect part of the model is the only one present in the classical response surface, and it
is most commonly assumed to be quadratic in $\mathbf{x}_1$. It is thus simply a linear combination of p
non-linear functions of $\mathbf{x}_1$, usually referred to as explanatory functions:

$$\mathbf{z}(\mathbf{x}_1)\boldsymbol{\beta} = \sum_{i=1}^{p} \beta_i z_i(\mathbf{x}_1) \qquad (7)$$

where $\mathbf{z}(\mathbf{x}_1)$ is the 1 × p row vector of explanatory functions and $\boldsymbol{\beta}$ collects the p model parameters.
The random effect δ is commonly assumed to be a zero-mean Gaussian variable with unknown
constant variance $\sigma_\delta^2$. Choosing the distribution of δ to be Gaussian, like that of ε, ensures that the
model remains Normal. Note that assuming the random effect has zero mean implies that the
average earthquake effect on the response is captured by the constant term in the fixed-effect part
of the model, while δ accounts only for the random deviations from this average effect, i.e. it
captures the variability of the response due to the earthquake.
¹ The model in (6) has been independently advanced in [2] and given a formal treatment in [4] under the name of extended response surface.

The values of $\boldsymbol{\beta}$ and the variances of δ and ε are unknown, and they have to be estimated on the
basis of assigned values of x and observed corresponding values of y. One will assume for now
that these data are available, and refer to the results of the n experiments, collectively written in
matrix notation, as the statistical model:
$$\mathbf{y} = \mathbf{Z}\boldsymbol{\beta} + \mathbf{B}\boldsymbol{\delta} + \boldsymbol{\varepsilon} \qquad (8)$$
where the n × p matrix Z collects the n row vectors $\mathbf{z}(\mathbf{x}_{1j})$, j = 1, ..., n, and the n × 1 vectors y and
ε collect the observed responses and their deviations from the predicted response $(\mathbf{Z}\boldsymbol{\beta} + \mathbf{B}\boldsymbol{\delta})$,
respectively. The matrix Z is usually called the design matrix.
The matrix product $\mathbf{B}\boldsymbol{\delta}$ represents the random-effect part of the model. In particular, the vector $\boldsymbol{\delta}$
collects the values that δ has taken in the n experiments, which cannot be directly observed but can
be estimated. In practice, as shown later in Section 2.5, one need not estimate $\boldsymbol{\delta}$ but can directly
estimate the unknown variance $\sigma_\delta^2$. The matrix B, called the collocation matrix, is a Boolean
matrix of zeros and ones that collocates each $\delta_j$ to the corresponding experiment. As will become
clear in Section 2.4, a single value of δ will be associated with more than one experiment. If b is the
number of distinct values of δ in the n experiments, matrix B will be n × b and, after reordering of
the experiments, will have the form:
and, after reordering of the experiments, will have the form:
2 3
1n1 0n1    0n1
6 0n2 1n2    0n2 7
6 7
B¼6 . .. . . .. 7: ð9Þ
4 .. . . . 5
0nb 0nb    1nb
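As an illustration, a short sketch (Python; the block sizes anticipate the design used in Section 3.1) of how a collocation matrix of the form (9), and the covariance structure it induces in Eq. (10) of Section 2.5, can be assembled:

```python
import numpy as np

def collocation_matrix(block_index, b):
    """n x b Boolean matrix with B[j, i] = 1 iff experiment j lies in block i."""
    n = len(block_index)
    B = np.zeros((n, b))
    B[np.arange(n), block_index] = 1.0
    return B

# 20 experiments in 3 blocks of 8, 6 and 6 (the design of Section 3.1)
blocks = [0] * 8 + [1] * 6 + [2] * 6
B = collocation_matrix(blocks, 3)
assert (B.sum(axis=1) == 1).all()        # each experiment sits in exactly one block

# covariance structure of Eq. (10) for illustrative variance components
sd2, se2 = 0.2, 0.05
Cyy = B @ B.T * sd2 + np.eye(len(blocks)) * se2
```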

2.4. Experimental design

First consider a classical response surface or, which is the same, only the fixed-effect part of the
model. The number n of experiments should at least equal the number p of parameters that have
to be estimated. Clearly, the scatter of the estimates will decrease as the number of experiments
increases above p. Thus n has to be determined in a way that balances the computational demand
with the accuracy of the estimated model. A convenient experimental plan for producing the data
to estimate a quadratic surface is the central composite design (CCD), which consists of a complete
two-level factorial design augmented with a star design [5]. The former consists of taking two levels
(low, ξ = −1, and high, ξ = +1) for each of the basic variables and performing an experiment for
every one of the $2^k$ resulting combinations. In those cases where the basic variables are defined as
random variables with a known distribution, a convenient choice is to define ξ = (x − μ)/σ, based
on argumentation given in [6].
The data collected through the factorial portion of the CCD would allow estimation of linear
terms in $\mathbf{x}_1$, as well as of interaction (mixed) terms of order up to the kth, but not of quadratic terms.
This is why some additional points are added to the two-level factorial through the star design:
two additional points at distance ξ = ±α along each axis, i.e. for each basic variable in $\mathbf{x}_1$, plus
some ($n_0$) centre points (at level ξ = 0), resulting in a total number of experiments
$n = 2^k + 2k + n_0$. With reference to the previously mentioned limit of k = 6, it is seen that this limit
is due to the exponential increase of n with k.
Since this design allows estimation of the parameters of the fixed-effect part only, it is still unfit
to be used in conjunction with the mixed model. It is therefore necessary to modify it, in order to

allow estimation of the random factor effects. This extension consists of subdividing the planned
experiments into groups, called blocks, to each of which a realisation of the random factor is
associated. This operation is referred to as blocking of the experimental plan [5]. This technique is
often used when the experiments cannot be carried out in homogeneous conditions, as in the
present case, where different accelerograms are used in the experiments (response analyses). Each
block, containing $n_j$ experiments, enables observation of the effect $\delta_j$ of a single realisation
of the random factor. The collection of observations, gathered in $\boldsymbol{\delta}$, is then used to estimate $\sigma_\delta^2$.
A rational subdivision of the CCD into suitably defined blocks can be achieved making use of
criteria that assure that the blocked design is orthogonal and rotatable.
An important property of a design that blocks orthogonally, following [7], is that the estimates
of the fixed effects $\boldsymbol{\beta}$ are not influenced by the random effects. Furthermore, an orthogonal design
is desirable in general, as the parameter estimates have greater precision.
The property of rotatability assures, according to the definition given in [5], that the variance of y is
invariant to any rotation of the coordinate axes in the space of the input variables, and is the same at
all points x that are equidistant from the design centre, which makes it intuitively appealing.
The central composite design can be made to block orthogonally. Both the factorial and the axial
portions of the design form first-order orthogonal designs, which is an important condition for
orthogonal blocking. These portions provide a basis for a first division of the CCD into two
blocks, whose composition can be described as follows:
Block 1. A factorial portion consisting of $F = 2^k$ points, with $n_{0F}$ additional centre points.
Block 2. An axial portion consisting of A = 2k points, plus $n_{0A}$ centre points.
Combining the conditions for orthogonal blocking and rotatability, the value of the axial
setting α and the number of centre points can be determined. It is noted that for some values of k it is
not possible to find a rotatable CCD that blocks orthogonally, but orthogonal blocking and near-rotatability
can be achieved.
Further subdivision of the axial portion into more than one block is not possible without vi-
olating the requirement that a block must consist of a first-order orthogonal design.
The factorial portion, on the other hand, can be subdivided into more than one block. It has to
be observed that after this subdivision of the factorial design some of the terms in the fixed effect
part of the model cannot be estimated. Tables giving possible arrangements and their respective
lost terms are available, e.g. in [5], as are lists of orthogonal blocking arrangements for rotatable
or near-rotatable central composite designs, e.g. [7].
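A sketch of how such a blocked CCD can be generated in practice for k = 3; the axial setting below uses the plain rotatability value α = F^(1/4), whereas the orthogonally blocked designs tabulated in [5,7] may adjust α and the centre-point counts slightly:

```python
import itertools
import numpy as np

k = 3
F = 2 ** k
alpha = F ** 0.25        # rotatability value; orthogonal blocking per [5,7]
                         # may require a slightly different axial setting

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
centre = np.zeros((1, k))

# split the factorial into two half-fractions on the contrast x1*x2*x3
half_plus = factorial[np.prod(factorial, axis=1) > 0]
half_minus = factorial[np.prod(factorial, axis=1) < 0]

block1 = np.vstack([half_plus, centre, centre])     # 4 + 2 = 6 runs
block2 = np.vstack([half_minus, centre, centre])    # 4 + 2 = 6 runs
block3 = np.vstack([axial, centre, centre])         # 6 + 2 = 8 runs
design = np.vstack([block1, block2, block3])        # 20 runs in 3 blocks
```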

2.5. Parameter estimation

It remains to define a procedure for estimating the parameters of the model once the
experimental data are available. While for the classical response surface the estimation of the
model parameters $\boldsymbol{\beta}$ can be carried out using the method of ordinary least squares (OLS), without
knowledge of the error-term variance, for model (6) this is not possible: the parameters $\boldsymbol{\beta}$ have to
be estimated together with the variances $\sigma_\delta^2$ and $\sigma_\varepsilon^2$. This is necessary because the experiments are
not uncorrelated. In fact, from model (8) one gets the response covariance matrix:
$$\mathbf{C}_{yy} = E[(\mathbf{y} - \boldsymbol{\mu}_y)(\mathbf{y} - \boldsymbol{\mu}_y)^T] = E[(\mathbf{B}\boldsymbol{\delta} + \boldsymbol{\varepsilon})(\mathbf{B}\boldsymbol{\delta} + \boldsymbol{\varepsilon})^T] = \mathbf{B}\mathbf{B}^T \sigma_\delta^2 + \mathbf{I}\sigma_\varepsilon^2 \qquad (10)$$

where $\mathbf{B}\mathbf{B}^T$ is a non-diagonal matrix.

Estimates of $\boldsymbol{\beta}$, $\sigma_\delta^2$ and $\sigma_\varepsilon^2$ can be obtained by maximum likelihood estimation (MLE)
using the likelihood function:

$$L(\boldsymbol{\beta}, \mathbf{C}_{yy} \mid \mathbf{y}) \propto f_y(\mathbf{y} \mid \boldsymbol{\beta}, \mathbf{C}_{yy}) \propto |\mathbf{C}_{yy}|^{-1/2} \exp\left[-\tfrac{1}{2}(\mathbf{y} - \mathbf{Z}\boldsymbol{\beta})^T \mathbf{C}_{yy}^{-1} (\mathbf{y} - \mathbf{Z}\boldsymbol{\beta})\right] \qquad (11)$$

whose maximisation is a constrained optimisation problem, since $\sigma_\delta^2$ and $\sigma_\varepsilon^2$ must both be positive.
It is observed here that this optimisation problem is generally more stable if it is carried out using
normalised variables.
Looking at (10), it can be verified that if $\sigma_\delta^2 = 0$ the covariance matrix reduces to $\mathbf{C}_{yy} = \mathbf{I}\sigma_\varepsilon^2$ and
that the error-term variance $\sigma_\varepsilon^2$ in (11) does not enter the maximisation of L with respect to $\boldsymbol{\beta}$,
which then reduces to the OLS result.
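A compact sketch of this estimation, profiling the Gaussian log-likelihood (11) over the two variances (re-parametrised as logarithms to enforce positivity, one way of handling the constraint) with the generalised-least-squares β computed at each step; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixed_model(y, Z, B):
    """ML estimates of (beta, sigma_d^2, sigma_e^2) for model (8)."""
    n = Z.shape[0]

    def beta_gls(Ci):
        # GLS estimate of beta for a given inverse covariance
        return np.linalg.solve(Z.T @ Ci @ Z, Z.T @ Ci @ y)

    def neg_log_like(theta):
        sd2, se2 = np.exp(theta)                 # log-parametrisation: both > 0
        C = B @ B.T * sd2 + np.eye(n) * se2      # covariance of Eq. (10)
        Ci = np.linalg.inv(C)
        r = y - Z @ beta_gls(Ci)                 # residual at the GLS beta
        _, logdet = np.linalg.slogdet(C)
        return 0.5 * (logdet + r @ Ci @ r)

    res = minimize(neg_log_like, np.log([0.1, 0.1]), method="Nelder-Mead")
    sd2, se2 = np.exp(res.x)
    Ci = np.linalg.inv(B @ B.T * sd2 + np.eye(n) * se2)
    return beta_gls(Ci), sd2, se2
```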

2.6. Treatment of multiple failure modes

In realistic problems one has multiple failure modes for each of which the dimensionally ho-
mogeneous quantities C and D can differ, being possibly bending moments, curvatures, shear
forces, etc. Using the format in (3) one then has as many LSFs as the different failure modes that
are considered significant, say m, and a system reliability problem would have to be solved in
order to arrive at the total probability of failure. Hence, m different response surfaces would have
to be established before one can perform the reliability analysis.
The format in (4), however, can be generalised to the system case by defining the spectral
acceleration capacity $S_{af}(\mathbf{x})$ as the lowest value that leads to system failure, whatever the
arrangement of the components (failure modes). Here failure is assumed to occur as soon as the
structure fails according to any of the significant failure modes, which is equivalent to considering a
series system. The most important feature of the extended format (4) is that a unique response
surface can be used irrespective of the number m of failure modes, and that the interaction
between the failure modes is implicitly accounted for in the construction of the response surface,
as visualised in Fig. 1.

Fig. 1. Treatment of system problem through unique RS.




Fig. 2. Validation of system formulation by MCS, adapted from [9].

A limited validation of this idea, judged a necessity as combining different limit-states is not
guaranteed to yield a smooth failure surface, is given in [9], where fragility values obtained by
MCS over a range of Sa levels of interest are compared with the results from the described
procedure. The structure considered was specially designed to show multiple types of failure, and
results for one particular case are shown in Fig. 2.
A final observation on the form of the LSF is in order. Since the response model y representing the
capacity is Gaussian according to (6), and therefore $y \in (-\infty, \infty)$, while the spectral acceleration is
a strictly positive quantity, $S_{af} > 0$, a transformation is needed. The logarithmic form
$y = \ln[S_{af}(\mathbf{x})]$ is employed which, incidentally, is also known to be variance-stabilising, thus
supporting the assumed homoskedasticity of the model.
Finally, taking this transformation into account, the limit-state function becomes:

$$g(\mathbf{x}_1, \mathbf{x}_2) = \ln[S_{af}(\mathbf{x}_1, \mathbf{x}_2)] - \ln[S_a] = \mathbf{z}(\mathbf{x}_1)\boldsymbol{\beta} + \delta + \varepsilon - \ln[S_a] \qquad (12)$$

where $S_a$ is the demand spectral acceleration.
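To fix ideas, a sketch of one FORM evaluation of the fragility based on (12), with lognormal basic variables as in the application of Section 4; all coefficients and variances are invented placeholders, and the random effect and error term are lumped into a single zero-mean Gaussian:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Placeholder RS: ln S_af = b0 + b1*fc' + b2*ecu + b3*fy (coefficients invented)
b = np.array([-2.5, 0.02, 40.0, 0.001])
s_tot = 0.4                                   # sqrt(sd2 + se2), placeholder

mean, cov = np.array([25.0, 0.0098, 450.0]), np.array([0.20, 0.35, 0.40])
zeta = np.sqrt(np.log(1 + cov ** 2))          # lognormal parameters of x1
lam = np.log(mean) - 0.5 * zeta ** 2

def g(u, ln_sa):
    x1 = np.exp(lam + zeta * u[:3])           # 3 std normals -> lognormal x1
    return b[0] + b[1:] @ x1 + s_tot * u[3] - ln_sa   # u[3] carries delta+eps

def form_pf(sa):
    ln_sa = np.log(sa)
    res = minimize(lambda u: u @ u, np.zeros(4),       # design-point search
                   constraints={"type": "eq", "fun": lambda u: g(u, ln_sa)})
    beta_hl = np.linalg.norm(res.x)
    safe_at_origin = g(np.zeros(4), ln_sa) > 0
    return norm.cdf(-beta_hl) if safe_at_origin else norm.cdf(beta_hl)

fragility = {sa: form_pf(sa) for sa in (0.2, 0.4, 0.8)}
```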

3. Practical issues concerning the simulations

Requiring the experimental design to be block orthogonal and rotatable limits the possible
configurations of the design, thus also limiting the choices in selecting the most appropriate one.
Still, more than one arrangement may be available, and therefore some practical guidelines re-
garding the issue of setting up an appropriate design in practice are given in this section. In
addition, the selection of appropriate realisations of the random factor, i.e. the earthquake
ground-motion records, is discussed.

3.1. Blocking the design

It was indicated in Section 2 that the n experiments are to be subdivided into b blocks of $n_j$
experiments. For the case of k = 3, used here as an example and anticipating the design used for
the application in Section 4, a possible configuration that satisfies both the orthogonality and the
rotatability criteria is to subdivide the CCD into three blocks, one axial and two factorial. The axial
block contains all 2k = 6 star points and requires $n_{0A} = 2$ additional centre points [7]. Each
factorial block is made up of half of the $2^k = 8$ factorial points, plus half of the $n_{0F} = 4$ required
centre points. The final product is a design consisting of 20 experiments, subdivided into three blocks
of 8, 6 and 6 experiments, respectively, that blocks orthogonally and is (near) rotatable.
The question that now arises is whether the number of blocks and their individual size are
sufficient to estimate the properties of δ accurately.

3.1.1. Number of blocks


It is not surprising that textbooks do not supply rules for the number of blocks to be used in
practice, since the number required is directly related to the specific properties of the random factor
producing δ. It has been suggested in other studies (e.g. [8]) that the variability in the response due
to the ground motion can be at least roughly estimated using only four to five recorded earthquake
accelerograms. This establishes a minimum for the number of blocks to be used.
In a recent benchmark study, details of which can be found in [9], this problem of determining a
suitable number of blocks was addressed in some detail. For a design with four basic
variables, three distinct cases were studied, using several different sets of accelerograms, yielding
multiple fragilities for each case. The number of blocks was varied from case to case, whereas the
size of the blocks remained constant: $n_j = 7$.
The envelopes of the several fragility functions obtained for the three cases are given in Fig. 3.
It can be seen that doubling the number of blocks from 4 to 8 significantly improves the stability
of the results. Increasing to 16 blocks, on the other hand, does not seem to improve the stability to
a point that could justify doubling the number of experiments that have to be performed.
The significance of the differences between the various fragility functions is best appreciated by
looking at their effect on the final value of the risk. To this end, the fragility functions have been
convolved with the hazard obtained for a site in southern Italy, giving the results shown in
Table 1.
Table 1 gives minimum and maximum values of the annual risk obtained for all the fragility
functions of the three cases. The results support the conclusion drawn before, based on the fra-
gility functions only, that roughly 8 blocks are needed, and this number can be used as a practical
guideline.
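The convolution behind Table 1 can be sketched as follows; the hazard curve and the fragility used here are generic placeholders, not those of [9]:

```python
import numpy as np
from scipy.stats import norm

sa = np.linspace(0.01, 2.0, 2000)                  # S_a grid [g]
hazard = 1e-4 * sa ** -2.0                         # placeholder annual exceedance rate H(s)
fragility = norm.cdf(np.log(sa / 0.5) / 0.4)       # placeholder lognormal fragility

# annual risk = integral of F(s) * |dH/ds| ds (rectangle rule on a fine grid)
ds = sa[1] - sa[0]
risk = float(np.sum(fragility * np.abs(np.gradient(hazard, sa))) * ds)
```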

3.1.2. Block size


Besides the number of blocks, the number of experiments $n_j$ that a block contains is also open
to debate, as there is often more than one way to split the (factorial part of the) design into
blocks. The block size may be presumed to affect the accuracy of the estimate of the block
effect. Again with reference to the benchmark study in [9], the results for two possible ways of
blocking the design, the first resulting in 8 blocks of 7 experiments, the second in 8 blocks of 14,
are shown in Fig. 4.


Fig. 3. Envelopes of the fragilities obtained by varying the number of blocks.

Table 1
Risk using a given hazard curve and extreme fragilities: effect of number of blocks

            Minimum risk    Maximum risk
4 blocks    0.0162          0.0375
8 blocks    0.0212          0.0321
16 blocks   0.0214          0.0289

It can be observed that increasing the block size beyond 7 does not significantly improve
the estimates of the fragility. This observation is reinforced when the values of the risk, given in
Table 2, are compared.
Returning to the design that will be used in Section 4, the block sizes of 6 and 8 satisfy the
practical criteria outlined above. However, three blocks are not sufficient to estimate the distri-
bution of the random factor. It is therefore necessary to perform replicates of the experimental
plan, i.e. performing the 20 experiments repeatedly for different ground-motion records. It is
deemed appropriate to perform three replicates, resulting in a total of 9 blocks and thus nine
different ground-motion records.

3.2. Selection of earthquake accelerograms

In order to carry out the experiments, one needs as many random factor levels as the number of
blocks in which the CCD is subdivided. Among possible alternatives the decision has been made
to use in this work recorded accelerograms as levels of ground-motion. A well recognised problem


Fig. 4. Effect of increasing block size on fragility envelopes.

Table 2
Risk using a given hazard curve and extreme fragilities: effect of block size

                     Minimum risk    Maximum risk
8 blocks (nj = 7)    0.0212          0.0321
8 blocks (nj = 14)   0.0216          0.0320

connected with the choice of natural accelerograms is the admissibility of modifying their intensity
if required by the analysis. This is actually the case in the present study where the dynamic
analyses need to be run for varying intensities until a state of failure is attained.
This scaling issue, whose importance in risk studies is unquestionable, is still subject to debate
and, by its very nature, does not easily lend itself to rigorous treatment. Reference is made to a recent
comprehensive study whose results are summarised, for instance, in [10], of which, for the
sake of the present work, it is sufficient to retain the following concluding remarks. Conditional
on selecting as intensity parameter the spectral ordinate close to the fundamental period of
the structure, and on selecting a peak quantity as the response parameter of interest, scaling this
parameter up or down by factors as large as 30 does not introduce bias in the results. This amounts
to saying that, within the limits of the considered data set, peak responses are practically independent
of the magnitude and distance characterising the record used for the analysis.
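For reference, a sketch of the scaling operation itself: the spectral ordinate of a record is computed with a standard Newmark (average-acceleration) SDOF integration and the record is then multiplied by the ratio of target to unscaled ordinate; the record below is a synthetic stand-in for a real accelerogram:

```python
import numpy as np

def spectral_accel(ag, dt, T, xi=0.05):
    """Pseudo-spectral acceleration of record ag (array, in g) at period T [s].
    Unit-mass SDOF oscillator, Newmark average-acceleration scheme."""
    w = 2.0 * np.pi / T
    c, k = 2.0 * xi * w, w * w
    keff = k + 2.0 * c / dt + 4.0 / dt ** 2
    u = v = umax = 0.0
    p = -ag[0]
    a = p - c * v - k * u                    # initial acceleration
    for ag_i in ag[1:]:
        dp = (-ag_i - p) + (4.0 / dt + 2.0 * c) * v + 2.0 * a
        du = dp / keff
        dv = 2.0 * du / dt - 2.0 * v
        da = 4.0 * (du - v * dt) / dt ** 2 - 2.0 * a
        u, v, a, p = u + du, v + dv, a + da, -ag_i
        umax = max(umax, abs(u))
    return w * w * umax                      # pseudo Sa, same units as ag

# demo on a synthetic pulse standing in for a real record
dt = 0.01
ag = 0.3 * np.sin(2 * np.pi * 1.25 * np.arange(0, 4, dt))   # 1.25 Hz ~ T = 0.8 s
factor = 0.5 / spectral_accel(ag, dt, T=0.8)  # scale to a 0.5 g target ordinate
```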
Based on the preceding considerations, in the current application records for rock and stiff soil
types, with a source-to-site distance between 25 and 75 km and a magnitude ranging from 5.5 to
7.5, are used.


Fig. 5. Effect of selection of records on fragility envelopes.

A set of 145 earthquake strong-motion records² satisfying these conditions has been collected.
Even though the selected records are chosen in a range that can be expected to lead to severe
damage in structures, the actual response depends on the structure's properties.
A further validation of the issue of scaling based on the results from [10], in conjunction with
the procedure used, has been addressed in previous work [9,11]. The records used to construct the
fragilities for the benchmark study, shown in Figs. 3 and 4, have in certain cases required scaling
exceeding even the wide range, up to 30, indicated before. In the same work, a selection of the
records was introduced, by choosing those whose unscaled spectral ordinate at the fundamental
period of the structure was large enough compared to that producing yielding. The results ob-
tained using these records are shown in Fig. 5, together with the previously obtained ones for
randomly selected records.
The results show that the fragility functions gain considerably in stability if records are chosen
for which only a limited amount of scaling is required. The difference in the results, however, tends
to disappear in the lower tails. To appreciate the importance of the low portions of the fragilities,
the total risk is calculated using the same hazard function as for the other explorations. The
results are shown in Table 3, from which it would appear that the overall improvement in sta-
bility, though tangible, is not decisive.

² All ground-motion records used in this study are taken from the PEER database, website: http://peer.berkeley.edu/smcat.

Table 3
Risk using a given hazard curve and extreme fragilities: effect of limiting scaling of intensity

                   Minimum risk    Maximum risk
Random records     0.0212          0.0321
Bounded records    0.0247          0.0306

4. Application

The structure used in this application of the method presented in Section 2 is a model of a 3D
reinforced concrete frame that has been constructed and will be tested as part of the EU project
SPEAR at the Joint Research Centre (JRC) in Ispra, Italy. It is a simplification of an actual three-storey
building representative of older construction in Greece, Portugal and other seismic Southern
European countries, without engineered earthquake resistance. It has been designed for gravity loads
alone, using the concrete design code in force in Greece between 1956 and 1995, with the
construction practice and materials commonly used in Southern Europe in the early 1970s. The
structural configuration is also typical of non-earthquake-resistant construction of that period. As
the design document states [12]: "The centre of stiffness (based on column secant-to-yield stiffness)
is eccentric with respect to the mass centre by 0.84 m in one direction (10% of plan dimension) and
by 0.36 m in the other (3.5% of plan dimension). The configuration includes two beam indirect
supports (at eccentricities of 0.5 m and 1 m from the column axis) and a strongly eccentric beam-column
joint." The plan of the framing and a photo of the test structure as constructed at the JRC
laboratory are given in Figs. 6 and 7.

4.1. Analysis programme and modelling assumptions

The Finite Element package OpenSees [13] is used for static and dynamic analyses. Non-linear
beam-column elements used to model the structure are characterised by fibre sections, which
enforce the Bernoulli beam assumption. The element is based on the non-iterative force
formulation and considers the spread of plasticity along the element.

Fig. 6. Plan view of the structure with details of column cross-sections.

Fig. 7. Photo of the full-scale model as built (seen in the negative Y direction).
Using one of the material models available in OpenSees, concrete behaviour is modelled by a
uniaxial model with degrading linear unloading/reloading stiffness and no strength in tension.
The concrete is considered to have an actual peak strength $f'_c = 25$ MPa, which is increased
for the core concrete to account for the level of confinement offered by the actual layout, diameter
and spacing of the stirrups. Formulas developed by Mander, as presented in [14], are used, yielding a
ratio K = 1.115 of the compression strength of the confined section ($f'_{cc}$) to the unconfined
strength, and an estimate for the ultimate compression strain of $\varepsilon_{cu} = 0.0098$.
Steel behaviour is represented by a uniaxial bilinear model with kinematic hardening.
Reinforcement consists of smooth bars, with yield strength $f_y$ equal to 450 MPa for the longitudinal
bars.
Masses, corresponding to the design gravity loads on slabs and the self-weight of the structure,
are lumped at the nodes and are exclusively translational.
Based on preliminary analyses, the structure is characterised by a median period of T = 0.8 s.

4.2. Choice of basic variables

As discussed before, only a limited number of variables are used as fixed effect variables for the
construction of the response surface. The vector of basic variables x1 therefore needs to contain
only those parameters that have a significant influence on the response. The choice of the com-
ponents of x1 is to some extent arbitrary and relies more on engineering judgement than on
theoretical arguments: experience often leads to an acceptable choice.
The variables can be subdivided in two categories: variables related to the strength (and
therefore stiffness) and variables related to the geometry. It is convenient to work in the space of

basic material properties, as they enable the description of sectional variability with a minimum of
variables. Considering the concrete, if only $f'_c$ and $\varepsilon_{cu}$ are used, properties like the initial tangent
stiffness $E_c$, the tensile strength $f_t$ and the strain at peak strength $\varepsilon'_c$ can be expressed as functions of
these basic properties. The steel is fully determined by its yield strength $f_y$, since the stiffness $E_s$ can
be considered deterministic. These three variables then fully determine the element stiffness and
strength, and are therefore considered to be the main parameters of interest. The mean values of
the material properties are shown in Table 4, together with the values of the coefficient
of variation (CoV) typically associated with each of them. All are assumed to be lognormally
distributed.

Table 4
Properties of the basic random variables

       Mean      Units   CoV
f'c    25        MPa     0.20
ecu    0.0098    –       0.35
fy     450       MPa     0.20
Each variable describes the randomness for the whole structure, i.e. spatial variability of the
material property throughout the structure is not considered. To account for such
spatial variability, additional random-effect variables would have to be introduced [11]. For
simplicity, an approximate approach is adopted here to account for variations in the amount of
steel present in the section, by doubling the CoV of the steel yield strength $f_y$. This is the only
geometric parameter considered, as the size of the structural elements (outer dimensions)
can usually be treated as deterministic.
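A small sketch of the mapping from coded design levels to physical values of these variables; applying the levels in the underlying Gaussian space of each lognormal variable is one reasonable choice, assumed here, whereas the paper's own mapping follows [6]:

```python
import numpy as np

mean = np.array([25.0, 0.0098, 450.0])   # fc' [MPa], ecu [-], fy [MPa]
cov = np.array([0.20, 0.35, 0.40])       # fy CoV doubled, as discussed above
zeta = np.sqrt(np.log(1.0 + cov ** 2))   # std dev of ln X
lam = np.log(mean) - 0.5 * zeta ** 2     # mean of ln X

def physical(coded):
    """Map (n, 3) coded design levels xi to physical variable values."""
    return np.exp(lam + zeta * np.asarray(coded))

print(physical([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]))
```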

4.3. Definition of capacity: failure modes of RC frame structures

The limit-states that characterise the resistance of the structure correspond to various (in-
elastic) modes, which are known to cause an RC structure to undergo progressive deterioration
before reaching its final state of collapse. The structure considered here was not designed for
horizontal loads, nor according to modern capacity design principles that would avoid unde-
sirable failure modes. Based on experience, a frame of this type is likely to fail due to the loss of
load-bearing capacity of one or more columns, and therefore two relatively well established
criteria are reported here for shear and flexural failure of these structural members. Alternative
failure modes may be introduced, if deemed more appropriate, with the general procedure re-
maining unaffected.

4.3.1. Shear strength of columns


A shear strength model suggested by Priestley et al. [15] is utilised to obtain the shear supply of
the members, considered appropriate here as it enables computation of the capacity under biaxial
loading. The model assumes that the shear strength V consists of three components:

$$V = V_c + V_s + V_p \qquad (13)$$
where $V_c$ represents the strength of the concrete shear-resisting mechanism, $V_s$ the shear capacity
attributed to the steel truss mechanism, and $V_p$ the strength attributed to the axial load, i.e. the
horizontal component of the diagonal compression strut that transfers the load from the top to
the bottom of the column. In this study $V_p$ is ignored, as the axial load component is small for
slender members.
In the analysis, the shear strength is calculated once for each experiment, based on the values
assigned to the components of $\mathbf{x}_1$. This is a simplification, as the shear capacity is also a
function of time, because $V_c$ and $V_s$ depend on the loading level and history. To compensate for
this simplification, lower bounds have been chosen for the two remaining terms in (13), so as
to account roughly for possible unfavourable combinations of the shear demand with a reduced
capacity.
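A schematic sketch of Eq. (13) with $V_p$ dropped; the component expressions follow the general form of the Priestley et al. model, but the coefficient k and the truss angle are illustrative assumptions, to be taken from [15] in an actual assessment:

```python
import numpy as np

def shear_capacity(fc, Ag, Av, fyh, Dp, s, k=0.10):
    """V = Vc + Vs [N]; fc, fyh in MPa, areas in mm^2, lengths in mm.
    k (sqrt(MPa) units) drops with ductility demand; a low value here plays
    the role of the lower bound discussed above."""
    Vc = k * np.sqrt(fc) * 0.8 * Ag                      # concrete mechanism
    Vs = Av * fyh * (Dp / s) / np.tan(np.radians(30.0))  # 30-degree truss
    return Vc + Vs

# hypothetical 250x250 mm column, 8 mm two-leg stirrups at 200 mm spacing
V = shear_capacity(fc=25.0, Ag=250 * 250, Av=2 * 50.3, fyh=450.0,
                   Dp=210.0, s=200.0)
```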

4.3.2. Flexural strength: ultimate concrete strain


Failure in flexure is attained when the extreme compression strain reaches the crushing limit $\varepsilon_{cu}$.
To be more realistic, crushing and spalling of the cover concrete are not considered fatal for a
column. Therefore, the corner fibres of the confined section are monitored during the analysis.
When crushing of the core concrete initiates, the section deteriorates progressively, jeopardising
the vertical load-bearing capacity of the column. As it is difficult to realistically simulate the
deterioration of the column strength and the ensuing redistribution of load to other columns,
it is assumed that structural collapse is attained when any of the monitored fibres in a
single column reaches the ultimate compression strain.

4.4. Further settings for the simulations

The experimental plan developed in Section 3 is used, requiring 60 numerical simulations di-
vided into 9 blocks, each block associated with a specific accelerogram. Since the 3D frame ac-
tually requires ground-motion input in two directions, acceleration records are selected in pairs.
The nine two-component records to be used in the analysis were chosen based on the criteria
discussed in Section 3.2, and are given in Table 5.

Table 5
Accelerograms used in numerical analyses

No.  Name              Date (mm/dd/yy)  Station name                     M    R (km)  PGA (g)  Td (s)  Sa,1 (g)  Sa,2 (g)
1    Friuli            05/06/76         Tolmezzo                         6.5  37.7    0.351    36.32   0.542     0.357
2    Loma Prieta       10/18/89         Apeel 7 Pulgas                   7.1  47.7    0.156    39.95   0.290     0.181
3    Victoria, Mexico  06/09/80         Cerro Prieto                     6.4  34.8    0.621    24.45   0.477     0.305
4    Spitak, Armenia   12/07/88         Gukasian                         7.0  30.0    0.199    19.90   0.436     0.166
5    Imperial Valley   10/15/79         Cerro Prieto                     6.9  26.5    0.169    63.70   0.415     0.162
6    Coalinga          05/02/83         Parkfield – Vineyard Canyon 3W   6.5  32.3    0.137    40.00   0.262     0.256
7    Northridge        01/17/94         Sandberg – Bald Mountain         6.7  43.4    0.098    40.00   0.255     0.123
8    Loma Prieta       10/18/89         Palo Alto Slac Lab.              7.1  35.6    0.278    39.57   0.394     0.360
9    Kobe              01/16/95         TOT                              6.9  57.9    0.076    39.00   0.378     0.252

Sa,1 and Sa,2 denote the spectral ordinates Sa(T = 0.8 s) of components 1 and 2.

Consistent with the attenuation relationship used in the hazard computation [16], which uses
the larger of the two horizontal components of the motion in the regression analysis for predictive
equations of response spectra, the fragility is expressed in terms of the larger spectral acceleration
of the two components, at the median period of the structure. As the direction of the motion
cannot be predicted, each pair of records is applied twice, first with the strongest one along axis X,
and then along axis Y (see Fig. 6). The worst case scenario is used in the final computation of the
risk.
A second-order polynomial is now developed in the three basic variables. The statistical model
is given in (14), in which all linear and quadratic terms are retained, but the interaction terms
between concrete and steel parameters are omitted. Next to the error term ε, one random factor $\delta_{eq}$
is included, representing the earthquake:

$$\ln[S_{af}(\mathbf{x}_1, \delta_{eq}, \varepsilon)] = \beta_0 + \beta_1 f'_c + \beta_2 \varepsilon_{cu} + \beta_3 f_y + \beta_4 f'^{\,2}_c + \beta_5 \varepsilon_{cu}^2 + \beta_6 f_y^2 + \beta_7 f'_c \varepsilon_{cu} + \delta_{eq} + \varepsilon \qquad (14)$$

4.5. Results

For each experiment a search procedure is employed to run the analyses until the lowest of the
capacity/demand ratios reaches unity. Once the $S_{af}(\mathbf{x})$ values for all the experiments are
determined, the parameters of the response surface are computed with reference to the model in
(14), using MLE. Subsequently, FORM analyses are carried out to calculate the fragility, using
components of the reliability package FERUM [17], giving the fragilities shown in Fig. 8.

Fig. 8. Fragilities obtained by considering different directions of loading.



Three fragility functions are shown in Fig. 8: one for the case in which the stronger (and
characterising) component of the ground motion was applied in the X direction, one for the case
in which this component was applied in the Y direction, and finally one taking for each experiment
the minimum of the two values of $S_{af}$, indicated as the upper-bound fragility. This last one is
considered to be the final fragility of the frame.

Table 6
Modes determining collapse of the structure

Failure mode     1    2    3    4    6    12   13   33
No. of failures  1    12   4    23   6    4    4    6
That the response surface is indeed a combined surface for multiple collapse mechanisms can be
seen from Table 6, which summarises the failure modes for all 60 experiments that form the
statistical basis for the final response surface. Modes 1–27 represent flexural failure in the col-
umns: the first 9 for each of the corresponding columns of the ground floor, the second 9 (from 10
to 18) for each of the corresponding first floor columns, etc. Similarly, modes 28–54 represent the
corresponding shear failure modes, e.g. mode 33 corresponds to shear failure of column C6 on the
ground floor.
Flexural failure is dominant, notwithstanding the lower bound that was used for the shear ca-
pacity. This result is not completely surprising, given the very slender, lightly reinforced columns.

4.6. Validation of results

The first question that arises in assessing the validity of the proposed method is whether a
second-order polynomial is an acceptable approximation of $S_{af}$ over the portion of the space
covered by the experimental design. Furthermore, when performing FORM analysis with the
approximate surface as input, the design point obtained may fall outside the experimental
design, resulting in an extrapolation of dubious accuracy. For both these reasons, the validity
of the RS needs to be checked.
A global validation of the results is obtained by comparing the fragility from RS with a fragility
calculated by Monte Carlo simulation. These simulations have been performed using the same
records, and possible loading directions, as in the RS computations. The results are shown in
Fig. 8, together with the associated coefficient of variation, and confirm the validity of the as-
sumptions made in the procedure.
It is also of interest to compare the region covered by the experimental points with the region
explored in the fragility analyses, both in terms of the response quantity and of the design-point
coordinates. Based on Fig. 8, it can be concluded that the area of interest lies between 0 and 1.5 g.
Fig. 9 shows that the area covered by the outcomes of the experiments, on the other hand, is
mainly contained between 0.15 and 0.85 g. Equally important is the space spanned by the input
variables. Fig. 10 gives the coordinates of the design points of the FORM analyses for the
three basic variables, which show that only for very low probability values does the design point fall
outside the domain covered by the basic variables in constructing the RS. Based on these results it
can be concluded that the fragility values in the range from 0.08 to 1.5 g fall within the region of
validity of the RS.


Fig. 9. Histogram of Saf values used as statistical basis.


Fig. 10. Coordinates of FORM design-point w.r.t. space covered by experimental plan.

Depending on the application, the region of validity might be more restrictive, in which case the
RS procedure would become iterative, requiring shifting of the centre of the experimental design
in order to improve the accuracy of the results in the region of interest. Whether such an iteration
is actually warranted, however, depends on the sensitivity of the total risk, i.e. the convolution
between fragility and hazard, to a more precise local definition of the fragility.

5. Conclusions

A statistical approach for time-variant reliability problems has been developed and investigated
in this study. The basic proposal is to use a response surface to represent the capacity part in an
analytical limit-state function as input for FORM analysis.
In order to include the maximum possible number of input variables, without taking account of
them all explicitly, a mixed model is employed for the response surface, using a random factor to
characterise the variability of the ground motion. Occurrence of failure is associated with the
maximum demand in time, yielding a time-invariant limit-state function for the time-variant
problem. Failure is defined as the union of all significant failure modes, and the approach takes
the interaction between different modes implicitly into account. The fragility function for the
system is then obtained through the performance of repeated FORM analyses, with the con-
structed empirical limit-state function as input.
The procedure is general in the sense that it can be used in conjunction with state-of-the-art
mechanical models, and that the variability in the response due to uncertainty in the input ground
motion is realistically represented. Further, variability of the mechanical parameters is included,
and the computation of the failure probability accounts for any type and number of different
failure modes.
The method can be regarded as affordable, since the number of computations required remains
acceptably low. The total number of analyses needed is given by the number of experiments
in the design times the average number of trials required to determine the exact value of
$S_{af}$. The former depends on the number of basic variables and records selected: the presented case
study, which considered 3 basic variables and 9 records, led to an experimental plan of 60
experiments, requiring about 240 analyses.
Finally, the results indicate that a second-order polynomial yields fragility values that satisfy
the accuracy criteria of the reliability analysis. In the present application, no iterative RS
formulation was found to be necessary. This favourable feature, however, does not necessarily
extend to all cases.

Acknowledgements

The work has been carried out under partial funding from the EU project SPEAR (Contract
No. NG6RD-CT-2001-00525) and from project VIA (INGV 1.7.2000), funded by the Italian
Institute for Geophysics.

References

[1] Rajashekhar MR, Ellingwood BR. A new look at the response surface approach for reliability analysis. Struct
Safety 1993;12:205–20.
[2] Veneziano D, Casciati F, Faravelli L. Method of seismic fragility for complicated systems. In: Proc of the 2nd
Committee of Safety of Nuclear Installation (CNSI). Specialist Meeting on Probabilistic Methods in Seismic Risk
Assessment for NPP, Lawrence Livermore Laboratory, CA, 1983.
[3] Searle SR, Casella G, McCulloch CE. Variance components. New York: Wiley; 1992.
[4] Casciati F, Faravelli L. Fragility analysis of complex structural systems. New York: Wiley; 1991.
[5] Box GEP, Draper NR. Empirical model-building and response surfaces. New York: Wiley; 1987.
[6] Cochran WG, Cox GM. Experimental designs. 2nd ed. New York: Wiley; 1992.
[7] Khuri A, Cornell J. Response surfaces: designs and analyses. 2nd ed. New York: Marcel Dekker; 1996.
[8] Cornell CA. Calculating building seismic performance reliability: a basis for multi-level design norms. In: Proc of
the 11th WCEE, Acapulco, Mexico, STS 13, 1996.
[9] Franchin P, Lupoi A, Pinto PE, Schotanus MIJ. Response surface for seismic fragility analysis of RC structures.
In: Der Kiureghian A, Madanat S, Pestana JM, editors. Proc of the ICASP 9. Rotterdam: Millpress; 2003. p. 41–8.
M.I.J. Schotanus et al. / Structural Safety 26 (2004) 421–441 441

[10] Shome N, Cornell AC, Bazzurro P, Carbalho IE. Earthquakes, records and non-linear responses. Earthquake
Spectra 1998;14(3):469–500.
[11] Franchin P, Lupoi A, Pinto PE, Schotanus MIJ. Seismic fragility of reinforced concrete structures using a response
surface approach. J Earthquake Eng 2003;7(Special Issue 1):45–77.
[12] Fardis MN. Design of an irregular building for the SPEAR project. Internal Report, Structures Laboratory,
University of Patras, Greece, 2002.
[13] McKenna F. Object oriented finite element programming: frameworks for analysis, algorithms, and parallel
computing. Ph.D. Thesis, Department of Civil and Environmental Engineering, University of California, Berkeley,
1997.
[14] Paulay T, Priestley MJN. Seismic design of reinforced concrete and masonry buildings. New York: Wiley; 1992.
[15] Priestley MJN, Verma R, Xiao Y. Seismic shear strength of reinforced concrete columns. J Struct Eng
1994;120(8):2310–29.
[16] Sabetta F, Pugliese A. Estimation of response spectra and simulation of nonstationary earthquake ground
motions. Bull Seismol Soc Am 1996;86(2):337–52.
[17] Haukaas T. Finite element reliability and sensitivity analysis of hysteretic degrading structures. Ph.D. Thesis,
Department of Civil and Environmental Engineering, University of California, Berkeley, 2003.
