13th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP13
Seoul, South Korea, May 26-30, 2019
Ground motion sample size vs estimation uncertainty in seismic risk
Georgios Baltzopoulos
Assistant Professor, Dept. of Structures for Engineering and Architecture, Università degli studi di
Napoli Federico II, Naples, Italy
Iunio Iervolino
Full Professor, Dept. of Structures for Engineering and Architecture, Università degli studi di Napoli
Federico II, Naples, Italy
Roberto Baraschino
PhD Candidate, Dept. of Civil Engineering, National Technical University of Athens, Athens, Greece
ABSTRACT: In the context of seismic risk assessment as per the performance-based earthquake engineering
paradigm, a probabilistic description of structural vulnerability is often obtained via dynamic analysis of a non-
linear numerical model. It typically involves subjecting the structural model to a suite of ground-motions that are
representative, as a sample, of possible seismic shaking at the site of interest. The analyses’ results are used to
calibrate a stochastic model describing structural response as a function of seismic intensity. The sample size of
ground motion records used is, nowadays, usually governed by computation-time constraints; on the other hand,
it directly affects the estimation uncertainty which is inherent in risk analysis carried out in this way. Recent studies
have suggested methodologies for the quantification of estimation uncertainty, to be used as tools for determining
the appropriate number of records for each application on an objective basis. The present study uses one of these
simulation-based methodologies, based on standard statistical inference methods and the derivation of structural
fragility via incremental dynamic analysis, to investigate the accuracy of the risk estimate (e.g., the annual failure
rate) vs the size of ground motion samples. These investigations consider various scalar intensity measures and
confirm that the number of records required to achieve a given level of accuracy for the annual failure rate depends
not only on the dispersion of structural responses, but also on the shape of the hazard curve at the site. This indicates
that the efficiency of some frequently-used intensity measures is not only structure-specific but also site-specific.
1. INTRODUCTION
Performance-based earthquake engineering (PBEE; Cornell and Krawinkler 2000) entails the probabilistic quantification of structure-specific seismic risk. This risk can be quantified by the annual rate of earthquakes able to cause the structure to violate a seismic performance objective, which can be simply termed the failure rate, $\lambda_f$, given by Eq. (1):

$$\lambda_f = \int_{im} P\left[\,f \mid im\,\right] \cdot \left|\, d\lambda_{im} \,\right| \qquad (1)$$
where the conditional probability term $P\left[\,f \mid im\,\right]$ represents what is often known as a fragility function, which provides the probability of failure for various values of a seismic intensity measure (IM), while $\lambda_{im}$ is the annual rate of earthquakes exceeding the shaking intensity value $im$ and therefore constitutes a measure of seismic hazard at the site.
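In practice, Eq. (1) is evaluated numerically by discretizing the hazard curve into intensity bins. The following minimal sketch illustrates the mechanics of this convolution; it is not taken from the paper, and both the power-law hazard curve and the lognormal fragility it uses are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import lognorm

def failure_rate(im, hazard, fragility):
    """Numerical approximation of Eq. (1):
    lambda_f = integral of P[f|im] |d(lambda_im)|."""
    d_lambda = -np.diff(hazard)                     # |d lambda_im| per bin (hazard decreases with im)
    p_mid = 0.5 * (fragility[:-1] + fragility[1:])  # fragility at bin midpoints
    return float(np.sum(p_mid * d_lambda))

# hypothetical inputs: a toy power-law hazard curve and a lognormal fragility
im = np.geomspace(0.01, 3.0, 400)                   # intensity levels [g]
hazard = 1e-3 * (im / 0.1) ** -2.5                  # made-up annual exceedance rates
fragility = lognorm.cdf(im, s=0.4, scale=0.8)       # median 0.8 g, dispersion 0.4
print(f"lambda_f ~ {failure_rate(im, hazard, fragility):.2e}")
```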
The state-of-the-art in PBEE is to analytically
estimate structure-specific fragility functions by
means of procedures that require multiple
dynamic analysis runs of a numerical model of the
structure. These analyses typically use a multitude
of acceleration records as input motion, in order
to map the record-to-record variability of inelastic
structural response (Shome et al. 1998). On the
other hand, the evaluation of $\lambda_{im}$ for various
intensity levels, which is known as the hazard
curve, is usually obtained by means of
probabilistic seismic hazard analysis (PSHA; e.g.,
McGuire 1995), which typically employs
empirical ground motion prediction models
(GMPMs) to account for the attenuation of
shaking intensity.
In modern practice, the number of records
used for non-linear dynamic analysis of a
structure is typically limited due to the large
computation times required for running intricate
structural models at high non-linearity levels.
However, this number of records determines the
sample-size of seismic structural responses that is
used for fragility estimation and, eventually, the
failure rate. Since these descriptors of seismic
fragility and risk are inferred from finite-size
samples, they are only estimates of the
corresponding true values, and are therefore
affected by estimation uncertainty (Iervolino
2017). In fact, the estimator of $\lambda_f$, obtained using a specific sample of ground motions and denoted using a hat symbol as $\hat{\lambda}_f$, can be regarded as a random variable (RV) whose distribution is a function of the sample size. In other words, computing $\hat{\lambda}_f$ over and over, each time using a different set of accelerograms (equal in number to the first one and equivalent in characteristics), would lead to a different value of the estimator each time around. Although GMPMs are also based on samples of recorded ground motion, these datasets are extensive enough to allow the assumption that the estimation uncertainty underlying $\hat{\lambda}_f$ is only due to the fragility portion of Eq. (1).
Estimation uncertainty present in parametric fragility models fitted to dynamic analysis results has also been highlighted by other past studies (Eads et al. 2015; Gehl et al. 2015; Jalayer et al. 2015): in fact, a quantitative measure of the effect of this uncertainty on the failure rate can be obtained according to Eq. (2):

$$CoV_{\hat{\lambda}_f} = \frac{\sqrt{VAR\left[\hat{\lambda}_f\right]}}{E\left[\hat{\lambda}_f\right]} \approx \frac{\gamma}{\sqrt{n}} \qquad (2)$$
where the notation $CoV_{\hat{\lambda}_f}$ indicates the coefficient of variation of $\hat{\lambda}_f$, $VAR[\hat{\lambda}_f]$ and $E[\hat{\lambda}_f]$ denote its variance and expected value, respectively, $n$ is the sample size of accelerograms used to estimate the fragility function and $\gamma$ is a parameter that depends on the so-called efficiency of the IM chosen to express structural fragility and also on the shape of the site-specific hazard curve.
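Read the other way around, Eq. (2) gives the ground motion sample size needed to contain the mean relative estimation error to a target value, $n \approx (\gamma/CoV)^2$. A minimal sketch follows, assuming the parameter $\gamma$ is already known for the site-structure-IM combination at hand:

```python
import math

def records_needed(gamma, target_cov):
    """Invert Eq. (2), CoV ~ gamma / sqrt(n), for the sample size n."""
    return math.ceil((gamma / target_cov) ** 2)

# e.g., gamma = 1.0 implies one hundred records for a 10% mean relative error
print(records_needed(1.0, 0.10))   # -> 100
print(records_needed(0.5, 0.20))   # -> 7
```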
The objective of the present article is to
employ a simulation-based methodology for the
quantification of estimation uncertainty, which
was recently proposed as part of a broader-in-
scope study (Baltzopoulos et al. 2018a) and
investigate the efficiency of some commonly-
used scalar IMs, directly in terms of the ground
motion sample size required to contain the mean
relative estimation error, rather than in terms of its
frequently-used proxy; i.e., the dispersion of
response. This methodology is based on
incremental dynamic analysis (IDA; Vamvatsikos
and Cornell 2001) and involves using a relatively
large set of accelerograms to run dynamic
analyses for an assortment of simple inelastic
structures. The results of these analyses are then
used to fuel a procedure based on Monte-Carlo
simulation, where fragility estimates at various
limit states and using alternative IMs are
generated and statistics of the estimator of the
failure rate, $\hat{\lambda}_f$, are extracted.
This article is structured as follows: first, there is a brief presentation of the
methodology for estimating structural fragility via
an IM-based procedure and of that for obtaining
statistics of the estimator of failure rate. Then
specific applications are given, considering
single-degree-of-freedom (SDOF) and simple
frame structures exposed to a variety of seismic
hazard conditions. Finally, the issue of record
sample size vs estimation uncertainty in the
estimate of the risk metric is discussed, in
conjunction with the choice of IM used as
interfacing variable, followed by some
concluding remarks.
2. METHODOLOGY
In order to investigate the issue of ground
motion sample-size vs estimation uncertainty,
fragility is derived via dynamic analysis using the
so-called IM-based approach using IDA. IDA
consists of running a series of analyses for a non-
linear structure, using a suite of accelerograms
that are scaled in amplitude in order to represent a
broad range of IM levels. At each IM level, a measure of structural response, generically named an engineering demand parameter (EDP), is registered. An exception to this are cases where the response approaches numerical instability, which translates into lack of convergence in the computer model (Shome and Cornell 2000). Thus,
at the conclusion of the dynamic analyses at an
adequate number of IM levels, a quasi-continuous
EDP-IM relationship is obtained, termed an IDA
curve (Figure 1).
Figure 1: IM-based derivation of seismic fragility via incremental dynamic analysis. Set of generic IDA curves and intersections of each curve with a vertical line passing through the failure threshold (a); parametric (lognormal) and non-parametric representations of the fragility function derived from the IDA results (b).
It can be assumed that violation of some limit state of seismic performance (i.e., failure) occurs whenever the EDP response exceeds a certain threshold value, denoted as $edp_f$. In this context, IM-based fragility entails the introduction of an additional RV, $IM_f$, which is the lowest seismic intensity that a record has to be scaled to, in order to cause $EDP \geq edp_f$. Thus, $IM_f$ may be viewed as the seismic intensity that causes structural failure and, consequently, the fragility function can be defined as the cumulative distribution function of $IM_f$; i.e., $P\left[\,f \mid im\,\right] = P\left[IM_f \leq im\right]$.
In the context of IDA, the lowest IM value for each record that causes the structure to reach the performance threshold can be calculated by finding the height, $im_{f,i}$, at which the i-th IDA curve intersects the vertical line $EDP = edp_f$ (with $i \in \{1, 2, \ldots, n\}$, $n$ being the total number of records), as shown in Figure 1a. These values can be considered a sample of realizations of $IM_f$ and, consequently, well-known statistical methods (e.g., Baker 2015) can be used to fit a parametric probability distribution model to that sample. One frequently-used distribution is the lognormal (Figure 1b), which is completely defined by two parameters: the logarithmic mean and standard deviation, whose point estimates based on one sample, $\hat{\eta}_{IM_f}$ and $\hat{\beta}_{IM_f}$ respectively, are given in Eq. (3):

$$\hat{P}\left[\,f \mid im\,\right] = \Phi\!\left(\frac{\log im - \hat{\eta}_{IM_f}}{\hat{\beta}_{IM_f}}\right), \qquad \hat{\eta}_{IM_f} = \frac{1}{n}\sum_{i=1}^{n}\log im_{f,i}, \qquad \hat{\beta}_{IM_f} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\log im_{f,i} - \hat{\eta}_{IM_f}\right)^2} \qquad (3)$$
where $im_{f,i}$ is the i-th record's (lowest) scaled IM value causing failure and $\Phi(\cdot)$ is the standard Gaussian cumulative distribution function. A non-parametric alternative is to let the observed sample values approximate the fragility by defining a stepwise function, according to Eq. (4):

$$\hat{P}\left[\,f \mid im\,\right] = \hat{P}\left[IM_f \leq im\right] = \frac{1}{n}\sum_{i=1}^{n} I_{\left[im_{f,i} \leq im\right]} \qquad (4)$$
where $I_{[im_{f,i} \leq im]}$ is an indicator function that returns 1 if $im_{f,i} \leq im$ and 0 otherwise. In either case, once the fragility function has been estimated, the point estimate of the failure rate, $\hat{\lambda}_f$, can be obtained via Eq. (1).
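The two estimators above translate directly into code. The sketch below is illustrative only and uses a made-up sample of failure intensities; it fits the lognormal model of Eq. (3) and evaluates the empirical fragility of Eq. (4):

```python
import numpy as np
from scipy.stats import norm

def fit_lognormal_fragility(im_f):
    """Eq. (3): point estimates of the logarithmic mean and standard
    deviation from a sample of failure intensities (one per record)."""
    log_im = np.log(im_f)
    return log_im.mean(), log_im.std(ddof=1)   # eta_hat, beta_hat (n-1 denominator)

def lognormal_fragility(im, eta_hat, beta_hat):
    """P[f|im] = Phi[(log im - eta_hat) / beta_hat]."""
    return norm.cdf((np.log(im) - eta_hat) / beta_hat)

def empirical_fragility(im, im_f):
    """Eq. (4): fraction of records whose failure intensity does not exceed im."""
    return np.array([(im_f <= x).mean() for x in np.atleast_1d(im)])

# hypothetical IDA results: lowest scaled IM causing failure, per record [g]
im_f = np.array([0.45, 0.62, 0.38, 0.81, 0.55, 0.70, 0.49, 0.66])
eta, beta = fit_lognormal_fragility(im_f)
im = np.array([0.3, 0.5, 0.7])
print(lognormal_fragility(im, eta, beta))
print(empirical_fragility(im, im_f))
```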
As already mentioned, the estimation uncertainty inherent in deriving the fragility from a finite sample of structural responses is propagated to the estimator of seismic risk, $\hat{\lambda}_f$, which should therefore be regarded as an RV and a function of the sample: assuming that one were to perform a number of different IDAs, each time using a set of accelerograms of the same size but with different records than the previous ones, it is to be expected that the estimated fragility curve would differ from time to time, thus leading to different estimates of the failure rate (i.e., different realizations of the RV $\hat{\lambda}_f$).
One way of quantifying the estimation uncertainty of $\hat{\lambda}_f$ is by means of the mean relative estimation error, $CoV_{\hat{\lambda}_f}$, which can be regarded as the coefficient of variation of the estimator. In the case of IM-based fragility via IDA, the relationship between $CoV_{\hat{\lambda}_f}$ and the ground motion sample size $n$ can be approximated by means of Monte Carlo simulation (Baltzopoulos et al. 2018a). This procedure begins with a reference IDA that uses a relatively large number of records ($n = 200$ is used herein) to derive a reference fragility function, which can be either lognormal or non-parametric. The simulation entails randomly sampling $s$ times from this reference distribution of $IM_f$ for different sample sizes $n \in \{2, 3, \ldots, 200\}$ (in the case of non-parametric, empirical fragility, this translates to resampling with replacement). At the next step of the procedure, either new lognormal fragility curves are fitted to each extracted sample according to Eq. (3), or Eq. (4) is used to directly express the fragility function. In either case, integrating the fragility with a hazard curve, according to Eq. (1), leads to a point estimate of the failure rate at the j-th simulation, denoted $\hat{\lambda}_{f,j}$. As a last step, after $s$ simulations have been concluded at any given record sample size $n$, $E[\hat{\lambda}_f]$ and $VAR[\hat{\lambda}_f]$ can be approximated via the first two moments of the Monte-Carlo-generated sample of point estimates. Substituting these values into Eq. (2) yields Eq. (5):
$$CoV_{\hat{\lambda}_f} \approx \frac{\sqrt{\dfrac{1}{s-1}\displaystyle\sum_{j=1}^{s}\left(\hat{\lambda}_{f,j} - \dfrac{1}{s}\displaystyle\sum_{k=1}^{s}\hat{\lambda}_{f,k}\right)^{2}}}{\dfrac{1}{s}\displaystyle\sum_{j=1}^{s}\hat{\lambda}_{f,j}} \qquad (5)$$
which provides the simulation-based approximation of $CoV_{\hat{\lambda}_f}$.
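A compact sketch of this simulation loop is given below. It is a schematic re-implementation under simplifying assumptions (lognormal reference fragility refitted at each repetition, a toy hazard curve, made-up reference sample), not the authors' code:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=1)

def rate_from_sample(sample, im, hazard):
    """Fit the lognormal fragility of Eq. (3) to a sample of failure IMs
    and integrate it with the hazard curve as in Eq. (1)."""
    eta, beta = np.log(sample).mean(), np.log(sample).std(ddof=1)
    frag = norm.cdf((np.log(im) - eta) / beta)
    return np.sum(0.5 * (frag[:-1] + frag[1:]) * -np.diff(hazard))

def cov_lambda_hat(im_f_ref, n, s, im, hazard):
    """Eq. (5): sample CoV of s Monte-Carlo point estimates of the failure
    rate, each based on n failure IMs resampled (with replacement) from
    the reference fragility sample."""
    lam = np.array([rate_from_sample(rng.choice(im_f_ref, n, replace=True), im, hazard)
                    for _ in range(s)])
    return lam.std(ddof=1) / lam.mean()

# hypothetical reference sample of 200 failure IMs and a toy hazard curve
im_f_ref = np.exp(rng.normal(np.log(0.6), 0.45, size=200))
im = np.geomspace(0.01, 5.0, 300)
hazard = 1e-3 * (im / 0.1) ** -2.5
for n in (10, 25, 50, 100):
    print(n, round(cov_lambda_hat(im_f_ref, n, s=1000, im=im, hazard=hazard), 3))
```

Running the loop for increasing $n$ should show the roughly $1/\sqrt{n}$ decay of the error anticipated by Eq. (2).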
3. APPLICATIONS
The methodology outlined in the previous
sections is applied to an assortment of simple
inelastic structures, which are assumed to be
located at three Italian sites that can be considered
representative of varying levels of seismic hazard
severity. The three sites considered are in the
vicinity of the cities of L’Aquila (representative
of a high seismic hazard site), Naples (medium
hazard levels) and Milan (low hazard) and are all
assumed to be characterized by firm soil
conditions. At each of the three sites, a yielding single-degree-of-freedom system is considered, with natural vibration period $T = 0.7\,\mathrm{s}$ and a viscous damping ratio of 0.05. Additionally, at the L'Aquila site a four-story steel moment-resisting frame is considered, with first-mode period $T_1 = 1.82\,\mathrm{s}$.
Hazard curves were calculated at these sites, in terms of several different scalar IMs, using the software REASSESS (Chioccarelli et al. 2018) employing the seismic source model of Meletti et al. (2008). Hazard was obtained at all three sites for spectral pseudo-acceleration at the SDOFs' period, $Sa(T = 0.7\,\mathrm{s})$, and also for peak ground acceleration (PGA) and $Sa(T = 1.8\,\mathrm{s})$ at L'Aquila. Also considered were two more advanced IMs that implicitly account for spectral shape (Bojórquez and Iervolino 2011; Eads et al. 2015), namely average spectral acceleration, $S_{avg}$, and $I_{Np}$, given by Eqs. (6) and (7), respectively:
$$S_{avg} = \left(\prod_{i=1}^{n_T} Sa\left(T_i\right)\right)^{1/n_T} \qquad (6)$$

$$I_{Np} = Sa\left(T_1\right)\cdot\left(\frac{S_{avg}}{Sa\left(T_1\right)}\right)^{0.4} \qquad (7)$$

where $n_T$ is the number of periods, $T_i$, used in the definition of $S_{avg}$. Hazard curves in terms of $S_{avg}$ and $I_{Np}$ are obtained at all three sites using $T_i \in \{0.7\,\mathrm{s}, 1.0\,\mathrm{s}, 1.5\,\mathrm{s}\}$ for the seismic risk assessment of the SDOF structures and, at L'Aquila, using $T_i \in \{0.6\,\mathrm{s}, 1.8\,\mathrm{s}, 2.5\,\mathrm{s}, 4.0\,\mathrm{s}\}$ for that of the steel frame. These hazard curves are shown in Figure 2.
Figure 2: Annual exceedance rates (hazard curves)
at the three Italian sites for all IMs considered;
hazard curves used for seismic risk assessment of the
SDOF structures (above) and for the four-story steel
frame presumed at L’Aquila (below).
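For clarity, the two spectral-shape-sensitive IMs of Eqs. (6) and (7) can be computed per record from its spectral ordinates. A minimal sketch follows, with made-up spectral values at the $T_i$ used for the SDOF applications:

```python
import numpy as np

def s_avg(sa):
    """Eq. (6): geometric mean of the spectral ordinates Sa(Ti)."""
    sa = np.asarray(sa, dtype=float)
    return np.exp(np.mean(np.log(sa)))   # numerically stable form of prod(sa)**(1/nT)

def i_np(sa_t1, savg):
    """Eq. (7): I_Np = Sa(T1) * (S_avg / Sa(T1))**0.4."""
    return sa_t1 * (savg / sa_t1) ** 0.4

# hypothetical ordinates at Ti = {0.7 s, 1.0 s, 1.5 s} for one record [g]
sa = [0.52, 0.34, 0.19]
print(s_avg(sa))                 # ~0.32 g
print(i_np(sa[0], s_avg(sa)))    # ~0.43 g
```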
In order to construct reference fragility
functions for all of the structures considered, via
IDA, a set of two-hundred records was selected
from the NESS flatfile (Pacor et al. 2018),
avoiding records that were likely affected by near-
source effects such as rupture directivity or by site
effects due to deformable soil deposits.
3.1. SDOF structures
The simplest structures used in this
application are yielding SDOF systems that
follow a peak-oriented hysteretic rule (Lignos and
Krawinkler 2011) that also considers in-cycle
strength degradation by including a softening,
negative-stiffness post-peak branch in their
monotonic pushover (backbone) curve, thus
permitting explicit consideration of the collapse
limit state in the numerical analyses. The yield threshold and backbone characteristics of the three SDOF oscillators were tweaked to render them ostensibly risk-equivalent; i.e., they were determined so that each structure at its presumed site exhibits the same estimated annual collapse rate ($\hat{\lambda}_f \approx 3.6 \times 10^{-4}$) when fragility at collapse is calculated from the IDA flat-lines (Vamvatsikos and Cornell 2004) with $n = 200$ records, using $S_{avg}$ as the IM. The numerical models of the oscillators and the IDA analyses were set up in the OpenSees analysis platform (McKenna 2011) using the DYANAS interface (Baltzopoulos et al. 2018b). Both lognormal models and non-parametric representations are considered for the collapse fragilities.
In Figure 3, the resulting values of the mean relative estimation error, $CoV_{\hat{\lambda}_f}$, from the Monte Carlo simulation procedure, i.e., from Eq. (5), are plotted against record sample size $n$ for all combinations of IM, structure-site pairing and fragility model (eighteen cases in total), also reporting the point estimates of $\hat{\lambda}_f$ at $n = 200$ in the legend. It is clear that these two-hundred-record estimates shift when switching IM, but this is mainly an effect of how sensitive structural response is to seismological parameters when records are scaled (Luco and Cornell 2007), and not directly related to ground motion sample size and estimation uncertainty. The figure also reports the number of records required to limit $CoV_{\hat{\lambda}_f}$ to
20% and 10% for some cases. The most immediate observation emanating from Figure 3 is that, for risk-wise nominally equivalent structures whose fragility is expressed in terms of the same IM, the shape of the hazard curve makes a difference in the number of records required to limit estimation uncertainty to a desired level, as also verified analytically in the past (Baltzopoulos et al. 2018a). The parameter $\gamma$, which summarizes the combined effect of the shape of the hazard curve and of IM efficiency on the coefficient of variation of $\hat{\lambda}_f$ according to Eq. (2), can be evaluated by means of a least-squares fit of that equation to the simulation data and is given in Table 1.
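That least-squares fit reduces to a one-parameter linear regression through the origin in the variable $1/\sqrt{n}$. A sketch with invented simulation output follows; the closed-form slope is the fitted $\gamma$:

```python
import numpy as np

def fit_gamma(n_values, cov_values):
    """Least-squares fit of CoV(n) = gamma / sqrt(n): a no-intercept
    linear fit of CoV against 1/sqrt(n)."""
    x = 1.0 / np.sqrt(np.asarray(n_values, dtype=float))
    y = np.asarray(cov_values, dtype=float)
    return np.sum(x * y) / np.sum(x * x)   # closed-form slope through the origin

# hypothetical Monte-Carlo output: CoV of the failure-rate estimator vs n
n = np.array([5, 10, 25, 50, 100, 200])
cov = np.array([0.42, 0.30, 0.19, 0.13, 0.095, 0.066])
print(round(fit_gamma(n, cov), 2))         # ~0.94 for this toy data
```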
Figure 3: Mean relative estimation error, $CoV_{\hat{\lambda}_f}$, calculated via Monte Carlo simulation for the three SDOF structures considered, plotted against ground motion sample size $n$.
3.2. MDOF steel frame structure
For the MDOF structure, that is, the steel moment-resisting frame presumed built at the higher-hazard location of L'Aquila, the same procedure was followed as for the three simpler SDOF systems. In this case, a center-line, non-linear finite-element model of the structure, created in the OpenSees environment, was used to run IDA with the same two-hundred-record set. Differently from the previous applications, the collapse limit state was not considered; instead, fragility was derived for two generic limit states whose violation can be conventionally defined by the exceedance of some threshold in terms of maximum inter-story drift ratio, $IDR$: $IDR > 1.5\%$ was considered for the first limit state and $IDR > 2.5\%$ for the second. As in the previous case, the simulation-based values of $CoV_{\hat{\lambda}_f}$ were calculated for $n \in \{2, 3, \ldots, 200\}$ using all four IMs for which hazard curves had been derived, and the results are plotted in Figure 4. The corresponding $\gamma$ values, i.e., the site- and structure-specific parameter that allows the mean relative estimation error to be expressed as a function of record sample size as $CoV_{\hat{\lambda}_f} = \gamma/\sqrt{n}$, are also reported in Table 1.
3.3. Discussion of the results
A cursory examination of the results from the two examples already reveals that the adoption of a traditional IM such as PGA can be inadequate for risk analysis of a flexible structure, since the number of records required to limit $CoV_{\hat{\lambda}_f}$ to an (arbitrary) value as low as 10% verges on the impracticable. In certain cases, such as the estimation of the annual collapse rate, and especially at low-seismicity areas, the same can be said even for first-mode spectral acceleration, $Sa(T_1)$; in fact, even for these simple inelastic structures, the number of records required to keep the mean relative estimation error below 10% exceeds fifty. Finally, it is interesting to compare the relative efficiency of the geometric mean spectral acceleration, $S_{avg}$, vs $I_{Np}$, the weighted geometric mean; i.e., to compare their ability to reduce estimation uncertainty for a fixed record sample
size. From the calculated $\gamma$ values, it can be observed that, for these specific applications, $I_{Np}$ is somewhat more efficient than $S_{avg}$ at limit states corresponding to lower levels of inelasticity, while $S_{avg}$ overtakes $I_{Np}$ in efficiency near collapse. A more elaborate discussion of the issue can be found in the article that inspired this study (Baltzopoulos et al. 2018a).
Figure 4: Mean relative estimation error, $CoV_{\hat{\lambda}_f}$, calculated via Monte Carlo simulation for the steel frame assumed at L'Aquila, plotted against ground motion sample size $n$.
Table 1: Dispersion of intensity causing failure, $\beta_{IM_f}$, and site-/structure-specific parameter $\gamma$ of the mean relative estimation error ($CoV_{\hat{\lambda}_f} = \gamma/\sqrt{n}$), provided for each site, structure, IM, limit state and assumption about the fragility function (LogN - lognormal, Emp - empirical).
Site      Structure                             Limit state   IM       β_IMf   γ (LogN / Emp)
L'Aquila  Four-story steel MRF (T1 = 1.82 s)    IDR > 1.5%    PGA      0.608   1.148 / 1.078
                                                              Sa(T1)   0.253   0.438 / 0.561
                                                              INp      0.220   0.408 / 0.490
                                                              Savg     0.265   0.530 / 0.563
                                                IDR > 2.5%    PGA      0.639   1.536 / 1.357
                                                              Sa(T1)   0.333   0.715 / 0.694
                                                              INp      0.267   0.628 / 0.605
                                                              Savg     0.241   0.622 / 0.628
Milan     Inelastic SDOF (T = 0.70 s)           Collapse      Sa(T)    0.475   1.839 / 1.476
Naples                                                        Sa(T)    0.444   1.318 / 0.998
L'Aquila                                                      Sa(T)    0.441   0.922 / 0.712
Milan                                                         INp      0.378   1.468 / 1.383
Naples                                                        INp      0.341   1.099 / 0.855
L'Aquila                                                      INp      0.337   0.701 / 0.580
Milan                                                         Savg     0.273   1.039 / 1.119
Naples                                                        Savg     0.223   0.760 / 0.641
L'Aquila                                                      Savg     0.214   0.457 / 0.409
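As a worked example of how Table 1 can be read, take the collapse fragility of the SDOF presumed at L'Aquila with a lognormal fit: Eq. (2) with $\gamma = 0.922$ for $Sa(T)$ implies roughly $n = (0.922/0.10)^2 \approx 86$ records to contain the mean relative estimation error at 10%, whereas $\gamma = 0.457$ for $S_{avg}$ implies only about 21; the same target accuracy costs roughly four times fewer analyses with the more efficient IM.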
4. CONCLUSIONS
The introduction of ever more realistic, and thus complex, numerical structural models in probabilistic seismic risk analysis renders the topic of constraining the appropriate size of the input ground motion set ever topical. This study builds on recent proposals to base the determination of the number of records on the notion of limiting the estimation uncertainty of the risk metric to desired levels. A general rule of thumb emerging from these results is that using record sample sizes in the thirty-to-fifty range, in combination with the adoption of more advanced,
efficient IMs, tends to keep the mean relative
estimation error at 10% or below. The higher end of that range is needed for cases that combine limit states corresponding to larger inelastic excursions with sites subjected to lower hazard levels. It became apparent that the so-called
efficiency of seismic intensity measures, i.e., their
ability to keep estimation uncertainty to the
desired levels using smaller-size samples of
ground motions, is in fact site- and structure-
dependent, as recent research has shown.
5. ACKNOWLEDGEMENTS
The study presented in this paper was developed within
the activities of ReLUIS (Rete dei Laboratori
Universitari di Ingegneria Sismica) for the project ReLUIS-DPC 2014–2018, as well as within the H2020-
MSCA-RISE-2015 research project EXCHANGE-Risk
(Grant Agreement Number 691213).
6. REFERENCES
Baker, J. W. (2015). “Efficient analytical fragility
function fitting using dynamic structural analysis.”
Earthquake Spectra, 31(1).
Baltzopoulos, G., Baraschino, R., and Iervolino, I.
(2018a). “On the number of records for structural
risk estimation in PBEE.” Earthquake Engineering
& Structural Dynamics, (In press).
Baltzopoulos, G., Baraschino, R., Iervolino, I., and
Vamvatsikos, D. (2018b). “Dynamic analysis of
single-degree-of-freedom systems (DYANAS): a
graphical user interface for OpenSees.”
Engineering Structures, 177, 395–408.
Bojórquez, E., and Iervolino, I. (2011). “Spectral shape
proxies and nonlinear structural response.” Soil
Dynamics and Earthquake Engineering, 31(7), 996–1008.
Chioccarelli, E., Cito, P., Iervolino, I., and Giorgio, M.
(2018). “REASSESS V2.0: software for single- and
multi-site probabilistic seismic hazard analysis.”
Bulletin of Earthquake Engineering, (in press).
Cornell, C. A., and Krawinkler, H. (2000). “Progress and
Challenges in Seismic Performance Assessment.”
PEER Center News, 3(2), 1–4.
Eads, L., Miranda, E., and Lignos, D. G. (2015).
“Average spectral acceleration as an intensity
measure for collapse risk assessment.” Earthquake
Engineering and Structural Dynamics, 44(12), 2057–2073.
Gehl, P., Douglas, J., and Seyedi, D. M. (2015).
“Influence of the number of dynamic analyses on
the accuracy of structural response estimates.”
Earthquake Spectra, 31(1).
Iervolino, I. (2017). “Assessing uncertainty in estimation
of seismic response for PBEE.” Earthquake
Engineering & Structural Dynamics, 46(10), 1711–1723.
Jalayer, F., De Risi, R., and Manfredi, G. (2015).
“Bayesian Cloud Analysis: Efficient structural
fragility assessment using linear regression.”
Bulletin of Earthquake Engineering, 13(4), 1183–1203.
Lignos, D. G., and Krawinkler, H. (2011). “Deterioration
Modeling of Steel Components in Support of
Collapse Prediction of Steel Moment Frames under
Earthquake Loading.” Journal of Structural
Engineering, 137(11), 1291–1302.
Luco, N., and Cornell, C. A. (2007). “Structure-specific
scalar intensity measures for near-source and
ordinary earthquake ground motions.” Earthquake
Spectra, 23(2), 357–392.
McGuire, R. K. (1995). “Probabilistic Seismic Hazard
Analysis and Design Earthquakes: Closing the
Loop.” Bulletin of the Seismological Society of
America, 85(5), 1275–1284.
McKenna, F. (2011). “OpenSees: A framework for
earthquake engineering simulation.” Computing in
Science and Engineering, 13(4), 58–66.
Meletti, C., Galadini, F., Valensise, G., Stucchi, M.,
Basili, R., Barba, S., Vannucci, G., and Boschi, E.
(2008). “A seismic source zone model for the
seismic hazard assessment of the Italian territory.”
Tectonophysics, 450(1–4), 85–108.
Pacor, F., Felicetta, C., Lanzano, G., Sgobba, S., Puglia,
R., D’Amico, M., Russo, E., Baltzopoulos, G., and
Iervolino, I. (2018). “NESS v1.0: A worldwide
collection of strong-motion data to investigate near
source effects.” Seismological Research Letters.
Shome, N., and Cornell, C. A. (2000). “Structural seismic
demand analysis: Consideration of collapse.” 8th
ACSE Specialty Conference on Probabilistic
Mechanics and Structural Reliability, (3),
PMC2000-119.
Shome, N., Cornell, C. A., Bazzurro, P., and Carballo, J.
E. (1998). “Earthquakes, records, and nonlinear
responses.” Earthquake Spectra, 14(3), 469–500.
Vamvatsikos, D., and Cornell, C. A. (2001).
“Incremental Dynamic Analysis.” Earthquake
Engineering and Structural Dynamics, 31(3), 491–514.
Vamvatsikos, D., and Cornell, C. A. (2004). “Applied
incremental dynamic analysis.” Earthquake
Spectra, 20(2), 523–553.