Generalized multivariate fragility functions with multiple damage states

C. P. Andriotis and K. G. Papakonstantinou
Department of Civil & Environmental Engineering
The Pennsylvania State University, University Park, PA, 16802, USA

Abstract

Fragility functions are widely used in performance-based analysis and risk assessment of structures, readily addressing the earthquake and structural engineering needs for uncertainty quantification. Fragility functions indicate the probability of a system exceeding certain damage states given some appropriate intensity measures characterizing recorded or simulated data-series. Formally, these intensity measures are characteristic features of the data-series, which can then be probabilistically mapped to a label state space, through presumed structural models and engineering demand parameters. In this sense, the development of fragility functions is a learning task, which has to preserve the statistical information of the labeled data. In this work, fragility functions are derived in their utmost generality, accounting for both multivariate intensity measures and multiple damage states, and are even further expanded to cases with multiple transitions among different states, what is called herein generalized fragility functions. As shown in this work, the framework of softmax regression is proven to be the appropriate one for such learning tasks for several theoretical and practical reasons. Different variants of the methodology applicable in fragility analysis are discussed and their underlying implementation details, statistical properties and assumptions are provided.
1 Introduction
Fragility analysis of structures is a practical mathematical and engineering tool for parametrizing the inherent uncertainties due to earthquake events, enabling engineering decision-making based on a few informative metrics. Fragility functions quantify the probabilities of a structure exceeding certain Damage (or limit) States (DSs) Z, given some Engineering Demand Parameters Y (EDPs), $f_{Z|Y}$. Through these probabilities we can eventually estimate the Mean Annual Frequency (MAF) of DS exceedance, $\lambda_{DS}$, and other relevant MAF responses related to economic, societal, or environmental impacts. A strong assumption made is that given some Intensity Measures X (IMs) we can sufficiently define the probabilistic response of EDPs, $f_{Y|X}$. Following the premises of these conditional independencies of the uncertain measures and responses, we obtain [2]:

$$\lambda_{DS} = \iint_{x,y} f_{Z|Y}(z=1 \mid y)\, dF_{Y|X}(y \mid x)\, d\lambda_{IM}(x) \quad (1)$$

where $z \in \{0,1\}$, 1 for exceeding the DS and 0 otherwise, and $\lambda_{IM}$ is the corresponding seismic hazard function.
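As a concrete illustration of relation (1), the double integral can be approximated by Monte Carlo sampling. The sketch below is purely illustrative and not the paper's calibrated setup: the hazard, the lognormal EDP response, and the drift capacity defining the DS are all assumed values.

```python
import numpy as np

# Illustrative Monte Carlo sketch of relation (1); every model below is an
# assumption for demonstration, not the paper's calibrated setup.
rng = np.random.default_rng(0)

n = 200_000
rate_total = 0.2                        # assumed total MAF of seismic events
x = rng.exponential(scale=0.3, size=n)  # assumed IM samples, proportional to the hazard

# f_{Y|X}: lognormal EDP (drift) given IM, with assumed median a*x**b and log-std beta
a, b, beta = 0.02, 1.0, 0.4
y = np.exp(np.log(a * x**b) + beta * rng.standard_normal(n))

# f_{Z|Y}: step function, z = 1 once the EDP exceeds an assumed capacity
y_cap = 0.015
lam_DS = rate_total * np.mean(y > y_cap)  # MAF of DS exceedance, relation (1)
print(lam_DS)
```

The sampling distribution of the IM stands in for the hazard derivative; with a site-specific hazard curve, importance sampling over its discretized slope would replace the exponential draw.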
IASSAR
Safety, Reliability, Risk, Resilience and Sustainability of Structures and Infrastructure
12th Int. Conf. on Structural Safety and Reliability, Vienna, Austria, 6–10 August 2017
Christian Bucher, Bruce R. Ellingwood, Dan M. Frangopol (Editors)
© 2017 TU-Verlag Vienna, ISBN 978-3-903024-28-1
A favorable simplification for alleviating a part of the computational effort is to assume that DSs are precisely defined by EDPs. That is, $f_{Z|Y}$ is just a step function from the EDP space to the discrete Z space. Thereby, fragility functions are given conditionally to IMs, instead of EDPs, and this is the format considered in this work. There are various methods in the literature for selecting and handling recorded or simulated ground motions to extract the information required for the fragility analysis [4,5,7,9,14]. In this work a stochastic ground motion model is utilized, which can also be integrated in (1).
Several formulations for conducting fragility analysis exist, either combining univariate
IMs with binary DSs [14], or univariate IMs with multiple DSs [13], or multivariate IMs with
binary DSs [17]. In general, fragility functions can be expressed as multivariate models with
multiple DSs. While the multivariate extension is straightforward under fair assumptions, the
multidimensional one, i.e. through multiple DSs, often brings out inconsistencies. The crossings
of fragility curves are, for instance, indicative of some modeling flaws, since what they essen-
tially imply is negative state probabilities. Some techniques for circumventing this issue have
been proposed in the literature, e.g. [12], yet without always providing clear theoretical justifi-
cations.
In this paper, multivariate fragility functions with multiple DSs are analyzed within the
context of Softmax Regression (SR) [11]. SR has strong theoretical connections with general-
ized linear models [3], featuring the special case of logit links. In the binary case, SR can be
regarded as binary logistic regression, which has been effectively used for fragility analysis,
e.g. [8,17]. The development of fragility functions based on data and SR is seen as a learning
problem in this work, where the probability distribution over multiple discrete structural DSs is
to be inferred, given some multivariate data attributes.
In current fragility analysis frameworks, the lognormal distribution is favorably utilized for data belonging to certain DSs, mainly for the practical reason of the non-negativity of the used IMs, x. Quite often though, the posterior, $f_{Z|X}$, is misconceived as the likelihood, $f_{X|Z}$, frequently enabling, among other issues, DS probabilities with negative values, i.e. crossings of fragility functions. In this work, we show that SR, in the log-space of IMs, is a mathematically accurate modeling choice for this posterior, when the DS conditional distribution is lognormal.
Besides lognormal, the results are generalizable to the entire exponential family of distributions,
which technically implies that a softmax fragility function assumption is invariant to this large
family of distributions. Another significant feature of fragility analysis is the fact that states
commonly follow a certain order, meaning that, for example, a “minor damage” state is a subset
of an “up-to-major-damage” state. This nested structure is analyzed, investigated and leveraged
along the lines of nominal, ordered and hierarchical regression approaches.
Classic fragility analysis is mainly focused on the DS probabilities given that the structure
is in one given initial configuration, usually the intact state. However, it would have broad
implications to model state transitions from every to all DSs. Such generalized fragility func-
tions are particularly useful in life-cycle applications, that necessitate computation of failure
probabilities from multiple initial structural configurations. This generalized approach that ac-
counts for transition probabilities among all DSs, and is capable of describing long-term struc-
tural behavior, is presented here, assuming Markovian properties for the evolution dynamics of
DSs. In this regard, the current state of the structure is a sufficient statistic over a history of
state transitions that do not need to be tracked.
2 Statistical learning approach
2.1 Theoretical implications
To shed some insight on the choice of SR in fragility analysis, we examine, without loss of generality, a binary state case with one IM. The scope in fragility analysis is to model the posterior probability of a DS exceedance given an IM, $f_{Z|X}$, which based on Bayes' rule can be written:

$$f_{Z|X}(z=1 \mid x) = \frac{f_{X|Z}(x \mid z=1)\, f_Z(1)}{f_{X|Z}(x \mid z=1)\, f_Z(1) + f_{X|Z}(x \mid z=0)\, f_Z(0)} \quad (2)$$
A common misconception in fragility analysis is to model $f_{X|Z}$ instead of $f_{Z|X}$. To elaborate on this, once the structural analysis results have been obtained, usually only the points that indicate damage are kept, and their derived distribution is eventually treated as being the posterior, $f_{Z|X}$, when in fact it is merely the DS conditional likelihood, $f_{X|Z}$. From relation (2) we can further derive:

$$f_{Z|X}(z=1 \mid x) = \frac{1}{1 + \dfrac{f_{X|Z}(x \mid z=0)\, f_Z(0)}{f_{X|Z}(x \mid z=1)\, f_Z(1)}} \quad (3)$$
The state z is binary, and the random variable Z follows a Bernoulli distribution, having probability mass function:

$$f_Z(z; \theta) = \theta^z (1-\theta)^{1-z} \quad (4)$$
A popular distribution for modeling $f_{X|Z}$ is the lognormal, mainly due to its positive support domain. Adopting this assumption we have:

$$f_{X|Z}(x \mid z=i;\, \mu_i, \sigma_i) = \frac{1}{x \sigma_i \sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu_i)^2}{2\sigma_i^2}\right) \quad (5)$$
Substituting (4) and (5) in (3), and after some tedious but trivial algebraic steps, we obtain:

$$f_{Z|X}(z=1 \mid x) = \frac{1}{1 + \exp\!\left(A \ln^2 x + B \ln x + C\right)} \quad (6)$$

where A, B, C are constants, functions of the parameters of (4) and (5). Under the assumption that $\sigma_0 = \sigma_1$, it can be shown that $A = 0$, thus $f_{Z|X}$ is a logistic function in the log-space of x. From a classification perspective, the affinity of the exponent in (6) implies that DSs can be discriminated by a linear function in the log-space of IMs. In case $A \neq 0$, the exponent is quadratic in the log-space of x, which suggests that a quadratic kernel can be integrated in the analysis. This important result in (6) also holds for any distribution in the exponential family, with the lognormal just being one of them [1,6]. In addition, it can be easily shown that with multiple DSs, relation (6) becomes the softmax function, demonstrating that SR is the mathematically accurate modeling choice for fragility analysis, with the potential aid of nonlinear kernels, under the most general and diverse assumptions.
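The claim behind relation (6) can be checked numerically: with two lognormal class-conditionals sharing a common log-standard deviation, the logit of the Bayes posterior is exactly affine in ln x. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

# Numerical check of the result behind relation (6): with lognormal
# class-conditionals sharing a common log-std (sigma0 = sigma1), the Bayes
# posterior has an exactly affine logit in ln x. Parameters are illustrative.
mu0, mu1, sigma, theta = -1.0, 0.5, 0.6, 0.3     # lognormal params, prior P(z=1)

def lognorm_pdf(x, mu, s):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * s**2)) / (x * s * np.sqrt(2 * np.pi))

x = np.linspace(0.05, 5.0, 200)
num = lognorm_pdf(x, mu1, sigma) * theta
post = num / (num + lognorm_pdf(x, mu0, sigma) * (1 - theta))  # Bayes' rule, (2)

logit = np.log(post / (1 - post))                # should be affine in ln x
coeffs = np.polyfit(np.log(x), logit, 1)
residual = np.max(np.abs(logit - np.polyval(coeffs, np.log(x))))
print(residual)                                  # numerically zero
```

Repeating the check with unequal log-standard deviations leaves a quadratic term in ln x, matching the $A \neq 0$ case.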
2.2 Softmax regression
In an SR setting, given a set of n labeled data points lying in an m-dimensional feature space, namely $\{(\mathbf{x}^{(i)}, z^{(i)})\}_{i=1}^{n} = \{((x_1^{(i)}, x_2^{(i)}, \ldots, x_m^{(i)}), z^{(i)})\}_{i=1}^{n}$, we want to estimate the probability of a class z = j given $\mathbf{x}$, $P(z=j \mid x_1, x_2, \ldots, x_m)$, $z \in S = \{1, 2, \ldots, |S|\}$, where |S| is the total number of classes.
In the context of fragility analysis, the classes can designate DSs, whereas the data features are the various IMs. In SR, the labels are often given in a one-zero vector format, meaning that if $\mathbf{x}$ belongs to DS $z = j$, its label is a zero vector with only its j-th entry equal to 1:

$$\mathbf{z} = [\,0 \;\; 0 \;\; \ldots \;\; \underset{j}{1} \;\; \ldots \;\; 0\,] \quad (7)$$
This vectorized representation of the classes also allows for a relaxation of the strict one-zero
requirement, thus allowing for a softer probability distribution over the classes, in cases where
the actual DS is only partially observable and is not known with certainty. To discern the two
approaches in the presence of deterministic classes (one-zero vectors), the method is referred
to as sparse SR, which is quite similar to the classical multi-class logistic regression. Along the
premises discussed in section 2.1, the probability of a class j given x, can be directly modeled
by the softmax function as:
$$P(z=j \mid x_1, x_2, \ldots, x_m) = p_j = \frac{e^{g_j(\mathbf{x})}}{\sum_{i \in S} e^{g_i(\mathbf{x})}} \quad (8)$$
where $g_i$ is an affine function of $\mathbf{x}$, for all $i \in S$:

$$g_i(\mathbf{x}) = a_{0i} + a_{1i} x_1 + \ldots + a_{mi} x_m \quad (9)$$
It is clear from relation (8) that the probabilities of all individual states sum up to 1 for all x,
and are, of course, positive. Although the necessity of positivity is self-evident, its importance
has to be underlined here, since this is the guarantee that resolves fragility functions crossings.
The total number of optimal coefficients to be determined is $(m+1)|S|$, whereas the loss function to be minimized is given by the cross-entropy:

$$L = -\sum_{i=1}^{n} \sum_{j=1}^{|S|} z_j^{(i)} \ln p_j^{(i)} \quad (10)$$
Note that minimizing (10) is essentially equivalent to maximizing the log-likelihood of $P(z \mid x_1, x_2, \ldots, x_m)$, assuming i.i.d. observed data.
2.2.1 Nominal approach
If we set z = k as a reference state, by dividing the denominator and numerator of (8) by $e^{g_k(\mathbf{x})}$ we end up with the multinomial logistic function. This scheme eliminates a set of unknown coefficients. In differentiation from the other types of SR, this reduced version is called the nominal one, and includes $(m+1)(|S|-1)$ coefficients. Accordingly, the probability of each DS now becomes:
$$p_j = \frac{e^{g_j(\mathbf{x})}}{1 + \sum_{i \in S \setminus \{k\}} e^{g_i(\mathbf{x})}}, \qquad p_k = \frac{1}{1 + \sum_{i \in S \setminus \{k\}} e^{g_i(\mathbf{x})}} \quad (11)$$
2.2.2 Ordinal approach
The ordinal SR is a more restrictive version of nominal SR. In this case, the affine $g_i$ functions are now constructed so as to directly take advantage of the ordered state structure, which is rather appealing in fragility analysis given the nature of DSs, as for example “minor damage”, “major damage”, etc. This assumption is true to some extent, but rather restrictive, requiring good prior knowledge of the data domain. In the ordinal case, the total number of optimal coefficients is reduced to $(|S|-1) + m$, since the probabilities of exceedance of a set of sequential DSs are now modeled as:
$$P(z > j \mid x_1, x_2, \ldots, x_m) = \frac{1}{1 + e^{g_j(\mathbf{x})}} \quad (12)$$

$$g_i(\mathbf{x}) = a_{0i} + a_1 x_1 + \ldots + a_m x_m \quad (13)$$
As seen, the gradient of $g_i$ is constant for all i, namely the respective separating hyperplanes, $g_i(\mathbf{x}) = 0$, as well as the corresponding fragility functions, are parallel to each other. As such,
although fragility functions have different means, they share the same variance, as the only way
to guarantee non-crossings in this case. It should be made clear at this point, that equal variance
is not required in the other SR formulations in order to avoid crossings. In all other cases, non-
crossings are a priori guaranteed, even when different variances are used, due to the way the
probabilities are formulated. In these cases, however, the modeled probabilities have to be
simply post-processed in order to provide the typical fragility functions, which model the prob-
ability of exceedance of DSs. Along the ordinal assumptions, early and recent works in fragility
analysis have subtly employed ordinal models, without explicitly addressing it, either using the
probit or the logit link [13,17]. It can be noted that, with the probit link, equation (12) yields
the classical maximum likelihood estimation formulation presented in [13]:
$$P(z > j \mid x_1, x_2, \ldots, x_m) = \Phi\!\left(\frac{x - \mu_j}{\sigma}\right) \quad (14)$$
where, again, x is the log-IM and Φ the standard Gaussian CDF. The probit link is quite similar
to the logit one, with the former turning sharper at the tails. However, as shown in section 2.1
the choice of a logit link is in general theoretically more consistent for DS conditional data,
distributed according to the exponential family.
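The post-processing step mentioned above, turning softmax state probabilities into the typical exceedance-type fragility functions, is a summation from the most severe state down; since the probabilities are positive and sum to one, the resulting curves cannot cross. A small sketch with illustrative probabilities:

```python
import numpy as np

# Post-processing sketch: exceedance-type fragility functions P(z >= j | x)
# obtained by summing softmax state probabilities from the most severe state
# down. The probabilities below are illustrative, one row per IM level.
p = np.array([[0.70, 0.20, 0.10],
              [0.20, 0.50, 0.30],
              [0.05, 0.25, 0.70]])

exceed = np.cumsum(p[:, ::-1], axis=1)[:, ::-1]  # column j holds P(z >= j+1)
print(exceed)
```

Each row of `exceed` is non-increasing by construction, and across IM levels the exceedance columns inherit whatever monotonic trend the state probabilities carry.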
2.2.3 Hierarchical approach
The hierarchical approach reflects a nested logic. The probability of a DS is now given condi-
tionally to the IMs and DSs, as:
$$P(z > j \mid z > j-1,\, x_1, x_2, \ldots, x_m) = \frac{1}{1 + e^{g_j(\mathbf{x})}} \quad (15)$$
This is a formulation that comprises features from the two previous approaches. The $g_i$ expressions are similar to the ones in (9), hence the same number of coefficients as in nominal SR should be determined, whereas concurrently the concept of DS ordering is explicitly enforced.
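A brief sketch of the hierarchical logic of relation (15): each conditional exceedance is a logistic function, and unconditional exceedance probabilities follow from the chain rule, so nesting is enforced by construction. The coefficient values are illustrative assumptions:

```python
import numpy as np

# Hierarchical sketch: each conditional of relation (15) is logistic in the
# log-IM, and unconditional exceedances follow from the chain
# P(z > j) = prod_i P(z > i | z > i-1). Coefficients are illustrative.
def cond_prob(x, a, b):
    return 1.0 / (1.0 + np.exp(a * x + b))  # the 1/(1 + e^{g_j}) form of (15)

x = np.linspace(-4.0, 2.0, 50)              # log-IM grid
g = [(-1.0, 2.0), (-1.5, 1.0)]              # assumed (slope, intercept) per level

exceed1 = cond_prob(x, *g[0])               # P(z > 1 | x)
exceed2 = exceed1 * cond_prob(x, *g[1])     # P(z > 2 | x), nested by construction
```

Since each conditional factor lies in (0, 1), the product is always below the previous exceedance curve, so crossings are impossible regardless of the individual coefficients.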
2.3 Kernelized softmax regression
The developed formulation of linear SR in the previous sections can be expanded to facilitate
nonlinear data discrimination. To accomplish this, a nonlinear mapping should be defined from
the original space of the dataset to a new space, where the linear model performs more
efficiently. An elegant way towards this, is to define the inner product in the new space by
means of a kernel function. Technically, linear SR can be seen as a special case of the family of polynomially kernelized SR, which employs a kernel of the form $K(\mathbf{x}, \mathbf{y}) = (1 + \langle \mathbf{x}, \mathbf{y} \rangle)^r$ for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^m$, with r = 1. The quadratic kernel is accordingly obtained for r = 2, the cubic kernel for r = 3, etc. Another versatile and useful inner product is defined by the Gaussian kernel or radial basis function, $K(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\|\mathbf{x} - \mathbf{y}\|^2 / \gamma^2\right)$, $\gamma \in \mathbb{R}^+$. Relation (9) admits the following modification [10]:
$$g_j(\mathbf{x}) = \sum_{i=1}^{n} a_i^{(j)} K(\mathbf{x}^{(i)}, \mathbf{x}) \quad (16)$$
Nonlinear kernels can be integrated in any of the SR variants discussed previously. To avoid
overfitting when using nonlinear kernels, the loss function should often be supplied with L2- or
L1-norm regularizers. For more details on the strengths and weaknesses of different
regularization functions the interested reader may consult [11].
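A minimal sketch of relation (16) with a Gaussian (RBF) kernel; the stored training points and coefficients below are random placeholders, standing in for the values an actual training procedure would produce:

```python
import numpy as np

# Sketch of relation (16) with a Gaussian (RBF) kernel. The stored training
# points and the a_i^(j) coefficients are random placeholders standing in for
# the values an actual training procedure would produce.
def rbf_kernel(u, v, gamma=1.0):
    return np.exp(-np.sum((u - v) ** 2) / gamma**2)

rng = np.random.default_rng(2)
X_train = rng.normal(size=(20, 2))          # training IM points x^(i), m = 2
a_j = rng.normal(size=20)                   # coefficients a_i^(j) for one class j

def g_j(x):
    # g_j(x) = sum_i a_i^(j) K(x^(i), x)
    return float(sum(a * rbf_kernel(xi, x) for a, xi in zip(a_j, X_train)))

print(g_j(np.zeros(2)))
```

Substituting these kernelized scores for the affine $g_i$ in the softmax of (8) leaves the rest of the estimation machinery, including the cross-entropy loss, unchanged.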
3 Generalized fragility functions
In order to capture the longitudinal data dependencies, a network with Markovian state evolution is introduced, shown in Figure 1. The network consists of two node categories: the X-nodes and the Z-nodes, corresponding to IM and DS random variables respectively. More formally, for all $i = 0, 1, 2, \ldots, T$, $Z_i, X_i$ are random variables, such that $Z_i \in S$ and $X_i \in \Omega$, where $S = \{1, 2, \ldots, |S|\}$ and $\Omega \subseteq \mathbb{R}^m$. Assuming that the initial state $z_0$ is known, the joint probability mass function is [11]:

$$f(z_{1:T} \mid x_{1:T}, z_0) = f(z_1 \mid z_0, x_1)\, f(z_2 \mid z_1, x_2) \cdots f(z_T \mid z_{T-1}, x_T) = \prod_{t=1}^{T} \prod_{k=1}^{|S|} \prod_{j=1}^{|S|} \left(p_{j|k}\right)^{II(z_t = j,\, z_{t-1} = k)} \quad (17)$$
where II is the indicator function. Considering the negative log-likelihood of (17), the corre-
sponding loss function reads:
Figure 1: Markovian network representation
$$L = -\sum_{i=1}^{n} \sum_{t=1}^{T} \sum_{k=1}^{|S|} \sum_{j=1}^{|S|} II\!\left(z_t^{(i)} = j,\, z_{t-1}^{(i)} = k\right) \ln p_{j|k}^{(i)} = \sum_{k=1}^{|S|} L_k \quad (18)$$
Table 1: Analysis and modeling parameters

Magnitude (-): 5.7 – 7.5
Distance (km): 5.0 – 40.0
Vs30 ground velocity (m/s): 305.0
Beam length (m): 6.5
Column length (m): 4.2
Beam section: W21x83
Column section: W24x84
Yield strength (MPa): 235.0
Elastic modulus (GPa): 200.0
Hardening (%): 0.5
Concrete slab height (cm): 20.0
20.0
The subscript k in the loss functions of (18) indicates the conditioning state of the respective cross-entropy. In addition, the form of the loss function indicates that regardless of the length of the sampled earthquake sequences, the parameter estimation process can be decomposed into |S| subproblems that can be processed in parallel. The transition probabilities from each state k to all j, $p_{j|k}$, form the generalized fragility functions, modeled according to (8) or any other of the presented variations.
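The decomposition implied by (18) can be sketched by splitting sampled sequences into per-state transition subsets, each of which can then train its own softmax model independently; the sequence data and transition dynamics below are synthetic assumptions:

```python
import numpy as np

# Sketch of the decomposition implied by (18): transitions are grouped by their
# previous state k, yielding |S| independent training subsets, one softmax
# model each. Sequences and the degrading dynamics are synthetic assumptions.
rng = np.random.default_rng(3)
S, n_seq, T = 3, 200, 5
x = rng.normal(size=(n_seq, T))                  # log-IM of each event
z = np.zeros((n_seq, T + 1), dtype=int)          # states, z[:, 0] = intact
for t in range(T):
    jump = (x[:, t] > 0.8).astype(int)           # assumed damage-accumulation rule
    z[:, t + 1] = np.minimum(z[:, t] + jump, S - 1)

subsets = {k: [] for k in range(S)}              # one subproblem per state k
for i in range(n_seq):
    for t in range(T):
        subsets[z[i, t]].append((x[i, t], z[i, t + 1]))

counts = [len(subsets[k]) for k in range(S)]
print(counts)                                    # transitions available per k
```

Each subset (x, next-state) then feeds a softmax fit exactly as in section 2.2, so the |S| fits can run in parallel, and the Markov property means no further history needs to be stored.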
4 Numerical results
In this section, the dynamic response of a one-bay three-story internal moment resisting frame
is considered, under simulated seismic excitations. Details for the earthquake model imple-
mented can be found in [15,16]. Dynamic time-history analyses are conducted using OpenSees.
All beams and columns are geometrically linear force-based elements with proper fiber dis-
cretization, modeled with a bilinear material law with kinematic hardening, simply simulating
the uniaxial steel constitutive behavior chosen. In Table 1, the simulated ground motion and
analysis parameters are shown in detail. For the earthquake magnitude and distance from the
site, uniform distributions with bounds shown in Table 1 are sampled. The DSs are based on
different levels of maximum interstorey drifts. For illustration purposes, only three states are
chosen, shown in Table 2.
Fragility analysis for a 1D IM case is performed first, with the Peak Ground Acceleration
(PGA) (in g) chosen as the IM. A total of 50 earthquakes and respective structural analyses are used in this example for deriving the fragility functions.
Table 2: Damage states based on maximum drift

1: Minor damage: < 0.5%
2: Major damage: 0.5 – 1.5%
3: Near collapse: > 1.5%
Figure 2: Moment resisting frame fragility curves for one IM and three DSs (left). Crossing of fragility functions
for classical maximum likelihood approach without common variance (middle). Crossing avoided without
common variance assumption using nominal SR (right).
Results for this example are presented in Figure 2. Nominal and hierarchical approaches
are almost identical, whereas ordinal fragility curves for different DS exceedances are only
differentiated through a shift in their mean values. In all cases crossings are avoided, and as
explained in section 2.2.1, in the nominal and hierarchical cases this is accomplished without
the common variance assumption. In Figure 2, the remedy of fragility functions crossing is
also demonstrated. The same example is evaluated, and fragility functions based on one of the leading methodologies [13], without the common variance assumption, are compared with the presented formulation results.
The difference among the three approaches can also be seen in Figure 3, where fragility surfaces for 2 IMs are demonstrated, for 500 analyses. In this example, the chosen D595 (in sec) duration is a typical measure for the significant duration of seismic excitations, denoting the time interval between 5% and 95% of the Arias intensity. In Figure 3, we can again observe that nominal and hierarchical approaches are almost identical and different from the ordinal case. This difference is more obvious in the x-y plane, in Figure 4, where the boundaries, for $g_i = 0$, are shown. The parallel boundaries among DSs imposed by the ordinal assumption seem to not be accurate when the model is free to optimize all the coefficients.
Finally, we show the analysis regarding the generalized fragility functions. The generalized
formulation is applied based on 1000 series events, with each series consisting of 5 earthquakes,
corresponding to 5000 analyses in total. In Figure 5, the corresponding plots are shown, for
every initial state, based on the nominal SR. In this figure, the diffusion of more severely dam-
aged points into former less damaged regions is observed, and the corresponding linear bound-
aries defining the fragility functions are drawn.
Figure 3: Moment resisting frame fragility surfaces for two IMs and three DSs.
Nominal (left), Ordinal (middle), Hierarchical (right).
Figure 4: Separating boundaries of DSs based on fragility analysis.
Nominal (left), Ordinal (middle), Hierarchical (right).
Figure 5: Conditional separating boundaries of DSs based on generalized fragility analysis with nominal SR, for previous states 1, 2 and 3.
Note that by determining all separating boundaries, all fragility functions have essentially been
obtained, since their optimal coefficients have been computed.
5 Conclusions
This work presents a complete and systematic framework for fragility analysis, based on SR.
This choice is driven by the fact that the softmax function turns out to describe the probabilities
of the DSs, when the distributions of the DS conditional data belong to the exponential family
of distributions. The presented methodology can be implemented in three alternative ways ei-
ther under (i) nominal, (ii) ordered or (iii) hierarchical assumptions regarding the DSs, and in
all cases fragility functions crossings are avoided. The numerical investigation supports the fact
that the nominal and hierarchical approaches are more flexible and obtain similar results, and
do not need the common variance assumption to avoid crossings, as the ordinal approach does.
Finally, a theoretical formulation for generalized fragility functions based on Markovian as-
sumptions is derived. Generalized fragility functions provide the transition probabilities from
every to all DSs, given the IMs, allowing for long-term structural damage predictions. Numer-
ical examples and connections to current practice are analyzed and discussed for all presented
formulations.
References
[1] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.
[2] C. A. Cornell and H. Krawinkler. "Progress and challenges in seismic performance
assessment." PEER Center News 3.2: 1-3, 2000.
[3] P. McCullagh. "Generalized linear models." European Journal of Operational
Research 16.3: 285-292, 1984.
[4] M. Grigoriu. "To scale or not to scale seismic ground-acceleration records." Journal of
Engineering Mechanics 137.4: 284-293, 2010.
[5] F. Jalayer, and C. A. Cornell. "Alternative nonlinear demand estimation methods for
probability-based seismic assessments." Earthquake Engineering & Structural
Dynamics 38.8: 951-972, 2009.
[6] M. I. Jordan. "Why the logistic function? A tutorial discussion on probabilities and neural
networks.", Computational Cognitive Science, Technical Report 9503, MIT, 1995.
[7] N. S. Kwong, A. K. Chopra, and R. K. McGuire. "A framework for the evaluation of
ground motion selection and modification procedures." Earthquake Engineering &
Structural Dynamics 44.5: 795-815, 2015.
[8] D. Lallemant, A. Kiremidjian, and H. Burton. "Statistical procedures for developing
earthquake damage fragility curves." Earthquake Engineering & Structural
Dynamics 44.9: 1373-1389, 2015.
[9] N. Luco and P. Bazzurro. "Does amplitude scaling of ground motion records result in
biased nonlinear structural drift responses?." Earthquake Engineering & Structural
Dynamics 36.13: 1813-1835, 2007.
[10] S. Marsland. Machine learning: an algorithmic perspective. CRC Press, 2015.
[11] K. P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
[12] K. Porter, R. Kennedy, and R. Bachman. "Creating fragility functions for performance-
based earthquake engineering." Earthquake Spectra 23.2: 471-489, 2007.
[13] M. Shinozuka, M. Q. Feng, H. Kim, T. Uzawa, and T. Ueda. "Statistical analysis of
fragility curves." Technical Report, MCEER-03-002, 2003.
[14] D. Vamvatsikos. "Analytic Fragility and Limit States [P (EDP| IM)]: Nonlinear Dynamic
Procedures." Encyclopedia of Earthquake Engineering: 87-94, 2015.
[15] C. Vlachos, K. G. Papakonstantinou, and G. Deodatis. "A multi-modal analytical non-
stationary spectral model for characterization and stochastic simulation of earthquake
ground motions." Soil Dynamics and Earthquake Engineering 80, 177-191, 2016.
[16] C. Vlachos, K. G. Papakonstantinou, and G. Deodatis. "Predictive model for site specific
simulation of ground motions based on earthquake scenarios." Earthquake Engineering
& Structural Dynamics, under review, 2017.
[17] A. J. Yazdi, T. Haukaas, T. Yang, P. Gardoni. "Multivariate fragility models for
earthquake engineering." Earthquake Spectra 32.1: 441-461, 2016.
2028
... Multivariate fragility assessment has been demonstrated in several previous research studies for the effects of seismic loading on civil structures [47][48][49][50][51]. In particular, studies by Sousa et al. [52,53] and Andriotis and Papakonstantinou [54,55] utilized multiple IMs and multiple damage states to better characterize the response of civil structures to seismic action. However, there are no published studies (to the authors' knowledge) that have yet applied a multivariate approach to structural-fire fragility analysis for civil structures, much less for bridges. ...
Article
This study demonstrates the development of bivariate fragility curves for a fire-exposed simple-span overpass bridge prototype with composite steel plate girders. The fire and resulting heat transfer to the girders are modeled using the computationally efficient Modified Discretized Solid Flame (MDSF) model, developed previously by the authors. Several input parameters to model the thermo-structural response of the girders (particularly the material strength and applied loading) are stochastically selected for Monte Carlo Simulation via Latin Hypercube Sampling. The thermo-structural response of the composite steel girders is calculated using uncoupled, reduced-form finite element analyses. The structural-fire response of the prototype bridge girder is iteratively calculated for a large suite (∼1,000 iterations) of fire scenarios and then categorized into escalating damage levels based on the maximum deflection reached during the fire event. The damage from each fire scenario is correlated to two measures of fire hazard intensity: the peak heat release rate, and the total thermal energy imparted along the girder span. Bivariate fragility curves that correlate the two intensity measures to each damage level via a cumulative normal distribution function are obtained for the prototype bridge with span length varying from 12.2 to 42.7 m. An illustrative example uses these fragility curves to assess fire-induced damage for the two overpass spans in the MacArthur Maze interchange in Oakland, CA that collapsed due to a 2007 tanker truck fire.
... Cimellaro and Reinhorn (2011) presented a multiple failure function that contains maximum inter-story drift and acceleration of the structure and fragility curves of a hospital structure. Other studies addressed the fragility and reliability of structures accounting for multiple failure functions (Lu and Zhang, 2011;Wang et al., 2012;Andriotis and Papakonstantinou, 2017;Risi et al., 2019). On the other hand, structural control systems, due to practical limitations, may have the capacity thresholds themselves; these have not typically been considered as failure modes in fragility analysis. ...
Article
This paper presents a procedure to develop fragility curves of structures equipped with TMD considering multiple failure functions. The failure criteria considered are maximum inter-story drift ratio as a safety criterion, maximum absolute acceleration as a convenience criterion and TMD stroke length. The relationship between intensity measure and responses of the structure was assumed to follow the power-law model, and a regression analysis was used to estimate its properties. A nonlinear eight-story shear building subjected to near-fault earthquakes was used for the numerical studies. Fragility curves using multiple and single failure functions for an uncontrolled structure and a structure equipped with optimal TMDs were developed. Numerical analysis showed that using multiple failure functions led to increasing the fragility when compared with using the single failure function for both the uncontrolled and controlled structures. However, TMDs slightly reduced the seismic fragility and have the capability to improve the reliability of the structure. Also, it was found that the fragility was significantly influenced by the values of the capacity thresholds of both the acceleration of the structure and TMD stroke length, which should be selected by considering the target performance and application of the structure and control device.
... Excluding the O-nodes (observation nodes), the remaining DMM network constitutes a direct generalization of softmax fragility, consisting of X- and Z-nodes denoting IMs and DSs, respectively. In this case, DSs are considered to be fully observable at each time step t (Andriotis & Papakonstantinou, 2017). The entire network, on the other hand, including the O-nodes, defines a DHMM representation that does not necessarily require complete information over the states (Andriotis & Papakonstantinou, 2018b). ...
Conference Paper
Full-text available
Extended and generalized fragility functions support estimation of multiple damage state probabilities, based on intensity measure spaces of arbitrary dimensions and longitudinal state dependencies in time. The softmax function provides a consistent mathematical formulation for fragility analysis, thus, fragility functions are herein developed along the premises of softmax regression. In this context, the assumption that a lognormal or any other cumulative distribution function should be used to represent fragility functions is eliminated, multivariate data can be easily handled, and fragility crossings are avoided without the need for any parametric constraints. Adding to the above attributes, generalized fragility functions also provide probabilistic transitions among possible damage states, which can be either hidden or explicitly defined, thus allowing for long-term performance predictions. Long-term considerations enable the study and probabilistic quantification of the cumulative deterioration effects caused by multiple sequential events, while hidden damage states are described as states that are either not deterministically observed or determined, or that are initially even completely unknown and undefined based on relevant engineering demand parameters. Although hidden damage state cases are, therefore, frequently encountered in structural performance assessments, methods to untangle their longitudinal dynamics are elusive in the literature. In this work, various techniques are developed for fragility analysis with hidden damage states and long-term deterioration effects, from Markovian probabilistic graphical models to more flexible deep learning architectures with recurrent units.
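The core of the softmax formulation can be sketched in a few lines: damage-state probabilities are obtained by normalizing exponentiated linear scores of the intensity-measure vector, so they sum to one by construction and fragility crossings cannot occur without any parametric constraints. The weights `W` and biases `b` below are hypothetical placeholders, not fitted values:

```python
import numpy as np

def softmax_fragility(im, W, b):
    """Damage-state probabilities from softmax regression over a
    (possibly multivariate) intensity-measure vector `im`.
    W has one row of weights per damage state."""
    z = W @ im + b               # one linear score per damage state
    z -= z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()           # probabilities sum to one

# Three damage states, two intensity measures (hypothetical parameters):
W = np.array([[-2.0, -1.0],
              [ 0.5,  0.2],
              [ 1.5,  0.8]])
b = np.array([2.0, 0.0, -3.0])

p = softmax_fragility(np.array([0.3, 0.6]), W, b)       # moderate shaking
p_high = softmax_fragility(np.array([2.0, 2.0]), W, b)  # stronger shaking
```

With weights ordered as above, probability mass shifts toward the more severe states as the intensity measures grow, without any cumulative-distribution-function assumption on the fragility shape.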
Article
Full-text available
This paper describes statistical procedures for developing earthquake damage fragility functions. Although fragility curves abound in earthquake engineering and risk assessment literature, the focus has generally been on the methods for obtaining the damage data (i.e., the analysis of structures), and little emphasis is placed on the process for fitting fragility curves to this data. This paper provides a synthesis of the most commonly used methods for fitting fragility curves and highlights some of their significant limitations. More novel methods are described for parametric fragility curve development (generalized linear models and cumulative link models) and non-parametric curves (generalized additive model and Gaussian kernel smoothing). An extensive discussion of the advantages and disadvantages of each method is provided, as well as examples using both empirical and analytical data. The paper further proposes methods for treating the uncertainty in intensity measure, an issue common with empirical data. Finally, the paper describes approaches for choosing among various fragility models, based on an evaluation of prediction error for a user-defined loss function. Copyright © 2015 John Wiley & Sons, Ltd.
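As one example of the parametric fits discussed, a lognormal fragility curve (equivalently, a probit-link GLM in log-IM) can be estimated by maximum likelihood from binary exceedance observations. The sketch below generates synthetic data with an assumed median of 0.8 and dispersion of 0.4; it illustrates the fitting idea only, not the paper's specific procedures:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic observations: y = 1 if the damage state is exceeded at IM level im.
im = rng.uniform(0.1, 2.0, size=500)
y = (rng.standard_normal(500) < (np.log(im) - np.log(0.8)) / 0.4).astype(float)

def neg_log_lik(theta):
    """Negative log-likelihood of a lognormal fragility curve,
    i.e. a GLM with a probit link on log(IM)."""
    mu, beta = theta
    p = norm.cdf((np.log(im) - mu) / beta)
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=(0.0, 1.0), method="Nelder-Mead")
mu_hat, beta_hat = fit.x  # log-median and dispersion estimates
```

With 500 observations the estimates should land near the generating values (log-median ln 0.8 and dispersion 0.4); the same likelihood machinery extends to the cumulative link models the paper describes for multiple damage states.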
Article
Full-text available
This study develops a framework to evaluate ground motion selection and modification (GMSM) procedures. The context is probabilistic seismic demand analysis, where response history analyses of a given structure, using ground motions determined by a GMSM procedure, are performed in order to estimate the seismic demand hazard curve (SDHC) for the structure at a given site. Currently, a GMSM procedure is evaluated in this context by comparing several resulting estimates of the SDHC, each derived from a different definition of the conditioning intensity measure (IM). Using a simple case study, we demonstrate that conclusions from such an approach are not always definitive; therefore, an alternative approach is desirable. In the alternative proposed herein, all estimates of the SDHC from GMSM procedures are compared against a benchmark SDHC, under a common set of ground motion information. This benchmark SDHC is determined by incorporating a prediction model for the seismic demand into the probabilistic seismic hazard analysis calculations. To develop an understanding of why one GMSM procedure may provide more accurate estimates of the SDHC than another procedure, we identify the role of ‘IM sufficiency’ in the relationship between (i) bias in the SDHC estimate and (ii) ‘hazard consistency’ of the corresponding ground motions obtained from a GMSM procedure. Finally, we provide examples of how misleading conclusions may potentially be obtained from erroneous implementations of the proposed framework. Copyright © 2014 John Wiley & Sons, Ltd.
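Numerically, a seismic demand hazard curve is obtained by convolving a demand prediction model with the site hazard curve, lambda_D(d) = sum_i P(D > d | im_i) * dlambda_IM(im_i). A sketch with an assumed power-law hazard curve and a lognormal demand prediction model (every parameter value below is illustrative):

```python
import numpy as np
from scipy.stats import norm

# Hazard curve: annual rate of exceeding each IM level (illustrative
# power-law form), discretized over a range of IM levels.
im = np.linspace(0.05, 2.0, 200)
lam_im = 1e-4 * im**-2.5        # lambda(IM > im)
d_lam = -np.diff(lam_im)        # occurrence rate of each IM bin
im_mid = 0.5 * (im[:-1] + im[1:])

def p_exceed(d, im_level, median_a=0.04, b=1.0, beta=0.5):
    """P(D > d | IM): lognormal demand model with power-law median."""
    return 1.0 - norm.cdf((np.log(d) - np.log(median_a * im_level**b)) / beta)

# Seismic demand hazard curve at a few demand levels:
demand_levels = np.array([0.005, 0.01, 0.02, 0.04])
sdhc = np.array([(p_exceed(d, im_mid) * d_lam).sum() for d in demand_levels])
```

The resulting curve is non-increasing in the demand level and bounded above by the total hazard rate, two sanity checks worth applying to any benchmark SDHC computation.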
Book
A Proven, Hands-On Approach for Students without a Strong Statistical Foundation. Since the best-selling first edition was published, there have been several prominent developments in the field of machine learning, including the increasing work on the statistical interpretations of machine learning algorithms. Unfortunately, computer science students without a strong statistical background often find it hard to get started in this area. Remedying this deficiency, Machine Learning: An Algorithmic Perspective, Second Edition helps students understand the algorithms of machine learning. It puts them on a path toward mastering the relevant mathematics and statistics as well as the necessary programming and experimentation.
New to the Second Edition:
- Two new chapters on deep belief networks and Gaussian processes
- Reorganization of the chapters to make a more natural flow of content
- Revision of the support vector machine material, including a simple implementation for experiments
- New material on random forests, the perceptron convergence theorem, accuracy methods, and conjugate gradient optimization for the multi-layer perceptron
- Additional discussions of the Kalman and particle filters
- Improved code, including better use of naming conventions in Python
Suitable for both an introductory one-semester course and more advanced courses, the text strongly encourages students to practice with the code. Each chapter includes detailed examples along with further reading and problems. All of the code used to create the examples is available on the author's website.
Article
A predictive stochastic model is developed based on regression relations that inputs a given earthquake scenario description and outputs seismic ground acceleration time histories at a site of interest. A bimodal parametric non-stationary Kanai-Tajimi (K-T) ground motion model lies at the core of the proposed predictive model. The functional forms that describe the temporal evolution of the K-T model parameters can effectively represent strong non-stationarities of the ground motion. Fully non-stationary ground motion time histories can be generated through the powerful Spectral Representation Method. A Californian subset of the available NGA-West2 database is used to develop and calibrate the predictive model. Samples of the model parameters are obtained by fitting the K-T model to the database records, and the resulting marginal distributions of the model parameters are efficiently described by standard probability models. The samples are translated to the standard normal space and linear random-effect regression models are established relating the transformed normal parameters to the commonly used earthquake scenario defining predictors: moment magnitude Mw, closest-to-site distance Rrup, and average shear-wave velocity VS30 at a site of interest. The random-effect terms in the developed regression models can effectively model the correlation among ground motions of the same earthquake event, in parallel to taking into account the location-dependent effects of each site. For validation purposes, simulated acceleration time histories based on the proposed predictive model are compared with recorded ground motions. In addition, the median and median plus/minus one standard deviation elastic response spectra of synthetic ground motions, pertaining to a variety of different earthquake scenarios, are compared to the associated response spectra computed by the NGA-West2 ground motion prediction equations and found to be in excellent agreement.
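A sample realization from an amplitude-modulated Kanai-Tajimi model can be generated with the Spectral Representation Method as a sum of cosines with random phases. The sketch below uses a simple separable (uniformly modulated) form with assumed parameter values, which is a simplification of the fully non-stationary model described above:

```python
import numpy as np

rng = np.random.default_rng(2)

def kanai_tajimi_psd(w, wg=15.0, zg=0.6, s0=0.01):
    """One-sided Kanai-Tajimi power spectral density (illustrative values
    for the ground frequency wg, damping zg, and intensity s0)."""
    num = wg**4 + 4.0 * zg**2 * wg**2 * w**2
    den = (wg**2 - w**2) ** 2 + 4.0 * zg**2 * wg**2 * w**2
    return s0 * num / den

def simulate(t, n_freq=256, w_max=100.0):
    """Sample realization via the Spectral Representation Method, with an
    exponential-type amplitude modulating function for non-stationarity."""
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)  # independent random phases
    g = (t / 4.0) * np.exp(1.0 - t / 4.0)        # amplitude modulation
    amp = np.sqrt(2.0 * kanai_tajimi_psd(w) * dw)
    x = (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)
    return g * x

t = np.linspace(0.0, 20.0, 2000)
acc = simulate(t)  # one synthetic acceleration time history
```

Each call with fresh random phases yields a new realization with the same target spectral content; the fully non-stationary version replaces the fixed PSD with time-varying model parameters as the paper describes.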
Article
A novel stochastic earthquake ground motion model is formulated in association with physically interpretable parameters that are capable of effectively characterizing the complex evolutionary nature of the phenomenon. A multi-modal, analytical, fully non-stationary spectral version of the Kanai-Tajimi model is introduced achieving a realistic description of the time-varying spectral energy distribution. The functional forms describing the temporal evolution of the model parameters can efficiently model highly non-stationary power spectral characteristics. The analysis space, where the analytical forms describing the evolution of the model parameters are established, is the energy domain instead of the typical use of the time domain. This space is used in conjunction with a newly defined energy-associated amplitude modulating function. The Spectral Representation Method can easily support the simulation of sample model realizations. A subset of the NGA database is selected in order to test the efficiency and versatility of the stochastic model. The complete selected database is thoroughly analyzed and sample observations of the model parameters are obtained by fitting the evolutionary model to its records. The natural variability of the entire set of seismic ground motions is depicted through the model parameters, and their resulting marginal probability distributions together with their estimated covariance structure effectively describe the evolutionary ground motion characteristics of the database and facilitate the characterization of the pertinent seismic risk. For illustration purposes, the developed evolutionary model is presented in detail for two example NGA seismic records together with their respective deterministic model parameter values.
Article
Current estimates of seismic structural fragilities are commonly made on the basis of finite collections of actual or virtual ground-acceleration records that are scaled to have the same scalar intensity measure, for example, peak ground acceleration or pseudospectral acceleration. This paper models seismic ground-acceleration records by samples of Gaussian processes X(t) and constructs scaled versions X̃(t) of X(t) by following current procedures. This analysis shows that X̃(t) and X(t) have different probability laws, so that fragilities on the basis of X̃(t) provide limited if any information on the seismic performance of structural systems, that is, fragilities on the basis of X(t). The usefulness of current fragility estimates on the basis of scaled seismic ground-acceleration records is questionable, and scaling ground motions is not recommended.
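The change of probability law under scaling is easy to reproduce numerically: once each Gaussian sample is scaled to a common peak value, the peak becomes deterministic, so the scaled ensemble cannot be Gaussian. A minimal sketch with white-noise samples standing in for the process X(t):

```python
import numpy as np

rng = np.random.default_rng(3)

# Samples of a (discretized) zero-mean Gaussian process X(t):
n_samples, n_time = 2000, 200
x = rng.standard_normal((n_samples, n_time))

# Scale each record so its peak absolute value matches a common target,
# mimicking scaling to a fixed peak ground acceleration:
target = 1.0
x_scaled = x * (target / np.abs(x).max(axis=1, keepdims=True))

# The peak of every scaled record is now identical: its distribution has
# collapsed to a point mass, unlike the dispersed peaks of the originals.
peaks = np.abs(x_scaled).max(axis=1)
```

The unscaled peaks vary from record to record while the scaled peaks do not, which is the essence of the paper's argument that fragilities based on scaled records characterize a different process.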
Article
This paper employs a logistic regression technique to develop multivariate damage models. The models are intended for performance assessments that require the probability that structural components are in one of several damage states. As such, the developments represent an extension of the univariate fragility functions that are omnipresent in contemporary performance-based earthquake engineering. The multivariate logistic regression models that are put forward here eliminate several of the limitations of univariate fragility functions. Furthermore, the new models are readily substituted for existing fragility functions without any modifications to the existing performance-based analysis methodologies. To demonstrate the proposed modeling approach, a large number of tests of reinforced concrete shear walls are employed to develop multivariate damage models. It is observed that the drift ratio and aspect ratio of concrete shear walls are among the parameters that are most influential on the damage probabilities.
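Multinomial (softmax) logistic regression of this kind can be fit directly by gradient descent on the negative log-likelihood. The sketch below uses synthetic drift-ratio and aspect-ratio data with an assumed generating rule; it illustrates the model class only, not the paper's shear-wall dataset:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic observations: features are drift ratio and aspect ratio,
# labels are one of three damage states (illustrative generating rule).
n = 1500
drift = rng.uniform(0.0, 0.03, n)
aspect = rng.uniform(0.5, 3.0, n)
score = 200.0 * drift + 0.5 * aspect + 0.3 * rng.standard_normal(n)
y = np.digitize(score, [2.0, 4.0])                  # damage states 0, 1, 2

feats = np.column_stack([drift, aspect])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize
X = np.column_stack([feats, np.ones(n)])            # add bias column

# Fit multinomial logistic regression by gradient descent:
W = np.zeros((3, 3))                                # states x features
Y = np.eye(3)[y]                                    # one-hot labels
for _ in range(2000):
    z = X @ W.T
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    P = np.exp(z)
    P /= P.sum(axis=1, keepdims=True)               # softmax probabilities
    W -= 0.5 * ((P - Y).T @ X) / n                  # likelihood gradient step

accuracy = float((P.argmax(axis=1) == y).mean())
```

Because the fitted model returns a full probability vector over the damage states for any feature vector, it slots into performance-based assessments exactly where univariate fragility functions are used today.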