Article

Effects of Two Alternative Representations of Ground Motion Uncertainty on Probabilistic Seismic Demand Assessment of Structures

Authors: Fatemeh Jalayer and James L. Beck

Abstract

A probabilistic representation of the entire ground-motion time history can be constructed based on a stochastic model that depends on seismic source parameters. An advanced stochastic simulation scheme known as Subset Simulation can then be used to efficiently compute the small failure probabilities corresponding to structural limit states. Alternatively, the uncertainty in the ground motion can be represented by adopting a parameter (or a vector of parameters) known as the intensity measure (IM) that captures the dominant features of the ground shaking. Structural performance assessment based on this representation can be broken down into two parts, namely, the structure-specific part requiring performance assessment for a given value of the IM, and the site-specific part requiring estimation of the likelihood that ground shaking with a given value of the IM takes place. The effect of these two alternative representations of ground-motion uncertainty on probabilistic structural response is investigated for two hazard cases. In the first case, these two approaches are compared for a scenario earthquake event with a given magnitude and distance. In the second case, they are compared using a probabilistic seismic hazard analysis to take into account the potential of the surrounding faults to produce events with a range of possible magnitudes and distances. The two approaches are compared on the basis of the probabilistic response of an existing reinforced-concrete frame structure, which is known to have suffered shear failure in its columns during the 1994 Northridge Earthquake in Los Angeles, California. Copyright © 2007 John Wiley & Sons, Ltd.
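As a rough illustration of the IM-based decomposition described in the abstract, the sketch below (plain Python/NumPy) numerically convolves a site hazard curve with a conditional demand model to obtain the mean annual frequency of exceeding a demand threshold. All numbers (hazard-curve coefficients k0 and k, demand-model parameters a, b, beta, and the drift threshold) are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical IM hazard curve: mean annual frequency of exceeding Sa = x (power-law fit)
k0, k = 1.0e-4, 2.5                        # illustrative coefficients, not site-specific values
lambda_im = lambda x: k0 * x**(-k)         # lambda(IM > x)

# Hypothetical conditional demand model P(EDP > y | IM = x): lognormal in EDP
# with median a*x^b and dispersion beta (the usual "cloud" assumption)
a, b, beta = 0.02, 1.0, 0.4
def p_exceed_given_im(y, x):
    return 1.0 - norm.cdf((np.log(y) - np.log(a * x**b)) / beta)

# Numerical convolution: lambda(EDP > y) = sum_i P(EDP > y | IM = x_i) * |d lambda_IM(x_i)|
x = np.logspace(-2, 1, 400)                # Sa grid in g
d_lambda = -np.diff(lambda_im(x))          # occurrence rate of IM falling in each bin
x_mid = 0.5 * (x[:-1] + x[1:])

y = 0.01                                   # demand threshold (e.g., 1% drift), illustrative
lambda_edp = np.sum(p_exceed_given_im(y, x_mid) * d_lambda)
print(f"lambda(EDP > {y}) ~ {lambda_edp:.2e} per year")
```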


... However, it may still be difficult to find records for high intensity levels (corresponding to very large magnitudes) without scaling. The alteration induced by record scaling is widely discussed and analysed in the literature (Der Kiureghian and Fujimura 2009; Jalayer and Beck 2008; Dávalos and Miranda 2019b). It is noteworthy that the problem of scaling is overcome by using a stochastic earthquake input, although other sources of approximation might be introduced in this way. ...
... Similarly to Au and Beck (2003), Jalayer and Beck (2008) and Vetter and Taflanidis (2012), the seismic scenario is described by a single source model, characterized by two main random seismological parameters, namely the moment magnitude M, and the epicentral distance R. A Gutenberg-Richter recurrence law (Kramer 2003) (Eq. 6) is used to describe the magnitude-frequency relationship of the seismic source: ...
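For illustration only, the following sketch draws magnitude samples from a doubly truncated Gutenberg–Richter distribution by inverse-transform sampling; the b-value and the magnitude bounds are arbitrary placeholders rather than the parameters used in the cited studies.

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=5.0, m_max=8.0, rng=None):
    """Inverse-transform sampling from a doubly truncated Gutenberg-Richter law.

    CDF: F(m) = (1 - 10**(-b (m - m_min))) / (1 - 10**(-b (m_max - m_min)))
    """
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * c) / beta

if __name__ == "__main__":
    m = sample_gr_magnitudes(10000, b=1.0, m_min=5.0, m_max=8.0, rng=1)
    print("mean sampled magnitude:", m.mean())   # small magnitudes dominate, as expected
```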
... In particular, for each earthquake sample a Gaussian white noise signal is generated and, after being windowed through the envelope functions e(t) (Fig. 2b), its normalized frequency spectrum is applied to the target radiation spectrum (Fig. 2a), thus providing the variability of the energy content within the frequency domain. Such variability is further amplified by the lognormally-distributed multiplicative factor of the radiation spectra, ε_mod, characterised by a unitary median value and a standard deviation σ_ln = 0.5, as proposed by Jalayer and Beck (2008). The resulting overall variability provided by the model is shown in Fig. 3a, in which the spectra of three earthquake samples corresponding to the same pair of magnitude and distance (i.e., m = 6.5 and r = 20 km) are depicted in different colours. ...
Article
Full-text available
Current practical approaches for probabilistic seismic performance assessment of structures rely on the concept of intensity measure (IM), which is used to decompose the problem into hazard analysis and conditional seismic demand analysis. These approaches are potentially more efficient than traditional Monte-Carlo based ones, but the performance estimates can be negatively influenced by inadequate setup choices. These include, among the others, the number of seismic intensity levels to consider, the number of structural analyses to be performed at each intensity level, and the lognormality assumption for the conditional demand. This paper investigates the accuracy and effectiveness of a widespread IM-based method for seismic performance assessment, multi-stripe analysis (MSA), through an extensive parametric study carried out on a three-story steel moment-resisting frame, by considering different setup choices and various engineering demand parameters. A stochastic ground motion model is employed to describe the seismic hazard and the spectral acceleration is used as intensity measure. The results of the convolution between the seismic hazard and the conditional probability of exceedance obtained via MSA are compared with the estimates obtained via Subset Simulation, providing a reference solution. The comparison gives useful insights on the influence of the main parameters controlling the accuracy and precision of the IM-based method. It is shown that, with the proper settings, MSA can provide risk estimates as accurate as those obtained via Subset Simulation, at a fraction of the computational cost.
... The record-to-record variability is also well-accounted for. The AS model does not account for the spectral variability and the record-to-record variability needs to be improved by adding some external source of variability (Jalayer and Beck 2008). However, this model can be used in a wide range of seismic scenarios. ...
... Following the Atkinson-Silva model (Atkinson and Silva 2000; Boore 2003), the ground motion is obtained by modulating the white noise in time by means of the function e(t), which yields the time function z(t) = e(t)w(t). The amplitude and the frequency content are obtained by multiplying its Fourier transform z(f) (normalized to have a mean square amplitude of unity) by the radiation spectrum ε_mod S(f), where S(f) is a deterministic function of the frequency f while ε_mod is a scaling factor describing the amplitude variability (Jalayer and Beck 2008). The final ground motion acceleration a(t) is obtained by the inverse Fourier transform of ε_mod S(f) z(f). ...
... The soil amplification factor V(f) is taken according to (Boore and Joyner 1997) for generic soil (V_S,30 = 310 m/s). The model-error parameter ε_mod is an additional lognormal random variable (μ_ln ε = 0, σ_ln ε = 0.5), introduced according to Jalayer and Beck (2008) to increase the record-to-record variability. As for the envelope function e(t), it is given by ...
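The excerpts above describe the usual stochastic simulation sequence: windowed Gaussian white noise, normalization of its Fourier amplitude, shaping by a radiation spectrum, amplification by the lognormal factor ε_mod, and an inverse transform back to the time domain. The sketch below reproduces only that sequence of operations; the envelope and the spectrum S(f) are deliberately simplistic placeholders, not the Atkinson–Silva (2000) / Boore (2003) forms or the soil amplification V(f) used in the cited works.

```python
import numpy as np

def simulate_ground_motion(duration=20.0, dt=0.01, sigma_ln_eps=0.5, rng=None):
    rng = np.random.default_rng(rng)
    n = int(duration / dt)
    t = np.arange(n) * dt

    # 1) Gaussian white noise windowed by an envelope e(t) (placeholder shape, peak near t = 2 s)
    w = rng.standard_normal(n)
    e = (t / 2.0) * np.exp(1.0 - t / 2.0)
    z = e * w

    # 2) Normalize the Fourier amplitude of z(t) to unit mean-square amplitude
    Z = np.fft.rfft(z)
    f = np.fft.rfftfreq(n, dt)
    Z /= np.sqrt(np.mean(np.abs(Z) ** 2))

    # 3) Shape by a radiation spectrum S(f) (placeholder omega-squared-like shape, NOT Atkinson-Silva)
    f0 = 1.0                                      # illustrative corner frequency [Hz]
    S = f**2 / (1.0 + (f / f0) ** 2)

    # 4) Lognormal model-error factor eps_mod (median 1, sigma_ln = 0.5, as in Jalayer and Beck 2008)
    eps_mod = np.exp(sigma_ln_eps * rng.standard_normal())

    # 5) Inverse transform of eps_mod * S(f) * Z(f) gives the acceleration time history
    a = np.fft.irfft(eps_mod * S * Z, n)
    return t, a

t, acc = simulate_ground_motion(rng=0)
print("peak amplitude of simulated record:", np.abs(acc).max())   # arbitrary units (placeholder spectrum)
```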
Article
Full-text available
Viscous dampers are dissipation devices widely employed for seismic structural control. To date, the performance of systems equipped with viscous dampers has been extensively analysed only by employing deterministic approaches. However, these approaches neglect the response dispersion due to the uncertainties in the input as well as the variability of the system properties. Some recent works have highlighted the important role of these seismic input uncertainties in the seismic performance of linear and nonlinear viscous dampers. This study analyses the effect of the variability of damper properties on the probabilistic system response and risk. In particular, the paper aims at evaluating the impact of the tolerance allowed in devices’ quality control and production tests in terms of variation of the exceedance probabilities of the Engineering Demand Parameters (EDPs) which are most relevant for the seismic performance. A preliminary study is carried out to relate the variability of the constitutive damper characteristics to the tolerance limit allowed in tests and to evaluate the consequences on the device’s dissipation properties. In the subsequent part of the study, the sensitivity of the dynamic response is analysed by harmonic analysis. Finally, the seismic response sensitivity is studied by evaluating the influence of the allowed variability of the constitutive damper characteristics on the response hazard curves, providing the exceedance probability per year of EDPs. A set of linear elastic systems with different dynamic properties, equipped with linear and nonlinear dampers, are considered in the analyses, and subset simulation is employed together with the Markov Chain Monte Carlo method to achieve a confident estimate of small exceedance probabilities.
... Stochastic methods can create a large number of records rapidly, helping to remove potential gaps and biases in empirical data. Jalayer and Beck (2008) provide an in-depth comparison between the two approaches for hazard and risk assessment. ...
... The synthetic ground motion record is finally computed through an inverse Fourier transformation of the signal z(f) · ε_mod · A(f), where A(f) denotes the radiation spectrum. Following Jalayer and Beck (2008), a random amplification factor, ε_mod, is introduced to amplify the radiation spectrum. This factor is assumed to be lognormally distributed with parameters λ = μ_log(ε_mod) = 0 and ζ = σ_log(ε_mod) = 0.5. ...
... The underlying idea is to express the small failure probability as a product of larger probabilities conditional on some intermediate events. This allows converting the simulation of a rare event into a sequence of simulations of more frequent events (see also Jalayer and Beck 2008;Jalayer et al. 2010). ...
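A toy illustration of that product-of-conditional-probabilities idea is sketched below for a standard normal input vector; it is a simplified variant of Subset Simulation (fixed proposal scale, fixed conditional probability p0), not the implementation used in the cited studies.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, threshold=0.0, rng=None):
    """Toy Subset Simulation: estimate P(g(U) > threshold) for U ~ N(0, I).

    Intermediate levels sit at the (1 - p0) empirical quantile of the current
    samples; conditional samples are produced by a random-walk Metropolis step
    restricted to the current intermediate failure region.
    """
    rng = np.random.default_rng(rng)
    n_seed = int(p0 * n)

    u = rng.standard_normal((n, dim))
    y = np.array([g(ui) for ui in u])
    p = 1.0

    for _ in range(20):                                  # cap on the number of levels
        b = np.sort(y)[-n_seed]                          # intermediate threshold
        if b >= threshold:                               # final level reached
            return p * np.mean(y > threshold)
        p *= p0
        idx = np.argsort(y)[-n_seed:]                    # seeds inside the intermediate region
        seeds_u, seeds_y = u[idx], y[idx]

        chains_u, chains_y = [], []                      # grow each seed into a chain of length 1/p0
        for u0, y0 in zip(seeds_u, seeds_y):
            for _ in range(int(1 / p0)):
                cand = u0 + rng.standard_normal(dim)     # random-walk proposal
                # accept w.r.t. the standard normal density ...
                if rng.random() < np.exp(0.5 * (u0 @ u0 - cand @ cand)):
                    y_cand = g(cand)
                    if y_cand > b:                       # ... and stay inside the conditioning event
                        u0, y0 = cand, y_cand
                chains_u.append(u0.copy()); chains_y.append(y0)
        u, y = np.array(chains_u), np.array(chains_y)

    return p * np.mean(y > threshold)

# toy limit state: sum of 10 standard normals exceeding 10 (exact probability ~ 7.8e-4)
pf = subset_simulation(lambda x: x.sum(), dim=10, n=2000, rng=0)
print(f"estimated probability ~ {pf:.1e}")
```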
... In the simulation-based approach, consideration of RTR variability requires the adoption of a suitable stochastic ground motion model (see the chapter "Stochastic Ground Motion Simulation") conditional on seismic source parameters (see Au and Beck 2003b; Jalayer and Beck 2008). Accordingly, the consideration of RTR variability together with other sources of uncertainties will provide a complete probabilistic treatment of the structural response (see, e.g., Jalayer et al. 2007b, where RTR variability and modeling uncertainty are considered together in the seismic reliability analysis of RC frames based on subset simulation). ...
... The source-based ground motion model proposed in Atkinson and Silva (2000) and widely employed in the literature (e.g. Au and Beck 2003, Dall'Asta et al. 2017, Jalayer and Beck 2008, Scozzese et al. 2019) is used in this study. This model, combined with the stochastic point source simulation method (Boore 2003), is employed to generate ground motion time series according to the samples of M and R. The ground-motion record-to-record variability is accounted for by means of a Gaussian white noise process and a lognormal scale factor (Jalayer and Beck 2008) applied to the target Fourier spectrum. ...
Chapter
Recent seismic events that occurred in Central Italy have drawn attention to the resilience of the Italian road network, which is characterised by a significant number of old reinforced concrete bridges and viaducts. In this context, the fragility assessment of existing bridges is crucial, since their collapse or loss of functionality after earthquakes may lead to significant economic and social consequences. As part of a more general study oriented towards characterising the fragility level of the Central Italy bridge stock, this work focuses on the real case study of the Chiaravalle viaduct, which may be representative of a widespread class of reinforced concrete bridges in Italy: the viaduct is a continuous multi-span bridge consisting of precast simply supported V-shaped beams connected by a continuous slab. A numerical model is developed in order to capture the failure mechanisms most likely to occur for this bridge typology subjected to seismic actions. A probabilistic assessment of the seismic response of the bridge is carried out by performing multiple stripe analysis, considering a seismic scenario consistent with the Chiaravalle site. Fragility curves, built by accounting for the main demand parameters and the relative limit states, provide useful insights about structural deficiencies of the system at hand.
Keywords: bridges, reinforced concrete viaducts, fragility assessment, fragility curves, structural numerical model
... Few studies explicitly compare conditional and unconditional methods. Both Jalayer and Beck (2008) and Franchin et al. (2012) explored the effects of using conditional and unconditional approaches on seismic risk estimates for a reinforced-concrete frame structure, whilst Azar and Dabaghi (2021) and Bijelic et al. (2019) examined unconditional hazard and risk assessments, respectively, using the CyberShake software (Graves et al., 2011). ...
... The conditional hazard analyses rely on the standard deviation (sigma) of each GMM to introduce variability in results when estimating Sa, whilst the unconditional approach already models the variability in ground motions by using every simulated record in the hazard analysis. On top of this, a model-error parameter (ε_mod) is introduced to the stochastic model, as proposed by Jalayer and Beck (2008). This parameter scales the radiation spectra calculated by the stochastic model in order to account for modelling uncertainty. ...
Preprint
Full-text available
Accurately characterizing ground motions is crucial for estimating probabilistic seismic hazard and risk. The growing number of ground motion models, and the increased use of simulations in hazard and risk assessments, warrants a comparison between the different techniques available to predict ground motions. This research investigates how the use of different ground-motion models can affect seismic hazard and risk estimates. For this purpose, a case study is considered with a circular seismic source zone and two line sources. A stochastic ground-motion model is used within a Monte Carlo analysis to create a benchmark hazard output. This approach allows the generation of many records, helping to capture details of the ground-motion median and variability which a ground motion prediction equation may fail to properly model. A variety of ground-motion models are fitted to the simulated ground motion data, with fixed and magnitude-dependent standard deviations (sigmas) considered. These include classic ground motion prediction equations (with basic and more complex functional forms), and a model using an artificial neural network. Hazard is estimated from these models, and the approach is then extended to a risk assessment for an inelastic single-degree-of-freedom system. Only the artificial neural network produces accurate hazard results below an annual frequency of exceedance of 1×10⁻³ yr⁻¹. This has a direct impact on risk estimates, with ground motions from large, close-to-site events having more influence on results than expected. Finally, an alternative to ground-motion modelling is explored through an observation-based hazard assessment which uses recorded strong motions to directly quantify hazard.
... Three popular procedures are available to estimate P(EDP > edp | IM) using nonlinear dynamic analysis: cloud analysis [39], multiple stripe analysis (MSA) [40], and incremental dynamic analysis (IDA) [41]. IDA successively scales the selected GMs to increasing amplitude using the same GM suite [42,43], whereas MSA uses different GMs at different hazard levels. ...
... GM uncertainty is commonly described using either stochastic GM models or intensity measures (IM) [40]. A stochastic GM model modifies a white noise sequence based on expected spectral and temporal ground motion features. ...
Article
Full-text available
Current fully probabilistic approaches to performance-based earthquake engineering describe structures' behavior under a wide range of seismic hazard levels. These approaches require a detailed representation of ground motion (GM) uncertainty at all considered hazard levels, yet different GM selection methods lead to different estimations of structural performance. This paper presents a holistic review of the current practices in GM representation and selection for structural demand analysis through a performance-based lens. The multidisciplinary nature of GM selection, ranging from earth science to engineering seismology and statistics, has created a preponderance of literature to find the best practice for probabilistic assessment of structures in terms of computational efficiency and statistical accuracy. Many of these studies focus individually on GM selection or structural analysis, and the relatively scarce review papers either focus on code-based GM selection or do not specifically address risk-based evaluations by overlooking the interaction between GM selection and structural analysis. This paper aims to aid researchers in selecting appropriate GMs as part of a statistically valid and robust probabilistic demand analysis without performing an exhaustive literature review. Discussion on the available computational tools and their trade-offs for risk-based assessment of single structures is provided. While the problem-specific nature of GM selection means that no pre-selected set of GM/IM is applicable to all cases, the comprehensive narrative of this paper is expected to aid analysts in reaching a more informed decision.
... Deep learning is a subfield of artificial intelligence that is used to perform complex mappings through representation learning, such as image-to-image [31] and sequence-to-sequence [32] translations. Among different modeling techniques, encoder-decoder models are prevalent in sequence-to-sequence translation, where the encoder encodes the first dimension while the decoder reconstructs the target dimension from encoded features. The main advantage of encoder-decoder architecture is that it allows the mapping of sequences with different lengths. ...
... 2.1 Obtaining PSDM from dynamic analysis: There are various procedures to estimate the PSDM from nonlinear dynamic analysis, including cloud analysis [39], multiple stripe analysis (MSA) [40], and incremental dynamic analysis (IDA) [41]. IDA successively scales the selected GMs to increasing amplitude using the same GM suite, whereas MSA uses different GMs at different hazard levels. ...
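For context, the "cloud" variant of these procedures typically fits a power-law demand model, ln(EDP) = ln a + b ln(IM) + ε, by regression on unscaled analysis results. The sketch below performs that fit on synthetic data standing in for nonlinear dynamic analysis output; all parameter values are illustrative placeholders.

```python
import numpy as np
from scipy.stats import norm

# Synthetic "cloud" of (IM, EDP) pairs standing in for nonlinear dynamic analysis results
rng = np.random.default_rng(0)
im = np.exp(rng.uniform(np.log(0.05), np.log(1.5), size=100))   # e.g., Sa(T1) in g
true_a, true_b, true_beta = 0.02, 1.1, 0.35
edp = true_a * im**true_b * np.exp(true_beta * rng.standard_normal(100))

# Cloud-analysis PSDM: ln(EDP) = ln(a) + b ln(IM) + eps,  eps ~ N(0, beta^2)
X = np.column_stack([np.ones_like(im), np.log(im)])
coef, *_ = np.linalg.lstsq(X, np.log(edp), rcond=None)
ln_a, b = coef
resid = np.log(edp) - X @ coef
beta = resid.std(ddof=2)                                        # dispersion of the demand model

print(f"median model: EDP = {np.exp(ln_a):.3f} * IM^{b:.2f},  dispersion beta = {beta:.2f}")

# P(EDP > y | IM = x) then follows from the lognormal assumption:
p = 1 - norm.cdf((np.log(0.01) - (ln_a + b * np.log(0.3))) / beta)
print(f"P(EDP > 0.01 | IM = 0.3 g) ~ {p:.2f}")
```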
Article
Full-text available
Probabilistic seismic demand analysis (PSDA) is the most time- and effort-intensive step in risk-based assessment of the built environment. A typical PSDA requires subjecting the structure to a large number of ground motions and performing nonlinear dynamic analysis, where the analysis dimension and effort increase substantially for large-scale assessments such as community-level evaluations. This study presents a deep learning framework to estimate seismic demand models from nonlinear static (i.e., pushover) analysis, which is computationally inexpensive. The proposed architecture leverages an encoder-decoder model with customized training schedules and a loss function capable of determining demand model parameters and error. Furthermore, the framework facilitates the seamless incorporation of structural modeling uncertainties in PSDA. The proposed framework is then applied to a building inventory consisting of 720 concrete frames to examine its generalizability and accuracy. The results show that the deep learning architecture can estimate demand models with an R² of 84% using a test-to-train ratio of unity. In addition, the average prediction error is less than 3% and 6% for the demand model slope and intercept parameters, respectively, translating into an accurate estimation of fragility functions with a median error of 5.7%, 6.9%, and 6.8% for the immediate occupancy, life safety, and collapse prevention damage states. Lastly, the framework can efficiently propagate structural uncertainties into seismic demand models, capturing the implicit relationship between the frames' nonlinear characteristics and the resultant fragility functions.
... The source-based ground motion model proposed in [24] is used, which has already been employed in several applications [19], [20], combined with the stochastic point source simulation method [25], to generate ground motion time series. The ground-motion record-to-record variability is accounted for by means of a Gaussian white noise process and a lognormal scale factor [26] applied to the target Fourier spectrum. The resulting IM hazard curve is discretised into 10 intervals, identifying the 11 IM stripes adopted to perform MSA, as shown in Figure 2. ...
... As a result of the structural analysis performed according to MSA, at each IM level the fraction of ground motions satisfying the exceedance condition can be computed. A parametric lognormal cumulative distribution function is generally used to model fragility functions, which is a reasonable assumption for a large set of cases [26], [27]: ...
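A minimal sketch of fitting such a lognormal fragility to multiple-stripe output by maximum likelihood is given below; the exceedance counts per stripe follow a binomial likelihood, and the numbers are invented for illustration rather than taken from the cited study.

```python
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

# Illustrative MSA output: IM levels, number of records per stripe, number of exceedances
im = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.3, 1.6, 2.0, 2.5])
n_rec = np.full_like(im, 20, dtype=int)
n_exc = np.array([0, 0, 1, 2, 5, 9, 12, 15, 18, 19, 20])

def neg_log_lik(params):
    theta, beta = params                       # fragility median and lognormal dispersion
    p = norm.cdf(np.log(im / theta) / beta)    # lognormal CDF fragility at each stripe
    return -np.sum(binom.logpmf(n_exc, n_rec, p))

res = minimize(neg_log_lik, x0=[0.8, 0.4], bounds=[(1e-3, None), (1e-3, None)])
theta_hat, beta_hat = res.x
print(f"fragility median ~ {theta_hat:.2f} g, dispersion ~ {beta_hat:.2f}")
```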
... Fig. 6 shows the effect of the epicentral distance r (2, 30, 50 km) for a fixed magnitude M = 6. It is worth noting that the present seismic scenario is also consistent with several other works [26,65] in which the Atkinson-Silva ground motion model is adopted, and further details about the parameters used in the present study can be found in Dall'Asta et al. (2017) [2]. ...
... Besides the scenario-related random variables (magnitude and distance), further uncertainties derive from the record-to-record variability expected for ground motions associated with seismic inputs with the same magnitude and distance. This variability is described first by generating the signals as realizations of a white-noise process, and then by including an additional source of variability through the multiplicative factor of the radiation spectra, ε_mod, proposed by Jalayer and Beck (2008) [65]. This factor is assumed to be lognormally distributed, with a lognormal mean equal to 0 and a lognormal standard deviation of 0.5. ...
Article
Damping and isolation devices are often employed to control and enhance the seismic performance of structural systems. However, the effectiveness of these devices in mitigating the seismic risk may be significantly affected by manufacturing tolerances, and systems equipped with devices whose properties deviate from the nominal ones may exhibit a performance very different than expected. The paper analyzes this problem by proposing a general framework for investigating the sensitivity of the seismic risk of structural systems with respect to system properties varying in a prescribed range. The proposed framework is based on the solution of a reliability-based optimization (RBO) problem, aimed to search for the worst combination of the uncertain anti-seismic device parameters, within the allowed range of variation, that maximizes the seismic demand hazard. A hybrid probabilistic approach is employed to speed up the reliability analyses required for evaluating the objective function at each iteration of the RBO process. This approach combines a conditional method for estimating the seismic demand at a given intensity level, with a simulation approach for representing the seismic hazard. The proposed method is applied to evaluate the influence of the variability of the properties of linear and nonlinear fluid viscous dampers on the seismic risk of a low-rise steel building. The study results show that the various response parameters considered are differently affected by the damper properties and unveil the capability of the proposed approach to evaluate the potentially worst conditions that jeopardize the system reliability.
... The uncertainty of ground motion excitation and its influence on structural seismic design have been increasingly emphasized in the structural engineering community (Mehanny and Ayoub [55], Ellingwood and Kinali [56], Jalayer and Beck [57]). Accordingly, the probabilistic distribution of DMFV-Sa is examined in this section to quantify the variation of DMFV-Sa. ...
... According to former publications, the probabilistic structural seismic demand could be better modeled by the log-normal distribution model in most cases (Jalayer and Beck [57], Decanini et al. [58], Yi et al. [59]). However, regarding the DMFV-Sa, the normal distribution model seems to be more relevant. ...
Article
This paper aims to establish a regional (Japanese) damping modification model for scaling the 5%-damped vertical seismic response spectra to other damping ratios. In doing so, 3198 strong vertical ground motion (VGM) records are selected from the Japanese seismic database, and their linear elastic response spectra are computed by the Newmark-β algorithm. Taking the 5%-damped vertical response spectra as benchmarks, the vertical spectral damping modification factors (DMFV) for ζ (damping ratio) = 0.5%, 1%, 2%, 3%, 4%, 6%, 7%, 8%, 9%, 10%, 12.5%, 15%, 17.5%, 20%, 25%, 30%, 35% and 40% are calculated. For structural design purposes, the DMFVs of both the vertical pseudo-acceleration response spectra and the vertical absolute acceleration response spectra are calculated. The DMFV spectra have their peaks (for ζ < 5%) or valleys (for ζ > 5%) at T = 0.12 s (here T indicates spectral period), and their values get farther away from unity as earthquake magnitude or VGM epicenter distance increases. The effect of earthquake hypocenter depth, local site condition (represented by Vs30) and peak ground acceleration on DMFV generally shows no pattern. For each ζ, the mean DMFV-lgT curves are simulated by highly accurate piecewise functions. Moreover, it is revealed that the normal distribution model is feasible for representing the probabilistic properties of DMFV, especially in the short-to-medium period region where the skewness of the DMFV distribution is not quite pronounced. The standard deviations of DMFV are modeled by piecewise functions as well, yet it is emphasized that the variation of a damping-scaled spectral ordinate is jointly controlled by the probabilistic properties of DMFV and the uncertainties of the 5%-damped spectral ordinates.
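For reference, a damping modification factor at a given period is simply the ratio of spectral ordinates computed at two damping ratios. The sketch below computes such a ratio with a basic Newmark average-acceleration integrator and a synthetic accelerogram standing in for the recorded VGMs; it illustrates the computation only and does not reproduce the regional model proposed in the paper.

```python
import numpy as np

def newmark_sdof_peak(ag, dt, T, zeta, gamma=0.5, beta=0.25):
    """Peak pseudo-spectral acceleration of a linear SDOF under base acceleration ag
    (Newmark average-acceleration method, unit mass)."""
    w = 2.0 * np.pi / T
    c, k = 2.0 * zeta * w, w**2
    keff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt**2)
    u, v = 0.0, 0.0
    a = -ag[0]                                   # initial acceleration from the equation of motion
    u_max = 0.0
    for p in -ag[1:]:                            # effective force per unit mass at step i+1
        peff = (p
                + (u / (beta * dt**2) + v / (beta * dt) + (1.0 / (2.0 * beta) - 1.0) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = peff / keff
        v_new = (gamma / (beta * dt) * (u_new - u) + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a
        u, v = u_new, v_new
        u_max = max(u_max, abs(u))
    return w**2 * u_max                          # pseudo-spectral acceleration Sa = w^2 * Sd

# Damping modification factor: DMF(T, zeta) = Sa(T, zeta) / Sa(T, 5%)
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(3000) * dt
ag = rng.standard_normal(t.size) * (t / 2.0) * np.exp(1.0 - t / 2.0)   # toy accelerogram
T = 0.5
dmf = newmark_sdof_peak(ag, dt, T, zeta=0.10) / newmark_sdof_peak(ag, dt, T, zeta=0.05)
print(f"DMF(T = {T} s, zeta = 10%) ~ {dmf:.2f}")
```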
... S(f) is a deterministic function of the frequency f, while ε_mod is a random scaling factor describing the amplitude variability [31]. The final ground motion acceleration ...
... The soil amplification factor V(f) is taken according to [32] for generic soil (V_S,30 = 310 m/s). The model-error parameter ε_mod is an additional lognormal random variable (μ_ln ε = 0, σ_ln ε = 0.5), introduced according to Jalayer and Beck [31] to increase the record-to-record variability. As for the envelope function e(t), it is given by ...
Conference Paper
Full-text available
Viscous dampers are energy dissipation devices widely employed for the seismic control of structures. The performance of systems equipped with viscous dampers has been extensively analyzed by employing deterministic approaches. However, these approaches neglect the response dispersion due to the uncertainties in the input as well as in the structural system properties. Some recent works highlighted the important role of these uncertainties in the seismic performance of systems with linear or nonlinear viscous dampers. The present study focuses on the uncertainty in the damper properties and it aims at evaluating its influence on the probabilistic response of the damped system. In particular, the variability of the damper properties is assumed to be constrained by the tolerances allowed in qualification and production control tests. A preliminary study on the damper response is carried out to relate the constitutive damper characteristics to the parameters controlled in the experimental tests and to evaluate the consequences of damper parameter variations on the dissipation properties of the device. In the subsequent part of the study, the response hazard curves, providing the relation between the values of the response parameters of interest and the relevant yearly exceedance probability, are evaluated. In the analyses, a simplified structural system is considered, and the Subset Simulation (SS) algorithm is employed together with the Markov Chain Monte Carlo method to achieve a good estimate of small probabilities of exceedance. A sensitivity analysis, considering the expected variations in the damper properties, is finally carried out by employing the Augmented SS method to study the influence of the device acceptance ranges on the hazard curves.
Conference Paper
Viscous dampers are energy dissipation devices widely employed for the seismic control of new and existing building frames. To date, the performance of systems equipped with viscous dampers has been extensively analyzed by employing deterministic approaches neglecting the response dispersion due to the record-to-record variability effects. This paper analyzes the probabilistic seismic performance of building frames equipped with viscous dampers by highlighting the influence of damper properties. In particular, a probabilistic methodology based on response hazard curves is employed to evaluate the effect of the damper nonlinearity on the performance of structural and non-structural components of building frames. The performance variations due to changes in the damper nonlinearity level are evaluated by considering design scenarios corresponding to dampers having different exponents designed to provide the same deterministic performance. Both response statistics at different seismic intensity levels and demand hazard curves for the monitored engineering parameters are employed as performance measures. It is shown that the damper nonlinearity strongly affects the seismic performance and different trends are observed for the various demand parameters of interest. A comparison with code provisions shows that further investigation is necessary to provide more reliable design formulas for the dampers accounting for their nonlinearity level.
... Future earthquake ground motions represent a major source of uncertainty in performance-based earthquake engineering assessments. A rigorous method for representing ground motion uncertainty consists of building a stochastic (i.e., probabilistic) model for the entire ground motion time history (Au and Beck 2003; Jalayer and Beck 2007; Taflanidis and Beck 2009). However, it is common to represent uncertainty in ground motion with a probabilistic model for a parameter, or a vector of a few parameters, related to the ground motion and known as an intensity measure (IM) (Luco and Cornell 2007; Jalayer and Cornell 2009; Goulet et al. 2007). ...
... The integration in Eq. (17) can be carried out by using a standard Monte Carlo simulation scheme. This paper employs the deaggregation of seismic hazard (McGuire 1995; Bazzurro and Cornell 1999) at different levels of ground motion intensity to obtain a joint probability distribution p(M, r) for magnitude and distance (Jalayer and Beck 2007). The stochastic ground motion model proposed by Atkinson and Silva (2000) is used to obtain the PDF p(ẍ_g | M, r) for the ground motion time history given M and r. ...
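A hedged sketch of that Monte Carlo scheme is given below: (M, r) pairs are sampled from a joint distribution (here an entirely hypothetical deaggregation table), a response is generated for each sample, and the exceedance indicator is averaged. The peak_drift function is a placeholder standing in for generating a stochastic record and running the nonlinear analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deaggregation: discrete joint PMF over (magnitude, distance) bins
m_bins = np.array([5.5, 6.0, 6.5, 7.0])
r_bins = np.array([10.0, 20.0, 40.0])                  # km
pmf = np.array([[0.10, 0.08, 0.05],
                [0.15, 0.12, 0.08],
                [0.12, 0.10, 0.08],
                [0.05, 0.04, 0.03]])
pmf = pmf / pmf.sum()

def sample_m_r(n):
    idx = rng.choice(pmf.size, size=n, p=pmf.ravel())
    i, j = np.unravel_index(idx, pmf.shape)
    return m_bins[i], r_bins[j]

def peak_drift(m, r):
    """Placeholder for: generate a stochastic record for (m, r), run nonlinear analysis,
    return the peak drift. Here replaced by a toy lognormal response model."""
    median = 0.002 * np.exp(0.9 * (m - 6.0)) * (20.0 / r) ** 0.5
    return median * np.exp(0.5 * rng.standard_normal())

# Standard Monte Carlo estimate of P(drift > capacity | an event from this hazard)
n, capacity = 20000, 0.01
m, r = sample_m_r(n)
drift = np.array([peak_drift(mi, ri) for mi, ri in zip(m, r)])
print("P(exceedance | event) ~", np.mean(drift > capacity))
```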
Conference Paper
The seismic risk assessment of a structure in performance-based design may be significantly affected by the representation of ground motion uncertainty. The uncertainty in the ground motion is commonly represented by adopting a parameter or a vector of parameters known as the intensity measure (IM). In this work, a new measure, called a sufficiency measure, is derived based on information theory concepts, to quantify the suitability of one IM relative to another in representing ground motion uncertainty. Based on this measure, alternative IMs can be compared in terms of the expected difference in information they provide about a designated structural response parameter. Several scalar IMs are compared in terms of the amount of information they provide about the seismic response of an existing reinforced-concrete frame structure.
... The second class originates from the complex physical mechanism in the propagation/generation of earthquakes, which precludes the complete description of the seismic excitation through the parametrized explanatory variables [21,24,25]. It accounts for the response variability that cannot be described through the explanatory variables, and depending on the modeling approach for describing the seismic excitation [45], it might correspond to latent features of the excitation model, or to a high-dimensional stochastic sequence. Ultimately, this uncertainty class needs to be treated as inherent randomness in the simulation model output given the values of the seismic hazard explanatory variables. ...
... Arguably, the most comprehensive (and resource intensive) method for propagating both the uncertainty in the ground motion representation and in the structural model is the fully simulationbased approach, whereby both ground motion records and structural realizations are constructed via appropriate stochastic models (for more details see e.g. Beck and Au 2002;Jalayer and Beck 2008;Jalayer, Iervolino, and Manfredi 2010;Papadimitriou, Beck, and Katafygiotis 2001). To achieve lower costs, hybrid schemes can be employed: NDAP schemes are used to handle RTR, coupled with first/ second-order reliability methods, typically the first-order-second-moment approach (FOSM, Baker and Cornell 2008;Celarec and Dolšek 2013;Cornell et al. 2002;Ellingwood and Kinali 2009;Ibarra and Krawinkler 2005;Lee and Mosalam 2005;Liel et al. 2009;Schotanus et al. 2004;Vamvatsikos and Fragiadakis 2010) to treat additional uncertainties. ...
Article
Quantifying the impact of modelling uncertainty on seismic performance assessment of existing buildings is non-trivial when considering the partial information available on material properties, construction details, and the uncertainty in the capacity models. This task is further complicated when uncertainty related to ground motion representation is considered. To address this issue, record-to-record variability, uncertainties in structural model parameters, and fragility model parameters due to limited sample size are propagated herein by employing a nonlinear dynamic analysis procedure based on recorded ground motions. A one-to-one sampling approach is adopted in which each recorded ground motion is paired up with a different structural model realization. Uncertainty propagation is explored by measuring the impact of different sampling techniques, such as Monte Carlo simulation with standard random sampling and Latin Hypercube sampling (with Simulated Annealing) in the presence of three alternative nonlinear dynamic analysis procedures: Incremental Dynamic Analysis (IDA), Modified Cloud Analysis (MCA), and Cloud to IDA (a highly efficient IDA-like procedure). This is all illustrated through application to an existing reinforced-concrete school building in southern Italy. It is shown that with a small subset of records, both MCA and Cloud to IDA can provide reliable structural fragility (and risk) estimates for three considered limit states, comparable to the results of more resource-intensive schemes.
... This type of uncertainty originates from the complex physical mechanism in the propagation/generation of earthquakes, which precludes the complete description of the seismic excitation through parametrized explanatory variables (EVs). 1,19,20 Depending on the approach adopted for modeling the seismic excitation (ground-motion time history in the context of NLRHA), 21 this variability may be expressed as high-dimensional or non-parametric uncertainty. From the perspective of the numerical simulations to predict the engineering demand parameters (EDPs) of interest, this yields simulation models that are stochastic rather than deterministic: the response output for the same explanatory seismic input parameters (e.g., intensity measure vector) will vary across simulations depending on the exact selection of the ground motion according to the underlying aleatoric uncertainty in the hazard description. ...
Article
Full-text available
Modern performance earthquake engineering practices frequently require a large number of time-consuming non-linear time-history simulations to appropriately address excitation and structural uncertainties when estimating engineering demand parameter (EDP) distributions. Surrogate modeling techniques have emerged as an attractive tool for alleviating such high computational burden in similar engineering problems. A key challenge for the application of surrogate models in the earthquake engineering context relates to the aleatoric variability associated with the seismic hazard. This variability is typically expressed as high-dimensional or non-parametric uncertainty, and so cannot be easily incorporated within standard surrogate modeling frameworks. Rather, a surrogate modeling approach that can directly approximate the full distribution of the response output is warranted for this application. This approach needs to additionally address the fact that the response variability may change as the input parameters change, yielding a heteroscedastic behavior. Stochastic emulation techniques have emerged as a viable solution to accurately capture aleatoric uncertainties in similar contexts, and recent work by the second author has established a framework to accommodate this for earthquake engineering applications, using Gaussian Process (GP) regression to predict the EDP response distribution. The established formulation requires, for a portion of the training samples, the replication of simulations for different descriptions of the aleatoric uncertainty. In particular, the replicated samples are used to build a secondary GP model to predict the heteroscedastic characteristics, and these predictions are then used to formulate the primary GP that produces the full EDP distribution. This practice, however, has two downsides: it always requires a minimum number of replications when training the secondary GP, and the information from the non-replicated samples is utilized only for the primary GP. This research adopts an alternative stochastic GP formulation that can address both limitations. To this end, the secondary GP is trained by measuring the square of sample deviations from the mean instead of the crude sample variances. To establish the primitive mean estimates, another auxiliary GP is introduced. This way, information from all replicated and non-replicated samples is fully leveraged for estimating both the EDP distribution and the underlying heteroscedastic behavior, while the formulation accommodates an implementation using no replications. The case study examples using three different stochastic ground motion models demonstrate that the proposed approach can address both aforementioned challenges.
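As a loose, generic sketch of the kind of heteroscedastic Gaussian Process emulation described above (not the authors' formulation), the code below uses scikit-learn to chain an auxiliary GP for a crude mean estimate, a secondary GP trained on (log) squared deviations to predict the input-dependent variance, and a primary GP whose per-sample noise is set from that variance prediction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stochastic simulator: output whose mean AND variance depend on the input x
# (stand-in for "same explanatory parameters, different ground-motion realizations")
def simulator(x):
    return np.sin(3 * x) + (0.05 + 0.3 * x) * rng.standard_normal(x.shape)

x_train = rng.uniform(0, 1, 120)[:, None]
y_train = simulator(x_train[:, 0])

# 1) auxiliary GP: crude mean estimate
gp_mean = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.1)).fit(x_train, y_train)
mu_hat = gp_mean.predict(x_train)

# 2) secondary GP on log squared deviations -> heteroscedastic variance prediction
log_sq_dev = np.log((y_train - mu_hat) ** 2 + 1e-8)
gp_var = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1.0)).fit(x_train, log_sq_dev)
var_hat = np.exp(gp_var.predict(x_train))

# 3) primary GP with input-dependent noise on the diagonal (alpha)
gp_main = GaussianProcessRegressor(RBF(0.2), alpha=var_hat).fit(x_train, y_train)

x_test = np.linspace(0, 1, 5)[:, None]
mean, _ = gp_main.predict(x_test, return_std=True)
noise_std = np.sqrt(np.exp(gp_var.predict(x_test)))
for xi, m, s in zip(x_test[:, 0], mean, noise_std):
    print(f"x = {xi:.2f}:  predicted mean response ~ {m:.2f},  aleatoric std ~ {s:.2f}")
```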
... This limitation stems from multifaceted factors, including but not limited to instrumentation constraints, budgetary limitations, and sporadic occurrence of earthquake events of practical significance. Consequently, an alternative approach involves the simulation of ground motions (Galasso et al., 2013;Jalayer and Beck, 2007). ...
Article
Limited availability of recorded ground motions poses a challenge for reliable probabilistic seismic-hazard analysis (PSHA), even in highly seismic regions like the Western United States. Stochastic ground motions are commonly employed to address this challenge. However, the stochastic ground motion models (GMMs) may not consistently generate ground motions compatible with the site hazard due to their calibration using global data, failing to capture site-specific characteristics adequately. In the absence of recorded motions, physics-informed simulations provide a viable alternative but are deterministic, with limitations of their own that make them challenging to use in support of PSHA. This article introduces a Bayesian framework that combines prior knowledge from a stochastic GMM, calibrated with global data, with site-specific data obtained from deterministic physics-informed simulations. The proposed framework utilizes the Rezaeian–Der Kiureghian (2010) model as the stochastic GMM and incorporates site-specific data from the CyberShake 15.12 study. By updating the mean and variance of the predictive relationships, along with the marginal distribution of the model parameters, through Bayesian inference, this framework allows for the simulation of site-specific ground motions consistent with the site characteristics. The statistics of peak ground acceleration distributions, as well as both the median and variability of the elastic response spectra, obtained from the calibrated stochastic GMM, demonstrate consistency with those derived using GMMs based on the Next Generation Attenuation (NGA) database.
... Two actual records were adopted for validation based on second-order statistics, and the results proved the accuracy and applicability of the fully non-stationary form. In addition, Stewart et al. [45], Mavroeidis and Papageorgiou [46], Jalayer and Beck [47], Gidaris et al. [48], and Kwong et al. [49] also made significant contributions to the development of stochastic earthquake generation and the modelling of non-stationary dynamic properties in this field. ...
... When using PBEE, care must be taken to accurately capture the variability in ground motion (GM), which has been shown to be the main contributor to the response uncertainty [6,7]. A common approach for capturing ground motion uncertainty is the intensity measure approach [8]. This approach uses a parameter called a seismic intensity measure (IM), which is a scalar value or a vector that describes the important features of ground motion. ...
Article
Full-text available
The accuracy of seismic risk-based evaluation depends on the selected seismic intensity measures (IMs). The performance of the selected IM is commonly assessed based on efficiency and sufficiency criteria, which reflect the ability of the IM to reduce epistemic uncertainty and avoid bias, respectively. Although various characteristics of structural dynamic response affect IM performance to different extents, the current literature determines IM suitability based on the overall response. This study examines the impact of two dynamic response characteristics, namely the higher-mode effect and nonlinearity, on the efficiency and sufficiency of a comprehensive set of common and advanced IMs. Two correlation-based measures are then proposed to explicitly evaluate the effect of higher modes and nonlinearity using statistical tests. The results show that the developed statistical measures can explain the inefficiency and insufficiency of an IM based on the dynamic characteristics of the response. In addition, while an IM's efficiency and sufficiency change for different structures and ground motion sets, vector-valued and energy-based scalar IMs provide more consistent performance. Lastly, the impact of higher-mode response and nonlinearity is more pronounced on IM efficiency than on IM sufficiency.
... For the non-stationary seismic excitation description, two popular alternative approaches are considered [59]. The first approach uses a non-stationary stochastic ground motion model to obtain sample realizations of ẍ_g(t). ...
Article
Full-text available
The design of seismic protective devices (SPDs), such as fluid viscous dampers (VDs) and inertial vibration absorbers (IVAs), requires the adoption of appropriate models for: (i) the earthquake excitation description (e.g. stochastic stationary/non-stationary versus recorded ground motions); (ii) the seismic structural response estimation (e.g. linear versus nonlinear/hysteretic); and (iii) the seismic performance quantification (e.g. average response versus risk-based performance description). This paper pursues a novel detailed investigation of the impact of modeling fidelity in the design of SPDs, by examining different combinations of models with different levels of sophistication for each of the aforementioned aspects. In this manner, a large model hierarchy is established, resulting in multiple SPD design variants. A bi-objective optimal design formulation is adopted, considering the competing structural vibration suppression (building performance) and device control forces as distinct performance objectives (POs). Comprehensive comparisons are reported for a 3-storey and a 9-storey steel benchmark building, equipped with distributed VDs in all floors and with different types of single-device IVAs including the tuned-inerter-damper (TID), the tuned-mass-damper-inerter (TMDI) and the tuned-mass-damper (TMD). An innovative methodological approach is established to gauge the impact of the model fidelity by examining the deviation of POs achieved by lower fidelity SPD designs versus the Pareto-optimal fronts corresponding to POs consistent with the higher fidelity assumptions. It is found that the TID is more robust than the TMDI to design modelling assumptions, suggesting that more lightweight IVAs (where the majority of the secondary mass is replaced by the inertance property) relax requirements for high-fidelity models in device tuning. It is further found that large-force VDs installed in each building floor are significantly more robust to the design modelling fidelity than single IVA implementations, at the expense of increased costs to accommodate the larger number of devices. Consideration of structural nonlinear response becomes important in the SPD design when combined with risk-based performance quantification as opposed to average performance (i.e. the popular H2 design). Moreover, the risk-based performance of nonlinear structures becomes sensitive to the use of recorded ground motions (GMs) as opposed to artificial GMs with only time-domain non-stationarity, as well as to the number of recorded GMs used. The study overall stresses that the use of lower fidelity models may provide sub-optimal performance in certain settings, and that comparison across the model hierarchy can be leveraged to obtain key insights into the SPD behavior. Additional key findings pertain to the robustness characteristics of the different types of SPDs to the modeling assumptions utilized for the device design.
... In some works, probabilistic studies are conducted to observe the effect of foundation flexibility of shallow foundations, considering uncertainties of ground motion and system parameters, through numerical modelling (Moghaddasi et al., 2011; Barcena et al., 2007; Hamidpour et al., 2017). In addition, the stochastic nature of ground motions is another factor that significantly influences the reliability of the performance of any structure (Kwon and Elnashai, 2006; Jalayer et al., 2008; Mekki et al., 2016; Sun et al., 2016). Several studies have been reported on uncertainty modelling of earthquake motion and its impact on the seismic response of structures (e.g., Zhang et al., 2008; Behnamfar et al., 2016; Hanciler et al., 2014). ...
Article
Full-text available
Seismic failure of structures supported on pile foundations has revealed the importance of seismic soil-foundation-structure interaction (SSFSI) for ensuring safe design. The uncertainties in subsoil properties and seismic loading may make the problem even more complex. In this context, the present study attempts to assess the seismic reliability of a pile foundation-supported building structure embedded in an inhomogeneous clay layer considering inertial interaction. The shear strength of the clay and the earthquake loading are considered as spatially variable uncertain parameters. A non-linear soil-pile-structure system was assumed, and Monte Carlo simulation (MCS) was adopted to obtain the probabilistic response of the system. The first-order reliability method (FORM) is used for reliability assessment. The study indicates a significant influence of the uncertain parameters on the seismic response of the building structure. Further, the influence of material and load uncertainty parameters on the probabilistic seismic response of a structure designed following the older version of the code is higher than for a counterpart structure designed following the recent version. FORM-based reliability analysis infers that the serviceability criterion may be the governing parameter for pile foundation design. Moreover, the study also indicates that the curvature ductility demand of the pile may be considered another crucial design parameter to assess the reliability of the pile foundation.
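As a small aside on the FORM step mentioned above, the sketch below runs the standard Hasofer–Lind / Rackwitz–Fiessler iteration on a toy linear limit state in standard normal space; it only illustrates the algorithm and has nothing to do with the soil-pile-structure model analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, dim, tol=1e-6, max_iter=100):
    """Hasofer-Lind / Rackwitz-Fiessler iteration in standard normal space.
    Returns the reliability index beta and the design point u*."""
    u = np.zeros(dim)
    for _ in range(max_iter):
        gval, grad = g(u), grad_g(u)
        norm2 = grad @ grad
        u_new = (grad @ u - gval) / norm2 * grad       # HL-RF update formula
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# toy limit state in standard normal space: g(u) = 3 - (u1 + u2)/sqrt(2), exact beta = 3
g = lambda u: 3.0 - (u[0] + u[1]) / np.sqrt(2.0)
grad_g = lambda u: np.array([-1.0, -1.0]) / np.sqrt(2.0)
beta, u_star = form_hlrf(g, grad_g, dim=2)

print(f"beta ~ {beta:.2f},  Pf ~ {norm.cdf(-beta):.2e}")   # ~ 1.35e-3 for this toy case
```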
... From a methodological and computational perspective, different PBEE variants exist, adopting different approaches and models for describing seismic hazard (Goulet et al. 2007;Jalayer and Beck 2008;Taflanidis and Beck 2009;Kohrangi et al. 2016), evaluating nonlinear structural response (Fragiadakis et al. 2006;Haselton et al. 2008;Sullivan et al. 2014) and assessing damages and consequences at the component or system level (Porter et al. 2001;HAZUS 2003;Bai et al. 2009). Across these variants, it is widely acknowledged (FEMA-P-58 2012) that the most detailed and comprehensive formulation is established by using nonlinear response history analysis (NLRHA) to define structural response and assembly-based vulnerability (Porter et al. 2001) estimation to quantify consequences. ...
Article
Full-text available
Performance-based earthquake engineering offers a versatile framework for quantifying the seismic performance of structures. Its implementation requires a comprehensive description of the nonlinear structural behavior, facilitated typically via multiple nonlinear response history analyses (NLRHAs). This burden can be very high when high-fidelity finite element models (FEMs) are used to describe structural response. To alleviate it, approximations are commonly employed, using a moderate number of analyses, or even replacing altogether the NLRHA with a nonlinear static analysis. This contribution explores two alternative paths for accommodating the desired computational efficiency: (a) use of reduced order models that are calibrated to closely match the original FEM; (b) adoption of multi-fidelity Monte Carlo (MFMC) that combines the original FEM to guarantee unbiased predictions and the aforementioned reduced order models to accelerate the Monte Carlo implementation. Advancements are established for the MFMC implementation, in order to accommodate the efficient propagation of the different sources of uncertainty across the estimation of the different seismic performance statistics of interest. The accuracy and computational benefits are illustrated for two benchmark structures over two different output variables: repair cost (resiliency quantification) and embodied energy associated with repairs (sustainability quantification).
... 21,22,24 The RF solution is especially interesting in light of the challenges and difficulties involved in considering the modelling uncertainties through the application of procedures that rely on recorded ground motions. For stochastic ground motions, it is possible to define a probability distribution for the ground motion time history, and thereby the uncertainty in the ground motion representation can be considered in a rigorous and straightforward manner through the application of simulation-based methods (e.g., [25–27]). In the past decade, many researchers (e.g., [28–33]) have proposed methods combining Monte Carlo-type simulations and nonlinear dynamic procedures (for the most part incremental dynamic analysis) based on recorded ground motions to propagate the uncertainties in the representation of the ground motion (which manifests itself as record-to-record variability in structural response). ...
Article
Modelling uncertainty can significantly affect the structural seismic reliability assessment. However, the limit state excursion due to this type of uncertainty may not be described by a Poisson process, as it lacks renewal properties with the occurrence of each earthquake event. Furthermore, considering uncertainties related to ground motion representation by employing recorded ground motions together with modelling uncertainties is not a trivial task. Robust fragility assessment, proposed previously by the authors, employs the structural response to recorded ground motions as data in order to update prescribed seismic fragility models. Robust fragility can also be extremely efficient for considering the structural modelling uncertainties, by creating a dataset of one-to-one assignments of structural model realizations and as-recorded ground motions. This can reduce the computational effort by more than one order of magnitude. However, it should be kept in mind that the fragility concept itself is based on the underlying assumption of Poisson-type renewal. Using the concept of updated robust reliability, considering both the uncertainty in ground motion representation based on as-recorded ground motions and non-ergodic modelling uncertainties, the error introduced into the structural reliability assessment by using the robust fragility is quantified. It is shown through specific application to an existing RC frame that this error is quite small when the product of the time interval and the standard deviation of the failure rate is small, and that it is on the conservative side.
... This approach allows computing the expected response spectrum associated with the target spectral acceleration (Sa) at a single period (0.8 seconds for the frames in this study), employing the knowledge of the magnitude, distance and ε value, given the occurrence of the target Sa. Meanwhile, multiple-stripe analysis, a nonlinear dynamic analysis method, is employed to determine the structural response for a wide range of ground motion intensities, selected using the CMS approach, and to assess multiple performance objectives from the onset of damage through global collapse (Jalayer and Beck, 2008). It is a modified version of incremental dynamic analysis (IDA) (Vamvatsikos and Cornell, 2002) in which the ground motions for each intensity level are selected differently, to provide better-scaled ground motions and a more suitable match to the expected hazard scenario. ...
Thesis
Seismic risk assessment holds an important role in understanding the impact of earthquakes and the need for seismic rehabilitation. This procedure is deemed crucial for regions where, although there is moderate to high seismicity (e.g. Portugal), there is a lack of robust post-earthquake data to derive empirical relationships between damage and cost and thus to develop appropriate risk management decisions from cost-benefit analyses. The available methods and approaches still contain some uncertainties, simplifications and assumptions due to the nature of the peril, and a need for a new methodology in the area has emerged. The monetary loss of moment-resisting reinforced concrete structures is investigated in this thesis while proposing an advanced and innovative assessment methodology. The methodology covers the monetary loss assessment from scratch and forms new damage state definitions and their thresholds. Through a comprehensive damage determination and the corresponding engineering demand parameters at each damage state, new thresholds for columns are calculated under several axial load ratios. Five different columns with different axial load levels, material and geometrical properties are selected from the literature. The detailed numerical models are formed in DIANA and the material properties are tuned against the experimental results in a systematic process which consists of five steps. The models with the calibrated material properties are employed to determine the damage states and correlated with the global and local engineering demand parameters. The proposed methodology is applied to two moment-resisting reinforced concrete frames which are synthetically formed and represent Portuguese constructions. Incremental dynamic analysis is performed in OpenSees using a set of ground motions, 550 records in total. The loss ratio for each structure is computed in terms of two engineering demand parameters which are able to represent local or global behaviors of the structural elements. The loss ratios for each structure are also provided within the thesis.
... An alternative philosophy for describing seismic excitations is to use simulated ground motions (Jalayer and Beck, 2008;Galasso et al., 2013). A specific modeling approach for the latter which has been steadily gaining increasing attention by the structural engineering community (Vetter and Taflanidis, 2014;Broccardo and Der Kiureghian, 2015) is the use of stochastic ground motion models (Rezaeian and Der Kiureghian, 2010; Gavin and Dickinson, 2010;Yamamoto and Baker, 2013;Vlachos et al., 2016;Boore, 2003;Atkinson and Silva, 2000). ...
Conference Paper
The recent advances in computational efficiency and the scarcity/absence of recorded ground motions for specific seismicity scenarios have led to an increasing interest in the use of ground motion simulations for seismic hazard analysis, structural demand assessment through response-history analysis, and ultimately seismic risk assessment. Two categories of ground motion simulations, physics-based and stochastic site-based are considered in this study. Physics-based ground motion simulations are generated using algorithms that solve the fault rupture and wave propagation problems and can be used for simulating past and future scenarios. Before being used with confidence, they need to be validated against records from past earthquakes. The first part of the study focuses on the development of rating/testing methodologies based on statistical and information theory measures for the validation of ground motion simulations obtained through an online platform for past earthquake events. The testing methodology is applied in a case-study utilising spectral-shape and duration-related intensity measures (IMs) as proxies for the nonlinear peak and cyclic structural response. Stochastic site-based ground motion simulations model the time-history at a site by fitting a statistical process to ground motion records with known earthquake and site characteristics. To be used in practice, it is important that the output IMs from the developed time-histories are consistent with these prescribed at the site of interest, something that is not necessarily guaranteed by the current models. The second part of the study presents a computationally efficient framework that addresses the modification of stochastic ground motion models for given seismicity scenarios with a dual goal of matching target IMs for specific structures, while preserving desired trends in the physical characteristics of the resultant time-histories. The modification framework is extended to achieve a match to the full probability model of the target IMs. Finally, the proposed modification is validated by comparison to seismic demand of hazard-compatible recorded ground motions. This study shows that ground motion simulation is a promising tool that can be used for many engineering applications.
... For the case study discussed in this paper these predictive relationships are tuned [54] to provide a compatibility to the regional (Chilean) hazard by establishing a match to regional ground motion prediction equations [34]. Once the stochastic ground motion model is defined, the adoption of probability distributions for the seismological parameters facilitates the desired probabilistic description of the seismic hazard [55]. Within this setting, each consequence measure hr(.), used in Eq. (5) to describe seismic risk, is related to (i) the earthquake performance/losses that can be calculated based on the estimated response of the structure (performance given that some seismic event has occurred), as well as to (ii) ...
Article
Full-text available
The tuned mass-damper-inerter (TMDI) is a recently proposed passive vibration suppression device that couples the classical tuned mass-damper (TMD), comprising a secondary mass attached to the structure via a spring and dashpot, with an inerter. The latter is a two-terminal mechanical device developing a resisting force proportional to the relative acceleration of its terminals by the "inertance" constant. In a number of previous studies, optimally tuned TMDIs have been shown to outperform TMDs in mitigating earthquake-induced vibrations in building structures for the same pre-specified secondary mass. TMDI design in these studies involved simplified modeling assumptions, such as adopting a single performance objective and/or modeling seismic excitation as stationary stochastic process. This paper extends these efforts by examining a risk-informed TMDI optimization, adopting multiple objectives and using response history analysis and probabilistic life-cycle criteria to quantify performance. The first performance criterion, representing overall direct benefits, is the life-cycle cost of the system, composed of the upfront TMDI cost and the anticipated seismic losses over the lifetime of the structure. The second performance criterion, introducing risk-aversion attitudes into the design process, is the repair cost with a specific return period (i.e., probability of exceedance over the lifetime of the structure). The third performance criterion, accounting for practical constraints associated with the size of the inerter and its connection to the structure, is the inerter force with a specific return period. A particular variant of the design problem is also examined by combining the first and third performance criteria/objectives. A case study involving a 21-storey building constructed in Santiago, Chile shows that optimal TMDI configurations can accomplish simultaneous reduction of life-cycle and repair costs. However, these cost reductions come at the expense of increased inerter forces. It is further shown that connecting the inerter to lower floors provides considerable benefits across all examined performance criteria as the inerter is engaged in a more efficient way for the same inerter coefficient and attached mass ratios.
... In establishing the PSDS for a structure, the uncertainties of the structural properties [44,45] and the seismic excitations [46], as well as some other random factors such as inaccuracies of modeling [47] need to be considered. However, it is extremely difficult to accurately model the coupling effects of all these sources of uncertainties [48]. ...
Article
Severe vertical ground motions (VGMs) may lead to detrimental seismic damages of large-span planar steel structures (LSPSSs), thus the inelastic seismic demand of LSPSSs under VGMs needs to be quantified so as to ensure the structural seismic reliabilities. In view that the uncertainties of VGMs greatly influence the seismic responses of LSPSSs, the VGMs-induced seismic demands of LSPSSs are investigated in a probabilistic way in this paper. Based on 680 strong VGM records, the vertical ductility demands (μ) of2,100 equivalent single-degree-of-freedom (ESDF) models representing a series of LSPSSs are computed. It is revealed that the influences of VGM properties, including peak ground acceleration, epicentral distance, hypocenter depth, moment magnitude and local site condition, are tiny and irregular on the values of μ. Accordingly, the probabilistic seismic demand model for LSPSSs under VGMs is established regardless of the VGM properties in this paper. From the 1,428,000 computed values of μ, the VGMs-induced seismic demand of LSPSSs follows a positively skewed probabilistic distribution, and the distribution could be well fitted by the lognormal distribution model. The parameters of the lognormal distribution model for μ are simulated by two elementary functions subsequently, in which the effects of vertical strength reduction factor R, post-yield stiffness ratio α and elastic vibrating period T are accounted for. Using the lognormal model and the proposed functions, a group of probabilistic inelastic seismic demand spectra for LSPSSs under VGMs are generated. The established probabilistic spectra could provide the statistic properties of both the peak and the residual seismic demand of LSPSSs in association with pre-set values of T, R and α. Combining the proposed model with certain seismic hazard models that define the probabilistic characteristics of VGMs, the seismic reliability of LSPSSs could be quantified, and a proper structural seismic performance could be guaranteed.
... [4][5][6] Though the most popular methodology for performing this task is the selection of real (ie, recorded from past events) ground motions, 7-10 potentially scaled based on a target intensity measure (IM), an alternative philosophy is the use of simulated ground motions. 11,12 A specific modeling approach for the latter which has been steadily gaining increasing attention by the structural engineering community, [13][14][15] is the use of stochastic ground motion models. [16][17][18][19][20][21][22] These models are based on a parametric description of the spectral and temporal characteristics of the excitation, with synthetic time-histories obtained by filtering a stochastic sequence through the resultant frequency and time domain modulating functions. ...
Article
Full-text available
A computationally efficient framework is presented for modification of stochastic ground motion models to establish compatibility with the seismic hazard for specific seismicity scenarios and a given structure/site. The modification pertains to the probabilistic predictive models that relate the parameters of the ground motion model to seismicity/site characteristics. These predictive models are defined through a mean prediction and an associated variance, and both these properties are modified in the proposed framework. For a given seismicity scenario, defined for example by the moment magnitude and source‐to‐site distance, the conditional hazard is described through the mean and the dispersion of some structure‐specific intensity measure(s). Therefore, for both the predictive models and the seismic hazard, a probabilistic description is considered, extending previous work of the authors that had examined description only through mean value characteristics. The proposed modification is defined as a bi‐objective optimization. The first objective corresponds to comparison for a chosen seismicity scenario between the target hazard and the predictions established through the stochastic ground motion model. The second objective corresponds to comparison of the modified predictive relationships to the pre‐existing ones that were developed considering regional data, and guarantees that the resultant ground motions will have features compatible with observed trends. The relative entropy is adopted to quantify both objectives, and a computational framework relying on kriging surrogate modeling is established for an efficient optimization. Computational discussions focus on the estimation of the various statistics of the stochastic ground motion model output needed for the entropy calculation.
... 8,9 An alternative philosophy for describing seismic excitations is to use simulated ground motions. 10,11 A specific modeling approach for the latter, which has been steadily gaining increasing attention by the structural engineering community, [12][13][14] is the use of stochastic ground motion models. [15][16][17][18][19][20] These models are based on modulation of a stochastic sequence, through functions (filters) that address spectral and temporal characteristics of the excitation. ...
Article
Full-text available
Stochastic ground motion models produce synthetic time-histories by modulating a white noise sequence through functions that address spectral and temporal properties of the excitation. The resultant ground motions can be then used in simulation-based seismic risk assessment applications. This is established by relating the parameters of the aforementioned functions to earthquake and site characteristics through predictive relationships. An important concern related to the use of these models is the fact that through current approaches in selecting these predictive relationships, compatibility to the seismic hazard is not guaranteed. This work offers a computationally efficient framework for the modification of stochastic ground motion models to match target intensity measures (IMs) for a specific site and structure of interest. This is set as an optimization problem with a dual objective. The first objective minimizes the discrepancy between the target IMs and the predictions established through the stochastic ground motion model for a chosen earthquake scenario. The second objective constraints the deviation from the model characteristics suggested by existing predictive relationships, guaranteeing that the resultant ground motions not only match the target IMs but are also compatible with regional trends. A framework leveraging kriging surrogate modeling is formulated for performing the resultant multi-objective optimization, and different computational aspects related to this optimization are discussed in detail. The illustrative implementation shows that the proposed framework can provide ground motions with high compatibility to target IMs with small only deviation from existing predictive relationships and discusses approaches for selecting a final compromise between these two competing objectives.
... In the original paper [4], SS was developed for estimating reliability of complex civil engineering structures such as tall buildings and bridges at risk from earthquakes. It was applied for this purpose in [5] and [24]. SS and its modifications have also been successfully applied to rare event simulation in fire risk analysis [8], aerospace [39,52], nuclear [16], wind [49] and geotechnical engineering [46], and other fields. ...
Chapter
Full-text available
Rare events are events that are expected to occur infrequently or, more technically, those that have low probabilities (say, order of 10−3 or less) of occurring according to a probability model. In the context of uncertainty quantification, the rare events often correspond to failure of systems designed for high reliability, meaning that the system performance fails to meet some design or operation specifications. As reviewed in this section, computation of such rare-event probabilities is challenging. Analytical solutions are usually not available for nontrivial problems, and standard Monte Carlo simulation is computationally inefficient. Therefore, much research effort has focused on developing advanced stochastic simulation methods that are more efficient. In this section, we address the problem of estimating rare-event probabilities by Monte Carlo simulation, importance sampling, and subset simulation for highly reliable dynamic systems.
... Details about the optimized model may be found in [18]. Once the stochastic ground motion model is optimized the adoption of probability distributions for the seismological parameters facilitates a comprehensive probabilistic description of the seismic hazard [26]. In this context, let θ lying in Θ n   , be the augmented vector of continuous uncertain model parameters with probability density functions (PDFs) denoted as p(θ), where Θ denotes is space of possible parameter-values. ...
Conference Paper
Full-text available
In recent decades, the concept of the passive linear tuned mass-damper (TMD) has been considered as a valid option for vibration suppression of dynamically excited building structures due to its relatively simple design and practical implementation. Conceptually, the TMD comprises a mass attached to the structure whose vibration motion is to be controlled (primary structure) via optimally designed/”tuned” linear spring and viscous damper elements. Aside from the dynamical properties of the primary structure, the effectiveness of the TMD depends heavily on the inertia of the attached mass and on the attributes and nature of the dynamic excitation. For effective control of wind-induced vibrations a TMD mass weighting in between 0.5% to 1% of the total building weight is usually sufficient. However, controlling earthquake induced oscillations in buildings commonly requires a significantly heavier TMD mass. In this respect, recently, a generalization of the passive linear TMD was proposed incorporating an “inerter” device: the tuned mass-damper-inerter (TMDI). The inerter is a two-terminal device developing a resisting force proportional to the relative acceleration of its terminals. The underlying constant of proportionality (inertance) can be up to two orders of magnitude larger than the device physical mass. In this regard, it was shown analytically and numerically that optimally designed TMDI outperforms the classical TMD for a fixed attached mass in terms of relative displacement variance of linear primary structures under stochastic seismic excitations by exploiting the “mass amplification” inerter property. In this work, the optimal risk-informed design of the TMDI for seismic protection of multi-storey buildings in the region of Chile is addressed. Note that the Chilean seismo-tectonic environment is dominated by large magnitude seismic events yielding ground motions of long effective duration whose damage potential can be well reduced by means of TMDs. In this respect, a probabilistic framework is established for design optimization considering seismic risk criteria. Quantification of this risk through response analysis is considered and the seismic hazard is described by a recently developed stochastic ground motion model that offers hazard-compatibility with ground motion prediction equations available for Chile. Multiple criteria are utilized in the design optimization. The main one, representing overall direct benefits, is the life-cycle cost of the system, composed of the upfront TMDI cost and the anticipated seismic losses over the lifetime of the structure. For enhanced decision support, two additional criteria are examined, both represented through some response characteristic with specific probability of exceedance over the lifetime of the structure (therefore corresponding to design events with specific annual rate of exceedance). One such characteristic corresponds to the repair cost, and incorporates risk-averse attitudes into the design process, whereas the other corresponds to the inerter force, which incorporates practical constraints for the force transfer between TMDI and the supporting structure. This ultimately leads to a multi-objective formulation of the design problem. Stochastic simulation is used to estimate all required risk measures, whereas a Kriging metamodel is developed to support an efficient optimization process. 
The results show that the proposed design framework facilitates a clear demonstration of the benefits of the TMDI (over the TMD) as well as the evaluation of the comparative benefits of increasing the mass of the TMD against increasing the inertance of the TMDI.
... The uncertain factors considered in this study include the earthquake-induced ground motion, the spatial variability of the shear strength parameters and the fluctuation of the groundwater level. Among them, the earthquake-induced ground motion at a site in a specified exposure time is characterized as a random variable (Jalayer and Beck, 2008;Juang et al., 2008;Juang et al., 2010), the spatial variability of the shear strength parameters is characterized as a random field (Fenton, 1999;Griffiths and Fenton, 2004;Wang and Cao, 2013;Gong et al., 2014a;Feng and Jimenez, 2014), and the fluctuated groundwater level is treated as a random variable (Kim et al., 2004;Rahardjo et al., 2010;Zhang et al., 2014). The stability analysis within the proposed probabilistic framework is carried out using a finite difference program FLAC version 7.0 (2011). ...
Article
This paper presents a probabilistic approach for seismic stability analysis of a slope at a given site in a specified exposure time. For a probabilistic seismic stability analysis, the ground motion parameter, in terms of the peak ground acceleration (PGA), at a given site in a specified exposure time of interest (say, 30 years) is treated as a random variable, and the PGA distribution at the given site is derived based on the USGS National Seismic Hazard Maps data. Further, the spatial variability of the soil property is simulated herein by a random field, and the fluctuation of the groundwater level is simulated by a random variable. Within the probabilistic framework, a deterministic model for evaluating the slope stability is required; here, a pseudo-static analysis is adopted and implemented through 2D finite difference program FLAC version 7.0. In the face of the uncertainties in the input parameters, the performance or safety of the slope is expressed as a failure probability; within the proposed probabilistic analysis framework, a recently developed sampling method is adopted for the uncertainties propagation through the deterministic solution model. This probabilistic analysis framework is demonstrated with an illustrative example of a two-layer earth slope. Finally, a parametric study is undertaken to investigate how the failure probability of the slope (at a given site in a specified exposure time) is affected by the uncertain factors such as the earthquake-induced ground motion and the spatial variability of soil property. The study results demonstrate the versatility and effectiveness of the proposed framework for probabilistic seismic stability analysis of slope at a given site in a specified exposure time.
... In the original paper [4], SS was developed for estimating reliability of complex civil engineering structures such as tall buildings and bridges at risk from earthquakes. It was applied for this purpose in [5] and [24]. SS and its modifications have also been successfully applied to rare event simulation in fire risk analysis [8], aerospace [39,52], nuclear [16], wind [49] and geotechnical engineering [46], and other fields. ...
Chapter
Full-text available
Rare events are events that are expected to occur infrequently or, more technically, those that have low probabilities (say, order of 10−3 or less) of occurring according to a probability model. In the context of uncertainty quantification, the rare events often correspond to failure of systems designed for high reliability, meaning that the system performance fails to meet some design or operation specifications. As reviewed in this section, computation of such rare-event probabilities is challenging. Analytical solutions are usually not available for nontrivial problems, and standard Monte Carlo simulation is computationally inefficient. Therefore, much research effort has focused on developing advanced stochastic simulation methods that are more efficient. In this section, we address the problem of estimating rare-event probabilities by Monte Carlo simulation, importance sampling, and subset simulation for highly reliable dynamic systems.
... 5, to provide a hazard compatibility by establishing a match to regional ground motion prediction equations. Adoption of probability distributions for the seismological parameters facilitates then a comprehensive probabilistic description of the seismic hazard (Jalayer and Beck 2008). ...
Article
Full-text available
The assessment of the effectiveness of mass dampers for the Chilean region within a multi-objective decision framework utilizing life-cycle performance criteria is considered in this paper. The implementation of this framework focuses here on the evaluation of the potential as a cost-effective protection device of a recently proposed liquid damper, called tuned liquid damper with floating roof (TLD-FR). The TLD-FR maintains the advantages of traditional tuned liquid dampers (TLDs), i.e. low cost, easy tuning, alternative use of water, while establishing a linear and generally more robust/predictable damper behavior (than TLDs) through the introduction of a floating roof. At the same time it suffers (like all other liquid dampers) from the fact that only a portion of the total mass contributes directly to the vibration suppression, reducing its potential effectiveness when compared to traditional tuned mass dampers. A life-cycle design approach is investigated here for assessing the compromise between these two features, i.e. reduced initial cost but also reduced effectiveness (and therefore higher cost from seismic losses), when evaluating the potential for TLD-FRs for the Chilean region. Leveraging the linear behavior of the TLD-FR a simple parameterization of the equations of motion is established, enabling the formulation of a design framework that beyond TLDs-FR is common for other type of linear mass dampers, something that supports a seamless comparison to them. This framework relies on a probabilistic characterization of the uncertainties impacting the seismic performance. Quantification of this performance through time-history analysis is considered and the seismic hazard is described by a stochastic ground motion model that is calibrated to offer hazard-compatibility with ground motion prediction equations available for Chile. Two different criteria related to life-cycle performance are utilized in the design optimization, in an effort to support a comprehensive comparison between the examined devices. The first one, representing overall direct benefits, is the total life-cycle cost of the system, composed of the upfront device cost and the anticipated seismic losses over the lifetime of the structure. The second criterion, incorporating risk-averse concepts into the decision making, is related to consequences (repair cost) with a specific probability of exceedance over the lifetime of the structure. A multi-objective optimization is established and stochastic simulation is used to estimate all required risk measures. As an illustrative example, the performance of different mass dampers placed on a 21-story building in the Santiago area is examined.
Article
Performance-based earthquake engineering (PBEE) is a popular direction in the earthquake community, and at this stage, risk-based PBEE has become mainstream. In the risk-based probabilistic framework, seismic fragility analysis constitutes the most important link, and corresponding research on the mainshock–aftershock sequence has received widespread attention in recent years. Since a mainshock is often accompanied by multiple aftershocks and there is great uncertainty in the vibration characteristics of aftershocks, a seismic fragility analysis of structures under a stochastic mainshock-multi-aftershock sequence is meaningful and necessary. The corresponding questions, such as how to derive the multi-dimensional analytical fragility form under a stochastic mainshock-multi-aftershock sequence and how to correlate multiple intensity measures with multiple demand parameters, still require further investigation. In this paper, a direct analytical derivation of the multi-dimensional seismic fragility spaces of structures under nonstationary stochastic mainshock-multi-aftershock sequences is introduced. The methodology framework, implementation steps, and application examples are also provided in detail. Moreover, two scenarios, the one-mainshock-one-aftershock and one-mainshock-two-aftershocks, are considered, and the obtained multi-dimensional analytical fragility spaces for both scenarios are validated. In general, the matching accuracy of the fragility results for both scenarios is proven to be high, and the direct analytical derivation of the multi-dimensional fragility spaces is validated to be ideally consistent, which further provides a reference for multi-dimensional risk analysis under nonstationary stochastic mainshock-multi-aftershock sequences in future work.
Article
Full-text available
Accurately characterizing ground motions is crucial for estimating probabilistic seismic hazard and risk. The growing number of ground-motion models, and increased use of simulations in hazard and risk assessments, warrants a comparison between the different techniques available to predict ground motions. This research aims at investigating how the use of different ground-motion models can affect seismic hazard and risk estimates. For this purpose, a case study is considered with a circular seismic source zone and two line sources. A stochastic ground-motion model is used within a Monte Carlo analysis to create a benchmark hazard output. This approach allows the generation of many records, helping to capture details of the ground-motion median and variability, which a ground motion prediction equation may fail to properly model. A variety of ground-motion models are fitted to the simulated ground motion data, with fixed and magnitude-dependant standard deviations (sigmas) considered. These include classic ground motion prediction equations (with basic and more complex functional forms), and a model using an artificial neural network. Hazard is estimated from these models and then we extend the approach to a risk assessment for an inelastic single-degree-of-freedom-system. Only the artificial neural network produces accurate hazard results below an annual frequency of exceedance of 1 × 10–3 years⁻¹. This has a direct impact on risk estimates—with ground motions from large, close-to-site events having more influence on results than expected. Finally, an alternative to ground-motion modelling is explored through an observational-based hazard assessment which uses recorded strong-motions to directly quantify hazard.
Article
Full-text available
A novel Bayesian Augmented-Learning framework, quantifying the uncertainty of spectral representations of stochastic processes in the presence of missing data, is developed. The approach combines additional information (prior domain knowledge) of the physical processes with real, yet incomplete, observations. Bayesian deep learning models are trained to learn the underlying stochastic process, probabilistically capturing temporal dynamics, from the physics-based pre-simulated data. An ensemble of time domain reconstructions are provided through recurrent computations using the learned Bayesian models. Models are characterized by the posterior distribution of model parameters, whereby uncertainties over learned models, reconstructions and spectral representations are all quantified. In particular , three recurrent neural network architectures, (namely long short-term memory, or LSTM, LSTM-Autoencoder, LSTM-Autoencoder with teacher forcing mechanism), which are implemented in a Bayesian framework through stochastic variational inference, are investigated and compared under many missing data scenarios. An example from stochastic dynamics pertaining to the characterization of earthquake-induced stochastic excitations even when the source load data records are incomplete is used to illustrate the framework. Results highlight the superiority of the proposed approach, which adopts additional information, and the versatility of outputting many forms of results in a probabilistic manner.
Article
Full-text available
A Bayesian framework to stochastically characterize ground motions even in the presence of missing data is developed. This approach features the combination of seismological knowledge (a priori knowledge) with empirical observations (even incomplete) via Bayesian inference. At its core is a Bayesian neural network model that probabilistically learns temporal patterns from ground motion data. Uncertainties are accounted for throughout the framework. Performance of the approach has been quantitatively demonstrated via various missing data scenarios. This framework provides a general solution to dealing with missing data in ground motion records by providing various forms of representation of ground motions in a probabilistic manner, allowing it to be adopted for numerous engineering and seismological applications. Notably, it is compatible with the versatile Monte Carlo simulation scheme, such that stochastic dynamic analyses are still achievable even with missing data. Furthermore, it serves as a complementary approach to current stochastic ground-motion models in data-scarce regions under the growing interests of PBEE (performance-based earthquake engineering), mitigating the data-model dependence dilemma due to the paucity of data, and ultimately, as a fundamental solution to the limited data problem in data scarce regions.
Article
Full-text available
A consistent seismic hazard and fragility framework considering combined capacity-demand uncertainties is proposed, in light of the probability density evolution method (PDEM). The PDEM has solid theoretical basis in the reliability field, and it is integrated within the performance-based earthquake engineering (PBEE) for hazard-fragility assessment in this paper. During the analysis, the sample sets with different assigned probability are required to determine in advance, and the equivalent extreme events with virtual stochastic process are required to establish for solution. Both the uncertainties of capacity and demand are considered, and a combined performance index (CPI) is defined as concerned physical variable in PDEM, through pushover static and timehistory dynamic analyses. A non-stationary stochastic earthquake model is introduced using spectral representation of random functions, and the real characteristics of ground motions are reflected by one or two variables for each probability space. The peak ground acceleration (PGA) and spectral acceleration of the first period [Sa(T1)] of non-stationary stochastic ground motions are then obtained for each earthquake level, and the equivalent extreme events are also performed to discuss the statistical information of PGA or Sa(T1) through PDEM. The exceeding probability of PGA or Sa(T1) for each earthquake level is acquired, and a connection between the fragility value and hazard extent is built. The final 3D consistent hazard-fragility curves are then given, and the exceeding probability for different limit states, earthquake levels as well as intensity exceeding conditions can be predicted. Moreover, a comparison with the four classic approaches in the state-of-the-art is performed to verify the accuracy of PDEM procedure. In general, the framework avoids the pre-defined lognormal fragility shape and proves the combined efficiency and accuracy with the Monte Carlo simulation (MCS). The consistency from probabilistic hazard to fragility is realized without re-selecting earthquake waves, which is mainly attributed to the application of PDEM and non-stationary ground motions. The proposed framework provides new ideas for the consistent non-parametric hazard and fragility assessment scheme in the PBEE.
Article
This paper investigates the impact of the different sources of excitation variability within the stochastic kriging framework recently developed by the authors for the estimation of the distribution of engineering demand parameters (EDPs) in applications that the seismic hazard is described through stochastic ground motion models. For a given seismic event, described by seismicity characteristics such as the moment magnitude and the distance to source, one can distinguish two type of uncertainties in the excitation description: (g.i) the stochastic sequence utilized within the excitation model; (g.ii) the predictive models relating the ground motion parameters, such as significant duration, arias intensity, or frequency properties of seismic waves, to the seismicity characteristics. The original formulation of the authors treats (g.i) as aleatoric uncertainty and (g.ii) as parametric uncertainty, including the latter in the model parameters for the risk characterization, representing the metamodel input. Implementation establishes the EDP distribution approximation by utilizing a database of EDP estimates for a set of different input parameters, considering replications of these estimates for different stochastic sequences for some small part of this database. The database with replications is first leveraged to approximate the heteroscedastic behavior with respect to the aleatoric uncertainty, and the entire database is subsequently used, coupled with the previous heteroscedasticity approximation, to establish the stochastic kriging predictions for the EDP distribution. For excitation models with a large number of ground motion parameters, the original formulation leads to a significant increase of the input dimensionality for the metamodel development, requiring a larger database for facilitating accurate EDP approximations. Here, an alternative formulation is investigated, considering some (or perhaps even all) of the ground motion parameters to be part of the aleatoric uncertainty of the excitation description. It is shown that careful selection is needed for the exact ground motion parameters that can be treated as part of this aleatoric uncertainty, to accommodate an accurate overall EDP approximation. Similarities are discussed for applications that consider the description of the seismic hazard through the selection of ground motions based on intensity measures. An extensive validation of this new approach is presented considering two different stochastic ground motion models.
Article
Seismic isolation is considered an effective solution to protect buildings and related content from earthquakes, and consequently reduce seismic losses. However, the overall reliability levels achieved on these systems by following the design rules suggested by codes are not uniform and they may be strongly influenced by some choices made in the structural design. This study aims to investigate the seismic reliability of structural systems equipped with high-damping rubber bearings, which is a widely used class of isolators. An extensive parametric analysis is performed to assess the influence of design choices on the failure probability, considering design parameters concerning both the isolation system and the superstructure, such as: isolation period; bearings shear strain; percentage of flat sliders (i.e., bearing shape factors); superstructure overstrength ratio. A set of case studies have been configurated by varying and combining all the aforesaid parameters. A stochastic model is used for the bidirectional seismic input and the generation of horizontal ground motion components, whereas full probabilistic analyses are performed via Subset Simulation to achieve accurate estimates of the demand hazard curves up to very small failure probabilities. To reduce the computational effort, a 3D-model with a reduced number of DOFs (Degrees of Freedoms) is adopted for each case study. It consists of an uncoupled bidirectional elastoplastic model of the superstructure, and an advanced nonlinear 3D model of the rubber isolators, accounting for the coupling between vertical and horizontal response in large displacements. For each case analysed, demand hazard curves are evaluated to illustrate the probabilistic properties of the seismic response for both isolation system and superstructure. Results show a noticeable sensitivity of the system reliability with respect to the examined design choices and in some cases the achieved structural performance can be far from the safety levels required by the Codes.
Article
This paper revisits the implementation of surrogate modeling (metamodeling) techniques within seismic risk assessment, for applications that the seismic hazard is described through stochastic ground motion models. Emphasis is placed on how to efficiently address the aleatoric uncertainty in the ground motions, stemming in this case from the stochastic sequence utilized within the excitation model. Previous work has accommodated this uncertainty by approximating the statistics of the engineering demand parameters (EDPs), something that required a large number of replication simulations (for different stochastic sequences) for each training point that was used to inform the metamodel calibration. Using kriging (Gaussian Process regression) as surrogate model, an alternative formulation is discussed here, aiming to minimize the replications for each training point. This is achieved by approximating directly the EDP distribution. It is shown that accommodating heteroscedastic behavior with respect to the aleatoric uncertainty is absolutely critical for achieving an accurate approximation, and two different approaches are presented for establishing this objective. The first approach adopts a stochastic kriging formulation, utilizing a small number of replications for judicially selected inputs, leveraging a secondary surrogate model over the latter inputs to address the heteroscedastic behavior. The second approach uses no replications, establishing a heteroscedastic nugget formulation to accommodate the EDP distribution estimation. A functional relationship is introduced between the nugget and the excitation intensity features to approximate the heteroscedastic behavior. This relationship is explicitly optimized during the metamodel calibration.
Article
This paper develops a record-based stochastic ground motion model for the Chilean Subduction zone that generates ground motions compatible with the seismic hazard (represented by a local Ground Motion Predictive Equation GMPE) for both interplate and intraslab mechanism and considering rock soil conditions. The stochastic ground motion is obtained by the modulation of white-noise sequence in both time and spectral domain applying non-stationary filters. Compatibility is obtained by a proper tuning of predictive relationships, which offers a relation between seismological parameters and the driven parameters in the non-stationary filters. This process demands an intensive computational optimization problem where the parameters in the predictive relationships are found such that the stochastic ground motions generated match with the GMPE. The computational burden is decreased by the use of a Kriging metamodel which provides a high-fidelity approximation of the mean response spectra for different natural structural periods and the predictive relationship parameters. Once the predictive relationships are adjusted, the results are validated by comparing the prediction of mean spectral acceleration response from the GMPE with the prediction using the stochastic ground motion model. The Kriging model is built once and could be used to repeat the optimization for new GMPEs and predictive relationships (as a result of an updating process to incorporate new important seismic events) within a relatively low computational cost. In addition, a post-adjustment methodology is proposed to improve the results, using a scaling and spectral matching technique.
Article
The cost-effective design of seismic protective devices considering multiple criteria related to their lifecycle performance is examined, focusing on applications to fluid viscous dampers. The adopted framework is based on nonlinear time-history analysis for describing structural behavior, an assembly-based vulnerability approach for quantifying earthquake losses, and on characterization of the earthquake hazard through stochastic ground motion modeling. The probabilistic (lifecycle) performance is quantified through the expected value of some properly defined risk consequences measured over the space of the uncertain parameters (i.e., random variables) for the structural system and seismic hazard. The main design objective considered is the mean total lifecycle cost, composed of the upfront protective device cost and the present value of future earthquake losses. For incorporating risk-aversion attitudes in the decision-making process, an additional objective is examined, corresponding to consequences (repair cost in the example considered in this study) with a specific small-exceedance probability over the lifetime of the structure. This explicitly accounts for low-likelihood but large-consequence seismic events and ultimately leads to a multicriteria design problem. To support the use of complex numerical and probability models, a computational framework relying on kriging surrogate modeling is adopted for performing the resultant multiobjective optimization. The surrogate model is formulated in the so-called augmented input space, composed of both the uncertain model parameters and the design variables (controllable device parameters), and therefore is used to simultaneously support both the uncertainty propagation (calculation of risk integrals for the lifecycle performance) and the design optimization. As an illustrative example, the retrofitting of a three-story building with nonlinear fluid viscous dampers is examined.
Article
Full-text available
In the immediate aftermath of a strong earthquake and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play quite a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short-term. We propose robust forecasting of seismicity based on ETAS model, by exploiting the link between Bayesian inference and Markov Chain Monte Carlo Simulation. The methodology considers the uncertainty not only in the model parameters, conditioned on the available catalogue of events occurred before the forecasting interval, but also the uncertainty in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of seismicity associated with the 2016 Amatrice seismic sequence activities in central Italy. We provide robust spatio-temporal short-term seismicity forecasts with various time intervals in the first few days elapsed after each of the three main events within the sequence, which can predict the seismicity within plus/minus two standard deviations from the mean estimate within the few hours elapsed after the main event.
Article
The collapse capacity of a structure employing hysteretic energy dissipating devices (HEDDs) is considerably influenced by the uncertainties which are categorized to the aleatory and epistemic uncertainties. This study aims to comparatively evaluate uncertainty-propagations to the seismic collapse performance of the low-rise steel moment-resisting frames (SMRFs) with and without HEDDs, and to investigate on the effects of HEDDs to the failure modes of damped structures when the uncertainties are collectively propagated to the seismic response. In order to achieve this, incremental dynamic analyses are carried out to assess the collapse capacities of typical low-rise SMRFs with and without HEDDs. The Monte-Carlo simulation adopting a Latin hypercube sampling method is then performed to reflect the probabilistic uncertainty-propagation to the collapse capacities of structures. The analysis results show that the collapse capacities of low-rise SMRFs are considerably changed due to the uncertainty-propagation and HEDDs decrease the uncertainty-propagation to the collapse capacities of low-rise SMRFs because they induces a constant collapse mode with relatively low variation.
Article
A systematic probabilistic framework is discussed in this paper for detailed evaluation of the life-cycle repair cost of structural systems. A comprehensive methodology is initially presented for earthquake loss estimation; this methodology uses the nonlinear time-history response of the structure under a given excitation to estimate the damage in a detailed, component level. A realistic, stochastic ground motion model is then discussed for describing the acceleration time history for future seismic excitations. The parameters of this model are connected to the regional seismicity characteristics by appropriate predictive relationships. Quantification of the uncertainty in the regional seismicity, through a probabilistic description, leads then to a complete stochastic characterization of future ground motions. In this setting, the life-cycle repair cost can be quantified by its expected value over the space of the uncertain parameters for the structural and excitation models. Because of the complexity of these models, estimation of this expected value through stochastic simulation is suggested. A framework for probabilistic sensitivity analysis is then presented, based on stochastic sampling concepts. This sensitivity analysis aims to identify the structural and excitation properties that contribute more to the repair cost over the life-cycle of the structure, considering the probabilistic models selected to characterize the uncertainty in these properties.
Article
Considering the influence of bridge structure service period on earthquake loading, equal exceeding probability method was applied to reduce earthquake role, and two fortification criterions of current anti-seismic code for highway bridge were supplemented to three levels. The probability theory was used to randomize target response spectrum by considering the randomness of ground motion. Combined with coherence function and phase difference spectrum theory, the non-stationary random ground motions of spatial correlation multi-points for existing bridge structure were generated by using MATLAB programming. Simulation result indicates that ground motion peak acceleration can be reduced rationally by using equal exceeding probability method. Probability theory can be used to get random response spectrum, which can well simulate the randomness of ground motion, and the variation coefficient maximum difference value of thirty random response spectrums is 0.064, it meets accuracy requirement. The calculating response spectrums fit well to random target response spectrums, the goodnesses of fit for points No.1 and No.2 are 0.82 and 0.81 respectively, they meet accuracy requirement. The artificial ground motions can reflect the service period of existing bridge structure and the randomness of ground motion, and are similar to actual earthquake records.
Article
Full-text available
Probabilistic seismic hazard analysis (PSHA) integrates over all potential earthquake occurrences and ground motions to estimate the mean frequency of exceedance of any given spectral acceleration at the site. For improved communication and insights, it is becoming common practice to display the relative contributions to that hazard from the range of values of magnitude, M, distance, R, and epsilon, ɛ, the number of standard deviations from the median ground motion as predicted by an attenuation equation. The proposed disaggregation procedures, while conceptually similar, differ in several important points that are often not reported by the researchers and not appreciated by the users. We discuss here such issues, for example, definition of the probability distribution to be disaggregated, different disaggregation techniques, disaggregation of R versus ln R, and the effects of different binning strategies on the results. Misconception of these details may lead to unintended interpretations of the relative contributions to hazard. Finally, we propose to improve the disaggregation process by displaying hazard contributions in terms of not R, but latitude, longitude, as well as M and ɛ. This permits a display directly on a typical map of the faults of the surrounding area and hence enables one to identify hazard-dominating scenario events and to associate them with one or more specific faults, rather than a given distance. This information makes it possible to account for other seismic source characteristics, such as rupture mechanism and near-source effects, during selection of scenario-based ground-motion time histories for structural analysis.
Article
Full-text available
A versatile, nonstationary stochastic ground-motion model accounting for the time variation of both intensity and frequency content typical of real earthquake ground motions is formulated and validated. An extension of the Thomson's spectrum estimation method is used to adaptively estimate the evolutionary power spectral density (PSD) function of the target ground acceleration record. The parameters of this continuous-time, analytical, stochastic earthquake model are determined by least-square fitting the analytical evolutionary PSD function of the model to the target evolutionary PSD function estimated. As application examples, the proposed model is applied to two actual earthquake records. In each case, model validation is obtained by comparing the second-order statistics of several traditional ground-motion parameters and the probabilistic linear-elastic response spectra simulated using the earthquake model with their deterministic counterparts characterizing the target record.
Article
Full-text available
A formal probabilistic framework for seismic assessment of a structural system can be built around the expression for the probability of exceeding a limit state capacity, a measure of the reliability of system under seismic excitations. Common probabilistic tools are implemented in order to derive a simplified closed-form expression for the probability of exceeding a limit state capacity. This closed-from expression is particularly useful for seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity" as well as the uncertainty in seismic excitations. This framework implements non linear dynamic analysis procedures in order to estimate variability in the response of the structure ("demand") to seismic excitations. Alternative methods for designing a program of nonlinear analyses and for applying the results of dynamic analysis, particularly as it relates to displacement-based "demand" and "capacity" estimation, are discussed. These alternative methods are presented through a comprehensive case study of an existing 7-story moment-resisting frame structure in Los Angeles. This structure represents an older reinforced concrete structure with degrading behavior in nonlinear range. The onset of global dynamic instability in the structure is used to define the system "capacity" in this study. The probabilistic model describing the structural demand in the vicinity of system capacity is modified in order to explicitly account for the large displacement demands particular to a system close to the onset of global instability. This leads to an alternative presentation of the probabilistic framework in the range of global instability in the structure. Ground motion record selection is potentially significant in implementing a program of nonlinear dynamic analyses. However, it is demonstrated that even under the most extreme cases, namely structures with very short and very long first-mode periods, the structural response is conditionally statistically independent of the ground motion characteristics such as magnitude and source to site distance for a given seismic intensity level. This conclusion may justify "random" record selection taking into consideration the site's soil condition and its position with respect to the major faults around it.
Article
Full-text available
Alternative non-linear dynamic analysis procedures, using real ground motion records, can be used to make probability-based seismic assessments. These procedures can be used both to obtain parameter estimates for specific probabilistic assessment criteria such as demand and capacity factored design and also to make direct probabilistic performance assessments using numerical methods. Multiple-stripe analysis is a non-linear dynamic analysis method that can be used for performance-based assessments for a wide range of ground motion intensities and multiple performance objectives from onset of damage through global collapse. Alternatively, the amount of analysis effort needed in the performance assessments can be reduced by performing the structural analyses and estimating the main parameters in the region of ground motion intensity levels of interest. In particular, single-stripe and double-stripe analysis can provide local probabilistic demand assessments using minimal number of structural analyses (around 20 to 40). As a case study, the displacement-based seismic performance of an older reinforced concrete frame structure, which is known to have suffered shear failure in its columns during the 1994 Northridge Earthquake, is evaluated. Copyright © 2008 John Wiley & Sons, Ltd.
Article
In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(theta)p(theta/D) over the whole parameter space, where 0 is a parameter vector defining each model within the set of possible models of the structure, h(theta) is a model prediction of a response quantity of interest, and p(theta/D) is the updated probability density for 0, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(theta/D)) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(theta/D), the region of concentration of p(theta/D)) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.
Article
An analytical study of the failure region of the first excursion reliability problem for linear dynamical systems subjected to Gaussian white noise excitation is carried out with a view to constructing a suitable importance sampling density for computing the first excursion failure probability. Central to the study are 'elementary failure regions', which are defined as the failure region in the load space corresponding to the failure of a particular output response at a particular instant. Each elementary failure region is completely characterized by its design point, which can be computed readily using impulse response functions of the system. It is noted that the complexity of the first excursion problem stems from the structure of the union of the elementary failure regions. One important consequence of this union structure is that, in addition to the global design point, a large number of neighboring design points are important in accounting for the failure probability. Using information from the analytical study, an importance sampling density is proposed. Numerical examples are presented, which demonstrate that the efficiency of using the proposed importance sampling density to calculate system reliability is remarkable.
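The design-point construction described above can be sketched numerically. The example below uses a single-degree-of-freedom linear oscillator with made-up properties and a coarse discretization: each time step defines an elementary failure region whose design point follows from the impulse response, and an importance sampling density is formed as a mixture of unit-covariance Gaussians centered at those design points. It illustrates the idea rather than reproducing the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Linear SDOF driven by discrete Gaussian white noise (made-up parameters).
wn, zeta, dt, n = 2 * np.pi, 0.05, 0.02, 200            # 4 s of excitation
wd = wn * np.sqrt(1 - zeta ** 2)
t = np.arange(n) * dt
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd * dt    # unit-impulse response times dt

# Response at step k is a linear combination a_k . Z of the standard-normal inputs Z.
A = np.array([np.concatenate([h[:k + 1][::-1], np.zeros(n - k - 1)]) for k in range(n)])
sigmas = np.linalg.norm(A, axis=1)                       # std of the response at each step
b = 3.5 * sigmas.max()                                   # threshold ~3.5 sigma of the peak std

# Elementary design points z*_k = +/- b * a_k / ||a_k||^2, reliability indices beta_k = b / ||a_k||
betas = b / sigmas
dp = (b / sigmas ** 2)[:, None] * A
w = norm.cdf(-betas)                                     # elementary failure probabilities
w = np.concatenate([w, w]); w /= w.sum()                 # mixture weights for the +/- signs
centers = np.vstack([dp, -dp])

# Importance sampling from the mixture of unit-covariance Gaussians at the design points
N = 2000
idx = rng.choice(len(w), size=N, p=w)
Z = centers[idx] + rng.standard_normal((N, n))

# Likelihood ratio phi(z)/q(z); every mixture component shares the identity covariance.
log_comp = Z @ centers.T - 0.5 * (centers ** 2).sum(axis=1)   # log[N(z; m, I) / N(z; 0, I)]
ratio = 1.0 / (np.exp(log_comp) @ w)

fail = (np.abs(Z @ A.T) > b).any(axis=1)                 # first-excursion failure indicator
p_F = np.mean(fail * ratio)
print(f"Estimated first-excursion probability: {p_F:.2e}")
```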
Conference Paper
The seismic risk assessment of a structure in performance-based design may be significantly affected by the representation of ground motion uncertainty. The uncertainty in the ground motion is commonly represented by adopting a parameter or a vector of parameters known as the intensity measure (IM). In this work, a new measure, called a sufficiency measure, is derived based on information theory concepts, to quantify the suitability of one IM relative to another in representing ground motion uncertainty. Based on this measure, alternative IMs can be compared in terms of the expected difference in information they provide about a designated structural response parameter. Several scalar IMs are compared in terms of the amount of information they provide about the seismic response of an existing reinforced-concrete frame structure.
Article
Ground-motion relations are developed for California using a stochastic simulation method that exploits the equivalence between finite-fault models and a two-corner point-source model of the earthquake spectrum. First, stochastic simulations are generated for finite-fault ruptures, in order to define the average shape and amplitude level of the radiated spectrum at near-source distances as a function of earthquake size. The length and width of the fault plane are defined based on the moment magnitude of the earthquake and modeled with an array of subfaults. The radiation from each subfault is modeled as a Brune point source using the stochastic model approach; the subfault spectrum has a single-corner frequency. An earthquake rupture initiates at a randomly chosen subfault (hypocenter), and propagates in all directions along the fault plane. A subfault is triggered when rupture propagation reaches its center. Simulations are generated for an observation point by summing the subfault time series, appropriately lagged in time. Fourier spectra are computed for records simulated at many azimuths, placed at equidistant observation points around the fault. The mean Fourier spectrum for each magnitude, at a reference near-source distance, is used to define the shape and amplitude levels of an equivalent point-source spectrum that mimics the salient finite-fault effects. The functional form for the equivalent point-source spectrum contains two corner frequencies. Stochastic point-source simulations, using the derived two-corner source spectrum, are then performed to predict peak-ground-motion parameters and response spectra for a wide range of magnitudes and distances, for generic California sites. The stochastic ground-motion relations, given in the Appendix for rock and soil sites, are in good agreement with the empirical strong-motion database for California; the average ratio of observed to simulated amplitudes is near unity over all frequencies from 0.2 to 12 Hz. The stochastic relations agree well with empirical regression equations (e.g., Abrahamson and Silva, 1997; Boore et al., 1997; Sadigh et al., 1997) in the magnitude-distance ranges well represented by the data, but are better constrained at large distances, due to the use of attenuation parameters based on regional seismographic data. The stochastic ground-motion relations provide a sound basis for estimation of ground motions for earthquakes of magnitude 4 through 8, at distances from 1 to 200 km.
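The subfault summation step described above (contributions lagged by rupture propagation and wave travel times) can be sketched as follows. The subfault waveform generator is a stand-in stub, and the geometry, wave speeds and spreading rule are simplified placeholders rather than the calibrated finite-fault model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, n = 0.01, 4096                       # 40.96 s at 100 samples/s
nl, nw, dl = 10, 5, 2.0                  # 10 x 5 subfaults, 2 km spacing
v_rupt, v_s = 2.8, 3.5                   # rupture and shear-wave speeds (km/s)
site = np.array([30.0, 0.0, 0.0])        # observation point (km); fault lies in the y-z plane

# Subfault centres and a randomly chosen hypocentre on the fault plane.
iy, iz = np.meshgrid(np.arange(nl), np.arange(nw), indexing="ij")
centres = np.stack([np.zeros(iy.size), iy.ravel() * dl, -iz.ravel() * dl], axis=1)
hypo = centres[rng.integers(len(centres))]

def subfault_trace(n, dt):
    """Stand-in for a stochastic (Brune-type) subfault acceleration trace."""
    t = np.arange(n) * dt
    return (t * np.exp(1.0 - t)) * rng.standard_normal(n)

acc = np.zeros(n)
for c in centres:
    t_rupt = np.linalg.norm(c - hypo) / v_rupt     # time for the rupture front to reach the subfault
    t_trav = np.linalg.norm(site - c) / v_s        # wave travel time from the subfault to the site
    lag = int(round((t_rupt + t_trav) / dt))
    if lag >= n:
        continue
    tr = subfault_trace(n, dt) / np.linalg.norm(site - c)   # crude 1/R geometric spreading
    acc[lag:] += tr[:n - lag]                      # add the appropriately lagged contribution

print(f"Peak summed 'acceleration': {np.abs(acc).max():.4f} (arbitrary units)")
```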
Article
Ground-motion attenuation equations for rock sites in central and eastern North America are derived, based on the predictions of a stochastic ground-motion model. Four sets of attenuation equations are developed (i.e., 2 crustal regions × 2 magnitude scales). The associated uncertainties are derived by considering the uncertainties in parameter values, as well as those uncertainties associated with the ground-motion model itself. Comparison to data shows a reasonable agreement. Comparison to other attenuation functions for the region shows consistency with most attenuation functions in current use.
Chapter
This is the standard text for introductory advanced undergraduate and first-year graduate level courses in signal processing. The text gives a coherent and exhaustive treatment of discrete-time linear systems, sampling, filtering and filter design, reconstruction, the discrete-time Fourier and z-transforms, Fourier analysis of signals, the fast Fourier transform, and spectral estimation. The author develops the basic theory independently for each of the transform domains and provides illustrative examples throughout to aid the reader. Discussions of applications in the areas of speech processing, consumer electronics, acoustics, radar, geophysical signal processing, and remote sensing help to place the theory in context. The text assumes a background in advanced calculus, including an introduction to complex variables and a basic familiarity with signals and linear systems theory. If you have this background, the book forms an up-to-date and self-contained introduction to discrete-time signal processing that is appropriate for students and researchers. Discrete-Time Signal Processing also includes an extensive bibliography.
Article
In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
Article
Analysis of the seismic risk to a structure requires assessment of both the rate of occurrence of future earthquake ground motions (hazard) and the effect of these ground motions on the structure (response). These two pieces are often linked using an intensity measure such as spectral acceleration. However, earth scientists typically use the geometric mean of the spectral accelerations of the two horizontal components of ground motion as the intensity measure for hazard analysis, while structural engineers often use the spectral acceleration of a single horizontal component as the intensity measure for response analysis. This inconsistency in definitions is typically not recognized when the two assessments are combined, resulting in unconservative conclusions about the seismic risk to the structure. The source and impact of the problem are examined in this paper, and several potential resolutions are proposed. This discussion is directly applicable to probabilistic analyses, but also has implications for deterministic seismic evaluations. DOI: 10.1193/1.2191540
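A small numerical illustration of the definitional mismatch discussed above: the geometric mean of the two horizontal components' spectral accelerations generally differs from either single-component value, so combining hazard stated in one definition with response stated in the other is inconsistent. The values are placeholders for a single hypothetical record.

```python
import numpy as np

# Spectral acceleration (g) of the two horizontal components of one record at the same period
sa_comp1, sa_comp2 = 0.62, 0.41

sa_geomean = np.sqrt(sa_comp1 * sa_comp2)   # definition commonly used in hazard analysis
sa_single = sa_comp1                        # a single ("arbitrary") horizontal component

print(f"Geometric-mean Sa   : {sa_geomean:.3f} g")
print(f"Single-component Sa : {sa_single:.3f} g")
# Mixing the two definitions without a correction misstates both the median
# intensity and the dispersion entering the risk estimate.
```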
Article
Introduced in this paper are several alternative ground-motion intensity measures (IMs) that are intended for use in assessing the seismic performance of a structure at a site susceptible to near-source and/or ordinary ground motions. A comparison of such IMs is facilitated by defining the "efficiency" and "sufficiency" of an IM, both of which are criteria necessary for ensuring the accuracy of the structural performance assessment. The efficiency and sufficiency of each alternative IM, which are quantified via (i) nonlinear dynamic analyses of the structure under a suite of earthquake records and (ii) linear regression analysis, are demonstrated for the drift response of three different moderate- to long-period buildings subjected to suites of ordinary and of near-source earthquake records. One of the alternative IMs in particular is found to be relatively efficient and sufficient for the range of buildings considered and for both the near-source and ordinary ground motions.
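The regression step mentioned above can be sketched as follows: efficiency is judged from the dispersion of the residuals of ln(demand) regressed on ln(IM), and sufficiency from whether those residuals show a statistically significant trend with magnitude or distance. All numbers below are synthetic placeholders, not results for the buildings studied in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic record suite: magnitude, distance, IM value and peak drift per record.
n = 40
M = rng.uniform(6.0, 7.5, n)
R = rng.uniform(10.0, 60.0, n)                  # source-to-site distance (km)
ln_im = rng.normal(np.log(0.4), 0.5, n)         # e.g. ln Sa(T1) in g
ln_drift = -4.0 + 1.0 * ln_im + rng.normal(0.0, 0.3, n)

# Efficiency: dispersion of ln(demand) about its regression on ln(IM).
fit = stats.linregress(ln_im, ln_drift)
resid = ln_drift - (fit.intercept + fit.slope * ln_im)
efficiency = resid.std(ddof=2)                  # smaller dispersion -> more efficient IM

# Sufficiency: regress the residuals on M and on R; large p-values suggest no
# leftover dependence on record characteristics, i.e. an (approximately) sufficient IM.
p_M = stats.linregress(M, resid).pvalue
p_R = stats.linregress(R, resid).pvalue

print(f"Dispersion of demand given IM : {efficiency:.3f}")
print(f"p-value of residual M trend   : {p_M:.2f}")
print(f"p-value of residual R trend   : {p_R:.2f}")
```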
Article
Theoretical predictions of seismic motions as a function of source strength are often expressed as frequency-domain scaling models. The observations of interest to strong-motion seismology, however, are usually in the time domain (e.g., various peak motions, including magnitude). The method of simulation presented here makes use of both domains; its essence is to filter a suite of windowed, stochastic time series so that the amplitude spectra are equal, on the average, to the specified spectra. Because of its success in predicting peak and rms accelerations (Hanks and McGuire, 1981), an ω-squared spectrum with a high-frequency cutoff (f_m), in addition to the usual whole-path anelastic attenuation, and with a constant stress parameter (Δσ) has been used in the applications of the simulation method. With these assumptions, the model is particularly simple: the scaling with source size depends on only one parameter, seismic moment or, equivalently, moment magnitude. Besides peak acceleration, the model gives a good fit to a number of ground motion amplitude measures derived from previous analyses of hundreds of recordings from earthquakes in western North America, ranging from a moment magnitude of 5.0 to 7.7. These measures of ground motion include peak velocity, Wood-Anderson instrument response, and response spectra. The model also fits peak velocities and peak accelerations for South African earthquakes with moment magnitudes of 0.4 to 2.4 (with f_m = 400 Hz and Δσ = 50 bars, compared to f_m ≈ 15 Hz and Δσ = 100 bars for the western North America data). Remarkably, the model seems to fit all essential aspects of high-frequency ground motions for earthquakes over a very large magnitude range. Although the simulation method is useful for applications requiring one or more time series, a simpler, less costly method based on various formulas from random vibration theory will often suffice for applications requiring only peak motions. Hanks and McGuire (1981) used such an approach in their prediction of peak acceleration. This paper contains a generalization of their approach; the formulas used depend on the moments (in the statistical sense) of the squared amplitude spectra, and therefore can be applied to any time series having a stochastic character, including ground acceleration, velocity, and the oscillator outputs on which response spectra and magnitude are based.
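A stripped-down sketch of the spectral-matching idea described above: a windowed Gaussian white-noise trace is transformed to the frequency domain, its amplitude spectrum is scaled (on average) to a target spectrum while the random phases are kept, and the result is transformed back. The ω-squared target below is schematic; the corner frequency, cutoff, envelope and scaling constant are placeholders, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, n = 0.01, 2048
t = np.arange(n) * dt
freqs = np.fft.rfftfreq(n, dt)

# Schematic omega-squared acceleration spectrum with a high-frequency cutoff (placeholders).
f_c, f_m, scale = 0.5, 15.0, 1.0
target = scale * (2 * np.pi * freqs) ** 2 / (1.0 + (freqs / f_c) ** 2)
target /= np.sqrt(1.0 + (freqs / f_m) ** 8)      # simple high-cut filter near f_m

# Windowed Gaussian white noise: an exponential-type envelope applied to the noise.
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)     # peaks at t = 2 s (arbitrary shape)
noise = envelope * rng.standard_normal(n)

# Impose the target amplitude spectrum on average while keeping the random phases.
spec = np.fft.rfft(noise)
spec /= np.mean(np.abs(spec))                    # normalize the average amplitude to ~1
acc = np.fft.irfft(spec * target, n)             # synthetic acceleration time history

print(f"Peak of the synthetic trace: {np.abs(acc).max():.3f} (arbitrary units)")
```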
Article
A nonlinear model and an analytical procedure for calculating the cyclic response of nonductile reinforced concrete columns are presented. The main characteristics of the model include the ability to represent flexure or shear failure under monotonically increasing or reversed cyclic loading. Stiffness degradation with cyclic loading can also be represented. The model was implemented in a multipurpose analysis program and was used to calculate the response of selected columns representative of older construction. A comparison of the calculated response with experimental results shows that the strength, failure mode and general characteristics of the measured cyclic response can be well represented by the model.
Article
This paper introduces a method for the evaluation of the seismic risk at the site of an engineering project. The results are in terms of a ground-motion parameter (such as peak acceleration) versus average return period. The method incorporates the influence of all potential sources of earthquakes and the average activity rates assigned to them. Arbitrary geographical relationships between the site and potential point, line, or areal sources can be modeled with computational ease. In the range of interest, the derived distributions of maximum annual ground motions are in the form of Type I or Type II extreme value distributions, if the more commonly assumed magnitude distribution and attenuation laws are used.
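A compact numerical sketch of the hazard-integration idea described above: the annual rate of exceeding a ground-motion level combines a source activity rate, a truncated Gutenberg-Richter magnitude distribution and an attenuation relation with lognormal scatter. The single point source and the attenuation coefficients used here are invented for illustration and bypass the distance integral.

```python
import numpy as np
from scipy.stats import norm

# One point source at a fixed distance (so the distance integral collapses).
nu = 0.05                              # annual rate of events with M >= m_min
m_min, m_max, b_val = 5.0, 7.5, 1.0
r_km = 20.0

# Truncated exponential (Gutenberg-Richter) magnitude density on a grid.
m = np.linspace(m_min, m_max, 200)
dm = m[1] - m[0]
beta = b_val * np.log(10.0)
f_m = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))

# Placeholder attenuation relation: ln PGA = c0 + c1*M - c2*ln(R), with scatter sigma.
c0, c1, c2, sigma = -4.5, 1.0, 1.2, 0.6
ln_med = c0 + c1 * m - c2 * np.log(r_km)

def annual_rate(pga_g):
    """Annual rate of exceeding pga_g (in g) at the site."""
    p_exceed = norm.sf((np.log(pga_g) - ln_med) / sigma)   # P(PGA > a | m, r)
    return nu * np.sum(p_exceed * f_m) * dm                # integrate over magnitude

for a in (0.1, 0.2, 0.4):
    print(f"Rate of PGA > {a:.1f} g : {annual_rate(a):.2e} per year")
```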
Article
Using a database of 655 recordings from 58 earthquakes, empirical response spectral attenuation relations are derived for the average horizontal and vertical component for shallow earthquakes in active tectonic regions. A new feature in this model is the inclusion of a factor to distinguish between ground motions on the hanging wall and footwall of dipping faults. The site response is explicitly allowed to be non-linear with a dependence on the rock peak acceleration level.
Article
In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.
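A minimal sketch of the Metropolis-Hastings averaging step described above, applied to a toy one-parameter model: samples are drawn from an unnormalized posterior and used to estimate the posterior mean of a response prediction h(θ) by statistical averaging. The sequence of intermediate (annealing-like) densities is omitted for brevity, and the likelihood, prior and h below are invented stand-ins rather than the structural models of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: noisy observations of a response governed by one stiffness-like parameter.
theta_true, sigma_obs = 2.0, 0.2
data = theta_true + sigma_obs * rng.standard_normal(10)

def log_posterior(theta):
    """Unnormalized log posterior: Gaussian likelihood times a lognormal (median 1) prior."""
    if theta <= 0:
        return -np.inf
    log_like = -0.5 * np.sum((data - theta) ** 2) / sigma_obs ** 2
    log_prior = -np.log(theta) - 0.5 * np.log(theta) ** 2
    return log_like + log_prior

def h(theta):
    """Model prediction of a response quantity of interest (stand-in)."""
    return 1.0 / theta ** 2

# Random-walk Metropolis-Hastings
n_samples, step = 20000, 0.1
theta, lp = 1.0, log_posterior(1.0)
samples = np.empty(n_samples)
for i in range(n_samples):
    cand = theta + step * rng.standard_normal()
    lp_cand = log_posterior(cand)
    if np.log(rng.uniform()) < lp_cand - lp:        # accept/reject
        theta, lp = cand, lp_cand
    samples[i] = theta

burn = n_samples // 4
post_mean_h = h(samples[burn:]).mean()              # statistical averaging of h(theta)
print(f"Posterior mean of h(theta): {post_mean_h:.4f}")
print(f"Posterior mean of theta   : {samples[burn:].mean():.3f}")
```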
Article
A parsimonious stochastic seismic ground-motion model is used to study the effect of ground-motion nonstationarities on the response of simple linear and softening nonlinear systems. This model captures with at most nine parameters the features of the ground motion which are important for computing dynamic response, including the amplitude and frequency-content nonstationarities of the earthquake. Simple approximate expressions for the mean-square response statistics are obtained and are used to demonstrate analytically the importance of modeling the temporal nonstationarity in the frequency content of the ground motion, not only as expected for the nonlinear system, but also for linear systems. For the nonlinear systems, the phenomenon of 'moving resonance' is demonstrated whereby the shortening of the system frequencies, due to stiffness softening with increasing amplitudes, tracks the shift of the dominant frequencies of the ground motion, leading to a large resonant build-up in response amplitudes.
Article
A method is presented for efficiently computing small failure probabilities encountered in seismic risk problems involving dynamic analysis. It is based on a procedure recently developed by the writers called Subset Simulation in which the central idea is that a small failure probability can be expressed as a product of larger conditional failure probabilities, thereby turning the problem of simulating a rare failure event into several problems that involve the conditional simulation of more frequent events. Markov chain Monte Carlo simulation is used to efficiently generate the conditional samples, which is otherwise a nontrivial task. The original version of Subset Simulation is improved by allowing greater flexibility for incorporating prior information about the reliability problem so as to increase the efficiency of the method. The method is an effective simulation procedure for seismic performance assessment of structures in the context of modern performance-based design. This application is illustrated by considering the failure of linear and nonlinear hysteretic structures subjected to uncertain earthquake ground motions. Failure analysis is also carried out using the Markov chain samples generated during Subset Simulation to yield information about the probable scenarios that may occur when the structure fails.
Article
Reliability-based design sensitivity analysis involves studying the dependence of the failure probability on design parameters. Conventionally, this requires repeated evaluations of the failure probability for different values of the design parameters, which is a direct but computationally expensive task. An efficient simulation approach is presented to perform reliability-based design sensitivity analysis using only one simulation run. The approach is based on consideration of an ‘augmented reliability problem’ where the design parameters are artificially considered as uncertain. It is shown that the desired information about reliability sensitivity can be extracted through failure analysis of the augmented problem. The required computational effort is relatively insensitive to the number of uncertain parameters but generally grows exponentially with the number of design parameters whose sensitivity is to be studied. The latter implies that the proposed approach is applicable for studying the sensitivity of a small number of design parameters, say, less than 3, although this drawback appears unavoidable whenever multi-dimensional information is sought. Examples are presented to illustrate applications of the approach to reliability-based retrofit of structures.
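The Bayes-rule extraction step of the augmented-reliability idea described above can be sketched as follows. The design parameter is artificially treated as uniformly distributed, a single simulation run of the augmented problem is performed, and the failure probability as a function of the design parameter is recovered from the conditional distribution of that parameter given failure. Plain Monte Carlo with a toy load model is used here instead of the paper's simulation machinery, purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(5)

# Augmented problem: the design parameter d (e.g. a strength factor) is treated as
# uniform over its design range, on top of the actual load uncertainty.
d_lo, d_hi = 1.0, 2.0
N = 200000
d = rng.uniform(d_lo, d_hi, N)                        # artificial design-parameter samples
load = rng.lognormal(mean=0.0, sigma=0.4, size=N)     # uncertain load (placeholder model)

fail = load > d                                       # failure event of the augmented problem
P_F_aug = fail.mean()

# Bayes' rule: P(F | d) = P(F) * p(d | F) / p(d); estimate p(d | F) with a histogram.
bins = np.linspace(d_lo, d_hi, 11)
hist, _ = np.histogram(d[fail], bins=bins, density=True)   # p(d | F)
p_d = 1.0 / (d_hi - d_lo)                                   # uniform p(d)
P_F_given_d = P_F_aug * hist / p_d

for lo, hi, p in zip(bins[:-1], bins[1:], P_F_given_d):
    print(f"d in [{lo:.1f}, {hi:.1f}) : P_F ~ {p:.4f}")
```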
Article
Structural failures in recent earthquakes and hurricanes have exposed the weakness of current design procedures and shown the need for new concepts and methodologies for building performance evaluation and design. A central issue is proper consideration and treatment of the large uncertainty in the loadings and the complex building behavior in the nonlinear range in the evaluation and design process. A reliability-based framework for design is proposed for this purpose. Performance check of the structures is emphasized at two levels corresponding to incipient damage and incipient collapse. Minimum lifecycle cost criteria are proposed to arrive at optimal target reliability for performance-based design under multiple natural hazards. The issue of the structural redundancy under stochastic loads is also addressed. Effects of structural configuration, ductility capacity, 3-D motions, and uncertainty in demand versus capacity are investigated. A uniform-risk redundancy factor is proposed to ensure uniform reliability for structural systems of different degree of redundancy. The inconsistency of the reliability/redundancy factor in current codes is pointed out.
Article
A new simulation approach, called ‘subset simulation’, is proposed to compute small failure probabilities encountered in reliability analysis of engineering systems. The basic idea is to express the failure probability as a product of larger conditional failure probabilities by introducing intermediate failure events. With a proper choice of the conditional events, the conditional failure probabilities can be made sufficiently large so that they can be estimated by means of simulation with a small number of samples. The original problem of calculating a small failure probability, which is computationally demanding, is reduced to calculating a sequence of conditional probabilities, which can be readily and efficiently estimated by means of simulation. The conditional probabilities cannot be estimated efficiently by a standard Monte Carlo procedure, however, and so a Markov chain Monte Carlo simulation (MCS) technique based on the Metropolis algorithm is presented for their estimation. The proposed method is robust to the number of uncertain parameters and efficient in computing small probabilities. The efficiency of the method is demonstrated by calculating the first-excursion probabilities for a linear oscillator subjected to white noise excitation and for a five-story nonlinear hysteretic shear building under uncertain seismic excitation.
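A minimal sketch of the subset-simulation idea with a component-wise Metropolis step, applied to a toy demand function on independent standard-normal inputs (not the structural models of the paper). The demand is chosen so that the exact answer is known, which makes the sketch easy to check; the level probability p0, sample size N and proposal width are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Toy demand on i.i.d. standard-normal inputs; failure when demand > b_target.
# This choice has the exact answer Phi(-b_target), convenient for checking the sketch.
n_dim, b_target = 100, 3.5
def demand(z):
    return z.sum(axis=-1) / np.sqrt(n_dim)

N, p0 = 1000, 0.1                          # samples per level, conditional level probability
n_seed = int(N * p0)

z = rng.standard_normal((N, n_dim))        # level 0: direct Monte Carlo
g = demand(z)
p_F, level = 1.0, 0

while True:
    order = np.argsort(g)[::-1]            # sort by demand, largest first
    b_level = 0.5 * (g[order[n_seed - 1]] + g[order[n_seed]])   # intermediate threshold
    if b_level >= b_target or level > 20:
        p_F *= np.mean(g > b_target)       # final level: count actual failures
        break
    p_F *= p0
    seeds = z[order[:n_seed]]

    # Modified (component-wise) Metropolis: grow each seed into a short chain that is
    # conditioned on staying above the intermediate threshold b_level.
    z_new, g_new = [], []
    for zs in seeds:
        cur, cur_g = zs.copy(), demand(zs)
        for _ in range(int(1 / p0)):
            cand = cur.copy()
            for j in range(n_dim):         # component-wise proposal and accept/reject
                xi = cur[j] + rng.uniform(-1.0, 1.0)
                if rng.uniform() < np.exp(0.5 * (cur[j] ** 2 - xi ** 2)):
                    cand[j] = xi
            g_cand = demand(cand)
            if g_cand > b_level:           # discard moves that leave the conditional level
                cur, cur_g = cand, g_cand
            z_new.append(cur.copy()); g_new.append(cur_g)
    z, g = np.array(z_new), np.array(g_new)
    level += 1

print(f"Subset Simulation estimate   : {p_F:.2e}")
print(f"Exact value Phi(-{b_target}) : {norm.cdf(-b_target):.2e}")
```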
Article
A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion.
Article
A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed.
Atkinson G, Silva W. Stochastic modelling of California ground motions. Bulletin of the Seismological Society of America 2000; 90(2):255–274.
Oppenheim AV, Schafer RW. Discrete-Time Signal Processing (2nd edn). Prentice Hall Signal Processing Series. Prentice-Hall: Englewood Cliffs, NJ, 1998.
Luco N, Cornell CA. Seismic drift demands for two SMRF structures with brittle connections. Structural Engineering World Wide. Elsevier: Oxford, England, 1998; Paper T158-3.
Jalayer F, Cornell CA. A technical framework for probability-based demand and capacity factor design (DCFD) seismic formats. Pacific Earthquake Engineering Research Center Report 2003/08, 2003.
Abrahamson NA. Hazard Code Version 30 Documentation, May 2001.
PEER strong motion database. http://peer.berkeley.edu/smcat/ (7 February 2005).
Jalayer F. Direct probabilistic seismic hazard analysis: implementing non-linear dynamic assessments.