J. L. Beck’s research while affiliated with University of Naples Federico II and other places


Publications (94)


Bayesian Post-Processing for Subset Simulation for Decision Making Under Risk
  • Conference Paper

May 2012 · 24 Reads · Beck J. L.

Estimation of the failure probability, that is, the probability of unacceptable system performance, is an important and computationally challenging problem in reliability engineering. In cases of practical interest, the failure probability is given by an integral over a high-dimensional uncertain parameter space. Over the past decade, the engineering research community has realized the importance of advanced stochastic simulation methods for solving reliability problems. Subset Simulation, proposed by Au and Beck, provides an efficient algorithm for computing failure probabilities for general high-dimensional reliability problems. Here, a Bayesian post-processor for the original Subset Simulation method is presented that produces the posterior PDF of the failure probability, which can then be used in risk analyses such as life-cycle cost analysis and decision making under risk.
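
As a rough illustration of what a posterior PDF of the failure probability looks like in the simplest setting, here is a conjugate Beta posterior for independent Monte Carlo samples. This is only an analogue of the idea: the paper's post-processor for Subset Simulation is more involved, since it must handle correlated MCMC samples across conditional levels.

```python
# Hedged sketch: Beta posterior for a failure probability estimated from
# Bernoulli outcomes of direct Monte Carlo, illustrating the idea of treating
# p_F itself as uncertain. NOT the paper's Subset Simulation post-processor.
from scipy import stats

def failure_probability_posterior(n_samples, n_failures):
    """Posterior PDF of p_F under a uniform Beta(1, 1) prior."""
    return stats.beta(a=n_failures + 1, b=n_samples - n_failures + 1)

posterior = failure_probability_posterior(n_samples=1000, n_failures=3)
print("posterior mean of p_F:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```

The posterior interval, rather than a point estimate alone, is what feeds into decision making under risk.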


[Fig. 1: A model of the case-study building: (a) the structural model consisting of two moment-resisting frames in series; (b) the component backbone curve; (c) the component hysteretic behavior]
Analyzing the Sufficiency of Alternative Scalar and Vector Intensity Measures of Ground Shaking Based on Information Theory
  • Article
  • Full-text available

March 2012 · 626 Reads · 133 Citations · Journal of Engineering Mechanics

The seismic risk assessment of a structure in performance-based design (PBD) may be significantly affected by the representation of ground motion uncertainty. In PBD, the uncertainty in the ground motion is often represented by a probabilistic description of a scalar parameter, or a low-dimensional vector of parameters, known as the intensity measure (IM), rather than a full probabilistic description of the ground motion time history in terms of a stochastic model. In this work, a new procedure employing a relative sufficiency measure is introduced on the basis of information theory concepts to quantify the suitability of one IM relative to another in the representation of ground motion uncertainty. On the basis of this relative sufficiency measure, several alternative scalar- and vector-valued IMs are compared in terms of the expected difference in information they provide about a predicted structural response parameter, namely, the seismically induced drift in an existing reinforced-concrete frame structure. It is concluded that the most informative of the eight considered IMs for predicting the nonlinear drift response are two scalar IMs and a vector IM that depend only on the spectral ordinates at the periods of the first two (small-amplitude) modes of vibration.
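
The underlying comparison, the difference in information two candidate IMs provide about a response quantity, can be sketched as a difference of mutual informations estimated by coarse histogram binning. The estimator, bin count, and synthetic lognormal data below are illustrative assumptions, not the paper's procedure:

```python
# Hedged sketch: compare two candidate IMs by the difference in mutual
# information each shares with a structural response quantity (EDP),
# using a plug-in histogram estimator on synthetic data.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 5000
w = rng.lognormal(0.0, 0.5, size=n)          # latent ground-shaking severity
edp = w * rng.lognormal(0.0, 0.3, size=n)    # structural response, e.g. peak drift
im1 = w * rng.lognormal(0.0, 0.6, size=n)    # noisier candidate IM
im2 = w * rng.lognormal(0.0, 0.1, size=n)    # sharper candidate IM

def binned_mi(x, y, bins=20):
    """Plug-in mutual information (nats) from quantile-binned samples."""
    cx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    cy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    return mutual_info_score(cx, cy)

# Positive value -> im2 is, on average, more informative about the EDP.
print(binned_mi(im2, edp) - binned_mi(im1, edp))
```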


On the Optimal Scaling of the Modified Metropolis-Hastings Algorithm

August 2011 · 29 Reads · 8 Citations

Estimation of small failure probabilities is one of the most important and challenging problems in reliability engineering. In cases of practical interest, the failure probability is given by a high-dimensional integral. Since multivariate integration suffers from the curse of dimensionality, the usual numerical methods are inapplicable. Over the past decade, the civil engineering research community has increasingly realized the potential of advanced simulation methods for treating reliability problems. The Subset Simulation method, introduced by Au & Beck (2001a), is considered to be one of the most robust advanced simulation techniques for solving high-dimensional nonlinear problems. The Modified Metropolis-Hastings (MMH) algorithm, a variation of the original Metropolis-Hastings algorithm (Metropolis et al. 1953, Hastings 1970), is used in Subset Simulation for sampling from conditional high-dimensional distributions. The efficiency and accuracy of Subset Simulation directly depend on the ergodic properties of the Markov chain generated by MMH, in other words, on how fast the chain explores the parameter space. The latter is determined by the choice of one-dimensional proposal distributions, making this choice very important. It was noticed in Au & Beck (2001a) that the performance of MMH is not sensitive to the type of the proposal PDFs; however, it strongly depends on the variance of the proposal PDFs. Nevertheless, in almost all real-life applications, the scaling of proposal PDFs is still largely an art. The issue of optimal scaling was recognized in the original paper by Metropolis (Metropolis et al. 1953). Gelman, Roberts, and Gilks (Gelman et al. 1996) were the first to publish theoretical results on the optimal scaling of the original Metropolis-Hastings algorithm. They proved that, for optimal sampling from a high-dimensional Gaussian distribution, the Metropolis-Hastings algorithm should be tuned to accept only approximately 25% of the proposed moves. This came as an unexpected and counter-intuitive result. Since then, many papers have been published on the optimal scaling of the original Metropolis-Hastings algorithm. In this paper, in the spirit of Gelman et al. (1996), we address the following question, which is of high practical importance: what are the optimal one-dimensional Gaussian proposal PDFs for simulating a high-dimensional conditional Gaussian distribution using the MMH algorithm? We present a collection of observations on the optimal scaling of the Modified Metropolis-Hastings algorithm for different numerical examples, and develop an optimal scaling strategy for MMH when it is employed within Subset Simulation for estimating small failure probabilities.
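
Since the abstract turns on how MMH differs from plain Metropolis-Hastings, a minimal sketch may help: each coordinate of a standard Gaussian vector gets its own 1-D Gaussian proposal, accepted against the 1-D standard-normal marginal, and the assembled candidate is then kept only if it lies in the failure domain. The linear limit state, dimension, seed point, and proposal scale are toy assumptions, not the paper's numerical examples:

```python
# Minimal sketch of one Modified Metropolis-Hastings (MMH) transition,
# assuming independent standard-Gaussian components conditioned on a
# failure domain F. Limit state, dimension, and scaling are placeholders.
import numpy as np
from scipy import stats

def mmh_step(x, in_failure_domain, sigma, rng):
    """Component-wise 1-D proposals, then a whole-vector domain check."""
    candidate = x.copy()
    for i in range(len(x)):
        xi = x[i] + sigma * rng.standard_normal()          # 1-D Gaussian proposal
        ratio = stats.norm.pdf(xi) / stats.norm.pdf(x[i])  # 1-D marginal ratio
        if rng.random() < min(1.0, ratio):
            candidate[i] = xi
    # The assembled candidate is accepted only if it stays in F.
    return candidate if in_failure_domain(candidate) else x

rng = np.random.default_rng(1)
d = 100
in_F = lambda x: x.sum() / np.sqrt(d) > 3.0    # toy linear limit state
x = np.full(d, 3.5 / np.sqrt(d))               # seed point inside F
chain = [x]
for _ in range(1000):
    chain.append(mmh_step(chain[-1], in_F, sigma=1.0, rng=rng))

moved = sum(not np.array_equal(a, b) for a, b in zip(chain[:-1], chain[1:]))
print("fraction of steps that moved:", moved / (len(chain) - 1))
```

In practice `sigma` would be tuned by monitoring how often the chain actually moves, which is precisely the scaling question the paper studies.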


Discussion of paper by F. Miao and M. Ghosn “Modified subset simulation method for reliability analysis of structural systems”, Structural Safety, 33:251–260, 2011

January 2011 · 74 Reads · 14 Citations · Structural Safety

The subject paper presents a ‘Regenerative Adaptive Subset Simulation’ (RASS) algorithm that includes some modifications to the original Subset Simulation algorithm for calculating small failure probabilities for dynamic systems that was first proposed by Au and Beck in [1]. In particular, the authors state that their proposed modifications overcome some limitations of the original Metropolis–Hastings algorithm used in Subset Simulation, including the ‘burn-in’ problem and the difficulty of the selection of the proposal distribution. This discussion intends to clarify several issues associated with the paper.


Life-cycle Cost Optimal Design of Passive Dissipative Devices for Seismic Risk Mitigation

September 2009 · 11 Reads · 1 Citation

The cost effective performance of structures has long been recognized to be an important topic in the design of civil engineering systems. This design approach requires proper integration of (i) methodologies for treating the uncertainties related to natural hazards and to the structural behavior over the entire life-cycle of the building, (ii) tools for evaluating the performance using socioeconomic criteria, as well as (iii) algorithms appropriate for stochastic analysis and optimization. A complete probabilistic framework is presented in this paper for detailed estimation and optimization of the life-cycle cost of earthquake engineering systems. The focus is placed on the design of passive dissipative devices. The framework is based on a knowledge-based interpretation of probability (Jaynes, 2003), which leads to a realistic framework for formulating the design problem, and on an efficient novel approach to stochastic optimization problems (Taflanidis and Beck, 2008). The latter facilitates an efficient solution of this design problem and thus allows for consideration of complex models for describing structural performance. A comprehensive methodology is initially discussed for earthquake loss estimation; this methodology uses the nonlinear time-history response of the structure under a given excitation to estimate damage at a detailed, component level. A realistic probabilistic model is then presented for describing the ground motion time history for future earthquake excitations. This model establishes a direct link between the probabilistic seismic hazard description of the structural site and the acceleration time history of future ground motions. In this setting, the life-cycle cost is given by an expected value over the space of the uncertain parameters for the structural system, performance evaluation and excitation models. Because of the complexity of these models, calculation of this expected value by means of stochastic simulation techniques is adopted. This approach, though, involves an unavoidable estimation error and significant computational cost, features which make the associated optimization challenging. An efficient two-stage framework is presented for the optimization in such stochastic design problems. The first stage implements a novel approach, called Stochastic Subset Optimization (SSO), for efficiently exploring the sensitivity of the objective function to both the design variables and the model parameters. Using a small number of stochastic analyses, SSO iteratively identifies a subset of the original design space that has high plausibility of containing the optimal design variables and additionally consists of near-optimal solutions. The second stage, if needed, adopts some other stochastic optimization algorithm to pinpoint the optimal design variables within that subset. All information available from the first stage is exploited in order to improve the efficiency of the second optimization stage. An example is presented that considers the retrofitting of a four-story reinforced concrete office building with viscous dampers. Complex system, excitation, and performance evaluation models are considered that incorporate all important characteristics of the true system and its environment into the design process. The results illustrate the capabilities of the proposed framework for improving the structural behavior in a manner that is meaningful to its stakeholders (socioeconomic criteria), as well as its capabilities for computational efficiency and the treatment of complex analysis models.
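
As a minimal illustration of the outer loop described here, expected life-cycle cost estimated by stochastic simulation and minimized over a design variable, consider the toy sketch below. The cost model and distributions are invented, and a plain grid search stands in for the two-stage SSO scheme:

```python
# Hedged sketch: expected life-cycle cost for a damper design estimated by
# Monte Carlo over uncertain parameters, then minimized by grid search.
# The cost model and lognormal "demand" variable are invented placeholders.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.lognormal(mean=5.0, sigma=1.0, size=20000)  # uncertain demand draws

def life_cycle_cost(design, theta):
    """Toy cost: device cost grows with damper size; losses shrink with it."""
    initial = 50.0 * design
    losses = theta / (1.0 + design)
    return initial + losses

def expected_cost(design):
    # Common random numbers: reusing the same draws reduces comparison noise
    # across designs, at the price of a shared estimation error.
    return life_cycle_cost(design, theta).mean()

designs = np.linspace(0.1, 10.0, 25)
best = min(designs, key=expected_cost)
print("near-optimal damper size (toy units):", best)
```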


Seismic Loss Estimation Based on End-to-end Simulation

June 2008 · 106 Reads · 12 Citations

Recently, there has been increasing interest in simulating all aspects of the seismic risk problem, from the source mechanism to the propagation of seismic waves to nonlinear time-history analysis of structural response and finally to building damage and repair costs. This study presents a framework for performing truly "end-to-end" simulation. A recent region-wide study of tall steel-frame building response to a Mw 7.9 scenario earthquake on the southern portion of the San Andreas Fault is extended to consider economic losses. In that study a source mechanism model and a velocity model, in conjunction with a finite-element model of Southern California, were used to calculate ground motions at 636 sites throughout the San Fernando and Los Angeles basins. At each site, time history analyses of a nonlinear deteriorating structural model of an 18-story steel moment-resisting frame building were performed, using both a pre-Northridge earthquake design (with welds at the moment-resisting connections that are susceptible to fracture) and a modern code (UBC 1997) design. This work uses the simulation results to estimate losses by applying the MDLA (Matlab Damage and Loss Analysis) toolbox, developed to implement the PEER loss-estimation methodology. The toolbox includes damage prediction and repair cost estimation for structural and non-structural components and allows for the computation of the mean and variance of building repair costs conditional on engineering demand parameters (i.e. inter-story drift ratios and peak floor accelerations). Here, it is modified to treat steel-frame high-rises, including aspects such as mechanical, electrical and plumbing systems, traction elevators, and the possibility of irreparable structural damage. Contour plots of conditional mean losses are generated for the San Fernando and the Los Angeles basins for the pre-Northridge and modern code designed buildings, allowing for comparison of the economic effects of the updated code for the scenario event. In principle, by simulating multiple seismic events, consistent with the probabilistic seismic hazard for a building site, the same basic approach could be used to quantify the uncertain losses from future earthquakes.
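
The component-level step this describes, damage-state probabilities from fragility curves weighted by repair costs, can be sketched briefly. The fragility medians, dispersion, and costs below are invented placeholders, not values from the MDLA toolbox:

```python
# Hedged sketch of a PEER-style component loss calculation: lognormal
# fragility curves give damage-state probabilities at a given drift, and
# the mean repair cost is their cost-weighted sum. All numbers are toys.
import numpy as np
from scipy import stats

def damage_state_probs(edp, medians, beta=0.4):
    """P(DS = k) by differencing lognormal exceedance fragilities."""
    exceed = stats.norm.cdf(np.log(edp / np.asarray(medians)) / beta)
    exceed = np.concatenate([[1.0], exceed, [0.0]])
    return exceed[:-1] - exceed[1:]        # P(DS = 0), ..., P(DS = n)

medians = [0.004, 0.01, 0.02]              # drift medians per damage state (toy)
costs = [0.0, 500.0, 2000.0, 8000.0]       # repair cost for DS0..DS3 (toy USD)

drift = 0.012                              # inter-story drift ratio
probs = damage_state_probs(drift, medians)
print("mean repair cost at 1.2% drift:", float(np.dot(probs, costs)))
```

Summing such terms over all components, and propagating the EDP uncertainty, yields the conditional mean and variance of building repair cost mentioned above.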


Stochastic Subset Optimization for optimal reliability problems

April 2008 · 63 Reads · 93 Citations · Probabilistic Engineering Mechanics

Reliability-based design of a system often requires the minimization of the probability of system failure over the admissible space for the design variables. For complex systems this probability can rarely be evaluated analytically and so it is often calculated using stochastic simulation techniques, which involve an unavoidable estimation error and significant computational cost. These features make efficient reliability-based optimal design a challenging task. A new method called Stochastic Subset Optimization (SSO) is proposed here for iteratively identifying sub-regions for the optimal design variables within the original design space. An augmented reliability problem is formulated where the design variables are artificially considered as uncertain and Markov Chain Monte Carlo techniques are implemented in order to simulate samples of them that lead to system failure. In each iteration, a set with high likelihood of containing the optimal design parameters is identified using a single reliability analysis. Statistical properties for the identification and stopping criteria for the iterative approach are discussed. For problems that are characterized by small sensitivity around the optimal design choice, a combination of SSO with other optimization algorithms is proposed for enhanced overall efficiency.
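
A crude one-dimensional illustration of the identification step may make this concrete. It uses brute-force rejection sampling instead of the MCMC techniques the paper implements, and the quadratic limit state is a made-up stand-in: the sub-interval where failure samples are sparsest, relative to its length, is the candidate region for the optimal design.

```python
# Toy 1-D illustration of the SSO identification idea. Failure samples are
# obtained by brute-force rejection (the paper uses MCMC); the limit state
# is an invented placeholder whose most reliable design is near 0.7.
import numpy as np

rng = np.random.default_rng(3)
n = 200000
phi = rng.uniform(0.0, 1.0, n)                    # design variable, made uncertain
theta = rng.standard_normal(n)                    # uncertain model parameter
fails = theta > 3.0 - 2.0 * (phi - 0.7) ** 2      # failure event F

# With equal-width bins, the bin holding the fewest failure samples has the
# lowest estimated conditional failure probability, so it plausibly
# contains the optimal design and would be refined in the next iteration.
counts, edges = np.histogram(phi[fails], bins=20, range=(0.0, 1.0))
k = np.argmin(counts)
print(f"candidate optimal sub-interval: [{edges[k]:.2f}, {edges[k + 1]:.2f})")
```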


Effects of Two Alternative Representations of Ground Motion Uncertainty on Probabilistic Seismic Demand Assessment of Structures

January 2008 · 118 Reads · 73 Citations · Earthquake Engineering & Structural Dynamics

A probabilistic representation of the entire ground-motion time history can be constructed based on a stochastic model that depends on seismic source parameters. An advanced stochastic simulation scheme known as Subset Simulation can then be used to efficiently compute the small failure probabilities corresponding to structural limit states. Alternatively, the uncertainty in the ground motion can be represented by adopting a parameter (or a vector of parameters) known as the intensity measure (IM) that captures the dominant features of the ground shaking. Structural performance assessment based on this representation can be broken down into two parts, namely, the structure-specific part requiring performance assessment for a given value of the IM, and the site-specific part requiring estimation of the likelihood that ground shaking with a given value of the IM takes place. The effect of these two alternative representations of ground-motion uncertainty on probabilistic structural response is investigated for two hazard cases. In the first case, these two approaches are compared for a scenario earthquake event with a given magnitude and distance. In the second case, they are compared using a probabilistic seismic hazard analysis to take into account the potential of the surrounding faults to produce events with a range of possible magnitudes and distances. The two approaches are compared on the basis of the probabilistic response of an existing reinforced-concrete frame structure, which is known to have suffered shear failure in its columns during the 1994 Northridge Earthquake in Los Angeles, California. Copyright © 2007 John Wiley & Sons, Ltd.
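
The IM-based part of this comparison rests on a standard decomposition: the rate of a response threshold being exceeded is the structure-specific fragility integrated against the site-specific hazard curve. The lognormal fragility and power-law hazard below are generic textbook stand-ins, not the paper's models for the Northridge-damaged frame:

```python
# Generic sketch of the IM-based risk decomposition: integrate the
# structure-specific term P(EDP > z | IM) against the site hazard curve
# lambda(IM). Both curves are illustrative stand-ins.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

im = np.linspace(0.01, 3.0, 300)                     # e.g. Sa(T1) in g
hazard = 1e-4 * im ** -2.5                           # lambda(IM > im), toy power law
fragility = stats.norm.cdf(np.log(im / 0.8) / 0.5)   # P(EDP > z | IM), lognormal

# |d lambda / d IM| weights each IM level by its annual rate of occurrence.
rate = trapezoid(fragility * -np.gradient(hazard, im), im)
print("mean annual rate of response exceedance:", rate)
```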


Simulation of an 1857-like Mw 7.9 San Andreas fault earthquake and the response of tall steel moment frame buildings in southern California - A prototype study

D Komatitsch · [...] · J L Beck

In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing in a southeasterly direction for more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake would have had significant long-period content (2-8 s), and the objective of this study is to quantify the impact of such an earthquake on two 18-story steel moment frame building models, hypothetically located at 636 sites on a 3.5 km grid in southern California. End-to-end simulations include modeling the source and rupture of a fault at one end, numerically propagating the seismic waves through the earth structure, simulating the damage to engineered structures and estimating the economic impact at the other end using high-performance computing. In this prototype study, we use an inferred finite source model of the magnitude 7.9, 2002 Denali fault earthquake in Alaska, and map it onto the San Andreas fault with the rupture originating at Parkfield and propagating southward over a distance of 290 km. Using the spectral element seismic wave propagation code, SPECFEM3D, we simulate an 1857-like earthquake on the San Andreas fault and compute ground motions at the 636 analysis sites. Using the nonlinear structural analysis program, FRAME3D, we subsequently analyze 3-D structural models of an existing tall steel building designed using the 1982 Uniform Building Code (UBC), as well as one designed according to the 1997 UBC, subjected to the computed ground motion at each of these sites. We summarize the performance of these structural models on contour maps of peak interstory drift. We then perform an economic loss analysis for the two buildings at each site, using the Matlab Damage and Loss Analysis (MDLA) toolbox developed to implement the PEER loss-estimation methodology. The toolbox includes damage prediction and repair cost estimation for structural and non-structural components and allows for the computation of the mean and variance of building repair costs conditional on engineering demand parameters (i.e. inter-story drift ratios and peak floor accelerations). Here, we modify it to treat steel-frame high-rises, including aspects such as mechanical, electrical and plumbing systems, traction elevators, and the possibility of irreparable structural damage. We then generate contour plots of conditional mean losses for the San Fernando and the Los Angeles basins for the pre-Northridge and modern code-designed buildings, allowing for comparison of the economic effects of the updated code for the scenario event. In principle, by simulating multiple seismic events, consistent with the probabilistic seismic hazard for a building site, the same basic approach could be used to quantify the uncertain losses from future earthquakes.



Citations (66)


... Similar to the Painter Street bridge, the Meloland bridge has been part of the California Strong Motion Instrumentation Program (CSMIP), where its response under earthquakes has been monitored since 1978 by the California Department of Transportation (CALTRANS) (Hipley and Huang, 1997). Due to the availability of adequate data, this bridge has been evaluated as a case study by previous researchers (Maragakis et al., 1991; Werner et al., 1993; Maragakis et al., 1994; Zhang and Makris, 2001; Zhang and Makris, 2002; Kwon and Elnashai, 2006; Kwon and Elnashai, 2008; Kampas and Makris, 2013; Shamsabadi et al., 2013b; Bebamzadeh et al., 2014; Rahmani et al., 2014). A rich literature and accessible data regarding this bridge make it possible to evaluate the modelling accuracy by comparing the estimations with the recorded results and previous studies. ...

Reference:

Nonlinear Soil-structure Interaction of Bridges : Practical Approach Considering Small Strain Behaviour
Model Identification and Seismic Analysis of Meloland Road Overcrossing
  • Citing Article
  • May 1993

... The batch approach requires significantly higher computational costs, as a large number of samples are needed to reach a detailed balance condition. Although more efficient sampling methods, such as adaptive MCMC and transitional MCMC proposed by Beck and Au [38] and by Ching and Chen [39], respectively, have been introduced to improve efficiency, the computational burden remains considerably higher compared to the recursive approach. ...

Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation
  • Citing Article
  • April 2002

Journal of Engineering Mechanics

... For the uncertainty in the event location, the logarithm of the epicentral distance, r, for the earthquake events is assumed to follow a Gaussian distribution with mean log(20) km and standard deviation 0.5. Figure 7(a) illustrates the PDFs for M and r. For the ground motion, the probabilistic model described in detail in [26] is adopted: the high-frequency and low-frequency (long-period) components of the earthquake excitation are separately modeled and then combined to form the acceleration input. The high-frequency component is modeled by the stochastic method (see [27] for more details), which involves modifying a white-noise sequence Z_w by (i) a time-domain envelope function and (ii) a frequency-domain filter, both expressed as nonlinear functions of the moment magnitude and the epicentral distance of the seismic event. ...
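
A compact sketch of that two-step construction for the high-frequency component, with a generic envelope and filter standing in for the magnitude- and distance-dependent functions of the cited model:

```python
# Sketch of the stochastic-method step quoted above: shape a white-noise
# sequence Z_w with a time-domain envelope, then filter it in the frequency
# domain. The gamma-like envelope and single-corner filter are generic
# placeholders, not the cited model's magnitude/distance-dependent forms.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 4096
t = np.arange(n) * dt

z_w = rng.standard_normal(n)                     # white-noise sequence Z_w
envelope = (t / 5.0) ** 2 * np.exp(-t / 5.0)     # time-domain envelope (toy)
shaped = envelope * z_w

freq = np.fft.rfftfreq(n, dt)
fc = 2.0                                         # corner frequency in Hz (toy)
filt = (freq / fc) ** 2 / (1.0 + (freq / fc) ** 2)   # toy source spectrum shape
accel = np.fft.irfft(np.fft.rfft(shaped) * filt, n)
print("peak toy ground acceleration:", np.abs(accel).max())
```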

Smart Base Isolation Design including Model Uncertainty in Ground Motion Characterization
  • Citing Conference Paper
  • June 2007

... In addition, the sampling approaches, which have been extensively explored in recent decades, have shown their advantages due to high accuracy in terms of the reliability calculations. Some relevant contributions on the sampling algorithms could be standard Monte Carlo algorithms [18,19] and advanced sampling techniques including importance sampling [20][21][22][23][24], the BUS algorithm (Bayesian updating with structural reliability methods) [25], and subset simulation [26,27] with Gibbs sampling [28], as well as other advanced sampling approaches [29][30][31][32][33]. Besides, surrogate modeling, mainly Gaussian processes, has also been explored in the context of reliability updating within the Bayesian perspective [34,35]. Recently, a moment-based approach has also been explored for reliability updating within the classical Bayesian framework [36]. ...

First Excursion Probabilities for Linear Systems by Very Efficient Importance Sampling
  • Citing Article
  • July 2001

Probabilistic Engineering Mechanics

... These findings coincide with other analytical studies performed by a number of researchers, which have shown, for short-span overpass bridges, that the seismic response of the bridge superstructure is integrated with the response of the abutments and embankment soil and is largely influenced by the response of the soil foundation (Werner et al., 1987, 1990, 1994; Wilson and Tan, 1990a,b). Based on regional empirical information, five idealized soil profiles and three foundation types were used to construct the finite element models with realistic structural details and soil profile data. ...

Use of Strong Motion Records for Model Evaluation and Seismic Analysis of a Bridge Structure
  • Citing Conference Paper
  • July 1994

... The gap therefore, at any given point of time, between computational need and computational resource, i.e., between the grand problem that the community would like to solve and the problem it is able to tackle due to hardware and/or algorithmic limitations, has always existed. Naturally, then, continual efforts have been made by the community to invent clever and efficient simulation schemes [7,10-15]. While substantial effort continues to be made to develop and benchmark new and efficient sampling schemes, the limits of performance of a given algorithm, e.g., what is the best attainable accuracy of the method for a fixed computational effort and whether that is good enough, have not received comparable attention. ...

Benchmark Study on Reliability Estimation in Higher Dimensions of Structural Systems – An Overview
  • Citing Conference Paper
  • September 2005

... The development of both improved analysis methods and models could allow the evaluation of the seismic risk related to precast industrial buildings. Both collapse risk and direct economic losses could be evaluated by means of numerical results, as already provided for RC frame structures [8,9]. Some validations of numerical models were performed in the past, and some examples concerned the Emilia-Romagna earthquakes: they investigated the behavior of some structural typologies, such as historical constructions [10] and masonry structures [10,11]. ...

Incorporating Losses due to Repair Costs, Downtime and Fatalities in Performance-based Earthquake Engineering
  • Citing Conference Paper
  • June 2007

... Stochastic Subset Optimization (SSO) was initially suggested for reliability-based optimization problems (for a proper definition of such problems see Section 5.1 later on) in [9] and has been recently [8] extended to address general stochastic design problems, such as the one in (2). The basic features of the algorithm are summarized next. ...

Reliability-based Optimal Design by Efficient Stochastic Simulation
  • Citing Conference Paper
  • June 2006

... One type of method for adding information uses dynamic vibration response measurements. These data can be used in a Bayesian probabilistic framework to update structural model parameters (e.g., Beck 2000, Papadimitriou 2004). These techniques employ an indirect inference from vibration data into tangible engineering parameters such as stiffness and mass. ...

Updating Robust Reliability for Bridges Using Measured Vibrational Data
  • Citing Conference Paper
  • December 1999