## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

In recent years, Bayesian model updating techniques based on measured data have been applied to system identification of structures and to structural health monitoring. A fully probabilistic Bayesian model updating approach provides a robust and rigorous framework for these applications because of its ability to characterize modeling uncertainties associated with the underlying structural system and because it rests exclusively on the probability axioms. The plausibility of each structural model within a set of possible models, given the measured data, is quantified by the joint posterior probability density function of the model parameters. This Bayesian approach requires the evaluation of multidimensional integrals, which usually cannot be done analytically. Recently, several Markov chain Monte Carlo simulation methods have been developed to solve the Bayesian model updating problem; in general, however, their efficiency deteriorates as the dimension of the model parameter space grows. In this paper, the Hybrid Monte Carlo method (also known as the Hamiltonian Markov chain method) is investigated, and we show how it can be used to solve higher-dimensional Bayesian model updating problems. Practical issues affecting the feasibility of the Hybrid Monte Carlo method for such problems are addressed, and improvements are proposed to make it more effective and efficient for solving such model updating problems. New formulae for Markov chain convergence assessment are derived. The effectiveness of the proposed approach for Bayesian model updating of structural dynamic models with many uncertain parameters is illustrated with a simulated-data example involving a ten-story building that has 31 model parameters to be updated.
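
The core idea of Hybrid (Hamiltonian) Monte Carlo can be illustrated with a minimal sketch: momentum is resampled, Hamiltonian dynamics are integrated with the leapfrog scheme, and the endpoint is accepted or rejected by a Metropolis test on the total energy. This is a generic illustration, not the authors' exact algorithm; `hmc_sample` and its arguments are hypothetical names.

```python
import numpy as np

def hmc_sample(log_post, grad_log_post, theta0, n_samples=1000,
               eps=0.1, n_leapfrog=20, rng=None):
    """Minimal Hybrid (Hamiltonian) Monte Carlo sampler (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(theta.size)      # resample auxiliary momentum
        theta_new, p_new = theta.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * eps * grad_log_post(theta_new)
        for _ in range(n_leapfrog - 1):
            theta_new += eps * p_new
            p_new += eps * grad_log_post(theta_new)
        theta_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(theta_new)
        # Metropolis accept/reject on the change in total energy
        h_old = -log_post(theta) + 0.5 * p @ p
        h_new = -log_post(theta_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, distant proposals are accepted with high probability, which is what makes the method attractive in high-dimensional parameter spaces.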


... The i.i.d. assumption for observation errors, which is frequently adopted in studies on probabilistic structural updating [26,27], implies that knowing the uncertainty at other times or locations does not affect the uncertainty at a specific time or location. This assumption is justified by the fact that the errors depend on the accuracy of the specific deflection-collecting method, which remains spatiotemporally invariant for the structure. ...

... The shrinkage of the concrete and reduced structural rigidity also give rise to increasing deflection. Herein, the stochastic state variables are defined as dimensionless uncertainty parameters that characterize the variations of the abovementioned variables from their nominal values [27], and these parameters are assumed to be spatially unvarying within the bridge. Creep and shrinkage. ...

... Structural rigidity. A dimensionless stochastic variable θ_e is introduced to represent the variation in structural rigidity, which is characterized in terms of the variation of the global stiffness matrix [27]: ...

The time-dependent deflection data collected by a structural health monitoring (SHM) system contain information indicating the damage accumulation of prestressed concrete bridges (PSCBs). However, for this information to be used, it must be fully translated into reliable metrics. Model-based structural identification (SI) targeting the time-dependent deflection of PSCBs is affected by time-dependent material behavior such as creep and shrinkage, as well as by the unknown path of structural deterioration, which should be considered in a stochastic volatility model (SVM). This paper presents an inference framework for an SVM conditioned on the time-dependent deflection of PSCBs. In this framework, the augmented state space includes the full ranges of the unobserved state variables affecting the time-dependent deflection of PSCBs, namely, structural rigidity, creep, shrinkage, prestress level and dead load level, along with the volatility parameters associated with the deterioration path of structural rigidity, which is modeled via the Wiener process. Exploiting the latent structure of the SVM, a cyclic Markov chain Monte Carlo (MCMC) sampler is proposed to draw samples from the joint posterior distribution of the unobserved state variables and volatility parameters. In an illustrative example, two-year continuous deflection monitoring data of an existing bridge are utilized as the target information for inference. The updated model can indicate the accumulated damage of the case bridge over the monitoring period. The proposed SI framework is able to support accurate probabilistic analysis of the time-dependent deflection of PSCBs.
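
The deterioration path of structural rigidity modeled as a Wiener process can be sketched as a drifted random walk in continuous time. The drift and volatility values below are hypothetical placeholders, not calibrated parameters from the paper, and `wiener_degradation` is an assumed name.

```python
import numpy as np

def wiener_degradation(t, mu=-1e-4, sigma=0.002, rng=None):
    """Simulate a dimensionless rigidity factor theta_e(t) following a
    Wiener process with drift mu and volatility sigma (illustrative values).
    t is an increasing array of observation times (e.g., days)."""
    rng = rng or np.random.default_rng(1)
    t = np.asarray(t, dtype=float)
    dt = np.diff(t, prepend=t[0])                     # dt[0] = 0 by construction
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(t.size)
    increments[0] = 0.0                               # start from the nominal value
    return 1.0 + np.cumsum(increments)                # deterioration path
```

In the inference setting described above, mu and sigma would be among the volatility parameters sampled from the joint posterior rather than fixed as here.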

... This procedure is referred to as Bayesian model class selection (Beck and Yuen, 2004). However, it is typically non-trivial to compute the evidence because of the multi-dimensional integral in Equation (1.21), so it is usually estimated numerically using MCMC methods (see, e.g., Ching and Chen, 2007; Cheung and Beck, 2009). By contrast, the evidence does not affect the shape of the posterior distribution and can be neglected when sampling from the posterior distribution. ...

... Bayesian model updating using observed dynamic response data has a broad range of applications in a number of engineering fields (Cheung and Beck, 2009; Jensen et al., 2013). In model updating, uncertainties in both the modelling and observation procedures should be appropriately considered; hence, uncertainty quantification (UQ) metrics are significant for comprehensively and quantitatively measuring the stochastic discrepancy between model predictions and observations. ...

... On the other hand, Markov chain Monte Carlo (MCMC) algorithms are generally accepted as the most attractive Bayesian inference techniques (Cheung and Beck, 2009). Of particular importance among these algorithms is transitional Markov chain Monte Carlo (TMCMC) (Ching and Chen, 2007), which has also been utilized to perform the ABC updating framework. ...

In the real world, a significant challenge faced in the safe operation and maintenance of infrastructures is the lack of available information or data. This results in a large degree of uncertainty and the requirement for robust and efficient uncertainty quantification (UQ) tools in order to derive the most realistic estimates of the behavior of structures. While the probabilistic approach has long been utilized as an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that the approach often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of available information. To avoid the inclusion of subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among various types of imprecise probability models since it can straightforwardly provide a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. The recent developments including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods have strengthened the subjective assumption-free approach for uncertainty calibration and propagation. However, these methods based on the distributional p-box depend on prior knowledge determining a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, strengthening the subjective assumption-free UQ approach. To achieve the above target, this thesis presents five main developments to improve the Bhattacharyya distance-based ABC and NISS frameworks.
The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with the adaptive Kriging-based reliability method, namely AK-MCMC. The second development, a distribution-free stochastic model updating framework, is based on the combined application of the staircase density functions and the Bhattacharyya distance. The staircase density functions can approximate a wide range of distributions arbitrarily closely; hence this development makes it possible to perform the Bhattacharyya distance-based ABC without limiting hypotheses on the distribution families of the parameters to be updated. The aforementioned two developments are then integrated in the third development to provide a solution to the latest edition (2019) of the NASA UQ challenge problem. The model updating tasks under very challenging conditions, where prior information on the aleatory parameters is extremely limited other than a common boundary, are successfully addressed with the above distribution-free stochastic model updating framework. Moreover, the NISS approach, which simplifies the high-dimensional optimization to a set of one-dimensional searches by a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. This challenge, at the same time, elucidates the limitations of the current developments; hence the fourth development aims at addressing the limitation that the staircase density functions are designed for univariate random variables and cannot account for parameter dependencies.
In order to calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters using Gaussian copula functions whose marginal distributions are the staircase density functions. This further strengthens the assumption-free approach for uncertainty calibration, in which no prior information on the parameter dependencies is required. Finally, the fifth development, a distribution-free uncertainty propagation framework, is based on another application of the staircase density functions to the NISS class of methods, and it is applied to efficiently solve the reliability analysis subproblem of the NASA UQ challenge 2019. The above five developments have successfully strengthened the assumption-free approach for both uncertainty calibration and propagation thanks to the ability of the staircase density functions to approximate arbitrary distributions. The efficiency and effectiveness of these developments are demonstrated on real-world applications, including the NASA UQ challenge 2019.

... In general, the posterior PDF of uncertain parameters specified in Eq. 19 can hardly be calculated, since the integrals involved in the model evidence in Eq. 19 cannot be analytically evaluated for high-dimensional spaces. To address this problem, numerical methods based on probabilistic sampling simulation have been proposed as alternative ways to statistically estimate the integrals and have been successfully incorporated into the Bayesian inference framework [2,4-6]. Among the developed sampling methods, the Transitional Markov Chain Monte Carlo (TMCMC) algorithm proposed by Ching and Chen [5] has been demonstrated to efficiently draw samples from some difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with flat manifolds) and to estimate the model evidence. ...
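
The idea behind TMCMC can be conveyed with a simplified sketch: samples move from the prior to the posterior through a sequence of tempered targets proportional to prior × likelihood^β_j, with importance resampling and a Metropolis move at each stage. This is a stripped-down illustration of the idea (fixed tempering increments, one move per sample), not the full Ching and Chen algorithm; all names are hypothetical.

```python
import numpy as np

def tmcmc(log_like, prior_sample, log_prior, n=500, rng=None):
    """Simplified Transitional MCMC sketch: temper from prior (beta = 0)
    to posterior (beta = 1) via resampling plus Metropolis rejuvenation."""
    rng = rng or np.random.default_rng(0)
    theta = prior_sample(n, rng)                      # (n, d) prior samples
    beta = 0.0
    while beta < 1.0:
        ll = np.array([log_like(t) for t in theta])
        beta_new = min(1.0, beta + 0.2)               # fixed increment for simplicity
        w = np.exp((beta_new - beta) * (ll - ll.max()))   # plausibility weights
        idx = rng.choice(n, size=n, p=w / w.sum())        # importance resampling
        theta = theta[idx]
        # one random-walk Metropolis move per sample to restore diversity
        step = 0.2 * theta.std(axis=0) + 1e-12
        for i in range(n):
            cand = theta[i] + step * rng.standard_normal(theta.shape[1])
            log_a = (beta_new * log_like(cand) + log_prior(cand)
                     - beta_new * log_like(theta[i]) - log_prior(theta[i]))
            if np.log(rng.random()) < log_a:
                theta[i] = cand
        beta = beta_new
    return theta
```

The full algorithm additionally chooses each β increment adaptively from the sample weights and uses the stage-wise weight averages to estimate the model evidence.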

... Based on Bayes' rule, the posterior PDF of the unknown parameter vector x = {θ^T, σ²}^T after updating is given by Cheung and Beck [4] as follows: ...

... For constructing the model proposal, we borrow Hamiltonian dynamics from physics. The idea of Hamilton's equations is applied to Bayesian statistics in formulating Hamiltonian Monte Carlo [37,38]. ...

... The key idea of a spatio-temporal process via modified Hamiltonian equations: Let the total energy H(θ, p), also known as the Hamiltonian function, be defined as V(θ) + W(p), where V(θ) is the potential energy and W(p) = (1/2) pᵀ M⁻¹ p, with M being a chosen (mass) matrix, is the kinetic energy (see [38] for more details). Then the original Hamiltonian equations are given by ...
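
The excerpt truncates before the equations themselves. For the Hamiltonian H(θ, p) = V(θ) + W(p) with W(p) = (1/2) pᵀ M⁻¹ p as defined above, Hamilton's equations take the standard form:

```latex
\frac{d\theta}{dt} = \frac{\partial H}{\partial p} = M^{-1} p,
\qquad
\frac{dp}{dt} = -\frac{\partial H}{\partial \theta} = -\nabla_{\theta} V(\theta).
```

In the Hamiltonian Monte Carlo setting, V(θ) is taken as the negative log posterior density, so the dynamics are driven by its gradient.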

The solutions of Hamiltonian equations are known to describe the underlying phase space of the mechanical system. In Bayesian statistics, the only place where the properties of solutions to the Hamiltonian equations are successfully applied is Hamiltonian Monte Carlo. In this article, we propose a novel spatio-temporal model using a strategic modification of the Hamiltonian equations, incorporating appropriate stochasticity via Gaussian processes. The resultant spatio-temporal process, continuously varying with time, turns out to be nonparametric, nonstationary, nonseparable and non-Gaussian. Besides, the lagged correlations tend to zero as the spatio-temporal lag tends to infinity. We investigate the theoretical properties of the new spatio-temporal process, along with its continuity and smoothness properties. Considering the Bayesian paradigm, we derive methods for complete Bayesian inference using MCMC techniques. Applications of our new model and methods to two simulation experiments and two real data sets revealed encouraging performance.

... Physics-based parametric approaches are used where a functional form is available or synthesised for the system and its parameters are estimated [7-9]. Meanwhile, non-parametric approaches such as the Bayesian probabilistic framework and Markov chain Monte Carlo have been put forward to tackle model uncertainty in the identification of nonlinear systems [10-13]. Similarly, deep learning and neural networks have also opened paths to nonlinear system identification [14,15]. ...

... now using β, (11) gives: ...

In the field of structural dynamics, system identification usually refers to building mathematical models from an experimentally-obtained data set. To build reliable models using the measurement data, the mathematical model must be representative of the structure. In this work, attention is given to robust identification of nonlinear structures. We draw inspiration from reduced order modelling to determine a suitable model for the system identification. There are large similarities between reduced order modelling and system identification fields, i.e. both are used to replicate the dynamics of a system using a mathematical model with low complexity. Reduced Order Models (ROMs) can accurately capture the physics of a system with a low number of degrees of freedom; thus, in system identification, a model based on the form of a ROM is potentially more robust. Nonlinear system identification of a structure is presented, where inspiration is taken from a novel ROM to form the model. A finite-element model of the structure is built to simulate an experiment and the identification is performed. It is shown how the ROM-inspired model in the system identification improves the accuracy of the predicted response, in comparison to a standard nonlinear model. As the data is gathered from simulations, system identification is first demonstrated on the high fidelity data, then the fidelity of data is reduced to represent a more realistic experiment. A good response agreement is achieved when using the ROM-inspired model, which accounts for the kinetic energy of unmodelled modes. The estimated parameters of this model are also demonstrated to be more robust and rely on the underlying physics of the system.

... In addition to the above research, there are many improved MCMC algorithms. To overcome the low sampling efficiency and complex posterior probability distributions of the traditional MCMC algorithm, Cheung and Beck [31] proposed a Hybrid MCMC (HMCMC) algorithm, which shows great potential for Bayesian model updating with high-dimensional uncertain parameters. Ching and Chen [32] proposed the Transitional MCMC (TMCMC) algorithm. ...

... Currently, the most widely used MCMC sampling algorithm is the standard MH algorithm. As the parameter dimension increases, the "sampling stagnation" phenomenon readily occurs, making it difficult for the Markov chain to reflect the statistical characteristics of the whole parameter space [31]. To improve the quality of the samples, we use the more efficient DRAM algorithm as the foundational sampling method to generate multi-source Markov chains by setting different initial variances for the proposal distributions. ...

In this paper, we present a new Bayesian model updating method that overcomes the low sampling efficiency and over-reliance on a single-chain proposal distribution of the traditional Markov chain Monte Carlo algorithm. Delayed rejection and an adaptive strategy are introduced into the sampling process to obtain a number of Markov chains from different proposal distributions, which can independently adjust the variances of the proposal distributions and improve the acceptance rate of candidate samples. An abnormal-chain detection criterion is adopted to eliminate abnormal Markov chains. The initial variances of the different proposal distributions are then treated as analogous to the accuracy indices of multi-source sensors in the signal domain, and a multi-source sensor grouping weighted fusion algorithm is introduced to fuse the screened Markov chains so as to approach the posterior probability distribution with high accuracy. The implicit relationship between the parameters to be updated and the responses of the finite element model is mined by a Kriging surrogate model to improve computational efficiency. The results of the case studies demonstrate that the proposed method has good updating efficiency, excellent updating accuracy and a higher acceptance rate of samples, providing a new approach to stochastic model updating.

... Ching et al. [8] proposed the Transitional MCMC (TMCMC) method, which samples from a sequence of intermediate probability density functions (pdfs) to estimate the posterior distributions; this avoids sampling directly from the complex posterior pdf and improves sampling efficiency. Cheung et al. [9] proposed a Hybrid MCMC (HMCMC) method, which shows great potential for Bayesian model updating with high-dimensional uncertain parameters. However, most of the above methods are based on a single Markov chain, which means that they rely too heavily on the selection of the proposal distribution variance; the sampling efficiency of single-chain MCMC methods is low when the parameter dimension is high, and the quality of the obtained posterior samples is poor. ...

... The structural model shown in figure 1 is a common space truss structure in engineering, consisting of 28 nodes, 66 rod elements and 48 degrees of freedom. The constraint condition is that the 4 supports (nodes 1, 8, 9 and 16) are fixed. When analyzing the structure, only the Y- and Z-direction DOFs of each node are considered, which are numbered in turn. ...

To address the difficulty of selecting the proposal distribution and the low computational efficiency of the traditional Markov chain Monte Carlo algorithm, a Bayesian model updating method using surrogate-model technology and a simulated annealing algorithm is proposed. Firstly, the Kriging surrogate model is used to mine the implicit relationship between the structural parameters to be updated and the corresponding dynamic responses, and a Kriging model that meets the accuracy requirement replaces the complex finite element model in the iterative calculation to improve model updating efficiency. Then, the simulated annealing algorithm is introduced to recombine the Markov chains from different proposal distributions into high-quality posterior samples, which are used to estimate the posterior distributions of the parameters. Finally, a space truss structure is used to verify the effectiveness of the proposed method.

... For example, Tsai & Li (2008) and Chitsazan & Tsai (2015) adopted Bayesian model averaging for groundwater modelling. In addition, Cheung & Beck (2009) and Lo & Leung (2019) carried out Bayesian model updating to quantify parameter uncertainties using field data. This class of problems is also known as back analysis. ...


Excavation is a complex multistage problem, where field responses of soil properties such as deflections at one stage of the operation depend on responses at the preceding stage. In order to help asset managers make better decisions and thus improve safety, soil properties should be accurately identified using sensor-data collected at the current stage. This task is not easy to accomplish, mainly because of its intrinsic ambiguity. Sensors usually only measure effects (e.g., field responses) but not causes (e.g., soil parameter values). A strategy that helps meet this challenge is to perform inverse analysis to validate soil parameter values. Error-Domain Model Falsification (EDMF) is a methodology that achieves this goal. More precisely, EDMF helps identify good behavior models of excavation by falsifying soil parameter values for which the predictions of the corresponding behavior models cannot explain field-response measurements collected by sensors. However, a remaining challenge is the identification of soil parameter values that are not falsified by EDMF, especially when the computation of the predictions is time-consuming. This paper proposes a new framework that combines EDMF and an optimization algorithm for efficient identification of soil parameter values. Results on a full-scale excavation site in Singapore show that the new framework is robust and accurate, and it has the potential to improve current practice, which relies primarily on surrogate models without uncertainty.
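
The falsification step at the heart of EDMF can be sketched compactly: a candidate soil parameter set survives only if the residual between its model prediction and the sensor measurement lies within the combined error thresholds at every measurement location. The function name, model, and threshold values below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def edmf_falsify(candidates, model, measurements, t_low, t_high):
    """Error-Domain Model Falsification (sketch): keep only candidate
    parameter sets whose prediction residuals lie inside the error
    thresholds [t_low, t_high] at every sensor location."""
    keep = []
    for theta in candidates:
        residuals = model(theta) - measurements   # combined-error realization
        if np.all((residuals >= t_low) & (residuals <= t_high)):
            keep.append(theta)                    # candidate not falsified
    return np.array(keep)
```

In practice the thresholds are derived from the combined modelling and measurement uncertainty at a chosen confidence level, so the surviving set is a guaranteed-coverage candidate population rather than a single best-fit estimate.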

... This approach requires a large number of iterations and is computationally expensive when updating a complex structure. Cheung and Beck [34] developed an efficient algorithm based on Hamiltonian dynamics to expedite convergence. In this formulation, the parameter space is mapped to a Hamiltonian system such that the solution of the governing equations generates the required candidate states. ...

... In this formulation, the parameter space is mapped with a Hamiltonian system such that the solution of the governing equation generates the required candidate states. Conventionally, the leapfrog algorithm [34,35] is used to solve the governing equations. This technique requires numerical integration of the objective function with respect to the uncertain parameters. ...
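
The leapfrog scheme mentioned in the excerpt alternates half-step momentum updates with full-step position updates; because it is symplectic and time-reversible, the Hamiltonian is nearly conserved along the trajectory. A minimal sketch with a unit mass matrix (the function name is an illustration, not code from the cited works):

```python
import numpy as np

def leapfrog(grad_V, theta, p, eps, n_steps):
    """Leapfrog integration of dtheta/dt = p, dp/dt = -grad_V(theta)
    (unit mass matrix), as used to propose candidate states in HMC."""
    theta = np.asarray(theta, dtype=float).copy()
    p = np.asarray(p, dtype=float).copy()
    p -= 0.5 * eps * grad_V(theta)        # initial half-step for momentum
    for _ in range(n_steps - 1):
        theta += eps * p                  # full-step position update
        p -= eps * grad_V(theta)          # full-step momentum update
    theta += eps * p
    p -= 0.5 * eps * grad_V(theta)        # final half-step for momentum
    return theta, p
```

Only the gradient of the potential (negative log posterior) is needed, not its integral, which is what keeps each proposal cheap relative to the accuracy of the trajectory.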

The load rating of a steel truss bridge is experimentally identified in this study using an improved Bayesian model updating algorithm. The initial element model is sequentially updated to match the static and dynamic characteristics of the bridge. For this purpose, a modified version of the Hamiltonian Monte Carlo (HMC) simulation is adopted for closed-form candidate generation that helps in faster convergence compared to the Markov Chain Monte Carlo simulation. The updated model works as a digital twin of the original structure to predict its load-carrying capacity and performance under proof or design load. The proposed approach incorporates in-situ conditions in its formulation and helps to reduce the risk involved in bridge load testing at its full capacity. The rating factor for each member is estimated from the updated model, which also indicates the weak links and possible failure mechanism. The efficiency of the improved HMC-based algorithm is demonstrated using limited sensor data, which can be easily adopted for other existing bridges.

... Bayesian model updating using observed dynamic response data has a broad range of applications in a number of engineering fields (Katafygiotis and Beck 1998; Cheung and Beck 2009; Jensen et al. 2013; Rocchetta et al. 2018). In Bayesian model updating, uncertainties in both the simulation and observation procedures should be appropriately considered; hence, uncertainty quantification (UQ) metrics are significant in order to comprehensively and quantitatively measure the stochastic discrepancy between model predictions and observations. ...

... On the other hand, Markov chain Monte Carlo (MCMC) algorithms are generally accepted as the most attractive Bayesian inference tools (Beck and Au 2002; Cheung and Beck 2009). Of particular importance among these algorithms is transitional Markov chain Monte Carlo (TMCMC) (Ching and Chen 2007; Betz et al. 2016), and Bi et al. (2019) also utilized TMCMC to perform the ABC updating framework. ...

In this study, a two-step approximate Bayesian computation (ABC) updating framework using dynamic response data is developed. In this framework, the Euclidean and Bhattacharyya distances are utilized as uncertainty quantification (UQ) metrics to define approximate likelihood functions in the first and second steps, respectively. A new Bayesian inference algorithm combining Bayesian updating with structural reliability methods (BUS) with the adaptive Kriging model is then proposed to effectively execute the ABC updating framework. The performance of the proposed procedure is demonstrated with a seismic-isolated bridge model updating application using simulated seismic response data. This application shows that the Bhattacharyya distance is a powerful UQ metric with the capability to fully recreate the distribution of the target observations, and that the proposed procedure can provide satisfactory results with much-reduced computational demand compared with other well-known methods, such as transitional Markov chain Monte Carlo (TMCMC).
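
As a UQ metric between two sample populations, the Bhattacharyya distance can be estimated from normalized histograms on a shared binning: the distance is the negative log of the Bhattacharyya coefficient, the sum of sqrt(p_i q_i) over bins. A minimal sketch (the histogram-based estimator and function name are illustrative choices, not the paper's exact discretization):

```python
import numpy as np

def bhattacharyya_distance(x, y, bins=20):
    """Bhattacharyya distance between two one-dimensional sample sets,
    estimated from normalized histograms on a common binning."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    px, edges = np.histogram(x, bins=bins, range=(lo, hi))
    py, _ = np.histogram(y, bins=edges)
    px = px / px.sum()
    py = py / py.sum()
    bc = np.sum(np.sqrt(px * py))          # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))         # guard against log(0) for disjoint data
```

Unlike the Euclidean distance between summary statistics, this metric is sensitive to the full shape of the two distributions, which is why it is suited to the second, refining step of the framework.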

... Model-based techniques involve system identification and model updating techniques [9]. They rely on expert knowledge to build accurate, physics-based models of structures that are calibrated based on real structure measurements [10][11][12][13]. However, model updating techniques can be expensive, time-consuming, and prone to modeling errors, especially for complex structures [14,15]. ...

Structural damage detection using unsupervised learning methods has been a trending topic in the structural health monitoring (SHM) research community during the past decades. In the context of SHM, unsupervised learning methods rely only on data acquired from intact structures for training the statistical models. Consequently, they are often seen as more practical than their supervised counterpart in implementing an early-warning damage detection system in civil structures. In this article, we review publications on data-driven structural health monitoring from the last decade that relies on unsupervised learning methods with a focus on real-world application and practicality. Novelty detection using vibration data is by far the most common approach for unsupervised learning SHM and is, therefore, given more attention in this article. Following a brief introduction, we present the state-of-the-art studies in unsupervised-learning SHM, categorized by the types of used machine-learning methods. We then examine the benchmarks that are commonly used to validate unsupervised-learning SHM methods. We also discuss the main challenges and limitations in the existing literature that make it difficult to translate SHM methods from research to practical applications. Accordingly, we outline the current knowledge gaps and provide recommendations for future directions to assist researchers in developing more reliable SHM methods.

... Since Eq. (9) usually does not have a closed-form solution, simulation techniques are typically leveraged to generate samples of the posterior distributions of possible network topologies. MCMC methods such as the Metropolis-Hastings (M-H) algorithm and Gibbs sampling are commonly used to perform Bayesian inference through sampling [40], [41]. In this study, the M-H algorithm is employed to implement the proposed Bayesian approach, since the Gibbs sampling algorithm requires an analytical solution for the conditional distribution of each parameter in the model. ...
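
The generic random-walk Metropolis-Hastings scheme referenced above can be sketched in a few lines: propose a symmetric Gaussian perturbation and accept with probability min(1, target ratio), so the (intractable) normalizing constant cancels. This is a textbook illustration on a continuous state, not the paper's graph-valued sampler with its infrastructure-dependent proposal.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=5000, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch).
    The symmetric Gaussian proposal cancels in the acceptance ratio."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    chain = []
    for _ in range(n_samples):
        cand = x + step * rng.standard_normal(x.shape)
        # accept with probability min(1, target(cand) / target(x))
        if np.log(rng.random()) < log_target(cand) - log_target(x):
            x = cand
        chain.append(x.copy())
    return np.array(chain)
```

For network reconstruction, the proposal instead perturbs a candidate adjacency structure (e.g., toggling edges), but the accept/reject logic is identical.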

Analyzing the behavior of complex interdependent networks requires complete information about the network topology and the interdependent links across networks. For many applications such as critical infrastructure systems, understanding network interdependencies is crucial to anticipate cascading failures and plan for disruptions. However, data on the topology of individual networks are often publicly unavailable due to privacy and security concerns. Additionally, interdependent links are often only revealed in the aftermath of a disruption as a result of cascading failures. We propose a scalable nonparametric Bayesian approach to reconstruct the topology of interdependent infrastructure networks from observations of cascading failures. Metropolis-Hastings algorithm coupled with the infrastructure-dependent proposal are employed to increase the efficiency of sampling possible graphs. Results of reconstructing a synthetic system of interdependent infrastructure networks demonstrate that the proposed approach outperforms existing methods in both accuracy and computational time. We further apply this approach to reconstruct the topology of one synthetic and two real-world systems of interdependent infrastructure networks, including gas-power-water networks in Shelby County, TN, USA, and an interdependent system of power-water networks in Italy, to demonstrate the general applicability of the approach.

... Dogan [6,7] discussed confidence interval estimation in SD models using bootstrapping and the likelihood ratio method. Cheung and Beck [8] explained Bayesian updating in Monte Carlo processes, and background material on the mathematics of Markov Chain Monte Carlo (MCMC) is readily available [9,10]. ...

We present a practical guide and step-by-step flowchart for establishing uncertainty intervals for key model outcomes in a simulation model in the face of uncertain parameters. The process started with Powell optimization to find a set of uncertain parameters (the optimum parameter set or OPS) that minimized the model fitness error relative to historical data. Optimization also helped in refinement of parameter uncertainty ranges. Next, traditional Monte Carlo (TMC) randomization or Markov Chain Monte Carlo (MCMC) was used to create a sample of parameter sets that fit the reference behavior data nearly as well as the OPS. Under the TMC method, the entire parameter space was explored broadly with a large number of runs, and the results were sorted for selection of qualifying parameter sets (QPS) to ensure good fit and parameter distributions that were centrally located within the uncertainty ranges. In addition, the QPS outputs were graphed as sensitivity graphs or box-and-whisker plots for comparison with the historical data. Finally, alternative policies and scenarios were run against the OPS and all QPS, and uncertainty intervals were found for projected model outcomes. We illustrated the full parameter uncertainty approach with a (previously published) system dynamics model of the U.S. opioid epidemic, and demonstrated how it can enrich policy modeling results.
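
The QPS screening step described above (keep the broadly sampled parameter sets whose fit to the reference data is nearly as good as the optimum) can be sketched as a simple filter. The function name, error model, and tolerance factor are hypothetical illustrations of the idea, not the published procedure's exact criteria.

```python
import numpy as np

def select_qps(param_sets, fitness_error, tolerance=1.2):
    """Traditional Monte Carlo QPS screening (sketch): keep parameter sets
    whose fitness error is within `tolerance` times the best error found."""
    errors = np.array([fitness_error(p) for p in param_sets])
    best = errors.min()                     # error of the best set found (~OPS)
    mask = errors <= tolerance * best       # qualifying parameter sets
    return param_sets[mask], errors[mask]
```

Running the model over all qualifying sets, rather than only the optimum, is what turns the point projection into an uncertainty interval for each outcome.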

... In general, model updating techniques can be subdivided into three groups: a) deterministic model updating [18,21-24], b) stochastic model updating through an uncertainty quantification approach [25-29], and c) multiple-modelling methods [29-31]. Both deterministic and stochastic model updating are essentially parameter estimation schemes that aim to solve an inverse problem by repeatedly running the forward problem. ...

Structural systems are in general quite sophisticated in terms of the interactions between individual members with respect to their stressing and resistance. In addition, the stochastic nature of the various loads and resistances differs considerably. A consistent statistical representation of the random variables for a structural component or member is therefore quite complex. In this context, the implementation of intelligent structures and the incorporation of measurement data can help to improve the prediction of random variables for a given member and therefore to update models in the design process. Depending on the obtained measurement data, the updating can refer to the quality of the statistical representation of different values, to the number of random variables, or even to the mechanical model and its boundary conditions. For that reason, a classification of updating possibilities in the context of the design process, described by different hierarchical levels, is useful. The aim of this article is to give an overview of and insight into corresponding concepts of sensor-based design strategies for steel structures. In addition, first research results regarding the effect of measurement-based updating on the quality improvement of the design process are presented.

... The probabilistic characterization of the observation error, model inadequacy and prior parameter uncertainty are herein introduced as possible choices for the terms involved in the BMU strategies [6,14,59]. ...

Model updating procedures are commonly used to identify numerical models of a structure to be subsequently used for reliable assessment of its behaviour under environmental loads. In the case of historic masonry buildings, the uncertainties that are involved in the knowledge process (material properties, geometry, boundary conditions, etc.) can severely affect the matching between the experimental data and the corresponding model output. To account for the different sources of uncertainties that are involved in the model updating procedure for historic confined masonry towers, this paper proposes an application of the Bayesian paradigm. Effects of parameter uncertainty, observation errors and model inadequacy are explored by comparing the output of the numerical model against real measured modal data. The proposed methodology aims at obtaining the posterior distribution of unknown quantities to estimate their uncertainty and to identify values of the parameters to be used in the numerical model for subsequent analyses. The comparison among the updated distributions related to different initial probabilistic modelling assumptions (prior distributions, measurement errors and modelling uncertainties) shows significant improvements of the predictive capabilities with a considerable reduction of the initial uncertainties, which confirm the potential of the proposed approach.

... Then, Vanik et al. [33] identified the change of stiffness parameters (damage parameters) from structural modal data according to the probability that the structural stiffness degradation was greater than a certain limit. In the following decades, many researchers, such as Beck and his co-workers (Yuen, Ching, Cheung) [34][35][36][37][38][39], Sohn [40], Kerschen [41], Wu [42] and Fraccone [43], put forward a number of technical algorithms for uncertainty quantification in model updating and structural health monitoring using Bayesian inference. Yuen [44] reviewed the applications of Bayesian methods in civil engineering to structural damage identification using response time history and using modal data, respectively. ...

Structural damage identification is often recast as the procedure of finding the damage parameters from the measured data and a prescribed structural model. However, noise is inevitably encountered in practical tests, and this essentially introduces uncertainty into damage identification. Most existing damage identification methods are deterministic and unable to provide uncertainty information. In this paper, the widely used sensitivity approach, though deterministic, is further explored and extended for uncertainty quantification. The key lies in the equivalence between Tikhonov regularization in the sensitivity approach and the prior distribution in Bayesian inference. Motivated by this equivalence, a new uncertainty quantification approach is proposed in which the uncertainty information is drawn directly from the deterministic sensitivity-based regime. Notably, the proposed approach can quickly quantify the uncertainty even when the probabilistic information of the measurement noise and the prior distribution are unknown. Furthermore, to account for model errors, a measurement-changes-correction strategy is adopted in which the measured data is simply corrected by pre-post measurement changes; in this way, uncertainty is quantified in the same sensitivity-based regime. Numerical examples and an experimental case are studied to verify the effectiveness of the proposed approach for uncertainty quantification in structural damage identification.
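The Tikhonov-regularization/Bayesian-prior equivalence invoked above can be illustrated with a scalar sketch, assuming a hypothetical linear sensitivity model (all numbers are illustrative): the regularized least-squares estimate coincides with the MAP estimate under a zero-mean Gaussian prior, and a posterior standard deviation falls out of the same regularized normal equation.

```python
import math, random

random.seed(0)

# Toy scalar damage parameter theta with sensitivity coefficients s_i;
# measurements y_i = s_i * theta + Gaussian noise (std sigma). All
# values below are assumptions for illustration only.
theta_true, sigma, lam = 0.8, 0.05, 0.5      # lam: Tikhonov weight
s = [0.5, 1.0, 1.5, 2.0]
y = [si * theta_true + random.gauss(0.0, sigma) for si in s]

# Tikhonov-regularized least squares (deterministic sensitivity approach)
theta_tik = (sum(si * yi for si, yi in zip(s, y))
             / (sum(si * si for si in s) + lam))

# Bayesian MAP with Gaussian prior N(0, sigma^2 / lam): same optimum,
# located here by brute-force grid search over the negative log posterior
def neg_log_post(th):
    misfit = sum((yi - si * th) ** 2 for si, yi in zip(s, y)) / (2 * sigma**2)
    prior = lam * th**2 / (2 * sigma**2)
    return misfit + prior

grid = [i / 10000.0 for i in range(-10000, 20000)]
theta_map = min(grid, key=neg_log_post)

# Posterior std dev comes for free from the regularized normal matrix
post_std = sigma / math.sqrt(sum(si * si for si in s) + lam)
```

This is the sense in which uncertainty information can be read directly off the deterministic sensitivity-based solution.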

... However, it is inefficient to directly perform classical MCMC methods such as the Metropolis-Hastings (MH) algorithm for Bayesian updating due to the long burn-in period. Therefore, many improvements overcoming this issue have been proposed in the literature [8,[13][14][15]. As an alternative, the particle filter (PF) uses a random metric consisting of a set of particles together with their weights to provide an approximation of the target PDF. ...

Bayesian model updating has the computational capability of reducing uncertainties in engineering models and yielding valuable inferences for model predictions from observed data. Importance sampling (IS) and Markov chain Monte Carlo (MCMC) are the two main techniques among the existing simulation-based methods for Bayesian model updating. Motivated by the fact that IS outperforms MCMC in terms of computational efficiency once the proposal importance sampling density (ISD) is appropriately chosen, this paper proposes an innovative adaptive importance sampling (AIS) algorithm using Gaussian mixtures to overcome the limitations of conventional IS-based methods. Integrating the concepts of population Monte Carlo (PMC) and cross entropy (CE) constitutes the major novelty of the proposed algorithm. Consequently, the proposed AIS algorithm can successfully construct an ISD that resembles the sophisticated target posterior density. Moreover, a metric quantifying sampling effectiveness, called the normalized effective sample size (N-ESS), is adopted to measure the similarity between the ISD and the target density. One benchmark example of system identification and one case study of chloride-induced concrete corrosion are investigated to demonstrate the accuracy and robustness of the proposed algorithm.
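A stripped-down illustration of importance sampling with the N-ESS diagnostic described above, assuming a toy one-dimensional Gaussian posterior and a single-Gaussian ISD (not the Gaussian-mixture PMC/CE construction of the paper):

```python
import math, random

random.seed(2)

# Unnormalized log target: Gaussian prior N(0,1) times a Gaussian
# likelihood centered at 1.5 with variance 0.25 (toy assumptions)
def log_target(x):
    return -0.5 * x**2 - 0.5 * (x - 1.5) ** 2 / 0.25

# ISD: a single, slightly overdispersed Gaussian N(mu, s^2)
mu, s, n = 1.2, 0.6, 4000
xs = [random.gauss(mu, s) for _ in range(n)]

def log_isd(x):
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

logw = [log_target(x) - log_isd(x) for x in xs]
m = max(logw)
w = [math.exp(lw - m) for lw in logw]             # numerically stabilized
wsum = sum(w)
wn = [wi / wsum for wi in w]                      # self-normalized weights

# Posterior-mean estimate and normalized effective sample size:
# N-ESS = (1 / sum(wn^2)) / n, which lies in (0, 1]
post_mean = sum(wi * xi for wi, xi in zip(wn, xs))
n_ess = 1.0 / (n * sum(wi * wi for wi in wn))
```

An N-ESS near 1 indicates the ISD closely resembles the target; adaptive schemes iterate on the ISD parameters to drive this metric up.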

... Algorithms were developed from the Bayesian perspective, using Gibbs samplers for structural updating [31][32][33][34] and structural health monitoring [35,36]. The use of transitional MCMC was also studied in [37], as well as Hamiltonian MCMC in [38]. Structural reliability assessments are supported by importance sampling algorithms [39,40]. ...

This paper introduces a fast Gibbs sampler for solving a fully Bayesian problem in operational modal analysis. The proposed method is able to infer modal properties from the FFT of well-separated modes. System identification and the related uncertainties are captured by a posterior distribution. The classical resonance description is wrapped into a hierarchical probabilistic model. The model is sampled through an enhanced Gibbs sampler including a Metropolis–Hastings step. The entire sampling scheme is new, and the fast convergence of the algorithm is enabled by two strategies. First, the use of a collapsed Gibbs algorithm allows for efficient sampling of the mode shape. Second, the use of adequate candidate distributions in the Metropolis–Hastings step provides excellent acceptance ratios, around 75%. Finally, the numerical inference procedure is compared to the state-of-the-art Fast Bayesian FFT Algorithm (Fast-BFFTA), the most commonly used algorithm for Bayesian operational modal analysis. The sampler surpasses the Fast-BFFTA for small-data identification, while remaining sustainable in terms of computing requirements.
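The collapsed sampler itself is not reproduced here, but the general Metropolis-within-Gibbs pattern it builds on can be sketched for an assumed toy bivariate-normal target: one coordinate is drawn exactly from its full conditional (the Gibbs step), the other via a random-walk MH step.

```python
import math, random

random.seed(3)
rho = 0.8   # correlation of the toy bivariate-normal target (assumed)

def log_cond_x2(x2, x1):
    # log p(x2 | x1) for a standard bivariate normal, up to a constant
    return -0.5 * (x2 - rho * x1) ** 2 / (1 - rho**2)

x1, x2 = 0.0, 0.0
chain, accepted = [], 0
for _ in range(20000):
    # Gibbs step: exact draw from p(x1 | x2) = N(rho*x2, 1 - rho^2)
    x1 = random.gauss(rho * x2, math.sqrt(1 - rho**2))
    # Metropolis-Hastings step for x2 with a tuned random-walk proposal
    cand = x2 + random.gauss(0.0, 0.7)
    if math.log(random.random()) < log_cond_x2(cand, x1) - log_cond_x2(x2, x1):
        x2, accepted = cand, accepted + 1
    chain.append((x1, x2))

burn = chain[5000:]                    # discard burn-in
mean_x1 = sum(p[0] for p in burn) / len(burn)
acc_rate = accepted / 20000
```

Well-chosen candidate distributions in the MH step are what keep the acceptance ratio high, as the paper emphasizes.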

... Vanik et al. (2000) further improved this method to evaluate the posterior uncertainties, which may significantly contribute to structural health assessment (Yuen and Kuok, 2011). Various types of Monte Carlo methods were developed within the Bayesian model updating framework (Beck and Au, 2002; Ching and Chen, 2007; Cheung and Beck, 2009), demonstrating the robustness of probabilistic updating methods when dealing with test data. A novel Bayesian model updating approach was also put forward that analytically derives the posterior uncertainty, with numerical verification, while Kuok and Yuen (2012) came up with a two-stage framework for Bayesian structural health monitoring and model updating that was verified on a real structure. ...

Model updating techniques are often applied to calibrate the numerical models of bridges using structural health monitoring data. The updated models can facilitate damage assessment and prediction of responses under extreme loading conditions. Some researchers have adopted surrogate models, for example, Kriging approach, to reduce the computations, while others have quantified uncertainties with Bayesian inference. It is desirable to further improve the efficiency and robustness of the Kriging‐based model updating approach and analytically evaluate its uncertainties. An active learning structural model updating method is proposed based on the Kriging method. The expected feasibility learning function is extended for model updating using a Bayesian objective function. The uncertainties can be quantified through a derived likelihood function. The case study for verification involves a multisensory vehicle‐bridge system comprising only two sensors, with one installed on a vehicle parked temporarily on the bridge and another mounted directly on the bridge. The proposed algorithm is utilized for damage detection of two beams numerically and an aluminum model beam experimentally. The proposed method can achieve satisfactory accuracy in identifying damage with much less data, compared with the general Kriging model updating technique. Both the computation and instrumentation can be reduced for structural health monitoring and model updating.

... The inference information of the uncertain parameters is given by their posterior probability distribution [7]. Most commonly, Bayesian updating is conducted via sampling methods, e.g., the Markov chain Monte Carlo (MCMC) method, which allows direct sampling from the posterior distribution without solving the potentially high-dimensional integrals in the Bayes formula [6][7][8][9][10]. ...

With site-specific information, model updating can calibrate the probability distributions of material parameters, and the updated distributions can be further utilized to facilitate a more realistic slope reliability assessment. Many model updating methods have been proposed, while the inherent spatial variability of material properties is scarcely discussed in the literature, mainly due to the curse of dimensionality when considering thousands of random variables. The BUS (Bayesian updating with structural reliability methods) algorithm can well tackle the high-dimensional problem by converting it into an equivalent structural reliability problem. The BUS algorithm can integrate monitoring data and field observations to back analyse stability parameters. However, when generating a conditional random field using a large number of in-situ test data, the BUS algorithm becomes less efficient because low acceptance probability often occurs, while the Kriging algorithm is applicable for this situation. This study proposes an effective approach for model updating that combines Kriging-based conditional random field with the BUS algorithm to integrate multi-type observations. To illustrate the effectiveness of the proposed approach, an undrained saturated clay slope with a spatially varying soil parameter is taken as an example. The results indicate that the proposed approach is able to update the probability distribution of spatially varying soil parameters and update the slope reliability using multi-type observations with reasonable calculation efficiency.
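The BUS idea of converting Bayesian updating into an acceptance event can be sketched in its simplest rejection-sampling form, assuming a toy one-parameter prior and a single Gaussian-likelihood observation (all distributions below are illustrative placeholders, not the spatially varying slope model of the paper): a prior sample is accepted when an auxiliary uniform variable falls below the scaled likelihood, which is exactly the "failure" event of the equivalent structural reliability problem.

```python
import math, random

random.seed(4)

# Toy prior on a normalized strength parameter (assumed)
def sample_prior():
    return random.gauss(1.0, 0.3)

# Likelihood of one "observation" that the parameter is near 1.2
sigma_obs = 0.1
def likelihood(theta):
    return math.exp(-0.5 * ((theta - 1.2) / sigma_obs) ** 2)

# BUS constant c must satisfy c <= 1 / max(likelihood); max is 1 here
c = 1.0
posterior = []
for _ in range(50000):
    theta, u = sample_prior(), random.random()
    if u < c * likelihood(theta):      # acceptance event = "failure" domain
        posterior.append(theta)

post_mean = sum(posterior) / len(posterior)
```

In practice BUS replaces this plain rejection loop with subset simulation or another structural reliability method, which is what makes the high-dimensional conditional-random-field case tractable.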

... (1) completely describes the plausibility of the model parameters θ, but its topology may be very complicated, especially in the high-dimensional case (Yuen 2010). Based on the model class M and the data D, model parameters θ can be classified into three categories (Katafygiotis and Lam 2002), namely globally identifiable, locally identifiable, and unidentifiable, depending on whether the set of maximum likelihood estimates is a singleton, finite, or an uncountable continuum, respectively, in the parameter space (Cheung and Beck 2009). For dynamic examples, parameter identifiability depends on the number of observed degrees of freedom (Yuen 2010). ...

Model uncertainty is a key factor that can influence the accuracy and reliability of numerical model-based analysis. It is necessary to acquire an appropriate updating approach that can search for and determine realistic model parameter values from measurements. In this paper, Bayesian model updating theory combined with the transitional Markov chain Monte Carlo (TMCMC) method and K-means cluster analysis is utilized to update the structural model parameters. Kriging and polynomial chaos expansion (PCE) are employed to generate surrogate models to reduce the computational burden in TMCMC. The selected updating approaches are applied to three structural examples of different complexity: a two-storey frame, a ten-storey frame, and the national stadium model. These models represent a low-dimensional linear model, a high-dimensional linear model, and a nonlinear model, respectively. The performance of updating for these three models is assessed in terms of prediction uncertainty, numerical effort, and prior information. This study also investigates updating scenarios using the analytical approach and surrogate models. The uncertainty quantification in the Bayesian approach is further discussed to verify the validity and accuracy of the surrogate models. Finally, the advantages and limitations of the surrogate model-based updating approaches are discussed for different levels of structural complexity. The possibility of utilizing boosting as an ensemble learning method for improving the surrogate models is also presented.

... The efficiency of Bayesian updating is largely influenced by the choice of sampling algorithms, such as hybrid Monte Carlo simulation (Cheung and Beck, 2009), adaptive MH sampling (Sun and Büyüköztürk, 2016), transitional MCMC (Ching and Chen, 2007), and most recently the Hamiltonian Monte Carlo (Boulkaibet et al., 2017). For a sufficiently peaked posterior distribution, an asymptotic approximation was proposed (Katafygiotis, 1998; Papadimitriou et al., 2001). ...

Significant disparities among the error variances (referred to as heteroscedasticity) for the modal variables are observed in structures; this is conventionally disregarded in Bayesian updating methodology by pooling them into a single error variance. A recent study proposed a heteroscedastic Bayesian model updating implemented via Gibbs sampling, thereby considerably restricting the choice of priors and also requiring the conditional posteriors to be available in standard form. This study extends an existing Bayesian hierarchical model that allows a conditionally heteroscedastic error model (for the modal variables) to marginally follow a Student's t distribution, contrasting with the commonly adopted Gaussian error model. Due to the analytically intractable nature of the resulting joint posterior, the model is sampled through Metropolis–Hastings (MH) sampling, a well-established Markov chain Monte Carlo (MCMC) algorithm. The proposed modification leads to improved accuracy in the updated variables along with a lesser degree of uncertainty and improved noise immunity. The efficiency of the algorithm is demonstrated numerically using a simulated set of partial measurements of a multi-degrees-of-freedom (MDOF) shear building.

... Khodaparast et al. [27] developed two perturbation methods for the estimation of the first and second statistical moments of randomized updating parameters from measured variability in modal responses. Cheung et al. [28] improved hybrid Monte Carlo simulation and applied it to update structural dynamic models with many uncertain parameters. Fang et al. [29] developed a stochastic model updating method based on response surface models and Monte Carlo simulation to update the mean and variance of model parameters. ...

This paper presents a novel dynamic model updating procedure to efficiently update the intervals and non-probabilistic correlation coefficients matrix (NPCCM) of modal parameters, and establish its accurate and reliable uncertainty boundary. To accomplish these features, an ellipsoidal convex model is adopted to quantify the non-probabilistic uncertainties and correlations of the measured multi-responses, and the dynamic model updating problem for composite laminate structures is divided into an interval updating problem and a correlation updating problem. Then, a performance measure approach (PMA) and the non-probabilistic covariance propagation equations are developed to achieve the intervals and NPCCM of the calculated responses respectively. By using optimization algorithms to reduce the errors of the intervals and the NPCCM between the measured modal responses and calculated modal responses, the intervals and the NPCCM of modal parameters can be updated effectively, and their multidimensional ellipsoidal model can be established eventually. To improve the stability and efficiency of the updating processes, the Sobol sensitivity analysis method is applied to determine the important modal parameters, and an optimal polynomial response surface model is developed to replace the original complex structures. An experimental example for composite laminate structure is investigated to verify the effectiveness and accuracy of the proposed method.

... This expansion approach can provide both globally and locally identifiable models for cases with insufficient data (Robert and Casella 2004). A significant number of studies are available to enhance the computational efficiency of posterior sampling in Bayesian model updating of systems (Beck and Au 2002, Catanach and Beck 2017, Cheung and Beck 2009, Ching and Chen 2007, Lyngdoh et al. 2019, Prajapat and Ray-Chaudhuri 2018). The applications of methods such as the transitional MCMC technique and Gibbs sampling derived from the basic MCMC technique are also notable in this regard (Cheung and Bansal 2017, Ching and Chen 2007, Huang et al. 2017). ...

The prediction error variances for frequencies are usually considered unknown in the Bayesian system identification process. However, the error variances for mode shapes are taken as known to reduce the dimension of the identification problem. The present study explores the effectiveness of a Bayesian approach to model parameter updating using the Markov chain Monte Carlo (MCMC) technique, considering the prediction error variances for both the frequencies and mode shapes. To ensure the ergodicity of the Markov chain, the posterior distribution is sampled by a Gaussian random walk over the proposal distribution. The prior distributions of the prediction error variances of the modal evidence are implemented through the inverse gamma distribution to assess the effectiveness of the estimation of the posterior values of the model parameters. The issue of incomplete data, which makes the problem ill-conditioned, and the associated singularity problem are prudently dealt with by adopting a regularization technique. The proposed approach is demonstrated numerically by considering an eight-storey frame model with both complete and incomplete modal data sets. Further, to study the effectiveness of the proposed approach, a comparative study with regard to accuracy and computational efficiency is made with the sequential Monte Carlo approach to model parameter updating.
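The inverse-gamma prior on a prediction-error variance mentioned above is conjugate to a Gaussian error model, so the corresponding variance update is available in closed form; a minimal sketch with synthetic residuals follows (the hyperparameters and data are assumptions for illustration, not taken from the cited study).

```python
import random

random.seed(7)

# Synthetic residuals between "measured" and model frequencies
true_var = 0.04
resid = [random.gauss(0.0, true_var ** 0.5) for _ in range(200)]

# Inverse-gamma prior IG(a0, b0) on the prediction-error variance is
# conjugate to the Gaussian error model, so the posterior is exact:
# IG(a0 + n/2, b0 + sum(r^2)/2)
a0, b0 = 2.0, 0.02
a_post = a0 + len(resid) / 2.0
b_post = b0 + 0.5 * sum(r * r for r in resid)

# Draw variance samples: if g ~ Gamma(shape=a, scale=1/b) then
# 1/g ~ InverseGamma(a, b)
draws = [1.0 / random.gammavariate(a_post, 1.0 / b_post)
         for _ in range(5000)]
post_mean_var = sum(draws) / len(draws)
```

Within a full MCMC scheme, a draw of this form would serve as the conditional update of the error variance given the current model parameters.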

... To examine the feasibility of Bayesian model updating for bridge structures, Asadollahi et al. [15] applied the transitional MCMC (TMCMC) method to a cable-stayed bridge model and proposed a method to quantify the uncertainty in the prediction error. Jang and Smyth [29] performed a sensitivity-based cluster analysis to screen uncertain parameters, and the hybrid Monte Carlo (HMC) [30] method was applied to update a suspension bridge model. Mao et al. [31] compared the efficiency of two random sampling algorithms, i.e., the Metropolis-Hastings (MH) algorithm [32,33] and HMC algorithm, in the model updating for a full-scale long-span suspension bridge. ...

Bayesian model updating framework provides a reliable method for building high-fidelity finite element models (FEMs). To realize the efficient model updating of large-scale civil engineering structures, a practical Bayesian inference framework based on software interaction is proposed. The newly developed framework was applied to update the FEM of a long-span cable-stayed bridge, Ting Kau Bridge in Hong Kong, utilizing measured modal parameters from the literature. The model updating results are found to be highly sensitive to the selection of model classes. Furthermore, the area of the main girder of the bridge deck is a key parameter influencing the lower modes of the cable-stayed bridge. A full-scale vehicular load test is conducted on the Ting Kau Bridge to obtain the displacement influence line through the data recorded by GPS sensors on the bridge. The set of measured influence lines is employed to verify the accuracy of the updated FEM. The results demonstrate that the characteristics of the FEM updated using the proposed Bayesian model updating framework based on measured dynamic properties are consistent with the structural characteristics of the bridge. The proposed framework can facilitate the structural health monitoring of large-scale civil engineering structures.

... The strengths of stochastic model updating methods, in particular the increasingly popular Bayesian inference for dealing with uncertainties, have been well documented [149][150][151]. In parallel with Bayesian applications, criticisms of their computational expense have also been well documented. ...

As important links in the transport infrastructure system, cable-stayed bridges are among the most popular candidates for implementing structural health monitoring (SHM) technology. The primary aim of SHM for these bridges is to ensure their structural integrity and satisfactory performance by monitoring their behaviour over time. Finite element (FE) model updating is a well-recognised approach for SHM purposes, as an accurate model serves as a baseline reference for damage detection and long-term monitoring efforts. One of the many challenges is the development of the initial FE model that can accurately reflect the dynamic characteristics and the overall behaviour of a bridge. Given the size, slenderness, use of long cables, and high levels of structural redundancy, precise initial models of long-span cable-stayed bridges are desirable to better facilitate the model updating process and to improve the accuracy of the final updated model. To date, very few studies offer in-depth discussions on the modelling approaches for cable-stayed bridges and the methods used for model updating. As such, this article presents the latest advances in finite element modelling and model updating methods that have been widely adopted for cable-stayed bridges, through a critical literature review of existing research work. An overview of current SHM research is presented first, followed by a comprehensive review of finite element modelling of cable-stayed bridges, including modelling approaches of the deck girder and cables. A general overview of model updating methods is then given before reviewing the model updating applications to cable-stayed bridges. Finally, an evaluation of all available methods and assessment for future research outlook are presented to summarise the research achievements and current limitations in this field.

... In recent years, HMC has been widely used, has developed rapidly, and has produced remarkable achievements in various statistical applications [23][24][25]. The HMC method has been successfully applied to Bayesian analysis and reliability analysis of structural engineering problems [26][27][28]. There has been no research on this aspect in the reliability analysis and residual life assessment of corroded pipelines. In this paper, a system reliability analysis method is used to evaluate pipelines with multiple corrosion pits. ...

In this paper, reliability analysis and residual life assessment models for gas pipelines with multiple corrosion pits are established. For the simulation-based evaluation of the small failure probabilities of gas pipelines, a new method for the reliability analysis and residual life assessment of gas pipelines with multiple internal corrosion pits is proposed, called the Hamiltonian Monte Carlo subset simulation (HMC-SS) method. Compared with the traditional MCS (Monte Carlo simulation) algorithm, the HMC-SS method has the advantages of fewer samples, low cost, and high accuracy. Compared with the random-walk SS method, the HMC-SS method can explore the state space more efficiently and achieve faster convergence. In this paper, the HMC-SS method is applied to the reliability analysis and residual life assessment of gas pipeline engineering, and a sensitivity analysis of the random parameters affecting the failure probability of the pipeline is carried out. The results show that the corrosion rate, the depth of corrosion defects, and the wall thickness of the pipeline have a great influence on the residual life of the pipeline, while the yield strength, working pressure, and the length of corrosion pits have no obvious influence on the failure probability and residual life of the pipeline. The analysis shows that the proposed HMC-SS method can serve as a reasonable tool for the failure assessment of natural gas pipelines affected by corrosion and for determining the remaining life of the pipeline system. This method provides a reliable theoretical basis for the integrity management of gas pipelines.
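The core HMC transition used inside such subset-simulation schemes can be sketched for an assumed standard-normal target: leapfrog integration of the Hamiltonian dynamics, followed by a Metropolis accept/reject on the energy error. Step size and trajectory length below are illustrative tuning choices.

```python
import math, random

random.seed(5)

# Toy target: standard normal, so U(q) = q^2/2 and dU/dq = q
def u(q): return 0.5 * q * q
def grad_u(q): return q

def hmc_step(q, eps=0.3, n_leap=10):
    p = random.gauss(0.0, 1.0)                  # auxiliary momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_u(q_new)          # leapfrog: initial half-step
    for _ in range(n_leap - 1):
        q_new += eps * p_new                    # full position step
        p_new -= eps * grad_u(q_new)            # full momentum step
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_u(q_new)          # final half-step
    # Metropolis accept/reject on the Hamiltonian (energy) error
    dh = (u(q_new) + 0.5 * p_new**2) - (u(q) + 0.5 * p**2)
    return q_new if math.log(random.random()) < -dh else q

q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)

mean_q = sum(draws) / len(draws)
var_q = sum(d * d for d in draws) / len(draws) - mean_q**2
```

The gradient-driven trajectories are what let HMC traverse each conditional level of subset simulation faster than a random walk.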

... However, constructing a good proposal PDF requires a lot of samples, and the samples from the start of the chain (the burn-in period) may not accurately represent the desired distribution and are generally dropped, which is time-consuming (Cheung and Beck 2009). Furthermore, using MCMC, the explicit posterior PDF of the random variable of interest cannot be directly obtained. ...

Bayesian updating of the reliability of deteriorating engineering structures based on inspection data has been attracting a lot of attention recently because it can provide more accurate estimates of the structural reliability as the number of inspection data increases. However, in the process of updating the reliability of deteriorating structures, it is not a trivial task to obtain the posterior distribution of the random variable of interest due to its multidimensional parameter integral space and complex integrand. This paper presents a new effective method for obtaining the explicit posterior distribution of the random variable of interest and evaluating time-variant reliability combined with all updated random variables. In the proposed method, the Smolyak-type quadrature formula is first applied to obtain the first three posterior moments of the uncertain parameters, and the three-parameter lognormal distribution is used to approximate their posterior probability distributions. Then, the two-layer Smolyak-type sparse grid is adopted to estimate the first three posterior moments of the random variable of interest, and its explicit posterior distribution can also be approximated by the three-parameter lognormal distribution. Finally, the time-variant reliability analysis considering Bayesian updating is conducted using all updated random variables. Numerical examples demonstrate that the proposed method requires less computational cost, but the results provided are almost the same as those of Markov chain Monte Carlo simulation.

... Peng et al. [11] introduced the maximum entropy method into Bayesian theory and fused the cuckoo algorithm with the standard MH sampling algorithm to improve the acceptance rate of candidate samples. Cheung et al. [12] proposed a hybrid MCMC (HMCMC) algorithm to solve FEMU problems with high-dimensional uncertain parameters. Compared with FEMU methods based on modal parameters, FEMU methods based on FRFs have gradually become the mainstream [13]. ...

To address the problems that the Markov chain Monte Carlo algorithm does not converge easily, has a high rejection rate, and is easily disturbed by noise when the parameter dimension is high, an improved model updating method combining the singular values of frequency response functions with the beetle antennae search algorithm is proposed. First, Latin hypercube sampling is used to extract the training samples. The Hankel matrix is reconstructed using the calculated frequency response functions and decomposed by singular value decomposition. The effective singular values are retained to represent the frequency response functions. Second, based on the training samples and the corresponding singular values, a support vector machine surrogate model is fitted and its accuracy is tested. Then, the posterior probability distribution of the parameters is estimated by introducing the beetle antennae search algorithm into the standard Metropolis–Hastings algorithm to improve the performance of the Markov chains and the ergodicity of the samples. The results of the examples show that the Markov chains have better overall performance and the acceptance rate of candidate samples is increased after updating. Even when Gaussian white noise is introduced into the test frequency response functions under single and multiple damage conditions, satisfactory updating results can still be obtained.

... The observational data contain various inherent uncertainties affecting seismic loss, and therefore a statistical or probabilistic method for developing the SLFs is preferred to deal with the inherent uncertainties included in the observational data. Recently, a Bayesian approach, one of the probabilistic methods, has been widely used for updating fragility functions related to seismic damage (Bai, Gardoni, and Hueste 2011; Bai, Hueste, and Gardoni 2015; Giordano et al. 2020; Koutsourelakis 2010; Singhal and Kiremidjian 1998) or vulnerability functions related to seismic losses (Cheung and Beck 2009; Ching, Beck, and Porter 2004; Noh et al. 2017; Tubaldi et al. 2021) with the use of observational data. Taylor et al. (2001) updated the average loss ratio for buildings based on the vulnerability curve presented in ATC-13 (1985) through the Bayesian approach using observational data from the 1994 Northridge earthquake. ...

In low-to-moderate seismicity regions, seismic loss functions (SLFs) are rarely established due to limited observational data, making it difficult to support decision-making on disaster prevention and management. Herein, a Bayesian framework is developed to update the SLFs with limited observational data. The proposed point-based Bayesian method updates the local probability density function parameters for damage ratios at each seismic intensity, which helps to avoid an unrealistic underestimation of damage ratios in the low-to-moderate range of seismic intensities. The feasibility of the developed framework in a low-to-moderate seismicity region is verified by comparing the updated SLF with post-event data.

... Since Eq. (2) rarely has a closed-form solution, simulation techniques are typically leveraged to generate samples of the posterior distributions of interest. Markov Chain Monte Carlo (MCMC) methods such as the Metropolis-Hastings algorithm, Gibbs sampling, and Hamiltonian Monte Carlo (HMC) have been developed to perform sampling efficiently [29,30]. ...
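The Metropolis-Hastings algorithm mentioned here has a very small core: propose a move, then accept it with probability min(1, p(x')/p(x)). A minimal random-walk sketch, using a standard normal as a stand-in for the (unnormalized) posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # unnormalized log-density of a standard normal (stand-in posterior)
    return -0.5 * x**2

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis-Hastings: propose x' = x + step*N(0,1),
    accept with probability min(1, p(x')/p(x))."""
    x = x0
    samples = np.empty(n_samples)
    accepted = 0
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

samples, acc = metropolis_hastings(20000)
print(samples.mean(), samples.std(), acc)
```

Because only the ratio of target densities appears in the accept step, the normalizing constant of the posterior (the intractable integral in Eq. (2)) is never needed, which is exactly why MCMC is the standard tool here.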

Measuring the performance of infrastructure networks is critical to the recovery management and resource allocation before, during, and after a system’s disruption. However, the lack of data often hinders the ability to accurately estimate infrastructure performance, resulting in uncertainty in its evaluation which can lead to biased estimates. To address this challenge, this study develops a Bayesian approach to measure the performance of the infrastructure network at the component level and incorporate it in the evaluation of the system-level serviceability. Component fragility metrics are estimated using a hierarchical Bayesian model and then integrated into system serviceability assessment using Monte Carlo simulation and a shortest-path algorithm. These performance measures can be dynamically updated as more data becomes available. A case study of the water distribution system of Shelby County in Tennessee subject to earthquake and flood hazard is presented to illustrate the proposed approach. Results show that system topology is more important in determining component functionality under seismic hazard while vulnerability is the dominant factor in the case of flood hazard.
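The system-level step in the abstract above (Monte Carlo simulation over component failures, combined with a path search) can be sketched on a toy network. Everything below — the network, failure probability, and node labels — is invented for illustration; connectivity stands in for the shortest-path serviceability measure of the paper.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)

def connected(adj, up, src, dst):
    """BFS over edges whose 'up' flag is True."""
    seen, q = {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return True
        for v, e in adj[u]:
            if up[e] and v not in seen:
                seen.add(v)
                q.append(v)
    return False

def serviceability(edges, fail_prob, src, dst, n_sim=20000):
    """Monte Carlo estimate of the probability that src and dst stay
    connected when each component fails independently."""
    nodes = {u for u, v in edges} | {v for u, v in edges}
    adj = {n: [] for n in nodes}
    for e, (u, v) in enumerate(edges):
        adj[u].append((v, e))
        adj[v].append((u, e))
    hits = 0
    for _ in range(n_sim):
        up = rng.random(len(edges)) > fail_prob
        hits += connected(adj, up, src, dst)
    return hits / n_sim

# toy 4-node network: source 0, demand node 3, two parallel 2-pipe paths
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
p = serviceability(edges, fail_prob=0.1, src=0, dst=3)
print(p)   # analytic for this toy case: 1 - (1 - 0.9**2)**2 = 0.9639
```

In the paper the per-edge failure probabilities would come from the hierarchically updated component fragilities rather than a fixed constant.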

... Reducible uncertainty covers features not inherent to the structure (e.g., measurement uncertainty), while irreducible uncertainty refers to model uncertainty attributed to the inherent features of the structure (e.g., manufacturing tolerances, material degradation). Statistical methods have been developed to address reducible uncertainty in model updating problems, among which the minimum variance method [8,9] and the Bayesian framework [10][11][12] have proven effective. Though several methods (e.g., the extended finite element updating technique [13], random matrix theory [14], inverse fuzzy arithmetic [15]) have been developed subsequently, work on irreducible uncertainty in model updating problems deserves emphasis, given the sample size requirements and the difficulty of acquiring information. ...

Model updating techniques have achieved extensive applications in numerical models with uncertainties inherent in practical systems, whereas stochastic theory is ineffective under insufficient knowledge. Additionally, model updating in the face of correlated uncertainties and complex numerical models remains challenging. In the present study, a novel interval model updating framework is proposed to tackle correlated uncertainties with limited samples. The framework has the advantage that parameters can be updated with high precision regardless of whether the relationship between input and output is linear or nonlinear. To achieve this, the convex modelling technique and the Chebyshev surrogate model are employed for uncertain parameter quantification and numerical model approximation, respectively. Subsequently, a matrix-similarity method considering correlation propagation is developed to build the two-step interval model updating process, which is converted into a deterministic model updating problem. Notably, three examples verify the effectiveness and superiority of the proposed framework in both linear and nonlinear relationships. As revealed by the results, the proposed interval model updating framework is suitable for coping with updating problems involving parameter bounds and their correlations.

Full waveform inversion (FWI) to estimate physical properties of a system is one of the major research topics in science and engineering. This study proposes a probabilistic approach toward these solutions by applying the unscented Kalman filter (UKF). The responses of the vertical displacements of a layered half-space subjected to a harmonic vertical disk load on the surface are calculated from an estimated profile of shear-wave velocities and compared with measurements or observations. In the calculation of the dynamic responses, the thin-layer method (TLM), which is efficient for layered media, is employed. In order to improve the solutions to the considered inverse problem, regularization terms are included in the observations so that the differences in the material properties between two consecutive layers vanish. The proposed UKF method is demonstrated with a variety of FWI problems in a layered half-space. The results show that the proposed method can estimate the material properties of a layered half-space accurately.
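At the heart of the UKF used above is the unscented transform: propagate a small, deterministically chosen set of sigma points through the nonlinear map instead of linearizing it. A minimal sketch with standard 2n+1 sigma points follows; the scaling parameters and the affine sanity check are illustrative, not the paper's TLM forward model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f
    using the standard 2n+1 sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])
    mean_y = Wm @ Y
    diff = Y - mean_y
    cov_y = (Wc[:, None] * diff).T @ diff
    return mean_y, cov_y

# sanity check: the transform is exact for an affine map f(x) = A x
A = np.array([[2.0, 0.0], [1.0, 1.0]])
m, P = np.zeros(2), np.eye(2)
my, Py = unscented_transform(m, P, lambda x: A @ x)
print(my, Py)
```

In a full UKF, this transform is applied twice per step (prediction and measurement update), with the forward model — here, the paper's thin-layer-method response calculation — playing the role of f.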

Vibration-based global damage detection based on updating of finite element (FE) model by targeting the modal measurements is a significant area of interest in structural health monitoring (SHM). In a typical modal testing setup, the measured mode shapes have missing components against various degrees of freedom (DOFs) due to the limitation in the number of sensors available. In this context, a novel Gibbs sampling approach is proposed for updating of FE model incorporating model reduction (MR) to facilitate the global-level detection of structural damages from incomplete modal measurements. In addition to the ease with similar sizes of analytical and experimental mode shapes, the proposed Gibbs sampling approach (for updating the reduced order FE model in the Bayesian framework) has some important advantages like: (A) no need for consideration of system mode shapes as parameters (unlike needed in the typical Gibbs sampling approach) thereby having a significant reduction in the number of parameters, (B) non-requirement of mode matching with consequent reduction in computation time to a significant extent. A generalized formulation is presented in this work providing the scope for incorporating measurements from multiple sensor setups. Moreover, formulations are adapted to incorporate multiple sets of data/measurements from each setup targeting the epistemic uncertainty. Finally, validation is carried out with both numerical (truss structure and building structure) and experimental (laboratory building structure) exercises in comparison with the typical Gibbs sampling approach having a full-sized model. The proposed approach is observed to be evolved as a computationally efficient technique with satisfactory performance in FE model updating and global damage detection.

This work proposes a Bayesian updating approach, called parallel Bayesian optimization and quadrature (PBOQ). It is rooted in Bayesian updating with structural reliability methods (BUS) and offers a coherent Bayesian approach for the BUS analysis by assuming Gaussian process priors. The first step of the method, i.e., parallel Bayesian optimization, effectively explores a constant c in BUS by a novel parallel infill sampling strategy. The second step (parallel Bayesian quadrature) then infers the posterior distribution by another parallel infill sampling strategy using subset simulation. The proposed approach enables to make the fullest use of prior knowledge and parallel computing, resulting in a substantial reduction of the computational burden of model updating. Four numerical examples with varying complexity are investigated for demonstrating the proposed method against several existing methods. The results show the potential benefits by advocating a coherent Bayesian fashion to the BUS analysis.

Model updating based on intelligent algorithms has achieved great success in structural damage detection (SDD), but the appropriate selection of objective functions remains unclear and becomes an obstacle to applying the methods to real-world steel structures. In this paper, a multi-objective identification method based on modal feature extraction and a linear weighted sum is proposed, and the weight values that yield the best solution are also determined. A hybrid particle swarm optimization (HPSO) is selected as the solver to update structural parameters for accurate SDD results. First of all, six single objective functions based on modal feature extraction are considered, and numerical simulations show that the one based on the MTMAC indicator exhibits a certain superiority over the others. In order to provide a fair comparison among different objective functions, a quantified indicator named damage vector consistency (DVC) is also defined, which describes the consistency between the identified result and the assumed one. After that, a multi-objective identification method is formulated by linearly combining the MTMAC-based objective function and another selected single objective function. Different weight values are also investigated to find the best solution for accurate SDD. Three numerical simulations are conducted, including a simply-supported beam, a two-story steel frame, and a 31-bar plane truss. Their SDD results verify the applicability of the proposed multi-objective optimization method. Relevant issues are also discussed in detail.
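The weighted-sum idea above — collapse several modal-residual objectives into one scalar and hand it to a particle swarm optimizer — can be sketched with a plain PSO (the paper's HPSO adds hybrid refinements not reproduced here). The two objectives, weights, and dimension below are invented placeholders for the frequency- and MAC-based residuals.

```python
import numpy as np

rng = np.random.default_rng(6)

def pso(objective, dim, n_particles=30, n_iter=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: particles track their personal
    best and the swarm's global best, with inertia-weighted velocities."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w_in, c1, c2 = 0.7, 1.5, 1.5     # inertia, cognitive, social weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# linear weighted sum of two hypothetical modal-residual objectives
f1 = lambda t: np.sum((t - 1.0) ** 2)    # e.g. a frequency residual
f2 = lambda t: np.sum((t - 1.0) ** 4)    # e.g. a mode-shape (MAC) residual
w1, w2 = 0.6, 0.4
best = pso(lambda t: w1 * f1(t) + w2 * f2(t), dim=4)
print(best)
```

The paper's contribution is precisely in choosing which residuals to combine and which weights yield the most consistent damage vector; the solver itself is the easy part.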

In the semi-probabilistic approach of structural design, the partial safety factors are defined by considering some degree of uncertainties to actions and resistance, associated with the parameters’ stochastic nature. However, uncertainties for individual structures can be better examined by incorporating measurement data provided by sensors from an installed health monitoring scheme. In this context, the current study proposes an approach to revise the partial safety factor for existing structures on the action side, γE by integrating Bayesian model updating. A simple numerical example of a beam-like structure with artificially generated measurement data is used such that the influence of different sensor setups and data uncertainties on revising the safety factors can be investigated. It is revealed that the health monitoring system can reassess the current capacity reserve of the structure by updating the design safety factors, resulting in a better life cycle assessment of structures. The outcome is furthermore verified by analysing a real life small railway steel bridge ensuring the applicability of the proposed method to practical applications.

Finite element (FE) model updating is essential to improve the reliability of physical model‐based approaches in structural engineering applications. The surrogate model is considered an alternative to time‐consuming iterative FE analyses in performing the updating procedure. This paper presents a Bayesian neural network (BNN) as the surrogate model for probabilistic FE model updating using the measured modal data. The BNN involves high computational efficiency by introducing the approximate Gaussian inference of the posterior distribution. In practice, the modal data are usually incomplete because of the measurement noise and limited sensors. The developed BNN exploits the nonlinear relationship between the selected parameters and incomplete modal data. As opposed to the traditional surrogate‐based approach, the proposed framework uses the modal data as inputs and structural parameters to be updated as outputs. It enables uncertainty quantification of the estimated structural parameters efficiently. In particular, an adaptive sampling strategy is established to shrink the searching space of optimal updating parameters based on the truncated Gaussian distribution. Numerical examples are conducted to demonstrate the effectiveness of the presented approach. Then it is applied to the laboratory and experimental structures using the measured data. Results indicate that the proposed framework is accurate and efficient for parameter uncertainty quantification in structural model updating.

The implementation of reliability methods in the framework of Bayesian model updating of structural dynamic models using measured responses is explored for high-dimensional model parameter spaces. The formulation relies on a recently established analogy between Bayesian updating problems and reliability problems. Under this framework, samples following the posterior distribution of the Bayesian model updating problem can be obtained as failure samples in an especially devised reliability problem. An approach that requires only minimal modifications to the standard subset simulation algorithm is proposed and implemented. The scheme uses an adaptive strategy to select the threshold value that determines the last subset level. Due to the basis of the formulation, the approach does not make use of any problem-specific information and, therefore, any type of structural model can be considered. The approach is combined with an efficient parametric model reduction technique for an effective numerical implementation. The performance of the proposed implementation is assessed numerically for a linear building model and a nonlinear three-dimensional bridge structural model. The results indicate that the proposed implementation represents an effective numerical technique to address high-dimensional Bayesian model updating problems involving complex structural dynamic models.

Model updating is a widely adopted method to minimize the error between test results from the real structure and outcomes from the finite element (FE) model for obtaining an accurate and reliable FE model of the target structure. However, uncertainties from the environment, excitation, and measurement variability can reduce the accuracy of predictions of the updated FE model. The Bayesian model updating method using multiple Markov chains based on the differential evolution adaptive Metropolis (DREAM) algorithm is explored, which runs multiple chains simultaneously for a global exploration and automatically tunes the scale and orientation of the proposal distribution during the evolution of the posterior distribution. The performance of the proposed method is illustrated numerically with a beam model and a three-span rigid frame bridge. Results show that the DREAM algorithm is capable of updating the FE model in civil engineering. It extends the Bayesian model updating method to the multiple Markov chains scenario, which provides higher accuracy than single-chain algorithms such as the delayed rejection adaptive Metropolis (DRAM) method. Moreover, results from both examples indicate that the proposed method is insensitive to the values of the initial parameters, which avoids errors resulting from inappropriate prior knowledge of the parameters in FE model updating.
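The proposal mechanism underlying DREAM comes from differential-evolution Markov chains: each chain jumps along the difference of two other randomly chosen chains, which automatically adapts the scale and orientation of the proposal to the posterior. A bare-bones DE-MC sketch (without DREAM's crossover and outlier handling) on a toy correlated Gaussian target:

```python
import numpy as np

rng = np.random.default_rng(8)

def de_mc(log_post, n_chains=10, dim=2, n_iter=3000):
    """Differential-evolution Markov chain: each chain proposes a jump
    along the difference of two other randomly chosen chains."""
    X = rng.standard_normal((n_chains, dim))
    gamma = 2.38 / np.sqrt(2 * dim)       # standard DE-MC jump scale
    lp = np.array([log_post(x) for x in X])
    keep = []
    for it in range(n_iter):
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            r1, r2 = rng.choice(others, 2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) \
                        + 1e-4 * rng.standard_normal(dim)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp[i]:
                X[i], lp[i] = prop, lp_prop
        if it >= n_iter // 2:              # discard the first half as burn-in
            keep.append(X.copy())
    return np.concatenate(keep)

# toy target: 2-D normal, unit variances, correlation 0.5
Pinv = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))
log_post = lambda x: -0.5 * x @ Pinv @ x
draws = de_mc(log_post)
print(draws.mean(axis=0), draws.std(axis=0))
```

Because the jump direction is read off the current population, no hand-tuned proposal covariance is needed — the property the abstract credits for DREAM's insensitivity to initial parameter values.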

The present work addresses the inverse problem of damage identification within the Bayesian framework. The inverse problem is formulated as a parameter estimation one and is built on the response of the continuous model of the structure provided by the Generalized Integral Transform technique. The Hamiltonian Monte Carlo (HMC) method is considered for sampling the posterior probability density function of the cohesion parameters, which are the ones considered for the description of the damage state of the structure. Numerical simulations considering an Euler-Bernoulli beam were carried out in order to assess the applicability of the proposed damage identification approach. The numerical results showed that the HMC method was able to identify the considered damage scenarios, yielding Markov chains with relatively high convergence rates and with uncorrelated states right from the beginning of the chains.
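A single HMC transition — the sampler used above, and the "Hybrid Monte Carlo" method of the main paper — consists of a momentum draw, a leapfrog integration of Hamiltonian dynamics, and a Metropolis accept/reject on the endpoint. A minimal sketch on a toy 5-dimensional Gaussian (step size and trajectory length are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def hmc_step(x, log_p, grad_log_p, eps=0.1, n_leap=20):
    """One HMC transition: sample a momentum, run leapfrog
    dynamics, then Metropolis-accept the trajectory endpoint."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    # leapfrog integration of dx/dt = p, dp/dt = grad log p(x)
    p_new += 0.5 * eps * grad_log_p(x_new)
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_p(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(x_new)
    # Hamiltonian = -log p(x) + |p|^2 / 2
    h_old = -log_p(x) + 0.5 * p @ p
    h_new = -log_p(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.random()) < h_old - h_new else x

# sample a 5-dimensional standard normal
log_p = lambda x: -0.5 * x @ x
grad = lambda x: -x
x = np.zeros(5)
draws = []
for _ in range(2000):
    x = hmc_step(x, log_p, grad)
    draws.append(x.copy())
draws = np.array(draws)
print(draws.mean(), draws.std())
```

The long, gradient-guided trajectories are what give HMC the nearly uncorrelated states the abstract reports, and what makes it scale to the high-dimensional updating problems of the main paper.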

Bayesian model updating is a powerful tool for system identification and model calibration as new measurements or observations become available. At present, Bayesian updating with structural reliability methods (BUS) is an efficient and easy-to-implement method, which can transform the multi-dimensional integration in Bayesian updating into a reliability problem by introducing an auxiliary variable. However, the efficiency and accuracy of BUS depend on the constant c in the performance function constructed in BUS for some reliability methods, while c is generally not available in advance. Therefore, this paper proposes an efficient Bayesian updating method with two-step adaptive Kriging (AK), named BUS-AK², where both c and the transformed reliability problem can be efficiently solved by AK. In the proposed method, BUS-AK² first constructs the Kriging model of the likelihood function by the expected improvement (EI) learning function, and the maximum of the likelihood function and c can be obtained. Then the Kriging model of the performance function in BUS is constructed, where the U learning function is employed to obtain samples from the posterior distribution. In addition, for the BUS problem with a small acceptance rate, a combined global and local sampling technique is incorporated into the proposed method to further improve modeling efficiency. Examples show that the proposed method has obvious computational advantages for the BUS problem with a small acceptance rate and multiple observations.
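The BUS idea referred to above reduces, in its simplest form, to rejection sampling with an auxiliary uniform variable: a prior sample θ is a posterior sample exactly when u ≤ c·L(θ), with c ≤ 1/max L. A toy sketch with one Gaussian observation (the prior, observation, and noise level are invented so the answer can be checked analytically):

```python
import numpy as np

rng = np.random.default_rng(2)

def bus_rejection(n_prior, log_like, prior_sampler, log_c):
    """BUS viewpoint of rejection sampling: draw (theta, u) with
    u ~ U(0,1); keep theta whenever log(u) <= log_c + log_like(theta).
    Validity requires log_c <= -max(log_like)."""
    thetas = prior_sampler(n_prior)
    u = rng.random(n_prior)
    keep = np.log(u) <= log_c + log_like(thetas)
    return thetas[keep]

# prior N(0,1); likelihood from one observation y = 1.0, noise std 0.5
y, sigma = 1.0, 0.5
log_like = lambda t: -0.5 * ((y - t) / sigma) ** 2   # max = 0 at t = y
post = bus_rejection(200000, log_like,
                     lambda n: rng.standard_normal(n), log_c=0.0)
# analytic posterior: N(y/(1+sigma^2), sigma^2/(1+sigma^2)) = N(0.8, 0.2)
print(post.mean(), post.var(), len(post) / 200000)
```

The constant c that BUS-AK² spends its first adaptive-Kriging step estimating is exactly the log_c above: here max log-likelihood is known to be 0, but in a real updating problem it must be found, and a poor c either invalidates or cripples the acceptance step.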

The timber frame is one of the most popular structural systems in the modern timber community. A full-scale test of a timber frame under cyclic loading is executed to investigate the seismic performance. The test results showed that the semirigid joints had a significant influence on the global structural response. Hence, the same type of beam-column joint was also tested to analyze the cyclic behavior. Based on the test results of the full-scale timber frame and beam-column joint, a simplified model whose nonlinearity is assumed to be concentrated plasticity is created. The nonlinear spring element is assigned to the Pinching4 model, where the features of the pinching effect, strength and stiffness degradation are considered. Because the numerical model makes some simplified assumptions that may be different from the actual structural performance, the Bayesian method is employed to update the simplified model using the test data. The statistical characteristics of the updated parameters are estimated from the posterior probability distribution, which can be used for the uncertainty and reliability analysis. The updated model is evaluated in terms of envelope curves, strength degradation and energy dissipation. The comparison results demonstrate that the updated model is accurate and reliable for parametric studies. Furthermore, the influence of gravity loading and various aspect ratios of timber structures on the elastic stiffness and the maximum loading capacity is investigated. The calibrated model of a single timber frame can be easily extended to full-scale and complex structural configurations featuring multiple timber frame structures, providing a reference for timber buildings’ practical design.

Structural systems are in general quite sophisticated in terms of the interactions between individual members with respect to their stressing and resistance. In addition, the stochastic natures of the different loads and resistances are quite distinct. A consistent statistical representation of the random variables for a structural component/member is therefore quite complex. In this context, the implementation of intelligent structures and the incorporation of measurement data can help to improve the prediction of random variables for a certain member and therefore to update models in the design process. Depending on the obtained measurement data, the updating can refer to the quality of the statistical representation of different values, to the quantity of random variables, or even to the update of the mechanical model and its boundary conditions. For that reason, a classification of updating possibilities in the context of the design process, described by different hierarchical levels, is useful. The aim of this article is to give an overview of and insight into corresponding concepts of sensor-based design strategies for steel structures. In addition, first research results regarding the effect of measurement-based updating on the quality improvement of the design process are presented.

The tuning of structural models to the experimental dynamic response entails the choice of a proper objective function. The goal of the so-called model updating process is the optimization of the chosen objective function, which measures the discrepancy between the experimental and simulated dynamic responses. This research focuses on the application of a non-parametric subspace-based objective function to the estimation of the modelling parameters of a beam-like structure. Differently from parametric optimization, non-parametric objective functions do not require the assessment of the modal parameters and descend from direct manipulation of the experimental and simulated data. The use of parametric optimization may lead to the discard of important information, which could be lost when extracting the modal parameters. Conversely, non-parametric optimization may store valuable information, which may lead to the estimation of both the stiffness and mass matrices even in the case of operational response, characterized by unknown excitation. In a second step, the research focuses on the quantification of the uncertainty of the parameters following an elementary Bayesian approach. The authors attempt to estimate the probability density function of the parameters by isolating and quantifying two sources of uncertainties: the uncertainty of the structural model and that of the optimization method.

The Dou-gong joint is a key element in traditional Chinese timber structures. This work presents the mechanical properties of Dou-gong joints based on experimental investigations and numerical analyses. Six full-scale joints were tested under vertical monotonic loading and horizontal reversed cyclic loading. Typical features, including the failure modes, maximum load-carrying capacity, deterioration in stiffness and strength, and energy dissipation, are reported and compared. The results indicate that a Dou-gong joint has exceptional deformability and good energy dissipation capacity. The load-carrying capacity performs well under vertical loading, while it is poor under horizontal loading. The Bayesian model is constructed to predict the force-displacement curves based on the test data, in which the uncertainty of the model parameters is quantified from the posterior distribution. The posterior estimation of the parameters provides insight into the uncertainty assessment of the Dou-gong joint and provides useful information for reliability analysis.

This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. A stochastic damage identification approach based on changes of eigenfrequencies is also proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which are valid for detecting damage in structures at Level 1 only. Changes in the eigenfrequencies of cantilever structures are sufficient to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level.

This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid the problem of sampling from difficult target probability density functions (PDFs) by sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample. The TMCMC approach is motivated by the adaptive Metropolis-Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC is able to draw samples from some difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with flat manifolds). The TMCMC approach can also estimate the evidence of the chosen probabilistic model class conditional on the measured data, a key component for Bayesian model class selection and model averaging. Three examples are used to demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, model class selection, and model averaging.
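The tempering scheme behind TMCMC can be sketched in a few dozen lines: raise the likelihood to an exponent β that grows from 0 (prior) to 1 (posterior), choosing each increment so the importance weights stay well-behaved, then resample and apply a few MH moves per stage. The toy one-parameter problem below is invented so the posterior is known in closed form; the full algorithm also estimates the evidence from the stage-wise mean weights, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy updating problem: prior N(0,1), one observation y = 2.0, noise std 0.5
y, sig = 2.0, 0.5
log_prior = lambda t: -0.5 * t**2
log_like = lambda t: -0.5 * ((y - t) / sig) ** 2

def tmcmc(n=5000, target_cov=1.0):
    """Simplified TMCMC: walk from the prior (beta = 0) to the posterior
    (beta = 1) through tempered targets p(theta) * L(theta)^beta, with
    importance resampling and a few MH moves per stage."""
    theta = rng.standard_normal(n)               # prior samples
    beta = 0.0
    while beta < 1.0:
        ll = log_like(theta)
        # pick the next beta by bisection so CoV(weights) ~ target_cov
        lo, hi = beta, 1.0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            w = np.exp((mid - beta) * (ll - ll.max()))
            if w.std() / w.mean() > target_cov:
                hi = mid
            else:
                lo = mid
        beta_new = 1.0 if hi == 1.0 else 0.5 * (lo + hi)
        w = np.exp((beta_new - beta) * (ll - ll.max()))
        # resample in proportion to the plausibility weights
        theta = rng.choice(theta, size=n, p=w / w.sum())
        # a few random-walk MH moves at the tempered target
        for _ in range(5):
            step = max(theta.std(), 1e-6)
            prop = theta + step * rng.standard_normal(n)
            log_a = (log_prior(prop) + beta_new * log_like(prop)
                     - log_prior(theta) - beta_new * log_like(theta))
            accept = np.log(rng.random(n)) < log_a
            theta = np.where(accept, prop, theta)
        beta = beta_new
    return theta

post = tmcmc()
print(post.mean(), post.var())   # analytic posterior: mean 1.6, variance 0.2
```

Because each stage only bridges a small gap in distribution, the method copes with the peaked or multimodal posteriors that defeat a single-stage sampler.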

A Bayesian probabilistic approach is presented for selecting the most plausible class of models for a structural or mechanical system within some specified set of model classes, based on system response data. The crux of the approach is to rank the classes of models based on their probabilities conditional on the response data, which can be calculated based on Bayes' theorem and an asymptotic expansion for the evidence for each model class. The approach provides a quantitative expression of a principle of model parsimony, or of Ockham's razor, which in this context can be stated as "simpler models are to be preferred over unnecessarily complicated ones." Examples are presented to illustrate the method using a single-degree-of-freedom bilinear hysteretic system, a linear two-story frame, and a ten-story shear building, all of which are subjected to seismic excitation.

The method of Common Random Numbers is a technique used to reduce the variance of difference estimates in simulation optimization problems. These differences are commonly used to estimate gradients of objective functions as part of the process of determining optimal values for parameters of a simulated system. Asymptotic results exist which show that using the Common Random Numbers method in the iterative Finite Difference Stochastic Approximation optimization algorithm (FDSA) can increase the optimal rate of convergence of the algorithm from the typical rate of k^(-1/3) to the faster k^(-1/2), where k is the algorithm's iteration number. Simultaneous Perturbation Stochastic Approximation (SPSA) is a newer and often much more efficient optimization algorithm, and we will show that this algorithm, too, converges faster when the Common Random Numbers method is used. We will also provide multivariate asymptotic covariance matrices for both the SPSA and FDSA errors.

A new Bayesian model updating approach is presented for linear structural models. It is based on the Gibbs sampler, a stochastic simulation method that decomposes the uncertain model parameters into three groups, so that direct sampling from any one group is possible when conditional on the other groups and the incomplete modal data. This means that even if the number of uncertain parameters is large, the effective dimension for the Gibbs sampler is always three, and so high-dimensional parameter spaces that are fatal to most sampling techniques are handled by the method, making it more practical for health monitoring of real structures. The approach also inherits the advantages of Bayesian techniques: it not only updates the optimal estimate of the structural parameters but also updates the associated uncertainties. The approach is illustrated by applying it to two examples of structural health monitoring problems, in which the goal is to detect and quantify any damage using incomplete modal data obtained from small-amplitude vibrations measured before and after a severe loading event, such as an earthquake or explosion.
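The mechanism that makes this work — drawing each parameter group exactly from its full conditional — is easiest to see on a toy target. A minimal Gibbs sampler for a bivariate normal with correlation ρ, where both conditionals are available in closed form (the structural version replaces these with conditionals over stiffness, mode-shape, and prediction-error groups):

```python
import numpy as np

rng = np.random.default_rng(5)

def gibbs_bivariate_normal(rho, n_samples=20000):
    """Gibbs sampler for a bivariate normal with correlation rho:
    alternately draw each coordinate from its exact full conditional."""
    x = y = 0.0
    out = np.empty((n_samples, 2))
    s = np.sqrt(1.0 - rho**2)          # conditional standard deviation
    for i in range(n_samples):
        x = rng.normal(rho * y, s)     # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, s)     # y | x ~ N(rho*x, 1 - rho^2)
        out[i] = (x, y)
    return out

samples = gibbs_bivariate_normal(rho=0.8)
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```

No accept/reject step is needed: every draw is kept, which is why the effective dimension argument in the abstract translates directly into efficiency.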

This paper is also the originator of the Markov chain Monte Carlo methods developed in the following chapters. The potential of these two simultaneous innovations was discovered much later by statisticians (Hastings 1970; Geman and Geman 1984) than by physicists (see also Kirkpatrick et al. 1983).

In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.

The problem of updating a structural model and its associated uncertainties by utilizing structural response data is addressed. Using a Bayesian probabilistic formulation, the updated "posterior" probability distribution of the uncertain parameters is obtained, and it is found that for a large number of data points it is very peaked at some "optimal" values of the parameters. These optimal parameters can be obtained by minimizing a positive-definite measure-of-fit function. This paper focuses on the identifiability of the optimal parameters. The problem of finding the whole set of optimal models that have the same output at the observed degrees of freedom for a given input is resolved for the first time, by presenting an algorithm which methodically and efficiently searches the parameter space. Also, a simplified expression is given for the weighting coefficients associated with each optimal model which are involved in the probability distribution for the predicted response.

A Markov chain simulation method based on the Metropolis-Hastings algorithm and simulated annealing is proposed to update the robust reliability integrals based on a Bayesian statistical approach. It is applied to update the reliability of a structure based on its identified natural frequencies.

Multivariate stochastic optimization plays a major role in the analysis and control of many engineering systems. In almost all real-world optimization problems, it is necessary to use a mathematical algorithm that iteratively seeks out the solution because an analytical (closed-form) solution is rarely available. In this spirit, the "simultaneous perturbation stochastic approximation (SPSA)" method for difficult multivariate optimization problems has been developed. SPSA has recently attracted considerable international attention in areas such as statistical parameter estimation, feedback control, simulation-based optimization, signal and image processing, and experimental design. The essential feature of SPSA - which accounts for its power and relative ease of implementation - is the underlying gradient approximation that requires only two measurements of the objective function regardless of the dimension of the optimization problem. This feature allows for a significant decrease in the cost of optimization, especially in problems with a large number of variables to be optimized.
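The two-measurement gradient approximation described above is short enough to sketch directly: perturb all coordinates simultaneously with a random ±1 vector, evaluate the loss twice, and divide componentwise. The gain sequences and the noisy quadratic test problem below are illustrative defaults, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(4)

def spsa(loss, theta0, n_iter=500, a=0.2, c=0.1, A=50,
         alpha=0.602, gamma=0.101):
    """Basic SPSA: at each iteration, estimate the gradient from just
    TWO loss evaluations using a random simultaneous perturbation."""
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / (k + A) ** alpha          # step-size gain
        ck = c / k ** gamma                # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) \
                / (2 * ck) * (1.0 / delta)
        theta -= ak * g_hat
    return theta

# minimize a noisy quadratic with optimum at (1, -2, 3)
opt = np.array([1.0, -2.0, 3.0])
noisy_loss = lambda t: np.sum((t - opt) ** 2) + 0.01 * rng.standard_normal()
theta_hat = spsa(noisy_loss, np.zeros(3))
print(theta_hat)
```

Note that the cost per iteration is two loss evaluations regardless of the problem dimension — the feature the abstract identifies as the source of SPSA's efficiency, in contrast to finite differences, which need 2n evaluations for n parameters.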

Foreword Preface Part I. Principles and Elementary Applications: 1. Plausible reasoning 2. The quantitative rules 3. Elementary sampling theory 4. Elementary hypothesis testing 5. Queer uses for probability theory 6. Elementary parameter estimation 7. The central, Gaussian or normal distribution 8. Sufficiency, ancillarity, and all that 9. Repetitive experiments, probability and frequency 10. Physics of 'random experiments' Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle 12. Ignorance priors and transformation groups 13. Decision theory: historical background 14. Simple applications of decision theory 15. Paradoxes of probability theory 16. Orthodox methods: historical background 17. Principles and pathology of orthodox statistics 18. The Ap distribution and rule of succession 19. Physical measurements 20. Model comparison 21. Outliers and robustness 22. Introduction to communication theory References Appendix A. Other approaches to probability theory Appendix B. Mathematical formalities and style Appendix C. Convolutions and cumulants.

The hybrid Monte Carlo algorithm does not completely remove problems with ergodicity in the molecular dynamics trajectories unless the length of each trajectory, τ0, is kept shorter than the period of the fastest mode of the system, 2π/ωmax. The correlations which remain when larger values of τ0 are used may be eliminated by randomizing not only the molecular dynamics velocities but also the length of the trajectory, τ0, at the beginning of each new trajectory. This allows τ0 to be increased to the time scale of the slowest modes in the system, 1/ωmin, and reduces the correlation time (and therefore the computer time used) by a factor of around ωmax/2ωmin.
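
The remedy described above is short to implement. The sketch below is a minimal illustration (not code from any of the cited papers): both the momenta and the trajectory length tau0 are drawn afresh at the start of every trajectory, so that successive trajectories decorrelate. The function name and the standard-normal example target are assumptions made for the example.

```python
import numpy as np

def hmc_randomized(log_prob_grad, x0, n_samples, eps=0.1, tau_max=1.0, rng=None):
    """Hybrid Monte Carlo in which both the momenta and the trajectory
    length tau0 are randomized at the start of each trajectory, so that
    successive trajectories decorrelate (the remedy described above)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp, g = log_prob_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                        # fresh momenta
        n_steps = max(1, int(rng.uniform(0.0, tau_max) / eps))  # fresh tau0
        h0 = -lp + 0.5 * p @ p                                  # initial "energy"
        xn, pn, lpn, gn = x.copy(), p.copy(), lp, g
        for _ in range(n_steps):                                # leapfrog trajectory
            pn = pn + 0.5 * eps * gn
            xn = xn + eps * pn
            lpn, gn = log_prob_grad(xn)
            pn = pn + 0.5 * eps * gn
        h1 = -lpn + 0.5 * pn @ pn
        if np.log(rng.uniform()) < h0 - h1:                     # Metropolis accept/reject
            x, lp, g = xn, lpn, gn
        samples.append(x.copy())
    return np.array(samples)

def log_prob_grad(x):
    """Example target (an assumption): standard normal, log p = -x.x/2."""
    return -0.5 * float(x @ x), -x
```

Randomizing `n_steps` costs nothing extra per trajectory; only the draw of the trajectory length changes relative to plain hybrid Monte Carlo.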

The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications. The objective is not only to give more accurate response predictions for prescribed dynamic loadings but also to provide a quantitative assessment of this accuracy. In the methodology presented, the updated (optimal) models within a chosen class of structural models are the most probable based on the structural data if all the models are equally plausible a priori. The prediction accuracy of the optimal structural models is given by also updating probability models for the prediction error. The precision of the parameter estimates of the optimal structural models, as well as the precision of the optimal prediction-error parameters, can be examined. A large-sample asymptotic expression is given for the updated predictive probability distribution of the uncertain structural response, which is a weighted average of the predictive probability distributions for each optimal model. This predictive distribution can be used to make model predictions despite possible nonuniqueness in the optimal models.

Until the advent of powerful and accessible computing methods, the experimenter was often confronted with a difficult choice. Either describe an accurate model of a phenomenon, which would usually preclude the computation of explicit answers, or choose a standard model which would allow this computation, but may not be a close representation of a realistic model. This dilemma is present in many branches of statistical applications, for example, in electrical engineering, aeronautics, biology, networks, and astronomy. To use realistic models, the researchers in these disciplines have often developed original approaches for model fitting that are customized for their own problems. (This is particularly true of physicists, the originators of Markov chain Monte Carlo methods.) Traditional methods of analysis, such as the usual numerical analysis techniques, are not well adapted for such settings.

We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states (``annealing''), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel ``relaxation'' algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.

In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.
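
The strategy of approaching p(θ|D) through intermediate densities can be caricatured in one dimension. The sketch below is a deliberate simplification, not the adaptive algorithm of the abstract: it runs a plain random-walk Metropolis chain for each density p_j(θ) ∝ p(θ)L(θ)^β_j with β_j increasing to 1, seeding each stage with the final state of the previous, flatter stage.

```python
import numpy as np

def tempered_metropolis(log_prior, log_like, x0, betas,
                        n_per_stage=2000, step=0.5, rng=None):
    """For each beta_j, run a random-walk Metropolis chain whose target is
    p_j(x) proportional to exp(log_prior(x) + beta_j * log_like(x)); each
    stage starts from the last state of the previous, flatter stage."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    for beta in betas:
        lp = log_prior(x) + beta * log_like(x)
        samples = []
        for _ in range(n_per_stage):
            xc = x + step * rng.standard_normal()   # symmetric proposal
            lpc = log_prior(xc) + beta * log_like(xc)
            if np.log(rng.uniform()) < lpc - lp:    # Metropolis acceptance
                x, lp = xc, lpc
            samples.append(x)
    return np.array(samples)  # draws from the final stage (beta = betas[-1])
```

The early, flat stages let the chain find the region of concentration; the final stage, with β = 1, samples the posterior itself.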

System identification of structures using their measured earthquake response can play a key role in structural health monitoring, structural control and improving performance-based design. Implementation using data from strong seismic shaking is complicated by the nonlinear hysteretic response of structures. Furthermore, this inverse problem is ill-conditioned; for example, even if some components in the structure show substantial yielding, others will exhibit nearly elastic response, producing no information about their yielding behavior. Classical least-squares or maximum likelihood estimation will not work with a realistic class of hysteretic models because such a model class will be unidentifiable based on the data. It is shown here that Bayesian updating and model class selection provide a powerful and rigorous approach to tackle this problem when implemented using a recently developed stochastic simulation algorithm called Transitional Markov Chain Monte Carlo. The updating and model class selection is performed on a previously developed class of Masing hysteretic structural models that are relatively simple yet can give realistic responses to seismic loading. The theory for the Masing hysteretic models, and the theory used to perform the updating and model class selection, are presented and discussed. An illustrative example is given that uses simulated dynamic response data and shows the ability of the algorithm to identify hysteretic systems even when the class of models is unidentifiable based on the data.

The problem of updating a structural model and its associated uncertainties by utilizing structural response data is addressed. In an identifiable case, the posterior probability density function (PDF) of the uncertain model parameters for given measured data can be approximated by a weighted sum of Gaussian distributions centered at a number of discrete optimal values of the parameters at which some positive measure-of-fit function is minimized. The present paper focuses on the problem of model updating in the general unidentifiable case for which certain simplifying assumptions available for identifiable cases are not valid. In this case, the PDF is distributed in the neighbourhood of an extended and usually highly complex manifold of the parameter space that cannot be calculated explicitly. The computational difficulties associated with calculating the highly complex posterior PDF are discussed and a new adaptive algorithm, referred to as the tangential-projection (TP) algorithm, allowing for an efficient approximate representation of the above manifold and the posterior PDF is presented. Using this approximation, expressions for calculating the uncertain predictive response are established. A numerical example involving noisy data is presented to demonstrate the proposed method. Copyright © 2002 John Wiley & Sons, Ltd.

We present a new method for the numerical simulation of lattice field theory. A hybrid (molecular dynamics/Langevin) algorithm is used to guide a Monte Carlo simulation. There are no discretization errors even for large step sizes. The method is especially efficient for systems such as quantum chromodynamics which contain fermionic degrees of freedom. Detailed results are presented for four-dimensional compact quantum electrodynamics including the dynamical effects of electrons.

In this paper we present an explicit fourth-order method for the integration of Hamilton's equations. This method preserves the property that the time evolution of such a system yields a canonical transformation from the initial conditions to the final state. That is, the integration step is an explicit symplectic map. Although the result is first derived for a specific type of Hamiltonian, it is shown to be quite general. In particular, the results can be applied to any Lie group.

The concept of robust reliability is defined to take into account uncertainties from structural modeling in addition to the uncertain excitation that a structure will experience during its lifetime. A Bayesian probabilistic methodology for system identification is integrated with probabilistic structural analysis tools for the purpose of updating the assessment of the robust reliability based on dynamic test data. Methods for updating the structural reliability for both identifiable and unidentifiable models are presented. Application of the methodology to a simple beam model of a single-span bridge with soil-structure interaction at the abutments, including a case with a tuned-mass damper attached to the deck, shows that the robust reliabilities computed before and after updating with "measured" dynamic data can differ significantly.

Consider the problem of loss function minimization when only (possibly noisy) measurements of the loss function are available. In particular, no measurements of the gradient of the loss function are assumed available (as required in the steepest descent or Newton-Raphson algorithms). Stochastic approximation (SA) algorithms of the multivariate Kiefer-Wolfowitz (finite-difference) form have long been considered for such problems, but with only limited success. The simultaneous perturbation SA (SPSA) algorithm has successfully addressed one of the major shortcomings of those finite-difference SA algorithms by significantly reducing the number of measurements required in many multivariate problems of practical interest. This SPSA algorithm displays the classic behaviour of 1st-order search algorithms by typically exhibiting a steep initial decline in the loss function followed by a slow decline to the optimum. This paper presents a 2nd-order SPSA algorithm that is based on estimating both the loss function gradient and inverse Hessian matrix at each iteration. The aim of this approach is to emulate the acceleration properties associated with deterministic algorithms of Newton-Raphson form, particularly in the terminal phase where the 1st-order SPSA algorithm slows down in its convergence. This 2nd-order SPSA algorithm requires only three loss function measurements at each iteration, independent of the problem dimension. This paper includes a formal convergence result for this 2nd-order approach.

Formula translation.- Formula differentiation.- Generation of Taylor coefficients.- Examples of software for automatic differentiation and generation of Taylor coefficients.- Automatic computation of gradients, Jacobians, Hessians, and applications to optimization.- Automatic error analysis.- Solution of nonlinear systems of equations.- Numerical integration with rigorous error estimation.- Additional notes on techniques, applications, and software.

Many numerical methods call for repeated calculation of the first partial derivatives of a given function of several variables. Errors in a program for that calculation can be hard to discover. A procedure is offered for testing the program and locating its errors.

A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion.
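
The "modified Monte Carlo integration" described above is, in modern terms, the random-walk Metropolis algorithm. The following is a minimal one-dimensional sketch (an illustration, not the original hard-sphere code), sampling from an unnormalized density known only up to a constant:

```python
import numpy as np

def metropolis(log_density, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis: propose a symmetric move, accept it with
    probability min(1, p(x')/p(x)), otherwise repeat the current state."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    lp = log_density(x)
    out = np.empty(n_samples)
    for i in range(n_samples):
        xc = x + step * rng.standard_normal()   # symmetric proposal
        lpc = log_density(xc)
        if np.log(rng.uniform()) < lpc - lp:    # acceptance test
            x, lp = xc, lpc
        out[i] = x                              # rejected moves repeat x
    return out
```

Only the ratio p(x')/p(x) is ever needed, which is why the normalizing constant of the target density can remain unknown.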

Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also give guidance on the choice of sample size and allocation.

A generalization of the sampling method introduced by Metropolis et al. (1953) is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed.

A novel method of Bayesian learning with automatic relevance determination prior is presented that provides a powerful approach to problems of classification based on data features, for example, classifying soil liquefaction potential based on soil and seismic shaking parameters, automatically classifying the damage states of a structure after severe loading based on features of its dynamic response, and real-time classification of earthquakes based on seismic signals. After introduction of the theory, the method is illustrated by applying it to an earthquake record dataset from nine earthquakes to build an efficient real-time algorithm for near-source versus far-source classification of incoming seismic ground motion signals. This classification is needed in the development of early warning systems for large earthquakes. It is shown that the proposed methodology is promising since it provides a classifier with higher correct classification rates and better generalization performance than a previous Bayesian learning method with a fixed prior distribution that was applied to the same classification problem.

Consider the problem of loss-function minimization when only (possibly noisy) measurements of the loss function are available. In particular, no measurements of the gradient of the loss function are assumed available. The simultaneous perturbation SA (SPSA) algorithm has successfully addressed one of the major shortcomings of finite-difference SA algorithms by significantly reducing the number of measurements required in many multivariate problems of practical interest. This paper presents a second-order SPSA algorithm that is based on estimating both the loss function gradient and inverse Hessian matrix at each iteration. The aim of this approach is to emulate the acceleration properties associated with deterministic algorithms of Newton-Raphson form, particularly in the terminal phase where the first-order SPSA algorithm slows down in its convergence. This second-order SPSA algorithm requires only five loss function measurements at each iteration, independent of the problem dimension. This paper represents a significantly enhanced version of a previously introduced second-order algorithm by the author (1996).

The simultaneous perturbation stochastic approximation (SPSA) algorithm has attracted considerable attention for challenging optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach is based on a highly efficient simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The objective is to minimize the mean square error of the estimate. The authors also consider maximization of the likelihood that the estimate be confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found to be a symmetric Bernoulli in both cases. The authors end the paper with a numerical study related to the area of experiment design.

The need for solving multivariate optimization problems is pervasive in engineering and the physical and social sciences. The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for challenging optimization problems where it is difficult or impossible to directly obtain a gradient of the objective function with respect to the parameters being optimized. SPSA is based on an easily implemented and highly efficient gradient approximation that relies on measurements of the objective function, not on measurements of the gradient of the objective function. The gradient approximation is based on only two function measurements (regardless of the dimension of the gradient vector). This contrasts with standard finite-difference approaches, which require a number of function measurements proportional to the dimension of the gradient vector. This paper presents a simple step-by-step guide to implementation of SPSA in generic optimization problems and offers some practical suggestions for choosing certain algorithm coefficients.
This article is an introduction to the simultaneous perturbation stochastic approximation (SPSA) algorithm for stochastic optimization of multivariate systems. Optimization algorithms play a critical role in the design, analysis, and control of most engineering systems and are in widespread use in the work of APL and other organizations: The future, in fact, will be full of [optimization] algorithms. They are becoming part of almost everything. They are moving up the complexity chain to make entire companies more efficient. They also are moving down the chain as computers spread. (USA Today, 31 Dec 1997) Before presenting the SPSA algorithm, we provide some general background on the stochastic optimization context of interest here.

The probability of accepting a candidate move in the hybrid Monte Carlo algorithm can be increased by considering a transition to be between windows of several states at the beginning and end of the trajectory, with a state within the selected window being chosen according to the Boltzmann probabilities. The detailed balance condition used to justify the algorithm still holds with this procedure, provided the start state is randomly positioned within its window. The new procedure is shown empirically to significantly improve performance for a test system of uncoupled oscillators.

Cheung, S. H., and Beck, J. L. (2007a). "New stochastic simulation method for updating robust reliability of dynamic systems." Proc., 18th Engineering Mechanics Conf., ASCE, Reston, Va.

Cheung, S. H., and Beck, J. L. (2007b). "Bayesian model updating of higher-dimensional dynamic systems." Proc., 10th Int. Conf. on Applications of Statistics and Probability in Civil Engineering (ICASP10), Univ. of Tokyo, Tokyo.

Ching, J., and Chen, Y. J. (2007). "Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection, and model averaging." J. Eng. Mech., 133(7), 816–832.

Griewank, A. (1989). "On automatic differentiation." Mathematical Programming: Recent Developments and Applications, M. Iri and K. Tanabe, eds., 83–108.

Kagiwada, H., Kalaba, R., Rosakhoo, N., and Spingarn, K. (1986). Numerical Derivatives and Nonlinear Analysis, Mathematical Concepts and Methods in Science and Engineering 31, Plenum Press, New York.

Rall, L. B. (1981). Automatic Differentiation: Techniques and Applications, Lecture Notes in Computer Science, Vol. 120, Springer, Berlin.

Robert, C. P., and Casella, G. (1999). Monte Carlo Statistical Methods, Springer, New York.