Article

The sub-interval similarity: A general uncertainty quantification metric for both stochastic and interval model updating


Abstract

One of the key challenges of uncertainty analysis in model updating is the lack of experimental data. The definition of an appropriate uncertainty quantification metric, capable of extracting as much information as possible from limited and sparse experimental data, is critical to the outcome of model updating. This work is dedicated to the definition and investigation of a general-purpose uncertainty quantification metric based on the sub-interval similarity. The discrepancy between the model prediction and the experimental observation is measured in the form of intervals, instead of the probability distributions commonly used, which require extensive experimental data. An exhaustive definition of the similarity between intervals under different overlapping cases is proposed in this work. A sub-interval strategy is developed to compare the similarity considering not only the range of the intervals but, more importantly, the positions of the available observation samples within the intervals. This sub-interval similarity metric is designed to be applicable to different model updating frameworks, e.g. stochastic Bayesian updating and interval model updating. A simulated example employing the widely known 3-dof mass-spring system is presented, on which both stochastic Bayesian updating and interval updating are performed, to demonstrate the universality of the proposed sub-interval similarity metric. A practical experimental example follows to demonstrate the feasibility of the proposed metric in practical applications.
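The exact definition of the sub-interval similarity is given in the paper itself; the minimal Python sketch below only illustrates the general idea under stated assumptions: an overlap-based (Jaccard-style) similarity between two intervals, averaged over equal-width sub-intervals so that the positions of the samples inside the intervals also contribute. The function names and the specific similarity formula are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def interval_similarity(a, b):
    # Overlap-based (Jaccard-style) similarity of intervals a = (a_lo, a_hi)
    # and b = (b_lo, b_hi): length of the intersection over length of the union.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0.0 else 1.0

def sub_interval_similarity(pred_samples, obs_samples, n_sub=10):
    # Divide the joint range into n_sub equal-width sub-intervals and average
    # the similarity of the prediction/observation sample ranges inside each,
    # so the score reflects where the samples lie, not only the total range.
    pred_samples = np.asarray(pred_samples, dtype=float)
    obs_samples = np.asarray(obs_samples, dtype=float)
    lo = min(pred_samples.min(), obs_samples.min())
    hi = max(pred_samples.max(), obs_samples.max())
    edges = np.linspace(lo, hi, n_sub + 1)
    scores = []
    for left, right in zip(edges[:-1], edges[1:]):
        p = pred_samples[(pred_samples >= left) & (pred_samples <= right)]
        o = obs_samples[(obs_samples >= left) & (obs_samples <= right)]
        if p.size == 0 and o.size == 0:
            scores.append(1.0)   # both sets empty here: no disagreement
        elif p.size == 0 or o.size == 0:
            scores.append(0.0)   # only one set present: no agreement
        else:
            scores.append(interval_similarity((p.min(), p.max()), (o.min(), o.max())))
    return float(np.mean(scores))
```

In a model updating loop, one minus such a similarity could serve as the objective (or as the argument of an approximate likelihood); the paper should be consulted for the exact overlapping cases it distinguishes.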

References
Article
In the real world, a significant challenge faced in designing critical systems is the lack of available data. This results in a large degree of uncertainty and the need for uncertainty quantification tools so as to make risk-informed decisions. The NASA-Langley UQ Challenge 2019 seeks to provide such a setting, requiring different discipline-independent approaches to address typical tasks required for the design of critical systems. This paper addresses the NASA-Langley UQ Challenge by proposing four key techniques: (1) a distribution-free Bayesian model updating framework for the calibration of the uncertainty model; (2) an adaptive pinching approach to analyse and rank the relative sensitivity of the epistemic parameters; (3) probability bounds analysis to estimate failure probabilities; and (4) a Non-intrusive Stochastic Simulation approach to identify an optimal design point.
Article
In this paper we present a framework for addressing a variety of engineering design challenges with limited empirical data and partial information. This framework includes guidance on the characterisation of a mixture of uncertainties, efficient methodologies to integrate data into design decisions, to conduct reliability analysis, and to perform risk/reliability-based design optimisation. To demonstrate its efficacy, the framework has been applied to the NASA 2020 uncertainty quantification challenge. The results and discussion in the paper relate to this application.
Article
This work proposes a novel methodology to fulfil the challenging expectation in stochastic model updating of calibrating the probabilistic distributions of parameters without any assumption about the distribution formats. To achieve this task, an approximate Bayesian computation model updating framework is developed by employing staircase random variables and the Bhattacharyya distance. In this framework, parameters with aleatory and epistemic uncertainties are described by staircase random variables. The discrepancy between model predictions and observations is then quantified by the Bhattacharyya distance-based approximate likelihood. In addition, a Bayesian updating using the Euclidean distance is performed as a preconditioner to avoid non-unique solutions. The performance of the proposed procedure is demonstrated with two exemplary applications, a simulated shear building model example and a challenging benchmark problem for uncertainty treatment. These examples demonstrate the feasibility of the combined application of staircase random variables and the Bhattacharyya distance in stochastic model updating and uncertainty characterization.
Article
This tutorial paper reviews the use of advanced Monte Carlo sampling methods in the context of Bayesian model updating for engineering applications. Markov Chain Monte Carlo, Transitional Markov Chain Monte Carlo, and Sequential Monte Carlo methods are introduced, applied to different case studies and finally their performance is compared. For each of these methods, numerical implementations and their settings are provided. Three case studies with increased complexity and challenges are presented showing the advantages and limitations of each of the sampling techniques under review. The first case study presents the parameter identification for a spring-mass system under a static load. The second case study presents a 2-dimensional bi-modal posterior distribution and the aim is to observe the performance of each of these sampling techniques in sampling from such distribution. Finally, the last case study presents the stochastic identification of the model parameters of a complex and non-linear numerical model based on experimental data. The case studies presented in this paper consider the recorded data set as a single piece of information which is used to make inferences and estimations on time-invariant model parameters.
Article
The Bhattacharyya distance has been developed as a comprehensive uncertainty quantification metric by capturing multiple uncertainty sources from both numerical predictions and experimental measurements. This work pursues a further investigation of the performance of the Bhattacharyya distance in different methodologies for stochastic model updating. The first procedure is Bayesian model updating, where the Bhattacharyya distance is utilized to define an approximate likelihood function and the transitional Markov chain Monte Carlo algorithm is employed to obtain the posterior distribution of the parameters. In the second model updating procedure, the Bhattacharyya distance is utilized to construct the objective function of an optimization problem, defined as the Bhattacharyya distance between the samples of the numerical prediction and the samples of the target data. The comparison study is performed on a four degree-of-freedom mass-spring system. A challenging task is raised in this example by assigning different distributions to the parameters with imprecise distribution coefficients, which requires the stochastic updating procedure to calibrate not the parameters themselves, but their distribution properties. The performance of the Bhattacharyya distance in both the Bayesian updating and optimization-based updating procedures is presented and compared. The results demonstrate the Bhattacharyya distance to be a comprehensive and universal uncertainty quantification metric in stochastic model updating.
Article
This paper gives an overview of recent advances in the field of non-probabilistic uncertainty quantification. Techniques for both the forward propagation and the inverse quantification of interval and fuzzy uncertainty are discussed, as well as the modeling of spatial uncertainty in an interval and fuzzy context. An in-depth discussion of a recently introduced method for the inverse quantification of spatial interval uncertainty is provided and its performance is illustrated using a case study taken from the literature. It is shown that the method enables an accurate quantification of spatial uncertainty under very low data availability and with a very limited number of assumptions on the underlying uncertainty. Finally, a conceptual comparison with the class of Bayesian methods for uncertainty quantification is provided.
Article
This paper introduces an improved version of a novel inverse approach for the indirect quantification of multivariate interval uncertainty in high-dimensional models under scarce data availability. The method is compared to results obtained via the well-established probabilistic framework of Bayesian model updating via Transitional Markov Chain Monte Carlo in the context of the DLR-AIRMOD test structure. It is shown that the proposed improvements of the inverse method alleviate the curse of dimensionality of the method by a factor of up to 10^5. Comparison with the Bayesian results revealed that the most appropriate method depends largely on the desired information and the availability of data. In case large amounts of data are available, and/or the analyst desires full (joint) probabilistic descriptors of the model parameter uncertainty, the Bayesian method is shown to perform best. On the other hand, when such descriptors are not needed (e.g., for worst-case analysis) and only scarce data are available, the interval method is shown to deliver more objective and robust bounds on the uncertain parameters.
Article
The Bhattacharyya distance is a stochastic measure of the discrepancy between two sample sets that takes into account their probability distributions. The objective of this work is to further generalize the application of the Bhattacharyya distance as a novel uncertainty quantification metric by developing an approximate Bayesian computation model updating framework in which the Bhattacharyya distance is fully embedded. The Bhattacharyya distance between sample sets is evaluated via a binning algorithm. The approximate likelihood function built upon this distance is then used in a two-step Bayesian updating framework, where the Euclidean and Bhattacharyya distances are utilized in the first and second steps, respectively. The performance of the proposed procedure is demonstrated with two exemplary applications, a simulated mass-spring example and a quite challenging benchmark problem for uncertainty treatment. These examples demonstrate a gain in quality of the stochastic updating obtained by utilizing the superior features of the Bhattacharyya distance, which represents a convenient, efficient, and capable metric for stochastic model updating and uncertainty characterization.
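As a rough illustration of the binning idea, the sketch below estimates the Bhattacharyya distance between two one-dimensional sample sets by histogramming them on a common grid; the bin count and the univariate setting are simplifying assumptions, and the referenced work describes the full algorithm for multivariate samples.

```python
import numpy as np

def bhattacharyya_distance(x, y, n_bins=20):
    # Bin both sample sets on a common grid, normalise the bin counts to
    # probabilities p and q, and return D_B = -ln( sum_i sqrt(p_i * q_i) ).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    edges = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), n_bins + 1)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))     # guard against log(0) for disjoint samples
```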
Article
Test-analysis comparison metrics are mathematical functions that provide a quantitative measure of the agreement (or lack thereof) between numerical predictions and experimental measurements. While calibrating and validating models, the choice of a metric can significantly influence the outcome, yet the published research discussing the role of metrics, in particular the varying levels of statistical information the metrics can contain, has been limited. This manuscript calibrates and validates the model predictions using alternative metrics formulated based on three types of distance-based criteria: (i) the Euclidean distance, i.e. the absolute geometric distance between two points; (ii) the Mahalanobis distance, i.e. the weighted distance that considers the correlations of two point clouds; and (iii) the Bhattacharyya distance, i.e. the statistical distance between two point clouds considering their probabilistic distributions. A comparative study is presented in the first case study, where the influence of the various metrics, and the varying levels of statistical information they contain, on the predictions of the calibrated models is evaluated. In the second case study, an integrated application of the distance metrics is demonstrated through a cross-validation process with regard to the measurement variability.
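The three criteria differ in how much statistical information they use: the Euclidean distance compares only point estimates (here taken, for illustration, between the mean prediction and the mean observation), the Mahalanobis distance additionally weights the comparison by the covariance of the point clouds, and the Bhattacharyya distance (sketched above) uses the full distributions. The pooled-covariance weighting below is an assumption for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def euclidean_distance(x, y):
    # Absolute geometric distance between the mean prediction and mean observation.
    return float(np.linalg.norm(np.mean(x, axis=0) - np.mean(y, axis=0)))

def mahalanobis_distance(x, y):
    # Distance between the two cloud means, weighted by a pooled covariance so
    # that directions with large correlated scatter count for less.
    diff = np.mean(x, axis=0) - np.mean(y, axis=0)
    cov = 0.5 * (np.atleast_2d(np.cov(x, rowvar=False)) + np.atleast_2d(np.cov(y, rowvar=False)))
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```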
Article
Deterministic model updating is now a mature technology widely applied to large-scale industrial structures. It is concerned with the calibration of the parameters of a single model based on one set of test data. It is, of course, well known that different analysts produce different finite element models, make different physics-based assumptions, and parameterize their models differently. Also, tests carried out on the same structure, by different operatives, at different times, under different ambient conditions produce different results. There is no unique model and no unique data. Therefore, model updating needs to take account of modeling and test-data variability. Much emphasis is now placed on what has become known as stochastic model updating where data are available from multiple nominally identical test structures. In this paper two currently prominent stochastic model updating techniques (sensitivity-based updating and Bayesian model updating) are described and applied to the DLR AIRMOD structure.
Article
The objective of this work is to develop and validate a methodology for the identification and quantification of multivariate interval uncertainty in finite element models. The principal idea is to find a solution to an inverse problem, where the variability on the output side of the model is known from measurement data, but the multivariate uncertainty on the input parameters is unknown. For this purpose, the set of uncertain simulation results created by propagating interval uncertainty through the model is represented by its convex hull. The same concept is used to model the uncertainty in the measurements. A metric describing the discrepancy between these convex hulls is defined based on the difference between their volumes and their mutual intersection. By minimisation of this metric, the interval uncertainty on the input side of the model is identified. It is further shown how the procedure can be optimized with respect to output quantity selection. Validation of the methodology is done using simulated measurement data in two case studies. Numerically exact identification of multiple, coupled parameters having interval uncertainty is possible following the proposed methodology. Furthermore, the robustness of the method with respect to the analyst's initial estimate of the input uncertainty is illustrated. The method presented in this work is generic per se, but for the examples in this paper it is specifically applied to dynamic models, using eigenfrequencies as output quantities, as commonly applied in modal updating procedures.
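A rough sketch of such a hull-based discrepancy is given below. Computing the exact intersection volume of two convex hulls is non-trivial, so this illustration substitutes a sample-based coverage fraction for the mutual intersection term; the function name and the exact combination of terms are assumptions, not the paper's definition.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def hull_discrepancy(sim_points, meas_points):
    # Volume mismatch between the convex hulls of the simulated and measured
    # output clouds, plus a penalty for poor overlap. The overlap term is
    # approximated by the fraction of measured points lying inside the
    # simulation hull (a stand-in for the mutual intersection volume).
    vol_sim = ConvexHull(sim_points).volume
    vol_meas = ConvexHull(meas_points).volume
    inside = Delaunay(sim_points).find_simplex(meas_points) >= 0
    coverage = inside.mean()
    return abs(vol_sim - vol_meas) / max(vol_meas, 1e-12) + (1.0 - coverage)
```

Minimising such a discrepancy over the input interval bounds then identifies the multivariate interval uncertainty on the input side.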
Article
This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid the problem of sampling from difficult target probability density functions (PDFs) by instead sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample. The TMCMC approach is motivated by the adaptive Metropolis-Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC is able to draw samples from some difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with flat manifolds). The TMCMC approach can also estimate the evidence of the chosen probabilistic model class conditional on the measured data, a key component for Bayesian model class selection and model averaging. Three examples are used to demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, model class selection, and model averaging.
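A compressed sketch of the transitional scheme is given below, assuming a user-supplied log-likelihood and log-prior. The adaptive choice of the tempering step, the resampling, and the Metropolis-Hastings move are all simplified relative to the published algorithm (for instance, the proposal scaling constant and the target coefficient of variation of the weights are illustrative choices).

```python
import numpy as np

def tmcmc(log_likelihood, sample_prior, log_prior, n=1000, target_cov=1.0, seed=0):
    # Transitional MCMC sketch: temper the likelihood from beta = 0 (prior) to
    # beta = 1 (posterior), resampling and perturbing the population at each stage.
    rng = np.random.default_rng(seed)
    theta = np.asarray(sample_prior(n, rng), dtype=float)
    if theta.ndim == 1:
        theta = theta[:, None]                           # promote 1-D problems to (n, 1)
    logL = np.array([log_likelihood(t) for t in theta])
    beta = 0.0
    while beta < 1.0:
        # Bisection on the next tempering exponent so that the coefficient of
        # variation of the plausibility weights stays close to target_cov.
        lo, hi = beta, 1.0
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            w = np.exp((mid - beta) * (logL - logL.max()))
            lo, hi = (lo, mid) if w.std() / w.mean() > target_cov else (mid, hi)
        beta_new = min(1.0, max(hi, beta + 1e-3))
        w = np.exp((beta_new - beta) * (logL - logL.max()))
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                 # resample by plausibility weight
        theta, logL = theta[idx], logL[idx]
        # One Metropolis-Hastings move per chain with a scaled sample covariance.
        cov = 0.04 * np.atleast_2d(np.cov(theta, rowvar=False)) + 1e-12 * np.eye(theta.shape[1])
        for i in range(n):
            cand = rng.multivariate_normal(theta[i], cov)
            logL_cand = log_likelihood(cand)
            log_acc = beta_new * (logL_cand - logL[i]) + log_prior(cand) - log_prior(theta[i])
            if np.log(rng.uniform()) < log_acc:
                theta[i], logL[i] = cand, logL_cand
        beta = beta_new
    return theta
```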
Article
It is well known that finite element predictions are often called into question when they are in conflict with test results. The area known as model updating is concerned with the correction of finite element models by processing records of dynamic response from test structures. Model updating is a rapidly developing technology, and it is intended that this paper will provide an accurate review of the state of the art at the time of going to press. It is the authors' hope that this work will prove to be of value, especially to those who are getting acquainted with the research base and aim to participate in the application of model updating in industry, where a pressing need exists.
Article
This paper presents a multilevel Quasi-Monte Carlo method for interval analysis, as a computationally efficient method for high-dimensional linear models. Interval analysis typically requires a global optimisation procedure to calculate the interval bounds on the output side of a computational model. The main issue of such a procedure is that it requires numerous full-scale model evaluations. Even when simplified approaches such as the vertex method are applied, the required number of model evaluations scales combinatorially with the number of input intervals. This increase in required model evaluations is especially problematic for highly detailed numerical models containing thousands or even millions of degrees of freedom. In the context of probabilistic forward uncertainty propagation, multi-fidelity techniques such as multilevel Quasi-Monte Carlo show great potential to reduce the computational cost. However, their translation to an interval context is not straightforward due to the fundamental differences between interval and probabilistic methods. In this work, we introduce a multilevel Quasi-Monte Carlo framework. First, the input intervals are transformed to Cauchy random variables. Then, based on these Cauchy random variables, a multilevel sampling is designed. Finally, the corresponding model responses are post-processed to estimate the intervals on the output quantities with high accuracy. Two numerical examples show that the technique is very efficient for a medium to high number of input intervals, in comparison with traditional propagation approaches for interval analysis, while keeping the results well within a predefined tolerance.
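To make the combinatorial cost concrete, the baseline vertex method evaluates the model at every corner of the input box, i.e. 2^d runs for d input intervals, which is exactly the scaling the multilevel approach aims to avoid. A minimal sketch of that baseline (valid only for monotonic models, and not the paper's method) follows:

```python
import itertools
import numpy as np

def vertex_method(model, lows, highs):
    # Evaluate the model at all 2**d corners of the input interval box and take
    # the extreme outputs. Exact for monotonic models; cost grows combinatorially.
    outputs = [model(np.array(corner)) for corner in itertools.product(*zip(lows, highs))]
    return min(outputs), max(outputs)
```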
Article
This paper is dedicated to exploring the NASA Langley Challenge on Optimization under Uncertainty by proposing a series of approaches for both forward and inverse treatment of uncertainty propagation and quantification. The primary effort is placed on the categorization of the subproblems as forward or inverse procedures, so that dedicated techniques are proposed for the two directions, respectively. Sensitivity analysis and reliability analysis are categorized as forward procedures, while model calibration & uncertainty reduction, reliability-based optimization, and risk-based design are regarded as inverse procedures. For both directions, the overall approach is based on imprecise probability characterization, where both aleatory and epistemic uncertainties are investigated for the inputs and, consequently, the output is described as a probability-box (P-box). Theoretical development is focused on the definition of comprehensive uncertainty quantification criteria from limited and irregular time-domain observations, so as to extract as much uncertainty information as possible, which is significant for the inverse procedure to refine uncertainty models. Furthermore, a decoupling approach is proposed to investigate the P-box along two directions such that the epistemic and aleatory uncertainties are decoupled, and a two-loop procedure is designed to propagate both epistemic and aleatory uncertainties through the systematic model. The key to successfully addressing this challenge lies in striking a balance among an appropriate hypothesis for the input uncertainty model, a comprehensive criterion for output uncertainty quantification, and a computationally viable approach for both forward and inverse uncertainty treatment.
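The two-loop idea can be illustrated with a generic double-loop Monte Carlo sketch: the outer loop scans the epistemic (interval) parameters, the inner loop propagates the aleatory variables, and the envelope of the resulting empirical distributions bounds the P-box. This is a simplified, brute-force illustration of the decoupling concept rather than the authors' specific procedure.

```python
import numpy as np

def double_loop_pbox(model, epistemic_bounds, sample_aleatory,
                     n_outer=50, n_inner=1000, seed=0):
    # Outer loop: sample the epistemic parameters from their intervals.
    # Inner loop: Monte Carlo over the aleatory variables for that realization.
    # The min/max over outer realizations of each empirical quantile gives the
    # left/right bounding curves of the probability box.
    rng = np.random.default_rng(seed)
    quantile_curves = []
    for _ in range(n_outer):
        eps = np.array([rng.uniform(lo, hi) for lo, hi in epistemic_bounds])
        aleatory = sample_aleatory(n_inner, rng)
        y = np.sort([model(eps, a) for a in aleatory])
        quantile_curves.append(y)
    quantile_curves = np.array(quantile_curves)   # shape (n_outer, n_inner)
    return quantile_curves.min(axis=0), quantile_curves.max(axis=0)
```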
Article
This paper evaluates responses to the NASA Langley Challenge on Optimization under Uncertainty. The challenge respondents were tasked to quantify the aleatory and epistemic uncertainty impacting the predictions of a computational model calibrated according to a limited number of sub-system and full-system observations. In addition, they evaluated the global sensitivity of the model’s prediction with respect to the epistemic variables in order to choose a few of them for refinement. Ultimately, they had to find a reliability-based design that is robust to all/most epistemic realizations within their identified range of uncertainty. The assessments presented herein evaluate the proposed responses within a synthetic validation framework. This framework gauges the ability of the designs to satisfy the reliability requirements imposed upon them against additional data. These data correspond to additional realizations of the aleatory variables, and to the true but unknown value of the epistemic variables. Best practices and lessons learned are set forth based on the comparative analysis of the various responses.
Article
A robust stochastic model updating framework is developed for a better estimation of the uncertain properties of parameters. In this framework, in order to improve robustness, a resampling process is first designed to deal with ill sample points, especially for problems with limited sample size. Next, a mean-distance uncertainty quantification metric is proposed based on the Bhattacharyya distance and the Euclidean distance to fully exploit the available information from the measurements. The Particle Swarm Optimization algorithm is subsequently employed to update the input parameters of the investigated structure. Finally, a mass-spring system and steel plate structures are presented to illustrate the effectiveness and advantages of the proposed method. The role of the resampling process is discussed using measured samples to which an ill sample has been added.
Article
In this paper, a new method is proposed to deal with the interval model updating problem using universal grey mathematics and Gaussian process regression models (GPRMs). A nonlinear relationship between the input and the output generally exists in the interval model updating problem, which has not been specifically discussed in previous papers. To handle this issue better, a method accounting for the nonlinear monotonic relationship is proposed. It treats the interval model updating problem in a deterministic framework by means of universal grey mathematics. In addition, GPRMs are used as meta-models to improve computational efficiency. Two numerical examples and one experimental example are presented: in the first numerical example, the method is validated using a three-degree-of-freedom spring-mass system; in the second numerical example, the ability of the method to deal with nonlinear problems is verified with a set of aluminum alloy plates; in the experimental example, tests on a set of aluminum alloy plates are used to illustrate the method, with the thickness of the aluminum alloy plate taken as the updating parameter.
Article
With the increasing complexity of uncertain parameters, single-type uncertainty modeling is challenged for structures involving multiple types of uncertainties. However, in contrast to research on static problems with hybrid uncertainties, dynamic mixed uncertainty problems have not been well addressed. A novel dynamic analysis of structures with hybrid uncertainties is developed to derive expressions for the lower and upper bounds of the mean value and standard deviation of the dynamic response. Within this approach, interval variables are adopted to quantify the non-probabilistic uncertainty associated with objectively limited information, while other parameters are considered as random variables. Based on series expansion of the random and interval quantities with respect to the uncertain parameters, the proposed method evaluates the lower and upper bounds of the first- and second-order moments of the structural response by solving deterministic equations. The method is also capable of solving pure interval problems. Finally, numerical examples are analyzed to illustrate the feasibility and effectiveness of the proposed method. The influence of the individual system parameters on the structural response is also investigated.
Article
In this paper, a new interval finite element (FE) model updating strategy is proposed for the interval identification of structural parameters, covering both uncertainty propagation and uncertainty quantification. Accurate interval estimates of the system responses can be efficiently obtained by applying Monte Carlo (MC) simulation combined with surrogate models. By means of the concept of interval length, a novel quantitative index named the interval overlap ratio (IOR) is constructed to characterize the agreement of interval distributions between analytical data and measured data. Two optimization problems are constructed and solved to estimate the nominal values and interval radii of the uncertain structural parameters. Finally, numerical and experimental case studies are given to illustrate the feasibility of the proposed method for the interval identification of structural parameters.
Article
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of the inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this situation an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of the interval inverse problem is highly simplified, accompanied by a saving of computational cost. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
Article
In structural engineering, model updating is often used for non-destructive damage assessment: by calibrating stiffness parameters of finite element models based on experimentally obtained (modal) data, structural damage can be identified, quantified and located. However, the model updating problem is an inverse problem prone to ill-posedness and ill-conditioning. This means the problem is extremely sensitive to small errors, which may potentially detract from the method's robustness and reliability. As many errors or uncertainties are present in model updating, both regarding the measurements as well as the employed numerical model, it is important to take these uncertainties suitably into account. This paper aims to provide an overview of the available approaches to this end, where two methods are treated in detail: a non-probabilistic fuzzy approach and a probabilistic Bayesian approach. These methods are both elaborated for the specific case of vibration-based finite element model updating for damage assessment purposes.
Article
The problem of updating a structural model and its associated uncertainties by utilizing dynamic response data is addressed using a Bayesian statistical framework that can handle the inherent ill-conditioning and possible nonuniqueness in model updating applications. The objective is not only to give more accurate response predictions for prescribed dynamic loadings but also to provide a quantitative assessment of this accuracy. In the methodology presented, the updated (optimal) models within a chosen class of structural models are the most probable based on the structural data if all the models are equally plausible a priori. The prediction accuracy of the optimal structural models is given by also updating probability models for the prediction error. The precision of the parameter estimates of the optimal structural models, as well as the precision of the optimal prediction-error parameters, can be examined. A large-sample asymptotic expression is given for the updated predictive probability distribution of the uncertain structural response, which is a weighted average of the predictive probability distributions for each optimal model. This predictive distribution can be used to make model predictions despite possible nonuniqueness in the optimal models.
Article
In a full Bayesian probabilistic framework for "robust" system identification, structural response predictions and performance reliability are updated using structural test data D by considering the predictions of a whole set of possible structural models that are weighted by their updated probability. This involves integrating h(θ)p(θ|D) over the whole parameter space, where θ is a parameter vector defining each model within the set of possible models of the structure, h(θ) is a model prediction of a response quantity of interest, and p(θ|D) is the updated probability density for θ, which provides a measure of how plausible each model is given the data D. The evaluation of this integral is difficult because the dimension of the parameter space is usually too large for direct numerical integration and p(θ|D) is concentrated in a small region in the parameter space and only known up to a scaling constant. An adaptive Markov chain Monte Carlo simulation approach is proposed to evaluate the desired integral that is based on the Metropolis-Hastings algorithm and a concept similar to simulated annealing. By carrying out a series of Markov chain simulations with limiting stationary distributions equal to a sequence of intermediate probability densities that converge on p(θ|D), the region of concentration of p(θ|D) is gradually portrayed. The Markov chain samples are used to estimate the desired integral by statistical averaging. The method is illustrated using simulated dynamic test data to update the robust response variance and reliability of a moment-resisting frame for two cases: one where the model is only locally identifiable based on the data and the other where it is unidentifiable.
Article
The objective of this paper is to give a general overview of recent research activities on non-probabilistic finite element analysis and its application for the representation of parametric uncertainty in applied mechanics. The overview focuses on interval as well as fuzzy uncertainty treatment in finite element analysis. Since the interval finite element problem forms the core of a fuzzy analysis, the paper first discusses the problem of finding output ranges of classical deterministic finite element problems where uncertain physical parameters are described by interval quantities. Different finite element analysis types will be considered. The paper gives an overview of the current state-of-the-art of interval techniques available from literature, focussing on methodological as well as practical aspects of the presented methods when their application in an industrial context is envisaged. Their possible value in the framework of applied mechanics is discussed as well. The paper then gives an overview of recent developments in the extension of the interval methods towards fuzzy finite element analysis. Recent developments in the framework of the transformation method as well as optimisation-based procedures are discussed. Finally, the paper concentrates specifically on implementation strategies for the application of the interval and fuzzy finite element method to large FE problems.
Article
Interval model updating in the presence of irreducible uncertain measured data is defined and solutions are made available for two cases. In the first case, the parameter vertex solution is used but is found to be valid only for particular parameterisation of the finite element model and particular output data. In the second case, a general solution is considered, based on the use of a meta-model which acts as a surrogate for the full finite element mathematical model. Thus, a region of input data is mapped to a region of output data with parameters obtained by regression analysis. The Kriging predictor is chosen as the meta-model in this paper and is found to be capable of predicting the regions of input and output parameter variations with very good accuracy. The interval model updating approach is formulated based on the Kriging predictor and an iterative procedure is developed. The method is validated numerically using a three degree of freedom mass-spring system with both well-separated and close modes. A significant advantage of Kriging interpolation is that it enables the use of updating parameters that are difficult to use by conventional correction of the finite element model. An example of this is demonstrated in an experimental exercise where the positions of two beams in a frame structure are selected as updating parameters.
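As a rough illustration of how a Kriging meta-model can stand in for the full finite element model during interval updating, the sketch below uses scikit-learn's Gaussian process regressor as the Kriging predictor, trains it on a handful of full-model runs, and then bounds the output over an input parameter box by cheap random search on the surrogate. The training budget, kernel, and search strategy are illustrative assumptions, not the iterative procedure of the cited paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def output_interval_via_kriging(model, lows, highs, n_train=60, n_search=5000, seed=0):
    # Train a Kriging surrogate on a few expensive model runs, then estimate the
    # output interval over the input box [lows, highs] by evaluating the cheap
    # surrogate at many random points instead of the full model.
    rng = np.random.default_rng(seed)
    d = len(lows)
    X_train = rng.uniform(lows, highs, size=(n_train, d))
    y_train = np.array([model(x) for x in X_train])
    kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(d)),
                                       normalize_y=True).fit(X_train, y_train)
    X_search = rng.uniform(lows, highs, size=(n_search, d))
    y_pred = kriging.predict(X_search)
    return float(y_pred.min()), float(y_pred.max())
```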
Article
The sensitivity method is probably the most successful of the many approaches to the problem of updating finite element models of engineering structures based on vibration test data. It has been applied successfully to large-scale industrial problems and proprietary codes are available based on the techniques explained in simple terms in this article. A basic introduction to the most important procedures of computational model updating is provided, including tutorial examples to reinforce the reader’s understanding and a large scale model updating example of a helicopter airframe.
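The core of the sensitivity method is a Gauss-Newton-type correction loop in which a sensitivity matrix maps parameter changes to changes in the measured outputs (typically natural frequencies and mode shapes). A bare-bones, unweighted sketch with finite-difference sensitivities follows; industrial implementations add weighting, regularisation, and analytical sensitivities.

```python
import numpy as np

def sensitivity_update(predict, theta0, z_measured, n_iter=10, step=1e-6):
    # Iteratively correct the parameters theta so that the model outputs
    # predict(theta) approach the measured outputs z_measured.
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iter):
        z = predict(theta)
        sens = np.zeros((len(z), len(theta)))
        for j in range(len(theta)):                       # finite-difference sensitivity matrix
            perturbed = theta.copy()
            perturbed[j] += step
            sens[:, j] = (predict(perturbed) - z) / step
        theta = theta + np.linalg.pinv(sens) @ (z_measured - z)   # pseudo-inverse update
    return theta
```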
Article
With deterministic methods, finite element model parameters are updated using a single set of experimental data. As a consequence, the corrected analytical model only reflects this single test case. However, test data are inherently exposed to uncertainty due to measurement errors, different modal extraction techniques, etc. An even more relevant factor for variability originates from production tolerances, and consequently the question arises of how to describe model parameters from the stochastic point of view. It would therefore be desirable to use the statistical properties of multiple sets of experimental data and to consider the updating parameters as random variables. This paper presents an inverse approach to identifying a stochastic finite element model from uncertain test data. In detail, this work demonstrates a method to adjust design parameter means and their related covariance matrix from multiple sets of experimental modal data. Results are shown for a numerical example.
Article
The usual model updating method may be considered to be deterministic since it uses measurements from a single test system to correct a nominal finite element model. There may however be variability in seemingly identical test structures and uncertainties in the finite element model. Variability in test structures may arise from many sources including geometric tolerances and the manufacturing process, and modelling uncertainties may result from the use of nominal material properties, ill-defined joint stiffnesses and rigid boundary conditions. In this paper, the theory of stochastic model updating using a Monte-Carlo inverse procedure with multiple sets of experimental results is explained and then applied to the case of a simulated three degree-of-freedom system, which is used to fix ideas and also to illustrate some of the practical limitations of the method. In the companion paper, stochastic model updating is applied to a benchmark structure using a contact finite element model that includes common uncertainties in the modelling of the spot welds.
Article
The application of a stochastic model updating technique using Monte-Carlo inverse propagation and multivariate multiple regression to converge a set of analytical models with randomised updating parameters upon a set of nominally identical physical structures is considered. The structure in question is a short beam manufactured from two components, one of folded steel and the other flat. The two are connected by two rows of spot-welds. The main uncertainty in the model is concerned with the spot-weld but there is also considerable manufacturing variability, principally in the radii of the folds.
Article
An optimization method for uncertain structures is suggested based on the convex model and a satisfaction degree of intervals. In the investigated problem, the uncertainty only exists in the constraints. The convex model is used to describe the uncertainty, for which only the intervals of the uncertain parameters are needed rather than precise probability distributions. A satisfaction degree of intervals, which represents the possibility that one interval is smaller than another, is employed to deal with the uncertain constraints. Based on a predetermined satisfaction degree level, the uncertain constraints are transformed into deterministic ones, and the transformed optimization problem can be solved by traditional optimization methods. For complex structural problems where the optimization model cannot be expressed in an explicit form, the interval analysis method is adopted to calculate the intervals of the constraints efficiently, thereby eliminating the nested optimization. Two numerical examples are presented to demonstrate the efficiency of the suggested method.
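One common way (used here purely for illustration; the cited paper's exact formula may differ) to express the possibility that an interval a is smaller than an interval b is the fraction of the combined widths by which b extends beyond the lower end of a, clipped to [0, 1]:

```python
def satisfaction_degree(a, b):
    # Possibility that interval a = (a_lo, a_hi) is smaller than b = (b_lo, b_hi).
    # Returns 1.0 when a lies entirely below b and 0.0 when it lies entirely above.
    width = (a[1] - a[0]) + (b[1] - b[0])
    if width == 0.0:
        return 1.0 if a[0] <= b[0] else 0.0
    return min(1.0, max(0.0, (b[1] - a[0]) / width))
```

An uncertain constraint whose response interval is a and whose allowable interval is b would then be retained as feasible only if satisfaction_degree(a, b) exceeds the predetermined level.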
Article
The problem of model updating in the presence of test-structure variability is addressed. Model updating equations are developed using the sensitivity method and presented in a stochastic form with terms that each consist of a deterministic part and a random variable. Two perturbation methods are then developed for the estimation of the first and second statistical moments of randomised updating parameters from measured variability in modal responses (e.g. natural frequencies and mode shapes). A particular aspect of the stochastic model updating problem is the requirement for large amounts of computing time, which may be reduced by making various assumptions and simplifications. It is shown that when the correlation between the updating parameters and the measurements is omitted, then the requirement to calculate the second-order sensitivities is no longer necessary, yet there is no significant deterioration in the estimated parameter distributions. Numerical simulations and a physical experiment are used to illustrate the stochastic model updating procedure.
Article
We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
Moens, Non-probabilistic finite element analysis for parametric uncertainty treatment in applied mechanics: Recent advances.
Y. Wang, L. Sui, Design of Experiment and Matlab Data Analysis, Tsinghua University Press, Beijing, 2012.