Article

Abstract

Hybrid reliability analysis with mixed random and interval uncertainties is a significant challenge in the reliability assessment of engineering structures. The situation becomes even more intractable when incomplete interval data are involved. To obtain reliable estimates of the failure probability limits, an effective parameter estimation method, integrating the quantile variant of the Expectation-Maximization algorithm and the Kullback-Leibler divergence, is proposed to transform uncertain variables with incomplete data into random variables with distribution uncertainty. Then, an adaptive Kriging-assisted hybrid reliability analysis method is developed to ensure computational accuracy and efficiency. In this method, a candidate pool incorporating the distribution uncertainty is constructed, and its size is adaptively reduced by removing the samples that violate the projection uniformity on the input dimensions with exact distributions. Meanwhile, an improved U learning function and an error-based convergence criterion are defined to drive and stop the adaptive process. The failure probability limits are then estimated by combining the refined Kriging model and Monte Carlo simulation. Four application examples are employed to verify the superiority of the proposed method. Comparison results show that the proposed method can significantly improve computational efficiency while ensuring the accuracy and reliability of the estimated failure probability interval under incomplete interval observations.
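To make the estimated quantity concrete, the sketch below shows how a plain double-loop Monte Carlo simulation yields the failure probability bounds that the proposed Kriging-assisted method is designed to approximate much more cheaply. It is only a minimal illustration under invented assumptions (a toy performance function g, two normal random inputs, one interval input); the names and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def g(x, y):
    """Hypothetical performance function: failure when g < 0.
    x: realisations of the random inputs, y: the interval inputs."""
    return 2.5 - x[0] ** 2 * x[1] + 0.5 * y[0]

def failure_probability_bounds(n_mc=5_000, y_bounds=((-1.0, 1.0),), seed=0):
    rng = np.random.default_rng(seed)
    x_samples = rng.normal(loc=[1.0, 1.0], scale=[0.3, 0.3], size=(n_mc, 2))
    y0 = np.mean(y_bounds, axis=1)            # start the inner search at the midpoint
    n_lo = n_hi = 0
    for x in x_samples:
        # Inner loop: extreme responses over the interval variables.
        g_min = minimize(lambda y: g(x, y), y0, bounds=y_bounds).fun
        g_max = -minimize(lambda y: -g(x, y), y0, bounds=y_bounds).fun
        n_lo += g_max < 0   # fails for every admissible y  -> lower-bound count
        n_hi += g_min < 0   # fails for at least one y      -> upper-bound count
    return n_lo / n_mc, n_hi / n_mc

if __name__ == "__main__":
    print(failure_probability_bounds())       # (lower, upper) failure probability
```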

... With an introductory overview of imprecise reliability provided by Utkin and Coolen (2007), the current research status of reliability analysis involving imprecise variables is framed as a question of sensitivity analysis aiming to estimate the effects of uncertain distribution parameters on the uncertainty of the failure probability (Yun, Li, Chen, & Wang, 2024). The scientific community also searches for more efficient and accurate methods to solve reliability problems described by imprecise variables: Xiao, Park, Lin, Ouyang, and Ma (2023) continue developing the adaptive Kriging-assisted reliability analysis method under the active learning framework, while Abdollahi, Shahraki, Faes, and Rashki (2024) suggest a dimension reduction-based solution utilising a soft Monte Carlo simulation. ...
... Many of these factors involve uncertainties, which introduce both epistemic uncertainty (lack of knowledge) and aleatory uncertainty (natural variability) [25,34]. Therefore, conducting reliability analysis on systems with uncertain variables is a critical task in engineering. ...
Article
Full-text available
This paper establishes a hybrid variable system failure probability optimization model based on sampling methods and weighting coefficients. By introducing auxiliary input variables, importance sampling functions, and the p-box, failure samples are mapped from the random variable space to the p-box variable space. New weight coefficients are constructed, including importance sampling weights and interval weights. Combining discretization methods and Monte Carlo simulation (MCS), the interval weights are transformed into variables, and constraints conforming to the p-box variable distribution are constructed. After calculating the weighting coefficients for all failure samples, the new failure probability optimization model is built. This model is independent of the performance functions and does not involve cyclic optimization, with computational complexity related only to the dimensions. Six cases are used for method comparison, validating that the new method exhibits higher efficiency and accuracy.
... In this context, it is also important to highlight the application to systems, and the uniqueness of approaches that seek a more effective assessment at the system level, which has also been widely researched in this field [41][42][43][44]. Equally, multiple works have addressed the challenge of assessing small failure probabilities in reliability analysis [45,46] and the handling of high-dimensional spaces [47][48][49][50], or, using the same premise, solving non-probabilistic reliability problems [51,52]. The interested reader is directed to the review of Teixeira et al. [1] and other works that further discuss in detail machine-learning based reliability analysis relying on sequential evaluations. ...
... However, the above methods mainly adopt the one-shot approach, i.e., sampling directly without any model information, which requires a large number of sample points to ensure model accuracy [26]. Unlike the one-shot approach, the active learning-based sampling method constructs a learning function according to the predictive information from the Kriging model, selects sequential sample points, updates the Kriging model, and thereby improves the model accuracy in the region of interest [27]. Recently, the combination of active learning Kriging modeling and decoupled methods in RBDO has attracted much attention; these methods can be divided into perfect and imperfect decoupled methods [3]. ...
... Recently, vast efforts have been made to further leverage adaptive surrogate modeling with MCS methods for exploring the limit of the adaptive Kriging methods for uncertainty quantification and reliability assessment [26][27][28][29][30][31]. It has been well documented that the adaptive Kriging method is capable of developing cost-effective surrogate models with relatively high-fidelity levels for low-dimensional problems. ...
Article
Along with the rapid advancement of additive manufacturing technology, 3D-printed structures and materials have been widely employed in diverse applications. Computer simulations of these structures and materials are often characterized by a vast number of spatially varying parameters used to predict the structural response of interest. Direct Monte Carlo methods are infeasible for the uncertainty quantification and reliability assessment of such systems as they require a huge number of forward model evaluations in order to obtain convergent statistics. To alleviate this difficulty, this paper presents a convolutional dimension-reduction network with knowledge reasoning-based loss regularization as an explainable deep learning framework for surrogate modeling and uncertainty quantification of structures with high-dimensional spatial variations. To manage the inherent high dimensionality, a deep Convolutional Dimension-Reduction network (ConvDR) is constructed to transform the spatial data into a low-dimensional latent space. In the latent space, domain knowledge is formulated as a form of loss regularization to train the ConvDR network as a surrogate model to predict the response of interest. Evolutionary algorithms are utilized to train the network. Two 2D structures with manufacturing-induced spatially varying material compositions are used to demonstrate the performance of the proposed approach.
Article
Multi-fidelity surrogate modeling offers a cost-effective approach to reduce extensive evaluations of expensive physics-based simulations for reliability predictions. However, considering spatial uncertainties in multi-fidelity surrogate modeling remains extremely challenging due to the curse of dimensionality. To address this challenge, this paper introduces a deep learning-based multi-fidelity surrogate modeling approach that fuses multi-fidelity datasets for high-dimensional reliability analysis of complex structures. It first involves a heterogeneous dimension transformation approach to bridge the gap in terms of input format between the low-fidelity and high-fidelity domains. Then, an explainable deep convolutional dimension-reduction network is proposed to effectively reduce the dimensionality of the structural reliability problems. To obtain a meaningful low dimensional space, a new knowledge reasoning-based loss regularization mechanism is integrated with the covariance matrix adaptation evolution strategy to encourage an unbiased linear pattern in the latent space for reliability predictions. Then, the high-fidelity data can be utilized for bias modeling using Gaussian process regression. Finally, Monte Carlo simulation is employed for the propagation of high-dimensional spatial uncertainties. Two structural examples are utilized to validate the effectiveness of the proposed method.
Article
Two-phase degradation is a prevalent degradation mechanism observed in modern systems, typically characterized by a change in the degradation rate or trend of a system's performance at a specific time point. Ignoring this change in degradation models can lead to considerable biases in predicting the remaining useful life (RUL) of the system, potentially leading to inappropriate condition-based maintenance decisions. To address this issue, we propose a novel two-phase degradation model based on a reparameterized inverse Gaussian process. The model considers variations in both change points and model parameters among different systems to account for subject-to-subject heterogeneity. The unknown parameters are estimated using both maximum likelihood and Bayesian approaches. Additionally, we propose an adaptive replacement policy based on the distribution of the RUL. By sequentially obtaining new degradation data, we dynamically update the estimates of the model parameters and of the RUL distribution, allowing for adaptive replacement policies. A simulation study is conducted to assess the performance of our methodologies. Finally, a Lithium-ion battery example is provided to validate the proposed model and adaptive replacement policy. Technical details and additional results of the case study are available as online supplementary materials.
Article
Full-text available
The probability box (P-box) model is an effective quantification tool that can deal with aleatory and epistemic uncertainties and can generally be categorized into two classes, namely, parameterized P-box and non-parameterized P-box ones. This paper proposes a new structural reliability analysis method with the non-parameterized P-box uncertainty, through which bounds of the failure probability can be obtained efficiently. For the convenience of calculation, the reliability analysis problems are divided into the univariate and multivariate ones. Firstly, structural failure probability bound analysis for the univariate problem is converted into solving two linear programming models by discretizing the cumulative distribution function (CDF) of the P-box variable, which can be solved efficiently by the simplex algorithm. Secondly, an iterative technique is used to decompose the multivariate problem into a series of univariate problems, in which only one non-parameterized P-box's CDF is optimized and the remaining counterparts are fixed. Hence, the failure probability bounds can be obtained by solving a series of linear programming problems. Finally, the effectiveness of the proposed method is demonstrated by investigating three numerical examples.
Article
Full-text available
Active learning methods have recently surged in the literature due to their ability to solve complex structural reliability problems at an affordable computational cost. These methods are designed by adaptively building an inexpensive surrogate of the original limit-state function. Examples of such surrogates include Gaussian process models, which have been adopted in many contributions, the most popular being efficient global reliability analysis (EGRA) and active Kriging Monte Carlo simulation (AK-MCS), two milestone contributions in the field. In this paper, we first conduct a survey of the recent literature, showing that most of the proposed methods actually stem from modifying one or more aspects of the two aforementioned methods. We then propose a generalized modular framework to build on-the-fly efficient active learning strategies by combining the following four ingredients or modules: surrogate model, reliability estimation algorithm, learning function, and stopping criterion. Using this framework, we devise 39 strategies for the solution of 20 reliability benchmark problems. The results of this extensive benchmark (more than 12,000 reliability problems solved) are analyzed under various criteria, leading to a synthesized set of recommendations for practitioners. These may be refined with a priori knowledge about the features of the problem to solve, i.e., its dimensionality and the magnitude of the failure probability. This benchmark ultimately highlighted the importance of using surrogates in conjunction with sophisticated reliability estimation algorithms as a way to enhance the efficiency of the latter.
Article
Full-text available
Epistemic uncertainty widely exists in the early design stage of complex engineering structures and throughout the full life cycle of innovative structure design, and it should be appropriately quantified, managed, and controlled to ensure the reliability and safety of the product. Evidence theory is usually regarded as a promising model for dealing with epistemic uncertainty, as it employs a general and flexible framework, the basic probability assignment function, which makes the quantification and propagation of epistemic uncertainty more effective. Owing to this capability, evidence theory has been applied in the field of structural reliability over the past few decades, and a series of important advances has been achieved. Evidence-theory-based reliability analysis thus provides an important means for engineering structure design, especially under epistemic uncertainty, and it has become one of the research hotspots in the field of structural reliability. This paper reviews the four main research directions of evidence-theory-based reliability analysis, each focused on solving one critical issue in this field, namely, computational efficiency, parameter correlation, hybrid uncertainties, and reliability-based design optimization. It summarizes the main scientific problems, technical difficulties, and current research status of each direction. Based on the review, this paper also provides an outlook for future research in evidence-theory-based structural reliability analysis.
Article
Full-text available
Aleatory and epistemic uncertainties usually coexist within a mechanistic model, which motivates the hybrid structural reliability analysis considering random and interval variables in this paper. The introduction of interval variables requires one to recursively evaluate embedded optimizations for the extremum of a performance function; the corresponding structural reliability analysis hence becomes a rather computationally intensive task. In this paper, physical characteristics of potential optima of the interval variables are first derived based on the Karush-Kuhn-Tucker condition, which is further programmed as a simulation procedure to pair qualified candidate samples. Then, an outer truncation boundary provided by the first-order reliability method is used to link the size of a truncation domain with the targeted failure probability, whereas the U function acts as a refinement criterion to remove inner samples for increased learning efficiency. Given new samples detected by the revised reliability-based expected improvement function, an adaptive Kriging surrogate model is determined to tackle the hybrid structural reliability analysis. Several numerical examples in the literature are presented to demonstrate applications of the proposed algorithm. Compared to benchmark results provided by brute-force Monte Carlo simulation, the high accuracy and efficiency of the proposed approach justify its potential for hybrid structural reliability analysis.
Article
Full-text available
An uncertainty quantification and validation framework is presented to account for both aleatory and epistemic uncertainties in stochastic simulations of turbine engine components. The spatial variability of the uncertain geometric parameters obtained from coordinate measuring machine data of manufactured parts is represented as aleatory uncertainty. Porosity and defects in the manufactured parts, characterized from micro-CT-scanned images, are represented as epistemic uncertainty. A stochastic upscaling method and a probability box approach are integrated to propagate both the epistemic and aleatory uncertainties from fine models to coarse models to quantify the homogenized elastic modulus uncertainties. The framework is applied to a turbine blade example and validated by modal frequency experiments on the manufactured blade samples. A validation approach, called the mean curve validation method, is utilized to effectively compare the p-box of the predictions with the experimental results. The application results show that the proposed framework can significantly reduce the complexity of the engineering problem as well as produce accurate results when both aleatory and epistemic uncertainties exist in the problem.
Article
Full-text available
This paper proposes an efficient Kriging-based subset simulation (KSS) method for hybrid reliability analysis under random and interval variables (HRA-RI) with small failure probability. In this method, Kriging metamodel is employed to replace the true performance function, and it is smartly updated based on the samples in the first and last levels of subset simulation (SS). To achieve the smart update, a new update strategy is developed to search out samples located around the projection outlines on the limit-state surface. Meanwhile, the number of samples in each level of SS is adaptively adjusted according to the coefficients of variation of estimated failure probabilities. Besides, to quantify the Kriging metamodel uncertainty in the estimation of the upper and lower bounds of the small failure probability, two uncertainty functions are defined and the corresponding termination conditions are developed to control Kriging update. The performance of KSS is tested by four examples. Results indicate that KSS is accurate and efficient for HRA-RI with small failure probability.
Article
Full-text available
The expectation–maximization algorithm is a powerful computational technique for finding the maximum likelihood estimates for parametric models when the data are not fully observed. The expectation–maximization algorithm is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the expectation–maximization algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo expectation–maximization algorithm, which uses a random sample to estimate the integral at each E-step. The problem with the Monte Carlo expectation–maximization algorithm is that it often converges to the integral quite slowly and the convergence behavior can also be unstable, which causes a computational burden. In this paper, we propose what we refer to as the quantile variant of the expectation–maximization algorithm. We prove that the proposed method has an accuracy of O(1/K²), while the Monte Carlo expectation–maximization method has an accuracy of O_p(1/K). Thus, the proposed method possesses faster and more stable convergence properties than the Monte Carlo expectation–maximization algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
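As a concrete illustration of the idea behind the quantile variant, the sketch below contrasts a Monte Carlo E-step (averaging random draws from the truncated distribution) with a quantile-type E-step (averaging the same distribution at deterministic midpoint quantiles). It assumes a normal model with known standard deviation and invented interval-censored observations, so it is far simpler than the algorithm the paper develops.

```python
import numpy as np
from scipy.stats import truncnorm

def e_step_quantile(a, b, mu, sigma, K=20):
    """Quantile-type E-step: deterministic midpoint quantiles of the
    truncated normal replace the random draws of Monte Carlo EM."""
    p = (np.arange(K) + 0.5) / K                       # midpoint probabilities
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    q = truncnorm.ppf(p[:, None], alpha, beta, loc=mu, scale=sigma)
    return q.mean(axis=0)                              # E[X | a <= X <= b]

def e_step_monte_carlo(a, b, mu, sigma, K=20, rng=None):
    rng = rng or np.random.default_rng(0)
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    draws = truncnorm.rvs(alpha, beta, loc=mu, scale=sigma,
                          size=(K, len(a)), random_state=rng)
    return draws.mean(axis=0)

def em_mean(a, b, sigma=1.0, mu0=0.0, iters=50, e_step=e_step_quantile):
    """EM for the mean of a normal model from interval-censored data."""
    mu = mu0
    for _ in range(iters):
        mu = e_step(a, b, mu, sigma).mean()            # M-step for a normal mean
    return mu

# invented interval-censored observations [a_i, b_i]
a = np.array([0.0, 1.0, -0.5, 2.0])
b = np.array([1.5, 2.5,  1.0, 3.0])
print(em_mean(a, b))
```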
Article
Full-text available
Traditional structural uncertainty analysis is mainly based on probability models and requires the establishment of accurate parametric probability distribution functions using large numbers of experimental samples. In many actual engineering problems, the probability distributions of some parameters can be established because sufficient samples are available, whereas for other parameters, due to the lack or poor quality of samples, only their variation intervals can be obtained, or their probability distribution types can be determined from the existing data while some of the distribution parameters, such as the mean and standard deviation, can only be given interval estimates. This constitutes an important class of probability-interval hybrid uncertainty problems, in which aleatory and epistemic uncertainties coexist. Probability-interval hybrid uncertainty analysis provides an important means for the reliability analysis and design of many complex structures, and has become one of the research focuses in the field of structural uncertainty analysis over the past decades. This paper reviews the four main research directions in this area, i.e., uncertainty modeling, uncertainty propagation analysis, structural reliability analysis, and reliability-based design optimization. It summarizes the main scientific problems, technical difficulties, and current research status of each direction. Based on the review, this paper also provides an outlook for future research in probability-interval hybrid uncertainty analysis.
Article
Full-text available
An essential issue in surrogate model-based reliability analysis is the selection of training points. Approaches such as efficient global reliability analysis (EGRA) and adaptive Kriging Monte Carlo simulation (AK-MCS) have been developed to adaptively select training points that are close to the limit state. Both the learning functions and the convergence criteria for selecting training points in EGRA and AK-MCS are defined from the perspective of individual responses at Monte Carlo samples. This causes two problems: (1) some extra training points are selected after the reliability estimate already satisfies the accuracy target; and (2) the selected training points may not be the optimal ones for reliability analysis. This paper proposes a Global Sensitivity Analysis enhanced Surrogate (GSAS) modeling method for reliability analysis. Both the convergence criterion and the strategy for selecting new training points are defined from the perspective of the reliability estimate instead of the individual responses of MCS samples. The new training points are identified according to their contribution to the uncertainty in the reliability estimate based on global sensitivity analysis. The selection of new training points stops when the accuracy of the reliability estimate reaches a specific target. Five examples are used to assess the accuracy and efficiency of the proposed method. The results show that the efficiency and accuracy of the proposed method are better than those of EGRA and AK-MCS.
Article
Full-text available
Hybrid reliability analysis (HRA) with both aleatory and epistemic uncertainties is investigated in this paper. The aleatory uncertainties are described by random variables, and the epistemic uncertainties are described by a probability-box (p-box) model. Although tremendous efforts have been devoted to propagating random or p-box uncertainties, much less attention has been paid to analyzing hybrid reliability with both of them. For HRA, optimization-based Interval Monte Carlo Simulation (OIMCS) is available to estimate the bounds of the failure probability, but it requires an enormous computational effort. A new method combining the Kriging model with OIMCS is proposed in this paper. When constructing the Kriging model, we only locally approximate the performance function in the region where its sign is prone to be wrongly predicted. This is based on the idea that a surrogate model that exactly predicts only the sign of the performance function can satisfy the accuracy demand of HRA. Then OIMCS can be effectively performed based on the Kriging model. Three numerical examples and an engineering application are investigated to demonstrate the performance of the proposed method.
Article
Full-text available
In this letter, we introduce an estimator of Kullback–Leibler divergence based on two independent samples. We show that on any finite alphabet, this estimator has an exponentially decaying bias and that it is consistent and asymptotically normal. To explain the importance of this estimator, we provide a thorough analysis of the more standard plug-in estimator. We show that it is consistent and asymptotically normal, but with an infinite bias. Moreover, if we modify the plug-in estimator to remove the rare events that cause the bias to become infinite, the bias still decays at a rate no faster than . Further, we extend our results to estimating the symmetrized Kullback–Leibler divergence. We conclude by providing simulation results, which show that the asymptotic properties of these estimators hold even for relatively small sample sizes.
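For intuition, here is a minimal sketch of the standard plug-in estimator that the letter analyses (not the improved estimator it proposes): relative frequencies from the two samples are substituted into the divergence formula, and the bias problem the letter highlights appears whenever a symbol observed in the first sample is rare or absent in the second. The alphabet and distributions below are invented.

```python
import numpy as np

def plug_in_kl(x, y, alphabet):
    """Plug-in estimator of D(P||Q) from two independent samples.
    Relative frequencies replace the unknown distributions; the estimate
    blows up whenever a symbol seen in x never occurs in y."""
    p = np.array([np.mean(x == s) for s in alphabet])
    q = np.array([np.mean(y == s) for s in alphabet])
    mask = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(1)
alphabet = np.arange(4)
x = rng.choice(alphabet, size=500, p=[0.4, 0.3, 0.2, 0.1])
y = rng.choice(alphabet, size=500, p=[0.25, 0.25, 0.25, 0.25])
print(plug_in_kl(x, y, alphabet))   # true D(P||Q) is about 0.106 here
```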
Article
Full-text available
Consider a system which is made up of multiple components connected in a series. In this case, the failure of the whole system is caused by the earliest failure of any of the components, which is commonly referred to as competing risks. In certain situations, it is observed that the determination of the cause of failure may be expensive, or may be very difficult to observe due to the lack of appropriate diagnostics. Therefore, it might be the case that the failure time is observed, but its corresponding cause of failure is not fully investigated. This is known as masking. Moreover, this competing risks problem is further complicated due to possible censoring. In practice, censoring is very common because of time and cost considerations on experiments. In this paper, we deal with parameter estimation of the incomplete lifetime data in competing risks using the EM algorithm, where incompleteness arises due to censoring and masking. Several studies have been carried out, but parameter estimation for incomplete data has mainly focused on exponential models. We provide the general likelihood method, and the parameter estimation of a variety of models including exponential, s-normal, and lognormal models. This method can be easily implemented to find the MLE of other models. Exponential and lognormal examples are illustrated with parameter estimation, and a graphical technique for checking model validity.
Article
Hybrid reliability analysis (HRA) has been extensively explored and is mainly performed based on two categories of models. Mathematical properties of these models are stated and proven in a rigorous manner, providing insights into HRA problems. Potential obstacles of the most appealing Kriging-based methods available in the literature for HRA include: metamodeling in an augmented-dimensional space spanned by both random and interval parameters, which entails the inversion of a large correlation matrix; the absence of a distance metric in learning functions, which causes sample clustering; and repeated interval analysis, mainly completed by either a sampling-based or an optimization-based algorithm, which leads to excessive computational cost and memory requirements, especially for high-dimensional interval inputs. In this sense, a dimension-wise analysis driven active learning paired-Kriging method is proposed, where a paired-Kriging metamodel is constructed solely in the random input space and, at each training point, the interval analysis is conducted by a dimension-wise analysis method. A new learning function and termination criterion are conceived for refining the paired-Kriging model. After validating its numerical performance on typical problems, it is concluded that the proposed method outperforms the common-in-use ones for HRA with high-dimensional interval inputs, while its accuracy and efficiency are comparable in the case of low-dimensional interval inputs.
Article
To address the low efficiency of structural reliability analysis under random-interval mixed uncertainties (RIMU), this paper establishes a line sampling (LS) method under RIMU. The proposed LS divides the reliability analysis under RIMU into two stages. Markov chain simulation is used to efficiently search for the design point under RIMU in the first stage, and the upper and lower bounds of the failure probability are then estimated by LS in the second stage. To improve the computational efficiency of the proposed LS under RIMU, the Kriging model is employed to reduce the number of model evaluations in the two stages. For efficiently searching for the design point, the Kriging model is constructed and adaptively updated in the first stage to accurately recognize the Markov chain candidate state, and it is then sequentially updated by the improved U learning function in the second stage to accurately estimate the failure probability bounds. The proposed LS under RIMU with the Kriging model can not only reduce the number of model evaluations but also decrease the size of the candidate sample pool for constructing the Kriging model in both stages. The presented examples demonstrate the superior computational efficiency and accuracy of the proposed method in comparison with some existing methods.
Article
Uncertainty propagation and reliability evaluation, being crucial parts of engineering system analysis, play vital roles in safety assessment. How to reasonably consider the complex multisource uncertainty behavior in both static and dynamic systems is paramount to ensuring their safe operation. However, there is a significant lack of research on aleatory and epistemic uncertainties for both static and dynamic systems. To this end, a new hybrid exponential model is proposed by combining probabilistic and non-probabilistic exponential models, which aims to accurately address the uncertainty propagation and reliability evaluation problem with aleatory and epistemic uncertainties for static and dynamic systems. The proposed hybrid exponential model consists of nested double optimization loops. The outer loop performs a probabilistic analysis based on the direct probability integral method, and the inner loop performs a non-probabilistic computation. Then, a new hybrid exponential probability integral method is developed to effectively perform uncertainty propagation and reliability analysis. Finally, four examples, including two static and two dynamic examples with complex performance functions, are tested. The results indicate that the proposed hybrid exponential model offers a universal tool for uncertainty quantification in static and dynamic systems. Moreover, the hybrid exponential probability integral method can accurately and efficiently obtain the upper and lower bounds of the probability density function and cumulative distribution function.
Article
In this work, a novel convexity-oriented time-dependent reliability-based topology optimization (CTRBTO) framework is investigated with overall consideration of universal uncertainties and time-varying natures in configuration design. For uncertain factors, the initial static ones are quantified by the convex set model and nodal dynamic responses are then expressed by the convex process model, where both the boundary rules and time-dependency properties are revealed by the full-dimensional convex-set collocation theorem. Unlike the original deterministic constraints in topology optimization schemes, a new convex time-dependent reliability (CTR) index is defined to give a reasonable failure judgment of local dynamic stiffness and impel the overall CTRBTO strategy. In addition, the gradient-based iterative algorithm is utilized to guarantee the computational robustness and the CTR-driven design sensitivities are explicitly analyzed by the Lagrange multiplier method. Several numerical examples are used to illustrate the effectiveness of the proposed method, and numerical results reflect the significance of this study to a certain extent.
Article
In this article, we propose an active Kriging-based learning method for hybrid reliability analysis (HRA) with random and interval variables. An improved sampling strategy is proposed to target the sampling areas. Samples with maximum responses greater than 0 and minimum responses less than 0 are selected and regarded as the candidate samples; then, a U-based learning function is developed in which multiple samples of the interval are considered instead of one particular sample. To terminate the proposed method, a hybrid convergence criterion is proposed. Finally, an improved optimization strategy based on the DIRECT algorithm is developed for the Monte Carlo simulation conducted for the HRA. The performance of the proposed method is demonstrated by four numerical cases. The results illustrate that the proposed method is accurate and efficient for HRA.
Article
Large computational cost is one of the key scientific problems limiting the extensive application of evidence theory in practical engineering. In order to promote the practicability of evidence theory, an efficient reliability analysis method based on the active learning Kriging model is proposed. First, a basic variable is selected according to the basic probability assignment (BPA) of the evidence variables to divide the evidence space into multiple sub-evidence spaces. Second, intersection points of the performance function and the sub-evidence spaces are determined by solving a univariate root-finding problem, and sample points are randomly selected to guarantee the accuracy of the subsequently established surrogate model. Third, a Kriging model is established from these sample points, and an active learning function is employed to improve the approximation accuracy with fewer sample points. Finally, the belief (Bel) measure and plausibility (Pl) measure can be obtained efficiently with the surrogate model in evidence-theory-based reliability analysis. Three numerical examples are used to demonstrate the effectiveness of the proposed method, which is also applied to the reliability analysis of the positioning accuracy of industrial robots.
Article
This paper proposes a new non-probabilistic time-dependent reliability model for evaluating the kinematic reliability of mechanisms when the input uncertainties are characterized by intervals. Based on the introduction of the non-probabilistic interval process of motion error, the most probable point of an outcrossing is defined to transform the complicated time-dependent problem into a concise time-independent problem. A non-probabilistic time-dependent reliability index is proposed to evaluate the kinematic reliability of various mechanisms, and two computational strategies are designed to calculate the index. The proposed reliability model is then applied to a numerical example, a typical four-bar linkage mechanism, and a car rack-and-pinion steering linkage mechanism to demonstrate its significance. The results show that the proposed model provides an effective tool for the reliability evaluation of time-dependent problems.
Article
This paper proposes a new method for hybrid reliability-based design optimization under random and interval uncertainties (HRBDO-RI). In this method, Monte Carlo simulation (MCS) is employed to estimate the upper bound of the failure probability, and stochastic sensitivity analysis (SSA) is extended to calculate the sensitivity information of the failure probability in HRBDO-RI. Due to the large number of samples involved in MCS and SSA, Kriging metamodels are constructed to substitute the true constraints. To avoid unnecessary computational cost in Kriging metamodel construction, a new screening criterion based on the coefficient of variation of the failure probability is developed to identify active constraints in HRBDO-RI. A projection-outline-based active learning Kriging is then achieved by sequentially selecting update points around the projection outlines on the limit-state surfaces of the active constraints. Furthermore, the prediction uncertainty of the Kriging metamodel is quantified and considered in the termination of the Kriging update. Several examples, including a piezoelectric energy harvester design, are presented to test the accuracy and efficiency of the proposed method for HRBDO-RI.
Article
Random uncertainty and evidence uncertainty usually coexist in actual structures. For efficient reliability analysis of structures in the presence of hybrid random and evidence uncertainties (RA-HRE), a single-layer sampling method (SLSM) is proposed. First, two equivalent expectation formulas are derived for the belief and plausibility measures in RA-HRE. According to these formulas, the belief and plausibility measures can be directly estimated by only one group of samples of the random variables and focal elements of the evidence variables, generated at the same level. Second, to greatly reduce the computational cost of RA-HRE by SLSM, the SLSM-based bi-objective adaptive Kriging (SLSM-BAK) is subsequently developed to simultaneously estimate the belief and plausibility measures. Aiming at the two objectives in RA-HRE, that is, accurately estimating the belief and plausibility measures, a new compound learning function is developed in SLSM-BAK. Based on the compound learning function, the Kriging model is adaptively updated to accurately and efficiently recognize the signs of the maximum and minimum performance functions for each random sample over the corresponding sampled focal element. This training process continues until the estimation precision of both the belief and plausibility measures satisfies the preset requirements.
Article
This paper presents a new method for efficient hybrid reliability analysis under both random and probability-box variables. Due to the existence of probability-box variables, the failure probability yielded by hybrid reliability analysis is an interval value. In practical engineering, numerical models are becoming more and more time-consuming, which has led metamodel-assisted reliability analysis methods to gain considerable attention. The failure probability in hybrid reliability analysis under both random and probability-box variables can be calculated by transforming the original uncertainty space into the standard normal space, in which a limit-state band with two bounding limit-state surfaces is generated. In this paper, it is shown that the lower and upper bounds of the failure probability can be accurately estimated based on a Kriging metamodel, provided it describes the two bounding limit-state surfaces well. A new active learning strategy based on the bounding limit-state surfaces is then proposed to sequentially update the Kriging metamodel by adding new update points in the vicinity of the bounding limit-state surfaces to the design of experiments. Meanwhile, two error measurement functions are presented to terminate the update process by calculating the metamodel error. Combining the bounding-limit-state-surface-based active learning Kriging with interval Monte Carlo simulation, a new method for hybrid reliability analysis under both random and probability-box variables is developed. In this method, the lower and upper bounds of the failure probability are estimated by interval Monte Carlo simulation based on the built Kriging metamodel. The proposed method is tested by six examples and compared with some existing reliability analysis methods. The comparative results validate its high accuracy and efficiency.
Article
An important challenge in structural reliability is to reduce the number of calls needed to evaluate the performance function, especially for complex implicit performance functions. To reduce the computational burden and improve the efficiency of reliability analysis, a new active learning method is developed that considers the probability density function of samples in addition to the learning function U within an active learning reliability method combining Kriging and Monte Carlo simulation. The proposed active learning function contains two parts: part A is based on the function U, and part B is based on the probability density function and the function U. By changing the weights of parts A and B, the sample points close to the limit-state function, and those in regions with a higher probability density, have more weight of being selected than the others. Consequently, the Kriging model can be constructed more effectively. The proposed method avoids a large number of time-consuming function evaluations, and a recommended weight is also reported. The performance of the proposed method is evaluated through three numerical examples and one engineering example. The results demonstrate the efficiency and accuracy of the proposed method.
Article
The convex model is commonly applied to quantify uncertain-but-bounded parameters. However, the typical interval and ellipsoid models may approximate existing experimental data inaccurately, which may result in either overly risky or overly conservative safety assessments. To this end, this study creates a novel data-driven exponential convex model to achieve an accurate approximation of experimental data, in which the dimension-reduction minimum volume method plays the key role. Furthermore, a novel relaxed exponential nominal value method (RENVM) is developed to evaluate the corresponding non-probabilistic reliability index robustly and efficiently, and the sensitivities are derived based on the straightforward perturbation method to guarantee its efficiency. Through numerical and experimental studies, the accuracy and validity of the proposed data-driven exponential convex model are validated in comparison with the interval and ellipsoid models, and the robustness and efficiency of the proposed RENVM are also demonstrated for solving both linear and nonlinear problems.
Article
This paper proposes a method combining a projection-outline-based active learning strategy with the Kriging metamodel for reliability analysis of structures with mixed random and convex variables. In this method, it is shown that the approximation accuracy of the projection outlines on the limit-state surface, rather than of the whole limit-state surface, is crucial for estimating the failure probability. To efficiently improve the approximation accuracy of the projection outlines, a new projection-outline-based active learning strategy is developed to sequentially obtain update points located around the projection outlines. Taking into account the influence of metamodel uncertainty on the estimation of the failure probability, a quantification function of metamodel uncertainty is developed and introduced into the stopping condition of the Kriging metamodel update. Finally, Monte Carlo simulation is employed to calculate the failure probability based on the refined Kriging metamodel. Four examples, including the Burro Creek Bridge and a piezoelectric energy harvester, are tested to validate the performance of the proposed method. Results indicate that the proposed method is accurate and efficient for reliability analysis of structures with mixed random and convex variables.
Article
Global reliability sensitivity analysis measures the effect of each model input variable on the failure probability, which is very useful for reliability-based design optimization. The aim of this paper is to propose an alternative method to estimate the global reliability sensitivity indices from one group of model input-output samples. Firstly, Bayes' formula is used to convert the original expression of the global reliability sensitivity index into an equivalent form in which only the unconditional failure probability and the failure-conditional probability density function (PDF) of each model input variable are required. All global reliability sensitivity indices can be estimated simultaneously from this equivalent form, and the computational cost of the process is independent of the dimensionality of the model inputs. Secondly, to improve the efficiency of the sampling aimed at calculating the unconditional failure probability and estimating the failure-conditional PDF of every model input simultaneously, the subset simulation method is extended to achieve these two aims. In the proposed procedure, subset simulation is used to estimate the unconditional failure probability, and the Metropolis-Hastings algorithm is employed to convert the samples in the failure domain from the current PDF in subset simulation to the PDF corresponding to the original PDF of the model inputs, for estimating the failure-conditional PDF of each model input variable. Thirdly, an Edgeworth expansion is employed to approximate the failure-conditional PDF of each model input variable. Finally, the global reliability sensitivity indices can be easily computed as byproducts, using the unconditional failure probability and the failure-conditional PDF of each model input obtained in the failure probability analysis; this process does not need any extra model evaluations after the unconditional failure probability analysis is completed by subset simulation. A headless rivet model, a roof truss structure, and a composite cantilever beam structure are analyzed, and the results demonstrate the effectiveness of the proposed method for global reliability sensitivity analysis.
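The Bayes-formula identity at the core of this approach, P(F | X_i = x) = P_f · f(X_i | F)(x) / f(X_i)(x), can be illustrated with a crude sketch: plain Monte Carlo stands in for subset simulation, a kernel density estimate stands in for the Edgeworth expansion, and the performance function and input model are invented, so this is only a toy version of the method.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def g(x):
    """Hypothetical performance function: failure when g < 0."""
    return 5.0 - x[:, 0] - 2.0 * x[:, 1]

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=(n, 2))                  # standard normal inputs
fail = g(x) < 0
pf = fail.mean()                             # unconditional failure probability

indices = []
for i in range(x.shape[1]):
    f_cond = gaussian_kde(x[fail, i])        # failure-conditional PDF (KDE)
    f_marg = norm.pdf(x[:, i])               # unconditional PDF (known here)
    # Bayes formula: P(F | X_i = x) = pf * f(x | F) / f(x)
    p_cond = pf * f_cond(x[:, i]) / f_marg
    # variance-based sensitivity of the failure indicator with respect to X_i
    indices.append(np.var(p_cond) / (pf * (1.0 - pf)))

print(pf, indices)
```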
Article
In many structural reliability analysis problems, a probability approach is often used to quantify the uncertainty, but it needs a great amount of information to construct precise distributions of the uncertain parameters. In many practical engineering applications, the distributions of some uncertain variables may not be precisely known due to a lack of sufficient sample data. Hence, a complex hybrid reliability problem arises when random and imprecise probability variables coexist in the same structure. In this paper, a new hybrid reliability analysis method is developed based on probability and probability box (p-box) models. Random distributions are used to deal with the uncertain parameters with sufficient information, while probability box models are employed to deal with the imprecise probability variables. Due to the existence of the p-box parameters, a limit-state band results and the corresponding reliability index belongs to an interval instead of taking a fixed value. According to interval analysis, the hybrid reliability model based on random and probability box variables is constructed, and a complex nested optimization problem is involved in this hybrid reliability analysis. In order to obtain the minimal and maximal reliability indices, the corresponding solution strategy is developed, in which the intergeneration projection genetic algorithm (IP-GA), with good global convergence performance, is employed as the inner and outer optimization solver. Four numerical examples are investigated to demonstrate the effectiveness of the present method.
Article
Reliability analysis with both aleatory and epistemic uncertainties is investigated in this paper. The aleatory uncertainties are described with random variables, and the epistemic uncertainties are tackled with evidence theory. Several methods have been proposed to estimate the bounds of the failure probability; however, the existing methods suffer from the dimensionality challenge of the epistemic variables. To overcome this challenge, a so-called Random-Set based Monte Carlo Simulation (RS-MCS) method derived from the theory of random sets is offered. Nevertheless, RS-MCS is computationally expensive, so an active learning Kriging (ALK) model, which only needs to correctly predict the sign of the performance function, is introduced and closely integrated with RS-MCS. The proposed method is termed ALK-RS-MCS. ALK-RS-MCS accurately predicts the bounds of the failure probability using as few function calls as possible. Moreover, in ALK-RS-MCS, an optimization method based on the Karush-Kuhn-Tucker (KKT) conditions is proposed to make the estimation of the failure probability interval more efficient based on the Kriging model. The efficiency and accuracy of the proposed approach are demonstrated with four examples.
Article
Probability and convex set hybrid reliability analysis (HRA) is investigated in this paper. It is shown that a surrogate model that only correctly predicts the sign of the performance function can meet the accuracy demand of HRA. Following this idea, a methodology based on an active learning Kriging model, called ALK-HRA, is proposed. When constructing the Kriging model, the proposed method only approximates the performance function in a region of interest, i.e., the region where the sign of the response tends to be wrongly predicted. Then Monte Carlo Simulation (MCS) is performed based on the Kriging model. ALK-HRA is very accurate for HRA while calling the performance function as few times as possible. Three numerical examples are investigated to demonstrate the efficiency and accuracy of the presented method, including two simple problems and one complicated engineering application.
Article
The first-order approximate reliability method (FARM) and the second-order approximate reliability method (SARM) are formulated based on evidence theory in this paper. The proposed methods can significantly improve the computational efficiency of evidence-theory-based reliability analysis while generally providing sufficient precision. First, the most probable focal element (MPFE), a concept as important as the most probable point (MPP) in probability-theory-based reliability analysis, is located using a uniformity approach. Subsequently, FARM approximates the limit-state function around the MPFE using the linear Taylor series, while SARM approximates it using the quadratic Taylor series. With the first- and second-order approximations, the reliability interval composed of the belief measure and the plausibility measure is efficiently obtained for FARM and SARM, respectively. Two simple problems with explicit expressions and one engineering application of vehicle frontal impact are presented to demonstrate the effectiveness of the proposed methods.
Article
An important challenge in structural reliability is to keep the number of calls to the numerical models to a minimum. Engineering problems involve more and more complex computer codes, and the evaluation of the probability of failure may require very time-consuming computations. Metamodels are used to reduce these computation times. To assess reliability, the most popular approach remains the numerous variants of response surfaces. Polynomial Chaos [1] and Support Vector Machines [2] are also possibilities and have gained consideration among researchers in recent decades. However, Kriging, which originated in geostatistics, has recently emerged in reliability analysis. Widespread in optimisation, Kriging has just started to appear in uncertainty propagation [3] and reliability studies. It presents interesting characteristics such as exact interpolation and a local index of uncertainty on the prediction, which can be used in active learning methods. The aim of this paper is to propose an iterative approach based on Monte Carlo Simulation and a Kriging metamodel to assess the reliability of structures in a more efficient way. The method is called AK-MCS, for Active learning reliability method combining Kriging and Monte Carlo Simulation. It is shown to be very efficient, as the probability of failure obtained with AK-MCS is very accurate, and this for only a small number of calls to the performance function. Several examples from the literature are performed to illustrate the methodology and to prove its efficiency, particularly for problems dealing with high non-linearity, non-differentiability, non-convex and non-connected domains of failure, and high dimensionality.
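A compact sketch of the AK-MCS idea follows; the U learning function U = |mu|/sigma and the stopping rule min U >= 2 are the choices usually reported for AK-MCS, while the performance function, kernel, pool size, and initial design below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g(x):
    """Hypothetical performance function (failure when g < 0)."""
    return 3.0 + 0.1 * (x[:, 0] - x[:, 1]) ** 2 - (x[:, 0] + x[:, 1]) / np.sqrt(2)

rng = np.random.default_rng(0)
pool = rng.normal(size=(50_000, 2))                      # Monte Carlo population
doe_idx = rng.choice(len(pool), size=12, replace=False)  # small initial design
X, Y = pool[doe_idx], g(pool[doe_idx])

gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]), normalize_y=True)
for _ in range(200):                                     # active learning loop
    gp.fit(X, Y)
    mu, sigma = gp.predict(pool, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)            # U learning function
    if U.min() >= 2.0:                                   # usual AK-MCS stopping rule
        break
    best = np.argmin(U)                                  # most misclassification-prone point
    X = np.vstack([X, pool[best]])
    Y = np.append(Y, g(pool[best:best + 1]))

pf = np.mean(mu < 0)                                     # failure probability from the surrogate
print(pf, len(X))                                        # estimate and number of g-calls
```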
Article
Two types of uncertainty exist in engineering. Aleatory uncertainty comes from inherent variations, while epistemic uncertainty derives from ignorance or incomplete information. The former is usually modeled by probability theory and has been widely researched. The latter can be modeled by probability theory or non-probabilistic theories and is much more difficult to deal with. In this work, the effects of both types of uncertainty are quantified with belief and plausibility measures (lower and upper probabilities) in the context of evidence theory. Input parameters with aleatory uncertainty are modeled with probability distributions by probability theory. Input parameters with epistemic uncertainty are modeled with basic probability assignments by evidence theory. A computational method is developed to compute belief and plausibility measures for black-box performance functions. The proposed method involves nested probabilistic and interval analyses. To handle black-box functions, we employ the first-order reliability method for the probabilistic analysis and nonlinear optimization for the interval analysis. Two example problems are presented to demonstrate the proposed method.
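A bare-bones sketch of the nested computation is given below: Monte Carlo over the aleatory variables replaces the first-order reliability method used in the paper, endpoint evaluation replaces a general interval optimization, and the performance function and evidence structure are invented. It only shows how the belief and plausibility of failure are assembled from the basic probability assignments.

```python
import numpy as np

def g(x, y):
    """Hypothetical performance function: failure when g < 0.
    x: aleatory (random) inputs, y: epistemic input described by evidence."""
    return 3.5 - x[:, 0] ** 2 - x[:, 1] + y

# Evidence structure on y: focal elements (intervals) with basic probability assignments.
focal_elements = [(-1.0, 0.0), (-0.5, 0.5), (0.0, 1.0)]
bpa = [0.3, 0.5, 0.2]

rng = np.random.default_rng(3)
x = rng.normal(loc=1.0, scale=0.3, size=(20_000, 2))   # aleatory samples

bel = pl = 0.0
for (lo, hi), m in zip(focal_elements, bpa):
    # g is monotonically increasing in y, so its extremes over a focal element
    # sit at the endpoints; a general case would need an inner optimization.
    g_min, g_max = g(x, lo), g(x, hi)
    bel += m * np.mean(g_max < 0)   # failure for every y in the focal element
    pl  += m * np.mean(g_min < 0)   # failure for at least one y in the focal element

print(bel, pl)                      # belief and plausibility of failure
```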
Article
The first part of this article presents the Monte Carlo implementation of the E step of the EM algorithm. Given the current guess to the maximizer of the posterior distribution, latent data patterns are generated from the conditional predictive distribution. The expected value of the augmented log-posterior is then updated as a mixture of augmented log-posteriors, mixed over the generated latent data patterns (multiple imputations). In the M step of the algorithm, this mixture is maximized to obtain the update to the maximizer of the observed posterior. The gradient and Hessian of the observed log posterior are also expressed as mixtures, mixed over the multiple imputations. The relation between the Monte Carlo EM (MCEM) algorithm and the data augmentation algorithm is noted. Two modifications to the MCEM algorithm (the poor man's data augmentation algorithms), which allow for the calculation of the entire posterior, are then presented. These approximations serve as diagnostics for the validity of the normal approximation to the posterior, as well as starting points for the full data augmentation analysis. The methodology is illustrated with two examples.
Article
In robust design, the main goal is to select the levels of the controllable factors to obtain the optimal operating conditions. To this end, Taguchi recommends that statistical experimental design methods be employed. To adopt his approach, we need to observe a number of replicated observations at each design point. A commonly used assumption behind the data collection procedure is that all the data are fully observed. However, in many industrial experiments, interval-censored observations are frequently available in addition to the fully observed observations. For example, the products are often inspected by a "go or no-go" inspection system which typically provides interval-censored data. Even though fully observed observations are preferred, only partially observed or interval-censored observations are available in practice owing to inherent limitations or time/cost considerations. When a data set consists of both partially and fully observed observations, it is commonly referred to as "incomplete" in the statistics literature. In this paper, we calculate the optimal operating conditions for the process based on a dual response approach using incomplete data. For robust design optimization problems, the dual response approach is a commonly used technique, but the novel aspect of this study is that we estimate the process mean and variance with incomplete data using the EM algorithm. Thus, it is possible to find the optimal operating conditions using all of the information available. The applicability of the proposed method is illustrated through a case study incorporating incomplete data. The performance of the proposed method is compared with the ordinary method through Monte Carlo simulations, and this substantiates the proposed method.
Article
Summary. A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis.
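As a minimal worked instance of the algorithm described above, consider a right-censored exponential lifetime model (chosen here purely for illustration; the data and parameter values are invented): the E-step replaces each censored lifetime with its conditional expectation, and the M-step re-estimates the rate.

```python
import numpy as np

def em_exponential(times, censored, iters=100, lam0=1.0):
    """EM for the rate of an exponential lifetime with right-censored data.
    E-step: a unit censored at time c has expected lifetime c + 1/lam
    (memorylessness). M-step: lam = n / sum of expected lifetimes."""
    lam = lam0
    for _ in range(iters):
        expected = np.where(censored, times + 1.0 / lam, times)  # E-step
        lam = len(times) / expected.sum()                        # M-step
    return lam

rng = np.random.default_rng(4)
true_lam, c = 0.5, 3.0
t = rng.exponential(1.0 / true_lam, size=200)
censored = t > c                       # observations cut off at time c
t = np.minimum(t, c)
print(em_exponential(t, censored))     # should approach the true rate 0.5
```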
Article
Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the loglikelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. The current article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.