
Uncertainty quantification (UQ) is the process of determining the effect of input uncertainties on response metrics of interest. These input uncertainties may be characterized as either aleatory uncertainties, which are irreducible variabilities inherent in nature, or epistemic uncertainties, which are reducible uncertainties resulting from a lack of knowledge. When both aleatory and epistemic uncertainties are mixed, it is desirable to maintain a segregation between aleatory and epistemic sources such that it is easy to separate and identify their contributions to the total uncertainty. Current production analyses for mixed UQ employ the use of nested sampling, where each sample taken from epistemic distributions at the outer loop results in an inner loop sampling over the aleatory probability distributions. This paper demonstrates new algorithmic capabilities for mixed UQ in which the analysis procedures are more closely tailored to the requirements of aleatory and epistemic propagation. Through the combination of stochastic expansions for computing statistics and interval optimization for computing bounds, interval-valued probability, second-order probability, and Dempster–Shafer evidence theory approaches to mixed UQ are shown to be more accurate and efficient than previously achievable.
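The nested sampling the abstract describes can be sketched in a few lines: an outer loop draws epistemic values from their intervals, an inner loop samples the aleatory distributions, and the spread of the inner-loop statistics over the outer loop yields interval-valued results. The model `f`, the epistemic interval `[1, 2]`, and the standard-normal aleatory variable below are illustrative assumptions, not taken from the paper.

```python
import random

def f(e, a):
    # Toy response: its mean over a ~ N(0, 1) equals e**2 exactly.
    return e * a + e**2

def nested_mixed_uq(e_lo, e_hi, n_outer=50, n_inner=2000, seed=0):
    rng = random.Random(seed)
    means = []
    for _ in range(n_outer):                 # outer loop: epistemic interval
        e = rng.uniform(e_lo, e_hi)          # any value in the interval is possible
        acc = 0.0
        for _ in range(n_inner):             # inner loop: aleatory sampling
            acc += f(e, rng.gauss(0.0, 1.0))
        means.append(acc / n_inner)
    return min(means), max(means)            # interval on the mean response

lo, hi = nested_mixed_uq(1.0, 2.0)
```

The segregation the paper emphasizes is visible here: the epistemic interval produces bounds on a statistic, while the aleatory variability is integrated out inside each inner loop.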


... The uncertainty in the process is quantified via non-probabilistic, interval-valued, predictions [2]. Uncertainties can be modelled probabilistically, non-probabilistically or by a mixture of these two approaches [3,4,5]. Probabilistic approaches often require unwarranted assumptions to model epistemic uncertainty and this can lead to underestimation of the true level of uncertainty if a lack of data affects the study [6]. ...

... The random samples in the data set D_N define N constraints in Eq. (3). The constraints are given by: ...

... The proposed method has been numerically compared to traditional approaches using three realistic case studies. Different degrees of sample scarcity and model complexity are investigated, showing that the new method finds solutions with a good compromise between accuracy and reliability (note that the computational cost can be reduced by an order of magnitude by adopting linear programming solvers). Potential improvements for future research on IPMs include studying the relationship between IPMs and different non-deterministic predictors, e.g., Gaussian Process regressors, Fuzzy regressors, etc. ...

Interval Predictor Models (IPMs) offer a non-probabilistic, interval-valued characterization of the uncertainty affecting random data generating processes. IPMs are constructed directly from data, with no assumptions on the distributions of the uncertain factors driving the process, and are therefore exempt from the subjectivity induced by such a practice. The reliability of an IPM defines the probability of correct predictions for future samples; in practice, its true value is always unknown due to finite sample sizes and limited understanding of the process. This paper proposes an overview of scenario optimization programs for the identification of IPMs. Traditional IPM identification methods are compared with a new scheme which softens the scenario constraints and exploits a trade-off between reliability and accuracy. The new scheme allows prescribing predictors that achieve higher accuracy for a quantifiable reduction in reliability. Scenario optimization theory is the mathematical tool used to prescribe formal epistemic bounds on the predictors' reliability; a review of relevant theorems and bounds is provided in this work. Scenario-based reliability bounds hold distribution-free and non-asymptotically, and quantify the uncertainty affecting the model's ability to correctly predict future data. The applicability of the new approach is tested on three examples: i) the modelling of a trigonometric function affected by a noise term, ii) the identification of a black-box system-controller dynamic response model and iii) the modelling of the vibration response of a car suspension arm crossed by a crack of unknown length. The strengths and limitations of the new IPM are discussed based on the accuracy, computational cost, and width of the resulting epistemic bounds.
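The scenario program behind an IPM can be illustrated with a deliberately simplified variant: fix the center line by least squares, then take the smallest constant half-width band that contains every sample, so each data point acts as one scenario constraint. The full formulation in the paper optimizes center and width jointly, typically by linear programming; everything below, including the noisy sine data echoing example i), is an illustrative assumption.

```python
import numpy as np

def fit_simple_ipm(x, y, degree=5):
    # Center line by least squares; each sample then acts as one scenario
    # constraint |y_i - center(x_i)| <= w on the band half-width w.
    coeffs = np.polyfit(x, y, degree)
    w = float(np.max(np.abs(y - np.polyval(coeffs, x))))
    return coeffs, w

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.uniform(-0.2, 0.2, x.size)   # noisy trigonometric data
coeffs, w = fit_simple_ipm(x, y)
# By construction, every training sample lies inside [center - w, center + w].
inside = np.abs(y - np.polyval(coeffs, x)) <= w + 1e-12
```

Scenario theory then bounds the probability that a future sample falls outside the band as a function of the sample count and the number of optimization variables; softening constraints, as the new scheme does, narrows the band at a quantifiable reliability cost.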

... Hence, the effective quantification and control of uncertainties are very important to ensure the stability and the reliability of engineering structures [1,2]. Uncertainty quantification (UQ) represents a quantitative characterization process of propagating the input uncertainties to the system response, which has attracted great interest from researchers in different fields, especially in experimental and numerical modeling [3]. Generally, uncertainties can be classified into stochastic uncertainty and epistemic uncertainty [4,5]. ...

... Therefore, the maximum and minimum of the structural response are approximately reached at the vertexes of the joint focal element. Consider the uncertainty problem with two-dimensional evidence variables shown in Fig. 1: for a joint focal element marked with shadow, the maximum and minimum of the performance function are approximately determined by calculating the responses at the four vertexes x^(1), x^(2), x^(3), x^(4). Moreover, in order to obtain the Bel and Pl, the responses at all the vertexes of all the joint focal elements need to be calculated. ...
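The vertex procedure described above can be sketched for a toy two-variable problem: every joint focal element contributes its mass to Bel if the element lies entirely in the failure set, and to Pl if it merely intersects it, with the element's response range approximated from its four vertices. The BPA structures and the performance function g below are illustrative assumptions; since this g is linear, the vertex bounds happen to be exact.

```python
from itertools import product

# Each evidence variable: list of (interval, BPA-mass) focal elements.
bpa_x = [((0.0, 1.0), 0.6), ((1.0, 2.0), 0.4)]
bpa_y = [((0.0, 1.0), 0.5), ((1.0, 3.0), 0.5)]

def g(x, y):
    # Performance function; failure when g < 0.
    return 2.5 - x - y

def bel_pl_failure(bpa_x, bpa_y):
    bel = pl = 0.0
    for (ix, mx), (iy, my) in product(bpa_x, bpa_y):
        m = mx * my                                   # joint focal element mass
        vals = [g(x, y) for x, y in product(ix, iy)]  # responses at 4 vertexes
        gmin, gmax = min(vals), max(vals)
        if gmax < 0:
            bel += m    # element lies entirely inside the failure set
        if gmin < 0:
            pl += m     # element intersects the failure set
    return bel, pl

bel, pl = bel_pl_failure(bpa_x, bpa_y)
```

The cost the surrounding abstracts worry about is visible in the loop structure: the number of joint focal elements, and hence of vertex evaluations, grows combinatorially with dimension.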

... The FDs of all evidence variables are given in Refs. [3,4], and the BPA structures of the ten evidence variables are shown in Fig. 8. ...

Evidence theory has a powerful ability to quantify epistemic uncertainty. However, its huge computational cost has become the main obstacle to applying evidence theory in engineering. In this paper, an efficient uncertainty quantification (UQ) method based on dimension reduction decomposition is proposed to improve the applicability of evidence theory. In evidence-based UQ, an extremum analysis is required for each joint focal element, which generally can be achieved by collocating a large number of nodes. Through dimension reduction decomposition, the response at any point can be predicted from the responses at the corresponding marginal collocation nodes. Thus, a marginal collocation node method is proposed to avoid calling the original performance function at all joint collocation nodes in the extremum analysis. Based on this, a marginal interval analysis method is further developed to decompose the multidimensional extremum searches for all joint focal elements into the combination of a few one-dimensional extremum searches. Because it overcomes the combinatorial explosion of computation caused by dimension, the proposed method can significantly improve the computational efficiency of evidence-based UQ, especially for high-dimensional uncertainty problems. In each one-dimensional extremum search, as the response at each marginal collocation node is actually calculated using the original performance function, the proposed method can provide a relatively precise result by collocating marginal nodes even for some nonlinear functions. The accuracy and efficiency of the proposed method are demonstrated by three numerical examples and two engineering applications.
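The dimension reduction idea underlying the marginal collocation method can be illustrated with the standard univariate cut decomposition: the response at an arbitrary point is approximated from one-dimensional variations about a reference point, so only marginal evaluations of the original function are needed. The toy function below is an assumption, chosen to be additively separable so the decomposition is exact; for general nonlinear functions it is an approximation.

```python
def g(x):
    # Assumed separable toy response, so the decomposition below is exact.
    return x[0]**2 + 3*x[1] - x[2]

def dr_approx(g, x, mu):
    # Univariate dimension reduction:
    #   g(x) ~ sum_i g(mu with coordinate i set to x_i) - (n - 1) * g(mu)
    n = len(x)
    total = -(n - 1) * g(mu)
    for i in range(n):
        xi = list(mu)
        xi[i] = x[i]          # vary one coordinate at a time (marginal node)
        total += g(xi)
    return total

mu = [0.0, 0.0, 0.0]          # reference (cut) point
x = [1.5, -2.0, 0.5]
```

Because each term varies only one coordinate, extremum searches over a joint focal element reduce to a few one-dimensional searches, which is the source of the efficiency gain claimed above.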

... In the IVP method [5,12], the model input variables identified as epistemic, for which only interval information and no data are available, are placed in the outer loop and are simply treated as single intervals. Therefore, there is no distribution information for the epistemic variables; any sample within the provided interval bounds is a possible realization. ...

... The BP method (also referred to as the second-order probability method in the literature [5,12]) uses the exact same idea as the IVP method, in the sense that both methods separate the variables into outer epistemic and inner aleatory loops. The difference lies in how the outer-loop epistemic variables are treated. ...

... As noted in [4], in the mixed uncertainty case, the variance-based decomposition of the total variance is obtained, as can be seen in the denominator of Eqs. (4) and (5). The variance due to the aleatory variable set is involved in the inner-loop computation of the statistical QoI of the response, conditional on the epistemic vector realization; it therefore acts as a weight on the contribution of each epistemic realization. ...

This work investigates the modelling of epistemic input parameter uncertainties, and the numerical techniques for uncertainty quantification and uncertainty-based sensitivity analysis in the presence of both aleatory and epistemic uncertainties. Two different approaches are used, the interval valued probability (IVP) method and the Bayesian probabilistic (BP) method. In both cases, a double loop method is used to computationally separate the two different uncertainty types and propagate them within the model. These two approaches are successfully applied on a high-dimensional jet engine secondary air system model from aerospace engineering. The different outputs obtained by the two approaches are interpreted and compared. For the global sensitivity analysis of the epistemic variables, an empirical “pinching” strategy is applied when using the IVP method. With the BP method, variance-based global sensitivity analysis of the epistemic variables is performed. Novel expressions for the Sobol indices of a statistic of a response, conditional on the epistemic variables, are presented and interpreted.

... Parameter uncertainties can be classified as aleatory or epistemic in most cases [135][136][137]. Generally, an aleatory uncertainty is an irreducible variability due to inherently random nature, while an epistemic uncertainty is an error due to a lack of knowledge of reality. ...

... When the variable is involved with both aleatory and epistemic uncertainties, a 2-dimensional (2-D) MC sampling is used as a segregation to separate and identify the effects of each source of uncertainty [135] . The 2-D MC sampling is known as a "nested" sampling approach, which typically has two loops. ...

... Some research has been done to improve the efficiency of the 2-D MC analysis using statistical methods, such as interval-valued probability (IVP), second-order probability (SOP), and Dempster-Shafer evidence theory (DSET). Interested readers are referred to [135] for details. ...

A building system is highly nonlinear and commonly includes a variety of parameters ranging from the architectural design to the building mechanical and energy system. Serving as a powerful tool to understand the complicated building system, sensitivity analysis (SA) has been receiving increasing attention in the recent decade.
This review paper focused on the application of SA in building performance analysis (BPA). First, the existing review papers on the application of SA for BPA were briefly reviewed and summarized. Then, a large number of recent SA case studies in BPA were reviewed. The critical information regarding the implementation of a SA, such as the sampling method, the SA method, etc., was extracted and summarized. Next, an extensive analysis was performed to evaluate the role of SA in BPA. This includes: 1) the typical selections of input and output parameters for SA were visualized; 2) the uncertainty levels and sampling strategies for the input parameters were investigated; 3) the principles, benefits, and drawbacks of the SA methods were analyzed and summarized; and 4) the various tools used in SA, including simulation software, sampling tools, and SA tools, were introduced to facilitate practitioners. Lastly, several conclusions were drawn on the role of SA in BPA. Suggestions were also provided to offer practical solutions to the current challenges.

... However, for mechanical systems, the performance function is highly nonlinear, and the polynomial-based RSM is not accurate enough for the real limit state function. Some researchers [21][22][23] also focus on the stochastic expansion method for mixed uncertainties, but these methods are still based on probabilistic assumptions to some extent. The artificial neural network (ANN) algorithm has been rapidly developed for universal function approximation [24][25][26]. ...

... In order to verify the results, the failure probability of this problem is calculated by direct MC with 1,000,000 samples, and the maximum and minimum probabilities are listed in Table 1. It can be seen that the failure probability bounds P_f^U and P_f^L obtained through the HU-BP neural network and through MC are almost the same. ...

... Second, the response expansions are readily differentiated with respect to their expansion variables (local SA), and terms may be reorganized to provide Sobol sensitivities from a variance-based decomposition [9,32] (global SA). Finally, response moment expressions may be differentiated with respect to auxiliary nonprobabilistic variables, enabling gradient-based design under uncertainty (the subject of this paper) or gradient-based interval estimation for epistemic UQ [33]. For application to design under uncertainty, analytic moments and their design sensitivities are described in Sections 5.1.1-5.1.4. ...

... In particular, this implies restriction of stochastic expansion approximation to dimensions requiring L^2 metrics (mean, variance, probability) and handling of dimensions requiring L^∞ metrics (minima and maxima) through other means (i.e., direct optimization without stochastic expansion approximation). For infinitely differentiable smooth problems, related work [33] has shown that L^2 and L^∞ convergence rates are indistinguishable and that combined expansions can reduce computational expense; however, this level of smoothness is too strong an assumption in most applications. ...

... Over the past decades, numerous studies have been conducted for quantifying uncertainties and assessing the reliability in the non-probabilistic framework for various engineering problems involving incomplete information [9][10][11][12][13][14][15][16][17][18][19]. Moreover, for structures exhibiting both aleatory and epistemic uncertainties, structural reliability assessment and optimization methods based on probabilistic and convex set mixed models have also been substantially discussed by numerous researchers [10][11][12][13][14][15][16][17][18][19][20]25]. ...

... In practical engineering, both aleatory and epistemic uncertainties might simultaneously exist in the same problem (Eldred et al. 2011; Yao et al. 2013a, 2013b, 2013c), which is the so-called hybrid or mixed uncertainty. For example, an engineering structure often involves multidimensional uncertain parameters related to material properties, loads, dimensions, etc. ...

Epistemic uncertainty widely exists in the early design stage of complex engineering structures and throughout the full life cycle of innovative structure design; it should be appropriately quantified, managed, and controlled to ensure the reliability and safety of the product. Evidence theory is usually regarded as a promising model for dealing with epistemic uncertainty, as it employs a general and flexible framework, the basic probability assignment function, which makes the quantification and propagation of epistemic uncertainty more effective. Owing to this strong ability, evidence theory has been applied in the field of structural reliability during the past few decades, and a series of important advances has been achieved. Evidence-theory-based reliability analysis thus provides an important means for engineering structure design, especially under epistemic uncertainty, and it has become one of the research hotspots in the field of structural reliability. This paper reviews the four main research directions of evidence-theory-based reliability analysis, each focused on solving one critical issue in this field, namely, computational efficiency, parameter correlation, hybrid uncertainties, and reliability-based design optimization. It summarizes the main scientific problems, technical difficulties, and current research status of each direction. Based on the review, this paper also provides an outlook for future research in evidence-theory-based structural reliability analysis.

... Classical probability theory is usually utilized to describe the aleatory uncertainty [1][2][3][4], while epistemic uncertainty can be quantified by some other techniques, such as fuzzy sets [5,6], evidence theory [7,8], convex model [9][10][11][12][13][14][15] and interval model [16][17][18][19][20][21]. Furthermore, these theories handling epistemic uncertainty are usually combined with the probability theory to quantify the mixed aleatory and epistemic uncertainties [22,23]. In this paper, both aleatory and epistemic uncertainties are considered in reliability analysis. ...

In this paper, the subset simulation importance sampling (SSIS) method is extended for reliability analysis of structures with mixed random and interval variables. To improve its efficiency in cases with computationally expensive performance functions, an efficient Kriging-assisted SSIS method is proposed. In this method, a Kriging metamodel is employed to substitute the actual performance function to decrease the number of its evaluations. Furthermore, it is determined that only the samples in the last level of SSIS are involved in the calculation of failure probability bounds. Then, from these samples, an update strategy based on measuring the possibility of correctly predicting the sign of the performance function is developed to obtain update points, which are employed in the sequential refinement of the Kriging metamodel. Additionally, the metamodel uncertainty of Kriging is quantified and then considered in the termination criteria of the Kriging update. The computational efficiency, accuracy and robustness of the proposed method are elucidated by comparison with some existing methods in the reliability analysis of six examples.

... Many scholars have studied these challenge problems using possibility theory [3], info-gap theory [4], evidence theory [5][6][7], and stochastic set theory [8]. Subsequently, scholars at home and abroad have studied uncertainty quantification under parameter uncertainty [9], fuzzy-set-based uncertainty quantification under stochastic-cognitive mixed uncertainty [10], aleatory-epistemic mixed uncertainty separation [11], fast algorithms for uncertainty quantification [12,13], uncertainty quantification in multidisciplinary optimization [14,15], stochastic field uncertainty quantification [16,17], and so on. ...

In order to solve the problem of limited sample size and life data in the storage reliability evaluation of high-value weapons, a storage reliability evaluation method for electronic products based on storage performance modeling and simulation was studied. The problem of epistemic-aleatory mixed uncertainty propagation in a product storage performance model with a time degradation factor was discussed with emphasis. A calculation method combining two-level Monte Carlo simulation and non-intrusive polynomial chaos is proposed, which solves the problem that the computational cost of storage performance simulation increases explosively when the number of uncertain parameters is large. Finally, the effectiveness of the method is verified by an example of circuit storage reliability evaluation; the calculation efficiency is about 100 times higher than that of the two-level MCS method.

... The failure threshold is uncertain and difficult to determine. The uncertainty can be divided into two categories, accidental (aleatory) uncertainty and cognitive (epistemic) uncertainty (Eldred et al. 2011; Zeng et al. 2013). Accidental uncertainty is inherent uncertainty caused by factors such as changes in the external environment and has random characteristics. ...

There are often many reasons for equipment failure. When performance in a certain aspect drops to a certain threshold, the equipment will fail. Affected by other factors, the threshold is uncertain. A reliability model with uncertain thresholds, in which degradation and external shocks compete with each other, is established, and the reliability of the model is evaluated according to uncertainty theory. Under three different shock types, the reliability of the equipment is obtained. The reliability with uncertain thresholds and the reliability with constant thresholds are compared. The results show that in different periods of equipment operation, the reliability under uncertain thresholds differs from the reliability under constant thresholds. If the threshold is simply regarded as a known constant, it will cause inaccuracies in the reliability assessment of the system and may miss the best maintenance time, causing unnecessary losses. Taking a microelectromechanical system as an example, the superiority of the proposed model is illustrated.

... This approach reduces the total number of scenarios by focusing simulations on risk-sensitive regions to find a limit surface (i.e., the set of perturbation parameters distinguishing system outcomes) [8], and polynomial chaos expansion [9]. The second group of methods relies on the creation of surrogate regression models, or response surfaces, to estimate the process response [10,11]. Surrogate models, also known as reduced-order models, are typically constructed from a limited number of samples and are fast to execute. ...

This study proposes an interpolation-based response surface surrogate methodology to manage a large number of scenarios in dynamic probabilistic risk assessment. It adopts the shape Dynamic Time Warping algorithm to cluster the interpolation neighborhood from time series sample data. The interpolation method was adapted from Taylor Kriging to allow a reduced-order model of the Taylor series. In order to demonstrate its applicability to complex issues in risk assessment for nuclear engineering, an example risk response surface to estimate emergency core cooling system (ECCS) criteria for triplex silicon carbide (SiC) accident-tolerant fuel was constructed. The response surface was exploited to estimate the cumulative failure probability of the fuel cladding structure due to the uncertainties in operator actions and safety systems. The functional failures were assessed based on a combination of individual layer failures computed by coupling Risk Analysis Virtual Environment software with a pressurized water reactor 1000-MW(electric) RELAP5 model and the in-house fuel performance assessment module. Results showed that SiC cladding failure probability spiked less than 1 min after a large-break loss-of-coolant accident whenever the current ECCS criteria for Zircaloy-4 (Zr-4) cladding were used. However, it still provides an increased safety margin of three orders of magnitude compared to Zr-4. This positive margin could be utilized to relax active ECCS requirements by allowing deviations of up to 450 s in its actuation time. The proposed surrogate methodology generated a response surface of SiC cladding failure probability reasonably well, with significant savings of computation time. This methodology is expected to be useful in the analysis of system response with complex uncertainty sources.

... Uncertainty generally exists in various practical engineering problems, such as the processing error of the structure size, the actual measurement error, the difference of the material parameter attribute, and the assumption of the real physical system [1][2][3][4]. These uncertainties will cause fluctuations in the performance of the actual engineering system, and severe cases can even cause system failure [5][6][7]. Therefore, in the initial stage of design, it is necessary to consider the various uncertainties that the product may encounter, so as to improve the performance level of the product throughout its life cycle. The design optimization under uncertainty method has been recognized as a promising orientation in the field of engineering design optimization [2,8,9]. ...

The rapidly changing requirements of engineering optimization problems demand unprecedented levels of compatibility to integrate diverse uncertainty information when searching for an optimum within the design region. Sophisticated optimization methods tackling uncertainty include reliability-based design optimization and robust design optimization. In this paper, a novel alternative approach called risk-based design optimization (RiDO) is proposed to counterpoise design results and costs under hybrid uncertainties. In this approach, the conditional value at risk (CVaR) is adopted for quantification of the hybrid uncertainties. Then, a CVaR estimation method based on a Monte Carlo simulation (MCS) scenario generation approach is derived to measure the risk levels of the objective and constraint functions. RiDO under hybrid uncertainties is established and leveraged to determine the optimal scheme satisfying the risk requirement. Three examples of differing computational complexity are provided to verify the developed approach.
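The CVaR measure used for risk quantification above has a direct MCS estimator: sort the sampled losses and average the worst (1 − α) fraction. The standard-normal loss model below is an illustrative assumption, not the paper's example.

```python
import random

def cvar(samples, alpha=0.95):
    # CVaR_alpha = mean of the worst (1 - alpha) tail of the loss samples.
    s = sorted(samples)
    k = max(1, int(round((1 - alpha) * len(s))))
    tail = s[-k:]                    # the k largest losses
    return sum(tail) / len(tail)

rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
risk = cvar(losses, alpha=0.95)
# For a standard normal loss, the exact CVaR at alpha = 0.95 is about 2.06.
```

Unlike a pure quantile (VaR), this tail average is a coherent risk measure, which is one reason CVaR is preferred for constraining objective and constraint functions under uncertainty.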

... The nested MC approach, also known as the 2D MC approach, has been widely used in the literature for the segregation of epistemic and aleatory uncertainties [36][37][38]. The nested MC approach is an extension of MC simulation and employs two loops, allowing variability and uncertainty to be modelled separately. ...

The sophistication of building energy performance tools has significantly increased the number of user inputs and parameters used to define energy models. There are numerous sources of uncertainty in model parameters which exhibit varied characteristics. Therefore, uncertainty analysis is crucial to ensure the validity of simulation results when assessing and predicting the performance of complex energy systems, especially in the absence of adequate experimental or real-world data. Furthermore, different kinds of uncertainties are often propagated using similar methods, which leads to a false sense of validity. A comprehensive framework to systematically identify, quantify and propagate these uncertainties is missing. The main aim of this research is to formulate an uncertainty framework to identify and quantify different types of uncertainties associated with reduced-order grey box energy models used in heat demand predictions of the building stock. The study introduces an integrated uncertainty approach based on copula theory and a nested Fuzzy Monte Carlo approach to address the correlations and separate the different kinds of uncertainties. The nested Fuzzy Monte Carlo approach coupled with Latin Hypercube Sampling is used to propagate these uncertainties. Results signify the importance of uncertainty identification and propagation within an energy system; thus, an integrated approach to uncertainty quantification is necessary to maintain the relevance of developed building simulation models. Moreover, segregation of relevant uncertainties aids the stakeholders in supporting risk-related design decisions for improved data collection or model improvement.
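Latin Hypercube Sampling, used above to drive the nested propagation, stratifies each dimension into equal-probability bins and places exactly one sample per bin, with bin orderings shuffled independently across dimensions. A minimal stdlib sketch over the unit hypercube (mapping to actual parameter distributions is left out as an assumption):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    # One stratified sample per equal-probability bin in each dimension,
    # with independently shuffled bin orderings across dimensions.
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n_samples for p in perm])
    return [list(row) for row in zip(*cols)]

pts = latin_hypercube(10, 2)   # 10 points in 2D; each dimension fully stratified
```

Compared with plain MC, this guarantees full marginal coverage with far fewer samples, which is why it is routinely paired with nested propagation schemes.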

... However, in practice, such information is often missing or only partially known due to our lack of knowledge of the underlying physical systems. Such uncertainty, which is termed epistemic uncertainty, has been discussed in many works, for example [21,22,16,26,12]. In particular, in [26,12], the authors proposed methods that first estimate the range of the input uncertain parameters and then approximate the solution in this estimated parameter domain. ...

In this work we propose a numerical framework for uncertainty quantification (UQ) for time-dependent problems with neural network surrogates. The new approach is based on approximating the exact time-integral form of the equation by neural networks, whose structure is an adaptation of the residual network. The network is trained with data generated by running high-fidelity simulations or by conducting experimental measurements for a very short time. The whole procedure does not require any probability information from the random parameters and can be conducted offline. Once the distribution of the random parameters becomes available a posteriori, one can post-process the neural network surrogate to calculate the statistics of the solution. Several numerical examples are presented to demonstrate the procedure and performance of the proposed method.

... The coefficients of the expansion (15) can be calculated via spectral projection [27]. This approach projects the response x against each basis function using inner products and employs the polynomial orthogonality properties to extract each coefficient. Each coefficient in Eq. (16) is calculated as: ...
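In outline, spectral projection computes each coefficient as the inner product of the response with a basis polynomial, divided by that polynomial's norm; for a standard normal input and probabilists' Hermite polynomials the inner products can be evaluated with Gauss-Hermite quadrature. The toy response x² below is an illustrative assumption, chosen because its exact expansion is known.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, order, n_quad=20):
    # Probabilists' Gauss-Hermite nodes/weights (weight exp(-x^2/2)).
    x, w = He.hermegauss(n_quad)
    w = w / math.sqrt(2.0 * math.pi)          # normalize: weights now sum to 1
    coeffs = []
    for n in range(order + 1):
        basis = He.hermeval(x, [0.0] * n + [1.0])    # He_n at the nodes
        # Spectral projection: c_n = E[f * He_n] / E[He_n^2], with E[He_n^2] = n!
        coeffs.append(float(np.sum(w * f(x) * basis)) / math.factorial(n))
    return np.array(coeffs)

c = pce_coefficients(lambda x: x**2, order=3)
# Since x^2 = He_0(x) + He_2(x), the coefficients are [1, 0, 1, 0].
```

The quadrature is exact here because the integrands are polynomials of degree well below 2·n_quad − 1; for general responses the node count controls the projection error.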

The chapter provides an overview of methods to quantify uncertainty in orbital mechanics. It also provides an initial classification of these methods with particular attention to whether the quantification method requires knowledge of the system model or not. For some methods the chapter provides application examples and numerical comparisons on selected test cases.

... Therefore, stochastic parameters cannot be reliably quantified, and possibilistic approaches such as interval [2] or fuzzy [3] theory are required. However, in reality uncertainties are usually neither purely aleatory nor purely epistemic but of mixed nature [4,5], which is also referred to as polymorphic or deep uncertainty by some authors [6,7]. Those mixed uncertain input parameters demand appropriate approaches, as e.g. ...

Imprecise random fields consider both aleatory and epistemic uncertainties. In this paper, spatially varying material parameters representing the constitutive parameters of a damage model for concrete are defined as imprecise random fields by assuming an interval-valued correlation length. For each correlation length value, the corresponding random field is discretized by Karhunen-Loève expansion. In a first study, the effect of the series truncation is discussed, as well as the resulting variance error on the probability box (p-box) that represents uncertainty on the damage in a concrete beam as a result of the imprecise random field. It is shown that a certain awareness of the influence of the truncation order on the local field variance is needed when the series is truncated according to a fixed mean variance error. In the following case study, the main investigation is on the propagation of imprecise random fields in the context of non-linear finite element problems, i.e. quasi-brittle damage of a concrete beam under four-point bending. The global and local damage as the quantities of interest are described by a p-box. The influence of several imprecise random field input parameters on the resulting p-boxes is studied. Furthermore, it is examined whether correlation length values located within the interval, so-called intermediate values, affect the p-box bounds. It is shown that, from the engineering point of view, a pure vertex analysis of the correlation length intervals is sufficient to determine the p-box in this context.
Keywords: uncertainty quantification, imprecise random fields, interval-valued correlation length, Karhunen-Loève expansion, non-linear stochastic finite element method, probability box approach
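The Karhunen-Loève discretization with an interval-valued correlation length can be sketched by eigendecomposing a covariance matrix on a grid and repeating the construction at the interval endpoints, mirroring the vertex analysis the abstract finds sufficient. The exponential covariance, unit domain, and interval [0.2, 1.0] below are illustrative assumptions, and this is a pointwise matrix discretization without grid-spacing weights.

```python
import numpy as np

def kl_field_samples(corr_len, n_pts=100, n_terms=10, n_samples=1, seed=0):
    # Covariance matrix of an exponential-covariance field on [0, 1].
    t = np.linspace(0.0, 1.0, n_pts)
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:n_terms]        # keep leading eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    # Truncated KL expansion: field = sum_k xi_k * sqrt(lambda_k) * phi_k.
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, n_terms))
    return t, xi @ (np.sqrt(vals) * vecs).T       # (n_samples, n_pts) realizations

# An interval-valued correlation length is handled by repeating the
# discretization at (at least) the interval endpoints:
for corr_len in (0.2, 1.0):                       # assumed interval [0.2, 1.0]
    t, fields = kl_field_samples(corr_len, n_samples=5)
```

Shorter correlation lengths spread variance over more eigenmodes, which is why the truncation order interacts with the local field variance as discussed above.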

... Specifically, the aleatory and epistemic uncertainties coexist within an engineering simulation model (Chen and Qiu 2018). This motivates many investigations on the structural reliability analysis with hybrid input uncertainties (Eldred et al. 2011;Wang and Qiu 2010). ...

Aleatory and epistemic uncertainties usually coexist within a mechanistic model, which motivates the hybrid structural reliability analysis considering random and interval variables in this paper. The introduction of an interval variable requires one to recursively evaluate embedded optimizations for the extremum of a performance function. The corresponding structural reliability analysis, hence, becomes a rather computationally intensive task. In this paper, physical characteristics of potential optima of the interval variable are first derived based on the Karush-Kuhn-Tucker condition, which is further programmed as a simulation procedure to pair qualified candidate samples. Then, an outer truncation boundary provided by the first-order reliability method is used to link the size of a truncation domain with the targeted failure probability, whereas the U function acts as a refinement criterion to remove inner samples for increased learning efficiency. Given new samples detected by the revised reliability-based expected improvement function, an adaptive Kriging surrogate model is determined to tackle the hybrid structural reliability analysis. Several numerical examples in the literature are presented to demonstrate applications of the proposed algorithm. Compared to benchmark results provided by brute-force Monte Carlo simulation, the high accuracy and efficiency of the proposed approach justify its potential for hybrid structural reliability analysis.
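The double-loop structure this abstract seeks to avoid is easy to state: an outer Monte Carlo loop over random variables and an inner optimization over the interval variable, yielding bounds on the failure probability. Below is a minimal sketch with an invented linear limit state, where monotonicity makes the inner optimization a vertex evaluation; the adaptive Kriging machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x1, x2, y):
    # illustrative limit state (failure when g < 0), not the paper's example
    return 3.0 - x1 - x2 - y

y_lo, y_hi = -0.5, 0.5                 # interval variable
N = 200_000
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)

# inner "optimization": g is monotone in y, so the extrema sit at the vertices
g_min = g(x1, x2, y_hi)                # worst case over the interval
g_max = g(x1, x2, y_lo)                # best case over the interval

pf_upper = np.mean(g_min < 0.0)        # upper bound on failure probability
pf_lower = np.mean(g_max < 0.0)        # lower bound
print(pf_lower, pf_upper)
```

For non-monotone limit states the inner step becomes a genuine optimization per sample, which is exactly the cost that surrogate-based methods aim to cut.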

... The appearance of various uncertain information in different forms (probability distributions, intervals, fuzzy sets) has made accurate estimation of input/output parameters an imperative yet difficult task. The challenge brought by multiple uncertainties, also termed mixed uncertainty, has been addressed in a series of publications [1][2][3][4]. ...

... In the global surrogate method, the system response can be approximated by establishing a surrogate model, thus improving the computational efficiency of the extreme value analysis of all joint focal elements. Originally, the global surrogate models were applied to the evidence theory model [42][43][44]. Subsequently, Jacobi polynomial expansion was introduced to calculate acoustic system response in the context of evidence theory [45]. ...

Traditional approaches used for analyzing the mechanical properties of auxetic structures are commonly based on deterministic techniques, where the effects of uncertainties are neglected. However, uncertainty is widely present in auxetic structures and may affect their mechanical properties greatly. Evidence theory has a strong ability to deal with uncertainties; thus, it is introduced for the modelling of epistemic uncertainties in auxetic structures. For the response analysis of a typical double-V negative Poisson’s ratio (NPR) structure with epistemic uncertainty, a new sequence-sampling-based arbitrary orthogonal polynomial (SS-AOP) expansion is proposed by introducing arbitrary orthogonal polynomial theory and the sequential sampling strategy. In SS-AOP, a sampling technique is developed to calculate the coefficients of the AOP expansion. In particular, the candidate points for sampling are generated using the Gauss points associated with the optimal Gauss weight function for each evidence variable, and the sequential-sampling technique is introduced to select the sampling points from the candidate points. By using SS-AOP, the number of sampling points needed for establishing the AOP expansion can be effectively reduced; thus, the efficiency of the AOP expansion method can be improved without sacrificing accuracy. The proposed SS-AOP is thoroughly investigated through comparison to the Gaussian quadrature-based AOP method, the Latin-hypercube-sampling-based AOP (LHS-AOP) method and the optimal Latin-hypercube-sampling-based AOP (OLHS-AOP) method.
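The core step shared by all the AOP variants above, estimating expansion coefficients from sampled model evaluations, can be sketched with a least-squares fit of a Legendre expansion at a subset of Gauss points. The response function and sample count here are invented for illustration; the sequential selection rule of SS-AOP is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def response(x):
    # illustrative model output, not the NPR structure of the paper
    return 1.0 + 0.5 * x + x**2

# candidate points: Gauss-Legendre nodes for a uniform variable on [-1, 1]
nodes, _ = legendre.leggauss(8)
samples = rng.choice(nodes, size=6, replace=False)   # a small sampled subset
y = response(samples)

# least-squares estimate of the expansion coefficients (degree 2)
V = legendre.legvander(samples, 2)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
surrogate = lambda x: legendre.legval(x, coef)
print(coef)
```

Because the test response is itself a degree-2 polynomial, the fit recovers the exact Legendre coefficients; for a general response, the quality depends on which candidate points are selected, which is what the sequential strategy optimizes.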

... Approaches to circumvent the exhaustive double loop simulation include interval MCS and interval importance sampling [55,56], stochastic expansions and optimization-based interval estimation [57] as well as surrogate modeling via optimization and approximation techniques [58]. Latest methods to improve computational performance for uncertainty quantification, for instance, combine p-boxes, univariate dimension reduction method and optimization [59], utilize the augmented space integral [60] or apply line outage distribution factors [43]. ...

In this work, the reliability of complex systems under consideration of imprecision is addressed. By joining two methods coming from different fields, namely, structural reliability and system reliability, a novel methodology is derived. The concepts of survival signature, fuzzy probability theory and the two versions of non-intrusive stochastic simulation (NISS) methods are adapted and merged, providing an efficient approach to quantify the reliability of complex systems taking into account the whole uncertainty spectrum. The new approach combines both of the advantageous characteristics of its two original components: 1. a significant reduction of the computational effort due to the separation property of the survival signature, i.e., once the system structure has been computed, any possible characterization of the probabilistic part can be tested with no need to recompute the structure and 2. a dramatically reduced sample size due to the adapted NISS methods, for which only a single stochastic simulation is required, avoiding the double loop simulations traditionally employed.
Beyond the merging of the theoretical aspects, the approach is employed to analyze a functional model of an axial compressor and an arbitrary complex system, providing accurate results and demonstrating efficiency and broad applicability.
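The separation property credited above can be shown on a toy system: the survival signature is computed once from the structure function, and any component reliability model is then tested without recomputing it. This sketch uses a hypothetical 2-out-of-3 system with one component type, not the axial compressor model.

```python
from itertools import combinations
from math import comb

n = 3
def system_works(working):          # structure function of a 2-out-of-3 system
    return sum(working) >= 2

# survival signature: P(system works | exactly l components work)
phi = []
for l in range(n + 1):
    states = list(combinations(range(n), l))
    ok = sum(system_works([i in s for i in range(n)]) for s in states)
    phi.append(ok / comb(n, l))

# separation property: reuse phi for any component reliability p
def reliability(p):
    return sum(comb(n, l) * p**l * (1 - p)**(n - l) * phi[l]
               for l in range(n + 1))

print(phi, reliability(0.9))
```

Sweeping `p` (or an imprecise model for it) only re-evaluates the cheap sum, which is what makes the combination with fuzzy probability and NISS efficient.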

... This kind of distribution parameter uncertainty resulting from the lack of knowledge of the fundamental phenomena can be classified as epistemic uncertainty. Many scholars [34][35][36][37][38][39][40] believe that these two kinds of uncertainty should be studied and distinguished because different types of uncertainty will lead to different model behavior. Morio [41] investigated the effect of the variability of the variable and proposed average global sensitivity measures over the entire space of the distribution parameter. ...

The failure probability-based global sensitivity is proposed to evaluate the influence of input variables on the failure probability. But for problems where the distribution parameters of variables are uncertain due to a lack of data or knowledge, if the original failure probability-based global sensitivity is employed to evaluate the influences of different uncertainty sources directly, the computational cost will be prohibitive. To address this issue, this work proposes a novel predictive failure probability (PFP)-based global sensitivity. By separating the overall uncertainty of variables into inherent uncertainty and distribution parameter uncertainty, the PFP can be evaluated by a single loop with an equivalent transformation. Then, the PFP-based global sensitivities with respect to (w.r.t.) the overall uncertainty, inherent uncertainty and distribution parameter uncertainty are proposed, respectively, and their relationships are discussed, which can be used to measure the influences of different uncertainty sources. To compute those global sensitivities efficiently, the Monte Carlo method and a Kriging-based method are employed for comparison. Several examples, including two numerical examples and three engineering applications, are investigated to validate the reasonableness and efficiency of the proposed method.
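The single-loop idea behind the predictive failure probability can be sketched directly: draw the uncertain distribution parameter and the variable jointly, instead of nesting an aleatory loop inside an epistemic one. The distributions and failure threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400_000

# epistemic: uncertain distribution parameter; aleatory: the variable itself
mu = rng.uniform(-0.2, 0.2, N)          # distribution-parameter uncertainty
x = rng.normal(mu, 1.0)                 # single joint loop, no nesting

pfp = np.mean(x > 2.0)                  # predictive failure probability
print(pfp)
```

The nested double-loop estimate of the same quantity would cost N_outer × N_inner model evaluations; the joint draw reduces this to a single sample of size N.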

... Evidence theory [39,40], possibility theory [41], credal sets, fuzzy sets and ambiguity sets theory [42][43][44][45], are some of the most used paradigms for this [46]. Distributionally robust CCPs have been proposed to identify robust designs that satisfy probabilistic constraints for a whole set of uncertainty models [47][48][49]. The authors of [50] present a hybrid reliability optimization method for handling imprecision via a combination of fuzzy and probabilistic uncertainty models. ...

Reliability-based design approaches via scenario optimization are driven by data, thereby eliminating the need for creating a probabilistic model of the uncertain parameters. A scenario approach not only yields a reliability-based design that is optimal for the existing data, but also a probabilistic certificate of its correctness against future data drawn from the same source. In this article, we seek designs that minimize not only the failure probability but also the risk measured by the expected severity of requirement violations. The resulting risk-based solution is equipped with a probabilistic certificate of correctness that depends on both the amount of data available and the complexity of the design architecture. This certificate comprises an upper and lower bound on the probability of exceeding a value-at-risk (quantile) level. A reliability interval can be easily derived by selecting a specific quantile value and is mathematically guaranteed for any reliability constraints having a convex dependency on the decision variable, and an arbitrary dependency on the uncertain parameters. Furthermore, the proposed approach enables the analyst to mitigate the effect of outliers in the data set and to trade off the reliability of competing requirements.

In the modern power system, the uncertainties such as renewable generation and electric vehicles are usually modelled as either interval or probabilistic variables for the power flow analysis. It is meaningful to study the mixed impacts of the interval and probabilistic variables on the power flow, but the existing methods considering the mixed impacts lack accuracy. This paper proposes a novel power flow analysis method considering both interval and probabilistic uncertainties, in which the probability box (P-box) model is established to investigate the power flow influenced by multi-type uncertainties. A probability-interval sample classification method combined with interpolation is proposed to achieve an accurate P-box representation of the power flow. Also, the correlation of uncertainties is fully considered where a novel extended optimizing-scenario method is proposed for obtaining the P-box model considering the multi-dimensional correlation of interval variables. Three test cases are carried out to verify the effectiveness of the proposed method. The P-box represented results clearly reflect the fluctuation range and the corresponding probability of the power flow. The specific influences of the sample size, correlation, capacity of multi-type uncertainties on the power flow and the P-box model are also determined and summarized.

Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.

The popular use of response surface methodology (RSM) accelerates the solution of parameter identification and response analysis problems. However, accurate RSM models subject to aleatory and epistemic uncertainties are still challenging to construct, especially for multidimensional inputs, which widely exist in real-world problems. In this study, an adaptive interval response surface methodology (AIRSM) based on extended active subspaces is proposed for mixed random and interval uncertainties. Based on the idea of subspace dimension reduction, extended active subspaces are given for mixed uncertainties and an interval active variable representation is derived for the construction of AIRSM. A weighted response surface strategy is introduced and tested for predicting the accurate boundary. Moreover, an interval dynamic correlation index is defined, and significance checks and cross validation are reformulated in active subspaces to evaluate the AIRSM. The high efficiency of AIRSM is demonstrated on two test examples: a three-dimensional nonlinear function and a speed reducer design. They both possess a dominant one-dimensional active subspace with small estimation error, and the accuracy of AIRSM is verified by comparison with full-dimensional Monte Carlo simulations, thus providing a potential template for tackling high-dimensional problems involving mixed aleatory and interval uncertainties.
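The active-subspace construction underlying AIRSM can be sketched for the purely random case: estimate the gradient covariance matrix from samples and take its dominant eigenvector as the active direction. The test function below is invented so that it has a one-dimensional active subspace by design; the interval extension of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_f(x):
    # f(x) = exp(0.7*x1 + 0.3*x2): a one-dimensional active subspace by design
    f = np.exp(0.7 * x[:, 0] + 0.3 * x[:, 1])
    return f[:, None] * np.array([0.7, 0.3])

X = rng.standard_normal((2000, 2))
G = grad_f(X)
C = G.T @ G / len(X)                     # gradient covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues
w = eigvecs[:, -1]                       # dominant (active) direction
print(eigvals, w)
```

A response surface is then fitted in the low-dimensional coordinate `X @ w` rather than the full input space, which is where the dimension-reduction saving comes from.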

For the reliability analysis of complex engineering structures, the estimation of the bounds of failure probability with interval distribution parameters is an important task when perfect information on the random variables is unavailable and the corresponding probability distributions are imprecise. The present work proposes an active learning Kriging-based method combined with adaptive radial-based importance sampling to compute the bounds of the failure probability. For computing the bounds, the classical double-loop optimization model is always investigated in the standard normal space. To decouple the computation, the inner-loop optimization is addressed with the monotonicity of the commonly used probability distributions. When suffering from high dimensionality, the dimension reduction method is introduced in the monotonic analysis. For the outer-loop optimization, the normal space is decomposed with spheres, and the proposed method with an adaptive updating procedure is given. With this method, the bounds of the failure probability can be estimated efficiently, especially for rare events. Numerical examples are investigated to validate the rationality and superiority of the proposed method. Finally, the proposed method is applied to the reliability analysis of a turbine blade and an aeronautical hydraulic pipeline system with interval distribution parameters.

Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems described by expensive-to-evaluate deterministic computer models with parameters defined as interval variables. These approaches employ a machine learning based optimization strategy, the so-called Bayesian optimization, for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The lack of knowledge caused by not evaluating the response function for all the possible combinations of the interval variables is accounted for by developing a probabilistic description of the response variable itself by using a Gaussian Process regression model. An iterative procedure is developed for selecting a small number of simulations to be evaluated for updating this statistical model by using well-established acquisition functions and to assess the response bounds. In both approaches, an initial training dataset is defined. While one approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable, the other one builds iteratively a single training dataset. Consequently, the two approaches will produce different bound estimates at each iteration. The upper and lower response bounds are expressed as point estimates obtained from the mean function of the posterior distribution. Moreover, a confidence interval on each estimate is provided for effectively communicating to engineers when these estimates are obtained at a combination of the interval variables for which no deterministic simulation has been run. Finally, two metrics are proposed to define conditions for assessing if the predicted bound estimates can be considered satisfactory. 
The applicability of these two approaches is illustrated with two numerical applications, one focusing on vibration and the other on vibro-acoustics.

As a typical method for hybrid uncertainty analysis, imprecise probability theory recently plays an increasingly important role in many engineering systems. To extend its application, this paper proposes a new imprecise probability model and a more efficient numerical method for hybrid uncertainty propagation. Instead of crisp real values, fuzzy sets with membership functions are utilized to describe the epistemic uncertainty in the distribution parameters of input random variables. In the presence of fuzzy distribution parameters, the conventional random moments of the output response will include a certain fuzzy characteristic. Based on the cut-set operation and fuzzy decomposition theorem, the calculation of a fuzzy random moment can be transformed into a series of λ-cut interval random moments. To further decrease the huge computational cost caused by the original complex computational model, a relatively simple metamodel with explicit expression is constructed by radial basis functions. Meanwhile, bisection points with the nestedness property are adopted as the experimental design strategy. Finally, two examples are implemented to verify the effectiveness of the proposed model and method in practical application.
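The λ-cut transformation described above reduces a fuzzy parameter to a family of intervals, one per membership level. A minimal sketch with an assumed triangular fuzzy mean and a response whose moment is monotone in the parameter (so the interval endpoints give the interval random moment):

```python
# triangular fuzzy number for a distribution parameter (illustrative values)
lo, mode, hi = 0.0, 1.0, 2.0

def alpha_cut(lam):
    """lambda-cut interval of the fuzzy mean."""
    return lo + lam * (mode - lo), hi - lam * (hi - mode)

def response_mean(m):
    # E[Y] for Y = 3*X with X ~ N(m, 1); monotone in m, so the interval
    # random moment is attained at the cut endpoints
    return 3.0 * m

cuts = {}
for lam in (0.0, 0.5, 1.0):
    a, b = alpha_cut(lam)
    cuts[lam] = (response_mean(a), response_mean(b))
print(cuts)
```

Stacking the cut intervals over λ reconstructs the membership function of the fuzzy output moment; for non-monotone responses each cut requires an interval optimization, which is where the metamodel earns its keep.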

In this article, we develop a new methodology for integrating epistemic uncertainties into the computation of performance measures of Markov chain models. We develop a power series algorithm that combines perturbation analysis and uncertainty analysis in a joint framework. We characterize several performance measures statistically, given that the distribution of the model parameter expressing the uncertainty about the exact parameter value is known. The technical part of the article provides convergence results, bounds for the remainder term of the power series, and bounds for the validity region of the approximation. In the algorithmic part, an efficient implementation of the power series algorithm for propagating epistemic uncertainty in queueing models with breakdowns and repairs is discussed. Several numerical examples are presented to illustrate the performance of the proposed algorithm and are compared with corresponding Monte Carlo simulations.

Uncertainties, which exist widely in the design of aerospace systems, have a great impact on system response and may lead to design failures. Both probabilistic and non-probabilistic uncertainties exist in real engineering cases. Thus, it is imperative to incorporate the mixed uncertainties into aerospace systems design and optimization. The present study proposes an efficient design optimization method considering random and interval uncertainties to conduct the uncertainty analysis and optimize the design of a hybrid rocket motor with mixed uncertainties. A polynomial chaos expansion model was selected to deal with the random uncertainties, and an interval analysis method based on the first-order Taylor expansion was adopted to calculate the response bounds of interval uncertainties. A simple structure was first used to verify the efficiency and accuracy of the proposed method. The random-interval analysis method was very efficient in searching for the optimal results, showing considerable computational cost savings. Subsequently, a design model of a hybrid rocket motor was introduced and verified by a long-duration firing test. In the optimization of the upper-stage hybrid rocket motor, the comparison between uncertainty analysis and traditional methods showed that design optimization considering the uncertainty in design variables was able to achieve more robust results. Compared with the deterministic scheme, the uncertainty scheme increased the total engine mass by nearly 18%, but the probabilities of all constraints considering parameter fluctuations remained above 0.9. Capable of solving complex problems with mixed uncertainties, this method shows its potential in aerospace engineering applications.
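The first-order Taylor interval step mentioned above linearizes the response at the interval midpoint and propagates the radii through the absolute gradient. A generic sketch (the function and box are invented; this is not the hybrid rocket motor model):

```python
import numpy as np

def taylor_interval(f, lo, hi, h=1e-6):
    """First-order Taylor estimate of the response bounds of f over the
    box [lo, hi], using central finite differences for the gradient."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    xc, r = (lo + hi) / 2.0, (hi - lo) / 2.0     # midpoint and radii
    grad = np.array([(f(xc + h * e) - f(xc - h * e)) / (2 * h)
                     for e in np.eye(len(xc))])
    spread = np.abs(grad) @ r
    return f(xc) - spread, f(xc) + spread

f = lambda x: x[0]**2 + 2.0 * x[1]
b_lo, b_hi = taylor_interval(f, [0.9, -0.1], [1.1, 0.1])
print(b_lo, b_hi)
```

The estimate is cheap (2n model calls) but only first-order accurate; for this example the exact bounds are [0.61, 1.41], so the linearization error is visible yet small for the narrow box.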

Degradation data collected from a single accelerated degradation testing (ADT) are usually non-sufficient, which causes a lack of knowledge and leads to epistemic uncertainties. So multi-source ADT datasets are usually integrated for degradation analysis and reliability evaluations. Nevertheless, the discrepancies among multi-source ADT datasets are ignored in current integration methods, which cannot make full use of all information in the integration process. To address these problems, an integration method of multi-source ADT datasets is developed for degradation analysis and reliability evaluations. Firstly, the evaluation index system for an ADT dataset is constructed. Then, the relative qualities of ADT datasets are computed through an additive model and a weight formula. After that, the degradation process is described by an uncertain accelerated degradation model to quantify epistemic uncertainties, and the related statistical analysis is given through a proposed uncertain least weighted squared error to consider dataset discrepancies. A simulation study and an application case are utilized to illustrate the proposed method. Results show that the proposed method can well capture the discrepancies among multi-source ADT datasets and make a quantitative comparison; additionally, it can notably decrease uncertainties in the degradation analysis.

This paper presents an extended polynomial chaos formalism for epistemic uncertainties and a new framework for evaluating sensitivities and variations of output probability density functions (PDFs) to uncertainty in probabilistic models of input variables. An "extended" polynomial chaos expansion (PCE) approach is developed that accounts for both aleatory and epistemic uncertainties, modeled as random variables, thus allowing a unified treatment of both types of uncertainty. We explore in particular epistemic uncertainty associated with the choice of prior probabilistic models for input parameters. A PCE-based kernel density estimation (KDE) construction provides a composite map from the PCE coefficients and germ to the PDF of quantities of interest (QoI). The sensitivities of these PDFs with respect to the input parameters of the probabilistic models are then evaluated. By sampling over the epistemic random variable, a family of PDFs is generated and the failure probability is itself estimated as a random variable with its own PCE. Integrating epistemic uncertainties within the PCE framework results in a computationally efficient paradigm for propagation and sensitivity evaluation. Two typical illustrative examples are used to demonstrate the proposed approach.
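The idea of the failure probability becoming a random variable can be sketched without the PCE machinery: sample the epistemic parameter of the input model, and for each draw estimate the failure probability under the implied aleatory distribution. The epistemic model and threshold here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# epistemic uncertainty in the prior scale of the input; for each draw of
# the scale, the failure probability is a different number
scales = rng.uniform(0.8, 1.2, 500)      # illustrative epistemic model
pf = np.empty_like(scales)
for i, s in enumerate(scales):
    x = rng.normal(0.0, s, 2000)         # aleatory samples under this prior
    pf[i] = np.mean(x > 2.0)             # failure probability for this prior

print(pf.mean(), pf.std())
```

The spread of `pf` across epistemic draws is the quantity the paper compresses into a PCE of the failure probability itself, enabling cheap sensitivity evaluation with respect to the prior model.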

Aleatory and epistemic uncertainties are being increasingly incorporated in verification, validation, and uncertainty quantification (UQ). However, UQ of high efficiency and confidence remains challenging for mixed multidimensional uncertainties. In this study, a generalized active subspace (GAS) for dimension reduction is presented and the characteristics of GAS are investigated by interval analysis. An adaptive response surface model can then be employed for uncertainty propagation. Since the precise eigenvalues of an interval matrix are difficult to obtain analytically, three alternative estimation methods, i.e. interval eigenvalue analysis (IEA), the empirical distribution function (EDF), and Taylor expansions, are developed for the GAS computation and practical use. The efficacy of the GAS and the estimation methods is demonstrated on three test examples: a three-dimensional response function, a standard NASA test of six-dimensional mixed uncertainties, and a NACA0012 airfoil design case with ten epistemic uncertainties. The IEA estimate is comparatively more suitable, but needs more computational cost due to the requirement of bound matrices. When the uncertainty level is small, the three methods are all applicable and the estimate based on the EDF can be more efficient. The methodology exhibits high accuracy and strong adaptability in dimension reduction, thus providing a potential template for tackling a wide variety of multidimensional mixed aleatory-epistemic UQ problems.

Redundant design has been widely used in aerospace systems, nuclear systems, etc., which calls for particular attention to common cause failure problems in such systems with various kinds of redundant mechanisms. Besides, imprecision and epistemic uncertainties also need to be taken into account for system reliability modeling and assessment. In this paper, a comprehensive study based on the evidential network is performed for the reliability analysis of complex systems with common cause failures and mixed uncertainties. The decomposed partial α-factor is used to separate the contributions of the independent parts and common cause parts of basic failure events. Mixed uncertainties are quantified and expressed by D-S evidence theory, and the system reliability with uncertainties is modeled by an evidential network. Furthermore, two layers, i.e. a decomposed event layer and a coupling layer, are embedded into the evidential network of the system, and, as a result, a hierarchical structure of system reliability is constructed. The importance and sensitivities of various component types and their impact on system reliability are detected. The presented evidential network-based hierarchical method is applied to analyze the reliability of an auxiliary power supply system of a train, and the results demonstrate the effectiveness of this method.
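The D-S evidence theory quantities used throughout such analyses, belief and plausibility, follow directly from a basic probability assignment (BPA) over focal sets. A minimal sketch on a three-element frame with made-up masses:

```python
# Belief and plausibility from a basic probability assignment (BPA) over
# focal sets of the frame {a, b, c} -- illustrative masses, not the paper's.
bpa = {frozenset('a'): 0.4,
       frozenset('ab'): 0.3,
       frozenset('abc'): 0.3}

def belief(A, bpa):
    """Sum of masses of focal sets fully contained in A."""
    A = frozenset(A)
    return sum(m for S, m in bpa.items() if S <= A)

def plausibility(A, bpa):
    """Sum of masses of focal sets intersecting A."""
    A = frozenset(A)
    return sum(m for S, m in bpa.items() if S & A)

print(belief('ab', bpa), plausibility('a', bpa))
```

The pair [Bel(A), Pl(A)] brackets the unknown probability of A, which is exactly the interval-valued output an evidential network propagates through the system structure.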

A computer-simulation-based three-stage optimization strategy is proposed for the resilience enhancement of urban gas distribution networks (GDNs). In stage I (pre-earthquake stage), the Fixed Proportion and Direct Comparison Genetic Algorithm (FPDC-GA) is applied to select key pipelines that need to be strengthened or replaced under limited funding in preparation for future potential earthquakes. In stage II (post-earthquake stage), pressure tests must be carried out according to the gas leakage situation reported by users or detected by devices. The Multi-Label K-Nearest-Neighbor (ML-KNN) algorithm is used to predict the corresponding failed pipelines and optimize the pipeline pressure test order. In stage III (repair stage), a strategy based on a greedy algorithm is applied to optimize the pipeline repair sequence. The proposed methods were applied to the GDN of a city in northern China. The following conclusions were drawn from the results: (1) The FPDC-GA enhanced the robustness and resourcefulness of the GDN system to the maximum level within the available funding budget. (2) The pipeline pressure test order calculated using the ML-KNN algorithm was significantly improved compared with a random pressure test order or one based on the empirical failure probability of pipelines. (3) After optimization using a greedy algorithm, the performance recovery curves under different earthquake conditions were shaped as an exponential function, which indicates that the performance of the GDNs recovered in the most efficient manner. The findings of this study could be useful as tools for the seismic resilience enhancement of GDNs in different stages. The proposed optimization algorithms can also be extended to other lifeline networks.

The performance of complex systems is closely related to their safety and reliability. Grasping the performance of complex systems accurately and in a timely manner can help avoid or reduce losses if performance drops. In engineering practice, monitoring information in interval form can express uncertainty. The upper and lower bounds of monitoring information are easier to obtain than the probability distribution, which reduces the requirements on the information distribution. In this paper, the interval evidential reasoning approach is used to evaluate the performance of complex systems. A Monte Carlo simulation-based method for calculating the reliability and weight of evidence is proposed. The initial performance evaluation results are obtained by constructing a nonlinear optimization model. Then the historical performance evaluation information is fully considered to obtain the optimal evaluation results dynamically using a linear weighted update method. An uncertainty quantification method is proposed to quantify the uncertainty of the input information and evaluation results. Finally, a case study is carried out for a kind of diesel generator to validate the applicability of the proposed method, and shows that the proposed method can improve the evaluation accuracy.

Reasonable assessment of web cracking probability is essential to ensure the service performance of corroded prestressed concrete (PC) bridges. In this paper, a time-dependent prediction model of effective prestress for different prestress tensioning techniques and a corrosion propagation model are established. Meanwhile, an assessment approach for web cracking probability considering both aleatory and epistemic uncertainties is proposed. The case analysis results show that the double-tensioned prestress technique is obviously more effective in decreasing the web cracking probability than the traditional prestress tensioning technique. The existing probabilistic method that only considers the aleatory uncertainty may greatly underestimate the probability of web cracking. In comparison, the fastest time to reach the threshold value of web cracking probability when considering the epistemic uncertainty is over 11% earlier than that obtained using the existing methods. Additionally, the epistemic uncertainty of the chloride diffusion coefficient has the most significant impact on the probability of web cracking among the selected corrosion parameters. Therefore, the epistemic uncertainty of corrosion parameters, especially the chloride diffusion coefficient, should be minimized as much as possible in this assessment process.

Owing to uncertainty factors present in the system, computer-aided engineering (CAE) models suffer from limitations in how accurately they represent the test model. This paper proposes a new predictive model, termed the designable generative adversarial network (DGAN), which applies an inverse generator neural network to the GAN, one of the methods employed for data augmentation. Statistical model-based validation and calibration technology, employed for improving the accuracy of a predictive model, is used as a baseline for comparing the prediction accuracy of the DGAN. Statistical model-based technology can construct a predictive model through calibration between actual test data and CAE data while accounting for uncertainty factors. However, the achievable improvement in prediction accuracy is limited by the degree of approximation of the CAE model. The DGAN can construct a predictive model through machine learning using only actual test data, improve the prediction accuracy of an actual test model, and identify the design variables that affect the response data, i.e., the output of the predictive model. The performance of the proposed prediction model was evaluated and verified, as a case study, through a numerical example and a system-level vehicle crash test model including parameter uncertainties.

The frequency response function (FRF) that links structural responses and external excitations is required to realize the dynamic analysis of a clamp-pipe system. Due to random geometry and stiffness parameters, the FRF becomes a stochastic quantity that is represented by a surrogate model based on the polynomial chaos expansion (PCE) in this paper. Considering that an FRF trajectory is mainly controlled by resonance and anti-resonance frequencies, a reference frequency coordinate characterized by the controlling frequencies associated with a nominal value of input random variables is first determined. By reserving the original amplitude, an FRF sample is projected to the reference coordinate through a linear transform. Then, a PCE surrogate model with coefficients estimated by the Gauss-quadrature scheme is obtained, and the utility of an inverse transform allows one to obtain the predicted FRF result in the original domain. Together with estimations on the confidence intervals and extreme boundary results, engineering applications are demonstrated by the uncertain FRF analysis of straight and L-shape clamp-pipe systems.

The treatment of uncertainty using extra-probabilistic approaches, like intervals or p-boxes, allows for a clear separation between epistemic uncertainty and randomness in the results of risk assessments. This can take the form of an interval of failure probabilities; the interval width W being an indicator of "what is unknown". In some situations, W is too large to be informative. To overcome this problem, we propose to reverse the usual chain of treatment by starting with the targeted value of W that is acceptable to support the decision-making, and to quantify the necessary reduction in the input p-boxes that allows achieving it. In this view, we assess the feasibility of this procedure using two case studies (risk of dike failure, and risk of rupture of a frame structure subjected to lateral loads). By making the link with the estimation of excursion sets (i.e. the set of points where a function takes values below some prescribed threshold), we propose to alleviate the computational burden of the procedure by relying on the combination of Gaussian Process metamodels and sequential design of computer experiments. The considered test cases show that the estimates can be achieved with only a few tens of calls to the computationally intensive algorithm for mixed aleatory/epistemic uncertainty propagation.
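To make the interval-of-failure-probabilities idea concrete, here is a minimal two-loop sketch in Python (a toy limit state with an assumed Normal load and fixed capacity, not the dike or frame case studies of the paper): the inner loop propagates aleatory variability for a fixed epistemic value, and the outer loop scans the epistemic interval to produce the interval [p_lo, p_hi] and its width W.

```python
import numpy as np

def failure_prob(mu_load, n=20000):
    # Inner aleatory loop: load ~ Normal(mu_load, 1); capacity fixed at 5.
    # A fixed seed gives common random numbers across outer-loop evaluations.
    rng = np.random.default_rng(0)
    load = rng.normal(mu_load, 1.0, n)
    return float(np.mean(load > 5.0))

# Outer epistemic loop: the mean load is known only to lie in [2, 3].
grid = np.linspace(2.0, 3.0, 11)
probs = [failure_prob(mu) for mu in grid]
p_lo, p_hi = min(probs), max(probs)
width = p_hi - p_lo   # the interval width W, an indicator of "what is unknown"
```

The paper's procedure then works backwards from a target W to the required tightening of the input p-boxes; this sketch only shows the forward propagation that produces W.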

A novel, efficient robust design optimization (RDO) framework is proposed to reduce the computational cost, by orders of magnitude, generally associated with conventional RDO practice. It involves a deterministic optimization of the problem, which contains random variables, in the first step, followed by uncertainty quantification (UQ) at the obtained optimum solutions in the second step. During the UQ process, the design variables are taken as independent and normally distributed. This methodology offers an advantage over the standard approach, as it provides approximately the same optimized solutions as the traditional methods. Moreover, the optimization and UQ phases are completely decoupled, so that formulating different weights in the RDO objective is not required to acquire a Pareto front. This yields a much faster and more practical approach in terms of computational cost. The presented algorithm was applied to constrained and unconstrained nonlinear analytical problems and to an engineering problem in which the mass of a stiffened panel is minimized, involving seven random input variables and thirteen deterministic design variables. The robust optimization of the stiffened panel is carried out using both the proposed and the traditional RDO method, and their performance is compared. A sequential sampling method based on the Sobol sequence was utilized to compare the optimal stochastic responses obtained with this algorithm to regular RDO. The results revealed a high accuracy of the present algorithm despite using substantially fewer function calls. Therefore, this study will be influential in obtaining robust optimal designs at a lower computational cost.

Static response analysis of a dual crane system (DCS) is conducted using fuzzy parameters. The fuzzy static equilibrium equation is established and two fuzzy perturbation methods, including the compound function/fuzzy perturbation method (CFFPM) and modified compound function/fuzzy perturbation method (MCFFPM), are presented. The CFFPM employs the level-cut technique to transform the fuzzy static equilibrium equation into several interval equations with different cut levels. The interval Jacobian matrix, the first and second interval virtual work vectors, and the inverse of interval Jacobian matrix are approximated by the first-order Taylor series and Neumann series. The fuzzy static response field for every cut level is obtained by a synthesis of the compound function technique, the interval perturbation method, and the fuzzy algorithm. In the MCFFPM, the fuzzy static response field for every cut level is derived based on the surface rail generation method, the modified Sherman–Morrison–Woodbury formula, and the fuzzy theory. Compared with the Monte Carlo method (MCM), numerical examples demonstrate that the MCFFPM has a better accuracy than the CFFPM and both of them bring a higher efficiency than the MCM, especially when it comes to uncertainty quantification of fuzzy parameters on the static response of the DCS.
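The Neumann-series approximation of a perturbed matrix inverse mentioned above can be illustrated generically (a hypothetical 2x2 system, not the DCS equilibrium equations): truncating $(A + \Delta A)^{-1} = (I + A^{-1}\Delta A)^{-1} A^{-1}$ after a few terms avoids refactorizing the perturbed matrix.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
dA = np.array([[0.05, 0.0],
               [0.0, -0.05]])            # small (interval-endpoint) perturbation

Ainv = np.linalg.inv(A)
B = Ainv @ dA
# Two-term Neumann series: (A + dA)^{-1} = (I + B)^{-1} Ainv
#                                        ~ (I - B + B^2) Ainv for small B
approx = (np.eye(2) - B + B @ B) @ Ainv
exact = np.linalg.inv(A + dA)
err = np.max(np.abs(approx - exact))     # O(||B||^3) truncation error
```

In an interval or fuzzy setting, the same truncated series is evaluated over interval-valued dA (e.g., at the endpoints of each cut level) instead of a single perturbation.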

High-quality factor resonant cavities are challenging structures to model in electromagnetics owing to their large sensitivity to minute parameter changes. Therefore, uncertainty quantification strategies are pivotal to understanding key parameters affecting the cavity response. We discuss here some of these strategies focusing on shielding effectiveness properties of a canonical slotted cylindrical cavity that will be used to develop credibility evidence in support of predictions made using computational simulations for this application.

Sensitivity analysis investigates how changes in the output of a computational model can be attributed to changes in its input parameters. Identifying the input parameters that propagate more uncertainty onto the ruin probability associated with insurance risk models is a challenging problem. In this paper, we consider the classical risk model, where epistemic uncertainty veils the true values of the claim size distribution rate and the Poisson arrival rate. Based on the available data for calibrating the probability distributions that model gaps of knowledge on these rates, and using the Taylor-series expansion methodology, we obtain the ruin probability in polynomial form in the uncertain rates as a computational model. Specifically, we obtain a new sensitivity estimate of the ruin probability with respect to the uncertain parameters. We provide a coherent framework within which we can accurately characterize the uncertain ruin probability statistically. In addition, we use Markov's inequality to estimate the risk incurred by working with an uncertain ruin probability rather than one evaluated at fixed parameters. A series of numerical experiments is presented to illustrate the potential of the proposed approach.

In reliability assessment, a key difficulty is handling a complex system with hybrid uncertainty (aleatory and epistemic) and dependency problems. The probability-box is a general model for representing hybrid uncertainty. Arithmetic rules on this structure are mostly applied between independent random variables; in practice, however, dependency problems are also common in reliability assessment. In addition, in most real applications there is some prior information on the dependency of components, but the available information may not be enough to determine the dependence parameters. This issue is termed the non-deterministic dependency problem in this paper. Affine arithmetic is hence used to produce dependent interval estimates; it sometimes handles dependency better than probability-box (interval) arithmetic. The Bayesian network is a commonly used model in reliability assessment. Under the Bayesian network framework, this paper proposes a dependency bounds analysis method that combines affine arithmetic and the probability-box method to handle hybrid uncertainty and non-deterministic dependency. For the sake of illustration, the method is applied to two real systems. To show its advantages, the proposed method is compared with the Frechet inequalities and the 2-stage Monte Carlo method in the second case study.
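A minimal sketch of why affine arithmetic handles dependency better than plain interval arithmetic: shared noise symbols cancel, so x - x is exactly zero, whereas interval subtraction of [8, 12] from itself yields [-4, 4]. (Toy addition/subtraction only; multiplication and the p-box combination used in the paper are omitted.)

```python
class Affine:
    # Represents x = c + sum(coeffs[i] * eps_i) with noise symbols eps_i in [-1, 1].
    def __init__(self, c, coeffs):
        self.c, self.coeffs = c, dict(coeffs)

    def __add__(self, o):
        keys = set(self.coeffs) | set(o.coeffs)
        return Affine(self.c + o.c,
                      {k: self.coeffs.get(k, 0.0) + o.coeffs.get(k, 0.0) for k in keys})

    def __sub__(self, o):
        keys = set(self.coeffs) | set(o.coeffs)
        return Affine(self.c - o.c,
                      {k: self.coeffs.get(k, 0.0) - o.coeffs.get(k, 0.0) for k in keys})

    def interval(self):
        # Enclosing interval: center +/- sum of absolute noise coefficients.
        r = sum(abs(v) for v in self.coeffs.values())
        return (self.c - r, self.c + r)

x = Affine(10.0, {"e1": 2.0})   # x in [8, 12], carried by noise symbol e1
y = x - x                        # the shared symbol e1 cancels exactly
naive = (8 - 12, 12 - 8)         # plain interval arithmetic ignores the dependency
```

Tracking the noise symbols is what lets affine arithmetic model "same source of uncertainty appears twice", which is exactly the non-deterministic dependency situation described above.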

Aerospace engineering has been at the forefront both in modeling and design optimization due to the demand for high performance. The objective of this chapter is to show how numerical optimization has been useful in the design of aerospace systems and to give an idea of the challenges involved.

Conventional reliability-based design optimization (RBDO) requires a probabilistic analysis of the reliability constraints for each of the trial designs controlled by the optimizer. For complex constraints that require time-consuming numerical analyses, this process can be extremely time-consuming. The basis of this paper is an innovative concept called "approximately equivalent deterministic constraint," previously demonstrated for single constraint problems. This paper demonstrates that the concept, related to safety-factor based design, is valid for multiple reliability constraints. In addition to improved efficiency, the method has several unique advantages: (1) it incorporates the partial safety factor concept that most designers are familiar with, (2) the probabilistic analysis loop is entirely decoupled from the optimization loop for easy code implementation, and (3) it produces progressively improved reliable designs in the initial steps that help designers keep track of their designs. The method is demonstrated using an example that suggests that the safety-factor based RBDO approach is efficient and robust. © 2001 by Wu, Shin, Sues, & Cesare. Published by the American Institute of Aeronautics and Astronautics, Inc.

Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor-products or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for general probabilistic analysis problems. In this paper, we explore relative performance of these methods using a number of simple algebraic test problems, and analyze observed differences. In these computational experiments, performance of PCE and SC is shown to be very similar, although when differences are evident, SC is the consistent winner over traditional PCE formulations. This stems from the practical difficulty of optimally synchronizing the form of the PCE with the integration approach being employed, resulting in slight over- or under-integration of prescribed expansion form. With additional nontraditional tailoring of PCE form, it is shown that this performance gap can be reduced, and in some cases, eliminated. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.

Non-intrusive polynomial chaos expansion (NIPCE) methods based on orthogonal polynomials and stochastic collocation (SC) methods based on Lagrange interpolation polynomials are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic dependence. Both techniques reside in the collocation family, in that they sample the response metrics of interest at selected locations within the random domain without intrusion to simulation software. In this work, we explore the use of polynomial order refinement (p-refinement) approaches, both uniform and adaptive, in order to automate the assessment of UQ convergence and improve computational efficiency. In the first class of p-refinement approaches, we employ a general-purpose metric of response covariance to control the uniform and adaptive refinement processes. For the adaptive case, we detect anisotropy in the importance of the random variables as determined through variance-based decomposition and exploit this decomposition through anisotropic tensor-product and anisotropic sparse grid constructions. In the second class of p-refinement approaches, we move from anisotropic sparse grids to generalized sparse grids and employ a goal-oriented refinement process using statistical quantities of interest. Since these refinement goals can frequently involve metrics that are not analytic functions of the expansions (i.e., beyond low-order response moments), we additionally explore efficient mechanisms for accurately and efficiently estimating tail probabilities from the expansions based on importance sampling.

Epistemic uncertainty, characterizing lack-of-knowledge, is often prevalent in engineering applications. However, the methods we have for analyzing and propagating epistemic uncertainty are not nearly as widely used or well-understood as methods to propagate aleatory uncertainty (e.g. inherent variability characterized by probability distributions). In this paper, we examine three methods used in propagating epistemic uncertainties: interval analysis, Dempster-Shafer evidence theory, and second-order probability. We demonstrate examples of their use on a problem in structural dynamics, specifically in the assessment of margins. In terms of new approaches, we examine the use of surrogate methods in epistemic analysis, both surrogate-based optimization in interval analysis and use of polynomial chaos expansions to provide upper and lower bounding approximations. Although there are pitfalls associated with surrogates, they can be powerful and efficient in the quantification of epistemic uncertainty. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

The use of kriging models for approximation and metamodel-based design and optimization has been steadily on the rise in the past decade. The widespread use of kriging models appears to be hampered by 1) computationally efficient algorithms for accurately estimating the model's parameters, 2) an effective method to assess the resulting model's quality, and 3) the lack of guidance in selecting the appropriate form of the kriging model. We attempt to address these issues by comparing 1) maximum likelihood estimation and cross validation parameter estimation methods for selecting a kriging model's parameters given its form and 2) an R2 of prediction and the corrected Akaike information criterion assessment methods for quantifying the quality of the created kriging model. These methods are demonstrated with six test problems. Finally, different forms of kriging models are examined to determine if more complex forms are more accurate and easier to fit than simple forms of kriging models for approximating computer models.
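As a sketch of the kriging model form discussed above, here is ordinary kriging with a Gaussian correlation function and a fixed correlation parameter theta (i.e., skipping the maximum likelihood / cross-validation parameter estimation that the paper compares); values for theta and the sample sites are illustrative assumptions.

```python
import numpy as np

def kriging_predict(X, y, Xs, theta=10.0, nugget=1e-10):
    # Gaussian correlation R_ij = exp(-theta * (x_i - x_j)^2), 1-D inputs.
    def corr(a, b):
        return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

    R = corr(X, X) + nugget * np.eye(len(X))   # tiny nugget for conditioning
    r = corr(X, Xs)
    ones = np.ones(len(X))
    # Generalized least-squares estimate of the constant trend (ordinary kriging).
    mu = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))
    # BLUP: trend plus correlation-weighted residual correction.
    return mu + r.T @ np.linalg.solve(R, y - mu * ones)

X = np.array([0.0, 0.3, 0.6, 1.0])
y = np.sin(2 * np.pi * X)
pred = kriging_predict(X, y, X)   # kriging interpolates its training data
```

The quality-assessment step in the paper would then compare such predictions against held-out points (cross validation) or information criteria rather than the training set itself.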

Surrogate-based optimization (SBO) methods have become established as effective techniques for engineering design problems through their ability to tame nonsmoothness and reduce computational expense. Possible surrogate modeling techniques include data fits (local, multipoint, or global), multifidelity model hierarchies, and reduced-order models, and each of these types has unique features when employed within SBO. This paper explores a number of SBO algorithmic variations and their effect for different surrogate modeling cases. First, general facilities for constraint management are explored through approximate subproblem formulations (e.g., direct surrogate), constraint relaxation techniques (e.g., homotopy), merit function selections (e.g., augmented Lagrangian), and iterate acceptance logic selections (e.g., filter methods). Second, techniques specialized to particular surrogate types are described. Computational results are presented for sets of algebraic test problems and an engineering design application solved using the DAKOTA software. I.

Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for a range of probabilistic analysis problems. In addition, analytic features of the expansions can be exploited for moment estimation and stochastic sensitivity analysis. In this paper, the latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules. The most promising stochastic expansion approaches are then carried forward for use in new approaches for mixed aleatory-epistemic UQ, employing second-order probability approaches, and design under uncertainty, employing bilevel, sequential, and multifidelity approaches.

This paper proposes a new method that extends efficient global optimization to address stochastic black-box systems. The method is based on a kriging meta-model that provides a global prediction of the objective values and a measure of prediction uncertainty at every point. The criterion for the infill sample selection is an augmented expected improvement function with desirable properties for stochastic responses. The method is empirically compared with the revised simplex search, the simultaneous perturbation stochastic approximation, and the DIRECT methods using six test problems from the literature. An application case study on an inventory system is also documented. The results suggest that the proposed method has excellent consistency and efficiency in finding global optimal solutions, and is particularly useful for expensive systems.

This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.

The anisotropic sparse grid is a natural extension of the isotropic sparse grid, adapted for situations in which the behavior of the data varies with respect to particular spatial dimensions. An appropriately configured anisotropic sparse grid can achieve accuracy comparable to that of an isotropic sparse grid, while further compounding the reduction in the number of function evaluations that an isotropic sparse grid offers compared to a product rule. To modify an isotropic sparse grid algorithm into an anisotropic one requires changes to the selection criterion and the combining coefficients used in connection with the component product rules. This article discusses these changes and their implementation in a particular computer code.

Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators that includes the sample mean and the empirical distribution function.
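One of the sampling plans examined is Latin hypercube sampling; a minimal sketch (assuming uniform [0, 1) marginals) that places exactly one point in each of n equal-probability strata per dimension, with independent random stratum orderings across dimensions:

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    rng = np.random.default_rng(seed)
    # One random permutation of the n strata per dimension; columns are independent.
    perms = np.stack([rng.permutation(n) for _ in range(d)], axis=1)   # (n, d)
    # Jitter each point uniformly inside its stratum of width 1/n.
    return (perms + rng.random((n, d))) / n

lhs = latin_hypercube(100, 2, seed=0)
# Recover the stratum index of each point; every dimension covers all 100 strata.
strata = np.sort(np.floor(lhs * 100).astype(int), axis=0)
```

The variance reduction relative to simple random sampling comes precisely from this stratification: each marginal is sampled evenly rather than by chance.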

In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
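The exploit/explore balance described above is usually struck with the expected improvement criterion; a sketch for minimization, assuming the surrogate's prediction at a candidate point is Gaussian with mean mu and standard deviation sigma:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2).
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# High EI where the surrogate predicts improvement OR is very uncertain.
ei_exploit = expected_improvement(mu=0.5, sigma=0.1, f_best=1.0)
ei_explore = expected_improvement(mu=1.2, sigma=1.0, f_best=1.0)
ei_none    = expected_improvement(mu=2.0, sigma=0.0, f_best=1.0)
```

Maximizing this auxiliary function over the design space (itself cheap, since it only queries the surrogate) selects the next expensive evaluation point and yields the credible stopping rule mentioned above: stop when the maximum EI falls below a tolerance.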

This paper presents a basic tutorial on epistemic uncertainty quantification methods. Epistemic uncertainty, characterizing lack-of-knowledge, is often prevalent in engineering applications. However, the methods we have for analyzing and propagating epistemic uncertainty are not nearly as widely used or well-understood as methods to propagate aleatory uncertainty (e.g. inherent variability characterized by probability distributions). We examine three methods used in propagating epistemic uncertainties: interval analysis, Dempster-Shafer evidence theory, and second-order probability. We demonstrate examples of their use on a problem in structural dynamics.
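Of the three methods, Dempster-Shafer evidence theory is perhaps the least familiar; here is a toy sketch (hypothetical focal intervals and masses, and a monotone response y = x^2) of computing belief and plausibility that the response stays below a threshold. Belief counts the mass of focal elements that certainly satisfy the condition; plausibility counts those that possibly do.

```python
# Focal elements for an uncertain input x: (interval, basic probability assignment).
focal = [((1.0, 2.0), 0.5),
         ((1.5, 3.0), 0.3),
         ((2.5, 4.0), 0.2)]

def response_bounds(lo, hi):
    # y = x^2 is monotone increasing on positive intervals, so the
    # response bounds are just the images of the endpoints.
    return lo * lo, hi * hi

threshold = 6.0
# Belief: whole response interval lies below the threshold (certain).
belief = sum(m for (lo, hi), m in focal if response_bounds(lo, hi)[1] <= threshold)
# Plausibility: response interval intersects [0, threshold] (possible).
plaus = sum(m for (lo, hi), m in focal if response_bounds(lo, hi)[0] <= threshold)
```

For non-monotone responses, the per-focal-element bounds require an interval optimization over each cell, which is where the surrogate-based optimization discussed in the surrounding abstracts enters.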

Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability, and response levels) based on specified input random variable probability distributions. In this paper, a number of algorithmic variations are explored for both the forward reliability analysis of computing probabilities for specified response levels (the reliability index approach (RIA)) and the inverse reliability analysis of computing response levels for specified probabilities (the performance measure approach (PMA)). These variations include limit state linearizations, probability integrations, warm starting, and optimization algorithm selections. The resulting RIA/PMA reliability algorithms for uncertainty quantification are then employed within bi-level and sequential reliability-based design optimization approaches. Relative performance of these uncertainty quantification and reliability-based design optimization algorithms are presented for a number of computational experiments performed using the DAKOTA/UQ software.
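The simplest member of this family is the mean-value first-order method (a sketch of the basic idea, not DAKOTA's full RIA/PMA machinery): linearize the limit state at the input means, form the reliability index beta, and approximate the failure probability as Phi(-beta). Here for a linear limit state with assumed independent normal inputs:

```python
import math

# Limit state g(x1, x2) = x1 - x2 (capacity minus demand); failure when g < 0.
mu1, s1 = 10.0, 2.0    # capacity ~ N(10, 2^2)
mu2, s2 = 6.0, 1.5     # demand   ~ N(6, 1.5^2)

mu_g = mu1 - mu2
sigma_g = math.hypot(s1, s2)          # sqrt(s1^2 + s2^2) for a linear g
beta = mu_g / sigma_g                 # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
```

The RIA/PMA variations in the paper generalize this: RIA searches for the most probable failure point to compute a probability for a given response level, while PMA inverts the problem to find the response level at a prescribed probability.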

Engineers often perform sensitivity analyses to explore how changes in the inputs of a physical process or a model affect the outputs. This type of exploration is also important for the decision-making process. Specifically, engineers may want to explore whether the available information is sufficient to make a robust decision, or whether there exists sufficient uncertainty—i.e., lack of information—that the optimal solution to the decision problem is unclear, in which case it can be said to be sensitive to information state. In this paper, it is shown that an existing method for modeling and propagating uncertainty, called Probability Bounds Analysis (PBA), actually provides a general approach for exploring the global sensitivity of a decision problem that involves both probabilistic and imprecise information. Specifically, it is shown that PBA conceptually generalizes an approach to sensitivity analysis suggested in the area of decision analysis. The global nature of the analysis theoretically guarantees that the decision maker will identify any sensitivity in the formulated problem and information state. However, a tradeoff is made in the numerical implementation of PBA; a particular existing implementation that preserves the guarantee of identifying existing sensitivity is overly conservative and can result in "false alarms." The use of interval arithmetic in sensitivity analysis is discussed, and additional advantages and limitations of PBA as a sensitivity analysis tool are identified.

Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.

Most numerical integration techniques consist of approximating the integrand by a polynomial in a region or regions and then integrating the polynomial exactly. Often a complicated integrand can be factored into a non-negative "weight" function and another function better approximated by a polynomial, thus $\int_{a}^{b} g(t)dt = \int_{a}^{b} \omega (t)f(t)dt \approx \sum_{i=1}^{N} w_i f(t_i)$. Hopefully, the quadrature rule ${\{w_j, t_j\}}_{j=1}^{N}$ corresponding to the weight function $\omega(t)$ is available in tabulated form, but more likely it is not. We present here two algorithms for generating the Gaussian quadrature rule defined by the weight function when: a) the three-term recurrence relation is known for the orthogonal polynomials generated by $\omega(t)$, and b) the moments of the weight function are known or can be calculated.
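Case (a), generating the rule from the three-term recurrence, is what is now usually called the Golub-Welsch algorithm: assemble the symmetric tridiagonal Jacobi matrix from the recurrence coefficients; its eigenvalues are the nodes and the weights follow from the first components of the eigenvectors, scaled by the zeroth moment. A sketch for the Legendre weight $\omega(t)=1$ on $[-1,1]$, where the recurrence coefficients are known in closed form:

```python
import numpy as np

def gauss_legendre(n):
    # Golub-Welsch: Jacobi matrix from the three-term recurrence of the
    # orthonormal Legendre polynomials (diagonal alpha_k = 0 by symmetry).
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)      # off-diagonal coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :] ** 2            # mu_0 = integral of w(t) dt = 2
    return nodes, weights

t, w = gauss_legendre(5)
# A 5-point Gauss rule integrates polynomials up to degree 9 exactly.
approx = np.sum(w * t ** 8)                    # integral of t^8 on [-1, 1] = 2/9
```

The same construction works for any weight whose recurrence coefficients are available (e.g. Hermite or Laguerre), which is exactly what makes tabulated rules unnecessary.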

This paper provides details on the application of the Hermite Polynomial Chaos (PC) method for representing geometric uncertainty for a model problem as a step towards the development of a stochastic compressible Euler and Navier-Stokes tool. Currently, results have been obtained for Laplace's equation in two dimensions in which the location of one of the boundaries is uncertain. Detailed comparisons between Polynomial Chaos and Monte Carlo simulations are made, including precision and convergence studies of the statistics of the distributions as well as pointwise comparisons of histograms within the domain.

Nonintrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification due to their fast convergence properties and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, cubature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients and requires the use of structured collocation point sets derived from tensor product or sparse grids. Once PCE or SC representations have been obtained for a response metric of interest, analytic expressions can be derived for the moments of the expansion and for the design derivatives of these moments, allowing for efficient design under uncertainty formulations involving moment control (e.g., robust design). This paper presents two approaches for moment design sensitivities, one involving a single response function expansion over the full range of both the design and uncertain variables and one involving response function and derivative expansions over only the uncertain variables for each instance of the design variables. These two approaches present trade-offs involving expansion dimensionality, global versus local validity, collocation point data requirements, and L 2 (mean, variance, probability) versus L ∞ (minima, maxima) interrogation requirements. Given this capability for analytic moments and moment sensitivities, bilevel, sequential, and multifidelity formulations for design under uncertainty are explored. Computational results are presented for a set of algebraic benchmark test problems, with attention to design formulation, stochastic expansion type, stochastic sensitivity approach, and numerical integration method.
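The analytic moments referred to above follow directly from orthogonality of the basis; a one-dimensional sketch (probabilists' Hermite basis for an assumed standard normal input, coefficients estimated by Gauss-Hermite quadrature): the mean is the zeroth coefficient and the variance is the sum of the squared higher coefficients weighted by the basis norms $\langle He_k^2\rangle = k!$.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_moments(f, order=4, nquad=8):
    # Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k.
    x, w = hermegauss(nquad)
    w = w / sqrt(2.0 * pi)          # normalize weights to the Gaussian measure
    coeffs = []
    for k in range(order + 1):
        He_k = hermeval(x, [0.0] * k + [1.0])
        # c_k = E[f(xi) He_k(xi)] / E[He_k^2], with E[He_k^2] = k!
        coeffs.append(np.sum(w * f(x) * He_k) / factorial(k))
    mean = coeffs[0]
    var = sum(c * c * factorial(k) for k, c in enumerate(coeffs[1:], start=1))
    return mean, var

# f(xi) = xi^2 has exact mean 1 and variance 2 for xi ~ N(0,1).
mean, var = pce_moments(lambda x: x * x)
```

Because the moments are analytic in the coefficients, their design derivatives reduce to derivatives of the coefficients, which is what enables the bilevel and sequential design-under-uncertainty formulations described above.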

The second-order probability technique is a method in which an incompletely specified (finite-argument) probability function is modeled as a random vector over the associated natural simplex of all possible probability functions of a fixed number of common arguments. The application of this technique is analyzed in a general context, using the Dirichlet-family generalization of the uniform distribution over a simplex, for updating an event upon a conditional probability statement when the disjointness of the Judy Benjamin (JB) problem does not hold and a nonuniform prior is sought.

This report forms the user's guide for Version 1.1 of SOL/NPSOL, a set of Fortran subroutines designed to minimize an arbitrary smooth function subject to constraints, which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints. (NPSOL may also be used for unconstrained, bound-constrained and linearly constrained optimization.) The user must provide subroutines that define the objective and constraint functions and their gradients. All matrices are treated as dense, and hence NPSOL is not intended for large sparse problems. NPSOL uses a sequential quadratic programming (SQP) algorithm, in which the search direction is the solution of a quadratic programming (QP) subproblem. The algorithm treats bounds, linear constraints and nonlinear constraints separately. The Hessian of each QP subproblem is a positive-definite quasi-Newton approximation to the Hessian of an augmented Lagrangian function. The steplength at each iteration is required to produce a sufficient decrease in an augmented Lagrangian merit function. Each QP subproblem is solved using a quadratic programming package with several features that improve the efficiency of an SQP algorithm.

This report forms the user's guide for Version 4.0 of NPSOL, a set of Fortran subroutines designed to minimize a smooth function subject to constraints, which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints. (NPSOL may also be used for unconstrained, bound-constrained and linearly constrained optimization.) The user must provide subroutines that define the objective and constraint functions and (optionally) their gradients. All matrices are treated as dense, and hence NPSOL is not intended for large sparse problems. NPSOL uses a sequential quadratic programming (SQP) algorithm, in which the search direction is the solution of a quadratic programming (QP) subproblem. The algorithm treats bounds, linear constraints and nonlinear constraints separately. The Hessian of each QP subproblem is a positive-definite quasi-Newton approximation to the Hessian of the Lagrangian function. The steplength at each iteration is required to produce a sufficient decrease in an augmented Lagrangian merit function. Each QP subproblem is solved using a quadratic programming package with several features that improve the efficiency of an SQP algorithm.
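NPSOL itself is a licensed Fortran library, but the SQP structure described in these guides can be illustrated with SciPy's SLSQP (a different SQP implementation). The small inequality-constrained problem below is a standard textbook example, not one drawn from the NPSOL documentation:

```python
import numpy as np
from scipy.optimize import minimize

# Smooth objective; SQP methods use its gradient (here estimated
# numerically by SLSQP) to build each QP subproblem.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Bounds, linear/nonlinear constraints are handled through one interface.
cons = ({'type': 'ineq', 'fun': lambda x:  x[0] - 2 * x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})
bounds = ((0, None), (0, None))

res = minimize(f, x0=(2.0, 0.0), method='SLSQP',
               bounds=bounds, constraints=cons)
# Optimum lies on the first constraint, at approximately (1.4, 1.7).
```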

Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that currently available methods have difficulty solving either accurately or efficiently. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
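A heavily simplified one-dimensional sketch of this adaptive strategy (not the authors' implementation): a tiny hand-rolled Gaussian process is refined by repeatedly sampling where an expected feasibility criterion, in a commonly quoted closed form, is largest. The limit state, kernel length scale, and initial design below are all invented:

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D limit state: "failure" when g(x) < 0 (roots near x ~ 0.74, 2.97).
def g(x):
    return x * np.sin(x) - 0.5

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, nugget=1e-8):
    """Zero-mean GP regression: posterior mean and std at points Xs."""
    K = rbf(X, X) + nugget * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def eff(mu, s, eps=None):
    """Expected feasibility: large where the surrogate is both close to the
    limit state g = 0 and still uncertain."""
    eps = 2.0 * s if eps is None else eps
    z0, zm, zp = -mu / s, (-eps - mu) / s, (eps - mu) / s
    return (mu * (2 * norm.cdf(z0) - norm.cdf(zm) - norm.cdf(zp))
            - s * (2 * norm.pdf(z0) - norm.pdf(zm) - norm.pdf(zp))
            + eps * (norm.cdf(zp) - norm.cdf(zm)))

grid = np.linspace(0.0, 6.0, 200)
X = np.array([0.5, 3.0, 5.5])              # very small initial design
y = g(X)
for _ in range(8):                          # adaptive refinement loop
    mu, s = gp_posterior(X, y, grid)
    x_new = grid[np.argmax(eff(mu, s))]     # sample where EFF peaks
    X, y = np.append(X, x_new), np.append(y, g(x_new))
```

The true function is only evaluated eleven times, yet the adaptive samples concentrate near the zero crossings of g, which is where accuracy matters for probability integration.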

A comprehensive framework is set forth for the analysis of structural reliability under incomplete probability information. Under stipulated requirements of consistency, invariance, operability, and simplicity, a method is developed to incorporate in the reliability analysis incomplete probability information on random variables, including moments, bounds, marginal distributions, and partial joint distributions. The method is consistent with the philosophy of Ditlevsen's generalized reliability index and complements existing second-moment and full-distribution structural reliability theories.

Several simple test problems are used to explore the following approaches to the representation of the uncertainty in model predictions that derives from uncertainty in model inputs: probability theory, evidence theory, possibility theory, and interval analysis. Each of the test problems has rather diffuse characterizations of the uncertainty in model inputs obtained from one or more equally credible sources. These given uncertainty characterizations are translated into the mathematical structure associated with each of the indicated approaches to the representation of uncertainty and then propagated through the model with Monte Carlo techniques to obtain the corresponding representation of the uncertainty in one or more model predictions. The different approaches to the representation of uncertainty can lead to very different appearing representations of the uncertainty in model predictions even though the starting information is exactly the same for each approach. To avoid misunderstandings and, potentially, bad decisions, these representations must be interpreted in the context of the theory/procedure from which they derive.
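The point about different-appearing representations arising from identical starting information can be made with a toy model (invented here): two inputs known only to lie within ranges are propagated once as pure intervals and once under an assumed uniform distribution on the same ranges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented model with two inputs, each known only to a range:
# a in [1, 2], b in [0, 1].
def model(a, b):
    return a ** 2 + np.sin(b)

a_s = rng.uniform(1.0, 2.0, 100_000)
b_s = rng.uniform(0.0, 1.0, 100_000)
y = model(a_s, b_s)

# Interval analysis: only output bounds are claimed (sampling
# approximation; the exact bounds are [1, 4 + sin(1)]).
interval = (y.min(), y.max())

# Probabilistic treatment: assuming uniform distributions on the same
# ranges yields a full distribution (mean, percentiles), which looks like
# a much stronger statement despite identical input information.
mean = y.mean()
p5, p95 = np.percentile(y, [5, 95])
```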

Several algorithms are given and compared for computing Gauss quadrature rules. It is shown that, given the three-term recurrence relation for the orthogonal polynomials generated by the weight function, the quadrature rule may be generated by computing the eigenvalues and first components of the orthonormalized eigenvectors of a symmetric tridiagonal matrix. An algorithm is also presented for computing the three-term recurrence relation from the moments of the weight function.
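This is the classical Golub-Welsch procedure. A sketch for the Legendre weight on [-1, 1], using the known monic three-term recurrence (a_k = 0, b_k = k^2 / (4k^2 - 1)), is:

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: Gauss-Legendre nodes and weights from the eigenvalues
    and first eigenvector components of the symmetric tridiagonal Jacobi
    matrix of the Legendre three-term recurrence."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k ** 2 - 1.0)    # off-diagonal sqrt(b_k)
    J = np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix (a_k = 0)
    nodes, vecs = np.linalg.eigh(J)
    # Weights: mu_0 * (first component of each orthonormal eigenvector)^2,
    # where mu_0 = integral of the weight function = 2 on [-1, 1].
    weights = 2.0 * vecs[0, :] ** 2
    return nodes, weights

nodes, weights = gauss_legendre(5)
ref_x, ref_w = np.polynomial.legendre.leggauss(5)  # reference implementation
```

The five-point rule is exact for polynomials up to degree nine, e.g. it integrates x^8 over [-1, 1] to 2/9 exactly.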

In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainty (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. This presentation discusses and illustrates the conceptual and computational basis of QMU in analyses that use computational models to predict the behavior of complex systems. Topics considered include (1) the role of aleatory and epistemic uncertainty in QMU, (2) the representation of uncertainty with probability, (3) the probabilistic representation of uncertainty in QMU analyses involving only epistemic uncertainty, (4) the probabilistic representation of uncertainty in QMU analyses involving aleatory and epistemic uncertainty, (5) procedures for sampling-based uncertainty and sensitivity analysis, (6) the representation of uncertainty with alternatives to probability such as interval analysis, possibility theory and evidence theory, (7) the representation of uncertainty with alternatives to probability in QMU analyses involving only epistemic uncertainty, and (8) the representation of uncertainty with alternatives to probability in QMU analyses involving aleatory and epistemic uncertainty. Concepts and computational procedures are illustrated with both notional examples and examples from reactor safety and radioactive waste disposal.
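The segregated propagation of aleatory and epistemic uncertainty discussed here (and targeted by the main article's nested-sampling baseline) can be sketched with an invented toy model: an interval-valued epistemic parameter is sampled in an outer loop, and for each value an aleatory exceedance probability is estimated by inner-loop sampling, so the epistemic uncertainty appears as an interval on that probability:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented response: theta is epistemic (known only to an interval),
# x is aleatory (standard normal).
def model(theta, x):
    return theta * x + x ** 2

p_fail = []
for theta in rng.uniform(0.5, 1.5, size=50):       # outer loop: epistemic
    x = rng.normal(0.0, 1.0, size=2000)            # inner loop: aleatory
    p_fail.append(np.mean(model(theta, x) > 3.0))  # P(response > 3 | theta)

# The ensemble of conditional probabilities yields an interval-valued
# probability rather than a single number.
bounds = (min(p_fail), max(p_fail))
```

Each outer sample produces one conditional distribution; plotting all fifty gives the "horsetail" of CDFs typical of second-order probability analyses.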

Optimization of structures with respect to performance, weight, or cost is a well-known application of mathematical optimization theory. However, optimization of structures with respect to weight or cost under probabilistic reliability constraints, or optimization with respect to reliability under cost/weight constraints, has been the subject of only a few studies. The difficulty in using probabilistic constraints or reliability targets lies in the fact that modern reliability methods are themselves formulated as problems of optimization. In this paper, two special formulations based on the so-called first-order reliability method (FORM) are presented. It is demonstrated that both problems can be solved by a one-level optimization problem, at least for problems in which structural failure is characterized by a single failure criterion. Three examples demonstrate the algorithm, indicating that the proposed formulations are comparable in numerical effort to an approach based on semi-infinite programming but are definitely superior to a two-level formulation.
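A minimal sketch of the FORM ingredient referred to here, the Hasofer-Lind/Rackwitz-Fiessler iteration for the most probable point (MPP), on an invented linear limit state in standard normal space (for which the iteration converges in a single step):

```python
import numpy as np
from scipy.stats import norm

# Invented limit state in standard normal space; failure when g(u) <= 0.
def g(u):
    return 3.0 - u[0] - 0.5 * u[1]

def grad_g(u):
    return np.array([-1.0, -0.5])   # constant gradient (linear limit state)

# HL-RF iteration: find the point on g(u) = 0 closest to the origin.
u = np.zeros(2)
for _ in range(20):
    grad = grad_g(u)
    u = (grad @ u - g(u)) * grad / (grad @ grad)

beta = np.linalg.norm(u)            # reliability index = distance to MPP
p_f = norm.cdf(-beta)               # first-order failure probability
# For this linear limit state, beta = 3 / sqrt(1 + 0.25) exactly.
```

For nonlinear limit states the same update is iterated to convergence; the one-level formulations in the paper avoid embedding this search as an inner loop of the design optimization.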