## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

The use of statistics in risk assessment studies is an expanding field in which the selection of the proper technique is often difficult. This is the case for the sensitivity analysis methods used in conjunction with Monte Carlo computer codes. The Monte Carlo approach is commonly used in risk assessment, where it can estimate the uncertainty in a model's output due to the uncertainty in the model's input parameters. This treatment is referred to as uncertainty analysis and is generally complemented with a sensitivity analysis, which aims to identify the most influential system parameters. Often different sensitivity analysis techniques are used in similar contexts, and it would be useful to identify (a) whether certain techniques perform better than others and (b) when two or more techniques can provide complementary information. In this article a number of sensitivity analysis techniques are compared in the case of non-linear model responses. The test models originate from the context of the risk analysis for the disposal of radioactive waste, where sensitivity analysis plays a crucial role. The statistics taken into consideration include:

- Pearson Correlation Coefficient
- Partial Correlation Coefficient
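
As a concrete illustration of the two statistics listed above, the following sketch (plain Python, no external libraries, and a hypothetical linear test model chosen here for illustration) computes a Pearson correlation and a partial correlation controlling for a single covariate:

```python
import random

def pearson(x, y):
    """Pearson product-moment correlation coefficient (PEAR)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """PCC of x and y after removing the linear effect of one covariate z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5)

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(2000)]
x2 = [random.gauss(0, 1) for _ in range(2000)]
y = [3 * a + b for a, b in zip(x1, x2)]  # hypothetical linear test model

r_x1 = pearson(x1, y)                    # strong marginal correlation, near 3/sqrt(10)
r_x2_given_x1 = partial_corr(x2, y, x1)  # near 1: x2 fully explains the residual
```

With independent inputs and a linear model, the PCC of x2 approaches 1 once x1's dominant linear effect is removed, which is exactly the kind of complementary information the abstract alludes to.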

... Screening methods have been developed for screening out the non-influential variables in moderate- to high-dimensional problems [11,12]. The variance-based sensitivity indices [13,14,15], rooted in the random sampling high-dimensional model representation (RS-HDMR) [16], aim at measuring the relative importance of the input variables by attributing the model response variance to each input variable and their interactions. ...

... respectively, where cov(Ĝ²(u), Ĝ²(u′)) is the covariance of the squared GPR model Ĝ²(u). The posterior variance in Eq. (13) is further derived as: ...

... The proof of Lemma 1 is presented in Appendix A. As can be seen from Eq. (12), the posterior mean of V_u[Ĝ(u)] consists of two parts, ϑ₁ and ϑ₂, where ϑ₁ is the variance of the posterior mean μ_G(u), and ϑ₂ refers to the expectation of the posterior variance σ²_G(u), indicating that, as long as closed-form expressions of the posterior mean and variance of the GPR model Ĝ(u) are available, the analytical expression (not in closed form) of the posterior mean of V_u[Ĝ(u)] can be generated. Eq. (13) indicates that the posterior variance of V_u[Ĝ(u)] equals the expectation of the posterior covariance of the induced stochastic process model Ĝ²(u) (not Gaussian). The posterior covariance of Ĝ²(u) is uniquely determined by the posterior mean and covariance of the GPR model Ĝ(u), whose closed-form expression can be formulated analogously to Eqs. (7) and (8). ...

Variance-based sensitivity indices play an important role in scientific computation and data mining; thus the significance of developing numerical methods for the efficient and reliable estimation of these indices from (expensive) computer simulators and/or data cannot be overstated. In this article, the estimation of these sensitivity indices is treated as a statistical inference problem. Two principal lemmas are first proposed as rules of thumb for making the inference. After that, the posterior features of all the (partial) variance terms involved in the main and total effect indices are analytically derived (not in closed form) based on Bayesian Probabilistic Integration (BPI). This forms a data-driven method for estimating the sensitivity indices as well as the involved discretization errors. Further, to improve the efficiency of the developed method for expensive simulators, an acquisition function, named Posterior Variance Contribution (PVC), is utilized for realizing optimal designs of experiments, based on which an adaptive BPI method is established. The application of this framework is illustrated for the calculation of the main and total effect indices, but the two principal lemmas also apply to the calculation of interaction effect indices. The performance of the development is demonstrated on an illustrative numerical example and three engineering benchmarks with finite element models.
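
For readers less familiar with the quantities being inferred here, the main and total effect indices can also be estimated by plain Monte Carlo with pick-freeze sampling (Saltelli/Jansen-style estimators). The sketch below uses a hypothetical additive test model with known indices (S1 = 0.2, S2 = 0.8); it illustrates only the target quantities, not the BPI method of the article:

```python
import random

def sobol_indices(f, d, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimates of the main (S_i) and total (T_i) indices."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mu = sum(fA) / n
    var = sum((v - mu) ** 2 for v in fA) / n
    S, T = [], []
    for i in range(d):
        # rows of A with column i replaced by the corresponding column of B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa) for fa, fb, fab in zip(fA, fB, fABi)) / n / var)
        T.append(sum((fa - fab) ** 2 for fa, fab in zip(fA, fABi)) / (2 * n) / var)
    return S, T

# additive model Y = X1 + 2*X2 with X_i ~ U(0,1): analytically S = T = (0.2, 0.8)
S, T = sobol_indices(lambda x: x[0] + 2 * x[1], d=2)
```

Because the model is additive, the main and total indices coincide; an interaction term would open a gap between them.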

... SAMO focuses on the characterization of the model output distribution, such as its first or second moments, while ROSA is constrained to the typical reliability measures, such as the failure probability and the reliability index. Against this background, many ROSA techniques have been studied and employed, such as derivative-based SA [10], variance-based global SA [11], non-parametric global SA [12][13][14] and moment-independent global SA [15][16][17][18][19][20][21], to meet different kinds of analysis purposes. They have been widely used in the fields of civil engineering [22][23][24], aerospace engineering [25,26] and many other fields [27]. ...

... When investigating the PFP based global sensitivity indices, although the global sensitivity O_i can measure the contribution of the overall uncertainty, it cannot be decomposed in the way the variance-based global sensitivity can. In fact, for the global sensitivity O_i, the conditional PFP P_F|X_i(x_i) needs to eliminate the uncertainty of X_i, which means the uncertainties of u_i and θ_i are eliminated simultaneously according to the transformation in Eq. (14). Thus, the conditional PFP P_F|X_i(x_i) can be denoted as ...

... Compute samples (x_1, x_2, ..., x_N) for the variables X = (X_1, X_2, ..., X_n) with the separated samples of the distribution parameters (θ_1, θ_2, ..., θ_N) and (u_1, u_2, ..., u_N) according to the inverse transformation in Eq. (14). ...

The failure probability-based global sensitivity is proposed to evaluate the influence of input variables on the failure probability. However, when the distribution parameters of the variables are themselves uncertain due to a lack of data or knowledge, directly employing the original failure probability-based global sensitivity to evaluate the influences of the different uncertainty sources makes the computational cost prohibitive. To address this issue, this work proposes the novel predictive failure probability (PFP) based global sensitivity. By separating the overall uncertainty of the variables into inherent uncertainty and distribution parameter uncertainty, the PFP can be evaluated in a single loop with an equivalent transformation. Then, the PFP based global sensitivities with respect to (w.r.t.) the overall uncertainty, the inherent uncertainty and the distribution parameter uncertainty are proposed, respectively, and their relationships are discussed; these can be used to measure the influences of the different uncertainty sources. To compute those global sensitivities efficiently, the Monte Carlo method and a Kriging-based method are employed for comparison. Several examples, including two numerical examples and three engineering applications, are investigated to validate the reasonableness and efficiency of the proposed method.
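
The single-loop idea — sampling the uncertain distribution parameter and the inherent variability together in one pass — can be sketched in a few lines. All distributions and the limit state below are hypothetical placeholders, not the models of the paper:

```python
import random

def predictive_failure_prob(n=100000, seed=3):
    """Single-loop Monte Carlo estimate of a predictive failure probability:
    each sample first draws the uncertain distribution parameter, then the
    variable itself, so both uncertainty sources are propagated in one loop."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        theta = rng.gauss(0.0, 0.2)  # distribution-parameter uncertainty (mean of X)
        x = rng.gauss(theta, 1.0)    # inherent variability of X given theta
        if x > 3.0:                  # hypothetical limit state: failure when g = 3 - x < 0
            failures += 1
    return failures / n

pfp = predictive_failure_prob()  # analytically P(N(0, sqrt(1.04)) > 3), about 1.6e-3
```

Fixing theta instead of sampling it isolates the inherent-uncertainty contribution, which is how conditional sensitivities of this kind separate the uncertainty sources.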

... Importance ranking of influential factors (e.g., input parameters) is based on sensitivity measures calculated using correlation coefficients. The most commonly used correlation coefficients found in the literature include the Pearson Correlation Coefficient (PEAR), Spearman Correlation Coefficient (SPEAR), Standardized Regression Coefficients (SRCs), and Partial Correlation Coefficients (PCCs) [40][41][42][43][44][45][46][47]. For example, the SRCs indicate the total strength of the linear relationship between the model output and each input parameter, while the PCCs indicate the strength of the linear relationship between the model output and each input parameter after the linear effects of all other input parameters are removed. ...

... The performance of the correlation-based methods is measured by the coefficient of determination (R_Y²), which is defined as the ratio of the regression sum of squares to the total sum of squares and represents the fraction of the model variance explained by the regression [40,41,44]. A low value of R_Y² would signal "a poor regression model." ...
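
The coefficient of determination referred to above can be computed directly from a fitted regression. A minimal sketch (simple one-predictor least squares; both test models are illustrative): a monotonic model yields a usable R², while a non-monotonic one drives R² toward zero, signalling a poor regression model:

```python
def ols_predict(x, y):
    """Predictions of the least-squares line fitted to (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [my + slope * (a - mx) for a in x]

def r_squared(y, y_hat):
    """Regression sum of squares over total sum of squares."""
    my = sum(y) / len(y)
    ss_reg = sum((p - my) ** 2 for p in y_hat)
    ss_tot = sum((v - my) ** 2 for v in y)
    return ss_reg / ss_tot

x = [i / 999 for i in range(1000)]
y_mono = [a ** 3 for a in x]           # non-linear but monotonic
y_nonmono = [(a - 0.5) ** 2 for a in x]  # non-monotonic
r2_mono = r_squared(y_mono, ols_predict(x, y_mono))          # around 0.84
r2_nonmono = r_squared(y_nonmono, ols_predict(x, y_nonmono))  # essentially 0
```

The second case is exactly the failure mode discussed in the snippets below: the input matters, but a linear-correlation measure cannot see it.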

Fire is historically and analytically a significant contributor to nuclear power plant risk. The level of fire risk and the methods, tools, and data for modeling this risk are highly debated by experts. One area of debate is the input data used in fire modeling and how to deal with the high uncertainty in these data. This report outlines work performed to determine the key parameters causing this uncertainty and how it propagates into nuclear power plant PRA models. This research used the Integrated Probabilistic Risk Assessment (I-PRA) method and paves the way toward correctly assessing and reducing data uncertainty in fire modeling.

... Details of the various UA/SA methods can be found in Saltelli and Marivoet (1990), Melching (1995), Saltelli et al. (2000), Helton and Davis (2002), Saltelli et al. (2004), Saltelli et al. (2005), Manache and Melching (2008), and Gan et al. (2014), to name a few. SA techniques can be classified on three bases: exploration of the factor space, purpose of the SA, and type of sensitivity measure (Figure 1-1); their advantages and disadvantages are presented below. ...

... Typical sensitivity measures include the Pearson product moment coefficient (PEAR), standardized regression coefficient (SRC), partial correlation coefficient (PCC), Spearman coefficient (SPEAR), standardized rank regression coefficient (SRRC), and partial rank correlation coefficient (PRCC). Saltelli and Marivoet (1990), Helton and Davis (2002), and Manache and Melching (2008) have assessed the strengths and weaknesses of these methods; the latter have also summarized some additional correlation/regression-based measures. ...
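
The rank-based measures in this list (SPEAR, SRRC, PRCC) differ from their raw counterparts only in that the data are replaced by their ranks first. A minimal illustration (plain Python, a strictly monotonic hypothetical model, tie handling omitted):

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(v):
    """Ranks 1..n of the values in v (no tie handling, for brevity)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman coefficient: Pearson correlation of the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

x = [i / 99 for i in range(100)]
y = [math.exp(5 * a) for a in x]  # non-linear but monotonic
r_raw = pearson(x, y)    # well below 1: the linearity assumption is violated
r_rank = spearman(x, y)  # exactly 1: monotonicity is all the rank version needs
```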

... Several SA techniques are available in the literature and they can be subdivided into local and global methods [76], with the latter being more suitable for dimensionality reduction, since they quantify the contribution of each input to the variability of the output over the entire range of values of both the input and the output [20]. Global SA can be further divided into Regression-Based Sensitivity Analysis (RBSA) methods, also known as non-parametric techniques [73]; Variance-Based Sensitivity Analysis (VBSA) methods [3,62,74]; and Distribution-Based Sensitivity Analysis (DBSA) methods, also known as moment-independent methods [12], such as the δ indicator [10], input saliency [49], the Hellinger distance [30] and the Kullback-Leibler divergence [30]. However, both the RBSA and VBSA methods, in general, suffer from output function non-smoothness and/or multimodality (as explained in detail in Section 3.1). ...

... Global SA methods can be divided into three categories [12]: 1) RBSA methods, 2) VBSA methods and 3) DBSA methods. RBSA or non-parametric methods [73] exploit regression techniques to fit a regression model on a set of I/O relations and use the regression coefficients as sensitivity indices. RBSA methods are typically the simplest ones, and are also associated with the lowest computational cost, but their performance strongly depends on the form of the output, which is often required to be linear. ...

In the safety analyses of passive systems for nuclear energy applications, computationally demanding models can be substituted by fast-running surrogate models coupled with adaptive sampling techniques, speeding up the exploration of the component and system state-space and the characterization of the conditions leading to failure (i.e., the system Critical failure Regions, CRs). However, in some cases of non-smoothness and multimodality of the state-space, the existing approaches do not suffice. In this paper, we propose a novel methodological framework, based on Finite Mixture Models (FMMs) and Adaptive Kriging (AK-MCS), for CR characterization in case of non-smoothness and/or multimodality of the output. The framework contains three main steps: 1) dimensionality reduction through FMMs to tackle the output non-smoothness and multimodality, while focusing on the clusters defining the system failure; 2) adaptive training (AK-MCS) of the metamodel on the reduced space to mimic the time-demanding model; and, finally, 3) use of the trained metamodel to provide the output for new input combinations and to retrieve information about the CRs.
The framework is applied to the case study of a generic Passive Safety System (PSS) for Decay Heat Removal (DHR) designed for advanced Nuclear Power Plants (NPPs). The PSS operation is modelled through a time-demanding Thermal-Hydraulic (T-H) model, and the pressure selected for characterizing the PSS response to accidental conditions shows a strongly non-smooth and multimodal behavior. A comparison with an alternative approach from the literature, relying on a Support Vector Classifier (SVC) to cluster the output domain, is presented to support the framework as a valid approach to challenging CR characterization.

... 33 The correlation-based global methods use the input-output correlation as a measure of sensitivity; the standardized regression coefficients (SRCs) and partial correlation coefficients (PCCs) are commonly used. 48,52,67,73,[81][82][83][84] When these two methods are applied to rank-transformed data, they are referred to as the standardized rank regression coefficients (SRRCs) and partial rank correlation coefficients (PRCCs). 52 For instance, the State-of-the-Art Reactor Consequence Analyses (SOARCA) applied these correlation-based methods to rank the physical input parameters in the NPP accident progression model. ...

... 60,62,68 The performance of the correlation-based methods is measured by the coefficient of determination. 73,81,82 A low value of the coefficient of determination indicates poor performance of the correlation-based methods; typically, this occurs when the model is non-linear and non-monotonic and involves interactions. 28,49,67,84 To improve the performance of correlation-based SA, non-parametric regression methods, such as locally weighted regression, the generalized additive model, projection pursuit regression, and recursive partitioning regression, have also been applied for SA. ...

An Integrated Probabilistic Risk Assessment (I-PRA) framework combines spatio-temporal probabilistic simulations of underlying failure mechanisms with classical PRA logic consisting of event trees and fault trees. The Importance Measure (IM) methods commonly used in classical PRA, e.g., the Fussell-Vesely IM and Risk Achievement Worth, generate the ranking of PRA components at the basic event level utilizing one-at-a-time and local methods. These classical IM methods, however, are not sufficient for I-PRA where, in addition to the component-level risk importance ranking, the ranking of risk-contributing factors (e.g., physical design parameters) associated with the underlying failure mechanisms is desired. In this research, the Global IM method is suggested and implemented for the I-PRA framework. The Global IM can account for four key aspects of I-PRA, including (i) ranking of input parameters at the failure-mechanism level based on the contribution to the system risk metrics, (ii) uncertainty of input parameters, (iii) nonlinearity and interactions among input parameters inside the model, and (iv) uncertainty associated with the system risk estimates; it therefore enhances the accuracy of risk-importance ranking when the risk model has a high level of non-linearity, interactions, and uncertainty. This paper shows (a) qualitative justifications for the selection of the Global IM method for the I-PRA framework and (b) quantitative proof of concept using three case studies, including two illustrative fault tree examples and one practical application of the I-PRA framework for Generic Safety Issue 191 at nuclear power plants (a sump blockage issue following a loss-of-coolant accident).

... However, the weight of the top ranks would then be the same as that of the low ranks. Since the output of interest is mostly affected by a few top parameters, the importance weights of the parameters should decrease with the ranks, which can be achieved by replacing the ranks with the Savage scores [21]. ...

... In order to characterize more clearly the influence of each input parameter on the MCL in different break size cases and at different accident phases, the importance ranking orders of the input parameters were transformed into the Savage scores [21] so that the parameters could be categorized. According to the final processing of the results, the parameter importance ranking table of the CPR1000 reactor during the SBLOCA scenario was obtained, as summarized in Table 2. ...

The phenomenon identification and ranking table (PIRT) is an important basis of nuclear power plant (NPP) thermal-hydraulic analysis. This study focuses on the importance ranking of the input parameters when the PIRT is lacking; the target scenario is the small break loss of coolant accident (SBLOCA) in a pressurized water reactor (PWR), the CPR1000. A total of 54 input parameters which might influence the figure of merit (FOM) were identified, and the sensitivity measure of each input on the FOM was calculated through an optimized moment-independent global sensitivity analysis method. The importance ranking orders of the parameters were transformed into the Savage scores, and the parameters were categorized based on the Savage scores. A parameter importance ranking table for the SBLOCA scenario of the CPR1000 reactor was obtained, and the influences of some important parameters at different break sizes and different accident stages were analyzed.
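
The Savage-score transformation used above has a simple closed form: the parameter ranked i-th out of k receives S_i = Σ_{j=i}^{k} 1/j, so top-ranked parameters get disproportionately large weights. A minimal sketch:

```python
def savage_scores(k):
    """Savage score of rank i (1 = most important) among k parameters:
    S_i = sum_{j=i}^{k} 1/j."""
    return [sum(1.0 / j for j in range(i, k + 1)) for i in range(1, k + 1)]

scores = savage_scores(4)
# rank 1 -> 1 + 1/2 + 1/3 + 1/4 = 2.083...; rank 4 -> 0.25
# the scores always sum to k, so they redistribute weight toward the top ranks
```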

... Thus, global sensitivity analysis (GSA) [38,39] is an effective approach to investigate the influence of the geometrical parameters on the frequency range with double negativity. In general, several GSA methods have been proposed for different purposes, e.g., the variance-based GSA, the moment-independent GSA, the qualitative GSA, the non-parametric methods, and so on [40,41]. In this subsection, a non-parametric method [38,41] ...

Double negative elastic metamaterials are a kind of novel artificial material with a unique ability to manipulate elastic wave propagation at the subwavelength scale. They are generally designed by inducing the combination of multiple resonances with different negative effective parameters. However, due to the limitations of empirical structural shapes and design tools, these metamaterials are made of complex multiphase materials and possess a relatively narrow frequency range with double negativity, which is unsuitable for practical engineering applications. In this work, a strategy based on shape optimization and single-phase chiral elastic metamaterials (EMMs) is presented for designing EMMs with broadband double negativity (negative mass density and bulk modulus). Several numerical examples are presented to validate the method by considering different initial shapes, target frequency ranges and design variables. Besides, numerical simulations related to double negativity, including negative refraction and imaging, are investigated by using the optimized chiral EMMs. Interestingly, elastic wave mode conversion and super-resolution imaging of 0.28λ are observed. The proposed metamaterial combined with the design approach is a very efficient way to obtain double negativity over a broad frequency band and may thus have great potential for designing new elastic metamaterials.

... GSA, also called importance measure analysis [5][6][7], tries to measure the contributions of the uncertainties of the input parameters to the uncertainty of the model output (such as the variance of the model output, or the failure probability of the structure or system) over the global scope. At present, many global sensitivity analysis techniques are available, such as the non-parametric method developed by Helton and Saltelli [8,9], the variance-based method proposed by Sobol and Saltelli [4,10], and the moment-independent method introduced and developed by Borgonovo, Wei et al. [11][12][13][14][15]. ...

In the class of global sensitivity analysis methods, the elementary effects method focuses on identifying the few significant input parameters in a mathematical or engineering model with numerous input parameters using very few model evaluations. It has been proved that the sensitivity index based on the elementary effects method is an appropriate proxy of the total sensitivity index of the variance-based method. Nevertheless, it should be pointed out that the two variance-based indices, i.e., the first-order and total sensitivity indices, denote the first-order and total effect of each input parameter on the model output, respectively, while the elementary-effects-based sensitivity index can only reflect the total contribution of an input parameter to the model output and cannot distinguish whether this effect results from the input parameter alone or from interactions between it and the others. Therefore, this paper proposes a first-order sensitivity index based on the elementary effects method by employing the high-dimensional model representation. Next, the link between the first-order sensitivity index of the variance-based method and the proposed index is explored. Subsequently, three computational algorithms, i.e., the Monte Carlo simulation method, the sparse grid method and the dimension reduction method, are developed to estimate the proposed sensitivity index.
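
A minimal sketch of the elementary effects idea (a simple radial one-at-a-time design on [0,1]^d; the test model and all constants are illustrative, and this computes only the classic mu* screening measure, not the proposed first-order index):

```python
import random

def mu_star(f, d, r=50, delta=0.1, seed=2):
    """Mean absolute elementary effect of each input over r random base points."""
    rng = random.Random(seed)
    mu = [0.0] * d
    for _ in range(r):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(d)]  # keep x + delta inside [0,1]
        fx = f(x)
        for i in range(d):
            xi = list(x)
            xi[i] += delta  # perturb one input at a time
            mu[i] += abs((f(xi) - fx) / delta) / r
    return mu

# hypothetical model with an interaction term: Y = X1 + 5*X2 + X1*X2
mu = mu_star(lambda x: x[0] + 5 * x[1] + x[0] * x[1], d=2)
# mu[1] >> mu[0]: X2 is screened as far more influential
```

As the abstract notes, mu* lumps together main and interaction effects: the X1*X2 term inflates both entries, and the measure alone cannot say which part comes from interactions.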

... Specifically, for correlation coefficient estimations, various types of correlation coefficients are selected mainly based on the linearity between input factors and model outputs. When they have a linear relationship, Pearson and partial correlation coefficients are appropriate methods (Saltelli and Marivoet, 1990). If the relationship is non-linear, Spearman and partial rank correlation coefficients can be used as alternatives (Pastres et al., 1999). ...

Sensitivity analysis (SA) has been used to evaluate the behavior and quality of environmental models by estimating the contributions of potential uncertainty sources to quantities of interest (QoI) in the model output. Although there is an increasing literature on applying SA in environmental modeling, a pragmatic and specific framework for spatially distributed environmental models (SD-EMs) is lacking and remains a challenge. This article reviews the SA literature for the purposes of providing a step-by-step pragmatic framework to guide SA, with an emphasis on addressing potential uncertainty sources related to spatial datasets and the consequent impact on model predictive uncertainty in SD-EMs. The framework includes: identifying potential uncertainty sources; selecting appropriate SA methods and QoI in prediction according to SA purposes and SD-EM properties; propagating perturbations of the selected potential uncertainty sources by considering the spatial structure; and verifying the SA measures based on post-processing. The proposed framework was applied to a SWAT (Soil and Water Assessment Tool) application to demonstrate the sensitivities of the selected QoI to spatial inputs, including both raster and vector datasets - for example, DEM and meteorological information - and SWAT (sub)model parameters. The framework should benefit SA users not only in environmental modeling areas but in other modeling domains such as those embraced by geographical information system communities.

... If there is a good linear relationship between the input variable and the output variable, the equation for calculating the sensitivity coefficient can be further simplified to the following form (Saltelli and Marivoet 1990;Helton 1993;Kleijnen and Helton 1999;Saltelli et al. 2000;Borgonovo 2006;Borgonovo and Plischke 2016): ...

Based on daily meteorological data from 693 weather stations for the period 1960–2017, the characteristics of dryness/wetness trends and their relation with reference crop evapotranspiration (ET0) and precipitation changes were assessed in China. The results showed that the semi-arid/semi-humid areas and humid areas of southwestern China experienced a drying trend, while the arid areas in northwestern China and the humid areas in southern China became wetter over the past 58 years. The dryness/wetness trends were highly sensitive to the variability of ET0 in the arid and semi-arid areas and to precipitation in the humid and semi-humid areas. Regional differences in the contributions of ET0 and precipitation to the dryness/wetness trends were significant. In the arid areas, the average contribution of ET0 was 7% larger than that of precipitation, except in winter. A decreased ET0 due to a reduction in wind speed and the increase in precipitation led to a wetting trend in these areas. In the semi-arid/semi-humid areas, ET0 had a significantly larger effect on the dryness/wetness trends than precipitation. Contribution of ET0 was about 21% larger than that of precipitation in spring and about 7% larger at the annual timescale. Due to an increase in ET0 because of rising temperature and decrease in precipitation, the areas tended to be drier. In humid areas, precipitation was the dominant factor for the wetting trends. Contribution of precipitation was about 4 to 10% larger than that of ET0 in summer and at the annual timescale and even completely dominated the trend of SPEI in winter. However, contribution of ET0 in spring was slightly larger than that of precipitation in the humid areas, and a significant increase in temperature led to an increase in ET0 and resulted in a tendency of dryness. 
Aside from the reduction in precipitation, increased evapotranspiration had comparable contributions to the trends of dryness, even greater than that of precipitation in the drying season and in drying areas. Our findings suggested that the effects of ET0 against the background of global warming deserve more attention in future studies of dryness/wetness or drought.

... For broad overviews, we refer the reader to the monographs of Saltelli, Ratto, Tarantola, and Campolongo (2012) and Borgonovo (2017) and to the Handbook of Uncertainty Quantification (Ghanem, Higdon, & Owhadi, 2017) for comprehensive discussions. Among global methods, in risk analysis regression-based methods have been among the first to be applied for factor prioritization (Helton, 1993;Saltelli & Marivoet, 1990). Reviews are offered in Helton, Johnson, Sallaberry, and Storlie (2006); Storlie and Helton (2008); and Storlie, Swiler, Helton, and Sallaberry (2009). ...

Quantitative models support investigators in several risk analysis applications. The calculation of sensitivity measures is an integral part of this analysis. However, it becomes a computationally challenging task, especially when the number of model inputs is large and the model output is spread over orders of magnitude. We introduce and test a new method for the estimation of global sensitivity measures. The new method relies on the intuition of exploiting the empirical cumulative distribution function of the simulator output. This choice allows the estimators of global sensitivity measures to be based on numbers between 0 and 1, thus fighting the curse of sparsity. For density-based sensitivity measures, we devise an approach based on moving averages that bypasses kernel-density estimation. We compare the new method to approaches for calculating popular risk analysis global sensitivity measures as well as to approaches for computing dependence measures gathering increasing interest in the machine learning and statistics literature (the Hilbert-Schmidt independence criterion and distance covariance). The comparison also involves the number of operations needed to obtain the estimates, an aspect often neglected in global sensitivity studies. We let the estimators undergo several tests, first with the wing-weight test case, then with a computationally challenging code with up to k = 30,000 inputs, and finally with the traditional Level E benchmark code.
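
The core trick — replacing each raw output by its empirical CDF value — can be sketched in a few lines (illustrative values; tie handling omitted):

```python
def ecdf_transform(y):
    """Map each output y_j to its empirical CDF value rank_j / n in (0, 1],
    so downstream estimators work on bounded numbers even when the raw
    outputs are spread over orders of magnitude."""
    n = len(y)
    order = sorted(range(n), key=lambda i: y[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / n
    return u

print(ecdf_transform([1e-9, 3.0, 5e6, 42.0]))  # [0.25, 0.5, 1.0, 0.75]
```

Whatever the spread of the raw outputs, the transformed values are uniformly spaced on (0, 1], which is what keeps the estimators well conditioned.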

... Compared with local SA, global SA [9] can measure the effect of the input variables on the model output over their entire distribution ranges and provide the interaction effect among different input variables. In the literature, many global SA techniques, such as the non-parametric techniques [10], screening approaches [11], Sobol's variance-based (ANOVA) methods [12,13] and moment-independent methods [14,15], can be found, among which the variance-based method has gained the most attention. Variance is an important component of reliability analysis, but is insufficient on its own for the analysis of structural reliability; see, e.g., [16]. ...

Although more and more reliability-oriented sensitivity analysis (ROSA) techniques are now available, review and comparison articles on ROSA are absent. In civil engineering, many of the latest indices have never been used to analyse structural reliability for very small failure probabilities. This article aims to analyse and compare different sensitivity analysis (SA) techniques and discusses their strengths and weaknesses. For this purpose, eight selected sensitivity indices are first described and then applied in two different test cases. Four ROSA-type indices are directly oriented on the failure probability or the reliability index β, and four other indices (of a different type) are oriented on the output of the limit state function. The case study and results correspond to cases under common engineering assumptions, where only two independent input variables with Gaussian distributions, the load action and the resistance, are applied in the ultimate limit state. The last section of the article is dedicated to the analysis of the different results. Large differences between first-order sensitivity indices and very strong interaction effects obtained from ROSA are observed for very low values of the failure probability. The obtained numerical results show that ROSA methods lack a common platform that clearly interprets the relationship of the indices to their information value. This paper can help orient the selection of which sensitivity measure to use.
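
For the common engineering setting described above — independent Gaussian resistance R and load effect E with limit state g = R − E — the reliability index and failure probability have closed forms, which makes the ROSA quantities easy to reproduce. A minimal sketch with hypothetical parameter values:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def beta_and_pf(mu_r, sd_r, mu_e, sd_e):
    """Reliability index beta and failure probability Pf = Phi(-beta)
    for the Gaussian limit state g = R - E."""
    beta = (mu_r - mu_e) / sqrt(sd_r ** 2 + sd_e ** 2)
    return beta, norm_cdf(-beta)

beta, pf = beta_and_pf(mu_r=5.0, sd_r=0.5, mu_e=3.0, sd_e=0.4)  # hypothetical values

# a ROSA-style local measure: how Pf reacts to a small shift of the mean resistance
h = 1e-6
dpf_dmur = (beta_and_pf(5.0 + h, 0.5, 3.0, 0.4)[1] - pf) / h  # negative: more resistance, less risk
```

Sensitivities computed on Pf and sensitivities computed on the output g of the limit state function need not agree, which is precisely the distinction the abstract draws between the two families of indices.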

... A non-parametric statistical test has been considered in this study to measure the statistical significance of the proposed features and to perform feature selection [59]. The CP signal features with p-values greater than 0.001 are discarded, and the remaining features are retained after the non-parametric statistical test based feature selection [60]. From the statistical test results, it is also observed that, out of the 34 features evaluated from the IMFs of the CP signal, 26 CP signal features are significant for the discrimination of the normal and apnea classes. ...

Sleep apnea is a sleep-related pathology in which the breathing or respiratory activity of an individual is obstructed, resulting in variations in the cardio-pulmonary (CP) activity. Monitoring of both cardiac (heart rate, HR) and pulmonary (respiration rate, RR) activities is important for the automated detection of this ailment. In this paper, we propose a novel automated approach for sleep apnea detection using the bivariate CP signal. The bivariate CP signal is formulated using both HR and RR signals extracted from the electrocardiogram (ECG) signal. The approach consists of three stages. First, the bivariate CP signal is decomposed into intrinsic mode functions (IMFs) and residuals for both the HR and RR channels using bivariate fast and adaptive empirical mode decomposition (FAEMD). Second, features are extracted using the time-domain, spectral, and time-frequency domain analyses of the IMFs of the CP signal. The time-frequency domain features are computed from the cross time-frequency matrices of the IMFs of the CP signal; the cross time-frequency matrix of each IMF is evaluated using the Stockwell (S-)transform. Third, the support vector machine (SVM) and random forest (RF) classifiers are used for the automated detection of sleep apnea using the features from the bivariate CP signal. Our proposed approach has demonstrated an average sensitivity and specificity of 82.27% and 78.67%, respectively, for sleep apnea detection using a 10-fold cross-validation method. The approach has yielded an average sensitivity and specificity of 73.19% and 73.13%, respectively, for subject-specific cross-validation. The performance of the approach has been compared with other CPC features used for the detection of sleep apnea.

... To achieve this purpose, global sensitivity analysis (Saltelli 2002a; Borgonovo and Plischke 2016; Zhou et al. 2021) provides a feasible way, as it can measure the contribution of input uncertainties to output quantities of interest over the entire distribution range of the input variables. At present, global sensitivity analysis for univariate output has been widely studied and many methods have been developed, such as non-parametric methods (Saltelli and Marivoet 1990; Mara et al. 2015), moment-independent methods (Borgonovo 2007; Li et al. 2016a), and variance-based methods (Saltelli et al. 2010; Alexanderian et al. 2020). However, research on global sensitivity analysis for multivariate output is still progressing. ...

Global sensitivity analysis is of great significance for risk assessment of structural systems. In order to efficiently perform sensitivity analysis for systems with multivariate output, this paper adopts the Kriging-based analytical (KBA) technique to estimate multivariate sensitivity indices (MSI). Two MSI are studied, namely MSI based on principal component analysis (MSI-PCA) and MSI based on covariance decomposition (MSI-CD). For MSI-PCA, Kriging models of the inputs and each retained output principal component (PC) are first established, and the KBA technique is then used to derive the sensitivities associated with each retained PC and the generalized MSI-PCA. For MSI-CD, a Kriging model is constructed to map the input variables to the output variable at each time instant, from which the subset variances and the corresponding MSI-CD are derived by the KBA technique. In addition, to avoid constructing a Kriging model at each time instant when calculating MSI-CD, a new double-loop Kriging (D-Kriging) method is developed to further improve efficiency. The accuracy and efficiency of the KBA and D-Kriging methods for MSI estimation are tested and discussed on four examples in Sect. 4.

... Compared to local SA, global SA can measure the averaged contribution of the input variables to the uncertainty of the model output over their entire uncertainty ranges [8][9][10][11]. For this reason, many different kinds of global SA methods have been proposed to analyze the effects of the input variables on different statistical characteristics of the model output, such as non-parametric techniques [12,13], variance-based methods [14,15], distribution-based methods [16][17][18] and failure probability-based methods [19]. In this paper, we focus on global reliability SA. ...

In order to efficiently assess the influence of input variables on the failure of structural systems, an improved global reliability sensitivity analysis (SA) method is proposed in this paper. The new method is based on the state dependent parameter (SDP) method and efficient sampling techniques. In the new method, the efficient sampling techniques are first used to generate samples that are more efficient for reliability analysis, and the SDP method is then employed to estimate the global reliability sensitivity index using the same set of samples as the reliability analysis. Two efficient sampling methods, namely importance sampling (IS) and truncated importance sampling (TIS), are employed, and strategies for combining these methods with the SDP method for global reliability SA are discussed. Compared with the existing SDP method, the new method is more efficient for global reliability SA of structural systems. Three examples are used to demonstrate the efficiency and precision of the new method.

... The sensitivity metric is the regression/correlation coefficient between the input parameters and the output after Monte Carlo sampling (Iman and Helton, 1988; Saltelli and Marivoet, 1990). Regional SA (RSA) is a statistics-based GSA method in which a binary split of the input parameters from a Monte Carlo sample is determined by whether the output corresponding to an input sample exhibits the required behavior. ...
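Computing this correlation-based sensitivity metric after a Monte Carlo run is straightforward. A minimal sketch on an invented test model (the model, coefficients, and sample sizes are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x1 = rng.uniform(-1, 1, n)   # strongly influential input
x2 = rng.uniform(-1, 1, n)   # weakly influential input
x3 = rng.uniform(-1, 1, n)   # non-influential input
y = 4.0 * x1 + 0.5 * x2 + rng.normal(0, 0.1, n)  # hypothetical model output

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    # rank-transform, then Pearson on the ranks: captures monotone
    # non-linear relations that the plain Pearson coefficient misses
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(a), rank(b))

# |correlation| with the output as the sensitivity measure for each input
sens = {name: abs(pearson(x, y)) for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]}
```

The absolute correlations reproduce the intuitive ranking x1 > x2 > x3; the rank (Spearman) variant is the usual refinement when the input-output relation is monotone but non-linear.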

Uncertainty and sensitivity analysis (UA/SA) aid in assessing whether model complexity is warranted and under what conditions. To support these analyses a variety of software tools have been developed to provide UA/SA methods and approaches in a more accessible manner. This paper applies a hybrid bibliometric approach using 11 625 publications sourced from the Web of Science database to identify software packages for UA/SA used within the environmental sciences and to synthesize evidence of general research trends and directions. Use of local sensitivity approaches was determined to be prevalent, although adoption of global sensitivity analysis approaches is increasing. We find that interest in uncertainty management is also increasing, particularly in improving the reliability and effectiveness of UA/SA. Although available software is typically open-source and freely available, uptake of software tools is apparently slow or their use is otherwise under-reported. Longevity is also an issue, with many of the identified software appearing to be unmaintained. Improving the general usability and accessibility of UA/SA tools may help to increase software longevity and the awareness and adoption of purpose-appropriate methods. Usability should be improved so as to lower the "cost of adoption" of incorporating the software in the modelling workflow. An overview of available software is provided to aid modelers in choosing an appropriate software tool for their purposes. Code and representative data used for this analysis can be found at https://github.com/frog7/uasa-trends (10.5281/zenodo.3406946).

... GSA aims at measuring the contribution of input uncertainty to the model output uncertainty by exploring the whole distribution ranges of the model inputs. Many GSA techniques have become available over the past few decades, such as non-parametric techniques [3][4][5], variance-based GSA [6][7][8][9], and new indices known as moment-independent importance measures [10][11][12]. Corresponding computational methods [13][14][15][16] have also been presented. ...

In risk and reliability assessment, failure probability-based global sensitivity analysis (GSA) and failure probability-based regional sensitivity analysis (RSA) have attracted much interest. In this article, we deduce the relationship between the failure probability-based GSA importance measure and copulas, and point out that the failure probability-based GSA importance measure can be interpreted as a dependence measure between the failure probability and the input variables from the copula viewpoint. To calculate the importance measure, the least square fitting copula (LSFC) method is subsequently proposed. The method decouples the double-loop estimation of the conditional failure probability. Additionally, to analyze and identify the effects of different regions of the input variables on the failure probability, an RSA importance measure is proposed, and its properties are investigated and proved. Finally, an engineering example is employed to demonstrate and validate the effectiveness of the LSFC method and the proposed RSA importance measure.

... In order to synthesize the results of the different methods, a rank transformation is carried out and the Savage score is employed in place of the rank [17]. The Savage score is calculated as follows: ...
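The excerpt truncates before the formula. The standard Savage score assigned to the parameter ranked i out of N is S_i = sum over j from i to N of 1/j; assuming that standard definition is the one intended in [17], it can be computed as:

```python
def savage_scores(n):
    """Savage score for ranks 1..n: S_i = sum_{j=i}^{n} 1/j.

    Emphasizes the top-ranked parameters, while the scores still sum
    to n, which makes averaging scores across different sensitivity
    analysis methods meaningful.
    """
    scores = []
    tail = 0.0
    for j in range(n, 0, -1):      # accumulate the tail sum from the bottom rank up
        tail += 1.0 / j
        scores.append(tail)
    return scores[::-1]            # scores[0] belongs to rank 1 (most influential)

s = savage_scores(4)
# rank 1 of 4 gets 1 + 1/2 + 1/3 + 1/4; rank 4 gets only 1/4
```

Replacing raw ranks with Savage scores before averaging gives extra weight to agreement among methods about the most influential parameters rather than the unimportant ones.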

The best estimate plus uncertainty (BEPU) analysis is recommended by the IAEA for safety analysis of nuclear power plants. As the large break loss of coolant accident (LBLOCA) is one of the accidents of greatest concern, the application of BEPU to the analysis of LBLOCA is a research focus worldwide. This paper proposes a method for uncertainty quantification of LBLOCA based on the CSAU framework: the RELAP5 code is adopted as the analysis tool, peak cladding temperature (PCT) is selected as the output of interest, and input parameters as well as constitutive models with substantial influence on PCT are identified based on several phenomena identification and ranking tables (PIRT) established for LBLOCA. Uncertainties of the selected models are quantified through a non-parametric curve estimation method and the CIRCÉ method. Wilks' formula is adopted to determine the number of code runs for uncertainty propagation, and uncertainty analysis is then performed so that the distribution as well as the 95% confidence interval of PCT can be obtained. Several sensitivity analysis methods are utilized to rank the effect of the parameters on PCT after the uncertainty analysis, and the results of the different methods are synthesized using the Savage score. Finally, an uncertainty analysis based on the sensitivity analysis is performed to verify the accuracy of the sensitivity analysis. The method was applied to the analysis of the LOFT LP-02-6 experiment. Results show that the calculated values conform to a normal distribution, envelop the experimental values well, and are all smaller than the conservative calculation. The sensitivity analysis proves to be accurate, as the results of the uncertainty analysis before and after the sensitivity analysis are quite similar.
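For the Wilks step, the commonly used first-order, one-sided form of the formula picks the smallest N such that 1 - gamma^N >= beta, where gamma is the desired coverage and beta the confidence. A minimal sketch, assuming this standard form rather than the exact variant used in the paper:

```python
import math

def wilks_runs(coverage, confidence):
    """Smallest N with 1 - coverage**N >= confidence (first-order, one-sided).

    This is the classic Wilks formula used in BEPU analyses to choose
    the number of code runs so that the largest of N sampled PCT values
    bounds the 'coverage' quantile with the stated confidence.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

n_95_95 = wilks_runs(0.95, 0.95)   # the well-known 59 runs for a 95/95 statement
```

Higher-order variants (keeping the k-th largest value instead of the maximum) require more runs but give a less conservative bound.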

... Sensitivity analysis based on non-parametric statistical methods was proposed by Saltelli and Marivoet (1990), and is mature and effective for linear systems. In this method, a multiple linear regression model is established from the data, and the sensitivity coefficient of each parameter is obtained using the following formula (Cai et al., 2008). ...
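The formula itself is truncated in the excerpt and is not reproduced here. A common choice for the sensitivity coefficient in regression-based SA is the standardized regression coefficient (SRC), i.e. the least-squares coefficient rescaled by the input and output standard deviations; the sketch below illustrates that choice on an invented linear model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(0, [1.0, 2.0, 0.5], (n, 3))                  # inputs with different spreads
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.5, n)   # hypothetical model

# ordinary least-squares fit: y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(n), X])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# standardized regression coefficients: SRC_i = b_i * std(x_i) / std(y)
src = b[1:] * X.std(axis=0) / y.std()
# for a near-linear model, sum(SRC_i**2) is close to the R^2 of the fit
```

Note that x2, despite its smaller raw coefficient, gets a sizeable SRC because of its larger spread: the standardization is what makes the coefficients comparable across inputs.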

Parameter matching for the power system of a micro-electric vehicle is carried out based on a prototype, and the performance of the micro-electric vehicle is verified in the ADVISOR software. As an example, the mathematical model and calculation method are established for the sensitivity of the maximum-acceleration evaluation indicator to the complete-vehicle curb mass. The maximum acceleration and SOC are selected as the evaluation indexes of micro-electric vehicle performance, and typical vehicle structural parameters are selected as the influencing factors based on extensive simulation comparisons and comprehensive analysis. A non-parametric statistical method is proposed to analyse the sensitivity of the typical structural factors affecting performance, drawing on one-at-a-time sensitivity measures. Analysis of the results shows that the windward area, the air resistance coefficient and the centroid height are the most important factors affecting the two evaluation indexes. These conclusions are significant for the design and application of micro-electric vehicles.

... Probabilistic (or global) sensitivity methods are the gold standard for studying the simulator input-output behavior under uncertainty and have been increasingly studied since the early 1990's. Works such as Saltelli and Marivoet (1990) and Helton (1993) are among the first to propose dependence measures such as Pearson's, Spearman's rank and the partial rank correlation coefficients to study the strength of the dependence between uncertain simulator inputs and the output. Campolongo and Saltelli (1997) observe that the linearity assumption underlying these dependence measures leads to a shortcoming when the simulator input-output mapping is non-linear and non-additive, and advocate the use of variance-based sensitivity measures. ...

Copula theory is concerned with defining dependence structures given appropriate marginal distributions. Probabilistic sensitivity analysis is concerned with quantifying the strength of the dependence between the output of a simulator and the uncertain simulator inputs. In this work, we investigate the connection between these two families of methods. We define four classes of sensitivity measures based on the distance between the empirical copula and the product copula. We discuss the new classes in the light of transformation invariance and Rényi's postulate D for dependence measures. The connection is constructive: the new classes extend the current definition of sensitivity measures, and one gains an understanding of which sensitivity measures in use are, in fact, copula-based. A set of new visualization tools can also be obtained. These tools ease the communication of results to the modeler and provide insights not only on statistical dependence but also on the partial behavior of the output as a function of the inputs. Application to the benchmark simulator for sensitivity analysis concludes the work.

... In previous studies by Borgonovo et al. [27], a number of uncertainty importance measures proposed in the past are investigated and compared comprehensively to shed some light on their characteristics and applicability for modelers, decision makers and analysts. The importance measures are classified into four categories, i.e. non-parametric techniques [28], variance-based sensitivity indices [29], moment-independent importance measures [30] and derivative-based global sensitivity measures [31]. However, all of these importance measures consider the different sources of uncertainty only under the mathematical framework of probability theory, which is inadequate for models with epistemic uncertainty due to a lack of knowledge or data. ...

In this paper, a novel strategy for the importance analysis of structural performance models under both aleatory and epistemic uncertainties is presented. Random variables and fuzzy numbers are adopted for the representation of the two types of uncertainty, respectively. Based on the statistical moments of the model outputs, two categories of importance measures are proposed under a hybrid framework composed of probability theory and fuzzy logic. The first category is for the input factors with aleatory uncertainty, while the other is for the input factors with epistemic uncertainty. In order to depict the credibility of the importance measures of the random factors, a stability indicator is further introduced. Under the hybrid framework, the statistical moments of the performance outputs are fuzzy membership functions instead of deterministic values. Therefore, the importance measures are defined based on the area differences between the conditional and unconditional membership functions. For the estimation of the proposed importance measures and the stability indicator, a uniform discretization of the fuzzy membership function is first performed to combine the fuzzy factors with the random samples. Then, Monte Carlo simulation (MCS) and Gorman & Seo's three-point estimates (GSP) are employed as uncertainty propagation methods to compute the statistical moments of the performance outputs. Finally, the proposed importance measures and stability indicator are studied comparatively through two numerical examples using MCS and GSP, demonstrating their benefits in stability, applicability and efficiency.

... Second, given that previous forecasting studies on energy demand have not conducted sensitivity analysis, this study uses the partial rank correlation coefficient (PRCC) to conduct sensitivity analysis to determine the input variable that is most influential in contributing to energy demand for the respective countries. Khoshroo et al. (2018), Marino et al. (2008) and Saltelli and Marivoet (1990) argue that PRCC is the most reliable and efficient method for sensitivity analysis. Third, to provide an accurate forecasting model, this study used high-frequency data for modeling. ...
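The PRCC used in these studies can be computed by rank-transforming all variables and then correlating the residuals that remain after regressing the other inputs out of both the input of interest and the output. A minimal numpy sketch on a hypothetical monotone model (data and model are invented for illustration):

```python
import numpy as np

def ranks(v):
    # rank transform (0..n-1); double argsort is fine when there are no ties
    return np.argsort(np.argsort(v)).astype(float)

def prcc(X, y, i):
    """Partial rank correlation coefficient of input column i with output y.

    Rank-transform everything, regress the other inputs out of both
    x_i and y, then correlate the residuals.
    """
    R = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    ry = ranks(y)
    others = np.delete(R, i, axis=1)
    A = np.column_stack([np.ones(len(ry)), others])
    res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

rng = np.random.default_rng(3)
n = 1000
X = rng.uniform(0, 1, (n, 3))
y = np.exp(X[:, 0]) + 0.2 * X[:, 1] + rng.normal(0, 0.05, n)  # monotone, non-linear
p0, p2 = prcc(X, y, 0), prcc(X, y, 2)
# p0 is near 1 despite the non-linearity (PRCC only needs monotonicity);
# p2 hovers near 0 for the non-influential input
```

This is exactly why PRCC is preferred over the plain Pearson coefficient in these forecasting studies: it stays informative for monotone but non-linear input-output relations.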

Purpose
This paper aims to use artificial neural networks to develop models for forecasting energy demand for Australia, China, France, India and the USA.
Design/methodology/approach
The study used quarterly data spanning the period 1980Q1-2015Q4 to develop and validate the models. Eight input parameters were used for modeling the demand for energy. Hyperparameter optimization was performed to determine the ideal parameters for configuring each country's model. To ensure stable forecasts, a repeated evaluation approach was used. After several iterations, the optimal models for each country were selected based on predefined criteria. A multi-layer perceptron with a back-propagation algorithm was used for building each model.
Findings
The results suggest that the validated models have developed high generalizing capabilities with insignificant forecasting deviations. The models for Australia, China, France, India and the USA attained high coefficients of determination of 0.981, 0.9837, 0.9425, 0.9137 and 0.9756, respectively. The results from the partial rank correlation coefficient further reveal that economic growth has the highest sensitivity weight on energy demand in Australia, France and the USA, while industrialization has the highest sensitivity weight on energy demand in China. Trade openness has the highest sensitivity weight on energy demand in India.
Originality/value
This study incorporates other variables such as financial development, foreign direct investment, trade openness, industrialization and urbanization, which are found to have an important effect on energy demand in the model to prevent underestimation of the actual energy demand. Sensitivity analysis is conducted to determine the most influential variables. The study further deploys the models for hands-on predictions of energy demand.

... Nonparametric statistics are not based on parameterized families of probability distributions [86]. Some examples of the typically used parameters are mean, median, mode, variance, range, and standard deviation. ...

High-throughput DNA sequencing (HTS) has changed our understanding of the microbial composition present in a wide range of environments. Applying HTS methods to air samples from different environments allows the identification and quantification (relative abundance) of the microorganisms present and gives a better understanding of human exposure to indoor and outdoor bioaerosols. To make full use of the avalanche of information made available by these sequences, repeated measurements must be taken, community composition described, error estimates made, correlations of microbiota with covariates (variables) must be examined, and increasingly sophisticated statistical tests must be conducted, all by using bioinformatics tools. Knowing which analysis to conduct and which tools to apply remains confusing for bioaerosol scientists, as a litany of tools and data resources are now available for characterizing microbial communities. The goal of this review paper is to offer a guided tour through the bioinformatics tools that are useful in studying the microbial ecology of bioaerosols. This work explains microbial ecology features like alpha and beta diversity, multivariate analyses, differential abundances, taxonomic analyses, visualization tools and statistical tests using bioinformatics tools for bioaerosol scientists new to the field. It illustrates and promotes the use of selected bioinformatic tools in the study of bioaerosols and serves as a good source for learning the "dos and don'ts" involved in conducting a precise microbial ecology study.

... The first sensitivity indices to be suggested were defined for linear regression [SM90, Hel93]. Subsequently, several researchers [Sob93, IH90, Wag95] defined similar sensitivity indices almost simultaneously but in different fields. ...

Sensitivity analysis is a powerful tool for studying mathematical models and computer codes. It reveals the input variables with the most impact on the output variable by assigning them values that we call "sensitivity indices". In this setting, the Shapley effects, recently defined by Owen, make it possible to handle dependent input variables. However, one can only estimate these indices in two particular cases: when the distribution of the input vector is known, or when the inputs are Gaussian and the model is linear. This thesis can be divided into two parts. The first aims to extend the estimation of the Shapley effects to the case where only a sample of the inputs is available and their distribution is unknown. The second part focuses on the linear Gaussian framework. The high-dimensional problem is emphasized, and solutions are suggested for the case of independent groups of variables. Finally, it is shown how the values of the Shapley effects in the linear Gaussian framework can serve as estimates of the Shapley effects in more general settings.

... The underlying idea of correlation as a sensitivity analysis method is to derive information about output sensitivity from statistical analysis of the input/output dataset (Saltelli and Marivoet 1990; Storlie et al. 2009; Pianosi et al. 2016). This method uses the correlation coefficient between the input factor x_i (different forcings in this study) and the output factor y (CRU data in this case) as a sensitivity measure. ...

Previous studies revealed that many areas in Africa experienced an apparent warming trend in surface temperature in the last century. However, the contributing factors have not been investigated in detail. In the present study, the natural and anthropogenic forcings accountable for surface temperature variability and change are examined from the historical Coupled Model Intercomparison Project phase six (CMIP6) simulations and future projections under three Shared Socioeconomic Pathways (SSP1-2.6, SSP2-4.5 and SSP5-8.5), which represent low, moderate and high emission scenarios, respectively. Results indicate that from 1901 to 2014, surface temperature increased by ~0.07 °C/decade over both the Eastern Africa (EAF) and Sahara (SAH) regions and by 0.06 °C/decade in the Southern Africa (SAF) and Western Africa (WAF) regions, following the global warming trend. It is found that Greenhouse Gases (GHG) and Land-Use (LU) change are the leading contributors to the observed warming in historical surface temperature over Africa. Anthropogenic Aerosols (AA) show a cooling effect on surface temperature. Under both the SSP1-2.6 and SSP2-4.5 emission scenarios, the surface temperature increases until 2059 and declines afterwards. On the other hand, under SSP5-8.5, the surface temperature is expected to increase throughout the 21st century. The impacts of warming will be felt hardest in the SAH and SAF regions compared with other areas. The analysis of rare hot and cold events (2080–2099), based on the 20-year annual highest and lowest daily surface temperature relative to the recent past (1995–2014) under SSP2-4.5, indicates that both events are likely to increase significantly in the late 21st century. Nevertheless, proper management of land use and control of anthropogenic factors (GHGs and AA) may lead to a substantial reduction in further warming over Africa.

... Owing to their close connection with Monte Carlo simulation, methods based on correlation and regression analysis were among the first techniques to be developed and used for SA [107][108][109]. In general terms, correlation and regression analysis aim at retrieving information about output sensitivity through statistical post-processing of a Monte Carlo simulation. ...

Power systems are increasingly affected by various sources of uncertainty at all levels. The investigation of their effects thus becomes a critical challenge for system design and operation. Sensitivity Analysis (SA) can be instrumental for understanding the origins of system uncertainty, hence allowing for a robust and informed decision-making process under uncertainty. The value of SA as a support tool for model-based inference is acknowledged; however, its potential is not yet fully realized within the power system community. This is due to improper use of long-established SA practices, which sometimes prevents an in-depth model sensitivity investigation, as well as to partial communication between the SA community and the final users, ultimately hindering non-specialists' awareness of the existence of effective strategies to tackle their own research questions. This paper aims at bridging the gap between SA and power systems via a threefold contribution: (i) a bibliometric study of the state of the art in SA to identify common practices in the power system modeling community; (ii) a getting-started overview of the most widespread SA methods to support the SA user in selecting the most suitable SA method for a given power system application; (iii) a user-oriented general workflow illustrating the implementation of SA best practices via a simple technical example.

... For calculating sensitivity indices of flexoelectric nanostructures, there are numerous global sensitivity techniques, such as Fourier amplitude sensitivity test techniques (Cosmo et al. 2017), Morris techniques (Pianosi et al. 2016), multiple regression techniques (Mustafa et al. 2017), non-parametric techniques (Saltelli and Marivoet 1990), moment-independent techniques (Greegar and Manohar 2016; Luyi et al. 2012) and variance-based techniques (Sobol 2001; Zhang et al. 2015). Sobol's sensitivity indices for the inherent uncertainties of flexoelectric nanostructures can quantify the contribution of the uncertainty of the input property variables to the variance or distribution parameters of the model outputs (Sobol 2001; Saltelli 2002; Yun et al. 2017). ...
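Of the techniques listed, the variance-based first-order Sobol index has a compact Monte Carlo estimator (the pick-freeze form recommended by Saltelli et al. 2010). A sketch on a toy linear model whose analytic indices are known (the model and sample size are invented for illustration):

```python
import numpy as np

def model(x):
    # hypothetical test model; for U(-1, 1) inputs the analytic
    # first-order indices are S1 = 0.8 and S2 = 0.2
    return x[:, 0] + 0.5 * x[:, 1]

rng = np.random.default_rng(4)
N, d = 100_000, 2
A = rng.uniform(-1, 1, (N, d))     # two independent input sample matrices
B = rng.uniform(-1, 1, (N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # "freeze" all inputs except x_i
    yABi = model(ABi)
    # Saltelli (2010) first-order estimator: V_i = mean(yB * (yABi - yA))
    S.append(np.mean(yB * (yABi - yA)) / var_y)
```

The cost is N*(d+2) model evaluations in general, which is why surrogate models (PCE, Kriging) are typically substituted for the expensive simulator, as in the abstract below.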

Recent research shows that flexoelectricity may prominently affect the electromechanical coupling responses of elastic dielectrics at the nanoscale. From the perspective of device design, it is urgent to know how the input parameters affect the electromechanical coupling behaviors of flexoelectric nanostructures. In this work, global sensitivity analysis is applied to elastic dielectric nanoplates to decompose the contribution of each of the parameters. Meanwhile, the existing hierarchical regression is found to be unsuitable for simultaneously handling the multicollinearity and high-dimensionality problems when global sensitivity analysis of flexoelectric nanostructures is obtained by combining it with polynomial chaos expansion (PCE). In order to overcome these issues, the following strategy is proposed: 1) First, an adaptive sparse scheme is employed to build the sparse PCE; the number of terms of the PCE is decreased by choosing the polynomials most relevant to a given model output. 2) Then, the hierarchical regression is carried out iteratively by combining it with the adaptive sparse scheme. 3) Finally, the Sobol sensitivity indices are calculated using these procedures. Further, the Sobol sensitivity indices reveal that the thickness is the decisive input parameter that most strongly affects the buckling and vibration responses of the flexoelectric nanoplate, and the flexoelectric coefficient is the next key parameter affecting these responses. Our findings also demonstrate that the influence of the flexoelectric coefficient is much stronger than that of the piezoelectric coefficient, revealing the dominance of the flexoelectric effect in ultra-thin piezoelectric nanostructures.

... This has been applied to a cardiovascular model in [24] utilizing a computational approach, for instance automatic differentiation, to derive the sensitivities of the model parameters describing its dynamic response to the sitting-to-standing transition. While PRCC is an efficient sampling-based index method, eFAST is a reliable variance-based approach [25,26]. Importantly, PRCCs provide a measure of monotonicity when the linear effects of the other variables are removed, and eFAST measures the fractional variance accounted for by individual variables and groups of variables [10]. ...

This work examines a cardiovascular-respiratory system model capable of describing its dynamic response to a constant workload. The heart rate and alveolar ventilations are considered fundamental controls of the system which represent the baroreceptor and chemoreceptor loops, respectively. Multi-method sensitivity analysis is performed to quantify how variations in the parameters influence the model output. Sensitivity approaches include the traditional sensitivity analysis, partial rank correlation coefficient and extended Fourier amplitude sensitivity test. A set of parameters which can be reliably estimated from given measurements is determined from subset selection. Hemodynamic and respiratory data acquired from ergometric exercise are used for model identification and validation obtaining subject-specific parameter estimates.

... A non-parametric statistical analysis was used since the probability distributions of the beam depths calculated using different design methods might not follow a normal probability distribution [70][71][72]. A five-number summary of the datasets was used, which comprises the lower limit (LL), the first quartile (Q1), the median (M), the third quartile (Q3) and the upper limit (UL). ...
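The five-number summary described here is a one-liner with numpy percentiles. A minimal sketch; the beam-depth values are invented, and taking the sample minimum and maximum as the lower and upper limits is an assumption (some authors use Tukey fences instead):

```python
import numpy as np

def five_number_summary(values):
    """Lower limit, Q1, median, Q3, upper limit of a dataset.

    The 'limits' are taken here as the sample min and max; Tukey fences
    (Q1 - 1.5*IQR, Q3 + 1.5*IQR) are a common alternative for flagging
    outliers.
    """
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return values.min(), q1, med, q3, values.max()

# invented beam depths (mm) calculated by different hypothetical design methods
beam_depths = np.array([300, 350, 420, 500, 560, 640, 815], dtype=float)
ll, q1, m, q3, ul = five_number_summary(beam_depths)
```

Because the summary uses order statistics only, it stays meaningful even when the beam-depth distribution is skewed or heavy-tailed, which is precisely the motivation given in the excerpt.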

Shrink-swell movements of soils cause angular distortion in substructures, leading to significant damage to lightweight structures. The built environment of lightweight structures, particularly single-detached dwellings, may suffer compromised structural performance and unforeseen maintenance that may expedite deterioration of the entire building. Because of the importance of damage minimisation in the design phase of single-detached dwellings, this paper reviews and compares existing design methods for raft substructures on expansive soils through a parametric comparison. The comparison considered parameters related to soil properties, environmental factors and stress conditions, including substructure configuration, affecting the shrink-swell potential of expansive soils. It was observed that the PTI method calculated beam depths closest to the overall median, while the Lytton and Briaud method calculated beam depths closest to the overall third quartile with respect to all considered design methods. The WRI and BRAB methods obtained larger beam depths, specifically for scenarios with a higher plasticity index, higher liquid limit and longer span, which can be considered outliers. The AS 2870 and Walsh and Mitchell methods were in the less conservative range based on the beam depths calculated. Calculated required beam depths ranged from 300 to 815 mm, neglecting outliers, with higher dispersion of values when the active depth zone was deeper, the plasticity index and liquid limit were higher, the applied uniform load was higher and the span of the substructure was longer. This review presents the range of probable values, the variability and the degree of central tendency of the beam depths calculated by the different current design methods, which is useful for designers.

... In addition, local methods do not fully explore the input space, since they do not consider the correlation between the input parameters. Regression-based methods (Saltelli and Marivoet 1990) rely on a linear regression fit between the input and the output, where the sensitivity measures can be inferred from the regression coefficients (in standardized form). This class of methods is suitable when the response varies linearly with the input factors, which limits its applicability for non-linear problems. ...

Analyzing the variance of complex computer models is an essential practice to assess and improve that model by identifying the influential parameters that cause the output variance. Variance-based sensitivity analysis is the process of decomposing the output variance into components associated with each input parameter. In this study, we applied a new concept of variance-based sensitivity analysis inspired by the game theory proposed by Shapley. The technique is called the Shapley effect, and it investigates the contribution of each input parameter as well as its interactions with every other parameter in the system by exploring all possible permutations between them. The Shapley effect is compared to the common Sobol indices technique (first order and total effects) to investigate their performance under correlated and uncorrelated parameters. The Shapley effect demonstrated superior performance when compared to the Sobol indices for correlated input parameters. Shapley effect captured the correlation between the input parameters, expressing the variance contribution in a single index instead of two indices, and normalization of the fractional indices is preserved without over or underestimation. On the other hand, the two algorithms we selected to calculate Sobol indices under correlated inputs experienced different issues including: over/underestimating the output variance, first order effect could be larger than the total effect, possibility of negative indices, unnormalized fractional indices, and difficulty to interpret the results. However, Sobol showed satisfactory performance when the inputs are uncorre-lated as the numerical values and input ranking were in good agreement with Shapley effect. The main disadvantage of Shapley effect is its large computational cost especially for high dimensional problems where the number of possible input subsets becomes very large. 
The results of our tests showed that the thermal fission cross-section carried most of the uncertainty at BOL, and its contribution declines after fuel burnup, which is replaced by the uncertainty contribution of the fast cross-section parameters. Published by Elsevier Ltd.
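The Shapley-effect computation described above can be sketched for a hypothetical three-input linear Gaussian model, where the value function v(S) = Var(E[Y|X_S]) has a closed form; the brute-force enumeration of all subsets also illustrates why the cost grows quickly with dimension:

```python
import itertools
import math
import numpy as np

# Hypothetical linear Gaussian model Y = a^T X with X ~ N(0, Sigma); x1 and x2 correlated.
a = np.array([1.0, 1.0, 2.0])
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

def v(S):
    """Value function v(S) = Var(E[Y | X_S]); closed form for a linear Gaussian model."""
    S = sorted(S)
    if not S:
        return 0.0
    T = [i for i in range(len(a)) if i not in S]
    if not T:
        return float(a @ Sigma @ a)
    # E[Y | X_S] = c^T X_S with c = a_S + Sigma_SS^{-1} Sigma_ST a_T
    c = a[S] + np.linalg.inv(Sigma[np.ix_(S, S)]) @ Sigma[np.ix_(S, T)] @ a[T]
    return float(c @ Sigma[np.ix_(S, S)] @ c)

def shapley_effects(d):
    """Exact Shapley effects by enumerating all subsets (cost grows like d * 2^(d-1))."""
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            w = math.factorial(k) * math.factorial(d - 1 - k) / math.factorial(d)
            for S in itertools.combinations(others, k):
                phi[i] += w * (v(set(S) | {i}) - v(S))
    return phi

phi = shapley_effects(3)
# Effects are non-negative and sum exactly to Var(Y) = a^T Sigma a, even under correlation.
```

For this model the independent input x3 receives exactly its variance contribution (4.0), while the two correlated inputs share the remainder equally (1.5 each), summing to the total variance 7.0: the normalization property the abstract highlights.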

... Secondly, unlike the previous forecasting studies on emissions, this study utilises the Partial Rank Correlation Coefficient (PRCC) to conduct sensitivity analysis to determine the input variable that is most influential in contributing to carbon emissions for the respective countries. Khoshroo et al. (2018), Marino et al. (2008) and Saltelli and Marivoet (1990) argue that PRCC is the most reliable and efficient method for sensitivity analysis. Additionally, unlike previous studies, this study uses high-frequency data to provide accurate forecasting models. ...

This study applies an artificial neural network (ANN) to develop models for forecasting carbon emission intensity for Australia, Brazil, China, India, and the USA. Nine parameters that play an essential role in contributing to carbon emissions intensity were selected as input variables. The input parameters are economic growth, energy consumption, R&D, financial development, foreign direct investment, trade openness, industrialisation, and urbanisation. The study used quarterly data spanning the period 1980Q1-2015Q4 to develop, train and validate the models. To ensure the reproducibility of the results, twenty simulations were performed for each country. After numerous iterations, the optimal models for each country were selected based on predefined criteria. A 9-5-1 multi-layer perceptron with a back-propagation algorithm was sufficient for building the models, which were trained and validated. Results from the validated models show that the predicted versus actual values indicate approximately zero errors, along with high coefficients of determination (R2) of 0.80 for Australia, 0.91 for Brazil, 0.95 for China, 0.99 for India and 0.87 for the USA. The Partial Rank Correlation Coefficient (PRCC) results reveal that for Australia, R&D has the highest sensitivity weight, while for Brazil and the USA, urbanisation has the highest sensitivity weight. For China, population size has the highest sensitivity weight, while energy consumption has the highest sensitivity weight in India. The ANN models presented in this study have been validated and are reliable for predicting the growth of CO2 emission intensity for Australia, Brazil, China, India, and the USA with negligible forecasting errors. The models developed from this study could serve as tools for international organisations and environmental policymakers to forecast and help in climate change policy decision-making.
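The PRCC used in the study above can be computed in a few lines: rank-transform inputs and output, regress out the other ranked inputs, and correlate the residuals. A minimal sketch with a hypothetical two-input monotonic test model:

```python
import numpy as np
from scipy import stats

def prcc(X, y):
    """Partial rank correlation of each input column with y, controlling for the others."""
    n, d = X.shape
    R = np.column_stack([stats.rankdata(X[:, j]) for j in range(d)])
    ry = stats.rankdata(y)
    out = np.empty(d)
    for j in range(d):
        Z = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals of the j-th input ranks and output ranks after removing the rest
        res_x = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 2))
y = np.exp(X[:, 0]) - np.exp(X[:, 1])   # monotonic but non-linear test model
sens = prcc(X, y)                        # strongly positive for x1, strongly negative for x2
```

Because the correlation is computed on ranks, the non-linear (but monotonic) exponential response does not weaken the sensitivity measure the way it would for a raw partial correlation.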

... Spearman's rank correlation coefficient (SRCC) is a nonparametric statistic that measures the strength of monotonic relationships between two sets of variables (Daniel, 1978;Saltelli, 1990;Gautheir, 2001). The relationships between geothermal gradients and the coupled impacts resulting from upper and lower sedimentation rate-dependent exposure times (see Section 4.2) were compared to the multi-molecular changes recorded by MPCA scores (see Section 4.3). ...
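The SRCC mentioned in this excerpt is simply the Pearson correlation computed on ranks, which makes it invariant under strictly monotonic (even strongly non-linear) transformations; a small sketch with synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(size=200)
y = np.exp(4.0 * x)        # strictly monotonic but highly non-linear in x

# SRCC = Pearson correlation of the ranks, hence invariant to monotone transforms.
rho = np.corrcoef(stats.rankdata(x), stats.rankdata(y))[0, 1]
assert np.isclose(rho, stats.spearmanr(x, y)[0])   # matches the library estimate
# rho is exactly 1 here: exp(4x) preserves the ordering of x, so the ranks coincide.
```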

Guaymas Basin is a submarine depression in the Gulf of California marking the northern end of the East Pacific Rise mid-oceanic spreading ridge. The basin receives high input of sedimentary organic matter (SOM) from elevated productivity in the overlying surface waters and runoff from the surrounding continent. This, coupled with high sedimentation rates, produces near-uniform compositions of SOM. Various hydrothermal vent complexes occur along this margin. One of these is Cathedral Hill, a hydrothermal mound with sulfide chimneys surrounded by mats dominated by sulfide-oxidizing Beggiatoa, covering sediments stained by locally produced oil. We collected four push cores along a transect at this site (4 m × 0.21 m) extending from near the vent center to the ambient sediment just outside the microbial mat cover. Porewater temperatures near the mound center were projected to reach 155 °C by 21 cm below the sea floor (cmbsf). Within these conditions, a kinetic model based on vitrinite reflectance equivalence (%Re) predicts petroleum formation as shallow as 15–18 cmbsf, with metagenesis commencing at less than 60 cmbsf. Bulk extract data (total lipid extracts as well as polar and apolar fractions), solvent-extracted sediment TOC (herein referred to as protokerogen TOC), and molecular thermal maturation parameters support these generation estimates. In recent years, the application of chemometric techniques to comprehensive two-dimensional gas chromatographic (GC×GC) analyses has allowed comparison of thousands of unique hydrocarbons within oils. Here we reconstruct the shallow subsurface petroleum system by applying multiway principal component analysis (MPCA) and hierarchical cluster analysis (HCA) directly to GC×GC chromatograms. 
We then compare the resulting multivariate models to a systematic survey of subtracted GC×GC chromatograms, a transect heat map of sample hydrocarbon compound diversities, and profiles of various thermal maturation parameters to elucidate how these hydrocarbon matrices are attenuated by production, migration, and/or thermochemical oxidation. Sample matrices have up to 5700 unique compounds spanning a range of normal and branched alkanes, saturated and unsaturated biomarkers, substituted and unsubstituted polycyclic aromatic hydrocarbons (PAHs), perhydro-PAHs, and benzothiophenes with up to six ring-cycles (i.e., benzoperylenes, dibenzopyrenes, dibenzochrysenes, and indenopyrenes). These matrices display systematic temperature-dependent trends. Generation likely begins in 8–10 and 15–18 cmbsf where sediments are exposed to ∼110 °C vent porewater temperatures, which is shallower than predictions based on our kinetic model. Independent of these sites of generation is ubiquitous staining of the sediments from advected oil that is heavily dominated by PAHs as well as two stratigraphic bands of migrated oil that extend horizontally across the transect at 0–2 and ∼6–10 cmbsf, respectively. The MPCA models along with non-statistical validation techniques show evidence of decreasing diversities and concentrations of alkylated aromatic hydrocarbons concomitant with elevated abundances of dealkylated PAHs and/or the migration of unsubstituted PAHs from deeper basin depths as sediments become exposed to more severe hydrothermal conditions. These results indicate that even at relatively small spatial scales, the petroliferous sediments at hydrothermal vent sites can be highly complex.


Petroleum is one of the most chemically complex materials on Earth. Its origins begin in certain depositional conditions that favor the accumulation and preservation of organic matter from once living organisms. Petroleum is formed when these organic‐rich rocks are heated and the expelled fluids move into reservoirs. A petroleum system is established when key elements and processes occur within a basin allowing for petroleum to be exploited. Petroleum is mostly composed of saturated (paraffins and naphthenes) and aromatic hydrocarbons. These are the desired species most easily refined into fuels and lubricants. Petroleum also contains compounds with other elements, mostly nitrogen, sulfur, and oxygen. These heteroatomic species require additional processing, influencing the value of a crude. Although pressure continues to build to curtail the use of fossil fuels, petroleum demand is expected to remain close to current levels and there is more than enough oil reserves and resources to meet future demands. A large portion of the world's conventional reserves resides in the Middle East. However, the United States is currently the largest producer, largely from the development of unconventional resources that are produced from shales and microporous carbonates using long lateral wells and hydraulic fracturing. The fear that “peak” oil production occurred in the late twentieth century has proven to be false.

... Sensitivity analysis has been applied to many models [36][37][38][39]. The Partial Rank Correlation Coefficient (PRCC) method appears to be the most efficient and reliable among the sampling-based indices [40]. Correlation provides a measure of the strength of a linear association between an input and an output. ...

Cementitious composites with microencapsulated healing agents are appealing due to the advantages of self-healing. The polymeric shell and polymeric healing agents in microcapsules have been proven effective in self-healing, while these microcapsules decrease the effective elastic properties of cementitious composites before self-healing happens. The reduction of effective elastic properties can be evaluated by micromechanics. The substantial complicacy included in micromechanical models leads to the need of specifying a large number of parameters and inputs. Meanwhile, there are nonlinearities in input–output relationships. Hence, it is a prerequisite to know the sensitivity of the models. A micromechanical model which can evaluate the effective properties of the microcapsule-contained cementitious material is proposed. Subsequently, a quantitative global sensitivity analysis technique, the Extended Fourier Amplitude Sensitivity Test (EFAST), is applied to identify which parameters are required for knowledge improvement to achieve the desired level of confidence in the results. Sensitivity indices for first-order effects are computed. Results show the volume fraction of microcapsules is the most important factor which influences the effective properties of self-healing cementitious composites before self-healing. The influence of interfacial properties cannot be neglected. The research sheds new light on the influence of parameters on microcapsule-contained self-healing composites.
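The classic FAST scheme underlying the EFAST technique applied above explores the input space along a single space-filling search curve and reads first-order indices off the Fourier spectrum of the output. A minimal sketch for a hypothetical additive two-input model (the frequencies and harmonic count are illustrative choices, assumed interference-free):

```python
import numpy as np

# Classic-FAST sketch for first-order indices of a hypothetical additive model.
omega = np.array([11, 35])                 # one driving frequency per input
M = 4                                      # harmonics retained per frequency
N = 2 * M * omega.max() * 8 + 1            # oversampled search-curve length
s = np.pi * (2 * np.arange(1, N + 1) - N - 1) / N
X = 0.5 + np.arcsin(np.sin(np.outer(omega, s))) / np.pi   # uniform marginals on [0,1]

y = X[0] + 2 * X[1]                        # analytically S1 = 1/5, S2 = 4/5

def power(j):
    """Spectral power of y at integer frequency j along the search curve."""
    A = (y * np.cos(j * s)).mean()
    B = (y * np.sin(j * s)).mean()
    return A**2 + B**2

D_total = y.var()
# First-order index of input i: fraction of variance at omega_i and its harmonics.
S = [2 * sum(power(p * w) for p in range(1, M + 1)) / D_total for w in omega]
```

For this linear model the recovered indices come out very close to the analytic values 0.2 and 0.8, with the small deficit coming from triangle-wave harmonics above order M.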

Model applications for delivering reliable information on soil water content (θv) and soil temperature (Tsoil) specific to Andosols (Kuroboku) are still limited despite their large area coverage (0.84% of the global terrestrial surface) and great potentials for improving agricultural production. The performance of the HYDRUS-1D model was therefore evaluated by comparing the predicted θv and Tsoil with field observations gathered from the vadose zone of a volcanic ash soil located in Fuchu (western suburb of Tokyo) representative of a temperate monsoon climate. The necessary soil properties to operate the model were obtained from both field and laboratory experiments while climatic data used for the meteorological submodels were extracted from a nearby weather station. A sensitivity and uncertainty analysis leveraging a Monte Carlo method was conducted to identify the soil hydraulic parameters critical to precise and reliable θv, vapor flow, and heat flow simulations. The temporal dynamics of simulated θv and Tsoil in the vadose zone of Andosol during the 365-day period were consistent with the values monitored under field condition demonstrating overall great performance of HYDRUS-1D. Specifying soil layer-specific hydraulic parameters significantly improved the goodness of fits between predicted and measured θv compared to the default simulation in which the surface soil layer was employed for the entire depth assuming a homogeneous profile. Water vapor influence on the total water flux and Tsoil dynamics was negligible during the whole period. Although the magnitudes of sensitivity and the contributions of soil hydraulic parameters to the water flow varied with soil profile and soil water regime, the foremost proportions of uncertainties were from the parameters w2, α2 and n2. While all remaining soil hydraulic parameters significantly contributed to substantial change in the predicted θv, their overall influence was relatively small. 
The HYDRUS-1D model can be used as effective tool for predicting θv and Tsoil in the vadose zone of Andosols in temperate monsoon environments for decision supporting in agriculture and other sectors such as to optimize water, crop yield and quality. The performance of the model can be greatly increased by setting soil layer-specific hydraulic parameters and focusing on the calibration of w2, α2 and n2.

In the design of artillery external ballistics, sensitivity analysis can effectively quantify the influence of multi-source uncertain parameters on the dispersion of projectile landing points to improve the precise attack ability of artillery. However, for a complicated artillery external ballistic system containing multiple inputs and outputs, its mapping relationships are not definite under uncertainty and it is difficult to estimate a comprehensive sensitivity index due to involving the calculation of high dimensional integral. Therefore, a sensitivity analysis method based on the combination of variance and covariance decomposition with the approximate high dimensional model representation (AHDMR) is proposed to measure the influence of muzzle state parameters, projectile characteristic parameters, etc. on projectile landing points under uncertainty in this paper. First, we establish the numerical simulation model of artillery external ballistics by combing the external ballistic theory and Runge–Kutta algorithm to acquire the mapping relationships between the uncertain input parameters and the dispersion of projectile landing points and implement uncertainty analysis under different uncertainty levels (UL) and distributions. Then, with the use of a set of orthogonal polynomials for uniform and Gaussian distribution, respectively, the high dimensional model representation of the mapping relationship is approximately expressed and the compressive sensitivity indices can be effectively estimated based on the Monte Carlo simulation. Moreover, the comparison results of two numerical examples indicate the proposed sensitivity analysis method is accurate and practical. Finally, through the method, the importance rankings of multi-uncertain parameters on projectile landing points for two distributions are effectively quantified under the UL = [0.01, 0.02, 0.03, 0.04, 0.05].

To write the differential state equations of the investigated actuating devices in the required Cauchy normal form, the traditional theory of electrical circuits must be abandoned in favour of the theory of electromagnetic circuits. In this work, both known and newly developed mathematical models for the analysis of special states are employed. To solve this problem it is first necessary to construct a mathematical model of the actuating device. This model is based on the construction of a monodromy matrix and the simultaneous simulation of transient and steady-state processes. To make the numerical analysis more convenient, the differential equations of the electromechanical state models are written in Cauchy's normal form. The algorithm studies transient and stationary processes by decomposing them into constituent parts using the mathematical apparatus of the classical theory of nonlinear differential equations, which are calculated in a relatively simple way. The transient process is obtained by integrating the state equations for given initial conditions. The steady-state process is obtained from initial conditions that exclude the transient response; such conditions are found by the iterative Newton method. The proposed method of auxiliary variation equations bypasses the differentiation of matrix coefficients with respect to the argument, which makes the parametric-sensitivity algorithm applicable. The method of analysis can be extended to more complex nonlinear systems, such as electric motors.

This paper presents a new dependence measure for importance analysis based on multivariate probability integral transformation (MPIT), which can assess the effect of an individual input, or a group of inputs on the whole uncertainty of model output. The mathematical properties of the new measure are derived and discussed. The nonparametric method for estimating the new measure is presented. The effectiveness of the new measure is compared with the well-known delta and extended delta indices, respectively, through a linear example, a risk assessment model and the Level E model. Results show that the proposed index can produce the same importance rankings as the delta and extended delta indices in these three examples. Yet the computation of the proposed measure is quite tractable due to the univariate nature of MPIT. The results also show that the established estimation method can provide robust estimate for the new measure in a quite efficient manner.

This book constitutes the refereed conference proceedings of the 4th International Conference on Emerging Technologies in Computing, iCEtiC 2021, held in August 2021. Due to the COVID-19 pandemic the conference was held virtually.
The 15 revised full papers were reviewed and selected from 44 submissions and are organized in topical sections covering Information and Network Security; Cloud, IoT and Distributed Computing; and AI, Expert Systems and Big Data Analytics.

In this paper, we propose derivative-oriented parametric sensitivity indices to investigate the influence of parameter uncertainty on a previously proposed failure probability-based importance measure in the presence of multidimensional dependencies. Herein, the vine copula function, a powerful mathematical tool for modeling variable dependencies, is utilized to establish the joint probability density function (PDF) for multidimensional dependencies. Based on the properties of the copula function, the developed parametric sensitivity indices are decomposed into independent and dependent parts. Using these parts, different types of contributions to the failure probability are identified. By computing the kernel function for each marginal PDF and the copula kernel function for each pair-copula PDF involved in the vine factorization, a general numerical algorithm is developed for estimating separated parametric sensitivity indices. Finally, the feasibility of the proposed indices and numerical solutions is verified through a numerical example and by solving two engineering problems.

In the performance-based design context, buildings can be evaluated through simulation procedures to estimate their performance in different environmental criteria. In this context, a crucial issue is the identification of the most influent input variables in some specific performance criteria, which can only be appropriately assessed through a robust sensitivity analysis approach. Thus, the objective of this study is to develop a method to estimate the influence of design variables in the thermal and energy performance of buildings through a systematic procedure, using building simulation programmes and statistical tools. The method has a broad scope of combining a local with a global sensitivity analysis using the Morris elementary effects method, and uncertainty analysis of the performance criteria. A case study of a low-income house located in southern Brazil was evaluated and simulated in the EnergyPlus™ programme. Four different performance criteria were considered (such as the degree-hours for heating and cooling and the energy consumption for heating and cooling). The simulation experiment considered 21 design variables, such as the thermal properties of the construction components, openings characteristics, airflow parameters and solar orientation of the house. The study identified the most influential design variable in each criterion, highlighting the thermal transmittance and the solar absorptance of the roof, and the ventilation area of the windows. The combination of the local and global approaches leads to a better understanding of the behaviour of the model and can help the decision-maker to optimise the building performance more accurately.
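The Morris elementary-effects method used in the study above can be sketched compactly: each trajectory perturbs one factor at a time on a coarse grid, mu* (the mean absolute elementary effect) screens influential factors, and sigma flags non-linearity or interactions. The three-input test model below is hypothetical:

```python
import numpy as np

def morris(model, d, r=50, levels=4, seed=0):
    """Elementary-effects screening: r one-at-a-time trajectories on a `levels`-level grid.
    Returns (mu_star, sigma): mean absolute effect and its spread, per input."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, d))
    for t in range(r):
        x = rng.integers(0, levels - 1, size=d) / (levels - 1)  # random grid base point
        y = model(x)
        for j in rng.permutation(d):            # move one factor at a time
            step = delta if x[j] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[j] += step
            y_new = model(x_new)
            ee[t, j] = (y_new - y) / step
            x, y = x_new, y_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Hypothetical test model: x1 linear, x2 strong and non-linear, x3 completely inert.
mu_star, sigma = morris(lambda x: x[0] + 5.0 * x[1] ** 2 + 0.0 * x[2], d=3)
```

The screening behaves as expected: the inert x3 gets mu* of exactly zero, the linear x1 gets mu* of one with essentially zero sigma, and the non-linear x2 dominates both measures.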

Health state assessment is a key issue in health management of aeronautical relay subject to complex interference environment. The input reliability of assessment model has direct connection with the assessment result and the reliability of the assessment model. Due to the limitation of resources and monitoring technology, it is impossible to simultaneously improve the reliabilities of all the characteristics. Thus, some important characteristics should be sorted by the role they play in the assessment model. Implementing the quantitatively analysis of the influence of the input reliability can provide guidance. The belief rule base model with attribute reliability (BRB-r) provides such a modeling framework and analysis method. It is one of the expert systems that can aggregate unreliable quantitative data and expert knowledge and has traceability between the model input and output. Thus, in this paper, a new health state assessment model based on BRB-r for aeronautical relay is developed for the first time where the calculation method of model reliability is further developed. Then, to quantitatively analyze the effectiveness of the input reliability on the model output and the model reliability, the sensitivity analysis of attribute reliability is deduced based on the first-order local sensitivity method. The obtained sensitivity coefficient of attribute reliability represents its effectiveness on the constructed health state assessment model and can provide guidance in health management for aeronautical relay under limited resource. A case study of health sate estimation of the JRC-7M aeronautical relay is conducted to illustrate the application of the new model.

The composition of the modern aerospace system becomes more and more complex. The performance degradation of any device in the system may cause it difficult for the whole system to keep normal working states. Therefore, it is essential to evaluate the performance of complex aerospace systems. In this paper, the performance evaluation of complex aerospace systems is regarded as a Multi-Attribute Decision Analysis (MADA) problem. Based on the structure and working principle of the system, a new Evidential Reasoning (ER) based approach with uncertain parameters is proposed to construct a nonlinear optimization model to evaluate the system performance. In the model, the interval form is used to express the uncertainty, such as error in testing data and inaccuracy in expert knowledge. In order to analyze the subsystems that have a great impact on the performance of the system, the sensitivity analysis of the evaluation result is carried out, and the corresponding maintenance strategy is proposed. For a type of Inertial Measurement Unit (IMU) used in a rocket, the proposed method is employed to evaluate its performance. Then, the parameter sensitivity of the evaluation result is analyzed, and the main factors affecting the performance of IMU are obtained. Finally, the comparative study shows the effectiveness of the proposed method.

Hazardous Natural events can cascade into Technological accidental scenarios (so called NaTech accidents). The occurrence of these accidents can degrade the performance of the preventive and mitigative safety barriers installed in the technological plants. Such performance degradation is typically assessed by expert judgement, without considering the effect of the magnitude of the natural hazard, nor its increasing frequency of occurrence in view of climate change. In this work, a novel sensitivity analysis framework is developed to identify the safety barriers whose performance degradation is most critical and thus needs careful modeling for realistic risk assessment. The framework is based on the calculation of a set of sensitivity measures, namely the Beta, the Conditional Value at Risk (CVaR) and the Value of Information (VoI), and their use to prioritize the safety barriers with respect to the need of:
•accounting for performance degradation during an accidental scenario;
•planning investments for further characterization of the safety barrier performance.
An application is shown with respect to a case study of literature that consists of a chemical facility equipped with five safety barriers (of three different types, active, passive and procedural). NaTech scenarios can occur, triggered by floods and earthquakes. The results obtained with the Beta measure indicate that two-out-of-five barriers (one active and one passive) deserve accurate modelling of the performance degradation due to natural events. An additional outcome is that in the case study considered, both CVaR and VoI rank the passive barrier as the most effective in mitigating the scenarios escalation: therefore, this barrier is the one for which the decision maker could decide to invest resources for improving the characterization of its performance to obtain a more realistic assessment of the risk.

The computational inverse technique-based high-fidelity numerical modeling is a comprehensive analysis of the experimental data and the numerical simulation model, rather than a simple modeling analysis or an optimization iteration alone. Appropriate physical experiments are required to ensure sufficiently strong sensitivity between the measured responses and the modeling parameters, while the numerical solution is expected to be available. In addition, the identification of the model parameters should address three problems: the high computational cost and ill-posedness of the inverse system, the improvement of identification efficiency and stability, and the optimality of the solution to a specific extent.

Because machine learning has been widely used in various domains, interpreting internal mechanisms and predictive results of models is crucial for further applications of complex machine learning models. However, the interpretability of complex machine learning models on biased data remains a difficult problem. When the important explanatory features of concerned data are highly influenced by contaminated distributions, particularly in risk-sensitive fields, such as self-driving vehicles and healthcare, it is crucial to provide a robust interpretation of complex models for users. The interpretation of complex models is often associated with analyzing model features by measuring feature importance. Therefore, this paper proposes a novel method derived from high-dimensional model representation (HDMR) to measure feature importance. The proposed method can provide robust estimation when the input features follow contaminated distributions. Moreover, the method is model-agnostic, which can enhance its ability to compare different interpretations due to its generalizability. Experimental evaluations on artificial models and machine learning models show that the proposed method is more robust than the traditional method based on HDMR.

Dynamic models with both random and random process inputs are frequently used in engineering. However, sensitivity analysis (SA) for such models is still a challenging problem. This paper, therefore, proposes a new multivariate SA technique to aid the safety design of these models. The new method can decompose the SA of dynamic models into a series of SAs of their principal components based on singular value decomposition, which makes the SA of dynamic models much more efficient. It is shown that the effect of both random and random process inputs on the uncertainty of the dynamic output can be measured from their effects on both the distributions and directions of the principal components, based on which the individual sensitivities are defined. The generalized sensitivities are then proposed to synthesize the information that is spread between the principal components to assess the influence of each input on the entire uncertainty of the dynamic output. The properties of the new sensitivities are derived and an efficient estimation algorithm is proposed based on unscented transformation. Numerical results are discussed with application to a hydrokinetic turbine blade model, where the new method is compared with the existing variance-based method.
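The core idea of decomposing a dynamic output via SVD and attributing uncertainty through its principal components can be sketched as follows; the two-input dynamic model is hypothetical, and the paper's generalized sensitivities are replaced here by simple correlations with the PC scores:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)
n = 500
X = rng.normal(size=(n, 2))                       # two independent random inputs
# Hypothetical dynamic model: each input excites a different temporal mode.
Y = np.outer(X[:, 0], np.sin(2 * np.pi * t)) + 0.3 * np.outer(X[:, 1], t)

Yc = Y - Y.mean(axis=0)                           # center each time instant
U, sv, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U * sv                                   # PC scores of each Monte Carlo run
explained = sv**2 / np.sum(sv**2)                 # variance captured per component
# Attribute the dominant component to the inputs via simple correlations:
corr = [abs(np.corrcoef(scores[:, 0], X[:, j])[0, 1]) for j in range(2)]
```

Because the first input drives the high-variance sinusoidal mode, the leading component captures most of the output variance and its scores correlate almost perfectly with that input, so only a few component-wise SAs are needed instead of one per time step.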

Please download the full-text, which includes an extended summary.

A method for inducing a desired rank correlation matrix on a multivariate input random variable for use in a simulation study is introduced in this paper. This method is simple to use, is distribution free, preserves the exact form of the marginal distributions on the input variables, and may be used with any type of sampling scheme for which correlation of input variables is a meaningful concept. A Monte Carlo study provides an estimate of the bias and variability associated with the method. Input variables used in a model for study of geologic disposal of radioactive waste provide an example of the usefulness of this procedure. A textbook example shows how the output may be affected by the method presented in this paper.
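The procedure described above is the Iman-Conover method. A minimal sketch of the standard construction (van der Waerden scores plus a Cholesky transform); this is an illustrative reimplementation under those assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats

def iman_conover(X, target_rank_corr, seed=0):
    """Induce a target rank-correlation matrix on the columns of X.
    Marginal distributions are preserved exactly: columns are only reordered."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Van der Waerden (normal) scores, independently shuffled per column
    scores = stats.norm.ppf(np.arange(1, n + 1) / (n + 1))
    M = np.column_stack([rng.permutation(scores) for _ in range(d)])
    P = np.linalg.cholesky(target_rank_corr)
    Q = np.linalg.cholesky(np.corrcoef(M, rowvar=False))
    T = M @ np.linalg.inv(Q).T @ P.T          # scores with the desired correlation
    out = np.empty_like(X)
    for j in range(d):                         # reorder X to mimic the rank pattern of T
        out[np.argsort(T[:, j]), j] = np.sort(X[:, j])
    return out

rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(size=1000), rng.exponential(size=1000)])
target = np.array([[1.0, 0.7], [0.7, 1.0]])
Xc = iman_conover(X, target)
```

The reordering step is why the method is distribution free: the uniform and exponential marginals above survive unchanged, while the achieved rank correlation lands close to the 0.7 target.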

As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables mathematically intractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious procedure for selecting the values of the input variables is required; Latin hypercube sampling has been shown to work well on this type of problem.
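A minimal Latin hypercube sampler on the unit hypercube makes the selection procedure concrete: stratifying each dimension into n equal bins guarantees full marginal coverage even when the model budget allows only a few runs.

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """n points in [0,1]^d with exactly one point in each of the n equal strata per dimension."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n   # jitter inside each stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                       # decouple the dimensions
    return u

X = latin_hypercube(10, 3, seed=0)
# Every column hits each decile exactly once:
assert all(sorted(np.floor(X[:, j] * 10).astype(int)) == list(range(10)) for j in range(3))
```

Sampling from non-uniform marginals then reduces to pushing each column through the corresponding inverse CDF, which preserves the stratification.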

This document is for users of a computer program developed by the authors at Sandia National Laboratories. The computer program is designed to be used in conjunction with sensitivity analyses of complex computer models. In particular, this program is most useful in analyzing input-output relationships when the input has been selected using the Latin hypercube sampling program developed at Sandia (Iman and Shortencarier, 1984). The present computer program calculates the partial correlation coefficients and/or the standardized regression coefficients from the multivariate input to, and output from, a computer model. These coefficients can be calculated on either the original observations or on the ranks of the original observations. The coefficients provide alternative measures of the relative contribution (importance) of each of the various inputs to the observed output variations. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history then the computer program will generate a graph of the coefficients over time or space for each input-variable, output-variable combination of interest, thus indicating the importance of each input over time or space. The computer program is user-friendly and written in FORTRAN 77 to facilitate portability.
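Partial correlation coefficients of the kind such a program reports can be obtained compactly from the inverse of the correlation matrix (applied either to the raw observations or to their ranks); a sketch on hypothetical data, not the Sandia code:

```python
import numpy as np

def partial_corr(data):
    """Partial correlation between every pair of columns, controlling for all the others,
    read off the inverse of the correlation matrix (the precision matrix)."""
    P = np.linalg.inv(np.corrcoef(data, rowvar=False))
    D = np.sqrt(np.outer(np.diag(P), np.diag(P)))
    pc = -P / D
    np.fill_diagonal(pc, 1.0)
    return pc

rng = np.random.default_rng(4)
x1, x2 = rng.normal(size=(2, 500))
y = x1 + x2 + 0.1 * rng.normal(size=500)
pc = partial_corr(np.column_stack([x1, x2, y]))
# pc[0, 2]: correlation of x1 with y after removing the (equally strong) effect of x2.
```

Controlling for x2 strips out its share of the output variation, so the partial correlation of x1 with y is much closer to one than the ordinary correlation would be.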

The nuclear fuel waste disposal concept chosen for development and assessment in Canada involves the isolation of corrosion-resistant containers of waste in a vault located deep in plutonic rock. This volume describes the methods, models and data used to perform the second post-closure assessment. The results are presented and their significance is discussed. Conclusions and planned improvements are listed.

Statistical techniques for sensitivity analysis of a complex model are presented. Included in these techniques are Latin hypercube sampling, partial rank correlation, rank regression, and predicted error sum of squares. The synthesis of these techniques was motivated by the need to analyze a model for the surface movement of radionuclides. The model and statistical techniques presented in this report are part of a project funded by the Nuclear Regulatory Commission to develop a methodology to assess the risk associated with geologic disposal of radioactive waste.
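Of the statistics listed, the predicted error sum of squares (PRESS) is the least familiar: it is the sum of squared leave-one-out prediction errors of a regression model, used to compare candidate regressions on predictive rather than in-sample fit. A minimal sketch for simple linear regression follows (the explicit refit loop and the function name are illustrative; the cited report's implementation may differ):

```python
def press_simple_linear(x, y):
    """Predicted error sum of squares for the model y ~ a + b*x.

    For each observation i, refit the regression on the remaining
    n-1 points and accumulate the squared error of predicting y[i].
    """
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        # least-squares slope and intercept on the reduced sample
        b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
        a0 = my - b * mx
        press += (y[i] - (a0 + b * x[i])) ** 2
    return press
```

A smaller PRESS indicates a regression that generalizes better to points it was not fitted on; a model that fits perfectly has PRESS of zero.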

Final storage of spent nuclear fuel--KBS3, SKBF, Swedish Nuclear Fuel Supply Co

- SKBF/KBS

SKBF/KBS, Final storage of spent nuclear fuel--KBS3, SKBF, Swedish Nuclear Fuel Supply Co./Division KBS report ISSN 0349-6015 (1983).

Environmental and safety studies for nuclear waste management

- D M Wuschke
- K K Mehta
- J W Dormuth
- T Andres
- G R Sherman
- E L J Rosinger
- B W Goodwin
- J A K Reid
- R B Lyon

Wuschke, D. M., Mehta, K. K., Dormuth, J. W., Andres, T., Sherman, G. R., Rosinger, E. L. J., Goodwin, B. W., Reid, J. A. K. & Lyon, R. B., Environmental and safety studies for nuclear waste management, volume 3: Post-closure assessment. Atomic Energy of Canada Ltd report AECL TR-1127-3 (1981).

PREP and SPOP utilities. Two FORTRAN programs for sample preparation, uncertainty analysis and sensitivity analysis in Montecarlo simulation. Programs description and user's guide

- A Saltelli

Saltelli, A., PREP and SPOP utilities. Two FORTRAN programs for sample preparation, uncertainty analysis and sensitivity analysis in Montecarlo simulation. Programs description and user's guide. Joint Research Centre of Ispra report EUR 11034 EN, Luxembourg (1987).

An assessment of the radiological consequences of disposal of high-level waste in coastal geologic formations. National Radiological Protection Board

- M D Hill
- G Lawson

Hill, M. D. & Lawson, G., An assessment of the radiological consequences of disposal of high-level waste in coastal geologic formations. National Radiological Protection Board report NRPB-R108, Harwell (UK), 1980.

Long-Term radiation protection objectives for radioactive waste disposal

- NEA

NEA, Long-Term radiation protection objectives for radioactive waste disposal. Nuclear Energy Agency experts report, OECD, Paris (F), 1984.

LISA, A code for safety assessment in nuclear waste disposal

- Saltelli
