Figure 2. The six major themes of 'challenges and outlook' in the theory, methods and application of SA.

Source publication
Article
Full-text available
Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling. The tremendous potential benefits of SA are, however, yet to be fully realized, both for advancing mechanistic and data-driven modeling of human and natural systems, and in support of decision making. In this perspective paper, a multidisciplinary group of...

Contexts in source publication

Context 1
... the significant progress and popularity of sensitivity analysis (SA) in recent years, it is timely to revisit the fundamentals of this relatively young research area, identify its grand challenges and research gaps, and probe into the ways forward. To this end, the multidisciplinary authorship team has identified six major themes of 'challenges and outlook', as outlined in Figure 2. In the following, we discuss our perspective on the past, present and future of SA under each theme in a dedicated section. ...
Context 2
... Much work is needed to realize the tremendous untapped potential of SA for mathematical modeling of socio-environmental and other societal problems which are confounded by uncertainty (Section 3.2). SA can help with the management of uncertainty by (1) characterizing how models and the underlying real-world systems work, (2) identifying the adequate level of model complexity for the problem of interest, and (3) pointing to possible model deficiencies and non-identifiability issues, as well as where to invest to reduce critical uncertainties. ...

Similar publications

Article
Our study is keyed to the development of a methodological approach to assess the workflow and performance associated with the operation of a crude-oil desalting/demulsification system. Our analysis is data-driven and relies on the combined use of (a) Global Sensitivity Analysis (GSA), (b) machine learning, and (c) rigorous model discrimination/iden...

Citations

... This provides a comprehensive spectrum of sensitivity information as it bridges the variance- and derivative-based approaches. For example, it produces sensitivity indices of the two most common GSA approaches, the derivative-based (Morris, 1991) and the variance-based methods (Sobol, 2001), while being more computationally efficient and statistically robust (Becker, 2020;Puy et al., 2021). To summarize global sensitivities, VARS integrates the directional variograms over a given perturbation scale (e.g. ...
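As a hedged illustration of the two index families this excerpt mentions (not of VARS itself), the following self-contained numpy sketch computes Morris-style mean absolute elementary effects and first-order Sobol' indices with a pick-and-freeze estimator for a toy three-factor model; the model and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy response with a nonlinearity and an x0*x1 interaction.
    return x[..., 0] + 2.0 * x[..., 1] ** 2 + x[..., 0] * x[..., 1] + 0.1 * x[..., 2]

d, n = 3, 20000

# Derivative-based screening (Morris-style mean absolute elementary effects).
delta = 0.01
base = rng.uniform(0.0, 1.0, size=(n, d))
mu_star = np.empty(d)
for i in range(d):
    pert = base.copy()
    pert[:, i] += delta
    mu_star[i] = np.mean(np.abs((model(pert) - model(base)) / delta))

# Variance-based first-order Sobol' indices (Saltelli-style pick-and-freeze estimator).
A = rng.uniform(0.0, 1.0, size=(n, d))
B = rng.uniform(0.0, 1.0, size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))
S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # A with its i-th column replaced by B's
    S1[i] = np.mean(yB * (model(ABi) - yA)) / var_y

print("mu* (screening):   ", np.round(mu_star, 3))
print("S1  (first-order): ", np.round(S1, 3))
```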
Thesis
Full-text available
Permafrost affects hydrological, meteorological, and ecological processes in over one-quarter of the land surface in the Northern Hemisphere. Permafrost degradation has been observed over the last few decades and is projected to accelerate under climatic warming. However, simulating permafrost dynamics is challenging due to process complexity, scarcity of observations, spatial heterogeneity, and permafrost disequilibrium with external climate forcing. Hydrologic-land-surface models (H-LSMs), which act as the lower boundary condition of the current generation of Earth system models (ESMs), are suitable for diagnosing and predicting permafrost evolution, as they couple heat and water interactions across soil-vegetation-atmosphere interfaces and are applicable for large-scale assessments. This thesis aims to improve the ability of H-LSMs to simulate permafrost dynamics and concurrently represent hydrology. Specific research contributions are made on four fronts: (1) assessing the uncertainty introduced to the modelling due to permafrost initialization, (2) investigating the sensitivity of permafrost dynamics to different H-LSM parameters, associated issues of parameter identifiability, and sensitivity to external forcing datasets, (3) evaluating the strength of permafrost-hydrology coupling in H-LSMs in data-scarce regions under parameter uncertainty, and (4) assessing the fate of permafrost thaw and associated changes in streamflow under an ensemble of future climate projections. The analyses and results of this thesis illuminate these central issues, and various solutions for permafrost-based applications of H-LSMs are proposed. First, uncertainty in model initialization determines the length of required spin-up cycles; 200-1000 cycles may be required to ensure proper model initialization under different climatic conditions and initial soil moisture contents. Further, the uncertainty due to initialization can lead to divergent permafrost simulations, such as active layer thickness variations of up to ~2 m. Second, the sensitivity of various permafrost characteristics is mainly driven by surface insulation (canopy height and snow-cover fraction) and soil properties (depth and fraction of organic matter content). Additionally, the results underscore the difficulties inherent in H-LSM simulation of all aspects of permafrost dynamics, primarily due to poor identifiability of influential parameters and the limitations of currently-available forcing data sets. Third, different H-LSM parameterizations favor different sources of data (i.e. streamflow, soil temperature profiles, and permafrost maps), and it is challenging to configure a model faithful to all data sources. Overall, the modelling results show that surface insulation (through snow cover) and model initialization are primary regulators of permafrost dynamics and different parameterizations produce different low-flow but similar high-flow regimes. Lastly, severe permafrost degradation is projected to occur under all climate change scenarios, even under the most optimistic ones. The degradation and climate change, collectively, are likely to alter several streamflow signatures, including an increase in winter and summer flows. Permafrost fate has strategic importance for the exchange of water, heat, and carbon fluxes over large areas, and can amplify the rate of climate change through a positive feedback mechanism. However, existing projections of permafrost are subject to significant uncertainty, stemming from several sources.
This thesis quantifies and reduces this uncertainty by studying initialization, parameter identification, and evaluation of H-LSMs, which ultimately lead to configuring an H-LSM with higher fidelity to assess the impact of climate change. As a result, this work is a step forward in improving the realism of H-LSM simulations in permafrost regions. Further research is needed to refine simulation capability, and to develop improved observational datasets for permafrost and their associated climate forcing.
... Simulation results of the proposed model show the detection and classification performance for various types of cyber-attacks. However, for a general network, only a small subset of factors has a significant impact on the system output (Razavi et al., 2021). For sensitivity analysis, the relevancy factor is calculated to determine the dependency of the system output on the input variables (Baghban et al., 2019). ...
Article
Cyber-attacks have become one of the main threats to the security, reliability, and economic operation of power systems. Detection and classification of multiple cyber-attacks pose challenges for ensuring the stability and security of power systems. To address this issue, this study proposes an automatic identification and classification method for multiple cyber-attacks based on the deep capsule convolution neural network. Spatial correlations among different nodes and temporal features from historical operation status in the transmitted data packets are extracted by the convolution neural network. Capsules in the proposed structure have important implications for maintaining the topological consistency contained in the measurement matrix. Furthermore, the proposed method is model-free and avoids the impact of system parameter uncertainties on detection performance. Multiple kinds of typical cyber-attacks, including false data injection attacks, replay attacks, denial of service attacks, time-delay attacks, and deception attacks, are considered and modeled in this paper. Numerical results on the IEEE 39-bus test system show that the proposed method can achieve 99.97% detection accuracy on single cyber-attacks and 96.25% detection accuracy on multiple cyber-attacks. Comparative results illustrate that the proposed method outperforms traditional neural networks. This approach provides a solution for the problem of multiple cyber-attack detection and classification.
... Importance measures are techniques used to evaluate the relative importance of input variables (or features) in a complex system or model. In recent years, many researchers have extensively studied these techniques in Machine Learning (ML), Statistics, and Operations Research (Weisberg 2005;Saltelli et al. 2008;James et al. 2013;Molnar 2020;Razavi et al. 2021). ...
Article
Full-text available
Discriminating the role of input variables in a hydrological system or in a multivariate hydrological study is particularly useful. Nowadays, emerging tools, called feature importance measures, are increasingly being applied in hydrological applications. In this study, we propose a virtual experiment to fully understand the functionality and, most importantly, the usefulness of these measures. Thirteen importance measures related to four general classes of methods are quantitatively evaluated to reproduce a benchmark importance ranking. This benchmark ranking is designed using a linear combination of ten random variables. Synthetic time series with varying distribution, cross-correlation, autocorrelation and random noise are simulated to mimic hydrological scenarios. The obtained results clearly suggest that a subgroup of three feature importance measures (Shapley-based feature importance, derivative-based measure, and permutation feature importance) generally provide reliable rankings and outperform the remaining importance measures, making them preferable in hydrological applications.
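For readers unfamiliar with one of the top-performing measures named above, permutation feature importance, here is a minimal numpy sketch on synthetic data; the linear stand-in model, noise level, and variable roles are assumptions for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: y depends strongly on x0, weakly on x1, and not on x2.
n = 5000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Fit a simple least-squares model (stand-in for any trained predictor).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(Z):
    return Z @ coef

baseline_mse = np.mean((y - predict(X)) ** 2)

# Permutation feature importance: shuffle one column at a time and record
# the increase in prediction error relative to the unshuffled baseline.
importance = np.empty(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = np.mean((y - predict(Xp)) ** 2) - baseline_mse

print("baseline MSE:", round(baseline_mse, 3))
print("permutation importance:", np.round(importance, 3))
```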
... uncertainty as well as enhancing the use of SA to assist decision-making are two of the six most important challenges highlighted by Razavi et al. (2021). In addition, the majority of SA studies are based on one or two objective functions. ...
... It also clarifies the relationship and role of SA with respect to uncertainty quantification and improves the use of SA in support of decision-making. This can be considered a significant contribution to the SA future challenges and outlooks identified by Razavi et al. (2021). Table 5. ...
Article
Full-text available
Many hydrological applications employ conceptual-lumped models to support water resource management techniques. This study aims to evaluate the workability of applying a daily time-step conceptual-lumped model, HYdrological MODel (HYMOD), to the Headwaters Benue River Basin (HBRB) for future water resource management. This study combines both local and global sensitivity analysis (SA) approaches to focus on which model parameters most influence the model output. It also identifies how well the model parameters are defined in the model structure using six performance criteria to predict model uncertainty and improve model performance. The results showed that both SA approaches gave similar results in terms of the parameters to which the model output is most sensitive, which are also well-identified parameters in the model structure. The more precisely the model parameters are constrained within a small range, the smaller the model uncertainties, and therefore the better the model performance. The best simulation with regard to the measured streamflow lies within the narrow band of model uncertainty prediction for the behavioral parameter sets. This highlights that the simulated discharges agree with the observations satisfactorily, indicating the good performance of the hydrological model and the feasibility of using the HYMOD to estimate long time-series of river discharges in the study area.
... Parameter sensitivity analysis (SA), a method that perturbs parameters in a model to evaluate the effects of parameter changes on model outputs and state variables, can be used to reveal the key parameters that influence the hydrological cycle (Razavi and Gupta, 2015;Razavi et al., 2021;Song et al., 2015), and to gain knowledge about the hydrological cycle under a specific model structure (Mai et al., 2022). The Sobol' method, a global sensitivity analysis method based on variance decomposition, can provide quantitative estimates of the sensitivity of single parameters and parameter interactions for highly nonlinear models (Khorashadi Zadeh et al., 2017;Sobol', 1993). ...
Preprint
Full-text available
Many rivers in the East Asian Monsoon region originate from the Qinghai-Tibet Plateau (QTP), which provides a huge amount of fresh water resources for downstream countries. As a region characterized by high altitude and cold weather, distributed hydrological modelling provides valuable knowledge about the water cycle and cryosphere of the QTP. However, the lack of streamflow data restricts the application of hydrological models in this data-sparse region. Previous studies have demonstrated the possibility of using remote sensing evapotranspiration (RS-ET) data to improve modelling. However, in the QTP, the mechanisms driving such improvements have not been understood thoroughly. In this study, these driving mechanisms were explored through rainfall-runoff modelling with the Soil and Water Assessment Tool (SWAT) in the Yalong River Basin of the QTP. Three model calibration experiments were conducted using streamflow data at the basin outlet, basin-averaged RS-ET data from the Global Land Evaporation Amsterdam Model (GLEAM), and the combination of both, under the framework of Generalized Likelihood Uncertainty Estimation (GLUE). The results show that, compared with calibration using streamflow data alone, the Nash-Sutcliffe Efficiency of simulated streamflow at the 50% quantile for the calibration using both streamflow and RS-ET data increased from 0.71 to 0.81 in the calibration period and from 0.75 to 0.84 in the validation period, and more observations are embraced by the uncertainty bands. Similar improvements are also found for the ET estimates. Comparison of parameter posterior distributions among the three experiments demonstrated that calibration using both types of observations increases the number of parameters whose posterior distributions differ from the assumed uniform prior distribution, indicating that the degree of equifinality was reduced. A more comprehensive parameter sensitivity analysis with the Sobol' method was also conducted to explain the differences among the three calibrations. Although the number of detected sensitive parameters is almost the same, the sensitive parameters detected based on both types of observations cover surface runoff generation, snow-melting, soil water movement and evaporation processes, whereas with a single type of observation the identified sensitive parameters are only those related to the hydrological processes quantified by the observations. In terms of model performance and parameter sensitivity, it is demonstrated that not only does the model output perform better, but the characteristics of the water cycle are also captured more effectively, highlighting the necessity of incorporating RS-ET data for hydrological model calibration in the QTP. Moreover, adopting observations or information about soil properties or snow-melting processes to make more reasonable estimates of parameter distributions could further reduce simulation uncertainty under the calibration strategies proposed in this study.
... High-dimensional models in the water resources sector typically are non-identifiable in the sense that it is not possible to estimate unique parameter values from the data and conditions in forcing data, model structure and errors. Methods to reduce dimensionality are therefore commonly recommended before uncertainty analysis is undertaken, for example, by identifying the important subset of factors influencing model outputs using sensitivity analysis (SA) (Lam et al., 2020;Razavi et al., 2021). SA is becoming a common practice in the modeling process, which involves attributing the variation in the outputs of a model to different factors (Saltelli et al., 2019). ...
... A subsequent quantification of the differences between conditional and unconditional model output quantities of actual interest, the QoI, is warranted in order to reduce the risk of a mismatch with parameter sensitivities and to appreciate the corresponding change to the predictive quantity by FF. In this way, one can avoid cases where model variations cannot be explained in the dimensionally reduced model (Razavi et al., 2021), and hence increase the effectiveness of the reduction in parameter dimension. ...
... Variance-based sensitivity indices can then be computed analytically from the PCE using the same approach we use for independent variables. However, the meaning of variance-based sensitivity indices for correlated variables is still highly debated (see Razavi et al., 2021). ...
Article
Full-text available
Factor Fixing (FF) is a common method for reducing the number of model parameters to lower computational cost. FF typically starts with distinguishing the insensitive parameters from the sensitive and pursues uncertainty quantification (UQ) on the resulting reduced-order model, fixing each insensitive parameter at a fixed value. There is a need, however, to expand such a common approach to consider the effects of decision choices in the FF-UQ procedure on metrics of interest. Therefore, to guide the use of FF and increase confidence in the resulting dimension-reduced model, we propose a new adaptive framework consisting of four principles: 1) re-parameterize the model first to reduce obvious non-identifiable parameter combinations, 2) focus on decision relevance especially with respect to errors in quantities of interest (QoI), 3) conduct adaptive evaluation and robustness assessment of errors in the QoI across FF choices as sample size increases, and 4) reconsider whether fixing is warranted. The framework is demonstrated on a spatially distributed water quality model. The error in estimates of QoI caused by FF can be estimated using a Polynomial Chaos Expansion (PCE) surrogate model. Built with 70 model runs, the surrogate is computationally inexpensive to evaluate and can provide global sensitivity indices for free. For the selected catchment, just two factors may provide an acceptably accurate estimate of model uncertainty in the average annual load of Total Suspended Solids (TSS), suggesting that reducing the uncertainty in these two parameters is a priority for future work before undertaking further formal uncertainty quantification.
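A minimal sketch of the core factor-fixing check described above: compare decision-relevant QoI statistics with all factors varying against the same statistics with the presumed-insensitive factors held at nominal values. The toy model, nominal values, and statistics are illustrative assumptions; the paper's PCE surrogate workflow is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def qoi(theta):
    # Toy "model": quantity of interest driven mainly by theta0 and theta1.
    return theta[..., 0] ** 2 + theta[..., 1] + 0.02 * theta[..., 2] * theta[..., 3]

n, d = 50000, 4
full = rng.uniform(0.0, 1.0, size=(n, d))   # all factors uncertain
y_full = qoi(full)

# Factor fixing: hold the (presumed) insensitive factors 2 and 3 at nominal values.
reduced = full.copy()
reduced[:, 2:] = 0.5
y_fixed = qoi(reduced)

# Error in the QoI induced by fixing, assessed on decision-relevant statistics.
for name, stat in [("mean", np.mean), ("std", np.std),
                   ("95th pct", lambda v: np.percentile(v, 95))]:
    print(f"{name}: full={stat(y_full):.4f}  fixed={stat(y_fixed):.4f}  "
          f"abs. error={abs(stat(y_full) - stat(y_fixed)):.4f}")
```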
... A whole range of methods are available for sensitivity analysis, ranging from local "one-at-a-time" (OAT) methods, which are performed by varying one parameter at a time around a nominal value, to global variance-based decomposition methods, which are based on characterizing the output variance by decomposing it into parts associated with individual inputs or groups of them [17]. The latter are usually considered computationally more demanding than the OAT methods, where the parameter sensitivity is estimated by assessing variations in model output by perturbing one parameter, holding all other parameters fixed, restoring the parameter value to its original value and repeating the procedure for each individual input. While such a difference-quotient-based OAT approach may seem rather simple and intuitive at first, it has several important drawbacks: it (a) uses a limited range of the input parameter variations around some nominal point, and then the results can vary with the nominal point location, and (b) completely neglects possible inter-parameter interactions and, therefore, may give biased estimates. ...
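The difference-quotient OAT procedure described in this excerpt takes only a few lines; this hedged sketch uses a toy two-input model chosen so that the interaction blindness mentioned above is visible (the model and nominal point are assumptions).

```python
import numpy as np

def model(x1, x2):
    # Toy model with a strong interaction term that OAT cannot see at this nominal point.
    return x1 + x2 + 5.0 * x1 * x2

nominal = {"x1": 0.0, "x2": 0.0}
delta = 1e-3

# One-at-a-time (OAT) sensitivities: perturb each input around the nominal
# point, record the difference quotient, then restore the original value.
oat = {}
for name in nominal:
    pert = dict(nominal)
    pert[name] += delta
    oat[name] = (model(**pert) - model(**nominal)) / delta

print(oat)  # both ~1.0: the x1*x2 interaction is invisible at (0, 0),
            # even though it dominates the response elsewhere in the domain.
```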
... One of the more important and productive areas of modern mathematical analysis is the conceptual understanding of systems of partial differential equations, both linear and nonlinear, and its consequences for practically all domains of scientific research. This has motivated the development of nonlinear partial differential equation models that clarify a variety of significant physical situations [2][3][4][5]. Such models contribute significantly to the analysis of the behavioural and mechanical aspects that arise in a wide range of other scientific disciplines, such as ionized science, optical fibres, astronomy, life sciences, and information technology. ...
Article
Full-text available
The nonlinear Schrödinger equation (NLSE) is one of the most important physical models for explaining the dynamics of optical soliton proliferation in optical fiber theory. Due to an extensive variety of applications for ultrafast signal routing systems and short light pulses in communications, optical soliton propagation in nonlinear fibers is a topic of substantial present interest. To overcome the ill-posedness of the unstable NLSE, a new term in xt is introduced, which leads to the Hamiltonian amplitude equation. In this work, we study exact wave solutions to the nonlinear complex model by utilizing a set of analytical approaches, namely the exp(−ξ(δ)) method, the improved F-expansion technique, and the extended modified auxiliary equation mapping method. Numerous solutions, including singular periodic, multi-periodic, single bell-shaped, and multi-bell-shaped wave frameworks, were found. The proposed approach is fast and robust, and offers what is needed to verify the origin of these solutions. Additionally, a dynamic representation of the fascinating behavior of several solitary wave solutions using 2D, 3D, and contour graphs is provided, and we also discuss the stability analysis and modulation instability of the proposed model.
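For context, the Hamiltonian amplitude equation that the added mixed-derivative term leads to is usually written as below; this normalization is recalled from the broader literature rather than taken from the abstract, so treat the coefficients as an assumption.

```latex
% Unstable NLSE: i u_x + u_{tt} + 2\sigma |u|^2 u = 0 (sigma = +/- 1).
% Adding the small mixed-derivative term gives the Hamiltonian amplitude equation:
\begin{equation}
  i\,u_x + u_{tt} + 2\sigma\,\lvert u\rvert^{2}u - \varepsilon\,u_{xt} = 0,
  \qquad \sigma = \pm 1, \quad \lvert \varepsilon \rvert \ll 1 .
\end{equation}
```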
... The GSA attempts to provide a ''global" representation of how different uncertain quantities interact to influence some function of the output [16]. An overview of the state of the art on SA for deterministic input and uncertain parameters can be found in a recent paper by Razavi et al. [17]. ...
Article
In the framework of dynamic excitations to be considered during the design phase of structures, the most crucial one is the ground motion acceleration. To increase the structural performances against seismic actions, one of the most effective design criteria is to introduce damping devices. An efficient approach to define the optimal parameters of the damping system is based on the design sensitivity analysis, which provides a quantitative estimate of desirable design change, by relating the available design variables. In this paper a method to evaluate the sensitivities of stochastic response characteristics of structural systems with damping devices subjected to seismic excitations, modelled as fully non-stationary Gaussian stochastic processes, is proposed. The main steps are: i) to define the time-frequency varying response (TFR) function for non-classically damped systems; ii) to evaluate closed form solutions for the first-order derivatives of the TFR function as well as of the one-sided evolutionary power spectral density function of the structural response, with respect to damping parameters of devices; iii) to perform a design sensitivity analysis selecting as performance measure function the non-geometric spectral moments of nodal displacements. A numerical application demonstrates how the proposed approach is suitable to cope with practical problems of engineering interest.
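To make the idea of a design sensitivity with respect to a damping parameter concrete, here is a hedged single-degree-of-freedom sketch; the closed-form stationary variance pi*S0/(c*k) for white-noise forcing is recalled from standard random-vibration theory (an assumption, and far simpler than the fully non-stationary setting of the paper).

```python
import numpy as np

# SDOF oscillator m*x'' + c*x' + k*x = F(t) under white noise with constant
# two-sided PSD S0; stationary displacement variance is pi*S0/(c*k)
# (recalled from standard random-vibration theory, used here as an assumption).
k, S0 = 100.0, 0.2

def disp_variance(c):
    return np.pi * S0 / (c * k)

c0 = 2.0
analytic = -np.pi * S0 / (c0 ** 2 * k)     # d(variance)/dc in closed form
h = 1e-5
central_diff = (disp_variance(c0 + h) - disp_variance(c0 - h)) / (2.0 * h)

print(f"d(var)/dc: analytic = {analytic:.6f}, central difference = {central_diff:.6f}")
# The negative sign confirms that increasing damping lowers the response variance,
# the kind of quantitative guidance a design sensitivity analysis is meant to provide.
```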
... To perform the sensitivity analysis, the input parameters were removed one by one from the dataset and the errors were recorded. If the error increases after removing a parameter, it suggests that the parameter is important for predicting the outcome accurately (Razavi et al. 2021). But if the increase in error is negligible, then the parameter is considered less significant. ...
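A minimal numpy sketch of the remove-and-refit procedure described in this excerpt, using a synthetic dataset and a least-squares stand-in model (both assumptions chosen for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic dataset: the target depends on x0 and x1 but not on x2.
n = 4000
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.3, size=n)

def fit_and_mse(Xs):
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return np.mean((y - Xs @ coef) ** 2)

baseline = fit_and_mse(X)

# Remove each input in turn, refit, and record how much the error grows;
# a large increase marks the input as important for accurate prediction.
for j in range(X.shape[1]):
    reduced = np.delete(X, j, axis=1)
    print(f"without x{j}: MSE increase = {fit_and_mse(reduced) - baseline:.4f}")
```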
Article
Full-text available
Development and Operations (DevOps) is a relatively recent phenomenon that can be defined as a multidisciplinary effort to improve and accelerate the delivery of business values in terms of IT solutions. Many software organizations are heading towards DevOps to leverage its benefits in terms of improved development speed, stability, collaboration, and communication. DevOps practices are essential to effectively implement in software organizations, but little attention has been given in the literature to how these practices can be managed. Our study aims to propose and develop a framework for effectively managing DevOps practices. We have conducted an empirical study using the publicly available HELENA2 dataset to identify the best practices for effectively implementing DevOps. Furthermore, we have used the prediction algorithms such as Support Vector Machine (SVM), Artificial Neural Network (ANN) and Random Forest (RF) to develop a prediction model for DevOps implementation. The findings of this study show that “Continuous deployment”, “Coding standards”, “Continuous integration”, and “Daily Standup” "are the most significant practicesduring the life cycle of projects for effectively managing the DevOps practices. The contribution of this study is not only limited to investigating the best DevOps practices but also provides a prediction of DevOps project success and prioritization of best practices. It can assist software organizations in getting the best possible practices to focus on based on the nature of their projects.