
# Sensitivity Analysis - Science topic

Explore the latest questions and answers in Sensitivity Analysis, and find Sensitivity Analysis experts.

## Questions related to Sensitivity Analysis

I am working on research in the inventory management field, and I am unable to understand the procedure for performing a sensitivity analysis on a model. How can I carry out optimization and sensitivity analysis in MATLAB? Kindly guide me.

Can you please provide guidance on performing sensitivity analysis in Fuzzy AHP using Excel?

I need a step-by-step procedure on how to perform a single parameter sensitivity analysis to evaluate the impact of parameters on a vulnerability index. I am particularly confused about how to create the sub-areas in GIS and compute the parameter rates and weights.

Based on my 3D analysis, how can I move an object from its old 'Y' dimension to a new 'Y' dimension?

I want to plot a graph showing the effect of OP on syngas composition, but I am struggling with sensitivity analysis in Aspen Plus. Which manipulated variable should I choose? Here's a picture.

Hello,

I am trying to conduct a sensitivity analysis for a simple mediation model, that was conducted in SPSS using the PROCESS Macro by Hayes (Model 4).

I have the following information:

- Sample Size N = 1081
- α = .05
- 1−β = .90

How can I find out the size of the indirect effect that can be detected with these settings? Which program can I use for this?

Do I have to know the effects of paths a and b to be able to conduct the sensitivity analysis?

The figure is taken from the English Wikipedia article "Mediation (statistics)".

Dear researchers,

I am writing to request your assistance in obtaining literature, research papers, or any valuable insights regarding sensitivity analysis in the artificial neural network modelling of geopolymer concrete.

Furthermore, I would appreciate practical recommendations or best practices for conducting sensitivity analysis in this domain. Your contribution will greatly benefit my study, and I appreciate your support. Thank you for your time and consideration.

I am working on a meta-analysis. The sensitivity analysis produced a significant change in the result (although Egger's regression is fine: p > 0.05).

Illustration:

1. Original finding (11 studies): OR = 0.60 (95% CI 0.45-2.1, p = 0.45)

2. After sensitivity analysis (10 studies, e.g. removing study X): OR = 0.50 (95% CI 0.25-0.61, p = 0.04)

Note: these are not the true values, but they resemble our findings.

What should I do? Remove study X from the final analysis, or keep the original set?

And how should I interpret it? Thank you.

How can I optimize the number of theoretical stages in the reactive, rectifying, and stripping sections using the sensitivity analysis tool in Aspen Plus? What would be the independent and dependent variables? Kindly explain; I am performing simulations of a reactive distillation process.

How should I proceed with the sensitivity analysis? Can it be done in RevMan 5, or do we need to export the files to other software?

Throughout the literature, the Curve Number (CN) has been identified as the most sensitive parameter in hydrological models. However, when I conducted a sensitivity analysis in my project, the CN did not show the expected level of sensitivity. What could be the reason for this? Thank you.

Sensitivity analysis using the Spearman rank correlation coefficient (SRCC) can be performed with

SRCC = 1 - 6 * sum_{i=1}^{N} [R(x_i) - R(y_i)]^2 / (N(N^2 - 1)),

where R(·) denotes ranks. This gives a single point value of SRCC, between -1 and 1, for two parameters such as DNI and efficiency. For example, as DNI varies from 500 to 900 W/m^2, efficiency rises from 60% to 67%; applying the formula yields one SRCC value.

But I need to know how one can get a series of SRCC values. I am attaching papers in which the authors used the same formula and obtained plots with a series of SRCC values. Kindly help me understand this problem.
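As a sketch of the formula above (assuming tie-free data; the DNI and efficiency ranges are illustrative), one common way to obtain a series of SRCC values is to evaluate the coefficient over a sliding window of the paired measurements:

```python
import numpy as np

def srcc(x, y):
    # SRCC = 1 - 6 * sum(d_i^2) / (N * (N^2 - 1)), d_i = rank differences.
    rx = np.argsort(np.argsort(x)) + 1  # ranks of x (no ties assumed)
    ry = np.argsort(np.argsort(y)) + 1
    n = len(x)
    return 1 - 6 * np.sum((rx - ry) ** 2) / (n * (n ** 2 - 1))

def srcc_series(x, y, window):
    # A series of SRCC values from a sliding window over the paired data.
    return [srcc(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

dni = np.linspace(500, 900, 9)   # W/m^2, illustrative
eff = np.linspace(60, 67, 9)     # %, rises monotonically with DNI
print(srcc(dni, eff))            # 1.0 for a perfectly monotone relation
```

Each window then contributes one point on an SRCC-versus-time (or versus operating point) plot; whether the papers you cite used a window or computed one SRCC per output variable has to be checked against those papers.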

I am currently using MSC Marc Mentat for FEA and conduct the mesh convergence/sensitivity analysis manually. Is there an automatic way to perform a mesh convergence/sensitivity analysis?

I have found a formula to calculate the importance factor of each variable of an ANN model in Linda Milne's paper "Feature Selection Using Neural Networks with Contribution Measures".

Can anyone explain how to use this formula? (Pictures of the formula are attached.)

The ANN architecture is 9-9-1, so I have the input-hidden weights in a 9x9 matrix and the hidden-output weights in a 1x9 matrix. I am confused by the w_ji, w_oj, and w_jl mentioned in the formula. Can anyone explain how to enter the weights into this formula?

Thanks

Hello, I'm simulating a blanking operation in Abaqus/Explicit and right now I'm trying to figure out the optimum size of the smallest elements. My question is: do I have to simulate the whole blanking operation (which can take hours from a certain element size onward) in order to do a good mesh sensitivity analysis or can I just simulate only a fraction of the operation, like only punching half of the way, or a tenth of the way and still have a valid mesh sensitivity analysis?

Thanks in advance and feel free to ask for further information.

How can I plot approximate entropy, permutation entropy, and a sensitivity analysis for a chaotic map to check its randomness and complexity? I tried to code these tests but could not get good results. Any help would be much appreciated.
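For the permutation entropy part, a minimal Bandt-Pompe implementation may help as a starting point (the order m, delay tau, and the logistic map used as a chaotic test signal are illustrative choices, not taken from the question):

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    # Bandt-Pompe permutation entropy, normalized to [0, 1] by log(m!):
    # count ordinal patterns of length m, then take the Shannon entropy.
    n = len(x) - (m - 1) * tau
    patterns = Counter(tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
                       for i in range(n))
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

# Illustrative chaotic signal: the logistic map at r = 4
x, traj = 0.1, []
for _ in range(2000):
    x = 4 * x * (1 - x)
    traj.append(x)
print(permutation_entropy(np.array(traj)))  # high for chaos, 0 for monotone data
```

Plotting this value while sweeping the map parameter (e.g. r of the logistic map, or the control parameter of your own map) gives the complexity curve that is usually shown alongside a sensitivity analysis.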

Hi, all.

I am using G*Power to perform a sensitivity analysis for a one-way MANOVA. The analysis suggested my study had a minimum detectable effect size of f^2(V) = .01.

I am not quite sure about this effect size f^2(V).

Is it just the effect size f^2 for Pillai's trace?

I have used a Likert-scale instrument. How can I do a scale sensitivity analysis?

I need to do a sensitivity analysis of recruited studies in a meta-analysis using RevMan 5. Can anyone guide me?

After having run the propensity score analysis in R, I need to conduct a sensitivity analysis in the same software. Kindly suggest which R package I should use.

I am measuring uncertainty and performing a sensitivity analysis for the Human Development Index. I want to set up goalposts for the living-standard (income) indicator. Can anyone help me? Thanks in advance.

Hello,

I am using the sensitivity analysis tool in Aspen Plus. The run completes successfully and I can generate the results graph, but the table in the results tab of the sensitivity analysis section does not show any results.

I have a forest plot using the standardized mean difference (SMD); however, the confidence interval lines for each study fall inside the study squares because their values are narrow relative to the scale. Is there a way to change the scale? Or should I just do a sensitivity analysis to see whether the study with the highest SMD affects the overall estimated SMD?

I did the meta-analysis with metacont, using a random-effects model and Cohen's method, and plotted it with forest.meta from the meta package.

When AHP is used only for calculating the weights and SAW is used for ranking the alternatives, how does one carry out the sensitivity analysis?

How can I get the sensitivity matrix, in particular for a cracked beam modelled with FEM, by carrying out a natural frequency sensitivity analysis?

Dear all,

I'm currently working on the optimization of the daily activity chain using a genetic algorithm (GA). I developed a utility function consisting of ten variables and their weights:

U = v1*w1 + v2*w2 + ... + v10*w10.

The numerical values of the variables can be obtained from different sources, but the weight of each variable is based on the user's preferences. What is the best way to treat these weights: sensitivity analysis or scenario analysis?
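One simple option is a one-at-a-time (OAT) sensitivity sketch on the weights of a utility function like the one above (the variable values and the 10% perturbation below are illustrative assumptions):

```python
import numpy as np

def utility(v, w):
    # U = v1*w1 + ... + vn*wn
    return float(np.dot(v, w))

def oat_weight_sensitivity(v, w, delta=0.10):
    # One-at-a-time: perturb each weight by +10% (relative), renormalize so
    # the weights still sum to 1, and record the resulting change in U.
    base = utility(v, w / w.sum())
    effects = []
    for i in range(len(w)):
        wp = w.copy()
        wp[i] *= 1 + delta
        effects.append(utility(v, wp / wp.sum()) - base)
    return effects
```

Plotting these effects, or repeating the calculation with whole user-specific weight sets, corresponds to the sensitivity-analysis and scenario-analysis options respectively, so the two approaches can also be combined.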

Dear ResearchGate community,

I am working on DNA samples obtained from patients screened for cervical cancer/HPV infection. To calculate the viral load of each sample, I am looking for a technique to quantify the amount of viral DNA present in each sample.

That said, considering the fact that the viral genetic content can be present both as episomes as well as integrated into the human genome, what would be the best approach to get insight from the "effective" viral load?

In other words, for the diagnostic purposes which one of the two types of viral DNA- if not both- is of significance, and how can it be quantified?

Many thanks in advance,

Hello everyone, I am performing a Mendelian randomization in R to check whether there is a causal association between genetically predicted eGFR and severe COVID-19. At the end of my analysis I would like to run a sensitivity analysis to check that the results are robust. Do I have to check whether the IV assumptions hold, or do I check it some other way?

I read that single-parameter sensitivity analysis and map-removal sensitivity analysis are commonly used to assess the influence of parameters in DRASTIC and other index-based models. However, how do I actually do it? The output of DRASTIC is a raster with many pixels, so calculating it for all pixels is an extremely large task. Is there a software or an efficient way to do it? Can it be carried out in ArcMap/ArcGIS? I've attached an example of an article that used sensitivity analysis for a DRASTIC-LU model.

I have two questions and hope for some expert advice please?

1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?

2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?

Thank you so much.

Dear researchers,

As you know, a new type of derivative has recently been introduced that depends on two parameters: the fractional order and the fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to their kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.

The power and accuracy of these operators in simulations have motivated many researchers to use them in modelling different diseases and processes.

Are there any researchers working with these operators on equilibrium points, sensitivity analysis, and local and global stability?

If you would like to collaborate with me, please contact me by the following:

Thank you very much.

Best regards,

Sina Etemad, PhD

My SWAT-CUP is unable to accept more than 18 parameters for sensitivity analysis. Does anyone have a solution for this?

Dear colleagues, I am looking for a user-friendly tool for sensitivity analysis of simulation-based experiments. Ideally it would include procedures for specifying the parameters to explore, for computing and generating samples, and for evaluating the sensitivity with modern methods such as the Morris method.

I already know about two of them: SimLab and Dakota. However, SimLab no longer seems to be available (https://joint-research-centre.ec.europa.eu/sensitivity-analysis-samo/simlab-and-other-software_en) and I cannot find an alternative download site; it was my preferred tool. I also know about the Python library SALib. Any other ideas and suggestions?
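If SALib is an option, its Morris sampler/analyzer covers this; as a fallback, the elementary-effects idea behind the Morris method is small enough to sketch directly (the test function, trajectory count, and step size below are illustrative assumptions):

```python
import numpy as np

def morris_mu_star(f, bounds, r=20, delta=0.25, seed=0):
    # Morris screening sketch: r one-at-a-time trajectories; mu* is the mean
    # absolute elementary effect per input, computed in the unit cube and
    # mapped to the physical `bounds`.
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo = np.array([b[0] for b in bounds], float)
    span = np.array([b[1] - b[0] for b in bounds], float)
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # base point, room to add delta
        fx = f(lo + span * x)
        for i in rng.permutation(k):            # perturb one factor at a time
            x[i] += delta
            fx2 = f(lo + span * x)
            ee[t, i] = (fx2 - fx) / delta
            fx = fx2
    return np.abs(ee).mean(axis=0)

# Illustrative model: input 0 matters 100x more than input 1
mu = morris_mu_star(lambda z: 10 * z[0] + 0.1 * z[1], [(0, 1), (0, 1)])
```

Ranking the inputs by mu* is the usual screening step; a full tool such as SALib or Dakota additionally reports sigma (the spread of the elementary effects) to flag nonlinearity and interactions.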

Hello,

I am currently working on sensitivity analysis in the context of AHP. I use the online tool BPMSG from Goepel; maybe someone here knows it. However, I have a problem with the traceability of the results. Let's assume that there are exactly 3 criteria in the AHP (C1, C2, C3). I would like to know how the final value for an alternative (a1) changes if one of the criteria changes in weighting.

Say C1 decreases by x. The value x that is taken away from C1 must be distributed to C2 and C3. I wonder which method is used to do this: is x simply distributed equally to C2 and C3, or according to the shares of C2 and C3 in the sum C2 + C3? When I use the latter, I get the following for the remaining two criteria:

(C1-x) = New C1

(C2 + (C2 / (C2 + C3)) * x) = New C2

(C3 + (C3 / (C2 + C3)) * x) = New C3

Unfortunately, I do not know if this is correct. If I multiply the criteria weights by the corresponding values of alternative a1 and combine everything into a final value, I can do the same for the other alternatives. But when I compare the graphs to see how big x has to be to change the final prioritization of the alternatives, I always get different values than the online tool. Therefore, I would like to know whether this redistribution of the weights is correct.

I hope someone can help me despite the long question. Thanks a lot!
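For what it's worth, the proportional redistribution described above can be written down directly (this sketches only that scheme; whether BPMSG actually uses it is exactly the open question):

```python
import numpy as np

def redistribute(weights, i, x):
    # Decrease criterion i by x and distribute x to the other criteria in
    # proportion to their shares of the remaining weight:
    #   new C_j = C_j + (C_j / sum of others) * x,   new C_i = C_i - x
    w = np.asarray(weights, dtype=float)
    others = w.sum() - w[i]
    new = w + x * w / others   # every entry gains its proportional share...
    new[i] = w[i] - x          # ...then criterion i is overwritten with C_i - x
    return new
```

With weights (0.5, 0.3, 0.2) and x = 0.1 taken from C1, this returns (0.4, 0.36, 0.24), which still sums to 1; the alternative scores are then recomputed with the new weights, and comparing against equal splitting (x/2 to each of C2, C3) shows which rule the online tool follows.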

We have used RSM for modelling a problem, creating a formulation relating the output to the inputs. Now we want to do a sensitivity analysis for validation. Please suggest a good approach for this.

Best Regards

Hi, I need to perform a mesh sensitivity analysis for a stenosis CFD analysis. Is it sufficient to try several different meshes and check whether the residuals (x-, y-, and z-velocity) converge?

Thank you all, I'm a desperate beginner..

Hi! I have a question regarding a randomized controlled trial. The trial compared steroid versus steroid+MMF as first-line treatment in immune thrombocytopenia (ITP). The primary outcome was time to treatment failure, defined as a platelet count < 30x10^9/L in spite of 2 weeks of treatment in the steroid arm, and in spite of 2 months of treatment in the steroid+MMF arm.

First of all, this feels odd: why did the trial design define treatment failure differently in the two arms?

And in the statistical analysis part, here is the explanation:

"Sensitivity analysis will include landmark analysis or shifting the time line to classify all treatment failures before 2 months as at 2 months in order to prevent potential biases caused by different definitions of treatment failure time frames between the two groups."

What does this mean? And does it justify using two different definitions to measure the outcome?

For your information, here is the link for the trial design:

Thanks a lot! I am new to this field so I may be asking very basic questions. Sorry about that ;)

I tried the command meta summarize with the leave-one-out option, but Stata says it is unrecognized.

This is for a systematic review and meta-analysis.
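In case it helps to see what a leave-one-out sensitivity analysis computes, here is a minimal sketch of leave-one-out pooling with fixed-effect inverse-variance weights (the effect sizes and variances are made up; Stata's meta suite also supports random-effects pooling):

```python
import numpy as np

def leave_one_out_pooled(effects, variances):
    # Re-pool the effect sizes (fixed-effect, inverse-variance weights)
    # once with each study omitted in turn.
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    pooled = []
    for i in range(len(e)):
        keep = np.arange(len(e)) != i        # drop study i
        w = 1.0 / v[keep]                    # inverse-variance weights
        pooled.append(float((w * e[keep]).sum() / w.sum()))
    return pooled
```

A study whose omission moves the pooled estimate far away from the others is the one driving the result.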

After doing FAHP, how can I check the robustness of the model using sensitivity analysis? For example, I have four factors in my model, each with a weight, such that the weights sum to 1. If I change the weight of one factor while keeping the total at 1, what is the impact on the other factors' weights? How do I calculate the revised weights of the other factors?

Hi all,

I am working on a meta-analysis after screening a few studies. I am trying to do subgroup analyses (based on geographical location, gender, and age) to understand the heterogeneity in the MA. I have data on the mean, SD, and sample size extracted from the studies. Upon doing the MA, I found that even within every subgroup, the heterogeneity (I² value) is too high. Does that imply that I cannot combine these studies, or can you recommend some other way to conclude the meta-analysis?

I cannot do a sensitivity analysis, as the sample sizes in all studies are less than 70.

May I get suggestions on this?

Thanks much

I'm interested in testing the robustness of an outcome definition using different cut-off points (e.g. 80% vs. 90% of pills taken to define adherence). Many articles report such comparisons as sensitivity analyses but are not specific about the type of test employed. I doubt a simple chi-square can be used, since both outcome definitions are applied to the same sample (the groups are not independent).

Any suggestion of a statistical test to handle such types of data?

Please can I get a concise description of how to carry out a single parameter sensitivity analysis in flood hazard mapping using the MCDM method?

I want to know which sensitivity analysis methods are commonly used in MCDM-related studies.

The result of the sensitivity analysis (the coefficients) helps us decide whether the estimated ATT was the pure effect of the treatment or not. But on which coefficient should I focus more?

I am looking for advice on how to show changes in a dynamical model system based on equations. I want to carry out a sensitivity analysis showing how strongly a percentage change in one parameter affects the system itself. Can anyone recommend some literature, or describe how you did it yourself?

I solved some problems and did the sensitivity analysis. Two of the parameters returned a sensitivity index of 1.

I would like to know whether a sensitivity index of 1 has any special meaning.

In general, the models in this paper need to be validated, so what aspects should be used to verify a discrete dynamic Bayesian network? And can the sensitivity analysis in the GeNIe software verify validity?

Please help me, thank you.

I am trying to develop a WASP8 model that can predict the dispersion of contaminants in seawater. Before I do a sensitivity analysis, I want to know whether the discharge flow has any effect on the dispersion.

Thank you

I want to do some sensitivity analysis by altering the meteorology (e.g. increasing temperature) in the WRF-Chem model. Can anybody suggest how I can do this?

Thanks in advance for your kind help.

Best Wishes,

Anwar Khan

I would like to perform a sensitivity analysis of a CFD solver. There are 8 input variables, and each has 2-3 prescribed numerical values.

Evaluating one set of parameters requires three costly simulations (each running for 20 hours on 800 CPU cores). The budget for these simulations is limited, and due to the queuing system of the HPC it would take a long time to get the results.

I am aware of Latin hypercube hierarchical refinement methods that allow starting the sensitivity analysis with a smaller budget and subsequently incorporating newer results as they become available.

But those methods work with continuous variables. Is there a method for categorical and ranked/ordinal variables?

Dear all,

I have simulated a chemical process and I would like to run a sensitivity analysis of the impact of its parameters. Could you provide me with a tutorial on the Monte Carlo method?

Thanks in advance,
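Before any tutorial, it may help to see that the core Monte Carlo loop is short: sample the parameters from assumed distributions, run the model, and correlate inputs with outputs (the toy process model and lognormal inputs below are illustrative assumptions, not taken from the question):

```python
import numpy as np

def mc_sensitivity(model, samplers, n=20_000, seed=0):
    # Monte Carlo sensitivity sketch: draw n samples of each parameter,
    # evaluate the model, and report the rank correlation of each input
    # with the output as a sensitivity measure.
    rng = np.random.default_rng(seed)
    X = np.column_stack([draw(rng, n) for draw in samplers])
    y = model(X)
    rank = lambda a: np.argsort(np.argsort(a))
    ry = rank(y)
    return [float(np.corrcoef(rank(X[:, j]), ry)[0, 1])
            for j in range(X.shape[1])]

# Toy process model: output dominated by the first parameter
samplers = [lambda rng, n: rng.lognormal(0.0, 0.3, n),
            lambda rng, n: rng.lognormal(0.0, 0.3, n)]
rho = mc_sensitivity(lambda X: 5 * X[:, 0] + X[:, 1], samplers)
```

Replacing the toy model with a call to the process simulator (and the samplers with the actual parameter distributions) turns this into the usual Monte Carlo sensitivity study; variance-based indices such as Sobol's are the natural next step.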

I am working on a problem in design optimisation. I would like to ask: for an uncertain problem, should design optimisation under uncertainty techniques be used?

As the title says, I am looking to vary a Fortran-defined variable in a sensitivity analysis.

For example, I am trying to vary the mole ratio of toluene to methanol in a feed stream to study the effect of the molar feed ratio on the conversion of toluene.

Sensitivity analysis can be used to check the variation of the optimum solution when changing the coefficients of the objective function or the constant values in the constraints. Are there any other things to investigate using this approach?

Hello

I'm currently trying to create a chi map using TopoToolbox for MATLAB. In the available literature, most calculations use a single m/n ratio (0.45-0.5) for the entire area, while some perform a sensitivity analysis to get the best m/n ratio per watershed. However, I don't know whether the calculations would improve using the best m/n ratio per stream, or whether the per-watershed sensitivity analysis is good enough.

By the way, I'm working in a landscape highly controlled by fault activity.

Any comments will be appreciated.

Best Regards

Lester

Mathematical programming is the best optimization tool, with many years of strong theoretical background. It has been demonstrated that it can efficiently solve complex optimization problems at the scale of one million design variables, and the methods are very reliable. Besides, there are mathematical proofs for the existence of the solution and the globality of the optimum.

However, when there are discontinuities in the objective function, problems arise because the problem is non-differentiable. Methods such as subgradients have been proposed to solve such problems, yet I cannot find many state-of-the-art papers on engineering optimization of discontinuous problems using mathematical programming. Engineers mostly use metaheuristics for such cases.

Can all problems with discontinuities be solved with mathematical programming? Is it easy to implement subgradients for large-scale industrial problems? Do they work for non-convex problems?

A simple example of such a function is attached here.

We all know that mathematical programming is the best optimization tool with many years of strong theoretical background, presenting reliable solutions with high efficiency, and mathematical proofs of global optimality are often available. However, when the analytical calculation of sensitivities is impossible, researchers prefer metaheuristics, even though these are inefficient and unreliable for large-scale problems.

The development of surrogate models such as the Kriging method, model-based methods such as radial basis function interpolation, and novel machine learning tools helps us approximate the objective function, so model-based sensitivities can be used instead. Machine learning can also help to predict sensitivity information.

So, improved function or sensitivity approximation, coupled with mathematical optimization, could make metaheuristics disappear. I guess there would then be no need for metaheuristics (at least in continuous optimization, as far as I know).

What do you think about it? Do you agree? Do you have any experience with this? I am interested in both mathematical programming and metaheuristics, but I prefer efficiency.

Hi

I need to do a grid sensitivity analysis to find the node size that gives the most accurate heat loss. The problem is that decreasing the node size changes the heat loss, but the result never converges. I changed the code to the simplest case, with constant temperature on the boundaries, but I still see the same problem.

(The domain is two-dimensional, with the attached boundary conditions.)

Any help would be appreciated.

I have attached my code.

Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz

Dear all,

I hope you are healthy and well.

Everyone who learns about financial modelling comes across sensitivity analysis, but my problem is how it can be simulated in MATLAB. If you have an .m file, would you please attach it for me?

best wishes,

Hi everyone, I am performing Sobol sensitivity analysis and wondering whether there is a way to set a threshold on the sensitivity index, so that parameters with an index greater than the threshold are considered sensitive.

Many thanks!

The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of such matrices; however, doing so is computationally costly. I want to know about related research: when a single entry (or a few entries) of the original matrix is perturbed, how much does this affect the entries of the inverse?
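For a single-entry perturbation, the Sherman-Morrison identity gives a closed-form answer: adding delta to entry (i, j) of A is the rank-one update A + delta * e_i e_j^T, so the perturbed inverse follows from the old one without refactorizing (a small dense sketch; the 2x2 matrix is illustrative):

```python
import numpy as np

def inverse_after_entry_perturbation(Ainv, i, j, delta):
    # Sherman-Morrison for A' = A + delta * e_i e_j^T:
    #   A'^{-1} = Ainv - delta * outer(Ainv[:, i], Ainv[j, :]) / (1 + delta * Ainv[j, i])
    # The update is rank one, so the change in every entry of the inverse
    # is explicit: it is governed by column i and row j of the old inverse.
    denom = 1.0 + delta * Ainv[j, i]
    return Ainv - delta * np.outer(Ainv[:, i], Ainv[j, :]) / denom

A = np.array([[2.0, 1.0], [1.0, 3.0]])
Ainv = np.linalg.inv(A)
Ainv_new = inverse_after_entry_perturbation(Ainv, 0, 1, 0.5)
```

The sensitivity of the inverse to the (i, j) entry is thus scaled by 1/(1 + delta * (A^{-1})_{ji}); when that denominator approaches zero the perturbed matrix approaches singularity and small entry changes cause large changes in the inverse. The same identity (and its block form, Woodbury) underlies efficient update schemes for sparse factorizations.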

**How to quantify feature importance/sensitivity analysis in discrete Bayesian Network?**

I am looking for an analytical, probabilistic, statistical, or other way to compare the results of several approaches implemented on the same test model. These approaches can be different optimization techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.

I would like to hear whether there is a technique you use in your field, as I might be able to derive something for my problem.

Thank you very much.

Hello everyone. I am currently doing a dynamic study of a distillation column. How can I do a sensitivity analysis on the column, i.e., for changes in the feed conditions (temperature, pressure, flow rate)? I want to avoid using MATLAB and instead do the sensitivity analysis in Aspen Plus Dynamics. Please guide me. Thank you.

I have 24 sets of model results based on different inputs to 3 model parameters. Since all inputs are equally plausible, I am using the coefficient of variation to quantify the model uncertainty.

I would like to estimate the relative contribution of each model parameter to that level of uncertainty. I have come across Sobol's main and total effects; however, a simulation is not required in my case, and I am not sure how to apply this approach to my results. I would appreciate any recommendation.

I am using the SLP method with sensitivity analysis via the adjoint method. How can I check whether the obtained solution is actually the global optimum?

Probabilistic sensitivity analysis is criticised for potentially introducing uncertainty itself because of the consideration of the distribution of the parameters. Are there ways of addressing this potential for additional uncertainty?

Hi all,

I have a question about an equation I cannot fully understand, related to the semi-analytical adjoint method.

The equation I am trying to understand is Eq. (4) in chapter 3, "Sensitivity analysis"; please find the attached document. It seems that Eq. (4) is calculated from Eqs. (1), (2), and (3).

However, when I tried to derive it myself, I obtained lambda, not the transpose of lambda, in the third term of Eq. (4).

Can anyone please explain how Eq. (4) is obtained from Eqs. (1)-(3)?

Given a trained discrete Bayesian network, how can one quantify the impact of different nodes on a target node, e.g., the impact of various factors on farmers' adoption of a certain new technology?

One thing I can think of is to measure the cross-entropy between the unconditional distribution of the target variable and its conditional distribution given the evidence for the investigated factor. This sounds like the information gain in the target variable, but I did not find literature to support it... Please let me know if you have other suggestions. Thank you!

Meta-analysis: it is important to perform a sensitivity analysis when heterogeneity is significant. However, when the researchers did sensitivity analyses, the results (I² changed from 93% to 80%) did not decrease the heterogeneity appreciably. How should this be interpreted?

I have the following model:

X = A*B

where A is a list of values with a lognormal distribution (size = 13) and B is another list of values with a lognormal distribution (size = 13).

How can I perform first-order, second-order, and total-order Sobol sensitivity analysis of this model in R?

Please help me with the steps.
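In R, the `sensitivity` package implements these estimators; to make the steps concrete, here is a minimal numpy sketch of the Monte Carlo estimators for the first-order and total-order indices of X = A*B (the lognormal parameters and sample size are illustrative; second-order indices follow the same pattern with two columns swapped at once):

```python
import numpy as np

def sobol_first_and_total(f, sample, n=50_000, seed=0):
    # Saltelli-style Monte Carlo estimators:
    #   S_i  = mean(fB * (f(AB_i) - fA)) / Var        (first order)
    #   ST_i = 0.5 * mean((fA - f(AB_i))^2) / Var     (total order)
    # where AB_i is sample matrix A with column i replaced by column i of B.
    rng = np.random.default_rng(seed)
    A, B = sample(rng, n), sample(rng, n)
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = [], []
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fABi = f(ABi)
        S.append(float(np.mean(fB * (fABi - fA)) / var))
        ST.append(float(0.5 * np.mean((fA - fABi) ** 2) / var))
    return S, ST

# X = A * B with illustrative lognormal inputs
S, ST = sobol_first_and_total(lambda M: M[:, 0] * M[:, 1],
                              lambda rng, n: rng.lognormal(0.0, 0.5, (n, 2)))
```

For this product model S_i < ST_i for both inputs; the gap ST_i - S_i is the interaction contribution, which is what a second-order index S_12 would capture. Note the indices refer to the input distributions, not to the 13 stored values, so the distribution parameters should be fitted to those values first.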

I am conducting a systematic review and meta-analysis of the prevalence of loss to follow-up among MDR-TB patients, using Stata 14. I have computed the pooled estimate using the metan command and obtained high heterogeneity (I² = 94.4%, p < 0.0001). How can I proceed? Should I go on to additional analyses such as subgroup analysis, meta-regression, sensitivity analysis, or funnel plots?