# Sensitivity Analysis - Science topic

Explore the latest questions and answers in Sensitivity Analysis, and find Sensitivity Analysis experts.
Questions related to Sensitivity Analysis
Question
I am working on research in the field of inventory management. I do not understand the procedure for the sensitivity analysis of a model. How do I perform optimization and sensitivity analysis in MATLAB? Kindly suggest and guide me on this.
Thank you for your response, but I want to know the procedure for sensitivity analysis, i.e. the coding or steps in MATLAB to carry out a sensitivity analysis of a mathematical inventory model. Kindly guide me on this.
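The general recipe is: evaluate the model at a base point, perturb one parameter at a time, and normalize the relative output change by the relative input change. A minimal one-at-a-time (OAT) sketch in Python, where the EOQ cost function and parameter values are illustrative assumptions (not any specific published model); the same loop translates almost line-for-line to MATLAB:

```python
import math

def total_cost(demand, order_cost, holding_cost):
    """Classic EOQ annual total cost -- an illustrative inventory model."""
    q_opt = math.sqrt(2 * demand * order_cost / holding_cost)
    return demand / q_opt * order_cost + q_opt / 2 * holding_cost

base = {"demand": 1000.0, "order_cost": 50.0, "holding_cost": 2.0}
base_cost = total_cost(**base)

# One-at-a-time (OAT) sensitivity: perturb each parameter by +10% and
# normalize the relative output change by the relative input change.
sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10
    rel_change = (total_cost(**perturbed) - base_cost) / base_cost
    sensitivity[name] = rel_change / 0.10

print(base_cost, sensitivity)
```

Parameters whose index is near 1 move the cost almost proportionally; values near 0 barely matter.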
Question
Can you please provide guidance on performing sensitivity analysis in Fuzzy AHP using Excel?
Saaty, the creator of AHP, stated in writing that you can't use fuzzy in AHP, because the latter is already fuzzy.
Question
I need a step-by-step procedure on how to perform a single parameter sensitivity analysis to evaluate the impact of parameters on a vulnerability index. I am particularly confused about how to create the sub-areas in GIS and compute the parameter rates and weights.
Look up univariate sensitivity analysis :)
Question
Based on my 3D analysis, how can I move an object from its old Y dimension to a new Y dimension?
To move an object from its old Y dimension to a new Y dimension based on your 3D analysis, you can follow these general steps:
1. Select the object: Identify the object in your 3D analysis that you want to move and make sure it is selected or highlighted.
2. Access the transformation tools: Look for the transformation tools or options in your 3D analysis software. These tools typically include options for translation, rotation, and scaling.
3. Choose the translation tool: Locate the translation tool within the transformation options. This tool allows you to move the object in the desired direction.
4. Specify the axis: Determine the axis along which you want to move the object. In this case, you want to change the Y dimension, so you would specify the Y-axis as the axis for translation.
5. Enter the distance: Specify the distance you want to move the object along the Y-axis. This can be the difference between the old Y dimension and the new Y dimension.
6. Apply the transformation: Click or activate the tool to apply the translation based on the specified distance. The object should move accordingly, aligning with the new Y dimension.
Note that the specific steps may vary depending on the software you are using for your 3D analysis. Consult the software documentation or help resources for detailed instructions on how to perform translations or transformations in your particular software.
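As a minimal sketch of steps 4-6, assuming the object is available as an array of vertex coordinates (the values below are hypothetical), the Y translation is a single vector operation:

```python
import numpy as np

# Vertices of the object as an (N, 3) array of [x, y, z] coordinates
# (hypothetical values for illustration).
vertices = np.array([[0.0, 2.0, 0.0],
                     [1.0, 2.0, 0.0],
                     [0.0, 3.0, 1.0]])

y_old, y_new = 2.0, 5.0           # old and target Y reference positions
vertices[:, 1] += y_new - y_old   # translate along the Y axis only
print(vertices)
```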
Question
I want to plot a graph showing the effect of OP on syngas composition, but I am struggling with sensitivity analysis in Aspen Plus. Which manipulated variable should I choose? Here's a picture.
Hello, is your problem only the mass balance when you mix air of different oxygen content with syngas, or is it the effect of the oxygen on the final gas composition after the reaction?
In either case, you cannot vary the oxygen concentration while keeping the air stream constant, because that variable is not directly available.
If that is not a problem, you can vary the mole or mass flow of the oxygen (the air flow will change) and then, in the Tabulate section, write the formula moles of oxygen in air / moles of air (the inlet oxygen concentration that you set) and, in the same way, moles of oxygen in the gas after the mixer / moles of gas after the mixer (the final oxygen concentration, i.e. your result). Of course, you must define all the variables that you reference in Tabulate.
If you have to keep the air flow constant, try to find the concentration variable in a property set and select it in the Vary section.
BR,
Domenico Flagiello
Question
Hello,
I am trying to conduct a sensitivity analysis for a simple mediation model that was estimated in SPSS using the PROCESS macro by Hayes (Model 4).
I have the following information:
• Sample Size N = 1081
• α = .05
• 1−β = .90
How can I find out the size of the indirect effect that can be detected with these settings? Which program can I use for this?
Do I have to know the effects of path a and path b to be able to conduct the sensitivity analysis?
The figure is taken from the English Wikipedia article "Mediation (statistics)".
See my attached paper where I performed a univariate sensitivity analysis. It tells you what to do there :)
Question
Dear researchers,
I am writing to request your assistance in obtaining literature, research papers, or any valuable insights regarding sensitivity analysis in the artificial neural network modelling of geopolymer concrete. Furthermore, I would appreciate your providing practical recommendations or best practices for conducting sensitivity analysis in this domain. Your contribution will greatly benefit my study, and I appreciate your support.
Thank you for your time and consideration.
There are many studies on Google Scholar; you can find many of this type there. Sandesh Karki may be interested in working on the same topic.
Question
I am working on a meta-analysis. However, the sensitivity analysis showed a significant change in the result (although the Egger regression is okay --> p > 0.05).
Illustration:
1. Original finding (11 studies): OR = 0.60 (95% CI = 0.45-2.1, p=0.45)
2. After sensitivity analysis (10 studies --> let's say removing study X): OR = 0.50 (95% CI = 0.25-0.61, p=0.04)
Note: these are not the true value but it resembles our finding
What must I do? Remove the study from the final analysis, or still include the original one?
And how do I interpret it? Thank you.
Thank you very much for sharing your thoughts, Dr. Sébastien Buczinski Joseph Whittaker.
Question
How do I optimize the number of theoretical stages in the reactive, rectifying, and stripping sections using the sensitivity analysis tool in Aspen Plus? What would be the independent and dependent variables? Kindly explain; I am performing simulations of a reactive distillation process.
Precisely! @Abdul Wahab
Please feel free to contact me if you have any questions.
Question
How do I proceed with sensitivity analysis? Can it be done in RevMan 5, or do we need to export files to other software?
Question
Sensitivity Analysis
Thank you very much for your valuable information.
Question
Throughout the literature, the Curve Number (CN) has been identified as the most sensitive parameter in hydrological models. However, in my project, when I conducted a sensitivity analysis, the CN number did not show the expected level of sensitivity. What could be the reason for this occurrence? Thank you.
A DNK is where a researcher makes a comment or asked a question with so many abbreviations that one simply Doesn't Know what s/he is talking about. Best wishes David Booth
Question
Sensitivity analysis using the Spearman rank correlation coefficient (SRCC) can be performed with SRCC = 1 - 6 * sum_{i=1}^{N} [R(x_i) - R(y_i)]^2 / (N(N^2 - 1)). This gives a single value of SRCC between two parameters, such as DNI and efficiency. For example, as DNI varies from 500 to 900 W/m^2, efficiency rises from 60% to 67%; applying the formula here yields one SRCC value between -1 and 1.
But I need to know how one can get a series of SRCC values. I am attaching papers in which the authors used the same formula and obtained plots with a series of SRCC values. Kindly help me understand this problem.
Without going into too much detail, I wonder if a slightly different approach would solve your problem.
The Spearman rank correlation coefficient is the Pearson correlation coefficient estimated for ranks. Therefore, I suggest that you give up the formula you used. Instead, find ranks and estimate Pearson correlation coefficient for ranks.
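A minimal sketch of that suggestion, with made-up DNI/efficiency values; it also shows one common way to obtain a series of SRCC values, namely computing the coefficient over a sliding window of the data:

```python
import numpy as np
from scipy import stats

# Made-up DNI values (W/m^2) and corresponding efficiencies (%).
dni = np.array([500.0, 600.0, 650.0, 700.0, 800.0, 900.0])
eff = np.array([60.0, 61.5, 62.0, 64.0, 66.0, 67.0])

# Spearman's rho is simply Pearson's r computed on the ranks:
rho_ranks, _ = stats.pearsonr(stats.rankdata(dni), stats.rankdata(eff))

# scipy's direct implementation agrees (and handles ties correctly):
rho_direct = stats.spearmanr(dni, eff)[0]

# A series of SRCC values: compute the coefficient over a sliding window.
window = 4
series = [stats.spearmanr(dni[i:i + window], eff[i:i + window])[0]
          for i in range(len(dni) - window + 1)]

print(rho_ranks, rho_direct, series)
```

Here both variables are strictly increasing, so every coefficient equals 1; with real data the windowed series traces how the monotonic association changes over the operating range.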
Question
I am currently using MSC Marc Mentat for FEA and conduct the mesh convergence/sensitivity analysis manually. I need to know whether there is an automatic way to conduct the mesh convergence/sensitivity analysis.
Amir Mustakim Ab Rashid may use a statistical method to find the error in between the final answers.
Question
I have found formula to calculate importance factor of each variable of ANN model from Linda Milne paper namely "Feature Selection Using Neural Networks with Contribution Measures".
Can anyone explain how to use this formula? (attached are the pictures of this formula)
The ANN architecture is 9-9-1, so I have the input-hidden weights in a 9x9 matrix and the hidden-output weights in a 1x9 matrix. I am confused by the wji, woj, and wjl mentioned in the formula. Can anyone explain how to enter the weights into this formula?
Thanks
Use the code below in MATLAB.
W_ih = [ 0.9332  -1.2736  -0.9796   1.2489
         1.6467   0.0158   0.6810  -0.9129
        -0.7345  -0.8109   2.1678  -1.3348
        -0.0144   1.9623   1.5066   1.1229
        -1.8064   1.2488  -0.4296  -0.3624
        -1.0439   0.3471  -1.6714  -1.1997]; % weights between input and hidden layer (rows = hidden neurons, columns = inputs)
W_ho = [-0.7313 0.0880 0.5374 0.2884 -0.2156 -0.3838]; % weights between hidden and output layer
W_ih = abs(W_ih);
W_ho = abs(W_ho);
rowSum = sum(W_ih, 2);                 % total absolute input weight per hidden neuron
[nHidden, nInputs] = size(W_ih);
R = zeros(nHidden, nInputs);
for j = 1:nInputs                      % contribution of input j through each hidden neuron
    for k = 1:nHidden
        R(k, j) = W_ih(k, j) * W_ho(k) / rowSum(k);
    end
end
% percentage importance of each input variable
R_percent = 100 * sum(R, 1) / sum(R(:))
Good luck
Question
Hello, I'm simulating a blanking operation in Abaqus/Explicit and right now I'm trying to figure out the optimum size of the smallest elements. My question is: do I have to simulate the whole blanking operation (which can take hours from a certain element size onward) in order to do a good mesh sensitivity analysis or can I just simulate only a fraction of the operation, like only punching half of the way, or a tenth of the way and still have a valid mesh sensitivity analysis?
Well, mesh independence is usually done for most critical conditions, like highest velocity involved.
Question
How do I plot approximate entropy, permutation entropy, and a sensitivity analysis for a chaotic map to check its randomness and complexity? I tried to code these tests but couldn't get good results. Any help would be appreciated.
Question
Hi, all.
I am using G*power to perform a sensitivity analysis for a one-way MANOVA. The analysis suggested my study had a minimum detectable effect size of f^2(V) = .01.
Is this just the effect size f^2 for Pillai's trace?
Hello Frost,
To your specific question, does the computed value (using the "Sensitivity" option, which solves for ES) using G*Power for a manova problem yield the f^2 associated with Pillai's trace (V): Yes.
Cohen's f^2 is a ratio of explained variation divided by unexplained variation (where the two values: explained, unexplained, sum to 1). It has been likened to a "signal" to "noise" ratio, from communications research.
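The ratio described above can be sketched in one line; note that mapping it onto Pillai's V for a specific MANOVA design involves additional design constants, so this is only the generic definition:

```python
def cohens_f2(explained: float) -> float:
    """Cohen's f^2 = explained variation / unexplained variation,
    where explained + unexplained = 1 (a signal-to-noise ratio)."""
    if not 0.0 <= explained < 1.0:
        raise ValueError("explained proportion must be in [0, 1)")
    return explained / (1.0 - explained)

# A minimum detectable f^2 of .01 corresponds to roughly 1% explained variation:
print(cohens_f2(0.01))
```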
Question
I have used a Likert-scale tool. How can I do a scale sensitivity analysis?
I don't think there really is such a thing. But if you analyse using a chi-square test, it's kind of like a sensitivity analysis because it uses expected values :)
Question
I need to do a sensitivity analysis of recruited studies in a meta-analysis using RevMan 5. Can anyone guide me?
Question
After having run the propensity score analysis in R, I need to conduct a sensitivity analysis in the same software. Kindly suggest which R-package should I use.
Question
For measuring uncertainty and conducting sensitivity analysis of the Human Development Index, I want to set up goalposts for the standard-of-living (income) indicator. Can anyone help? Thanks in advance.
The standard of living dimension is measured by gross national income per capita. The HDI uses the logarithm of income, to reflect the diminishing importance of income with increasing GNI. The scores for the three HDI dimension indices are then aggregated into a composite index using geometric mean.
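A sketch of that computation, assuming the goalposts of $100 and $75,000 GNI per capita used in recent Human Development Reports (check the current HDR technical notes before relying on these values):

```python
import math

# Income (standard-of-living) index with log transform and goalposts.
# The goalposts $100 and $75,000 GNI per capita follow recent Human
# Development Reports -- verify against the current HDR technical notes.
GNI_MIN, GNI_MAX = 100.0, 75_000.0

def income_index(gni_per_capita):
    x = min(max(gni_per_capita, GNI_MIN), GNI_MAX)   # clamp to goalposts
    return (math.log(x) - math.log(GNI_MIN)) / (math.log(GNI_MAX) - math.log(GNI_MIN))

def hdi(health_idx, education_idx, income_idx):
    """HDI: geometric mean of the three dimension indices."""
    return (health_idx * education_idx * income_idx) ** (1.0 / 3.0)

print(income_index(100.0), income_index(75_000.0), income_index(10_000.0))
```

The log transform means the index rises quickly at low incomes and flattens at high incomes, matching the "diminishing importance" point above.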
Question
Hello,
I am using the sensitivity analysis tool in Aspen Plus. The run completes successfully and I can generate the results graph, but the results tab in the sensitivity analysis section does not show any results; the table is empty!
Question
I have this forest plot using the standardized mean difference (SMD); however, the confidence interval lines for each study fall inside the study squares because their values are narrow relative to the scale. Is there a way to change the scale? Or should I just do a sensitivity analysis to see whether the study with the highest SMD affects the overall estimated SMD?
I did the meta-analysis with metacont, using a random-effects model and Cohen's method, and plotted it with forest.meta from the meta package.
The problem in the CIs is caused by the practically impossible outlier. As dear Gordon mentioned, an SMD of 15 does not happen. By including such an outlier, the forest function tried to generate a plot with an X-axis from -15 to +15 in a very limited space. So, on the plot, the CI lines will be very narrow.
Of note, the problematic outlier has caused other issues:
Your overall SMD is practically an unweighted average. Why? In the random-effects model, the weight is usually calculated as 1 / (tau^2 + v_i), and since your tau^2 is that large (about 20), the influence of the variance of each effect size (usually less than 1) is almost nothing.
Also, such a tau^2 is practically impossible: its square root is about 4.5, and we cannot expect such a standard deviation for the effect sizes.
I hope this clarifies the importance of the problematic SMD. I would follow Gordon's recommendation. If I could not find any error in data extraction, and I could not get an answer from the authors, I would exclude this study, and I would write in the manuscript something like "...this study was excluded, as the effect size was practically impossible (SMD = 15), indicating some errors in the report."
Finally, correcting or excluding this SMD will resolve the problem with the CI lines.
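The weighting argument above can be illustrated numerically; the within-study variances below are made up, not taken from the actual meta-analysis:

```python
# Random-effects weights: w_i = 1 / (tau^2 + v_i).  When tau^2 is huge
# (here driven by one impossible outlier), the per-study variances v_i
# barely matter and the pooled estimate degenerates toward an
# unweighted average.
v = [0.04, 0.09, 0.25, 0.5]   # illustrative within-study variances

def normalized_weights(tau2, variances):
    w = [1.0 / (tau2 + vi) for vi in variances]
    total = sum(w)
    return [wi / total for wi in w]

print(normalized_weights(0.0, v))   # weights differ strongly
print(normalized_weights(20.0, v))  # weights are nearly equal
```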
Question
When AHP is used for calculation of weights only and SAW is used for ranking the alternatives , how to carry out the sensitivity analysis ?
Dear Vidya
Unfortunately, what you observed is very common, and again, false.
I am sure that nobody can justify this procedure.
Instead, you can use objective weights from entropy, or statistics, or better still, use the marginal values for each criterion. Both are exact and based on values extracted from the data.
And, most important, both can be used to evaluate alternatives; they are not trade-offs.
If more information is needed, please don't hesitate to contact me. I will be happy to help.
Question
How do I get the sensitivity matrix, and how do I get the sensitivity matrix for a cracked beam using FEM, by carrying out a natural-frequency sensitivity analysis?
Try to depend on the fourth order PDE.
Question
Dear all,
I'm currently working on the optimization of the daily activity chain by using the GA algorithm. I developed a utility function consisting of (10) variables and their weights as follows:
U=v1w1+v2w2+........+v10w10.
The numerical values of the variables can be obtained from different sources, but the weight of each variable is based on the user's preferences. What's the best way to treat these weights: sensitivity analysis or scenario analysis?
Sensitivity analysis is better. You can study the sensitivity of the output with respect to the chosen weights using any of the sensitivity analysis methods. In my opinion, one of the most powerful is the Non-Intrusive Polynomial Chaos Expansion (PCE) method. It is a stochastic method which will show the direct sensitivity of the output to each input parameter, as well as the nonlinear interaction of the input parameters and their effect on the output. You can check the following papers if you want more details:
1) Spatial Variation in Sensitivity of Hurricane Surge Characteristics to Hurricane Parameters
2) Multi-physics modelling and sensitivity analysis of olympic rowing boat dynamics
Question
Dear ResearchGate community,
I am working on DNA samples obtained from patients screened for cervical cancer/ HPV infection. To calculate the viral load of each sample, I am looking for a technique to know the amount of the viral DNA present in each of the samples.
That said, considering the fact that the viral genetic content can be present both as episomes as well as integrated into the human genome, what would be the best approach to get insight from the "effective" viral load?
In other words, for the diagnostic purposes which one of the two types of viral DNA- if not both- is of significance, and how can it be quantified?
Why don't you run Real-Time PCR (LightCycler)? It can be used for quantitative analysis of a gene copy number so it can be used for viruses as well.
Question
Hello everyone, I am performing a Mendelian Randomization in R to check whether there is a causal association between genetically predicted eGFR and severe COVID-19. In the end of my analysis I would like to check if the sensitivity analysis is robust. Do I have to check if the IV assumptions hold or do I check it some other way?
You vary the parameters of the MR and see how the output values vary, whether wildly or gradually, and then make comparisons :)
Question
I read that single-parameter sensitivity analysis and map-removal sensitivity analysis are commonly used to assess the influence of parameters in DRASTIC and other index-based models. However, how do I actually do it? Because the output of DRASTIC is a raster with many pixels, calculating for all pixels is an extremely large task. Is there a software or approach that can do it efficiently? Can it be carried out in ArcMap/ArcGIS? I've attached an example of an article that used sensitivity analysis for a DRASTIC-LU model.
Sensitivity analyses reveal how the model outputs change when more factors enter a model incrementally, when a single parameter is fixed while others vary, or when all parameters are fixed while one varies (i.e. univariate). A sensitive parameter is one that elicits a large change in model output for a small change in a given input parameter. The univariate sensitivity of the model response (S) with respect to a parameter x is the ratio of the relative change observed in the state variable y to a given relative change in the value of x; y is sensitive to x when S is greater than about 1, otherwise it is robust.
See: Hamby, D. M. (1994) A review of techniques for parameter sensitivity analysis of environmental models. Environmental Monitoring and Assessment 32: 135-154.
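The index defined above can be sketched for a hypothetical model y(x); the quadratic form is only an illustration, chosen so the expected value of S is easy to check:

```python
# Univariate sensitivity index S = (relative change in y) / (relative
# change in x).  The quadratic model is a hypothetical stand-in for any
# state variable y(x); for y = 3x^2 we expect S close to 2.
def model(x):
    return 3.0 * x ** 2

def sensitivity_index(f, x, rel_step=0.01):
    y = f(x)
    return ((f(x * (1 + rel_step)) - y) / y) / rel_step

S = sensitivity_index(model, 5.0)
print(S)   # about 2.01: S > 1, so y is sensitive to x
```

A linear model gives S = 1 exactly, the boundary between "sensitive" and "robust" in the definition above.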
Question
1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?
2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?
Thank you so much.
Hi,
Maybe some of these references are of help to you:
Glick, H. A., Doshi, J. A., Sonnad, S. S., & Polsky, D. (2014). Economic Evaluation in Clinical Trials. 272 pages. ISBN 0199685029.
Khan, I., Crott, R., et al. (2019). Economic Evaluation of Cancer Drugs: Using Clinical Trial and Real-World Data. 442 pages. ISBN 1498761305.
Khan, I. (2015). Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement: An Applied Approach Using SAS & STATA. Chapman & Hall/CRC Biostatistics Series. CRC Press. ISBN 978-1-4665-0548-3.
Drummond, M. F., Sculpher, M. J., Claxton, K., Stoddart, G. L., & Torrance, G. W. (2015). Methods for the Economic Evaluation of Health Care Programmes. 461 pages. ISBN 0199665877.
Question
Dear researchers
As we know, a new type of derivative has recently been introduced which depends on two parameters, a fractional order and a fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.
The power and accuracy of these operators in simulations have motivated many researchers to use them in modeling different diseases and processes.
Are there any researchers working with these operators on equilibrium points, sensitivity analysis, and local and global stability?
Thank you very much.
Best regards
Yes I am
Question
My SWAT-CUP is unable to accept more than 18 parameters for sensitivity analysis. Does anyone have a solution for this?
Try to only have 18 then? My huge model on seed dispersal had fewer than 18 :)
Question
Dear colleagues, I am looking for a user-friendly tool to conduct some sensitivity analysis from simulation-based experiments. It would be good if it includes procedures for specifying the parameters to explore, to compute and generate samples and to evaluate the sensitivity with sophisticated modern methods like the Morris method.
I know about two of them already: SimLab and Dakota. However, SimLab seems to not be available any more (https://joint-research-centre.ec.europa.eu/sensitivity-analysis-samo/simlab-and-other-software_en) and I cannot find an alternative download site. This tool was my preferred one. I also know about a Python library, SALib. Any other ideas and suggestions?
Any non-Matlab based solutions ? :-) My university does not provide a license.
Question
Hello,
I am currently working on sensitivity analysis in the context of AHP. I use the online tool BPMSG from Goepel, maybe someone here knows it. However, I have a problem with the traceability of the results. Let's assume that there are exactly 3 criteria in the AHP (C1,C2,C3). Then I would like to know how the final value for an alternative (a1) results if one of the criteria changes in weighting, right?
I'll just say C1 decreases by x. However, the value x that is taken away from C1 must be distributed to C2 and C3. I just wonder which method is used to do this. Is x simply distributed equally to C2 and C3 or does this happen according to the share of C2 or C3 in the sum of C2 and C3?
When I do that, I get the following for the remaining two criteria:
(C1-x) = New C1
(C2 + (C2 / (C2 + C3)) * x) = New C2
(C3 + (C3 / (C2 + C3)) * x) = New C3
Unfortunately, however, I do not know if this is correct. If I multiply the criteria with the corresponding values of alternative a1 and combine the whole thing to a final value, I can calculate the same again with the other alternatives. When I compare the graphs to see how big x has to be to change the final prioritization of the alternatives, I always get the wrong values compared to the online tool. Therefore I would like to know if the redistribution of the weights is correct.
I hope someone can help me despite the long question. Thanks a lot!
Kindly visit..
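The proportional redistribution described in the question can be sketched as follows (illustrative weights; whether the BPMSG tool uses exactly this rule would need to be confirmed against its documentation):

```python
# Proportional redistribution of AHP weights: remove x from one
# criterion and hand it to the others in proportion to their current
# shares, so the weights still sum to 1.
def redistribute(weights, index, x):
    new = list(weights)
    new[index] -= x
    others = sum(w for i, w in enumerate(weights) if i != index)
    for i, w in enumerate(weights):
        if i != index:
            new[i] = w + (w / others) * x
    return new

w = [0.5, 0.3, 0.2]              # C1, C2, C3 (illustrative)
print(redistribute(w, 0, 0.1))   # approximately [0.4, 0.36, 0.24]
```

An alternative convention distributes x equally rather than proportionally; comparing both against the tool's output is a quick way to find out which rule it implements.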
Question
We have used RSM to model a problem, creating a formulation between the output and the inputs. Now we are trying to do a sensitivity analysis for validation. Please suggest a good approach for this.
Best Regards
Good luck.
Question
Hi, I need to perform a mesh sensitivity analysis for a stenosis CFD analysis. Is it sufficient to try several different meshes and check whether the residuals (x-, y-, z-velocity) converge?
Thank you all, I'm a desperate beginner..
At first, start with a coarse mesh (perhaps the software default), then check the targeted parameters.
Then increase the mesh resolution (make it finer) and check the same targeted parameters.
Repeat with finer and finer meshes, checking the same output parameter each time,
until you get a mesh-independent solution.
You can then present this data in your paper.
Best regards
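The loop described above can be sketched generically; solve() below is a hypothetical stand-in for one CFD run, rigged here to mimic a result that converges as the mesh is refined:

```python
# Mesh-independence loop: refine until the monitored quantity changes by
# less than a tolerance between successive meshes.
def solve(n_cells):
    # Hypothetical stand-in for a CFD run with n_cells elements.
    return 1.0 + 1.0 / n_cells

def mesh_study(n0=1000, ratio=2, tol=1e-3, max_runs=10):
    history = [(n0, solve(n0))]
    for _ in range(max_runs):
        n = history[-1][0] * ratio
        value = solve(n)
        rel_change = abs(value - history[-1][1]) / abs(value)
        history.append((n, value))
        if rel_change < tol:           # mesh-independent enough
            break
    return history

for n, v in mesh_study():
    print(n, v)
```

Monitoring a physically meaningful quantity (pressure drop, peak wall shear stress) rather than solver residuals is the usual choice for the "targeted parameter".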
Question
Kindly recommend me some methods.
First of all, you do not need to carry out a feature sensitivity analysis. It is not a requirement, just an added bonus.
Secondly, in most applications, feature selection (via feature importances) and feature sensitivity are similar and complementary. Adding feature sensitivity after feature selection seems to be overkill. (More on this later.)
While there are many different algorithms for estimating the sensitivity of a model to a set of features, I believe that you may be relatively new to machine learning and suggest that you start with a couple of simple approaches:
a) How sensitive is the model to changes in the value of a feature? The classical way to assess this is to shuffle or perturb the values of a feature and observe the magnitude of the change in the model's predictions.
b) How is the model affected if a certain feature's measurement is missing?
These 2 approaches would be a good point to start with sensitivity analysis. In case you are short on time and need to get results quickly, there are good libraries that can carry out sensitivity analysis for you. For instance, the pytolemaic package can do sensitivity analysis for most python ML models and generates pretty good reports, along with figures automatically (https://pypi.org/project/pytolemaic/).
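Approach (a) can be sketched without any special library; the toy data and the least-squares "model" below are stand-ins for your own dataset and trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0 and not on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)

# Stand-in "trained model": an ordinary least-squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(data):
    return data @ coef

def permutation_sensitivity(X, y, feature):
    """Increase in MSE when one feature's values are shuffled."""
    base_mse = np.mean((predict(X) - y) ** 2)
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])
    return np.mean((predict(X_shuffled) - y) ** 2) - base_mse

scores = [permutation_sensitivity(X, y, f) for f in range(X.shape[1])]
print(scores)   # feature 0 dominates
```

Shuffling destroys a feature's association with the target while keeping its marginal distribution, so a large MSE increase means the model genuinely relies on that feature.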
Question
Hi! I am asking a question regarding a randomized controlled trial. This trial compared steroid to steroid+MMF as first-line treatment in immune thrombocytopenia (ITP). The primary outcome was time to treatment failure, defined as a platelet count < 30x10^9 despite 2 weeks of treatment in the steroid arm and despite 2 months of treatment in the steroid+MMF arm.
First of all, this feels kind of weird: why did the trial design define treatment failure differently?
And in the statistical analysis part, here is the explanation:
"Sensitivity analysis will include landmark analysis or shifting the time line to classify all treatment failures before 2 months as at 2 months in order to prevent potential biases caused by different definitions of treatment failure time frames between the two groups."
What does this mean? And did it justify using two different definitions to measure the outcomes?
Thanks a lot! I am new to this field so I may be asking very basic questions. Sorry about that ;)
I looked at the paper. I don't really like multicentre trials precisely because of this problem and their attempts to reduce bias, which actually introduce further bias. However, I believe there is no problem comparing the two treatments, even though they differ in duration, since the variable considered is time to treatment failure. The protocols are different, but failure is defined the same for both treatments, i.e. platelet count < 30 x 10^9. Next time, they should really plan for equal trial durations at one facility, examine treatment success, and maybe not publish in open access :)
Question
I tried the command 'meta summarize' with the leave-one-out option, but it says unrecognized.
This is for a systematic review and meta-analysis.
The 'meta summarize' command should be available in Stata 16. Sounds more like you have a typing error somewhere in your code. This can result in the 'unrecognized' error.
A good reference for technical help on Stata code is the Stata Forum: https://www.statalist.org/forums/
Kind regards
Alex Kørup
Question
After doing FAHP, how do I check the robustness of the model using sensitivity analysis? For example, I have four factors in my model, each with a weight, such that the weights sum to 1. If I change the weight of one factor while keeping the total at 1, what is the impact on the other factors' weights? How do I calculate the revised weights of the other factors?
Question
Hi all,
I am working on a meta-analysis after screening a few studies. I am doing subgroup analyses (based on geographical location, gender, and age) to understand the heterogeneity in the MA. I have the mean, SD, and sample size extracted from the studies. Even within every subgroup analysis, the heterogeneity (I-squared value) is too high. Does that imply that I cannot combine these studies, or can you recommend some other way to conclude the meta-analysis?
I cannot do a sensitivity analysis, as the sample size in all studies is less than 70.
May I get suggestions on this?
Thanks much
I agree with Ali Yasen Mohamedahmed To solve high I²-heterogeneity issues this RG link may be useful:
Question
I'm interested in testing the robustness of an outcome definition using different cut-off points (e.g. 80% of pills taken vs. 90% of pills taken to define adherence). Many articles reported such comparisons with sensitivity analysis but were not specifically clear on the type of test employed. I doubt a simple chi-square can be used, since both outcome definitions are applied to the same sample (the groups are not independent).
Any suggestion of a statistical test to handle such types of data?
It's a change in what YOU did, not some other test method. They want to know how your answer changes if you change important parameters a little bit in what YOU ALREADY DID. This is not some other statistical test. David Booth. Here's a dumb example:
Suppose you have the equation 2x = 6; x = 3 is the solution. They wish to know how sensitive the solution is if you change 6 to, say, 6.05, calculate the new x, and compare it to x = 3. That's sensitivity. GOOD LUCK DB
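In code, that example looks like this:

```python
# Sensitivity of the solution of a*x = b to a small change in b.
a, b = 2.0, 6.0
x = b / a                   # x = 3
x_new = (b + 0.05) / a      # change 6 to 6.05 and re-solve
print(x, x_new, x_new - x)  # the solution shifts by about 0.025
```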
Question
Please can I get a concise description of how to carry out a single parameter sensitivity analysis in flood hazard mapping using the MCDM method?
Use mutual information or entropy theory
Question
I want to know which ways are common sensitivity analysis that are performed in MCDM-related studies.
Dear Mahmut
However, I don't have a full answer, because I don't know in detail the method our friend Thanh posed.
I already mentioned to Thanh that it is improbable that the selected alternative holds beyond 0.5%, because most probably the intervening criteria decrease their lambda as the lambda of one criterion is changed, and then their margins of variation decrease simultaneously, until the moment where the selected criterion ceases to be important for the best selection; then a new criterion takes over, most probably producing a change in the initial ranking.
This can be clearly seen when the original straight line of a certain objective changes to another straight line with a lower slope, corresponding to a lower marginal cost, thereby originating a convex curve, which represents the utility curve.
Trying to answer your specific question, and always according to my reasoning, it does not seem natural that the same procedure produces changes of alternatives for lambda values below 0.5, which agrees with what I said above, and then suddenly remains constant for 0.5 < lambda < 1. I would check this.
I am attaching the resulting curves produced by SIMUS/IOSA, published in one of my books, for a problem where the best alternative was subject to two criteria, C3 and C5, and their simultaneous variation. One curve is for increasing both criteria at the same time; the other is for decreasing them.
Observe that one curve shows a straight line that, from a certain value, changes to form a convex curve.
Question
The result of the sensitivity analysis (the coefficients) helps us decide whether the estimated ATT was the pure effect of the treatment or not. But on which coefficient should I focus more?
HI
Question
I am looking for advice on how to show changes in a dynamical model system based on equations. I want to carry out a sensitivity analysis and show how strongly a percentage change in one parameter affects the system itself. Can anyone recommend some literature, or describe how you did it yourself?
Maybe you can consider the recursive least squares algorithm (RLS) with forgetting factor (RLS-FF). RLS is the recursive application of the well-known least squares (LS) regression algorithm, so that each new data point is taken into account to modify (correct) a previous estimate of the parameters from some linear (or linearized) correlation thought to model the observed system. The method allows for the dynamical application of LS to time series acquired in real time. As with LS, there may be several correlation equations with the corresponding set of dependent (observed) variables. For the RLS-FF algorithm, acquired data is weighted according to its age, with increased weight given to the most recent data. The correlation parameters are updated gradually.
Application example ― I have applied the RLS-FF algorithm to estimate the parameters from the KLa correlation, used to predict the O2 gas-liquid mass-transfer, hence giving increased weight to most recent data:
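As a minimal illustration of the RLS-FF update described above, here is a sketch in Python/NumPy; the toy model y = 2x + 1, the forgetting factor of 0.98, and the initial covariance are illustrative assumptions, not values from the KLa application:

```python
import numpy as np

def rls_ff_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regressor vector for the new sample, shape (n,)
    y     : new scalar observation
    """
    phi = phi.reshape(-1, 1)
    err = y - float(phi.T @ theta.reshape(-1, 1))   # prediction error on the new sample
    K = (P @ phi) / (lam + float(phi.T @ P @ phi))  # gain vector
    theta = theta + K.ravel() * err                 # parameter correction
    P = (P - K @ phi.T @ P) / lam                   # covariance update (older data down-weighted)
    return theta, P

# Stream noiseless samples of y = 2*x + 1 and recover [a, b] = [2, 1]
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e3          # large initial covariance = weak prior
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    theta, P = rls_ff_update(theta, P, np.array([x, 1.0]), 2.0 * x + 1.0)
print(theta)                 # close to [2, 1]
```

With noisy data the estimate would fluctuate within a band whose width is controlled by the forgetting factor: closer to 1 means smoother but slower tracking.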
Question
I solved some questions and I did the sensitivity analysis. Two of the parameters returned a sensitivity index of 1.
I would like to know if this sensitivity index of 1 has any special meaning.
Hello Nneka,
Sensitivity reflects the proportion of true positives (on some criterion variable) that a given indicator flags as positive. A value of 1 indicates that every true positive case was so identified/flagged by your indicator variable.
This could occur if:
1. The indicator scores are genuinely indicative of and highly corresponding to true status (this is the ideal case; the others which follow are not).
2. All cases were true positives, in which case the indicator scores don't necessarily have to relate to true status.
3. You have a very small sample size.
4. The indicator variable is actually redundant with the true status measurement (for example: if the indicator variable was BMI score and the true status was whether or not a case was classified as morbidly obese).
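The definition above can be checked with a short Python sketch (the toy label vectors are illustrative):

```python
def sensitivity(y_true, y_pred):
    """Sensitivity (recall): true positives / all actual positives."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

# Every true positive is flagged -> sensitivity of exactly 1,
# even though one negative was also flagged (imperfect specificity)
print(sensitivity([1, 1, 0, 0], [1, 1, 1, 0]))  # 1.0
# One true positive missed -> sensitivity drops below 1
print(sensitivity([1, 1, 1, 0], [1, 0, 1, 0]))  # ~0.667
```

Note the first case: a sensitivity of 1 says nothing about false positives, which is why it should always be read together with specificity.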
Question
In general, models in this paper need to be validated, so what aspects should be used to verify a discrete dynamic Bayesian network? And can the sensitivity analysis in the GeNIe software verify its validity?
Hi there, I am not familiar with dynamic Bayesian networks. However, any results from a newly applied approach need to be validated against evidence from experiments, measurements or other proven algorithms; e.g. a frequentist approach should yield similar findings. Hopefully this is helpful. ~DM
Question
I am trying to develop a WASP8 model that can predict the dispersion of contaminants in seawater. Before I do a sensitivity analysis, I wanted to know if the flow of discharge has any effect on the dispersion.
Thank you
Question
I want to do some sensitivity analysis by altering the meteorology (e.g. increasing temperature) in the WRF-Chem model. Can anybody suggest how I can do this?
Best Wishes,
Anwar Khan
One thing to keep in mind is that several meteorological parameters covary. So, just perturbing one of them alone may disturb the geostrophic and hydrostatic balance in the model, creating unrealistic waves in the model solution.
Question
I would like to perform a sensitivity analysis of a CFD solver. There are 8 input variables, for each of them there are 2-3 prescribed numerical values.
Evaluating one set of parameters requires three costly simulations (each running for 20 hours on 800 CPU cores). The budget for these simulations is limited, and due to the queuing system of the HPC it would take a long time to get the results.
I'm aware of Latin hypercube hierarchical refinement methods that allow starting the sensitivity analysis with a smaller budget and subsequently incorporating newer results as they become available.
But those methods work with continuous variables. Is there a method for categorical and ranked/ordinal variables?
Thank you, Andrea
Question
Dear all,
I have simulated a chemical process and I would like to run a sensitivity analysis on the impact of its parameters. Could you provide me with a tutorial on Monte Carlo method?
following
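Not a full tutorial, but the basic Monte Carlo loop looks like this in Python/NumPy: sample the uncertain inputs, evaluate the model, then relate inputs to outputs. The toy process model, the input distributions, and the use of Pearson correlation as the sensitivity measure are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical process model: a response that depends on temperature T and pressure P
T = rng.normal(350.0, 10.0, n)    # K, assumed input uncertainty
P = rng.normal(5.0, 0.5, n)       # bar, assumed input uncertainty
y = 0.03 * T + 0.2 * P + rng.normal(0.0, 0.05, n)   # toy response with noise

def corr(a, b):
    """Pearson correlation as a simple Monte Carlo sensitivity measure."""
    return float(np.corrcoef(a, b)[0, 1])

# T contributes ~0.3 to the output std, P only ~0.1, so T dominates
print(corr(T, y), corr(P, y))
```

For stronger conclusions one would replace the correlation with rank correlations or variance-based (Sobol) indices, but the sampling skeleton stays the same.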
Question
I am working on a problem of design optimisation. I would like to ask: for an uncertain problem, should design-optimisation-under-uncertainty techniques be used for the design optimisation?
Question
As the title says, I am looking to vary a Fortran defined variable in sensitivity analysis.
For example, I am trying to vary the mole ratio of toluene to methanol in a feed stream to study the effects of the molar feed ratio on conversion of toluene.
By ensuring that the variable isn't declared as a constant. For instance; https://www.tutorialspoint.com/fortran/fortran_constants.htm
Question
Sensitivity analysis can be used to check the variation of the optimum solution when changing the coefficients of the objective function or the constant values in the constraints. Are there any other aspects to investigate using this approach?
Sensitivity analysis is useful to determine the robustness of the optimal solution. If the optimal solution changes significantly, when one of the problem parameters is changed only slightly, then the optimal solution is said to be sensitive to changes in that parameter, otherwise, it is robust.
Sensitivity analysis also gives insights into the problem under study. You can use it to validate your hypotheses about the problem or you can derive conclusions about the relationship of the optimal objective function value to the various parameters of the problem. This helps ground a problem from practice on a more reliable and intuitive basis, and demonstrates its applicability in practice.
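A tiny worked example of the robustness idea described above, using an illustrative textbook LP (maximise 3x + 5y over a small polytope) solved by brute-force vertex enumeration rather than a real LP solver:

```python
import itertools
import numpy as np

# Feasible region: x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])

def solve(c, tol=1e-9):
    """Maximise c @ v by enumerating the feasible vertices of the 2-D polytope."""
    best = None
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue  # parallel constraints: no intersection point
        v = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ v <= b + tol) and (best is None or c @ v > c @ best):
            best = v
    return best

print(solve(np.array([3.0, 5.0])))   # optimum at (2, 6)
print(solve(np.array([3.3, 5.0])))   # +10% on c1: optimum unchanged -> robust
print(solve(np.array([8.0, 5.0])))   # large change in c1: optimum moves to (4, 3)
```

The small perturbation leaves the optimal vertex unchanged (the solution is robust to it), while the large one shifts the optimum to a different vertex, which is exactly the kind of behaviour sensitivity analysis is meant to expose.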
Question
Hello
I'm currently trying to create a chi-map using TopoToolbox for MATLAB. In the available literature most calculations use a single mn ratio (0.45-0.5) for the entire area, and some others do a sensitivity analysis in order to get the best mn ratio per watershed. However, I don't know whether the calculations would improve using the best mn ratio per stream, or whether it doesn't really matter because the sensitivity analysis per watershed is good enough.
By the way, I'm working in a landscape highly controlled by fault activity.
Best Regards
Lester
One option would be to use the ChiProfiler tool that integrates with TopoToolbox that Sean Gallen (Colorado State University) created. It is available from his Zenodo site for download at https://zenodo.org/record/321868#.YCV91WhKg2w or on Github at https://github.com/sfgallen/ChiProfiler.
Article to reference for use of ChiProfiler is:
Gallen, S.F., Wegmann, K.W.: River profile response to normal fault growth and linkage: An example from the Hellenic forearc of south-central Crete, Greece, Earth Surf. Dynam., 2017, http://www.earth-surf-dynam.net/5/161/2017/.
Question
Mathematical programming is the best optimization tool, with many years of strong theoretical background. It has been demonstrated that it can efficiently solve complex optimization problems on the scale of one million design variables, and the methods are highly reliable. Besides, there is mathematical proof for the existence of the solution and the globality of the optimum.
However, in cases where there are discontinuities in the objective function, difficulties arise because the problem is non-differentiable. Methods such as subgradients have been proposed to handle such problems. Still, I cannot find many papers in the state of the art of engineering optimization addressing discontinuous optimization with mathematical programming; engineers mostly use metaheuristics for such cases.
Can all problems with discontinuities be solved with mathematical programming? Is it easy to implement sub-gradients for large scale industrial problems? Do they work in non-convex problems?
A simple example of such a function is attached here.
Your ideas for dividing the region and using local optimizer are so nice!
Thanks a lot!
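For completeness, the subgradient method mentioned in the question can be sketched on a one-dimensional convex, non-differentiable function; the function, starting point, and diminishing step rule are illustrative choices:

```python
def subgradient_descent(f, subgrad, x0, steps=2000):
    """Subgradient method with diminishing steps 1/k; tracks the best iterate,
    since the method is not monotone in the objective."""
    x = x0
    best_x, best_f = x, f(x)
    for k in range(1, steps + 1):
        x = x - (1.0 / k) * subgrad(x)   # classic diminishing step size
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

# f(x) = |x - 3| is convex but non-differentiable exactly at its minimiser x = 3
f = lambda x: abs(x - 3.0)
g = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)  # a valid subgradient
x_star = subgradient_descent(f, g, x0=0.0)
print(x_star)   # close to 3
```

Note the characteristic behaviour: the iterates oscillate around the kink instead of settling on it, which is why the best-so-far iterate is returned; gradient descent with a fixed step would fail here.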
Question
We all know that mathematical programming is the best optimization tool, with many years of strong theoretical background, presenting reliable solutions with high efficiency. Also, mathematical proof of global optimality may be available. However, in some cases, where knowledge is lacking and the analytical calculation of sensitivities is impossible, researchers prefer to use metaheuristics. However, these are inefficient and unreliable in large-scale problems.
The development of surrogate models such as the Kriging method, model-based methods such as Radial basis function interpolation, and novel machine learning tools helps us to approximate the objective function. So, model-based sensitivity can be used instead. Also, machine learning can help to predict sensitivity information!
So, improved function or sensitivity approximation, coupled with mathematical optimization, will make metaheuristics disappear. In this way, I guess there would be no need for metaheuristics (at least in continuous optimization, as far as I know).
What do you think about it? Do you agree? Do you have any experience? I am interested in both mathematical programming and metaheuristics, but I prefer efficiency.
Mathematical optimisation kills all heuristics - if you pick them right.
Question
Hi
I need to do a grid sensitivity analysis and find the best node size, which gives the most accurate heat loss. The problem is that decreasing the node size changes the heat loss, but it never converges. I changed the code to the simplest case, with constant temperature on the boundaries, but I still see the same problem.
(the domain is two dimensional with attached boundary conditions.)
I would appreciate it if you could help me.
I have attached my code.
Hi,
Use a finer grid near the wall with a hexahedral mesh, and try another turbulence model such as the SST model.
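A related check, not mentioned in the thread, is to estimate the observed order of accuracy from three grid levels and extrapolate a grid-independent value (Richardson extrapolation); the stand-in "solver" below, with a known O(h^2) error, is purely illustrative:

```python
import math

def coarse_solution(h):
    """Stand-in for a solver with a leading O(h^2) discretisation error.
    (Hypothetical numbers: exact heat loss 100.0, error constant 50.0.)"""
    return 100.0 + 50.0 * h**2

r = 2.0                                                      # grid refinement ratio
f1, f2, f3 = (coarse_solution(h) for h in (0.1, 0.2, 0.4))   # fine -> coarse

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
f_exact = f1 + (f1 - f2) / (r**p - 1.0)             # Richardson-extrapolated value

print(p, f_exact)
```

If the observed order p is far from the scheme's formal order, the grids are not yet in the asymptotic range, which would explain heat-loss values that keep drifting as the node size decreases.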
Question
Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz
Question
Dear all,
I hope you were healthy with good sanity.
everyone who learns about financial models recognizes sensitivity analysis,
but the problem is how it could be simulated with MATLAB. If you have any m-file, would you please attach it for me?
best wishes,
Question
Hi everyone, I am performing Sobol's sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are considered sensitive.
Many thanks!
Usually + or - 25%
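Rather than a fixed percentage, one common route is to estimate the first-order indices and apply a user-chosen cutoff. Below is a pure-NumPy sketch of the Saltelli-type estimator on a toy model with known analytic indices; the model and the 0.3 threshold are illustrative:

```python
import numpy as np

def first_order_sobol(f, d, n=50_000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    for a model f with d inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))      # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy model y = x1 + 2*x2: analytic indices are S1 = 1/5, S2 = 4/5
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
print(S)            # roughly [0.2, 0.8]
print(S > 0.3)      # thresholding at 0.3: only x2 flagged as sensitive
```

The cutoff itself remains a modelling choice; it is also worth comparing first-order and total indices before screening parameters out, since a parameter can be unimportant alone yet important through interactions.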
Question
The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of these kinds of matrices. However, finding it is computationally costly. I want to know, from the related research, what happens when a single entry (or a few entries) is perturbed in the original matrix: how much will it affect the entries of the inverse of the matrix?
A standard trick in these cases is to use the Sherman-Morrison formula.
However, the inverse of a sparse matrix does not have to be sparse, and in particular one does not want to store inverses of large sparse matrices. The formula would therefore rather be applied to the action A^-1 b of the inverse on the right-hand side of the linear system, so as to correct the solution of the original system with a hopefully limited number of operations.
Please notice that this is a very generic comment, I am sure somebody in the sparse solver community will have studied the problem in much greater depth.
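Here is a dense NumPy sketch of that correction for a single perturbed entry; the matrix and the perturbed position (2, 4) are illustrative, and in the sparse setting the two solves would reuse one factorisation instead of calling a dense solver twice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = 4.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))   # small well-conditioned test matrix
b = rng.normal(size=n)

# Perturb a single entry: A[2, 4] += delta, i.e. A + u v^T with u = delta*e2, v = e4
delta = 0.5
u = np.zeros(n); u[2] = delta
v = np.zeros(n); v[4] = 1.0

x0 = np.linalg.solve(A, b)               # solution of the original system
z = np.linalg.solve(A, u)                # one extra solve with the same matrix
x1 = x0 - z * (v @ x0) / (1.0 + v @ z)   # Sherman-Morrison correction

A_pert = A.copy()
A_pert[2, 4] += delta
print(np.allclose(x1, np.linalg.solve(A_pert, b)))   # matches the direct re-solve
```

The correction term z * (v @ x0) / (1 + v @ z) also answers the original question directly: its size measures how much the single-entry perturbation moves the solution, and it blows up as 1 + v^T A^-1 u approaches zero.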
Question
How to quantify feature importance/sensitivity analysis in discrete Bayesian Network?
Question
I am looking for some analytical, probabilistic, statistical or other way to compare the results of a number of different approaches implemented on the same test model. These approaches can be different optimization techniques implemented on a similar problem, or different types of sensitivity analysis implemented on a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Hi? You might want to have a look at one of my publications - 10.1016/j.envsoft.2020.104800
I recently conducted a similar study where I applied three different sensitivity analysis methods to fire simulations and compared their results!
Cheers!
Question
Hello everyone. I am currently doing a dynamic study on a distillation column. How can I do a sensitivity analysis on the column, i.e., for changes in feed conditions (temperature, pressure, flowrate)? I want to avoid using MATLAB for the sensitivity analysis and would like to do it in Aspen Plus Dynamics. Please guide me. Thank you.
Hi? You might want to have a look at one of my publications - 10.1016/j.envsoft.2020.104800
I recently conducted a similar study trying to analyze the impact/influence of a parameter in the model output using a SALib python library! SALib is quite easy to use.
Cheers!
Question
I have 24 sets of model results based on different inputs to 3 model parameters. Since all inputs are equally plausible, I am using the coefficient of variation to quantify uncertainties of the model.
I would like to estimate the relative contribution of each model parameter to that level of uncertainty. I have come across Sobol's main and total effects, yet a simulation is not required in my case, and I'm not sure how to apply this approach to my results. I will appreciate any recommendation.
Hi? You might want to have a look at one of my publications - 10.1016/j.envsoft.2020.104800
Question
I am using SLP method with sensitivity analysis using adjoint method. How to check if the obtained solution is actually the global optimum?
Since such validation is unreliable in most cases, mathematicians have presented various benchmarks (case studies) together with proofs of their global optima. This is why benchmarks are developed in optimization and control: to validate a methodology with high reliability and confidence. However, for some benchmarks in optimal control the global solution is only the best known optimum, not definitely the absolute one.
Question
Probabilistic sensitivity analysis is criticised for potentially introducing uncertainty itself because of the consideration of the distribution of the parameters. Are there ways of addressing this potential for additional uncertainty?
If you look deeper into the literature, you have some sensitivity analysis methods that are independent of the sampling techniques!
Question
Hi all,
I got a question on an equation I cannot understand 100% - related to the semi-analytical adjoint method.
The equation I am trying to understand is Eq (4) in chapter 3. Sensitivity analysis - Please find the attached document.
So it seems that Eq. (4) is calculated from Eqs. (1), (2) and (3).
However, when I tried to solve it myself, I got Lambda in the 3rd term, not the transpose of Lambda as in Eq. (4).
Can anyone please explain how Eq. (4) can be derived from Eqs. (1), (2) and (3)?
It means is proportional to :)
Question
Given a trained discrete Bayesian network, how can we quantify the impacts of different nodes on a target node? E.g., the impacts of various factors on farmers' adoption of a certain new technology.
One thing I can think of is to measure the cross-entropy between the unconditional distribution of the target variable and the conditional distribution given the evidence of the investigated factor. This sounds like the gain of information in the target variable, but I did not find literature to support it... Please let me know if you have other suggestions. Thank you!
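When averaged over the evidence values, the quantity you describe is the mutual information between the factor and the target, which is a standard influence measure for discrete networks. A minimal sketch computed from a joint probability table (the 2x2 tables are illustrative):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in nats from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)      # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)      # marginal p(y)
    nz = joint > 0                             # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# Fully dependent pair: X determines Y, so I(X;Y) = H(X) = log 2
dependent = [[0.5, 0.0], [0.0, 0.5]]
# Independent pair: the joint factorises, so I(X;Y) = 0
independent = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information(dependent), mutual_information(independent))
```

Ranking factors by their mutual information with the target gives exactly the "gain of information" interpretation you mention; for a fitted network the joint tables can be read off the inferred distributions.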
Question
Meta-analysis: it is important to perform a sensitivity analysis when heterogeneity is significant. However, when the researchers did sensitivity analyses, the results (I² changed from 93% to 80%) did not decrease the heterogeneity appreciably. How should this point be interpreted?
The percentage of variation across studies that is due to heterogeneity rather than chance decreased.
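For reference, I² is computed from Cochran's Q and the degrees of freedom; a one-line sketch (the Q and k values are illustrative):

```python
def i_squared(Q, k):
    """Higgins' I^2 (%) from Cochran's Q statistic and the number of studies k."""
    df = k - 1
    return max(0.0, (Q - df) / Q) * 100.0

# With 11 studies, Q = 50 gives I^2 = 80%: even after the sensitivity analysis,
# heterogeneity still explains most of the observed variation
print(i_squared(50.0, 11))   # 80.0
print(i_squared(5.0, 11))    # 0.0: variation compatible with chance alone
```

So a drop from 93% to 80% means some excess variation was removed, but the remaining heterogeneity is still substantial, and exploring its sources (e.g. by subgroup analysis or meta-regression) is warranted.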
Question
I have below model:
X = A*B
Where A = a list of values with Lognormal distribution (size = 13)
and B = another list of values with Lognormal distribution (size = 13)
How can I perform 1st order, 2nd order and total Sobol sensitivity analysis of this model in R programming?