# Sensitivity Analysis - Science topic

Explore the latest questions and answers in Sensitivity Analysis, and find Sensitivity Analysis experts.

Questions related to Sensitivity Analysis

Currently, I'm researching cost-effectiveness using a decision tree, and I want to do a sensitivity analysis for my study. Please give me some guidance on this issue.

How do I optimize the number of theoretical stages in the reactive, rectifying, and stripping sections using the sensitivity analysis tool in Aspen Plus? What would be the independent and dependent variables? Kindly explain; I am performing simulations of a reactive distillation process.

Respected Group members

I recently submitted my thesis for final evaluation, and one of the evaluators flagged the non-use of Q² values as a tool for estimating the out-of-sample predictive power of the model in our studies. The evaluator highlighted that sensitivity analysis has not been addressed at all and must be addressed by calculating Q² values.

In light of the aforementioned facts, I would like to ask for the procedure to calculate Q² values or perform a sensitivity analysis in our study, which involves model development and validation through a CB-SEM approach using AMOS software.

Kindly enlighten!

How do we perform a Sensitivity analysis in an integrative review?

- The review was based on the PRISMA 2020 guidelines.

- It is qualitative in nature, which means it does not include a meta-analysis, and it aimed to review the psychological factors affecting willingness to pay (for nature restoration).

Hello Everyone,

I am currently conducting a sensitivity analysis for a Life Cycle Assessment (LCA) study and have encountered an intriguing issue. During the analysis, I observed an inconsistent shift in one direction for a specific parameter on the tornado plot. Specifically:

Increasing the parameter by 10% resulted in a 45% increase in the overall model output.

Decreasing the parameter by 10% led to a 25% increase in the overall model output.

These results seem counterintuitive to me. Does this behavior make sense within the context of sensitivity analysis? Has anyone experienced similar findings or can provide insights into this phenomenon?

Additionally, if you know of any articles or literature that discuss such anomalies in sensitivity analysis, I would greatly appreciate it if you could share them with me.

Thank you for your assistance!
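One common cause of such same-direction shifts is a nonlinear response: if the baseline value sits near a minimum of the output curve, both the +10% and the −10% perturbation push the output up, just by different amounts. A minimal Python sketch (the quadratic model and all numbers are purely illustrative, not an LCA model):

```python
# Minimal sketch: a nonlinear (here quadratic) response can make BOTH the
# +10% and the -10% perturbation move the output in the same direction.
# This toy curve is illustrative only, not an LCA model.

def model(x):
    # Hypothetical response with a minimum near the baseline parameter value
    return 100.0 * (x - 0.98) ** 2 + 1.0

baseline = model(1.0)
up = model(1.0 * 1.10)     # parameter +10%
down = model(1.0 * 0.90)   # parameter -10%

pct_up = (up - baseline) / baseline * 100     # ~ +135%
pct_down = (down - baseline) / baseline * 100 # ~ +58%
print(pct_up, pct_down)    # both positive: output rises either way
```

If the LCA output involves ratios, thresholds, or allocation switches in that parameter, this kind of one-sided asymmetric behaviour on a tornado plot is plausible rather than an error.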

I am having trouble trying to understand how to establish equivalence bounds, and I hope someone can break it down for me. This is what I read from Lakens (2017):

*"Based on a sensitivity analysis in power analysis software (such as G*Power), we can calculate that with 100 participants in each condition, 80% desired power, and an α of .05, the SESOI in a null effect significance test is d = 0.389; and using the power analysis calculation for an equivalence test for independent samples, assuming a true effect size of 0, 80% power is achieved when lower bound = −0.414 and upper bound = 0.414. As such, without practical boundaries or theoretical boundaries that indicate which effect size is meaningful, the maximum sample size you are willing to collect implicitly determines your SESOI."*

Presuming I am aiming to detect a small-to-medium effect of d = .15 in a between-groups comparison where I am collecting n = 150 per condition, how do I calculate the lower and upper equivalence bounds for such a scenario? I would also appreciate it if someone could point me to a published paper (preferably a psychology article) that employed equivalence-bound testing.
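A rough way to reproduce the quoted numbers, and to get the bounds for n = 150, is a normal-approximation power calculation. The helper names below are mine, and G*Power's exact noncentral-t answers differ slightly (e.g. its 0.389 versus the 0.396 approximation here):

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def mdes_two_sample(n_per_group, alpha=0.05, power=0.80):
    """Minimum detectable effect size (Cohen's d) for a two-sided
    independent-samples t-test, normal approximation."""
    return (z(1 - alpha / 2) + z(power)) * sqrt(2.0 / n_per_group)

def equivalence_bound(n_per_group, alpha=0.05, power=0.80):
    """Symmetric equivalence bounds (+/- d) at which a TOST equivalence
    test reaches the desired power when the true effect is 0."""
    return (z(1 - alpha) + z((1 + power) / 2)) * sqrt(2.0 / n_per_group)

print(round(mdes_two_sample(100), 3))    # ~0.396 (G*Power's exact value: 0.389)
print(round(equivalence_bound(100), 3))  # ~0.414, matching the quoted Lakens value
print(round(equivalence_bound(150), 3))  # ~0.338: the implied bounds for n = 150/group
```

So with n = 150 per group the implicitly determined equivalence bounds are roughly ±0.34, and d = .15 is well below the minimum detectable effect of about 0.32: that design cannot reliably detect, or reject, effects that small.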

I computed principal coordinates analyses (PCoA, {vegan}) to describe the composition of different plant communities among samples using Bray–Curtis dissimilarities. The reviewers of my research paper have now asked for a sensitivity analysis of the model used. Can someone please help me with this?

I have some thematic maps that are used to calculate water potential zones using AHP, but I wasn't able to perform the sensitivity analysis in ArcMap.


Dear Abbas El Toufaili, Dario Pozzetto, Elio Padoano, Luca Toneatti, and Ghassan Fakhoury

I read your article

Selection of a Suitable Waste to Energy Technology for Greater Beirut Area Using the Analytic Hierarchy Process

My comments

*1 -* On page 5 you say:

*“4) the consistency of each pairwise comparison matrix is checked;”*

Yes, it checks the consistency of the judgements from the DM, but what is that good for? You can't assume that the values derived from the DM's mind are the same as in the real world. They are not even judgements but intuition, and there is no theorem or axiom that supports that assumption; very convenient indeed. I have posted this many times; curiously, nobody has refuted me.

*“6) a sensitivity analysis is eventually performed to verify the stability of the ranking.”*

Not really; sensitivity analysis tests the stability of the best or selected alternative.

*“To choose the criteria/sub-criteria that are used in the evaluation of the alternatives, a comprehensive literature review was conducted on the WtE technologies and on solid waste and energy sectors”*

If you do not have the alternatives that can be used in YOUR CASE, how can you select the criteria to evaluate them? You need criteria that are relevant to your alternatives, and these can be different in other similar projects. There is no universal set of criteria, although you can certainly learn from other projects.

*“The first level presents the goal, which is the selection of a suitable WtE technology for the treatment of MSW in GBA”*

So now you agree with me when you say that the first thing is to select the WtE technologies or alternatives; you are contradicting yourself!

Now you are talking about the weights assigned arbitrarily by a set of experts; what happens if another set of experts disagrees?

**You cannot use AHP in this study** because almost all of the 12 criteria are related, and AHP works only with independent criteria, as Saaty said very clearly. How can you explain this?

2 - You cannot use one criterion to perform the sensitivity analysis and keep the others constant, because that is not reasonable. This is the famous ceteris paribus, a principle rejected by most economists, with reason, because it is not congruent with reality. If you go to a doctor with a headache, he/she will not limit the examination to the brain, because the problem may be associated with other parts of the body. Another interesting and wrong assumption in AHP.

3- You say this:

*“To apply the AHP method and obtain credible preferences”* The real world does not work with preferences but with facts; preferences change, facts don't.

On page 7 you have four different types of criteria, which is fine. It appears that you must have five experts, one for each type, which is reasonable.

Now, consider that the expert on environment-health has to compare pairs of criteria with each of the four other experts. This means that this person must have knowledge of the other four disciplines; if not, how is he going to debate with each specialist? I hope you recognize that this is possible but not probable.

My question is: how can two experts addressing two different disciplines agree, when there is disagreement even between two persons discussing something they BOTH know?

The mentioned expert most probably knows nothing about technology sophistication, and vice versa. Each one will defend his own acquired knowledge and expertise. How can they reach an agreement about how many times one criterion is more important than another? How do they refute each other?

Do you see the fallacy of this procedure?

4 - Needless to say, the way in which sensitivity analysis is performed in AHP and most MCDM methods is inappropriate, because you can't use the ceteris paribus principle; it does not have any mathematical support. In addition, selecting the criterion with the maximum weight as the one to vary does not have any support either, other than intuition.

I hope these comments may help

Nolberto Munier

I'm working on p-norm topology optimization in plane stress using a MATLAB code adapted from the article "An efficient 146-line 3D sensitivity analysis code of stress-based topology optimization" by Hao Deng, Praveen S. Vulimiri and Albert C. To. I've noticed small sensitivity values (e.g., 4.54e-05, -7.30e-09) with a stress-norm parameter (p) of 5. Are such values typical in this context, and should negative sensitivity values be expected? The relevant codes are attached.

Your experiences and recommendations would be greatly appreciated.

Thanks!

I am working on an infectious disease model with 10 to 15 compartments. I need Maple code to find the disease-free equilibrium (DFE), the endemic equilibrium, and the basic reproduction number, and also to perform sensitivity analysis.

I want to plot a graph showing the effect of OP on syngas composition, but I am struggling with sensitivity analysis in Aspen Plus. Which manipulated variable should I choose? Here's a picture.

Abstraction: Models simplify complex real-world systems by focusing on essential elements and relationships while ignoring less relevant details.

Mathematical Representation: They are expressed using mathematical equations, symbols, or formalism, making them precise and quantifiable.

Purpose-Driven: Models are created for specific purposes, such as prediction, explanation, optimization, or decision-making, and their structure reflects this purpose.

Assumptions: Models are based on assumptions about the behavior of the system being modeled. These assumptions can vary in realism and complexity.

Variables: Models involve variables that represent system components or attributes, and these variables are interconnected through mathematical relationships.

Parameters: Models often include parameters, which are constants or coefficients that influence the behavior of the system and can be adjusted for calibration or scenario analysis.

Validity and Applicability: The accuracy and applicability of a model depend on how well it reflects the real system, and models can be validated through data comparison.

Sensitivity Analysis: Models can be used for sensitivity analysis to understand how changes in input variables or parameters affect the output.

Time Dependency: Depending on the type of model, it may be static (time-independent) or dynamic (incorporating time as a variable).

Generalization: Models can be designed to provide insights beyond the specific case they represent, allowing for general principles and trends to be identified.

Interpretation: They provide a framework for interpreting data, making predictions, and testing hypotheses about the system under study.

Communication: Models facilitate communication and collaboration among experts and stakeholders by providing a common language and framework.

Limitations: Models have limitations due to simplifications and assumptions, and these limitations should be understood and acknowledged.

Solvability: Mathematical models are often solvable, meaning that they allow for analysis and computation to obtain solutions or insights.

Predictive Power: Many models are designed to make predictions about future states or behaviors of a system based on its current or past state.

These characteristics highlight the versatility and utility of mathematical models in various fields for understanding, decision-making, and problem-solving.


I am working on research in the inventory management field. I am unable to understand the procedure for the sensitivity analysis of a model. How do I perform optimization and sensitivity analysis in MATLAB? Kindly suggest and guide me.

Can you please provide guidance on performing sensitivity analysis in Fuzzy AHP using Excel?

I need a step-by-step procedure on how to perform a single parameter sensitivity analysis to evaluate the impact of parameters on a vulnerability index. I am particularly confused about how to create the sub-areas in GIS and compute the parameter rates and weights.

Based on my 3D analysis, how can I move an object from its ' Y old' dimension to a new 'Y New' dimension?

Hello,

I am trying to conduct a sensitivity analysis for a simple mediation model, that was conducted in SPSS using the PROCESS Macro by Hayes (Model 4).

I have the following information:

- Sample Size N = 1081
- α = .05
- 1−β = .90

How can I find out the smallest indirect effect that can be detected with these settings? Which program can I use for this?

Do I have to know the effects of path a and path b to be able to conduct the sensitivity analysis?

The figure is taken from the English Wikipedia article "Mediation (statistics)".

Dear researchers,

I am writing to request your assistance in obtaining **literature, research papers, or any valuable insights regarding sensitivity analysis in the artificial neural network modelling of geopolymer concrete.**

*Furthermore, I would appreciate your providing practical recommendations or best practices for conducting sensitivity analysis in this domain. Your contribution will greatly benefit my study, and I appreciate your support.* Thank you for your time and consideration.

I am working on a meta-analysis. However, the sensitivity analysis showed a significant change in the result (although the Egger regression is okay --> p > 0.05).

Illustration:

1. Original finding (11 studies): OR = 0.60 (95% CI = 0.45-2.1, p=0.45)

2. After sensitivity analysis (10 studies --> let's say removing study X): OR = 0.50 (95% CI = 0.25-0.61, p=0.04)

Note: these are not the true values, but they resemble our findings.

What must I do? Should I remove the study from the final analysis or still include the original one?

And how do I interpret this? Thank you.

How do I proceed with a sensitivity analysis? Can it be done in RevMan 5, or do we need to export the files to some other software?

Throughout the literature, the curve number (CN) has been identified as the most sensitive parameter in hydrological models. However, when I conducted a sensitivity analysis in my project, the CN did not show the expected level of sensitivity. What could be the reason for this? Thank you.

Sensitivity analysis using SRCC can be performed with SRCC = 1 − 6·∑[R(xi) − R(yi)]² / (N(N² − 1)), summing i from 1 to N. One can get a single point value of SRCC between two parameters, such as DNI and efficiency. For example, as DNI varies from 500 to 900 W/m², efficiency also rises between 60% and 67%; here we can use the above formula and get one point value of SRCC, which lies between −1 and 1.

But I need to know how one can get a series of SRCC values. I am attaching papers in which the authors used the same formula and obtained plots with a number (series) of SRCC values. Kindly help me understand this problem.
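One way to get a series of SRCC values is to evaluate the quoted formula over sliding windows of the data, giving one SRCC per window. A pure-Python sketch (the DNI/efficiency numbers are made up for illustration, and whether the cited papers used windows exactly this way is an assumption):

```python
def ranks(v):
    # Simple ranks 1..N; assumes no ties, as the quoted formula requires
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def srcc(x, y):
    # SRCC = 1 - 6*sum(d^2) / (N(N^2 - 1)), the formula from the question
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def rolling_srcc(x, y, window):
    # A SERIES of SRCC values: one per sliding window over the data
    return [srcc(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# Hypothetical data: DNI rising 500 -> 900 W/m^2, efficiency 60 -> 67 %
dni = [500, 550, 600, 650, 700, 750, 800, 850, 900]
eff = [60.0, 60.9, 61.8, 62.6, 63.5, 64.4, 65.3, 66.1, 67.0]
print(srcc(dni, eff))             # 1.0 (perfectly monotone)
print(rolling_srcc(dni, eff, 5))  # a series of SRCC values
```

Each window yields one point on the plot, so the window length controls how many SRCC values the series contains.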

I am currently using MSC Marc Mentat for FEA and conduct the mesh convergence/sensitivity analysis manually. Is there an automatic way to conduct the mesh convergence/sensitivity analysis?

I found a formula to calculate the importance factor of each variable of an ANN model in the Linda Milne paper "Feature Selection Using Neural Networks with Contribution Measures".

Can anyone explain how to use this formula? (Attached are pictures of the formula.)

The ANN architecture is 9-9-1, so I have the input-hidden weights in a 9x9 matrix and the hidden-output weights in a 1x9 matrix. I am confused by the wji, woj, and wjl mentioned in the formula. Can anyone explain how to input the weights into this formula?

Thanks
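Without the paper at hand I cannot reproduce Milne's exact notation, but a closely related connection-weights measure (Garson's algorithm) shows how the input-hidden weights (your 9x9 matrix, the wji) and the hidden-output weights (your 1x9 matrix, the woj) combine. A sketch on a toy 3-2-1 net; a 9-9-1 net works identically:

```python
def garson_importance(w_ih, w_ho):
    """Garson-style relative importance of each input.
    w_ih[j][i]: weight from input i to hidden unit j (the wji).
    w_ho[j]:    weight from hidden unit j to the single output (the woj)."""
    n_hidden = len(w_ih)
    n_inputs = len(w_ih[0])
    S = [0.0] * n_inputs
    for j in range(n_hidden):
        # contribution of each input routed through hidden unit j
        c = [abs(w_ih[j][i]) * abs(w_ho[j]) for i in range(n_inputs)]
        total = sum(c)
        for i in range(n_inputs):
            S[i] += c[i] / total
    return [s / sum(S) for s in S]  # normalized: importances sum to 1

# Toy 3-2-1 network (for 9-9-1: w_ih is 9x9, w_ho has 9 entries)
w_ih = [[0.5, -1.0, 0.1],
        [2.0,  0.3, 0.0]]
w_ho = [1.5, -0.8]
imp = garson_importance(w_ih, w_ho)
print(imp)  # relative importances, summing to 1
```

Note that in this basic Garson form the output weight cancels inside each hidden unit's normalization, which is a known limitation of the method; Milne's contribution measure differs in detail, so treat this only as a template for where the two weight matrices enter.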

Hello, I'm simulating a blanking operation in Abaqus/Explicit, and right now I'm trying to figure out the optimum size of the smallest elements. My question is: do I have to simulate the whole blanking operation (which can take hours from a certain element size onward) in order to do a good mesh sensitivity analysis, or can I simulate only a fraction of the operation, such as punching only half of the way, or a tenth of the way, and still have a valid mesh sensitivity analysis?

Thanks in advance and feel free to ask for further information.

How do I plot approximate entropy, permutation entropy, and a sensitivity analysis for a chaotic map to check its randomness and complexity? I tried to code these tests but couldn't get a good result. Any help would be appreciated.
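For the permutation entropy part, a self-contained Bandt-Pompe sketch applied to the logistic map may help; the normalization to [0, 1] and the choice m = 3 are my own defaults, not prescribed values:

```python
from math import log, factorial

def permutation_entropy(series, m=3, tau=1):
    """Normalized Bandt-Pompe permutation entropy in [0, 1].
    m: embedding dimension, tau: time delay."""
    counts = {}
    n = len(series) - (m - 1) * tau
    for i in range(n):
        window = tuple(series[i + k * tau] for k in range(m))
        # ordinal pattern: the permutation that sorts the window
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(m))  # divide by max entropy log(m!)

# Logistic map x_{n+1} = r x_n (1 - x_n), chaotic at r = 4
x, r, series = 0.4, 4.0, []
for _ in range(3000):
    x = r * x * (1 - x)
    series.append(x)

print(permutation_entropy(series, 3))           # high -> complex dynamics
print(permutation_entropy(list(range(100)), 3)) # 0.0 -> fully ordered
```

Values near 0 mean the series is ordinally predictable; values near 1 mean the ordinal patterns are close to uniformly distributed, as expected for a good random-like source. For sensitivity to initial conditions you would instead track the divergence of two trajectories started a tiny distance apart.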

Hi, all.

I am using G*Power to perform a sensitivity analysis for a one-way MANOVA. The analysis suggested my study had a minimum detectable effect size of f^2(V) = .01.

I am not quite sure about this effect size f^2(V).

Is it just the effect size f^2 for Pillai's trace?

I have used a Likert-scale tool. How can I do a scale sensitivity analysis?

I need to do a sensitivity analysis of recruited studies in a meta-analysis using RevMan 5. Can anyone guide me?

After running the propensity score analysis in R, I need to conduct a sensitivity analysis in the same software. Kindly suggest which R package I should use.

I am measuring uncertainty and performing a sensitivity analysis for the Human Development Index. I want to set up goalposts for the living-standard (income) indicator. Can anyone help me? Thanks in advance.

Hello,

I am using the sensitivity analysis tool in Aspen Plus. The analysis runs successfully, and I can generate the results graph, but the results table in the sensitivity analysis section does not show any results!

I have this forest plot using the standardized mean difference; however, the confidence interval lines for each study fall inside the study squares because they have narrow values relative to the scale. Is there a way to change the scale? Or should I just do a sensitivity analysis to see whether the study with the highest SMD affects the overall estimated SMD?

I did the meta-analysis with metacont, using a random-effects model and Cohen's method, and plotted it with forest.meta from the meta package.

When AHP is used only for the calculation of weights and SAW is used for ranking the alternatives, how do I carry out the sensitivity analysis?

How do I get the sensitivity matrix, in particular for a cracked beam modelled with FEM, by carrying out a natural frequency sensitivity analysis?
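One standard building block, assuming the usual generalized eigenproblem K·phi = lambda·M·phi with mass-normalized modes: d(lambda_i)/dp = phi_i^T (dK/dp − lambda_i·dM/dp) phi_i. Stacking these derivatives over modes (rows) and parameters (columns) gives the sensitivity matrix; a crack is then often modelled as a local stiffness reduction, so dK/dp is the derivative of the cracked element's stiffness. A toy 2-DOF numpy check with my own numbers:

```python
import numpy as np

# 2-DOF spring-mass chain: K depends on the spring stiffnesses k1, k2
def K(k1, k2):
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]], dtype=float)

M = np.eye(2)  # unit masses -> eigh returns mass-normalized modes
k1, k2 = 100.0, 80.0

lam, phi = np.linalg.eigh(K(k1, k2))   # eigenvalues lam = omega^2

# Analytic eigenvalue sensitivity: d(lam_i)/dk2 = phi_i^T (dK/dk2) phi_i
dK_dk2 = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])
S = np.array([phi[:, i] @ dK_dk2 @ phi[:, i] for i in range(2)])

# Finite-difference check of the same column of the sensitivity matrix
h = 1e-6
lam_h, _ = np.linalg.eigh(K(k1, k2 + h))
S_fd = (lam_h - lam) / h
print(S, S_fd)  # the analytic and finite-difference values agree
```

For a cracked beam the same formula applies mode by mode, with dK/dp taken for whatever crack parameter (e.g. local stiffness reduction) you choose; the formula assumes distinct eigenvalues.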

Dear all,

I'm currently working on the optimization of the daily activity chain using the GA algorithm. I developed a utility function consisting of ten (10) variables and their weights as follows:

U = v1·w1 + v2·w2 + … + v10·w10.

The numerical values of the variables can be obtained from different sources, but the weight of each variable is based on the user's preferences. What is the best way to treat these weights: sensitivity analysis or scenario analysis?
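Whichever you choose, a one-at-a-time weight sensitivity is easy to script: shift one weight, renormalize the others so they still sum to 1, and recompute U. A Python sketch with made-up scores and weights:

```python
def perturb_weights(w, i, delta):
    """Shift weight i by delta and redistribute -delta over the others
    in proportion to their current shares, so the weights still sum to 1."""
    rest = sum(w) - w[i]
    out = [wj - delta * wj / rest for wj in w]
    out[i] = w[i] + delta
    return out

def utility(v, w):
    # U = sum(v_i * w_i), the utility function from the question
    return sum(vi * wi for vi, wi in zip(v, w))

# Hypothetical scores v and baseline weights w for the 10 variables
v = [0.8, 0.6, 0.9, 0.4, 0.7, 0.5, 0.3, 0.6, 0.8, 0.2]
w = [0.1] * 10
base = utility(v, w)
for delta in (-0.05, 0.05):            # vary w1 by +/- 5 percentage points
    u = utility(v, perturb_weights(w, 0, delta))
    print(delta, round(u - base, 4))   # change in total utility
```

Repeating this loop over all ten weights gives a tornado-style ranking of which preference weights the optimized activity chain is most sensitive to; a scenario analysis would instead evaluate a few complete, named weight profiles.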

Dear ResearchGate community,

I am working on DNA samples obtained from patients screened for cervical cancer/ HPV infection. To calculate the viral load of each sample, I am looking for a technique to know the amount of the viral DNA present in each of the samples.

That said, considering the fact that the viral genetic content can be present both as episomes as well as integrated into the human genome, what would be the best approach to get insight from the "effective" viral load?

In other words, for the diagnostic purposes which one of the two types of viral DNA- if not both- is of significance, and how can it be quantified?

Many thanks in advance,

Hello everyone, I am performing a Mendelian randomization in R to check whether there is a causal association between genetically predicted eGFR and severe COVID-19. At the end of my analysis I would like to check whether the results are robust using sensitivity analyses. Do I have to check whether the IV assumptions hold, or do I check this in some other way?

I read that single-parameter sensitivity analysis and map-removal sensitivity analysis are commonly used to assess the influence of parameters in DRASTIC and other index-based models. However, how do I actually do it? Because the output of DRASTIC is a raster with very many pixels, calculating this for all pixels is a huge task. Is there a software or an efficient way to do it? Can it be carried out in ArcMap/ArcGIS? I've attached an example of an article that used sensitivity analysis for the DRASTIC-LU model.

I have two questions and hope for some expert advice please?

1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?

2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?

Thank you so much.

**Dear researchers**

**As we know, a new type of derivative has recently been introduced that depends on two parameters: the fractional order and the fractal dimension. These derivatives are called fractal-fractional derivatives and are divided into three categories with respect to the kernel: power-law kernel, exponential-decay kernel, and generalized Mittag-Leffler kernel.**

**The power and accuracy of these operators in simulations have motivated many researchers for using them in modeling of different diseases and processes.**

**Are there any researchers working with these operators on equilibrium points, sensitivity analysis, and local and global stability?**

**If you would like to collaborate with me, please contact me by the following:**

**Thank you very much.**

**Best regards**

**Sina Etemad, PhD**

My SWAT-CUP is unable to accept more than 18 parameters for sensitivity analysis. Does anyone have a solution for this?

Dear colleagues, I am looking for a user-friendly tool to conduct sensitivity analyses of simulation-based experiments. It would be good if it included procedures for specifying the parameters to explore, computing and generating samples, and evaluating the sensitivity with sophisticated modern methods like the Morris method.

I already know about two of them: SimLab and Dakota. However, SimLab no longer seems to be available (https://joint-research-centre.ec.europa.eu/sensitivity-analysis-samo/simlab-and-other-software_en) and I cannot find an alternative download site; this tool was my preferred one. I also know about a Python library, SALib. Any other ideas or suggestions?
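In case it helps while SimLab is unavailable, the core of the Morris idea (one-at-a-time elementary effects, summarized by mu*) fits in a few lines of pure Python. This is a simplified radial variant, not SALib's full trajectory design:

```python
import random

def morris_mu_star(f, bounds, r=20, delta=0.1, seed=1):
    """Radial one-at-a-time elementary effects (the idea behind Morris
    screening): mu* is the mean absolute elementary effect per input."""
    rng = random.Random(seed)
    k = len(bounds)
    mu = [0.0] * k
    for _ in range(r):
        # random base point in the unit hypercube, mapped to the bounds
        u = [rng.uniform(0, 1 - delta) for _ in range(k)]
        x = [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]
        fx = f(x)
        for i in range(k):
            lo, hi = bounds[i]
            xp = list(x)
            xp[i] = lo + (u[i] + delta) * (hi - lo)  # perturb input i only
            mu[i] += abs((f(xp) - fx) / delta)
        # cost: k + 1 model runs per repetition
    return [m / r for m in mu]

# Toy model: strongly driven by x1, weakly by x2, not at all by x3
f = lambda x: 2.0 * x[0] + 0.2 * x[1] + 0.0 * x[2]
print(morris_mu_star(f, [(0, 1)] * 3))  # ~[2.0, 0.2, 0.0]
```

SALib's Morris sample/analyze modules add the proper trajectory construction plus a sigma estimate for interactions and nonlinearity, so for real studies SALib or Dakota remain the better route.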

Hello,

I am currently working on sensitivity analysis in the context of AHP. I use the online tool BPMSG from Goepel; maybe someone here knows it. However, I have a problem with the traceability of the results. Let's assume there are exactly 3 criteria in the AHP (C1, C2, C3). I would then like to know how the final value for an alternative (a1) changes if the weighting of one of the criteria changes, right?

I'll just say C1 decreases by x. However, the value x that is taken away from C1 must be distributed to C2 and C3. I just wonder **which method** is used to do this. Is x simply distributed equally to C2 and C3, or does this happen according to the share of C2 or C3 in the sum of C2 and C3? When I do the latter, I get the following for the remaining two criteria:

(C1-x) = New C1

(C2 + (C2 / (C2 + C3)) * x) = New C2

(C3 + (C3 / (C2 + C3)) * x) = New C3

Unfortunately, I do not know whether this is correct. If I multiply the criteria weights by the corresponding values of alternative a1 and combine the whole thing into a final value, I can do the same with the other alternatives. When I compare the graphs to see how big x has to be to change the final prioritization of the alternatives, I always get values that differ from the online tool. Therefore I would like to know whether this redistribution of the weights is correct.

I hope someone can help me despite the long question. Thanks a lot!
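The proportional redistribution in the formulas above does keep the weights summing to 1, and the flip point x can be found by a brute-force scan; the weights and alternative scores below are made up for illustration, and whether BPMSG applies this same rule internally is exactly the open question:

```python
def redistribute(w, i, x):
    """Take x away from criterion i and give it to the others in
    proportion to their current shares (the formulas from the question)."""
    rest = sum(w) - w[i]
    out = [wj + (wj / rest) * x for wj in w]
    out[i] = w[i] - x
    return out

def score(a, w):
    # weighted sum of an alternative's local values
    return sum(ai * wi for ai, wi in zip(a, w))

w  = [0.5, 0.3, 0.2]   # hypothetical AHP weights for C1, C2, C3
a1 = [0.9, 0.2, 0.3]   # hypothetical local scores of two alternatives
a2 = [0.4, 0.8, 0.6]

print(redistribute(w, 0, 0.1))  # ~[0.4, 0.36, 0.24], still sums to 1

# How big must x be before the ranking of a1 and a2 flips?
for x in [k / 100 for k in range(0, 11)]:
    nw = redistribute(w, 0, x)
    if score(a2, nw) > score(a1, nw):
        print("ranking flips at x =", x)
        break
```

If the online tool gives a different flip point for the same inputs, it is presumably redistributing x by a different rule (e.g. equally), which would answer the "which method" question by elimination.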

We have used RSM to model a problem, creating a formulation between the output and the inputs. Now we are trying to do a sensitivity analysis for validation. Please suggest a good way to do this.

Best Regards

Hi, I need to perform a mesh sensitivity analysis for a stenosis CFD analysis. Is it sufficient to try several different meshes and check whether the residuals (x-, y-, and z-velocity) converge?

Thank you all, I'm a desperate beginner..

Hi! I have a question regarding a randomized controlled trial. This trial compared steroid versus steroid+MMF as first-line treatment in immune thrombocytopenia (ITP). The primary outcome was time to treatment failure, which was defined as a platelet count < 30x10^9 in spite of **2 weeks** of treatment in the steroid arm, and in spite of **2 months** of treatment in the steroid+MMF arm.

First of all, this feels kind of weird: why did the trial design define treatment failure differently in the two arms?

And in the statistical analysis part, here is the explanation:

"Sensitivity analysis will include landmark analysis or shifting the time line to classify all treatment failures before 2 months as at 2 months in order to prevent potential biases caused by different definitions of treatment failure time frames between the two groups."

What does this mean? And does it justify using two different definitions to measure the outcomes?

For your information, here is the link for the trial design:

Thanks a lot! I am new to this field so I may be asking very basic questions. Sorry about that ;)

I tried the command meta summarize with the leave-one-out option, but it says the command is unrecognized.

This is for a systematic review and meta-analysis.

After doing FAHP, how do I check the robustness of the model using sensitivity analysis? E.g., I have four factors in my model; each factor has some weight such that the sum of all weights is 1. If I change the weight of one factor while keeping the total at 1, what is the impact on the other factors' weights? How do I calculate the revised weights of the other factors?
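One simple convention, offered as a sketch rather than as the FAHP-prescribed rule: fix the changed factor at its new weight and rescale the remaining weights proportionally, w_j' = w_j·(1 − w_i') / (1 − w_i), so the total stays 1 and the ratios among the untouched factors are preserved:

```python
def revised_weights(w, i, new_wi):
    """Set weight i to new_wi and rescale the remaining weights so the
    total stays 1, preserving their ratios:
    w_j' = w_j * (1 - new_wi) / (1 - w[i])."""
    scale = (1.0 - new_wi) / (1.0 - w[i])
    out = [wj * scale for wj in w]
    out[i] = new_wi
    return out

w = [0.40, 0.25, 0.20, 0.15]        # four FAHP factor weights, sum = 1
new = revised_weights(w, 0, 0.50)   # raise factor 1 from 0.40 to 0.50
print(new)                          # ~[0.5, 0.2083, 0.1667, 0.125]
```

Sweeping new_wi over a range for each factor, and re-ranking the alternatives at each step, is the usual way such an FAHP robustness check is reported.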

Hi all,

I am working on a meta-analysis after screening a few studies. I am trying to do subgroup analyses (based on geographical location, gender, and age) to understand the heterogeneity in the MA. I have data on the means, SDs, and sample sizes extracted from the studies. Even within every subgroup, the heterogeneity (I-squared value) is too high. Does that imply that I cannot combine these studies, or can you recommend some other way to conclude the meta-analysis?

I cannot do a sensitivity analysis, as the sample size in all studies is less than 70.

May i get suggestions on this?

Thanks much

I'm interested in testing the robustness of an outcome definition using different cut-off points (e.g., 80% of pills taken vs. 90% of pills taken to define adherence). Many articles report such comparisons as sensitivity analyses but are not specific about the type of test employed. I doubt a simple chi-square can be used, since both outcome definitions are applied to the same sample (the groups are not independent).

Can anyone suggest a statistical test to handle this type of data?
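Since every subject is classified under both cut-offs, the data form paired proportions, and one standard option (named here as a suggestion, not as the field's required test) is McNemar's test on the discordant pairs. An exact stdlib sketch with hypothetical counts:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired binary outcomes.
    b: subjects adherent under the 80% cut-off only;
    c: subjects adherent under the 90% cut-off only (the discordant pairs).
    Concordant pairs carry no information about the disagreement."""
    n = b + c
    k = min(b, c)
    # two-sided exact p-value under Binomial(n, 0.5), capped at 1
    p = 2 * sum(comb(n, j) for j in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical 2x2 paired table: 40 discordant pairs, split 30 vs 10
print(mcnemar_exact(30, 10))  # small p: the definitions disagree systematically
```

A small p-value indicates the two cut-offs classify subjects systematically differently; reporting it alongside the two prevalence estimates is one defensible way to write up the sensitivity analysis.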

Please can I get a concise description of how to carry out a single parameter sensitivity analysis in flood hazard mapping using the MCDM method?

I want to know which kinds of sensitivity analysis are commonly performed in MCDM-related studies.

The result of sensitivity analysis (the coefficients) helps us decide whether the estimated ATT was the pure effects of the treatment or not. But, on which coefficient should I focus more?

I am looking for advice on how to show changes in a dynamical model system based on equations. I want to carry out a sensitivity analysis and show how strongly a percentage change in one parameter is reflected in the system itself. Can anyone recommend some literature, or describe how you did it yourself?

I solved some questions and I did the sensitivity analysis. Two of the parameters returned a sensitivity index of 1.

I would like to know if this sensitivity index of 1 has any special meaning.
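If it was a normalized forward sensitivity index, S_p = (dF/dp)·(p/F), then a value of exactly 1 typically means the output scales proportionally with that parameter, i.e. the parameter enters linearly, as beta does in R0 = beta/gamma. A quick numeric check (toy example, not your model):

```python
def norm_sensitivity(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (dF/dp) * (p / F),
    estimated with a central finite difference."""
    p = dict(params)
    base = f(p)
    p[name] = params[name] * (1 + h)
    up = f(p)
    p[name] = params[name] * (1 - h)
    down = f(p)
    dfdp = (up - down) / (2 * h * params[name])
    return dfdp * params[name] / base

# Toy example: R0 = beta / gamma for a simple SIR-type model
R0 = lambda p: p["beta"] / p["gamma"]
pars = {"beta": 0.3, "gamma": 0.1}
print(norm_sensitivity(R0, pars, "beta"))   # +1: a 1% rise in beta gives a 1% rise in R0
print(norm_sensitivity(R0, pars, "gamma"))  # -1: a 1% rise in gamma gives a 1% fall in R0
```

So an index of 1 is not an error: it just says the output is (locally) exactly proportional to that parameter, which is why such parameters are often singled out as the most directly controllable ones.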

In general, the models in this paper need to be validated, so what aspects should be used to verify a discrete dynamic Bayesian network? And can the sensitivity analysis in the GeNIe software verify its validity?

Please help me; thank you.

I am trying to develop a WASP8 model that can predict the dispersion of contaminants in seawater. Before I do a sensitivity analysis, I want to know whether the discharge flow has any effect on the dispersion.

Thank you

I want to do some sensitivity analysis by altering the meteorology (e.g. increasing temperature) in WRF-Chem model. Can anybody suggest me how I can do this?

Thanks in advance for your kind help.

Best Wishes,

Anwar Khan

I would like to perform a sensitivity analysis of a CFD solver. There are 8 input variables, and for each of them there are 2-3 prescribed numerical values.

Evaluating one set of parameters requires three costly simulations (each running for 20 hours on 800 CPU cores). The budget for these simulations is limited, and due to the queuing system of the HPC it would take a long time to get the results.

I'm aware of Latin hypercube hierarchical refinement methods that allow starting the sensitivity analysis with a smaller budget and subsequently incorporating newer results as they become available.

But those methods work with continuous variables. Is there a method for categorical and ranked/ordinal variables?
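With only 2-3 levels per input, the design-size arithmetic itself is worth seeing. The sketch below uses hypothetical CFD-style factor names and contrasts the full factorial with a one-factor-at-a-time screening design; classical fractional factorials and orthogonal arrays sit between the two:

```python
from itertools import product

# Hypothetical solver inputs: 8 categorical/ordinal variables, 2-3 levels each
levels = {
    "scheme":     ["upwind", "central", "weno"],
    "limiter":    ["minmod", "vanleer"],
    "turb_model": ["k-eps", "k-omega", "sst"],
    "mesh":       ["coarse", "medium", "fine"],
    "time_step":  ["small", "large"],
    "precond":    ["on", "off"],
    "wall_func":  ["standard", "enhanced"],
    "order":      ["1st", "2nd"],
}

full = list(product(*levels.values()))
print(len(full))   # 3*2*3*3*2*2*2*2 = 864 points: far beyond the budget

# One-factor-at-a-time screening: a baseline plus each alternative level once
baseline = {k: v[0] for k, v in levels.items()}
oat = [baseline] + [
    {**baseline, k: lv} for k, v in levels.items() for lv in v[1:]
]
print(len(oat))    # 1 + 11 = 12 points: feasible for a first screening
```

For the analysis, one common approach is to summarize each factor by the mean absolute output change per level switch, which is essentially the Morris mu* idea transplanted from continuous steps to discrete levels.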

Dear all,

I have simulated a chemical process and I would like to run a sensitivity analysis on the impact of its parameters. Could you provide me with a tutorial on the Monte Carlo method?

Thanks in advance,

I am working on a design optimisation problem. I would like to ask: for an uncertain problem, should design-optimisation-under-uncertainty techniques be used?

As the title says, I am looking to vary a Fortran defined variable in sensitivity analysis.

For example, I am trying to vary the mole ratio of toluene to methanol in a feed stream to study the effects of the molar feed ratio on conversion of toluene.

**Sensitivity analysis** can be used to check the variation of the *optimum solution* when changing the coefficients of the *objective function* or the constant values in the *constraints*. Do there exist any other things to investigate using this approach?

Hello

I'm currently trying to create a chi map using TopoToolbox for MATLAB. In the available literature most of the calculations use a single m/n ratio (0.45-0.5) for the entire area, and some others do a sensitivity analysis to get the best m/n ratio per watershed. However, I don't know whether the calculations would improve using the best m/n ratio per stream, or whether it really doesn't matter because the sensitivity analysis per watershed is good enough.

By the way, I'm working in a landscape highly controlled by fault activity.

Any comments will be appreciated.

Best Regards

Lester

We all know that mathematical programming is the best optimization tool, with many years of strong theoretical background, and that it presents reliable solutions with high efficiency. A mathematical proof of global optimality is also available. However, in some cases with a lack of knowledge, in which the analytical calculation of sensitivities is impossible, researchers prefer to use metaheuristics, even though these are inefficient and unreliable in large-scale problems.

The development of surrogate models such as Kriging, model-based methods such as radial basis function interpolation, and novel machine learning tools helps us approximate the objective function, so model-based sensitivities can be used instead. Machine learning can also help predict sensitivity information!

So, improvements in function and sensitivity approximation, coupled with mathematical optimization, will make metaheuristics disappear. In this way, I guess there would be no need for metaheuristics (at least in continuous optimization, as far as I know).

What do you think about this? Do you agree? Do you have any relevant experience? I am interested in both mathematical programming and metaheuristics, but I prefer efficiency.

Mathematical programming is the best optimization tool, with many years of strong theoretical background. It has been demonstrated that it can efficiently solve complex optimization problems on the scale of one million design variables, and the methods are very reliable. Besides, there is mathematical proof of the existence of the solution and of the globality of the optimum.

However, in some cases where there are discontinuities in the objective function, difficulties arise because the problem is non-differentiable. Some methods, such as subgradients, have been proposed to solve such problems. However, I cannot find many papers in the state of the art of engineering optimization that treat discontinuous optimization using mathematical programming; engineers mostly use metaheuristics for such cases.

Can all problems with discontinuities be solved with mathematical programming? Is it easy to implement subgradients for large-scale industrial problems? Do they work for non-convex problems?

A simple example of such a function is attached here.