Science topic

Uncertainty Analysis - Science topic

Explore the latest questions and answers in Uncertainty Analysis, and find Uncertainty Analysis experts.
Questions related to Uncertainty Analysis
  • asked a question related to Uncertainty Analysis
Question
5 answers
The concept of propensity was first introduced by Popper (1957) and is considered an alternative interpretation of the mathematical concept of probability. Bunge (1981) examined the mathematical concept of probability and its personalist, frequentist, and propensity interpretations. He stated,
“The personalist concept is invalid because the probability function makes no room for any persons; and the frequency interpretation is mathematically incorrect because the axioms that define the probability measure do not contain the (semiempirical) notion of frequency. On the other hand the propensity interpretation of probability is found to be mathematically unobjectionable and the one actually employed in science and technology and compatible with both a possibilist ontology and a realist epistemology.”
However, it seems that the concept of propensity was forgotten for many years. I have recently read several papers on propensity and found that it may be a more useful concept for measurement uncertainty analysis than the "degree of belief" of Bayesian approaches to measurement uncertainty analysis.
Bunge M 1981 Four concepts of probability. Applied Mathematical Modelling 5(5) 306-312.
Popper K R 1957 The propensity interpretation of the calculus of probability and the quantum theory. In S. Körner (ed.), Observation and Interpretation. Butterworths, London.
Relevant answer
Answer
Gang (John) Xie Thanks for your input. I think subjective probability or degree of belief may be suitable for some fields, such as social science, but it is not suitable for others, such as measurement science. For example, the normal distribution is also called the law of error. That is, the normal distribution is a physical law that describes an intrinsic property of a measurement system; it exists objectively, independent of a person's mind or knowledge.
  • asked a question related to Uncertainty Analysis
Question
3 answers
I am doing uncertainty analysis of a rainfall-runoff model (HBV) and want to obtain optimal parameter values, but the difficulty is the selection of the likelihood function (the code in RStudio and its interpretation).
Relevant answer
Answer
I would start with the normal likelihood function. See the attached to get started. Best wishes David Booth
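As a minimal illustration of that suggestion (in Python rather than R, with hypothetical observed and simulated flows standing in for real data), a Gaussian log-likelihood for comparing parameter sets might look like this:

```python
import numpy as np

def gaussian_log_likelihood(observed, simulated, sigma):
    """Log-likelihood of the observations under iid Gaussian errors."""
    residuals = observed - simulated
    n = len(residuals)
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(residuals**2) / sigma**2)

# Hypothetical observed flows and two candidate simulations:
obs   = np.array([1.2, 3.4, 2.1, 0.8])
sim_a = np.array([1.0, 3.0, 2.5, 1.0])
sim_b = np.array([1.3, 3.3, 2.0, 0.9])
print(gaussian_log_likelihood(obs, sim_a, sigma=0.5))
print(gaussian_log_likelihood(obs, sim_b, sigma=0.5))  # higher = better fit
```

The parameter set maximising this quantity is the optimum under the normality assumption; sigma can either be fixed or calibrated alongside the model parameters.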
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hello everyone,
During calibration of a hydrological model using synthetic data, I was trying to add Gaussian noise to the simulated streamflow data. But many streamflow values are near or equal to zero, so if I add Gaussian noise with a certain standard deviation (let's say 10%), many data values become negative, and negative streamflow values are invalid.
Could anyone please suggest how I should proceed? Also, please cite studies which support that method.
Relevant answer
Answer
Oh I see, thanks for explaining!
It seems to me that you already provided one possible answer to your question: Your model should include noisiness of data.
Are you somehow stuck with a model that cannot adequately handle noisy data?
If so, then you will have to fudge a bit and check whether your fudged approach causes unwanted error/bias in your calibration. E.g.: do you expect problems if you simply add the noise you had in mind and turn the negative values into 0's before starting the calibration? (See the sketch below.)
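To make that concrete, a minimal Python sketch (hypothetical flow values) contrasting the clip-at-zero fudge with multiplicative lognormal noise, which preserves positivity by construction:

```python
import numpy as np

rng = np.random.default_rng(42)
flow = np.array([0.0, 0.05, 1.2, 3.8, 0.01])     # hypothetical streamflow

# Option 1: additive Gaussian noise (10% of each value), clipped at zero.
noisy_add = np.maximum(flow + rng.normal(0.0, 0.10 * flow), 0.0)

# Option 2: multiplicative lognormal noise, which can never go negative.
noisy_mult = flow * rng.lognormal(mean=0.0, sigma=0.10, size=flow.size)
print(noisy_add, noisy_mult)
```

Clipping introduces a small positive bias near zero; the multiplicative variant avoids that, at the cost of assuming the error scales with the flow.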
  • asked a question related to Uncertainty Analysis
Question
5 answers
Dear all,
I am looking for a research work that implemented an uncertainty or statistical framework to study the impact of the geometric parameters on the fracture response.
I appreciate any help.
Thank you in advance,
Moj Ab
  • asked a question related to Uncertainty Analysis
Question
7 answers
My lab needs a more robust method of determining the uncertainty of our gas analysis, determined annually, using a Monte Carlo simulation. We sample a QC cylinder of known gas concentration before and after every batch of samples. If I assume a normal distribution for the gas analysis, can I simply enter the mean and standard deviation of all the QC results for the respective gas component for the year, run a simulation off this, and calculate the uncertainty? If I assume a rectangular distribution, can I simply use the lowest and highest QC results for the respective component as my bounds, run the simulation, and calculate the uncertainty from that? Any guidance or help would be greatly appreciated. Thank you, Lachlan
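For what it is worth, both variants described in the question are mechanically straightforward; a rough Python sketch (hypothetical QC values; whether resampling yearly summary statistics is a defensible uncertainty model is exactly the methodological question being asked):

```python
import numpy as np

rng = np.random.default_rng(1)
qc = np.array([99.8, 100.1, 100.3, 99.7, 100.0, 99.9])  # hypothetical QC results
n = 100_000

# Normal assumption: resample from N(mean, sd) of the year's QC results.
sim_norm = rng.normal(qc.mean(), qc.std(ddof=1), n)

# Rectangular assumption: resample uniformly between min and max.
sim_rect = rng.uniform(qc.min(), qc.max(), n)

for label, sim in (("normal", sim_norm), ("rectangular", sim_rect)):
    lo, hi = np.percentile(sim, [2.5, 97.5])
    print(label, "95% interval half-width:", (hi - lo) / 2)
```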
  • asked a question related to Uncertainty Analysis
Question
5 answers
I am looking for a method to quantify the uncertainty in readings taken of engine output.
Relevant answer
Answer
I would start by qualifying the measurement accuracy: measure the variation of observations under conditions known a priori to be stable. Having framed what I would take to be the measurement accuracy, I would then bin measurements according to that accuracy over the whole space of operating conditions. Variation of the observed parameters under the same operating conditions would be flagged as uncertainty.
  • asked a question related to Uncertainty Analysis
Question
3 answers
Please answer keeping in mind the future job market, i.e. after the lockdown is over. Please also add your age, education level, and employment status.
Relevant answer
Answer
Dear Mati Ullah,
Living with the COVID-19 pandemic for more than a year has changed the way we think of jobs. Freedom and personal control seem to be far more important, capturing attention that may have previously gone to money, rewards, or titles. I have found that since COVID-19 became a regular part of our lives, recruiters: care less about employment gaps, care more about cover letters and care more about interview thank-you notes.
Job seekers may be taking themselves out of the running even before or right after the virtual interview because they are ignoring the key factors to which recruiters are suddenly paying attention.
Regards
  • asked a question related to Uncertainty Analysis
Question
7 answers
Does anyone know about the different uncertainty analysis models? How can I use it?
Relevant answer
Answer
Dear Farzin,
Uncertainty analysis is a broad topic and many frameworks and methods have been developed and applied in this respect. What is the purpose of your uncertainty analysis? What kind of model are you using?
Generally, uncertainty analysis includes several steps: identification and classification of uncertainties; importance assessment (determination of the most important uncertainty sources); quantification of the most important uncertainty sources; propagation of uncertainty sources through the model; and communication of uncertainties. See for instance this publication, where four of these steps are described and applied to a case study for the Dutch river Waal with a 2D hydrodynamic model.
  • asked a question related to Uncertainty Analysis
Question
8 answers
I am working on a design optimisation problem. I would like to ask: for a problem with uncertain parameters, should design optimisation under uncertainty techniques be used?
  • asked a question related to Uncertainty Analysis
Question
7 answers
How can one measure the optics thermal drift error in a laser measurement system (Renishaw XL80)?
Please tell me about optics thermal drift: what are its sources, how can its value be determined, and how can it be reduced?
  • asked a question related to Uncertainty Analysis
Question
4 answers
Please suggest methods or ways to do an uncertainty analysis of kinetic data from a reaction. We have carried out a reaction in a homogeneous medium and calculated the rate constants, and we want to do an uncertainty analysis.
Relevant answer
Answer
Thank you all for giving valuable inputs.
  • asked a question related to Uncertainty Analysis
Question
5 answers
Using the Gauss quadrature method, how can we obtain the points and weights for an arbitrary distribution function, such as a gamma or beta distribution?
Relevant answer
Answer
I would suggest, however, that for specific problems you try a search like:
Gaussian quadrature and distribution functions - Bing
Best wishes, David Booth
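For the two distributions named in the question, the classical quadrature families are generalized Gauss-Laguerre (gamma weight) and Gauss-Jacobi (beta weight). A small sketch using scipy, assuming unit scale parameters:

```python
import numpy as np
from scipy.special import roots_genlaguerre, roots_jacobi

# Gamma(k, 1): weight x**(k-1) * exp(-x) -> generalized Gauss-Laguerre.
k = 3.0
x, w = roots_genlaguerre(10, k - 1)
print(np.sum(w * x) / np.sum(w))        # E[X] = k = 3

# Beta(a, b): Gauss-Jacobi weight (1-t)**(b-1) * (1+t)**(a-1) on [-1, 1],
# mapped to [0, 1] via x = (1 + t) / 2.
a, b = 2.0, 5.0
t, w = roots_jacobi(10, b - 1, a - 1)
x01 = (t + 1) / 2
print(np.sum(w * x01) / np.sum(w))      # E[X] = a / (a + b) ≈ 0.2857
```

Dividing by the sum of the weights normalises the weight function into a probability density, so the weighted sums approximate expectations under the distribution.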
  • asked a question related to Uncertainty Analysis
Question
9 answers
I have completed an uncertainty analysis for my heat exchanger calorimetric lab, but I am not sure about the calculation of enthalpy (required for Q = mr·ΔH) because of the REFPROP program. Normally my main equation is U = cv·dT + P·v, and I have calculated the uncertainty from the temperature and pressure sensors while treating the specific heat and specific weight as constants; however, these values have their own uncertainty due to the REFPROP uncertainties. I am not sure whether to add this uncertainty. I would like to hear your opinions and advice.
   
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hello everyone
My project is about the uncertainty analysis of a sequencing batch reactor (SBR). I have a few questions about modelling the SBR, and also about modelling dissolved oxygen in the SBR tank with ASM models. Can anyone guide me?
Relevant answer
Answer
Attached for your kind perusal.
  • asked a question related to Uncertainty Analysis
Question
1 answer
Interested in numerical techniques for the traditional deterministic (Tikhonov) paradigm and especially for the information-probabilistic (Bayesian) paradigm, uncertainty propagation analysis, etc.
Relevant answer
Answer
Attached for your kind perusal.
Please look at Theorem 2.7 (Fréchet derivative) and Theorem 2.6.
  • asked a question related to Uncertainty Analysis
Question
8 answers
Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz
  • asked a question related to Uncertainty Analysis
Question
1 answer
Hi everyone, I am performing Sobol sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are considered sensitive.
Many thanks!
Relevant answer
Answer
Usually + or - 25%
  • asked a question related to Uncertainty Analysis
Question
2 answers
I'm working on the calibration of a hydrological model which is Python-based. In addition to my manual calibration, I need to carry out an automatic calibration, and most articles I reviewed recommended PEST. If you have any experience with running this software on Windows 10, please let me know your suggestions.
Adugnaw
Relevant answer
Answer
Thank you for your help, @Ujjwal KC
  • asked a question related to Uncertainty Analysis
Question
8 answers
I am looking for an analytical, probabilistic, statistical or other way to compare the results of a number of different approaches implemented on the same test model. These approaches could be different optimization techniques implemented on a similar problem, or different types of sensitivity analysis implemented on a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Relevant answer
Answer
Hi! You might want to have a look at one of my publications: 10.1016/j.envsoft.2020.104800
I recently conducted a similar study where I applied three different sensitivity analysis methods to fire simulations and compared their results!
Cheers!
  • asked a question related to Uncertainty Analysis
Question
8 answers
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, 70%, 75% or 80% percentiles are used. The American administration uses 60% for civil works projects...
My doubt is, is there any recommendation or usual approach to choose a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer from AACE International or the International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
Relevant answer
Answer
I believe setting confidence intervals is more of an art than an exact science. It really depends on how important it is that something remains within the confidence interval, but there are no real quantitative standards.
If going over budget almost certainly means default, you may want to base risk reserves on a high percentile. If default is not imminent, as with governmental institutions, the percentile may be set lower.
  • asked a question related to Uncertainty Analysis
Question
5 answers
I have been using the Fuzzy Delphi method (FDM) for quite some time; recently someone asked me why I did not consider conducting an uncertainty analysis and a parameter sensitivity analysis on my FDM result. I found some references where these types of analysis were conducted on AHP (analytic hierarchy process), but none on FDM. I am not sure how to respond to this, as it is not a standard procedure for FDM in the literature (to the best of my knowledge). Is it even appropriate to conduct such analyses for FDM, and if not, what would be a viable response?
  • asked a question related to Uncertainty Analysis
Question
5 answers
What is uncertainty analysis, and how can we perform it for a photocatalytic reaction? I am checking the effect of different concentrations, pH and temperature on dye degradation by nanomaterials. Please let me know the basics and how to do this kind of analysis.
Thanks
  • asked a question related to Uncertainty Analysis
Question
3 answers
In solving an optimisation problem with uncertain input parameters, we are using Monte Carlo simulation (MCS) and scenario reduction to arrive at a number of scenarios with their associated probabilities. The optimisation algorithm outputs decision variables for the different scenarios, for example the power of a generator in each scenario. Assume that in a generation scheduling problem with Ng generators, in scenario 1 (probability 0.3) the power of generator 1 is 12, in scenario 2 (probability 0.2) it is 17, and in scenario 3 (probability 0.5) it is 9. Then, in practice, how should the decision maker set the power of this generator?
Relevant answer
Answer
This is a problem that needs to be solved with stochastic optimization algorithms. I recommend you take a look at the Benders Decomposition, an algorithm that is well suited for stochastic problems with scenarios. The output of this algorithm is a single value of your variable, given that the algorithm handles the uncertainty in the parameters.
  • asked a question related to Uncertainty Analysis
Question
8 answers
Hi guys,
Can anyone describe PCE in simple words, please? How can we find a PC basis, and what is an appropriate sparse PC basis?
Thanks in advance
Relevant answer
Answer
PCE is a truncated series used to estimate the response of a dynamic system when uncertainties are involved, specifically in its design or parameters. Imagine you have a dynamic system, such as a mass and spring, in which the mass follows a normal distribution. If you want to find the mean and standard deviation of your system's response, let's say the mass's acceleration, when such uncertainty exists, you can build a polynomial chaos expansion for your system and get the mean and standard deviation from it.
To build your PCE you need a set of basis functions whose type depends on the distributions of your random variables; here the mass is normally distributed, so based on the literature you should choose Hermite polynomials as your basis functions.
There are plenty of papers out there on this topic that explain this AWESOME tool in detail.
Good luck.
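To make the description above concrete, here is a minimal non-intrusive PCE sketch in Python for a toy scalar model with one normally distributed input (the response function and numbers are hypothetical; coefficients are obtained by projection with Gauss-Hermite quadrature):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def response(m):
    return 1.0 / m                      # toy model, e.g. a = F/m with F = 1

mu, sigma, order = 2.0, 0.2, 6          # mass m ~ N(mu, sigma)

# Project onto probabilists' Hermite polynomials He_n (weight e^{-x^2/2}).
xi, w = hermegauss(30)
y = response(mu + sigma * xi)
coeffs = []
for n in range(order + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0
    He_n = hermeval(xi, basis)
    coeffs.append(np.sum(w * y * He_n) / (sqrt(2 * pi) * factorial(n)))

mean = coeffs[0]                                  # first coefficient = mean
var = sum(c**2 * factorial(n) for n, c in enumerate(coeffs) if n > 0)
print(mean, sqrt(var))                            # PCE mean and std dev
```

The mean is the zeroth coefficient and the variance is the weighted sum of the squared higher coefficients, which is exactly why PCE gives these statistics "for free" once the expansion is built.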
  • asked a question related to Uncertainty Analysis
Question
4 answers
I would like to conduct a Monte Carlo uncertainty analysis for actual and predicted data in R.
Kindly share some effective resources.
I have found this <<https://rdrr.io/rforge/metRology/man/uncertMC.html>>, but understanding its hyperparameters is another challenge.
Please share your views/knowledge/effective materials.
Best regards!
Relevant answer
Answer
Mohammed Deeb, Muhammad Ali, and Cristian Ramos-Vera, thank you so much for your responses, which certainly strengthen the idea; however, the requirement as mentioned is slightly different. In the reference, section 2.3 is exactly where the requirement is matched, and after the analysis the graph needs to be drawn like the attached one. I hope you will find something matching the requirements precisely.
I pray for your safe stay in this period.
best regards
Suraj
  • asked a question related to Uncertainty Analysis
Question
54 answers
Following well-known books in quantum mechanics (QM), such as "Quantum Mechanics" by Eugen Merzbacher, once influential and authoritative on QM, is today a slippery slope in many parts.
The simultaneous appearance of classical wave and particle aspects can no longer be defended in QM. The only things that exist in Nature, as we know from quantum field theory (QFT), are quantum waves, NOT particles and NOT waves as in (classical) Fourier analysis.
The Heisenberg uncertainty principle in QM thus seems to need modification, as it is, in contradiction with itself, based on continuity through the use of the Fourier transform. Can we express it in QFT terms (not based on continuity)? Could that influence LIGO and other applications?
See also Sean Carroll recently at https://www.youtube.com/watch?v=rBpR0LBsUfM, especially after the 45-minute mark.
Thus, there is only one case (not two or even three) of photon interference in the two-slit experiment, and it is the case that is often neglected: the quantum wave case. See the two-slit experiment at low intensity, for example at: https://www.youtube.com/watch?v=GzbKb59my3U
Relevant answer
Answer
All: For background on why we can suspect that the Heisenberg uncertainty principle is fundamentally wrong, one can look at its supposed "justification" using the classical (not quantum) theory of the Fourier transform; see:
  • asked a question related to Uncertainty Analysis
Question
5 answers
I'm looking for a free and user-friendly tool. I'm familiar with Python (and a bit of R).
Thank you
Relevant answer
Answer
Aryan Shahabian The assumptions change as the methods change as the problem changes. In my experience, sensitivity analysis was about how a solution to a problem changes as the input conditions or input data or ... change. It might help you to consider what a sensitivity analysis on the methods in one or the other of the attached papers might mean. IMO this is especially interesting because adaptive lasso in both of these situations has an oracle property as is mentioned. Best wishes, David Booth
  • asked a question related to Uncertainty Analysis
Question
6 answers
I know that comparison of these terms may require more than one question but I would appreciate it if you could briefly define each and compare with relevant ones.
Relevant answer
Answer
Perfect explanation from Prof. Alexander Kolker, and I may add that in some applications prediction and forecasting are used interchangeably.
  • asked a question related to Uncertainty Analysis
Question
9 answers
I have run a number of time-consuming complex simulations:
- approx. 200 samples
- input parameters are not definable as simple numerical values (they are categorical: low, mid, high, etc.)
So the input parameters and output values are known, but the model is so complex that it is effectively a black box.
Is it possible to sort the input parameters by their importance? If yes, how?
Relevant answer
Answer
Quenouille's jackknife may be an alternative to Efron's bootstrap. Both have their merits.
Brief comparisons of bootstrap and jackknife are:
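Complementing the resampling angle, one common pragmatic approach for black-box models with categorical inputs is to fit a surrogate model and rank inputs by permutation importance. A hedged sketch using scikit-learn, with synthetic data standing in for the 200 runs:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
levels = {"low": 0.0, "mid": 1.0, "high": 2.0}

# Hypothetical stand-in for ~200 runs with 5 categorical inputs.
X = pd.DataFrame({f"p{i}": rng.choice(list(levels), 200) for i in range(5)})
X_num = X.replace(levels)                        # ordinal encoding
y = 2 * X_num["p0"] + 0.5 * X_num["p3"] + rng.normal(0, 0.1, 200)

surrogate = RandomForestRegressor(n_estimators=300, random_state=0)
surrogate.fit(X_num, y)
imp = permutation_importance(surrogate, X_num, y, n_repeats=20, random_state=0)
for name, m in sorted(zip(X_num.columns, imp.importances_mean),
                      key=lambda t: -t[1]):
    print(f"{name}: {m:.3f}")
```

With only 200 samples, the importance estimates themselves carry uncertainty, which is where the bootstrap/jackknife ideas above come back in.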
  • asked a question related to Uncertainty Analysis
Question
4 answers
Hello everyone!
I am using SWAT 2012, Rev. 635 for simulation of daily river discharge in my study area, and the SUFI-2 module of SWAT-CUP for uncertainty analysis. The daily hydrological data under analysis are from 2013-2015. The problem is that I am not getting the desired R2 and NS values for the observed and simulated daily river discharge. Images of the selected parameters, the 95PPU plot, summary_stat, and the global sensitivity of the parameters are attached. The desired R2 and NS values are 0.75 and above. Please guide me on where I should make the necessary changes to get the desired results.
Thanks in advance!
Relevant answer
Answer
You already have very good R2 and NSE values. Try more iterations with 900 simulations (100 simulations for each parameter).
You will most likely then get a good NSE value.
  • asked a question related to Uncertainty Analysis
Question
7 answers
Dear swat cup users/expert,
I have completed creating my watershed model using SWAT, and I am using the SWAT-CUP program to perform sensitivity analysis, calibration, validation and uncertainty analysis.
I have read the swat cup manual and other literature but I still have challenges after my first iteration.
1. I have checked the 95PPU plot from the result of the first iteration, but I don't know whether I am on track or not.
2. I do not understand how to interpret the dotty plot results to determine which parameters are sensitive. Should I ignore them and use the global sensitivity analysis instead?
3. I have checked the result of the global sensitivity analysis, but I don't know whether in my next iteration I should delete the parameters that are not sensitive, or keep the same parameters and perform the next iteration with the new parameter ranges suggested by the program.
4. I would like to know what steps to follow after the first iteration until I achieve my objective function, P-factor and R-factor.
Please find attached the results of my first iteration. Thank you.
Relevant answer
Answer
Dear Tankpa,
The answer for all your questions is available in SWAT-CUP calibration manual (Abbaspour et al. 2007; 2013,…) and the SWAT theoretical documentation such as Neitsch et al. 2005, … please read them thoroughly.
Meanwhile, based on the figures you have indicated I can give general insight:
The model has highly overestimated the observed flow; I suggest you check where exactly the gauged subbasin is in this specific SWAT project. FLOW_OUT 1 is for reach #1 or subbasin #1; is this really where the observed file was obtained? The location of the gauging station plays a great role here. Check whether the input data used for the simulation, including the meteorological and spatial input data, are representative of the watershed area.
SWAT-CUP is an iterative program. Therefore, you have to run it repeatedly until an acceptable model efficiency is obtained. As it is a semi-automatic calibration technique, you can vary the values of the sensitive parameters, keeping in mind the allowable minimum and maximum boundaries. It is also important to consult related literature from regions that are hydro-meteorologically similar to your case study area to identify sensitive hydrologic parameters.
The main causes of the overprediction of runoff may be higher surface runoff and higher base flow. Therefore, you need to reduce surface runoff by decreasing the value of CN2 for the different land uses, and reduce baseflow by increasing the deep percolation loss (GWQMN); see the SWAT calibration technique for more detail in the link or image attached below.
After the first iteration, import the new parameters (the best parameters from the previous iteration), rerun the program, and give it a larger number of simulations, e.g. more than 500.
  • asked a question related to Uncertainty Analysis
Question
4 answers
Hi,
This question pertains to gas chromatography to measure levels of carbon monoxide using reduction gas detector technology.
I have developed a quadratic calibration using 6 standard CO concentration levels (each determined as the average of 5 instrument readings, with SDs) and the least squares regression method, so there are standard errors (SEs) associated with the coefficients of the regression. Based on an instrument reading Y, the sample concentration X can be computed as X = (-b + sqrt(b^2 - 4a(c - Y)))/(2a). With algebraic error propagation, the errors being propagated are only the SEs of the coefficients and the SD of Y. How can I include the error contribution from the standards' SDs?
Thanks
Relevant answer
Answer
Unfortunately no.
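For readers facing the same problem: even if there is no closed-form route, Monte Carlo propagation is a pragmatic alternative. A sketch with hypothetical values; the standards' SDs can be included by refitting the regression to resampled standards inside the loop, and the coefficients should ideally be drawn jointly from the fit's covariance matrix since least-squares coefficients are correlated:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Hypothetical quadratic calibration Y = a X^2 + b X + c with coefficient SEs.
a, b, c = 0.002, 1.10, 0.50
se_a, se_b, se_c = 0.0004, 0.02, 0.05
Y_read, sd_Y = 25.0, 0.4            # instrument reading and its SD

# Independent draws shown for brevity; joint draws from the covariance
# matrix of (a, b, c) are preferable in practice.
A = rng.normal(a, se_a, N)
B = rng.normal(b, se_b, N)
C = rng.normal(c, se_c, N)
Y = rng.normal(Y_read, sd_Y, N)

X = (-B + np.sqrt(B**2 - 4 * A * (C - Y))) / (2 * A)
print(X.mean(), X.std())            # propagated concentration and uncertainty
```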
  • asked a question related to Uncertainty Analysis
Question
4 answers
Health Economics, Cost-effectiveness analysis, Uncertainty analysis and Oncology modelling
Relevant answer
Answer
Thanks Rob. Your response was very helpful. Considering alternate survival distribution seems very logical here. Thank you again for sharing your knowledge with us and I hope more people will get benefit by reading this. Best Regards, Masnoon
  • asked a question related to Uncertainty Analysis
Question
9 answers
I am working on estimating the uncertainty in the prediction of a fluid dynamics problem by a numerical code. There are input parameters (thermal conductivity, heat capacity, ...) and target parameters (evaporation rate) in the numerical code which would be interesting to investigate by sensitivity analysis, finally estimating the margins of uncertainty in the predictions of the target parameters. I was wondering if someone could tell me about a standard procedure for analysing the uncertainties in the predictions of a numerical code?
Relevant answer
Answer
You need to calculate a kind of nonlinear approximation of the maximum and minimum, and put a check on this reaching the steady-state situation...
  • asked a question related to Uncertainty Analysis
Question
3 answers
PTC 22 says: "Both pre- and post-test uncertainty calculations are required." and "The number of locations and frequency of measurements shall be determined by the pre-test uncertainty analysis."
Does anybody know of any instructions for calculating the pre-test uncertainty?
Relevant answer
Answer
There is no quick answer to this question, although it does seem to strike at the heart of the problem as far as practical measurements are concerned. To see this, a very useful read would be for example
I also recommend this lecture by Prof. Tinsley Oden
Good luck with your quest!
  • asked a question related to Uncertainty Analysis
Question
5 answers
Data are given in Excel files, obtained with LabVIEW. Some data have negative values (drag), and there are some errors in the charts.
Relevant answer
Answer
For wind tunnel tests, error analysis is generally performed using the repeatability of experiments. I would suggest you repeat your experiments a certain number of times to obtain repeatability curves. Then you need to find the standard deviation (SD) and the uncertainty (percentage error) of your data at specified points. A small uncertainty will also help you in validating the obtained data.
  • asked a question related to Uncertainty Analysis
Question
5 answers
How can I perform an uncertainty analysis (95% confidence) of a nonlinear regression equation y = a·x^b without converting it into a linear equation (i.e. without a logarithmic transformation)?
Relevant answer
Answer
Dear Mohammad Zakwan,
Using JCGM 101:2008 Evaluation of measurement data — Supplement 1 to the “Guide to the expression of uncertainty in measurement” — Propagation of distributions using a Monte Carlo method.
Yours sincerely,
Elcio Oliveira
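A minimal Python sketch in the spirit of JCGM 101 (hypothetical data): fit y = a·x^b directly by nonlinear least squares, then propagate the joint coefficient distribution by Monte Carlo, with no log transformation anywhere:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])            # hypothetical data
y = 2.0 * x**0.6 + rng.normal(0.0, 0.1, x.size)

def power_law(x, a, b):
    return a * x**b

popt, pcov = curve_fit(power_law, x, y, p0=[1.0, 1.0])  # direct nonlinear fit

# Propagate: sample (a, b) jointly from the estimated covariance matrix.
ab = rng.multivariate_normal(popt, pcov, 50_000)
y_new = ab[:, 0] * 5.0 ** ab[:, 1]                  # prediction at x = 5
lo, hi = np.percentile(y_new, [2.5, 97.5])          # ~95% coverage interval
print(lo, hi)
```

Sampling (a, b) jointly from pcov preserves their correlation, which a naive per-coefficient propagation would ignore.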
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hi
How can I do an uncertainty analysis for the typical components of a plant in an exergy analysis?
Relevant answer
Answer
Hi Iman,
What you need are the standard uncertainties of all the variables needed in calculating, e.g., the exergy of a pump, together with assumptions about the underlying distributions. Normally, one assumes a normal distribution using the standard uncertainty as the standard deviation and the nominal value (often a measured value) as the expected value. If a variable can have values close to zero, it is better to use a lognormal distribution and a relative standard deviation. If you don't have any idea of the uncertainties of the values needed in the calculations, you may simply assume uncertainties on a relative basis and use some 'clever guess'.
After getting (or guessing) the uncertainties and related distributions, the easiest way is to use Monte Carlo simulation, i.e. to generate random samples of all variables and to calculate a sample of exergies from these samples. The standard deviation of these exergies gives you the standard uncertainty of the related exergy. In practice this is easiest using vectorised code, e.g. in Matlab or R. You can do it in Excel too, but in Excel it is inconvenient to work with samples as big as those needed in Monte Carlo simulation.
Of course, you can also use the error propagation formula based on linearisation, but that works reasonably well only for normal distributions and small relative uncertainties of the variables needed in the calculations.
You can find many good texts by googling the keywords "monte carlo simulation standard uncertainty propagation".
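A bare-bones Python version of the Monte Carlo procedure just described, with an ideal-gas flow-exergy formula for air as a placeholder for the real property calculations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Measured T [K] and P [kPa], with standard uncertainties as std deviations.
T = rng.normal(350.0, 1.0, N)
P = rng.normal(500.0, 5.0, N)

def flow_exergy(T, P, T0=298.15, P0=101.325):
    """Ideal-gas flow exergy of air [kJ/kg]; placeholder for real properties."""
    cp, R = 1.005, 0.287
    return cp * (T - T0 - T0 * np.log(T / T0)) + R * T0 * np.log(P / P0)

ex = flow_exergy(T, P)
print(ex.mean(), ex.std())   # sample std dev = standard uncertainty of exergy
```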
  • asked a question related to Uncertainty Analysis
Question
11 answers
Dear Group members,
I want to learn in detail about measurement uncertainty. Kindly share any presentations or research papers on the topic.
Regards
Relevant answer
Answer
Dear Ankita,
The problem of the measurement uncertainty evaluation in chemical analysis has been the subject of a large number of guidance documents. It would be useful for you to start with the EURACHEM/CITAC Guide "Quantifying Uncertainty in Analytical Measurement" available now in 3rd edition (2012) at https://www.eurachem.org/images/stories/Guides/pdf/QUAM2012_P1.pdf. This document which can be seen as an analytical-chemical adaptation of the basic ISO Guide (the GUM) provides useful guidelines and detailed examples worked out for analytical chemists. Another useful source that accounts for different approaches to the uncertainty in chemical testing is the EUROLAB TR No. 1 (2007) "Measurement uncertainty revisited: Alternative approaches to uncertainty evaluation" (http://www.eurolab.org/documents/1-2007.pdf). Training courses on the subject, organized around the world, can be of help, specifically, the online course "Estimation of measurement uncertainty in chemical analysis" at https://sisu.ut.ee/measurement/uncertainty.
However, it is necessary to face the truth. The qualified uncertainty estimation in a complicated analytical measurement requires great effort; this is a serious study, not a routine task.
  • asked a question related to Uncertainty Analysis
Question
2 answers
Hi, I am working in the area of risk quantification, aiming to come up with something like value at risk for an IT service provider, using uncertainty theories such as possibility theory. There are several works on risk quantification using scales, but I am focusing on financial exposures and not numerical scales alone. Any pointers to work done in those areas, perhaps for other industries?
Relevant answer
Answer
Hi, you can read my review paper titled "Neural Network-based Uncertainty Quantification: A Survey of Methodologies and Applications", freely downloadable from https://ieeexplore.ieee.org/document/8371683/
  • asked a question related to Uncertainty Analysis
Question
4 answers
I have to perform an uncertainty analysis of actual and predicted values (obtained using different soft computing techniques) for a given dataset. Can anybody explain in detail?
Relevant answer
Answer
The steps are:
1. Ascertain the sources of possible errors.
2. Carry out your measurements.
3. Estimate the uncertainty of each input parameter e.g. standard error.
4. Make sure the errors are independent of each other e.g. using the Durbin-Watson statistic.
5. Calculate your result e.g. mean and standard error.
6. Find the combined standard uncertainty e.g. use summation in quadrature (square root of the sum of the squares of the values).
7. Express the uncertainty in terms of a coverage factor (e.g. k = 2) together with a size of the uncertainty interval, and state a level of confidence. Multiply the combined standard uncertainty by k to give an expanded uncertainty, which should provide a 95% level of confidence.
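Steps 6 and 7 in code form (hypothetical component uncertainties, assumed independent and already multiplied by their sensitivity coefficients):

```python
import numpy as np

# Standard uncertainties of independent inputs, in the units of the result.
u_components = np.array([0.12, 0.05, 0.30, 0.08])

u_combined = np.sqrt(np.sum(u_components**2))   # summation in quadrature
k = 2                                           # coverage factor (~95%)
print(f"u_c = {u_combined:.3f}, U = k*u_c = {k * u_combined:.3f}")
```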
  • asked a question related to Uncertainty Analysis
Question
2 answers
A recent paper I published documents the duals method applied to a cardiovascular disease risk model with a 208 dimensional error vector (ASME JESMDT, 2018). I have also done work on turbine surface part cooling models (ASME Turbo Expo, Berlin 2008). Some recent work has been done applying the duals method to assess cooling technologies for HPT parts. This work will be completed this summer with a manuscript prepared for fall submission.
Relevant answer
Answer
Thanks for the cited articles. I will read and consider their context when I begin writing the next article on gas turbine cooling optimization including uncertainty and automatic error arithmetic.
  • asked a question related to Uncertainty Analysis
Question
3 answers
Analysis applied to Life Cycle Assessment.
Relevant answer
Answer
Hi Lenin,
Comparisons are always possible in some regard, although you need to clarify your question a bit.
FAST is a way to estimate first-order sensitivity indices using a spectral method. So-called "E-FAST" is an extension of the approach which estimates total-order sensitivity indices (Saltelli et al, 1999).
What I assume you mean by the Sobol' results is using the Monte Carlo procedure of Sobol e.g. Sobol (1993) to estimate first-order and possibly total-order sensitivity indices.
When you talk about comparing, I guess you mean that you have results from both on your model and want to understand the difference? Or a more general comparison?
I think it is fair to argue that these days, FAST is slightly outdated. The initial advantage was that it improves computation efficiency over Monte Carlo, but comes at the cost of extra assumptions of smoothness, and some bias (see e.g. Xu and Gertner, 2011). There are now more efficient approaches based on metamodels (aka surrogate models or emulators), such as Gaussian processes, or polynomial chaos expansions. A very nice package on these (and more info) can be found at http://www.uqlab.com/.
However, even metamodels impose assumptions. Arguably, if you can run your model lots of times, you should opt for the Monte Carlo approach, because it will always converge as long as your model is square-integrable (not a very restrictive assumption). You can monitor convergence by plotting the values of the sensitivity indices as the number of model evaluations increases. When the estimates become acceptably stable (e.g. vary only 1% over a certain number of iterations), you can have fairly good confidence in your results.
Not sure if this answers your question, but if you clarify I could try to elaborate.
Best
William
Saltelli, A., Tarantola, S. and Chan, K. (1999), Quantitative model-independent method for global sensitivity analysis of model output, Technometrics 41(1), 39–56
Sobol’, I. M. (1993), Sensitivity estimates for nonlinear mathematical models, Mathematical Modeling and Computational Experiment 1(4), 407–414
Xu, C. and Gertner, G. (2011), Understanding and comparisons of different sampling approaches for the Fourier amplitudes sensitivity test (FAST), Computational Statistics and Data Analysis 55, 184–198
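As a concrete illustration of the Monte Carlo route and the convergence monitoring described above, a self-contained sketch of a Saltelli-type first-order estimator on a toy model (all names and the model are hypothetical):

```python
import numpy as np

def sobol_first_order(f, d, N, rng):
    """Monte Carlo (Saltelli 2010) estimate of first-order Sobol indices."""
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1 = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with that of B
        S1.append(np.mean(fB * (f(ABi) - fA)) / var)
    return np.array(S1)

# Toy model on [0, 1]^3; monitor convergence by increasing N.
f = lambda X: X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * X[:, 2]
rng = np.random.default_rng(0)
for N in (1_000, 10_000, 100_000):
    print(N, sobol_first_order(f, 3, N, rng).round(3))
```

Printing the estimates for growing N is exactly the stability check suggested above; dedicated packages (e.g. UQLab, mentioned earlier) wrap the same idea with better estimators and bootstrap confidence intervals.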
  • asked a question related to Uncertainty Analysis
Question
4 answers
Does anyone know how voltage fluctuations can be incorporated into the experimental uncertainty?
Relevant answer
Answer
Dear Aleksandar Janjic
Many thanks for your answer.
  • asked a question related to Uncertainty Analysis
Question
13 answers
I have calculated the uncertainty in a developed rating curve (stage-discharge relationship). Different catchments now show different uncertainty levels; do we correlate this with any hydrological phenomenon, and what could be the reason for it?
Relevant answer
Answer
I agree with Rafael Anleu.
  • asked a question related to Uncertainty Analysis
Question
14 answers
In the case of an uncertainty analysis, say we perform a DoE, from whose analysis we obtain an optimization of the given parameters. How are we supposed to interpret the uncertainty analysis together with that result? What is the best software for this purpose for reservoir simulation studies?
Relevant answer
Answer
Yes, but it may not be so simple and straightforward. A good understanding of the rock's geomechanical properties and the existing stresses and their orientation, including their impact on existing faults and fractures, could be required. Are formation integrity tests available, and what are their associated issues?
  • asked a question related to Uncertainty Analysis
Question
4 answers
Analysis: Monte Carlo uncertainty analysis with SimaPro using a lognormal distribution (defined by the pedigree matrix for most inventory items).
I'm getting very large negative variations (outliers) in the resulting uncertainty range for the 'toxicity' categories (human and ecotoxicity). It is a simple analysis with 7-10 inventory items. The other two damage categories (Ecosystems and Resources) show acceptable variations. However, the large negative variation affects the final results, giving large standard deviations with negative impacts within the 95% confidence interval.
Any idea why there are negative impact figures? Or suggested reading, please.
Thank you in advance for your  efforts in responding.
Relevant answer
Answer
You may use this pdf link, it might help you.
  • asked a question related to Uncertainty Analysis
Question
8 answers
I have complete annual 'hourly' electricity (UK grid) import and PV generation data for a detached family house with a 6 kW array of solar photovoltaics. Sadly, I only have 4 months' worth of actual data for the total electricity that was exported back to the grid (and not used within the property). Is there any way of estimating (from the existing data) the approximate export back to the grid for the remaining 8 months for which I have no export data? A monthly approximation with an uncertainty band attached would be sufficient, so long as the method is robust and referenceable.
Data sheet summary of monthly values attached!
Sincere thanks
M
Relevant answer
Answer
Simply search for the monthly values of the received solar energy (kWh/m²) and multiply these values by the area of the PV panel and its electrical efficiency.
You can use the Meteonorm software to get the monthly solar irradiation data.
  • asked a question related to Uncertainty Analysis
Question
1 answer
In the pure squeezing case, the X and Y quadrature uncertainties satisfy the minimum uncertainty relation, ΔX ΔY = 1. But from the experimental point of view this cannot be exact; it should be ΔX ΔY > 1. So what is the typical value of ΔX ΔY in an experiment?
Relevant answer
Answer
Look at this website; it may be helpful for your topic. Good luck.
  • asked a question related to Uncertainty Analysis
Question
4 answers
I am looking for ISO 7066-1:1989, Assessment of uncertainty in the calibration and use of flow measurement devices - Part 1: Linear calibration relationships. Please share it if someone has it; I will be thankful.
Relevant answer
Answer
I am using the salt dilution method for discharge measurement in small streams too. Constant injection can be a better method than single (slug) injection.
We used measurements from a hydrometric propeller or Doppler meter as a reference.
Our results agreed to within 10%.
  • asked a question related to Uncertainty Analysis
Question
4 answers
In uncertainty analysis with Hong's two-point estimation method, I am facing two problems. First, one location is higher than the mean while the second location is negative, which makes me hesitate.
Second, the weights are found from the uncertain input parameter and are used to multiply the final output variable, but the results are not proper. Can anybody help with how to handle the last function if, let us say, there is only one random variable?
Thanks
Relevant answer
Answer
Dear Behzad,
To find the exact values of skewness and kurtosis, we need all the data; thus we cannot determine these values from the mean and variance alone.
However, we can calculate skewness and kurtosis from a histogram, using the attached Matlab code, with acceptable accuracy.
Good luck!
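In case a Python equivalent of that histogram-based estimate is useful, a minimal sketch (hypothetical bin data):

```python
import numpy as np

def moments_from_histogram(bin_centers, counts):
    """Approximate mean, variance, skewness and kurtosis from binned data."""
    p = counts / counts.sum()                     # bin probabilities
    mean = np.sum(p * bin_centers)
    var = np.sum(p * (bin_centers - mean) ** 2)
    skew = np.sum(p * (bin_centers - mean) ** 3) / var**1.5
    kurt = np.sum(p * (bin_centers - mean) ** 4) / var**2
    return mean, var, skew, kurt

centers = np.array([0.5, 1.5, 2.5, 3.5, 4.5])     # hypothetical histogram
counts = np.array([5.0, 30.0, 40.0, 20.0, 5.0])
print(moments_from_histogram(centers, counts))
```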
  • asked a question related to Uncertainty Analysis
Question
6 answers
I would be interested in any insights into how confident or uncertain people are when making aesthetic judgements, i.e., beauty, attractiveness, liking, etc.
So far, I have not come across any direct assessments, but maybe some of you know better or have suggestions for indirect evidence?
Relevant answer
Answer
Confidence is not a very big topic in aesthetics research, but there are some papers from the very applied side of aesthetics, e.g. surgery:
Sterodimas, A., Radwanski, H. N., & Pitanguy, I. (2009). Aesthetic Plastic Surgery: Junior Plastic Surgeons' Confidence in a Training Program. Aesthetic Plastic Surgery, 33(1), 131-132. doi: 10.1007/s00266-008-9299-3
best,
CCC
  • asked a question related to Uncertainty Analysis
Question
2 answers
Hello everybody. I'm performing uncertainty analyses on my species distribution models, accounting for general circulation models (GCMs), Representative Concentration Pathways (RCPs) and algorithms. Some methods used in the literature are ANOVA and GLM. However, my data are not normal and are zero-inflated. I also need to map the uncertainties, so I'm looking for sums of squares or deviance explained. I've been digging for solutions, but I haven't had much luck. Any suggestions?
Relevant answer
Answer
If you have zero-inflated data and your outcome is a count, then you could make use of zero-inflated Poisson or negative binomial models.
  • asked a question related to Uncertainty Analysis
Question
1 answer
I have captured photographs at the windward side of a fin-and-tube heat exchanger during frosting tests, processed the images with the Matlab Image Processing Toolbox, and determined the frost thickness using a reference meter. The uncertainty comes from two sources: one is the meter mounted on the heat exchanger; the other is the error from reading the image pixel values. How can the reading error be determined? Thanks.
Relevant answer
Answer
Ergin Bayrak,
Maybe you can plan for a depth gauge (screw-gauge type) fitted with a thin thermocouple to measure the thickness of the frost. The data obtained can be compared with your photographic data.
Regards,
Dr. S. Ravindran
  • asked a question related to Uncertainty Analysis
Question
15 answers
I have the volume of fuel (gasoline) dispensed by 78,804 petrol pumps, collected over one year. The dataset represents the measurement error for 20 L dispensed, with average = 7.99 mL and SD = 45.99 mL, and it follows a Gaussian distribution. I do not have any other information. It is also worth noting that petrol pumps are not very precise devices. So, which estimator should I use for the measurement uncertainty given my dataset?
Relevant answer
Answer
You are correct; the SD is small.
The error bounds (80% within one SD and 99.2% within two SD) show that the distribution is not normal, but reasonably close. The range from -5000 mL to 2000 mL is not surprising, but unfortunate; one would think such errors would be obvious.
Good study.
  • asked a question related to Uncertainty Analysis
Question
19 answers
What scientific basis and best-accepted modelling theory exist to support projections of climate change data, and to analyse their uncertainty scientifically?
Relevant answer
Answer
There was a good paper on the uncertainty in climate change forecasts by Allen et al. in Nature:
Quantifying the uncertainty in forecasts of anthropogenic climate change
Nature 407, 617-620 (5 October 2000) | doi:10.1038/35036559
Maybe get started from there?
  • asked a question related to Uncertainty Analysis
Question
2 answers
When can I use a fixed-effects Tobit model? I read that it may produce biased results. Should I use a random-effects model instead?
Relevant answer
I have a panel with a dependent variable that ranges from 0 to 1. When I ran a linear model, a Hausman test supported the fixed-effects estimation. But since my DV can only range from 0 to 1, the FE linear model isn't the most appropriate.
  • asked a question related to Uncertainty Analysis
Question
5 answers
I have just started to learn quantum mechanics and was reading about the Stern-Gerlach experiment, in particular the sequential application of the apparatuses to an Ag beam (suppose the beam travels in the y direction as it comes out of the oven).
How can we demonstrate the uncertainty principle after first passing the beam through the SGz apparatus, then letting only the Sz+ beam pass through the SGx apparatus, and then letting the Sx+ beam pass through the SGz apparatus?
After the first interaction with the SGz apparatus we get two beams corresponding to Sz+ and Sz-, both having the same magnitude, i.e. ħ/2. Letting one of them pass through SGx gives us two beams, Sx+ and Sx-, both also of magnitude ħ/2.
So I don't see the uncertainty principle being followed, since we are 100% certain about the magnitude of the spin component every time.
  1. Where am I wrong?
  2. How can we deduce from this experiment that two components of angular momentum (spin) can't be measured simultaneously? As I see it, we can.
Relevant answer
Answer
Let's now add a third SG device while keeping the first two in the crossed configuration, i.e. the first one being an Sz filter, the second one an Sx filter.
We start out by making the third one an Sx device, just as the second one and repeat one of the previous experiments. Whenever we accept the Sx+ beam from filter 2, we get a pure Sx+ beam after the third filter. If we choose Sx-, we get Sx- again.
So, like in the configuration with two SG devices, we note the following: choosing either the Sx+ or Sx- path, the outcome of a consecutive Sx measurement is determined. For this statement it is irrelevant whether the first SG device selected Sz+ or Sz- (or did not make a selection [field gradient switched off]).
In other words, after choosing Sz+ or Sz- in the first filter, Sz is known. After choosing Sx+ or Sx- in the second filter, Sx is known. The crucial question is now, whether both are now known simultaneously, i.e. whether Sz is still known, after having determined Sx.
To find out about this, we change to a triple filter setup in the order Sz, Sx, Sz (again). Let us select Sz+ in the first one, then Sx+, and see what comes out of the third one: the result is a beam split in two with equal intensities (each part now carrying 1/8 of the original population). By choosing to know Sx, we lose the Sz information! This result is what ultimately relates the SG experiment to the uncertainty principle (imho).
However, we can still go one step further: we leave the second filter active but we admit both Sx+ and Sx- channels to enter the third one. Classically, we would say that we know the beam is split in two [at any time we could put a detector and be reassured of that fact...]. By admitting both Sx+ and Sx- to go through, however, we choose to not know for the individual atoms "which path they have taken". Now a "second miracle" happens: we only find atoms exiting the channel corresponding to the channel which we originally selected in the first SG device. It appears as though the original Sz information is "recovered", actually it rather was "not destroyed" by our choice to not insist on knowing Sx. In this sense, we may say that each atom went both ways in the second filter.
These findings can be thought of in analogy with interference experiments on waves. Think of imaging a grating with an optical setup, e.g. a microscope. If, in the diffraction plane you choose to only let pass the zero order beam (with an appropriate aperture), the image information ("there actually was a grating!!") is lost. No problem with classical waves here.
However, our (well, mine at least :-) thinking is challenged if we continue to think of atoms as of classical ball-type objects. It can't follow two paths at the same time, can it? Experiments seem to say it can. Diffraction experiments can be done not only with electrons, but with atoms and molecules, too. Welcome to quantum mechanics.
Edit: made a correction to the third paragraph which did have an error in the first version (Sx for filter 1 instead of Sz) + a few typos.
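The bookkeeping above can be checked with two-component spinors; a small sketch using standard spin-1/2 states, with probabilities as squared overlaps:

```python
import numpy as np

up_z = np.array([1, 0], dtype=complex)                  # Sz+
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)     # Sx+
dn_x = np.array([1, -1], dtype=complex) / np.sqrt(2)    # Sx-

prob = lambda final, initial: abs(np.vdot(final, initial)) ** 2

# Sz+ into an Sx filter: both outcomes equally likely.
print(prob(up_x, up_z), prob(dn_x, up_z))               # 0.5 0.5

# Select Sx+, then measure Sz again: the original Sz information is lost.
print(prob(up_z, up_x))                                 # 0.5, not 1.0

# Admit BOTH Sx paths coherently: the state is untouched, Sz+ is "recovered".
psi = np.vdot(up_x, up_z) * up_x + np.vdot(dn_x, up_z) * dn_x
print(prob(up_z, psi))                                  # 1.0

# Incoherent mixture (which-path known) gives 0.5 instead.
print(prob(up_x, up_z) * prob(up_z, up_x)
      + prob(dn_x, up_z) * prob(up_z, dn_x))            # 0.5
```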
  • asked a question related to Uncertainty Analysis
Question
4 answers
I would like to consider two ANOVA models: nested and factorial.
In a nested ANOVA model, y_ijk = µ + α_i + β_ij + ε_ijk (attached file), the R² from the ANOVA is simply not a reliable indicator of relative importance.
But what about R² in factorial ANOVA models:
(1) y_ijk = µ + α_i + β_j + (αβ)_ij + ε_ijk
(2) y_ijk = µ + α_i + β_j + ε_ijk
In models (1) and (2), is the R² (adjusted for the number of parameters) a reliable indicator of the relative importance of each factor?
Relevant answer
Answer
Thanks a lot Patrick, modelling in a linear regression framework is a valuable idea. Best regards.
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hi,
Three major categories of probabilistic methods were put forward in various publications from 1981 to 2016, namely:
1. Analytical methods.
2. Simulation methods.
3. Approximate methods.
Can anyone give me some examples of different methods in each category? I also need some good references to get a clear-cut idea of the same. I wish to submit a detailed literature review prior to my proposal exam.
  • asked a question related to Uncertainty Analysis
Question
7 answers
Dear Experts,
I need to know what methods can be used for uncertainty analysis in spatial modelling, and which is the best?
  • asked a question related to Uncertainty Analysis
Question
5 answers
SWAT2012 based on ArcGIS 10.2 does not provide a component in the "SWAT Simulation" tab for doing sensitivity analysis and uncertainty analysis. Why? I have to do sensitivity and uncertainty analysis for my sediment yield estimation work. Kindly guide me in solving this problem.
Thank you very much...
Relevant answer
Answer
Dear Arun,
SWAT-CUP is a calibration/uncertainty and sensitivity analysis program interface for SWAT.
You can download it at: http://swat.tamu.edu/software/swat-cup/
With my best regards
Prof. Bachir achour
  • asked a question related to Uncertainty Analysis
Question
11 answers
If so, what are its form and significance, and is there a connection between temporal and spatial uncertainty? For example, is temporal uncertainty asymmetric, so that the forward component is associated with greater disorder?
Relevant answer
Answer
George and Stephen,
By the way, the uncertainty in quantum theory is quite different in origin from the one you get by simply having a stochastic process. The uncertainty is simply intrinsic to the nature of a quantum particle, such as an electron, with both wave and particle properties... believe it or not.
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hello
I am using Weka 3.7.12 and need to run the AdditiveRegression classifier because Weka does not have AdaBoost for regression problems (if anyone knows something about this, please tell me: is it true that AdditiveRegression replaces AdaBoost.R2?). My main problem is that I need to calculate a 95% confidence interval for each predicted value. I am new to Weka; I hope someone can help, as it is an urgent problem. Thanks, everyone.
Relevant answer
Answer
Majid - You can use the Weka Experimenter and get the confidence interval option. The link below gives you the steps for working with the Experimenter. I have tried it on simple data, and it has a couple of options (for setting up the confidence level).
All the best.
  • asked a question related to Uncertainty Analysis
Question
12 answers
I was thinking about the different decision-making methods under certain and uncertain conditions. My specific question is:
As you know, we have many MCDM tools, such as AHP, ANP, TOPSIS, VIKOR, PROMETHEE, MOORA, SIR and many other methods, and all of them have been extended to fuzzy, type-2 fuzzy, intuitionistic fuzzy and grey environments. Which is really more applicable under uncertain situations: fuzzy, type-2 fuzzy, intuitionistic fuzzy, or a grey environment for a decision-making method? I know each uncertainty logic has its applications, but sometimes a tiny difference in the collected data may cause different results for each method.
All ideas and comments are appreciated. I hope the experts will engage with this question by following it or leaving their valuable comments.
Relevant answer
Answer
Dear Amin,
It totally depends on the availability of data.
You may get some help from my paper attached here.
Basically, in real-world situations, when DMs cannot construct a fuzzy membership function to represent the true situation, they can go for grey systems theory.
  • asked a question related to Uncertainty Analysis
Question
8 answers
The measuring system consists of four measuring instruments (thermometer, shunt, voltage channel of a power analyser, and current channel of the power analyser). The expanded uncertainty of the whole system equals 0.63%, yet at the same time the expanded uncertainty of the thermometer measurement equals 5%. Is this possible?
Relevant answer
That depends on what you are measuring and on the functional relation between your variables.
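To see how this can happen numerically, a small sketch: if the final result depends only weakly on temperature, the thermometer's 5% contributes little once weighted by its sensitivity coefficient (all numbers hypothetical; expanded uncertainties converted to standard ones with k = 2):

```python
import numpy as np

# Relative standard uncertainties of the inputs (thermometer: 5% / k=2).
u = {"thermometer": 0.05 / 2, "shunt": 0.002, "voltage": 0.003, "current": 0.003}

# Relative sensitivity coefficients of the result to each input; a weak
# temperature dependence (e.g. a small correction term) gives a tiny value.
c = {"thermometer": 0.05, "shunt": 1.0, "voltage": 1.0, "current": 1.0}

u_c = np.sqrt(sum((c[k] * u[k]) ** 2 for k in u))
print(f"combined: {u_c:.3%}, expanded (k=2): {2 * u_c:.3%}")
```

With these (made-up) numbers the combined expanded uncertainty comes out below 1% despite the 5% thermometer, which is the kind of situation the question describes.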
  • asked a question related to Uncertainty Analysis
Question
3 answers
Hello everyone. In most of the literature, people use the GLUE likelihood equation given by Beven and Binley, which is based on the ratio of the error variances of the observed and simulated variables raised to an exponent N. What I have observed is that N plays a very important role in this equation and has an exponential effect. My question is: does anyone know what N value should be used to avoid this?
Relevant answer
Answer
It is potentially case-specific. Please see the following paper:
  • asked a question related to Uncertainty Analysis
Question
3 answers
It is needed for nanofluid flow in pipes
Relevant answer
Answer
Basically, you need to express your output as a function of the uncertain parameters. A first-order Taylor series expansion is the easiest option because you can calculate the variability and other measures of uncertainty in the output analytically. However, if your relation (or model) is highly nonlinear within the uncertainty region of the parameters, you can try polynomial chaos expansions. Since you get analytical expressions for the mean, variance, or any other statistical measure of the output, these expansions are quite useful when your model is computationally expensive. The third option is to try higher-order terms in the Taylor series expansion, or a response surface or neural network as mentioned by Sayan, and then use Monte Carlo simulation to obtain the probability distribution.
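A minimal comparison of the first-order Taylor propagation with a Monte Carlo check, for a toy nonlinear output (hypothetical parameter values):

```python
import numpy as np

def model(k, mu):
    return k**2 / mu                    # toy nonlinear output

k0, mu0 = 2.0, 0.5                      # nominal parameter values
u_k, u_mu = 0.1, 0.02                   # standard uncertainties, independent

# First-order Taylor: u_y^2 = (dy/dk)^2 u_k^2 + (dy/dmu)^2 u_mu^2
dy_dk = 2 * k0 / mu0
dy_dmu = -(k0**2) / mu0**2
u_taylor = np.hypot(dy_dk * u_k, dy_dmu * u_mu)

# Monte Carlo check of the linearisation.
rng = np.random.default_rng(5)
y = model(rng.normal(k0, u_k, 100_000), rng.normal(mu0, u_mu, 100_000))
print(u_taylor, y.std())
```

If the two numbers diverge, the model is too nonlinear over the uncertainty region for the first-order approximation, which is when the PCE or Monte Carlo options mentioned above become worthwhile.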
  • asked a question related to Uncertainty Analysis
Question
7 answers
Hello everyone 
I am going to work on using ensemble methods in machine learning, such as boosting, bagging, random forests, etc., to correct and analyse precipitation predicted by global weather models (ECMWF, NCEP, etc.).
My first question is about the feasibility of this idea: is it possible at all to use these methods to combine the member outputs of the models?
Second, can anyone recommend a review article or a book that explains these methods thoroughly?
Thanks for your time and consideration in answering my question.
Relevant answer
Answer
Thanks for your answers Stefan Lessmann and Leonard A Smith; they were very useful to me. But if you take a look at the picture I posted, you can see what I mean.
The thin black lines are the 50 ensemble members of the ECMWF model.
The thick red line is the mean of the members' values.
My purpose in using ensemble methods (bagging, boosting, BMA, ...) is to combine the members' values into a single line like the red one, but not simply their mean. This is my problem. Can you help me solve it?
  • asked a question related to Uncertainty Analysis
Question
9 answers
I need to do an uncertainty analysis for a heat exchanger laboratory. I would be very grateful if you could share any documents about this calculation. Thanks.
Relevant answer
Answer
Thanks for your interest Dear Adolfo.