Questions related to Uncertainty Analysis
The concept of propensity was first introduced by Popper (1957) as an alternative interpretation of the mathematical concept of probability. Bunge (1981) examined the mathematical concept of probability and its personalist, frequentist, and propensity interpretations. He stated:
“The personalist concept is invalid because the probability function makes no room for any persons; and the frequency interpretation [is] mathematically incorrect because the axioms that define the probability measure do not contain the (semiempirical) notion of frequency. On the other hand the propensity interpretation of probability is found to be mathematically unobjectionable and the one actually employed in science and technology and compatible with both a possibilist ontology and a realist epistemology.”
However, the concept of propensity seems to have been forgotten for many years. I have recently read several papers on propensity and found that it may be a more useful concept for measurement uncertainty analysis than the "degree of belief" of Bayesian approaches.
Bunge M 1981 Four concepts of probability. Applied Mathematical Modelling 5(5) 306-312.
Popper K R 1957 The propensity interpretation of the calculus of probability and the quantum theory. In S. Körner (ed.), Observation and Interpretation. Butterworths, London.
I am doing uncertainty analysis of a rainfall-runoff model (HBV) and want to obtain optimal parameter values, but the difficulty is the selection of the likelihood function (including the code in RStudio and its interpretation).
During calibration of a hydrological model using synthetic data, I was trying to add Gaussian noise to the simulated streamflow data. However, many streamflow values are near or equal to zero, and adding Gaussian noise with a given standard deviation (let's say 10%) would make many values negative. Negative streamflow values are invalid.
Could anyone please suggest how I should proceed? Also, please cite studies that support the method.
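Two common workarounds in the hydrological error-modelling literature are multiplicative (log-space) noise, which preserves positivity by construction, and additive noise truncated or clipped at zero (Box-Cox-transformed residual models are a related option). A minimal sketch with hypothetical streamflow values:

```python
import numpy as np

rng = np.random.default_rng(42)
q_sim = np.array([0.0, 0.05, 1.2, 8.7, 0.3, 15.0])  # hypothetical simulated streamflow

# Option 1: multiplicative (log-space) noise with ~10% relative error.
# Positivity is guaranteed, and exact zeros stay zero.
sigma_rel = 0.10
q_mult = q_sim * np.exp(rng.normal(0.0, sigma_rel, q_sim.size))

# Option 2: additive heteroscedastic Gaussian noise, clipped at zero.
# Simple, but clipping slightly biases the low flows upward.
q_add = q_sim + rng.normal(0.0, sigma_rel * np.maximum(q_sim, 1e-6))
q_add = np.maximum(q_add, 0.0)
```

Note that the two options imply different likelihood functions (lognormal vs. truncated normal), so the choice should match the error model you assume during calibration.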
I am looking for a research work that implemented an uncertainty or statistical framework to study the impact of the geometric parameters on the fracture response.
I appreciate any help.
Thank you in advance,
My lab needs a more robust method for determining the uncertainty of our gas analysis using a Monte Carlo simulation, evaluated annually. We sample a QC cylinder of known gas concentration before and after every batch of samples. If I assume a normal distribution for the gas analysis, can I simply enter the mean and standard deviation of all the QC results for the respective gas component for the year, run a simulation from these, and calculate the uncertainty? If I assume a rectangular distribution, can I simply use the lowest and highest QC results for the respective component as my bounds, run the simulation, and calculate the uncertainty from that? Any guidance or help would be greatly appreciated. Thank you, Lachlan
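Both parameterisations you describe are straightforward to simulate; the sketch below uses hypothetical QC results in ppm. Bear in mind that the QC scatter alone does not include the certificate uncertainty of the reference gas itself, which would normally be added as a further input distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
qc = np.array([99.8, 100.2, 100.0, 99.9, 100.3, 99.7, 100.1])  # hypothetical yearly QC results (ppm)

n = 200_000
# Normal model: parameterised by the QC mean and standard deviation.
sim_norm = rng.normal(qc.mean(), qc.std(ddof=1), n)
# Rectangular model: bounded by the lowest and highest QC results.
sim_rect = rng.uniform(qc.min(), qc.max(), n)

for name, sim in [("normal", sim_norm), ("rectangular", sim_rect)]:
    lo, hi = np.percentile(sim, [2.5, 97.5])
    print(f"{name}: u = {sim.std(ddof=1):.3f} ppm, 95% interval = [{lo:.2f}, {hi:.2f}]")
```

The rectangular model gives a standard uncertainty of (max - min)/sqrt(12) rather than the sample SD, so the two assumptions generally produce different intervals; GUM Supplement 1 describes this Monte Carlo approach in detail.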
I am looking for a method to determine the uncertainty in readings taken from engine output measurements.
Please answer keeping in mind the future job market i.e. after the lockdown is over. Add your age, education level, and employment status as well please.
I am working on a problem for design optimisation. I would like to ask if for an uncertain problem, should design optimisation under uncertainty techniques be used for the design optimisation?
How can I measure the optics thermal drift error in a laser measurement system (Renishaw XL80)?
Please tell me what optics thermal drift is, what its sources are, how its value can be determined, and how it can be reduced.
Please suggest methods for performing an uncertainty analysis of kinetic data from a reaction. We have carried out a reaction in a homogeneous medium, calculated the rate constants, and want to do an uncertainty analysis.
Using the Gauss quadrature method, how can we obtain the points and weights for an arbitrary distribution function, such as a gamma or beta distribution?
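For standard densities the quadrature rules already exist under classical names: the gamma density's weight x^(shape-1)·e^(-x) is the generalized Laguerre weight, and the beta density's weight corresponds to Gauss-Jacobi. A minimal sketch for a gamma distribution (shape 3, scale 1):

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma as gamma_fn

def gauss_gamma(n, shape):
    """Gauss quadrature nodes/weights for X ~ Gamma(shape, scale=1), so
    that E[f(X)] ~= sum(w * f(x)).  The gamma pdf's weight x^(shape-1)e^(-x)
    is the generalized Laguerre weight with alpha = shape - 1; dividing the
    weights by Gamma(shape) normalises them to sum to 1."""
    x, w = roots_genlaguerre(n, shape - 1.0)
    return x, w / gamma_fn(shape)

x, w = gauss_gamma(10, 3.0)
mean = np.sum(w * x)        # E[X] = shape = 3 (exact, f is a polynomial)
second = np.sum(w * x**2)   # E[X^2] = shape*(shape+1) = 12
```

For a beta distribution use `scipy.special.roots_jacobi` with the pdf's exponents; for a fully arbitrary density, the Golub-Welsch algorithm constructs custom nodes and weights from the density's moments.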
I have completed an uncertainty analysis for my heat exchanger calorimetric lab, but I am not sure about the calculation of enthalpy (required for Q = mr·ΔH) obtained from the REFPROP program. Normally my main equation is U = cv·dT + P·v, and I have calculated the uncertainty from the temperature and pressure sensors while treating the specific heat and specific weight as constants; however, these values have their own uncertainty due to REFPROP's uncertainties. I am not sure whether to add this uncertainty. I would like to hear your opinions and advice.
I am interested in numerical techniques for the traditional deterministic (Tikhonov regularisation) paradigm and especially the information-probabilistic (Bayesian) paradigm, uncertainty propagation analysis, etc.
Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz
Hi everyone, I am performing Sobol sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index so that parameters with a sensitivity index greater than the threshold are considered sensitive.
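There is no universal cutoff, but common heuristics are a fixed small value (e.g. 0.05) or 1/d for d inputs. A self-contained sketch of a Saltelli-style first-order Sobol estimator with thresholding, on a hypothetical additive model (library tools such as SALib implement the same estimator):

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=1):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    for a function f of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # A with column i taken from B
        S1[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S1

# Hypothetical additive test model: x1 dominates, x3 is nearly inert.
f = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * X[:, 2]
S1 = sobol_first_order(f, 3)
threshold = 0.05                          # common heuristic; 1/d is also used
sensitive = S1 > threshold
```

Whatever threshold you choose, it is worth reporting the bootstrap confidence intervals of the indices as well, since a parameter whose index straddles the threshold cannot be classified reliably.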
I'm working on the calibration of a Python-based hydrological model. In addition to my manual calibration, I need to carry out an automatic calibration, and most articles I reviewed recommended PEST. If you have any experience running this software on Windows 10, please let me know your suggestions.
I am looking for an analytical, probabilistic, statistical, or other way to compare the results of different approaches implemented on the same test model. These approaches could be different optimisation techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear about any technique you use in your field, as I might be able to derive something for my problem.
Thank you very much.
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, 70%, 75% or 80% percentiles are used. The American administration uses 60% for civil works projects...
My doubt is, is there any recommendation or usual approach to choose a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer in the AACE International or International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
I have been using the Fuzzy Delphi Method (FDM) for quite some time, until recently someone asked why I did not consider conducting an uncertainty analysis and a parameter sensitivity analysis on my FDM results. I found some references where these types of analyses were conducted on AHP (analytic hierarchy process), but none on FDM. I am not sure how to respond, as it is not a standard procedure for FDM in the literature (to the best of my knowledge). Is it even appropriate to conduct such an analysis for FDM, and if not, what would be a viable response?
What is uncertainty analysis, and how can we perform it for a photocatalytic reaction? I am checking the effect of different concentrations, pH values, and temperatures on dye degradation by nanomaterials. Please let me know the basics and how to do this kind of analysis.
In solving an optimisation problem with uncertain input parameters, we use Monte Carlo simulation (MCS) and scenario reduction to arrive at a number of scenarios with associated probabilities. The optimisation algorithm outputs decision variables for the different scenarios, for example, the power of a generator in each scenario. Assume that in a generation scheduling problem with Ng generators, in scenario 1 (probability 0.3) the power of generator 1 is 12, in scenario 2 (probability 0.2) it is 17, and in scenario 3 (probability 0.5) it is 9. Then, in practice, how should the decision maker set the power of this generator?
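One common view: if the setpoint is a here-and-now (first-stage) decision, the stochastic program should include non-anticipativity constraints so that a single scenario-independent value comes out of the solver; if the power is instead a recourse (wait-and-see) variable, each scenario keeps its own value and a probability-weighted expectation is a simple summary. Worked through with the numbers from the question:

```python
# Scenario-wise outputs for generator 1 (numbers from the question above):
probs = [0.3, 0.2, 0.5]
power = [12.0, 17.0, 9.0]

# Probability-weighted expected dispatch of generator 1:
expected_power = sum(p * x for p, x in zip(probs, power))
print(expected_power)   # 0.3*12 + 0.2*17 + 0.5*9 = 11.5
```

Note that the expected value is only a summary; whether it is operationally meaningful depends on whether the generator can actually be redispatched once the realised scenario is observed.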
Can anyone describe PCE in simple words, please? How can we find a PC basis and what is an appropriate sparse PC basis?
Thanks in advance
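In simple words, PCE (polynomial chaos expansion) writes the model output as a weighted sum of polynomials that are orthogonal with respect to the input distribution; for a Gaussian input these are the probabilists' Hermite polynomials. A minimal regression-based sketch for the toy model y = x², whose exact expansion is known:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=5000)     # standard normal input variable
y = x**2                      # toy model output to be expanded

# For a Gaussian input, the PC basis is the probabilists' Hermite
# polynomials He_k; fit the coefficients by least-squares regression.
deg = 3
Phi = np.polynomial.hermite_e.hermevander(x, deg)   # columns He_0 .. He_3
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
# Analytically x^2 = He_0(x) + He_2(x), so coeffs ~= [1, 0, 1, 0].
```

A "sparse" PC basis simply keeps only the terms with non-negligible coefficients (here He_0 and He_2); in high dimensions the full basis explodes combinatorially, and sparse-PCE algorithms (e.g. LARS-based selection, as in Blatman and Sudret's work) automate this pruning.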
I would like to conduct a Monte Carlo uncertainty analysis for actual and predicted data in R.
Kindly share some effective resources.
I have found metRology's uncertMC (https://rdrr.io/rforge/metRology/man/uncertMC.html), but understanding its hyperparameters is another challenge.
Please share your views, knowledge, or effective materials.
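Under the hood, uncertMC does GUM-Supplement-1-style Monte Carlo propagation: sample each input from its assigned distribution, evaluate the measurement model, and summarise the output draws. The idea is language-agnostic; here is a minimal Python sketch with a hypothetical model y = a·b (the same structure maps directly onto uncertMC's `expr` and distribution arguments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical measurement model: y = a * b
a = rng.normal(10.0, 0.5, n)    # input a: mean 10, standard uncertainty 0.5
b = rng.normal(2.0, 0.1, n)     # input b: mean 2,  standard uncertainty 0.1
y = a * b

mean, u = y.mean(), y.std(ddof=1)
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"y = {mean:.2f}, u(y) = {u:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

The Monte Carlo standard uncertainty here (~1.41) agrees with the first-order GUM propagation sqrt((2·0.5)² + (10·0.1)²), which is a useful sanity check for any tool you adopt.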
Following well-known books in quantum mechanics (QM), such as "Quantum Mechanics" by Eugen Merzbacher, once influential and authoritative on QM, is today a slippery slope in many parts.
The simultaneous appearance of classical wave and particle aspects no longer can be defended in QM. The only thing that exists in Nature, as we know from quantum field theory (QFT), is quantum waves, NOT particles and NOT waves as in Fourier analysis (classical).
The Heisenberg uncertainty principle in QM seems, thus, to have to be modified -- as it is, in contradiction with itself, based on continuity by using the Fourier transform. Can we express it in QFT terms (not based on continuity)? Can that influence LIGO and other applications?
See also Sean Carroll's recent talk at https://www.youtube.com/watch?v=rBpR0LBsUfM, especially after the 45-minute mark.
Thus, there is only one (not two or even three) case of photon interference in the two-slit experiment, and that is the case that is often neglected -- the quantum wave case. See the two-slit experiment at low-intensity, for example at: https://www.youtube.com/watch?v=GzbKb59my3U
I'm looking for a free and user-friendly tool. I'm familiar with python (and a bit R).
I know that comparison of these terms may require more than one question but I would appreciate it if you could briefly define each and compare with relevant ones.
I have run a number of time-consuming, complex simulations:
-approx. 200 samples
-input parameters are not definable as simple numerical values (they are categorical: low, mid, high, etc.)
So, the input parameters and output values are known, but the model is so complex that it is effectively a black box.
Is it possible to sort the input parameters by their importance? If yes, how?
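With ~200 samples and categorical inputs, a simple first pass is the correlation ratio (eta squared): for each input, the fraction of output variance explained by the group means of its levels. A sketch on a hypothetical design where parameter p1 dominates (tree-based importances from a random forest are a common alternative when interactions matter):

```python
import numpy as np

rng = np.random.default_rng(3)
levels = np.array(["low", "mid", "high"])
n = 200
X = {p: rng.choice(levels, n) for p in ("p1", "p2", "p3")}   # hypothetical design
score = {"low": 0.0, "mid": 1.0, "high": 2.0}
# Hypothetical black-box output: p1 matters most, p3 not at all.
y = (3.0 * np.array([score[v] for v in X["p1"]])
     + np.array([score[v] for v in X["p2"]])
     + rng.normal(0.0, 0.3, n))

def eta_squared(x_cat, y):
    """Correlation ratio: fraction of output variance explained by the
    level means of one categorical input (a first-order importance measure)."""
    between = sum((y[x_cat == g]).size * (y[x_cat == g].mean() - y.mean()) ** 2
                  for g in np.unique(x_cat)) / y.size
    return between / y.var()

ranking = sorted(X, key=lambda p: -eta_squared(X[p], y))
print(ranking)
```

Being a one-way measure, eta squared misses interaction effects, so treat the ranking as a screening step rather than a full sensitivity analysis.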
I am using SWAT 2012, Rev. 635 for simulation of daily river discharge in my study area, and the SUFI-2 module of SWAT-CUP for uncertainty analysis. The daily hydrological data under analysis are from 2013-2015. The problem is that I am not getting the desired R2 and NS values for the observed and simulated daily river discharge. Images of the selected parameters, the 95PPU plot, summary_stat, and the global sensitivity of the parameters are attached. The desired R2 and NS values are 0.75 and above. Please guide me on where I should make the necessary changes to get the desired results.
Thanks in advance!
Dear SWAT-CUP users/experts,
I have completed creating my watershed model using SWAT, and I am using the SWAT-CUP program to perform sensitivity analysis, calibration, validation, and uncertainty analysis.
I have read the SWAT-CUP manual and other literature, but I still have challenges after my first iteration.
1. I have checked the 95PPU plot from the result of the first iteration, but I don't know whether I am on track or not.
2. I do not understand how to interpret the dotty plot results to tell which parameters are sensitive. Should I ignore them and use the global sensitivity analysis instead?
3. I have checked the results of the global sensitivity analysis, but I don't know whether, in my next iteration, I should delete the parameters that are not sensitive, or keep all the same parameters and perform the next iteration with the new parameter ranges suggested by the program.
4. After the first iteration, what steps should I follow until I achieve my objective function, P-factor, and R-factor?
Please find attached the results of my first iteration. Thank you.
This question pertains to gas chromatography to measure levels of carbon monoxide using reduction gas detector technology.
I have developed a quadratic calibration using 6 standard CO concentration levels (each determined as the average of 5 instrument readings, with SDs) and the least-squares regression method, so there are standard errors (SEs) associated with the regression coefficients. Based on an instrument reading Y, the sample concentration X can be computed as X = (-b + sqrt(b^2 - 4a(c - Y)))/(2a). With algebraic error propagation, the only errors propagated are the SEs of the coefficients and the SD of Y. How can I include the error contribution from the standards' SDs?
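A Monte Carlo alternative to algebraic propagation handles the standards' SDs naturally: in each iteration, perturb every standard's reading by its own SD, refit the quadratic, and invert a perturbed Y; the spread of the recovered X then includes all three error sources at once. A sketch with hypothetical calibration numbers (the "true" curve is chosen so the correct answer is X = 30):

```python
import numpy as np

rng = np.random.default_rng(0)

conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # hypothetical standard levels
resp = 0.01 * conc**2 + 1.0 * conc + 0.1              # hypothetical true response curve
sd_std = np.full(conc.size, 0.2)                      # standards' reading SDs
Y, sd_Y = 39.1, 0.2                                   # sample reading and its SD

n = 5_000
X = np.empty(n)
for i in range(n):
    # Perturb each standard reading by its own SD and refit the quadratic,
    # so the standards' SDs propagate through the calibration coefficients:
    a, b, c = np.polyfit(conc, resp + rng.normal(0.0, sd_std), 2)
    Yi = rng.normal(Y, sd_Y)                          # perturb the sample reading
    X[i] = (-b + np.sqrt(b**2 - 4.0 * a * (c - Yi))) / (2.0 * a)

print(f"X = {X.mean():.2f} +/- {X.std(ddof=1):.2f}")
```

This also sidesteps the correlation between the fitted coefficients, which the algebraic approach must handle via the full covariance matrix rather than the SEs alone.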
Health Economics, Cost-effectiveness analysis, Uncertainty analysis and Oncology modelling
I am working on estimating the uncertainty in the prediction of a fluid dynamics problem by a numerical code. There are input parameters (thermal conductivity, heat capacity, etc.) and target parameters (evaporation rate) that would be interesting to investigate with a sensitivity analysis, finally estimating the margins of uncertainty in the predictions of the target parameters. Could someone tell me about a standard procedure for analysing the uncertainties in the predictions of a numerical code?
How can I perform an uncertainty analysis (95% confidence) of a nonlinear regression equation y = a·x^b without converting it into a linear equation (i.e., without logarithmic transformation)?
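One standard route is to fit the power law directly with nonlinear least squares and build the 95% band from the estimated parameter covariance, with no log transform anywhere. A sketch using hypothetical data generated from a = 2, b = 1.5 (bootstrap resampling of residuals is a common alternative when the covariance-based band is suspect):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
power_law = lambda x, a, b: a * x**b

x = np.linspace(1.0, 10.0, 30)
y = power_law(x, 2.0, 1.5) + rng.normal(0.0, 0.5, x.size)   # hypothetical data

popt, pcov = curve_fit(power_law, x, y, p0=(1.0, 1.0))      # direct nonlinear fit

# 95% confidence band for the fitted curve by sampling the parameter
# covariance (a Monte Carlo version of the delta method):
draws = rng.multivariate_normal(popt, pcov, 5000)
curves = np.array([power_law(x, a, b) for a, b in draws])
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

Note this band is for the fitted mean curve; a prediction interval for new observations would additionally include the residual scatter.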
Dear Group members,
I want to learn in detail about uncertainty measurement. Kindly share any presentations or research papers on the topic.
Hi, I am working in the area of risk quantification, trying to come up with something like a value-at-risk for an IT service provider using uncertainty theories such as possibility theory. There are several works on risk quantification using scales, but I am focusing on financial exposures, not numerical scales alone. Any pointers to work done in these areas, perhaps for other industries?
I have to calculate the uncertainty of actual and predicted values (using different soft computing techniques) for a given dataset. Can anybody explain this in detail?
A recent paper I published documents the duals method applied to a cardiovascular disease risk model with a 208 dimensional error vector (ASME JESMDT, 2018). I have also done work on turbine surface part cooling models (ASME Turbo Expo, Berlin 2008). Some recent work has been done applying the duals method to assess cooling technologies for HPT parts. This work will be completed this summer with a manuscript prepared for fall submission.
In the case of an uncertainty analysis, say we perform a DoE, from whose analysis we obtain optimised values of the given parameters. How are we supposed to interpret the uncertainty analysis together with that result? What is the best software for this purpose in reservoir simulation studies?
Analysis : Monte Carlo uncertainty analysis with SimaPro using Lognormal distribution (defined by the pedigree matrix for most inventory items).
I am getting very large negative variations (outliers) in the resulting uncertainty range for the 'Toxicity' categories (human toxicity and ecotoxicity). It is a simple analysis with 7-10 inventory items. The other two damage categories (Ecosystems and Resources) show acceptable variations. However, the large negative variations affect the final results, giving large standard deviations with negative impacts within the 95% confidence interval.
Any idea why the impact figures are negative? Or any suggested reading, please?
Thank you in advance for your efforts in responding.
I have a complete year of hourly data for electricity imported from the UK grid and for PV generation for a detached family house with a 6 kW solar photovoltaic array. Sadly, I only have 4 months' worth of actual data for the total electricity that was exported back to the grid (i.e., not used within the property). Is there any way of estimating, from the existing data, the approximate export back to the grid for the remaining 8 months for which I have no export data? A monthly approximation with an uncertainty band attached would be sufficient, so long as the method is robust and referenceable.
Data sheet summary of monthly values attached!
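One simple, referenceable approach is to regress metered monthly export on monthly generation over the 4 known months and apply the fit to the other 8, using the residual scatter as a rough uncertainty band. A sketch with hypothetical monthly totals (all numbers below are invented placeholders, not your attached data):

```python
import numpy as np

# Hypothetical monthly totals (kWh): PV generation for all 12 months and
# metered export for the 4 months it was recorded (month index -> kWh).
gen = np.array([120., 180., 310., 420., 510., 540., 530., 460., 350., 230., 140., 100.])
export_known = {4: 370.0, 5: 400.0, 6: 390.0, 7: 330.0}

m = sorted(export_known)
k, b = np.polyfit(gen[m], [export_known[i] for i in m], 1)   # export ~ k*gen + b
resid_sd = np.std([export_known[i] - (k * gen[i] + b) for i in m], ddof=1)

for i in range(12):
    if i not in export_known:
        est = max(k * gen[i] + b, 0.0)
        print(f"month {i + 1}: export ~ {est:.0f} +/- {2.0 * resid_sd:.0f} kWh")
```

Two caveats: with only 4 points the residual band understates the true uncertainty when extrapolating well outside the fitted generation range, and the export fraction may shift seasonally with occupancy, so if the 4 known months are all summer months the winter estimates should be flagged as extrapolations.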
In the pure squeezing case, the X and Y quadrature uncertainties satisfy the minimum uncertainty relation ΔX ΔY = 1. But from an experimental point of view this cannot hold exactly; it should be ΔX ΔY > 1. So what value of ΔX ΔY is typically achieved in experiments?
I am looking for ISO 7066-1:1989, "Assessment of uncertainty in the calibration and use of flow measurement devices, Part 1: Linear calibration relationships". Please share it if someone has it; I would be thankful.
In an uncertainty analysis with Hong's two-point estimation method, I am facing two problems. First, location one is higher than the mean while location two is negative, which makes me hesitate. Second, the weights are found from the input uncertain parameter and are used to multiply the final output variable, but the results are not proper. Can anybody help with how to handle the final function if, say, there is only one random variable?
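For a symmetric input, a location below the mean (even a negative ξ) is expected and not an error. A sketch of the single-variable case following my reading of Hong's (1998) scheme (the location/weight formulas below are reproduced from memory and should be checked against the paper); note the weights multiply the function evaluations, not the final output:

```python
import numpy as np

def hong_2pe(g, mu, sigma, skew=0.0, m=1):
    """Hong's two-point estimate of E[g(X)] for one uncertain input out of
    m inputs in total.  With zero skewness the locations reduce to
    mu +/- sigma*sqrt(m) and the weights to 1/(2m) each."""
    s = skew / 2.0
    xi1 = s + np.sqrt(m + s**2)
    xi2 = s - np.sqrt(m + s**2)          # negative standardised location: expected
    p1 = -xi2 / (m * (xi1 - xi2))        # weights applied to g at each location
    p2 = xi1 / (m * (xi1 - xi2))
    x1, x2 = mu + xi1 * sigma, mu + xi2 * sigma
    return p1 * g(x1) + p2 * g(x2)

est = hong_2pe(lambda x: x**2, mu=2.0, sigma=0.5)
# For a symmetric input and g(x) = x^2 this is exact: mu^2 + sigma^2 = 4.25
```

With a single variable and zero skewness, the estimate is simply the average of g evaluated at mu - sigma and mu + sigma, which matches the second moment exactly in this example.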
I would be interested in any insights into how confident or uncertain people are when making aesthetic judgements, i.e., beauty, attractiveness, liking, etc.
So far, I have not come across any direct assessments, but maybe some of you know better or have suggestions for indirect evidence?
Hello everybody. I'm performing an uncertainty analysis on my species distribution models. It accounts for General Circulation Models (GCMs), Representative Concentration Pathways (RCPs), and algorithms. Some methods used in the literature are ANOVA and GLM; however, my data are not normal and are zero-inflated. I also need to map the results, so I'm looking for sums of squares or deviance explained. I've been digging for solutions but haven't had much luck. Any suggestions?
I captured photographs at the windward side of a fin-and-tube heat exchanger during a frosting test, processed the images with the MATLAB Image Processing Toolbox, and determined the frost thickness using a reference meter. The uncertainty comes from two sources: one is the meter mounted on the heat exchanger; the other is the error from reading the image pixel values. How can the reading error be determined? Thanks.
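A common convention is to treat the pixel reading error as a resolution-type uncertainty: the edge can only be located to within about half a pixel, so assign a rectangular distribution of half-width 0.5 px and divide by sqrt(3), as the GUM does for resolution limits. A sketch with a hypothetical scale calibration:

```python
import numpy as np

# Hypothetical calibration: the reference meter's 10.0 mm span covers 250 pixels.
mm_per_px = 10.0 / 250          # 0.04 mm per pixel

# Reading resolution: edge located to +/- 0.5 px, modelled as a rectangular
# distribution of half-width 0.5 px (divide by sqrt(3), GUM convention):
u_reading = 0.5 * mm_per_px / np.sqrt(3)
print(f"u_reading = {u_reading * 1000:.1f} micrometres")
```

If your edge-detection results actually scatter over several pixels from frame to frame, use the standard deviation of those repeated readings instead, and combine it with the meter's own uncertainty in quadrature.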
I have the volume of fuel (gasoline) dispensed by 78,804 petrol pumps, collected over one year. The dataset represents the measurement error for 20 l dispensed, where the average = 7.99 ml and SD = 45.99 ml, and it follows a Gaussian distribution. I do not have any other information. It is also worth noting that petrol pumps are not very precise devices. So, which estimator should I use for the measurement uncertainty given my database?
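With only the mean and SD available, a reasonable reading is: the mean error is the fleet-wide bias (known very precisely thanks to the huge n), while the SD is the standard uncertainty of a single dispensing. A sketch of both, plus the expanded (95%) uncertainty under the Gaussian assumption:

```python
import numpy as np

mean_err, sd, n = 7.99, 45.99, 78_804      # ml, from the dataset described

u_single = sd                              # standard uncertainty of one 20 l dispensing
u_mean = sd / np.sqrt(n)                   # standard uncertainty of the mean error (bias)
U95 = 1.96 * u_single                      # expanded uncertainty (k = 1.96, Gaussian)

print(f"bias = {mean_err:.2f} +/- {1.96 * u_mean:.2f} ml; U(95%) = +/-{U95:.1f} ml per 20 l")
```

Which estimator is "the" uncertainty depends on the question: for certifying an individual pump, u_single (or the expanded value) is relevant; for characterising the systematic bias of the fleet, u_mean is. Note also that the SD pools pump-to-pump and repeat-to-repeat variation, which this dataset cannot separate.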
What are the scientific basis and the best accepted modelling theories that exist to support projections of climate change (CC) data and to analyse their uncertainty scientifically?
I have just started to learn quantum mechanics and was reading about the Stern-Gerlach experiment, specifically the sequential application of the apparatuses to the Ag beam (suppose the beam travels in the y direction as it comes out of the oven).
How can we verify the uncertainty principle when we first pass the beam through an SGz apparatus, then let only the Sz+ beam pass through an SGx apparatus, and then let only the Sx+ beam pass through another SGz apparatus?
After the first interaction with the SGz apparatus we get two beams corresponding to Sz+ and Sz-, both with the same magnitude, ħ/2. Letting one of them pass through SGx gives us TWO beams, Sx+ and Sx-, again both with magnitude ħ/2.
So I don't see the uncertainty principle being followed, since we are 100% certain about the magnitude of the spin component every time.
- Where am I wrong?
- How can we deduce from this experiment that two components of angular momentum (spin) cannot be measured simultaneously? As far as I can see, we can.
I would like to consider two ANOVA models: Nested and Factorial.
In a Nested ANOVA model: yijk = µ + αi + βij + εijk (attached file)
The R2 from ANOVA is simply not a reliable indicator of relative importance.
But what about R2 in factorial ANOVA models:
(1) yijk = µ + αi + βj + (αβ)ij + εijk
(2) yijk = µ + αi + βj + εijk
In model (1) and (2), is the R2 (adjusted by number of parameters) a reliable indicator of relative importance for each factor?
Three major categories of probabilistic methods were put forward in various publications from 1981 to 2016, namely:
1. Analytical methods.
2. Simulation methods.
3. Approximate methods.
Can anyone give me some examples of different methods in each category? I also need some good references to get a clear-cut idea of the same. I wish to submit a detailed literature review prior to my proposal exam.
SWAT2012 based on ArcGIS 10.2 does not provide a component in the "SWAT Simulation" tab for sensitivity analysis and uncertainty analysis. Why? I have to do sensitivity and uncertainty analysis for my sediment yield estimation work. Kindly guide me on how to solve the problem.
Thank you very much...
If so, what are its form and significance, and is there a connection between temporal and spatial uncertainty? For example, is temporal uncertainty asymmetric, such that the forward component is associated with greater disorder?
Basically, I need to know the different approaches that can be used for uncertainty analysis of a discharge rating curve.
Besides that, I am curious to know why it is so important nowadays.
I am using Weka 3.7.12 and need to run the AdditiveRegression classifier, because Weka does not have AdaBoost for regression problems (if anyone knows something about this, please tell me: is it true that AdditiveRegression replaces AdaBoost.R2?). My main problem is that I just need to calculate a 95% confidence interval for each predicted value. I am new to Weka, and I hope someone can help, as this is urgent. Thanks, everyone.
I was thinking about the different decision making methods under certain and uncertain conditions. My specific question is that:
As you know, we have many MCDM tools, such as AHP, ANP, TOPSIS, VIKOR, PROMETHEE, MOORA, and SIR, and all of them have been extended to fuzzy, type-2 fuzzy, intuitionistic fuzzy, and grey environments. Which one is really more applicable under uncertain situations: fuzzy, type-2 fuzzy, intuitionistic fuzzy, or grey? I know each uncertainty logic has its applications, but sometimes a tiny difference in the collected data may cause different results from each method.
All the ideas and comments are appreciated. Hope that all of the experts take an action to this question by following or leaving their valuable comments.
The measuring system consists of four measuring instruments (a thermometer, a shunt, the voltage channel of a power analyser, and the current channel of the power analyser). The expanded uncertainty of the whole system equals 0.63%, while at the same time the expanded uncertainty of the thermometer measurement equals 5%. Is this possible?
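It can be possible: the combined uncertainty weights each input by its sensitivity coefficient, so a 5% thermometer contributes little if the final result depends only weakly on temperature (e.g. through a small correction term). A sketch with hypothetical relative uncertainties and sensitivity coefficients, assuming uncorrelated inputs:

```python
import numpy as np

# Hypothetical relative standard uncertainties of the four instruments:
u = {"thermometer": 0.05, "shunt": 0.002, "voltage": 0.002, "current": 0.002}
# Hypothetical sensitivity of the result to each input -- if temperature
# enters only through a small correction, its coefficient can be far below 1:
c = {"thermometer": 0.05, "shunt": 1.0, "voltage": 1.0, "current": 1.0}

u_c = np.sqrt(sum((c[k] * u[k]) ** 2 for k in u))
print(f"combined u = {100 * u_c:.2f}%")   # well below the thermometer's 5%
```

So the question to check in your uncertainty budget is the thermometer's sensitivity coefficient: if it were 1, a 0.63% combined uncertainty could not arise from a 5% component.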
Hello everyone. In most of the literature I have seen, people use the GLUE likelihood equation given by Beven and Binley, which is based on the ratio of the error variances between observed and simulated variables raised to an exponent N. What I have observed is that N plays a very important role in this equation and has an exponential effect. My question is: does anyone know what N value should be used to avoid this?
I am going to work on using ensemble methods in machine learning, such as boosting, bagging, random forests, etc., to reduce the error of, and analyse, precipitation predicted by global weather models (ECMWF, NCEP, etc.).
My first question is about the feasibility of this idea: is it possible at all to use these methods to combine the member outputs of the models?
Second, can anyone recommend a review article or a book that explains these methods thoroughly?
Thanks for your time and consideration in answering my question.
I need to do uncertainty analysis for heat exchanger laboratory. I would be very grateful if you could share any documents regarding this calculation. Thanks.
Does MATLAB have a feature for attribute reduction based on rough set theory? Please suggest any other open-source software for rough set theory.