Uncertainty Quantification - Science topic

Questions related to Uncertainty Quantification
  • asked a question related to Uncertainty Quantification
Question
2 answers
I need to perform uncertainty quantification with the Monte Carlo method, using Python in Abaqus, for the laminate ply angles and for variation in the amounts of fiber and resin. What recommendations would you give me to get started? Thank you very much.
Relevant answer
Answer
To perform uncertainty quantification using the Monte Carlo Method in Abaqus with Python, you can follow these general steps:
  1. Define the Model: Set up your finite element model in Abaqus for the laminate structure you want to analyze. Define the geometry, material properties, boundary conditions, and loading conditions.
  2. Create Python Scripts: Write Python scripts to automate the process of running simulations and analyzing results. Use the Abaqus scripting interface (Abaqus Scripting Reference Manual) to interact with the Abaqus model and perform operations such as creating instances, assigning materials, applying loads, and running simulations.
  3. Define Parameters for Uncertainty: Identify the parameters that introduce uncertainty in your analysis. This may include variations in ply angle, fiber and resin content, material properties, environmental conditions, etc.
  4. Implement Monte Carlo Simulation: Use Python to generate random samples for the uncertain parameters based on probability distributions. For each sample, modify the Abaqus model accordingly and perform a simulation. Repeat this process for a large number of samples (iterations) to generate a statistically significant dataset.
  5. Analyze Results: Post-process the simulation results to extract relevant quantities of interest (QoI) such as stress, strain, displacement, or failure criteria. Calculate statistical quantities such as mean, standard deviation, probability density functions (PDFs), and cumulative distribution functions (CDFs) for the QoI.
  6. Visualize Results: Visualize the results using plots, histograms, scatter plots, and other graphical representations to gain insights into the variability and uncertainty in the system behavior.
  7. Validate and Interpret Results: Validate the uncertainty quantification results against experimental data or other benchmark models. Interpret the results to understand the impact of uncertainty on the performance and reliability of the laminate structure.
Here are some specific recommendations to get started:
  • Familiarize yourself with Python scripting in Abaqus by referring to the Abaqus Scripting Reference Manual and other documentation available from Dassault Systèmes.
  • Use Python libraries such as NumPy and SciPy for random number generation, probability distributions, and statistical analysis.
  • Run your scripts in batch mode with the command abaqus cae noGUI=<script>.py (or abaqus python <script>.py) to execute Python scripts directly within the Abaqus environment.
  • Break down your analysis into smaller, manageable steps and gradually build up the complexity of your Python scripts as you gain experience.
By following these steps and leveraging Python scripting capabilities in Abaqus, you can perform uncertainty quantification using the Monte Carlo Method for your laminate structure analysis.
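As a minimal sketch of step 4, assuming a hypothetical parameterized script run_model.py that builds the laminate with a ply angle and fiber volume fraction passed on the command line and writes one line of results to results.csv (the script name, argument passing, and output file are placeholder assumptions, not part of the Abaqus API):

import subprocess
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100  # Monte Carlo sample size

results = []
for i in range(n_samples):
    ply_angle = rng.normal(45.0, 2.0)      # assumed scatter of the ply angle (degrees)
    fiber_frac = rng.uniform(0.55, 0.65)   # assumed range of fiber volume fraction

    # Batch-mode Abaqus run; arguments after "--" reach the script via sys.argv
    subprocess.run(["abaqus", "cae", "noGUI=run_model.py", "--",
                    str(ply_angle), str(fiber_frac)], check=True)
    results.append(np.loadtxt("results.csv", delimiter=","))

results = np.array(results)
print("mean:", results.mean(axis=0), "std:", results.std(axis=0))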
Please follow me if it's helpful. All the very best. Regards, Safiul
  • asked a question related to Uncertainty Quantification
Question
3 answers
I am working on uncertainty quantification of a numerical model using Fluent and MATLAB. I am facing problems while writing a journal file (a script consisting of all the commands, using the Text User Interface) to automate the work from MATLAB. In particular, when writing TUI scripts for result analysis in Fluent, I run into problems and cannot find any way to write them. I would be very grateful if you could help me solve this.
Relevant answer
Answer
If you get any solution, please share it with me too.
  • asked a question related to Uncertainty Quantification
Question
6 answers
My paper "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks" has been published in Nature Communications since 20 November. But on Google Scholar, only the preprint from Research Square is available...
Relevant answer
Answer
Dear Djohan Bonnet, quite an annoying problem indeed. I guess with some patience the problem will resolve itself, but what you may try is to add the DOI of the published paper to the Research Square preprint. According to https://help.researchsquare.com/hc/en-us/articles/360049698731-Can-I-withdraw-or-remove-my-preprint-from-the-platform they state, and I quote: "If your manuscript has been published, a link to the published DOI can be added to your preprint. This will allow readers to view and cite the published work."
Perhaps this speeds things up.
Best regards.
  • asked a question related to Uncertainty Quantification
Question
65 answers
If an electron A at a specific spacetime loses a certain number of quanta of energy (say, 100 quanta), naturally its total energy has come down. Or, will anyone claim that it has thus increased or that it is in a constant state? Now imagine that it is accelerated later by other forces.
Consider another electron B at another spacetime. It has not lost so many quanta of energy (say, only 50 quanta). Like A, now B is also being accelerated with the same amount of energy.
Of course, whether our measurement of the acceleration energy in the two cases is absolutely exact is yet another ambiguous matter, but we suppose that they are equal.
Will the latter be in a better position in terms of total energy content than the former? Or will it be claimed that their energy, mass, etc., after receiving equal acceleration from outside, are equal, merely because they are both electrons already taken to possess a certain mass?
Moreover, we know that in the path that both the electrons take there will be other physical influences which we do not determine and cannot. These influences must be at least slightly different from each other.
In short, the mass, energy, etc. of the two electrons will never be equal at any physical state, nor have they been absolutely equal at any time. And we know that nothing in the world is in a static state. So, there is no reason to suppose that electrons will have a static mass, energy, etc.
Of course, we can calculate and fix them as supposedly static mass, energy, etc. These will be useful for practical purposes, but not as absolutes.
That is, our generalized determination of an exact mass for an electron need not be the exact energy, mass, etc. of an electron in various physically processual circumstances. At normal circumstances within a specific chemical element, and when freed from it, the electron will have different values.
This shows that no electron (in itself) will be identical in all its properties with any other. Our description of these properties may be considered as identical. But this description in physics is meant merely for pragmatic purposes! One cannot now universalize it and say that the mass, energy, etc. of electrons are the same everywhere.
What about the said values (mass, energy, etc.) of other particles like photon, neutrino, etc.? I believe none can prove their case to be otherwise in the case of these particles / wavicles too.
That is, there is nothing in the world, including electrons, quarks, photons, neutrinos, etc., with an exact duplicate anywhere else. This is the foundation for the principle of physical identity.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Relevant answer
Answer
Richard Lewis, thanks. I will read the article given.
Practically nobody noticed this discussion of mine, so I posted it in some other discussions too. But no one read it or responded. They all know much better.
  • asked a question related to Uncertainty Quantification
Question
8 answers
I am working on a problem in design optimisation. I would like to ask: for a problem with uncertainty, should design-optimisation-under-uncertainty techniques be used for the design optimisation?
  • asked a question related to Uncertainty Quantification
Question
8 answers
Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz
  • asked a question related to Uncertainty Quantification
Question
1 answer
Hi everyone, I am performing Sobol's sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index so that parameters with a sensitivity index greater than the threshold are considered sensitive.
Many thanks!
Relevant answer
Answer
Usually ±25%.
  • asked a question related to Uncertainty Quantification
Question
6 answers
It seems that at the present level of science and mathematics, we are able to measure with considerable precision the uncertainty in the development of many socio-economic phenomena, that is, phenomena regularly researched (measured quantitatively) in the past and now.
I am conducting research on the relationship between uncertainty in running a business and the risk of bankruptcy of enterprises, using multidimensional discriminant models and uncertainty indicators for 1990-2004 (the period of Poland's systemic transformation), for a scientific conference marking the 100th anniversary of the Department of Economic History at the University of Poznań (planned for 2021).
In cliometrics we adopted (Dr. D.A. Zalewski (1994); Dr. J. Wallusch (2009)) as a measure of long-term uncertainty the variance, i.e. the arithmetic mean of the squared deviations (differences) of individual feature values from the mean; not literally the variance of a single variable, but the change in process variance that arises under the influence of changes in the size of the random component, i.e. a time-varying variance in the time series. Cliometrics decided that uncertainty is a derivative of variability, and that is precisely what variance measures. For a specific study, we take long series of at least several dozen observations and estimate them with a GARCH/ARCH model. Thus, in cliometrics, we consider both uncertainty and risk to be quantifiable, provided we have data series.
In my approach, I propose to use as a measure of uncertainty the ex post count of erroneous forecasts and expectations of changes in a given indicator (following Ferderer, J. Peter (1993): The impact of uncertainty on aggregate investment spending: an empirical analysis. Journal of Money, Credit and Banking, 25, 30-48), e.g. GDP growth, inflation, investment, or the unemployment rate, relative to the forecasts and expectations published in the media by a specific, selected group of professional centres dealing with the given socio-economic phenomenon.
The more erroneous forecasts I detect in an accuracy analysis (matching old forecasts against the statistical data later disclosed by a research centre that regularly analyses the given phenomenon), the higher the economic uncertainty index calculated for that phenomenon. My approach derives directly from the technical problem of measurement uncertainty of a physical instrument, as explained by physicists.
The problem of uncertainty in running a business
For example, how do I assess the uncertainty in running a business (company) in a given country and period, e.g. 12 months? I compare the variability (a variability index) in two series of observations: the first is the statistics of newly opened businesses, and the second is the statistics of businesses closed or suspended over the same 12 months. In my opinion, superimposing the variability of these two series gives a very good picture of the scale of uncertainty in economic activity in a given year or years in a given country.
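A minimal sketch of this comparison, with made-up monthly counts and the coefficient of variation as the variability index (the combined index is an illustrative choice, not a standard):

import numpy as np

# Hypothetical monthly counts over 12 months
opened = np.array([410, 395, 430, 450, 470, 465, 440, 420, 400, 385, 390, 405])
closed = np.array([300, 320, 310, 350, 380, 420, 430, 410, 390, 370, 360, 355])

def cv(x):
    # Coefficient of variation: standard deviation relative to the mean
    return x.std() / x.mean()

# Superimpose the variability of the two series into one crude uncertainty index
index = cv(opened) + cv(closed)
print(f"CV(opened)={cv(opened):.3f}  CV(closed)={cv(closed):.3f}  index={index:.3f}")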
In my opinion, uncertainty is like a shadow changing its form with each change in the incidence of the light rays; I can notice it only to the extent that I observe the light source and the object on which the light falls.
Relevant answer
Answer
The analogy of the light source and shadow is a good one! The higher the risk, the higher the uncertainty?
  • asked a question related to Uncertainty Quantification
Question
3 answers
I know that the question is a bit ambitious, therefore I want to break it down:
Have there been efforts to accurately measure the spatio-temporal mismatch? If yes, did it work?
Are the factors that impact the mismatch clear or is it an open question? (I guess this is partly answered since some factors are obvious)
Did someone actually manage to model the spatio-temporal shift/deformation between weather radar and on ground precipitation estimates?
It would be very helpful if you can point me to related research. I am dealing with <1km spatial and <5 minute temporal resolutions of precipitation estimates. The mismatch is obvious, but not well correlated to the height difference between radar (up to 3km) and on ground estimates.
Relevant answer
Answer
Thank you for your input David McNaughton. Evaporation is definitely one of the factors that should be considered. I found some recent literature about a correction model for wind drift and evaporation using WRF simulations, but the relatively coarse (30 min) temporal resolution does not describe the temporal mismatch that I see.
  • asked a question related to Uncertainty Quantification
Question
10 answers
How to calculate the sum and the subtraction of many random variables that follow exponential distributions and have different parameters ?
(The value of Lambda is different for all or some variables).
example:
L(t) = f(t) + g(t) − h(t)
with
f(t) = a exp(−a t), g(t) = b exp(−b t), h(t) = c exp(−c t),
where a = λ_1, b = λ_2, c = λ_3.
Relevant answer
Answer
(continued)
In the case of more terms (all with different means m_j > 0, j = 1, 2, ..., n) the formulas are as follows (it replaced by −s):
ch.f.(X_1 + X_2 + ... + X_n)(t) = 1 / [(1 + m_1 s)(1 + m_2 s) ... (1 + m_n s)] = \sum_{j=1}^n A_j / (1 + m_j s),
where A_j = \prod_{k \ne j} [1 − m_k / m_j]^{−1}.
Therefore, in such cases the density of the sum equals
\sum_{j=1}^n (A_j / m_j) \exp(−x / m_j), for x > 0.
If X_j enters the sum with a minus sign, the first two formulas remain valid after replacing m_j by −m_j; the last requires replacing the exponential density on the positive half-line by its mirror image on the negative half-line.
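A quick numerical check of these formulas with made-up means (plain NumPy; the histogram of simulated sums should match the partial-fraction density):

import numpy as np

m = np.array([1.0, 2.0, 5.0])   # distinct means m_j
n = len(m)

# Partial-fraction coefficients A_j = prod_{k != j} [1 - m_k/m_j]^(-1)
A = np.array([np.prod([1.0 / (1.0 - m[k] / m[j]) for k in range(n) if k != j])
              for j in range(n)])

# Monte Carlo sample of the sum of independent exponentials
rng = np.random.default_rng(0)
s = sum(rng.exponential(mj, 200_000) for mj in m)

# Compare the analytic density sum_j (A_j/m_j) exp(-x/m_j) with a histogram
edges = np.linspace(0.0, 40.0, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
hist, _ = np.histogram(s, bins=edges, density=True)
f = sum(Aj / mj * np.exp(-centers / mj) for Aj, mj in zip(A, m))
print("max abs deviation:", np.abs(f - hist).max())  # small for large samples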
  • asked a question related to Uncertainty Quantification
Question
8 answers
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, 70%, 75% or 80% percentiles are used. The American administration uses 60% for civil works projects...
My question is: is there any recommendation or usual approach for choosing a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer in the AACE International or International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
Relevant answer
Answer
I believe setting confidence intervals is more of an art than an exact science. It really depends on how important it is that something remains within the confidence interval, but there are no real quantitative standards.
If going over budget almost certainly means default, you may want to base risk reserves on a high percentile. If default is not imminent, as with governmental institutions, the percentile may be set lower.
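For illustration, a minimal sketch of how the different percentiles translate into budgets and reserves in a Monte Carlo cost simulation (the cost distribution is made up):

import numpy as np

rng = np.random.default_rng(0)
base = 1_000_000                                           # deterministic base estimate
risk = rng.lognormal(mean=11.0, sigma=0.6, size=100_000)   # uncertain risk costs
costs = base + risk

for p in (60, 70, 75, 80):
    budget = np.percentile(costs, p)
    print(f"P{p}: budget = {budget:,.0f}, reserve over mean = {budget - costs.mean():,.0f}")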
  • asked a question related to Uncertainty Quantification
Question
4 answers
What are the most important methods for quantifying risk (failures, uncertainties, ...) in production systems?
We are especially interested in production systems, but we also look at other fields.
Relevant answer
Answer
Hello Oussama!
I advise you to use dynamic Bayesian networks. I think it is a very suitable method for dynamic systems.
Good luck
  • asked a question related to Uncertainty Quantification
Question
2 answers
Both formal (MCMC) and informal (Generalised Likelihood Uncertainty Estimation, GLUE) Bayesian methods are widely used in the quantification of uncertainty. As far as I know, the GLUE method is extremely subjective and the choice of likelihood function varies widely, which is confusing. So, what are the advantages of GLUE, and is it worthy of such popularity? Is it just because it does not require a formal error model? What are the pros and cons of the two methods? What should one pay attention to when constructing a new informal likelihood function (like the limits-of-acceptability, LOA, approach)?
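For concreteness, a minimal GLUE-style sketch with a toy model (the Nash-Sutcliffe informal likelihood and the 0.5 behavioural threshold are exactly the kind of subjective choices the question is about):

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.0, 50)
obs = np.sin(t) + rng.normal(0.0, 0.05, t.size)   # synthetic observations

def model(theta):
    # Toy model standing in for the simulator being calibrated
    return theta[0] * np.sin(t) + theta[1]

# 1. Sample parameter sets from uniform priors
thetas = rng.uniform([-2.0, -1.0], [2.0, 1.0], size=(5000, 2))
sims = np.array([model(th) for th in thetas])

# 2. Informal likelihood: Nash-Sutcliffe efficiency of each simulation
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

# 3. Keep "behavioural" sets above a subjective threshold
behavioural = sims[nse > 0.5]

# 4. Prediction bounds from the behavioural ensemble (weighting by NSE is also common)
lower, upper = np.percentile(behavioural, [5, 95], axis=0)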
Relevant answer
Answer
Mr. Peng, I think your problem can be solved by reading Vrugt's and Beven's papers on formal MCMC versus GLUE.
  • asked a question related to Uncertainty Quantification
Question
5 answers
Uncertainty of Results
(International research on the different aspects related to the uncertainty of results in engineering and management systems)
Explanation
Over more than 80 years, great physicists such as Heisenberg and Einstein introduced valuable theories related to uncertainty, especially about time and space (the uncertainty principle). Moreover, in metrology, the issues related to uncertainty in measuring tools have been studied intensively (uncertainty of measurement). There are many formulas, simple and complex, that describe uncertainty in these settings. But in engineering and management systems the pace of change is now extraordinary. We want to consider and study this important issue from a new aspect: it seems that the uncertainty of results in engineering or management systems must be calculated. This research requires deep study and the cooperation of scholars from several branches of engineering and management.
Questions
1. Do you think that we can do this research?
2. What do you think about the determination of comprehensive formula for uncertainty of results? (see attached file)
3. What do you think about the impact role of the other factors such as human factors and so on?
4. What is your opinion about this research?
5. Is there a university or research centre that could help us?
6. Would you assist us in this research? (If yes, please send us your name, email address, affiliation, and expertise.)
Relevant answer
Answer
Could you tell us what you mean about the new aspect, please?
  • asked a question related to Uncertainty Quantification
Question
4 answers
When dealing with an uncertainty characterization problem, we do not know the true value of an uncertain variable. So how can we evaluate which method (e.g. Monte Carlo simulation, chance-constrained programming, etc.) better estimates the true value of an uncertain variable?
Relevant answer
Answer
First, I would say that the true value cannot be known. What you can get is the mean of an interval of values that can be attributed (with a stated probability) to the measurand (the result of the measurement).
There are several reference documents on how to estimate and express uncertainty, and most of them have forms of calculation and variants that end up being equivalent. There are two main approaches to uncertainty estimation: GUM (JCGM 100:2008) and Monte Carlo, MCM (JCGM 101:2008). In most cases both yield equivalent results, above all when the mathematical model has several quantities with similar uncertainty contributions. However, when the measurand's probability distribution obtained by Monte Carlo is not Gaussian, or is asymmetric, GUM and MCM can differ considerably. JCGM 101:2008 states that in these cases GUM is not a good estimator of the measurand, so you should lean towards the MCM approach. Having said that, and answering your question, the approach you should take depends on your line of work:
a) If you are a researcher who truly needs to know your process, I think MCM is the best approach in all cases, since it is not based on numerical approximations.
b) If you are providing a calibration or test service, or participating in proficiency tests or comparisons, I think you should estimate by GUM. Don't forget that GUM is more than a calculation approach; it is an international agreement. If you don't clarify, everybody will suppose you used that approach and everybody will know what you are talking about. For example, in a proficiency test, if you use MCM while the other laboratories use GUM and your results differ, it hardly matters who calculated better: even if you are the best, you will be an outlier with no recourse.
c) For routine work with low precision, you can use the Eurachem 2012 simplified method (it follows GUM), which is easy and fast, and you avoid partial derivatives and hard equations.
d) Just in case, I recommend you check out the freeware mcmalchimia (www.mcmalchimia.com), which makes all the calculations for you in no time and gives you the MCM and GUM results at once.
I hope this has been useful.
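A minimal sketch comparing the two approaches on a toy product model y = x1 * x2 (made-up estimates; for this model the two results should nearly agree):

import numpy as np

x1, u1 = 10.0, 0.1   # input estimate and standard uncertainty
x2, u2 = 5.0, 0.2

# GUM: law of propagation of uncertainty, with sensitivity coefficients x2 and x1
uy_gum = np.sqrt((x2 * u1) ** 2 + (x1 * u2) ** 2)

# MCM (JCGM 101): propagate whole distributions and take the sample deviation
rng = np.random.default_rng(1)
y = rng.normal(x1, u1, 1_000_000) * rng.normal(x2, u2, 1_000_000)
uy_mcm = y.std()

print(f"u(y) GUM = {uy_gum:.4f}, u(y) MCM = {uy_mcm:.4f}")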
  • asked a question related to Uncertainty Quantification
Question
3 answers
I have an equation like
f(q, q̇, s) = M(q, s)(q̈_d + K_p ė + K_d e) + N(q, q̇, s),
where M(q, s) is the n × n inertia matrix of the entire system, q denotes the n × 1 column matrix of joint variables (joint/internal coordinates), s represents system parameters such as the masses and characteristic lengths of the bodies, and f(q, q̇, s) is the n × 1 column matrix of generalized driving forces, which may be functions of the system's generalized coordinates, speeds, and/or system parameters. The term N(q, q̇, s) includes inertia-related loads such as Coriolis and centripetal generalized forces, as well as gravity terms. Defining the n × 1 column matrix of the desired joint trajectory as q_d(t), one can express the tracking error as e(t) = q_d(t) − q(t).
I have two uncertain parameters s = (s1, s2) in the masses of the two links. I want to integrate over the uncertainty to find the mean of f for every joint of my two-link serial robot.
The distribution of the two non-deterministic parameters is uniform and the basis functions are Legendre polynomials.
Using Galerkin projection one obtains, in the end, f_{jl} = (1/c_l²) ⟨f_j, φ_l(s)⟩,
for l = 0, ..., N_t and j = 1, ..., n, where N_t is the number of coefficients and n is the size of the f vector.
My question is: how can I calculate the inner product above to find, for example, f_{10} from the orthogonality relation mentioned above (⟨f_j, φ_l(s)⟩)? I know φ_l(s), but what should I take as f_j, for my example, in this integral:
∬ f_j(s1, s2) φ_l(s1, s2) pdf(s1) pdf(s2) ds1 ds2 = ⟨f_j, φ_l⟩ (so that f_{jl} = ⟨f_j, φ_l⟩ / c_l²)
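A minimal sketch of evaluating that projection with tensor-product Gauss-Legendre quadrature, assuming s1 and s2 are uniform on [-1, 1]; the f_j below is a cheap placeholder that you would replace by evaluating your dynamics at fixed joint states for the sampled (s1, s2):

import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def f_j(s1, s2):
    # Placeholder for the j-th generalized force evaluated at the uncertain masses
    return (1.0 + 0.1 * s1) * (2.0 + 0.2 * s2)

nodes, weights = leggauss(8)   # 8-point Gauss-Legendre rule on [-1, 1]

def phi(l1, l2, s1, s2):
    # Tensor-product Legendre basis: phi_l(s1, s2) = P_l1(s1) * P_l2(s2)
    return Legendre.basis(l1)(s1) * Legendre.basis(l2)(s2)

def f_jl(l1, l2):
    # <f_j, phi_l> with uniform pdfs: pdf(s1) * pdf(s2) = 1/4 on [-1, 1]^2
    inner = sum(wi * wk * f_j(xi, xk) * phi(l1, l2, xi, xk) * 0.25
                for xi, wi in zip(nodes, weights)
                for xk, wk in zip(nodes, weights))
    # Divide by c_l^2 = <phi_l, phi_l> = 1 / ((2*l1 + 1) * (2*l2 + 1)) for this basis/pdf
    return inner * (2 * l1 + 1) * (2 * l2 + 1)

print("mean of f_j:", f_jl(0, 0))   # the l = 0 coefficient is the mean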
Relevant answer
Answer
Elyad Xanbabayi, sorry to hear that. I hope you find the answer soon.
  • asked a question related to Uncertainty Quantification
Question
3 answers
I am facing problems while writing the journal file in Fluent using TUI commands. I could not extract results (temperature distribution along the centerline, velocity distribution along the centerline) from the CFD post-processing using a Fluent journal file.
Relevant answer
Answer
Well, I am not sure, but you should always start with the error message. What does it say?
Then try executing the journal line by line in the Fluent terminal and see if, where, and why you get the error.
  • asked a question related to Uncertainty Quantification
Question
2 answers
In the context of time series analysis, there are several multi-step-ahead prediction (MSAP) strategies, such as the recursive and direct strategies (two fundamentally distinct and opposed mechanisms). The recursive strategy is the most popular among practitioners. Considering that random initial weights cause inconsistency in the output of RNNs (unless this is dealt with properly), how does one quantify uncertainty over the forecast horizon? I need bands within which the forecasts oscillate.
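One pragmatic route, precisely because of the random-initialization issue mentioned above, is an ensemble over seeds: train K networks with different initializations, run each recursively over the horizon, and take percentile bands. A minimal sketch with a stand-in forecaster (the AR(1)-like update is a placeholder for a trained RNN feeding on its own output):

import numpy as np

K, horizon = 20, 12

def forecast_one_model(seed, last_value, horizon):
    # Placeholder for one trained RNN's recursive multi-step forecast
    r = np.random.default_rng(seed)
    x, path = last_value, []
    for _ in range(horizon):
        x = 0.9 * x + r.normal(0.0, 0.1)  # stand-in for model.predict on its own output
        path.append(x)
    return np.array(path)

history = np.sin(np.linspace(0.0, 10.0, 100))
paths = np.stack([forecast_one_model(seed, history[-1], horizon) for seed in range(K)])

lower, upper = np.percentile(paths, [5, 95], axis=0)  # 90% band at each step ahead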
Relevant answer
Answer
Hi Fouad,
Best,
André
  • asked a question related to Uncertainty Quantification
Question
9 answers
I have run a number of time-consuming complex simulations:
  • approx. 200 samples
  • input parameters are not definable as simple numerical values (they are like: low, mid, high, etc.)
So, the input parameters and output values are known, but the model is so complex that it is like a black box.
Is it possible to sort the input parameters by their importance? If yes, how?
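One pragmatic option for ~200 samples with categorical inputs is a tree-ensemble importance ranking; a minimal sketch with placeholder data (assumes scikit-learn is available; replace the random arrays with your real table):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)
X_raw = rng.choice(["low", "mid", "high"], size=(200, 5))  # placeholder inputs
y = rng.normal(size=200)                                   # placeholder outputs

X = OrdinalEncoder().fit_transform(X_raw)  # map ordered categories to integers
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

ranking = np.argsort(model.feature_importances_)[::-1]
print("input columns sorted by importance:", ranking)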
Relevant answer
Answer
Quenouille's jackknife may be an alternative to Efron's bootstrap. Both have their merits.
Brief comparisons of bootstrap and jackknife are:
  • asked a question related to Uncertainty Quantification
Question
2 answers
Are these feasible?
  • Since the fluid no longer strictly satisfies "incompressibility" during heating, I've sampled the five coefficients under heating conditions, treating them as uncorrelated.
  • Since "incompressibility" holds under adiabatic conditions, I've drawn samples according to the definitions of the five parameters in "Theory and Simulation of Turbulence [M]", among which some of the parameters are correlated.
reference
  • The "incompressibility" condition is used in the chain "NS equations → Reynolds equations → k-ε model". This condition is not strictly correct when the fluid is heated, because the density changes.
  • The five coefficients of the k-ε model are defined or derived under the condition that the density is constant, i.e. incompressible flow.
  • In "Uncertainty Quantification of Turbulence Model Coefficients via Latin Hypercube Sampling Method", Abdelkader Frendi put forward a sampling method for the five parameters.
  • There is a similar sampling method in "Theory and Simulation of Turbulence [M]" (2nd edition).
Relevant answer
Answer
The k-ε is not strictly appropriate under almost any real circumstances. Nevertheless, it or close cousins of it are widely and usefully applied in reacting flows in which density is not constant and, in the case of multiple-phase flows, mass is not necessarily conserved in any individual phase. One of the provisos of such applications is that the velocity decomposition is density weighted or what is frequently called Favre averaged, rather than the more customary Reynolds decomposition/averaging.
  • asked a question related to Uncertainty Quantification
Question
11 answers
Hello!
Can you please explain to me the difference between uncertainty quantification and sensitivity analysis?
Thank you!
Relevant answer
Answer
Both analyses - uncertainty quantification (UQ) and sensitivity analysis (SA) - are essential parts of real-world modeling processes. The mathematical models created are designed to predict the behavior of the physical systems they represent.
- Uncertainty quantification is a mathematical approach that evaluates the uncertainty of the model output(s) based on the uncertainties of the model inputs. UQ deals with identifying sources of uncertainty and including them in the model, assessing the uncertainty of each input, and propagating them through the model into the uncertainty of the output.
- Sensitivity analysis studies the contribution of the uncertainty of each model input to the uncertainty of the model output, and identifies the dominant contributors to the uncertainty of the model output.
In the following, I will refer to applying the two types of analysis in metrology, a field in which measurement uncertainty is an essential parameter.
For metrologists, UQ is the evaluation of the uncertainty of the result of a measurement, y. If the value y is not measured directly but is calculated from the values of other quantities, denoted xi, i = 1, 2, ..., N, it is first necessary to establish the mathematical model of the measurement:
y = f(x1, x2, ..., xN).   (1)
The uncertainty of y, u(y), can be calculated starting from Eq. (1) and using the law of propagation of uncertainty (GUM). For the simplest case, where the input quantities are uncorrelated (see Eq. (10) of the GUM), u(y) results from:
u²(y) = (∂f/∂x1)² u²(x1) + (∂f/∂x2)² u²(x2) + ... + (∂f/∂xN)² u²(xN),   (2)
where the u(xi) are the uncertainties associated with the input estimates xi, and the derivatives (∂f/∂xi) are called sensitivity coefficients. Uncertainty quantification here is the calculation of u(y) using Eq. (2).
In the usual approaches, SA is derivative-based: it analyzes the impact of one model input on the model output while keeping the other inputs fixed. In the case discussed here, the same Eq. (2) is the basis of the sensitivity analysis. It refers to determining the variation in y produced by the uncertainty u(xi) of one input value while the other input values are fixed. As follows from Eq. (2), the influence of u(xi) on u(y) is described by the sensitivity coefficient (∂f/∂xi), and the change in y generated by the uncertainty u(xi) is given by
ui(y) = (∂f/∂xi) u(xi), i = 1, 2, ..., N.   (3)
Eq. (3) is the sensitivity analysis for the case presented.
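A minimal numerical sketch of Eqs. (2) and (3) for a toy model y = x1/x2, with made-up estimates and uncertainties and central-difference sensitivity coefficients:

import numpy as np

def f(x):
    return x[0] / x[1]   # toy measurement model

x = np.array([10.0, 2.0])    # input estimates xi
u = np.array([0.05, 0.01])   # standard uncertainties u(xi)

# Sensitivity coefficients df/dxi by central differences
h = 1e-6 * np.maximum(np.abs(x), 1.0)
c = np.empty_like(x)
for i in range(x.size):
    e = np.zeros_like(x); e[i] = h[i]
    c[i] = (f(x + e) - f(x - e)) / (2.0 * h[i])

ui_y = c * u                        # Eq. (3): contribution of each input
u_y = np.sqrt((ui_y ** 2).sum())    # Eq. (2): combined standard uncertainty
print("contributions:", ui_y, "combined u(y):", u_y)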
  • asked a question related to Uncertainty Quantification
Question
1 answer
Hello, I'm interested in using the Dakota software to construct a multifidelity surrogate with polynomial chaos expansion, to substitute for a finite element model.
Any code examples?
Thank you
Relevant answer
Answer
Have you considered using Pynamical instead, which is a Python extension for modeling chaos/chaotic elements/edge of chaos phenomena in dynamical systems? Geoff Boeing, the developer, has done extensive research that could be helpful to you. You might want to visit his page and read some of his work to see if it would be useful in building your model.
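This is not Dakota syntax (for that, see the polynomial_chaos method in the Dakota reference manual), but as a plain-Python illustration of the underlying idea, here is a minimal single-fidelity PCE surrogate fitted by least-squares regression for one uniform input; the high_fidelity function is a stand-in for the finite element model:

import numpy as np
from numpy.polynomial.legendre import legvander

def high_fidelity(x):
    # Placeholder for the expensive finite element model
    return np.sin(np.pi * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, 50)    # uniform input -> Legendre basis
y_train = high_fidelity(x_train)

order = 6
V = legvander(x_train, order)           # Legendre Vandermonde matrix
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)   # PCE coefficients

x_test = np.linspace(-1.0, 1.0, 5)
y_surrogate = legvander(x_test, order) @ coef        # cheap surrogate evaluations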
  • asked a question related to Uncertainty Quantification
Question
2 answers
Hi, I am working in the area of risk quantification, to come up with something like value at risk for an IT service provider, using uncertainty theories such as possibility theory. There are several works on risk quantification using scales, but I am focusing on financial exposures and not numerical scales alone. Any pointers to work done in those areas - maybe for other industries?
Relevant answer
Answer
Hi, you can read my review paper titled "Neural Network-based Uncertainty Quantification: A Survey of Methodologies and Applications", freely downloadable from https://ieeexplore.ieee.org/document/8371683/
  • asked a question related to Uncertainty Quantification
Question
6 answers
gPC is used for uncertainty quantification. I find the gPC coefficients for Z = f(theta), where theta is a random variable (it may have any kind of pdf). I know that the first coefficient a0 is the mean of Z and that the sum of the squares of the other coefficients gives the variance (for an orthonormal basis).
But how can I find the shape of the pdf?
Because Z might have a uniform, Gamma, or even multimodal Gaussian distribution, in which case the mean and variance are not that useful.
Relevant answer
Answer
Dear Dr Issam,
I do not intend to build a pdf polynomial.
Dear Dr Joachim, my problem is to find the parameters/shape of the pdf of Z = f(theta), where the rv theta possesses a given pdf but f is not a given polynomial; f could be any function. gPC (generalized polynomial chaos) is a method to approximate a random variable (somewhat like a Fourier series approximating a function in the deterministic case); it is a spectral method. For example, if theta has a Gaussian distribution, the Hermite polynomials are chosen for the expansion of f(theta). After calculating the coefficients (as with a Fourier series, where a0 is the DC value), a0 is the mean (the first moment) and the variance follows from the sum of the squares of all the other coefficients. Now my question is: based on these coefficients, how do I characterize the pdf of Z (because Z might have a multimodal Gaussian distribution, in which case mean and variance are not helpful)?
  • asked a question related to Uncertainty Quantification
Question
3 answers
Greetings,
I'm interested in reading literature covering research on performance of deep learning systems. Specifically, works that attempt to quantify how performance changes when the fully trained system is exposed to real world data, which may have model deviations not expressed in training data. Think "Google flu trends": (http://science.sciencemag.org/content/343/6176/1203)
Please share references for this problem (your personal work or otherwise).
Thank you.
Relevant answer
Answer
My main papers are in Russian; only three are in English.
  • asked a question related to Uncertainty Quantification
Question
1 answer
When we consider Gaussian Markov Random Field simulation for spatial property modeling, how do we construct a discrete approximation of each realization to be incorporated into the DoE model, e.g. for optimization under uncertainty?
Relevant answer
Answer
  • asked a question related to Uncertainty Quantification
Question
8 answers
I have a complete year of hourly electricity (UK grid) import and PV generation data for a detached family house with a 6 kW solar photovoltaic array. Sadly, I only have 4 months' worth of actual data for the total electricity that was exported back to the grid (and not used within the property). Is there any way of estimating (from the existing data) the approximate export back to the grid for the remaining 8 months for which I have no export data? A monthly approximation with an uncertainty band attached would be sufficient, so long as the method is robust and referenceable.
A data sheet summary of monthly values is attached!
Sincere thanks
M
Relevant answer
Answer
Simply look up the monthly values of received solar energy (kWh/m²) and multiply these values by the area of the PV panel and its electrical efficiency.
You can use the Meteonorm software to get monthly solar irradiation data.
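A minimal sketch of one referenceable approach: calibrate an average self-consumption fraction on the 4 metered months, apply it to the known generation of the other 8 months, and take a crude band from the month-to-month spread (all numbers made up):

import numpy as np

# Metered months: PV generation and export, in kWh
gen_known = np.array([120.0, 180.0, 300.0, 420.0])
exp_known = np.array([30.0, 60.0, 140.0, 230.0])

self_use = (gen_known - exp_known) / gen_known   # self-consumed share per month
frac, spread = self_use.mean(), self_use.std()

# Remaining 8 months: generation is known, export is estimated
gen_missing = np.array([90.0, 150.0, 250.0, 380.0, 410.0, 330.0, 200.0, 110.0])
exp_est = gen_missing * (1.0 - frac)
band = gen_missing * spread                      # crude uncertainty band
print(np.c_[exp_est - band, exp_est, exp_est + band])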
  • asked a question related to Uncertainty Quantification
Question
3 answers
In sensitivity analysis as part of an uncertainty quantification framework, how helpful are surrogate models for numerical weather prediction (NWP) programs like WRF?
Relevant answer
Answer
Have a look at this document; it may be helpful for your topic. Good luck.
  • asked a question related to Uncertainty Quantification
Question
5 answers
Hello SWAT/SWAT-CUP Users,
It is said that the parameter with the largest t-statistic value is the most sensitive parameter in SWAT-CUP sensitivity analysis results. There are negative and positive t-statistic values. Do we take the largest absolute values, or only the largest positive values?
You can refer to the attached diagram to elaborate the answers.
Thank you
Relevant answer
Answer
Look at the p-values. Take only parameters whose p-value is less than 0.05. I sometimes include parameters whose p-value is slightly above the 0.05 threshold.
  • asked a question related to Uncertainty Quantification
Question
2 answers
Suppose I have mass concentration profile data of different aerosol species (dry state), along with an RH profile, from a chemical transport model (CTM). This CTM does have an internal aerosol routine which calculates AOD and SSA. To compute aerosol radiative forcing (ARF), one requires the scattering phase function or the asymmetry parameter in addition to the AOD and SSA values. Of course, I am aware that atmospheric profile data and surface reflectance information are also needed; these can be obtained from radiosondes (or, alternately, a climatological atmospheric profile corresponding to the region of study) and MODIS, respectively. Given the available data described above, I chose to use OPAC (assuming external mixing) to compute the aerosol optical and scattering properties from the mass concentration data of the different species. Once I have AOD, SSA and g, I use SBDART to compute ARF.
My main questions are:
(a) Are there studies giving a detailed uncertainty estimate combining all sources of uncertainty, from measurements/model simulations through to the radiative forcing computation (say, using an offline RT model)?
(b) Is there any study giving the uncertainty in ARF computation from using the asymmetry parameter instead of the scattering phase function?
(c) What is the uncertainty of a general chemical transport model's simulated aerosol species mass concentrations?
Relevant answer
Answer
Short answer:
A) This is hard.
B) Most complications come from the unknown scattering physics: the particle sizes put you simultaneously in the realms of Rayleigh scattering and Mie scattering, and the composite scattering function is complex and also a function of altitude.
Here are a few references on why this problem is difficult:
  • asked a question related to Uncertainty Quantification
Question
2 answers
In purity assessment it is an uncertainty component.
Relevant answer
Answer
Do you mean calculating molar absorptivity by Monte Carlo? The study was an analytical study on purity assignment of chloramphenicol (CAP). One of the impurities has a UV peak near CAP, and the study assumed both molecules could be treated as if they have the same molar absorptivity coefficient. It had a very small contribution and was neglected.
  • asked a question related to Uncertainty Quantification
Question
3 answers
Emission inventories are always associated with uncertainty. I need software to carry out uncertainty analysis on sets of emission values. @RISK, Crystal Ball, or any other suggestion will be highly appreciated.
Relevant answer
Answer
Giwa, there is software called Emisens that estimates uncertainties in road traffic emissions; please check this PhD thesis: http://infoscience.epfl.ch/record/149809/files/EPFL_TH4793.pdf. You can also contact Professor Alain Clappier, who developed this software: alain.clappier@live-cnrs.unistra.fr
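If a script is acceptable in place of @RISK or Crystal Ball, a minimal Monte Carlo sketch for one source category, E = activity x emission factor, with assumed distributions:

import numpy as np

rng = np.random.default_rng(0)
activity = rng.normal(1000.0, 50.0, 100_000)    # e.g. fuel use, with uncertainty
ef = rng.lognormal(np.log(2.5), 0.2, 100_000)   # emission factor, skewed uncertainty

E = activity * ef
print("mean:", E.mean(), "95% interval:", np.percentile(E, [2.5, 97.5]))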
  • asked a question related to Uncertainty Quantification
Question
1 answer
Should we use just one value for each realization? Or different quartiles? What if one of the quartiles is eliminated in the reduced model?
Relevant answer
Answer
You may use the two-stage stochastic modeling technique: some decisions are made before realizations of the uncertain events, and some after.
  • asked a question related to Uncertainty Quantification
Question
4 answers
We have conducted a population size estimation study among a KAP (key affected population) using different methods. So we have various estimates, although with overlapping acceptable intervals. We then hypothesized that the median of the various estimates could be the most plausible size estimate. It has also been suggested to use an average. How do we verify that this was the best decision in our case?
Relevant answer
Answer
It looks like location estimation, right? The mean is the best estimator if the data follow a Gaussian distribution. You can compute the residuals and perform a statistical test to check that your Gaussian assumption is not wrong.
The median is superior if your data show some outliers. You can use boxplots with notches to compare the locations of different data sets. There is also a correction factor for the median so that it matches the result of the mean when the data are Gaussian.
There is a third location estimator, called the shortest half.
The drawback of the median is that it has slower convergence.
Two references that might help.
mean vs median vs shortest half, Chapter 4:
@BOOK{Rousseeuw1987,
title = {Robust regression and outlier detection},
publisher = {Wiley},
year = {1987},
author = {Rousseeuw, P.J. and Leroy, A.M.},
series = {Wiley series in probability and mathematical statistics : Applied
probability and statistics},
doi = {10.1002/0471725382},
}
boxplots:
@ARTICLE{McGill1978,
author = {McGill, R. and Tukey, J.W. and Larsen, W.A.},
title = {Variations of Box Plots},
journal = {The American Statistician},
year = {1978},
volume = {32},
pages = {12--16},
number = {1},
doi = {10.2307/2683468},
issn = {00031305},
publisher = {Taylor \& Francis, Ltd. on behalf of the American Statistical Association},
}
I hope this was somehow helpful.
  • asked a question related to Uncertainty Quantification
Question
26 answers
Specifically, I am interested in uncertainty in building energy model predictions (see thesis attached), although I am also interested in views from other disciplines. It seems to me that uncertainty is a vital aspect when making any predictions about performance (be it a building or a rocket engine). If the inputs to a model are uncertain (which they inevitably are in many cases), then there is an inherent variability (uncertainty) associated with the output of that model. Therefore I think it is very important that this is communicated when presenting model predictions. However, it seems this is not done in many cases. Do you think uncertainty is an important aspect of model predictions? Does it depend on the industry or application?
Relevant answer
Answer
I agree with everyone that uncertainties should be taken into account from the beginning, and that all sources of uncertainty should be deeply analyzed.
  • asked a question related to Uncertainty Quantification
Question
3 answers
The existence of uncertainty cannot be neglected if reliable results are to be obtained in vibration-based damage detection. There are several methods (probabilistic, fuzzy and non-probabilistic) to account for uncertainty, but only a few of them use an Artificial Neural Network implementation to counter the problem of uncertainty. So, is there any detailed explanation of how to consider these uncertainties in an ANN? What is the optimum value of uncertainty for frequency and mode shape?
Relevant answer
Answer
If you are concerned with removing noise (denoising) from your measured vibration signals, wavelet-based denoising methods would probably be useful. You may need to denoise first, then use the ANN.
(1) If the noise produces impulsive behaviour in the measured vibration signal (wherein your signal of interest is smooth or continuous), you may use one of the available wavelet-based denoising functions in MATLAB, which will tend to remove impulsive noise.
(2) If your signal of interest itself is impulsive (as in bearing or gear vibration signals), then you may have to use some customised wavelet-based denoising methods and check which method suits you.
I hope I have understood your problem.
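Outside MATLAB, the same idea is available in Python via PyWavelets; a minimal soft-thresholding sketch on a synthetic signal ('db4', four levels, and the universal threshold are common default choices, not the only ones):

import numpy as np
import pywt

t = np.linspace(0.0, 1.0, 1024)
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

# Decompose, soft-threshold the detail coefficients, reconstruct
coeffs = pywt.wavedec(noisy, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest level
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))       # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')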
  • asked a question related to Uncertainty Quantification
Question
4 answers
For static models with only a few parameters and outputs, sophisticated methods for Uncertainty Quantification (UQ) requiring a large number of samples for uncertainty propagation may be applied. This case seems to be what most literature on UQ relates to. I'm working on UQ methods applicable for 1-D simulation models of dynamical systems. Advice on references is gratefully received. The scope is ODE or DAE models developed in Modelica (at least 100 parameters, 10 input signals, 20 output signals, say 30 minutes per simulation run). UQ of steady-state simulation results as well as of system dynamics is of interest.
Relevant answer
Answer
I suggest that you consider using a surrogate model. There are a number of surrogate models: response surface, polynomial chaos expansion, neural network, Kriging, etc.
The book below is a very good one on this topic:
A. Forrester, A. Sóbester, A. Keane, "Engineering Design via Surrogate Modelling: A Practical Guide," Wiley, 2008.
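As a complement, a minimal Kriging-style sketch with scikit-learn, where a cheap stand-in replaces the 30-minute simulation and only two parameters are varied (a real study with ~100 parameters would first need screening or one of the other surrogates listed above):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    # Stand-in for one scalar output of the slow dynamical-system simulation
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 2))   # 40 training runs over 2 key parameters
y = simulator(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
mean, std = gp.predict(rng.uniform(0.0, 1.0, (5, 2)), return_std=True)  # fast UQ queries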