Uncertainty Quantification - Science topic
Explore the latest questions and answers in Uncertainty Quantification, and find Uncertainty Quantification experts.
Questions related to Uncertainty Quantification
I need to perform uncertainty quantification with the Monte Carlo method, using Python in Abaqus, for the laminate layup (ply angles) and for variation in the amounts of fiber and resin. What recommendations would you give me to get started? Thank you very much.
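One common pattern to get started is to drive Abaqus from plain Python: sample the uncertain inputs, write them into a templated .inp file, run the job in batch, and collect the response. A hedged sketch, in which the template file, the placeholder tokens and the extract_response.py post-processing script are hypothetical and must be adapted to your model:

```python
# Hedged Monte Carlo sketch around Abaqus. Assumes a template input file
# with <PLY_ANGLE> / <FIBER_VF> placeholders and a post-processing script
# of your own that prints the response quantity -- all hypothetical names.
import random
import subprocess

N_SAMPLES = 100
results = []

with open("plate_template.inp") as f:
    template = f.read()

for i in range(N_SAMPLES):
    ply_angle = random.gauss(45.0, 2.0)      # ply angle [deg], assumed N(45, 2)
    fiber_vf = random.uniform(0.55, 0.65)    # fiber volume fraction, assumed uniform

    inp = template.replace("<PLY_ANGLE>", f"{ply_angle:.3f}") \
                  .replace("<FIBER_VF>", f"{fiber_vf:.3f}")
    job = f"mc_run_{i:04d}"
    with open(job + ".inp", "w") as f:
        f.write(inp)

    # run Abaqus in batch; "abaqus job=... interactive" is the standard CLI
    subprocess.run(["abaqus", f"job={job}", "interactive"], check=True)

    # extract the response (e.g. max stress) from the .odb with your own
    # Abaqus-Python script that prints a single number
    out = subprocess.run(["abaqus", "python", "extract_response.py", job + ".odb"],
                         capture_output=True, text=True, check=True)
    results.append(float(out.stdout))

mean = sum(results) / len(results)
var = sum((r - mean) ** 2 for r in results) / (len(results) - 1)
print(f"mean = {mean:.4g}, std = {var ** 0.5:.4g}")
```

With the loop in place, convergence of the mean/variance with the sample count is the first thing to check before refining the input distributions.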
I am working on uncertainty quantification of a numerical model using Fluent and MATLAB. I am automating the workflow from MATLAB by writing a journal file (a script of commands issued through the Text User Interface). However, when writing the TUI commands for result analysis in Fluent, I run into problems and cannot find a way to write them. I would be very grateful if you could help me solve this.
My paper "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks" has been published in Nature Communications since 20 November, but on Google Scholar only the preprint from Research Square is available...
If an electron A at a specific spacetime loses a certain number of quanta of energy (say, 100 quanta), naturally its total energy has come down. Or, will anyone claim that it has thus increased or that it is in a constant state? Now imagine that it is accelerated later by other forces.
Consider another electron B at another spacetime. It has not lost so many quanta of energy (say, only 50 quanta). Like A, now B is also being accelerated with the same amount of energy.
Of course, whether our measurement of the acceleration energy in the two cases is absolutely exact is yet another ambiguous matter, but we suppose that they are equal.
Will the latter be in a better position with respect to total energy content than the former? Or will it be claimed that their energy, mass, etc., after receiving equal acceleration from outside, are equal, merely because they are both electrons already taken to possess a certain mass?
Moreover, we know that along the paths the two electrons take there will be other physical influences which we do not, and cannot, determine. These influences must be at least slightly different from each other.
In short, the mass, energy, etc. of the two electrons will never be equal in any physical state, nor have they been absolutely equal at any time. And we know that nothing in the world is in a static state. So there is no reason to suppose that electrons will have a static mass, energy, etc.
Of course, we can calculate and fix them as supposedly static mass, energy, etc. These will be useful for practical purposes, but not as absolutes.
That is, our generalized determination of an exact mass for the electron need not be the exact energy, mass, etc. of an electron in its various physically processual circumstances. Under normal circumstances within a specific chemical element, and when freed from it, the electron will have different values.
This shows that no electron (in itself) will be identical in all its properties with any other. Our description of these properties may be considered as identical. But this description in physics is meant merely for pragmatic purposes! One cannot now universalize it and say that the mass, energy, etc. of electrons are the same everywhere.
What about the said values (mass, energy, etc.) of other particles like the photon, neutrino, etc.? I believe no one can prove the case to be otherwise for these particles / wavicles either.
That is, there is nothing in the world, including electrons, quarks, photons, neutrinos, etc., with an exact duplicate anywhere else. This is the foundation for the principle of physical identity.
I am working on a design optimisation problem. I would like to ask: for a problem involving uncertainty, should design-optimisation-under-uncertainty techniques be used?
Hi everyone, I was wondering if there are any good yearly generic or field-specific conferences on topics of sensitivity and uncertainty analysis. Thanks! Shahroz
Hi everyone, I am performing Sobol sensitivity analysis and wondering if there is a way to set a threshold on the sensitivity index, so that parameters with a sensitivity index greater than the threshold are classed as sensitive.
Many thanks!
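There is no universal cutoff, but a common heuristic is to call parameters with a total-order index above some small threshold (often in the 0.01-0.05 range) sensitive. A minimal sketch with SALib (assumed installed); the toy model and the 0.05 threshold are illustrative:

```python
# Sobol sensitivity with SALib: flag parameters whose total-order index
# S_T exceeds a chosen (heuristic) threshold.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

X = saltelli.sample(problem, 1024)               # Saltelli sampling scheme
Y = X[:, 0] + 2.0 * X[:, 1] + 0.01 * X[:, 2]     # toy model
Si = sobol.analyze(problem, Y)

THRESHOLD = 0.05   # heuristic choice; there is no universal standard
for name, st in zip(problem["names"], Si["ST"]):
    verdict = "sensitive" if st > THRESHOLD else "negligible"
    print(f"{name}: ST = {st:.3f} -> {verdict}")
```

Whatever threshold you pick, it is worth reporting the confidence intervals SALib returns alongside the indices, since a "sensitive" verdict near the threshold may not be robust.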
It seems that, at the present level of science and mathematics, we are able to measure with considerable precision the uncertainty in the development of many socio-economic phenomena that have been regularly researched (measured quantitatively) in the past and at present.
I am conducting research on the relationship between uncertainty in running a business and the risk of enterprise bankruptcy, using multidimensional discriminant models and uncertainty indicators for 1990-2004 (the period of Poland's systemic transformation), for a scientific conference marking the 100th anniversary of the Department of Economic History at the University of Poznań (planned for 2021).
In cliometrics, we adopted (Dr. D.A. Zalewski (1994); Dr. J. Wallusch (2009)) variance as the measure of long-term uncertainty, i.e. the arithmetic mean of the squared deviations (differences) of individual feature values from the mean. However, this is not literally the variance of a single variable, but the change in process variance that arises under the influence of changes in the size of the random component; that is, the variance in the time series varies over time. Cliometrics takes uncertainty to be a derivative of variability, and this is precisely what variance measures. For a specific study, we take long series of at least several dozen observations and estimate them in a GARCH/ARCH model. Thus, in cliometrics, we consider both uncertainty and risk to be quantifiable, provided we have data series.
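A minimal sketch of that GARCH-based measure, using the Python `arch` package (an assumption; any GARCH implementation would do): the fitted conditional volatility serves as the time-varying uncertainty measure for a long series of observations.

```python
# GARCH(1,1) conditional volatility as a time-varying uncertainty measure.
# The synthetic series stands in for your long data series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 500)              # stand-in for the observed series

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
fitted = model.fit(disp="off")

uncertainty = fitted.conditional_volatility  # sigma_t, one value per period
print(uncertainty[:10])
```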
In my approach, I propose to use as a measure of uncertainty the ex-post count of erroneous forecasts and expectations of changes in a given indicator (following Ferderer, J. Peter (1993): The impact of uncertainty on aggregate investment spending: an empirical analysis. Journal of Money, Credit and Banking, 25, 30–48), e.g. GDP growth, inflation, investment, or the unemployment rate, relative to the forecasts and expectations published in the media by a specific, selected group of professional centres dealing with the given socio-economic phenomenon.
The more erroneous forecasts I detect through an accuracy analysis (matching old forecasts against the statistical data later disclosed by a research centre that regularly analyses the given phenomenon), the higher the economic uncertainty index calculated for that phenomenon. My approach derives directly from the technical problem of measurement uncertainty with a physical instrument, as explained by physicists.
The problem of uncertainty in running a business
For example, how do I assess the uncertainty of running a business (company) in a given country and period, e.g. 12 months? I compare the variability (variability index) of two data series: the first is the statistics of newly opened businesses, and the second is the statistics of businesses closed or suspended over the same period (12 months). In my opinion, superimposing the variability of these two series of observations gives a very good picture of the scale of uncertainty in economic activity in a given year or years in a given country.
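For concreteness, a small sketch of that comparison with made-up monthly counts, using the coefficient of variation as the variability index:

```python
# Compare the variability of business openings vs. closures over 12 months.
# The counts below are purely illustrative.
import numpy as np

opened = np.array([310, 295, 330, 280, 350, 365, 340, 300, 290, 320, 335, 355])
closed = np.array([120, 180, 150, 260, 140, 170, 230, 190, 160, 210, 250, 200])

def cv(x):
    """Coefficient of variation: sample std / mean."""
    return x.std(ddof=1) / x.mean()

print(f"CV openings: {cv(opened):.3f}")
print(f"CV closures: {cv(closed):.3f}")   # the larger CV signals more uncertainty
```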
In my opinion, uncertainty is like a shadow changing its form with each change in the incidence of the light rays: the more I know about the light source and the object on which the light falls, the more of it I can notice.
I know that the question is a bit ambitious, therefore I want to break it down:
Have there been efforts to accurately measure the spatio-temporal mismatch? If yes, did it work?
Are the factors that impact the mismatch clear or is it an open question? (I guess this is partly answered since some factors are obvious)
Did someone actually manage to model the spatio-temporal shift/deformation between weather radar and on ground precipitation estimates?
It would be very helpful if you can point me to related research. I am dealing with <1km spatial and <5 minute temporal resolutions of precipitation estimates. The mismatch is obvious, but not well correlated to the height difference between radar (up to 3km) and on ground estimates.
How can one calculate the sum and the difference of several random variables that follow exponential distributions with different parameters?
(The value of lambda differs for all or some of the variables.)
Example:
L(t) = f(t) + g(t) − h(t),
with
f(t) = λ₁·exp(−λ₁·t), g(t) = λ₂·exp(−λ₂·t), h(t) = λ₃·exp(−λ₃·t),
where λ₁, λ₂, λ₃ are the (different) rate parameters.
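For independent exponentials with distinct rates, the sum has a known closed form (the hypoexponential distribution; for two variables the density is λ₁λ₂/(λ₂−λ₁)·(exp(−λ₁t) − exp(−λ₂t)) for t ≥ 0). Once differences enter, the algebra gets messy, and Monte Carlo is often the quickest route. A minimal numpy sketch with illustrative rate values:

```python
# Monte Carlo check of the distribution of L = X + Y - Z for independent
# exponentials with different rates (lambda values are illustrative).
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2, lam3 = 1.0, 2.0, 3.0
n = 1_000_000

X = rng.exponential(1 / lam1, n)   # numpy parameterizes by the mean 1/lambda
Y = rng.exponential(1 / lam2, n)
Z = rng.exponential(1 / lam3, n)
L = X + Y - Z

print(f"E[L]  = {L.mean():.4f} (exact: {1/lam1 + 1/lam2 - 1/lam3:.4f})")
print(f"Var L = {L.var():.4f} (exact: {1/lam1**2 + 1/lam2**2 + 1/lam3**2:.4f})")
```

The empirical histogram of L then gives the full distribution; the mean and variance checks above follow from linearity and independence.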
When calculating a budget or a risk reserve, a simulation or estimation is performed.
Sometimes the Monte Carlo simulation is used.
It seems that each administration and each company uses different confidence percentiles when summarising the simulations in order to take a final decision.
Commonly, the 70%, 75% or 80% percentiles are used; the American administration uses 60% for civil works projects...
My question is: is there any recommendation or usual approach for choosing a percentile?
Is there any standard or normalized confidence percentile to use?
I expected to find such an answer in the AACE International or International Cost Estimating and Analysis Association, but I did not.
Thank you for sharing your knowledge.
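As far as I know there is no universally mandated percentile; in practice the choice reflects the organization's risk appetite, which is why everything from P50 to P80 appears. A minimal sketch of how the percentiles are read off a Monte Carlo cost simulation (the cost model and numbers are purely illustrative):

```python
# Monte Carlo cost simulation and the usual reporting percentiles.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# illustrative model: three work packages with triangular uncertainty
cost = (rng.triangular(90, 100, 130, n)
        + rng.triangular(40, 50, 80, n)
        + rng.triangular(20, 25, 45, n))

for p in (50, 60, 70, 80):
    print(f"P{p}: {np.percentile(cost, p):,.1f}")
```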
What are the most important methods for quantifying risk (failures, uncertainties, ...) in production systems?
We are especially interested in production systems, but we also look at other fields...
Both formal (MCMC) and informal (Generalized Likelihood Uncertainty Estimation, GLUE) Bayesian methods are widely used in the quantification of uncertainty. As far as I know, the GLUE method is extremely subjective, and the choice of likelihood function varies widely, which is confusing. So, what are the advantages of GLUE, and does it deserve its popularity? Is it just that it does not require a formal error model? What are the pros and cons of the two methods? And what should one pay attention to when constructing a new informal likelihood function (such as the limits-of-acceptability, LOA, approach)?
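To make the comparison concrete, here is a bare-bones GLUE sketch: sample parameters, score each run with an informal likelihood (Nash-Sutcliffe efficiency here), keep the "behavioral" sets above a threshold, and form likelihood-weighted prediction bounds. The model, data, and the 0.5 threshold are illustrative assumptions; the subjectivity of exactly these two choices is the usual criticism of GLUE.

```python
# Minimal GLUE: uniform parameter sampling, NSE as informal likelihood,
# behavioral threshold, likelihood-weighted 5-95% prediction bounds.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)   # synthetic data

def model(a, k):
    return a * np.exp(-k * t)

n = 5000
a_s = rng.uniform(0.5, 4.0, n)
k_s = rng.uniform(0.05, 1.0, n)

sims = np.array([model(a, k) for a, k in zip(a_s, k_s)])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

behavioral = nse > 0.5                        # subjective threshold
sims_b = sims[behavioral]
w = nse[behavioral] / nse[behavioral].sum()   # likelihood weights

lo, hi = [], []
for j in range(t.size):                       # weighted quantiles per time step
    idx = np.argsort(sims_b[:, j])
    cw = np.cumsum(w[idx])
    lo.append(sims_b[idx, j][np.searchsorted(cw, 0.05)])
    hi.append(sims_b[idx, j][np.searchsorted(cw, 0.95)])
print(f"band at t=0: [{lo[0]:.2f}, {hi[0]:.2f}]")
```

A formal MCMC treatment would replace the NSE score with an explicit error model and sample the posterior instead of filtering; that is the essential trade: statistical rigor versus freedom from an error-model assumption.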
Uncertainty of Results
(International research on the different aspects related to the uncertainty of results in engineering and management systems)
Explanation
Over more than 80 years, great physicists such as Heisenberg and Einstein introduced valuable theories related to uncertainty, notably the uncertainty principle for conjugate quantities (such as position and momentum, or energy and time). Moreover, in metrology, the issues related to the uncertainty of measuring tools have been strongly considered (uncertainty of measurement). There are many formulas, simple or complex, which can describe the uncertainty in these settings. But in engineering and management systems, the speed of change is now greater than ever before. We want to consider and study this important issue from a new aspect: it seems that the uncertainty of results in engineering or management systems must be calculated. This research requires deep study and the cooperation of scholars from several branches of engineering and management.
Questions
1. Do you think that we can carry out this research?
2. What do you think about determining a comprehensive formula for the uncertainty of results? (see attached file)
3. What do you think about the role of other factors, such as human factors?
4. What is your opinion of this research?
5. Is there a university or research centre that could help us?
6. Would you like to assist us in this research? (If yes, please send us: name, email address, affiliation, expertise)
When dealing with an uncertainty characterization problem, we do not know what the true value of an uncertain variable is. So, how can we evaluate which method (e.g. Monte Carlo simulation, Chance-constrained programming, etc.) could better estimate the true value of an uncertain variable?
I have an equation of the form
f(q, q̇, s) = M(q, s)·(q̈_d + K_p·ė + K_d·e) + N(q, q̇, s),
where M(q, s) is the n × n inertia matrix of the entire system, q denotes the n × 1 column matrix of joint variables (joint/internal coordinates), s represents system parameters such as the masses and characteristic lengths of the bodies, and f(q, q̇, s) is the n × 1 column matrix of generalized driving forces, which may be functions of the system's generalized coordinates, speeds, and/or system parameters. The term N(q, q̇, s) includes inertia-related loads such as Coriolis and centripetal generalized forces, as well as gravity terms. Defining the n × 1 column matrix of the desired joint trajectory as q_d(t), one can express the tracking error as e(t) = q_d(t) − q(t).
I have two uncertain parameters s = (s1, s2), in the masses of the two links, and I want to propagate this uncertainty to find the mean of f for every joint of my two-link serial robot.
The distribution of the two non-deterministic parameters is uniform, and the basis functions are Legendre polynomials.
Using Galerkin projection, one obtains in the end:
f_jl = (1/c_l²)·⟨f_j, φ_l(s)⟩,
for l = 0, ..., N_t and j = 1, ..., n, where N_t is the number of coefficients and n is the size of the f vector.
My question is: how can I calculate the inner product above to find, for example, f_10? I know φ_l(s), but what should I take for f_j in this integral:
∬ f_j(s1, s2)·φ_l(s1, s2)·pdf(s1)·pdf(s2) ds1 ds2 = ⟨f_j, φ_l⟩ (which, divided by c_l², gives f_jl)?
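Since the inputs are uniform and the basis is a tensor Legendre basis, the inner product is a weighted integral that Gauss-Legendre quadrature evaluates exactly up to the rule's degree: f_j is simply your deterministic dynamics model evaluated at the quadrature nodes (s1, s2), and the l = 0 coefficient is then the mean you are after. A minimal sketch, with a hypothetical placeholder for f_j:

```python
# 2-D polynomial-chaos coefficients by Gauss-Legendre quadrature.
# f_j below is a hypothetical placeholder: replace it with the generalized
# force of joint j evaluated at parameters (s1, s2). Uniform inputs on
# [-1, 1], Legendre basis; <P_l, P_l> with the uniform density is 1/(2l+1).
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def f_j(s1, s2):
    # placeholder for the dynamics model at (s1, s2)
    return (1.0 + 0.3 * s1) * (2.0 - 0.1 * s2)

nodes, weights = leggauss(8)           # 8-point rule per dimension

def pce_coeff(l1, l2):
    """Coefficient for the tensor basis P_l1(s1) * P_l2(s2)."""
    phi1 = Legendre.basis(l1)(nodes)
    phi2 = Legendre.basis(l2)(nodes)
    w1 = 0.5 * weights                 # uniform pdf on [-1, 1] is 1/2
    w2 = 0.5 * weights
    num = sum(w1[i] * w2[k] * f_j(nodes[i], nodes[k]) * phi1[i] * phi2[k]
              for i in range(len(nodes)) for k in range(len(nodes)))
    c2 = (1.0 / (2 * l1 + 1)) * (1.0 / (2 * l2 + 1))   # normalization c_l^2
    return num / c2

print(pce_coeff(0, 0))   # mean of f_j
print(pce_coeff(1, 0))   # first-order coefficient in s1
```

If your parameters live on physical intervals [a, b] rather than [-1, 1], map them affinely onto [-1, 1] before evaluating f_j.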
I am facing a problem when writing a journal file in Fluent using TUI commands. I cannot extract results (temperature distribution along the centerline, velocity distribution along the centerline) from the CFD post-processing using a Fluent journal file.
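A hedged sketch of the batch pattern (shown in Python; a MATLAB system() call is analogous): write a journal that creates a line surface along the centerline and exports the variables on it, then launch Fluent with -g -i. TUI argument orders differ between Fluent versions, so run each command once interactively in the Fluent console and copy the transcript before automating.

```python
# Generate a Fluent journal and run Fluent in batch. The TUI commands
# below (read-case-data, line-surface, export/ascii) exist, but their
# exact argument sequences are version-dependent -- verify interactively.
import subprocess

journal = """\
/file/read-case-data "model.cas.h5"
/solve/iterate 200
; create a line surface along the centerline (3-D: x0 y0 z0 x1 y1 z1)
/surface/line-surface centerline 0 0 0 1 0 0
; export selected variables on that surface to an ASCII file
/file/export/ascii "centerline.txt" centerline () no temperature velocity-magnitude quit
/exit yes
"""

with open("auto.jou", "w") as f:
    f.write(journal)

# -g: no GUI; -i: journal file. Adjust the solver flavor (2ddp/3ddp).
subprocess.run(["fluent", "3ddp", "-g", "-i", "auto.jou"], check=True)
```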
In the context of time series analysis, there are several multi-step-ahead prediction (MSAP) strategies, such as the recursive and direct strategies (the two fundamentally distinct and opposed mechanisms). The recursive strategy is the most popular among practitioners. Considering that random initial weights cause inconsistency at the output of RNNs (unless this is dealt with properly), how can one quantify uncertainty over the forecast horizon? I need bands within which the forecasts oscillate.
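One pragmatic way to get such bands is ensembling: repeat the recursive forecast with different initial weights (or dropout masks, or bootstrap resamples of the training set) and take empirical quantiles per horizon step. A sketch with a hypothetical stand-in for the trained network:

```python
# Ensemble-based forecast bands: run the recursive forecast for many
# ensemble members and take per-step quantiles across members.
# `forecast_one_member` is a placeholder for your trained RNN.
import numpy as np

def forecast_one_member(history, horizon, seed):
    """Placeholder: recursive multi-step forecast from one ensemble member."""
    rng = np.random.default_rng(seed)
    preds, last = [], history[-1]
    for _ in range(horizon):
        last = 0.9 * last + rng.normal(0, 0.1)   # stand-in dynamics
        preds.append(last)
    return np.array(preds)

history = np.sin(np.linspace(0, 6, 100))
horizon, members = 20, 50

ens = np.array([forecast_one_member(history, horizon, s) for s in range(members)])
median = np.median(ens, axis=0)
lo, hi = np.percentile(ens, [5, 95], axis=0)     # 90% band per horizon step
print(median[:3], lo[:3], hi[:3])
```

The band width growing with the horizon is exactly the recursive error accumulation you describe; MC-dropout at prediction time is a drop-in alternative to retraining with different seeds.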
I have run a number of time-consuming, complex simulations:
- approx. 200 samples
- input parameters are not definable as simple numerical values (they are levels like: low, mid, high, etc.)
So, the input parameters and output values are known, but the model is so complex that it is effectively a black box.
Is it possible to sort the input parameters by their importance? If yes, how?
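Yes. With ~200 samples, a common recipe is to fit a surrogate on the encoded categorical inputs and rank the inputs by permutation importance. A scikit-learn sketch; the column names, level sets, and data are illustrative stand-ins for your simulation table:

```python
# Rank categorical black-box inputs with a random-forest surrogate and
# permutation importance (model-agnostic, works at ~200 samples).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)
levels = ["low", "mid", "high"]
X_raw = pd.DataFrame({f"p{i}": rng.choice(levels, 200) for i in range(5)})
y = rng.normal(size=200)                       # replace with your outputs

X = OrdinalEncoder(categories=[levels] * 5).fit_transform(X_raw)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

for name, m in sorted(zip(X_raw.columns, imp.importances_mean),
                      key=lambda pair: -pair[1]):
    print(f"{name}: {m:.4f}")
```

Check the surrogate's cross-validated accuracy first; if the forest cannot reproduce the outputs, its importance ranking is not trustworthy.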
Are these feasible?
- Since the fluid no longer strictly satisfies incompressibility during heating, I have sampled the five coefficients under heating conditions, treating them as uncorrelated.
- Since incompressibility is observed under adiabatic conditions, I have drawn samples according to the definitions of the five parameters in "Theory and Simulation of Turbulence [M]", among which some of the parameters are correlated.
References
- The incompressibility condition is used in the chain "NS equations → Reynolds equations → k-ε model". This condition is not strictly correct when the fluid is heated, because the density changes.
- The five coefficients of the k-ε model are defined or derived under the condition that the density is constant, i.e. incompressible flow.
- In "Uncertainty Quantification of Turbulence Model Coefficients via Latin Hypercube Sampling Method", Abdelkader Frendi put forward the sampling method for the five parameters.
- There is a similar sampling method in "Theory and Simulation of Turbulence [M]" (2nd edition).
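For the sampling itself, scipy's qmc module provides Latin Hypercube designs directly. A small sketch; the ±20% ranges around the standard k-ε values are illustrative (use the bounds from the papers cited above), and the correlated case would need an additional rank-correlation transform such as Iman-Conover:

```python
# Latin Hypercube samples of the five k-epsilon coefficients (scipy >= 1.7).
import numpy as np
from scipy.stats import qmc

# standard values: C_mu, C_eps1, C_eps2, sigma_k, sigma_eps
nominal = np.array([0.09, 1.44, 1.92, 1.0, 1.3])
lower, upper = 0.8 * nominal, 1.2 * nominal     # illustrative +/-20% bounds

sampler = qmc.LatinHypercube(d=5, seed=0)
unit = sampler.random(n=50)                     # 50 samples in [0, 1)^5
samples = qmc.scale(unit, lower, upper)         # rescale to the bounds
print(samples[:3])
```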
Hello!
Can you please explain to me the difference between uncertainty quantification and sensitivity analysis?
Thank you!
Hello, I am interested in using the Dakota software to construct a multifidelity surrogate with polynomial chaos expansion, as a substitute for a finite element model.
Is there some code example?
Thank you
Hi, I am working in the area of risk quantification, aiming at something like a value-at-risk for an IT service provider, using uncertainty theories such as possibility theory. There are several works on risk quantification using scales, but I am focusing on financial exposures and not numerical scales alone. Any pointers to work done in those areas, perhaps in other industries?
gPC is used for uncertainty quantification. I find the gPC coefficients for Z = f(theta), where theta is a random variable (it may have any kind of pdf). I know the first coefficient a0 is the mean of Z, and the sum of the squares of the other coefficients gives the variance.
But how can I find the shape of the pdf?
Z might have a uniform, Gamma, or even multimodal Gaussian distribution, in which case the mean and variance alone are not very useful.
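Beyond the first two moments, the gPC expansion is itself a cheap surrogate of Z, so the full pdf can be recovered empirically: sample theta from its distribution, evaluate the expansion, and apply a kernel density estimate. A one-dimensional Hermite-chaos sketch with illustrative coefficients:

```python
# Recover the pdf shape of Z from its gPC expansion by sampling + KDE.
# Hermite-chaos (probabilists' Hermite) for a Gaussian theta; the
# coefficients below are illustrative stand-ins for your fitted values.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import gaussian_kde

coeffs = np.array([1.0, 0.5, 0.2])            # a0, a1, a2 from the gPC fit
theta = np.random.default_rng(0).normal(size=100_000)

Z = hermeval(theta, coeffs)                   # cheap surrogate evaluations
kde = gaussian_kde(Z)

grid = np.linspace(Z.min(), Z.max(), 200)
pdf = kde(grid)                               # estimated pdf, any shape
print(grid[np.argmax(pdf)])                   # location of the mode
```

Because evaluating the expansion is essentially free, even multimodal shapes show up clearly with large sample sizes.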
Greetings,
I'm interested in reading literature covering research on performance of deep learning systems. Specifically, works that attempt to quantify how performance changes when the fully trained system is exposed to real world data, which may have model deviations not expressed in training data. Think "Google flu trends": (http://science.sciencemag.org/content/343/6176/1203)
Please share references for this problem (your personal work or otherwise).
Thank you.
When we use Gaussian Markov random field simulation for spatial property modelling, how can we construct a discrete approximation of each realization to be incorporated into the DoE model, for example for optimization under uncertainty?
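One common recipe (a sketch of one option, not the only one): draw GMRF realizations via a Cholesky solve on the precision matrix, then reduce the ensemble to a small discrete scenario set with probabilities by clustering, so that each scenario enters the DoE / optimization-under-uncertainty model. The toy 1-D precision matrix below stands in for your spatial one:

```python
# GMRF realizations via Cholesky of the precision matrix Q, then k-means
# reduction to a discrete scenario set with probabilities.
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from sklearn.cluster import KMeans

n, n_real, n_scen = 100, 500, 5
# toy precision: tridiagonal first-order random walk + nugget (SPD)
Q = np.diag(np.full(n, 2.1)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

L = cholesky(Q, lower=True)                       # Q = L L^T
z = np.random.default_rng(0).normal(size=(n, n_real))
X = solve_triangular(L.T, z, lower=False).T       # x ~ N(0, Q^{-1}), row = realization

km = KMeans(n_clusters=n_scen, n_init=10, random_state=0).fit(X)
scenarios = km.cluster_centers_                   # discrete approximation
probs = np.bincount(km.labels_, minlength=n_scen) / n_real
print(probs)                                      # scenario probabilities
```

Quantile-based selection of representative realizations (e.g. P10/P50/P90 on a response functional) is the usual alternative to clustering; either way, check that the reduced set preserves the statistics your optimization actually depends on.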
I have a complete annual record of 'hourly' electricity (UK grid) import and PV generation for a detached family house with a 6 kW solar photovoltaic array. Sadly, I only have 4 months' worth of actual data for the total electricity that was exported back to the grid (and not used within the property). Is there any way of estimating (from the existing data) the approximate export back to the grid for the remaining 8 months for which I have no export data? It would be sufficient to have a monthly approximation with an uncertainty band attached, so long as the method is robust and referenceable.
Data sheet summary of monthly values attached!
Sincere thanks
M
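A hedged sketch of one referenceable approach: on the 4 months with metered export, regress hourly export on PV generation and import (export is essentially generation minus self-consumption), then predict the missing 8 months with statsmodels prediction intervals as the uncertainty band. The file and column names below are assumptions; adapt them to the attached data sheet.

```python
# Regression-based gap filling for export data, with prediction intervals.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("house_hourly.csv")            # hypothetical file layout
known = df.dropna(subset=["export_kwh"])
unknown = df[df["export_kwh"].isna()]

X = sm.add_constant(known[["pv_gen_kwh", "import_kwh"]])
model = sm.OLS(known["export_kwh"], X).fit()

Xu = sm.add_constant(unknown[["pv_gen_kwh", "import_kwh"]])
pred = model.get_prediction(Xu).summary_frame(alpha=0.05)
pred.index = unknown.index

# summing hourly bounds overstates the monthly band (ignores error
# cancellation), so this is a conservative envelope
monthly = pred[["mean", "obs_ci_lower", "obs_ci_upper"]].groupby(unknown["month"]).sum()
print(monthly)
```

Since the 4 known months cover only part of the seasonal cycle, it is worth checking the residuals against season (solar elevation, occupancy) before trusting the extrapolation.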
In sensitivity analysis as part of an uncertainty quantification framework, how helpful are surrogate models for numerical weather prediction (NWP) programs like WRF?
Hello SWAT/SWAT-CUP Users,
It is said that the parameter with the largest t-statistic value is the most sensitive parameter in SWAT-CUP sensitivity analysis results. There are negative and positive t-statistic values. Do we take the largest absolute value, or only the largest positive values?
You can refer to the attached diagram to elaborate the answers.
Thank you
Suppose I have mass concentration profiles of different aerosol species (dry state), along with an RH profile, from a chemical transport model (CTM). This CTM does have an internal aerosol routine which calculates AOD and SSA. In order to compute aerosol radiative forcing (ARF), one requires the scattering phase function or the asymmetry parameter in addition to the AOD and SSA values. Of course, I am aware that atmospheric profile data and surface reflectance information are also needed, which can be obtained respectively from radiosondes (or, alternatively, a climatological atmospheric profile corresponding to the study region) and MODIS. Given the available data described above, I chose to use OPAC (assuming external mixing) to compute the aerosol optical and scattering properties from the mass concentration data of the different species. Once I have AOD, SSA and g, I use SBDART to compute the ARF.
My main questions are:
(a) Are there studies focused on giving a detailed uncertainty estimate combining all sources of uncertainty from measurements/model simulations till radiative forcing computation (say using offline RT model)?
(b) Is there any study which gives the uncertainty in ARF computation when using the asymmetry parameter instead of the scattering phase function?
(c) What is the uncertainty of general chemical transport model simulations of aerosol species mass concentrations?
In purity assessment it is an uncertainty component.
Emission inventories are always associated with uncertainty. I need software to carry out uncertainty analysis on sets of emission values; @RISK, Crystal Ball or any other suggestion will be highly appreciated.
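Besides @RISK and Crystal Ball, the same Monte Carlo propagation (the IPCC guidelines call it Approach 2) is straightforward in open-source Python. A sketch with two illustrative source categories and made-up uncertainty ranges:

```python
# Monte Carlo uncertainty propagation for an emission inventory:
# emissions = sum over categories of activity * emission factor.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# category 1: activity 1000 units +/-5% (95% CI, normal), EF lognormal
act1 = rng.normal(1000, 0.05 * 1000 / 1.96, n)
ef1 = rng.lognormal(np.log(2.0), 0.2, n)

# category 2: activity 400 units +/-10%, EF lognormal
act2 = rng.normal(400, 0.10 * 400 / 1.96, n)
ef2 = rng.lognormal(np.log(5.0), 0.3, n)

total = act1 * ef1 + act2 * ef2
p2_5, p97_5 = np.percentile(total, [2.5, 97.5])
print(f"mean = {total.mean():,.0f}; 95% range = [{p2_5:,.0f}, {p97_5:,.0f}]")
```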
Should we use just one value for each realization? Or different quartiles? What if one of the quartiles is eliminated in the reduced model?
We have conducted a population size estimation study among KAP (key affected populations) using different methods. We obtained various estimates, although with overlapping acceptable intervals. We then hypothesized that the median of the various estimates could be the most plausible size estimate. It has also been suggested to use an average. How do we establish whether this was the best decision in our case?
Specifically, I am interested in uncertainty in building energy model predictions (see thesis attached), although I am interested in the views from other disciplines also. It seems to me that uncertainty is a vital aspect when making any predictions about performance (be it of a building or a rocket engine). If the inputs to a model are uncertain (which they inevitably are in many cases), then there is an inherent variability (uncertainty) associated with the output of that model. Therefore I think it is very important that this is communicated when reporting model predictions. However, it seems this is not done in many cases. Do you think uncertainty is an important aspect of model predictions? Does it depend on the industry or application?
The existence of uncertainty cannot be neglected if reliable results are to be obtained in vibration-based damage detection. There are several methods (probabilistic, fuzzy and non-probabilistic) for handling uncertainty, but only a few of them use artificial neural network implementations to counter the problem of uncertainty. So, is there any detailed explanation of how to consider uncertainties in an ANN? What is the optimum value of uncertainty for frequency and mode shape?
For static models with only a few parameters and outputs, sophisticated methods for Uncertainty Quantification (UQ) requiring a large number of samples for uncertainty propagation may be applied. This case seems to be what most literature on UQ relates to. I'm working on UQ methods applicable for 1-D simulation models of dynamical systems. Advice on references is gratefully received. The scope is ODE or DAE models developed in Modelica (at least 100 parameters, 10 input signals, 20 output signals, say 30 minutes per simulation run). UQ of steady-state simulation results as well as of system dynamics is of interest.