Uncertainty - Science topic
Uncertainty is the condition in which reasonable knowledge regarding risks, benefits, or the future is not available.
Questions related to Uncertainty
Advancing Supplier Selection: Evaluating Fuzzy MCDM vs. Quantum-Inspired Optimization in High-Dimensional, Uncertain Decision Spaces
Supplier selection remains a highly complex multi-criteria decision-making (MCDM) challenge, particularly in environments characterized by uncertainty, incomplete information, and dynamic market fluctuations. Traditional fuzzy MCDM methods (e.g., Fuzzy AHP, Fuzzy TOPSIS, Fuzzy ITARA) have been widely adopted for their ability to handle linguistic imprecision and expert-driven evaluations, making them well-suited for procurement decision-making. However, the advent of quantum-inspired optimization (QIO) algorithms, which exploit principles of quantum superposition, entanglement, and probabilistic heuristics, presents an alternative paradigm that claims superior efficiency in solving high-dimensional combinatorial optimization problems.
This raises several critical academic and methodological challenges that remain underexplored:
1. Computational Complexity vs. Decision Interpretability
While QIOs claim exponential speedup in solving large-scale supplier selection problems, their lack of explainability and interpretability could hinder practical adoption in procurement. To what extent can QIOs outperform traditional fuzzy MCDM methods in real-world procurement applications, considering the trade-offs between computational complexity, solution quality, and decision transparency?
2. Uncertainty Representation in Supplier Selection Models
Fuzzy logic excels at modeling qualitative uncertainty and linguistic vagueness, whereas quantum-inspired approaches rely on probabilistic distributions and non-classical optimization heuristics. Can QIOs effectively model vague, preference-driven supplier evaluation criteria, or do they require hybridization with fuzzy-based uncertainty modeling to enhance robustness?
3. Empirical Benchmarking and Industrial Feasibility
Despite theoretical claims, empirical studies comparing QIOs with fuzzy MCDM methods in procurement remain scarce. What are the empirical performance benchmarks in terms of solution convergence, computational efficiency, and procurement cost optimization, particularly in real-world datasets? Given the limitations of Noisy Intermediate-Scale Quantum (NISQ) computing, how practical is QIO for procurement optimization, and what are the short-term and long-term adoption barriers?
4. Hybrid Fuzzy-Quantum Decision Frameworks
Could a hybrid fuzzy-quantum model bridge the interpretability-efficiency gap by leveraging the strengths of fuzzy logic in linguistic modeling while incorporating the computational advantages of quantum-inspired heuristics? What hybrid methodologies could be explored to enhance procurement decision robustness in dynamic, multi-stakeholder supply chain environments?
Given these open research questions, I invite scholars and practitioners specializing in quantum computing, fuzzy logic, procurement science, uncertainty modeling, and AI-driven supply chain optimization to share insights, empirical findings, and theoretical advancements on:
- The fundamental trade-offs between fuzzy logic-based MCDM and QIO in procurement decision-making.
- Comparative studies or case studies benchmarking these methodologies in real-world supplier selection problems.
- The feasibility of integrating QIOs into procurement decision systems given current technological constraints.
- Hybrid approaches that combine fuzzy MCDM with quantum-inspired heuristics for superior decision-making under uncertainty.
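As a concrete reference point for the fuzzy side of this comparison, here is a minimal sketch, with hypothetical criteria, weights and linguistic ratings, of how triangular fuzzy numbers can turn expert judgments into a crisp supplier ranking; it is a simplified fuzzy weighted-average scheme, not a full Fuzzy TOPSIS or QIO implementation.

```python
# Minimal illustration of fuzzy supplier scoring with triangular fuzzy numbers (TFNs).
# All criteria, weights, and ratings below are hypothetical placeholders.

LINGUISTIC = {            # linguistic term -> TFN (low, mode, high) on a 0-10 scale
    "poor":      (0, 1, 3),
    "fair":      (2, 4, 6),
    "good":      (5, 7, 9),
    "excellent": (7, 9, 10),
}

def tfn_scale(tfn, w):
    """Scale a TFN by a crisp weight."""
    return tuple(w * x for x in tfn)

def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(tfn):
    """Centroid (mean of the three vertices) of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Hypothetical criteria weights (sum to 1) and expert ratings per supplier.
weights = {"cost": 0.4, "quality": 0.35, "delivery": 0.25}
ratings = {
    "Supplier A": {"cost": "good", "quality": "excellent", "delivery": "fair"},
    "Supplier B": {"cost": "excellent", "quality": "fair", "delivery": "good"},
    "Supplier C": {"cost": "fair", "quality": "good", "delivery": "excellent"},
}

scores = {}
for supplier, marks in ratings.items():
    total = (0.0, 0.0, 0.0)
    for criterion, term in marks.items():
        total = tfn_add(total, tfn_scale(LINGUISTIC[term], weights[criterion]))
    scores[supplier] = defuzzify(total)

for supplier, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{supplier}: {score:.2f}")
```

A quantum-inspired or hybrid method would replace this exhaustive aggregation with a heuristic search over a much larger combinatorial space, which is exactly where the interpretability versus efficiency trade-off in point 1 arises.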
Climate Extremes
Even in the absence of anthropogenic changes in climate, a wide variety of natural weather and climate extremes would still occur, since many extremes remain the result of natural climate variability, and natural decadal or multi-decadal variations provide the backdrop for anthropogenic climate change. On what physical basis, then, do changes in the frequency, intensity, spatial extent, duration and timing of weather and climate extremes, and in turn unprecedented extremes, result only from 'anthropogenic changes in climate'?
And are we scientifically 'clear and distinct' about the exact factors that influence the observed, and in turn the projected, changes in climate, if changes in the extremes of a climate or weather variable are not always related in a simple way to changes in the mean of the same variable (and given the uncertainty in the natural variability of climate, the uncertainties in climate model parameters and structure, and the uncertainty in projections of future emissions)?
Suresh Kumar Govindarajan, Professor [HAG]
IIT Madras, 24-Dec-2024
I recently added an AI-generated summary to my article "Intra-industry diversification effects under firm-specific contingencies on the demand side" (2020, Florian Smeritschnig, Jakob Muellner, Phillip C. Nell, Martin Weiss) by including it as a supplementary resource. I did this to give those who do not have open access to the full text a brief understanding of the main points of my article.
The problem is that the title is still shown on the article's subpage under "Linked data", but there is actually no file available and no button to delete it. Could you please delete the title?
The second question is whether it is possible to upload an additional file (e.g. the AI-generated summary of the article, but not the public or private full text) to the same article page, without it appearing as a completely separate entry in the list of my other research items? For example, I have the paper "Electoral uncertainty and the multinational corporation: a conceptualization, firm-level effects and strategies" (2024) by Puck, Muellner & Reinprecht. I added an AI-generated summary to this publication as a supplementary resource, and this summary now appears among my other research items, alongside the original article itself. I don't want to have two entries with the same title, as it creates unnecessary confusion, but I do want the AI-generated summary to appear only within the research item's entry. Is there a way to do this?
Best,
Jakob
How can large-scale phylogenomic datasets be improved to reduce uncertainty in tree resolution?
Gold prices declined in November 2024 due to several economic and geopolitical factors. A significant driver was the strength of the U.S. dollar, which made gold more expensive for international investors holding other currencies. Additionally, investors anticipated a potential interest rate cut by the Federal Reserve, which created a sense of uncertainty in the gold market.
This trend was also influenced by caution among investors ahead of the U.S. presidential elections, as some temporarily shifted away from gold as a safe-haven asset. Moreover, high U.S. bond yields continued to add pressure on gold prices by providing investors with alternative returns.
In summary, the primary reasons behind gold's price decline in November included the strong dollar, possible changes in interest rates, and shifts in investor sentiment amidst global economic stability.
Aviation MRO handles both planned and unscheduled aircraft maintenance. An independent MRO often relies on information coming from its clients or customers, which consists of maintenance requests, parts demands, etc. The MRO needs to ensure that aircraft spare parts are always available whenever needed, but stock levels still need to be evaluated to avoid any understock or overstock inventory situation. Forecasting spare-parts demand is the common method used; however, there is still room for improvement due to several constraints, such as limited access to data, collaboration complexity, and demand uncertainty.
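For intermittent spare-parts demand of the kind described above, a commonly cited baseline is Croston's method, which smooths non-zero demand sizes and inter-demand intervals separately. The sketch below is a minimal illustration with an assumed smoothing parameter and an invented demand history.

```python
def croston_forecast(demand, alpha=0.1):
    """Croston's method for intermittent demand.

    demand: list of per-period demand quantities (many zeros expected).
    Returns the one-step-ahead demand-rate forecast after the last period.
    """
    z = None      # smoothed demand size
    p = None      # smoothed inter-demand interval
    q = 1         # periods since the last non-zero demand
    for d in demand:
        if d > 0:
            if z is None:          # initialise on the first non-zero demand
                z, p = d, q
            else:
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return None if z is None else z / p

# Hypothetical monthly demand for one rotable part
history = [0, 0, 3, 0, 0, 0, 2, 0, 1, 0, 0, 4]
print(f"Forecast demand per period: {croston_forecast(history):.3f}")
```

More elaborate approaches (bootstrapping, pooling across operators) address the data-access and collaboration constraints mentioned above, but separating demand size from demand timing is usually the starting point.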
In a paper I read:
"In addition to the point estimate, a confidence interval is commonly used to account for the uncertainty in the ML estimator."
Would the scholars on ResearchGate agree with or comment on that?
Thank you
MS
What personal psychological characteristics does intolerance of uncertainty affect?
An analysis of strategies in which intolerance of uncertainty is beneficial for the individual would also be welcome.
I appreciate any contributions and ideas for my study.
Thanks for your answers.
Dear all,
When modeling uncertainties that affect measurements (e.g., a continuous variable), how can we also account for the uncertainty associated with the Gold Standard (e.g., reference measurements designed by experts) for the item being measured?
Thank you so much for your help and for pointing me to some reference papers/books.
Cordially,
Hubert
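One way to let the reference carry its own error, rather than treating the Gold Standard as exact, is an errors-in-variables fit such as Deming regression, which only needs the ratio of the two error variances. The following is a minimal simulated sketch; the noise levels, slope and bias are assumed values, not taken from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 'true' continuous quantity, a noisy gold-standard reference, and a noisy device.
true = rng.uniform(10, 100, size=200)
sigma_ref, sigma_dev = 3.0, 5.0          # assumed measurement standard deviations
reference = true + rng.normal(0, sigma_ref, true.size)
device    = 1.05 * true + 2.0 + rng.normal(0, sigma_dev, true.size)

# Ordinary least squares (treats the reference as error-free -> attenuated slope).
ols_slope = np.cov(reference, device, ddof=1)[0, 1] / np.var(reference, ddof=1)

# Deming regression with known error-variance ratio lam = var(device err) / var(reference err).
lam = (sigma_dev / sigma_ref) ** 2
sxx = np.var(reference, ddof=1)
syy = np.var(device, ddof=1)
sxy = np.cov(reference, device, ddof=1)[0, 1]
deming_slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)

print(f"OLS slope:    {ols_slope:.3f}  (biased toward zero)")
print(f"Deming slope: {deming_slope:.3f} (accounts for reference error)")
```

When the reference uncertainty differs item by item, a Bayesian measurement-error model with one latent true value per item is the natural generalization of the same idea.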
The determination of our destiny depends on the decisions we make at each moment, which in a sense are conditioned by the Uncertainty Principle.
So... the determination of our destiny depends on the decisions we make at any given moment, which in a certain sense are conditioned by the Uncertainty Principle because God does love to play dice.
In order to determine the state of our situation we must take into account that we have many alternatives which creates several possibilities for our future precisely because of that Uncertainty Principle.
But the fact of the many possibilities should not lead us to think of “other universes” but of the alternatives we have at every moment to create or destroy...our own universe.
This is one of the reasons we have as mankind to make a real paradigm shift in all fields of our existence.
Edgar
In general, the greater the environmental uncertainty, the more attention management in organizations must direct towards the external environment.
How does uncertainty differ from model performance evaluation using statistical metrics such as MSE, RMSE, MAE, MAPE, and R² in the field of real estate appraisal?
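The distinction can be made concrete: MSE, RMSE, MAE, MAPE and R² summarize the quality of point predictions over a test set, whereas an uncertainty estimate attaches an interval (or a predictive distribution) to each individual appraisal. A minimal sketch with simulated data follows; the price model and noise level are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(1)

# Simulated appraisal data: price driven by floor area plus noise (hypothetical).
area = rng.uniform(40, 200, 500)
price = 2000 * area + rng.normal(0, 30000, area.size)
X, y = area.reshape(-1, 1), price
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Point model -> accuracy metrics (one number per model).
point = GradientBoostingRegressor().fit(X_tr, y_tr)
pred = point.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_te, pred))
print("R2  :", r2_score(y_te, pred))

# Quantile models -> an uncertainty interval per property (many numbers per model).
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X_tr, y_tr)
coverage = np.mean((lo.predict(X_te) <= y_te) & (y_te <= hi.predict(X_te)))
print("Empirical coverage of the 90% intervals:", round(coverage, 3))
```

A model can score well on RMSE while producing badly calibrated intervals, and vice versa, which is why the two kinds of evaluation complement rather than replace each other.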
In this rapidly changing world, where cause and consequence are intertwined through complex psychological layers, effective decision-making becomes crucial. The fear of career uncertainties, lack of intellectual confidence, or facing immense barriers affects educational, academic, or professional choices. When certain principles are applied, crucial decisions may result in better outcomes. Since decisions are made under the influence of objective and subjective perspectives, their sufficiency and reliability can greatly vary. There are multiple frameworks used in effective decision-making. One of them is the VIPS framework: Values, Interests, Personality, and Skills. In your opinion, how can this framework be used to foster effective decision-making?
2) How did the universe form?
The universe, at its most fundamental level, appears to operate according to the principles of quantum mechanics, where uncertainty and indeterminacy play key roles in shaping its evolution. In classical computational theory, Turing’s Halting Problem demonstrates that it is impossible to predict whether a system will reach a final state or run indefinitely. This raises profound questions about the nature of the universe: could it, too, one day halt, reaching a state where no further evolution is possible? However, the inherent unpredictability of quantum mechanics—through phenomena like superposition, quantum fluctuations, and entanglement—may offer a safeguard against such a scenario. This paper explores the intersection of quantum mechanics and the Halting Problem, suggesting that quantum uncertainty prevents the universe from settling into a static, final state. By continuously introducing randomness and variation into the fabric of reality, quantum processes ensure the universe remains in perpetual motion, avoiding a halting condition. We will examine the scientific and philosophical implications of this theory and its potential to reshape our understanding of cosmology.
How do new uncertainty relations impact our understanding of quantum mechanics?
Paradox 1 - The Laws of Physics Invalidate Themselves, When They Enter the Singularity Controlled by Themselves.
Paradox 2 - The Collapse of Matter Caused by the Law of Gravity Will Eventually Destroy the Law of Gravity.
The laws of physics dominate the structure and behavior of matter. Different levels of material structure correspond to different laws of physics. According to reductionism, when we require the structure of matter to be reduced, the corresponding laws of physics are also reduced. Different levels of physical laws correspond to different physical equations, many of which have singularities. Higher-level equations may enter singularities when forced by strong external conditions such as pressure and temperature, resulting in phase transitions in which, for example, lattice and magnetic properties are destroyed. Essentially, the higher-level physics equations have failed and the system has passed into the domain of the lower-level physics equations. Obviously there should exist a lowest-level physics equation which cannot be reduced further; it would be the last line of defense after all the higher-level equations have failed, and it is not allowed to enter the singularity. This equation is the ultimate equation. The equation corresponding to the Hawking-Penrose spacetime singularity [1] should be such an equation.
We can think of the physical equations as a description of a dynamical system because they are all direct or indirect expressions of energy-momentum quantities, and we have no evidence that it is possible to completely detach any physical parameter, macroscopic or microscopic, from the Lagrangian and Hamiltonian.
Gravitational collapse causes black holes, which have singularities [2]. What characterizes a singularity? Any finite parameter before entering a spacetime singularity becomes infinite after entering the singularity. Information becomes infinite, energy-momentum becomes infinite, but all material properties disappear completely. A dynamical equation transitioning from finite to infinite is impossible, because there is no infinite source of dynamics, and the Uncertainty Principle would also prevent this singularity from being achieved*. Therefore, while there must be a singularity according to the Singularity Principle, this singularity must be inaccessible, or will never be entered. Before entering this singularity, a sufficiently long period of time must have elapsed, waiting for the conditions that would destroy it, such as the collision of two black holes.
"Most of these singularities, however, can usually be resolved by pointing out that the equations are missing some factor, or noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not 'real'." [3] We believe this statement is correct. Nature will not destroy by itself the causality it has established.
-----------------------------------------------
Notes
* According to the uncertainty principle, finite energy and momentum cannot be concentrated at a single point in space-time.
-----------------------------------------------
References
[1] Hawking, S. (1966). "Singularities and the geometry of spacetime." The European Physical Journal H 39(4): 413-503.
[2] Hawking, S. W. and R. Penrose (1970). "The singularities of gravitational collapse and cosmology." Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 314(1519): 529-548.
==================================================
Addendum, 2023-1-14
Structural Logic Paradox
Russell once wrote a letter to Ludwig Wittgenstein while visiting China (1920 - 1921) in which he said "I am living in a Chinese house built around a courtyard *......" [1]. The phrase would probably mean to the West, "I live in a house built around the back of a yard." Russell was a logician, but there is clearly a logical problem with this expression, since the yard is determined by the house built, not vice versa. The same expression is reflected in a very famous poem, "A Moonlit Night On The Spring River", from the Tang Dynasty (618 - 907 AD) in China. One of the lines is: "We do not know tonight for whom she sheds her ray, But hear the river say to its water adieu." The problem here is that the river exists because of the water, and without the water there would be no river. Therefore, there would be no logic of the river saying goodbye to its water. There are, I believe, many more examples of this kind, and perhaps we can reduce these problems to a structural logic paradox †.
Ignoring the above logical problems will not have any effect on literature, but it should become a serious issue in physics. The biggest obstacle in current physics is that we do not know the structure of elementary particles and black holes. Renormalization is an effective technique, but it offers an alternative result that masks the internal structure and can only be considered a stopgap tool. Hawking and Penrose proved the Singularity Theorem, but no clear view has been developed on how to treat singularities. It seems to us that this scenario is the same problem as the structural logic described above. Without black holes (and perhaps elementary particles) there would be no singularities, and (virtual) singularities accompany black holes. Since where there is a black hole there is a singularity, how can a black hole that does not collapse today because of the singularity collapse tomorrow because of that same singularity? Do yards make houses disappear? Does a river make water disappear? This is the realistic explanation of the "paradox" in the subtitle of this question. The laws of physics do not destroy themselves.
-------------------------------------------------
Notes
* One of the typical architectural patterns in Beijing, China, is the "quadrangle", which is usually a square open space with houses built along the perimeter, and when the houses are built, a courtyard is formed in the center. Thus, before the houses were built, it was the field, not the courtyard. The yard must have been formed after the house was built, even though that center open space did not substantially change before or after the building, but the concept changed.
† I hope some logician or philosopher will point out the impropriety.
-------------------------------------------------
References
[1] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese version @2011)
In discrete systems, such as Markov chains, Shannon entropy can be used to explain the uncertainty and complexity of the system. In continuous systems, such as pure jump Markov processes, does the corresponding differential entropy have a clear physical meaning? If so, how can differential entropy be used to interpret continuous systems?
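For the discrete case, the uncertainty of a stationary Markov chain is often summarized by its entropy rate, H = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary distribution. A small sketch with an arbitrary, assumed two-state transition matrix:

```python
import numpy as np

# Hypothetical two-state transition matrix P[i, j] = P(next = j | current = i).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Stationary distribution: left eigenvector of P with eigenvalue 1, normalised.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Entropy rate in bits per step: H = -sum_i pi_i sum_j P_ij log2 P_ij
H = -np.sum(pi[:, None] * P * np.log2(P))
print(f"stationary distribution: {pi}, entropy rate: {H:.4f} bits/step")
```

For continuous-time pure jump processes, one way to think about it is that the uncertainty splits into a discrete part (which state is jumped to) and differential entropies of the holding times; the differential entropy of an Exp(λ) holding time is 1 - ln λ nats and can be negative, so it is meaningful only relative to a reference measure, not as an absolute "amount of information".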
Forecasting inherently involves uncertainty, which arises from various sources such as model limitations, data inaccuracies, and unpredictable environmental factors. My question, "Can we forecast uncertainty in predictions?" seeks to explore whether it is possible to quantify and anticipate the degree of uncertainty within a forecast.
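Yes, at least in the operational sense that one can forecast an interval (or a full predictive distribution) alongside the point forecast. The sketch below builds residual-based one-step prediction intervals around a simple AR(1) fit on simulated data; the series and model are assumed for illustration, and conformal or fully Bayesian approaches are the more rigorous versions of the same idea.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated AR(1) series (hypothetical data).
n, phi = 300, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal(0, 1.0)

train, test = y[:250], y[250:]

# Fit AR(1) by least squares on the training part.
phi_hat = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])
resid = train[1:] - phi_hat * train[:-1]

# One-step-ahead forecasts on the test part with an empirical 90% interval
# taken from the training residual quantiles (the 'forecast of uncertainty').
q_lo, q_hi = np.quantile(resid, [0.05, 0.95])
prev = np.concatenate(([train[-1]], test[:-1]))
point = phi_hat * prev
covered = np.mean((point + q_lo <= test) & (test <= point + q_hi))
print(f"phi_hat = {phi_hat:.3f}, empirical coverage of 90% intervals = {covered:.3f}")
```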
In QM we have learned the famous Uncertainty Principle, which is probably the most important result in this branch.
We have also learned that space and time stay together in GR.
The measurement problem in QM comes from the Uncertainty Principle and vice versa; why is it not present in GR, not necessarily in the same form but in some analogous form?
Thanks
I am wondering if anyone has come across calibration curve and uncertainty for PM2.5, SO2, O3, and NO2 monitors maintained and operated by the CPCB?
Please share link herein for the same if you are aware.
Best regards,
Prashant
I'm looking for a way to measure the uncertainty (standard deviation) on the quantification (area) of the components of an XPS spectrum using CasaXPS.
I found these options in the software, but they don't satisfy me:
1) From the "Quantify (F7)" window, in the "Regions" tab, clicking on "Calculate Error Bars" but it is independent of the fit and changes with each click.
2) From the "Quantify (F7)" window, in the "Components" tab, by clicking on "Monte Carlo", I obtain only the relative value and not the absolute one. But above all, the values do not follow the goodness of the fit: even with components that are not fitted and clearly incorrect, the value is low.
As I have not found these methods to be reliable, my idea is to use the RMS as an estimation of the error on the sum of the areas of the components and then obtain the percentage error of the individual components.
My scope is to provide the composition of my sample, and also the percentages of the various components, for each element, all with their measurement error.
Does anyone know if there is a more automatic and reliable method?
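Outside CasaXPS, one generic and fairly transparent alternative is to propagate the fit itself: perturb the spectrum with noise at the level of the fit residuals, refit, and take the spread of the refitted areas as the uncertainty, so that a poor fit automatically yields large error bars. The sketch below uses a synthetic two-component spectrum with Gaussian line shapes, assumed purely for illustration (real XPS work would use Voigt or Gaussian-Lorentzian shapes and a proper background).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian components (stand-ins for fitted XPS peaks)."""
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

# Synthetic 'measured' spectrum: two overlapping components plus noise.
x = np.linspace(280, 295, 300)
y = two_gaussians(x, 1000, 284.8, 0.7, 400, 288.5, 0.9) + rng.normal(0, 20, x.size)

popt, _ = curve_fit(two_gaussians, x, y, p0=(900, 285.0, 0.8, 300, 288.0, 1.0))
noise_sd = np.std(y - two_gaussians(x, *popt))   # RMS of the fit residuals

# Monte Carlo: refit spectra perturbed at the residual noise level, collect component areas.
areas = []
for _ in range(200):
    y_mc = two_gaussians(x, *popt) + rng.normal(0, noise_sd, x.size)
    p_mc, _ = curve_fit(two_gaussians, x, y_mc, p0=popt)
    a1, _c1, w1, a2, _c2, w2 = p_mc
    areas.append((abs(a1 * w1) * np.sqrt(2 * np.pi), abs(a2 * w2) * np.sqrt(2 * np.pi)))

areas = np.array(areas)
for i, (mean, sd) in enumerate(zip(areas.mean(axis=0), areas.std(axis=0)), start=1):
    print(f"component {i}: area = {mean:.0f} +/- {sd:.0f}")
```

Because the perturbation level is tied to the fit residuals, a poor fit yields larger error bars, which addresses the objection to the built-in Monte Carlo option.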
Thus explaining free will, the self and the law of identity.
What is the approach to determining the uncertainty in energy and inflation time versus inflation pressure for a tyre pressure control unit?
How do you measure the uncertainty of the equipment adopted in the regression models developed to measure the energy and inflation time of a tyre pressure control unit?
How do I ascertain the measurement uncertainty?
[Figure: Energy vs. inflation pressure for three radii of the tyre]
In particle accelerators, various subatomic particles are collided at exact points with predetermined momentum. The collision must occur at an exact point; otherwise, detectors cannot register any resulting collision products.
In this case, both the position and momentum of the particles are almost precisely known. This raises questions about the practicality and credibility of the Heisenberg Uncertainty Principle (HUP). Your informed comments would be highly appreciated.
For more critical analysis of the HUP, please see:
The origin of Heisenberg's uncertainty principle can be better understood through the lens of complex vector spaces. In my paper, I explore how representing complementary variables as complex numbers provides a deeper insight into quantum mechanics.
- Position and Momentum: By representing position (x) and momentum (p) as complex variables, the uncertainty principle is expressed as the product of their uncertainties having a lower bound related to Planck's constant. This formulation highlights the intrinsic uncertainties and the probabilistic nature of measurements in quantum mechanics.
- Energy and Time: Similarly, energy and time uncertainties are expressed in complex terms, showing the internal vibrations of particles and their states in a complex vector space. This provides a more comprehensive understanding of quantum uncertainties.
- Physical Origin of Uncertainty: The physical origin of Heisenberg's uncertainty principle is attributed to the vibrations and interactions of particles in the complex plane. This complex representation provides insight into why there is a lower limit to the precision with which complementary variables can be measured simultaneously.
These points illustrate how the use of complex numbers in quantum mechanics aligns with the holographic principle and offers a unified framework for understanding quantum phenomena. For a detailed exploration of these ideas, you can refer to my paper available on ResearchGate.
I am exploring the relationship between the uncertainty in the coordinates of the center of mass of a rigid body and the uncertainty of the corresponding elements of the inertia tensor. Any idea that would be applicable in a more or less general way? Of course, a Monte Carlo simulation would be useful and I am planning to do it, but I am thinking more of an analytical relationship. My idea is not having to perform the calculations starting from the mass distribution but from the center of mass position (i.e., given a center of mass displacement from the most expected value, the inertia tensor changes in this or that way).
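One analytical starting point, offered as a sketch rather than a general answer: if the mass distribution is held fixed and only the reference point is displaced from the true centre of mass by a small vector d, the parallel-axis theorem gives the exact change ΔI = M(|d|² E − d dᵀ), which is quadratic in d, so a first-order propagation of the centre-of-mass uncertainty alone contributes nothing. A quick numerical check on an invented point-mass body:

```python
import numpy as np

rng = np.random.default_rng(4)

def inertia_tensor(masses, positions, about):
    """Inertia tensor of point masses about the point 'about'."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        d = r - about
        I += m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
    return I

# Hypothetical rigid body made of point masses.
masses = rng.uniform(0.5, 2.0, 20)
positions = rng.uniform(-1.0, 1.0, (20, 3))
M = masses.sum()
com = (masses[:, None] * positions).sum(axis=0) / M

# Displace the assumed centre of mass by a small error vector delta.
delta = np.array([0.01, -0.02, 0.015])
I_true = inertia_tensor(masses, positions, com)
I_shifted = inertia_tensor(masses, positions, com + delta)

# Parallel-axis prediction of the change, without touching the mass distribution.
dI_analytic = M * (np.dot(delta, delta) * np.eye(3) - np.outer(delta, delta))
print("max |(I_shifted - I_true) - dI_analytic| =",
      np.abs((I_shifted - I_true) - dI_analytic).max())
```

The implication is that the leading sensitivity of the tensor elements comes from the uncertainty in the mass distribution itself, which is where the planned Monte Carlo is still needed.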
For Example Climate Sensitivity
Climate sensitivity refers to the change in radiative forcing, or surface air temperature, resulting from a doubling of atmospheric CO2 concentration. Estimates of climate sensitivity range from about 2°C to 5°C per doubling of CO2. This range exists due to uncertainties in atmospheric physics, particularly feedback mechanisms between various atmospheric states and surface air temperature or CO2 concentration. For instance, the impact of changes in radiative forcing on cloud processes is not well understood, leaving uncertainty about whether clouds will mitigate or amplify warming.
Below is a list of the main feedbacks in the climate system:
- Water vapor feedback
- Lapse rate feedback
- Ice albedo feedback
- Carbon cycle feedback
Climate scientists dedicate their careers to refining our understanding of these feedbacks. They use paleoclimate data, model experiments, and modern measurements to address these uncertainties. However, uncertainties remain in each feedback, and when combined, they create a complex, high-dimensional problem. Depending on the true magnitude of these feedbacks, climate sensitivity could range from 2°C to 5°C.
I am looking for other aspects, and for the most relevant related questions arising among different communities.
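As one concrete illustration of how the individual feedback uncertainties combine into the quoted 2-5°C range, here is a toy Monte Carlo sketch; the forcing and feedback numbers are rough, assumed values chosen only to show the mechanics, not an assessment, and the carbon cycle feedback is left out.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Rough, assumed values for illustration only (feedbacks in W m^-2 K^-1, forcing in W m^-2).
F2x     = rng.normal(3.9, 0.3, n)        # forcing from CO2 doubling
planck  = np.full(n, -3.2)               # Planck response
wv_lr   = rng.normal(1.1, 0.3, n)        # combined water-vapour + lapse-rate feedback
albedo  = rng.normal(0.35, 0.15, n)      # surface (ice) albedo feedback
cloud   = rng.normal(0.45, 0.35, n)      # cloud feedback (widest spread)

lam = planck + wv_lr + albedo + cloud    # net feedback parameter (must stay negative)
ok = lam < -0.1                          # discard unphysical (runaway) samples
ecs = -F2x[ok] / lam[ok]                 # equilibrium climate sensitivity (K)

print("ECS percentiles (K):",
      np.round(np.percentile(ecs, [5, 50, 95]), 2))
```

Even with symmetric feedback uncertainties, the division by the net feedback makes the sensitivity distribution skewed toward high values, which is one reason the upper end of the range is so hard to pin down.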
Hydrologic studies on different basins, while focused on distinct geographic areas, share several key similarities in terms of methodology, objectives, and challenges. These similarities can be categorized as follows:
1. Core Objectives
2. Data Collection and Analysis
3. Methodological Approaches
4. Challenges and Uncertainties
5. Integrated Management Approaches
6. Applications and Outcomes
By focusing on these common aspects, hydrologic studies can effectively address the unique characteristics and challenges of different basins while leveraging shared methodologies and goals to advance the understanding and management of water resources globally.
What is your opinion ?
How and to what extent can Total Cost Management evolve towards Systemic Value Management (SVM)?
In this regard, what are the “Validating Principles and Actions for a Global Practitioner Movement”?
The persistence of structural crises and the widespread uncertainty and volatility of any forecast of times, costs and resources for complex projects have nowadays become a challenge for managerial disciplines and for Total Cost Management in particular. The value, policy and social aspects seem to be taking on ever greater importance for each project, transforming it into a very complex project. The AICE (the Italian Association for Total Cost Management) is engaged in research on this topic (see for example https://www.mdpi.com/2071-1050/14/19/12890 ) and asks experts in these disciplines to participate in an ongoing survey.
Can you make your contribution by participating in this survey and expressing your point of view?
You will find every detail at the following link https://forms.gle/PEpZ8YTn8yTdSg5z6
In my thesis on reverse logistics, I explore the use of the ANP-TOPSIS hybrid method in handling uncertainty. I'm interested in hearing from researchers about the effectiveness of 'Rough Set,' 'Neutrosophic,' or 'Fuzzy' theories in addressing uncertainty within supplier selection processes. Any insights on these approaches would be greatly appreciated!
In the context of machine learning models for healthcare that predominantly handle discrete data and require high interpretability and simplicity, which approach offers more advantages:
Rough Set Theory or Neutrosophic Logic?
I invite experts to share their insights or experiences regarding the effectiveness, challenges, and suitability of these methodologies in managing uncertainties within health applications.
I encountered challenges while conducting GSEM due to uncertainties in running the analysis with solely observed variables. This was particularly prominent when dealing with categorical variables such as x, y, and z, where all of them served as dependent variables.
I've fitted a latent growth mixture model to time series data. It consists of a value (population prevalence) at 11 time points for a sample of 150 areas. Said model was fitted using the lcmm package in R and identified a two class model as optimal - reflecting an hlme model as follows:
gmm2 <- gridsearch(rep = 1500, maxiter = 50, minit = gmm1, hlme(Value ~ jrtime, subject = "ID", random=~jrtime, ng = 2, data = rec, mixture = ~ jrtime, nwg=T))
Said model uses the "Value"/prevalence data for each area as the primary variable. However, the original "Value" column within the data also relates to 95% confidence intervals (reflecting different sample sizes which contributed to the observations at each time point and for each area). Under the current approach all observations of "Value" are treated equally. Should I (and is there a good method through which to) account for the differing levels of uncertainty in "Value" as part of my lcmm?
I wondered if such could be coded into the package, but this does not appear to be the case. Therefore I wondered whether a manual account could be taken (such as adding the CI range as a covariate)? However, I also wondered whether there was scope to add such as part of the prior setting if applying an alternative Bayesian approach.
Any advice/links to relevant literature (or shareable code) would be hugely appreciated.
The three spatial dimensions (x, y and z) of spacetime can be physically demonstrated. However, what about the time dimension? Mathematically, time is a fundamental part of spacetime.
ds^2 = c^2dt^2 − dx^2 − dy^2 − dz^2
However, physically where does the time dimension reside? The physical constants (c, G and ħ) all incorporate time. This is not just time in the abstract. Gravity and relative motion affect the local rate of time. On an absolute scale, these constants (c, G and ħ) require coordination with the local rate of time. How do you visualize the time component of spacetime?
This is a discussion question, so I will give my answer to start the discussion. John Archibald Wheeler proposed that the uncertainty principle and vacuum zero-point energy require that, on the scale of the Planck length, spacetime must be oscillating at the Planck frequency. He designated this "quantum foam". This oscillation (an internal clock) would give every volume of spacetime its intrinsic time dimension. This oscillating spacetime model of the universe ultimately generates a fundamental particle's gravity and electrical charge.
Can Physical Constants Which Are Obtained with Combinations of Fundamental Physical Constants Have a More Fundamental Nature?
Planck Scales (Planck's 'units of measurement') are different combinations of the three physical constants h, c, G, Planck Scales=f(c,h,G):
Planck Time: tp = √(ℏG/c^5) = 5.39x10^-44 s ......(1)
Planck Length: Lp = √(ℏG/c^3) = 1.62x10^-35 m ......(2)
Planck Mass: Mp = √(ℏc/G) = 2.18x10^-8 kg ......(3)
“These quantities will retain their natural meaning for as long as the laws of gravity, the propagation of light in vacuum and the two principles of the theory of heat hold, and, even if measured by different intelligences and using different methods, must always remain the same.”[1] And because of the possible relation between Mp and the radius of the Schwarzschild black hole, the possible generalized uncertainty principle [2], makes them a dependent basis for new physics [3]. But what exactly is their natural meaning?
However, the physical constants, the speed of light, c, the Planck constant, h, and the gravitational constant, G, are clear, fundamental, and invariant.
c: bounds the relationship between Space and Time, with c = ΔL/ Δt, and Lorentz invariance [4];
h: bounds the relationship between Energy and Momentum with h=E/ν = Pλ, and energy-momentum conservation [5][6];
G: bounds the relationship between Space-Time and Energy-Momentum, with the Einstein field equation c^4* Gμν = (8πG) * Tμν, and general covariance [7].
The physical constants c, h, G already determine all fundamental physical phenomena‡. So, can the Planck Scales obtained by combining them be even more fundamental than they are? Could it be that the essence of physics is (c, h, G) = f(tp, Lp, Mp), rather than equations (1), (2), (3)? From what physical fact, or what physical imagination, are we supposed to get this notion? Having never seen such an argument, we just take the Planck Scales and use them, while still recognizing the fundamentality of c, h, G. Obviously, the Planck Scales are not fundamental physical constants; they can only be regarded as a kind of 'units of measurement'.
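For reference, equations (1)-(3) can be inverted exactly, so the two parameterizations carry the same information; the question above is therefore about interpretation, not algebra:

```latex
c \;=\; \frac{L_p}{t_p},
\qquad
\hbar \;=\; \frac{M_p\,L_p^{2}}{t_p},
\qquad
G \;=\; \frac{L_p^{3}}{M_p\,t_p^{2}}.
```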
So are they a kind of parameter? According to Eqs. (1)(2)(3) they can be directly replaced by c, h, G, and the substituted expression then loses its meaning.
So are they a principle? Then what do they express? What kind of behavioral pattern do they express? Quantum gravity theories take them as a "baseline" only in the order-of-magnitude sense, not in exact numerical value.
Thus, do the Planck time, length, and mass, determined entirely by h, c, G, really have unquestionable physical significance?
-----------------------------------------
Notes
‡ Please ignore for the moment the phenomena within the nucleus of the atom, eventually we will understand that they are still determined by these three constants.
-----------------------------------------
References
[1] Robotti, N. and M. Badino (2001). "Max Planck and the 'Constants of Nature'." Annals of Science 58(2): 137-162.
[2] Maggiore, M. (1993). A generalized uncertainty principle in quantum gravity. Physics Letters B, 304(1), 65-69. https://doi.org/https://doi.org/10.1016/0370-2693(93)91401-8
[3] Kiefer, C. (2006). Quantum gravity: general introduction and recent developments. Annalen der Physik, 518(1-2), 129-148.
[4] Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, 17(10), 891-921.
[5] Planck, M. (1900). The theory of heat radiation (1914 (Translation) ed., Vol. 144).
[6] Einstein, A. (1917). Physikalische Zeitschrift, xviii, p. 121.
[7] Petruzziello, L. (2020). A dissertation on General Covariance and its application in particle physics. Journal of Physics: Conference Series,
What is the TRMM satellite precipitation program? And how can it help humans?
As you know:
GPM can provide worldwide rain and snow data at any time using microwave and infrared technology. The TRMM sensor package has been expanded with GPM, which improves the ability to observe precipitation. The GPM Core Observatory adds a dual-frequency radar (Ku and Ka bands) compared to TRMM, together with additional high-frequency channels on the microwave radiometer, which increases the ability to observe light and solid precipitation. As a result, the GPM mission can provide more detailed products, and monthly in-situ gauge data are used in the final product. GPM provides very accurate and detailed rainfall measurements, for example across India, and GPM satellite data enable researchers to study various hydrological applications such as climate research, drought monitoring, flood forecasting, agricultural planning, etc. Uncertainty in satellite precipitation data is caused by several factors, including the spatial and temporal scales of the study; key reported factors such as instrumental uncertainty, sampling uncertainty, retrieval algorithm uncertainty, regional and topographic effects, and ancillary data all need attention.
I am keen to share knowledge and experience, as I am working on my thesis about drawing road maps for new shipyards to take systematic steps toward lean manufacturing processes. However, I am talking about ship repair or rig repair, not newbuilding, where the difference is huge with respect to uncertainty, scope, and short time frames.
Suppose I am making a solution of a particular concentration by stepwise dilution from a stock solution. If I use the same volumetric flask twice for consecutive dilutions, do I have to include the uncertainty of that volumetric flask two (2) times or only one (1) time in the calculation of the total measurement uncertainty?
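A sketch of one common way to treat this under the GUM approach: split the flask uncertainty into a calibration (systematic) component, which is fully correlated between the two uses of the same flask and therefore adds linearly, and a filling-repeatability component, which is independent each time and adds in quadrature. The numbers below are invented for illustration.

```python
import math

# Hypothetical relative standard uncertainties for one use of the flask (as fractions).
u_cal  = 0.0008   # calibration/tolerance component (same flask -> correlated between uses)
u_fill = 0.0005   # filling repeatability component (independent each use)
u_pip  = 0.0010   # pipette, per dilution step (assumed independent)

n_steps = 2       # the flask is used in two consecutive dilution steps

# Correlated contributions add linearly; independent ones add in quadrature.
u_flask_total = math.sqrt((n_steps * u_cal) ** 2 + n_steps * u_fill ** 2)
u_pipettes    = math.sqrt(n_steps) * u_pip

u_conc_rel = math.sqrt(u_flask_total ** 2 + u_pipettes ** 2)
print(f"relative standard uncertainty of the final concentration: {u_conc_rel:.5f}")
```

So the flask enters the budget once per use in either case; what changes is whether its calibration component is combined linearly (same flask, correlated) or in quadrature (different flasks, independent).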
Unfortunately, I do not have access to any reliable or relatable databases to complete a Life Cycle Assessment of Bitcoin mining, so I have begun completing the task manually. I was wondering if anyone had any guidance on calculating uncertainties or sensitivity analysis to ensure I can consider errors in my study.
Any tips regarding this or conducting a life cycle assessment manually will be greatly appreciated!
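One workable pattern for a manual LCA is to attach a distribution to each uncertain input, propagate them with a Monte Carlo run, and then rank the inputs by how strongly they drive the output (a simple correlation-based sensitivity). The sketch below uses a deliberately oversimplified footprint model with invented parameter ranges, only to show the mechanics; a real study would use its own inventory terms.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# Simplified annual-footprint model with invented ranges (illustrative only):
# emissions [MtCO2e/yr] = network_energy [TWh/yr] * grid_intensity [tCO2e/MWh]
energy_twh     = rng.triangular(80, 120, 180, n)     # assumed network electricity use
grid_intensity = rng.triangular(0.3, 0.48, 0.7, n)   # assumed average grid carbon intensity
emissions_mt   = energy_twh * grid_intensity         # TWh * t/MWh = Mt

print("emissions MtCO2e/yr, 5th/50th/95th percentile:",
      np.round(np.percentile(emissions_mt, [5, 50, 95]), 1))

# Crude sensitivity ranking: rank (Spearman) correlation between each input and the output.
for name, x in [("network energy", energy_twh), ("grid intensity", grid_intensity)]:
    r = np.corrcoef(np.argsort(np.argsort(x)), np.argsort(np.argsort(emissions_mt)))[0, 1]
    print(f"rank correlation with output, {name}: {r:.2f}")
```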
We address the problem of the cosmological constant discrepancy.
To do this, we assume a Single Uncertainty Sphere, meaning that only one exists in the whole Universe; in that way the energy density is reduced by 122 orders of magnitude, making the theoretical value of the cosmological constant coherent with observations.
This hypothesis has a much larger impact, not only on astrophysics but also on how we should perceive the structure of space and how it is constructed.
Feel free to comment, and we encourage discussion about this assumption.
Do you think it is too abstract and crazy?
Please refer to this preprint to see more details:
Suppose L_p is the usual Lebesgue space over (0,1), if you wish. Suppose T_j: L_1 --> L_2 defines a sequence of continuous linear operators. Suppose l_1(L_1) is the Banach space of sequences from L_1 with norm (f_j)_j --> ||f_1|| + ||f_2|| + ... . Suppose L_2(l_inf) is the Banach space of sequences (f_j)_j from L_2 with the norm (f_j)_j --> || sup_j |f_j| ||. Finally, suppose T: l_1(L_1) --> L_2(l_inf) is the linear map defined by
(f_j)_j --> (T_j(f_j))_j.
It seems to me that the fact that T is well-defined, i.e. all outputs are in L_2(l_inf), AND each T_j is continuous implies T is continuous by the closed graph theorem. This is because the candidate limit (f_j)_j, when arguing that T has a closed graph, has to satisfy f_j = T_j(x_j), where (x_j^n)_j converges to (x_j)_j in l_1(L_1).
My uncertainty stems from the following example. Fix T_1 and let T_j = log(j+9) T_1 for j > 2. Since this sequence (T_j)_j is not uniformly L_1 --> L_2 bounded, the corresponding operator T cannot be bounded (continuous). However, the growth of the operator norms is slow enough that for (f_j)_j in l_1(L_1),
|| sup_{j<=N} |T_j(f_j)| || <= ||T_1|| (sum_j (log(j+9))^2 ||f_j||^2)^{1/2}.
I'm just estimating by replacing the maximal function on the left with the square function within the L_2 norm. In other words, since (f_j)_j is in l_1(L_1), the right side of the inequality is finite and independent of N. Does this not imply T is well-defined from l_1(L_1) into L_2(l_inf), thus contradicting the closed graph theorem argument above?
What am I missing? What dumb oversight am I not seeing?
How to Reasonably Weight the Uncertainty of Laser Tracker and the Mean Square Error of Level to Obtain Accurate H(Z)-value?
In an era defined by the digitization of measurement processes and the increasing use of artificial intelligence, do you believe that focusing on the digitization of high-precision measurements through AI approach applications is advantageous? For instance, imagine an AI interpreter for analog instruments using optical vision.
This question arises considering the questionable reliability inherent in AI, based on probabilistic algorithms that can generate precise but not necessarily infallible measurements. Additionally, there is complexity in evaluating uncertainty in automatic measurements, considering environmental factors such as lighting and the quality of the optical viewer that could affect the reliability of results. How can we balance the promise of AI precision with the need for absolute reliability in high-precision metrology, especially concerning traceability to primary standards?
WHAT DOES THE AGNOSTIC STATEMENT, postulated by Dawkins:
„GOD ALMOST CERTAINLY DOES NOT EXIST”,
actually imply?
- Is this statement in line with logic and reason, or does it represent Dawkins' individual view on the existence of God, meant to flatter both believers and non-believers?
- Is this agnostic statement correct just because it has many followers, among whom there are even well-known scientists?
- Anyway, is it appropriate for a scientist like Richard Dawkins, nota bene from OXFORD, the world-renowned university, to declare himself a religious agnostic?
Answering these questions based on argumentum ad rem, you'll come to the right answer.
_________________________________________
* More detailed info in my video lecture on YouTube:
>THE FICTION OF AGNOSTICISM: CANI vs HUXLEY. TIME FOR NEW DEFINITIONS!
> INTERMEDIALISM vs AGNOSTICISM: CANI'S COMPLEX PLANE OF RELIGIOSITY vs DAWKINS’ LINEAR SCALE
How do humans handle the anxiety of uncertainty about entering eternal salvation? How? Why?
For example, there is no doubt that global sea level is rising, and based on the global mean sea level (GMSL) data we can calculate the trend of the GMSL. However, we all know that there must be some interannual/decadal variations in the GMSL, and even aliasing errors in our data. We can get the linear trend of the GMSL time series based on the least-squares method. However, how can we estimate the uncertainty range of this trend? 1) The GMSL time series has autocorrelation; 2) the variations of the GMSL time series are not white noise, and the standard deviation of the GMSL anomalies is not 1.
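One standard, if approximate, recipe that addresses both points is to fit the trend by least squares and then inflate its standard error using the lag-1 autocorrelation of the residuals, via an effective sample size n_eff ≈ n(1 − r1)/(1 + r1); the residual variance enters the formula directly, so it need not equal 1. A minimal sketch on a synthetic series (assumed trend and AR(1) noise, not real GMSL data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly 'GMSL' series: trend + AR(1) noise (values are made up).
n = 360                                  # 30 years of monthly data
t = np.arange(n) / 12.0                  # time in years
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 3.0)
y = 3.3 * t + noise                      # mm, with a 3.3 mm/yr trend

# Least-squares trend and naive standard error.
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
se_naive = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

# Inflate for autocorrelation via the effective sample size (lag-1 AR(1) approximation).
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))

print(f"trend = {beta[1]:.2f} mm/yr, naive SE = {se_naive:.3f}, "
      f"AR(1)-adjusted SE = {se_adj:.3f} (r1 = {r1:.2f})")
```

More careful treatments fit an AR(1) (or higher-order) noise model jointly with the trend, or use block-bootstrap resampling, but the effective-sample-size correction already captures the main effect of autocorrelation.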
Dear colleagues
I just noticed a ResearchGate announcement about the readership of a document I am one of the authors of: "Risk, uncertainty and agricultural development". This book is described as having been edited by Frank S. Conklin, Bruce McCarl, James A. Roumasset, Jean-Marc Boussard and Inderjit Singh. Now, it is true that James A. Roumasset, Jean-Marc Boussard and Inderjit Singh are the editors. But Frank S. Conklin and Bruce McCarl have nothing to do with it (they were not even among the participants of the conference the book is the proceedings of...). Please correct your files!
Hello,
we are doing XRF analysis of PM10 aerosol samples with the PX-375. However, we calibrate the instrument with the UCDavis reference material with PM2.5 elemental loading. Does anybody know if this is fine, or how large the uncertainty is? Our PM2.5 to PM10 ratio varies between 0.5 and 0.9.
Thanks for your help.
Ivonne
Hello,
We need some debate about the role of statistics in scientific confirmation.
Thanks
Dr. F CHELLAI
Hello, I'm seeking clarification on the selection of suitable boundary conditions for simulating shear deformation of a screw dislocation using LAMMPS. In my script, I currently employ the following commands:
```lammps
# zero the forces on the boundary slabs, translate them at +/- strainrate along y, and integrate only the mobile atoms
fix 1 upper setforce 0.0 0.0 0.0
fix 2 lower setforce 0.0 0.0 0.0
fix 3 upper move linear NULL ${strainrate} NULL
fix 4 lower move linear NULL -${strainrate} NULL
fix 5 mobile nve
```
I have several uncertainties:
1. Should I fix all three degrees of freedom in fix 1 & 2 for shear deformation in the Y direction, or are specific degrees of freedom recommended?
2. In fix 3 & 4, should I use NULL or 0.0?
3. Should fix 5 be applied to only middle atoms or to all atoms?
Any insights would be greatly appreciated!

I believe temperature T, pressure P and volume V are all measurable quantities. Entropy S is not measurable. Further, there is an uncertainty relation associated with temperature and energy. That is, temperature can only be measured by bringing a system into contact with another system of known temperature. However, energy can only be measured in an isolated system.
Fear, interest rates and uncertainties on rising and markets under pressure, what next??
There are many kinds of certainty in the world, but there is only one kind of uncertainty.
I: We can think of all mathematical arguments as "causal" arguments, where everything behaves deterministically*. Mathematical causality can be divided into two categories**. The first type, structural causality, is determined by static relations such as logical, geometrical, algebraic, etc. For example, "∵ A>B, B>C; ∴ A>C"; "∵ radius is R; ∴ perimeter = 2πR"; "∵ x^2=1; ∴ x1=1, x2=−1"; ... The second category, behavioral causality, is the process of motion of a system described by differential equations, such as the wave equation ∂^2u/∂t^2 − a^2Δu = 0.
II: In the physical world, physics is mathematics, and defined mathematical relationships determine physical causality. Any "physical process" must be parameterized by time and space, which is the essential difference between physical and mathematical causality. Equations such as Coulomb's law F=q1*q2/r^2 cannot be a description of a microscopic interaction process because they do not contain differential terms. Abstracted "forces" are not fundamental quantities describing the interaction. Equations such as the blackbody radiation law and Ohm's law are statistical laws and do not describe microscopic processes.
The objects analyzed by physics, no matter how microscopic†, are definite systems of energy-momentum, are interactions between systems of energy-momentum, and can be analyzed in terms of energy-momentum. The process of maintaining conservation of energy-momentum is equal to the process of maintaining causality.
III: Mathematically a probabilistic event can be any distribution, depending on the mandatory definitions and derivations. However, there can only be one true probabilistic event in physics that exists theoretically, i.e., an equal probability distribution with complete randomness. If unequal probabilities exist, then we need to ask what causes them. This introduces the problem of causality and negates randomness. Bohr said "The probability function obeys an equation of motion as did the co-ordinates in Newtonian mechanics "[1]. So, Weinberg said of the Copenhagen rules, "The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics" [2].
IV: The wave function in quantum mechanics describes a deterministically evolving energy-momentum system [3]. The behavior of the wave function follows the Hamiltonian principle [4] and is strictly an energy-momentum evolution process***. However, the Copenhagen School interpreted the wave function as "probabilistic" in nature [23]. Bohr rejected Einstein's insistence on causality, replacing the term "causality" with his own invention, "complementarity" [5].
Schrödinger ascribed a reality of the same kind that light waves possessed to the waves that he regarded as the carriers of atomic processes by using the de Broglie procedure; he attempts "to construct wave packets (wave parcels) that have relatively small dimensions in all directions," and which can obviously represent the moving corpuscle directly [4][6].
Born and Heisenberg believe that an exact representation of processes in space and time is quite impossible and that one must then content oneself with presenting the relations between the observed quantities, which can only be interpreted as properties of the motions in the limiting classical cases [6]. Heisenberg, in contrast to Bohr, believed that the wave equation gave a causal, albeit probabilistic description of the free electron in configuration space [1].
The wave function itself is a function of time and space. If the "wave-function collapse" at the time of measurement is a probabilistic, instantaneous evolution [3], requiring neither time (Δt=0) nor a spatial transition, then it conflicts not only with Special Relativity but also with the Uncertainty Principle, because the wave function represents some definite energy and momentum, which would have to appear infinite if required to follow the Uncertainty Principle [7], ΔE*Δt > h and ΔP*Δx > h.
V: We must also be mindful of the amount of information carried by a completely random event. From a quantum measurement point of view, it is infinite, since the true probability event of going from a completely unknown state A before the measurement to a completely determined state B after the measurement occurs completely without any information to base it on‡.
VI: The Uncertainty Principle originated in Heisenberg's analysis of x-ray microscopy [8], and its mathematical derivation comes from the Fourier Transform [8][10]. E and t, P and x, are two pairs of conjugate quantities [11]. While the interpretation of the Uncertainty Principle has long been debated [7][9] ("Either the color of the light is measured precisely or the time of arrival of the light is measured precisely"), and this choice also puzzled Einstein [12], because of its great convenience as an explanatory "tool" physics has extended it to the "generalized uncertainty principle" [13].
Is this tool not misused? Take for example a time-domain pulsed signal of width τ, which has a Stretch (Scaling Theorem) property with the frequency-domain Fourier transform [14], and a bandwidth in the frequency domain B ≈ 1/τ. This is the equivalent of the uncertainty relation¶, where the width in the time domain is inversely proportional to the width in the frequency domain. However, this relation is fixed for a definite pulse object, i.e., both τ and B are constant, and there is no problem of inaccuracy.
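A quick numerical check of the scaling-theorem reading in the preceding paragraph: shrinking a pulse in time widens its spectrum in exact proportion, yet for each fixed pulse both widths, and their product, are perfectly definite numbers. Gaussian pulses and RMS widths are assumed here purely for convenience.

```python
import numpy as np

def rms_width(axis, density):
    """RMS width of a (non-negative) density along the given axis values."""
    p = density / density.sum()
    mean = np.sum(axis * p)
    return np.sqrt(np.sum((axis - mean) ** 2 * p))

n, dt = 4096, 1e-3
t = (np.arange(n) - n // 2) * dt
f = np.fft.fftshift(np.fft.fftfreq(n, dt))

for tau in (0.05, 0.02, 0.01):                    # pulse widths in time (s)
    pulse = np.exp(-0.5 * (t / tau) ** 2)         # Gaussian pulse of width tau
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse)))
    dt_w = rms_width(t, pulse ** 2)               # RMS width in time
    df_w = rms_width(f, spectrum ** 2)            # RMS width in frequency
    print(f"tau = {tau:.3f} s  ->  dt*df = {dt_w * df_w:.4f}")
```

The time-bandwidth product comes out the same for every tau, which is the sense in which the relation constrains the pair of widths without making either of them indefinite.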
In physics, the uncertainty principle is usually explained in terms of single-slit diffraction [15]. Assuming that the width of the single slit is d, the distribution width (range) of the interference fringes can be analyzed when d is different. Describing the relationship between P and d in this way is equivalent to analyzing the forced interaction that occurs between the incident particle and d. The analysis of such experimental results is consistent with the Fourier transform. But for a fixed d, the distribution does not have any uncertainty. This situation is confirmed experimentally, "We are not free to trade off accuracy in the one at the expense of the other."[16].
The usual doubt lies in the diffraction distribution that appears when a single photon or a single electron is diffracted. This does look like a probabilistic event. But the probabilistic interpretation actually negates the Fourier transform process. If we consider a single particle as a wave packet with a phase parameter, and the phase is statistical when it encounters a single slit, then we can explain the "randomness" of the position of a single photon or a single electron on the screen without violating the Fourier transform at any time. This interpretation is similar to de Broglie's interpretation [17], which is in fact equivalent to Bohr's interpretation [18][19]. Considering the causal conflict of the probabilistic interpretation, the phase interpretation is more rational.
VII. The uncertainty principle is a "passive" principle, not an "active" principle. As long as the object is certain, it has a determinate expression. Everything is where it is expected to be, not this time in this place, but next time in another place.
Our problems are:
1) At observable level, energy-momentum conservation (that is, causality) is never broken. So, is it an active norm, or just a phenomenon?
2) Why is there a "probability" in the measurement process (wave packet collapse) [3]?
3) Does the probabilistic interpretation of the wave function conflict with the uncertainty principle? How can this be resolved?
4) Is the Uncertainty Principle indeed uncertain?
------------------------------------------------------------------------------
Notes:
* Determinism here is a narrow sense of determinism, only for localized events. My personal attitude towards determinism in the broad sense (without distinguishing predictability, Fatalism, see [20] for a specialized analysis) is negative. Because, 1) we must note that complete prediction of all states is dependent on complete boundary conditions and initial conditions. Since all things are correlated, as soon as any kind of infinity exists, such as the spacetime scale of the universe, then the possibility of obtaining all boundary conditions is completely lost. 2) The physical equations of the upper levels can collapse by entering a singularity (undergoing a phase transition), which can lead to unpredictability results.
** Personal, non-professional opinion.
*** Energy conservation of independent wave functions is unquestionable, and it is debatable whether the interactions at the time of measurement obey local energy conservation [21].
† This is precisely the meaning of the Planck constant h, the smallest unit of action. h itself is a constant with units of J·s. For the photon, when h is coupled to time (frequency) and space (wavelength), there is energy E = hν, momentum P = h/λ.
‡ Thus, if a theory is to be based on "information", then it must completely reject the probabilistic interpretation of the wave function.
¶ In the field of signal analysis, this is also referred to by some as "The Uncertainty Principle", ΔxΔk=4π [22].
------------------------------------------------------------------------------
References:
[1] Faye, J. (2019). "Copenhagen Interpretation of Quantum Mechanics." The Stanford Encyclopedia of Philosophy from <https://plato.stanford.edu/archives/win2019/entries/qm-copenhagen/>.
[2] Weinberg, S. (2020). Dreams of a Final Theory, Hunan Science and Technology Press.
[3] Bassi, A., K. Lochan, S. Satin, T. P. Singh and H. Ulbricht (2013). "Models of wave-function collapse, underlying theories, and experimental tests." Reviews of Modern Physics 85(2): 471.
[4] Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules." Physical Review 28(6): 1049-1070.
[5] Bohr, N. (1937). "Causality and complementarity." Philosophy of Science 4(3): 289-298.
[6] Born, M. (1926). "Quantum mechanics of collision processes." Uspekhi Fizich.
[7] Busch, P., T. Heinonen and P. Lahti (2007). "Heisenberg's uncertainty principle." Physics Reports 452(6): 155-176.
[8] Heisenberg, W. (1927). "Principle of indeterminacy." Z. Physik 43: 172-198. (The original paper on the "uncertainty principle".)
[9] https://plato.stanford.edu/archives/sum2023/entries/qt-uncertainty/; a more detailed historical introduction to the uncertainty principle, including various representative viewpoints.
[10] Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.
[11] Dirac, P. A. M. (2017). The Principles of Quantum Mechanics, China Machine Press.
[12] Pais, A. (1982). The Science and Life of Albert Einstein I
[13] Tawfik, A. N. and A. M. Diab (2015). "A review of the generalized uncertainty principle." Reports on Progress in Physics 78(12): 126001.
[15] Zeng, Jinyan (2013). Quantum Mechanics (QM), Science Press.
[16] Williams, B. G. (1984). "Compton scattering and Heisenberg's microscope revisited." American Journal of Physics 52(5): 425-430.
Hofer, W. A. (2012). "Heisenberg, uncertainty, and the scanning tunneling microscope." Frontiers of Physics 7(2): 218-222.
Prasad, N. and C. Roychoudhuri (2011). "Microscope and spectroscope results are not limited by Heisenberg's Uncertainty Principle!" Proceedings of SPIE-The International Society for Optical Engineering 8121.
[17] De Broglie, L. and J. A. E. Silva (1968). "Interpretation of a Recent Experiment on Interference of Photon Beams." Physical Review 172(5): 1284-1285.
[18] Cushing, J. T. (1994). Quantum mechanics: historical contingency and the Copenhagen hegemony, University of Chicago Press.
[19] Saunders, S. (2005). "Complementarity and scientific rationality." Foundations of Physics 35: 417-447.
[21] Carroll, S. M. and J. Lodman (2021). "Energy non-conservation in quantum mechanics." Foundations of Physics 51(4): 83.
[23] Born, M. (1955). "Statistical Interpretation of Quantum Mechanics." Science 122(3172): 675-679.
=========================================================
According to Shannon's definition, entropy measures information, choice, and uncertainty. Then, does negative differential entropy imply small uncertainty, less choice, and little information?
I am curious about the latest AI and ML models or methods that are being utilized to manage uncertainties and risks in Supply Chain Management. I am interested in how these models identify, evaluate, and mitigate risks. Any examples of industries where these models have been particularly successful would be beneficial.
Hi all,
Can you tell us how COVID 19 has triggered most of the innovative processes in the public and private sectors during this period of uncertainty? and from the non-profit sector? in your country?
Can anyone suggest to me
1. "how to calculate uncertainty for Redlich-Kister coefficients?"
2. Can uncertainties of the limiting partial molar properties be calculated from the uncertainties of the Ai (Redlich-Kister) parameters?
Will academics EVER stop anthropomorphizing "probabilistic uncertainty"? It is something "seen" in "findings" AND (to say the least) not SOME THING. It may well be mainly connected to poor observations OR very preliminary "discoveries". Do people really believe that probabilistic uncertainty can be hard-wired?? Unless you have evidence in real and appropriate actual contexts, such as a naturalist could SEE, OR at least AS seen sometime(s) in ontogeny with DIRECT OVERT EVIDENCE, then [otherwise]: STOP IT, STOP! Understand?
Every organization that strives to survive, to develop and to be sustainable, must be ready to face all the challenges that today’s turbulent and uncertain times carry with them. Organizations of all types and sizes are faced by external and internal factors and influences that make it uncertain whether they will achieve their objectives. The almost unimaginable pace of technical and technological progress, the dramatic acceleration of changes in all spheres of life, as well as the general feelings of uncertainty, actually can raise the question as to what extent is prevention still really possible at all?
The statement inquires about the potential mathematical relationship between entropy and standard deviation. Entropy and standard deviation are both concepts used in statistics and information theory.
Entropy is a measure of uncertainty or randomness in a probability distribution. It quantifies the average amount of information required to describe an event or a set of outcomes. It is commonly used in the field of information theory to assess the efficiency of data compression algorithms or to analyze the randomness of data.
On the other hand, standard deviation is a statistical measure that quantifies the dispersion or variability of a set of data points. It provides information about the average distance of data points from the mean or central value. It is widely used in data analysis to understand the spread of data and to compare the variability among different datasets.
While entropy and standard deviation are both statistical measures, they capture different aspects of data. Entropy focuses on the uncertainty or information content, while standard deviation focuses on the dispersion or variability. As such, there is no direct mathematical relationship between entropy and standard deviation.
However, depending on the specific context and the nature of the data, there might be some indirect connections or relationships between entropy and standard deviation. For instance, in certain probability distributions, higher entropy might be associated with higher variability or larger standard deviation, but this relationship is not universally applicable.
In summary, while entropy and standard deviation are both important statistical measures, they serve different purposes and do not have a direct mathematical relationship. The relationship between them, if any, would depend on the specific characteristics of the data being analyzed.
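One concrete case of the "indirect connection" mentioned above: within a fixed parametric family such as the Gaussian, differential entropy is a monotone function of the standard deviation, and the Gaussian value is also an upper bound for any distribution of the same variance:

```latex
h(X) \;=\; \tfrac{1}{2}\,\ln\!\bigl(2\pi e\,\sigma^{2}\bigr)
\quad \text{for } X \sim \mathcal{N}(\mu,\sigma^{2}),
\qquad
h(X) \;\le\; \tfrac{1}{2}\,\ln\!\bigl(2\pi e\,\operatorname{Var}(X)\bigr) \ \text{in general.}
```

So entropy and standard deviation move together inside such a family, while across arbitrary distributions the relationship indeed breaks down.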
Artificial Intelligence in Petroleum Engineering
1. Whether AI alone - would be able to mimic a real oil/gas field production scenario - in the absence of reservoir simulation?
2. What would be the various sources of 'uncertainty' that would be associated with an AI technique?
3. How would AI consider
‘measurement uncertainties’ (errors in measurement of state variables associated with reservoir rock properties and rock-fluid interaction properties)
as well as
‘structural uncertainties’ (errors associated with the mathematical representation of actual draining principles of hydrocarbons)
"explicitly";
along with
‘parametric uncertainty’ (resulting from the coupled effect of both measurement as well as structural uncertainties)?
4. How would AI do justice – particularly with the very limited data set associated with ‘reservoir permeability’?
5. Whether AI would reasonably support a hydrocarbon reservoir with a sparse, heterogeneous, anisotropic, multi-phase, multi-dimensional data?
6. Whether AI would be supplied with the best ever algorithms for coping with all kinds of uncertainties – by explicitly segregating the various forms of uncertainties by an efficient data training?
7. Whether AI-powered robots will be able to detect the oil seeps (in deep sea) efficiently by mitigating the exploration risk while lessening the harms to marine life?
8. Whether AI has the ability to forecast 'well collapses' – well before their occurrence?
To what extent would 'downtime' be expected to be reduced upon introducing a 'traffic light system'?
9. To what extent does the concept of 'digital twins' remain efficient in addressing the challenges associated with the hydrocarbon industry?
10. To what extent would AI remain helpful, by efficiently recognizing patterns through deep learning, towards averting the catastrophes associated with HSE?
I calculated the uncertainty for all the elements and ions according to the EPA guideline. However, I am stuck with the PM2.5 concentration uncertainty. The EPA dataset from Baltimore estimated the uncertainty of PM2.5 by dividing by 10, but the data from St. Louis are multiplied by 12.
So, how can I calculate the uncertainty for the PM2.5 data?
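If the target is an uncertainty column for PMF-style source apportionment, the equation-based recipe from the EPA PMF 5.0 user guide can be applied to PM2.5 itself as well; the MDL and error fraction below are placeholders to be replaced with values for your own sampler.

```python
import numpy as np

def pmf_uncertainty(conc, mdl, error_fraction):
    """Equation-based uncertainty as in the EPA PMF 5.0 user guide.

    conc and mdl in the same units; error_fraction is dimensionless.
    Below the MDL the uncertainty is set to 5/6 of the MDL.
    """
    conc = np.atleast_1d(np.asarray(conc, dtype=float))
    unc = np.sqrt((error_fraction * conc) ** 2 + (0.5 * mdl) ** 2)
    unc[conc <= mdl] = 5.0 / 6.0 * mdl
    return unc

# Placeholder values: replace with the MDL and error fraction of your PM2.5 measurements.
pm25 = np.array([4.2, 12.7, 35.1, 1.0])
print(pmf_uncertainty(pm25, mdl=2.0, error_fraction=0.10))
```

Down-weighting PM2.5 by declaring it the "total variable" or setting it to "weak" in the model is a separate choice; the divide-by-10 and multiply-by-12 factors you mention look like dataset-specific conventions rather than part of the general equation.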
In a world characterized by uncertainty and change, sustainable innovations are a key factor for the future of our humanity. A new mindset and approach to innovation is needed to find sustainable solutions that meet both environmental and social needs.
What is your meanings, informations and suggestions?
We all know that digital drugs have negative effects on the performance of workers or students in universities and schools, some of which are psychological, social and economic. But the question is: what are the types of these drugs, and what are the criteria on the basis of which the decision maker can judge each of these types in light of uncertainty?
The Heisenberg Uncertainty Principle has been found applicable only to atomic systems, while the Quantum Theory of the Uncertainty Principle of Integral Space has been found suitable for all the systems of the Universe and Nature. That is why it has been pronounced a "Quantum Theory of Everything", which was a dream of many pioneers including Albert Einstein, Niels Bohr, Schrödinger, Tesla, etc.
For details, the following three recent research papers have been uploaded.
It is known that the Heisenberg Uncertainty Principle is applicable only to atomic systems, but how may the dynamics of molecular, biomolecular, biochemical, biomedical and social systems be discussed?
Recently, an "Uncertainty Principle of Integral Space" has given satisfactory results in all the above systems.
The above principle has been discussed in the form of a Quantum Model. Details may be seen in the attached recent research paper.