Science topic

Probabilistic Models - Science topic

Explore the latest questions and answers in Probabilistic Models, and find Probabilistic Models experts.
Questions related to Probabilistic Models
  • asked a question related to Probabilistic Models
Question
2 answers
Quantum mechanics is not just a new theory (of physics, of the microworld, etc.) but a new epistemological shade of the paradigm: it takes causality a step further from the standard Newtonian tradition to a one-to-one correspondence where, in probabilistic terms, the state is related not to a standard abstract conception of state (i.e., a state of rest, a liquid state) but to the time-projected outcome, to the point where the two might be indistinguishable or construct each other's identity.
The success of this epistemological facelift has not been replicated in other domains of physics or of science, which would probably happen in the same way that some positivist approaches have been employed in psychology and elsewhere.
The reason for this is that this framework has been falsely attached to the QM domain, and also that it has not been discerned as such, or articulated clearly and understandably enough.
Relevant answer
Answer
Dear Preston Guynn,
You can say the same thing about anything inside the body of knowledge in physics, which weakens your argument.
  • asked a question related to Probabilistic Models
Question
3 answers
Case study based
Relevant answer
Answer
In mathematical inventory models, historical demand data must be analyzed to understand the pattern of variation in demand: find the demand rate and standard deviation, then compute the coefficient of variation of demand. Using the coefficient of variation, the type of demand can be determined, whether it is deterministic or probabilistic.
If the coefficient of variation is less than 20%, the demand can be treated as deterministic. If it is greater than 20%, the demand is probabilistic and must be addressed using probability theory, by fitting a probability distribution to the demand data (a short numerical sketch follows this answer).
*The type of lead time at the place of application must be known, whether it is deterministic or probabilistic, because it affects the inventory model. Historical data are used to identify the lead-time type and estimate its probability distribution.
*Determine the level of service required by the decision maker at the place of application, usually between 95% and 99%, which represents the probability of meeting customer demand without running out of stock during the replenishment cycle.
*Determine the type of inventory review (continuous review or periodic review):
Continuous review: In this system, inventory levels are monitored continuously, and a supply request is submitted when the inventory reaches the reorder point.
Periodic review: Inventory is reviewed at fixed intervals, and the order volume is adjusted to restore inventory to a target level.
*Calculating inventory costs in your model:
1-Holding cost: the cost of keeping one unit in stock for each inventory cycle; it includes the cost of storage space, insurance, and examination and inspection.
2-Setup cost: the cost of preparing the order or setting up the machines for each order. It is calculated per order, beginning with the issuance of the purchase or production order and ending with the arrival of the materials at the stores. It includes the costs of issuing documents, transporting, unloading and arranging materials in the stores, communications, employee salaries, and other administrative costs.
3-Shortage cost: the cost of the loss that the company or institution will incur due to not having inventory when it is needed, as well as the damage to the company's reputation.
The main goal of inventory models is to minimize total cost while achieving the required level of service to meet customer demands.
*Intelligent techniques such as artificial neural networks or deep learning can be used to predict demand from historical demand data, and a mathematical inventory model can then be built on the prediction results.
*Artificial intelligence algorithms such as genetic algorithms, swarm algorithms, and others can be used to solve the probabilistic inventory model and improve on the classical solution.
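A minimal numerical sketch (in Python) of the classification and reorder-point steps described above; all figures are hypothetical, and it assumes normally distributed demand and a fixed lead time:
import numpy as np
from scipy.stats import norm

daily_demand = np.array([95, 110, 102, 88, 120, 105, 99, 130, 90, 101])  # hypothetical history

mean_d = daily_demand.mean()
std_d = daily_demand.std(ddof=1)
cov = std_d / mean_d
print(f"coefficient of variation = {cov:.1%}")   # compare with the 20% rule of thumb above

lead_time_days = 5          # assumed fixed lead time
service_level = 0.95        # probability of no stockout during the replenishment cycle
z = norm.ppf(service_level)

# Reorder point = expected demand during the lead time + safety stock
reorder_point = mean_d * lead_time_days + z * std_d * np.sqrt(lead_time_days)
print(f"reorder point = {reorder_point:.0f} units")
In a real application the demand history, lead time and service level would of course come from the place of application, and a non-normal demand distribution would change the safety-stock formula.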
  • asked a question related to Probabilistic Models
Question
3 answers
I'm dealing with some clustered data (clinical data on patients undergoing a specific procedure at several medical sites), and I need to account for a random effect through the site of intervention.
Given that I'm analysing both continuous and binary categorical outcomes, I selected linear and logistic mixed-effects models as my models of choice, entering my covariates as fixed effects and including a random-effect term for my clustering label.
Here comes the problem: I run this analysis in conda with Python 3.8 and, as far as I can see, statsmodels supports LMMs (so I'm fine with my continuous outcomes) but not binomial mixed models. The only option available would be "BinomialBayesMixedGLM", but I'd rather avoid the Bayesian route if possible.
I tried using the rpy2 package to access R packages within my Python environment, but due to some incompatibilities I cannot solve on my current machine (I need to stick to macOS 11.7, which drags in further constraints on package updates), it doesn't work properly.
Any other approach for working with binary outcomes with a probabilistic mixed-effect model in python?
Relevant answer
Answer
You may consider using the R interpreter directly for this analysis. It will be easier than calling an R script from Python.
Here is an example R script:
# Install and load lme4 if needed
if (!require(lme4)) install.packages("lme4")
library(lme4)
# Example data: the built-in cbpp data set; derive a binary outcome from incidence
data <- cbpp
data$binary_outcome <- factor(ifelse(cbpp$incidence > 0, 1, 0))
str(data)
# Logistic mixed model: fixed effect for size, random intercept for period
model <- glmer(binary_outcome ~ size + (1 | period), data = data, family = binomial)
summary(model)
As a general suggestion, for classical statistical modeling R is often considered better and easier to use than Python, due to its design, which specifically caters to statisticians. While Python is popular for data science and machine learning, R's specialized focus on classical statistics, ease of use, and rich package ecosystem make it the preferred choice for traditional statistical analyses.
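If staying in Python is a hard requirement, one pragmatic fallback (despite its Bayesian label, which the question prefers to avoid) is statsmodels' BinomialBayesMixedGLM, which fits a logistic random-intercept model by variational Bayes; with vague default priors its fixed-effect estimates are often close to what lme4's glmer would give. A minimal sketch, assuming a pandas DataFrame with hypothetical columns outcome (0/1), age, treatment and a clustering column site:
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("patients.csv")      # hypothetical file: outcome, age, treatment, site

model = BinomialBayesMixedGLM.from_formula(
    "outcome ~ age + treatment",      # fixed effects
    {"site": "0 + C(site)"},          # random intercept for each site
    data=df,
)
result = model.fit_vb()               # variational Bayes fit; fit_map() is the alternative
print(result.summary())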
  • asked a question related to Probabilistic Models
Question
4 answers
We assume that vacuum cleaners can explode spontaneously.
Very few catastrophic accidents due to vacuum cleaner explosions have occurred in the world recently.
We also assume that the mechanism or physics inherent in the explosion is similar to that of the Big Bang creating the universe.
The cornerstone is the transformation of vacuum potential energy into quantum matter and vice versa:
ρ(x, y, z, t) = Const · V(x, y, z, t)     (1)
where V is the potential energy or electrical potential of the battery (nearly 120 volts).
However, Eq. (1) describes a probabilistic event that lasts from days to millions of years and leads to a concentration of energy (heat) at a small point.
This extremely high concentration (temperature) is the cause of the explosion.
Relevant answer
Answer
The following video shows one of the vacuum cleaner explosions that occurred in China while the unit was not in operation.
We believe this explains many of the characteristics of vacuum cleaner explosions.
  • asked a question related to Probabilistic Models
Question
1 answer
Hello everyone, I am conducting research on Probabilistic Seismic Hazard Assessment (PSHA) and I am looking for software recommendations that can handle PSHA with mainshock and aftershock analysis. Could you please suggest any software tools capable of performing this analysis? I would greatly appreciate your insights and recommendations. Thank you!
Relevant answer
Answer
  • asked a question related to Probabilistic Models
Question
1 answer
Hello everyone. I have a DEM model of a mountain slope area and I am planning to do a probabilistic failure analysis for it using PLAXIS 3D. Are there any manuals or discussions about how to do this? Or could someone share their experience? Thank you very much.
Relevant answer
Answer
Certainly! Conducting a probabilistic analysis in PLAXIS 3D can be a valuable approach for assessing the stability of mountain slopes. Here are some general steps and considerations for performing such an analysis:
Define Probability Distributions: Identify the parameters that may influence the stability of your slope, such as material properties (e.g., soil cohesion, friction angle), geometry (e.g., slope angle, height), and external factors (e.g., rainfall intensity). Assign probability distributions to these parameters based on available data, literature, or expert judgment. Common distributions include normal, lognormal, and uniform distributions.
Generate Random Samples: Use statistical techniques (e.g., Monte Carlo simulation) to generate random samples from the defined probability distributions for each parameter. This step involves creating multiple realizations of your slope model, each with a different set of input parameters sampled from their respective distributions (a short numerical sketch of this step follows this list).
Perform PLAXIS Simulations: Set up and run PLAXIS 3D simulations for each generated realization of the slope model. Ensure that the model accurately represents the terrain, material properties, boundary conditions, and loading conditions. Conduct stability analyses (e.g., slope stability, factor of safety calculations) for each simulation.
Analyze Results: Collect and analyze the results from all simulations. This may involve comparing factors of safety, failure mechanisms, or critical failure surfaces across different realizations. Identify trends or patterns in the data to assess the overall probability of slope failure and associated uncertainties.
Sensitivity Analysis: Perform sensitivity analyses to understand the influence of individual parameters on slope stability and failure probability. This helps identify which parameters have the most significant impact and where additional data or refinement may be needed.
Interpretation and Reporting: Interpret the probabilistic analysis results and communicate findings effectively. Provide insights into the likelihood and consequences of slope failure under different scenarios, along with recommendations for risk management and mitigation strategies.
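To make the sampling step concrete, here is a minimal Monte Carlo sketch in Python, standing outside PLAXIS. The factor-of-safety function below is a purely illustrative infinite-slope expression; in a real study each sampled parameter set would be pushed through the PLAXIS 3D model instead (e.g., via its scripting interface, if available in your version), and the distributions are placeholders:
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative input distributions (replace with site-specific values)
cohesion = rng.lognormal(mean=np.log(20.0), sigma=0.2, size=n)   # kPa
friction_deg = rng.normal(30.0, 3.0, size=n)                     # degrees
unit_weight = rng.normal(19.0, 0.5, size=n)                      # kN/m3

def factor_of_safety(c, phi_deg, gamma):
    # Placeholder infinite-slope expression, NOT a substitute for a PLAXIS run
    slope_deg, depth = 35.0, 5.0
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    return (c + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)) / (
        gamma * depth * np.sin(beta) * np.cos(beta)
    )

fos = factor_of_safety(cohesion, friction_deg, unit_weight)
print("estimated probability of failure:", np.mean(fos < 1.0))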
As for specific manuals or discussions on probabilistic analysis in PLAXIS 3D, you may want to refer to the PLAXIS documentation, online forums, or academic literature for guidance. Additionally, reaching out to experienced users or consulting with geotechnical engineering professionals familiar with probabilistic modeling could provide valuable insights and practical advice.
You can find useful information in our paper on a probabilistic analysis, linked below:
  • asked a question related to Probabilistic Models
Question
4 answers
House-selling is one of the typical tasks of the optimal stopping problems. Offers come in daily for an asset, such as a house, that you wish to sell. Let Xi denote the amount of the offer received on day i. X1, X2, ... are independent random variables with a uniform distribution on the interval (0, 1). Each offer costs an amount C > 0 to observe. When you receive an offer Xi, you must decide whether to accept it or to wait for a better offer. The reward sequence depends on whether or not recall of past observations is allowed. If you may not recall past offers, then Di(X1,...,Xi) = Xi – i*C. If you are allowed to recall past offers, then Di(X1,...,Xi) = max(X1,...,Xi) – i*C. These tasks may be extended to an infinite horizon (i is unlimited). So, there are 4 different task statements:
  • without recall, infinite horizon
  • without recall, finite horizon
  • with recall, infinite horizon
  • with recall, finite horizon
The first three tasks are quite simple, but I was unable to prove the solution of the last task in strict form (although I found a solution). If anyone knows its solution, please write it or send an article (or a link to one) where it is given. Thank you in advance.
Relevant answer
Answer
Hello, I would suggest first understanding the problem statement. In practice, bid prices are not uniformly distributed on the interval from 0 to 1; they typically follow a distribution such as the log-normal, which you could study first. This will affect the solutions you get. I also see no need to normalise the price range to the interval (0, 1). That is just my personal, non-binding opinion.
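For what it is worth, the finite-horizon, with-recall case from the question (uniform offers, cost C per observation) can at least be explored numerically by backward induction on the running maximum; a small Python sketch, where the horizon, cost and grid resolution are arbitrary choices:
import numpy as np

N, c = 20, 0.01                       # horizon and cost per observation
m = np.linspace(0.0, 1.0, 2001)       # grid for the best offer seen so far
dx = m[1] - m[0]

V = m - N * c                         # at the horizon you must accept the best offer

for i in range(N - 1, 0, -1):
    # Continuation value E[V_{i+1}(max(m, X))] with X ~ U(0,1):
    #   = m * V_{i+1}(m) + integral from m to 1 of V_{i+1}(x) dx
    cum = np.concatenate(([0.0], np.cumsum((V[1:] + V[:-1]) * dx / 2)))
    tail = cum[-1] - cum
    cont = m * V + tail
    stop = m - i * c
    V = np.maximum(stop, cont)        # optimal rule at step i: accept as soon as stop >= cont

# Expected value of the game = E[V_1(X_1)] with X_1 ~ U(0,1)
print("expected value of the game:", ((V[1:] + V[:-1]) * dx / 2).sum())
Comparing the stopping thresholds this produces against a conjectured closed-form rule is one way to check a candidate strict solution.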
  • asked a question related to Probabilistic Models
Question
1 answer
It would be helpful if anyone could give me some ideas about a probabilistic model for ship capsizing risk analysis, so that I can move forward with my research.
Relevant answer
Answer
1. What is your definition of a "ship"? I assume you are not including row boats - but you need to define the population. Is that based upon displacement, dollar value, number of passengers and crew?
2. Are you limiting this to "capsizing" - which is a rollover incident - or any sinking?
3. What sort of hull? - the issues for a catamaran are different than a traditional hull, or other hull variations.
4. Given these definitions - you need to gather the empirical data on what has happened in the past - capsizing events per million nautical miles.
5. Then work out the theory FMEA - capsizing due to bad loading, due to hull damage (striking an object, or even shark attacks), due to rogue waves, etc (lots of etc)
6. Then work out the probabilities of these events.
7. Finally - what is the goal of your study? If you come up with this estimate - what is its use - to prevent capsizing?
  • asked a question related to Probabilistic Models
Question
15 answers
Which subject studies the possibility of an afterlife? The answer may be theology and/or philosophy. I wish we had a subject that studied the possibility of an afterlife more probabilistically and scientifically.
Relevant answer
Answer
The question as to whether an afterlife could possibly exist should be rephrased, so as to obviate from the outset any views and/or arguments that would be informed and cognitively biased by framing the question within the context of ideologies and/or non-objective views and/or 'revealed' yet objectively unproven knowledge (as far as this latter point goes, there are many contradictory "revealed knowledges" out there, and this simple fact alone is enough to objectively invalidate any such approach).
In all matters of fundamental interpretations of reality, the go-to science can only be the science that sits atop the Auguste Comte hierarchy of the sciences, just below pure mathematics: to wit, mathematical physics, with any results then in need of validation by both theoretical and experimental proofs.
The way to objectively rephrase 'Is an afterlife possible?' is to ask instead 'Can (human) consciousness exist without a material substrate?'. Only theoretical physics can possibly provide an answer. In turn, a physics-based rephrasing of this new question is simply whether consciousness is a fundamental feature of reality, or whether it is derivative, aka emergent, instead.
There are many very solid physics-based arguments as to why consciousness must be fundamental, and extremely solid arguments why it cannot possibly be emergent.
Therefore, I'd say that the answer to any version of your question is that 'Physics' is the only serious discipline that can tackle the question.
  • asked a question related to Probabilistic Models
Question
2 answers
How can probability theory and statistical modeling contribute to our understanding of phonological variation and probabilistic phonological processes?
Relevant answer
Answer
Check out these few articles.
file:///C:/Users/user/Downloads/aldereteEtAl_21_Probabili.pdf
Statistics and probability theory are open tools for expressing and presenting any type of model.
What matters are your ideas and your data.
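One concrete and widely used instance is variable-rule style modelling: a binary phonological variant is regressed on linguistic and social predictors with (mixed-effects) logistic regression, so each factor receives a probabilistic weight. A toy Python sketch with entirely made-up data (column names and effect sizes are hypothetical):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: deletion (1/0) of a segment, by following context and speech rate
context = rng.choice(["vowel", "consonant", "pause"], size=n)
rate = rng.normal(5.0, 1.0, size=n)                  # syllables per second
logit = -1.0 + 1.2 * (context == "consonant") + 0.4 * (rate - 5.0)
deletion = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"deletion": deletion, "context": context, "rate": rate})
fit = smf.logit("deletion ~ C(context) + rate", data=df).fit()
print(fit.summary())
With real corpus data one would normally add random effects for speaker and word, but the logic is the same: probability theory turns categorical variation into estimable rates.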
  • asked a question related to Probabilistic Models
Question
70 answers
ESSENTIAL REASON IN PHYSICISTS’ USE OF LOGIC:
IN OTHER SCIENCES TOO!
Raphael Neelamkavil, Ph.D., Dr. phil.
1. The Logic of Physics
Physics students begin with meso-world experiments and theories. Naturally, at a young age, they become convinced that the logic they follow at that level is identical with the ideal of scientific method. Convictions about scientific temper may further confirm them in this. This has far-reaching consequences for the concept of science and of the logic of science.
But, unquestionably, the logic behind such an application of the scientific method is only one manner of realizing (1) the ideal of scientific method, namely, observe, hypothesize, verify, theorize, attempt to falsify for experimental and theoretical advancements, etc., and (2) the more general ideal of reason.
But does any teacher or professor of physics (or of other sciences) instruct their students on the advantages of thinking and experimenting in accordance with the above-mentioned fundamental fact of all scientific practice in mind, or make them capable of realizing the significance of this in the course of time? I think, no.
This is why physicists (and for that matter all scientists) fail at empowering their students and themselves in favour of the growth of science, thought, and life. The logic being followed in the above-said mode of practice of scientific method, naturally, becomes for the students the genuine form of logic, instead of being an instantiation of the ideal of logic as reason. This seems to be the case in most of the practices and instruction of all sciences till today. A change of the origin, justification, and significance of the use of logic in physics from the very start of instruction in the sciences is the solution for this problem. The change must be in the foundations.
All humans equate (1) this sort of logic of each science, and even logic as such, with (2) reason as such. Reason as such, in fact, is more generic of all kinds of logic. Practically none of the professors (of physics as well as of other sciences) terms the version of logic of their science as an instantiation of reason, which may be accessed ever better as the science eventually grows into something more elaborate and complex. Physicist gets more and more skilled at reasoning only as and when she/he wants to grow continuously into a genuine physicist.
As the same students enter the study of recent developments in physics like quantum physics, relativity, nano-physics (Greek nanos, “dwarf”; but in physics, @ 10^-9), atto-physics (@ 10^-18), etc., they forget to make place for the strong mathematical effects that are due by reason of the conceptual and processual paradoxes due to epistemological and physical-ontological difference between the object-sizes and the sizes of ourselves / our instruments. The best examples are the Uncertainty Principle, the Statistical Interpretation of QM, Quantum Cosmology, etc.
They tend to believe that some of these and similar physics may defy our (meso-physical) logic – but by this mistakenly intending that all forms of reasoning would have to fail if such instances of advanced physics are accepted in all of physics. As a result, again, their logic tends to continue to be of the same level as has been taken while they did elementary levels of physics.
Does this not mean that the ad hoc make-believe interpretations of the logic of the foundations of QM, Quantum Cosmology, etc. are the culprits that naturally make the logic of traditional physics inadequate as the best representative of the logic of nature? In short, in order to find a common platform, the logic of traditional and recent branches of physics must improve so to adequate itself to nature’s logic.
Why do I not suggest that the hitherto logic of physics be substituted by quantum logic, relativity logic, thermodynamic logic, nano-logic, atto-logic, or whatever other logic of any recent branch of physics that may be imagined? One would substitute logic in this manner only if one is overwhelmed by what purportedly is the logic of the new branches of physics. But, in the first place, I wonder why logic should be equated directly with reason. The attempt should always be to bring the logic of physics in as much correspondence with the logic of nature, so that reason in general can get closer to the latter. This must be the case not merely with physicists, but also with scientists from other disciplines and even from philosophy, mathematics, and logic itself.
Therefore, my questions are: What is the foundational reason that physicists should follow and should not lose at any occasion? Does this, how does this, and should this get transformed into forms of logic founded on a more general sort of physical reason? Wherein does such reason consist and where does it exist? Can there be a form of logic in which the logical laws depend not merely on the size of objects or the epistemological level available at the given object sizes, but instead, on the universal characteristics of all that exist? Or, should various logics be used at various occasions, like in the case of the suggested quantum logic, counterfactual logic, etc.?
Just like logic is not to be taken as a bad guide by citing the examples of the many logicians, scientists, and “logical” human beings doing logic non-ideally, I believe that there is a kernel of reason behind physics, justified solely on the most basic and universal characteristics of physical existents. These universals cannot belong solely to physics, but instead, to all the sciences, because they belong to all existents.
This kernel of reason in physics is to be insisted upon at every act of physics, even if many physicists (and other scientists and philosophers) may not ensure that kernel in their work. I shall discuss these possibly highest universals and connect them to logic meant as reason, when I elaborate on: 3. The Ontology of Physics (in a forthcoming discussion in RG)
The matter on which physicists do logical work is existent matter-energy in its fundamental implications and the derivative implications from the fundamental ones. This is to be kept in mind while doing any logically acceptable work in physics, because existent matter-energy corpora in processuality delineate all possible forms of use of logic in physics, which logic is properly to be termed nature’s reason.
Moreover, conclusions are not drawn up by one subject (person) in physics for use by the same subject alone. Hence, we have the following two points to note in the use of logic in physics and the sciences: (1) the intersubjectively awaited necessity of human reason in its delineation in logical methods should be upheld at least by a well-informed community, and (2) the need for such reason behind approved physics should then be spread universally with an open mind that permits and requires further scientific advancements.
These will make future generations further question the genuineness of such logic / specific realization of reason, and constantly encourage attempts to falsify theories or their parts so that physics can bring up more genuine instantiations of human reason. But is such human reason based on the reason active in nature?
Although the above arguments and the following definition of logic in physics might look queer or at least new and unclear for many physicists, for many other scientists, for many mathematicians, and even for many logicians, I define here logic for use in physics as the fundamental aspect of reason that physics should uphold constantly in every argument and conclusion due from it:
Logic in physics is (1) the methodological science (2) of approaching the best intersubjectively rational and structural consequences (3) in what may be termed thought (not in emotions) (4) in clear terms of ever higher truth-probability achievable in statements and conclusions (5) in languages of all kinds (ordinary language, mathematics, computer algorithms, etc.) (6) based on the probabilistically methodological use, (7) namely, of the rules of all sensible logics that exemplify the Laws of Identity, Non-contradiction, and Excluded Middle, (8) which in turn must pertain to the direct and exhaustive physical implications of “to exist”.
Here I have not defined logic in physics very simply as “the discipline of the rules of thought”, “the discipline of the methodological approach to truths”, etc., for obvious reasons clarified by the history of the various definitions of logic.
But here comes up another question: Is the reason pertaining to physical nature the same as the most ideal form of human reason? From within the business of physics, how to connect the reason of physical nature with that of humans? I may suggest some answers from the epistemological and ontological aspects. But I would appreciate your responses in this regard too.
2. The Epistemology of Physics (in a forthcoming discussion in RG)
3. The Ontology of Physics (in a forthcoming discussion in RG)
Relevant answer
Answer
Yordan Epitropov, This is a comment you have written earlier too. If possible, please elaborate and give details of the argument given!
  • asked a question related to Probabilistic Models
Question
5 answers
I am interested in doing probabilistic analysis using geotechnical software. From my own review, I am looking to do probabilistic analysis in the PLAXIS software, but I could not find how to perform this analysis in the program. Please guide me.
Relevant answer
Answer
According to https://www.seequent.com/products-solutions/plaxis-le/ , PLAXIS LE is designed for limit equilibrium (LEM) slope stability analysis and for engineering analyses of slope stability, groundwater flow, and consolidation.
"PLAXIS 2D LE and PLAXIS 3D LE are powerful applications for limit equilibrium slope stability analysis and finite element analysis of groundwater seepage.
Perform rapid and comprehensive analyses
Determine the full 3D slip surface at hundreds of locations in extensive models, such as open-pit mines, riverbanks, and road and rail corridors with multi-plane slope stability analysis (MPA).
Automatically calculate the slip direction at each location using orientation analysis features. Design with confidence when considering faults, weak planes, and pore-water pressures.
Get access to the most available search methods on the market including Greco, Cuckoo, Wedges. Use probabilistic analysis such as Monte Carlo, Latin Hypercube, and the Alternative Point Estimation Method (APEM) to build robust digital twins. Further refine models with sensitivity analysis and spatial variability features.
Analyze slopes with various methods
Calculate safety factors and increase accuracy using 3D analysis of site geology to ensure infrastructure safety and reliability. Perform analysis with limit equilibrium method of slices or stress-based methods. Choose from classic method of slices like Bishop, Janbu, Spencer, Morgenstern-Price, GLE, and Sarma methods. Perform hybrid Kulhawy analysis by importing stress fields in 2D and 3D.
Analyze saturated or unsaturated soils
Provide stable analysis of groundwater flow in saturated or unsaturated soils. Consider staged construction and excavation modeling scenarios."
  • asked a question related to Probabilistic Models
Question
2 answers
Challenge faced worldwide: a new spread of Covid
We have all lived through Covid-19, across the world. Prior to the availability of vaccines, non-pharmaceutical interventions were implemented by health-aware governments, with significant success, up to the stage of lockdown, where residents of a country were asked and then required to stay at home, with stringent conditions for leaving their homes. The logistics of food supply was usually well managed, even if there were cases of people remaining isolated from food supply at times.
Anticipating the risk through propagation models
The key to not letting Covid-19 take its toll, and it actually did take its toll, especially among elderly residents of care homes, in Italy, France, the UK, etc, was to model in an anticipatory manner the spread of the disease and assess its risk realistically.
Macro-models available (statistics), but what about micro-level (few humans)?
Modelling was mostly at macro-level: cities, regions, countries. However the different context of human interaction in daily life received much less attention, although large data sets and use cases build on a number of elementary interactions, and smaller numbers of humans involved in each.
Elementary interactions of few humans
Our endeavour could not afford the ambition of larger teams of health statisticians who model the spread of the disease at country level. Instead, in the years 2019-2020, it focused on elementary use cases of interaction involving few humans (few, starting from 2). Such use cases covered elderly patients of care homes and their interactions during joint meals in the care home meal area, with tables shared; it also covered households in close (and closed) interaction during lockdown. It also tried to make sense of large events, where many humans interact during a limited time (a football game, a women's day celebration, etc.).
The typology of likely propagation in such use cases was modeled, and parameters of a simple but robust model were tuned to known data, and in turn simulations could be run, and such simulation could be assessed on other known outcomes (such as the observation of virus propagation among the citizen team running a polling center during elections in France).
Next steps: anticipating the wave coming, with micro-models?
Can we ask the researchgate community if anyone is interested to undertake similar micro-level models of elementary human interaction leading to a likely spread of the virus?
Could we consider building a federated collaborative project, with data fed by anyone having access to these (literature, publications, etc)?
What approach do you recommend? Have you published on the topic?
REF
Here is a reference to the model mentioned above, with associated training/verification data:
[1] Agent Based Model for Covid 19 Transmission: -field approach based on context of interaction, July 2020, R. Di Francesco, DOI: 10.13140/RG.2.2.24583.83364
Relevant answer
Answer
We generally advance by post-mortems, dear Renaud Di Francesco.
Modelling better would mean that we are able to learn from such post-mortems, i.e. to apply Markov chains with exactness.
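As a concrete illustration of the kind of elementary-interaction (micro-level) model described in the question, here is a toy per-contact simulation in Python for a small group sharing meals; every number is purely illustrative and would have to be calibrated to data, as in [1]:
import numpy as np

rng = np.random.default_rng(1)

n_people = 6                 # residents sharing a table
p_contact = 0.008            # illustrative per-meal, per-pair transmission probability
n_meals = 60                 # roughly one month of shared lunches and dinners
n_runs = 5_000

final_sizes = []
for _ in range(n_runs):
    infected = np.zeros(n_people, dtype=bool)
    infected[0] = True                       # one index case
    for _ in range(n_meals):
        # each susceptible escapes infection at this meal with prob (1 - p)^(# infected)
        p_escape = (1 - p_contact) ** infected.sum()
        new = (~infected) & (rng.random(n_people) > p_escape)
        infected |= new
    final_sizes.append(infected.sum())

print("mean final number infected:", np.mean(final_sizes))
Replacing the single contact probability with context-dependent ones (table layout, ventilation, duration) is exactly where such micro-models would be tuned against observed outbreak data.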
  • asked a question related to Probabilistic Models
Question
4 answers
p-hacking and falsification of statistical results; improvement processes regarding process evaluations.
Foreword: The significance of a statistical statement is denoted by p, a probabilistic variable. The term "significance", translated into English, means "clarity, the essential". There is no question that significance as a measurement variable plays an extraordinary role in probabilistic statistics. Nevertheless, it is often subjected to manipulation, by keeping the number of random variables (i.e., measured values) small, or even by filtering them. In addition, the inadequate integration of all process parameters and the inadequate use of probability densities mean that processes are poorly evaluated, both now and for the future.
So what to do? Suppress data, namely the unwelcome values? See the attached picture p-hacking3.
A plausible example of this (Fig. 1.1) was given in Spectrum of Science SPECIAL 3.7, chapter "Estimating Error, the Curse of the P-Value", Regina Nuzzo, Gallaudet University, Washington.
A better way: include all data, build a frequency scale, and obtain parameter values for a probability density; see the attached picture p-hacking2.
Is that OK?
Relevant answer
Answer
... perhaps we could use probability densities that respect skewness and longer tails?
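A quick way to do exactly that in Python is to fit a skew-normal (or another long-tailed) density to the full, unfiltered data instead of dropping unwelcome values; a minimal sketch with simulated data standing in for real measurements:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=500)   # stand-in for skewed measurements

a, loc, scale = stats.skewnorm.fit(data)           # skewness, location, scale
print(f"skew-normal fit: a={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# Compare with a symmetric normal fit via log-likelihood
ll_skew = stats.skewnorm.logpdf(data, a, loc, scale).sum()
mu, sigma = stats.norm.fit(data)
ll_norm = stats.norm.logpdf(data, mu, sigma).sum()
print("log-likelihood skew-normal vs normal:", round(ll_skew, 1), round(ll_norm, 1))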
  • asked a question related to Probabilistic Models
Question
8 answers
Will academics EVER stop anthropomorphizing "probabilistic uncertainty"? It is something "seen" in "findings" AND (to say the least) not SOME THING. It may well be mainly connected to poor observations OR very preliminary "discoveries". Do people really believe that probabilistic uncertainty can be hard-wired? Unless you have evidence in real and appropriate actual contexts, such as a naturalist could SEE, OR at least AS seen sometime(s) in ontogeny with DIRECT OVERT EVIDENCE, then [otherwise]: STOP IT. STOP! Understand?
Relevant answer
Answer
OK. We tried.
  • asked a question related to Probabilistic Models
Question
4 answers
I want to use this attenuation relationship (suitable for central Italy) in probabilistic analysis; therefore it must consider the soil type as a random variable with a certain probability distribution.
Moreover, if there is any formula or methodology in which the soil type from the epicenter to the target point is taken into account for obtaining an attenuation relationship, I would appreciate it if you could introduce it.
Many thanks in advance.
Relevant answer
Answer
Many thanks, anyway Gianluca Regina
  • asked a question related to Probabilistic Models
Question
2 answers
Please give some references for your answer. Thank you so much
Relevant answer
I agree with David Eugene Booth :)
  • asked a question related to Probabilistic Models
Question
5 answers
Probabilistic modelling, Forecasting, Renewable energy uncertainty prediction, ARIMA
Relevant answer
Answer
The question is not clear, because an ARIMA model is itself probabilistic, not deterministic. Please clarify it.
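To illustrate the point of this answer: an ARIMA fit already yields a full predictive distribution, so probabilistic forecasts (prediction intervals) come out directly. A minimal statsmodels sketch on simulated data (the series, model order and horizon are arbitrary stand-ins for, e.g., renewable generation data):
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Simulated AR(1)-like series standing in for, e.g., hourly wind power
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=1.0)

res = ARIMA(y, order=(1, 0, 1)).fit()
fc = res.get_forecast(steps=24)
print(fc.predicted_mean[:5])          # point forecasts
print(fc.conf_int(alpha=0.05)[:5])    # 95% prediction intervals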
  • asked a question related to Probabilistic Models
Question
6 answers
Could any expert try to examine our novel approach for multi-objective optimization?
The brand-new approach is entitled "Probability-based multi-objective optimization for material selection" and was published by Springer, available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
Relevant answer
Answer
  • asked a question related to Probabilistic Models
Question
3 answers
If someone could please share any report/paper/thesis, it would be highly appreciated.
Relevant answer
Answer
The "technique" by itself is of little consequence.
Have you drawn a representative sample from a population?
Did you submit a structured questionnaire to the sample units?
Good: start analyzing the data that describe the phenomenon you are studying and the rest will come by itself.
  • asked a question related to Probabilistic Models
Question
1 answer
I am working on a dataset that contains some censored data. Probabilistic approaches such as Bayesian estimation can be used to handle censored data; however, I am interested in deploying machine learning using Python. I would appreciate any literature, suggestions, or guidance. The problem is a classification one.
Thanks
Relevant answer
Answer
Take a look at the attached little trick and see if that helps you a bit. David Booth
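If the censoring is on a time-to-event outcome (an assumption about your setup, not the only possible reading), one pragmatic Python route is a survival-analysis library such as lifelines rather than a plain classifier. A minimal sketch, assuming a DataFrame with hypothetical columns time, event (1 = observed, 0 = censored) and covariates x1, x2:
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("my_censored_data.csv")   # hypothetical file with time, event, x1, x2

cph = CoxPHFitter()
cph.fit(df[["time", "event", "x1", "x2"]], duration_col="time", event_col="event")
cph.print_summary()

# The fitted risk scores can then feed a downstream classification rule if needed
risk = cph.predict_partial_hazard(df[["x1", "x2"]])
print(risk.head())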
  • asked a question related to Probabilistic Models
Question
2 answers
If we have multiple experts providing the prior probabilities for the parent nodes, how will the experts fill in the node probabilities (such as low, medium, and high), and how will we get a consensus of all the experts about the probability distribution of the parent node?
If someone can please share any paper/Questionnaire/expert based Bayesian network where all these queries are explained it will be highly appreciated.
Relevant answer
Answer
Ette Etuk Thank you so much for the feedback. Actually, if you have a lot of stakeholders and you want to create a consensus among them, how will we incorporate the probabilities into the parent nodes of the Bayesian network?
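One simple and widely used way to combine several experts' assessments of the same parent-node distribution is a (weighted) linear opinion pool: average the elicited probabilities and renormalise. A small Python sketch with made-up numbers; the weights are an assumption and equal weighting is also common:
import numpy as np

# Each row: one expert's probabilities for the states (low, medium, high) of a parent node
experts = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
weights = np.array([0.5, 0.3, 0.2])   # e.g., based on judged expertise

pooled = weights @ experts
pooled /= pooled.sum()                # guard against rounding; each row already sums to 1
print("consensus P(low, medium, high) =", pooled.round(3))
The pooled vector is what would be entered into the parent node's prior table; more elaborate schemes (Delphi rounds, Cooke's classical method) refine how the weights are chosen.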
  • asked a question related to Probabilistic Models
Question
2 answers
Is there any technique/method to convert deterministic values into probabilistic values in a Bayesian network in order to improve the results?
Relevant answer
Answer
Not clear to me: why would one need to induce randomness in deterministic values?
  • asked a question related to Probabilistic Models
Question
5 answers
I'm studying the topic from Probabilistic Robotics by Thrun, Burgard, and Fox.
In the Extended Kalman Filter algorithm, we linearize the action model in the following way:
g(u(t), x(t-1)) ≈ g(u(t), μ(t−1)) + G(t)·(x(t−1) − μ(t−1))
where g(u(t), x(t-1)) is the action model and G(t) is its Jacobian matrix with respect to the state x(t−1).
I don't see how this guarantees linearity, because g could be nonlinear in u(t). The authors don't mention anything about why this is the case.
In other words, I expected a multivariate Taylor expansion in which we get a function that is linear in both u(t) and x(t−1).
Relevant answer
Answer
Hi,
It should be noted that the Kalman filter belongs to the theory of state estimation for a system represented by a linear model. It is an algorithm that provides estimates of some unknown variables from measurements observed over time. The filter takes as input the command u(t) and the measured output of the system, and it returns an estimate of the states of the system. For nonlinear systems there is an extension that can deal with these cases: the extended Kalman filter. There is no particular requirement on the control input when applying this algorithm: u(t) is a known (measured) quantity at each step, so the model only needs to be linearized with respect to the uncertain state x(t−1), which is why only the Jacobian G(t) with respect to the state appears.
Also please take a look at the links.
Best regards
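A small numeric sketch may make this concrete: in the EKF prediction step, u(t) can enter g nonlinearly, yet only the Jacobian with respect to the state is used to propagate the covariance. The unicycle-style motion model below is illustrative, not taken from the book:
import numpy as np

def g(u, x):
    # x = [px, py, theta], u = [v, w]; nonlinear in both u and x
    v, w = u
    px, py, th = x
    dt = 0.1
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + w * dt])

def G_jacobian(u, mu):
    # Jacobian of g with respect to the state, evaluated at the previous mean
    v, _ = u
    _, _, th = mu
    dt = 0.1
    return np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                     [0.0, 1.0,  v * dt * np.cos(th)],
                     [0.0, 0.0,  1.0]])

mu, Sigma = np.array([0.0, 0.0, 0.3]), 0.01 * np.eye(3)
u, R = np.array([1.0, 0.2]), 0.001 * np.eye(3)

mu_pred = g(u, mu)                                 # mean goes through the full nonlinear g
G = G_jacobian(u, mu)
Sigma_pred = G @ Sigma @ G.T + R                   # covariance only needs the state Jacobian
print(mu_pred, Sigma_pred, sep="\n")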
  • asked a question related to Probabilistic Models
Question
3 answers
Hi guys,
I am using the negative log-likelihood of a Dirichlet distribution as my loss function while implementing this paper:
I parameterise the distribution using my network output and compute the negative log-likelihood of the observed ground truth.
The issue is that I found the loss is sometimes negative, which means that the likelihood at the point of observation is greater than 1.
My understanding of this phenomenon comes in two parts:
- This is normal, as a likelihood (density) function can be higher than 1.
- It indicates overfitting, meaning that the likelihood function probably peaks at the point of observation so sharply that other areas of the support would get essentially zero density if a test observation landed there.
Relevant answer
Answer
I would suggest that you are probably using a poor method of fitting. Please look at the entries in the attached Google search. Good luck, David Booth. PS: I note that R has procedures for doing this fit.
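On the first point in the question: the Dirichlet is a continuous distribution, so its density can exceed 1 and a negative NLL is not by itself a bug. A one-line check in Python (the alpha values and observation are arbitrary):
from scipy.stats import dirichlet

alpha = [10.0, 5.0, 3.0]
x = [0.6, 0.25, 0.15]
print(dirichlet.pdf(x, alpha))       # comfortably greater than 1 near the mode
print(-dirichlet.logpdf(x, alpha))   # hence a negative "loss" for this observation
Whether it also signals overfitting is a separate question, best checked by evaluating the NLL on held-out data.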
  • asked a question related to Probabilistic Models
Question
6 answers
Hello, is there a way for someone to find the parameter which affects a function maximally?
At first, I think, one makes a model from the data in hand, then takes the partial derivative with respect to each parameter, and the largest result identifies the most influential parameter for that model. Is there any other way, especially one where it is not necessary to build a model before taking a derivative? For example, in conditional probabilistic models such as Bayesian networks there are only the data and the graph structure. Thanks.
Relevant answer
Answer
I think I found an answer to the question of finding the most influential parameter from data without modeling. But it requires a matrix inverse, and there are some shortcomings if the parameters have large gaps between their successive values. In short, I now think it is better to model the data via known functions or neural networks and find the influential terms by perturbing each parameter. If necessary, make the feature space orthogonal; otherwise the parameters' effects on each other may not be predictable in higher layers.
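For reference, the perturbation idea mentioned above takes only a few lines once any fitted model f is available; a one-at-a-time sketch in Python, where the model and baseline point are placeholders:
import numpy as np

def f(x):
    # placeholder for a fitted model (regression, neural network prediction, ...)
    return 3.0 * x[0] + 0.5 * x[1] ** 2 - 2.0 * np.sin(x[2])

x0 = np.array([1.0, 2.0, 0.5])        # baseline operating point
eps = 1e-4

sensitivity = []
for j in range(len(x0)):
    xp = x0.copy()
    xp[j] += eps
    sensitivity.append((f(xp) - f(x0)) / eps)   # finite-difference partial derivative

# If parameters have different units, scale each derivative by the parameter's
# typical variation (elasticity-style ranking) before comparing them.
print("raw sensitivities:", np.round(sensitivity, 3))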
  • asked a question related to Probabilistic Models
Question
7 answers
Please look at the text of the section on random walk from page 9 to formula 4.7, where you will find mathematical calculations justifying the probabilistic interpretation of the Riemann zeta function.
Relevant answer
Answer
If the distribution of the zeros of the Riemann zeta function is implicit in your question, then you may find the following paper exciting:
Regards,
  • asked a question related to Probabilistic Models
Question
2 answers
I have seen evidence of using network meta-analysis to conduct further cost-effectiveness analysis in two papers. Fractional polynomial models were involved because of non-proportional hazards. I would like to know how to combine the evidence from a network meta-analysis with the partitioned survival model in a cost-effectiveness analysis of cancer treatments. Which software is best suited for this? I wonder how the authors connect the two. The abstracts of the two papers are as follows.
1. Front Public Health. 2022 Apr 15;10:869960. doi: 10.3389/fpubh.2022.869960. eCollection 2022.
Cost-Effectiveness Analysis of Five Systemic Treatments for Unresectable Hepatocellular Carcinoma in China: An Economic Evaluation Based on Network Meta-Analysis.
Zhao M, Pan X, Yin Y, Hu H, Wei J, Bai Z, Tang W.
BACKGROUND AND OBJECTIVE: Unresectable hepatocellular carcinoma (uHCC) is the main histological subtype of liver cancer and causes a great disease burden in China. We aimed to evaluate the cost-effectiveness of five first-line systemic treatments newly approved in the Chinese market for the treatment of uHCC, namely, sorafenib, lenvatinib, donafenib, sintilimab plus bevacizumab (D + A), and atezolizumab plus bevacizumab (T + A) from the perspective of China's healthcare system, to provide a basis for decision-making.
METHODS: We constructed a network meta-analysis of 4 clinical trials and used fractional polynomial models to indirectly compare the effectiveness of treatments. The partitioned survival model was used for cost-effectiveness analysis. Primary model outcomes included the costs in US dollars and health outcomes in quality-adjusted life-years (QALYs) and the incremental cost-effectiveness ratio (ICER) under a willingness-to-pay threshold of $33,521 (3 times the per capita gross domestic product in China) per QALY. We performed deterministic and probabilistic sensitivity analyses to investigate the robustness. To test the effect of active treatment duration on the conclusions, we performed a scenario analysis.
RESULTS: Compared with sorafenib, lenvatinib, donafenib, D + A, and T + A regimens, it yielded an increase of 0.25, 0.30, 0.95, and 1.46 life-years, respectively. Correspondingly, these four therapies yielded an additional 0.16, 0.19, 0.51, and 0.86 QALYs and all four ICERs, $40,667.92/QALY gained, $27,630.63/QALY gained, $51,877.36/QALY gained, and $130,508.44/QALY gained, were higher than $33,521 except for donafenib. T + A was the most effective treatment and donafenib was the most economical option. Sensitivity and scenario analysis results showed that the base-case analysis was highly reliable.
CONCLUSION: Although combination therapy could greatly improve patients with uHCC survival benefits, under the current WTP, donafenib is still the most economical option.
2. Value Health. 2022 May;25(5):796-802. doi: 10.1016/j.jval.2021.10.016. Epub 2021 Dec 1.
Cost-Effectiveness of Systemic Treatments for Metastatic Castration-Sensitive Prostate Cancer: An Economic Evaluation Based on Network Meta-Analysis.
Wang L, Hong H, Alexander GC, Brawley OW, Paller CJ, Ballreich J.
OBJECTIVES: To assess the cost-effectiveness of systemic treatments for metastatic castration-sensitive prostate cancer from the US healthcare sector perspective with a lifetime horizon.
METHODS: We built a partitioned survival model based on a network meta-analysis of 7 clinical trials with 7287 patients aged 36 to 94 years between 2004 and 2018 to predict patient health trajectories by treatment. We tested parameter uncertainties with probabilistic sensitivity analyses. We estimated drug acquisition costs using the Federal Supply Schedule and adopted generic drug prices when available. We measured cost-effectiveness by an incremental cost-effectiveness ratio (ICER).
RESULTS: The mean costs were approximately $392 000 with androgen deprivation therapy (ADT) alone and approximately $415 000, $464 000, $597 000, and $959 000 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide, added to ADT, respectively. The mean quality-adjusted life-years (QALYs) were 3.38 with ADT alone and 3.92, 4.76, 3.92, and 5.01 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide, added to ADT, respectively. As add-on therapy to ADT, docetaxel had an ICER of $42 069 per QALY over ADT alone; abiraterone acetate had an ICER of $58 814 per QALY over docetaxel; apalutamide had an ICER of $1 979 676 per QALY over abiraterone acetate; enzalutamide was dominated. At a willingness to pay below $50 000 per QALY, docetaxel plus ADT is likely the most cost-effective treatment; at any willingness to pay between $50 000 and $200 000 per QALY, abiraterone acetate plus ADT is likely the most cost-effective treatment.
CONCLUSIONS: These findings underscore the value of abiraterone acetate plus ADT given its relative cost-effectiveness to other systemic treatments for metastatic castration-sensitive prostate cancer.
Relevant answer
Answer
Did you ask the authors?
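For readers less familiar with the mechanics the question refers to: a partitioned survival model only needs the modelled overall survival OS(t) and progression-free survival PFS(t) curves, from which state occupancy, QALYs and costs follow by simple bookkeeping; the network meta-analysis / fractional polynomial step simply supplies better OS and PFS curves per treatment. A stripped-down Python sketch with hypothetical exponential curves, utilities and costs:
import numpy as np

dt = 1.0 / 12                          # monthly cycle, in years
t = np.arange(0, 10, dt)               # 10-year horizon

# Hypothetical survival curves for one treatment arm (from the NMA in a real analysis)
pfs = np.exp(-0.40 * t)                # progression-free survival
os_ = np.exp(-0.25 * t)                # overall survival

progression_free = pfs
progressed = np.clip(os_ - pfs, 0.0, None)

u_pf, u_prog = 0.80, 0.60              # hypothetical utilities
c_pf, c_prog = 5000.0, 3000.0          # hypothetical monthly costs
r = 0.03                               # annual discount rate
disc = 1.0 / (1.0 + r) ** t

qalys = np.sum((progression_free * u_pf + progressed * u_prog) * disc) * dt
costs = np.sum((progression_free * c_pf + progressed * c_prog) * disc)
print(f"QALYs = {qalys:.2f}, costs = {costs:,.0f}")
Running the same bookkeeping per arm and differencing gives the ICERs; in practice this is commonly built in R or Excel, with the probabilistic sensitivity analysis simply re-sampling the curve parameters.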
  • asked a question related to Probabilistic Models
Question
2 answers
I’m trying to fit some kind of causal model to continuous value data by solving differential equations probabilistically (machine learning).
Currently I’m solving complex-valued vector quadratic differential equation so there are more cross correlations between variables.
dx(t)/dt = diag(Ax(t)x(t)^h) + Bx(t) + c + f(t)
or just
dx(t)/dt = diag(Ax(t)x(t)^h) + Bx(t) + c
diag() takes the diagonal of a square matrix.
But my differential equation math is rusty, because I studied differential equations 20 years ago. I solved the equation in the 1-dimensional case but would need help for vector-valued x(t).
Would someone point me to appropriate material?
EDIT: I did edit the question to be a bit more clear to read.
Relevant answer
Answer
I understand that you want to solve a system of differential equations, but the details you provide are a bit confusing.
A second order differential equation is one that contains the second derivative of the unknown function and maybe, but not necessarily, the first derivative. These equations are linear if the unknown function and its derivatives are raised to the first power only; otherwise they are non-linear. Usually, by quadratic differential equation, we mean one that contains the square of the unknown function. They are different things.
The example you provide seems to be a first order differential equation, but could you write it more clearly?
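In case a numerical route helps while the analytical question is being clarified: the vector equation in the question can be integrated directly by stacking real and imaginary parts, e.g. with scipy. A minimal sketch in which the matrices, constant term and initial state are random placeholders:
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
c = rng.normal(size=n) + 1j * rng.normal(size=n)
x0 = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def rhs(t, y):
    # y stacks Re(x) and Im(x); rebuild x, apply dx/dt = diag(A x x^H) + B x + c
    x = y[:n] + 1j * y[n:]
    dx = np.diag(A @ np.outer(x, x.conj())) + B @ x + c
    return np.concatenate([dx.real, dx.imag])

sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([x0.real, x0.imag]), rtol=1e-8)
x_end = sol.y[:n, -1] + 1j * sol.y[n:, -1]
print(x_end)
Note that quadratic right-hand sides can blow up in finite time, so the integration span and the size of the initial state matter.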
  • asked a question related to Probabilistic Models
Question
3 answers
Hi, for a network traffic analysis task I need a probabilistic model to analyse sequences of network data. Each observation here is an event consisting of structured information (e.g., IP addresses, ports, protocol type). I'm interested in the dependencies between these observations, using a generative model. Any ideas?
Relevant answer
Answer
Here, you might find a good starting point:
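In addition, one simple generative baseline before reaching for heavier models is a first-order Markov chain over discretised events (e.g., protocol:port tokens), whose transition matrix is learned by counting. A toy Python sketch with a made-up event sequence:
import numpy as np

# Hypothetical discretised event sequence (e.g., "protocol:port" tokens)
events = ["tcp:80", "tcp:443", "tcp:80", "udp:53", "tcp:80", "tcp:443", "udp:53", "tcp:80"]

states = sorted(set(events))
idx = {s: i for i, s in enumerate(states)}

counts = np.ones((len(states), len(states)))     # Laplace smoothing
for a, b in zip(events[:-1], events[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # transition probabilities

print(states)
print(P.round(2))

# The same matrix can score sequences (low log-likelihood = unusual traffic)
loglik = sum(np.log(P[idx[a], idx[b]]) for a, b in zip(events[:-1], events[1:]))
print("log-likelihood of the observed sequence:", round(loglik, 2))
Hidden Markov models or neural sequence models generalise this when the dependencies are richer than one step.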
  • asked a question related to Probabilistic Models
Question
3 answers
*What if we only managed to obtain one discharge value (e.g. 1.8 m3/s) for a river section; is it possible for us to create a predictive hydrograph?
*What are the parameters needed?
*Are there any articles or journals to support the probabilistic analysis?
Thank you in advance!
Relevant answer
Answer
The hydrograph is the signature response of a stream, or river, to its input as rainfall, snowmelt or springflow in some instances, and is a function of the land use, topography, stream and channel type and morphology, including gradient, sinuosity, storage such as lakes and ponds, etc. If a scientist or hydrologist wants to consider a hydrograph for a stream, it is usually measured, by developing a stage-discharge relationship at a stable cross section and measuring stage through time, such as with a water level recorder, transducer, etc. The USGS and probably some hydrology books have various papers or books that describe how to accomplish this. For research-level work, generating a hydrograph requires applying appropriate standards of measurement and monitoring.
Rainfall is also measured at select locations, and a snow pack, if present, holds water storage that slowly melts as temperature increases, or may melt quickly in a warm rain-on-snow event. Professor Luna Leopold installed a staff gauge in the stream near his house and used a telescope to collect stage levels during storms, owing to his great interest in hydrology. If your country has nearby gauged streams and rainfall within the same physiographic, climatic and topographic zones, you would be much better off using those data and adjusting them to the stream of interest. I think I may have uploaded a brief report that developed a relationship between short-term data in the upper Chattooga River and the long-term stream gauging data in the lower Chattooga River.
In most instances, predicting hydrographs also requires predicting rainfall or, in some instances, snowmelt. Various methods are used for flood prediction, but a common one is taking the annual peak flow for an extended period of time (25, 50 or more years), ordering the data, plotting on probability paper, and extending the curve to some extent for even less frequent events. You might also look up the unit hydrograph approach, which is based on a hydrograph with 1 inch of water yield. A substantial amount of instrumentation and measurement is the standard approach. Various models may become useful for making estimates in ungauged circumstances when validated against the conditions specific to your area.
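To make the flood-frequency part of this answer concrete, here is the usual plotting-position calculation on a series of annual peak flows (values invented purely for illustration):
import numpy as np

# Hypothetical annual peak discharges (m3/s), one value per year
peaks = np.array([42, 55, 38, 61, 47, 95, 52, 70, 44, 58, 49, 66, 40, 84, 51])

peaks_sorted = np.sort(peaks)[::-1]          # descending: rank 1 = largest flood
n = len(peaks_sorted)
rank = np.arange(1, n + 1)

p_exceed = rank / (n + 1)                    # Weibull plotting position
return_period = 1.0 / p_exceed               # in years

for q, p, T in zip(peaks_sorted, p_exceed, return_period):
    print(f"Q = {q:5.1f} m3/s   P(exceed) = {p:.2f}   T = {T:5.1f} yr")
These points are what would be plotted on probability paper (or fitted with a Gumbel or log-Pearson III distribution) to extrapolate toward rarer events; a single discharge value, as in the question, is not enough for this.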
  • asked a question related to Probabilistic Models
Question
2 answers
Hello.
I'm a beginner of diffusion tensor imaging.
I want to know pros and cons & similarities and differences between probabilistic tractography of FSL and MRtrix.
Could you explain about them?
I really appreciate it if someone could help!
Relevant answer
Answer
Essentially, MRtrix uses spherical harmonics for FOD modelling, whereas FSL uses Bayesian probability theory.
Jerome
  • asked a question related to Probabilistic Models
Question
3 answers
I have two questions and hope for some expert advice please?
1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?
2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?
Thank you so much.
Relevant answer
Answer
Hi,
Maybe some of these references are of help to you:
Glick HA, Doshi JA, Sonnad SS, Polsky D. Economic Evaluation in Clinical Trials. 2014. 272 pages. ISBN 0199685029.
Khan I, Crott R, et al. Economic Evaluation of Cancer Drugs: Using Clinical Trial and Real-World Data. 2019. 442 pages. ISBN 1498761305.
Khan I. Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement: An Applied Approach Using SAS & STATA. Chapman & Hall/CRC Biostatistics Series. CRC Press, 2015. ISBN 978-1-4665-0548-3.
Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes. 2015. 461 pages. ISBN 0199665877.
  • asked a question related to Probabilistic Models
Question
5 answers
Hi all,
In an experimental investigation, there are two parameters to be measured, say X1 and X2. My goal is to see how X1 varies with X2. Specifically, I am interested in classifying the graph of X1 versus X2 according to a number of characteristic graphs. Each characteristic graph corresponds to a specific state of the system, which I need to determine.
The problem is with the graph of X1 vs X2 undergoing significant changes when replicating the test, thus making the classification a perplexing task. A simple approach I could think of is taking the average of these graphs, but I am not sure if this is reasonable; I am looking for a more mathematical framework.
Any comments would be appreciated.
Regards,
Armin
Relevant answer
Answer
Non-reproducible outcomes suggest that there are one or more fundamental flaws in the research. Your sample size might be too small given the system variability. You might be missing some key variables. The analysis might not be appropriate. Or some combination of all three. Some experiments are difficult, and maybe I can only have three replicates; I run the experiment again and get a different outcome. If you are certain that you have the right experiment, then try running the experiment several times and block each run, but analyze them as one experiment.
  • asked a question related to Probabilistic Models
Question
6 answers
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Mathematics helps AI scientists solve challenging, deeply abstract problems using traditional methods and techniques known for hundreds of years. Math is needed for AI because computers see the world differently from humans. Where humans see an image, a computer sees a 2D or 3D matrix. With the help of mathematics, we can feed these representations into a computer, and linear algebra provides the tools for processing such data sets.
Here you can find good sources for this:
  • asked a question related to Probabilistic Models
Question
8 answers
Probabilistic metric spaces, which are not metric spaces, have been widely developed in theory, but can someone give some examples of applications of such spaces?
Relevant answer
Answer
Dear Behrad,
Please look at this new paper. We investigated a routing protocol for d2d communications based on probabilistic normed spaces:
  • asked a question related to Probabilistic Models
Question
4 answers
I invite you to see the newly launched website about the Choice Wave and the Theory of Economic Parallel Rationality, tradition and innovation revolutionising economic thought.
Relevant answer
Answer
Interesting point...
  • asked a question related to Probabilistic Models
Question
59 answers
A hypothetical example (hehe, hypothesis): assume we have enough observations and apply both a "frequentist" and a "Bayesian" model (e.g., a linear model with Gaussian error distribution, and for the Bayesian model an uninformative prior to keep it rather vague). We look at the intervals, and both models result in the same intervals. Then, according to [1], if we know they are similar* it is similar to suggest that the estimate of the population value fell between them. Are both then equally "wrong"? And do they actually quantify uncertainty, as they both "want" to (or am I wrong, since they really seem to want to make probabilistic statements on the data about the population, although indeed one is P(data|estimate) and the other P(estimate|data))? Hence, the data are certain, the estimates are based on the data, so it seems certain that the estimate (which might approximate the population, assuming a perfectly sampled population and that this description makes sense) might take on a specified value (note that the confidence and credibility intervals have converged). Again, the data are certain; what is uncertain is what is not in the data. I am just curious what more statistically educated people think of this and how they would communicate it, as this seems hardly discussed (or it is my ignorance).
Thank you in advance for your input.
*Not their words. I just remember a part from the text.
Relevant answer
Answer
I would not count myself among the targeted group of "more statistically educated people", but I would nevertheless participate in this discussion :)
If you use the same probability model for the response and the same structural model (so that the coefficients have the same meaning) in both the frequentist and the Bayesian analysis, and if you use a flat prior, then the *limits* of the "typical" (1-a) confidence interval are always identical to the *limits* of the "typical" (1-a) credible interval. "Typical" means that the intervals are central, leaving a/2 confidence or credibility on either side. So this is not related to having "enough observations". The sample size is relevant only when the prior is "informed". This prior information will be "overridden" or "overruled" by the information from the data in large samples, so that the limits converge with increasing sample size (if the estimators are consistent).
That it "is certain the estimate might approximate the population" is a consequence of the consistency.
So let's take the case that you have a confidence interval and a credible interval with identical limits. Then they still have an entirely different interpretation:
The confidence interval says that all possible estimates outside the interval are deemed statistically incompatible with the (certain) observed data. It stands for itself. It is a random interval (RI) that is derived from the probability distribution of the random variable (RV) that models the response and the sample size (observations are realizations of the RV, observed confidence intervals are realizations of the RI, which is a function of the RV that returns two limits). A new sample would give a new confidence interval, and this will have different limits. It may not even overlap with the current confidence interval.
The credibility interval says that you assign (1-a) probability to the event that the population value is inside this interval. A new sample would add information to your knowledge about the population value (forcing you to *update* your probability distribution assigned to that parameter). There are no two credible intervals - there is only one, and this (always) is based on everything you can reasonably know about the population value.
Not sure if I touched your question...
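As a minimal illustrative sketch of the coinciding limits (normal mean with known standard deviation and a flat prior, synthetic data as a placeholder): the central 95% confidence interval and the central 95% credible interval come out with identical limits.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 2.0                      # known standard deviation of the response
x = rng.normal(loc=5.0, scale=sigma, size=30)
n, xbar = len(x), x.mean()
se = sigma / np.sqrt(n)

# Frequentist: central 95% confidence interval for the mean
z = stats.norm.ppf(0.975)
ci = (xbar - z * se, xbar + z * se)

# Bayesian with a flat prior on the mean: the posterior is Normal(xbar, se^2),
# so the central 95% credible interval has the same limits.
cri = stats.norm.interval(0.95, loc=xbar, scale=se)

print("confidence interval:", ci)
print("credible interval:  ", cri)
```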
  • asked a question related to Probabilistic Models
Question
3 answers
I have 6 variables with mean, standard deviation, CoV, min and max. Please find the attached Excel file.
Relevant answer
You may use the approach for regression analyses. That should work.
  • asked a question related to Probabilistic Models
Question
4 answers
I'm working on a new probabilistic routing protocol based on a k-shortest-path selection one, and I wish to test it on the NS3 simulator, but there is not much literature on how to proceed. I'm looking for some tips which can help me do that; I'm open to suggestions. Thx!
Relevant answer
Nice advice! It is very astute to exploit the existing algorithms and improve them. I'm going to try it ASAP.
Thx!
  • asked a question related to Probabilistic Models
Question
1 answer
Hi there RG community, I'm back with a couple of new questions. I'm trying to implement a probabilistic routing protocol in NS2/NS3, but there is not much literature on it. Has anyone done that before? If so, how can I proceed? I'm open to exchanging on that. Thx!
Relevant answer
Answer
Good question
  • asked a question related to Probabilistic Models
Question
7 answers
Probability
Causation
Relevant answer
In probabilistic approaches to causation, causal relata are represented by events or random variables in a probability space. Since the formalism requires us to make use of negation, conjunction, and disjunction, the relata must be entities (or be accurately represented by entities) to which these operations can be meaningfully applied.
Reference :
  • asked a question related to Probabilistic Models
Question
1 answer
The publicly released THUMS model comes with a standard output of nodal coordinates of certain anatomical locations. Since there is an already established usergroup, I was wondering if there is a standard output template created with THUMS to output cross-sectional forces and moments in different bones (especially the long bones) that could be used to predict the risk of injuries probabilistically? I could define my own outputs, but I was wondering if there is a standard template so the results could be compared across multiple groups.
Thanks!
Relevant answer
Answer
Ask the usergroup. D. Booth
  • asked a question related to Probabilistic Models
Question
4 answers
I am doing a project on anomaly detection in videos using Matlab. I have to perform data association with clusters using JPDA, but unfortunately it isn't working well. I have gone through several papers on JPDA, but they are all about tracking objects.
Kindly guide me on how to proceed, or point me to any research paper in which JPDA is used to perform data association rather than tracking.
Regards
Relevant answer
Answer
Ijaz Durrani
Thanks for your collaboration. This paper is about pedestrian tracking through JPDAF, but I need a paper in which JPDA is used only for association, not for tracking.
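For what it is worth, here is a minimal, purely illustrative sketch of single-frame probabilistic data association between detections and cluster centroids using Gaussian likelihoods. It is not full JPDA (no enumeration of joint association hypotheses), and the measurement covariance and clutter density are assumed placeholder values.
```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy setup: 3 cluster centroids and 4 detections in 2-D feature space.
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
detections = np.array([[0.2, -0.1], [4.8, 5.3], [0.1, 4.7], [2.5, 2.5]])
cov = np.eye(2) * 0.5           # assumed measurement covariance
p_clutter = 1e-3                # assumed density of spurious detections

# Likelihood of each detection under each cluster, plus a clutter column.
lik = np.array([[multivariate_normal.pdf(d, mean=c, cov=cov) for c in centroids]
                for d in detections])
lik = np.hstack([lik, np.full((len(detections), 1), p_clutter)])

# Normalise per detection: soft association probabilities
# (last column = probability the detection is clutter / unassociated).
assoc = lik / lik.sum(axis=1, keepdims=True)
print(np.round(assoc, 3))
```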
  • asked a question related to Probabilistic Models
Question
2 answers
In RStudio, there are many commands in the gumbel package, and their arguments also differ.
I'm asking about the alpha parameter of the copula, which must be greater than 1. If this is the one used to plot the probability paper, how can I choose the value of alpha?
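One common way to choose the Gumbel copula parameter (it must be at least 1) is to invert its known relationship with Kendall's tau, theta = 1/(1 - tau). Below is a hedged Python sketch of that moment-style estimate; the bivariate data are synthetic placeholders standing in for your sample.
```python
import numpy as np
from scipy.stats import kendalltau

# Synthetic bivariate data standing in for the user's sample.
rng = np.random.default_rng(2)
x = rng.gamma(2.0, size=500)
y = 0.7 * x + rng.gamma(2.0, size=500)   # positively dependent pair

# For the Gumbel copula, Kendall's tau and the parameter theta are linked by
#   tau = 1 - 1/theta  =>  theta = 1 / (1 - tau), with theta >= 1.
tau, _ = kendalltau(x, y)
theta = 1.0 / (1.0 - tau)
print(f"Kendall's tau = {tau:.3f}, implied Gumbel parameter = {theta:.3f}")
```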
  • asked a question related to Probabilistic Models
Question
9 answers
Data sets, when structured, can be put in vector form (v(1)...v(n)); adding time dependency, this becomes v(i, t) for i=1...n and t=1...T.
Then we have a matrix of terms v(i, j)...
Matrices are important: they can represent linear operators in finite dimension. Composing such operators f and g as f∘g translates into the matrix product F×G, with obvious notation.
Now a classical matrix M is a table of lines and columns, containing numbers or variables. Precisely at line i and column j of such table, we store term m(i, j), usually belonging to real number set R, or complex number set C, or more generally to a group G.
What about generalising such a matrix of numbers into a matrix of sets (in any field of science, this could mean storing all data collected for a particular parameter "m(i, j)" as a set M(i, j) of data)?
What can we observe, say, define on such matrices of sets?
If you are as curious as me, in your own field of science or engineering, please follow the link below, and more importantly, feedback here with comments, thoughts, advice on how to take this further.
Ref:
Relevant answer
Thank you for sharing this Question
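As a toy illustration of the idea, here is a purely exploratory Python sketch of a matrix whose entries are sets, with elementwise union and intersection standing in for matrix operations; the entries are arbitrary placeholders.
```python
# A 2x2 "matrix of sets": entry (i, j) stores the set of all data collected
# for a parameter m(i, j).
A = [[{1, 2}, {3}],
     [{4, 5}, set()]]
B = [[{2, 7}, {3, 9}],
     [set(),  {0}]]

def elementwise(op, X, Y):
    """Apply a binary set operation entrywise to two matrices of sets."""
    return [[op(X[i][j], Y[i][j]) for j in range(len(X[0]))]
            for i in range(len(X))]

union = elementwise(set.union, A, B)            # plays the role of "+"
inter = elementwise(set.intersection, A, B)     # one possible "product"-like op
print(union)
print(inter)
```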
  • asked a question related to Probabilistic Models
Question
17 answers
I am stuck between quantum mechanics and general relativity. The mind-consuming scientific discourse, ranging from the continuous and deterministic to the probabilistic, seems to have no end. I would appreciate any words which can help me understand at least a bit, with relevance.
Thank you,
Regards,
Ayaz
Relevant answer
Answer
I guess that scattering theory will always be a trend in QM.
The experimental neutron diffraction field, for example, is always creating new tools where QM is widely used.
Although it is tied to a few experimental facilities around the world, it is still a trend.
We constantly see new discoveries using neutron diffraction in solid-state physics.
  • asked a question related to Probabilistic Models
Question
3 answers
I would like to know the details of how it is used in a probabilistic metric space. As we know, to generalize the triangle inequality we use a triangular norm, but how? I need an explanation, and also how and where it is used in a PM space.
Relevant answer
Answer
It is a good question
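For reference, the standard Menger-type formulation (textbook material, added here only as a reminder) writes the distance between points p and q as a distribution function F_{p,q} and generalises the triangle inequality with a t-norm T:
```latex
% Menger-type triangle inequality in a probabilistic metric space:
% the "distance" between p and q is a distribution function F_{p,q},
% and a t-norm T (e.g. T(a,b) = min(a,b) or T(a,b) = ab) replaces the
% ordinary addition of bounds in the classical triangle inequality.
\[
  F_{p,r}(s + t) \;\ge\; T\bigl(F_{p,q}(s),\, F_{q,r}(t)\bigr)
  \qquad \text{for all } s, t \ge 0 .
\]
```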
  • asked a question related to Probabilistic Models
Question
3 answers
I now have a set of input and output data, and a low-order transfer function model which has several parameters to be identified.
If I use tfest in Matlab, I can identify a set of parameter results, but this is not what I expect. What I expect to get is an interval that can encompass all or most of the observations. Which probability method can solve my problem? Preferably it would be a probabilistic method or a prediction interval (PI) method. I would be very grateful if you could point me to a paper or website.
Relevant answer
Answer
David Eugene Booth Thank you for your suggestions. Those are useful for me!
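One generic probabilistic option, sketched below under stated assumptions rather than as the tfest workflow: fit a low-order model to input/output data and bootstrap the residuals to obtain parameter intervals (prediction intervals can be built the same way). The first-order step-response model and all numbers are placeholders.
```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order step response y(t) = K * (1 - exp(-t / tau)).
def model(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 60)
y = model(t, 2.0, 1.5) + rng.normal(0, 0.05, t.size)   # synthetic measurements

# Point estimate of the parameters and the fitted residuals
p_hat, _ = curve_fit(model, t, y, p0=[1.0, 1.0])
resid = y - model(t, *p_hat)

# Residual bootstrap: refit on resampled residuals to get parameter intervals.
boot = []
for _ in range(500):
    y_b = model(t, *p_hat) + rng.choice(resid, size=resid.size, replace=True)
    p_b, _ = curve_fit(model, t, y_b, p0=p_hat)
    boot.append(p_b)
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("K interval:  ", (lo[0], hi[0]))
print("tau interval:", (lo[1], hi[1]))
```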
  • asked a question related to Probabilistic Models
Question
2 answers
I'm developing a readiness assessment model regarding contractors' preparedness for a specific activity. To do so, a survey study was carried out and the data analyzed with PLS-SEM to obtain the CSFs contributing to that readiness. Nevertheless, because the subject is so specific, it was impossible to define or quantify a population for it, and hence to draw a probabilistic sample, which can compromise the external validity (generalizability) of my readiness assessment model. Is it feasible to try to reduce that generalizability issue with the minimum sample size requirements (by means of power analyses) from Cohen (1992) and the use of PLS predict to determine the predictive power of the model?
I'd be delighted if any colleague could reply to this need
Relevant answer
Answer
In general, using any rule-of-thumb for sample size planning or assessing statistical power is problematic.
Random sampling provides a model-free basis for generalization. Propensity score-based methods for generalization require three assumptions to ensure their validity. First, the stable unit treatment value assumption must hold for all units in the experiment and in the population. Second, generalization using propensity score methods requires strongly ignorable treatment assignment in the experiment. Finally, generalization using propensity score methods requires strongly ignorable sample selection. Rules of thumb should also take sample size into account, since the features of probability samples (the benchmark for generalizability) differ markedly in small samples. This raises the issue of how to judge the adequacy of the match between the experimental sample and the inference population.
Probability sampling is the gold standard for generalizing from samples. The idea is to use the adequacy of matching that would be expected if the experiment had a probability sample to develop benchmarks of adequate matching. There is no reason to expect small experimental samples to match inference populations better than probability samples.
  • asked a question related to Probabilistic Models
Question
2 answers
The birth and death probabilities are p_i and q_i respectively, and 1-(p_i+q_i) is the probability of no change in the process. Zero ({0}) is an absorbing state and the state space is {0,1,2,...}. What are the conditions for {0} to be recurrent (positive or null)? Is the set {1,2,3,...} transient? What can we say about the duration of the process until absorption, the stationary distribution if it exists, etc.?
Every comment is appreciated.
Relevant answer
Answer
Since {0} is absorbing by assumption, it is always a recurrent state (in fact positive recurrent). As long as each q_i > 0, every state in {1,2,...} can reach the absorbing state and never return, so {1,2,...} is transient.
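A hedged toy simulation of such a chain may help build intuition: it estimates the absorption probability and the mean time to absorption for one illustrative choice of p_i and q_i (constants here, chosen arbitrarily). Recall the classical condition that absorption is certain exactly when the series of products q_1...q_i/(p_1...p_i) diverges, so other choices of rates can behave differently.
```python
import numpy as np

# Illustrative transition probabilities (assumptions, not from the question):
# from state i >= 1, birth with prob p(i), death with prob q(i), stay otherwise.
def p(i):  # birth probability
    return 0.3

def q(i):  # death probability
    return 0.4

rng = np.random.default_rng(4)

def run_chain(start, max_steps=100_000):
    """Return (absorbed, steps) for one trajectory started at `start`."""
    state, steps = start, 0
    while state != 0 and steps < max_steps:
        u = rng.random()
        if u < p(state):
            state += 1
        elif u < p(state) + q(state):
            state -= 1
        steps += 1
    return state == 0, steps

results = [run_chain(start=3) for _ in range(5_000)]
absorbed = np.array([a for a, _ in results])
times = np.array([s for a, s in results if a])
print("estimated absorption probability:", absorbed.mean())
print("mean time to absorption (given absorbed):", times.mean())
```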
  • asked a question related to Probabilistic Models
Question
8 answers
I am looking for some analytical, probabilistic, statistical or other way to compare the results of a number of different approaches implemented on the same test model. These approaches can be different optimization techniques implemented on a similar problem, or different types of sensitivity analysis implemented on a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Relevant answer
Answer
Hi! You might want to have a look at one of my publications: 10.1016/j.envsoft.2020.104800
I recently conducted a similar study where I applied three different sensitivity analysis methods to fire simulations and compared their results!
Cheers!
  • asked a question related to Probabilistic Models
Question
3 answers
Probabilistic sensitivity analysis is criticised for potentially introducing uncertainty itself because of the consideration of the distribution of the parameters. Are there ways of addressing this potential for additional uncertainty?
Relevant answer
Answer
If you look deeper into the literature, there are some sensitivity analysis methods that are independent of the sampling technique!
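One way to put a number on the concern in the question, sketched below purely as an illustration: estimate first-order Sobol indices with a simple pick-freeze Monte Carlo estimator, then rerun the whole analysis under a different assumed input distribution; the spread between the runs shows how much the distributional assumption itself matters. The model function and both input distributions are toy placeholders.
```python
import numpy as np

def model(X):
    """Toy model: strongly driven by x1, weakly by x2."""
    return 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

def first_order_sobol(sampler, n=50_000, d=2, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices S_i."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, d), sampler(rng, n, d)
    yA, yB = model(A), model(B)
    var = yA.var()
    S = []
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]          # C shares only column i with A
        yC = model(C)
        S.append((np.mean(yA * yC) - yA.mean() * yB.mean()) / var)
    return np.array(S)

uniform = lambda rng, n, d: rng.uniform(-1, 1, size=(n, d))
normal = lambda rng, n, d: rng.normal(0, 1, size=(n, d))

print("S_i assuming uniform inputs:", np.round(first_order_sobol(uniform), 3))
print("S_i assuming normal inputs: ", np.round(first_order_sobol(normal), 3))
```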
  • asked a question related to Probabilistic Models
Question
4 answers
I have a small dataset and need to learn a mixed probabilistic model (discrete + continuous) and simulate new values taking into account the learned structure.
Relevant answer
Answer
Mixed graphical models (MGMs) are graphical models learned over a combination of continuous and discrete variables. Mixed variable types are common in biomedical datasets.
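A very simple member of that family is a conditional Gaussian model: fit the discrete marginal and one Gaussian per discrete level, then simulate new mixed records. The sketch below is only illustrative, with synthetic placeholder data and variable names.
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Small synthetic mixed dataset: one discrete and one continuous variable.
data = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=40, p=[0.6, 0.4]),
    "value": np.nan,
})
data.loc[data.group == "A", "value"] = rng.normal(0.0, 1.0, (data.group == "A").sum())
data.loc[data.group == "B", "value"] = rng.normal(3.0, 0.5, (data.group == "B").sum())

# "Learn" the structure: P(group) and value | group ~ Normal(mu_g, sd_g).
p_group = data.group.value_counts(normalize=True)
params = data.groupby("group")["value"].agg(["mean", "std"])

# Simulate new mixed records from the fitted model.
new_groups = rng.choice(p_group.index.to_numpy(), size=10, p=p_group.values)
new_values = [rng.normal(params.loc[g, "mean"], params.loc[g, "std"]) for g in new_groups]
print(pd.DataFrame({"group": new_groups, "value": np.round(new_values, 2)}))
```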
  • asked a question related to Probabilistic Models
Question
10 answers
How do I calculate the sum and the difference of several random variables that follow exponential distributions and have different parameters?
(The value of lambda is different for all or some of the variables.)
Example:
L(t) = f(t) + g(t) - h(t)
with
f(t) = a·exp(-a·t)
g(t) = b·exp(-b·t)
h(t) = c·exp(-c·t)
such that:
a = Lambda_1
b = Lambda_2
c = Lambda_3.
Relevant answer
Answer
(continued)
In the case of more terms (all with different means m_j > 0, j = 1, 2, ..., n) the formulas are as follows (with it replaced by -s):
ch.f.(X_1 + X_2 + ... + X_n)(t) = 1 / [ (1 + m_1 s)(1 + m_2 s) ... (1 + m_n s) ]
= \sum_{j=1}^n A_j / (1 + m_j s),
where A_j = \prod_{k \ne j} [ 1 - m_k/m_j ]^{-1}.
Therefore, in such cases the density of the sum is equal to
\sum_{j=1}^n (A_j / m_j) \exp(-x/m_j), for x > 0.
If X_j enters the sum with a minus sign, then the first two formulas remain valid after replacing m_j by -m_j. The last one requires replacing the exponential density on the positive half-line by its mirror image on the negative half-line.
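A quick numerical sanity check of the pure-sum formula (a hedged sketch with arbitrary but distinct means, as the partial-fraction form requires):
```python
import numpy as np

m = np.array([1.0, 2.0, 3.5])          # distinct means m_j of the exponentials
rng = np.random.default_rng(6)

# Monte Carlo sample of X_1 + X_2 + X_3
samples = sum(rng.exponential(mj, size=200_000) for mj in m)

# Analytic density:  f(x) = sum_j (A_j / m_j) exp(-x / m_j),
# with A_j = prod_{k != j} 1 / (1 - m_k / m_j).
def density(x):
    f = np.zeros_like(x, dtype=float)
    for j, mj in enumerate(m):
        A = np.prod([1.0 / (1.0 - mk / mj) for k, mk in enumerate(m) if k != j])
        f += (A / mj) * np.exp(-x / mj)
    return f

xs = np.array([1.0, 5.0, 10.0])
mc = [np.mean(np.abs(samples - x0) < 0.1) / 0.2 for x0 in xs]   # crude density estimate
print("analytic:   ", np.round(density(xs), 4))
print("monte carlo:", np.round(mc, 4))
```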
  • asked a question related to Probabilistic Models
Question
4 answers
Does anyone know of geotechnical engineering software which supports subset simulation? I need to do some probabilistic analysis of a geotechnical project. However, due to the small failure probability involved, I need to use subset simulation instead of crude Monte Carlo analysis.
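Whatever software is used, the algorithm itself is easy to prototype. Below is a hedged toy implementation in the spirit of Au and Beck's subset simulation (standard-normal inputs, a made-up limit-state function, p0 = 0.1), meant only to illustrate the mechanics, not as production reliability software.
```python
import numpy as np

rng = np.random.default_rng(7)

def g(x):
    """Toy limit-state function in standard normal space; failure when g(x) <= 0."""
    return 6.0 - x.sum(axis=-1)        # exact failure probability is roughly 1e-5 here

def subset_simulation(d=2, N=2000, p0=0.1, max_levels=8):
    n_seed = int(p0 * N)
    x = rng.normal(size=(N, d))
    y = g(x)
    prob = 1.0
    for _ in range(max_levels):
        order = np.argsort(y)                    # most extreme (smallest g) first
        b = y[order[n_seed - 1]]                 # intermediate threshold
        if b <= 0:                               # final level: count true failures
            break
        prob *= p0
        seeds_x, seeds_y = x[order[:n_seed]], y[order[:n_seed]]
        xs, ys = [], []
        # Modified Metropolis: grow each seed into a chain, staying in {g <= b}.
        for cur_x, cur_y in zip(seeds_x, seeds_y):
            cur_x = cur_x.copy()
            for _ in range(N // n_seed):
                cand = cur_x + rng.normal(size=d)
                ratio = np.exp(-0.5 * (cand**2 - cur_x**2))   # component-wise N(0,1) ratio
                reject = rng.random(d) > ratio
                cand[reject] = cur_x[reject]
                cand_y = g(cand)
                if cand_y <= b:                  # accept only if still in the subset
                    cur_x, cur_y = cand, cand_y
                xs.append(cur_x.copy())
                ys.append(cur_y)
        x, y = np.array(xs), np.array(ys)
    return prob * np.mean(y <= 0)

print("estimated failure probability:", subset_simulation())
```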
  • asked a question related to Probabilistic Models
Question
8 answers
Does quantum mechanics give probabilistic results or exact quantified values?
Relevant answer
Answer
"Probabilistic results" are "exact quantified values" - about the probability distribution in question. Not all probability distributions are delta functions, but this doesn't mean that they are uniform, either.
  • asked a question related to Probabilistic Models
Question
3 answers
I am a beginner in probabilistic forecasting. From my research I have a vague idea that Monte Carlo simulation can be used to inject uncertainty into the process. Do I need to get multiple point forecasts via Monte Carlo and then do post-processing to obtain a probabilistic distribution? Can anyone help with the procedure and the steps I should follow to do probabilistic forecasting? It would be helpful if someone could share an example.
Relevant answer
Answer
The approach presented by Leutbecher and Palmer (2008) aims to assess the sensitivity of the model to initial conditions. The proposed approach can certainly estimate the spread of the trajectories of the model in a phase space and make some rough estimate of the forecast uncertainty, but it should not be confused with probabilistic modelling. The latter can only be performed when the equations used for the forecast are written explicitly for the stochastic variables. The best known model illustrating this principle is that developed for the study of Brownian motions.
The correct mathematical foundation for probabilistic modelling is the Ito calculus. Please kindly consult the following sites:
Ito calculus and Brownian motions:
Itô’s stochastic calculus: Its surprising power for applications:
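Separately from that distinction, the mechanical recipe asked about ("multiple point forecasts plus post-processing") can be sketched as follows, purely as an illustration with a placeholder AR(1) model and synthetic data: fit the model, simulate many future trajectories by resampling the fitted noise, and summarise the simulated paths as quantiles.
```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic history generated from an AR(1) process (placeholder data).
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(0, 1.0)

# Fit AR(1) by least squares: y_t ~ phi * y_{t-1}
phi = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi * y[:-1]
sigma = resid.std()

# Monte Carlo: simulate many future trajectories, then post-process to quantiles.
horizon, n_sims = 20, 2000
sims = np.zeros((n_sims, horizon))
for s in range(n_sims):
    cur = y[-1]
    for h in range(horizon):
        cur = phi * cur + rng.normal(0, sigma)
        sims[s, h] = cur

q05, q50, q95 = np.percentile(sims, [5, 50, 95], axis=0)
print("median forecast:", np.round(q50[:5], 2))
print("90% interval at h=1:", (round(q05[0], 2), round(q95[0], 2)))
```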
  • asked a question related to Probabilistic Models
Question
5 answers
  • I want to implement a feature selection framework using criteria like probabilistic error or probabilistic distance. I also have a doubt: if I have a non-parametric distribution for my features, can I use kernel-based estimation techniques to find the class-conditional probabilities, instead of an analytical function, to evaluate a criterion like probabilistic error or the Bayes error rate?
  • I was thinking that even if we have some non-parametric distribution, we can use the probability values estimated by kernel density estimation, and the integration would ultimately converge to a summation in the formula I have attached for the error rate.
  • Is my approach fine? If anyone has tried this, please guide me.
Relevant answer
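One hedged way to carry out the idea in the question: replace the analytic class-conditional densities with kernel density estimates and turn the Bayes-error integral into a sum over a grid. The one-dimensional sketch below uses scipy's gaussian_kde and synthetic classes as placeholders (for the two normal classes used here the true Bayes error is roughly 0.16).
```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)

# Synthetic 1-D feature for two classes (placeholders for real features).
x0 = rng.normal(0.0, 1.0, 300)     # class 0
x1 = rng.normal(2.0, 1.0, 300)     # class 1
p0, p1 = 0.5, 0.5                  # class priors

# Non-parametric class-conditional densities via kernel density estimation.
f0, f1 = gaussian_kde(x0), gaussian_kde(x1)

# Bayes error = integral of min(p0*f0(x), p1*f1(x)) dx, approximated on a grid
# (the integral "converges to a summation", as suggested in the question).
grid = np.linspace(-5, 7, 2000)
dx = grid[1] - grid[0]
bayes_err = np.sum(np.minimum(p0 * f0(grid), p1 * f1(grid))) * dx
print(f"estimated Bayes error for this feature: {bayes_err:.3f}")
```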
  • asked a question related to Probabilistic Models
Question
17 answers
Do you have any experience with probabilistic software for structural reliability assessment? Any links?
Relevant answer
Answer
  • asked a question related to Probabilistic Models
Question
1 answer
A question to all you stroke and tDCS / TMS researchers.
I want to visualize the lesion location of my participants in relation to the stimulation site. In my case I have the lesions as ROIs normalised to the MNI standard space. Now I would like to create a 3-D image with the lesion mask as a volume and mark position P4 on top. My objective is to see whether I actually tried to stimulate healthy or affected tissue with my tDCS protocol.
Alternatively, marking P4 on the 2-D slices would be fine as well; I just don't know how. I found the paper by Okamoto et al. (2004), which gives coordinates for the MNI templates.
Thanks everyone for any advice.
Relevant answer
Following
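One possible route, sketched here only as an untested suggestion: nilearn can render an MNI-space lesion ROI and overlay a marker at the electrode position. The file path and the P4 coordinate below are placeholders; the coordinate would have to be taken from Okamoto et al. (2004).
```python
from nilearn import plotting

# Path to the lesion ROI already normalised to MNI space (placeholder path).
lesion_img = "lesion_mask_mni.nii.gz"

# Placeholder MNI coordinate for electrode position P4 -- replace with the
# value reported by Okamoto et al. (2004) for the 10-20 system.
p4_mni = (50, -60, 50)

display = plotting.plot_roi(lesion_img, title="Lesion vs. tDCS target P4")
display.add_markers([p4_mni], marker_color="red", marker_size=80)
display.savefig("lesion_vs_P4.png")
```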
  • asked a question related to Probabilistic Models
Question
9 answers
Suppose we have statistics N(m1, m2), where m1 is the value of the first factor, m2 is the value of the second factor, and N(m1, m2) is the number of observations corresponding to the values of factors m1 and m2. In this case, the probability P(m1, m2) = N(m1, m2)/K, where K is the total number of observations. In real situations, detailed statistics N(m1, m2) is often unavailable, and only the normalized marginal values S1(m1) and S2(m2) are known, where S1(m1) is the normalized total number of observations corresponding to the value m1 of the first factor and S2(m2) is the normalized total number of observations corresponding to the value m2 of the second factor. In this case P1(m1) = S1(m1)/K and P2(m2) = S2(m2)/K. It is clear that based on P1(m1) and P2(m2) it is impossible to calculate the exact value of P(m1, m2). But how to do this approximately with the best confidence? Thanks in advance for any advice.
Relevant answer
Answer
For your normalising constant, or for the marginal distribution in the vector case, you may use a saddle point approximation; see e.g. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.1487&rep=rep1&type=pdf
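The simplest hedged options are worth stating explicitly: with only the marginals, the maximum-entropy guess is the product P1(m1)·P2(m2) (i.e. assuming independence), and the Fréchet bounds max(0, P1 + P2 - 1) <= P(m1, m2) <= min(P1, P2) show how far off that guess could be in the worst case. A small illustrative sketch with placeholder marginals:
```python
import numpy as np

# Normalised marginals for two factors (placeholder values that sum to 1).
P1 = np.array([0.2, 0.5, 0.3])          # factor 1 takes 3 levels
P2 = np.array([0.6, 0.4])               # factor 2 takes 2 levels

# Independence (maximum-entropy) approximation of the joint table.
P_indep = np.outer(P1, P2)

# Frechet bounds on each joint cell, given only the marginals.
lower = np.maximum(0.0, P1[:, None] + P2[None, :] - 1.0)
upper = np.minimum(P1[:, None], P2[None, :])

print("independence approximation:\n", np.round(P_indep, 3))
print("lower bounds:\n", np.round(lower, 3))
print("upper bounds:\n", np.round(upper, 3))
```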
  • asked a question related to Probabilistic Models
Question
2 answers
I am working on small modular nuclear implementation, initially in Europe. We believe that this should be a social enterprise, and your work may well enable the three legs of sustainability to be brought to a common currency: economic (it already undercuts all other sources), social (harder to prove, but the acid test is municipal and private pension investment on a massive scale), and environmental (high-temperature, gas-cooled, dry-cooled, TRISO-fuelled reactors are vastly superior environmentally to PWRs and are also distinctive in being inherently safe, provable in real life, as opposed to probabilistically safe, which was needed only because the consequence of test-proving safety is too severe for LWR physics). We envisage 50 MWe units distributed close to domestic and industrial sites with heat, hydrogen and power demands, so your mapping work would form a strong basis for site selection. I have done similar work for renewables site selection in Scotland.
Relevant answer
Answer
Robert,
It is good news that people are more comfortable with SMRs. They should be, at least with high-temperature 'dry'-moderated ones. There is a fundamental problem with the physics of wet-moderated ones, except in submarines immersed in an infinite heat sink! Unfortunately a decision in the USA took the market in the wet direction, and only now is the dry route recognised as so much safer and inherently safe at a small scale. Fortunately, the UK and Canada knew this; in the UK we only have one wet reactor (Westinghouse Sizewell B), though we have another wet reactor under construction (Hinkley C = EPR), but it is unlikely ever to run.
The cost challenge is not as bleak as you mention. The main reason is that SMRs produce both electricity and low-temperature heat in quantities that are saleable. A large reactor cannot do this, and this adds 50%+ to the rate of economic return. Furthermore, the electricity is generated at the point of use, so the 20% transmission cost is removed from the delivered price of electricity. Finally, the inherent safety means that most of the ACTIVE safety systems are absent, so they need neither capital nor maintenance costs.
I am optimistic but also realistic, and anticipate that the first commercial ones will run in earnest by about 2029, with a huge upsurge in advanced economies by 2035 and in the developing world, where the demand growth really occurs, from 2035.
Regards,
James
  • asked a question related to Probabilistic Models
Question
6 answers
We are working on a large number of building-related time series data sets that display various degrees of 'randomness'. When plotted, some display recognisable diurnal or seasonal patterns that can be correlated to building operational regimes or the weather (e.g. heating energy consumption with distinct signatures at hourly and seasonal intervals). However, some appear to be completely random (lift data that contain a lot of random noise).
Does anyone know if an established method exists that can be deployed on these data sets and provide some 'quantification' or 'ranking' of how much stochasticity exists in each data set?
Relevant answer
Answer
No, there is nothing precisely like that.
"Random" is what we can not explain or predict (for whatever reason; it does not matter if there is no such possible explanation or if we are just not aware of one).
The model uses some predictors (known to us, like the time of day, the weather conditions including the day of the year, etc.) and makes a prediction of the response (the energy consumption) - the response value we should expect, given the corresponding values of the predictors. You can see the model as a mathematical formula of the predictor values. The formula contains parameters that make the model flexible and adjustable to observed data (think of the intercept and slope of a regression line, or the frequency and amplitude of a sinusoidal wave).
The deviations of observations from these expected values are called residuals. They are not explained by the model and are thus considered "random". This randomness is mathematically handled by a probability distribution: we don't say that a particular residual will be this or that large; instead we give a probability distribution (more correctly, we give the probability distribution of the response, conditional on the predictors). Using this probability model allows us to find the probability of the observed data (what is called the likelihood) given any combination of chosen values of the model parameters. Usually, we "fit" these parameters to maximize this likelihood (-> maximum likelihood estimates).
Thus, given a fitted model (on a given set of observations), we have a (maximized) likelihood (which depends on the data and on the functional model and on the probability model).
This can be used to compare different models. One might just see which of the models has the largest (maximized) likelihood. There are a few practical problems, because models with more parameters can get higher likelihoods just because they are more flexible - not more "correct". This is accounted for by giving penalties for model flexibility, which leads to the formulation of different information criteria (AIC, BIC, DIC and the like, which all differ in the way the penalties are counted).
So, after that long post, you may look for such ICs to compare different models. The limitation remains that the models are all compared only on the data that was used to fit them, without any guarantee that they will behave similarly for new data. So if you have enough data it might be wise to fit the models using only a subset of the available data and then check how well these models predict the rest of the data. It does not really matter how you quantify this; I would plot the differences of the models side-by-side in a boxplot or a scatterplot.
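Following that logic, one pragmatic ranking score, offered only as a sketch with a placeholder model choice, is the fraction of variance a simple harmonic (diurnal) model leaves unexplained on held-out data: data sets where the model explains little are "more random" in the sense described above.
```python
import numpy as np

def stochasticity_score(y, period=24):
    """Out-of-sample fraction of variance NOT explained by a harmonic model."""
    t = np.arange(len(y))
    # Design matrix: intercept + one sine/cosine pair at the chosen period.
    X = np.column_stack([np.ones_like(t, dtype=float),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    split = int(0.7 * len(y))
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ beta
    return resid.var() / y[split:].var()      # ~1.0 for pure noise, ~0.0 if fully explained

rng = np.random.default_rng(10)
t = np.arange(24 * 60)                                                      # 60 days of hourly data
heating = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)    # clear diurnal cycle
lift = rng.normal(0, 3, t.size)                                             # mostly noise

print("heating score:", round(stochasticity_score(heating), 2))
print("lift score:   ", round(stochasticity_score(lift), 2))
```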
  • asked a question related to Probabilistic Models
Question
3 answers
Hi, in my city I have seen a discussion among volcanic hazard researchers (the Colombian Geological Service and the local university); the central subject is the accuracy of volcanic hazard methodologies (i.e. the deterministic method vs the probabilistic method). I'd like to learn more about works that compare these two methods against observed events. Please, could anybody share papers or books about the subject? Thanks a lot.
Relevant answer
Hello !!
Source Rock of the Volcanic Fragments in Wadi Al-batin, Iraq: Geomorphological, Petrographical and Geochemical Evidences
  • asked a question related to Probabilistic Models
Question
8 answers
In earth slope stability analysis using SLOPE/W.
Relevant answer
Answer
Dear Dr. Maysam,
I think that the probabilistic and sensitivity analyses may be enough. But for more extensive results and comparisons, you should also consider statistical analysis. The choice may depend on the nature of your application and research work.
Good Luck!
  • asked a question related to Probabilistic Models
Question
3 answers
Nowadays the use of AI/ML in disaster modeling is getting quite popular. I would like to know if there exists any such earthquake model which is better than conventional earthquake models for probabilistic seismic hazard analysis (PSHA) and/or for deterministic scenario seismic hazard analysis (DSHA).
Relevant answer
Answer
Machine Learning Approach
  • asked a question related to Probabilistic Models
Question
6 answers
When running a scheduling scheme, what probabilistic approach can be used to estimate the energy consumption of that scheduling algorithm?
Relevant answer
Answer
Among novel techniques, many approaches are used to estimate energy consumption. I am working on machine learning techniques such as artificial neural networks. With such a scheme or algorithm you can easily estimate the energy consumption or demand, and it can also be used for optimization and load forecasting in a smart grid or any electrical system connected with renewable energy resources.
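A purely probabilistic alternative (independent of the neural-network route, and sketched here with placeholder distributions and numbers) is a Monte Carlo estimate: treat each scheduled task's duration and power draw as random variables, sample them many times, and summarise the resulting distribution of total energy.
```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical schedule: per-task mean duration (h) and mean power draw (kW).
tasks = [
    {"dur_mean": 2.0, "dur_sd": 0.3, "power_mean": 1.5, "power_sd": 0.2},
    {"dur_mean": 1.0, "dur_sd": 0.2, "power_mean": 3.0, "power_sd": 0.5},
    {"dur_mean": 4.0, "dur_sd": 0.5, "power_mean": 0.8, "power_sd": 0.1},
]

n_sims = 20_000
energy = np.zeros(n_sims)
for task in tasks:
    dur = rng.normal(task["dur_mean"], task["dur_sd"], n_sims).clip(min=0)
    power = rng.normal(task["power_mean"], task["power_sd"], n_sims).clip(min=0)
    energy += dur * power                     # kWh contributed by this task

print("expected energy (kWh):", round(energy.mean(), 2))
print("90% interval:", np.round(np.percentile(energy, [5, 95]), 2))
```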
  • asked a question related to Probabilistic Models
Question
3 answers
Let's center the discussion mostly on geotechnically characterizing those solid wastes.
Are probabilistic analyses the best way for these cases of highly heterogeneous materials?
#Slopes #Geotechnics #GeotechnicalEngineering
Relevant answer
Answer
Depending upon the site and orientation of the rubbish dump, you may assess its physical morphology and, based on the physical/site assessment and the heterogeneous material, choose preferred locations/section lines along which to carry out a slope stability assessment, with a view to arriving at the status of the slope in terms of FoS (Factor of Safety). When FoS < 2, the slope is normally considered unstable or critically stable. You need lab-estimated values of the select geotechnical parameters used in the particular analysis method. Empirically chosen values from available datasets/tables etc. may also be used as a cross-check, but ideally lab-estimated values would be of more help.
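To connect the probabilistic and FoS views, here is a hedged toy example, not specific to waste dumps and with purely illustrative soil parameters: propagate uncertainty in cohesion and friction angle through the classical dry infinite-slope factor-of-safety formula by Monte Carlo and report the probability that FoS falls below a chosen threshold.
```python
import numpy as np

rng = np.random.default_rng(12)
n = 100_000

# Illustrative (placeholder) soil and geometry parameters for a dry infinite slope.
beta = np.radians(30.0)                              # slope angle
gamma, z = 18.0, 5.0                                 # unit weight (kN/m3), failure-plane depth (m)
c = rng.normal(10.0, 3.0, n).clip(min=0)             # cohesion (kPa), uncertain
phi = np.radians(rng.normal(28.0, 3.0, n))           # friction angle, uncertain

# Infinite-slope factor of safety (dry, no seepage):
# FS = [c' + gamma*z*cos^2(beta)*tan(phi')] / [gamma*z*sin(beta)*cos(beta)]
fs = (c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

print("mean FoS:", round(fs.mean(), 2))
print("P(FoS < 1):  ", round(np.mean(fs < 1.0), 4))
print("P(FoS < 1.5):", round(np.mean(fs < 1.5), 4))
```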
  • asked a question related to Probabilistic Models
Question
7 answers
Many scholars talk about the importance of the role of demons in scientific thought experiments. The best known are Laplace's Demon and Maxwell's Demon. Laplace claimed that the world is completely deterministic; contrary to this, Maxwell claimed that the world is probabilistic and indeterministic. What can these and other demons tell us about our world? Can demons in science contribute to a better understanding of the Universe and to scientific discoveries?
Relevant answer
Answer
Hello Dragoljub,
I do not know why technical people are so fond of demons, but let me add a few other demon examples. Schrödinger's cat, which is used to express the superposition of states, could be considered a demon, though it is not formally given that name. In economics, there is statistician Maurice Kendall's "Demon of Chance" (circa 1953), which he used to explain the absence of serial correlation as measured by the sliding autocorrelation function of commodity prices such as the time series of weekly wheat or cotton prices. For a discussion of Kendall's Demon, see the following book.
Donald MacKenzie; An Engine, Not a Camera, How Financial Models Shape Markets; MIT Press; 2006; pp. 61-63.
There is also the example of an enormous number of monkeys in a room, each sitting in front of a typewriter and typing furiously. The idea here is that this cohort of simian writers could eventually reproduce the play Hamlet. I believe this demon example, though, again, it is not formally designated as a demon, was supposed to represent Boltzmann's idea, from statistical mechanics, that a possible state of gas molecules in a box would be for them all to coalesce into one corner of the box. According to Boltzmann such a trajectory, while unlikely, still has a small but non-zero probability.
My last demon example is Murphy's law, which, and I am paraphrasing here, says that "Anything that can go wrong will go wrong, and usually at the most inopportune time." As a child, I remember seeing the old Warner Brothers cartoon titled "Falling Hare" (circa 1943) with Bugs Bunny and the Gremlin (a.k.a., demon). This particular demon was the root cause of mechanical and electrical problems at a US Army Air Corps (forerunner of the US Air Force) base during WWII, see https://www.youtube.com/watch?v=ZElJxTCIsJI .
I am sure there are more demon examples out there, somewhere.
Regards,
Tom Cuff
  • asked a question related to Probabilistic Models
Question
5 answers
I need to model a wireless communication system where the absence or presence of the intended receiver should be a completely random process. For instance, when a transmitter sends something, there should be a random probability of whether the receiver receives the transmitted signals or not. I think I should model the presence or absence of the receiver with "on/off" modeling. However, I think there might be other useful models unknown to me till now. An answer will be highly appreciated.
Relevant answer
Dear Abdullah,
I do not know exactly what your receiver application is, but you can use statistics other than those of a completely random variable.
Let me explain the physical meaning of my assumed distribution: the t in the equation is the on (presence) time of the receiver, and tave is the average period of appearance.
This can be implemented by using a switch which is turned on for a time drawn from a random number generator.
For random variables please visit the book by Glover and Grant: Digital communications
Best wishes
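Building on that, a simulation sketch with placeholder parameters: model the receiver's presence as alternating on/off periods with exponentially distributed durations, then check which transmission attempts fall inside an "on" period to estimate the delivery probability.
```python
import numpy as np

rng = np.random.default_rng(13)

t_on_avg, t_off_avg = 5.0, 10.0        # average on/off durations (s), placeholders
horizon = 10_000.0                     # total simulated time (s)

# Build the alternating on/off timeline of the receiver.
switch_times, states = [0.0], []
t, on = 0.0, True
while t < horizon:
    t += rng.exponential(t_on_avg if on else t_off_avg)
    switch_times.append(t)
    states.append(on)
    on = not on

# The transmitter sends a packet every second; a packet is received only if
# the receiver is "on" at that instant.
tx_times = np.arange(0.0, horizon, 1.0)
idx = np.searchsorted(switch_times, tx_times, side="right") - 1
received = np.array([states[i] for i in idx])

print("estimated delivery probability:", round(received.mean(), 3))
print("theoretical on-fraction:", round(t_on_avg / (t_on_avg + t_off_avg), 3))
```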