Probabilistic Models - Science topic
Questions related to Probabilistic Models
Quantum mechanics is not just a new theory (of physics, of the microworld, etc.) but a new epistemological shade of the paradigm: it takes causality a step further from the standard Newtonian tradition to a one-to-one correspondence where, in probabilistic terms, the state is related not to a standard abstract state conception (i.e. state of rest, state of liquid) but to the time-projected outcome, to the point where the two might be indistinguishable or construct each other's identity.
The success of this epistemological facelift has not been replicated in other domains of physics or of science, although it probably could be, in the same way that some positivist approaches have been employed in psychology and elsewhere.
The reason for this is that this framework has been falsely attached to the QM domain alone, and also because it has not been discerned as such or articulated clearly and understandably enough.
I'm dealing with some clustered data (clinical data on patients undergoing a specific procedure at several medical sites), and I need to account for a random effect through the site of intervention.
Given that I'm analysing both continuous and binary categorical outcomes, I selected linear and logistic mixed-effects models as my models of choice, entering my covariates as fixed effects and including a random-effect term for my clustering label.
Here comes the problem: I run this analysis in conda with Python 3.8 and, as far as I can see, statsmodels does support LMMs (so I'm fine with my continuous outcomes) but not binomial mixed models. The only option available would be "BinomialBayesMixedGLM", but I'd rather stay within a frequentist (non-Bayesian) framework if possible.
I tried using rpy2 package to access R packages within my python environment, but due to some incompatibilities I cannot solve with my current machine (I need to stick to MacOS11.7, which is dragging some more constraints in package updates), it doesn't work properly.
Any other approach for working with binary outcomes with a probabilistic mixed-effect model in python?
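In case it helps, one frequentist workaround that stays inside statsmodels for clustered binary outcomes is a marginal logistic model fitted by GEE with an exchangeable working correlation: it accounts for within-site correlation, although it estimates population-averaged rather than site-specific effects, so it is not a true random-intercept GLMM. A minimal, self-contained sketch with synthetic data (all column names and values are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic clustered binary data, only to make the sketch self-contained.
rng = np.random.default_rng(0)
n, n_sites = 300, 6
site_effect = rng.normal(scale=0.5, size=n_sites)
df = pd.DataFrame({
    "site": rng.integers(0, n_sites, n),      # clustering label
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),
})
lin = -3 + 0.04 * df["age"] + 0.5 * df["sex"] + site_effect[df["site"].to_numpy()]
df["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# Marginal (population-averaged) logistic model via GEE with an exchangeable
# working correlation: handles the site clustering without a Bayesian fit.
model = sm.GEE.from_formula(
    "outcome ~ age + sex",
    groups="site",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```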
We assume that vacuum cleaners can explode spontaneously.
Very few catastrophic accidents due to vacuum cleaner explosions have occurred in the world recently.
We also assume that the mechanism or physics inherent in the explosion is similar to that of the Big Bang creating the universe.
The cornerstone is the transformation of vacuum potential energy into quantum matter and vice versa:
ρ(x, y, z, t) = Const · V(x, y, z, t)     (1)
where V is the potential energy or electrical potential of the battery (nearly 120 volts).
However, the process in Eq. (1) is a probabilistic event that can take from days to millions of years and leads to a concentration of energy (heat) at a small point.
This extremely high concentration (temperature) is the cause of the explosion.
Hello everyone, I am conducting research on Probabilistic Seismic Hazard Assessment (PSHA) and I am looking for software recommendations that can handle PSHA with mainshock and aftershock analysis. Could you please suggest any software tools capable of performing this analysis? I would greatly appreciate your insights and recommendations. Thank you!
Hello everyone. I have a DEM model of a mountain slope area and I am planning to do a probabilistic failure analysis for it using PLAXIS 3D. Are there any manuals or discussions about how to do this? Or could someone share their experience? Thank you very much.
House-selling is one of the typical Optimal Stopping problems. Offers come in daily for an asset, such as a house, that you wish to sell. Let Xi denote the amount of the offer received on day i. X1, X2, ... are independent random variables, uniformly distributed on the interval (0, 1). Each offer costs an amount C > 0 to observe. When you receive an offer Xi, you must decide whether to accept it or to wait for a better offer. The reward sequence depends on whether or not recall of past observations is allowed. If you may not recall past offers, then Di(X1,...,Xi) = Xi − i·C. If you are allowed to recall past offers, then Di(X1,...,Xi) = max(X1,...,Xi) − i·C. These tasks may be extended to an infinite horizon (i is unlimited). So there are four different problem statements:
- without recall, infinite horizon
- without recall, finite horizon
- with recall, infinite horizon
- with recall, finite horizon
The first three tasks are quite simple, but I was unable to prove the solution of the last one in strict form, although I did find a solution. If anyone knows its solution, please write it or send an article (or a link to one) where it is given. Thank you in advance.
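For what it is worth, here is a minimal Monte Carlo sketch of the with-recall, finite-horizon variant under the assumptions stated above (uniform offers on (0, 1), cost C per observation), using a simple fixed-threshold stopping rule; the horizon, cost and thresholds are arbitrary illustration values, and this only estimates expected rewards, it is not the strict proof asked for:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_reward_with_recall(threshold, C=0.01, horizon=50, n_runs=20_000):
    """Estimate the expected reward of a fixed-threshold policy: stop as soon
    as the best offer so far exceeds `threshold` (or at the horizon), paying
    cost C for every offer observed. Recall means we keep the best offer."""
    rewards = np.empty(n_runs)
    for r in range(n_runs):
        best = 0.0
        for i in range(1, horizon + 1):
            best = max(best, rng.uniform())
            if best >= threshold or i == horizon:
                rewards[r] = best - i * C
                break
    return rewards.mean()

# Crude search over thresholds to see where the expected reward peaks.
for thr in np.linspace(0.5, 0.95, 10):
    print(f"threshold = {thr:.2f}   E[reward] ~ {expected_reward_with_recall(thr):.3f}")
```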
It would be helpful if anyone could give me some ideas about a probabilistic model for ship capsizing risk analysis, so that I can move forward with my research.
Which subject studies the possibility of an afterlife? The answer may be theology and/or philosophy. I wish we had a subject that studied the possibility of an afterlife more probabilistically and scientifically.
How can probability theory and statistical modeling contribute to our understanding of phonological variation and probabilistic phonological processes?
ESSENTIAL REASON IN PHYSICISTS’ USE OF LOGIC:
IN OTHER SCIENCES TOO!
Raphael Neelamkavil, Ph.D., Dr. phil.
1. The Logic of Physics
Physics students begin with meso-world experiments and theories. Naturally, at a young age, they become convinced that the logic they follow at that level is identical with the ideal of scientific method. Convictions about scientific temper may further confirm this. This has far-reaching consequences for the concept of science and of the logic of science.
But, unquestionably, the logic behind such an application of the scientific method is only one manner of realizing (1) the ideal of scientific method, namely, observe, hypothesize, verify, theorize, attempt to falsify for experimental and theoretical advancements, etc., and (2) the more general ideal of reason.
But does any teacher or professor of physics (or of other sciences) instruct their students on the advantages of thinking and experimenting with the above-mentioned fundamental fact of all scientific practice in mind, or make them capable of realizing its significance in the course of time? I think not.
This is why physicists (and for that matter all scientists) fail at empowering their students and themselves in favour of the growth of science, thought, and life. The logic being followed in the above-said mode of practice of scientific method, naturally, becomes for the students the genuine form of logic, instead of being an instantiation of the ideal of logic as reason. This seems to be the case in most of the practices and instruction of all sciences till today. A change of the origin, justification, and significance of the use of logic in physics from the very start of instruction in the sciences is the solution for this problem. The change must be in the foundations.
All humans equate (1) this sort of logic of each science, and even logic as such, with (2) reason as such. Reason as such, in fact, is more generic than any particular logic. Practically none of the professors (of physics as well as of other sciences) terms the version of logic of their science an instantiation of reason, which may be accessed ever better as the science eventually grows into something more elaborate and complex. A physicist gets more and more skilled at reasoning only as and when she/he wants to grow continuously into a genuine physicist.
As the same students enter the study of recent developments in physics like quantum physics, relativity, nano-physics (Greek nanos, “dwarf”; but in physics, at the scale of 10^-9), atto-physics (at 10^-18), etc., they forget to make room for the strong mathematical effects that arise from the conceptual and processual paradoxes due to the epistemological and physical-ontological difference between the object sizes and the sizes of ourselves / our instruments. The best examples are the Uncertainty Principle, the Statistical Interpretation of QM, Quantum Cosmology, etc.
They tend to believe that some of these and similar branches of physics may defy our (meso-physical) logic - mistakenly taking this to mean that all forms of reasoning would have to fail if such instances of advanced physics are accepted in all of physics. As a result, again, their logic tends to remain at the same level as the one they acquired while doing elementary physics.
Does this not mean that the ad hoc, make-believe interpretations of the logic of the foundations of QM, Quantum Cosmology, etc. are the culprits that naturally make the logic of traditional physics inadequate as the best representative of the logic of nature? In short, in order to find a common platform, the logic of both traditional and recent branches of physics must improve so as to adequate itself to nature's logic.
Why do I not suggest that the hitherto logic of physics be substituted by quantum logic, relativity logic, thermodynamic logic, nano-logic, atto-logic, or whatever other logic of any recent branch of physics that may be imagined? One would substitute logic in this manner only if one were overwhelmed by what purportedly is the logic of the new branches of physics. But, in the first place, I wonder why logic should be equated directly with reason. The attempt should always be to bring the logic of physics into as much correspondence with the logic of nature as possible, so that reason in general can get closer to the latter. This must be the case not merely for physicists, but also for scientists from other disciplines and even from philosophy, mathematics, and logic itself.
Therefore, my questions are: What is the foundational reason that physicists should follow and should not lose at any occasion? Does this, how does this, and should this get transformed into forms of logic founded on a more general sort of physical reason? Wherein does such reason consist and where does it exist? Can there be a form of logic in which the logical laws depend not merely on the size of objects or the epistemological level available at the given object sizes, but instead, on the universal characteristics of all that exist? Or, should various logics be used at various occasions, like in the case of the suggested quantum logic, counterfactual logic, etc.?
Just like logic is not to be taken as a bad guide by citing the examples of the many logicians, scientists, and “logical” human beings doing logic non-ideally, I believe that there is a kernel of reason behind physics, justified solely on the most basic and universal characteristics of physical existents. These universals cannot belong solely to physics, but instead, to all the sciences, because they belong to all existents.
This kernel of reason in physics is to be insisted upon at every act of physics, even if many physicists (and other scientists and philosophers) may not ensure that kernel in their work. I shall discuss these possibly highest universals and connect them to logic meant as reason, when I elaborate on: 3. The Ontology of Physics (in a forthcoming discussion in RG)
The matter on which physicists do logical work is existent matter-energy in its fundamental implications and the implications derived from the fundamental ones. This is to be kept in mind while doing any logically acceptable work in physics, because existent matter-energy corpora in their processuality delineate all possible forms of use of logic in physics, which logic is properly to be termed nature's reason.
Moreover, conclusions are not drawn up by one subject (person) in physics for use by the same subject alone. Hence, we have the following two points to note in the use of logic in physics and the sciences: (1) the intersubjectively awaited necessity of human reason in its delineation in logical methods should be upheld at least by a well-informed community, and (2) the need for such reason behind approved physics should then be spread universally with an open mind that permits and requires further scientific advancements.
These will make future generations further question the genuineness of such logic / specific realization of reason, and constantly encourage attempts to falsify theories or their parts so that physics can bring up more genuine instantiations of human reason. But is such human reason based on the reason active in nature?
Although the above arguments and the following definition of logic in physics might look queer or at least new and unclear for many physicists, for many other scientists, for many mathematicians, and even for many logicians, I define here logic for use in physics as the fundamental aspect of reason that physics should uphold constantly in every argument and conclusion due from it:
Logic in physics is (1) the methodological science (2) of approaching the best intersubjectively rational and structural consequences (3) in what may be termed thought (not in emotions) (4) in clear terms of ever higher truth-probability achievable in statements and conclusions (5) in languages of all kinds (ordinary language, mathematics, computer algorithms, etc.) (6) based on the probabilistically methodological use, (7) namely, of the rules of all sensible logics that exemplify the Laws of Identity, Non-contradiction, and Excluded Middle, (8) which in turn must pertain to the direct and exhaustive physical implications of “to exist”.
Here I have not defined logic in physics very simply as “the discipline of the rules of thought”, “the discipline of the methodological approach to truths”, etc., for obvious reasons clarified by the history of the various definitions of logic.
But here comes up another question: Is the reason pertaining to physical nature the same as the most ideal form of human reason? From within the business of physics, how to connect the reason of physical nature with that of humans? I may suggest some answers from the epistemological and ontological aspects. But I would appreciate your responses in this regard too.
2. The Epistemology of Physics (in a forthcoming discussion in RG)
3. The Ontology of Physics (in a forthcoming discussion in RG)
I am interested in doing probabilistic analysis using geotechnical software. From my own review, I am looking to do probabilistic analysis in the PLAXIS software, but I could not find how to do this analysis in the software. Please guide me.
Challenge faced worldwide: a new spread of Covid
We have all lived through Covid-19, across the world. Prior to the availability of vaccines, non-pharmaceutical interventions were implemented by health-aware governments, with significant success, up to the stage of lockdown, where residents of a country were asked and then required to stay at home, with stringent conditions for getting out of their homes. The logistics of food supply was usually well managed, even if there were cases of people remaining isolated from food supply at times.
Anticipating the risk through propagation models
The key to not letting Covid-19 take its toll, and it actually did take its toll, especially among elderly residents of care homes, in Italy, France, the UK, etc, was to model in an anticipatory manner the spread of the disease and assess its risk realistically.
Macro-models available (statistics), but what about micro-level (few humans)?
Modelling was mostly at macro-level: cities, regions, countries. However, the different contexts of human interaction in daily life received much less attention, although large data sets and use cases are built from a number of elementary interactions, each involving a small number of humans.
Elementary interactions of few humans
Our endeavour, which could not afford the ambition of health statisticians working in larger teams to model the spread of the disease at country level, focused instead, in the years 2019-2020, on elementary use cases of interaction involving few humans (few starting from 2). Such use cases covered elderly residents of care homes and their interactions during joint meals in the care home meal area, with tables shared; they also covered households in close (and closed) interaction during lockdown. We also tried to make sense of large events where many humans interact during a limited time (a football game, a women's day celebration, etc.).
The typology of likely propagation in such use cases was modelled, parameters of a simple but robust model were tuned to known data, and in turn simulations could be run; such simulations could then be assessed against other known outcomes (such as the observed virus propagation among the citizen team running a polling centre during elections in France).
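For readers who want to see the flavour of such micro-level modelling, here is a deliberately tiny toy sketch (my own illustration, not the model of reference [1] below): a handful of agents sharing a meal table, with an assumed per-meal, per-contact transmission probability.

```python
import random

def simulate_meal_contacts(n_agents=8, p_transmit=0.08, n_meals=60, seed=1):
    """Toy micro-level spread: n_agents share a meal table; at each meal every
    infectious agent independently infects each susceptible table neighbour
    with probability p_transmit. Returns the cumulative number infected."""
    random.seed(seed)
    infected = [False] * n_agents
    infected[0] = True                       # one index case
    history = []
    for _ in range(n_meals):
        newly = [i for i in range(n_agents)
                 if not infected[i]
                 and any(infected[j] and random.random() < p_transmit
                         for j in range(n_agents) if j != i)]
        for i in newly:
            infected[i] = True
        history.append(sum(infected))
    return history

print(simulate_meal_contacts())   # cumulative infections meal by meal
```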
Next steps: anticipating the wave coming, with micro-models?
Can we ask the ResearchGate community whether anyone is interested in undertaking similar micro-level models of elementary human interaction leading to a likely spread of the virus?
Could we consider building a federated collaborative project, with data fed by anyone having access to these (literature, publications, etc)?
What approach do you recommend? Have you published on the topic?
REF
Here is a reference to the model mentioned above, with associated training/verification data:
[1] Agent Based Model for Covid 19 Transmission: -field approach based on context of interaction, July 2020, R. Di Francesco, DOI: 10.13140/RG.2.2.24583.83364
p-hacking and falsification of statistical results
Improvement processes regarding process evaluations
Foreword
The significance of a statistical statement is denoted by p, a probabilistic variable. Translated into English, the term "significance" means "clarity, the essential". There is no question that significance as a measurement variable in probabilistic statistics plays an extraordinary role. Nevertheless, it is often subjected to manipulation by keeping the number of random variables - i.e. measured values - small, or even by filtering them. In addition, the inadequate integration of all process parameters and the inadequate use of probability densities mean that processes are inadequately evaluated, both now and for the future.
So what to do? Suppress data - those that are disliked? See picture p-hacking3.
A plausible example of this, Fig. 1.1, was given in Spectrum of Science SPECIAL 3.7, chapter "Estimating Error, the Curse of the P-Value", Regina Nuzzo, Gallaudet University, Washington.
A better way: include all data, make a frequency scale, and obtain parameter values for a probability density; see picture p-hacking2.
Is that OK?
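To make the manipulation concrete, here is a small simulation (my own illustration, not from the cited article): repeatedly peeking at accumulating data and stopping at the first p < 0.05 inflates the false-positive rate well above the nominal 5%, even though every individual test is "valid". The sample sizes and checkpoints are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(optional_stopping, n_sims=2000, n_max=100):
    """Fraction of null simulations (true effect = 0) declared 'significant'
    at p < 0.05. With optional stopping we test after every 10 new
    observations and stop at the first significant result."""
    hits = 0
    for _ in range(n_sims):
        data = rng.normal(0.0, 1.0, n_max)
        checkpoints = range(10, n_max + 1, 10) if optional_stopping else [n_max]
        if any(stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05 for n in checkpoints):
            hits += 1
    return hits / n_sims

print("single test at n = 100 :", false_positive_rate(False))
print("peeking every 10 obs   :", false_positive_rate(True))
```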
Will academics EVER stop anthropomorphizing "probabilistic uncertainty"? It is something "seen" in "findings" AND (to say the least) not SOME THING. It may well be mainly connected to poor observations OR very preliminary "discoveries". Do people really believe that probabilistic uncertainty can be hard-wired? Unless you have evidence in real and appropriate actual contexts, such as a naturalist could SEE, OR at least AS seen sometime(s) in ontogeny with DIRECT OVERT EVIDENCE, then [otherwise]: STOP IT. Understand?
I want to use this attenuation relationship (suitable for central Italy) in probabilistic analysis; therefore it must treat the soil type as a random variable with a certain probability distribution.
Moreover, if there is any formula or methodology in which the soil type from the epicentre to the target point is taken into account when deriving an attenuation relationship, I would appreciate it if you could introduce it.
Many thanks in advance.
Please give some references for your answer. Thank you so much
Probabilistic modelling, Forecasting, Renewable energy uncertainty prediction, ARIMA
Could any expert try to examine our novel approach for multi-objective optimization?
The brand new approach is entitled "Probability-based multi-objective optimization for material selection", published by Springer and available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
Your guidance and support will be highly appreciated.
If someone could please share any report/paper/thesis, it would be highly appreciated.
I am working on a dataset that contains some censored data. Probabilistic approaches such as Bayesian estimation can be used to handle censored data; however, I am interested in deploying machine learning in Python. I would appreciate it if any literature were shared, or any suggestions/guidance provided. The problem is a classification one.
Thanks
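One possible starting point in Python, if the censoring concerns follow-up time before the class label (event) is observed, is a survival model that uses censored observations directly; lifelines (below) and scikit-survival (e.g. random survival forests) are common choices. A minimal sketch with synthetic right-censored data (all variable names and parameters are invented):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic right-censored data: 'duration' is follow-up time, 'event' is 1 if
# the event was observed and 0 if censored. The Cox model is just one example
# of a method that handles censoring, not a claim about the best choice.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
true_time = rng.exponential(scale=np.exp(-0.5 * x))     # risk increases with x
censor_time = rng.exponential(scale=1.0, size=n)
df = pd.DataFrame({
    "x": x,
    "duration": np.minimum(true_time, censor_time),
    "event": (true_time <= censor_time).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```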
If we have multiple experts providing the prior probabilities for the parent nodes, how should the experts fill in the node probabilities (such as low, medium and high), and how do we obtain the consensus of all the experts about the probability distribution of a parent node?
If someone could please share any paper/questionnaire/expert-based Bayesian network where all these queries are explained, it would be highly appreciated.
Is there any technique/method to convert deterministic values into probabilistic values in a Bayesian network in order to improve the results?
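One common aggregation rule, offered only as an illustration, is a linear opinion pool: each expert supplies a low/medium/high distribution for the same parent node and the consensus is a (possibly weighted) average. A minimal sketch with made-up elicited values:

```python
import numpy as np

def linear_opinion_pool(expert_probs, weights=None):
    """Combine expert-elicited distributions over the same discrete states
    (e.g. low/medium/high) by a weighted average; weights default to equal."""
    P = np.asarray(expert_probs, dtype=float)          # (n_experts, n_states)
    w = np.full(P.shape[0], 1.0 / P.shape[0]) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    pooled = w @ P
    return pooled / pooled.sum()                       # guard against rounding error

experts = [[0.6, 0.3, 0.1],    # expert 1: P(low), P(medium), P(high)
           [0.5, 0.4, 0.1],    # expert 2
           [0.7, 0.2, 0.1]]    # expert 3
print(linear_opinion_pool(experts))                    # equal-weight consensus
```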
I'm studying the topic from Probabilistic Robotics by Thrun, Burgard and Fox.
In the Extended Kalman Filter algorithm, we linearized the action model in the following way.
g(u(t), x(t−1)) ≈ g(u(t), μ(t−1)) + G(t)·(x(t−1) − μ(t−1))
g(u(t), x(t−1)) is the action model and G(t) is its Jacobian matrix with respect to the state x(t−1).
I don't see how this guarantees linearity, because g could be nonlinear in u(t). The authors don't mention anything about why this is the case.
In other words, I expected a multivariate Taylor expansion that yields a function linear in both u(t) and x(t−1).
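In case a concrete check helps: in the EKF only linearity in the state deviation x(t−1) − μ(t−1) matters, because u(t) is a known input rather than a random variable, so g may stay nonlinear in u(t). A small numeric sketch with a hypothetical unicycle-style model (my own toy example, not taken from the book):

```python
import numpy as np

def g(u, x):
    """Toy nonlinear motion model. u = (v, w) is the control,
    x = (px, py, theta) is the state; g is nonlinear in both."""
    v, w = u
    px, py, th = x
    return np.array([px + v * np.cos(th),
                     py + v * np.sin(th),
                     th + w])

def jacobian_wrt_state(u, mu, eps=1e-6):
    """Numerical Jacobian G(t) = dg/dx evaluated at (u, mu)."""
    n = len(mu)
    G = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n); d[j] = eps
        G[:, j] = (g(u, mu + d) - g(u, mu - d)) / (2 * eps)
    return G

u  = np.array([1.0, 0.1])                 # known control input
mu = np.array([0.0, 0.0, 0.3])            # current state mean
G  = jacobian_wrt_state(u, mu)
x  = mu + np.array([0.05, -0.02, 0.01])   # a state near the mean
print("exact      :", g(u, x))
print("linearised :", g(u, mu) + G @ (x - mu))   # linear only in (x - mu); u held fixed
```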
Hi guys,
I am using the negative log-likelihood of a Dirichlet distribution as my loss function while implementing this paper:
I parameterise the distribution using my network output and compute the negative log-likelihood of the observed ground truth.
The issue is that I found the loss is sometimes negative, which means that the likelihood (density) at the observed point is greater than 1.
My understanding of this phenomenon comes in two parts:
- This is normal, as a likelihood density can be higher than 1.
- This indicates overfitting, meaning that the likelihood function probably peaks at the observed point so sharply that other areas of the support would get essentially zero density if a test observation were to fall there.
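A quick numeric sanity check with SciPy that a Dirichlet density can exceed 1, so a negative NLL is legitimate on its own (the concentration values below are arbitrary); whether it also signals overfitting depends on how the density behaves away from the training points:

```python
import numpy as np
from scipy.stats import dirichlet

alpha = np.array([50.0, 30.0, 20.0])        # a fairly concentrated Dirichlet
x = alpha / alpha.sum()                     # an observation near the mode

log_lik = dirichlet.logpdf(x, alpha)
print("log-likelihood         :", log_lik)      # positive, i.e. density > 1
print("negative log-likelihood:", -log_lik)     # negative loss value
```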
Hello, is there a way for someone to find the parameter that affects a function maximally?
At first, I think, one builds a model from the data at hand, then takes the partial derivative with respect to each parameter, and the parameter with the largest result is the most influential one for that model. Is there any other way, especially one where it is not necessary to build a model before taking a derivative? For example, in conditional probabilistic models such as Bayesian networks, there are only the data and the graph structure. Thanks.
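One model-free option, mentioned only as an illustration of the "no model first" route, is to estimate the mutual information between each candidate parameter and the output directly from data and rank parameters by it. A minimal sketch with synthetic data (scikit-learn's estimator; all names and values are placeholders):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Synthetic data: the output depends strongly on p1, weakly on p2, not on p3.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                                   # candidate parameters
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)             # model-free ranking
for name, score in zip(["p1", "p2", "p3"], mi):
    print(f"{name}: mutual information ~ {score:.3f}")
```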
Please look at the text of the section on random walk from page 9 to formula 4.7, where you will find mathematical calculations justifying the probabilistic interpretation of the Riemann zeta function.
Preprint Chaotic dynamics of an electron
I have seen network meta-analysis used to inform further cost-effectiveness analysis in two papers; fractional polynomial models were involved because of non-proportional hazards. I would like to know how to combine the evidence from a network meta-analysis with a partitioned survival model in a cost-effectiveness analysis for cancer, and which software is best suited for it. I wonder how the authors connected the two (a toy sketch of a partitioned survival model follows the abstracts below). The abstracts of the two papers are as follows.
1. Front Public Health. 2022 Apr 15;10:869960. doi: 10.3389/fpubh.2022.869960. eCollection 2022.
Cost-Effectiveness Analysis of Five Systemic Treatments for Unresectable Hepatocellular Carcinoma in China: An Economic Evaluation Based on Network Meta-Analysis.
Zhao M, Pan X, Yin Y, Hu H, Wei J, Bai Z, Tang W.
BACKGROUND AND OBJECTIVE: Unresectable hepatocellular carcinoma (uHCC) is the main histological subtype of liver cancer and causes a great disease burden in China. We aimed to evaluate the cost-effectiveness of five first-line systemic treatments newly approved in the Chinese market for the treatment of uHCC, namely, sorafenib, lenvatinib, donafenib, sintilimab plus bevacizumab (D + A), and atezolizumab plus bevacizumab (T + A) from the perspective of China's healthcare system, to provide a basis for decision-making.
METHODS: We constructed a network meta-analysis of 4 clinical trials and used fractional polynomial models to indirectly compare the effectiveness of treatments. The partitioned survival model was used for cost-effectiveness analysis. Primary model outcomes included the costs in US dollars and health outcomes in quality-adjusted life-years (QALYs) and the incremental cost-effectiveness ratio (ICER) under a willingness-to-pay threshold of $33,521 (3 times the per capita gross domestic product in China) per QALY. We performed deterministic and probabilistic sensitivity analyses to investigate the robustness. To test the effect of active treatment duration on the conclusions, we performed a scenario analysis.
RESULTS: Compared with sorafenib, lenvatinib, donafenib, D + A, and T + A regimens, it yielded an increase of 0.25, 0.30, 0.95, and 1.46 life-years, respectively. Correspondingly, these four therapies yielded an additional 0.16, 0.19, 0.51, and 0.86 QALYs and all four ICERs, $40,667.92/QALY gained, $27,630.63/QALY gained, $51,877.36/QALY gained, and $130,508.44/QALY gained, were higher than $33,521 except for donafenib. T + A was the most effective treatment and donafenib was the most economical option. Sensitivity and scenario analysis results showed that the base-case analysis was highly reliable.
CONCLUSION: Although combination therapy could greatly improve patients with uHCC survival benefits, under the current WTP, donafenib is still the most economical option.
2. Value Health. 2022 May;25(5):796-802. doi: 10.1016/j.jval.2021.10.016. Epub 2021 Dec 1.
Cost-Effectiveness of Systemic Treatments for Metastatic Castration-Sensitive Prostate Cancer: An Economic Evaluation Based on Network Meta-Analysis.
Wang L, Hong H, Alexander GC, Brawley OW, Paller CJ, Ballreich J.
OBJECTIVES: To assess the cost-effectiveness of systemic treatments for metastatic castration-sensitive prostate cancer from the US healthcare sector perspective with a lifetime horizon.
METHODS: We built a partitioned survival model based on a network meta-analysis of 7 clinical trials with 7287 patients aged 36 to 94 years between 2004 and 2018 to predict patient health trajectories by treatment. We tested parameter uncertainties with probabilistic sensitivity analyses. We estimated drug acquisition costs using the Federal Supply Schedule and adopted generic drug prices when available. We measured cost-effectiveness by an incremental cost-effectiveness ratio (ICER).
RESULTS: The mean costs were approximately $392 000 with androgen deprivation therapy (ADT) alone and approximately $415 000, $464 000, $597 000, and $959 000 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide, added to ADT, respectively. The mean quality-adjusted life-years (QALYs) were 3.38 with ADT alone and 3.92, 4.76, 3.92, and 5.01 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide, added to ADT, respectively. As add-on therapy to ADT, docetaxel had an ICER of $42 069 per QALY over ADT alone; abiraterone acetate had an ICER of $58 814 per QALY over docetaxel; apalutamide had an ICER of $1 979 676 per QALY over abiraterone acetate; enzalutamide was dominated. At a willingness to pay below $50 000 per QALY, docetaxel plus ADT is likely the most cost-effective treatment; at any willingness to pay between $50 000 and $200 000 per QALY, abiraterone acetate plus ADT is likely the most cost-effective treatment.
CONCLUSIONS: These findings underscore the value of abiraterone acetate plus ADT given its relative cost-effectiveness to other systemic treatments for metastatic castration-sensitive prostate cancer.
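As promised above, here is a deliberately tiny partitioned survival model sketch, only to show the mechanics of turning survival curves into QALYs and costs; every number (curves, utilities, costs, discount rate, horizon) is invented, and in a real analysis the OS/PFS curves would come from the network meta-analysis (e.g. the fractional-polynomial hazard estimates), with a probabilistic sensitivity analysis layered on top:

```python
import numpy as np

# Toy partitioned survival model with three states: progression-free,
# progressed (alive), dead. All inputs are placeholders for illustration.
months = np.arange(0, 120)                   # 10-year horizon, monthly cycles
t = months / 12.0                            # time in years
os_surv  = np.exp(-(t / 4.0) ** 1.2)         # hypothetical overall survival
pfs_surv = np.minimum(np.exp(-(t / 2.0) ** 1.3), os_surv)   # PFS can never exceed OS

pf   = pfs_surv                              # proportion progression-free
prog = os_surv - pfs_surv                    # proportion progressed but alive

u_pf, u_prog = 0.78, 0.60                    # assumed utilities (per year)
c_pf, c_prog = 4000.0, 2500.0                # assumed costs per monthly cycle
disc = (1.0 / 1.03) ** t                     # 3% annual discounting

qalys = np.sum((pf * u_pf + prog * u_prog) * disc) / 12.0
costs = np.sum((pf * c_pf + prog * c_prog) * disc)
print(f"QALYs ~ {qalys:.2f}, costs ~ ${costs:,.0f}")
```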
I’m trying to fit some kind of causal model to continuous value data by solving differential equations probabilistically (machine learning).
Currently I’m solving complex-valued vector quadratic differential equation so there are more cross correlations between variables.
dx(t)/dt = diag(A x(t) x(t)^H) + B x(t) + c + f(t)
or just
dx(t)/dt = diag(A x(t) x(t)^H) + B x(t) + c
diag() takes diagonal of the square matrix.
But my differential-equation math is rusty, because I studied differential equations 20 years ago. I solved the equation in the 1-dimensional case but would need help for the vector-valued x(t).
Would someone point me to appropriate material?
EDIT: I did edit the question to be a bit more clear to read.
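Not an analytical solution, but if a numerical one is acceptable, SciPy's solve_ivp integrates the vector-valued, complex case directly (it accepts a complex initial state). A minimal sketch with randomly chosen placeholder A, B, c and without the forcing term f(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numeric sketch for dx/dt = diag(A x x^H) + B x + c with complex x(t).
# A, B, c and x0 are arbitrary placeholders, scaled so the solution stays tame.
rng = np.random.default_rng(0)
n = 3
A = 0.05 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
B = -np.eye(n) + 0.1j * rng.normal(size=(n, n))     # mildly stable linear part
c = 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def rhs(t, x):
    # diag() of the square matrix A x x^H, plus the linear and constant terms
    return np.diag(A @ np.outer(x, np.conj(x))) + B @ x + c

x0 = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
sol = solve_ivp(rhs, (0.0, 5.0), x0.astype(complex), max_step=0.01)
print(sol.y[:, -1])                                  # state at t = 5
```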
Hi, for a network traffic analysis task I need a probabilistic model to analyse sequences of network data. Each observation here is an event consisting of structured information (e.g. IP addresses, ports, protocol type). I am interested in the dependencies between these observations, using a generative model. Any ideas?
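One very simple generative baseline (certainly not the only option; HMMs or neural sequence models are richer) is a first-order Markov chain over discretised event types, e.g. (protocol, port) symbols; the event stream below is a made-up toy:

```python
from collections import defaultdict, Counter
import random

# Toy event stream: each event reduced to a categorical symbol (protocol, port).
events = [("tcp", 443), ("tcp", 443), ("udp", 53), ("tcp", 80), ("tcp", 443),
          ("udp", 53), ("tcp", 80), ("tcp", 443)]

counts = defaultdict(Counter)
for prev, nxt in zip(events, events[1:]):
    counts[prev][nxt] += 1                      # transition counts

def transition_probs(state):
    total = sum(counts[state].values())
    return {nxt: k / total for nxt, k in counts[state].items()}

def sample_sequence(start, length=5, seed=0):
    """Generate a new event sequence from the learned transition model."""
    random.seed(seed)
    seq, state = [start], start
    for _ in range(length - 1):
        probs = transition_probs(state)
        state = random.choices(list(probs), weights=list(probs.values()))[0]
        seq.append(state)
    return seq

print(transition_probs(("tcp", 443)))
print(sample_sequence(("tcp", 443)))
```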
*What if we only have one discharge value (e.g. 1.8 m3/s) for a river section - is it possible to create a predictive hydrograph?
*What parameters are needed?
*Is there any article or journal to support the probabilistic analysis?
Thank you in advance!
Hello.
I'm a beginner of diffusion tensor imaging.
I want to know pros and cons & similarities and differences between probabilistic tractography of FSL and MRtrix.
Could you explain them?
I really appreciate it if someone could help!
I have two questions and hope for some expert advice please?
1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?
2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?
Thank you so much.
Hi all,
In an experimental investigation, there are two parameters to be measured, say X1 and X2. My goal is to see how X1 varies with X2. Specifically, I am interested in classifying the graph of X1 versus X2 according to a number of characteristic graphs. Each characteristic graph corresponds to a specific state of the system which I need to determine.
The problem is with the graph of X1 vs X2 undergoing significant changes when replicating the test, thus making the classification a perplexing task. A simple approach I could think of is taking the average of these graphs, but I am not sure if this is reasonable; I am looking for a more mathematical framework.
Any comments would be appreciated.
Regards,
Armin
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Probabilistic metric spaces which are not metric spaces have been widely developed in theory, but can someone give some examples of applications of such spaces?
I invite you to see the newly launched website about the Choice Wave and the Theory of Economic Parallel Rationality, tradition and innovation revolutionising economic thought.
A hypothetical example (hehe, hypothesis): assume we have enough observations and apply both a "frequentist" and a "Bayesian" model (e.g. a linear model with Gaussian error distribution and, for the Bayesian one, an uninformative prior to keep it rather vague). We look at the intervals, and both models result in the same intervals. Then, according to [1], it is similar* to suggest that the estimate of the population value fell between the interval bounds, if we know they are similar. Are both then equally "wrong"? And do they actually quantify uncertainty, as both "want" to make probabilistic statements about the population from the data (or am I wrong, since they really seem to want this, although indeed one gives P(data|estimate) and the other P(estimate|data))? Hence, the data are certain, the estimates are based on the data, so it seems certain that the estimate, which might approximate the population (assuming a perfectly sampled population, and that this description makes sense), might take on a specified value (note that the confidence and credibility intervals have converged). Again, the data are certain; what is uncertain is what is not in the data. I am just curious what more statistically educated people think of this and how they would communicate it, as this seems hardly ever discussed (or it is my ignorance).
Thank you in advance for your input.
*Not their words. I just remember a part from the text.
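To make the scenario above concrete, here is a small numeric illustration (my own toy, not taken from [1]): for the mean of a Gaussian sample, the classical 95% confidence interval and the 95% credible interval under a flat prior on the mean (with the usual noninformative treatment of the variance) coincide numerically, even though they formally answer different questions, P(interval covers the parameter) versus P(parameter | data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=2.0, scale=1.0, size=50)         # toy sample
n, xbar = len(x), x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))

# Frequentist 95% confidence interval for the mean (t-based).
ci = stats.t.interval(0.95, n - 1, loc=xbar, scale=se)

# Bayesian 95% credible interval: with a flat prior on the mean the marginal
# posterior of the mean is the same t distribution, so the bounds match.
cred = stats.t.ppf([0.025, 0.975], n - 1, loc=xbar, scale=se)

print("confidence interval:", ci)
print("credible interval  :", tuple(cred))
```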
I have 6 variables with Mean, Stdev, CoV, Min and Max. Find the attached excel file.
I'm working on a new probabilistic routing protocol based on a k-shortest-path selection one, and I wish to test it in the NS3 simulator, but there is not much literature on how to proceed. I'm looking for some tips that could help me do that; I'm open to suggestions. Thanks!
Hi there RG community, I'm back with a new couple of questions. I'm trying to implement a probabilistic routing protocol in NS2/NS3, but there is not much literature on it. Has anyone done that before? If so, how should I proceed? I'm open to exchanging on that. Thanks!
The publicly released THUMS model comes with a standard output of nodal coordinates of certain anatomical locations. Since there is an already established user group, I was wondering if there is a standard output template for THUMS to output cross-sectional forces and moments in different bones (especially the long bones) that could be used to predict the risk of injuries probabilistically. I could define my own outputs, but I was wondering if there is a standard template so the results could be compared across multiple groups.
Thanks!
I am doing a project on anomaly detection in videos using MATLAB. I have to perform data association with clusters using JPDA, but unfortunately it isn't working well. I have gone through several papers on JPDA, but they are all about the tracking of objects.
Kindly guide me on how to proceed, or point me to any research paper in which JPDA is used to perform data association for purposes other than tracking.
Regards
In RStudio, there are many commands in the gumbel package, and their arguments also differ.
I am asking about the alpha parameter of the copula, which must be greater than 1. If this is the one used to plot the probability paper, how can I choose the value of alpha?
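One common way to pick the parameter, shown here only as an illustration (in Python, but the same inversion is easy in R): estimate Kendall's tau from the data and use the Gumbel-copula relationship tau = 1 − 1/alpha, i.e. alpha = 1/(1 − tau); the data below are synthetic.

```python
import numpy as np
from scipy.stats import kendalltau

# Toy positively dependent data standing in for the two margins of interest.
rng = np.random.default_rng(3)
x = rng.gumbel(size=500)
y = 0.7 * x + 0.3 * rng.gumbel(size=500)

tau, _ = kendalltau(x, y)
alpha = 1.0 / (1.0 - tau) if tau < 1 else np.inf     # Gumbel copula: tau = 1 - 1/alpha
print(f"Kendall's tau = {tau:.3f}  ->  alpha = {alpha:.3f}")
```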
Data sets, when structured, can be put in vector form (v(1)...v(n)), adding time dependency it's v(i, t) for i=1...n and t=1...T.
Then we have a matrix of terms v(i, j)...
Matrices are important: they can represent linear operators in finite dimension. Composing such operators f, g as f∘g translates into the matrix product F×G, with obvious notations.
Now a classical matrix M is a table of lines and columns, containing numbers or variables. Precisely at line i and column j of such table, we store term m(i, j), usually belonging to real number set R, or complex number set C, or more generally to a group G.
What about generalising such a matrix of numbers into a matrix of sets? In any field of science, this could mean storing all the data collected for a particular parameter "m(i, j)" as a set M(i, j) of data.
What can we observe, say, define on such matrices of sets?
If you are as curious as me, in your own field of science or engineering, please follow the link below, and more importantly, feedback here with comments, thoughts, advice on how to take this further.
Ref:
I am stuck between quantum mechanics and general relativity. The mind-consuming scientific discourse, ranging from the continuous and deterministic to the probabilistic, seems to have no end. I would appreciate any words that can help me understand at least a bit, with relevance.
Thank you,
Regards,
Ayaz
I would like to know the details of how oi is used in probabilistic metric spaces. As we know, to generalize the triangle inequality we use a triangular norm, but how? An explanation is needed, and also of how and where it is used in PM spaces.
I now have a set of input and output data, and a low-order transfer function model which has several parameters to be identified.
If I use tfest in Matlab, I can identify one set of parameter results, but this is not what I expect. What I expect to get is an interval that can encompass all or most of the observations. Which probabilistic method can solve my problem? Preferably a probabilistic method or a prediction interval (PI) method. I would be very grateful if you could point me to a paper or website.
I'm developing a readiness assessment model regarding contractors' preparedness for a specific activity. To do so, a survey study was carried out and the data analysed with PLS-SEM to obtain the critical success factors contributing to that readiness. Nevertheless, because the subject is very specific, it was impossible to define or quantify a population for it and hence to draw a probabilistic sample, which can compromise the external validity (generalizability) of my readiness assessment model. Is it feasible to try to reduce that generalizability issue with the minimum sample size requirements (by means of power analyses) from Cohen (1992) and the use of PLSpredict to determine the predictive power of the model?
I'd be delighted if any colleague could reply to this need
The birth and death probabilities are p_i and q_i respectively, and 1 − (p_i + q_i) is the probability of no change in the process. Zero ({0}) is an absorbing state and the state space is {0, 1, 2, ...}. What are the conditions for {0} to be recurrent (positive or null)? Is the set {1, 2, 3, ...} transient? What can we say about the duration of the process until absorption, and about the stationary distribution if it exists, etc.?
Every comment is appreciated.
I am looking for an analytical, probabilistic, statistical or any other way to compare the results of a number of different approaches applied to the same test model. These approaches may be different optimization techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
Probabilistic sensitivity analysis is criticised for potentially introducing uncertainty itself because of the consideration of the distribution of the parameters. Are there ways of addressing this potential for additional uncertainty?
I have a small dataset and need to learn a mixed probabilistic model (discrete + continuous) and simulate new values taking into account the learned structure.
How do I calculate the sum and the difference of several random variables that follow exponential distributions with different parameters?
(The value of lambda is different for all or some of the variables.)
Example:
L(t) = f(t) + g(t) − h(t)
with
f(t) = a·exp(−a·t)
g(t) = b·exp(−b·t)
h(t) = c·exp(−c·t)
such that
a = Lambda_1
b = Lambda_2
c = Lambda_3.
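A closed form for the difference is awkward (sums of independent exponentials with distinct rates give a hypoexponential distribution, and the subtraction then needs a further convolution), so here is a Monte Carlo sketch of the distribution of L = X + Y − Z for the three rates above; the rate values are arbitrary examples:

```python
import numpy as np

# L = X + Y - Z with X ~ Exp(a), Y ~ Exp(b), Z ~ Exp(c); densities a*exp(-a*t) etc.
rng = np.random.default_rng(0)
a, b, c = 1.0, 2.0, 0.5                       # Lambda_1, Lambda_2, Lambda_3
n = 1_000_000

L = (rng.exponential(1 / a, n)                # numpy uses scale = 1/rate
     + rng.exponential(1 / b, n)
     - rng.exponential(1 / c, n))

print("mean:", L.mean(), " (theory:", 1/a + 1/b - 1/c, ")")
print("var :", L.var(),  " (theory:", 1/a**2 + 1/b**2 + 1/c**2, ")")
print("5% / 95% quantiles:", np.quantile(L, [0.05, 0.95]))
```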
Does anyone know of a geotechnical engineering software package that supports subset simulation? I need to do some probabilistic analysis of a geotechnical project. However, because of the small failure probability, I need to use subset simulation instead of crude Monte Carlo analysis.
Does quantum mechanics give probabilistic results or exact quantified values?
I am a beginner in probabilistic forecasting. From my research I have a vague idea that Monte Carlo simulation can be used to inject uncertainty into the process. Do I need to generate multiple point forecasts via Monte Carlo and then post-process them to obtain a probabilistic distribution? Can anyone help with the procedure and the steps I should follow to do probabilistic forecasting? It would be helpful if someone could share an example.
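One simple recipe (certainly not the only one) is exactly what you describe: simulate many future paths by adding random noise drawn from the fitted model, then summarise the ensemble with quantiles. A minimal sketch where the "point model" is a hypothetical fitted AR(1); every parameter value is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, last_obs = 0.8, 1.0, 10.0          # assumed fitted AR(1) parameters
horizon, n_paths = 24, 5000

paths = np.empty((n_paths, horizon))
for k in range(n_paths):
    y = last_obs
    for h in range(horizon):
        y = phi * y + rng.normal(0.0, sigma)   # propagate one step with noise
        paths[k, h] = y

q05, q50, q95 = np.quantile(paths, [0.05, 0.5, 0.95], axis=0)
print("median forecast, first 5 steps:", np.round(q50[:5], 2))
print("90% interval at step 1        :", (round(q05[0], 2), round(q95[0], 2)))
```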
- I wanted to implement a feature selection framework using criteria like probabilistic error or probabilistic distance. I also have a doubt: if I have a non-parametric distribution for my features, can I use kernel-based estimation techniques to find the class-conditional probabilities, instead of an analytical function, to evaluate criteria like the probabilistic error or Bayes error rate?
- I was thinking that even if we have a non-parametric distribution, we can use the probability values estimated by kernel density estimation, and the integration would ultimately reduce to a summation in the formula I have attached for the error rate.
- Is my approach fine? If anyone has tried this, please guide me.
Do you have any experience with probabilistic software for structural reliability assessment? Any links?
A question to all you stroke and tDCS / TMS researchers.
I want to visualize the lesion location of my participants in relationship to the stimulation site. In my case I have the lesions as ROIs normalised to the MNI standard space. Now I would like to create a 3 D image with the lesion mask as volume and mark position P4 on top. My objective is to see whether I actually tried to stimulate healthy or affected tissue with my tDCS protocol.
Alternatively marking P4 on the 2D slices would be fine as well. I just don't know how. I found the paper by Okamoto et al, 2004 ( ) which gives coordinates for the MNI templates.
Thanks everyone for any advice.
Suppose we have statistics N(m1, m2), where m1 is the value of the first factor, m2 is the value of the second factor, N(m1, m2) is the number of observations corresponding to the values of factors m1 and m2. In this case, the probability P(m1, m2) = N(m1, m2) /K, where K is the total number of observations. In real situations, detailed statistics N(m1, m2) is often unavailable, and only the normalized marginal values S1(m1) and S2(m2) are known, where S1(m1) is the normalized total number of observations corresponding to the value m1 of the first factor and S2(m2) is the normalized total number of observations corresponding to the value m2 of the second factor. In this case P1(m1) = S1(m1)/K and P2(m2) = S2(m2)/K. It is clear that based on P1(m1) and P2(m2) it is impossible to calculate the exact value of P(m1, m2). But how to do this approximately with the best confidence? Thanks in advance for any advice.
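For what it is worth, with only the marginals available the maximum-entropy choice (the one adding no extra assumptions) is the independence approximation P(m1, m2) ≈ P1(m1)·P2(m2); if a related joint table is available from elsewhere, iterative proportional fitting can refine it. A minimal sketch with invented marginals:

```python
import numpy as np

P1 = np.array([0.2, 0.5, 0.3])            # marginal over values of factor 1
P2 = np.array([0.1, 0.6, 0.2, 0.1])       # marginal over values of factor 2

P_joint = np.outer(P1, P2)                # independence approximation
print(P_joint)
print("rows sum to P1:", np.allclose(P_joint.sum(axis=1), P1))
print("cols sum to P2:", np.allclose(P_joint.sum(axis=0), P2))
```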
I am working on small modular nuclear implementation, initially in Europe. We believe that this should be a social enterprise, and your work may well enable the three legs of sustainability to be brought to a common currency: economic (it already undercuts all other sources), social (harder to prove, but the acid test is municipal and private pension investment on a massive scale), and environmental (high-temperature gas-cooled, dry-cooled, TRISO-fuelled reactors are vastly superior environmentally to PWRs, and are also distinctive in being inherently safe, provable in real life, as opposed to probabilistically safe, which was needed only because the consequence of test-proving safety is too severe for LWR physics). We envisage 50 MWe units distributed close to domestic and industrial sites with heat, hydrogen and power demands, so your mapping work would form a strong basis for site selection. I have done similar work for renewables site selection in Scotland.
We are working on a large number of building-related time series data sets that display various degrees of 'randomness'. When plotted, some display recognisable diurnal or seasonal patterns that can be correlated with building operational regimes or the weather (e.g. heating energy consumption with distinct signatures at hourly and seasonal intervals). However, some appear to be completely random (lift data that contain a lot of random noise).
Does anyone know of an established method that can be deployed on these data sets to provide some 'quantification' or 'ranking' of how much stochasticity exists in each data set?
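One established option among several (spectral entropy and sample entropy are alternatives) is permutation entropy, which scores a series between roughly 0 (perfectly regular) and 1 (white-noise-like). A self-contained sketch on synthetic series standing in for the building data:

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, delay=1):
    """Normalised permutation entropy (Bandt-Pompe): counts ordinal patterns of
    m consecutive values and returns their Shannon entropy divided by log(m!)."""
    x = np.asarray(x)
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * delay:delay]))
        for i in range(len(x) - (m - 1) * delay)
    )
    total = sum(patterns.values())
    probs = np.array([c / total for c in patterns.values()])
    return float(-(probs * np.log(probs)).sum() / math.log(math.factorial(m)))

rng = np.random.default_rng(0)
t = np.arange(24 * 60)                                   # hourly values, 60 days
diurnal = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
noise = rng.normal(size=t.size)                          # structureless series

print("diurnal-like series:", round(permutation_entropy(diurnal), 3))
print("white noise        :", round(permutation_entropy(noise), 3))
```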
Hi, in my city I have seen a discussion among volcanic hazard researchers (the Colombian Geological Service and the local university); the central subject is the accuracy of volcanic hazard methodologies (i.e. the deterministic method vs the probabilistic method). I'd like to learn more about works that compare these two methods against observed events. Please, could anybody share papers or books about the subject?
Thanks a lot.
In earth slope stability analysis using SLOPE/W.
Nowadays the use of AI/ML in disaster modelling is getting quite popular. I would like to know if there exists any such earthquake model that is better than conventional earthquake models for probabilistic seismic hazard analysis (PSHA) and/or deterministic scenario seismic hazard analysis (DSHA).
When running a scheduling scheme, what probabilistic approach can be used to estimate the energy consumption of that scheduling algorithm?
Let's center the discussion mostly on the fact of characterizing geotechnically those solid wastes.
Are probabilistic analyses the best way for these cases of highly heterogeneous materials?
#Slopes #Geotechnics #GeotechnicalEngineering
Many scholars talk about the importance of the role of demons in scientific thought experiments. The best known are Laplace's Demon and Maxwell's Demon. Laplace claimed that the world is completely deterministic. Contrary to this, Maxwell claimed that the world is probabilistic and indeterministic. What can these and other demons tell us about our world? Can demons in science contribute to a better understanding of the Universe and to scientific discoveries?
I need to model a wireless communication system where the absence or presence of the intended receiver is a completely random process. For instance, when a transmitter sends something, there should be a random probability of whether the receiver receives the transmitted signal or not. I think I should model the presence or absence of the receiver as "on/off". However, there might be other useful models unknown to me. An answer will be highly appreciated.
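Two simple options, offered only as sketches: an i.i.d. Bernoulli on/off state per transmission slot, and, if presence should be correlated over time, a two-state Markov chain (Gilbert-Elliott style). All parameter values below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots = 20

# (1) Memoryless on/off: receiver present in each slot with probability p.
p_present = 0.7
bernoulli_presence = rng.random(n_slots) < p_present

# (2) Correlated on/off: two-state Markov chain with "stay" probabilities.
p_stay_on, p_stay_off = 0.9, 0.6
markov_presence = np.empty(n_slots, dtype=bool)
state = True
for i in range(n_slots):
    markov_presence[i] = state
    stay = p_stay_on if state else p_stay_off
    state = state if rng.random() < stay else not state

print("Bernoulli:", bernoulli_presence.astype(int))
print("Markov   :", markov_presence.astype(int))
```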