
# Probabilistic Models - Science topic

Explore the latest questions and answers in Probabilistic Models, and find Probabilistic Models experts.

## Questions related to Probabilistic Models

If someone could please share any report/paper/thesis, it would be highly appreciated.

I am working on a dataset that contains some censored data. Probabilistic approaches such as Bayesian estimation can accommodate censored data; however, I am interested in deploying machine learning in Python. I would appreciate any literature, suggestions, or guidance. The problem is classification.

Thanks

If we have multiple experts providing the prior probabilities for the parent nodes, how should the experts fill in the node probabilities (e.g., low, medium, and high), and how do we reach a consensus among all the experts about the probability distribution of a parent node?

If someone could share any paper/questionnaire/expert-based Bayesian network where these queries are explained, it would be highly appreciated.

Your guidance and support will be highly appreciated.
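One common aggregation scheme for this situation is a linear opinion pool: each expert supplies a full distribution over the node states and the consensus is their (possibly weighted) average. A minimal numpy sketch, with hypothetical elicited numbers and equal weights assumed:

```python
import numpy as np

# Each row: one expert's elicited prior over the states Low / Medium / High
# (hypothetical numbers for illustration).
expert_priors = np.array([
    [0.6, 0.3, 0.1],   # expert 1
    [0.5, 0.4, 0.1],   # expert 2
    [0.7, 0.2, 0.1],   # expert 3
])

# Equal weights; unequal weights can encode differing expert credibility.
weights = np.array([1/3, 1/3, 1/3])

# Linear opinion pool: weighted average of the individual distributions.
consensus = weights @ expert_priors
consensus /= consensus.sum()          # guard against rounding drift

print(consensus)   # [0.6, 0.3, 0.1] for these numbers
```

Weighted geometric pooling (a normalized product of powers of the expert distributions) is the usual alternative when a sharper consensus is wanted.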

Could any expert try to examine our novel approach for multi-objective optimization?

The brand-new approach is entitled "Probability-Based Multi-Objective Optimization for Material Selection", published by Springer, available at https://link.springer.com/book/9789811933509,

DOI: 10.1007/978-981-19-3351-6.

Is there any technique/method to convert deterministic values into probabilistic values in a Bayesian network in order to improve the results?

I'm studying the topic from Probabilistic Robotics by Thrun, Burgard, and Fox.

In the Extended Kalman Filter algorithm, we linearized the action model in the following way.

𝑔(𝑢(𝑡),𝑥(𝑡-1)) = 𝑔(𝑢(𝑡),𝜇(𝑡−1)) + 𝐺(𝑡)⋅(𝑥(𝑡−1)−𝜇(𝑡−1))

𝑔(𝑢(𝑡),𝑥(𝑡-1)) is the action model and 𝐺(𝑡) is its Jacobian matrix with respect to the state 𝑥(𝑡−1).

I don't see how this guarantees linearity because 𝑔 could be nonlinear in 𝑢(𝑡). The authors don't mention anything about why this is the case.

In other words, I expected a multivariate Taylor expansion here, giving a function that is linear in both 𝑢(𝑡) and 𝑥(𝑡−1).
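As a concrete illustration of what the filter actually uses, here is a hedged Python sketch of the EKF prediction step with a hypothetical unicycle-style motion model and a numerical Jacobian taken with respect to the state only. Note that u(t) is a known input at prediction time, so g is simply evaluated at the actual u(t); only linearity in x(t−1) is needed to keep the propagated state distribution Gaussian.

```python
import numpy as np

def g(u, x):
    # Hypothetical nonlinear motion model: unicycle-style update.
    # x = [px, py, theta], u = [v, w] (velocity, turn rate), dt = 1.
    px, py, th = x
    v, w = u
    return np.array([px + v*np.cos(th), py + v*np.sin(th), th + w])

def jacobian_x(u, x, eps=1e-6):
    # Numerical Jacobian G_t = dg/dx, evaluated at the mean; u is held fixed.
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        J[:, i] = (g(u, x + dx) - g(u, x - dx)) / (2*eps)
    return J

mu_prev = np.array([0.0, 0.0, 0.0])
Sigma_prev = 0.1*np.eye(3)
u = np.array([1.0, 0.1])

G = jacobian_x(u, mu_prev)            # linearization about mu_{t-1}
mu_pred = g(u, mu_prev)               # the mean goes through the full nonlinear g
Sigma_pred = G @ Sigma_prev @ G.T     # the covariance uses only G (+ process noise R)
```

Because u(t) is deterministic at prediction time, any nonlinearity of g in u affects only where the expansion is evaluated, not the Gaussian form of the result.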

Hi guys,

I am using the negative log-likelihood of a Dirichlet distribution as my loss function while implementing this paper:

I parameterise the distribution using my network output and compute the negative log-likelihood of the observed ground truth.

The issue is that the loss is sometimes negative, which means that the likelihood (density) at the point of observation is greater than 1.

My understanding of this phenomenon comes in two parts:

- This is normal, as a likelihood (density) can be higher than 1.

- This indicates overfitting, meaning that the likelihood function peaks at the observed point so sharply that other areas of the support would be near zero if we placed a test observation there.
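The first interpretation can be checked directly: a Dirichlet density routinely exceeds 1 on the simplex, so a negative NLL is not by itself evidence of overfitting. A quick scipy check:

```python
import numpy as np
from scipy.stats import dirichlet

alpha = np.array([10.0, 10.0, 10.0])   # a moderately concentrated Dirichlet
x = np.array([1/3, 1/3, 1/3])          # observation at the mode

logpdf = dirichlet.logpdf(x, alpha)
print(logpdf)   # positive => density > 1 => NLL at this point is negative
```

Overfitting would instead show up as the test-set NLL rising while the training NLL keeps falling.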

Hello, is there a way to find the parameter which affects a function maximally?

At first, I think, one builds a model from the data at hand, then takes the partial derivative with respect to each parameter; the largest result identifies the most influential parameter for that model. Is there any other way, especially one where it is not necessary to build a model before taking a derivative? For instance, in conditional probabilistic models such as Bayesian networks there are only data and a graph structure. Thanks.
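A minimal sketch of the derivative-based idea, with a hypothetical black-box model and central differences (no analytic expression required, only the ability to evaluate the model):

```python
import numpy as np

def f(p):
    # Hypothetical model output as a function of parameters p.
    return p[0]**2 + 10*p[1] + 0.1*np.sin(p[2])

def local_sensitivity(f, p0, eps=1e-6):
    # Central-difference partial derivatives df/dp_i at the point p0.
    p0 = np.asarray(p0, float)
    grad = np.zeros_like(p0)
    for i in range(len(p0)):
        d = np.zeros_like(p0); d[i] = eps
        grad[i] = (f(p0 + d) - f(p0 - d)) / (2*eps)
    return grad

p0 = [1.0, 2.0, 0.0]
s = local_sensitivity(f, p0)
print(s)                                       # ~[2.0, 10.0, 0.1]
most_influential = int(np.argmax(np.abs(s)))   # index of the strongest parameter
```

For data-plus-graph settings without an explicit fitted model, variance-based (Sobol) indices or the mutual information between each parameter and the output are common model-light alternatives.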

Please look at the text of the section on random walk from page 9 to formula 4.7, where you will find mathematical calculations justifying the probabilistic interpretation of the Riemann zeta function.

Preprint Chaotic dynamics on the sphere

I have seen network meta-analysis used to support cost-effectiveness analysis in two papers. Fractional polynomial models were involved because of non-proportional hazards. I would like to know how to combine the evidence from a network meta-analysis with a partitioned survival model in a cost-effectiveness analysis of cancer treatments. Which software is best suited? I wonder how the authors connect the two. The abstracts of the two papers are as follows.
1. Front Public Health. 2022 Apr 15;10:869960. doi: 10.3389/fpubh.2022.869960. eCollection 2022.

Cost-Effectiveness Analysis of Five Systemic Treatments for Unresectable Hepatocellular Carcinoma in China: An Economic Evaluation Based on Network Meta-Analysis.

Zhao M, Pan X, Yin Y, Hu H, Wei J, Bai Z, Tang W.

BACKGROUND AND OBJECTIVE: Unresectable hepatocellular carcinoma (uHCC) is the main histological subtype of liver cancer and causes a great disease burden in China. We aimed to evaluate the cost-effectiveness of five first-line systemic treatments newly approved in the Chinese market for the treatment of uHCC, namely sorafenib, lenvatinib, donafenib, sintilimab plus bevacizumab (D + A), and atezolizumab plus bevacizumab (T + A), from the perspective of China's healthcare system, to provide a basis for decision-making.

METHODS: We constructed a network meta-analysis of 4 clinical trials and used fractional polynomial models to indirectly compare the effectiveness of treatments. The partitioned survival model was used for cost-effectiveness analysis. Primary model outcomes included the costs in US dollars, health outcomes in quality-adjusted life-years (QALYs), and the incremental cost-effectiveness ratio (ICER) under a willingness-to-pay threshold of $33,521 (3 times the per capita gross domestic product in China) per QALY. We performed deterministic and probabilistic sensitivity analyses to investigate the robustness. To test the effect of active treatment duration on the conclusions, we performed a scenario analysis.

RESULTS: Compared with sorafenib, the lenvatinib, donafenib, D + A, and T + A regimens yielded an increase of 0.25, 0.30, 0.95, and 1.46 life-years, respectively. Correspondingly, these four therapies yielded an additional 0.16, 0.19, 0.51, and 0.86 QALYs, and all four ICERs ($40,667.92/QALY, $27,630.63/QALY, $51,877.36/QALY, and $130,508.44/QALY gained) were higher than $33,521 except for donafenib. T + A was the most effective treatment and donafenib was the most economical option. Sensitivity and scenario analysis results showed that the base-case analysis was highly reliable.

CONCLUSION: Although combination therapy could greatly improve survival benefits for patients with uHCC, under the current WTP, donafenib is still the most economical option.

2. Value Health. 2022 May;25(5):796-802. doi: 10.1016/j.jval.2021.10.016. Epub 2021 Dec 1.

Cost-Effectiveness of Systemic Treatments for Metastatic Castration-Sensitive Prostate Cancer: An Economic Evaluation Based on Network Meta-Analysis.

Wang L, Hong H, Alexander GC, Brawley OW, Paller CJ, Ballreich J.

OBJECTIVES: To assess the cost-effectiveness of systemic treatments for metastatic castration-sensitive prostate cancer from the US healthcare sector perspective with a lifetime horizon.

METHODS: We built a partitioned survival model based on a network meta-analysis of 7 clinical trials with 7287 patients aged 36 to 94 years between 2004 and 2018 to predict patient health trajectories by treatment. We tested parameter uncertainties with probabilistic sensitivity analyses. We estimated drug acquisition costs using the Federal Supply Schedule and adopted generic drug prices when available. We measured cost-effectiveness by an incremental cost-effectiveness ratio (ICER).

RESULTS: The mean costs were approximately $392 000 with androgen deprivation therapy (ADT) alone and approximately $415 000, $464 000, $597 000, and $959 000 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide added to ADT, respectively. The mean quality-adjusted life-years (QALYs) were 3.38 with ADT alone and 3.92, 4.76, 3.92, and 5.01 with docetaxel, abiraterone acetate, enzalutamide, and apalutamide added to ADT, respectively. As add-on therapy to ADT, docetaxel had an ICER of $42 069 per QALY over ADT alone; abiraterone acetate had an ICER of $58 814 per QALY over docetaxel; apalutamide had an ICER of $1 979 676 per QALY over abiraterone acetate; enzalutamide was dominated. At a willingness to pay below $50 000 per QALY, docetaxel plus ADT is likely the most cost-effective treatment; at any willingness to pay between $50 000 and $200 000 per QALY, abiraterone acetate plus ADT is likely the most cost-effective treatment.

CONCLUSIONS: These findings underscore the value of abiraterone acetate plus ADT given its relative cost-effectiveness to other systemic treatments for metastatic castration-sensitive prostate cancer.

I’m trying to fit some kind of causal model to continuous value data by solving differential equations probabilistically (machine learning).

Currently I’m solving complex-valued vector quadratic differential equation so there are more cross correlations between variables.

dx(t)/dt = diag(Ax(t)x(t)^h) + Bx(t) + c + f(t)

or just

dx(t)/dt = diag(Ax(t)x(t)^h) + Bx(t) + c

diag() takes diagonal of the square matrix.

But my differential-equation math is rusty; I studied the subject 20 years ago. I solved the equation in the 1-dimensional case but would need help for vector-valued x(t).

Would someone point me to appropriate material?

EDIT: I did edit the question to be a bit more clear to read.
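A closed form is unlikely in the general vector case, but the system integrates cleanly numerically; a hedged scipy sketch with small hypothetical A, B, c (solve_ivp supports complex-valued states directly):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Small hypothetical complex system: dx/dt = diag(A x x^H) + B x + c
n = 2
rng = np.random.default_rng(0)
A = 0.1*(rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n)))
B = -np.eye(n) + 0.05j*np.ones((n, n))     # stable linear part
c = np.array([0.1+0.0j, 0.0+0.1j])

def rhs(t, x):
    X = np.outer(x, np.conj(x))            # x x^H, an n-by-n rank-1 matrix
    return np.diag(A @ X) + B @ x + c      # diag() takes the matrix diagonal

x0 = np.array([0.2+0.1j, -0.1+0.05j])
sol = solve_ivp(rhs, (0.0, 5.0), x0, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])                        # state at t = 5
```

The forcing term f(t) from the first variant can be added inside `rhs` in the same way.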

Hi, for a network traffic analysis task I need a probabilistic model to analyze sequences of network data. Each observation is an event consisting of structured information (e.g., IP addresses, ports, protocol type). I am interested in the dependencies between these observations using a generative model. Any ideas?
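One simple generative baseline, sketched under the assumption that each structured event can be mapped to a discrete symbol (here a hypothetical (protocol, port) pair): fit a first-order Markov chain over the symbols. HMMs or neural sequence models generalize the same idea with latent state.

```python
from collections import defaultdict

# Hypothetical event stream: (protocol, dst_port) pairs as symbols.
events = [("tcp", 80), ("tcp", 443), ("tcp", 80), ("udp", 53),
          ("tcp", 80), ("tcp", 443), ("udp", 53), ("tcp", 80)]

# Count transitions between consecutive events.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(events, events[1:]):
    counts[a][b] += 1

# Normalize counts into conditional probabilities P(next | current).
P = {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
     for a, nxt in counts.items()}

print(P[("tcp", 80)])
```

The fitted table both generates new sequences (sample from P(next | current)) and scores observed ones, so low-probability transitions can flag anomalies.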

*What if we only managed to obtain one discharge value (e.g., 1.8 m³/s) for a river section; is it possible for us to create a predictive hydrograph?

*What are the parameters needed?

*Is there any article or journal to support the probabilistic analysis?

Thank you in advance!

Hello.

I'm a beginner in diffusion tensor imaging.

I want to know the pros and cons, and the similarities and differences, between the probabilistic tractography of FSL and MRtrix.

Could you explain them?

I really appreciate it if someone could help!

I have two questions and hope for some expert advice, please.

1. My understanding is that when conducting an economic evaluation of clinical trial data, no discounting of costs is applied if the follow-up period of the trial was 12 months or less. Is this still the standard practice and can you please provide a recent reference?

2. How can one adjust for uncertainties/biases when you use historic health outcomes data? If the trial was non-randomised, how can you adjust for that within an economic evaluation other than the usual probabilistic sensitivity analysis?

Thank you so much.

Hi all,

In an experimental investigation, there are two parameters to be measured, say X1 and X2. My goal is to see how X1 varies with X2. Specifically, I am interested in classifying the graph of X1 versus X2 according to a number of characteristic graphs. Each characteristic graph corresponds to a specific state of the system which I need to determine.

The problem is with the graph of X1 vs X2 undergoing significant changes when replicating the test, thus making the classification a perplexing task. A simple approach I could think of is taking the average of these graphs, but I am not sure if this is reasonable; I am looking for a more mathematical framework.

Any comments would be appreciated.

Regards,

Armin

Hello everyone,

Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

Probabilistic metric spaces which are not metric spaces have been widely developed in theory, but can someone give some examples of applications of such spaces?

I invite you to see the newly-launched website about the Choice Wave and the Theory of Economic Parallel Rationality, tradition and innovation revolutionising economic thought.

A hypothetical example (hence "hypothesis"): assume we have enough observations and apply both a frequentist and a Bayesian model (e.g., a linear model with a Gaussian error distribution and, for the Bayesian model, an uninformative prior to keep it rather vague). We look at the intervals, and both models result in the same intervals. Then, according to [1], we can say it is similar* to suggest that the estimate of the population quantity fell within the interval. Are both then equally "wrong"? And do they actually quantify uncertainty, as both "want" to make probabilistic statements from the data about the population (or am I wrong, since they really concern P(data|estimate) and P(estimate|data) respectively)? The data are certain, and the estimates are based on the data, so it seems

**certain** that the estimate might approximate the population quantity (assuming a perfectly sampled population, and that this description makes sense) and might take on a specified value (note that the confidence and credibility intervals have converged). Again, the data are certain; what is uncertain is what is not in the data. I am just curious what more statistically educated people think of this and how they would communicate it, as this seems hardly discussed (or it is my ignorance). Thank you in advance for your input.

*Not their words. I just remember a part from the text.

I have 6 variables with Mean, Stdev, CoV, Min and Max. Please find the attached Excel file.

I'm working on a new probabilistic routing protocol based on k-shortest-path selection, and I wish to test it in the NS3 simulator, but there is not much literature on how to proceed. I'm looking for some tips that can help me do that; I'm open to suggestions. Thanks!

Hi there, RG community. I'm back with a new couple of questions. I'm trying to implement a probabilistic routing protocol in NS2/NS3, but there is not much literature on it. Has someone done that before? If so, how can I proceed? I'm open to exchanging on that. Thanks!

The publicly released THUMS model comes with a standard output of nodal coordinates of certain anatomical locations. Since there is an already established user group, I was wondering if there is a standard output template created with THUMS to output cross-sectional forces and moments in different bones (especially the long bones) that could be used to predict the risk of injuries probabilistically? I could define my own outputs, but I was wondering if there is a standard template so the results could be compared across multiple groups.

Thanks!

I am working on a project on anomaly detection in videos using Matlab. I have to perform data association with clusters using JPDA, but unfortunately it isn't working well. I have gone through several papers on JPDA, but they are all about tracking an object.

Kindly guide me on how to proceed, or point me to any research paper in which JPDA is used to perform data association for purposes other than tracking.

Regards

In RStudio, there are many commands in the gumbel package, and their arguments differ.

I'm asking about the alpha parameter of the copula, which must be greater than 1. If this is the one used to plot the probability paper, how can I choose the value of alpha?
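If the goal is fitting alpha to data rather than reading it off the probability paper, one standard route is moment matching via Kendall's tau, using the Gumbel-copula relation τ = 1 − 1/α (so α = 1/(1 − τ), which is ≥ 1 whenever τ ≥ 0). A Python sketch with synthetic stand-in data:

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical paired observations with positive dependence.
rng = np.random.default_rng(1)
x = rng.gumbel(size=200)
y = 0.7*x + 0.3*rng.gumbel(size=200)

tau, _ = kendalltau(x, y)
alpha = 1.0 / (1.0 - tau)      # Gumbel copula: tau = 1 - 1/alpha
print(tau, alpha)
```

Maximum likelihood over the copula density is the heavier-duty alternative when tau-inversion is too crude.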

Data sets, when structured, can be put in vector form (v(1)...v(n)), adding time dependency it's v(i, t) for i=1...n and t=1...T.

Then we have a matrix of terms v(i, j)...

Matrices are important: they can represent linear operators in finite dimension. Composing such operators f, g as f∘g translates into the matrix product F×G, with the obvious notation.

Now a classical matrix M is a table of lines and columns, containing numbers or variables. Precisely at line i and column j of such table, we store term m(i, j), usually belonging to real number set R, or complex number set C, or more generally to a group G.

What about generalising such a matrix of numbers into a matrix of sets? In any field of science, this could mean storing all the data collected for a particular parameter m(i, j) as a set M(i, j) of data.

What can we observe, say, define on such matrices of sets?

If you are as curious as me, in your own field of science or engineering, please follow the link below, and more importantly, feedback here with comments, thoughts, advice on how to take this further.

Ref:

I am stuck between **Quantum mechanics** and **General relativity**. The mind-consuming scientific humor, ranging from continuous and deterministic to probabilistic, seems to have no end. I would appreciate any words which can help me understand at least a bit, with relevance. Thank you,

Regards,

Ayaz

I would like to know the details of how the t-norm is used in probabilistic metric spaces. As we know, to generalize the triangle inequality we use a triangular norm, but how? I need an explanation, and also how and where it is used in PM spaces.

I now have a set of input and output data, and a low-order transfer function model which has several parameters to be identified.

If I use tfest in Matlab, I can identify a set of parameter estimates, but this is not what I expect. What I expect to get is an interval that can encompass all or most of the observations. Which probabilistic method can solve my problem? Preferably a probabilistic method or a prediction interval (PI) method. I would be very grateful if you could point me to a paper or website.

I'm developing a readiness assessment model of contractors' preparedness for a specific activity. To do so, a survey study was carried out and the data analyzed with PLS-SEM to obtain the critical success factors (CSFs) contributing to that readiness. Nevertheless, because the subject is too specific, it was impossible to define or quantify a population, and hence a probabilistic sample, which can compromise the external validity (generalizability) of my readiness assessment model. Is it feasible to try to reduce that generalizability issue with minimum sample size requirements (by means of power analyses) from Cohen (1992) and the use of PLSpredict to determine the predictive power of the model?

I'd be delighted if any colleague could reply to this need

The birth and death probabilities are p_i and q_i respectively, and 1 − (p_i + q_i) is the probability of no change in the process. Zero ({0}) is an absorbing state and the state space is {0, 1, 2, ...}. What are the conditions for {0} to be recurrent (positive or null)? Is the set {1, 2, 3, ...} transient? What can we say about the duration of the process until absorption, and about the stationary distribution if it exists?

Every comment is appreciated.
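For reference, a standard result for birth-death chains with an absorbing barrier at 0 (stated under the assumption p_k, q_k > 0 for all k ≥ 1; see e.g. Karlin and Taylor) can be sketched as:

```latex
% Absorption at 0 from any state i >= 1 is certain if and only if
\[
  \sum_{n=1}^{\infty} \prod_{k=1}^{n} \frac{q_k}{p_k} \;=\; \infty .
\]
% If the series converges, the chain drifts to infinity with positive
% probability. In either case every state in \{1,2,\dots\} is transient
% (return probabilities are strictly less than 1), and since 0 is
% absorbing, the only candidate stationary distribution is the point
% mass at 0.
```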

I am looking for an analytical, probabilistic, statistical or any other way to compare the results of a number of different approaches implemented on the same test model. These approaches can be different optimization techniques implemented on a similar problem, or different types of sensitivity analysis implemented on a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.

I would like to hear if there is a technique that you use in your field, as I might be able to derive something for my problem.

Thank you very much.

Probabilistic sensitivity analysis is criticised for potentially introducing uncertainty itself because of the consideration of the distribution of the parameters. Are there ways of addressing this potential for additional uncertainty?

I have a small dataset and need to learn a mixed probabilistic model (discrete + continuous) and simulate new values taking into account the learned structure.

Hello,

I am currently working on the Beta distribution, and I am using the distribution to model knowledge/opinion in software. The user adjusts the alpha and beta values of the Beta distribution to come up with a graph that resembles his opinion. What I am really interested in is how we can add up two Beta distribution graphs to come up with a more generalized Beta distribution.

For every opinion, there is a set of alpha and beta value. Say two opinions have the following alpha and beta values.

Opinion 1: [α₁ = 2; β₁ = 5]

Opinion 2: [α₂ = 4; β₂ = 5]

(I have attached the graph of these two sets of alpha and beta for visualization.)

It would be very helpful for my ongoing research to know how I can add or merge these two graphs into one. Any suggestions would be greatly appreciated.
Thank you for your valuable time.
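Two standard ways to merge such opinions, sketched in Python with scipy (equal expert weights are an assumption): linear pooling, which gives a Beta mixture rather than a Beta, and moment matching, which collapses that mixture back to a single Beta with the same mean and variance.

```python
import numpy as np
from scipy.stats import beta

# The two opinions from the question.
a1, b1 = 2.0, 5.0
a2, b2 = 4.0, 5.0
w1, w2 = 0.5, 0.5          # equal credibility (an assumption)

# Option 1: linear pooling -> a mixture density (not itself a Beta).
x = np.linspace(0, 1, 501)
mix_pdf = w1*beta.pdf(x, a1, b1) + w2*beta.pdf(x, a2, b2)

# Option 2: moment matching -> one Beta with the mixture's mean/variance.
m1, v1 = beta.mean(a1, b1), beta.var(a1, b1)
m2, v2 = beta.mean(a2, b2), beta.var(a2, b2)
m = w1*m1 + w2*m2
v = w1*(v1 + m1**2) + w2*(v2 + m2**2) - m**2   # law of total variance
k = m*(1 - m)/v - 1
a_pool, b_pool = m*k, (1 - m)*k
print(a_pool, b_pool)      # parameters of the single "consensus" Beta
```

Moment matching loses any bimodality of the mixture, so inspect `mix_pdf` first to check whether a single Beta is a reasonable summary.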

How can one calculate the sum and difference of many random variables that follow exponential distributions with different parameters?

(The value of Lambda is different for all or some variables).

example :

L(t) = f(t) + g(t) - h(t)

with

f(t) = a·exp(−a·t)

g(t) = b·exp(−b·t)

h(t) = c·exp(−c·t)

st:

a = Lambda_1

b = Lambda_2

c = Lambda_3.
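For distinct rates the sum F + G has a hypoexponential (phase-type) density obtained by convolution, and subtracting H then requires a cross-correlation integral; a Monte Carlo sketch is often the quickest sanity check. The rates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 1.0, 2.0, 3.0            # lambda_1, lambda_2, lambda_3
N = 1_000_000

# L = F + G - H with F~Exp(a), G~Exp(b), H~Exp(c), all independent.
# numpy's exponential() takes the scale 1/lambda, not the rate.
L = (rng.exponential(1/a, N) + rng.exponential(1/b, N)
     - rng.exponential(1/c, N))

print(L.mean())    # theory: 1/a + 1/b - 1/c = 1 + 0.5 - 1/3
```

A histogram of `L` approximates the density of the combined variable; note its support extends below 0 because of the subtracted term.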

Does anyone know a geotechnical engineering software which can support subset simulation? I need to do some probabilistic analysis of a geotechnical project. However, due to the small probability, I need to use subset simulation instead of the crude Monte Carlo analysis.

Does quantum mechanics give probabilistic results or exact quantified values?

I am a beginner in probabilistic forecasting. From my research I have a vague idea that Monte Carlo simulation can be used to inject uncertainty into the process. Do I need to generate multiple point forecasts via Monte Carlo and post-process them to obtain a probabilistic distribution? Can anyone help with the procedure and the steps I should follow to do probabilistic forecasting? It would be helpful if someone could share an example.
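A minimal sketch of the ensemble idea: perturb the point forecast with draws from an assumed residual model, then read predictive quantiles off the ensemble. Everything here (the forecast shape, the Gaussian residuals) is a stand-in for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point forecast for the next 24 hours.
point_forecast = 100 + 10*np.sin(np.linspace(0, 2*np.pi, 24))

# Monte Carlo: perturb the forecast to create an ensemble of scenarios.
n_scenarios = 1000
noise = rng.normal(0, 5, size=(n_scenarios, 24))   # assumed residual model
ensemble = point_forecast + noise

# Summarize the ensemble as predictive quantiles (the probabilistic forecast).
q10, q50, q90 = np.percentile(ensemble, [10, 50, 90], axis=0)
print(q50[:3])   # median forecast for the first three hours
```

In practice the residual model is fitted to historical forecast errors (or the input uncertainties are sampled and pushed through the forecasting model), and the resulting quantiles are evaluated with scores such as the pinball loss or CRPS.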

- I wanted to implement a feature selection framework using criteria like probabilistic error or probabilistic distance. I also have a doubt: if I have a non-parametric distribution for my features, can I use kernel-based estimation techniques to find the class-conditional probabilities, instead of an analytical function, to evaluate criteria like probabilistic error or Bayes error rate?
- I was thinking that even with a non-parametric distribution we can use the probability values estimated by kernel density estimation, and the integration would ultimately converge to a summation in the formula I have attached for the error rate.
- Is my approach fine? If anyone has tried this, please guide me.

Do you have any experience with probabilistic software for structural reliability assessment? Any links?

A question to all you stroke and tDCS / TMS researchers.

I want to visualize the lesion locations of my participants in relation to the stimulation site. In my case I have the lesions as ROIs normalised to the MNI standard space. Now I would like to create a 3D image with the lesion mask as a volume and mark position P4 on top. My objective is to see whether I actually tried to stimulate healthy or affected tissue with my tDCS protocol.

Alternatively marking P4 on the 2D slices would be fine as well. I just don't know how. I found the paper by Okamoto et al, 2004 ( ) which gives coordinates for the MNI templates.

Thanks everyone for your advice.

Dear friends,

Please kindly outline the procedure steps for making probabilistic and deterministic earthquake hazard maps using GIS or other related software.

Thanks,

Suppose we have statistics N(m1, m2), where m1 is the value of the first factor, m2 is the value of the second factor, N(m1, m2) is the number of observations corresponding to the values of factors m1 and m2. In this case, the probability P(m1, m2) = N(m1, m2) /K, where K is the total number of observations. In real situations, detailed statistics N(m1, m2) is often unavailable, and only the normalized marginal values S1(m1) and S2(m2) are known, where S1(m1) is the normalized total number of observations corresponding to the value m1 of the first factor and S2(m2) is the normalized total number of observations corresponding to the value m2 of the second factor. In this case P1(m1) = S1(m1)/K and P2(m2) = S2(m2)/K. It is clear that based on P1(m1) and P2(m2) it is impossible to calculate the exact value of P(m1, m2). But how to do this approximately with the best confidence? Thanks in advance for any advice.
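Absent any dependence information, the outer product of the marginals is the maximum-entropy joint consistent with both of them, so it is a defensible default; a numpy sketch with hypothetical marginals:

```python
import numpy as np

# Normalized marginals P1(m1) and P2(m2) (hypothetical numbers).
P1 = np.array([0.2, 0.5, 0.3])     # over values of factor 1
P2 = np.array([0.6, 0.4])          # over values of factor 2

# Maximum-entropy estimate consistent with both marginals: independence.
P_joint = np.outer(P1, P2)         # P(m1, m2) ~ P1(m1) * P2(m2)

# Sanity check: the estimate reproduces both marginals exactly.
assert np.allclose(P_joint.sum(axis=1), P1)
assert np.allclose(P_joint.sum(axis=0), P2)
print(P_joint)
```

If partial dependence information is available (e.g., a known correlation or a coarse contingency table), iterative proportional fitting can pull the estimate toward it while still preserving the marginals.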

I am working on small modular nuclear implementation, initially in Europe. We believe that this should be a social enterprise, and your work may well enable the three legs of sustainability to be brought to a common currency: economic (it already undercuts all other sources); social (harder to prove, but the acid test is municipal and private pension investment on a massive scale); and environmental (high-temperature, gas-cooled, dry-cooled, TRISO-fuelled reactors are vastly superior environmentally to PWRs, and are also distinctive in being inherently safe, provable in real life, as opposed to probabilistically safe, which was needed only because the consequence of test-proving safety is too severe for LWR physics). We envisage 50 MWe units distributed close to domestic and industrial sites with heat, hydrogen and power demands, so your mapping work would form a strong basis for site selection. I have done similar work for renewables site selection in Scotland.

We are working on a large number of building-related time series datasets that display various degrees of 'randomness'. When plotted, some display recognisable diurnal or seasonal patterns that can be correlated with building operational regimes or the weather (e.g., heating energy consumption with distinct signatures at hourly and seasonal intervals). However, some appear to be completely random (e.g., lift data that contain a lot of random noise).

Does anyone know if an established method exists that can be deployed on these data sets and provide some 'quantification' or 'ranking' of how much stochasticity exists in each data set?

Hi, in my city I have seen a discussion among volcanic hazard researchers (the Colombian Geological Service and the local university); the central subject is the accuracy of volcanic hazard methodologies (i.e., the deterministic method vs. the probabilistic method). I'd like to learn more about works that compare these two methods against observed events. Please, could anybody share papers or books about the subject?
Thanks a lot.

In earth slope stability analysis with SLOPE/W.

Nowadays the use of AI/ML in disaster modeling is getting quite popular. I would like to know if there exists any earthquake model that is better than conventional earthquake models for probabilistic seismic hazard analysis (PSHA) and/or deterministic scenario seismic hazard analysis (DSHA).

When running a scheduling scheme, what probabilistic approach can be used to estimate the energy consumption of that scheduling algorithm?

Let's center the discussion mostly on the fact of characterizing geotechnically those solid wastes.

Are probabilistic analyses the best way for these cases of highly heterogeneous materials?

#Slopes #Geotechnics #GeotechnicalEngineering

Many scholars talk about the importance of the role of demons in scientific thought experiments. The best known are Laplace's demon and Maxwell's demon. Laplace claimed that the world is completely deterministic. Contrary to this, Maxwell claimed that the world is probabilistic and indeterministic. What can these and other demons tell us about our world? Can demons in science contribute to a better understanding of the Universe and to scientific discoveries?

I need to model a wireless communication system where the absence or presence of the intended receiver is completely a random process. For instance, when a transmitter sends something, there should be a random probability of whether the receiver receives the transmitted signals or not. I think I should model the presence or absence of the receiver as an "on/off" process. However, there might be other useful models unknown to me. An answer would be highly appreciated.

Can anybody tell me the difference between the 'probability density function' and the 'power spectral density function' for random data like wind speed?

We have different metrics to calculate engineering resilience by comparing the disturbed state and the mean/equilibrium states. Are there any methods which incorporate the joint probabilistic behaviour of more than one variable to quantify resilience?

I am calculating a value that is computed by dividing the derivative of the cumulative distribution function by the value of the distribution function at that point. It is of the form:

𝐽=𝐹′(𝑥) / 𝐹(𝑥)

Where 𝐹(𝑥) is the cumulative distribution function. To get a confidence band on 𝐹(𝑥) I can use the DKW (Dvoretzky–Kiefer–Wolfowitz) inequality.

**How do I get the confidence bands on J?**

I have a small multivariate dataset (< 10,000 records) with none of the variables following a normal distribution (a mix of right-skewed and left-skewed variables). As a Gaussian mixture model is not suitable for this, what are the other methods that allow computing, for every point, a probability of belonging to each cluster?
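For the clustering question, one hedged workaround is to transform each skewed variable to rank-based normal scores and fit a Gaussian mixture in the transformed space; soft memberships then come from predict_proba. A sketch with synthetic stand-in data:

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical skewed data: two lognormal-ish clusters.
X = np.vstack([rng.lognormal(0.0, 0.5, (200, 2)),
               rng.lognormal(1.5, 0.5, (200, 2))])

def normal_scores(col):
    # Rank-based transform to approximately standard-normal marginals.
    ranks = rankdata(col)
    return norm.ppf(ranks / (len(col) + 1))

Z = np.column_stack([normal_scores(X[:, j]) for j in range(X.shape[1])])

gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
membership = gmm.predict_proba(Z)      # soft probability per cluster
print(membership[:2].round(3))
```

The transform only fixes the marginals, not the joint dependence structure, so treat the result as a pragmatic approximation rather than a principled non-Gaussian mixture.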

Hi,

Please, I have a question:

How do I build the Bayesian network and conditional probability tables for the following correlation: x1 → x5 ∨ x6 ∨ x7?

As in the paper: A. Malki, D. Benslimane, S.-M. Benslimane, M. Barhamgi, M. Malki, P. Ghodous, and K. Drira, "Data Services with uncertain and correlated semantics," World Wide Web, vol. 19, no. 1, pp. 157–175, 2016.

Cordially.

I'm working with the Nataf model, trying to fit a joint probabilistic model for circular and linear variables, but I have some difficulties in calculating the correlation matrix because I could not find an equation for the equivalent correlation between two circular variables, or between a circular variable and a linear variable.

I have applied two different frameworks, i.e., a deterministic algorithm (very fast simulated annealing, VFSA) and a probabilistic (sampling) algorithm, to solve a highly non-linear inverse problem with 6 unknown model parameters to be optimised. As you know, the result of VFSA is a single solution for each model parameter, and the result of the probabilistic algorithm is a set of solutions sampled from the posterior. I expected that the solution obtained by the deterministic method would be captured by the samples obtained by the probabilistic approach and that the discrepancy would not be significant, but these solutions are far from each other!

For both methods, the prior, the forward model and the noise level are the same. Actually, I defined the prior in the probabilistic method as uniformly distributed, but in VFSA the model parameters are updated through sampling of a Cauchy distribution. What causes this discrepancy? Any justification for this? Is it usual, and what causes it in such problems?

P.N

The attached figure shows the cloud of samples from the probabilistic method, their mean (Black dot) and the optimised value by deterministic algorithm (Red dot)

Thanks for your comments..

If an existing microgrid energy management system is deterministic, how can one redesign it as a real-time probabilistic or stochastic model?

How does one gain expertise in this kind of modeling?

In this validation process, the team has tried to make sense of the research, devising a working hypothesis built on scientific bases, namely the reference models that for years have been the pillars of the language sciences: the descriptive method of N. Chomsky; the lexicon-grammar method of M. Gross; the NooJ system, following the transformational analysis of direct transitives by M. Silberzstein; and the probabilistic calculation of Hofmann, following probabilistic latent semantic analysis. The results have given very valid and irrefutable answers, such as: mathematical laws guide and support the linguistic text, because a language, to be elevated to a universal code, must be describable with a rational scientific method. Languages can be converted into a plurality of codes. Formal languages are subject to techniques of fixity and non-compositionality, and are therefore guided by pre-established and hence predictable mathematical laws; they were born for market needs and are built in the laboratory. Natural languages are subject to linguistic techniques of causality; the first kind of communication is fixed, the second is innate, because ... Homo sapiens transforms the contents of his mental activities into symbols, i.e. letters, numbers, etc., according to the anthropology, sociology and natural laws of his culture; therefore semantics belongs only to Homo sapiens, and to a particular man in the course of that history. The statute of conjecture that we postulate is that mathematical laws guide the mind of Homo sapiens in the structuring of lexies, morphemes and dysmorphisms in the osmotic, voluntary and innate conjecturing of human semantics.

Translated with www.DeepL.com/Translator

Let X = [x1,...,xn] and Y = [y1,...,yn] be two random points in the n-dimensional hypercube with the constraint that x1 + ... + xn = 1 and y1 + ... + yn = 1

Let also the coordinates of the points be uniformly distributed in the [0,1] interval, i.e.: xi,yi ~ U[0,1].

My problem is that of determining the probability distribution (pdf) of the Euclidean distance between X and Y.

Any suggestion will be greatly appreciated.

Best,

f.
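A closed form for this density is hard to come by, and note that "uniform coordinates subject to the sum constraint" is the flat Dirichlet(1, ..., 1) law on the simplex: marginally each coordinate is then Beta(1, n−1), not U[0,1], so the two stated conditions cannot hold simultaneously. Under the uniform-on-the-simplex reading, a Monte Carlo estimate of the pdf:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 200_000

# Uniform sampling on the simplex {x >= 0, sum x = 1} = Dirichlet(1,...,1).
X = rng.dirichlet(np.ones(n), size=N)
Y = rng.dirichlet(np.ones(n), size=N)

d = np.linalg.norm(X - Y, axis=1)      # Euclidean distances

# Empirical pdf via a normalized histogram (stand-in for the analytic density).
pdf, edges = np.histogram(d, bins=100, density=True)
print(d.mean(), d.std())
```

The maximum possible distance is the vertex-to-vertex distance √2, which bounds the support of the empirical density.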

I have a dataset and trying to apply soft clustering, preferably Multivariate Gaussian Mixture model, but I have following doubts :

1. Does a multivariate GMM assume the underlying data to follow multivariate normality? I guess, even if the individual components are Gaussian, their mixture can still be non-Gaussian, thus violating this condition. Is this so?

2. If multivariate normality is indeed required, what are the other ways to attain probabilistic clustering? It would be really helpful if someone could refer me to a Python- or R-based implementation.
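On point 1: a Gaussian mixture only assumes each component is Gaussian; the mixture itself is generally non-Gaussian, so global multivariate normality is not required. A minimal scikit-learn sketch of soft (probabilistic) clustering with synthetic stand-in data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for the dataset (< 10,000 records, skewed features).
X = np.vstack([rng.gamma(2.0, 1.0, (300, 3)),        # right-skewed cluster
               5 + rng.gamma(3.0, 0.5, (300, 3))])   # shifted second cluster

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

proba = gmm.predict_proba(X)     # P(cluster k | point); each row sums to 1
labels = gmm.predict(X)          # hard assignment = argmax of proba
print(proba[:3].round(3))
```

If strong skewness distorts the components, transforming the features first (e.g., log or rank transforms) before fitting is a common pragmatic fix.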

I want to model the spatial variability of soil in numerical reliability slope stability analysis.

I am carrying out an investigation with a methodological triangulation:

1st step: quantitative. Surveys of students from schools in Buenos Aires; sampling by conglomerates (neighborhoods) with a similar socio-educational index.

2nd step: qualitative. In-depth interviews with students from Buenos Aires schools, chosen across the different neighborhoods/conglomerates.

The quantitative sampling is not entirely probabilistic (I believe): it does not take all the conglomerates, and I group them under my own criteria (which are theoretical, but which I still have to develop).

Finally, as the sampling of the quantitative part has certain problems, I do not know if I should do the qualitative part with the same students that were surveyed and go deeper with the interview, or whether I can take students from other schools and other conglomerates with similar indexes.

In autonomous exploration considering pose uncertainty, is there any way to predict beforehand the pose uncertainty the robot will have if it takes a particular action? This is to plan motion that increases localizability while making exploration decisions.