Science topic
Bayesian Inference - Science topic
Explore the latest questions and answers in Bayesian Inference, and find Bayesian Inference experts.
Questions related to Bayesian Inference
jModelTest suggests setting up these models, but MrBayes does not accept them; nevertheless, in the paper "Colletotrichum gloeosporioides species complex" (Studies in Mycology 13), they used some of those. Thank you for your attention.
I have noisy data points, where the peak signal-to-noise ratio (PSNR) may sometimes be less than unity (hence, more noise than signal may be present). I am fitting a model with fitting parameters to this noisy data, using MCMC (Markov Chain Monte Carlo) methods. I want to know if using a noise filter on the noisy data points (such as a Wiener filter in real space or a bandpass filter in Fourier space), before doing the MCMC fitting, would cause the 90% HPDI contour (highest posterior density interval) of the joint posterior probability distribution of the fitting parameters to be tighter or wider (precision), and closer or farther away from the true parameter values (accuracy)?
Hello fellow researchers,
I am doing research which involves estimating the parameters of the Cox-Ingersoll-Ross (CIR) SDE using a Bayesian approach. I propose using the Euler scheme in my approach. Could someone please direct me to any implementation code out there in R, Python or Matlab?
Thank you !!
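Not aware of a canonical package for this, but a minimal self-contained sketch in R of the usual recipe (Euler transition density plus random-walk Metropolis-Hastings); the flat priors, step sizes and toy data below are all illustrative choices, not a published implementation:
# Hedged sketch: Bayesian estimation of CIR parameters (kappa, theta, sigma)
# via the Euler transition density and random-walk Metropolis-Hastings.
set.seed(1)
# Euler log-likelihood: X[i+1] | X[i] ~ N(X[i] + kappa*(theta - X[i])*dt,
#                                         sigma^2 * X[i] * dt)
cir_loglik <- function(par, x, dt) {
  kappa <- par[1]; theta <- par[2]; sigma <- par[3]
  if (kappa <= 0 || theta <= 0 || sigma <= 0) return(-Inf)
  n  <- length(x)
  mu <- x[-n] + kappa * (theta - x[-n]) * dt
  sd <- sigma * sqrt(x[-n] * dt)
  sum(dnorm(x[-1], mean = mu, sd = sd, log = TRUE))
}
# Flat priors on the positive half-line (illustrative), so posterior = likelihood.
log_post <- function(par, x, dt) cir_loglik(par, x, dt)
# Simulate a toy path so the sketch is self-contained.
dt <- 1/252; n <- 1000
x  <- numeric(n); x[1] <- 0.05
for (i in 2:n)  # abs() reflects at zero to keep the toy path positive
  x[i] <- abs(x[i-1] + 0.5*(0.05 - x[i-1])*dt + 0.1*sqrt(x[i-1]*dt)*rnorm(1))
# Random-walk Metropolis-Hastings.
n_iter <- 20000
draws  <- matrix(NA, n_iter, 3, dimnames = list(NULL, c("kappa","theta","sigma")))
cur    <- c(1, 0.05, 0.2); cur_lp <- log_post(cur, x, dt)
for (s in 1:n_iter) {
  prop    <- cur + rnorm(3, sd = c(0.1, 0.005, 0.01))
  prop_lp <- log_post(prop, x, dt)
  if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
  draws[s, ] <- cur
}
apply(draws[-(1:5000), ], 2, quantile, c(0.025, 0.5, 0.975))  # posterior summary
Note that the Euler density is only an approximation for coarse dt; data augmentation or the exact non-central chi-squared CIR transition density are the usual refinements.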
I have been working on developing a unified marketing measurement model which combines the output of Market Mix Models and Multi touch attribution models.
I came across articles where Bayesian priors are a suggested method, but I have yet to come across any research paper which actually discusses the details of implementing this Bayesian-priors approach, or its feasibility.
If someone is/has worked on this topic and could share some notes/references or guide me in the right direction, I would be highly obliged.
In Bayesian inference, we have to choose a prior distribution for the parameter in order to find the Bayes estimate, and this choice depends upon our beliefs and experience.
I would like to know what steps or rules we should follow when choosing a prior distribution for a parameter. Please help me with this so that I can proceed.
I'm trying to establish a Bayes factor for the difference between two correlation coefficients (Pearson r). (That is, what evidence is there in favor of the null hypothesis that two correlation coefficients do not differ?)
I have searched extensively online but haven't found an answer. I appreciate any tips, preferably links to online calculators or free software tools that can calculate this.
Thank you!
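In case no dedicated calculator turns up: an approximate Bayes factor can be hand-rolled with a Savage-Dickey density ratio on the difference of Fisher-z-transformed correlations. A sketch under a normal approximation, with an arbitrary N(0, 0.5^2) prior on the difference; the r and n values are placeholders:
# Hedged sketch: approximate Bayes factor for H0: rho1 = rho2 via the
# Savage-Dickey density ratio on the difference of Fisher-z correlations.
bf01_cor_diff <- function(r1, n1, r2, n2, prior_sd = 0.5) {
  z1 <- atanh(r1); z2 <- atanh(r2)            # Fisher z transform
  d  <- z1 - z2
  se2 <- 1/(n1 - 3) + 1/(n2 - 3)              # sampling variance of d
  post_var  <- 1 / (1/prior_sd^2 + 1/se2)     # normal prior x normal likelihood
  post_mean <- post_var * (d / se2)
  # Savage-Dickey: BF01 = posterior density at 0 / prior density at 0
  dnorm(0, post_mean, sqrt(post_var)) / dnorm(0, 0, prior_sd)
}
bf01_cor_diff(r1 = 0.40, n1 = 80, r2 = 0.35, n2 = 75)  # > 1 favours the null
The result is sensitive to the prior scale on the difference, so it is worth reporting a small sensitivity range rather than a single number.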
Dear colleagues!
Recently, when reconstructing phylogenetic trees based on the COI gene for species identification purposes, I noticed a strict inconsistency between the topologies produced by the neighbor-joining (NJ) and Bayesian inference (BI) methods. As you can see in the figure, NJ produces a clear species bifurcation, whereas BI forms a stem cluster.
This is only typical for species with low interspecific divergence (below 2%).
What inherent features of these algorithms produce such inconsistency?
Any ideas on the BI behaviour?
I am not asking which topology is true or which algorithm is best. I'm curious what features of BI (for instance) could explain this.
Thank you!
Hello fellow researchers,
I am doing research in extreme value theory where I have to estimate the parameters of a generalized Pareto distribution (GPD) using a Bayesian approach. I would really appreciate it if anyone could point me to any code in R, Matlab or Python that estimates the GPD.
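In case a ready-made script doesn't surface: the GPD log-likelihood is simple enough to code directly, so here is a minimal random-walk Metropolis sketch in R (flat priors and tuning constants are illustrative; packages such as evd or extRemes provide frequentist fits to check against):
# Hedged sketch: random-walk Metropolis for GPD parameters (sigma, xi),
# using the standard GPD log-likelihood for threshold excesses y.
gpd_loglik <- function(sigma, xi, y) {
  if (sigma <= 0) return(-Inf)
  z <- 1 + xi * y / sigma
  if (any(z <= 0)) return(-Inf)                     # support constraint
  if (abs(xi) < 1e-8) return(-length(y)*log(sigma) - sum(y)/sigma)  # xi -> 0 limit
  -length(y)*log(sigma) - (1 + 1/xi) * sum(log(z))
}
set.seed(1)
y <- rexp(200)                        # toy excesses (true sigma = 1, xi = 0)
n_iter <- 20000
out <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("sigma","xi")))
cur <- c(1, 0.1); cur_ll <- gpd_loglik(cur[1], cur[2], y)
for (s in 1:n_iter) {
  prop <- cur + rnorm(2, sd = 0.05)
  ll   <- gpd_loglik(prop[1], prop[2], y)
  if (log(runif(1)) < ll - cur_ll) { cur <- prop; cur_ll <- ll }
  out[s, ] <- cur
}
apply(out[-(1:5000), ], 2, quantile, c(0.025, 0.5, 0.975))  # posterior summary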
In general terms, Bayesian estimation is often said to provide better results than MLE. Is there any situation where maximum likelihood estimation (MLE) gives better results than Bayesian estimation methods?
I'm building a model that estimates the effect of a firm level variable (x) on its financial performance (y). It is a cross-classified three-level model with yearly measures of a firm in level-1 and firms in level-2. Firms are then cross-classified by countries and industries in level 3. To account for the contextual factors that moderate the x->y relationship, the model includes terms that interact (x) with moderators at firm level and industry/country level.
I'm able to estimate the model using frequentist HLM via lme4 in R and Bayesian HLM via RStan.
Now, I'm interested in ranking the firms based on the magnitude of the effect that variable (x) has on financial performance (y). While some literature I came across ranked the individual effects using median values of the random effects, posterior means, or empirical Bayes estimates, the models in these papers did not include any interaction terms.
I would appreciate any thoughts on ways in which such ranking can be done, while including the effect of various firm level and cross-level interaction terms.
Thank you!
Null Hypothesis Significance Testing (NHST), in which the P-value serves as the index of "statistical significance," is the most widely used [misinterpreted and abused] statistical method in psychology (Sterling et al., 1995; Cumming et al., 2007).
Researchers overstating conclusions beyond what the data would support, and carefully "fixing" results until they are statistically significant, are quite common! However, a good reviewer (experienced in the same field/subject) would notice that. Why does it persist? Because of the increasing bait-like activities of predatory journals and a widespread lack of methodological sophistication, with researchers using poorly designed experiments with small sample sizes and inappropriate statistical models (Gelman and Carlin, 2014).
In the Neyman–Pearson framework, optimally setting α and β (which may not be possible in some instances) assures long-term decision-making efficiency in light of the costs and benefits of committing Type I and Type II errors. This is the frequentist approach; would a Bayesian approach make a difference where repetition of trials or studies is limited or not possible? Perhaps a priori calculation and reporting of the probability of replication could complement NHST.
I wonder if it is possible to infer the origin of invasive lineages based on genetic data for many native and invasive populations. I first planned to use DIYABC v2.1.0, but I am not used to the Genepop file format, and information on input formatting of DNA sequences is scarce. Does it work with >300 bp loci? Is triploidy a problem? Is there an easy way to convert FASTA data into Genepop data? (Knowing that widgetcon does not handle DNA data for the Genepop format.) Is DIYABC still used (last release in 2015), or are there better options by now? Thanks for any answer.
I have a dataset of 150 samples of 60 binary variables. The pcalg binCItest results are not impressive, and I cannot find a straightforward way to do this using bnlearn or catnet. I would appreciate it if you could let me know your suggestions, especially if accompanied by a code snippet. Thanks
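In case it helps, a minimal bnlearn sketch, assuming the 150 x 60 binary data sit in a data.frame dat; with this few samples, a score-based search with a Bayesian Dirichlet score is one pragmatic option (everything below is an illustrative choice, not a package recommendation):
library(bnlearn)
# Hedged sketch: structure learning on binary data with bnlearn.
# bnlearn expects factors, not 0/1 numerics.
dat[] <- lapply(dat, factor)
dag <- hc(dat, score = "bde", iss = 1)      # hill-climbing with the BDeu score
fit <- bn.fit(dag, dat, method = "bayes")   # Bayesian parameter estimates
arcs(dag)                                   # inspect the learned arcs
With n = 150 and 60 variables, expect the learned structure to be unstable; bootstrap averaging over resampled datasets is the usual guard.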
I am completing a Bayesian Linear Regression in JASP in which I am trying to see whether two key variables (IVs) predict mean accuracy on a task (DV).
When I complete the analysis, for Variable 1 there is a BFinclusion value of 20.802, and for Variable 2 there is a BFinclusion value of 1.271. Given that BFinclusion values quantify the change from prior inclusion odds to posterior inclusion odds and can be interpreted as the evidence in the data for including a predictor in the model, can I directly compare the BFinclusion values for each variable?
For instance, can I say that Variable 1 is approximately 16 times more likely to be included in a model to predict accuracy than Variable 2? (Because 20.802 divided by 1.271 is 16.367 and therefore the inclusion odds for Variable one are approximately 16 times higher).
Thank you in advance for any responses, I really appreciate your time!
Hi,
I have a standard SEIR model and would like to run a simple Bayesian MCMC (Metropolis-Hastings) inference on COVID data. How do you do this in R?
Many thanks!
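A minimal self-contained sketch of one way to do it, using deSolve for the ODE and a hand-rolled Metropolis-Hastings step; the Poisson-on-prevalence likelihood, the fixed incubation rate, the flat priors and the toy data are all illustrative simplifications:
library(deSolve)
# Hedged sketch: SEIR + random-walk Metropolis-Hastings, Poisson likelihood
# on observed infectious counts `obs` (replace with your COVID data).
seir <- function(t, y, p) {
  with(as.list(c(y, p)), {
    dS <- -beta * S * I / N
    dE <-  beta * S * I / N - a * E
    dI <-  a * E - g * I
    dR <-  g * I
    list(c(dS, dE, dI, dR))
  })
}
N <- 1e6; times <- 0:99
y0 <- c(S = N - 10, E = 0, I = 10, R = 0)
set.seed(1)
true <- c(beta = 0.4, a = 1/5, g = 1/7, N = N)
obs  <- rpois(100, ode(y0, times, seir, true)[, "I"])   # toy "data"
log_post <- function(beta, g) {              # flat priors; a fixed at 1/5
  if (beta <= 0 || g <= 0) return(-Inf)
  pred <- ode(y0, times, seir, c(beta = beta, a = 1/5, g = g, N = N))[, "I"]
  if (any(!is.finite(pred)) || any(pred < 0)) return(-Inf)
  sum(dpois(obs, pmax(pred, 1e-9), log = TRUE))
}
n_iter <- 5000
chain  <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("beta","g")))
cur <- c(0.3, 0.1); cur_lp <- log_post(cur[1], cur[2])
for (s in 1:n_iter) {
  prop <- cur + rnorm(2, sd = 0.01)
  lp   <- log_post(prop[1], prop[2])
  if (log(runif(1)) < lp - cur_lp) { cur <- prop; cur_lp <- lp }
  chain[s, ] <- cur
}
colMeans(chain[-(1:1000), ])                 # posterior means after burn-in
Real case data are incidence, not prevalence, and overdispersed, so a negative binomial observation model on daily new cases would be the first refinement.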
When using the Bayesian inference approach to identify the posterior distribution of the parameters, should we sample each parameter independently from its marginal posterior distribution to get the variance of the prediction, or should we sample joint points from the converged trace (combinations of well-fitted parameter values)?
There could be cases where different combinations of parameters give an equally good fit because the parameters are correlated. I am wondering whether taking random samples from the trace will capture the parameter uncertainties.
Thank you.
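A toy illustration of the difference, with a fabricated two-parameter "trace" whose parameters trade off against each other (as in a correlated fit):
# Hedged sketch: joint draws from the trace vs independent marginal draws.
set.seed(1)
n <- 5000
a <- rnorm(n)
b <- 2 - a + rnorm(n, sd = 0.1)    # correlated: a + b stays near 2
trace <- cbind(a, b)
# Suppose the prediction of interest is a + b:
sd(trace[, "a"] + trace[, "b"])    # joint rows of the trace: small spread
sd(sample(a) + sample(b))          # independent marginal draws: inflated spread
Drawing whole rows of the trace keeps the posterior correlations, so it is the joint samples, not independent marginal draws, that propagate parameter uncertainty correctly.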
Assume that a measurement gives n observations y1, y2, ..., yn. The data are drawn from a normal distribution, Y ~ N(μ, σ^2), and the prior distribution is μ ~ N(y_prior, σ_prior^2). If σ^2 is known, the posterior mean is the weighted mean of the sample mean ȳ and the prior mean y_prior. This is the standard solution that can be found in many textbooks and lecture notes. When σ^2 is unknown, I expected a similar solution, i.e. simply replacing σ^2 with its estimator s^2 in the posterior mean. However, I cannot find this solution anywhere. My question is: does this solution exist? If yes, where can I find a reference? If no, why not?
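For reference, the known-variance result in the question's notation is
posterior mean = ( y_prior/σ_prior^2 + n·ȳ/σ^2 ) / ( 1/σ_prior^2 + n/σ^2 ).
When σ^2 is unknown, the exact conjugate treatment places a joint prior on (μ, σ^2) (normal-inverse-gamma, equivalently normal-inverse-chi-squared); the marginal posterior of μ is then a Student-t rather than a normal, which is why the simple "plug in s^2" formula rarely appears as an exact result: it is only a large-sample approximation. The unknown-variance case is worked through in, e.g., Gelman et al., Bayesian Data Analysis (the chapter on the normal model with unknown mean and variance).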
I am trying to develop a multivariate Poisson-lognormal CAR model. My model has 3 dependent variables, 10 explanatory variables, and structured and unstructured random effects. One of my most important variables in the model, traffic volume, is getting a negative sign when it should be positive. When I fit a simple negative binomial model (with all variables), the sign of traffic volume is positive. Thinking it might be due to the influence of another variable, I tried fitting the MVCAR model with only traffic volume and the structured and unstructured random effects, but the sign is still negative. I cannot remove this variable as it is the most important one.
I should note that the variance of the heterogeneous effects for one of my dependent variables is extremely large (around 5000).
Can anyone tell me what might be the issue?
Thanks in advance
I'm not just interested in priors (probabilities) of unwanted outcomes and self-generating hypotheses; I'm also interested in the level of relevance (values) of certain stimuli.
A paper that can explain some of my interests is: "Predictive Processing and the Varieties of Psychological Trauma" (2017) by Wilkinson, Dodgson & Meares.
Which of the two criteria is more appropriate for selecting the model of nucleotide substitution?
Dear Researchers/Scholars,
Suppose we have time series variables X1, X2 and Y1, where Y1 depends on the other two. They are more or less linearly related. Data for all these variables are given from 1970 to 2018. We have to forecast values of Y1 for 2040 or 2060 based on these two variables.
What method would you suggest (other than a linear regression)?
It is a fact that these series have shown a different pattern since 1990. I want to use the 1990-2018 data as prior information and then find a posterior for Y1. Now, please let me know how to assess this prior distribution.
or any suggestions?
Best Regards,
Abhay
Hello,
I have identified clades with fair support values (BI > 0.7) in my backbone tree. I would like to reinforce these supports, so I rebuilt ingroup trees for each of the clades, using their respective sister and basal groups as outgroups and using more markers. The ingroup tree contains only a subset of individuals of the clade, due to sample and sequence availability, but it does encompass all of the subclades. Supposing the ingroup and backbone trees share the same topology, is it reasonable for me to substitute the backbone branch support with that of the ingroup tree (with an annotation noting that the branch support is reinforced by an ingroup tree)?
Thank you very much in advance!
I've been thinking about this topic for a while. I admit that I still need to do more work to fully understand the implications of the problem, but my reading suggests that under certain conditions, Bayesian inference may have pathological results. Does it matter for science? Can we just avoid theories that generate those pathologies? If not, what can we do about it?
I have provided a more detailed commentary here:
Hello,
I seem to be having issues with convergence in my Bayesian analysis. I'm using a large single-gene dataset of 418 individuals. My PSRF values say N/A in my output, but my split frequency is 0.007. Also, my consensus tree gives me posterior probabilities of 0.5 or 1 with no distinguishable clades (see attached). Below is my Bayes block:
begin mrbayes;
charset F_1 = 1 - 655\3;  [1st codon positions]
charset F_2 = 2 - 656\3;  [2nd codon positions]
charset F_3 = 3 - 657\3;  [3rd codon positions]
partition currentPartition = 3: F_1, F_2, F_3;
set partition = currentPartition;
lset applyto=(1) nst=6 rates=gamma;  [GTR+G]
lset applyto=(2) nst=2 rates=invgamma;  [HKY-type+I+G]
lset applyto=(3) nst=6 rates=gamma;  [GTR+G]
unlink statefreq=(all) revmat=(all) shape=(all) pinvar=(all);  [estimate parameters separately per partition]
prset applyto=(all) ratepr=variable;  [allow overall rates to differ across partitions]
mcmc ngen= 24000000 append=yes samplefreq=1000 nchains=8;  [24M generations -> 24000 samples]
sump burnin = 10000;  [burn-in counted in samples, i.e. 10M generations here]
sumt burnin = 10000;
end;
Any advice? Thanks!
In the literature I found that one could use several different approaches (neighbor-joining trees, maximum likelihood, maximum parsimony, and Bayesian inference). I would like to know which method I could apply to my plasmids, knowing that they are assigned to the same IncL/M group.
Thank you
In Bayesian inference, the likelihood function may be intractable, so that the posterior distribution cannot be obtained in some cases. In such cases, is there any alternative to the likelihood function (apart from MCMC-type alternatives) for Bayesian inference?
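One likelihood-free option is approximate Bayesian computation (ABC), which replaces likelihood evaluation with simulation from the model. A minimal rejection-ABC sketch in R, with a deliberately trivial normal-mean example standing in for any simulable model (the R packages abc and EasyABC implement more refined versions):
# Hedged sketch: rejection ABC -- no likelihood evaluation, only simulation.
set.seed(1)
obs <- rnorm(50, mean = 3)                  # "observed" data (toy)
n_sim <- 1e5
theta <- runif(n_sim, -10, 10)              # draws from the prior
# Simulate each candidate dataset's summary statistic (here: the sample mean).
sim_means <- rnorm(n_sim, mean = theta, sd = 1/sqrt(50))
keep <- abs(sim_means - mean(obs)) < 0.05   # accept when close to observed summary
posterior <- theta[keep]
quantile(posterior, c(0.025, 0.5, 0.975))   # approximate posterior for the mean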
Hello,
I am working with an outcome variable that follows a count (Poisson) distribution.
I have 3 IVs that follow a normal distribution and 1 DV that follows a count distribution. Thus, I'd like to compute a negative binomial regression.
Yet, instead of maximum likelihood estimation, I would like to use a Bayesian inference approach to estimate my negative binomial regression model.
I have found this reference: https://cran.r-project.org/web/packages/NegBinBetaBinreg/NegBinBetaBinreg.pdf
But I really cannot manage (yet) to understand how to compute a Bayesian negative binomial regression in R.
I would be really delighted and grateful if someone could provide any help in this regard.
Thank you!
Sincerely,
Nicolas
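A minimal sketch of one commonly used route, assuming Stan can be installed and a data frame df with outcome y and predictors x1, x2, x3 (variable names are illustrative, not from the NegBinBetaBinreg package):
library(brms)
# Hedged sketch: Bayesian negative binomial regression via brms (Stan backend).
# Assumes a data.frame `df` with count outcome y and predictors x1, x2, x3.
fit <- brm(y ~ x1 + x2 + x3, data = df, family = negbinomial(),
           chains = 4, iter = 2000, seed = 1)
summary(fit)        # posterior means, 95% credible intervals, Rhat diagnostics
plot(fit)           # trace plots and marginal posteriors
rstanarm's stan_glm() with family = neg_binomial_2() is a similar option, and both packages let you set the priors explicitly.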
Dear Researchers,
I have a single point for my parameter as prior information and 26 data points as the current data-set.
How can I incorporate that single prior point when doing a Bayesian analysis?
(Initially, I used to run the model with a non-informative prior, without considering the old information, as it wasn't validated.)
In this particular case, I want to know: is there any way to include this old evidence (the single prior point)? If yes, how? Which way should I select, and why?
Best Regards,
Abhay
Suppose we have n samples (x1, x2, ..., xn) independently taken from a normal distribution with known variance σ^2 and unknown mean μ.
Under a non-informative prior, the posterior distribution of the mean, p(μ|D), is normal with mean μn and variance σn^2, where μn is the sample mean of the n samples (i.e., μn = (x1 + x2 + ... + xn)/n), σn^2 = σ^2/n, and D = {x1, x2, ..., xn}; that is, p(μ|D) ~ N(μn, σ^2/n).
Let the new data D' be {x1, x2, ..., xn, x1new, x2new, ..., xknew}. That is, we take k (k < n) additional samples independently from the original distribution N(μ, σ^2). However, before taking the additional samples, we already know the posterior predictive distribution for an additional sample: according to Bayesian statistics, p(xnew|D) is normal with mean μn and variance σn^2 + σ^2 (i.e., p(xnew|D) ~ N(μn, σ^2/n + σ^2)). Namely, the variance becomes higher to reflect the uncertainty about μ. So far, this is what I know.
My question is: if we know p(xnew|D) for the additional samples, can we predict the posterior distribution p(μ|D') before taking the additional k samples? I think that p(μ|D') could be calculated based on p(xnew|D), but I have not found the answer yet. So I need help; please lend me your wisdom. Thanks in advance.
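A hedged sketch of the algebra, in the question's notation. Once the k new samples are actually observed, with x̄new denoting their mean,
p(μ|D') ~ N( μ(n+k), σ^2/(n+k) ),  where μ(n+k) = (n·μn + k·x̄new)/(n+k).
Before they are observed, x̄new is itself random, with p(x̄new|D) ~ N(μn, σ^2/n + σ^2/k) (the extra σ^2/k comes from averaging k predictive draws that share the same unknown μ). Consequently the future posterior mean is random too:
μ(n+k) | D ~ N( μn, σ^2·k/(n·(n+k)) ),
and its variance equals σ^2/n − σ^2/(n+k), i.e. exactly the reduction in posterior variance the new samples will bring. So what can be computed in advance from p(xnew|D) is a distribution over the future posterior (a preposterior analysis), not the posterior itself.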
The question stated above is the title of a book review (see http://www.journals.uchicago.edu/doi/pdfplus/10.1086/694936?utm_content=bufferaebfd&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer ). I thought it would be interesting to read both opinions about the book reviewed ("The Future of Phylogenetic Systematics: The Legacy of Willi Hennig.") and colleagues' own answers to the questions: "Does the Future of Systematics Really Rest on the Legacy of One Mid-20th-Century German Entomologist?"
Dear all,
Here is my question:
In Bayesian inference, we can always iterate the process of transforming prior probabilities into posterior probability when new data is collected. I wonder if the frequency of data collection can affect the final result. For example, if I want to decide whether a coin is a fair one, I can either 1) flip the coin 10 times and get 4 heads, and then flip it for another 10 times and get 3 more heads; or 2) mark the same observation as flipping the coin 20 times and getting 7 heads. In the first case, I will do Bayesian inference twice, while in the second one, I only need to infer once. Would such a difference in the frequency of data collection affect the final result?
Thanks for your help!
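For the coin example, the two procedures can be checked directly with a conjugate Beta-Binomial model; a minimal sketch (the Beta(1,1) prior is an illustrative choice):
# Hedged sketch: sequential vs one-shot Beta-Binomial updating.
a <- 1; b <- 1                    # Beta(1, 1) prior
a1 <- a + 4;  b1 <- b + 6         # after 4 heads in 10 flips
a2 <- a1 + 3; b2 <- b1 + 7        # after 3 more heads in 10 more flips
c(a2, b2)                         # Beta(8, 14)
c(a + 7, b + 13)                  # one-shot: 7 heads in 20 flips -> Beta(8, 14)
Because the likelihood of independent flips factorizes, the final posterior is the same either way, provided the stopping rule doesn't depend on the outcomes.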
I am currently doing a phylogenetic analysis for a species of virus with many subtypes (human papillomavirus), for around 100 whole-genome specimens and 50 partial sequences belonging to ~6 subtypes. Is there a specific number of maximum likelihood / Bayesian inference trees I need to infer for the phylogenetic study to be accurate?
At the moment, for ML, I aligned the sequences with ClustalO and am using RAxML with 25 ML trees and the autoMRE stopping criterion. But I have no idea if 25 ML trees are enough for the analysis.
Using R's brms package, I've run two Bayesian analyses, one with "power" as a continuous predictor (the 'null' model) and one with power + condition + condition x power. The WAICs for the two models are nearly identical: a -.017 difference. This suggests that there are no condition differences.
But when I examine the credibility intervals of the condition main effect and the interaction, neither one includes zero: [-0.11, -0.03] and [0.05, 0.19]. Further complicating matters, when I use the "hypothesis" command in brms to test whether each is zero, the evidence ratios (BFs) are .265 and .798 (indicating evidence in favor of the null, right?), but the test tells me that the expected value of zero is outside the range. I don't understand!
I have the same models tested on a different data set with a different condition manipulation, and again the WAICs are very similar and the CIs don't include zero, but now the evidence ratios are 4.38 and 4.84.
I am very confused. The WAICs for both models indicate no effect of condition, but the CIs don't include zero. Furthermore, the BFs indicate a result consistent with the WAIC (no effect) in the first experiment but not in the second.
My guess is that this has something to do with my specification of the prior, but I would have thought that all three metrics would be affected similarly by my specification of the prior. Any ideas?
Hi,
Recently I was reading a paper by Will G Hopkins about magnitude-based inferences and downloaded his related spreadsheet. This concept is new to me and I still have a lot of doubts. Indeed, I did not understand what he means by "value of effect statistic" and what he considers beneficial (+ive) or harmful (-ive). Can someone help me?
Thanks in advance
I have some well-grounded knowledge of Bayesian inference, linear mixed models, and probabilistic graphical models. Image processing is a new learning topic for me.
I am aware that the Consistency Index uses the number of character-state changes in a matrix, but I haven't found a way to build the matrix or to calculate this index in any software.
I've assessed the affect of a population in a field study from a dimensional perspective using the Self-Assessment Manikin. Would it be possible to calculate or infer anxiety from arousal, valence, and dominance SAM scores?
I want to calculate the BIC value, but I have a problem with it in LISREL. Does LISREL calculate the Bayesian Information Criterion (BIC)? How can I find the BIC value in the LISREL output?
AIC and BIC are information criteria used to assess model fit while penalizing the number of estimated parameters. As I understand it, when performing model selection, the model with the lowest AIC or BIC is preferred. In a situation I am working on, the model with the lowest AIC is not necessarily the one with the lowest BIC. Is there any reason to prefer one criterion over the other?
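For reference, the two criteria, with L̂ the maximized likelihood, k the number of estimated parameters, and n the sample size:
AIC = 2k − 2·ln(L̂)
BIC = k·ln(n) − 2·ln(L̂)
Since ln(n) > 2 as soon as n ≥ 8, BIC penalizes extra parameters more heavily than AIC, which is why the two can rank models differently: AIC leans toward predictive accuracy, BIC toward recovering the "true" model among the candidates.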
I was wondering if it makes sense to keep characters that are parsimony-uninformative (autapomorphies) when running a phylogenetic analysis using Bayesian inference. Since it is a probabilistic method, it may be informative to keep these characters. Does that make sense?
For my research I am required to develop a computational model that incorporates various parameters (with various measurement units) to serve as the basis of a visual task classifier.
I have read some papers where some authors used Bayesian inference and others used Markov models.
I understand the general concept of both techniques, but I would like to understand them more deeply.
Can anyone kindly suggest easy to understand resources that explain how such techniques are used in developing mathematical models and in classification?
Hi all,
I'm running the new version of MrBayes and I'm using MEGA to look at the resulting tree. I noticed that MrBayes automatically includes a translate block in the .tre file, substituting the taxa labels with numbers. However, I'd much rather have the original taxa labels in my tree than these numbers. Does anybody know how to prevent this from happening? This wasn't a problem with earlier versions of MrBayes.
Thanks!
Dear All,
I am using a mesh-based Monte Carlo (MC) package (TIM-OS) and want to compare surface intensity with analytical solutions (e.g. Farrell et al.).
MC packages give fluence as an output parameter; is it the same as intensity?
How can it be converted to photons/mm2?
And finally, mesh-based packages give this parameter for each element or surface; how can it be compared to analytical solutions, which are fully spatially resolved?
Any ideas or insights are greatly appreciated.
What is the most generalized and efficient way to express the underlying rules that govern a system? What is the most efficient route to discovering such rules?
In some cases, e.g. with certain choices of priors, Bayesian methods will yield the same results as ML methods. In that case, why use Bayesian? What is the advantage of using Bayesian methods if in some cases we will get the same results as ML? Also, should both methods always end up with the same conclusions for the same data or for the same statistical technique used?
Does anyone have some references regarding the influence of sampling on inference when using Bayesian statistics?
I am just beginning to use Bayesian methods, and I am trying to better understand some results on my own data with very heterogeneous sample sizes.
Thanks in advance
Guillaume
In Bayesian inference we can model prior knowledge using a prior distribution. There is a lot of information available on how to construct flat or weakly informative priors, but I cannot find good examples of how to construct a prior from historical data.
How, would I, for example, deal with the following situations:
1) A manager has to decide whether to stop or pursue the redesign of a website. A pilot study is inconclusive about the expected revenue. But the manager has had five projects in the past, with recorded revenue increases of factor 1.1, 1.2, 1.1, 1.3, 1. How can one add the manager's optimism as a prior to the data from the pilot study?
2) An experiment (N = 20) is conducted where response time is measured in two conditions, A and B. The same experiment has been done in dozens of studies before, and all studies have reported their sample size, the average response time and standard deviation of A and B. Again: How to put that into priors?
I would be very thankful for any pointers to accessible material on the issue.
--Martin
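For situation 1, a minimal conjugate sketch in R; the pilot numbers are invented, and treating the five past factors as draws from a normal is itself an assumption:
# Hedged sketch: an empirical prior from the manager's five past projects,
# conjugately combined with a (hypothetical) pilot-study estimate.
hist_factors <- c(1.1, 1.2, 1.1, 1.3, 1)      # from the question
prior_mean <- mean(hist_factors)               # 1.14
prior_var  <- var(hist_factors)                # spread of past experience
pilot_mean <- 1.05; pilot_se <- 0.15           # invented pilot result
post_var  <- 1 / (1/prior_var + 1/pilot_se^2)  # precision-weighted combination
post_mean <- post_var * (prior_mean/prior_var + pilot_mean/pilot_se^2)
c(post_mean, sqrt(post_var))
1 - pnorm(1, post_mean, sqrt(post_var))        # P(revenue factor > 1)
Situation 2 is the same idea at one remove: build a meta-analytic prior from the previously reported means and standard deviations, centering the prior for the condition difference on the pooled past effect and using the between-study spread as its scale.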
Please see my question that is attached about prior distribution.
I have used spBayes (spGLM), etc., for binary data. Now I want an R package to analyse ordered (ordinal) spatial data using Bayesian inference via MCMC. GPS coordinates of the observations are included in the data. Suggestions of an appropriate package, code, or examples of usage will be appreciated.
Thanks in advance.
I am new to modern phylogenetic analysis, so there are many things I still don't understand. For instance, morphology has recently found a second wind in phylogenetic studies, with many analyses using discrete morphological characters on an equal footing with sequences in a Bayesian framework. However, most if not all of the works I've read used the Mk model of evolution for such characters. At first glance, this is highly unrealistic: a supposedly complex phenotypic character basically behaves like a single site, completely neutral and stochastic.
Nevertheless, the trees generated from purely morphological datasets by Bayesian analysis (MAP or consensus) are always extremely similar or even identical to the trees generated by maximum parsimony analysis (the shortest tree). Why? Is it some property of the parsimony method itself that the Mk model simply mimics?
Also, could someone please explain to me the biological meaning of a "stationary distribution" for a phenotypic character in a Bayesian framework?
Thank you.
What is the raison d'être of uninformative priors? Isn't one of the goals of Bayesian analysis to allow prior information to be incorporated into inference systematically?
Thanks!
The idea is to use the Bayesian approach to estimate the distributions of spacings between uniform and exponential order statistics. Are there any papers or articles on this subject?
Thanks so much.
I checked the web and found no clear definition on how these various statistical methods differ from each other and how they are estimated. Could anyone elaborate a little bit?
In a Bayesian evolutionary analysis, I have run my experiment for 100 million generations, but the ESS is still below one hundred. What can I do?
We have a pool of hairs (around 10 for each reference sample, each sample coming from one individual) representing, say, 15 individuals.
We can characterize each of them by microscopy for several morphological characters, and by microspectrophotometry for colour information.
For each hair, these methods yield one set of discontinuous/qualitative data (morphological characters) and one set of continuous/quantitative data (colour information). We can analyze them separately; that is not a problem.
But how can we analyse the two sets in a pooled matrix (combining qualitative and quantitative data) following a standardized protocol (one that could be reused later)?
The questions we need to answer are:
- to test whether all hairs coming from the same person cluster in the same group;
- for an unknown sample (of at least one hair), to find the group to which it is closest;
- and of course, to have a statistical estimate of the validity of the clusters, or of the similarity between unknown hairs and the closest clusters.
What is the best way to do that, and what easy-to-use software (like XLSTAT) would serve?
Thank you for your suggestions and ideas.
Many people apply different kinds of phylogenetic approaches (neighbor-joining, maximum likelihood, Bayesian inference) to identify genes/proteins of interest in different groups of animals and/or plants. Additionally, depending on the method used, various criteria (AIC, BIC, LRT) are available for selecting a substitution model.
Let's assume that I want to generate robust gene trees to identify orthologs and paralogs of genes of interest in various distantly related species. These trees might contain a few sequences or whole gene families (e.g. homeobox genes...). What is the best suited approach and model for that? Does anyone have good recommendations for reviews/publications that address this problem?
Thanks in advance!
I am constructing a Bayesian model of the following form to estimate the parameters from the responses:
P(parameters | responses) ∝ P(responses | parameters) × P(parameters)
Then I use MCMC to draw samples from the posterior.
My problem is that I have more than one parameter and several responses, which may be correlated. Since the responses are correlated, I cannot factorize the likelihood into a product like:
P(responses | parameters) = P(response1 | parameters) × P(response2 | parameters)
I was wondering if anybody could kindly help me with this problem.
How should one handle model discrepancy in such cases?
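One standard way out is to stop assuming independence and write a joint likelihood with a residual covariance matrix; a minimal sketch, where forward_model() is a placeholder for your (assumed) parameter-to-response mapping:
library(mvtnorm)
# Hedged sketch: joint likelihood for correlated responses.
# Y: matrix with one row per observation, one column per response.
log_lik <- function(theta, Y, Sigma) {
  mu <- forward_model(theta)           # predicted response vector (placeholder)
  sum(dmvnorm(Y, mean = mu, sigma = Sigma, log = TRUE))
}
Sigma can be fixed from replicate measurements, estimated from residuals, or given an inverse-Wishart prior and sampled along with the parameters; the product-of-marginals form is just the special case where Sigma is diagonal.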
Hi All,
I am working with mtDNA and want to know how I can determine the burn-in and the number of chains in MrBayes and other Bayesian inference software.
thanks
Hossein
After running my data, I was surprised to get some results > 1 (i.e., more than 100% "certain"). Is that possible, or is there something wrong in my data/parameters?
I have a dataset of COI sequences and I'd like to obtain Bayesian skyline plots (BSPs) with BEAST for my populations. I made 5 replicate runs, obtaining 5 .log and 5 .trees files. I used LogCombiner 2.2.0 to obtain single .log and .trees files from the 5 replicates, in order to construct the BSPs with Tracer. LogCombiner was able to construct a combined file for the trees, but it did not for the .log files: the program stops without producing any file and without giving any error message. It actually does nothing! Any suggestion or hint?
Does anyone know a citable paper in which the marginal likelihood of a normal distribution with unknown mean and variance is derived?
A short sketch of how the procedure should look: the joint probability is given by P(X, mu, sigma2 | alpha, beta), where X is the data. Factorizing gives P(X | mu, sigma2) x P(mu | sigma2) x P(sigma2). Integrating out mu and sigma2 should yield the marginal likelihood.
I found several papers which work with the marginal likelihood for the linear regression model with a normal prior on beta and an inverse gamma prior on sigma2 (see e.g. Fearnhead & Liu, 2007), or which derive the posterior distribution of the unknown parameters, but not the marginal likelihood itself.
I hope the question is understandable and someone can help me.
Greetz,
Sven.
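Not a journal paper, but a widely cited technical note that walks through exactly this derivation is Kevin Murphy, "Conjugate Bayesian analysis of the Gaussian distribution" (2007); Gelman et al.'s Bayesian Data Analysis covers the same result in the normal-inverse-chi-squared parameterization. For a normal-inverse-gamma prior NIG(mu0, kappa0, alpha0, beta0), the marginal likelihood comes out as (transcribing the standard result):
p(X) = (2π)^(−n/2) · sqrt(kappa0/kappa_n) · [Γ(alpha_n)/Γ(alpha0)] · [beta0^alpha0 / beta_n^alpha_n]
with kappa_n = kappa0 + n, alpha_n = alpha0 + n/2, and
beta_n = beta0 + (1/2)·Σ(xi − x̄)^2 + kappa0·n·(x̄ − mu0)^2 / (2·(kappa0 + n)).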
I need an example for this package
Many might first think of Bayesian statistics.
"Synthetic estimates" may come to mind. (Ken Brewer included a chapter on synthetic estimation: Brewer, KRW (2002), Combined survey sampling inference: Weighing Basu's elephants, Arnold: London and Oxford University Press.)
My first thought is "borrowing strength." I would say that if you do that, then you are using small area estimation (SAE). That is, I define SAE as any estimation technique for finite population sampling which "borrows strength."
Some references are given below.
What do you think?
I have selected representative strains from each cluster in my maximum likelihood tree for divergence date estimation, but I made sure I covered the entire time period of detection of these strains (1975-2012). Will my exclusion of some of the strains affect the estimated evolutionary rate and the times of divergence at the nodes in the BEAST analysis?
I have a maximum clade credibility tree drawn, and I am trying to get the age range at each node, but I don't seem to get it right. How do I do that? I have the node bars displayed, but I need the range displayed in years at the nodes as well. Also, how do I interpret [10, 24.58] displayed at the node of my tMRCA?
I am trying to run a JAGS analysis comparing two models (both Bernoulli (dbern) models with several categorical predictors). I am using the rjags package.
Could anyone suggest or give some references how to write such a model? I would greatly appreciate your input.
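A minimal rjags sketch of one such model, assuming a Bernoulli outcome y and a single categorical predictor coded as an integer group index; further predictors are added analogously, and the second model for comparison would be set up the same way (all names and priors are illustrative):
library(rjags)
# Hedged sketch: Bernoulli (dbern) model with one categorical predictor.
model_string <- "model {
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- b0 + b[group[i]]
  }
  b0 ~ dnorm(0, 0.25)                  # JAGS uses precision, not variance
  b[1] <- 0                            # reference level
  for (j in 2:J) { b[j] ~ dnorm(0, 0.25) }
}"
set.seed(1)                            # toy data; replace with your own
group <- sample(1:3, 200, replace = TRUE)
y <- rbinom(200, 1, plogis(-0.5 + c(0, 0.8, -0.4)[group]))
jm <- jags.model(textConnection(model_string),
                 data = list(y = y, group = group, N = 200, J = 3),
                 n.chains = 3)
update(jm, 1000)                       # burn-in
post <- coda.samples(jm, c("b0", "b"), n.iter = 5000)
summary(post)
For the comparison itself, dic.samples() in rjags is the most direct route; Bayes factors require extra machinery beyond plain JAGS output.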
There are plenty of debates in the literature about which statistical practice is better. Both approaches have many advantages but also some shortcomings. Could you suggest any references that describe which approach to choose, and when? Thank you for your valuable help!
Hi all,
Do you know any published study reporting that it is better (for forecasting accuracy) to have ONLY discrete data in a Bayesian network? Or, alternatively, that continuous data may lead to inference problems?
I want to make a control chart in R. The LCL, UCL and CL are found by a Bayes estimator under a Weibull distribution, and the CL can be found by the trapezoid rule. How can I make such a control chart in R? Is there any syntax, package or journal article that I can use or read to solve this problem? Thanks
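The plotting side is straightforward in base R once the Bayesian limits are computed; a minimal sketch with placeholder limits (your Weibull/Bayes and trapezoid-rule computations would supply LCL, CL and UCL):
# Hedged sketch: drawing a control chart from Bayes-derived Weibull limits.
set.seed(1)
x   <- rweibull(30, shape = 2, scale = 10)   # toy monitored statistics
LCL <- 2; CL <- 8.8; UCL <- 18               # placeholders for your estimates
plot(x, type = "b", pch = 19, ylim = range(c(x, LCL, UCL)),
     xlab = "Sample", ylab = "Statistic", main = "Control chart")
abline(h = c(LCL, CL, UCL), lty = c(2, 1, 2))
out <- x < LCL | x > UCL
points(which(out), x[out], col = "red", pch = 19)  # flag out-of-control points
integrate(), or a hand-rolled trapezoid over a grid, can supply the CL integral; the qcc package draws standard Shewhart charts if the classical version is ever needed.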
I'm working with continuous characters to generate a tree; so far I have tried MLtraits and parsimony (TNT). I was wondering if there is a way to run a Bayesian analysis without discretizing the data.
Thanks,
Melissa
Hi. In MrBayes there is the concept of a partition, where we have to create subsets of sites (e.g., by codon position). Can anyone tell me how to decide how many subsets there should be for the codon positions? And if I am analysing a whole genome, how can I divide it into subsets?
For a classification problem, consider two steps: training and testing.
Assume that the Bayesian network classifier is required to be designed based on data.
At first, a uniform distribution is assigned to each node because there is no prior information. Then, in the training step, the parameters of each node are determined by maximum likelihood estimation.
Is it correct to say that the learned parameters define a posterior distribution?
In addition, the next step is testing. P(Class | evidence) is calculated for each feature vector, and the posterior distribution is evaluated for each class node. I would assign the feature vector to whichever class has the highest probability.
Is this called maximum a posteriori (MAP) classification?
Dear all,
I have been thinking about the following: I have a set of models which share the characteristic that each has two rate parameters plus another parameter with a similar interpretation in all models. Each model is then combined with a different model of a mechanism, to explain aspects that are not covered by the common part. The data I have for comparing these different mechanisms are a special case for the common part of all models (not related to the mechanism I'm interested in): both rate parameters have to be equal and the third parameter has to be zero.
So my question is: when comparing the models (transdimensional MCMC) to figure out the best mechanism, should I use the simpler parametrization (with only one rate parameter, and disregarding the parameter which is zero), since my data represent this special case?
Or should I use the full model (which, by the way, when compared against the simplified model on this special-case data, fares worse in terms of the Bayes factor)?
I know the question is a bit abstract; however, I feel adding concrete model details would rather confuse the issue. Since I'm new to this kind of analysis, maybe this is a common case with a common solution? I couldn't find anything yet, though.
I'm very interested in your opinions!
Thanks in advance,
I want to use MrBayes for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models (MCMC analysis). Which software is best for converting alignment files into the NEXUS files that MrBayes requires? And can anybody point me to a good introduction to the details of MrBayes?