Question
Asked 24 January 2020

How to obtain Bayes factor for difference between two correlations?

I'm trying to establish a Bayes factor for the difference between two correlation coefficients (Pearson r). (That is, what evidence is there in favor of the null hypothesis that the two correlation coefficients do not differ?)
I have searched extensively online but haven't found an answer. I appreciate any tips, preferably links to online calculators or free software tools that can calculate this.
Thank you!

All Answers (5)

You need to add a little more structure to calculate a Bayes factor, since from a correlation alone you cannot compute a likelihood ratio for the two competing hypotheses. Given that the Pearson correlation is predicated on an assumption of normality, you should be able to frame your problem in terms of Gaussian likelihoods with two differing correlation matrices.
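For illustration, one simple way to add that structure is to work on the Fisher z scale and put a normal prior on the true difference under H1. A minimal R sketch, assuming two independent samples; r1, r2, n1, n2 are placeholder values, and the prior SD tau = 0.5 is an arbitrary choice:

# Approximate BF01 for H0: rho1 = rho2, via Fisher's z transform
r1 <- 0.40; n1 <- 80   # placeholder sample correlation and size, group 1
r2 <- 0.25; n2 <- 90   # placeholder sample correlation and size, group 2

d    <- atanh(r1) - atanh(r2)          # observed Fisher-z difference
se_d <- sqrt(1/(n1 - 3) + 1/(n2 - 3))  # its approximate standard error

tau <- 0.5  # prior SD on the true z-difference under H1 (arbitrary choice)

# Normal-normal model: marginal likelihood of d under each hypothesis
m0 <- dnorm(d, mean = 0, sd = se_d)                  # H0: difference = 0
m1 <- dnorm(d, mean = 0, sd = sqrt(tau^2 + se_d^2))  # H1: difference ~ N(0, tau^2)
BF01 <- m0 / m1   # values > 1 favour "no difference"
BF01

The answer is sensitive to tau, so it is worth reporting the BF for a few prior widths rather than a single value.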
Cristian Ramos-Vera
Cesar Vallejo University
Good information Kevin
The way you'd test a "difference between two correlations" would typically be to test an interaction in a multiple regression model (assuming that the two correlations share the same DV but have different IVs). Then it should be straightforward to calculate a BF for a regression coefficient.
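A hedged sketch of that approach with the BayesFactor package, using simulated data (the names df, g, x, y are placeholders, not from the original question):

library(BayesFactor)

# Simulated example: shared DV y, shared IV x, grouping factor g;
# the true x-y slope differs between the two groups
set.seed(1)
n  <- 100
df <- data.frame(g = factor(rep(c("A", "B"), each = n)), x = rnorm(2 * n))
df$y <- ifelse(df$g == "A", 0.5, 0.1) * df$x + rnorm(2 * n)

# BF for the interaction = BF(full model) / BF(model without interaction);
# the interaction term carries the "difference between correlations"
bf_full <- lmBF(y ~ x + g + x:g, data = df)
bf_main <- lmBF(y ~ x + g, data = df)
bf_full / bf_main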
Cristian Ramos-Vera
Cesar Vallejo University
Is it possible to do a Bayesian reanalysis of odds ratio (OR) data, converting the ORs to correlation (r) values in order to estimate a Bayes factor?

Similar questions and discussions

The Hidden Role of Bernoulli's Principle in Dental Equipment and Medical Treatments: From Tools to Physiology
Discussion
Be the first to reply
  • Ali Zavareian
Bernoulli's principle, a fundamental concept in fluid mechanics, not only plays a key role in the design of dental equipment but also enhances our understanding of human physiology and medical treatments. However, as dental professionals, our awareness of its impact often remains limited to practical outcomes, rather than its scientific foundations.
Applications in Dental Equipment:
  • High-Speed Dental Handpieces: Efficient rotation of tools via controlled air and water flow.
  • Suction and Ventilation Systems: Negative pressure creation for effective fluid and debris removal.
  • Ultrasonic Scalers and Sprays: Cooling and plaque removal through optimized air and water dynamics.
  • Injection and Casting: Uniform distribution of impression materials and elimination of air bubbles.
Applications in Medical and Dental Treatments:
  • Blood and Air Flow in the Body: Bernoulli's principle aids in understanding (see the small worked example after this list):
      • Airway obstructions or sleep apnea: faster airflow in constricted areas reduces pressure and may exacerbate symptoms.
      • Blood flow through vessels: pressure changes in narrowed regions help explain certain vascular disorders.
  • Salivary Dynamics: The flow of saliva in ducts and its obstruction can be better understood through Bernoulli's principle.
  • Implants and Orthodontics: Understanding the distribution of forces on teeth and the surrounding bone can improve treatment outcomes.
  • Root Canal Irrigation: The controlled flow of irrigants like sodium hypochlorite in root canals utilizes the principle to enhance cleaning efficiency and safety.
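As a tiny worked illustration of the pressure-drop effect behind several of the applications above, here is a minimal R sketch under idealized assumptions (steady, incompressible flow; all numbers are hypothetical, not clinical values):

rho <- 1000     # fluid density, kg/m^3 (water-like)
v1  <- 1.0      # upstream flow velocity, m/s
A1  <- 1.0e-6   # upstream cross-sectional area, m^2
A2  <- 0.5e-6   # constricted cross-sectional area, m^2

v2 <- v1 * A1 / A2               # continuity: A1*v1 = A2*v2, so v2 = 2 m/s
dp <- 0.5 * rho * (v2^2 - v1^2)  # Bernoulli: p1 - p2 = (rho/2)*(v2^2 - v1^2)
dp                               # 1500 Pa: halving the area doubles the speed

This is the mechanism invoked above for constricted airways and narrowed vessels: the faster the local flow, the lower the local pressure.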
Challenge for Colleagues:
  • Can a deeper understanding of Bernoulli's principle enhance our treatment methods and the design of dental devices?
  • How much do we incorporate this scientific knowledge into our daily practice, and should it be emphasized more in professional training?
This discussion invites dental professionals to reconsider the influence of physics on both treatments and equipment. How do you see Bernoulli's principle affecting your practice, and what insights or experiences can you share?
Can I apply a mixed-effects model for unbalanced sample size and repeated measures?
Question
4 answers
  • Giorgio Sperandio
In my experimental design I have 4 treatments, 3 replicates per treatment, and 3 blocks. In each plot I measured whether a plant is infested or not (the "Infestate" variable). This measurement was performed on 30 to 40 plants placed at the centre of the plot. Sampling was performed weekly (variable "Data_rilievo") on the same plants, although the sample size might vary if some plants die. Treatment does not influence plant death, so I removed the observations resulting in plant death from the dataset.
I obtained the following dataset:
'data.frame': 2937 obs. of 15 variables:
 $ ID_pianta   : chr  "_Pianta_1" "_Pianta_2" "_Pianta_3" "_Pianta_4" ...
 $ Data_rilievo: POSIXct, format: "2023-11-14" "2023-11-14" "2023-11-14" ...
 $ Blocco      : num  2 2 2 2 2 2 2 2 2 2 ...
 $ Trattamento : chr  "Controllo" "Controllo" "Controllo" "Controllo" ...
 $ Infestate   : num  1 0 0 1 0 1 0 0 1 0 ...
I opted for a mixed-effects model with treatment as a fixed effect, plant ID ("ID_pianta") as a random effect to account for repeated measures, and block ("Blocco") as a random effect.
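For reference, a minimal sketch of the corresponding call, assuming the lme4 package (the formula is the one shown in the summary below):

library(lme4)

# Binomial GLMM: infestation ~ treatment, with random intercepts for
# plant (repeated measures) and block
model <- glmer(Infestate ~ Trattamento + (1 | ID_pianta) + (1 | Blocco),
               data = data, family = binomial(link = "logit"))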
And this is the result:
> summary(model)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: binomial  ( logit )
Formula: Infestate ~ Trattamento + (1 | ID_pianta) + (1 | Blocco)
   Data: data

     AIC      BIC   logLik deviance df.resid
  3835.8   3871.7  -1911.9   3823.8     2931

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.1969 -1.0611  0.6139  0.8091  1.5079

Random effects:
 Groups    Name        Variance Std.Dev.
 ID_pianta (Intercept) 0.16880  0.4108
 Blocco    (Intercept) 0.09686  0.3112
Number of obs: 2937, groups:  ID_pianta, 40; Blocco, 3

Fixed effects:
                     Estimate Std. Error z value Pr(>|z|)
(Intercept)           0.59808    0.20650   2.896 0.003776 **
TrattamentoLavanda   -0.16521    0.11116  -1.486 0.137218
TrattamentoRosmarino -0.02389    0.11000  -0.217 0.828075
TrattamentoTimo      -0.37733    0.11017  -3.425 0.000615 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) TrttmL TrttmR
TrttmntLvnd -0.266
TrttmntRsmr -0.269  0.502
TrattamntTm -0.269  0.499  0.504
I also wanted to check the model's predictive ability. I used this code:
library(caret)

data$Infestate <- factor(data$Infestate, levels = c(0, 1))

# Fitted probabilities from the model (assumed; this line is needed for
# the code below to run but was not shown in the original post)
predicted_probabilities <- predict(model, type = "response")

# Convert predicted probabilities to binary predictions using a threshold
binary_predictions <- ifelse(predicted_probabilities > 0.5, 1, 0)

# Convert binary_predictions to a factor with levels 0 and 1
binary_predictions <- factor(binary_predictions, levels = c(0, 1))

# Create a confusion matrix
# (note: confusionMatrix() expects the predictions first and the reference
# second, so the arguments below are swapped relative to that convention)
conf_matrix <- confusionMatrix(data$Infestate, binary_predictions)
print(conf_matrix)
And these are the results:
Confusion Matrix and Statistics

          Reference
Prediction    0    1
         0 1811   28
         1  751   55

               Accuracy : 0.7055
                 95% CI : (0.6877, 0.7228)
    No Information Rate : 0.9686
    P-Value [Acc > NIR] : 1

                  Kappa : 0.0709

 Mcnemar's Test P-Value : <2e-16

            Sensitivity : 0.70687
            Specificity : 0.66265
         Pos Pred Value : 0.98477
         Neg Pred Value : 0.06824
             Prevalence : 0.96862
         Detection Rate : 0.68469
   Detection Prevalence : 0.69527
      Balanced Accuracy : 0.68476

       'Positive' Class : 0
It seems the model is good at predicting negatives, but it produces 751 false positives. How should I deal with this? Can the model be considered a good predictor? How can I improve its predictive ability?
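One knob implicit in the code above is the 0.5 cut-off; with classes this imbalanced, a threshold chosen from the ROC curve will usually trade some raw accuracy for fewer spurious calls. A minimal sketch, assuming the pROC package and the predicted_probabilities vector from the code above:

library(pROC)

# ROC curve of fitted probabilities against the observed outcome
roc_obj <- roc(response = data$Infestate, predictor = predicted_probabilities)

# Youden-optimal threshold as an alternative to the default 0.5
coords(roc_obj, x = "best", best.method = "youden", ret = "threshold")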
How to compare repeated measures correlation between groups?
Question
6 answers
  • Jake Harris
I am measuring two continuous variables over time in four groups. First, I want to determine whether the two variables correlate in each group. I then want to determine whether there are significant differences in these correlations between groups.
For context, one variable is weight and the other is a behaviour score. The groups are receiving various treatments, and I want to test whether weight change influences the behaviour score differently in each group.
I have found the R package rmcorr (Bakdash & Marusich, 2017) to calculate correlation coefficients for each group, but I am struggling to determine how to correctly compare correlations across more than two groups. The diffcorr package only allows comparisons between two groups.
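For the per-group step, a minimal sketch, assuming a recent rmcorr version that accepts column names as strings (the columns id, weight, behaviour, group and the data frame dat are hypothetical):

library(rmcorr)

# Hypothetical long-format data: one row per subject per time point,
# with columns id, weight, behaviour, and group
per_group <- lapply(split(dat, dat$group), function(g)
  rmcorr(participant = "id", measure1 = "weight",
         measure2 = "behaviour", dataset = g))

# Per-group repeated-measures correlations and their error df
sapply(per_group, function(f) c(r = f$r, df = f$df))

Pairwise differences between groups could then be screened with Fisher z tests based on each model's error degrees of freedom, applying a Holm or Bonferroni correction along the lines suggested below.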
I came across this article describing a different method in SPSS:
However, I don't have access to SPSS, so I am wondering if anyone has suggestions on how to do this analysis in R (or even GraphPad Prism).
Or I could use the diffcorr package to calculate differences for each combination of groups, but then would I need to apply a multiple-comparison correction?
Alternatively, Mohr & Marcon (2005) describe a different method using Spearman correlation that seems as if it might be more relevant; however, I wonder why their method doesn't seem to have been used by other researchers. It also looks difficult to implement, so I'm unsure whether it's the right choice.
Any advice would be much appreciated!
