Multilevel Modeling: standardized beta-coefficients & Cohen's r (effect size)
Question
  • Jun 2017
I am a PhD student in Health Psychology conducting Multilevel Analyses in R. I am using the lmer() function to analyze my dyadic diary data.
I want to estimate the effect sizes of my Level-1 predictors. Using the sjt.lmer function of the sjPlot package, I derived the standardized beta-coefficients (show.std=TRUE). My question is: can I interpret these standardized beta-coefficients in terms of Cohen's r effect size?
As I understand it, Cohen's r reflects the partial correlation, with values of r >= .10 indicating a small effect.
When I calculate Cohen's r 'by hand' (r = beta_x * SD(x)/SD(y), with beta derived from the multilevel model and the SDs taken from the descriptive statistics via summary()), I get larger estimates than those derived from the sjt.lmer function (standardized beta-coefficients). I guess this is because only the beta-coefficient takes the other predictors into account.
I would appreciate any advice on how to calculate and interpret effect sizes for my multilevel model. Can I interpret the standardized beta-coefficients as derived from sjt.lmer() in terms of Cohen’s r effect size?
Thank you very much in advance,
Fabiola
… 
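For illustration, here is a minimal R sketch of the by-hand standardization described in the question, assuming a hypothetical dyadic diary data frame with columns outcome, predictor, and dyad (these names are placeholders, not from the original post):
library(lme4)

m <- lmer(outcome ~ predictor + (1 | dyad), data = diary)

b_raw <- fixef(m)["predictor"]                             # unstandardized coefficient
b_std <- b_raw * sd(diary$predictor) / sd(diary$outcome)   # beta * SD(x)/SD(y)
b_std
Note that this uses the total (pooled) SDs and ignores the within/between variance decomposition; packages that standardize multilevel coefficients may use a different scaling, which is one plausible reason for a discrepancy with sjt.lmer.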
  • 5 Answers
What are some recommendations for determining sample sizes for multilevel data? (i.e., power analysis)
Question
  • Jun 2017
I am planning to run a daily diary study for my dissertation and would like to run a power analysis beforehand. I have read up on multilevel power analyses and am somewhat confused about how to obtain the estimates. The papers I have read suggest that you either need pilot study data to derive the estimates or you need to take them from previous research. For example, a paper by Bolger et al. (2011) provides Mplus syntax for a multilevel power analysis and states that the estimates for the Mplus syntax must come from a simulated dataset (but it is unclear how they simulated the data).
If I were to rely on previous research, I am not sure how I would get some of the estimates. For the Bolger et al. (2011) power analysis, I need to estimate the within-subjects residual variance, the fixed effect of the predictor on the outcome variable at the between-person level, the covariance of the intercept and the within-subjects predictor's slope, etc. Some of these estimates are not reported in past research. For example, daily diary studies with Level 1 predictors do not seem to report any Level 2 results, and I do not think they report residual variances.
My preliminary assessment is that getting some of these estimates would involve a lot of blind guessing and/or a wild goose chase. I am curious how other researchers conduct power analyses for multilevel data, or how they decide on appropriate sample sizes before starting data collection.
… 
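As an illustration of the simulation approach mentioned in the question, here is a rough R sketch (not the Bolger et al. Mplus syntax) that simulates a daily-diary design and estimates power for a Level-1 fixed effect. All parameter values are assumptions to be replaced with plausible estimates:
library(lme4)

sim_power <- function(n_person = 80, n_day = 14, b1 = 0.15,
                      sd_int = 0.5, sd_slope = 0.2, sd_resid = 1, nsim = 500) {
  hits <- 0
  for (i in seq_len(nsim)) {
    id <- rep(seq_len(n_person), each = n_day)
    x  <- rnorm(n_person * n_day)                 # within-person (daily) predictor
    u0 <- rnorm(n_person, 0, sd_int)[id]          # random intercepts
    u1 <- rnorm(n_person, 0, sd_slope)[id]        # random slopes (covariance with intercepts set to zero for simplicity)
    y  <- u0 + (b1 + u1) * x + rnorm(n_person * n_day, 0, sd_resid)
    fit <- lmer(y ~ x + (1 + x | id))             # may emit convergence warnings on some replications
    t_val <- coef(summary(fit))["x", "t value"]
    hits <- hits + (abs(t_val) > 1.96)            # rough |t| > 1.96 criterion
  }
  hits / nsim                                     # estimated power
}

sim_power(n_person = 80, n_day = 14, nsim = 200)
Packages such as simr automate this kind of simulation-based power analysis for lme4 models.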
  • 77 Views
  • 4 Answers
Random factor, repeated measures ANOVA vs factorial ANOVA, and/or multilevel/mixed models?
Question
  • Jun 2019
Dear all,
My research question is whether therapists (the independent variable, IV) influence treatment effects (the dependent variable, DV). In other words, are some therapists more 'effective' than others?
The data includes about 60 therapists and >1000 patients in total, each with a pre- and post-test on a questionnaire measuring symptom severity. There is a large variability in the number of patients per therapist, although 10 patients per therapist is the minimum.
I have a couple of questions:
1. Is it then correct to assume that therapist should be a random factor in my analyses?
Because the IV therapist has many levels (> 60), and because I am not interested in the effect of each individual therapist but rather in the overall effect of the variable therapist on the outcome.
2. In general, I learned that a repeated measures ANOVA with pre- and post-scores is better (more power) than a factorial ANOVA that includes difference scores as the DV. However, am I correct that in a repeated measures ANOVA only fixed factors can be included and no random factors?
So if 1. is true, is the only option then to conduct a factorial ANOVA with therapist as random factor and difference scores as DV?
Or should I just include therapist as fixed factor in a repeated measures ANOVA?
3. I guess we somehow have to control for the number of patients per therapist, or take the individual (i.e., therapist-level) variance into account. Is it 'enough' or 'sufficient' to include 'number of patients per therapist' as a covariate in the analyses? Or is it best to analyse everything in a mixed-models approach (see the sketch below), also considering the previous concerns?
Thank you in advance!
Best,
Naline
… 
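For what the mixed-model route in point 3 could look like, here is a hedged R sketch with therapist as a random factor and the pre-test score as a covariate; the data frame and variable names (patients, post, pre, therapist) are assumptions:
library(lme4)

m <- lmer(post ~ pre + (1 | therapist), data = patients)

summary(m)    # fixed effect of the pre-test score
VarCorr(m)    # therapist-level variance: how much outcomes vary between therapists
Because partial pooling weights each therapist by the amount of information available, the unequal number of patients per therapist is handled by the model itself rather than by adding it as a covariate.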
  • 410 Views
  • 15 Answers
Using Propensity Score Analysis with aggregated level data for "intent-to-treat" analysis
Question
  • Aug 2016
I am trying to evaluate the impact of an intervention that was implemented in very poor areas (more poor people, underserved communities). In addition, the location of these areas was such that health services were limited for various administrative reasons. Thus, the intervention areas had two problems: (1) individuals residing in these areas were mostly poor, illiterate, and belonged to underserved communities; (2) the geographical location of the area also contributed to their vulnerability, as people with a similar profile but living elsewhere (non-intervention areas) had better access to services. I have cross-sectional data on health service utilization from both types of areas at endline. There is no baseline data available for intervention and control.
I want to do two analyses. (1) Intent-to-treat analysis: here, I wish to compare service utilization between "areas" (irrespective of whether a household in an intervention area was actually exposed to the intervention). The aim is to see whether the intervention could bring some change at the "area" (village) level. My question is: can I use propensity score analysis for this, by matching intervention "areas" with control "areas" on aggregated values of covariates obtained from the survey and Census? For example, matching intervention areas with non-intervention areas in terms of % of poor households, % of illiterate population, etc.
(2) The second analysis is to examine the treatment effect: here I am using propensity score analysis at the individual level (comparing those who were exposed in intervention areas with matched unexposed people from non-intervention areas). Is this the right way of analysing the data for my objective?
… 
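A rough sketch of the area-level matching idea for the intent-to-treat comparison, assuming one row per village with aggregated covariates; the MatchIt package is used here only for illustration and the variable names are hypothetical:
library(MatchIt)

m_area <- matchit(intervention ~ pct_poor + pct_illiterate + pct_underserved,
                  data = villages, method = "nearest")

summary(m_area)                 # covariate balance before and after matching
matched <- match.data(m_area)   # matched intervention and control areas
With only a limited number of areas, checking balance and overlap after matching is especially important before comparing outcomes.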
  • 672 Views
  • 4 Answers
Could anybody please suggest a way to analyze my LCMS metabolomics data to integrate all the components of the study?
Question
  • Jan 2017
I have two soils, two treatments, four depths, and five time points. How do I choose features from my spectral data for multilevel analyses? I would appreciate any pointers.
… 
  • 3 Views
  • 2 Answers
Dichotomous or multiple categories for nominal/ordinal variables?
Question
  • Feb 2016
Hello everybody,
Right now, I am working on an article for my PhD. I have to make a decision about the nominal/ordinal variables: do I make them dichotomous or do I keep multiple categories, and what are the arguments for either decision? Does anyone have a scientific source on this for me?
Eventually, the variables will be used in multilevel regression analyses and lag analyses.
Thanks in advance for your answers!
Lilian
… 
  • 6 Answers
Effect size calculation for pairwise comparisons (Sidak adjustment)?
Question
  • Jul 2014
I've generated pairwise comparisons after running change-over-time analyses on my data (I have 3 time points, so I have used multilevel modeling). I now know the mean difference from each time point to the others, and whether it is significant. However, I'm unsure how to calculate effect sizes for these differences. SPSS has automatically performed a Sidak adjustment for multiple comparisons. If anybody knows the answer to this I'd be extremely grateful, thanks!
… 
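The analysis in the question was run in SPSS, but as an illustration of one common approach, here is a hedged R sketch that refits the multilevel model and derives standardized effect sizes for the pairwise time comparisons via estimated marginal means; variable names are placeholders:
library(lme4)
library(emmeans)

m   <- lmer(score ~ time + (1 | id), data = long_data)   # long format, time coded as a factor with 3 levels
emm <- emmeans(m, ~ time)

pairs(emm, adjust = "sidak")                              # Sidak-adjusted pairwise comparisons
eff_size(emm, sigma = sigma(m), edf = df.residual(m))     # Cohen's d-type effect sizes (df here is an approximation)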
  • 4 Answers
Does Mplus aggregate the outcome variable at L2 in MLM?
Question
  • May 2022
Hi everyone,
I am conducting multilevel analyses in Mplus with students (L1) nested in classrooms (L2). I have several predictors at L1 and L2 that predict individual behavior. The output gives me the results for the within and between models. I am now wondering whether Mplus automatically aggregates my outcome variable at L2 for the between model, or whether my L2 variables predict individual behavior at L1, just as my L1 predictors do.
I thought this was an easy question, but I have received contradictory answers from colleagues. That's why I am hoping to get it clarified here.
best wishes,
Sebastian
… 
  • 527 Views
  • 8 Answers
How can we obtain/calculate the variance of an effect size for meta-analysis?
Question
  • Feb 2022
Excuse me if these questions are very naive about meta-analysis. We are conducting a meta-analysis and we have multiple effect sizes per study (with the effect sizes being based on the same group of people within each study), so we realize we need to conduct either a multilevel meta-analysis or a robust variance estimation meta-analysis, but there are 2 issues we are having difficulty figuring out.
1) Why are multilevel meta-analyses considered to be 3-level analyses? What are the 3 levels? Wouldn't it be a 2-level model with effect sizes nested within studies? Everyone seems to be modeling these types of data as a 3-level model and I'm having difficulty understanding how this translates into a 3-level model when I normally would think of it as a 2-level model.
2) All analyses with multiple effect sizes per study require us to provide the variance of each of the effect sizes. We are having trouble figuring out where to get that information, as studies don't report it. How do we calculate the variance of an effect size, especially when there are multiple effect sizes per study and we need a variance for each of them? What information do we use to calculate their variance? (We are working with unstandardized regression coefficients which we convert to standardized betas, or use standardized regression coefficients when those are provided.)
Again, apologies if these are naive questions, but would appreciate any help in advance!
… 
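To make question 1 concrete, here is a sketch of the three-level structure in R with metafor (column names yi, vi, study, and es_id are assumptions): sampling error within each effect size is level 1, effect sizes within studies are level 2, and studies are level 3, which is why the model with effect sizes nested in studies is usually described as three-level.
library(metafor)

res <- rma.mv(yi, vi,
              random = ~ 1 | study/es_id,   # effect sizes (es_id) nested within studies
              data = dat)
summary(res)

# If an effect size is a correlation, its sampling variance can be computed from
# the sample size alone, e.g. via Fisher's z transformation: vi = 1 / (n - 3).
dat_z <- escalc(measure = "ZCOR", ri = dat$r, ni = dat$n)
For standardized regression coefficients there is no equally simple formula; one common approximation is to rescale the coefficient's reported standard error by the same SD ratio used to standardize the coefficient and then square it.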
  • 22 Views
  • 3 Answers
How to save a survey-design dataset for other analyses (preferably in R)?
Question
  • Nov 2024
My question is how to conduct further analyses after setting up a survey design, or more generally how to conduct analyses based on complex survey data. I know some R packages and commands that support this, such as the survey package in R, svy in Stata, and the multilevel options in Mplus. But these mainly seem to support analyses such as regression, correlation, and the like. What about other analyses?
Is there any way to conduct analyses such as calculating growth curve velocity with survey data that has stratification and clustering? Or how do I add weights when doing these analyses?
For example, here is the simple dataframe:
Data <- data.frame(
  X      = c(1, 4, 6, 4, 1, 7, 3, 2, 2),
  Y      = c(6, 5, 9, 9, 43, 65, 45, 67, 90),
  weight = c(0.1, 1.2, 4, 0, 0, 5, 0.65, 1, 0)
)
Using the survey package to include the weight variable.
library(survey)
dat_weight <- svydesign(ids = ~1, data = Data, weights = Data$weight)
After doing this, how can I conduct other analyses? Can I save this object (dat_weight) as a simple data frame and use/export it for other analyses (such as latent variable modeling, and so on)? Is that possible?
I am struggling to figure out how to do more complex analyses using other packages, such as growth modeling, PCA, etc.
… 
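Building on the example above, a minimal sketch of both options: running a weighted analysis directly on the design object, and pulling the data (with its weight column) back out for use with other packages. The output file name is a placeholder:
library(survey)

fit <- svyglm(Y ~ X, design = dat_weight)   # weighted regression that respects the design object
summary(fit)

df_out <- dat_weight$variables              # the underlying data frame, including the weight column
write.csv(df_out, "weighted_data.csv", row.names = FALSE)
Many modeling packages accept sampling weights as a separate argument, so exporting the data frame together with its weight column is often sufficient for analyses the survey package does not cover.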
  • 112 Views
  • 2 Answers