Question
Asked 28 April 2019

Sensitivity analyses for minimum detectable effect for multilevel models?

I am trying to find R code to conduct sensitivity analyses for the minimum detectable effect for 1) linear mixed models and 2) multilevel binary logistic regression. I've looked into a few packages, including paramtest, simr, and PowerUpR, but have not been successful. Has anyone done a similar analysis, or had luck with any of these packages or others? Thank you!

Most recent answer

Zhiyun Wang
Renmin University of China
May I ask whether you ended up running a sensitivity analysis? I also need to do one, but I have not found any materials with specific, step-by-step explanations. Could you share some practical suggestions or materials?

Popular answers (1)

Maurizio Sicorello
Central Institute of Mental Health
Hey Lisa, my solution to that problem was:
1. Run your empirical model in lme4 and save it
2. Save a range of relevant effect sizes in a vector, informed by where you think the smallest detectable effect sizes should be.
3. Loop through these effect sizes and successively replace your empirical effect size in your model with them
4. Run a power analysis on the replaced effect size with simr
You can download my code here: https://osf.io/uz9sy/. Just search for "sensitivity analysis" to find the relevant section. If you need more context, have a look at the supplements of our preprint: https://osf.io/h7qkj/
This paper might be helpful for using simr:
I actually ran the procedure twice to make it faster: first, I looped through a wide range of effect sizes with large steps (e.g. from .05 to .60 in steps of .05). After narrowing it down, I used a smaller range with finer steps (e.g. from .10 to .15 in steps of .005).
(Edit: the procedure might run into problems when there are missing values. If that's the case, I'd recommend creating a dataframe without rows containing missing values. Also, remember that the scaling of the predictor/outcome matters for the effect sizes to make sense. I standardized on the total SDs, but depending on the context, no scaling or a different scaling might be more appropriate.)
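The four steps above can be sketched roughly as follows. This is not Maurizio's actual code (that is on OSF); the dataset and the names `y`, `x`, and `subject` are invented for illustration, and the grid and `nsim` are deliberately small:

```r
library(lme4)
library(simr)

# Illustrative data: 30 subjects, 10 observations each (all assumptions)
set.seed(1)
n_subj <- 30; n_obs <- 10
dat <- data.frame(
  subject = factor(rep(1:n_subj, each = n_obs)),
  x = rnorm(n_subj * n_obs)
)
dat$y <- 0.3 * dat$x + rnorm(n_subj, 0, 0.5)[dat$subject] + rnorm(nrow(dat))

# 1. Fit and save the empirical model
model <- lmer(y ~ x + (1 | subject), data = dat)

# 2. A coarse grid of candidate effect sizes
effect_sizes <- seq(0.05, 0.60, by = 0.05)
power <- numeric(length(effect_sizes))

# 3.-4. Replace the empirical slope with each candidate and rerun powerSim
for (i in seq_along(effect_sizes)) {
  m <- model
  fixef(m)["x"] <- effect_sizes[i]   # simr's fixef<- replacement method
  ps <- powerSim(m, test = fixed("x"), nsim = 50)
  power[i] <- summary(ps)$mean       # estimated power at this effect size
}

cbind(effect_sizes, power)
```

In practice one would raise `nsim` substantially (e.g. 1000) once the interesting region of the grid is found, exactly as the two-pass strategy below describes.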
If you have any questions, feel free to ask.
Best,
Maurizio
5 Recommendations

All Answers (4)

Martin Lukac
London School of Economics and Political Science
Hi Lisa,
I started working on this some time ago but have not had time to finish it yet. I've adapted the power analysis script from Gelman & Hill's book on multilevel regression models. Do you want to work on this together and maybe turn it into an easy-to-use package? Feel free to get in touch via martin.lukac [at] kuleuven [dot] be
Best,
Martin
Kelvyn Jones
University of Bristol
You may like to know about
which is a software tool that provides a sandpit for power analysis and related exploration in multilevel models. There is a version that can be used with R.

Similar questions and discussions

How do I report the results of a linear mixed models analysis?
Question
47 answers
  • Subina Saini
1) Because I am a novice when it comes to reporting the results of a linear mixed models analysis, how do I report the fixed effect, including the estimate, confidence interval, and p-value, in addition to the size of the random effects? I am not sure how to report these in writing. For example, how do I report the confidence interval in APA format, and how do I report the size of the random effects?
2) How do you determine the significance of the size of the random effects (i.e. how do you determine if the size of the random effects is too large and how do you determine the implications of that size)?
3) Our study consisted of 16 participants, 8 of whom were assigned a technology with a privacy setting and 8 of whom were not. Survey data were collected weekly. Our fixed effect was whether or not participants were assigned the technology. Our random effects were week (for the 8-week study) and participant. How do I justify using a linear mixed model for this study design? Is it accurate to say that we used a linear mixed model to account for missing data (i.e. non-response; technology issues) and participant-level effects (i.e. how frequently each participant used the technology; differences in technology experience; high variability in each individual participant's responses to survey questions across the 8-week period)? Is this a sufficient justification?
I am very new to mixed models analyses, and I would appreciate some guidance. 
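As a sketch of where the quantities in (1) come from, a model of this design can be fit with lme4 plus lmerTest (which adds Satterthwaite degrees of freedom and p-values). The data below are simulated stand-ins for the described study, not the real data:

```r
library(lmerTest)  # loads lme4 and adds df/p-values to the summary

# Invented data mirroring the design: 16 participants x 8 weeks
set.seed(7)
dat <- expand.grid(participant = factor(1:16), week = factor(1:8))
dat$group <- ifelse(as.integer(dat$participant) <= 8, "privacy", "control")
dat$y <- (dat$group == "privacy") * 0.5 +
  rnorm(16)[dat$participant] +        # participant random intercepts
  rnorm(8, 0, 0.3)[dat$week] +        # week random intercepts
  rnorm(nrow(dat))                    # residual noise

m <- lmer(y ~ group + (1 | participant) + (1 | week), data = dat)

summary(m)$coefficients       # estimate, SE, df, t, p for the fixed effect
confint(m, method = "Wald")   # approximate 95% CIs
VarCorr(m)                    # random-effect SDs, reported alongside
```

Each fixed effect is then reported in text as estimate, 95% CI, and p-value, with the random-intercept SDs from `VarCorr()` given alongside the residual SD.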
2x2 repeated measures (fully within-subjects) ANOVA power analysis in G*Power?
Question
3 answers
  • Lydia Searle
Hello,
I am trying to do a power analysis for a 2x2 repeated measures design to determine how many participants I need to achieve 80% power. I'm new to the world of power analysis and don't really have a strong stats background.
IV1 = face orientation
Level 1 = upright
Level 2 = inverted
IV2 = context
Level 1 = background present
Level 2 = background removed
This is a fully within-subjects design. I'm trying to use G*Power 3.1 to do the calculation. This is what I have entered into G*Power so far:
Test family: F tests
Statistical test: ANOVA: Repeated measures, within factors
Type of power analysis: A priori...
Effect size f = 0.25 (just assuming a medium effect)
Alpha err prob = 0.05
Power = 0.8
Number of groups = 1
Number of measurements = 4
Corr among rep measures = 0.5 (leaving it at the default)
Nonsphericity correction E = 1 (leaving it at the default)
The number of groups and number of measurements is the part I'm having an issue with. Will G*Power let me calculate n for a 2x2 within design, or is it assuming this is a 1x4 design? From what I've read and watched, number of groups comes into play if you have a between factor, which I don't, so I've set this to 1. As I have a 2x2 design, each participant is being measured 4 times, hence I've put number of measurements to 4.
Sometimes I've read/heard that G*Power DOES allow you to do a 2x2 within design, and sometimes I've read/heard that it does NOT allow you to do this.
I've had a look at GLIMMPSE 3.0.0 as an alternative, but it requires many fields where I don't know the answer; mainly, there is a list of tests to choose from, none of which is a repeated measures ANOVA. It also wants me to enter the means and SDs for each condition, but I haven't run the study yet, and it's exploratory, so I can't even really guess.
Can anyone with some stats / G*Power knowledge help?
Thank you,
Lydia
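One way around G*Power's ambiguity here is to simulate: in a 2x2 fully within-subjects design, the interaction F-test is equivalent to a one-sample t-test on the double difference (y11 − y12) − (y21 − y22), so power can be estimated directly. The sketch below is a hedged illustration; the effect size, the within-subject correlation, and the choice to put the effect in one cell are all assumptions, not a substitute for mapping G*Power's f onto your design:

```r
# Simulation-based power estimate for the 2x2 within-subjects interaction.
# rho = proportion of variance shared across a subject's four cells.
set.seed(42)
sim_power_interaction <- function(n, effect = 0.25, rho = 0.5, nsim = 2000) {
  hits <- 0
  for (s in 1:nsim) {
    subj <- rnorm(n, 0, sqrt(rho))            # shared subject effect
    y11 <- subj + rnorm(n, 0,      sqrt(1 - rho))
    y12 <- subj + rnorm(n, 0,      sqrt(1 - rho))
    y21 <- subj + rnorm(n, 0,      sqrt(1 - rho))
    y22 <- subj + rnorm(n, effect, sqrt(1 - rho))  # effect in one cell
    contrast <- (y11 - y12) - (y21 - y22)     # interaction contrast
    if (t.test(contrast)$p.value < 0.05) hits <- hits + 1
  }
  hits / nsim   # estimated power at this n
}

sim_power_interaction(30)
```

Running this over a range of `n` until the returned power crosses 0.80 gives the required sample size for the assumed effect, without depending on how G*Power interprets "number of groups" and "number of measurements".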

Related Publications

Article
Full-text available
Variance parameters in mixed or multilevel models can be difficult to estimate, especially when the number of groups is small. We propose a maximum penalized likelihood approach which is equivalent to estimating variance parameters by their marginal posterior mode, given a weakly informative prior distribution. By choosing the prior from the gamm...