Renmin University of China
Question
Asked 28 April 2019
Sensitivity analyses for minimum detectable effect for multilevel models?
I am trying to find R code to conduct sensitivity analyses for the minimum detectable effect for 1) linear mixed models and 2) multilevel binary logistic regression. I've looked into a few packages, including paramtest, simr, and PowerUpR, but have not been successful. Has anyone done a similar analysis or had luck with any of these packages or others? Thank you!
Most recent answer
May I ask whether you have done the sensitivity analysis? I also need to run one, but I have not found any write-up with specific, step-by-step explanations. Could you offer me some actionable suggestions and materials?
Popular answers (1)
Central Institute of Mental Health
Hey Lisa, my solution to that problem was:
1. Run your empirical model in lme4 and save it
2. Save a range of relevant effect sizes in a vector, informed by where you think the smallest detectable effect sizes should be.
3. Loop through these effect sizes and successively replace your empirical effect size in your model with them
4. Run a power analysis on the replaced effect size with simr
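The four steps above might look roughly like this in R. This is a minimal sketch, not the author's actual script: it assumes lme4 and simr are installed, and the data frame `mydata`, outcome `y`, predictor `x`, and grouping factor `group` are placeholders for your own variables.

```r
library(lme4)
library(simr)

# 1. Fit the empirical model and save it (placeholder formula/data).
model <- lmer(y ~ x + (1 | group), data = mydata)

# 2. A vector of candidate effect sizes around the suspected
#    minimum detectable effect.
effects <- seq(0.05, 0.60, by = 0.05)

# 3.-4. For each candidate, overwrite the empirical coefficient and
#        simulate power with simr.
power <- sapply(effects, function(b) {
  m <- model
  fixef(m)["x"] <- b                              # replace the fixed effect
  ps <- powerSim(m, test = fixed("x"), nsim = 200)
  summary(ps)$mean                                # estimated power
})

# Smallest candidate reaching the 80% power target:
effects[which(power >= 0.80)[1]]
```

With only 200 simulations per candidate the power estimates are noisy; for a final answer you would typically raise `nsim` (e.g. to 1000) once the grid has been narrowed down.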
You can download my code here: https://osf.io/uz9sy/. Just search for "sensitivity analysis" to find the relevant section. If you need more context, have a look at the supplements of our preprint: https://osf.io/h7qkj/
This paper might be helpful for using simr:
I actually ran the procedure twice to make the process faster: first, I looped through a larger range of effect sizes with large steps in between (e.g. from .05 to .60 in steps of .05). After I had narrowed it down, I used a smaller range with smaller step sizes (e.g. from .10 to .15 by .005).
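The coarse-then-fine search described here could be sketched as follows. `power_at()` is a hypothetical helper (not a simr function) that sets the coefficient of a predictor `x` in a fitted lme4 model to a candidate value and returns simulated power; the 80% target and grid bounds are illustrative.

```r
# Hypothetical wrapper: simulated power for effect size b, given a
# fitted lme4 model `model` with a fixed effect named "x".
power_at <- function(b) {
  m <- model
  fixef(m)["x"] <- b
  summary(powerSim(m, test = fixed("x"), nsim = 200))$mean
}

# Pass 1: coarse grid (.05 to .60 in steps of .05).
coarse <- seq(0.05, 0.60, by = 0.05)
p1 <- sapply(coarse, power_at)

# Pass 2: fine grid (step .005) starting at the last coarse value
# below the 80% target (assumes at least one value falls below it).
lo   <- coarse[max(which(p1 < 0.80))]
fine <- seq(lo, lo + 0.05, by = 0.005)
p2   <- sapply(fine, power_at)

fine[which(p2 >= 0.80)[1]]   # approximate minimum detectable effect
```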
(Edit: The procedure might run into problems when there are missing values. If that's the case, I'd recommend creating a data frame without the rows containing missing values. Also, remember that the scaling of the predictor/outcome matters for the effect sizes to make sense. I standardized on the total SDs, but depending on the context, no scaling or a different scaling might be more appropriate.)
If you have any questions, feel free to ask.
Best,
Maurizio
5 Recommendations
All Answers (4)
London School of Economics and Political Science
Hi Lisa,
I started working on this some time ago but haven't had time to finish it yet. I've adapted the power analysis script from Gelman & Hill's book on multilevel regression models. Do you want to work on this together and maybe turn it into an easy-to-use package? Feel free to get in touch via martin.lukac [at] kuleuven [dot] be
Best,
Martin
University of Bristol
You may like to know about
which is a software tool that provides a sandpit for power analysis and related work in multilevel models. There is a version that can be used with R.