Power Analyses - Science topic
Explore the latest questions and answers in Power Analyses, and find Power Analyses experts.
Questions related to Power Analyses
Is there a simple way to determine the sample size required to test a moderated mediation? We are talking about a mediation with 3 mediators and one moderator. I would appreciate any tips!
Dear all,
In social sciences, it is often recommended to determine the sample size we need with an a priori power analysis. This analysis requires, among other inputs, an expected effect size, which is usually taken from prior work similar to our own. However, this index is not always reported. In this context, according to Perugini et al. (2018), it is possible to use a sensitivity power analysis to determine the minimum effect size that can be reliably detected. This analysis is computed from, among other things, the sample size available for the study.
The problem I'm facing is the following: do I have to compute a sensitivity power analysis before each statistical analysis? If not, and assuming that 3 distinct analyses must be conducted, from which analysis should I determine the appropriate effect size for my study?
Thank you for considering my request.
Best,
Kévin
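A minimal sketch of a sensitivity analysis in R with the pwr package, assuming (purely for illustration) that one of the three analyses is an independent-samples t-test with 50 participants per group; leaving d unspecified makes pwr.t.test solve for the minimum detectable effect:
library(pwr)
# minimum effect size detectable with the sample at hand
# (assumed: two-group t-test, n = 50 per group, alpha = .05, power = .80)
pwr.t.test(n = 50, sig.level = .05, power = .80, type = "two.sample")$d
In principle this would be repeated with the test type and n relevant to each planned analysis; the least powerful of the three then gives the most conservative (largest) minimum detectable effect.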
Dear All,
As stated above, I want to build a multiple linear regression model based on 4 or 5 independent variables (4 continuous and 1 categorical) to predict one dependent variable (continuous). Since it is a new approach, I want to do a pilot study first.
I assume that I will use an a priori power analysis with alpha = 0.05 and a power of 95%.
How do I determine the effect size (f2)? Should it be large for the pilot study and medium for the following study? Or are there other approaches?
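A minimal sketch with the pwr package, assuming 5 predictors and Cohen's conventional f2 benchmarks (.02 small, .15 medium, .35 large); the medium value below is only an illustration, not a recommendation:
library(pwr)
# a priori sample size for multiple regression with 5 predictors,
# alpha = .05, power = .95, medium effect f2 = .15
res <- pwr.f2.test(u = 5, f2 = 0.15, sig.level = 0.05, power = 0.95)
res
ceiling(res$v) + 5 + 1   # required n = numerator df + error df + 1
Swapping in f2 = .02 or .35 shows how strongly the assumed effect size drives the required n.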
Hello everyone,
I wanted to estimate the required sample size for a CFA model, and the output returns a required sample size of 37, which is way too low for a model with 60 measured variables. I can't understand what I am doing wrong. Is the function returning the required sample size per item?
Please see below all the information about the model and the performed power analysis.
I would be highly obliged if someone could inspect the syntax and parameters below and let me know what I am missing, or direct me to another method of performing a power analysis for CFA.
Model:
60 items
12 factors
2 higher order factors (6 factors each)
the 2 higher order factors are correlated
loadings of the 12 factors on the higher order factors are fixed to 1
Method used to perform the a priori power analysis: package semPower in R
library(semPower)
# degrees of freedom as computed in the calculation below
df <- 60 * (60 + 1) / 2 - (2 + 60 + 60 + 1)
ap <- semPower.aPriori(effect = 0.05, effect.measure = 'RMSEA',
                       alpha = .05, power = .80, df = df)
summary(ap)
Method used to calculate degrees of freedom:
df = p(p + 1)/2 − q
df = 60 × (60 + 1)/2 − (2 + 60 + 60 + 1) = 1830 − 123 = 1707
p is the number of observed variables
q is the number of free parameters of the hypothesized model, composed of (a) loadings, (b) item-residual variances, and (c) covariance/regression parameters between factors and between item residuals
Issue:
The problem is that the function returns a required sample size of 37, which of course is way too low for a model with 60 measured variables.
I can't understand what I am doing wrong. Is the function returning the required sample size per item?
Thank you in advance!
Hello everyone!
In a recent research project, I computed a Mixed-Design ANCOVA, i.e., a repeated measures analysis with one within-subjects factor (4 points of measurement) + one between-subjects factor (2 groups) + three covariates.
Is there any software package for doing a post-hoc power analysis in this context? I think G*Power does not offer an option for repeated-measures ANCOVAs...
If not, do you know of any step-by-step instructions or recommendations for calculating this "by hand"?
Thank you in advance!
Daniel Spitzenstätter
I am running different tests (t-tests, correlations, ANOVAs) and have therefore computed different power analyses. Should I report all of these, or only the test that requires the most participants to ensure enough power (since all other tests will then have enough power)? Thanks in advance!
Hello,
I would like to ask about the sample size and power analysis calculations. I have 6 conditions for my survey, and 9 questions/choice tasks with 2 options (+1 status quo option) in each. I will be analyzing them with a mixed logit (MXL) model after I complete the experiment. I have been searching for formulas, but there does not seem to be a formula specifically for discrete choice experiments analyzed with MXL. What do you suggest? Are there any alternative approaches?
Any advice would be much appreciated! :)
Hi everyone,
I tested an SEM model with 2 IVs, 4 mediators, and 1 DV on a sample of 1000 participants (see attached figure). Could you please help me estimate a good sample size for this multiple-mediator model using a power analysis?
Best,
Robin
Hello everyone, I conducted a longitudinal study (four time points) and tested a longitudinal mediator. I submitted the study, and then received the following Editor's comment:
"You put up one hypothesis that predicts a null effect (no mediation), and at several points you interpret insignificant findings as evidence for null effects. From my perspective, such interpretations require a power analyses for respective statistics"
I have no idea...
Cry for help~~~
Thank you in advance.
Hi, everyone
In statistical power analysis, the relationship between effect size and sample size has crucial aspects, and most of the time this sample size decision is what I find confusing. Let me ask something about it! I've been working on rodents, and as far as I know, an a priori power analysis based on an effect size estimate is very useful for deciding on sample size. In experimental animal studies, refinement and reduction of animal use are a must for researchers, so they are expected to reduce the number of animals in each group to the level that still gives adequate precision to avoid Type II error. If an effect size can be obtained from studies prior to your own, the estimate is much easier. However, most papers provide no useful information on means and standard deviations or on effect sizes, which makes it hard to make an estimate without a pilot study.
So, in my case, taking into account the effect size I calculated from previous similar studies, the sample size per group should be around 10 (4 groups, total sample size = 40) for a statistical power of 0.80. In this case, how robust are residual checks or visual assessments (Q-Q plots or other approaches) when the sample size per group is small (<10)?
Kind regards,
Hi everyone, I'm preparing for an experiment and need to calculate the minimum sample size. Participants (level-2) are randomly assigned to one of the two conditions (level-1), and they will be measured with the same scales at two timepoints (level-3). Based on previous research, I decided that the effect size of condition on the outcomes is f=0.2.
Based on this design, how should I calculate the sample size I need?
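Setting the multilevel structure aside for a moment, a rough lower bound can be obtained by treating the condition effect as a simple two-group comparison with f = 0.2 (equivalent to d = 0.4); a simulation that reflects the two timepoints and the nesting would be more defensible, but this gives a starting point. A sketch with the pwr package:
library(pwr)
# crude approximation: between-subjects comparison of 2 conditions, f = 0.2
pwr.anova.test(k = 2, f = 0.2, sig.level = 0.05, power = 0.80)   # n per condition
# equivalently, as a two-sample t-test with d = 2*f = 0.4
pwr.t.test(d = 0.4, sig.level = 0.05, power = 0.80)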
How can we estimate the sample size for a moderated mediation analysis with more than two mediators? I am looking for scripts, macros, or syntax that allow us to estimate sample sizes for research questions that involve more complex models such as the above.
Hi, I have a simple question.
I am hoping to perform a power analysis/sample size estimation for an RCT. We will be controlling for baseline symptoms and using post-treatment or change scores as our outcome variable, i.e., we will use an "ANCOVA" design, which has been shown to increase power: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3671-2
Would anybody be able to point me towards the best tool for sample size estimation for such a model?
thanks!
It is said that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs with more than two factors (see https://stats.stackexchange.com/questions/59235/repeated-measures-within-factors-settings-for-gpower-power-calculation ). If so, how can I determine the sample size for a design with 3, 4, or even 5 factors? Thank you!
G*Power is only equipped to deal with calculations for bivariate regressions and needs SDs for both predictor samples.
However, if one has a multiple regression with more than 2 outcome variables (multivariate), is there a way to calculate the sample size a priori with an alpha of 0.05 and assuming a moderate effect size of 0.35?
thanks in advance.
I have done a study of serum metabolite analysis in a control and an experimental group. The MetaboAnalyst-based analysis shows a significant alteration in quite a few metabolites. To run the statistics, I have two questions:
1- With n=3/group what fold change in metabolite levels could be reliably detected?
2- How do we estimate statistical power for the study?
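For question 1, a minimal sketch with base R's power.t.test, assuming (my assumptions, not given in the question) that the metabolite levels are compared with a two-sample t-test on a scale where the within-group SD is 1; leaving delta unspecified makes the function solve for the smallest detectable difference at n = 3 per group:
# smallest detectable difference, in SD units, at n = 3 per group
power.t.test(n = 3, sd = 1, sig.level = 0.05, power = 0.80, type = "two.sample")
# the returned delta is roughly 3 SDs, i.e. only very large changes are detectable;
# for question 2, fix delta at the observed difference and solve for power instead:
power.t.test(n = 3, delta = 2, sd = 1, sig.level = 0.05, type = "two.sample")$power
Translating the delta into a fold change requires an estimate of the within-group SD on the (log) concentration scale, which would come from your data or prior studies.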
Dear community,
I'm looking into ways how to do an a-priori power analysis for an fMRI experiment where the main analysis will be a representational similarity analysis (RSA).
The experiment will present the same stimuli in two successive fMRI sessions (with a behavioral training in between). For each fMRI session, I plan to do a model-based RSA on the brain responses elicited by the stimuli. Voxel restriction will be done with a searchlight procedure. The most interesting outcome will be the difference in these results between the two sessions, as estimated with a GLM contrast.
I think this is not uncommon, as I found other experiments adopting a similar analysis procedure. However, I found no clue on how to estimate the sample size necessary to achieve a certain statistical power (say 80%).
Since this is a bit of a Frankenstein made from other common statistical approaches, I'm not sure whether the general logic of fMRI power analysis applies here.
Has anybody experience in this area or can point me to literature that contemplates this issue?
Thanks,
Oliver
I'm developing a readiness assessment model for contractors' preparedness for a specific activity. To do so, a survey study was carried out and the data were analyzed with PLS-SEM to obtain the critical success factors (CSFs) contributing to that readiness. However, because the subject is very specific, it was impossible to define or quantify a population and hence to draw a probabilistic sample, which can compromise the external validity (generalizability) of my readiness assessment model. Is it feasible to try to reduce that generalizability issue by meeting the minimum sample size requirements (by means of power analyses) from Cohen (1992) and using PLSpredict to determine the predictive power of the model?
I'd be delighted if any colleague could help with this.
Hi everyone, we need some help with a request from a reviewer.
Here's the issue:
In our study we investigated the impairment in the ability to recognize facial emotions when a facemask is present (standard mask, transparent mask, and no mask). One reviewer argued that we employed too small a number of trials (40 items: 10 faces * 4 facial expressions). Moreover, due to the non-normality of the data, we ran non-parametric tests (Kruskal-Wallis, Mann-Whitney, Friedman, and Wilcoxon signed-rank tests). Does it make sense to compute the observed power of the analysis in response to the reviewer's concern? And in particular, how can we calculate the observed power of a Kruskal-Wallis test (SPSS does not have a point-and-click option for observed power for the non-parametric analyses)?
Furthermore, would it be equivalent to compute the observed power (post hoc) of the corresponding parametric tests (i.e., compute the observed power of the ANOVA corresponding to the Kruskal-Wallis test)?
Lastly, in a number of forums many statisticians clearly state that any post hoc computation of observed power is completely useless (not to say "nonsense"). But, aside from blog posts, is there a relevant paper we could cite to support this claim? Also, if that's the case, how can we justify our number of trials without using observed power?
P.S.: In our case we always observed significant results with very low p values (always <.001) and when we ran the corresponding parametric analysis we always had a >.90 observed power.
How can we sort this out?
thank you guys in advance,
Hello, are there alternative ways of controlling for alpha inflation that are suited to very small, hard-to-reach samples in which multiple hypotheses (all testing the same dependent variables) are examined?
In my case, the Bonferroni correction is too restrictive: 1) since the corrected alpha is so small, it is hard to obtain significant results, and 2) it would require recruiting more participants than are available in order to have sufficient statistical power. Indeed, my population is hard to reach because it is a very specific clinical population.
On the study: we want to identify which return-to-work obstacles predict return to work in organ transplant recipients. Return-to-work obstacles are measured using a single instrument, which has 10 independent subscales. Thus, 10 tests will be performed. For each test, there will be 2 independent variables: 1) how much a return-to-work obstacle (subscale) is perceived as important (score: 1-7) and 2) how much self-efficacy one has regarding this obstacle/subscale (score: 1-7). The dependent variables are: 1) the intention to return to work (yes/no), and 2) the employer's intention to welcome the employee back (yes/no). The dependent variables are measured 6 months after the independent variables.
Thanks so much in advance for your help! I am very thankful for this wonderful community.
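Not a full answer, but as an illustration of less conservative corrections: the Holm step-down procedure still controls the family-wise error rate while being uniformly more powerful than Bonferroni, and the Benjamini-Hochberg procedure controls the false discovery rate instead. Both are available in base R via p.adjust; the p-values below are made up for illustration:
p <- c(0.003, 0.012, 0.020, 0.041, 0.049, 0.120, 0.200, 0.350, 0.600, 0.810)  # 10 hypothetical tests
p.adjust(p, method = "bonferroni")
p.adjust(p, method = "holm")   # family-wise error rate, never worse than Bonferroni
p.adjust(p, method = "BH")     # false discovery rate (Benjamini-Hochberg)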
Hi everyone,
My question is related to post hoc power analysis when dealing with huge effect sizes, e.g., Cohen's d > 2.
No one asked me to perform this kind of analysis; it is just a matter of personal curiosity. It is claimed that in post hoc power analyses the p-value is in a 1-to-1 relation with the observed power, and this is clearly true. However, for huge effect sizes, I used a Monte Carlo simulation to check whether this still holds. Surprisingly, the curve shows quite a large area instead of a line (see the attachment; N = 1000 experiments from normally distributed populations).
Am I wrong to say that for huge effect sizes, post hoc power analysis gives reasonable results?
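For reference, a sketch of this kind of simulation in R (two-sample t-tests from normal populations with a true d of 2 and an assumed n = 20 per group), plotting the observed power computed from the observed d against the p-value:
library(pwr)
set.seed(1)
n <- 20                                      # assumed group size
sim <- replicate(1000, {
  x <- rnorm(n, 0, 1); y <- rnorm(n, 2, 1)   # true effect d = 2
  p_val <- t.test(x, y, var.equal = TRUE)$p.value
  d_obs <- abs(mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)
  pow   <- pwr.t.test(n = n, d = d_obs, sig.level = .05)$power
  c(p = p_val, power = pow)
})
plot(sim["p", ], sim["power", ], log = "x",
     xlab = "p-value", ylab = "observed power")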
Hi, I am carrying out a study, and for my results I will be conducting 3 different paired (repeated-measures) t-tests to see if there is a statistical difference between scores from 3 questionnaires that have been completed pre- and post-intervention.
Do I also need to carry out 3 different a priori power analyses in G*Power, since it asks me to input the mean and SD and I have 3 sets of those?
Say I'm using GPower to calculate the number of participants needed in a multiple regression to detect one of the predictor's unique effect. I want to be able to detect a correlation of .3. What you typically do is select:
- Test family: F tests
- Statistical test: Fixed model, R2 increase
- Type of power analysis: A priori (but I guess for my question, it really doesn't matter)
G*Power asks for an effect size (f2) and has a tool to convert R2 to f2. In that toolbox, you can select a "Direct" input, which is the partial R2. So, from what I understand, G*Power uses the partial correlation as an input.
Is it strictly for the partial correlation, or is it OK to use this for the part (semipartial) correlation as well? I know both return the same p-value (they are just two different beta-to-correlation transformations), so I'm not even sure whether the distinction is relevant at all.
Thanks in advance for any insights on this question!
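For what it's worth, if I understand G*Power's "Direct" option correctly, the conversion it applies is f2 = partial R2 / (1 − partial R2), so a partial correlation of .3 gives R2 = .09 and f2 ≈ .099. A sketch of the equivalent calculation with the pwr package, assuming (for illustration) 5 predictors in the full model and 1 tested predictor:
library(pwr)
r2_partial <- 0.3^2                  # partial correlation .3 -> partial R2 = .09
f2 <- r2_partial / (1 - r2_partial)  # ~ .099
res <- pwr.f2.test(u = 1, f2 = f2, sig.level = 0.05, power = 0.80)
ceiling(res$v) + 5 + 1               # approximate n: error df + total predictors + 1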
I have run a power analysis on my occupancy models which says that, for a power of 0.85 with 204 sample units and 11 replicates, the detection probability would need to be about 0.096.
How can I convert this detection probability to tell me how many detections (1's) would need to be in my detection history for that to be the detection probability?
Thanks for any help!
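If it helps, the expected number of 1's is just the number of surveys at occupied units times the per-survey detection probability. A back-of-the-envelope version in R, where the occupancy probability psi is an assumption you would replace with your own estimate:
S <- 204       # sample units
K <- 11        # replicates per unit
p <- 0.096     # per-survey detection probability
psi <- 0.5     # assumed occupancy probability (replace with your estimate)
psi * S * K * p   # expected detections: ~108 here, ~215 if every unit were occupied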
I am doing a power analysis to determine the number of subjects needed, based on a study that used mixed-effects logistic regression. I would guess that the odds ratio from this can't be entered into G*Power as though it were from a regular logistic regression... or can it? If not, is there another way?
Edit: To provide more information, the data is at the trial level, clustered within subjects and within items. This is why I feel like treating this outcome like a regular logistic regression isn't right.
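One route that does respect the clustering is a simulation-based power analysis: simulate trial-level data with subject and item random intercepts under assumed parameter values, fit the glmer model, and count how often the effect of interest is significant. A minimal sketch in R (every parameter value below is an assumption to be replaced with estimates from the original study):
library(lme4)
sim_power <- function(n_subj = 40, n_item = 20, b_cond = 0.5, nsim = 200) {
  pvals <- replicate(nsim, {
    subj <- factor(rep(1:n_subj, each = n_item))
    item <- factor(rep(1:n_item, times = n_subj))
    cond <- rep(rep(c(0, 1), each = n_subj / 2), each = n_item)  # between-subjects condition
    eta  <- -0.5 + b_cond * cond +
      rep(rnorm(n_subj, 0, 0.8), each = n_item) +                # subject intercepts
      rep(rnorm(n_item, 0, 0.5), times = n_subj)                 # item intercepts
    y   <- rbinom(length(eta), 1, plogis(eta))
    fit <- glmer(y ~ cond + (1 | subj) + (1 | item), family = binomial)
    summary(fit)$coefficients["cond", "Pr(>|z|)"]
  })
  mean(pvals < .05)   # estimated power at this design size
}
sim_power()
The simr package automates much of this once a fitted or artificial lme4 model is available.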
I used repeated measures for my analysis. Now I need to calculate the power. I have 1 between-subjects variable (2 levels) and 2 within-subjects variables (one has 2 levels and one has 3 levels). If I expect a medium effect size (f = 0.25), can 186 participants achieve a 0.80 power level?
I used G*Power for this analysis, but I am not sure whether it is correct, as I have more than 1 within-subjects variable. If G*Power can do this, should I enter 12 (2*2*3) for the number of groups and 2 for the number of measurements?
Does anyone have an idea about this? Thank you very much in advance!
Could anyone please provide me with some guidance on how to plan a sample size for a Welch's t test (with some references)? One can find plenty of information online for a Student's t test, but I couldn't find anything about the calculation for a Welch's t test.
I've heard that the power of a Welch's test is similar to the power of a Student's test. But I also wasn't able to find how to calculate its power (so that I could derive the sample size calculation), and I couldn't find confirmation that they are similar enough for me to use the same procedures as for the Student's test.
I appreciate any answers!!
I am planning to measure response time and accuracy in reaction to visually presented words. My design is 2x2 with respect to fixed factors, and I have about 5-6 additional random factors.
I plan to analyze the responses using linear mixed effects models (for accuracy data I will use a generalized mixed model).
My concerns are regarding stimulus selection and sample size.
Are there any tools/guidelines for computing the number of stimuli in each condition and the number of subjects needed in order to achieve a certain power level for the two-way interaction of the fixed factors? (suppose I have some estimate about effect size, and assuming alpha = 0.05).
How do I determine the number of participants needed to achieve X power, with a small effect size (assuming f2 of .02), for a multivariate regression?
As far as I can tell G*Power can only do this for univariate regression. Is there some modification I can make to its output? Is there another program I can use?
Note that I have no data in hand, I am only going off of the assumption that the effect size is small.
How to do power analyses for repeated measures designs with MORE THAN ONE within-subject or between-subject factor? For example, a 2*3 repeated measures design with two within-subject factors.
It seems that the current version of G*Power (3.1.9.2) is not able to do so. Are there any other solutions?
Specifically, I have a 2*3 repeated measures design with two within-subject factors, and I want to do an a priori power analysis to determine the sample size. If I want to achieve a 0.80 power level with an (assumed) medium effect size (f = 0.25), how many participants do I need?
Many thanks.
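One simulation-based option is the Superpower R package, assuming I recall its ANOVA_design()/ANOVA_power() interface correctly; the cell means, SD, and correlation below are placeholders to be replaced with values reflecting the effect you actually expect:
library(Superpower)
# 2x3 design with two within-subject factors ("2w*3w"), n = 40 per design
design <- ANOVA_design(design = "2w*3w", n = 40,
                       mu = c(0, 0, 0, 0.25, 0.25, 0.5),
                       sd = 1, r = 0.5,
                       labelnames = c("A", "a1", "a2", "B", "b1", "b2", "b3"),
                       plot = FALSE)
ANOVA_power(design, alpha_level = 0.05, nsims = 1000)
# adjust n until the simulated power for the effect of interest reaches .80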
For G*Power users: when should one use the chi-square test and when the z test as the test family in an a priori sample size calculation?
Another question:
My primary outcome is nominal data (clinical cure): I count the number of patients cured and obtain the % of patients cured in each of two groups (test and standard therapy). Previous trials in this discipline used non-parametric analyses for their primary endpoint (which is also clinical cure). Can I use the chi-square test in the a priori sample size calculation and also in the post hoc power analysis? (The primary outcome was not normally distributed.)
Thanks in advance
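For two independent proportions, one common route in R is Cohen's effect size h with the pwr package; the cure rates below are placeholders, not values from the question:
library(pwr)
# hypothetical cure rates: 70% with the test therapy vs 50% with standard therapy
h <- ES.h(0.70, 0.50)
pwr.2p.test(h = h, sig.level = 0.05, power = 0.80)   # n per group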
Hello,
I want to find the losses (switching and conduction losses) of a MOSFET and its diode. For the MOSFET (the positive half cycle of the current) I am using these equations: switching losses E_sw = (Voff * Ion / 6) * (ton + toff), where ton = td(on) + tr and toff = td(off) + tf from the datasheet; and for the conduction losses, E_cond = Ion^2 * RDS(on) * Ton.
For the diode (the negative half cycle of the current) I am using these equations: switching losses E_sw = (Voff * Ion / 6) * (ton + toff), where ton = td(on) + tr and toff = td(off) + tf + trr (reverse recovery time) from the datasheet; and for the conduction losses, E_cond = Vsd * Ion * Ton, where Vsd is the forward voltage of the diode from the datasheet.
My questions are:
1) Am I following the right equations?
2) For the diode loss calculation, the energy loss is negative because of the negative current, so to calculate the total losses of MOSFET + diode, should I take the absolute value or the negative one?
3) For a SiC MOSFET, should I consider the reverse recovery time (trr) when calculating the diode losses?
Thanks in Advance..
Hello,
I have implemented my design, a network-on-chip, in ISE. Now I am trying to get the dynamic power. I have written a Verilog test fixture for my design and generated a VCD file for it. But when I use XPA, the dynamic power seems wrong. I attach my test fixture.
Please tell me whether the VCD file generation is wrong or incomplete. I should mention that my simulation does not use all of the switches in the network.
Hello all,
I am trying to conduct a power analysis (via Monte Carlo simulation) to see how large a sample I would need to collect for a study. The hypothesized model is Hayes' PROCESS model 21 (attached below; moderated mediation, with the "a" and "b" path moderated by separate moderators).
I was wondering if there is any guidance on the best way to do so via Mplus or R (either resources or syntax; I have read previous Preacher papers but still find myself having difficulty conducting such an analysis). From my research these two software packages seem the most appropriate, but any other guidance would be welcome.
Thank you all so much!
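While this is not model 21 specifically, the general Monte Carlo logic looks like the sketch below: pick population values for every path, simulate many datasets of a candidate size, fit the model to each, and record how often the effect of interest is significant. This stripped-down R version does it for a simple indirect effect using two regressions and joint significance; for model 21 one would add the X*W1 and M*W2 interaction terms and test the conditional indirect effects or the relevant index instead (all path values are assumptions):
set.seed(123)
sim_once <- function(n, a = 0.3, b = 0.3, cp = 0.1) {
  x <- rnorm(n)
  m <- a * x + rnorm(n)                        # a-path (add an x*w1 term for model 21)
  y <- b * m + cp * x + rnorm(n)               # b-path (add an m*w2 term for model 21)
  p_a <- summary(lm(m ~ x))$coefficients["x", "Pr(>|t|)"]
  p_b <- summary(lm(y ~ m + x))$coefficients["m", "Pr(>|t|)"]
  max(p_a, p_b) < .05                          # joint-significance test of the indirect effect
}
power_at_n <- function(n, nsim = 1000) mean(replicate(nsim, sim_once(n)))
sapply(c(100, 150, 200), power_at_n)           # power at candidate sample sizes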
If we are using data loggers to obtain the voltage, current, and power of an on-site industrial motor (while the motor is running), and we have obtained these three parameters using specific calculations, how long and how often should we take measurements from the loggers to get a reliable value? And what will be the accuracy of the measurements obtained?
I am conducting a mixed-method research on the sexual health of people with physical disabilities, and I wanted to address some questions regarding the "disableism" or social discrimination regarding the sexuality of disabled people. I am wondering if there is any survey on sexual discrimination that focuses on issues regarding disabled people? Thank you in advance.
What are the main differences between a thermoelectric module used as a Peltier module and a thermoelectric module used as a thermoelectric generator?
I am conducting research on the health hazards and injuries of coastal fishing in Kombo South, The Gambia. The total number of fishermen in this area is 7,000. I need a sample size from this population that will be representative, valid, and reliable, so that I can test the hypothesis that injuries are more frequent among the least experienced fishermen.
My research proposes a quantitative approach using data from previous longitudinal observational studies to answer the following questions.
(sample size is approximately 300 participants)
1. What is the prevalence of a disease?
2. What clinical signs and symptoms present in patients are associated with this disease?
3. What risk factors were present in those who were diagnosed with this disease?
We are designing a comparative analysis between an infected group and a healthy group, but we are unsure how many patients we would need for a meaningful comparison.
The case series in question will be plotting non-validated measures (idiosyncratic ratings on a scale of 1-10) and using visual inspection and reliable change indices as methods of analysis.
The findings from one of our previous studies showed that the indicators were too high compared to other literature. We therefore planned to carry out a small-scale study to validate this finding. It is a cross-sectional study.
The study is designed to measure executive function in children in 3 consecutive years. The children will come into the lab annually with their parent. Since executive function is itself a highly complex phenotype, a battery of standardized tests will be used.
Hi all,
I was wondering if anyone could advise on a calculator or formula to retrospectively calculate the power of a study that uses ROC/AUC analyses? Unfortunately, from the information I have seen so far, the available calculators are for a priori analyses.
I understand that some researchers advise against post hoc power analyses, but I am nonetheless interested in any calculators/formulas people may be aware of.
Many thanks
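One option I'm aware of is power.roc.test() in the pROC R package, which can be run in the retrospective direction by supplying the observed AUC and group sizes (the numbers below are placeholders), with the usual caveats about interpreting observed power:
library(pROC)
# power to detect the observed AUC as different from 0.5, given the group sizes
power.roc.test(auc = 0.75, ncases = 40, ncontrols = 60, sig.level = 0.05)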
I am learning about metabolomics experiments that correlate metabolites with disease/biomarkers. However, such studies perform untargeted metabolomics to identify novel metabolite biomarkers, and I am not clear on how to conduct a power analysis for such an experiment. I did a thorough review of the literature and was unable to find much detail. Any help will be appreciated.
I'm going to study the expression of a particular micro-RNA in tumor samples in comparison to normal tissue. We obtain gastric tumor and normal samples from the same patient. How many samples do we need for our results to be valid for a differential expression study investigating the micro-RNA of interest as a biomarker? Is there any authoritative guideline or article we could use as a reference? Thanks for your help.
I am mainly having problems with the series APF (active power filter).
I am trying to set up a power analysis for infection status with nested factors.
I conducted a multiplatform metabolomic study (fingerprinting by CE-MS, GC-MS, LC-MS, and NMR) with blood (n=74) and urine (n=27) samples collected from dogs. I used the samples I could get (they were very hard to obtain), but I was asked to calculate the ideal number of participants my study should have, to check whether the "n" I used was enough for both types of samples.
Any idea will be most helpful.
Yours sincerely
Mariana Santos
Suppose we have an EEG study with 3 groups. The recorded numbers of participants for each group were 7, 3, and 7, respectively. The reviewer rightly claimed that this is not enough for a statistically sound comparison of the groups. I quote: "aiming for a medium effect size of Cohen's f = 0.25 (α = 0.05) and a power of 0.85 for condition x group interaction effects, one would need a total sample size of 48 subjects, thus at least 16 subjects per group." Which equation led to that conclusion? Suppose we reduce the number of groups to two. How would that change the minimum total number of subjects?
Thank you very much for your reply.
How do I obtain dynamic power and switching power reports using the Synopsys Design Compiler or PrimeTime tools?
Building RTL Power Models in RTL Compiler
In behavioral science, what should the effect size be to calculate power and determine sample size? Cohen, in his 1992 article on quantitative methods in psychology and in his book Statistical Power Analysis for the Behavioral Sciences (1988), suggested three classifications: small (.1), medium (.3), and large (.5). Which one do you think is more appropriate, and what if I go for the medium effect size?
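For context, the .1/.3/.5 benchmarks are Cohen's conventions for the correlation coefficient r (for d they are .2/.5/.8, and for the ANOVA effect size f they are .10/.25/.40). How much the choice matters for the required sample size is easy to see with the pwr package:
library(pwr)
# required n to detect a correlation at alpha = .05, power = .80
pwr.r.test(r = 0.1, sig.level = .05, power = .80)   # small
pwr.r.test(r = 0.3, sig.level = .05, power = .80)   # medium
pwr.r.test(r = 0.5, sig.level = .05, power = .80)   # large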
I am hoping to connect with people who are familiar with or working through power calculations for negative binomial and ZINB models. I have not found any software with power analysis calculators that can accommodate NB and ZINB regressions. I am comfortable with the theory and application but my math background is not strong enough to write functions for the power calculation. Suggestions for relevant literature and R code greatly appreciated.
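In the absence of a dedicated calculator, a simulation approach works: simulate counts from the assumed negative binomial model, fit it with MASS::glm.nb, and record how often the group effect is significant. A minimal sketch (all parameter values are assumptions; the same skeleton extends to ZINB by simulating a zero-inflation process and fitting with pscl::zeroinfl):
library(MASS)
sim_power_nb <- function(n_per_group, mu0 = 2, rate_ratio = 1.5, theta = 1.2, nsim = 500) {
  pvals <- replicate(nsim, {
    group <- rep(c(0, 1), each = n_per_group)
    mu    <- mu0 * rate_ratio^group            # group 1 has mean mu0 * rate_ratio
    y     <- rnbinom(2 * n_per_group, mu = mu, size = theta)
    fit   <- glm.nb(y ~ group)
    summary(fit)$coefficients["group", "Pr(>|z|)"]
  })
  mean(pvals < .05)   # estimated power for the group effect
}
sim_power_nb(n_per_group = 100)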
I can't afford to buy an energy analyzer. I am planning to record the energy consumed by a milling operation when different machining parameters are used and to find out the variation. There is no energy analyzer available here. Is there any other way to measure the power consumed and verify it?
In practice, can water be used to transfer voltage and current?
I am looking for an article about automatic generation control in economic load dispatch.
I need some basic ideas on designing a WPT system using simulation tools, and I also want to know how to calculate the parasitic inductance, capacitance, and resistance of the coil, either by using a formula or a network analyzer.
I am not too familiar with power analysis techniques. Is my understanding correct that the power values typically reported in EEG research papers are the power AVERAGED across a segment of the waveform/EEG data?
I am looking for publications adopting a modified power-law relationship with a monomial structure whose coefficient is represented by a function such as an error function or a hyperbolic function. See the attachment. Other suggestions on the coefficient are welcome.
Is there any good procedure for estimating a proper sample size for adaptive cluster sampling?