# Power Analyses - Science topic

Explore the latest questions and answers in Power Analyses, and find Power Analyses experts.
Questions related to Power Analyses
• asked a question related to Power Analyses
Question
Is there a simple way to determine the sample size required to calculate a moderated mediation? We are talking about a mediation with 3 mediators and one moderator. I would appreciate any tips!
Thank you for sharing! I have read your paper, and you performed the power analysis with the procedure below:
"Using the number of predictors as four including the interaction effect as shown in the research model (market orientation implementation, market orientation internalization, learning orientation, and implementation x learning orientation), medium effect size level (.15), a moderate significance level (α =.05), and a power requirement of .80, the minimum required sample size was 85."
May I know which test-family and statistical test you chose?
Thanks!
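For reference, the quoted figures correspond to G*Power's F-test family, "Linear multiple regression: Fixed model, R² deviation from zero", which evaluates a noncentral F distribution with noncentrality λ = f²·N. A minimal sketch reproducing the quoted minimum N of 85 with scipy (the helper name is ours):

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_pred=4, f2=0.15, alpha=0.05):
    """Power of the omnibus F test for a fixed-model multiple regression."""
    df1, df2 = n_pred, n - n_pred - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F value
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # noncentrality = f2 * N

n = 6
while regression_power(n) < 0.80:  # search for the smallest N reaching 80% power
    n += 1
print(n)  # -> 85, matching the quoted minimum sample size
```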
• asked a question related to Power Analyses
Question
Dear all,
In social sciences, it is often recommended to determine the sample size we need with an a priori power analysis. This analysis requires to provide, among others, an expected effect size, which is usually provided by prior works similar to our own. However, this index is not always reported. In this context, according to Perugini et al. (2018), it is possible to use a sensitivity power analysis to determine the minimum effect size that can be reliably detected. This analysis is computed from, among others, the available sample size for the study.
The problem I'm facing is the following: do I have to compute a sensitivity power analysis before each statistical analysis? If not, and assuming that 3 distinct analyses must be conducted, from which analysis should I determine the appropriate effect size for my study?
Thank you for considering my request.
Best,
Kévin
According to J.B. Maverick, sensitivity analysis is an analysis method used to identify how much variations in the input values for a given variable will impact the results for a mathematical model. Sensitivity analysis can be applied in several disciplines, including business analysis, investing, environmental studies, engineering, physics, and chemistry.
All models and studies executed to draw conclusions or inferences for policy decisions are based on assumptions regarding the validity of the inputs used in calculations. Sensitivity analysis is concerned with the uncertainty inherent in mathematical models where the values for the inputs used in the model can vary. It is the companion analytical tool to uncertainty analysis, and the two are often used together.
The conclusions drawn from studies or mathematical calculations can be significantly altered depending on how a certain variable is defined or the parameters chosen for a study. When the results of a study or computation do not significantly change due to variations in underlying assumptions, they are considered robust. If variations in foundational inputs or assumptions significantly change outcomes, sensitivity analysis can be employed to determine how changes in inputs, definitions, or modeling can improve the accuracy or robustness of any results.
Sensitivity analysis can be helpful in various situations, including forecasting or predicting as well as identifying where improvements or adjustments need to be made in a process. However, the use of historical data can sometimes lead to inaccurate results when forecasting since past results don't necessarily lead to future outcomes. Below are a few common applications of sensitivity analysis.
Return on Investment
In a business context, sensitivity analysis can improve decisions based on certain calculations or modeling. A company can use sensitivity analysis to identify the inputs that have the biggest impact on the return on a company's investment (ROI). The inputs that have the greatest effect on returns should be considered more carefully. Sensitivity analysis can also be used to allocate assets and resources.
One simple example of sensitivity analysis used in a business is an analysis of the effect of including a certain piece of information in a company's advertising, comparing sales results from ads that differ only in whether or not they include the specific piece of information.
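Coming back to the sensitivity power analysis of Perugini et al. mentioned in the question: for a fixed sample size, alpha, and desired power, one solves for the minimum detectable effect size. A sketch with statsmodels for a two-group comparison (the 60 participants per group is a made-up example):

```python
from statsmodels.stats.power import TTestIndPower

# minimum Cohen's d detectable with 80% power, alpha = .05,
# given a fixed, already-collected sample of 60 participants per group
d_min = TTestIndPower().solve_power(nobs1=60, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
print(round(d_min, 2))
```

Run once per planned test: each analysis has its own minimum detectable effect for the same sample.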
• asked a question related to Power Analyses
Question
Dear All,
As stated above, I want to build a multiple linear regression model based on 4 or 5 independent variables (4 continuous and 1 categorical) to predict one dependent variable (continuous). Since it is a new approach, I want to do a pilot study first.
I assume that I will use an a priori power analysis with alpha = 0.05 and power of 95%.
How do I determine the effect size (f²)? Should it be large for the pilot study and medium for the following study? Or are there other approaches?
Dear Rizky, the most important point in such studies is avoiding chance correlations due to the multiplicity of regressors. Enclosed is a very important paper that explains the issue, which I think you should take into consideration...
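On the f² question itself, it can help to see how strongly the choice of benchmark drives the required N. A sketch using the noncentral F distribution with Cohen's conventions f² = .15 (medium) and .35 (large), and the 5 predictors and 95% power mentioned in the question (the helper name is ours):

```python
from scipy.stats import f as f_dist, ncf

def min_n(f2, n_pred=5, alpha=0.05, power=0.95):
    """Smallest N whose omnibus regression F test reaches the target power."""
    n = n_pred + 2
    while True:
        df2 = n - n_pred - 1
        crit = f_dist.ppf(1 - alpha, n_pred, df2)
        if 1 - ncf.cdf(crit, n_pred, df2, f2 * n) >= power:
            return n
        n += 1

print(min_n(0.15), min_n(0.35))  # medium vs. large effect: very different N
```

Assuming a large effect for the pilot roughly halves the required sample, so the choice is consequential and worth justifying.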
• asked a question related to Power Analyses
Question
Hello everyone,
I wanted to estimate the required sample size for a CFA model, and the output returns a required sample size of 37, which is way too low for a model with 60 measured variables. I can't understand what I am doing wrong. Is the function returning the required sample size per item?
Please see below all the information about the model and the power analysis performed.
I would be highly obliged if someone could inspect the syntax and parameters below and let me know what I am missing, or direct me to another method of performing power analysis for CFA.
Model:
60 items
12 factors
2 higher order factors (6 factors each)
the 2 higher order factors are correlated
loadings of the 12 factors on the higher order factors are fixed to 1
Method used to perform the a priori power analysis: package semPower in R
library(semPower)
df <- 60 * (60 + 1) / 2 - (2 + 60 + 60 + 1)  # degrees of freedom, computed as below
ap <- semPower.aPriori(effect = 0.05, effect.measure = 'RMSEA',
                       alpha = .05, power = .80, df = df)
summary(ap)
Method used to calculate degrees of freedom:
df = p · (p + 1)/2 − q
df = 60 · (60 + 1)/2 − (2 + 60 + 60 + 1) = 1830 − 123 = 1707
p is the number of observed variables
q is the number of free parameters of the hypothesized model composed from (a) loadings, (b) item-residual variances (c) covariance/regression parameters between factors and between item residuals
Issue:
The problem is that the function returns a required sample size of 37, which of course is way too low for a model with 60 measured variables.
I can't understand what I am doing wrong. Is the function returning the required sample size per item?
1) your parameter count does not seem correct for the proposed model. You would also typically estimate residual variances for the first-order factors. Those latent residual variances seem to be missing from your free parameter count (q).
2) Your semPower syntax, in the form in which you used it, is limited to studying the sample size requirement for the RMSEA fit index. It only tells you the sample size sufficient to reject the null hypothesis of "close fit" using RMSEA (H0: RMSEA <= .05). It does not tell you whether the sample size would be enough to estimate the model parameters properly (without bias and with sufficient power). Clearly, N = 37 would not be enough, if only because the number of parameters in your model would be larger than the sample size. The description of the package (link that you provided) shows how to set up the commands to examine power for other hypotheses that may be more relevant for your sample size planning than RMSEA alone.
As mentioned by David Morse, a Monte Carlo simulation study would give you a clearer and more comprehensive answer to the optimal/required sample size question. See also
Muthén, L. K., & Muthén, B. O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599–620. https://doi.org/10.1207/S15328007SEM0904_8
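The Monte Carlo logic Muthén and Muthén describe is general: simulate many datasets from the assumed population model, fit the analysis model to each, and record how often the test of interest rejects. The same skeleton is shown here for a simple two-group mean difference rather than a full SEM (the effect d = 0.5 and n = 64 per group are illustrative):

```python
import numpy as np
from scipy import stats

def mc_power(n, d=0.5, alpha=0.05, reps=2000, seed=0):
    """Estimate power as the rejection rate across simulated experiments."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)  # control group
        y = rng.normal(d, 1.0, n)    # treatment group shifted by d
        if stats.ttest_ind(x, y).pvalue < alpha:
            hits += 1
    return hits / reps

print(mc_power(64))  # close to the analytic power of about 0.80 for d = 0.5
```

For SEM, the simulation and fitting steps would use an SEM package, but the rejection-counting loop is identical.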
• asked a question related to Power Analyses
Question
Hello everyone!
In a recent research project, I computed a Mixed-Design ANCOVA, i.e., a repeated measures analysis with one within-subjects factor (4 points of measurement) + one between-subjects factor (2 groups) + three covariates.
Is there any software package for doing a post-hoc power analysis in this context? I think G*Power does not contain an option for repeated measures ANCOVAs...
If not, do you know any step-by-step instruction or recommendation for calculating this "by hand"?
Daniel Spitzenstätter
Did you find proper software for an a priori power analysis for a mixed-design ANCOVA? I'm also interested in power analysis for this design (1 within-subjects factor, 1 between-subjects factor (2 groups), and 1 covariate).
Earlier you mentioned the "observed power" option in SPSS, which I also saw and ran. Do you think observed power is appropriate for this design? Also, is eta squared an appropriate effect size (for each F test in the design)?
I used the G*Power 3 app for Windows, which only has an ANCOVA option (for both sensitivity and a priori power analysis). I'm not sure if it is the right power analysis.
Best wishes,
Vahide.
• asked a question related to Power Analyses
Question
I am running different tests (t-tests, correlation, anova) and therefore have computed different power analyses. Should I report all of these, or only the test which needs the most participants to ensure enough power (as therefore all other tests will have enough power)? Thanks in advance!
Agree Salah Ahmed
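The usual practice is indeed to compute the required N for every planned test and recruit for the most demanding one. A sketch with statsmodels, using the Fisher z approximation for the correlation (all three target effect sizes are made-up placeholders):

```python
import math
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

alpha, power = 0.05, 0.80

# independent-samples t-test, assumed d = 0.5 (solve_power gives n per group, so double it)
n_t = 2 * TTestIndPower().solve_power(effect_size=0.5, alpha=alpha, power=power)
# one-way ANOVA with 3 groups, assumed f = 0.25 (returns total N)
n_a = FTestAnovaPower().solve_power(effect_size=0.25, alpha=alpha,
                                    power=power, k_groups=3)
# correlation r = .30, two-sided, via the Fisher z approximation
n_r = ((norm.isf(alpha / 2) + norm.isf(1 - power)) / math.atanh(0.30)) ** 2 + 3

n_needed = max(math.ceil(n_t), math.ceil(n_a), math.ceil(n_r))
print(math.ceil(n_t), math.ceil(n_a), math.ceil(n_r), n_needed)
```

Reporting all three and stating that recruitment targeted the largest keeps the write-up transparent.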
• asked a question related to Power Analyses
Question
Hello,
I would like to ask about sample size and power analysis calculations. I have 6 conditions for my survey, and 9 questions/choice tasks with 2 options (+1 status quo option) in each. I will analyze them using a mixed logit model after I complete the experiment. I have been searching for formulas, but it seems there is no formula specifically for discrete choice experiments analyzed with MXL. What do you suggest? Are there any alternative approaches?
Any advice would be much appreciated! :)
Sample size calculations and power analysis for data collected with choice experiments and analysed with mixed logit models can be done, in principle, by designing and conducting Monte Carlo simulations. The design of such simulations is challenging, as credible assumptions need to be made about a variety of unknowns. These include assumptions about the set of utility parameters expected to vary, their mode of variation (finite mixing, such as latent classes, or continuous, such as parametric or semi-parametric distributions with their means and covariance matrices), the specific experimental design to be used and---importantly---the values of the parameters for the data-generating process. Finally, the threshold sizes of the effects to be identified in estimation, and the minimal confidence sought for such estimates, need specification. For example, different power curves can be obtained for treatments that affect the probability of selection (e.g. market shares), relative values of utility parameters, and marginal rates of substitution (e.g. marginal WTPs).
• asked a question related to Power Analyses
Question
Hi everyone,
I tested an SEM model with 2 IVs, 4 mediators and 1 DV on a sample of 1000 participants (see attached figure). Could you please help me find an estimate of a good sample size using power analysis for this multiple-mediator model?
Best,
Robin
We were asked to provide a power analysis for a sample size of 20 in PLS-PM, fair enough. We did it. You can see in Radosevic, S. and Yoruk, E. (2013) Entrepreneurial propensity of innovation systems, Research Policy, 42(5). But, I agree with remarks here that with a sample size of 1000 you should be fine unless the reviewers ask for it.
• asked a question related to Power Analyses
Question
Hello everyone, I conducted a longitudinal study (four time points) and tested a longitudinal mediator. I submitted the study, and then received the Editor's comment:
"You put up one hypothesis that predicts a null effect (no mediation), and at several points you interpret insignificant findings as evidence for null effects. From my perspective, such interpretations require a power analyses for respective statistics"
I have no idea...
Cry for help~~~
The video available here may help. I think the editor prefers that you report the power of each analysis, so that your results are interpreted with caution and justification. For example, when your results are insignificant, the insignificance may be due to low power and the small sample size of your study rather than the absence of a true effect. https://www.google.com/url?sa=t&source=web&rct=j&url=https://m.youtube.com/watch%3Fv%3DtFmTlEoqy8I&ved=2ahUKEwiH_Kui_bjwAhWrzTgGHVWjC2UQwqsBegQIBRAB&usg=AOvVaw0MTlHpHlGClB9t2oAncuTs
• asked a question related to Power Analyses
Question
Hi, everyone
In relation to statistical power analysis, the relationship between effect size and sample size has crucial aspects, and this sample size decision often confuses me. Let me ask something about it! I've been working on rodents, and as far as I know, an a priori power analysis based on an effect size estimate is very useful for deciding on sample size. When it comes to experimental animal studies, refinement is a must for researchers, so it is highly desirable to reduce the number of animals in each group to just the level that gives adequate precision for avoiding a type-2 error. If an effect size can be obtained from studies prior to your own, then it's much easier to estimate. However, most papers don't provide any useful information either on means and standard deviations or on effect sizes, which makes it harder to form an estimate without a pilot study. So, in my case, taking into account the effect size I calculated from previous similar studies, the sample size per group (4 groups, total sample size = 40) should be around 10 for statistical power of 0.80. In this case, what do you suggest about the robustness of checking residuals or visual assessments using Q-Q plots or other approaches when the sample size is small (<10)?
Kind regards,
I cannot agree with the practice of estimating sample size based on previous studies. There are a number of important reasons for this.
1. The most important reason is that the sample size should have adequate power to detect the smallest effect size that is clinically significant. It doesn't matter what previous researchers have reported. If there is a clinically significant effect, then the study should have the power to detect it.
For instance, previous research may have shown that mask wearing reduces risk of Covid transmission by 50%. Fine. But even a 20% reduction in transmission risk is of considerable public health importance, so your study should be capable of detecting this. A study powered to detect a 20% risk reduction is, of course, comfortably powered to detect anything bigger.
2. The second reason is that early studies can suffer from, well, early study syndrome.
a) They are done by people who really believe in the effect, and who are prepared to put in unusual efforts to make the study work, so the study may have unrealistic levels of input.
b) Early studies take place in a context where protocols are evolving, and so the methodological quality is often lower – we learn by our mistakes; I'm not blaming early researchers!
c) They are more likely to be published if they find something interesting (a significant effect size).
And might I add that if your research actually matters there is no excuse for 80% power. It's a lazy habit. It's not ethical to run research that has a baked-in 20% chance of failing to find an important effect. Participants give their time and work for nothing (and animals give their lives). We have an ethical duty not to waste these on research that has one chance in five of failing to find something useful if it really exists.
• asked a question related to Power Analyses
Question
Hi everyone, I'm preparing for an experiment and need to calculate the minimum sample size. Participants (level-2) are randomly assigned to one of the two conditions (level-1), and they will be measured with the same scales at two timepoints (level-3). Based on previous research, I decided that the effect size of condition on the outcomes is f=0.2.
Based on this design, how should I calculate the sample size I need?
Hello Linwei,
This link offers both simulation software and guidance on how to go about the business of a priori power analysis for multilevel designs: http://www.bristol.ac.uk/cmm/learning/multilevel-models/samples.html
• asked a question related to Power Analyses
Question
How can we estimate sample size for a moderated mediation analysis with more than two mediators? I am looking for scripts, macros, syntaxes, that allow us to estimate sample sizes for research questions that include more complex models such as the above.
Hi Patricia,
I would recommend a manipulation-of-mediator design (Pirlott et al., 2016; doi:10.1016/j.jesp.2015.09.012), since there is harsh critique of the "traditional" methods introduced by Baron and Kenny for mediation analyses (Fiedler et al., 2017; doi:10.1016/j.jesp.2017.11.008). Some journals do not even publish articles with this type of analysis anymore (e.g. Health Psychology; Journal of Experimental Social Psychology). The method described in Pirlott et al. is advantageous for many reasons, and you can simply calculate the sample size on the basis of t-tests or correlation analyses.
Good Luck!
• asked a question related to Power Analyses
Question
Hi, I have a simple question.
I am hoping to perform a power analysis/sample size estimation for an RCT. We will be controlling for baseline symptoms and using post-treatment or change scores as our outcome variable, i.e., we will use an "ANCOVA" design, which has been shown to increase power: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3671-2
Would anybody be able to point me towards the best tool for sample size estimation for such a model?
Thanks!
In response to --> so why adjusting?
In a true experiment with random allocation to groups (i.e., an RCT) that has both baseline and follow-up measures on the outcome variable, the principal reason for including the baseline measure as a covariate is to reduce the error term. Variability in the follow-up measure (i.e., the DV) that is accounted for by the linear relationship between baseline and follow-up scores is partialled out of the error term. The cost is 1 df. But that cost is usually more than made up for by the reduction in SSerror.
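A useful shortcut for such designs is the approximation of Borm, Fransen and Lemmens (2007): the ANCOVA sample size is roughly the unadjusted two-group sample size multiplied by (1 − ρ²), where ρ is the correlation between baseline and follow-up scores. A sketch (d = 0.5 and ρ = 0.6 are made-up inputs):

```python
import math
from statsmodels.stats.power import TTestIndPower

d, alpha, power = 0.5, 0.05, 0.80
rho = 0.6  # assumed correlation between baseline and follow-up scores

# per-group N for a plain two-group comparison of follow-up scores
n_post = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)
# Borm et al. (2007) design-factor approximation for ANCOVA with a baseline covariate
n_ancova = n_post * (1 - rho ** 2) + 1

print(math.ceil(n_post), math.ceil(n_ancova))  # per-group sizes
```

The stronger the baseline correlation, the larger the saving from the ANCOVA design.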
• asked a question related to Power Analyses
Question
It is said that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs with more than two factors (see https://stats.stackexchange.com/questions/59235/repeated-measures-within-factors-settings-for-gpower-power-calculation ). If so, how can I determine sample size used for the design with 3, 4, or even 5 factors? Thank you!
Neil, thank you for your kind reply; I agree with you.
• asked a question related to Power Analyses
Question
G*Power is only equipped to deal with calculations for bivariate regressions and needs SDs for both predictor samples.
However, if one has a multiple regression with more than 2 outcome variables (multivariate), is there a way to calculate sample size a priori with an alpha of 0.05, assuming a moderate effect size of 0.35?
For once I would not recommend David Eugene Booth's advice, as there is no guarantee of quality if you simply Google. My advice would be to use G*Power, which is the widely recommended free software for sample size calculations. There is also a table on this specific subject in Andy Field's textbook Discovering Statistics Using SPSS.
• asked a question related to Power Analyses
Question
I have done a study of serum metabolite analysis in a control and an experimental group. The MetaboAnalyst software-based analysis shows a significant alteration in quite a few metabolites. To run the statistics, I have two questions:
1- With n=3/group what fold change in metabolite levels could be reliably detected?
2- How do we estimate statistical power for the study?
1. It depends what you want to do next...
a) If you are going straight to publication, then a larger fold change looks better. I would say a minimum of 3.
b) If you are planning to investigate your results further, I believe anything above a 1.5-fold change could be interesting.
2. Statistical power? With n = 3, there is essentially no power. I would go with a simple t-test...
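On question 1, the flip side can be computed: with n = 3 per group, what is the smallest standardized difference detectable at 80% power? Translating that into a fold change would additionally require an assumed variability (SD or CV) for each metabolite, which the sketch below deliberately leaves out:

```python
from statsmodels.stats.power import TTestIndPower

# smallest Cohen's d detectable with n = 3 per group, alpha = .05, power = .80
d_min = TTestIndPower().solve_power(nobs1=3, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
print(round(d_min, 2))  # only enormous group differences are reliably detectable
```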
• asked a question related to Power Analyses
Question
Dear community,
I'm looking into ways how to do an a-priori power analysis for an fMRI experiment where the main analysis will be a representational similarity analysis (RSA).
The experiment will present the same stimuli in two successive fMRI-sessions (with a behavioral training in between). For each fMRI session, I plan to do a model-based RSA on brain responses elicited by the stimuli. Voxel restriction will be done with a searchlight procedure. The most interesting outcome will be the difference in these results between the two training sessions, as estimated with a GLM contrast.
I think this is not uncommon, as I found other experiments adopting a similar analysis procedure. I found no clue, however, on how to estimate the necessary sample size to achieve a certain statistical power (say 80%).
Since this is a bit of a Frankenstein made from other common statistical approaches, I'm not sure if the general logic of fMRI power analysis applies here.
Has anybody experience in this area or can point me to literature that contemplates this issue?
Thanks,
Oliver
The topic is a bit dusty, but I wonder if you got any answers?
• asked a question related to Power Analyses
Question
I'm developing a readiness assessment model of contractors' preparedness for a specific activity. To do so, a survey study was carried out and the data analyzed with PLS-SEM to obtain the CSFs contributing to that readiness. Nevertheless, because the subject is so specific, it was impossible to define or quantify a population for it, and hence to draw a probabilistic sample, which can compromise the external validity (generalizability) of my readiness assessment model. Is it feasible to try to reduce that generalizability issue with the minimum sample size requirements (by means of power analyses) from Cohen (1992) and the use of PLS predict to determine the prediction power of the model?
I'd be delighted if any colleague could reply to this need
In general, using any rule-of-thumb for sample size planning or assessing statistical power is problematic.
Random sampling provides a model-free basis for generalization. Propensity score–based methods for generalization require three assumptions to ensure their validity. First, the stable unit treatment value assumption must hold for all units in the experiment and in the population. Second, generalization using propensity score methods requires strongly ignorable treatment assignment in the experiment. Finally, generalization using propensity score methods requires strongly ignorable sample selection. Rules of thumb need to take sample size into account, since features of probability samples—the benchmark for generalizability—differ markedly in small samples. This raises the issue of how to judge the adequacy of the match between the experimental sample and the inference population.
Probability sampling is the gold standard for generalizing from samples. The idea is to use the adequacy of matching that would be expected if the experiment had a probability sample to develop benchmarks of adequate matching. There is no reason to expect small experimental samples to match inference populations better than probability samples.
• asked a question related to Power Analyses
Question
Hi everyone, we need some help with a request from a reviewer.
Here's the issue:
In our study we investigated the impairment in the ability to recognize facial emotions when a facemask is present (standard mask, transparent mask, and no mask). One reviewer argued that we employed too small a number of trials (40 items: 10 faces * 4 facial expressions). Moreover, due to the non-normality of the data, we ran non-parametric tests (Kruskal-Wallis; Mann-Whitney; Friedman test; Wilcoxon signed-rank test). Does it make sense to compute the observed power of the analysis in response to the reviewer's concern? And in particular, how can we calculate the observed power of a Kruskal-Wallis test (SPSS does not have a point-and-click option for observed power for the non-parametric analyses)?
Furthermore, would it be the same to compute the observed power (post hoc) of the relative parametric tests (i.e., compute the observed power of the ANOVA relative to the Kruskal Wallis test)?
Lastly, in a number of forums many statisticians clearly state that any post hoc computation of observed power is completely useless (not to say "nonsense"). But, aside from blog posts, is there some relevant paper we could quote to support this claim? Also, if that's the case, how can we justify our number of trials without using observed power?
P.S.: In our case we always observed significant results with very low p values (always <.001) and when we ran the corresponding parametric analysis we always had a >.90 observed power.
How can we sort this out?
Hello all,
If the analysis was in any way a repeated measures design and if the individual trials were recorded as data points, then the number of trials would play a role in the df for various effects, most notably any interactions involving trials. So, in this way, they could have an effect on the statistical power (sampling distributions of F involving larger df have lower critical values).
However, if the authors simply summed or averaged the outcome over the set of trials and recorded that as the DV, then trials still has potential import for several reasons:
1. Too few trials would not yield as reliable an indication of participant performance as would a higher number.
2. Too many trials might result in participant fatigue or inattentiveness, either of which might bias the estimated performance downwards.
So, what's the "right" number of trials? In my opinion, unless one were willing to evaluate this specific task empirically using samples from the target population of interest, I don't think anyone would really know.
Best wishes to all.
• asked a question related to Power Analyses
Question
Hello, are there alternative ways of controlling for alpha inflation that suit very small, hard-to-reach samples in which multiple hypotheses (all testing the same dependent variables) are tested?
In my case, the Bonferroni correction is too restrictive: 1) since the alpha is so small, it will make it hard to obtain significant results, and 2) it requires that I recruit more participants than are available in order to have sufficient statistical power. Indeed, my population is hard to reach because it is a very specific clinical population.
On the study: we want to identify which return-to-work obstacles predict return-to-work in organ transplant recipients. Return-to-work obstacles are measured using a single instrument, which has 10 independent subscales. Thus, 10 tests will be performed. For each test, there will be 2 independent variables: 1) how much a return-to-work obstacle (subscale) is perceived as important (score: 1-7) and 2) how much self-efficacy one has regarding this obstacle/subscale (score: 1-7). The dependent variables are: 1) the intention to return to work (yes/no), and 2) the employer's intention of welcoming the employee back (yes/no). The dependent variables are measured 6 months after the independent variables.
Thanks so much in advance for your help! I am very thankful for this wonderful community.
Thanks so much for your answers, Béatrice Marianne Ewalds-Kvist and Rolando Gonzales Martinez !
I have chosen the Benjamini-Hochberg to control for error rates. However, I am seriously considering Baysian methods as my population is too specialized, and therefore small for frequentist methods.
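For readers facing the same choice, the Benjamini-Hochberg step-up procedure is a one-liner with statsmodels; here it is applied to ten hypothetical p-values standing in for the ten subscale tests:

```python
from statsmodels.stats.multitest import multipletests

# ten made-up p-values, one per subscale test
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]

# Benjamini-Hochberg false discovery rate control at q = .05
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
print(reject.sum(), [round(p, 3) for p in p_adj])
```

With these inputs only the two smallest p-values survive, but the threshold adapts to the observed p-value distribution rather than dividing alpha by the full number of tests, as Bonferroni does.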
• asked a question related to Power Analyses
Question
Hi everyone,
my question is related to post hoc power when dealing with huge effect sizes, e.g. Cohen's d > 2.
No one asked me to perform this kind of analysis; it is just a matter of personal curiosity. It is claimed that in post hoc power analyses the p-value is in a 1-to-1 relation with the observed power, and this is clearly true. However, for huge effect sizes, I used a Monte Carlo simulation to check whether this still holds. Surprisingly, the curve shows quite a big area instead of a line (see the attachment; N = 1000 experiments from normally distributed populations).
Am I wrong to say that for huge effect sizes, post hoc power analysis gives reasonable results?
Given that power is determined by a small set of things, like effect size (in pre-study power, the minimum effect size to detect) and sample size, you should consider whether reporting power adds anything to reporting these.
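The 1-to-1 relation the question mentions is easiest to see for a z-test, where "observed power" is a deterministic function of the p-value alone (the helper name is ours):

```python
from scipy.stats import norm

def observed_power_z(p, alpha=0.05):
    """Post-hoc 'observed power' of a two-sided z-test, from its p-value."""
    z_obs = norm.isf(p / 2)       # |z| implied by the two-sided p-value
    z_crit = norm.isf(alpha / 2)
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

print(round(observed_power_z(0.05), 3))  # a just-significant result gives ~0.50
```

Any scatter around this curve in a simulation comes from the t-distribution's dependence on degrees of freedom, not from the relation breaking down.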
• asked a question related to Power Analyses
Question
Hi, so I am carrying out a study, and for my results I will be conducting 3 different paired t-tests to see if there is a statistical difference between scores on 3 questionnaires completed pre- and post-intervention.
Do I also need to carry out 3 different a priori power analyses, since G*Power asks me to input the mean and SD and I have 3 sets of those?
Hi Patricia Viana,
You're right in thinking that you should carry out three different power analyses. Following this, it is common practice to recruit the largest sample size that is calculated out of all the relevant power analyses. By doing so, you maximise your chances of being able to identify a statistical difference for all the effects you are interested in.
Best of luck with your research!
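In G*Power these would be three "Means: Difference between two dependent means (matched pairs)" analyses. A sketch of the same rule with statsmodels (the three dz values are placeholders for whatever your means and SDs imply):

```python
import math
from statsmodels.stats.power import TTestPower

alpha, power = 0.05, 0.80
# hypothetical dz effect sizes for the three questionnaires
effects = {"questionnaire_A": 0.30, "questionnaire_B": 0.45, "questionnaire_C": 0.60}

ns = {name: math.ceil(TTestPower().solve_power(effect_size=dz,
                                               alpha=alpha, power=power))
      for name, dz in effects.items()}
print(ns, max(ns.values()))  # recruit for the most demanding test
```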
• asked a question related to Power Analyses
Question
Say I'm using GPower to calculate the number of participants needed in a multiple regression to detect one of the predictor's unique effect. I want to be able to detect a correlation of .3. What you typically do is select:
- Test family: F tests
- Statistical test: Fixed model, R2 increase
- Type of power analysis: A priori (but I guess for my question, it really doesn't matter)
G*Power asks for an effect size (f²), and has a tool to convert R² to f². In the toolbox, you can select a "Direct" input, which is partial R². So from what I understand, G*Power uses the partial correlation as an input.
Is it strictly for partial correlation, or is it OK to use this for part correlation as well? I know both return the same p-value (it's just two different beta-to-correlation transformations), so I'm not even sure if it is relevant at all.
Thanks in advance for any insights on this question!
I think G*Power needs a partial correlation specifically, but it will fit both a correlation and a partial correlation. For instance, the best scenario is partial correlation = correlation (the other predictors have no effects), and the worst is partial correlation <= correlation (the other predictors explain some variance, so the correlation is bigger than the partial). In both cases, you'll have the power to detect a partial correlation or correlation of at least .30.
If you use a part correlation instead, G*Power will overestimate the power of your analysis (because part correlation <= partial correlation), which may or may not be a good thing.
The real question is what is .30?
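For concreteness, the conversion behind G*Power's "Direct" option is f² = R²/(1 − R²), applied to the squared partial correlation. A minimal sketch (treating the .30 as a partial correlation is my assumption here, per the discussion in this thread):

```python
# Cohen's f^2 from a squared (partial) correlation, as G*Power's
# "Direct" toolbox option does: f2 = R2 / (1 - R2).
def r2_to_f2(r2):
    return r2 / (1.0 - r2)

# A partial correlation of .30 corresponds to R2_partial = .09,
# which gives f2 of roughly 0.099.
f2 = r2_to_f2(0.30 ** 2)
```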
• asked a question related to Power Analyses
Question
I have run a power analysis on my occupancy models which indicates that, for a power of 0.85 using 204 sample units with 11 replicates, the detection probability would need to be about 0.096.
How can I convert this detection probability to tell me how many detections (1's) would need to be in my detection history for that to be the detection probability?
Thanks for any help!
I think it's circular reasoning because the detection probability is calculated using your criteria, which already include the detections!
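As a back-of-the-envelope translation (not a replacement for the occupancy software): if surveys are independent, the expected count of 1's in the detection history is units × occupancy × replicates × p. The occupancy probability ψ has to be assumed, which is exactly the circularity noted in the previous answer:

```python
# Rough expected number of detections (1's) in a detection history,
# assuming independent surveys and a per-survey detection probability p:
#   E[detections] = n_units * psi * n_reps * p
# psi (occupancy probability) must be assumed; psi = 1 gives an upper bound.
def expected_detections(n_units, n_reps, p_detect, psi=1.0):
    return n_units * psi * n_reps * p_detect

upper = expected_detections(204, 11, 0.096)          # about 215 if every unit is occupied
half = expected_detections(204, 11, 0.096, psi=0.5)  # about 108 at 50% occupancy
```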
• asked a question related to Power Analyses
Question
I am doing a power analysis to determine the number of subjects needed, based on a study that used mixed-effects logistic regression. I would guess that the odds ratio from this can't be entered into G*Power as though it were from regular logistic regression... or can it? If not, is there another way?
Edit: To provide more information, the data is at the trial level, clustered within subjects and within items. This is why I feel like treating this outcome like a regular logistic regression isn't right.
Thank you!
• asked a question related to Power Analyses
Question
I used repeated measures for my analysis. Now I need to calculate the power. I have 1 between-subjects variable (2 levels) and 2 within-subjects variables (one with 2 levels and one with 3 levels). If I expect a medium effect size (f = 0.25), can 186 participants achieve a 0.80 power level?
I used G*power for this analysis, but I am not sure whether it is correct as I have more than 1 within variable. If G*power can do this, shall I enter 12 (2*2*3) for the group and 2 for the measurement?
Wei Li, if you tell us what software you will use to estimate your repeated measures model, someone may be able to give better advice about how to use simulation for power analysis using that same software. E.g., in Stata, the -corr2data- command comes to mind. Other stats packages probably have similar tools.
HTH.
• asked a question related to Power Analyses
Question
Could anyone please give me some guidance on how to plan a sample size for a Welch's t test (with some references)? One can find plenty of information online for a Student's t test, but I couldn't find anything about the calculation for a Welch's t test.
I've heard the power of a Welch's test is similar to the power of a Student's test. But I also wasn't able to find how to calculate the power (so I could derive the sample size calculation), and I couldn't find confirmation that they are similar enough for me to use the same procedures as for the Student's test.
Bruno
If you are planning to use equal-size samples, use the formulas for Student's t-test. It is the most powerful test in this case, and when sample sizes are equal it is very robust against moderate differences in variances.
If you have some idea about the difference in variances, you can use G*Power, a free and very good program for sample size calculation. You can easily find it on the internet.
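To sketch the calculation itself: for equal group sizes, the large-sample approximation for the Welch test simply replaces the pooled 2s² of the Student formula with s1² + s2². The following is a plan-only sketch assuming a two-sided z approximation (a t-based correction would add a few subjects per group); the function name and defaults are mine, not from any package:

```python
import math

def welch_n_per_group(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided Welch
    test with equal group sizes: n = (z_a + z_b)^2 (sd1^2 + sd2^2) / delta^2."""
    def z_quantile(p):
        # standard normal quantile via bisection on the CDF (erf-based)
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    za = z_quantile(1 - alpha / 2)   # ~1.96 for alpha = .05
    zb = z_quantile(power)           # ~0.84 for power = .80
    return math.ceil((za + zb) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2)
```

With delta = 0.5 and both SDs equal to 1 this gives 63 per group, close to the familiar 64 from Student-based tables, which illustrates why the two plans are usually similar.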
Jorge Ortiz
• asked a question related to Power Analyses
Question
I am planning to measure response time and accuracy in reaction to visually presented words. My design is 2x2 with respect to fixed factors, and I have about 5-6 additional random factors.
I plan to analyze the responses using linear mixed effects models (for accuracy data I will use a generalized mixed model).
My concerns are regarding stimulus selection and sample size.
Are there any tools/guidelines for computing the number of stimuli in each condition and the number of subjects needed in order to achieve a certain power level for the two-way interaction of the fixed factors? (suppose I have some estimate about effect size, and assuming alpha = 0.05).
Here is a recent paper that could help
• asked a question related to Power Analyses
Question
How do I determine the number of participants needed to achieve X power, with a small effect size (assuming f2 of .02), for a multivariate regression?
As far as I can tell G*Power can only do this for univariate regression. Is there some modification I can make to its output? Is there another program I can use?
Note that I have no data in hand, I am only going off of the assumption that the effect size is small.
There is a distinction between multiple and multivariate regression. For instance, a regression analysis with one dependent variable and 8 independent variables is NOT a multivariate regression; it's a multiple regression. On the other hand, when you're jointly modeling the variation in multiple response variables, that is multivariate regression modeling.
• asked a question related to Power Analyses
Question
How to do power analyses for repeated measures designs with MORE THAN ONE within-subject or between-subject factor? For example, a 2*3 repeated measures design with two within-subject factors.
It seems that the current version of G*Power (3.1.9.2) is not appropriate to do so? Any other solutions?
Specifically, I have a 2*3 repeated measures design with two within-subject factors, and I want to do a prior power analysis to determine the sample size. If I expect to achieve a 0.80 power level with an (assumed) medium effect size (f = 0.25), how many participants do I need?
Many thanks.
The general problem is accounting for the fact that within-subject non-independence (= correlations = clustering of observations) reduces the effective sample size. The actual sample size has to be boosted to adjust for this effect. There are two main approaches:
(1) Analytical. This means using the standard power analysis equations in combination with an adjustment factor (the design effect https://en.wikipedia.org/wiki/Design_effect ) that takes account for the degree of within-subject correlation and the number of observations per subject.
(2) Simulation. This approach is more difficult to implement (usually some programming will be required -- but perhaps other answers will identify more accessible simulation based methods), but can deal with any level of complexity (see two example articles and a tutorial below -- CoI: I co-authored two of them). The basic approach is to simulate multiple (100s or 1000s) random data sets where the alternative hypothesis is true, with a given effect size, then test the null hypothesis in each data set. The proportion of data sets where the null hypothesis is rejected is an estimate of power. The assumptions that generated the data set (e.g. sample size) can then be adjusted until the desired power is achieved.
Either approach (especially 1) will require simplifying assumptions (e.g., it will be easier to treat the two factors you're interested in independently when using approach 1). This is fine -- power analysis always involves a great deal of simplification, and often you can argue that the simplification is conservative (it increases the required sample size), so it at least errs on the side of over-powering.
To sum up, my advice would be to use simulation if you're comfortable with it, otherwise simplify the problem and use approach 1.
A final point: I strongly discourage using general predefined effect size definitions like "medium effect size (f = 0.25)". To quote the first article listed below: "the study should be powered to detect ... the smallest effect that would be considered biologically meaningful. In other words, the study should be sufficiently sensitive to detect the smallest effect that, in the judgement of the researcher, is worth detecting." Effect size is highly context-specific, and in my experience it's always possible for a researcher with knowledge of the specific field to define an effect that is worth detecting.
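The simulation recipe in approach (2) can be illustrated with a deliberately simple case: a two-sample comparison of means with known unit variances, tested with a z test. Everything here (names, settings, the z test itself) is a stand-in for whatever model you would actually fit:

```python
import math, random

def simulated_power(n_per_group, effect_size, n_sims=2000, alpha=0.05, seed=1):
    """Monte Carlo power: simulate data under the alternative many times,
    test the null each time, and return the proportion of rejections."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(2.0 / n_per_group)  # known unit variances assumed
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        if p < alpha:
            rejections += 1
    return rejections / n_sims

# With n = 64 per group and d = 0.5 the estimate should land near .80.
power = simulated_power(64, 0.5)
```

Generating each simulated data set from the repeated-measures model you actually plan to fit, and testing it with that same procedure, turns this skeleton into a power analysis for your design.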
--------------------
Simulation-based power analysis for clustered data using R:
• asked a question related to Power Analyses
Question
For G*Power users: when should one use chi-square and when the z test as the test family in an a priori sample size calculation?
Another question:
My primary outcome is nominal data (clinical cure): I count the number of patients cured and then get the % of patients cured in each group, and I have two groups (test and standard therapy). Previous trials in this discipline also used non-parametric analysis for their primary endpoint (which is likewise clinical cure). Can I use chi-square in the a priori sample size calculation, and also use it in a post hoc power analysis? (The primary outcome was non-normally distributed.)
From a related Cross Validated thread:
Are different p-values for chi-squared and z test expected for testing difference in proportions?
I'm trying to test the difference in proportions using the z test method and chi-squared method, but am getting very different answers. Is that normal?
My data:
        CI    CII
Male    205   102
Female   83    39
Calculating the z score I get 0.25, which should correspond to a p-value of 0.4013. Calculating the chi-squared score I get 0.0626, corresponding to a p-value of 0.8025.
I read that the z-score requires some assumptions (probability of success is ~0.5 and n is high). Is this violating those? Or is it just the nature of these different approaches that gives very different answers with the same meaning (no evidence of difference).
I'm certainly open to miscalculations, but I've re-checked. If this behaviour isn't normal I'll recheck again!
Here are my calculations in R.
> r1 <- 205
> r2 <- 102
> n1 <- 288
> n2 <- 141
> (p1 <- r1/n1)
[1] 0.7118056
> (p2 <- r2/n2)
[1] 0.7234043
> (common.proportion <- (r1+r2)/(n1+n2))
[1] 0.7156177
> (se.pooled <- sqrt(common.proportion*(1-common.proportion)*(1/n1+1/n2)))
[1] 0.0463676
> (zscore <- (p1-p2)/se.pooled)
[1] -0.2501466
>
> # chi-squared
> prop.test(c(205,102), c(288,141), correct = FALSE)
2-sample test for equality of proportions without continuity
correction
data: c(205, 102) out of c(288, 141)
X-squared = 0.0626, df = 1, p-value = 0.8025
alternative hypothesis: two.sided
95 percent confidence interval:
-0.10208385 0.07888645
sample estimates:
prop 1 prop 2
0.7118056 0.7234043
asked by Tom, edited Mar 13 '15 by gung
A related issue. If I'm presenting confidence intervals around each proportion calculated using the binomial distribution, but then comparing them and presenting a p-value using chi-squared, that seems a bit wrong. I might have overlapping CIs, but then a p-value that's less than 0.05. Am I thinking correctly? – Tom Mar 13 '15 at 4:29
Tom, can you show your math? These two tests should give very similar results for your sample sizes (especially if making the same choice about continuity corrections). – Alexis Mar 13 '15 at 5:19
Thanks, @Alexis. I've added the math in now. – Tom Mar 13 '15 at 7:24
Accepted answer:
Very simple: both the z test and the contingency-table χ² test are two-tailed tests, but you have got the one-sided p-value for your z test statistic. That is, for H₀: p₁ − p₂ = 0, the p-value = P(|Z| ≥ |z|), but your reported p-value is only P(Z ≤ z).
Notice that 0.4013 × 2 ≈ 0.8025. Easy!
– Alexis, edited Mar 13 '15 at 19:05
And the square of the Z-score is (−0.2501466)² = 0.06257, which equals the test statistic X-squared from the prop.test() output. – Karl Ove Hufthammer Mar 13 '15 at 18:55
Thank you for that "easy" answer! (Which prompted a good schooling on one-tailed tests.) And now @KarlOveHufthammer you're sending me down another schooling. If X-squared is simply the z score squared, why do we even have it? And why isn't it called z-squared? (Obviously, not to be answered here. I have a lot to learn!) – Tom Mar 16 '15 at 6:29
@Tom The reason that it's called X-squared in the R output is that the X is really an ASCII interpretation of the Greek capital letter chi (Χ) (the lowercase version of this letter looks like this: χ). And it's a chi-squared test, which is used in lots of other situations. That said, the test(s) should never be used for comparing binomial proportions, as they have terrible statistical properties. See stats.stackexchange.com/questions/82720/… – Karl Ove Hufthammer Mar 16 '15 at 16:42
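The answer and comments above are easy to verify numerically with the thread's own counts; this small stdlib-only sketch recomputes the z score, squares it to get the X-squared statistic, and takes the two-sided p-value from the normal CDF via erf:

```python
import math

# Two-sample proportions from the thread: 205/288 (CI) vs 102/141 (CII).
r1, n1, r2, n2 = 205, 288, 102, 141
p1, p2 = r1 / n1, r2 / n2
pc = (r1 + r2) / (n1 + n2)                         # pooled proportion
se = math.sqrt(pc * (1 - pc) * (1 / n1 + 1 / n2))  # pooled standard error
z = (p1 - p2) / se                                 # about -0.2501
chi2 = z ** 2                                      # about 0.0626 = X-squared
# two-sided p-value: P(|Z| >= |z|) = 2 * (1 - Phi(|z|)), about 0.8025
p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```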

• asked a question related to Power Analyses
Question
K = M / √(Lp · Ls). How do I satisfy this equation?
Not possible for 20 metres; 20 mm is possible.
• asked a question related to Power Analyses
Question
Hello,
I want to find the losses (switching and conduction losses) of a MOSFET and its diode. For the MOSFET (the positive half cycle of the current) I am using these equations: switching losses E = (Voff · Ion / 6) · (ton + toff), where ton = td(on) + tr and toff = td(off) + tf from the datasheet; and for the conduction losses, Econd = Ion² · Rds(on) · Ton.
For the diode (the negative half cycle of the current) I am using these equations: switching losses E = (Voff · Ion / 6) · (ton + toff), where ton = td(on) + tr and toff = td(off) + tf + trr (reverse recovery time) from the datasheet; and for the conduction losses, Econd = Vsd · Ion · Ton, where Vsd is the forward voltage of the diode from the datasheet.
My questions are:
1) Am I following the right equations?
2) For the diode losses calculation , the energy losses is negative because of the negative current, so to calculate the total losses of mosfet+diode , should I take the absolute value or the negative one ?
3) For SiC MOSFET , should I consider the reverse recovery time (trr) to calculate the diode losses ?
The calculation of switching losses and conduction losses is well presented in the attached file. I hope you will find the solution there.
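For what it's worth, the equations quoted in the question can be wrapped into a small sketch. This simply transcribes the formulas as written (the /6 factor and the symbol names come from the question), and the numbers in the example call are placeholders, not values from any datasheet:

```python
# Loss equations as quoted in the question (per switching/conduction event).
def switching_energy(v_off, i_on, t_on, t_off):
    # E_sw = (V_off * I_on / 6) * (t_on + t_off)
    return v_off * i_on / 6.0 * (t_on + t_off)

def mosfet_conduction_energy(i_on, r_ds_on, t_cond):
    # E_cond = I_on^2 * R_DS(on) * T_on  (squaring makes this positive anyway)
    return i_on ** 2 * r_ds_on * t_cond

def diode_conduction_energy(v_sd, i_on, t_cond):
    # E_cond = V_SD * |I_on| * T_on; taking |I_on| keeps the energy positive
    # for the negative half cycle, which addresses question 2.
    return v_sd * abs(i_on) * t_cond

# Placeholder numbers only: 400 V off-state, 10 A, 50 ns + 100 ns transitions.
E_sw = switching_energy(400.0, 10.0, 50e-9, 100e-9)
```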
• asked a question related to Power Analyses
Question
Hello,
I have implemented my design, a network-on-chip, in ISE, and now I am trying to get the dynamic power. I have written a Verilog test fixture for my design and generated a VCD file for it. But when I use XPA, the dynamic power seems wrong. I attach my test fixture.
Please tell me if the VCD file generation is wrong or incomplete. I should mention that my simulation does not exercise all of the switches in the network.
Dynamic power figures from FPGA tools are only estimates, and in practice they can be quite inaccurate. For a better estimate, go through the Synopsys ASIC flow.
• asked a question related to Power Analyses
Question
Hello all,

I am trying to conduct a power analysis (via Monte Carlo simulation) to see how large a sample I would need to collect for a study. The hypothesized model is Hayes' PROCESS model 21 (attached below; moderated mediation, with the "a" and "b" path moderated by separate moderators).

I was wondering if there was any guidance on the best way to do so via Mplus or R (either resources or syntax - I have read previous Preacher papers but still find myself having difficulty conducting such an analysis). In doing some research, these two software packages seem the most appropriate, but any other guidance would be welcome.

Thank you all so much!
Monte Carlo methods can be used to estimate statistical power for survival analyses, and simple computer programs have been presented to illustrate this approach. The Monte Carlo approach provides a way to perform power calculations for a wide range of study conditions, and should simplify the task of calculating power for survival analyses, particularly in epidemiologic research on occupational cohorts.
• asked a question related to Power Analyses
Question
If we are using data loggers to measure the voltage, current, and power of an on-site industrial motor (while the motor is running), and obtain these three parameters by specific calculations, how long and how often should we take our measurements from the loggers to get a representative value? And what will be the accuracy of the measurements obtained?
Dear Asha,
this is a complex question.
First we need to know how continuous the operation of the motor is. Is it running at constant load, 24/7? Is it stopped and started frequently? Is it operating over a wide range of loads (e.g., a pump with a valve for flow control, or a fan with a damper)? So I would first check how continuous operation is, and then decide so that we cover a good sample of operating conditions. I would start by logging consumption for three to five days, unless we already know more about the variation of operating conditions, and analyse it to determine the actual measurements and to have a good basis for calculating consumption.
Accuracy will depend on the device (see data sheet) and on how accurately our sample matches real operations.
does this help?
best regards
Johannes
• asked a question related to Power Analyses
Question
I am conducting a mixed-method research on the sexual health of people with physical disabilities, and I wanted to address some questions regarding the "disableism" or social discrimination regarding the sexuality of disabled people. I am wondering if there is any survey on sexual discrimination that focuses on issues regarding disabled people? Thank you in advance.
The Center for Relationships and Sexual Education may have one. Here is their website.
• asked a question related to Power Analyses
Question
What are the main differences between a thermoelectric module used as a Peltier module and a thermoelectric module used as a thermoelectric generator?
The generator works on the Seebeck effect: it generates a voltage when a temperature difference exists. A Peltier module works on the Peltier effect, which is the opposite of the Seebeck effect, i.e., a voltage difference across the two surfaces creates a temperature difference.
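To make the contrast concrete, here is a toy numerical sketch of the two operating modes. The coefficient, temperatures, and current below are made-up illustrative values, not data for any specific module:

```python
# Generator (Seebeck) mode: open-circuit voltage V = S * dT.
S = 0.05           # module Seebeck coefficient, V/K (assumed value)
dT = 40.0          # temperature difference across the module, K
V_seebeck = S * dT            # 0.05 * 40 = 2.0 V

# Peltier mode: heat pumped Q = Pi * I, with Peltier coefficient Pi = S * T.
T = 300.0          # absolute junction temperature, K
I = 2.0            # drive current, A
Q_peltier = S * T * I         # 0.05 * 300 * 2 = 30.0 W
```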
• asked a question related to Power Analyses
Question
I am conducting research on the health hazards and injuries of coastal fishing in Kombo South, The Gambia. The total number of fishermen in this area is 7,000, and I need a sample size that is representative, valid, and reliable for this population, so that I can test for a correlation: injuries are more frequent among the least experienced fishermen.
Samba -
First, sample sizes are based on a standard error goal, which is often expressed in a confidence interval, or it could be on power to go with your 'significance' level.  A p-value is driven by sample size, so it is easier to get a small one, if your sample size is large, regardless as to effect size.  You have to be careful of what this means.
You need to consider "effect size."
I suggest not using hypothesis tests.
The sample size 'formulas' you likely see on the internet are for sampling for proportions from an infinite population, with simple random sampling, and for the worst case, where p=q=0.5. This may not be of use to you.  For each variable/question you will need to use (1) a method compatible with that type of data - often continuous or perhaps yes/no, (2) what sample design is best - stratification often lowering sample size needs, whereas clustering may be substantially more convenient, (3) and very importantly, the standard deviation of the population (or in each stratum).  Number 3 may require a pilot study.
Also, because you are investigating a finite population, you may need a finite population correction (fpc) factor. (I have a definition on my RG page that I did for a Sage Pub encyclopedia.)
There are chapters on sample size in the following two survey sampling textbooks, and more information in many others you could find:
Cochran, W.G. (1977), Sampling Techniques, 3rd ed., John Wiley & Sons
and
Blair, E. and Blair, J. (2015), Applied Survey Sampling, Sage Publications.
In Cochran, he explains sample size in a chapter for simple random sampling, and in succeeding chapters, he relates information on other designs and the impacts.
Unless you are sampling for proportions and have a small sample compared to the population size, for simple random sampling, and the proportion is expected to be near 0.5, there isn't a one-size-fits-all formula solution.  It can get complicated. You may need a pilot study to judge this, as well as test a questionnaire, logistics, and/or whatever considerations you might have.
Note that if you want to look at various levels of experience, you might stratify, but that isn't exactly stratification because you really want to know information for each group, not just get better overall (aggregate) estimates. You would effectively be comparing two or more populations.  You need a good estimate as to how many are in each of those subpopulations.  And/or you might do regressions of variables of interest versus age, but age isn't quite a continuous variable, and things could get messy.  So unless you are happy to compare a few age groups, this may be very involved.  (I have done a lot with continuous data regressions, but am not familiar with the other kinds.)  If you look at a number of age-group subpopulations and investigate each, you may need to consider the fpc in each group, just as in stratification.
I urge you to decide ahead of time exactly what you will do with the data, and what design(s) you are using, considering every variable of importance to you.  People sometimes may gather data and then wonder what to do with it.  You need to plan first.  Asking about your sample size needs was a good step.
Cheers - Jim
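As a concrete illustration of the points above: the common internet formula Jim mentions (simple random sampling, worst case p = q = 0.5) combined with the finite population correction for N = 7000 looks like this. The 5% margin of error is purely an assumption for the example, not a recommendation:

```python
import math

def n_proportion(e, p=0.5, z=1.96):
    """Sample size for estimating a proportion with margin of error e at
    ~95% confidence (z = 1.96), assuming an infinite population."""
    return z ** 2 * p * (1 - p) / e ** 2

def with_fpc(n0, population):
    """Apply the finite population correction to an infinite-population n."""
    return n0 / (1 + (n0 - 1) / population)

n0 = n_proportion(0.05)            # about 384.2 before the correction
n = math.ceil(with_fpc(n0, 7000))  # about 365 for N = 7000 fishermen
```

So roughly 365 fishermen for a ±5% margin on a worst-case proportion; for subgroup comparisons or continuous variables, the considerations in the answer above apply instead.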
• asked a question related to Power Analyses
Question
Model image attached. Many thanks.
Let me add my two cents to the very good responses above. PROCESS uses traditional ordinary least squares (OLS) for the regressions at steps 1 and 2 of your model, and consequently you can check the power of each step using G*Power, as you would for a normal multivariate/hierarchical regression. However, in order to test the conditional indirect effect (moderated mediation), PROCESS uses bootstrapping to construct confidence intervals. Bootstrapping is a type of Monte Carlo simulation, which amounts to generating data as previous respondents noted. Consequently, if you set the bootstrapped samples to more than 1000, you are very unlikely to have power issues. Hope this helps, and good luck with your research.
• asked a question related to Power Analyses
Question
My research proposes a quantitative approach using data from previous longitudinal observational studies to answer the following questions.
(sample size is approximately 300 participants)
1. What is the prevalence of a disease?
2. What clinical signs and symptoms present in these patients are associated with this disease?
3. What risk factors were present in those who were diagnosed with this disease?
I don't get it. As I see this, there is not any need to perform any test or to calculate any power to answer the three questions.
Q3 is a bit strange. A good diagnosis should depend on risk factors. If this is the case, the presence of the risk factors is not independent of the diagnosis, and the answer of such a question will be quite meaningless.
• asked a question related to Power Analyses
Question
We are designing a comparative analysis between an infected group and a healthy group, but we are unsure how many patients we would need for a meaningful comparison.
As our friends mentioned, many factors are involved in deciding on your sample size. But to keep it simple, if I were to suggest a number without knowing the details of your data, I would say you need at least 15 samples per group, so that the degrees of freedom in the comparison would be n1 + n2 − 2 = 28. You can then use the normal approximation if you have information about your population parameters. So you need at least 15 samples per group; clearly, if you are able to collect more samples within your time and budget, you can make a more accurate comparison. Good luck :)
• asked a question related to Power Analyses
Question
The case series in question will be plotting non-validated measures (idiosyncratic ratings on a scale of 1-10) and using visual inspection and reliable change indices as methods of analysis.
• asked a question related to Power Analyses
Question
The findings from one of our previous studies showed that the indicators were too high compared to other literature. Thus we planned to carry out a small-scale study to validate this finding. It is a cross-sectional study.
Deepak -
If you are looking at continuous data in a cross sectional survey, and wish to re-survey a subset of those surveyed previously, for the same information, to validate the previous results, you might try the following:  For a given item on your survey (doing this for each key item),  let x_i be the response in your first survey, and y_i be the corresponding response in the validation survey.  (Going to the same source, however, you may just automatically be given the same response, and you won't learn anything. However, this might work for a case such as a US federal audit of a subset of State audits, for example.)  If you plot those (xi, yi) data points on a scatterplot, you can see if a simple linear regression has a slope of one, with regression to the origin.  Such a slope of unity would mean you are not getting changed results. However, if you are correct that the x's tend to be larger than the y's, the slope will be less than one.  Your sample size would be determined by what it takes (assuming you can get good data) to make the standard error for your slope acceptably small.  If you go to page 10, section 5, Option # 1 in
you will see a set of six lines of equations - with more below that. Those equations involve a robust estimator for the slope b, and also the estimated total for a finite population. Just consider the second, fourth, and fifth lines, which look at b, its estimated variance, and the underlying sigma for the random factors of the estimated residuals.  You can obtain these results, for example, from SAS PROC REG,  if you let regression weight w be 1/x.  You will need a preliminary new sample to estimate sigma (fifth line there) for any given item.  Then you can estimate sample size needed to obtain whatever you deem an acceptable estimated standard error of b by using the variance estimate (fourth line).  - Notice that on the second line I accidentally left off a subscript from an x.  Oops!

If, however, you are doing a validation survey using a different sample, you could, for a given data item, find the estimated mean and its standard error in the survey to be validated, do the same for the results of the new survey, and find a confidence interval around the difference of the two means. Again you would have to obtain results for a small preliminary set of new data (the previous set already being established), and then do the algebra to see what (complete) new sample size is needed to obtain a suitable confidence interval around the difference in those means for that data item. - Once again, for a given data item, you are getting an estimate of a constant sigma, but this time for the y data, not involving regression residuals, and then estimating the sample size needed to hopefully attain the accuracy you specify.
I hope this might be relevant enough to give you a few ideas.
Cheers - Jim
PS - There may be some standard methods to handle this in your field, but these are the ideas that occurred to me.
• asked a question related to Power Analyses
Question
The study is designed to measure executive function in children in 3 consecutive years. The children will come into the lab annually with their parent. Since executive function is itself a highly complex phenotype, a battery of standardized tests will be used.
Dear Beatrice
Great answer, thank you very much! Do you have guidance about the best statistical approaches for discriminating trajectories?
• asked a question related to Power Analyses
Question
Hi all,
I was wondering if anyone could advise on a calculator or formula to retrospectively calculate the power of a study that uses ROC/AUC analyses? Unfortunately, the calculators I have seen thus far are for a priori analyses.
I understand some researchers advise against post hoc power analyses, but I am nonetheless interested in any calculators/formulas people may be aware of.
Many thanks
You can count me among the people advising against post-hoc power analyses. Such analyses are nonsensical. But here is a pointer, nonetheless ;) :
The R package "pROC" can do this.
This is an add-on package for R (just in case you don't know it: a very powerful, useful, and free statistics software).
• asked a question related to Power Analyses
Question
I am learning about Metabolomics correlation (to disease/biomarkers) based experiments. However, such studies perform untargeted metabolomics to identify novel metabolite biomarkers and I am not clear on how to conduct a power analysis for such an experiment. I did a thorough review of literature and was unable to find much details. Any help will be appreciated.
You can have a look at this paper:
"Data-Driven Sample Size Determination for Metabolic Phenotyping Studies" by Benjamin J. Blaise et al., in Analytical Chemistry.
• asked a question related to Power Analyses
Question
I'm going to study the expression of a particular micro-RNA in tumor samples in comparison to normal tissue. We obtain gastric tumor and normal samples from the same patients. How many samples do we need for our results to be valid for a differential expression study and for investigating the micro-RNA of interest as a biomarker? Is there any authoritative guideline or article as a reference? Thanks for your help.
It depends on the power of the study, the confidence level (usually 95%), and previous results from a similar study (if none exist, just do a pilot with 20 samples and use the results to calculate the required sample size).
• asked a question related to Power Analyses
Question
I am mainly having problems with the series APF.
Please see the attached file it may be useful to you.
• asked a question related to Power Analyses
Question
Trying to set up some power analysis for infection status with nested factors.
• asked a question related to Power Analyses
Question
I conducted a multiplatform metabolomic approach (fingerprinting by CE-MS, GC-MS, LC-MS and NMR) with blood (n=74) and urine (n=27) samples collected from dogs. I used the samples I could get (they were very hard to obtain), but I was asked to calculate the ideal number of participants my study should have, to check whether the "n" I used was sufficient for both types of samples.
Any idea will be most helpful.
Yours sincerely
Mariana Santos
The number of participants in a study also (or mainly?) depends on the research goals. If that is classification and you are interested in e.g. discriminating metabolites, the effect size between the two groups mainly determine the need for many participants. If the effects are really big, you can do without many participants. If you are after smaller effects, you may need to increase your sample size. If your goal is prediction models (like in regression and the prediction of traits), you want to use samples that contain the necessary variation / describe the relation between metabolites and traits. For this it will be important that your participants span the range of all the 'natural' variation. Adding  more participants that describe the same variation will not really improve the results of your study.
• asked a question related to Power Analyses
Question
Suppose we have an EEG study with 3 groups. The recorded numbers of participants for each group were 7, 3, and 7, respectively. The reviewer correctly claimed that this is not enough to study the differences among the groups with statistical confidence. I quote: "aiming for a medium effect size of Cohen's f = 0.25 (α = 0.05) and a power of 0.85 for condition x group interaction effects, one would need a total sample size of 48 subjects, thus at least 16 subjects per group." Which equation led to that conclusion? Suppose we reduce the number of groups to two. How would that change the minimum total number of subjects?
Dear Lukás,
in general, I pretty much agree with Lo J Bour's comment. However, you might also take into consideration looking at confidence intervals rather than following the "standard" concept of power analysis. See, e.g.,
Hoenig, J.M., & Heisey, D.M. (2001). The Abuse of Power. The American Statistician, 55(1), 19-24.
Also, it might be useful to consider a priori what effect size would actually be of practical relevance in the context of your research problem, and then calculate the sample sizes necessary to test the significance of group mean differences representing that specific effect size. In this context, the following article might also be of interest:
Fritz, C.O., Morris, P., & Richler, J. (2012). Effect Size Estimates: Current Use, Calculations, and Interpretation. Journal of Experimental Psychology: General, 141(1), 2-18.
Regards, Klaus Blischke
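For what it's worth, the between-groups analogue of the reviewer's calculation can be sketched with statsmodels' FTestAnovaPower. This will not reproduce the reviewer's figure of 48 exactly: that number presumably comes from a repeated-measures (condition x group) interaction test, as in G*Power, which gains power from the within-subject factor. The sketch below only shows the simpler one-way case, and how the required total N changes when the number of groups drops from 3 to 2:

```python
# Total N for a one-way ANOVA with Cohen's f = 0.25, alpha = .05,
# power = .85, for 3 groups and for 2 groups.
from statsmodels.stats.power import FTestAnovaPower

anova_power = FTestAnovaPower()

for k in (3, 2):
    n_total = anova_power.solve_power(effect_size=0.25, k_groups=k,
                                      alpha=0.05, power=0.85)
    print(f"{k} groups: total N of about {n_total:.0f}")
```

With two groups the F-test has one numerator degree of freedom (it reduces to a t-test with d = 2f = 0.5), so the required total N is somewhat smaller at the same f.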
• asked a question related to Power Analyses
Question
How can I generate dynamic power and switching power reports using the Synopsys Design Compiler or PrimeTime tools?
The question is a bit confusing. PrimeTime is a timing analysis tool and will not report power. I suppose you want to influence the power estimation carried out by Design Compiler by specifying the switching activity at the inputs. Please check the Design Compiler manual on how you might be able to do this.
The report statements provided in the other answer have nothing to do with power estimation.
• asked a question related to Power Analyses
Question
Building RTL Power Models in RTL Compiler
It is always good to build a power model before performing synthesis.
• asked a question related to Power Analyses
Question
In behavioral science, what should the effect size be when calculating power and determining sample size? Cohen, in his 1992 "quantitative methods in psychology" article and in his book Statistical Power Analysis for the Behavioral Sciences (1988), suggested three classifications: small (.1), medium (.3), and large (.5). Which one do you think is more appropriate, and what if I go for the medium effect size?
I'll second Dr. Karns in saying that you may be able to find effect sizes in the literature pertaining to your study, but this is not always as trivial as it may seem at face value.
In ecology, for example, effect sizes are rarely reported in published studies and variability in reported effect sizes can often make it difficult to establish a single effect size from the literature. Furthermore, reported effect sizes may not actually be relevant to the size of effect that you want/need to detect to answer your particular question. Let's say, for example, that I want to answer the question: "Does X pollutant have a negative influence on clam populations in X estuary?" (I study clams). If I go to the literature and find an average population decrease (due to pollutants) of 10% and simply use that as my effect size to determine the various parameters of my study design, I may not actually be able to detect a MEANINGFUL effect to answer my question. Remember, I calculated an average effect of 10% from the literature, but that doesn't mean that if I find a 10% decrease in my study that I have detected a meaningful impact of the pollutant in question. You have to consider the meaning of that effect size as well; in this case, I wouldn't consider 10% to be very meaningful.
If you can find effect sizes in the literature, that's great; but it's important to keep in mind that you may not be able to, or, if you are able to, you may have to do some thinking about whether or not the reported effect size(s) are meaningful to your question. This doesn't mean that you can arbitrarily assign an effect size simply because you think it's meaningful; you have to dive a bit deeper into the literature to tease this out. So not only should you scope the literature for reported effect sizes, you should also look for what other researchers have considered meaningful effect sizes. This may not be a problem in the behavioural sciences, but it certainly is in ecology.
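One caveat on the conventions mentioned in the question: the values .1/.3/.5 are Cohen's benchmarks for correlation coefficients; for standardized mean differences (d) the conventional benchmarks are .2/.5/.8. As an illustrative sketch (Python; the Fisher z-transformation used here is a standard textbook approximation for correlation power, not the only option), here is the sample size needed to detect a correlation of each conventional size at alpha = .05 and power = .80:

```python
# Approximate n to detect a correlation r (two-sided test), based on the
# Fisher z-transformation: n = ((z_alpha + z_beta) / atanh(r))^2 + 3.
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    z_b = norm.ppf(power)           # quantile for the desired power
    return math.ceil(((z_a + z_b) / math.atanh(r)) ** 2 + 3)

for r in (0.1, 0.3, 0.5):  # Cohen's small, medium, large for correlations
    print(f"r = {r}: n of about {n_for_correlation(r)}")
```

Note how quickly n grows as the benchmark shrinks; picking "medium" by default can therefore badly under- or over-size a study relative to the effect that would actually matter.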
• asked a question related to Power Analyses
Question
I am hoping to connect with people who are familiar with or working through power calculations for negative binomial and ZINB models. I have not found any software with power analysis calculators that can accommodate NB and ZINB regressions. I am comfortable with the theory and application but my math background is not strong enough to write functions for the power calculation. Suggestions for relevant literature and R code greatly appreciated.
The simulation is simple in R.
First you generate some random data for selected parameter values and effect sizes, then you perform the desired test and note whether or not it is classified as significant. These steps have to be repeated very often, and then the power is calculated as the percentage of significant results among these simulations.
This may be repeated for different values for the parameters and effect sizes to create a power curve depending on parameter values and/or effect sizes.
Example for finding a difference in the mean (mu) of a NB distribution:
require(MASS)
calcPower = function(n1, n2, mu1, mu2, size1, size2, rep=1000) {
  grp = factor(rep(1:2, c(n1, n2)))
  # one row per simulated data set: n1 values from group 1, n2 from group 2
  y = matrix(
    c(
      rnbinom(rep*n1, size=size1, mu=mu1),
      rnbinom(rep*n2, size=size2, mu=mu2) ),
    ncol=n1+n2, nrow=rep, byrow=FALSE )
  p = apply(y, 1, function(x) {
    m = glm.nb(x ~ grp)
    anova(m)[,"Pr(>Chi)"][2]  # p-value for the group effect
  })
  mean(p < 0.05)  # proportion of significant results = estimated power
}
# getting the power for a difference in mu:
> calcPower(10,10,10,15,100,100)
[1] 0.826
# getting the power for a series of mu2 values:
> sapply (13:16, function(mu2) calcPower(10,10,10,mu2,100,100))
[1] 0.482 0.689 0.839 0.947
• asked a question related to Power Analyses
Question
I can't afford to buy an Energy Analyzer. I am planning to record the energy consumed by Milling operation when different parameters are used to machine and find out the variation. There is no Energy Analyzer available here. Any other way to measure the power consumed and verify that?
Hi Roberto, I don't know whether this machine supports this. I need to ask the technician. I appreciate your help with this.
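If only a clamp-on ammeter is available, one rough alternative is to measure the line current and estimate the active power of the (assumed balanced) three-phase spindle drive. This is only a sketch: the voltage and power factor below are illustrative assumptions, and a true wattmeter (or the drive's own power readout) would be more accurate:

```python
# Active power of a balanced three-phase load: P = sqrt(3) * V_L * I_L * pf.
import math

def three_phase_power(v_line, i_line, power_factor):
    """Active power in watts from line voltage, line current, and pf."""
    return math.sqrt(3) * v_line * i_line * power_factor

# e.g. a 400 V machine drawing 6 A at an assumed power factor of 0.85
print(round(three_phase_power(400, 6.0, 0.85)), "W")
```

Multiplying the estimated power by the machining time for each parameter set then gives an energy figure that can at least be compared across cutting conditions.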
• asked a question related to Power Analyses
Question
In practice, can water be used to transfer voltage and current?
Yes, and its efficiency depends upon the type of water, i.e., hard or soft water.
• asked a question related to Power Analyses
Question
I want an article about Automatic Generation control in economic load dispatch.
• asked a question related to Power Analyses
Question
I need some basic ideas on designing a WPT system using simulation tools, and I also want to know how to calculate the parasitic inductance, capacitance, and resistance of the coil, either by using a formula or a network analyzer.
Dear Surjit Das Burman, Use MATLAB SIMULINK.
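For the coil inductance specifically, a common hand calculation is Wheeler's approximation for a single-layer air-core solenoid (the function name and example dimensions below are illustrative). Parasitic capacitance and AC resistance (skin and proximity effects) are harder to capture with simple formulas and are usually best measured with a network analyzer or LCR meter:

```python
# Wheeler's approximation for a single-layer air-core coil:
# L (uH) = r^2 * N^2 / (9r + 10l), with radius r and length l in inches.
def wheeler_inductance_uH(n_turns, radius_in, length_in):
    return (radius_in ** 2 * n_turns ** 2) / (9 * radius_in + 10 * length_in)

# e.g. 10 turns on a 1-inch-radius, 1-inch-long former
print(wheeler_inductance_uH(10, 1.0, 1.0), "uH")
```

The approximation is generally quoted as accurate to a few percent when the coil length is comparable to or larger than its radius.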
• asked a question related to Power Analyses
Question
I am not too familiar with power analysis techniques. Is my understanding correct that the power values typically reported in EEG research papers are the power AVERAGED across a segment of the waveform/EEG data?
You can do power analysis by averaging over some segment of the waveform using an FFT. This will give you power for whatever frequency bins you choose. However, it is generally considered better to analyze the power using something like wavelet analysis which lets you look at power and still keep the time axis. Using wavelets you end up with a plot where the Y-axis is frequency, X-axis is time, and Z-axis is power. I have used EEGLab in the past to do this type of analysis. It's available free. They have excellent documentation and a very good online community for support. Good luck.
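A minimal sketch of the FFT-style band-power estimate described above (Python, using scipy's Welch estimator on a synthetic signal; the sampling rate, segment length, and band limits are illustrative assumptions):

```python
# Estimate alpha-band (8-12 Hz) power from a segment via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1 / fs)       # a 4-second segment
rng = np.random.default_rng(0)
# synthetic "EEG": a 10 Hz alpha rhythm buried in white noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(x, fs=fs, nperseg=512)   # averaged periodogram
band = (freqs >= 8) & (freqs <= 12)         # alpha band bins
alpha_power = psd[band].sum() * (freqs[1] - freqs[0])
print(f"alpha-band power: {alpha_power:.3f}")
```

The wavelet approach mentioned above (e.g. in EEGLab) replaces this single spectrum with a time-frequency map, trading some frequency resolution for the ability to see when in the segment the power occurs.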
• asked a question related to Power Analyses
Question