Sample size planning for the standardized mean difference: Accuracy in parameter estimation via narrow confidence intervals

Indiana University Bloomington, Bloomington, Indiana, United States
Psychological Methods (Impact Factor: 4.45). 2006; 11(4):363-385. DOI: 10.1037/1082-989X.11.4.363
Source: PubMed


Methods are developed for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no wider than the desired width ω with some specified degree of certainty (e.g., 99% certainty that the 95% CI will be no wider than ω). The rationale of the AIPE approach to SS planning is given, as is a discussion of the analytic approach to CI formation for the population standardized mean difference. Tables with values of necessary SS are provided. The methods are easily implemented in MBESS (Methods for the Behavioral, Educational, and Social Sciences; Kelley, 2006a), a freely available package for R (R Development Core Team, 2006).
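
As a concrete illustration of the two planning methods summarized above, here is a minimal sketch assuming the MBESS R package (Kelley, 2006a); the function name ss.aipe.smd and its arguments follow the MBESS documentation, and the input values are illustrative rather than taken from the article:

library(MBESS)

# Method 1: per-group sample size so that the *expected* width of the
# 95% CI for the standardized mean difference (here delta = .50) is
# no more than .40. Inputs are illustrative.
ss.aipe.smd(delta = .50, conf.level = .95, width = .40)

# Method 2: modified sample size so that the obtained CI is no wider
# than .40 with 99% assurance, not merely in expectation.
ss.aipe.smd(delta = .50, conf.level = .95, width = .40, assurance = .99)

Because the CI for the standardized mean difference is based on the noncentral t distribution and widens somewhat with the magnitude of the effect, the necessary sample size for a fixed desired width grows as delta moves away from zero.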

Cited by:
    • "Social-cognitive outcomes of teachers' engagement fit, values between 0.08 and 0.10 suggest marginally approximate fit, and values 40.10 suggest poor fit (O'Boyle and Williams, 2010). We also suggest that the 90 percent confidence interval for RMSEA may be informative, as this measure provides the opportunity to capture the imprecision of the model fit (Kelley and Rauch, 2006; MacCallum et al., 1996). As a comparative fit index, we used the CFI (Bentler, 1990), a sample-independent index that does not assume that all measurement indicators are completely independent. "
    ABSTRACT: Purpose: The purpose of this paper is to investigate Etienne Wenger's theory of social learning in a community of practice by modeling two simultaneous aspects of teachers' collaborative learning: their engagement in close-knit internal groupings and their engagement with colleagues who work externally to the core group. These two learning processes are related to two social-cognitive outcomes: teachers' organizational commitment and their sense of impact. Design/methodology/approach: The study investigated a field sample of 246 individual teachers from ten Finnish primary schools. Hypotheses were developed and tested using multiple regression and structural equation modeling. Findings: The results indicate that local engagement supports teachers' organizational commitment. However, this form of collaborative learning behavior did not support their sense of impact. Moreover, external engagement with trusted colleagues supported sense of impact but not organizational commitment. Research limitations/implications: The study reinforces the importance of teachers' engagement in communities of practice. Specifically, the results suggest two social-cognitive outcomes related to two different learning processes situated in teachers' community of practice. It would be highly valuable to replicate this study in various multi-level settings. Practical implications: The study highlights teachers' engagement in communities of practice as a source of their motivation and commitment. The findings suggest that school leaders should facilitate internal and external learning communities. Originality/value: The study provides empirical evidence regarding the partial relationships between teachers' local and external learning engagement and the social-cognitive outcomes of these forms of learning behaviors.
    Journal of Educational Administration 08/2014; 52(6). DOI:10.1108/JEA-07-2013-0074
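
As a sketch of the fit indices discussed in the excerpt above (not code from the cited study), the lavaan R package reports the RMSEA with its 90% confidence interval alongside the CFI; the two-factor model and the bundled HolzingerSwineford1939 data are purely illustrative:

library(lavaan)

# An illustrative confirmatory factor analysis model.
model <- 'visual  =~ x1 + x2 + x3
          textual =~ x4 + x5 + x6'
fit <- cfa(model, data = HolzingerSwineford1939)

# RMSEA point estimate, its 90% CI, and the CFI.
fitMeasures(fit, c("rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "cfi"))
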
    • "Planning a study by focusing on its power is not equivalent to focusing on its accuracy and can lead to different results and decisions (Kelley & Rausch, 2006). For example, for regression coefficients, precision of a parameter estimate depends on sample size, but it is mostly unaffected by effect size, whereas power is affected by both (Kelley and Maxwell, 2003; Figure 2). "
    ABSTRACT: Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology towards concrete recommendations for improvement. We focus on research practices but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations. The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward.
    European Journal of Personality 03/2013; 27(2):108-119. DOI:10.1002/per.1919 · 3.35 Impact Factor
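
To illustrate the excerpt's point that power is strongly driven by effect size while precision is not, here is a base-R sketch (the values are illustrative, and the standard error of Cohen's d uses the common large-sample approximation):

# Two-group design, n = 100 per group, across small/medium/large effects.
n <- 100
d <- c(0.2, 0.5, 0.8)

# Power changes dramatically with the effect size...
power <- sapply(d, function(delta)
  power.t.test(n = n, delta = delta, sd = 1, sig.level = .05,
               type = "two.sample")$power)

# ...whereas the approximate 95% CI width for d barely moves.
se.d  <- sqrt((n + n) / (n * n) + d^2 / (2 * (n + n)))
width <- 2 * qnorm(.975) * se.d

round(data.frame(d = d, power = power, ci.width = width), 3)

At n = 100 per group, power roughly triples between d = 0.2 and d = 0.5, while the CI width changes only in the second decimal place.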
    • "When planning new research, previously observed effect sizes can be used to calculate power and thereby estimate appropriate sample sizes. Cohen (1988), Keppel and Wickens (2004), and most statistical textbooks provide guidance on calculating power; a very brief, elementary guide appears in the Appendix along with mention of planning sample sizes based on accuracy in parameter estimation (i.e., planning the size of the CIs; Cumming, 2012; Kelley & Rausch, 2006; Maxwell, Kelley, & Raush, 2008). A brief note on the terminology used in this article may be helpful. "
    ABSTRACT: The Publication Manual of the American Psychological Association (American Psychological Association, 2001, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η² was the most commonly reported effect size estimate for analysis of variance. For t tests, two thirds of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.
    Journal of Experimental Psychology General 08/2011; 141(1):2-18. DOI:10.1037/a0024338 · 5.50 Impact Factor
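
Following the excerpt's mention of confidence intervals for effect sizes, here is a minimal sketch of a noncentral-t CI for Cohen's d, assuming the MBESS package referenced in the article above (the observed d and the group sizes are made up for illustration):

library(MBESS)

# CI for the population standardized mean difference based on the
# noncentral t distribution; all input values are illustrative.
ci.smd(smd = 0.5,           # observed Cohen's d
       n.1 = 50, n.2 = 50,  # per-group sample sizes
       conf.level = .95)    # two-sided confidence level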