High Impact = High Statistical Standards? Not Necessarily So

Dipartimento di Psicologia Generale, Università di Padova, Padova, Italy.
PLoS ONE (Impact Factor: 3.23). 02/2013; 8(2):e56180. DOI: 10.1371/journal.pone.0056180
Source: PubMed


What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of the limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues published in 2011 in four journals with high impact factors (Science, Nature, The New England Journal of Medicine and The Lancet) and three journals with relatively lower impact factors (Neuropsychology, Journal of Experimental Psychology: Applied and the American Journal of Public Health). Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect sizes, prospective power or model estimation is the prevalent statistical practice in articles published in Nature (89%), followed by articles published in Science (42%). By contrast, in all other journals, whether with high or lower impact factors, most articles report confidence intervals and/or effect size measures. We interpret these differences as consequences of the editorial policies adopted by the journals, which are probably the most effective means of improving statistical practices in journals regardless of their impact factor.
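The contrast the abstract draws — a bare p-value versus reporting an effect size and a confidence interval — can be made concrete. The following is a minimal sketch with hypothetical data, not the paper's own analysis: Cohen's d from a pooled standard deviation, and a normal-approximation 95% interval for the mean difference.

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var)

def mean_diff_ci(a, b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    diff = mean(a) - mean(b)
    return diff - z * se, diff + z * se

# Hypothetical scores from two independent groups.
treatment = [5.1, 4.9, 5.6, 5.0, 5.4, 5.2]
control = [4.6, 4.8, 4.5, 4.9, 4.4, 4.7]

d = cohens_d(treatment, control)
lo, hi = mean_diff_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI for the mean difference: [{lo:.2f}, {hi:.2f}]")
```

The z-based interval is a simplification for brevity; with samples this small, a t-distribution critical value would be the more defensible choice.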

Available from: Francesco Sella, Oct 05, 2015
    • "Most likely it is the result of the statistical practices of the specific research groups working on these species. Impact factor of the journal was also not associated with pseudoreplication, suggesting that better quality journals (if impact factor is a genuine reflection of quality, which, of course, it may not be: Tressoldi et al. 2013) are as likely to accept pseudoreplication as journals with lower impact factors. Similarly, the mean number of citations per year was not associated with pseudoreplication, meaning that studies …"
    ABSTRACT: Pseudoreplication (the pooling fallacy) is a widely acknowledged statistical error in the behavioural sciences. Taking a large number of data points from a small number of animals creates a false impression of a better representation of the population. Studies of communication may be particularly prone to artificially inflating the data set in this way, as the unit of interest (the facial expression, the call or the gesture) is a tempting unit of analysis. Primate communication studies (551) published in scientific journals from 1960 to 2008 were examined for the simplest form of pseudoreplication (taking more than one data point from each individual). Of the studies that used inferential statistics, 38% presented at least one case of pseudoreplicated data. An additional 16% did not provide enough information to rule out pseudoreplication. Generalized linear mixed models determined that one variable significantly increased the likelihood of pseudoreplication: using observational methods. Actual sample size (number of animals) and year of publication were not associated with pseudoreplication. The high prevalence of pseudoreplication in the primate communication research articles, and the fact that there has been no decline since key papers warned against pseudoreplication, demonstrates that the problem needs to be more actively addressed.
    Animal Behaviour 07/2013; 86(2):483-488. DOI: 10.1016/j.anbehav.2013.05.038 · 3.14 Impact Factor
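The pooling fallacy described in the abstract above can be shown in a few lines. This is a sketch with hypothetical call-duration data: pooling every call treats 9 non-independent measurements as the sample, whereas the defensible unit of analysis yields one value per animal.

```python
from statistics import mean

# Hypothetical data: call durations (s) from 3 animals, several calls each.
calls = {
    "animal_1": [1.2, 1.3, 1.1, 1.4],
    "animal_2": [0.8, 0.9, 0.7],
    "animal_3": [1.0, 1.1],
}

# Pseudoreplicated analysis: pool every call as if each were independent.
pooled = [d for durations in calls.values() for d in durations]
n_pooled = len(pooled)  # 9 "observations" from only 3 subjects

# Correct unit of analysis: one summary value per animal.
per_animal = [mean(durations) for durations in calls.values()]
n_animals = len(per_animal)  # 3 independent subjects

print(f"pooled n = {n_pooled}, true n = {n_animals}")
```

Averaging per subject is the simplest remedy; the cited study's generalized linear mixed models handle the same non-independence while retaining the call-level data.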
    ABSTRACT: The possibility of predicting random future events, before any sensory cues, by using human physiology as a dependent variable has been supported by the meta-analysis of Mossbridge et al.1 and by recent findings by Tressoldi et al.2,3. Mossbridge et al.4 termed this phenomenon predictive anticipatory activity (PAA). From a theoretical point of view, one interesting question is whether PAA is related to the actual future presentation of these stimuli or only to the probability of their presentation. This hypothesis was tested in four experiments, two using heart rate and two using pupil dilation as dependent variables. In all four experiments, both a neutral and a potentially threatening stimulus were predicted 7 to 10% above chance, independently of whether the predicted threatening stimulus was presented or not. These findings are discussed with reference to the "grandfather paradox", and some candidate explanations for this phenomenon are presented.
    SSRN Electronic Journal 01/2013; DOI: 10.2139/ssrn.2371577
    ABSTRACT: Most researchers acknowledge an intrinsic hierarchy in the scholarly journals ("journal rank") that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate concerning the limitations of journal rank as a scientific impact assessment tool have either lacked data, or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.
    Frontiers in Human Neuroscience 06/2013; 7:291. DOI: 10.3389/fnhum.2013.00291 · 2.99 Impact Factor