Article

An exploratory test for an excess of significant findings

John P. A. Ioannidis and Thomas A. Trikalinos
University of Ioannina, Ioannina, Epirus, Greece
Clinical Trials 2007; 4(3):245-253. DOI: 10.1177/1740774507079441

ABSTRACT: The published clinical research literature may be distorted by the pursuit of statistically significant results.
We aimed to develop a test to explore biases stemming from the pursuit of nominal statistical significance.
The exploratory test evaluates whether there is a relative excess of formally significant findings in the published literature for any reason (e.g., publication bias, selective analyses and outcome reporting, or fabricated data). The expected number of studies with statistically significant results is estimated and compared against the observed number of significant studies. The main application uses alpha = 0.05, but a range of alpha thresholds is also examined, and different values or prior distributions of the effect size are assumed. Because power is typically low when only a few studies address a research question, the test may be best applied across domains of many meta-analyses that share common characteristics (interventions, outcomes, study populations, research environment).
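To make the test's mechanics concrete, the sketch below implements one simple version in Python. It assumes each study reports an effect estimate with a known standard error, fixes the assumed true effect `theta` (e.g., at the meta-analytic summary estimate), sums per-study powers to get the expected number of significant studies E, and compares E against the observed count O with a one-sided binomial test. The function names, placeholder data, and the binomial comparison are illustrative assumptions, not the paper's exact procedure (which also examines ranges of alpha and prior distributions for the effect size).

```python
# A minimal sketch of an excess-significance check, not the authors'
# exact implementation. Assumes scipy >= 1.7 for stats.binomtest.
import numpy as np
from scipy import stats

def power_two_sided(theta, se, alpha=0.05):
    """Power of a two-sided Wald test at level alpha for a study whose
    effect estimate has standard error se, if the true effect is theta."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z = theta / np.asarray(se)
    return stats.norm.sf(z_crit - z) + stats.norm.cdf(-z_crit - z)

def excess_significance(se, observed_significant, theta, alpha=0.05):
    """Compare the observed number of nominally significant studies (O)
    against the expected number (E = sum of per-study powers)."""
    powers = power_two_sided(theta, se, alpha)
    expected = powers.sum()
    # One-sided binomial test using the mean power as a common success
    # probability -- an approximation, since power differs across studies
    # (the exact count follows a Poisson-binomial distribution).
    p = stats.binomtest(observed_significant, len(se), powers.mean(),
                        alternative="greater").pvalue
    return expected, p

# Hypothetical example: 20 studies, assumed effect theta = 0.3 (log scale),
# 14 of them nominally significant at alpha = 0.05.
rng = np.random.default_rng(0)
se = rng.uniform(0.15, 0.5, size=20)
E, p = excess_significance(se, observed_significant=14, theta=0.3)
print(f"expected significant: {E:.1f} of 20; one-sided p = {p:.3f}")
```

A small p-value here would indicate more significant studies than the assumed effect size can plausibly account for; as the abstract notes, the inference is sensitive to the choice of theta, which is why a range of assumptions is examined.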
For illustration, we evaluated eight meta-analyses of clinical trials with >50 studies each and 10 meta-analyses of the clinical efficacy of neuroleptic agents in schizophrenia; the latter 10 were also examined as a composite domain. The test often gave results that differed from those of commonly used tests of publication bias. We demonstrated a clear or possible excess of significant studies in six of the eight large meta-analyses and in the wide domain of neuroleptic treatments.
The proposed test is exploratory, may depend on prior assumptions, and should be applied cautiously.
An excess of significant findings may be documented in some clinical research fields.

Related publications:

  • ABSTRACT: Both evolutionary considerations and recent research suggest that the color red serves as a signal indicating an object's importance. However, until now there has been no evidence that this signaling function of red is also reflected in human memory. To examine the effect of red on memory, we conducted four experiments in which we presented objects colored in four different colors (red, green, blue, and yellow) and measured later memory for the presence of an object and for the color of an object. Across experiments, we varied the type of objects (words vs. pictures), task complexity (single objects vs. multiple objects in visual scenes), and intentionality of encoding (intentional vs. incidental learning). Memory for the presence of an object was not influenced by color. However, in all four experiments, memory for the color of an object depended on color type and was particularly high for red- and yellow-colored objects and particularly low for green-colored objects, indicating that the binding of colors into object memory representations varies as a function of color type. Analyzing observers' confidence in their color memories revealed that color influenced not only objective memory performance but also subjective confidence. Subjective confidence judgments differentiated well between correct and incorrect color memories for red-colored objects, but poorly for green-colored objects. Our findings reveal a previously unknown color effect that may be of considerable interest for both basic color research and applied settings, such as eyewitness testimony, in which memory for color features is relevant. Furthermore, our results indicate that feature binding in memory is not a uniform process by which any attended feature is automatically bound into unitary memory representations. Rather, memory binding seems to vary across subtypes of features, a finding that supports recent research showing that object features are stored in memory rather independently of each other.
    Frontiers in Psychology 2015; 6:231. DOI: 10.3389/fpsyg.2015.00231
  • ABSTRACT: Although the effect of stereotype threat concerning women and mathematics has been the subject of several systematic reviews, none has focused on the sub-population of children and adolescents. In this meta-analysis we estimated the effects of stereotype threat on girls' performance on math, science, and spatial skills (MSSS) tests. Moreover, we studied publication bias and four moderators: test difficulty, presence of boys, gender equality within countries, and the type of control group used in the studies. We selected study samples in which the study included girls, the samples had a mean age below 18 years, the design was (quasi-)experimental, the stereotype threat manipulation was administered between subjects, and the dependent variable was an MSSS test related to a gender stereotype favoring boys. To analyze the 47 effect sizes, we used random-effects and mixed-effects models. The estimated mean effect size equaled -0.22 and differed significantly from 0. None of the moderator variables was significant; however, there were several signs of publication bias. We conclude that publication bias might seriously distort the literature on the effects of stereotype threat among schoolgirls. We propose a large replication study to provide a less biased effect size estimate.
    (A sketch of this kind of random-effects pooling follows the citation below.)
    Journal of School Psychology 2015; 53(1):25-44. DOI: 10.1016/j.jsp.2014.10.002
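The random-effects pooling mentioned in that abstract is standard meta-analytic machinery. As context, here is a minimal sketch of one common estimator (DerSimonian-Laird); it is not taken from the cited study, and the inputs are hypothetical placeholders, not the study's 47 effect sizes.

```python
# A minimal DerSimonian-Laird random-effects pooler, for illustration only.
import numpy as np

def dersimonian_laird(effects, se):
    """Pool per-study effect sizes under a random-effects model."""
    effects, se = np.asarray(effects), np.asarray(se)
    w = 1.0 / se**2                       # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed)**2)  # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = 1.0 / (se**2 + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

# Placeholder data for demonstration.
est, est_se, tau2 = dersimonian_laird([-0.3, -0.1, -0.25, 0.05],
                                      [0.10, 0.12, 0.20, 0.15])
print(f"pooled = {est:.2f} (SE {est_se:.2f}), tau^2 = {tau2:.3f}")
```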
  • ABSTRACT: In recent years, cognitive scientists and commercial interests (e.g., Fit Brains, Lumosity) have focused research attention and financial resources on cognitive tasks, especially working memory tasks, to explore and exploit possible transfer effects to general cognitive abilities such as fluid intelligence. The increased research attention has produced mixed findings, as well as contention about the state of the evidence base. To address this contention, Au et al. (2014) recently conducted a meta-analysis of extant controlled experimental studies of n-back task training transfer effects on measures of fluid intelligence in healthy adults, the results of which showed a small training transfer effect. Using several approaches, the current review evaluated and re-analyzed the meta-analytic data for the presence of two different forms of small-study effects: (1) publication bias in the presence of low power, and (2) low power in the absence of publication bias. These analyses showed no evidence of selection bias in the working memory training literature, but did show evidence of small-study effects related to low power in the absence of publication bias. While the effect size identified by Au et al. (2014) is the most precise estimate to date, it should be interpreted in the context of a uniformly low-powered base of evidence. The present work concludes with a brief set of considerations for assessing the adequacy of a body of research findings for the application of meta-analytic techniques.
    (One common small-study-effects check is sketched after the citation below.)
    Frontiers in Psychology 2014; 5:1589. DOI: 10.3389/fpsyg.2014.01589
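One common way to probe for the small-study effects discussed in that abstract is Egger's regression test, sketched below. This is offered as general context under stated assumptions, not as the specific method used in the cited review; the data are placeholders.

```python
# A minimal Egger's regression test for funnel-plot asymmetry; illustrative only.
# Assumes scipy >= 1.6 for linregress's intercept_stderr attribute.
import numpy as np
from scipy import stats

def eggers_test(effects, se):
    """Regress standardized effects on precision; an intercept far from zero
    suggests funnel-plot asymmetry (small-study effects)."""
    effects, se = np.asarray(effects), np.asarray(se)
    y = effects / se            # standardized effect sizes
    x = 1.0 / se                # precision
    res = stats.linregress(x, y)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return res.intercept, p

# Placeholder data, not the n-back training studies.
intercept, p = eggers_test([0.80, 0.50, 0.30, 0.20, 0.15],
                           [0.40, 0.30, 0.20, 0.15, 0.10])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```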