Reporting Discrepancies Between the ClinicalTrials.gov Results Database and Peer-Reviewed Publications

Annals of Internal Medicine (Impact Factor: 17.81). 04/2014; 160(7):477-83. DOI: 10.7326/M13-0480
Source: PubMed

ABSTRACT: ClinicalTrials.gov requires reporting of result summaries for many drug and device trials.
Objective: To evaluate the consistency of reporting of trials that are registered in the ClinicalTrials.gov results database and published in the literature. Design: Comparison of the ClinicalTrials.gov results database with matched publications identified through ClinicalTrials.gov and a manual search of 2 electronic databases.
Sample: A 10% random sample of phase 3 or 4 trials with results in the ClinicalTrials.gov results database, completed before 1 January 2009, with 2 or more groups.
Measurements: One reviewer extracted data about trial design and results from the results database and matching publications. A subsample was independently verified.
Results: Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication.
Limitation: Small sample that included the earliest results posted to the database.
Conclusion: Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. Which source contains the more accurate account of results is unclear, although ClinicalTrials.gov may provide a more comprehensive description of adverse events than the publication.
Primary funding source: Agency for Healthcare Research and Quality.
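The comparison the abstract describes is essentially a field-by-field reconciliation of two records for the same trial. A purely illustrative sketch in Python follows (this is not the authors' analysis code; the field names, matching rules, and example values are hypothetical simplifications of the kind of agreement check tallied above):

# Hypothetical illustration of a registry-vs-publication consistency check.
# Field names and equality rules are invented for this sketch; the study
# itself abstracted data manually and applied its own agreement criteria.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialRecord:
    primary_outcome_description: str
    primary_outcome_value: Optional[float]
    n_participants_with_sae: Optional[int]  # None means SAEs were not reported
    n_deaths: Optional[int]

def flag_discrepancies(registry: TrialRecord, publication: TrialRecord) -> dict:
    """Return True for each field where the two sources disagree."""
    return {
        "primary_outcome_description": registry.primary_outcome_description.strip().casefold()
        != publication.primary_outcome_description.strip().casefold(),
        "primary_outcome_value": registry.primary_outcome_value != publication.primary_outcome_value,
        "serious_adverse_events": registry.n_participants_with_sae != publication.n_participants_with_sae,
        "deaths": registry.n_deaths != publication.n_deaths,
    }

# Example: the registry reports 12 participants with an SAE, the publication omits SAEs.
registry_entry = TrialRecord("Change in HbA1c at 26 weeks", -0.8, 12, 0)
publication_entry = TrialRecord("change in HbA1c at 26 weeks", -0.8, None, 0)
print(flag_discrepancies(registry_entry, publication_entry))
# -> {'primary_outcome_description': False, 'primary_outcome_value': False,
#     'serious_adverse_events': True, 'deaths': False}

In a sketch like this, a publication that omits SAEs (None) is counted as disagreeing with a registry entry that reports them, which mirrors how the study treats publications that do not mention SAEs as inconsistent.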

    • "Any deviations from the primary outcome analyses specified in the clinical trial registry should be clearly indicated (Hartung et al., 2014). Importantly, the predefined alpha should be reported and results that do not meet that level of significance (e.g., p = 0.07) should not be interpreted as " trend " level effects. "
    ABSTRACT: The primary goals in conducting clinical trials of treatments for alcohol use disorders (AUDs) are to identify efficacious treatments and determine which treatments are most efficacious for which patients. Accurate reporting of study design features and results is imperative to enable readers of research reports to evaluate to what extent a study has achieved these goals. Guidance on quality of clinical trial reporting has evolved substantially over the past 2 decades, primarily through the publication and widespread adoption of the Consolidated Standards of Reporting Trials statement. However, there is room to improve the adoption of those standards in reporting the design and findings of treatment trials for AUD. This paper provides a narrative review of guidance on reporting quality in AUD treatment trials. Despite improvements in the reporting of results of treatment trials for AUD over the past 2 decades, many published reports provide insufficient information on design or methods. The reporting of alcohol treatment trial design, analysis, and results requires improvement in 4 primary areas: (i) trial registration, (ii) procedures for recruitment and retention, (iii) procedures for randomization and intervention design considerations, and (iv) statistical methods used to assess treatment efficacy. Improvements in these areas and the adoption of reporting standards by authors, reviewers, and editors are critical to an accurate assessment of the reliability and validity of treatment effects. Continued developments in this area are needed to move AUD treatment research forward via systematic reviews and meta-analyses that maximize the utility of completed studies. Copyright © 2015 by the Research Society on Alcoholism.
    Alcoholism: Clinical and Experimental Research 08/2015; 39(9). DOI:10.1111/acer.12797 · 3.21 Impact Factor
  • ABSTRACT: Since 2007, the US federal government has required that organizations sponsoring clinical trials with at least one site in the United States submit information on these clinical trials to an existing database: ClinicalTrials.gov. Over time, the number of mandatory variables has grown and will probably continue to grow. The database now represents an important source of descriptive information about the landscape for clinical trials. In addition, it constitutes a rich pool of data to test hypotheses; for instance, what variables are associated with an organization's ability to correctly estimate study completion times or to complete those studies in as short a time frame as possible. This paper concludes that, for the mandated variables the authors have labeled study identification, protocol and study design, and study execution, the data set constitutes a potentially very valuable research resource. With the exception of some site-related information, incomplete data did not exceed 3%. The incomplete site data are concentrated in several companies, so it is not unreasonable to assume that those data will also become more complete.
    Therapeutic Innovation and Regulatory Science 02/2014; 49(2):218-224. DOI:10.1177/2168479014551643 · 0.46 Impact Factor

  • Annals of Surgery 05/2014; 260(3). DOI:10.1097/SLA.0000000000000777 · 8.33 Impact Factor