Standards of Reporting of Randomized Controlled Trials in General Surgery: Can We Do Better?

Academic Unit of Surgical Oncology, K Floor, Royal Hallamshire Hospital, University of Sheffield, S10 2JF, UK.
Annals of Surgery (Impact Factor: 8.33). 11/2006; 244(5):663-7. DOI: 10.1097/01.sla.0000217640.11224.05
Source: PubMed


Objective: To evaluate the quality of reporting of surgical randomized controlled trials published in surgical and general medical journals using the Jadad score, allocation concealment, and adherence to CONSORT guidelines, and to identify factors associated with good quality.
Background: Randomized controlled trials (RCTs) provide the best evidence about the relative effectiveness of different interventions. Improper methodology and reporting of RCTs can lead to erroneous conclusions about treatment effects, which may mislead decision-making in health care at all levels.
Methods: Information was obtained on RCTs published in 6 general surgical and 4 general medical journals in the year 2003. The quality of reporting was assessed under masked conditions using allocation concealment, the Jadad score, and a CONSORT checklist devised for the purpose.
Results: Of the 69 RCTs analyzed, only 37.7% had a Jadad score of ≥3, and only 13% clearly explained allocation concealment. The modified CONSORT score of surgical trials reported in medical journals was significantly higher than that of trials reported in surgical journals (Mann-Whitney U test, P < 0.001). Overall, the modified CONSORT score was higher in studies with more authors (P = 0.03), multicenter studies (P = 0.002), and studies with a declared funding source (P = 0.022).
Conclusions: The overall quality of reporting of surgical RCTs was suboptimal. There is a need to improve awareness of the CONSORT statement among authors, reviewers, and editors of surgical journals, and to introduce better quality control measures for trial reporting and methodology.
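For illustration, a minimal sketch of the kind of between-group comparison reported in the Results above: a two-sided Mann-Whitney U test on modified CONSORT scores for trials published in medical versus surgical journals. The scores and group sizes below are hypothetical placeholders, not the study's data, and scipy is assumed to be available.

```python
# Hypothetical modified CONSORT checklist scores per trial (illustrative only).
from scipy.stats import mannwhitneyu

scores_medical_journals = [18, 20, 17, 21, 19, 22, 16, 20]
scores_surgical_journals = [12, 14, 11, 15, 13, 10, 14, 12]

# Two-sided Mann-Whitney U test: a non-parametric comparison of the two groups.
u_stat, p_value = mannwhitneyu(
    scores_medical_journals, scores_surgical_journals, alternative="two-sided"
)
print(f"U = {u_stat}, p = {p_value:.4f}")
```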

    • "As described previously, a number of predictor variables have been shown to have a positive association with the reporting quality of RCTs [3,6,11,15,16,19-23]. The four predictor variables we used were sample size, impact factor, funding reported and journal adoption of the CONSORT statement at the time of our data abstraction. "
    ABSTRACT: Methodologists have proposed the formulation of a good research question to initiate the process of developing a research protocol that will guide the design, conduct, and analysis of randomized controlled trials (RCTs), and to help improve the quality of reporting of such studies. The five constituents of a good research question based on the PICOT framing are: Population, Intervention, Comparator, Outcome, and Time-frame of outcome assessment. The aim of this study was to analyze whether the presence of a structured research question, in PICOT format, in RCTs used within a 2010 meta-analysis investigating the effectiveness of femoral nerve blocks after total knee arthroplasty is independently associated with improved quality of reporting. Twenty-three RCT reports were assessed for quality of reporting and then examined for the presence of the five constituents of a structured research question based on PICOT framing. We created a PICOT score (predictor variable) with a possible score between 0 and 5: one point for every constituent that was included. Our outcome variables were a 14-point overall reporting quality score (OQRS) and a 3-point key methodological items score (KMIS) based on the proper reporting of allocation concealment, blinding, and numbers analysed using the intention-to-treat principle. Both scores, OQRS and KMIS, are based on the Consolidated Standards of Reporting Trials (CONSORT) statement. A multivariable regression analysis was conducted to determine whether the PICOT score was independently associated with OQRS and KMIS. A completely structured PICOT question was found in 2 of the 23 RCTs evaluated. Although not statistically significant, a higher PICOT score was associated with a higher OQRS [IRR: 1.267; 95% confidence interval (CI): 0.984, 1.630; p = 0.066] but not with KMIS [IRR: 1.061; 95% CI: 0.515, 2.188; p = 0.872]. These results are comparable to those from a similar study in terms of the direction and range of the IRR estimates. The results need to be interpreted cautiously due to the small sample size. This study showed that PICOT framing of a research question in anesthesia-related RCTs is not often followed. Even though a statistically significant association with higher OQRS was not found, PICOT framing of a research question is still an important attribute of all RCTs.
    BMC Anesthesiology 11/2013; 13(1):44. DOI:10.1186/1471-2253-13-44 · 1.38 Impact Factor
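For illustration only, a minimal sketch of the type of analysis described in the entry above: scoring the five PICOT constituents per trial and estimating an incidence rate ratio (IRR) for the association between the PICOT score and an overall reporting quality score via Poisson regression. This is a simplified, univariable version of the study's multivariable model; the trial records and values are hypothetical, and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

# One hypothetical record per trial: flags for Population, Intervention,
# Comparator, Outcome, Time-frame, plus the trial's OQRS (count out of 14).
trials = [
    {"P": 1, "I": 1, "C": 1, "O": 0, "T": 0, "oqrs": 9},
    {"P": 1, "I": 1, "C": 1, "O": 1, "T": 1, "oqrs": 12},
    {"P": 1, "I": 0, "C": 0, "O": 0, "T": 0, "oqrs": 7},
    {"P": 1, "I": 1, "C": 0, "O": 1, "T": 0, "oqrs": 10},
    {"P": 0, "I": 1, "C": 0, "O": 0, "T": 0, "oqrs": 6},
    {"P": 1, "I": 1, "C": 1, "O": 1, "T": 0, "oqrs": 11},
]

# PICOT score: one point per constituent present (range 0-5).
picot_score = np.array([t["P"] + t["I"] + t["C"] + t["O"] + t["T"] for t in trials])
oqrs = np.array([t["oqrs"] for t in trials])

# Poisson regression of the OQRS count on the PICOT score;
# the exponentiated coefficient is the incidence rate ratio (IRR).
X = sm.add_constant(picot_score)
model = sm.GLM(oqrs, X, family=sm.families.Poisson()).fit()
irr = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])
print(f"IRR per PICOT point: {irr:.3f} (95% CI {ci_low:.3f}, {ci_high:.3f})")
```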
    • "Either way, research in the area indicates serious inadequacies in reporting in orthopaedic RCTs. This trend of poor reporting has been seen in other fields as well, including internal medicine [17] and general surgery [22]. "
    ABSTRACT: Background: The purpose of this study was to assess the quality of methodology in orthopaedics-related randomized controlled trials (RCTs) published from January 2006 to December 2010 in the top orthopaedic journals based on impact scores from the Thompson ISI citation reports (2010). Methods: Journals included the American Journal of Sports Medicine; Journal of Orthopaedic Research; Journal of Bone and Joint Surgery, American; Spine Journal; and Osteoarthritis and Cartilage. Each RCT was assessed on ten criteria (randomization method, allocation sequence concealment, participant blinding, outcome assessor blinding, outcome measurement, interventionist training, withdrawals, intent-to-treat analyses, clustering, and baseline characteristics) for which there is empirical evidence of biased treatment effect estimates when they are not performed properly. Results: A total of 232 RCTs met our inclusion criteria. The proportion of RCTs in the published journals fell from 6% in 2006 to 4% in 2010. Forty-nine percent of the criteria were fulfilled across these journals, with 42% of the criteria not amenable to assessment due to inadequate reporting. Our regression revealed that a more recent publication year was significantly associated with more fulfilled criteria (β = 0.171; CI = −0.00 to 0.342; p = 0.051). Conclusion: Very few studies met all ten criteria, so many of these studies likely have biased estimates of treatment effects. In addition, these journals showed poor reporting of important methodological aspects.
    BMC Medical Research Methodology 06/2013; 13(1):76. DOI:10.1186/1471-2288-13-76 · 2.27 Impact Factor
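Again for illustration, a minimal sketch of tallying how many of the ten methodological criteria each RCT fulfils and regressing that count on publication year, loosely following the entry above. The criteria names mirror the abstract, but the records and the simple linear model are hypothetical stand-ins, not the study's data or exact analysis.

```python
import numpy as np
import statsmodels.api as sm

# The ten criteria assessed per RCT (names paraphrased from the abstract).
CRITERIA = [
    "randomization_method", "allocation_concealment", "participant_blinding",
    "assessor_blinding", "outcome_measurement", "interventionist_training",
    "withdrawals", "intention_to_treat", "clustering", "baseline_characteristics",
]

# Hypothetical records: publication year plus the criteria judged fulfilled.
records = [
    {"year": 2006, "fulfilled": {"randomization_method": True, "withdrawals": True}},
    {"year": 2007, "fulfilled": {"randomization_method": True, "baseline_characteristics": True}},
    {"year": 2008, "fulfilled": {"randomization_method": True, "allocation_concealment": True,
                                 "intention_to_treat": True}},
    {"year": 2009, "fulfilled": {c: True for c in CRITERIA[:4]}},
    {"year": 2010, "fulfilled": {c: True for c in CRITERIA[:6]}},
]

years = np.array([r["year"] for r in records])
n_fulfilled = np.array([sum(r["fulfilled"].get(c, False) for c in CRITERIA) for r in records])

# Simple linear regression of fulfilled-criteria count on publication year.
X = sm.add_constant(years)
fit = sm.OLS(n_fulfilled, X).fit()
print(f"additional criteria fulfilled per later year: beta = {fit.params[1]:.3f}")
```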
    • "The subgroup of high-impact journals showed even a larger proportion. A review on randomised trials in special medical fields found smaller proportions [19,20]. Surprisingly, Hopewell et al. [8] reported that fewer than one third of reports of randomised trials published in 2006 included details of participant flow. "
    ABSTRACT: Background: Non-inferiority and equivalence trials require tailored methodology, so adequate conduct and reporting is an ambitious task. The aim of our review was to assess whether the criteria recommended by the CONSORT extension were followed. Methods: We searched the Medline database and the Cochrane Central Register for reports of randomised non-inferiority and equivalence trials published in English. We excluded reports on bioequivalence studies, reports focusing on topics other than the main results of a trial, and articles for which the full-text version was not available. In total, we identified 209 reports (167 non-inferiority, 42 equivalence trials) and assessed the reporting and methodological quality using abstracted items of the CONSORT extension. Results: Half of the articles did not report the method of randomisation, and only a third of the trials were reported to use blinding. The non-inferiority or equivalence margin was defined in most reports (94%) but was justified for only a quarter of the trials. A sample size calculation was reported for 90% of the trials, but the margin was taken into account in only 78% of the trials reported. Both intention-to-treat and per-protocol analyses were presented in fewer than half of the reports. When reporting the results, a confidence interval was given for 85% of trials. In 21% of the reports, the conclusion presented was wrong or incomprehensible. Overall, we found a substantial lack of quality in reporting and conduct, and the need for improvement also applied to aspects generally recommended for randomised trials. Quality was partly better in high-impact journals than in others. Conclusions: There are still important deficiencies in the reporting of the methodological approach as well as of results and interpretation, even in high-impact journals. It seems to take more than guidelines to improve the conduct and reporting of non-inferiority and equivalence trials.
    Trials 11/2012; 13(1):214. DOI:10.1186/1745-6215-13-214 · 1.73 Impact Factor