Avoidable waste in the production and reporting of
research evidence
Iain Chalmers, Paul Glasziou
Without accessible and usable reports, research cannot help patients and their clinicians. In a published Personal View,1 a medical researcher with myeloma reflected on the way that the results of four randomised trials relevant to his condition had still not been published, years after preliminary findings had been presented in meeting abstracts:
“Research results should be easily accessible to people who need to make decisions about their own health… Why was I forced to make my decision knowing that information was somewhere but not available? Was the delay because the results were less exciting than expected? Or because in the evolving field of myeloma research there are now new exciting hypotheses (or drugs) to look at? How far can we tolerate the butterfly behaviour of researchers, moving on to the next flower well before the previous one has been fully exploited?”
This experience is not unusual: a recently updated
systematic review of 79 follow-up studies of research
reported in abstracts estimated the rate of publication of
full reports after 9 years to be only 53%.2
Worldwide, over US$100 billion is invested every year
in supporting biomedical research, which results in an
estimated 1 million research publications per year.
Much of this investment has supported basic research.
For example, over two-thirds of government and
charitable investment in biomedical research in the UK
has been for basic research, with less than 10% for
treatment evaluation. The relative lack of support for
applied research and the bureaucracy that regulates
research involving patients have been powerful
disincentives for those who might otherwise have
become involved in research in treatment evaluation. In
recent years, there has been recognition of the need to
address both of these disincentives. In the UK, the
Cooksey enquiry concluded that government support
for applied research should be increased,3 and the
National Institute for Health Research (NIHR) has
responded rapidly to this policy (its funding for clinical
trials will soon be £80 million a year).4 In the USA, a bill
currently before Congress calls for federal support for
evaluations of treatments independent of industry, and
in Italy and Spain, independent research on the effects
of drugs is being supported with revenue from a tax on
pharmaceutical company drug promotion.5
This increased investment in independent treatment
evaluation is laudable. Irrespective of who sponsors
research, this investment should be protected from the
avoidable waste caused by inadequate production and reporting of research. We examine the causes and degree of waste occurring at four successive stages: the choice of research questions; the quality of research design and methods; the adequacy of publication practices; and the quality of reports of research (figure).
Choosing the wrong questions for research
An efficient system of research should address health
problems of importance to populations and the
interventions and outcomes considered important by
patients and clinicians. However, public funding of
research is correlated only modestly with disease
burden, if at all.6–8 Within specific health problems there
is little research on the extent to which questions
addressed by researchers match questions of relevance
to patients and clinicians. In an analysis of 334 studies,
only nine compared researchers’ priorities with those
of patients or clinicians.9 The findings of these studies
have revealed some dramatic mismatches. For example,
the research priorities of patients with osteoarthritis of
the knee and the clinicians looking after them favoured
more rigorous evaluation of physiotherapy and surgery,
and assessment of educational and coping strategies.
Only 9% of patients wanted more research on drugs,
yet over 80% of randomised controlled trials in patients
with osteoarthritis of the knee were drug evaluations.10
This interest in non-drug interventions among users of research results is reflected in the fact that the vast
majority of the most frequently consulted Cochrane
reviews are about non-drug forms of treatment. The
current emphasis on drugs is not simply a feature of
commercial research: controlled trials funded by the UK
Lancet 2009; 374: 86–89. Published online June 15, 2009.

James Lind Library, James Lind Initiative, Oxford, UK (Sir I Chalmers DSc); and Centre for Evidence-Based Medicine, Department of Primary Care, University of Oxford, Oxford, UK (Prof P Glasziou RACGP).

Correspondence to: Sir Iain Chalmers, James Lind Library, James Lind Initiative, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK
Figure: Stages of waste in the production and reporting of research evidence relevant to clinicians and patients
1 Questions relevant to clinicians and patients? Low-priority questions addressed; important outcomes not assessed; clinicians and patients not involved in setting research agendas.
2 Appropriate design and methods? Over 50% of studies designed without reference to systematic reviews of existing evidence; over 50% of studies fail to take adequate steps to reduce biased treatment allocation.
3 Accessible full publication? Over 50% of studies never published in full; biased under-reporting of studies with disappointing results.
4 Unbiased and usable report? Over 30% of trial interventions not sufficiently described; over 50% of planned study outcomes not reported; most new research not interpreted in the context of systematic assessment of other relevant evidence.
The losses at each stage accumulate as research waste.
Medical Research Council and British medical research
charities between 1980 and 2002, for example, were
substantially more likely to be drug trials than were trials
commissioned by the National Health Service Research
and Development Programme (now the NIHR).11
Furthermore, the outcomes that researchers have
measured have not always been those that patients regard
as most relevant. The exemplary involvement of patients
by researchers assessing treatments for rheumatoid
arthritis showed that, for most patients, fatigue was the
dominant symptom of concern, rather than pain, as
researchers had assumed.12
Many researchers remain dismissive of suggestions that
patients, carers, and clinicians should help to prioritise
research, but there are some signs of change. For example,
the James Lind Alliance has been established to bring
together patients, carers, and clinicians to prioritise
unresolved questions about the effects of treatments. In the case of asthma, unaddressed uncertainties about possible adverse effects of long-term use of steroids and
other powerful treatments emerged as their principal
shared concern.
Doing studies that are unnecessary, or poorly designed
New research should not be done unless, at the time it
is initiated, the questions it proposes to address cannot
be answered satisfactorily with existing evidence. Many
researchers do not do this—for example, Cooper and
colleagues13 found that only 11 of 24 responding authors
of trial reports that had been added to existing systematic
reviews were even aware of the relevant reviews when
they designed their new studies. About 2500 systematic
reviews of research are now being published every year,
with roughly a quarter of them in the Cochrane Database
of Systematic Reviews. Systematic reviews are now the
most frequently cited form of clinical research14 (the
citation frequency of the Cochrane Database of Systematic
Reviews ranks seventh among general medical
publications), but there is still a long way to go before
we will know what number and proportion of the many
questions of importance to patients and clinicians can
be answered with systematic reviews of existing
evidence. It has been estimated that at least
10 000 systematic reviews would be required to cover the
issues that have been addressed in over half a million
reports of controlled trials.15
New research is also too often wasteful because of
inadequate attention to other important elements of
study design or conduct. For example, in a sample of
234 clinical trials reported in the major general medical
journals, concealment of treatment allocation was often
inadequate (18%) or unclear (26%).16 In an assessment of
487 primary studies of diagnostic accuracy, 20% used
different reference standards for positive and negative
tests, thus overestimating accuracy, and only 17% used
double-blind reading of tests.17
Failure to publish relevant research promptly,
or at all
Biased under-publication and over-publication of research are forms of unscientific and unethical misconduct about which the public has become increasingly aware, particularly because of several exposés of suppressed evidence about serious adverse effects of treatments.18
More generally, studies with results that are disappointing
are less likely to be published promptly,19 more likely to
be published in grey literature, and less likely to proceed
from abstracts to full reports.2 The problem of biased
under-reporting of research results mainly from decisions
taken by research sponsors and researchers, not from
journal editors rejecting submitted reports.20
Over the past decade, biased under-reporting and
over-reporting of research have been increasingly
acknowledged as unacceptable, both on scientific and on
ethical grounds. Calls for prospective, public registration
of all clinical trials have been issued by influential
organisations—eg, WHO and the World Medical
Association (through the latest revision of the Declaration
of Helsinki) and the International Committee of Medical
Journal Editors—and some progress has been made.
WHO’s International Clinical Trials Registry Platform
has been developed to improve transparency and social
involvement in research, and there has been progress,
especially in the USA, where publication of the results of
research is now required. Although these developments
are welcome, public access to the full results of all
research remains an aspiration, and one that continues to be resisted by some research sponsors and researchers.
Biased or unusable reports of research
Although their quality has improved, reports of research
remain much less useful than they should be. Sometimes
this is because of frankly biased reporting—eg, adverse
effects of treatments are suppressed, the choice of
primary outcomes is changed between trial protocol and
trial reports,21 and the way data are presented does not
allow comparisons with other, related studies. But even
when trial reports are free of such biases, there are many
respects in which reports could be made more useful to
clinicians, patients, and researchers. We select here just
two of these.
First, if clinicians are to be expected to implement
treatments that have been shown in research to be
useful, they need adequate descriptions of the
interventions assessed, especially when these are
non-drug interventions, such as setting up a stroke unit,
offering a low-fat diet, or giving smoking cessation
advice. Adequate information on interventions is
available in around 60% of reports of clinical trials;22 yet,
by checking references, contacting authors, and doing
additional searches, it is possible to increase to 90% the
proportion of trials for which adequate information
could be made available.22
Second, unless new evidence is set in the context of
updated systematic reviews, readers cannot judge its
relevance. Yet among the world’s major general medical
journals, The Lancet is alone in requiring reports of new
research to be preceded by and to conclude with reference
to systematic reviews of other relevant evidence. In 2005,
the editors wrote: “…we will require authors of clinical trials submitted to The Lancet to include a clear summary of previous research findings, and to explain how their trial’s findings affect this summary. The relation between existing and new evidence should be illustrated by direct reference to an existing systematic review and meta-analysis.”23

This principle will remain challenging while the need for up-to-date, reliable systematic reviews of research findings remains insufficiently recognised and supported.
Although the Cochrane Collaboration aspires to maintain
its reviews by adding new evidence to them, presenting
more detailed analyses, and correcting any mistakes
identified, many Cochrane reviews are not being updated in a timely manner, and the organisation is struggling to deal with this deficiency. The challenge of keeping
existing systematic reviews up to date has not been solved
by any other organisation in the world.
Conclusions and recommendations
Although some waste in the production and reporting of
research evidence is inevitable and bearable, we were
surprised by the levels of waste suggested in the evidence
we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at each of stages 2, 3, and 4 would compound to a greater than 85% loss (only about 12.5% of the potential value surviving), which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of
correctable problems. Although we have mainly used
evidence about the design and reporting of clinical trials,
we believe it is reasonable to assume that the problems
also apply to other types of research.
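The compounding behind the "greater than 85%" figure can be sketched as a back-of-the-envelope calculation. This is an illustration of the arithmetic only, not code from the paper; the 50% figures are the rough stage-level estimates cited above, and the stage names follow the figure.

```python
# Rough stage-level survival rates from the figure: about half of
# research value survives each of stages 2-4 (design, publication,
# reporting), so the surviving fraction compounds multiplicatively.
stage_yields = {
    "appropriate design and methods": 0.50,
    "accessible full publication": 0.50,
    "unbiased and usable report": 0.50,
}

surviving = 1.0
for stage, fraction in stage_yields.items():
    surviving *= fraction  # value remaining after this stage

loss = 1.0 - surviving
print(f"Surviving fraction: {surviving:.3f}")  # 0.125
print(f"Cumulative loss: {loss:.1%}")          # 87.5%, i.e. greater than 85%
```

With a 50% loss at each of three stages, only 0.5³ = 12.5% of the potential value survives, which is the basis for the authors' "greater than 85% loss" claim.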
Because there are problems within each stage of
production and reporting, there is no single, simple
solution. But even modest efforts to understand and
improve production and reporting of research are likely
to yield substantially increased dividends for patients
and the public. Enough is known to justify some specific
suggestions for the attention of the research community,
and for action related to each of the stages of design and
reporting. These recommendations are shown in the
panel. Some elements of these recommendations reflect
policies already implemented by some research funders
in some countries. For example, the NIHR’s Health
Technology Assessment Programme routinely requires
or commissions systematic reviews before funding
primary studies, publishes all research as web-accessible
monographs, and, since 2006, has made all new
protocols freely available.
Even though there is more to learn about the
“epidemiology” and “treatment” of waste in the
production and reporting of research evidence, we
believe that all of our recommendations are justified on
the basis of the evidence we have cited. Action to address
this waste is needed now because it has human as well
as economic consequences, as illustrated by the
quotation with which this Viewpoint began.1
The coauthors contributed equally to the background research and text
of the manuscript.
Conflicts of interest
We declare that we have no conflicts of interest.
Panel: Stages of waste in the production and reporting of research evidence—barriers and recommendations (bullet points)

Questions relevant to clinicians and patients
Barrier: poor engagement of end users of research in research questions and design
• Increase involvement of patients and clinicians in shaping research agendas and specific questions
Barrier: incentives in fellowships and career paths to do primary research even if of low relevance
• Emphasise initial training in critical appraisal and systematic reviews rather than the conduct of primary research

Appropriate design and methods
Barrier: poor training in research methods and research reporting
• Require training of all clinicians in methodological flaws and biases in research; improve training in research methods for those doing research apprenticeships
Barrier: lack of methodological input to research design and review of research
• Increase numbers of methodologists in health-care research
Barrier: incentives for primary research ignore the need to use and improve on existing research on the same question
• Research funding bodies should require—and support—grant proposals to build on systematic reviews of existing evidence
Barrier: published research fails to set the study in the context of all previous similar research
• Journal editors should require new studies to be set in the context of systematic assessments of related studies

Accessible full publication
Barrier: non-registration of trials
• Require—by incentives and regulation—registration and publication of protocols for all clinical trials at inception
Barrier: failure of sponsors and authors to submit full reports of completed research
• Support timely open access to full results on completion

Unbiased and usable report
Barrier: poor awareness and use by authors and editors of reporting guidelines
• Increase author and journal awareness of and training in reporting guidelines, such as the CONSORT and STARD statements
Barrier: many journal reviews focus on expert judgments about contribution to knowledge, rather than methods and usability
• Supplement peer review of studies with review by methodologists and end users
Barrier: space restrictions in journals prevent publication of details of interventions and tests
• Support free-access repositories—separate from any publications—so that clinicians and researchers have details of the treatments, tests, or instruments studied

CONSORT=Consolidated Standards of Reporting Trials. STARD=Standards for the Reporting of Diagnostic Accuracy Studies.
We are grateful to Doug Altman, Luis Gabriel Cuervo, John Kirwan,
Marie Claude Lavoie, Sally Davies, Tom Walley, and an anonymous referee
for helpful comments on an earlier draft of this paper.
1 Liberati A. An unfinished trip through uncertainties. BMJ 2004; 328: 531.
2 Scherer RW, Langenberg P, von Elm E. Full publication of results
initially presented in abstracts. Cochrane Database Syst Rev 2007;
2: MR000005.
3 Cooksey D. A review of UK health research funding. December 2006 (accessed Feb 10, 2009).
4 National Institute for Health Research. Transforming health research: the first two years. Progress report 2006–2008 (accessed Feb 10, 2009).
5 Garattini S, Chalmers I. Patients and the public deserve big
changes in evaluation of drugs. BMJ 2009; 338: 804–06.
6 Gross CP, Anderson GF, Powe NR. The relation between funding
by the National Institutes of Health and the burden of disease.
N Engl J Med 1999; 340: 1914–15.
7 Stuckler D, King L, Robinson H, McKee M. WHO’s budgetary
allocations and burden of disease: a comparative analysis. Lancet
2008; 372: 1563–69.
8 Perel P, Miranda JJ, Ortiz Z, Casas JP. Relation between the global burden of disease and randomized clinical trials conducted in Latin America published in the five leading medical journals. PLoS ONE 2008; 3: e1696.
9 Oliver S, Gray J. A bibliography of research reports about patients’, clinicians’ and researchers’ priorities for new research. Oxford: James Lind Alliance, 2006 (accessed Feb 3, 2009).
10 Tallon D, Chard J, Dieppe P. Relation between agendas of the
research community and the research consumer. Lancet 2000;
355: 2037–40.
11 Chalmers I, Rounding C, Lock K. Descriptive survey of
non-commercial randomised controlled trials in the United
Kingdom, 1980–2002. BMJ 2003; 327: 1017.
12 Hewlett S, De Wit M, Richards P, et al. Patients and professionals as research partners: challenges, practicalities and benefits. Arthritis Rheum 2006; 55: 676–80.
13 Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews
when designing studies. Clin Trials 2005; 2: 260–64.
14 Patsopoulos NA, Analatos AA, Ioannidis JP. Relative citation impact
of various study designs in the health sciences. JAMA 2005;
293: 2362–66.
15 Mallett S, Clarke M. How many Cochrane reviews are needed to cover existing evidence on the effects of health care interventions? ACP J Club 2003; 139: A11.
16 Hewitt C, Hahn S, Torgerson DJ, Watson J, Bland JM. Adequacy
and reporting of allocation concealment: review of recent trials
published in four general medical journals. BMJ 2005;
330: 1057–58.
17 Rutjes AW, Reitsma JB, Di Nisio M, Smidt N, van Rijn JC,
Bossuyt PM. Evidence of bias and variation in diagnostic accuracy
studies. CMAJ 2006; 174: 469–76.
18 Chalmers I. From optimism to disillusion about commitment to
transparency in the medico-industrial complex. J R Soc Med 2006;
99: 337–41.
19 Hopewell S, Clarke MJ, Stewart L, Tierney J. Time to publication for
results of clinical trials. Cochrane Database Syst Rev 2007; 2: MR000011.
20 Hopewell S, Dickersin K, Clarke MJ, Oxman AD, Loudon K.
Publication bias in clinical trials. Cochrane Database Syst Rev 2008;
2: MR000006.
21 Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the
empirical evidence of study publication bias and outcome reporting
bias. PLoS ONE 2008; 3: e3081.
22 Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing
from treatment descriptions in trials and reviews? BMJ 2008;
336: 1472–74.
23 Young C, Horton R. Putting clinical trials into context. Lancet 2005;
366: 107–08.
... Existen importantes rezagos en los indicadores de innovación y prevalecen las barreras culturales, regulatorias y financieras que obstaculizan la innovación (México Business Publishing, 2019; Pérez-Oribe & Ibarra, 2019). Si bien en los últimos años se ha visto la aparición de varias iniciativas útiles que avanzan en la dirección correcta, estos esfuerzos pueden encontrarse atomizados dentro de las organizaciones, por lo que es indispensable identificar procesos que permitan acortar la brecha entre los resultados de investigación y su uso en la atención a la salud (Chalmers & Glasziou, 2009;Kitson & Straus, 2010). En el caso del sector público existe además un imperativo ético para que los descubrimientos científicos en el área de la salud beneficien tanto a los pacientes como a la sociedad en general. ...
Full-text available
El cambio constante, las crisis en diversas esferas y la incertidumbre económica motivan el desarrollo de estudios para promover el trabajo interdisciplinario para acelerar las soluciones en la atención a la salud. En este contexto el objetivo de esta investigación es comprobar empíricamente la validez de una estructura teórica de gestión del conocimiento para la investigación traslacional, para ayudar a entender mejor la innovación en salud, sin sacrificar las divergencias de pensamiento o el aporte a la investigación básica. Por medio de la metodología del estudio de caso, se integraron los resultados de la revisión sistemática de la literatura para realizar entrevistas semiestructuradas a investigadores de un Instituto Nacional de Salud en México. Aún con las limitantes del uso de un único caso, el caso posee como atributos el estudio en profundidad de un fenómeno complejo y poco conocido. Como resultado se identificaron las áreas de oportunidad para la gestión del conocimiento para cada una de las etapas de la investigación traslacional, identificando como futuras líneas de investigación el diseño de prácticas para cada etapa.
... Road mapping is a consensual process which identifies the best way to proceed [13]. Prioritizing research areas is the primary step for road mapping; it identifies a clear strategy for future investigations by addressing specific research questions and changing priorities [14,15]. Thus, interventions should arise from valid prioritization of problem [10]. ...
... This means it is difficult to compare findings from these trials, where combined data from small studies could give us clearer information, even more important with so few research studies. The creation of a core outcome set (COS) is urgently needed to improve the quality and efficiency of future research in dysarthria to reduce the widely recognised problem of waste in medical research [10]. ...
Full-text available
Background Dysarthria after stroke is when speech intelligibility is impaired, and this occurs in half of all stroke survivors. Dysarthria often leads to social isolation, poor psychological well-being and can prevent return to work and social lives. Currently, a variety of outcome measures are used in clinical research and practice when monitoring recovery for people who have dysarthria. When research studies use different measures, it is impossible to compare results from trials and delays our understanding of effective clinical treatments. The aim of this study is to develop a core outcome set (COS) to agree what aspects of speech recovery should be measured for dysarthria after stroke (COS-Speech) in research and clinical practice. Methods The COS-Speech study will include five steps: (1) development of a long list of possible outcome domains of speech that should be measured to guide the survey; (2) recruitment to the COS-Speech study of three key stakeholder groups in the UK and Australia: stroke survivors, communication researchers and speech and language therapists/pathologists; (3) two rounds of the Delphi survey process; (4) a consensus meeting to agree the speech outcomes to be measured and a follow-up consensus meeting to match existing instruments/measures (from parallel systematic review) to the agreed COS-Speech; (5) dissemination of COS-Speech. Discussion There is currently no COS for dysarthria after stroke for research trials or clinical practice. The findings from this research study will be a minimum COS, for use in all dysarthria research studies and clinical practice looking at post-stroke recovery of speech. These findings will be widely disseminated using professional and patient networks, research and clinical forums as well as using a variety of academic papers, videos, accessible writing such as blogs and links on social media. 
Trial registration COS-Speech is registered with the Core Outcome Measures in Effectiveness Trials (COMET) database, October 2021 . In addition, “A systematic review of the psychometric properties and clinical utility of instruments measuring dysarthria after stroke” will inform the consensus meeting to match measures to COS-Speech. The protocol for the systematic reviews registered with the International Prospective Register of Systematic Reviews. PROSPERO registration number: CRD42022302998 .
... Previous studies have demonstrated a mismatch between research priorities identified by people living with a condition, clinicians and researchers [8,9]. The James Lind Alliance (JLA) works with Priority Setting Partnerships (PSPs) of people living with conditions and clinicians to identify questions about treatments and healthcare interventions and prioritize areas for research [10]. ...
Full-text available
Objectives: To identify and prioritize the top 10 research questions for PsA. Methods: The British Psoriatic Arthritis Consortium (BritPACT) formed a Priority Setting Partnership (PSP) comprising of people living with PsA, carers and clinicians, supported by the James Lind Alliance (JLA). This PSP followed the established three-stage JLA process: first, an online survey of people living with PsA, carers and clinicians to identify PsA questions, asking, 'What do you think are the most important unanswered questions in psoriatic arthritis research?' The questions were checked against existing evidence to establish 'true uncertainties' and grouped as 'indicative questions' reflecting the overarching themes. Then a second online survey ranked the 'true uncertainties' by importance. Finally, a workshop including people living with PsA and clinician stakeholders finalized the top 10 research priorities. Results: The initial survey attracted 317 respondents (69% people living with PsA, 15% carers), with 988 questions. This generated 46 indicative questions. In the second survey, 422 respondents (78% people living with PsA, 4% carers) prioritized these. Eighteen questions were taken forward to the final online workshop. The top unanswered PsA research question was 'What is the best strategy for managing patients with psori-atic arthritis including non-drug and drug treatments?' Other top 10 priorities covered diagnosis, prognosis, outcome assessment, flares, comor-bidities and other aspects of treatment ( Conclusion: The top 10 priorities will guide PsA research and enable PsA researchers and those who fund research to know the most important questions for people living with PsA.
... Standardized outcome reporting is crucial for meta-analysis and translation of findings into clinical decision making. 1 Chalmers and Glasziou reported that 85% of research money is wasted because (i) important outcomes are not measured, (ii) planned outcomes are not reported and (iii) published research fails to set the study in the context of similar research previously conducted due to heterogeneity of outcomes choices. 2 One approach to address these problems is the development and implementation of a core outcome set (COS). A COS is an internationally agreed, minimum set of outcomes that are to be consistently measured and reported in all clinical trials. ...
... A number of preprints also contained extremely serious issues, such as ethical and privacy concerns, data manipulation, and flawed designs (21). These data support the notion of a proliferation of bad quality work over the course of the pandemic ("research waste", (22,23)), ultimately leading to a spread of misinformation (24). ...
Background: The quality of COVID-19 preprints should be considered with great care, as their contents can influence public policy. Efforts to improve preprint quality have mostly focused on introducing quick peer review, but surprisingly little has been done to calibrate the public’s evaluation of preprints and their contents. Purpose: The PRECHECK project aimed to generate a tool to teach and guide scientifically literate non-experts to critically evaluate preprints, on COVID-19 and beyond. Methods: To create a checklist, we applied a 4-step procedure consisting of an initial internal review, an external review by a pool of experts (methodologists, meta-researchers/experts on preprints, journal editors, and science journalists), a final internal review, and an implementation stage. For the external review step, experts rated the relevance of each element of the checklist on five-point Likert scales, and provided written feedback. After each internal review round, we applied the checklist on a set of high-quality preprints from an online list of milestone research works on COVID-19 and low-quality preprints, which were eventually retracted, to verify whether the checklist can discriminate between the two categories. Results: At the external review step, 26 of the 54 contacted experts responded. The final checklist contained 4 elements (Research question, Study type, Transparency and integrity, and Limitations), with ‘superficial’ and ‘deep’ levels for evaluation. When using both levels of evaluation, the checklist was effective at discriminating high- from low-quality preprints. Its usability was confirmed in workshops with our target audience: Bachelors students in Psychology and Medicine, and science journalists. Conclusions: We created a simple, easy-to-use tool for helping scientifically literate non-experts navigate preprints with a critical mind. 
We believe that our checklist has great potential to help guide decisions about the quality of preprints on COVID-19 in our target audience and that this extends beyond COVID-19.
... Reporting completeness, indeed, does not reflect the quality of the entire study but has a substantial effect on evaluation and clinical utilization. Insufficient reporting hinders the identification, transformation, and use of all available prognostic prediction models and causes research waste [36,37]. Reporting guideline-TRIPODmay be an effective solution. ...
Full-text available
Background To investigate the reporting of prognostic prediction model studies in obstetric care through a cross-sectional survey design. Methods PubMed was searched to identify prognostic prediction model studies in obstetric care published from January 2011 to December 2020. The quality of reporting was assessed with the TRIPOD checklist. The overall adherence by study and the adherence by item were calculated separately, and linear regression analysis was conducted to explore the association between overall adherence and prespecified study characteristics. Results A total of 121 studies were included, but no study completely adhered to the TRIPOD. The results showed that overall adherence was poor (median 46.4%), with no significant improvement observed after the release of the TRIPOD (43.9% to 46.7%). Studies including both model development and external validation had higher reporting quality than those including model development only (68.1% vs. 44.8%). Among the 37 items required by the TRIPOD, 10 items were reported adequately, with an adherence rate over 80%; the remaining 27 items had adherence rates ranging from 2.5% to 79.3%. In addition, 11 items had a reporting rate lower than 25.0%, even though they covered key methodological aspects, including blinded assessment of predictors (2.5%), methods for model-building procedures (4.5%) and predictor handling (13.5%), how to use the model (13.5%), and presentation of model performance (14.4%). Conclusions Over a 10-year span, prognostic prediction studies in obstetric care continued to be poorly reported and did not improve even after the release of the TRIPOD checklist. Substantial efforts are warranted to improve the reporting of obstetric prognostic prediction models; in particular, adherence to the TRIPOD checklist is highly desirable.
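The two adherence summaries described in the Methods (overall adherence by study and adherence by item) are simple proportions over a studies-by-items scoring matrix. A minimal sketch, with hypothetical function names and illustrative data not taken from the study above:

```python
# Sketch of per-study and per-item adherence rates for a reporting
# checklist such as TRIPOD. Rows are studies, columns are checklist
# items; a cell is 1 if the study reported that item, else 0.

def adherence_by_study(matrix):
    """Fraction of checklist items each study reports (one value per row)."""
    return [sum(row) / len(row) for row in matrix]

def adherence_by_item(matrix):
    """Fraction of studies reporting each item (one value per column)."""
    n_studies = len(matrix)
    n_items = len(matrix[0])
    return [sum(row[j] for row in matrix) / n_studies for j in range(n_items)]

# Three hypothetical studies scored against four checklist items.
scores = [
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
]
print(adherence_by_study(scores))  # one adherence value per study
print(adherence_by_item(scores))   # one adherence value per item
```

The study-level values feed the median and regression analyses, while the item-level values identify which checklist items (such as blinded assessment of predictors) are rarely reported.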
... Although the need to disseminate information about covid-19 rapidly to the community and to health systems was imperative, major concerns were raised about scientific rigour, since studies conducted with inadequate methodologies can produce flawed data, yielding biased and unreliable results. To avoid this, the choice of the research question, the study design, the suitability of the publication, and the quality of reporting are important steps in the methodological construction of research (CHALMERS; GLASZIOU, 2009; MAHASE, 2020). Hence the importance of methodological studies, which describe the development, validation, and evaluation of their instruments and methods, with the aim of presenting solid and reliable results, rigorous testing of interventions, and sophisticated data-collection procedures in the health fields (JUNG et al., 2021; MBUAGBAW et al., 2020). ...
Full-text available
OBJECTIVE: to describe the methodology used in the study, the sample, and the prevalence of symptoms in the acute phase of infection according to socioeconomic variables. METHODS: cross-sectional study conducted in Rio Grande with individuals infected with covid-19 between December 2020 and March 2021. Nineteen symptoms present during the acute phase of infection were investigated and analysed, both separately and in categories of “0-4”, “5-9”, and “10 or more”, according to sex, age, and economic class. RESULTS: 2,919 people made up the sample. The most prevalent symptoms were fatigue (73.7%), headache (67.2%), loss of taste (65.9%), loss of smell (63.9%), and muscle pain (62.3%). Regarding the occurrence of symptoms stratified by sex, all symptoms except productive cough were statistically higher among females. With respect to age, headache, pain/discomfort when breathing, loss of taste, loss of smell, fatigue, sore throat, nasal congestion, diarrhoea, joint pain, and muscle pain were statistically higher among adults (18-59 years). As for economic class, the prevalence of shortness of breath, pain/discomfort when breathing, altered sensitivity, and joint pain increased linearly as economic class decreased. CONCLUSION: the results of this study identified the most frequent symptoms in the acute phase of covid-19 and their distribution across groups, providing data for the implementation of public policies by managers and support for health professionals caring for this population.
Routine implementation and sustainability of evidence-based practices (EBPs) into health care is often the most difficult stage in the change process. Despite major advances in implementation science and quality improvement, a persistent 13- to 15-year research-to-practice gap remains. Nurse leaders may benefit from tools to support implementation that are based on scientific evidence and can be readily integrated into complex health care settings. This article describes development and evaluation of an evidence-based implementation and sustainability toolkit used by health care clinicians seeking to implement EBPs. For this project, implementation science and EBP experts created initial iterations of the toolkit based on Rogers' change theory, the Advancing Research through Close Collaboration (ARCC) model, and phases and strategies from implementation science. Face validity and end-user feedback were obtained after piloting the tool with health care clinicians participating in immersive EBP sessions. The toolkit was then modified, with subsequent content validity and usability evaluations conducted among implementation science experts and health care clinicians. This article presents the newly updated Fuld Institute Evidence-based Implementation and Sustainability Toolkit for health care settings. Nurse leaders seeking to implement EBPs may benefit from an evidence-based toolkit to provide a science-informed approach to implementation and sustainability of practice changes.
Objectives. —To estimate the rate of full publication of the results of randomized clinical trials initially presented as abstracts at national ophthalmology meetings in 1988 and 1989; and to combine data from this study with data from similar studies to determine the rate at which abstracts are subsequently published in full and the association between selected study characteristics and full publication. Data Sources. —Ophthalmology abstracts were identified by review of 1988 and 1989 meeting abstracts for the Association for Research in Vision and Ophthalmology and the American Academy of Ophthalmology. Similar studies were identified either from reports contained in our files or through a MEDLINE search, which combined the textword "abstract" with "or" statements to the Medical Subject Headings ABSTRACTING & INDEXING, CLINICAL TRIALS, PEER REVIEW, PERIODICALS, MEDICAL SOCIETIES, PUBLISHING, MEDLINE, INFORMATION SERVICES, and REGISTRIES. Study Selection. —Ophthalmology abstracts were selected from the meeting proceedings if they reported results from a randomized controlled trial. For the summary study, similar studies were eligible for inclusion if they described follow-up and subsequent full publication for a cohort of abstracts describing the results of any type of research study. All studies had to have followed up abstracts for at least 24 months to be included. Data Extraction. —Authors of ophthalmology abstracts were contacted by letter to ascertain whether there was subsequent full publication. Other information, including characteristics of the study design possibly related to publication, was taken from the abstract. For the summary study, rates of full publication were taken directly from reported results, as were associations between study factors (ie, "significant" results and sample size) and full publication. Data Synthesis. —Sixty-six percent (61/93) of ophthalmology abstracts were published in full. 
Combined results from 11 studies showed that 51% (1198/2391) of all abstracts were subsequently published in full. Full publication was weakly associated with "significant" results and sample size above the median. Conclusions. —Approximately one half of all studies initially presented in abstract form are subsequently published as full-length reports. Most are published in full within 2 years of appearance as abstracts. Full publication may be associated with "significant" results and sample size. (JAMA. 1994;272:158-162)
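The combined estimate quoted above (51%; 1198/2391) is a pooled proportion: summed publication counts divided by summed abstract counts across the follow-up studies. A minimal sketch, with hypothetical per-study counts rather than the actual eleven studies:

```python
# Sketch of pooling abstract-to-full-publication counts across
# follow-up studies. Each entry is (published_in_full, total_abstracts).

def pooled_rate(counts):
    """Pooled proportion: sum of numerators over sum of denominators."""
    published = sum(p for p, _ in counts)
    total = sum(t for _, t in counts)
    return published / total

# Hypothetical cohorts; the first pair echoes the ophthalmology figures.
studies = [(61, 93), (120, 250), (300, 610)]
print(f"{pooled_rate(studies):.1%}")
```

Note that pooling raw counts weights each cohort by its size; a cohort-level average of the individual percentages would generally give a different figure.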
Silvio Garattini and Iain Chalmers argue that ending the secrecy surrounding drug trials would benefit all parties
Ministers of health, donor agencies, philanthropists, and international agencies will meet at Bamako, Mali, in November, 2008, to review global priorities for health research. These individuals and organisations previously set health priorities for WHO, either through its regular budget or extra-budgetary funds. We asked what insights can be gained as to their priorities from previous decisions within the context of WHO. We compared the WHO biennial budgetary allocations with the burden of disease from 1994-95 to 2008-09. We obtained data from publicly available WHO sources and examined whether WHO allocations varied with the burden of disease (defined by death and disability-adjusted life years) by comparing two WHO regions, Western Pacific and Africa, that are at differing stages of epidemiological transition. We further assessed whether the allocations differed on the basis of the source of funds (assessed and voluntary contributions) and the mechanism for deciding how funds were spent. We noted that WHO budget allocations were heavily skewed towards infectious diseases. In 2006-07, WHO allocated 87% of its total budget to infectious diseases, 12% to non-communicable diseases, and less than 1% to injuries and violence. We recorded a similar distribution of funding in Africa, where nearly three-quarters of mortality is from infectious disease, and in Western Pacific, where three-quarters of mortality is from non-communicable disease. In both regions, injuries received only 1% of total resources. The skew towards infectious diseases was substantially greater for the WHO extra-budget, which is allocated by donors and has risen greatly in recent years, than for the WHO regular budget, which is decided on by member states through democratic mechanisms and has been held at zero nominal growth. Decision makers at Bamako should consider the implications of the present misalignment of global health priorities and disease burden for health research worldwide. 
Funds allocated by external donors substantially differ from those allocated by WHO member states. The meeting at Bamako provides an opportunity to consider how this disparity might be addressed.