
Critical analysis of the randomized clinical trials published in the urological literature: are they of good quality?

Urología Colombiana, ISSN 0120-789X, Vol. 18, No. 2, 2009

ABSTRACT Objective: Randomized clinical trials and meta-analyses are considered the top of the evidence pyramid. It is often assumed that holding this hierarchical rank is synonymous with good methodological quality. This study critically evaluates the quality, in terms of validity and reporting of results, of the randomized clinical trials published over the last 12 months in one of the most widely recognized and circulated international journals. Materials and methods: Studies were selected by two investigators through a manual electronic search, which was cross-checked against the PubMed search filters for randomized clinical trials. The selected studies were critically appraised by two investigators trained in critical appraisal of the medical literature, using a standardized tool previously published in the literature. Results: A total of 862 published articles were found, of which only 6% were randomized clinical trials. Most studies showed important deficiencies in describing the randomization mechanism (only 36.5% do so), losses to follow-up (reported in 84% of studies), and the type of analysis (intention-to-treat vs. per-protocol). The sample size calculation is also not described in many studies, so the power to detect differences is difficult to quantify. Results are rarely (3.85%) reported with measures of association such as relative risk (RR), relative risk reduction (RRR), or number needed to treat (NNT). Conclusions: More publications with a randomized clinical trial design are needed in the urological literature, but methodological quality also needs to improve: in many of the published studies it is suboptimal, which is inadequate if we regard randomized clinical trials as the "best evidence".
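The effect measures the abstract finds rarely reported (RR, RRR, NNT) all follow from a standard 2×2 outcome table. A minimal sketch, with hypothetical counts chosen only for illustration:

```python
def effect_measures(events_treated, n_treated, events_control, n_control):
    """Compute relative risk (RR), relative risk reduction (RRR), and
    number needed to treat (NNT) from a 2x2 outcome table."""
    risk_t = events_treated / n_treated          # event risk, treatment arm
    risk_c = events_control / n_control          # event risk, control arm
    rr = risk_t / risk_c                         # relative risk
    rrr = 1 - rr                                 # relative risk reduction
    arr = risk_c - risk_t                        # absolute risk reduction
    nnt = 1 / arr if arr != 0 else float("inf")  # number needed to treat
    return rr, rrr, nnt

# Hypothetical trial: 10/100 events on treatment vs 20/100 on control
rr, rrr, nnt = effect_measures(10, 100, 20, 100)
# rr = 0.5, rrr = 0.5, nnt = 10.0
```

NNT is the reciprocal of the absolute risk reduction, so it depends on baseline risk in a way RR alone does not, which is one reason appraisal tools ask for both.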

JAMA: The Journal of the American Medical Association 09/1996; 276(8):637-9.
    ABSTRACT: The American edition of The Journal of Bone and Joint Surgery (JBJS-A) has included a level-of-evidence rating for each of its clinical scientific papers published since January 2003. The purpose of this study was to assess the type and level of evidence found in nine different orthopaedic journals by applying this level-of-evidence rating system. We reviewed all clinical articles published from January through June 2003 in nine orthopaedic journals. Studies of animals, studies of cadavera, basic-science articles, review articles, case reports, and expert opinions were excluded. The remaining 382 clinical articles were randomly assigned to three experienced reviewers and two inexperienced reviewers, who rated them with the JBJS-A grading system. Each reviewer determined whether the studies were therapeutic, prognostic, diagnostic, or economic, and each rated the level of evidence as I, II, III, or IV. Reviewers were blinded to the grades assigned by the other reviewers. According to the reviewers' ratings, 70.7% of the articles were therapeutic, 19.9% were prognostic, 8.9% were diagnostic, and 0.5% were economic. The reviewers graded 11.3% as Level I, 20.7% as Level II, 9.9% as Level III, and 58.1% as Level IV. The kappa values for the interobserver agreement between the experienced reviewers and the inexperienced reviewers were 0.62 for the level of evidence and 0.76 for the study type. The kappa values for the interobserver agreement between the experienced reviewers were 0.75 for the level of evidence and 0.85 for the study type. The kappa values for the agreement between the reviewers' grades and the JBJS-A grades were 0.84 for the level of evidence and 1.00 for the study type. All kappa values were significantly different from zero (p < 0.0001 for all). The percentage of articles that were rated Level I or II increased in accordance with the 2003 journal impact factors for the individual journals (p = 0.0061). 
Orthopaedic journals with a higher impact factor are more likely to publish Level-I or II articles. The type and level of information in orthopaedic journals can be reliably classified, and clinical investigators should pursue studies with a higher level of evidence whenever feasible.
    The Journal of Bone and Joint Surgery 12/2005; 87(12):2632-8. · 4.31 Impact Factor
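The interobserver agreement in the study above is reported as Cohen's kappa: observed agreement corrected for the agreement expected from category frequencies alone. A minimal two-rater sketch with hypothetical gradings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items: observed
    agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Proportion of items where the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical: two reviewers grading 10 articles as evidence levels I-IV
a = ["I", "II", "IV", "IV", "III", "II", "I", "IV", "IV", "II"]
b = ["I", "II", "IV", "III", "III", "II", "II", "IV", "IV", "II"]
kappa = cohens_kappa(a, b)  # substantial agreement on this toy data
```

Kappa can be well below raw percent agreement when one category dominates, which is why evidence-grading studies report it instead of simple agreement.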
    ABSTRACT: To evaluate the methodological quality and level of evidence of publications in four leading general clinical ophthalmology journals. All 1919 articles published in the American Journal of Ophthalmology, Archives of Ophthalmology, British Journal of Ophthalmology, and Ophthalmology in 2004 were reviewed. The methodological rigor and the level of evidence in the articles were rated according to the McMaster Hedges Project criteria and the Oxford Centre for Evidence-Based Medicine levels of evidence. Overall, 196 (24.4%) of the 804 publications that were included for assessment met the Hedges criteria. Articles on economics evaluation and those on prognosis achieved the highest passing rate, with 80.0% and 74.4% of articles, respectively, meeting the Hedges criteria. Publications on etiology, diagnosis, and treatment fared less well, with respective passing rates of 28.3%, 20.2%, and 14.7%. Published systematic reviews and randomized controlled trials were uncommon in the ophthalmic literature, at least in these four journals during 2004. According to the Oxford criteria, 57.6% of the articles were classified as level 4 evidence compared with 18.1% classified as level 1. Articles on prognosis had the highest proportion (43.0%) rated as level 1 evidence. Generally, articles that reached the Hedges threshold were rated higher on the level-of-evidence scale (Spearman's rho = 0.73; P < 0.001). The methodological quality of publications in the clinical ophthalmic literature was comparable to that in the literature of other specialties. There was substantial heterogeneity in quality between different types of articles. Future methodological improvements should focus on the areas identified as having the largest deficiencies.
    Investigative Ophthalmology & Visual Science 06/2006; 47(5):1831-8. · 3.66 Impact Factor
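The rank correlation reported in the abstract above is Spearman's rho: the Pearson correlation of the ranks, which for untied data reduces to a closed form in the rank differences. A minimal sketch assuming no tied values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for paired samples without ties,
    via the closed form rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone hypothetical data: identical rankings
rho = spearman_rho([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
# rho = 1.0
```

Because it works on ranks, rho only requires a monotone (not linear) relationship, which suits ordinal scales such as evidence levels.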
