Article

Misuse of odds ratios in obesity literature: an empirical analysis of published studies.

Department of Health Care Organization and Policy, School of Public Health, University of Alabama at Birmingham, Birmingham, AL, USA.
Obesity (Impact Factor: 4.39). 03/2012; 20(8):1726-31. DOI: 10.1038/oby.2012.71
Source: PubMed

ABSTRACT: Odds ratios (ORs) are widely used in scientific research to demonstrate associations between outcome variables and covariates (risk factors) of interest, and they are often described in language suitable for risks or probabilities; but odds and probabilities are related, not equivalent. In situations where the outcome is not rare (e.g., obesity), the OR no longer approximates the relative risk (RR) and may be misinterpreted. Our study examines the extent of misinterpretation of ORs in Obesity and the International Journal of Obesity. We reviewed all 2010 issues of these journals to identify all articles that presented ORs. Included articles were then reviewed, primarily for correct presentation and interpretation of ORs, and secondarily for article characteristics that may be associated with how ORs are presented and interpreted. Of the 855 articles examined, 62 (7.3%) presented ORs. ORs were presented incorrectly in 23.2% of these articles. Clinical articles were more likely to present ORs correctly than social science or basic science articles. Studies whose outcome variables had higher relative prevalence were less likely to present ORs correctly. Overall, almost one-quarter of the studies presenting ORs in two leading journals on obesity misinterpreted them. Furthermore, even when researchers present ORs correctly, the lay media may misinterpret them as RRs. Therefore, we suggest that when the magnitude of associations is of interest, researchers should carefully and accurately present interpretable measures of association, including RRs and risk differences, to minimize confusion and misrepresentation of research results.
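To make the distinction concrete, here is a minimal sketch with invented risks for a common outcome; the numbers are assumptions chosen only to illustrate the gap the abstract describes, not data from the study. The last step shows the standard conversion from an OR back to an approximate RR given the unexposed-group risk, which applies in cohort or cross-sectional settings.

    # Hypothetical risks for a common outcome (roughly obesity-like prevalence).
    # These values are assumptions for illustration, not data from the study.
    p1 = 45 / 100   # risk in the exposed group
    p0 = 30 / 100   # risk in the unexposed group

    rr = p1 / p0                                      # relative risk: 1.50
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))    # odds ratio: about 1.91

    # Convert the OR to an approximate RR using the unexposed risk p0
    # (Zhang-Yu formula; exact here because p0 is the true unexposed risk).
    rr_from_or = odds_ratio / (1 - p0 + p0 * odds_ratio)

    print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}, RR recovered from OR = {rr_from_or:.2f}")

With a 30% baseline risk, an RR of 1.5 already corresponds to an OR of about 1.9, so reading the OR as if it were an RR substantially overstates the association; only when the outcome is rare do the two measures nearly coincide.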

Related publications
  • ABSTRACT: Scientific authors who overreach in presenting results can potentially, without intending to, distort the state of knowledge and inappropriately influence clinicians, decision makers, the media, and the public. The goal of the study was to determine the extent to which authors present overreaching statements in the obesity and nutrition literature, and whether journal, author, or study characteristics are associated with this practice. A total of 937 papers on nutrition or obesity published in 2001 and 2011 in leading specialty, medical, and public health journals were systematically studied to estimate the extent to which authors overstate the results of their study in the published abstract. Focus was placed on overreaching statements that may include (1) reporting an associative relationship as causal; (2) making policy recommendations based on observational data that show associations only (e.g., not cause and effect); and (3) generalizing to a population not represented by their sample. Data were compiled in 2012 and analyzed in 2013. Results indicate that 8.9% of studies have overreaching conclusions with a higher percentage in 2011 compared to 2001 (OR=2.14, risk difference=+3.9%, p=0.020). Unfunded studies (OR=2.41, p=0.039) were more likely to have an overstatement of results of the type described here. In contrast, those with a greater number of coauthors were significantly less likely than those with four or fewer authors (the reference group) to have overstated results (seven or eight authors: OR=0.30, risk difference=-6.1%, p=0.008; ≥9 authors: OR=0.41, risk difference=-4.0%, p=0.037). Overreaching in presenting results in studies focused on nutrition and obesity topics is common in articles published in leading journals. Testable strategies are proposed to reduce the prevalence of such instances in the literature.
    American Journal of Preventive Medicine 11/2013; 45(5):615-21. DOI: 10.1016/j.amepre.2013.06.019 · 4.24 Impact Factor
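    The abstract above reports both an OR and a risk difference for the 2001-to-2011 comparison. As a hedged illustration of how the two measures relate, the sketch below starts from an assumed baseline rate (not the paper's data) and derives the comparison-group rate and risk difference implied by an OR of 2.14.

        # Assumed baseline rate of overreaching abstracts; chosen for illustration only,
        # not taken from the paper's data.
        def rate_from_or(p_baseline: float, odds_ratio: float) -> float:
            """Prevalence implied in the comparison group by a baseline prevalence and an OR."""
            odds = odds_ratio * p_baseline / (1 - p_baseline)
            return odds / (1 + odds)

        p_2001 = 0.04
        p_2011 = rate_from_or(p_2001, 2.14)
        print(f"implied 2011 rate: {p_2011:.3f}, risk difference: {p_2011 - p_2001:+.3f}")

    At low baseline rates the OR tracks the ratio of rates fairly closely while the absolute risk difference stays small, which is why reporting both, as this study does, gives a fuller picture than an OR alone.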
  • ABSTRACT: We conducted a systematic review and meta-analysis of the literature examining the relationship between driving performance and engaging in secondary tasks. We extracted data from abstracts of 206 empirical articles published between 1968 and 2012 and developed a logistic regression model to identify correlates of a detrimental relationship between secondary tasks and driving performance. Of 350 analyses, 80% reported finding a detrimental relationship. Studies using experimental designs were 37% less likely to report a detrimental relationship (P = .014). Studies examining mobile phone use while driving were 16% more likely to find such a relationship (P = .009). Quasi-experiments can better determine the effects of secondary tasks on driving performance and consequently serve to inform policymakers interested in reducing distracted driving and increasing roadway safety.
    American Journal of Public Health 01/2014; 104(3). DOI: 10.2105/AJPH.2013.301750 · 3.93 Impact Factor
  • ABSTRACT: Background: Multivariable logistic regression (MLR) is among the most frequently applied multivariable regression models in medical research. Regression models carry assumptions and require a proper building strategy and correct reporting of results. We explain the cautions required in developing a parsimonious MLR model. Methods: In addition to the established criteria for MLR, other useful cautions are explained by dividing them into three major categories: those to be fulfilled at the planning stage, those required at the model-building and diagnostic stage, and those needed when reporting model results. The cautions in each category are further subdivided into associated points, and each point is explicitly explained with a real example from published studies to show what can go wrong if it is not fulfilled or is poorly reported. Results: Researchers are advised to gain an in-depth understanding of the model's assumptions and other related points, such as the method for selecting candidate variables, proper reporting of model results, and the method used for diagnosing outliers, before applying any statistical software. Conclusions: The quality and reliability of an MLR model can be improved by implementing the cautions presented in this article.
    02/2014; 4(1):31-39. DOI: 10.1016/j.cmrp.2014.01.004
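    The abstract above catalogues cautions for building and reporting multivariable logistic regression models. As a minimal, hedged sketch of that kind of workflow, the code below fits a logistic model to simulated data with statsmodels and reports odds ratios with 95% confidence intervals; the variable names, effect sizes, and data are invented, and the steps shown cover only a small part of the planning, diagnostic, and reporting cautions the paper discusses.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated data standing in for a real study; names and effect sizes are invented.
        rng = np.random.default_rng(0)
        n = 500
        age = rng.normal(50, 10, n)
        bmi = rng.normal(27, 4, n)
        linpred = -8 + 0.05 * age + 0.15 * bmi               # linear predictor used to simulate the outcome
        outcome = rng.binomial(1, 1 / (1 + np.exp(-linpred)))
        df = pd.DataFrame({"outcome": outcome, "age": age, "bmi": bmi})

        # Fit the multivariable logistic regression and report ORs with 95% CIs
        # rather than raw coefficients, as part of transparent reporting.
        model = smf.logit("outcome ~ age + bmi", data=df).fit(disp=False)
        ors = np.exp(model.params)
        ci = np.exp(model.conf_int())
        print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))

    A fuller analysis would also address the cautions the abstract highlights, such as how candidate variables are selected, whether there are enough events per variable, and how outliers and influential observations are diagnosed and reported.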
