Table 3 - available via license: Creative Commons Attribution 4.0 International
Source publication
Background:
Mass media through the Internet is a powerful means of disseminating medical research. We aimed to determine whether and how the interpretation of research results is misrepresented by the use of "spin" in the health section of Google News. Spin was defined as a specific way of reporting, from whatever motive (intentional or unintentiona...
Citations
... For example, a third of UK health research press releases make causal claims based on correlational evidence, give exaggerated advice, or extrapolate animal research to humans (Sumner et al., 2014). About half of the health sections of US, UK, and Canada editions of Google News from July 2013 to January 2014 claimed causal effects despite non-randomized study designs (Haneef et al., 2015). Given the prevalence of mistaking correlational evidence for causal evidence, our rubrics assess the fidelity of media articles in presenting the type of evidence as correlational versus causal. ...
It is perhaps uncontroversial to claim that behavioral science research is playing an increasingly important role in practice. However, practitioners largely rely on media reports rather than original research articles to learn about the science. Do these media reports contain all the information needed to understand the nuances of the research? To assess this question, we develop a set of rubrics to evaluate the fidelity of the media report to the original research article. As an illustration, we apply these rubrics to a sample of media reports based on several research articles published in one journal and identify common patterns, trends, and pitfalls in media presentations. We find preliminary evidence of low fidelity in presenting participant characteristics, contextual elements, and limitations of the original research. The media also appear to misreport correlational evidence as causal and sometimes fail to acknowledge the hypothetical nature of the evidence when hypothetical scenarios were used as the sole basis of conclusions. Furthermore, the media often present broad conclusions and personal opinions as directly backed by scientific evidence. To support more discerning consumption of behavioral insights from media sources, we propose a checklist to guide practitioners in evaluating and using information from media sources.
... Science communication is all too often polarized, with one side promoting scientific discoveries in an exaggerated manner without adequately addressing their limitations [3,4], while the other reports on a lack of reproducibility in science. Stories on the latter are frequently sensationalized, with narratives such as "science is broken" or news stories focusing heavily on instances of fraud and scientific misconduct [5,6]. ...
Public engagement with reproducibility is crucial for fostering trust in science. This Community Page outlines, through the example of baking Christmas tree meringues, how scientists can effectively engage and educate the public about the importance of reproducibility in research.
... People often overstep the available evidence; that is, they draw inferences that are stronger than warranted by the evidence. This is the case for the general public, university students, media, practitioners, and even researchers (e.g., Bleske-Rechek et al., 2015; Brown et al., 2013; Cofield et al., 2010; Cooper et al., 2012; Han et al., 2022; Haneef et al., 2015; Lazarus et al., 2015; Motz et al., 2023; Mueller & Coon, 2013; Nunes et al., 2019; Seifert et al., 2022; Sibulkin & Butler, 2019; Sumner et al., 2014). For example, reviews of published articles on nutrition and obesity found that many of the authors of these studies used causal language when interpreting their own and others' findings even when the research design was inadequate for testing causal effects (Brown et al., 2013; Cofield et al., 2010). ...
... Notwithstanding potential problems with our randomized experiment study description and sampling issues, the responses for some of the non-experimental study descriptions clearly show overstepping of the evidence by at least some researchers. Similar overstepping has commonly been observed with a variety of populations, topics, and methods (e.g., Bleske-Rechek et al., 2015; Brown et al., 2013; Cofield et al., 2010; Cooper et al., 2012; Han et al., 2022; Haneef et al., 2015; Lazarus et al., 2015; Motz et al., 2023; Mueller & Coon, 2013; Nunes et al., 2019; Seifert et al., 2022; Sibulkin & Butler, 2019; Sumner et al., 2014). Further, our findings for the intuitive versus counterintuitive results add to past speculation and evidence that critical thinking may be less engaged by intuitive than counterintuitive information (e.g., Lord et al., 1979, 1984; Pennycook & Rand, 2019). ...
Objectives
We examined the inferences authors of articles published in violence journals draw from studies about the relationship between attitudes and violent offending.
Methods
Participants (N = 120, 58.3% women) were randomly assigned to one of 12 hypothetical studies, which varied on research design and whether the results were intuitive or counterintuitive.
Results
Participants rarely incorrectly stated that the study demonstrated causation or prediction when not warranted by the research design. However, some participants failed to acknowledge plausible alternate interpretations (e.g., a third variable) and selected causal implications that were not warranted by the study’s research design. This was more often the case when the studies’ results were intuitive than when they were counterintuitive.
Conclusions
Though we did find some evidence of overstepping, our findings suggest that researchers may not overstep the evidence as much as suggested by previous studies.
... Spin can extend beyond published research papers into the news stories that report research. Numerous studies have found evidence of spin, exaggeration or misrepresentation in the media reporting of research [199–202]. That the media drives hype, sensationalism and the minimisation of uncertainty in science communication is unsurprising, and researchers have a role in resisting that influence. ...
The personal, social and economic burden of chronic pain is enormous. Yet patients with chronic pain, clinicians and the public are often poorly served by an evidence architecture that contains multiple structural weaknesses which reduce confidence in treatment practice. Weaknesses include incomplete research governance, a lack of diversity and inclusivity, inadequate stakeholder engagement, poor methodological rigour and incomplete reporting, a lack of data accessibility and transparency, and a failure to communicate findings with appropriate balance. These issues span pre-clinical research, clinical trials, systematic reviews and impact on the development of clinical guidance and practice. Research misconduct and inauthentic data present a further critical risk. These problems are not unique to research in pain but, combined, they increase bias and uncertainty in research, waste resources, drive the provision of low-value care, increase research and healthcare costs and impede the discovery of potentially more effective interventions, all of which negatively impact people living with pain. This White Paper summarises the discussions and recommendations of the ENhancing TRUSTworthiness in Pain Evidence (ENTRUST-PE) network project, which received funding from the European Commission in 2023 (ERA-NET NEURON Consortium). An international and interdisciplinary group from the pain research community met on multiple occasions with the objective of developing a novel integrated framework for enhancing and facilitating the trustworthiness of evidence for pain. The resulting framework conceptualises trustworthy research as being underpinned by 7 core values: 1. Integrity and Governance, 2. Equity, Diversity and Inclusivity, 3. Patient and Public Involvement and Engagement, 4. Methodological Rigour, 5. Openness and Transparency, 6. Balanced Communication, and 7. Data Authenticity.
We propose that each of these core values should drive universal actions and behaviours in researchers and stakeholders across all roles and stages of the research process. In this paper we summarise the challenges addressed by each core value, make recommendations for each key stakeholder group in the research ecosystem in order to enhance the trustworthiness of pain research and present the case for systems-level change.
... Another consequence of spin concerns media coverage and news reports. 11,12 Patients and the general public, especially those searching for new therapies and drugs, are more likely to believe the results of studies reported with spin. 13 Therefore, spin could result in manipulation and misinformed choices. ...
... 7 In addition, misinterpretation of the results of a study can raise suspicions about new treatments and influence policymakers to approve unsuitable regulations and policies. 7,12 Thus, it is opportune to identify, characterize, and quantify spin within the dental literature. As such, this scoping review aimed to map the practice of spin in scientific publications in dentistry. ...
The aim of this review was to map the practice of spin in scientific publications in the dental field. After registering the review protocol (osf.io/kw5qv/), a search was conducted in MEDLINE via PubMed, CENTRAL, Embase, Scopus, LILACS, ClinicalTrials.gov, and OpenGrey databases in June 2023. Any study that evaluated the presence of spin in dentistry was eligible. Data were independently extracted in duplicate by two reviewers. After removing duplicates, 4888 records were screened and 38 were selected for full-text review. Thirteen studies met the eligibility criteria, all of which detected the presence of spin in the primary studies, with the prevalence of spin ranging from 30% to 86%. The most common types of spin assessed in systematic reviews were failure to mention adverse effects of interventions and to report the number of studies/patients contributing to the meta-analysis of main outcomes. In randomized controlled trials, there was a focus on statistically significant within-group and between-group comparisons for primary or secondary outcomes (in abstract results) and claiming equivalence/noninferiority/similarity for statistically nonsignificant results (in abstract conclusions). The practice of spin is widespread in dental scientific literature among different specialties, journals, and countries. Its impact, however, remains poorly investigated.
... It is therefore critical that abstracts transparently report the results of both the beneficial and adverse effects of interventions without misleading the readers. Misleading reporting, misleading interpretation, and misleading extrapolation of study results has been called "spin" [2,3]. In this study, we assessed whether adverse effects of interventions were reported or considered in abstracts of both Cochrane and non-Cochrane reviews of orthodontic interventions and whether spin and what type of spin regarding adverse effects was present when comparing the abstracts with what was sought and reported in these reviews. ...
... In this context, it is important that findings on adverse effects are presented accurately in the abstract without misleading the reader. "A distorted presentation of study results" has been called "spin" [2,3]. This definition and other commonly used definitions of spin and key terminology used in this article are listed in Table 1 [2,3,17–25]. ...
Background
It is critical that abstracts of systematic reviews transparently report both the beneficial and adverse effects of interventions without misleading the readers. This cross-sectional study assessed whether adverse effects of interventions were reported or considered in abstracts of systematic reviews of orthodontic interventions and whether spin on adverse effects was identified when comparing the abstracts with what was sought and reported in these reviews.
Methods
This cross-sectional study (part 2 of 2) used the same sample of 98 systematic reviews of orthodontic interventions as used in part 1. Eligible reviews were retrieved from the Cochrane Database of Systematic Reviews and the 5 leading orthodontic journals between August 1, 2009 and July 31, 2021. Prevalence proportions were sought for 3 outcomes as defined in the published protocol. Univariable logistic regression models were built to explore associations between the presence of spin in the abstract and a series of predictors. Odds ratios (OR) with 95% confidence intervals (95% CI) were used to quantify the strength of associations and their precision.
Results
76.5% (75/98) of eligible reviews reported or considered (i.e., discussed, weighted, etc.) potential adverse effects of orthodontic interventions in the abstract, and the proportion of spin on adverse effects was 40.8% (40/98) in the abstracts of these reviews. Misleading reporting was the predominant category of spin, i.e., 90% (36/40). Our explorative analyses found that, compared to the Cochrane Database of Systematic Reviews, all 5 orthodontic journals had similar odds of the presence of spin on adverse effects in abstracts of systematic reviews of orthodontic interventions. The odds of the presence of spin did not change over the sampled years (OR: 1.03, 95% CI: 0.9 to 1.16) and did not depend on the number of authors (OR: 0.93, 95% CI: 0.71 to 1.21), the type of orthodontic intervention (OR: 1.1, 95% CI: 0.45 to 2.67), or whether conflicts of interest were reported (OR: 0.74, 95% CI: 0.32 to 1.68).
Conclusion
End users of systematic reviews of orthodontic interventions should interpret results on adverse effects in the abstracts of these reviews with caution, because those results may be compromised by spin, whether through non-reporting or misleading reporting of adverse effects.
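The univariable odds ratios with 95% CIs reported in the Results above can be illustrated with a minimal sketch. The 2×2 counts below are hypothetical, not taken from the study; the review fitted univariable logistic regression models, whereas this sketch uses the equivalent closed-form estimate for a single binary predictor, with a Wald interval on the log-odds scale.

```python
import math

# Hypothetical 2x2 table (illustrative counts, not data from the review):
#                 spin present   spin absent
# journal A            20            20
# journal B            10            30
a, b, c, d = 20, 20, 10, 30

# Odds ratio: odds of spin in journal A relative to journal B
or_ = (a * d) / (b * c)

# Wald 95% CI: exponentiate log(OR) +/- 1.96 * SE(log OR)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)

print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

An interval that spans 1 (as in all the ORs reported above) indicates no statistically detectable association at the 5% level.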
... We theorized that spin practices in prediction model studies might have a larger impact on clinical guidelines and research funds, especially given the rise of machine learning and artificial intelligence in health care applications [16,40–42]. The effects on readers' interpretation, the role of peer reviewers, the number of citations, and the assignment of research funds still need to be assessed within studies on prediction models [11,13,14,23]. ...
Objective: We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques.
Study Design and Setting: We systematically searched PubMed from 01-2018 to 12-2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty.
Results: We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6%, [95% CI 63.4 - 83.3]) and 53/81 main texts (65.4%, [95% CI 54.6 - 74.9]). Of the 21 abstracts that recommended the model to be used in daily practice, 20 (95.2% [95% CI 77.3 - 99.8]) lacked any external validation of the developed models. Likewise, 74/133 (55.6% [95% CI 47.2 - 63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in 13/152 (8.6% [95% CI 5.1 - 14.1]) studies.
Conclusion: Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin will enhance the sound reporting of prediction model studies.
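The proportions in the Results above are reported with 95% CIs (e.g., 53/71 = 74.6% [63.4–83.3]) that are numerically consistent with Wilson score intervals, though the abstract does not state which interval was used. A minimal sketch of that interval, with `wilson_ci` as my own helper name:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Reproduces the first interval reported above: 53/71 = 74.6% [63.4 - 83.3]
lo, hi = wilson_ci(53, 71)
print(f"{53 / 71:.1%} [{lo:.1%} to {hi:.1%}]")
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and remains reasonable for small samples, which matters for cells like 20/21 in the results above.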
... Mass media news coverage through the Internet often draws its content from a large number of scientific journal articles but often does not report or translate the findings correctly. Haneef et al. found at least one instance of spin (misleading interpretation) in 114 (88%) of news items and 18 different types of spin in the news, related to misleading reporting (59%), misleading interpretation (69%), or overgeneralization/misleading extrapolation (41%) of the results, such as extrapolating a beneficial effect from an animal study to humans (21%) [21,22]. In addition, easy access to sharing and downloading information has driven this information explosion, facilitated by the ever-increasing number of smartphone/mobile phone users worldwide, which reached 6.65 billion in 2022 [23]. ...
Both traditional and social media information sources have disseminated information on the COVID-19 pandemic. The content shared may influence public opinion on different mitigation strategies, including vaccination. Misinformation can alter risk perception and increase vaccine hesitancy. This study aimed to explore the impact of using social media as the primary information source about the COVID-19 vaccine on COVID-19 vaccine hesitancy among people living in Canada. Secondary objectives identified other predictors of vaccine hesitancy and distinguished the effects of using traditional and social media sources. We used quota sampling of adults in Canada [N = 985] to conduct an online survey on the Pollfish survey platform between 21st and 28th May 2021. We then used bivariate chi-squared tests and multivariable logistic regression modeling to explore the associations between using social media as one’s primary source of information about the COVID-19 vaccine and vaccine hesitancy. We further analyzed the association between specific types of channels of information and vaccine hesitancy. After controlling for covariates such as age, sex, race, and ethnicity, individuals reporting social media as their primary source of COVID-19 vaccine information had 50% higher odds of vaccine hesitancy than those who had not. Among different channels of information, we found that information from television was associated with 40% lower odds of vaccine hesitancy. Since social media platforms play an essential role in influencing hesitancy in taking the COVID-19 vaccination, it is necessary to improve the quality of social media information sources and raise people’s trust in information. Meanwhile, traditional media channels, such as television, are still crucial for promoting vaccination programs.
... It may also indirectly erode research quality by enabling researchers to make ambiguously causal implications without being accountable to the methodological rigor required for causal inference. Otherwise, noncausal language may morph into causal language in outlets for medical practitioners (7,10), press releases (17–19), and media reports (16,20). While some loss of nuance may be attributed to press officers, journalists, and news recipients, too-strong interpretation often starts from the study publications themselves (16). ...
https://academic.oup.com/aje/article/191/12/2084/6655746
... In addition to the omission of limitations and risks, writing techniques used in journal articles, press releases and news media to make scientific research more newsworthy include the use of spin and positive framing. In the context of scientific research, spin has been described as communicating findings so that the benefits of an intervention seem stronger or more positive than they actually are [Haneef et al., 2015]. The motivations to use spin to increase newsworthiness when writing about scientific research in news media have been linked to scientists, public relations specialists and journalists. ...
... Despite there being competing interests for newsworthiness, accuracy and relevance of scientific news stories [Cassels et al., 2003;Caulfield et al., 2014;Haneef et al., 2015;Schwitzer, 2013], the responsibility for the production of inaccurate reporting is not straightforward. Science communication researchers have attributed misrepresentation of scientific research to a complex relationship between scientists, science communicators and journalists [Caulfield, 2005]. ...
... The press release was issued on the 10th of August, and the vast majority of reports were published between the 10th and 12th of August 2017. We restricted our search to Google News because it covers a vast range of news media sources [Filloux, 2013] and has been used previously in media analysis research as the single source of online news media coverage [Haneef et al., 2015; Young Lin and Rosenkrantz, 2017]. Google Chrome, Safari and Firefox were used to search for articles on Google News, all with refreshed browser histories to ensure that all relevant articles were found and that search history did not affect the articles retrieved. ...
Accurate news media reporting of scientific research is important, as most people receive their health information from the media and inaccuracies in media reporting can have adverse health outcomes. We completed a quantitative and qualitative analysis of a journal article, the corresponding press release, and the online news reporting of a scientific study. Four themes were identified in the press release that were directly translated to the news reports and contributed to inaccuracies: sensationalism, misrepresentation, clinical recommendations and subjectivity. The pressures on journalists, scientists and their institutions have led to a mutually beneficial relationship between these actors that can prioritise newsworthiness ahead of scientific integrity, to the detriment of public health.