Article

Burden of proof: combating inaccurate citation in biomedical literature

... For these reasons, disproportionality analyses should typically be considered hypothesis-generating studies. Yet, approximately one third of the source articles selected in our study claimed the demonstration of a causal link between an adverse event and a drug, and another third used vague and ambiguous wording [39][40][41][42]. Responsibility, therefore, lies primarily with authors and publishers to ensure adequate interpretation of findings in peer-reviewed publications [43]. ...
... Another important finding is that even when authors correctly reported and interpreted their study results, exaggeration of the findings in citing studies is common (i.e., 59.3% of citing studies). These numbers underline the importance of better communication between, and acculturation of, the scientific community and the lay public regarding pharmacovigilance signal detection studies, and of improving the peer review of citations in scientific articles [42]. Presumably, the rate of misinterpretation is even higher outside the scientific literature, among healthcare practitioners who are not adequately trained to interpret disproportionality analyses critically, and among the lay public. ...
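The excerpts above refer repeatedly to disproportionality analyses without showing what such an analysis computes. As background, here is a minimal illustrative sketch of one common disproportionality measure, the reporting odds ratio (ROR), calculated from a 2x2 table of spontaneous reports; the counts are hypothetical and not taken from any cited study.

```python
import math

# Hypothetical 2x2 table of spontaneous adverse-event reports:
#                     event of interest   all other events
# drug of interest            a                  b
# all other drugs             c                  d
a, b, c, d = 40, 960, 200, 48800  # made-up counts for illustration

# Reporting odds ratio (ROR), a common disproportionality measure.
ror = (a * d) / (b * c)

# Approximate 95% confidence interval on the log scale.
se_log_ror = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(ror) - 1.96 * se_log_ror)
ci_high = math.exp(math.log(ror) + 1.96 * se_log_ror)

print(f"ROR = {ror:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# A ROR well above 1 flags a potential signal; as the excerpts stress,
# it supports hypothesis generation, not a demonstrated causal link.
```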
Article
Full-text available
Previous meta-epidemiological surveys have found considerable misinterpretation of results of disproportionality analyses. We aim to explore the relationship between the strength of causal statements used in title and abstract conclusions of pharmacovigilance disproportionality analyses and the strength of causal language used in citing studies. On March 30, 2022, we selected the 30 disproportionality studies with the highest Altmetric Attention Scores. For each article, we extracted all citing studies using the Dimensions database (n = 1434). In parallel, two authors assessed the strength of causal statements in the title and abstract conclusions of source articles and in the paragraph of citing studies. Based on previous studies, the strength of causal language was quantified on a four-level scale (1—appropriate interpretation; 2—ambiguous interpretation; 3—conditionally causal; 4—unconditionally causal). Discrepancies were resolved by discussion until consensus was reached among the team. We assessed the association between the strength of causal statements in source articles and citing studies, separately for the title and abstract conclusions, through multinomial regression models. Overall, 27% (n = 8) of source studies used unconditionally causal statements in their title, 30% (n = 9) in their abstract conclusion, and 17% (n = 5) in both. Only 20% (n = 6) used appropriate statements in their title and in their abstract's conclusions. Among the 622 citing studies analyzed, 285 (45.8%) used unconditionally causal statements when referring to the findings from disproportionality analysis, and only 164 (26.4%) used appropriate language. Multinomial models found that the strength of causal statements in citing studies was positively associated with the strength of causal language used in abstract conclusions of source articles (Likelihood Ratio Test (LogLRT) p < 0.00001) but not in the titles. In particular, among studies citing source articles with appropriate interpretation, 30.2% (95% confidence interval [CI] 22.8–37.6) contained unconditionally causal statements in their abstract conclusions, versus 56.4% (95% CI 48.7–64.2) for studies citing source articles with unconditionally causal statements. Nearly half of the studies citing the results of pharmacovigilance disproportionality analyses used causal claims, particularly when the causal language used in the source article was stronger. There is a need for greater caution when writing, interpreting, and citing disproportionality studies.
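To make the analysis described in this abstract concrete, the sketch below fits a multinomial regression of the causal-language level used by citing studies (1-4) on the causal-language level of the source abstract, followed by a likelihood-ratio test. The data are simulated and all variable names are invented; this is not the authors' code or dataset, and it assumes numpy, scipy, and statsmodels are available.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: one row per citing study.
# source_level: causal-language level (1-4) in the cited abstract's conclusion.
# citing_level: causal-language level (1-4) used by the citing study.
n = 600
source_level = rng.integers(1, 5, size=n)
# Simulate a positive association: stronger source language -> stronger citing language.
citing_level = np.clip(source_level + rng.integers(-1, 2, size=n), 1, 4)

# Multinomial logit of citing_level on source_level (treated here as a numeric score).
y = citing_level - 1                          # categories coded 0..3
X = sm.add_constant(source_level.astype(float))
full = sm.MNLogit(y, X).fit(disp=False)

# Intercept-only null model, used for a likelihood-ratio test.
null = sm.MNLogit(y, np.ones((n, 1))).fit(disp=False)
lr_stat = 2 * (full.llf - null.llf)
df = full.params.size - null.params.size      # extra parameters in the full model
p_value = stats.chi2.sf(lr_stat, df)

print(f"LR statistic = {lr_stat:.1f}, df = {df}, p = {p_value:.2g}")
```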
... For example, it is estimated that up to 25% of the general scientific literature contains citation errors. 12 These errors are particularly problematic because many are continuously propagated in subsequent publications and may have significant implications for healthcare policy and outcomes. Publishers may choose to integrate AI tools into the submission process to review citations and flag potential errors. ...
... Publishers may choose to integrate AI tools into the submission process to review citations and flag potential errors. 12 Some publishers have already started to use AI tools such as Proofig 13 to assess image integrity of submitted manuscripts and detect image manipulation. 14 Another area in which AI can be applied is enhancing communication and knowledge dissemination and translation. ...
... Notwithstanding its alignment with the zeitgeist of the ethical treatment of animals such as pets, to our knowledge the UDAR is not endorsed by the UN, UNESCO, or any country in the world, and should therefore not be cited as such, nor as universally or legally binding. Contemporary science faces significant challenges, including the reproducibility crisis [42][43][44], misinformation [45][46][47], mis-citations [48], misquotations [49], and discredit [50,51], in what has been named 'the post-truth era', in which deceptive information spreads widely, notably through social media. Imprecise or false claims in scientific publications have been shown to influence public attitudes and policies on critical issues like vaccination and climate change [50]. ...
Article
Full-text available
Background The Universal Declaration of Animal Rights (UDAR), adopted in 1977 by an international NGO inspired by the Universal Declaration of Human Rights and made public the following year, aimed to establish a universal code for human conduct toward animals. The declaration was revised twice, in 1989 and 2018, but it failed to be internationally recognised or adopted. While its global influence remained limited, misinterpretations of its scope and context have proliferated in legal and veterinary documents. To gauge its impact on scientific literature, a scoping review across three databases (Scopus, Web of Science Core Collection, and Google Scholar) was conducted for publications citing the UDAR from 1979 to 2022. Results In terms of research field, the UDAR is mostly cited in the fields of law (27%), philosophy, ethics, and religion (17%), clinical medicine (17%), and basic medicine (11%). The 1978 UDAR version was most often cited. Among 305 screened publications, 47.9% contained erroneous or misleading claims about the UDAR. Common errors included linking the UDAR to UNESCO (34.8%) and conferring it universal endorsement or legally binding value (10.2%). More than half (57%, 59/103) of the mentions in the ethics section contained errors, namely confusing UDAR with other animal protection texts. Regarding the type of animal use, most misleading claims were found in scientific publications focusing on the use of animals in research. Conclusions The misappropriation of the UDAR risks providing a false sense of legitimacy and moral compass to editors, reviewers, and readers regarding animal use and highlights that the authors are unaware of ethical or regulatory frameworks governing the proper use of animals in science. This is particularly relevant because the 1978 version, which is antithetical to animal use in science, was most often cited, raising concerns about the governance of animal research in some institutions and the efficacy of the peer review process in detecting these errors. Finally, UDAR mentions grew more than the estimated growth of scientific publications worldwide, thus suggesting an increase in its influence.
... Accurate and constructive peer review is a big challenge but remains a priority goal for research integrity, as highlighted by the 2019 Hong Kong manifesto [28]. With regard to DAs, reviewers are asked to be updated to assess the novelty within the existing knowledge, knowledgeable to judge methodological aspects and innovative aspects, and meticulous to comprehensively peruse all aspects of the work, including potential 'spin' in the interpretation [13] as well as inaccurate citations [29]. This long-term mission could be specifically endorsed by scientific societies such as ISoP, the International Society for Pharmacoepidemiology (ISPE) and the European Association for Clinical Pharmacology and Therapeutics (EACPT), which could synergize to develop quality criteria to score DAs, including a dedicated risk-of-bias tool. ...
Article
After 75 years of clinical use of folic acid antagonists such as methotrexate, relevant pharmacological data currently important for the effective and safe use of methotrexate were reviewed to see if it is possible to improve outcomes. Specifically, to improve how high‐dose methotrexate (HD‐MTX) can be given safely, what doses of MTX (methotrexate) are adequate to achieve therapeutic levels, and what is the appropriate folinic acid (FA) dose for effective rescue. This review is based on 50 years of personal experience with the use of HD‐MTX in published literature. Many pharmacologic studies were performed over 50 years ago, but are still relevant and stand up to scrutiny today. What should be considered HD‐MTX and how it can be given safely and effectively without late toxicity are presented. The variables responsible for effective folinic acid rescue, especially the doses of MTX and folinic acid and the time to start of rescue, are discussed. Understanding these highlighted aspects of therapy could help to prevent acute toxicity, improve treatment results, and prevent late effects.
Article
Full-text available
While still in its infancy, ChatGPT (Generative Pre-trained Transformer), introduced in November 2022, is bound to hugely impact many industries, including healthcare, medical education, biomedical research, and scientific writing. The implications of ChatGPT, the new chatbot introduced by OpenAI, for academic writing are largely unknown. In response to the Journal of Medical Science (Cureus) Turing Test call for case reports written with the assistance of ChatGPT, we present two cases: one of homocystinuria-associated osteoporosis, and the other of late-onset Pompe disease (LOPD), a rare metabolic disorder. We tested ChatGPT's ability to write about the pathogenesis of these conditions. We document the positive, negative, and rather troubling aspects of our newly introduced chatbot's performance.
Article
Full-text available
Citations are an important, but often overlooked, part of every scientific paper. They allow the reader to trace the flow of evidence, serving as a gateway to relevant literature. Most scientists are aware of citation errors, but few appreciate the prevalence of these problems. The purpose of this study was to examine how often frequently cited papers in the biomedical literature are cited inaccurately. The study included the active participation of the first authors of the included papers, to verify citation accuracy first-hand. Findings from the feasibility study, in which we reviewed 1,540 articles containing 2,526 citations of the 14 most cited articles whose authors were affiliated with the Faculty of Medicine University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. The verification set included 4,912 citations identified in 2,995 articles that cited the 13 most cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension. A citation was defined as accurate if the cited article supported or was in accordance with the statement made by the citing authors. At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and verification set, respectively, suggesting that inaccurate citations are common in the biomedical literature. The most common problem was the citation of nonexistent findings (38.4%), followed by incorrect interpretation of findings (15.4%). One fifth of inaccurate citations were due to chains of inaccurate citations. Based on these findings, several actions to reduce citation inaccuracies have been proposed.
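As a small complement to the proportions reported in this abstract, the sketch below shows how a confidence interval could be attached to the rate of articles containing at least one inaccurate citation. The numerators are only approximate back-calculations from the reported percentages and article counts, so the figures are illustrative rather than exact, and the use of statsmodels is an assumption.

```python
from statsmodels.stats.proportion import proportion_confint

# Approximate counts reconstructed from the abstract (illustrative only):
# articles with at least one inaccurate citation / all citing articles checked.
datasets = {
    "feasibility set": (170, 1540),   # ~11% of 1,540 articles
    "verification set": (450, 2995),  # ~15% of 2,995 articles
}

for name, (hits, total) in datasets.items():
    rate = hits / total
    low, high = proportion_confint(hits, total, alpha=0.05, method="wilson")
    print(f"{name}: {rate:.1%} of articles with >=1 inaccurate citation "
          f"(95% CI {low:.1%} to {high:.1%})")
```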
Article
Full-text available
Due to the incremental nature of scientific discovery, scientific writing requires extensive referencing to the writings of others. The accuracy of this referencing is vital, yet errors do occur. These errors are called ‘quotation errors’. This paper presents the first assessment of quotation errors in high-impact general science journals. A total of 250 random citations were examined. The propositions being cited were compared with the referenced materials to verify whether the propositions could be substantiated by those materials. The study found a total error rate of 25%. This result tracks well with error rates found in similar studies in other academic fields. Additionally, several suggestions are offered that may help to decrease these errors and make similar studies more feasible in the future.
Article
Full-text available
Background: Inaccurate citations are erroneous quotations or instances of paraphrasing of previously published material that mislead readers about the claims of the cited source. They often go unaddressed due to underreporting, the inability of peer reviewers and editors to detect them, and editors' reluctance to publish corrections about them. In this paper, we propose a new tool that could be used to tackle their circulation. Methods: We provide a review of available data about inaccurate citations and analytically explore current ways of reporting and dealing with these inaccuracies. Consequently, we make a distinction between publication (i.e., first occurrence) and circulation (i.e., reuse) of inaccurate citations. Sloppy reading of published items, ambiguity in the literature, and insufficient quality control in the editorial process are identified as factors that contribute to the publication of inaccurate citations. However, reiteration or copy-pasting without checking the validity of citations, paralleled with a lack of resources or motivation to report and correct inaccurate citations, contributes to their circulation. Results and discussion: We propose the development of an online annotation tool called "MyCites" as a means to mark and map inaccurate citations. This tool would allow ORCID users to annotate citations and alert authors (of the cited and citing articles) and also the editors of journals where inaccurate citations are published. Each marked citation would travel with the digital version of the document (persistent identifiers) and be visible on websites that host peer-reviewed articles (journals' websites, PubMed, etc.). Future development of MyCites would need to address challenges such as the criteria for judging citations correct or incorrect, the parties who should adjudicate such judgments, and how to deal with incorrect reports.
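To illustrate the kind of record an annotation tool of this sort might store, here is a minimal, hypothetical sketch of a data structure linking a flagged citation to persistent identifiers (DOIs and an ORCID iD). The field names and example values are invented for illustration and are not part of any published MyCites specification.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class CitationAnnotation:
    """One reported inaccurate citation, tied to persistent identifiers."""
    citing_doi: str        # DOI of the article containing the citation
    cited_doi: str         # DOI of the article being cited
    reporter_orcid: str    # ORCID iD of the person flagging the citation
    quoted_claim: str      # the assertion as written in the citing article
    problem_type: str      # e.g. "nonexistent finding", "misinterpretation"
    comment: str = ""
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical usage: serialize an annotation so it could travel with the
# digital version of the document and be displayed alongside it.
annotation = CitationAnnotation(
    citing_doi="10.1000/example.citing",
    cited_doi="10.1000/example.cited",
    reporter_orcid="0000-0000-0000-0000",
    quoted_claim="Drug X has been shown to cause adverse event Y.",
    problem_type="misinterpretation",
    comment="The cited study is a disproportionality analysis and does not establish causality.",
)
print(json.dumps(asdict(annotation), indent=2))
```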
Article
Full-text available
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") errors and source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree to which the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), that is, cited assertions that the referenced source fails to substantiate, is unrelated to, or contradicts. Minor errors, defined as oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
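The abstract's key methodological point, that the error rate depends on whether the denominator is the number of quotations examined or the number of references selected, can be made concrete with a small worked example. All counts below are made up.

```python
# Hypothetical counts for one survey of quotation accuracy.
references_selected = 100    # references sampled from the citing articles
quotations_examined = 160    # individual assertions checked against those references
quotation_errors = 20        # assertions not supported by the cited source

# Rate per quotation examined (the denominator used in the review above).
rate_per_quotation = quotation_errors / quotations_examined

# Rate per reference selected (the more common, higher, weighted metric).
rate_per_reference = quotation_errors / references_selected

print(f"errors per quotation examined: {rate_per_quotation:.1%}")   # 12.5%
print(f"errors per reference selected: {rate_per_reference:.1%}")   # 20.0%
# The same errors yield a lower estimate when every checked quotation,
# rather than every sampled reference, contributes to the denominator.
```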
Article
Full-text available
Objective: The aim of this study was to analyze conflict of interest (COI) and funding disclosure policies of 224 journals listed in Journal Citation Reports as focusing on environmental, occupational, or public health research. Methods: A survey of journal policies and content analysis. Results: About 96.0% of the policies required COI disclosure, 92.4% required funding disclosure, 75.9% defined COIs, 69.6% provided examples of COIs, 68.8% addressed nonfinancial COIs, 33.9% applied to editors and reviewers, 32.1% required discussion of the role of the funding source, and 1.8% included enforcement mechanisms. Policies were significantly associated with journal impact factor and publisher. Conclusion: Although a high percentage of journals in our sample have COI policies that provide substantial guidance to authors, there is room for improvement. Journals that have not done so should consider developing enforcement mechanisms and applying COI policies to editors and reviewers.
Article
Full-text available
Many of the messages presented in respectable scientific publications are, in fact, based on various forms of rumors. Some of these rumors appear so frequently, and in such complex, colorful, and entertaining ways that we can think of them as academic urban legends. The explanation for this phenomenon is usually that authors have lazily, sloppily, or fraudulently employed sources, and peer reviewers and editors have not discovered these weaknesses in the manuscripts during evaluation. To illustrate this phenomenon, I draw upon a remarkable case in which a decimal point error appears to have misled millions into believing that spinach is a good nutritional source of iron. Through this example, I demonstrate how an academic urban legend can be conceived and born, and can continue to grow and reproduce within academia and beyond.
Article
Full-text available
The number of citations that papers receive has become significant in measuring researchers' scientific productivity, and such measurements are important when one seeks career opportunities and research funding. Skewed citation practices can thus have profound effects on academic careers. We investigated (i) how frequently authors misinterpret original information and (ii) how frequently authors inappropriately cite reviews instead of the articles upon which the reviews are based. To this end, we carried out a survey of ecology journals indexed in the Web of Science and assessed the appropriateness of citations of review papers. Reviews were cited significantly more often than regular articles. In addition, 22% of citations were inaccurate, and another 15% unfairly gave credit to the review authors for other scientists' ideas. These practices should be stopped, mainly through more open discussion among mentors, researchers and students.
Article
Full-text available
Despite their shortcomings (1–4), impact factors continue to be a primary means by which academics “quantify the quality of science” (5). One side effect of impact factors is the incentive they create for editors to coerce authors to add citations to their journal. Coercive self-citation does not refer to the normal citation directions, given during a peer-review process, meant to improve a paper. Coercive self-citation refers to requests that (i) give no indication that the manuscript was lacking in attribution; (ii) make no suggestion as to specific articles, authors, or a body of work requiring review; and (iii) only guide authors to add citations from the editor's journal. This quote from an editor as a condition for publication highlights the problem: “you cite Leukemia [once in 42 references]. Consequently, we kindly ask you to add references of articles published in Leukemia to your present article” (6). Gentler language may be used, but the message is clear: Add citations or risk rejection.
Article
Full-text available
This paper studied the intellectual structure of urban studies through a co-citation analysis of its thirty-eight representative journals from 1992 to 2002. Relevant journal co-citation data were retrieved from Social SciSearch, and were subjected to cluster analysis, multidimensional scaling, and factor analysis. A cluster-enhanced two-dimensional map was created, showing a noticeable subject variation along the horizontal axis depicting four clusters of journals differentiated into mainstream urban studies, regional science and urban economics, transportation, and real estate finance. The cluster of the mainstream urban studies journals revealed a higher degree of interdisciplinarity than other clusters. The four-factor solution, though not a perfect match for the cluster solution, demonstrated the interrelationships among the overlapping journals loaded high on different factors. The results also showed a strong negative correlation between the coordinates of the horizontal axis and the mean journal correlation coefficients reflecting the subject variation, and a less revealing positive correlation between the coordinates of the vertical axis and the mean journal correlation coefficients.
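As a rough illustration of the analysis pipeline described above (co-citation counts subjected to clustering and multidimensional scaling), here is a minimal sketch on a tiny made-up co-citation matrix. The journal names and counts are invented, and the use of scikit-learn and scipy is an assumption, not the study's actual toolchain.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

journals = ["Urban Stud", "Reg Sci", "Transp Res", "Real Estate Econ", "J Urban Econ"]

# Hypothetical symmetric co-citation counts (diagonal ignored).
cocitation = np.array([
    [0, 30, 12, 5, 25],
    [30, 0, 8, 4, 28],
    [12, 8, 0, 3, 10],
    [5, 4, 3, 0, 9],
    [25, 28, 10, 9, 0],
], dtype=float)

# Convert similarity (co-citation counts) to a dissimilarity matrix.
dissimilarity = cocitation.max() - cocitation
np.fill_diagonal(dissimilarity, 0.0)

# Two-dimensional map, analogous to the cluster-enhanced MDS map in the study.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering on the same dissimilarities.
labels = fcluster(linkage(squareform(dissimilarity), method="average"),
                  t=2, criterion="maxclust")

for name, (x, y), cluster in zip(journals, coords, labels):
    print(f"{name:18s} cluster {cluster}  map position ({x:6.1f}, {y:6.1f})")
```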
Article
Full-text available
We sought to understand belief in a specific scientific claim by studying the pattern of citations among papers stating it. A complete citation network was constructed from all PubMed-indexed English-language papers addressing the belief that beta amyloid, a protein accumulated in the brain in Alzheimer's disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network. The main outcome measures were citation bias, amplification, and invention, and their effects on determining authority. The network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; by amplification, the marked expansion of the belief system by papers presenting no data addressing it; and by forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding. Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim-specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.
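To give a flavour of the claim-specific citation-network analysis described above, here is a minimal sketch on a tiny invented network. The paper identifiers are hypothetical, and the path count simply follows directed citation links back toward a supportive primary study; it is not the authors' actual method or data, and it assumes the networkx library is available.

```python
import networkx as nx

# Hypothetical claim-specific citation network.
# An edge A -> B means "paper A cites paper B in support of the claim".
G = nx.DiGraph()
G.add_edges_from([
    ("review1", "primary_supportive"),
    ("review2", "review1"),
    ("review2", "primary_supportive"),
    ("model_paper", "review2"),
    ("model_paper", "review1"),
    ("grant_text", "model_paper"),
])
primary_refuting = "primary_refuting"
G.add_node(primary_refuting)  # a refuting primary study that nobody cites

# Count citation paths that end at the supportive primary study:
# the 'amplification' effect of citing reviews rather than primary data.
supportive_paths = sum(
    1
    for source in G.nodes
    if source != "primary_supportive"
    for _ in nx.all_simple_paths(G, source, "primary_supportive")
)
in_degree_refuting = G.in_degree(primary_refuting)

print(f"citation paths supporting the claim: {supportive_paths}")
print(f"citations received by the refuting primary study: {in_degree_refuting}")
```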
Article
Full-text available
This study examines the extent to which scientific and biomedical journals have adopted conflict of interest (COI) policies for authors, and whether the adoption and content of such policies leads to the publishing of authors' financial interest disclosure statements by such journals. In particular, it reports the results of a survey of journal editors about their practices regarding COI disclosures. About 16 percent of 1396 highly ranked scientific and biomedical journals had COI policies in effect during 1997. Less than 1 percent of the articles published during that year in the journals with COI policies contained any disclosures of author personal financial interests while nearly 66 percent of the journals had zero disclosures of author personal financial interests. Nearly three fourths of journal editors surveyed usually publish author disclosure statements suggesting that low rates of personal financial disclosures are either a result of low rates of author financial interest in the subject matter of their publications or poor compliance by authors to the journals' COI policies.
Article
Full-text available
To assess the relationship between the approval of trials by a research ethics committee (REC) and the fact that informed consent from participants (ICP) was obtained, with the quality of study design and methods. Systematic review using a standardised checklist. Methodological and ethical issues of all trials published between 1993 and 1995 in the New England Journal of Medicine, the Lancet, the Journal of the American Medical Association and the British Medical Journal were studied. In addition, clinical trials conducted in Spain and published by at least one Spanish author during the same period in any other journal were also included. We studied the published articles of 767 trials and found the following indicators of lower methodological quality to be independent predictors for failure to disclose REC approval or ICP: absence of concealment of allocation, lack of justification for unblinded trials, not using a treatment for the patients in the control group, absent information on statistical methods, not including sample size estimation, not establishing the rules to stop the trial, and omitting the presentation of a baseline comparison of groups. Trials of higher methodological and scientific quality were more likely to provide information about their ethical aspects.
Article
Full-text available
We investigated whether funding of drug studies by the pharmaceutical industry is associated with outcomes that are favourable to the funder, and whether the methods of trials funded by pharmaceutical companies differ from the methods in trials with other sources of support. Medline (January 1966 to December 2002) and Embase (January 1980 to December 2002) searches were supplemented with material identified in the references and in the authors' personal files. Data were independently abstracted by three of the authors and disagreements were resolved by consensus. Thirty studies were included. Research funded by drug companies was less likely to be published than research funded by other sources. Studies sponsored by pharmaceutical companies were more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98 to 5.51; 18 comparisons). None of the 13 studies that analysed methods reported that studies funded by industry were of poorer quality. Systematic bias favours products made by the company funding the research. Explanations include the selection of an inappropriate comparator to the product being investigated and publication bias.
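As a minimal illustration of how a pooled odds ratio with its confidence interval (such as the 4.05, 95% CI 2.98 to 5.51 reported above) can be obtained, here is a sketch of a fixed-effect inverse-variance meta-analysis on made-up study results. It is not the authors' actual analysis, and the per-study figures are invented.

```python
import math

# Hypothetical per-study odds ratios with 95% confidence intervals.
studies = [
    (3.2, 1.8, 5.7),
    (4.8, 2.5, 9.2),
    (3.9, 2.1, 7.2),
]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1 / se**2                                    # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))

print(f"pooled OR = {pooled_or:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```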
Article
Full-text available
Routine assessment may improve ethical standards and overall quality of trials Our awareness of the requirements for ethical clinical research has increased over the past century. Research ethics committees were set up after the Declaration of Helsinki to review research proposals. Many journals now require a statement that ethical approval has been obtained before they consider a research report for publication. Nevertheless, many published studies do not come up to standard, or at least do not report that they do. For example, 30 out of 37 consecutive studies published in five general paediatric journals did not report whether informed consent was obtained. Twenty four of them did not report whether the committee on research ethics had approved the study.1 We propose that systematic reviews of experimental clinical research on humans should also include information on the ethical standards of the trials. The main reason for including ethics in the checklist of systematic reviews is to increase awareness in the scientific community about the need for high ethical standards in research on humans. The proposal would also encourage reviewers to identify those occasional studies that were so unethical that there may be doubts about the morality of using the results. Although such trials are rare, history has given us too many real examples to allow us to be complacent.2 3 Opinions differ on whether it is justified to disseminate the results of such studies.4 Either way, a conscious decision should be made and revealed to readers of the review. Issues around ethical quality overlap importantly with the central issues of the validity, reliability, and generalisability of research findings. They relate to some of the more subtle potential sources of bias in experimental clinical research. It is thus important to include ethical …
Article
Full-text available
Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have. When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new `disease', female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). `But,' the news editor wanted to know, `was this paper peer reviewed?'. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)
Article
Full-text available
To determine the effects of training on the quality of peer review. Single-blind randomised controlled trial with two intervention groups receiving different types of training plus a control group. Reviewers at a general medical journal. Interventions: attendance at a training workshop or receipt of a self-taught training package focusing on what editors want from reviewers and how to critically appraise randomised controlled trials. Main outcome measures: quality of reviews of three manuscripts sent to reviewers at four to six monthly intervals, evaluated using the validated review quality instrument; number of deliberate major errors identified; time taken to review the manuscripts; proportion recommending rejection of the manuscripts. Reviewers in the self-taught group scored higher in review quality after training than did the control group (score 2.85 v 2.56; difference 0.29, 95% confidence interval 0.14 to 0.44; P = 0.001), but the difference was not of editorial significance and was not maintained in the long term. Both intervention groups identified significantly more major errors after training than did the control group (3.14 and 2.96 v 2.13; P < 0.001), and this remained significant after the reviewers' performance at baseline assessment was taken into account. The evidence for benefit of training was no longer apparent on further testing six months after the interventions. Training had no impact on the time taken to review the papers but was associated with an increased likelihood of recommending rejection (92% and 84% v 76%; P = 0.002). Short training packages have only a slight impact on the quality of peer review. The value of longer interventions needs to be assessed.
Article
Full-text available
Scientific journals can promote ethical publication practices through policies on conflicts of interest. However, the prevalence of conflict of interest policies and the definition of conflict of interest appear to vary across scientific disciplines. This survey of high-impact, peer-reviewed journals in 12 different scientific disciplines was conducted to assess these variations. The survey identified published conflict of interest policies in 28 of 84 journals (33%). However, when representatives of 49 of the 84 journals (58%) completed a Web-based survey about journal conflict of interest policies, 39 (80%) reported having such a policy. Frequency of policies (including those not published) varied by discipline, from 100% among general medical journals to none among physics journals. Financial interests were most frequently addressed with relation to authors; policies for reviewers most often addressed non-financial conflicts. Twenty-two of the 39 journals with policies (56%) had policies about editors' conflicts. The highest impact journals in each category were most likely to have a published policy, and the frequency of policies fell linearly with rank; for example, policies were published by 58% of journals ranked 1 in their category, 42% of journals ranked third, and 8% of journals ranked seventh (test for trend, p = 0.003). Having a conflict of interest policy was also associated with a self-reported history of problems with conflict of interest. The prevalence of published conflict of interest policies was higher than that reported in a 1997 study, an increase that might be attributable to heightened awareness of conflict of interest issues. However, many of the journals with policies do not make them readily available and many of those policies that were available lacked clear definitions of conflict of interest or details about how disclosures would be managed during peer review and publication.
Article
Full-text available
We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on a stochastic model of the citation process that explains empirical studies of misprint distributions in citations (which we show follow a Zipf law). Our estimate is that only about 20% of citers read the original.
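The idea behind this estimate is that a citer either reads the original and types the reference afresh, or copies an existing citation, misprint included, so repeated identical misprints reveal copying. The sketch below is a simplified toy simulation of that copying process with invented parameters; it is not the authors' exact model or estimator.

```python
import random

random.seed(1)

READ_PROB = 0.2       # probability a citer reads the original and types the reference afresh
MISPRINT_PROB = 0.05  # probability a freshly typed reference contains a new misprint
N_CITATIONS = 5000

citations = []  # each entry is the misprint id carried by that citation (None = correct)
next_misprint_id = 0

for _ in range(N_CITATIONS):
    if random.random() < READ_PROB or not citations:
        # Reader: types the reference from the original, occasionally adding a new misprint.
        if random.random() < MISPRINT_PROB:
            citations.append(next_misprint_id)
            next_misprint_id += 1
        else:
            citations.append(None)
    else:
        # Copier: reproduces a randomly chosen existing citation, propagating its misprint.
        citations.append(random.choice(citations))

misprinted = [c for c in citations if c is not None]
distinct = len(set(misprinted))
print(f"misprinted citations: {len(misprinted)} ({len(misprinted) / N_CITATIONS:.1%})")
print(f"distinct misprints:   {distinct}")
# Repeated identical misprints arise only through copying, which is why the
# distribution of misprint repetitions can be used to infer how few citers
# actually consulted the original.
```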
Article
Intra-articular fractures of the distal part of the radius in young adults comprise a distinct subgroup of fractures that are difficult to manage and are associated with a high frequency of post-traumatic arthritis. The effect of residual radiocarpal incongruity after this fracture has not been investigated previously. A retrospective study of forty-three fractures in forty young adults (mean age, 27.6 years) was done to determine the components that are critical to the outcome. Treatment included application of a cast alone in twenty-one fractures, insertion of pins and application of a plaster cast in seventeen, external fixation in two fractures, and open reduction and internal fixation in three fractures. At a mean follow-up of 6.7 years, 26 per cent were rated as excellent; 35 per cent, as good; 33 per cent, as fair; and 6 per cent, as poor. There was radiographic evidence of post-traumatic arthritis in twenty-eight (65 per cent) of the fractures. Accurate articular restoration was the most critical factor in achieving a successful result. Of the twenty-four fractures that healed with residual incongruity of the radiocarpal joint, arthritis was noted in 91 per cent, whereas of the nineteen fractures that healed with a congruous joint, arthritis developed in only 11 per cent. A depressed articular surface (a so-called die-punch fragment) was reduced anatomically by closed means in only 49 per cent and was responsible for residual incongruity in 75 per cent of the incongruous joints at late follow-up. Non-union of the ulnar styloid process adversely affected the results. Restoration and maintenance (extra-articular reduction) of the dorsal tilt and radial length did not prove critical except when severe radial shortening occurred.
Article
Mistakes in peer-reviewed papers are easy to find but hard to fix, report David B. Allison and colleagues.
Article
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Some suggestions for reducing these high levels of inaccuracy are that papers scheduled for publication with errors of citation should be returned to the author and checked completely and a permanent column specifically for misquotations could be inserted into the journal.
Article
Purpose: To assess financial, nonfinancial and editors' conflicts of interest (COI) disclosure policies among the most influential biomedical journals publishing original research. Materials and methods: We conducted a cross-sectional study of 399 high-impact biomedical journals in 27 biomedical categories of the Journal Citation Reports (JCR) in December 2011. Information relevant to COI and requirements for disclosures that was publicly available on journal websites was collected. Results: While financial COI disclosures were required by 358 (89.7%) and nonfinancial by 280 (70.2%) journals, 155 (38.8%) required editors' disclosures. Journals in the first decile of the JCR classification scored significantly higher than those in the second decile for all disclosure policies. Ninety (22.6%) journals were published by Elsevier and 59 (14.8%) by Wiley-Blackwell, with Elsevier scoring significantly better in financial disclosure policies (P = 0.022). Clinical journals scored significantly higher than basic journals for all disclosure policies. No differences were observed between open-access (n = 25) and nonopen-access (n = 374) journals for any type of disclosure. Somewhat incoherently, authors' disclosure statements were included in some published manuscripts in 57.1% of journals without any COI disclosure policies. Conclusions: Authors' financial COI disclosures were required by about 90% of high-impact clinical and basic journals publishing original research. Unlike recent studies showing a significantly lower prevalence of nonfinancial compared with financial disclosures, the former were required by about 70% of journals, suggesting that editors are increasingly concerned about nonfinancial competing interests. Only 40% of journals required disclosure of editors' COI, in conflict with the recommendations of the most influential editors' associations.
Article
This article has no abstract; the first 100 words appear below. The pages of any book, tract or article dealing with medicine are apt to be profusely sprinkled with numerical superscripts (or their equivalents) guiding the reader to a reference list. Not only does the liberal presence of such reference numbers impart an aura of scholarship, but their judicious placement after this or that assertion subtly suggests documented validity. But watch out — those little numbers may be no more than the trappings of credibility. The primary sources cited may be misquoted, inapplicable, unreliable and occasionally even imaginary. The havoc raised by misquotation is self-evident, but even a literally accurate attribution . . . F.J. Ingelfinger, M.D.
Article
Recently, we examined our current files to determine the incidence of narcotic addiction in 39,946 hospitalized medical patients who were monitored consecutively. Although there were 11,882 patients who received at least one narcotic preparation, there were only four cases of reasonably well documented addiction in patients who had no history of addiction. The addiction was considered major in only one instance. The drugs implicated were meperidine in two patients, Percodan in one, and hydromorphone in one. We conclude that despite widespread use of narcotic drugs in hospitals, the development of addiction is rare in medical patients with no history of addiction.
Article
That occupational exposure to mercury causes reproductive failure in dental personnel has been propagated by repeated reference to a single epidemiological study in Poland, published in 1987. The present paper scrutinizes the results of this study, and monitors its subsequent citation in the literature. Articles referring to the study were located in the Science Citation Index. From references in these papers and through other references, further articles were found. These papers were reviewed, and their content organized in relation to the way in which the Polish study was cited. Most authors referred to the findings in the Polish study without critical evaluation of the data presented. Citation of irrelevant or misleading scientific data in the literature raises unfounded concerns in nonscientific circles and may lead to unwarranted regulations. It is therefore essential that editors and reviewers of scientific articles also scrutinize the literature quoted.
Article
Anxiety about bias, lack of accountability, and poor quality of peer review has led to questions about the imbalance in anonymity between reviewers and authors. To evaluate the effect on the quality of peer review of blinding reviewers to the authors' identities and requiring reviewers to sign their reports. Randomized controlled trial. A general medical journal. A total of 420 reviewers from the journal's database. We modified a paper accepted for publication introducing 8 areas of weakness. Reviewers were randomly allocated to 5 groups. Groups 1 and 2 received manuscripts from which the authors' names and affiliations had been removed, while groups 3 and 4 were aware of the authors' identities. Groups 1 and 3 were asked to sign their reports, while groups 2 and 4 were asked to return their reports unsigned. The fifth group was sent the paper in the usual manner of the journal, with authors' identities revealed and a request to comment anonymously. Group 5 differed from group 4 only in that its members were unaware that they were taking part in a study. The main outcome measure was the number of weaknesses in the paper that were commented on by the reviewers. Reports were received from 221 reviewers (53%). The mean number of weaknesses commented on was 2 (1.7, 2.1, 1.8, and 1.9 for groups 1, 2, 3, and 4 and 5 combined, respectively). There were no statistically significant differences between groups in their performance. Reviewers who were blinded to authors' identities were less likely to recommend rejection than those who were aware of the authors' identities (odds ratio, 0.5; 95% confidence interval, 0.3-1.0). Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on the rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports.
Article
To date, research regarding the influence of conflicts of interest on the presentation of findings by researchers has been limited. To evaluate the sources of funding for published manuscripts, and the association between reported findings and conflicts of interest. Data from both print and electronic issues of The New England Journal of Medicine (NEJM) and The Journal of the American Medical Association (JAMA) were analyzed for sources of funding, areas of investigation, conflict of interest (COI), and presentation of results. We reviewed all original manuscripts published during the year 2001 in NEJM (N = 193) and JAMA (N = 205). We use 3 definitions of COI in this paper: a broadly defined criterion, the criterion used by the International Committee of Medical Journal Editors (ICMJE), and a criterion defined by the authors. Depending on the COI criteria used, 16.6% to 32.6% of manuscripts had 1 or more authors with COI. Based on the ICMJE criterion, 38.7% of studies investigating drug treatments had authors with COI. We observed a strong association between studies whose authors had COI and reported positive findings (P <.001). When controlling for sample size, study design, and country of primary authors, we observed a strong association between positive results and COI (ICMJE definition) among all treatment studies (adjusted odds ratio [OR], 2.35; 95% confidence interval [CI], 1.08 to 5.09) and drug studies alone (OR, 2.64; 95% CI, 1.09 to 6.39). COI is widespread among the authors of published manuscripts and these authors are more likely to present positive findings.
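To show how an adjusted odds ratio like the one reported above (positive findings versus author COI, controlling for sample size, study design, and country) can be obtained, here is a minimal sketch fitting a logistic regression on simulated data. All variables, counts, and effect sizes are invented, and the use of statsmodels is an assumption.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400

# Simulated study-level covariates.
coi = rng.integers(0, 2, n)                  # 1 = at least one author has a COI
log_sample_size = rng.normal(5.0, 1.0, n)    # log of trial sample size
randomized = rng.integers(0, 2, n)           # 1 = randomized design
us_based = rng.integers(0, 2, n)             # 1 = primary author based in the US

# Simulate positive findings with a built-in COI effect (log-odds +0.9).
linpred = -0.5 + 0.9 * coi + 0.1 * randomized
positive = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([coi, log_sample_size, randomized, us_based]))
fit = sm.Logit(positive, X).fit(disp=False)

adjusted_or = np.exp(fit.params[1])          # coefficient for COI
ci_low, ci_high = np.exp(fit.conf_int()[1])  # its 95% confidence interval
print(f"adjusted OR for COI = {adjusted_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```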
How accurate are citations of frequently cited papers in biomedical literature?
  • V Pavlovic
  • T Weissgerber
  • D Stanisavljevic
Pavlovic V, Weissgerber T, Stanisavljevic D, et al. How accurate are citations of frequently cited papers in biomedical literature? Clin Sci (Lond) 2021;135:-81. doi: 10.1042/CS20201573 pmid: 33599711
Misquotation of a commonly referenced hand surgery study
  • J A Porrino Jr
  • V Tan
  • A Daluiski
Porrino JA Jr, Tan V, Daluiski A. Misquotation of a commonly referenced hand surgery study. J Hand Surg Am 2008;33:-7. doi: 10.1016/j.jhsa.2007.10.007 pmid: 18261657
Follies and Fallacies in Medicine
  • P Skrabanek
  • J McCormick
Skrabanek P, McCormick J. Follies and Fallacies in Medicine, 3rd edn. Tarragon Press, 1998.
How the spinach, Popeye and iron decimal point error myth was finally busted
  • M Sutton
Sutton M. How the spinach, Popeye and iron decimal point error myth was finally busted. https://archive.ph/ASqr
Spinach-I was right for the wrong reason. Mutations of Mortality
  • T J Hamblin
Hamblin TJ. Spinach-I was right for the wrong reason. Mutations of Mortality. 2010. http://mutated-unmuated.blogspot.com/2010/12/spinach-i-was-right-for-wrong-reason.html.
Lawyer used ChatGPT in court-and cited fake cases. A judge is considering sanctions. Forbes Magazine
  • M Bohannon
Bohannon M. Lawyer used ChatGPT in court-and cited fake cases. A judge is considering sanctions. Forbes Magazine. 2023. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyerused-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=1f047f3f7c7f
Citation errors in scientific research and publications: causes, consequences, and remedies
  • A Agarwal
  • M Arafa
  • T Avidor-Reiss
  • T A A Hamoda
  • R Shah
Agarwal A, Arafa M, Avidor-Reiss T, Hamoda TAA, Shah R. Citation errors in scientific research and publications: causes, consequences, and remedies. World J Mens Health 2023;41:-5. doi: 10.5534/wjmh.230001 pmid: 37118953
Read before you cite!
  • M V Simkin
  • V P Roychowdhury
Simkin MV, Roychowdhury VP. Read before you cite! Complex Syst 2003;14:-74.
Peer review: a flawed process at the heart of science and journals
  • R Smith
Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med 2006;99:-82. doi: 10.1177/014107680609900414 pmid: 16574968
Matched-Pairs, Active-Controlled Clinical Trial and Preclinical Animal Study to Compare the Durability, Efficacy and Safety between Polynucleotide Filler and Hyaluronic Acid Filler in the Correction of Crow's Feet: A New Concept of Regenerative Filler
  • C S Pak
Notice of Retraction: Pak CS, et al. A Phase III, Randomized, Double-Blind, Matched-Pairs, Active-Controlled Clinical Trial and Preclinical Animal Study to Compare the Durability, Efficacy and Safety between Polynucleotide Filler and Hyaluronic Acid Filler in the Correction of Crow's Feet: A New Concept of Regenerative Filler. J Korean Med Sci 2014; 29(Suppl 3): S201-S209. J Korean Med Sci 2016;31:-330. doi: 10.3346/jkms.2016.31.2.330. pmid: 26839493
Attestation by governing bodies: literature review. Australian Commission on Safety and Quality in Health Care
  • J Travaglia
  • R Hinchcliff
  • D Carter
  • L Billington
  • M Glennie
  • D Debono
Travaglia J, Hinchcliff R, Carter D, Billington L, Glennie M, Debono D. Attestation by governing bodies: literature review. Australian Commission on Safety and Quality in Health Care. https://www.safetyandquality.gov.au/publications-and-resources/resource-library/attestationgoverning-bodies-literature-review