David Moher’s research while affiliated with Ottawa Hospital Research Institute and other places


Publications (822)


Figure 1 Confusion matrix from the analysis of the TEST dataset. Actual (y-axis) represents the answers given by the GPT-4 Turbo model, and predicted (x-axis) represents the labels for each question extracted from Schulz et al.
GPT for RCTs? Using AI to determine adherence to clinical trial reporting guidelines
  • Article
  • Full-text available

March 2025 · 15 Reads · BMJ Open

Paul Blazey · David Moher · [...]

Objectives Adherence to established reporting guidelines can improve clinical trial reporting standards, but attempts to improve adherence have produced mixed results. This exploratory study aimed to determine how accurately a large language model generative artificial intelligence system (AI-LLM) could determine reporting guideline compliance in a sample of sports medicine clinical trial reports.

Design This study was an exploratory retrospective data analysis. The OpenAI GPT-4 and Meta Llama 2 AI-LLMs were evaluated for their ability to determine reporting guideline adherence in a sample of sports medicine and exercise science clinical trial reports.

Setting Academic research institution.

Participants The study sample included 113 published sports medicine and exercise science clinical trial papers. For each paper, the GPT-4 Turbo and Llama 2 70B models were prompted to answer a series of nine reporting guideline questions about the text of the article. The GPT-4 Vision model was prompted to answer two additional reporting guideline questions about the participant flow diagram in a subset of articles. The dataset was randomly split (80/20) into a TRAIN and TEST dataset. Hyperparameter tuning and fine-tuning were performed using the TRAIN dataset. The Llama 2 model was fine-tuned using the data from the GPT-4 Turbo analysis of the TRAIN dataset.

Primary and secondary outcome measures The primary outcome was the F1-score, a measure of model performance on the TEST dataset. The secondary outcome was the model's classification accuracy (%).

Results Across all questions about the article text, the GPT-4 Turbo AI-LLM demonstrated acceptable performance (F1-score=0.89, accuracy (95% CI) = 90% (85% to 94%)). Accuracy for all reporting guidelines was >80%. The Llama 2 model accuracy was initially poor (F1-score=0.63, accuracy (95% CI) = 64% (57% to 71%)) and improved with fine-tuning (F1-score=0.84, accuracy (95% CI) = 83% (77% to 88%)). The GPT-4 Vision model accurately identified all participant flow diagrams (accuracy (95% CI) = 100% (89% to 100%)) but was less accurate at identifying when details were missing from the flow diagram (accuracy (95% CI) = 57% (39% to 73%)).

Conclusions Both the GPT-4 and fine-tuned Llama 2 AI-LLMs showed promise as tools for assessing reporting guideline compliance. Next steps should include developing an efficient, open-source AI-LLM and exploring methods to improve model accuracy.
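The headline metrics above (an F1-score plus an accuracy with a 95% CI) can be reproduced from a confusion matrix. The sketch below uses invented counts, not the paper's data, and a Wilson score interval, one common construction for a proportion CI; the paper does not state which interval method it used.

```python
import math

def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts for illustration only (not taken from the paper)
tp, fp, fn, tn = 160, 12, 8, 20
n_total = tp + fp + fn + tn
accuracy = (tp + tn) / n_total
lo, hi = wilson_ci(tp + tn, n_total)
print(f"F1 = {f1_score(tp, fp, fn):.2f}, accuracy = {accuracy:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

Note that F1 collapses to 2·TP / (2·TP + FP + FN), which is why it can disagree with plain accuracy when classes are imbalanced.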


Characteristics of manuscripts included in the PRISMA-S and ROBIS assessment (n=166)
Reporting quality and risk of bias in first revision manuscript searches by individual item
Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial

March 2025 · 67 Reads · BMJ Evidence-Based Medicine

Objective To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.

Design Pragmatic two-group parallel randomised controlled trial.

Setting Three biomedical journals.

Participants Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio 1:1) was stratified by journal and used permuted blocks (block size=4). Of 2670 manuscripts sent to peer review during study enrolment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). 76 manuscripts were revised and resubmitted in the intervention group and 90 in the control group by 2 January 2024.

Interventions All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional LIS peer reviewer invited.

Main outcome measures The primary outcomes were the differences between the intervention and control groups in the quality of reporting and risk of bias in first revision manuscripts. Quality of reporting was measured using four prespecified PRISMA-S items. Risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate and assessors were blinded to group allocation. Secondary outcomes included differences between groups for each individual PRISMA-S and ROBIS Domain 2 item. The difference in the proportion of manuscripts rejected as the first decision post-peer review between the intervention and control groups was an additional outcome.

Results Differences between groups in the proportion of adequately reported searches (4.4% difference, 95% CI: −2.0% to 10.7%) and in risk of bias in searches (0.5% difference, 95% CI: −13.7% to 14.6%) were not statistically significant. By 4 months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (13.8% difference, 95% CI: 3.9% to 23.8%).

Conclusions Inviting LIS peer reviewers did not impact adequate reporting or risk of bias of searches in first revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.

Trial registration number Open Science Framework: https://doi.org/10.17605/OSF.IO/W4CK2
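The allocation scheme described above (1:1 ratio, stratified by journal, permuted blocks of size 4) can be sketched in a few lines. This is a generic illustration of the technique, not the trial's actual randomisation code; the seeds and sequence length are arbitrary.

```python
import random

def permuted_block_sequence(n, block_size=4, arms=("intervention", "control"), seed=None):
    """Generate a 1:1 allocation sequence of length n using permuted blocks."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        block = list(arms) * (block_size // len(arms))  # equal arms within each block
        rng.shuffle(block)                              # random order within the block
        seq.extend(block)
    return seq[:n]

# One independent sequence per stratum (journal), as in stratified randomisation
journals = ["The BMJ", "BMJ Open", "BMJ Medicine"]
strata = {journal: permuted_block_sequence(8, seed=i) for i, journal in enumerate(journals)}
print(strata["BMJ Open"])
```

Permuted blocks guarantee the groups stay balanced (2:2) after every fourth manuscript within each journal, which a simple coin flip would not.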



What is expected of people who lead meetings where the goal is to reach consensus? A scoping review with implications for improving the quality of health research grant peer review and clinical guideline development

February 2025 · 35 Reads

Background. The specific roles and responsibilities expected of leaders of consensus-based decision committees, such as grant peer review panels and guideline development panels, are not well defined, which makes it difficult to train people to lead well. We aimed to explore, describe and define the roles, responsibilities and leadership characteristics of leaders of meetings where the goal was to reach a consensus decision.

Methods. We conducted a scoping review with thematic synthesis, guided by the Joanna Briggs Institute Scoping Review Methodology and Arksey & O'Malley's framework for scoping reviews as refined by Levac et al. We searched five bibliographic databases for English-language records from January 2002 to 2023: Medline (Ovid), Embase (Ovid), CINAHL (EBSCO), PsycInfo (EBSCO), and ProQuest Digital Dissertations and ABI-Inform. We searched grey literature in the fields of health science, biomedicine, education, psychology, management, law, ethics and policy. Abstracts and full-text articles were screened in duplicate to identify eligible studies; data were extracted regarding the roles, responsibilities and characteristics of consensus decision committee leaders. Themes were constructed using reflexive thematic analysis.

Results. From 6732 electronic database records and 126 grey literature records, we included 24 articles and 16 websites. There were 166 unique statements extracted related to roles and responsibilities. We constructed 4 themes to describe the roles for leaders of consensus-based decision meetings: (1) organiser and/or resource manager, (2) facilitator, (3) adjudicator and (4) administrator.

Conclusion. Leaders of consensus committees assumed the roles of organiser and/or resource manager, facilitator, adjudicator and administrator. Better clarification of, and training for, the expected roles and responsibilities of leading consensus decisions are needed. Establishing the roles and responsibilities can inform a systematic process for evaluating the performance of leaders of consensus decision committees.


Publisher preferences for a journal transparency tool: A modified three-round Delphi study

January 2025 · 24 Reads

Background We propose the creation of a journal transparency tool (JTT), which will allow users to obtain information about a given scholarly journal’s operations and policies. We are obtaining preferences from different stakeholders to inform the development of this tool. This study aimed to identify the publishing community’s preferences for the JTT.

Methods We conducted a modified three-round Delphi survey. Representatives from publishing houses and journal publishers were recruited through purposeful and snowball sampling. The first two Delphi rounds involved an online survey with items about JTT metrics and user features. During the third round, participants discussed and voted on JTT metric items that did not reach consensus after round 2 within a virtual consensus meeting. We defined consensus as 80% agreement to include or exclude an item in the JTT.

Results Eighty-six participants completed the round 1 survey, and 43 participants (50% of round 1) completed the round 2 survey. In both rounds, respondents voted on JTT user feature and JTT metric item preferences and answered open-ended survey questions regarding the JTT. In round 3, a total of 21 participants discussed and voted on JTT metric items that did not reach consensus after round 2 during an online consensus group meeting. Fifteen out of 30 JTT metric items and none of the four JTT user feature items reached the 80% consensus threshold after all rounds of voting. Analysis of the round 3 online consensus group transcript resulted in two themes: ‘factors impacting support for JTT metrics’ and ‘suggestions for user clarity.’

Conclusions Participants suggested that the publishing community’s primary concerns for a JTT are to ensure that the tool is relevant, user-friendly, accessible, and equitable. The outcomes of this research will contribute to developing and refining the tool in accordance with publishing preferences.
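The 80% consensus rule used in this Delphi can be expressed as a tiny classifier. A minimal sketch, with invented vote counts (the study reports 21 round-3 participants, but the per-item tallies here are hypothetical):

```python
def delphi_consensus(votes, threshold=0.80):
    """Classify a Delphi item from its votes (1 = include, 0 = exclude):
    'include' or 'exclude' at >= 80% agreement, otherwise 'no consensus'."""
    share = sum(votes) / len(votes)
    if share >= threshold:
        return "include"
    if share <= 1 - threshold:
        return "exclude"
    return "no consensus"

print(delphi_consensus([1] * 18 + [0] * 3))  # 18/21 ≈ 86% agreement -> 'include'
```

The symmetric rule means an item can also reach consensus to *exclude*; only the middle band (between 20% and 80% agreement) carries over to the next round.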


Educational intervention For Open Science (Educ4OS) practices promotion: a multicenter cluster randomized controlled trial in early career researchers – Study Protocol

January 2025 · 6 Reads

Aim: Open science (OS) practices aim to make scientific research more accessible, collaborative, sharable, inclusive, transparent, and reproducible. Educational interventions could be valuable in promoting these practices among researchers. As such, our study aims to assess the efficacy of an educational intervention in increasing early career researchers' adherence to OS practices (study registration, open methods, data sharing, open results) in dentistry.

Methods: A cluster-randomized controlled trial will be carried out. All PhD students from a convenience sample of 20 Brazilian dental graduate programs will be invited to participate after signing the informed consent. The clusters will be randomized into intervention (n=10) and control (n=10) groups. The participants allocated to the intervention will receive an intensive course addressing OS and Responsible Research Practices (RRPs), while the control group will participate in another course addressing Artificial Intelligence in Dentistry. The intervention will be an online interactive course, made available in video format with the participation of professors and researchers, including 75 hours of lectures, workshops, and supporting materials focusing on how to bring more integrity, rigor, and transparency to scientific work using OS principles. The number of OS practices (published as open access or preprint, protocol registration, data sharing, code sharing, disclosure of conflicts of interest, and disclosure of funding sources) adopted by the participants will be collected two and five years after the end of the intervention. An online self-administered questionnaire will collect their perceptions about the OS practices.

Outcomes: The primary outcome will be the Open Science Score calculated for each paper published by the participants 2 and 5 years after the end of the intervention. OS practices adopted in the participants’ theses will be collected and analyzed as a secondary outcome 5 years after the end of the intervention. Other secondary outcomes include the differences in scores on a survey measuring perceptions about OS practices between the intervention and control groups, measured after the course ended.
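An "Open Science Score" over the six practices listed above could be as simple as a per-paper tally. The scoring rule below is a hypothetical illustration, not the authors' published instrument, and the practice keys are names invented here:

```python
# Hypothetical scoring scheme: one point per open science practice detected in a paper.
# The six practices mirror those listed in the protocol; the rule itself is illustrative.
OS_PRACTICES = [
    "open_access_or_preprint",
    "protocol_registration",
    "data_sharing",
    "code_sharing",
    "coi_disclosure",
    "funding_disclosure",
]

def open_science_score(paper):
    """Count how many of the six OS practices a paper adopts (0 to 6)."""
    return sum(1 for practice in OS_PRACTICES if paper.get(practice, False))

paper = {"open_access_or_preprint": True, "protocol_registration": True, "data_sharing": False}
print(open_science_score(paper))  # -> 2
```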




Conducting Pairwise and Network Meta-analyses in Updated and Living Systematic Reviews: a Scoping Review Protocol

January 2025 · 22 Reads · JBI Evidence Synthesis

Objective The objective of this scoping review will be to describe existing guidance documents or studies reporting on the conduct of meta-analyses in updated systematic reviews (USRs) or living systematic reviews (LSRs).

Introduction The rapid increase in the medical literature poses a substantial challenge in keeping systematic reviews up to date. In LSRs, a review is updated with a pre-specified frequency or when some other signalling criterion is triggered. While the LSR framework is well established, there is uncertainty regarding the most appropriate methods for conducting repeated meta-analyses over time, which may result in sub-optimal decision-making.

Inclusion criteria Studies of any design (including commentaries, books, manuals) providing guidance on conducting meta-analysis in USRs or LSRs.

Methods We will use the JBI methodology for scoping reviews. We will search multiple medical bibliographic databases (Cochrane Library, Embase, ERIC, MEDLINE, JBI Evidence Synthesis, and PsycINFO), statistical and mathematics databases (COBRA, Current Index to Statistics, MathSciNet, Project Euclid Complete, and zbMATH), pre-print archives (arXiv, bioRxiv, and medRxiv), as well as difficult-to-locate or unpublished (gray) literature. Two reviewers will independently screen titles, abstracts, and full-text documents, and extract data. Characteristics of recommendations for meta-analysis in USRs and LSRs will be presented using descriptive statistics and categorized concepts. Details of this review project can be found on the Open Science Framework: https://osf.io/9c27g


Update to the PRISMA guidelines for network meta-analyses and scoping reviews and development of guidelines for rapid reviews: a scoping review protocol

January 2025

·

289 Reads

JBI Evidence Synthesis

Objective The objective of this scoping review is to develop a list of items for potential inclusion in the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) reporting guidelines for network meta-analysis (NMA), scoping reviews (ScRs), and rapid reviews (RRs).

Introduction The PRISMA extensions for NMA and ScRs were published in 2015 and 2018. Since then, however, the methodologies for these review types have evolved, including innovations such as automation. There is no reporting guideline for RRs. In 2020, an updated PRISMA statement was published, reflecting advances in the conduct and reporting of systematic reviews. These advances are not yet incorporated into these PRISMA extensions. We will update our previous methods scoping reviews to inform the update of PRISMA-NMA and PRISMA-ScR as well as the development of the PRISMA-RR reporting guidelines.

Inclusion criteria This review will include any study design evaluating the completeness of reporting, offering reporting guidance, or assessing methods relevant to NMA, ScRs, or RRs. Editorial guidelines and tutorials that describe items related to reporting completeness will also be eligible.

Methods We will follow the JBI guidance for scoping reviews. For each PRISMA extension, we will (1) search multiple electronic databases from inception, (2) search for unpublished studies, and (3) scan the reference lists of included studies. There will be no language limitations. Screening and data extraction will be conducted by two researchers independently. A third researcher will resolve discrepancies. We will conduct frequency analyses of the identified items. The final list of items will be considered for potential inclusion in the relevant PRISMA reporting guidelines.

Review registration NMA protocol (OSF: https://doi.org/10.17605/OSF.IO/7BKWY); ScR protocol (OSF: https://doi.org/10.17605/OSF.IO/MTA4P); RR protocol (OSF: https://doi.org/10.17605/OSF.IO/3JCPE); EQUATOR registration: https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-systematic-reviews/


Publisher and journal reciprocity for peer review: Not so much

January 2025 · 17 Reads · 1 Citation

Peer reviewers play a critical role in helping journals keep publishing. To understand the rewards and incentives offered to peer reviewers, we assessed what journals/publishers offered to one peer reviewer in biomedicine over a 1-month period (June 2023). After receiving 88 peer reviewer invitations, we noted that incentives were minimal. They included access to journal/publisher peer review training materials, reduced article processing charges for future submissions, and free access to the journal/publisher website. Depending on the acceptance rate (30% or 50%) of recommendations to publish the article, peer review from this sample could generate anywhere from USD 897,000 to USD 1.45 million when annualized. However, little, if any, of this revenue is shared directly or indirectly with peer reviewers. With almost no reciprocity in the peer review process, journals and their publishers need to promote and establish more reciprocity in a system that currently favors them disproportionately. This study is an anecdotal perspective of one peer reviewer's experience over a single month. While anecdotal, these findings highlight issues about the fairness and sustainability of the peer review system. We encourage others to expand on what we have done and include more empirical investigations.
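The annualized revenue estimate is back-of-envelope arithmetic: monthly invitations scaled to a year, multiplied by the acceptance rate and an article processing charge (APC). The sketch below takes the 88 invitations/month from the study but uses a hypothetical APC of USD 2,800 (the abstract does not state the APC assumed), so it brackets rather than reproduces the published figures.

```python
def annualized_review_revenue(invitations_per_month, acceptance_rate, apc_usd):
    """Back-of-envelope annualised publisher revenue linked to one reviewer's
    workload: articles recommended for publication times the APC."""
    articles_per_year = invitations_per_month * 12 * acceptance_rate
    return articles_per_year * apc_usd

# 88 invitations/month comes from the study; the USD 2,800 APC is a hypothetical figure
low = annualized_review_revenue(88, 0.30, 2800)
high = annualized_review_revenue(88, 0.50, 2800)
print(f"USD {low:,.0f} to USD {high:,.0f}")
```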


Citations (53)


... Establishing evidence-based criteria towards journal creation not only allows new publishers to avoid engaging in dubious scholarly publishing practices but may also help reduce authors' submissions to predatory journals by providing them with a basis to differentiate legitimate journals from 'predatory' ones. One such tool currently in development is the Journal Transparency tool, which seeks to provide users with information about a journal's operations and transparency practices [42,43]. Furthermore, a related future research direction worth exploring may be to study the operational modalities, tools, and methodologies these predatory journals employ to recruit new, inexperienced researchers, and in the publicization of their journal. ...

Reference:

Recommendations and guidelines for creating scholarly biomedical journals: A scoping review
Publisher preferences for a journal transparency tool: A modified three-round Delphi study

... They also have unique reporting requirements: the CONSORT guidelines for reporting parallel arm randomized trials were extended to cluster randomized designs in 2004 and updated in 2012 [10]. Additional extensions were later developed to accommodate novel cluster-randomized designs: specifically, the stepped wedge cluster randomized trial (SW-CRT) in 2018 [11,12] and the cluster randomized cross-over design in 2024 [13]. ...

Reporting of cluster randomised crossover trials: extension of the CONSORT 2010 statement with explanation and elaboration

The BMJ

... This will allow us to understand better how RIF truly impacts factors related to heart health, metabolism, and obesity. While an umbrella review serves to acknowledge the inherent uncertainties within the available information, it offers a comprehensive perspective on the range of summary impacts and the overall quality of the evidence [41,42]. The utility of an umbrella review lies in its ability to provide a comprehensive synthesis of the data by incorporating many systematic reviews and meta-analyses on a particular research question [43–45]. ...

Umbrella-Review, Evaluation, Analysis and Communication Hub (U-REACH): a novel living umbrella review knowledge translation approach

BMJ Mental Health

... The regulations imposed by these policies may reflect the academic publishers' consideration of the benefits and challenges of authors using AI chatbots. A cross-sectional survey by Ng et al. conducted in 2023 revealed that researchers, although having expressed interest in the applications of AI chatbots in scientific research, received inadequate training in AI tool usage by their academic institutions [28]. In addition to the current limitations of AI chatbots (e.g., unverified content generation), researchers using AI tools without formal training may result in consequences of poor content quality and/or misinformation [28]. ...

Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey
  • Citing Article
  • November 2024

The Lancet Digital Health

... These findings underscore the therapeutic potential of MSCs in restoring vision. Adherence to clinical reporting guidelines and international consensus, particularly with regard to route of administration, dosage and source, will further promote MSC as an effective treatment for retinal degenerative diseases (Renesme et al. 2025). ...

Delphi-driven consensus definition for MSCs and clinical reporting guidelines for MSC-based therapeutics
  • Citing Article
  • October 2024

Cytotherapy

... 33 In keeping with open science best practices, we created an Open Science Framework project folder where the registration, protocol and other trial-related information are available. 34 CONSORT for pragmatic trials was used to report the trial. 35 ...

Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomized controlled trial
  • Citing Preprint
  • September 2024

... By creating a consensus-based framework for reporting, GLOBAL will help ensure that bibliometric methods are applied consistently and that results can be more reliably interpreted and compared across studies (35,36). The project, informed by a scoping review of bibliometric reporting recommendations (37), is currently in its Delphi process to refine a 32-item checklist. This will likely help to promote transparency, robustness, and completeness in the reporting of BAs (36,38,39,40). ...

Guidance for the Reporting of Bibliometric Analyses: A Scoping Review

... Pioneering journals, such as the BMJ, have adopted stronger policies that mandate data and code sharing [22]. The EQUATOR network published a clear position stating that every reporting guideline must include a data sharing item and that consensual and evidence-based DMSPs need to be developed in the spirit of reporting guidelines [23]. The Committee on Publication Ethics might also help by developing guidelines to manage situations when researchers refuse to share despite promises in their data sharing statements. ...

Reporting on data sharing: executive position of the EQUATOR Network

The BMJ

... This review was conducted according to COSMIN guidelines for systematic review of OMIs, and results reported in accordance with the PRISMA-COSMIN checklist for OMIs [15,16,18]. ...

Guideline for reporting systematic reviews of outcome measurement instruments (OMIs): PRISMA-COSMIN for OMIs 2024

Journal of Patient-Reported Outcomes

... By consolidating evidence on the factors that shape these decisions, this umbrella review can promote consistency in clinical practice and pinpoint areas in need of training or policy intervention [28]. Conducting an umbrella review is more efficient than undertaking multiple individual reviews, as it consolidates existing research into a single, accessible document [29]. ...

Protocol for the development of a reporting guideline for umbrella reviews on epidemiological associations using cross-sectional, case-control and cohort studies: the Preferred Reporting Items for Umbrella Reviews of Cross-sectional, Case-control and Cohort studies (PRIUR-CCC)

BMJ Open