Lifeng Lin’s research while affiliated with The University of Arizona and other places


Publications (145)


Figures: The proportion of significant interactions (P < 0.05) per meta-analysis using odds ratios, plotted against the number of studies per meta-analysis and against the width of the control group event rate range per meta-analysis; panel A is based on tertiles and panel B on quartiles of the control group event rate.
Variability of relative treatment effect among populations with low, moderate and high control group event rates: a meta-epidemiological study
  • Article
  • Full-text available

November 2024 · 22 Reads · M. Hassan Murad · Zhen Wang · Mengli Xiao · [...] · Lifeng Lin

Background: The current practice in guideline development is to use the control group event rate (CR) as a surrogate for baseline risk and to assume portability of the relative treatment effect across populations with low, moderate, and high baseline risk. We sought to emulate this practice in a very large sample of meta-analyses.

Methods: We retrieved data from all meta-analyses published in the Cochrane Database of Systematic Reviews (2003–2020) that evaluated a binary outcome, reported 2 × 2 data for each individual study, and included at least 4 studies. We excluded studies with no events. We conducted meta-analyses with odds ratios and relative risks and performed subgroup analyses based on tertiles of CR. In sensitivity analyses, we evaluated the use of the total event rate (TR) instead of CR and the use of quartiles instead of tertiles.

Results: The analysis included 2,531 systematic reviews (27,692 meta-analyses, 226,975 studies, 25,669,783 patients). The percentage of meta-analyses with a statistically significant interaction (P < 0.05) based on CR tertiles or quartiles ranged from 12% to 18% across the various sensitivity analyses. This percentage increased as the number of studies or the range of CR per meta-analysis increased, reflecting the increased power of the subgroup test. The percentages of meta-analyses with a statistically significant interaction (P < 0.05) based on TR quantiles were lower than those based on CR but remained higher than expected by chance.

Conclusion: This analysis suggests that when CR or TR is used as a surrogate for baseline risk, relative treatment effects may not be portable across populations with varying baseline risks in many meta-analyses. Categorization of the continuous CR variable and the failure to address measurement error limit inferences from such analyses and imply that CR is an undesirable source of baseline risk. Guideline developers and decision-makers should be provided with relative and absolute treatment effects that are conditioned on the baseline risk or derived from studies with baseline risk similar to that of their target populations.
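The interaction test emulated here can be sketched compactly. Below is a minimal Python illustration (not the authors' code): hypothetical 2 × 2 counts are converted to log odds ratios, studies are split into CR tertiles, each tertile is pooled by inverse-variance weighting, and a between-subgroup Q statistic supplies the interaction P value.

```python
# Minimal sketch of a subgroup interaction test across control-group event
# rate (CR) tertiles. All study counts below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical per-study 2x2 counts: (events_trt, n_trt, events_ctl, n_ctl).
studies = [
    (5, 100, 8, 100), (12, 150, 20, 150), (3, 80, 4, 80),
    (30, 200, 45, 200), (25, 120, 33, 118), (10, 90, 19, 95),
    (60, 180, 70, 175), (48, 160, 66, 160), (22, 70, 30, 72),
]

def log_or_and_var(a, n1, c, n2):
    """Log odds ratio and its variance, with a 0.5 continuity correction."""
    b, d = n1 - a, n2 - c
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

yi, vi, cr = zip(*[(*log_or_and_var(a, n1, c, n2), c / n2)
                   for a, n1, c, n2 in studies])
yi, vi, cr = map(np.array, (yi, vi, cr))

# Assign CR tertiles and pool each subgroup by inverse-variance weighting.
tertile = np.digitize(cr, np.quantile(cr, [1 / 3, 2 / 3]))
est, w_g = [], []
for g in range(3):
    w = 1 / vi[tertile == g]
    est.append(np.sum(w * yi[tertile == g]) / w.sum())
    w_g.append(w.sum())
est, w_g = np.array(est), np.array(w_g)

# Between-subgroup Q statistic (chi-square, G-1 df) as the interaction test.
overall = np.sum(w_g * est) / w_g.sum()
q_between = np.sum(w_g * (est - overall) ** 2)
p = stats.chi2.sf(q_between, df=len(est) - 1)
print(f"subgroup log-ORs: {est.round(3)}, interaction P = {p:.3f}")
```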


Figures: Process of identifying and analyzing the eligible data from the SMART Safety dataset; Venn diagram of the agreement among four quantitative assessments of IPB (two-sided and one-sided Egger's and Peters' tests) across the 277 meta-analyses assessed.
Assessment of inverse publication bias in safety outcomes: an empirical analysis

October 2024 · 11 Reads · BMC Medicine

Background: The aims of this study were to assess the presence of inverse publication bias (IPB) in adverse events, evaluate the performance of visual examination, and explore the impact of considering effect direction in statistical tests for such assessments.

Methods: We conducted a cross-sectional study using SMART Safety, the largest dataset for evidence synthesis of adverse events. The visual assessment was performed using contour-enhanced funnel plots, trim-and-fill funnel plots, and sample-size-based funnel plots. Two authors conducted visual assessments of these plots independently, and their agreement was quantified by kappa statistics. Additionally, IPB was quantitatively assessed using both the one- and two-sided Egger's and Peters' tests.

Results: In the SMART Safety dataset, we identified 277 main meta-analyses of safety outcomes with at least 10 individual estimates after dropping missing data. We found that about 13.7–16.2% of meta-analyses exhibited IPB according to the one-sided test results. The kappa statistics for the visual assessments roughly ranged from 0.3 to 0.5, indicating fair to moderate agreement. Moving from the two-sided to the one-sided Egger's test changed the significance status of 72 meta-analyses: 57 (79.2%) changed from significant to non-significant, while the remaining 15 (20.8%) changed from non-significant to significant.

Conclusions: Our findings provide supporting evidence of IPB in the SMART Safety dataset of adverse events. They also suggest the importance of researchers carefully accounting for the direction of statistical tests for IPB, as well as the challenges of assessing IPB with statistical methods, especially given that the number of studies is typically small. Qualitative assessments may be a necessary supplement for gaining a more comprehensive understanding of IPB.
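The one- versus two-sided distinction examined in this study is easy to see in code. The sketch below (hypothetical data, not the SMART Safety dataset) runs Egger's regression test and reports the two-sided P value along with both one-sided P values; which tail is relevant depends on the direction of suppression suspected, which for IPB of adverse events is opposite to that of classic publication bias.

```python
# Minimal sketch of Egger's regression test with one- and two-sided P values.
# The log odds ratios and standard errors below are hypothetical.
import numpy as np
from scipy import stats

yi = np.array([0.10, 0.25, -0.05, 0.40, 0.02, 0.55, -0.10, 0.30, 0.15, 0.60])
se = np.array([0.10, 0.18, 0.12, 0.30, 0.08, 0.35, 0.15, 0.22, 0.11, 0.40])

# Egger's test: regress yi/se on 1/se; the intercept measures asymmetry.
z, x = yi / se, 1 / se
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, z, rcond=None)[0]
resid = z - X @ beta
df = len(z) - 2
sigma2 = resid @ resid / df
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[0] / np.sqrt(cov[0, 0])

p_two = 2 * stats.t.sf(abs(t_stat), df)
p_upper = stats.t.sf(t_stat, df)   # one-sided: intercept > 0
p_lower = stats.t.cdf(t_stat, df)  # one-sided: intercept < 0
print(f"intercept = {beta[0]:.3f}, two-sided P = {p_two:.3f}, "
      f"one-sided P = {p_upper:.3f} (upper) / {p_lower:.3f} (lower)")
```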


Figures and tables: Flowchart of calculating the modified Fragility Index in a dose-finding trial; data summaries for the phase I AUY922 dose-escalation trial and the phase I pan-AKT inhibitor MK-2206 trial; mFI robustness-assessment results for all three trials.
Robustness Assessment of Oncology Dose-Finding Trials Using the Modified Fragility Index

October 2024 · 11 Reads

Simple Summary: In this article, the authors introduce a new metric called the modified Fragility Index (mFI) to assess the accuracy of determining the maximum tolerated dose (MTD) in early oncology clinical trials. The mFI measures how sensitive the MTD decision is to the inclusion of a few more participants in the trial. The authors analyzed three published cancer trials and found that two were robust to adding more participants, indicating that the MTD estimate remained stable. In the third trial, however, the MTD estimate was more fragile and could have changed with just one or two more participants. The mFI helps researchers make more reliable decisions about the appropriate MTD. By considering the potential impact of additional participants, researchers can improve accuracy and confidence in dose determination, leading to better treatment outcomes for patients.

Abstract: Objectives: The sample sizes of phase I trials are typically small, and some designs may lead to inaccurate estimation of the maximum tolerated dose (MTD). The objective of this study was to propose a metric assessing whether the MTD decision is sensitive to enrolling a few additional subjects in a phase I dose-finding trial. Methods: Numerous model-based and model-assisted designs have been proposed to improve the efficiency and accuracy of finding the MTD. The Fragility Index (FI) is a widely used metric that quantifies the statistical robustness of randomized controlled trials by estimating the number of events needed to change a statistically significant result to non-significant (or vice versa). We propose a modified Fragility Index (mFI), defined as the minimum number of additional participants required to potentially change the estimated MTD, to supplement existing designs by identifying fragile phase I trial results. Findings: Three oncology trials were used to illustrate how to evaluate the fragility of phase I trials using the mFI. The results showed that two of the trials were not sensitive to additional subjects' participation, while the third was quite fragile to one or two additional subjects. Conclusions: The mFI can be a useful metric for assessing the fragility of phase I trials and facilitating robust identification of the MTD.
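As a rough illustration of the idea behind the mFI, the sketch below uses a deliberately simplified, hypothetical MTD rule (the dose whose observed DLT rate is closest to a 25% target) rather than any of the model-based or model-assisted designs discussed in the paper, and enumerates how many extra participants would be needed to flip the MTD.

```python
# Minimal illustration (not the authors' implementation) of a modified
# Fragility Index: the smallest number of extra participants whose
# inclusion could change the estimated MTD. The trial data and the MTD
# rule are hypothetical and deliberately simple.
from fractions import Fraction
from itertools import product

TARGET = Fraction(1, 4)  # 25% target DLT rate
# Hypothetical dose-finding data: dose level -> [DLTs, patients treated].
trial = {1: (0, 3), 2: (1, 6), 3: (2, 6), 4: (3, 4)}

def mtd(data):
    """Dose with observed DLT rate closest to target (ties -> lower dose)."""
    return min(data, key=lambda d: (abs(Fraction(data[d][0], data[d][1]) - TARGET), d))

def modified_fragility_index(data, max_extra=5):
    base = mtd(data)
    for m in range(1, max_extra + 1):
        # Enumerate every way to give m extra patients a dose and an outcome.
        for assignment in product([(d, tox) for d in data for tox in (0, 1)],
                                  repeat=m):
            new = {d: list(v) for d, v in data.items()}
            for d, tox in assignment:
                new[d][0] += tox
                new[d][1] += 1
            if mtd(new) != base:
                return m  # the MTD decision is fragile to m extra patients
    return None  # robust up to max_extra additional participants

print("estimated MTD:", mtd(trial))
print("mFI:", modified_fragility_index(trial))
```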


Harm effects in non-registered versus registered randomized controlled trials of medications: a retrospective cohort study of clinical trials

October 2024 · 20 Reads · BMC Medicine

Background: Trial registration aims to address potential bias from selective reporting or non-reporting of findings and therefore has a vital role in promoting the transparency and accountability of clinical research. In this study, we aimed to investigate the influence of trial registration on estimated harm effects in randomized controlled trials of medication interventions.

Methods: We searched PubMed for systematic reviews and meta-analyses of randomized trials on medication harms indexed between January 1, 2015, and January 1, 2020. To be included in the analyses, eligible meta-analyses had to contain at least five randomized trials with distinct registration statuses (i.e., prospectively registered, retrospectively registered, and non-registered) and 2 × 2 table data for adverse events for each trial. To control for potential confounding, trials in each meta-analysis were analyzed within confounder-harmonized groups (e.g., by dosage) identified using the directed acyclic graph method. Harm estimates arising from trials with different registration statuses were compared within the confounder-harmonized groups using hierarchical linear regression. Results are shown as ratios of odds ratios (ORs) with 95% confidence intervals (CIs).

Results: The dataset consists of 629 meta-analyses of harms with 10,069 trials. Of these trials, 74.3% were registered and 23.9% were not; among those registered, 70.6% were prospectively registered and 26.3% retrospectively registered. In comparison to prospectively registered trials, both non-registered trials (ratio of ORs = 0.82, 95% CI 0.68 to 0.98, P = 0.03) and retrospectively registered trials (ratio of ORs = 0.75, 95% CI 0.66 to 0.86, P < 0.01) had lower ORs for harms, based on 69 and 126 confounder-harmonized groups, respectively. The OR of harms did not differ between retrospectively registered and non-registered trials (ratio of ORs = 1.02, 95% CI 0.85 to 1.23, P = 0.83), based on 76 confounder-harmonized groups.

Conclusions: Medication-related harms may be understated in non-registered trials, and there was no obvious evidence that retrospective registration had a demonstrable benefit in reducing such selective or absent reporting. Prospective registration is highly recommended for future trials.
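A simplified version of the comparison behind a ratio of odds ratios can be sketched as follows; this is a fixed-effect contrast within a single hypothetical confounder-harmonized group, not the paper's hierarchical linear regression.

```python
# Minimal sketch of a ratio of odds ratios (ROR): pool harm log-ORs
# separately by registration status, then contrast the pooled estimates.
# All effect estimates below are hypothetical.
import numpy as np
from scipy import stats

def pool(yi, vi):
    """Inverse-variance pooled estimate and its variance."""
    w = 1 / np.asarray(vi)
    return np.sum(w * np.asarray(yi)) / w.sum(), 1 / w.sum()

# Hypothetical harm log-ORs and their variances, by registration status.
prospective = ([0.35, 0.20, 0.50, 0.28], [0.04, 0.06, 0.09, 0.05])
nonregistered = ([0.10, 0.22, 0.05], [0.05, 0.08, 0.07])

m1, v1 = pool(*prospective)
m0, v0 = pool(*nonregistered)
diff, se = m0 - m1, np.sqrt(v0 + v1)  # log ROR: non-registered vs prospective
ci = np.exp([diff - 1.96 * se, diff + 1.96 * se])
p = 2 * stats.norm.sf(abs(diff) / se)
print(f"ROR = {np.exp(diff):.2f}, 95% CI {ci.round(2)}, P = {p:.3f}")
```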


Towards the automatic risk of bias assessment on randomized controlled trials: A comparison of RobotReviewer and humans

September 2024 · 19 Reads · Research Synthesis Methods

RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two different approaches: (1) manually by human reviewers, and (2) automatically by RobotReviewer. The manual assessment was based on two groups working independently, with two additional rounds of verification. The agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistics, based on the comparison of a binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21–0.30) and blinding of outcome assessors (κ = 0.27, 95% CI: 0.23–0.31), while agreement was moderate for random sequence generation (κ = 0.46, 95% CI: 0.41–0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55–0.64). The findings demonstrate domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that it might be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.
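Both agreement measures used here are straightforward to compute. The sketch below (hypothetical labels, not the study's data) calculates the concordance rate and Cohen's kappa for the binary low vs. high/unclear classification.

```python
# Minimal sketch: concordance rate and Cohen's kappa for two raters on a
# binary risk-of-bias classification. The labels below are hypothetical.
import numpy as np

def concordance_and_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)  # observed agreement (concordance rate)
    # Expected chance agreement from each rater's marginal rates.
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)
             for c in np.unique(np.r_[r1, r2]))
    return po, (po - pe) / (1 - pe)

human = ["low", "low", "high/unclear", "low", "high/unclear",
         "low", "low", "high/unclear"]
robot = ["low", "high/unclear", "high/unclear", "low", "low",
         "low", "low", "low"]
po, kappa = concordance_and_kappa(human, robot)
print(f"concordance = {po:.2%}, kappa = {kappa:.2f}")
```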





Figures: Consistency between the rapid approach through ClinicalTrials.gov and the systematic approach, regarding point estimates and regarding the direction and significance (P value) of effects.
Accelerating evidence synthesis for safety assessment through ClinicalTrials.gov platform: a feasibility study

July 2024 · 21 Reads

Background: Standard systematic reviews can be labor-intensive and time-consuming, making it difficult to provide timely evidence during an urgent public health emergency such as a pandemic. ClinicalTrials.gov provides a promising way to accelerate evidence production.

Methods: We conducted a search on PubMed to gather systematic reviews containing a minimum of 5 studies focused on safety aspects derived from randomized controlled trials (RCTs) of pharmacological interventions, aiming to establish a real-world dataset. The registration information of each trial from eligible reviews was further collected and verified. The meta-analytic data were then re-analyzed using (1) the full meta-analytic data with all trials and (2) emulated rapid data restricted to trials that had been registered and had posted results on ClinicalTrials.gov, under the same synthesis methods. The effect estimates of the full and rapid meta-analyses were then compared.

Results: The real-world dataset comprises 558 meta-analyses. Among them, 56 (10.0%) included RCTs that were not registered in ClinicalTrials.gov. For the remaining 502 meta-analyses, the median percentage of RCTs registered within each meta-analysis was 70.1% (interquartile range: 33.3% to 88.9%). Under a 20% bias threshold, rapid meta-analyses conducted through ClinicalTrials.gov achieved accurate point estimates in 77.4% (using the MH model) to 83.1% (using the GLMM model) of cases; 91.0% to 95.3% of these analyses accurately predicted the direction of effects.

Conclusions: Utilizing the ClinicalTrials.gov platform for safety assessment with a minimum of 5 RCTs holds significant potential for accelerating evidence synthesis to support urgent decision-making.
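A plausible reading of the accuracy checks is sketched below with hypothetical log odds ratios; the 20% bias threshold is interpreted here as a relative difference in point estimates, which may differ from the paper's exact definition.

```python
# Minimal sketch: compare rapid (registered-trials-only) estimates with full
# meta-analysis estimates under a 20% relative-difference threshold, and
# check direction agreement. All effect estimates are hypothetical log ORs.
import numpy as np

full = np.array([0.40, -0.15, 0.10, 0.75, -0.30])   # full meta-analyses
rapid = np.array([0.45, -0.10, 0.25, 0.70, 0.05])   # ClinicalTrials.gov only

within_20pct = np.abs(rapid - full) <= 0.2 * np.abs(full)
same_direction = np.sign(rapid) == np.sign(full)
print(f"accurate point estimates: {within_20pct.mean():.0%}, "
      f"direction agreement: {same_direction.mean():.0%}")
```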


Methods for assessing inverse publication bias of adverse events

July 2024 · 14 Reads · 1 Citation · Contemporary Clinical Trials

In medical research, publication bias (PB) poses great challenges to the conclusions drawn from systematic reviews and meta-analyses. The majority of methodological research related to classic PB has focused on examining the potential suppression of studies reporting effects close to the null or statistically non-significant results. Such suppression is common, particularly when the study outcome concerns the effectiveness of a new intervention. On the other hand, attention has recently been drawn to so-called inverse publication bias (IPB) within the evidence synthesis community. It can occur when assessing adverse events, because researchers may favor evidence showing a similar safety profile for an adverse event between a new intervention and a control group. Compared with classic PB, IPB is much less recognized in the current literature; methods designed for classic PB may be inaccurately applied to address IPB, potentially leading to entirely incorrect conclusions. This article aims to provide a collection of accessible methods to assess IPB for adverse events. Specifically, we discuss the relevance of and differences between classic PB and IPB. We also demonstrate visual assessment through contour-enhanced funnel plots tailored to adverse events, as well as popular quantitative methods, including Egger's regression test, Peters' regression test, and the trim-and-fill method for such cases. Three real-world examples are presented to illustrate the bias in various scenarios, and the implementations are illustrated with statistical code. We hope this article offers valuable insights for evaluating IPB in future systematic reviews of adverse events.
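Of the quantitative methods listed, Peters' regression test is shown below as a minimal sketch on hypothetical 2 × 2 data; it regresses the log odds ratio on the inverse total sample size with weights built from total event and non-event counts (one common formulation), avoiding the structural correlation between a log OR and its standard error that troubles Egger's test for rare outcomes.

```python
# Minimal sketch of Peters' regression test for small-study effects.
# Per-study 2x2 counts are hypothetical: (events_trt, n_trt, events_ctl, n_ctl).
import numpy as np
from scipy import stats

data = np.array([
    (4, 120, 2, 118), (10, 250, 6, 245), (1, 60, 2, 62), (15, 400, 9, 390),
    (3, 90, 1, 85), (20, 500, 12, 510), (2, 75, 4, 70), (8, 200, 5, 210),
    (6, 150, 3, 150), (12, 300, 10, 310),
])
a, n1, c, n2 = data.T
b, d = n1 - a, n2 - c
yi = np.log(((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5)))
N = n1 + n2
# Peters' weights from total events and non-events (a common formulation).
w = 1 / (1 / (a + c) + 1 / (b + d))

# Weighted least squares of yi on 1/N; the slope indicates asymmetry.
X = np.column_stack([np.ones_like(yi), 1 / N])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yi)
resid = yi - X @ beta
df = len(yi) - 2
sigma2 = (resid @ (w * resid)) / df
cov = sigma2 * np.linalg.inv(X.T @ W @ X)
t_stat = beta[1] / np.sqrt(cov[1, 1])
print(f"slope = {beta[1]:.3f}, two-sided P = {2 * stats.t.sf(abs(t_stat), df):.3f}")
```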


Citations (59)


... For the assessment of IPB, no single method maintains optimal performance across all settings [35,36]; all statistical methods require specific assumptions regarding the nature of the published and unpublished studies. Readers interested in a comprehensive review of the available methods for assessing IPB may refer to Xing et al. [37]. For the binary outcomes, especially for rare events, the effect measure estimates within individual studies could be mathematically related to their standard errors. ...

Reference:

Assessment of inverse publication bias in safety outcomes: an empirical analysis
Methods for assessing inverse publication bias of adverse events
  • Citing Article
  • July 2024

Contemporary Clinical Trials

... On the other hand, AI chatbots such as ChatGPT, Google Bard [now Gemini], and Chatsonic have limitations in citing references accurately and often provide DOIs that are wrong or nonexistent (Aliwijaya, 2023a; Graf et al., 2023). In line with this, chatbots tend to fabricate citations that do not exist, which can have negative consequences if used unknowingly in scientific contexts (Clelland et al., 2024). ...

Exploring the Limits of Artificial Intelligence for Referencing Scientific Articles
  • Citing Article
  • April 2024

American Journal of Perinatology

... In practice, the control group event rate (CR) has been used as a surrogate for the baseline risk despite many known limitations of CR, particularly its susceptibility to measurement error [11][12][13][14]. When developing a clinical practice guideline, the default option in guideline development software is to use CR in place of baseline risk [15], although external sources of baseline risk were encouraged [3]. ...

Hierarchical models that address measurement error are needed to evaluate the correlation between treatment effect and control group event rate
  • Citing Article
  • March 2024

Journal of Clinical Epidemiology

... This study utilized a subset of the data from a previous study [10]. In summary, we conducted a PubMed search for systematic reviews of adverse events published between January 1, 2015 and January 1, 2020. We included systematic reviews of RCTs focusing on healthcare interventions with adverse events as the exclusive outcome. ...

Influence of lack of blinding on the estimation of medication-related harms: a retrospective cohort study of randomized controlled trials

BMC Medicine

... Sensitivity analysis with the exclusion of outliers is common in scientific literature [90][91][92][93]. Therefore, we carried out the combinatorial exclusion of Experts 4, 6 and 9, individually, in pairs and in total, to verify the behavior of the remaining results. ...
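The combinatorial exclusion described in this excerpt amounts to re-running the analysis over every subset of the suspected outliers; a minimal sketch with hypothetical expert scores follows.

```python
# Minimal sketch: recompute a summary statistic with each subset of
# suspected outliers excluded (individually, in pairs, and all together).
# The expert scores and the choice of suspects are hypothetical.
from itertools import combinations
from statistics import mean

scores = {1: 7.2, 2: 6.8, 3: 7.0, 4: 3.1, 5: 6.9,
          6: 9.8, 7: 7.1, 8: 6.7, 9: 2.5, 10: 7.3}
suspects = [4, 6, 9]

for r in range(1, len(suspects) + 1):
    for excluded in combinations(suspects, r):
        kept = [v for k, v in scores.items() if k not in excluded]
        print(f"excluding {excluded}: mean = {mean(kept):.2f}")
```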

Sensitivity analysis with iterative outlier detection for systematic reviews and meta-analyses
  • Citing Article
  • February 2024

Statistics in Medicine

... Safe waterbirth requires attendance of a qualified intrapartum professional, such as a midwife, but does not otherwise require expensive therapies to be initiated and could be performed in either in-hospital or out-of-hospital birthing scenarios. In fact, this method dates back at least to ancient Roman and Greek times and could be effective in reducing first-stage pain and potentially providing partial relief during the second stage of labor [18]. A systematic review on this subject pooled trials and showed that in a carefully selected population, hydrotherapy can reduce the requirement for parenteral opiates or epidural analgesia, increase maternal satisfaction, and may even be protective against adverse events such as postpartum hemorrhage [19]. ...

Water birth: a systematic review and meta-analysis of maternal and neonatal outcomes
  • Citing Article
  • March 2024

American Journal of Obstetrics and Gynecology

... When including studies of relatively small sample sizes and occurrences of rare events or zero events, GLMM outperformed the conventional two-step inverse variance method and achieved smaller biases and mean squared errors [21,22]. As a sensitivity analysis, we employed conventional random-effects models using the inverse variance method. The incidence rates were transformed using the Freeman-Tukey double arcsine method with a continuity correction of 0.5 for studies with zero events. ...
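For reference, the Freeman-Tukey double arcsine transform mentioned in this excerpt is shown below as a small sketch; the 0.5 continuity correction follows the excerpt's description, although the transform itself is already defined at zero events.

```python
# Minimal sketch of the Freeman-Tukey double arcsine transform for a
# proportion x/n, which stabilizes its variance (approximately 1/(4n + 2)
# for this averaged form). The 0.5 boundary correction mirrors the excerpt.
import math

def freeman_tukey(x, n):
    """Averaged double arcsine transform of x events out of n."""
    if x == 0 or x == n:  # 0.5 correction at the boundary, as described;
        x = x + 0.5 if x == 0 else x - 0.5  # FT is defined at x = 0 anyway
    return 0.5 * (math.asin(math.sqrt(x / (n + 1)))
                  + math.asin(math.sqrt((x + 1) / (n + 1))))

print(freeman_tukey(0, 50), freeman_tukey(5, 50))
```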

The impact of studies with no events in both arms on meta-analysis of rare events: A simulation study using generalized linear mixed model

Research Methods in Medicine & Health Sciences

... A systematic review and network meta-analysis on the efficacy and safety of outpatient cervical ripening methods by Vilchez et al. determined that vaginal misoprostol (25 µg) was most effective (shortest time from intervention to delivery), while not increasing cesarean deliveries, need for subsequent ripening agents, rates of low Apgar scores, or hyperstimulation. This review did not include balloon cervical ripening methods [16]. ...

Outpatient cervical ripening and labor induction with low-dose vaginal misoprostol reduces the interval to delivery: a systematic review and network meta-analysis
  • Citing Article
  • July 2023

American Journal of Obstetrics and Gynecology

... [33] In the presence of a convincing dose-response gradient or large effect, NMA estimates were rated up [35,36]. A detailed description of the GRADEing procedure can be found in Supplemental Appendix 6. ...

GRADE GUIDANCE 38: Updated guidance for rating up certainty of evidence due to a dose-response gradient
  • Citing Article
  • September 2023

Journal of Clinical Epidemiology

... This trial was approved by the Institutional Review Board at Anhui Medical University (No. 83220405), and has been registered with the Chinese Clinical Trial Registry Center (Identifier: ChiCTR2200062206). The protocol for the trial was published previously [15], and findings are reported following the CONSORT 2010 statement and its extension to randomized crossover trials [16]. ...

Data extraction error in pharmaceutical versus non-pharmaceutical interventions for evidence synthesis: Study protocol for a crossover trial

Contemporary Clinical Trials Communications