Article

Does artificial intelligence close gaps in clinical pharmacology in the ICU?

Article
Full-text available
Titrating tacrolimus concentration in liver transplant recipients remains a challenge in the early post-transplant period. This multicenter retrospective cohort study aimed to develop and validate a machine-learning algorithm to predict tacrolimus concentration. Data from 443 patients undergoing liver transplantation between 2017 and 2020 at an academic hospital in South Korea were collected to train machine-learning models. Long short-term memory (LSTM) and gradient-boosted regression tree (GBRT) models were developed using time-series doses and concentrations of tacrolimus with covariates of age, sex, weight, height, liver enzymes, total bilirubin, international normalized ratio, albumin, serum creatinine, and hematocrit. We compared their performance with linear regression and population pharmacokinetic models, followed by external validation on the eICU Collaborative Research Database, collected in the United States between 2014 and 2015. In external validation, the LSTM outperformed the GBRT, linear regression, and population pharmacokinetic models in median performance error (8.8%, 25.3%, 13.9%, and −11.4%, respectively; P < 0.001) and median absolute performance error (22.3%, 33.1%, 26.8%, and 23.4%, respectively; P < 0.001). Dosing based on the LSTM model's suggestions achieved therapeutic concentrations more frequently (chi-square test, P < 0.001), and doses outside the suggested range were associated with ICU stays longer by an average of 2.5 days (P = 0.042). In conclusion, machine-learning models showed excellent performance in predicting tacrolimus concentration in liver transplant recipients and can be useful for concentration titration in these patients.
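The bias and precision metrics quoted above are standard in therapeutic drug monitoring. A minimal sketch of how they are computed, with hypothetical concentration values (not study data):

```python
import numpy as np

def performance_errors(predicted, observed):
    """Median performance error (MPE, bias) and median absolute
    performance error (precision), as percentages of the observed value."""
    pe = 100.0 * (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
    return np.median(pe), np.median(np.abs(pe))

# Hypothetical tacrolimus trough concentrations (ng/mL)
observed = np.array([8.2, 10.5, 6.9, 12.1])
predicted = np.array([9.0, 9.8, 7.4, 13.0])
mpe, mape = performance_errors(predicted, observed)
print(f"MPE {mpe:.1f}%, MAPE {mape:.1f}%")
```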
Article
Full-text available
Continuous unfractionated heparin is widely used in intensive care, yet its complex pharmacokinetic properties complicate the determination of appropriate doses. To address this challenge, we developed machine-learning models to predict over- and under-dosing, based on anti-Xa results, using a single-center retrospective dataset. The random forest model achieved a mean AUROC of 0.80 [0.77-0.83], while the XGBoost model reached a mean AUROC of 0.80 [0.76-0.83]. Feature importance was used to enhance the interpretability of the models, a critical factor for clinician acceptance. After prospective validation, machine-learning models such as those developed in this study could be implemented within a computerized physician order entry (CPOE) system as a clinical decision support system (CDSS).
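To make the approach concrete, a minimal sketch of a random-forest over-/under-dosing classifier with AUROC and feature importances; the features and data are simulated stand-ins, not the study's variables or code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g., weight, creatinine, heparin rate, platelets
# Hypothetical label: 1 = supratherapeutic anti-Xa result
y = (X[:, 2] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("Feature importances:", clf.feature_importances_)
```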
Preprint
Full-text available
INTRODUCTION: Intravenous (IV) medications are a fundamental cause of fluid overload (FO) in the intensive care unit (ICU); however, the association between IV medication use (including volume), administration timing, and FO occurrence remains unclear. METHODS: This retrospective cohort study included consecutive adults admitted to an ICU for ≥72 hours with available fluid balance data. FO was defined as a positive fluid balance ≥7% of admission body weight within 72 hours of ICU admission. After reviewing medication administration record (MAR) data in three-hour periods, IV medication exposure was categorized into clusters using principal component analysis (PCA) and a Restricted Boltzmann Machine (RBM). Medication regimens of patients with and without FO were compared within clusters using the Wilcoxon rank-sum test to identify temporal clusters associated with FO. Exploratory analyses were conducted on the medication cluster most associated with FO, focusing on frequently appearing medications used in the first 24 hours. RESULTS: FO occurred in 127/927 (13.7%) of the patients enrolled. Patients received a median (IQR) of 31 (13-65) discrete IV medication administrations over the 72-hour period. Across all 47,803 IV medication administrations, ten unique IV medication clusters were identified, with 121-130 medications in each cluster. Among the ten clusters, cluster 7 had the greatest association with FO; the mean number of cluster 7 medications received was significantly greater in patients in the FO cohort than in patients who did not experience FO (25.6 vs. 10.9; p < 0.0001). Of the 127 medications in cluster 7, 51 (40.2%) appeared in >5 separate 3-hour periods during the 72-hour study window. The most common cluster 7 medications included continuous infusions, antibiotics, and sedatives/analgesics. Adding cluster 7 medications to a prediction model with APACHE II score and receipt of diuretics improved the model's ability to predict fluid overload (AUROC 5.65, p = 0.0004). CONCLUSIONS: Using ML approaches, a unique IV medication cluster was strongly associated with FO. Incorporating this cluster improved the ability to predict development of fluid overload in ICU patients compared with traditional prediction models. This method may be further developed into real-time clinical applications to improve early detection of adverse outcomes.
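One plausible reading of the clustering pipeline described above, sketched with scikit-learn on a simulated binary medication-exposure matrix; the authors' exact pipeline and hyperparameters may differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows = medications, columns = 3-hour exposure periods (1 = administered); illustrative
X = (rng.random((300, 24)) < 0.2).astype(float)

# RBM learns a low-dimensional hidden representation of exposure patterns
hidden = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=50,
                      random_state=1).fit_transform(X)
clusters = KMeans(n_clusters=10, n_init=10, random_state=1).fit_predict(hidden)
print(np.bincount(clusters))                       # medications per cluster
embedding = PCA(n_components=2).fit_transform(hidden)  # 2-D view for inspection
```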
Article
Full-text available
Unsupervised clustering of intensive care unit (ICU) medications may identify unique medication clusters (i.e., pharmacophenotypes) in critically ill adults. We performed an unsupervised analysis with a Restricted Boltzmann Machine of 991 medication profiles of patients managed in the ICU to explore pharmacophenotypes that correlated with ICU complications (e.g., mechanical ventilation) and patient-centered outcomes (e.g., length of stay, mortality). Six unique pharmacophenotypes were observed, with distinct medication profiles and clinically relevant differences in ICU complications and patient-centered outcomes. While pharmacophenotypes 2 and 4 had no statistically significant difference in ICU length of stay, duration of mechanical ventilation, or duration of vasopressor use, their mortality differed significantly (9.0% vs. 21.9%, p < 0.0001). Pharmacophenotype 4 had a mortality rate of 21.9%, compared with rates ranging from 2.5% to 9% for the remaining pharmacophenotypes. Phenotyping approaches have shown promise in classifying the heterogeneous syndromes of critical illness to predict treatment response and guide clinical decision support systems, but they have never included comprehensive medication information. This first-ever machine-learning approach revealed differences among empirically derived subgroups of ICU patients that are not typically revealed by traditional classifiers. Identification of pharmacophenotypes may enable enhanced decision making to optimize treatment decisions.
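The mortality contrast reported above can be checked with a standard chi-square test; the sketch below uses hypothetical cohort sizes scaled to the quoted 9.0% and 21.9% rates:

```python
from scipy.stats import chi2_contingency

#                 died  survived
table = [[18, 182],   # phenotype 2: 9.0% mortality (hypothetical n=200)
         [44, 157]]   # phenotype 4: 21.9% mortality (hypothetical n=201)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")
```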
Article
Full-text available
Aim: To develop models that predict the presence of medication errors (MEs) (prescription, preparation, administration, and monitoring) using machine learning in NICU patients. Design: Prospective, observational cohort study analyzed with machine-learning (ML) algorithms using random data splitting. Setting: A 22-bed NICU in Ankara, Turkey, between February 2020 and July 2021. Results: A total of 11,908 medication orders (28.9 orders/patient) for 412 NICU patients (5.53 drugs/patient/day) who received 2,280 prescriptions over 32,925 patient-days were analyzed. At least one physician-related ME and one nurse-related ME were found in 174 (42.2%) and 235 (57.0%) of the patients, respectively. The parameters with the highest correlation with ME occurrence, subsequently included in the model, were: total number of drugs, anti-infective drugs, nervous system drugs, 5-min APGAR score, postnatal age, alimentary tract and metabolism drugs, and respiratory system drugs as patient-related parameters; and weekly working hours of nurses, weekly working hours of physicians, and number of nurses' monthly shifts as care-provider-related parameters. The resulting model showed high performance in predicting ME presence (AUC: 0.920; 95% CI: 0.876–0.970) and is accessible online (http://softmed.hacettepe.edu.tr/NEO-DEER_Medication_Error/). Conclusion: This is the first developed and validated model to predict the presence of MEs using work-environment and pharmacotherapy parameters with high-performance ML algorithms in NICU patients. This approach and the current model hold promise for implementing targeted/precision screening to prevent MEs in neonates. Clinical Trial Registration: ClinicalTrials.gov, identifier NCT04899960.
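A minimal sketch of the correlation-based screening step described above, ranking candidate predictors by point-biserial correlation with the binary ME outcome; the data and feature names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(2)
features = {"total_drugs": rng.poisson(6, 400).astype(float),
            "apgar_5min": rng.integers(3, 11, 400).astype(float),
            "nurse_hours_week": rng.normal(45, 5, 400)}
# Hypothetical binary outcome: 1 = at least one medication error
me = (features["total_drugs"] + rng.normal(0, 3, 400) > 7).astype(int)

for name, x in features.items():
    r, p = pointbiserialr(me, x)   # correlation of binary outcome with predictor
    print(f"{name}: r={r:.2f}, p={p:.3g}")
```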
Article
Full-text available
Background: Beta-lactam antimicrobial concentrations are frequently suboptimal in critically ill patients. Population pharmacokinetic (PopPK) modeling is the gold standard for predicting drug concentrations. However, currently available PopPK models often lack predictive accuracy, making them less suited to guide dosing-regimen adaptations. Furthermore, many models developed for clinical applications lack uncertainty quantification. We therefore aimed to develop machine-learning (ML) models for the prediction of piperacillin plasma concentrations that also provide uncertainty quantification, with a view to clinical practice. Methods: Blood samples for piperacillin analysis were prospectively collected from critically ill patients receiving continuous infusion of piperacillin/tazobactam. Interpretable ML models for the prediction of piperacillin concentrations were designed using CatBoost and Gaussian processes. Distribution-based uncertainty quantification was added to the CatBoost model using a proposed Quantile Ensemble method, usable with any model that optimizes a quantile function. These models were evaluated using the distribution coverage error, a proposed interpretable calibration metric for uncertainty quantification. Development and internal evaluation of the ML models were performed on the Ghent University Hospital database (752 piperacillin concentrations from 282 patients). The ML models were then compared with a published PopPK model on a database from the University Medical Center Groningen, where a different dosing regimen is used (46 piperacillin concentrations from 15 patients). Results: The best-performing model was the CatBoost model, with an RMSE and R² of 31.94 and 0.64 in internal evaluation with the previous concentration, and 33.53 and 0.60 without it. Furthermore, the results demonstrate the added value of the proposed Quantile Ensemble model in providing clinically useful individualized uncertainty predictions, and show the limits of homoscedastic methods such as Gaussian processes in clinical applications. Conclusions: Our results show that ML models can estimate piperacillin concentrations with high predictive accuracy when dosing regimens identical to those in the training data are used, while providing highly relevant uncertainty predictions. However, generalization to other dosing schemes is limited. Notwithstanding, incorporating ML models into therapeutic drug monitoring programs appears promising, and the current work provides a basis for validating the model in clinical practice.
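A minimal sketch of a quantile ensemble in the spirit of the method described above: several CatBoost regressors, each trained on a different quantile of the concentration, together yield an individualized uncertainty band (simulated data; this is not the authors' code):

```python
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))   # e.g., dose rate, creatinine, weight, ... (assumed)
y = 50 + 10 * X[:, 0] + 5 * rng.normal(size=400)   # piperacillin concentration, mg/L

# One model per quantile; CatBoost's built-in Quantile loss
models = {q: CatBoostRegressor(loss_function=f"Quantile:alpha={q}",
                               iterations=300, verbose=False).fit(X, y)
          for q in (0.1, 0.5, 0.9)}
x_new = X[:1]
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted 10th/50th/90th percentiles: {lo:.1f}/{med:.1f}/{hi:.1f} mg/L")
```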
Article
Full-text available
This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2022. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2022. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901.
Article
Full-text available
Model-informed precision dosing (MIPD) approaches typically apply maximum a posteriori (MAP) Bayesian estimation to determine individual pharmacokinetic (PK) parameters with the goal of optimizing future dosing regimens. This process combines knowledge about the individual, in the form of drug levels or pharmacodynamic biomarkers, with prior knowledge of the drug PK in the general population. Use of “flattened priors” (FP), in which the weight of the model priors is reduced relative to observations about the patient, has been previously proposed to estimate individual PK parameters in instances where the patient is poorly described by the PK model. However, little is known about the predictive performance of FP and when to apply FP in MIPD. Here, FP is evaluated in a data set of 4679 adult patients treated with vancomycin. Depending on the PK model, prediction error could be reduced by applying FP in 42–55% of PK parameter estimations. Machine learning (ML) models could identify instances where FP would outperform MAP with a specificity of 81–86%, reducing the overall root mean squared error (RMSE) of PK model predictions by 12–22% (0.5–1.2 mg/L) relative to MAP alone. The factors most indicative of the use of FP were past prediction residuals and bias in past PK predictions. A more clinically practical minimal model was developed using only these two features, reducing RMSE by 5–18% (0.20–0.93 mg/L) relative to MAP. This hybrid ML/PK approach advances the precision dosing toolkit by leveraging the power of ML while maintaining the mechanistic insight and interpretability of pharmacokinetic models.
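The flattening idea can be illustrated with a toy MAP objective in which a weight lam < 1 down-weights the population prior relative to the patient's observations (lam = 1 recovers standard MAP). The steady-state model C = rate/CL and all numbers below are assumptions, not the paper's models:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rate = 100.0                    # mg/h infusion rate (hypothetical)
obs = np.array([22.0, 24.0])    # observed concentrations, mg/L (hypothetical)
cl_pop, omega = 3.0, 0.3        # population prior: ln(CL) ~ N(ln 3.0, 0.3^2)
sigma = 2.0                     # residual error SD, mg/L

def neg_map(log_cl, lam):
    pred = rate / np.exp(log_cl)                       # steady-state C = rate/CL
    loglik = -0.5 * np.sum((obs - pred) ** 2) / sigma**2
    logprior = -0.5 * (log_cl - np.log(cl_pop)) ** 2 / omega**2
    return -(loglik + lam * logprior)                  # lam scales the prior weight

for lam in (1.0, 0.2):          # standard MAP vs. flattened prior
    res = minimize_scalar(neg_map, bounds=(-2, 3), args=(lam,), method="bounded")
    print(f"lam={lam}: CL = {np.exp(res.x):.2f} L/h")
```

With lam = 0.2 the estimate moves toward the data-implied clearance (about 100/23 ≈ 4.3 L/h) and away from the population prior of 3.0 L/h.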
Article
Full-text available
There is a growing focus on making clinical trials more inclusive, but the design of trial eligibility criteria remains challenging1–3. Here we systematically evaluate the effect of different eligibility criteria on cancer trial populations and outcomes with real-world data, using the computational framework of Trial Pathfinder. We apply Trial Pathfinder to emulate completed trials of advanced non-small-cell lung cancer using data from a nationwide database of electronic health records comprising 61,094 patients with advanced non-small-cell lung cancer. Our analyses reveal that many common criteria, including exclusions based on several laboratory values, had a minimal effect on the trial hazard ratios. When we used a data-driven approach to broaden restrictive criteria, the pool of eligible patients more than doubled on average and the hazard ratio of the overall survival decreased by an average of 0.05. This suggests that many patients who were not eligible under the original trial criteria could potentially benefit from the treatments. We further support our findings through analyses of other types of cancer and patient-safety data from diverse clinical trials. Our data-driven methodology for evaluating eligibility criteria can facilitate the design of more-inclusive trials while maintaining safeguards for patient safety.
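A minimal sketch of the emulation idea: re-estimate the treatment hazard ratio under a narrow versus a broadened eligibility filter. It assumes the lifelines package and an illustrative cohort dataframe, not the Trial Pathfinder code:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),       # emulated trial arm
    "albumin": rng.normal(3.5, 0.6, n),     # example lab-value criterion
    "time": rng.exponential(12, n),         # months of follow-up
    "event": rng.integers(0, 2, n),         # 1 = death observed
})

for label, cohort in [("narrow", df[df.albumin >= 3.5]), ("broadened", df)]:
    hr = np.exp(CoxPHFitter().fit(cohort, "time", "event").params_["treated"])
    print(f"{label}: n={len(cohort)}, HR={hr:.2f}")
```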
Article
Full-text available
Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals1–3, but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients1,4–6. To tackle this sequential decision-making problem, we developed a reinforcement-learning agent, the Artificial Intelligence (AI) Clinician, which extracted implicit knowledge from an amount of patient data that exceeds many-fold the lifetime experience of human clinicians and learned optimal treatment by analyzing a myriad of (mostly suboptimal) treatment decisions. We demonstrate that the value of the AI Clinician's selected treatment is on average reliably higher than that of human clinicians. In a large validation cohort independent of the training data, mortality was lowest in patients for whom clinicians' actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.
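A heavily simplified sketch of the underlying technique, tabular Q-learning over discretized patient states and a grid of (fluid, vasopressor) dose bins; everything here is a toy assumption, and the AI Clinician itself used a far richer state space learned offline from logged patient data:

```python
import numpy as np

n_states, n_actions = 50, 25      # e.g., 5 fluid bins x 5 vasopressor bins
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99
rng = np.random.default_rng(5)

for _ in range(10_000):           # replay of logged transitions (simulated here)
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.integers(n_states)
    r = rng.choice([0.0, 1.0], p=[0.9, 0.1])   # e.g., +1 for survival at episode end
    # Standard Q-learning update toward the bootstrapped target
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

policy = Q.argmax(axis=1)         # recommended dose bin per discretized state
print(policy[:10])
```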
Article
Study objectives: The objective of this study was to develop and externally validate a model to predict adjunctive vasopressin response in patients with septic shock being treated with norepinephrine, for bedside use in the intensive care unit. Design: This was a retrospective analysis of two adult tertiary intensive care unit septic shock populations. Setting: Barnes-Jewish Hospital (BJH) from 2010 to 2017 and Beth Israel Deaconess Medical Center (BIDMC) from 2001 to 2012. Patients: Two septic shock populations (548 BJH patients and 464 BIDMC patients) that received vasopressin as a second-line vasopressor. Intervention: Patients who were vasopressin responsive were compared with those who were nonresponsive. Vasopressin response was defined as survival with at least a 20% decrease in maximum daily norepinephrine requirements by one calendar day after vasopressin initiation, without a third-line vasopressor. Measurements: Two supervised machine-learning models (gradient-boosting machine [XGBoost] and elastic net penalized logistic regression [EN]) were trained in 1000 bootstrap replications of the BJH data and externally validated in the BIDMC data to predict vasopressin responsiveness. Main results: Vasopressin responsiveness was similar in the two cohorts (BJH 45% and BIDMC 39%). Mortality was lower for vasopressin responders than for nonresponders in both the BJH (51% vs. 73%) and BIDMC (45% vs. 83%) cohorts. Both models demonstrated modest discrimination in the training (XGBoost area under the receiver operating characteristic curve [AUROC] 0.61 [95% confidence interval (CI) 0.61-0.61], EN 0.59 [95% CI 0.58-0.59]) and external validation (XGBoost 0.68 [95% CI 0.63-0.73], EN 0.64 [95% CI 0.59-0.69]) datasets. Conclusion: Vasopressin nonresponsiveness is common and associated with increased mortality. The models' modest performance highlights the complexity of septic shock and indicates that more research will be required before clinical decision support tools can aid in anticipating patient-specific responsiveness to vasopressin.
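A minimal sketch of the elastic net arm of this approach, trained over bootstrap replications with AUROC summarized across replicates; the features are simulated stand-ins, not the study's variables, and a real analysis would score held-out rather than in-sample data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

rng = np.random.default_rng(6)
X = rng.normal(size=(600, 6))   # e.g., norepinephrine dose, lactate, SOFA (assumed)
y = (X[:, 0] + rng.normal(0, 2, 600) > 0).astype(int)   # 1 = vasopressin responder

aucs = []
for i in range(100):            # the study used 1000 bootstrap replications
    Xb, yb = resample(X, y, random_state=i)             # bootstrap training set
    en = LogisticRegression(penalty="elasticnet", solver="saga",
                            l1_ratio=0.5, C=1.0, max_iter=5000).fit(Xb, yb)
    aucs.append(roc_auc_score(y, en.predict_proba(X)[:, 1]))
print(f"mean AUROC {np.mean(aucs):.2f} "
      f"(95% CI {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f})")
```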
Article
The aim of this work was to estimate the area under the blood concentration curve (AUC) of tacrolimus following twice-a-day (BID) or once-a-day (QD) dosing in organ transplant patients, using XGBoost machine-learning (ML) models. A total of 4997 and 1452 tacrolimus inter-dose AUCs from patients on BID and QD tacrolimus, sent to our ISBA expert system (www.pharmaco.chu-limoges.fr) for AUC estimation and dose recommendation based on tacrolimus concentrations measured at a minimum of 3 sampling times (predose and approximately 1 and 3 h after dosing), were used to develop four ML models based on 2 or 3 concentrations. For each model, data splitting was performed to obtain a training set (75%) and a test set (25%). The XGBoost models in the training set with the lowest RMSE in a ten-fold cross-validation experiment were evaluated in the test set and in 6 independent full-PK datasets from renal, liver, and heart transplant patients. ML models based on 2 or 3 concentrations, the differences between these concentrations, relative deviations from theoretical sampling times, and 4 covariates (dose, type of transplantation, age, and time between transplantation and sampling) yielded excellent AUC estimation performance in the test datasets (relative bias < 5% and relative RMSE < 10%) and better performance than MAP Bayesian estimation in the 6 independent full-PK datasets. The XGBoost ML models described here allow accurate estimation of the tacrolimus inter-dose AUC and can be used for routine tacrolimus exposure estimation and dose adjustment. They will soon be implemented in a dedicated web interface.
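A minimal sketch of the limited-sampling feature construction described above, feeding a gradient-boosted regressor that estimates the inter-dose AUC; the data are simulated and this is not the ISBA model:

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n = 1000
# Concentrations at predose, ~1 h, and ~3 h after dosing (ng/mL, simulated)
c0, c1, c3 = rng.gamma(5, 1.5, n), rng.gamma(9, 1.5, n), rng.gamma(7, 1.5, n)
dev1 = rng.normal(0, 0.2, n)          # deviation from the nominal 1 h sample time
dev3 = rng.normal(0, 0.2, n)          # deviation from the nominal 3 h sample time
X = np.column_stack([c0, c1, c3, c1 - c0, c3 - c1, dev1, dev3,
                     rng.integers(18, 75, n),   # age (covariate)
                     rng.integers(0, 3, n)])    # transplant type code (covariate)
auc = 4 * c0 + 8 * c1 + 10 * c3 + rng.normal(0, 10, n)   # toy AUC target

model = XGBRegressor(n_estimators=300, max_depth=4).fit(X, auc)
rel_rmse = 100 * np.sqrt(np.mean(((model.predict(X) - auc) / auc) ** 2))
print(f"relative RMSE {rel_rmse:.1f}%")
```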
A pragmatic machine learning model to predict carbapenem resistance
  • R. J. McGuire
  • S. C. Yu
  • P. R. O. Payne
  • A. M. Lai
  • M. C. Vazquez-Guillamet
  • M. H. Kollef
  • A. P. Michelson
How to position clinical trials with digital twins for regulatory success
  • C Fisher
Fisher C How to position clinical trials with digital twins for regulatory success. UNLEARN. White Paper downloaded from https:// www. unlea rn. ai/ clini cal-resea rch. Last accessed 3 Sept 2024