Julie Josse’s research while affiliated with National Institute for Research in Computer Science and Control and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (136)


Subphenotyping prone position responders with machine learning
  • Article
  • Full-text available

March 2025 · 48 Reads · Critical Care

Maxime Fosset · [...]

Background: Acute respiratory distress syndrome (ARDS) is a heterogeneous condition with varying response to prone positioning. We aimed to identify subphenotypes of ARDS patients undergoing prone positioning using machine learning and to assess their association with mortality and response to prone positioning. Methods: In this retrospective observational study, we enrolled 353 mechanically ventilated ARDS patients who underwent at least one prone positioning cycle. Unsupervised machine learning was used to identify subphenotypes based on respiratory mechanics, oxygenation parameters, and demographic variables collected in supine position. The primary outcome was 28-day mortality. Secondary outcomes included response to prone positioning in terms of respiratory system compliance, driving pressure, PaO2/FiO2 ratio, ventilatory ratio, and mechanical power. Results: Three distinct subphenotypes were identified. Cluster 1 (22.9% of the whole cohort) had a higher PaO2/FiO2 ratio and lower positive end-expiratory pressure (PEEP). Cluster 2 (51.3%) had a higher proportion of COVID-19 patients, lower driving pressure, higher PEEP, and higher respiratory system compliance. Cluster 3 (25.8%) had a lower pH, higher PaCO2, and higher ventilatory ratio. Mortality differed significantly across clusters (p = 0.03), with Cluster 3 having the highest mortality (56%). There were no significant differences in the proportions of responders to prone positioning for any of the studied parameters. Transpulmonary pressure measurements in a subcohort did not improve subphenotype characterization. Conclusions: Distinct ARDS subphenotypes with varying mortality were identified in patients undergoing prone positioning; however, predicting which patients benefited from this intervention based on available data was not possible. These findings underscore the need for continued efforts in phenotyping ARDS through multimodal data to better understand the heterogeneity of this population.
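
To illustrate the kind of unsupervised pipeline described in this abstract, here is a minimal sketch in Python; the clustering algorithm, feature names and synthetic data are assumptions made for illustration, not the study's actual protocol or cohort.

```python
# Hypothetical sketch of subphenotype discovery from supine-position variables.
# Algorithm choice (k-means), feature names and data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = ["pao2_fio2", "peep", "driving_pressure", "compliance_rs",
            "ph", "paco2", "ventilatory_ratio"]  # assumed variable names
df = pd.DataFrame(rng.normal(size=(353, len(features))), columns=features)
df["mortality_28d"] = rng.integers(0, 2, size=len(df))  # synthetic outcome

# Standardize, cluster into three candidate subphenotypes, then compare outcomes.
X = StandardScaler().fit_transform(df[features])
df["cluster"] = KMeans(n_clusters=3, n_init=50, random_state=0).fit_predict(X)
print(df.groupby("cluster")["mortality_28d"].mean())
```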


Figure 1: Comparison of the win proportion p_W computed with complete pairings and Nearest Neighbor pairings. Setting of Example 1. Boxplots over 100 runs. The two approaches lead to different treatment recommendations (above and below 0.5).
Figure 4: Testing for the impact of the dimension, uncorrelated outcomes setting. Boxplots over 100 runs. DRF AIPW WR, DRF WR and NearestNeigh WR as in Figure 2.
Figure 6: Testing for double robustness, by misspecifying either the propensities or the distributional regression. Boxplots over 100 runs. DRF AIPW WR, DRF WR and NearestNeigh WR as in Figure 3; 'misspecified' refers to learning a linear logistic regression for the propensities (Figure 6a), or doing logistic distributional regression as in Section 4.2.3 and imposing correlated outcomes for the distributional regression (Figure 6b) while the outcomes are generated uncorrelated.
Rethinking the Win Ratio: A Causal Framework for Hierarchical Outcome Analysis

January 2025 · 32 Reads

Quantifying causal effects in the presence of complex and multivariate outcomes is a key challenge to evaluate treatment effects. For hierarchical multivariate outcomes, the FDA recommends the Win Ratio and Generalized Pairwise Comparisons approaches (Pocock et al., 2011; Buyse, 2010). However, as far as we know, these empirical methods lack causal or statistical foundations to justify their broader use in recent studies. To address this gap, we establish causal foundations for hierarchical comparison methods. We define related causal effect measures, and highlight that depending on the methodology used to compute Win Ratios or Net Benefits of treatments, the causal estimand targeted can be different, as proved by our consistency results. Quite dramatically, it appears that the causal estimand related to the historical estimation approach can yield reversed and incorrect treatment recommendations in heterogeneous populations, as we illustrate through striking examples. In order to compensate for this fallacy, we introduce a novel, individual-level yet identifiable causal effect measure that better approximates the ideal, non-identifiable individual-level estimand. We prove that computing the Win Ratio or Net Benefit using a Nearest Neighbor pairing approach between treated and control patients, an approach that can be seen as an extreme form of stratification, leads to estimating this new causal effect measure. We extend our methods to observational settings via propensity weighting, distributional regression to address the curse of dimensionality, and a doubly robust framework. We prove the consistency of our methods, and the double robustness of our augmented estimator. These methods are straightforward to implement, making them accessible to practitioners.
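
For context, the pairwise quantities underlying the Win Ratio and Net Benefit can be written in standard notation as follows; this is a sketch, and the paper's own causal estimands and its Nearest Neighbor variant are defined more precisely in the article.

```latex
% Pairwise comparison quantities (standard two-sample form; notation assumed).
% Y(1), Y(0) denote potential (possibly hierarchical, multivariate) outcomes,
% and "\succ" the hierarchical ordering "is a win over".
\[
  p_W = \mathbb{P}\big(Y_i(1) \succ Y_j(0)\big), \qquad
  p_L = \mathbb{P}\big(Y_i(1) \prec Y_j(0)\big),
\]
\[
  \mathrm{WR} = \frac{p_W}{p_L}, \qquad
  \mathrm{NB} = p_W - p_L ,
\]
% where (i, j) indexes an independent treated--control pair; the historical
% estimator compares all such pairs, while the Nearest Neighbor approach
% restricts comparisons to pairs matched on covariates.
```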


Causal survival analysis, Estimation of the Average Treatment Effect (ATE): Practical Recommendations

January 2025 · 11 Reads

Causal survival analysis combines survival analysis and causal inference to evaluate the effect of a treatment or intervention on a time-to-event outcome, such as survival time. It offers an alternative to relying solely on Cox models for assessing these effects. In this paper, we present a comprehensive review of estimators for the average treatment effect measured with the restricted mean survival time, including regression-based methods, weighting approaches, and hybrid techniques. We investigate their theoretical properties and compare their performance through extensive numerical experiments. Our analysis focuses on the finite-sample behavior of these estimators, the influence of nuisance parameter selection, and their robustness and stability under model misspecification. By bridging theoretical insights with practical evaluation, we aim to equip practitioners with both state-of-the-art implementations of these methods and practical guidelines for selecting appropriate estimators for treatment effect estimation. Among the approaches considered, G-formula two-learners, AIPCW-AIPTW, Buckley-James estimators, and causal survival forests emerge as particularly promising.
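
For reference, the estimand reviewed here, the average treatment effect measured with the restricted mean survival time (RMST), can be written in standard notation as follows (a sketch, with truncation time τ).

```latex
% ATE measured with the restricted mean survival time (standard definition).
\[
  \theta_{\mathrm{RMST}}(\tau)
    = \mathbb{E}\big[\, T(1) \wedge \tau \,\big]
      - \mathbb{E}\big[\, T(0) \wedge \tau \,\big]
    = \int_0^{\tau} \big( S_1(t) - S_0(t) \big)\, dt ,
\]
% where T(a) is the potential survival time under treatment a and S_a its
% survival function; G-formula, weighting (IPCW/IPTW) and doubly robust
% (AIPCW-AIPTW) estimators differ in how they estimate these quantities
% under censoring.
```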



Figure 1: Clinicians were given a patient sample (top) and three interpretable ML models (middle: decision tree and logistic regression; bottom: risk score) predicting hemorrhagic shock. We assessed their interaction with the models in the presence of missing values.
Figure 2: Experimental setup. Physicians are shown a data entry of a patient with 9 features in an interpretable machine-learning model, along with the questions. After the data is gathered, qualitative and quantitative methods are used to analyze the results.
Figure 3: The cohort is divided into four clusters, each reflecting different attitudes towards AI/ML, varying levels of familiarity with IML both within and outside the context of missing values, imputation preferences, and general trust in IML. In the color coding, blue indicates that the mean for this cluster is significantly lower than the global mean, while red indicates that the mean is significantly higher. White or light shades of blue and red suggest no significant difference within the group regarding the variable. Demographic features were only added to describe the clusters.
Expert Study on Interpretable Machine Learning Models with Missing Data

November 2024 · 46 Reads

Inherently interpretable machine learning (IML) models provide valuable insights for clinical decision-making but face challenges when features have missing values. Classical solutions like imputation or excluding incomplete records are often unsuitable in applications where values are missing at test time. In this work, we conducted a survey with 71 clinicians from 29 trauma centers across France, including 20 complete responses, to study the interaction between medical professionals and IML applied to data with missing values. This provided valuable insights into how missing data is interpreted in clinical machine learning. We used the prediction of hemorrhagic shock as a concrete example to gauge the willingness and readiness of the participants to adopt IML models from three classes of methods. Our findings show that, while clinicians value interpretability and are familiar with common IML methods, classical imputation techniques often misalign with their intuition, and that models that natively handle missing values are preferred. These results emphasize the need to integrate clinical intuition into future IML models for better human-computer interaction.


Pilot deployment of a machine-learning enhanced prediction of need for hemorrhage resuscitation after trauma - the ShockMatrix pilot study

October 2024 · 51 Reads · 1 Citation · BMC Medical Informatics and Decision Making

Importance: Decision-making in trauma patients remains challenging and often results in deviation from guidelines. Machine-learning (ML) enhanced decision support could improve hemorrhage resuscitation. Aim: To develop an ML-enhanced decision support tool to predict the Need for Hemorrhage Resuscitation (NHR) (part I) and to test the collection of the predictor variables in real time in a smartphone app (part II). Design, setting, and participants: Development of an ML model from a registry to predict NHR relying exclusively on prehospital predictors; several models and imputation techniques were tested. Part II assessed the feasibility of collecting the model's predictors in a customized smartphone app during prealert and generating a prediction in four level-1 trauma centers, comparing the predictions to the gestalt of the trauma leader. Main outcomes and measures: In part I, the model output was NHR, defined by 1) at least one RBC transfusion in resuscitation, 2) transfusion of ≥ 4 RBC within 6 h, 3) any hemorrhage control procedure within 6 h, or 4) death from hemorrhage within 24 h. The performance metric was the F4-score, compared to reference scores (RED FLAG, ABC). In part II, the model and clinician predictions were compared with likelihood ratios (LR). Results: From 36,325 eligible patients in the registry (Nov 2010-May 2022), 28,614 were included in the model development (part I). Median age was 36 [25-52], median ISS 13 [5-22], and 3249/28,614 (11%) met the definition of NHR. An XGBoost model with nine prehospital variables generated the best predictive performance for NHR, with an F4-score of 0.76 [0.73-0.78]. Over a 3-month period (Aug-Oct 2022), 139 of 391 eligible patients were included in part II (38.5%), 22/139 with NHR. Clinician satisfaction was high, no workflow disruption was observed, and LRs were comparable between the model and the clinicians. Conclusions and relevance: The ShockMatrix pilot study developed a simple ML-enhanced NHR prediction tool demonstrating a performance comparable to clinical reference scores and clinicians. Collecting the predictor variables in real time on prealert was feasible and caused no workflow disruption.
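
The F4-score used as the performance metric is the F-beta score with beta = 4, which weights recall substantially more than precision. A minimal illustration with made-up labels (not the study's data):

```python
# F4-score: F-beta with beta=4, weighting recall far more than precision.
from sklearn.metrics import fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up NHR labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # made-up model predictions
print(fbeta_score(y_true, y_pred, beta=4))
```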


Quantifying Treatment Effects: Estimating Risk Ratios in Causal Inference

October 2024 · 2 Reads

Randomized Controlled Trials (RCTs) are the current gold standard to empirically measure the effect of a new drug. However, they may be of limited size, and resorting to complementary non-randomized data, referred to as observational data, is promising as an additional source of evidence. In both RCT and observational data, the Risk Difference (RD) is often used to characterize the effect of a drug. Additionally, medical guidelines recommend also reporting the Risk Ratio (RR), which may provide a different comprehension of the effect of the same drug. While different methods have been proposed and studied to estimate the RD, few methods exist to estimate the RR. In this paper, we propose estimators of the RR in both RCT and observational data and provide both asymptotic and finite-sample analyses. We show that, even in an RCT, estimating the treatment allocation probability or adjusting for covariates leads to lower asymptotic variance. In observational studies, we propose weighting and outcome modeling estimators and derive their asymptotic bias and variance for well-specified models. Using semi-parametric theory, we define two doubly robust estimators with minimal variance among unbiased estimators. We support our theoretical analysis with empirical evaluations and illustrate our findings through experiments.
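
For concreteness, the two effect measures discussed and a simple inverse-propensity-weighted estimator of the risk ratio can be written as follows; this is a sketch in standard notation, not the paper's exact estimators.

```latex
% Risk difference and risk ratio for a binary outcome Y and treatment A.
\[
  \mathrm{RD} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)], \qquad
  \mathrm{RR} = \frac{\mathbb{E}[Y(1)]}{\mathbb{E}[Y(0)]}.
\]
% A simple inverse-propensity-weighted estimator of RR with propensity e(X):
\[
  \widehat{\mathrm{RR}}_{\mathrm{IPW}}
    = \frac{\tfrac{1}{n}\sum_{i=1}^n \frac{A_i Y_i}{\hat e(X_i)}}
           {\tfrac{1}{n}\sum_{i=1}^n \frac{(1-A_i) Y_i}{1-\hat e(X_i)}}.
\]
```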


R² scores on model 1: normalized explained variance for the three missing data mechanisms (MCAR, Censoring MNAR, Predictive Missingness) introduced above, with 20% of missing values, n = 1000, d = 9 and ρ = 0.5.
Computation time (in s) of the different imputation methods/learning procedures for the MCAR generative mechanism with 20% of missing values, n = 1000, d = 9 and ρ = 0.5.
Relative scores on different models in MCAR: relative explained variance for models 2, 3 and 4, MCAR with 20% of missing values, n = 1000, d = 10 and ρ = 0.5.
Bayes consistency in MCAR: consistency with 40% of MCAR values on all variables, on models 2 (linear), 3 (Friedman) and 4 (non-linear).
On the consistency of supervised learning with missing values

September 2024 · 44 Reads · 56 Citations · Statistical Papers

In many application settings, data have missing entries, which makes subsequent analyses challenging. An abundant literature addresses missing values in an inferential framework, aiming at estimating parameters and their variance from incomplete tables. Here, we consider supervised-learning settings: predicting a target when missing values appear in both training and test data. We first rewrite classic missing values results for this setting. We then show the consistency of two approaches, test-time multiple imputation and single imputation in prediction. A striking result is that the widely-used method of imputing with a constant prior to learning is consistent when missing values are not informative. This contrasts with inferential settings where mean imputation is frowned upon as it distorts the distribution of the data. The consistency of such a popular simple approach is important in practice. Finally, to contrast procedures based on imputation prior to learning with procedures that optimize the missing-value handling for prediction, we consider decision trees. Indeed, decision trees are among the few methods that can tackle empirical risk minimization with missing values, due to their ability to handle the half-discrete nature of incomplete variables. After comparing empirically different missing values strategies in trees, we recommend using the “missing incorporated in attribute” method as it can handle both non-informative and informative missing values.
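
To make the two families of approaches concrete, here is a minimal sketch contrasting constant imputation prior to learning with trees that handle missing values natively; the scikit-learn estimators and the simulated data are illustrative choices, not the paper's experimental setup.

```python
# Two practical strategies compared in this line of work:
# (1) impute with a constant, then fit any learner;
# (2) let the trees handle missing values directly (MIA-style splits).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor, HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(size=500)
X[rng.random(X.shape) < 0.2] = np.nan  # 20% MCAR missing entries

# (1) Constant imputation prior to learning (consistent when NA are non-informative).
impute_then_learn = make_pipeline(
    SimpleImputer(strategy="constant", fill_value=0.0),
    RandomForestRegressor(random_state=0),
).fit(X, y)

# (2) Gradient-boosted trees with native missing-value handling.
native_trees = HistGradientBoostingRegressor(random_state=0).fit(X, y)
```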


Model-based clustering with missing not at random data

June 2024 · 56 Reads · 10 Citations · Statistics and Computing

Model-based unsupervised learning, as any learning task, stalls as soon as missing data occur. This is even more true when the missing data are informative, that is, missing not at random (MNAR). In this paper, we propose model-based clustering algorithms designed to handle very general types of missing data, including MNAR data. To do so, we introduce a mixture model for different types of data (continuous, count, categorical and mixed) to jointly model the data distribution and the MNAR mechanism, remaining vigilant about the relative degrees of freedom of each. Several MNAR models are discussed, for which the cause of the missingness can depend both on the values of the missing variables themselves and on the class membership. However, we focus on a specific MNAR model, called MNARz, for which the missingness depends only on the class membership. We first underline its ease of estimation by showing that statistical inference can be carried out on the data matrix concatenated with the missing mask, which finally amounts to a standard MAR mechanism. Consequently, we propose to perform clustering using the Expectation Maximization algorithm, specially developed for this simplified reinterpretation. Finally, we assess the numerical performance of the proposed methods on synthetic data as well as on the real medical registry TraumaBase.
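
The simplification behind MNARz can be summarized as follows; this is a sketch in standard mixture-model notation, abstracting from the different data types handled in the paper.

```latex
% MNARz sketch: the missingness pattern m depends only on the class z. With
% class proportions pi_k, class-conditional density f_k of the observed
% entries y^{obs}, and pattern distribution rho_k,
\[
  p\big(y^{\mathrm{obs}}, m\big)
    = \sum_{k=1}^{K} \pi_k \, f_k\!\big(y^{\mathrm{obs}}\big)\, \rho_k(m),
\]
% i.e. the observed data and the mask are conditionally independent given the
% class, which is why clustering the concatenated matrix (y, m) with a
% standard (MAR) EM algorithm targets the same partition.
```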


Re-weighting the randomized controlled trial for generalization: finite-sample error and variable selection

May 2024 · 11 Reads · 4 Citations · Journal of the Royal Statistical Society Series A (Statistics in Society)

Randomized controlled trials (RCTs) may suffer from limited scope. In particular, samples may be unrepresentative: some RCTs over- or under-sample individuals with certain characteristics compared to the target population, for which one wants conclusions on treatment effectiveness. Re-weighting trial individuals to match the target population can improve the treatment effect estimation. In this work, we establish the expressions of the bias and variance of such re-weighting procedures, also called inverse propensity of sampling weighting (IPSW), in the presence of categorical covariates for any sample size. Such results allow us to compare the theoretical performance of different versions of IPSW estimates. Besides, our results show how the performance (bias, variance, and quadratic risk) of IPSW estimates depends on the two sample sizes (RCT and target population). A by-product of our work is the proof of consistency of IPSW estimates. In addition, we analyse how including covariates that are not necessary for identifiability of the causal effect may impact the asymptotic variance: including covariates that are shifted between the two samples but are not treatment-effect modifiers increases the variance, while non-shifted treatment-effect modifiers do not. We illustrate all the takeaways in a didactic example and in a semi-synthetic simulation inspired by critical care medicine.
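
As a reference point, the re-weighting estimator studied here has the familiar density-ratio form below; this is a sketch, and the paper analyzes several finite-sample variants and normalizations.

```latex
% IPSW sketch: re-weight the n trial individuals so that their covariate
% distribution matches the target population (notation simplified).
\[
  \widehat{\tau}_{\mathrm{IPSW}}
    = \frac{1}{n}\sum_{i=1}^{n} \widehat{w}(X_i)
      \left( \frac{A_i Y_i}{e_1(X_i)} - \frac{(1-A_i)\, Y_i}{1 - e_1(X_i)} \right),
  \qquad
  \widehat{w}(x) \propto
    \frac{\widehat{p}_{\mathrm{target}}(x)}{\widehat{p}_{\mathrm{RCT}}(x)},
\]
% where e_1 is the (known) randomization probability in the trial and the
% weights are estimated from the covariate distributions of the trial sample
% and of the target-population sample.
```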


Citations (51)


... Works on online procedures tailored for extremes have already been deployed (Himych et al., 2024), and it might be relevant to see how they can be paired with conformal approaches. Another natural perspective that would deepen our understanding of the benefits of conformalisation is to conformalize the aggregated models, as suggested in Susmann et al. (2024), as opposed to aggregating the conformalized models, which is what we performed. It would also be interesting to assess the performance of the most recent online conformal algorithms (listed in Section 4.2.4), which might be better suited for non-stationarity. ...

Reference:

Adaptive probabilistic forecasting of French electricity spot prices
Probabilistic Prediction of Arrivals and Hospitalizations in Emergency Departments in Île-de-France
  • Citing Article
  • December 2024

International Journal of Medical Informatics

... However, currently most AI/ML solutions applied to trauma science correspond to retrospective algorithm development and validation studies. Very few studies attempt prospective validation, workflow integration, or assess impact on patient outcome [74]. Prospective validation and proof-of-concept studies are urgently needed to assess and prove the feasibility, utility, safety and potential harm of trauma-specific AI/ML solutions and are a matter of ongoing research ([75], Clin Trials ShockMatrix, NCT06270615). Numerous challenges remain, such as data quality, data granularity and the availability of reliable continuous data [76]. ...

Pilot deployment of a machine-learning enhanced prediction of need for hemorrhage resuscitation after trauma - the ShockMatrix pilot study

BMC Medical Informatics and Decision Making

... Recent works study missing data in several different areas, including principal component analysis (Zhu et al. 2022), U-statistics (Cannings and Fan 2022), changepoint detection (Follain et al. 2022), testing whether the missingness is independent of the data (Berrett and Samworth 2023) and classification (Sell et al. 2024). Moreover, it also receives a lot of attention in the framework of prediction (Le Morvan et al. 2020a, b, 2021; Ayme et al. 2022, 2023; Zaffran et al. 2023, 2024; Josse et al. 2024). For uncertainty quantification with missing data, Gui et al. (2023) and Shao and Zhang (2023) consider uncertainty quantification for matrix completion tasks; Seedat et al. (2023) consider the problem of missingness in the response, which is generally known as the semi-supervised setting. ...

On the consistency of supervised learning with missing values

Statistical Papers

... Unlike traditional approaches such as random forest (RF) or Bayesian Additive Regression Trees (BART2), BRF maintains robust predictive accuracy evidenced by stable RMSE even at extreme missingness levels (e.g., 75%), particularly under Missing Not at Random (MNAR) mechanisms. These findings align with recent work by Sportisse et al. [38] and Albu et al. [39], who emphasized the necessity of model-based imputation to preserve feature interactions and the advantages of Bayesian ensembles in high-dimensional nonlinear models. BRF's integration of probabilistic uncertainty quantification during imputation, as advocated by Rubin [40] and Little [41], mitigates bias caused by dependency on unobserved variables, outperforming frequentist counterparts like RF and BART2, which exhibit higher RMSE variability due to single imputation chains or proximity-based methods [42,43]. ...

Model-based clustering with missing not at random data

Statistics and Computing

... Four studies varied the covariate set [25,27,28] non-probability samples [50]. Studies of other methodological aspects addressed covariate selection [70,76,77], missing data [65,71], violations of the positivity assumption [78,79], sensitivity analysis [68,69], stabilizing estimators in causal meta-analysis [73], bias and variance equations [67], a treatment importance metric [75], target validity [1] general modified approaches, reviews and primer articles [18,64,66,72,74]. ...

Re-weighting the randomized controlled trial for generalization: finite-sample error and variable selection
  • Citing Article
  • May 2024

Journal of the Royal Statistical Society Series A (Statistics in Society)

... Despite the recognized importance of sleep in cancer development, the specific causal linkage between sleep duration and the incidence of GBM has not been thoroughly examined. Traditional observational studies could identify associations between variables, but they often struggle to establish causality due to confounding factors and reverse causation (Colnet et al. 2024). For example, an observed association between sleep patterns and GBM could be influenced by an unmeasured confounder, or GBM itself could alter sleep patterns. ...

Causal Inference Methods for Combining Randomized Trials and Observational Studies: A Review
  • Citing Article
  • February 2024

Statistical Science

... Four studies varied the covariate set [25,27,28] non-probability samples [50]. Studies of other methodological aspects addressed covariate selection [70,76,77], missing data [65,71], violations of the positivity assumption [78,79], sensitivity analysis [68,69], stabilizing estimators in causal meta-analysis [73], bias and variance equations [67], a treatment importance metric [75], target validity [1] general modified approaches, reviews and primer articles [18,64,66,72,74]. ...

Generalizing treatment effects with incomplete covariates: Identifying assumptions and multiple imputation algorithms
  • Citing Article
  • March 2023

Biometrical Journal

... Four studies varied the covariate set [25,27,28] non-probability samples [50]. Studies of other methodological aspects addressed covariate selection [70,76,77], missing data [65,71], violations of the positivity assumption [78,79], sensitivity analysis [68,69], stabilizing estimators in causal meta-analysis [73], bias and variance equations [67], a treatment importance metric [75], target validity [1] general modified approaches, reviews and primer articles [18,64,66,72,74]. ...

Causal effect on a target population: A sensitivity analysis to handle missing covariates

Journal of Causal Inference

... In the same context, missing values in statistical analysis signify the absence of data points (the observation is not stored) in the study data set. Such missing data might arise due to different factors, such as incomplete data collection, measurement errors, or data corruption (Mayer et al., 2022). In the current study, no missing data or values were identified, and the study data set exhibited no noteworthy deviations from the typical pattern due to the use of the Likert scale. ...

R-miss-tastic: a unified platform for missing values methods and workflows
  • Citing Article
  • October 2022

The R Journal

... By contrast, liberal strategies set higher targets, generally above 9 g/dL, on the premise that an increased oxygen-carrying capacity could improve the clinical outcomes and mitigate organ hypoxia [3]. Despite this intuitive rationale, evidence from multiple randomized controlled trials (RCTs) has challenged the necessity of liberal thresholds in the absence of acute hemorrhage [4][5][6]. Nevertheless, it is crucial to recognize that oxygen delivery in critical illness is multifactorial. While liberal transfusions might bolster hemoglobin levels, other determinants, such as cardiac output, microcirculatory function, ventilation status, and the patient's underlying pathology, ultimately govern tissue oxygenation [12]. ...

Association between in-ICU red blood cells transfusion and 1-year mortality in ICU survivors

Critical Care