Critical care (London, England)

Published by Springer Nature
Online ISSN: 1364-8535
Recent publications
Impact of maximum level of mobilization and of different physiotherapeutic regimens on MSTN gene expression on day 15 and on the myostatin plasma trajectory. a MSTN gene expression was not influenced by standard physiotherapy (sPT), protocol-based physiotherapy (pPT) or protocol-based physiotherapy with additional muscle-activating measures (pPT+); it was significantly decreased relative to healthy controls (hc) in all groups. b Myostatin plasma levels showed a similar pattern, with decreased values in all critically ill patients independent of the intervention and a significant recovery over time (GLM: p < 0.001; n = 7 patients receiving sPT, n = 10 receiving pPT and n = 19 receiving pPT+, with values from all three timepoints analyzed). c MSTN gene expression showed no difference by achieved level of mobilization and no reduction relative to baseline values. d The myostatin plasma trajectory was similar, with no impact of the achieved level of mobilization but a significant reduction in all critically ill patients; a significant recovery over time was also evident (GLM: p = 0.001; n = 14 patients reaching level 2, n = 10 reaching level 3 and n = 5 reaching level 4, with values from all three timepoints analyzed). GLM = general linear model for the factor "time" in the critically ill; mRNA = messenger ribonucleic acid. # p < 0.050, ## p < 0.010 and ### p < 0.001 for Kruskal–Wallis test between healthy controls and critically ill. *p < 0.05, **p < 0.01 and ***p < 0.001 for post hoc comparison with healthy controls
Background: The objective was to investigate the role of gene expression and plasma levels of the muscular protein myostatin in intensive care unit-acquired weakness (ICUAW). This was performed to evaluate a potential clinical and/or pathophysiological rationale for therapeutic myostatin inhibition. Methods: A retrospective analysis of pooled data from two prospective studies was performed to assess the dynamics of myostatin plasma concentrations (days 4, 8 and 14) and myostatin gene (MSTN) expression levels in skeletal muscle (day 15). Associations of myostatin with clinical and electrophysiological outcomes, muscular metabolism and muscular atrophy pathways were investigated. Results: MSTN gene expression (median [IQR] fold change: 1.00 [0.68-1.54] vs. 0.26 [0.11-0.80]; p = 0.004) and myostatin plasma concentrations were significantly reduced in all critically ill patients compared to healthy controls. In critically ill patients, myostatin plasma concentrations increased over time (median [IQR] fold change: day 4: 0.13 [0.08/0.21] vs. day 8: 0.23 [0.10/0.43] vs. day 14: 0.40 [0.26/0.61]; p < 0.001). Patients with ICUAW versus without ICUAW showed significantly lower MSTN gene expression levels (median [IQR] fold change: 0.17 [0.10/0.33] vs. 0.51 [0.20/0.86]; p = 0.047). Myostatin levels correlated positively with muscle strength (correlation coefficient 0.339; p = 0.020) and with the insulin sensitivity index (correlation coefficient 0.357; p = 0.015). No association was observed between myostatin plasma concentrations or MSTN expression levels and level of mobilization, electrophysiological variables, or markers of atrophy pathways. Conclusion: Muscular gene expression and systemic protein levels of myostatin are downregulated during critical illness. The previously proposed therapeutic inhibition of myostatin therefore does not appear to have a pathophysiological rationale for improving muscle quality in critically ill patients.
Trial registration: ISRCTN77569430 (registered 13 February 2008) and ISRCTN19392591 (registered 17 February 2011).
Background The COVID-19 pandemic presented major challenges for critical care facilities worldwide. Infections which develop alongside or subsequent to viral pneumonitis are a challenge under sporadic and pandemic conditions; however, data suggest that their patterns differ between COVID-19 and other viral pneumonitides. This secondary analysis aimed to explore patterns of co-infection and intensive care unit-acquired infections (ICU-AI), and their relationship to corticosteroid use, in a large, international cohort of critically ill COVID-19 patients. Methods This is a multicenter, international, observational study including adult patients with a PCR-confirmed COVID-19 diagnosis admitted to ICUs at the peak of wave one of COVID-19 (February 15th to May 15th, 2020). Data collected included investigator-assessed co-infection at ICU admission, infection acquired in ICU, infection with multi-drug-resistant organisms (MDRO) and antibiotic use. Frequencies were compared by Pearson's chi-squared test and continuous variables by Mann–Whitney U test. Propensity score matching for variables associated with ICU-acquired infection was undertaken with the R library MatchIt using the "full" matching method. Results Data were available from 4994 patients. Bacterial co-infection at admission was detected in 716 patients (14%), whilst 85% of patients received antibiotics at that stage. ICU-AI developed in 2715 (54%). The most common ICU-AI was bacterial pneumonia (44% of infections), whilst 9% of patients developed fungal pneumonia; 25% of infections involved MDRO. Patients developing infections in ICU had greater antimicrobial exposure than those without such infections. Incidence density (ICU-AI per 1000 ICU days) was in considerable excess of reports from pre-pandemic surveillance. Corticosteroid use was heterogeneous between ICUs. In univariate analysis, 58% of patients receiving corticosteroids and 43% of those not receiving steroids developed ICU-AI.
Adjusting for potential confounders in the propensity-matched cohort, 71% of patients receiving corticosteroids developed ICU-AI vs 52% of those not receiving corticosteroids. Duration of corticosteroid therapy was also associated with development of ICU-AI and with infection with an MDRO. Conclusions In patients with severe COVID-19 in the first wave, co-infection at admission to ICU was relatively rare, but antibiotic use was in substantial excess of that indication. ICU-AI were common and significantly associated with corticosteroid use. Trial registration NCT04836065 (retrospectively registered April 8th 2021).
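The matching step above was done in R with MatchIt's "full" method, which solves an optimal assignment problem. A much simpler illustration of the underlying idea is greedy 1:1 nearest-neighbor matching on propensity scores within a caliper; the sketch below uses hypothetical patient IDs and scores, not the study data, and is not equivalent to full matching.

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 propensity-score matching (illustrative only).

    treated/controls map unit id -> propensity score. Each treated
    unit is paired with its nearest still-unmatched control, provided
    the score difference is within the caliper. MatchIt's "full"
    matching instead finds an optimal grouping of all units.
    """
    available = dict(controls)  # controls not yet matched
    pairs = {}
    # Process treated units in score order for determinism.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        best = min(available.items(),
                   key=lambda kv: abs(kv[1] - t_score),
                   default=None)
        if best is not None and abs(best[1] - t_score) <= caliper:
            pairs[t_id] = best[0]
            del available[best[0]]  # each control used at most once
    return pairs
```

With hypothetical scores, `greedy_match({"t1": 0.30, "t2": 0.52}, {"c1": 0.28, "c2": 0.55, "c3": 0.90})` pairs t1 with c1 and t2 with c2, leaving the distant c3 unmatched.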
Characteristics of ventilator-associated pneumonia (VAP) in the groups with vs. without early corticosteroid therapy
Rationale Early corticosteroid treatment is used to treat COVID-19-related acute respiratory distress syndrome (ARDS). Infection is a well-documented adverse effect of corticosteroid therapy. Objectives To determine whether early corticosteroid therapy to treat COVID-19 ARDS was associated with ventilator-associated pneumonia (VAP). Methods We retrospectively included adults with COVID-19 ARDS requiring invasive mechanical ventilation (MV) for ≥ 48 h at any of 15 intensive care units in 2020. We divided the patients into two groups based on whether they did or did not receive corticosteroids within 24 h. The primary outcome was VAP incidence, with death and extubation as competing events. Secondary outcomes were 90-day mortality, MV duration, other organ dysfunctions, and VAP characteristics. Measurements and main results Of 670 patients (mean age, 65 years), 369 received early corticosteroids and 301 did not. The cumulative VAP incidence was higher with early corticosteroids (adjusted hazard ratio [aHR] 1.29; 95% confidence interval [95% CI] 1.05–1.58; P = 0.016). Antibiotic resistance of VAP bacteria did not differ between the two groups (odds ratio 0.94, 95% CI 0.58–1.53; P = 0.81). 90-day mortality was 30.9% with and 24.3% without early corticosteroids, a nonsignificant difference after adjustment for age, SOFA score, and VAP occurrence (aHR 1.15; 95% CI 0.83–1.60; P = 0.411). VAP was associated with higher 90-day mortality (aHR 1.86; 95% CI 1.33–2.61; P = 0.0003). Conclusions Early corticosteroid treatment was associated with VAP in patients with COVID-19 ARDS. Although VAP was associated with higher 90-day mortality, early corticosteroid treatment was not. Longitudinal randomized controlled trials of early corticosteroids in COVID-19 ARDS requiring MV are warranted.
Abstract Background The optimal level of positive end-expiratory pressure (PEEP) during mechanical ventilation for COVID-19 pneumonia remains debated and should ideally be guided by responses in both lung volume and perfusion. Capnodynamic monitoring allows both end-expiratory lung volume (EELVCO2) and effective pulmonary blood flow (EPBF) to be determined at the bedside during ongoing ventilation. Methods Patients with COVID-19-related moderate to severe respiratory failure underwent capnodynamic monitoring of EELVCO2 and EPBF during a step increase in PEEP by 50% above baseline (PEEPlow to PEEPhigh). The primary outcome was a > 20 mm Hg increase in the ratio of arterial oxygen tension to inspired fraction of oxygen (P/F), defining responders versus non-responders. Secondary outcomes included changes in physiological dead space and correlations with independently determined recruited lung volume and the recruitment-to-inflation ratio at an instantaneous, single-breath decrease in PEEP. Mixed factor ANOVA for group mean differences and correlations by Pearson's correlation coefficient are reported, including their 95% confidence intervals. Results Of 27 patients studied, 15 responders increased the P/F ratio by 55 [24–86] mm Hg compared to 12 non-responders (p
Circulatory status, days 1–4. Graph illustrating the distribution of highest recorded circulatory support for each day. Patients are categorized according to vasopressor support on admission and stratified according to temperature intervention. No-vasopressor support: mean arterial blood pressure (MAP) ≥ 70 mmHg with no inotropic or vasopressor support; moderate-vasopressor support: MAP < 70 mmHg, or any dose of dopamine or dobutamine, or noradrenaline/adrenaline dose ≤ 0.25 µg/kg/min; high-vasopressor support: noradrenaline/adrenaline dose > 0.25 µg/kg/min. D/C, discharge; ICU, intensive care unit; Normo, normothermia; TTM33, targeted temperature management at 33 °C
Background Targeted temperature management at 33 °C (TTM33) has been employed in an effort to mitigate brain injury in unconscious survivors of out-of-hospital cardiac arrest (OHCA). Current guidelines recommend prevention of fever, without excluding TTM33. The main objective of this study was to investigate whether TTM33 is associated with mortality in patients with vasopressor support on admission after OHCA. Methods We performed a post hoc analysis of patients included in the TTM-2 trial, an international, multicenter trial investigating outcomes in unconscious adult OHCA patients randomized to TTM33 versus normothermia. Patients were grouped according to level of circulatory support on admission: (1) no-vasopressor support, mean arterial blood pressure (MAP) ≥ 70 mmHg; (2) moderate-vasopressor support, MAP < 70 mmHg or any dose of dopamine/dobutamine or noradrenaline/adrenaline dose ≤ 0.25 µg/kg/min; and (3) high-vasopressor support, noradrenaline/adrenaline dose > 0.25 µg/kg/min. Hazard ratios with TTM33 were calculated for all-cause 180-day mortality in these groups. Results The TTM-2 trial enrolled 1900 patients. Data on the primary outcome were available for 1850 patients, with 662, 896, and 292 patients in the no-, moderate-, and high-vasopressor support groups, respectively. The hazard ratio for 180-day mortality with TTM33 was 1.04 [98.3% CI 0.78–1.39] in the no-, 1.22 [98.3% CI 0.97–1.53] in the moderate-, and 0.97 [98.3% CI 0.68–1.38] in the high-vasopressor support groups. Results were consistent in an imputed, adjusted sensitivity analysis. Conclusions In this exploratory analysis, temperature control at 33 °C after OHCA, compared to normothermia, was not associated with a higher incidence of death in patients stratified according to vasopressor support on admission. Trial registration Clinical trials identifier NCT02908308, registered September 20, 2016.
Oxygenation parameters during peripheral VA-ECMO support. SaO2: arterial oxygen saturation of hemoglobin; PaO2: arterial partial pressure of oxygen; FIO2: inspired oxygen fraction; FSO2: sweep gas oxygen fraction; SPREO2: preoxygenator oxygen saturation of hemoglobin; SPOSTO2: postoxygenator oxygen saturation of hemoglobin; PPOSTO2: postoxygenator oxygen partial pressure
During refractory cardiogenic shock and cardiac arrest, veno-arterial extracorporeal membrane oxygenation (VA-ECMO) is used to restore circulatory output. However, it also significantly impacts arterial oxygenation. Recent guidelines of the Extracorporeal Life Support Organization (ELSO) recommend targeting a postoxygenator partial pressure of oxygen (PPOSTO2) around 150 mmHg. In this narrative review, we summarize the rationale and evidence for this PPOSTO2 target recommendation. Because it is the most commonly used configuration, we focus on peripheral VA-ECMO. To date, clinicians have no established guidance on how to set the sweep gas oxygen fraction (FSO2). Because of the oxygenator's performance, arterial hyperoxemia is common during VA-ECMO support. Interpretation of oxygenation is complex in this setting because of the dual circulation phenomenon, which depends on both the native cardiac output and the VA-ECMO blood flow. Such dual circulation results in dual oxygenation, with heterogeneous oxygen partial pressure (PO2) along the aorta and heterogeneous oxygenation between organs, depending on the location of the mixing zone. Data regarding oxygenation during VA-ECMO are scarce, but several observational studies have reported an association between hyperoxemia and mortality, especially after refractory cardiac arrest. While hyperoxemia should be avoided, a growing number of studies in non-ECMO patients also suggest harm from an overly restrictive oxygenation strategy. Finally, setting FSO2 to target strict normoxemia is challenging because continuous monitoring of postoxygenator oxygen saturation is not widely available. The threshold of PPOSTO2 around 150 mmHg is supported by limited evidence but aims at preserving a safety margin, avoiding both hypoxemia and severe hyperoxemia.
Study flowchart. Flowchart summarizing patient selection and inclusion process as well as number of patients with AKI according to the full KDIGO definition, to sCr or to UO criteria
Background Acute kidney injury (AKI) has been reported as a frequent complication of critical COVID-19. We aimed to evaluate the occurrence of AKI and use of kidney replacement therapy (KRT) in critical COVID-19, to assess patient and kidney outcomes and risk factors for AKI and differences in outcome when the diagnosis of AKI is based on urine output (UO) or on serum creatinine (sCr). Methods Multicenter, retrospective cohort analysis of patients with critical COVID-19 in seven large hospitals in Belgium. AKI was defined according to KDIGO within 21 days after ICU admission. Multivariable logistic regression analysis was used to explore the risk factors for developing AKI and to assess the association between AKI and ICU mortality. Results Of 1286 patients, 85.1% had AKI, and KRT was used in 9.8%. Older age, obesity, a higher APACHE II score and use of mechanical ventilation at day 1 of ICU stay were associated with an increased risk for AKI. After multivariable adjustment, all AKI stages were associated with ICU mortality. AKI was based on sCr in 40.1% and UO in 81.5% of patients. All AKI stages based on sCr and AKI stage 3 based on UO were associated with ICU mortality. Persistent AKI was present in 88.6% and acute kidney disease (AKD) in 87.6%. Rapid reversal of AKI yielded a better prognosis compared to persistent AKI and AKD. Kidney recovery was observed in 47.4% of surviving AKI patients. Conclusions Over 80% of critically ill COVID-19 patients had AKI. This was driven by the high occurrence rate of AKI defined by UO criteria. All AKI stages were associated with mortality (NCT04997915).
Background Noninvasive ventilation (NIV) is a promising alternative to invasive mechanical ventilation (IMV), of particular importance amidst the shortage of intensive care unit (ICU) beds during the COVID-19 pandemic. We aimed to evaluate the use of NIV in Europe and factors associated with outcomes of patients treated with NIV. Methods This is a substudy of the COVIP study, an international prospective observational study enrolling patients aged ≥ 70 years with confirmed COVID-19 treated in the ICU. We enrolled patients in 156 ICUs across 15 European countries between March 2020 and April 2021. The primary endpoint was 30-day mortality. Results The cohort included 3074 patients, most of whom were male (2197/3074, 71.4%), with a mean age of 75.7 years (SD 4.6). NIV frequency was 25.7% and varied from 1.1 to 62.0% between participating countries. Primary NIV failure, defined as need for endotracheal intubation or death within 30 days of ICU admission, occurred in 470/629 (74.7%) of patients. Factors associated with increased NIV failure risk were a higher Sequential Organ Failure Assessment (SOFA) score (OR 3.73, 95% CI 2.36–5.90) and Clinical Frailty Scale (CFS) on admission (OR 1.46, 95% CI 1.06–2.00). Patients initially treated with NIV (n = 630) lived for 1.36 fewer days (95% CI − 2.27 to − 0.46 days) compared to the primary IMV group (n = 1876). Conclusions The frequency of NIV use varies across European countries. Higher severity of illness and more severe frailty were associated with risk of NIV failure among critically ill older adults with COVID-19. Primary IMV was associated with better outcomes than primary NIV. Clinical Trial Registration NCT04321265, registered 19 March 2020.
Expression of and respect for HCPs' opinions regarding perceived disproportionate organ support
Factors associated with moral distress: multivariate analysis
Bar charts depicting the proportion of reported moral distress related to each predictor of moral distress
Multivariate analysis: predictors of moral distress
Background: The provision of palliative care at the end of life (EOL) in intensive care units (ICUs) appears to have changed during the COVID-19 pandemic, with a potential burden of moral distress for health care providers (HCPs). We sought to assess the practice of EOL care during the COVID-19 pandemic in ICUs in the Czech Republic, focusing on the level of moral distress and its possible modifiable factors. Methods: Between 16 June 2021 and 16 September 2021, a national, cross-sectional study in intensive care units (ICUs) in the Czech Republic was performed. All physicians and nurses working in ICUs during the COVID-19 pandemic were included in the study. The ACADEMY guide and the CHERRIES checklist were used for questionnaire development. A multivariate logistic regression model was used to analyse possible modifiable factors of moral distress. Results: In total, 313 HCPs (14.5% of all HCPs who opened the questionnaire) fully completed the survey. Results showed that 51.8% (n = 162) of respondents were exposed to moral distress during the COVID-19 pandemic. 63.1% (n = 113) of nurses and 71.6% (n = 96) of physicians had experienced the perception of inappropriate care. When inappropriate care was perceived, the odds of moral distress among HCPs were higher (OR, 1.854; CI, 1.057-3.252; p = 0.0312). When patients died with dignity, the odds of moral distress were lower (OR, 0.235; CI, 0.128-0.430; p < 0.001). The three most often reported differences in palliative care practice during the pandemic were health system congestion, personnel factors, and characteristics of COVID-19 infection. Conclusions: HCPs working in ICUs experienced significant moral distress during the COVID-19 pandemic in the Czech Republic. The major sources were perceiving inappropriate care and patients dying without dignity. Improvement of the decision-making process and of communication at the end of life could lead to a better ethical and safety climate. Trial registration: NCT04910243.
ROC curves to estimate the discriminatory value of TMAD for SICM compared with MAPSE and GLS. TMADMid, midpoint tissue motion annular displacement; %TMAD, the percentage value of the midpoint displacement in relation to the total length of the left ventricle; MAPSE, mitral annular plane systolic excursion; and GLS, global longitudinal strain
Background There is no formal diagnostic criterion for sepsis-induced cardiomyopathy (SICM), but left ventricular ejection fraction (LVEF) < 50% is the most commonly used standard. Tissue motion annular displacement (TMAD) is a novel speckle tracking indicator for rapid assessment of LV longitudinal systolic function. This study aimed to evaluate the feasibility and discriminatory value of TMAD for predicting SICM, as well as the prognostic value of TMAD for mortality. Methods We conducted a single-center retrospective observational study in patients with sepsis or septic shock who underwent echocardiography within the first 24 h after admission. Basic clinical information and conventional echocardiographic data, including mitral annular plane systolic excursion (MAPSE), were collected. Global longitudinal strain (GLS) and TMAD were measured offline based on speckle tracking echocardiography (STE). The parameter acquisition rate, inter- and intra-observer reliability, and time required for measurement were assessed for the feasibility analysis. Area under the receiver operating characteristic curve (AUROC) values were calculated to assess the discriminatory value of TMAD/GLS/MAPSE for predicting SICM, defined as LVEF < 50%. Kaplan–Meier survival curve analysis was performed according to the cutoff values for predicting SICM. A Cox proportional hazards model was used to determine risk factors for 28-day and in-hospital mortality. Results A total of 143 patients were enrolled in this study. Compared with LVEF, GLS or MAPSE, TMAD exhibited the highest parameter acquisition rate and intra- and inter-observer reliability. The mean time for offline analysis with TMAD was significantly shorter than that with LVEF or GLS (p < 0.05). On AUROC analysis, TMADMid presented excellent discriminatory value for predicting SICM (AUROC > 0.9). Patients with lower TMADMid (< 9.75 mm) had significantly higher 28-day and in-hospital mortality (both p < 0.05). The multivariate Cox proportional hazards model revealed that BMI and SOFA score were independent risk factors for 28-day and in-hospital mortality in sepsis, whereas TMAD was not. Conclusion STE-based TMAD is a novel and feasible technique with promising discriminatory value for predicting SICM with LVEF < 50%.
Comparison of the receiver operating characteristic (ROC) curves of baseline pulse pressure variation (PPVbase) at a tidal volume of 6 mL/kg predicted body weight versus the change in pulse pressure variation during a tidal volume challenge (ΔPPVTVC6-8)
Background: Prone position is frequently used in patients with acute respiratory distress syndrome (ARDS), especially during the Coronavirus disease 2019 pandemic. Our study investigated the ability of pulse pressure variation (PPV) and its changes during a tidal volume challenge (TVC) to assess preload responsiveness in ARDS patients in prone position. Methods: This was a prospective study conducted in a 25-bed intensive care unit at a university hospital. We included patients with ARDS in prone position, ventilated with 6 mL/kg tidal volume and monitored by a transpulmonary thermodilution device. We measured PPV and its change during a TVC (ΔPPVTVC6-8) after increasing the tidal volume from 6 to 8 mL/kg for one minute. Changes in cardiac index (CI) during a Trendelenburg maneuver (ΔCITREND) and during end-expiratory occlusion (EEO) at 8 mL/kg tidal volume (ΔCIEEO8) were recorded. Preload responsiveness was defined by both ΔCITREND ≥ 8% and ΔCIEEO8 ≥ 5%; preload unresponsiveness by both ΔCITREND < 8% and ΔCIEEO8 < 5%. Results: Eighty-four sets of measurements were analyzed in 58 patients. Before prone positioning, the ratio of partial pressure of arterial oxygen to fraction of inspired oxygen was 104 ± 27 mmHg. At inclusion, patients had been in prone position for 11 (2-14) hours. Norepinephrine was administered in 83% of cases at a dose of 0.25 (0.15-0.42) µg/kg/min. The positive end-expiratory pressure was 14 (11-16) cmH2O, the driving pressure 12 (10-17) cmH2O, and the respiratory system compliance 32 (22-40) mL/cmH2O. Preload responsiveness was detected in 42 cases. An absolute change in PPV ≥ 3.5% during a TVC detected preload responsiveness with an area under the receiver operating characteristic (AUROC) curve of 0.94 ± 0.03 (sensitivity: 98%, specificity: 86%), better than baseline PPV (0.85 ± 0.05; p = 0.047). In the 56 cases where baseline PPV was inconclusive (≥ 4% and < 11%), ΔPPVTVC6-8 ≥ 3.5% still reliably detected preload responsiveness (AUROC: 0.91 ± 0.05, sensitivity: 97%, specificity: 81%; p < 0.01 vs. baseline PPV). Conclusion: In patients with ARDS under low tidal volume ventilation in prone position, the change in PPV during a TVC can reliably assess preload responsiveness without the need for cardiac output measurements. Trial registration: NCT04457739. Registered 30 June 2020 (retrospectively registered).
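The AUROC values reported for the PPV indices have a simple rank interpretation: the probability that a randomly chosen preload responder shows a larger index value than a randomly chosen non-responder (the Mann–Whitney statistic). A minimal Python sketch, with made-up index values rather than the study data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC computed as a rank statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half. Equivalent to Mann-Whitney U divided
    by n_pos * n_neg. Values below are hypothetical.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation of responders from non-responders gives `auroc([4, 5, 6], [1, 2, 3]) == 1.0`, while identical distributions give 0.5, which is why an AUROC of 0.94 indicates strong discrimination.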
Conceptual framework for a holistic approach to discomfort in the ICU.
The family experience and suggestions for improvement
The intensive care unit (ICU) is a complex environment where patients, family members and healthcare professionals have their own personal experiences. Improving ICU experiences necessitates the involvement of all stakeholders. This holistic approach will invariably improve the care of ICU survivors, increase family satisfaction and staff wellbeing, and contribute to dignified end-of-life care. Inclusive and transparent participation of industry can be a significant addition in developing tools and strategies for delivering this holistic care. We present a report following a round table on ICU experience at the annual congress of the European Society of Intensive Care Medicine. Current evidence on patient, family and healthcare professional experience in the ICU is discussed, together with the panel's suggestions for potential improvements. Combined with those of industry, the perspectives of all stakeholders suggest that ongoing improvement of the ICU experience is warranted.
Background: Neurologic manifestations are increasingly reported in patients with coronavirus disease 2019 (COVID-19). Yet, data on the prevalence, predictors and relevance for outcome of neurological manifestations in patients requiring intensive care are scarce. We aimed to characterize the prevalence, risk factors and impact on outcome of neurologic manifestations in critically ill COVID-19 patients. Methods: In the prospective, multicenter, observational registry study PANDEMIC (Pooled Analysis of Neurologic DisordErs Manifesting in Intensive care of COVID-19), we enrolled COVID-19 patients with neurologic manifestations admitted to 19 German intensive care units (ICU) between April 2020 and September 2021. We performed descriptive and explorative statistical analyses. Multivariable models were used to investigate factors associated with disorder categories and their underlying diagnoses, as well as to identify predictors of outcome. Results: Of the 392 patients included in the analysis, 70.7% (277/392) were male and the mean age was 65.3 (SD ± 3.1) years. During the study period, a total of 2681 patients with COVID-19 were treated at the ICUs of 15 participating centers. These centers reported new neurologic disorders in 350 patients, suggesting a prevalence of COVID-19-associated neurologic disorders of 12.7% among COVID-19 ICU patients. Encephalopathy (46.2%; 181/392), cerebrovascular (41.0%; 161/392) and neuromuscular disorders (20.4%; 80/392) were the most frequent categories identified. Of 35 cerebrospinal fluid analyses with reverse transcriptase PCR for SARS-CoV-2, only 3 were positive. In-hospital mortality was 36.0% (140/389), and the functional outcome (mRS 3 to 5) of surviving patients was poor at hospital discharge in 70.9% (161/227). Intracerebral hemorrhage (OR 6.2, 95% CI 2.5-14.9, p < 0.001) and acute ischemic stroke (OR 3.9, 95% CI 1.9-8.2, p < 0.001) were the strongest predictors of poor outcome among the included patients.
Conclusions: In this well-characterized COVID-19 ICU cohort, comprising 12.7% of all severely ill COVID-19 patients, neurologic manifestations increased mortality and morbidity. Since no reliable evidence of direct viral involvement of the nervous system by COVID-19 was found, these neurologic manifestations may largely be indirect para- or postinfectious sequelae of the infection or of severe critical illness. Neurologic ICU complications should be actively sought and treated.
Flow diagram for inclusion and exclusion criteria of patients
Model and threshold performance plots. a Time-varying results of the HSI model at different prediction times before hemodynamic interventions, compared to the Shock Index and systolic blood pressure (systolic BP); b median comparison between the US cohort and the TPEVGH cohort (not all features are listed, owing to the complexity of the full figure); c HSI recall-precision curve of the TPEVGH cohort; d HSI recall-specificity curve of the TPEVGH cohort
Time-varying true alarms and lead-time plots. a The fraction of events that correctly trigger an alarm, reported per hour over the 24 h before any hemodynamic intervention. b The distribution of the timing of the first alarm in the 24 h before an event; 95% of unstable patients can be identified more than 5 h before intervention
Background Early prediction models of hemodynamic instability have the potential to improve critical care, but external validation of their generalizability is limited. We aimed to independently validate the Hemodynamic Stability Index (HSI), a multi-parameter machine learning model, for predicting hemodynamic instability in Asian patients. Method Hemodynamic instability was defined by the use of inotropes, vasopressors, significant fluid therapy, and/or blood transfusions. This retrospective study included 15,967 ICU patients older than 20 years who stayed in the ICU for more than 6 h, admitted to Taipei Veterans General Hospital (TPEVGH) between January 1, 2010, and March 31, 2020; hemodynamic instability occurred in 3053 of these patients (prevalence = 19%). Patients in the unstable group received at least one intervention during their ICU stay, and the HSI score of both the stable and unstable groups was calculated for every hour before intervention. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and compared to single indicators such as systolic blood pressure (SBP) and the Shock Index. The hemodynamic instability alarm was set by selecting an optimal threshold with high sensitivity and acceptable specificity, and the lead time before intervention was calculated to indicate when patients were first identified as being at high risk of hemodynamic instability. Results The AUROC of HSI was 0.76 (95% CI, 0.75–0.77), significantly better than the Shock Index (0.70; 95% CI, 0.69–0.71) and SBP (0.69; 95% CI, 0.68–0.70). With a threshold of 0.7, HSI predicted 72% of all 3053 patients who received hemodynamic interventions, with 67% specificity. Time-varying results also showed that the HSI score significantly outperformed single indicators even up to 24 h before intervention, and 95% of unstable patients could be identified more than 5 h in advance.
Conclusions The HSI has acceptable discrimination but underestimates the risk of stable patients in predicting the onset of hemodynamic instability in an external cohort.
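As an illustration of how a fixed-threshold alarm like the HSI's 0.7 cutoff is evaluated, the sketch below computes sensitivity and specificity for a risk score against binary stability labels. The function name, scores, and labels are invented for this example; the study's actual pipeline is not public here.

```python
# Hypothetical sketch: evaluating a risk-score alarm at a fixed threshold.
# Scores and labels below are invented illustration data, not study data.

def alarm_metrics(scores, labels, threshold):
    """Return (sensitivity, specificity) of an alarm that fires when
    score >= threshold; labels are 1 = unstable, 0 = stable."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.6, 0.75, 0.3, 0.2, 0.65, 0.1]
labels = [1,   1,   1,   1,    0,   0,   0,    0]
sens, spec = alarm_metrics(scores, labels, 0.7)
```

In practice the threshold is swept over the ROC curve to trade sensitivity against specificity, which is how an "optimal" alarm cutoff is chosen.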
Respiratory mechanics and gas exchanges according to level of positive end-expiratory pressure in supine position
Objective The aim of this prospective longitudinal study was to compare driving pressure and the absolute PaO2/FiO2 ratio for determining the best positive end-expiratory pressure (PEEP) level. Patients and methods In 122 patients with acute respiratory distress syndrome, PEEP was increased until plateau pressure reached 30 cmH2O at constant tidal volume, then decreased at 15-min intervals to 15, 10, and 5 cmH2O. The best PEEP by PaO2/FiO2 ratio (PEEPO2) was defined as that yielding the highest PaO2/FiO2 ratio, and the best PEEP by driving pressure (PEEPDP) as that yielding the lowest driving pressure. The difference between the best PEEP levels was compared to a non-inferiority margin of 1.5 cmH2O. Main results The mean best PEEPO2 value was 11.9 ± 4.7 cmH2O compared to 10.6 ± 4.1 cmH2O for the best PEEPDP: mean difference = 1.3 cmH2O (95% confidence interval [95% CI] 0.4–2.3; one-tailed P value 0.36). Only 46 PEEP levels were the same with the two methods (37.7%; 95% CI 29.6–46.5). The PEEP level was ≥ 15 cmH2O in 61 (50%) patients with PEEPO2 and 39 (32%) patients with PEEPDP (P = 0.001). Conclusion The best PEEP level varies depending on the method chosen; the best PEEPDP level is lower than the best PEEPO2 level. Computing driving pressure is simpler, faster and less invasive than measuring PaO2. However, our results do not demonstrate that one method deserves preference over the other in terms of patient outcome. Clinical trial number: #ACTRN12618000554268. Registered 13 April 2018.
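The two selection rules compared above can be sketched as a simple search over the steps of a decremental PEEP trial. The trial data and field names below are invented for illustration, not taken from the study.

```python
# Illustrative sketch (invented data): selecting "best PEEP" from a
# decremental PEEP trial by two competing criteria.

def best_peep(trial, key, maximize):
    """trial: list of dicts with 'peep', 'pf' (PaO2/FiO2, mmHg) and
    'dp' (driving pressure, cmH2O). Returns the PEEP optimizing key."""
    chooser = max if maximize else min
    return chooser(trial, key=lambda step: step[key])["peep"]

trial = [
    {"peep": 15, "pf": 220, "dp": 14},
    {"peep": 10, "pf": 200, "dp": 12},
    {"peep": 5,  "pf": 150, "dp": 13},
]
peep_o2 = best_peep(trial, "pf", maximize=True)   # highest PaO2/FiO2
peep_dp = best_peep(trial, "dp", maximize=False)  # lowest driving pressure
```

With these invented numbers the two criteria disagree, mirroring the study's finding that the two methods selected the same PEEP in only 37.7% of trials.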
Regional fraction of wasted ventilation (blue boxes) and wasted perfusion (red boxes) at PEEP 5 and 15 cmH2O. Results are expressed as means and Tukey box plots. ND: non-dependent part of the lungs; M: middle part of the lungs; D: dependent part of the lungs. *p < 0.05, †p < 0.01 by paired t-test
Topographic distribution of lung units with different ventilation-perfusion (V/Q) ratios in a representative study patient at PEEP 5 and 15 cmH2O. The V/Q ratio was calculated as pixel-level ventilation divided by perfusion, measured by electrical impedance tomography, and ranges from < 0.1 (non-ventilated units, red) through 1 (normal units, white) to > 10 (non-perfused units, blue). The color scale is displayed on the right side of the figure. ND: non-dependent part of the lungs; M: middle part of the lungs; D: dependent part of the lungs
Correlations between recruitability (R/I ratio) and the improvement in wasted ventilation (A) and wasted perfusion (B) between PEEP 5 and 15 cmH2O
Purpose: In the acute respiratory distress syndrome (ARDS), decreasing ventilation-perfusion (V/Q) mismatch might enhance lung protection. We investigated the regional effects of higher positive end-expiratory pressure (PEEP) on V/Q mismatch and their correlation with recruitability. We aimed to verify whether PEEP improves regional V/Q mismatch and to study the underlying mechanisms. Methods: In fifteen patients with moderate or severe ARDS, two PEEP levels (5 and 15 cmH2O) were applied in random order. V/Q mismatch was assessed by electrical impedance tomography at each PEEP. The percentages of ventilation and perfusion reaching different ranges of V/Q ratios were analyzed in three gravitational lung regions, allowing precise assessment of their distribution across the V/Q mismatch compartments. Recruitability between the two PEEP levels was measured by the recruitment-to-inflation ratio method. Results: In the non-dependent region, at higher PEEP, ventilation reaching the normal V/Q compartment increased (p = 0.018), while ventilation in the high V/Q compartment decreased (p = 0.023). In the middle region, at PEEP 15 cmH2O, ventilation and perfusion to the low V/Q compartment decreased (p = 0.006 and p = 0.011) and perfusion to the normal V/Q compartment increased (p = 0.003). In the dependent lung, the percentage of blood flowing through the non-ventilated compartment decreased (p = 0.041). Regional V/Q mismatch improvement was correlated with lung recruitability and changes in regional tidal volume. Conclusions: In patients with ARDS, higher PEEP optimizes the distribution of both ventilation (in the non-dependent areas) and perfusion (in the middle and dependent lung). Bedside measurement of recruitability is associated with improved V/Q mismatch.
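The pixel-level classification described above, binning each electrical impedance tomography pixel into V/Q mismatch compartments, can be sketched as below. The thresholds (0.1 and 10) follow the figure legend; the pixel values and function name are invented for illustration.

```python
# Hedged sketch of pixel-level V/Q compartment classification.
# Thresholds follow the figure legend; pixel data are invented.

def vq_compartment(ventilation, perfusion):
    """Classify one EIT pixel by its ventilation/perfusion ratio."""
    if ventilation == 0:
        return "non-ventilated"   # shunt-like, V/Q -> 0
    if perfusion == 0:
        return "non-perfused"     # dead-space-like, V/Q -> infinity
    ratio = ventilation / perfusion
    if ratio < 0.1:
        return "low V/Q"
    if ratio > 10:
        return "high V/Q"
    return "normal V/Q"

pixels = [(0.0, 1.0), (1.0, 1.2), (0.05, 1.0), (5.0, 0.2), (1.0, 0.0)]
labels = [vq_compartment(v, q) for v, q in pixels]
```

Summing ventilation and perfusion per compartment within each gravitational region then yields the regional mismatch fractions the study compares across PEEP levels.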
Background Multiple organ dysfunction syndrome (MODS) is a critical driver of sepsis morbidity and mortality in children. Early identification of those at risk of death and persistent organ dysfunction is necessary to enrich patients for future trials of sepsis therapeutics. Here, we sought to integrate endothelial and PERSEVERE biomarkers to estimate the composite risk of death or organ dysfunction on day 7 of septic shock. Methods We measured endothelial dysfunction markers from day 1 serum among those with existing PERSEVERE data. A TreeNet® classification model incorporating 22 clinical and biological variables was derived to estimate risk. Based on relative variable importance, a simplified 6-biomarker model was then developed. Results Among 502 patients, 49 died before day 7 and 124 had persistent MODS on day 7 of septic shock. The area under the receiver operating characteristic curve (AUROC) for the newly derived PERSEVEREnce model to predict death or day 7 MODS was 0.93 (0.91–0.95), with a summary AUROC of 0.80 (0.76–0.84) upon tenfold cross-validation. The simplified model, based on IL-8, HSP70, ICAM-1, Angpt2/Tie2, Angpt2/Angpt1, and thrombomodulin, performed similarly. Interactions between variables (ICAM-1 with IL-8, and thrombomodulin with Angpt2/Angpt1) contributed to the models' predictive capabilities. Model performance varied when estimating the risk of individual organ dysfunctions, with AUROCs ranging from 0.91 to 0.97 and 0.68 to 0.89 in the training and test sets, respectively. Conclusions The newly derived PERSEVEREnce biomarker model reliably estimates the risk of death or persistent organ dysfunction on day 7 of septic shock. If validated, this tool can be used for prognostic enrichment in future pediatric trials of sepsis therapeutics.
Changes in physiological parameters (a compliance, b PaO2/FiO2, c driving pressure) during proning sessions
Flowchart of O2-responders during extended prone position sessions. O2-responder = session with an increase of +20 mmHg in the PaO2/FiO2 ratio. H = hour; n = number; SP = supine position
Abbreviations ARDS: Acute respiratory distress syndrome; CI: Confidence interval; COVID-19: Coronavirus disease 2019; ECMO: Extracorporeal membrane oxygenation; ICU: Intensive care unit; IQR: Interquartile range; iNO: Inhaled nitric oxide; OR: Odds ratio; PCR: Polymerase chain reaction; PEEP: Positive end-expiratory pressure; PP: Prone positioning; RASS: Richmond Agitation-Sedation Scale.
Reported reasons for ending prone position sessions
Background During the COVID-19 pandemic, many more patients were turned prone than before, resulting in a considerable increase in workload. Whether extending the duration of prone positioning may be beneficial has received little attention. We report here the benefits and detriments of a strategy of extended prone positioning duration for COVID-19-related acute respiratory distress syndrome (ARDS). Methods A retrospective, monocentric study was performed on intensive care unit patients with COVID-19-related ARDS who required tracheal intubation and were treated with at least one prone positioning session lasting 24 h or more. When prone positioning sessions were initiated, patients were kept prone for a period that covered two nights. Data regarding the incidence of pressure injury and ventilation parameters were collected retrospectively from medical and nursing charts. The primary outcome was the occurrence of pressure injury of stage ≥ II during the ICU stay. Results For the 81 patients included, the median duration of prone positioning sessions was 39 h [interquartile range (IQR) 34–42]. The cumulative incidence of stage ≥ II pressure injuries was 26% [95% CI 17–37], and 2.5% [95% CI 0.3–8.8] for stage III/IV pressure injuries. Patients underwent a median of 2 sessions [IQR 1–4], and in 213 (94%) prone positioning sessions, patients were turned back to the supine position during daytime, i.e., between 9 AM and 6 PM. The extended duration was associated with a further increase in oxygenation after 16 h, the PaO2/FiO2 ratio increasing from 150 mmHg [IQR 121–196] at H+16 to 162 mmHg [IQR 124–221] before patients were turned back supine (p = 0.017). Conclusion In patients with prone positioning extended up to a median of 39 h, the cumulative incidence of stage ≥ II pressure injuries was 26%, with 25%, 2.5%, and 0% for stages II, III, and IV, respectively. Oxygenation continued to increase significantly beyond the standard 16-h duration.
Our results may have a significant impact on intensive care unit staffing and patients' respiratory condition. Trial registration: Institutional review board 00006477 of HUPNVS, Université Paris Cité, APHP, reference CER-2021-102, obtained on October 11th, 2021. Registered at ClinicalTrials.gov (NCT05124197).
Neutrophils from COVID-19 patients undergoing NETosis express GSDMD. A Single-cell analysis of BALF from patients with COVID-19 across severity status (healthy control, moderate, and severe). UMAP visualization from gene expression data of 66,452 cells, highlighting the neutrophil expression cluster in severe COVID-19 patients (red) from bronchoalveolar lavage fluid (BALF) cells. B Pie chart representing the proportion of GSDMD-expressing neutrophils. C UpSet plot showing the intersection of inflammasome genes expressed in neutrophils, including PYCARD, CASP4, and CASP1, derived from severe COVID-19 patients. The point diagram indicates the intersections among the genes, and the bar graph shows the number of GSDMD-expressing neutrophils in each intersection (y-axis). D Representative confocal analysis of GSDMD-NT and NETs in lung tissue samples from autopsies of COVID-19 patients (n = 6) or controls (n = 3). Immunostaining for DNA (DAPI, blue), myeloperoxidase (MPO, green), and the GSDMD cleaved fraction (GSDMD-NT, red) is shown. The scale bar indicates 50 μm at 630× magnification. E Zoomed images of the inset white square in panel D. F GSDMD-NT expression quantified by MFI per field. G Correlation between GSDMD-NT:DAPI colocalization and NETs (MPO:DAPI). H Representative confocal analysis of GSDMD and NETs in blood neutrophils isolated from COVID-19 patients (n = 5) or controls (n = 5). Cells were stained for DNA (DAPI, blue), MPO (green), and GSDMD-NT (red). The scale bar indicates 50 μm; 4× digital zoom was performed on the inset white square. I GSDMD-NT expression quantified by MFI per field. J Expression of full-length GSDMD (GSDMD-FL) and active GSDMD (GSDMD-NT) by Western blot; moderate COVID-19 (M, n = 3), severe COVID-19 (S, n = 4), and healthy controls (n = 36). K Western blot quantification by densitometry.
GSDMD-NT values were normalized to total beta-actin. L Circulating amounts of MPO/DNA-NETs and M GSDMD from plasma of patients with moderate COVID-19 (n = 15), severe COVID-19 (n = 21), and healthy controls (n = 320). N Correlation between plasma levels of MPO/DNA-NETs and GSDMD. The data are expressed as mean ± SEM (*p < 0.05; t test in F and I, Pearson's correlation in G and M, one-way ANOVA followed by Tukey's test in K and L)
Inflammatory caspases mediate GSDMD cleavage and NET formation after SARS-CoV-2 infection of neutrophils. Neutrophils were isolated from healthy controls (n = 6) and COVID-19 patients (n = 8). A Neutrophil lysates were harvested for immunoblot analysis of pro-caspase-1, pro-caspase-4, and their cleaved fractions caspase-1-p20 and caspase-4-p20. α-Actin was used as a loading control. B Human neutrophils were isolated from healthy controls (n = 8). Cells were treated with a caspase-1 inhibitor (Ac-YVAD-CHO, 25 µM) or a pan-caspase inhibitor (Z-VAD-FMK, 50 µM). After 1 h, the cells were incubated with SARS-CoV-2 or mock (virus control) and cultured for 4 h at 37 °C. Representative immunostaining images for DNA (DAPI, blue), myeloperoxidase (MPO, green), and the GSDMD cleaved fraction (GSDMD-NT, red) are shown. The scale bar indicates 50 μm at 630× magnification; 4× digital zoom was performed on the inset white square. C GSDMD-NT expression was quantified by MFI per field. D The concentrations of MPO/DNA-NETs in the supernatants were determined using the PicoGreen assay. The data are expressed as mean ± SEM (* or # p < 0.05, one-way ANOVA followed by Tukey's test in C and D)
GSDMD inhibition prevents cell damage induced by NETs associated with SARS-CoV-2 infection. Blood-isolated neutrophils (1 × 10⁶ cells) from healthy donors, pretreated or not with disulfiram (30 µM), were incubated or not with SARS-CoV-2 (n = 36). After 1 h, these neutrophils were washed twice and co-cultured with lung epithelial cells (A549, 2 × 10⁵ cells) or endothelial cells (HUVEC, 2 × 10⁵ cells) previously stained with viability dye for 24 h at 37 °C. A Representative dot plots of FACS analysis for viability dye+ A549 cells. B Frequency of viability dye+ A549 cells. C Representative dot plots of FACS analysis of viability dye+ HUVEC cells. D Frequency of viability dye+ HUVEC cells. Data are representative of at least two independent experiments and are shown as mean ± SEM (* or # p < 0.05, one-way ANOVA followed by Tukey's test in B and D)
Background The release of neutrophil extracellular traps (NETs) is associated with the inflammation, coagulopathy, and organ damage found in severe cases of COVID-19. However, the molecular mechanisms underlying the release of NETs in COVID-19 remain unclear. Objectives We aimed to investigate the role of the Gasdermin-D (GSDMD) pathway in NET release and the development of organ damage during COVID-19. Methods We performed a single-cell transcriptome analysis on public bronchoalveolar lavage data. We then enrolled 63 hospitalized patients with moderate and severe COVID-19 and analyzed the expression of GSDMD, the presence of NETs, and upstream signaling pathways in blood and lung tissue samples. Furthermore, we evaluated treatment with disulfiram in a mouse model of SARS-CoV-2 infection. Results We found that the SARS-CoV-2 virus directly activates the pore-forming protein GSDMD, which triggers NET production and organ damage in COVID-19. Single-cell transcriptome analysis revealed that the expression of GSDMD and inflammasome-related genes was increased in COVID-19 patients. High expression of active GSDMD associated with NET structures was found in the lung tissue of COVID-19 patients. Furthermore, we showed that activation of GSDMD in neutrophils requires active caspase-1/4 and live SARS-CoV-2, which infects neutrophils. In a mouse model of SARS-CoV-2 infection, treatment with disulfiram inhibited NET release and reduced organ damage. Conclusion These results demonstrate that GSDMD-dependent NETosis plays a critical role in COVID-19 immunopathology and suggest GSDMD as a novel potential target for improving COVID-19 therapeutic strategies.
PRISMA flowchart
Background: The prognostic value of extravascular lung water (EVLW) measured by transpulmonary thermodilution (TPTD) in critically ill patients is debated. We performed a systematic review and meta-analysis of studies assessing the effects of TPTD-estimated EVLW on mortality in critically ill patients. Methods: Cohort studies published in English from Embase, MEDLINE, and the Cochrane Database of Systematic Reviews from 1960 to 1 June 2021 were systematically searched. From eligible studies, the odds ratio (OR) of EVLW as a risk factor for mortality and the EVLW values of survivors and non-survivors were extracted. Pooled ORs were calculated from the available studies. Mean differences and standard deviations of EVLW between survivors and non-survivors were calculated, and a random effects model was computed on the weighted mean differences across the two groups to estimate the pooled effect size. Subgroup analyses were performed to explore possible sources of heterogeneity. Results: Of the 18 studies included (1296 patients), ORs could be extracted from 11 studies including 905 patients (464 survivors vs. 441 non-survivors), and 17 studies including 1246 patients (680 survivors vs. 566 non-survivors) reported EVLW values of survivors and non-survivors. The pooled OR of EVLW for mortality from the eleven studies was 1.69 (95% confidence interval (CI) [1.22; 2.34], p < 0.0015). EVLW was significantly lower in survivors than in non-survivors, with a mean difference of -4.97 mL/kg (95% CI [-6.54; -3.41], p < 0.001). The results regarding ORs and mean differences were consistent in subgroup analyses. Conclusions: The value of EVLW measured by TPTD is associated with mortality in critically ill patients and is significantly higher in non-survivors than in survivors. This finding may also be interpreted as an indirect confirmation of the reliability of TPTD for estimating EVLW at the bedside.
Nevertheless, our results should be considered cautiously due to the high risk of bias of many studies included in the meta-analysis and the low rating of certainty of evidence. Trial registration: the study protocol was prospectively registered on PROSPERO (CRD42019126985).
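Pooled ORs like the one reported above typically come from random-effects inverse-variance meta-analysis. Below is a minimal sketch of DerSimonian-Laird pooling of log odds ratios; the study-level ORs and standard errors are invented for illustration and do not reproduce the review's data.

```python
import math

# Sketch of DerSimonian-Laird random-effects pooling of log odds ratios.
# Invented study inputs; not the meta-analysis data from the review.

def pool_random_effects(log_ors, ses):
    """Pool per-study log-ORs with standard errors; return
    (pooled OR, standard error of the pooled log-OR)."""
    w = [1 / se**2 for se in ses]                     # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]       # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    return math.exp(pooled), math.sqrt(1 / sum(w_star))

or_pooled, se_pooled = pool_random_effects(
    [math.log(1.5), math.log(2.0), math.log(1.2)], [0.2, 0.3, 0.25]
)
```

The same machinery, applied to mean differences instead of log-ORs, yields the pooled -4.97 mL/kg EVLW difference between survivors and non-survivors.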
In the ideal intensive care unit (ICU) of the future, all patients are free from delirium, a syndrome of brain dysfunction frequently observed in critical illness and associated with worse ICU-related outcomes and long-term cognitive impairment. Although screening for delirium requires limited time and effort, this devastating disorder remains underestimated during routine ICU care. The COVID-19 pandemic brought a catastrophic reduction in delirium monitoring, prevention, and patient care due to organizational issues, lack of personnel, increased use of benzodiazepines, and restricted family visitation. These limitations led to increases in delirium incidence, a situation that should never be repeated. Good sedation practices should be complemented by novel ICU design and connectivity, which will facilitate non-pharmacological sedation, anxiolysis and comfort that can be supplemented by balanced pharmacological interventions when necessary. Improvements in ICU sound, light control, floor planning, and room arrangement can facilitate a healing environment that minimizes stressors and aids delirium prevention and management. The fundamental prerequisite to realizing the delirium-free ICU is an awake, non-sedated, pain-free, comfortable patient whose management follows the A to F (A–F) bundle. Moreover, the bundle should be expanded with three additional letters, incorporating humanitarian care: gaining (G) insight into patient needs, delivering holistic care with a 'home-like' (H) environment, and redefining ICU architectural design (I). Above all, the delirium-free world relies upon people, with personal challenges for critical care teams to optimize design, environmental factors, management, time spent with the patient and family, and to humanize ICU care.
The obesity paradox has been observed in short-term outcomes of critical illness. However, little is known regarding the impact of obesity on long-term outcomes for survivors of critical illness. We aimed to evaluate the influence of obesity on long-term mortality after discharge alive from the ICU. Adult patients who were discharged alive from their last ICU admission were extracted. After exclusions, a total of 7619 adult patients discharged alive from the ICU were included, with a 4-year mortality of 32%. The median body mass index (BMI) was 27.2 (IQR 24–31.4) kg/m², and 2490 (31.5%) patients were classified as obese or morbidly obese. Morbidly obese patients had the longest ICU and hospital lengths of stay. However, higher BMI was associated with a lower hazard ratio for 4-year mortality. These results suggest the obesity paradox may also apply to survivors of critical illness.
Updated HACOR scores of patients with successful NIV and NIV failure from initiation to 24 h of NIV. Data are means and standard deviations. *p < 0.01 for the comparison of patients with successful NIV versus NIV failure. H0 = before NIV, H1-2 = after 1-2 h of NIV, H12 = after 12 h of NIV, H24 = after 24 h of NIV, NIV = noninvasive ventilation, HACOR = heart rate, acidosis, consciousness, oxygenation, and respiratory rate
Background Heart rate, acidosis, consciousness, oxygenation, and respiratory rate (HACOR) have been used to predict noninvasive ventilation (NIV) failure. However, the HACOR score fails to consider baseline data. Here, we aimed to update the HACOR score to take baseline data into account and to test its predictive power for NIV failure, primarily after 1–2 h of NIV. Methods A multicenter prospective observational study was performed in 18 hospitals in China and Turkey. Patients who received NIV because of hypoxemic respiratory failure were enrolled. In Chongqing, China, 1451 patients were enrolled in the training cohort; outside of Chongqing, another 728 patients were enrolled in the external validation cohort. Results Before NIV, the presence of pneumonia, cardiogenic pulmonary edema, pulmonary ARDS, immunosuppression, or septic shock, as well as the SOFA score, was strongly associated with NIV failure. These six baseline variables were added to the original HACOR score. The AUCs for predicting NIV failure with the updated HACOR score assessed after 1–2 h of NIV were 0.85 (95% CI 0.84–0.87) and 0.78 (0.75–0.81) in the training and validation cohorts, respectively. Higher AUCs were observed with the updated than with the original HACOR score in the training cohort (0.85 vs. 0.80, 0.86 vs. 0.81, and 0.85 vs. 0.82 after 1–2, 12, and 24 h of NIV, respectively; all p values < 0.01). Similar results were found in the validation cohort (0.78 vs. 0.71, 0.79 vs. 0.74, and 0.81 vs. 0.76, respectively; all p values < 0.01). When 7, 10.5, and 14 points of the updated HACOR score were used as cutoff values, the probability of NIV failure was 25%, 50%, and 75%, respectively. Among patients with updated HACOR scores of ≤ 7, 7.5–10.5, 11–14, and > 14 after 1–2 h of NIV, the rates of NIV failure were 12.4%, 38.2%, 67.1%, and 83.7%, respectively.
Conclusions The updated HACOR score has high predictive power for NIV failure in patients with hypoxemic respiratory failure. It can be used to help in decision-making when NIV is used.
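At the bedside, the cutoffs reported above amount to a simple lookup from score to observed failure-rate band. The sketch below encodes that mapping for the 1–2 h assessment; the function name is invented, and the bands quote the observed failure rates from the abstract.

```python
# Minimal sketch mapping an updated HACOR score (at 1-2 h of NIV) to the
# observed failure-rate bands reported in the abstract. Illustrative only;
# not a validated clinical decision tool.

def niv_failure_band(score):
    """Return the observed NIV-failure band for an updated HACOR score."""
    if score <= 7:
        return "<=7: ~12% failure"
    if score <= 10.5:
        return "7.5-10.5: ~38% failure"
    if score <= 14:
        return "11-14: ~67% failure"
    return ">14: ~84% failure"

bands = [niv_failure_band(s) for s in (5, 9, 12, 16)]
```

Such banded risk estimates are how the score is meant to support, not replace, the decision to continue NIV or escalate to intubation.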
Survival analysis for the identified subphenotypes in the development and three validation cohorts. DI: Delayed Improving; RI: Rapidly Improving; DW: Delayed Worsening; RW: Rapidly Worsening. Panels A, B, C, and D show the survival analysis results in the development cohort and the three validation cohorts, respectively
SHAP value-based predictor contributions to subphenotype prediction by the predictive model in the development cohort. Feature importance is ranked by SHAP value. Each point represents a single observation. The horizontal location shows whether the effect of that value was associated with a positive (SHAP value greater than 0) or negative (SHAP value less than 0) impact on the prediction. Color shows whether the original value of that variable was high (red) or low (blue) for that observation. For example, in RW, a low platelet value had a positive impact on the RW subphenotype prediction; "low" comes from the blue color, and the "positive" impact is shown on the horizontal axis. DI: Delayed Improving; RI: Rapidly Improving; DW: Delayed Worsening; RW: Rapidly Worsening
Background Sepsis is a heterogeneous syndrome, and the identification of clinical subphenotypes is essential. Although organ dysfunction is a defining element of sepsis, subphenotypes with differential trajectories are not well studied. We sought to identify distinct Sequential Organ Failure Assessment (SOFA) score trajectory-based subphenotypes in sepsis. Methods We created 72-h SOFA score trajectories in patients with sepsis from four diverse intensive care unit (ICU) cohorts. We then used dynamic time warping (DTW) to compute heterogeneous SOFA trajectory similarities and hierarchical agglomerative clustering (HAC) to identify trajectory-based subphenotypes. Patient characteristics were compared between subphenotypes, and a random forest model was developed to predict subphenotype membership at 6 and 24 h after ICU admission. The model was tested on three validation cohorts. Sensitivity analyses were performed with alternative clustering methodologies. Results A total of 4678, 3665, 12,282, and 4804 unique sepsis patients were included in the development cohort and the three validation cohorts, respectively. Four subphenotypes were identified in the development cohort: Rapidly Worsening (n = 612, 13.1%), Delayed Worsening (n = 960, 20.5%), Rapidly Improving (n = 1932, 41.3%), and Delayed Improving (n = 1174, 25.1%). Baseline characteristics, including the pattern of organ dysfunction, varied between subphenotypes. Rapidly Worsening was defined by a higher comorbidity burden, acidosis, and visceral organ dysfunction; Rapidly Improving was defined by vasopressor use without acidosis. Outcomes differed across the subphenotypes: Rapidly Worsening had the highest in-hospital mortality (28.3%, P < 0.001) despite a lower mean SOFA score at ICU admission (4.5) than Rapidly Improving (mortality: 5.5%, mean SOFA: 5.5).
An overall prediction accuracy of 0.78 (95% CI 0.77–0.80) was obtained at 6 h after ICU admission, which increased to 0.87 (95% CI 0.86–0.88) at 24 h. Similar subphenotypes were replicated in the three validation cohorts. The majority of patients with sepsis had an improving phenotype with a lower mortality risk; however, because of their larger numbers, they made up over 20% of all deaths. Conclusions Four novel, clinically defined, trajectory-based sepsis subphenotypes were identified and validated. Identifying trajectory-based subphenotypes has immediate implications for the powering and predictive enrichment of clinical trials. Understanding the pathophysiology of these differential trajectories may reveal unanticipated therapeutic targets and identify more precise populations and endpoints for clinical trials.
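The trajectory similarity measure underlying the clustering above, dynamic time warping, can be written in a few lines. The sketch below is a generic textbook DTW with absolute-difference cost; the two example SOFA trajectories are invented, and the study's exact DTW variant and clustering pipeline may differ.

```python
# Pure-Python sketch of dynamic time warping (DTW), the trajectory
# similarity measure used to cluster 72-h SOFA score curves.
# Example trajectories are invented.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

worsening = [4, 6, 8, 10]   # SOFA rising over 72 h
improving = [10, 8, 6, 4]   # SOFA falling over 72 h
```

Feeding the pairwise DTW distance matrix into hierarchical agglomerative clustering then groups patients whose organ-failure curves share a shape, even when the curves are shifted in time.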
Compliance of the aerated lung at PEEP 5 cmH2O (CBABY LUNG). Left panel: CBABY LUNG as a function of ARDS severity. Right panel: relationship between the hyperinflation-to-recruitment ratio and CBABY LUNG as a function of ECMO status. Circles are individual data points. The black line is the regression line for the whole population. a, p < 0.05 vs severe ARDS under ECMO. ARDS: acute respiratory distress syndrome; ECMO: extracorporeal membrane oxygenation; MODERATE: moderate ARDS; PEEP: positive end-expiratory pressure; SEVERE: severe ARDS without ECMO; SEVERE ECMO: severe ARDS under ECMO
Background PEEP selection in severe COVID-19 patients under extracorporeal membrane oxygenation (ECMO) is challenging, as no study has assessed alveolar recruitability in this setting. The aim of the study was to compare lung recruitability and the impact of PEEP on lung aeration in moderate and severe ARDS patients with or without ECMO, using computed tomography (CT). Methods We conducted a two-center prospective observational case-control study in adult patients with COVID-19-related ARDS who had an indication for CT within 72 h of ARDS onset in non-ECMO patients or within 72 h after ECMO onset. Ninety-nine patients were included, of whom 24 had severe ARDS under ECMO, 59 severe ARDS without ECMO, and 16 moderate ARDS. Results Non-inflated lung at PEEP 5 cmH2O was significantly greater in ECMO than in non-ECMO patients. Recruitment induced by increasing PEEP from 5 to 15 cmH2O was not significantly different between ECMO and non-ECMO patients, while PEEP-induced hyperinflation was significantly lower in the ECMO group and virtually nonexistent. The median [IQR] fraction of recruitable lung mass between PEEP 5 and 15 cmH2O was 6 [4–10]%. Total superimposed pressure at PEEP 5 cmH2O was significantly higher in ECMO patients and amounted to 12 [11–13] cmH2O. The hyperinflation-to-recruitment ratio (i.e., a trade-off index of the adverse effects and benefits of PEEP) was significantly lower in ECMO patients and was lower than one in 23 (96%) ECMO patients, 41 (69%) severe non-ECMO patients, and 8 (50%) moderate ARDS patients. Compliance of the aerated lung at PEEP 5 cmH2O corrected for PEEP-induced recruitment (CBABY LUNG) was significantly lower in ECMO patients than in non-ECMO patients and was linearly related to the logarithm of the hyperinflation-to-recruitment ratio. Conclusions Lung recruitability of COVID-19 pneumonia is not significantly different between ECMO and non-ECMO patients, with substantial interindividual variations.
The balance between hyperinflation and recruitment induced by increasing PEEP from 5 to 15 cmH2O appears favorable in virtually all ECMO patients, while this PEEP level is required to counteract the compressive forces leading to lung collapse. CBABY LUNG is significantly lower in ECMO patients, independently of lung recruitability.
Volume-outcome relationship of COVID-19 ECMO. Case volume vs. survival in low- (n = 96, survival 20%), intermediate- (n = 329, survival 30%) and high-volume (n = 248, survival 38%) ECMO centers. Annual case volumes prior to the pandemic were defined as low (< 20/year), intermediate (20–49/year) and high (> 50/year). The lower lines (ECMO ICU discharged alive) depict the percentage of patients discharged alive from the ECMO-providing ICU. ICU discharge destinations were mainly other ICUs (40%), rehabilitation facilities (33%), or general wards (23%) (data not shown)
Clinical characteristics of study cohort
Background Severe COVID-19-induced acute respiratory distress syndrome (ARDS) often requires extracorporeal membrane oxygenation (ECMO). Recent German health insurance data revealed low ICU survival rates. Patient characteristics and the experience of the ECMO center may determine intensive care unit (ICU) survival. The current study aimed to identify factors affecting ICU survival of COVID-19 ECMO patients. Methods 673 COVID-19 ARDS ECMO patients treated in 26 centers between January 1st, 2020 and March 22nd, 2021 were included. Data on clinical characteristics, adjunct therapies, complications, and outcome were documented. Blockwise logistic regression analysis was applied to identify variables associated with ICU survival. Results Most patients were between 50 and 70 years of age. The PaO2/FiO2 ratio prior to ECMO was 72 mmHg (IQR 58–99). ICU survival was 31.4% and was significantly lower during the 2nd wave of the COVID-19 pandemic. A subgroup of 284 (42%) patients fulfilling modified EOLIA criteria had higher survival (38%) (p = 0.0014, OR 0.64 (CI 0.41–0.99)). Survival differed between low-, intermediate-, and high-volume centers, at 20%, 30%, and 38%, respectively (p = 0.0024). Treatment in high-volume centers resulted in an odds ratio of 0.55 (CI 0.28–1.02) compared to low-volume centers. Additional factors associated with survival were younger age, shorter time between intubation and ECMO initiation, BMI > 35 (compared to < 25), and absence of renal replacement therapy or major bleeding/thromboembolic events. Conclusions Structural and patient-related factors, including age, comorbidities, and ECMO case volume, determined the survival of COVID-19 ECMO patients. These factors, combined with a more liberal ECMO indication during the 2nd wave, may explain the relatively low overall survival rate. Careful selection of patients and treatment in high-volume ECMO centers were associated with higher odds of ICU survival.
Trial registration Registered in the German Clinical Trials Register (study ID: DRKS00022964, retrospectively registered September 7th, 2020).
Flowchart of participants. HFNC high-flow nasal cannula; LUS lung ultrasound; APP awake prone positioning
Background: Awake prone positioning (APP) reduces the intubation rate in COVID-19 patients treated with high-flow nasal cannula (HFNC). However, the lung aeration response to APP has not been addressed. We aimed to explore the lung aeration response to APP by lung ultrasound (LUS). Methods: This two-center, prospective, observational study enrolled patients with COVID-19-induced acute hypoxemic respiratory failure treated with HFNC and APP. The LUS score was recorded 5-10 min before APP, 1 h after starting APP, and 5-10 min after returning to the supine position, in the first APP session within the first three days. The primary outcome was the change in LUS score over the first three days. Secondary outcomes included APP-related changes in the SpO2/FiO2 ratio, respiratory rate and ROX index (SpO2/FiO2/respiratory rate), and the rate of treatment success (patients who avoided intubation). Results: Seventy-one patients were enrolled. The LUS score decreased from 20 (interquartile range [IQR] 19-24) to 19 (18-21) (p < 0.001) after the first APP session, and to 19 (18-21) (p < 0.001) after three days. Compared to patients with treatment failure (n = 20, 28%), the LUS score reduction after the first three days was greater in patients with treatment success (n = 51) (- 2.6 [95% confidence interval - 3.1 to - 2.0] vs 0 [- 1.2 to 1.2], p = 0.001). A decrease in dorsal LUS score > 1 after the first APP session was associated with a decreased risk of intubation (relative risk 0.25 [0.09-0.69]). Daily APP duration was correlated with LUS score reduction in patients with treatment success, especially in the dorsal lung zones (r = - 0.76; p < 0.001). Conclusions: In patients with acute hypoxemic respiratory failure due to COVID-19 treated with HFNC, APP reduced the LUS score. The reduction in dorsal LUS scores after APP was associated with treatment success. A longer APP duration was correlated with greater lung aeration. Trial registration This study was prospectively registered on April 22, 2021. Identification number NCT04855162 .
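The ROX index named among the secondary outcomes is a simple bedside ratio. A minimal sketch follows; the function name and example values are illustrative, not taken from the study:

```python
def rox_index(spo2_pct: float, fio2: float, resp_rate: float) -> float:
    """ROX index = (SpO2/FiO2) / respiratory rate.

    spo2_pct:  peripheral oxygen saturation in percent (e.g. 96)
    fio2:      inspired oxygen fraction as a decimal (e.g. 0.50)
    resp_rate: breaths per minute
    """
    return (spo2_pct / fio2) / resp_rate

# Example: SpO2 96% on FiO2 0.50 at 25 breaths/min
print(round(rox_index(96, 0.50, 25), 1))  # → 7.7
```

Higher values indicate a better-oxygenated, less tachypneic patient; the index rises as SpO2/FiO2 improves or the respiratory rate falls.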
PROMIZING stepwise algorithm and study flowchart. CPAP: continuous positive airway pressure, SNR: screened and non-randomized, PROMIZING: Proportional assist ventilation for minimizing the duration of mechanical ventilation study, PSV: pressure support ventilation, SBT: spontaneous breathing trials
Ventilator settings and respiratory parameters at baseline (pre-randomization) according to groups. Data presented as mean ± standard deviation. Pairwise comparisons between groups by Tukey Honest Significant Difference Test, where p = 0.05 was taken as the threshold for these post-hoc comparisons: * Difference (p < 0.05) between Not ready for weaning group vs. SNR group. † Difference (p < 0.05) between ZERO CPAP tolerance failure group vs. SNR group. § Difference (p < 0.05) between SBT failure group vs. SNR group. SNR: screened and non-randomized, FiO2: fraction of inspired oxygen, CPAP: continuous positive airway pressure, PEEP: positive end-expiratory pressure, SBT: spontaneous breathing trial
Distribution of patients in the mechanical ventilation process according to the mode of ventilation at enrollment
Background Liberating patients from mechanical ventilation (MV) requires a systematic approach. In the context of a clinical trial, we developed a simple algorithm to identify patients who tolerate assisted ventilation but still require ongoing MV to be randomized. We report on the use of this algorithm to screen potential trial participants for enrollment and subsequent randomization in the Proportional Assist Ventilation for Minimizing the Duration of MV (PROMIZING) study. Methods The algorithm included five steps: enrollment criteria, a pressure support ventilation (PSV) tolerance trial, weaning criteria, a continuous positive airway pressure (CPAP) tolerance trial (0 cmH2O for 2 min) and a spontaneous breathing trial (SBT) on a fraction of inspired oxygen (FiO2) of 40% for 30–120 min. Patients who failed the weaning criteria, the zero-CPAP trial, or the SBT were randomized. We describe the characteristics of patients who were initially enrolled but passed all steps in the algorithm and consequently were not randomized. Results Among the 374 enrolled patients, 93 (25%) passed all five steps. At the time of enrollment, most patients were on PSV (87%) with a mean (± standard deviation) FiO2 of 34 (± 6)%, PSV of 8.7 (± 2.9) cmH2O, and positive end-expiratory pressure of 6.1 (± 1.6) cmH2O. Minute ventilation was 9.0 (± 3.1) L/min with a respiratory rate of 17.4 (± 4.4) breaths/min. Patients were liberated from MV with a median [interquartile range] delay between initial screening and extubation of 5 [1–49] hours. Only 7 (8%) patients required reintubation. Conclusion The trial algorithm permitted identification of 93 (25%) patients who were ready to extubate, while their clinicians predicted a duration of ventilation longer than 24 h.
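The five-step screening logic lends itself to a compact sketch. The boolean flags and return labels below are hypothetical names, assuming only the step order described in the abstract:

```python
def promizing_screen(meets_enrollment: bool,
                     tolerates_psv: bool,
                     meets_weaning_criteria: bool,
                     tolerates_zero_cpap: bool,
                     passes_sbt: bool) -> str:
    """Sketch of the five-step PROMIZING screening algorithm.

    Failure of the weaning criteria, the zero-CPAP tolerance trial,
    or the SBT leads to randomization; passing every step marks the
    patient as ready to extubate (and thus not randomized).
    """
    if not meets_enrollment or not tolerates_psv:
        return "not eligible"
    if not (meets_weaning_criteria and tolerates_zero_cpap and passes_sbt):
        return "randomize"
    return "ready to extubate (not randomized)"
```

In the study, 93 of 374 enrolled patients (25%) followed the last branch, i.e. passed all five steps.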
Percentage of studies in the two decades adopting different infusion timings
Abbreviations CI: Cardiac index; SVI: Stroke volume index; CO: Cardiac output; SV: Stroke volume; ICU: Intensive care unit; CVP: Central venous pressure; VTI: Velocity-time integral in the left ventricular outflow tract; ABF: Aortic blood flow.
Fluid challenge characteristics and haemodynamic monitoring in the included studies
Comparison between 2011-2021 and 2000-2010 decades regarding the modality of fluid challenge administration
Introduction Fluid challenges are widely adopted in critically ill patients to reverse haemodynamic instability. We reviewed the literature to appraise fluid challenge characteristics in intensive care unit (ICU) patients receiving haemodynamic monitoring, considering two decades: 2000–2010 and 2011–2021. Methods We assessed research studies and collected data regarding study setting, patient population, fluid challenge characteristics, and monitoring. The MEDLINE, Embase, and Cochrane search engines were used. A fluid challenge was defined as an infusion of a definite quantity of fluid (expressed as a volume in mL or mL/kg) over a fixed time (expressed in minutes), whose outcome was defined as a change in predefined haemodynamic variables above a predetermined threshold. Results We included 124 studies, 32 (25.8%) published in 2000–2010 and 92 (74.2%) in 2011–2021, overall enrolling 6,086 patients, who presented sepsis/septic shock in 50.6% of cases. The fluid challenge usually consisted of 500 mL (76.6%) of crystalloids (56.6%) infused at a rate of 25 mL/min. Fluid responsiveness was usually defined by a cardiac output/index (CO/CI) increase ≥ 15% (70.9%). The infusion time was shorter (15 min vs 30 min), and crystalloids were more frequently used, in 2011–2021 compared to 2000–2010. Conclusions In the literature, fluid challenges are usually performed by infusing a 500 mL crystalloid bolus in less than 20 min. A positive fluid challenge response, reported in 52% of ICU patients, is generally defined by a CO/CI increase ≥ 15%. Compared to the 2000–2010 decade, in 2011–2021 the infusion time of the fluid challenge was shorter, and crystalloids were more frequently used.
Traumatic cardiac arrest (TCA)
Options for cardiac resuscitation of patients in traumatic cardiac arrest (TCA) due to non-compressible haemorrhage from non-ballistic penetrating injuries: Addition to the 2021 ERC guidelines [15] on the treatment of patients in traumatic cardiac arrest (original figure adapted with permission)
Early haemorrhage control and minimizing the time to definitive care have long been the cornerstones of therapy for patients exsanguinating from non-compressible haemorrhage (NCH) after penetrating injuries, as only basic treatment could be provided on scene. However, more recently, advanced on-scene treatments such as the transfusion of blood products, resuscitative thoracotomy (RT) and resuscitative endovascular balloon occlusion of the aorta (REBOA) have become available in a small number of pre-hospital critical care teams. Although these advanced techniques are included in the current traumatic cardiac arrest algorithm of the European Resuscitation Council (ERC), published in 2021, clear guidance on the practical application of these techniques in the pre-hospital setting is scarce. This paper provides a scoping review on how these advanced techniques can be incorporated into practice for the resuscitation of patients exsanguinating from NCH after penetrating injuries, based on available literature and the collective experience of several helicopter emergency medical services (HEMS) across Europe who have introduced these advanced resuscitation interventions into routine practice. Graphical Abstract
Change in mechanical power between predicted body weight-guided ventilation (PBW-Vent) and driving pressure-guided ventilation (ΔP-Vent). A The violin plots represent the mechanical power (thick horizontal line: median; thin horizontal dashed lines: 25th and 75th percentiles). *Denotes statistical significance. B Individual data
Background: Whether targeting the driving pressure (∆P) when adjusting the tidal volume in mechanically ventilated patients with the acute respiratory distress syndrome (ARDS) may decrease the risk of ventilator-induced lung injury remains a matter of research. In this study, we assessed the effect of ∆P-guided ventilation on the mechanical power. Methods: We prospectively included adult patients with moderate-to-severe ARDS. Positive end-expiratory pressure was set by the attending physician and kept constant during the study. Tidal volume was first adjusted to target 6 mL/kg of predicted body weight (PBW-guided ventilation) and subsequently modified within a range from 4 to 10 mL/kg PBW to target a ∆P between 12 and 14 cmH2O. The respiratory rate was then re-adjusted within a range from 12 to 40 breaths/min until EtCO2 returned to its baseline value (∆P-guided ventilation). Mechanical power was computed at each step. Results: Fifty-one patients were included between December 2019 and May 2021. ∆P-guided ventilation was feasible in all but one patient. The ∆P during PBW-guided ventilation was already within the target range of ∆P-guided ventilation in five (10%) patients, above it in nine (18%) and below it in 36 (72%). The change from PBW- to ∆P-guided ventilation was thus accompanied by an overall increase in tidal volume from 6.1 mL/kg PBW [5.9-6.2] to 7.7 mL/kg PBW [6.2-8.7], while the respiratory rate was decreased from 29 breaths/min [26-32] to 21 breaths/min [16-28] (p < 0.001 for all comparisons). ∆P-guided ventilation was accompanied by a significant decrease in mechanical power from 31.5 J/min [28-35.7] to 28.8 J/min [24.6-32.6] (p < 0.001), representing a relative decrease of 7% [0-16]. With ∆P-guided ventilation, the PaO2/FiO2 ratio increased and the ventilatory ratio decreased. 
Conclusion: As compared to conventional PBW-guided ventilation, a ∆P-guided ventilation strategy targeting a ∆P between 12 and 14 cmH2O required changing the tidal volume in 90% of the patients. Such ∆P-guided ventilation significantly reduced the mechanical power. Whether this physiological observation could be associated with clinical benefit should be assessed in clinical trials.
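The abstract does not state which mechanical power equation was used. One widely cited simplified formula for volume-controlled ventilation is MP (J/min) = 0.098 · RR · VT · (Ppeak − ∆P/2), sketched below purely as an illustration of the quantity being compared, not as the study's actual computation:

```python
def mechanical_power(rr: float, vt_l: float,
                     p_peak: float, delta_p: float) -> float:
    """Simplified mechanical power estimate (J/min) for
    volume-controlled ventilation:
        0.098 * RR * VT[L] * (Ppeak - DeltaP/2)
    rr: breaths/min, vt_l: tidal volume in liters,
    p_peak and delta_p: pressures in cmH2O."""
    return 0.098 * rr * vt_l * (p_peak - delta_p / 2)

# Illustrative values only (not the study's patients):
print(round(mechanical_power(29, 0.42, 30, 16), 2))  # → 26.26
```

The formula makes the trade-off visible: lowering the respiratory rate reduces mechanical power linearly, which is one way a higher-VT/lower-RR strategy can end up delivering less total energy per minute.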
CONSORT Flow Diagram in INCLASS trial. SOFA, Sequential Organ Failure Assessment
Background Clarithromycin may act as an immune-regulating treatment in sepsis and acute respiratory distress syndrome. However, clinical evidence remains inconclusive. We aimed to evaluate whether clarithromycin improves 28-day mortality among patients with sepsis, respiratory and multiple organ dysfunction syndrome. Methods We conducted a multicenter, randomized, clinical trial in patients with sepsis. Participants with a ratio of partial oxygen pressure to fraction of inspired oxygen less than 200 and more than 3 SOFA points from systems other than the respiratory function were enrolled between December 2017 and September 2019. Patients were randomized to receive 1 g of clarithromycin or placebo intravenously once daily for 4 consecutive days. The primary endpoint was 28-day all-cause mortality. Secondary outcomes were 90-day mortality; sepsis response (defined as at least a 25% decrease in SOFA score by day 7); sepsis recurrence; and differences in peripheral blood cell populations and leukocyte transcriptomics. Results Fifty-five patients were allocated to each arm. By day 28, 27 (49.1%) patients in the clarithromycin and 25 (45.5%) in the placebo group died (risk difference 3.6% [95% confidence interval (CI) − 15.7 to 22.7]; P = 0.703, adjusted OR 1.03 [95% CI 0.35–3.06]; P = 0.959). There were no statistical differences in 90-day mortality and sepsis response. Clarithromycin was associated with a lower incidence of sepsis recurrence (OR 0.21 [95% CI 0.06–0.68]; P = 0.012); a significant increase in monocyte HLA-DR expression; expansion of non-classical monocytes; and upregulation of genes involved in cholesterol homeostasis. Serious and non-serious adverse events were equally distributed. Conclusions Clarithromycin did not reduce mortality among patients with sepsis with respiratory and multiple organ dysfunction. Clarithromycin was associated with lower sepsis recurrence, possibly through a mechanism of immune restoration. 
Clinical trial registration identifier NCT03345992 registered 17 November 2017; EudraCT 2017-001056-55.
Background Bacterial burden as well as duration of bacteremia influence the outcome of patients with bloodstream infections. Promptly decreasing bacterial load in the blood by using extracorporeal devices in addition to anti-infective therapy has recently been explored. Preclinical studies with the Seraph® 100 Microbind® Affinity Blood Filter (Seraph® 100), which consists of heparin covalently bound to polymer beads, have demonstrated effective binding of bacteria and viruses. Pathogens adhere to the heparin-coated polymer beads in the adsorber as they would normally adhere to heparan sulfate on cell surfaces. Using this biomimetic principle, the Seraph® 100 could help to decrease bacterial burden in vivo. Methods This first-in-human, prospective, multicenter, non-randomized interventional study included patients with blood-culture-positive bloodstream infection and a need for kidney replacement therapy; the Seraph® 100 was used as an adjunctive treatment for the bloodstream infection. We performed a single four-hour hemoperfusion treatment with the Seraph® 100 in conjunction with a dialysis procedure. Post-procedure follow-up was 14 days. Results Fifteen hemodialysis patients (3 F/12 M, age 74.0 [68.0–78.5] years, dialysis vintage 28.0 [11.0–45.0] months) were enrolled. Seraph® 100 treatment started 66.4 [45.7–80.6] hours after the initial positive blood culture was drawn. During the treatment with the Seraph® 100, with a median blood flow of 285 [225–300] mL/min, no device- or treatment-related adverse events were reported. Blood pressure and heart rate remained stable while peripheral oxygen saturation improved during the treatment from 98.0 [92.5–98.0] to 99.0 [98.0–99.5]%; p = 0.0184. Four patients still had positive blood cultures at the start of Seraph® 100 treatment. In one patient blood cultures turned negative during treatment. The time to positivity (TTP) was increased between inflow and outflow blood cultures by 36 [− 7.2 to 96.3] minutes. 
However, the overall TTP increase was not statistically significant. Conclusions Seraph® 100 treatment was well tolerated. Adding Seraph® 100 to antibiotics early in the course of bacteremia might result in faster resolution of bloodstream infections, which has to be evaluated in further studies. Trial registration: Identifier NCT02914132, first posted September 26, 2016.
4F-PCC and AA patient identification. AA = andexanet alfa, 4F-PCC = four-factor prothrombin complex concentrate, GCS = Glasgow Coma Scale score
Background: Andexanet alfa is approved (FDA "accelerated approval"; EMA "conditional approval") as the first specific reversal agent for factor Xa (FXa) inhibitor-associated uncontrolled or life-threatening bleeding. Four-factor prothrombin complex concentrates (4F-PCC) are commonly used as an off-label, non-specific, factor replacement approach to manage FXa inhibitor-associated life-threatening bleeding. We evaluated the effectiveness and safety of andexanet alfa versus 4F-PCC for management of apixaban- or rivaroxaban-associated intracranial hemorrhage (ICH). Methods: This two-cohort comparison study included andexanet alfa patients enrolled at US hospitals from 4/2015 to 3/2020 in the prospective, single-arm ANNEXA-4 study and a synthetic control arm of 4F-PCC patients admitted within a US healthcare system from 12/2016 to 8/2020. Adults with radiographically confirmed ICH who took their last dose of apixaban or rivaroxaban < 24 h prior to the bleed were included. Patients with a Glasgow Coma Scale (GCS) score < 7, hematoma volume > 60 mL, or planned surgery within 12 h were excluded. Outcomes were hemostatic effectiveness from index to repeat scan, mortality within 30 days, and thrombotic events within five days. Odds ratios (ORs) with 95% confidence intervals (CI) were calculated using propensity score-overlap weighted logistic regression. Results: The study included 107 andexanet alfa (96.6% low dose) and 95 4F-PCC patients (79.3% receiving a 25 unit/kg dose). After propensity score-overlap weighting, mean age was 79 years, GCS was 14, time from initial scan to reversal initiation was 2.3 h, and time from reversal to repeat scan was 12.2 h in both arms. Atrial fibrillation was present in 86% of patients. Most ICHs were single compartment (78%), trauma-related (61%), and involved the intracerebral and/or intraventricular space(s) (53%). 
ICH size was ≥ 10 mL in volume (intracerebral and/or ventricular) or ≥ 10 mm in thickness (subdural or subarachnoid) in 22% of patients and infratentorial in 15%. Andexanet alfa was associated with greater odds of achieving hemostatic effectiveness (85.8% vs. 68.1%; OR 2.73; 95% CI 1.16-6.42) and decreased odds of mortality (7.9% vs. 19.6%; OR 0.36; 95% CI 0.13-0.98) versus 4F-PCC. Two thrombotic events occurred with andexanet alfa and none with 4F-PCC. Conclusions: In this indirect comparison of patients with an apixaban- or rivaroxaban-associated ICH, andexanet alfa was associated with better hemostatic effectiveness and improved survival compared to 4F-PCC. Trial registration NCT02329327; registration date: December 31, 2014.
Study flow diagram
Characteristics of mechanically ventilated patients based on early sedation depth status
Background: Mechanically ventilated patients have experienced greater periods of prolonged deep sedation during the coronavirus disease (COVID-19) pandemic. Multiple studies from the pre-COVID era demonstrate that early deep sedation is associated with worse outcomes. Despite this, there is a lack of data on sedation depth and its impact on outcome for mechanically ventilated patients during the COVID-19 pandemic. We sought to characterize emergency department (ED) and intensive care unit (ICU) sedation practices during the COVID-19 pandemic, and to determine if early deep sedation was associated with worse clinical outcomes. Study design and methods: Dual-center, retrospective cohort study conducted over 6 months (March-August, 2020), involving consecutive, mechanically ventilated adults. All sedation-related data during the first 48 h were collected. Deep sedation was defined as a Richmond Agitation-Sedation Scale of - 3 to - 5 or a Riker Sedation-Agitation Scale of 1-3. To examine the impact of early sedation depth on hospital mortality (primary outcome), we used a multivariable logistic regression model. Secondary outcomes included ventilator-, ICU-, and hospital-free days. Results: 391 patients were studied, and 283 (72.4%) experienced early deep sedation. Deeply sedated patients received higher cumulative doses of fentanyl, propofol, midazolam, and ketamine when compared to lightly sedated patients. Deeply sedated patients also experienced fewer ventilator-, ICU-, and hospital-free days, and greater mortality (30.4% versus 11.1%) when compared to lightly sedated patients (p < 0.01 for all). After adjusting for confounders, early deep sedation remained significantly associated with higher mortality (adjusted OR 3.44; 95% CI 1.65-7.17; p < 0.01). These results were stable in the subgroup of patients with COVID-19. Conclusions: The management of sedation for mechanically ventilated patients in the ICU has changed during the COVID pandemic. 
Early deep sedation is common and independently associated with worse clinical outcomes. A protocol-driven approach to sedation, targeting light sedation as early as possible, should continue to remain the default approach.
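The study's deep sedation definition (RASS −3 to −5, or Riker SAS 1–3) maps directly onto a small classifier. A sketch with assumed argument names, for illustration only:

```python
def is_deep_sedation(rass=None, sas=None):
    """Deep sedation per the study definition:
    Richmond Agitation-Sedation Scale (RASS) of -3 to -5,
    or Riker Sedation-Agitation Scale (SAS) of 1 to 3.
    Pass whichever scale was recorded; either suffices."""
    if rass is not None and -5 <= rass <= -3:
        return True
    if sas is not None and 1 <= sas <= 3:
        return True
    return False

print(is_deep_sedation(rass=-4))  # → True
print(is_deep_sedation(rass=0))   # → False
print(is_deep_sedation(sas=2))    # → True
```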
Background Therapeutic drug monitoring (TDM) may represent an invaluable tool for optimizing antimicrobial therapy in septic patients, but extensive use is burdened by barriers. The aim of this study was to assess the impact of a newly established expert clinical pharmacological advice (ECPA) program in improving the clinical usefulness of an already existing TDM program for emerging candidates in tailoring antimicrobial therapy among critically ill patients. Methods This retrospective observational study included an organizational phase (OP) and an assessment phase (AP). During the OP (January–June 2021), specific actions were organized by MD clinical pharmacologists together with bioanalytical experts, clinical engineers, and ICU clinicians. During the AP (July–December 2021), the impact of these actions in optimizing antimicrobial treatment of the critically ill patients was assessed. Four indicators of performance of the TDM-guided real-time ECPA program were identified [total TDM-guided ECPAs July–December 2021/total TDM results July–December 2020; total ECPA dosing adjustments/total delivered ECPAs both at first assessment and overall; and turnaround time (TAT) of ECPAs, defined as optimal (< 12 h), quasi-optimal (12–24 h), acceptable (24–48 h), or suboptimal (> 48 h)]. Results The OP allowed us to implement new organizational procedures, create a dedicated pathway in the intranet system, offer educational webinars on the clinical pharmacology of antimicrobials, and establish a multidisciplinary team at the morning bedside ICU meeting. In the AP, a total of 640 ECPAs were provided for optimizing 261 courses of antimicrobial therapy in 166 critically ill patients. ECPAs mainly concerned piperacillin–tazobactam (41.8%) and meropenem (24.9%), and other antimicrobials also had ≥ 10 ECPAs each (ceftazidime, ciprofloxacin, fluconazole, ganciclovir, levofloxacin, and linezolid). Overall, the pre-post increase in TDM activity was 13.3-fold. 
TDM-guided dosing adjustments were recommended at first assessment in 61.7% of ECPAs (10.7% increases and 51.0% decreases), and overall in 45.0% of ECPAs (10.0% increases and 35.0% decreases). The overall median TAT was optimal (7.7 h) and that of each single agent was always optimal or quasi-optimal. Conclusions Multidisciplinary approach and timely expert interpretation of TDM results by MD Clinical Pharmacologists could represent cornerstones in improving the cost-effectiveness of an antimicrobial TDM program for emerging TDM candidates.
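The TAT performance bands defined above are a straightforward categorization. A sketch (the function name is assumed, the band boundaries are those stated in the abstract):

```python
def tat_category(hours: float) -> str:
    """Turnaround-time (TAT) bands used as a performance indicator:
    optimal < 12 h, quasi-optimal 12-24 h, acceptable 24-48 h,
    suboptimal > 48 h."""
    if hours < 12:
        return "optimal"
    if hours <= 24:
        return "quasi-optimal"
    if hours <= 48:
        return "acceptable"
    return "suboptimal"

# The study's overall median TAT of 7.7 h falls in the first band
print(tat_category(7.7))  # → optimal
```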
Cumulative incidence of ventilator-associated pneumonia, stratified by corticosteroids use
Abstract Objective To assess the impact of treatment with steroids on the incidence and outcome of ventilator-associated pneumonia (VAP) in mechanically ventilated COVID-19 patients. Design Propensity-matched retrospective cohort study from February 24 to December 31, 2020, in 4 dedicated COVID-19 Intensive Care Units (ICU) in Lombardy (Italy). Patients Adult consecutive mechanically ventilated COVID-19 patients were subdivided into two groups: (1) treated with low-dose corticosteroids (dexamethasone 6 mg/day intravenous for 10 days) (DEXA+); (2) not treated with corticosteroids (DEXA−). A propensity score matching procedure (1:1 ratio) identified patients' cohorts based on: age, weight, PEEP level, PaO2/FiO2 ratio, non-respiratory Sequential Organ Failure Assessment (SOFA) score, Charlson Comorbidity Index (CCI), C-reactive protein plasma concentration at admission, sex and admission hospital (exact matching). Intervention Dexamethasone 6 mg/day intravenous for 10 days from hospital admission. Measurements and main results Seven hundred and thirty-nine patients were included, and the propensity-score matching identified two groups of 158 subjects each. Eighty-nine (56%) DEXA+ versus 55 (34%) DEXA− patients developed a VAP (RR 1.61 (1.26–2.098), p = 0.0001), after similar time from hospitalization, ICU admission and intubation. DEXA+ patients had a higher crude VAP incidence rate (49.58 (49.26–49.91) vs. 31.65 (31.38–31.91) VAP per 1000 patient-days), (IRR 1.57 (1.55–1.58), p
Background With ICU mortality rates decreasing, it is increasingly important to identify interventions to minimize functional impairments and improve outcomes for survivors. Simultaneously, we must identify robust patient-centered functional outcomes for our trials. Our objective was to investigate the clinimetric properties of a progression of three outcome measures, from strength to function. Methods Adults (≥ 18 years) were enrolled from five international ICU rehabilitation studies. Participants were required to be admitted to the ICU, mechanically ventilated, and previously independent. Outcomes included two components of the Physical Function in ICU Test-scored (PFIT-s): knee extensor strength and assistance required to move from sit to stand (STS); the 30-second STS (30 s STS) test was the third outcome. We analyzed survivors at ICU and hospital discharge. We report participant demographics, baseline characteristics, and outcome data using descriptive statistics. Floor effects were defined as ≥ 15% of participants achieving the minimum score and ceiling effects as ≥ 15% achieving the maximum score. We calculated the overall group difference score (hospital discharge score minus ICU discharge score) for participants with paired assessments. Results Of 451 participants, most were male (n = 278, 61.6%) with a median age between 60 and 66 years, a mean APACHE II score between 19 and 24, a median duration of mechanical ventilation between 4 and 8 days, ICU length of stay (LOS) between 7 and 11 days, and hospital LOS between 22 and 31 days. For knee extension, we observed a ceiling effect in 48.5% (160/330) of participants at ICU discharge and in 74.7% (115/154) at hospital discharge; the median [1st, 3rd quartile] PFIT-s difference score (n = 139) was 0 [0,1] (p < 0.05). For STS assistance, we observed a ceiling effect in 45.9% (150/327) at ICU discharge and in 77.5% (79/102) at hospital discharge; the median PFIT-s difference score (n = 87) was 1 [0, 2] (p < 0.05). 
For 30 s STS, we observed a floor effect in 15.0% (12/80) at ICU discharge but did not observe a floor or ceiling effect at hospital discharge. The median 30 s STS difference score ( n = 54) was 3 [1, 6] ( p < 0.05). Conclusion Among three progressive outcome measures evaluated in this study, the 30 s STS test appears to have the most favorable clinimetric properties to assess function at ICU and hospital discharge in moderate to severely ill participants.
Kaplan-Meier curve of 6-month survival in patients with sepsis (red) and without sepsis (blue)
Trajectory of outcomes to 6 months in patients with sepsis (red) and without sepsis (blue). A-C Circles are mean and error bars are 95% confidence interval. P values calculated from the interaction between sepsis and time from a mixed-effect generalized linear model with Gaussian distribution, including center as random effect, and adjusted by age, sex, ICU admission source, APACHE III score, type of admission (medical vs. surgical), lung transplant patients, trauma, creatinine, heart rate, mean arterial pressure, presence of chronic cardiovascular disease and ICU length of stay. Models were further adjusted by the baseline value of the outcome of interest as fixed effect. D Outcomes assessed at 6 months of follow-up. Boxes represent median and interquartile range. Whiskers extend 1.5 times the interquartile range beyond the first and third quartiles per the conventional Tukey method. Transparent circles beyond the whiskers represent outliers. Abbreviations: WHODAS, WHO Disability Assessment Schedule 2.0; IES-R, Impact of Event Scale-Revised; IADL, instrumental activities of daily living; and MoCA-BLIND, Montreal Cognitive Assessment
Long-term outcomes according to the presence of sepsis
Background Data on long-term outcomes after sepsis-associated critical illness have mostly come from small cohort studies, with no information about the incidence of new disability. We investigated whether sepsis-associated critical illness was independently associated with new disability at 6 months after ICU admission compared with other types of critical illness. Methods We conducted a secondary analysis of a multicenter, prospective cohort study in six metropolitan intensive care units in Australia. Adult patients were eligible if they had been admitted to the ICU and received more than 24 h of mechanical ventilation. There was no intervention. Results The primary outcome was new disability measured with the WHO Disability Assessment Schedule 2.0 (WHODAS) 12 level score compared between baseline and 6 months. Between enrollment and follow-up at 6 months, 222/888 (25%) patients died, 100 (35.5%) with sepsis and 122 (20.1%) without sepsis ( P < 0.001). Among survivors, there was no difference for the incidence of new disability at 6 months with or without sepsis, 42/106 (39.6%) and 106/300 (35.3%) (RD, 0.00 (− 10.29 to 10.40), P = 0.995), respectively. In addition, there was no difference in the severity of disability, health-related quality of life, anxiety and depression, post-traumatic stress, return to work, financial distress or cognitive function. Conclusions Compared to mechanically ventilated patients of similar acuity and length of stay without sepsis, patients with sepsis admitted to ICU have an increased risk of death, but survivors have a similar risk of new disability at 6 months. Trial registration NCT03226912, registered July 24, 2017.
SARS-CoV-2 spike RBD protein led to dysfunction of the RAS system in mice. The expression levels of (A) ACE2 and (B) ACE in the lung tissues of mice following injection of Control-Fc or SARS-CoV-2 spike RBD were determined by IHC, and representative images are shown. The left panel, 100×, scale bar = 100 μm; the right panel, 400×, scale bar = 100 μm. Mice were treated with saline or LPS combined with Control-Fc or SARS-CoV-2 spike RBD-Fc for 3 days. (C) The protein expression levels of ACE and ACE2 in the lung tissues were determined by western blot analysis, and representative results are shown. (D) Angiotensin I (Ang I) and (E) angiotensin II (Ang II) concentrations in the BALF were determined by ELISA analysis (n = 5-6 per group). Data are shown as mean ± S.D., *P < 0.05, **P < 0.01, and N.S. indicates not significant
SARS-CoV-2 spike RBD protein activated the NOX1 and NOX2. Mice were exposed to LPS with or without SARS-CoV-2 spike RBD-Fc protein for 3 days. A The mRNA expression levels of AT 1 R and AT 2 R in the lung tissues of mice with indicated treatment were determined by qPCR analysis (n = 5 per group). B The protein expression levels of IκB and p-IκB in the lung tissues were determined by western blot analysis, and the representative results were shown. C, D The mRNA expression levels of indicated gene in the lung tissues were determined by qPCR analysis (n = 5 per group). E The protein expression levels of NOX1 and NOX2 in the lung tissues were determined by western blot analysis, and the representative results were shown. Data are shown as mean ± S.D., *P < 0.05, **P < 0.01, and N.S. indicates not significant
Background SARS-CoV-2 infection leads to acute lung injury (ALI) and acute respiratory distress syndrome (ARDS). Both clinical data and animal experiments suggest that the renin–angiotensin system (RAS) is involved in the pathogenesis of SARS-CoV-2-induced ALI. Angiotensin-converting enzyme 2 (ACE2) is the functional receptor for SARS-CoV-2 and a crucial negative regulator of the RAS. Recombinant ACE2 protein (rACE2) has been demonstrated to play a protective role against SARS-CoV- and avian influenza-induced ALI and, more relevantly, rACE2 inhibits SARS-CoV-2 proliferation in vitro. However, whether rACE2 protects against SARS-CoV-2-induced ALI in animal models, and the underlying mechanisms, have yet to be elucidated. Methods and Results Here, we demonstrated that the SARS-CoV-2 spike receptor-binding domain (RBD) protein aggravated lipopolysaccharide (LPS)-induced ALI in mice. SARS-CoV-2 spike RBD protein directly bound and downregulated ACE2, leading to an elevation in angiotensin (Ang) II. Ang II further increased NOX1/2 expression through AT1R, subsequently causing oxidative stress and uncontrolled inflammation and eventually resulting in ALI/ARDS. Importantly, rACE2 remarkably reversed SARS-CoV-2 spike RBD protein-induced ALI by directly binding the SARS-CoV-2 spike RBD protein and by cleaving Ang I or Ang II. Conclusion This study is the first to prove that rACE2 plays a protective role against SARS-CoV-2 spike RBD protein-aggravated LPS-induced ALI in an animal model and to illustrate the mechanism by which the ACE2–Ang II–AT1R–NOX1/2 axis might contribute to SARS-CoV-2-induced ALI.
Literature search and characteristics of the included studies
NOS criteria for quality of cohort studies
NOS criteria for quality of case-control studies
Background The purpose of this study was to clarify the prognostic value of pentraxin-3 (PTX3) for mortality in patients with sepsis. Methods Publications up to January 2021 were retrieved from PubMed, EMBASE, and the Cochrane Library. Data from eligible cohort and case–control studies were extracted for the meta-analysis. Multivariate regression analysis was used to evaluate the correlation of the outcomes with sample size and male proportion. Results A total of 17 studies covering 3658 sepsis patients were included. PTX3 level was significantly higher in non-survivors than in survivors (SMD (95% CI): −1.06 (−1.43, −0.69); P < 0.001). Increased PTX3 level was significantly associated with mortality (HR (95% CI): 2.09 (1.55, 2.81); P < 0.001). PTX3 showed good predictive capability for mortality (AUC, ES (95% CI): 0.73 (0.70, 0.77); P < 0.001). The outcome comparing PTX3 levels in non-survivors vs. survivors and the outcome of the association between PTX3 and mortality were associated with sample size but not male proportion; AUC was associated with both sample size and male proportion. Conclusions PTX3 level was significantly higher in non-survivors than in survivors of sepsis. An elevated PTX3 level was significantly associated with mortality. Furthermore, the level of PTX3 might predict patient mortality.
Top-cited authors
Jean-Louis Vincent
  • Université Libre de Bruxelles
Fabio Silvio Taccone
  • Université Libre de Bruxelles
Donat Rudolf Spahn
  • University of Zurich
Louis Riddez
  • Karolinska Institutet
Radko Komadina
  • General and Teaching Hospital Celje