Paul Hérent's research while affiliated with Institut Curie and other places


Publications (12)


Figure 1: ONCOPILOT Foundation Model Training and Evaluation (A) Overview of the datasets used for training the ONCOPILOT segmentation model, including the distribution across train, test, and validation sets. (B) Diagram illustrating the ONCOPILOT segmentation model's workflow. The model accepts visual prompts (either point-clicks or bounding boxes) of 3D tumor volumes and outputs corresponding 3D segmentation masks. Optional editing allows for real or simulated radiologist interaction, where positive and negative edit-points can be set manually in a viewer environment or automatically during evaluation.
Figure 2: ONCOPILOT Performance Against Baseline (A) Radar plot (top) and table (bottom) displaying segmentation DICE scores across 7 lesion types for 3 different ONCOPILOT models (point, point-edit, bbox) compared to the best-performing baseline from the ULS23 segmentation challenge on the 10% held-out test set. (B) Examples of successful segmentations from the test set, comparing point mode (left columns) and bbox mode (right columns). The top row shows the visual prompt provided to the model, the middle row displays the ground truth mask for that slice, and the bottom row presents the ONCOPILOT model's predicted segmentation.
Figure 3: ONCOPILOT Performance on Different Lesion Types Bar plot showing the mean DICE scores from ONCOPILOT segmentation masks in point mode (red) and point-edit mode (blue) for: (A) spherical lesions (sphericity > 0.6) versus irregular lesions (see Methods for the sphericity formula), (B) large lesions (long axis > 15 mm) versus smaller lesions, (C) voluminous lesions (volume > 1 mL) versus smaller lesions. (D) Boxplot displaying the distribution of DICE scores produced by ONCOPILOT in point mode (red) and point-edit mode (blue) across various lesion types in the 10% held-out test set, with median values and interquartile ranges highlighted. (E) Boxplot showing RECIST measurements derived from ONCOPILOT's predicted masks in point mode (red) and point-edit mode (blue) across different lesion types in the 10% held-out test set, highlighting median values and interquartile ranges. The long axis is defined as the longest possible line in the axial plane across the predicted 3D mask. ***: p-value < 0.001; n.s.: non-significant.
Figure 4: ONCOPILOT Integration Into Radiologist's Workflow (A) Diagram and results comparing ONCOPILOT in point, point-edit, and bbox modes against three radiologists for the long-axis measurement of diverse oncological lesions. Median absolute error (mm) and median relative error (% of lesion size) are shown. P-values from t-tests compare ONCOPILOT models to radiologists for long-axis measurement error; no comparison reached statistical significance (p ≥ 0.05). The long axis is the longest line in the axial plane across the predicted 3D mask. (B) Boxplot (bottom) of ONCOPILOT's tumor long-axis measurement performance compared with radiologists. Left: median absolute error (mm) vs. ground truth. Right: median relative error (% of lesion size). Median and interquartile ranges are shown. (C) Diagram of an experiment evaluating radiologists' inter-operator variability and measurement time when measuring tumors' long axes in a digital viewer, comparing manual vs. ONCOPILOT-assisted (bbox mode) assessments. (D) Boxplots show radiologists' inter-operator variability in measurement error (left) and measurement time (right) using manual vs. ONCOPILOT-assisted annotations across diverse tumors, with t-test p-values; n=3.
Figure S1: ONCOPILOT Long Axis Performance Across Different Organs (A) Table showing the mean and median long-axis measurements (in mm) for the various organ types in the test set. (B) Example of a suboptimal segmentation by ONCOPILOT on a small lung nodule from the LIDC-IDRI dataset with magnification of the overlay on the rightmost panel, with a DICE of 0.66 in point mode.
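The captions above refer to a sphericity threshold of 0.6 and defer the exact formula to the Methods section. As a rough illustration only, a minimal sketch of one common sphericity definition, psi = pi^(1/3) * (6V)^(2/3) / A (the paper's own formula may differ), computed from a 3D binary mask:

```python
# Hypothetical sketch: sphericity of a lesion from a 3D binary mask.
# Assumes the common definition psi = pi^(1/3) * (6V)^(2/3) / A, which
# may differ from the formula given in the paper's Methods section.
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

def sphericity(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """mask: 3D boolean array; spacing: voxel size in mm (z, y, x)."""
    volume = mask.sum() * np.prod(spacing)                # lesion volume in mm^3
    verts, faces, _, _ = marching_cubes(mask.astype(np.uint8), level=0.5,
                                        spacing=spacing)
    area = mesh_surface_area(verts, faces)                # surface area in mm^2
    return (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / area
```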


ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation
  • Preprint
  • File available

October 2024 · 66 Reads

Léo Machado · Hélène Philippe · Élodie Ferreres · [...] · Paul Hérent

Carcinogenesis is a proteiform phenomenon, with tumors emerging in various locations and displaying complex, diverse shapes. At the crucial intersection of research and clinical practice, it demands precise and flexible assessment. However, current biomarkers, such as RECIST 1.1's long and short axis measurements, fall short of capturing this complexity, offering an approximate estimate of tumor burden and a simplistic representation of a more intricate process. Additionally, existing supervised AI models face challenges in addressing the variability in tumor presentations, limiting their clinical utility. These limitations arise from the scarcity of annotations and the models' focus on narrowly defined tasks. To address these challenges, we developed ONCOPILOT, an interactive radiological foundation model trained on approximately 7,500 CT scans covering the whole body, from both normal anatomy and a wide range of oncological cases. ONCOPILOT performs 3D tumor segmentation using visual prompts like point-click and bounding boxes, outperforming state-of-the-art models (e.g., nnUnet) and achieving radiologist-level accuracy in RECIST 1.1 measurements. The key advantage of this foundation model is its ability to surpass state-of-the-art performance while keeping the radiologist in the loop, a capability that previous models could not achieve. When radiologists interactively refine the segmentations, accuracy improves further. ONCOPILOT also accelerates measurement processes and reduces inter-reader variability, facilitating volumetric analysis and unlocking new biomarkers for deeper insights. This AI assistant is expected to enhance the precision of RECIST 1.1 measurements, unlock the potential of volumetric biomarkers, and improve patient stratification and clinical care, while seamlessly integrating into the radiological workflow.
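The figure legends define the reported long axis as the longest in-plane line across the predicted 3D mask. A minimal sketch of that measurement, assuming a NumPy/SciPy environment; the function name and implementation details are illustrative, not the authors' code:

```python
# Illustrative sketch: RECIST-style long axis (mm) from a predicted 3D mask,
# following the definition in the figure captions (longest in-plane line
# across the mask). Not the authors' implementation.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def long_axis_mm(mask: np.ndarray, pixel_spacing=(1.0, 1.0)) -> float:
    """mask: 3D array (z, y, x); pixel_spacing: in-plane (row, col) size in mm."""
    best = 0.0
    for axial_slice in mask:                      # iterate over axial planes
        ys, xs = np.nonzero(axial_slice)
        if len(ys) < 2:
            continue
        pts = np.column_stack([ys * pixel_spacing[0], xs * pixel_spacing[1]])
        if len(pts) > 3:
            try:
                pts = pts[ConvexHull(pts).vertices]   # keep boundary points only
            except Exception:                         # degenerate (collinear) slice
                pass
        best = max(best, pdist(pts).max())            # longest pairwise distance
    return best
```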


Promptable foundation model for automatic whole body RECIST measurement.

June 2024 · 10 Reads · 1 Citation

Journal of Clinical Oncology

e13643 Background: RECIST 1.1 is the gold standard for tumor evaluation in clinical trials and patient care. However, it faces challenges due to the subjective selection and measurement of lesions, notably because of inter-observer variability in the 20-30% range (1), which can potentially result in inaccuracies in therapeutic response classification (2). The main objective of this study is to assess the added value of a radiological foundation model in assisting the radiologist to measure RECIST across different lesion topographies, using visual prompting. Methods: We use a promptable segmentation algorithm, based on a visual foundation model, pre-trained on various CT scans and then trained on a subset of the Medical Segmentation Decathlon (MSD) dataset, covering 538 patients with pancreas, liver, colon and lung lesions. It outputs a 3D segmentation mask of the lesion from a visual prompt, i.e., a 3D bounding box (BBox), given as input. We then assess the capacity of the model to measure RECIST on the MSD validation set and on the KiTS validation set (99 samples), which features a new lesion type (kidney). To simulate inter-reader variability of up to 23%, we provide as inputs BBoxes centered on the lesions but with a varying error (15% variability: +10 pixels, 23% variability: +15 pixels). Results: The model is able to correct variability in the BBox input: across all BBox error ranges, it beats the input variability (15% and 23% thresholds in each column) across many organ types, except in the pancreas when using large-error BBoxes. Moreover, the model is able to generalize to new lesion types not seen during training, on an external dataset (KiTS - Kidney). Conclusions: Unlike existing supervised machine learning models dedicated to lesion detection in specific organs, our approach is organ- and lesion-agnostic and offers a more reliable, precise tumor assessment. This is a first step toward the longitudinal evaluation of tumors while safeguarding the clinical intuition necessary for selecting the right target lesions. Moreover, we believe that these methods will be crucial in advancing beyond RECIST 1.1, facilitating the identification of new prognostic biomarkers and proxies for tumor burden derived from previously underexplored radiomics features, ultimately refining the efficacy of clinical trial assessments and elevating the standard of patient care. 1. Yoon, S et al. 2015. 2. Kuhl et al. 2018. [Table: see text]
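A small sketch of the bounding-box perturbation described in the Methods, where prompts stay centered on the lesion but are inflated by a fixed pixel error (+10 px or +15 px); the exact convention (here, in-plane inflation only) and the function name are assumptions, not the study's code:

```python
# Minimal sketch of BBox perturbation to emulate inter-reader variability:
# +10 px for ~15% variability, +15 px for ~23%, as described in the abstract.
# Only in-plane (y, x) inflation is assumed here.
import numpy as np

def perturb_bbox(bbox, error_px, image_shape):
    """bbox: (z0, y0, x0, z1, y1, x1); error_px: margin added on each side in-plane."""
    z0, y0, x0, z1, y1, x1 = bbox
    lo = np.maximum([z0, y0 - error_px, x0 - error_px], 0)
    hi = np.minimum([z1, y1 + error_px, x1 + error_px],
                    np.array(image_shape) - 1)
    return (*lo, *hi)

# Example: the two error regimes mentioned above, on a toy CT volume shape
noisy_15 = perturb_bbox((10, 40, 40, 20, 90, 95), error_px=10, image_shape=(64, 512, 512))
noisy_23 = perturb_bbox((10, 40, 40, 20, 90, 95), error_px=15, image_shape=(64, 512, 512))
```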


Figure 1: Our training strategy. Starting from an existing language model such as BioMedLM, we continue the pre-training on our corpus of medical textbooks. Then, we use GPT-4, prompted with knowledge from the textbooks, to generate clinical cases that are used to fine-tune the model.
Figure 2: Accuracy distribution by question (number of correct propositions divided by the total number of propositions) on the FreeCN dataset for GPT-4 and BioMedLM + Books + MQG
Figure 3: Accuracy per subject of BioMedLM and GPT-4
Results on the full evaluation dataset
Efficient Medical Question Answering with Knowledge-Augmented Question Generation

May 2024 · 47 Reads

In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain. Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind. In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach. We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model. Additionally, we introduce ECN-QA, a novel medical question answering dataset containing "progressive questions" composed of related sequential questions. We show the benefits of our training strategy on this dataset. The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned. The code and weights are available at https://github.com/raidium-med/MQG.
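A hedged sketch of the knowledge-augmented question generation step described above: a textbook excerpt is injected into a prompt and a large model is asked to produce exam-style questions that are then used for fine-tuning. The prompt wording and helper names are illustrative; the actual implementation is in the linked repository:

```python
# Illustrative sketch only: build prompts from textbook excerpts and collect
# synthetic questions from any text-generation callable (e.g. a GPT-4 wrapper).
from typing import Callable, List

PROMPT_TEMPLATE = (
    "You are writing medical exam questions.\n"
    "Using only the following textbook excerpt, write {n} multiple-choice "
    "questions with 5 propositions each and mark the correct ones.\n\n"
    "Excerpt:\n{excerpt}\n"
)

def generate_synthetic_questions(excerpts: List[str],
                                 generate: Callable[[str], str],
                                 n_per_excerpt: int = 3) -> List[str]:
    """`generate` is any prompt-to-text callable; its output is kept as-is."""
    outputs = []
    for excerpt in excerpts:
        prompt = PROMPT_TEMPLATE.format(n=n_per_excerpt, excerpt=excerpt)
        outputs.append(generate(prompt))      # raw generated questions for fine-tuning
    return outputs
```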


A deep learning method for predicting knee osteoarthritis radiographic progression from MRI

October 2021 · 212 Reads · 61 Citations

Arthritis Research & Therapy

Background: The identification of patients with knee osteoarthritis (OA) likely to progress rapidly in terms of structure is critical to facilitate the development of disease-modifying drugs. Methods: Using 9280 knee magnetic resonance (MR) images (3268 patients) from the Osteoarthritis Initiative (OAI) database, we implemented a deep learning method to predict, from MR images and clinical variables including body mass index (BMI), further cartilage degradation measured by joint space narrowing at 12 months. Results: Using COR IW TSE images, our classification model achieved a ROC AUC score of 65%. On a similar task, trained radiologists obtained a ROC AUC score of 58.7%, highlighting the difficulty of the classification task. Additional analyses conducted in parallel to predict pain grade evaluated by the WOMAC pain index achieved a ROC AUC score of 72%. Attention maps provided evidence for distinct specific areas as being relevant in those two predictive models, including the medial joint space for JSN progression and the intra-articular space for pain prediction. Conclusions: This feasibility study demonstrates the value of deep learning applied to OA, with the potential to support even trained radiologists in the challenging task of identifying patients at high risk of disease progression.


A Deep Learning Method for Identifying Predictors of Knee Osteoarthritis Radiographic Progression From Baseline MRI

May 2021 · 43 Reads

Background: The identification of patients with knee osteoarthritis (OA) likely to progress rapidly in terms of structure is critical to facilitate the development of disease-modifying drugs. Methods: Using data from the Osteoarthritis Initiative (OAI) database, we implemented a deep learning method to predict, from baseline magnetic resonance images, further cartilage degradation, the latter being measured by joint space narrowing at 12 months. Results: Using COR IW TSE images, our classification model achieved a ROC AUC score of 65%, to be compared with a ROC AUC score of 58.7% obtained by trained radiologists. Additional analyses conducted in parallel to predict pain grade evaluated by the WOMAC pain index achieved a ROC AUC score of 72%. Attention maps provided evidence for distinct specific areas as being relevant in those two predictive models, including the internal femoro-tibial compartment for JSN progression and the intra-articular space for pain prediction. Conclusions: This feasibility study demonstrates the value of deep learning applied to OA, with the potential to support even trained radiologists in the challenging task of identifying patients at high risk of disease progression.


Kaplan–Meier curves for the high-risk individuals and the ones with low or medium risk according to AI-severity
The threshold to assign individuals into a high-risk group was the 2/3 quantile of the AI-severity score computed for patients of the KB development cohort. a Kaplan–Meier curves were obtained for the 150 leftover KB patients from the development cohort. b Kaplan–Meier curves were obtained for the 135 patients of the IGR validation cohort. p-values for the log-rank test were equal to 4.77e–07 (KB) and 4.00e–12 (IGR). The two terciles used to determine threshold values for low-, medium-, and high-risk groups were equal to 0.187 and 0.375. Diamonds correspond to censoring of patients who were still hospitalized at the time when data ceased to be updated. The bands correspond to the sequence of the 95% confidence intervals of the survival probabilities for each day. KB Kremlin-Bicêtre hospital, IGR Institut Gustave Roussy hospital.
AUC values when comparing AI-severity to other prognostic scores for COVID-19 severity/mortality
The AI-severity model was trained using the severity outcome defined as an oxygen flow rate of 15 L/min or higher, the need for mechanical ventilation, or death. When evaluating AI-severity on the alternative outcomes, the model was not trained again. a AUC results are reported on the leftover KB patients from the development cohort (150 patients). b The mean AUC (averaged over outcomes and over hospitals) as a function of the sample size (sum of sample sizes for the development and validation cohorts) used to construct the score. c AUC results are reported on the external validation set from IGR (135 patients). Models are sorted from left to right (and from top to bottom in the legend) by decreasing order of AUC values (averaged over outcomes and over hospitals). Error bars represent the 95% confidence intervals obtained with the DeLong procedure. Stars indicate the order of magnitude of p-values for the DeLong one-sided test in which we test if AUCAI-severity > AUCother score, • 0.05 < p ≤ 0.10, *0.01 < p ≤ 0.05, **0.001 < p ≤ 0.01, ***p ≤ 0.001. KB Kremlin-Bicêtre hospital, IGR Institut Gustave Roussy hospital, ICU intensive care unit, NEWS2 National Early Warning Score 2, AUC area under the curve.
Confusion matrix obtained with AI-severity, which includes CT scan information in addition to clinical and biological variables, and with C & B, which contains only clinical and biological variables
Values in the matrices correspond to the number of patients in each category, which is defined by the true severity status and its predicted one. The confusion matrix was computed using the outcome "oxygen flow rate of 15 L/min or higher and/or the need for mechanical ventilation and/or patient death." For both scores, we considered the 2/3 quantile—computed using the development cohort (KB)—to distinguish severe patients from non-severe patients. In addition to the neural network variable computed from CT scan images, the variables included in AI-severity consist of oxygen saturation, age, sex, platelet, and urea. The variables included in C & B consist of oxygen saturation, age, sex, platelet, urea, LDH, hypertension, chronic kidney disease, dyspnea, and neutrophil values. Both scores were constructed using a feature selection algorithm that selected optimal variables. KB Kremlin-Bicêtre hospital, IGR Institut Gustave Roussy hospital.
Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients

January 2021 · 334 Reads · 173 Citations

The SARS-COV-2 pandemic has put pressure on intensive care units, so that identifying predictors of disease severity is a priority. We collect 58 clinical and biological variables, and chest CT scan data, from 1003 coronavirus-infected patients from two French hospitals. We train a deep learning model based on CT scans to predict severity. We then construct the multimodal AI-severity score that includes 5 clinical and biological variables (age, sex, oxygenation, urea, platelet) in addition to the deep learning model. We show that neural network analysis of CT scans brings unique prognostic information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP), explaining the measurable but limited 0.03 increase in AUC obtained when adding CT-scan information to clinical variables. When comparing AI-severity with 11 existing severity scores, we find significantly improved prognostic performance; AI-severity can therefore rapidly become a reference scoring approach. In summary, the authors develop a multimodal severity score including clinical and imaging features that shows significantly improved prognostic performance in two validation datasets compared to previous scores.
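As an illustration of how such a score is turned into risk groups, a minimal sketch using development-cohort terciles as thresholds (the figure legend above reports terciles of 0.187 and 0.375 for AI-severity); the code is an assumption for clarity, not the published model:

```python
# Minimal sketch: assign low/medium/high risk groups from a severity score,
# using terciles computed on the development cohort as thresholds.
import numpy as np

def assign_risk_groups(scores: np.ndarray, dev_scores: np.ndarray) -> np.ndarray:
    """Label each patient low/medium/high using development-cohort terciles."""
    t1, t2 = np.quantile(dev_scores, [1 / 3, 2 / 3])   # e.g. 0.187 and 0.375 in the paper
    groups = np.full(scores.shape, "medium", dtype=object)
    groups[scores <= t1] = "low"
    groups[scores > t2] = "high"
    return groups
```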


Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images

January 2021 · 317 Reads · 2 Citations

As AI-based medical devices are becoming more common in imaging fields like radiology and histology, interpretability of the underlying predictive models is crucial to expand their use in clinical practice. Existing heatmap-based interpretability methods such as GradCAM only highlight the location of predictive features but do not explain how they contribute to the prediction. In this paper, we propose a new interpretability method that can be used to understand the predictions of any black-box model on images, by showing how the input image would be modified in order to produce different predictions. A StyleGAN is trained on medical images to provide a mapping between latent vectors and images. Our method identifies the optimal direction in the latent space to create a change in the model prediction. By shifting the latent representation of an input image along this direction, we can produce a series of new synthetic images with changed predictions. We validate our approach on histology and radiology images, and demonstrate its ability to provide meaningful explanations that are more informative than GradCAM heatmaps. Our method reveals the patterns learned by the model, which allows clinicians to build trust in the model's predictions, discover new biomarkers and eventually reveal potential biases.
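A conceptual sketch of the latent-space traversal described here: regenerate the image while moving the latent code along a direction that changes the classifier's output. How the direction is estimated (gradients, a linear probe on latent codes, etc.) and the exact model interfaces are assumptions, not the paper's implementation:

```python
# Conceptual sketch: produce a series of synthetic counterfactual images by
# shifting a latent code along a prediction-changing direction.
import torch

@torch.no_grad()
def traverse_latent(generator, classifier, w, direction, alphas=(-2, -1, 0, 1, 2)):
    """generator: latent w -> image; classifier: image -> scalar score tensor."""
    direction = direction / direction.norm()            # unit-length direction
    results = []
    for alpha in alphas:
        image = generator(w + alpha * direction)         # synthetic counterfactual
        results.append((alpha, image, classifier(image).item()))
    return results
```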


Figure 1: Population description for the KB and IGR hospitals. Among the 1,003 patients of the study, biological and clinical variables were available for 989 individuals. Categorical variables are expressed as percentages [available]. Continuous variables are shown as median (IQR) [available]. Associations with severity are reported with p-values for each center, and the pooled p-value has been obtained with Stouffer's method to combine p-values. p-values that are shown are not adjusted for multiplicity. Variables and pooled p-values are in bold when the variable is significant after Bonferroni adjustment to account for multiple testing across the 63 variables. For continuous variables, odds ratios are computed for an increase of one standard deviation of the continuous variable. KB odds ratios are in blue, IGR in red.
AI-based multi-modal integration of clinical characteristics, lab tests and chest CTs improves COVID-19 outcome prediction of hospitalized patients

May 2020 · 188 Reads · 9 Citations

With 15% of severe cases among hospitalized patients [1], the SARS-COV-2 pandemic has put tremendous pressure on Intensive Care Units, and made the identification of early predictors of severity a public health priority. We collected clinical and biological data, as well as CT scan images and radiology reports from 1,003 coronavirus-infected patients from two French hospitals. Radiologists' manual CT annotations were also available. We first identified 11 clinical variables and 3 types of radiologist-reported features significantly associated with prognosis. Next, focusing on the CT images, we trained deep learning models to automatically segment the scans and reproduce radiologists' annotations. We also built CT image-based deep learning models that predicted severity better than models based on the radiologists' scan reports. Finally, we showed that including CT scan features alongside the clinical and biological data yielded more accurate predictions than using clinical and biological data only. These findings show that CT scans provide insightful early predictors of severity.
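As a toy illustration of the late-fusion idea (combining a CT-derived deep learning feature with clinical and biological variables), a short sketch with scikit-learn; the variable names and data below are synthetic placeholders, not the study's pipeline:

```python
# Toy sketch of late fusion: concatenate a CT-derived deep learning feature
# with clinical/biological variables and fit a simple classifier on both.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
clinical = rng.normal(size=(n, 11))        # e.g. age, oxygen saturation, urea, ...
ct_feature = rng.normal(size=(n, 1))       # scalar output of a CT deep learning model
y = rng.integers(0, 2, size=n)             # severity outcome (toy labels)

clf_clinical = LogisticRegression(max_iter=1000).fit(clinical, y)
clf_fused = LogisticRegression(max_iter=1000).fit(np.hstack([clinical, ct_feature]), y)
# The study reports the fused model outperforming the clinical-only model; with
# random toy data the comparison is meaningless and only shows the mechanics.
```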


Abdominal musculature segmentation and surface prediction from CT using deep learning for sarcopenia assessment

May 2020 · 90 Reads · 41 Citations

Diagnostic and Interventional Imaging

Purpose: The purpose of this study was to build and train a deep convolutional neural network (CNN) algorithm to segment muscular body mass (MBM) and predict muscular surface from a two-dimensional axial computed tomography (CT) slice through the L3 vertebra. Materials and methods: An ensemble of 15 deep learning models with a two-dimensional U-net architecture, a 4-level depth and 18 initial filters was trained to segment MBM. The muscular surface values were computed from the predicted masks and corrected with the algorithm's estimated bias. The resulting mask predictions and surface predictions were assessed using the Dice similarity coefficient (DSC) and root mean squared error (RMSE), respectively, using ground truth masks as the standard of reference. Results: A total of 1025 individual CT slices were used for training and validation, and 500 additional axial CT slices were used for testing. The mean DSC and RMSE obtained on the test set were 0.97 and 3.7 cm², respectively. Conclusion: Deep learning methods using a convolutional neural network algorithm enable robust and automated extraction of CT-derived MBM for sarcopenia assessment, which could be implemented in a clinical workflow.
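A small sketch, assumed rather than taken from the paper, of the two evaluation pieces described above: the muscular surface derived from a predicted L3 mask given the pixel spacing, and the Dice similarity coefficient against a ground-truth mask:

```python
# Sketch: muscle area (cm^2) from a 2D L3 segmentation mask, and Dice score.
import numpy as np

def muscle_area_cm2(mask: np.ndarray, pixel_spacing_mm=(0.78, 0.78)) -> float:
    """2D binary mask of the L3 slice; pixel spacing in mm (row, col)."""
    area_mm2 = mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return area_mm2 / 100.0                      # mm^2 -> cm^2

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)
```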


Detection and characterization of MRI breast lesions using deep learning

March 2019 · 157 Reads · 115 Citations

Diagnostic and Interventional Imaging

Purpose: The purpose of this study was to assess the potential of a deep learning model to discriminate between benign and malignant breast lesions using magnetic resonance imaging (MRI) and to characterize different histological subtypes of breast lesions. Materials and methods: We developed a deep learning model that simultaneously learns to detect lesions and characterize them. We created a lesion-characterization model based on a single two-dimensional T1-weighted fat-suppressed MR image obtained after intravenous administration of a gadolinium chelate, selected by radiologists. The data included 335 MR images from 335 patients, representing 17 different histological subtypes of breast lesions grouped into four categories (mammary gland, benign lesions, invasive ductal carcinoma and other malignant lesions). Algorithm performance was evaluated on an independent test set of 168 MR images using weighted sums of the area under the curve (AUC) scores. Results: We obtained a weighted average receiver operating characteristic (ROC) AUC of 0.817 on the training set, computed as the mean of three-shuffle, three-fold cross-validation. Our model reached a weighted mean AUC of 0.816 on the independent challenge test set. Conclusion: This study shows good performance of a supervised-attention deep learning model for breast MRI. This method should be validated on a larger and independent cohort.
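To make the reported metric concrete, a brief sketch of a weighted one-vs-rest ROC-AUC over the four lesion categories using scikit-learn; the challenge's exact weighting scheme may differ, and the data here are toy values:

```python
# Sketch: weighted one-vs-rest ROC-AUC across four lesion categories (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 3, 1, 0, 2, 3])                        # 4 lesion categories
y_prob = np.random.default_rng(0).dirichlet(np.ones(4), size=8)    # toy predicted probabilities

weighted_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")
print(f"Weighted one-vs-rest ROC-AUC: {weighted_auc:.3f}")
```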


Citations (8)


... This approach is characterized by the selection of predictive variables from data and can be used in similar questioning. Thus, our method extends beyond the specific context of the GLA:D program and could be applied to other exercise therapy programs targeting knee OA, and potentially, for a broader range of conditions [62,65-67]. ...

Reference:

Personalized Predictions for Changes in Knee Pain Among Patients With Osteoarthritis Participating in Supervised Exercise and Education: Prognostic Model Study
A deep learning method for predicting knee osteoarthritis radiographic progression from MRI

Arthritis Research & Therapy

... Figure 5. Result of adjusting the CT images of severe and non-severe cases. Source: prepared by the authors (2024). The dynamic image-adjustment neural network achieved better performance than the unadjusted images, demonstrating that specific window and level values can improve classification accuracy. The specificities obtained with both predictive models were low, with and without image adjustment, as shown in Table 3. This indicates poor performance in classifying non-severe cases and reinforces the challenge of predicting COVID-19 severity one month ahead, even when more data are used for this task (Lassau et al.; Purkayastha et al., 2021). Compared with the only published work predicting COVID-19 severity one month ahead that uses the same STOIC image database, our work obtained a best AUC of 0.647, which is close to the worst result of Aleem et al. (2023) but below the best results of that same work: an AUC of 0.863 when using data augmentation techniques, and an AUC of 0.787 without augmentation. Better results could be obtained by using an image database with a better balance between severe and non-severe cases, but this would require more hardware resources and longer model training time. Another limitation of this study was that we did not search for the best classification model for this task; given hardware constraints, we chose a more common classification model that could be applied as a component of the dynamic filter. However, in future work, as a continuation of this study, we will look for a better-performing classification model and compare it as part of the dynamic image-adjustment neural network. We observed that our proposed model consistently enhances the densities corresponding to the ground-glass opacity regions of the lung parenchyma, which result from the progression or severity of COVID-19. ...

Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients

... The concept of generative models emerged in the 1980s when researchers started investigating the use of neural networks for data modeling and generation. Geoffrey Hinton was a key figure during this period, introducing the Deep Belief Network (DBN) in 1986 [104,118,127,133,134]. The DBN utilized multilayer perceptrons as fundamental units, enabling the accomplishment of complex learning tasks through layer stacking [31]. ...

Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images

... All CT scans were performed preoperatively within a time interval of 3 months before surgery, with or without intravenous contrast administration. Body composition indices were measured on a single axial CT slice at the third lumbar vertebral level using a semiautomated deep-learning-based method with a U-Net architecture algorithm, as previously described [25,26]. These muscle segmentations were all reviewed and corrected by an experienced (14 years) musculoskeletal radiologist and included the psoas muscles, the paraspinal muscles, and the abdominal wall muscles ( Figure 1). ...

Abdominal musculature segmentation and surface prediction from CT using deep learning for sarcopenia assessment
  • Citing Article
  • May 2020

Diagnostic and Interventional Imaging

... Prior studies describe separate use of DL algorithms, volume of disease, and radiomics for diagnosis, disease severity, treatment response, outcome (death), oxygen supplement, intubation and ICU admission in patients with SARS-CoV-2 pneumonia [5,6,17-19]. Although the performance of our DL algorithm and radiomics approach is similar to prior reports, besides the influence of motion artifacts, we document both the comparative and additive value of DL-based and radiomics features in prediction of outcome and need for ICU admission. Compared with the previously reported subjective grading of disease extent in each lobe, a tedious and time-consuming process, we demonstrate that quantitative lung lobe-level information on the volume and percentage of affected lung is superior for assessing disease severity and predicting patient outcome. ...

AI-based multi-modal integration of clinical characteristics, lab tests and chest CTs improves COVID-19 outcome prediction of hospitalized patients

... Further validation is required for the identified pathways; however, these have diagnostic potential for HCC. Integration with a machine learning model may allow for genome-wide data interpretation, leading to more identified pathways linked to HCC (Schmauch et al. [80]: ResNet50, supervised attention mechanism). ...

Diagnosis of focal liver lesions from ultrasound using deep learning
  • Citing Article
  • March 2019

Diagnostic and Interventional Imaging

... The average area under the curve (AUC) calculated using three-fold cross-validation was 0.817 [78]. Antropova et al. were the first to propose a classification task for benign and malignant lesions in breast MRI using a pre-trained VGG19 network. In their study, ROIs surrounding each lesion on transverse slices of breast DCE-T1WI images at the pre-contrast (t0) and the first two post-contrast time points (t1, t2) were utilized. ...

Detection and characterization of MRI breast lesions using deep learning
  • Citing Article
  • March 2019

Diagnostic and Interventional Imaging

... Over the past couple of decades, tremendous research has been observed for the design and development of CAD tools for multimodal data to act as assistants to domain experts in reaching fast and concrete diagnoses. From old-school computer vision (CV) algorithms, to machine learning (ML), to the more recent DL architectures, great progress has been observed in this field, producing outstanding results [6,10,12-15]. Radiological imaging has become an inseparable part of the diagnosis process, with technologies like X-ray, ultrasound, MRI, CT, and PET scans playing a vital role in assisting experts in accurate diagnoses. ...

Brain age prediction of healthy subjects on anatomic MRI with deep learning: going beyond with an "explainable AI" mindset