Ken Enda’s research while affiliated with Hokkaido University and other places


Publications (7)


Patient flowchart. ALN axillary lymph node, CEUS contrast-enhanced ultrasonography, PST preoperative systemic therapy.
Overview of our bimodal model, consisting of a DL part and a LightGBM part. The DL part contained image preprocessing steps, in which the RGB channels were overlaid by putting the high-echo areas of the conventional US image in R and the contrast areas of the CEUS image in G. The image was then input into DL models to derive the image-based predictive covariate (iP: 0–1) for ALN metastasis. In the LightGBM part, the input data were iP and the imaging features that radiologists validated in conventional US and CEUS images of the ALN and primary lesions. The output was defined as the final predictive covariate (final P). The performance of the models was compared in two settings: Setting A used input data from conventional US images plus the imaging features; Setting B used input data from conventional US images combined with CEUS images plus the imaging features. DL deep learning, LightGBM light-gradient boosting machine, CEUS contrast-enhanced ultrasonography, US ultrasonography, ALN axillary lymph node.
Diagnostic performance of our proposed model and the radiologist’s diagnosis for predicting ALN metastasis in the test cohort. US ultrasonography, CEUS contrast-enhanced ultrasonography.
Representative images of the CEUS and conventional US images. (a,b) CEUS (a) and conventional US (b) images with ground truth positive; radiologist’s diagnosis positive; ML model positive. (c,d) CEUS (c) and conventional US (d) images with ground truth positive; radiologist’s diagnosis negative; ML model positive. (e,f) CEUS (e) and conventional US (f) images with ground truth negative; radiologist’s diagnosis positive; ML model negative. US ultrasonography, CEUS contrast-enhanced ultrasonography, ML machine learning.
Artificial intelligence can extract important features for diagnosing axillary lymph node metastasis in early breast cancer using contrast-enhanced ultrasonography
  • Article
  • Full-text available

February 2025 · 11 Reads

Tomohiro Oshino · Ken Enda · Hirokazu Shimizu · [...]
Contrast-enhanced ultrasound (CEUS) plays a pivotal role in the diagnosis of primary breast cancer and of axillary lymph node (ALN) metastasis. However, the imaging features that are clinically crucial for lymph node metastasis have not been fully elucidated. Hence, we developed a bimodal model to predict ALN metastasis in patients with early breast cancer by integrating CEUS images with the annotated imaging features. The model adopted a light-gradient boosting machine to produce feature importance, enabling the extraction of clinically crucial imaging features. In this retrospective study, the diagnostic performance of the model was investigated using 788 CEUS images of ALNs obtained from 788 patients who underwent breast surgery between 2013 and 2021, with the ground truth defined by the pathological diagnosis. The results indicated that the test cohort had an area under the receiver operating characteristic curve (AUC) of 0.93 (95% confidence interval: 0.88, 0.98). The model had an accuracy of 0.93, which was higher than the radiologist’s diagnosis (accuracy of 0.85). The most important imaging features were heterogeneous enhancement, diffuse cortical thickening, and eccentric cortical thickening. Our model has excellent diagnostic performance, and the extracted imaging features could be crucial for confirming ALN metastasis in clinical settings.
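The channel-overlay preprocessing described in the figure caption above, placing the conventional US image in the R channel and the CEUS image in the G channel, can be sketched as follows (a minimal illustration; it assumes the two images are co-registered grayscale arrays scaled to [0, 1], and the array names are hypothetical, not from the paper):

```python
import numpy as np

def overlay_rgb(us_img: np.ndarray, ceus_img: np.ndarray) -> np.ndarray:
    """Build a 3-channel input: conventional US (high-echo areas) in R,
    CEUS (contrast areas) in G, B left empty.
    Both inputs are HxW grayscale arrays in [0, 1]."""
    h, w = us_img.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    rgb[..., 0] = us_img    # R: conventional US
    rgb[..., 1] = ceus_img  # G: CEUS
    return rgb

# Synthetic stand-ins for the co-registered image pair.
us = np.random.rand(64, 64).astype(np.float32)
ceus = np.random.rand(64, 64).astype(np.float32)
x = overlay_rgb(us, ceus)  # shape (64, 64, 3), ready for a DL backbone
```

The resulting three-channel array can then be fed to any standard image model to produce the image-based covariate iP.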


Diagnosis on Ultrasound Images for Developmental Dysplasia of the Hip with a Deep Learning-Based Model Focusing on Signal Heterogeneity in the Bone Region

February 2025 · 21 Reads

Background: Developmental dysplasia of the hip (DDH) is a prevalent issue in infants, and ultrasound is crucial for early detection. Existing automatic diagnostic models lack precision due to noise, but 3D technology may enhance it. This study aimed to create and assess a deep-learning-based model for automated DDH diagnosis by applying 3D transformation technology to two-dimensional ultrasound images. Methods: A retrospective study of 417 infants at risk of DDH used ultrasound images, combining convolutional neural networks and image processing. The images were analyzed using algorithms such as HigherHRNet-W48. The approach included apex point estimation; signal heterogeneity analysis of the ilium, which focused on the high-intensity bony area and evaluated ilium rotation; alpha angle creation; and the establishment of a comprehensive method for DDH diagnosis. Results: Key findings include: (1) Superior accuracy in apex point estimation by the HigherHRNet-W48 model, better even than orthopedic residents. (2) Thorough quality assessment of the ultrasound images, dividing them into qualified and disqualified categories, with qualified images displaying notably lower error rates. (3) An AUC of 0.92 for DDH detection in the qualified images, exceeding the diagnostic accuracy of the residents and indicating the diagnostic capability of the tool. Conclusions: The study developed a deep-learning-based model for DDH detection in infants, melding 3D technology with deep learning to address challenges such as noise and rotation in ultrasound images. The model demonstrated accuracy comparable to specialized evaluations, even with non-specialist images, highlighting its potential to assist novice sonographers and enhance diagnostic precision.
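The alpha angle used in Graf-style DDH grading is the angle between the ilium baseline and the bony roof line through the estimated apex point. A geometry-only sketch of that computation (the keypoint coordinates below are invented for illustration, not taken from the paper):

```python
import numpy as np

def angle_between(p1, p2, q1, q2) -> float:
    """Angle in degrees between line p1->p2 and line q1->q2."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    w = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Ilium baseline vs. bony-roof line through the estimated apex point
# (hypothetical pixel coordinates).
alpha = angle_between((0, 0), (0, 100), (0, 60), (40, 100))  # 45.0 degrees
```

In the model described above, the line endpoints would come from the keypoints estimated by HigherHRNet-W48 rather than being given by hand.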


Fig. 1: Dataset Curation and Model Training Strategy a) Study workflow showing ROI selection from hematoxylin and eosin-stained whole slide images and patch extraction (left), and model training approaches (right): fine-tuning (FT) with full parameter updates and linear probing (LP) with frozen encoder weights. b) Distribution of patch counts in the Local dataset, with horizontal lines indicating the training patch count conditions (10, 25, 100, and 500 patches). c) Distribution of tumor classes (glioblastoma, astrocytoma, oligodendroglioma, primary CNS lymphoma, and metastatic tumors) shown in pie charts for Local dataset (n=252, upper) and EBRAINS dataset (n=698, lower).
Fig. 2: Model Performance Evaluation and Classification Analysis a) Box plots comparing model performances on the Local dataset validation sets with 500 patches/case. From left to right: macro recall and patch accuracy for both fine-grained and coarse-grained classification tasks. Each point represents a validation fold metric, with lines connecting points from the same fold. Red points indicate mean values across folds. b) Box plots showing model performances on the EBRAINS dataset, evaluated using models trained with 500 patches/case. Metrics displayed match panel (a). c) Confusion matrices for the Local dataset (left: UNI(LP), right: UNI(FT)), combining results from all validation folds. Classes are glioblastoma (G), astrocytoma (A), oligodendroglioma (O), metastatic tumors (M), and PCNSL (L). d) Confusion matrices for the EBRAINS dataset using ensemble predictions from all folds (left: UNI(LP), right: UNI(FT)). Ground truth columns for glioma classes are expanded into molecular subtypes -G class: GBM, AA, and DA, all IDH(-); A class: GBM, AA, and DA, all IDH(+); O class: AO and O (GBM: Glioblastoma, AA: Anaplastic astrocytoma, DA: Diffuse astrocytoma, AO: Anaplastic oligodendroglioma, O: Oligodendroglioma).
Transfer Learning Strategies for Pathological Foundation Models: A Systematic Evaluation in Brain Tumor Classification

January 2025 · 15 Reads

Foundation models pretrained on large-scale pathology datasets have shown promising results across various diagnostic tasks. Here, we present a systematic evaluation of transfer learning strategies for brain tumor classification using these models. We analyzed 252 cases comprising five major tumor types: glioblastoma, astrocytoma, oligodendroglioma, primary central nervous system lymphoma, and metastatic tumors. Comparing state-of-the-art foundation models with conventional approaches, we found that foundation models demonstrated robust classification performance with as few as 10 patches per case, challenging the traditional assumption that extensive per-case image sampling is necessary. Furthermore, our evaluation revealed that simple transfer learning strategies like linear probing were sufficient, while fine-tuning often degraded model performance. These findings suggest a paradigm shift from extensive data collection to efficient utilization of pretrained features, providing practical implications for implementing AI-assisted diagnosis in clinical pathology.
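Linear probing, the strategy found sufficient here, trains only a linear classifier on frozen encoder features. A minimal numpy sketch under stated assumptions: the embeddings and labels are synthetic, and a one-hot least-squares head stands in for the usual logistic head; the encoder itself is never updated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen foundation-model embeddings: n patches x d features (synthetic).
n, d, n_classes = 200, 32, 5
feats = rng.normal(size=(n, d))
labels = rng.integers(0, n_classes, size=n)

# Linear probe: fit only a linear head on the frozen features.
onehot = np.eye(n_classes)[labels]
W, *_ = np.linalg.lstsq(feats, onehot, rcond=None)
preds = (feats @ W).argmax(axis=1)
train_acc = (preds == labels).mean()
```

Fine-tuning, by contrast, would also update the encoder weights, which is exactly the step this evaluation found could degrade performance.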


Clinical Parameters. (a) Scheme of the radiographic parameters. (b) The parameters on a representative image. Unstable hips are typically left sided, exhibiting a high value of acetabular index, low value of O-edge angle, and low Yamamuro A.
Overview of end-to-end models. The settings were prepared as follows: Setting A as pure CNNs, Setting B as aaCM followed by models for tabular data, and Setting C as the bimodal model. In Setting A, unstable hips were predicted directly from the X-ray images. In Setting B, regions of interest (ROIs) were extracted from the X-ray images, and then four clinical parameters per hip (acetabular index, O-edge angle, Yamamuro A, Yamamuro B) were generated; finally, unstable hips were predicted using models for tabular data. In Setting C, the features obtained from Setting A were concatenated with the automatically produced clinical measurements, and the combined data were fed into a fully connected model for prediction. aaCM: automatic algorithms of clinical measurements; CNNs: convolutional neural networks on images.
Automatic algorithm of the clinical parameters. For each bROI extracted by YOLOv5, binarization was applied to transform the bone area into a blob, with the local threshold calculated at every individual point of the image. The blob was then detected by labeling processing, and its contour (colored in red) was described by a convex hull. Next, featured points based on the contour were detected as follows: P1: the bottommost point in the ROI containing the acetabulum; P2: the point lateral to P1 on a straight line; P3: the lateralmost point in ROIs with the ischium; P4: the upmost point in ROIs; P5: the point nearest to the middle point between the innermost points in ROIs with the femur. bROI: bone region of interest; YOLO: You Only Look Once.
Model performance. (a) Distribution of the accuracy, AUPRC, AUROC, and F1 score of the model. EfficientNetB4 models were trained during six-fold cross-validation per group. The 10th, 50th (median), and 90th quantiles, as well as minimum and maximum, are shown. A paired t-test with Bonferroni correction was used for multiple comparisons. *P < 0.05, **P < 0.01, ***P < 0.001 compared with setting C. (b) Precision–recall and receiver operating characteristic curves of settings A, B, and C. The mean of the six-fold cross-validation is shown. AUPRC: the area under the precision–recall curve; AUROC: the area under the receiver operating characteristic curve.
Feature importance and Grad-CAM. (a) SHAP values of each parameter in the images of infants with unstable hips. The color of each point represents the relative magnitude of the feature value, while the position indicates the SHAP value; higher SHAP values signify a greater positive impact on the outcome, and the top-ranked parameters have higher SHAP values. (b) Representative images of positive and negative cases in which the Grad-CAM heat map was integrated with hROIs. SHAP: SHapley Additive exPlanations; hROIs: hip regions of interest; Contr: contralateral.
Bimodal machine learning model for unstable hips in infants: integration of radiographic images with automatically-generated clinical measurements

August 2024 · 95 Reads · 2 Citations

Bimodal convolutional neural networks (CNNs) are frequently combined with patient information or several medical images to enhance diagnostic performance. However, technologies that integrate automatically generated clinical measurements within the images are scarce. Hence, we developed a bimodal model that produced an automatic algorithm for clinical measurement (aaCM) from radiographic images and integrated it with CNNs. In this multicenter research project, the diagnostic performance of the model was investigated with 813 radiographic hip images of infants at risk of developmental dysplasia of the hip (232 and 581 images of unstable and stable hips, respectively), with the ground truth defined by provocative examinations. The results indicated that the accuracy of aaCM was equal to or higher than that of specialists, and the bimodal model showed better diagnostic performance than LightGBM, XGBoost, SVM, and single CNN models. aaCM can provide expert-level knowledge, and our proposed bimodal model outperforms state-of-the-art models.
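The fusion step in Setting C, concatenating CNN image features with the automatically produced clinical measurements before a fully connected prediction layer, can be sketched as follows (the feature dimensions and the single sigmoid layer are illustrative assumptions; the paper's actual architecture may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

cnn_feats = rng.normal(size=(8, 128))  # image features from a CNN backbone
aacm = rng.normal(size=(8, 4))         # acetabular index, O-edge angle,
                                       # Yamamuro A, Yamamuro B (per hip)

# Concatenate image features with the aaCM measurements.
fused = np.concatenate([cnn_feats, aacm], axis=1)  # (8, 132)

# A single fully connected layer with sigmoid output for the
# unstable-hip score; weights here are random placeholders.
W = rng.normal(size=(132, 1)) * 0.01
b = np.zeros(1)
score = 1.0 / (1.0 + np.exp(-(fused @ W + b)))  # (8, 1), each in (0, 1)
```

In training, W and b would be learned jointly with (or on top of) the CNN, so the model can weigh the measurements against the raw image features.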


Figure 1 · Table 1 · Figure 3 · Table 4 · Mean absolute error of acetabular index
Explainable AI Models on Radiographic Images Integrated with Clinical Measurements: Prediction for Unstable Hips in Infants

December 2023 · 26 Reads

Explainability is crucial in medical artificial intelligence, yet technologies to quantify Grad-CAM heatmaps and perform automatic integration based on domain knowledge remain lacking. Hence, we created an end-to-end model that produced CAM scores on regions of interest (CSoR), a measure of relative CAM activity, and feature importance scores by automatic algorithms for clinical measurement (aaCM) followed by LightGBM. In this multicenter research project, the diagnostic performance of the model was investigated with 813 radiographic hip images in infants at risk of unstable hips, with the ground truth defined by provocative examinations. The results indicated that the accuracy of aaCM was higher than that of specialists, and the model with ad hoc adoption of aaCM outperformed the image-only-based model. Subgroup analyses in positive cases indicated significant differences in CSoR between the unstable and contralateral sides despite containing only binary labels (positive or negative). In conclusion, aaCM reinforces the performance, and CSoR potentially indicates model reliability.
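CSoR quantifies relative Grad-CAM activity inside a region of interest. A plain-numpy sketch of one plausible definition, mean heatmap activation inside the ROI divided by the mean over the whole image (the exact formula is not given in this listing, so this is an assumption):

```python
import numpy as np

def cam_score_on_roi(heatmap: np.ndarray, roi_mask: np.ndarray) -> float:
    """Relative CAM activity: mean activation inside the ROI divided by
    the mean activation over the whole heatmap."""
    return float(heatmap[roi_mask].mean() / heatmap.mean())

# Toy heatmap with all activation concentrated in an 8x8 ROI.
heat = np.zeros((32, 32))
heat[8:16, 8:16] = 1.0
mask = np.zeros((32, 32), dtype=bool)
mask[8:16, 8:16] = True
csor = cam_score_on_roi(heat, mask)  # 1.0 / (64/1024) = 16.0
```

A CSoR well above 1 means the model's attention is concentrated in the ROI, which is the kind of signal the subgroup analysis compares between the unstable and contralateral sides.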


Fatal case of subdural empyema caused by Campylobacter rectus and Slackia exigua

May 2023 · 16 Reads · 3 Citations

Autopsy and Case Reports

We report a fatal subdural empyema caused by Campylobacter rectus in a 66-year-old female who developed acute-onset confusion, dysarthria, and paresis of her left extremities. A CT scan showed a crescentic hypodensity with a mild midline shift. She had a bruise on her forehead from a fall several days before admission, which initially suggested a diagnosis of subdural hematoma (SDH), and a burr hole procedure was planned. However, her condition deteriorated on the night of admission, and she died before dawn. An autopsy revealed subdural empyema (SDE) caused by Campylobacter rectus and Slackia exigua. Both are oral microorganisms that rarely cause extra-oral infection. In our case, head trauma caused a skull bone fracture, and a sinus infection might have expanded into the subdural space, causing SDE. The CT/MRI findings were not typical for either SDH or SDE. Early recognition of subdural empyema and prompt initiation of treatment with antibiotics and surgical drainage are essential in cases of SDE. We present our case and a review of four reported cases. Keywords: Campylobacter rectus; Empyema, Subdural; Hematoma, Subdural; Sinusitis


Machine Learning Algorithms: Prediction and Feature Selection for Clinical Refracture after Surgically Treated Fragility Fracture

April 2022 · 86 Reads · 10 Citations

Background: The number of patients with fragility fracture has been increasing. Although this increase has raised the rate of secondary fracture (refracture), the causes of refracture are multifactorial, and its predictors are still not clarified. In this study, we collected a registry-based longitudinal dataset containing more than 7000 patients with fragility fractures treated surgically to detect potential predictors of clinical refracture. Methods: Because machine learning algorithms are often used to analyze large-scale datasets, we developed automatic prediction models and clarified the relevant features for patients with clinical refracture. The input data, containing perioperative clinical information, were in tabular format. Clinical refracture was documented as the primary outcome if a diagnosis of fracture was made at postoperative outpatient care. A decision-tree-based model, LightGBM, had moderate accuracy for prediction in the test and independent datasets, whereas the other models had poor accuracy or worse. Results: From a clinical perspective, rheumatoid arthritis (RA) and chronic kidney disease (CKD) were noted as relevant features for patients with clinical refracture, both of which are associated with secondary osteoporosis. Conclusion: The decision-tree-based algorithm precisely predicted clinical refracture, with RA and CKD detected as potential predictors. Understanding these predictors may improve the management of patients with fragility fractures.
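The workflow described, a gradient-boosted tree model on tabular perioperative data followed by feature-importance ranking, can be sketched with scikit-learn's GradientBoostingClassifier standing in for LightGBM (the data is synthetic and the feature names, including "ra" and "ckd", are illustrative, deliberately made informative so the ranking has something to find):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 300

# Synthetic tabular features; columns 2 ("ra") and 3 ("ckd") drive the label.
X = rng.normal(size=(n, 5))
names = ["age", "bmi", "ra", "ckd", "bmd"]
y = ((X[:, 2] + X[:, 3] + 0.3 * rng.normal(size=n)) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank features by the model's impurity-based importance.
ranked = sorted(zip(names, model.feature_importances_),
                key=lambda t: t[1], reverse=True)
```

On this synthetic data the top-ranked features are the two that generate the label, mirroring how RA and CKD surfaced as the relevant predictors in the study.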

Citations (3)


... Previous studies have stated that tree-based models outperform deep learning (DL) models on tabular data 24 , because imaging features are regarded as tabular data. A light-gradient boosting machine (LightGBM) with feature selection, a representative tree-based model, has been widely applied for evaluating tabular datasets and extracting feature importance for each parameter [25][26][27][28] . Attempts have also been made to predict ALN metastasis in breast cancer based on conventional US images of primary breast cancer tumour 29 . ...

Reference:

Artificial intelligence can extract important features for diagnosing axillary lymph node metastasis in early breast cancer using contrast-enhanced ultrasonography
Bimodal machine learning model for unstable hips in infants: integration of radiographic images with automatically-generated clinical measurements

... It is known to cause polymicrobial infections with other obligate anaerobes. [10,16] The most recently published case report of subdural empyema with S. exigua and Campylobacter rectus, a microaerophilic pathogen, was found during autopsy [18]. ...

Fatal case of subdural empyema caused by Campylobacter rectus and Slackia exigua

Autopsy and Case Reports

... Previous studies have stated that tree-based models outperform deep learning (DL) models on tabular data 24 , because imaging features are regarded as tabular data. A light-gradient boosting machine (LightGBM) with feature selection, a representative tree-based model, has been widely applied for evaluating tabular datasets and extracting feature importance for each parameter [25][26][27][28] . Attempts have also been made to predict ALN metastasis in breast cancer based on conventional US images of primary breast cancer tumour 29 . ...

Machine Learning Algorithms: Prediction and Feature Selection for Clinical Refracture after Surgically Treated Fragility Fracture