Shu Liao’s research while affiliated with United Imaging and other places


Publications (20)


Figure captions from this article:
  • Flowchart of the data selection and exclusion criteria, and the separation of the training and testing sets.
  • Network structure of the deep learning–based fast MR reconstruction framework used in this study.
  • (a) Deep learning–based reconstruction of a T1‐FLAIR image at a 4.5× acceleration factor, (b) the corresponding fully sampled T1‐FLAIR image, (c) deep learning–based reconstruction of a T2‐FLAIR image at a 4.5× acceleration factor, and (d) the corresponding fully sampled T2‐FLAIR image. Both radiologists made the same diagnosis (normal subject) for the deep learning–reconstructed and fully sampled T1‐FLAIR images, and the deep learning–reconstructed and fully sampled T2‐FLAIR images show a bilateral basal ganglia abnormality.
  • Typical example of (a) zero‐filled reconstruction, (b) parallel imaging reconstruction, and (c) deep learning–based reconstruction of a T1‐FLAIR image at a 4.5× acceleration factor; (d) the fully sampled reconstruction. Note that Gibbs artifacts are suppressed in the deep learning–based reconstruction compared to the fully sampled image; significant differences are highlighted by the green circle.
  • (a) Deep learning–based reconstruction of a T2‐FLAIR image at a 4.5× acceleration factor; (b) VARNET-reconstructed T2‐FLAIR image, showing areas of apparent signal enhancement in the cerebral cortex (thin arrows), while artifact correction remains suboptimal (thick arrows); (c) CycleGAN‐based network reconstruction, exhibiting distortion in both the signal and extent of the lesion; (d) deep learning–based reconstruction of a T1‐FLAIR image at a 4.5× acceleration factor; (e) VARNET-reconstructed T1‐FLAIR image; (f) CycleGAN‐based network reconstruction; artifact correction in the last two images is suboptimal, and their overall sharpness remains inadequate.
Accelerating Brain MR Imaging With Multisequence and Convolutional Neural Networks
  • Article
  • Full-text available

November 2024 · 5 Reads

Zhanhao Mo · He Sui · Zhongwen Lv · [...] · Shu Liao

Purpose: Magnetic resonance imaging (MRI) is one of the critical imaging modalities for diagnosis, but its long acquisition time limits its application. The aim of this study was to investigate whether deep learning–based techniques can exploit the information shared across different MRI sequences to reduce the scan time of the most time‐consuming sequences while maintaining image quality.

Method: Fully sampled T1‐FLAIR, T2‐FLAIR, and T2WI brain MRI raw data were acquired from 217 patients and 105 healthy subjects. The T1‐FLAIR and T2‐FLAIR sequences were subsampled using Cartesian masks at four different acceleration factors. The fully sampled T1/T2‐FLAIR images were predicted from the undersampled T1/T2‐FLAIR images and the T2WI images through deep learning–based reconstruction. The results were qualitatively assessed by two senior radiologists according to the diagnostic decision and a four‐point image quality score, and quantitatively assessed using regional signal‐to‐noise ratios (SNRs) and contrast‐to‐noise ratios (CNRs). The chi‐square test was performed, with p < 0.05 indicating a statistically significant difference.

Results: The diagnostic decisions of the two senior radiologists remained unchanged between the accelerated and fully sampled images. There were no significant differences in the regional SNRs and CNRs of most assessed regions (p > 0.05) between the accelerated and fully sampled images, and no significant difference was identified in the image quality assessed by the two senior radiologists (p > 0.05).

Conclusion: Deep learning–based image reconstruction can significantly expedite brain MR imaging and produce acceptable image quality without affecting diagnostic decisions.
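The Method above describes retrospective Cartesian subsampling of the T1/T2‐FLAIR k-space at several acceleration factors. Below is a minimal sketch of how such a Cartesian mask might be generated; the function name, the fully sampled central fraction, and the random selection of phase-encode lines are illustrative assumptions rather than the sampling scheme used in the study.

```python
import numpy as np

def cartesian_mask(shape, acceleration=4.5, center_fraction=0.08, seed=0):
    """Illustrative 1D-Cartesian undersampling mask over phase-encode lines.

    shape: (n_phase_encodes, n_readout) of the k-space to be masked.
    acceleration: target acceleration factor (e.g., 4.5x).
    center_fraction: fraction of central k-space lines always kept.
    """
    rng = np.random.default_rng(seed)
    n_pe, n_ro = shape
    n_center = int(round(n_pe * center_fraction))
    n_total = int(round(n_pe / acceleration))      # lines to keep overall
    n_random = max(n_total - n_center, 0)

    keep = np.zeros(n_pe, dtype=bool)
    c0 = (n_pe - n_center) // 2
    keep[c0:c0 + n_center] = True                  # fully sampled center
    candidates = np.flatnonzero(~keep)
    keep[rng.choice(candidates, size=n_random, replace=False)] = True

    # Broadcast the 1D line selection across the readout direction.
    return np.repeat(keep[:, None], n_ro, axis=1)

if __name__ == "__main__":
    mask = cartesian_mask((256, 256), acceleration=4.5)
    print("effective acceleration: %.2f" % (mask.size / mask.sum()))
```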


Figure 1. Schematics of general hybrid deep-learning and iterative reconstruction (hybrid DL-IR)
Figure 3. ACS reconstruction performance under different acceleration factors in the external validation dataset. (A) Representative T1w FLAIR and T2w FLAIR head images reconstructed from fully sampled k-space data from the external validation dataset and from downsampled k-space data with acceleration factors of 2, 3, and 4 using ACS and PI methods. (B) NRMSE of ACS- and PI-reconstructed images under different acceleration factors for the external validation dataset. Statistical analyses are performed using paired t tests (n = 78), ***p < 0.001. Significant differences are observed at all acceleration factors. See also Table S4.
Figure 4. ACS reconstruction for 100-s-level MRI scans and single-breath-hold MRI scans (A and B) Representative images of the head (A) and knee (B) reconstructed by ACS (at a 100-s level) and PI using four pulse sequences. (C and D) Reconstruction of the chest MR images by ACS with data acquired in a single breath hold and by PI with data acquired in three breath holds at the transversal (C) and sagittal (D) sections. The red circle in (C) labels a focal lesion, which is missed in the three-breath-hold acquisition reconstructed by PI while being successfully captured in the single-breath-hold acquisition reconstructed by ACS. The reference in (D) is acquired with a spoiled gradient echo sequence in a single breath hold. Red circles highlight the focal lesions in the liver. See also Figure S3.
Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction

July 2023 · 202 Reads · 11 Citations

Cell Reports Medicine

Fast and low-dose reconstructions of medical images are highly desired in clinical routines. We propose a hybrid deep-learning and iterative reconstruction (hybrid DL-IR) framework and apply it for fast magnetic resonance imaging (MRI), fast positron emission tomography (PET), and low-dose computed tomography (CT) image generation tasks. First, in a retrospective MRI study (6,066 cases), we demonstrate its capability of handling 3- to 10-fold under-sampled MR data, enabling organ-level coverage with only 10- to 100-s scan time; second, a low-dose CT study (142 cases) shows that our framework can successfully alleviate the noise and streak artifacts in scans performed with only 10% radiation dose (0.61 mGy); and last, a fast whole-body PET study (131 cases) allows us to faithfully reconstruct tumor-induced lesions, including small ones (<4 mm), from 2- to 4-fold-accelerated PET acquisition (30-60 s/bp). This study offers a promising avenue for accurate and high-quality image reconstruction with broad clinical value.
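The hybrid DL-IR idea, alternating a learned regularization step with a physics-based data-fidelity step, can be illustrated with a short unrolled-iteration sketch. The tiny denoiser, the single-coil setting, and the number of iterations below are placeholders and do not reproduce the framework reported in the paper.

```python
import torch
import torch.nn as nn
import torch.fft as fft

class TinyDenoiser(nn.Module):
    """Stand-in CNN regularizer; the real framework uses a far larger network."""
    def __init__(self, ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, x):
        return x + self.net(x)  # residual refinement

def data_consistency(image_c, kspace, mask):
    """Replace predicted k-space samples with measured ones where sampled."""
    k_pred = fft.fft2(image_c, norm="ortho")
    k_dc = torch.where(mask, kspace, k_pred)
    return fft.ifft2(k_dc, norm="ortho")

def hybrid_dl_ir(kspace, mask, denoiser, n_iter=5):
    """Alternate learned denoising and data consistency (single-coil sketch)."""
    image = fft.ifft2(torch.where(mask, kspace, torch.zeros_like(kspace)),
                      norm="ortho")
    for _ in range(n_iter):
        x = torch.stack([image.real, image.imag], dim=1)   # B x 2 x H x W
        x = denoiser(x)
        image = torch.complex(x[:, 0], x[:, 1])
        image = data_consistency(image, kspace, mask)
    return image.abs()
```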


Nomograms predict prognosis and hospitalization time using non-contrast CT and CT perfusion in patients with ischemic stroke

July 2022 · 34 Reads · 4 Citations

Background: Stroke is a major disease with high morbidity and mortality worldwide. Currently, there is no quantitative method to evaluate the short-term prognosis and length of hospitalization of patients.

Purpose: We aimed to develop nomograms as prognostic predictors, based on imaging characteristics from non-contrast computed tomography (NCCT) and CT perfusion (CTP) together with clinical characteristics, for predicting activities of daily living (ADL) and hospitalization time of patients with ischemic stroke.

Materials and methods: A total of 476 patients were enrolled in the study and divided into a training set (n = 381) and a testing set (n = 95), each with NCCT and CTP images. We extracted imaging features in the form of Alberta Stroke Program Early CT Score (ASPECTS) values from NCCT and ischemic lesion volumes from the CBF and TMAX maps of CTP. Based on the imaging features and clinical characteristics, we addressed two main tasks: (1) predicting prognosis according to the Barthel index (BI), where binary logistic regression analysis was employed for feature selection and the resulting nomogram was assessed in terms of discrimination capability, calibration, and clinical utility; and (2) predicting the hospitalization time of patients, where the Cox proportional hazards model was used. After feature selection, a second nomogram was established and evaluated with calibration curves and time-dependent ROC curves.

Results: In the task of predicting the binary prognosis outcome, a nomogram was constructed with an area under the curve (AUC) of 0.883 (95% CI: 0.781–0.985), an accuracy of 0.853, and an F1-score of 0.909 in the testing set. We further predicted discharge BI in four classes, achieving similar performance with an AUC of 0.890 in the testing set. For predicting hospitalization time, the Cox proportional hazards model achieved a concordance index of 0.700 (SE = 0.019), and the AUCs for predicting discharge at a specific week were higher than 0.80, demonstrating the superior performance of the model.

Conclusion: The novel non-invasive NCCT- and CTP-based nomograms could predict short-term ADL and hospitalization time of patients with ischemic stroke, allowing personalized clinical outcome prediction and showing great potential for improving clinical efficiency.

Summary: Combining NCCT- and CTP-based nomograms can accurately predict short-term outcomes of patients with ischemic stroke, including their discharge BI and length of hospital stay.

Key results: Using a large dataset of 1,310 patients, we show a novel nomogram with good performance in predicting the discharge BI class of patients (AUCs > 0.850). The second nomogram shows excellent ability to predict the length of hospital stay (AUCs > 0.800).
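The two prediction tasks described above, binary logistic regression for the discharge Barthel index and a Cox proportional hazards model for length of stay, can be sketched with standard Python libraries. The feature table below is entirely synthetic and its column names are hypothetical; only the modeling steps mirror the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from lifelines import CoxPHFitter

# Hypothetical feature table: ASPECTS from NCCT, lesion volumes from the CTP
# CBF / TMAX maps, plus a clinical covariate.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "aspects":      rng.integers(4, 11, 300),
    "cbf_volume":   rng.gamma(2.0, 15.0, 300),
    "tmax_volume":  rng.gamma(2.0, 25.0, 300),
    "age":          rng.integers(40, 90, 300),
    "good_outcome": rng.integers(0, 2, 300),       # binary discharge BI label
    "los_days":     rng.integers(3, 40, 300),      # length of stay
    "discharged":   np.ones(300, dtype=int),       # event indicator
})

# Task 1: binary prognosis (discharge Barthel index) via logistic regression.
X = df[["aspects", "cbf_volume", "tmax_volume", "age"]]
y = df["good_outcome"]
logit = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, logit.predict_proba(X)[:, 1]))

# Task 2: hospitalization time via a Cox proportional hazards model.
cph = CoxPHFitter()
cph.fit(df[["aspects", "cbf_volume", "tmax_volume", "age",
            "los_days", "discharged"]],
        duration_col="los_days", event_col="discharged")
print("concordance index:", cph.concordance_index_)
```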


Multi-Modal MRI Reconstruction Assisted with Spatial Alignment Network

April 2022 · 66 Reads · 33 Citations

IEEE Transactions on Medical Imaging

In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in the k-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in the k-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork.
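A compact sketch of the forward pass described here, an alignment network that predicts a displacement field, warps the fully sampled reference, and feeds the aligned reference together with the under-sampled target into the reconstruction network, might look as follows. The module interfaces are assumptions for illustration; the linked repository is the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def warp(reference, flow):
    """Warp `reference` (B x 1 x H x W) with a dense displacement field
    `flow` (B x 2 x H x W, in pixels) via bilinear grid sampling."""
    b, _, h, w = reference.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=reference.device),
                            torch.arange(w, device=reference.device),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float()          # H x W x 2 (x, y)
    grid = base + flow.permute(0, 2, 3, 1)                # B x H x W x 2
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0               # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(reference, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def multimodal_recon(target_zf, reference, align_net, recon_net):
    """Estimate misalignment, warp the fully sampled reference, and
    reconstruct from the concatenated pair (single-channel sketch)."""
    flow = align_net(torch.cat([target_zf, reference], dim=1))  # B x 2 x H x W
    reference_aligned = warp(reference, flow)
    return recon_net(torch.cat([target_zf, reference_aligned], dim=1))
```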


Deep Learning in Prostate Cancer Diagnosis Using Multiparametric Magnetic Resonance Imaging With Whole-Mount Histopathology Referenced Delineations

January 2022 · 113 Reads · 19 Citations

Background: Multiparametric magnetic resonance imaging (mpMRI) plays an important role in the diagnosis of prostate cancer (PCa) in the current clinical setting. However, the performance of mpMRI interpretation usually varies with the experience of radiologists at different levels; thus, the demand for MRI interpretation warrants further analysis. In this study, we developed a deep learning (DL) model to improve PCa diagnostic ability using mpMRI and whole-mount histopathology data.

Methods: A total of 739 patients, including 466 with PCa and 273 without PCa, were enrolled from January 2017 to December 2019. The mpMRI data (T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient sequences) were randomly divided into training (n = 659) and validation (n = 80) datasets. Using whole-mount histopathology as the reference, a DL model comprising independent segmentation and classification networks was developed to extract the gland and PCa areas for PCa diagnosis. The area under the curve (AUC) was used to evaluate the performance of the prostate classification network. The proposed DL model was subsequently applied in clinical practice (independent test dataset; n = 200), and the PCa detection/diagnostic performance of the DL model and of radiologists at different levels was evaluated based on sensitivity, specificity, precision, and accuracy.

Results: The AUC of the prostate classification network was 0.871 in the validation dataset and 0.797 for the DL model in the test dataset. The sensitivity, specificity, precision, and accuracy of the DL model for diagnosing PCa in the test dataset were 0.710, 0.690, 0.696, and 0.700, respectively. For the junior radiologist without and with DL model assistance, these values were 0.590, 0.700, 0.663, and 0.645 versus 0.790, 0.720, 0.738, and 0.755, respectively. For the senior radiologist, the values were 0.690, 0.770, 0.750, and 0.730 versus 0.810, 0.840, 0.835, and 0.825, respectively. The diagnostic performance of radiologists with DL model assistance was significantly higher than without assistance (P < 0.05).

Conclusion: The diagnostic performance of the DL model is higher than that of junior radiologists, and the model can improve PCa diagnostic accuracy for both junior and senior radiologists.
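The sensitivity, specificity, precision, and accuracy used to compare the DL model with the radiologists all follow from a 2x2 confusion matrix; a brief sketch with made-up labels is shown below.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical per-patient labels (1 = PCa, 0 = no PCa) and predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the cancer class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, precision, accuracy)
```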


Review and Prospect: Artificial Intelligence in Advanced Medical Imaging

December 2021 · 1,189 Reads · 72 Citations

Frontiers in Radiology

Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized with respect to their methodological designs and their performance in handling volumetric imaging data. We expect that this review will help relevant researchers understand how to adapt AI for medical imaging and which advantages can be achieved with its assistance.


A deep learning method for eliminating head motion artifacts in computed tomography

December 2021 · 82 Reads · 15 Citations

Purpose: Involuntary patient movement causes data discontinuities during computed tomography (CT) scans, which lead to serious degradation of image quality. In this paper, we specifically address artifacts induced by patient motion during head scans.

Method: Instead of solving an inverse problem, we developed a motion simulation algorithm to synthesize images with motion-induced artifacts; artifacts induced by rotation, translation, oscillation, and any possible combination thereof are considered. Taking advantage of the powerful learning ability of neural networks, we designed a novel 3D network structure with both a large receptive field and high image resolution to map artifact-contaminated images to artifact-free images. Quantitative results of the proposed method were evaluated against U-Net and against the proposed network without the dilation structure. Thirty sets of motion-contaminated images from two hospitals were selected for clinical evaluation.

Results: With a training dataset containing artifacts induced by variable motion patterns, the network removes the artifacts with good performance. On a validation dataset with simulated random motion patterns, the proposed network achieved the lowest normalized root-mean-square error and the highest peak signal-to-noise ratio and structural similarity, indicating that it gave the best approximation of the gold standard. Clinical image processing results further confirmed the effectiveness of our method.

Conclusion: We proposed a novel deep learning-based algorithm to eliminate motion artifacts. The convolutional neural network trained with synthesized image pairs achieved promising results in artifact reduction, and the corrected images increased diagnostic confidence compared with artifact-contaminated images. We believe that the correction method can restore the ability to make a successful diagnosis and avoid repeated CT scans in certain clinical circumstances.
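The quantitative evaluation relies on normalized root-mean-square error, peak signal-to-noise ratio, and structural similarity. A hedged sketch of how these metrics are commonly computed is given below, here with scikit-image, which is not necessarily the toolkit used in the paper.

```python
import numpy as np
from skimage.metrics import (normalized_root_mse,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                    # stand-in artifact-free image
corrected = reference + 0.01 * rng.standard_normal((256, 256))

print("NRMSE:", normalized_root_mse(reference, corrected))
print("PSNR :", peak_signal_noise_ratio(reference, corrected, data_range=1.0))
print("SSIM :", structural_similarity(reference, corrected, data_range=1.0))
```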


Morphology-Guided Prostate MRI Segmentation with Multi-slice Association

September 2021 · 31 Reads · 3 Citations

Lecture Notes in Computer Science

Prostate segmentation from magnetic resonance (MR) images plays an important role in prostate cancer diagnosis and treatment. Previous works typically overlooked the large variations of prostate shapes, especially in the boundary area; the small glandular areas at the ending slices also make the task very challenging. To overcome these problems, this paper presents a two-stage framework that explicitly utilizes prostate morphological representations (e.g., points, boundaries) to accurately localize the prostate region with a coarse volumetric segmentation. Based on the 3D coarse outputs of the first stage, a 2D segmentation network with multi-slice association is further introduced to produce more reliable and accurate segmentation, given the large slice thickness of prostate MR images. In addition, several novel loss functions are designed to enhance the consistency of prostate boundaries. Extensive experiments on a large prostate MRI dataset show the superior performance of our proposed method compared to several state-of-the-art methods.
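The two-stage idea, a coarse 3D segmentation to localize the prostate followed by slice-wise refinement that also sees neighboring slices, can be sketched roughly as below. The cropping margin and the stacking of neighboring slices as channels are assumptions made for illustration, not the paper's architecture.

```python
import numpy as np

def crop_to_coarse_mask(volume, coarse_mask, margin=8):
    """Crop the MR volume to the bounding box of the coarse 3D segmentation."""
    zs, ys, xs = np.nonzero(coarse_mask)
    z0, z1 = zs.min(), zs.max() + 1
    y0, y1 = max(ys.min() - margin, 0), ys.max() + margin
    x0, x1 = max(xs.min() - margin, 0), xs.max() + margin
    return volume[z0:z1, y0:y1, x0:x1]

def multi_slice_batches(cropped, context=1):
    """Yield (2*context+1)-channel stacks so a 2D network sees neighboring
    slices, one simple way to realize multi-slice association."""
    padded = np.pad(cropped, ((context, context), (0, 0), (0, 0)), mode="edge")
    for z in range(cropped.shape[0]):
        yield padded[z:z + 2 * context + 1]       # channels x H x W
```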


Multi-Modal MRI Reconstruction with Spatial Alignment Network

August 2021 · 71 Reads

In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in the k-space. Recent research demonstrates that, considering the redundancy between different contrasts or modalities, a target MRI modality under-sampled in the k-space can be better reconstructed with the help of a fully-sampled sequence (i.e., the reference modality). This implies that, in the same study of the same subject, multiple sequences can be utilized together for highly efficient multi-modal reconstruction. However, we find that multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different sequences, which is actually common in clinical practice. In this paper, we integrate a spatial alignment network with reconstruction to improve the quality of the reconstructed target modality. Specifically, the spatial alignment network estimates the spatial misalignment between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the under-sampled target image in the reconstruction network to produce the high-quality target image. Considering the contrast difference between the target and the reference, we particularly design a cross-modality-synthesis-based registration loss, in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. Our experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of our spatial alignment network. Code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork.


Data-Consistency in Latent Space and Online Update Strategy to Guide GAN for Fast MRI Reconstruction

October 2020 · 35 Reads · 5 Citations

Lecture Notes in Computer Science

Magnetic Resonance Imaging (MRI) is one of the most commonly used modalities in medical imaging, and one of its main limitations is slow scanning and reconstruction speed. Recently, deep learning-based compressed sensing (CS) methods have been proposed to accelerate acquisition by undersampling in k-space and reconstructing images with neural networks. However, several challenges remain. First, directly training networks with an L1/L2 distance to the target fully sampled images may lead to blurry reconstructions, because the L1/L2 loss only enforces overall image or patch similarity and does not consider local details such as anatomical sharpness. Second, Generative Adversarial Networks (GANs) can partially solve this problem: the undersampled image is mapped to a latent space by an encoder and the image is reconstructed by a decoder under a GAN loss, but unrealistic details may be generated because constraints in the k-space domain are lacking. Third, most networks are fixed after training and have limited adaptation capability at inference time, so patient-specific information cannot be used effectively. To address these challenges, we propose a new compressed sensing GAN reconstruction method with two main contributions: (1) an encoder-decoder structure in which data consistency in the latent space guides the GAN optimization, improving reconstruction quality (e.g., preserving more local details and improving anatomical sharpness) while constraining the GAN to follow the data distribution in k-space and thus preventing unrealistic details; and (2) an online update strategy that finds the best latent-space representation for the underlying patient, so the reconstruction result can be further improved by incorporating patient-specific information. Extensive experimental results show the effectiveness of our method.
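The k-space constraint mentioned in contribution (1) is commonly enforced with a data-consistency step that overwrites the predicted k-space at the sampled locations with the measured samples. A minimal single-coil sketch of that step is given below; the latent-space guidance and the online update strategy themselves are not reproduced here.

```python
import numpy as np

def kspace_data_consistency(recon, measured_kspace, mask):
    """Project a reconstructed image back onto the measured k-space samples.

    recon:            complex image estimate produced by the GAN decoder
    measured_kspace:  acquired (undersampled) k-space data
    mask:             boolean sampling mask (True where k-space was acquired)
    """
    k_pred = np.fft.fft2(recon, norm="ortho")
    k_dc = np.where(mask, measured_kspace, k_pred)   # keep measured samples
    return np.fft.ifft2(k_dc, norm="ortho")
```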


Citations (15)


... The images were reconstructed using hybrid deep learning and iterative reconstruction. 27 (4), all filters on the scan user interface that could affect the PSF during image reconstruction were turned off, including image filter, raw filter, and elliptical filter. The background phase on the phase images had been removed during SWI postprocessing by vendor software on the scanner. ...

Reference:

Automatic segmentation and diameter measurement of deep medullary veins
Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction

Cell Reports Medicine

... An independent samples t-test was conducted to compare the delta (∆) mBI between males and females. The ∆mBI at discharge was coded as a binary variable, with 1 indicating improvement and 0 indicating maintenance or worsening [57,58]. The formula used was ∆mBI = discharge mBI - admission mBI; ∆mBI > 0 was defined as improved and ≤ 0 was defined as unimproved [59]. ...

Nomograms predict prognosis and hospitalization time using non-contrast CT and CT perfusion in patients with ischemic stroke

... Although theoretically sound, crafting an optimal regularizer remains a difficult task. Recent deep learning-based approaches [5], [6], [19], [37]- [39], [46]- [48] have shown promising results, enabling fast and accurate reconstructions. However, these methods are often "black-boxes," lacking interpretability and physical insight, which limits their clinical applications. ...

Multi-Modal MRI Reconstruction Assisted with Spatial Alignment Network
  • Citing Article
  • April 2022

IEEE Transactions on Medical Imaging

... 18 However, mpMRI has limitations, including the risk of misinterpreting low-signal intensity areas, mistakenly associating rim enhancement with restricted diffusion and malignancy and improperly applying PI-RADS scoring, which can vary depending on the radiologist's experience. [19][20][21][22] Radiomics based on mpMRI provide quantitative features, including morphological, statistical and textural characteristics, which offer insights beyond human visual perception. These features are crucial for understanding tumour phenotypes and heterogeneity, thereby delivering valuable diagnostic and prognostic information. ...

Deep Learning in Prostate Cancer Diagnosis Using Multiparametric Magnetic Resonance Imaging With Whole-Mount Histopathology Referenced Delineations

... Deep learning reconstruction (DLR) for MRI reconstruction aims to generate high-quality images from sampled k-space data. DL-based image reconstruction for MRI can automatically and fully exploit the available data information and recover the lost information when guided by certain prior knowledge [91][92][93][94][95][96][97][98]. Existing DLR methods can be classified into two major categories, model-based and data-driven methods. ...

Review and Prospect: Artificial Intelligence in Advanced Medical Imaging

Frontiers in Radiology

... In this case, a compressed perception algorithm or an iterative reconstruction algorithm can be used to eliminate the artifacts. Among them, iterative reconstruction algorithms can compensate for the sensitivity of inverse-projection algorithms to noise and incomplete data [48,49]. However, iterative reconstruction algorithms are more complex and require more computational resources and time. ...

A deep learning method for eliminating head motion artifacts in computed tomography

... Besides multi-modal fusion, semantically-consistent segmentation of tumors across thick slices is also critical for high-quality tumor segmentation. Several studies (Li et al. 2021a;Chen et al. 2020) have developed 3D tumor segmentation methods by modifying existing methods originally proposed for organ segmentation. However, existing 3D segmentation networks are typically designed for nearlyisotropic 3D images, and their application to clinical prostate MR images with thick slices usually results in limited performance (Zhang et al. 2020a) as shown in Fig. 1(b). ...

Morphology-Guided Prostate MRI Segmentation with Multi-slice Association
  • Citing Chapter
  • September 2021

Lecture Notes in Computer Science

... The fastMRI Dataset is the largest public MRI dataset with raw k-space data. Following [53], 227 and 45 pairs of single-coil PDWI and FS-PDWI knee volumes are selected for training and testing, respectively, resulting in a total of 8,332 pairs of 2D images for training and 1,665 images for testing. The 2D image size is 320×320. ...

Learning MRI k-Space Subsampling Pattern Using Progressive Weight Pruning
  • Citing Chapter
  • September 2020

Lecture Notes in Computer Science

... There has also been intense research in selecting a loss function different from the squared L2-loss [81,10], and, specifically, GAN-based loss functions have been applied to image post-processing in CT [156,163] and image reconstruction in MRI (Fourier inversion) [158,107,36,159,75]. However, in these papers, the authors discard providing any randomness to the generator, instead only giving it the prior. ...

Data-Consistency in Latent Space and Online Update Strategy to Guide GAN for Fast MRI Reconstruction
  • Citing Chapter
  • October 2020

Lecture Notes in Computer Science

... However, the enlarged receptive field of UNet with pooling layers can lose the details of an image presented at a high frequency [48,49]. Therefore, we applied a cycleGAN-based mask extractor [50][51][52] to the VMAT plan's recorded videos. ...

Unpaired Mr to CT Synthesis with Explicit Structural Constrained Adversarial Learning
  • Citing Conference Paper
  • April 2019