Figure 2 - available from: Scientific Reports
Ishikawa Fishbone Diagram. The cause-and-effect diagram for motion-impaired and expiratory-phase chest CT examinations.

Source publication
Article
Full-text available
We hypothesized that clinical process improvement strategies can reduce the frequency of motion artifacts and expiratory phase scanning in chest CT. We reviewed 826 chest CT examinations to establish the baseline frequency. Per clinical process improvement guidelines, we brainstormed corrective measures and built a priority-pay-off matrix. The first intervention involved e...

Citations

... motion artifact caused by tremors) is desirable. A repeat scan will increase the radiation dose, contrast media volume, and cost [2]. Sedation can be used for patients with involuntary movements, provided they do not have any heart problems. ...
Article
Full-text available
Current head holders for CT scanner tables use weak immobilization techniques that do not work for patients with Parkinson's or Huntington's disease. Some head holders use expanding balloons to hold the patient's head, but these cannot stop involuntary movement. Therefore, this paper proposes a more radical design that can be used especially for such conditions, provided the skull is intact.
... The quality control process involves designing and implementing a quality control program, collecting and analyzing data, investigating results that are outside the acceptance levels for the quality control program, and taking corrective action to bring these results back to an acceptable level (11). For example, some studies (29,30) have improved the quality of thoracic CT examinations by providing patients with breathing training. The issues raised by Waaler and Hofmann (12) regarding the rejection and duplication of diagnostic x-ray images pose new challenges to radiographic imaging. ...
Article
Full-text available
Purpose: To standardize the radiography imaging procedure, an image quality control framework using the deep learning technique was developed to segment and evaluate lumbar spine x-ray images according to a defined quality control standard. Materials and methods: A dataset comprising anteroposterior, lateral, and oblique position lumbar spine x-ray images from 1,389 patients was analyzed in this study. The training set consisted of digital radiography images of 1,070 patients (800, 798, and 623 images of the anteroposterior, lateral, and oblique position, respectively) and the validation set included 319 patients (200, 205, and 156 images of the anteroposterior, lateral, and oblique position, respectively). The quality control standard for lumbar spine x-ray radiography in this study was defined using textbook guidelines as a reference. An enhanced encoder-decoder fully convolutional network with U-net as the backbone was implemented to segment the anatomical structures in the x-ray images. The segmentations were used to build an automatic assessment method to detect unqualified images. The dice similarity coefficient was used to evaluate segmentation performance. Results: The dice similarity coefficient of the anteroposterior position images ranged from 0.82 to 0.96 (mean 0.91 ± 0.06); the dice similarity coefficient of the lateral position images ranged from 0.71 to 0.95 (mean 0.87 ± 0.10); the dice similarity coefficient of the oblique position images ranged from 0.66 to 0.93 (mean 0.80 ± 0.14). The accuracy, sensitivity, and specificity of the assessment method on the validation set were 0.971-0.990 (mean 0.98 ± 0.10), 0.714-0.933 (mean 0.86 ± 0.13), and 0.995-1.000 (mean 0.99 ± 0.12) for the three positions, respectively. Conclusion: This deep learning-based algorithm achieves accurate segmentation of lumbar spine x-ray images.
It provides a reliable and efficient method to identify the shape of the lumbar spine while automatically determining the radiographic image quality.
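The segmentation metric named in the abstract can be stated compactly in code. This is a minimal sketch of the dice similarity coefficient (2·|A∩B| / (|A| + |B|)) over flattened binary masks; the masks below are invented for illustration, and a real pipeline would operate on NumPy arrays rather than Python lists.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    assert len(mask_a) == len(mask_b), "masks must be the same size"
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Hypothetical predicted vs. reference mask for a small region
pred = [1, 1, 1, 0, 0, 0, 1, 0]
ref  = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(dice_coefficient(pred, ref), 3))  # 0.75
```

A coefficient of 1.0 means the predicted and reference segmentations overlap exactly; the reported per-position means (0.80-0.91) indicate substantial but imperfect overlap.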
... No prior studies have reported on the incidence of motion artifacts in patients with SARS-CoV-2; however, a pre-SARS-CoV-2 pandemic study described a high incidence of motion artifacts and expiratory phase scanning in about 1/3 of all chest CTs. 16 Given the change in quantitative pixel values with motion artifacts, it is not surprising that exclusion of motion-impaired chest CTs improved performance of both DL-based and radiomics features. However, none of the described DL or radiomics approaches, including the ones used in our study, checks CT images for the presence of motion artifacts. ...
Article
Purpose Comparison of deep learning algorithm, radiomics and subjective assessment of chest CT for predicting outcome (death or recovery) and intensive care unit (ICU) admission in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Methods The multicenter, ethical committee-approved, retrospective study included non-contrast-enhanced chest CT of 221 SARS-CoV-2 positive patients from Italy (n = 196 patients; mean age 64 ± 16 years) and Denmark (n = 25; mean age 69 ± 13 years). A thoracic radiologist graded presence, type and extent of pulmonary opacities and severity of motion artifacts in each lung lobe on all chest CTs. Thin-section CT images were processed with CT Pneumonia Analysis Prototype (Siemens Healthineers), which yielded segmentation masks from a deep learning (DL) algorithm to derive features of lung abnormalities such as opacity scores, mean HU, as well as volume and percentage of all-attenuation and high-attenuation (opacities >−200 HU) opacities. Separately, whole lung radiomics were obtained for all CT exams. Analysis of variance and multiple logistic regression were performed for data analysis. Results Moderate to severe respiratory motion artifacts affected nearly one-quarter of chest CTs in patients. Subjective severity assessment, DL-based features and radiomics predicted patient outcome (AUC 0.76 vs AUC 0.88 vs AUC 0.83) and need for ICU admission (AUC 0.77 vs AUC 0.80 vs AUC 0.82). Excluding chest CT with motion artifacts, the performance of DL-based and radiomics features improved for predicting ICU admission. Conclusion DL-based and radiomics features of pulmonary opacities from chest CT were superior to subjective assessment for differentiating patients with favorable and adverse outcomes.
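One of the DL-derived features described above, the percentage of high-attenuation opacities, amounts to thresholding Hounsfield unit (HU) values inside a lung segmentation mask. The sketch below assumes flattened voxel lists and uses the abstract's >−200 HU cutoff; all voxel values and the mask are invented for illustration.

```python
HIGH_ATTENUATION_HU = -200  # threshold named in the abstract

def high_attenuation_fraction(hu_values, lung_mask):
    """Fraction of in-lung voxels whose HU value exceeds the threshold."""
    in_lung = [hu for hu, m in zip(hu_values, lung_mask) if m]
    if not in_lung:
        return 0.0
    return sum(hu > HIGH_ATTENUATION_HU for hu in in_lung) / len(in_lung)

# Hypothetical HU values; normal aerated lung sits near -700 HU,
# while consolidated opacities approach soft-tissue values (around 0 HU).
hu   = [-800, -750, -150, -100, -600, 50, -700, -300]
mask = [1, 1, 1, 1, 1, 1, 1, 0]  # last voxel lies outside the lung
print(round(high_attenuation_fraction(hu, mask), 3))  # 0.429 (3 of 7 voxels)
```

In practice such masks come from the DL segmentation step, and the per-voxel arithmetic is vectorized, but the feature itself is this simple proportion.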
... Many LEP interventions, including the use of remote interpreters, may not be operationally feasible for urgent and emergent radiology settings in which there is expected rapid throughput of patients and both interpreters (in person or remote) and imaging equipment are limited resources. Although LEP has been shown to affect image quality during CT and MRI [4,5], interpreters may not be routinely used during standard radiography in which examination times are substantially shorter, with an average duration on the order of 4 min [6]. ...
Article
Full-text available
BACKGROUND Disproportionately high rates of COVID-19 have been noted among communities with limited English proficiency (LEP), resulting in an unmet need for improved multilingual care and interpreter services. To enhance multilingual care, we created a freely available web app (RadTranslate™) that provides multilingual radiology exam instructions. The purpose of this study was to evaluate the implementation of this intervention in radiology. METHODS The device-agnostic web app leverages artificial-intelligence text-to-speech to provide standardized, human-like spoken exam instructions in the patient’s preferred language. Standardized phrases were collected from a consensus group consisting of technologists, radiologists, and ancillary staff. RadTranslate was piloted in Spanish for chest radiographs (CXR) performed at a COVID-19 triage outpatient center that served a predominantly Spanish-speaking Latine community. Implementation included a tablet displaying the app in the CXR room. Imaging appointment duration (IAD) was measured and compared between pre- and post-implementation groups. RESULTS In the 63-day test period following launch, there were 1267 app uses, with technologists voluntarily switching exclusively to RadTranslate for Spanish-speaking patients. The most used phrases were a general explanation of the exam (30% of total) followed by instructions to disrobe and remove any jewelry (12%). There was no significant difference in the IAD, 11±7 min (mean ± standard deviation) versus 12±3 min for standard-of-care versus RadTranslate, respectively; however, variability was significantly lower when RadTranslate was used (p=0.003). CONCLUSION AI-aided multilingual audio instructions were successfully integrated into imaging workflows, reducing strain on medical interpreters and variance in throughput, resulting in a more reliable average exam length.
Article
Purpose To assess the impact of instructional videos in patients’ primary language on abdominal MR image quality for whom English is a second language (ESL). Methods Twenty-nine ESL patients viewed Spanish or Mandarin-Chinese instructional videos (approximately 2.5 min in duration) in the preparation room before abdominal MRI (ESL–video group). Comparison groups included 50 ESL patients who underwent MRI before video implementation (ESL–no video group) and 81 English-speaking patients who were matched for age, gender, magnet strength, and history of prior MRI with patients in the first two groups. Three radiologists independently assessed respiratory motion and image quality on turbo spin-echo T2-weighted images (T2WI) and postcontrast T1-weighted images (T1WI) using 1 to 5 Likert scales. Groups were compared using Kruskal-Wallis tests as well as generalized estimating equations (GEEs) to adjust for possible confounders. Results For T2WI respiratory motion and T2WI overall image quality, Likert scores of the ESL–no video group (mean score across readers of 2.6 ± 0.1 and 2.6 ± 0.1) were lower (all P < .001) compared with English-speaking (3.3 ± 0.2 and 3.3 ± 0.1) and ESL–video (3.2 ± 0.1 and 3.0 ± 0.2) groups. In the GEE model, mean T2WI respiratory motion (both adjusted P < .001) and T2WI overall quality (adjusted P = .03 and .11) were higher in the English and ESL–video groups compared with the ESL–no video group. For T1WI respiratory motion and T1WI overall image quality, Likert scores were not different between groups (P > .05), including in the GEE model (adjusted P > .05). Conclusion Providing ESL patients with an instructional video in their primary language before abdominal MRI is an effective intervention to improve imaging quality.
Article
Innovations in CT have been impressive among imaging and medical technologies, in both the hardware and software domains. The range and speed of CT scanning improved with the introduction of multidetector-row CT scanners with wide-array detectors and faster gantry rotation speeds. To address concerns over rising radiation doses from its increasing use and to improve image quality, CT reconstruction techniques evolved from filtered back projection to commercially released iterative reconstruction techniques and, recently, deep learning (DL)-based image reconstruction. These newer reconstruction techniques enable improved or retained image quality versus filtered back projection at lower radiation doses. DL can aid in image reconstruction using training data, without total reliance on the physical model of the imaging process; unique artifacts of photon-counting-detector CT (PCD-CT) due to charge sharing, K-escape, fluorescence x-ray emission, and pulse pileup can be handled in a data-driven fashion. With sufficiently well-reconstructed images, a well-designed network can be trained to raise image quality above a practical/clinical threshold or to enable new applications. Besides, the much smaller detector pixels of PCD-CT can incur large computational costs with traditional model-based iterative reconstruction methods, whereas deep networks can be much faster once trained and validated. In this review, we present techniques, applications, uses, and limitations of deep learning-based image reconstruction methods in CT.
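The iterative-reconstruction idea contrasted with filtered back projection above can be illustrated with a toy example: repeatedly nudge an image estimate x so that the forward projection A·x better matches the measured data y. Here a tiny 2×2 matrix stands in for a real CT forward model, and all numbers are invented; this is a sketch of the principle (gradient descent on the data-fidelity term), not any vendor's algorithm.

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def iterative_reconstruct(A, y, steps=500, lr=0.1):
    """Gradient descent on ||A x - y||^2, starting from a zero image."""
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [p - yi for p, yi in zip(matvec(A, x), y)]  # residual A x - y
        # gradient of the squared error is 2 * A^T r; fold the 2 into lr
        grad = [sum(A[i][j] * r[i] for i in range(len(A)))
                for j in range(len(x))]
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]   # toy forward model (made up)
x_true = [1.0, 2.0]            # "true image"
y = matvec(A, x_true)          # noiseless measurements
x_hat = iterative_reconstruct(A, y)
print([round(v, 3) for v in x_hat])  # converges toward [1.0, 2.0]
```

Model-based iterative methods add regularization and run this loop over millions of voxels, which is where the computational cost mentioned above comes from; a trained network replaces the loop with a single forward pass.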