Article

Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform


Abstract

Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples are two- and three-dimensional visualizations, image segmentation, and the registration of all types of anatomical structures and pathologies. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, who could apply the existing functionalities or implement further image processing applications in the future. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more and more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images or volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
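As a concrete illustration of the medical metrics mentioned in the abstract (Dice score and Hausdorff distance), the following minimal Python sketch computes both for two binary segmentation masks. This is a generic implementation for illustration only, not Studierfenster's actual code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy example: two overlapping spheres on a small grid.
x, y, z = np.ogrid[:32, :32, :32]
m1 = (x - 15)**2 + (y - 15)**2 + (z - 15)**2 < 8**2
m2 = (x - 17)**2 + (y - 15)**2 + (z - 15)**2 < 8**2
print(dice_score(m1, m2), hausdorff_distance(m1, m2))
```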


... It provides visualization and segmentation tools for medical images and is built as a client-server model. Its main component is the Medical 3D Viewer, which offers various annotation and segmentation tools [7]. ...
... The server side is written in C, C++, and Python, using libraries like Insight Toolkit (ITK), Visualization Toolkit (VTK), X Toolkit (XTK), and Slice:Drop. Server requests are processed by a Python Flask server [7]. ...
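To illustrate the client-server pattern these excerpts describe, here is a hypothetical minimal Flask endpoint that accepts an uploaded volume, processes it on the server, and returns the result. The endpoint name, payload format, and processing step are assumptions for illustration, not Studierfenster's actual API.

```python
import io
import numpy as np
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process_volume():
    # Client uploads a volume serialized with numpy; a real server would parse NRRD/DICOM.
    vol = np.load(io.BytesIO(request.files["volume"].read()))
    result = (vol > vol.mean()).astype(np.uint8)  # placeholder "analysis" step
    buf = io.BytesIO()
    np.save(buf, result)
    buf.seek(0)
    return send_file(buf, mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(port=5000)
```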
Conference Paper
Full-text available
Segmentation is a crucial procedure in medical image analysis. The usage of automatic algorithms in this field is an attractive alternative to manual segmentation. One promising semi-automatic segmentation tool is the GrowCut algorithm, which allows n-dimensional image segmentation, providing interactive and dynamic features. Currently, using the GrowCut algorithm for medical image segmentation with a user interface is only possible via medical image analysis software, making it device- and platform-dependent. The GrowCut algorithm without a user interface is available via various implementations but requires a lot of technical knowledge from the user. The aim of this contribution is to provide a user interface for the GrowCut algorithm on the basis of a web application. This is achieved by implementing an adapted version of the GrowCut algorithm, the Fast GrowCut algorithm, into a client/server-based, web-hosted 3-dimensional medical image viewer called StudierFenster. As a result, the Fast GrowCut algorithm can be used directly inside the online environment, without installing software and without technical knowledge on the part of the user. It is now possible to use the segmentation tool on any 2-dimensional transverse slice of a 3-dimensional image. The workflow was made user-friendly, allowing input to be drawn with a brush onto the image and loading the output automatically, making it immediately visible.
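For readers unfamiliar with the algorithm, the following is a minimal 2D sketch of the classic GrowCut cellular automaton the paper builds on. The Fast GrowCut variant integrated into StudierFenster is an optimized adaptation; this simplified version is for illustration only.

```python
import numpy as np

def growcut(image, seeds, n_iter=200):
    """seeds: 0 = unlabeled, 1..K = user strokes; returns propagated labels."""
    img = image.astype(float) / max(float(image.max()), 1e-9)
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)  # seed cells start with strength 1
    for _ in range(n_iter):
        changed = False
        for shift in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            # Shifted copies implement "neighbor q attacks cell p".
            # (np.roll wraps around at the borders; acceptable for a sketch.)
            ql = np.roll(labels, shift, axis=(0, 1))
            qs = np.roll(strength, shift, axis=(0, 1))
            qi = np.roll(img, shift, axis=(0, 1))
            attack = (1.0 - np.abs(img - qi)) * qs  # similar pixels attack harder
            win = (attack > strength) & (ql > 0)
            if win.any():
                labels[win], strength[win] = ql[win], attack[win]
                changed = True
        if not changed:
            break
    return labels
```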
... Zou et al. reported a tele-radiotherapy system for a medical alliance in China that provides immediate access to patient radiotherapy planning and evaluation [24]. Egger et al. introduced Studierfenster: a free, non-commercial open science client-server framework for medical image analysis [25]. Zaki et al. reported the utility of cloud computing in analyzing GPU-accelerated deformable image registration of CT and CBCT images in head and neck cancer radiation therapy [26]. ...
Preprint
Full-text available
Purpose: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.

Results: All the plans from both groups satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of the plan A group and the plan B group is 0.84 and 0.81, respectively, with no statistically significant difference from that of the plan C group. The mean CI, PQM, OOT and POT are 0.806, 77.55, 410 s and 185 s for the plan A group, and 0.841, 76.87, 515.1 s and 271.1 s for the plan B group, which were significantly superior to those of the plan C group, except for the CI of the plan A group. There is no statistically significant difference between the dose accuracies of the plan B and plan C groups.

Conclusion: The overall efficacy and safety of the Desargues Cloud TPS are not significantly different from those of Varian Eclipse, while some efficacy indicators of plans generated from automatic planning, without or with manual adjustments, are even significantly superior to those of fully manual plans from Eclipse. Cloud-based automatic treatment planning additionally increases the efficiency of the treatment planning process and facilitates the sharing of planning knowledge.

Materials and methods: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed based on a browser/server mode, where all the computing-intensive functions were deployed on the server and user interfaces were implemented on the web. The communication between the browser and the server was through the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopted a hybrid of knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO) and machine parameter optimization (MPO). 53 patients from two institutions were enrolled in a multi-center self-controlled clinical validation. For each patient, three IMRT plans were designed. Plans A and B were designed on the Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively. Plan C was designed on the Varian Eclipse TPS using fully manual planning. The efficacy indicators were the heterogeneity index, conformity index, plan quality metric, overall operation time and plan optimization time. The safety indicators were the gamma indices of dose verification.
... For decades already ML has found important applications in the field of medicine [31], e.g., in enhancing diagnosis and detection [63], segmentation [16,25], survival prognosis [32], surgical planning [34], personalized treatment [64], and the discovery of biomarkers for specific pathologies [26], as clinical data management tools [30], or as research tools [15,38,52,62]. The predictive and analytical power of these ML models has grown steadily and now decisions based on it are qualitatively competitive with (and often more efficient than) those made by healthcare professionals [14], occasionally even outperforming them on specific, narrowly defined tasks such as pneumonia detection in chest X-rays [53], or improving radiologists' performance during breast cancer screenings [71]. ...
Chapter
Machine learning (ML), especially deep learning (DL), is a field of research that has recently attracted enormous attention and is currently evolving rapidly. New applications in economics, industry, and healthcare create new challenges for the sustainable development of our society. We describe the organization and realization of a machine learning seminar that integrates a theoretical seminar and a practical tutorial focusing on the employment of DL in liver cancer diagnostics. The seminar has been designed for master students in mathematics and computer science with the aim of preparing them for potential master theses and future work assignments in the area of ML in healthcare. The students were educated in accessing and understanding the information about four different DL architectures provided in scientific publications. They were instructed in implementing these models in the PyTorch framework for image classification and segmentation using publicly available medical data. To offer the students easy access to the necessary computing power, we created a remote development platform based on modern cloud technologies, making fast and efficient training of the models possible. We publish the code for this interactive cloud platform, providing an easy-to-handle, out-of-the-box solution that eliminates the need for high technical literacy among students or the acquisition of hardware. Additionally, we publish the exercises and standard solutions and offer a helpful guide and first-hand experience for future seminars with a similar scope.
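As an impression of the kind of hands-on exercise such a seminar covers, here is a minimal PyTorch training loop for a toy segmentation task. It is illustrative only and not taken from the published exercise material; the tiny network and random data are stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for U-Net-like architectures
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    x = torch.randn(4, 1, 64, 64)           # stand-in for CT/MRI patches
    y = (x > 0.5).float()                   # stand-in for ground-truth masks
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```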
... In this contribution, we describe two open science initiatives from our group: StudierFenster [1] (http://studierfenster.icg.tugraz.at, Fig. 1), an open, browser-based framework for biomedical image analysis, and MedShapeNet [2] (https://medshapenet.ikim.nrw, ...
Conference Paper
Full-text available
In the dynamic landscape of digitized healthcare, open science principles are instrumental in driving transformative changes. This contribution describes two open science initiatives: StudierFenster, a cloud-based framework for (bio-)medical image analysis, and MedShapeNet, a comprehensive and open-access dataset of medical shapes. StudierFenster offers seamless access to medical image analysis tools through common web browsers, facilitating widespread utilization. MedShapeNet bridges the gap between medical imaging and 3D deep learning by providing a repository of anatomical shapes extracted from real patient data. With over 100,000 shapes spanning various datasets, MedShapeNet enables diverse applications in medical image analysis, mixed reality, and 3D printing.
... In another study, Egger et al. [83] introduced Studierfenster, a client-server platform for (bio-)medical image analysis that is also free and open-source. It has several features, including the ability to display medical data in popular web browsers. ...
Article
Medical data processing has grown into a prominent topic in recent decades, with the primary goal of maintaining patient data via new information technologies, including the Internet of Things (IoT) and sensor technologies, which generate patient indexes in hospital data networks. Innovations like distributed computing, Machine Learning (ML), blockchain, chatbots, wearables, and pattern recognition can adequately enable the collection and processing of medical data for decision-making in healthcare. In particular, distributed computing can assist experts in the disease diagnostic process by digesting huge volumes of data swiftly and producing personalized smart suggestions. On the other hand, the world is currently confronting the COVID-19 outbreak, so early diagnosis techniques are crucial to lowering the fatality rate. ML systems are beneficial in aiding radiologists in examining the enormous number of medical images. Nevertheless, they demand a huge quantity of training data that must be unified for processing. Hence, developing Deep Learning (DL) confronts multiple issues, such as conventional data collection, quality assurance, knowledge exchange, privacy preservation, administrative laws, and ethical considerations. In this research, we intend to convey an inclusive analysis of the most recent studies on applications of distributed computing platforms, based on five categories of platforms: cloud computing, edge, fog, IoT, and hybrid platforms. We evaluated 27 articles regarding the proposed frameworks, deployed methods, and applications, noting their advantages and drawbacks and the applied datasets, and screening for security mechanisms and the presence of Transfer Learning (TL) methods. As a result, it was found that most of the recent research (about 43%) used the IoT platform as the environment for the proposed architecture, and most of the studies (about 46%) were done in 2021. In addition, the most popular DL algorithm was the Convolutional Neural Network (CNN), with a share of 19.4%. Regardless of how technology changes, delivering appropriate therapy to patients remains the primary aim of healthcare departments. Therefore, further studies are recommended for developing more functional architectures based on DL and distributed environments, and for better evaluation of the present healthcare data analysis models.
... Research bridging AI and Radiology must consider data parameters (e.g., access, quality), patient interests (e.g., privacy, ethics, liability), coding & billing, and system maintenance; these must be addressed to enable widespread adoption of imaging-AI. While most medical facilities and healthcare systems maintain their data centers and servers due to data-security concerns, AI tools are increasingly implemented as cloud-based inference systems [5][6][7]. As healthcare entities adopt cloud infrastructures, AI deployments need to be increasingly concerned about data security. ...
Preprint
Full-text available
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
... A pure end-user and browser-based solution can be tried out in the online framework StudierFenster (www.studierfenster.at [34]) within the 3D Skull Reconstruction module [35]. ...
Preprint
Full-text available
We present a deep learning-based approach for skull reconstruction for MONAI, which has been pre-trained on the MUG500+ skull dataset. The implementation follows the MONAI contribution guidelines; hence, it can easily be tried out, used, and extended by MONAI users. The primary goal of this paper lies in the investigation of open-sourcing codes and pre-trained deep learning models under the MONAI framework. Nowadays, open-sourcing software, especially (pre-trained) deep learning models, has become increasingly important. Over the years, medical image analysis has experienced a tremendous transformation. Over a decade ago, algorithms had to be implemented and optimized with low-level programming languages, like C or C++, to run in a reasonable time on a desktop PC, which was not as powerful as today's computers. Nowadays, users have high-level scripting languages like Python, and frameworks like PyTorch and TensorFlow, along with a sea of public code repositories at hand. As a result, implementations that had thousands of lines of C or C++ code in the past can now be scripted with a few lines and, in addition, executed in a fraction of the time. To put this on an even higher level, the Medical Open Network for Artificial Intelligence (MONAI) framework tailors medical imaging research to an even more convenient process, which can boost and push the whole field. The MONAI framework is a freely available, community-supported, open-source and PyTorch-based framework that also enables researchers to provide contributions with pre-trained models to others. Codes and pre-trained weights for skull reconstruction are publicly available at: https://github.com/Project-MONAI/research-contributions/tree/master/SkullRec
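Purely as a hedged sketch of how a pre-trained MONAI model of this kind could be instantiated and applied: the network hyperparameters and the weights file name below are assumptions for illustration, not the actual SkullRec configuration (which is in the linked repository).

```python
import torch
from monai.networks.nets import UNet

# Hypothetical 3D U-Net configuration; the real SkullRec setup may differ.
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=1,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
state = torch.load("skullrec_weights.pt", map_location="cpu")  # hypothetical file
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    # Input: a binary defective-skull volume; output: the completed skull.
    completed = torch.sigmoid(model(torch.zeros(1, 1, 128, 128, 128)))
```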
... According to the 2020 American College of Radiology (ACR) Data Science Institute (DSI) Artificial Intelligence Survey, about 30% of responding radiologists use AI as part of their clinical practice, and of those radiologists, most are from larger institutions with collaborative access to scientists [2]. While some no-code web-based applications have been developed to lower this barrier, none provide a solution that is both end-to-end and code-free [3][4][5]. Conversely, data scientists who are not working closely with clinicians may build algorithms that do not make a meaningful impact on patient care. ...
Article
Full-text available
Objective: To develop a free, vendor-neutral software suite, the American College of Radiology (ACR) Connect, which serves as a platform for democratizing artificial intelligence (AI) for all individuals and institutions.

Materials and Methods: Among its core capabilities, ACR Connect provides educational resources; tools for dataset annotation; model building and evaluation; and an interface for collaboration and federated learning across institutions without the need to move data off hospital premises.

Results: The AI-LAB application within ACR Connect allows users to investigate AI models using their own local data while maintaining data security. The software enables non-technical users to participate in the evaluation and training of AI models as part of a larger, collaborative network.

Discussion: Advancements in AI have transformed automated quantitative analysis for medical imaging. Despite the significant progress in research, AI is currently underutilized in clinical workflows. The success of AI model development depends critically on the synergy between physicians who can drive clinical direction, data scientists who can design effective algorithms, and the availability of high-quality datasets. ACR Connect and AI-LAB provide a way to perform external validation as well as collaborative, distributed training.

Conclusion: In order to create a collaborative AI ecosystem across clinical and technical domains, the ACR developed a platform that enables non-technical users to participate in education and model development.
... A cloud-based medical imaging platform, Studierfenster (http://studierfenster.tugraz.at/), offers multiple functions such as 2D and 3D visualization of medical data, and calculation of medical metrics [151]. It also generates defective parts of the skull by using deep learning algorithms [151, 152]. ...
Article
Cranioplasty treatment is the surgical repair of a bone defect in the skull resulting from a previous operation or injury. Decompressive craniectomy (DC) is a surgical procedure that is followed by cranioplasty surgery. DC is usually performed to treat patients with traumatic brain injury, intracranial hemorrhage, cerebral infarction, brain edema, skull fractures, etc. In many published clinical cases, cranioplasty surgery is reported to restore cranial symmetry with good cosmetic outcomes and neurophysiologically relevant functional outcomes. In this review, a number of key issues related to the manufacturing of patient-specific implants, clinical complications, cosmetic outcomes, and newer alternative therapies are discussed. In exploring alternative therapeutic treatments to cranioplasty, biomolecule- and cellular-based approaches are emphasized. The currently practiced trends in the restoration of cranial defects involve 3D printing to produce patient-specific prefabricated cranial implants that provide better cosmetic outcomes. Regardless of the advancements in image processing and 3D printing, the complete clinical procedure is time-consuming and incurs significant costs. To reduce manual intervention and to meet unmet clinical demands, it has been highlighted that automated implant design by data-driven methods can accelerate the design and manufacturing of patient-specific cranial implants. Data-driven approaches, encompassing e-platforms such as computer applications, publicly accessible clinical databases, and artificial intelligence combined with 3D printing, will lead to the development of the next generation of patient-specific cranial implants, which can provide better predictable clinical outcomes. Statement of Significance: Cranioplasty is performed to reconstruct cranial defects of patients who have undergone decompressive craniectomy. Cranioplasty improves the aesthetic and functional outcomes of those patients. To meet the clinical demands of cranioplasty surgery, accelerated design and manufacturing of 3D cranial implants are required. This review provides an overview of implant biomaterials and bone flap manufacturing methods for cranioplasty surgery. Along with that, tissue engineering and regenerative medicine-based approaches to reduce complications associated with implant biomaterials are covered. The potential use of computer applications and data-driven artificial intelligence-based approaches is highlighted to accelerate the clinical protocols of cranioplasty treatment with less manual intervention.
... More and more image data is being kept online due to the fast advancement of mobile internet technologies. Particularly in the sphere of medicine, images have replaced words as an essential source of information [25]-[28]. Against this background, it is very important to suggest an intelligent classification method that can deal with significant images, such as CT data of Covid-19, and provide an accurate healthy/infected decision. ...
Article
Full-text available
A broad family of viruses called coronaviruses may infect people. The infection's symptoms are often relatively minor and resemble a normal cold. Since the coronavirus disease of 2019 (Covid-19) had never been observed in humans before, anyone can contract it, and no one has an innate immunity to it. The detection of Covid-19 is now a critical task for medical practitioners. Computed tomography (CT) scans can be considered the best way to diagnose Covid-19. For patients with severe symptoms, imaging might help to assess the seriousness of the disease. Also, the CT scan can be helpful for determining a plan of care for a patient. This work focuses on classifying Covid-19 cases as healthy or infected by presenting a powerful scheme for recognizing CT scan images. A model based on applying deep feature extraction with a support vector machine (SVM) is proposed. A big dataset of CT scan images is employed; it is available in the repositories of GitHub and Kaggle. A remarkable result of 100% has been benchmarked as the highest evaluation after investigations. The proposed model can automatically distinguish between healthy and infected individuals.
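A minimal sketch of the described pipeline, deep feature extraction followed by an SVM, might look as follows. The choice of ResNet-18 as the feature extractor and all data here are stand-ins, not necessarily the paper's actual setup.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # strip the classifier -> 512-d deep features
backbone.eval()

def features(batch):                # batch: (N, 3, 224, 224) preprocessed CT slices
    with torch.no_grad():
        return backbone(batch).numpy()

# Stand-in data; in practice, load preprocessed CT images and their labels.
x_train = torch.randn(20, 3, 224, 224)
y_train = [0] * 10 + [1] * 10       # 0 = healthy, 1 = infected
clf = SVC(kernel="rbf").fit(features(x_train), y_train)
print(clf.predict(features(torch.randn(2, 3, 224, 224))))
```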
... We believe this work will inspire further research in this domain and consequently provide an accurate and efficient approach to automatizing cranial implant design for biomedical purposes. Finally, we plan to add CranGAN to StudierFenster [6] to provide an end-user-friendly version to the community. ...
Conference Paper
Automatizing cranial implant design has become an increasingly important avenue in biomedical research. Benefits in terms of financial resources, time and patient safety necessitate the formulation of an efficient and accurate procedure for the same. This paper attempts to provide a new research direction to this problem, through an adversarial deep learning solution. Specifically, in this work, we present CranGAN - a 3D Conditional Generative Adversarial Network designed to reconstruct a 3D representation of a complete skull given its defective counterpart. A novel solution of employing point cloud representations instead of conventional 3D meshes and voxel grids is proposed. We provide both qualitative and quantitative analysis of our experiments with three separate GAN objectives, and compare the utility of two 3D reconstruction loss functions viz. Hausdorff Distance and Chamfer Distance. We hope that our work inspires further research in this direction. Clinical relevance: This paper establishes a new research direction to assist in automated implant design for cranioplasty.
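As an illustration of one of the two reconstruction losses compared in the paper, here is a straightforward Chamfer distance between two point clouds in PyTorch. This is a generic implementation for illustration, not CranGAN's actual loss code; squared distances are also commonly used.

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 3), b: (M, 3). Mean nearest-neighbor distance in both directions."""
    d = torch.cdist(a, b)                        # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

skull_pred = torch.rand(2048, 3)                 # stand-in point clouds
skull_true = torch.rand(2048, 3)
print(chamfer_distance(skull_pred, skull_true))
```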
... Furthermore, we acknowledge the REACT-EU project KITE (Plattform für KI-Translation Essen). Finally, we want to make the interested reader aware of our medical image processing framework StudierFenster (www.studierfenster.at) [159], where medical deep learning approaches can be tried out in a standard web browser. ...
Article
Full-text available
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example with object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps or online websites. The abundance of collected patient data and the recent growth in the deep learning field has resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed returned already over 11,000 results for the search term ‘deep learning’, and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of ‘medical deep learning’ is almost impossible to obtain and acquiring a full overview of medical sub-fields is becoming increasingly more difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
... Another online tool, GradioHub, has been proposed as a collaborative way for clinicians and biomedical researchers to share and study data (Abid et al., 2020). Finally, the open-source tool Studierfenster has recently been created for the purpose of biomedical data visualization, enabling rendering of both 3D and 2D data, as well as letting the user annotate the data they are examining (Egger et al., 2022). These tools represent a small fraction of the many web-based medical imaging libraries available. ...
Article
Full-text available
Epilepsy affects more than three million people in the United States. In approximately one-third of this population, anti-seizure medications do not control seizures. Many patients pursue surgical treatment that can include a procedure involving the implantation of electrodes for intracranial monitoring of seizure activity. For these cases, accurate mapping of the implanted electrodes on a patient’s brain is crucial in planning the ultimate surgical treatment. Traditionally, electrode mapping results are presented in static figures that do not allow for dynamic interactions and visualizations. In collaboration with a clinical research team at a Level 4 Epilepsy Center, we developed N-Tools-Browser, a web-based software using WebGL and the X-Toolkit (XTK), to help clinicians interactively visualize the location and functional properties of implanted intracranial electrodes in 3D. Our software allows the user to visualize the seizure focus location accurately and simultaneously display functional characteristics (e.g., results from electrical stimulation mapping). Different visualization modes enable the analysis of multiple electrode groups or individual anatomical locations. We deployed a prototype of N-Tools-Browser for our collaborators at the New York University Grossman School of Medicine Comprehensive Epilepsy Center. Then, we evaluated its usefulness with domain experts on clinical cases.
... In [12], DICOM images are transferred to and rendered in a web browser, which requires a high-throughput internet connection, large RAM capacity and hardware capability for volume rendering. The Studierfenster platform [13] offers visualization and image analysis with advanced features like AR/VR, cranial implant design, and facial reconstruction. DICOM images are converted to NRRD format on the client side to avoid transferring patient data to the server. ...
Article
Full-text available
Quick access to radiological images is important for timely diagnosis and effective patient treatment. In this paper, we present a web-based client–server system for seamless image and volume rendering of DICOM images that provides fast access to the data needed for diagnosis without placing a heavy load on computer resources on the client side. DICOM images are rendered on the server, and the resulting 2D images are sent to physicians who can view and analyze them via web browser. Security of patient medical data is ensured by encryption during storage and transfer. The system’s communication model hides the latency of remote rendering to ensure a seamless experience for the user.
Article
Full-text available
This study offers a systematic literature review on the application of Convolutional Neural Networks in Virtual Reality, Augmented Reality, Mixed Reality, and Extended Reality technologies. We categorise these applications into three primary classifications: interaction, where the networks amplify user engagements with virtual and augmented settings; creation, showcasing the networks’ ability to assist in producing high-quality visual representations; and execution, emphasising the optimisation and adaptability of apps across diverse devices and situations. This research serves as a comprehensive guide for academics, researchers, and professionals in immersive technologies, offering profound insights into the cross-disciplinary realm of network applications in these realities. Additionally, we underscore the notable contributions concerning these realities and their intersection with neural networks.
Chapter
Cloud computing is reshaping healthcare by offering a flexible solution for stakeholders to access data remotely. It revolutionizes data creation, storage, and sharing, enabling professionals to access patient information from anywhere, enhancing care and streamlining operations. Adoption is increasing due to its efficiency and innovation benefits. Services like SaaS, PaaS, and IaaS offer flexibility, driving adoption. Challenges include data breaches, necessitating robust security measures. Despite challenges, cloud computing has transformed healthcare, improving decision-making, data security, record sharing, and automation. During COVID-19, it has been crucial, highlighting its importance in advancing healthcare. Providers must embrace cloud technology for its potential to enhance medical data analysis and improve healthcare services.
Article
Full-text available
Aortic dissections (ADs) are serious conditions of the main artery of the human body, where a tear in the inner layer of the aortic wall leads to the formation of a new blood flow channel, named the false lumen. ADs affecting the aorta distal to the left subclavian artery are classified as Stanford type B aortic dissections (type B AD). This is linked to substantial morbidity and mortality; however, the course of the disease for the individual case is often unpredictable. Computed tomography angiography (CTA) is the gold standard for the diagnosis of type B AD. To advance the tools available for the analysis of CTA scans, we provide a CTA collection of 40 type B AD cases from clinical routine with corresponding expert segmentations of the true and false lumina. Segmented CTA scans might aid clinicians in decision making, especially if it is possible to fully automate the process. Therefore, the data collection is meant to be used to develop, train and test algorithms.
Article
Precision medicine research benefits from machine learning in the creation of robust models adapted to the processing of patient data. This applies both to pathology identification in images, i.e., annotation or segmentation, and to computer-aided diagnostics for classification or prediction. It comes with a strong need to exploit and visualize large volumes of images and associated medical data. The work carried out in this paper follows on from a main case study piloted in a cancer center. It proposes an analysis pipeline for patients with osteosarcoma through segmentation, feature extraction and application of a deep learning model to predict response to treatment. The main aim of the AWESOMME project is to leverage this work and implement the pipeline on an easy-to-access, secure web platform. The proposed web application is based on a three-component architecture: a data server, a heavy-computation and authentication server, and a medical imaging web framework with a user interface. These existing components have been enhanced to meet the needs of security and traceability for the continuous production of expert data. The platform innovates by covering all steps of medical image processing (visualization and segmentation, feature extraction and aided diagnosis) and enables the testing and use of machine learning models. The infrastructure is operational, deployed in internal production and currently being installed in the hospital environment. The extension of the case study and user feedback enabled us to fine-tune functionalities and proved that AWESOMME is a modular solution capable of analyzing medical data and sharing research algorithms with in-house clinicians.
Poster
Full-text available
We introduce two open science initiatives: StudierFenster, an open, browser-based framework for biomedical image analysis, and MedShapeNet, a comprehensive repository of medical shapes.
Article
Full-text available
The specific genetic subtypes that gliomas exhibit result in variable clinical courses and the need to involve multidisciplinary teams of neurologists, epileptologists, neurooncologists and neurosurgeons. Currently, the diagnosis of gliomas pivots mainly around the preliminary radiological findings and the subsequent definitive surgical diagnosis (via surgical sampling). Radiomics and radiogenomics offer the potential to precisely diagnose gliomas and predict survival and treatment responses, via morphological, textural, and functional features derived from MRI data, as well as genomic data. In spite of their advantages, there is still a lack of standardized feature extraction and analysis methodology among different research groups, which has made external validation infeasible. Radiomics and radiogenomics can be used to better understand the genomic basis of gliomas, such as tumor spatial heterogeneity, treatment response, molecular classifications and tumor microenvironment immune infiltration. These novel techniques have also been used to predict histological features, grade or even overall survival in gliomas. In this review, workflows of radiomics and radiogenomics are elucidated, together with recent research on machine learning and artificial intelligence in glioma.
Article
Medical image segmentation is a crucial task in computer-aided diagnosis. While deep learning has significantly improved this field, relying solely on local computing power makes it challenging to achieve real-time segmentation results. Furthermore, traditional convolutional neural networks (CNNs) lack the ability to extract global features. To address these issues, this paper proposes a cloud-based medical image segmentation method that leverages multi-feature extraction and interactive fusion. Specifically, this method employs cloud computing to process a large number of medical images and overcome local computing power limitations. It also combines Transformer and CNNs to extract global and local features, respectively, and introduces an interactive fusion attention module to improve segmentation accuracy. The proposed approach is validated on multiple medical image datasets, and experimental results demonstrate its effectiveness and progress.
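Since the paper's exact "interactive fusion attention module" is not reproduced here, the following is only a hedged sketch of how local CNN features and global Transformer features might be fused with cross-attention; the module name, dimensions, and structure are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Illustrative cross-attention fusion of local (CNN) and global (Transformer) features."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_feat, global_feat):
        # local_feat, global_feat: (B, C, H, W) -> (B, H*W, C) token sequences
        B, C, H, W = local_feat.shape
        q = local_feat.flatten(2).transpose(1, 2)
        kv = global_feat.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)   # local queries attend to global context
        fused = self.norm(fused + q)      # residual connection
        return fused.transpose(1, 2).reshape(B, C, H, W)

f = FusionAttention()
out = f(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
```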
Conference Paper
The number of digital medical images is growing constantly over the years. This opens new possibilities of extracting information from them using computer-assisted methods, such as artificial intelligence. In this context, the application of radiomics has received increasing attention since 2012. In radiomics, medical image data is exploited by extracting numerous features from them that are not directly visible to the human eye. These features provide valuable information for diagnosis, prognosis and therapy, especially in cancer research. In this paper, we introduce a web-based radiomics module for end users under StudierFenster (http://www.studierfenster.at), which can extract image features for tumor characterization. StudierFenster is an online, open science medical image processing framework, where multiple clinically relevant modules and applications have been integrated since its initiation in 2018/2019, such as a medical VR viewer and automatic cranial implant design. The newly integrated Radiomics module allows the upload of medical images and segmentations of a region of interest to StudierFenster, where predefined radiomic features are calculated from them using the ‘PyRadiomics’ Python package. The radiomics module is able to calculate not only the basic first-order statistics of the images, but also more advanced features that capture the 2D/3D shape and gray level characteristics. The design of the radiomics module follows the architecture of StudierFenster, where computation-intensive procedures, such as preprocessing of the data and calculating the features for each image-segmentation pair, are executed on a server. The results are stored in a CSV file, which can afterwards be downloaded in a web-based user interface.
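A minimal sketch of the server-side computation described above, using the 'PyRadiomics' package named in the paper, could look as follows; the file paths are placeholders and the default extractor settings are an assumption, not the module's actual configuration.

```python
import csv
from radiomics import featureextractor

# Default settings compute first-order statistics plus 2D/3D shape and
# gray-level (texture) features, matching the feature classes named above.
extractor = featureextractor.RadiomicsFeatureExtractor()
result = extractor.execute("image.nrrd", "segmentation.nrrd")  # placeholder paths

# Store the feature name/value pairs in a CSV file for download.
with open("features.csv", "w", newline="") as f:
    csv.writer(f).writerows(result.items())
```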
Article
Many people in society are facing problems related to health care, and diseases in the body may remain unidentified even with the presence of sensing technologies. The major reason for such failures in the identification process is that no virtual technologies are identified in the market. Most healthcare applications aim to design a particular application that provides information only about sensed values and fails to provide a virtual representation of those values. Therefore, this article provides an integration platform that connects sensing devices with virtual reality/augmented reality (VR/AR) techniques, which are applied in real time for detecting the presence of infections inside the body. In addition, one type of swarm intelligence algorithm, termed fruit fly optimization (FFO), is implemented in the recognition procedure with a modified fitness function. The process of FFO provides much lower-level perception, thus enhancing the output for smooth operation. To examine real-time conditions, the projected AR/VR procedure is applied with biomedical sensors, where three different case studies are separated. From the comparative numerical results, it is apparent that the proposed method provides better numerical results, with 65% full-scale representations and less than 0.5 dB of distortion at 0.3% tuning force.
Article
Full-text available
In the past decade, deep learning (DL) has achieved unprecedented success in numerous fields, such as computer vision and healthcare. Particularly, DL is experiencing increasing development in advanced medical image analysis applications in terms of segmentation, classification, detection, and other tasks. On the one hand, tremendous needs to leverage DL's power for medical image analysis arise from researchers of medical, clinical, and informatics backgrounds, who want to share their knowledge, skills, and experience jointly. On the other hand, barriers between disciplines stand in their way, often hampering a full and efficient collaboration. To this end, we propose our novel open-source platform, MEDAS: the MEDical open-source platform As Service. To the best of our knowledge, MEDAS is the first open-source platform providing collaborative and interactive services that allow researchers from a medical background to use DL-related toolkits easily and scientists or engineers from informatics to model faster. Based on tools and utilities following the idea of RINV (Rapid Implementation aNd Verification), our proposed platform implements tools for the pre-processing, post-processing, augmentation, visualization, and other phases needed in medical image analysis. Five tasks, concerning lung, liver, brain, chest, and pathology, are validated and demonstrated to be efficiently realizable by using MEDAS. MEDAS is available at http://medas.bnc.org.cn/.
Article
Full-text available
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth in recent years. This growth has resulted in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results for the search term 'deep learning' as of Q3 2020, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning that are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had during a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Article
Full-text available
The article introduces two complementary datasets intended for the development of data-driven solutions for cranial implant design, which remains a time-consuming and laborious task in the current clinical routine of cranioplasty. The two datasets, referred to as SkullBreak and SkullFix in this article, are both adapted from the public head CT collection CQ500 (http://headctstudy.qure.ai/dataset) with a CC BY-NC-SA 4.0 license. SkullBreak contains 114 and 20 complete skulls, each accompanied by five defective skulls and the corresponding cranial implants, for training and evaluation, respectively. SkullFix contains 100 triplets (complete skull, defective skull and implant) for training and 110 triplets for evaluation. The SkullFix dataset was first used in the MICCAI 2020 AutoImplant Challenge (https://autoimplant.grand-challenge.org/), and the ground truth, i.e., the complete skulls and the implants in the evaluation set, is held private by the organizers. The two datasets do not overlap and differ regarding data selection and synthetic defect creation, and each serves as a complement to the other. Besides cranial implant design, the datasets can be used for the evaluation of volumetric shape learning algorithms, such as volumetric shape completion. This article describes the two datasets in detail.
Poster
Full-text available
We extended an existing web-based tool called Studierfenster (http://studierfenster.icg.tugraz.at/), which was built by students from TU Graz, with a semi-automatic aortic centerline calculation functionality for this contribution. Studierfenster is a tool that renders three-dimensional volumes, defined by the user, and allows the user to perform multiple tasks on that volume. The functionality added to Studierfenster consists of two parts. The first part calculates the initial centerline of the aorta from a CTA scan. To this end, the user needs to provide two seed points inside the aorta. The output of the initial centerline calculation is the Dijkstra shortest path between the first and the second seed point within the aortic vessel. The second part of this contribution integrates a centerline smoothing algorithm developed by Alvarez et al., which further smooths the initial centerline. Our tool provides a robust centerline calculation that can even work for cases where the contrast of the CTA is not sufficient for a segmentation-based centerline calculation of the aorta.
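The first part of the described workflow can be illustrated with a generic Dijkstra shortest path over a voxel graph. This sketch assumes a precomputed mask of candidate voxels and unit edge costs on a 6-connected neighborhood; the deployed tool's cost function may differ.

```python
import heapq
import numpy as np

def centerline(mask: np.ndarray, start, goal):
    """mask: boolean volume of candidate voxels; start/goal: (z, y, x) seed points."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, p = heapq.heappop(heap)
        if p == goal:
            break
        if d > dist.get(p, np.inf):
            continue  # stale heap entry
        z, y, x = p
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            q = (z + dz, y + dy, x + dx)
            if all(0 <= q[i] < mask.shape[i] for i in range(3)) and mask[q]:
                nd = d + 1.0   # unit edge cost; intensity-based costs also possible
                if nd < dist.get(q, np.inf):
                    dist[q], prev[q] = nd, p
                    heapq.heappush(heap, (nd, q))
    path, node = [], goal      # walk the predecessors back to the start
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```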
Article
Full-text available
Featured Application: This review provides a critical review of deep/machine learning algorithms used in the identification of ischemic stroke and demyelinating brain diseases. It evaluates their strengths and weaknesses when applied to real world clinical data. Abstract: Medical brain image analysis is a necessary step in computer-assisted/computer-aided diagnosis (CAD) systems. Advancements in both hardware and software in the past few years have led to improved segmentation and classification of various diseases. In the present work, we review the published literature on systems and algorithms that allow for classification, identification, and detection of white matter hyperintensities (WMHs) of brain magnetic resonance (MR) images, specifically in cases of ischemic stroke and demyelinating diseases. For the selection criteria, we used bibliometric networks. Of a total of 140 documents, we selected 38 articles that deal with the main objectives of this study. Based on the analysis and discussion of the revised documents, there is constant growth in the research and development of new deep learning models to achieve the highest accuracy and reliability of the segmentation of ischemic and demyelinating lesions. Models with good performance metrics (e.g., Dice similarity coefficient, DSC: 0.99) were found; however, there is little practical application due to the use of small datasets and a lack of reproducibility. Therefore, the main conclusion is that there should be multidisciplinary research groups to overcome the gap between CAD developments and their deployment in the clinical environment.
Article
Full-text available
We present Biomedisa, a free and easy-to-use open-source online platform developed for semi-automatic segmentation of large volumetric images. The segmentation is based on a smart interpolation of sparsely pre-segmented slices taking into account the complete underlying image data. Biomedisa is particularly valuable when little a priori knowledge is available, e.g. for the dense annotation of the training data for a deep neural network. The platform is accessible through a web browser and requires no complex and tedious configuration of software and model parameters, thus addressing the needs of scientists without substantial computational expertise. We demonstrate that Biomedisa can drastically reduce both the time and human effort required to segment large images. It achieves a significant improvement over the conventional approach of densely pre-segmented slices with subsequent morphological interpolation as well as compared to segmentation tools that also consider the underlying image data. Biomedisa can be used for different 3D imaging modalities and various biomedical applications.
Article
Full-text available
The introduction of quantitative image analysis has given rise to fields such as radiomics, which have been used to predict clinical sequelae. One growing area of interest for analysis is brain tumours, in particular glioblastoma multiforme (GBM). Tumour segmentation is an important step in the pipeline for the analysis of this pathology. Manual segmentation is often inconsistent as it varies between observers. Automated segmentation has been proposed to combat this issue. Methodologies such as convolutional neural networks (CNNs), which are machine learning pipelines modelled on the biological process of neurons (called nodes) and synapses (connections), have been of interest in the literature. We investigate the role of CNNs in segmenting brain tumours by first taking an educational look at CNNs and performing a literature search to determine an example pipeline for segmentation. We then investigate the future use of CNNs by exploring a novel field: radiomics. This examines quantitative features of brain tumours such as shape, texture, and signal intensity to predict clinical outcomes such as survival and response to therapy.
Chapter
Full-text available
In the treatment of head and neck cancer, physicians can benefit from augmented reality in preparing and executing treatment. We present a system allowing a physician wearing an untethered augmented reality headset to see medical visualizations precisely overlaid onto the patient. Our main contribution is a strategy for markerless registration of 3D imaging to the patient’s face. We use a neural network to detect the face using the headset’s depth sensor and register it to computed tomography data. The face registration is seamlessly combined with the headset’s continuous self-localization. We report on registration error and compare our approach to an external, high-precision tracking system.
Poster
Full-text available
Fast and fully automatic design of 3D-printed patient-specific cranial implants is highly desired in cranioplasty. To this end, various deep learning-based approaches are investigated. To facilitate supervised training, a database containing 200 high-resolution healthy CT skulls acquired in clinical routine was constructed. Due to the unavailability of a large number of defective skulls from the clinic, artificial defects are introduced to simulate those caused in real cranial surgery.
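A minimal sketch of such synthetic defect injection into a binary skull volume might look as follows; the cubic defect shape, its size, and its placement near the cranium top are arbitrary assumptions for illustration.

```python
import numpy as np

def add_cubic_defect(skull: np.ndarray, rng=None, size=40):
    """Zero out a random cube on the upper part of a binary skull volume."""
    rng = rng or np.random.default_rng()
    defective = skull.copy()
    z0 = int(skull.shape[0] * 0.7)                 # restrict to the cranium top
    z = rng.integers(z0, skull.shape[0] - size)
    y = rng.integers(0, skull.shape[1] - size)
    x = rng.integers(0, skull.shape[2] - size)
    defective[z:z+size, y:y+size, x:x+size] = 0
    implant = skull & ~defective                   # ground-truth implant = removed bone
    return defective, implant

skull = np.zeros((256, 256, 256), dtype=bool)
skull[180:200, 60:200, 60:200] = True              # toy stand-in "skull" slab
defective, implant = add_cubic_defect(skull)
```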
Conference Paper
Full-text available
In the treatment of head and neck cancer, physicians can benefit from augmented reality in preparing and executing treatment. We present a system allowing a physician wearing an untethered augmented reality headset to see medical visualizations precisely overlaid onto the patient. Our main contribution is a strategy for markerless registration of 3D imaging to the patient's face. We use a neural network to detect the face using the headset's depth sensor and register it to computed tomography data. The face registration is seamlessly combined with the headset's continuous self-localization. We report on registration error and compare our approach to an external, high-precision tracking system.
Poster
Full-text available
An important part of these applications is medical image segmentation. Algorithms are needed to evaluate and compare medical image segmentations. These algorithms have been used and proven for years in all different fields of image processing. Although image segmentation has grown rapidly in medicine, a major part of the tools and applications has stayed the same for years. Especially in terms of availability, cross-platform support and usability, there is major room for improvement. This contribution aims to remedy the mentioned problems through the development of a cross-platform web tool for manual image segmentation and calculation of segmentation scores.
Article
Full-text available
We present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks in this study. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data for obtaining a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow us to conclude that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
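The proposed ground-truth generation can be sketched as follows: thresholding the PET channel of co-registered PET/CT data yields bladder labels, which are then augmented together with the CT slices. The threshold value and the augmentation operations here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def pet_to_label(pet: np.ndarray, threshold: float) -> np.ndarray:
    """High tracer uptake in the PET volume marks the urinary bladder."""
    return (pet >= threshold).astype(np.uint8)

def augment(ct_slice: np.ndarray, label: np.ndarray):
    """Apply the same random flip/rotation to an image and its label."""
    ops = [lambda a: a, np.fliplr, np.flipud, np.rot90]
    op = ops[np.random.randint(len(ops))]
    return op(ct_slice), op(label)

pet = np.random.rand(64, 64)                 # stand-in PET slice
label = pet_to_label(pet, threshold=0.9)     # assumed threshold
```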
Article
Full-text available
Image-based algorithmic software segmentation is an increasingly important topic in many medical fields. Algorithmic segmentation is used for medical three-dimensional visualization, diagnosis or treatment support, especially in complex medical cases. However, accessible medical databases are limited, and valid medical ground truth databases for the evaluation of algorithms are rare and usually comprise only a few images. Inaccuracy or invalidity of medical ground truth data and image-based artefacts also limit the creation of such databases, which is especially relevant for CT data sets of the maxillomandibular complex. This contribution provides a unique and accessible data set of the complete mandible, including 20 valid ground truth segmentation models originating from 10 CT scans from clinical practice without artefacts or faulty slices. From each CT scan, two 3D ground truth models were created by clinical experts through independent manual slice-by-slice segmentation, and the models were statistically compared to prove their validity. These data could be used to conduct serial image studies of the human mandible, evaluating segmentation algorithms and developing adequate image tools.
Chapter
Full-text available
We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient, since the inference stage only selectively samples a small number of patches in an iterative fashion, rather than densely sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. Quantitatively, PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth.
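The iterative inference loop described above (crop a patch around the current estimate, let the network predict a step toward the landmark, repeat until convergence) can be sketched as follows. This is a toy illustration, not the paper's implementation: `predict_step` is a dummy stand-in for the trained CNN that simply points toward the brightest voxel in the patch.

```python
import numpy as np

def predict_step(patch: np.ndarray) -> np.ndarray:
    """Stand-in for the trained CNN: returns a 3D displacement in voxels.
    This dummy points toward the brightest voxel in the patch."""
    peak = np.array(np.unravel_index(np.argmax(patch), patch.shape))
    return peak - np.array(patch.shape) / 2.0

def localise(volume, start, patch_size=32, max_iters=50, tol=0.5):
    pos = np.asarray(start, dtype=float)
    half = patch_size // 2
    for _ in range(max_iters):
        lo = np.clip(pos.astype(int) - half, 0,
                     np.array(volume.shape) - patch_size)
        patch = volume[lo[0]:lo[0]+patch_size,
                       lo[1]:lo[1]+patch_size,
                       lo[2]:lo[2]+patch_size]
        step = predict_step(patch)
        pos += step
        if np.linalg.norm(step) < tol:  # estimate has converged
            break
    return pos

# toy volume with one bright voxel acting as the "landmark"
vol = np.zeros((64, 64, 64)); vol[40, 35, 30] = 1.0
print(localise(vol, start=(32, 32, 32)))  # -> [40. 35. 30.]
```

The efficiency argument of the abstract is visible here: only a handful of patches are ever evaluated, instead of a dense scan over every voxel.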
Article
Full-text available
The lack of publicly available datasets of computed tomography angiography (CTA) images for pulmonary embolism (PE) is a problem felt by physicians and researchers. Although a number of computer-aided detection (CAD) systems have been developed for PE diagnosis, their performance is often evaluated using private datasets. In this paper, we introduce a new public dataset called FUMPE (standing for Ferdowsi University of Mashhad's PE dataset), which consists of three-dimensional PE-CTA images of 35 different subjects with 8792 slices in total. For each benchmark image, two expert radiologists provided the ground truth with the assistance of a semi-automated image processing software tool. FUMPE is a challenging benchmark for CAD methods because of the large number (3438) of PE regions and, especially, because of the location of most of them (67%) in lung peripheral arteries. Moreover, due to the reporting of the Qanadli score for each PE-CTA image, FUMPE is the first public dataset which can be used for the analysis of mortality and morbidity risks associated with PE. We also report some complementary prognosis information for each subject.
Article
Full-text available
The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
Article
Full-text available
Computed tomography (CT) was performed on an 18-year-old female pony with enterolithiasis in the prone and supine positions. CT images from the prone position revealed displacement of the large dorsal colon, which contained an enterolith, to the ventral side of the abdomen, and those from the supine position revealed displacement to the dorsal side. A high-density material suggestive of a metallic foreign body was also observed in the enterolith core. An enterolith (422 g, 104 mm) was surgically removed from the large dorsal colon. The surgery caused no complications, and the horse gained weight afterwards. Changing positions during CT helps identify the exact location of the enterolith and the intestinal displacement due to enterolith weight, as well as enterolith size and number.
Article
Full-text available
Importance: Non-contrast head CT scan is the current standard for initial imaging of patients with head trauma or stroke symptoms. Objective: To develop and validate a set of deep learning algorithms for automated detection of the following key findings from non-contrast head CT scans: intracranial hemorrhage (ICH) and its types, intraparenchymal (IPH), intraventricular (IVH), subdural (SDH), extradural (EDH), and subarachnoid (SAH) hemorrhages, calvarial fractures, midline shift, and mass effect. Design and Settings: We retrospectively collected a dataset containing 313,318 head CT scans along with their clinical reports from various centers. A part of this dataset (Qure25k dataset) was used for validation and the rest to develop the algorithms. Additionally, a dataset (CQ500 dataset) was collected from different centers in two batches (B1 and B2) to clinically validate the algorithms. Main Outcomes and Measures: The original clinical radiology report and the consensus of three independent radiologists were considered as gold standard for the Qure25k and CQ500 datasets, respectively. Area under the receiver operating characteristic curve (AUC) for each finding was primarily used to evaluate the algorithms. Results: The Qure25k dataset contained 21,095 scans (mean age 43.31; 42.87% female), while batches B1 and B2 of the CQ500 dataset consisted of 214 (mean age 43.40; 43.92% female) and 277 (mean age 51.70; 30.31% female) scans, respectively. On the Qure25k dataset, the algorithms achieved an AUC of 0.9194, 0.8977, 0.9559, 0.9161, 0.9288, and 0.9044 for detecting ICH, IPH, IVH, SDH, EDH, and SAH, respectively. AUCs for the same on the CQ500 dataset were 0.9419, 0.9544, 0.9310, 0.9521, 0.9731, and 0.9574, respectively. For detecting calvarial fractures, midline shift, and mass effect, AUCs on the CQ500 dataset were 0.9244, 0.9276, and 0.8583, respectively, while AUCs on the Qure25k dataset were 0.9624, 0.9697, and 0.9216, respectively.
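The AUC is the headline metric in both validations above. A minimal sketch of how a per-finding AUC might be computed with scikit-learn (the arrays are synthetic placeholders, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: gold-standard label per scan (e.g. ICH present yes/no),
# y_score: the algorithm's predicted probability for that finding
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)

print(f"AUC for this finding: {roc_auc_score(y_true, y_score):.4f}")
```

In a multi-finding setup like the one described, this computation is simply repeated per finding with its own label and score columns.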
Data
Full-text available
Non-contrast and contrast-enhanced t1-weighted MRI scans from a healthy subject. Non-contrast scan acquired 2010 and contrast-enhanced scan acquired 2015 (Note: the contrast-enhanced scan has been registered to the non-contrast scan). If you use anything for your own research, please give credits to: J. Egger "non-contrast and contrast-enhanced t1-weighted MRI scans", ResearchGate, January 2018. and L. Lindner, B. Pfarrkirchner, C. Gsaxner, D. Schmalstieg, J. Egger. "TuMore: Generation of Synthetic Brain Tumor MRI Data for Deep Learning Based Segmentation Approaches". SPIE Medical Imaging, Machine Learning and Artificial Intelligence, Paper 10579-63, February 2018.
Data
Full-text available
10 MRI Glioblastoma multiforme (GBM) Datasets with manual expert segmentations (Ground truth)
Article
Full-text available
In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models to the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures, always including the possibility of adjusting both the direction and the position of the osteosynthesis material. By using the proposed software, surgeons are able to pre-plan the resulting implant in its form and morphology with the aid of a computer-visualized model within a few minutes. Further, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. Using this technology, surgeons are able to print the virtually generated implant, or create an individually designed bending tool. This method yields osteosynthesis material adapted to the surrounding anatomy and requires only a minimal amount of money and time.
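The STL export step mentioned above could look like the following sketch using the numpy-stl package; this is an assumption for illustration (the paper does not name its libraries), with a single toy triangle standing in for the planned implant surface:

```python
import numpy as np
from stl import mesh  # pip install numpy-stl

# triangles: (n, 3, 3) array, one triple of 3D vertices per face
triangles = np.array([[[0.0, 0.0, 0.0],
                       [10.0, 0.0, 0.0],
                       [0.0, 10.0, 0.0]]])

implant = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
implant.vectors[:] = triangles        # assign face geometry
implant.save("planned_implant.stl")   # binary STL, ready for 3D printing
```

Any planning tool that produces a triangle mesh of the implant can export it this way; the STL file can then feed a 3D printer or downstream CAD software.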
Article
Full-text available
Quantitative extraction of high-dimensional mineable data from medical images is a process known as radiomics. Radiomics is foreseen as an essential prognostic tool for cancer risk assessment and the quantification of intratumoural heterogeneity. In this work, 1615 radiomic features (quantifying tumour image intensity, shape, texture) extracted from pre-treatment FDG-PET and CT images of 300 patients from four different cohorts were analyzed for the risk assessment of locoregional recurrences (LR) and distant metastases (DM) in head-and-neck cancer. Prediction models combining radiomic and clinical variables were constructed via random forests and imbalance-adjustment strategies using two of the four cohorts. Independent validation of the prediction and prognostic performance of the models was carried out on the other two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88). Furthermore, the results obtained via Kaplan-Meier analysis demonstrated the potential of radiomics for assessing the risk of specific tumour outcomes using multiple stratification groups. This could have important clinical impact, notably by allowing for a better personalization of chemo-radiation treatments for head-and-neck cancer patients from different risk groups.
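As a rough illustration of the modelling setup described above (random forests on combined radiomic and clinical variables with an imbalance adjustment), the following scikit-learn sketch uses class weighting as one simple imbalance-adjustment strategy; all data and names are synthetic assumptions, not the study's cohorts or features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 20))             # radiomic + clinical features (synthetic)
y = (rng.random(300) < 0.15).astype(int)   # rare outcome, e.g. distant metastases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" up-weights the rare positive class during training
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

The paper's actual imbalance-adjustment strategies may differ (e.g. resampling); class weighting is shown here only because it is the most compact to demonstrate.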
Conference Paper
Full-text available
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general, these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment, does not need to establish dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face), bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions. Testing code will be made available online, along with pre-trained models: http://aaronsplace.co.uk/papers/jackson2017recon.
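The core idea, direct regression of a voxel volume from a single 2D image, can be sketched in PyTorch as below. This toy network is only an illustration of the input/output structure (image in, occupancy volume out, with output channels read as depth slices); it is nothing like the paper's actual architecture:

```python
import torch
import torch.nn as nn

class Image2Volume(nn.Module):
    """Toy net regressing a 32x32x32 occupancy grid from a 128x128 RGB image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 32, 3, padding=1),                       # 32 channels
        )

    def forward(self, img):
        vol = self.encoder(img)       # (B, 32, 32, 32): channels act as depth
        return torch.sigmoid(vol)     # per-voxel occupancy probability

model = Image2Volume()
pred = model(torch.randn(1, 3, 128, 128))
print(pred.shape)  # torch.Size([1, 32, 32, 32])
```

Treating the final channel axis as the depth axis of the output volume is what turns a 2D convolutional network into a volumetric regressor.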
Article
Full-text available
Virtual Reality (VR), an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, like the medical domain. Examples are intervention planning, training, and simulation. This is especially useful for medical operations where an aesthetic outcome is important, like facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX, or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside MeVisLab. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.
Poster
Full-text available
Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR gets a lot of attention in computer games but also has great potential in other areas, like the medical domain. Examples are the planning, simulation, and training of medical interventions, such as facial surgeries where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from one's own application is needed. Furthermore, most researchers don't build their medical applications from scratch; rather, they use platforms like MeVisLab, Slicer, or MITK. These platforms have in common that they integrate and build upon libraries like ITK and VTK, providing a more convenient graphical interface to them for the user. In this contribution, we demonstrate the usage of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a dedicated module. This enables the direct and uncomplicated usage of head-mounted displays, like the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can be connected directly per drag-and-drop to our VR module and rendered inside the HTC Vive for immersive inspection.
Article
Full-text available
In this publication, the interactive planning and reconstruction of cranial 3D implants under the medical prototyping platform MeVisLab is introduced as an alternative to commercial planning software. In doing so, a MeVisLab prototype consisting of a customized data-flow network and a custom C++ module was set up. As a result, the Computer-Aided Design (CAD) software prototype guides a user through the whole workflow of generating an implant. The workflow begins with loading and mirroring the patient's head to obtain an initial curvature for the implant. The user can then perform an additional Laplacian smoothing, followed by a Delaunay triangulation. The result is an aesthetically pleasing and well-fitting 3D implant, which can be stored in a CAD file format, e.g. STereoLithography (STL), for 3D printing. The 3D-printed implant can finally be used for an in-depth pre-surgical evaluation or even as a real implant for the patient. In a nutshell, our research and development shows that a customized MeVisLab software prototype can be used as an alternative to complex commercial planning software, which may not be available in every clinic, and encourages us not to confine ourselves to available commercial software, but to look for other options that might improve the workflow.
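The Laplacian smoothing step in the workflow above is a standard mesh operation: each vertex is moved toward the average of its neighbours. A minimal numpy sketch with uniform weights (the paper's MeVisLab/C++ implementation surely differs; all names here are illustrative):

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Uniform Laplacian smoothing: each vertex relaxes toward its neighbour mean."""
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:                 # build adjacency from triangles
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        means = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                          for i, nb in enumerate(neighbours)])
        v += lam * (means - v)            # lam controls smoothing strength
    return v

# toy tetrahedron
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(laplacian_smooth(verts, faces, iterations=3))
```

Repeated smoothing shrinks and rounds the surface, which is exactly the effect one wants when turning a mirrored, jagged skull patch into an aesthetically smooth implant curvature.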
Article
The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends as well as algorithmic diversity. Furthermore, the evaluation results are presented, compared, and discussed with regard to the challenge aim: seeking low-cost, fast, and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The code can be found at https://github.com/Jianningli/tmi.
Article
Machine learning for health must be reproducible to ensure reliable clinical use. We evaluated 511 scientific papers across several machine learning subfields and found that machine learning for health compared poorly to other areas regarding reproducibility metrics, such as dataset and code accessibility. We propose recommendations to address this problem.
Book
The AutoImplant Cranial Implant Design Challenge (AutoImplant 2020: https://autoimplant.grand-challenge.org/) was initiated jointly by the Graz University of Technology (TU Graz) and the Medical University of Graz (MedUni Graz), Austria, through an interdisciplinary project between the two institutions, "Clinical Additive Manufacturing for Medical Applications" (CAMed: https://www.medunigraz.at/camed/). The project aims to provide more affordable, faster, and more patient-friendly solutions for the design and manufacturing of medical implants, including cranial implants, which are needed to repair a defective skull after brain tumor surgery or trauma.
Chapter
Aortic dissection (AD) is a condition of the main artery of the human body, resulting in the formation of a new flow channel, or false lumen (FL). The disease is usually diagnosed with a computed tomography angiography (CTA) scan during the acute phase. A better understanding of the causes of AD requires knowledge of aortic geometry prior to the event, which is available only in very rare circumstances. In this work, we propose an approach to reconstruct the aorta before the formation of a dissection by performing 3D inpainting with a two-stage generative adversarial network (GAN). In the first stage of our two-stage GAN, a network is trained on the 3D edge information of the healthy aorta to reconstruct the aortic wall. The second stage infers the image information of the aorta to reconstruct the entire dataset. We train our two-stage GAN with 3D patches from 55 non-dissected aortic datasets and evaluate it on 20 more non-dissected datasets, demonstrating that our proposed 3D architecture outperforms its 2D counterpart. To obtain pre-dissection aortae, we mask the entire FL in AD datasets. Finally, we provide qualitative feedback from a renowned expert on the obtained pre-dissection cases.
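The masking step mentioned above ("we mask the entire FL in AD datasets") is the preparation of the inpainting input: the false-lumen voxels are blanked out so the GAN can fill them in. A minimal numpy sketch, with all array names and the fill value as illustrative assumptions:

```python
import numpy as np

def mask_false_lumen(cta: np.ndarray, fl_mask: np.ndarray, fill_value: float = 0.0):
    """Blank out false-lumen voxels to form the inpainting input.

    cta:     CTA volume (e.g. Hounsfield units)
    fl_mask: binary volume, nonzero where the false lumen was segmented
    """
    masked = cta.astype(float).copy()
    masked[fl_mask.astype(bool)] = fill_value  # region the GAN must reconstruct
    return masked

# toy volumes
cta = np.random.normal(100, 30, size=(32, 32, 32))
fl = np.zeros_like(cta); fl[10:20, 10:20, 10:20] = 1
inpaint_input = mask_false_lumen(cta, fl)
```

The trained inpainting network then replaces the blanked region with plausible pre-dissection anatomy.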
Chapter
In this study, we present a baseline approach for AutoImplant (https://autoimplant.grand-challenge.org/), the cranial implant design challenge, which can be formulated as a volumetric shape learning task. In this task, the defective skull, the complete skull, and the cranial implant are represented as binary voxel grids. To accomplish this task, the implant can either be reconstructed directly from the defective skull or obtained by taking the difference between a defective skull and a complete skull. In the latter case, a complete skull has to be reconstructed from a defective skull, which defines a volumetric shape completion problem. Our baseline approach is based on the former formulation, i.e., a deep neural network is trained to predict the implants directly from the defective skulls. The approach generates high-quality implants in two steps: first, an encoder-decoder network learns a coarse representation of the implant from downsampled, defective skulls; the coarse implant is only used to generate the bounding box of the defective region in the original high-resolution skull. Second, another encoder-decoder network is trained to generate a fine implant from the bounded area. On the test set, the proposed approach achieves an average Dice similarity score (DSC) of 0.8555 and a Hausdorff distance (HD) of 5.1825 mm. The code is available at https://github.com/Jianningli/autoimplant.
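The glue between the two stages described above is deriving a high-resolution bounding box from the coarse prediction. A hedged numpy sketch of that step (margin, shapes, and names are assumptions; the actual code is in the linked repository):

```python
import numpy as np

def defect_bbox(coarse_implant: np.ndarray, low_shape, high_shape, margin=8):
    """Map the nonzero region of a coarse prediction to a high-res bounding box."""
    idx = np.argwhere(coarse_implant > 0)
    scale = np.array(high_shape) / np.array(low_shape)
    lo = np.maximum((idx.min(axis=0) * scale).astype(int) - margin, 0)
    hi = np.minimum(((idx.max(axis=0) + 1) * scale).astype(int) + margin,
                    high_shape)
    return lo, hi

# toy data: coarse 64^3 prediction mapped into a 512^3 skull
coarse = np.zeros((64, 64, 64)); coarse[30:40, 20:30, 25:35] = 1
lo, hi = defect_bbox(coarse, coarse.shape, (512, 512, 512))
print(lo, hi)  # crop skull[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] for stage two
```

This is why the coarse network can stay small: it only has to find the defect, while the second network does the fine reconstruction inside the crop.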
Article
Background and Objective: Augmented reality (AR) can help to overcome current limitations in computer-assisted head and neck surgery by granting "X-ray vision" to physicians. Still, the acceptance of AR in clinical applications is limited by technical and clinical challenges. We aim to demonstrate the benefit of a marker-free, instant-calibration AR system for head and neck cancer imaging, which we hypothesize to be acceptable and practical for clinical use. Methods: We implemented a novel AR system for the visualization of medical image data registered with the head or face of the patient prior to intervention. Our system allows the localization of head and neck carcinoma in relation to the outer anatomy. It does not require markers or stationary infrastructure, provides instant calibration, and allows 2D and 3D multi-modal visualization for head and neck surgery planning via an AR head-mounted display. We evaluated our system in a pre-clinical user study with eleven medical experts. Results: Medical experts rated our application with a System Usability Scale score of 74.8 ± 15.9, which signifies above-average, good usability and clinical acceptance. An average of 12.7 ± 6.6 minutes of training time was needed by physicians before they were able to navigate the application without assistance. Conclusions: Our AR system is characterized by a slim and easy setup, short training time, and high usability and acceptance. It therefore presents a promising novel tool for visualizing head and neck cancer imaging and the pre-surgical localization of target structures.
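For readers unfamiliar with the System Usability Scale (SUS) score reported above: it is derived from ten 1-5 Likert items via a fixed formula (odd items contribute score minus one, even items five minus score, the sum scaled by 2.5 to a 0-100 range). A short sketch with made-up responses, not the study's data:

```python
def sus_score(responses):
    """Standard SUS: ten 1-5 Likert items -> score in [0, 100]."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: positive items, even: negative
    return total * 2.5

# made-up answers from one hypothetical participant
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

Scores around 68 are commonly taken as average usability, which is why 74.8 is read as above-average in the abstract.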
Conference Paper
Augmented reality for medical applications allows physicians to obtain an inside view into the patient without surgery. In this context, we present an augmented reality application running on a standard smartphone or tablet computer, providing visualizations of medical image data, overlaid with the patient, in a video see-through fashion. Our system is based on the registration of medical imaging data to the patient using a single 2D photograph of the patient. From this image, a 3D model of the patient's face is reconstructed using a convolutional neural network, to which a pre-operative CT scan is automatically registered. For efficient processing, this is performed on a server PC. Finally, anatomical and pathological information is sent back to the mobile device and can be displayed, accurately registered with the live patient, on the screen. Hence, our cost-effective, markerless approach needs only a smartphone and a server PC for image processing. We present a qualitative and quantitative evaluation using real patient photos and CT data from the clinical routine in facial surgery, reporting overall processing times and registration errors.
Article
Aortic dissection (AD) is a condition of the main artery of the human body, resulting in the formation of a new flow channel, or false lumen. The disease is usually diagnosed with a computed tomography angiography scan during the acute phase. A better understanding of the causes of AD requires knowledge of the aortic geometry (segmentation), including the true and false lumina, which is very time-consuming to reconstruct when performed manually on a slice-by-slice basis. Hence, different automatic and semi-automatic medical image analysis approaches have been proposed for this task over the last years. In this review, we present and discuss these computing techniques used to segment dissected aortas, also in regard to the detection and visualization of clinically relevant information and features from dissected aortas for customized patient-specific treatments.
Article
Introduction: Various prefabricated maxillofacial implants are used in the clinical routine for the surgical treatment of patients. In addition to these prefabricated implants, customized CAD/CAM implants are becoming increasingly important for a more precise replacement of damaged anatomical structures. This paper reviews the design and manufacturing of patient-specific implants for the maxillofacial area. Areas covered: The contribution of this publication is to give a state-of-the-art overview of the usage of customized facial implants. Moreover, it provides future perspectives, including 3D printing technologies, for the manufacturing of patient-individual facial implants based on the patient's data acquisitions, like Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Expert opinion: The main target of this review is to present the various design software packages and 3D manufacturing technologies that have been applied to fabricate facial implants. In doing so, different CAD design software packages, which are based on various methods and have been implemented and evaluated by researchers, are discussed. Finally, recent 3D printing technologies that have been applied to manufacture patient-individual implants are introduced and discussed.
Article
Machine learning models have great potential in biomedical applications. A new platform called GradioHub offers an interactive and intuitive way for clinicians and biomedical researchers to try out models and test their reliability on real-world, out-of-training data.
Conference Paper
Image segmentation plays a major role in medical imaging. Especially in radiology, the detection and monitoring of tumors and other diseases can be supported by image segmentation applications. Tools that provide image segmentation and the calculation of segmentation scores are not available at all times on every device, due to the size and scope of the functionalities they offer. These tools need large periodic updates and do not work properly on old or low-powered systems. However, medical use cases often require fast and accurate results; complex and slow software can lead to additional stress and thus unnecessary errors. The aim of this contribution is the development of a cross-platform tool for medical segmentation use cases: a device-independent, always-available option for medical imaging, including manual segmentation and metric calculation. The result is Studierfenster (studierfenster.at), a web tool for manual segmentation and segmentation metric calculation. In this contribution, the focus lies on the segmentation metric calculation part of the tool. It provides functionalities for calculating directed and undirected Hausdorff Distance (HD) and Dice Similarity Coefficient (DSC) scores for two uploaded volumes, filtering for specific values, searching for specific values in the calculated metrics, and exporting filtered metric lists in different file formats.
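The two scores named above are standard and easy to state precisely: DSC = 2|A ∩ B| / (|A| + |B|) measures overlap, while the directed Hausdorff distance measures the worst-case gap from one set to the other (the undirected HD is the maximum of the two directions). A minimal sketch for binary volumes using scipy, not Studierfenster's actual implementation (distances here are in voxel units; a real tool would scale by voxel spacing):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a: np.ndarray, b: np.ndarray):
    """Directed HDs in both directions and their maximum (undirected HD)."""
    pa, pb = np.argwhere(a), np.argwhere(b)   # voxel coordinates of each set
    d_ab = directed_hausdorff(pa, pb)[0]
    d_ba = directed_hausdorff(pb, pa)[0]
    return d_ab, d_ba, max(d_ab, d_ba)

# toy example: two overlapping cubes in a 64^3 grid
x = np.zeros((64, 64, 64), np.uint8); x[10:30, 10:30, 10:30] = 1
y = np.zeros_like(x);                 y[15:35, 15:35, 15:35] = 1
print("DSC:", dice(x, y), "undirected HD:", hausdorff(x, y)[2])
```

Reporting both directed distances, as the tool does, is useful because the two directions can differ substantially when one segmentation has outlier regions the other lacks.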
Conference Paper
In this work, fully automatic binary segmentation of GBMs (glioblastoma multiforme) in 2D magnetic resonance images is presented, using a convolutional neural network trained exclusively on synthetic data. The precise segmentation of brain tumors is one of the most complex and challenging tasks in clinical practice and is usually done manually by radiologists or physicians. However, manual delineations are time-consuming, subjective, and in general not reproducible. Hence, more advanced automated segmentation techniques are in great demand. Having already demonstrated their practical usefulness in other domains, deep learning methods are now also attracting increasing interest in the field of medical image processing. Using fully convolutional neural networks for medical image segmentation provides considerable advantages, as it is a reliable, fast, and objective technique. In the medical domain, however, only a very limited amount of data is available in the majority of cases, due to privacy issues among other things. Nevertheless, a sufficiently large training data set with ground truth annotations is required to successfully train a deep segmentation network. Therefore, a semi-automatic method for generating synthetic GBM data and the corresponding ground truth was utilized in this work. A U-Net-based segmentation network was then trained solely on this synthetically generated data set. Finally, the segmentation performance of the model was evaluated using real magnetic resonance images of GBMs.
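A common loss choice when training U-Net-style binary segmentation networks like the one described is a soft Dice loss, which directly optimizes the overlap metric the field evaluates with. A minimal PyTorch sketch, offered as an illustration rather than the paper's exact training code:

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """1 - soft DSC, averaged over the batch, for binary targets."""
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, target.ndim))          # sum over channel/spatial dims
    inter = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

# toy usage with a random prediction and mask
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.7).float()
loss = soft_dice_loss(logits, target)
loss.backward()
print(loss.item())
```

Unlike plain cross-entropy, the Dice loss is insensitive to the large class imbalance between tumor and background pixels, which matters for small lesions.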
Article
Artificial intelligence (AI) — the ability of a machine to perform cognitive tasks to achieve a particular goal based on provided data — is revolutionizing and reshaping our health-care systems. The current availability of ever-increasing computational power, highly developed pattern recognition algorithms and advanced image processing software working at very high speeds has led to the emergence of computer-based systems that are trained to perform complex tasks in bioinformatics, medical imaging and medical robotics. Accessibility to ‘big data’ enables the ‘cognitive’ computer to scan billions of bits of unstructured information, extract the relevant information and recognize complex patterns with increasing confidence. Computer-based decision-support systems based on machine learning (ML) have the potential to revolutionize medicine by performing complex tasks that are currently assigned to specialists to improve diagnostic accuracy, increase efficiency of throughputs, improve clinical workflow, decrease human resource costs and improve treatment choices. These characteristics could be especially helpful in the management of prostate cancer, with growing applications in diagnostic imaging, surgical interventions, skills training and assessment, digital pathology and genomics. Medicine must adapt to this changing world, and urologists, oncologists, radiologists and pathologists, as high-volume users of imaging and pathology, need to understand this burgeoning science and acknowledge that the development of highly accurate AI-based decision-support applications of ML will require collaboration between data scientists, computer researchers and engineers.
Chapter
Computer-aided Design (CAD) software enables the design of patient-specific cranial implants, but it often requires a lot of manual user interaction. This paper proposes a Deep Learning (DL) approach towards the automated CAD of cranial implants, allowing the design process to be less user-dependent and even less time-consuming. The problem of reconstructing a cranial defect, which is essentially filling in a region in a skull, was posed as a 3D shape completion task and, to solve it, a Volumetric Convolutional Denoising Autoencoder was implemented using the open-source DL framework PyTorch. The autoencoder was trained on 3D skull models obtained by processing an open-access dataset of Magnetic Resonance Imaging brain scans. The 3D skull models were represented as binary voxel occupancy grids, and experiments were carried out for different voxel resolutions. For each experiment, the autoencoder was evaluated in terms of quantitative and qualitative 3D shape completion performance. The obtained results showed that the implemented Deep Neural Network is able to perform shape completion on 3D models of defective skulls, allowing for an efficient and automatic reconstruction of cranial defects.
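Since the abstract names both the technique (a volumetric convolutional denoising autoencoder on binary voxel grids) and the framework (PyTorch), a tiny sketch of such a network follows; layer sizes, resolution, and the placeholder loss target are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class VoxelDAE(nn.Module):
    """Tiny 3D denoising autoencoder: defective skull in, completed skull out."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),              # 16 -> 32
        )

    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))  # voxel occupancy probabilities

model = VoxelDAE()
defective = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()  # "noisy"/defective input
complete = model(defective)
# in training, the target would be the intact skull; the input is reused
# here only so the snippet runs standalone
loss = nn.functional.binary_cross_entropy(complete, defective)
print(complete.shape, loss.item())
```

The "denoising" framing is what makes this a completion network: the defect plays the role of noise, and the autoencoder learns to restore the intact skull.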
Article
BACKGROUND: Non-contrast head CT scan is the current standard for initial imaging of patients with head trauma or stroke symptoms. We aimed to develop and validate a set of deep learning algorithms for automated detection of the following key findings from these scans: intracranial haemorrhage and its types (ie, intraparenchymal, intraventricular, subdural, extradural, and subarachnoid); calvarial fractures; midline shift; and mass effect. METHODS: We retrospectively collected a dataset containing 313,318 head CT scans together with their clinical reports from around 20 centres in India between Jan 1, 2011, and June 1, 2017. A randomly selected part of this dataset (Qure25k dataset) was used for validation and the rest was used to develop algorithms. An additional validation dataset (CQ500 dataset) was collected in two batches from centres that were different from those used for the development and Qure25k datasets. We excluded postoperative scans and scans of patients younger than 7 years. The original clinical radiology report and consensus of three independent radiologists were considered as gold standard for the Qure25k and CQ500 datasets, respectively. Areas under the receiver operating characteristic curves (AUCs) were primarily used to assess the algorithms. FINDINGS: The Qure25k dataset contained 21,095 scans (mean age 43 years; 9030 [43%] female patients), and the CQ500 dataset consisted of 214 scans in the first batch (mean age 43 years; 94 [44%] female patients) and 277 scans in the second batch (mean age 52 years; 84 [30%] female patients). On the Qure25k dataset, the algorithms achieved an AUC of 0.92 (95% CI 0.91-0.93) for detecting intracranial haemorrhage (0.90 [0.89-0.91] for intraparenchymal, 0.96 [0.94-0.97] for intraventricular, 0.92 [0.90-0.93] for subdural, 0.93 [0.91-0.95] for extradural, and 0.90 [0.89-0.92] for subarachnoid). On the CQ500 dataset, AUC was 0.94 (0.92-0.97) for intracranial haemorrhage (0.95 [0.93-0.98], 0.93 [0.87-1.00], 0.95 [0.91-0.99], 0.97 [0.91-1.00], and 0.96 [0.92-0.99], respectively). AUCs on the Qure25k dataset were 0.92 (0.91-0.94) for calvarial fractures, 0.93 (0.91-0.94) for midline shift, and 0.86 (0.85-0.87) for mass effect, while AUCs on the CQ500 dataset were 0.96 (0.92-1.00), 0.97 (0.94-1.00), and 0.92 (0.89-0.95), respectively. INTERPRETATION: Our results show that deep learning algorithms can accurately identify head CT scan abnormalities requiring urgent attention, opening up the possibility of using these algorithms to automate the triage process. FUNDING: Qure.ai.
Chapter
This chapter is devoted to segmentation methods for images and video. For image segmentation, five types of methods are detailed: threshold segmentation, region-based segmentation, partial differential equation-based segmentation, clustering-based segmentation, and graph theory-based segmentation. For video segmentation, a motion region extraction method based on cumulative difference is introduced.
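Two of the listed techniques are compact enough to illustrate directly: global thresholding for images, and cumulative frame differencing for motion region extraction in video. A minimal numpy sketch with toy data (thresholds and names are illustrative, not the chapter's notation):

```python
import numpy as np

def threshold_segment(image: np.ndarray, t: float) -> np.ndarray:
    """Simplest threshold segmentation: foreground where intensity > t."""
    return (image > t).astype(np.uint8)

def cumulative_difference(frames: np.ndarray, t: float) -> np.ndarray:
    """Motion region extraction: accumulate |frame - previous frame| over
    time, then threshold the accumulated difference image."""
    acc = np.zeros(frames.shape[1:], dtype=float)
    for prev, cur in zip(frames[:-1], frames[1:]):
        acc += np.abs(cur.astype(float) - prev.astype(float))
    return (acc > t).astype(np.uint8)

# toy data: 10 frames of 64x64 video with a moving bright block
frames = np.zeros((10, 64, 64))
for i in range(10):
    frames[i, 20:30, i*3:i*3+10] = 255
print(cumulative_difference(frames, t=200).sum(), "moving pixels detected")
```

Accumulating differences over many frames, rather than comparing only one pair, makes the extracted motion region more robust to frames where the object briefly stalls.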
Conference Paper
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called dropout that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
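The network described here survives as the canonical AlexNet, and a re-implementation ships with torchvision. A quick sketch instantiating it and checking the parameter count against the abstract's figure (note that torchvision's variant differs slightly in detail from the original two-GPU model):

```python
import torch
from torchvision.models import alexnet

model = alexnet(weights=None)  # untrained AlexNet-style net, 1000-way output
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # ~61M, close to the paper's 60M

logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```

The five-conv-plus-three-FC structure and the dropout layers in the classifier head are visible directly in `model`'s printed representation.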