Article

Abstract

The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods successfully registered over 98% of all landmarks, and their mean landmark registration accuracy (target registration error, TRE) was 0.44% of the image diagonal. The challenge remains open to submissions and all images are available for download.
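The evaluation metric reported above, mean TRE expressed as a fraction of the image diagonal, can be sketched in a few lines. This is a minimal illustration, not the challenge's official evaluation code; the function and variable names are our own:

```python
import math

def relative_tre(fixed_pts, warped_pts, image_shape):
    """Mean target registration error (TRE) between corresponding
    landmarks, expressed as a fraction of the image diagonal."""
    errors = [math.hypot(fx - wx, fy - wy)
              for (fx, fy), (wx, wy) in zip(fixed_pts, warped_pts)]
    diagonal = math.hypot(image_shape[0], image_shape[1])
    return sum(errors) / len(errors) / diagonal

# Two landmarks, one registered perfectly and one off by 5 px,
# in a 30 x 40 image (diagonal 50 px): mean error 2.5 px -> 0.05.
score = relative_tre([(0, 0), (3, 4)], [(0, 0), (0, 0)], (30, 40))
```

Normalizing by the diagonal makes scores comparable across datasets with very different image sizes, which matters here since the challenge mixes 8 datasets.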


... Existing methods achieve only limited success due to the overwhelmingly large image scale, complex histology structure changes across adjacent slides, and significant tissue appearance differences between stains. 16 Although a successful 3D histology image reconstruction method has been developed, it requires ground-truth images (i.e. block-face images) and image stacks of relatively limited size (e.g. ...
... 6 Recently, the ANHIR challenge was organized to systematically compare the performance of image registration algorithms for microscopy histology images. 16 Most methods described in the ANHIR challenge are based on a rigid registration followed by a non-rigid registration. The rigid registration is determined by RANSAC from feature points, whereas the non-rigid registration is performed by local affine transformations, demons algorithms, or interpolation. ...
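The RANSAC-based rigid initialization mentioned in the excerpt above can be illustrated with a deliberately simplified sketch that estimates only a translation from point correspondences containing outliers (a real pipeline would estimate a full rigid or affine transform from SIFT-like feature matches; all names here are illustrative):

```python
import math
import random

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from point correspondences that may
    contain outliers: repeatedly hypothesize a shift from a single
    correspondence and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_shift, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(src))
        dx, dy = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = sum(
            1 for (sx, sy), (tx, ty) in zip(src, dst)
            if math.hypot(sx + dx - tx, sy + dy - ty) <= tol
        )
        if inliers > best_inliers:
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers

# Three correspondences shifted by (10, -3), plus one gross outlier.
src = [(0, 0), (1, 0), (0, 1), (5, 5)]
dst = [(10, -3), (11, -3), (10, -2), (100, 100)]
shift, n_inliers = ransac_translation(src, dst)  # (10, -3), 3 inliers
```

The outlier can never attract more than one inlier, so the consensus shift wins; this robustness to bad matches is exactly why RANSAC is a popular choice for coarse initial alignment.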
... - CGNReg is an unsupervised registration approach and can be efficiently trained without ground-truth image deformation information. - Instead of resizing images to a lower resolution, 16 we recover the WSI registration results with patch-based image registration results at the highest image resolution. Therefore, high-resolution tissue details from WSIs are retained. ...
Article
Full-text available
For routine pathology diagnosis and imaging-based biomedical research, whole-slide image (WSI) analyses have been largely limited to a 2D tissue image space. For a more definitive tissue representation to support fine-resolution spatial and integrative analyses, it is critical to extend such tissue-based investigations to a 3D tissue space with spatially aligned serial tissue WSIs in different stains, such as Hematoxylin and Eosin (H&E) and Immunohistochemistry (IHC) biomarkers. However, such WSI registration is technically challenged by the overwhelming image scale, the complex histology structure change, and the significant difference in tissue appearances in different stains. The goal of this study is to register serial sections from multi-stain histopathology whole-slide image blocks. We propose a novel translation-based deep learning registration network, CGNReg, that spatially aligns serial WSIs stained in H&E and by IHC biomarkers without prior deformation information for the model training. First, synthetic IHC images are produced from H&E slides through a robust image synthesis algorithm. Next, the synthetic and the real IHC images are registered through a Fully Convolutional Network with multi-scaled deformable vector fields and a joint loss optimization. We perform the registration at the full image resolution, retaining the tissue details in the results. Evaluated on a dataset of 76 breast cancer patients with 1 H&E and 2 IHC serial WSIs per patient, CGNReg presents promising performance compared with multiple state-of-the-art systems in our evaluation. Our results suggest that CGNReg can produce promising registration results with serial WSIs in different stains, enabling integrative 3D tissue-based biomedical investigations.
... Recently, deep learning (DL) has made big leaps in many fields of image processing [15]. In particular, there are some studies on histological image registration based on DL [16]- [18]. One method proposed by Zhao et al. as mentioned in [16] is based on a supervised convolutional neural network (CNN) with manually labeled key points. ...
... In particular, there are some studies on histological image registration based on DL [16]- [18]. One method proposed by Zhao et al. as mentioned in [16] is based on a supervised convolutional neural network (CNN) with manually labeled key points. In such supervised learning-based methods, the ground-truth data with known correspondences across the set of training images is required [19]. ...
... There are repetitive textures in histological images because of homogeneous tissues; 3) sections may be missing as a result of tissue processing [16]. Given these challenges, common strategies for performance optimization in registration do not perform well on histological images [16]. ...
Article
Full-text available
Registration of multi-stain images is a fundamental task in histological image analysis. In supervised methods, obtaining ground-truth data with known correspondences is laborious and time-consuming, so unsupervised methods are desirable; they ease the burden of manual annotation, but often at the cost of inferior results. In addition, registration of histological images suffers from appearance variance due to multiple staining, repetitive texture, and missing sections introduced during tissue sectioning. To deal with these challenges, we propose an unsupervised structural feature guided convolutional neural network (SFG). Structural features are robust to multiple staining, and the combination of low-resolution rough structural features and high-resolution fine structural features can overcome repetitive texture and missing sections, respectively. SFG consists of two components of structural consistency constraints according to the formation of the structural features: a dense structural component and a sparse structural component. The dense structural component uses structural feature maps of the whole image as structural consistency constraints, which represent local contextual information. The sparse structural component utilizes the distance between automatically obtained matched key points as structural consistency constraints, because the matched key points in an image pair emphasize the matching of significant structures, which imply global information. In addition, a multi-scale strategy is used in both components to make full use of the structural information at low and high resolution, overcoming repetitive texture and missing sections. The proposed method was evaluated on a public histological dataset (ANHIR) and ranked first as of Jan 18th, 2022.
... Histology image registration involves aligning microscopy images for various purposes, including creating 3D reconstructions from 2D slices [2], combining data from slices with different stain samples [3], and performing multimodal registration [4]. ...
... Furthermore, histology image registration is challenging due to various factors such as large image sizes (Giga-pixel resolution), repetitive texture, non-linear elastic deformation, occlusions, missing sections, non-rigid deformation, contrast differences between the images, differences in appearance, and local structure between slices. These challenges make it difficult to find unique landmarks for alignment and to ensure accurate registration [3], [6]. ...
... (iii) Benchmarking. In image processing applications and machine learning, benchmark datasets have been the gold standard to verify and validate the performance of dedicated algorithms, e.g., [83][84][85][86]. Typically, these images stem from real objects or were artificially generated. ...
... Furthermore, different staining protocols need to be accounted for. This problem is currently being tackled by the participants of the Automatic Non-rigid Histological Image Registration (ANHIR) Challenge [85,86,88,89]. ...
Article
Full-text available
Classical analysis of biological samples requires the destruction of the tissue's integrity by cutting or grinding it down to thin slices for (immuno)histochemical staining and microscopic analysis. Despite the high specificity encoded in the stained 2D section of the whole tissue, the structural information, especially 3D information, is limited. Computed tomography (CT) or magnetic resonance imaging (MRI) scans performed prior to sectioning, in combination with image registration algorithms, provide an opportunity to regain access to morphological characteristics and to relate histological findings to the 3D structure of the local tissue environment. This review provides a summary of prevalent literature addressing the problem of multimodal coregistration of hard and soft tissue in microscopy and tomography. Grouped according to the complexity of the dimensions, including image-to-volume (2D ⟶ 3D), image-to-image (2D ⟶ 2D), and volume-to-volume (3D ⟶ 3D), selected currently applied approaches are investigated by comparing the method accuracy with respect to the limiting resolution of the tomography. Correlation of multimodal imaging could position itself as a useful tool allowing for precise histological diagnostics and a priori planning of tissue extraction such as biopsies.
... In the context of histopathology image analysis, one of the important initial tasks is to visually compare tissue sections of the same subject but of different biomarkers in a single frame. This is achieved by determining a transformation that maximizes the similarity between a reference image and its associated moving image (Borovec et al., 2018; Borovec et al., 2020; Fernandez-Gonzalez et al., 2002; Gupta et al., 2018). With the aim of improving precision and reducing ... [Figure 6: challenging examples of (a) normal and (b) tumor epithelial regions randomly extracted from WSIs stained by AE1AE3 (bright brown) and BerEP4 (dark brown)]
... More recently, the Automatic Non-rigid Histological Image Registration (ANHIR) challenge (Borovec et al., 2020) was organized as part of ISBI 2019 to compare the performance of different image registration models on several different histopathology images (see Figure 1). Most of the models submitted to this grand challenge used similar classical techniques. ...
Article
Full-text available
The increasing adoption of the whole slide image (WSI) technology in histopathology has dramatically transformed pathologists' workflow and allowed the use of computer systems in histopathology analysis. Extensive research in Artificial Intelligence (AI) has been conducted with huge progress, resulting in efficient, effective, and robust algorithms for several applications including cancer diagnosis, prognosis, and treatment. These algorithms offer highly accurate predictions but lack transparency, understandability, and actionability. Thus, explainable artificial intelligence (XAI) techniques are needed not only to understand the mechanism behind the decisions made by AI methods and increase user trust, but also to broaden the use of AI algorithms in the clinical setting. From a survey of over 150 papers, we explore different AI algorithms that have been applied and contributed to the histopathology image analysis workflow. We first address the workflow of the histopathological process. We present an overview of various learning‐based, XAI, and actionable techniques relevant to deep learning methods in histopathological imaging. We also address the evaluation of XAI methods and the need to ensure their reliability in the field. This article is categorized under: Application Areas > Health Care
... Mostly, data is stored in a centralized location for collaboration. A number of data-sharing repositories exist for medical fields, e.g., radiology, pathology, and genomics [2,6,10]. Collaborative data sharing (CDS) has several issues, i.e., patient privacy, data ownership, and international laws [8]. ...
... On each iteration, the model is trained locally, and the weights/gradients are then transferred to the server (Algorithm 1, lines 3 to 9). The server receives the model and then aggregates the weights of all the clients (Algorithm 1, lines 5 to 6). This helps the model learn from the diverse institute data and transfer it to all the client connections. ...
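The server-side aggregation described in the excerpt above, clients sending weights that the server combines, is commonly implemented as federated averaging. A minimal sketch, assuming size-weighted averaging of flat parameter vectors (the function and variable names are ours, not the paper's):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter across clients,
    weighted by the number of local training samples per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[j] * s for w, s in zip(client_weights, client_sizes)) / total
        for j in range(n_params)
    ]

# Two clients; the second holds three times as much data, so it
# pulls the global parameters three times as strongly.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # [2.5, 3.5]
```

Weighting by dataset size keeps a small client from dominating the global model, which is one way such schemes let the model "learn from the diverse institute data" without centralizing it.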
Article
Full-text available
In the Internet of Medical Things (IoMT), collaboration among institutes can help complex medical and clinical analysis of disease. Deep neural networks (DNN) require training on large, diverse patient populations to achieve expert clinician-level performance. Clinical studies do not contain diverse patient populations for analysis due to limited availability and scale. DNN models trained on limited datasets thereby constrain their clinical performance upon deployment at a new hospital. Therefore, there is significant value in increasing the availability of diverse training data. This research proposes institutional data collaboration alongside an adversarial evasion method to keep the data secure. The model uses a federated learning approach to share model weights and gradients. The local model first studies the unlabeled samples, classifying them as adversarial or normal. The method then uses a centroid-based clustering technique to cluster the sample images. After that, the model predicts the output of the selected images, and active learning methods are implemented to choose the sub-sample for the human annotation task. An expert within the domain takes the input and confidence score and validates the samples for the model's training. The model re-trains on the new samples and sends the updated weights across the network for collaboration purposes. We use the InceptionV3 and VGG16 models under fabricated inputs to simulate Fast Gradient Sign Method (FGSM) attacks. The model was able to evade attacks and achieve a high accuracy rating of 95%.
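The FGSM attack simulated in the abstract above perturbs each input in the direction of the sign of the loss gradient. A framework-free sketch of the perturbation step (in practice the gradient comes from autodiff, which is elided here; the names are illustrative):

```python
def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method step: move each pixel by +/- eps
    along the sign of the loss gradient, then clip to [0, 1]."""
    sign = lambda g: (g > 0) - (g < 0)
    return [min(1.0, max(0.0, xi + eps * sign(gi)))
            for xi, gi in zip(x, grad)]

# Only the sign of the gradient matters, not its magnitude;
# the last pixel is clipped at the valid intensity range.
adv = fgsm_perturb([0.5, 0.2, 0.99], [0.3, -1.2, 0.4], eps=0.1)
# adv is approximately [0.6, 0.1, 1.0]
```

Because every pixel moves by at most eps, the adversarial image stays visually close to the original, which is what makes such attacks a realistic threat model for shared medical-imaging models.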
... The latter group includes techniques with rigid or affine body transformation. The cost functions are based on histology feature matching (such as blood vessels or small tissue voids) [7], mutual information of pixel intensities [23], and many others [2,3,18,28]. More sophisticated methods include tissue mask boundary alignment [19], background segmentation with B-spline registration [12], step-wise registrations of image patches with kernel density estimation, and hierarchical resolution regression [16]. ...
... Registration methods tested in this study originate from two families: feature-based and intensity-based. The feature-based methods, such as the B-spline elastic [3] (ELT) and moving least squares [34] (MLS), utilize the scale-invariant feature transform (SIFT) [21], which finds grid points in the two images subjected to registration. Since the points are found separately for each image, they may not correspond in number and location between the two images. ...
Chapter
Full-text available
Registration of images from re-stained tissue sections is an initial step in generating ground truth image data for machine learning applications in histopathology. In this paper, we focused on evaluating existing feature-based and intensity-based registration methods using regions of interest (ROIs) extracted from whole slide images (n = 25) of human ileum that was first stained with hematoxylin and eosin (H&E) and then re-stained with immunohistochemistry (IHC). Elastic and moving least squares deformation models with rigid, affine and similarity feature matching were compared with intensity-based methods utilizing an optimizer to find rigid and affine transformation parameters. Corresponding color H&E and IHC ROIs were registered through gray-level luminance and deconvoluted hematoxylin images. Our goal was to identify methods that can yield a high number of correctly registered ROIs and low median (MTRE) and average (ATRE) target registration errors. Based on the benchmark landmarks (n = 5020) placed across the ROIs, the elastic deformation model with rigid matching and the intensity-based rigid registration on color-deconvoluted hematoxylin channels yielded the highest rates (86%, 100%) of correctly registered ROIs. For these two methods, the MTRE was 2.00 and 2.12 pixels (0.982 µm, 1.04 µm), and the ATRE was 3.14 and 4.0 pixels (1.54 µm, 1.964 µm), respectively. Although the intensity-based rigid registration was the slowest of all methods tested, it may be more practical in use due to the highest rate of correctly registered ROIs and the second-best MTRE. The WSIs and ROIs with landmarks that we prepared can be valuable in benchmarking other image registration approaches. Keywords: Multimodal image registration, Computational pathology, Tissue imaging
... The objective of the challenge was to align tissue in the IHC WSIs to corresponding tissue in the H&E WSIs. WSI registration has previously been addressed in the ANHIR challenge 29 . While the ANHIR challenge made valuable contributions to the field of WSI registration, it was limited by the high quality of sections and WSIs, which is not representative of clinical material, as well as by the limited amount of training and test data (355 WSIs in total), albeit originating from a wide variety of organs and stains. ...
Preprint
Full-text available
The alignment of tissue between histopathological whole-slide-images (WSI) is crucial for research and clinical applications. Advances in computing, deep learning, and availability of large WSI datasets have revolutionised WSI analysis. Therefore, the current state-of-the-art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, including 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue that was stained with routine diagnostic immunohistochemistry to its H&E-stained counterpart. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can lead to highly accurate registration performances and identify covariates that impact performances across methods. These results establish the current state-of-the-art in WSI registration and guide researchers in selecting and developing methods.
... This integrative approach can resolve local transcriptomic profiles of pathology-associated proteins to comprehensively assess the molecular impact of neuropathology on spatially-defined microenvironments. Simultaneous detection of mRNA and proteins on the same tissue section reduces the computational burden associated with aligning spatially adjacent tissue sections and their latent structural variations in tissue architecture (18, 73-75). ...
Preprint
Full-text available
Neuropathological lesions in the brains of individuals affected with neurodegenerative disorders are hypothesized to trigger molecular and cellular processes that disturb homeostasis of local microenvironments. Here, we applied the 10x Genomics Visium Spatial Proteogenomics (Visium-SPG) platform, which measures spatial gene expression coupled with immunofluorescence protein co-detection, in post-mortem human brain tissue from individuals with late-stage Alzheimer's disease (AD) to investigate changes in spatial gene expression with respect to amyloid-β (Aβ) and hyperphosphorylated tau (pTau) pathology. We identified Aβ-associated transcriptomic signatures in the human inferior temporal cortex (ITC) during late-stage AD, which we further investigated at cellular resolution with combined immunofluorescence and single molecule fluorescent in situ hybridization (smFISH) co-detection technology. We present a workflow for analysis of Visium-SPG data and demonstrate the power of multi-omic profiling to identify spatially-localized changes in molecular dynamics that are linked to pathology in human brain disease. We provide the scientific community with web-based, interactive resources to access the datasets of the spatially resolved AD-related transcriptomes at https://research.libd.org/Visium_SPG_AD/.
... Specifically, we adopted a multi-step, automatic, and non-rigid histological image registration method 62,63 and applied it to our dataset. First, the images were converted into grayscale, downsampled, and histogram equalized. ...
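The preprocessing steps quoted above (grayscale conversion, downsampling, histogram equalization) are standard; the equalization step alone can be sketched as follows. This is a toy version operating on a flat list of integer gray levels, not the cited pipeline's code:

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: map each gray level through the
    normalized cumulative distribution function (CDF), spreading the
    intensities over the full dynamic range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    cdf_min = min(c for c in cdf if c > 0)
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    # Round to the nearest output level.
    return [int((cdf[p] - cdf_min) * scale + 0.5) for p in pixels]

# Four pixels over a 4-level range get stretched to span it fully.
out = equalize([0, 0, 1, 2], levels=4)  # [0, 0, 2, 3]
```

Equalizing both images before registration makes intensity-based similarity measures less sensitive to stain-dependent brightness differences, which is presumably why it appears in this pipeline.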
Article
Full-text available
Advances in computational algorithms and tools have made the prediction of cancer patient outcomes using computational pathology feasible. However, predicting clinical outcomes from pre-treatment histopathologic images remains a challenging task, limited by the poor understanding of tumor immune micro-environments. In this study, an automatic, accurate, comprehensive, interpretable, and reproducible whole slide image (WSI) feature extraction pipeline, known as IMage-based Pathological REgistration and Segmentation Statistics (IMPRESS), is described. Using both H&E and multiplex IHC (PD-L1, CD8+, and CD163+) images, we investigated whether artificial intelligence (AI)-based algorithms using automatic feature extraction methods can predict neoadjuvant chemotherapy (NAC) outcomes in HER2-positive (HER2+) and triple-negative breast cancer (TNBC) patients. Features are derived from the tumor immune micro-environment and clinical data and used to train machine learning models to accurately predict the response to NAC in breast cancer patients (HER2+ AUC = 0.8975; TNBC AUC = 0.7674). The results demonstrate that this method outperforms models trained on features that were manually generated by pathologists. The developed image features and algorithms were further externally validated on independent cohorts, yielding encouraging results, especially for the HER2+ subtype.
... Denoising of noise-corrupted images is a very significant step in many image processing problems like image registration [1], image segmentation [2], image classification [3], etc., where the performance mainly depends on the correctness of original image contents. In short, image denoising improves a raw image's quality and creates a good foundation for its subsequent processing. ...
Article
Full-text available
The degradation in the visual quality of images is often seen due to various noises added inevitably at the time of image acquisition. Their restoration has thus become a fundamental and significant task in image processing. Many attempts have been made in the recent past to efficiently denoise such images, but the best possible solution remains an open research question. This paper validates the effectiveness of one such popular image denoising approach, where an adaptive image patch clustering is followed by a suboptimal Wiener filter operation in the Principal Component Analysis (PCA) domain. The experimentation is conducted on grayscale images corrupted by four different noise types: speckle, salt & pepper, Gaussian, and Poisson. The efficiency of image denoising is quantified in terms of various well-known image quality metrics. The comprehensive performance analysis of the denoising approach against the four noise models underlies its suitability for various applications, and gives new researchers a direction for selecting an image-denoising method.
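The suboptimal Wiener filtering in the PCA domain discussed above amounts, per transform coefficient band, to shrinking the noisy coefficients by the estimated signal-to-(signal + noise) ratio. A minimal one-band sketch, illustrative rather than the paper's implementation:

```python
def wiener_shrink(coeffs, noise_var):
    """Empirical Wiener shrinkage of one band of transform (e.g. PCA)
    coefficients: estimate the signal variance from the noisy energy,
    then attenuate by signal_var / (signal_var + noise_var)."""
    energy = sum(c * c for c in coeffs) / len(coeffs)
    signal_var = max(energy - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var) if signal_var > 0 else 0.0
    return [gain * c for c in coeffs]

# A band with energy 4 and noise variance 2: estimated signal
# variance 2, so the Wiener gain is 2 / (2 + 2) = 0.5.
denoised = wiener_shrink([2.0, -2.0], noise_var=2.0)  # [1.0, -1.0]
```

Bands dominated by noise (energy below the noise variance) are zeroed entirely, while strong signal bands pass nearly untouched, which is what makes the filter adaptive per patch cluster.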
... Even in consecutive tissue cuts there are substantial differences in the tissue appearance caused by changes in the tissue microenvironment or artifacts such as tissue folding, tearing, or cutting (Taqi et al., 2018), which all complicate data alignment. Robust and automated registration of histology images can be challenging (Borovec et al., 2020), and thus many studies deploy non-algorithmic strategies such as clearing and re-staining of the tissue slides (Hinton et al., 2019). A newly emerging direction is stainless imaging, including approaches such as ultraviolet microscopy (Fereidouni et al., 2017), stimulated Raman histology (Hollon et al., 2020), or colorimetric imaging (Balaur et al., 2021). ...
Article
Full-text available
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
... Besides natural images, we build a Histo24 dataset consisting of 24 uncompressed 768 × 512 histological images, in order to evaluate our codec on images of different modality. These histological images are randomly cropped from high resolution ANHIR dataset [67], which is originally used for histological image registration task. The DLPR coding system is implemented with Pytorch. ...
Preprint
Lossless and near-lossless image compression is of paramount importance to professional users in many technical fields, such as medicine, remote sensing, precision engineering and scientific research. But despite rapidly growing research interest in learning-based image compression, no published method offers both lossless and near-lossless modes. In this paper, we propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression. In the lossless mode, the DLPR coding system first performs lossy compression and then lossless coding of the residuals. We solve the joint lossy and residual compression problem with a VAE approach, and add autoregressive context modeling of the residuals to enhance lossless compression performance. In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound, and propose a scalable near-lossless compression scheme that works for variable $\ell_\infty$ bounds instead of training multiple networks. To expedite the DLPR coding, we increase the degree of algorithm parallelization with a novel design of the coding context, and accelerate the entropy coding with an adaptive residual interval. Experimental results demonstrate that the DLPR coding system achieves both state-of-the-art lossless and near-lossless image compression performance with competitive coding speed.
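The $\ell_\infty$ guarantee in the near-lossless mode described above is typically obtained with a uniform residual quantizer of step $2\tau + 1$, which bounds the per-pixel reconstruction error by $\tau$. A sketch for integer residuals, illustrative rather than the DLPR code:

```python
def quantize_residual(r, tau):
    """Uniform near-lossless quantizer for an integer residual r:
    the reconstructed value is the nearest multiple of (2*tau + 1),
    so the absolute error never exceeds tau."""
    step = 2 * tau + 1
    # Floor division after shifting by tau centers each bin on a
    # multiple of the step.
    return ((r + tau) // step) * step

# With tau = 2 (step 5), every integer residual in [-50, 50] is
# reproduced within +/- 2 of its true value.
assert all(abs(r - quantize_residual(r, 2)) <= 2 for r in range(-50, 51))
```

Because the bound depends only on $\tau$, a single trained network can serve variable error tolerances by re-quantizing the same residuals, which is the idea behind the scalable scheme mentioned in the abstract.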
... Multi-stain whole-slide-image (WSI) registration is an active field of research and has previously been addressed in the ANHIR challenge [1]. It is unclear, however, how the current WSI registration methods would perform on a real-world data set from routine clinical workflows, where artifacts such as glass cracks, pen marks or air bubbles are common. ...
Preprint
Full-text available
Multi-stain whole-slide-image (WSI) registration is an active field of research. It is unclear, however, how current WSI registration methods would perform on a real-world dataset. The AutomatiC Registration Of Breast cAncer Tissue (ACROBAT) challenge was held to verify the performance of current WSI registration methods using a new dataset originating from routine diagnostics, in order to assess real-world applicability. In this report, we present our solution for the ACROBAT challenge. We employ a two-step approach comprising rigid and non-rigid transforms. The experimental results show that the median 90th-percentile error is 1,250 µm on the validation dataset.
... Deep Learning (DL) describes a subset of Machine Learning (ML) algorithms built upon the concepts of neural networks [1]. Over the last decade, DL has shown great promise in various problem domains such as semantic segmentation [2][3][4][5], quantum physics [6], segmentation of regions of interest (such as tumors) in medical images [7][8][9][10][11][12], medical landmark detection [13,14], image registration [15,16], predictive modelling [17], among many others [18][19][20][21][22]. The majority of this vast research was enabled by the abundance of DL libraries made open source and publicly available, with some of the major ones being TensorFlow (developed by Google) [23] and PyTorch [24] by Facebook (originally developed as Caffe [25] by the University of California at Berkeley), which represent the most widely used libraries that facilitated DL research. ...
Article
Full-text available
Deep Learning (DL) has greatly highlighted the potential impact of optimized machine learning in both the scientific and clinical communities. The advent of open-source DL libraries from major industrial entities, such as TensorFlow (Google), PyTorch (Facebook), and MXNet (Apache), further contributes to DL promises on the democratization of computational analytics. However, increased technical and specialized background is required to develop DL algorithms, and the variability of implementation details hinders their reproducibility. Towards lowering the barrier and making the mechanism of DL development, training, and inference more stable, reproducible, and scalable, without requiring an extensive technical background, this manuscript proposes the Generally Nuanced Deep Learning Framework (GaNDLF). With built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes, and multi-GPU training, as well as the ability to work with both radiographic and histologic imaging, GaNDLF aims to provide an end-to-end solution for all DL-related tasks, to tackle problems in medical imaging and provide a robust application framework for deployment in clinical workflows.
... Another interesting example of different staining data is the ANHIR [21] dataset. It consists of 46 sets of images taken from the adjacent slides and stained with different techniques (including IHC and H&E). ...
Article
Full-text available
We present EndoNuke, an open dataset consisting of tiles from endometrium immunohistochemistry slides with the nuclei annotated as keypoints. Several experts with various experience have annotated the dataset. Apart from gathering the data and creating the annotation, we have performed an agreement study and analyzed the distribution of nuclei staining intensity.
... However, sequential images taken over multiple hybridization rounds for several target genes can be misaligned because of technical or experimental errors. Correcting these misalignments is an essential step; thus, tools for precise alignment of multiple images have been developed (14). Another imaging-based approach for spatially resolved transcriptomics is in situ sequencing (ISS). ...
Article
Full-text available
Single-cell RNA sequencing (scRNA-seq) has greatly advanced our understanding of cellular heterogeneity by profiling individual cell transcriptomes. However, cell dissociation from the tissue structure causes a loss of spatial information, which hinders the identification of intercellular communication networks and global transcriptional patterns present in the tissue architecture. To overcome this limitation, novel transcriptomic platforms that preserve spatial information have been actively developed. Significant achievements in imaging technologies have enabled in situ targeted transcriptomic profiling in single cells at single-molecule resolution. In addition, technologies based on mRNA capture followed by sequencing have made possible profiling of the genome-wide transcriptome at 55–100 μm resolution. Unfortunately, neither imaging-based technologies nor capture-based methods elucidate a complete picture of the spatial transcriptome in a tissue. Therefore, addressing specific biological questions requires balancing experimental throughput and spatial resolution, mandating efforts to develop computational algorithms that are pivotal to circumventing technology-specific limitations. In this review, we focus on the current state-of-the-art spatially resolved transcriptomic technologies, describe their applications in a variety of biological domains, and explore recent discoveries demonstrating their enormous potential in biomedical research. We further highlight novel integrative computational methodologies with other data modalities that provide a framework to derive biological insight into heterogeneous and complex tissue organization.
... 1. matches; 2. a homography can be estimated using random sample consensus (RANSAC) [11]; 3. the alignment score is above the cutoff of 71.4% (described in the next section) ...
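The RANSAC step mentioned in the excerpt above can be sketched in pure NumPy. This illustrative version estimates a 2D affine transform from noisy point matches rather than a full homography (which would use 4-point samples and a DLT solve); the iteration count and inlier threshold are hypothetical, not taken from the cited paper:

```python
import numpy as np

def ransac_affine(src, dst, n_iters=500, thresh=3.0, seed=0):
    """Estimate a 2D affine transform from noisy point matches with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    Returns a 2x3 affine matrix (acting on [x, y, 1]) and a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])  # homogeneous coordinates (N, 3)
    best_mask, best_A = None, None
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)  # minimal sample for an affine
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)  # (3, 2)
        residuals = np.linalg.norm(src_h @ A - dst, axis=1)
        mask = residuals < thresh                   # inliers of this candidate
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_A = mask, A
    # refit on all inliers of the best candidate model
    A, *_ = np.linalg.lstsq(src_h[best_mask], dst[best_mask], rcond=None)
    return A.T, best_mask
```

The key property, exploited by several challenge methods, is that a handful of gross outlier matches does not corrupt the estimate, since the model is scored by its inlier count rather than a least-squares fit over all matches.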
Article
Full-text available
Background Modern molecular pathology workflows in neuro-oncology heavily rely on the integration of morphologic and immunohistochemical patterns for analysis, classification, and prognostication. However, despite the recent emergence of digital pathology platforms and artificial intelligence-driven computational image analysis tools, automating the integration of histomorphologic information found across these multiple studies is challenged by the large file sizes of whole slide images (WSIs) and shifts/rotations in tissue sections introduced during slide preparation. Methods To address this, we develop a workflow that couples different computer vision tools including scale-invariant feature transform (SIFT) and deep learning to efficiently align and integrate histopathological information found across multiple independent studies. We highlight the utility and automation potential of this workflow in the molecular subclassification and discovery of previously unappreciated spatial patterns in diffuse gliomas. Results First, we show how a SIFT-driven computer vision workflow was effective at automated WSI alignment in a cohort of 107 randomly selected surgical neuropathology cases (97/107 (91%) showing appropriate matches, AUC = 0.96). This alignment allows our AI-driven diagnostic workflow to not only differentiate different brain tumor types, but also integrate and carry out molecular subclassification of diffuse gliomas using relevant immunohistochemical biomarkers (IDH1-R132H, ATRX). To highlight the discovery potential of this workflow, we also examined spatial distributions of tumors showing heterogeneous expression of the proliferation marker MIB1 and Olig2. This analysis helped uncover an interesting and unappreciated association of Olig2-positive and proliferative areas in some gliomas (r = 0.62).
Conclusion This efficient neuropathologist-inspired workflow provides a generalizable approach to help automate a variety of advanced immunohistochemically compatible diagnostic and discovery exercises in surgical neuropathology and neuro-oncology.
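SIFT-driven alignment of the kind described above typically keeps a correspondence only if the nearest descriptor is clearly closer than the second nearest (Lowe's ratio test). A minimal NumPy sketch with synthetic descriptors and a hypothetical ratio threshold, independent of the paper's implementation:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match two descriptor sets with Lowe's ratio test.

    desc_a: (M, D), desc_b: (N, D) with N >= 2. Returns (i, j) pairs where
    desc_b[j] is the nearest neighbour of desc_a[i] and is clearly closer
    than the second-nearest neighbour (distance ratio below `ratio`).
    """
    # pairwise Euclidean distances, shape (M, N)
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]   # nearest and second-nearest neighbours
        if row[j1] < ratio * row[j2]:  # keep only unambiguous matches
            matches.append((i, j1))
    return matches
```

Ambiguous keypoints (e.g. repetitive tissue texture) tend to have two near-equidistant neighbours and are discarded, which keeps the subsequent transform estimation well conditioned.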
... Besides purely technical problems, life scientists face practical challenges if they want to use published methods. For instance, a grand challenge on non-linear image registration was held in 2018 (Borovec et al., 2020). However, it proved very difficult to apply any of the more successful methods, either because they were closed source or because the necessary documentation was not readily available. ...
Article
Full-text available
Image analysis workflows for Histology increasingly require the correlation and combination of measurements across several whole slide images. Indeed, for multiplexing, as well as multimodal imaging, it is indispensable that the same sample is imaged multiple times, either through various systems for multimodal imaging, or using the same system over multiple rounds of sample manipulation (e.g. multiple staining sessions). In both cases slight deformations from one image to another are unavoidable, leading to an imperfect superimposition and thus a loss of accuracy, making it difficult to link measurements, in particular at the cellular level. Using pre-existing software components and developing missing ones, we propose a user-friendly workflow which facilitates the nonlinear registration of whole slide images in order to reach sub-cellular resolution. The set of whole slide images to register and analyze is first defined as a QuPath project. Fiji is then used to open the QuPath project and perform the registrations. Each registration is automated by using an elastix backend, or semi-automated by using BigWarp in order to interactively correct the results of the automated registration. These transformations can then be retrieved in QuPath to transfer any regions of interest from an image to the corresponding registered images. In addition, the transformations can be applied in QuPath to produce on-the-fly transformed images that can be displayed on top of the reference image. Thus, relevant data can be combined and analyzed throughout all registered slides, facilitating the analysis of correlative results for multiplexed and multimodal imaging.
... A recent ANHIR challenge [100] uses multiple data sets, where the ground truth is obtained by external markers on the histological series. Their landmarks were placed by multiple human annotators. ...
Article
Full-text available
Registration of histological serial sections is a challenging task. Serial sections exhibit distortions and damage from sectioning. Missing information on how the tissue looked before cutting makes a realistic validation of 2D registrations extremely difficult. This work proposes methods for ground-truth-based evaluation of registrations. Firstly, we present a methodology to generate test data for registrations. We distort an innately registered image stack in a manner similar to the cutting distortion of serial sections. Test cases are generated from existing 3D data sets, thus the ground truth is known. Secondly, our test-case generation enables evaluation of registrations against known ground truths. Our methodology for such an evaluation technique distinguishes this work from other approaches. Both under- and over-registration become evident in our evaluations. We also survey existing validation efforts. We present a full-series evaluation across six different registration methods applied to our distorted 3D data sets of animal lungs. Our distorted and ground-truth data sets are made publicly available.
... High-resolution optical microscopes have limited fields of view, so large samples are imaged using a series of individual fields which are then stitched together computationally to form a complete mosaic image using software such as ASHLAR [48,49]. A nonrigid (B-spline) method [32,42] is applied to register microscopy histology mosaics from different imaging processes [15], e.g., CyCIF and H&E. CyCIF mosaics can be up to 50,000 pixels in each dimension and contain as many as 60 channels, each depicting a different marker. ...
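A B-spline (free-form) deformation of the kind referenced above can be pictured as a coarse grid of control-point displacements, spline-interpolated to a dense field that is then used to resample the moving image. A minimal SciPy sketch; the grid size, spline order, and boundary handling are illustrative choices, not those of the cited tools:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.ndimage import map_coordinates

def ffd_warp(moving, grid_dx, grid_dy):
    """Warp `moving` with a free-form deformation.

    grid_dx, grid_dy: control-point displacements in pixels (at least 4x4
    for the default cubic spline), spline-interpolated to a dense field.
    """
    h, w = moving.shape
    gy = np.linspace(0, h - 1, grid_dx.shape[0])   # control-point rows
    gx = np.linspace(0, w - 1, grid_dx.shape[1])   # control-point columns
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # dense displacement fields via cubic-spline interpolation of the grid
    dx = RectBivariateSpline(gy, gx, grid_dx)(np.arange(h), np.arange(w))
    dy = RectBivariateSpline(gy, gx, grid_dy)(np.arange(h), np.arange(w))
    # sample the moving image at the displaced coordinates (bilinear)
    return map_coordinates(moving, [yy + dy, xx + dx], order=1, mode="nearest")
```

In an actual registration the control-point displacements are the parameters being optimized against a similarity metric; here they are simply given, to show how few parameters control a smooth dense deformation.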
Preprint
Full-text available
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100GB images of 10^9 or more pixels per channel, containing millions of cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing ROIs in an intuitive and cohesive manner. Building on a scope2screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
Article
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust and well-performing methods applicable to a wide range of real-world data sets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks and allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult data sets.
Article
Background and objective: Histopathological image registration is an essential component in digital pathology and biomedical image analysis. Deep-learning-based algorithms have been proposed to achieve fast and accurate affine registration. Some previous studies assume that the pairs are free from sizeable initial position misalignment and large rotation angles before performing the affine transformation. However, large rotation angles are often introduced into image pairs during the production process of real-world pathology images. Reliable initial alignment is important for registration performance. Existing deep-learning-based approaches often use a two-step affine registration pipeline because convolutional neural networks (CNNs) cannot correct large-angle rotations. Methods: In this manuscript, a general framework, ARoNet, is developed to achieve end-to-end affine registration for histopathological images. We use CNNs to extract global features of images and fuse them to construct correspondence information for the affine transformation. In ARoNet, a rotation recognition network is implemented to eliminate large rotation misalignment. In addition, a self-supervised learning task is proposed to assist the learning of image representations in an unsupervised manner. Results: We applied our model to four datasets, and the results indicate that ARoNet surpasses existing affine registration algorithms in alignment accuracy when large angular misalignments (e.g., 180° rotation) are present, providing accurate affine initialization for subsequent non-rigid alignments. In addition, ARoNet shows advantages in execution time (0.05 s per pair), registration accuracy, and robustness. Conclusion: We believe that the proposed general framework promises to simplify and speed up the registration process and has the potential for clinical applications.
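ARoNet handles large rotations with a learned rotation-recognition network; a much simpler classical stand-in for the same idea is to score a few candidate rotations against the fixed image and keep the best. An illustrative NumPy sketch restricted to right-angle rotations, scored by normalized cross-correlation (NCC), not the paper's method:

```python
import numpy as np

def best_right_angle_rotation(fixed, moving):
    """Pick the k*90-degree rotation of `moving` that best matches `fixed`.

    Scores each candidate rotation with normalized cross-correlation and
    returns (best k, rotated image) for k in {0, 1, 2, 3}.
    """
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    scores = []
    for k in range(4):
        rot = np.rot90(moving, k)
        # shape mismatch (non-square images, odd k) cannot be scored directly
        scores.append(ncc(fixed, rot) if rot.shape == fixed.shape else -np.inf)
    k = int(np.argmax(scores))
    return k, np.rot90(moving, k)
```

A coarse, exhaustive search like this is robust precisely where gradient-based affine optimization fails: a 180° misalignment is far outside the capture range of local optimization but trivial to detect by enumeration.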
Chapter
In medical practice it is often necessary to jointly analyze differently stained histological sections. However, when slides are being prepared, tissues are subjected to deformations, and registration is therefore required. Although the transformation between images is generally non-rigid, one of the most challenging subproblems is to calculate the initial affine transformation. Existing learning-based approaches adopt convolutional architectures that are usually limited in receptive field, while global context should be considered. Coupled with small datasets and an unsupervised learning paradigm, this results in overfitting and adjusting the structure only locally. We introduce a transformer-based affine histological image registration (TAHIR) approach. It successfully aggregates global information, requires no histological data to learn, and is based on knowledge transfer from the natural image domain. The experiments show that TAHIR outperforms existing methods by a large margin on the most commonly used histological image registration benchmark in terms of target registration error, while being more robust at the same time. The code is available at https://github.com/VladPyatov/ImgRegWithTransformers. Keywords: Image registration, Affine transformation, Histology, Transformer, Deep Learning
Chapter
The use of modern approaches based on convolutional neural networks (CNNs) for segmentation of whole slide images (WSIs) helps pathologists obtain more stable and quantitative analysis results and improves diagnostic objectivity. But working with WSIs is extremely difficult due to their resolution, size, and the presence of a large number of incompatible image storage formats from equipment manufacturers. In addition, the use of modern CNN-based image analysis methods is complicated by the need to use a set of tools with a low level of internal integration. In order to facilitate the interaction of histologists with whole slide images and modern image analysis methods, we implemented PathScribe, a new universal cross-platform cloud-based tool for comfortable viewing and manipulation of large collections of WSIs on almost any device, including tablets and smartphones. We also consider the important problem of automatic tissue type recognition on WSIs and propose a new CNN-based method for it, evaluated on two subsets of the PATH-DT-MSU dataset which contain high-quality whole slide images of digestive tract tumors with tissue-type area annotations. The proposed method achieved 0.929 accuracy on the CRC-VAL-HE-7K dataset (9 classes) and 0.97 accuracy on the PATH-DT-MSU WSS1 and WSS2 datasets (5 classes). The developed method makes it possible to classify the areas corresponding to the gastric own mucous glands in the lamina propria and to distinguish the tubular structures of a highly differentiated gastric adenocarcinoma from normal glands. Keywords: Histology, Digital Pathology, Segmentation, Histological Image Viewer, Whole Slide Images, Convolutional Neural Networks
Article
Full-text available
Interest in spatial omics is on the rise, but generation of highly multiplexed images remains challenging, due to cost, expertise, methodical constraints, and access to technology. An alternative approach is to register collections of whole slide images (WSI), generating spatially aligned datasets. WSI registration is a two-part problem, the first being the alignment itself and the second the application of transformations to huge multi-gigapixel images. To address both challenges, we developed Virtual Alignment of pathoLogy Image Series (VALIS), software which enables generation of highly multiplexed images by aligning any number of brightfield and/or immunofluorescent WSI, the results of which can be saved in the ome.tiff format. Benchmarking using publicly available datasets indicates VALIS provides state-of-the-art accuracy in WSI registration and 3D reconstruction. Leveraging existing open-source software tools, VALIS is written in Python, providing a free, fast, scalable, robust, and easy-to-use pipeline for registering multi-gigapixel WSI, facilitating downstream spatial analyses.
Conference Paper
Multimodal microscopy experiments that image the same population of cells under different experimental conditions have become a widely used approach in systems and molecular neuroscience. The main obstacle is to align the different imaging modalities to obtain complementary information about the observed cell population (e.g., gene expression and calcium signal). Traditional image registration methods perform poorly when only a small subset of cells are present in both images, as is common in multimodal experiments. We cast multimodal microscopy alignment as a cell subset matching problem. To solve this non-convex problem, we introduce an efficient and globally optimal branch-and-bound algorithm to find subsets of point clouds that are in rotational alignment with each other. In addition, we use complementary information about cell shape and location to compute the matching likelihood of cell pairs in two imaging modalities to further prune the optimization search tree. Finally, we use the maximal set of cells in rigid rotational alignment to seed image deformation fields to obtain a final registration result. Our framework performs better than the state-of-the-art histology alignment approaches regarding matching quality and is faster than manual alignment, providing a viable solution to improve the throughput of multimodal microscopy experiments.
Article
Multiplex immunohistochemistry/immunofluorescence (mIHC/mIF) is a developing technology which facilitates the evaluation of multiple, simultaneous protein expressions at single-cell resolution while preserving tissue architecture. These approaches have shown great potential for biomarker discovery, yet many challenges remain. Importantly, streamlined cross-registration of multiplex immunofluorescence images with additional imaging modalities and immunohistochemistry (IHC) can be helpful in increasing the plex and/or improving the quality of the data generated by potentiating downstream processes such as cell segmentation. To address this problem, a fully automated process was designed to perform a hierarchical, parallelizable, and deformable registration of multiplexed digital whole-slide images. We generalized the calculation of mutual information as a registration criterion to an arbitrary number of dimensions, making it well-suited for multiplexed imaging. We also used the self-information of a given IF channel as a criterion to select the optimal channels for registration. Additionally, as precise labeling of cellular membranes in situ is essential for robust cell segmentation, a pan-membrane immunohistochemical staining method was developed for incorporation into mIF panels or for use as an IHC followed by cross-registration. In this study, we demonstrate this process by registering whole-slide 6-plex/7-color mIF images with whole-slide brightfield mIHC images, including a CD3 and a pan-membrane stain. Our algorithm, whole slide imaging mutual information registration (WSIMIR), performed highly accurate registration, allowing the retrospective generation of an 8-plex/9-color whole-slide image, and outperformed two alternative automated methods for cross-registration by Jaccard index and Dice similarity coefficient (WSIMIR vs. automated WARPY p<0.01 and <0.01, vs. HALO + transformix p=0.083 and 0.049, respectively).
Furthermore, the addition of a pan-membrane IHC stain cross-registered to a mIF panel facilitated improved automated cell segmentation across whole mIF slide images, as measured by significantly increased correct detections, Jaccard index (0.78 vs. 0.65), and Dice similarity coefficient (0.88 vs. 0.79).
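The registration criterion above, mutual information, can be estimated for two images from their joint intensity histogram; the paper generalizes this to an arbitrary number of channels. A minimal two-image sketch with an illustrative bin count:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
    nz = pxy > 0                             # avoid log(0) terms
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Because MI rewards any consistent statistical dependence between intensities rather than a linear relationship, it remains usable across stains and modalities where correlation-based metrics break down, which is why it is a common choice for multimodal histology registration.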
Preprint
Full-text available
Computational Pathology (CoPath) is an interdisciplinary science that augments developments of computational approaches to analyze and model medical histopathology images. The main objective for CoPath is to develop infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CoPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific works being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends undertaken in CoPath. In this article we provide a comprehensive review of more than 700 papers to address the challenges faced from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model-card by examining the key works and challenges faced, to lay out the current landscape in CoPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we oversee CoPath developments in a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical developments and clinical integration of CoPath.
Article
Registration of multiple sections in a tissue block is an important pre-requisite task before any cross-slide image analysis. Non-rigid registration methods are capable of finding correspondence by locally transforming a moving image. These methods often rely on an initial guess to roughly align an image pair linearly and globally. This is essential to prevent convergence to a non-optimal minimum. We explore a deep feature based registration (DFBR) method which utilises data-driven descriptors to estimate the global transformation. A multi-stage strategy is adopted for improving the quality of registration. A visualisation tool is developed to view registered pairs of WSIs at different magnifications. With the help of this tool, one can apply a transformation on the fly without the need to generate a transformed moving WSI in a pyramidal form. We compare the performance of data-driven descriptors on our dataset with that of hand-crafted descriptors. Our approach can align the images with only small registration errors. The efficacy of our proposed method is evaluated for a subsequent non-rigid registration step. To this end, the first two steps of the ANHIR winner's framework are replaced with DFBR to register image pairs provided by the challenge. The modified framework produces comparable results to those of the challenge-winning team.
Article
Full-text available
Background To develop and validate an MRI texture-based machine learning model for the noninvasive assessment of renal function. Methods A retrospective study of 174 diabetic patients (training cohort, n = 123; validation cohort, n = 51) who underwent renal MRI scans was included. They were assigned to normal function (n = 71), mild or moderate impairment (n = 69), and severe impairment groups (n = 34) according to renal function. Four methods of kidney segmentation on T2-weighted images (T2WI) were compared, including regions of interest covering all coronal slices (All-K), the largest coronal slices (LC-K), and subregions of the largest coronal slices (TLCO-K and PIZZA-K). The speeded-up robust features (SURF) and support vector machine (SVM) algorithms were used for texture feature extraction and model construction, respectively. Receiver operating characteristic (ROC) curve analysis was used to evaluate the diagnostic performance of the models. Results The models based on LC-K and All-K achieved the highest accuracy in the classification of renal function, with no statistically significant differences between them (all p values > 0.05). The optimal model yielded high performance in classifying the normal function, mild or moderate impairment, and severe impairment, with an area under the curve of 0.938 (95% confidence interval [CI] 0.935–0.940), 0.919 (95%CI 0.916–0.922), and 0.959 (95%CI 0.956–0.962) in the training cohorts, respectively, as well as 0.802 (95%CI 0.800–0.807), 0.852 (95%CI 0.846–0.857), and 0.863 (95%CI 0.857–0.887) in the validation cohorts, respectively. Conclusion We developed and internally validated an MRI-based machine-learning model that can accurately evaluate renal function. Once externally validated, this model has the potential to facilitate the monitoring of patients with impaired renal function.
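The ROC analysis above rests on the fact that the area under the ROC curve equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney statistic). A small NumPy sketch with hypothetical scores, independent of the paper's SURF/SVM pipeline:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 0/1 array; scores: classifier scores (higher = more positive).
    A tie between a positive and a negative score counts as 0.5.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise formulation makes explicit why AUC is insensitive to monotone rescaling of the scores and to class imbalance in a way plain accuracy is not.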
Chapter
While hematoxylin and eosin (H&E) is a standard staining procedure, immunohistochemistry (IHC) staining further serves as a diagnostic and prognostic method. However, acquiring special staining results requires substantial costs. Hence, we proposed a strategy for ultra-high-resolution unpaired image-to-image translation: Kernelized Instance Normalization (KIN), which preserves local information and successfully achieves seamless stain transformation with constant GPU memory usage. Given a patch, its corresponding position, and a kernel, KIN computes local statistics using a convolution operation. In addition, KIN can be easily plugged into most currently developed frameworks without re-training. We demonstrate that KIN achieves state-of-the-art stain transformation by replacing instance normalization (IN) layers with KIN layers in three popular frameworks and testing on two histopathological datasets. Furthermore, we manifest the generalizability of KIN with high-resolution natural images. Finally, human evaluation and several objective metrics are used to compare the performance of different approaches. Overall, this is the first successful study of ultra-high-resolution unpaired image-to-image translation with constant space complexity. Code is available at: https://github.com/Kaminyou/URUST. Keywords: Unpaired image-to-image translation, Ultra-high-resolution, Stain transformation, Whole slide image
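The core idea behind KIN, replacing an instance's global normalization statistics with locally computed ones, can be sketched for a single channel using a box-kernel convolution. Kernel size and epsilon are illustrative, and KIN itself applies this inside a network's normalization layers rather than directly to pixels:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_instance_norm(x, kernel=7, eps=1e-5):
    """Normalize each pixel by the statistics of its local neighbourhood.

    Local mean and variance are computed with a box-kernel convolution,
    so normalization varies smoothly across the image instead of using
    one global (instance-wide) mean and variance.
    """
    mean = uniform_filter(x, size=kernel, mode="reflect")
    sq_mean = uniform_filter(x * x, size=kernel, mode="reflect")
    var = np.maximum(sq_mean - mean * mean, 0.0)  # clamp tiny negatives from rounding
    return (x - mean) / np.sqrt(var + eps)
```

Because the statistics change smoothly from pixel to pixel, adjacent patches normalized this way agree along their shared border, which is what removes the tiling seams that per-patch instance normalization produces.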
Article
Full-text available
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
Article
Full-text available
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin–eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors’ work and discusses the opportunities and challenges of AI.
Chapter
Current registration evaluations typically compute target registration error using manually annotated datasets. As a result, the quality of landmark annotations is crucial for unbiased comparisons, because registration algorithms are trained and tested using these landmarks. Even though some data providers claim to have mitigated inter-observer variability by using multiple raters, quality control such as third-party screening can still be reassuring for intended users. Examining the landmark quality of neurosurgical datasets (RESECT and BITE) poses specific challenges. In this study, we applied the variogram, a tool extensively used in geostatistics, to convert 3D landmark distributions into an intuitive 2D representation. This allowed us to identify potentially problematic cases efficiently so that they could be examined by experienced radiologists. In both the RESECT and BITE datasets, we identified and confirmed a small number of landmarks with potential localization errors and found that, in some cases, the landmark distribution was not ideal for an unbiased assessment of non-rigid registration errors. In the discussion, we provide some constructive suggestions for improving the utility of publicly available annotated data.
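The variogram used above summarizes, for each lag h, half the mean squared difference between sample values separated by approximately that distance. A minimal NumPy sketch of the empirical semivariogram for scattered points, with illustrative distance bins (the paper applies this idea to 3D landmark distributions):

```python
import numpy as np

def empirical_variogram(points, values, bin_edges):
    """Empirical semivariogram of scattered samples.

    points: (N, d) coordinates; values: (N,) measurements.
    Returns gamma(h) per lag bin: half the mean squared value difference
    over all sample pairs whose separation falls in that bin.
    """
    diff_v = values[:, None] - values[None, :]
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    iu = np.triu_indices(len(values), k=1)           # count each pair once
    d, sq = dist[iu], diff_v[iu] ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for i in range(len(bin_edges) - 1):
        in_bin = (d >= bin_edges[i]) & (d < bin_edges[i + 1])
        if in_bin.any():
            gamma[i] = 0.5 * sq[in_bin].mean()       # semivariance for this lag
    return gamma
```

For spatially smooth data, gamma grows with the lag; a near-flat variogram, or isolated points far off the curve, is the kind of signature the study uses to flag suspicious landmark sets.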
Chapter
Full-text available
Image registration is an essential task in electron microscope (EM) image analysis, which aims to accurately warp the moving image to align with the fixed image, reducing the spatial deformations across serial slices introduced during image acquisition. Existing learning-based registration approaches are primarily based on Convolutional Neural Networks (CNNs). However, for the requirements of EM image registration, CNN-based methods lack the capability of learning global and long-term semantic information. In this work, we propose a new framework, Cascaded LST-UNet, which integrates a sharpening skip-connection layer with a Swin Transformer based U-Net structure in a cascaded manner for unsupervised EM image registration. Our experimental results on a public dataset show that our method consistently outperforms the baseline approaches.
Chapter
Cortical surface registration is a fundamental tool for neuroimaging analysis that has been shown to improve the alignment of functional regions relative to volumetric approaches. Classically, image registration is performed by optimizing a complex objective similarity function, leading to long run times. This contributes to a convention for aligning all data to a global average reference frame that poorly reflects the underlying cortical heterogeneity. In this paper, we propose a novel unsupervised learning-based framework that converts registration to a multi-label classification problem, where each point in a low-resolution control grid deforms to one of a fixed, finite number of endpoints. This is learned using a spherical geometric deep learning architecture, in an end-to-end unsupervised way, with regularization imposed using a deep Conditional Random Field (CRF). Experiments show that our proposed framework performs competitively, in terms of similarity and areal distortion, relative to the most popular classical surface registration algorithms, and generates smoother deformations than other learning-based surface registration methods, even in subjects with atypical cortical morphology. The code can be found at https://github.com/mohamedasuliman/DDR/. Keywords: Deep learning; Unsupervised learning; Cortical surface registration; Conditional random fields
Article
Assessing the degree of liver fibrosis is fundamental for the management of patients with chronic liver disease, in liver transplant procedures, and in general liver disease research. The fibrosis stage is best assessed by histopathologic evaluation, and Masson's Trichrome stain (MT) is the stain of choice for this task in many laboratories around the world. However, the most used stain in histopathology is Hematoxylin and Eosin (HE), which is cheaper, has a faster turn-around time, and is the primary stain routinely used for evaluation of liver specimens. In this paper, we propose a novel digital pathology system that accurately detects and quantifies the footprint of fibrous tissue in HE whole slide images (WSI). The proposed system produces virtual MT images from HE using a deep learning model that learns deep texture patterns associated with collagen fibers. The training pipeline is based on conditional generative adversarial networks (cGAN), which can achieve accurate pixel-level transformation. Our comprehensive training pipeline features an automatic WSI registration algorithm, which qualifies the HE/MT training slides for the cGAN model. Using liver specimens collected during liver transplantation procedures, we conducted a range of experiments to evaluate the detected footprint of selected anatomical features. Our evaluation includes both image similarity and semantic segmentation metrics. The proposed system achieved enhanced results in the experiments, with significant improvement over the state-of-the-art CycleGAN learning style and over direct prediction of fibrosis in HE without the virtual MT step.
Article
In kidney transplant biopsies, both inflammation and chronic changes are important features that predict long-term graft survival. Quantitative scoring of these features is important for transplant diagnostics and kidney research. However, visual scoring is poorly reproducible and labor-intensive. The goal of this study was to investigate the potential of convolutional neural networks (CNNs) to quantify inflammation and chronic features in kidney transplant biopsies. A structure segmentation CNN and a lymphocyte detection CNN were applied on 125 whole-slide image pairs of PAS- and CD3-stained slides. The CNN results were used to quantify healthy and sclerotic glomeruli, interstitial fibrosis, tubular atrophy, and inflammation both within non-atrophic and atrophic tubuli, and in areas of interstitial fibrosis. The computed tissue features showed high correlations with the Banff lesion scores of five pathologists. Analyses on a small subset showed that higher CD3⁺ cell density within scarred regions and higher CD3⁺ cell counts inside atrophic tubuli correlated moderately with long-term change in estimated glomerular filtration rate. The presented CNNs are valid tools to yield objective quantitative information on glomeruli number, fibrotic tissue, and inflammation within scarred and non-scarred kidney parenchyma in a reproducible fashion. CNNs have the potential to improve kidney transplant diagnostics and will benefit the community as a novel method to generate surrogate endpoints for large-scale clinical studies.
Chapter
Inter-modality imaging problems are ubiquitous in medical imaging. The acquisition of multiple modalities, or different variations of the same modality, is frequent in many clinical applications and research problems. Inter-modality image analysis techniques have been developed to cope with the differences between modalities, but they do not always perform as well as their intra-modality counterparts. Image synthesis can be used to bridge this gap, by making one of the two modalities resemble the other and then using intra-modality methods. It is often the case that the error in the synthesis is outweighed by the improvement in performance given by switching from inter- to intra-modality analysis. In this chapter, we present a survey of methods based on this idea, with a focus on two of the most representative problems in medical image analysis: registration and segmentation.
Chapter
Infrared thermography (IRT) has been used as a tool to detect pregnancy in horses. IRT measures heat emission from the body surface, which increases with increased blood flow and metabolic activity in uterine and fetal tissues. This study aimed to extract the entropy-based features of the IRT image texture and compare them to find pregnancy-related and color-related differences. In the current study, 40 mares were divided into non-pregnant and pregnant groups and IRT imaging was performed. The thermal images were converted into grayscale images using four conversion methods: the composite grayscale image and the three basic channels, Red, Green, and Blue. Image texture features were calculated in each channel, based on four entropy measures: two-dimensional sample entropy, two-dimensional fuzzy entropy, two-dimensional dispersion entropy, and two-dimensional distribution entropy. Signs of higher irregularity and complexity of the IRT image texture were evidenced for the composite grayscale image using two-dimensional sample entropy, two-dimensional fuzzy entropy, and two-dimensional dispersion entropy. The pattern of the IRT image texture differed in the Red and Blue channels depending on the considered entropy feature. Keywords: Two-dimensional entropies; Surface temperature; Mares
Article
Full-text available
Endoscopic submucosal dissection can remove large superficial gastrointestinal lesions en bloc. A detailed pathological evaluation of the resected specimen is required to assess the risk of recurrence after treatment. However, the current method of sectioning specimens to a thickness of a few millimeters does not provide information between the sections, which is lost during the preparation. In this study, we produced three-dimensional images of the entire dissected lesion for nine samples using a micro-CT imaging system. Although it was difficult to diagnose the histological type on micro-CT images, they successfully conveyed the extent of the lesion and its surgical margins. Micro-CT images can depict sites that cannot be observed by the conventional pathological diagnostic process, suggesting that micro-CT may be useful as a complementary modality.
Article
Full-text available
Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. So far, however, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework, DeepLIIF, we present a single-step solution to stain deconvolution/separation, cell segmentation and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) staining of the same slides, we segment and translate low-cost and prevalent IHC slides to more informative, but also more expensive, mpIF images, while simultaneously providing the essential ground truth for the superimposed brightfield IHC channels. A new nuclear-envelope stain, LAP2beta, with high (>95%) cell coverage is also introduced to improve cell delineation/segmentation and protein expression quantification on IHC slides. We show that DeepLIIF trained on clean IHC Ki67 data can generalize to noisy images as well as other nuclear and non-nuclear markers.
Article
In the recent decade, medical image registration and fusion have emerged as effective tools to follow up diseases and decide on necessary therapies based on the patient's condition. For many diagnostic analyses, it is common practice to assess two or more different histological slides or images from one tissue sample. Analyzing a specific area across two image modalities requires an overlay of the images, to identify positions in the sample that lie at corresponding coordinates in both images. Two challenges are common in digital pathology: first, dissimilar image appearance resulting from staining variances and artifacts; second, large image size. In this paper, we develop an algorithm that overcomes the variations between images produced by scanners from different manufacturers. We propose a whole-slide image registration algorithm in which adaptive smoothing is employed to smooth the stained image. A modified scale-invariant feature transform is applied to extract common information, and a joint distance helps to match keypoints correctly by eliminating position transformation error. Finally, the registered image is obtained by utilizing the correct correspondences and the interpolation of color intensities. We validate our proposal using different images acquired from surgical resection samples of lung cancer (adenocarcinoma). Extensive feature matching, with an apparent increase in correct correspondences, and registration performance on several images demonstrate the superiority of our method over state-of-the-art methods. Our method potentially improves matching accuracy, which might be beneficial for computer-aided diagnosis in biobank applications.
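The joint-distance idea described above, combining descriptor similarity with a spatial term so that keypoints at implausible positions are rejected, can be sketched as follows. This is a minimal stand-in for the paper's criterion, not its exact formulation; `lam` is a hypothetical weight, and the points are assumed to be coarsely pre-aligned:

```python
import numpy as np

def joint_distance_match(desc_a, desc_b, pts_a, pts_b, lam=0.5):
    """Match each keypoint in A to one in B by a joint distance:
    descriptor L2 distance plus lam * spatial distance after a
    coarse pre-alignment (lam is an assumed weighting factor)."""
    d_desc = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    d_pos = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    cost = d_desc + lam * d_pos          # joint cost matrix
    return cost.argmin(axis=1)           # index of best match in B
```

With `lam=0` this degenerates to plain descriptor matching; a positive `lam` suppresses matches whose position transformation error is large, which is the effect the abstract describes.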
Conference Paper
Tumours arise within complex 3D microenvironments, but the routine 2D analysis of tumours often underestimates their spatial heterogeneity. In this paper, we present a methodology to reconstruct and analyse 3D tumour models from routine clinical samples, allowing 3D interactions to be analysed at cellular resolution. Our workflow involves cutting thin serial sections of tumours followed by labelling of cells using markers of interest. Serial sections are then scanned, and digital multiplexed data are created for computational reconstruction. Following spectral unmixing, a registration method for the consecutive images is applied, based on a pre-alignment, a parametric, and a non-parametric image registration step. For the segmentation of the cells, an ellipsoidal model is proposed, and for the 3D reconstruction, a cubic interpolation method is used. The proposed 3D models allow us to identify specific interaction patterns that emerge as tumours develop, adapt and evolve within their host microenvironment. We applied our technique to map tumour-immune interactions in colorectal cancer, and preliminary results suggest that 3D models better represent tumour-immune cell interactions, revealing mechanisms within the tumour microenvironment and its heterogeneity.
Article
The coronavirus disease of 2019 (COVID-19) was declared a global pandemic by the World Health Organization in March 2020. Effective testing is crucial to slow the spread of the pandemic. Artificial intelligence and machine learning techniques can help COVID-19 detection using various clinical symptom data. While deep learning (DL) approaches requiring centralized data are susceptible to a high risk of data privacy breaches, federated learning (FL) approaches resting on decentralized data can preserve data privacy, a critical factor in the health domain. This paper reviews recent advances in applying DL and FL techniques for COVID-19 detection, with a focus on the latter. A model FL implementation use case in health systems, detecting COVID-19 from chest X-ray image data sets, is studied. We have also reviewed applications of previously published FL experiments for COVID-19 research to demonstrate the applicability of FL in tackling health research issues. Finally, several challenges of FL implementation in the healthcare domain are discussed in terms of potential future work.
Article
Full-text available
Histopathologic assessment routinely provides rich microscopic information about tissue structure and disease process. However, the sections used are very thin, and essentially capture only 2D representations of a certain tissue sample. Accurate and robust alignment of sequentially cut 2D slices should contribute to more comprehensive assessment accounting for surrounding 3D information. Towards this end, we here propose a two-step diffeomorphic registration approach that aligns differently stained histology slides to each other, starting with an initial affine step followed by estimating a deformation field. It was quantitatively evaluated on ample (n = 481) and diverse data from the automatic non-rigid histological image registration challenge, where it was awarded the second rank. The obtained results demonstrate the ability of the proposed approach to robustly (average robustness = 0.9898) and accurately (average relative target registration error = 0.2%) align differently stained histology slices of various anatomical sites while maintaining reasonable computational efficiency (<1 min per registration). The method was developed by adapting a general-purpose registration algorithm designed for 3D radiographic scans and achieved consistently accurate results for aligning high-resolution 2D histologic images. Accurate alignment of histologic images can contribute to a better understanding of the spatial arrangement and growth patterns of cells, vessels, matrix, nerves, and immune cell interactions.
Article
Full-text available
3D medical image registration is of great clinical importance. However, supervised learning methods require a large amount of accurately annotated corresponding control points (or morphing), which are very difficult to obtain. Unsupervised learning methods ease the burden of manual annotation by exploiting unlabeled data without supervision. In this paper, we propose a new unsupervised learning method using convolutional neural networks under an end-to-end framework, Volume Tweening Network (VTN), for 3D medical image registration. We propose three innovative technical components: (1) an end-to-end cascading scheme that resolves large displacement; (2) an efficient integration of an affine registration network; and (3) an additional invertibility loss that encourages backward consistency. Experiments demonstrate that our algorithm is 880x faster (or 3.3x faster without GPU acceleration) than traditional optimization-based methods and achieves state-of-the-art performance in medical image registration.
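The invertibility loss mentioned above can be illustrated on plain displacement fields: compose the forward field with the backward field sampled at the warped positions, and penalise the residual's deviation from zero. A minimal 2D numpy sketch in the spirit of that loss, using nearest-neighbour sampling instead of the differentiable interpolation a real network would use:

```python
import numpy as np

def invertibility_loss(u_ab, u_ba):
    """Simplified invertibility penalty for 2D displacement fields of
    shape (H, W, 2): u_ab warps A toward B, u_ba warps B toward A.
    A perfect inverse pair gives a zero residual everywhere."""
    H, W, _ = u_ab.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # nearest-neighbour lookup of the backward field at warped positions
    yt = np.clip(np.rint(ys + u_ab[..., 0]).astype(int), 0, H - 1)
    xt = np.clip(np.rint(xs + u_ab[..., 1]).astype(int), 0, W - 1)
    residual = u_ab + u_ba[yt, xt]
    return float(np.mean(np.linalg.norm(residual, axis=-1)))
```

When `u_ba` exactly undoes `u_ab`, the loss is zero; any inconsistency between the forward and backward predictions increases it, which is the backward-consistency behaviour the abstract describes.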
Article
Full-text available
Our research [Team Name: URAN] on a neuromorphic neural network for multimodal brain tumor segmentation and survival analysis demonstrates its performance on overall survival prediction without any prior training on the multi-modal MRI image segmentations produced by medical doctors (though provided with the BraTS 2018 challenge data). Two segmentation categories are adopted instead of three, which is beyond the ordinary scope of the BraTS 2018 segmentation challenge. The test results of the survival prediction challenge (worst-case blind test: 51%; nominal test: 71%) demonstrate the early feasibility of our neuromorphic neural network for assisting medical doctors, considering that human clinical accuracy is 50%~70%.
Poster
Full-text available
Image registration is a common task for many biomedical analysis applications. The present work focuses on the benchmarking of registration methods on differently stained histological slides. This is a challenging task due to the differences in the appearance model, the repetitive texture of the details and the large image size, among other issues. Our benchmarking data is composed of 616 image pairs at two different scales (average image diagonal 2.4k and 5k pixels). We compare eleven fully automatic registration methods covering the widely used similarity measures, and optimization strategies with both linear and elastic transformation. For each method, the best parameter configuration is found and subsequently applied to all the image pairs. The performance of the algorithms is evaluated from several perspectives: the registration (in)accuracy on manually annotated landmarks, the method robustness and its computation time.
Article
Full-text available
Motivation: Digital pathology enables new approaches that expand beyond storage, visualization or analysis of histological samples in digital format. One novel opportunity is 3D histology, where a three-dimensional reconstruction of the sample is formed computationally based on serial tissue sections. This allows examining tissue architecture in 3D, for example, for diagnostic purposes. Importantly, 3D histology enables joint mapping of cellular morphology with spatially resolved omics data in the true 3D context of the tissue at microscopic resolution. Several algorithms have been proposed for the reconstruction task, but a quantitative comparison of their accuracy is lacking. Results: We developed a benchmarking framework to evaluate the accuracy of several free and commercial 3D reconstruction methods using two whole slide image datasets. The results provide a solid basis for further development and application of 3D histology algorithms and indicate that methods capable of compensating for local tissue deformation are superior to simpler approaches. Availability: Code: https://github.com/BioimageInformaticsTampere/RegBenchmark. Whole slide image datasets: http://urn.fi/urn:nbn:fi:csc-kata20170705131652639702. Contact: pekka.ruusuvuori@tut.fi. Supplementary information: Supplementary data are available at Bioinformatics online.
Article
Full-text available
Histology permits the observation of otherwise invisible structures of the internal topography of a specimen. Although it enables the investigation of tissues at a cellular level, it is invasive and breaks topology due to cutting. Three-dimensional (3D) reconstruction was thus introduced to overcome the limitations of single-section studies in a dimensional scope. 3D reconstruction finds its roots in embryology, where it enabled the visualisation of spatial relationships of developing systems and organs, and extended to biomedicine, where the observation of individual, stained sections provided only partial understanding of normal and abnormal tissues. However, despite bringing visual awareness, recovering realistic reconstructions is elusive without prior knowledge about the tissue shape. 3D medical imaging made such structural ground truths available. In addition, combining non-invasive imaging with histology unveiled invaluable opportunities to relate macroscopic information to the underlying microscopic properties of tissues through the establishment of spatial correspondences; image registration is one technique that permits the automation of such a process and we describe reconstruction methods that rely on it. It is thereby possible to recover the original topology of histology and lost relationships, gain insight into what affects the signals used to construct medical images (and characterise them), or build high resolution anatomical atlases. This paper reviews almost three decades of methods for 3D histology reconstruction from serial sections, used in the study of many different types of tissue. We first summarise the process that produces digitised sections from a tissue specimen in order to understand the peculiarity of the data, the associated artefacts and some possible ways to minimise them. 
We then describe methods for 3D histology reconstruction with and without the help of 3D medical imaging, along with methods of validation and some applications. We finally attempt to identify the trends and challenges that the field is facing, many of which are derived from the cross-disciplinary nature of the problem as it involves the collaboration between physicists, histopathologists, computer scientists and physicians.
Article
Full-text available
Delineation of the cardiac right ventricle is essential in generating clinical measurements such as ejection fraction and stroke volume. Given manual segmentation on the first frame, one approach to segment right ventricle from all of the magnetic resonance images is to find point correspondence between the sequence of images. Finding the point correspondence with non-rigid transformation requires a deformable image registration algorithm which often involves computationally expensive optimization. The central processing unit (CPU) based implementation of point correspondence algorithm has been shown to be accurate in delineating organs from a sequence of images in recent studies. The purpose of this study is to develop computationally efficient approaches for deformable image registration. We propose a graphics processing unit (GPU) accelerated approach to improve the efficiency. The proposed approach consists of two parallelization components: Parallel Compute Unified Device Architecture (CUDA) version of the deformable registration algorithm; and the application of an image concatenation approach to further parallelize the algorithm. Three versions of the algorithm were implemented: 1) CPU; 2) GPU with only intra-image parallelization (Sequential image registration); and 3) GPU with inter and intra-image parallelization (Concatenated image registration). The proposed methods were evaluated over a data set of 16 subjects. CPU, GPU sequential image, and GPU concatenated image methods took an average of 113.13, 16.50 and 5.96 seconds to segment a sequence of 20 images, respectively. The proposed parallelization approach offered a computational performance improvement of around 19× in comparison to the CPU implementation while retaining the same level of segmentation accuracy. This study demonstrated that the GPU computing could be utilized for improving the computational performance of a non-rigid image registration algorithm without compromising the accuracy.
Article
Full-text available
Objective: To develop a flexible method of separation and quantification of immunohistochemical staining by means of color image analysis. Study design: An algorithm was developed to deconvolve the color information acquired with red-green-blue (RGB) cameras and to calculate the contribution of each of the applied stains based on stain-specific RGB absorption. The algorithm was tested using different combinations of diaminobenzidine, hematoxylin and eosin at different staining levels. Results: Quantification of the different stains was not significantly influenced by the combination of multiple stains in a single sample. The color deconvolution algorithm resulted in comparable quantification independent of the stain combinations as long as the histochemical procedures did not influence the amount of stain in the sample due to bleaching because of stain solubility and saturation of staining was prevented. Conclusion: This image analysis algorithm provides a robust and flexible method for objective immunohistochemical analysis of samples stained with up to three different stains using a laboratory microscope, standard RGB camera setup and the public domain program NIH Image.
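The core of the color deconvolution described above is a change of basis in optical-density space: pixel intensities are converted to optical density (Beer-Lambert law), then projected onto the inverse of the per-stain absorption matrix. A minimal sketch, assuming `stain_matrix` holds one RGB optical-density vector per stain in its rows (real stain vectors would come from calibration or published tables):

```python
import numpy as np

def color_deconvolve(rgb, stain_matrix):
    """Separate stain contributions per pixel: convert RGB (0-255) to
    optical density and solve for per-stain amounts using the inverse
    of the stain absorption matrix (rows = stain OD vectors)."""
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)            # optical density
    return od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)  # stain amounts
```

Because optical densities add linearly while transmitted intensities multiply, the separation has to happen in OD space; this is the same principle used by implementations such as scikit-image's stain separation.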
Article
Full-text available
In this paper, we present a fast method for registration of multiple large, digitised whole-slide images (WSIs) of serial histology sections. Through cross-slide WSI registration, it becomes possible to select and analyse a common visual field across images of several serial sections stained with different protein markers. It is, therefore, a critical first step for any downstream co-localised cross-slide analysis. The proposed registration method uses a two-stage approach, first estimating a fast initial alignment using the tissue sections' external boundaries, followed by an efficient refinement process guided by key biological structures within the visual field. We show that this method is able to produce a high-quality alignment in a variety of circumstances, and demonstrate that the refinement is able to quantitatively improve registration quality. In addition, we provide a case study that demonstrates how the proposed method for cross-slide WSI registration could be used as part of a specific co-expression analysis framework.
Conference Paper
Full-text available
We present an image processing pipeline which accepts a large number of images, containing spatial expression information for thousands of genes in Drosophila imaginal discs. We assume that the gene activations are binary and can be expressed as a union of a small set of non-overlapping spatial patterns, yielding a compact representation of the spatial activation of each gene. This lends itself well to further automatic analysis, with the hope of discovering new biological relationships. Traditionally, the images were labeled manually, which was very time consuming. The key part of our work is a binary pattern dictionary learning algorithm, that takes a set of binary images and determines a set of patterns, which can be used to represent the input images with a small error. We also describe the preprocessing phase, where input images are segmented to recover the activation images and spatially aligned to a common reference. We compare binary pattern dictionary learning to existing alternative methods on synthetic data and also show results of the algorithm on real microscopy images of the Drosophila imaginal discs.
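The encoding side of the binary pattern dictionary idea above can be sketched greedily: given a learned set of non-overlapping boolean patterns, represent an activation image as the union of the patterns that reduce its Hamming reconstruction error. This is an illustrative sketch of the representation, not the paper's learning algorithm:

```python
import numpy as np

def encode_binary(image, patterns):
    """Greedily select dictionary patterns (boolean masks) whose union
    approximates a binary activation image, minimising Hamming error."""
    selected, recon = [], np.zeros_like(image, dtype=bool)
    err = lambda r: np.count_nonzero(r ^ image)  # Hamming distance
    improved = True
    while improved:
        improved = False
        for k, p in enumerate(patterns):
            if k not in selected and err(recon | p) < err(recon):
                selected.append(k)
                recon = recon | p
                improved = True
    return selected, recon
```

The selected index list is the compact per-gene representation the abstract refers to; the dictionary learning step itself would alternate this encoding with pattern updates.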
Article
Full-text available
The form and exact function of the blood vessel network in some human organs, like spleen and bone marrow, are still open research questions in medicine. In this paper we propose a method to register the immunohistological stainings of serial sections of spleen and bone marrow specimens to enable the visualization and visual inspection of blood vessels. As these vary greatly in caliber, from mesoscopic (millimeter-range) to microscopic (a few micrometers, comparable to a single erythrocyte), we need to utilize a multi-resolution approach. Our method is fully automatic; it is based on feature detection and sparse matching. We utilize a rigid alignment and then a non-rigid deformation, iteratively dealing with increasingly smaller features. Our tool pipeline can already deal with series of complete scans at extremely high resolution, up to 620 megapixels. The improvement presented increases the range of represented details down to the smallest capillaries. This paper provides details on the multi-resolution non-rigid registration approach we use. Our application is novel in the way the alignment and subsequent deformations are computed (using features, i.e. “sparse”). The deformations are based on all images in the stack (“global”). We also present volume renderings and a 3D reconstruction of the vascular network in human spleen and bone marrow at a level not possible before. Our registration makes easy tracking of even the smallest blood vessels possible, thus granting experts a better comprehension. A quantitative evaluation of our method and related state-of-the-art approaches with seven different quality measures shows the efficiency of our method. We also provide z-profiles and enlarged volume renderings from three different registrations for visual inspection.
Conference Paper
Full-text available
This paper presents a method for correcting erratic pairwise registrations when reconstructing a volume from 2D histology slices. Due to complex and unpredictable alterations of the content of histology images, a pairwise rigid registration between two adjacent slices may fail systematically. Conversely, a neighbouring registration, which potentially involves one of these two slices, will work. This grounds our approach: using correct spatial correspondences established through neighbouring registrations to account for direct failures. We propose to search the best alignment of every couple of adjacent slices from a finite set of transformations that involve neighbouring slices in a transitive fashion. Using the proposed method, we obtained reconstructed volumes with increased coherence compared to the classical pairwise approach, both in synthetic and real data.
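The transitive repair described above can be sketched with homogeneous transforms: when the direct i→i+1 registration is suspect, a candidate alignment is obtained by routing through a neighbouring slice. A minimal sketch, assuming `T[(i, j)]` stores the 3x3 matrix mapping slice i coordinates into slice j (the helper name and data layout are our assumptions):

```python
import numpy as np

def transitive_candidates(T):
    """For each adjacent pair (i, i+1), collect candidate transforms:
    the direct registration plus transitive routes through slice i+2,
    i.e. inv(T[i+1 -> i+2]) @ T[i -> i+2]."""
    cands = {}
    for (i, j), M in T.items():
        if j == i + 1:
            cands.setdefault((i, j), []).append(M)           # direct route
        if j == i + 2 and (i + 1, j) in T:
            cands.setdefault((i, i + 1), []).append(
                np.linalg.inv(T[(i + 1, j)]) @ M)            # transitive route
    return cands
```

The paper then selects the best candidate per pair by an alignment score; here only the candidate construction is shown.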
Article
Full-text available
Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of Drosophila brain neural tissues, serial confocal laser scanning microscopic images of a Drosophila brain, serial histopathological images of renal cortical tissues and a synthetic test case. The results show that the presented fully automatic method is promising for reassembling continuous volumes and minimizing artificial deformations for all data, and it outperforms four state-of-the-art 3D registration techniques by consistently producing solid 3D reconstructed anatomies with fewer discontinuities and deformations.
Article
Full-text available
Most medical image registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package.
Conference Paper
Full-text available
It is known that image registration is mostly driven by image edges. We have taken this idea to the extreme. In segmented images, we ignore the interior of the components and focus on their boundaries only. Furthermore, by assuming spatial compactness of the components, the similarity criterion can be approximated by sampling only a small number of points on the normals passing through a sparse set of keypoints. This leads to an order-of-magnitude speed advantage in comparison with classical registration algorithms. Surprisingly, despite the crude approximation, the accuracy is comparable. By virtue of the segmentation and by using a suitable similarity criterion, such as mutual information on labels, the method can handle large appearance differences and large variability in the segmentations. The segmentation does not need to be perfectly coherent between images, and over-segmentation is acceptable. We demonstrate the performance of the method on a range of different datasets, including histological slices and Drosophila imaginal discs, using rigid transformations.
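The normal-sampling criterion can be illustrated roughly as follows: score a candidate transform by how often class labels agree at a few points sampled along the normal through each boundary keypoint. This is a toy sketch under assumed data layouts (label images as integer arrays, a 3x3 homogeneous transform), not the authors' code; a simple label-agreement fraction stands in for mutual information on labels.

```python
import numpy as np

def normal_samples(keypoints, normals, offsets):
    """Points sampled along the normal through each boundary keypoint."""
    return keypoints[:, None, :] + offsets[None, :, None] * normals[:, None, :]

def label_agreement(seg_fixed, seg_moving, keypoints, normals, transform,
                    offsets=(-2.0, -1.0, 1.0, 2.0)):
    """Crude similarity: fraction of normal samples whose class label in
    the fixed segmentation matches the label at the transformed position
    in the moving segmentation (nearest-pixel lookup)."""
    pts = normal_samples(keypoints, normals, np.array(offsets)).reshape(-1, 2)
    warped = pts @ transform[:2, :2].T + transform[:2, 2]

    def lookup(seg, p):
        ij = np.clip(np.rint(p).astype(int), 0, np.array(seg.shape) - 1)
        return seg[ij[:, 0], ij[:, 1]]

    return np.mean(lookup(seg_fixed, pts) == lookup(seg_moving, warped))
```

A registration loop would evaluate this score for each candidate rigid transform and keep the maximizer; only a handful of samples per keypoint are touched, which is the source of the speed advantage.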
Article
Full-text available
Histology is the microscopic inspection of plant or animal tissue. It is a critical component in diagnostic medicine and a tool for studying the pathogenesis and biology of processes such as cancer and embryogenesis. Tissue processing for histology has become increasingly automated, drastically increasing the speed at which histology labs can produce tissue slides for viewing. Another trend is the digitization of these slides, allowing them to be viewed on a computer rather than through a microscope. Despite these changes, much of the routine analysis of tissue sections remains a painstaking, manual task that can only be completed by highly trained pathologists at a high cost per hour. There is, therefore, a niche for image analysis methods that can automate some aspects of this analysis. These methods could also automate tasks that are prohibitively time-consuming for humans, e.g., discovering new disease markers from hundreds of whole-slide images (WSIs) or precisely quantifying tissues within a tumor.
Conference Paper
Full-text available
Most registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of non-linear registration, but little work has been done in the context of global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed-squares regression. The proposed method is suitable for multi-modal registration and is robust to outliers in the input images. The symmetric framework is compared to the original asymmetric block-matching technique, outperforming it in terms of accuracy and robustness.
Article
Full-text available
Image registration of biological data is challenging because complex deformation problems are common. Deformation effects can arise during individual data preparation processes and involve morphological deformations, stain variations, stain artifacts, rotation, translation, and missing tissues. The combined deformation effects tend to make existing automatic registration methods perform poorly. In our experiments on serial histopathological images, six state-of-the-art image registration techniques, including TrakEM2, SURF + affine transformation, UnwarpJ, bUnwarpJ, CLAHE + bUnwarpJ and BrainAligner, achieve averaged accuracies no greater than 70%, whereas the proposed method achieves 91.49% averaged accuracy. The proposed method has also been demonstrated to be significantly better at aligning laser scanning microscope brain images and serial ssTEM images than the benchmark automatic approaches (p < 0.001). The contribution of this study is to introduce a fully automatic, robust and fast method for 2D image registration.
Conference Paper
Full-text available
We describe an automatic method for fast registration of images with very different appearances. The images are jointly segmented into a small number of classes, the segmented images are registered, and the process is repeated. The segmentation calculates feature vectors on superpixels and then finds a softmax classifier maximizing mutual information between class labels in the two images. For speed, the registration considers a sparse set of rectangular neighborhoods on the interfaces between classes. A triangulation is created, with spatial regularization handled by pairwise spring-like terms on the edges. The optimal transformation is found globally using loopy belief propagation. Multiresolution helps to improve speed and robustness. Our main application is registering stained histological slices, which are large and differ in both local and global appearance. We show that our method has comparable accuracy to standard pixel-based registration, while being faster and more general.
Article
Full-text available
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question of whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them on 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured against (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools and public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation.
Conference Paper
Full-text available
The analysis of protein-level multigene expression signature maps computed from the fusion of differently stained immunohistochemistry images is an emerging tool in cancer management. Creating these maps requires registering sets of histological images, a challenging task due to their large size, the non-linear distortions existing between consecutive sections, and the fact that the images correspond to different histological stains and thus may have very different appearance. In this manuscript, we present a novel segmentation-based registration algorithm that exploits a multi-class pyramid and optimizes a fuzzy class assignment specially designed for this task. Compared to a standard nonrigid registration, the proposed method achieves improved matching on both synthetic and real histological images of cancer lesions.
Article
Full-text available
Registration of histopathology images of consecutive tissue sections stained with different histochemical or immunohistochemical stains is an important step in a number of application areas, such as the investigation of the pathology of a disease, validation of MRI sequences against tissue images, multi-scale physical modelling, etc. In each case, information from each stain needs to be spatially aligned and combined to ascertain physical or functional properties of the tissue. However, in addition to the gigabyte-sized images and non-rigid distortions present in the tissue, a major challenge in registering differently stained histology image pairs is their dissimilar structural appearance, because different stains highlight different substances in the tissue. In this paper, we address this challenge by developing an unsupervised content classification method which generates multi-channel probability images from a roughly aligned image pair. Each channel corresponds to one automatically identified content class. The probability images enhance the structural similarity between image pairs. By integrating the classification method into a multi-resolution block-matching-based non-rigid registration scheme (our previous work [9]), we improve the performance of registering multi-stained histology images. Evaluation was conducted on 77 histological image pairs taken from three liver specimens and one intervertebral disc specimen. In total, six types of histochemical stains were tested. We evaluated our method against the same registration method without the classification algorithm (intensity-based registration) and against state-of-the-art mutual-information-based registration. Superior results are obtained with the proposed method.
Article
Full-text available
The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work, a set of histology images is registered to the corresponding optical blockface images to form a histology volume. Multi-modality fiducial markers are then used to validate the alignment of the histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations, this fiducial marker is visible in medical images and optical blockface images, and it can also be localized in histology images. These properties make the marker suitable for validating the registration techniques used for histology image alignment. This paper reports the accuracy of a histology image registration approach by calculating the target registration error using these fiducial markers.
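Target registration error from fiducials reduces to a simple computation: map each marker from the histology frame through the estimated transform and measure its distance to the same marker located in the blockface frame. A minimal sketch, assuming 2D point sets and a 3x3 homogeneous transform:

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Per-fiducial TRE: Euclidean distance between each marker located
    in the fixed (blockface) image and its registered position from the
    moving (histology) image; `transform` maps moving -> fixed."""
    h = np.hstack([moving_pts, np.ones((len(moving_pts), 1))])
    warped = (h @ transform.T)[:, :2]
    return np.linalg.norm(warped - fixed_pts, axis=1)
```

Reporting typically uses the mean or median of the returned distances, often normalized by the image diagonal as in the challenge above.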
Article
Full-text available
Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications). This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H&E), Sirius Red, and Cytokeratin (CK) 7. A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.
Article
Full-text available
Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.
Article
Full-text available
For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.
Article
Full-text available
Anatomy of large biological specimens is often reconstructed from serially sectioned volumes imaged by high-resolution microscopy. We developed a method to reassemble a continuous volume from such large section series that explicitly minimizes artificial deformation by applying a global elastic constraint. We demonstrate our method on a series of transmission electron microscopy sections covering the entire 558-cell Caenorhabditis elegans embryo and a segment of the Drosophila melanogaster larval ventral nerve cord.
Article
Full-text available
This paper presents a review of recent as well as classic image registration methods. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Registration geometrically aligns the two images (the reference and sensed images). The reviewed approaches are classified according to their nature (area-based and feature-based) and according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. The main contributions, advantages, and drawbacks of the methods are mentioned. Problematic issues of image registration and an outlook for future research are also discussed. The major goal of the paper is to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application areas.
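The last two of the four basic steps (mapping function design, and transformation with resampling) can be sketched in a few lines; feature detection and matching are assumed to have already produced point correspondences, and the least-squares affine plus nearest-neighbour warp below are illustrative choices, not prescriptions from the survey.

```python
import numpy as np

def estimate_affine(src, dst):
    """Mapping function design: least-squares 2D affine fitted to
    matched feature points (detection and matching assumed done)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return np.vstack([A.T, [0.0, 0.0, 1.0]])   # 3x3, maps src -> dst

def warp_nearest(image, A, out_shape):
    """Transformation and resampling: pull every output pixel back
    through the inverse mapping, nearest-neighbour interpolation;
    out-of-bounds pixels are left at zero."""
    rr, cc = np.meshgrid(*(np.arange(s) for s in out_shape), indexing="ij")
    grid = np.stack([rr.ravel(), cc.ravel(), np.ones(rr.size)])
    src = np.linalg.inv(A) @ grid                # inverse mapping
    ij = np.rint(src[:2]).astype(int)
    ok = ((ij >= 0) & (ij < np.array(image.shape)[:, None])).all(axis=0)
    out = np.zeros(out_shape, dtype=image.dtype)
    flat = out.reshape(-1)
    flat[ok] = image[ij[0, ok], ij[1, ok]]
    return out
```

Inverse mapping (iterating over output pixels rather than input pixels) is the standard way to avoid holes in the resampled image.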
Article
In this paper, we present an easy-to-follow procedure for the analysis of tissue sections from 3D cell cultures (spheroids) by matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) and laser scanning confocal microscopy (LSCM). MALDI MSI was chosen to detect the distribution of the drug of interest, while fluorescence immunohistochemistry (IHC) followed by LSCM was used to localize the cells featuring specific markers of viability, proliferation, apoptosis and metastasis. The overlay of the mass spectrometry (MS) and IHC spheroid images, typically without any morphological features, required fiducial-based coregistration. The MALDI MSI protocol was optimized in terms of fiducial composition and antigen epitope preservation to allow MALDI MSI to be performed and directly followed by IHC analysis on exactly the same spheroid section. Once MS and IHC images were coregistered, the quantification of the MS and IHC signals was performed by an algorithm evaluating signal intensities along equidistant layers from the spheroid boundary to its center. This accurate colocalization of MS and IHC signals showed limited penetration of the clinically tested drug perifosine into spheroids during a 24 h period, revealing the fraction of proliferating and promi