Article

Abstract

The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods could successfully register over 98% of all landmarks and their mean landmark registration accuracy (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions and all images are available for download.
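The challenge metric normalizes each landmark error by the image diagonal (rTRE), so that slides of very different sizes can be compared and ranked together. A minimal sketch of such a computation is given below; it is illustrative only, not the challenge's official evaluation code, and the function and variable names are our own.

```python
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """Relative target registration error (rTRE): Euclidean landmark distance
    normalized by the image diagonal, expressed as a fraction of the diagonal."""
    warped = np.asarray(warped_landmarks, dtype=float)
    target = np.asarray(target_landmarks, dtype=float)
    diagonal = np.sqrt(image_shape[0] ** 2 + image_shape[1] ** 2)
    errors = np.linalg.norm(warped - target, axis=1)
    return errors / diagonal

# Example: three landmarks on a 20000 x 15000 pixel slide.
rtre = relative_tre([[100, 200], [5000, 7000], [12000, 3000]],
                    [[110, 195], [5020, 7010], [12015, 2990]],
                    image_shape=(15000, 20000))
print(rtre.mean(), np.median(rtre))  # aggregate statistics similar to those used for ranking
```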


... Recently, deep learning (DL) has made big leaps in many fields of image processing [15]. In particular, there are some studies on histological image registration based on DL [16]- [18]. One method proposed by Zhao et al. as mentioned in [16] is based on a supervised convolutional neural network (CNN) with manually labeled key points. ...
... In particular, there are some studies on histological image registration based on DL [16]- [18]. One method proposed by Zhao et al. as mentioned in [16] is based on a supervised convolutional neural network (CNN) with manually labeled key points. In such supervised learning-based methods, the ground-truth data with known correspondences across the set of training images is required [19]. ...
... There are repetitive textures in histological images because of homogeneous tissues; 3) sections may be partially missing as a result of processing tissue sections [16]. Given these challenges, common strategies for optimizing registration performance do not perform well on histological images [16]. ...
Article
Full-text available
Registration of multiple stained images is a fundamental task in histological image analysis. In supervised methods, obtaining ground-truth data with known correspondences is laborious and time-consuming; thus, unsupervised methods are desirable. Unsupervised methods ease the burden of manual annotation, but often at the cost of inferior results. In addition, registration of histological images suffers from appearance variance due to multiple staining, repetitive texture, and sections missing during tissue preparation. To deal with these challenges, we propose an unsupervised structural feature guided convolutional neural network (SFG). Structural features are robust to multiple staining, and the combination of low-resolution rough structural features and high-resolution fine structural features can overcome repetitive texture and missing sections, respectively. SFG consists of two structural consistency components corresponding to the two forms of structural features, i.e., a dense structural component and a sparse structural component. The dense structural component uses structural feature maps of the whole image as structural consistency constraints, which represent local contextual information. The sparse structural component utilizes the distances between automatically matched key points as structural consistency constraints, because the matched key points in an image pair emphasize the matching of significant structures, which conveys global information. In addition, a multi-scale strategy is used in both the dense and sparse structural components to make full use of structural information at low and high resolution, overcoming repetitive texture and missing sections. The proposed method was evaluated on a public histological dataset (ANHIR) and ranked first as of Jan 18th, 2022.
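To make the idea of the two structural consistency constraints concrete, the sketch below combines a dense term (difference between structural feature maps of the fixed and warped images) with a sparse term (distance between matched key points after warping). This is our own simplified PyTorch illustration under assumed tensor shapes and hypothetical helper names, not the SFG authors' implementation.

```python
import torch
import torch.nn.functional as F

def dense_structural_loss(feat_fixed, feat_warped):
    # Mean squared difference between structural feature maps of the
    # fixed image and the warped moving image (local context term).
    return F.mse_loss(feat_warped, feat_fixed)

def sparse_structural_loss(kpts_fixed, kpts_moving, displacement):
    # kpts_*: (N, 2) matched key-point coordinates in (x, y), normalized to [-1, 1].
    # displacement: (1, 2, H, W) dense displacement field in the same normalized units.
    grid = kpts_moving.view(1, -1, 1, 2)                    # sample the field at moving key points
    disp = F.grid_sample(displacement, grid, align_corners=True)
    disp = disp.squeeze(-1).squeeze(0).t()                  # (N, 2)
    warped_kpts = kpts_moving + disp
    return torch.mean(torch.linalg.norm(warped_kpts - kpts_fixed, dim=1))

def structural_consistency_loss(feat_fixed, feat_warped, kpts_fixed, kpts_moving,
                                displacement, alpha=1.0, beta=0.1):
    # alpha/beta weights are placeholders; the dense term carries local context,
    # the sparse term emphasizes globally significant matched structures.
    return (alpha * dense_structural_loss(feat_fixed, feat_warped)
            + beta * sparse_structural_loss(kpts_fixed, kpts_moving, displacement))
```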
... The problem is challenging due to: (i) a very high resolution of the images, (ii) complex, large deformations, (iii) difference in the appearance and partially missing data. A dedicated challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR) [1,2,3] was organized in conjunction with the IEEE ISBI 2019 conference to address the problem and compare algorithms developed by different researchers. The challenge organizers provided a high quality and open dataset [1,4,5,6,7], manually annotated by experts and reasonably divided into training and evaluation sets. ...
... The challenge participants proposed several different algorithms, mostly using the classical, iterative approach to the image registration [2]. The three best scoring methods were quite similar. ...
... We use the median of rTRE to compare the methods, following the original challenge rules [2,3]. ...
Article
Full-text available
The use of different dyes during histological sample preparation reveals distinct tissue properties and may improve the diagnosis. Nonetheless, the staining process deforms the tissue slides and registration is necessary before further processing. The importance of this problem led to organizing an open challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided 481 image pairs and a server-side evaluation platform making it possible to reliably compare the proposed algorithms. The majority of the methods proposed for the challenge were based on classical, iterative image registration, resulting in high computational load and arguable usefulness in clinical practice due to the long analysis time. In this work, we propose a deep learning-based unsupervised nonrigid registration method that provides results comparable to the solutions of the best scoring teams, while being significantly faster during inference. We propose a multi-level, patch-based training and inference scheme that makes it possible to register images of almost any size, up to the highest resolution provided by the challenge organizers. The median target registration error is close to 0.2% of the image diagonal, while the average registration time, including data loading and initial alignment, is below 3 seconds. We freely release both the training and inference code, making the results fully reproducible.
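Unsupervised registration networks of this kind are typically trained with an image-similarity term plus a smoothness penalty on the predicted displacement field rather than with ground-truth correspondences. The sketch below shows one common choice, negative normalized cross-correlation with a diffusion regularizer; it illustrates the general form of such a training objective and is not the authors' exact loss.

```python
import torch

def ncc_loss(fixed, warped, eps=1e-8):
    # Negative normalized cross-correlation between two (B, 1, H, W) tensors.
    f = fixed - fixed.mean(dim=(2, 3), keepdim=True)
    w = warped - warped.mean(dim=(2, 3), keepdim=True)
    num = (f * w).sum(dim=(2, 3))
    den = torch.sqrt((f ** 2).sum(dim=(2, 3)) * (w ** 2).sum(dim=(2, 3)) + eps)
    return -(num / den).mean()

def diffusion_regularizer(displacement):
    # Penalize spatial gradients of a (B, 2, H, W) displacement field.
    dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]
    dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]
    return (dy ** 2).mean() + (dx ** 2).mean()

def unsupervised_loss(fixed, warped, displacement, lam=0.5):
    # lam balances similarity against smoothness; its value here is a placeholder.
    return ncc_loss(fixed, warped) + lam * diffusion_regularizer(displacement)
```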
... The problem is challenging due to: (i) a very high resolution of the images, (ii) complex, large deformations, (iii) difference in the appearance and partially missing data. A dedicated challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR) [1,2,3] was organized in conjunction with the IEEE ISBI 2019 conference to address the problem and compare algorithms developed by different researchers. The challenge organizers provided a high quality and open dataset [1,4,5,6,7], manually annotated by experts and reasonably divided into training and evaluation sets. ...
... The challenge participants proposed several different algorithms, mostly using the classical, iterative approach to the image registration [2]. The three best scoring methods were quite similar. ...
... We use the median of rTRE to compare the methods, following the original challenge rules [2,3]. ...
Chapter
Full-text available
The use of different dyes during histological sample preparation reveals distinct tissue properties and may improve the diagnosis. Nonetheless, the staining process deforms the tissue slides and registration is necessary before further processing. The importance of this problem led to organizing an open challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided 481 image pairs and a server-side evaluation platform making it possible to reliably compare the proposed algorithms. The majority of the methods proposed for the challenge were based on classical, iterative image registration, resulting in high computational load and arguable usefulness in clinical practice due to the long analysis time. In this work, we propose a deep learning-based unsupervised nonrigid registration method that provides results comparable to the solutions of the best scoring teams, while being significantly faster during inference. We propose a multi-level, patch-based training and inference scheme that makes it possible to register images of almost any size, up to the highest resolution provided by the challenge organizers. The median target registration error is close to 0.2% of the image diagonal, while the average registration time, including data loading and initial alignment, is below 3 s. We freely release both the training and inference code, making the results fully reproducible.
... formation available in structures revealed by different dyes from consecutive slices may be useful for segmentation, classification, grading, or 3-D reconstruction. However, the registration of differently stained histology images is challenging because of: (i) the necessity of fully automatic registration, (ii) the extremely large size of histopathology images, (iii) real-time requirements in many clinical situations, (iv) the problem of missing data due to differences in the tissue appearance, and (v) the lack of unique features due to a repetitive texture in these large images [1]. ...
... The importance of histology registration was a motivation to organize an open challenge called Automatic Non-rigid Histological Image Registration (ANHIR) [1,4,5]. It was organized jointly with the IEEE ISBI 2019 conference (International Symposium on Biomedical Imaging), and more than 250 participants from more than 30 countries registered for the competition. ...
... It was organized jointly with the IEEE ISBI 2019 conference (International Symposium on Biomedical Imaging), and more than 250 participants from more than 30 countries registered for the competition. Surprisingly, almost all of the best scoring methods were based on classical, iterative image registration, resulting in long analysis times [1]. Even though the registration accuracy of the proposed methods is close to the level of human annotation, the computational time is relatively high, thus limiting their clinical usefulness. ...
Article
Background and objective: The use of several stains during histology sample preparation can be useful for fusing complementary information about different tissue structures. It reveals distinct tissue properties that, combined, may be useful for grading, classification, or 3-D reconstruction. Nevertheless, since the slide preparation is different for each stain and the procedure uses consecutive slices, the tissue undergoes complex and possibly large deformations. Therefore, a nonrigid registration is required before further processing. The nonrigid registration of differently stained histology images is a challenging task because: (i) the registration must be fully automatic, (ii) the histology images are extremely high-resolution, (iii) the registration should be as fast as possible, (iv) there are significant differences in the tissue appearance, and (v) there are not many unique features due to a repetitive texture. Methods: In this article, we propose a deep learning-based solution to histology registration. We describe a registration framework dedicated to high-resolution histology images that can perform the registration in real time. The framework consists of an automatic background segmentation, an iterative initial rotation search, and learning-based affine/nonrigid registration. Results: We evaluate our approach using an open dataset provided for the Automatic Non-rigid Histological Image Registration (ANHIR) challenge organized jointly with the IEEE ISBI 2019 conference. We compare our solution to the challenge participants using a server-side evaluation tool provided by the challenge organizers. Following the challenge evaluation criteria, we use the target registration error (TRE) as the evaluation metric. Our algorithm provides registration accuracy close to the best scoring teams (median rTRE 0.19% of the image diagonal) while being significantly faster (the average registration time is about 2 seconds). Conclusions: The proposed framework provides results, in terms of the TRE, comparable to the best-performing state-of-the-art methods. However, it is significantly faster, and thus potentially more useful in clinical practice where a large number of histology images are being processed. The proposed method is of particular interest to researchers requiring an accurate, real-time, nonrigid registration of high-resolution histology images for whom the processing time of traditional, iterative methods is unacceptable. We provide free access to the software implementation of the method, including training and inference code, as well as pretrained models. Since the ANHIR dataset is open, this makes the results fully and easily reproducible.
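The initial rotation search mentioned in the framework can be illustrated by exhaustively rotating a low-resolution version of the moving image and scoring each angle against the fixed image. The sketch below is a generic illustration with assumed helper names and a simple NCC score, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def ncc(a, b):
    # Normalized cross-correlation between two grayscale arrays of equal shape.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def exhaustive_rotation_search(fixed, moving, step_deg=10):
    """Return the rotation angle (degrees) that best aligns `moving` to `fixed`,
    evaluated on low-resolution grayscale images."""
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(0, 360, step_deg):
        rotated = ndimage.rotate(moving, angle, reshape=False, order=1, mode='constant')
        score = ncc(fixed, rotated)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```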
... Automatic, nonrigid registration of histological images acquired using distinct dyes is an important task (Borovec et al 2020). It makes it possible to fuse information from structures of the same tissue that may not be revealed by one of the applied dyes (Kugler et al 2019). ...
... It makes it possible to fuse information from structures of the same tissue that may not be revealed by one of the applied dyes (Kugler et al 2019). Nevertheless, the problem is extremely challenging due to the following aspects: (i) differences in the tissue appearance due to the use of multiple dyes and sample preparation, often leading to the problem of missing data, (ii) the enormously large size of the microscopy images (gigapixel resolution), (iii) large and complex nonrigid deformations, (iv) difficulty in finding globally unique features due to a repetitive texture (Borovec et al 2020, ANHIR Website 2019). ...
... The importance and difficulty of the problem led to organizing a dedicated challenge named Automatic Non-rigid Histological Image Registration (ANHIR) (Borovec et al 2020, ANHIR Website 2019). The challenge was organized jointly with the IEEE ISBI 2019 conference and the organizers provided an open, annotated dataset containing several tissue types acquired using multiple dyes, and an automatic, independent, server-side evaluation tool. ...
Article
Full-text available
The use of multiple dyes during histological sample preparation can reveal distinct tissue properties. However, since the slide preparation differs for each dye, the tissue slides are deformed and a nonrigid registration is required before further processing. The registration of histology images is complicated because of: (i) the high resolution of histology images, (ii) complex, large, nonrigid deformations, (iii) differences in appearance and partially missing data due to the use of multiple dyes. In this work, we propose a multistep, automatic, nonrigid image registration method dedicated to histology samples acquired with multiple stains. The proposed method consists of a feature-based affine registration, an exhaustive rotation alignment, an iterative, intensity-based affine registration, and a nonrigid alignment based on the modality independent neighbourhood descriptor coupled with the Demons algorithm. A dedicated failure detection mechanism is proposed to make the method fully automatic, without the necessity of any manual interaction. The described method was proposed by the AGH team during the Automatic Non-rigid Histological Image Registration (ANHIR) challenge. The ANHIR dataset consists of 481 image pairs annotated by histology experts. Moreover, the ANHIR challenge submissions were evaluated using an independent, server-side evaluation tool. The main evaluation criterion was the target registration error normalized by the image diagonal. The median of median target registration error is below 0.19%. The proposed method is currently the second-best in terms of the average ranking of median target registration error, without statistically significant differences compared to the top-ranked method. We provide open access to the method software and the parameters used, making the results fully reproducible.
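The overall structure of such a pipeline, an intensity-based affine stage followed by a Demons-based nonrigid refinement, can be sketched with SimpleITK as below. This is a condensed, assumed illustration: it omits the feature-based initialization, the MIND descriptors, and the failure-detection mechanism described by the authors, and the parameter values are placeholders.

```python
import SimpleITK as sitk

def affine_then_demons(fixed, moving):
    """Intensity-based affine alignment followed by Demons nonrigid refinement.
    Assumes grayscale 2-D images; differently stained slides would in practice
    first be converted to a comparable representation before the Demons stage."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)

    # Affine stage: Mattes mutual information with a multiresolution schedule.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(2),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetShrinkFactorsPerLevel([4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([2, 1, 0])
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    affine = reg.Execute(fixed, moving)
    moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

    # Nonrigid stage: Demons on the affinely pre-aligned image.
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(2.0)
    displacement = demons.Execute(fixed, moving_affine)
    return affine, sitk.DisplacementFieldTransform(
        sitk.Cast(displacement, sitk.sitkVectorFloat64))
```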
... In the following, we present a 3-step registration pipeline that was ranked first in the ANHIR challenge where image registration methods have been evaluated on consecutive images. A quantitative comparison of state-of-the-art image registration methods applied to histopathology images can be found in the challenge results in [5]. ...
... After registration, a landmark error for the public training data (N pairs = 230) of MMrTRE = 0.19% is reached (see [5] for full challenge results). ...
... These can be explained by a higher level of structural differences in the ANHIR images (see e.g. [5] Fig. S3) and a lower average image resolution where some cases have only a tenth of the resolution of the HyReCo data. ...
Preprint
Full-text available
We compare variational image registration in consecutive and re-stained sections from histopathology. We present a fully-automatic algorithm for non-parametric (nonlinear) image registration and apply it to a previously existing dataset from the ANHIR challenge (230 slide pairs, consecutive sections) and a new dataset (hybrid re-stained and consecutive, 81 slide pairs, ca. 3000 landmarks) which is made publicly available. Registration hyperparameters are obtained on the ANHIR dataset and applied to the new dataset without modification. In the new dataset, landmark errors after registration range from 13.2 micrometers for consecutive sections to 1 micrometer for re-stained sections. We observe that non-parametric registration leads to lower landmark errors in both cases, even though the effect is smaller in re-stained sections. The nucleus-level alignment after non-parametric registration of re-stained sections provides a valuable tool to generate automatic ground-truth for machine learning applications in histopathology.
... In the context of histopathology image analysis, one of the important initial tasks is to visually compare tissue sections of the same subject but of different biomarkers in a single frame. This is achieved by determining a transformation that maximizes the similarity between a reference image and its associated moving image (Borovec et al., 2018; Borovec et al., 2020; Fernandez-Gonzalez et al., 2002; Gupta et al., 2018). With the aim of improving precision and reducing ... [Figure 6: challenging examples of (a) normal and (b) tumor epithelial regions randomly extracted from WSIs stained by AE1AE3 (bright brown) and BerEP4 (dark brown).] ...
... More recently, the Automatic Non-rigid Histological Image Registration (ANHIR) challenge (Borovec et al., 2020) was organized as a part of ISBI 2019 to compare the performance of different image registration models on several different histopathology images (see Figure 1). Most of the models submitted to this grand challenge used similar classical techniques. ...
Article
Full-text available
The increasing adoption of the whole slide image (WSI) technology in histopathology has dramatically transformed pathologists' workflow and allowed the use of computer systems in histopathology analysis. Extensive research in Artificial Intelligence (AI) with a huge progress has been conducted resulting in efficient, effective, and robust algorithms for several applications including cancer diagnosis, prognosis, and treatment. These algorithms offer highly accurate predictions but lack transparency, understandability, and actionability. Thus, explainable artificial intelligence (XAI) techniques are needed not only to understand the mechanism behind the decisions made by AI methods and increase user trust but also to broaden the use of AI algorithms in the clinical setting. From the survey of over 150 papers, we explore different AI algorithms that have been applied and contributed to the histopathology image analysis workflow. We first address the workflow of the histopathological process. We present an overview of various learning‐based, XAI, and actionable techniques relevant to deep learning methods in histopathological imaging. We also address the evaluation of XAI methods and the need to ensure their reliability on the field. This article is categorized under: Application Areas > Health Care
... These deformations are inherently non-invertible and the Jacobian determinant happens to be negative. Finally, we calculated deformation fields using state-of-the-art affine/nonrigid registration methods on an open, medical dataset [13,14]. An exemplary visualization of the applied deformations on a synthetic image is shown in Figure 1. ...
... UMO-2019/32/T/ST6/00065. Part of this study is a numerical simulation study for which no ethical approval was required. Another part was conducted retrospectively using human subject data made available in open access for the ANHIR challenge [13,14]. The authors declare no conflict of interest. ...
Article
Full-text available
Inverting a deformation field is a crucial part of numerous image registration methods and has an important impact on the final registration results. There are methods that work well for small and relatively simple deformations. However, a problem arises when the deformation field consists of complex and large deformations, potentially including folding. For such cases, the state-of-the-art methods fail and the inversion results are unpredictable. In this article, we propose a deep network using the encoder-decoder architecture to improve the inverse calculation. The network is trained using deformations randomly generated with various transformation models and their compositions, with a symmetric inverse consistency error as the cost function. The results are validated using synthetic deformations resembling real ones, as well as deformation fields calculated during registration of real histology data. We show that the proposed method provides an approximate inverse with a lower error than the current state-of-the-art methods.
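For context, the classical baseline such a network is compared against can be written as a fixed-point iteration, and the symmetric inverse consistency error used as the cost function can be evaluated directly on a pair of displacement fields. The sketch below is our own NumPy illustration of these two pieces, with an assumed displacement convention (warped position = x + u(x)); it is not the paper's network.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_field(field, displacement):
    """Sample each component of `field` (2, H, W) at positions x + displacement(x)."""
    h, w = field.shape[1:]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.stack([yy + displacement[0], xx + displacement[1]])
    return np.stack([map_coordinates(field[c], coords, order=1, mode='nearest')
                     for c in range(2)])

def invert_fixed_point(u, n_iter=20):
    """Approximate inverse v of displacement u via v_{k+1}(x) = -u(x + v_k(x))."""
    v = np.zeros_like(u)
    for _ in range(n_iter):
        v = -warp_field(u, v)
    return v

def inverse_consistency_error(u, v):
    """Mean norm of u(x + v(x)) + v(x) and v(x + u(x)) + u(x) (symmetric error)."""
    e1 = warp_field(u, v) + v
    e2 = warp_field(v, u) + u
    return 0.5 * (np.linalg.norm(e1, axis=0).mean() + np.linalg.norm(e2, axis=0).mean())
```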
... The problem is difficult due to: (i) complex, large deformations, (ii) differences in appearance and partially missing data, (iii) a very high resolution of the images. The importance of the problem led to organizing the Automatic Non-rigid Histological Image Registration Challenge (ANHIR) [1][2][3], jointly with the IEEE ISBI 2019 conference. The provided dataset [1,4,5,6,7] consists of 481 image pairs annotated by experts, reasonably divided into a training (230) and an evaluation (251) set. ...
... The proposed algorithm was evaluated using all the image pairs provided for the ANHIR challenge [1][2][3]. The data set is open and can be freely downloaded, so results are fully reproducible. ...
Chapter
Full-text available
The use of different stains for histological sample preparation reveals distinct tissue properties and may result in a more accurate diagnosis. However, as a result of the staining process, the tissue slides are deformed and registration is required before further processing. The importance of this problem led to organizing an open challenge named Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for the challenge participants was to find an initial, global transform before attempting to calculate the final, non-rigid deformation field. This article addresses the problem by proposing a deep network trained in an unsupervised way with good generalization. We propose a method that works well for images with different resolutions and aspect ratios, without the need for image padding, while maintaining a low number of network parameters and a fast forward pass time. The proposed method is orders of magnitude faster than the classical approach based on iterative similarity metric optimization or computer vision descriptors. The success rate is above 98% for both the training set and the evaluation set. We make both the training and inference code freely available.
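One reason a predicted global (affine) transform can be applied to images of arbitrary resolution and aspect ratio without padding is that it can be expressed in normalized coordinates. The sketch below shows this with PyTorch's affine_grid/grid_sample; it is a generic illustration of the principle, not the proposed network.

```python
import torch
import torch.nn.functional as F

def apply_affine(moving, theta, out_size):
    """Warp `moving` (B, C, H, W) with a predicted 2x3 affine `theta` (B, 2, 3).
    Because affine_grid works in normalized [-1, 1] coordinates, the same theta
    can be applied at any resolution or aspect ratio without padding."""
    b, c, _, _ = moving.shape
    grid = F.affine_grid(theta, size=(b, c, *out_size), align_corners=False)
    return F.grid_sample(moving, grid, mode='bilinear', align_corners=False)

# Example: the identity transform applied to images of two different sizes.
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
small = torch.rand(1, 3, 256, 384)
large = torch.rand(1, 3, 1024, 768)
print(apply_affine(small, theta, (256, 384)).shape,
      apply_affine(large, theta, (1024, 768)).shape)
```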
... Mostly, data is stored in a centralized location for collaboration. A number of data-sharing repositories exist for medical fields, e.g., radiology, pathology, and genomics [2,6,10]. Collaborative data sharing (CDS) has several issues, i.e., patient privacy, data ownership, and international laws [8]. ...
... On each iteration, the model is trained locally, and then the weights/gradients are transferred to the server (Algorithm 1 - lines 3 to 9). The server receives the model and then aggregates the weights of all the clients (Algorithm 1 - lines 5 to 6). This helps the model learn the diverse institute data and transfer it to all the client connections. ...
Article
Full-text available
In the Internet of Medical Things (IoMT), collaboration among institutes can help complex medical and clinical analysis of disease. Deep neural networks (DNN) require training models on large, diverse patient populations to achieve expert clinician-level performance. Clinical studies do not contain diverse patient populations for analysis due to limited availability and scale. DNN models trained on limited datasets thereby have constrained clinical performance upon deployment at a new hospital. Therefore, there is significant value in increasing the availability of diverse training data. This research proposes institutional data collaboration alongside an adversarial evasion method to keep the data secure. The model uses a federated learning approach to share model weights and gradients. The local model first studies the unlabeled samples, classifying them as adversarial or normal. The method then uses a centroid-based clustering technique to cluster the sample images. After that, the model predicts the output of the selected images, and active learning methods are implemented to choose a sub-sample for the human annotation task. The expert within the domain takes the input and confidence score and validates the samples for the model's training. The model re-trains on the new samples and sends the updated weights across the network for collaboration purposes. We use the InceptionV3 and VGG16 models under fabricated inputs for simulating Fast Gradient Sign Method (FGSM) attacks. The model was able to evade attacks and achieve a high accuracy rating of 95%.
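The server-side aggregation step referred to in the snippets above (Algorithm 1) is typically a weighted average of the client weights. A minimal federated-averaging sketch is shown below; it illustrates only the aggregation and omits the adversarial filtering, clustering, and active-learning parts of the described method.

```python
import copy
import torch

def federated_average(client_state_dicts, client_sample_counts):
    """Weighted average of client model state_dicts (FedAvg-style aggregation)."""
    total = float(sum(client_sample_counts))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sample_counts))
    return global_state

# Usage: after a local training round, the server aggregates and redistributes, e.g.
# global_model.load_state_dict(
#     federated_average([m.state_dict() for m in client_models],
#                       [len(d) for d in client_datasets]))
```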
... The latter group includes techniques with rigid or affine body transformation. The cost functions are based on histology feature matching (such as blood vessels or small tissue voids) [7], mutual information of pixel intensities [23], and many others [2,3,18,28]. More sophisticated methods include tissue mask boundary alignment [19], background segmentation with B-spline registration [12], step-wise registrations of image patches with kernel density estimation, and hierarchical resolution regression [16]. ...
... Registration methods tested in this study originate from two families: feature-based and intensity-based. The feature-based methods such as the B-spline elastic [3] (ELT) and moving least squares [34] (MLS) utilize the scale invariant feature transform (SIFT) [21] which finds grid points in two images subjected to registration. Since finding the points is carried out separately for each image, the points may not correspond in terms of number and location in the two images. ...
... Challenges (210,223) focus on localizing specific landmarks, including the amniotic and the heart, using ultrasound and MR images. Challenge (211) focuses on the classification of surgery videos. ...
Preprint
Full-text available
The astounding success made by artificial intelligence (AI) in healthcare and other fields proves that AI can achieve human-like performance. However, success always comes with challenges. Deep learning algorithms are data-dependent and require large datasets for training. The lack of data in the medical imaging field creates a bottleneck for the application of deep learning to medical image analysis. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require many resources, such as human expertise and funding. That makes it difficult for non-medical researchers to have access to useful and large medical data. Thus, as comprehensive as possible, this paper provides a collection of medical image datasets with their associated challenges for deep learning research. We have collected information of around three hundred datasets and challenges mainly reported between 2013 and 2020 and categorized them into four categories: head & neck, chest & abdomen, pathology & blood, and "others". Our paper has three purposes: 1) to provide a most up to date and complete list that can be used as a universal reference to easily find the datasets for clinical image analysis, 2) to guide researchers on the methodology to test and evaluate their methods' performance and robustness on relevant datasets, 3) to provide a "route" to relevant algorithms for the relevant medical topics, and challenge leaderboards.
... In order to evaluate our algorithm, we choose four datasets, two of them have 2D images: FIRE [33] and ANHIR [34] and the other two are 3D volumes: IXI [35] and ADNI [36]. FIRE and IXI are used to perform mono-modal registration, while ANHIR and ADNI are used for multi-modal registration. ...
Preprint
Full-text available
Autograd-based software packages have recently renewed interest in image registration using homography and other geometric models by gradient descent and optimization, e.g., AirLab and DRMIME. In this work, we emphasize using the complex matrix exponential (CME) over the real matrix exponential to compute transformation matrices. CME is theoretically more suitable and practically provides faster convergence, as our experiments show. Further, we demonstrate that the use of an ordinary differential equation (ODE) as an optimizable dynamical system can adapt the transformation matrix more accurately to the multi-resolution Gaussian pyramid for image registration. Our experiments include four publicly available benchmark datasets, two of them 2D and the other two being 3D. Experiments demonstrate that our proposed method yields significantly better registration compared to a number of off-the-shelf, popular, state-of-the-art image registration toolboxes.
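The core idea of parameterizing a transformation matrix through a matrix exponential and optimizing it by gradient descent can be sketched as follows. For simplicity the example uses the real matrix exponential on an assumed 8-parameter generator basis for a 2-D homography; the paper's complex-valued variant and ODE formulation are not reproduced here, and the target is a toy placeholder.

```python
import torch

# A homography can be parameterized as H = exp(sum_i p_i * B_i), where the B_i are
# generator matrices; optimizing p keeps H smoothly parameterized during descent.
params = torch.zeros(8, requires_grad=True)            # 8 DoF of a 2-D homography
basis = torch.zeros(8, 3, 3)
for k, (i, j) in enumerate([(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1)]):
    basis[k, i, j] = 1.0

def homography(p):
    return torch.linalg.matrix_exp((p.view(8, 1, 1) * basis).sum(dim=0))

optimizer = torch.optim.Adam([params], lr=1e-2)
target = torch.tensor([[1.05, 0.02, 3.0],               # toy target matrix for illustration
                       [0.01, 0.98, -2.0],
                       [0.0,  0.0,  1.0]])
for _ in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(homography(params), target)
    loss.backward()
    optimizer.step()
```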
... For this work, we used publicly available data from the Automatic Non-rigid Histological Image Registration (ANHIR) challenge 2019 [5,24,25]. This challenge was set up to compare the performance of different registration algorithms on a varied set of microscopy histology images. ...
Article
Full-text available
We present a novel method to assess the variations in protein expression and spatial heterogeneity of tumor biopsies with application in computational pathology. This was done using different antigen stains for each tissue section and proceeding with a complex image registration followed by a final step of color segmentation to detect the exact location of the proteins of interest. For proper assessment, the registration needs to be highly accurate for the careful study of the antigen patterns. However, accurate registration of histopathological images comes with three main problems: the high amount of artifacts due to the complex biopsy preparation, the size of the images, and the complexity of the local morphology. Our method manages to achieve an accurate registration of the tissue cuts and segmentation of the positive antigen areas.
... The recent ANHIR challenge (Borovec et al., 2020) uses multiple data sets, where the ground truth is obtained by external markers on the histological series. Luo et al. (2020b) discuss the implications of such manual evaluations. ...
Preprint
Full-text available
Registration of histological serial sections is a challenging task. Serial sections exhibit distortions from sectioning. Missing information on how the tissue looked before cutting makes a realistic validation of 2D registrations impossible. This work proposes methods for more realistic evaluation of registrations. Firstly, we survey existing registration and validation efforts. Secondly, we present a methodology to generate test data for registrations. We distort an innately registered image stack in the manner similar to the cutting distortion of serial sections. Test cases are generated from existing 3D data sets, thus the ground truth is known. Thirdly, our test case generation premises evaluation of the registrations with known ground truths. Our methodology for such an evaluation technique distinguishes this work from other approaches. We present a full-series evaluation across six different registration methods applied to our distorted 3D data sets of animal lungs. Our distorted and ground truth data sets are made publicly available.
... Three types of mutual features would be introduced here: point, intensity and structure. Point feature is prevalent so that many approaches of point matching are used for mutual feature extraction [6]. Although some point-feature-based registration algorithms do not require any explicit set of point matches, the feature descriptors quantifying the degree of point correspondence still control the precision of registration directly. ...
Article
Full-text available
Non‐rigid registration, performing well in all‐weather and all‐day/night conditions, directly determines the reliability of visible (VIS) and infrared (IR) image fusion. On account of non‐planar scenes and differences between IR and VIS cameras, non‐linear transformation models are more helpful for non‐rigid image registration than the affine model. However, most non‐linear models currently used for non‐rigid registration are constructed from control points. To address the limited adaptiveness and generalization of control‐point‐based models, an adaptive enhanced affine transformation (AEAT) is proposed for image registration, generalizing the affine model from the linear to the non‐linear case. Firstly, a Gaussian weighted shape context, measuring the structural similarity between multimodal images, is designed to extract putative matches from edge maps of IR and VIS images. Secondly, to implement global image registration, the optimal parameters of the AEAT model are estimated from putative matches by a strategy of subsection optimization. Experimental results show that this approach is robust in different registration tasks and outperforms several competitive methods in registration precision and speed.
... Deep Learning (DL) describes a subset of Machine Learning (ML) algorithms built upon the concepts of neural networks [1]. Over the last decade, DL has shown great promise in various problem domains such as semantic segmentation [2][3][4][5], quantum physics [6], segmentation of regions of interest (such as tumors) in medical images [7][8][9][10][11][12], medical landmark detection [13,14], image registration [15,16], predictive modelling [17], among many others [18][19][20][21][22]. The majority of this vast research was enabled by the abundance of DL libraries made open source and publicly available, with some of the major ones being TensorFlow (developed by Google) [23] and PyTorch [24] by Facebook (originally developed as Caffe [25] by the University of California at Berkeley), which represent the most widely used libraries that facilitated DL research. ...
Preprint
Full-text available
Deep Learning (DL) has greatly highlighted the potential impact of optimized machine learning in both the scientific and clinical communities. The advent of open-source DL libraries from major industrial entities, such as TensorFlow (Google), PyTorch (Facebook), and MXNet (Apache), further contributes to DL promises on the democratization of computational analytics. However, increased technical and specialized background is required to develop DL algorithms, and the variability of implementation details hinders their reproducibility. Towards lowering the barrier and making the mechanism of DL development, training, and inference more stable, reproducible, and scalable, without requiring an extensive technical background, this manuscript proposes the Generally Nuanced Deep Learning Framework (GaNDLF). With built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes, and multi-GPU training, as well as the ability to work with both radiographic and histologic imaging, GaNDLF aims to provide an end-to-end solution for all DL-related tasks, to tackle problems in medical imaging and provide a robust application framework for deployment in clinical workflows.
... In recent years, numerous contests for the automatic processing of such images have been launched [1]. The medical field, and more specifically computational pathology has recently received increasing attention, partially due to the competition emerging, where deep learning approaches have been awarded first place in various challenges: MoNuSAC2020 [2], BACH18 [3], ANHIR2019 [4], CAMELYON17 [5], CAMELYON16 [6] and TUPAC16 [7]. In general, competition winners have used one or more architectures that have performed particularly well in other popular image recognition competitions. ...
Conference Paper
Full-text available
Recent advances in deep learning and image sensors present the opportunity to unify these two complementary fields of research for a better resolution of the image classification problem. Deep learning provides image processing with the representational power necessary to improve the performance of image classification methods. Deep networks are considered the most powerful tool in visual recognition, notably in the automated analysis of histological slides using Whole Slide Image (WSI) scanning techniques. In this paper, we propose a comparative analysis for efficient localization of tumor regions in kidney histopathology WSIs by employing state-of-the-art convolutional deep learning networks such as ResNet and VGGNet, which can be fine-tuned to the target problem and exploit powerful transfer learning techniques at low computational cost. Experimental results on an independent set of 17,300 images derived from 12 WSIs show that ResNet50 and VGG19 combined with VGG16 achieve almost 97% and 96% accuracy, respectively, for classifying tumor and normal tissues to identify relevant regions of interest (ROIs).
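The transfer-learning setup described, fine-tuning an ImageNet-pretrained CNN for two-class tumor/normal tile classification, can be sketched with torchvision as below. The hyperparameters and the choice to freeze the backbone are illustrative assumptions, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace the classifier head
# with a 2-class output (tumor vs. normal tile).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():                 # freeze the backbone (transfer learning)
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new head is trainable by default

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # One optimization step on a mini-batch of tiles and their labels.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```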
... In recent years, numerous contests for the automatic processing of such images have been launched [1]. The medical field, and more specifically computational pathology has recently received increasing attention, partially due to the competition emerging, where deep learning approaches have been awarded first place in various challenges: MoNuSAC2020 [2], BACH18 [3], ANHIR2019 [4], CAMELYON17 [5], CAMELYON16 [6] and TUPAC16 [7]. In general, competition winners have used one or more architectures that have performed particularly well in other popular image recognition competitions. ...
Chapter
Full-text available
Recent advances in deep learning and image sensors present the opportunity to unify these two complementary fields of research for a better resolution of the image classification problem. Deep learning provides image processing with the representational power necessary to improve the performance of image classification methods. Deep networks are considered the most powerful tool in visual recognition, notably in the automated analysis of histological slides using Whole Slide Image (WSI) scanning techniques. In this paper, we propose a comparative analysis for efficient localization of tumor regions in kidney histopathology WSIs by employing state-of-the-art convolutional deep learning networks such as ResNet and VGGNet, which can be fine-tuned to the target problem and exploit powerful transfer learning techniques at low computational cost. Experimental results on an independent set of 17,300 images derived from 12 WSIs show that ResNet50 and VGG19 combined with VGG16 achieve almost 97% and 96% accuracy, respectively, for classifying tumor and normal tissues to identify relevant regions of interest (ROIs).
... A recent ANHIR challenge [100] uses multiple data sets, where the ground truth is obtained by external markers on the histological series. Their landmarks were placed by multiple human annotators. ...
Article
Full-text available
Registration of histological serial sections is a challenging task. Serial sections exhibit distortions and damage from sectioning. Missing information on how the tissue looked before cutting makes a realistic validation of 2D registrations extremely difficult. This work proposes methods for ground-truth-based evaluation of registrations. Firstly, we present a methodology to generate test data for registrations. We distort an innately registered image stack in the manner similar to the cutting distortion of serial sections. Test cases are generated from existing 3D data sets, thus the ground truth is known. Secondly, our test case generation premises evaluation of the registrations with known ground truths. Our methodology for such an evaluation technique distinguishes this work from other approaches. Both under- and over-registration become evident in our evaluations. We also survey existing validation efforts. We present a full-series evaluation across six different registration methods applied to our distorted 3D data sets of animal lungs. Our distorted and ground truth data sets are made publicly available.
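The test-case generation idea, distorting an innately registered stack with sectioning-like deformations so that the ground truth remains known, can be illustrated with a smooth random displacement field applied independently to each slice. The sketch below is our own simplified distortion model with placeholder parameters, not the authors' generation pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_smooth_displacement(shape, amplitude=10.0, smoothing=30.0, seed=None):
    """Smooth random displacement field (2, H, W) mimicking sectioning distortion."""
    rng = np.random.default_rng(seed)
    field = rng.normal(size=(2, *shape))
    field = np.stack([gaussian_filter(f, smoothing) for f in field])
    field *= amplitude / (np.abs(field).max() + 1e-8)
    return field

def distort_slice(image, displacement):
    """Warp a 2-D slice with a known displacement (the ground truth is preserved)."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.stack([yy + displacement[0], xx + displacement[1]])
    return map_coordinates(image, coords, order=1, mode='reflect')

# Distort every slice of a registered stack independently, keeping the fields
# as ground-truth deformations for later evaluation of registration methods.
stack = np.random.rand(5, 256, 256)
fields = [random_smooth_displacement(stack.shape[1:], seed=i) for i in range(len(stack))]
distorted = np.stack([distort_slice(s, f) for s, f in zip(stack, fields)])
```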
... However, taking sequential images with multiple hybridization rounds for several target genes can sometimes be misaligned because of possible technical or experimental errors. Therefore, correcting these misalignments is an essential step; thus, tools for precise alignments of multiple images have been developed (14). Another imaging-based approach for spatially resolved transcriptomics is in situ sequencing (ISS). ...
Article
Single-cell RNA sequencing (scRNA-seq) has greatly advanced our understanding of cellular heterogeneity by profiling individual cell transcriptomes. However, cell dissociation from the tissue structure causes a loss of spatial information, which hinders the identification of intercellular communication networks and global transcriptional patterns present in the tissue architecture. To overcome this limitation, novel transcriptomic platforms that preserve spatial information have been actively developed. Significant achievements in imaging technologies have enabled in situ targeted transcriptomic profiling in single cells at single-molecule resolution. In addition, technologies based on mRNA capture followed by sequencing have made possible profiling of the genome-wide transcriptome at 55-100 μm resolution. Unfortunately, neither imaging-based technologies nor capture-based methods elucidate a complete picture of the spatial transcriptome in a tissue. Therefore, addressing specific biological questions requires balancing experimental throughput and spatial resolution, mandating efforts to develop computational algorithms that are pivotal to circumvent technology-specific limitations. In this review, we focus on the current state-of-the-art spatially resolved transcriptomic technologies, describe their applications in a variety of biological domains, and explore recent discoveries demonstrating their enormous potential in biomedical research. We further highlight novel integrative computational methodologies with other data modalities that provide a framework to derive biological insight into heterogeneous and complex tissue organization.
... 1. matches; 2. a homography can be estimated using random sample consensus (RANSAC) [11]; 3. the alignment score is above the cutoff of 71.4% (described in the next section) ...
Article
Full-text available
Background: Modern molecular pathology workflows in neuro-oncology heavily rely on the integration of morphologic and immunohistochemical patterns for analysis, classification, and prognostication. However, despite the recent emergence of digital pathology platforms and artificial intelligence-driven computational image analysis tools, automating the integration of histomorphologic information found across these multiple studies is challenged by the large file sizes of whole slide images (WSIs) and shifts/rotations in tissue sections introduced during slide preparation. Methods: To address this, we develop a workflow that couples different computer vision tools, including scale-invariant feature transform (SIFT) and deep learning, to efficiently align and integrate histopathological information found across multiple independent studies. We highlight the utility and automation potential of this workflow in the molecular subclassification and discovery of previously unappreciated spatial patterns in diffuse gliomas. Results: First, we show how a SIFT-driven computer vision workflow was effective at automated WSI alignment in a cohort of 107 randomly selected surgical neuropathology cases (97/107 (91%) showing appropriate matches, AUC = 0.96). This alignment allows our AI-driven diagnostic workflow to not only differentiate different brain tumor types, but also integrate and carry out molecular subclassification of diffuse gliomas using relevant immunohistochemical biomarkers (IDH1-R132H, ATRX). To highlight the discovery potential of this workflow, we also examined spatial distributions of tumors showing heterogenous expression of the proliferation marker MIB1 and Olig2. This analysis helped uncover an interesting and unappreciated association of Olig2 positive and proliferative areas in some gliomas (r = 0.62). Conclusion: This efficient neuropathologist-inspired workflow provides a generalizable approach to help automate a variety of advanced immunohistochemically compatible diagnostic and discovery exercises in surgical neuropathology and neuro-oncology.
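The alignment step described above, SIFT key-point matching followed by RANSAC homography estimation, can be sketched with OpenCV as below. Downsampled grayscale thumbnails are assumed as input, and the paper's alignment-score cutoff (71.4%) is not reproduced here.

```python
import cv2
import numpy as np

def sift_ransac_homography(ref_gray, mov_gray, ratio=0.75):
    """Match SIFT keypoints between two grayscale thumbnails and estimate a
    homography (moving -> reference) with RANSAC."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_mov, des_mov = sift.detectAndCompute(mov_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_mov, des_ref, k=2)   # query: moving, train: reference
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]  # Lowe's ratio test
    if len(good) < 4:
        return None, []

    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)  # moving
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)  # reference
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    # H can then be used with cv2.warpPerspective(moving, H, ref_size) to align the section.
    return H, inlier_mask
```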
... High-resolution optical microscopes have limited fields of view, so large samples are imaged using a series of individual fields which are then stitched together computationally to form a complete mosaic image using software such as ASHLAR [48,49]. A nonrigid (B-spline) method [32,42] is applied to register microscopy histology mosaics from different imaging processes [15], e.g., CyCIF and H&E. CyCIF mosaics can be up to 50,000 pixels in each dimension and contain as many as 60 channels, each depicting a different marker. ...
Article
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100GB images of 10⁹ or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
... High-resolution optical microscopes have limited fields of view, so large samples are imaged using a series of individual fields which are then stitched together computationally to form a complete mosaic image using software such as ASHLAR [48,49]. A nonrigid (B-spline) method [32,42] is applied to register microscopy histology mosaics from different imaging processes [15], e.g., CyCIF and H&E. CyCIF mosaics can be up to 50,000 pixels in each dimension and contain as many as 60 channels, each depicting a different marker. ...
Preprint
Full-text available
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100GB images of 10^9 or more pixels per channel, containing millions of cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing ROIs in an intuitive and cohesive manner. Building on a scope2screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
... Besides pure technical problems, life scientists face practical challenges if they want to use published methods. For instance, a grand challenge of non-linear image registration was performed in 2018 (Borovec et al., 2020). However, it proved very difficult to apply any of the more successful methods, either because they were closed source or because the necessary documentation was not readily available. ...
Article
Full-text available
Image analysis workflows for Histology increasingly require the correlation and combination of measurements across several whole slide images. Indeed, for multiplexing, as well as multimodal imaging, it is indispensable that the same sample is imaged multiple times, either through various systems for multimodal imaging, or using the same system but throughout rounds of sample manipulation (e.g. multiple staining sessions). In both cases slight deformations from one image to another are unavoidable, leading to an imperfect superimposition, and thus a loss of accuracy making it difficult to link measurements, in particular at the cellular level. Using pre-existing software components and developing missing ones, we propose a user-friendly workflow which facilitates the nonlinear registration of whole slide images in order to reach sub-cellular resolution level. The set of whole slide images to register and analyze is at first defined as a QuPath project. Fiji is then used to open the QuPath project and perform the registrations. Each registration is automated by using an elastix backend, or semi-automated by using BigWarp in order to interactively correct the results of the automated registration. These transformations can then be retrieved in QuPath to transfer any regions of interest from an image to the corresponding registered images. In addition, the transformations can be applied in QuPath to produce on-the-fly transformed images that can be displayed on top of the reference image. Thus, relevant data can be combined and analyzed throughout all registered slides, facilitating the analysis of correlative results for multiplexed and multimodal imaging.
... For example, this occurs when multiple hybridization rounds are sequentially imaged or when creating a 3D data set by using adjacent 2D slices. The process to correct for this misalignment is often referred to as image registration and can be performed using a variety of different transformation algorithms and strategies ( Fig. 2B; Borovec et al. 2020). As a demonstrating example, registration transformation is applied to a ST breast cancer data set (Andersson et al. 2020b), in which six sections were taken serially with a distance of 48 microns from each other. ...
Article
Spatial transcriptomics is a rapidly growing field that promises to comprehensively characterize tissue organization and architecture at the single-cell or subcellular resolution. Such information provides a solid foundation for mechanistic understanding of many biological processes in both health and disease that cannot be obtained by using traditional technologies. The development of computational methods plays important roles in extracting biological signals from raw data. Various approaches have been developed to overcome technology-specific limitations such as spatial resolution, gene coverage, sensitivity, and technical biases. Downstream analysis tools formulate spatial organization and cell–cell communications as quantifiable properties, and provide algorithms to derive such properties. Integrative pipelines further assemble multiple tools in one package, allowing biologists to conveniently analyze data from beginning to end. In this review, we summarize the state of the art of spatial transcriptomic data analysis methods and pipelines, and discuss how they operate on different technological platforms.
Article
In the recent decade, the medical image registration and fusion process has emerged as an effective application to follow up diseases and decide the necessary therapies based on the condition of the patient. For many considerable diagnostic analyses, it is common practice to assess two or more different histological slides or images from one tissue sample. A specific area analysis of two image modalities requires an overlay of the images to distinguish positions in the sample that are organized at a similar coordinate in both images. In particular cases, there are two common challenges in digital pathology: first, dissimilar appearances of images resulting from staining variances and artifacts; second, large image size. In this paper, we develop an algorithm to overcome the fact that scanners from different manufacturers have variations in the images. We propose a whole slide image registration algorithm in which adaptive smoothing is employed to smooth the stained image. A modified scale-invariant feature transform is applied to extract common information, and a joint distance helps to match keypoints correctly by eliminating position transformation error. Finally, the registered image is obtained by utilizing correct correspondences and the interpolation of color intensities. We validate our proposal using different images acquired from surgical resection samples of lung cancer (adenocarcinoma). Extensive feature matching with apparently increasing correct correspondences and registration performance on several images demonstrate the superiority of our method over state-of-the-art methods. Our method potentially improves the matching accuracy, which might be beneficial for computer-aided diagnosis in biobank applications.
Article
Full-text available
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, hence have a catalytic impact towards precision/personalized medicine.
Chapter
Inter-modality imaging problems are ubiquitous in medical imaging. The acquisition of multiple modalities – or different variations of the same modality – is frequent in many clinical applications and research problems. Inter-modality image analysis techniques have been developed to cope with the differences between modalities, but they do not always perform as well as their intra-modality counterparts. Image synthesis can be used to bridge this gap, by making one of the two modalities resemble the other, and using intra-modality methods. It is often the case that the error in the synthesis is outweighed by the improvement in performance given by switching from intra- to inter-modality analysis. In this chapter, we present a survey of methods based on this idea, with a focus on two of the most representative problems in medical image analysis, registration and segmentation.
Article
In kidney transplant biopsies, both inflammation and chronic changes are important features that predict long-term graft survival. Quantitative scoring of these features is important for transplant diagnostics and kidney research. However, visual scoring is poorly reproducible and labor-intensive. The goal of this study was to investigate the potential of convolutional neural networks (CNNs) to quantify inflammation and chronic features in kidney transplant biopsies. A structure segmentation CNN and a lymphocyte detection CNN were applied on 125 whole-slide image pairs of PAS- and CD3-stained slides. The CNN results were used to quantify healthy and sclerotic glomeruli, interstitial fibrosis, tubular atrophy, and inflammation both within non-atrophic and atrophic tubuli, and in areas of interstitial fibrosis. The computed tissue features showed high correlations with Banff lesion scores of five pathologists. Analyses on a small subset showed that a higher CD3⁺ cell density within scarred regions and a higher CD3⁺ cell count inside atrophic tubuli correlated moderately with the long-term change of estimated glomerular filtration rate. The presented CNNs are valid tools to yield objective quantitative information on glomeruli number, fibrotic tissue, and inflammation within scarred and non-scarred kidney parenchyma in a reproducible fashion. CNNs have the potential to improve kidney transplant diagnostics and will benefit the community as a novel method to generate surrogate endpoints for large-scale clinical studies.
Article
Coronavirus disease 2019 (COVID-19) was declared a global pandemic by the World Health Organization in March 2020. Effective testing is crucial to slow the spread of the pandemic. Artificial intelligence and machine learning techniques can help COVID-19 detection using various clinical symptom data. While the deep learning (DL) approach, which requires centralized data, is susceptible to a high risk of data privacy breaches, the federated learning (FL) approach, which rests on decentralized data, can preserve data privacy, a critical factor in the health domain. This paper reviews recent advances in applying DL and FL techniques for COVID-19 detection, with a focus on the latter. A model FL implementation use case in health systems, with COVID-19 detection using chest X-ray image data sets, is studied. We have also reviewed applications of previously published FL experiments for COVID-19 research to demonstrate the applicability of FL in tackling health research issues. Last, several challenges of FL implementation in the healthcare domain are discussed in terms of potential future work.
Article
Full-text available
Histopathologic assessment routinely provides rich microscopic information about tissue structure and disease process. However, the sections used are very thin, and essentially capture only 2D representations of a certain tissue sample. Accurate and robust alignment of sequentially cut 2D slices should contribute to more comprehensive assessment accounting for surrounding 3D information. Towards this end, we here propose a two-step diffeomorphic registration approach that aligns differently stained histology slides to each other, starting with an initial affine step followed by estimating a deformation field. It was quantitatively evaluated on ample (n = 481) and diverse data from the automatic non-rigid histological image registration challenge, where it was awarded the second rank. The obtained results demonstrate the ability of the proposed approach to robustly (average robustness = 0.9898) and accurately (average relative target registration error = 0.2%) align differently stained histology slices of various anatomical sites while maintaining reasonable computational efficiency (<1 min per registration). The method was developed by adapting a general-purpose registration algorithm designed for 3D radiographic scans and achieved consistently accurate results for aligning high-resolution 2D histologic images. Accurate alignment of histologic images can contribute to a better understanding of the spatial arrangement and growth patterns of cells, vessels, matrix, nerves, and immune cell interactions.
Article
With the advent and great advances of methods based on deep learning in image analysis, it appears that they can be effective in digital pathology to support the work of pathologists. However, a major limitation in the development of computer-aided diagnostic systems for pathology is the cost of data annotation. Evaluation of tissue (histopathological) and cellular (cytological) specimens seems to be a complex challenge. To simplify the laborious process of obtaining a sufficiently large set of data, a number of different systems could be used for image annotation. Some of these systems are reviewed in this paper with a comparison of their capabilities.
Article
We present EndoNuke, an open dataset consisting of tiles from endometrium immunohistochemistry slides with the nuclei annotated as keypoints. Several experts with varying levels of experience have annotated the dataset. Apart from gathering the data and creating the annotation, we have performed an agreement study and analyzed the distribution of nuclei staining intensity.
Chapter
Histology and histopathology provide the gold standard for diagnosticians to understand the underlying relationship between the structures on a tissue sample and the affliction of the subject from whom the tissue was taken. A fundamental step toward achieving this is staining the tissue with chemicals to accentuate features within it. Over the past one-and-a-half centuries many stains have been developed, with each stain providing its own unique set of features. Very often, obtaining multiple slides with diverse stains is a cumbersome process that is both expensive and prone to human error. The objective of this chapter is to treat the issue of obtaining the image of a stained tissue from a differently stained one as an image-to-image translation problem. The solution is based on a suitably chosen conditional generative adversarial network, which transforms an image from one feature space to another. This chapter discusses the choice of objective functions and image-quality metrics. In addition, it highlights some of the issues with suggestions to address them.
Article
Full-text available
Endoscopic submucosal dissection can remove large superficial gastrointestinal lesions en bloc. A detailed pathological evaluation of the resected specimen is required to assess the risk of recurrence after treatment. However, the current method of sectioning specimens to a thickness of a few millimeters does not provide information between the sections, which is lost during the preparation. In this study, we have produced three-dimensional images of the entire dissected lesion for nine samples by using a micro-CT imaging system. Although it was difficult to diagnose the histological type on micro-CT images, they allowed successful evaluation of the extent of the lesion and its surgical margins. Micro-CT images can depict sites that cannot be observed by the conventional pathological diagnostic process, suggesting that they may be useful in a complementary manner.
Article
Full-text available
Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. So far, however, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework, DeepLIIF, we present a single-step solution to stain deconvolution/separation, cell segmentation and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) staining of the same slides, we segment and translate low-cost and prevalent IHC slides to more informative, but also more expensive, mpIF images, while simultaneously providing the essential ground truth for the superimposed brightfield IHC channels. A new nuclear-envelope stain, LAP2beta, with high (>95%) cell coverage is also introduced to improve cell delineation/segmentation and protein expression quantification on IHC slides. We show that DeepLIIF trained on clean IHC Ki67 data can generalize to noisy images as well as other nuclear and non-nuclear markers.
Article
Full-text available
3D medical image registration is of great clinical importance. However, supervised learning methods require a large amount of accurately annotated corresponding control points (or morphing), which are very difficult to obtain. Unsupervised learning methods ease the burden of manual annotation by exploiting unlabeled data without supervision. In this paper, we propose a new unsupervised learning method using convolutional neural networks under an end-to-end framework, Volume Tweening Network (VTN), for 3D medical image registration. We propose three innovative technical components: (1) an end-to-end cascading scheme that resolves large displacement; (2) an efficient integration of an affine registration network; and (3) an additional invertibility loss that encourages backward consistency. Experiments demonstrate that our algorithm is 880x faster (or 3.3x faster without GPU acceleration) than traditional optimization-based methods and achieves state-of-the-art performance in medical image registration.
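The backward-consistency idea behind such an invertibility loss can be illustrated with a small 2D NumPy/SciPy sketch: for a forward displacement field u and a backward field v, consistency asks that u(x) + v(x + u(x)) be close to zero. This is only an illustration of the principle, not the VTN implementation, which works on 3D volumes inside the network.

    # 2D sketch of a backward-consistency ("invertibility") penalty. Illustrative only.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def invertibility_loss(u, v):
        """u, v: displacement fields of shape (2, H, W), in pixel units."""
        _, h, w = u.shape
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        # Locations x + u(x) at which the backward field is sampled.
        coords = np.stack([ys + u[0], xs + u[1]])
        v_at_warped = np.stack([
            map_coordinates(v[c], coords, order=1, mode="nearest") for c in range(2)
        ])
        residual = u + v_at_warped   # vanishes for perfectly inverse fields
        return float(np.mean(np.sum(residual ** 2, axis=0)))

    # Tiny self-check: a constant shift and its negation are exact inverses.
    u = np.zeros((2, 32, 32)); u[1] += 3.0   # shift 3 px along columns
    v = -u
    print(invertibility_loss(u, v))          # ~0.0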
Article
Full-text available
Our entry [Team Name: URAN], 'Neuromorphic Neural Network for Multimodal Brain Tumor Segmentation and Survival Analysis', demonstrates its performance on overall survival prediction without any prior training on the multi-modal MRI segmentations produced by medical doctors (though these are provided with the BraTS 2018 challenge data). Two segmentation categories are adopted instead of three, which is beyond the ordinary scope of the BraTS 2018 segmentation challenge. The test results of the survival prediction challenge (51% in the worst blind test, 71% in the nominal test) demonstrate the early feasibility of our neuromorphic neural network for helping medical doctors, considering that human clinical accuracy is 50%-70%.
Conference Paper
Full-text available
Image registration is a common task for many biomedical analysis applications. The present work focuses on the benchmarking of registration methods on differently stained histological slides. This is a challenging task due to the differences in the appearance model, the repetitive texture of the details and the large image size, among other issues. Our benchmarking data is composed of 616 image pairs at two different scales - average image diagonals of 2.4k and 5k pixels. We compare eleven fully automatic registration methods covering the widely used similarity measures and optimization strategies with both linear and elastic transformations. For each method, the best parameter configuration is found and subsequently applied to all the image pairs. The performance of the algorithms is evaluated from several perspectives - the registration (in)accuracy on manually annotated landmarks, the method robustness and its computation time.
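The landmark-based scores used in such benchmarks reduce to simple arithmetic; a minimal sketch follows, with the relative target registration error (TRE normalized by the image diagonal) and a robustness measure counting landmarks whose error decreased. The landmark coordinates below are made up for illustration.

    # Sketch of landmark-based evaluation: relative TRE and a robustness measure.
    import numpy as np

    def rtre(warped_pts, target_pts, image_shape):
        """Relative TRE for landmark arrays of shape (N, 2) and an image shape (H, W)."""
        diag = np.hypot(*image_shape)
        return np.linalg.norm(warped_pts - target_pts, axis=1) / diag

    def robustness(initial_pts, warped_pts, target_pts):
        """Fraction of landmarks that end up closer to their targets than they started."""
        before = np.linalg.norm(initial_pts - target_pts, axis=1)
        after = np.linalg.norm(warped_pts - target_pts, axis=1)
        return float(np.mean(after < before))

    # Example with made-up landmark coordinates:
    tgt = np.array([[100.0, 120.0], [400.0, 380.0]])
    ini = tgt + [[30.0, -25.0], [-40.0, 35.0]]
    war = tgt + [[2.0, -1.0], [3.0, 2.0]]
    print(rtre(war, tgt, (2400, 2400)).mean(), robustness(ini, war, tgt))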
Article
Full-text available
Motivation: Digital pathology enables new approaches that expand beyond storage, visualization or analysis of histological samples in digital format. One novel opportunity is 3D histology, where a three-dimensional reconstruction of the sample is formed computationally based on serial tissue sections. This allows examining tissue architecture in 3D, for example, for diagnostic purposes. Importantly, 3D histology enables joint mapping of cellular morphology with spatially resolved omics data in the true 3D context of the tissue at microscopic resolution. Several algorithms have been proposed for the reconstruction task, but a quantitative comparison of their accuracy is lacking. Results: We developed a benchmarking framework to evaluate the accuracy of several free and commercial 3D reconstruction methods using two whole slide image datasets. The results provide a solid basis for further development and application of 3D histology algorithms and indicate that methods capable of compensating for local tissue deformation are superior to simpler approaches. Availability: Code: https://github.com/BioimageInformaticsTampere/RegBenchmark. Whole slide image datasets: http://urn.fi/urn:nbn:fi:csc-kata20170705131652639702. Contact: pekka.ruusuvuori@tut.fi. Supplementary information: Supplementary data are available at Bioinformatics online.
Article
Full-text available
Histology permits the observation of otherwise invisible structures of the internal topography of a specimen. Although it enables the investigation of tissues at a cellular level, it is invasive and breaks topology due to cutting. Three-dimensional (3D) reconstruction was thus introduced to overcome the limitations of single-section studies in a dimensional scope. 3D reconstruction finds its roots in embryology, where it enabled the visualisation of spatial relationships of developing systems and organs, and extended to biomedicine, where the observation of individual, stained sections provided only partial understanding of normal and abnormal tissues. However, despite bringing visual awareness, recovering realistic reconstructions is elusive without prior knowledge about the tissue shape. 3D medical imaging made such structural ground truths available. In addition, combining non-invasive imaging with histology unveiled invaluable opportunities to relate macroscopic information to the underlying microscopic properties of tissues through the establishment of spatial correspondences; image registration is one technique that permits the automation of such a process and we describe reconstruction methods that rely on it. It is thereby possible to recover the original topology of histology and lost relationships, gain insight into what affects the signals used to construct medical images (and characterise them), or build high resolution anatomical atlases. This paper reviews almost three decades of methods for 3D histology reconstruction from serial sections, used in the study of many different types of tissue. We first summarise the process that produces digitised sections from a tissue specimen in order to understand the peculiarity of the data, the associated artefacts and some possible ways to minimise them. We then describe methods for 3D histology reconstruction with and without the help of 3D medical imaging, along with methods of validation and some applications. We finally attempt to identify the trends and challenges that the field is facing, many of which are derived from the cross-disciplinary nature of the problem as it involves the collaboration between physicists, histopathologists, computer scientists and physicians.
Article
Full-text available
Delineation of the cardiac right ventricle is essential in generating clinical measurements such as ejection fraction and stroke volume. Given manual segmentation on the first frame, one approach to segment right ventricle from all of the magnetic resonance images is to find point correspondence between the sequence of images. Finding the point correspondence with non-rigid transformation requires a deformable image registration algorithm which often involves computationally expensive optimization. The central processing unit (CPU) based implementation of point correspondence algorithm has been shown to be accurate in delineating organs from a sequence of images in recent studies. The purpose of this study is to develop computationally efficient approaches for deformable image registration. We propose a graphics processing unit (GPU) accelerated approach to improve the efficiency. The proposed approach consists of two parallelization components: Parallel Compute Unified Device Architecture (CUDA) version of the deformable registration algorithm; and the application of an image concatenation approach to further parallelize the algorithm. Three versions of the algorithm were implemented: 1) CPU; 2) GPU with only intra-image parallelization (Sequential image registration); and 3) GPU with inter and intra-image parallelization (Concatenated image registration). The proposed methods were evaluated over a data set of 16 subjects. CPU, GPU sequential image, and GPU concatenated image methods took an average of 113.13, 16.50 and 5.96 seconds to segment a sequence of 20 images, respectively. The proposed parallelization approach offered a computational performance improvement of around 19× in comparison to the CPU implementation while retaining the same level of segmentation accuracy. This study demonstrated that the GPU computing could be utilized for improving the computational performance of a non-rigid image registration algorithm without compromising the accuracy.
Article
Full-text available
In this paper, we present a fast method for registration of multiple large, digitised whole-slide images (WSIs) of serial histology sections. Through cross-slide WSI registration, it becomes possible to select and analyse a common visual field across images of several serial sections stained with different protein markers. It is, therefore, a critical first step for any downstream co-localised cross-slide analysis. The proposed registration method uses a two-stage approach, first estimating a fast initial alignment using the tissue sections’ external boundaries, followed by an efficient refinement process guided by key biological structures within the visual field. We show that this method is able to produce a high quality alignment in a variety of circumstances, and demonstrate that the refinement is able to quantitatively improve registration quality. In addition, we provide a case study that demonstrates how the proposed method for cross-slide WSI registration could be used as part of a specific co-expression analysis framework.
Conference Paper
Full-text available
We present an image processing pipeline which accepts a large number of images, containing spatial expression information for thousands of genes in Drosophila imaginal discs. We assume that the gene activations are binary and can be expressed as a union of a small set of non-overlapping spatial patterns, yielding a compact representation of the spatial activation of each gene. This lends itself well to further automatic analysis, with the hope of discovering new biological relationships. Traditionally, the images were labeled manually, which was very time consuming. The key part of our work is a binary pattern dictionary learning algorithm that takes a set of binary images and determines a set of patterns, which can be used to represent the input images with a small error. We also describe the preprocessing phase, where input images are segmented to recover the activation images and spatially aligned to a common reference. We compare binary pattern dictionary learning to existing alternative methods on synthetic data and also show results of the algorithm on real microscopy images of the Drosophila imaginal discs.
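The representation described above (each activation image as a union of non-overlapping binary patterns) can be illustrated with a toy encoding step; the dictionary learning itself is the paper's contribution and is not shown here. The dictionary, the 0.5 threshold and all names below are illustrative.

    # Toy sketch: encode a binary activation image as a union of known, non-overlapping
    # binary patterns and measure the per-pixel reconstruction error. Illustrative only.
    import numpy as np

    def encode(image, dictionary, threshold=0.5):
        """image: (H, W) bool; dictionary: (K, H, W) bool, non-overlapping patterns."""
        active = np.array([image[p].mean() > threshold for p in dictionary])
        if active.any():
            reconstruction = np.any(dictionary[active], axis=0)
        else:
            reconstruction = np.zeros_like(image)
        error = np.mean(reconstruction != image)   # Hamming error per pixel
        return active, reconstruction, error

    # Two disjoint square patterns; the "gene" activates the first one plus one noisy pixel.
    d = np.zeros((2, 20, 20), dtype=bool)
    d[0, 2:8, 2:8] = True
    d[1, 12:18, 12:18] = True
    img = d[0].copy(); img[0, 0] = True
    active, _, err = encode(img, d)
    print(active, err)   # [ True False ] and a small Hamming error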
Article
Full-text available
The form and exact function of the blood vessel network in some human organs, like spleen and bone marrow, are still open research questions in medicine. In this paper we propose a method to register the immunohistological stainings of serial sections of spleen and bone marrow specimens to enable the visualization and visual inspection of blood vessels. As these vary much in caliber, from mesoscopic (millimeter-range) to microscopic (a few micrometers, comparable to a single erythrocyte), we need to utilize a multi-resolution approach. Our method is fully automatic; it is based on feature detection and sparse matching. We utilize a rigid alignment and then a non-rigid deformation, iteratively dealing with increasingly smaller features. Our tool pipeline can already deal with series of complete scans at extremely high resolution, up to 620 megapixels. The improvement presented increases the range of represented details up to the smallest capillaries. This paper provides details on the multi-resolution non-rigid registration approach we use. Our application is novel in the way the alignment and subsequent deformations are computed (using features, i.e. “sparse”). The deformations are based on all images in the stack (“global”). We also present volume renderings and a 3D reconstruction of the vascular network in human spleen and bone marrow on a level not possible before. Our registration makes easy tracking of even the smallest blood vessels possible, thus granting experts a better comprehension. A quantitative evaluation of our method and related state-of-the-art approaches with seven different quality measures shows the efficiency of our method. We also provide z-profiles and enlarged volume renderings from three different registrations for visual inspection.
Article
Full-text available
Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of Drosophila brain neural tissues, serial confocal laser scanning microscopic images of a Drosophila brain, serial histopathological images of renal cortical tissues and a synthetic test case. The results show that the presented fully automatic method is promising for reassembling continuous volumes and minimizing artificial deformations for all data, and it outperforms four state-of-the-art 3D registration techniques, consistently producing solid 3D reconstructed anatomies with fewer discontinuities and deformations.
Article
Full-text available
Most medical image registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package.
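The least-trimmed-squares idea used to turn noisy block-matching correspondences into a global transform can be sketched compactly: refit the transform repeatedly on the best-fitting fraction of the matches. The sketch below uses a 2D affine model and synthetic points; it is not the NiftyReg implementation.

    # NumPy sketch of least-trimmed-squares (LTS) affine estimation from noisy matches.
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2D affine mapping src -> dst, point arrays of shape (N, 2)."""
        A = np.hstack([src, np.ones((len(src), 1))])
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2): linear part + translation
        return coeffs

    def lts_affine(src, dst, keep=0.5, iters=10):
        coeffs = fit_affine(src, dst)
        for _ in range(iters):
            pred = np.hstack([src, np.ones((len(src), 1))]) @ coeffs
            residuals = np.linalg.norm(pred - dst, axis=1)
            kept = np.argsort(residuals)[: max(3, int(keep * len(src)))]
            coeffs = fit_affine(src[kept], dst[kept])       # refit on the best-fitting half
        return coeffs

    # Synthetic check: a known affine with 30 % gross outliers should be roughly recovered.
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, (200, 2))
    true = np.array([[0.9, 0.1], [-0.1, 0.9], [5.0, -3.0]])
    dst = np.hstack([src, np.ones((200, 1))]) @ true
    dst[:60] += rng.uniform(-50, 50, (60, 2))               # corrupt 30 % of the matches
    print(np.round(lts_affine(src, dst), 2))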
Conference Paper
Full-text available
It is known that image registration is mostly driven by image edges. We have taken this idea to the extreme. In segmented images, we ignore the interior of the components and focus on their boundaries only. Furthermore, by assuming spatial compactness of the components, the similarity criterion can be approximated by sampling only a small number of points on the normals passing through a sparse set of keypoints. This leads to an order-of-magnitude speed advantage in comparison with classical registration algorithms. Surprisingly, despite the crude approximation, the accuracy is comparable. By virtue of the segmentation and by using a suitable similarity criterion such as mutual information on labels, the method can handle large appearance differences and large variability in the segmentations. The segmentation does not need to be perfectly coherent between images, and over-segmentation is acceptable. We demonstrate the performance of the method on a range of different datasets, including histological slices and Drosophila imaginal discs, using rigid transformations.
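A mutual information criterion on labels evaluated only at sparse sample points can be sketched as follows; the keypoint and normal sampling step is omitted here and the sample coordinates are taken as given, so this only illustrates the similarity criterion, not the authors' full method.

    # Sketch of label mutual information evaluated at a sparse set of sample points.
    import numpy as np

    def label_mutual_information(labels_a, labels_b):
        """MI (in nats) between two equally long 1D arrays of integer labels."""
        joint = np.zeros((labels_a.max() + 1, labels_b.max() + 1))
        np.add.at(joint, (labels_a, labels_b), 1)
        joint /= joint.sum()
        pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

    def sampled_mi(seg_a, seg_b, points):
        """Evaluate label MI at integer sample coordinates of shape (N, 2)."""
        rows, cols = points[:, 0], points[:, 1]
        return label_mutual_information(seg_a[rows, cols], seg_b[rows, cols])

    # Identical segmentations give maximal MI for the sampled points.
    seg = np.zeros((64, 64), dtype=int); seg[20:40, 20:40] = 1
    pts = np.random.default_rng(1).integers(0, 64, size=(50, 2))
    print(sampled_mi(seg, seg, pts))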
Article
Full-text available
Histology is the microscopic inspection of plant or animal tissue. It is a critical component in diagnostic medicine and a tool for studying the pathogenesis and biology of processes such as cancer and embryogenesis. Tissue processing for histology has become increasingly automated, drastically increasing the speed at which histology labs can produce tissue slides for viewing. Another trend is the digitization of these slides, allowing them to be viewed on a computer rather than through a microscope. Despite these changes, much of the routine analysis of tissue sections remains a painstaking, manual task that can only be completed by highly trained pathologists at a high cost per hour. There is, therefore, a niche for image analysis methods that can automate some aspects of this analysis. These methods could also automate tasks that are prohibitively time-consuming for humans, e.g., discovering new disease markers from hundreds of whole-slide images (WSIs) or precisely quantifying tissues within a tumor.
Article
Full-text available
Image registration of biological data is challenging as complex deformation problems are common. Deformation effects can arise in individual data preparation processes, involving morphological deformations, stain variations, stain artifacts, rotation, translation, and missing tissues. The combined deformation effects tend to make existing automatic registration methods perform poorly. In our experiments on serial histopathological images, six state-of-the-art image registration techniques, including TrakEM2, SURF + affine transformation, UnwarpJ, bUnwarpJ, CLAHE + bUnwarpJ and BrainAligner, achieve no greater than 70% average accuracy, while the proposed method achieves 91.49% average accuracy. The proposed method has also been demonstrated to be significantly better in alignment of laser scanning microscope brain images and serial ssTEM images than the benchmark automatic approaches (p < 0.001). The contribution of this study is to introduce a fully automatic, robust and fast image registration method for 2D image registration.
Article
Full-text available
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations.
Article
Full-text available
Registration of histopathology images of consecutive tissue sections stained with different histochemical or immunohistochemical stains is an important step in a number of application areas, such as the investigation of the pathology of a disease, validation of MRI sequences against tissue images, multi-scale physical modelling, etc. In each case, information from each stain needs to be spatially aligned and combined to ascertain physical or functional properties of the tissue. However, in addition to the gigabyte-size images and non-rigid distortions present in the tissue, a major challenge for registering differently stained histology image pairs is the dissimilar structural appearance due to different stains highlighting different substances in tissues. In this paper, we address this challenge by developing an unsupervised content classification method which generates multi-channel probability images from a roughly aligned image pair. Each channel corresponds to one automatically identified content class. The probability images enhance the structural similarity between image pairs. By integrating the classification method into a multi-resolution block matching based non-rigid registration scheme (our previous work [9]), we improve the performance of registering multi-stained histology images. Evaluation was conducted on 77 histological image pairs taken from three liver specimens and one intervertebral disc specimen. In total, six types of histochemical stains were tested. We evaluated our method against the same registration method implemented without applying the classification algorithm (intensity based registration) and the state-of-the-art mutual information based registration. Superior results are obtained with the proposed method.
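The general idea of turning an RGB histology tile into per-class probability channels by unsupervised classification can be illustrated with pixel-colour clustering; the soft-assignment recipe below (softmax of negative distances to cluster centres) is illustrative and not the authors' classification method.

    # Illustration: multi-channel class-probability images from unsupervised clustering.
    import numpy as np
    from sklearn.cluster import KMeans

    def probability_channels(rgb, n_classes=3, temperature=20.0):
        """rgb: (H, W, 3) floats in [0, 1]; returns (n_classes, H, W) probabilities."""
        h, w, _ = rgb.shape
        pixels = rgb.reshape(-1, 3)
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(pixels)
        d = np.linalg.norm(pixels[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
        logits = -temperature * d
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        return probs.T.reshape(n_classes, h, w)

    # Synthetic two-tone "slide": the channels separate the two colour populations.
    rng = np.random.default_rng(0)
    top = np.full((32, 64, 3), 0.8)
    bottom = np.full((32, 64, 3), 0.3)
    img = np.concatenate([top, bottom]) + rng.normal(0, 0.02, (64, 64, 3))
    print(probability_channels(img, n_classes=2).shape)   # (2, 64, 64)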
Article
Full-text available
The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work a set of histology images are registered to their correspondent optical blockface images to make a histology volume. Then multi-modality fiducial markers are used to validate the alignment of histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations this fiducial marker is visible in medical images, optical blockface images and it can also be localized in histology images. The properties of this fiducial marker make it suitable for validation of the registration techniques used for histology image alignment. This paper reports on the accuracy of a histology image registration approach by calculation of target registration error using these fiducial markers.
Article
Full-text available
Three dimensional (3D) tissue reconstructions from the histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and enhance the study of biomechanical behavior of the tissue structures towards better treatments (e.g. tissue-engineering applications). This paper evaluates three strategies for 3D reconstruction from sets of two dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same stain sections. The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.
Article
Full-text available
Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.
Article
Full-text available
For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.
Article
Full-text available
Anatomy of large biological specimens is often reconstructed from serially sectioned volumes imaged by high-resolution microscopy. We developed a method to reassemble a continuous volume from such large section series that explicitly minimizes artificial deformation by applying a global elastic constraint. We demonstrate our method on a series of transmission electron microscopy sections covering the entire 558-cell Caenorhabditis elegans embryo and a segment of the Drosophila melanogaster larval ventral nerve cord.
Article
Full-text available
This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. Registration geometrically aligns two images (the reference and sensed images). The reviewed approaches are classified according to their nature (area-based and feature-based) and according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of image registration and an outlook for future research are discussed too. The major goal of the paper is to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application areas.
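The four steps named in this review map directly onto a short feature-based pipeline; the OpenCV sketch below (ORB detection, Hamming brute-force matching, RANSAC homography, warpPerspective resampling) is one common instantiation, not the review's own code, and the file paths are placeholders.

    # Feature detection, matching, mapping function design, and resampling with OpenCV
    # (pip install opencv-python). Paths are placeholders.
    import cv2
    import numpy as np

    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    mov = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(4000)                       # 1) feature detection
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(mov, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]  # 2) matching

    src = np.float32([kp2[m.trainIdx].pt for m in matches])   # points in the sensed image
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # points in the reference image
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # 3) mapping function

    registered = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))  # 4) resampling
    cv2.imwrite("sensed_registered.png", registered)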
Article
Full-text available
Molecular classification of diseases based on multigene expression signatures is increasingly used for diagnosis, prognosis, and prediction of response to therapy. Immunohistochemistry (IHC) is an optimal method for validating expression signatures obtained using high-throughput genomics techniques since IHC allows a pathologist to examine gene expression at the protein level within the context of histologically interpretable tissue sections. Additionally, validated IHC assays may be readily implemented as clinical tests since IHC is performed on routinely processed clinical tissue samples. However, methods have not been available for automated n-gene expression profiling at the protein level using IHC data. We have developed methods to compute expression level maps (signature maps) of multiple genes from IHC data digitized on a commercial whole slide imaging system. Areas of cancer for these expression level maps are defined by a pathologist on adjacent, co-registered H&E slides, allowing assessment of IHC statistics and heterogeneity within the diseased tissue. This novel way of representing multiple IHC assays as signature maps will allow the development of n-gene expression profiling databases in three dimensions throughout virtual whole organ reconstructions.
Article
Full-text available
This paper presents a review of automated image registration methodologies that have been used in the medical field. The aim of this paper is to be an introduction to the field, provide knowledge on the work that has been developed and to be a suitable reference for those who are looking for registration methods for a specific application. The registration methodologies under review are classified into intensity or feature based. The main steps of these methodologies, the common geometric transformations, the similarity measures and accuracy assessment techniques are introduced and described.
Conference Paper
Full-text available
Here we present a new image registration algorithm for the alignment of histological sections that combines the ideas of B-spline based elastic registration and consistent image registration, to allow simultaneous registration of images in two directions (direct and inverse). In principle, deformations based on B-splines are not invertible. The consistency term overcomes this limitation and allows registration of two images in a completely symmetric way. This extension of the elastic registration method simplifies the search for the optimum deformation and allows registering with no information about landmarks or deformation regularization. This approach can also be used as the first step to solve the problem of group-wise registration.
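One common way to write such a bidirectional consistency energy for a forward transform T_12 and a backward transform T_21 is the following (notation ours, not necessarily the authors'); it is added, with a weight, to the image-similarity and regularization terms of both registration directions:

    E_{\mathrm{cons}}(T_{12}, T_{21}) =
        \int_{\Omega} \bigl\| T_{21}\bigl(T_{12}(\mathbf{x})\bigr) - \mathbf{x} \bigr\|^{2} \, d\mathbf{x}
      + \int_{\Omega} \bigl\| T_{12}\bigl(T_{21}(\mathbf{x})\bigr) - \mathbf{x} \bigr\|^{2} \, d\mathbf{x}

Penalizing this term drives the two B-spline deformations towards being inverses of each other, which is what makes the otherwise non-invertible parameterization behave symmetrically.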
Conference Paper
Full-text available
Non-rigid image registration (NIR) is an essential tool for morphologic comparisons in the presence of intra- and inter-individual anatomic variations. Many NIR methods have been developed, but are especially difficult to evaluate since point-wise inter-image correspondence is usually unknown, i.e., there is no "Gold Standard" to evaluate performance. The Non-rigid Image Registration Evaluation Project (NIREP) has been started to develop, establish, maintain, and endorse a standardized set of relevant benchmarks and metrics for performance evaluation of nonrigid image registration algorithms. This paper describes the basic framework of the project.
Article
Full-text available
The physical (microtomy), optical (microscopy), and radiologic (tomography) sectioning of biological objects and their digitization lead to stacks of images. Due to the sectioning process and disturbances, for example movement of objects during imaging, adjacent images of the image stack are not optimally aligned to each other. Such mismatches have to be corrected automatically by suitable registration methods. Here, a whole brain of a Sprague Dawley rat was serially sectioned and stained, followed by digitizing the 20 μm thin histologic sections. We describe how to prepare the images for subsequent automatic intensity based registration. Different registration schemes are presented and their results compared to each other from an anatomical and mathematical perspective. In the first part we concentrate on rigid and affine linear methods and deal only with linear mismatches of the images. Digitized images of stained histologic sections often exhibit inhomogeneities of the gray level distribution coming from staining and/or sectioning variations. Therefore, a method is developed that is robust with respect to inhomogeneities and artifacts. Furthermore, we combined this approach with minimizing a suitable distance measure for shear and rotation mismatches of foreground objects after applying the principal axes transform. As a consequence of our investigations, we must emphasize that a robust principal axes based registration combined with optimization of translation, rotation and shearing errors gives rise to the best reconstruction results from the mathematical and anatomical viewpoints. Because the sectioning process also introduces nonlinear deformations to the relatively thin histologic sections, an elastic registration has to be applied to correct these deformations. In the second part of the study a detailed description of the advances of an elastic registration after affine linear registration of the rat brain is given. We found quantitative evidence that affine linear registration is a suitable starting point for the alignment of histologic sections but elastic registration must be performed to significantly improve the registration result. A strategy is presented that enables elastic registration of the affine linearly preregistered rat brain sections and the first one hundred images of serial histologic sections through both occipital lobes of a human brain (6112 images). Additionally, we describe how a parallel implementation of the elastic registration was realized. Finally, the computed force fields have been applied here for the first time to the morphometrized data of cells determined automatically by an image analytic framework.
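The principal-axes-based initialization mentioned above can be sketched in a few lines: centroids and covariance eigenvectors of the foreground masks give a translation and rotation estimate that can seed the subsequent affine and elastic optimization. The sketch below is illustrative and ignores the eigenvector sign ambiguity that the paper resolves by additionally optimizing rotation, translation and shear.

    # Sketch of a principal-axes-based initial alignment between two foreground masks.
    import numpy as np

    def principal_axes(mask):
        """Centroid and orthonormal principal axes of a boolean foreground mask."""
        pts = np.argwhere(mask).astype(float)
        centroid = pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
        return centroid, eigvecs[:, ::-1]   # columns ordered by decreasing variance

    def principal_axes_transform(mask_moving, mask_fixed):
        """Rigid (rotation + translation) estimate aligning moving onto fixed.
        Note: eigenvector signs are ambiguous, so the result may be off by 180 degrees
        or a reflection; a subsequent optimization step is expected to fix this."""
        c_m, a_m = principal_axes(mask_moving)
        c_f, a_f = principal_axes(mask_fixed)
        rotation = a_f @ a_m.T              # maps moving axes onto fixed axes
        translation = c_f - rotation @ c_m
        return rotation, translation

    # Self-check: a horizontal bar and its transposed copy; R should swap the axes.
    bar = np.zeros((64, 64), dtype=bool); bar[30:34, 10:54] = True
    R, t = principal_axes_transform(bar.T, bar)
    print(np.round(R, 2), np.round(t, 1))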
Article
Full-text available
EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intrapatient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.
Article
In this paper, we present an easy-to-follow procedure for the analysis of tissue sections from 3D cell cultures (spheroids) by matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) and laser scanning confocal microscopy (LSCM). MALDI MSI was chosen to detect the distribution of the drug of interest, while fluorescence immunohistochemistry (IHC) followed by LSCM was used to localize the cells featuring specific markers of viability, proliferation, apoptosis and metastasis. The overlay of the mass spectrometry (MS) and IHC spheroid images, typically without any morphological features, required fiducial-based coregistration. The MALDI MSI protocol was optimized in terms of fiducial composition and antigen epitope preservation to allow MALDI MSI to be performed and directly followed by IHC analysis on exactly the same spheroid section. Once MS and IHC images were coregistered, the quantification of the MS and IHC signals was performed by an algorithm evaluating signal intensities along equidistant layers from the spheroid boundary to its center. This accurate colocalization of MS and IHC signals showed limited penetration of the clinically tested drug perifosine into spheroids during a 24 h period, revealing the fraction of proliferating and promigratory/proinvasive cells present in the perifosine-free areas, decrease of their abundance in the perifosine-positive regions and distinguishing between apoptosis resulting from hypoxia/nutrient deprivation and drug exposure.
Article
The analysis of the pure motion of subnuclear structures without influence of the cell nucleus motion and deformation is essential in live cell imaging. In this work, we propose a 2D contour-based image registration approach for compensation of nucleus motion and deformation in fluorescence microscopy time-lapse sequences. The proposed approach extends our previous approach which uses a static elasticity model to register cell images. Compared to that scheme, the new approach employs a dynamic elasticity model for forward simulation of nucleus motion and deformation based on the motion of its contours. The contour matching process is embedded as a constraint into the system of equations describing the elastic behavior of the nucleus. This results in better performance in terms of the registration accuracy. Our approach was successfully applied to real live cell microscopy image sequences of different types of cells including image data that was specifically designed and acquired for evaluation of cell image registration methods. An experimental comparison with existing contour-based registration methods and an intensity-based registration method has been performed. We also studied the dependence of the results on the choice of method parameters.
Article
Cardiac magnetic resonance perfusion examinations enable non-invasive quantification of myocardial blood flow. However, motion between frames due to breathing must be corrected for quantitative analysis. Although several methods have been proposed, there is a lack of widely available benchmarks to compare different algorithms. We sought to compare many algorithms from several groups in an open benchmark challenge. Nine clinical studies from two different centers comprising normal and diseased myocardium at both rest and stress were made available for this study. The primary validation measure was regional myocardial blood flow based on the transfer coefficient (Ktrans), which was computed using a compartment model and the myocardial perfusion reserve (MPR) index. The ground truth was calculated using contours drawn manually on all frames by a single observer, and visually inspected by a second observer. Six groups participated and 19 different motion correction algorithms were compared. Each method used one of three different motion models: rigid, global affine or local deformation. The similarity metric also varied with methods employing either sum-of-squared differences, mutual information or cross-correlation. There were no significant differences in Ktrans or MPR compared across different motion models or similarity metrics. Compared with the ground truth, only Ktrans for the sum of squared differences metric, and for local deformation motion models, had significant bias. In conclusion, the open benchmark enabled evaluation of clinical perfusion indices over a wide range of methods. In particular, there was no benefit of non-rigid registration techniques over the other methods evaluated in this study. The benchmark data and results are available from the Cardiac Atlas Project (www.cardiacatlas.org).
Article
In Digital Pathology, one of the most simple and yet most useful features is the ability to view serial sections of tissue simultaneously on a computer monitor. This enables the pathologist to evaluate the histology and expression of multiple markers for a patient in a single review. However, the rate-limiting step in this process is the time taken for the pathologist to open each individual image, align the sections within the viewer (with a maximum of four slides at a time), and then manually move around the section. In addition, due to tissue processing and pre-analytical steps, sections with different stains have non-linear variations between the two acquisitions, that is, they will stretch and change shape from section to section. To date, no approach has come close to a workable solution for automatically aligning the serial sections into one composite image. This research work addresses this problem by providing an automated serial section alignment tool that enables pathologists to simply scroll through the various sections in a single viewer. To this aim, a multiresolution intensity-based registration method using mutual information as a similarity metric, an optimizer based on an evolutionary process and a bilinear transformation has been used. To characterize the performance of the algorithm, 40 cases x 5 different serial sections stained with hematoxylin-eosin (HE), estrogen receptor (ER), progesterone receptor (PR), Ki67 and human epidermal growth factor receptor 2 (Her2) have been considered. The qualitative results obtained are promising, with an average computation time of 26.4 s for images of up to 14660x5799 pixels running interpreted code.
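The same ingredients (mutual information metric, an evolutionary optimizer, a multiresolution pyramid) can be assembled with SimpleITK as sketched below; this is not the paper's implementation, an affine transform stands in for the bilinear transformation mentioned in the abstract, and the grayscale file paths are placeholders.

    # SimpleITK sketch: Mattes mutual information + one-plus-one evolutionary optimizer
    # on a multiresolution pyramid (pip install SimpleITK). Paths are placeholders.
    import SimpleITK as sitk

    fixed = sitk.ReadImage("HE_section_gray.png", sitk.sitkFloat32)
    moving = sitk.ReadImage("ER_section_gray.png", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetOptimizerAsOnePlusOneEvolutionary(300)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetShrinkFactorsPerLevel([8, 4, 2, 1])      # multiresolution pyramid
    reg.SetSmoothingSigmasPerLevel([4, 2, 1, 0])
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(2),
                                          sitk.CenteredTransformInitializerFilter.GEOMETRY))

    transform = reg.Execute(fixed, moving)
    registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    sitk.WriteImage(sitk.Cast(registered, sitk.sitkUInt8), "ER_registered_to_HE.png")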
Article
The compensation of cell motion is an important step in single-particle tracking analysis of live cells. This step is required in most cases, since the movement of subcellular foci is superimposed on the movement and deformation of the cell, while only the local motion of the foci is important to analyse. The cell motion and deformation compensation is usually performed by means of image registration. There are a number of approaches with different models and properties presented in the literature that perform cell image registration. However, the evaluation of the registration approach quality on real data is a tricky problem due to the fact that stable features in the images with a priori no local motion are needed. In this paper we propose a methodology for creating live cell nuclei image sequences with stable features imposed. The features are defined using the regions of fluorescence bleaching induced by the UV laser. Data with different deformations are acquired and can be used for evaluation of cell image registration methods. Along with that, we describe an image analysis technique and a metric that can characterize the quality of the method quantitatively. The proposed methodology allows building a ground truth dataset for testing and thoroughly evaluating cell image registration methods.
Conference Paper
The reconstruction of a 3D volume from a stack of 2D histology slices is still a challenging problem especially if no external references are available. Without a reference, standard registration approaches tend to align structures that should not be perfectly aligned. In this work we introduce a deformable, reference-free reconstruction method that uses an internal structural probability map (SPM) to regularize a free-form deformation. The SPM gives an estimate of the original 3D structure of the sample from the misaligned and possibly corrupted 2D slices. We present a consecutive as well as a simultaneous reconstruction approach that incorporates this estimate in a deformable registration framework. Experiments on synthetic and mouse brain datasets indicate that our method produces similar results compared to reference-based techniques on synthetic datasets. Moreover, it improves the smoothness of the reconstruction compared to standard registration techniques on real data.
Article
Cardiac histo-anatomical structure is a key determinant in all aspects of cardiac function. While some characteristics of micro- and macrostructure can be quantified using non-invasive imaging methods, histology is still the modality that provides the best combination of resolution and identification of cellular/sub-cellular substrate identities. The main limitation of histology is that it does not provide inherently consistent three-dimensional (3D) volume representations. This paper presents methods developed within our group to reconstruct 3D histological datasets. It includes the use of high-resolution MRI and block-face images to provide supporting volumetric datasets to guide spatial reintegration of 2D histological section data, and presents recent developments in sample preparation, data acquisition, and image processing.
Article
Purpose: The primary objective of this study is to perform a blinded evaluation of a group of retrospective image registration techniques using as a gold standard a prospective, marker-based registration method. To ensure blindedness, all retrospective registrations were performed by participants who had no knowledge of the gold standard results until after their results had been submitted. A secondary goal of the project is to evaluate the importance of correcting geometrical distortion in MR images by comparing the retrospective registration error in the rectified images, i.e., those that have had the distortion correction applied, with that of the same images before rectification. Method: Image volumes of three modalities (CT, MR, and PET) were obtained from patients undergoing neurosurgery at Vanderbilt University Medical Center on whom bone-implanted fiducial markers were mounted. These volumes had all traces of the markers removed and were provided via the Internet to project collaborators outside Vanderbilt, who then performed retrospective registrations on the volumes, calculating transformations from CT to MR and/or from PET to MR. These investigators communicated their transformations again via the Internet to Vanderbilt, where the accuracy of each registration was evaluated. In this evaluation, the accuracy is measured at multiple volumes of interest (VOIs), i.e., areas in the brain that would commonly be areas of neurological interest. A VOI is defined in the MR image and its centroid c is determined. Then, the prospective registration is used to obtain the corresponding point c′ in CT or PET. To this point, the retrospective registration is then applied, producing c″ in MR. Statistics are gathered on the target registration error (TRE), which is the distance between the original point c and its corresponding point c″. Results: This article presents statistics on the TRE calculated for each registration technique in this study and provides a brief description of each technique and an estimate of both preparation and execution time needed to perform the registration. Conclusion: Our results indicate that retrospective techniques have the potential to produce satisfactory results much of the time, but that visual inspection is necessary to guard against large errors.
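The TRE bookkeeping described above is straightforward to sketch: map the MR VOI centroid c into CT with the gold-standard (fiducial-based) transform to obtain c', map c' back to MR with the retrospective transform under evaluation to obtain c'', and measure the distance between c and c''. The 4x4 homogeneous matrices below are illustrative stand-ins, not the study's actual transforms.

    # Sketch of target registration error (TRE) against a fiducial-based gold standard.
    import numpy as np

    def apply(T, p):
        """Apply a 4x4 homogeneous transform to a 3D point."""
        return (T @ np.append(p, 1.0))[:3]

    def target_registration_error(c_mr, T_gold_mr_to_ct, T_retro_ct_to_mr):
        c_ct = apply(T_gold_mr_to_ct, c_mr)            # c' in CT (gold standard)
        c_back = apply(T_retro_ct_to_mr, c_ct)         # c'' back in MR (retrospective)
        return float(np.linalg.norm(c_back - c_mr))    # TRE, e.g. in mm

    # Illustrative transforms: a pure translation and an imperfect inverse of it.
    T_gold = np.eye(4); T_gold[:3, 3] = [12.0, -4.0, 7.5]
    T_retro = np.eye(4); T_retro[:3, 3] = [-11.6, 4.3, -7.1]
    print(target_registration_error(np.array([10.0, 20.0, 30.0]), T_gold, T_retro))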
Article
This book really shows how registration works: the flip-book appearing at the top right corners shows a registration of a human knee from bent to straight position (keeping bones rigid). Of course, the book also provides insight into concepts and practical tools. The presented framework exploits techniques from various fields such as image processing, numerical linear algebra, and optimization. Therefore, a brief overview of some preliminary literature in those fields is presented in the introduction (references [1–51]), and registration-specific literature is assembled at the end of the book (references [52–212]). Examples and results are based on the FAIR software, a package written in MATLAB. The FAIR software, as well as a PDF version of this entire book, can be freely downloaded from www.siam.org/books/fa06. This book would not have been possible without the help of Bernd Fischer, Eldad Haber, Claudia Kremling, Jim Nagy, and the Safir Research Group from Lübeck: Sven Barendt, Björn Beuthin, Konstantin Ens, Stefan Heldmann, Sven Kabus, Janine Olesch, Nils Papenberg, Hanno Schumacher, and Stefan Wirtz. I'm also indebted to Sahar Alipour, Reza Heydarian, Raya Horesh, Ramin Mafi, and Bahram Marami Dizaji for improving the manuscript.
Conference Paper
The reconstruction of histology sections into a 3-D volume receives increased attention due to its various applications in modern medical image analysis. To guarantee a geometrically coherent reconstruction, we propose a new way to register histological sections simultaneously to previously acquired reference images and to neighboring slices in the stack. To this end, we formulate two potential functions and associate them to the same Markov random field through which we can efficiently find an optimal solution. Due to our simultaneous formulation and the absence of any segmentation step during the reconstruction we can dramatically reduce error propagation effects. This is illustrated by experiments on carefully created synthetic as well as real data sets.