Article

Superadditivity and Convex Optimization for Globally Optimal Cell Segmentation Using Deformable Shape Models


Abstract

Cell nuclei segmentation is challenging due to shape variation and closely clustered or partially overlapping objects. Most previous methods are not globally optimal, are limited to elliptical models, or are computationally expensive. In this work, we introduce a globally optimal approach based on deformable shape models and global energy minimization for cell nuclei segmentation and cluster splitting. We propose an implicit parameterization of deformable shape models and show that it leads to a convex energy. Convex energy minimization yields the global solution independently of the initialization, and is fast and robust. To jointly perform cell nuclei segmentation and cluster splitting, we developed a novel iterative global energy minimization method, which leverages the inherent superadditivity of the convex energy. This property provides a lower bound on the energy of the union of the models and improves the computational efficiency. Our method provably determines a solution close to global optimality. In addition, we derive a closed-form solution of the proposed global minimization based on the superadditivity property for non-clustered cell nuclei. We evaluated our method using fluorescence microscopy images of five different cell types comprising various challenges, and performed a quantitative comparison with previous methods. Our method achieved state-of-the-art or improved performance.


... In contrast, implicit parameterizations often enable using convex optimization (e.g., [15,16]). In [15], elliptical models were considered. [16] introduced a globally optimal method for more general shapes based on DSMs, where cluster splitting was performed by pruning a graph representation of image regions. However, the computational efficiency is limited by a rather conservative pruning algorithm. ...
... which means that the set energy function is superadditive for disjoint image regions and β = 0 (i.e. a DSM cannot fit better to an image region than it fits to any sub-region, see [16] for a proof). One approach for graph pruning is to use Eq. ...
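The superadditivity property described in this snippet can be illustrated with a toy energy. A minimal sketch, assuming a stand-in energy (sum of squared residuals around the best-fitting constant value, in place of an actual DSM fit) and synthetic pixel regions:

```python
import numpy as np

def energy(pixels):
    """Best-fit cost of a single (toy) model on a region: SSE around the best
    constant value. Stands in for a DSM's best-fit energy (assumption)."""
    return float(np.sum((pixels - pixels.mean()) ** 2))

rng = np.random.default_rng(0)
region_a = rng.normal(0.2, 0.05, size=100)   # pixels of one region
region_b = rng.normal(0.8, 0.05, size=120)   # pixels of a disjoint region
union = np.concatenate([region_a, region_b])

# Superadditivity for disjoint regions: E(A ∪ B) >= E(A) + E(B).
# One model cannot fit the union better than separate models fit the parts,
# so E(A) + E(B) is a lower bound usable for pruning candidate merges.
assert energy(union) >= energy(region_a) + energy(region_b)
```

Because the union's single best model can never beat the regions' individually best models, E(A) + E(B) is a valid lower bound on E(A ∪ B), which is what the graph-pruning step exploits.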
... Early studies [11][12][13][14][15][16] addressed this task by exploiting intensity information, alone or together with spatial information among pixels in the clump, but they tend to fail in overlapping cases where the intensity deficiency issue arises. Recently, shape prior-based models [17][18][19][20][21][22] aim to compensate for intensity deficiency by introducing prior shape information (shape priors) about the cytoplasm into shape-aware segmentation frameworks, such as the level set model, for segmenting overlapping cytoplasms. However, they are restricted to modeling shape priors by limited shape hypotheses about the cytoplasm, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting shape of the cytoplasm. ...
... In order to alleviate the intensity deficiency issue, shape prior-based methods [17][18][19][20][21][22] have shown better segmentation performance by introducing prior shape information (shape priors) to guide the segmentation. They model shape priors by either a simple shape assumption (e.g. ...
... Although improving the segmentation performance over traditional works, these existing methods [17][18][19][20][21][22] suffer from three main shortcomings. First, since they model shape priors by limited shape hypotheses, e.g., a shape assumption or one of the pre-collected shape examples, the modeled shape priors are usually insufficient to pinpoint the occluded boundary parts of the cytoplasm. ...
Article
Segmenting overlapping cytoplasms in cervical smear images is a clinically essential task for quantitatively measuring cell-level features to screen cervical cancer. This task, however, remains rather challenging, mainly due to the deficiency of intensity (or color) information in the overlapping region. Although shape prior-based models that compensate intensity deficiency by introducing prior shape information about cytoplasm are firmly established, they often yield visually implausible results, as they model shape priors only by limited shape hypotheses about cytoplasm, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting shape of the cytoplasm. In this paper, we present an effective shape prior-based approach, called constrained multi-shape evolution, that segments all overlapping cytoplasms in the clump simultaneously by jointly evolving each cytoplasm's shape guided by the modeled shape priors. We model local shape priors (cytoplasm-level) by an infinitely large shape hypothesis set which contains all possible shapes of the cytoplasm. In the shape evolution, we compensate intensity deficiency for the segmentation by introducing not only the modeled local shape priors but also global shape priors (clump-level) modeled by considering mutual shape constraints of cytoplasms in the clump. We also constrain the resulting shape in each evolution to be in the built shape hypothesis set for further reducing implausible segmentation results. We evaluated the proposed method in two typical cervical smear datasets, and the extensive experimental results confirm its effectiveness.
... These methods vary in two axes: spatial resolution and gene throughput. On one hand, technologies such as Multiplexed Error-Robust Fluorescence In-Situ Hybridization (MERFISH) and In-Situ Sequencing (ISS), achieve cellular or even subcellular resolution [10] through cell segmentation [11], [12], but are limited to measuring up to a couple of hundred preselected genes. On the other hand, spatially resolved RNA sequencing, such as Spatial Transcriptomics [13], commercially available as 10x's Visium, and Slide-seq [14], enable high-throughput gene profiling by capturing mRNAs in-situ at the cost of spots with the size of tens of cells. ...
... In the case of high-resolution spatial data, given that each spot corresponds to an individual cell (i.e., n_i = 1), it is desirable to produce sparse allocations, in the sense that we prefer Y_{c,i} close to 0 or 1. In general, assuming that Y_{c,i} ∈ {0, n_i}, then (11) implies that Y_{c,i} = n_i for exactly one cell type c and is zero for all other cell types. Consequently, for binary Y we obtain ...
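The binary-allocation argument in this snippet can be checked numerically. A minimal sketch with a hypothetical allocation matrix Y (cell types × spots; the sizes and values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical allocation matrix Y (cell types x spots); entry Y[c, i] gives
# how many of the n_i cells at spot i are assigned to type c.
n = np.array([1, 1, 1])            # n_i = 1: each spot is a single cell
Y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# With Y[c, i] in {0, n_i}, each column allocates all n_i cells to exactly
# one type: columns sum to n_i and have a single non-zero entry.
assert np.allclose(Y.sum(axis=0), n)
assert all(np.count_nonzero(Y[:, i]) == 1 for i in range(Y.shape[1]))
```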
Preprint
Full-text available
Single-cell RNA sequencing (scRNA-seq) and spatially resolved imaging/sequencing technologies have revolutionized biomedical research. On one hand, scRNA-seq data provides, for individual cells, information about a large portion of the transcriptome, but does not include the spatial context of the cells. On the other hand, spatially resolved measurements come with a trade-off between resolution, throughput and gene coverage. Combining data from these two modalities can provide a spatially resolved picture with enhanced resolution and gene coverage. Several methods have been recently developed to integrate these modalities, but they use only the expression of genes available in both modalities and do not incorporate other relevant and available features, especially the spatial context. We propose DOT, a novel optimization framework for assigning cell types to tissue locations. Our model (i) incorporates ideas from Optimal Transport theory to leverage not only joint but also distinct features, such as the spatial context, (ii) introduces scale-invariant distance functions to account for differences in the sensitivity of different measurement technologies, and (iii) provides control over the abundance of cells of different types in the tissue. We present a fast implementation based on the Frank-Wolfe algorithm and demonstrate the effectiveness of DOT in correctly assigning cell types or estimating the expression of missing genes in spatial data coming from two areas of the brain, the developing heart, and breast cancer samples.
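The Frank-Wolfe algorithm mentioned for DOT's implementation keeps iterates feasible by moving toward a vertex returned by a linear minimization oracle. A minimal sketch on a toy simplex-constrained quadratic (not the DOT objective; all quantities are illustrative):

```python
import numpy as np

# Minimal Frank-Wolfe sketch: minimize f(x) = ||x - b||^2 over the
# probability simplex. The linear minimization oracle over the simplex
# just picks the vertex with the smallest gradient coordinate.
b = np.array([0.1, 0.7, 0.2])           # target point, itself in the simplex
x = np.ones(3) / 3                      # feasible start
for k in range(200):
    grad = 2 * (x - b)                  # gradient of f
    s = np.zeros(3)
    s[np.argmin(grad)] = 1.0            # simplex vertex minimizing <grad, s>
    gamma = 2 / (k + 2)                 # standard step size
    x = (1 - gamma) * x + gamma * s     # convex combination stays feasible

# After 200 iterations the O(1/k) guarantee puts x close to the optimum b.
assert float(np.sum((x - b) ** 2)) < 0.05
```

Each iterate remains a convex combination of simplex vertices, so no projection step is ever needed; this is the main appeal of Frank-Wolfe for structured feasible sets.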
Article
Nuclei segmentation is a fundamental prerequisite in the digital pathology workflow. The development of automated methods for nuclei segmentation enables quantitative analysis of the wide existence and large variances in nuclei morphometry in histopathology images. However, manual annotation of tens of thousands of nuclei is tedious and time-consuming, which requires significant amount of human effort and domain-specific expertise. To alleviate this problem, in this paper, we propose a weakly-supervised nuclei segmentation method that only requires partial point labels of nuclei. Specifically, we propose a novel boundary mining framework for nuclei segmentation, named BoNuS, which simultaneously learns nuclei interior and boundary information from the point labels. To achieve this goal, we propose a novel boundary mining loss, which guides the model to learn the boundary information by exploring the pairwise pixel affinity in a multiple-instance learning manner. Then, we consider a more challenging problem, i.e., partial point label, where we propose a nuclei detection module with curriculum learning to detect the missing nuclei with prior morphological knowledge. The proposed method is validated on three public datasets, MoNuSeg, CPM, and CoNIC datasets. Experimental results demonstrate the superior performance of our method to the state-of-the-art weakly-supervised nuclei segmentation methods. Code: https://github.com/hust-linyi/bonus.
Article
Least squares regression (LSR) and its extended methods are widely used for image classification. However, these LSR-based methods do not consider the importance of global information and ignore the connection between feature learning and regression representations. To address these problems, we propose a novel method called orthogonal autoencoder regression (OAR) that considers global information by combining the feature learning part with the regression representation part. In addition, to enhance the model’s feature learning ability, we introduce an orthogonal autoencoder model to learn more effective data. To promote the model’s regression representation ability, we also add weight constraints to the model and make the OAR more discriminative. An iterative algorithm with the alternating direction method of multipliers (ADMM) is proposed to solve the model. The experimental results from several databases demonstrate the effectiveness of the OAR.
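ADMM, which the OAR abstract names as its solver, alternates closed-form subproblem updates with a dual ascent step. A generic sketch on a toy lasso problem (the actual OAR updates differ; all names and data here are illustrative):

```python
import numpy as np

# Generic ADMM sketch: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 5))
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true                           # noiseless observations
lam, rho = 0.1, 1.0                      # l1 weight, ADMM penalty

x = z = u = np.zeros(5)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    # x-update: ridge-type linear solve (closed form)
    x = np.linalg.solve(AtA + rho * np.eye(5), Atb + rho * (z - u))
    # z-update: soft-thresholding (prox of the l1 norm)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # dual ascent on the scaled multiplier
    u = u + x - z

assert np.allclose(z, x_true, atol=0.1)  # sparse solution near the truth
```

The appeal of ADMM here is the same as in the OAR paper: each subproblem has a cheap closed-form update even though the joint objective is not smooth.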
Preprint
Full-text available
Image blur and image noise are imaging artifacts intrinsically arising in image acquisition. In this paper, we consider multi-frame blind deconvolution (MFBD), where image blur is described by the convolution of an unobservable, undeteriorated image and an unknown filter, and the objective is to recover the undeteriorated image from a sequence of its blurry and noisy observations. We present two new methods for MFBD, which, in contrast to previous work, do not require the estimation of the unknown filters. The first method is based on likelihood maximization and requires careful initialization to cope with the non-convexity of the loss function. The second method circumvents this requirement and exploits that the solution of likelihood maximization emerges as an eigenvector of a specifically constructed matrix, if the signal subspace spanned by the observations has a sufficiently large dimension. We describe a pre-processing step, which increases the dimension of the signal subspace by artificially generating additional observations. We also propose an extension of the eigenvector method, which copes with insufficient dimensions of the signal subspace by estimating a footprint of the unknown filters (that is a vector of the size of the filters, only one is required for the whole image sequence). We have applied the eigenvector method to synthetically generated image sequences and performed a quantitative comparison with a previous method, obtaining strongly improved results.
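The eigenvector method described above reads off the solution as a dominant eigenvector of a specifically constructed matrix; on a toy symmetric matrix with a known spectrum, plain power iteration recovers it. A minimal sketch (the matrix construction from the paper is not reproduced here):

```python
import numpy as np

# Toy symmetric matrix with known eigenvectors (columns of Q) and spectrum.
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.normal(size=(4, 4)))[0]   # random orthonormal basis
M = Q @ np.diag([5.0, 1.0, 0.5, 0.1]) @ Q.T    # dominant eigenvalue 5.0

v = rng.normal(size=4)
for _ in range(100):
    v = M @ v
    v /= np.linalg.norm(v)                     # renormalize each step

# v converges (up to sign) to the eigenvector of the largest eigenvalue.
assert abs(abs(v @ Q[:, 0]) - 1.0) < 1e-6
```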
Article
Full-text available
Two of the most common tasks in medical imaging are classification and segmentation. Either task requires labeled data annotated by experts, which is scarce and expensive to collect. Annotating data for segmentation is generally considered to be more laborious as the annotator has to draw around the boundaries of regions of interest, as opposed to assigning image patches a class label. Furthermore, in tasks such as breast cancer histopathology, any realistic clinical application often includes working with whole slide images, whereas most publicly available training data are in the form of image patches, which are given a class label. We propose an architecture that can alleviate the requirements for segmentation-level ground truth by making use of image-level labels to reduce the amount of time spent on data curation. In addition, this architecture can help unlock the potential of previously acquired image-level datasets on segmentation tasks by annotating a small number of regions of interest. In our experiments, we show using only one segmentation-level annotation per class, we can achieve performance comparable to a fully annotated dataset.
Article
Full-text available
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
Article
Full-text available
We present RFOVE, a region-based method for approximating an arbitrary 2D shape with an automatically determined number of possibly overlapping ellipses. RFOVE is completely unsupervised, operates without any assumption or prior knowledge on the object's shape and extends and improves the Decremental Ellipse Fitting Algorithm (DEFA) [1]. Both RFOVE and DEFA solve the multi-ellipse fitting problem by performing model selection that is guided by the minimization of the Akaike Information Criterion on a suitably defined shape complexity measure. However, in contrast to DEFA, RFOVE minimizes an objective function that allows for ellipses with higher degree of overlap and, thus, achieves better ellipse-based shape approximation. A comparative evaluation of RFOVE with DEFA on several standard datasets shows that RFOVE achieves better shape coverage with simpler models (fewer ellipses). As a practical exploitation of RFOVE, we present its application to the problem of detecting and segmenting potentially overlapping cells in fluorescence microscopy images. Quantitative results obtained in three public datasets (one synthetic and two with more than 4000 actual stained cells) show the superiority of RFOVE over the state of the art in overlapping cells segmentation.
Article
Full-text available
Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of MoNuSeg 2018 Challenge whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set was released without annotations. Entries were evaluated based on average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline [1]. Among the trends observed that contributed to increased accuracy were the use of color normalization as well as heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net [2], FCN [3], and Mask-RCNN [4] were popularly used, typically based on ResNet [5] or VGG [6] base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
Article
Full-text available
In contrast to semantic segmentation, instance segmentation assigns unique labels to each individual instance of the same object class. In this work, we propose a novel recurrent fully convolutional network architecture for tracking such instance segmentations over time, which is highly relevant, e.g., in biomedical applications involving cell growth and migration. Our network architecture incorporates convolutional gated recurrent units (ConvGRU) into a stacked hourglass network to utilize temporal information, e.g., from microscopy videos. Moreover, we train our network with a novel embedding loss based on cosine similarities, such that the network predicts unique embeddings for every instance throughout videos, even in the presence of dynamic structural changes due to mitosis of cells. To create the final tracked instance segmentations, the pixel-wise embeddings are clustered among subsequent video frames by using the mean shift algorithm. After showing the performance of the instance segmentation on a static in-house dataset of muscle fibers from H&E-stained microscopy images, we also evaluate our proposed recurrent stacked hourglass network regarding instance segmentation and tracking performance on six datasets from the ISBI cell tracking challenge, where it delivers state-of-the-art results.
Article
Full-text available
We propose a segmentation method for nuclei in glioblastoma histopathologic images based on a sparse shape prior guided variational level set framework. By spectral clustering and sparse coding, a set of shape priors is exploited to accommodate complicated shape variations. We automate the object contour initialization by a seed detection algorithm and deform contours by minimizing an energy functional that incorporates a shape term in a sparse shape prior representation, an adaptive contour occlusion penalty term, and a boundary term encouraging contours to converge to strong edges. As a result, our approach is able to deal with mutual occlusions and detect contours of multiple intersected nuclei simultaneously. Our method is applied to several whole-slide histopathologic image datasets for nuclei segmentation. The proposed method is compared with other state-of-the-art methods and demonstrates good accuracy for nuclei detection and segmentation, suggesting its promise to support biomedical image-based investigations.
Conference Paper
Full-text available
We present a region based method for segmenting and splitting images of cells in an automatic and unsupervised manner. The detection of cell nuclei is based on Bradley's method. False positives are automatically identified and rejected based on shape and intensity features. Additionally, the proposed method is able to automatically detect and split touching cells. To do so, we employ a variant of a region based multi-ellipse fitting method that makes use of constraints on the area of the split cells. The quantitative assessment of the proposed method has been based on two challenging public datasets. This experimental study shows that the proposed method clearly outperforms existing methods for segmenting fluorescence microscopy images.
Article
Full-text available
Two successful approaches for the segmentation of biomedical images are (1) the selection of segment candidates from a merge-tree, and (2) the clustering of small superpixels by solving a Multi-Cut problem. In this paper, we introduce a model that unifies both approaches. Our model, the Candidate Multi-Cut (CMC), allows joint selection and clustering of segment candidates from a merge-tree. This way, we overcome the respective limitations of the individual methods: (1) the space of possible segmentations is not constrained to candidates of a merge-tree, and (2) the decision for clustering can be made on candidates larger than superpixels, using features over larger contexts. We solve the optimization problem of selecting and clustering of candidates using an integer linear program. On datasets of 2D light microscopy of cell populations and 3D electron microscopy of neurons, we show that our method generalizes well and generates more accurate segmentations than merge-tree or Multi-Cut methods alone.
Article
Full-text available
Microscopy imaging plays a vital role in understanding many biological processes in development and disease. The recent advances in automation of microscopes and development of methods and markers for live cell imaging has led to rapid growth in the amount of image data being captured. To efficiently and reliably extract useful insights from these captured sequences, automated cell tracking is essential. This is a challenging problem due to large variation in the appearance and shapes of cells depending on many factors including imaging methodology, biological characteristics of cells, cell matrix composition, labeling methodology, etc. Often cell tracking methods require a sequence-specific segmentation method and manual tuning of many tracking parameters, which limits their applicability to sequences other than those they are designed for. In this paper, we propose 1) a deep learning based cell proposal method, which proposes candidates for cells along with their scores, and 2) a cell tracking method, which links proposals in adjacent frames in a graphical model using edges representing different cellular events and poses joint cell detection and tracking as the selection of a subset of cell and edge proposals. Our method is completely automated and given enough training data can be applied to a wide variety of microscopy sequences. We evaluate our method on multiple fluorescence and phase contrast microscopy sequences containing cells of various shapes and appearances from ISBI cell tracking challenge, and show that our method outperforms existing cell tracking methods. Code is available at: https://github.com/SaadUllahAkram/CellTracker
Conference Paper
Full-text available
Efficient and effective cell segmentation of neuroendocrine tumor (NET) in whole slide scanned images is a difficult task due to a large number of cells. The weak or misleading cell boundaries also present significant challenges. In this paper, we propose a fast, high throughput cell segmentation algorithm by combining top-down shape models and bottom-up image appearance information. A scalable sparse manifold learning method is proposed to model multiple subpopulations of different cell shape priors. Followed by a shape clustering on the manifold, a novel affine transform-approximated active contour model is derived to deform contours without solving a large number of computationally expensive Euler-Lagrange equations, and thus dramatically reduces the computational time. To the best of our knowledge, this is the first report of a high throughput cell segmentation algorithm for whole slide scanned pathology specimens using manifold learning to accelerate active contour models. The proposed approach is tested using 12 NET images, and the comparative experiments with the state of the art demonstrate its superior performance in terms of both efficiency and effectiveness.
Article
Full-text available
This paper concerns automated cell counting and detection in microscopy images. The approach we take is to use convolutional neural networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation-based methods do not work well due to cell clumping or overlaps. We make the following contributions: (i) we develop and compare architectures for two fully convolutional regression networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency by end-to-end training on image patches; (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on microscopy images from real biological experiments without fine-tuning, and that the performance can be further improved by fine-tuning on these real images. Finally, (iv) by inverting feature representations, we show to what extent the information from an input image has been encoded by feature responses in different layers. We set a new state-of-the-art performance for cell counting on standard synthetic image benchmarks and show that the FCRNs trained entirely with synthetic data can generalise well to real microscopy images both for cell counting and detections for the case of overlapping cells.
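The density-regression idea above reduces counting to integrating the predicted map, which is why clumped or overlapping cells pose no special problem. A minimal sketch with a synthetic "prediction" built from unit-mass Gaussian blobs (illustrative, not an FCRN output):

```python
import numpy as np

def gaussian_blob(shape, centre, sigma=2.0):
    """A Gaussian bump normalized to unit mass, standing in for the density
    contribution of one detected cell (toy construction, not a network)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    g = np.exp(-((yy - centre[0]) ** 2 + (xx - centre[1]) ** 2) / (2 * sigma**2))
    return g / g.sum()

# Toy density map for three cells at assumed centres.
density = sum(gaussian_blob((64, 64), c) for c in [(10, 10), (30, 40), (50, 20)])
count = density.sum()                      # counting = integrating the map

assert abs(count - 3.0) < 1e-6             # three cells -> integral ≈ 3
```

Because overlapping blobs simply add, the integral still equals the cell count even when no individual cell could be segmented.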
Conference Paper
Full-text available
Accurate and automatic detection and delineation of cervical cells are two critical precursor steps to automatic Pap smear image analysis and detecting pre-cancerous changes in the uterine cervix. To overcome noise and cell occlusion, many segmentation methods resort to incorporating shape priors, mostly enforcing elliptical shapes (e.g. [1]). However, elliptical shapes do not accurately model cervical cells. In this paper, we propose a new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images. We show that our star-shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.
Article
Full-text available
In this paper we present an improved algorithm for the segmentation of cytoplasm and nuclei from clumps of overlapping cervical cells. This problem is notoriously difficult because of the degree of overlap among cells, the poor contrast of cell cytoplasm and the presence of mucus, blood and inflammatory cells. Our methodology addresses these issues by utilising a joint optimization of multiple level set functions, where each function represents a cell within a clump, that have both unary (intra-cell) and pairwise (inter-cell) constraints. The unary constraints are based on contour length, edge strength and cell shape, while the pairwise constraint is computed based on the area of the overlapping regions. In this way, our methodology enables the analysis of nuclei and cytoplasm from both free-lying and overlapping cells. We provide a systematic evaluation of our methodology using a database of over 900 images generated by synthetically overlapping images of free-lying cervical cells, where the number of cells within a clump is varied from 2 to 10 and the overlap coefficient between pairs of cells from 0.1 to 0.5. This quantitative assessment demonstrates that our methodology can successfully segment clumps of up to 10 cells, provided the overlap between pairs of cells is below 0.2. Moreover, if the clump consists of three or fewer cells, then our methodology can successfully segment individual cells even when the overlap is around 0.5. We also evaluate our approach quantitatively and qualitatively on a set of 16 extended depth of field images, where we are able to segment a total of 645 cells, of which only around 10% are free-lying. Finally, we demonstrate that our method of cell nuclei segmentation is competitive when compared to the current state of the art.
Article
Full-text available
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
Article
Full-text available
We study the problem of segmenting multiple cell nuclei from GFP or Hoechst stained microscope images with a shape prior. This problem is encountered ubiquitously in cell biology and developmental biology. Our work is motivated by the observation that segmentations with loose boundary or shrinking bias not only jeopardize feature extraction for downstream tasks (e.g. cell tracking), but also prevent robust statistical analysis (e.g. modeling of fluorescence distribution). We therefore propose a novel extension to the graph cut framework that incorporates a "blob"-like shape prior. The corresponding energy terms are parameterized via structured learning. Extensive evaluation and comparison on 2D/3D datasets show substantial quantitative improvement over other state-of-the-art methods. For example, our method achieves an 8.2% Rand index increase and a 4.3 Hausdorff distance decrease over the second best method on a public hand-labeled 2D benchmark.
Article
Full-text available
Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: 1) concave points clustering to determine the seed points of touching cells; and 2) random walker segmentation to obtain cell contours. Also, we have evaluated the performance of our proposed method with several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system, and the datasets include closely touching cells. Comparing with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness.
Article
Full-text available
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this paper, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark.
Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately.
Availability and implementation: The challenge website (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge.
Contact: codesolorzano@unav.es
Supplementary information: Supplementary data, including video samples and algorithm descriptions, are available at Bioinformatics online.
Article
Full-text available
Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focuses on possibly the simplest application of graph cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cut methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph-cut-based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.
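The s/t min cut at the heart of this framework is usually computed via its max-flow dual. A minimal Edmonds-Karp sketch follows (illustrative node names and capacities; vision codes typically use the faster Boykov-Kolmogorov algorithm instead):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; by max-flow/min-cut duality its value
    equals the cost of the minimum s/t cut used in graph-cut
    segmentation. `capacity` is a dict-of-dicts adjacency structure."""
    flow_value = 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():        # add zero-capacity reverse edges
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}                  # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow_value
        path = []                           # recover path, find bottleneck
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                   # augment along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow_value += bottleneck

# Two "pixels" a, b between source S (object) and sink T (background):
cap = {"S": {"a": 3, "b": 2}, "a": {"b": 1, "T": 2}, "b": {"T": 3}, "T": {}}
print(max_flow(cap, "S", "T"))  # 5
```

In a real segmentation graph the S/T edge capacities encode region terms and the pixel-to-pixel capacities encode boundary regularization.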
Article
Full-text available
Background Segmenting cell nuclei in microscopic images has become one of the most important routines in modern biological applications. With the vast amount of data, automatic localization, i.e. detection and segmentation, of cell nuclei is highly desirable compared to time-consuming manual processes. However, automated segmentation is challenging due to large intensity inhomogeneities in the cell nuclei and the background. Results We present a new method for automated progressive localization of cell nuclei using data-adaptive models that can better handle the inhomogeneity problem. We perform localization in a three-stage approach: first identify all interest regions with contrast-enhanced salient region detection, then process the clusters to identify true cell nuclei with probability estimation via feature-distance profiles of reference regions, and finally refine the contours of detected regions with regional contrast-based graphical model. The proposed region-based progressive localization (RPL) method is evaluated on three different datasets, with the first two containing grayscale images, and the third one comprising of color images with cytoplasm in addition to cell nuclei. We demonstrate performance improvement over the state-of-the-art. For example, compared to the second best approach, on the first dataset, our method achieves 2.8 and 3.7 reduction in Hausdorff distance and false negatives; on the second dataset that has larger intensity inhomogeneity, our method achieves 5% increase in Dice coefficient and Rand index; on the third dataset, our method achieves 4% increase in object-level accuracy. Conclusions To tackle the intensity inhomogeneities in cell nuclei and background, a region-based progressive localization method is proposed for cell nuclei localization in fluorescence microscopy images. 
The RPL method is demonstrated to be highly effective on three different public datasets, with an average improvement of 3.5% and 7% in region- and contour-based segmentation performance, respectively, over the state-of-the-art.
Article
Full-text available
We present a new fast active-contour model (a.k.a. snake) for image segmentation in 3D microscopy. We introduce a parametric design that relies on exponential B-spline bases and allows us to build snakes able to reproduce ellipsoids. We have designed our bases to have the shortest-possible support, subject to some constraints. Thus, computational efficiency is maximized. The proposed 3D snake can approximate blob-like objects with good accuracy and can perfectly reproduce spheres and ellipsoids, irrespective of their position and orientation. The optimization process is remarkably fast due to the use of Gauss' theorem within our energy computation scheme. Our technique yields successful segmentation results, even for challenging data where object contours are not well defined. This is due to our parametric approach that allows one to favor prior shapes. Together with this work, we provide software that gives full control over the snakes via an intuitive manipulation of few control points.
Article
Accurate and efficient segmentation of cell nuclei in fluorescence microscopy images plays a key role in many biological studies. Besides coping with image noise and other imaging artifacts, the separation of touching and partially overlapping cell nuclei is a major challenge. To address this, we introduce a globally optimal model-based approach for cell nuclei segmentation which jointly exploits shape and intensity information. Our approach is based on implicitly parameterized shape models, and we propose single-object and multi-object schemes. In the single-object case, the used shape parameterization leads to convex energies which can be directly minimized without requiring approximation. The multi-object scheme is based on multiple collaborating shapes and has the advantage that prior detection of individual cell nuclei is not needed. This scheme performs joint segmentation and cluster splitting. We describe an energy minimization scheme which converges close to global optima and exploits convex optimization such that our approach does not depend on the initialization nor suffers from local energy minima. The proposed approach is robust and computationally efficient. In contrast, previous shape-based approaches for cell segmentation either are computationally expensive, not globally optimal, or do not jointly exploit shape and intensity information. We successfully applied our approach to fluorescence microscopy images of five different cell types and performed a quantitative comparison with previous methods.
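The key property claimed above, that convex energy minimization does not depend on the initialization, can be illustrated on a toy convex energy (a hedged sketch; the paper's energy is defined over implicit shape parameters, not a scalar):

```python
def minimize(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent; for a convex, smooth energy it reaches
    the global minimum from any starting point."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy convex "energy" E(x) = (x - 3)^2 + 1 with gradient 2(x - 3).
grad = lambda x: 2 * (x - 3)
a = minimize(grad, x0=-10.0)
b = minimize(grad, x0=+25.0)
print(a, b)  # both converge to 3.0, the unique global minimum
```

With a non-convex energy the two runs could end in different local minima, which is precisely the failure mode the convex formulation avoids.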
Article
Cell segmentation in microscopy images is a common and challenging task. In recent years, deep neural networks achieved remarkable improvements in the field of computer vision. The dominant paradigm in segmentation is using convolutional neural networks, less common are recurrent neural networks. In this work, we propose a new deep learning method for cell segmentation, which integrates convolutional neural networks and gated recurrent neural networks over multiple image scales to exploit the strength of both types of networks. To increase the robustness of the training and improve segmentation, we introduce a novel focal loss function. We also present a distributed scheme for optimized training of the integrated neural network. We applied our proposed method to challenging data of glioblastoma cell nuclei and performed a quantitative comparison with state-of-the-art methods. Insights on how our extensions affect training and inference are also provided. Moreover, we benchmarked our method using a wide spectrum of all 22 real microscopy datasets of the Cell Tracking Challenge.
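The paper's novel focal loss variant is not spelled out in the abstract; as a reference point, the standard binary focal loss of Lin et al. looks like this (gamma and alpha values are illustrative defaults):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Standard focal loss (Lin et al.) for a binary prediction.

    p: predicted probability of the positive class, y: label in {0, 1}.
    The (1 - p_t)^gamma factor down-weights easy, well-classified
    examples so that training focuses on hard ones.
    """
    p_t = p if y == 1 else 1 - p
    alpha_t = alpha if y == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less than a hard one.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
print(easy < hard)  # True
```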
Article
The recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. However, automating this problem has proven to be non-trivial, and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. To alleviate this, graphical models are useful due to their ability to make use of prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, using trees, only a few inter-class constraints can be captured. To overcome this limitation, we propose polytree graphical models that capture label proximity relations more naturally compared to tree-based approaches. A novel recursive mechanism based on two-pass message passing was developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. The algorithm is evaluated on simulated data and on two publicly available fluorescence microscopy datasets, outperforming directed trees and three state-of-the-art convolutional neural networks, namely SegNet, DeepLab and PSPNet. Polytrees are shown to outperform directed trees in predicting segmentation error, by highlighting areas in the segmented image that do not comply with prior knowledge. This paves the way to uncertainty measures on the resulting segmentation and guides subsequent segmentation refinement.
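Full two-pass inference on a polytree is involved; the core operation it repeats, a sum-product message sent along an edge, can be sketched on a minimal two-node model with made-up potentials:

```python
def message(unary_src, pairwise):
    """Sum-product message from a source node along an edge:
    m(x_dst) = sum over x_src of unary(x_src) * psi(x_src, x_dst)."""
    k = len(pairwise[0])
    return [sum(unary_src[i] * pairwise[i][j] for i in range(len(unary_src)))
            for j in range(k)]

def normalize(v):
    z = sum(v)
    return [x / z for x in v]

# Binary labels {0: background, 1: nucleus}; a smoothness potential
# that favors neighbors taking the same label (illustrative numbers).
psi = [[0.8, 0.2], [0.2, 0.8]]
unary_a = [0.9, 0.1]      # node A: strong evidence for background
unary_b = [0.5, 0.5]      # node B: uninformative on its own
m = message(unary_a, psi)
belief_b = normalize([unary_b[j] * m[j] for j in range(2)])
print([round(p, 3) for p in belief_b])  # A's evidence pulls B toward background
```

In the polytree case, each node combines messages from several parents and children in two sweeps, but every individual message has this same form.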
Book
For almost 100 years, ROMEIS has been the standard reference on microscopic technique. Across 18 editions, this methods book has accompanied the development of light-microscopy procedures, and to this day it remains an indispensable laboratory handbook for scientists and students working in cytology, histology, microscopic anatomy, pathology, and histochemistry. The content of the 19th edition of ROMEIS has been updated and extended with many modern microscopy methods and applications. Key contents of this established work include: a practice-oriented introduction to the instrumentation underlying light, fluorescence, and electron microscopy; super-resolution light microscopy; protocols for live-cell imaging; detailed instructions for preparing specimens of diverse origins for light and electron microscopy; clearly laid-out recipes for the common histological staining methods; protocols for the special preparation and storage of protozoa, invertebrates, and plant material for microscopy; histological procedures for preparing animal specimens; instructions for detecting and localizing a wide variety of molecules in cells and tissues; background information and practical tips on the use of reporter proteins; introductions to digital microphotography and image processing; and quantitative analyses using both "classical" and software-assisted methods. An appendix with tables allows quick look-up, and a list of addresses of manufacturers, suppliers of accessories and chemicals, software, societies, and handbooks can be downloaded from the product homepage. Under the editorship of Dr. Maria Mulisch and Professor Dr. med. Ulrich Welsch, 22 microscopy experts from research and industry have contributed their experience to make this work a practical handbook that can be consulted and relied upon.
Article
Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant loss to the performance of the predictor. The goal of this article is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. This article further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier's decision surface, which help in developing a better understanding of deep networks. Finally, we present recent solutions that attempt to increase the robustness of deep networks. We hope this review article will contribute to shedding light on the open research challenges in the robustness of deep networks and stir interest in the analysis of their fundamental properties.
Conference Paper
Several machine learning models, including neural networks, consistently misclassify adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
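The "simple and fast method" referred to here is the fast gradient sign method (FGSM): perturb the input by epsilon times the sign of the loss gradient. A self-contained sketch on a logistic model (the weights, input, and epsilon below are made up for illustration):

```python
import math

def sign(v):
    return [(x > 0) - (x < 0) for x in v]

def fgsm(x, w, b, y, eps):
    """Fast gradient sign method for a logistic model p = sigma(w.x + b).

    For cross-entropy loss the input gradient is (p - y) * w, so the
    adversarial example is x + eps * sign((p - y) * w).
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1 / (1 + math.exp(-z))
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * g for xi, g in zip(x, sign(grad))]

w, b = [2.0, -3.0], 0.0
x, y = [1.0, 0.5], 1          # correctly classified as positive (z = 0.5)
x_adv = fgsm(x, w, b, y, eps=0.4)
z_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(z_adv)  # negative: the perturbed input is now misclassified
```

The flip with a small epsilon, despite the model being purely linear in its input, is exactly the linearity argument the paper makes.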
Conference Paper
We present a new method for cell segmentation which combines a marked point process model with a combinatorics-based method of finding global optima. The method employs an energy term that assesses possible segmentations by their fidelity to both local image information and a simple model of cell interaction, and we use a randomized iterative reweighting technique for its minimization. Our approach was successfully applied to cell microscopy images of varying difficulty and experimentally compared with both a standard segmentation method as well as a method based on Multiple Birth and Cut. The proposed method is found to improve upon previous approaches.
Article
The marked point process framework has been successfully developed in the field of image analysis to detect a configuration of predefined objects. The goal of this paper is to show how it can be particularly applied to biological imagery. We present a simple model that shows how some of the challenges specific to biological data are well addressed by the methodology. We further describe an extension to this first model to address other challenges due, for example, to the shape variability in biological material. We finally show results that illustrate the MPP framework using the 'simcep' algorithm for simulating populations of cells.
Article
Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is overlapping of cytoplasm, which has not been well-addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors to segment individual cells in Pap smear images to support automatic monitoring of changes in cells, which is a vital prerequisite of early detection of cervical cancer. We define this splitting problem as a discrete labeling task for multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features. We also incorporated high-level shape information to guide segmentation where the cell boundary might be weak or lost due to cell overlapping. An evaluation carried out using two different datasets demonstrates the superiority of our proposed method over the state-of-the-art methods in terms of segmentation accuracy.
Conference Paper
It is widely agreed that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
Conference Paper
Glioblastoma (GBM) is a malignant brain tumor with uniformly dismal prognosis. Quantitative analysis of GBM cells is an important avenue to extract latent histologic disease signatures to correlate with molecular underpinnings and clinical outcomes. As a prerequisite, a robust and accurate cell segmentation is required. In this paper, we present an automated cell segmentation method that can satisfactorily address segmentation of overlapped cells commonly seen in GBM histology specimens. This method first detects cells with seed connectivity, distance constraints, image edge map, and a shape-based voting image. Initialized by identified seeds, cell boundaries are deformed with an improved variational level set method that can handle clumped cells. We test our method on 40 histological images of GBM with human annotations. The validation results suggest that our cell segmentation method is promising and represents an advance in quantitative cancer research.
Article
Accurate segmentation of cells in fluorescence microscopy images plays a key role in high-throughput applications such as quantification of protein expression and the study of cell function. In this paper, an integrated framework consisting of a new level sets based segmentation algorithm and a touching-cell splitting method is proposed. For cell nuclei segmentation, a new region-based active contour model in a variational level set formulation is developed where our new level set energy functional minimizes the Bayesian classification risk. For touching-cell splitting, the touching cells are first distinguished from non-touching cells, and then a strategy based on the splitting area identification is proposed to obtain splitting point-pairs. To form the appropriate splitting line, the image properties from different information channels are used to define the surface manifold of the image patch around the selected splitting point-pairs and geodesic distance is used to measure the length of the shortest path on the manifold connecting the two splitting points. The performance of the proposed framework is evaluated using a large number of fluorescence microscopy images from four datasets with different cell types. A quantitative comparison is also performed with several existing segmentation approaches.
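The geodesic splitting-line step can be approximated with Dijkstra's algorithm on a weighted pixel grid. The toy sketch below (illustrative weights and a simple mean-of-weights edge cost, not the paper's manifold construction) finds the cheapest path between a splitting point-pair:

```python
import heapq

def geodesic_path_length(weight, start, goal):
    """Dijkstra shortest path on a 2D grid; moving between 4-connected
    pixels costs the mean of their weights, a simple stand-in for
    geodesic distance on an image-derived surface manifold."""
    h, w = len(weight), len(weight[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + (weight[r][c] + weight[nr][nc]) / 2
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A bright ridge (high weight) between two splitting points: the
# cheapest route detours through the low-weight valley.
grid = [[1, 9, 1],
        [1, 1, 1],
        [1, 9, 1]]
print(geodesic_path_length(grid, (0, 0), (0, 2)))  # 4.0 (valley route)
```

In the actual method the weights are derived from several image information channels rather than raw intensities.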
Conference Paper
Time-lapse microscopy imaging has advanced rapidly in the last few decades and is producing large volumes of data in cell and developmental biology. This has increased the importance of automated analyses, which depend heavily on cell segmentation and tracking as these are the initial stages when computing most biologically important cell properties. In this paper, we propose a novel joint cell segmentation and tracking method for fluorescence microscopy sequences, which generates a large set of cell proposals, creates a graph representing different cell events, and then iteratively finds the most probable path within this graph, providing cell segmentations and tracks. We evaluate our method on three datasets from the ISBI Cell Tracking Challenge and show that our greedy non-optimal joint solution results in improved performance compared with state-of-the-art methods.
Chapter
We propose an algorithm to segment 2D ellipses or 3D ellipsoids. This problem is of fundamental importance in various applications of cell biology. The algorithm consists of minimizing a contrast-invariant energy defined on sets of non-overlapping ellipsoids. This highly non-convex problem is solved by combining a stochastic approach based on marked point processes and a graph-cut algorithm that selects the best admissible configuration. To accelerate computation, we describe fast algorithms to assess whether two ellipsoids intersect, as well as various heuristics to improve the convergence rate.
Conference Paper
Large Convolutional Neural Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.