Multi-kernel graph embedding for detection, Gleason grading of prostate cancer via MRI/MRS

Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106-7207, United States.
Medical Image Analysis (Impact Factor: 3.65). 12/2012; 17(2). DOI: 10.1016/
Source: PubMed


Even though 1 in 6 men in the US is expected to be diagnosed with prostate cancer (CaP) in his lifetime, only 1 in 37 is expected to die of it. Consequently, among men diagnosed with CaP with a lower Gleason score on biopsy, there has been a recent trend toward active surveillance (wait and watch) rather than immediate treatment. Researchers have recently identified imaging markers for low- and high-grade CaP on multi-parametric (MP) magnetic resonance (MR) imaging, such as T2-weighted MR imaging (T2w MRI) and MR spectroscopy (MRS). In this paper, we present a novel computerized decision support system (DSS), called Semi-Supervised Multi-Kernel Graph Embedding (SeSMiK-GE), that quantitatively combines structural and metabolic imaging data for distinguishing (a) benign versus cancerous, and (b) high- versus low-Gleason-grade CaP regions from in vivo MP-MRI. A total of 29 1.5 Tesla endorectal pre-operative in vivo MP-MRI (T2w MRI, MRS) studies from patients undergoing radical prostatectomy were considered in this study. Ground truth for evaluation of the SeSMiK-GE classifier was obtained by annotating disease extent on the pre-operative imaging, visually correlating the MRI to the ex vivo whole-mount histologic specimens. The SeSMiK-GE framework comprises three main modules: (1) multi-kernel learning, (2) semi-supervised learning, and (3) dimensionality reduction, which are leveraged to construct an integrated low-dimensional representation of the different imaging and non-imaging MRI protocols. Hierarchical classifiers for diagnosis and Gleason grading of CaP are then constructed within this unified low-dimensional representation. 
Step 1 of the hierarchical classifier employs a random forest classifier in conjunction with the SeSMiK-GE data representation and a probabilistic pairwise Markov random field algorithm (which allows for the imposition of local spatial constraints) to yield a voxel-based classification of CaP presence. The CaP region of interest identified in Step 1 is then classified as either high- or low-Gleason-grade CaP in Step 2. Comparing SeSMiK-GE with unimodal T2w MRI and MRS classifiers and a commonly used feature concatenation (COD) strategy yielded areas under the receiver operating characteristic (ROC) curve (AUC) of (a) 0.89±0.09 (SeSMiK), 0.54±0.18 (T2w MRI), 0.61±0.20 (MRS), and 0.64±0.23 (COD) for distinguishing benign from CaP regions, and (b) 0.84±0.07 (SeSMiK), 0.54±0.13 (MRI), 0.59±0.19 (MRS), and 0.62±0.18 (COD) for distinguishing high- from low-grade CaP, using a leave-one-out cross-validation strategy with all evaluations performed on a per-voxel basis. Our results suggest that, following further rigorous validation, SeSMiK-GE could be developed into a powerful diagnostic and prognostic tool for detecting and grading CaP in vivo and for helping to determine the appropriate treatment option. Identifying low-grade disease in vivo might allow CaP patients to opt for active surveillance rather than immediate aggressive therapy such as radical prostatectomy.
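The core SeSMiK-GE idea described above, a weighted combination of per-modality kernels followed by a graph-Laplacian embedding into a low-dimensional space, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the RBF kernel choice, the kernel weights, and the data shapes are all hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel/affinity matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def combined_embedding(kernels, weights, n_dims=2):
    # Multi-kernel step: weighted sum of per-modality kernels
    # (e.g. one kernel from T2w MRI features, one from MRS spectra).
    K = sum(w * K_m for w, K_m in zip(weights, kernels))
    # Treat K as a graph affinity matrix and build the normalized Laplacian.
    d = K.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(K)) - D_inv_sqrt @ K @ D_inv_sqrt
    # The eigenvectors of the smallest nontrivial eigenvalues give the
    # low-dimensional graph embedding of the voxels.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_dims + 1]
```

In the actual framework, partial label information additionally steers the embedding (the semi-supervised module), and a classifier is then trained in the embedded space.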

    • "Some examples are given as follows. Biological Domains: In structure based medical diagnose [12], [13], chemical compounds active against cancer are very rare and are expected to be carefully identified and investigated. A false negative identification (i.e. "
    ABSTRACT: Graph classification has drawn great interest in recent years due to the increasing number of applications involving objects with complex structural relationships. To date, all existing graph classification algorithms assume, explicitly or implicitly, that misclassifying instances in different classes incurs an equal amount of cost (or risk), which is often not the case in real-life applications (where misclassifying a certain class of samples, such as diseased patients, incurs a higher cost than others). Although cost-sensitive learning has been extensively studied, all existing methods are based on data with an instance-feature representation. Graphs, however, do not have features readily available for learning, and the feature space of graph data is likely infinite and needs to be carefully explored in order to favor classes with a higher cost. In this paper, we propose CogBoost, a fast cost-sensitive graph classification algorithm, which aims to minimize the misclassification costs (instead of the errors) and achieve fast learning speed for large-scale graph datasets. To minimize the misclassification costs, CogBoost iteratively selects the most discriminative subgraph by considering the costs of different classes, and then solves a linear programming problem in each iteration using a Bayes-decision-rule-based optimal loss function. In addition, a cutting-plane algorithm is derived to speed up the solving of the linear programs for fast learning on large graph datasets. Experiments and comparisons on real-world large graph datasets demonstrate the effectiveness and efficiency of our algorithm.
    IEEE Transactions on Knowledge and Data Engineering 01/2015; DOI:10.1109/TKDE.2015.2391115 · 2.07 Impact Factor
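The Bayes-decision-rule idea underlying cost-sensitive classifiers like the one above can be illustrated independently of the subgraph-mining machinery: given class-conditional probabilities, predict the label that minimizes expected cost rather than expected error. The cost matrix below is a hypothetical example, not taken from the paper.

```python
import numpy as np

# Cost matrix: COST[i, j] = cost of predicting class j when the true class is i.
# Here a false negative (missing a positive, e.g. a diseased patient) is
# assumed to be 5x costlier than a false positive.
COST = np.array([[0.0, 1.0],   # true class 0: correct, false positive
                 [5.0, 0.0]])  # true class 1: false negative, correct

def bayes_cost_decision(p_pos, cost=COST):
    # Expected cost of predicting 0 vs 1 given P(y=1|x); pick the cheaper label.
    p = np.asarray(p_pos, dtype=float)
    exp_cost_0 = (1 - p) * cost[0, 0] + p * cost[1, 0]  # predict negative
    exp_cost_1 = (1 - p) * cost[0, 1] + p * cost[1, 1]  # predict positive
    return (exp_cost_1 < exp_cost_0).astype(int)
```

With these 5:1 costs the decision threshold shifts from 0.5 down to 1/6, so borderline cases are flagged positive, which is exactly the behavior a cost-sensitive learner aims for.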
    ABSTRACT: We introduce a novel radiohistomorphometric method for quantitative correlation and subsequent discovery of imaging markers for aggressive prostate cancer (CaP). While this approach can be employed in the context of any imaging modality and disease domain, we seek to identify quantitative dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) attributes that are highly correlated with the density and architecture of tumor microvessels, surrogate markers of CaP aggressiveness. This retrospective study consisted of five Gleason-score-matched patients who underwent 3 Tesla multiparametric MRI prior to radical prostatectomy (RP). The excised gland was sectioned and quartered with a rotary knife. For each serial section, digitized images of individual quadrants were reconstructed into pseudo whole-mount sections via a previously developed stitching program. The individual quadrants were stained with the vascular marker CD31 and annotated for CaP by an expert pathologist. The stained microvessel regions were quantitatively characterized in terms of density and architectural arrangement via graph algorithms, yielding a series of quantitative histomorphometric features. The reconstructed pseudo whole-mount histologic sections were non-linearly co-registered with DCE MRI to identify tumor extent on MRI on a voxel-by-voxel basis. Pairwise correlations between kinetic and microvessel features within CaP-annotated regions on the two modalities were computed to identify highly correlated attributes. Preliminary results of the radiohistomorphometric correlation identified 8 DCE MRI kinetic features that were highly and significantly (p<0.05) correlated with a number of microvessel parameters. Most of the identified imaging features were related to the rate of washout (Rwo) and the initial area under the curve (IAUC). Association of these attributes with Gleason patterns showed that the identified imaging features clustered most of the tumors with a primary Gleason pattern of 3 together. These results suggest that Rwo and IAUC may be promising candidate imaging markers for the identification of aggressive CaP in vivo.
    SPIE Medical Imaging; 02/2013
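The pairwise-correlation step described above amounts to computing a correlation matrix between kinetic (imaging) feature columns and microvessel (histologic) feature columns over co-registered voxels or regions. A minimal sketch, with assumed feature matrices rather than the study's actual data:

```python
import numpy as np

def pairwise_pearson(A, B):
    # A: (n_samples, n_kinetic_features), B: (n_samples, n_microvessel_features).
    # Returns R with R[i, j] = Pearson correlation of A[:, i] and B[:, j].
    A = (A - A.mean(axis=0)) / A.std(axis=0)   # z-score each column
    B = (B - B.mean(axis=0)) / B.std(axis=0)
    return A.T @ B / len(A)                    # correlation = mean of products
```

High-magnitude entries of the resulting matrix (subject to a significance test, as in the paper's p<0.05 screening) identify candidate imaging-histology feature pairs.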
    ABSTRACT: Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI 2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. The algorithms showed a wide variety of methods and implementations, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations, with run times of 8 min and 3 s per case, respectively. Overall, active-appearance-model-based approaches seemed to outperform other approaches such as multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation has not yet been obtained. All results are available online at
    Medical Image Analysis 12/2013; 18(2):359-373. DOI:10.1016/ · 3.65 Impact Factor
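The challenge above combined several boundary- and volume-based metrics into one score. As a generic illustration of the volume-overlap side of such evaluations (not the challenge's exact scoring), the widely used Dice similarity coefficient between a predicted and a reference binary mask can be computed as:

```python
import numpy as np

def dice(seg, ref):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1].
    seg, ref = np.asarray(seg).astype(bool), np.asarray(ref).astype(bool)
    denom = seg.sum() + ref.sum()
    # Two empty masks agree perfectly by convention.
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, ref).sum() / denom
```

A Dice of 1.0 means perfect overlap with the expert delineation, 0.0 means no overlap; challenge-style scoring typically relates such raw metrics to inter-observer variability.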