Daniel L Rubin

Stanford University, Palo Alto, California, United States

Publications (159) · 288.94 Total Impact Points

  •
    ABSTRACT: To display drusen and geographic atrophy (GA) in a single projection image from three-dimensional spectral domain optical coherence tomography images based on a novel false color fusion strategy.
    Retina (Philadelphia, Pa.) 07/2014; · 2.93 Impact Factor
  •
    ABSTRACT: Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) cancer Biomedical Informatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
    Journal of Digital Imaging 06/2014; · 1.10 Impact Factor
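    A minimal sketch of what an AIM-style annotation serialized as XML might look like, using only Python's standard library. The element and attribute names below are illustrative placeholders chosen for this sketch, not the actual AIM schema or DICOM SR encoding.

      # Illustrative only: element names are placeholders, not the real AIM/DICOM SR schema.
      import xml.etree.ElementTree as ET

      annotation = ET.Element("ImageAnnotation", uid="1.2.3.4", dateTime="2014-06-01T12:00:00")
      ET.SubElement(annotation, "ImagingObservationEntity", label="mass", confidence="0.9")
      markup = ET.SubElement(annotation, "TwoDimensionPolyline")          # graphical region of interest
      for x, y in [(10, 12), (18, 12), (18, 20), (10, 20)]:
          ET.SubElement(markup, "Coordinate", x=str(x), y=str(y))
      ET.SubElement(annotation, "Calculation", type="LongestDiameter", value="14.2", units="mm")

      print(ET.tostring(annotation, encoding="unicode"))                  # XML document for storage/exchange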
  • Source
    ABSTRACT: Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging phenotype. Methods and materials: We retrospectively identified 104 treatment-naive glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using the GenePattern Comparative Marker Selection module (Broad Institute).
    BMC Medical Genomics 06/2014; 7(1):30. · 3.47 Impact Factor
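    A hedged sketch of the kind of two-class marker selection described in the entry above (Class A vs. Class B imaging phenotypes): a per-gene t-test with false-discovery-rate correction. This is a generic stand-in for the GenePattern Comparative Marker Selection module, not its actual implementation; the array shapes and labels are hypothetical.

      # Generic two-class differential-expression sketch (not the GenePattern implementation).
      import numpy as np
      from scipy.stats import ttest_ind
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(0)
      expr = rng.normal(size=(12042, 104))                       # hypothetical genes x patients matrix
      is_class_a = rng.integers(0, 2, size=104).astype(bool)     # invasive imaging phenotype present

      t_stats, p_vals = ttest_ind(expr[:, is_class_a], expr[:, ~is_class_a], axis=1)
      reject, q_vals, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
      print("genes passing FDR 0.05:", int(reject.sum()))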
  • Source
    Journal of clinical oncology: official journal of the American Society of Clinical Oncology. 05/2014;
  • Source
    ABSTRACT: We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using high-order steerable Riesz wavelets and support vector machines (SVM). The organization of scales and directions that are specific to every VST are modeled as linear combinations of directional Riesz wavelets. The models obtained are steerable, which means that any orientation of the model can be synthesized from linear combinations of the basis filters. The latter property is leveraged to model VST independently from their local orientation. In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a non-hierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST when using SVMs in a feature space combining the magnitudes of the steered models with CT intensities. Likelihood maps are created for each VST, which enables high transparency of the information modeled. The computationally-derived ontology obtained from the VST models was found to be consistent with the underlying semantics of the visual terms. It was found to be complementary to the RadLex ontology, and constitutes a potential method to link the image content to visual semantics. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to (1) quantify their local likelihood and (2) explicitly link them with pixel-based image content in the context of a given imaging domain.
    IEEE transactions on medical imaging. 05/2014;
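    The steerability property leveraged in the entry above can be summarized in one relation: any rotated version of a filter is a fixed linear combination of N basis filters, so a learned VST model can be re-oriented without recomputing convolutions. In generic notation (illustrative, not the paper's exact parameterization):

      g_\theta(\mathbf{x}) = \sum_{n=1}^{N} b_n(\theta)\, g_n(\mathbf{x}), \qquad
      f_{\mathrm{VST}}(I)(\mathbf{x}) = \sum_{n=1}^{N} w_n\, (g_n * I)(\mathbf{x}),

    where the g_n are the Riesz basis filters, the b_n(\theta) are steering coefficients, and the weights w_n are learned by the SVM; steering the model to a local orientation \theta only re-weights the precomputed filter responses (g_n * I), so no new convolutions are required.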
  •
    ABSTRACT: Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high-level semantics may impair system performance. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index, which is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. Classification results of more than 95% were obtained on a 77-image dataset. For comparison purposes, the use of the Earth Mover's Distance (EMD), which is an alternative distance metric that considers all the existing relations among the terms, led to a retrieval accuracy of 0.95 and classification results of 93% with a higher computational cost. The results provided by the presented framework are competitive with the state-of-the-art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification.
    Journal of Biomedical Informatics 03/2014; · 2.13 Impact Factor
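    The NDCG index used for evaluation in the entry above has a standard form; a minimal sketch follows (the gain and log base follow the common convention and may differ in detail from the paper's exact variant; the relevance values are hypothetical).

      # Minimal NDCG sketch: relevances are listed in the order the system retrieved them.
      import math

      def dcg(relevances):
          return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

      def ndcg(retrieved_relevances):
          ideal = dcg(sorted(retrieved_relevances, reverse=True))   # best possible ordering
          return dcg(retrieved_relevances) / ideal if ideal > 0 else 0.0

      print(ndcg([3, 2, 3, 0, 1, 2]))   # hypothetical graded relevances of retrieved images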
  • Source
    ABSTRACT: To evaluate the ability of various software (SW) tools used for quantitative image analysis to properly account for source-specific image scaling employed by magnetic resonance imaging manufacturers. A series of gadoteridol-doped distilled water solutions (0%, 0.5%, 1%, and 2% volume concentrations) was prepared for manual substitution into one (of three) phantom compartments to create "variable signal," whereas the other two compartments (containing mineral oil and 0.25% gadoteridol) were held unchanged. Pseudodynamic images were acquired over multiple series using four scanners such that the histogram of pixel intensities varied enough to provoke variable image scaling from series to series. Additional diffusion-weighted images were acquired of an ice-water phantom to generate scanner-specific apparent diffusion coefficient (ADC) maps. The resulting pseudodynamic images and ADC maps were analyzed by eight centers of the Quantitative Imaging Network using 16 different SW tools to measure compartment-specific region-of-interest intensity. Images generated by one of the scanners appeared to have additional intensity scaling that was not accounted for by the majority of tested quantitative image analysis SW tools. Incorrect image scaling leads to intensity measurement bias near 100%, compared to nonscaled images. Corrective actions for image scaling are suggested for manufacturers and the quantitative imaging community.
    Translational oncology 02/2014; 7(1):65-71. · 3.40 Impact Factor
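    A hedged illustration of the scaling issue described above: quantitative tools must map stored pixel values to real-world values using the rescale fields in the DICOM header (and, on some vendors, additional private scale factors) before region-of-interest statistics are computed. The file path and ROI coordinates below are placeholders.

      # Apply standard DICOM rescale before ROI measurement; some vendors add further
      # private scaling that this sketch does not handle.
      import pydicom

      ds = pydicom.dcmread("series/slice_001.dcm")                      # placeholder path
      slope = float(getattr(ds, "RescaleSlope", 1.0))
      intercept = float(getattr(ds, "RescaleIntercept", 0.0))
      pixels = ds.pixel_array.astype("float64") * slope + intercept     # rescaled intensities
      roi_mean = pixels[40:60, 40:60].mean()                            # hypothetical ROI
      print(roi_mean)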
  • Source
    ABSTRACT: There are two key challenges hindering effective use of quantitative assessment of imaging in cancer response assessment: 1) Radiologists usually describe the cancer lesions in imaging studies subjectively and sometimes ambiguously, and 2) it is difficult to repurpose imaging data, because lesion measurements are not recorded in a format that permits machine interpretation and interoperability. We have developed a freely available software platform on the basis of open standards, the electronic Physician Annotation Device (ePAD), to tackle these challenges in two ways. First, ePAD facilitates the radiologist in carrying out cancer lesion measurements as part of routine clinical trial image interpretation workflow. Second, ePAD records all image measurements and annotations in a data format that permits repurposing image data for analyses of alternative imaging biomarkers of treatment response. To determine the impact of ePAD on radiologist efficiency in quantitative assessment of imaging studies, a radiologist evaluated computed tomography (CT) imaging studies from 20 subjects having one baseline and three consecutive follow-up imaging studies with and without ePAD. The radiologist made measurements of target lesions in each imaging study using Response Evaluation Criteria in Solid Tumors (RECIST) 1.1, initially with the aid of ePAD, and then after a 30-day washout period, the exams were reread without ePAD. The mean total time required to review the images and summarize measurements of target lesions was 15% (P < .039) shorter using ePAD than without using this tool. In addition, it was possible to rapidly reanalyze the images to explore lesion cross-sectional area as an alternative imaging biomarker to linear measure. We conclude that ePAD appears promising to potentially improve reader efficiency for quantitative assessment of CT examinations, and it may enable discovery of future novel image-based biomarkers of cancer treatment response.
    Translational oncology 02/2014; 7(1):23-35. · 3.40 Impact Factor
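    For context, the RECIST 1.1 target-lesion response categories referenced above reduce to a simple rule on the sum of lesion diameters; a minimal sketch follows (thresholds per the published criteria, input values hypothetical, and nodal short-axis and non-target rules omitted).

      # RECIST 1.1 target-lesion response from sums of longest diameters (mm).
      def recist_response(baseline_sum, nadir_sum, current_sum):
          if current_sum == 0:
              return "CR"                                   # complete response: all target lesions gone
          if current_sum <= 0.7 * baseline_sum:
              return "PR"                                   # >=30% decrease from baseline
          if current_sum >= 1.2 * nadir_sum and current_sum - nadir_sum >= 5:
              return "PD"                                   # >=20% and >=5 mm increase from nadir
          return "SD"

      print(recist_response(baseline_sum=85.0, nadir_sum=52.0, current_sum=48.0))  # hypothetical values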
  • Source
    ABSTRACT: The diverse set of human brain structure and function analysis methods represents a difficult challenge for reconciling multiple views of neuroanatomical organization. While different views of organization are expected and valid, no widely adopted approach exists to harmonize different brain labeling protocols and terminologies. Our approach uses the natural organizing framework provided by anatomical structure to correlate terminologies commonly used in neuroimaging. Description: The Foundational Model of Anatomy (FMA) Ontology provides a semantic framework for representing the anatomical entities and relationships that constitute the phenotypic organization of the human body. In this paper we describe recent enhancements to the neuroanatomical content of the FMA that models cytoarchitectural and morphological regions of the cerebral cortex, as well as white matter structure and connectivity. This modeling effort is driven by the need to correlate and reconcile the terms used in neuroanatomical labeling protocols. By providing an ontological framework that harmonizes multiple views of neuroanatomical organization, the FMA provides developers with reusable and computable knowledge for a range of biomedical applications. A requirement for facilitating the integration of basic and clinical neuroscience data from diverse sources is a well-structured ontology that can incorporate, organize, and associate neuroanatomical data. We applied the ontological framework of the FMA to align the vocabularies used by several human brain atlases, and to encode emerging knowledge about structural connectivity in the brain. We highlighted several use cases of these extensions, including ontology reuse, neuroimaging data annotation, and organizing 3D brain models.
    Journal of biomedical semantics. 01/2014; 5(1):1.
  • Source
    ABSTRACT: Analyzing Functional Magnetic Resonance Imaging (fMRI) of resting brains to determine the spatial location and activity of intrinsic brain networks-a novel and burgeoning research field-is limited by the lack of ground truth and the tendency of analyses to overfit the data. Independent Component Analysis (ICA) is commonly used to separate the data into signal and Gaussian noise components, and then map these components on to spatial networks. Identifying noise from this data, however, is a tedious process that has proven hard to automate, particularly when data from different institutions, subjects, and scanners is used. Here we present an automated method to delineate noisy independent components in ICA using a data-driven infrastructure that queries a database of 246 spatial and temporal features to discover a computational signature of different types of noise. We evaluated the performance of our method to detect noisy components from healthy control fMRI (sensitivity = 0.91, specificity = 0.82, cross validation accuracy (CVA) = 0.87, area under the curve (AUC) = 0.93), and demonstrate its generalizability by showing equivalent performance on (1) an age- and scanner-matched cohort of schizophrenia patients from the same institution (sensitivity = 0.89, specificity = 0.83, CVA = 0.86), (2) an age-matched cohort on an equivalent scanner from a different institution (sensitivity = 0.88, specificity = 0.88, CVA = 0.88), and (3) an age-matched cohort on a different scanner from a different institution (sensitivity = 0.72, specificity = 0.92, CVA = 0.79). We additionally compare our approach with a recently published method [1]. Our results suggest that our method is robust to noise variations due to population as well as scanner differences, thereby making it well suited to the goal of automatically distinguishing noise from functional networks to enable investigation of human brain function.
    PLoS ONE 01/2014; 9(4):e95493. · 3.53 Impact Factor
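    A hedged sketch of the classification setup described in the entry above: a cross-validated classifier over per-component feature vectors, reporting sensitivity, specificity, and AUC. The feature matrix, labels, and classifier choice here are placeholders, not the paper's actual feature set or model.

      # Cross-validated noise-vs-signal component classification; data and model are placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import confusion_matrix, roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 246))                      # hypothetical: 300 components x 246 features
      y = rng.integers(0, 2, size=300)                     # 1 = noise component, 0 = signal

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
      pred = (proba >= 0.5).astype(int)
      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp), "AUC", roc_auc_score(y, proba))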
  •
    ABSTRACT: Purpose: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. Methods: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients’ clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. Results: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679 ± 0.068, Akaike's information criterion 566.7, P < 0.001). Conclusion: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.
    Journal of Neuroradiology 01/2014; · 1.24 Impact Factor
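    A minimal sketch of the kind of Cox proportional-hazards model described above, combining clinical, imaging, and genomic covariates with the lifelines package; the column names and synthetic data are hypothetical stand-ins for the study's variables.

      # Cox fit over mixed clinical/imaging/genomic covariates (hypothetical columns, synthetic data).
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "survival_days": rng.exponential(400, size=102),
          "death_observed": rng.integers(0, 2, size=102),
          "received_chemotherapy": rng.integers(0, 2, size=102),
          "enhancement_proportion": rng.uniform(0, 1, size=102),   # VASARI-style imaging feature
          "hras_copy_number": rng.normal(2, 0.5, size=102),
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="survival_days", event_col="death_observed")
      cph.print_summary()                                           # hazard ratios and concordance index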
  • Source
    ABSTRACT: Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means to providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images, annotated with semantic terms of the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automatic approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.
    Medical Image Analysis. 01/2014;
  •
    ABSTRACT: Radiology reports are the major, and often only, means of communication between radiologists and their referring clinicians. The purposes of this study are to identify referring physicians' preferences about radiology reports and to quantify their perceived value of multimedia reports (with embedded images) compared with narrative text reports. We contacted 1800 attending physicians from a range of specialties at a large tertiary care medical center via e-mail and a hospital newsletter linking to a 24-question electronic survey between July and November 2012. One hundred sixty physicians responded, yielding a response rate of 8.9%. Survey results were analyzed using Statistical Analysis Software (SAS Institute Inc, Cary, NC). Of the 160 referring physician respondents, 142 (89%) indicated a general interest in reports with embedded images and completed the remainder of the survey questions. Of 142 respondents, 103 (73%) agreed or strongly agreed that reports with embedded images could improve the quality of interactions with radiologists; 129 respondents (91%) agreed or strongly agreed that having access to significant images enhances understanding of a text-based report; 110 respondents (77%) agreed or strongly agreed that multimedia reports would significantly improve referring physician satisfaction; and 85 respondents (60%) felt strongly or very strongly that multimedia reports would significantly improve patient care and outcomes. Creating accessible, readable, and automatic multimedia reports should be a high priority to enhance the practice and satisfaction of referring physicians, improve patient care, and emphasize the critical role radiology plays in current medical care.
    Academic radiology 12/2013; 20(12):1577-83. · 2.09 Impact Factor
  • Source
    ABSTRACT: To develop and evaluate an improved method of generating en face fundus images from three-dimensional optical coherence tomography images which enhances the visualization of drusen. We describe a novel approach, the restricted summed-voxel projection (RSVP), to generate en face projection images of the retinal surface combined with an image processing method to enhance drusen visualization. The RSVP approach is an automated method that restricts the projection to the retinal pigment epithelium layer neighborhood. Additionally, drusen visualization is improved through an image processing technique that fills drusen with bright pixels. The choroid layer is also excluded when creating the RSVP to eliminate bright pixels beneath drusen that could be confused with drusen when geographic atrophy is present. The RSVP method was evaluated in 46 patients and 3-dimensional optical coherence tomography data sets were obtained from 8 patients, for which 2 readers independently identified drusen as the gold standard. The mean drusen overlap ratio was used as the metric to determine the accuracy of visualization of the RSVP method when compared with the conventional summed-voxel projection technique. Comparative results demonstrate that the RSVP method was more effective than the conventional summed-voxel projection in displaying drusen and retinal vessels, and was more useful in detecting drusen. The mean drusen overlap ratios based on the conventional summed-voxel projection method and the RSVP method were 2.1% and 89.3%, respectively. The RSVP method was more effective for drusen visualization than the conventional summed-voxel projection method, and it may be useful for macular assessment in patients with nonexudative age-related macular degeneration.
    Retina (Philadelphia, Pa.) 10/2013; · 2.93 Impact Factor
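    A hedged sketch of the restricted projection idea described above: instead of summing every voxel in each A-scan, project only a thin band around the segmented RPE, which suppresses signal from other layers. The volume, RPE depths, and band half-width below are synthetic placeholders.

      # Restricted summed-voxel projection: average only a band around the RPE per A-scan.
      import numpy as np

      rng = np.random.default_rng(0)
      volume = rng.random((48, 496, 512))                  # hypothetical OCT volume: slices x depth x width
      rpe_depth = rng.integers(300, 330, size=(48, 512))   # per-A-scan RPE depth from a prior segmentation

      band = 15                                            # half-width (in pixels) of the band around the RPE
      en_face = np.zeros((48, 512))
      for s in range(volume.shape[0]):
          for x in range(volume.shape[2]):
              z = rpe_depth[s, x]
              en_face[s, x] = volume[s, max(z - band, 0): z + band, x].mean()
      # en_face is the restricted en face projection; a full SVP would instead use volume.mean(axis=1).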
  • David S Mendelson, Daniel L Rubin
    ABSTRACT: There are rapid changes occurring in the health care environment. Radiologists face new challenges but also new opportunities. The purpose of this report is to review how new informatics tools and developments can help the radiologist respond to the drive for safety, quality, and efficiency. These tools will be of assistance in conducting research and education. They not only provide greater efficiency in traditional operations but also open new pathways for the delivery of new services and imaging technologies. Our future as a specialty is dependent on integrating these informatics solutions into our daily practice.
    Academic radiology 10/2013; 20(10):1195-212. · 2.09 Impact Factor
  • Source
    [Show abstract] [Hide abstract]
    ABSTRACT: Spectral domain optical coherence tomography (SD-OCT) is a useful tool for the visualization of drusen, a retinal abnormality seen in patients with age-related macular degeneration (AMD); however, objective assessment of drusen is thwarted by the lack of a method to robustly quantify these lesions on serial OCT images. Here, we describe an automatic drusen segmentation method for SD-OCT retinal images, which leverages a priori knowledge of normal retinal morphology and anatomical features. The highly reflective and locally connected pixels located below the retinal nerve fiber layer (RNFL) are used to generate a segmentation of the retinal pigment epithelium (RPE) layer. The observed and expected contours of the RPE layer are obtained by interpolating and fitting the shape of the segmented RPE layer, respectively. The areas located between the interpolated and fitted RPE shapes (which have nonzero area when drusen are present) are marked as drusen. To enhance drusen quantification, we also developed a novel method of retinal projection to generate an en face retinal image based on the RPE extraction, which improves the quality of drusen visualization over the current approach to producing retinal projections from SD-OCT images based on a summed-voxel projection (SVP), and it provides a means of obtaining quantitative features of drusen in the en face projection. Visualization of the segmented drusen is refined through several post-processing steps: drusen detection to eliminate false positive detections on consecutive slices, drusen refinement on a projection view of drusen, and drusen smoothing. Experimental evaluation results demonstrate that our method is effective for drusen segmentation. In a preliminary analysis of the potential clinical utility of our methods, quantitative drusen measurements, such as area and volume, can be correlated with the drusen progression in non-exudative AMD, suggesting that our approach may produce useful quantitative imaging biomarkers to follow this disease and predict patient outcome.
    Medical image analysis 07/2013; 17(8):1058-1072. · 3.09 Impact Factor
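    A minimal sketch of the core geometric step described in the entry above: drusen are taken where the observed RPE contour bulges above a smooth fit representing the expected, druse-free RPE shape. The contour values are synthetic, and the polynomial order and threshold are arbitrary choices for illustration.

      # Drusen as the gap between the observed RPE contour and a smooth fitted (expected) contour.
      import numpy as np

      x = np.arange(512)                                       # A-scan index across one B-scan
      observed_rpe = 0.0002 * (x - 256) ** 2 + 300.0           # synthetic observed RPE depth (pixels)
      observed_rpe[200:260] -= 12 * np.exp(-((x[200:260] - 230) ** 2) / 200)  # synthetic druse bump

      coeffs = np.polyfit(x, observed_rpe, deg=3)              # smooth "expected" RPE shape
      expected_rpe = np.polyval(coeffs, x)

      elevation = expected_rpe - observed_rpe                  # positive where the RPE is lifted
      drusen_mask = elevation > 2.0                            # threshold in pixels, arbitrary here
      print("drusen columns:", int(drusen_mask.sum()), "max elevation:", float(elevation.max()))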
  • Source
    ABSTRACT: Genetic changes underlying clear cell renal cell carcinoma (ccRCC) include alterations in genes controlling cellular oxygen sensing (for example, VHL) and the maintenance of chromatin states (for example, PBRM1). We surveyed more than 400 tumours using different genomic platforms and identified 19 significantly mutated genes. The PI(3)K/AKT pathway was recurrently mutated, suggesting this pathway as a potential therapeutic target. Widespread DNA hypomethylation was associated with mutation of the H3K36 methyltransferase SETD2, and integrative analysis suggested that mutations involving the SWI/SNF chromatin remodelling complex (PBRM1, ARID1A, SMARCA4) could have far-reaching effects on other pathways. Aggressive cancers demonstrated evidence of a metabolic shift, involving downregulation of genes involved in the TCA cycle, decreased AMPK and PTEN protein levels, upregulation of the pentose phosphate pathway and the glutamine transporter genes, increased acetyl-CoA carboxylase protein, and altered promoter methylation of miR-21 (also known as MIR21) and GRB10. Remodelling cellular metabolism thus constitutes a recurrent pattern in ccRCC that correlates with tumour stage and severity and offers new views on the opportunities for disease treatment.
    Nature 06/2013; · 38.60 Impact Factor
  •
    ABSTRACT: OBJECTIVE: To predict the response of breast cancer patients to neoadjuvant chemotherapy (NAC) using features derived from dynamic contrast-enhanced (DCE) MRI. MATERIALS AND METHODS: 60 patients with triple-negative early-stage breast cancer receiving NAC were evaluated. Features assessed included clinical data, patterns of tumor response to treatment determined by DCE-MRI, MRI breast imaging-reporting and data system descriptors, and quantitative lesion kinetic texture derived from the gray-level co-occurrence matrix (GLCM). All features except for patterns of response were derived before chemotherapy; GLCM features were determined before and after chemotherapy. Treatment response was defined by the presence of residual invasive tumor and/or positive lymph nodes after chemotherapy. Statistical modeling was performed using Lasso logistic regression. RESULTS: Pre-chemotherapy imaging features predicted all measures of response except for residual tumor. Feature sets varied in effectiveness at predicting different definitions of treatment response, but in general, pre-chemotherapy imaging features were able to predict pathological complete response with area under the curve (AUC)=0.68, residual lymph node metastases with AUC=0.84 and residual tumor with lymph node metastases with AUC=0.83. Imaging features assessed after chemotherapy yielded significantly improved model performance over those assessed before chemotherapy for predicting residual tumor, but no other outcomes. CONCLUSIONS: DCE-MRI features can be used to predict whether triple-negative breast cancer patients will respond to NAC. Models such as the ones presented could help to identify patients not likely to respond to treatment and to direct them towards alternative therapies.
    Journal of the American Medical Informatics Association 06/2013; · 3.57 Impact Factor
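    A hedged sketch of the texture-plus-sparse-regression pipeline described above: gray-level co-occurrence matrix (GLCM) statistics as features and an L1-penalized (lasso) logistic regression as the response model. The patches and labels are synthetic, and the feature set is far simpler than the study's; note that older scikit-image versions spell the functions greycomatrix/greycoprops.

      # GLCM texture features + L1-penalized logistic regression (synthetic data, illustrative features).
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      def glcm_features(patch):
          glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                              levels=64, symmetric=True, normed=True)
          return [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]

      patches = rng.integers(0, 64, size=(60, 32, 32), dtype=np.uint8)   # hypothetical lesion patches
      responded = rng.integers(0, 2, size=60)                            # hypothetical response labels

      X = np.array([glcm_features(p) for p in patches])
      model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, responded)
      print("nonzero coefficients:", int((model.coef_ != 0).sum()))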
  • Qiang Chen, Fang Quan, Jiajing Xu, Daniel L Rubin
    ABSTRACT: The measurement of the size of lesions in follow-up CT examinations of cancer patients is important to evaluate the success of treatment. This paper presents an automatic algorithm for identifying and segmenting lymph nodes in CT images across longitudinal time points. First, a two-step image registration method is proposed to locate the lymph nodes, including coarse registration based on body region detection and fine registration based on a double-template matching algorithm. Then, to make the initial segmentation approximate the boundaries of lymph nodes, the initial image registration result is refined with intensity and edge information. Finally, a snake model is used to evolve the refined initial curve and obtain segmentation results. Our algorithm was tested on 26 lymph nodes at multiple time points from 14 patients. The image at the earlier time point was used as the baseline image in evaluating the follow-up image, resulting in 76 total test cases. Of the 76 test cases, detection was successful in 76 (100%), and clinical assessment according to Response Evaluation Criteria in Solid Tumors (RECIST) was correct in 38/40 (95%). The quantitative evaluation based on several metrics, such as average Hausdorff distance, indicates that our algorithm produces good results. In addition, the proposed algorithm is fast, with an average computing time of 2.58 s. The proposed segmentation algorithm for lymph nodes is fast and can achieve high segmentation accuracy, which may be useful to automate the tracking and evaluation of cancer therapy.
    Computer methods and programs in biomedicine 06/2013; · 1.56 Impact Factor
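    A hedged sketch of the template-matching step in the registration described above, using normalized cross-correlation to locate a baseline lymph-node patch in a follow-up slice; the arrays are synthetic placeholders, and the full pipeline (coarse body-region registration, double templates, snake refinement) is not reproduced here.

      # Locate a baseline lymph-node template in a follow-up slice by normalized cross-correlation.
      import numpy as np
      from skimage.feature import match_template

      rng = np.random.default_rng(0)
      followup_slice = rng.random((256, 256))              # hypothetical follow-up CT slice
      template = followup_slice[100:132, 120:152].copy()   # stand-in for a baseline node patch

      scores = match_template(followup_slice, template)    # normalized cross-correlation map
      row, col = np.unravel_index(np.argmax(scores), scores.shape)
      print("best match at:", row, col)                    # expected near (100, 120) in this toy example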
  • Source
    ABSTRACT: A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
    Journal of Digital Imaging 04/2013; · 1.10 Impact Factor

Publication Stats

3k Citations
288.94 Total Impact Points

Institutions

  • 2002–2014
    • Stanford University
      • Department of Radiology
      • Stanford Center for Biomedical Informatics Research
      • Department of Genetics
      Palo Alto, California, United States
  • 2013
    • Mount Sinai Medical Center
      New York City, New York, United States
    • GE Global Research
      Niskayuna, New York, United States
    • Nanjing University of Science and Technology
      Nanjing, Jiangsu, China
  • 2001–2013
    • Stanford Medicine
      • Department of Radiology
      Stanford, California, United States
  • 2011–2012
    • Vanderbilt University
      • Division of Hematology and Oncology
      Nashville, TN, United States
  • 2010
    • The Mind Research Network
      Albuquerque, New Mexico, United States
  • 2007–2010
    • Medical College of Wisconsin
      • Department of Radiology
      Milwaukee, WI, United States
  • 2009
    • University of Iowa
      • Department of Radiology
      Iowa City, IA, United States
    • Northwestern University
      • Department of Radiology
      Evanston, IL, United States
  • 2008
    • University of Washington Seattle
      • Department of Biological Structure
      Seattle, WA, United States
  • 1999–2000
    • VA Palo Alto Health Care System
      Palo Alto, California, United States