Michael L Bittner

Translational Genomics Research Institute, Phoenix, Arizona, United States

Publications (206) · 1,039.97 Total Impact Points

  •
    ABSTRACT: Insensitivity to standard clinical interventions, including chemotherapy, radiotherapy and tyrosine kinase inhibitor (TKI) treatment, remains a substantial hindrance towards improving the prognosis of patients with non-small cell lung cancer (NSCLC). The molecular mechanism of therapeutic resistance remains poorly understood. The TNF-like weak inducer of apoptosis (TWEAK)-FGF-inducible 14 (Fn14) signaling axis is known to promote cancer cell survival via NF-kappaB activation and the up-regulation of pro-survival Bcl-2 family members. Here, a role was determined for TWEAK-Fn14 pro-survival signaling in NSCLC through the up-regulation of myeloid cell leukemia sequence 1 (Mcl-1). Mcl-1 expression significantly correlated with Fn14 expression, advanced NSCLC tumor stage, and poor patient prognosis in human primary NSCLC tumors. TWEAK stimulation of NSCLC cells induced NF-kappaB-dependent Mcl-1 protein expression and conferred Mcl-1-dependent chemo- and radio-resistance. Depletion of Mcl-1 via siRNA or pharmacological inhibition of Mcl-1, using EU-5148, sensitized TWEAK-treated NSCLC cells to cisplatin- or radiation-mediated inhibition of cell survival. Moreover, EU-5148 inhibited cell survival across a panel of NSCLC cell lines. In contrast, inhibition of Bcl-2/Bcl-xL function had minimal effect on suppressing TWEAK-induced cell survival. Collectively, these results position TWEAK-Fn14 signaling through Mcl-1 as a significant mechanism for NSCLC tumor cell survival, and open new therapeutic avenues to abrogate the high mortality rate seen in NSCLC. Implications: The TWEAK-Fn14 signaling axis enhances lung cancer cell survival and therapeutic resistance through Mcl-1, positioning both TWEAK-Fn14 and Mcl-1 as therapeutic opportunities in lung cancer.
    Molecular Cancer Research 01/2014; · 4.35 Impact Factor
  • Source
    Jianping Hua, Michael L Bittner, Edward R Dougherty
    ABSTRACT: Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most of the existing comparison studies focus on whether the existing GSA methods can produce accurate P-values; however, practitioners are often more concerned with the correct gene-set ranking generated by the methods. The ranking performance is closely related to two critical goals associated with GSA methods: the ability to reveal biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that for the proposed data model, the Q2 type GSA methods have in general better performance than other GSA methods and the global test has the most robust results. The properties of a data set play a critical role in the performance. For the data sets with highly connected genes, all GSA methods suffer significantly in performance.
    Cancer informatics 01/2014; 13(Suppl 1):1-16.
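The hybrid data model described above (pick a master gene, generate artificial phenotype labels, then resample many small studies) can be sketched roughly as follows. This is a minimal illustration, not the authors' code; the function names and the median-threshold labeling rule are assumptions:

```python
import random

def make_hybrid_model(expression, master_idx):
    """Build one hybrid data model: threshold a chosen 'master' gene at its
    median to generate artificial phenotype labels for every sample."""
    values = sorted(sample[master_idx] for sample in expression)
    median = values[len(values) // 2]
    labels = [1 if sample[master_idx] >= median else 0 for sample in expression]
    return expression, labels

def resample_small_studies(samples, labels, n_studies, study_size, seed=0):
    """Generate many small-sample studies from one hybrid model by resampling,
    giving a large batch of data sets on which to compare GSA rankings."""
    rng = random.Random(seed)
    studies = []
    for _ in range(n_studies):
        idx = [rng.randrange(len(samples)) for _ in range(study_size)]
        studies.append(([samples[i] for i in idx], [labels[i] for i in idx]))
    return studies
```

Each resampled study can then be fed to every GSA method under comparison, and the resulting gene-set rankings scored against the ground truth defined by the master gene.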
  • Source
    ABSTRACT: BACKGROUND: Identifying similarities and differences in the molecular constitutions of various types of cancer is one of the key challenges in cancer research. The appearance of a cancer depends on complex molecular interactions, including gene regulatory networks and gene-environment interactions. This complexity makes it challenging to decipher the molecular origin of the cancer. In recent years, many studies reported methods to uncover heterogeneous depictions of complex cancers, which are often categorized into different subtypes. The challenge is to identify diverse molecular contexts within a cancer, to relate them to different subtypes, and to learn the underlying molecular interactions specific to those contexts so that context-specific treatment can be recommended to patients. RESULTS: In this study, we describe a novel method to discern molecular interactions specific to certain molecular contexts. Unlike conventional approaches that build modular networks of individual genes, our focus is to identify cancer-generic and subtype-specific interactions between contextual gene sets, where each contextual gene set shares coherent transcriptional patterns across a subset of samples. We then apply a novel formulation to quantitate the effect of the samples from each subtype on the calculated strength of the observed interactions. Two cancer data sets were analyzed to support the validity of the condition-specificity of the identified interactions. Compared to an existing approach, the proposed method was much more sensitive in identifying condition-specific interactions, even in heterogeneous data sets. The results also revealed that network components specific to different types of cancer relate to different biological functions than cancer-generic network components. We found not only results that are consistent with previous studies, but also new hypotheses on the biological mechanisms specific to certain cancer types that warrant further investigation. CONCLUSIONS: The analysis of the contextual gene sets and the characterization of networks of interactions composed of these sets uncovered distinct functional differences underlying various types of cancer. The results show that our method successfully reveals many subtype-specific regions in the identified maps of biological contexts, which well represent biological functions that can be connected to specific subtypes.
    BMC Genomics 02/2013; 14(1):110. · 4.40 Impact Factor
  •
    ABSTRACT: Two issues are critical to the development of effective cancer-drug combinations. First, it is necessary to determine common combinations of alterations that exert strong control over proliferation and survival regulation for the general type of cancer being considered. Second, it is necessary to have a drug testing method that allows one to assess the variety of responses that can be provoked by drugs acting at key points in the cellular processes dictating proliferation and survival. Utilizing a previously reported GFP (green fluorescent protein) reporter-based technology that provides dynamic measurements of individual reporters in individual cells, the present paper proposes a dynamical systems approach to these issues. It involves a three-stage experimental design: (1) formulate an oncologic pathway model of relevant processes; (2) perturb the pathways with the test drug and drugs with known effects on components of the pathways of interest; and (3) measure process activity indicators at various points on cell populations. This design addresses the fundamental problems in the design and analysis of combinatorial drug treatments. We apply the dynamical approach to three issues in the context of colon cancer cell lines: (1) identification of cell subpopulations possessing differing degrees of drug sensitivity; (2) the consequences of different drug dosing strategies on cellular processes; and (3) assessing the consequences of combinatorial versus monotherapy. Finally, we illustrate how the dynamical systems approach leads to a mechanistic hypothesis in the colon cancer HCT116 cell line.
    Journal of Biological Systems 02/2013; 20(04). · 0.73 Impact Factor
  •
    ABSTRACT: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate that drugs targeting a particular gene will be likely or not to elicit the desired response. One approach would be to quantitate the degree of similarity between the responses that cells show when exposed to drugs, so that consistencies in the regulation of cellular response processes that produce success or failure can be more readily identified. We track drug response using fluorescent proteins as transcription activity reporters. Our basic assumption is that drugs inducing very similar alterations in transcriptional regulation will produce similar temporal trajectories on many of the reporter proteins and hence be identified as having similarities in their mechanisms of action (MOA). The main body of this work is devoted to characterizing similarity in temporal trajectories/signals. To do so, we must first identify the key points that determine mechanistic similarity between two drug responses. Directly comparing points on the two signals is unrealistic, as it cannot handle delays and speed variations on the time axis. Hence, to capture the similarities between reporter responses, we develop an alignment algorithm that is robust to noise and time delays, and is able to find all the contiguous parts of signals centered about a core alignment (reflecting a core mechanism in drug response). Applying the proposed algorithm to a range of real drug experiments shows that the results agree well with prior drug MOA knowledge. The R code for the RLCSS algorithm is available at http://gsp.tamu.edu/Publications/supplementary/zhao12a.
    Bioinformatics 05/2012; 28(14):1902-10. · 5.47 Impact Factor
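The published RLCSS algorithm is available at the supplement linked above. As a toy illustration of why subsequence-style alignment tolerates delays and speed variations, here is a simplified longest-common-subsequence score on quantized signal levels (not the authors' method; the quantization step is an assumption):

```python
def quantize(signal, step=1.0):
    """Map a real-valued trajectory onto discrete levels so that small
    amplitude noise does not break matches."""
    return [round(x / step) for x in signal]

def lcs_alignment_score(a, b):
    """Length of the longest common subsequence of two quantized signals.
    Matches need not be contiguous, so pure time delays and slowdowns on
    either axis do not reduce the score."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

A delayed, slowed-down copy of a trajectory still aligns fully against the original, whereas point-by-point comparison at fixed times would report large differences.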
  • Source
    Breast Cancer Research 04/2012; 2:1-1. · 5.33 Impact Factor
  • Chen Zhao, Ivan Ivanov, Michael L. Bittner, Edward R. Dougherty
    ABSTRACT: To effectively intervene when cells are trapped in pathological modes of operation it is necessary to build models that capture relevant network structure and include characterization of dynamical changes within the system. The model must be of sufficient detail that it facilitates the selection of intervention points where pathological cell behavior arising from improper regulation can be stopped. What is known about this type of cellular decision-making is consistent with the general expectations associated with any kind of decision-making operation. If the result of a decision at one node is serially transmitted to other nodes, resetting their states, then the process may suffer from mechanistic inefficiencies of transmission or from blockage or activation of transmission through the action of other nodes acting on the same node. A standard signal-processing network model, Bayesian networks, can model these properties. This paper employs a Bayesian tree model to characterize conditional pathway logic and quantify the effects of different branching patterns, signal transmission efficiencies and levels of alternate or redundant inputs. In particular, it characterizes master genes and canalizing genes within the quantitative framework. The model is also used to examine what inferences about the network structure can be made when perturbations are applied to various points in the network.
    Journal of Biological Systems 04/2012; 19(04). · 0.73 Impact Factor
  •
    ABSTRACT: High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.
    Journal of Biomedical Optics 04/2012; 17(4):046008. · 2.75 Impact Factor
  • Source
    Michael L Bittner, Edward R Dougherty
    ABSTRACT: For science, theoretical or applied, to significantly advance, researchers must use the most appropriate mathematical methods. A century and a half elapsed between Newton's development of the calculus and Laplace's development of celestial mechanics. One cannot imagine the latter without the former. Today, more than three-quarters of a century has elapsed since the birth of stochastic systems theory. This article provides a perspective on the utilization of systems theory as the proper vehicle for the development of systems biology and its application to complex regulatory diseases such as cancer.
    Cancer informatics 01/2012; 11:185-90.
  •
    ABSTRACT: MOTIVATION: Cancer encompasses various diseases associated with loss of cell cycle control, leading to uncontrolled cell proliferation and/or reduced apoptosis. Cancer is usually caused by malfunction(s) in the cellular signaling pathways. Malfunctions occur in different ways and at different locations in a pathway. Consequently, therapy design should first identify the location and type of malfunction to arrive at a suitable drug combination. RESULTS: We consider the growth factor (GF) signaling pathways, widely studied in the context of cancer. Interactions between different pathway components are modeled using Boolean logic gates. All possible single malfunctions in the resulting circuit are enumerated and responses of the different malfunctioning circuits to a 'test' input are used to group the malfunctions into classes. Effects of different drugs, targeting different parts of the Boolean circuit, are taken into account in deciding drug efficacy, thereby mapping each malfunction to an appropriate set of drugs.
    Bioinformatics 02/2011; 27(4):548-55. · 5.47 Impact Factor
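The fault-enumeration idea above can be caricatured with a tiny Boolean circuit: force each node in turn to a stuck value, record the circuit's response to every test input, and group malfunctions with identical response signatures. The three-node wiring below is invented for illustration and is far simpler than the paper's GF pathway model:

```python
from itertools import product

def circuit(inputs, stuck=None):
    """Tiny Boolean pathway sketch: output = (gf AND receptor) OR ras.
    'stuck' = (node_name, value) forces one node to a fixed value,
    modeling a single malfunction in the circuit."""
    def node(name, value):
        return stuck[1] if stuck and stuck[0] == name else value
    gf, receptor, ras = inputs
    return node("out", (node("gf", gf) and node("receptor", receptor))
                or node("ras", ras))

def malfunction_classes():
    """Enumerate all single stuck-at faults and group those with identical
    responses over every test input into equivalence classes."""
    faults = [(n, v) for n in ("gf", "receptor", "ras", "out")
              for v in (False, True)]
    classes = {}
    for fault in faults:
        signature = tuple(circuit(x, stuck=fault)
                          for x in product([False, True], repeat=3))
        classes.setdefault(signature, []).append(fault)
    return classes
```

Malfunctions landing in the same class are indistinguishable from the test inputs alone, which is why the paper maps whole classes, rather than individual faults, to candidate drug sets.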
  •
    ABSTRACT: We utilize a tree-structured Bayesian network to characterize and detect master and canalizing genes via the coefficient of determination (CoD). Master genes possess strong regulation over groups of genes, whereas canalizing genes take over the regulation of large cohorts under certain cell conditions. While related, the two concepts are not the same and the analytic measures we employ reveal that difference. We also consider hypothesis testing for successful drug intervention in the framework of the Bayesian model.
    Genomic Signal Processing and Statistics (GENSIPS), 2011 IEEE International Workshop on; 01/2011
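The coefficient of determination (CoD) used here measures how much a set of predictor genes improves on the best constant prediction of a target gene. A minimal binary-valued sketch (not the paper's Bayesian-tree machinery; the discrete-majority predictor is the standard optimal predictor for this setting):

```python
from collections import Counter

def cod(target, predictors):
    """CoD = (e0 - e) / e0 for a binary target, where e0 is the error of the
    best constant predictor and e is the error of the optimal predictor
    built on the observed predictor patterns."""
    n = len(target)
    e0 = (n - max(Counter(target).values())) / n
    # Optimal predictor: majority target value for each predictor pattern.
    by_pattern = {}
    for x, y in zip(predictors, target):
        by_pattern.setdefault(x, []).append(y)
    errors = sum(len(ys) - max(Counter(ys).values())
                 for ys in by_pattern.values())
    e = errors / n
    return 0.0 if e0 == 0 else (e0 - e) / e0
```

A master gene would show a high CoD toward many targets unconditionally; a canalizing gene would show high CoDs only within the cell conditions under which it takes over regulation.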
  •
    ABSTRACT: A novel preclinical model combining experimental methods and theoretical analysis is proposed to investigate the mechanism of action and identify the pharmacodynamic characteristics of a drug. Instead of fixed-time-point analysis of drug exposure versus drug effect, the time course of drug effect for different doses is quantitatively studied on cell line-based platforms using a Kalman filter, where tumor cells' responses to drugs, read out through fluorescent reporters, are sampled frequently over a time course. It is expected that such preclinical studies will provide valuable suggestions about dosing regimens for the in vivo experimental stage and thereby increase productivity.
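For readers unfamiliar with the filtering step, a scalar Kalman filter over a noisy reporter time course looks roughly like this. It is a generic random-walk-state sketch, not the model used in the paper; the noise variances are illustrative assumptions:

```python
def kalman_smooth(measurements, process_var=1e-3, measurement_var=0.1):
    """Scalar Kalman filter assuming a random-walk underlying state observed
    with additive noise. Returns the filtered drug-effect estimates."""
    estimate, error_var = measurements[0], 1.0
    filtered = [estimate]
    for z in measurements[1:]:
        # Predict: the state carries over; uncertainty grows by process noise.
        error_var += process_var
        # Update: blend prediction and measurement via the Kalman gain.
        gain = error_var / (error_var + measurement_var)
        estimate += gain * (z - estimate)
        error_var *= (1 - gain)
        filtered.append(estimate)
    return filtered
```

Frequent sampling is what makes this worthwhile: with many noisy reads per time course, the filter recovers a smooth effect trajectory that single fixed-time-point measurements cannot.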
  • Source
    ABSTRACT: This paper proposes a framework to study the drug effect at the molecular level in order to address the following question of current interest in the drug community: Given a fixed total delivered drug, which is better, frequent small or infrequent large drug dosages? A hybrid system model is proposed to link the drug's pharmacokinetic and pharmacodynamic information, and allows the drug effects for different dosages and treatment schedules to be compared. A hybrid model facilitates the modeling of continuous quantitative changes that lead to discrete transitions. An optimal dosage-frequency regimen and the necessary and sufficient conditions for the drug to be effective are obtained analytically when the drug is designed to control a target gene. Then, we extend the analysis to the case where the target gene is part of a genetic regulatory network. A crucial observation is that there exists a "sweet spot," defined as the "drug efficacy region (DER)" in this paper, for certain dosage and frequency arrangements given the total delivered drug. This paper quantifies the therapeutic benefits of dosage regimens lying within the DER. Simulations are performed using MATLAB/SIMULINK to validate the analytical results.
    IEEE transactions on bio-medical engineering 11/2010; 58(3):488-98. · 2.15 Impact Factor
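The "frequent small versus infrequent large" comparison can be illustrated with a one-compartment repeated-bolus model under first-order elimination. The parameter values and the time-above-threshold criterion are illustrative assumptions, not the paper's hybrid model:

```python
import math

def time_above_threshold(dose, interval, total_drug, half_life, threshold,
                         dt=0.1):
    """Fraction of the treatment horizon during which a repeated-bolus
    regimen keeps the drug level above 'threshold', with first-order
    elimination and a fixed total delivered drug."""
    k = math.log(2) / half_life
    n_doses = int(round(total_drug / dose))
    horizon = n_doses * interval
    above = steps = 0
    t = 0.0
    while t < horizon:
        # Superpose the exponential decay of every bolus given so far.
        level = sum(dose * math.exp(-k * (t - i * interval))
                    for i in range(n_doses) if i * interval <= t)
        above += level >= threshold
        steps += 1
        t += dt
    return above / steps
```

With the same total drug, frequent small doses hold the level above a low threshold almost continuously, while infrequent large boluses overshoot and then fall below it between doses, which is one intuition behind the existence of a drug efficacy region.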
  • Source
    ABSTRACT: Cutaneous squamous cell carcinoma (SCC) occurs commonly and can metastasize. Identification of specific molecular aberrations and mechanisms underlying the development and progression of cutaneous SCC may lead to better prognostic and therapeutic approaches and more effective chemoprevention strategies. To identify genetic changes associated with early stages of cutaneous SCC development, we analyzed a series of 40 archived skin tissues ranging from normal skin to invasive SCC. Using high-resolution array-based comparative genomic hybridization, we identified deletions of a region on chromosome 10q harboring the INPP5A gene in 24% of examined SCC tumors. Subsequent validation by immunohistochemistry on an independent sample set of 71 SCC tissues showed reduced INPP5A protein levels in 72% of primary SCC tumors. Decrease in INPP5A protein levels seems to be an early event in SCC development, as it also is observed in 9 of 26 (35%) examined actinic keratoses, the earliest stage in SCC development. Importantly, further reduction of INPP5A levels is seen in a subset of SCC patients as the tumor progresses from primary to metastatic stage. The observed frequency and pattern of loss indicate that INPP5A, a negative regulator of inositol signaling, may play a role in development and progression of cutaneous SCC tumors.
    Cancer Prevention Research 09/2010; 3(10):1277-83. · 4.89 Impact Factor
  • Source
    Edward R Dougherty, Michael L Bittner
    ABSTRACT: Because the basic unit of biology is the cell, biological knowledge is rooted in the epistemology of the cell, and because life is the salient characteristic of the cell, its epistemology must be centered on its livingness, not its constituent components. The organization and regulation of these components in the pursuit of life constitute the fundamental nature of the cell. Thus, regulation sits at the heart of biological knowledge of the cell and the extraordinary complexity of this regulation conditions the kind of knowledge that can be obtained, in particular, the representation and intelligibility of that knowledge. This paper is essentially split into two parts. The first part discusses the inadequacy of everyday intelligibility and intuition in science and the consequent need for scientific theories to be expressed mathematically without appeal to commonsense categories of understanding, such as causality. Having set the backdrop, the second part addresses biological knowledge. It briefly reviews modern scientific epistemology from a general perspective and then turns to the epistemology of the cell. In analogy with a multi-faceted factory, the cell utilizes a highly parallel distributed control system to maintain its organization and regulate its dynamical operation in the face of both internal and external changes. Hence, scientific knowledge is constituted by the mathematics of stochastic dynamical systems, which model the overall relational structure of the cell and how these structures evolve over time, stochasticity being a consequence of the need to ignore a large number of factors while modeling relatively few in an extremely complex environment.
    Current Genomics 06/2010; 11(4):221-37. · 2.48 Impact Factor
  •
    ABSTRACT: Receiver operating characteristic (ROC) curves are commonly used in biomedical applications to judge the performance of a discriminant across varying decision thresholds. The estimated ROC curve depends on the true positive rate (TPR) and false positive rate (FPR), with the key metric being the area under the curve (AUC). With small samples these rates need to be estimated from the training data, so a natural question arises: How well do the estimates of the AUC, TPR and FPR compare with the true metrics? Through a simulation study using data models and analysis of real microarray data, we show that (i) for small samples the root mean square differences of the estimated and true metrics are considerable; (ii) even for large samples, there is only weak correlation between the true and estimated metrics; and (iii) generally, there is weak regression of the true metric on the estimated metric. For classification rules, we consider linear discriminant analysis, linear support vector machine (SVM) and radial basis function SVM. For error estimation, we consider resubstitution, three kinds of cross-validation and bootstrap. Using resampling, we show the unreliability of some published ROC results. Companion web site: http://compbio.tgen.org/paper_supp/ROC/roc.html; contact: edward@mail.ece.tamu.edu.
    Bioinformatics 03/2010; 26(6):822-30. · 5.47 Impact Factor
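The empirical AUC reduces to the Mann-Whitney statistic, and its small-sample scatter is easy to see directly. The sketch below is illustrative (the Gaussian data model and sample sizes are invented, not the paper's experimental design):

```python
import random

def auc(pos_scores, neg_scores):
    """Empirical AUC as the Mann-Whitney probability that a positive case
    outscores a negative one, counting ties as one half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def small_sample_auc_spread(n_per_class=10, trials=200, seed=0):
    """Range of AUC estimates from repeated small samples of two
    unit-variance Gaussian classes one standard deviation apart
    (true AUC is about 0.76)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        pos = [rng.gauss(1.0, 1.0) for _ in range(n_per_class)]
        neg = [rng.gauss(0.0, 1.0) for _ in range(n_per_class)]
        estimates.append(auc(pos, neg))
    return min(estimates), max(estimates)
```

Even though every sample is drawn from the same population, ten cases per class produce AUC estimates scattered widely around the true value, which is the phenomenon the paper quantifies for trained classifiers and error estimators.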
  • Source
    ABSTRACT: This paper reports the development of a biodosimetry device suitable for rapidly measuring expression levels of a low-density gene set that can define radiation exposure, dose and injury in a public health emergency. The platform comprises a set of 14 genes selected on the basis of their abundance and differential expression level in response to radiation from an expression profiling series measuring 41,000 transcripts. Gene expression is analyzed through direct signal amplification using a quantitative Nuclease Protection Assay (qNPA). This assay can be configured as either a high-throughput microplate assay or as a handheld detection device for individual point-of-care assays. Recently, we were able to successfully develop the qNPA platform to measure gene expression levels directly from human whole blood samples. The assay can be performed with volumes as small as 30 microL of whole blood, which is compatible with collection from a fingerstick. We analyzed in vitro irradiated blood samples with qNPA. The results revealed statistically significant discrimination between irradiated and non-irradiated samples. These results indicate that the qNPA platform combined with a gene profile based on a small number of genes is a valid test to measure biological radiation exposure. The scalability characteristics of the assay make it appropriate for population triage. This biodosimetry platform could also be used for personalized monitoring of radiotherapy treatments received by patients.
    Health physics 02/2010; 98(2):179-85. · 0.92 Impact Factor
  • Source
    ABSTRACT: When confronted with a small sample, feature-selection algorithms often fail to find good feature sets, a problem exacerbated for high-dimensional data and large feature sets. The problem is compounded by the fact that, if one obtains a feature set with a low error estimate, the estimate is unreliable because training-data-based error estimators typically perform poorly on small samples, exhibiting optimistic bias or high variance. One way around the problem is to limit the number of features being considered, restrict feature sets to sizes such that all feature sets can be examined by exhaustive search, and report a list of the best-performing feature sets. If the list is short, then it greatly restricts the possible feature sets to be considered as candidates; however, one can expect the lowest error estimates obtained to be optimistically biased, so that there may not be a close-to-optimal feature set on the list. This paper provides a power analysis of this methodology; in particular, it examines the kind of results one should expect to obtain relative to the length of the list and the number of discriminating features among those considered. Two measures are employed. The first is the probability that there is at least one feature set on the list whose true classification error is within some given tolerance of the best feature set, and the second is the expected number of feature sets on the list whose true errors are within the given tolerance of the best feature set. These values are plotted as functions of the list length to generate power curves. The results show that, if the number of discriminating features is not too small (that is, the prior biological knowledge is not too poor), then one should expect, with high probability, to find good feature sets.
    Cancer informatics 01/2010; 9:49-60.
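The first power measure above (probability that the reported list contains a near-optimal feature set) can be sketched with a Monte-Carlo toy model in which each feature set has a true error and a noisy estimate. The noise model and parameters are invented for illustration, not the paper's analytic derivation:

```python
import random

def list_success_prob(true_errors, noise_sd, list_len, tol,
                      trials=2000, seed=0):
    """Monte-Carlo estimate of the probability that the 'list_len' feature
    sets with the lowest *estimated* errors include at least one set whose
    *true* error is within 'tol' of the optimum."""
    rng = random.Random(seed)
    best = min(true_errors)
    hits = 0
    for _ in range(trials):
        # Pair each true error with its noisy small-sample estimate.
        est = [(e + rng.gauss(0, noise_sd), e) for e in true_errors]
        est.sort()  # rank by estimated error
        if any(e - best <= tol for _, e in est[:list_len]):
            hits += 1
    return hits / trials
```

Plotting this probability against `list_len` reproduces the qualitative shape of the paper's power curves: longer lists can only raise the chance of capturing a near-optimal set.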
  • Source
    ABSTRACT: A major reason for constructing gene regulatory networks is to use them as models for determining therapeutic intervention strategies by deriving ways of altering their long-run dynamics in such a way as to reduce the likelihood of entering undesirable states. In general, two paradigms have been taken for gene network intervention: (1) stationary external control is based on optimally altering the status of a control gene (or genes) over time to drive network dynamics; and (2) structural intervention involves an optimal one-time change of the network structure (wiring) to beneficially alter the long-run behaviour of the network. These intervention approaches have mainly been developed within the context of the probabilistic Boolean network model for gene regulation. This article reviews both types of intervention and applies them to reducing the metastatic competence of cells via intervention in a melanoma-related network.
    International Journal of Systems Science 01/2010; 41:5-16. · 1.31 Impact Factor
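The structural-intervention idea can be caricatured with a two-gene Boolean network subject to random gene perturbations. The wiring, perturbation probability, and rewiring choice below are invented; the melanoma-related network in the article is larger and the article's interventions are derived optimally rather than by hand:

```python
import random

def undesirable_mass(functions, steps=4000, bad_gene=0, perturb=0.05, seed=1):
    """Estimate the long-run fraction of time 'bad_gene' is ON in a
    synchronous Boolean network with PBN-style random gene perturbations
    (which keep every state reachable)."""
    rng = random.Random(seed)
    state = tuple(rng.random() < 0.5 for _ in functions)
    on = 0
    for _ in range(steps):
        if rng.random() < perturb:
            i = rng.randrange(len(functions))          # flip one random gene
            state = state[:i] + (not state[i],) + state[i + 1:]
        else:
            state = tuple(f(state) for f in functions)  # normal update
        on += state[bad_gene]
    return on / steps

# Mutually activating pair: the undesirable gene 0 sustains itself.
original = [lambda s: s[1], lambda s: s[0]]
# One-time structural intervention: sever the wiring into gene 0.
rewired = [lambda s: False, lambda s: s[0]]
```

Comparing the long-run undesirable mass before and after the wiring change mimics how a structural intervention is judged: by how much it shifts the steady-state distribution away from undesirable (e.g. metastasis-competent) states.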

Publication Stats

12k Citations
1,039.97 Total Impact Points

Institutions
  • 2003–2013
    • Translational Genomics Research Institute
      • Division of Computational Biology
      Phoenix, Arizona, United States
  • 2000–2012
    • Texas A&M University
      • Department of Electrical and Computer Engineering
      College Station, TX, United States
    • Georgetown University
      • Department of Oncology
      Washington, D. C., DC, United States
  • 2010
    • Mayo Clinic - Scottsdale
      Scottsdale, Arizona, United States
    • Université René Descartes - Paris 5
      Paris, Île-de-France, France
  • 2008
    • University of São Paulo
      • Departamento de Ciência da Computação (IME) (São Paulo)
      Ribeirão Preto, Estado de Sao Paulo, Brazil
    • Columbia University
      • Center for Radiological Research
      New York City, NY, United States
  • 2007
    • Arizona State University
      Phoenix, Arizona, United States
  • 2003–2007
    • University of Texas MD Anderson Cancer Center
      • Department of Pathology
      Houston, TX, United States
  • 2001–2007
    • National Human Genome Research Institute
      Maryland, United States
    • National Cancer Institute (USA)
      Maryland, United States
  • 1999–2005
    • NCI-Frederick
      Maryland, United States
    • Cancer Genetics, Inc.
      Rutherford, New Jersey, United States
  • 2004
    • National Institute on Aging
      • Laboratory of Immunology (LI)
      Baltimore, Maryland, United States
  • 1995–2003
    • National Institutes of Health
      • Branch of Pediatric Oncology
      • Branch of Cancer Genetics
      • Laboratory of Cancer Biology and Genetics
      Maryland, United States
    • Johns Hopkins Medicine
      • Department of Pathology
      Baltimore, MD, United States
    • Concordia University–Ann Arbor
      Ann Arbor, Michigan, United States
  • 1997
    • Howard Hughes Medical Institute
      Ashburn, Virginia, United States
    • Universität Heidelberg
      • Institute of Human Genetics
      Heidelberg, Baden-Württemberg, Germany