Article

Deterministic and stochastic models of genetic regulatory networks.

Institute for Systems Biology, Seattle, Washington, USA.
Methods in Enzymology (Impact Factor: 2.19). 01/2009; 467:335-56. DOI: 10.1016/S0076-6879(09)67013-0
Source: PubMed

ABSTRACT Traditionally, molecular biology research has tended to reduce biological pathways to composite units studied as isolated parts of the cellular system. With the advent of high-throughput methodologies that can capture thousands of data points, and of powerful computational approaches, the reality of studying cellular processes at a systems level is upon us. As these approaches yield massive datasets, systems-level analyses have drawn upon other fields such as engineering and mathematics, adapting computational and statistical approaches to decipher relationships between molecules. Guided by high-quality datasets and analyses, one can begin the process of predictive modeling. The findings from such approaches are often surprising and beyond normal intuition. We discuss four classes of dynamical systems used to model genetic regulatory networks. The discussion is divided into continuous and discrete models, as well as deterministic and stochastic model classes. For each combination of these categories, a model is presented and discussed in the context of the yeast cell cycle, illustrating how different types of questions can be addressed by different model classes.
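As a minimal illustration of the continuous/discrete and deterministic/stochastic distinctions the abstract draws, the sketch below simulates one gene's mRNA count two ways: as a continuous-deterministic ODE and as a discrete-stochastic Gillespie simulation. The rate constants and function names are illustrative assumptions, not the chapter's actual yeast cell-cycle models.

    # Toy birth-death model of one gene's mRNA: constant transcription,
    # first-order degradation, simulated deterministically and stochastically.
    import random

    K_TX = 2.0    # transcription rate (molecules/min); assumed value
    K_DEG = 0.1   # degradation rate constant (1/min); assumed value

    def deterministic(x0=0.0, t_end=100.0, dt=0.01):
        """Continuous-deterministic ODE dx/dt = K_TX - K_DEG*x (Euler steps)."""
        x, t, traj = x0, 0.0, []
        while t < t_end:
            x += (K_TX - K_DEG * x) * dt
            t += dt
            traj.append((t, x))
        return traj

    def gillespie(x0=0, t_end=100.0, seed=1):
        """Discrete-stochastic simulation of the same two reactions (Gillespie SSA)."""
        rng = random.Random(seed)
        x, t, traj = x0, 0.0, [(0.0, x0)]
        while t < t_end:
            a_tx, a_deg = K_TX, K_DEG * x      # reaction propensities
            a_total = a_tx + a_deg
            t += rng.expovariate(a_total)      # waiting time to next reaction
            x += 1 if rng.random() < a_tx / a_total else -1
            traj.append((t, x))
        return traj

    # Both trajectories relax toward the same mean, K_TX / K_DEG = 20 molecules,
    # but the stochastic run fluctuates around it -- the noise that matters
    # when molecule counts are low.
    print(deterministic()[-1], gillespie()[-1])

The deterministic model answers questions about average behavior and steady states cheaply; the stochastic class becomes the appropriate choice when copy numbers are small enough that fluctuations change the qualitative outcome.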

Related publications:

  • ABSTRACT: In 2007, the U.S. National Research Council (NRC) released a report, "Toxicity Testing in the 21st Century: A Vision and a Strategy," that proposes a paradigm shift for toxicity testing of environmental agents. The vision is based on the notion that exposure to environmental agents leads to adverse health outcomes through the perturbation of toxicity pathways that are operative in humans. Implementation of the NRC vision will involve a fundamental change in the assessment of toxicity of environmental agents, moving away from adverse health outcomes observed in experimental animals to the identification of critical perturbations of toxicity pathways. Pathway perturbations will be identified using in vitro assays and quantified for dose response using methods in computational toxicology and other recent scientific advances in basic biology. Implementation of the NRC vision will require a major research effort, not unlike that required to map the human genome, extending over 10 to 20 years and involving the broad scientific community to map the important toxicity pathways operative in humans. This article provides an overview of the scientific tools and technologies that will form the core of the NRC vision for toxicity testing. Of particular importance will be the development of rapidly performed in vitro screening assays using human cells and cell lines or human tissue surrogates to efficiently identify environmental agents producing critical pathway perturbations. In addition to the overview of the NRC vision, this article documents the reactions of a number of stakeholder groups since 2007, including the scientific, risk assessment, regulatory, and animal welfare communities.
    Journal of Toxicology and Environmental Health Part B 02/2010; 13(2-4):163-96. DOI: 10.1080/10937404.2010.483933
  • ABSTRACT: First we define a new type of transition system in which states are symbolic (i.e., we should think of them as distributions on concrete states) and transitions are probabilistic. Apart from the intended meaning of states, this system type is equivalent to a special case of Segala's general PA. We propose a (non-deterministic) composition mechanism for this system type, as well as trace distribution and simulation semantics. (And pray very, very hard that these semantic notions are compositional.) Then we describe how to derive a transition system of the above type from a simple PIOA, using notions of probabilistic transition bundles and fibers. These notions are based on deterministic schedulers, as opposed to the more conventional randomized schedulers.
    (A toy sketch of this system type appears after this list.)
  • ABSTRACT: Surprisal analysis is a thermodynamic-like, molecular-level approach that identifies the biological constraints preventing the entropy from reaching its maximum. To examine the significance of altered gene expression levels in tumorigenesis, we apply surprisal analysis to the WI-38 model through its precancerous states. The constraints identified by the analysis are transcription patterns underlying the process of transformation. Each pattern highlights the role of a group of genes that act coherently to define a transformed phenotype. We identify a major transcription pattern that represents a contraction of signaling networks accompanied by induction of cellular proliferation and protein metabolism, which is essential for full transformation. In addition, a more minor, "tumor signature" transcription pattern completes the transformation process. The variation with time of the importance of each transcription pattern is determined. Midway through the transformation, at the stage when cells switch from a slow to a fast growth rate, the major transcription pattern undergoes a total inversion of its weight, while the more minor pattern does not contribute before that stage. A similar network reorganization occurs in two very different cellular transformation models: the WI-38 and the cervical cancer HF1 models. Our results suggest that, despite differences in the lists of transcripts expressed in different cancer models, the rationale of the network reorganization remains essentially the same.
    BMC Systems Biology 03/2011; 5:42. DOI: 10.1186/1752-0509-5-42
    (A toy sketch of surprisal analysis appears after this list.)
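A loose sketch of the system type the transition-systems abstract above defines: symbolic states as distributions over concrete states, pushed through probabilistic transition kernels, with a deterministic scheduler resolving the nondeterministic choice among enabled actions. The type aliases, function names, and coin example are illustrative assumptions, not the paper's formalism.

    # Symbolic states are probability distributions over concrete states;
    # each action's kernel maps a concrete state to a distribution.
    from typing import Callable, Dict, Hashable

    Dist = Dict[Hashable, float]  # concrete state -> probability

    def step(state: Dist, kernel: Callable[[Hashable], Dist]) -> Dist:
        """Push a symbolic state (distribution) through one probabilistic kernel."""
        out: Dist = {}
        for s, p in state.items():
            for t, q in kernel(s).items():
                out[t] = out.get(t, 0.0) + p * q
        return out

    def run(state: Dist, scheduler, kernels, n_steps: int) -> Dist:
        """A deterministic scheduler picks one enabled action per symbolic state,
        resolving the nondeterminism left by composition."""
        for _ in range(n_steps):
            state = step(state, kernels[scheduler(state)])
        return state

    # Tiny example: a coin that can be flipped ("flip") or held ("stay").
    kernels = {
        "flip": lambda s: {"H": 0.5, "T": 0.5},
        "stay": lambda s: {s: 1.0},
    }
    print(run({"H": 1.0}, lambda st: "flip", kernels, n_steps=3))  # {'H': 0.5, 'T': 0.5}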
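And a toy sketch of the surprisal-analysis idea from the last abstract, under the common assumption that the transcription patterns and their time-dependent weights are extracted by singular value decomposition of the log-expression matrix; the data here are random and the variable names are mine, not the paper's WI-38 pipeline.

    # Surprisal analysis sketch: ln X_i(t) = -sum_a lam_a(t) * G_ia, where the
    # dominant a = 0 term plays the steady-state role and the remaining terms
    # are the time-varying constraints (transcription patterns).
    import numpy as np

    rng = np.random.default_rng(0)
    logX = np.log(rng.uniform(1.0, 100.0, size=(500, 6)))  # genes x time points (toy)

    U, s, Vt = np.linalg.svd(logX, full_matrices=False)
    G = U                      # G[:, a]  = gene weights of transcription pattern a
    lam = -(s[:, None] * Vt)   # lam[a, t] = weight of pattern a at time point t
                               # (sign chosen to match the minus in the expansion)

    # The weight trajectories lam[a, :] are what the paper tracks over time,
    # e.g. the mid-transformation inversion of the major pattern's weight.
    for a in range(3):
        print(f"pattern {a}: weights over time =", np.round(lam[a], 2))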