Article

Selection Policy-Induced Reduction Mappings for Boolean Networks

Department of Veterinary Physiology & Pharmacology, Texas A&M University, College Station, TX, USA
IEEE Transactions on Signal Processing (Impact Factor: 2.81). 10/2010; DOI: 10.1109/TSP.2010.2050314
Source: IEEE Xplore

ABSTRACT: Developing computational models paves the way to understanding, predicting, and influencing the long-term behavior of genomic regulatory systems. However, several major challenges must be addressed before such models can be applied successfully in practice. Their inherently high complexity calls for complexity-reduction strategies. Reducing a model's complexity by removing genes and treating them as latent variables leads to the problem of selecting which states, and which corresponding transitions, best account for the presence of those latent variables. We use the Boolean network (BN) model to develop a general framework for selection and for reducing the model's complexity by designating some of its variables as latent. We also study the effects of the selection policies on the steady-state distribution and on the controllability of the model.
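To make the reduction step concrete, here is a minimal Python sketch that marginalizes one gene out of a BN that has been perturbed into an ergodic Markov chain over its 2^n states. The SSD-weighted merging used here is one plausible selection policy, shown purely for illustration; it is not necessarily the paper's exact construction, and the function names are assumptions.

```python
import numpy as np

def steady_state(P, tol=1e-12):
    """Stationary distribution of an ergodic row-stochastic matrix P
    (power iteration; assumes a perturbed, hence ergodic, BN chain)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

def reduce_network(P, n, latent):
    """Marginalize gene `latent` out of a 2**n-state chain P.

    Illustrative selection policy (an assumption, not the paper's):
    merge full states that agree on the remaining n-1 genes,
    weighting each by its steady-state mass.
    """
    pi = steady_state(P)
    keep = [g for g in range(n) if g != latent]

    def proj(s):
        # project a bit-encoded full state onto the retained genes
        return sum(((s >> g) & 1) << i for i, g in enumerate(keep))

    m = 2 ** (n - 1)
    Q = np.zeros((m, m))
    w = np.zeros(m)
    for s in range(2 ** n):
        r = proj(s)
        w[r] += pi[s]
        for t in range(2 ** n):
            Q[r, proj(t)] += pi[s] * P[s, t]
    return Q / w[:, None]  # rows renormalize to a stochastic matrix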

Related publications:

  • ABSTRACT: We present a novel algorithm (CoDReduce) for reducing the size of a probabilistic Boolean network (PBN) model of genomic regulation. The algorithm uses the coefficient of determination (CoD) to find the best candidate 'deletion' gene. The selection policy, which determines how the transition probabilities of the reduced network are obtained from those of the original network, is designed using the steady-state distribution (SSD) of the model. The performance of the algorithm is measured by the shift in the steady-state distribution after applying the mean-first-passage-time (MFPT) control policy, and by the relative effect of the selection policy on the MFPT control policy. (A sketch of the CoD computation appears after this list.)
    01/2009.
  • ABSTRACT: The most effective cancer treatments are those that prolong patients' lives while offering a reasonable quality of life during and after treatment. The treatments must also act rapidly and with high efficiency, so that a very large percentage of tumor cells die or shift into a state where they stop proliferating. Owing to biological and microenvironmental variability among tumor cells, the action period of an administered drug can vary across a population of patients. In this paper, based on a recently proposed model for tumor growth inhibition, we first probabilistically characterize the variability of the length of drug action. We then present a methodology for devising optimal intervention strategies for any Markovian genetic regulatory network governing the tumor when the antitumor drug has a random-length duration of action. (A sketch of how a random-length action can be folded into a discounted transition kernel appears after this list.)
    IEEE Transactions on Biomedical Engineering 07/2013; · 2.15 Impact Factor
  • ABSTRACT: Intervention in gene regulatory networks in the context of Markov decision processes has usually involved finding an optimal one-transition policy, where a decision whether or not to apply treatment is made at every transition. In an effort to model dosing constraints, a cyclic approach to intervention has previously been proposed, in which there is a sequence of treatment windows and treatment is allowed only at the beginning of each window. This protocol ignores two practical aspects of therapy. First, a treatment typically has some duration of action: a drug will be effective for some period, after which there can be a recovery phase. This, too, might follow a cyclic protocol; in practice, however, a physician might monitor a patient at every stage and decide whether to apply treatment, and if treatment is applied, the patient will be under the influence of the drug for some duration, followed by a recovery period. This results in an acyclic protocol. In this paper we take a unified approach to both cyclic and acyclic control with duration of effectiveness by placing the problem in the general framework of multiperiod decision epochs with infinite-horizon discounted cost. The time interval between successive decision epochs can span multiple time units: given the current state and the action taken, there is a joint probability distribution over the next state and the time of the next decision epoch. Optimal control policies are derived, synthetic networks are used to investigate the properties of both cyclic and acyclic interventions with fixed duration of effectiveness, and the methodology is applied to a mutated mammalian cell-cycle network. (A value-iteration sketch for this multiperiod setting appears after this list.)
    IEEE Transactions on Signal Processing 09/2012; 60(9):4930-4944. · 2.81 Impact Factor
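
For the first item above (CoDReduce): the coefficient of determination measures how much better a target gene can be predicted from a set of genes than by the best constant guess. Below is a minimal sketch of the standard empirical CoD estimate, assuming binary expression samples and a table-lookup optimal predictor; the function name and data layout are illustrative, not the paper's code.

```python
import numpy as np
from collections import Counter, defaultdict

def cod(X, y):
    """Empirical CoD of binary target y given binary predictor matrix X
    (rows are samples). Illustrative estimate: CoD = (eps0 - eps) / eps0."""
    y = np.asarray(y)
    n = len(y)
    # error of the best constant predictor (guess the majority class)
    eps0 = min(np.mean(y == 0), np.mean(y == 1))
    # error of the best table-lookup predictor built from X:
    # for each predictor pattern, output the majority target value
    buckets = defaultdict(Counter)
    for row, target in zip(map(tuple, X), y):
        buckets[row][target] += 1
    mistakes = sum(sum(c.values()) - max(c.values()) for c in buckets.values())
    eps = mistakes / n
    return 0.0 if eps0 == 0 else (eps0 - eps) / eps0
```

A reduction loop in the spirit of CoDReduce could then, for example, delete the gene whose value has the highest CoD given the remaining genes, so that removing it sacrifices the least predictive information; the exact ranking rule used by the paper may differ.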
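For the second item (random-length drug action): one illustrative way to fold duration variability into a Markov intervention model, not necessarily the paper's construction, is to aggregate the controlled transition matrix over the duration distribution into a single discounted kernel.

```python
import numpy as np

def treatment_kernel(Pc, f, gamma):
    """Discounted expected kernel for a drug that stays active for T steps,
    with T ~ f (f[t-1] = P(T = t)) and controlled transition matrix Pc.
    Illustrative aggregation: K = sum_t f(t) * gamma**t * Pc**t."""
    K = np.zeros_like(Pc)
    power = np.eye(Pc.shape[0])
    for t, prob in enumerate(f, start=1):
        power = power @ Pc              # Pc ** t
        K += prob * gamma ** t * power
    return K
```

The resulting substochastic kernel encodes E[gamma^T 1{X_T = s'}] and can be plugged into the multiperiod value iteration sketched next; per-step costs accrued while the drug is active are omitted here for brevity.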
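For the third item (multiperiod decision epochs): a compact value-iteration sketch under the framework described in the abstract, assuming a joint kernel over the next state and the epoch length. The array layout, names, and fixed iteration count are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def multiperiod_value_iteration(P, cost, gamma, iters=500):
    """P[a]: array of shape (S, S, Tmax), the joint pmf over (next state,
    epoch length tau = 1..Tmax) given current state and action a.
    cost[a]: length-S immediate cost of taking action a in each state.
    Layout is an assumption chosen for this sketch."""
    S, _, Tmax = next(iter(P.values())).shape
    disc = gamma ** np.arange(1, Tmax + 1)   # gamma ** tau for each epoch length
    V = np.zeros(S)
    Q = {}
    for _ in range(iters):
        # expected discounted continuation E[gamma**tau * V(s')] plus cost
        Q = {a: cost[a] + (P[a] * disc).sum(axis=2) @ V for a in P}
        V = np.min(np.stack(list(Q.values())), axis=0)
    policy = {s: min(Q, key=lambda a: Q[a][s]) for s in range(S)}
    return V, policy
```

With Tmax = 1 this reduces to standard discounted value iteration, recovering the usual one-transition intervention policy as a special case.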

Full-text (3 Sources), available from May 29, 2014.