Systematic integration of experimental data and models in systems biology.

School of Chemistry, The University of Manchester, Manchester M13 9PL, UK.
BMC Bioinformatics 01/2010; 11:582. DOI: 10.1186/1471-2105-11-582
Source: DBLP

ABSTRACT The behaviour of biological systems can be deduced from their mathematical models. However, constructing a model requires multiple sources of data in diverse forms to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models therefore depends on data integration processes in which data and analytical resources interoperate.
Taverna workflows have been developed for the automated assembly of quantitative, parameterised metabolic networks in the Systems Biology Markup Language (SBML). The workflows build an SBML model systematically, starting with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories: the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI application for analysing biochemical networks. These systems biology workflows were evaluated on their ability to construct a parameterised model of yeast glycolysis.
Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.
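The assembly step can be pictured with a short, self-contained sketch: generating an SBML skeleton (two species, one reaction) with Python's standard library. This is an illustration only, not the paper's Taverna workflows; all identifiers (model, species, and reaction ids) are hypothetical, and a real pipeline would use a dedicated library such as libSBML rather than raw XML construction.

```python
import xml.etree.ElementTree as ET

# SBML Level 3 Version 1 Core namespace.
SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"

def build_minimal_sbml():
    # Serialise SBML elements in the default namespace.
    ET.register_namespace("", SBML_NS)
    sbml = ET.Element("{%s}sbml" % SBML_NS, {"level": "3", "version": "1"})
    model = ET.SubElement(sbml, "{%s}model" % SBML_NS, {"id": "yeast_fragment"})

    # A qualitative network: species first (ids are hypothetical).
    species_list = ET.SubElement(model, "{%s}listOfSpecies" % SBML_NS)
    for sid in ("glucose", "glucose_6_phosphate"):
        ET.SubElement(species_list, "{%s}species" % SBML_NS,
                      {"id": sid, "compartment": "cytosol",
                       "initialConcentration": "1.0",
                       "hasOnlySubstanceUnits": "false",
                       "boundaryCondition": "false", "constant": "false"})

    # Then the reaction connecting them.
    reactions = ET.SubElement(model, "{%s}listOfReactions" % SBML_NS)
    ET.SubElement(reactions, "{%s}reaction" % SBML_NS,
                  {"id": "hexokinase", "reversible": "false"})
    return ET.tostring(sbml, encoding="unicode")

xml_text = build_minimal_sbml()
```

Parameterisation, in this picture, amounts to filling in kineticLaw elements and parameter values retrieved from the repositories described in the abstract.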

  •
    ABSTRACT: The scientific literature contains a tremendous amount of kinetic data describing the dynamic behaviour of biochemical reactions over time. These data are needed for computational modelling to create models of biochemical reaction networks and to obtain a better understanding of the processes in living cells. To extract this knowledge from the literature, biocurators are required to understand a paper and interpret the data. For modellers, as well as experimentalists, this process is very time-consuming because the information is distributed across the publication and, in most cases, is insufficiently structured and often described without standard terminology. In recent years, biological databases for different data types have been developed. The advantages of these databases lie in their unified structure, searchability and the potential for augmented analysis by software, which supports the modelling process. We have developed the SABIO-RK database for biochemical reaction kinetics. In the present review, we describe the challenges for database developers and curators, beginning with an analysis of relevant publications up to the export of database information in a standardized format. The aim of the present review is to draw the experimentalist's attention to the data-integration problems posed by incompletely and imprecisely written publications. We describe how to lower the barrier for curators and improve this situation. At the same time, we are aware that curating experimental data takes time. There is a community concerned with making the task of publishing data with the proper structure and annotation to ontologies much easier. In this respect, we highlight some useful initiatives and tools.
    FEBS Journal 10/2013; 281(2). DOI:10.1111/febs.12562
  •
    ABSTRACT: Systems biology projects and omics technologies have led to a growing number of biochemical pathway models and reconstructions. However, the majority of these models are still created de novo, based on literature mining and the manual processing of pathway data. To increase the efficiency of model creation, the Path2Models project has automatically generated mathematical models from pathway representations using a suite of freely available software. Data sources include KEGG, BioCarta, MetaCyc and SABIO-RK. Depending on the source data, three types of models are provided: kinetic, logical and constraint-based. Models from over 2,600 organisms are encoded consistently in SBML and are made freely available through BioModels Database. Each model contains the list of participants, their interactions, the relevant mathematical constructs, and initial parameter values. Most models are also available as easy-to-understand graphical SBGN maps. To date, the project has resulted in more than 140,000 freely available models. Such a resource can tremendously accelerate the development of mathematical models by providing initial starting models for simulation and analysis, which can be subsequently curated and further parameterized.
    BMC Systems Biology 11/2013; 7(1):116. DOI:10.1186/1752-0509-7-116
  •
    ABSTRACT: The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, stable dynamics, and a realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments.
    PLoS ONE 11/2013; 8(11):e79195. DOI:10.1371/journal.pone.0079195
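The SABIO-RK retrieval discussed in the first cited abstract can be sketched as assembling a REST query. The endpoint path and the field names (Organism, Enzymename) follow SABIO-RK's published web-services interface but should be treated as assumptions and checked against the current service documentation; the sketch only constructs the URL and performs no network access.

```python
from urllib.parse import urlencode

# Assumed SABIO-RK REST endpoint returning matching kinetic laws as SBML.
SABIO_BASE = "http://sabiork.h-its.org/sabioRestWebServices/searchKineticLaws/sbml"

def sabio_query_url(organism, enzyme):
    # SABIO-RK accepts a Lucene-style query string in the 'q' parameter;
    # quoting keeps multi-word organism names as a single phrase.
    query = 'Organism:"{}" AND Enzymename:"{}"'.format(organism, enzyme)
    return SABIO_BASE + "?" + urlencode({"q": query})

url = sabio_query_url("Saccharomyces cerevisiae", "hexokinase")
```

The SBML returned by such a query is what a parameterisation workflow would merge into the qualitative network.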
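The thermodynamic consistency required of the rate laws in the last cited abstract can be made concrete with a reversible Michaelis-Menten rate law written in Haldane-consistent form, where the net rate vanishes exactly at the mass-action equilibrium P/S = Keq. The parameter values below are arbitrary, for illustration only.

```python
def reversible_mm(S, P, Vf, Ks, Kp, Keq):
    # Reversible Michaelis-Menten in Haldane-consistent form:
    # the (1 - (P/S)/Keq) factor makes the net rate zero exactly
    # when the mass-action ratio P/S equals Keq, and negative
    # (reverse flux) beyond it.
    numerator = Vf * (S / Ks) * (1.0 - (P / S) / Keq)
    denominator = 1.0 + S / Ks + P / Kp
    return numerator / denominator

# Far below equilibrium: net forward flux.
v_forward = reversible_mm(S=2.0, P=0.1, Vf=1.0, Ks=0.5, Kp=1.0, Keq=4.0)
# At equilibrium (P/S = Keq = 4): net rate is exactly zero.
v_eq = reversible_mm(S=1.0, P=4.0, Vf=1.0, Ks=0.5, Kp=1.0, Keq=4.0)
```

Rate laws of this shape can be parameterised from measured kinetic constants while still guaranteeing that every reaction respects its equilibrium constant.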
