Article

The MGED Ontology: a resource for semantics-based description of microarray experiments.

Center for Bioinformatics and Department of Genetics, University of Pennsylvania School of Medicine, USA.
Bioinformatics (Impact Factor: 4.62). 05/2006; 22(7):866-73. DOI: 10.1093/bioinformatics/btl005
Source: DBLP

ABSTRACT The generation of large amounts of microarray data and the need to share these data bring challenges for both data management and annotation and highlight the need for standards. MIAME specifies the minimum information needed to describe a microarray experiment, and the Microarray Gene Expression Object Model (MAGE-OM) and the resulting MAGE-ML provide a mechanism to standardize data representation for data exchange; however, a common terminology for data annotation is needed to support these standards.
Here we describe the MGED Ontology (MO) developed by the Ontology Working Group of the Microarray Gene Expression Data (MGED) Society. The MO provides terms for annotating all aspects of a microarray experiment, from the design of the experiment and array layout, through the preparation of the biological sample, to the protocols used to hybridize the RNA and analyze the data. The MO was developed to provide terms for annotating experiments in line with the MIAME guidelines, i.e. to provide the semantics needed to describe a microarray experiment according to the concepts specified in MIAME. The MO does not attempt to incorporate terms from existing ontologies, e.g. those that cover anatomical parts or developmental stages, but provides a framework for referencing terms in other ontologies and therefore facilitates the use of ontologies in microarray data annotation.
The MGED Ontology version 1.2.0 is available as a file in both DAML and OWL formats at http://mged.sourceforge.net/ontologies/index.php. Release notes and annotation examples are provided. The MO is also available via the NCICB's Enterprise Vocabulary System (http://nciterms.nci.nih.gov/NCIBrowser/Dictionary.do).
Contact: stoeckrt@pcbi.upenn.edu
Supplementary data are available at Bioinformatics online.
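
Since the MO is distributed as an OWL (RDF/XML) file, one way to explore its terms programmatically is to load it with a generic RDF library. The following is a minimal sketch using Python's rdflib, assuming the OWL file has been downloaded locally under the hypothetical name "MGEDOntology.owl"; it is illustrative only and not code from the paper.

    # Minimal sketch: enumerate the classes of the MGED Ontology OWL file.
    # Assumes the file was downloaded from the MGED site and saved locally
    # as "MGEDOntology.owl" (the actual file name may differ).
    from rdflib import Graph, RDF, RDFS, OWL

    g = Graph()
    g.parse("MGEDOntology.owl", format="xml")  # MO is distributed as RDF/XML

    # List named classes with their human-readable labels, where present.
    for cls in g.subjects(RDF.type, OWL.Class):
        label = g.value(cls, RDFS.label)
        print(cls, label)

Listing classes this way is only a starting point; the release notes and annotation examples mentioned above describe the intended usage in full.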

  • ABSTRACT: The field of translational biomedical informatics seeks to integrate knowledge from basic science, directed research into diseases, and clinical insights into a form that can be used to discover effective treatments for diseases. Currently, representations of experimental provenance reside in models specific to each sub-domain: biospecimen management tools track the histories of biospecimens and how they are handled and disposed of; high-throughput assay-based experiments are described in a format designed specifically for that data; and experimental workflow systems, such as Laboratory Information Management Systems (LIMS), each represent their portion of the research pipeline using models designed specifically for those tasks. In recent years, the concept of a general-purpose provenance model has emerged from the computational workflow domain. In bioinformatics there has been an explosion of data due to the use of high-throughput assays such as microarrays for research in biology and biomedicine. These assays produce data on, among other things, the current gene expression of cells, commonly occurring polymorphisms, and the epigenetic regulation of genes. During this time, the community has developed and adopted a standard for describing experiments and the data they generate. Adoption of this standard, along with data-sharing requirements from funding institutions, has resulted in the publication of tens of thousands of high-throughput experiments performed over the last ten years, and it has become the de facto format for describing experiments in biomedicine. This standard is referred to as the MAGE (MicroArray and Gene Expression) standard. As with other parts of the translational research pipeline, these experimental representations are primarily representations of workflow, but they are not currently integrated with other types of biomedical data. We propose a vision for a common model of provenance representations across the translational research pipeline, and show that one of the largest sources of data in that pipeline, microarray-based experiments, can be accurately represented in general-purpose models of provenance that are already used to represent computational workflows. We demonstrate methods and tools to generate RDF representations of a commonly used MAGE format, MAGE-TAB, and mappings of MAGE documents to two general-purpose provenance representations: OPM (Open Provenance Model) and PML (Proof Markup Language). Through a use-case simulation and round-trip analysis of selected examples, we show that the data represented in MAGE documents can be completely represented in OPM and PML. The success in mapping MAGE documents into general-purpose provenance models shows promise for realizing the translational research provenance vision. (An illustrative RDF-mapping sketch follows this list.)
  • ABSTRACT: With ongoing breakthroughs in science and technology, the life sciences are entering an era of big data. More and more big-data-related projects and activities are under way worldwide. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, known as the Semantic Web, is an extension of the current Web that aims to provide information not only for humans but also for computers, so that large-scale data can be processed semantically. The paper presents a survey of big data in the life sciences, big-data-related projects, and Semantic Web technologies. It introduces the main Semantic Web technologies and their current status, and analyzes in detail how they address the heterogeneous variety of life sciences big data. The paper thus clarifies the role of Semantic Web technologies in the big data era and how they provide a promising solution for big data in the life sciences.
    Bioscience Trends 01/2014; 8(4):192-201. Impact Factor: 1.21
  • ABSTRACT: With the advent of inexpensive assay technologies, there has been unprecedented growth in genomics data as well as in the number of databases in which they are stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL, which makes large-scale reuse or integration with other knowledge bases very difficult. (An illustrative SPARQL query sketch follows this list.)
    Journal of Biomedical Semantics 01/2014; 5(Suppl 1, Proceedings of the Bio-Ontologies Special Interest Group):S3.
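
To make the first of the bulleted abstracts above concrete, the sketch below expresses a single MAGE-TAB-style step (a labeled extract hybridized to produce a raw data file) as provenance-style RDF triples with rdflib. The namespace URIs and term names (Artifact, Process, used, wasGeneratedBy) are hypothetical stand-ins loosely modeled on OPM concepts, not the authors' actual mapping or tools.

    # Illustrative sketch only: one SDRF-like row rendered as provenance triples.
    from rdflib import Graph, Namespace, Literal, RDF

    EX = Namespace("http://example.org/experiment/")    # hypothetical experiment namespace
    PROV = Namespace("http://example.org/provenance#")  # stand-in for an OPM/PML vocabulary

    g = Graph()

    # Artifacts: the labeled extract and the raw data file it gives rise to.
    g.add((EX["extract1"], RDF.type, PROV["Artifact"]))
    g.add((EX["raw_data_1"], RDF.type, PROV["Artifact"]))

    # Process: the hybridization step described in the MAGE-TAB row.
    g.add((EX["hyb1"], RDF.type, PROV["Process"]))
    g.add((EX["hyb1"], PROV["used"], EX["extract1"]))
    g.add((EX["raw_data_1"], PROV["wasGeneratedBy"], EX["hyb1"]))
    g.add((EX["hyb1"], PROV["label"], Literal("Hybridization 1")))

    print(g.serialize(format="turtle"))

A round-trip check of the kind that abstract mentions would then regenerate the MAGE-TAB row from these triples and compare it with the original.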

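The third abstract's point about SPARQL-queryable sample annotation can be illustrated with a small in-memory example. All URIs, predicates, and the ontology term below are hypothetical and chosen only for illustration.

    # Illustrative sketch: query samples annotated with a given ontology term.
    from rdflib import Graph, Namespace, Literal, RDF

    EX = Namespace("http://example.org/samples/")
    g = Graph()
    g.add((EX["sample1"], RDF.type, EX["Sample"]))
    g.add((EX["sample1"], EX["annotatedWith"], EX["term_0001"]))  # hypothetical term URI
    g.add((EX["sample1"], EX["label"], Literal("liver biopsy 1")))

    query = """
    PREFIX ex: <http://example.org/samples/>
    SELECT ?sample ?label WHERE {
        ?sample a ex:Sample ;
                ex:annotatedWith ex:term_0001 ;
                ex:label ?label .
    }
    """
    for row in g.query(query):
        print(row.sample, row.label)
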