Toward a checklist for exchange and interpretation of data from a toxicology study.

NIEHS, LMIT ITSS Contract, Research Triangle Park, North Carolina 27709-2233, USA.
Toxicological Sciences (Impact Factor: 4.33). 10/2007; 99(1):26-34. DOI: 10.1093/toxsci/kfm090
Source: PubMed

ABSTRACT: Data from toxicology and toxicogenomics studies are valuable and can be combined for meta-analysis using public data repositories such as the Chemical Effects in Biological Systems (CEBS) Knowledgebase, ArrayExpress, and the Gene Expression Omnibus. To fully utilize the data for secondary analysis, it is necessary to have a description of the study and good annotation of the accompanying data. This study annotation permits sophisticated cross-study comparison and analysis, and allows data from comparable subjects to be identified and fully understood. The Minimum Information About a Microarray Experiment (MIAME) standard was proposed to permit deposition and sharing of microarray data. We propose the first step toward an analogous standard for a toxicogenomics/toxicology study by describing a checklist of information that best practices suggest should accompany the study data. When the information in this checklist is deposited together with the study data, it helps the public explore the study data in context, identify data from similarly treated subjects, and identify potential sources of experimental variability. The proposed checklist summarizes useful information to include when sharing study data for publication, deposition into a database, or electronic exchange with collaborators. It is not a description of how to carry out an experiment, but a definition of how to describe one. It is anticipated that once a toxicology checklist is accepted and put into use, toxicology databases can be configured to require and output these fields, making it straightforward to annotate data for interpretation by others.
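In practice, a checklist of this kind amounts to a set of required annotation fields that a database can enforce before accepting a study deposition. The sketch below is a minimal illustration of that idea; the field names are hypothetical examples, not the actual contents of the checklist proposed in the article.

```python
# Hypothetical sketch: validating a study record against a required-field
# checklist before deposition. The field names below are illustrative only,
# not the checklist actually proposed in the article.

REQUIRED_FIELDS = {
    "study_title",
    "test_article",             # chemical or agent administered
    "species",
    "dose_levels",
    "route_of_administration",
    "exposure_duration",
}

def missing_fields(record: dict) -> set:
    """Return the checklist fields absent from a study record."""
    return REQUIRED_FIELDS - record.keys()

record = {
    "study_title": "90-day oral toxicity study",
    "test_article": "Compound X",
    "species": "Rattus norvegicus",
    "dose_levels": [0, 10, 50, 250],  # mg/kg/day
}

# Flags the annotations still needed before the record is shareable.
print(sorted(missing_fields(record)))
# → ['exposure_duration', 'route_of_administration']
```

A repository configured to "require and output these fields", as the abstract anticipates, would reject or flag any submission for which this check returns a non-empty set.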

  • ABSTRACT: We examined the extent to which consensus exists on the criteria that should be used for assessing the credibility of a scientific work, regardless of its funding source, and explored how these criteria might be implemented. Three publications, all presented at a session of the 2009 annual meeting of the Society for Risk Analysis, have proposed a range of criteria for evaluating the credibility of scientific studies. At least two other similar sets of criteria have recently been proposed elsewhere. In this article we review these criteria, highlight the commonalities among them, and integrate them into a list of 10 criteria. We also discuss issues inherent in any attempt to implement the criteria systematically. Recommendations by many scientists and policy experts converge on a finite list of criteria for assessing the credibility of a scientific study without regard to funding source. These criteria should be formalized through a consensus process or a governmental initiative that includes discussion and pilot application of a system for reproducibly implementing them. Formal establishment of such a system should enable the debate regarding chemical studies to move beyond funding issues and focus on scientific merit.
    Environmental Health Perspectives 12/2010; 119(6):757-64. · 7.26 Impact Factor
  • ABSTRACT: A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, researchers need quantitative in vitro and ideally in vivo data for large numbers of chemicals for common sets of assays and endpoints. A number of groups are compiling such data sets into publicly available web-based databases. This article (1) reviews some of the underlying challenges to the development of the databases, (2) describes key technologies used (relational databases, ontologies, and knowledgebases), and (3) summarizes several major database efforts that are widely used in the computational toxicology field.
    Journal of Toxicology and Environmental Health Part B 02/2010; 13(2-4):218-31. · 3.90 Impact Factor
  • ABSTRACT: Due to recent advances in data storage and sharing for further data processing in predictive toxicology, there is an increasing need for flexible data representations, secure and consistent data curation, and automated data quality checking. Toxicity prediction involves multidisciplinary data. There are hundreds of collections of chemical, biological, and toxicological data that are widely dispersed, mostly in the open literature, professional research bodies, and commercial companies. In order to better manage and make full use of such large amounts of toxicity data, there is a trend toward developing data governance functionality in predictive toxicology to formalise a set of processes that guarantee high data quality and better data management. In this paper, data quality refers mainly to quality in a data-storage sense (e.g. accuracy, completeness, and integrity) and not in a toxicological sense (e.g. the quality of experimental results). This paper reviews seven widely used predictive toxicology data sources and applications, with a particular focus on their data governance aspects, including data accuracy, data completeness, data integrity, metadata and its management, data availability, and data authorisation. The review reveals current problems (e.g. the lack of systematic, standard measures of data quality) and desirable improvements (e.g. better management and further use of captured metadata, and the development of flexible multi-level user access authorisation schemas) in the development of predictive toxicology data sources. The analytical results will help to address a significant gap in toxicology data quality assessment and lead to the development of novel frameworks for predictive toxicology data and model governance. While the discussed public data sources are well developed, there nevertheless remain gaps in the development of a data governance framework to support predictive toxicology. In this paper, data governance is identified as the new challenge in predictive toxicology, and good use of it may provide a promising framework for developing high-quality, easily accessible toxicity data repositories. This paper also identifies important research directions that require further investigation in this area.
    Journal of Cheminformatics 01/2011; 3(1):24. · 3.59 Impact Factor
