A Decade of Toxicogenomic Research and Its Contribution to Toxicological Science
Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, US Food and Drug Administration, 3900 NCTR Road, Jefferson, AR 72079, USA.
Toxicological Sciences, 07/2012; 130(2). DOI: 10.1093/toxsci/kfs223
Toxicogenomics enjoyed considerable attention as a ground-breaking addition to conventional toxicology assays at its inception. In recent years, however, expectations for the pace at which toxicogenomics would deliver have been tempered. Next to cost, the lack of advanced knowledge discovery and data mining tools has significantly hampered progress in this new field of toxicological sciences. Recently, two of the largest toxicogenomics databases were made freely available to the public. These comprehensive studies are expected to stimulate knowledge discovery and the development of novel data mining tools, which are essential to advance this field. In this review, we provide a concise summary of each of these two databases, with a brief discussion of the commonalities and differences between them. We place our emphasis on some key questions in toxicogenomics and how these questions can be appropriately addressed with the two databases. Lastly, we provide a perspective on the future direction of toxicogenomics and how new technologies such as RNA-Seq may impact this field.
- "Toxicogenomics combines toxicology with omics technologies to investigate the mechanisms underlying a toxicological response (Waters and Fostel, 2004). Microarray-based gene expression profiling still remains the core technological platform in toxicogenomic research (Chen et al., 2012). It is a well-established technique that provides genome-wide information on transcriptomic changes (Shi et al., 2006) and is used to obtain better insight into the molecular mechanisms underlying drug-induced liver toxicity (Cheng et al., 2011; Cui and Paules, 2010; Nuwaysir et al., 1999)."
ABSTRACT: In order to improve attrition rates of candidate drugs, there is a need for a better understanding of the mechanisms underlying drug-induced hepatotoxicity. We aim to further unravel the toxicological response of hepatocytes to a prototypical cholestatic compound by integrating transcriptomic and metabonomic profiling of HepG2 cells exposed to Cyclosporin A. Cyclosporin A exposure induced intracellular cholesterol accumulation and diminished intracellular bile acid levels. Performing pathway analyses of significant mRNAs and metabolites separately and integrated resulted in more relevant pathways for the latter. Integrated analyses showed pathways involved in cell cycle and cellular metabolism to be significantly changed. Moreover, pathways involved in protein processing of the endoplasmic reticulum, bile acid biosynthesis, and cholesterol metabolism were significantly affected. Our findings indicate that an integrated approach, combining metabonomics and transcriptomics data derived from representative in vitro models with bioinformatics, can improve our understanding of the mechanisms of action underlying drug-induced hepatotoxicity. Furthermore, we showed that integrating multiple omics, and thereby analyzing genes, microRNAs, and metabolites of the exposed model for drug-induced cholestasis, can give valuable information about mechanisms of drug-induced cholestasis in vitro and could therefore be used in toxicity screening of new drug candidates at an early stage of drug discovery. Copyright © 2015. Published by Elsevier Ltd.
Toxicology in Vitro, 04/2015; 29(3). DOI: 10.1016/j.tiv.2014.12.016
- "Traditional approaches for the assessment of toxicological properties of compounds rely heavily on animal testing (Chen et al. 2012). Several issues related to animal experiments have led to the need for alternative experimental methods. "
ABSTRACT: A joint US-EU workshop on enhancing data sharing and exchange in toxicogenomics was held at the National Institute of Environmental Health Sciences. Currently, efficient reuse of data is hampered by problems related to public data availability, data quality, database interoperability (the ability to exchange information), standardization, and sustainability. At the workshop, experts from universities and research institutes presented databases, studies, organizations, and tools that attempt to deal with these problems. Furthermore, a case study was presented showing that combining toxicogenomics data from multiple resources leads to more accurate predictions in risk assessment. All participants agreed that there is a need for a web portal describing the diverse, heterogeneous data resources relevant for toxicogenomics research. Furthermore, there was agreement that linking more data resources would improve toxicogenomics data analysis. To outline a roadmap for enhancing interoperability between data resources, the participants recommend collecting user stories from the toxicogenomics research community on barriers in data sharing and exchange that currently hamper answering certain research questions. These user stories may guide the prioritization of steps to be taken to enhance integration of toxicogenomics databases.
Archives of Toxicology, 10/2014; DOI: 10.1007/s00204-014-1387-3
- "For example, biological profiling may help identify molecule classes, which although chemically distinct, have a common biological mechanism and provide a means for compound repositioning or an understanding of adverse effects (Lounkine et al., 2012). Similarly, systematic efforts to understand the toxicity of compounds have resulted in large publicly available datasets with noted examples including data sets from Iconix Biosciences (Ganter et al., 2005), the National Institute of Biomedical Innovation (NIBIO, Japan) (Uehara et al., 2010), and ToxCast released from the Environmental Protection Agency (Chen et al., 2012; Kavlock et al., 2012; Knudsen et al., 2013). Optimal employment of these databases requires expert teams of biologists, chemists, and informatics scientists with critical consideration of the source data. "
ABSTRACT: The wealth of bioactivity information now available on low-molecular-weight compounds has enabled a paradigm shift in chemical biology and early-phase drug discovery efforts. Traditionally, chemical libraries have been most commonly employed in screening approaches in which a bioassay is used to characterize a chemical library in a random search for active samples. However, robust curation of bioassay data, the establishment of ontologies enabling the mining of large chemical biology datasets, and a wealth of public chemical biology information have made possible the establishment of highly annotated compound collections. Such annotated chemical libraries can now be used to build a pathway/target hypothesis and have led to a new view in which chemical libraries are used to characterize a bioassay. In this article, we discuss the types of compounds in these annotated libraries, composed of tools, probes, and drugs. As well, we provide a rationale and a few examples for how such libraries can enable phenotypic/forward chemical genomic approaches. As with any approach, there are several pitfalls that need to be considered, and we also outline some strategies to avoid them.
Frontiers in Pharmacology, 07/2014; 5. DOI: 10.3389/fphar.2014.00164