Animal carcinogenicity studies: 1. Poor human predictivity

Animal Consultants International, London SE11 4NR, UK.
Alternatives to Laboratory Animals: ATLA (Impact Factor: 1.59). 02/2006; 34(1):19-27.
Source: PubMed


The regulation of human exposure to potentially carcinogenic chemicals constitutes society's most important use of animal carcinogenicity data. Environmental contaminants of greatest concern within the USA are listed in the Environmental Protection Agency's (EPA's) Integrated Risk Information System (IRIS) chemicals database. However, of the 160 IRIS chemicals lacking even limited human exposure data but possessing animal data that had received a human carcinogenicity assessment by 1 January 2004, we found that in most cases (58.1%; 93/160), the EPA considered animal carcinogenicity data inadequate to support a classification of probable human carcinogen or non-carcinogen. For the 128 chemicals with human or animal data also assessed by the World Health Organisation's International Agency for Research on Cancer (IARC), human carcinogenicity classifications were compatible with EPA classifications only for those 17 having at least limited human data (p = 0.5896). For those 111 primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001). The IARC is a leading international authority on carcinogenicity assessments, and its significantly different human carcinogenicity classifications of identical chemicals indicate that: 1) in the absence of significant human data, the EPA is over-reliant on animal carcinogenicity data; 2) as a result, the EPA tends to over-predict carcinogenic risk; and 3) the true predictivity for human carcinogenicity of animal data is even poorer than is indicated by EPA figures alone. The EPA policy of erroneously assuming that tumours in animals are indicative of human carcinogenicity is implicated as a primary cause of these errors.
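The abstract's agreement comparisons (p = 0.5896 for chemicals with at least limited human data; p < 0.0001 for those primarily reliant on animal data) are categorical-agreement tests on small contingency tables. The underlying tables are not reproduced here, so the sketch below uses purely hypothetical counts; it only illustrates how a two-sided Fisher exact test, a standard choice for such small-sample 2x2 comparisons, is computed, and checks the abstract's 58.1% = 93/160 arithmetic.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    prob = lambda k: comb(r1, k) * comb(r2, c1 - k) / denom
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible values of cell a
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts (NOT the paper's data): agreement vs. disagreement
# between two agencies' classifications, split across two chemical groups.
print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))  # → 0.4857

# The abstract's 58.1% figure is simply 93/160:
print(round(93 / 160 * 100, 1))  # → 58.1
```

The exact test is used rather than a chi-square approximation because several cells in classification-agreement tables of this size are typically small.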



Available from: Jonathan Balcombe, May 29, 2014
  • Source
    • "Similarly, when one considers all chronically used human pharmaceuticals, some 50% induce tumors in rodents, yet only 20 human pharmaceutical compounds have been identified as carcinogens in epidemiological studies, despite the fact that quite a large number of epidemiological studies have been carried out on these compounds (e.g., nonsteroidal anti-inflammatory drugs, benzodiazepines, and phenobarbital). This high incidence of tumors in bioassays has led to questions concerning the human relevance of tumors induced in rodents (Knight et al. 2006; Ward 2008)."
    ABSTRACT: The current safety paradigm for assessing carcinogenic properties of drugs, cosmetics, industrial chemicals, and environmental exposures relies mainly on in vitro genotoxicity testing followed by 2-year rodent bioassays. This testing battery is extremely sensitive but has low specificity. Furthermore, rodent bioassays are associated with high costs, high animal burden, and limited predictive value for human risks. We provide a response to a growing appeal for a paradigm change in human cancer risk assessment. Methods: To facilitate development of a road map for this needed paradigm change in carcinogenicity testing, a workshop titled "Genomics in Cancer Risk Assessment" brought together toxicologists from academia and industry and government regulators and risk assessors from the United States and the European Union. Participants discussed the state-of-the-art in developing alternative testing strategies for carcinogenicity, with emphasis on potential contributions from omics technologies. The goal of human risk assessment is to decide whether a given exposure to an agent is acceptable to human health and to provide risk management measures based on evaluating and predicting the effects of exposures on human health. Although exciting progress is being made using genomics approaches, a new paradigm that uses these methods and human material when possible would provide mechanistic insights that may inform new predictive approaches (e.g., in vitro assays) and facilitate the development of genomics-derived biomarkers. Regulators appear to be willing to accept such approaches where use is clearly defined, evidence is strong, and approaches are qualified for regulatory use.
    Full-text · Article · Dec 2010 · Environmental Health Perspectives
  • Source
    • "However, this has traditionally been undertaken by conducting experiments such as those currently being performed by the U.S. National Toxicology Program (NTP) involving lifetime exposures of two species of rodent to determine which substances may have carcinogenic effects on humans. While clearly such work is important, it is time-consuming, expensive and requires the sacrifice of a large number of laboratory animals (Knight et al., 2006)."
    ABSTRACT: In this work, Quantitative Structure-Activity Relationship (QSAR) modelling was used as a tool for predicting the carcinogenic potency of a set of 39 nitroso-compounds, which have been bioassayed in male rats by using the oral route of administration. The optimum QSAR model provided evidence of good fit and predictive performance on the training set. It was able to account for about 84% of the variance in the experimental activity and exhibited high values of the determination coefficients of cross-validation, leave-one-out and bootstrapping (q²(LOO) = 78.53 and q²(Boot) = 74.97). Such a model was based on spectral moments weighted with Gasteiger-Marsili atomic charges, polarizability and hydrophobicity, as well as with Abraham indexes, specifically the summation solute hydrogen bond basicity and the combined dipolarity/polarizability. This is the first study to have explored the possibility of combining Abraham solute descriptors with spectral moments. A reasonable interpretation of these molecular descriptors from a toxicological point of view was achieved by means of taking into account bond contributions. The set of relationships so derived revealed the importance of the length of the alkyl chains for determining carcinogenic potential of the chemicals analysed, and were able to explain the difference between mono-substituted and di-substituted nitrosoureas as well as to discriminate between isomeric structures with hydroxyl-alkyl and alkyl substituents in different positions. Moreover, they allowed the recognition of structural alerts in classical structures of two potent nitrosamines, consistent with their biotransformation. These results indicate that this new approach has the potential for improving carcinogenicity predictions based on the identification of structural alerts.
    Full-text · Article · May 2008 · Toxicology and Applied Pharmacology
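The q²(LOO) statistic quoted in the QSAR abstract above (reported on a 0-100 scale) follows the usual definition q² = 1 − PRESS/TSS, where PRESS is the sum of squared errors of predictions made with each compound held out in turn. The sketch below is a minimal, stdlib-only illustration for a hypothetical one-descriptor least-squares model; the paper's actual model, built on spectral-moment descriptors, is not reproduced here.

```python
def loo_q2(x, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS/TSS for a simple
    one-descriptor linear least-squares model (illustrative only)."""
    n = len(x)
    preds = []
    for i in range(n):
        # Refit the model with sample i held out ...
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                 / sum((a - mx) ** 2 for a in xs))
        # ... then predict the held-out sample.
        preds.append(my + slope * (x[i] - mx))
    ybar = sum(y) / n
    press = sum((p - t) ** 2 for p, t in zip(preds, y))
    tss = sum((t - ybar) ** 2 for t in y)
    return 1 - press / tss

# Perfectly linear (hypothetical) data: every held-out point is predicted
# exactly, so PRESS = 0 and q^2 = 1.
print(loo_q2([0, 1, 2, 3, 4], [1, 3, 5, 7, 9]))  # → 1.0
```

Unlike the training-set r², q² penalizes overfitting: a model that merely memorizes its training data predicts held-out samples poorly, inflating PRESS and driving q² down (it can even go negative).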
  • Source
    • "The logistical challenges incurred through reliance on traditional animal bioassays to meet such unprecedented testing demands are aptly demonstrated by the traditional rodent carcinogenicity bioassay. This assay takes upwards of two years to produce results of demonstrably poor human specificity (Knight et al., 2006b), at an average cost of €780,000 (Fleischer, 2007; see also Combes et al., 2007). Unsurprisingly, by 1998, only about 2,000 (2.7%) of the 75,000 industrial chemicals then in use and listed within the EPA Toxic Substances Control Act inventory had been tested for carcinogenicity (Epstein, 1998)."
    ABSTRACT: Laboratory animal models are limited by scientific constraints on human applicability, and increasing regulatory restrictions, driven by social concerns. Reliance on laboratory animals also incurs marked - and in some cases, prohibitive - logistical challenges, within high-throughput chemical testing programmes, such as those currently underway within Europe and the US. However, a range of non-animal methodologies is available within biomedical research and toxicity testing. These include: mechanisms to enhance the sharing and assessment of existing data prior to conducting further studies, and physicochemical evaluation and computerised modelling, including the use of structure-activity relationships and expert systems. Minimally-sentient animals from lower phylogenetic orders or early developmental vertebral stages may be used, as well as microorganisms and higher plants. A variety of tissue cultures, including immortalised cell lines, embryonic and adult stem cells, and organotypic cultures, are also available. In vitro assays utilising bacterial, yeast, protozoal, mammalian or human cell cultures exist for a wide range of toxic and other endpoints. These may be static or perfused, and may be used individually, or combined within test batteries. Human hepatocyte cultures and metabolic activation systems offer potential assessment of metabolite activity and organ-organ interaction. Microarray technology may allow genetic expression profiling, increasing the speed of toxin detection, well prior to more invasive endpoints. Enhanced human clinical trials utilising microdosing, staggered dosing, and more representative study populations and durations, as well as surrogate human tissues, advanced imaging modalities and human epidemiological, sociological and psychological studies, may increase our understanding of illness aetiology and pathogenesis, and facilitate the development of safe and effective pharmacologic interventions.
    Particularly when human tissues are used, non-animal models may generate faster, cheaper results, more reliably predictive for humans, whilst yielding greater insights into human biochemical processes. Greater commitment to their development and implementation is necessary, however, to efficiently meet the needs of high-throughput chemical testing programmes, important emerging testing needs, and the ongoing development of human clinical interventions.
    Preview · Article · Feb 2008