# An empirical comparison of low-dose extrapolation from points of departure (PoD) compared to extrapolations based upon methods that account for model uncertainty

**ABSTRACT** Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level, often within the range of experimental doses considered; a linear extrapolation to lower doses typically follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches are model averaging (MA) and semi-parametric methods. We apply these methods, along with the PoD approach, to a large (over 40,000 animals) bioassay that exhibited sub-linearity. When models are fit to high-dose data and used to predict risks at low doses, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization does. This comparison provides empirical support to accompany previous simulation studies suggesting that methods incorporating model uncertainty are viable, and arguably preferred, alternatives to linear extrapolation from a PoD.
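The PoD linearization described above can be sketched numerically. This is a minimal illustration on made-up data (not the bioassay analyzed in the paper), using a quantal Weibull model as an assumed sub-linear dose-response curve; all doses, proportions, and parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical quantal bioassay data: dose groups and tumor proportions.
dose = np.array([0.0, 30.0, 60.0, 120.0, 150.0])
prop = np.array([0.01, 0.02, 0.05, 0.20, 0.35])

def weibull(d, g, b, k):
    # Quantal Weibull model with background response g; sub-linear when k > 1.
    return g + (1.0 - g) * (1.0 - np.exp(-b * np.power(d, k)))

(g, b, k), _ = curve_fit(weibull, dose, prop, p0=[0.01, 1e-4, 1.5],
                         bounds=([0.0, 0.0, 0.5], [0.5, 1.0, 5.0]))

# Point of departure: the dose giving 10% extra risk (BMD10),
# solved from (P(d) - g) / (1 - g) = bmr.
bmr = 0.10
bmd10 = (-np.log(1.0 - bmr) / b) ** (1.0 / k)

# Linear extrapolation from the PoD: a straight line from zero to the PoD.
low_dose = 1.0
linear_risk = bmr * low_dose / bmd10

# Extra risk predicted at the same dose by the fitted (sub-linear) model.
model_risk = (weibull(low_dose, g, b, k) - g) / (1.0 - g)
```

Because the fitted curve is sub-linear (k > 1 for these data), the straight line from the PoD sits above the model at low doses, which is exactly the gap between the PoD linearization and model-based low-dose estimates that the paper examines empirically.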

## References


**ABSTRACT:** The conception of the ED01 study, its implementation, and now its revealing conclusions have coincided with the intense controversy over cancer risk assessment. Explanation of the dose-response relationship at low doses is fundamental to the regulatory agencies' posture that there is no threshold level below which a carcinogen cannot exert its carcinogenic effect. Consistent with this philosophy, linear extrapolation is also a non-threshold method. The evidence from the ED01 study, although never intended to directly answer or validate these views, has provided a massive and overwhelming experimental profile and database that lends support to regulatory policies. Admittedly, risk assessment is a young and developing science. Some scientists label it qualitative rather than quantitative. However, risk assessment is a necessary and vital step in the decision-making process for any individual chemical and cannot be avoided. Most of the objections to cancer risk assessment reflect a reluctance to accept extrapolation from high to low doses. The four most widely employed models, in descending order of conservatism, are the linear, multi-stage, Mantel-Bryan, and Cornfield models. It is true that risk estimates using these four models vary dramatically. The debate centers on the fact that inadequate experimental evidence was available to determine which model most accurately delineated the dose-response relationship at extremely low exposure levels. The availability of the ED01 database should help resolve this controversy in favor of the conservative linear model. Additionally, the experimental design of the ED01 study and the data generated from it should assist in developing more reliable and cost-effective animal bioassays with which to screen chemicals for carcinogenicity, as well as provide risk assessment information.
Journal of Environmental Pathology and Toxicology 02/1980; 3(3 Spec No):1-7.

**ABSTRACT:** Quantitative risk assessment proceeds by first estimating a dose-response model and then inverting this model to estimate the dose that corresponds to some prespecified level of response. The parametric form of the dose-response model often plays a large role in determining this dose. Consequently, the choice of the proper model is a major source of uncertainty when estimating such endpoints. While methods exist that attempt to incorporate the uncertainty by forming an estimate based upon all models considered, such methods may fail when the true model is on the edge of the space of models considered and cannot be formed from a weighted sum of constituent models. We propose a semiparametric model for dose-response data and derive a dose estimate associated with a particular response. In this formulation, the only restriction on the model form is that it is monotonic. We use this model to estimate the dose-response curve from a long-term cancer bioassay, and compare it to methods currently used to account for model uncertainty. A small simulation study shows that the method is superior to model averaging when estimating exposure that arises from a quantal-linear dose-response mechanism, and performs similarly to these methods when investigating nonlinear dose-response patterns.
Risk Analysis 03/2012; 32(7):1207-18. DOI: 10.1111/j.1539-6924.2011.01786.x
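The monotonicity-only restriction can be illustrated with a pool-adjacent-violators (isotonic) fit. This is not the paper's semiparametric model, just a hand-rolled sketch on hypothetical data showing how a benchmark dose can be read off a fit constrained only to be non-decreasing:

```python
import numpy as np

def pava(y, w):
    # Pool-adjacent-violators: weighted least-squares fit constrained
    # to be non-decreasing. Adjacent violating blocks are merged into
    # their weighted mean until the sequence is monotone.
    stack = []
    for v, wt in zip(y, w):
        stack.append([float(v), float(wt), 1])  # value, weight, count
        while len(stack) > 1 and stack[-2][0] > stack[-1][0]:
            v2, w2, c2 = stack.pop()
            v1, w1, c1 = stack.pop()
            stack.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    out = []
    for v, _, c in stack:
        out.extend([v] * c)
    return np.array(out)

# Hypothetical quantal data: dose, group size, responders.
dose = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n = np.array([100, 100, 100, 100, 100])
events = np.array([2, 3, 9, 7, 45])   # note the dip at dose 100

fitted = pava(events / n, n)          # the dip is pooled away

# Invert the fitted curve for the dose at 10% extra risk over background.
background = fitted[0]
extra = (fitted - background) / (1.0 - background)
bmd10 = float(np.interp(0.10, extra, dose))
```

The only assumption imposed on the curve is monotonicity; no parametric shape is chosen, which is the sense in which such semiparametric approaches sidestep the model-form uncertainty discussed above.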

**ABSTRACT:** Quantitative risk assessment involves the determination of a safe level of exposure. Recent techniques use the estimated dose-response curve to estimate such a safe dose level. Although such methods have attractive features, a low-dose extrapolation is highly dependent on the model choice. Fractional polynomials, basically being a set of (generalized) linear models, are a nice extension of classical polynomials, providing the necessary flexibility to estimate the dose-response curve. Typically, one selects the best-fitting model in this set of polynomials and proceeds as if no model selection were carried out. We show that model averaging using a set of fractional polynomials reduces bias and has better precision in estimating a safe level of exposure (say, the benchmark dose), as compared to an estimator from the selected best model. To estimate a lower limit of this benchmark dose, an approximation of the variance of the model-averaged estimator, as proposed by Burnham and Anderson, can be used. However, this is a conservative method, often resulting in unrealistically low safe doses. Therefore, a bootstrap-based method to more accurately estimate the variance of the model-averaged parameter is proposed.
Risk Analysis 03/2007; 27(1):111-23. DOI: 10.1111/j.1539-6924.2006.00863.x
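The Akaike-weight averaging attributed above to Burnham and Anderson can be sketched as follows, with two simple quantal models standing in for the paper's fractional-polynomial family; the data, models, and starting values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quantal bioassay data.
dose = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n = np.array([50, 50, 50, 50, 50])
events = np.array([1, 2, 4, 9, 22])

def nll(params, model):
    # Binomial negative log-likelihood for a dose-response model.
    p = np.clip(model(dose, *params), 1e-8, 1.0 - 1e-8)
    return -np.sum(events * np.log(p) + (n - events) * np.log(1.0 - p))

# Two candidate models: quantal-linear and quantal-quadratic.
def qlinear(d, g, b):
    return g + (1.0 - g) * (1.0 - np.exp(-b * d))

def qquad(d, g, b):
    return g + (1.0 - g) * (1.0 - np.exp(-b * d**2))

candidates = [(qlinear, [0.02, 0.003]), (qquad, [0.02, 1e-5])]
fits, aics = [], []
for model, p0 in candidates:
    res = minimize(nll, p0, args=(model,), method="Nelder-Mead")
    fits.append(res.x)
    aics.append(2.0 * res.fun + 2.0 * len(p0))  # AIC = -2 logL + 2k

# Akaike weights: w_i proportional to exp(-delta_i / 2).
delta = np.array(aics) - min(aics)
weights = np.exp(-delta / 2.0)
weights /= weights.sum()

# Model-averaged risk estimate at a low dose.
low_dose = 5.0
avg_risk = sum(w * model(low_dose, *p)
               for w, (model, _), p in zip(weights, candidates, fits))
```

In practice the averaged estimate (or the averaged benchmark dose) would then be paired with a variance estimate; the bootstrap alternative the abstract proposes refits the candidate set to resampled data and recomputes the averaged quantity each time.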