ALTEX 30, 1/13
Food for Thought …
Integrated Testing Strategies for Safety
Thomas Hartung 1,2, Tom Luechtefeld 1, Alexandra Maertens 1, and Andre Kleensang 1
1Johns Hopkins University, Bloomberg School of Public Health, CAAT, Baltimore, USA; 2University of Konstanz, CAAT-
Replacing a test on a living organism with a cellular, chemico-analytical, or computational approach obviously is reductionistic. Sometimes this might work well, e.g., when an extreme pH is a clear indication of corrosivity. In general, however, it is naïve to expect a single system to substitute for all mechanisms, the entire applicability domain (substance classes), and degrees of severity. Still, toxicology has long neglected this when requesting a one-to-one replacement to substitute for the traditional animal test. We might even extend this to say it is similarly naïve to address an entire human health effect with a single animal experiment using inbred, young rodents… the only way to approximate human relevance is to mimic the complexity and responsiveness of the organ situation and to model the respective kinetics, i.e., the target hoped for from the human-on-a-chip approach (Hartung and Zurlo, 2012). Everything else requires making use of several information sources, if not compromising the coverage of the test. Genotoxicity is a nice example, where patches have continuously been added to cover the various mechanisms. However, here the simplest possible strategy, i.e., a battery of tests where every positive result is considered a liability, causes problems. We have seen where the inevitable accumulation of false-positives leads (Kirkland et al., 2005), ultimately undermining the credibility of in vitro approaches.
Despite the fact that toxicology uses many stand-alone tests, a systematic combination of several information sources very often is required. Examples include: when not all possible outcomes of interest (e.g., modes of action), classes of test substances (applicability domains), or severity classes of effect are covered in a single test; when the positive test result is rare (low prevalence leading to excessive false-positive results); when the gold standard test is too costly or uses too many animals, creating a need for prioritization by screening. Similarly, tests are combined when the human predictivity of a single test is not satisfactory or when existing data and evidence from various tests will be integrated. Increasingly, kinetic information also will be integrated to make an in vivo extrapolation from in vitro data.
Integrated Testing Strategies (ITS) offer the solution to these problems. ITS have been discussed for more
than a decade, and some attempts have been made in test guidance for regulations. Despite their obvious
potential for revamping regulatory toxicology, however, we still have little guidance on the composition,
validation, and adaptation of ITS for different purposes. Similarly, Weight of Evidence and Evidence-based
Toxicology approaches require different pieces of evidence and test data to be weighed and combined.
ITS also represent the logical way of combining pathway-based tests, as suggested in Toxicology for the
21st Century. This paper describes the state of the art of ITS and makes suggestions as to the definition,
systematic combination, and quality assurance of ITS.
Keywords: Integrated testing strategies, prioritization, predictivity, quality assurance, Tox-21c
“Playing safe is probably the most
unsafe thing in the world.
You cannot stand still.
You must go forward”
Robert Collier (1885-1950)
et al., 2012). The two differ to some extent as the REACH-ITS also include in vivo data and are somewhat restricted to the tools prescribed in legislation. This largely excludes the 21st century methodologies (van Vliet, 2011), i.e., omics, high-throughput, and high-content imaging techniques, which are not mentioned in the legislative text. The very narrow interpretation of the legislative text in administering REACH does not encourage such additional approaches. This represents a tremendous lost opportunity, and some additional flexibility and “learning on the job” would benefit one of the largest investments in consumer safety.
Astonishingly, despite these prospects and billions of euros spent for REACH, the literature on ITS for safety assessments is still poor, and little progress toward consensus and guidance has been made. For example, two In Vitro Testing Industrial Platform workshops were summarized stating (De Wever et al., 2012): “As yet, there is great dispute among experts on how to represent ITS for classification, labeling, or risk assessments of chemicals, and whether or not to focus on the whole chemical domain or on a specific application. The absence of accepted Weight of Evidence (WoE) tools allowing for objective judgments was identified as an important issue blocking any significant progress in the area.” Similarly, the ECVAM/EPAA workshop concluded (Kinsner-Ovaskainen et al., 2012): “Despite the fact that some useful insights and preliminary conclusions could be extracted from the dynamic discussions at the workshop, regretfully, true consensus could not be reached on all aspects.”
We earlier commissioned a white paper on ITS (Jaworska and Hoffmann, 2010) in the context of our transatlantic think tank for toxicology (t4) and a 2010 conference on 21st Century Validation Strategies for 21st Century Tools. It similarly concluded: “Although a pressing concern, the topic of ITS has drawn mostly general reviews, broad concepts, and the expression of a clear need for more research on ITS (Hengstler et al., 2006; Worth et al., 2007; Benfenati et al., 2010). Published research in the field remains scarce (Gubbels-van Hal et al., 2005; Hoffmann et al., 2008a; Jaworska et al., 2010a).”
It is worth noting, also, that testing strategies from the pharmaceutical industry do not help much. They try to identify an active compound (the future drug) out of thousands of substances, without regard to what they miss – but this approach is unacceptable in a safety ITS. Pharmacology screening also typically starts with a target, i.e., a mode of action, while toxicological assessments need to be open to various mechanisms, some as yet uncharacterized, until we have a comprehensive list of relevant pathways of toxicity (Hartung and McBride, 2011).
Because they grew out of alternative methods and REACH, ITS discussions are more prominent in Europe (Hartung, 2010d). However, in principle they resonate very strongly with the US approach of toxicity testing in the 21st century (Tox-21c) (Hartung, 2009c). The latter suggests moving regulatory toxicology to mechanisms (the pathways of toxicity, PoT). This means breaking the hazard down into its modes of action and combining them with chemico-physical properties (including QSAR) and PBPK models. This implies, similarly, that different pieces of evidence and tests be strategically combined.
The solution is the “intelligent” or “integrated” use of several information sources in a testing strategy (ITS). There is a lot of confusion around this term, especially regarding how to design, validate, and use ITS.
This article aims to elaborate on these aspects with examples and to outline the prospects of ITS in toxicology. It thereby expands the thoughts elaborated for the introduction to the roadmap for animal-free systemic toxicity testing (Basketter et al., 2012). The underlying problems and the approach are not actually unique to toxicology. The most evident similarity is to diagnostic testing strategies in clinical medicine, where several sources of information are used, similarly, for differential diagnosis; we discussed these similarities earlier (Hoffmann and
The two origins of ITS in safety assessments
When do we need a test and when do we need a testing strategy?
We need more than one test, if:
– not all possible outcomes of interest (e.g., modes of action) are covered in a single test
– not all classes of test substances are covered (applicability domains)
– not all severity classes of effect are covered
– the positive test result is rare (low prevalence) and the number of false-positive results becomes excessive (Hoffmann and Hartung, 2005)
– the gold standard test is too costly or uses too many animals and substances need to be prioritized
– the accuracy (human predictivity) is not satisfying and predictivity can be improved
– existing data and evidence from various tests shall be integrated
– kinetic information shall be integrated to make an in vivo extrapolation from in vitro data (Basketter et al., 2012)
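The low-prevalence point above can be made concrete with a short calculation. The sketch below uses hypothetical numbers (not from this paper) to compute the positive predictive value of a single test:

```python
# Sketch with hypothetical numbers: why a rare positive outcome (low
# prevalence) makes false-positive results excessive for a single test.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A seemingly good test (80% sensitivity, 90% specificity) applied where
# only 5% of substances are truly positive:
ppv = positive_predictive_value(0.80, 0.90, 0.05)  # ≈ 0.30
```

Under these assumed numbers, roughly two of three positives would be false, which is exactly the situation that calls for a confirmatory second test, i.e., a strategy.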
Altogether, it is difficult to imagine a case where we should not apply a testing strategy. It is astonishing how long we have continued to pursue “one test suits all” solutions in toxicology.
A restricted usefulness (applicability domain) was stated, but it was only within the discussion on integrated testing of in vitro, in silico, and toxicokinetics (absorption, distribution, metabolism, excretion, i.e., ADME) information that such integration was attempted. Bas Blaauboer and colleagues long ago spearheaded this (DeJongh et al., 1999; Forsby and Blaauboer, 2007; Blaauboer, 2010; Blaauboer and Barratt, 1999). The first ITS were accepted as OECD test guidelines in 2002 for eye and skin irritation (OECD TG 404, 2002a; OECD TG 405, 2002b). A major driving force then was the emerging REACH legislation, which sought to make use of all available information for registration of chemicals (especially existing chemicals) in order to limit costs and animal use. This prompted the call for Intelligent TS (Anon., 2005; van Leeuwen et al., 2007; Ahlers et al., 2008; Schaafsma et al., 2009; Vonk et al., 2009; Combes and Balls, 2011; Leist et al., 2012; Gabbert and Benighaus, 2012; Rusyn
A screen/screening test is often a rapid, simple test method conducted for the purpose of classifying substances into a general category of hazard. The results of a screening test generally are used for preliminary decision making in the context of a testing strategy (i.e., to assess the need for additional and more definitive tests). Screening tests often have a truncated response range in that positive results may be considered adequate to determine if a substance is in the highest category of a hazard classification system without the need for further testing, but are not usually adequate without additional information/tests to make decisions pertaining to lower levels of hazard or safety of the substance.
Test (or assay): An experimental system used to obtain information on the adverse effects of a substance. Used interchangeably with assay.
Test battery: A series of tests usually performed at the same time or in close sequence. Each test within the battery is designed to complement the other tests and generally to measure a different component of a multi-factorial toxic effect. Also called base set or minimum data set in ecotoxicology.
Test method: A process or procedure used to obtain information on the characteristics of a substance or agent. Toxicological test methods generate information regarding the ability of a substance or agent to produce a specified biological effect under specified conditions. Used interchangeably with “test” and “assay”.
Following a series of ECVAM internal meetings, an ECVAM/EPAA workshop was held to address this (Kinsner-Ovaskainen et al., 2009), and it came up with a working definition: “As previously defined within the literature, an ITS is essentially an information-gathering and generating strategy, which in itself does not have to provide means of using the information to address a specific regulatory question. However, it is generally assumed that some decision criteria will be applied to the information obtained, in order to reach a regulatory conclusion. Normally, the totality of information would be used in a weight-of-evidence (WoE) approach.” WoE had been addressed in an earlier ECVAM workshop (Balls et al., 2006): “Weight of evidence (WoE) is a phrase used to describe the type of consideration made in a situation where there is uncertainty and which is used to ascertain whether the evidence or information supporting one side of a cause or argument is greater than that supporting the other side.” It is of critical importance to understand that WoE and ITS are two different concepts although they combine the same types of information! In WoE there is no formal integration, usually no strategy, and often no testing. WoE is much more a “poly-pragmatic shortcut” to come to a preliminary decision, where there is no or only limited certainty. As proponents of evidence-based toxicology (EBT) (Hoffmann and Hartung, 2006), we have to admit that the term EBT further contributes to this confusion (Hartung, 2009b). However, there is obvious cross-talk between these approaches when, for example, the quality
The need for a definition of ITS
Currently, the best reference for definitions of terminology is provided by OECD guidance document 34 on validation (OECD, 2005). An extract of the most relevant definitions is given in Box 1. Notably, the term (integrated) test strategy is
Relevant definitions from OECD Series on
Testing and Assessment No. 34 (OECD, 2005)
Adjunct test: Test that provides data that add to or help interpret the results of other tests and provide information useful for the risk assessment process.
Assay: Used interchangeably with test.
Data interpretation procedure (DIP): An interpretation procedure used to determine how well the results from the test predict or model the biological effect of interest. See
Decision criteria: The criteria in a test method protocol that describe how the test method results are used for decisions on classification or other effects measured or predicted by the test method.
Definitive test: A test that is considered to generate sufficient data to determine the specific hazard or lack of hazard of the substance without the need for further testing, and which may therefore be used to make decisions pertaining to hazard or safety of the substance.
Hierarchical (tiered) testing approach: An approach where a series of tests to measure or elucidate a particular effect are used in an ordered sequence. In a typical hierarchical testing approach, one or a few tests are initially used; the results from these tests determine which (if any) subsequent tests are to be used. For a particular chemical, a weight-of-evidence decision regarding hazard could be made at any stage (tier) in the testing strategy, in which case there would be no need to proceed to subsequent tiers.
In silico models: Approaches for the assessment of chemicals based on the use of computer-based estimations or simulations. Examples include structure-activity relationships (SARs), quantitative structure-activity relationships (QSARs), and expert systems.
(Q)SARs (Quantitative Structure-Activity Relationships): Theoretical models for making predictions of physico-chemical properties, environmental fate parameters, or biological effects (including toxic effects in environmental and mammalian species). They can be divided into two major types, QSARs and SARs. QSARs are quantitative models yielding a continuous or categorical result, while SARs are qualitative relationships in the form of structural alerts that incorporate molecular substructures or fragments related to the presence or absence of activity.
is prescribed, albeit loosely, based on average biological relevance and is left to expert judgment. In contrast, our definition enables an integrated and systematic approach to guide testing such that the sequence is not necessarily prescribed ahead of time but is tailored to the chemical-specific situation. Depending on the already available information on a specific chemical, the sequence might be adapted and optimized for meeting specific information targets.”
It might be useful to start from scratch with our definitions to
avoid some glitches.
– The leading principle should be that a test gives one result, and it does not matter how many endpoints (measurements) the test requires. Figure 1 shows these different scenarios. A test/assay thus consists of a test system (biological in vivo or in vitro model) and a Standard Operation Protocol (SOP) including endpoint(s) to measure, reference substance(s), data interpretation procedure (a way to express the result), information on reproducibility/uncertainty, applicability domain/information on limitations, and favorable performance standards. Note that tests can include multiple test systems and/or multiple endpoints as long as they lead to one result.
– An integrated test strategy is an algorithm to combine (different) test result(s) and, possibly, non-test information (existing data, in silico extrapolations from existing data or modeling) to give a combined test result. They often will have interim decision points at which further building blocks may be considered.
– A battery of tests is a group of tests that complement each other but are not integrated into a strategy. A classical example is the genotoxicity testing battery.
scoring of studies developed for EBT (Schneider et al., 2009) helps to filter their use in WoE and ITS approaches.
The following definition was put forward by the ECVAM/EPAA workshop (Kinsner-Ovaskainen et al., 2009): “In the context of safety assessment, an Integrated Testing Strategy is a methodology which integrates information for toxicological evaluation from more than one source, thus facilitating decision-making. This should be achieved whilst taking into consideration the principles of the Three Rs (reduction, refinement and replacement).” In line with the proposal put forward in the 2007 OECD Workshop on Integrated Approaches to Testing and Assessment, they reiterated, “a good ITS should be structured, transparent, and hypothesis driven” (OECD, 2008).
Jaworska and Hoffmann (2010) defined ITS somewhat differently: “In narrative terms, ITS can be described as combinations of test batteries covering relevant mechanistic steps and organized in a logical, hypothesis-driven decision scheme, which is required to make efficient use of generated data and to gain a comprehensive information basis for making decisions regarding hazard or risk. We approach ITS from a system analysis perspective and understand them as decision support tools that synthesize information in a cumulative manner and that guide testing in such a way that information gain in a testing sequence is maximized. This definition clearly separates ITS from tiered approaches in two ways. First, tiered approaches consider only the information generated in the last step for a decision as, for example, in the current regulated sequential testing strategy for skin irritation (OECD, 2002[a]) or the recently proposed in vitro testing strategy for eye irritation (Scott et al., 2009). Secondly, in tiered testing strategies the sequence of tests
Fig. 1: Three prototypic tests
(a) a simple test with one endpoint, (b) two test systems giving a joint result, and (c) multiple endpoints (including omics and
other high-content analysis)
– Tiered testing describes the simplest ITS, where a sequence of tests is defined without formal integration of results.
– A probabilistic TS describes an ITS, where the different building blocks change the probability for a test result.
– Validation of a test or an ITS requires a prediction model (a way to translate it to the point of reference) and the point of reference itself, which can be correlative on the basis of results, or mechanistic.
Some of these aspects are shown in Figure 2.
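The notion of an ITS as an algorithm with interim decision points can be sketched in a few lines (the building blocks below are hypothetical stand-ins, not prescribed tests):

```python
# Minimal sketch: an ITS as an algorithm over building blocks with interim
# decision points. Each block returns "positive", "negative", or
# "inconclusive"; testing stops at the first conclusive result.
def integrated_test_strategy(substance, building_blocks):
    for block in building_blocks:
        result = block(substance)
        if result in ("positive", "negative"):  # interim decision point
            return result
    return "inconclusive"  # would escalate, e.g., to a definitive test

# Toy blocks standing in for existing data, in silico, and in vitro steps:
existing_data = lambda s: s.get("existing", "inconclusive")
in_silico     = lambda s: s.get("qsar", "inconclusive")
in_vitro      = lambda s: s.get("cell_assay", "inconclusive")

outcome = integrated_test_strategy({"qsar": "positive"},
                                   [existing_data, in_silico, in_vitro])
```

In this toy run the in silico block already yields a conclusive result, so the in vitro block is never executed, which is the resource-saving point of interim decision points.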
Composition of ITS – no GOBSATT!
The ITS in use to date are based on consensus processes often called “weight of evidence” (WoE) approaches. Such “Good old boys sitting around the table” (GOBSATT) is not the method of choice to compose ITS. The complexity of data and the multiplicity of performance aspects to consider (costs, animal use, time, predictivity, etc.) (Nordberg et al., 2008; Gabbert and van Ierland, 2010) call for simulation based on test data.
The shortcomings of existing ITS were recently analyzed in detail by Jaworska et al. (2010): “Though both current ITS and WoE approaches are undoubtedly useful tools for systemizing chemical hazard and risk assessment, they lack a consistent methodological basis for making inferences based on existing information, for coupling existing information with new data from different sources, and for analyzing test results within and across testing stages in order to meet target information requirements.” And in more detail in (Jaworska and Hoffmann, 2010): “The use of flow charts as the ITS’ underlying structure may lead to inconsistent decisions. There is no guidance on how to conduct consistent and transparent inference about the information target, taking into account all relevant evidence and its interdependence. Moreover, there is no guidance, other than purely expert-driven, regarding the choice of the subsequent tests that would maximize information gain.” Hoffmann et al. (2008a) provided a pioneering example
of ITS evaluation focused on skin irritation. They compiled a database of 100 chemicals. A number of strategies, both animal-free and inclusive of animal data, were constructed and subsequently evaluated considering predictive capacities, severity of misclassifications, and testing costs. Note that the different ITS to be compared were “hand-made,” i.e., based on scientific reasoning and intuition, but not on any construction principles. They correctly conclude: “To promote ITS, further guidance on construction and multi-parameter evaluation need to be developed.” Similarly, the ECVAM/EPAA workshop only stated needs (Kinsner-Ovaskainen et al., 2009): “So far, there is also a lack of scientific knowledge and guidance on how to develop an ITS and, in particular, on how to combine the different building blocks for an efficient and effective decision-making process. Several aspects should be taken into account in this regard, including:
– the extent of flexibility in combining the ITS components;
– the optimal combination of ITS components (including the minimal number of components and/or combinations that have a desired predictive capacity);
– the applicability domain of single components and the whole
Fig. 2: Components of a test (strategy) and its traditional (correlative) or mechanistic validation
domain are more complex and typically only the overall outcome can be validated. The other opportunity is to combine tests not with Boolean logic but with fuzzy/probabilistic logic. This means that the result is not dichotomous (toxic or not) but a probability or score is assigned. We could say that a value between 0 (non-toxic) and 1 (toxic) is assigned. Such combinations typically will only allow use in the overlapping applicability domains. It also implies that only the overall ITS can be validated. The challenge here lies mostly in the point of reference, which normally needs to be graded and not dichotomous as well.
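One way to illustrate such a probabilistic combination (all numbers hypothetical) is to treat each building block as updating the probability of toxicity, e.g., via a likelihood ratio:

```python
# Sketch: probabilistic instead of Boolean combination. Each building block
# contributes a likelihood ratio (>1 supports "toxic", <1 supports
# "non-toxic") that updates the prior probability; numbers are illustrative.
def update_probability(prior, likelihood_ratios):
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)  # back to a probability between 0 and 1

# Prior of 0.10, one strongly positive block (LR = 8) and one weakly
# negative block (LR = 0.6):
p = update_probability(0.10, [8.0, 0.6])  # ≈ 0.35
```

The output is a graded score rather than a dichotomous call, which is exactly why the point of reference for validating such an ITS also needs to be graded.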
The advantages of a probabilistic approach were recently summarized by Jaworska and Hoffmann (2010): “Further, probabilistic methods are based on fundamental principles of logic and rationality. In rational reasoning every piece of evidence is consistently valued, assessed, and coherently used in combination with other pieces of evidence. While knowledge- and rule-based systems, as manifested in current testing strategy schemes, typically model the expert’s way of reasoning, probabilistic systems describe dependencies between pieces of evidence (towards an information target) within the domain of interest. This ensures the objectivity of the knowledge representation. Probabilistic methods allow for consistent reasoning when handling conflicting data, incomplete evidence, and heterogeneous pieces of evi-
The applicability domain of single components
and the whole ITS:
Simple logic shows, as discussed above, that in most instances an ITS can be applied only where all building blocks applied to a substance allow so. The picture changes only if the combination serves exactly the purpose of expanding the applicability domain (by combining two tests with OR). This implies, however, that essentially the same thing is measured (i.e., similarity of tests); if tests differ in applicability domain and what they measure, a hierarchy needs to be established first. This is one of the key arguments for flexibility of ITS, as we need to exchange building blocks for others to meet the applicability domain for a given substance.
The efficiency of the ITS:
Typically, efficiency refers to resources such as cost and labor. Animal use and suffering, however, lie outside its scope. How to value the replacement of an animal test is a societal decision. In the EU legislation, the term “reasonably available” is used to mandate the use of an alternative (Hartung, 2010a). This leaves room for interpretation, but there certainly are limits: How much more costly can an alternative method be to be reasonably available? The cost/benefit calculation also needs to include societal acceptability. However, this is missing the point: In the end, the concept of efficiency centers on predicting human health and environmental effects. What are the costs of a test versus the risk of a scandal? If we only attempt to be as good as the animal test, however, this argument has no leverage. Thus we need to advance to human relevance if we really want impact. This is difficult on the level of correlation, because we typically do not have the human data for a statistically sufficient number of substances. More and more, however, we do
– the efficiency of the ITS (cost, time, technical difficulties)”
Using this “wish list” as guidance, some aspects will be discussed.
Extent of flexibility in combining the ITS components:
This is a key dilemma – any validation “sets tests into stone” and “freezes them in time” (Hartung, 2007). An ITS, however, is so much larger than individual tests that there are even more reasons for change (technical advances, limitations of individual ITS components for the given study substance, availability of all tests in a given setting, etc.). What is needed here is a measure of similarity of tests and performance standards. The latter concept was introduced in the modular approach to validation (Hartung et al., 2004) and is now broadly used for the new validations. It defines what criteria a “me-too” development (a term borrowed from the pharmaceutical industry, where a competitor follows the innovative, pioneering work of another company, introducing a compound with the same active principle) must fulfill to be considered equivalent to the original one. The idea is to avoid undertaking another full-blown validation ring trial, which requires enormous resources. There is some difference in interpretation as to whether there still needs to be a multi-laboratory exercise to establish inter-laboratory reproducibility and transferability as well. Note that this requires demonstrating the similarity of tests, for which we have no real guidance. It also implies, however, that any superiority of the new test compared to the originally validated one cannot be shown. For ITS components, in the same way, similarity and performance criteria need to be established to allow exchange for something different without a complete reevaluation of the ITS. This can first be based on the scientific relevance and the PoT covered, as argued earlier (Hartung, 2010b). This means that two assays that cover the same mechanism can substitute for each other. Alternatively, it can be based on correlation of results. Two assays that agree (concordance) to a sufficient degree can be considered similar. We might call these two options “mechanistic similarity” and
The optimal combination of ITS components:
The typical combination of building blocks so far follows a Boolean logic, i.e., the logical combinations are AND, OR, and NOT. Table 1 gives the different examples for combining two tests with dichotomous (plus/minus) outcome with such logic and the consequences for the joint applicability domain and the validation need. Note that in most cases the validation of the building blocks will suffice, but the joint applicability domain will be just the overlap of the two tests’ applicability domains. This is a simple application of set theory. Only if the two tests measure the same thing but for different substances / substance severity classes does the logical combination OR result in the combined applicability domain. If the result requires that both tests are positive, e.g., when a screening test and a confirmatory test are combined, it is necessary to validate the overall ITS outcome.
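These Boolean combinations can be written out directly as a sketch (True stands for a positive, False for a negative test result):

```python
# Sketch of the Boolean combinations of two dichotomous tests, A and B.
def combine_and(a, b):
    # e.g., screening plus confirmatory test: both must be positive
    return a and b

def combine_or(a, b):
    # e.g., tests covering different modes of action: either suffices
    return a or b

def combine_a_not_b(a, b):
    # e.g., positive in A while excluding a confounding property shown
    # by B (such as cytotoxicity)
    return a and not b
```

AND and NOT shrink the set of substances called positive, while OR grows it; the joint applicability domain, as discussed above, remains the overlap of the two tests' domains in most cases.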
The principal opportunities in combining tests into the best ITS lie, however, in interim decision points (Figure 3 shows a simple example, where the positive or negative outcome is confirmed). Here, the consequences for the joint applicability
– Blocking the PoT once perturbed/activated diminishes manifestation of the adverse outcome
Please note that the current debate as to whether a PoT represents a chemico-biological interaction impacting on the biological system or the perturbed normal physiology is reflected in using both terminologies.
Similarly, the Bradford-Hill criteria can be applied:
– Strength: The stronger an association between cause and effect, the more likely a causal interpretation, but a small association does not mean that there is not a causal effect.
– Consistency: Consistent findings by different persons in different places with different samples increase the causal role of a factor and its effect.
– Specificity: The more specific an association is between factor and effect, the bigger the probability of a causal relationship.
– Temporality: The effect has to occur after the cause.
– Biological gradient: Greater exposure should lead to greater incidence of the effect, with the exception that it can also be inverse, meaning greater exposure leads to lower incidence of the effect.
– Plausibility: A possible mechanism between factor and effect increases the causal relationship, with the limitation that
know the mechanisms relevant to human health effects. Thus, the efficiency with which a test system covers relevant mechanisms for human health and environmental effects is becoming increasingly important. I have called this “mechanistic validation” (Hartung, 2007). This requires that we establish causality for a given mechanism to create a health or environmental effect. The classical frameworks of the Koch-Dale postulates (Dale, 1929) and Bradford-Hill criteria (Hill, 1965) for assessing evidence of causation come to mind first. Dale translated the Koch postulates that need to be fulfilled to prove a pathogen to be the cause of a certain disease to ones that prove a mediator (at the time histamine and neurotransmitters) causes a physiological effect. We recently applied this to systematically evaluate the nature of the Gram-positive bacterial endotoxin (Rockel and Hartung, 2012). Similarly, we can translate this to a PoT being responsible for the manifestation of an adverse cellular outcome of substance X:
– Evidence for presence of the PoT in affected cells
– Perturbation/activation of the PoT leads to or amplifies the adverse outcome
– Hindering PoT perturbation/activation diminishes manifestation of the adverse outcome
Tab. 1: Test combinations and consequences for applicability domain and validation needs

Logic                                        | Example                                                                      | Joint applicability domain           | Validation need
A AND B                                      | Screening plus confirmatory test                                             | Overlap                              | Total ITS
A OR B                                       | Different mode of action                                                     | Overlap                              | Building blocks
A OR B                                       | Different applicability domain or severity grades                            |                                      |
A NOT B                                      | Exclusion of a property (such as cytotoxicity)                               |                                      |
IF A positive: B; IF A negative: C           | Decision points, here confirmation of result in a second test (see Figure 3) | Combined overlap A/B and overlap A/C |
Fuzzy / probabilistic as function of A and B | Combined change of probability, e.g., priority score                         |                                      |
Fig. 3: Illustration of a simple decision tree, where outcomes of test A are confirmed by different second tests B or C
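The combination logics of Tab. 1 and Figure 3 can be sketched in code. This is a hypothetical illustration, not part of the article: each test is assumed to return a boolean outcome, the function names are invented, and the fuzzy/probabilistic case is reduced to a simple weighted priority score.

```python
# Hypothetical sketch of the Tab. 1 combination logics, assuming each
# test returns True (positive) or False (negative).

def and_strategy(a, b):
    """A AND B: screening plus confirmatory test; positive only if both agree."""
    return a and b

def or_strategy(a, b):
    """A OR B: tests covering different modes of action or applicability
    domains; any positive result counts."""
    return a or b

def not_strategy(a, b):
    """A NOT B: exclude a confounding property (e.g., cytotoxicity) flagged by B."""
    return a and not b

def decision_tree(a, confirm_pos, confirm_neg):
    """IF A positive: B; IF A negative: C (Fig. 3) - confirm A's outcome
    with a different second test depending on the first result."""
    return confirm_pos() if a else confirm_neg()

def priority_score(p_a, p_b, w_a=0.5, w_b=0.5):
    """Fuzzy/probabilistic combination: each test shifts a combined
    probability, e.g., a weighted priority score."""
    return w_a * p_a + w_b * p_b
```

Note that only the AND case requires validating the total ITS against a joint applicability domain; the other logics allow building-block-wise evaluation, as the table indicates.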
This “shopping list” extends ITS from hazard identification
to exposure considerations and the inclusion of existing data
beyond de novo testing (including some quite questionable ap-
proaches of read-across and forming of chemical classes, for
which no guidance or quality assurance is yet available). It simi-
larly calls for flexibility, a key difference from the current guid-
ance documents from ECHA or OECD. Compared to REACH,
it calls for human predictivity and mode-of-action information
in the sense of Toxicity Testing for the 21st Century. Similarly,
an earlier report, also based on an IVTP symposium to which
the author contributed, made further recommendations relating
to a concept based on pathways of toxicity (PoT) (Berg et al.,
2011): “When selecting the battery of in vitro and in silico meth-
ods addressing key steps in the relevant biological pathways
(the building blocks of the ITS) it is important to employ stand-
ardized and internationally accepted tests. Each block should
be producing data that are reliable, robust and relevant (the
alternative 3R elements) for assessing the specific aspect (e.g.
biological pathway) it is supposed to address. If they comply
with these elements they can be used in an ITS.”
Hoffmann et al. (2008a) added an important consideration:
“Furthermore, the study underlined the need for databases of
chemicals with testing information to facilitate the construction
of practical testing strategies. Such databases must comprise a
good spread of chemicals and test data in order that the appli-
cability of approaches may be effectively evaluated. Therefore,
the (non-) availability of data is a caveat at the start of any ITS
construction. Whilst in silico and in vitro data may be readily
generated, in vivo data of sufficient quality are often difficult to
obtain.” This brings us back to both the need for data-sharing
(Basketter et al., 2012) and the construction of a point of refer-
ence for validation exercises (Hoffmann et al., 2008b).
The most comprehensive framework for ITS composition so
far was produced by Jaworska and Hoffmann as a t4-commis-
sioned white paper (Jaworska and Hoffmann, 2010); see also
Jaworska et al. (2010):
“ITS should be:
a) Transparent and consistent
– As a new and complex development, key to ITS, as to any
methodology, is the property that they are comprehensible to the
maximum extent possible. In addition to ensuring credibility and
acceptance, this may ultimately attract the interest needed to
gather the necessary momentum required for their development.
The only way to achieve this is a fundamental transparency.
– Consistency is of similar importance. While difficult to achieve
for weight of evidence approaches, a well-defined and transpar-
ent ITS can and should, when fed with the same, potentially even
conflicting and/or incomplete information, always (re-)produce
the same results, irrespective of who, when, where, and how it is
applied. In case of inconsistent results, reasons should be iden-
tified and used to further optimize the ITS consistency.
– In particular, transparency and consistency are of utmost im-
portance in the handling of variability and uncertainty. While
transparency could be achieved qualitatively, e.g., by appropri-
ate documentation of how variability and uncertainty were con-
sidered, consistency in this regard may only be achievable when
knowledge of the mechanism is limited by best available current …
– Coherence: Coherence between epidemiological and labo-
ratory findings increases the likelihood of a causal effect.
However, the lack of laboratory evidence cannot nullify the
epidemiological evidence of the association.
– Experiment: Similar factors that lead to similar effects in-
crease the causal relationship of factor and effect.
Most recently, a new approach to causation was proposed, origi-
nating from ecological modeling (Sugihara et al., 2012; Mar-
shall, 2012). Whether this offers an avenue to systematically test
causality in large datasets from omics and/or high-throughput
testing needs to be explored. It might represent an alternative to
choosing meaningful biomarkers (Blaauboer et al., 2012), which
are always limited to the current state of knowledge.
As a more pragmatic approach, De Wever et al. (2012)
suggested key elements of an ITS:
“(1) Exposure modelling to achieve fast prioritisation of chemi-
cals for testing, as well as the tests which are most relevant for
the purpose. Physiologically based pharmacokinetic model-
ling (PBPK) should be employed to determine internal doses
in blood and tissue concentrations of chemicals and metabo-
lites that result from the administered doses. Normally, in such
PBPK models, default values are used. However, the inclusion
of values or results from in vitro data on metabolism or expo-
sure may contribute to a more robust outcome of such model-
ling.
(2) Data gathering, sharing and read-across for testing a class
of chemicals expected to have a similar toxicity profile as the
class of chemicals providing the data. In vitro results can be
used to demonstrate differences or similarities in potency across
a category or to investigate differences or similarities in bio-
availability across a category (e.g. data from skin penetration
or intestinal uptake).
(3) A battery of tests to collect a broad spectrum of data focuss-
ing on different mechanisms and modes of action. For instance,
changes in gene expression and signalling pathway alterations
could be used to predict toxic events which are meaningful for
the compound under investigation.
(4) Applicability of the individual tests and the ITS itself has to
be assured. The acceptance of a new method depends on wheth-
er it can be easily transferred from the developer to other labs,
whether it requires sophisticated equipment and models, or if
intellectual property issues and the costs involved are impor-
tant. In addition, an accurate description of the compounds that
can and cannot be tested is essential in this context.
(5) Flexibility allowing for adjustment of the ITS to the target
molecule, exposure regime or application.
(6) Human-specific methods should be prioritised whenever
possible to avoid species differences and to eliminate ‘low dose’
extrapolation. Thus, the in vitro methods of choice are based
upon human tissues, human tissue slices or human primary cells
and cell lines for in vitro testing. If in vivo studies are unavoid-
able, transgenic animals should be the preferred choice if avail-
able. If not, comparative genomics (animal versus human) and
computational models of kinetics and dynamics in animals and
humans may help to overcome species differences.”
sults. Important aspects of study quality include the selection of
a clinically relevant cohort [relevant test set of substances], the
consistent use of a single good reference standard [reference
data], and the blinding of results of experimental and reference
tests. The choice of statistical method for pooling results de-
pends on the summary statistic and sources of heterogeneity,
notably variation in diagnostic thresholds [thresholds of adver-
sity]. Sensitivities, specificities, and likelihood ratios may be
combined directly if study results are reasonably homogeneous.
When a threshold effect exists, study results may be best sum-
marised as a summary receiver operating characteristic curve,
which is difficult to interpret and apply to practice.”
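The likelihood-ratio pooling that Deeks describes for reasonably homogeneous studies can be sketched as follows. This is a minimal illustration: the sensitivity, specificity, and prior probability used in the example are invented, and the helper names are not from the article.

```python
# Sketch of combining test accuracy via likelihood ratios and updating a
# pre-test probability through Bayes' theorem in odds form.
# All numbers below are invented for illustration.

def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

def post_test_probability(pre_test_prob, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Example: a test with 80% sensitivity and 90% specificity, applied to a
# substance with a 10% prior probability of hazard
lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)   # LR+ ~ 8.0, LR- ~ 0.22
p_after_positive = post_test_probability(0.10, lr_pos)
```

The same update can be applied sequentially for each test in a strategy, which is exactly where the homogeneity caveat (and the threshold effect mentioned above) matters.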
Interestingly, Schünemann et al. (2008) developed GRADE
for grading quality of evidence and strength of recommenda-
tions for diagnostic tests and strategies. This framework uses
“patient-important outcomes” as measures, in addition to test
accuracy. A less invasive test can be better for a patient even if it
does not give the same certainty. Similarly, we might frame our
choices by aspects such as throughput, costs, or animal use.
The many faces of (I)TS for safety assessments
As defined earlier, any systematic combination of different
(test) results represents a testing strategy. It does not really mat-
ter if these results already exist, are estimated from structures
or related substances, measured by chemico-physical methods,
or stem from testing in a biological system or from human ob-
servations and studies. Jaworska et al. (2010) and Basketter et
al. (2012) list many of the more recently proposed ITS. One
of the authors (THA) had the privilege to coordinate from the
side of the European Commission the ITS development within
the guidance for REACH implementation for industry, which
formed the basis for current ECHA guidance.2 Classical exam-
ples in toxicology, some of them commonly used without the
label ITS, are:
Test battery of genotoxicity assays:
Several assays (3-6) depending on the field of use (Hartung,
2008) are carried out and, typically, any positive result is taken
as an alert. They often are combined with further mutagenicity
testing in vivo (Hartung, 2010c). The latter is necessary to re-
duce the tremendous rate of false-positive classifications of the
battery, as discussed earlier (Basketter et al., 2012). Interesting-
ly, Aldenberg and Jaworska (2010) applied a Bayesian network
to the dataset assembled by Kirkland et al., showing the potential
of a probabilistic network to analyze such datasets.
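A probabilistic reading of such a battery, in which no single positive result automatically dominates, can be sketched with a naive Bayes combination. This is only a sketch in the spirit of the Bayesian-network work cited above: the assay names are generic and all conditional probabilities are invented, not taken from the Kirkland dataset.

```python
# Minimal naive-Bayes sketch of a probabilistic genotoxicity battery.
# All probabilities are invented for illustration.

# (P(assay positive | genotoxic), P(assay positive | non-genotoxic))
ASSAYS = {
    "Ames":                  (0.85, 0.15),
    "mouse lymphoma":        (0.80, 0.40),  # sensitive but false-positive-prone
    "in vitro micronucleus": (0.75, 0.30),
}

def p_genotoxic(results, prior=0.5):
    """Combine battery results into P(genotoxic | evidence), assuming
    conditional independence of the assays (the naive-Bayes assumption)."""
    odds = prior / (1 - prior)
    for assay, positive in results.items():
        p_pos_tox, p_pos_clean = ASSAYS[assay]
        if positive:
            odds *= p_pos_tox / p_pos_clean
        else:
            odds *= (1 - p_pos_tox) / (1 - p_pos_clean)
    return odds / (1 + odds)

# A single positive in a false-positive-prone assay does not dominate:
p = p_genotoxic({"Ames": False, "mouse lymphoma": True,
                 "in vitro micronucleus": False})
```

In contrast to the "any positive is an alert" rule, the combined probability here stays low when the only positive comes from an assay with a high false-positive rate.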
ITS for eye and skin irritation:
As already mentioned, these were the first areas to introduce in-
ternationally accepted though relatively simple ITS, e.g., sug-
gesting a pH test before progressing to corrosivity testing. The
rich data available from six international validation studies,
– Rationality of ITS is essential to ensure that information is
fully exploited and used in an optimized way. Furthermore,
generation of new information, usually by testing, needs to be
rational in the sense that it is focused on providing the most
informative evidence in an efficient way.
– ITS should be driven by a hypothesis, which will usually be
closely linked to the information target of the ITS, a concept
detailed below. In this way the efficiency of an ITS can be en-
sured, as a hypothesis-driven approach offers the flexibility to
adjust the hypothesis whenever new information is obtained or
… Having defined and described the framework of ITS, we pro-
pose to fill it with the following five elements:
1. Information target identification;
2. Systematic exploration of knowledge;
3. Choice of relevant inputs;
4. Methodology to evidence synthesis;
5. Methodology to guide testing.”
The reader is referred to the original article (Jaworska and Hoff-
mann, 2010) and its implementation for skin sensitization (Ja-
worska et al., 2011).
Guidance from testing strategies in clinical diagnostics
We earlier stressed the principal similarities between a diagnostic
and a toxicological test strategy (Hoffmann and Hartung, 2005).
In both cases, different sources of information have to be com-
bined to come to an overall result. Vecchio pointed out as early
as 1966 the problem of single tests in unselected populations
(Vecchio, 1966) leading to unacceptable false-positive rates.
Systematic reviews of an evidence-based toxicology (EBT)
approach (Hoffmann and Hartung, 2006; Hartung, 2009b) and
meta-analysis could serve the evaluation and quality assurance
of toxicological tests. The frameworks for evaluation of clinical
diagnostic tests are well developed (Deeks, 2001; Devillé et al.,
2002; Leeflang et al., 2008) and led to the Cochrane Handbook
for Diagnostic Test Accuracy Reviews (Anon., 2011). Devillé
et al. (2002) give very concise guidance on how to evaluate
diagnostic methods. This is closely linked to efforts to im-
prove reporting on diagnostic tests; a set of minimal reporting
standards for diagnostic research has been proposed: the Standards
for Reporting of Diagnostic Accuracy statement (STARD)1.
We argued earlier that this represents an interesting approach
to complement or substitute for traditional method validation
(Hartung, 2010b). Deeks (2001) summarizes their experience
as follows [with translation to toxicology inserted in brackets]:
“Systematic reviews of studies of diagnostic [hazard assessment]
accuracy differ from other systematic reviews in the assessment
of study quality and the statistical methods used to combine re-
tion, already proposed to suit ITS (De Wolf et al., 2007; Ahlers
et al., 2008), was reported recently (Fernández et al., 2012),
showing improved prediction by combining several QSARs.
Validation of ITS
Concepts for the validation of ITS are only now emerging. The
ECVAM/EPAA workshop (Kinsner-Ovaskainen et al., 2009)
noted only: “There is a need to further discuss and to develop
the ITS validation principles. A balance in the requirements for
validation of the individual ITS components versus the require-
ments for the validation of a whole ITS should be considered.”
Later in the text, the only statement made was: “It was con-
cluded that a formal validation should not be required, unless
the strategy could serve as full replacement of an in vivo study
used for regulatory purposes.” The workshop stated that for
screening, hazard classification & labeling, and risk assessment
neither a formal validation of the ITS components nor the entire
ITS is required. We would respectfully disagree, as validation
certainly is desirable for other uses, but it should be tailored
to the use scenario and the available resources. The follow-up
workshop (Kinsner-Ovaskainen et al., 2012) did not go much
further with regard to recommendations for validation: “Firstly,
it was agreed that the validation of a partial replacement test
method (for application as part of a testing strategy) should be
differentiated from the validation of an in vitro test method for
application as a stand-alone replacement. It was also agreed
that any partial replacement test method should not be any less
robust, reliable or mechanistically relevant than stand-alone
replacement methods. However, an evaluation of predictive ca-
pacity (as defined by its accuracy when predicting the toxico-
logical effects observed in vivo) of each of these test methods
would not necessarily be as important when placed in a testing
strategy, as long as the predictive capacity of the whole testing
strategy could be demonstrated. This is especially the case for
test methods for which the relevant prediction relates to the im-
pact of the tested chemical on the biological pathway of interest
(i.e. biological relevance). The extent to which (or indeed how)
this biological relevance of test methods could, and should, be
validated, if reference data (a ‘gold standard’) were not avail-
able, remained unclear.
Consequently, a recommendation of the workshop was for
ECVAM to consider how the current modular approach to vali-
dation could be pragmatically adapted for application to test
methods, which are only used in the context of a testing strat-
egy, with a view to making them acceptable for regulatory pur-
poses.
Secondly, it was agreed that ITS allowing for flexible and ad
hoc approaches cannot be validated, whereas the validation
of clearly defined ITS would be feasible. However, even then,
current formal validation procedures might not be applicable,
due to practical limitations (including the number of chemicals
needed, cost, time, etc).
Thirdly, concerning the added value of a formal validation
of testing strategies, the views of the group members differed
eight retrospective assessments, and three recently completed
validation studies of new tests (Adler et al., 2011; Zuang et
al., 2008) make it an ideal test case for ITS development. For
ocular toxicity, the OECD TG 405 in 2002 provided an ITS
approach for eye irritation and corrosion (OECD, 2002a,b). In
spite of this TG, the Office of Pesticide Programs (OPP) of
the US EPA requested the development of an in vitro eye irrita-
tion strategy to register anti-microbial cleaning products. The
Institute for In Vitro Sciences, in collaboration with industry
partners, developed such an ITS of three in vitro approaches,
which was then accepted by regulators (De Wever et al., 2012).
ITS development has advanced greatly as a result of this test
case (McNamee et al., 2009; Scott et al., 2009).
For skin irritation, we already referred to the work by Hoff-
mann et al. (2008a), which was based on an evaluation of the
prevalence of this hazard among new chemicals (Hoffmann et
al., 2005). The study showed the potential of simulations to
guide ITS construction.
Embryonic Stem Cell Test (EST) – an ITS?
The EST (Spielmann et al., 2006; Marx-Stoelting et al., 2009;
Seiler and Spielmann, 2011) is an interesting test case for our
definition of an ITS. It consists of two test systems (mouse em-
bryonic stem cells and 3T3 fibroblasts) and two endpoints (cell
differentiation into beating cardiomyocytes and cytotoxicity
in both cell systems). The result (embryotoxicity), however,
is only deduced from all this information. According to the
suggested definition of tests and ITS, therefore, this represents
a test and not an ITS. Note, however, that the EST formed a
key element of the ITS developed at the end of the Integrated
Project ReProtect (Hareng et al., 2005); a feasibility study
showed the tremendous potential of this approach (Schenk et
ITS for skin sensitization:
This area has been subject to intense work over the last dec-
ade, which resulted in about 20 test systems. As outlined in the
roadmap process (Basketter et al., 2012), the area now requires
the creation of an ITS. It seems that only the gridlock of the
political decision process on the 2013 deadline, which includes
skin sensitization as an endpoint, hinders the finalization of this
important work. Since, at the same time, this represents a criti-
cal endpoint for REACH (notably all chemicals under REACH
currently require a local lymph node assay for skin sensitiza-
tion), such delays are hardly acceptable. It is very encouraging
that BASF has pushed the area by already submitting their ITS
(Mehling et al., 2012) for ECVAM evaluation. Pioneering work
to develop a Bayesian ITS for this hazard was referred to earlier
(Jaworska et al., 2011).
In silico ITS:
There also are attempts to combine various in silico
(QSAR) approaches only. We have discussed some of the limita-
tions of the in silico approaches in isolation earlier (Hartung
and Hoffmann, 2009). Since they are referred to in REACH as
“non-testing methods,” they might actually be called “Integrated
Non-Testing Strategies” (INTS). An example for bioaccumula-
tic exposure modeling (van der Voet and Slob, 2007; Hartung
et al., 2012). This is the tremendous opportunity of probabilis-
tic hazard and risk assessment (Thompson and Graham, 1996;
Hartung et al., 2012).
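The basic idea of probabilistic hazard and risk assessment can be sketched with a toy Monte Carlo simulation: instead of single point estimates, exposure and the threshold of adversity are treated as distributions, and "risk" becomes the probability that exposure exceeds the threshold. All distribution choices and parameters below are invented for illustration and do not come from the cited work.

```python
# Toy Monte Carlo sketch of probabilistic risk assessment.
# Distribution shapes and parameters are invented for illustration.
import random

random.seed(0)  # fixed seed for reproducibility

def probability_of_exceedance(n=100_000):
    """Estimate P(exposure > threshold) by sampling both distributions."""
    exceed = 0
    for _ in range(n):
        # log-normal exposure (e.g., from probabilistic PBPK/exposure modeling)
        exposure = random.lognormvariate(mu=0.0, sigma=0.5)
        # log-normal threshold of adversity (hazard characterization)
        threshold = random.lognormvariate(mu=1.5, sigma=0.4)
        if exposure > threshold:
            exceed += 1
    return exceed / n

risk = probability_of_exceedance()  # fraction of simulated cases at risk
```

The output is a probability rather than a binary safe/unsafe verdict, which is what makes such assessments naturally compatible with probabilistic PBPK and exposure models.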
A key recommendation from the ECVAM/EPAA workshop
(Kinsner-Ovaskainen et al., 2009) was: “It is necessary to ini-
tiate, as early as possible, a dialogue with regulators and to
include them in the development of the principles for the con-
struction and validation of ITS.” An earlier OECD workshop in
2008 (OECD, 2008) made some first steps and posed some of
the most challenging questions addressing:
– how these tools and methods can be used in an integrated ap-
proach to fulfill the regulatory endpoint, independent of cur-
rent legislative requirements;
– how the results gathered using these tools and methods can be
transparently documented; and
– how the degree of confidence in using them can be communi-
cated throughout the decision-making process.
With impressive crowd-sourcing of about 60 nominated experts
and three case studies, a number of conclusions were reached:
– “There is limited acceptability for use of structural alerts to
identify effects. Acceptability can be improved by confirming
the mode of action (e.g., in vitro testing, in vivo information
from an analogue or category).
– There is a higher acceptability for positive (Q)SAR results
compared to negative (Q)SAR results (except for aquatic tox-
icity).
– The communication on how the decision to accept or reject a
(Q)SAR result can be based on the applicability domain of a
(Q)SAR model and/or the lack of transparency of the (Q)SAR
model.
– The acceptability of a (Q)SAR result can be improved by con-
firming the mechanism/mode of action of a chemical and us-
ing a (Q)SAR model applicable for that specific mechanism/
mode of action.
– Read-across from analogues can be used for priority setting,
classification & labeling, and risk assessment.
– The combination of analogue information and (Q)SAR results
for both target chemical and analogue can be used for classi-
fication & labeling and risk assessment for acute aquatic tox-
icity if the target chemical and the analogue share the same
mode of action and if the target chemical and analogue are in
the applicability domain of the QSAR.
– Confidence in read-across from a single analogue improves if
it can be demonstrated that the analogue is likely to be more
toxic than the target chemical or if it can be demonstrated
that the target chemical and the analogue have similar me-
– Confidence in read-across improves if experimental data is
available on structural analogues “bracketing” the target
substance. The confidence is increased with an increased
strongly, and a variety of perspectives were discussed, clearly
indicating the need for further informed debate. Consequently,
the workshop recommended the use of EPAA as a forum for in-
dustry to share case studies demonstrating where, and how, in
vitro and/or integrated testing strategies have been successfully
applied for safety decision-making purposes. Based on these
case studies, a pragmatic way to evaluate the suitability of par-
tial replacement test methods could be discussed, with a view to
establishing conditions for regulatory acceptance and to reflect
on the cost/benefit of formal validation, i.e. the confirmation of
scientific validity of a strategy by a validation body and in line
with generally accepted validation principles, as provided in
OECD Guidance Document 34 (OECD, 2005).
Finally, the group agreed that test method developers should
be encouraged to develop and submit to ECVAM, not only tests
designed as full replacements of animal methods, but also par-
tial replacements in the context of a testing strategy.”
Going somewhat further, De Wever et al. (2012) noted: “In
some cases, the assessment of predictive capacity of a single
building block may not be as important, as long as the pre-
dictive capacity of the whole testing strategy is demonstrated.
However, … the predictive capacity of each single element of an
ITS and that of the ITS as a whole needs to be evaluated.”
Berg et al. go even further, challenging the validation need
and suggesting a more hands-on approach to gain experience
(Berg et al., 2011): “Does it make sense to validate a strategy
that builds upon tests for hazard identification which change
over time, but is to be used for risk assessment? One needs to
incorporate new thinking into risk assessment. Regulators are
receptive to new technologies but concrete data are needed to
support their use. Data documentation should be comprehen-
sive, traceable and make it possible for other investigators to
retrieve information as well as reliably repeat the studies in
question regardless of whether the original work was performed
to GLP standards.”
What is the problem? If we follow the traditional approach
of correlating results, we need good coverage of each branch
of the ITS with suitable reference substances to establish cor-
rect classification. Even for these very simple stand-alone tests,
however, we are often limited by the low number of available
well-characterized reference compounds and how much test-
ing we can afford. However, such an approach would be valid
only for static ITS anyway, and it would lose all the flexibility
of exchanging building blocks. The opportunity lies in the ear-
lier suggested “mechanistic validation.” If we can agree that
a certain building block covers a certain relevant mechanism,
we might relax our validation requirements and also accept
as equivalent another test covering the same mechanism. This
does not blunt the need for reproducibility assessments, but
a few pertinent toxicants relevant to humans should suffice
to show that we at least identify the liabilities of the past.
The second way forward is to stop making any test a “game-
changer”: If we accept that each and every test only changes
probabilities of hazard, we can relax and fine-tune the weight
added with each piece of evidence “on the job.” It appears that
such probabilistic hazard assessment also should, ideally, be
compatible with probabilistic PBPK modeling and probabilis-
MVTs we need a generative model capable of determining test
probabilities. One simple and effective way to determine the
MVT is via the same method that decision trees use, i.e., an
iterative process of determining which test gives the most “in-
formation” on the endpoint. Information gain can be calculated
given a generative model. To determine the test that gives the
most information, we can find the test that yields the greatest
reduction in Shannon entropy. This is basically a measure that
quantifies information as a function of the probability of dif-
ferent values for a test and the impact those values have on the
endpoint category (toxic vs. non-toxic). The mathematical for-
mulation is the Shannon entropy

H(T) = – Σi=1..n p(Ti) log2 p(Ti),

where T is the test in question and p(Ti) signifies the prob-
ability of a test taking on one of its values (enumerated by i).
To determine the most valuable test we need not only the tox-
icity classifier but also the probability estimates for every test
as a function of all other tests. To determine these transition
probabilities we need to discretize every test into the n buckets
shown in the above equation.
We can expect that users applying this model would want
to determine probabilities of toxicity for their test item within
some risk threshold in the fewest number of test steps or by
minimizing the costs. When we start testing for toxicity we may
want to check the current level of risk before deciding on more
testing. For example, we might decide to stop testing if a test
item has less than a 10% chance of being toxic or a greater than
90% chance. Finding MVTs from a generative model has an
advantage over directly using decision trees. Unfortunately, de-
cision trees cannot handle sparse data effectively. The amount
of data needed to determine n tests increases exponentially with
the number of tests. By calculating MVTs on top of a generative
model we can leverage a simple calculation from a complex
model that is not as heavily constrained by data size.
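MVT selection by entropy reduction can be sketched as follows. This is an illustrative toy, not the authors' implementation: the candidate tests, their discretized outcome probabilities, and the conditional toxicity probabilities are all invented, standing in for what a generative model would supply.

```python
# Sketch of most-valuable-test (MVT) selection via information gain,
# i.e., expected reduction in Shannon entropy of the endpoint.
# Candidate tests and probabilities are invented for illustration.
import math

def entropy(probs):
    """Shannon entropy H = -sum p_i * log2(p_i) over outcome categories."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(p_toxic, outcomes):
    """Expected entropy reduction from running one test.
    outcomes: list of (P(test value i), P(toxic | test value i))."""
    h_before = entropy([p_toxic, 1 - p_toxic])
    h_after = sum(p_i * entropy([p_tox_i, 1 - p_tox_i])
                  for p_i, p_tox_i in outcomes)
    return h_before - h_after

# Current belief: 50% probability of toxicity; two candidate next tests,
# each discretized into two buckets (positive / negative)
candidates = {
    "test A": [(0.5, 0.9), (0.5, 0.1)],  # informative: outcomes separate well
    "test B": [(0.5, 0.6), (0.5, 0.4)],  # weakly informative
}
mvt = max(candidates, key=lambda t: information_gain(0.5, candidates[t]))
# → "test A"
```

A stopping rule as described above would then halt the sequence once the running probability of toxicity leaves, e.g., the 10–90% band.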
Combining the ITS concept with Tox-21c:
As discussed above, Tox-21c relies on breaking risk assess-
ment down into many components. These need to be put to-
gether again in a way that allows decision making, ultimately
envisioned as systems toxicology by simulation (Hartung et al.,
2012). Before this, an ITS-like integration and possibly a proba-
bilistic condensation of evidence into a probability of risk are
the logical approaches. However, there are special challenges:
Most importantly, the technologies promoted by Tox-21c, at this
stage mainly omics and high-throughput (Hattis, 2009), are very
different from the information sources promoted in the Euro-
pean ITS discussion. We see how the ITS discussion is crossing
the Atlantic, however – in the context of endocrine disruptor
testing, for example (Willett et al., 2011). They are so data-rich
that, from the beginning, a data-mining approach is necessary,
which means that the weighing of evidence is left to the compu-
ter. Not all regulators are comfortable with this.
Our own research is approaching this for metabolomics
(Bouhifd et al., in press), using endocrine disruption as a
number of “good” analogues that provide concordant data.
– Lower quality data on a target chemical can be used for clas-
sification & labeling and risk assessment if it confirms an
overall trend over analogues and target.
– Confidence is reduced in cases where robust study summaries
for analogues are incomplete or inadequate.
– It is difficult to judge analogues with missing functional groups
compared to the target; good analogues have no functional
group compared to the target and when choosing analogues,
other information on similarity than functional groups is re-
Taken together, these conclusions address more a WoE ap-
proach and the use of non-testing information than actual ITS.
They still present important information on the comfort zone of
regulators and how to handle such information for inclusion into
ITS. Note that the questions of documentation and expressing
confidence were not tackled.
Flexibility by determining the Most Valuable (next) Test:
A key problem is to break out of the rigid test guideline prin-
ciples of the past. ITS must not be forced into a scheme with
a yearlong debate of expert consensus and committees. Too
often, technological changes to components, difficulties with
availability and applicability of building blocks, and case-by-
case adaptations for the given test sample will be necessary.
For example, the integration of existing data, obviously at the
beginning of an ItS, already creates a very different starting
point. Chemico-physical, structural properties (including read-
across or chemical category assignments) and prevalence also
will change the probability of risk even before the first tests are
applied. In order to maintain the desired flexibility in applying
an ITS, the MVT (most valuable test) to follow needs to be de-
termined at each moment. Such an approach should have the
following capabilities:
1. Assess, finally, the probability of toxicity from the different
test results;
2. Determine the most valuable next test given previous test
results and other information.
3. Have a measure of model stability (e.g., confidence intervals)
Assessing the probability of toxicity for given tests can be done
by machine learning tools. Generative models work best for
providing the values needed to find a most valuable test given
prior tests. One simple generative model would predict prob-
ability of toxicity using a discriminative model (e.g., Random
Forests (Breiman, 2001)), and test probability via a generative
model (e.g., Naive Bayes). A classifier for determining risk of
chemical toxicity must have the following traits:
– Outputs: unbiased and consistent probability estimates for
toxicity (e.g., by cross-validation).
– Outputs: probability estimates even when missing certain
results (both Random Forests and Naive Bayes can handle
missing data).
– Reliable and stable results based on cross-validation meas-
ures.
The MVT identification based on previous tests is not a direct
consequence of building a toxicity probability estimator. To find
from pathway identification to a parameterized model that can
be used for more complex simulations such as metabolic con-
trol analysis, flux analysis, and systems control theory to clarify
the wiring diagram that allows the cell to maintain homeostasis
and to determine where, within that wiring diagram, there are
Steering the new developments:
At this stage, no strategic planning and coordination for the chal-
lenge of ITS implementation exists. This was noticed in most of
the meetings so far, e.g., (Berg et al., 2011): “… there was a
clear call from the audience for a credible leadership with the
capacity to assure alignment of ongoing activities and initiation
of concerted actions, e.g. a global human toxicology project.”
The Human Toxicology Project Consortium3 is one of the ad-
vocates for such steering (Seidle and Stephens, 2009). There is
still quite a way to go (Hartung, 2009a). While we aim to estab-
lish some type of coordinating center in the US at Johns Hop-
kins (working title PoToMaC – Pathway of Toxicity Mapping
Center), no such effort is yet in place in Europe. We suggested
the creation of a European Safety Sciences Institute (ESSI) in
our policy program, but this discussion is only starting. It is evi-
dent, however, that we need such structures for developing the
new toxicological toolbox, along with a global collaboration of
regulators of the different sectors, to finally revamp regulatory
toxicology.
References
Adler, S., Basketter, D., Creton, S., et al. (2011). Alternative (non-animal) methods for cosmetics testing: current status and future prospects – 2010. Arch Toxicol 85, 367-485.
Ahlers, J., Stock, F., and Werschkun, B. (2008). Integrated testing and intelligent assessment – new challenges under REACH. Env Sci Pollut Res Int 15, 565-572.
Aldenberg, T. and Jaworska, J. S. (2010). Multiple test in silico weight-of-evidence for toxicological endpoints. Issues Toxicol 7, 558-583.
Anon. (2005). REACH and the need for Intelligent Testing Strategies [EUR 21554 EN]. Institute for Health and Consumer Protection, EC Joint Research Centre, Ispra, Italy.
Anon. (2011). Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. http://srdta.cochrane.org/
Balls, M., Amcoff, P., Bremer, S., et al. (2006). The principles of weight of evidence validation of test methods and testing strategies. The report and recommendations of ECVAM workshop 58. Altern Lab Anim 34, 603-620.
Basketter, D. A., Clewell, H., Kimber, I., et al. (2012). A roadmap for the development of alternative (non-animal) methods for systemic toxicity testing – t4 report. ALTEX 29, 3-91.
Benfenati, E., Gini, G., Hoffmann, S., and Luttik, R. (2010). Comparing in vivo, in vitro and in silico methods and integrated strategies for chemical assessment: problems and prospects. Altern Lab Anim 38, 153-166.
test case might illustrate some of the challenges of the high-
throughput, systems biology methods and omics technologies.
Metabolomics – defined as measuring the concentration of
"all" low molecular weight (<1500 Da) molecules in a system
of interest – is the closest "omics" technology to the pheno-
type, and it represents the downstream consequences of whatever
changes are observed in proteomic or transcriptomic studies.
Small changes in the concentration of a protein, which might
be undetectable at the level of transcriptomics or proteomics,
can result in large changes in the concentrations of metabolites;
changes that are invisible at one level of analysis (e.g., co-factor
regulation of an enzyme) are more likely to be apparent in a
metabolic profile. By taking a global view, metabolomics
provides clues to the systemic response to a challenge from a
toxicant and does so in a way that provides both mechanistic
information and candidates for biomarkers (Griffin, 2006;
Robertson et al., 2011). In other words, metabolomics offers
the possibility of seeing the high-level pattern of altered
biological pathways while drilling down to relevant mechanisms.
Metabolomics presents many of the same challenges as other
high-content methods – namely, how to integrate the surfeit of
data into a meaningful framework – but at the same time it has
some unique challenges. In particular, metabolomics lacks the
large-scale, integrated databases that have been crucial to the
analysis of transcriptomic and proteomic data. As was the case
in the early years of microarrays, we are still without established
methods to interpret the data. Exploring data sets via several
methods (ORA, QEA, correlation analysis, and genome-scale
network reconstruction) (Sugimoto et al., 2012), hopefully, will
provide some guidance for future toxicological applications of
metabolomics and help us to better understand the puzzle as well
as to develop new perspectives on how to integrate several
"-omics" technologies. At some level, metabolomics remains, at
this stage, a process of hypothesis generation and, potentially,
biomarker discovery, and as such will be dependent on validation
by other means.
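Of the exploratory methods just listed, over-representation analysis (ORA) is the simplest to state: given the metabolites that changed significantly, ask whether a pathway contains more of them than chance would predict, via a one-sided hypergeometric test. A minimal sketch, with counts invented purely for illustration:

```python
from math import comb

def ora_pvalue(hits_in_pathway, pathway_size, hits_total, universe_size):
    """One-sided hypergeometric p-value: probability of observing at
    least `hits_in_pathway` changed metabolites in a pathway of
    `pathway_size`, given `hits_total` changed metabolites among
    `universe_size` measured ones."""
    total = comb(universe_size, hits_total)
    upper = min(pathway_size, hits_total)
    return sum(
        comb(pathway_size, k)
        * comb(universe_size - pathway_size, hits_total - k)
        for k in range(hits_in_pathway, upper + 1)
    ) / total

# Illustrative numbers: 500 metabolites measured, 30 significantly
# changed, 8 of them in a 20-metabolite pathway (chance expectation ~1.2).
p = ora_pvalue(8, 20, 30, 500)
print(f"p = {p:.2e}")  # far below 0.05 -> pathway is over-represented
```

In practice the same test is run across many pathways at once, so the resulting p-values need multiple-testing correction before any pathway is called affected.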
One critical problem for metabolomics is that, while a more-
or-less complete "parts list" and wiring diagrams exist for ge-
nomic and proteomic networks, knowledge of metabolic net-
works is still relatively incomplete. Currently, there are three
non-tissue-specific genome-scale human metabolic networks:
Recon 1 (Rolfsson et al., 2011), the Edinburgh Human Meta-
bolic Network (EHMN) (Ma et al., 2007), and HumanCyc
(Romero et al., 2005). These reconstructions are "first drafts":
they contain genes and proteins of unknown function, as well
as "dead-end" or "orphaned" metabolites that are not associated
with specific anabolic or catabolic pathways. Furthermore, the
networks are not tissue-specific. Many toxicants, including en-
docrine disruptors, exhibit tissue-specific toxicity, and a cell- or
tissue-specific metabolic network (Hao et al., 2012) should pro-
vide a more accurate model of pathology than a generic, global
human metabolic network. In the long term, a well-character-
ized, biochemically complete network will help make the leap
Griffin, J. L. (2006). The Cinderella story of metabolic profiling: does metabolomics get to go to the functional genomics ball? Philos Trans R Soc B 361, 147-161.
Gubbels-van Hal, W. M., Blaauboer, B. J., Barentsen, H. M., et al. (2005). An alternative approach for the safety evaluation of new and existing chemicals, an exercise in integrated testing. Regul Toxicol Pharmacol 42, 284-295.
Hao, T., Ma, H.-W., Zhao, X.-M., and Goryanin, I. (2012). The reconstruction and analysis of tissue specific human metabolic networks. Mol BioSyst 8, 663-670.
Hareng, L., Pellizzer, C., Bremer, S., et al. (2005). The Integrated Project ReProTect: A novel approach in reproductive toxicity hazard assessment. Reprod Toxicol 20, 441-452.
Hartung, T., Bremer, S., Casati, S., et al. (2004). A modular approach to the ECVAM principles on test validity. Altern Lab Anim 32, 467-472.
Hartung, T. (2007). Food for thought ... on validation. ALTEX
Hartung, T. (2008). Food for thought ... on alternative methods for cosmetics safety testing. ALTEX 25, 147-162.
Hartung, T. (2009a). A toxicology for the 21st century – mapping the road ahead. Toxicol Sci 109, 18-23.
Hartung, T. (2009b). Food for thought ... on evidence-based toxicology. ALTEX 26, 75-82.
Hartung, T. (2009c). Toxicology for the twenty-first century. Nature 460, 208-212.
Hartung, T. and Hoffmann, S. (2009). Food for thought ... on in silico methods in toxicology. ALTEX 26, 155-166.
Hartung, T. (2010a). Comparative analysis of the revised Directive 2010/63/EU for the protection of laboratory animals with its predecessor 86/609/EEC – a t4 report. ALTEX 27,
Hartung, T. (2010b). Evidence-based toxicology – the toolbox of validation for the 21st century? ALTEX 27, 253-263.
Hartung, T. (2010c). Food for thought ... on alternative methods for chemical safety testing. ALTEX 27, 3-14.
Hartung, T. (2010d). Lessons learned from alternative methods and their validation for a new toxicology in the 21st century. J Toxicol Environ Health B Crit Rev 13, 277-290.
Hartung, T. and McBride, M. (2011). Food for thought ... on mapping the human toxome. ALTEX 28, 83-93.
Hartung, T. and Zurlo, J. (2012). Alternative approaches for medical countermeasures to biological and chemical terrorism and warfare. ALTEX 29, 251-260.
Hartung, T., van Vliet, E., Jaworska, J., et al. (2012). Food for thought ... systems toxicology. ALTEX 29, 119-128.
Hattis, D. (2009). High-throughput testing – the NRC vision, the challenge of modeling dynamic changes in biological systems, and the reality of low-throughput environmental health decision making. Risk Anal 29, 483-484.
Hengstler, J. G., Foth, H., Kahl, R., et al. (2006). The REACH concept and its impact on toxicological sciences. Toxicology 220,
Hill, A. B. (1965). The environment and disease: association or causation? Proc R Soc Med 58, 295-300.
Berg, N., De Wever, B., Fuchs, H. W., et al. (2011). Toxicology in the 21st century – working our way towards a visionary reality. Toxicol In Vitro 25, 874-881.
Blaauboer, B. J. (2010). Biokinetic modeling and in vitro-in vivo extrapolations. J Toxicol Environ Health B Crit Rev 13,
Blaauboer, B. and Barratt, M. (1999). The integrated use of alternative methods in toxicological risk evaluation. Altern Lab Anim 27, 229-237.
Blaauboer, B. J., Boekelheide, K., Clewell, H. J., et al. (2012). The use of biomarkers of toxicity for integrating in vitro hazard estimates into risk assessment for humans. ALTEX 29,
Bouhifd, M., Hartung, T., Hogberg, H. T., et al. (in press). Review: toxicometabolomics. J Appl Toxicol.
Breiman, L. (2001). Random forests. Machine Learning 45, 5-32.
Combes, R. D. and Balls, M. (2011). Integrated testing strategies for toxicity employing new and existing technologies. Altern Lab Anim 39, 213-225.
Dale, H. H. (1929). Croonian lectures on some chemical factors in the control of the circulation. Lancet 213, 1285-1290.
De Wever, B., Fuchs, H. W., Gaca, M., et al. (2012). Implementation challenges for designing integrated in vitro testing strategies (ITS) aiming at reducing and replacing animal experimentation. Toxicol In Vitro 26, 526-534.
De Wolf, W., Comber, M., and Douben, P. (2007). Animal use replacement, reduction, and refinement: Development of an integrated testing strategy for bioconcentration of chemicals in fish. Integr Environ Assess Manag 3, 3-17.
Deeks, J. J. (2001). Systematic reviews in health care: Systematic reviews of evaluations of diagnostic and screening tests. Brit Med J 323, 157-162.
Devillé, W. L., Buntinx, F., Bouter, L. M., et al. (2002). Conducting systematic reviews of diagnostic studies: didactic guidelines. BMC Med Res Methodol 2, 9.
Dejongh, J., Forsby, A., Houston, J. B., et al. (1999). An integrated approach to the prediction of systemic toxicity using computer-based biokinetic models and biological in vitro test methods: Overview of a prevalidation study based on the ECITTS project. Toxicol In Vitro 13, 549-554.
Fernández, A., Lombardo, A., Rallo, R., et al. (2012). Quantitative consensus of bioaccumulation models for integrated testing strategies. Environ Int 45, 51-58.
Forsby, A. and Blaauboer, B. (2007). Integration of in vitro neurotoxicity data with biokinetic modelling for the estimation of in vivo neurotoxicity. Hum Exp Toxicol 26, 333-338.
Gabbert, S. and van Ierland, E. C. (2010). Cost-effectiveness analysis of chemical testing for decision-support: How to include animal welfare? Hum Ecol Risk Assess 16, 603-620.
Gabbert, S. and Benighaus, C. (2012). Quo vadis integrated testing strategies? Experiences and observations from the work floor. J Risk Res 15, 583-599.
McNamee, P., Hibatallah, J., Costabel-Farkas, M., et al. (2009). A tiered approach to the use of alternatives to animal testing for the safety assessment of cosmetics: eye irritation. Regul Toxicol Pharmacol 54, 197-209.
Mehling, A., Eriksson, T., Eltze, T., et al. (2012). Non-animal test methods for predicting skin sensitization potentials. Arch Toxicol 86, 1273-1295.
Nordberg, A., Ruden, C., and Hansson, E. (2008). Towards more efficient testing strategies – Analyzing the efficiency of toxicity data requirements in relation to the criteria for classification and labelling. Regul Toxicol Pharmacol 50, 412-419.
OECD (2002a). TG 404 "Acute dermal irritation/corrosion", adopted April 24th 2002, including a supplement to TG 404 entitled "A sequential testing strategy for dermal irritation and corrosion" (pp. 11-14). http://www.oecd-ilibrary.org/
OECD (2002b). TG 405 "Acute eye irritation/corrosion", adopted April 24th 2002, including a supplement to TG 405 entitled "A sequential testing strategy for eye irritation and corrosion" (pp. 9-13). http://www.oecd-ilibrary.org/
OECD (2005). Series on Testing and Assessment No. 34. Guidance document on the validation and international acceptance of new or updated test methods for hazard assessment.
OECD (2008). Series on Testing and Assessment No. 88. Workshop on integrated approaches to testing and assessment.
Robertson, D. G., Watkins, P. B., and Reily, M. D. (2011). Metabolomics in toxicology: preclinical and clinical applications. Toxicol Sci 120, Suppl 1, S146-S170.
Rockel, C. and Hartung, T. (2012). Systematic review of membrane components of Gram-positive bacteria responsible as pyrogens for inducing human monocyte/macrophage cytokine release. Front Pharmacol 3, 56.
Rolfsson, O., Palsson, B. Ø., and Thiele, I. (2011). The human metabolic reconstruction Recon 1 directs hypotheses of novel human metabolic functions. BMC Syst Biol 5, 155.
Romero, P., Wagg, J., Green, M. L., et al. (2005). Computational prediction of human metabolic pathways from the complete human genome. Genome Biol 6, R2.
Rusyn, I., Sedykh, A., Low, Y., et al. (2012). Predictive modeling of chemical hazard by integrating numerical descriptors of chemical structures and short-term toxicity assay data. Toxicol Sci 127, 1-9.
Schenk, B., Weimer, M., Bremer, S., et al. (2010). The ReProTect feasibility study, a novel comprehensive in vitro approach to detect reproductive toxicants. Reprod Toxicol 30, 200-218.
Scott, L., Eskes, C., Hoffmann, S., et al. (2009). A proposed eye
Hoffmann, S. and Hartung, T. (2005). Diagnosis: toxic! – trying to apply approaches of clinical diagnostics and prevalence in toxicology considerations. Toxicol Sci 85, 422-428.
Hoffmann, S., Cole, T., and Hartung, T. (2005). Skin irritation: prevalence, variability, and regulatory classification of existing in vivo data from industrial chemicals. Regul Toxicol Pharmacol 41, 159-166.
Hoffmann, S. and Hartung, T. (2006). Toward an evidence-based toxicology. Hum Exp Toxicol 25, 497-513.
Hoffmann, S., Saliner, A. G., Patlewicz, G., et al. (2008a). A feasibility study developing an integrated testing strategy assessing skin irritation potential of chemicals. Toxicol Lett 180,
Hoffmann, S., Edler, L., Gardner, I., et al. (2008b). Points of reference in the validation process: the report and recommendations of ECVAM Workshop 66. Altern Lab Anim 36,
Jaworska, J. and Hoffmann, S. (2010). Integrated Testing Strategy (ITS) – Opportunities to better use existing data and guide future testing in toxicology. ALTEX 27, 231-242.
Jaworska, J., Gabbert, S., and Aldenberg, T. (2010). Towards optimization of chemical testing under REACH: A Bayesian network approach to Integrated Testing Strategies. Regul Toxicol Pharmacol 57, 157-167.
Jaworska, J., Harol, A., Kern, P. S., et al. (2011). Integrating non-animal test information into an adaptive testing strategy – skin sensitization proof of concept case. ALTEX 28, 211-225.
Kinsner-Ovaskainen, A., Akkan, Z., Casati, S., et al. (2009). Overcoming barriers to validation of non-animal partial replacement methods/Integrated Testing Strategies: the report of an EPAA-ECVAM workshop. Altern Lab Anim 37, 437-
Kinsner-Ovaskainen, A., Maxwell, G., Kreysa, J., et al. (2012). Report of the EPAA-ECVAM workshop on the validation of Integrated Testing Strategies (ITS). Altern Lab Anim 40, 175-
Kirkland, D., Aardema, M., Henderson, L., et al. (2005). Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens. Mutat Res 584, 1-256.
Leeflang, M., Deeks, J. J., and Gatsonis, C. (2008). Systematic reviews of diagnostic test accuracy. Ann Intern Med 149,
Leist, M., Lidbury, B. A., Yang, C., et al. (2012). Novel technologies and an overall strategy to allow hazard assessment and risk prediction of chemicals, cosmetics, and drugs with animal-free methods. ALTEX 29, 373-388.
Ma, H., Sorokin, A., Mazein, A., et al. (2007). The Edinburgh human metabolic network reconstruction and its functional analysis. Mol Syst Biol 3, 135.
Marshall, M. (2012). How to prove cause leads to effect. New Scientist of 29 Sep 2012, 8-9.
Marx-Stoelting, P., Adriaens, E., Ahr, H.-J., et al. (2009). A review of the implementation of the embryonic stem cell test (EST). The report and recommendations of an ECVAM/ReProTect Workshop. Altern Lab Anim 37, 313-328.