Sjoberg Behav Brain Funct (2017) 13:3
DOI 10.1186/s12993-017-0121-8
Logical fallacies inanimal model
Espen A. Sjoberg*
Background: Animal models of human behavioural deficits involve conducting experiments on animals with the hope of gaining new knowledge that can be applied to humans. This paper aims to address risks, biases, and fallacies associated with drawing conclusions when conducting experiments on animals, with focus on animal models of mental illness.

Conclusions: Researchers using animal models are susceptible to a fallacy known as false analogy, where inferences based on assumptions of similarities between animals and humans can potentially lead to an incorrect conclusion. There is also a risk of false positive results when evaluating the validity of a putative animal model, particularly if the experiment is not conducted double-blind. It is further argued that animal model experiments are reconstructions of human experiments, and not replications per se, because the animals cannot follow instructions. This leads to an experimental setup that is altered to accommodate the animals, and typically involves a smaller sample size than a human experiment. Researchers on animal models of human behaviour should increase focus on mechanistic validity in order to ensure that the underlying causal mechanisms driving the behaviour are the same, as relying on face validity makes the model susceptible to logical fallacies and a higher risk of Type 1 errors. We discuss measures to reduce bias and risk of making logical fallacies in animal research, and provide a guideline that researchers can follow to increase the rigour of their experiments.
Keywords: Argument from analogy, Confirmation bias, Type 1 error, Animal models, Double-down effect, Validity
© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Logical fallacy
A logical fallacy is a judgment or argument based on
poor logical thinking. It is an error in reasoning, which
usually means that either the line of reasoning is flawed,
or the objects in the premise of the argument are dissimi-
lar to the objects in the conclusion [1]. Scientists are not
immune to logical fallacies and are susceptible to making
arguments based on unsound reasoning. For instance, a
common fallacy is affirming the consequent. This involves the following line of reasoning: if A is true, then X is observed. We observe X, therefore A must be true. This argument is fallacious because observing X only tells us that there is a possibility that A is true: the rule does not specify that A follows X, even if X always follows A.1 Studies that have explicitly investigated this in a scientist sample found that 25–33% of scientists make the fallacy of affirming the consequent and conclude that X → A is a valid argument [2, 3].
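The invalidity of this inference can be checked mechanically. A minimal sketch that enumerates all truth assignments for A and X and shows that the premises "if A then X" and "X" leave open the case where A is false:

```python
from itertools import product

# Enumerate all truth assignments for A and X, and collect cases
# where both premises hold ("A implies X" is true, and X is true)
# while the conclusion A is false.
counterexamples = []
for A, X in product([False, True], repeat=2):
    a_implies_x = (not A) or X      # truth value of "if A then X"
    if a_implies_x and X and not A:  # premises hold, conclusion fails
        counterexamples.append((A, X))

# A=False, X=True satisfies both premises while A is false, so
# "A -> X; X; therefore A" is not a valid inference.
print(counterexamples)  # -> [(False, True)]
```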
Making logical fallacies is a human condition, and there
is a large range of fallacies commonly committed [1, 4, 5].
In the present paper, we will focus on a select few that are
of particular relevance to animal model research, espe-
cially in the context of validity and reliability of conclu-
sions drawn from an experiment.
1 If you struggle to follow this line of reasoning, a concrete example makes
it easier: If it is wine, then the drink has water in it. Water is in the drink.
Therefore, it must be wine. Nowhere does the rule specify that only wine
contains water as an ingredient, so simply making this observation does not
allow us to conclude that it is wine.
Department of Behavioral Sciences, Oslo and Akershus University College
of Applied Sciences, St. Olavs Plass, P.O. Box 4, 0130 Oslo, Norway
Conrmation andfalsication
e fallacy of affirming the consequent is connected with
a tendency to seek evidence that confirms a hypothesis.
Many scientists conduct their experiments under the
assumption that their experimental paradigm is a legiti-
mate extension of their hypothesis, and thus their results
are used to confirm their beliefs. As an example, imagine
a hypothesis that states that patients with bipolar disor-
der have reduced cognitive processing speed, and we do a
reaction time test to measure this. Thus, a fallacious line of reasoning would be: if bipolar patients have reduced cognitive processing speed, then we will observe slower reaction time on a test. We observe a slower reaction time, and therefore bipolar patients have reduced cognitive processing speed. This would be affirming the consequent, because the observed outcome is assumed to be the result of the mechanism outlined in the hypothesis, but we cannot say with certainty that this is true. The results certainly suggest this possibility, and it may in fact be true, but the patients may have exhibited slower reaction times for a variety of reasons. If a significant statistical difference between bipolar patients and controls is found, it may be common to conclude that the results support the cognitive processing speed hypothesis, but in reality this analysis only reveals that the null hypothesis can be rejected, not necessarily why it can be rejected [6, 7]. The manipulation of the independent variable gives us a clue as to the cause of the rejection of the null hypothesis, but this does not mean that the alternative hypothesis is confirmed beyond doubt.
Popper [8] claimed that hypotheses could never be
confirmed; only falsified. He claimed that we could not
conclude with absolute certainty that a statement is true,
but it is possible to conclude that it is not true. The classic example is the white swan hypothesis: even if we have
only observed white swans, we cannot confirm with
certainty the statement “all swans are white”, but if we
observe a single black swan then we can reject the state-
ment. Looking for confirmation (searching for white
swans) includes the risk of drawing the wrong conclusion,
which in this case is reached through induction. How-
ever, if we seek evidence that could falsify a hypothesis
(searching for black swans), then our observations have
the potential to reject our hypothesis. Note that rejecting
the null hypothesis in statistical analyses is not necessar-
ily synonymous with falsifying an experimental hypoth-
esis. Null-hypothesis testing is a tool, and when we use
statistical analyses we are usually analysing a numerical
analogy of our experimental hypothesis.
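The point that rejecting the null hypothesis does not reveal why it was rejected can be illustrated with a simulation. Below, two different hypothetical mechanisms for slower reaction times, uniform slowing versus occasional attentional lapses, both clear a conventional significance criterion on Welch's t; the test alone cannot distinguish them. All data and parameters here are synthetic and illustrative:

```python
import random
import statistics

random.seed(1)

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / ((vx / nx + vy / ny) ** 0.5)

# Hypothetical reaction times in milliseconds.
controls = [random.gauss(300, 30) for _ in range(100)]

# Mechanism 1: every "patient" is uniformly 25 ms slower.
uniform_slowing = [random.gauss(325, 30) for _ in range(100)]

# Mechanism 2: responses are normal most of the time, but
# occasional attentional lapses add very slow trials.
lapses = [random.gauss(300, 30) + (200 if random.random() < 0.25 else 0)
          for _ in range(100)]

# Both mechanisms reject the null at a conventional criterion (|t| > 2),
# so the rejection alone cannot tell us WHY responses were slower.
t1 = welch_t(uniform_slowing, controls)
t2 = welch_t(lapses, controls)
print(t1 > 2, t2 > 2)
```

The rejection is real in both cases; identifying which mechanism produced it requires a different manipulation, not a smaller p value.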
When a hypothesis withstands multiple tests of falsifi-
cation, Popper called it corroborated [9]. We could argue
that if a hypothesis is corroborated, then its likelihood
of being true increases, because it has survived a gaunt-
let of criticism by science [10]. However, it is important
to note that Popper never made any such suggestion, as
this would be inductive reasoning: exactly the problem he
was trying to avoid! Even if a hypothesis has supporting
evidence and has withstood multiple rounds of falsifica-
tion, Popper meant that it is not more likely to be true
than an alternative hypothesis, and cannot be confirmed
with certainty [11]. Instead, he felt that a corroborated
theory could not be rejected without good reason, such
as a stronger alternative theory [12]. Popper may be cor-
rect that we cannot confirm a hypothesis with absolute
certainty, but in practice it is acceptable to assume that
a hypothesis is likely true if it has withstood multiple
rounds of falsification, through multiple independent
studies using different manipulations (see “Animal model
experiments are reconstructions” section). However, in
the quest for truth we must always be aware of the pos-
sibility, however slight, that the hypothesis is wrong, even
if the current evidence makes this seem unlikely.
Conrmation bias
Confirmation bias is the tendency to seek information
that confirms your hypothesis, rather than seeking information that could falsify it [13]. This can influence the results when the experimenter is informed of the hypothesis being tested, and is particularly problematic if the experiment relies on human observations that have room for error. The experimenter's impact on the study is often implicit, and may involve subtly influencing participants or undermining methodological flaws, something also known as experimenter bias [14].
The tendency to express confirmation bias in science appears to be moderated by what field of study we belong to. Physicists, biologists, psychologists, and mathematicians appear to be somewhat better at avoiding confirmation bias than historians, sociologists, or engineers, although performance varies greatly from study to study [3, 15–18]. In some cases, the tendency to seek confirming evidence can be a result of the philosophy of science behind a discipline. For instance, Sidman's [19] book Tactics of Scientific Research, considered a landmark textbook on research methods in behavior analysis [20–22], actively encourages researchers to look for similarities between their research and others, which is likely to increase confirmation bias.
Confirmation bias has been shown in animal research
as well, but this fallacy is reduced when an experiment is
conducted double-blind [23]. Van Wilgenburg and Elgar
found that 73% of non-blind studies would report a sig-
nificant result supporting their hypothesis, while this was
only the case in 21% of double-blind studies. An interest-
ing new approach to reduce confirmation bias in animal
research is to fully automatize the experiment [24, 25]. This involves setting up the equipment and protocols in advance, so that large portions of an experiment can be run automatically, with minimal interference by the experimenter. Along with double-blinded studies, this is a promising way to reduce confirmation bias in animal research.

It is important to note that the confirmation bias phenomenon occurs as an automatic, unintentional process,
and is not necessarily a result of deceptive strategies [26].
As humans, we add labels to phenomena and establish
certain beliefs about the world, and confirmation bias is a
way to cement these beliefs and reinforce our sense of
identity.2 Scientists may therefore be prone to confirma-
tion bias due to a lack of education on the topic, and not
necessarily because they are actively seeking to find cor-
roborating evidence.
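As a back-of-the-envelope summary of Van Wilgenburg and Elgar's figures above, the two rates can be combined into an odds ratio. This is a sketch from the reported percentages only; the study's raw counts would be needed for a proper estimate with a confidence interval:

```python
# Reported rates: 73% of non-blind studies versus 21% of
# double-blind studies reported a result supporting the hypothesis.
p_nonblind, p_blind = 0.73, 0.21

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# Odds of a "supporting" report are roughly ten times higher
# when the study is not conducted blind.
odds_ratio = odds(p_nonblind) / odds(p_blind)
print(round(odds_ratio, 1))  # -> 10.2
```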
Argument fromanalogy andanimal model
e issues reported in this paper apply to all of science,
and we discuss principles and phenomena that any scien-
tist would hopefully find useful. However, the issues will
primarily be discussed in the context of research on ani-
mal models, as some of the principles have special appli-
cations in this field. In this section, we outline how an
animal model is defined, and problems associated with
arguing from analogy in animal research.
Dening an animal model
e term “animal model” is not universally defined in the
literature. Here, we define an animal model as an animal
sufficiently similar to a human target group in its physi-
ology or behaviour, based on a natural, bred, or experi-
mentally induced characteristic in the animal, and which
purpose is to generate knowledge that may be extrapo-
lated to the human target group. In this article, we focus
on translational animal models in the context of behav-
ioural testing, which usually involve a specific species or
strain, or an animal that have undergone a manipulation
prior to testing.
An animal model can of course model another non-
human animal, but for the most part the aim of it is to
study human conditions indirectly through animal
research. at research is conducted on animals does
not necessarily mean that the animal acts as a model for
humans. It is only considered an animal model when its
function is to represent a target group or condition in
humans, e.g. people with depression, autism, or brain
injury. e current paper focuses on animal models of
mental illness, but animal models as a whole represent a
large variety of conditions, and are particularly common
2 Thanks to Rachael Wilner for pointing out this argument.
to use in drug trials. See Table 1 for an overview of common animal models of mental illnesses.
It should also be noted that the term “animal model”
refers to an animal model that has at least been validated to some extent, while a model not yet validated is referred to as a "putative animal model". That a model is
“validated” does not mean that the strength of this vali-
dation cannot be questioned; it merely means that previ-
ous research has given the model credibility in one way
or another.
Arguing fromanalogy
In research on animal models, scientists sometimes use
an approach called the argument from analogy. is
involves making inferences about a property of one
group, based on observations from a second group,
because both groups have some other property in com-
mon [1]. Analogies can be very useful in our daily lives as
well as in science: a mathematical measurement, such as
“one meter”, is essentially an analogy where numbers and
quantities act as representations of properties in nature.
When applying for a job, a person might argue that she
would be a good supervisor because she was also a good
basketball coach, as the jobs have the property of lead-
ership in common. Concerning animal models, arguing
from analogy usually involves making inferences about
humans, based on an earlier observation where it was
found that the animals and humans have some prop-
erty in common. Arguing from analogy is essentially a
potentially erroneous judgment based on similarities
between entities. However, this does not make the argu-
ment invalid by default, because the strength of the argu-
ment relies on: (1) how relevant the property we infer is
to the property that forms the basis of the analogy; (2) to
what degree the two groups are similar; (3) and if there is
any variety in the observations that form the basis of the
argument [1].
Animal models themselves are analogies, as their exist-
ence is based on the assumption that they are similar to
a target group in some respect. If the two things we are
drawing analogies on are similar enough so that we will
reasonably expect them to correlate, an argument from
analogy can be strong! However, when we draw the con-
clusion that two things share a characteristic, because we
have established that they already share another, different
characteristic, then we are at risk of making the fallacy of
false analogy [27].
The false analogy
A false analogy is essentially an instance when an argument based on an analogy is incorrect. This can occur when the basis of similarity between objects does not justify the conclusion that the objects are similar in some
other respect. For instance, if Jack and Jill are siblings,
and Jack has the property of being clumsy, we might infer
that Jill is also clumsy. However, we have no information
to assert that Jill is clumsy, and the premise for our argu-
ment is based solely on the observation that Jack and Jill
have genetic properties in common. We are assuming
that clumsiness is hereditary, and therefore this is prob-
ably a false analogy. Note that knowledge gained later
may indicate that—in fact—clumsiness is hereditary, but
until we have obtained that knowledge we are operat-
ing under assumptions that can lead to false analogies.
Even if clumsiness was hereditary, we could still not say
with absolute certainty that Jill is clumsy (unless genetics
accounted for 100% of the variance). This new knowledge
would mean that our analogy is no longer false, as Jill’s
clumsiness can probably at least in part be explained by
genetics, but we are still arguing from analogy: we cannot
know for certain if Jill is clumsy, based solely on observa-
tions with Jack.
The false analogy inanimal models
With animal models, the false analogy can occur when
one group (e.g. an animal) share some characteristics
with another group (e.g. humans), and we assume that
the two groups also share other characteristics. For
instance, because chimpanzees can follow the gaze of
a human, it could be assumed that the non-human pri-
mates understand what others perceive, essentially dis-
playing theory of mind [2830]. However, Povinelli
etal. [31] argue that this is a false analogy, because we
are drawing conclusions about the inner psychological
state of the animal, based on behavioural observations.
It may appear that the animal is performing a behaviour
that requires complex thinking, while in reality it only
reminds us of complex thinking [32], most likely because
we are anthropomorphizing the animal’s behaviour
[33]—particularly the assumption that the mind of an ape
is similar to the mind of a human [30]. A different exam-
ple would be birds that are able to mimic human speech:
the birds are simply repeating sounds, and we are anthro-
pomorphising if we believe the birds actually grasp our
concept of language.
Robbins [34] pointed out that homology is not guar-
anteed between humans and primates, even if both the
behavioural paradigm and the experimental result are
identical for both species: different processes may have
been used by the two species to achieve the same out-
come. Since an animal model is based on common prop-
erties between the animal and humans, we may assume
that new knowledge gained from the animal model is
also applicable to humans. In reality, the results are only
indicative of evidence in humans.
Arguing from analogy, therefore, involves the risk
of applying knowledge gained from the animal over to
humans, without knowing with certainty if this applica-
tion is true. Imagine the following line of reasoning: we
find result A in a human experiment, and in an animal
model we also find result A, establishing face validity for
the animal model. Consequently, we then conduct a dif-
ferent experiment on the animal model, finding result B.
If we assume that B also exists in humans, without trying to recreate these results in human experiments, then we are arguing from analogy, potentially drawing a false analogy.
Illustration: argument from analogy in the SHR model
An illustration of argument from analogy comes from
the SHR (spontaneously hypertensive rat) model of
ADHD (Attention-Deficit/Hyperactivity Disorder) [35,
Table 1  A summary of some available animal models of mental illnesses, where the animals themselves act as the model for the target group

Mental illness                              Model                                       References
Anxiety                                     Serotonin receptor 1A knockout mice         [114]
                                            Corticosterone treated mice                 [115]
Attention-Deficit/Hyperactivity Disorder    Spontaneously Hypertensive rat              [35]
                                            Thyroid receptor β1 transgenic mice         [116]
Autism                                      Valproic Acid rat                           [81]
Depression                                  Corticosterone treated rats and mice        [117]
                                            Chronic Mild Stress rats and mice           [118]
Obsessive Compulsive Disorder               Quinpirole treated rats                     [119]
Post-Traumatic Stress Syndrome              Congenital learned helpless rat             [120]
Schizophrenia                               Ventral hippocampus lesioned rats           [121]
                                            Methylazoxymethanol acetate treated rats    [122]
                                            Developmental vitamin D deficient rats      [123]

The animals are genetically modified, bred for a specific trait, or manipulated in some physiological fashion (e.g. a lesion or drug injection)
36]. Compared to controls, usually the Wistar Kyoto rat (WKY), the SHRs exhibit many of the same behavioural deficits observed in ADHD patients, such as impulsive behaviour [37–42], inattention [35, 37], hyperactivity [37, 43], and increased behavioural variability [44–47].
One measure of impulsive behaviour is a test involving
delay discounting. In this paradigm, participants are faced
with the choice of either a small, immediate reinforcer
or a larger, delayed reinforcer. Both ADHD patients [48]
and SHRs [41] tend to show a preference for the smaller
reinforcer as the delay between response and reinforcer
increases for the large reinforcer. Research on delay dis-
counting with ADHD patients suggests that they are delay
averse, meaning that impulsivity is defined as making
choices that actively seek to reduce trial length (or overall
delay) rather than immediacy [48–56], but this is usually achieved by choosing a reinforcer with a short delay.
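The choice pattern in this paradigm is commonly described with hyperbolic discounting, where the subjective value of a reinforcer of magnitude A delayed by D is V = A / (1 + kD). A minimal sketch of the resulting preference reversal, using purely illustrative magnitudes and a discount rate k not fitted to any SHR or ADHD data:

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: subjective value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

SMALL, LARGE = 1, 4   # reinforcer magnitudes (arbitrary units)
K = 0.5               # discount rate; purely illustrative

# The small reinforcer is immediate; the large one arrives after a delay.
# Record whether the (discounted) large reinforcer is still preferred.
choices = {d: discounted_value(LARGE, d, K) > discounted_value(SMALL, 0, K)
           for d in [0, 2, 8, 12]}

# Preference reverses from the large to the small reinforcer
# as the delay on the large reinforcer grows.
print(choices)  # -> {0: True, 2: True, 8: False, 12: False}
```

Note that the same reversal can be produced by fitting k, the magnitudes, or other terms, which is exactly why the behavioural pattern alone does not identify the underlying mechanism.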
There is no direct evidence to suggest that SHRs operate by the same underlying principles as ADHD patients. Studies on delay discounting using SHRs tend to manipulate the delay period between response and reinforcer delivery, but do not compare the results with alternative explanations. This is because the rats cannot be told the details of the procedure (e.g. if the experiment ends after a specific time or a specific number of responses). Therefore, most authors who have investigated delay discounting usually avoid the term delay aversion [57]. However, some authors make the argument from analogy where they assume that the rats show a similar effect to ADHD children: Bizot et al. [58] concluded that "…SHR are less prone to wait for a reward than the other two strains, i.e. exhibit a higher impulsivity level…" (p. 220), and Pardey, Homewood, Taylor and Cornish [59] concluded that "…SHRs are more impulsive than the WKY as they are less willing to wait for an expected reinforcer" (p. 170). Even though the evidence shows that SHRs' preference for the large reinforcer drops with increased delay, we cannot conclude with certainty that this occurs because the SHRs do not want to wait. The experimental setup does not tell us anything conclusive about the animal's motivation, nor its understanding of the environmental conditions. Hayden [60] has argued that the delay discounting
task is problematic in measuring impulsivity in animals
because it is unlikely that the animals understand the
concept of the inter-trial interval. Furthermore, if the
SHRs were less willing to wait for a reinforcer, then we
may argue that this shows immediacy, and not necessar-
ily delay aversion. In this case, it may instead support the
dual pathway model of ADHD, which takes into account
both delay aversion and an impulsive drive for immediate
reward [56, 61, 62].
Assuming that the rats are delay averse or impulsive is arguing from analogy. The evidence may only suggest that the rats are impulsive, not necessarily why they are impulsive. The results may also not speak to whether the
reason for this behaviour is the same in ADHD and SHRs
(mechanistic validity—see “Mechanistic validity” sec-
tion). If we were to manipulate the magnitude of the large
reinforcer then we will also find a change in performance
[57, 63]. How do we know that the SHRs are sensitive to
temporal delays, and not to other changes in the experi-
mental setup, such as the inter-trial interval [60], rein-
forcer magnitude [63], or the relative long-term value of
the reward [64]?
The validity criteria ofanimal models
Before any further discussion on logical fallacies in ani-
mal models, the validity criteria of these models must
be addressed. We must also point out that there are two
approaches to animal model research: (1) validating a
putative animal model, and (2) conducting research on
an already validated model.
When asserting the criteria for validating a putative
animal model, the paper by Willner [65] is often cited,
claiming that the criteria for a valid animal model rests
on its face, construct, and predictive validity. This means
that the model must appear to show the same symp-
toms as the human target group (face validity), that the
experiment measures what it claims to measure and can
be unambiguously interpreted (construct validity), and
that it can make predictions about the human popula-
tion (predictive validity). However, there is no univer-
sally accepted standard for which criteria must be met in
order for an animal model to be considered valid, and the
criteria employed may vary from study to study [6670].
Based on this, Belzung and Lemoine [71] attempted to
broaden Willner’s criteria into a larger framework, pro-
posing nine validity criteria that assess the validity of
animal models for psychiatric disorders. Tricklebank and
Garner [72] have argued that, in addition to the three
criteria by Willner [65], a good animal model must also
be evaluated based on how it controls for third variable
influences (internal validity), to what degree results can
be generalized (external validity), whether measures
expected to relate actually do relate (convergent validity), and whether measures expected to not relate actually do not relate (discriminant validity). These authors
argue that no known animal model currently fulfils all of
these criteria, but we might not expect them to; what is
of utmost importance is that we recognize the limitation
of an animal model, including its application. Indeed, it
could be argued that a reliable animal model may not
need to tick all the validity boxes as long it has predic-
tive validity, because in the end its foremost purpose is
to make empirical predictions about its human target
group. However, be aware that arguing from analogy
reduces the model’s predictive validity, because its pre-
dictive capabilities may be limited to the animal studied.
Mechanistic validity
Behavioural similarities between a putative model and its human target group are not sufficient grounds to validate a model. In other words, face validity is not enough: arguably, mechanistic validity is more important. This is a term that normally refers to the underlying cognitive and biological mechanisms of the behavioural deficits being identical in both animals and humans [71],
though we can extend the definition to include external
variables affecting the behaviour, rather than attributing
causality to only internal, cognitive events. Whether the
observed behaviour is explained in terms of neurologi-
cal interactions, cognitive processes, or environmental
reinforcement depends on the case in question, but the core of the matter is that mechanistic validity refers to the
cause of the observed behavioural deficit or symptom. If
we can identify the cause of the observed behaviour in
an animal model, and in addition establish that this is
also the cause of the same behaviour in humans, then we
have established mechanistic validity. This validity criterion does not speak to what has triggered the onset of a
condition (trigger validity), or what made the organism
vulnerable to the condition in the first place (ontopatho-
genic validity), but rather what factors are producing the
specific symptoms or behaviour [71]. For instance, falling
down the stairs might have caused brain injury (trigger
validity), and this injury in turn reduced dopamine transmission in the brain, which led to impulsive behaviour.
When an animal model is also impulsive due to reduced
dopamine transmissions, we have established mechanis-
tic validity (even if the trigger was different).
The validity ofmodels ofconditions withlimited etiology
Face validity has been argued to be of relatively low
importance in an animal model, because it does not
speak about why the behaviour occurs [33, 69], i.e. the
evidence is only superficial. However, it could be argued
that face validity is of higher importance in animal mod-
els of ADHD, because the complete etiology underlying
the condition is not yet fully known, and therefore an
ADHD diagnosis is based entirely on behavioural symp-
toms [73].
There is limited knowledge of the pathophysiology of many of the mental illnesses in the Diagnostic and Statistical Manual of Mental Disorders [74]; depression
and bipolar disorder are examples of heterogeneous
conditions where animal models have been difficult to
establish [75, 76]. When dealing with a heterogeneous
mental disorder, it is inherently harder for animal models
to mimic the behavioural deficits, particularly a range of different deficits [75, 77–80]. We could argue, therefore,
that mechanistic validity in animal models is difficult,
if not impossible, to establish from the outset when our
knowledge of causality in humans might be limited.
Models can be holistic or reductionist
Animal models can be approached with different applications in mind: a model can aim to be holistic or reductionist. A
holistic approach assumes that the model is a good rep-
resentation of the target group as a whole, including all
or most symptoms and behavioural or neurological char-
acteristics. Alternatively, a reductionist approach uses an
animal model to mimic specific aspects of a target group,
such as only one symptom. This separation may not be apparent, because animal models are usually addressed as
if they are holistic; for instance, the valproic acid (VPA)
rat model of autism is typically just labelled as an “animal
model of autism” in the title or text [81], but experiments
typically investigate specific aspects of autism [82–84]. This does not mean that the model is not holistic, but
rather that its predictive validity is limited to the aspects
of autism investigated so far. Similarly, the SHR is typi-
cally labelled as an “animal model of ADHD” [35], but it
has been suggested that the model is best suited for the
combined subtype of ADHD [36, 73], while Wistar Kyoto
rats from Charles River Laboratories are more suited for
the inattentive subtype [85]. The point of this distinction
between holistic and reductionist approaches is to under-
line that animal models have many uses, and falsifying
a model in the context of one symptom does not mean
the model has become redundant. As long as the model
has predictive validity in one area or another, then it can
still generate hypotheses and expand our understand-
ing of the target group, even if the model is not a good
representation of the target group as a whole. Indeed,
an animal model may actually be treated as holistic until
it can be empirically suggested that it should in fact be
reductionist. However, researchers should take care not
to assume that a model is holistic based on just a few
observations: this would be arguing from analogy and
bears the risk of making applications about humans that
are currently not established empirically. The exact applications
and limitations of an animal model should always
be clearly defined [33, 86].
Animal model experiments are reconstructions
The terms “replicate” and “reproduce” are often used
interchangeably in the literature [87], but with regards to
animal models their distinction is particularly important.
Replication involves repeating an experiment using the
same methods as the original experiment, while a repro-
duction involves investigating the same phenomenon
using different methods [88]. Replications assure that the
effects are stable, but a reproduction is needed to ensure
that the effect was not due to methodological issues.
We suggest a third term, reconstruction, which has
special applications in animal models. A reconstruction
involves redesigning an experiment, while maintaining
the original hypothesis, in order to accommodate differ-
ent species. When an animal experiment aims to investi-
gate a phenomenon previously observed in humans, we
have to make certain changes for several reasons. First,
the animals are a different species than humans, and have
a different physiology and life experience. Second, the
animals do not follow verbal instructions and must often
(but not always) be trained to respond. Third, the experimental
setup must often be amended so that a behaviour
equivalent to a human behaviour is measured. A fourth
observation is that animal studies tend to use smaller
sample sizes than human experiments, which makes
them more likely to produce large effect sizes when a sig-
nificant result is found [89].
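The link between small samples and inflated significant effect sizes can be demonstrated with a short simulation. The following is a minimal Python sketch, with values chosen purely for illustration (a true effect of d = 0.5, n = 8 per group, and the critical t for df = 14 hard-coded), not taken from any real study:

```python
import random
import statistics

# Simulate many small two-group experiments with a modest true effect,
# keep only the "significant" ones, and compare their average observed
# effect size to the truth.
random.seed(42)
TRUE_D, N, CRIT_T = 0.5, 8, 2.145   # critical t for df = 14, alpha = .05 two-sided

significant_ds = []
for _ in range(2000):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(TRUE_D, 1) for _ in range(N)]
    sp = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5  # pooled SD
    d = (statistics.mean(b) - statistics.mean(a)) / sp                   # Cohen's d
    t = d * (N / 2) ** 0.5                                               # two-sample t
    if abs(t) > CRIT_T:
        significant_ds.append(abs(d))

print(round(statistics.mean(significant_ds), 2))  # well above the true d of 0.5
```

With only eight animals per group, an observed |d| must exceed roughly 1.07 to reach significance, so every published-looking estimate necessarily overshoots the true effect of 0.5, which is the pattern described in [89].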
An animal model experiment actively attempts to
reconstruct the conditions under which we observed an
effect in humans, but makes alterations so that we can
be relatively certain that an equivalent effect is observed
in the animals (or vice versa, where a human experiment
measures an equivalent effect to what was observed in an
animal study). This questions the construct validity of the
study: how certain are we that the task accurately reflects
the human behaviour we are investigating?
Another problem concerned with reconstruc-
tion is the standardization fallacy [90]. This refers
to the fact that animal experiments are best repli-
cated if every aspect of the experiment is standard-
ized. However, by increasing experimental control
we lose external validity, meaning that the results are
less likely to apply to other situations [91]. The dif-
ficulty is therefore to find a balance between the two,
and finding this balance may depend on the research
question we seek to answer [33, 92]. One approach
is to initially begin with replications, and if these are
successful move on to perform reproductions, and
eventually reconstructions. This is essentially what
van der Staay, Arndt and Nordquist [92] have previ-
ously suggested: successful direct replication is fol-
lowed by extended replication where modifications
are made within the procedure, the animal’s environ-
ment (e.g. housing or rearing), or their gender. Should
the effect persevere, then we have systematically
established a higher degree of generalization without
losing internal validity. At the final stage, quasi-repli-
cations are conducted using different species, which
is similar to our concept of reconstructions, and it is
at this stage that the translational value of the findings
is evaluated.
The double-down effect
When we run animal model experiments, we have to use
a control group for comparison. When we are evaluating
a putative model, we are therefore indirectly evaluating
both animal groups for their appropriateness as an ani-
mal model for the phenomenon in question, even if we
hypothesized beforehand that just one group would be
suitable; this is the double-down effect. If we were to
discover that the control group, rather than the experimental
group, shows the predicted characteristic, then
it may be tempting to use hindsight bias to rationalize
that the result was predicted beforehand, something that
should always be avoided! In actuality, this is an occasion
that can be used to map the observable characteristics of
the animals, which is called phenotyping. This may show
that the control group has a property that makes them
a suitable candidate as a new putative model. Follow-up
studies can then formally evaluate whether this puta-
tive animal model has validity. This approach is perfectly
acceptable, provided that the initial discovery of the con-
trol group’s suitability is seen as suggestive and not con-
clusive, until further studies provide more evidence.
When an animal model has already been validated, the
double-down effect still applies: we are still indirectly
evaluating two animal groups at once, but it is less likely
that the control group will display the characteristic in
question, due to previous validation. Failure to replicate
previous findings can be interpreted in many ways; it
could be an error in measurement, differences in experi-
mental manipulations, or that the animal model is simply
not suitable as a model in this specific paradigm (but still
viable in others). Should we observe that controls express
a phenomenon that was expected of the experimental
group, then we should replicate the study to rule out that
the finding occurred by chance or through some meth-
odological error. This may lead us to suggest the control
group as a putative model, pending further validation.
The double-down effect and the file drawer problem
Since the purpose of animal models is to conduct
research on non-human animals, with the aim to advance
knowledge about humans, then inevitably the animal
model and the human condition it mimics must be simi-
lar in some respect. If they were not, then the pursuit of
the model would be redundant. Therefore, from the out-
set, there is likely to be publication bias in favour of data
that shows support for a putative animal model, because
otherwise it has no applications.
The double-down effect of evaluating two animal
groups at once makes animal models particularly suscep-
tible to the file drawer problem. This is a problem where
the literature primarily reflects publications that found
significant results, while null results are published less
frequently [93, 94]. This aversion to the null creates what
Ferguson and Heene called “undead theories”, which are
theories that survive rejection indefinitely, because null
results that refute them are not published [95]. The ori-
gin of this trend is not entirely clear, but it probably came
into existence by treating the presence of a phenomenon
as more interesting than its absence. Once an effect has
been documented, replications may now be published
that support the underlying hypothesis.
The file drawer effect is probably related to the sunk-
cost effect: this is a tendency to continue on a project due
to prior investment, rather than switching to a more via-
ble alternative [96]. Thus, if we publish null results, it may
seem that previous publications with significant findings
were wasteful, and we may feel that we are contributing
towards dissent rather than towards finding solutions. It
may be in the researcher’s interest to find evidence sup-
porting the theory in order to justify their invested time,
thus becoming a victim of confirmation bias.
Furthermore, if null results are found, they might be
treated with more skepticism than a significant result.
This is, of course, a fallacy in itself as both results should
be treated the same: why would a null result be subjected
to more scrutiny than a significant result? When the
CERN facility recorded particles travelling faster than the
speed of light, the observation appeared to falsify the the-
ory of relativity [97]. This result was met with skepticism
[98], and it was assumed that it was due to a measure-
ment error (which in the end it turned out to be). Nev-
ertheless, if the result had supported relativity, would the
degree of skepticism have been the same?
In the context of animal studies, the double-down
effect makes it more likely that a significant result is
found when comparing two animal groups. Either
group may be a suitable candidate for a putative animal
model, even if only one group was predicted to be suit-
able beforehand. If any result other than a null result will
show support for an animal model (or a putative model),
then multiple viable models will be present in the litera-
ture, all of which will be hard to falsify (as falsifying one
model may support another). Indeed, this is currently the
case for animal models, where there are multiple avail-
able models for the same human conditions [80, 99–103].
The file drawer problem is a serious issue in science [104],
and the trend may often be invisible to the naked eye, but
methods such as meta-analyses have multiple tools to
help detect publication bias in the literature [105].
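One such tool is Egger's regression test for funnel-plot asymmetry. The following is an illustrative Python sketch (standard library only) of how selective publication of a null effect produces the asymmetry the test flags; the simulation parameters are assumptions for demonstration, not data from any real literature:

```python
import random
import statistics

# Simulate a "file drawer" literature: many two-group studies of a null
# effect (true d = 0) are run, but only those significant in the predicted
# direction get published.
random.seed(1)
published = []
while len(published) < 40:
    n = random.choice([8, 12, 16, 24, 32])            # per-group sample size
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]        # no true difference
    sp = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    d = (statistics.mean(b) - statistics.mean(a)) / sp
    se = (2 / n) ** 0.5                               # approximate SE of d
    if d / se > 1.96:                                 # only "positive" findings survive
        published.append((d, se))

# Egger-style test: regress d/se on 1/se; an intercept far from zero flags
# funnel-plot asymmetry consistent with publication bias.
x = [1 / se for d, se in published]
y = [d / se for d, se in published]
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
print(round(intercept, 2))  # well above zero, despite a true effect of zero
```

In an unbiased literature the intercept scatters around zero; here the file drawer pushes it far positive even though no effect exists, which is how meta-analytic tools can make the otherwise invisible trend detectable.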
Measures toimprove animal model research
e main purpose of this paper was to address sev-
eral risks and fallacies that may occur in animal model
research, in order to encourage a rigorous scientific
pursuit in this field. We do not intend to discourage
researchers from using animal models, but rather
hope to increase awareness of potential risks and falla-
cies involved. To make the issues addressed in the paper
easier to survey, we have created a list that researchers
can consult when designing animal experiments
and interpreting their data.
1. Be aware of your own limitations. Some of the falla-
cies and risks addressed in this paper may be una-
voidable for a variety of reasons. Nevertheless, the
first step towards improving one’s research is to be
aware of the existence of these risks. When writing
the discussion section of a report, it may be neces-
sary to point out possible limitations. Even if they are
not explicitly stated, it is still healthy for any scientist
to be aware of them.3
2. Establish predictive and mechanistic validity. If you
are attempting to validate a putative animal model,
ensure that the experiment is as similar as pos-
sible to experiments done on humans. If this is not
possible, explain why in the write-up. If the experi-
ment is novel, and the animal model is already vali-
dated through previous research, then this principle
does not necessarily apply, because the purpose is to
uncover new knowledge that may be translated to
humans. In such instances, a new hypothesis gains
validity in a follow-up experiment on humans.
Remember that there are several criteria available
for validating an animal model, but there is no uni-
versal agreement on which set of criteria should be
followed. However, the two most important crite-
ria are arguably predictive validity and mechanistic
validity, because face validity is prone to logical fal-
lacies. Establishing mechanistic validity ensures that
the mechanisms causing the observed behaviour are
the same in the model and humans, while establish-
ing predictive validity means that knowledge gained
from the model is more likely to apply to humans.
3. Define an a priori hypothesis and plan the statistical
analysis beforehand. It is crucial to have an a priori
hypothesis prior to conducting the experiment,
otherwise one might be accused of data dredging
and reasoning after-the-fact that the results were
expected [107, 108]. When validating a putative ani-
mal model, this drastically reduces the double-down
effect. If the data do not show the predicted pat-
tern then it is perfectly acceptable to suggest a new
3 The author of this manuscript once held a conference talk where he sug-
gested the possibility that one of his own research results may have been
influenced by confirmation bias [106]. Never assume that only others are
prone to bias: even authors of logical fallacy papers may commit fallacies!
hypothesis and/or a putative animal model for fur-
ther research.
When designing the experiment, keep in mind which
statistical analysis would be appropriate for analysing
the data. If the statistical method is chosen post hoc,
then it may not correspond to the chosen design,
and one might be accused of data dredging, which
involves choosing a statistical procedure that is more
likely to produce significant results [107]. Also, keep
in mind which post hoc tests are planned, and ensure
the correct one is chosen to control familywise error
when there are multiple comparisons to be made. It
is highly recommended that effect sizes are reported
for every statistical test: this will give insight into the
strength of the observed phenomenon, and also allow
a more detailed comparison between studies [109].
4. Do a power analysis. For logistical, practical, or eco-
nomic reasons, animal model research may be forced
to use sample sizes smaller than what is ideal. Nev-
ertheless, one should conduct a power analysis to
ascertain how many animals should be tested before
the experiment starts. When doing multiple com-
parisons, it may be difficult to establish the sample
size because the power analysis may only yield the
sample size for an omnibus analysis (the analysis of
the whole, not its individual parts), and not what is
required to reach significance with post hoc tests
[110]. If all the post hoc analyses are of equal interest,
choose the sample size required to achieve power of
0.8 in all comparisons. Alternatively, use a compar-
ison-of-most-interest approach where the sample
size is determined by the power analysis of the post
hoc comparison that is of highest interest [110]. If
a power analysis is not conducted, or not adhered
to, it may be prudent to use a sample size similar to
previously conducted experiments in the literature,
and then do a post hoc power analysis to determine
the power of your study. Once the experiment is
completed and the data analysed, one must never
increase the sample size, because this inflates the
chances of finding a significant result (a form of
confirmation bias) [109, 111, 112].
5. Double-blind the experiment. By doing the experi-
ment double-blind, we severely reduce the risk of
confirmation bias. This means that the experimenter
is blind to the a priori hypothesis of the study, as
well as what group each animal belongs to. How-
ever, in some cases it may be difficult or impossible
to do this. For instance, if the experimental group
has a phenotype that distinguishes them from con-
trols (e.g. white vs. brown rats), then it is difficult to
blind the experimenter. For logistical and monetary
reasons it may also be impractical to have a qualified
experimenter who is blind to the relevant literature
of the study. Also, avoid analysing data prior to the
experiment’s completion, because if the data are not
in line with your predictions then one might implic-
itly influence the experiment to get the data needed
(experimenter bias [14]). Be aware that it is neverthe-
less perfectly acceptable to inspect the data on occa-
sion without statistically analysing it, just to ensure
that the equipment is working as it is supposed to
(or state in advance at what point it is acceptable to
check the data, in case there are circumstances where
you may want to terminate the experiment early).
6. Avoid anthropomorphizing. While it is inevitable to
describe our results in the context of human under-
standing and language, we must be careful not to
attribute the animals with human-like qualities.
Avoid making inferences about the animal’s thoughts,
feelings, inner motivation, or understanding of the
situation. We can report what the animals did, and
what this means in the context of our hypothesis, but
take care not to make assumptions of the inner work-
ings of the animal.
7. Avoid arguing from analogy. No matter how vali-
dated an animal model is, we cannot be certain that
a newly observed effect also applies to humans. If
research on an animal model yields new information
that could give insight into the human target group,
make sure to mention that the data are suggestive, not
conclusive, pending further validation. Remember
that the strength of an animal model is to generate
new knowledge and hypotheses relevant to the target
group, including the assessment of potentially useful
treatments, but that these new possibilities remain
hypothetical at the point of discovery.
8. Attempt to publish, despite a null result. If you pre-
dicted a specific result based on trends in the litera-
ture, but failed to find this result, do not be discour-
aged from publishing the data (especially if you failed
to replicate a result in a series of experiments). This
is particularly important if the experiment had a low
sample size, as null results from such studies are
probably the least likely to be published, thus fuelling
the file drawer problem. By making the data avail-
able via either an article (for instance through Jour-
nal of Articles in Support of the Null Hypothesis) or a
dataset online, then you are actively contributing to
reduce the file drawer problem.
9. Replicate, reproduce, and reconstruct. Replicating an
experiment in order to establish internal validity and
reliability of an animal model is essential. When rep-
licating experiments multiple times, we reduce the
risk that the original finding was a chance result. If
previous replications have succeeded, then attempt
to include a new hypothesis, experimental manipu-
lation, or follow-up experiment during the study to
expand our knowledge of the research question. This
process establishes both internal and external valid-
ity. Finally, reconstruct the experiment on humans,
so that the findings may be attributed across species.
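As a rough illustration of the statistical arithmetic behind points 3 and 4 above, the sample-size and familywise-error calculations can be sketched in standard-library Python. This is a minimal sketch, not a substitute for dedicated power software: the normal approximation slightly underestimates the exact t-based sample sizes, and Holm's step-down procedure is offered as one common familywise-error correction.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided, two-sample comparison,
    using the normal approximation to the t-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical z for two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # z corresponding to desired power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

def holm(pvals, alpha=0.05):
    """Holm's step-down correction: controls familywise error across
    multiple comparisons, with more power than plain Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break   # once one test fails, all larger p-values fail too
    return reject

print(n_per_group(0.8))                   # large effect: ~25 animals per group
print(n_per_group(0.5))                   # medium effect: ~63 per group
print(holm([0.004, 0.03, 0.02, 0.2]))     # only the smallest p-value survives
```

The sample-size figures make concrete why animal studies with fewer than ten subjects per group are underpowered for anything but very large effects, and the Holm routine shows how planned post hoc comparisons can be corrected without the full conservatism of Bonferroni.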
A note onneurological similarities
The principles discussed in this paper have been
addressed in a behavioural context, but it should be
noted that they also apply to neurological evidence for
animal models, though increasing the validity in this case
can operate somewhat differently.
When we find neurological elements that are the same
in both the animal model and the human target group
(that do not exist in controls), we should be careful
about drawing conclusions based on this. Just like behavioural
evidence, the links are suggestive and not necessarily con-
clusive. It is risky to assume that the physiological prop-
erties shared between humans and animals operate the
same way. In drug research, over 90% of drugs that show
effectiveness on animal models fail to work on humans,
a problem called attrition [113]. In the context of animal
models of mental illness, Belzung and Lemoine [71] pro-
posed the concept biomarker validity, which means that
the function of a neurological mechanism is the same
in the animal model and humans, even if the biomarker
responsible for this function may be different across the
species. In other words, the two species may have differ-
ent biological markers, but as long as they operate the
same way, and in turn produce the same symptoms, then
this adds validity to the model.
Of course, in reality things are not this simple. Neuro-
logical evidence is usually not based on the presence of
a single component, but rather multiple elements such
as rate of neurotransmitter release, reuptake, polymor-
phism, neural pathways, drug effectiveness, or a combi-
nation of factors. The core message is that we must be
aware that finding similar neurological elements in both
animals and humans does not mean that they operate the
same way. If we make this assumption, we are arguing
from analogy.
It should be noted that confirmation bias could also be
a problematic issue in neuroscientific research. Garner
[113] illustrates this with a car example: if we believe that
the gas pedal of a car is the cause of car accidents, then
removing the gas pedal from a car will drastically reduce
the accident rate of that car, confirming that indeed the
gas pedal was the cause of car accidents. In neuroscience,
we may knock out a gene or selectively breed strains to
add or remove a genetic component. When the hypoth-
esized behaviour is shown (or not shown), we might
conclude that we have confirmed our hypothesis. The
conclusion could be wrong because it is based on correla-
tion, and thus future replications of this result are likely
to make the same logical error [113].
Closing remarks
In this paper, it has been discussed how animal models
can be susceptible to logical fallacies, bias, and a risk of
getting results that could give a false sense of support for
a putative animal model. Researchers should remember
that behavioural results found in an animal model of a
human condition do not guarantee that this knowl-
edge is applicable to humans. Replicating, reproducing
and reconstructing results over numerous studies will
drastically reduce the probability that the results are
similar by chance alone, although this does not necessar-
ily shed light on why the behaviour occurs. Researchers
should therefore be encouraged to investigate mecha-
nistic validity, meaning what underlying processes are
causing the behaviour. By simply looking at face validity,
we have an increased risk of making errors through logical fallacies and false positives.
Animal models can be very useful for investigating the
mechanisms behind a human condition. This new knowl-
edge can help improve our understanding and treatment
of this condition, but the researcher must not assume that
the observed animal behaviour also applies to humans.
Ultimately, animal models only provide solid evidence for
the animal used, and indicative evidence of human behav-
iour. However, this is also the strength of animal models:
indicative evidence may open the door to new ideas about
human behaviour that were not previously considered.
Through reconstructions, it can be established whether or
not the phenomenon exists in humans, and if the model
has mechanistic validity and predictive validity then this
certainly increases the application of the model, as well as
its value for the progress of human health.
Abbreviations
ADHD: Attention-Deficit/Hyperactivity Disorder; CERN: European Organiza-
tion for Nuclear Research; DSM: Diagnostic and Statistical Manual of Mental
Disorders; SHR: spontaneously hypertensive rat; VPA: valproic acid; WKY:
Wistar Kyoto rat.
Acknowledgements
Rachael Wilner gave valuable insight and feedback throughout multiple ver-
sions of the manuscript, especially into improving the language and structure
of the paper, as well as clarifying several arguments. A conversation with
Øystein Vogt was largely inspirational in terms of writing this article. Magnus
H. Blystad gave feedback that substantiated several claims, particularly the
neurology section. Espen Borgå Johansen offered critical input on several
occasions, which led to some arguments being empirically strengthened.
Carsta Simon’s feedback improved some of the definitions employed in the
article. Other members of the research group Experimental Behavior Analysis:
Translational and Conceptual Research, Oslo and Akershus University College,
are to be thanked for their contribution and feedback, particularly Per Holth,
Rasmi Krippendorf, and Monica Vandbakk.
Competing interests
The author declares that he has no competing interests.
Received: 21 January 2016 Accepted: 1 February 2017
References
1. Salmon M. Introduction to logic and critical thinking. Boston: Wads-
worth Cengage Learning; 2013.
2. Barnes B. About science. New York: Basil Blackwell Inc.; 1985.
3. Kern LH, Mirels HL, Hinshaw VG. Scientists’ understanding of
propositional logic: an experimental investigation. Soc Stud Sci.
4. Tversky A, Kahneman D. Extensional versus intuitive reasoning: the con-
junction fallacy in probability judgment. Psychol Rev. 1983;90:293–315.
5. Kahneman D. Thinking, fast and slow. London: Macmillan; 2011.
6. Haller H, Krauss S. Misinterpretations of significance: a problem stu-
dents share with their teachers. Methods Psychol Res. 2002;7:1–20.
7. Badenes-Ribera L, Frías-Navarro D, Monterde-i-Bort H, Pascual-Soler
M. Interpretation of the P value: a national survey study in academic
psychologists from Spain. Psicothema. 2015;27:290–5.
8. Popper KR. The logic of scientific discovery. London: Hutchinson;
9. Lewens T. The meaning of science. London: Pelican; 2015.
10. Leahey TH. The mythical revolutions of American psychology. Am
Psychol. 1992;47:308–18.
11. Law S. The great philosophers. London: Quercus; 2007.
12. Keuth H. The Philosophy of Karl Popper. Cambridge: Cambridge Univer-
sity Press; 2005.
13. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many
guises. Rev Gen Psychol. 1998;2:175.
14. Rosenthal R, Fode KL. The effect of experimenter bias on the perfor-
mance of the albino rat. Behav Sci. 1963;8:183–9.
15. Inglis M, Simpson A. Mathematicians and the selection task. In: Pro-
ceedings of the 28th international conference on the psychology of
mathematics education; 2004. p. 89–96.
16. Jackson SL, Griggs RA. Education and the selection task. Bull Psychon
Soc. 1988;26:327–30.
17. Hergovich A, Schott R, Burger C. Biased evaluation of abstracts depend-
ing on topic and conclusion: further evidence of a confirmation bias
within scientific psychology. Curr Psychol. 2010;29:188–209.
18. Mahoney MJ. Scientist as subject: the psychological imperative. Phila-
delphia: Ballinger; 1976.
19. Sidman M. Tactics of scientific research. New York: Basic Books; 1960.
20. Moore J. A special section commemorating the 30th anniversary of tac-
tics of scientific research: evaluating experimental data in psychology
by Murray Sidman. Behav Anal. 1990;13:159.
21. Holth P. A research pioneer’s wisdom: an interview with Dr. Murray Sid-
man. Eur J Behav Anal. 2010;12:181–98.
22. Michael J. Flight from behavior analysis. Behav Anal. 1980;3:1.
23. van Wilgenburg E, Elgar MA. Confirmation bias in studies of nestmate
recognition: a cautionary note for research into the behaviour of ani-
mals. PLoS ONE. 2013;8:e53548.
24. Poddar R, Kawai R, Ölveczky BP. A fully automated high-throughput
training system for rodents. PLoS ONE. 2013;8:e83171.
25. Jiang H, Hanna E, Gatto CL, Page TL, Bhuva B, Broadie K. A fully auto-
mated drosophila olfactory classical conditioning and testing system
for behavioral learning and memory assessment. J Neurosci Methods.
26. Oswald ME, Grosjean S. Confirmation bias. In: Pohl R, editor. Cognitive
illusions: a handbook on fallacies and biases in thinking, judgement
and memory. Hove: Psychology Press; 2004. p. 79.
27. Mill JS. A system of logic. London: John W. Parker; 1843.
28. Premack D, Woodruff G. Does the chimpanzee have a theory of mind?
Behav Brain Sci. 1978;1:515–26.
29. Call J, Tomasello M. Does the chimpanzee have a theory of mind? 30
years later. Trends Cogn Sci. 2008;12:187–92.
30. Gomez J-C. Non-human primate theories of (non-human primate)
minds: some issues concerning the origins of mind-reading. In: Car-
ruthers P, Smith PK, editors. Theories of theories of mind. Cambridge:
Cambridge University Press; 1996. p. 330.
31. Povinelli DJ, Bering JM, Giambrone S. Toward a science of other minds:
escaping the argument by analogy. Cogn Sci. 2000;24:509–41.
32. Dutton D, Williams C. A view from the bridge: subjectivity, embodiment
and animal minds. Anthrozoös. 2004;17:210–24.
33. van der Staay FJ, Arndt SS, Nordquist RE. Evaluation of animal models of
neurobehavioral disorders. Behav Brain Funct. 2009;5:11.
34. Robbins T. Homology in behavioural pharmacology: an approach
to animal models of human cognition. Behav Pharmacol.
35. Sagvolden T. Behavioral validation of the spontaneously hypertensive
rat (Shr) as an animal model of attention-deficit/hyperactivity disorder
(Ad/Hd). Neurosci Biobehav Rev. 2000;24:31–9.
36. Sagvolden T, Johansen EB, Wøien G, Walaas SI, Storm-Mathisen J,
Bergersen LH, et al. The spontaneously hypertensive rat model of
ADHD—the importance of selecting the appropriate reference strain.
Neuropharmacology. 2009;57:619–26.
37. Sagvolden T, Aase H, Zeiner P, Berger D. Altered reinforcement mecha-
nisms in attention-deficit/hyperactivity disorder. Behav Brain Res.
38. Wultz B, Sagvolden T. The hyperactive spontaneously hypertensive rat
learns to sit still, but not to stop bursts of responses with short inter-
response times. Behav Genet. 1992;22:415–33.
39. Malloy-Diniz L, Fuentes D, Leite WB, Correa H, Bechara A. Impulsive
behavior in adults with attention deficit/hyperactivity disorder: char-
acterization of attentional, motor and cognitive impulsiveness. J Int
Neuropsychol Soc. 2007;13:693–8.
40. Evenden JL. The pharmacology of impulsive behaviour in rats Iv: the
effects of selective serotonergic agents on a paced fixed consecutive
number schedule. Psychopharmacology. 1998;140:319–30.
41. Fox AT, Hand DJ, Reilly MP. Impulsive choice in a rodent model of atten-
tion-deficit/hyperactivity disorder. Behav Brain Res. 2008;187:146–52.
42. Sonuga-Barke EJ. Psychological heterogeneity in Ad/Hd—a dual
pathway model of behaviour and cognition. Behav Brain Res.
43. Berger DF, Sagvolden T. Sex differences in operant discrimination
behaviour in an animal model of attention-deficit hyperactivity disor-
der. Behav Brain Res. 1998;94:73–82.
44. Uebel H, Albrecht B, Asherson P, Börger NA, Butler L, Chen W, et al.
Performance variability, impulsivity errors and the impact of incentives
as gender-independent endophenotypes for ADHD. J Child Psychol
Psychiatry. 2010;51:210–8.
45. Johansen EB, Killeen PR, Sagvolden T. Behavioral variability, elimination
of responses, and delay-of-reinforcement gradients in Shr and Wky rats.
Behav Brain Funct. 2007;3:1.
46. Adriani W, Caprioli A, Granstrem O, Carli M, Laviola G. The spontane-
ously hypertensive-rat as an animal model of ADHD: evidence for
impulsive and non-impulsive subpopulations. Neurosci Biobehav Rev.
47. Scheres A, Oosterlaan J, Sergeant JA. Response execution and inhibi-
tion in children with AD/HD and other disruptive disorders: the role of
behavioural activation. J Child Psychol Psychiatry. 2001;42:347–57.
48. Sonuga-Barke E, Taylor E, Sembi S, Smith J. Hyperactivity and delay
aversion—I. The effect of delay on choice. J Child Psychol Psychiatry.
49. Sonuga-Barke EJ, Williams E, Hall M, Saxton T. Hyperactivity and delay
aversion III: the effect on cognitive style of imposing delay after errors. J
Child Psychol Psychiatry. 1996;37:189–94.
50. Kuntsi J, Oosterlaan J, Stevenson J. Psychological mechanisms in
hyperactivity: I response inhibition deficit, working memory impair-
ment, delay aversion, or something else? J Child Psychol Psychiatry.
51. Solanto MV, Abikoff H, Sonuga-Barke E, Schachar R, Logan GD, Wigal T,
et al. The ecological validity of delay aversion and response inhibi-
tion as measures of impulsivity in AD/HD: a supplement to the NIMH
multimodal treatment study of AD/HD. J Abnorm Child Psychol.
52. Dalen L, Sonuga-Barke EJ, Hall M, Remington B. Inhibitory deficits, delay
aversion and preschool AD/HD: implications for the dual pathway
model. Neural Plast. 2004;11:1–11.
53. Bitsakou P, Psychogiou L, Thompson M, Sonuga-Barke EJ. Delay aversion
in attention deficit/hyperactivity disorder: an empirical investigation of
the broader phenotype. Neuropsychologia. 2009;47:446–56.
54. Tripp G, Alsop B. Sensitivity to reward delay in children with atten-
tion deficit hyperactivity disorder (ADHD). J Child Psychol Psychiatry.
55. Marx I, Hübner T, Herpertz SC, Berger C, Reuter E, Kircher T, et al. Cross-
sectional evaluation of cognitive functioning in children, adolescents
and young adults with ADHD. J Neural Transm. 2010;117:403–19.
56. Marco R, Miranda A, Schlotz W, Melia A, Mulligan A, Müller U, et al. Delay
and reward choice in ADHD: an experimental test of the role of delay
aversion. Neuropsychology. 2009;23:367–80.
57. Garcia A, Kirkpatrick K. Impulsive choice behavior in four strains of rats:
evaluation of possible models of attention deficit/hyperactivity disor-
der. Behav Brain Res. 2013;238:10–22.
58. Bizot J-C, Chenault N, Houzé B, Herpin A, David S, Pothion S, et al.
Methylphenidate reduces impulsive behaviour in Juvenile Wistar
rats, but not in adult Wistar, Shr and Wky rats. Psychopharmacology.
59. Pardey MC, Homewood J, Taylor A, Cornish JL. Re-evaluation of an
animal model for ADHD using a free-operant choice task. J Neurosci
Methods. 2009;176:166–71.
60. Hayden BY. Time discounting and time preference in animals: a critical
review. Psychon Bull Rev. 2015;23:1–15.
61. Scheres A, Dijkstra M, Ainslie E, Balkan J, Reynolds B, Sonuga-Barke E,
et al. Temporal and probabilistic discounting of rewards in children and
adolescents: effects of age and ADHD symptoms. Neuropsychologia.
62. Sonuga-Barke EJ, Sergeant JA, Nigg J, Willcutt E. Executive dysfunction
and delay aversion in attention deficit hyperactivity disorder: nosologic
and diagnostic implications. Child Adolesc Psychiatr Clin N Am.
63. Botanas CJ, Lee H, de la Peña JB, de la Peña IJ, Woo T, Kim HJ, et al.
Rearing in an enriched environment attenuated hyperactivity and
inattention in the spontaneously hypertensive rats, an animal model of
attention-deficit hyperactivity disorder. Physiol Behav. 2016;155:30–7.
64. Sjoberg EA, Holth P, Johansen EB. the effect of delay, utility, and magni-
tude on delay discounting in an animal model of attention-deficit/hyper-
activity disorder (ADHD): a systematic review. In: Association of behavior
analysis international 42nd annual convention. Chicago, IL; 2016.
65. Willner P. Validation criteria for animal models of human mental disor-
ders: learned helplessness as a paradigm case. Prog Neuropsychophar-
macol Biol Psychiatry. 1986;10:677–90.
66. Geyer MA, Markou A. Animal models of psychiatric disorders. In: Bloom
FE, Kupfer DJ, editors. Psychopharmacology: the fourth generation of
progress. New York: Raven Press; 1995. p. 787–98.
67. McKinney W. Animal models of depression: an overview. Psychiatr Dev.
68. Koob GF, Heinrichs SC, Britton K. Animal models of anxiety disorders.
In: Schatzberg AF, Nemeroff CB, editors. The American Psychiatric Press
textbook of psychopharmacology. 2nd ed. Washington: American
Psychiatric Press; 1998. p. 133–44.
69. Sarter M, Bruno JP. Animal models in biological psychiatry. In: D’Haenen
H, den Boer JA, Willner P, editors. Biological psychiatry. Chichester:
Wiley; 2002. p. 37–44.
70. Weiss JM, Kilts CD. Animal models of depression and schizophrenia. In:
Schatzberg AF, Nemeroff CB, editors. The American Psychiatric Press
textbook of psychopharmacology. 2nd ed. Washington: American
Psychiatric Press; 1998. p. 89–131.
71. Belzung C, Lemoine M. Criteria of validity for animal models of psychi-
atric disorders: focus on anxiety disorders and depression. Biol Mood
Anxiety Disord. 2011;1(1):9. doi:10.1186/2045-5380-1-9.
72. Tricklebank M, Garner J. The possibilities and limitations of animal
models for psychiatric disorders. Cambridge: RSC Drug Discovery Royal
Society of Chemistry; 2012. p. 534–57.
73. Sagvolden T, Johansen EB. Rat models of ADHD. In: Stanford C, Tannock
R, editors. Behavioral neuroscience of attention-deficit/hyperactivity
disorder and its treatments. Berlin: Springer; 2012. p. 301–15.
74. Association AP. Diagnostic and statistical manual of mental disorders
(Dsm-5®). Arlington County: American Psychiatric Pub; 2013.
75. Nestler EJ, Hyman SE. Animal models of neuropsychiatric disorders. Nat
Neurosci. 2010;13:1161–9.
76. Gould TD, Einat H. Animal models of bipolar disorder and mood stabi-
lizer efficacy: a critical need for improvement. Neurosci Biobehav Rev.
77. Karatekin C. A comprehensive and developmental theory of ADHD is
tantalizing, but premature. Behav Brain Sci. 2005;28:430–1.
78. Willcutt EG, Doyle AE, Nigg JT, Faraone SV, Pennington BF. Validity of the
executive function theory of attention-deficit/hyperactivity disorder: a
meta-analytic review. Biol Psychiatry. 2005;57:1336–46.
79. Einat H, Manji HK. Cellular plasticity cascades: genes-to-behavior
pathways in animal models of bipolar disorder. Biol Psychiatry.
80. Sontag TA, Tucha O, Walitza S, Lange KW. Animal models of attention
deficit/hyperactivity disorder (ADHD): a critical review. ADHD Atten
Deficit Hyperact Disord. 2010;2:1–20.
81. Schneider T, Przewłocki R. Behavioral alterations in rats prenatally
exposed to valproic acid: animal model of autism. Neuropsychophar-
macology. 2005;30:80–9.
82. Mehta MV, Gandal MJ, Siegel SJ. Mglur5-antagonist mediated reversal
of elevated stereotyped, repetitive behaviors in the VPA model of
autism. PLoS ONE. 2011;6:e26077.
83. Markram K, Rinaldi T, La Mendola D, Sandi C, Markram H. Abnormal fear
conditioning and amygdala processing in an animal model of autism.
Neuropsychopharmacology. 2008;33:901–12.
84. Snow WM, Hartle K, Ivanco TL. Altered morphology of motor cortex
neurons in the VPA rat model of autism. Dev Psychobiol. 2008;50:633–9.
85. Sagvolden T, Dasbanerjee T, Zhang-James Y, Middleton F, Faraone S.
Behavioral and genetic evidence for a novel animal model of attention-
deficit/hyperactivity disorder predominantly inattentive subtype. Behav
Brain Funct. 2008;4:b54.
86. van der Staay FJ. Animal models of behavioral dysfunctions: basic
concepts and classifications, and an evaluation strategy. Brain Res Rev.
87. Gómez O, Juristo N, Vegas S. Replication, reproduction and re-analysis:
three ways for verifying experimental findings. In: Proceedings of the
1st international workshop on replication in empirical software engi-
neering research (RESER 2010). Cape Town, South Africa; 2010.
88. Cartwright N. Replicability, reproducibility, and robustness: comments
on Harry Collins. Hist Polit Econ. 1991;23:143–55.
89. Slavin R, Smith D. The relationship between sample sizes and effect
sizes in systematic reviews in education. Educ Eval Policy Anal.
90. Würbel H. Behaviour and the standardization fallacy. Nat Genet.
91. Richter SH, Garner JP, Würbel H. Environmental standardization: cure
or cause of poor reproducibility in animal experiments? Nat Methods.
92. Josef van der Staay F, Arndt S, Nordquist R. The standardization-general-
ization dilemma: a way out. Genes Brain Behav. 2010;9:849–55.
93. Rosenthal R. The file drawer problem and tolerance for null results.
Psychol Bull. 1979;86:638.
94. Sterling TD. Publication decisions and their possible effects on infer-
ences drawn from tests of significance—or vice versa. J Am Stat Assoc.
95. Ferguson CJ, Heene M. A vast graveyard of undead theories publication
bias and psychological science’s aversion to the null. Perspect Psychol
Sci. 2012;7:555–61.
96. Arkes HR, Blumer C. The psychology of sunk cost. Organ Behav Hum
Decis Process. 1985;35:124–40.
97. Brumfiel G. Particles break light-speed limit. Nature. 2011. doi:10.1038/
98. Matson J. Faster-than-light neutrinos? Physics luminaries voice doubts.
Sci Am. 2011.
nos/. Accessed 13 Feb 2017.
99. Davids E, Zhang K, Tarazi FI, Baldessarini RJ. Animal models of attention-
deficit hyperactivity disorder. Brain Res Rev. 2003;42:1–21.
100. Klauck SM, Poustka A. Animal models of autism. Drug Discov Today Dis
Models. 2006;3:313–8.
Page 13 of 13
Sjoberg Behav Brain Funct (2017) 13:3
We accept pre-submission inquiries
Our selector tool helps you to find the most relevant journal
We provide round the clock customer support
Convenient online submission
Thorough peer review
Inclusion in PubMed and all major indexing services
Maximum visibility for your research
Submit your manuscript at
Submit your next manuscript to BioMed Central
and we will help you at every step:
101. Arguello PA, Gogos JA. Schizophrenia: modeling a complex psychiatric
disorder. Drug Discov Today Dis Models. 2006;3:319–25.
102. Schmidt MV, Müller MB. Animal models of anxiety. Drug Discov Today
Dis Models. 2006;3:369–74.
103. Deussing JM. Animal models of depression. Drug Discov Today Dis
Models. 2006;3:375–83.
104. Pautasso M. Worsening file-drawer problem in the abstracts of natural,
medical and social science databases. Scientometrics. 2010;85:193–202.
105. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis:
prevention, assessment and adjustments. Chichester: Wiley; 2006.
106. Sjoberg EA, D’Souza A, Cole GG. An evolutionary hypothesis concern-
ing female inhibition abilities: a literature review. In: Norwegian behav-
ior analysis society conference. Storeell, Norway; 2016.
107. Smith GD, Ebrahim S. Data dredging, bias, or confounding: they can all
get you into the BMJ and the friday papers. Br Med J. 2002;325:1437–8.
108. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology
undisclosed flexibility in data collection and analysis allows presenting
anything as significant. Psychol Sci 2011:0956797611417632.
109. Sullivan GM, Feinn R. Using effect size—or why the P value is not
enough. J Grad Med Educ. 2012;4:279–82.
110. Brooks GP, Johanson GA. Sample size considerations for multi-
ple comparison procedures in Anova. J Mod Appl Stat Methods.
111. Royall RM. The effect of sample size on the meaning of significance
tests. Am Stat. 1986;40:313–5.
112. Nakagawa S, Cuthill IC. Effect size, confidence interval and statistical
significance: a practical guide for biologists. Biol Rev. 2007;82:591–605.
113. Garner JP. The significance of meaning: why do over 90% of behavioral
neuroscience results fail to translate to humans, and what can we do to
fix it? ILAR J. 2014;55:438–56.
114. Ramboz S, Oosting R, Amara DA, Kung HF, Blier P, Mendelsohn M, et al.
Serotonin receptor 1a knockout: an animal model of anxiety-related
disorder. Proc Natl Acad Sci. 1998;95:14476–81.
115. David DJ, Samuels BA, Rainer Q, Wang J-W, Marsteller D, Mendez I, et al.
Neurogenesis-dependent and-independent effects of fluoxetine in an
animal model of anxiety/depression. Neuron. 2009;62:479–93.
116. Siesser W, Zhao J, Miller L, Cheng SY, McDonald M. Transgenic mice
expressing a human mutant Β1 thyroid receptor are hyperactive,
impulsive, and inattentive. Genes Brain Behav. 2006;5:282–97.
117. Gourley SL, Taylor JR. Recapitulation and reversal of a persistent
depression-like syndrome in rodents. Curr Protoc Neurosci. 2009;Chap-
ter 9:Unit-9.32. doi:10.1002/0471142301.ns0932s49.
118. Willner P. Chronic mild stress (CMS) revisited: consistency and behav-
ioural-neurobiological concordance in the effects of CMS. Neuropsy-
chobiology. 2005;52:90–110.
119. Szechtman H, Sulis W, Eilam D. Quinpirole induces compulsive check-
ing behavior in rats: a potential animal model of obsessive-compulsive
disorder (OCD). Behav Neurosci. 1998;112:1475.
120. King JA, Abend S, Edwards E. Genetic predisposition and the develop-
ment of posttraumatic stress disorder in an animal model. Biol Psychia-
try. 2001;50:231–7.
121. Lipska BK, Jaskiw GE, Weinberger DR. Postpubertal emergence of
hyperresponsiveness to stress and to amphetamine after neonatal
excitotoxic hippocampal damage: a potential animal model of schizo-
phrenia. Neuropsychopharmacology. 1993;9:67–75.
122. Lodge DJ, Behrens MM, Grace AA. A loss of parvalbumin-containing
interneurons is associated with diminished oscillatory activity in an
animal model of schizophrenia. J Neurosci. 2009;29:2344–54.
123. Kesby JP, Burne TH, McGrath JJ, Eyles DW. Developmental vitamin D
deficiency alters Mk 801-induced hyperlocomotion in the adult rat: an
animal model of schizophrenia. Biol Psychiatry. 2006;60:591–6.