Evidence and Value Freedom



This chapter examines values and scientific evidence. It argues that one way to put the ideal of value freedom is this: the fact that believing a proposition would have good or bad ethical consequences is not evidence for or against its truth. The evidence relation is a three-place relation among data, hypothesis, and background knowledge, and background knowledge can thereby link fact and value. It is also argued that there is a qualified sense in which the value-free ideal is right: judgments about the moral consequences of believing a proposition cannot provide new information about its truth.
Elliott Sober
In these postpositivist times, the slogan “science is value free”
is frequently rejected disdainfully as a vestige of a bygone age. But, as of-
ten happens with babies and their bathwater, there may be something
worthwhile in this slogan that we should try to identify and retain. To
this end, I’ll begin with two absurdities:
(1) Scientists aren’t influenced by their ethical and political values
when they do science.
(2) Scientific inference is independent of values.
I don’t know if any philosopher ever believed either of these proposi-
tions. Proposition (1) is false for the simple reason that scientists are peo-
ple, just like the rest of us. Perhaps they often strive to leave their ethical
and political values at the laboratory door, but who ever thought that all
of them have this aim, and that those who do succeed 100% of the time?
This, by the way, does not mean that we get to assume that scientific ac-
tivity can be explained solely in terms of the ethical and political values
that scientists have. Rather, recognizing the absurdity of (1) should lead
us to approach such psychological and sociological questions on a case-
by-case basis. Scientists may vary among themselves, and a single scientist
may be more influenced by these values in some contexts than in oth-
ers. Maybe some scientific work proceeds completely independently of
these values and other parts of science are entirely driven by them, and
perhaps a good deal of the real world falls somewhere between these
two extremes.1
Proposition (2) is absurd because scientific inference is regulated
by normative rules. Scientists try to construct good tests of their hy-
potheses, they judge some explanations good and others bad, and they
say that some inferences are flawed or weak and others are strong. The
words I have italicized indicate that scientists are immersed in tasks of
evaluation. They impose their norms on the ideational entities they
construct. However, the obvious falsehood of (2) leaves it open that a re-
stricted version of that proposition might be on the right track:
(3) The fact that believing a proposition would have good or bad ethi-
cal or political consequences is not evidence for or against that
proposition’s being true.
Is this proposition, or some refinement of it, the kernel of truth in the
frequently misstated idea that “science is value free”?
We should not accept proposition (3) just because it “sounds right.”
After all, the evidence relation often connects facts that seem at first
glance to be utterly unrelated. The proposition that the dinosaurs went
extinct 65 million years ago because of a meteor hit and the proposition
that there now is an iridium layer in certain rocks may appear to have
nothing to do with each other. How could the presence of iridium in
present-day rocks bear on the question of why the dinosaurs went ex-
tinct so long ago? Well, appearances to the contrary, there may well be
such a connection (Alvarez and Asaro 1990). Why, then, should we be
so sure that the ethical consequences of believing a proposition have no
bearing on whether the proposition is true? This is a good question, and
in the absence of a good answer, we should not complacently assume
that (3) is correct.
Sometimes people believe (3) because they think there are no ethi-
cal truths in the first place. If there are no ethical truths, then ethical
truths don’t provide evidence for anything. Whether or not nonfactual-
ism is correct, I think it fruitful to consider proposition (3) on the
assumption that there are ethical facts. In our everyday lives, we treat
ethical statements as if some of them are true. Perhaps this is a mistake,
but for present purposes, let’s take that practice at face value. If some
normative ethical statements are true, what is to prevent them from
standing in evidential relations with nonethical, scientific propositions?
To investigate this question, let’s begin with a useful example of the
evidence relation at work—smoke is evidence of fire. This relation
holds because the probability of fire if smoke is present exceeds the
probability of fire if smoke is absent. The two events ±smoke and ±fire
are correlated. This is a perfectly objective relation that obtains between
smoke and fire; it obtains whether or not anyone believes that it does.
Understood in this way, the evidence relation has an interesting prop-
erty: It is symmetric. If smoke is evidence of fire, then fire is evidence of
smoke. This means that proposition (3) entails a further claim:
(4) The fact that a proposition is true is not evidence that believing the
proposition would have good or bad ethical or political consequences.
If (3) is true, so is (4). But surely there are counterexamples to proposi-
tion (4). Consider a physician who will give a drug to her patients if she
thinks the drug is safe but will withhold the drug if she thinks it is not.
Suppose that the drug will provide significant health benefits if it is safe.
And suppose further that the physician is a pretty good judge of whether
the drug is safe. We then have a causal chain in which earlier links raise
the probability of later ones:
the drug is safe → the doctor thinks the drug is safe → the patients receive
the drug → good consequences accrue to the patients
In this instance, correlation is transitive. The nonethical statement “the
drug is safe” is therefore evidence for the ethical statement “good con-
sequences accrue to the patients.” An ethical and a nonethical fact are
evidentially related, just like smoke and fire. I conclude that (4) and
therefore (3) are false (Stephens 2000).
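The probability-raising structure of this chain can be checked numerically. The sketch below uses invented link probabilities (all numbers are hypothetical, chosen only for illustration); it propagates probability along the chain and confirms that under these Markov-style assumptions "the drug is safe" raises the probability of "good consequences accrue to the patients." Probability-raising along a chain is not transitive in general, which is why the text says correlation is transitive "in this instance."

```python
# Hypothetical link probabilities for the physician example.
p_believe = {True: 0.9, False: 0.2}    # P(doctor believes drug is safe | safe?)
p_receive = {True: 0.95, False: 0.05}  # P(patients receive drug | doctor believes?)
p_good = {                             # P(good consequences | received?, safe?)
    (True, True): 0.9, (True, False): 0.2,
    (False, True): 0.3, (False, False): 0.3,
}

def p_good_given(safe):
    # Propagate probability along the chain:
    # safe -> doctor believes -> patients receive -> good consequences.
    total = 0.0
    for believes in (True, False):
        pb = p_believe[safe] if believes else 1 - p_believe[safe]
        for receives in (True, False):
            pr = p_receive[believes] if receives else 1 - p_receive[believes]
            total += pb * pr * p_good[(receives, safe)]
    return total

# "The drug is safe" raises the probability of "good consequences accrue."
assert p_good_given(True) > p_good_given(False)
```

With these numbers, P(good | safe) works out to 0.816 and P(good | not safe) to 0.277, so the nonethical statement is positively relevant to the ethical one. Changing the entries for the doctor's reliability or benevolence weakens or reverses the relation, which anticipates the reply considered next.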
It might be replied that (3) and (4) can be saved from refutation by
focusing exclusively on the two propositions under discussion. Consider
just the two statements “the drug is safe” and “good consequences ac-
crue to the patients.” In the absence of any further information, there is
no saying whether these statements are positively evidentially relevant
to each other, negatively relevant, or entirely irrelevant. We assumed in
our story that the physician is well meaning and discerning. This as-
sumption was enough to bring the two statements into a positive rela-
tion. However, if the doctor were malevolent or a very bad judge of drug
safety, the two statements would stand in a relation of negative rele-
vance or be mutually irrelevant. With no further information, the rela-
tion of the two statements is indeterminate. This is a probabilistic ana-
log of Duhem’s thesis (Sober 1988).
The trouble with this reply is that what is true for the pair of state-
ments about the physician is true for practically any pair of statements.2
The evidence relation isn’t binary; it has at least three places. When one
statement is evidence for a second, this usually is due to the mediation
of a third, which provides background information.3 Given this, it is
hardly surprising that ethical facts can provide evidence about scientific
propositions; they can do this if one’s background assumptions include
other, ethical claims.
How is this criticism of (3) and (4) related to Hume’s famous dic-
tum that an ought cannot be inferred from an is? Although Hume was
talking about deduction, it is natural to generalize his thesis to a claim
about evidential relationships that are nondeductive and probabilistic.
What one needs to say here is that an is-statement and an ought-statement
are not evidentially relevant to each other, unless one’s background as-
sumptions include other ought-statements. But because one’s back-
ground assumptions often do include such “bridge principles,” evidential
reasoning can run from is to ought and from ought to is.4
How might propositions (3) and (4) be refined? What is needed is a
way to separate the pattern exemplified by the physician from a second
class of examples, which William James (1897) addressed in his famous
discussion of the will to believe. James argued that believing in God can
provide substantial psychological benefits. Faith in God can help peo-
ple feel that their lives are meaningful, which may save them from de-
pression and allow them to lead worthwhile lives. It is debatable how
general this point is; there are enough happy atheists and depressed the-
ists around to make one suspect that, at least for many people, mental
health is independent of theological conviction. But let’s restrict our at-
tention to people who fill James’s bill. These people would benefit from
believing that God exists; I’ll go further and say that it is a good thing for
these people to embrace theism and thus save themselves from the
slough of despond. However, I still want to claim that the fact that they
would benefit from believing in God provides no evidence that God in
fact exists. It is examples like this that make proposition (3) sound so
plausible. What distinguishes James’s theist from the well-meaning
physician?

                          P is true    P is false
 S believes P                 w            x
 S does not believe P         y            z

Table 5.1. Comparing utilities as a function of what is believed and what is
the case.

We can separate these cases by considering the two-by-two table
above (table 5.1). The entry in a cell represents utility—how good or
bad the consequences are of being in that situation. In the physician
case (where P = “the drug is safe”), there are both “vertical” and “hori-
zontal” effects. The well-being of the patients is affected both by
whether the drug is safe (w > x)5 and by what the physician believes
(w > y and z > x). In the case of James’s believer (where P = “God exists”),
however, there are only vertical effects. As far as the individual’s psycho-
logical well-being is concerned, the only thing that matters is that he
or she believes in God (w > y and x > z); whether God actually exists
doesn’t matter (w = x and y = z).
This, I think, provides the key to revising propositions (3) and (4).
Our question—when do the ethical consequences of believing a propo-
sition provide evidence as to whether the proposition is true?—can be
represented by using the tools of decision theory. We begin by identify-
ing the expected value of each of the two “acts.”6
EV[Believe P] = wp + x(1 − p)
EV[Don’t Believe P] = yp + z(1 − p).
Here p denotes the probability that the proposition P is true (and I as-
sume that acts are independent of states of the world). When does the
fact that EV[Believe P] >EV[Don’t Believe P] provide information about
the value of p? A little algebra reveals that
EV[Believe P] > EV[Don’t Believe P] if and only if p/(1 − p) > (z − x)/
(w − y).
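The “little algebra” can be spelled out. Assuming 0 < p < 1 and w > y (which holds in both examples, and which guarantees that dividing by (w − y) preserves the direction of the inequality):

```latex
\begin{align*}
EV[\text{Believe } P] &> EV[\text{Don't Believe } P]\\
wp + x(1-p) &> yp + z(1-p)\\
(w - y)\,p &> (z - x)(1 - p)\\
\frac{p}{1-p} &> \frac{z - x}{w - y}.
\end{align*}
```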
Suppose that the left-hand inequality is true. When will filling in the
values for the utilities w, x, y, and z provide a nontrivial lower or upper
bound on the value of p? This fails to happen in the case of James’s the-
ist because (w − y) is positive while (z − x) is negative. With these values,
all that follows is that p/(1 − p) must be greater than some negative num-
ber; this is entirely uninformative, since no ratio of probabilities can be
negative. The case of the physician is different. Here (z − x) and (w − y)
are both positive; therefore, their ratio provides a nontrivial lower bound
for the value of p.7 Thus, science and ethics are not always as separate as
propositions (3) and (4) suggest; sometimes information about the ethi-
cal consequences of believing a proposition does provide information
about the probability that the proposition is true.
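The two cases can also be checked numerically. The sketch below uses invented payoffs (the numbers are hypothetical): a physician-style matrix with both vertical and horizontal effects, and a James-style matrix with vertical effects only.

```python
def ev_believe(p, w, x):
    # Expected value of believing P, when P has probability p of being true.
    return w * p + x * (1 - p)

def ev_disbelieve(p, y, z):
    # Expected value of not believing P.
    return y * p + z * (1 - p)

# Physician-style payoffs (hypothetical): w > x, w > y, and z > x.
w, x, y, z = 10, -5, 2, 0
bound = (z - x) / (w - y)  # (z - x)/(w - y) = 5/8: a nontrivial lower bound
for i in range(1, 100):
    p = i / 100
    # EV[Believe] > EV[Don't] holds exactly when p/(1 - p) exceeds the bound.
    assert (ev_believe(p, w, x) > ev_disbelieve(p, y, z)) == (p / (1 - p) > bound)

# James-style payoffs (hypothetical): only vertical effects, so w = x and y = z.
w2, x2, y2, z2 = 5, 5, 0, 0
# Here (z - x)/(w - y) is negative, so believing is better at every value of p:
# the comparison of expected values tells us nothing about p.
assert all(ev_believe(i / 100, w2, x2) > ev_disbelieve(i / 100, y2, z2)
           for i in range(1, 100))
```

In the physician case, learning that believing is the better act tells us that p/(1 − p) > 5/8, i.e., p > 5/13; in the James case, learning the same thing rules out no value of p at all.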
I have represented the ethical consequences of believing a proposi-
tion in terms of the expected utility of doing so. This may sound as if it
requires a consequentialist ethics, but in fact it does not. A nonconse-
quentialist also can enter payoffs in the four cells. Furthermore, the ar-
gument is general. Utilities may be calculated in accordance with an
ethical theory or on some nonethical basis. What we have here is a gen-
eral format in which judgments about which acts are better, plus infor-
mation about the utilities of outcomes, can have implications about the
probabilities of propositions.8
Before phoning the National Science Foundation with the news
that ethics can be a source of evidence for scientific claims, we should
reflect on the fact that the expected utility of an action is a composite
quantity, built up from probabilities and utilities. If one’s ethical judg-
ments about which actions one should perform are based on comparing
their expected values, then ethical judgments require information about
probabilities for their formulation. If so, how can ethical judgments be a
source of information about probabilities? This point does not rescue
propositions (3) and (4) from counterexample, but it does suggest a new
way to think about the problem.
There is a special circumstance in which it is possible to decide which
of the two actions (believe P, or don’t) is better without any information
about the probability that P is true. This is the case in which one action
dominates the other. James’s argument is of this type: He contends that
belief in God is beneficial, whether or not God in fact exists.9 However,
we have already seen that the inequality of the expected utilities pro-
vides no information about the probabilities in James’s case. Without
dominance, no conclusion about which action is better can be reached
unless one already has information about the probabilities. What we
have here is an instance of the maxim “out of nothing, nothing comes.”
If comparing the ethical consequences of believing P and of not believ-
ing P has implications about the probability of P, this must be because
the description of the ethical consequences already has built into it
some information about those probabilities. Thus, ethical facts (about
expected utilities) and scientific facts (about probabilities) are con-
nected, contrary to what (3) and (4) assert. However, the problem is that
this connection is useless; we can’t use ethical information to gain infor-
mation we don’t already have about probabilities.
The situation would be different if we were able to discover which
actions are better than which others, when dominance fails, without al-
ready having to have information about probabilities. For example, if
there were an infallible guru who would simply tell us what to do, and
who would reveal the utilities that go with different states of the world,
we could use these inputs to obtain new information about probabili-
ties. But in the absence of such an authority, we are left with the con-
clusion that our access to information about what we should do must be
based on information about probabilities (except when there is domi-
nance). When an ethical conclusion requires information about proba-
bilities (as in the case of the physician), that conclusion can’t be a
source of new information about those probabilities. And when the eth-
ical conclusion can be reached without information about probabilities
(as in the case that James describes), the conclusion tells us nothing
about the probabilities. This suggests the following dilemma argument:
The question of whether believing P has better ethical consequences than
not believing P either depends for its answer on information about the
probability of P, or it does not.
If it does so depend, then we need information about probabilities to an-
swer the ethical question, and so the ethical judgment cannot supply
information about probabilities that we don’t already have.
If it does not so depend, then the ethical judgment has no implications
about the probability of P.
(5) Judgments about the ethical consequences of believing P cannot
supply new information about the probability of P.
The conclusion of this argument, proposition (5), is a reasonable suc-
cessor to the failed propositions (3) and (4).
The argument just presented is reminiscent of Rudner’s (1953) well-
known argument that the scientist qua scientist makes value judgments.10
Rudner describes a physician who must decide whether a drug is safe and
argues that this decision must be based on considering ethical features of
the four possible outcomes depicted in table 5.1, which we have already
discussed. Rudner’s argument elicited two criticisms. Levi (1967) con-
tended that accepting a proposition and acting on one’s belief are distinct
and that the former should not be based on ethical values; Jeffrey (1956)
maintained that science is not in the business of accepting and rejecting
but merely seeks to assign probabilities to hypotheses. My own argument
is neutral on Rudner’s position. Perhaps deciding what to believe depends
on ethical values; perhaps it does not. My point is that no matter how one
decides what to believe, one still can consider what the ethical conse-
quences are of that decision. My question was whether this ethical con-
sideration has implications concerning the probabilities of hypotheses. It
does in the case of the physician but not in the case of James’s theist.
In the example about the physician, and in many other examples of
moral deliberation about which action to perform, one’s ethical deci-
sion depends on matters of scientific fact. In terms of table 5.1, the ubiq-
uitous pattern is that an inequality gets reversed as one moves from the
first column to the second. Although one’s ethical decision thus de-
pends on a judgment about a matter of scientific fact, it is possible to
form a judgment about the scientific facts without having a commit-
ment, one way or the other, on the ethical question. The physician can’t
decide whether to administer the drug without knowing something
about its probability of being safe, but it is perfectly possible to discover
whether a drug is safe without having a view, one way or the other, on
whether unsafe drugs should be withheld from patients. Moral ignora-
muses can assess the weight of evidence, but scientific ignoramuses can-
not make good moral decisions (when those decisions depend, as they
almost always do, on scientific matters of fact).
Let’s review our progress from propositions (1) and (2) through (3)
and (4) and then to (5). Proposition (1) concerns the behavior of scien-
tists, whereas (3) and (4) concern the logic of various scientific concepts.
Proposition (2) ambiguously straddles this distinction; “scientific infer-
ence” can be taken to refer to what scientists do or to the formal proper-
ties of various types of argument. Proposition (5) addresses the concept
of evidence; it does not assert that scientists are immune from political
and ethical influence when they decide whether one proposition is evi-
dence for another. More specifically, the claim is not that ethical values
(represented by the expected utilities of actions and the utilities of out-
comes) have no implications about the probabilities of hypotheses, but
that ethical inputs are not needed to estimate those probabilities. This is
why looking to ethics for evidence concerning the truth of scientific hy-
potheses is to place the cart before the horse.
1. I am grateful to Ellery Eells and Dan Hausman for useful comments.
2. The exception arises when one statement deductively entails the other; then they
must be positively relevant or of zero relevance, and negative relevance is ruled out.
3. I say “at least” three places because in most scientific contexts, what one can dis-
cuss is whether the evidence discriminates between a pair of hypotheses, given a set of
background assumptions. See Sober (1994a) for discussion.
4. Does the “ought implies can” principle undermine Hume’s thesis? The princi-
ple’s contrapositive asserts that if it is impossible for an agent to perform an action, then
it is false that the agent ought to do so. Here an is implies the negation of an ought.
Hume’s thesis can be preserved by insisting that the negation of an ought-statement is
not itself an ought-statement. Similar reasoning is required if one wishes to reconcile
Hume’s thesis with the fact that philosophers have presented philosophical arguments
(of varying quality) for the claim that normative ethical statements lack truth values.
These arguments do not contain premises that are normative ethical statements. For ex-
ample, Harman (1977) and Ruse and Wilson (1986) each present parsimony arguments
for the nonexistence of ethical facts; see Sober (2005) and (1994b), respectively, for dis-
cussion of each.
5. I take it that it doesn’t matter whether the drug is safe if the doctor doesn’t believe
that it is (y =z), because patients won’t receive the drug in that situation regardless of
whether the drug would be good for them.
6. It might be suggested that believing a proposition is not an action, in the sense
that it is not subject to the will. This point is sometimes used against Pascal’s wager, but it
is an objection that Pascal successfully addressed: He says that if absorbing his argument
does not instantly trigger belief, one should go live among religious people so that habits
of belief will gradually take hold. Believing a proposition is like other “nonbasic actions”:
Being president of the United States isn’t something one can directly bring about by an
act of will, but this does not place it outside the domain of decision theory. For further dis-
cussion, see Mougin and Sober (1994).
7. Symmetrically, if not believing P were the better action, this would impose a
nontrivial upper bound on the value of the probability.
8. Decision theory from early on has taken an interest in describing the interrela-
tionships of expected utility, utility, and probability. For a brief introduction, see Skyrms
(2000, pp. 138–43).
9. Pascal’s wager, when the payoffs are finite, does not have this property. One needs
some information about the probability of God’s existing to reach a decision about whether
believing is better than not believing. In fact, the theist contemplating Pascal’s wager
(with finite payoffs) is in the same qualitative situation as the physician deciding whether
to believe that the drug is safe. See Mougin and Sober (1994) for discussion.
10. The problem that Rudner addresses is the one that James (1897) and W. K. Clif-
ford (1879) debated. It also was central to the debate between the “left” and “right” wings
of the Vienna Circle. Neurath argued that evidence does not determine theory choice
and that ethical and political values can and should be used to close the gap; Schlick,
Carnap, and Reichenbach countered that the intrusion of such values into theory choice
is both undesirable and unnecessary: It would compromise the objectivity of science, and
scientific inferences can be drawn without taking such values into account. See Howard
(2002) for discussion.
Alvarez, W., and F. Asaro. 1990. “What Caused the Mass Extinction? An Extraterrestrial
Impact.” Scientific American, 263, pp. 78–84.
Clifford, W. K. 1879. “The Ethics of Belief,” in Lectures and Essays, vol. 2, pp. 177–211.
London: Macmillan.
Harman, G. 1977. The Nature of Morality. New York: Oxford University Press.
Howard, D. 2002. “Philosophy of Science and Social Responsibility—Some Historical
Reflections,” in A. Richardson and G. Hardcastle, eds., Logical Empiricism in North
America. Minneapolis: University of Minnesota Press.
James, W. 1897. “The Will to Believe,” in The Will to Believe and Other Essays in Popular
Philosophy. New York: Longmans Green.
Jeffrey, R. 1956. “Valuation and Acceptance of Scientific Hypotheses.” Philosophy of Sci-
ence, 33, pp. 237–46.
Levi, I. 1967. Gambling with Truth. Cambridge, MA: MIT Press.
Mougin, G., and E. Sober. 1994. “Betting against Pascal’s Wager.” Nous, 28, pp. 382–95.
Rudner, R. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of
Science, 20, pp. 1–6.
Ruse, M., and E. Wilson. 1986. “Moral Philosophy as Applied Science.” Philosophy, 61,
pp. 173–92. Reprinted in E. Sober, ed. 1993. Conceptual Issues in Evolutionary Biol-
ogy, 2nd ed. Cambridge, MA: MIT Press.
Skyrms, B. 2000. Choice and Chance, 4th ed. Belmont, CA: Wadsworth.
Sober, E. 1988. Reconstructing the Past—Parsimony, Evolution, and Inference. Cam-
bridge, MA: MIT Press.
Sober, E. 1994a. “Contrastive Empiricism,” in From a Biological Point of View. New York:
Cambridge University Press.
Sober, E. 1994b. “Prospects for Evolutionary Ethics,” in From a Biological Point of View.
New York: Cambridge University Press.
Sober, E. 2005. Core Questions in Philosophy. Upper Saddle River, NJ: Prentice-Hall.
Stephens, C. 2000. “Why Be Rational? Prudence, Rational Belief and Evolution.” PhD
dissertation, University of Wisconsin, Madison.
This paper examines the Ebola ça Suffit trial, which was conducted in Guinea during the Ebola Virus Disease (EVD) outbreak in 2015. I demonstrate that various non-epistemic considerations may legitimately influence the criteria for evaluating the efficacy and effectiveness of a candidate vaccine. Such non-epistemic considerations, which are social, ethical, and pragmatic, can be better placed and addressed in scientific research by appealing to non-epistemic values. I consider two significant features any newly developed vaccine should possess: (1) the duration of immunity the vaccine provides; and (2) safety with respect to the side effects of the vaccine. I then argue that social and ethical values are relevant and desirable in setting the parameters for evaluating these two features of vaccines. The parameters employed in setting up the criteria for assessing these features may have far-reaching implications for the well-being of society in general, and for the health of several thousand people in particular, because these features can play a decisive role in the evaluation of the efficacy and effectiveness of the vaccine. I conclude by showing why it is necessary to reject the concept of epistemic priority, at least when scientists engage in policy-oriented research.
The present paper discusses the claim that value-free science is impossible. After applauding the observation of Colombo et al. (2016, Review of Philosophy and Psychology 7: 743–763) that this is at least to a considerable extent a psychological question, and should therefore be studied using the methods of psychological science, the studies performed by these authors are examined and unfortunately found seriously wanting in various respects. Beyond the merits or demerits of that particular piece of work, the discussion leads to a conclusion likely relevant to the entire debate about the alleged impossibility of value-free science: showing the impossibility of value-free science would entail at least (a) defining what the term "science" is intended to cover; (b) providing high-level evidence that few if any scientists in the relevant area(s) are immune to non-epistemic influences (else one could presumably achieve value-free science by having scientific hypotheses evaluated only by those who are immune); (c) showing that these influences meaningfully bias the results of science; (d) showing that there is no way to correct for these influences; and (e) explaining why, unlike epistemic appraisal in science, the epistemic appraisal of this argument can be trusted.
I am basically in agreement with the idea that objectivity in science can be preserved despite the infusion of extra-scientific factors such as contextual values, a paradigm, current metaphysics, or the world view implicit in ordinary language. I shall call this thesis the "objectivity thesis." Numerous entries dealing extensively with the subject have already been published. What I would like to do, however, is to look in a schematic manner at some of the contributions Nozick, Popper, and Longino made to this thesis, with a thematic focus on how the social aspect of science, with all its values, scientific or extra-scientific notwithstanding, is reconceptualized in the overall process of achieving scientific objectivity.
Ecology endeavors to explain significant portions of the living world. The sophisticated experimental tests and mathematical theories developed to do so deserve much more attention from philosophers of science. This paper describes some of the main contours of the newly emerging field of philosophy of ecology: how an ecological perspective shaped Darwin’s theory, particularly the niche concept and the idea that there is a “balance of nature”; the character and metaphysical status of biological communities; whether there are laws of ecology; and the concept of ecological stability. As these topics illustrate, ecology concerns a diverse conceptual terrain and an interesting set of theoretical and methodological issues that provide rich grist for philosophy.
Originally published in Contemporary Review, 1877. Reprinted in Lectures and Essays (1879). Presently in print in The Ethics of Belief and Other Essays (Prometheus Books, 1999).
At most of our American Colleges there are Clubs formed by the students devoted to particular branches of learning; and these clubs have the laudable custom of inviting once or twice a year some maturer scholar to address them, the occasion often being made a public one. I have from time to time accepted such invitations, and afterwards had my discourse printed in one or other of the Reviews. It has seemed to me that these addresses might now be worthy of collection in a volume, as they shed explanatory light upon each other, and taken together express a tolerably definite philosophic attitude in a very untechnical way. Were I obliged to give a short name to the attitude in question, I should call it that of radical empiricism, in spite of the fact that such brief nicknames are nowhere more misleading than in philosophy. I say 'empiricism,' because it is contented to regard its most assured conclusions concerning matters of fact as hypotheses liable to modification in the course of future experience; and I say 'radical,' because it treats the doctrine of monism itself as an hypothesis, and, unlike so much of the half-way empiricism that is current under the name of positivism or agnosticism or scientific naturalism, it does not dogmatically affirm monism as something with which all experience has got to square.
Only one traditional objection to Pascal's wager is telling: Pascal assumes a particular theology, but without justification. We produce two new objections that go deeper. We show that even if Pascal's theology is assumed to be probable, Pascal's argument does not go through. In addition, we describe a wager that Pascal never considered, which leads away from Pascal's conclusion. We then consider the impact of these considerations on other prudential arguments concerning what one should believe, and on the more general question of when and why belief formation ought to be based solely on the evidence.
The authors and other investigators discovered iridium in the clays that mark the sudden disappearance of dinosaurs from the fossil record. Because iridium is rare in the earth's crust but abundant in some meteorites, they concluded that a giant meteorite collided with the earth, hurling megatons of debris into the atmosphere. This paper describes and discusses the accumulating evidence that suggests an asteroid or comet caused the Cretaceous extinction.
For much of this century, moral philosophy has been constrained by the supposed absolute gap between is and ought, and the consequent belief that the facts of life cannot of themselves yield an ethical blueprint for future action. For this reason, ethics has sustained an eerie existence largely apart from science. Its most respected interpreters still believe that reasoning about right and wrong can be successful without a knowledge of the brain, the human organ where all the decisions about right and wrong are made. Ethical premises are typically treated in the manner of mathematical propositions: directives supposedly independent of human evolution, with a claim to ideal, eternal truth.