Since the scientific revolution, there have been persistent endorsements, both by scientists and philosophers of science, of the view that predictions have special value in confirming hypotheses. According to this view, evidence which was used in formulating a hypothesis does not confirm the hypothesis as strongly as it otherwise would. I shall call this view the predictivist thesis.
... The essence of this debate has been framed by Maher (1988, 1990) and can be symbolized as follows: P(H | E ∧ ¬O) > P(H | E ∧ O). That is, the probability of a hypothesis (or theory) H, given the evidence E, is greater if the evidence had not yet been observed (¬O) at the time the hypothesis (theory) was formulated. ...
... What is at stake, ultimately, is the issue of theoretical explanation: that is, the type of explanation generally judged to be the most significant type in matters of genuine knowledge production. Maher's (1990) argument is that unless a theory is able to predict, it does not expose itself to enough genuine risk of modification or falsification. On this account, predictions are plausible but unknown states of affairs which, if confirmed as genuine predictions, enhance the credibility of the theory. ...
... Under the predictivist view, if grounded theory is truly a theory (even with broadly defined but generally agreed on criteria), it ought to be able to generate testable claims. Predictivists (Maher, 1988, 1990) argue that a theory's credibility is a function of its ability to generate predictions, to test findings as opposed to accommodating those same findings. Although this, again, sounds somewhat like Popper's (1959) falsificationist theory, the difference is that the predictivist model is not concerned with subjecting a given theory to repeated efforts to falsify it but rather with the claim that the very ability to test predictions is the hallmark of what it means to say you have a genuine theory. ...
This article argues that the concept of grounded theory, widely used in research in the human sciences, has not been adequately analyzed as to its structure as a theory. Analyzing grounded theory from predictionist and accommodationist views, as well as focusing on the issue of inference to the best explanation, it is concluded that this form of theorizing is basically accommodationist. Moreover, grounded theory, in terms of providing explanations, is simply a different version of a standard inductive argument. However, grounded theory’s strength lies in its potential to articulate a unique context and logic of discovery.
... Patrick Maher (1988, 1990, 1993) presented a seminal thought experiment and a Bayesian analysis of its predictivist implications. ...
... Maher (1988) makes the simplifying assumption that any prediction method used by a predictor is either completely reliable (the claim abbreviated as 'R') or no better than a random method (¬R). (Maher shows that this assumption can be relaxed and continuous degrees of reliability assumed; even so, the predictivist result is still generated.) In qualitative terms, where M generates T (and thus predicts E) without input of the evidence E, we should infer that it is far more likely that the method which generated E is reliable than that E simply happened to be true even though R was no better than a random method. ...
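Maher's simplifying assumption lends itself to a direct Bayesian computation. The sketch below is an illustration under assumed numbers (a 0.5 prior on reliability, a 0.5 per-guess success rate for the random method), not Maher's own example: it shows how quickly correct predictions drive up the posterior probability that the method is reliable.

```python
# Bayesian update on method reliability, following Maher's simplifying
# assumption: the method is either fully reliable (R) or no better than
# chance (not-R). Prior and chance rate are illustrative assumptions.

def posterior_reliable(k_correct, prior_r=0.5, chance=0.5):
    """P(R | k correct predictions) via Bayes' theorem."""
    like_r = 1.0                       # a reliable method is always correct
    like_not_r = chance ** k_correct   # a random method guesses right each time
    numerator = prior_r * like_r
    return numerator / (numerator + (1 - prior_r) * like_not_r)

if __name__ == "__main__":
    for k in (1, 3, 5):
        print(k, round(posterior_reliable(k), 4))
        # prints: 1 0.6667 / 3 0.8889 / 5 0.9697
```

Even with an even prior, five correct predictions push the posterior on reliability above 0.96, which is the qualitative point of the excerpt above.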
The Série Investigação Filosófica, an initiative of the Núcleo de Ensino e Pesquisa em Filosofia of the Department of Philosophy at UFPel and of the Investigação Filosófica research group of the Department of Philosophy at UNIFAP, published under the NEPFil online imprint and the Editora da Universidade Federal de Pelotas with financial support from the John Templeton Foundation, has as its principal aim the publication of Portuguese translations of texts selected from various internationally recognized platforms, such as the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/). The general goal of the series is to make relevant bibliographic materials available both for use as teaching material and for philosophical investigation itself.
... Different versions of weak predictivism then differ in what they identify as the relevant epistemic feature of which prediction is taken to be symptomatic. For example, Maher (1988, 1990, 1993) argues that the information that a hypothesis predicted rather than accommodated data indicates (ceteris paribus) that the hypothesis was constructed by a more reliable method, i.e. by a method that is more likely to generate correct hypotheses, which in turn increases the probability of the hypothesis. Lange (2001), by contrast, suggests that predicting hypotheses are less likely to consist of arbitrary conjunctions than accommodating hypotheses, so that predicting hypotheses are superior (ceteris paribus) in virtue of being sufficiently unified to be capable of being confirmed by past successes. ...
Many philosophers have argued that a hypothesis is better confirmed by some data if the hypothesis was not specifically designed to fit the data. ‘Prediction’, they argue, is superior to ‘accommodation’. Others deny that there is any epistemic advantage to prediction, and conclude that prediction and accommodation are epistemically on a par. This paper argues that there is a respect in which accommodation is superior to prediction. Specifically, the information that the data was accommodated rather than predicted suggests that the data is less likely to have been manipulated or fabricated, which in turn increases the likelihood that the hypothesis is correct in light of the data. In some cases, this epistemic advantage of accommodation may even outweigh whatever epistemic advantage there might be to prediction, making accommodation epistemically superior to prediction all things considered.
Strong predictivism, the idea that novel predictions per se confirm theories more than accommodations, is based on a “no miracle” argument from novel predictions to the truth of theories (NMAT). Eric Barnes rejects both: he reconstructs the NMAT as seeking an explanation for the entailment relation between a theory and its novel consequences, and argues that it involves a fallacious application of Occam’s razor. However, he accepts a no miracle argument for the truth of background beliefs (NMABB): scientists endorsed a successful theory because they were guided by largely true background beliefs. This in turn raises the probability that the theory is true; so Barnes embraces a form of weak predictivism, according to which predictions are only indirectly relevant to confirmation. To Barnes I reply that we should also explain how the successful theory was constructed, not just endorsed; background beliefs are not enough to explain success, scientific method must also be considered; Barnes can account for some measure of confirmation of our theories, but not for the practical certainty conferred to them by some astonishing predictions; true background beliefs and reliability by themselves cannot explain novel success, the truth of theories is also required. Hence, the NMAT is sound, and strong predictivism is right. In fact, Barnes misinterprets the NMAT, which does not involve Occam’s razor, takes as explanandum the building of a theory which turned out to predict surprising facts, and successfully concludes that the theory is true. This accounts for the practically certain confirmation of our most successful theories, in accordance with strong predictivism.
Maher (1988) and Lange (2001) appeal to intuitions about coin tosses to discern the justification of predictivism about the predictive accuracy of hypotheses. I point out two problems with their use of the coin flipping cases for this purpose. First, the questions that Maher and Lange seem to want to answer are empirical: What do people think about these coin toss cases? Why? A satisfying answer to each empirical question would require rigorous empirical investigation. Neither Maher's nor Lange's investigation of the coin cases achieves such rigor. Second, even if Maher and Lange did conduct the kind of investigation befitting their empirical questions, it is not clear that the conclusions of such investigation would bear on the justification of the predictivist thesis. So neither Maher's nor Lange's coin-tossing thought experiments, nor their inferences therefrom, can provide substantial support for their hypothesis.
Predictivism, the thesis that all things being equal a hypothesis that predicts a piece of evidence is better supported by that evidence than a hypothesis that only accommodates that evidence, comes in strong and weak forms. Interestingly, weak predictivism, which is widely accepted, can be used to formulate a persuasive argument against some forms of external world scepticism. In this article I formulate this predictivist argument and I explain why it deserves serious consideration despite the fact that it only succeeds as a response to some forms of external world scepticism.
The paper presents a further articulation and defence of the view on prediction and accommodation that I have proposed earlier. It operates by analysing two accounts of the issue—by Patrick Maher and by Marc Lange—that, at least at first sight, appear to be rivals to my own. Maher claims that the time-order of theory and evidence may be important in terms of degree of confirmation, while that claim is explicitly denied in my account. I argue, however, that when his account is analysed, Maher reveals no scientifically significant way in which the time-order counts, and that indeed his view is in the end best regarded as a less than optimally formulated version of my own. Lange has also responded to Maher by arguing that the apparent relevance of temporal considerations is merely apparent: what is really involved, according to Lange, is whether or not a hypothesis constitutes an “arbitrary conjunction.” I argue that Lange’s suggestion fails: the correct analysis of his and Maher’s examples is that provided by my account.
Maher (1988, 1990) has recently argued that the way a hypothesis is generated can affect its confirmation by the available evidence, and that Bayesian confirmation theory can explain this. In particular, he argues that evidence known at the time a theory was proposed does not confirm the theory as much as it would had that evidence been discovered after the theory was proposed. We examine Maher's arguments for this "predictivist" position and conclude that they do not, in fact, support his view. We also cast doubt on the assumptions of Maher's alleged Bayesian proofs.
Predictivism asserts that where evidence E confirms theory T, E provides stronger support for T when E is predicted on the basis of T and then confirmed than when E is known before T's construction and 'used', in some sense, in the construction of T. Among the most interesting attempts to argue that predictivism is a true thesis (under certain conditions) is that of Patrick Maher (1988, 1990, 1993). The purpose of this paper is to investigate the nature of predictivism using Maher's analysis as a starting point. I briefly summarize Maher's primary argument and expand upon it; I explore related issues pertaining to the causal structure of empirical domains and the logic of discovery.
Predictivism holds that, where evidence E confirms theory T, E confirms T more strongly when E is predicted on the basis of T and subsequently confirmed than when E is known in advance of T's formulation and ‘used’, in some sense, in the formulation of T. Predictivism has lately enjoyed some strong supporting arguments from Maher (1988, 1990, 1993) and Kahn, Landsberg, and Stockman (1992). Despite the many virtues of the analyses these authors provide it is my view that they (along with all other authors on this subject) have failed to understand a fundamental truth about predictivism: the existence of a scientist who predicted T prior to the establishment that E is true has epistemic import for T (once E is established) only in connection with information regarding the social milieu in which the T-predictor is located and information regarding how the T-predictor was located. The aim of this paper is to show that predictivism is ultimately a social phenomenon that requires a social level of analysis, a thesis I deem ‘social predictivism’.
In this paper I distinguish two kinds of predictivism, ‘timeless’ and ‘historicized’. The former is the conventional understanding of predictivism. However, I argue that its defense in the works of John Worrall (Scerri and Worrall 2001, Studies in History and Philosophy of Science 32, 407–452; Worrall 2002, In the Scope of Logic, Methodology and Philosophy of Science, 1, 191–209) and Patrick Maher (Maher 1988, PSA 1988, 1, pp. 273) is wanting. Alternatively, I promote an historicized predictivism, and briefly defend such a predictivism at the end of the paper.
Predictivism asserts that novel confirmations carry special probative weight. Epistemic pluralism asserts that the judgments of agents (about, e.g., the probabilities of theories) carry epistemic import. In this paper, I propose a new theory of predictivism that is tailored to pluralistic evaluators of theories. I replace the orthodox notion of use-novelty with a notion of endorsement-novelty, and argue that the intuition that predictivism is true has two roots. I provide a detailed Bayesian rendering of this theory and argue that pluralistic theory evaluation pervades scientific practice. I compare my account of predictivism with those of Maher and Worrall.
• Why construction is a red herring for pluralist evaluators
• The unvirtuous accommodator
• Virtuous endorsers and the two roots of predictivism
• The two roots in Bayesian terms: the priors and background beliefs of endorsers
• Who are the pluralist evaluators?
• Two contemporary theories of predictivism
• 7.1 Maher: Reliable methods of theory construction
• 7.2 Worrall: The confirmation of core ideas
When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the key is the concept of overfitting. We float the hypothesis that accommodation is a defective methodology only when the methods used to accommodate the data fail to guard against the risk of overfitting. We connect our analysis with the proposals that other philosophers have made. We also discuss its bearing on the conflict between instrumentalism and scientific realism.
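The overfitting idea can be illustrated numerically. The sketch below is an illustration with assumed data and model degrees, not the authors' own example: it fits a low-degree and a high-degree polynomial to noisy points generated around a straight line. The flexible model "accommodates" the training data at least as well as the simple one, which is exactly why good training fit alone is weak evidence.

```python
# Overfitting illustration: a flexible model accommodates training data
# better than a simple one, without being a better theory of the data.
# The data and polynomial degrees are assumptions chosen for illustration.
import numpy as np

x_train = np.arange(10, dtype=float)
# Fixed "noise" values, so the example is deterministic.
noise = np.array([0.5, -0.4, 0.3, -0.6, 0.2, 0.5, -0.3, 0.4, -0.2, 0.6])
y_train = 2.0 * x_train + 1.0 + noise   # true law: a straight line, plus noise

def train_error(degree):
    """Mean squared residual of a degree-`degree` polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    resid = y_train - np.polyval(coeffs, x_train)
    return float(np.mean(resid ** 2))

simple = train_error(1)    # matches the form of the true law
flexible = train_error(7)  # enough parameters to chase the noise

print(simple, flexible)
```

Because the degree-1 fit is a special case of the degree-7 fit, the flexible model's training error is guaranteed to be lower; the extra fit comes from accommodating noise, not from tracking the true line, which is the failure mode the paper's overfitting diagnosis targets.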
Formulating the problem
What might Annie be doing wrong?
Mayo on severe tests
The miracle argument and scientific realism
Two books have been particularly influential in contemporary philosophy of science: Karl R. Popper's Logic of Scientific Discovery, and Thomas S. Kuhn's Structure of Scientific Revolutions. Both agree upon the importance of revolutions in science, but differ about the role of criticism in science's revolutionary growth. This volume arose out of a symposium on Kuhn's work, with Popper in the chair, at an international colloquium held in London in 1965. The book begins with Kuhn's statement of his position followed by seven essays offering criticism and analysis, and finally by Kuhn's reply. The book will interest senior undergraduates and graduate students of the philosophy and history of science, as well as professional philosophers, philosophically inclined scientists, and some psychologists and sociologists.
A widely endorsed thesis in the philosophy of science holds that if evidence for a hypothesis was not known when the hypothesis was proposed, then that evidence confirms the hypothesis more strongly than would otherwise be the case. The thesis has been thought to be inconsistent with Bayesian confirmation theory, but the arguments offered for that view are fallacious. This paper shows how the special value of prediction can in fact be given Bayesian explanation. The explanation involves consideration of the reliability of the method by which the hypothesis was discovered, and thus reveals an intimate connection between the 'logic of discovery' and confirmation theory.
We subjectivists conceive of probability as the measure of reasonable partial belief. But we need not make war against other conceptions of probability, declaring that where subjective credence leaves off, there nonsense begins. Along with subjective credence we should believe also in objective chance. The practice and the analysis of science require both concepts. Neither can replace the other. Among the propositions that deserve our credence we find, for instance, the proposition that (as a matter of contingent fact about our world) any tritium atom that now exists has a certain chance of decaying within a year. Why should we subjectivists be less able than other folk to make sense of that?
The following is an example of the simplest kind of probable inference:--About two per cent of persons wounded in the liver recover; This man has been wounded in the liver; Therefore, there are two chances out of a hundred that he will recover. Compare this with the simplest of syllogisms, say the following:--Every man dies; Enoch was a man; Hence, Enoch must have died. The latter argument consists in the application of a general rule to a particular case. The former applies to a particular case a rule not absolutely universal, but subject to a known proportion of exceptions. Both may alike be termed deductions, because they bring information about the uniform or usual course of things to bear upon the solution of special questions; and the probable argument may approximate indefinitely to demonstration as the ratio named in the first premise approaches to unity or to zero.
Imre Lakatos' philosophical and scientific papers are published here in two volumes. Volume I brings together his very influential but scattered papers on the philosophy of the physical sciences, and includes one important unpublished essay on the effect of Newton's scientific achievement. Volume II presents his work on the philosophy of mathematics (much of it unpublished), together with some critical essays on contemporary philosophers of science and some famous polemical writings on political and educational issues. Imre Lakatos had an influence out of all proportion to the length of his philosophical career. This collection exhibits and confirms the originality, range and the essential unity of his work. It demonstrates too the force and spirit he brought to every issue with which he engaged, from his most abstract mathematical work to his passionate 'Letter to the director of the LSE'. Lakatos' ideas are now the focus of widespread and increasing interest, and these volumes should make possible for the first time their study as a whole and their proper assessment.
The author holds that a physical theory is not an explanation but a system of mathematical propositions, deduced from a small number of principles, whose aim is to represent as simply, as completely, and as exactly as possible a set of experimental laws. From this conception he develops his views on this scientific discipline.