Article

Some Hard Questions for Critical Rationalism


Abstract

'What distinguishes science from all other human endeavours is that the accounts of the world that our best, mature sciences deliver are strongly supported by evidence and this evidence gives us the strongest reason to believe them.' That anyway is what is said at the beginning of the advertisement for a conference on induction at a celebrated British seat of learning in 2007. It shows how much critical rationalists still have to do to make known the message of Logik der Forschung concerning what empirical evidence is able to do and what it does. This paper will focus not on these tasks of popularization faced by critical rationalists, but on some logical problems internal to critical rationalism. Although we are rightly proud of having the only house in the neighbourhood that is logically watertight, we should be aware that not everything inside is in impeccable order. There are criticisms that have not yet been adequately met, and questions that have not yet been adequately answered. Each of the six difficulties to be discussed arises from Popper's exemplary solutions to the problems of demarcation and induction. They concern the management of contradictions; approximation to truth; the corroboration of already falsified hypotheses; decision making under uncertainty; the role of evidence in the law; and the representation of logical content. In none of these areas does critical rationalism yet offer, to my mind, an account comparable in clarity to its solutions to the problems of demarcation and induction. This is a personal selection, and it is not suggested that there are not other hard questions ahead. In only one or two cases shall I offer anything like a solution.


... In this sense, TedP on EduS must consider "critical questions [that] remain as to how to most effectively organize research and connect it to actions that advance social and natural wellbeing" (Miller, 2014). That is the second assumption: problematization of the issues that affect the planet, the human communities and each one of us. ...
... The macrostate is mostly formulated in terms of equilibrium conditions, which means that "one does not need to know the details of the microstates, only the number of them that correspond to each macrostate" ([248], p. 33), a condition that can be highly problematic where significant microstate divergence exists. Thus, returning to the issue of the truths, questions can always be raised about how much knowledge is provided to satisfy requisite needs, with the recognition that this knowledge may be false [249]. One reason for this is that logical inconsistencies/confusions can arise. ...
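As a toy illustration of the multiplicity point (my sketch, not from the cited source): for N two-state components, the equilibrium macrostate can be read off from the counts of microstates alone, without inspecting any individual microstate.

```python
from math import comb, log

# Toy system: N two-state components (e.g. coin flips).
# A macrostate is the number k of 'heads'; its multiplicity is C(N, k).
N = 20
multiplicities = {k: comb(N, k) for k in range(N + 1)}

# Boltzmann-style entropy of a macrostate: S = ln W (with k_B set to 1).
entropy = {k: log(W) for k, W in multiplicities.items()}

# The equilibrium macrostate is the one realized by the most microstates;
# only the counts matter, not the details of the microstates themselves.
k_eq = max(multiplicities, key=multiplicities.get)
print(k_eq, multiplicities[k_eq], round(entropy[k_eq], 3))
```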
Article
Full-text available
Living systems are complex dynamic information-processing, energy-consuming entities with properties of consciousness, intelligence, sapience, and sentience. Sapience and sentience are autonomous attributes of consciousness. While sapience has been well studied over the years, sentience has received comparatively little study. The nature of sapience and sentience will be considered, and a metacybernetic framework using structural information will be adopted to explore the metaphysics of consciousness. Metacybernetics delivers a cyberintrinsic model that is cybernetic in nature, but also uses the theory of structural information arising from Frieden's work with Fisher information. This will be used to model sapience and sentience and their relationship. Since living systems are energy-consuming entities, it is also natural for thermodynamic metaphysical models to arise, and most theoretical studies of sentience have been set within a thermodynamic framework. Hence, a thermodynamic approach will also be introduced and connected to cyberintrinsic theory. In metaphysical contexts, thermodynamics uses free energy, which plays the same role in cyberintrinsic modelling as intrinsic structural information. Since living systems exist at the dynamical interface of information and thermodynamics, the overall purpose of this paper is to explore sentience from the alternative cyberintrinsic perspective of metacybernetics.
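As background for the reference to Frieden's use of Fisher information (a standard statistical definition, not a formula from this paper), for a parametric density f(x; θ):

```latex
% Fisher information of a parametric density f(x;\theta) (standard definition):
\[
  I(\theta)
  = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{2}\right]
  = \int \frac{\bigl(\partial f(x;\theta)/\partial\theta\bigr)^{2}}{f(x;\theta)}\,dx .
\]
```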
... Many of Popper's concepts and ideas have become a part of the philosophical and, indeed, scientific vernacular. Despite this undeniable contribution to our understanding of science, Popper's theory suffers from a number of unresolved problems (Miller, 2014). His insistence that the empirical method demands exposing the proposed hypotheses to falsifications "in every conceivable way" (Popper, 1935/2005), and his stress on the role of negative arguments in the choice of scientific hypotheses, such as counter-examples, refutations, and falsification (Popper, 1935/2005; 1979, p. 20), were criticized as unduly disregarding the need for positive confirmations, which should be seen as important in selecting the most reliable hypotheses. ...
Article
Full-text available
In this paper, I consider whether the critical rationalist philosophy of science may provide a rationale for trusting scientific knowledge. In the first part, I refer to several insights of Karl Popper’s social and political philosophy in order to see whether they may be of help in offsetting the distrust of science spawned by the COVID-19 pandemic. In the second part, I address the more general issue of whether the theoretical principles of the critical rationalist philosophy of science may afford a foundation for building trust in science. Both parts of the discussion, confined for the sake of the argument largely to the repudiation of the concept of good reasons for considering a theory to be true, imply that this question would have to be answered negatively. Against this, I argue that such a conclusion is based on a misconception of the nature of scientific knowledge: critical rationalism views science as a cognitive regime which calls for bold theories and at the same time demands a rigorous and continuous distrust towards them, and it is precisely this attitude that should be adopted as a compelling argument for trusting science.
... I adopt a critical rationalist, anti-justificationist perspective on argument and decision-making (Popper, 1963, 1979; Miller, 1994, 2006, 2013, 2014). According to the philosophy of critical rationalism, arguments are "always negative; they are always critical arguments, used only and needed only to unseat conjectures that have been earlier surmised" (Miller, 1983, p. 10). ...
Chapter
Full-text available
Focusing on a particular kind of so-called “conductive argument”, i.e. a “pro/con” argument intended to support a practical conclusion, I argue that “conductive argument” is a category mistake. There is no such thing as a “conductive argument”, if the term is meant to designate a single argument, with one conclusion. What (confusingly) appears to be a “conductive argument”, as structure, is one of two main possible outcomes of deliberative activity, understood as the critical testing of alternative proposals for action. More precisely, it is a recapitulation or summary of a process of critical questioning that has unfolded in time, where a practical conclusion has withstood criticism, in the sense that no decisive objections have emerged against it, though there are counter-considerations to it, as well as reasons in favour. To say that there are no decisive objections is to say that the opposite (negative) conclusion is not supported: it does not follow conclusively that the course of action being proposed is not reasonable. The positive conclusion, taken together with all the reasons that have been cited in favour and all the reasons that have been cited against (the counter-considerations), will be virtually indistinguishable from a so-called “conductive argument”. Whenever the positive conclusion does not survive criticism, the potential conductive argument will disintegrate, collapsing into a deductive argument in favour of the negative conclusion.
... (Reisch 1998) Sociologist Robert K. Merton (Merton 1973) proposed demarcation criteria based on the values of science, characterized by a spirit that can be summed up as four sets of institutional imperatives: universalism (claims must be subjected to predetermined impersonal criteria), communism (findings are products of social collaboration), disinterestedness (institutional controls to reduce the effects of personal or ideological motives), and organized skepticism. Popper thought that a hypothesis that failed in some tests, but did not fail very badly, with some predictions wrong with certainty beyond the limits of experimental error but not too wrong, will be closer to the truth than a radically failed rival, even if both are falsified. (Miller 2009) But the lack of a solution to this difficulty is not an excuse for a retreat into instrumentalism, inductivism or irrationality, and should not prevent us from seeking a more modest answer to the incontestable fact that "not all cases of falsification are the same." (Kvasz 2004, 263) ...
Preprint
Full-text available
Thomas Kuhn criticized falsifiability because it characterized "the entire scientific enterprise in terms that apply only to its occasional revolutionary parts," and cannot be generalized. In Kuhn's view, a demarcation criterion must refer to the functioning of normal science. Kuhn objects to Popper's entire theory and excludes any possibility of a rational reconstruction of the development of science. Imre Lakatos held that whether a theory is scientific or non-scientific can be determined independently of the facts. He proposed a modification of Popper's criterion, which he called "sophisticated (methodological) falsificationism".
... Therefore "organization is constrained to multiple goals composed of an admixture of economic, political and social considerations" (Methe, et al., 2000). Synoptic and incremental approaches have been examined, criticized and/or comparatively studied by numerous authors in (strategic) management, notably Lindblom (1959), Dror (1964), Picot and Lange (1978), Fredrickson (1983), Johnson (1988), Mintzberg (1990), Ansoff (1991), Toft (2000), Methe et al. (2000), Miller (2011), and Seidenberg (2012). Sharp critiques and debates surround "incrementalism" and/or "rationalism" (synoptic formalism). ...
Preprint
Full-text available
According to Popper, a scientific theory can legitimately be rescued from falsification by introducing an auxiliary hypothesis that generates new falsifiable predictions. Moreover, if bias or error is suspected, researchers may introduce a falsifiable auxiliary hypothesis that allows testing. Many other authors have proposed criteria for demarcating science from pseudoscience. These typically include belief in authority, unrepeatable experiments, hand-picked examples, unwillingness to test, disregard of refuting information, built-in subterfuge, and explanations abandoned without replacement.
Preprint
Full-text available
According to Popper, a scientific theory can legitimately be rescued from falsification by introducing an auxiliary hypothesis that allows the generation of new, falsifiable predictions. Likewise, if bias or error is suspected, researchers may introduce an auxiliary falsifiable hypothesis that allows testing. But this technique cannot solve the problem in general, because any auxiliary hypothesis can be challenged in the same way, ad infinitum. To resolve this regress, Popper introduces the idea of a basic statement, an empirical claim that can be used both to determine whether a given theory is falsifiable and, where appropriate, to corroborate falsifying hypotheses.
Book
Full-text available
The occasion for the present work was the so-called Agrarwende ('agricultural turn') of 2001 in Germany, and in particular the public statement "Professoren mahnen zur Vernunft in der Agrarpolitik" ('Professors urge reason in agricultural policy'). The admonition came, however, not from professors of agricultural policy but from agricultural economists. Herein lies a core theme of this work, for all professorships in agricultural policy at German universities are currently held by economists, who also dominate the economic and social sciences of all agricultural faculties. With an institutionally, personally and thematically comprehensive history of both disciplines, including the National Socialist period, the present work addresses for the first time in this breadth the old yet still current question of how political economics is, and with what consequences it acts, using the agricultural sector as an example. Contents: • Theory development, sociology of science, and performativity • The development of German university agricultural policy and agricultural economics (1) from the beginnings to 1933, (2) from 1933 to the first post-war generation, and (3) from the first post-war generation to around 2012, including positions on existing historiographies. Target groups: • Students and researchers in agricultural science and economics • Those engaged in agricultural policy. The author: Katrin Hirte is currently a research associate at the Institut für die Gesamtanalyse der Wirtschaft (ICAE) at the University of Linz.
Chapter
Human kinds of problem-solving involve sophisticated cognitive processes for modelling, learning and finally solving a problem. The discourse of problem-solving is established in strategic management, economics, computer science, artificial intelligence, mathematics and cognitive psychology. These disciplines provide a common ground for the classification of problem-solving approaches. This paper re-examines the existing approaches from two distinct perspectives: first, synoptic formalism and incrementalism; second, (meta-)heuristics. The primary objective is to determine the characteristics of these approaches and to discuss the possibility of combining or co-existence of problem-solving approaches. We provide a framework for the proper selection of approaches and the assignment of activities to decision situations. Finally, we emphasize the coexistent consideration of problem-solving approaches for making human kinds of problem-solving computable.
Article
Purpose: The purpose of this paper is to investigate the use of problem-solving approaches in maintenance cost management (MCM). In particular, the paper aims to examine characteristics of MCM models and to identify patterns for the classification of problem-solving approaches. Design/methodology/approach: This paper reflects an extensive and detailed literature survey of 68 (quantitative or qualitative) cost models within the scope of MCM published in the period from 1969 to 2013. The reviewed papers have been critically examined and classified by means of a morphological analysis employing 8 criteria and associated expressions. In addition, the survey identified two main perspectives of problem-solving: first synoptic/incremental, second heuristics/meta-heuristics. Findings: The literature survey revealed patterns for the classification of MCM models, especially the characteristics of the models for problem-solving in association with the type of modelling, focus of purpose, extent and scope of application, and reaction and dynamics of parameters. The majority of the surveyed approaches are mathematical, i.e. synoptic; incremental approaches are far fewer, and only a few are combined (i.e. synoptic and incremental). A set of features is identified for proper classification, selection and coexistence of the two approaches. Research limitations/implications: This paper provides a basis for further study of heuristic and meta-heuristic approaches to problem-solving. Especially the coexistence of heuristic, synoptic and incremental approaches needs to be further investigated. Practical implications: The detected dominance of synoptic approaches in the literature, especially in the case of specific application areas, contrasts to some extent with the needs of maintenance managers in practice. Hence the findings of this paper particularly address the need for further investigation of combining problem-solving approaches for improving the planning, monitoring and controlling phases of MCM. Continuous improvement of MCM, especially of problem-solving and decision-making activities, is tailored to the use of maintenance knowledge assets. In particular, maintenance management systems and processes are knowledge-driven. Thus, combining problem-solving approaches with knowledge management (KM) methods is of interest, especially for continuous learning from past experiences in MCM. Originality/value: This paper provides a unique study of 68 problem-solving approaches in MCM, based on a morphological analysis. Hence suitable criteria and their expressions are provided. The paper reveals opportunities for further interdisciplinary research in the maintenance cost lifecycle. Link to publisher website: http://www.emeraldinsight.com/doi/abs/10.1108/JQME-04-2015-0012?journalCode=jqme
Thesis
Full-text available
Maintenance is a combination of multilateral and cross-functional activities and processes. Maintenance processes are identified in both strategic management and operation systems. Managers, engineers, technicians and operators collaboratively contribute to conducting and performing preventive or corrective maintenance activities. Maintenance management provides the long-term business strategy that ensures production capacity, product quality, and the best life-cycle cost. It is a decision-making activity highly correlated with the expertise of maintenance staff and their practical experience. Maintenance management intends not only to keep the desired performance of machinery, but to continuously improve the quality and cost-effectiveness of the pertinent processes. Maintenance cost management (MCM), consisting of cost planning, monitoring and controlling, is thereby an essential part of a sustainable and efficient maintenance management system. MCM is a knowledge-centred and experience-driven process in which exploiting existing knowledge and generating new knowledge strongly influences every instance of cost planning. Taking into account the dynamics of knowledge assets, an interdisciplinary research effort raises practical implications in the domain of maintenance. The key aspect of the present work is learning from past experiences for continuous improvement of maintenance cost planning and controlling. Learning in MCM is an evolutionary and iterative process through which a chief maintenance officer (CMO) compounds and deepens his/her knowledge. The CMO analyses experiences gained in past maintenance planning periods, identifies facts or artifacts (i.e. evidence for improving the planning process), and finally enhances the planning of upcoming events by applying the lessons learned. This work principally constitutes a model, Costprove, for meta-analysis of maintenance knowledge assets. The knowledge assets are articulated, represented, and stored in repositories (explicit knowledge), or remain with (a group of) individuals and need to be extracted, documented, and validated (implicit knowledge). Meta-analysis is a set of methods for discovering the strength of the relation between certain predefined entities. It provides evidence for decision-makers (e.g. the CMO) to discover hidden improvement potentials in cost planning and incrementally attain desired company objectives. The main focus of this work is to establish a mathematical meta-analysis for (i) identifying the relation between cost figures (planned, unplanned and total cost) and an operational parameter (the number of maintenance activities), and (ii) trading off planned against unplanned cost. Hence, the model deploys an economic approach for identifying desired cost figures in every planning period, and ultimately defining operation-related parameters. Anticipating the trend of the fourth industrial revolution, the foremost result of this thesis is the development of an integrated and practice-oriented knowledge-based approach to maintenance cost planning and controlling.
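The planned/unplanned trade-off described above can be pictured with a minimal, purely illustrative sketch; all parameter values and the exponential failure model below are my assumptions, not taken from the thesis. Raising the number of planned activities increases planned cost but lowers expected unplanned (breakdown) cost, and the decision selects the level minimizing the total.

```python
import math

# Hypothetical sketch of a planned-vs-unplanned maintenance cost trade-off.
c_planned = 500.0      # cost per planned maintenance activity (assumed)
c_unplanned = 4000.0   # cost per unplanned breakdown (assumed)
base_failures = 12.0   # expected breakdowns per period with no planned work (assumed)
effectiveness = 0.25   # assumed reduction rate per planned activity

def total_cost(n: int) -> float:
    """Planned cost rises linearly; expected unplanned cost decays with effort."""
    return c_planned * n + c_unplanned * base_failures * math.exp(-effectiveness * n)

# A CMO-style decision: choose the activity level that minimizes total expected cost.
best_n = min(range(31), key=total_cost)
print(best_n, round(total_cost(best_n), 2))
```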
Article
The philosophical position referred to as critical rationalism (CR) is potentially important to OR because it holds out the possibility of supporting OR's claim to offer managers a scientifically 'rational' approach. However, as developed by Karl Popper, and subsequently extended by David Miller, CR can only support practice (deciding what to do, how to act) in a very limited way; concentrating on the critical application of deductive logic, the crucial role of subjective judgements in making technical and moral choices is ignored or at least left underdeveloped. By reflecting on the way that managers, engineers, administrators and other professionals take decisions in practice, three strategies are identified for handling the inevitable subjectivity in practical decision-making. It is argued that these three strategies can be understood as attempts to emulate the scientific process of achieving intersubjective consensus, a process inherent in CR. The perspective developed in the paper provides practitioners with a way of understanding their clients' approach to decision-making and holds out the possibility of making coherent the claim that they are offering advice on how to apply a scientific approach to decision-making; it presents academics with some philosophical challenges and some new avenues for research.
Article
Full-text available
Historically OR has conceived of itself as a professional practice giving rational, objective advice rooted in the ethos of science. However, the claim of science to rationality and objectivity has wilted under the onslaught of relativist and post-modern attack. One proposed philosophy of science seeks to avoid such problems by adopting a strictly objectivist approach. Critical rationalism (CR), the philosophy originated by Karl Popper, attempts to eliminate all inductive, justificatory and merely subjective claims by the ruthless application of deductive logic. The philosophical development of the CR approach to practice is currently a work in progress; however, it is an approach that should on the face of it find favour with OR, particularly among those who want to claim that OR is logically rational. The paper, drawing on the work of David Miller, explores how such an approach can be applied in the OR context. It concludes that although, as CR suggests, it may be possible to drive out inductive and justificatory claims in OR, subjective choice is an essential element of managerial decision-making and cannot be ignored or assumed away. The paper identifies some of the challenges that confront philosophers of practice if OR is to take the insights of CR to heart, suggests some possible responses, and identifies areas for future research.
Article
Full-text available
In order to measure the degree of dissimilarity between elements of a Boolean algebra, the author (1984) proposed to use pseudometrics satisfying generalizations of the usual axioms for identity. The proposal is extended, as far as is feasible, from Boolean algebras (algebras of propositions) to Brouwerian algebras (algebras of deductive theories). The relation between Boolean and Brouwerian geometries of logic turns out to resemble in a curious way the relation between Euclidean and non-Euclidean geometries of physical space. The paper ends with a brief consideration of the problem of the metrization of the algebra of theories.
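The pseudometric idea can be made concrete on a finite Boolean algebra. A minimal sketch (the uniform measure and the example propositions are my assumptions): represent propositions as sets of valuations and take the measure of the symmetric difference as the distance, so that logically equivalent propositions are at distance zero.

```python
from itertools import product

# Propositions over n atomic sentences, represented as sets of valuations (models).
n = 3
worlds = list(product([False, True], repeat=n))

def models(f):
    """Set of worlds satisfying a truth-functional formula f(w)."""
    return frozenset(w for w in worlds if f(w))

def d(a, b):
    """Pseudometric: normalized size of the symmetric difference of model sets."""
    return len(a ^ b) / len(worlds)

p = models(lambda w: w[0])             # proposition p
q = models(lambda w: w[1])             # proposition q
p_and_q = models(lambda w: w[0] and w[1])

print(d(p, q))        # dissimilarity of p and q (0.5 here)
print(d(p, p))        # 0.0: equivalent propositions are indistinguishable
print(d(p, p_and_q))  # distance from p to a logically stronger proposition
```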
Article
Full-text available
The paper deals with the reconstruction of the unique reductive counterpart of deductive logic. The procedure results in the deductive-reductive form of logic. This extension is illustrated on the basis of intuitionistic logics: Heyting's, the Brouwerian, and the Heyting-Brouwer logic.
Article
Full-text available
This note aims at critically assessing a little-noticed proposal made by Popper in the second edition of Objective Knowledge to the effect that the verisimilitude of scientific theories should be made relative to the problems they deal with. Using a simple propositional calculus formalism, it is shown that the relativized definition fails for the very same reason why Popper's original concept of verisimilitude collapsed: only if one of two theories is true can they be compared in terms of the suggested definition of verisimilitude.
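The collapse referred to is the Miller-Tichý result of 1974; the following is a background reconstruction of the argument, not taken from this note:

```latex
% Popper's qualitative definition (1963): B is more truthlike than A iff
%   A_T \subseteq B_T and B_F \subseteq A_F, with at least one inclusion proper,
% where X_T, X_F are the true and the false consequences of theory X.
% Miller--Tichy (1974): a false B can never exceed A. Let f \in B_F and
% suppose b \in B_T \setminus A_T. Then b \wedge f \in B_F \subseteq A_F,
% so A entails b \wedge f, hence b, i.e. b \in A_T, a contradiction.
% Dually, for g \in A_F \setminus B_F the true conditional f \rightarrow g
% lies in A_T \subseteq B_T, so B entails g and g \in B_F, again a
% contradiction. Only true theories are comparable, and the relativized
% definition inherits exactly this defect.
```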
Article
We live in a society which sets great store by science. Scientific ‘experts’ play a privileged role in many of our institutions, ranging from the courts of law to the corridors of power. At a more fundamental level, most of us strive to shape our beliefs about the natural world in the ‘scientific’ image. If scientists say that continents move or that the universe is billions of years old, we generally believe them, however counter-intuitive and implausible their claims might appear to be. Equally, we tend to acquiesce in what scientists tell us not to believe. If, for instance, scientists say that Velikovsky was a crank, that the biblical creation story is hokum, that UFOs do not exist, or that acupuncture is ineffective, then we generally make the scientist’s contempt for these things our own, reserving for them those social sanctions and disapprobations which are the just deserts of quacks, charlatans and con-men. In sum, much of our intellectual life, and increasingly large portions of our social and political life, rest on the assumption that we (or, if not we ourselves, then someone whom we trust in these matters) can tell the difference between science and its counterfeit.
Article
Our standard logic is two-valued: every meaningful statement is regarded as being either true or false. Thus it may seem pointless or misleading to speak of degrees of truth or of partial truth. Nevertheless these expressions are commonplace in writings on epistemology and the philosophy of science, and it has been argued that an explication of the concept of partial truth is necessary for an adequate analysis and understanding of scientific method. For instance, in his book The Myth of Simplicity Mario Bunge says that "philosophers still owe scientists a clarification of the concept of relative and partial truth as employed in factual science". Some philosophers have attempted to meet this challenge by defining quantitative measures of partial truth. Perusal of the literature on partial truth and degrees of truth indicates, however, that these terms are, as the saying goes, 'vague and ambiguous'. Different measures of partial truth correspond to different explicanda, and some attempts to explicate this notion seem to involve a confusion between several distinct concepts.
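To make vivid that different measures answer different explicanda (my illustration, not drawn from the article), compare two candidate formalizations that can disagree about which of two false claims is 'truer':

```latex
% Two distinct explicanda for the 'partial truth' of a claim about a quantity t:
% (i) closeness of a point estimate a:
%       v_1(a) = 1 - \frac{|a - t|}{|a - t| + 1},
% (ii) share of true conjuncts in a conjunction c_1 \wedge \dots \wedge c_n:
%       v_2 = \frac{\#\{\, i : c_i \text{ is true} \,\}}{n}.
% A precise but slightly-off estimate scores high on (i) yet, read as a single
% false conjunct, scores 0 on (ii): the two measures explicate different notions.
```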
Article
The modern history of verisimilitude can be divided into three periods. The first began in 1960, when Karl Popper proposed his qualitative definition of what it is for one theory to be more truthlike than another theory, and lasted until 1974, when David Miller and Pavel Tichý published their refutation of Popper's definition. The second period started immediately with the attempt to explicate truthlikeness by means of relations of similarity or resemblance between states of affairs (or their linguistic representations); the work within this similarity approach was summarized in the books of Graham Oddie [1986] and Ilkka Niiniluoto [1987]. During the subsequent third period, studies in verisimilitude have been actively continued, and interesting results and applications have been achieved, but not many dramatic novelties. While it is now obsolete to claim that truthlikeness with reasonable properties cannot be defined at all, there is still a lot of controversy about the best and least arbitrary approach to doing this.
Article
Proofs of the impossibility of induction have been falling 'dead-born from the Press' ever since the first of them (in David Hume's Treatise of Human Nature) appeared in 1739. One of us (K.P.) has been producing them for more than 50 years. The present proof strikes us both as pretty.
Article
Arguably, there is no substantial, general answer to the question of what makes for the approximate truth of theories. But in one class of cases, the issue seems simply resolved. A wide class of applied dynamical theories can be treated as two-component theories: one component specifying a certain kind of abstract geometrical structure, the other giving empirical application to this structure by claiming that it replicates, subject to arbitrary scaling for units etc., the geometric structure to be found in some real-world dynamical phenomenon. In such a case, a theory is approximately true just if the one geometric structure approximately replicates the other (and if problems remain here, they are problems in geometry, of specifying suitable metric approximation relations, not conceptual problems). This article amplifies and defends this simple approach to approximate truth for dynamical theories.
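One way to make such 'metric approximation relations' concrete is a minimal sketch under assumed dynamics (not the article's own formalism): treat each theory's geometric structure as the trajectory it generates and measure replication by the sup-norm distance between trajectories.

```python
import math

# Two toy models of the same phenomenon: exact exponential decay versus its
# Euler discretization. 'Approximate truth' is read off from the sup-norm
# distance between the geometric structures (trajectories) they induce.
k, dt, steps = 0.5, 0.1, 100

def exact(t):
    return math.exp(-k * t)

def euler_trajectory():
    x, traj = 1.0, []
    for _ in range(steps):
        traj.append(x)
        x += -k * x * dt
    return traj

sup_distance = max(abs(exact(i * dt) - x) for i, x in enumerate(euler_trajectory()))
print(round(sup_distance, 6))  # small distance = good geometric replication
```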
Article
This chapter discusses several changes in the problem of inductive logic. There are two main problems of classical empiricism: inductive justification and inductive method. Classical epistemology in general is characterized by its two main problems: (1) the problem of the foundations of—epistemic, that is, perfect, infallible—knowledge (the logic of justification); and (2) the problem of the growth of—perfect, well-founded—knowledge or the problem of heuristic, or of method (the logic of discovery). The classical twin problems of induction were the justification of theories, and the discovery of theories from facts. Carnap's neoclassical solution provides at best a solution of the problem of weak justification. However, it leaves the problem of discovery, the problem of the growth of knowledge, untouched.
Article
This paper attempts to provide both an elaboration and a strengthening of the thesis of Popper & Miller (Nature, Lond. 302, 687f. (1983)) that probabilistic support is not inductive support. Although evidence may raise the probability of a hypothesis above the value it achieves on background knowledge alone, every such increase in probability has to be attributed entirely to the deductive connections that exist between the hypothesis and the evidence. We shall also do our best to answer all the criticisms of our thesis that are known to us. In 1878 Peirce drew a sharp distinction between 'explicative, analytic, or deductive' and 'amplifiative, synthetic, or (loosely speaking) inductive' reasoning. He characterized the latter as reasoning in which 'the facts summed up in the conclusion are not among those stated in the premisses'. The Oxford English Dictionary records that the word 'ampliative' was used in the same sense as early as 1842, and that in 1852 Hamilton wrote: 'Philosophy is a transition from absolute ignorance to science, and its procedure is therefore ampliative.' This was the background to our letter to Nature on 21 April 1983. It was there shown that, relative to evidence e, the content of any hypothesis h may be split into two parts, the disjunction h ∨ e (read 'h or e') and the material conditional h ← e (read 'h if e').
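The splitting claimed in the abstract can be stated and checked in a few lines; what follows is a background reconstruction of the 1983 argument, with notation assumed. Relative to e, h is equivalent to the conjunction of h ∨ e, which is deductively implied by e, and h ← e (i.e. h ∨ ¬e), and the probabilistic support of e for the second, non-deductive factor is never positive.

```latex
% Splitting of h relative to e:
\[
  h \;\equiv\; (h \vee e) \wedge (h \vee \neg e), \qquad e \vdash h \vee e .
\]
% Support of e for the non-deductive factor h \vee \neg e (i.e. h \leftarrow e):
\[
  s(h \vee \neg e,\, e)
  = p(h \mid e) - \bigl(p(h \wedge e) + p(\neg e)\bigr)
  = \frac{a}{q} - a - (1 - q),
\]
% with a = p(h \wedge e) and q = p(e). Since a \le q, we have
% a(1-q) \le q(1-q), hence s(h \vee \neg e, e) \le 0: any increase in the
% probability of h accrues wholly to the deductively entailed factor h \vee e.
```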
Article
The main formal notion involved in qualitative truth approximation by the HD-method, viz. ‘more truthlike’, is shown to not only have, by its definition, an intuitively appealing ‘model foundation’, but also, at least partially, a conceptually plausible ‘consequence foundation’. Moreover, combining the relevant parts of both leads to a very appealing ‘dual foundation’, the more so since the relevant methodological notions, viz. ‘more successful’ and its ingredients provided by the HD-method, can be given a similar dual foundation. According to the resulting dual foundation of ‘naive truth approximation’, the HD-method provides successes (established true consequences) and counterexamples (established wrongly missing models) of theories. Such HD-results may support the tentative conclusion that one theory seems to remain more successful than another in the naive sense of having more successes and fewer counterexamples. If so, this provides good reasons for believing that the more successful theory is also more truthlike in the naive sense of having more correct models and more true consequences. In the dual foundation of ‘refined truth approximation’, HD-results remain of the same two kinds, but ‘more successful’ is taken in the refined sense of accommodating counterexamples while saving relevant successes, in which case ‘more truthlike’ can be taken in the refined sense of improving relevant models while saving relevant consequences. In this way one gets a realistic dual account of qualitative truth approximation by the HD-method. The model foundation can also be extended to the methodological notions, but not in a very plausible way. The consequence foundation only seems specifiable for naive truth approximation, in which case it is plausible. In sum, the dual foundation is superior to both.
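A minimal set-theoretic sketch of the naive comparison invoked above (finite toy structures; the particular sets are my assumptions): theory Y is at least as truthlike as X when Y retains all of X's correct models and introduces no incorrect models beyond X's.

```python
# Naive truthlikeness over a finite set of conceptual possibilities.
# T: the set of nomic possibilities (the 'truth'); theories are sets of models.
U = set(range(8))            # conceptual possibilities (assumed toy universe)
T = {0, 1, 2, 3}             # models of the true theory (assumed)

def at_least_as_truthlike(Y, X, T):
    """Y has all of X's correct models and at most X's incorrect ones."""
    return (X & T) <= (Y & T) and (Y - T) <= (X - T)

X = {0, 1, 4, 5}             # two correct models, two incorrect ones
Y = {0, 1, 2, 4}             # three correct models, one incorrect one
print(at_least_as_truthlike(Y, X, T))   # True: Y improves on X in the naive sense
print(at_least_as_truthlike(X, Y, T))   # False
```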
McGinn, C. (2002). 'Looking for a Black Swan'. The New York Review of Books 49, 18, 21.xi.2002, pp. 46–50.
Keuth, H. (2000). Die Philosophie Poppers. Tübingen: Mohr Siebeck. Translated into English as H. Keuth (2005). The Philosophy of Karl Popper. Cambridge & elsewhere: CUP.
Miller, D. W. 'The Objectives of Science'.