Foundations of Science

Published by Springer Nature

Online ISSN: 1572-8471 · Print ISSN: 1233-1821

Articles


Conceptual Barriers to Progress Within Evolutionary Biology

August 2009 · 106 Reads · John Odling-Smee

In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change, "niche construction". This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology, and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory.

Technology as In-Between

March 2011 · 21 Reads

This commentary on Søren Riis’s paper “Dwelling in-between walls” starts from a position of solidarity with its attempt to build a postphenomenological perspective on architecture and the built environment. It proposes, however, that a clearer view of a technological structure of experience may be obtained by finding technological-perceptual wholes that incorporate perceiver and perceived as well as the mediating apparatus. Parts and wholes may be formed as nested human-technological interiorities that have structured relations with what is outside—so that the outside constitutes an interiority in its turn, which contextualises and situates the first. This nested structure raises questions about the way architects and urbanists see the built environment and understand inhabitation. It is hoped that this effort will continue with conceptual and empirical work on ways of making the human places of our built environment.
Keywords: Architecture · Postphenomenology · Perception · Embodiment · Relationality · Space

Fig. 1 A ‘presentist-friendly’ space-time: Evolving 3-dimensional space-like surfaces in a space-time with a preferred time-direction. 
Present Time

March 2014 · 470 Reads

The idea of a moving present or ‘now’ seems to form part of our most basic beliefs about reality. Such a present, however, is not reflected in any of our theories of the physical world. I show in this article that presentism, the doctrine that only what is present exists, is in conflict with modern relativistic cosmology and recent advances in the neurosciences. I argue for a tenseless view of time, where what we call ‘the present’ is just an emergent secondary quality arising from the interaction of perceiving self-conscious individuals with their environment. I maintain that there is no flow of time, but just an ordered system of events.
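
The standard route to the stated conflict with relativity runs through the relativity of simultaneity; a brief sketch of that step (my reconstruction, not text from the article):

```latex
% For frames in relative motion at speed v, the Lorentz transformation gives
\[
  t' = \gamma\left(t - \frac{v x}{c^{2}}\right),
  \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
\]
% Two events simultaneous in one frame (t_1 = t_2) but spatially separated
% (x_1 \neq x_2) are not simultaneous in the other:
\[
  t'_2 - t'_1 = -\,\gamma\,\frac{v\,(x_2 - x_1)}{c^{2}} \neq 0 ,
\]
% so relativity by itself singles out no observer-independent "present"; a
% presentist must add a preferred foliation by hand, as in the
% 'presentist-friendly' space-time of Fig. 1 above.
```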

Reduction and Understanding

January 1998 · 16 Reads

Reductionism, in the sense of the doctrine that theories on different levels of reality should exhibit strict and general relations of deducibility, faces well-known difficulties. Nevertheless, the idea that deeper layers of reality are responsible for what happens at higher levels is well-entrenched in scientific practice. We argue that the intuition behind this idea is adequately captured by the notion of supervenience: the physical state of the fundamental physical layers fixes the states of the higher levels. Supervenience is weaker than traditional reductionism, but it is not a metaphysical doctrine: one can empirically support the existence of a supervenience relation by exhibiting concrete relations between the levels. Much actual scientific research is directed towards finding such inter-level relations. It seems to be quite generally held that the importance of such relations between different levels is that they are explanatory and give understanding: deeper levels provide deeper understanding, and this justifies the search for ever deeper levels. We shall argue, however, that although achieving understanding is an important aim of science, its correct analysis is not in terms of relations between higher and lower levels. Connections with deeper layers of reality do not generally provide for deeper understanding. Accordingly, the motivation for seeking deeper levels of reality does not come from the desire to find deeper understanding of phenomena, but should be seen as a consequence of the goal to formulate ever better, in the sense of more accurate and more-encompassing, empirical theories.

What is Information?

January 2004 · 65,950 Reads

The main aim of this work is to contribute to the elucidation of the concept of information by comparing three different views about this matter: the view of Fred Dretske's semantic theory of information, the perspective adopted by Peter Kosso in his interaction-information account of scientific observation, and the syntactic approach of Thomas Cover and Joy Thomas. We will see that these views involve very different concepts of information, each one useful in its own field of application. This comparison will allow us to argue in favor of a terminological ‘cleansing’: it is necessary to make a terminological distinction among the different concepts of information, in order to avoid conceptual confusions when the word ‘information’ is used to elucidate related concepts such as knowledge, observation or entropy.
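
The ‘syntactic’ approach of Cover and Thomas is the Shannon-theoretic one, on which the information of a source depends only on its probability distribution, not on meaning. A minimal illustration of that quantity (my example, not part of the paper):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution: H(X) = -sum p*log2(p).
    This is the purely syntactic notion of information in Cover and Thomas."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin yields 1 bit per toss; a heavily biased coin far less.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```

Semantic accounts such as Dretske's instead tie information to what a signal indicates about the world, which no such formula captures; that contrast is exactly what motivates the terminological distinction argued for above.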

The Politics of the Poetics: Aristotle and Drama Theory in 17th Century France

November 2008 · 4,560 Reads

Since the Renaissance, dramatic theory has been strongly influenced, sometimes even dominated, by Aristotle’s Poetics. Aristotle’s concept of tragedy has been perceived as both a descriptive and a normative concept: a description of a practice as it should be continued. This biased reading of ancient theory is not exceptional, but in the case of Aristotle’s Poetics a particular question can be raised: Aristotle wrote about tragedy at a moment when tragedy no longer had any meaningful political or civic function. Just as political theory—e.g. as developed in the Politics and the Art of Rhetoric—should contain the risks of transgression in political practice, so poetic theory can contain the risks of the representation of transgressions in poetic practices such as the performance of tragedy. Apart from giving an account of Aristotle’s Poetics as an integral part of his ethical and political theory, this article argues that the (mis)readings of Aristotelian dramatic theory since the Renaissance, and especially in 17th century France, are not coincidental. Aristotle’s theory itself fits neatly into a political-theoretical framework or, to put it more brutally, an ideology. The particular theatricality of French absolutism clearly took advantage of these ideological (mis)readings of Aristotle.

Philosophy of Chemistry and the Image of Science

September 2007 · 38 Reads

The philosophical analysis of chemistry has advanced at such a pace during the last dozen years that the existence of philosophy of chemistry as an autonomous discipline cannot be doubted any more. The present paper will attempt to analyse the experience of philosophy of chemistry at the meta-level, so to speak. Philosophers of chemistry have especially stressed that the sciences need not all be similar to physics. They have tried to argue for chemistry as its own type of science and for a pluralistic understanding of science in general. However, when stressing the specific character of chemistry, philosophers do not always analyse the question ‘What is science?’ theoretically. It is obvious that a ‘monistic’ understanding of science should not be based simply on physics as the epitome of science, regarding it as a historical accident that physics has obtained this status. The author’s point is that the philosophical and methodological image of science should not be chosen arbitrarily; instead, it should be theoretically elaborated as an idealization (theoretical model) substantiated on the historical practice of science. It is argued that although physics has, in a sense, justifiably obtained the status of a paradigm of science, chemistry, which is not simply a physical science, but a discipline with a dual character, is also relevant for elaborating a theoretical model of science. The theoretical model of science is a good tool for examining various issues in philosophy of chemistry as well as in philosophy of science or science studies generally.

Morphodynamical Abduction: Causation by Attractors Dynamics of Explanatory Hypotheses in Science

March 2005 · 16 Reads

Philosophers of science today by and large reject the cataclysmic and irrational interpretation of the scientific enterprise claimed by Kuhn. Many computational models have been implemented to rationally study conceptual change in science. In this recent tradition a key role is played by the concept of abduction as a mechanism by which new explanatory hypotheses are introduced. Nevertheless, some problems in describing the most interesting abductive issues arise from the classical computational approach, which describes a cognitive process (and so abduction) as the manipulation of internal symbolic representations of the external world. This view assumes a discrete set of representations fixed in discrete time jumps, and cannot adequately account for the anticipation and causation of a new hypothesis. An integration of the traditional computational view with some ideas developed within the so-called dynamical approach can suggest some important insights. The concept of an attractor is particularly significant: it permits a description of the abductive generation of new hypotheses in terms of a catastrophic rearrangement of the parameters responsible for the behavior of the system.
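
A textbook example makes the appeal to attractors concrete (this is my illustration of the general idea, not the authors' model): in the one-parameter flow dx/dt = r + x², a stable attractor exists for r < 0 and vanishes abruptly as r crosses zero, the kind of qualitative, 'catastrophic' rearrangement invoked to model the sudden emergence of a new hypothesis.

```python
import numpy as np

def attractors(r, x=np.linspace(-3.0, 3.0, 6001)):
    """Stable fixed points (attractors) of dx/dt = r + x**2, located where the
    flow crosses zero from above. A saddle-node bifurcation used purely as an
    illustration; it is not the cognitive model described in the abstract."""
    f = r + x**2
    hits = [0.5 * (x[i] + x[i + 1])
            for i in range(len(x) - 1)
            if f[i] > 0 >= f[i + 1]]          # downward zero-crossing => stable
    return [round(h, 3) for h in hits]

for r in (-1.0, -0.25, 0.25, 1.0):
    print(f"r = {r:+.2f}  attractors = {attractors(r)}")
# For r < 0 there is one attractor near x = -sqrt(-r); for r > 0 there is none.
# A small change in the parameter r wipes the attractor out entirely: a
# catastrophic rearrangement of the system's long-run behaviour.
```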

Abduction Aiming at Empirical Progress or Even Truth Approximation Leading to a Challenge for Computational Modelling

September 1999 · 65 Reads

This paper primarily deals with the conceptual prospects for generalizing the aim of abduction from the standard one of explaining surprising or anomalous observations to that of empirical progress or even truth approximation. It turns out that the main abduction task then becomes the instrumentalist task of theory revision aiming at an empirically more successful theory, relative to the available data, but not necessarily compatible with them. The rest, that is, genuine empirical progress as well as observational, referential and theoretical truth approximation, is a matter of evaluation and selection, and possibly new generation tasks for further improvement. The paper concludes with a survey of possible points of departure, in AI and logic, for computational treatment of the instrumentalist task guided by the ‘comparative evaluation matrix’.
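
One way to picture the instrumentalist criterion of being ‘empirically more successful’ (a rough sketch in the spirit of the comparative evaluation matrix; the concrete representation is my assumption, not Kuipers's formal apparatus): a revision counts as progress when it keeps every success of the old theory and adds no new counterexamples, even if some counterexamples remain.

```python
def at_least_as_successful(new, old):
    """A simple comparative-success test: the revised theory retains all of the
    old theory's successes and introduces no new counterexamples. The dict/set
    encoding is illustrative only."""
    return (old["successes"] <= new["successes"] and
            new["counterexamples"] <= old["counterexamples"])

old_theory = {"successes": {"e1", "e2"}, "counterexamples": {"e3", "e4"}}
revision   = {"successes": {"e1", "e2", "e4"}, "counterexamples": {"e3"}}

print(at_least_as_successful(revision, old_theory))  # True: empirical progress,
                                                     # though "e3" still conflicts
```

The surviving counterexample "e3" illustrates the abstract's point that the revised theory need not be compatible with all the available data to count as an improvement.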

Abductive Reasoning as a Way of Worldmaking

December 2001 · 216 Reads

The author deals with the operational core of logic, i.e. its diverse procedures of inference, in order to show that logically false inferences may in fact be right because – in contrast to logical rationality – they actually enlarge our knowledge of the world. This does not only mean that logically true inferences say nothing about the world, but also that all our inferences are invented hypotheses the adequacy of which cannot be proved within logic but only pragmatically. In conclusion the author demonstrates, through the relationship between rule-following and rationality, that it is most irrational to want to exclude the irrational: it may, at times, be most rational to think and infer irrationally. Focussing on the operational aspects of knowing as inferring does away with the hiatus between logic and life, cognition and the world (reality), or whatever other dualism one wants to invoke: knowing means inferring, inferring means rule-governed interpreting, interpreting is a constructive, synthetic act, and a construction that proves adequate (viable) in the “world of experience”, in life, in the praxis of living, is, to the constructivist mind, knowledge. It is the practice of living which provides the orienting standards for constructivist thinking and its judgments of viability. The question of truth is replaced by the question of viability, and viability depends on the (right) kind of experiential fit.

Figure 1 An attempt at defining a chair  
The Abductive Loop: Tracking Irrational Sets

March 2008 · 85 Reads

We argue from the Church-Turing thesis (Kleene, Mathematical Logic, New York: Wiley, 1967) that a program can be considered as equivalent to a formal language similar to the predicate calculus, where predicates can be taken as functions. We can relate such a calculus to Wittgenstein’s first major work, the Tractatus, and use the Tractatus and its theses as a model of the formal classical definition of a computer program. However, Wittgenstein found flaws in his initial great work and explored these flaws in a new thesis described in his second great work, the Philosophical Investigations. The question we address is: “Can computer science make the same leap?” We propose, because of the flaws identified by Wittgenstein, that computers will never have the possibility of natural communication with people unless they become active participants in human society. The essential difference between the formal models used in computing and human communication is that formal models are based upon rational sets, whereas people are not so restricted. We introduce irrational sets as a concept that requires the use of an abductive inference system. However, formal models are still considered central to our means of using hypotheses, through deduction, to make predictions about the world. These formal models must be continually updated in response to changes in the way people see the world. We propose that one mechanism used to keep track of these changes is the Peircean abductive loop.

Figure captions (plots omitted): Fig. 1, agent 'Tom' starts with the belief that the coin is good but is persuaded by the evidence that the coin is double-headed (the green line mis-labelled 'actor-entropy' actually plots the indifference level); Fig. 2, agent 'David' starts with an unbiased view; Fig. 3, agent 'Jan' starts with a positive bias; Fig. 4, agent 'Jan' starts with no knowledge of the correct hypothesis, with only the two major competing hypotheses shown. In Fig. 4, overall confidence falls as new information is obtained, then rises, and the probability of selecting the more informative experiments increases steadily from cycle 10.
Simulation Methods for an Abductive System in Science

March 2008 · 83 Reads

We argue that abduction does not work in isolation from other inference mechanisms and illustrate this through an inference scheme designed to evaluate multiple hypotheses. We use game theory to relate the abductive system to actions that produce new information. To enable evaluation of the implications of this approach we have implemented the procedures used to calculate the impact of new information in a computer model. Experiments with this model display a number of features of collective belief-revision leading to consensus-formation, such as the influence of bias and prejudice. The scheme of inferential calculations invokes a Peircian concept of ‘belief’ as the propensity to choose a particular course of action.
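
A minimal sketch of the kind of belief revision the figure captions describe (illustrative Bayesian updating under my own simplifying assumptions, not the authors' Peircean inference scheme, which also covers the choice of actions): an agent starts biased toward 'the coin is good' and is gradually persuaded by a run of heads that it is double-headed.

```python
def update(beliefs, likelihoods):
    """One round of revision over competing hypotheses: weight each prior by
    the likelihood of the new observation and renormalise."""
    posterior = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

beliefs = {"good coin": 0.9, "double-headed": 0.1}   # a 'Tom'-like starting bias
p_heads = {"good coin": 0.5, "double-headed": 1.0}   # likelihood of seeing heads

for toss in range(1, 9):                             # eight heads in a row
    beliefs = update(beliefs, p_heads)
    print(toss, {h: round(p, 3) for h, p in beliefs.items()})
# By the fourth head the double-headed hypothesis overtakes the initial bias,
# mirroring the persuasion-by-evidence plotted for agent 'Tom' above.
```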

Abductive Reasoning, Interpretation and Collaborative Processes

March 2008 · 847 Reads

In this paper we want to examine how the mutual understanding of speakers is reached during a conversation through collaborative processes, and what role is played by abductive inference (in the Peircean sense) in these processes. We do this by bringing together contributions from a variety of disciplines, such as logic, philosophy of language and psychology. When speakers are engaged in a conversation, they refer to a supposed common ground: every participant ascribes to the others some knowledge, belief, opinion etc. on which to rely in order to reach mutual understanding. As the conversation unfolds, this common ground is continually corrected and reshaped by the interchanges. Abductive reasoning takes place, in a collaborative setting, in order to build new possible theories about the common ground. In reconstructing this process through a working example, we argue that the integration of a collaborative perspective within the Peircean theory of abduction can help to overcome some of the drawbacks that critics of the latter have outlined, for example its permissiveness and non-generativity.

Absence of Contingency in the Newtonian Universe

June 2004 · 21 Reads

I argue that, contrary to the standard view, the Newtonian universe contains no contingency. I do this by arguing (i) that no contingency is introduced into the Newtonian universe by the initial conditions of physical systems in the universe, and (ii) that the claim that the Newtonian universe as a whole has contingent properties leads to incoherence. This result suggests that Newtonian physics is either inconsistent or incomplete, since the laws of Newtonian physics are too weak to determine all the properties of the Newtonian universe uniquely.

A Theory of Scientific Model Construction: The Conceptual Process of Abstraction and Concretisation

March 2005 · 159 Reads

The process of abstraction and concretisation is a label used for an explicative theory of scientific model-construction. In scientific theorising this process enters at various levels. We can identify two principal levels of abstraction that are useful to our understanding of theory-application. The first is that of selecting a small number of variables and parameters abstracted from the universe of discourse and used to characterise the general laws of a theory. In classical mechanics, for example, we select position and momentum and establish a relation between the two variables, which we call Newton’s 2nd law. The specification of the unspecified elements of scientific laws, e.g. the force function in Newton’s 2nd law, is what establishes the link between the assertions of the theory and physical systems. In order to unravel how and with what conceptual resources scientific models are constructed, how they function and how they relate to theory, we need a view of theory-application that can accommodate our constructions of representation models. For this we need to expand our understanding of the process of abstraction so that it also explicates the process of specifying force functions and the like. This is the second principal level at which abstraction enters our theorising, and the one on which I focus. In this paper, I attempt to elaborate a general analysis of the process of abstraction and concretisation involved in scientific-model construction, and argue that it provides an explication of the construction of models of nuclear structure.
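
A worked instance of the two levels the abstract distinguishes (the spring is my example, not the paper's case study of nuclear structure):

```latex
% First level: the abstract law relates the selected variables schematically,
\[
  F \;=\; \frac{dp}{dt} \;=\; m\,\ddot{x},
\]
% with the force function F left unspecified. Concretisation supplies it for a
% particular system, e.g. a mass on a spring,
\[
  F = -kx \quad\Longrightarrow\quad m\,\ddot{x} + kx = 0 ,
\]
% and only the specified model makes contact with an actual physical system.
```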

Foundations of Biology: On the Problem of “Purpose” in Biology in Relation to Our Acceptance of the Darwinian Theory of Natural Selection

March 1999 · 15 Reads

For many years, biology was largely descriptive (“natural history”), but with its emergence as a scientific discipline in its own right, a reductionist approach began, which has failed to be matched by an adequate understanding of the function of cells, organisms and species as whole entities. Every effort was made to “explain” biological phenomena in physico-chemical terms. It is argued that there is, and always has been, a clear distinction between the life sciences and the physical sciences, explicit in the use of the word biology. If this distinction is real, it implies that biological phenomena can never be entirely satisfactorily explained in terms of extant physico-chemical laws. One notable manifestation of this is that living organisms appear to -- actually do -- behave in purposeful ways, and the inanimate universe does not. While this fundamental difference continues to be suppressed, the “purposiveness” (or teleology) which pervades biology remains anathema to almost all scientists (including most biologists) even to the present day. We argue here that it can, however, become a perfectly tenable position when the Theory of Natural Selection is accepted as the main foundation, the essential tenet, of biology that distinguishes it from the realm of the physical sciences. On this position it remains quite legitimate to expect that in many, though not all, circumstances extant physical laws (and presumably others still to be discovered) are in no way breached by biological systems, which cannot be otherwise since all organisms are composed of physical material.

Symbolic Languages and Natural Structures: A Mathematician’s Account of Empiricism

January 2005 · 129 Reads

The ancient dualism of a sensible and an intelligible world important in Neoplatonic and medieval philosophy, down to Descartes and Kant, would seem to be supplanted today by a scientific view of mind-in-nature. Here, we revive the old dualism in a modified form, and describe mind as a symbolic language, founded in linguistic recursive computation according to the Church-Turing thesis, constituting a world L that serves the human organism as a map of the Universe U. This methodological distinction of L vs. U helps to understand how and why structures of phenomena come to be opposed to their nature in human thought, a central topic in Heideggerian philosophy. U is uncountable according to Georg Cantor’s set theory but Language L, based on the recursive function system, is countable, and anchored in a Gray Area within U of observable phenomena, typically symbols (or tokens), prelinguistic structures, genetic-historical records of their origins. Symbols, the phenomena most familiar to mathematicians, are capable of being addressed in L-processing. The Gray Area is the human Environment E, where we can live comfortably, that we manipulate to create our niche within hostile U, with L offering overall competence of the species to survive. The human being is seen in the light of his or her linguistic recursively computational (finite) mind. Nature U, by contrast, is the unfathomable abyss of being, infinite labyrinth of darkness, impenetrable and hostile to man. The U-man, biological organism, is a stranger in L-man, the mind-controlled rational person, as expounded by Saint Paul. Noumena can now be seen to reside in L, and are not fully supported by phenomena. Kant’s noumenal cause is the mental L-image of only partly phenomenal causation. Mathematics occurs naturally in pre-linguistic phenomena, including natural laws, which give rise to pure mathematical structures in the world of L. Mathematical foundation within philosophy is reversed to where natural mathematics in the Gray Area of pre-linguistic phenomena can be seen to be a prerequisite for intellectual discourse. Lesser, nonverbal versions of L based on images are shared with animals.
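
The countability asymmetry on which the L/U distinction leans rests on two standard set-theoretic facts, recorded here for reference (my summary, not text from the paper):

```latex
% L is countable: the expressions of a recursively generated language are finite
% strings over a finite alphabet \Sigma, which can be listed by length and then
% lexicographically, so
\[
  |L| \;\le\; \Bigl|\bigcup_{n \ge 0} \Sigma^{n}\Bigr| \;=\; \aleph_{0}.
\]
% U is not: by Cantor's diagonal argument even the set of infinite binary
% sequences is uncountable,
\[
  \bigl|\{0,1\}^{\mathbb{N}}\bigr| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0},
\]
% so no countable symbolic language can exhaustively index an uncountable U.
```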

The Constructive Realist Account of Science and Its Application to Ilya Prigogine’s Conception of Laws of Nature

August 2008 · 36 Reads

Sciences are often regarded as providing the best, or, ideally, exact, knowledge of the world, especially in providing laws of nature. Ilya Prigogine, who was awarded the Nobel Prize for his theory of non-equilibrium chemical processes—this being also an important attempt to bridge the gap between exact and non-exact sciences [mentioned in the Presentation Speech by Professor Stig Claesson (nobelprize.org, The Nobel Prize in Chemistry 1977)]—has had this ideal in mind when trying to formulate a new kind of science. Philosophers of science distinguish theory and reality, examining relations between these two. Nancy Cartwright’s distinction of fundamental and phenomenological laws, Rein Vihalemm’s conception of the peculiarity of the exact sciences, and Ronald Giere’s account of models in science and science as a set of models are deployed in this article to criticise the common view of science and analyse Ilya Prigogine’s view in particular. We will conclude that on a more abstract, philosophical level, Prigogine’s understanding of science doesn’t differ from the common understanding.


A Context-Based Computational Model of Language Acquisition by Infants and Children

December 2002 · 13 Reads

This research attempts to understand how children learn to use language. Instead of using syntax-based grammar rules to model the differences between children’s language and adult language, as has been done in the past, a new model is proposed. In the new research model, children acquire language by listening to the examples of speech that they hear in their environment and subsequently using the speech examples that they have previously heard in similar contextual situations. A computer model is generated to simulate this new model of language acquisition. The MALL computer program will listen to examples of human speech, as would occur around a child, and then try to use these examples in new situations that are similar to the contextual situations in which the language examples were heard. This will provide a better understanding of how children learn to use language and how educators can assist or improve the language learning process by providing required examples of speech or by helping children to develop a better understanding of similarities between various contexts.
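
A toy version of the retrieval step described above (the feature-set representation of 'context' is my assumption; the internals of the MALL program are not given in the abstract): store heard utterances together with the contexts in which they occurred, then reuse the utterance whose stored context best matches the current situation.

```python
def similarity(ctx_a, ctx_b):
    """Jaccard overlap between two contexts represented as feature sets."""
    union = ctx_a | ctx_b
    return len(ctx_a & ctx_b) / len(union) if union else 0.0

class ContextMemory:
    """Remembers (context, utterance) pairs and reuses the best match."""

    def __init__(self):
        self.examples = []

    def hear(self, context, utterance):
        self.examples.append((set(context), utterance))

    def speak(self, context):
        context = set(context)
        _, utterance = max(self.examples,
                           key=lambda ex: similarity(ex[0], context))
        return utterance

memory = ContextMemory()
memory.hear({"kitchen", "adult", "meal"}, "time to eat")
memory.hear({"park", "dog", "play"}, "throw the ball")
print(memory.speak({"kitchen", "meal", "child"}))   # -> time to eat
```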

On Gene’s Action and Reciprocal Causation

February 2010 · 155 Reads

Advancing the reductionist conviction that biology must be in agreement with the assumptions of reductive physicalism (the upward hierarchy of causal powers, the upward fixing of facts concerning biological levels), A. Rosenberg argues that downward causation is ontologically incoherent and that it comes into play only when we are ignorant of the details of biological phenomena. Moreover, in his view, a careful look at relevant details of biological explanations will reveal the basic molecular level that characterizes biological systems, defined by wholly physical properties, e.g., geometrical structures of molecular aggregates (cells). In response, we argue that, contrary to his expectations, one cannot infer reductionist assumptions even from detailed biological explanations that invoke the molecular level, as interlevel causal reciprocity is essential to these explanations. Recent very detailed explanations that concern the structure and function of chromatin—the intricacies of the supposedly basic molecular level—demonstrate this. They show that what seem to be basic physical parameters extend into a more general biological context, thus rendering elusive the concepts of the basic level and causal hierarchy postulated by the reductionists. In fact, relevant phenomena are defined across levels by entangled, extended parameters. Nor can the biological context be explained away by basic physical parameters defining a molecular level shaped by evolution as a physical process. Reductionists claim otherwise only because they overlook the evolutionary significance of initial conditions, which are best defined in terms of extended biological parameters. Perhaps the reductionist assumptions (as well as assumptions that postulate any particular levels as causally fundamental) cannot be inferred from biological explanations because biology aims at manipulating organisms rather than at producing explanations that meet the coherence requirements of general ontological models. Or possibly the assumptions of an ontology not based on the concept of causal powers stratified across levels can be inferred from biological explanations. The incoherence of downward causation is inevitable, given reductionist assumptions, but an ontological alternative might avoid this. We outline desiderata for the treatment of levels and properties that realize interlevel causation in such an ontology.
Keywords: Philosophy of biology · Causation · Ontology · Reductionism · Anti-reductionism · Molecular biology · Cell biology


Adversus Singularitates: The Ontology of Space-Time Singularities

October 2012 · 179 Reads

I argue that there are no physical singularities in space-time. Singular space-time models do not belong to the ontology of the world, for a simple reason: they are concepts, defective solutions of Einstein's field equations. I discuss the actual implication of the so-called singularity theorems. In remarking on the confusion and fog that emerge from the reification of singularities, I hope to contribute to a better understanding of the possibilities and limits of the theory of General Relativity.

Advertising Aristotle: A Preliminary Investigation into the Contemporary Relevance of Aristotle’s Art of Rhetoric

November 2008 · 49 Reads

In this article, a preliminary investigation will be conducted in order to try to discover whether or not Aristotle’s Art of Rhetoric can have any relevance as a handbook for the rhetoricians of the twenty-first century, and in particular for advertising designers. First, the background against which this question is posed will be set out. Second, the chosen methodology will be explained. Thereafter, some qualitative data will be presented and discussed. Finally, some conclusions will be drawn, suggesting that the Art of Rhetoric may be just as relevant and influential today for advertising professionals as it was for the lawyers and politicians of classical times.

Webcams to Save Nature: Online Space as Affective and Ethical Space

May 2011 · 78 Reads

This article analyses the way in which websites of conservation foundations organise the affective investments of viewers in animals by the use of webcams. Against a background of—often overly—general speculation on the influence of electronic media on our engagement with the world, it focuses on one particular practice where this issue is at stake. Phenomenological investigation is supplemented with ethnographic observation of user practice. It is argued that conservation websites provide caring spaces in two interrelated ways: by providing affective spaces where users’ feelings are evoked, articulated and organised; and by opening up ethical space where the beauty of animals appears as an incentive to care. As an alternative to thinking of on- and off-line places as clearly delineated and of bodies and technologies as separate entities, the analysis focuses on trajectories of engagement that cut through these in various directions. In actual acts of looking and being affected, users, animals, places and technologies are intimately entwined. The article further suggests how a focus on trajectories of involvement can be developed to evaluate various websites and their user activity in relation to clearly defined goals, e.g. conservation goals. Link to the PDF of the article: http://rdcu.be/mSjx
Keywords: Affective space · Ethical space · Nature conservation · Phenomenology · Webcams
